News: 0175465357


OpenAI Nears Launch of AI Agent Tool To Automate Tasks For Users (yahoo.com)

(Wednesday November 13, 2024 @05:40PM (BeauHD) from the what-to-expect dept.)


An anonymous reader quotes a report from Bloomberg:

> OpenAI is preparing to launch a new artificial intelligence agent codenamed "Operator" that [1]can use a computer to take actions on a person's behalf (Warning: source may be paywalled; [2]alternative source), such as writing code or booking travel [...]. In a staff meeting on Wednesday, OpenAI's leadership announced plans to release the tool in January as a research preview and through the company's application programming interface for developers [...]. The one nearest completion will be a general-purpose tool that executes tasks in a web browser, one of the people said.

>

> OpenAI Chief Executive Officer Sam Altman hinted at the shift to agents in response to a question last month during an Ask Me Anything session on Reddit. "We will have better and better models," Altman wrote. "But I think the thing that will feel like the next giant breakthrough will be agents." The move to release an agentic AI tool also comes as OpenAI and its competitors have seen diminishing returns from their costly efforts to develop more advanced AI models.



[1] https://www.bloomberg.com/news/articles/2024-11-13/openai-nears-launch-of-ai-agents-to-automate-tasks-for-users?embedded-checkout=true

[2] https://finance.yahoo.com/news/openai-nears-launch-ai-agent-195413391.html



Great attack vector! (Score:2)

by gweihir ( 88907 )

And if the "AI" hallucinates, it may just delete all your data without even any need for an attacker.

Does anybody here think this is a good idea?

Re: (Score:1)

by starworks5 ( 139327 )

It's not a good idea for users and it is a huge security vulnerability, but they have to look like they are not lagging behind Anthropic, who released a similar tool. That being said, it's not a bad thing that an AI agent can interact with a desktop, i.e. an AI agent that can use MS Paint in a sandbox; the problem is that people are going to deploy this tool in an insecure manner.
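The sandbox-vs-deployment distinction above can be sketched in a few lines: a hypothetical dispatcher that only executes agent-proposed actions from an explicit allowlist and rejects everything else. All names here (`ALLOWED_ACTIONS`, `dispatch`, the action fields) are illustrative assumptions, not OpenAI's or Anthropic's actual API.

```python
# Hypothetical sketch: gate an agent's proposed desktop/browser actions
# through an allowlist before executing anything. Names are made up.

ALLOWED_ACTIONS = {"open_app", "click", "type_text"}  # no file deletion, no shell

def dispatch(action: dict) -> str:
    """Execute an agent-proposed action only if it is on the allowlist."""
    name = action.get("name")
    if name not in ALLOWED_ACTIONS:
        return f"rejected: {name!r} is not permitted in this sandbox"
    # A real deployment would drive a sandboxed VM or container here;
    # this sketch just acknowledges the permitted action.
    return f"executed: {name}"

print(dispatch({"name": "click", "x": 10, "y": 20}))
print(dispatch({"name": "delete_all_files"}))
```

The point being: the danger isn't the agent touching a desktop, it's handing it an unrestricted one.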

Re: (Score:2)

by gweihir ( 88907 )

> the problem is that people are going to deploy this tool in an insecure manner.

People are always doing that, because people are not experts. Hence there are whole classes of tools that regular people either cannot get or that are simply not marketed to them.

Re: (Score:2)

by zlives ( 2009072 )

Can they create an agent to help fix issues with browser and Outlook plugins? I am assuming this will be a plug-in.

Re: (Score:3)

by nightflameauto ( 6607976 )

> And if the "AI" hallucinates, it may just delete all your data without even any need for an attacker.

> Does anybody here think this is a good idea?

While it is an attack vector, what the management types are seeing is a glorious way to train a person's replacement. If the AI agent doesn't cause any clusterfucks? Time to take that task off the human's plate. Nobody gives a flying fuck about security issues until they actually cause damage. Then it's a "We sincerely apologize" moment for some middle manager who'll get fired, while the upper management continues to scramble to find new ways to break security through automation.

Re: (Score:2)

by gweihir ( 88907 )

> Nobody gives a flying fuck about security issues until they actually cause damage.

Damage from security problems in Germany in 2023, for example: 2600 EUR per person (!). This is very likely a significant underestimate. Insecure software has become a major economic factor, and unreliable software adds further damage on top of that.

Re: (Score:2)

by nightflameauto ( 6607976 )

>> Nobody gives a flying fuck about security issues until they actually cause damage.

> Damage from security problems in Germany in 2023, for example: 2600 EUR per person (!). This is very likely a significant underestimate. Insecure software has become a major economic factor, and unreliable software adds further damage on top of that.

I don't disagree at all. How do you convince decision makers of that without "proof" when they think they can save pennies by throwing security out the window?

Re: (Score:2)

by Mirnotoriety ( 10462951 )

> And if the "AI" hallucinates, it may just delete all your data without even any need for an attacker.

Or sign you up to Dignitas.

booking travel (Score:2)

by zlives ( 2009072 )

I mean, if their AI can book travel based on my requirements:

cost, time of travel, time to travel, stopovers, stopover location preference, airline preference,

I could use it. Does it come with a big FU warning that this automation is not automation and the end user is responsible for all mistakes? I.e., autopilot.
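For what it's worth, that wish list boils down to a constraint filter over candidate itineraries — something like the sketch below. Every field name and value here is invented for illustration; no real booking API looks like this.

```python
# Illustrative only: "book travel based on my requirements" as a
# constraint filter over candidate itineraries. All fields are made up.

def matches(itinerary: dict, prefs: dict) -> bool:
    """True if an itinerary satisfies the user's hard constraints."""
    return (itinerary["cost"] <= prefs["max_cost"]
            and itinerary["duration_h"] <= prefs["max_duration_h"]
            and itinerary["stopovers"] <= prefs["max_stopovers"]
            and (not prefs["airlines"] or itinerary["airline"] in prefs["airlines"]))

candidates = [
    {"cost": 450, "duration_h": 9, "stopovers": 1, "airline": "Delta"},
    {"cost": 300, "duration_h": 26, "stopovers": 3, "airline": "Budget Air"},
]
prefs = {"max_cost": 500, "max_duration_h": 12, "max_stopovers": 1,
         "airlines": {"Delta", "United"}}
print([i for i in candidates if matches(i, prefs)])
```

The hard part the agent would add on top is interpreting fuzzy preferences ("stopover location preference") — which is exactly where the "end user is responsible for all mistakes" warning earns its keep.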

Re: (Score:2)

by Fly Swatter ( 30498 )

For business it would be far more efficient to just send the AI 'over' to wherever and do whatever, and avoid air travel altogether. Oh wait, that's your job? Oops.

Re: (Score:2)

by gweihir ( 88907 )

> does it come with a big FU warning that [...] the end user is responsible for all mistakes?

Obviously. If it books you first class to fly DC to Seattle via Hong Kong and Sydney, that is your problem.
