News: 0174990817

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

OpenAI Acknowledges New Models Increase Risk of Misuse To Create Bioweapons

(Friday September 13, 2024 @05:22PM (msmash) from the PSA dept.)


OpenAI's latest models have "meaningfully" increased the risk that AI [1]will be misused to create biological weapons ([2]non-paywalled link), the company has acknowledged. From a report:

> The San Francisco-based company announced its new models, known as o1, on Thursday, touting their new abilities to reason, solve hard maths problems and answer scientific research questions. OpenAI's system card, a tool to explain how the AI operates, said the new models had a "medium risk" for issues related to chemical, biological, radiological and nuclear (CBRN) weapons -- the highest risk that OpenAI has ever given for its models. The company said it meant that the technology has "meaningfully improved" the ability of experts to create bioweapons. AI software with more advanced capabilities, such as the ability to perform step-by-step reasoning, poses an increased risk of misuse in the hands of bad actors, according to experts.




[1] https://www.ft.com/content/37ba7236-2a64-4807-b1e1-7e21ee7d0914

[2] https://www.msn.com/en-us/news/technology/openai-o1-model-warning-issued-by-scientist-particularly-dangerous/ar-AA1qvMpm



Er, ok (Score:3)

by cascadingstylesheet ( 140919 )

So, what are you going to do? Ban knowledge? Ban computing?

Re: (Score:2)

by m00sh ( 2538182 )

> So, what are you going to do? Ban knowledge? Ban computing?

Yes, that is the logical conclusion that will be reached in the future.

When AI starts replacing humans en masse, the first thing those in power will do is restrict knowledge. The elites will have access to all of it, maintained for them by a small controlled group, while the rest are slowly deprived.

One of the surest ways to maintain power is to restrict knowledge.

With mass-media control of the general population, people will accept anything if the message is advertised and repeated enough times.

Tools of abundance misused from scarcity thinking (Score:2)

by Paul Fernhout ( 109597 )

As my sig says: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

More details:

[1]https://pdfernhout.net/recogni... [pdfernhout.net]

"There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier ..."

[1] https://pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html

Re: (Score:3)

by serviscope_minor ( 664417 )

You fell for it.

Altman is very, very, very good at getting media coverage by making claims like this. The flavor of danger makes it extra reportable, and it just so happens that it keeps OpenAI in the news cycle, reminding everyone just how darned good its models supposedly are.

It's just clever marketing, that's all.

Just an opportunity for them to lie some more... (Score:5, Insightful)

by gweihir ( 88907 )

I mean, come on. "Reason", "solve hard maths problems" and "answer scientific research questions"? That is complete bullshit. Obviously, it cannot do any of those. I think they are operating on the principle here that if they repeat a lie often enough, many people will believe it.

Re: (Score:3)

by serviscope_minor ( 664417 )

It's marketing more than a lie per se (but also a lie). He keeps making predictions about how his AI is so good it's DANGEROUS!!11one, which gets lots of hits in the news cycle and keeps his company's name known, along with how good their AI is.

It's transparent bullshit, but not apparently transparent enough.

It's a matter of liberty.. (Score:1)

by dowhileor ( 7796472 )

If we restrict applications that can synthesize bioweapons, only the government and terrorists will have them...

Great Filter? (Score:2)

by spaceman375 ( 780812 )

We are entering a time where anyone can have an assistant who knows almost everything and has every skill, but has no wisdom, no ethics, and no free will. The vast majority of people will use this for good. The few who don't may well kill us all, long before we reach AI capable of doing it to us. Hopefully we can catch/impede/derail these attempts long enough to grow beyond them. If we actually attain AGI, maybe it will force us to play nice. Wouldn't that be a plot twist?

Re: (Score:2)

by penguinoid ( 724646 )

That's an interesting take on the Fermi Paradox. With technology as an infinite force multiplier, plus competition for limited resources as the core of evolution, once a civilization figures out technology it shortly hits itself with the banhammer with infinite force. And there seems to be a strong pattern of defense being much harder than offense -- for nukes, bioweapons, asteroid redirection, etc. -- with no upper limit on the potential damage.

Quantum computing poses a much larger risk. (Score:2)

by mmell ( 832646 )

Right now, supercomputing is largely a game of handling programming tasks as multithreaded, parallelized processes. Quantum superposition would allow fairly rapid solutions to microbiological simulation problems, such as those tackled by the Folding@home distributed computing project.
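The classical task-farming pattern the comment alludes to -- many independent simulation tasks handed out to parallel workers -- can be sketched in a few lines of Python. This is a toy illustration only: `simulate_conformation` is a hypothetical stand-in for scoring one protein conformation, not real molecular dynamics, and real supercomputing jobs would distribute work across processes or MPI ranks rather than a thread pool.

```python
# Toy sketch of classical parallel task-farming: many independent
# simulation tasks handed out to a pool of workers. A thread pool keeps
# the example self-contained; real HPC workloads use processes/MPI.
from concurrent.futures import ThreadPoolExecutor


def simulate_conformation(seed: int) -> float:
    """Hypothetical stand-in for scoring one protein conformation."""
    x = seed
    energy = 0.0
    for _ in range(1000):
        # Simple linear congruential generator standing in for physics.
        x = (1103515245 * x + 12345) % (2 ** 31)
        energy += x / 2 ** 31 - 0.5
    return energy


def best_energy(seeds) -> float:
    # Each seed is an independent task, so the work parallelizes trivially.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return min(pool.map(simulate_conformation, seeds))


print(best_energy(range(8)))
```

Because the tasks share no state, the parallel result is identical to running them serially and taking the minimum -- which is exactly why this style of embarrassingly parallel workload dominates distributed computing projects.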

I don't think 'It's better than hurling yourself into a meat grinder'
is a good rationale for doing something.
-- Andrew Suffield in
<20030905221055.GA22354@doc.ic.ac.uk> on debian-devel