What's the Best Way to Stop AI From Designing Hazardous Proteins? (msn.com)
- Reference: 0179649392
- News link: https://science.slashdot.org/story/25/10/04/0539239/whats-the-best-way-to-stop-ai-from-designing-hazardous-proteins
- Source link: https://www.msn.com/en-us/news/technology/ai-can-design-toxic-proteins-they-re-escaping-through-biosecurity-cracks/ar-AA1NKKl6
"We will continue to stay on it and send out patches as needed, and also define the research processes and best practices moving forward to stay ahead of the curve as best we can."
But is that enough?
> Outside biosecurity experts applauded the study and the patch, but said that this is not an area where one single approach to biosecurity is sufficient. "What's happening with AI-related science is that the front edge of the technology is accelerating much faster than the back end ... in managing the risks," said David Relman, a microbiologist at Stanford University School of Medicine. "It's not just that we have a gap — we have a rapidly widening gap, as we speak. Every minute we sit here talking about what we need to do about the things that were just released, we're already getting further behind."
The Washington Post notes that not every company deploys biosecurity software. But "A different approach, biosecurity experts say, is to ensure AI software itself is imbued with safeguards before digital ideas are at the cusp of being brought into labs for research and experimentation."
> "The only surefire way to avoid problems is to log all DNA synthesis, so if there is a worrisome new virus or other biological agent, the sequence can be cross-referenced with the logged DNA database to see where it came from," David Baker, who shared the Nobel Prize in chemistry for his work on proteins, said in an email.
This is a halting-problem variant, isn't it? (Score:2)
Given a program P that takes an input vector x, write a function f such that f(P) is true if and only if P halts for all possible inputs.
The only difference here is that we're asked: given a protein P that interacts with a vector of molecules x, write a function f such that f(P) returns true if and only if, for all possible input vectors, P cannot cause a catastrophic failure of, or serious impediment to, a complex biological process resulting in injury or loss of life.
Re: (Score:3)
Sure, it's a halting problem, and while it is indeed impossible to design an algorithm that can tell you whether any arbitrary program will halt, you can absolutely identify sequences of code that are likely to halt.
The goal isn't perfection, the goal is better than 0%.
With very few constraints, the halting problem goes away entirely.
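As a minimal sketch of that last point (hypothetical Python, not any real screening tool): restrict programs to range()-bounded for loops, ban while loops and unknown calls, and termination becomes trivially checkable. The checker is conservative, answering True only when it can prove halting:

```python
import ast

ALLOWED_CALLS = {"range", "print", "len"}  # builtins known to terminate

def obviously_halts(source: str) -> bool:
    """Conservative termination check for a tiny, constrained subset of
    Python. True means provably halts; False means "cannot prove it",
    which is not the same as "runs forever"."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.While, ast.comprehension)):
            return False  # unbounded or unchecked iteration: give up
        if isinstance(node, ast.For):
            it = node.iter
            if not (isinstance(it, ast.Call)
                    and isinstance(it.func, ast.Name)
                    and it.func.id == "range"):
                return False  # only range()-bounded loops are provably finite
        if isinstance(node, ast.Call):
            f = node.func
            if not (isinstance(f, ast.Name) and f.id in ALLOWED_CALLS):
                return False  # unknown call: could recurse, give up
    return True

print(obviously_halts("for i in range(10):\n    print(i)"))  # True
print(obviously_halts("while True:\n    pass"))              # False
```

The protein analogue is far harder because there is no biochemical equivalent of "only range() loops", but the shape of the argument is the same: shrink the space until the question becomes decidable, then flag everything outside it.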
Re: (Score:2)
To be specific, [1]Coq (now Rocq) is a language [learnxinyminutes.com] that doesn't have a halting problem: every program written in the language terminates.
[1] https://learnxinyminutes.com/coq/
Re: (Score:2)
Cool. Never heard of it. Thanks for the cite.
Re: (Score:2)
There's a difference between a "hazardous protein" and a "protein that doesn't cause damage until three generations into the future." They aren't trying to stop the second category.
Re: (Score:2)
The halting-problem proof depends on being able to feed the decider to itself. Here you would still need a decider, but the decider wouldn't itself be a protein, so that diagonalization doesn't apply.
Wouldn't like to live in a world (Score:2)
depending on "Windows Update"...
Just don't do it? (Score:2)
What is the best way to stop humans from doing it?
It's not like the AI decides "I want to fold a protein" on its own; some human runs the program.
Re: (Score:2)
Correct. As with most things, the answer is to punish the people at the top, not only their AI underlings.
There is no way to code the Three Laws (Score:2)
Asimov's Laws of Robotics can't be encoded using any existing technology. So there is no way to stop bad stuff from being generated at this time.
Same as with stopping anything AI (Score:1)
AI doesn't do this on its own.
AI is still only a tool.
The solution to this problem and many others is to stop (and/or punish) humans USING AI to do anything bad.
Nuke it from orbit (Score:3)
It's the only way to be sure.
Re: (Score:2)
> It's the only way to be sure.
Given that this is the latest version of the proposition "how do we outlaw math?", yours is the only answer that's actually possible.
Re: (Score:2)
How do you outlaw criminal behaviour now?