Microsoft will force its 'superintelligence' to be a 'humanist' and play nice with people
- Reference: 1762469950
- News link: https://www.theregister.co.uk/2025/11/06/microsoft_suleyman_humanist_superintelligence/
- Source link:
To be clear, "superintelligent AI" hasn't been invented yet. Pundits use the term to describe systems that appear to "think" independently of what humans program them to do, an ability that some previously described as "artificial general intelligence," or AGI.
Suleyman announced he's heading up a new AI Superintelligence Team at Microsoft in a [1]blog post Thursday. The key difference, Suleyman said, is Microsoft's vision of a humanist superintelligence, or HSI, which trades blind AI ambition for carefully planned limitations to make sure AI helps rather than harms us.
"This isn't about some directionless technological goal, an empty challenge, a mountain for its own sake," Suleyman said. "We are doing this to solve real concrete problems and do it in such a way that it remains grounded and controllable."
The announcement that Microsoft is pursuing its own AI ambitions comes as Redmond's relationship with AI pioneer OpenAI has soured.
The pair were inextricably linked for years, with Redmond [5]pouring [6]billions into the cash-burning enterprise. Since OpenAI decided to transform itself into a for-profit enterprise and lessen Microsoft's control over its affairs, however, the companies have been drifting apart, with OpenAI diversifying away from Azure and adding [7]other cloud providers.
The Microsoft AI CEO expounded on his views in an [9]interview with Semafor, laying out what amounts to his own set of three rules for AI, comparable to science fiction author Isaac Asimov's [10]three laws of robotics.
To Suleyman's mind, humanist superintelligence can't have total autonomy, the capacity to self-improve, or the ability to set its own goals. If granted any of those, the Microsoft man worries, superintelligent AI could quickly become a threat.
"The project of superintelligence should not be about replacing or threatening our species," Suleyman told Semafor. "It's crazy to have to actually declare that – that should be self-evident, but I'm seeing lots of indications that people don't always agree."
Suleyman didn't name names in his blog post or the interview, but he did mention that it's incredibly dangerous when chatbot users [12]anthropomorphize AI by [13]ascribing human feelings to it, which naturally leads to discussion of [14]granting rights to algorithms .
"That mentality, if it really takes hold, will lead to a huge amount of conflict and threat to our species," Suleyman warned. "I'm embarking on the project of building [Microsoft's] superintelligence explicitly designed to avoid those things."
Suleyman said Microsoft's human-first approach will differ from the rest of the industry by requiring AIs to interact with humans in ways we can understand, instead of talking among themselves in “vector space”.
"If [AIs] just communicate in vector space, we're always going to be at the mercy of compressing their vector-to-vector space into a set of words that we feel we can hold accountable," Suleyman explained. "We will be limiting ourselves from performance maximization because it's probably true that vector-to-vector communication is going to be more efficient," he added. "We're going to forego efficiency or performance or some improved capability because we're going to prioritize safety, and safety means human-understandable, human-interpretable, rigid guidelines." ®
[1] https://microsoft.ai/news/towards-humanist-superintelligence/
[5] https://www.theregister.com/2023/01/10/microsoft_openai_investment_google/
[6] https://www.theregister.com/2025/03/05/cma_microsoft_openai/
[7] https://www.theregister.com/2025/11/03/openai_inks_38b_deal_with_aws/
[9] https://www.youtube.com/watch?v=aIifmbE2Ztw
[10] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
[12] https://www.theregister.com/2025/07/25/is_ai_contributing_to_mental/
[13] https://www.theregister.com/2022/06/13/google_lamda_sentient_claims/
[14] https://www.theregister.com/2022/06/13/google_lamda_sentient_claims/
Let's assume and worship Microsoft's benevolent Omnipotents.
Who wants to know what a superintelligent AI would look like as created by Mr Musk?
Last attempt
It's just Microsoft's final attempt to keep AI relevant while the bubble finally pops and its stock falls.
For Microsoft to "force" something to behave a specific way, Microsoft would have to have that thing in its possession first, or be able to create one. But Microsoft - or any other company, for that matter - is not capable of doing that, and it's doubtful it ever will be.
We won't get to superintelligence through LLMs, as they completely and utterly lack any kind of intelligence of their own. They only appear intelligent because they spit back the intelligent thoughts of the humans whose content they digested, without actually understanding anything they do.
Microsoft promises to pray to the Omnissiah that its vaporware does the good thing and not the bad thing.