'PromptQuest' is the worst game of 2025. You play it when trying to make chatbots work
- News link: https://www.theregister.co.uk/2025/12/26/ai_is_like_adventure_games/
Adventure games like Zork and its many imitators invited players to explore a virtual world, often a Tolkien-esque cave, that existed only as words.
"You enter a dark room. A Goblin pulls a rusty knife from its belt and prepares to attack!" was a typical moment in such games. Players, usually armed with imagined medieval weapons, might respond "Hit Goblin" in the expectation that the phrase would see them draw a sword and smite the monster.
But the game might respond to "Hit Goblin" by informing players "You punch the Goblin."
The Goblin would dodge the punch and stab the player with the rusty knife.
Game over... until the player tried "Hit Goblin with sword" or "Stab Goblin" or whatever other syntax the game required, assuming they didn't just give up out of frustration at having to guess the correct verb/noun combination.
Adventure games were big in the 1980s, a time when computers were flaky and unpredictable, and AI was an imagined technology. Games that required obscure syntax were mostly tolerable and generally excused.
I'm less tolerant of AI making me learn its language.
For example, I recently prompted Microsoft's Copilot chatbot to scour data available online and convert some elements of it into a downloadable spreadsheet. The bot accepted that request and produced a Python script that it claimed would write a spreadsheet.
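The script itself isn't reproduced here, but the pattern is familiar; a minimal, hypothetical sketch of what such a "spreadsheet-writing" script tends to look like (the file name and data are invented for illustration) might be:

```python
import csv

# Hypothetical stand-in for the scraped data the bot was asked to convert.
rows = [
    ["item", "value"],
    ["widgets", 42],
    ["gadgets", 17],
]

# The script writes a CSV file, which a spreadsheet application can open,
# but it is not the downloadable spreadsheet that was actually requested:
# the user still has to run the code themselves.
with open("output.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

Run locally, this produces `output.csv`; the gap between "here is a script" and "here is your spreadsheet" is exactly the complaint.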
In other AI experiments, I have found that the same prompt produces different results on different days. One prompt I use to check I haven't left any terrible typos in stories produces responses in a different format every time I use it. Microsoft has also, in its wisdom, decided to offer different versions of Copilot in Office and in its desktop app. Each produces different results from the same prompt and the same source material.
Using AI has therefore become a non-stop experiment in "Hit/Kill/Stab/Smite Goblin."
And when Copilot starts using a new model, which it does without any change to its UI, prompts that worked reliably in the past produce different results, meaning I need to relearn what works.
My point here is not that chatbots do dumb things and make mistakes. It's that working with this tech feels like groping through a cave in the dark – a horrible game I call "PromptQuest" – while being told this is improving my productivity.
After Copilot gave me a Python script instead of a spreadsheet, I played a long session of PromptQuest during which Microsoft's AI responded to many different prompts by repeatedly telling me it was ready to make a spreadsheet, would make it available to download, and had completed the job to my satisfaction.
It never delivered the spreadsheet, and my frustration grew to the point at which I instructed Copilot to produce a progress bar so I could see it work.
[11]
A progress bar produced by Microsoft Copilot
You can see the results above. Ironically, I think it looks a lot like the output of a text adventure game. ®
[1] https://www.theregister.com/2025/11/21/microsoft_zork_source/
[11] https://regmedia.co.uk/2025/12/16/screenshot_copilot_progress_bar.jpg
The only way to win is not to play.
The progress bar
All it needs is “you were eaten by a grue” to be added to the messages.
Key phrase
I think the phrase you are looking for is "non-deterministic". It's like expecting a random number generator to serve as a clock, or a fruit machine to pay out every time you pull the lever.
Re: Key phrase
" expecting a random number generator to serve as a clock "
If it periodically serves up two random numbers H, M (0 ≤ H < 24, 0 ≤ M < 60), that would be a more useful clock than PromptQuest could be for anything.
Even if randoclock only offered the two numbers once a day, it would still be correct for a minute each and every day.
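The "randoclock" thought experiment sketches out in a few lines of Python (the name and behaviour are the commenter's hypothetical, not a real utility):

```python
import random

def randoclock():
    """Serve up one random hour/minute pair per day, stopped-clock style."""
    return random.randrange(24), random.randrange(60)

h, m = randoclock()
# Whichever pair it picks, the real time matches it during exactly one of
# the 24 * 60 = 1440 minutes in a day, so the display is right once daily.
print(f"Today randoclock says {h:02d}:{m:02d}")
```

A guaranteed minute of correctness per day is, as the commenter notes, a firmer service-level agreement than PromptQuest offers.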
Re: Key phrase
... and it would still be more accurate than [1]Meteo France most of the time ... (they rely on Monte-Carlo methods I'm told) ... ;)
[1] https://meteofrance.com/
Wait
Wait for PromptQuest the MUD. We can all enjoy the shit show together while dodging prompt-injecting Player Killers looking to grief you. A glorious future.
hmmm
Try the prompt
"Stab AI with sword, use axe to cut AI into bits, Burn the bits"
I never did finish The Hobbit (the game, that is).
Re: hmmm
I hope you're talking about the C64 version of The Hobbit. Some well-meaning friend of the family gave me a copy on tape as a gift. Minor problem: I had a Vic-20. This did not work. So I've never finished it either, but for different reasons.
Humanity is a bug, not a feature
I get on well with computers because they provide the same result each time you request they perform a function. They're more reliable in this regard than the equivalent human-group I am forced to deal with on an ongoing basis. "AI" not providing consistent, repeatable output is a bug, not a feature, and one I would like to submit a request for change to correct. Thank you.
TL;DR - still a toy
"AI" can impress under the right conditions. Which are artificial and contrived.
However I wouldn't trust it to cross the road.