
Why Can't ChatGPT Tell Time? (theverge.com)

(Friday November 28, 2025 @11:01AM (msmash) from the stranger-things dept.)


ChatGPT can browse the web, write code, and analyze images, but ask it what time it is and you might get the correct answer, a confident wrong answer, or a polite refusal -- [1] sometimes all three within minutes of each other.

The problem stems from how large language models work. These systems predict answers based on training data and don't receive constant real-time updates about things like time unless they specifically search the internet. AI robotics expert Yervant Kulbashian told The Verge that a language model "is only referencing things that have entered this space," comparing it to a castaway on an island stocked with books but no watch.

OpenAI can give ChatGPT access to system clocks, and does so through features like Search. But there are tradeoffs: every clock check consumes space in the model's context window, the finite portion of information it can hold at any given moment. Pasquale Minervini, a natural language processing researcher at the University of Edinburgh, said the leading models also struggle to read analog clock faces and have trouble with calendars.
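To make the context-window tradeoff concrete, here is a minimal Python sketch (the message format and token budget are hypothetical, loosely modeled on common chat-API conventions, not OpenAI's internals): the tool's clock reading is appended to the conversation like any other message, and every message counts against the model's fixed token budget.

    from datetime import datetime, timezone

    CONTEXT_BUDGET_TOKENS = 8192  # hypothetical context-window size

    messages = [{"role": "user", "content": "What time is it?"}]

    def rough_token_count(msgs):
        # Crude whitespace proxy for tokens; real tokenizers count differently.
        return sum(len(m["content"].split()) for m in msgs)

    # Each clock check injects the reading into the conversation as one more
    # message the model must carry for the rest of the session.
    reading = datetime.now(timezone.utc).isoformat()
    messages.append({"role": "tool", "content": f"current_time={reading}"})

    print(f"{rough_token_count(messages)} of {CONTEXT_BUDGET_TOKENS} tokens used")

The model never holds the current time as state; the reading is just more text competing for the same finite window.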



[1] https://www.theverge.com/report/829137/openai-chatgpt-time-date



Prove you're a human (Score:3)

by ZiggyZiggyZig ( 5490070 )

Maybe in the future we can use this test to differentiate between replicants and real humans...

Re: (Score:3, Funny)

by Chris Mattern ( 191822 )

And now I'm picturing Harrison Ford going around asking people what time it is.

Re: (Score:3)

by stealth_finger ( 1809752 )

I dunno, daytime?

Re: (Score:2)

by excelsior_gr ( 969383 )

I was thinking of something more along the lines of Flavor Flav.

Re: (Score:2)

by Viol8 ( 599362 )

Holden asks Leon the time...

Leon looks at his Casio and tells him.

Roll end credits.

Hmm.....

Re: (Score:3)

by Knightman ( 142928 )

As usual, there are always caveats. Many youths these days can't read an analog clock; or perhaps they actually are replicants. Who is to say, since we are in an age where many people think correlation is "factual" proof.

Article is paywalled. (Score:2)

by msauve ( 701917 )

That sucks.

AI is just limited. (Score:2)

by Kokuyo ( 549451 )

I have been using both Grok and ChatGPT fairly extensively lately, in a variety of scenarios... all I can say is that there is merit in using them, but only to a very limited degree.

If you can give them a task they're good at by design, hey, great for you... but I find most tasks we want to give them are not like that.

Re: (Score:2)

by caseih ( 160668 )

I find the various LLMs are helpful as a form of search engine, enabling me to drill down to potentially useful information more quickly. At the same time, however, they are far worse than a search engine because they aren't able to actually give you the sources to check. When ChatGPT generates a chunk of code, if you ask it where it got it from, it will say it didn't get it from a specific site, it just knows this stuff. Which of course ends up wrong half the time. So you end up with wrong stuff confidently presented.

So, basically (Score:2)

by Chris Mattern ( 191822 )

In a constantly changing world where change is happening ever faster, ChatGPT is essentially a static model.

Re: (Score:1)

by dexterace ( 68362 )

So, basically, if you have a nail, a hammer is still better than a screwdriver for driving it into the wall.

That is to say, there are some things that LLMs don't do well because, basically, they are just that: large language models. Returning information about current, timely events is not in their nature.

Re: (Score:2)

by martin-boundary ( 547041 )

Even a stopped clock is right twice a day! Are you saying LLMs are less intelligent than a stopped clock?

Re: (Score:3)

by timeOday ( 582209 )

Not really. If you ask it what's going on in the news then you'll get an up-to-date response, because it knows in that instance to check and summarize the news rather than just generating something from its LLM.

And if you ask Gemini what time it is you'll get the right answer, for the same reason.

The fact that ChatGPT fails to do this is a problem with ChatGPT, not any inherent problem for AI. Probably in response to this embarrassing article it will be fixed within a couple weeks.

Nope (Score:2)

by The MAZZTer ( 911996 )

Any modern AI model can be provided "tools" that it can use to perform various tasks or retrieve various information. The current date and time is easy to do. I can't say why the author and/or ChatGPT seems to have trouble, but you can easily set up a tool that returns the current date and time, instruct the AI "this will return the current date and time," and then, if the user asks for it, the AI will automatically leverage the tool. It's possible ChatGPT just has a lot of tools at its disposal and is getting confused.
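For the curious, a minimal Python sketch of that pattern, assuming an OpenAI-style function-calling schema; the tool name get_current_datetime and the dispatcher are hypothetical illustrations, not ChatGPT's actual internals.

    from datetime import datetime, timezone

    def get_current_datetime() -> str:
        """Return the host's current time as an ISO-8601 UTC string."""
        return datetime.now(timezone.utc).isoformat()

    # Declaration handed to the model so it knows the tool exists and what it does.
    TOOL_SPEC = {
        "type": "function",
        "function": {
            "name": "get_current_datetime",
            "description": "Returns the current date and time in UTC.",
            "parameters": {"type": "object", "properties": {}},
        },
    }

    # When the model emits a tool call, the hosting code runs the matching
    # function and feeds the string result back into the conversation.
    TOOLS = {"get_current_datetime": get_current_datetime}

    def dispatch(tool_name: str) -> str:
        return TOOLS[tool_name]()

    print(dispatch("get_current_datetime"))

The model itself never reads a clock; it only ever sees the string the tool returns.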

Humans can't tell time either (Score:4, Insightful)

by brunes69 ( 86786 )

Humans are notoriously horrible at measuring the passage of time. Ever heard of the expression "a watched pot never boils" or "time flies when you're having fun"? Where do you think those expressions come from?

If I took away your watch/phone and all external tools and asked you to tell me what time it was a few hours from now, do you think you would succeed within even a 30-minute margin of error?

Re: Humans can't tell time either (Score:2)

by dfarrow ( 1683868 )

We aren't asking it to guess based on how tired it is. Just check its own system clock.

Re: (Score:2)

by brunes69 ( 86786 )

An LLM doesn't have a "system clock" any more than your brain does. Do you know anything at all about how LLMs work?

If an LLM made a decision to "check its system clock" for the time, then it is no different from calling out to a web service to do the same. It is RAG-style external tool use. That is all it is.

Re: (Score:2)

by DaTrueDave ( 992134 ) *

Yeah, I don't understand how these models can "learn" from millions of scraped websites, but can't scrape some atomic clock website once a day to at least be relatively accurate with time.

Re: Humans can't tell time either (Score:2)

by brunes69 ( 86786 )

They can, and do, do that already.

Re: (Score:2)

by martin-boundary ( 547041 )

What do humans have to do with superhuman intelligences? Are you saying that if humans are bad at telling the time, then superhuman intelligences should be excused for being worse than humans at that task? Did you miss the bit about "super"? It's from Latin; it means "above", "over", "beyond", "better".

Re: (Score:3)

by brunes69 ( 86786 )

I am saying that the question is irrelevant, just like a lot of pontificating about AGI is.

An LLM needing to use an external tool to check the time is irrelevant to how intelligent it is. Humans also need to use tools to check the time.

It's a stupid thing to even discuss.

Really? (Score:2)

by oldgraybeard ( 2939809 )

"read analog clock faces and have trouble with calendars" automated image pattern matching has been around for decades. And in the limited arena of clock faces and calendars seems very doable.

Re:Really? (Score:4, Insightful)

by drinkypoo ( 153816 )

> automated image pattern matching has been around for decades

The problem is that the LLM only does one trick. When you start integrating other software with it, the other software's input has to be fed in the same way as your other tokens. As the last paragraph of TFS says, "every clock check consumes space in the model's context window" and that's because it's just more data being fed in. But the model doesn't actually ever know what time it is, even for a second; the current time is just mixed into the stew and cooked with everything else. It doesn't have a concept of the current time because it doesn't have a concept of anything.

You could have a traditional system interpreting the time, and checking the LLM's output to determine whether what it said made sense. But now that system has to be complicated enough to determine that, and since the LLM is capable of so much complexity of output it can never really be reliable either. You can check the LLM with another LLM, and that's better than not checking its output at all, but the output checking is subject to the same kinds of failures as the initial processing.
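To make that concrete, a traditional checker for just the time case could be as small as this sketch (stdlib-only, hypothetical): extract an HH:MM claim from the model's output and compare it to the system clock within a tolerance. The catch, per the above, is that this covers exactly one narrow claim out of everything an LLM can emit.

    import re
    from datetime import datetime

    def time_claim_is_plausible(llm_output: str, tolerance_minutes: int = 5) -> bool:
        # Look for a 24-hour HH:MM claim in the model's output.
        match = re.search(r"\b([01]?\d|2[0-3]):([0-5]\d)\b", llm_output)
        if match is None:
            return False  # nothing parseable to verify
        claimed = int(match.group(1)) * 60 + int(match.group(2))
        now = datetime.now()
        actual = now.hour * 60 + now.minute
        # Compare modulo 24h so a claim just before midnight isn't unfairly rejected.
        diff = abs(claimed - actual)
        return min(diff, 24 * 60 - diff) <= tolerance_minutes

    print(time_claim_is_plausible(f"It is {datetime.now():%H:%M} right now."))  # True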

So yeah, we can do that, but it won't eliminate the [class of] problem.

Algorithms (Score:2)

by fluffernutter ( 1411889 )

Because AI cannot do anything it hasn't been programmed to do. When we enter the quit command into a regular program and it doesn't quit, that is called a bug; but when AI does it, it is "trying to survive"? No, people have designed a program that works on data so complex they don't even understand it, and they made the poor design choice to put the part they don't understand in front of the internal quit. It's still a bug, but people want to make it into something more mystical than that.

Re: (Score:2)

by burtosis ( 1124179 )

For the record, maybe we shouldn’t call it a bug because they have rudimentary sentience and thinking ability. Maybe unproductive mutation would be more apt.

Re: (Score:2)

by Viol8 ( 599362 )

Thinking yes, sentience no.

Re: (Score:2)

by WaffleMonster ( 969671 )

> Because AI cannot do anything it hasn't been programmed to do.

This is comically incorrect.

Re: (Score:2)

by fluffernutter ( 1411889 )

So I know Copilot can execute Python code. Are you saying it can also execute COBOL and PL/I and APL and Lisp?

D'oh - is this "LLMs are LLMs"-week on slashdot? (Score:2)

by unami ( 1042872 )

Same if you ask it about the next full moon - or really any temporal question about the present or future where there's a lot of training data from the past - as is to be expected.

Re: (Score:2)

by serviscope_minor ( 664417 )

Yes, it will keep being "LLMs are LLMs" week until the outside world moves on from hammering "LLMs are magic" week every week.

LLMs are of course LLMs.

Re: (Score:2)

by cmseagle ( 1195671 )

> Same if you ask it about the next full moon - or really any temporal question about the present or future where there's a lot of training data from the past - as is to be expected.

I just asked ChatGPT "When is the next full moon?" and it did a web search to give me the correct answer for my location.

Not only that (Score:2)

by r1348 ( 2567295 )

Try using this prompt for image generation: an analog watch showing 3.15pm time

This is why we use "agents" instead of "LLMs" (Score:3, Insightful)

by nategasser ( 224001 )

The ChatGPT (or any AI product) people actually use is more than just an LLM. People get wound up over the inherent limitations of LLMs as if they're some brick wall preventing these things from ever becoming useful.

It's ludicrously simple to program a chatbot script to return the time instead of diving into the LLM when the user asks for the time. Same with mathematical operations - yes, LLMs sometimes get basic math wrong because that's not what they're designed for. But again, it's super simple to channel math requests to an engine that's built for that.
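A minimal Python sketch of that routing idea (purely illustrative; no product is claimed to work this way): answer time and simple arithmetic deterministically, and only fall through to the LLM for everything else.

    import ast
    import operator
    import re
    from datetime import datetime

    # Safe arithmetic: walk the AST instead of calling eval() on raw user input.
    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

    def safe_math(expr: str):
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval"))

    def call_llm(message: str) -> str:
        return "(answer generated by the LLM)"  # hypothetical fallback

    def route(user_message: str) -> str:
        if re.search(r"\bwhat time is it\b", user_message, re.IGNORECASE):
            return datetime.now().strftime("%H:%M")  # deterministic, exact
        if re.fullmatch(r"[\d\s+\-*/().]+", user_message):
            return str(safe_math(user_message))      # deterministic, exact
        return call_llm(user_message)

    print(route("what time is it"))  # e.g. 14:05
    print(route("2 + 2 * 3"))        # 8

Walking the AST instead of using eval() keeps the deterministic math path from executing arbitrary code.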

If they haven't fixed a particular shortcoming yet, it's because they're seeking more complete solutions rather than picking off one tiny complaint at a time. But these things are trivial to fix, and smarmy posts like this will not age well.

Re:This is why we use "agents" instead of "LLMs" (Score:4, Insightful)

by burtosis ( 1124179 )

> The ChatGPT (or any AI product) people actually use is more than just an LLM. People get wound up over the inherent limitations of LLMs as if they're some brick wall preventing these things from ever becoming useful.

> It's ludicrously simple to program a chatbot script to return the time instead of diving into the LLM when the user asks for the time. Same with mathematical operations - yes, LLMs sometimes get basic math wrong because that's not what they're designed for. But again, it's super simple to channel math requests to an engine that's built for that.

> If they haven't fixed a particular shortcoming yet, it's because they're seeking more complete solutions rather than picking off one tiny complaint at a time. But these things are trivial to fix, and smarmy posts like this will not age well.

The whole problem with this approach is that a) we don’t have all the individual systems we need at a level of capability that meets the current level of hype, and b) we don’t have a capable enough system-management AI architecture; just using some weighted tokens is horribly deficient.

Re: (Score:2)

by The MAZZTer ( 911996 )

Yeah, I have used Semantic Kernel to code AI in .NET, and I did not give it the capability to tell the current date and time, but it would be a five-minute fix to do so, since getting the current time is trivial. The bigger problem would be ensuring that the offline server the AI runs on has its clock set correctly.

GPT5-Pro gets it (Score:2)

by michaelmalak ( 91262 )

GPT5-Pro ($200/mo) answers with the correct time, even though it takes five minutes to come up with the answer.

tells me the time fine (Score:1)

by roman_mir ( 125474 )

I don't know how some people get these types of weird answers, I cannot get the damn thing to tell me the wrong time. The only really weird thing is that it is 'searching the web' while 'thinking'.

Re: (Score:1)

by roman_mir ( 125474 )

Asked the thing to tell me:

> I have heard that you sometimes give wrong time and sometimes even answer that you cannot tell the time, why does this happen to some people?

> ChatGPT said:

> You see different behavior because the system reads time from the user's environment. Some users block this data. Some browsers hide it. Some devices send inconsistent region info. This leads to gaps.

> You see steady answers because your browser shares stable time data.

> Other users trigger fallback behavior. In those cases I answer that I cannot read the time.

is this it?

Much training data needs time-tagging (Score:2)

by Applehu Akbar ( 2968043 )

One of my common online searches is for technical info about software bugs. Google's assistant is great at giving me results that apply to three OS versions ago.

I don't care (Score:2)

by rossdee ( 243626 )

It's 5 o'clock somewhere.

AI can tell time (Score:3)

by FudRucker ( 866063 )

When you show it how. AI should be renamed artificial ignorance: [1]https://postimg.cc/30Tqjs4Z [postimg.cc]

[1] https://postimg.cc/30Tqjs4Z

Twenty five (Score:2)

by zawarski ( 1381571 )

Or six to four.

Also, why can't ChatGPT control a robot? (Score:1)

by Posthoc_Prior ( 7057067 )

Two years ago, on Slashdot, there was a post about how Microsoft was trying to get ChatGPT to control a robot:

https://hardware.slashdot.org/story/23/02/25/2330204/microsoft-tests-chatgpts-ability-to-control-robots

As far as I can tell, there's been little or no progress on this. That is, there is no (public) prototype either of a robot built on the same principles as ChatGPT or of a robot that ChatGPT can control. Why not? Why doesn't the mathematical foundation of large language models translate to the physical world?

Re: (Score:2)

by fluffernutter ( 1411889 )

Because ChatGPT and most AIs only work with digitally encoded data. The movement of an arm, like turning a dial, implies a translation from digital to real-world analog. Translating digital to interface with an analog world is really hard. Another example would be self-driving.

Re: (Score:1)

by Posthoc_Prior ( 7057067 )

The "encoded data" is encoded using lexical analysis (see link). The purpose of this is to convert characters into numerical representations in a vector form. That is, it's to find distance relationships between sets of characters. In other words, lexical analysis is the equivalent of vectors.

All physical data can be transformed into vector form. This is, for instance, the basis of classical mechanics (eg, vector calculus). So, lexical analysis adds a step to get to the vector representation. If anything,
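If it helps, a toy Python sketch of the pipeline being described (made-up three-dimensional vectors; real models learn embeddings with thousands of dimensions): text is split into tokens, each token is looked up as a vector, and relatedness is measured as distance between those vectors.

    import math

    # Toy embedding table; real models use high-dimensional trained vectors.
    EMBEDDINGS = {
        "clock": [0.9, 0.1, 0.3],
        "watch": [0.8, 0.2, 0.4],
        "banana": [0.1, 0.9, 0.2],
    }

    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    def cosine(a, b):
        # Cosine similarity: near 1.0 means similar direction, near 0.0 unrelated.
        return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

    tokens = "clock watch banana".split()  # naive whitespace tokenizer
    vectors = [EMBEDDINGS[t] for t in tokens]

    print(cosine(EMBEDDINGS["clock"], EMBEDDINGS["watch"]))   # high: related
    print(cosine(EMBEDDINGS["clock"], EMBEDDINGS["banana"]))  # low: unrelated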

Re: (Score:2)

by fluffernutter ( 1411889 )

You are talking about the structure of vectors that compose the model. I'm pretty sure that there is no feedback loop to modify the vectors after the model is built. It's not like the AI can assign a vector to a joint. It doesn't work that way.

Re: (Score:2)

by Jeremi ( 14640 )

There has been [1]plenty of progress [interestingengineering.com] in using AI to control robotics; they use robotics-specific AIs for that, of course.

The fact that ChatGPT (or even LLMs in general) isn't particularly useful for robots shouldn't be a surprise, since robots (other than maybe C3PO) are about physical manipulation of objects, not about language generation.

[1] https://interestingengineering.com/ai-robotics/humanoids-complete-household-tasks-with-less-training

Re: (Score:1)

by Posthoc_Prior ( 7057067 )

The link wasn't to a prototype by OpenAI -- or anything related to OpenAI.

Language generation of a large language model works by finding mathematical relationships between concepts. And this is done using vector calculus. Robots work by translating physical space into a coordinate space of vectors. In other words, both large language models and robots work by calculating on vectors -- usually in a Euclidean space. There should be a linear correlation between the two but there doesn't seem to be.

Why can't LLMs tell time? (Score:2)

by greytree ( 7124971 )

"Honest" Sam Altman here.

Why can't LLMs tell time?

I don't know, but give me $100 billion and I will blindly throw it at building datacenters and burning up the planet until AI can tell the time. Or I go bust.

Which one do you think it will be?

My experience (Score:2)

by BytePusher ( 209961 )

My experience is that you have to double-check everything any model tells you. They don't "think" or understand. I find them most useful as a search engine when I'm not knowledgeable enough to know the specific terms I'm looking for. They're also good if you just want to generate text but aren't too concerned with the specifics. With image generation, even nanobanana, they have never given me anything remotely usable, as they have extremely strong attractors toward producing images that already exist.

Re: (Score:2)

by groobly ( 6155920 )

It all depends on what you mean by "think." For many purposes I've used them for, they "think" a lot better than employees I've had. These programs are more than just LLMs.

gibberish (Score:2)

by groobly ( 6155920 )

Article is conflating a lot of different things. I'm currently using ChatGPT to write some code. New versions get timestamps. ChatGPT is able to obtain the current time via Python calls, but the results are variably inaccurate, and it's unclear why. It seems to have to do with resource limitations in its sandbox.
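For what it's worth, one way to measure that kind of drift from inside a sandbox is to compare the local clock against an NTP server. This sketch assumes the third-party ntplib package and outbound network access, neither of which a sandbox necessarily allows.

    import ntplib  # third-party: pip install ntplib

    # response.offset is the estimated difference between the local clock
    # and the NTP server's clock, in seconds.
    response = ntplib.NTPClient().request("pool.ntp.org", version=3)

    print(f"local clock is off by {response.offset:+.3f} seconds")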
