Forget the AI doom and hype, let's make computers useful

(2024/04/25)


Systems Approach Full disclosure: I have a history with AI, having flirted with it in the 1980s (remember expert systems?) and then having safely avoided the AI winter of the late 1980s by veering off into formal verification before finally landing on networking as my specialty in 1988.

And just as my Systems Approach colleague Larry Peterson has classics like the Pascal manual on his bookshelf, I still have a couple of AI books from the Eighties on mine, notably P. H. Winston’s [1]Artificial Intelligence (1984). Leafing through that book is quite a blast, in the sense that much of it looks like it might have been written yesterday. For example, the preface begins this way:

The field of Artificial Intelligence has changed enormously since the first edition of this book was published. Subjects in Artificial Intelligence are de rigueur for undergraduate computer-science majors, and stories on Artificial Intelligence are regularly featured in most of the reputable news magazines. Part of the reason for change is that solid results have accumulated.

I was also intrigued to see some 1984 examples of “what computers can do.” One example was solving seriously hard calculus problems – notable because accurate arithmetic seems to be beyond the capabilities of today’s LLM-based systems.

If calculus was already solvable by computers in 1984, while basic arithmetic stumps the systems we view as today’s state of the art, perhaps the amount of progress in AI in the last 40 years isn’t quite as great as it first appears. (That said, there are even [2]better calculus-tackling systems today; they just aren’t based on LLMs, and it’s unclear if anyone refers to them as AI.)
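
For a sense of what those non-LLM systems do, here is a minimal sketch using SymPy, an open-source symbolic maths library. The integral is my own illustrative example, not one taken from Winston's book or from the linked calculator:

import sympy as sp

x = sp.symbols('x')

# A calculus problem of the kind a symbolic engine dispatches exactly,
# with no statistical guesswork involved
expr = x**2 * sp.exp(x) * sp.sin(x)
antiderivative = sp.integrate(expr, x)
print(antiderivative)

# Differentiating the result recovers the original integrand
assert sp.diff(antiderivative, x).equals(expr)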

One reason I picked up my old copy of Winston was to see what he had to say about the definition of AI, because that too is a controversial topic. His first take on this isn’t very encouraging:

Artificial Intelligence is the study of ideas that enable computers to be intelligent.

Well, OK, that’s pretty circular, since you need to define intelligence somehow, as Winston admits. But he then goes on to state two goals of AI:

To make computers more useful

To understand the principles that make intelligence possible.

In other words, it’s hard to define intelligence, but maybe the study of AI will help us get a better understanding of what it is. I would go so far as to say that we are still having the debate about what constitutes intelligence 40 years later. The first goal seems laudable but clearly applies to a lot of non-AI technology.

This debate over the meaning of “AI” continues to hang over the industry. I have come across plenty of rants that we wouldn’t need the term Artificial General Intelligence, aka AGI, if only the term AI hadn’t been so polluted by people marketing statistical models as AI. I don’t really buy this. As far as I can tell AI has always covered a wide range of computing techniques, most of which wouldn’t fool anyone into thinking the computer was displaying human levels of intelligence.

When I started to re-engage with the field of AI about eight years ago, neural networks – which some of my colleagues were using in 1988 before they fell out of favor – had made a startling comeback, to the point where image recognition by deep neural networks had [4]surpassed the speed and accuracy of humans, albeit with some caveats. This rise of AI led to a certain level of anxiety among my engineering colleagues at VMware, who sensed that an important technological shift was underway that (a) most of us didn’t understand and (b) our employer was not positioned to take advantage of.

As I threw myself into the task of learning how neural networks operate (with a [6]big assist from Rodney Brooks), I came to realize that the language we use to talk about AI systems has a significant impact on how we think about them. For example, by 2017 we were hearing a lot about “deep learning” and “deep neural networks”, and the use of the word “deep” has an interesting double meaning. If I say that I am having “deep thoughts” you might imagine that I am thinking about the meaning of life or something equally weighty, and “deep learning” seems to imply something similar.

But in fact the “deep” in “deep learning” is a reference to the depth, measured in number of layers, of the neural network that supports the learning. So it’s not “deep” in the sense of meaningful, but just deep in the same way that a swimming pool has a deep end – the one with more water in it. This double meaning contributes to the illusion that neural networks are “thinking.”
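
To make that concrete, here is a minimal sketch in PyTorch (the layer sizes are arbitrary choices of mine) of what the depth actually refers to: the deep network is simply the one with more layers stacked between input and output.

import torch.nn as nn

# A shallow network: a single hidden layer
shallow_net = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# A "deep" network: the same idea with more layers stacked up,
# deep in the swimming-pool sense rather than the meaning-of-life sense
deep_net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)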

A similar confusion applies to "learning," which is where Brooks was so helpful: A deep neural network (DNN) gets better at a task the more training data it is exposed to, so in that sense it “learns” from experience, but the way that it learns is nothing like the way a human learns things.
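
As a rough sketch of what that kind of “learning” amounts to in practice, the loop below (PyTorch again, with made-up data and a made-up target) does nothing more than repeatedly nudge numeric weights to reduce prediction error on the training examples; there is no comprehension anywhere in it.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

X = torch.randn(256, 4)          # stand-in "experience": random training data
y = X.sum(dim=1, keepdim=True)   # an arbitrary target for the network to fit

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # how wrong is the network right now?
    loss.backward()              # compute gradients of that error
    optimizer.step()             # adjust the weights slightly; that is the "learning"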

As an example of how DNNs learn, consider [9]AlphaGo, the game-playing system that used neural networks to [10]defeat human grandmasters. According to the system developers, whereas a human would easily handle a change of board size (normally a 19×19 grid), a small change would render AlphaGo impotent until it had time to train on new data from the resized board.
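
A toy illustration of why that is (and it is only an illustration, nothing like AlphaGo's real architecture): once a network's layers are sized to a 19×19 board, input of any other size does not merely degrade the results, it does not even fit.

import torch
import torch.nn as nn

BOARD = 19  # the board size baked into the network when it was built and trained

policy_net = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * BOARD * BOARD, BOARD * BOARD),  # one output per board point
)

policy_net(torch.zeros(1, 1, 19, 19))      # the shape the net was built for: fine
try:
    policy_net(torch.zeros(1, 1, 21, 21))  # a slightly bigger board
except RuntimeError as err:
    print("resized board breaks the network:", err)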

To me this neatly illustrates how the “learning” of DNNs is fundamentally unlike human learning, even if we use the same word. The neural network is unable to generalize from what it has “learned.” Underscoring this point, AlphaGo was recently [12]defeated by a human opponent who repeatedly used a style of play that had not been in the training data. This inability to handle new situations seems to be a hallmark of AI systems.

Language matters

The language used to describe AI systems continues to influence how we think about them. Unfortunately, given the reasonable pushback on recent AI hype, and some notable failures with AI systems, there may now be as many people convinced that AI is completely worthless as there are members of the camp that says AI is about to achieve human-like intelligence.

I am highly skeptical of the latter camp, as outlined above, but I also think it would be unfortunate to lose sight of the positive impact that AI systems – or, if you prefer, machine-learning systems – can have.

I am currently assisting a couple of colleagues who are writing a book on machine-learning applications for networking, and it should not surprise anyone to hear that there are lots of networking problems amenable to ML-based solutions. In particular, traces of network traffic are fantastic sources of data, and training data is the food on which machine-learning systems thrive.

Applications ranging from denial-of-service prevention to malware detection to geolocation can all make use of ML algorithms, and the goal of this book is to help networking people understand that ML is not some magic powder that you sprinkle on your data to get answers, but a set of engineering tools that can be selectively applied to produce solutions to real problems. In other words, neither a panacea nor an over-hyped placebo. The aim of the book is to help readers understand which ML tools are suitable for different classes of networking problems.
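
For a flavour of what such a tool looks like in practice, here is a small sketch using scikit-learn. The flow features and the synthetic “traffic” are invented for illustration and are not drawn from the book:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic flow records: [packets/sec, bytes/sec, mean packet size, duration]
benign = rng.normal([50, 40_000, 800, 30], [20, 15_000, 200, 10], size=(500, 4))
attack = rng.normal([5_000, 500_000, 100, 5], [1_000, 100_000, 30, 2], size=(500, 4))

X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)   # 0 = benign, 1 = attack traffic

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))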

One story that caught my eye some time back was the use of AI to help Network Rail in the UK [17]manage the vegetation that grows alongside British railway lines. The key “AI” technology here is image recognition (to identify plant species) – leveraging the sort of technology that DNNs delivered over the past decade. Not perhaps as exciting as the generative AI systems that captured the world’s attention in 2023, but a good, practical application of a technique that sits under the AI umbrella.
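
The building block there is ordinary image classification: a convolutional network pretrained on labelled photos and fine-tuned for the species of interest. A hedged sketch of the inference step, using a stock torchvision model and an assumed file name, looks like this:

import torch
from PIL import Image
from torchvision import models

# A general-purpose pretrained classifier; the railway system would instead be
# fine-tuned on labelled lineside plant images
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()

img = Image.open("lineside.jpg").convert("RGB")   # hypothetical input photo
batch = weights.transforms()(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

best = probs[0].argmax().item()
print(weights.meta["categories"][best], float(probs[0, best]))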

My tendency these days is to try to use the term “machine learning” rather than AI when it’s appropriate, hoping to avoid both the hype and allergic reactions that “AI” now produces. And with the words of Patrick Winston fresh in my mind, I might just take to talking about “making computers useful.” ®

Larry Peterson and Bruce Davie are the authors behind [18]Computer Networks: A Systems Approach and the related [19]Systems Approach series of books. All their content is open source and available for free on [20]GitHub. You can find them on [21]Mastodon, their newsletter [22]right here, and past The Register columns [23]here.



[1] https://archive.org/details/artificialintell0000wins_t2g6

[2] https://www.wolframalpha.com/calculators/integral-calculator/?redirected=true

[4] https://arxiv.org/abs/1706.06969

[6] https://rodneybrooks.com/forai-machine-learning-explained/

[9] https://www.theregister.com/2016/03/14/google_alphago/

[10] https://www.theregister.com/2016/01/27/google_go_beats_human_master/

[12] https://www.theregister.com/2023/02/20/human_go_ai_defeat/

[17] https://www.ceh.ac.uk/ai-holds-key-improving-biodiversity-britains-railway-tracks

[18] https://book.systemsapproach.org/

[19] https://www.systemsapproach.org/

[20] https://github.com/SystemsApproach

[21] https://discuss.systems/@SystemsAppr

[22] https://systemsapproach.org/newsletter/

[23] https://www.theregister.com/Tag/Systems%20Approach



That is a quote I will keep

Pascal Monett

" The neural network is unable to generalize from what it has 'learned' ”

And I'm going to keep this article's URL to be able to show it to anyone who starts spouting off about how computers are now "intelligent".

Unless they're in marketing. Then it would just be a waste of time.

Re: That is a quote I will keep

Ian Johnston

To be fair, computers are probably now more intelligent than people who work in marketing. Mind you, Fisher-Price have for years made computers more intelligent than anyone involved in HR.

Thank you!

Mike 137

Superb informative article from someone who really knows their stuff. A breath of fresh air among the fog of commercial hype and misunderstandings that constitute the public impression that LLMs are the sum total of "AI".

Dave 126

A layman such as myself always expected a computer to be good at calculus (just as I expect a pocket calculator to be better than me at arithmetic), yet really bad at 'human' (or indeed animal) things like speech recognition, image recognition, and knowing when to stop beeping before I throw it out of the window. Or rather, computers were bad at these things until a few years ago.

A useful umbrella term for all these AI ML LLM NN approaches might be "Newish Techniques for Making Computers Less Rubbish at Doing Things That They Always Used To Be Pretty Rubbish At Doing"

It doesn't roll off the tongue, I grant you. But I find it useful as a placeholder.

Evil Auditor

"Newish Techniques for Making Computers Less Rubbish at Doing Things That They Always Used To Be Pretty Rubbish At Doing"

Absolutely. And to make the result of an LLM less rubbish, we need Prompt Engineers - as an acquaintance recently pitched it, AI will make everything better and more exciting, replace countless jobs, and create new jobs such as the aforementioned prompt engineers. I compulsively had to curb his enthusiasm by mentioning that prompt engineer will be one of the first functions to be fully replaced by AI.

HuBo

Then again, prompt contortionists, prompt charmers, and prompt swallowers, will probably survive as rather unique arts showcased in roadshows of future traveling three-ring AI circuses ... with dancing robodogs in tutus and talking llamas!

Evil Auditor

...prompt contortionists, prompt charmers, and prompt swallowers... And make an LLM do things it is not supposed to do? Brilliant!

There are two kinds of fools...

theOtherJT

"...those that think religion is literally true, and those that think it is worthless."

Can't remember where I heard that quote today, but replace "religion" with "AI" and I think we have a reasonable summary of the state of play. No, it's not actually "Intelligent" but it does have its uses. We just need to remain realistic about what they are.
