Microsoft, OpenAI, and a US Teachers' Union Are Hatching a Plan To 'Bring AI into the Classroom' (wired.com)
- Reference: 0178310450
- News link: https://news.slashdot.org/story/25/07/08/1237220/microsoft-openai-and-a-us-teachers-union-are-hatching-a-plan-to-bring-ai-into-the-classroom
- Source link: https://www.wired.com/story/microsoft-openai-and-a-us-teachers-union-are-hatching-a-plan-to-bring-ai-into-the-classroom/
The initiative will [1]provide free AI training and curriculum to teachers in the second-largest US teachers' union, which represents about 1.8 million workers including K-12 teachers, school nurses and college staff. The academy builds on Microsoft's December 2023 partnership with the AFL-CIO, the umbrella organization that includes the American Federation of Teachers.
[1] https://www.wired.com/story/microsoft-openai-and-a-us-teachers-union-are-hatching-a-plan-to-bring-ai-into-the-classroom/
Limited (Score:2)
Use it for lesson plans and schemes of work then sack it off.
The Diamond Age (Score:2)
But the cloud replaced the nanotech. And the AI still can't do the voices, right?
Watching this slow demise (Score:2)
I remember when progress felt like a net positive. Now it seems like progress is measured as a countdown to the irrelevance of humanity.
If we eventually get to a point where all school does is teach folks how to properly interact with the AI, and the AIs take over most aspects of actually working, creating art, and most likely living, and interacting with others (see Zuck's offer to provide artificial friends since people no longer make real friends), what will be the point of the humans 'under' the
Re: (Score:1)
Have you seen K-12 test scores lately? No wonder AI is being pitched. They can't seem to teach kids the material, so they have to fall back to teaching them how to use a crutch. 25 years ago when I was in school they didn't let us use graphing calculators on math tests because (in their words) we had to learn how to do it ourselves. They've thrown in the towel. How about becoming skilled first, then increasing productivity with these tools? I.e., start using them in 11th/12th grade instead of early. There's no way AI in e
Re: (Score:2)
We are already there. AI is the next step. Most people today can't think for themselves and have no interest in doing so. They choose options based on popularity and not their needs. The popular options are always the easiest which removes functionality and requires assistance or $ to make it fit their needs. We see this in our schools today from the subjects to the tools used.
Good Idea! (Score:2)
The only way to make LLMs intelligent is to change the definition of intelligent. So let's change humans first, and adapt the definition later to "as dumb as our own children".
It's already there (Score:2)
AI is already there, just no one is openly admitting it...
Personally I think A1 skills should be taught instead. Not too many people can cook a perfect medium rare steak with a tasty sauce. It's a much more usable life skill.
Re: (Score:2)
> AI is already there, just no one is openly admitting it... Personally I think A1 skills should be taught instead. Not too many people can cook a perfect medium rare steak with a tasty sauce. It's a much more usable life skill.
If you need A1 sauce, you either have a low-quality steak, which is fine when you just want some meat and don't want to spend on it, or you royally fucked up a seasoning and cooking routine on a good steak, which is not fine at all, and deserves ridicule.
WTF (Score:2)
Around 2000 we had the dot-com bubble, which was a whole bunch of "irrational exuberance" about over-inflated valuations of companies that were doing *anything* on the web. The poster child of the dot-com crash was [1]Pets.com [wikipedia.org]. In the lead-up to the 2008 real estate bubble and crash I used to hear advertisements on the radio for "interest-only mortgages," and people would say things like "real estate is the one thing they're not making any more of" and "real estate prices always go up". Well, then someone i
[1] https://en.wikipedia.org/wiki/Pets.com
Re: (Score:2)
I have seen a pattern here for a long time, and well before the Web hype.
> How are people so duped by these companies? Is it just blind optimism? Why are we so predisposed to falling for this hype cycle?
People in general are dumb, cannot fact-check, and try to conform to the expectations of others. That makes them easy to manipulate. Countless examples of that effect are available. For commercial hypes (as opposed to political or religious ones, or moral panics and the like), you always find some scummy people who want to get rich and maybe gain power, and who hence design and fuel the hype. Many hypes do not take off and nobody really notices. By some d
Make idiocracy a reality! (Score:2)
It was time a bunch of really stupid people got to work on that.
While this could be done in a way that teaches kids to be careful with AI and not let their own skills atrophy, that is not how this will go.
ClippyAI: your ubiquitous AI friend /s (Score:2)
In the future, as AI becomes ubiquitous in shaping our access to information, could this lead to a monoculture driven by the orthodoxy established by AI companies? This could result in cultural homogenization, reinforce biases, and diminish our ability to think critically.
AI already WENT to the classroom (Score:2)
It already read ALL the schoolbooks of the last 100 years in most languages.
Some people even complain about it.
Why Can't ced3302833214ce98866863af7c09bad Read? (Score:2)
Good idea. We can save a lot of--well, a LITTLE--money by taking the slightly-more-expensive people out of the equation entirely and just having LLMs teach LLMs.
Fun! (Score:2)
Fun fact: feeding the output of an LLM back into itself makes it slowly output complete nonsense. A podcast I listen to fed the transcript of one of its shows into NotebookLM and asked it to create a summary. They then fed that summary back into NotebookLM and asked it to do another summary. Every iteration introduced nonsense that had nothing to do with the original summary. The show is a technology and politics podcast, and by the end the summary-of-a-summary was talking about football games and the weath
Re: (Score:2)
It is a well-known effect called "model collapse" when you do this with training data. With temporary data ("context") you see something similar, namely that LLMs only ever capture a relatively small part of the input data and add noise and statistical correlations to it. You also see that no fact-checking or consistency checking is being done ("does this make sense?"), because LLMs are incapable of logical reasoning. After a few iterations, crap accumulates and original meaning vanishes and things turn to
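The degradation the parent describes can be illustrated with a toy simulation (not a real LLM; the text and the resampling rule here are made up for the sketch): each "generation" draws words only from the statistics of the previous generation's output, so rare words drop out and the vocabulary can only shrink or stay flat, never recover.

```python
import random

random.seed(42)

# Hypothetical source text; any word list works for the demonstration.
text = ("the model collapse effect appears when a generator is trained "
        "on its own output rare words vanish first while the most "
        "frequent tokens come to dominate each successive generation").split()

def resample(words):
    # Crude stand-in for a model that has only learned the statistics
    # of its previous output: sample words with replacement from the
    # current text. The result is always a subset of the current vocab.
    return [random.choice(words) for _ in range(len(words))]

vocab_sizes = []
current = text
for generation in range(10):
    vocab_sizes.append(len(set(current)))
    current = resample(current)

# Vocabulary size never grows from one generation to the next.
print("vocabulary size per generation:", vocab_sizes)
```

Because each pass samples only from its predecessor's words, the distinct-word count is monotonically non-increasing; real model collapse is richer (probabilities distort, not just vocabulary), but the tail-loss mechanism is the same.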