National Archives Pushes Google Gemini AI on Employees
- Reference: 0175258061
- News link: https://tech.slashdot.org/story/24/10/15/1553228/national-archives-pushes-google-gemini-ai-on-employees
- Source link: https://www.404media.co/ai-mazing-tech-venture-national-archives-pushes-google-gemini-ai-on-employees/
> In June, the U.S. National Archives and Records Administration (NARA) gave employees a presentation and tech demo called "AI-mazing Tech-venture" in which Google's Gemini AI was presented as a tool archives employees could use to "enhance productivity." During a demo, the AI was queried with questions about the John F. Kennedy assassination, according to a copy of the presentation obtained by 404 Media using a public records request.
>
> In December, NARA plans to launch a public-facing AI-powered chatbot called "Archie AI," 404 Media has learned. "The National Archives has big plans for AI," a NARA spokesperson told 404 Media. "It's going to be essential to how we conduct our work, how we scale our services for Americans who want to be able to access our records from anywhere, anytime, and how we ensure that we are ready to care for the records being created today and in the future."
>
> Employee chat logs from the presentation show that National Archives employees are concerned about the idea that AI tools will be used in archiving, a practice that is inherently concerned with accurately recording history. One worker who attended the presentation told 404 Media, "I suspect they're going to introduce it to the workplace. I'm just a person who works there and hates AI bullshit." The presentation was given about a month after the National Archives banned employees from using ChatGPT because it said it posed an "unacceptable risk to NARA data security," and cautioned employees that they should "not rely on LLMs for factual information."
[1] https://www.404media.co/ai-mazing-tech-venture-national-archives-pushes-google-gemini-ai-on-employees/
Hopefully, after it fails... (Score:2)
...useful data will be gathered in order to make future versions better.
It's always a hard question whether or not to use very early versions of tech that are known to have problems.
Coming soon to a textbook near you (Score:3)
Black Nazis.
LLMs would probably be good at this (Score:2)
One of the primary things I want, as a service, from archive sites in general is summaries. LLMs like Perplexity have worked quite well at that task for me. I haven't really played with Gemini much outside of Google Search results, but it seems OK too. The only reservation I have is the reported "hallucinations": if the LLM doesn't understand or know the information prompted for, it will sometimes make up information. If we can just teach an LLM to say "I don't know" instead, it could be a useful tool.
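For what that "I don't know" idea looks like in practice, here is a minimal sketch of abstention prompting using Google's google-generativeai Python SDK. The model name, prompt wording, and ask() helper are illustrative assumptions, not anything NARA or Google has described:

```python
# Minimal sketch of abstention prompting: the model is told to answer only
# from the supplied excerpt and to say "I don't know" otherwise.
# Assumes the google-generativeai SDK and a GOOGLE_API_KEY env var; the
# model name and prompt wording are illustrative, not NARA's actual setup.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=(
        "You answer questions about archival records. Use ONLY the excerpt "
        "provided. If the excerpt does not contain the answer, reply "
        "exactly: I don't know."
    ),
)

def ask(excerpt: str, question: str) -> str:
    """Ask one question about one record excerpt, allowing abstention."""
    response = model.generate_content(
        f"Excerpt:\n{excerpt}\n\nQuestion: {question}"
    )
    return response.text

if __name__ == "__main__":
    print(ask("The memo is dated March 4, 1952.", "Who signed the memo?"))
    # A faithful model should print: I don't know
```

Note that a system instruction like this only encourages abstention; it doesn't guarantee it, which is exactly the reliability question at issue.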
Truckload of money? Bosses fired? Something else? (Score:4, Interesting)
I really have to wonder what triggered the complete turnaround in policy. Going from "unacceptable security risk" and "cannot be relied upon" to "big plans" and "essential to how we conduct our work" would require some massive changes.
The only two explanations I can think of are a truckload of money that allowed the business team to override the technical team, or some people who were focused on the technical risk getting sacked. I don't see either in the headlines, so I wonder what was happening behind the scenes.
Re: (Score:2)
Maybe they learned what the tool actually can do and found out their initial concerns were overstated?
Re: (Score:3)
Note that this is often a marketing trick too.
We have an executive, and I don't think he sees benefit in throwing his main job under the bus to appease a vendor, at least at a price point that would be worthwhile to the vendor.
So he came in and seemingly relayed the pitch, then collected generally negative feedback about how unreliable the output was, and sent out a broad mail acknowledging that it didn't work, based on everyone's feedback.
Then a couple weeks later, he had seen the light after reviewing demo material
Re: (Score:3)
The answer likely is: "Truckload of money, and by the way, do the thing we asked you to do. Find all the wrong-think you can in our National Archive and 'correct' it."
I don't see a good outcome from this. Best case, our tax money is wasted. Worst case, they rewrite or otherwise shame, ridicule, and "community note" history to suit the current whim.
The pandering, it's going to kill this nation.
Re: (Score:2)
No, stupidly, the truckload of money is likely going the other way, from taxpayers to Google, not from Google to government employees.
If you've paid attention to the normie net in the past little while, there's a roughly even split between people who get what the "new" AI does through metaphors like "fancy autocorrect" and people who think "AI is here and can do anything! Look at this picture it made!"
I think we technical folks lean disproportionately toward the skeptical side. Government managers who ma
Re: (Score:3)
When the National Archive is working for the people, AI is an unnecessary and unreliable security risk. When they're working for Google, AI is essential to operations.
Like all public-private partnerships, the sole purpose is to funnel as much taxpayer money as possible into corporate coffers. This won't change until we get money out of politics.
Re: (Score:3)
> Going from "unacceptable security risk"
For this I suspect it's the difference between users using their personal ChatGPT accounts vs NARA using commercial Gemini. There are much tighter policies when you pay for enterprise access vs some Joe Schmoe personal or free account.
> and "cannot be relied upon"
Depends on what they are using it for. With an enterprise deployment, NARA is in control of how the model is used and what it is being used for, versus, again, some personal account. If they are using it to answer common operational questions, then it could do fine. If they are trying to
Re: (Score:2)
There's also the difference between something operating on a public cloud versus a private government cloud, which should be isolated.
Re: (Score:2)
Based on the way my current CEO talks about AI, I think that lots of private and public organizations are really optimistic that AI can lower labour costs.
It's not even necessarily about replacing head count with automation; it's just as much about solving hard problems and making the existing head count more productive through AI.
For example, our CEO is convinced that within a year or two we'll be able to use AI to find the root cause of bugs in highly complex systems.
Do I share the optimism?
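For a concrete picture of what that kind of AI-assisted root-cause hunting might look like, here is a toy sketch, again using Google's google-generativeai SDK. The model name, prompt, and sample inputs are all assumptions for illustration; the output would be a hypothesis to verify, not a diagnosis:

```python
# Toy sketch of LLM-assisted root-cause triage: hand the model a stack trace
# plus recent log lines and ask for a ranked hypothesis. The prompt, model
# name, and inputs are illustrative assumptions; treat the output as a hint
# to verify, not a diagnosis.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

def triage(stack_trace: str, log_tail: str) -> str:
    """Return the model's best-guess root cause with cited evidence."""
    prompt = (
        "Given this stack trace and log tail, list the most likely root "
        "cause first, cite the specific lines that support it, and state "
        "what you would check next.\n\n"
        f"Stack trace:\n{stack_trace}\n\nLogs:\n{log_tail}"
    )
    return model.generate_content(prompt).text

if __name__ == "__main__":
    trace = "KeyError: 'user_id'\n  at handlers/session.py:88 in load_session"
    logs = "WARN cache miss for session abc123\nERROR session payload empty"
    print(triage(trace, logs))
```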