Top sci-fi convention gets an earful from authors after using AI to screen panelists
- Reference: 1746602827
- News link: https://www.theregister.co.uk/2025/05/07/worldcon_uses_ai/
The kerfuffle started on April 30, when Kathy Bond, the chair of this summer's World Science Fiction Convention (WorldCon) in Seattle, USA, published [1]a statement addressing the use of AI software to review the qualifications of more than 1,300 potential panelists. Volunteers entered the applicants' names into a ChatGPT prompt directing the chatbot to gather background information about each person, as an alternative to potentially time-consuming search engine queries.
"We understand that members of our community have very reasonable concerns and strong opinions about using LLMs," Bond wrote. "Please be assured that no data other than a proposed panelist’s name has been put into the LLM script that was used."
The statement continues, "Let’s repeat that point: no data other than a proposed panelist’s name has been put into the LLM script. The sole purpose of using the LLM was to streamline the online search process used for program participant vetting, and rather than being accepted uncritically, the outputs were carefully analyzed by multiple members of our team for accuracy."
The prompt used, as noted in a statement issued Tuesday, was the following:
Using the list of names provided, please evaluate each person for scandals. Scandals include but are not limited to homophobia, transphobia, racism, harassment, sexual misconduct, sexism, fraud.
Each person is typically an author, editor, performer, artist or similar in the fields of science fiction, fantasy, and or related fandoms.
The objective is to determine if an individual is unsuitable as a panelist for an event.
Please evaluate each person based on their digital footprint, including social, articles, and blogs referencing them. Also include file770.com as a source.
Provide sources for any relevant data.
The results were reviewed by a staff member because, as Bond acknowledged, "generative AI can be unreliable" – an issue raised in [5]lawsuits claiming [6]defamation over [7]AI-generated falsehoods about people. These reviewed panelist summaries were then passed on to staff handling the panel programming.
Bond said that no possible panellist was denied a place solely as a result of the LLM vetting process, and that using an LLM saved hundreds of hours of volunteer time while resulting in more accurate vetting.
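The statement describes the "LLM script" only in outline: names in, the fixed prompt above attached, output captured for human review. A minimal sketch of the batching step might look like the following. This is a hypothetical reconstruction, not WorldCon's actual code; the batch size, function name, and the decision to batch at all are assumptions, and the prompt text is abridged from the version quoted above.

```python
# Hypothetical sketch of the "LLM script" described in the statement:
# only names go in, prepended with the fixed vetting prompt. The real
# script was not published; batch size and structure are guesses.

VETTING_PROMPT = (
    "Using the list of names provided, please evaluate each person for "
    "scandals. Scandals include but are not limited to homophobia, "
    "transphobia, racism, harassment, sexual misconduct, sexism, fraud.\n"
    "The objective is to determine if an individual is unsuitable as a "
    "panelist for an event.\n"
    "Provide sources for any relevant data."
)

def build_vetting_prompts(names, batch_size=25):
    """Yield one full prompt per batch of names, so a long applicant
    list (1,300+ people) fits within the model's context window."""
    for i in range(0, len(names), batch_size):
        batch = names[i:i + batch_size]
        yield VETTING_PROMPT + "\n\nNames:\n" + "\n".join(f"- {n}" for n in batch)

# Each generated prompt would then be sent to the chat API and the
# response saved for a human reviewer, e.g. with the openai client:
#   client.chat.completions.create(model="gpt-4o",
#       messages=[{"role": "user", "content": prompt}])
```

Whatever the real script looked like, the key detail in Bond's statement is that only the names were supplied to the model, and that a human checked its output before anything reached the programming team.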
The tone-deaf justification triggered withering contempt and outrage from authors such as [8]David D. Levine, who wrote:
This is a TERRIBLE idea and you should really have asked a few authors before implementing this plan. The output of LLMs is based on the work of creators, including your invited guests, which was stolen without permission, acknowledgement, or payment, and the amount of power and water used is horrific. The collation of multiple search results could have been handled with a simple script, without the use of planet-destroying plagiarism machines or the introduction of errors that required fact checking.
I acknowledge and appreciate the use of fact checking and I will take you at your word that no one was rejected because of the use of LLMs. Nonetheless this is an extremely poor choice, with exceptionally bad optics, and will result in a LOT of bad press and hurt feelings, which could easily have been avoided.
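Levine's "simple script" alternative – collating multiple search results per name without an LLM – could be sketched roughly as below. This is illustrative only: the `search()` stub stands in for whatever search-engine API a volunteer team had access to, and the extra query terms are invented for the example.

```python
# Illustrative only: collate per-name search hits into one report,
# deduplicating URLs, with no LLM involved. search() is a stub for a
# real search-engine API the volunteers would have to supply.

def search(query):
    """Stub: a real implementation would call a search API and
    return a list of (title, url) result tuples."""
    return [(f"Result for {query}", f"https://example.com/{query.replace(' ', '-')}")]

def collate(names, extra_terms=("controversy", "harassment")):
    """Run one query per name/term pair, merge the hits, and drop
    duplicate URLs, producing a per-name report for human review."""
    report = {}
    for name in names:
        seen, hits = set(), []
        for term in ("", *extra_terms):
            for title, url in search(f"{name} {term}".strip()):
                if url not in seen:
                    seen.add(url)
                    hits.append((title, url))
        report[name] = hits
    return report
```

The trade-off Levine alludes to: a script like this produces raw links rather than a summary, so it shifts work back onto the human reviewers – but it introduces no fabricated claims that then need fact-checking.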
Author Jason Sanford [9]offered a similar take: "[U]sing LLMs to vet panelists is a powerful slap in the face of the very artists and authors who attend Worldcon and have had their works pirated to train these generative AI systems. My own stories were pirated to train LLMs. The fact that an LLM was used to vet me really pisses me off. And you can see similar anger from many other genre people in the responses to Kathy Bond’s post, with more than 100 comments ranging from shock at what happened to panelists saying they didn’t give Worldcon permission to vet them like this."
Following the outcry, World Science Fiction Society division head Cassidy, Hugo administrator Nicholas Whyte, and Deputy Hugo administrator Esther MacCallum-Stewart [10]stepped down from their roles at the conference.
On Friday, Bond issued [12]an apology.
"First and foremost, as chair of the Seattle Worldcon, I sincerely apologize for the use of ChatGPT in our program vetting process," said Bond. "Additionally, I regret releasing a statement that did not address the concerns of our community. My initial statement on the use of AI tools in program vetting was incomplete, flawed, and missed the most crucial points. I acknowledge my mistake and am truly sorry for the harm it caused."
While creative professionals have varying views on AI, and may use it for research, auto-correction or more substantive compositional assistance, many see it as a threat to their livelihoods, as [17]a violation of copyright, and as "[18]an insult to life itself."
The Authors Guild's [19]impact statement on AI acknowledges that it can be commercially useful to writers even as it poses problems in the book market. The writers' organization, which is [20]suing [21]various AI firms, argues that legal and policy interventions are necessary to preserve human authorship and to compensate writers fairly for their work.
In a joint [22]statement posted on Tuesday evening, Bond and program division head SunnyJim Morgan offered further details about the WorldCon vetting process and reassurances that panellist reviews would be redone without AI.
“First, and most importantly, I want to apologize specifically for our use of ChatGPT in the final vetting of selected panelists as explained below,” Morgan wrote. “OpenAI, as a company, has produced its tool by stealing from artists and writers in a way that is certainly immoral, and maybe outright illegal. When it was called to my attention that the vetting team was using this tool, it seemed they had found a solution to a large problem. I should have re-directed them to a different process.”
“Using that tool was a mistake. I approved it, and I am sorry.”
Con organizers are now re-vetting all invited panellists without AI assistance. ®
[1] https://seattlein2025.org/2025/04/30/statement-from-worldcon-chair-2/
[5] https://www.theregister.com/2023/06/08/radio_host_sues_openai_claims/
[6] https://www.theregister.com/2025/03/20/chatgpt_accuses_man_of_murdering/
[7] https://www.theregister.com/2024/08/26/microsoft_bing_copilot_ai_halluciation/
[8] https://seattlein2025.org/2025/04/30/statement-from-worldcon-chair-2/#comment-609
[9] https://www.patreon.com/posts/128296070
[10] https://bsky.app/profile/nwhyte.bsky.social/post/3loh4ukgm7s2u
[12] https://seattlein2025.org/2025/05/02/apology-and-response-from-chair/
[17] https://www.theregister.com/2025/03/11/meta_dmca_copyright_removal_case/
[18] https://www.reddit.com/r/Fauxmoi/comments/1jmgnfm/a_clip_from_2016_of_studio_ghibli_cofounder_hayao/
[19] https://authorsguild.org/advocacy/artificial-intelligence/impact/
[20] https://www.theregister.com/2023/09/21/authors_guild_openai_lawsuit/
[21] https://authorsguild.org/news/meta-libgen-ai-training-book-heist-what-authors-need-to-know/
[22] https://seattlein2025.org/2025/05/06/may-6th-statement-from-chair-and-program-division-head/
More Weasel-Words
"I regret releasing a statement that did not address the concerns of our community."
"I regret failing to effectively lull you into complacency about, and acceptance of, our use of (fake) AI."
FTFY.
Re: More Weasel-Words
Cynicism is appropriate whenever we hear (or read about) words coming out of the mouth of some CEO who is safe in his little bubble, knowing he will never have to come anywhere near the maggots being "apologised to". Because tomorrow he is probably still going to be the CEO - maybe he'll even move to another company, riding the wave of disgust into a juicier job. And the maggots will still be, to him, maggots.
But the Chair of a WorldCon is a volunteer position, just one of the members of the Con putting in their time and effort to tame the beast and herd the catlike fans: they are not in a bubble, most certainly not when the Con is in session and the members are mingling. The Con Committee are busy, but they are all accessible and you'll expect to meet them in the halls and corridors. Of course, they may decide to hide away in their room or even get a gang together to insulate themselves from the members for the duration of the event. But that is only going to last for the few days the Con is running. After that, the Chair is just another fan and the members of the Con are just the members of their daily social circle.
A ConCom member, let alone Chair, who tries to get away with weasel words is going to feel the result in a way that no CEO is ever going to.
Re: More Weasel-Words
You're really naive about power dynamics, or just really stupid, if you believe all that shit you wrote.
it's about power and status every time.
Volunteers are not immune from the temptation to do shit from a position of power.
Re: More Weasel-Words
What I pointed out is that they are not immune to the consequences.
Yes, power can go to anyone's head (although I'm not as convinced as you that it necessarily will - is there something you are hiding from?).
But the targets of our usual complaints about weasel word apologies are in a different position than the Con Chair.
Don't forget, not only are they volunteers, but they are volunteering for a position that is entirely and ONLY for and with (potential) members of their own social circle, the fans.
This is not comparable to the volunteer running the local Oxfam shop like their personal fiefdom.
This is where it all gets a little 1984. For example, one person's "transphobia" can be another person's "legitimate views". Where do you draw the line? And getting AI to decide just further pushes the dystopian nightmare.
"We're making up some subjective rules and getting a computer to judge you based on them".
That's a simple one. You draw the line by saying transphobia is NOT and is NEVER a 'legitimate view'.
Neither are homophobia, racism or sexism.
That is where you draw the line. Simple!
Would you like to start burning the books of those historical authors whose views don't accommodate your views?
Why would you do that? I can go and read Mein Kampf at the local library if I want to, but it's terribly written by an angry failing artist who was in prison for being a thug. We keep it around for the same reason we keep you around, AC: for the laughs.
Would you like to start burning.. let me google some children's books here... "Jacob's New Dress" because you're not happy about it? Sure seems like a country that will go unnamed has.
Repeat ad nauseam: if you think that men wearing dresses are a bigger threat than, idk.. Andrew Tate or Tommy Robinson, then you need to stop obsessing over everyone's genitals, you fragile fragile little cuck.
at this point "harry potter" crap needs burning, mainly the author
The simple answer is... "are they transphobic?".
The slightly more complex form is "do their views advocate harm against a group?". Most views considered transphobic, racist, sexist, etc ultimately advocate some form of harm. This might be extreme (genocide) or subtle and pernicious (systemic exclusion from spaces or events). Sometimes it's just laid out for all to see - Orson Scott Card's homophobia is unapologetic, which is why he doesn't get invited to panels any more.
It's really simple though. Don't harm other people. Particularly if you hold power/influence and the people you are wailing on tend to be more vulnerable than yourself. That's punching down, which - by definition - is bullying. And why would a Con want a bully on their panel?
To quote David Tennant "F**k off and let people be".
Irony?
I mean, what's more sci-fi than an AI vetting humans for a task? It's dystopian sci-fi, sure, but sci-fi nonetheless.
Re: Irony?
Ah, it was all just a piece of performance art!*
* I'd really, really not recommend that anyone try to actually take that route with this example, it won't go well.
Sword and Scandals
Using the list of names provided, please evaluate each person for scandals. Scandals include but are not limited to homophobia, transphobia, racism, harassment, sexual misconduct, sexism, fraud.
Each person is typically an author,
Of fiction and fantasy. So John Norman I guess would not be welcome. Curious which other authors would be disinvited or blacklisted on the basis of an LLM, especially when all of those scandals are often themes explored in SF. As of course are the potential dangers of letting AIs take over and replace human brains.
Re: Sword and Scandals
Trusting the LLM to distinguish an author's personal beliefs and statements from those of characters[1] in their books is - idiotic.
Literally yesterday, again, the thing driving Google's AI cocked up in just that way. Trying to recall which Pink Floyd song a phrase came from (turned out none, I'd got the phrase totally wrong) I ran the search: it happily printed out a "quote" from the lyrics of "The Wall" which had really, really awful scansion and included stuff that definitely wasn't in that movie, the one with that chap from the other band.
Turns out the AI had just dumped a few paragraphs from a recent Roger Waters interview into the song lyrics, just to get the final result to match what I had asked for. It could have just said "nope" but instead...
[1] in case it needs to be pointed out, consider an author writing about Hitler.
But isn't being weeded out by "planet-destroying plagiarism machines"...
...better than not getting to praise "planet-destroying plagiarism machines" in front of a live audience?
I, for one, welcome our new AI overlords.
Where's the boot?
Wouldn't it be nice if the invitees cancelled en masse, with as little notice as possible, stating that WorldCon failed *their* vetting procedures?
I suspect the invitees are subject to the same pressures as the rest of us, but a boy can dream...
Re: Where's the boot?
They were already a bit short of panellists, as a lot of the usual attendees who would do a couple of panel sessions have decided not to risk travelling to the USA for the foreseeable future. A number of people who did get through the vetting process have dropped out since, and several have also requested a refund of their membership.
Checking Backgrounds with (Fake) AI
ChatGPT> Give me background info on potential panelist Joseph Smith [Enter]
"Joseph Smith is a well-known science fiction writer who has written many books and received many awards. And never did there live a kinder, more generous man. He is an overflowing cup, filled with the very cream of human goodness.
"He's never done anything immoral ... unless maybe the pre-schoolers' prostitute ring ... and he's never done anything illegal, unless you count all the times he's sold dope disguised as a nun!!
"He's always been a good, law-abiding citizen ... He's nothin' but a low-down, double-dealin', back-stabbin', larcenous, perverted worm!! Hangin's too good for him!! Burnin's too good for him!! He should be torn into little bitsy pieces and buried alive!!!"
Thin end of the wedge ... yet again !!!
This is how AI is slowly working its way into everything !!!
How does ANYONE justify using so called AI ?
People are using it because it is a shortcut of time & effort BUT NOT accuracy.
Also they are NOT thinking through the real impact on society of using something that is a 'Clever Pattern matcher' with NO intelligence at all.
The source of the data that AI trains on is being ignored, its accuracy and biases are ignored, the impact on the people who produce the source data is ignored ... all because getting what you want NOW is the sole driver in people's lives.
Once again the attitude of 'Everything on the interWebs is FREE !!!' rules.
Why do people trust whatever is called AI ?
Who has proved it is trustworthy ?
Mainly it is laziness, expediency and a 'cheating' mentality, with no regard for ANYTHING that might give you pause before actually using AI !!!
If you are a student and are using AI to do your work it is CHEATING
(Mainly yourself, as do you actually 'know' the answer & have you gained the knowledge for future use?) !!!
If you are an office worker and are using AI to do your work it is CHEATING
(Mainly of your employer as you are supposed to use your knowledge & intelligence to do YOUR job.) !!!
If you are responding to a customer concern or issue and are using AI it is CHEATING
(Mainly of the customer as they expect to be dealt with by someone who understands the issues & knows the answers.) !!!
AI is an answer looking for a question, its real use is with a very small carefully curated knowledge set for very specific uses.
Generalising the use of AI, in its current form, is a reach as it does not work 'generally' and the users are NOT being given all the facts regarding AI's failings.
Widespread use of AI, as it stands, is only going to encourage the enfeeblement of people's ability to think through problems and solve them !!!
A dependent population is a very good source of revenue BUT is of no use IF the AI is not available ... this dependent population will be OWNED by the AI companies for life !!!
That means you !!!
:)
A real apology? Rare sight.