News: 0181103104


Canada's Immigration Rejected Applicant Based On AI-Invented Job Duties (thestar.com)

(Wednesday March 25, 2026 @06:00PM (BeauHD) from the immigration-by-chatbot dept.)


New submitter [1]haroldbasset writes:

> Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department's AI assistant had [2]invented that work experience. She has been working in Canada as a health scientist -- she has a Ph.D. in the immunology of aging -- but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting."

"It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision."

The applicant's lawyer was shocked at "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here."

The applicant's refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.



[1] https://slashdot.org/~haroldbasset

[2] https://www.thestar.com/news/canada/canada-rejected-her-permanent-residence-application-her-job-duties-were-made-up--by-immigrations-ai-reviewer/article_3f1ea5be-0b3d-4541-ac00-0a1b8484d877.html



Re: (Score:3)

by dskoll ( 99328 )

I suggest you visit Toronto some time if you think Canada is homogeneous...

AI needs to die (Score:2)

by dskoll ( 99328 )

Not all AI. Some is good, like image-recognition, audio transcription, etc. But these LLMs and GPT bullshitters need to die.

Proof read! (Score:2)

by wakeboarder ( 2695839 )

AI won't do it for you.

Human Nature vs Policy (Score:5, Interesting)

by Brain-Fu ( 1274756 )

This business of having an AI do the legwork and then having a human review it and make a final decision keeps going badly. Humans are intrinsically lazy and the moment they get a few good results from the AI they are going to stop doing the validation and start rubber-stamping. It doesn't matter if policy disallows this, they will do it anyway. It doesn't matter if the human really cares; they won't be able to help themselves. Human laziness is too deep an instinct.

It's the same with the self-driving cars where a human is required to stay at the wheel and alert so they can manually override the instant the AI starts doing something wrong. Humans CAN'T keep that up. It's not possible. The brain just doesn't work that way. The mind knows that it isn't doing the work, and it will get bored and lose focus or just nod off.

Everyone is SO eager to have it both ways: "an AI does all the work but a human verifies it, so we know it's good." We just can't have it both ways. Once the AI does the work, the human stops verifying. That is how and why things went wrong here; it is how and why things have gone wrong for several law firms that submitted hallucinated historical court rulings; and it is how and why things will continue to go badly across all industries that adopt AI in such a role.

"Human in the loop" is really easy to say. Much harder to actually do reliably.

Main problem with AI (Score:2)

by Luckyo ( 1726890 )

The main problem with AI in these cases is that it is so good that people stop checking it.

Even when they're explicitly employed to do so, as was the case here: "It's been right the last ten thousand cases I checked; it's right here too."

Also several cases of face recognition software (Score:3)

by rsilvergun ( 571051 )

Leading to arrests. So far it's been white people, so the victims are at least alive. One of them was poor and couldn't afford a lawyer, so they spent six months in jail before their public defender got enough time to prove they weren't the person reported.

Dumb people think the computers are smarter than they are. And so they think the computer is infallible because dumb people already think they are pretty smart.

And the entire country of America right now is being run by dumb people, top to bottom. Hell, with the police we intentionally screen out intelligent people, because the job is generally fairly boring and people who aren't dumb have a bad habit of quitting after a few months of expensive training. I've had a few buddies who thought being a cop might be an easy job with decent benefits, so they went in for it and couldn't pass the psych exam, not because they were crazy but because it said they were too smart to be a cop.

Self-Review (Score:2)

by im_thatoneguy ( 819432 )

I feel like a good idea for this sort of thing, if it's going to be deployed, is to include the applicant in the loop.

"Hi, your application will be rejected because:

* You list your qualifications as an electrician, not a medical expert.

If anything here is in error and you want to continue with your submission, please explain the error below and click "Contest," and someone will be sure to review your application more carefully."

Even without AI it would be nice for job applications to work this way.
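The contest-and-review flow the comment sketches could look something like this. This is a minimal illustration only; the names (`Application`, `draft_rejection`, `contest`) are hypothetical and don't correspond to any real immigration system:

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    """A hypothetical application record with the AI's summary attached."""
    applicant: str
    claimed_duties: str
    ai_summary: str
    contested: bool = False
    notes: list = field(default_factory=list)

def draft_rejection(app: Application) -> list[str]:
    """List the reasons the applicant would see before a final decision."""
    return [
        f"Summarized duties {app.ai_summary!r} "
        f"do not match claimed duties {app.claimed_duties!r}"
    ]

def contest(app: Application, explanation: str) -> str:
    """Record the applicant's objection and route the file to a human."""
    app.contested = True
    app.notes.append(explanation)
    return "queued for manual review"

app = Application("J. Doe", "health scientist", "electrician")
for reason in draft_rejection(app):
    print(reason)
print(contest(app, "I am a health scientist, not an electrician."))
```

The key design point is that a hallucinated summary gets surfaced to the one person guaranteed to notice it (the applicant) before it can sink the application, rather than relying on a bored reviewer to catch it.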
