GitHub Users Angry at the Prospect of AI-Written Issues From Copilot (github.com)
- Reference: 0177887059
- News link: https://developers.slashdot.org/story/25/06/01/0049240/github-users-angry-at-the-prospect-of-ai-written-issues-from-copilot
- Source link: https://github.com/orgs/community/discussions/159749
> Describe the issue you want and watch as Copilot fills in your issue form... Skip lengthy descriptions — just upload an image with a few words of context.... We hope these changes transform issue creation from a chore into a breeze.
But in the GitHub Community discussion, these announcements prompted a request: "[3] Allow us to block Copilot-generated issues (and Pull Requests) from our own repositories."
> This says to me that GitHub will soon start allowing GitHub users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct. Filtering out AI-generated issues/PRs will become an additional burden for me as a maintainer, wasting not only my time, but also the time of the issue submitters (who generated "AI" content I will not respond to), as well as the time of your server (which had to prepare a response I will close without response).
>
> As I am not the only person on this website with "AI"-hostile beliefs, the most straightforward way to avoid wasting a lot of effort by literally everyone is if Github allowed accounts/repositories to have a checkbox or something blocking use of built-in Copilot tools on designated repos/all repos on the account.
1,239 GitHub users upvoted the comment — and 125 comments followed.
"I have now started migrating repos off of github..."
"Disabling AI generated issues on a repository should not only be an option, it should be the default."
"I do not want any AI in my life, especially in my code."
"I am not against AI necessarily but giving it write-access to most of the world's mission-critical code-bases including building-blocks of the entire web... is an extremely tone-deaf move at this early-stage of AI. "
One user complained there was no "visible indication" of the fact that an issue was AI-generated "in either the UI or API." Someone suggested a Copilot-blocking Captcha test to prevent AI-generated slop. Another commenter even suggested naming it "Sloptcha".
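Since the API exposes no official AI-origin flag, any filter today would have to lean on heuristics. A minimal sketch of the auto-triage maintainers are describing, assuming PyGithub and a hypothetical Copilot bot login (both the login and the policy text are assumptions, not documented conventions):

```python
# Minimal sketch of auto-triage for machine-generated issues.
# GitHub's API exposes no official "AI-generated" flag, so the bot
# login below is a hypothetical marker, not a documented convention.
import os
from github import Github  # PyGithub

SUSPECT_AUTHORS = {"copilot[bot]"}  # hypothetical bot login

def triage_new_issue(repo_full_name: str, issue_number: int) -> None:
    gh = Github(os.environ["GITHUB_TOKEN"])
    repo = gh.get_repo(repo_full_name)
    issue = repo.get_issue(number=issue_number)
    if issue.user.login in SUSPECT_AUTHORS:
        issue.create_comment(
            "This project does not accept machine-generated issues; "
            "please file a hand-written report instead."
        )
        issue.edit(state="closed")
```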
And after more than 10 days, someone noticed the "Create New Issue" page [4] no longer seemed to offer the option to "Save time by creating issues with Copilot."
Thanks to long-time Slashdot reader [5] jddj for sharing the news.
[1] https://github.com/orgs/community/discussions/159749#discussioncomment-13205797
[2] https://github.blog/changelog/2025-05-19-creating-issues-with-copilot-on-github-com-is-in-public-preview/
[3] https://github.com/orgs/community/discussions/159749
[4] https://github.com/orgs/community/discussions/159749#discussioncomment-13299840
[5] https://www.slashdot.org/~jddj
Article misses the main point (Score:3)
If 10,000 fully automated, AI-generated pull requests are sent to GitHub repository owners and enough of them are accepted, there will be a commercial marketing blitz claiming that AI can find and 'fix' X percent of your code issues.
Ultimately, there will be an 'AI linter as a feature' used to block pull requests when they don't 'fix' something that AI flags as 'not good enough'.
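A minimal sketch of what such a gate could look like in CI; the `ai_lint` module, `scan_diff`, and the finding fields are entirely hypothetical names invented here for illustration:

```python
# Hypothetical CI gate: exit nonzero (blocking the merge) when the
# imagined "ai_lint" scanner still flags something in the diff.
# ai_lint, scan_diff, and the finding fields are all invented names.
import sys
from ai_lint import scan_diff  # hypothetical AI-review library

def main(diff_path: str) -> int:
    with open(diff_path) as f:
        findings = scan_diff(f.read())
    blocking = [x for x in findings if x.severity == "not good enough"]
    for x in blocking:
        print(f"{x.file}:{x.line}: {x.message}")
    return 1 if blocking else 0  # nonzero exit fails the check

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```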
And free paid expert training data for AI (Score:2)
The accepted or rejected AI-generated pull requests will go into a bucket of AI training data to help make 'better' AI-based code suggestions next time.
Free training data for AI models....
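A sketch of how such a bucket could be assembled, assuming merged PRs count as accepted and everything else as rejected; the record layout and the `ai_authored` marker are invented for illustration:

```python
# Sketch: turn the outcomes of AI-authored pull requests into labeled
# training records. The ai_authored marker is an assumption; GitHub
# exposes no such flag today.
from dataclasses import dataclass

@dataclass
class PullRecord:
    diff: str
    merged: bool
    ai_authored: bool

def to_training_data(prs: list[PullRecord]) -> list[dict]:
    return [
        {"input": pr.diff, "label": "accepted" if pr.merged else "rejected"}
        for pr in prs
        if pr.ai_authored  # keep only the machine-generated submissions
    ]
```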
Now consider chapter books (a sketch of the ranking loop follows the list):
1. AI writes the first chapter and the next 6 chapters
2. Measure which books get readers to engage longer by reading more chapters in a single reading session
3. Rank up those books
4. Rank down other books which did not have repeat readers or readers reading multiple chapters in a d
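A minimal sketch of that ranking loop, with an invented engagement metric (mean chapters read per session) standing in for whatever a real platform would measure:

```python
# Sketch of the engagement-ranking loop in steps 1-4. The metric
# (mean chapters read per session) is an invented stand-in.
from statistics import mean

def rank_books(sessions: dict[str, list[int]]) -> list[str]:
    """sessions maps book title -> chapters read in each session."""
    score = {book: mean(reads) for book, reads in sessions.items()}
    return sorted(score, key=score.get, reverse=True)  # best first

print(rank_books({
    "Book A": [1, 7, 6],  # readers keep going: rank up
    "Book B": [1, 1],     # readers bail early: rank down
}))  # ['Book A', 'Book B']
```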
Wouldn't it be at least human-directed? (Score:1)
I have not looked into this myself yet, but in order to write anything, GPT needs a direction to go in... so I assume the way it would work is that you'd write a summary of the issue and ChatGPT would fill out the details?
Maybe add an indicator that the issue was written by ChatGPT, along with a link to the prompt that generated the summary. Then you'd have the original intent of the issue.
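A sketch of that indicator, assuming it's just a labeled footer appended to the issue body (the format is an invented convention, not a GitHub feature):

```python
# Sketch: append a machine-generation notice and the originating
# prompt to an issue body. The footer format is invented, not any
# GitHub feature.
def with_provenance(body: str, prompt: str, prompt_url: str) -> str:
    footer = (
        "\n\n---\n"
        "Generated by an AI assistant.\n"
        f"Original prompt: {prompt!r}\n"
        f"Prompt permalink: {prompt_url}"
    )
    return body + footer
```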
I can kind of see the point where people might want to block it altogether, but it does come off as a bit Luddite. But I can see
Re: (Score:1)
> Re:Wouldn't it be at least human-directed?
One would hope. However, lawyers have been filing false legal documents with the courts, citing made-up court cases, so I'm guessing not even lawyers check what's spewed up by AI.
the problem is (Score:1)
People do not understand how bad AI is. It's at best fit for fact-checking a Slashdot post, or for answering random questions like when the National League adopted the designated hitter.
If you have expertise in an area (or, like these idiot lawyers relying on ChatGPT, if you're EXPECTED to have expertise in an area), you cannot rely on AI, because the issues will be far too subtle for it to understand.
If you're at a top-tier professional firm, you will be told nearly at the beginning of your time there
Re: the problem is (Score:2)
That's only mostly true. An LLM on its own is like a decently intelligent guy at Starbucks: able to talk, but you have no way of knowing which of the things he says are true.
Now, if you use an LLM with a huge context window and drop a technical manual into that context, it's like you handed that guy at Starbucks the book and asked him to look stuff up in it. You can have a lot more confidence in his answers as they pertain to that book... but he's still going to be imperfect for anything that isn't in
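A minimal sketch of that "hand him the book" pattern, assuming the OpenAI Python client; the model name is a placeholder, and any chat API with a large context window would do:

```python
# Sketch: stuff a manual into the context so answers can be grounded
# in it. Assumes the openai client library; the model name is a
# placeholder for any large-context model.
from openai import OpenAI

def ask_about_manual(manual_text: str, question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system",
             "content": "Answer only from this manual:\n\n" + manual_text},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content
```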
Re: (Score:2)
A screenshot and a few words constitute an issue? Sure, I understand there is such a thing as too many useless words, which take too much of the developers' time to ingest. But with the edge cases I tend to find, I sometimes need more than a few words to explain all the steps.
And I have seen very vague, undescriptive issue reports that take as much or more developer time to investigate than a "wordy" one, and those tend to be quickly ignored by developers as well.
So my issue reports remain "wordy".