
GitHub previews Agentic Workflows as part of continuous AI concept

(2026/02/17)


Agentic workflows - where an AI agent runs automatically in GitHub Actions - are now in technical preview, following their introduction at the Universe event in San Francisco last year.

The workflow type is being developed by GitHub Next and Microsoft Research, and features sandboxed execution and a mechanism called Safe Outputs, which is intended to protect the agentic workflow from misuse.

The service is part of the continuous AI concept, also presented at Universe. According to principal researcher Eddie Aftandilian, speaking at the event, "we coined the term continuous AI to describe an engineering paradigm that we see as the agentic evolution of continuous integration."

An agentic workflow is defined in a markdown file and compiled to GitHub Actions YAML with the GitHub CLI (command line interface). The workflow is triggered by events, with developers able to choose one or more triggers, including new issues, new issue comments, pull requests and their comments, and new discussions. The actions to be taken by the agent are determined by prompt instructions, such as asking the agent to analyze issues, add labels, review pull requests, and output a structured report. The agent used can be GitHub Copilot, Claude Code, or OpenAI Codex.
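To make that concrete, here is a rough sketch of what such a workflow file (say, .github/workflows/issue-triage.md, a hypothetical path) might contain. It is illustrative only: the frontmatter keys (on, engine, safe-outputs) and their values are assumptions modelled on the gh-aw documentation, not details confirmed in this article.

    ---
    # Hypothetical frontmatter: trigger, agent choice, permitted outputs
    on:
      issues:
        types: [opened]     # run whenever a new issue is opened
    engine: copilot         # or claude, or codex
    safe-outputs:
      add-labels:
        max: 3              # the only write action this workflow may perform
    ---

    Read the newly opened issue, decide which of the repository's existing
    labels apply, and add at most three of them. Finish with a short
    structured report explaining your choices.

Compiled with the GitHub CLI (the docs suggest a subcommand along the lines of gh aw compile), the markdown file becomes a conventional GitHub Actions YAML workflow that invokes the chosen agent on each matching event.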

[1]According to the team, typical use cases for agentic workflows include triaging issues, updating documentation, identifying code improvements, monitoring test coverage and adding new tests, investigating continuous integration (CI) failures, and creating regular reports on repository health. GitHub states that agentic workflows make "entirely new categories of repository automation and software engineering possible," categories that could not be achieved without AI.

The new agentic workflows are not intended to replace traditional CI/CD (continuous integration and delivery) workflows, but to be used alongside them. The [2]FAQ notes that CI/CD needs to be deterministic, whereas agentic workflows are not. "If you use agentic workflows, you should use them for tasks that benefit from a coding agent's flexibility, not for core build and release processes that require strict reproducibility," it says.

Guardrails

Giving AI agents access to code repositories has obvious risks, particularly in the case of public repositories, where malicious prompts may be hidden in new issues, pull requests, or comments. To address this, there are guardrails which, GitHub claims, make its agentic workflows safer than simply running AI agent CLIs directly inside an Action. That approach "often grants these agents more permission than is required," the team said.

The [3]security architecture has several layers. Agentic workflows run in an isolated container, and the agent has read-only access to the repository. Access to the wider internet is restricted by a firewall and can be constrained to specified destinations. User content is sanitized before being passed to the agent. In addition, there is a [4]Safe Outputs subsystem where tasks that do write content run in separate permission-controlled jobs.
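As a sketch of how those layers might surface in a workflow file, the hypothetical frontmatter below declares the only write actions the workflow is permitted to perform and pins down its outbound network access; the safe-outputs and network key names are assumptions based on the gh-aw reference pages, not verbatim from this article.

    ---
    safe-outputs:
      add-comment:
        max: 1              # the agent may post at most one comment
      add-labels:
        max: 3              # and apply at most three labels
    network:
      allowed:
        - api.github.com    # firewall: outbound traffic limited to this host
    ---

Under this model the agent job itself runs read-only; anything it wants written back must match one of the declared outputs, which are applied by separate, permission-scoped jobs.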

The cost of an agentic workflow, as is often the case with AI workloads, is somewhat opaque. "Costs vary depending on workflow complexity," the FAQ states. The logs contain usage metrics and an audit command shows "detailed token usage and costs," according to the docs.
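For anyone trying to pin those numbers down, inspection might look something like the session below. The subcommand names are assumptions extrapolated from the docs' mention of logs and an audit command; check gh aw --help on a real installation before relying on them.

    # Hypothetical invocations of the gh-aw CLI extension
    gh aw logs issue-triage     # recent runs, with usage metrics in the logs
    gh aw audit 1234567890      # per-run token usage and cost breakdown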

Despite the security features, the documentation [5]warns that the product is in early development, may change significantly, and that even with careful supervision "things can still go wrong. Use it with caution, and at your own risk."

Nevertheless, security is a large part of this new GitHub feature and is unusually prominent in its presentation. Aftandilian said at Universe that the "agent can only do the things that we want it to do, and nothing else," a bold but welcome claim. ®




[1] https://github.blog/ai-and-ml/automate-repository-tasks-with-github-agentic-workflows/

[2] https://github.github.com/gh-aw/reference/faq/

[3] https://github.github.com/gh-aw/introduction/architecture/

[4] https://github.github.com/gh-aw/reference/safe-outputs/

[5] https://github.github.com/gh-aw/




"Computers may be stupid, but they're always obedient. Well, almost always."

-- Larry Wall (Open Sources, 1999 O'Reilly and Associates)