Overmind bags $6M to predict deployment blast radius before the explosion
(2025/09/16)
- Reference: 1758012310
- News link: https://www.theregister.co.uk/2025/09/16/overmind_interview/
Exclusive: How big could the blast radius be if that change you're about to push to production goes catastrophically wrong? Overmind is the latest company to come up with ways to stop the explosion before it happens.
CEO Dylan Ratcliffe is keen on the words "blast radius," which will be all too familiar to engineers looking glumly at their dashboards after a deployment doesn't go to plan.
[1]Dylan Ratcliffe (pic: Overmind)
[2]Overmind is all about predicting what a change may do to an environment, giving a picture of just how risky a given update might be.
The company has just raised $6 million in a seed round led by Renegade Partners. Not the vast sums raised by certain artificial intelligence companies, but still a chunk of money.
... we've got it to a point now where it's mandatory in quite a lot of our customers. So you are not allowed to deploy to production until this has done its analysis and you've addressed the output...
Predicting the impact of a change or a deployment is a thorny issue. Plenty of tools in the observability world will raise a warning once something has gone awry, but predicting trouble ahead of time is a harder problem. Many engineers will know the sinking feeling when things veer off course, or, perhaps just as bad, the paralysis of a review meeting where nobody wants to take responsibility for pushing go.
Recently, engineers have come to lean more heavily on AI tools, both for writing code and for asking "what could possibly go wrong?"
Ratcliffe explains the Overmind approach thus. "Rather than just taking a Terraform change, and guessing at what could potentially happen – you can do that very easily with AI, it will happily guess at all kinds of outcomes and produce pretty non-useful output – what we're doing is overlaying the change you're making over your actual production infrastructure in real time."
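The general idea can be sketched in a few lines of Python: take the machine-readable plan Terraform already emits and walk a dependency graph built from the live environment, rather than guessing from the code alone. To be clear, this is an illustration of the concept, not Overmind's implementation; the plan file, resource addresses, and dependency graph below are all hypothetical.

```python
# Illustrative sketch only, not Overmind's implementation. Assumes a JSON plan
# produced by `terraform show -json plan.out` and a pre-built map of the LIVE
# environment listing, for each resource, the resources that depend on it.
import json
from collections import deque

def changed_addresses(plan_path):
    """Addresses of resources the plan will create, update, delete, or replace."""
    with open(plan_path) as f:
        plan = json.load(f)
    return {
        rc["address"]
        for rc in plan.get("resource_changes", [])
        if set(rc["change"]["actions"]) - {"no-op", "read"}  # any real action
    }

def blast_radius(changed, dependents):
    """Breadth-first walk of the live dependency graph, outward from the change."""
    affected, queue = set(changed), deque(changed)
    while queue:
        for downstream in dependents.get(queue.popleft(), []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

# Hypothetical live graph: a subnet change ripples to the database, then the app.
live_dependents = {
    "aws_subnet.db": ["aws_db_instance.main"],
    "aws_db_instance.main": ["aws_ecs_service.app"],
}
print(blast_radius(changed_addresses("plan.json"), live_dependents))
```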
Overmind isn't alone in the change impact analysis world. [3]Puppet, for example, sells impact analysis tooling to customers. Overmind's approach is to slot into an individual's or enterprise's CI/CD pipeline.
"We started off just working with Terraform and just with AWS," Ratcliffe tells The Register, "and we've been able to sort of validate the approach and make sure that it actually works and that we can produce predictions that are not only accurate, but compelling enough to change someone's behavior."
"It's no help if we drop a comment on a pull request, for example, that says, 'Hey, you're going to break something,' but it's so vague that they won't actually change – they won't actually stop pressing the button."
"And so we've been working on that, and we've got it to a point now where it's mandatory in quite a lot of our customers. So you are not allowed to deploy to production until this has done its analysis and you've addressed the output."
It all sounds splendid, but Ratcliffe notes that no two organizations necessarily share the same workflow. "There are no two people with the same workflow for deploying to production, it's a very personal thing. You don't just want to go in and be like 'You're doing it wrong' because this is stuff they've learned through years and have battle scars!"
Ratcliffe extends the same principle, fitting in with a workflow rather than dictating how things should work, to enterprises that already have standard operating procedures around deployment and change analysis. "You have to fit in," he said. "You can't expect them to change the SOP."
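Whatever the surrounding SOP looks like, the "mandatory" gate itself can be as blunt as a pipeline step that refuses to pass until every finding has been addressed. Here's a minimal sketch of such a gate, assuming a hypothetical JSON findings report; Overmind's actual report format isn't documented here.

```python
# Hypothetical CI gate: fail the pipeline while any analysis finding remains
# unacknowledged. The report schema and file name are invented for illustration.
import json
import sys

def gate(report_path="risk_report.json"):
    try:
        with open(report_path) as f:
            findings = json.load(f).get("findings", [])
    except FileNotFoundError:
        print("No risk analysis found; refusing to deploy.")
        return 1
    unaddressed = [item for item in findings if not item.get("acknowledged")]
    for finding in unaddressed:
        print(f"UNADDRESSED [{finding['severity']}] {finding['summary']}")
    return 1 if unaddressed else 0  # non-zero exit blocks the deploy stage

if __name__ == "__main__":
    sys.exit(gate())
```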
Returning to the theme of AI assistants, there is an increasing risk of engineers using the technology to generate perfect-looking infrastructure code that could have catastrophic consequences. Ratcliffe tells us about a customer with developers using Copilot to produce wonderful Terraform code, "but they don't understand the implications of their changes."
[4]HashiCorp speaks up about adjusting to life under IBM
[5]IBM likes HashiCorp, finally puts a $6.4B ring on it
[6]HashiCorp unveils 'Terraform 2.0' while tiptoeing around Big Blue elephant in the room
[7]OpenTofu hits version 1.8 with more crowd-pleasing features
"So if they propose a change that has a massive blast radius... they're not trying to prevent them from [suggesting the change], they're just trying to prevent them from actually clicking the button and deploying it."
"So Overmind, in that instance, is almost like a learning tool for developers who don't understand the implications of their actions."
The theory is that engineers should be allowed to make mistakes, but learn from them for next time, and before those mistakes go anywhere near production.
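A classic example of the kind of lesson involved: a one-line rename that Terraform can only satisfy by destroying and recreating the resource, database included. The plan's JSON output makes that detectable. The check below is an illustrative proxy for spotting a big blast radius, not Overmind's analysis, and the plan file name is made up.

```python
# Flag resources the plan will replace (destroy and recreate) or destroy
# outright. Generate the input with `terraform show -json plan.out > plan.json`.
import json

with open("plan.json") as f:
    plan = json.load(f)

for rc in plan.get("resource_changes", []):
    actions = set(rc["change"]["actions"])
    if {"delete", "create"} <= actions:   # a replacement, not an in-place update
        print(f"WARNING: {rc['address']} will be destroyed and recreated")
    elif actions == {"delete"}:
        print(f"WARNING: {rc['address']} will be destroyed")
```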
In the future, Ratcliffe wants to delve deeper into the realm of AI assistants and have scripts generated that take into account what could go wrong. "We could answer those questions against live data while [the assistant] was coming up with the Terraform code."
That said, Ratcliffe also accepts that a tool like Overmind should not be used as a Band-Aid to cover up development ills. "This is not a replacement for architecting your services in a way that makes them reliable," he says. "It's not a replacement for automated testing."
It is, however, a very useful tool when it isn't practical or possible to test down to exhaustive detail on an exact replica of a production environment. Ratcliffe points to NASA's Voyager probes as an extreme example of how a test environment can be quite different to the production experience.
"Test is like Voyager-in-a-rack – nothing ever gets bit-flipped. It's not getting blasted by cosmic rays. Whereas production is..."
Different.
"It just goes to show production is always harder than you anticipate, and you cannot replicate production."
"Even if you're going to be planning what something is going to do, you have to take into account not what production is supposed to look like, but what it looks like right now." ®
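That last point, planning against what production looks like right now rather than what the repo says it should, is at least partially checkable: query the live environment and diff it against the declared config. Below is a toy sketch using boto3; the security group ID and declared ports are invented, and real drift detection covers far more than open ports.

```python
# Toy drift check: compare the ports a config declares against the ports
# actually open on a live security group. Requires AWS credentials and boto3.
import boto3

DECLARED_PORTS = {443}              # what the code says should be open (hypothetical)
GROUP_ID = "sg-0123456789abcdef0"   # hypothetical security group ID

ec2 = boto3.client("ec2")
group = ec2.describe_security_groups(GroupIds=[GROUP_ID])["SecurityGroups"][0]
live_ports = {rule["FromPort"] for rule in group["IpPermissions"] if "FromPort" in rule}

extra = live_ports - DECLARED_PORTS
if extra:
    print(f"Ports open in production but not in the code: {sorted(extra)}")
```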
[1] https://regmedia.co.uk/2025/09/12/dylan_ratcliffe.jpg
[2] https://overmind.tech/
[3] https://www.puppet.com/products/impact-analysis
[4] https://www.theregister.com/2025/06/05/hashicorp_ibm_hashidays/
[5] https://www.theregister.com/2025/02/28/ibm_hashicorp_deal_closing/
[6] https://www.theregister.com/2024/10/18/hashicorp_hashiconf_terraform_updates/
[7] https://www.theregister.com/2024/07/31/opentofu_version_18/
Comments

Brave new world - OhForF'
"There are no two people with the same workflow for deploying to production"
"developers using Copilot to produce wonderful Terraform code, 'but they don't understand the implications of their changes'"
So the problem is vibe-coding cowboys who don't understand what they are doing and can't be bothered to even follow a preset procedure for deploying to production. The solution obviously is using more AI - can't start doing the sensible thing and hire developers who know what they are doing.
"Overmind bags $6M"
My wife wants an Overmind bag now and wants to know what colours are available.
Icon: yes, she'll be wanting a new jacket to go with it ... and shoes ...
;-)