Webinar Replay | 60 minutes

Autonomous software agents, not human operators, are beginning to run attacks end to end: recon, exploitation, lateral movement, and exfiltration at machine speed. Detection and response windows are shrinking from weeks to minutes.

At the same time, AI-generated identities and deepfakes are eroding trust in voice, video, and text, turning routine business interactions into attack paths. For IT and security leaders, this isn’t a tooling problem. It’s an operating model problem.


Hear Bluewave Technology Group & Thrive Experts Discuss:

  • How autonomous, agentic AI is changing the attack lifecycle
    Why incident-centric detection and response alone is no longer sufficient.
  • Where machine-speed attacks expose today’s biggest blind spots
    The identity, edge, and operational pressure points attackers automate against first.
  • What “trust” means when humans aren’t always on the other end
    Practical thinking around identity, verification, and controls in a deepfake-driven world.
  • What a 2026-ready security operating model looks like
    How leaders are shifting toward continuous exposure management, with clearer metrics and executive narratives.

Notable Quotes


What fundamentally changes when attacks are run by autonomous agents instead of human operators?

"Traditionally, when we looked at attacks, we were looking at kill chains, which tended to be more linear paths. One step would follow another, and if anything failed anywhere along that chain, human operators would have to go back and figure out, okay, what's our next way in? What do we need to do to circumvent this and go another direction?

But now that's all changed. With agentic AI, you can automate pretty much the entire chain. So it's really more of a dynamic loop — when the AI hits a roadblock, it can simply reroute the attack toward whatever objective it's trying to accomplish and automate from end to end. It's made attacks a lot easier and a lot less expensive, which is very scary, because far more people can now take part as threat actors."

Tony Scribner, VP of Solution Advisory, Bluewave Technology Group


Where are the blind spots that AI agents typically expose in a cyber attack?

"One of the real-world places we see this happening is in MFA handoffs between machines and between systems. If they're not seamlessly integrated — if you don't have your policies absolutely synchronized, especially between cloud and on-prem infrastructure — that creates a vulnerability, a gap that can be exploited in the MFA handoff. That's really unfortunate, because we've spent the last decade getting everybody onto MFA and talking about how it was going to solve a lot of the problems.

But like everything else, it has its set of weaknesses and vulnerabilities. Really ensuring that you're standardizing these policies across different environments and developing a unified identity and access management strategy is a huge plus in this regard, especially between on-prem and cloud environments.

There are still a lot of organizations in that hybrid state. And make sure you're regularly auditing and updating these policies — things change frequently, and it's important to keep up, because this is a big gap that's being exploited."

Stephen Jones, VP Detection and Response, Thrive
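The policy-drift audit Jones describes can be done mechanically. Below is a minimal sketch, assuming MFA policy settings have already been exported from each environment as simple key-value pairs; the field names (`mfa_required`, `session_timeout_min`, `allowed_factors`) are illustrative and not tied to any particular identity provider's schema:

```python
# Sketch: flag MFA policy drift between two environments.
# All policy fields and example values below are illustrative
# assumptions, not a real identity provider's export format.

def find_policy_drift(env_a: dict, env_b: dict) -> dict:
    """Return settings that differ, or exist in only one environment."""
    drift = {}
    for key in sorted(set(env_a) | set(env_b)):
        a, b = env_a.get(key), env_b.get(key)
        if a != b:
            drift[key] = {"on_prem": a, "cloud": b}
    return drift

on_prem = {"mfa_required": True, "session_timeout_min": 30,
           "allowed_factors": ("totp", "push")}
cloud = {"mfa_required": True, "session_timeout_min": 480,
         "allowed_factors": ("totp", "push", "sms")}

for setting, values in find_policy_drift(on_prem, cloud).items():
    print(f"DRIFT {setting}: on_prem={values['on_prem']} cloud={values['cloud']}")
```

Run on a schedule, a check like this surfaces exactly the kind of gap described above: here the cloud side allows SMS as a factor and a far longer session timeout than on-prem, a mismatch an automated attacker could probe for.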


What should an IT leader change in their security program to combat AI threats?

"Our users are our weakest link, which isn't really fair to them, because they're up against a tremendous battle. I agree they should carry some risk in their KPIs, because they have to be aware of what they're doing. Sometimes you just have to slow down and ask: does this make sense?

But there's an incredible battle being waged against them to have them do things they shouldn't be doing — opening a document, clicking a link, putting credentials into a website, whatever the case is.

So rather than just harping on users being the weakest link, let's talk in terms of identity being our most exploitable surface. We need to protect the employees. We need to train the employees. We certainly need to measure how our employees are doing against simulated threats in a continuous learning program. But we can't rely on training alone, on the hope that they won't be fooled. The threat actors are simply too good now. So how do we make identity a priority? And how do we make it less exploitable? I think that's one good area to tackle, certainly in the next 90 days."

Tony Scribner, VP of Solution Advisory, Bluewave Technology Group


"I'm going to be a little more provocative with my answer. Everything we've talked about today — and what I'm seeing in real life — tells me the adoption of AI is not going to slow down; it's only going to accelerate. That means the attacks are going to continue to surge in both volume and sophistication.

However, today almost everybody still has their security or defender's budget prioritizing compliance over innovation, and that ultimately puts your company in a bad spot. If you're not pivoting the way your security team functions, making it more innovative, more predictive, more capable and ready for what it's facing, and instead focus solely on compliance and on being reactive, you're going to lose the fight eventually.

So, I would say the thing you can do is really make that cultural shift and provide the budgeting and funding necessary for it to happen. Because as we discussed, it's not if, it's when you're going to have a problem. You need to assume it's an ongoing problem."

Stephen Jones, VP Detection and Response, Thrive