
AgentOps is the next Ops layer and nobody's staffed for it...


Ask a MOPs team how many automated programs are running in their marketing automation platform right now and you'll get a rough answer. Maybe not a confident one, but something in the right postcode. They'll know the major nurtures, the scoring models, the lifecycle triggers. It's their system. They built it.


Now ask how many AI agents are running. Across the CRM, the MAP, the service desk, the data enrichment layer. How many are live. What data they access. What actions they can take. Who activated them. When they were last reviewed.


More often than not, you'll get silence. Very occasionally, you'll even get "…what agents?"


AI agents are multiplying inside the platforms MOPs teams already operate, whether those teams know it or not, and nobody has built the operational layer to manage them.


Not IT. Not Marketing Ops. Not the RevOps team that's still arguing about lifecycle stage definitions. The result can be a growing fleet of autonomous processes running inside your revenue systems with no monitoring, no audit trail, and no clear owner.


We've been here before with marketing automation - build it, launch it, orphan it. Except agents don't just execute static rules. They reason. They adapt. And they can go quietly wrong in ways that won't show up until someone asks why pipeline looks off.


This is the AgentOps problem. And most organisations don't even know they have it yet.



Agents aren't automations. They need a different kind of oversight.


Traditional marketing automation runs a script. If the data says X, do Y. It's deterministic. Predictable. Boring in the best possible way. When a smart campaign breaks in Marketo, you can trace the logic, find the error, fix it. The system did what you told it to do.


AI agents are different. Agentforce uses an LLM-powered reasoning layer to interpret context, plan actions, and execute across systems. HubSpot's Breeze agents - now running on GPT-5 for some marketplace agents - make judgement calls about how to qualify a lead, what to say to a customer, when to escalate.


They don't follow a flowchart. They interpret.


That distinction matters enormously for operations, because it means the failure mode is different. A broken automation sends the wrong email. You catch it in QA or someone complains. An agent that's reasoning poorly routes high-value prospects to the wrong sales team, or gives a customer an answer that's confidently wrong, or quietly updates CRM fields based on stale data - and it does all of this while looking like it's working perfectly.


One Salesforce implementation partner published a detailed account of exactly this pattern earlier this year. A client deployed an Agentforce lead qualification agent that was routing high-value prospects to the wrong sales team.


The cause? A territory assignment field that hadn't been updated after a recent re-org. The agent didn't flag the stale data. It didn't hesitate. It treated six-month-old field values as ground truth and processed 340 leads through incorrect routing before anyone noticed.


Human reps would have caught it within the first few calls. The agent just kept going.


That's the operational gap. The technology worked. The reasoning worked. The data was wrong, and nobody was watching.



AI governance in Marketing Ops now means agent governance


The governance conversation has been happening for a while. Policies about data usage, consent, content review. Most of it has centred on generative AI - who's allowed to use ChatGPT, what can be fed into a model, who reviews AI-generated copy before it ships.


That conversation was necessary. It was also about the last generation of AI use cases.


Agents are a different governance surface entirely. They don't just generate content. They take actions. They modify records. They make routing decisions. They interact with customers. The governance questions aren't "is this content on brand?" - they're "did this agent just change a lead score based on data that's three months stale, and did anyone notice?"


Agent governance requires a different set of capabilities.


You need monitoring - not just logging what happened, but flagging when agent behaviour deviates from expected patterns. You need periodic review cycles, where someone checks that the agent's reasoning still aligns with current business rules, pricing, territories, product availability. You need escalation paths, so when an agent encounters something outside its boundaries, the right human gets involved instead of the agent improvising.
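To make the escalation piece concrete, here's a minimal sketch in Python. Everything in it - the event types, the owner names, the `escalate` function - is an illustrative assumption, not a platform API:

```python
# Illustrative sketch: route out-of-bounds agent events to a named human
# owner instead of letting the agent improvise. All names are assumptions.

ESCALATION_RULES = {
    "pricing_question_outside_catalog": "revops-lead",
    "record_update_on_stale_field": "mops-admin",
    "unknown_topic": "agent-owner",
}

def escalate(event_type: str, payload: dict) -> str:
    """Return who gets told, defaulting to the agent's named owner."""
    owner = ESCALATION_RULES.get(event_type, "agent-owner")
    # In practice this would page or message the owner; here we just report.
    return f"ESCALATE to {owner}: {event_type} ({payload.get('agent', 'unknown agent')})"

print(escalate("record_update_on_stale_field", {"agent": "lead-qualifier"}))
```

The point of the sketch isn't the code - it's that the mapping from "unexpected thing" to "named human" is written down somewhere, rather than improvised per incident.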


And you need ownership. Clear, named, accountable ownership. Not "the team," not "IT handles the platform," not "we'll figure it out." A person who knows which agents are running, what they're doing, what data they depend on, and when they were last reviewed.


That's AgentOps. It's not a product. It's not a platform. It's an operational discipline, and it doesn't exist yet in most organisations.



Take part in the 2026 AI Benchmark Report

Hallucination rates are a design reality, not a scare statistic


Here's a number that should shape how you think about agent operations: hallucination rates for AI agents inside CRMs range from 3% to 27%, depending on configuration, grounding data, and prompt design. That's from published implementation data across dozens of enterprise deployments.


At the low end - proper Knowledge article coverage, well-structured prompts, tight topic guardrails - agents get it right 95-97% of the time. That's genuinely useful.


At the high end - minimal grounding data, broad topic definitions, no monitoring - you get an agent that fabricates pricing, invents product features, or confidently cites policies that don't exist.


The point isn't that agents are unreliable. It's that they're probabilistic. They will occasionally get things wrong. That's not a bug. It's the nature of the technology. The operational question is whether your organisation has the capacity to detect when that happens, assess the damage, and correct course.
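The back-of-envelope arithmetic is worth doing for your own volumes. Assuming - purely for illustration - an agent taking 500 actions a day:

```python
# Expected wrong actions per day at the published 3-27% hallucination range.
# The 500 actions/day volume is an illustrative assumption, not a benchmark.
actions_per_day = 500

for rate in (0.03, 0.27):  # low and high ends of the published range
    wrong = actions_per_day * rate
    print(f"At {rate:.0%}: ~{wrong:.0f} wrong actions per day")
```

Even the well-configured low end implies roughly fifteen wrong actions a day at that volume - which is fine if someone is catching them, and quietly corrosive if nobody is.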


Right now, for most teams, the answer is no. Some platforms are starting to ship transparency features - audit trails showing which CRM properties an agent modified and what actions it took. That's a step in the right direction. But a feature isn't a practice. An audit trail is useless if nobody's reading it.


That's the operational equivalent of installing a smoke detector and never checking the batteries.



What AgentOps actually looks like


This doesn't require a new team or a new budget line. It requires treating agents as operational assets - not features you activate and forget.


That means maintaining an inventory. How many agents are running in your systems right now? What data do they access? What actions can they take? Who activated them? If you can't answer those questions today, you have an agent sprawl problem and you don't know how big it is.
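If it helps to make "inventory" concrete: here's a minimal sketch in Python of those questions as structured data. The field names, platform label, and sample agent are all illustrative assumptions, not any vendor's schema:

```python
# A minimal agent inventory: the questions from the text as required fields.
# All names and sample values are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRecord:
    name: str
    platform: str           # which system it runs in
    data_accessed: list     # objects/fields the agent reads
    actions: list           # actions the agent can take
    activated_by: str       # the named, accountable owner
    last_reviewed: date

inventory = [
    AgentRecord(
        name="lead-qualifier",
        platform="CRM",
        data_accessed=["Lead.Territory", "Lead.Score"],
        actions=["route_lead", "update_score"],
        activated_by="jane.doe",
        last_reviewed=date(2026, 1, 15),
    ),
]

# Sprawl check: any agent nobody will put their name against?
unowned = [a.name for a in inventory if not a.activated_by]
print(f"{len(inventory)} agents tracked, {len(unowned)} without an owner")
```

A spreadsheet does the same job. What matters is that the four questions have answers on record, not where the record lives.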


It means defining review cadences. Not annual audits - practical, lightweight checks.


Monthly: is the agent behaving as expected? Are the data fields it depends on still reliable?


Quarterly: do the business rules baked into agent behaviour still match reality? Have territories shifted? Has pricing changed?
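The cadence check itself can be trivially small. A sketch in Python - the thresholds, agent names, and dates are illustrative assumptions:

```python
# Flag agents overdue for their monthly or quarterly check.
# Cadence thresholds and sample data are illustrative assumptions.
from datetime import date

REVIEW_CADENCE_DAYS = {"monthly": 31, "quarterly": 92}

def overdue(last_reviewed: date, cadence: str, today: date) -> bool:
    return (today - last_reviewed).days > REVIEW_CADENCE_DAYS[cadence]

today = date(2026, 3, 1)
agents = {"lead-qualifier": date(2026, 1, 15), "faq-bot": date(2026, 2, 20)}
for name, reviewed in agents.items():
    if overdue(reviewed, "monthly", today):
        print(f"{name}: monthly review overdue")
```

Ten lines and a calendar reminder buy you the thing the Agentforce routing story lacked: someone looking, on a schedule.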


It means setting performance baselines. What does "working" look like for each agent? If you can't define success, you can't detect failure. And the agent won't tell you it's failing. It'll just keep going with impressive confidence.
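"Define success so you can detect failure" can be as simple as a number and a tolerance. A sketch, assuming (purely for illustration) a lead-qualification agent whose historical baseline is a ~40% qualification rate:

```python
# Define "working" numerically, then detect failure as drift from baseline.
# The baseline rate and tolerance are illustrative assumptions.
def drifted(observed: float, baseline: float, tolerance: float = 0.10) -> bool:
    """Flag when an observed rate deviates from baseline by more than tolerance."""
    return abs(observed - baseline) > tolerance

baseline_qual_rate = 0.40   # historically ~40% of routed leads qualify

print(drifted(0.38, baseline_qual_rate))  # small wobble, no alarm
print(drifted(0.12, baseline_qual_rate))  # something is wrong
```

The agent in the routing story above would have tripped exactly this kind of check weeks earlier - not because anyone read its reasoning, but because its output stopped matching its own history.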


And it means building escalation clarity. When an agent does something unexpected, who gets told? How fast? Salesforce learned this the hard way on its own Help portal - 26% abandonment rate before anyone intervened. Most orgs don't have Salesforce's engineering resources to react that quickly.



The agents are already live. The ops layer isn't.


Every ops discipline starts the same way. Something breaks. Leadership asks who was supposed to be watching. Nobody has a good answer. A process gets created under pressure, after the fact, while someone patches the damage.


Marketing automation governance happened that way. Marketing automation data quality programmes happened that way. GDPR compliance happened that way for a depressingly large number of organisations.


You can build AgentOps the same way - reactively, after an agent has been quietly misrouting leads for six weeks or breaching compliance boundaries for 48 hours because someone edited a topic description. Or you can look at the agents already running in your systems, admit that nobody's managing them, and start.


The agents are already live. The ops layer isn't. That gap has an expiry date. It's just a question of whether you close it on your terms or someone else's.



Discover our AI in Marketing Operations Services





