
AI deleted the database. Now imagine it had campaign access...
Ask most enterprise teams what they fear most about AI and you will usually get the obvious answers.
Bad content.
Dodgy prompts.
Hallucinated summaries.
A sales email that sounds like it was written by a motivational fridge magnet.
Annoying? Yes. Brand-damaging? Potentially. But not the real nightmare.
The real nightmare is what happens when AI stops being a clever assistant and starts becoming an operator. When it can access systems. Query databases. Change records. Trigger workflows. Sync audiences. Update fields. Delete things. Move people between lifecycle stages. Send campaigns. Alter routing. Touch customer data.
At that point, AI is no longer just helping your team think. It is acting inside the machinery of your business.
And when that machinery is Marketing Operations, the blast radius gets very real, very quickly.
A recent story reported by The Independent gives us a useful, uncomfortable example. PocketOS, a software company serving car rental businesses, suffered a major outage after an AI coding agent reportedly deleted its production database and backups in seconds.
According to the report, the agent was powered by Anthropic’s Claude model via Cursor, and the company founder said there was no confirmation request before the destructive action was taken. The data was later reported as recovered, but not before customers were left unable to access key records.
This is not an article about Claude. Or Cursor. Or one company having a very bad weekend.
It is about what happens when AI capability moves faster than AI governance. And Marketing Operations teams should be paying very close attention.
AI is moving from assistant to actor
For the last couple of years, most AI use in marketing has been relatively low-risk.
Draft this email, summarise this webinar, rewrite this landing page, generate campaign ideas, turn these notes into a blog outline, make this sound less like a committee had a fight in a Google Doc.
Useful, yes. Transformational, occasionally. Dangerous, usually only in the “please don’t publish that without a human reading it” sense.
But that phase has ended.
AI is now being connected directly into business systems. CRM. MAP. CDP. DAM. Analytics. Data warehouses. Customer support platforms. Sales engagement tools. Project management tools. The places where the actual work happens. And that shift changes everything.
Because once an AI agent has access to your Marketing Automation Platform, it is no longer just suggesting a nurture flow. It may be able to build one. Once it has access to your CRM, it is no longer just analysing lifecycle data. It may be able to update it. Once it has access to campaign operations, it is no longer just spotting a QA issue. It may be able to “fix” it.
And the word “fix” is doing a terrifying amount of work there.
The PocketOS example matters because the agent was reportedly working on a routine task when it decided, without explicit instruction, to take a destructive action. The founder described the failure as systemic, warning that AI-agent integrations are being pushed into production infrastructure faster than the safety architecture needed to make them safe.
That line should be printed out and stuck above every enterprise AI planning session.
Because it is exactly where many marketing teams are heading.
Marketing Operations has more risk than people think
Marketing Operations is often talked about as if it is mostly campaign execution, platform admin, and workflow plumbing.
That is dangerously outdated.
Modern Marketing Operations sits across customer data, consent, segmentation, scoring, routing, lifecycle management, attribution, campaign orchestration, sales handoff, analytics, and increasingly, AI enablement.
In other words, MOPs touches the systems that decide:
Who gets contacted.
What they receive.
When they receive it.
How they are scored.
How they are routed.
What data is stored.
What consent rules apply.
Which audiences are synced to paid media.
Which leads go to sales.
Which customers are suppressed.
Which reports leadership trusts.
Give an AI agent the wrong level of access in that environment and you do not just risk a weird email subject line - you risk operational damage.
An AI agent with excessive permissions could accidentally overwrite lead source fields. It could change scoring logic. It could update suppression criteria. It could sync the wrong audience to LinkedIn. It could remove members from an active nurture. It could expose sensitive customer data in a prompt response. It could “clean up” a list that should never have been touched. It could trigger a campaign before approvals are complete.
None of that requires malice.
It only requires an AI system trying to be helpful in an environment where the rules, permissions, escalation paths, and human controls have not been properly designed. Which is, frankly, how a lot of AI is being introduced right now.
A demo works, someone senior gets excited, a team plugs it into a tool, everyone calls it innovation, nobody asks who owns the risk. Lovely. Very modern. Also how you end up explaining to Legal why an AI agent just “optimised” your consent architecture into a crater.
Guardrails are not a nice-to-have. They are the operating model.
The phrase “AI guardrails” gets thrown around a lot, usually in the same vague tone as “best practice” and “alignment”. That is a problem, because guardrails are not a brand slogan. They are practical, technical, operational controls.
In Marketing Operations, AI guardrails should define what AI can and cannot do across your MarTech stack.
That includes:
What systems AI can access.
What data it can read.
What data it can write.
What actions require approval.
What actions are completely prohibited.
What prompts are acceptable.
What data can be included in prompts.
Who can activate AI-powered workflows.
How AI actions are logged.
How exceptions are handled.
How rollback works.
Who reviews outputs.
Who owns incidents.
Who gets woken up when something goes sideways.
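For illustration, a guardrail set like that can be codified rather than left in a slide deck. A minimal Python sketch, where the system names, action names, and structure are all hypothetical assumptions, not any platform's real API:

```python
from dataclasses import dataclass, field

# Hypothetical guardrail policy. System and action names are
# illustrative assumptions, not a real product's API.
@dataclass
class GuardrailPolicy:
    readable_systems: set = field(default_factory=set)   # AI may query these
    writable_systems: set = field(default_factory=set)   # AI may modify these
    approval_required: set = field(default_factory=set)  # needs human sign-off
    prohibited: set = field(default_factory=set)         # never allowed

    def check(self, system: str, action: str) -> str:
        """Return 'deny', 'escalate', or 'allow' for a proposed action."""
        if action in self.prohibited:
            return "deny"
        if action in self.approval_required:
            return "escalate"
        if action == "read" and system in self.readable_systems:
            return "allow"
        if action == "write" and system in self.writable_systems:
            return "allow"
        return "deny"  # default-deny: anything unlisted is blocked

policy = GuardrailPolicy(
    readable_systems={"map", "crm"},
    writable_systems={"map_sandbox"},
    approval_required={"send_campaign", "sync_audience"},
    prohibited={"delete_records", "edit_consent_logic"},
)

print(policy.check("crm", "read"))           # allow
print(policy.check("map", "write"))          # deny - production is read-only here
print(policy.check("map", "send_campaign"))  # escalate - human approval needed
```

The important design choice is the last line of check: access that has not been explicitly granted defaults to denied, which is the opposite of how most AI integrations are wired up today.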
This is the unglamorous bit. Which is exactly why it matters. The AI industry loves the shiny front end. The magic box. The productivity story. The “your team can move ten times faster” promise. But speed without control is not transformation. It is just a faster way to make a bigger mess.
And MOPs teams already know this. They have spent years learning the hard way that automation is only valuable when it is governed. Nobody serious lets people casually edit live scoring models, suppression rules, campaign templates, or data flows without process. We have change control for a reason. We have QA for a reason. We have sandbox environments for a reason. We have approval workflows for a reason.
AI does not remove that need.
It increases it.
The real risk is excessive agency
One of the most useful concepts in AI security is “excessive agency”. OWASP describes this as the risk that an LLM-based system is given too much autonomy, functionality, or permissions, allowing it to perform damaging actions when something goes wrong.
That is exactly the risk Marketing Operations teams need to understand.
The problem is not simply that an AI system might generate a bad answer. The problem is that the bad answer might be connected to an action.
Bad recommendation plus no permissions? Annoying.
Bad recommendation plus write access? Operational incident.
Bad recommendation plus production access? Clear your afternoon.
In a MOPs context, excessive agency might look like an AI tool that can both analyse a campaign and edit it. Or an agent that can identify “bad data” and delete it. Or a chatbot that can query customer records without strict access controls. Or a workflow assistant that can update segmentation logic without human approval. Or an AI-powered QA tool that can “fix” campaign errors directly in a live MAP.
Some of those capabilities may be useful eventually.
But “useful” and “safe to deploy without governance” are not the same thing.
That distinction is where a lot of organisations are going to get hurt.
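In code terms, one way to cap agency is to split the AI into a layer that can only propose and a layer that executes under explicit rules. A hedged sketch, with hypothetical action names:

```python
# Sketch: separate "propose" from "execute" so a bad recommendation
# stays a recommendation. Action names are illustrative assumptions.
ALLOWED_AUTONOMOUS = {"flag_for_review"}  # everything else needs a person

def propose(action: str, target: str) -> dict:
    """The AI layer only ever returns a proposal - it has no write path."""
    return {"action": action, "target": target, "status": "proposed"}

def execute(proposal: dict, approved_by: str = "") -> dict:
    """The execution layer enforces the autonomy boundary."""
    if proposal["action"] in ALLOWED_AUTONOMOUS:
        proposal["status"] = "executed"
    elif approved_by:
        proposal["status"] = f"executed (approved by {approved_by})"
    else:
        proposal["status"] = "blocked: human approval required"
    return proposal

p = propose("delete_segment", "Q3_test_audience")
print(execute(p)["status"])  # blocked: human approval required
print(execute(p, approved_by="mops_lead")["status"])
```

Whether that boundary sits in middleware, the integration layer, or the platform itself matters less than it existing at all: a bad answer with no write path is an annoyance, not an incident.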
Governance should come before integration
This is the bit many companies are getting backwards.
They start with the tool.
They ask: “What AI platform should we use?”
Or: “Can we connect this to Marketo?”
Or: “Can it work with HubSpot?”
Or: “Can it read Salesforce data?”
Or: “Can we automate campaign QA?”
Or: “Can it build workflows for us?”
Those are not bad questions. They are just not the first questions.
The first questions should be:
What are we allowing AI to do?
Where should it only advise?
Where can it act?
Where must it never act?
Which systems are too sensitive for direct access?
Which data types are restricted?
Which use cases require human approval?
How do we test safely before production?
How do we prove what happened after the fact?
How do we recover if something breaks?
That is governance.
Not a dusty policy PDF. Not a theoretical risk matrix. Not a 48-page deck that makes everyone lose the will to live by slide seven. Real governance is the practical operating model that lets teams use AI without gambling with their stack.
The NIST AI Risk Management Framework is built around four core functions: Govern, Map, Measure, and Manage. It treats governance as the foundation that informs the rest of AI risk management, not as a decorative afterthought once the tool has already been plugged in.
That is the mindset Marketing Operations needs.
Before AI gets access, define the rules.
Before AI takes action, define the approvals.
Before AI touches data, define the boundaries.
Before AI goes near production, define the rollback plan.
Wildly radical stuff, apparently.
Why this matters specifically for enterprise MOPs
Enterprise Marketing Operations is not a sandbox. It is an interconnected operating environment with a lot of dependencies. A change in one place can cause problems somewhere else. A field update in the CRM can affect segmentation in the MAP. A lifecycle stage change can affect sales routing. A consent flag can affect campaign eligibility. A scoring model adjustment can affect MQL volume. A sync rule can affect paid media audiences. A data enrichment change can affect reporting. A campaign status update can affect attribution.
AI does not automatically understand those dependencies.
It may understand the task. It may even understand the platform. But that does not mean it understands your operating model, your governance structure, your internal politics, your data history, your compliance obligations, your exception cases, your sales process, or the weird legacy field that absolutely nobody likes but half the reporting suite still depends on.
Every MOPs team has at least one of those fields - probably seven.
This is why generic, off-the-shelf AI deployment is risky in enterprise Marketing Operations. Not because AI is bad. Because context matters.
The same action can be harmless in one environment and catastrophic in another. Deleting a test segment in a sandbox is not the same as deleting a suppression list in production. Updating a campaign status in a demo instance is not the same as changing live program logic across regions. Recommending a data cleanup is not the same as executing one.
Governance is what separates those scenarios.
AI governance is not anti-AI
This point matters, because too often governance gets framed as the boring department of “no”.
That is lazy.
Good AI governance is not about slowing everything down. It is about making AI usable at scale.
Without governance, AI adoption gets stuck in one of two bad places.
Either teams move too fast and create risk, or everyone becomes so nervous that AI stays trapped in low-value use cases like content drafting and meeting summaries.
Neither is good enough.
The real opportunity is in using AI to improve how Marketing Operations actually runs. Campaign QA. Data quality checks. Workflow diagnostics. Audience validation. Documentation. Change impact analysis. Platform monitoring. Performance insights. Process recommendations. Governance enforcement.
But those use cases only work if the foundations are there.
You need clear permissions.
You need human oversight.
You need approval thresholds.
You need audit trails.
You need data boundaries.
You need incident processes.
You need ownership.
You need testing environments.
You need a sensible definition of what “safe enough” means.
That is not bureaucracy. That is how you get AI out of the toy box and into the operating model.
What should AI governance include in Marketing Operations?
A practical AI governance framework for MOPs should cover six core areas.
1. Use case classification
Not every AI use case carries the same level of risk.
A tool that helps draft campaign briefs is low risk.
A tool that analyses nurture performance is moderate risk.
A tool that updates live segmentation rules is high risk.
A tool that can change production data without approval should trigger every alarm bell in the building.
Use cases need to be classified based on access, autonomy, data sensitivity, customer impact, compliance exposure, and reversibility.
If an action cannot be easily reversed, it should not be casually delegated to AI.
There you go. Put that on a mug.
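That classification can also be made explicit in code. A toy sketch of the idea - the factors and tier names are assumptions to adapt, not an industry standard:

```python
# Toy risk tiering for AI use cases. Factors and tier names are
# illustrative assumptions, not an industry standard.
def classify_use_case(writes_data: bool, production: bool,
                      reversible: bool, sensitive_data: bool) -> str:
    if writes_data and production and not reversible:
        return "critical"  # irreversible production writes: every alarm bell
    if writes_data and production:
        return "high"      # reversible, but still live-system writes
    if writes_data or sensitive_data:
        return "moderate"  # sandbox writes, or read access to sensitive data
    return "low"           # advisory and drafting work

# The examples above, roughly mapped:
print(classify_use_case(False, False, True, False))  # drafting briefs -> low
print(classify_use_case(False, True, True, True))    # nurture analysis -> moderate
print(classify_use_case(True, True, True, False))    # live segmentation -> high
print(classify_use_case(True, True, False, False))   # irreversible prod change -> critical
```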
2. Access and permissions
AI should never get broad access by default.
It should have the minimum access needed for the specific use case. Read-only where possible. Sandbox first. Production only when justified. Write access only with controls. Destructive actions blocked or escalated.
This is basic operational hygiene, but AI makes it more urgent because agents can move quickly, misinterpret instructions, and chain actions together in ways humans may not expect.
The question is not “Can the AI do this?” The question is “Should it be allowed to?”
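A least-privilege grant model can be as simple as a lookup that defaults to nothing. A sketch with hypothetical use case and system names:

```python
# Minimum-access grants per use case. Use case and system names are
# illustrative assumptions; the point is the default at the bottom.
def grant_for_use_case(use_case: str) -> dict:
    grants = {
        "campaign_qa":       {"map": "read"},           # read-only in production
        "workflow_drafting": {"map_sandbox": "write"},  # sandbox first
        "data_cleanup":      {"crm_sandbox": "write"},  # never straight to prod
    }
    return grants.get(use_case, {})  # unknown use case -> no access at all

print(grant_for_use_case("campaign_qa"))      # {'map': 'read'}
print(grant_for_use_case("tidy_everything"))  # {}
```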
3. Human approval points
Some actions should always require human approval.
Sending campaigns.
Deleting data.
Changing scoring models.
Updating consent logic.
Syncing paid media audiences.
Editing live nurture flows.
Changing routing rules.
Altering lifecycle stages.
Modifying integrations.
Approval should not be vague. It should be designed into the workflow.
Who approves?
At what threshold?
With what information?
Where is that approval recorded?
What happens if approval is denied?
“Someone will probably check it” is not a control. It is a hope wearing a lanyard.
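Designed-in approval, as opposed to hoped-for approval, looks something like this in sketch form. The action names and log structure are illustrative assumptions:

```python
import datetime

# Actions that always need a named human. Illustrative list.
APPROVAL_REQUIRED = {"send_campaign", "delete_data", "change_scoring",
                     "update_consent", "sync_paid_audience"}
approval_log = []  # where the approval is recorded

def request_action(action: str, approver: str = "", reason: str = "") -> bool:
    """Allow an action only if it is unrestricted or explicitly approved."""
    if action not in APPROVAL_REQUIRED:
        return True
    if not approver:
        return False  # no named approver, no action
    approval_log.append({
        "action": action, "approver": approver, "reason": reason,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return True

print(request_action("send_campaign"))                        # False - blocked
print(request_action("send_campaign", approver="mops_lead"))  # True - and logged
```

The log is what turns "someone approved it" into an answer to who, when, and why.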
4. Logging and auditability
If AI takes an action, you need to know what happened.
What did it access?
What prompt or instruction triggered the action?
What data did it use?
What did it change?
When did it change it?
Who authorised it?
Was there a human review?
Can the action be rolled back?
This is especially important in Marketing Operations because problems often surface downstream. A campaign underperforms. Sales complains about lead quality. Reporting looks strange. An audience is wrong. Consent exclusions fail.
Without logs, you are left reconstructing the crime scene with vibes and Slack messages.
Not ideal.
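Each of those audit questions maps neatly to a field. A sketch of what one record might hold - the schema is an assumption, not any vendor's format:

```python
import datetime
import json

# Hypothetical audit record for an AI-taken action. Field names are
# illustrative; the before/after pair is what makes rollback possible.
def audit_record(actor, system, action, prompt, before, after, approved_by=None):
    return {
        "actor": actor,              # which agent or integration acted
        "system": system,            # what it accessed
        "action": action,            # what it did
        "prompt": prompt,            # what instruction triggered it
        "before": before,            # state prior to the change
        "after": after,              # state after the change
        "approved_by": approved_by,  # who authorised it, if anyone
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

rec = audit_record("qa_agent", "map", "update_field",
                   "normalise UTM casing",
                   {"utm_source": "LinkedIn"}, {"utm_source": "linkedin"},
                   approved_by="mops_lead")
print(json.dumps(rec, indent=2))
```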
5. Testing and sandboxing
AI should not be learning its boundaries in your production environment. New AI use cases should be tested in controlled conditions. Synthetic data where possible.
Sandboxes before live systems. Limited pilots before wider rollout. Clear success criteria. Failure-mode testing. Red-team style scenarios. Rollback rehearsals.
The goal is not to prove AI works when everything goes perfectly.
The goal is to understand what happens when it does something weird.
Because it will.
6. Ownership and escalation
AI governance fails when everyone assumes someone else owns it.
Marketing thinks IT owns it.
IT thinks Marketing owns the use case.
Legal thinks Procurement reviewed it.
Procurement thinks Security signed it off.
Security thinks the platform owner configured it.
The platform owner thinks the AI vendor handled it.
Beautiful. A governance conga line straight into the bin.
Every AI-enabled MOPs use case needs named ownership. Business owner. Technical owner. Data owner. Risk owner. Escalation contact. Review cadence.
If nobody owns the risk, the risk owns you.
Why external expertise matters
This is where working with a specialist consultancy becomes valuable.
Not because internal teams are incapable. Far from it. Most enterprise MOPs teams know their systems incredibly well. They know the weird dependencies, the fragile workflows, the stakeholder sensitivities, the data quality issues, and the operational pain points.
But AI governance across Marketing Operations requires a specific blend of skills.
You need to understand marketing automation.
You need to understand CRM data models.
You need to understand campaign operations.
You need to understand privacy and consent implications.
You need to understand workflow architecture.
You need to understand AI capability and risk.
You need to know where automation genuinely helps and where it should be kept on a very short lead.
That combination is not always sitting neatly inside one internal team.
A consultancy like Sojourn brings the outside perspective and the MOPs-specific experience needed to help organisations introduce AI without treating their MarTech stack like a science experiment with invoices.
Sojourn’s role is not to turn AI off. It is to help enterprises turn it on properly.
That means identifying the right use cases, assessing readiness, defining guardrails, mapping risk, setting approval processes, documenting controls, supporting rollout, and making sure AI fits the operating model rather than crashing through it like an overexcited intern with admin rights.
The uncomfortable truth
The PocketOS story is dramatic because the outcome was immediate and visible.
Database gone. Customers affected. Panic stations.
Marketing Operations failures are often quieter.
A field gets overwritten.
A segment rule changes.
Suppression logic breaks.
A nurture excludes the wrong accounts.
A scoring change floods sales with rubbish.
A paid audience syncs with the wrong criteria.
A consent rule is misread.
A report becomes unreliable.
A workflow quietly routes valuable leads into the void.
No sirens. No dramatic explosion. Just slow operational damage.
That is what makes AI governance in MOPs so important.
The risk is not always one giant disaster. Sometimes it is hundreds of small, invisible decisions made by a system nobody is properly supervising.
And by the time anyone notices, the dashboard is lying, the sales team is annoyed, the campaign data is messy, and someone is saying “can we just manually fix it?” in the sort of tone that ruins a Tuesday.
The answer is not fear. It is control.
AI has a serious role to play in the future of Marketing Operations.
It can reduce manual QA.
It can spot issues faster.
It can support better documentation.
It can help teams understand complex workflows.
It can identify data anomalies.
It can speed up campaign planning.
It can improve operational consistency.
It can help MOPs teams move from firefighting to proactive management.
But only if it is introduced with discipline.
The lesson from the PocketOS incident is not “AI is too dangerous to use.” The lesson is “AI is too powerful to introduce casually.”
That distinction matters.
Marketing Operations teams do not need panic. They need governance. They need practical guardrails. They need clear controls. They need expert support. They need a way to move forward without pretending risk magically disappears because the demo looked impressive.
AI is coming deeper into the MarTech stack. That is not really up for debate.
The question is whether it arrives as a governed capability or an unsupervised operator with too much access and not enough judgement.
One of those helps your team scale.
The other deletes the database and apologises afterwards.
Choose wisely.









