
Off-the-shelf AI in Marketo sounds great. But governance is where it gets real...
We've all seen the latest tech news, and there is a certain type of announcement that makes people behave like they have just seen the second coming of competence.
A major platform connects to a major model. The screenshots look slick. The demo looks fast. Suddenly everyone starts talking as if typing a request in plain English has solved complexity, governance, and the last ten years of operational mess in one go.
This is one of those moments.
To recap: Adobe has now published a Marketo MCP server that lets AI tools connect to Marketo, including Claude Desktop, Claude Code, Cursor, and VS Code with GitHub Copilot. Adobe says it exposes more than 100 operations and requires REST API access, Marketo LaunchPoint credentials, and network access to Adobe’s hosted MCP endpoint. Adobe also tells customers to use a dedicated API user and only the permissions required.
All of that is real.
What is not real is the idea that this automatically equals an enterprise-ready AI strategy.
And that is the part worth questioning.
Because the issue is not AI in Marketo. The issue is whether a general-purpose off-the-shelf LLM, connected through credentials into a live enterprise marketing platform, is actually the right operational answer for teams dealing with approvals, customer data, access controls, regional constraints, procurement reviews, and the usual internal politics of “who signed this off?” Adobe’s own setup guide makes clear this is not just a chat feature. It is a permissioned connection into Marketo through API access and a hosted MCP service.
That is a very different story.
The mistake is confusing “can connect” with “is fit for purpose”
This is where the market gets a bit silly.
A connection gets announced and people act as though capability and suitability are the same thing.
They are not.
Yes, Claude can connect to Marketo through Adobe’s MCP server. But “can connect” is the beginning of the conversation, not the end of it. Adobe’s documentation is explicit that setup involves REST API access, admin-created credentials in LaunchPoint, a hosted MCP URL, and a permissioned API user. That means the real enterprise question is not whether the connection exists. It is whether the connection belongs in your operating model.
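Underneath the announcement, those prerequisites are ordinary OAuth plumbing. As a minimal sketch only — with placeholder values, and the exact endpoint shape to be confirmed against Adobe's documentation — this is the kind of client-credentials token request a permissioned Marketo REST integration makes before it can do anything at all:

```python
from urllib.parse import urlencode

def marketo_token_url(munchkin_id: str, client_id: str, client_secret: str) -> str:
    """Build a Marketo client-credentials token request URL.

    Marketo's REST API issues short-lived access tokens via an OAuth 2.0
    client-credentials grant against the instance's identity endpoint.
    The munchkin_id here is a placeholder for your instance identifier.
    """
    base = f"https://{munchkin_id}.mktorest.com/identity/oauth/token"
    params = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return f"{base}?{params}"

# Placeholder values, not real credentials:
url = marketo_token_url("123-ABC-456", "my-client-id", "my-secret")
```

The point of showing it is not the code. It is that every AI request ultimately rides on a credential an admin created, scoped, and is accountable for.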
Because off-the-shelf intelligence is still off-the-shelf.
It is broad. It is flexible. It is good at looking useful quickly. Those are all reasons people like it. They are not, on their own, reasons to trust it with live enterprise Marketing Ops.
Enterprise teams do not just need something that can do things. They need something that can do the right things, in the right places, under the right controls, with the right auditability, and without giving Security, Procurement, Legal, and Operations a collective eye twitch.
That is a much harder brief.
Procurement is not rejecting AI. Procurement is rejecting vagueness.
This is the bit people love to get wrong.
When procurement or security starts asking awkward questions, the lazy narrative is that the business is being slow, old-fashioned, resistant, bureaucratic, or anti-innovation.
Rubbish.
What procurement is usually rejecting is not AI. It is vagueness.
Who is the vendor relationship actually with?
Where does the traffic go?
What data can be exposed?
What are the processor and subprocessor implications?
What gets logged?
What controls exist around usage?
What happens if one part of the chain changes?
Who owns the incident if something goes wrong between tools and nobody wants to claim it?
Those are not irritating questions. They are the whole point...
And this setup gives them plenty to work with. Adobe lists multiple supported AI tools, requires external client configuration, and routes the connection through an Adobe-hosted MCP endpoint into Marketo APIs. That means this is not just “Marketo has AI now.” It is a multi-system, permissioned integration that an enterprise is expected to govern properly.
So no, procurement is not the problem here.
Procurement is the first person in the room acting like production systems deserve adult supervision.
A famous LLM is not an operating model
This is probably the line most worth saying out loud.
A famous LLM is not an operating model.
It is a tool.
A connection is not a strategy.
A setup guide is not a governance framework.
And a slick demo is not proof that your enterprise should be doing any of this in production.
What actually matters is everything around the model.
What permissions does it run under?
What tasks is it allowed to perform?
What is read-only and what is not?
What environments can it touch?
What approval gates still apply?
What gets logged and reviewed?
What is blocked outright?
Who owns the design of those boundaries?
Adobe’s documentation already points straight at that reality by stressing dedicated API users, least privilege, and inherited Marketo API limits. Those warnings exist because the real risk is not that the tool is clever. It is that people will connect it too broadly and pretend that is innovation.
That is not innovation. That is just a new route to old mistakes.
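None of those boundary questions require exotic tooling to answer. As an illustration only — not anything Adobe ships, and with hypothetical operation names — this is the shape of the allowlist-and-audit gate an Ops team could put around AI-initiated operations:

```python
from datetime import datetime, timezone

# Hypothetical operation names for illustration; the real MCP server
# exposes its own set of Marketo operations.
READ_ONLY_ALLOWED = {"get_program", "list_smart_campaigns", "describe_lead_fields"}
WRITE_REQUIRES_APPROVAL = {"update_lead", "clone_program"}
BLOCKED = {"delete_lead", "bulk_export_leads"}

audit_log: list[dict] = []

def authorize(operation: str, approved: bool = False) -> bool:
    """Decide whether an AI-initiated operation may run, and log the decision."""
    if operation in BLOCKED:
        decision = False
    elif operation in READ_ONLY_ALLOWED:
        decision = True
    elif operation in WRITE_REQUIRES_APPROVAL:
        decision = approved  # a human approval gate still applies
    else:
        decision = False  # default-deny anything not explicitly listed
    audit_log.append({
        "operation": operation,
        "allowed": decision,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return decision
```

The sketch matters less than the categories: read-only, approval-gated, and blocked are explicit lists that someone owns, defaults deny, and every decision leaves a record.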
Data handling is where the excitement usually goes quiet
It is very easy to sound bullish about AI when you keep the conversation abstract.
It gets harder when you remember what is actually inside a real Marketo instance. Lead records. Personal information. Segmentation logic. Behavioural data. Sales handoff points. Operational workflows. Regional rules. Compliance concerns. Legacy fields nobody wants to go near. Program history. Approval dependencies.
Once a general-purpose model can query or act through a connection into that environment, the question is no longer “is this powerful?” It is “what exactly could this expose, retrieve, summarise, alter, or help someone access more casually than before?” Adobe says the MCP server exposes a wide range of Marketo operations and depends on the API role assigned to the connection.
That should make enterprise teams more serious, not less.
Because conversational access changes behaviour. It makes systems feel lighter, softer, more forgiving. It makes requests feel harmless. But the data underneath does not become harmless just because the interface becomes friendly.
This is the trap with off-the-shelf LLM thinking. People confuse ease of interaction with safety of use.
Those are not the same thing.
Cross-border and compliance questions do not disappear because the demo was impressive
Another thing people like to pretend away is geography.
Enterprises, especially global ones, do not just care whether something works. They care where services are hosted, how data moves, which terms apply, which entities are involved, what internal policies say, and whether the setup creates risk they will later have to explain to a privacy team or regulator.
Adobe’s public Marketo MCP documentation gives customers a hosted MCP server URL and the technical prerequisites to connect external AI tools. What it does not do is magically settle all the regional, residency, or cross-border questions for your organisation. Those still have to be worked through internally.
Which is why an enterprise buyer is perfectly justified in looking at this and saying, “Fine, but under what conditions would we actually be comfortable with it?”
That is not anti-AI.
That is what mature buyers sound like.
API credentials are where the grown-up questions begin
This part is wonderfully unglamorous, which is exactly why it matters.
Adobe’s setup requires Marketo API credentials and explicitly warns customers not to put secrets into version control, recommending environment variables or secret managers instead.
That is sensible advice. It is also a massive clue.
Because whenever credentials enter the story, so do all the practical questions people tend to skip in the excitement phase.
Where are the secrets stored?
Who has access?
How are they rotated?
Are they scoped properly?
Are they different by environment?
Has anyone audited where they are referenced?
Was the proof of concept built with permissions that are broader than anyone would admit in a proper review?
If those questions feel awkward, good. They should.
That awkwardness is the sound of enterprise reality catching up with a feature announcement.
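Adobe's advice translates into a few unglamorous lines. A minimal sketch, assuming hypothetical variable names, of reading Marketo credentials from the environment and failing loudly when a secret is missing rather than falling back to anything hard-coded:

```python
import os

def load_marketo_credentials() -> dict[str, str]:
    """Read API credentials from environment variables, never from source.

    The variable names are illustrative; use whatever your secret manager
    injects. The important part is raising on a missing secret instead of
    silently falling back to a value committed to version control.
    """
    required = ("MARKETO_CLIENT_ID", "MARKETO_CLIENT_SECRET", "MARKETO_MUNCHKIN_ID")
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required secrets: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```

Trivial code, but it forces the questions above to have answers: the secrets live somewhere nameable, access to that place is controllable, and rotation means updating one store rather than hunting through repositories.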
Off-the-shelf is not the same as enterprise-ready
The argument is not that AI assistants are bad.
The argument is that there is a big difference between a general-purpose LLM being able to connect to something and a purpose-built AI solution being designed around the realities of enterprise Marketing Ops.
Anthropic itself has spent the last year expanding admin and compliance controls for business and enterprise customers, including policy controls, monitoring, data retention controls, and compliance visibility. That tells you the market already understands the gap between consumer-famous AI and enterprise-governed AI.
And that is the gap worth talking about.
Enterprise teams do not need less AI.
They need AI that is narrower where it should be narrow, governed where it should be governed, observable where it should be observable, and designed around actual operational workflows rather than general-purpose novelty.
That is the distinction.
Not anti-AI. Anti-naive AI adoption. Not “don’t use intelligence in Marketo.”
“Don’t assume a broad off-the-shelf model connection is automatically the right answer for production Marketing Ops.”
That is a much more intelligent argument, and frankly a much more commercial one too.
The real enterprise question is not “is Claude clever?”
Of course it is clever. That is not the hard part. The hard part is whether a general-purpose model, connected through APIs into a live marketing platform, is the thing your enterprise actually wants to standardise around.
That is the better procurement question.
Not “can we connect it?” but “Should this be the shape of our AI operating model?”
Not “does the demo work?” but “Does this survive vendor review, data governance, access control design, audit requirements, and internal accountability?”
Not “is this innovative?” but “Is this controlled?”
That is where the grown-up conversation lives.
And that is why the sharpest position here is not anti-AI at all.
It is simply this:
Off-the-shelf LLMs make a great demo. Enterprise Marketing Ops needs something more deliberate.
That is the tension worth writing about. Because the businesses that get real value from this wave will not be the ones who rushed to connect the most famous model first.
They will be the ones who worked out what should be purpose-built, what should be tightly governed, what should be ring-fenced, what should never touch production casually, and what kind of AI they can actually defend when procurement, security, legal, and the Ops team all start asking the same lovely question:
Who approved this?