
  • Your LinkedIn ads are not failing. Your buyer-stage targeting is.

    Most B2B LinkedIn advertising has a targeting problem. Not because the platforms are useless. Not because the creative is always terrible. Not even because the audience is completely wrong. The bigger issue is that too many campaigns still treat the buyer journey as if it is one flat, convenient spreadsheet. Everyone gets the same message. The person just starting to understand the problem gets the same advert as the person already comparing partners. The account vaguely showing interest gets the same CTA as the account showing much stronger buying signals. The buying committee member who needs education gets the same proof point as the stakeholder who needs a reason to act. Then everyone gathers around the performance report and wonders why the budget did not go further. The answer is usually quite simple. The message did not match the moment.

ABM is not just a better account list

A lot of organisations talk about account-based marketing as if the whole job is building a list of companies and pointing paid media at it. That is not ABM. That is a spreadsheet with ambition. A target account list is useful, of course. You need to know which organisations you want to reach. You need some way of identifying where there is a genuine fit between the account, the problem, the technology environment, the buying committee, and your ability to help. But an account list alone does not tell you enough. It tells you who might matter. It does not tell you whether they are paying attention. It does not tell you whether they are actively researching. It does not tell you whether they are just becoming aware of a problem or already moving closer to a decision. It does not tell you what message they need next. That is where many paid campaigns fall down. They start with “who do we want to target?” and stop there. A better question is: What does this account need to hear right now? That is where account intelligence becomes genuinely useful.

Intent data should change the message, not just the audience

Intent data is often treated as a targeting input. A way of deciding which accounts to include or exclude. That is part of the value, but it is not the full story. If an account is showing signs of interest, that should influence more than whether they are added to a campaign. It should influence what they are shown, how directly they are approached, and what kind of next step makes sense. There is a big difference between an account that appears to be exploring a broad issue and an account showing stronger signals around a specific solution area. Those audiences should not be treated the same. One may need education. Another may need evidence. A third may need a sharper reason to act. This is where stage-based advertising becomes far more effective than simply uploading a target account list and hoping the algorithm has a moment of divine inspiration. Spoiler: it usually does not.

The problem with one-size-fits-all LinkedIn campaigns

LinkedIn is a powerful B2B advertising platform, but it is not magic. It will let you reach people by company, role, seniority, function, industry, geography, and plenty of other useful signals. That is helpful. But if every audience receives the same message, the campaign still has a relevance problem. This is especially true in complex B2B sales. Buying committees do not move in a straight line. Different stakeholders care about different things. Some are looking for commercial impact. Some care about technical feasibility. Some are worried about risk.
Some are just trying to stop a broken process from eating another quarter of their life. A generic advert cannot do all of that work. And yet many campaigns still try. They send broad thought leadership to accounts that may already be much further along. They push sales-heavy CTAs to accounts that are barely aware of the problem. They serve vague brand messaging to accounts that probably need proof. They treat “targeted” as a substitute for “relevant.” That is how budget leaks. Not dramatically. Not all at once. Just quietly, one poorly matched impression at a time.

Better targeting starts before LinkedIn

One of the biggest mistakes in paid media is expecting the advertising platform to do all the strategic work. LinkedIn can help you reach a defined audience. But the quality of that audience depends heavily on the thinking that happens before it gets into Campaign Manager. The stronger approach is to define the market first. That means identifying accounts where there is a clear fit between business complexity, technology environment, likely challenges, and the services or expertise you can credibly provide. For Marketing Operations, that fit matters enormously. A company with a simple tech stack, small team, and limited operational complexity is unlikely to need the same kind of support as a larger organisation managing multiple platforms, global campaigns, messy data, complex governance, and pressure to prove marketing’s contribution to revenue. The same applies to technology environment. If you have deep expertise in certain platforms, systems, or operating models, it makes sense to focus attention on accounts where that expertise is most relevant. That is not exclusionary. It is sensible. Paid media budget is not a charity. It should not be scattered lovingly across the entire internet in the hope that something nice happens. It should be focused where there is a meaningful chance of relevance.

Buyer readiness should shape the campaign

Once you have a better understanding of account fit and intent, the next step is to think about buying readiness. Broadly speaking, accounts can usually be grouped into different levels of awareness and engagement. Some are early. They may be showing signs of interest, but they are probably still making sense of the problem. They need content that educates, challenges, and helps them see the issue more clearly. Some are warmer. They may be engaging more actively, exploring specific topics, or showing signs that the problem is becoming more important. They need proof, examples, and practical confidence. Some are further along. They may be showing stronger signals and may be closer to commercial consideration. They need sharper differentiation, clearer value, and a reason to speak to someone. The mistake is treating all of these accounts as if they are ready for the same advert. They are not. Earlier-stage accounts do not need to be shoved toward a sales conversation before they trust you. Warmer accounts do not need yet another broad “why this topic matters” article. Later-stage accounts do not need vague inspiration when they are looking for a partner who can actually help. The job of advertising changes depending on where the buyer appears to be. At the top, the job is to create recognition. In the middle, the job is to build confidence. Closer to action, the job is to create preference. That distinction matters.

Match the content to the stage

A simple way to think about this is: Educate early. Prove in the middle.
Differentiate when the account is ready. For earlier-stage accounts, thought leadership is usually the right play. Not the bland kind that says nothing in 900 words and calls it insight. Proper thought leadership. Content that names the problem, challenges assumptions, and helps the buyer understand why the issue matters. For more engaged accounts, proof becomes more important. This is where customer stories, case studies, practical guides, and outcome-led content can do the heavy lifting. These accounts are more likely to be asking: Can this team solve problems like ours? Have they done it before? Do they understand our world? Will they make things better or just add another layer of noise? For later-stage accounts, the message can become more direct. That does not mean shouting “book a meeting” at everyone like a toddler with a LinkedIn budget. It means being clearer about why your approach is different, where you add value, and why the account should take the next step. The closer someone is to action, the less patience they have for vague content.

This is how paid media becomes more useful

When account intelligence and buyer-stage thinking are connected to LinkedIn advertising, the whole campaign model improves. You are no longer just asking: Who should see this ad? You are asking: Why this account? Why this person? Why this message? Why now? That is a much better foundation. It also makes performance easier to interpret. If early-stage audiences are not engaging with thought leadership, the issue may be the angle, the creative, or the problem framing. If engaged audiences are not responding to proof-led content, the issue may be credibility, relevance, or the strength of the case study. If later-stage audiences are not taking action, the issue may be differentiation, offer, friction, or timing. That is far more useful than staring at one blended campaign report and trying to extract wisdom from a click-through rate that looks like it needs medical attention. Stage-based advertising gives the performance data more context. And context is where better decisions come from.

ABM and paid media need to stop living in separate rooms

In many organisations, ABM strategy and paid media execution are still too disconnected. ABM teams build the account strategy. Paid media teams build the campaigns. Content teams create the assets. Sales teams ask why none of it has produced a meeting by Tuesday. Lovely. Very calming. A better model connects the whole thing. The account strategy should inform the paid media audience. The intent signals should inform the campaign structure. The buying stage should inform the message. The content should support the likely next step. The performance data should feed back into the wider ABM programme. That is how LinkedIn advertising becomes part of a proper account-based motion, rather than a very expensive noticeboard.

Less reach. More relevance.

There is still a strange obsession in B2B with reach. More impressions. Bigger audiences. Larger numbers in the report. But in ABM, reach is not the prize. Relevance is. A smaller, better-qualified audience with a sharper message will usually be more valuable than a huge audience that technically matches a few filters but has no clear reason to care. The goal is not to be seen by everyone. The goal is to be noticed by the right people, inside the right accounts, with a message that makes sense for where they are in the buying process. That is how paid media starts working harder. Not by shouting louder, but by being more precise.
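To make the stage-based grouping concrete, here is a minimal sketch of how intent and engagement signals might be bucketed into the three message tracks described above. The score threshold, field names, and flags are illustrative assumptions, not a recommended model - every intent provider scores differently.

```python
# A minimal sketch of stage-based grouping, assuming a hypothetical
# intent score (0-100) and an engagement flag per account. Thresholds,
# field names, and stage labels are illustrative, not prescriptive.

def message_stage(account: dict) -> str:
    """Map an account's signals to the message it should see next."""
    score = account.get("intent_score", 0)   # e.g. from an intent data provider
    engaged = account.get("engaged", False)  # e.g. site visits, content interaction

    if score >= 70 and engaged:
        return "differentiate"  # later stage: sharper value, reason to act
    if score >= 40 or engaged:
        return "prove"          # middle stage: case studies, outcomes
    return "educate"            # early stage: thought leadership

accounts = [
    {"name": "Acme Corp", "intent_score": 82, "engaged": True},
    {"name": "Globex", "intent_score": 45, "engaged": False},
    {"name": "Initech", "intent_score": 10, "engaged": False},
]

for a in accounts:
    print(a["name"], "->", message_stage(a))
```

Each bucket then maps to its own LinkedIn campaign, with its own creative, its own proof points, and its own CTA.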
The real opportunity

The real opportunity is not just connecting platforms. It is connecting thinking. Account fit. Intent. Buyer readiness. Paid media targeting. Content strategy. Sales follow-up. When those pieces are aligned, LinkedIn advertising becomes much more than a traffic driver. It becomes a way to support account progression, build familiarity, create preference, and reduce wasted spend. That does not mean every advert will suddenly perform beautifully. This is still B2B marketing, not witchcraft. But it does mean campaigns are built on stronger logic. And that matters. Because the future of B2B advertising is not more generic campaigns aimed at broader audiences. It is sharper account selection, better use of intent, smarter stage-based messaging, and a much clearer understanding of what the buyer needs next. Your LinkedIn ads may not be failing because LinkedIn is the problem. They may be failing because every buyer is being treated as if they are in the same place. And they are not. Discover our ABM Services

  • The biggest data breach in your org is happening in a chat window

    The risk isn't that your team is using AI. The risk is that they're feeding it data you're responsible for - client data, prospect data, CRM data - without any shared understanding of what's allowed, what's not, and who's accountable when something goes wrong. And right now, in the vast majority of organisations, that shared understanding doesn't exist. There's no policy. There's no guidance. There's just a team doing their best work with the best tools available and hoping nobody asks uncomfortable questions. This isn't hypothetical. It's Tuesday. The gap between "we use AI" and "we have rules about how we use AI" has become one of the biggest unmanaged risks in Marketing Operations. Not because the tools are dangerous. Because the tools are useful - so useful that adoption outran governance by about two years. Consider what a typical MOPs practitioner might paste into an AI tool in a normal working week. Campaign performance data tied to specific accounts. CRM exports with lead scores, lifecycle stages, and sales notes. Email copy containing personalisation tokens that reference real people. None of that is malicious. All of it is efficient. And most of it involves personal data that, depending on your jurisdiction, is covered by GDPR, CCPA, or equivalent regulations that your legal team spent significant time and money building compliance frameworks around. Those compliance frameworks were designed for your CRM, your MAP, your data warehouse - systems with known data processing agreements, documented retention policies, and controlled access. They were not designed for a situation where someone copies 500 contact records into a third-party AI chat window that may or may not retain the input, may or may not use it for model training, and definitely wasn't included in your last data protection impact assessment.

The AI tools aren't the problem. The absence of rules is.

To be clear: This is not an argument against using AI. That ship sailed. AI is now fundamental to how MOPs work gets done, and pretending otherwise helps nobody. The issue is that most organisations skipped the step between "AI is useful" and "we have clear guidelines for using it." They went straight from experimentation to daily dependence without ever defining the boundaries. What data can go into an AI tool? Nobody decided. Which AI tools are approved? Nobody compiled a list. What happens if someone pastes data from a client engagement into a personal ChatGPT account? Nobody thought about it. Who is responsible if pasted data ends up in a training set and resurfaces in someone else's output? Nobody wants to answer that one. The result is a team making individual judgement calls, dozens of times a day, with no framework to guide them. Some people are conservative and avoid pasting anything sensitive. Some people paste everything because it's faster and nobody told them not to. Most people are somewhere in the middle, vaguely uncomfortable but not uncomfortable enough to stop. That's not a sustainable position. It's a pre-incident position.

What's actually at risk

There are three layers of risk, and they escalate. The first is data retention. Different AI platforms handle inputs differently. Some retain inputs to improve their models unless you explicitly opt out. Some retain inputs for a period as part of their service. Some offer enterprise tiers with no-retention guarantees. Most practitioners don't know which tier their organisation is on, because nobody told them.
If your team is using free or personal accounts - which many are - the retention defaults are almost certainly broader than your data compliance framework allows. The second is data leakage. When data goes into an AI model's training set, it doesn't come back out in recognisable form - but elements can resurface in unexpected ways. The more specific and structured the input, the higher the theoretical risk. Nobody has had a major public incident tied to B2B marketing data leaking through an AI model. Yet. But "it hasn't happened publicly" is not the same as "it can't happen," and it's definitely not a position your legal team would endorse. The third is regulatory exposure. If your organisation operates under GDPR, you have specific obligations about where personal data gets processed, by whom, and on what legal basis. Pasting personal data into an AI tool that wasn't included in your processing records, wasn't covered by a data processing agreement, and wasn't part of the consent basis you collected the data under is, technically, a compliance gap. Whether anyone will notice is a different question from whether it's compliant. And the answer to the compliance question is almost certainly no.

The fix isn't complicated. It's just overdue.

The good news is that building a sensible AI usage policy for a Marketing Ops team is not a six-month governance project. It's a conversation, a document, and a decision about where the lines are. Start with which tools are approved. If your organisation has enterprise accounts with data processing agreements in place - most major AI platforms offer these on paid tiers - those are the tools the team should use. Personal accounts, free tiers, and unvetted tools should be off limits for any work involving client or prospect data. That's not restrictive. That's basic hygiene. Then define what data can and can't go in. A useful framework is to think about it in tiers. Publicly available information - company names, general industry categories, publicly listed job titles - is low risk. Internal operational data - campaign structures, automation logic, anonymised performance metrics - is medium risk and probably fine in an approved tool. Personal data - names, email addresses, contact records, anything tied to an identifiable individual - needs explicit rules, and in many cases the answer should be "don't paste it." Make the policy specific enough to be useful. "Use AI responsibly" is not a policy. "Do not paste CRM contact records into any AI tool that is not on the approved list" is a policy. "If you need to use AI to clean or segment a list containing personal data, use [approved tool] on the enterprise account and remove names and email addresses first" is a policy. People follow rules they can understand. They ignore principles that sound nice but don't tell them what to do at 3pm on a Wednesday when they're trying to fix a list. Finally, tell people. The most common reason teams don't follow AI data policies is that nobody communicated the policy. It lives in a document nobody reads, or it was mentioned once in a meeting that half the team missed. If you want compliance, you need a one-page guide that's easy to find, easy to understand, and explicitly covers the five or six tasks where practitioners most commonly use AI with company data.
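As one illustration of what "remove names and email addresses first" can look like in practice, here is a minimal sketch of a pre-paste scrubbing step. The field names and the regex are assumptions for the example, not a complete PII filter - a real policy would go further than this.

```python
import re

# A minimal sketch of the tiered rule above: drop tier-3 personal-data
# fields entirely and redact stray email addresses before any record
# goes near an AI tool. Field names and the regex are illustrative only.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

PERSONAL_FIELDS = {"first_name", "last_name", "email", "phone"}  # tier 3: don't paste

def scrub_record(record: dict) -> dict:
    """Drop personal-data fields and redact stray email addresses."""
    cleaned = {}
    for field, value in record.items():
        if field in PERSONAL_FIELDS:
            continue  # personal data stays out of the prompt entirely
        if isinstance(value, str):
            value = EMAIL_RE.sub("[REDACTED]", value)
        cleaned[field] = value
    return cleaned

record = {
    "first_name": "Jane", "email": "jane@example.com",
    "company": "Acme Corp", "industry": "Software",
    "notes": "Follow up with jane@example.com about renewal",
}
print(scrub_record(record))
# {'company': 'Acme Corp', 'industry': 'Software',
#  'notes': 'Follow up with [REDACTED] about renewal'}
```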
Waiting for an incident is not a strategy

Every organisation that eventually builds an AI data policy does so for one of two reasons. Either someone thought ahead and built it proactively, or something went wrong and they built it under pressure. The proactive version takes a week. A conversation with legal, a conversation with IT, a conversation with the team leads, and a document. The reactive version takes months - because it comes with an incident review, a legal assessment, a communications plan, and the kind of organisational anxiety that makes everyone overcorrect. Right now, most B2B marketing teams are sitting between these two scenarios. AI usage is widespread, data is flowing into tools that aren't governed, and nobody has had the conversation yet. It's working fine. Until it isn't. The policy doesn't need to be perfect. It needs to exist. Because the alternative isn't "no rules and everything is fine." The alternative is "no rules and everyone is guessing." And guessing, at scale, with other people's data, is not something any organisation should be comfortable with for long. Discover our AI Services

  • Connecting AI to your MAP was never the hard part

    If you work in Marketing Operations, you've probably seen the news. Claude - Anthropic's AI assistant - can now connect directly to Marketo, HubSpot, Salesforce, and hundreds of other tools via the Model Context Protocol (MCP). It can query live CRM data, create records, manage lists, and interact with your marketing automation platform from inside a chat window. It's a genuine step forward. And it's worth understanding clearly - both what it makes possible and where it falls short. Because connecting an AI to your platform is not the same as having an AI that knows how to use it.

What Claude + MCP actually does

MCP is an open protocol that lets AI assistants connect to external tools. Adobe launched an official Marketo MCP server with over 100 operations - forms, programs, smart campaigns, leads, emails, lists, folders. HubSpot has a native connector in Claude's directory. Salesforce integrates through Agentforce and third-party MCP servers. Middleware platforms like CData Connect AI offer access to 350+ data sources in a single setup. Once connected, Claude can pull live data from your MAP and CRM, answer questions about your instance, create and update records, and execute API operations - all through natural language. For a MOPs practitioner who knows exactly what they need, this is useful. You can ask Claude to find leads matching specific criteria, pull campaign performance data, or create records without switching between platforms. It's a faster way to interact with your existing APIs. That's the strength. It's also where the limitations begin.

There's a difference between talking to your platform and working inside it

Claude + MCP gives you a general-purpose AI with access to your platform's API. It can do what the API can do - which is a lot. But it approaches every task from scratch. It doesn't know your naming conventions. It doesn't know your campaign architecture. It doesn't know your QA process, your scoring model logic, or the reason your nurture programs are structured the way they are. Every interaction with Claude starts with you explaining what you need, how you need it, and what the constraints are. The quality of the output depends entirely on the quality of your prompt. If you know exactly what to ask for and how to ask for it, Claude will execute. If you don't - or if the task involves institutional knowledge that lives in your team's heads rather than your platform's data - Claude is guessing. This is the gap that purpose-built AI agents are designed to close. MOPsy, for example, was configured to work inside specific MAP instances - tuned to naming conventions, folder structures, and QA rules during setup. When it builds a campaign, it doesn't need the user to explain what a campaign looks like in their environment. That context is already there. It's a different approach. Claude gives you breadth - connect to anything, ask anything. A purpose-built agent trades that breadth for depth inside a specific operational domain.

The prompt problem

This is the part that matters most for day-to-day MOPs work. Claude + MCP requires you to be a good prompter. You need to know the right questions to ask, the right level of specificity, and enough about the platform's API structure to guide Claude toward the right actions. MOPsy doesn't require prompts in the traditional sense. It was designed around the workflows MOPs teams actually run - campaign builds, email QA, analytics pulls - and it executes them with minimal instruction. You don't need to explain what a QA check involves or what fields to validate. MOPsy knows because that knowledge was built into the agent, not left to the user to provide each time. This is the difference between a tool that can do anything you tell it to do and a tool that already knows what needs doing.
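To illustrate the difference, here is a minimal sketch of where instance context lives in each model. The conventions shown are hypothetical, and this is not MOPsy's actual configuration - just the shape of the trade-off: with a general-purpose connection, the context travels inside every prompt; with a purpose-built agent, it is configured once.

```python
# A minimal sketch of the gap described above. All field names and
# conventions here are illustrative assumptions, not a real setup.

INSTANCE_CONTEXT = {
    "naming_convention": "{year}-{region}-{program}-{asset}",
    "folder_root": "Marketing Activities/2025",
    "qa_checklist": ["subject line", "preheader", "UTM params", "unsub link"],
}

def general_purpose_prompt(task: str) -> str:
    """General-purpose AI: the user restates the context in every session."""
    context_lines = "\n".join(f"- {k}: {v}" for k, v in INSTANCE_CONTEXT.items())
    return f"{task}\n\nOur conventions (please follow exactly):\n{context_lines}"

def purpose_built_prompt(task: str) -> str:
    """Purpose-built agent: the context is already part of the agent's setup."""
    return task  # conventions live in the agent's configuration, not the prompt

print(general_purpose_prompt("Build the Q3 webinar invite campaign"))
print(purpose_built_prompt("Build the Q3 webinar invite campaign"))
```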
Setup, support, and the operational gap

Claude + MCP is a self-service integration. You configure credentials, connect the MCP server, and start chatting. If something goes wrong, you troubleshoot it yourself. If you want Claude to follow your team's processes, you need to teach it - every time, in every conversation, because Claude doesn't retain context between sessions. MOPsy ships with complete setup into your MAP instance. Sojourn's team configures it to match your campaigns, your conventions, and your QA standards. Your team gets live training with real-world use cases. And ongoing support means MOPsy evolves with your operations - new use cases get built, edge cases get handled, and the agent gets smarter about your specific environment over time. This isn't a product difference. It's a model difference. Claude + MCP is a tool you adopt. MOPsy is a capability you gain - with the people behind it to make sure it actually works in your environment.

What matters for MOPs teams

Most MOPs work isn't ad hoc data exploration. It's operational, repeatable, and specific to your instance. Campaign builds. Email design and QA. Workflow execution that follows your team's actual processes rather than generic best practices. Tasks where the value isn't in connecting to the API - it's in knowing what to do once you're connected. This is where purpose-built agents earn their place. MOPsy was designed around these workflows, with guardrails and human oversight built in as design principles. When AI is executing actions inside your MAP - creating campaigns, modifying records, triggering workflows - the margin for error is real. A general-purpose AI operates with whatever permissions the API user has, and the guardrails are whatever you remember to include in your prompt. An agent built for MOPs was designed with that risk in mind from day one. There's also an adoption question. Most MOPs teams are under-resourced and overloaded. They don't have time to learn prompt engineering on top of everything else. An agent that already knows the workflows - and comes with setup, training, and ongoing support - meets them where they are, not where they'd need to learn to be.

Connecting to AI isn't the hard part anymore

The Model Context Protocol is a meaningful development. Platforms connecting to AI assistants are going to become standard infrastructure, not a competitive advantage. Every MAP and CRM will offer it within a year if they don't already. The hard part was never the connection. It's making AI work reliably inside your specific operations - with your conventions, your processes, your QA standards, and the institutional knowledge that no API exposes. That's the problem worth solving, and it's not one that a general-purpose connector solves on its own. Meet MOPsy

  • How our Canada-based luxury holiday client saved their Marketo instance - and turned email into a revenue engine

    For our luxury holiday client, this wasn’t a “nice-to-have” optimisation project. It was an existential one. Email performance was flat. Marketo was under scrutiny. And the CEO had made it very clear: Prove the value of the platform and the team behind it in 2025, or both were gone. That put the Marketing Operations team in an unenviable position. They didn’t just need incremental improvement. They needed a visible, defensible, board-level turnaround... and fast. That’s when they partnered with Sojourn Solutions. What followed was a complete rethink of lifecycle strategy, data flow, measurement, and how email actually contributes to revenue. Not vanity metrics. Not “engagement.” Real pipeline and bookings. The result? Nearly double the number of sales-ready leads, a 92% increase in Email-Influenced Opportunities, and a 74% year-over-year increase in Email-Influenced Bookings. Marketo didn’t get cut. It became indispensable.

The challenge: Email had no credibility

Before Sojourn got involved, email was struggling to justify its place in the marketing mix. Marketo was live, campaigns were being sent, but the impact wasn’t clear or trusted. Sales didn’t believe the leads were ready. Leadership couldn’t see a straight line from email activity to revenue. And without that connection, the platform itself became an easy target. The CEO’s mandate was blunt: If email and Marketo couldn’t demonstrate measurable value in 2025, the subscription, and the team supporting it, would be eliminated. The clock was ticking.

Phase one: Fix the foundations, fast

The first priority wasn’t more campaigns. It was control. Sojourn started with a full strategic and operational audit of both the email channel and Marketo. This surfaced the usual suspects: Lifecycle stages that didn’t reflect reality, inconsistent lead assignment, data gaps between Salesforce and Marketo, and reporting that looked busy but proved nothing. From there, the focus shifted to building a lifecycle model the business could actually stand behind. Lead lifecycle programs were redesigned to reflect how prospects really move from early interest to sales-ready, with clear definitions that both marketing and sales agreed on. Salesforce and Marketo sync issues were resolved, ensuring leads flowed cleanly, ownership was clear, and no one was chasing ghosts. Email templates were standardised and rebuilt to support scalability, while their team received hands-on Marketo training and ongoing on-demand expertise. This wasn’t about dependency. It was about confidence. By the end of phase one, Marketo wasn’t just “working.” It was finally trustworthy.

Phase two: Lifecycle-based nurture that actually nurtures

With the foundations in place, attention can now turn to what email was always meant to do: move buyers forward through more relevant, timely communication. Having established a lifecycle model to identify where individuals are in their buying journey, the next phase will focus on building lifecycle-aligned nurture streams that deliver deeper, stage-specific personalisation. Sojourn will be working closely with their team to design and activate nurtures that respond to buying intent and behaviour, ensuring email communications are most relevant at key decision points. Using behavioural signals and expressed intent, emails will become more tailored to what prospects are actively looking for.
For example, once someone reaches the quote stage, they will receive messaging aligned to their specific interests, products, or use cases, rather than generic follow-ups. This ensures email plays a meaningful role in influencing progression, without prematurely over-engineering nurture programs. Behind the scenes, data quality, integration, and QA will ensure that this personalisation is accurate, scalable, and reliable. This work lays the groundwork for future lifecycle-based nurtures, while immediately delivering more relevant, buyer-aligned email experiences that reflect how people actually buy - not how marketing wishes they did.

Measurement: Proving email’s impact on revenue

One of the biggest turning points in this engagement was measurement. Sojourn helped them move beyond surface-level email metrics and set up reporting that tied email engagement directly to pipeline and bookings. Email-influenced opportunity and booking reporting was configured to show how email contributed across the lifecycle, not just at first touch. This gave leadership something they’d never had before: Clear visibility into how email supported revenue creation over time. Instead of arguing about opens and clicks, the conversation shifted to influenced opportunities, bookings, and year-over-year growth. Email stopped being a cost centre and started showing up as a revenue driver in its own right. And once that happened, the CEO stopped asking whether Marketo should be cut, and started asking what else the platform could do.

The results: From survival to sustained growth

The impact was immediate and sustained. Year over year, the Canada-based client delivered additional bookings, representing $13.3M in incremental revenue - an 83% uplift. Email-Influenced Opportunities increased by 92%, driven by a near doubling of sales-ready leads. Just as importantly, performance became consistent. Instead of relying on seasonal spikes, FY2025 delivered strong results quarter after quarter. Growth stabilised. Forecasting improved. Confidence returned. Email wasn’t just back in favour. It had become a core pillar of revenue generation.

What the client had to say:

“Can’t say enough wonderful things about your team. These guys are amazing. They are bright and smart and work well together. They understand each other’s strengths and are super collaborative. There is no stress on the project. I’ve worked with ******* and ***** and you are hands down the best agency I’ve worked with. We’re getting our ROI! 100 more bookings than last year this quarter.” - **** ******, Marketing Operations Manager

The takeaway

This wasn’t about saving a tool. It was about proving impact. By fixing lifecycle strategy, tightening data and measurement, and building nurture programs rooted in how buyers actually behave, our Canada-based client transformed email from a liability into leverage. Marketo didn’t just survive the CEO’s scrutiny. It earned its place at the table. And that’s what real Marketing Operations success looks like.
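The case study above doesn't publish the client's attribution rules, but one common way to define "email-influenced" is a time-windowed join between email engagement and opportunity creation. Here is a minimal sketch of that idea; the 90-day window and the event shapes are assumptions for illustration, not the client's actual model.

```python
from datetime import datetime, timedelta

# A minimal sketch of one common "email-influenced" definition: an
# opportunity counts if the same account had a meaningful email
# engagement in a window before the opportunity was created.

INFLUENCE_WINDOW = timedelta(days=90)  # assumed window, not the client's

def email_influenced(opportunity: dict, email_events: list[dict]) -> bool:
    """True if any email click from the same account precedes the
    opportunity's creation date within the influence window."""
    created = opportunity["created_at"]
    return any(
        e["account_id"] == opportunity["account_id"]
        and e["type"] == "click"
        and created - INFLUENCE_WINDOW <= e["timestamp"] <= created
        for e in email_events
    )

opp = {"account_id": "A1", "created_at": datetime(2025, 6, 1)}
events = [{"account_id": "A1", "type": "click", "timestamp": datetime(2025, 4, 20)}]
print(email_influenced(opp, events))  # True
```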

  • Off-the-shelf AI in Marketo sounds great. But governance is where it gets real...

    We've all seen the latest tech news and there is a certain type of announcement that makes people behave like they have just seen the second coming of competence. A major platform connects to a major model. The screenshots look slick. The demo looks fast. Suddenly everyone starts talking as if typing a request in plain English has solved complexity, governance, and the last ten years of operational mess in one go. This is one of those moments. To recap: Adobe has now published a Marketo MCP server that lets AI tools connect to Marketo, including Claude Desktop, Claude Code, Cursor, and VS Code with GitHub Copilot. Adobe says it exposes more than 100 operations and requires REST API access, Marketo LaunchPoint credentials, and network access to Adobe’s hosted MCP endpoint. Adobe also tells customers to use a dedicated API user and only the permissions required. All of that is real. What is not real is the idea that this automatically equals an enterprise-ready AI strategy. And that is the part worth questioning. Because the issue is not AI in Marketo. The issue is whether a general-purpose off-the-shelf LLM, connected through credentials into a live enterprise marketing platform, is actually the right operational answer for teams dealing with approvals, customer data, access controls, regional constraints, procurement reviews, and the usual internal politics of “who signed this off?” Adobe’s own setup guide makes clear this is not just a chat feature. It is a permissioned connection into Marketo through API access and a hosted MCP service. That is a very different story.

The mistake is confusing “can connect” with “is fit for purpose”

This is where the market gets a bit silly. A connection gets announced and people act as though capability and suitability are the same thing. They are not. Yes, Claude can connect to Marketo through Adobe’s MCP server. But “can connect” is the beginning of the conversation, not the end of it. Adobe’s documentation is explicit that setup involves REST API access, admin-created credentials in LaunchPoint, a hosted MCP URL, and a permissioned API user. That means the real enterprise question is not whether the connection exists. It is whether the connection belongs in your operating model. Because off-the-shelf intelligence is still off-the-shelf. It is broad. It is flexible. It is good at looking useful quickly. Those are all reasons people like it. They are not, on their own, reasons to trust it with live enterprise Marketing Ops. Enterprise teams do not just need something that can do things. They need something that can do the right things, in the right places, under the right controls, with the right auditability, and without giving Security, Procurement, Legal, and Operations a collective eye twitch. That is a much harder brief.

Procurement is not rejecting AI. Procurement is rejecting vagueness.

This is the bit people love to get wrong. When procurement or security starts asking awkward questions, the lazy narrative is that the business is being slow, old-fashioned, resistant, bureaucratic, or anti-innovation. Rubbish. What procurement is usually rejecting is not AI. It is vagueness. Who is the vendor relationship actually with? Where does the traffic go? What data can be exposed? What are the processor and subprocessor implications? What gets logged? What controls exist around usage? What happens if one part of the chain changes? Who owns the incident if something goes wrong between tools and nobody wants to claim it?
Those are not irritating questions. They are the whole point... And this setup gives them plenty to work with. Adobe lists multiple supported AI tools, requires external client configuration, and routes the connection through an Adobe-hosted MCP endpoint into Marketo APIs. That means this is not just “Marketo has AI now.” It is a multi-system, permissioned integration that an enterprise is expected to govern properly. So no, procurement is not the problem here. Procurement is the first person in the room acting like production systems deserve adult supervision.

A famous LLM is not an operating model

This is probably the line most worth saying out loud. A famous LLM is not an operating model. It is a tool. A connection is not a strategy. A setup guide is not a governance framework. And a slick demo is not proof that your enterprise should be doing any of this in production. What actually matters is everything around the model. What permissions does it run under? What tasks is it allowed to perform? What is read-only and what is not? What environments can it touch? What approval gates still apply? What gets logged and reviewed? What is blocked outright? Who owns the design of those boundaries? Adobe’s documentation already points straight at that reality by stressing dedicated API users, least privilege, and inherited Marketo API limits. Those warnings exist because the real risk is not that the tool is clever. It is that people will connect it too broadly and pretend that is innovation. That is not innovation. That is just a new route to old mistakes.

Data handling is where the excitement usually goes quiet

It is very easy to sound bullish about AI when you keep the conversation abstract. It gets harder when you remember what is actually inside a real Marketo instance. Lead records. Personal information. Segmentation logic. Behavioural data. Sales handoff points. Operational workflows. Regional rules. Compliance concerns. Legacy fields nobody wants to go near. Program history. Approval dependencies. Once a general-purpose model can query or act through a connection into that environment, the question is no longer “is this powerful?” It is “what exactly could this expose, retrieve, summarise, alter, or help someone access more casually than before?” Adobe says the MCP server exposes a wide range of Marketo operations and depends on the API role assigned to the connection. That should make enterprise teams more serious, not less. Because conversational access changes behaviour. It makes systems feel lighter, softer, more forgiving. It makes requests feel harmless. But the data underneath does not become harmless just because the interface becomes friendly. This is the trap with off-the-shelf LLM thinking. People confuse ease of interaction with safety of use. Those are not the same thing.

Cross-border and compliance questions do not disappear because the demo was impressive

Another thing people like to pretend away is geography. Enterprises, especially global ones, do not just care whether something works. They care where services are hosted, how data moves, which terms apply, which entities are involved, what internal policies say, and whether the setup creates risk they will later have to explain to a privacy team or regulator. Adobe’s public Marketo MCP documentation gives customers a hosted MCP server URL and the technical prerequisites to connect external AI tools.
What it does not do is magically settle all the regional, residency, or cross-border questions for your organisation. Those still have to be worked through internally. Which is why an enterprise buyer is perfectly justified in looking at this and saying, “Fine, but under what conditions would we actually be comfortable with it?” That is not anti-AI. That is what mature buyers sound like.

API credentials are where the grown-up questions begin

This part is wonderfully unglamorous, which is exactly why it matters. Adobe’s setup requires Marketo API credentials and explicitly warns customers not to put secrets into version control, recommending environment variables or secret managers instead. That is sensible advice - it is also a massive clue. Because whenever credentials enter the story, so do all the practical questions people tend to skip in the excitement phase. Where are the secrets stored? Who has access? How are they rotated? Are they scoped properly? Are they different by environment? Has anyone audited where they are referenced? Was the proof of concept built with permissions that are broader than anyone would admit in a proper review? If those questions feel awkward, good. They should. That awkwardness is the sound of enterprise reality catching up with a feature announcement.
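For the avoidance of doubt about what "environment variables, not version control" looks like, here is a minimal sketch of a Marketo REST token request that keeps secrets out of the codebase. The variable names are illustrative; the identity endpoint and client-credentials grant are standard Marketo REST API behaviour, but verify the details against Adobe's own documentation rather than this sketch.

```python
import os
import requests

# A minimal sketch: credentials come from the environment (or a secret
# manager), never from source control. The base URL is instance-specific.

MARKETO_BASE = os.environ["MARKETO_BASE_URL"]        # e.g. https://123-ABC-456.mktorest.com
CLIENT_ID = os.environ["MARKETO_CLIENT_ID"]          # from the LaunchPoint service
CLIENT_SECRET = os.environ["MARKETO_CLIENT_SECRET"]  # never hard-coded, never committed

def get_access_token() -> str:
    """Exchange client credentials for a short-lived Marketo access token."""
    resp = requests.get(
        f"{MARKETO_BASE}/identity/oauth/token",
        params={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The same questions the section raises - rotation, scoping, per-environment separation - apply to wherever those environment variables are actually set.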
Off-the-shelf is not the same as enterprise-ready

The argument is not that AI assistants are bad. The argument is that there is a big difference between a general-purpose LLM being able to connect to something and a purpose-built AI solution being designed around the realities of enterprise Marketing Ops. Anthropic itself has spent the last year expanding admin and compliance controls for business and enterprise customers, including policy controls, monitoring, data retention controls, and compliance visibility. That tells you the market already understands the gap between consumer-famous AI and enterprise-governed AI. And that is the gap worth talking about. Enterprise teams do not need less AI. They need AI that is narrower where it should be narrow, governed where it should be governed, observable where it should be observable, and designed around actual operational workflows rather than general-purpose novelty. That is the distinction. Not anti-AI - anti-naive AI adoption. Not “don’t use intelligence in Marketo.” “Don’t assume a broad off-the-shelf model connection is automatically the right answer for production Marketing Ops.” That is a much more intelligent argument, and frankly a much more commercial one too.

The real enterprise question is not “is Claude clever?”

Of course it is clever. That is not the hard part. The hard part is whether a general-purpose model, connected through APIs into a live marketing platform, is the thing your enterprise actually wants to standardise around. That is the better procurement question. Not “can we connect it?” “Should this be the shape of our AI operating model?” Not “does the demo work?” “Does this survive vendor review, data governance, access control design, audit requirements, and internal accountability?” Not “is this innovative?” “Is this controlled?” That is where the grown-up conversation lives. And that is why the sharpest position here is not anti-AI at all. It is simply this: Off-the-shelf LLMs make a great demo. Enterprise Marketing Ops needs something more deliberate. That is the tension worth writing about. Because the businesses that get real value from this wave will not be the ones who rushed to connect the most famous model first. They will be the ones who worked out what should be purpose-built, what should be tightly governed, what should be ring-fenced, what should never touch production casually, and what kind of AI they can actually defend when procurement, security, legal, and the Ops team all start asking the same lovely question: Who approved this? Discover our purpose-built Agentic AI

  • Nobody Googles You Anymore

    The discovery layer has moved. The place where buyers form their first impression of your category is no longer a search engine - it's an AI assistant. And the content that informs that AI's answer is being written right now, by whoever publishes the clearest, most structured, most accessible version of the truth. If that's your competitor, then the AI tells the buyer their version of reality. If it's you, the AI tells the buyer yours. Most marketing teams are still pouring budget into a channel the buyer is quietly leaving. This is the shift from SEO to GEO - Generative Engine Optimisation - and it's not coming. It's here.

The buyer journey now starts in a chat window

Think about how a senior decision-maker actually researches a purchase now. They don't open Google and type "best marketing automation platform 2026" and scroll through ten pages of vendor content pretending to be objective. That behaviour is dying. Not dead - dying. Quickly. What they do instead is open an AI assistant and ask a real question. Not a keyword. A question. "What should I look for in a MAP if my team is small and we're migrating off our current platform?" "Who are the strongest partners for CRM implementation in EMEA?" "What's the difference between RevOps and MOPs and does the distinction actually matter?" The AI gives them one answer. Not ten links. One synthesised, confident, cited response. It names brands. It describes capabilities. It makes comparisons. The buyer reads it in 30 seconds and forms a mental shortlist that would have taken an hour of Googling to build two years ago. If your brand is part of that answer, you're on the shortlist before the buyer has visited a single website. If you're not, you missed the window entirely. And here's the part that makes this difficult to measure: you'll never know you missed it. The buyer who was never introduced to you doesn't visit your site, doesn't become a lead, and doesn't show up in your CRM. They just go somewhere else. The loss is invisible.

SEO and GEO are not the same discipline

The instinct will be to hand this to whoever manages SEO. That makes sense on the surface - it involves content, search, and visibility. But the mechanics are different enough that treating GEO as an SEO extension will produce disappointing results. SEO optimises for ranking position in a list of links. The buyer sees the list, clicks a link, and lands on your site. Even ranking fifth means you're visible. GEO optimises for inclusion inside an AI-generated answer. There is no list. The AI gives one answer and names the brands it considers relevant. You're either in the answer or you're not. There is no page two. What gets you into an AI-generated answer is also different from what gets you ranked in search. Keyword density doesn't help - AI isn't matching keywords. Backlink volume matters less than content clarity. What AI systems are looking for is content they can confidently extract an answer from. That means clear definitions, specific arguments, structured formatting, and substance that doesn't require five paragraphs of preamble before the actual point arrives. Each AI platform also behaves differently. What gets you cited in ChatGPT won't necessarily get you cited in Perplexity, or Claude, or Google's AI Overviews. They weight different signals - recency, domain authority, content structure, source diversity, promotional tone. Optimising for one doesn't mean you're visible in the others. It's not one new channel.
It's several, each with different rules, and the overlap between them is surprisingly small.

Gated content is now a competitive disadvantage at the discovery layer

This is the part that will be uncomfortable for a lot of marketing teams, because it challenges a model that's been the backbone of demand gen for a decade. If your best thinking is locked behind a form - download the whitepaper, register for the webinar, fill in three fields to read the guide - the AI can't see it. AI systems pull from publicly accessible content. A gated PDF behind a form isn't publicly accessible. It's invisible to the discovery layer. Your competitor who published the same insight as an open, well-structured blog post? That's what the AI reads. That's what it cites. That's the version of reality the buyer receives. Not because the competitor's thinking is better, but because it's available to the systems now shaping first impressions. This doesn't mean ungating everything. Mid-funnel content that's genuinely valuable enough to justify a form still has a role. But the early-stage, category-defining content - the "here's how to think about this problem," the "here's what good looks like," the "here's how these options compare" - needs to be in the open. Because that's the content AI uses to form the answers your buyers are reading. The brands that treat expertise as something to share publicly are the ones AI will cite. The brands that treat it as something to withhold until a form is filled are writing content for an audience of one: their own CRM.

Content structure now matters more than content volume

Most B2B content is written to drive a click from a search results page. It's keyword-optimised, structured around headings designed for SEO crawlers, and padded with enough words to hit a length threshold that Google's algorithm favours. AI doesn't care about any of that. AI is looking for content it can extract a confident answer from. That means the first paragraph needs to say something meaningful - not warm up with a paragraph about "the evolving landscape." Definitions should be clear and early. Arguments should be specific. Data, where it exists, should be concrete. The structure should serve comprehension, not crawlability. Volume doesn't compensate for vagueness. Publishing 20 articles a month that circle around a topic without committing to a point of view is less useful - to AI and to humans - than four articles that each make a clear, specific argument. AI isn't impressed by your publishing cadence. It's looking for the content it trusts enough to cite. And freshness matters more than it used to. Traditional SEO rewards evergreen content that compounds authority over years. AI-generated answers disproportionately favour recent content. The article you published six months ago is already being displaced by whatever your competitor published last week. This is a treadmill. Acknowledging that doesn't make it go away.

This isn't just a content problem - it's a measurement problem

If you run Marketing Operations, the GEO shift hits your world in a specific and awkward way: it creates a blind spot in attribution that no current model accounts for. When a buyer's first meaningful interaction with your brand happens inside an AI chat window, you get no click. No cookie. No UTM parameter. No referral data. The buyer forms an impression, builds a shortlist, and then - maybe - visits your website. By the time they arrive, they look like a direct visit or an organic visit. Your CRM captures them as a new lead with no traceable source. But the actual source was an AI-generated answer you had no visibility into and no control over. This means your funnel metrics are about to get strange. You'll see higher-intent leads with shorter sales cycles and no clear acquisition source. That's not organic magic. That's AI doing your top-of-funnel work for you - or for your competitor - and your reporting infrastructure can't tell the difference. If you don't start tracking AI referral traffic as a distinct segment, you're flying blind in a channel that's growing while the channels you can measure are shrinking. The configuration work to set this up is minor. The insight it provides is not.
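As a sketch of what that configuration work might look like, here is a minimal referrer classifier. The domain list is illustrative rather than exhaustive, and note the caveat in the comments: a lot of AI-influenced traffic arrives with no referrer at all, so this measures a floor, not the whole channel.

```python
from urllib.parse import urlparse

# A minimal sketch of segmenting AI referral traffic. The domain list
# covers referrers commonly seen from AI assistants; it is illustrative,
# not exhaustive, and much AI-influenced traffic arrives with no
# referrer at all, so this captures a floor, not the full picture.

AI_REFERRER_DOMAINS = {
    "chat.openai.com", "chatgpt.com",
    "perplexity.ai", "www.perplexity.ai",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def traffic_segment(referrer: str | None) -> str:
    """Classify a session's referrer into a reporting segment."""
    if not referrer:
        return "direct"
    host = urlparse(referrer).hostname or ""
    if host in AI_REFERRER_DOMAINS:
        return "ai_assistant"
    return "other_referral"

print(traffic_segment("https://www.perplexity.ai/search?q=..."))  # ai_assistant
print(traffic_segment(None))                                      # direct
```

The same rule set can live in your analytics platform's channel grouping; the sketch just shows the classification logic itself.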
The game changed. The scoreboard didn't.

SEO isn't dead. But it's no longer the only game that matters for discovery, and it's no longer where the most consequential first impressions are being formed. The shift from SEO to GEO isn't something to prepare for. It's something to respond to. The buyers who matter most - the senior decision-makers with budget authority and short timelines - are the ones most likely to ask an AI instead of running a search. They're the ones who form a shortlist in 30 seconds and never look back. The brands they find are the ones whose content is clear, structured, publicly accessible, and recent. That's not a new set of skills. It's a new discipline for applying the skills marketing should already have. The question is whether you apply them to the channel the buyer is actually using - or the one they used to. Discover our AI services

  • Claude can now connect to Marketo. That should make enterprise teams nervous, not giddy.

    There is a certain kind of enterprise tech announcement that makes people lose the run of themselves. A new connection appears. A big-name platform links arms with a big-name model. The screenshots look slick. The demos look effortless. Within minutes, half the market starts talking as if the future has arrived, sorted the backlog, and fixed governance on its lunch break. This is one of those announcements. Claude can now connect to Marketo. And yes, there is obvious appeal in that. Ask questions in plain English. Move faster. Find things more easily. Reduce some of the drudgery. Cut through an interface that nobody has ever lovingly described as intuitive. Fine. But sensible enterprise teams should not be reacting with giddy excitement. They should be reacting with a healthy level of suspicion. Because this is not just a handy new feature. It is a new way into a live enterprise marketing system. One that contains customer data, campaign logic, approval structures, reporting dependencies, legacy weirdness, and enough hidden risk to make experienced Marketing Ops people instinctively flinch when someone says, “I’ve just made a quick change in production.” That is why the interesting question is not whether Claude can connect to Marketo. It can. The interesting question is why so many people seem ready to celebrate that fact before they have done the boring grown-up work of asking what it means for permissions, approvals, audit trails, QA, and production risk. This is not an anti-AI argument. Far from it. It is an anti-naivety one. And there is plenty of naivety going around.

The fantasy version is lovely

The fantasy version of this story is easy to sell. A user asks for help. Claude finds the right thing. Maybe it speeds up a task. Maybe it helps someone cut through clutter. Maybe it reduces the amount of time spent digging through programs, folders, assets, and all the other bits of enterprise software that seem specifically designed to make simple things feel needlessly painful. That version will do very well in demos. Unfortunately, enterprise reality is not built for demos. Real Marketo environments are rarely tidy. They are usually a mix of current work, old work, half-retired work, undocumented fixes, inherited structures, strange dependencies, local naming conventions, and processes that are technically still alive despite nobody being fully sure why. That is what makes this connection worth taking seriously. Because Claude is not being attached to a neat little sandbox full of clean logic and sensible governance. In many organisations, it is being attached to a live production environment already held together by caution, experience, and the occasional whispered warning not to touch a specific folder unless you want to ruin your week. That is not the kind of setup that should inspire giddiness.

The issue is also not whether the model is clever

A lot of the public conversation around this sort of thing goes sideways almost immediately. People get distracted by the intelligence question. Can the model understand the task? Can it retrieve the right thing? Can it help teams move faster? Can it make the system easier to use? That is all mildly interesting. The more important question is much less glamorous. What can it access? Because that is where the real enterprise risk sits. Can it only retrieve information, or can it change things too? Can it create assets? Can it clone programs? Can it update records? Can it approve emails? Can it activate campaigns? Can it export data?
Can it interact with live production objects under the permissions of a user that was set up too broadly because someone wanted the demo to be impressive? That is the real story. The danger is not that a model might say something daft. Humans do that all the time and enterprise software has somehow carried on regardless. The danger is that a conversational layer gets connected to a platform where access, scope, and control matter far more than enthusiasm - don't forget that this is an "off-the-shelf" LLM we are talking about here... not a bespoke Agentic AI.

Permissions are where this stops being fun

The moment a language model connects to Marketo, this stops being a shiny feature story and becomes a permissions story. Which is exactly why so many people will try to avoid that conversation. Permissions are dull. Permissions are fiddly. Permissions are the part that ruins the fun by asking irritating questions like who gets access to what, under which conditions, with what restrictions, and with what consequences if something goes wrong. In other words, permissions are where adults enter the room. And they matter because enterprise platforms do not become safe just because the front end becomes conversational. If anything, the opposite is true. The easier it feels to ask for something, the easier it becomes to forget that the system underneath is still capable of doing very real things with very real consequences. That is the trap. Prompting feels casual. Production is not. A request typed into a friendly interface does not feel like changing a live marketing system. It feels more like asking for help. That softer feeling is precisely what makes stronger permissions and tighter boundaries more important, not less. Because the consequences have not become gentler. Only the experience of issuing the instruction has.

Enterprise stability is often built on caution, not elegance

There is a great myth in enterprise Marketing Operations that stable environments are always the result of pristine architecture, immaculate governance, and flawless documentation. That would be lovely. It is also nonsense. A lot of enterprise stability comes from experienced people being careful. They know which programs are safe to use and which are not. They know which campaigns need extra checks. They know what can be changed quickly and what needs a proper review. They know where the bodies are buried, metaphorically speaking, and they know better than to go poking around them on a Wednesday afternoon. That caution has value. It is not glamorous. It does not make for exciting feature launches. But it is often the thing preventing expensive mistakes. Now place a conversational layer on top of that environment and the tone changes immediately. Suddenly the interaction feels easier. Lighter. More natural. Less formal. Less loaded. That sounds good until you realise how much enterprise safety relied on the fact that Marketo did not feel casual. The interface was annoying, yes. It was also part of the ceremony. It forced some level of navigation, context, and intent. It reminded users they were inside a system with structure and consequence. A prompt box does none of that. A prompt box says, go on then.
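To make the permissions point concrete, here is a minimal sketch of an allow-list sitting between a conversational layer and a live platform: read-only operations run freely, anything that writes requires a named human approval, and anything else is refused outright. The operation names are hypothetical, not Marketo's actual API surface.

```python
# A minimal sketch of a permission boundary for model-initiated actions.
# Operation names are hypothetical; the shape is what matters: the model
# may *request* anything, but only pre-approved operations execute.

READ_ONLY_OPS = {"get_program", "list_assets", "get_lead_fields"}
GATED_OPS = {"clone_program", "update_lead", "approve_email", "activate_campaign"}

def execute(op: str, approved_by: str | None = None) -> str:
    if op in READ_ONLY_OPS:
        return f"executed {op}"  # safe to run directly
    if op in GATED_OPS:
        if approved_by:
            return f"executed {op} (approved by {approved_by})"
        raise PermissionError(f"{op} requires human approval before it runs")
    raise PermissionError(f"{op} is not on the allow-list at all")

print(execute("get_program"))
print(execute("approve_email", approved_by="ops_lead"))  # explicit human gate
```

The point of the sketch is the default: nothing a model asks for should reach production simply because the credentials behind the connection happen to allow it.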
Plenty of enterprise process exists only because someone somewhere once survived a committee and decided everybody else should suffer too. But approval structures in Marketing Operations are not there purely for decoration. They exist because the gap between intention and execution is where a lot of nonsense gets caught. An email is checked before it goes live. A campaign is reviewed before activation. A change is questioned before it lands in production. Someone else has a chance to look at it and say, hang on, are we sure this is right? That pause matters.

The problem with conversational access is that it shortens the emotional distance between wanting something done and trying to do it. That is a big part of the appeal. Less friction. Less digging. Less messing about. But some of that friction was doing useful work. It was giving teams just enough resistance to stop every half-formed idea marching directly into a live environment wearing confidence it had not earned. Making access easier does not make approvals less necessary. It makes them more important.

Audit trails suddenly matter a lot more

Here is where things get properly uncomfortable. In traditional platform use, there is usually some way to reconstruct what happened. Who made a change. What they touched. When they did it. What was approved. Which sequence led to the issue. It may not be elegant, but there is generally a trail. Once you start putting an LLM in the middle, that clarity can get foggy very quickly. Was the action directly initiated by the user? Was the request ambiguous? Did the system infer something beyond what was intended? Which permissions were in play? What exactly was executed? What review existed around that setup? Who owns the resulting action when the person typed a broad request, the model interpreted it, and the system carried it out under legitimate credentials?

That is why auditability is not some dreary back-office concern here. It is central. If a business cannot clearly trace what was requested, what was executed, under whose permissions, against which assets, and with what safeguards in place, then it has no business pretending this is all under control. That is not negativity. That is basic enterprise hygiene.

With Claude, QA does not go away. It gets harder.

There is a lazy idea floating around that this kind of connection reduces manual effort and therefore eases pressure on teams. In some places, maybe. With the correct AI integration, absolutely. But with an LLM? Let's not get carried away. What usually happens in enterprise environments is that effort moves. It does not vanish. You may spend less time hunting around for assets or navigating a clunky interface. Fine. But the need to verify what is being surfaced, what is being changed, and what context sits around that action does not disappear. If anything, it increases.

Because when work can be requested more casually, teams need stronger QA, not weaker. They need clearer checks around scope, asset selection, environment, downstream impact, inherited logic, and side effects. They need less blind trust, not more. And they absolutely cannot afford to fall into the trap of assuming that because something was easy to ask for, it must also be safe to carry out. That is how messy systems become messier.

Production risk is rarely dramatic at first

The people rushing to celebrate this sort of thing tend to imagine risk in extremes. Either everything is brilliant or everything is on fire.
Real enterprise risk is usually much duller than that, which is part of what makes it dangerous. A live asset gets changed in the wrong place. A program is cloned from the wrong template. A campaign goes active without the right review. A data export happens too easily. A workflow touches something it should not. A team starts leaning on conversational access without fully appreciating where the boundaries should be. None of that necessarily creates instant disaster. What it creates is drift. A slow erosion of trust. A buildup of avoidable rework. More nervous stakeholders. More technical debt. More sceptical legal and compliance teams. More pressure on already stretched Ops people to clean up problems created by convenience being mistaken for competence.

That is usually how production risk shows up. Not as one giant cinematic failure, but as a series of smaller decisions made too casually until the overall environment gets shakier than anyone wants to admit.

The wrong question is whether you can use it

Of course you can use it. That is not the serious question. The serious question is whether your organisation has the discipline to use it without making things worse. Do you have a permission model that is genuinely fit for purpose? Do you have clear rules around what can and cannot be done? Do you have approval structures that still hold when interaction becomes conversational? Do you have meaningful audit trails? Do you have QA discipline strong enough for a lower-friction access layer? Do you have separation between experimentation and production? Do you have named ownership when something goes wrong? Do you have governance that lives in the actual operating model, not in a document everybody claims to support and nobody opens?

If the answer to those questions is vague, patchy, or politely avoided, then no, you are not ready. Not because the technology is bad, but because your operating model is too flimsy to carry it safely. That is the point enterprise teams need to hear.

This is a maturity test, not a toy

That is the sharper read on this launch. It is not just a feature release. It is a maturity test. It reveals whether a business sees Marketing Operations as a serious control function or as a convenient place to experiment with shiny new capabilities and hope the risk gets sorted out later. A mature organisation will look at this and ask hard questions about permissions, approvals, audit, QA, and production boundaries before it starts applauding. An immature one will rush to the demo, celebrate the novelty, and act surprised later when Security, Legal, Compliance, or the head of Marketing Ops starts asking the sort of questions that make innovation fans suddenly very interested in changing the subject.

That is why enterprise teams should be nervous. Not fearful. Nervous. There is a difference. Fear says do nothing. Nervousness says pay attention. And right now, paying attention would be a refreshing change.

Nervous is the correct response

A little nervousness would be healthy here. Enterprise teams should be nervous when conversational access becomes easier than governance. They should be nervous when approvals are treated like optional friction. They should be nervous when audit trails are vague. They should be nervous when QA is assumed rather than designed. They should be nervous when production starts to feel casual. Because nervous teams ask better questions.
Giddy teams usually skip straight to the part where they create new problems and then hold a post-mortem to discuss how the warning signs were missed. Claude connecting to Marketo may well become useful. In the right environment, with the right controls, it could genuinely help capable teams move faster without losing discipline. But that outcome will not belong to the teams who got excited first. It will belong to the teams who treated permissions seriously, kept approvals intact, demanded proper auditability, strengthened QA, respected production risk, and resisted the now very fashionable urge to mistake easy access for readiness. That may not be the fun version of the story. It is, however, the one enterprise teams should be reading.

Discover MOPsy

Discover our latest benchmark report
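As flagged above, here is what a minimal guard layer between a conversational interface and a live platform could look like. This is an illustrative Python sketch, not the actual Claude-to-Marketo integration; the operation names, the allowlist, and the log shape are all invented for the example. The point is simply that a read-only allowlist plus an audit log of every request is a small, buildable control, not governance theatre.

```python
from datetime import datetime, timezone

# Hypothetical allowlist: the conversational layer may read, never write.
READ_ONLY_OPERATIONS = {"get_program", "list_assets", "get_email"}

audit_log: list[dict] = []  # in practice this would be durable storage


class PermissionDenied(Exception):
    pass


def execute(actor: str, operation: str, target: str) -> None:
    """Gate every request from the conversational layer and record it."""
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "operation": operation,
        "target": target,
    }
    if operation not in READ_ONLY_OPERATIONS:
        entry["outcome"] = "denied"
        audit_log.append(entry)
        raise PermissionDenied(
            f"'{operation}' is a write action - route it through the approval process"
        )
    entry["outcome"] = "allowed"
    audit_log.append(entry)
    # ...here you would call the real platform API under a least-privilege credential


execute("claude-connector", "get_program", "Q3 webinar programme")  # allowed
try:
    execute("claude-connector", "approve_email", "launch email")  # blocked and logged
except PermissionDenied as err:
    print(err)
```

None of this is sophisticated. That is rather the point: the permissions and audit questions raised above are answerable with very ordinary engineering, provided someone decides they matter before the connector goes live.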

  • Campaign QA is eating your team alive and nobody wants to admit it

    There is nothing quite like campaign QA for making expensive, experienced enterprise teams do work that feels suspiciously close to digital scavenger hunting. Open the email. Check the links. Check the tokens. Check the form. Check the follow-up. Check the workflow. Check the audience. Check the field mapping. Check the suppression rules. Check the approval status. Check it all again because someone made a "tiny change" after sign-off. Then check it one more time because nobody wants to be the person who lets a broken campaign go live.

It is not glamorous. It is not strategic. It is not the kind of work anybody brags about. But it quietly eats hours every week across most enterprise marketing teams. And the worst part is this: most of those hours are not being wasted because teams are careless. They are being wasted because campaign QA in large organisations has become bloated, manual, inconsistent, and held together by stressed people trying to stop avoidable mistakes from escaping into the wild. That is a problem in its own right. It is also exactly why tools like MOPsy matter. Because when highly capable Marketing Operations teams are spending huge chunks of their week doing repetitive campaign checks, something has gone wrong in the operating model.

Campaign QA is necessary. The current way of doing it is the issue

Nobody is suggesting campaign QA should disappear. In enterprise environments, quality control matters. A lot. One broken workflow, one wrong audience, one bad sync, one incorrect token, one missed suppression rule, and suddenly you have internal panic, external embarrassment, and a clean-up job that takes longer than the original build. The problem is not the existence of QA. The problem is how it happens.

In too many enterprise teams, campaign QA is still heavily manual. It lives in checklists, spreadsheets, screenshots, Slack threads, email chains, approval comments, and whatever vague institutional memory happens to be sitting inside the heads of the people who have been there longest. Everyone knows what needs checking, broadly speaking. The trouble is that the checking process is often fragmented, inconsistent, and massively reliant on humans doing repetitive work over and over again. That is where the hours disappear.

A campaign that looks simple from the outside can have a ridiculous number of moving parts underneath. Emails, forms, landing pages, hidden fields, workflow logic, list criteria, dynamic content, CRM integration, lead routing, alerting, timing rules, tracking codes, audience exclusions, nurture logic, webinar connections, regional variations, legal tweaks, and stakeholder edits that arrive three minutes before launch. Every extra layer adds risk. Every risk creates another check. Every check takes time. Very quickly, QA stops being a sensible final control and starts becoming a full-blown drain on the team.

Most teams are not inefficient. They are compensating

This is the bit many people get wrong. When enterprise teams spend too long on campaign QA, the lazy explanation is that they need to be more efficient. Usually, that is nonsense. More often, they are compensating for an environment that is too complex, too fragile, or too inconsistent to trust. That lack of trust shows up everywhere. People double-check because they have been burned before. Stakeholders insist on seeing final versions because something slipped through six months ago. Approvers keep asking for screenshots because they do not trust the build.
Ops teams re-run tests because a last-minute change always seems to break something unexpected. Marketers ask for "just one more review" because they know one small error can become a very visible mess. This is not laziness. It is risk management by exhausted humans. The issue is that humans are doing too much of the safety work. And humans are a very expensive place to park repetitive validation.

What campaign QA looks like in the real world

Campaign QA sounds tidy until you look at what it actually involves. It is not just proofreading an email and clicking a few links. It is checking whether the segmentation logic is correct, whether the form writes cleanly to the right fields, whether the thank-you journey fires properly, whether the routing rules still behave as expected, whether the campaign naming follows standards, whether the audience exclusions are working, whether UTM parameters are consistent, whether cloned assets have carried over the wrong settings, whether alerts fire, whether wait steps are right, whether approval comments were actually actioned, whether the CRM sync is behaving, whether the preference centre is connected properly, whether the footer is compliant, and whether the one stakeholder who always spots the obscure edge case is going to have another moment just before launch.

Then do that across multiple campaigns. Across regions. Across product lines. Across different teams. Across multiple systems. Across campaigns that were built by different people, in slightly different ways, to slightly different standards. That is where the wheels start wobbling. Because QA is rarely one neat, contained stage. It spills across the whole delivery cycle. It is rarely just one person doing one clean review. It is bits of time from multiple people, spread across multiple tools, with multiple interruptions and plenty of re-checking when something changes after the "final" review. That is not a quick task. That is death by a thousand tabs.

Discover the 2026 AI Benchmark Report

The hidden cost is bigger than most teams realise

The obvious cost of campaign QA is time. The less obvious cost is the way that time gets shredded. A team might say a campaign takes two hours to QA. What they often mean is there are two visible hours of checking. What they usually do not include is the context switching, the waiting, the duplicated review, the stakeholder back-and-forth, the rechecks after edits, the confusion over versions, and the extra time spent validating things that should have been easier to verify in the first place.

This is where enterprise teams quietly lose entire days. Not because somebody sat in a room for eight straight hours doing QA, but because ten people each lost twenty minutes here, forty minutes there, and another half hour because someone made a change after sign-off and nobody wanted to risk not checking it again. That kind of waste is hard to spot because it hides inside the flow of work. It feels normal. It feels responsible. It even feels unavoidable. But it is still waste. And worse, it is waste involving some of the most capable people in the team. Highly experienced Marketing Operations professionals should not be spending huge chunks of their week manually checking whether the same set of campaign rules were followed again. That is not strategic oversight. That is process debt.

Manual QA does not scale nicely

This is where things get especially grim. Manual QA might limp along when campaign volumes are low and the team is small.
Once scale enters the picture, it starts to creak. More campaigns mean more checks. More regions mean more variations. More stakeholders mean more approvals. More platforms mean more handoffs. More complexity means more risk. And most teams respond to rising risk in the same way: they add more manual review. That feels sensible in the moment. It also creates a system where campaign velocity slows down, people become bottlenecks, and launch confidence drops rather than improves. So teams end up stuck between two bad options. They either keep throwing time at QA and slow everything down, or they cut corners and accept more risk. Neither is a particularly grown-up answer.

Customers see the consequences, not the excuses

Internally, a campaign error may look small. Externally, it looks sloppy. That is the uncomfortable truth. Customers and prospects do not see the tight deadline, the late-stage change request, the weird MAP behaviour, or the fact that three different teams touched the build. They see the thing that lands in front of them. An email with the wrong personalisation. A form that behaves strangely. A broken page. A follow-up that does not make sense. A message sent too early, too late, or to the wrong people. A clunky experience that makes the brand feel careless. Enterprise marketing teams know this, which is exactly why they overcompensate with extra QA. They are trying to avoid reputational damage. Fair enough. The trouble is that the answer has too often been more human effort instead of a smarter system. That is not sustainable.

A lot of QA processes were never properly designed

Let's be honest. Many enterprise QA processes did not come from a thoughtful redesign workshop with a neat operating model at the end of it. They evolved. One person made a checklist. Another added a spreadsheet. Somebody started keeping screenshots. A stakeholder demanded final approval because of one painful incident two years ago. A platform migration added more steps. A reorg split ownership. Regional teams created local variations. Legal got more involved. Nobody really rebuilt the process from the ground up. They just kept adding layers.

The result is predictable. Checks happen late. Standards vary by team. Known issues keep repeating. Approvals are inconsistent. Documentation is patchy. Too much knowledge sits in the heads of a few over-relied-upon people. And the team spends far too much energy catching preventable errors instead of building a cleaner, more resilient way of delivering campaigns. That is the real issue. Manual QA often looks like control, but in many cases it is just a workaround for a messy system upstream.

The smarter question is not "who checks this?" but "why does this need checking this way?"

This is where the conversation gets more useful. Most teams frame QA as a people problem. Who owns it? Who signs it off? Who catches errors? Who reviews the reviewer? That is understandable, but it is also limiting. A better question is why so much of the checking still depends on humans in the first place. Some things absolutely should. Brand judgement, tone, compliance nuance, context, audience appropriateness, stakeholder sensitivity. Those things still need human eyes and human brains. But a lot of campaign QA is not that. A lot of it is repetitive validation. Does this asset follow the right naming convention? Are these links structured correctly? Are these components present? Does the build align to known standards? Has this flow been configured the way it should be?
Are the same rules being followed every time? That is not human brilliance. That is structured checking. And structured checking is exactly where many enterprise teams are still burning ridiculous hours because the process has not caught up with the complexity of the work. (A short sketch of what automating that kind of check can look like sits at the end of this piece.)

Where MOPsy comes in

This is not about replacing your team. It is about protecting your team from work they should not still be buried in. MOPsy is built for Marketing Operations, which means it is not some generic AI gadget trying to force its way into a serious workflow wearing a shiny badge and a lot of confidence. It is designed to be useful in the kind of operational environments where campaign complexity, governance, and quality control actually matter. That makes campaign QA a very obvious fit. Because the problem with QA is not usually that teams do not care. It is that too much of the process still relies on manual review, repeated checking, and humans spotting patterns that a smarter system should be helping to identify much earlier and much more consistently.

MOPsy can help teams review campaign builds against defined standards, flag inconsistencies, surface likely issues, support governance, and reduce the amount of repetitive checking that currently eats into experienced team time. That matters because enterprise QA is rarely just about spelling mistakes and rogue buttons. It is about checking campaign logic, process discipline, consistency, configuration, and execution quality across a lot of moving parts. It is exactly the sort of environment where repetitive validation should not still depend so heavily on humans clicking through the same things every week. MOPsy does not remove the need for judgment. It removes more of the grind. And that is the point.

This is about more than saving time

Saving time is useful. Nobody is going to argue with that. But the more interesting benefit is what happens when teams stop drowning in manual QA. Friction drops. Confidence improves. Campaigns move with less drama. Approvals become cleaner. Standards become easier to enforce. Fewer issues slip through. Ops talent gets used for higher-value work instead of repetitive campaign checking. This is where the real gain sits. Not in a vague promise of efficiency, but in a better operating model. One where the team is not constantly relying on heroic effort, invisible knowledge, and last-minute checks to keep quality intact.

Because that is another truth most teams recognise instantly: QA often depends far too heavily on a small number of people who know exactly where problems usually hide. They know the awkward workflows, the strange field behaviour, the steps that always get forgotten, the stakeholders who make late changes, the assets most likely to break, and the checks that can never be skipped. That may feel reassuring. It is not resilience. It is a fragile process wearing a familiar face. A stronger QA model, supported by the right tooling, helps shift that knowledge into something more repeatable, more scalable, and less dependent on human memory and personal heroics. Which is, frankly, how enterprise Marketing Operations should be operating.

The teams that improve this will move differently

The best teams will not be the ones who keep tolerating more QA pain and calling it diligence. They will be the ones who take a hard look at where the hours are really going, separate genuine human review from repetitive validation, and start building a smarter system around campaign quality. That means improving standards.
Tightening process. Reducing inconsistency. Strengthening governance. And using tools like MOPsy where they genuinely help make campaign delivery safer, sharper, and less painfully manual. Enterprise teams are not wasting hours on campaign QA because they are bad at their jobs. They are wasting hours because the work has become too complex for old habits, too risky for guesswork, and too repetitive to keep throwing humans at it forever. That is the real opportunity. Not shiny AI nonsense. Not another toy with a big promise and a weak use case. Just a very practical shift in how campaign quality gets managed. And for a lot of enterprise teams, that shift is overdue.

A better way to handle campaign QA

If your team is spending hours every week manually checking campaigns, rechecking last-minute changes, chasing approvals, and relying on experienced people to catch the same issues over and over again, the problem is not just workload. It is the model. MOPsy helps enterprise Marketing Operations teams bring more consistency, more control, and less manual drag into campaign QA. That means fewer hours lost to repetitive checking and more time spent on the work that actually moves the needle. If campaign QA is still eating your team alive, it may be time to stop accepting that as normal. MOPsy was built for exactly this kind of problem.

Discover MOPsy
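To make "structured checking" concrete, here is a minimal sketch of the kind of repetitive validation referenced above, automated in a few lines of Python. It is illustrative only - the naming convention and the required UTM keys are invented for the example, and this is not MOPsy's implementation - but it shows how much of campaign QA is mechanical rule-following rather than human judgement.

```python
import re
from urllib.parse import parse_qs, urlparse

# Hypothetical standards - every team's will differ.
NAMING_PATTERN = re.compile(r"^\d{4}-Q[1-4]_[A-Z]{2,4}_[a-z0-9-]+$")  # e.g. 2026-Q1_EMEA_webinar-invite
REQUIRED_UTM_KEYS = {"utm_source", "utm_medium", "utm_campaign"}


def check_name(asset_name: str) -> list[str]:
    """Flag assets that break the agreed naming convention."""
    if NAMING_PATTERN.match(asset_name):
        return []
    return [f"Naming: '{asset_name}' does not match the convention"]


def check_utms(url: str) -> list[str]:
    """Flag tracked links that are missing required UTM parameters."""
    params = parse_qs(urlparse(url).query)
    missing = sorted(REQUIRED_UTM_KEYS - params.keys())
    return [f"UTM: {url} is missing {missing}"] if missing else []


def qa_report(asset_name: str, links: list[str]) -> list[str]:
    """Run every structured check and return the list of issues found."""
    issues = check_name(asset_name)
    for link in links:
        issues.extend(check_utms(link))
    return issues


for issue in qa_report(
    "2026-Q1_EMEA_webinar-invite",
    ["https://example.com/event?utm_source=email&utm_medium=nurture"],
):
    print(issue)  # flags the link for its missing utm_campaign parameter
```

Checks like these run the same way every time, never get tired, and never skip a step because a stakeholder is waiting. The judgement calls stay with humans; the pattern-matching does not have to.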

  • AgentOps is the next Ops layer and nobody's staffed for it...

    Ask a MOPs team how many automated programs are running in their marketing automation platform right now and you'll get a rough answer. Maybe not a confident one, but something in the right postcode. They'll know the major nurtures, the scoring models, the lifecycle triggers. It's their system. They built it. Now ask how many AI agents are running. Across the CRM, the MAP, the service desk, the data enrichment layer. How many are live. What data they access. What actions they can take. Who activated them. When they were last reviewed. More often than not, you will get silence. Very occasionally, you'll even get "…what agents?"

AI agents are multiplying inside the platforms MOPs teams already operate, whether you know it or not, and nobody has built the operational layer to manage them. Not IT. Not Marketing Ops. Not the RevOps team that's still arguing about lifecycle stage definitions. The result can be a growing fleet of autonomous processes running inside your revenue systems with no monitoring, no audit trail, and no clear owner. We've been here before with marketing automation - build it, launch it, orphan it. Except agents don't just execute static rules. They reason. They adapt. And they can go quietly wrong in ways that won't show up until someone asks why pipeline looks off. This is the AgentOps problem. And most organisations don't even know they have it yet.

Agents aren't automations. They need a different kind of oversight.

Traditional marketing automation runs a script. If the data says X, do Y. It's deterministic. Predictable. Boring in the best possible way. When a smart campaign breaks in Marketo, you can trace the logic, find the error, fix it. The system did what you told it to do. AI agents are different. Agentforce uses an LLM-powered reasoning layer to interpret context, plan actions, and execute across systems. HubSpot's Breeze agents - now running on GPT-5 for some marketplace agents - make judgement calls about how to qualify a lead, what to say to a customer, when to escalate. They don't follow a flowchart. They interpret. That distinction matters enormously for operations, because it means the failure mode is different.

A broken automation sends the wrong email. You catch it in QA or someone complains. An agent that's reasoning poorly routes high-value prospects to the wrong sales team, or gives a customer an answer that's confidently wrong, or quietly updates CRM fields based on stale data - and it does all of this while looking like it's working perfectly. One Salesforce implementation partner published a detailed account of exactly this pattern earlier this year. A client deployed an Agentforce lead qualification agent that was routing high-value prospects to the wrong sales team. The cause? A territory assignment field that hadn't been updated after a recent re-org. The agent didn't flag the stale data. It didn't hesitate. It treated six-month-old field values as ground truth and processed 340 leads through incorrect routing before anyone noticed. Human reps would have caught it within the first few calls. The agent just kept going. That's the operational gap. The technology worked. The reasoning worked. The data was wrong, and nobody was watching.

AI governance in Marketing Ops now means agent governance

The governance conversation has been happening for a while. Policies about data usage, consent, content review.
Most of it has centred on generative AI - who's allowed to use ChatGPT, what can be fed into a model, who reviews AI-generated copy before it ships. That conversation was necessary. It was also about the last generation of AI use cases. Agents are a different governance surface entirely. They don't just generate content. They take actions. They modify records. They make routing decisions. They interact with customers. The governance questions aren't "is this content on brand?" - they're "did this agent just change a lead score based on data that's three months stale, and did anyone notice?"

Agent governance requires a different set of capabilities. You need monitoring... not just logging what happened, but flagging when agent behaviour deviates from expected patterns. You need periodic review cycles, where someone checks that the agent's reasoning still aligns with current business rules, pricing, territories, product availability. You need escalation paths, so when an agent encounters something outside its boundaries, the right human gets involved instead of the agent improvising. And you need ownership. Clear, named, accountable ownership. Not "the team," not "IT handles the platform," not "we'll figure it out." A person who knows which agents are running, what they're doing, what data they depend on, and when they were last reviewed. That's AgentOps. It's not a product. It's not a platform. It's an operational discipline, and it doesn't exist yet in most organisations.

Take part in the 2026 AI Benchmark Report

Hallucination rates are a design reality, not a scare statistic

Here's a number that should shape how you think about agent operations: hallucination rates for AI agents inside CRMs range from 3% to 27%, depending on configuration, grounding data, and prompt design. That's from published implementation data across dozens of enterprise deployments. At the low end - proper Knowledge article coverage, well-structured prompts, tight topic guardrails - agents get it right 95-97% of the time. That's genuinely useful. At the high end - minimal grounding data, broad topic definitions, no monitoring - you get an agent that fabricates pricing, invents product features, or confidently cites policies that don't exist.

The point isn't that agents are unreliable. It's that they're probabilistic. They will occasionally get things wrong. That's not a bug. It's the nature of the technology. The operational question is whether your organisation has the capacity to detect when that happens, assess the damage, and correct course. Right now, for most teams, the answer is no. Some platforms are starting to ship transparency features - audit trails showing which CRM properties an agent modified and what actions it took. That's a step in the right direction. But a feature isn't a practice. An audit trail is useless if nobody's reading it. That's the operational equivalent of installing a smoke detector and never checking the batteries.

What AgentOps actually looks like

This doesn't require a new team or a new budget line. It requires treating agents as operational assets - not features you activate and forget. That means maintaining an inventory. How many agents are running in your systems right now? What data do they access? What actions can they take? Who activated them? If you can't answer those questions today, you have an agent sprawl problem and you don't know how big it is. (A minimal sketch of such an inventory sits at the end of this piece.) It means defining review cadences. Not annual audits - practical, lightweight checks.
Monthly: is the agent behaving as expected? Are the data fields it depends on still reliable? Quarterly: do the business rules baked into agent behaviour still match reality? Have territories shifted? Has pricing changed? It means setting performance baselines. What does "working" look like for each agent? If you can't define success, you can't detect failure. And the agent won't tell you it's failing. It'll just keep going with impressive confidence. And it means building escalation clarity. When an agent does something unexpected, who gets told? How fast? Salesforce learned this the hard way on its own Help portal - 26% abandonment rate before anyone intervened. Most orgs don't have Salesforce's engineering resources to react that quickly.

The agents are already live. The ops layer isn't.

Every ops discipline starts the same way. Something breaks. Leadership asks who was supposed to be watching. Nobody has a good answer. A process gets created under pressure, after the fact, while someone patches the damage. Marketing automation governance happened that way. Marketing automation data quality programmes happened that way. GDPR compliance happened that way for a depressingly large number of organisations. You can build AgentOps the same way - reactively, after an agent has been quietly misrouting leads for six weeks or breaching compliance boundaries for 48 hours because someone edited a topic description. Or you can look at the agents already running in your systems, admit that nobody's managing them, and start. The agents are already live. The ops layer isn't. That gap has an expiry date. It's just a question of whether you close it on your terms or someone else's.

Discover our AI in Marketing Operations Services
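Since the inventory is where most teams have to start, here is a minimal sketch of what one could look like as a real artefact rather than a good intention. The fields, the example record, and the 30-day cadence are assumptions for illustration, not a standard; the useful property is simply that every agent has a named owner and a review date a script can check.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class AgentRecord:
    """One row in a hypothetical agent inventory."""
    name: str
    platform: str            # e.g. "CRM", "MAP", "service desk"
    data_access: list[str]   # objects / fields the agent reads
    actions: list[str]       # things it is allowed to do
    owner: str               # a named human, not "the team"
    activated_by: str
    last_reviewed: date


REVIEW_CADENCE = timedelta(days=30)  # assumed monthly check


def overdue_for_review(inventory: list[AgentRecord], today: date) -> list[AgentRecord]:
    """Return every agent nobody has looked at within the cadence."""
    return [a for a in inventory if today - a.last_reviewed > REVIEW_CADENCE]


inventory = [
    AgentRecord(
        name="lead-qualification-agent",
        platform="CRM",
        data_access=["territory assignment field", "lead score"],
        actions=["route lead", "update lead score"],
        owner="jane.doe",
        activated_by="crm.admin",
        last_reviewed=date(2025, 6, 1),
    ),
]

for agent in overdue_for_review(inventory, date.today()):
    print(f"REVIEW OVERDUE: {agent.name} (owner: {agent.owner})")
```

A spreadsheet does the same job if that is what the team will actually maintain. The tooling is trivial; keeping it current is the hard part, which is rather the point of AgentOps.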

  • AI Beyond Productivity: Where are the real business gains?

    Productivity was always the starting point

For the last year or two, most AI conversations in business have sounded oddly familiar. How can we write faster? Summarise faster. Analyse faster. Build presentations faster. Reply to emails faster. Produce more content with fewer people and less effort. Fair enough. That is where most organisations start. It is the easiest sell. Efficiency is measurable, non-threatening, and easy to explain in a board meeting. Nobody gets fired for saying they want a team to spend less time on repetitive work. But productivity is only the opening act. Doing the same work faster is useful. It is just not all that transformative. If AI simply helps a busy team clear its backlog at greater speed, that is an improvement. It is not reinvention. It is admin with a nicer user interface. Helpful, yes. Revolutionary, not quite.

The more interesting shift is what happens when AI starts changing how work gets done in the first place. That is where the real gains start to show up. Not just shaved minutes. Not just reduced agency hours. Not just "we saved the team two days a month." Those are nice wins, but they are rarely the ones that change a business. The bigger opportunity is when AI changes operating models across marketing, sales, customer success, and revenue operations. When it improves decisions, closes gaps between teams, reduces commercial friction, and helps organisations act with more consistency and confidence. That is where the conversation gets more serious.

Because in most businesses, the real drag on growth is not that people type too slowly. It is that teams are misaligned. Data is messy. Processes are inconsistent. Handoffs are clunky. Campaigns take too long to launch. Reporting arrives too late to change anything. Sales does not trust marketing's signals. Marketing does not trust sales follow-up. Customer success is left out of the loop. Everyone is busy, yet somehow the business still struggles to move faster in the places that matter. AI does not magically fix that. In fact, without structure, it can make the mess worse. But when it is applied properly, it can do something much more valuable than improve task efficiency. It can help organisations operate better. That is the real prize.

The real gains start with better decisions

The first leap beyond productivity is better decision-making. Many businesses are drowning in information while starving for clarity. Dashboards everywhere. Reports on reports. Endless exports from CRM, MAP, BI platforms, intent tools, web analytics, and customer systems. Everyone has data. Very few have a version of it that is timely, connected, and useful enough to support action. This is where AI can start earning its keep in a more meaningful way. Not by generating another summary nobody asked for, but by helping teams spot patterns, risks, and opportunities that would otherwise stay buried. Which segments are actually converting, not just engaging? Which campaign themes are influencing pipeline quality, not just volume? Which accounts are showing the kind of buying behaviour that deserves action now, rather than another nurture stream they will ignore with impressive consistency?

That shift matters because the value is no longer about faster reporting. It is about better commercial judgement. A marketing team that can see which messages are moving buyers through complex journeys is in a stronger position than one that simply produces more assets.
A sales leader who can prioritise outreach based on stronger signals is in a better position than one relying on a glorified hunch. A revenue team that can identify where conversion is breaking down can fix real problems before quarter-end panic sets in. This is where AI starts moving from labour-saving assistant to decision-support layer. That is a more serious role. It is also where the gains start to compound.

AI gets more interesting when it improves orchestration

The second leap is orchestration. Most revenue functions are still held together by a patchwork of systems, handoffs, habits, and crossed fingers. Marketing runs campaigns. Sales follows up, or does not. Ops tries to stitch the process together. Customer success gets involved later, sometimes with context, sometimes without. Everyone talks about journey orchestration, but the lived reality is usually closer to organised chaos. AI can help reduce that chaos, not by replacing teams, but by improving coordination between them.

Think about how much commercial value is lost in the gaps. Leads routed too late. Follow-ups triggered with the wrong context. Accounts sitting untouched because one system says they are warm and another says they are dead. Customer signals ignored because they live in a platform nobody checks. Campaigns launched without real feedback from the field. Handoffs based on static rules that made sense eighteen months ago and now quietly sabotage performance every day.

This is where AI becomes more than a content machine. It can help interpret signals across systems, recommend next-best actions, surface anomalies, and support more responsive plays across teams. Not in a science-fiction "the robot runs the revenue engine" kind of way. More in a very practical "the business is no longer relying on three spreadsheets and Claire from ops to hold everything together" kind of way. That may sound less glamorous, but it is far more valuable. When marketing and revenue teams operate with better timing, better context, and better coordination, the business feels different. Work flows more cleanly. Friction drops. Decisions get made earlier. Opportunities get acted on faster. That is not just efficiency. That is improved commercial execution.

Consistency is not sexy, but it is where scale lives

The third leap is consistency at scale. One of the least glamorous truths in business is that performance often suffers because execution is wildly inconsistent. Not because the strategy was terrible. Not because the technology stack is broken. Just because different teams, regions, markets, or managers are all doing things slightly differently, with varying levels of quality and discipline. AI can help standardise that. Not in a rigid, joyless, corporate-policy-manual way. In a way that makes good practice easier to repeat. It can support consistent QA, flag compliance issues, improve data hygiene, reinforce process standards, and reduce the kind of avoidable variation that causes downstream pain.

In marketing operations especially, this matters more than many leaders realise. A campaign build process that is followed properly every time is not exciting. A lead management framework applied consistently across markets is not sexy. Metadata standards and naming conventions do not exactly set LinkedIn on fire. But these are the things that determine whether a business can scale without tripping over its own shoelaces. AI can strengthen those foundations if it is deployed with intent.
It can act as a layer of support around governance, quality control, and operational discipline. That is important because scale usually breaks where standards are weakest. And this is where a lot of the current AI hype becomes mildly ridiculous. Too many organisations are still obsessing over how quickly AI can produce outputs, while ignoring whether those outputs sit inside a functioning operating model. Faster content in a broken system is not transformation. It is just more noise, delivered promptly. The businesses that will get real gains are not the ones generating the highest volume of AI-assisted activity. They will be the ones using AI to reduce variability, improve judgement, tighten execution, and create more reliable pathways from activity to revenue. That is a much less flashy story. It is also the one that actually affects business performance.

The bigger shift is role redesign, not task acceleration

The fourth leap is redesigning roles, not just accelerating tasks. This is where the conversation gets uncomfortable. A lot of leaders still talk about AI as a helper. Something that sits beside existing roles and makes them more productive. That framing is understandable, especially when companies are trying not to terrify their own workforce. But it is also limiting. Because the bigger question is not "how can AI help this person do their existing job faster?" It is "what should this job now include, exclude, or become?" That is a harder discussion because it forces teams to examine work that has existed for years and ask whether it still deserves to. It means challenging legacy processes, duplicated effort, manual review chains, bloated reporting habits, and all the odd little tasks that nobody likes but everyone keeps doing because "that is just how it works here." AI gives organisations a reason to revisit those assumptions.

In marketing, that may mean fewer hours spent producing first-draft material and more time spent on strategic planning, audience insight, experimentation, and commercial alignment. In operations, it may mean less manual policing and more proactive system design, governance, and optimisation. In revenue teams, it may mean moving people closer to decisions and away from repetitive admin that should have been automated years ago. That is where the gains become structural. Not because jobs vanish overnight, despite the breathless nonsense often pushed online, but because the mix of work changes. Teams that keep using AI as a glorified speed tool will get modest gains. Teams that redesign roles around better judgement, stronger systems thinking, and more intelligent coordination will get far more. And yes, this requires management courage. Which is inconvenient, because courage is in shorter supply than AI tools.

Better internal operations create better customer experience

The fifth leap is better customer experience, even if people do not label it that way. A lot of internal AI use cases are sold around productivity because it is easier to win budget with an internal efficiency story. But customers feel the impact when internal operations improve. They notice when handoffs are cleaner, messaging is more relevant, follow-up is better timed, and service teams have actual context instead of a blank screen and a forced smile. AI can help businesses become easier to buy from and easier to work with. That matters. In B2B especially, customer experience is often damaged by internal fragmentation. The buyer sees one company.
Behind the scenes there are six teams, nine systems, conflicting definitions, and at least one dashboard that everyone pretends to understand. When AI helps join those dots, the customer gets a smoother experience, even if they never see the plumbing. That is a real gain. Not a vanity metric. Not an internal time-saving story dressed up as innovation. A proper improvement in how the business shows up to the market.

Tools alone will not create business value

Of course, none of this happens just because a company bought licences and told people to "have a play." That is where many AI programmes drift into parody. Real gains do not come from random experimentation with no structure behind it. They do not come from telling every employee to use a chatbot and hoping transformation will emerge from the chaos like some sort of digital swamp creature. They come from identifying meaningful business problems, improving the operating environment around them, and applying AI where it can genuinely change the way teams work.

That means process first, then tooling. It means governance before scale. It means data quality before grand promises. It means deciding where human judgement matters most, and where it is currently being wasted on tasks that do not deserve it. Most importantly, it means being honest about what kind of business gain you are actually chasing. If the goal is simple productivity, say that. There is nothing wrong with efficiency. Most organisations still have plenty of low-value work that can and should be reduced. But do not confuse that with transformation. Saving time is good. Changing performance is better.

The businesses that win will be the ones that operate differently

The next phase of AI value will not be defined by who can create the most content, automate the most tasks, or boast the loudest about "copilot" adoption. It will be defined by who can build a better operating model around it. Who can connect functions more intelligently. Who can improve decision quality. Who can standardise execution without suffocating teams. Who can reduce friction across the revenue engine. Who can turn AI from a productivity trick into a business capability. That is the real shift now underway. Most businesses are still on the first rung, using AI to do the same things a bit faster. That is understandable. It is where the market started, and for many teams it is still where the easiest wins live. But the bigger gains sit further ahead. They show up when AI starts helping businesses work differently, not just faster. And that is where the conversation gets worth having.

Discover our AI Services

  • Thinking of moving from 6sense to Demandbase? Here’s why more B2B teams are making the switch

    There comes a point with some platforms where the issue is no longer capability. It is tolerance. Yes, the dashboards look clever. Yes, the intent signals sound impressive. Yes, everybody nodded politely during the demo. But once the thing is live, the questions start. Why does this account matter? Why is that one surging? Why does everything useful seem to sit behind another commercial conversation? And why, despite all this supposed intelligence, does the platform still feel like hard work? That is the point where teams stop asking whether they bought something powerful and start asking whether they bought something practical.

For B2B organisations weighing up a move from 6sense to Demandbase, that is the real story. This is not about swapping one ABM badge for another because a partner deck said so. It is about choosing a platform that is easier to trust, easier to use, and easier to turn into actual pipeline. Demandbase is pitching exactly that, positioning its migration guide around faster time-to-value, transparent AI, and buying-group intelligence that helps teams drive revenue rather than just admire charts about it.

The biggest problem with black boxes is that eventually people stop believing them

A lot of ABM and intent platforms suffer from the same issue. They promise precision, but deliver opacity. That is fine for about five minutes. After that, marketing wants to know what is really driving prioritisation. Sales wants to know why one account is apparently red hot while another with actual human conversations is being ignored. Leadership wants to know whether the investment is producing something tangible or simply generating prettier versions of uncertainty.

Demandbase's guide goes straight at this by calling out frustration with "black-box intent models" and contrasting that with its own pitch around transparent AI. That matters because transparency is not some whimsical product virtue. It is operationally useful. If teams can understand what the platform is doing, they can explain it internally, challenge it when needed, and build better workflows around it. If they cannot, adoption drops and the tool becomes one more expensive thing that only a small handful of people pretend to fully understand. And let's be honest, nobody wants their pipeline strategy resting on "the model says so."

Faster time-to-value beats feature theatre every time

There is a weird habit in B2B software buying where complexity gets mistaken for sophistication. The more complicated the platform sounds, the more "enterprise" it must be. Usually, that is nonsense. Teams do not win because they bought the most elaborate system. They win because they bought something that gets useful, quickly. Demandbase makes that point hard in its move-over guide, saying teams are choosing it for faster time-to-value and laying out what to expect when switching, how long it actually takes to get live, and how to avoid delays, adoption issues, and wasted spend.

That is not a side point. It is the point. Marketing ops and revenue ops teams are not judged on how advanced their tooling sounds in a procurement meeting. They are judged on whether campaigns run, sales trusts the signals, pipeline improves, and nobody has to sit through six months of "transformation" before seeing any value. A platform that gets there faster is not merely more convenient. It is more commercially sane.

Discover our Podcast

Buying groups reflect reality. Single-lead obsession does not.
One of the stronger reasons to look at Demandbase is its emphasis on buying-group intelligence. Again, that is not just product wording. It is a reflection of how B2B buying actually works. Enterprise purchases are rarely driven by one heroic individual who reads an ebook and then wanders directly into closed-won status. They involve multiple stakeholders, competing priorities, internal politics, silent research, and at least one person who turns up late and somehow still gets veto power. Demandbase explicitly positions buying-group intelligence as part of the reason teams are making the switch.

That gives teams a better way to prioritise accounts. Instead of obsessing over isolated activity from one contact, they can see whether momentum is building across the people who actually influence a deal. That leads to better orchestration, more sensible sales prioritisation, and less time wasted pretending one engaged individual equals account readiness. In other words, it gets a little closer to the messy truth of B2B revenue.

Pricing and support are not boring details. They are where goodwill goes to die.

Vendors love innovation language. Buyers, meanwhile, are over here wondering how many extra invoices stand between them and the features they thought they were already paying for. Demandbase's guide is pretty blunt on this front. It calls out "endless upcharges" and support that has gone from helpful to nonexistent as part of the frustration driving some teams away from 6sense. That is a pointed comparison, but it lands because every ops leader knows the feeling. Nothing sours platform confidence faster than realising the commercial model is built around drip-feeding value back to you one awkward upsell at a time.

Support matters too, especially during transition periods. If your platform becomes harder to optimise, harder to troubleshoot, and harder to expand without vendor intervention, then every internal stakeholder feels it. Campaigns slow down. Confidence drops. Adoption gets patchy. What looked strategic in the sales cycle starts looking suspiciously like admin with branding. Demandbase is clearly making the case that it offers a smoother partnership model. Whether that is the deciding factor depends on the buyer, but it is often the thing that moves a team from interested to serious.

Ease of use is not a compromise. It is the whole game.

There is no prize for owning an ABM platform that only three people can operate without emotional damage. Demandbase includes customer proof points to reinforce this. PageUp said it compared Demandbase and 6sense across transparency, configurability, and partnership, and found Demandbase the better fit. Case IQ, meanwhile, chose Demandbase over 6sense because of its intuitive design, competitive pricing, existing familiarity within the team, and the support available.

That should not be underestimated. An intuitive platform is easier to adopt across teams. It is easier to train on. Easier to operationalise. Easier to build repeatable processes around. It reduces the gap between insight and action, which is kind of the whole point of having the thing in the first place. A platform does not become more valuable because it is difficult. It becomes more valuable when teams actually use it properly. A shocking concept, I know.

Discover our ABM Services

Migration is usually less scary than staying stuck

The biggest thing that stops teams switching is not loyalty. It is fear of disruption. Fair enough.
A platform migration can sound like a MarTech root canal. There are integrations to think about, reporting to preserve, sales teams to reassure, workflows to rebuild, governance to tighten, and at least one legacy process no one fully understands but everyone is terrified to touch. Demandbase leans directly into that concern, framing the guide around how to switch "without missing a beat" and how to move forward without losing momentum, trust, or pipeline. That is smart because most buyers are not asking whether change is possible. They are asking whether change is survivable.

The reality is that a good migration is not a leap of faith. It is a structured operational project. Audit what matters, map dependencies, define success clearly, sort the integrations properly, clean up the mess you were going to have to deal with eventually anyway, and move with intent instead of panic. Done well, a switch does not create chaos. It removes it. And frankly, sometimes the bigger risk is staying with a platform your teams no longer trust just because the pain has become familiar.

The real win is not the move itself. It is what the move forces you to fix.

This is the bit that matters most. Switching from 6sense to Demandbase is not just a technology decision. It is a chance to reset how your go-to-market teams work. To tighten account selection. To rethink prioritisation. To align marketing and sales around signals they actually believe. To stop paying for platform complexity that sounds impressive but struggles to produce value in the real world. That is where the biggest payoff usually sits. The platform matters, obviously. But the migration process also forces better questions. What do we actually need from intent? Which insights do we trust? What counts as meaningful engagement? Where are we overcomplicating things? Which workflows are driving pipeline, and which ones are just keeping dashboards busy? A move to Demandbase can absolutely improve the tech stack. The smarter outcome is that it also improves the operating model.

Final thought

No ABM platform is magic. None of them can rescue poor process, vague ownership, or marketing teams that are still mistaking activity for progress. But platforms can make good teams better, or they can trap them inside expensive ambiguity. That is why the case for moving from 6sense to Demandbase is getting attention. Demandbase is making a straightforward pitch: less black-box nonsense, faster time-to-value, stronger support, more intuitive usability, and buying-group intelligence that better reflects how B2B buying really happens. That is the core promise behind its move-over guide, and it is a promise that will resonate with any team that is tired of paying premium rates for unnecessary friction.

If your current platform feels more like something you manage than something that helps you win, that is usually your answer. Read the Demandbase guide here:

  • MQLs are the hangover: Why marketing should stop celebrating leads and start building pipeline

    For years, marketing teams have had a favourite party trick. Take a person. Watch them click on a few things. Maybe they download a guide, attend a webinar, glance at a pricing page, or fill in a form because they were cornered by a decent headline and a mild identity crisis. Add some points. Push them over a threshold. Then declare, with a straight face, that they are now "qualified". Cue the applause. Cue the dashboard. Cue the monthly report proudly announcing a rise in MQLs as if the revenue team should be popping champagne.

And then, as usual, the hangover arrives. Because many of those leads do not become pipeline. Many do not become conversations worth having. Many were never serious buying signals in the first place. They were just activity. Nicely packaged activity, perhaps. But still activity. That is the problem. Marketing has spent years rewarding itself for creating moments that look like progress instead of conditions that actually lead to revenue. The result is a lot of businesses still measuring demand with a model that feels tidy, looks familiar, and increasingly tells them absolutely nothing useful. If lead scoring is cosplay, then the MQL is the morning after. It is the consequence of believing the costume was real.

The MQL made sense once. That time has passed.

To be fair, the MQL was not invented by idiots. It came from a reasonable desire to create order. Sales teams needed a way to separate random names from people showing signs of interest. Marketing teams needed a way to prove they were doing more than sending emails and fiddling with landing pages. Leadership wanted a metric that looked like a bridge between activity and pipeline. So the MQL was born. A neat little handoff point. A moment where marketing could say, "Here you go, this one looks promising," and sales could at least pretend to believe them.

The problem is that modern buying no longer behaves in a way that makes this model particularly trustworthy. Buying is rarely driven by a single person. It is messy, delayed, political, often irrational, and usually spread across multiple stakeholders who do not all leave the same digital breadcrumbs. The person who fills in the form is not always the one with budget. The person researching solutions is not always the one making the decision. The loudest signal in your system is often not the most commercially meaningful one. So the contact that becomes an MQL may be the least important person in the room. Or worse, there may not even be a room yet. That is where the model starts to crack. Because while buying happens at account level, many marketing teams are still measuring success at contact level and acting surprised when the story does not hold together.

A lot of MQLs are just reporting events dressed up as buying signals

This is the real issue, and it is worth saying plainly. An MQL often tells you that a person did something trackable. It does not reliably tell you that an account is becoming buyable. Those are two very different things. A person downloading an asset is a trackable event. A person attending a webinar is a trackable event. A person clicking around your site three times in a week is a trackable event. Useful, maybe. Interesting, perhaps. But still not the same as a buying condition emerging within an account. And yet businesses continue to build dashboards, goals, routing logic, and team incentives around exactly those kinds of moments. This is where the wheels start to come off. Marketing celebrates lead volume.
The MQL made sense once. That time has passed.

To be fair, the MQL was not invented by idiots. It came from a reasonable desire to create order. Sales teams needed a way to separate random names from people showing signs of interest. Marketing teams needed a way to prove they were doing more than sending emails and fiddling with landing pages. Leadership wanted a metric that looked like a bridge between activity and pipeline.

So the MQL was born. A neat little handoff point. A moment where marketing could say, “Here you go, this one looks promising,” and sales could at least pretend to believe them.

The problem is that modern buying no longer behaves in a way that makes this model particularly trustworthy. Buying is rarely driven by a single person. It is messy, delayed, political, often irrational, and usually spread across multiple stakeholders who do not all leave the same digital breadcrumbs. The person who fills in the form is not always the one with budget. The person researching solutions is not always the one making the decision. The loudest signal in your system is often not the most commercially meaningful one.

So the contact that becomes an MQL may be the least important person in the room. Or worse, there may not even be a room yet. That is where the model starts to crack. Because while buying happens at account level, many marketing teams are still measuring success at contact level and acting surprised when the story does not hold together.

A lot of MQLs are just reporting events dressed up as buying signals

This is the real issue, and it is worth saying plainly. An MQL often tells you that a person did something trackable. It does not reliably tell you that an account is becoming buyable. Those are two very different things.

A person downloading an asset is a trackable event. A person attending a webinar is a trackable event. A person clicking around your site three times in a week is a trackable event. Useful, maybe. Interesting, perhaps. But still not the same as a buying condition emerging within an account.

And yet businesses continue to build dashboards, goals, routing logic, and team incentives around exactly those kinds of moments. This is where the wheels start to come off. Marketing celebrates lead volume. Sales sees weak conversion. SDRs work lists they do not trust. Revenue leaders start asking why “qualified” leads are not turning into genuine opportunities. Marketing responds by refining the scoring model, tweaking the thresholds, and adding even more detail to the reporting. Which is a bit like trying to fix a bad haircut by measuring it more precisely.

The problem is not always that the system lacks sophistication. Quite often, the problem is that the system is classifying the wrong thing. A reporting event helps explain activity. A buying signal helps you decide where commercial effort should go. Too many businesses confuse the two.

Easy to count has become more important than useful to know

This is one of the less glamorous reasons so many demand models quietly fail. It is far easier to count an individual conversion than it is to interpret account-level momentum. It is far easier to report a lead threshold than it is to understand whether a buying group is forming. It is far easier to tell the board that MQL volume is up 23 percent than it is to say, “We are seeing stronger commercial movement in accounts that match our best-fit profile and show genuine timing pressure.”

One sounds neat. The other sounds like actual work. So guess which one most businesses default to.

Marketing has been rewarded for what is visible, not necessarily for what is meaningful. That would be tolerable if the visible thing still behaved like a useful proxy for pipeline. In many cases, it no longer does.

A single person from a target account engaging with content may mean nothing. Three stakeholders from the same account arriving within a short period, each looking at different pieces of decision-stage content, probably means a lot more. An implementation-related conversation means more than a webinar registration. A pricing discussion means more than a content download. A security review means more than someone clicking a nurture email while avoiding a meeting.

The point is not that engagement is irrelevant. The point is that engagement without context is flimsy. And a flimsy signal should not be carrying the weight of your demand strategy.
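To give that context some shape, here is a rough sketch of reading the same kind of events at account level rather than contact level. The fourteen-day window, the three-stakeholder rule, and the event names are illustrative assumptions, not benchmarks:

```python
# A rough sketch of account-level signal reading. The window, the
# stakeholder threshold, and the event names are assumptions for
# illustration only.

from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Event:
    account: str
    contact: str
    kind: str  # e.g. "pricing_page_view", "security_review"
    occurred_at: datetime


# Hypothetical set of decision-stage signal types.
DECISION_STAGE = {"pricing_page_view", "security_review", "implementation_call"}


def buying_group_emerging(
    events: list[Event],
    account: str,
    window: timedelta = timedelta(days=14),
    min_stakeholders: int = 3,
) -> bool:
    """True when several distinct people from one account touch
    decision-stage content within a short window."""
    cutoff = datetime.now() - window
    recent_contacts = {
        e.contact
        for e in events
        if e.account == account
        and e.kind in DECISION_STAGE
        and e.occurred_at >= cutoff
    }
    return len(recent_contacts) >= min_stakeholders
```

One form fill barely registers here. Three different stakeholders poking at decision-stage content in the same fortnight does.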
The MQL has become a permission slip for optimism

That sounds harsh, but it is often true. In many organisations, the MQL is not a robust qualification model. It is simply the point at which marketing is allowed to feel good about itself. The lead crossed the line. The number moved. The target was hit. Everyone can now behave as though progress has occurred.

This is comforting. It is also dangerous. Because once the metric becomes emotionally important, it stops being challenged properly. Teams begin defending the existence of the MQL rather than asking whether it still reflects how buying works. Sales gets blamed for weak follow-up. Campaign teams get asked for more volume. SDR teams get told to work harder. Nobody wants to say the obvious thing, which is that a lot of this so-called qualification may have very little to do with commercial readiness at all.

And that is how businesses end up running entire revenue motions around glorified hand-raisers.

Marketing does not need more lead theatre. It needs a better operating model.

The answer is not to replace MQLs with chaos. Nor is it to delete every lifecycle stage and start speaking in mystical revenue riddles. What is needed is a shift in what marketing is actually trying to identify and influence.

Instead of asking, “When is this lead qualified?” the better question is, “What conditions suggest this account is moving closer to a real buying decision?”

That changes everything. It changes what you measure. It changes what you route. It changes what sales trusts. It changes how campaigns are judged. It also nudges marketing into a much more commercially useful role, which is long overdue. Because marketing’s job is not just to generate names. It is to create movement. To increase the likelihood that the right accounts engage, progress, and enter sales conversations with something resembling genuine intent. That is a more serious job than producing a pile of contacts and calling it pipeline.

What should replace MQL obsession?

Not a single new acronym, thankfully. The world does not need another one. What it does need is a model built around commercial conditions rather than arbitrary thresholds.

That starts with account fit. Real account fit, not fantasy ICP nonsense where half the market somehow qualifies as ideal. Good fit should reflect whether the account has the right level of complexity, the right kinds of pain, the right operational reality, and the right commercial shape for your business to win and serve well. Fit should be a gate, not a decorative line in a strategy deck.

Then there is buying-group emergence. One person engaging is a weak signal. Several relevant stakeholders showing up from the same account in a pattern that suggests evaluation is something else entirely. That is where things begin to get interesting. Not because it guarantees a deal, but because it starts to resemble the way decisions are actually made.

Next comes timing pressure. This is one of the most underused and most commercially important pieces of the puzzle. Why now matters more than almost everything else. A replatforming plan, a looming renewal, an internal re-org, reporting chaos, a change in leadership, a compliance deadline, a broken process, a strategic mandate: these are the conditions that create movement. Someone downloading a whitepaper does not create urgency. It may simply indicate boredom between meetings.

And finally, there are progression signals with actual weight behind them. Meetings involving multiple stakeholders. Implementation conversations. Commercial discussions. Timeline questions. Security reviews. Requests for technical validation. Internal language shifting from casual curiosity to practical decision-making. These are not perfect either, but they are much harder to fake. They also cost the buyer something, which is usually a very good sign.

This is where marketing should be focusing its attention. Not on whether a lead scored 74 instead of 71. Not on whether a form fill should count double if it came from paid social. Not on endlessly polishing a framework that was built for a simpler buying environment and now survives mostly because everyone knows where it lives in the CRM.
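Pulled together, those four conditions imply a very different qualification check than a points total. The sketch below is one possible shape for it; every field name and rule is an assumption that a real implementation would replace with its own definitions:

```python
# A hedged sketch of condition-based account qualification: fit as a
# gate, then evidence of buying-group, timing, and progression
# movement. Every field name and rule here is an assumption.

from dataclasses import dataclass, field


@dataclass
class AccountConditions:
    fits_icp: bool                 # honest fit assessment, not wishful ICP
    active_stakeholders: int       # distinct people engaging recently
    timing_pressures: list[str] = field(default_factory=list)
    progression_signals: list[str] = field(default_factory=list)


def worth_serious_effort(account: AccountConditions) -> bool:
    """Fit gates everything; the rest is evidence of movement."""
    if not account.fits_icp:
        return False  # no volume of clicks compensates for poor fit

    buying_group = account.active_stakeholders >= 3
    timing = bool(account.timing_pressures)          # e.g. ["renewal_looming"]
    progression = bool(account.progression_signals)  # e.g. ["security_review"]

    # Hypothetical rule: require movement on at least two dimensions.
    return sum([buying_group, timing, progression]) >= 2
```

The point is not the exact rule. The point is that the question being answered is “is this account moving?”, not “did this contact click enough?”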
This is also why sales and marketing keep annoying each other

The MQL model does not just distort measurement. It distorts trust. Marketing says it delivered qualified leads. Sales says those leads are rubbish. Marketing says sales is ignoring good demand. Sales says marketing is measuring engagement, not intent. Then both sides sit in a meeting staring at the same funnel with completely different levels of faith in it. It is a deeply inefficient way to run a revenue team.

The deeper issue is that both sides are often reacting sensibly to a broken shared model. Marketing has been taught to optimise for visible conversion. Sales has been trained by experience to be sceptical of anything that looks too easy. The result is a constant tension between volume and credibility.

A better model lowers that tension. If both teams are aligned around account fit, buying-group activity, timing pressure, and commercially meaningful progression, the conversation gets healthier fast. Marketing is no longer defending a pile of shiny contacts. Sales is no longer rolling its eyes every time a dashboard says pipeline is “warming up”. Both teams are looking at the same kinds of signals and asking the same practical question: is this account moving in a way that deserves serious effort?

That is a much better conversation to have.

You do not necessarily need to kill the MQL. But you should absolutely demote it.

Some businesses will still need an MQL stage for workflow reasons. Fine. Use it as an internal signal if you must. Use it to trigger routing. Use it to mark a point in a process. Use it because your systems are held together by string and inherited logic and you cannot rip it all out in one go.

But stop treating it like the headline metric for marketing contribution. That is where the damage happens.

An MQL can still exist without being worshipped. It can be a checkpoint, not a trophy. It can serve operations without pretending to represent commercial truth. The trouble starts when businesses build their whole demand story around it. Because the story that matters is not whether marketing produced more qualified leads this quarter. The story that matters is whether marketing improved the conditions that make pipeline more likely in the accounts that actually matter. That is a much stronger claim. It is also much harder to fake.

The next era of demand generation will be less flattering and more useful

That is probably for the best. The old model produced very pretty dashboards. It also produced an awful lot of false confidence. Teams could point to rising lead volumes while pipeline quality quietly sagged underneath. Targets got hit. Reports got written. Revenue teams kept wondering why all this apparent demand still felt so anaemic in the real world.

The businesses that move fastest now will be the ones willing to let go of neat-but-empty metrics and get more honest about what buying actually looks like. That means less worship of individual conversions. Less obsession with lead thresholds. Less applause for activity that happens to be easy to track. More attention to account movement. More weight given to urgency and buying conditions. More focus on signals that indicate real commercial effort from the buyer side.

In other words, less theatre. More evidence. That may make some dashboards uglier for a while. Good. Ugly truth is still better than polished nonsense.

Stop asking how to generate more MQLs

That is the wrong question now. The better question is this: How do we help more of the right accounts become sales-ready in ways that look like deals we actually win?

That question forces a more grown-up strategy. It pushes marketing closer to revenue. It exposes weak measurement. It sharpens targeting. It improves alignment with sales. And, perhaps most importantly, it stops teams mistaking form fills for progress.

Because the brutal truth is that many MQLs were never a sign of momentum. They were just the easiest thing to celebrate. And marketing has celebrated enough easy things. It is time to build pipeline instead.

Need help with that? Let's talk...