  • Claude can now connect to Marketo. That should make enterprise teams nervous, not giddy.

    There is a certain kind of enterprise tech announcement that makes people lose the run of themselves. A new connection appears. A big-name platform links arms with a big-name model. The screenshots look slick. The demos look effortless. Within minutes, half the market starts talking as if the future has arrived, sorted the backlog, and fixed governance on its lunch break. This is one of those announcements. Claude can now connect to Marketo. And yes, there is obvious appeal in that. Ask questions in plain English. Move faster. Find things more easily. Reduce some of the drudgery. Cut through an interface that nobody has ever lovingly described as intuitive. Fine. But sensible enterprise teams should not be reacting with giddy excitement. They should be reacting with a healthy level of suspicion. Because this is not just a handy new feature. It is a new way into a live enterprise marketing system. One that contains customer data, campaign logic, approval structures, reporting dependencies, legacy weirdness, and enough hidden risk to make experienced Marketing Ops people instinctively flinch when someone says, “I’ve just made a quick change in production.” That is why the interesting question is not whether Claude can connect to Marketo. It can. The interesting question is why so many people seem ready to celebrate that fact before they have done the boring grown-up work of asking what it means for permissions, approvals, audit trails, QA, and production risk. This is not an anti-AI argument. FAR from it. It is an anti-naivety one. And there is plenty of naivety here going around. The fantasy version is lovely The fantasy version of this story is easy to sell. A user asks for help. Claude finds the right thing. Maybe it speeds up a task. Maybe it helps someone cut through clutter. Maybe it reduces the amount of time spent digging through programs, folders, assets, and all the other bits of enterprise software that seem specifically designed to make simple things feel needlessly painful. That version will do very well in demos. Unfortunately, enterprise reality is not built for demos. Real Marketo environments are rarely tidy. They are usually a mix of current work, old work, half-retired work, undocumented fixes, inherited structures, strange dependencies, local naming conventions, and processes that are technically still alive despite nobody being fully sure why. That is what makes this connection worth taking seriously. Because Claude is not being attached to a neat little sandbox full of clean logic and sensible governance. In many organisations, it is being attached to a live production environment already held together by caution, experience, and the occasional whispered warning not to touch a specific folder unless you want to ruin your week. That is not the kind of setup that should inspire giddiness. The issue is also not whether the model is clever A lot of the public conversation around this sort of thing goes sideways almost immediately. People get distracted by the intelligence question. Can the model understand the task? Can it retrieve the right thing? Can it help teams move faster? Can it make the system easier to use? That is all mildly interesting. The more important question is much less glamorous. What can it access? Because that is where the real enterprise risk sits. Can it only retrieve information, or can it change things too? Can it create assets? Can it clone programs? Can it update records? Can it approve emails? Can it activate campaigns? Can it export data? 
Can it interact with live production objects under the permissions of a user that was set up too broadly because someone wanted the demo to be impressive? That is the real story. The danger is not that a model might say something daft. Humans do that all the time and enterprise software has somehow carried on regardless. The danger is that a conversational layer gets connected to a platform where access, scope, and control matter far more than enthusiasm - don't forget that this is an "off the shelf" LLM model we are talking about here... not a bespoke Agentic AI. Permissions are where this stops being fun The moment a language model connects to Marketo, this stops being a shiny feature story and becomes a permissions story. Which is exactly why so many people will try to avoid that conversation. Permissions are dull. Permissions are fiddly. Permissions are the part that ruins the fun by asking irritating questions like who gets access to what, under which conditions, with what restrictions, and with what consequences if something goes wrong. In other words, permissions are where adults enter the room. And they matter because enterprise platforms do not become safe just because the front end becomes conversational. If anything, the opposite is true. The easier it feels to ask for something, the easier it becomes to forget that the system underneath is still capable of doing very real things with very real consequences. That is the trap. Prompting feels casual. Production is not. A request typed into a friendly interface does not feel like changing a live marketing system. It feels more like asking for help. That softer feeling is precisely what makes stronger permissions and tighter boundaries more important, not less. Because the consequences have not become gentler. Only the experience of issuing the instruction has. Enterprise stability is often built on caution, not elegance There is a great myth in enterprise Marketing Operations that stable environments are always the result of pristine architecture, immaculate governance, and flawless documentation. That would be lovely. It is also nonsense. A lot of enterprise stability comes from experienced people being careful. They know which programs are safe to use and which are not. They know which campaigns need extra checks. They know what can be changed quickly and what needs a proper review. They know where the bodies are buried, metaphorically speaking, and they know better than to go poking around them on a Wednesday afternoon. That caution has value. It is not glamorous. It does not make for exciting feature launches. But it is often the thing preventing expensive mistakes. Now place a conversational layer on top of that environment and the tone changes immediately. Suddenly the interaction feels easier. Lighter. More natural. Less formal. Less loaded. That sounds good until you realise how much enterprise safety relied on the fact that Marketo did not feel casual. The interface was annoying, yes. It was also part of the ceremony. It forced some level of navigation, context, and intent. It reminded users they were inside a system with structure and consequence. A prompt box does none of that. A prompt box says, go on then. Approvals are not red tape. They are damage control in advance. Approvals are one of those things people only appreciate properly after they have bypassed them and regretted it. Nobody enjoys extra process for the sake of it. Fair enough. 
Plenty of enterprise process exists only because someone somewhere once survived a committee and decided everybody else should suffer too. But approval structures in Marketing Operations are not there purely for decoration. They exist because the gap between intention and execution is where a lot of nonsense gets caught. An email is checked before it goes live. A campaign is reviewed before activation. A change is questioned before it lands in production. Someone else has a chance to look at it and say, hang on, are we sure this is right? That pause matters. The problem with conversational access is that it shortens the emotional distance between wanting something done and trying to do it. That is a big part of the appeal. Less friction. Less digging. Less messing about. But some of that friction was doing useful work. It was giving teams just enough resistance to stop every half-formed idea marching directly into a live environment wearing confidence it had not earned. Making access easier does not make approvals less necessary. It makes them more important. Audit trails suddenly matter a lot more Here is where things get properly uncomfortable. In traditional platform use, there is usually some way to reconstruct what happened. Who made a change. What they touched. When they did it. What was approved. Which sequence led to the issue. It may not be elegant, but there is generally a trail. Once you start putting an LLM model in the middle, that clarity can get foggy very quickly. Was the action directly initiated by the user? Was the request ambiguous? Did the system infer something beyond what was intended? Which permissions were in play? What exactly was executed? What review existed around that setup? Who owns the resulting action when the person typed a broad request, the model interpreted it, and the system carried it out under legitimate credentials? That is why auditability is not some dreary back-office concern here. It is central. If a business cannot clearly trace what was requested, what was executed, under whose permissions, against which assets, and with what safeguards in place, then it has no business pretending this is all under control. That is not negativity. That is basic enterprise hygiene. With Claude, QA does not go away. It gets harder. There is a lazy idea floating around that this kind of connection reduces manual effort and therefore eases pressure on teams. In some places, maybe. With the correct AI integration, absolutely. But with an LLM? Let’s not get carried away. What usually happens in enterprise environments is that effort moves. It does not vanish. You may spend less time hunting around for assets or navigating a clunky interface. Fine. But the need to verify what is being surfaced, what is being changed, and what context sits around that action does not disappear. If anything, it increases. Because when work can be requested more casually, teams need stronger QA, not weaker . They need clearer checks around scope, asset selection, environment, downstream impact, inherited logic, and side effects. They need less blind trust, not more. And they absolutely cannot afford to fall into the trap of assuming that because something was easy to ask for, it must also be safe to carry out. That is how messy systems become messier. Production risk is rarely dramatic at first The people rushing to celebrate this sort of thing tend to imagine risk in extremes. Either everything is brilliant or everything is on fire. 
Real enterprise risk is usually much duller than that, which is part of what makes it dangerous. A live asset gets changed in the wrong place. A program is cloned from the wrong template. A campaign goes active without the right review. A data export happens too easily. A workflow touches something it should not. A team starts leaning on conversational access without fully appreciating where the boundaries should be. None of that necessarily creates instant disaster. What it creates is drift. A slow erosion of trust. A buildup of avoidable rework. More nervous stakeholders. More technical debt. More sceptical legal and compliance teams. More pressure on already stretched Ops people to clean up problems created by convenience being mistaken for competence. That is usually how production risk shows up. Not as one giant cinematic failure, but as a series of smaller decisions made too casually until the overall environment gets shakier than anyone wants to admit. The wrong question is whether you can use it Of course you can use it. That is not the serious question. The serious question is whether your organisation has the discipline to use it without making things worse. Do you have a permission model that is genuinely fit for purpose? Do you have clear rules around what can and cannot be done? Do you have approval structures that still hold when interaction becomes conversational? Do you have meaningful audit trails? Do you have QA discipline strong enough for a lower-friction access layer? Do you have separation between experimentation and production? Do you have named ownership when something goes wrong? Do you have governance that lives in the actual operating model, not in a document everybody claims to support and nobody opens? If the answer to those questions is vague, patchy, or politely avoided, then no, you are not ready. Not because the technology is bad, but because your operating model is too flimsy to carry it safely. That is the point enterprise teams need to hear. This is a maturity test, not a toy That is the sharper read on this launch. It is not just a feature release. It is a maturity test. It reveals whether a business sees Marketing Operations as a serious control function or as a convenient place to experiment with shiny new capabilities and hope the risk gets sorted out later. A mature organisation will look at this and ask hard questions about permissions, approvals, audit, QA, and production boundaries before it starts applauding. An immature one will rush to the demo, celebrate the novelty, and act surprised later when Security, Legal, Compliance, or the head of Marketing Ops starts asking the sort of questions that make innovation fans suddenly very interested in changing the subject. That is why enterprise teams should be nervous. Not fearful. Nervous. There is a difference. Fear says do nothing. Nervousness says pay attention. And right now, paying attention would be a refreshing change. Nervous is the correct response A little nervousness would be healthy here. Enterprise teams should be nervous when conversational access becomes easier than governance. They should be nervous when approvals are treated like optional friction. They should be nervous when audit trails are vague. They should be nervous when QA is assumed rather than designed. They should be nervous when production starts to feel casual. Because nervous teams ask better questions. 
Giddy teams usually skip straight to the part where they create new problems and then hold a post-mortem to discuss how the warning signs were missed. Claude connecting to Marketo may well become useful. In the right environment, with the right controls, it could genuinely help capable teams move faster without losing discipline. But that outcome will not belong to the teams who got excited first. It will belong to the teams who treated permissions seriously, kept approvals intact, demanded proper auditability, strengthened QA, respected production risk, and resisted the now very fashionable urge to mistake easy access for readiness. That may not be the fun version of the story. It is, however, the one enterprise teams should be reading.
Discover MOPsy
Discover our latest benchmark report
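As a footnote to the permissions and audit points above, here is a minimal sketch of the sort of gate a sensible team might want between a conversational layer and Marketo: an explicit allowlist of read-only actions, a hard stop on anything that writes to production, and an audit record for every request. To be clear, the action names, fields, and logging below are illustrative assumptions, not real Marketo or Claude APIs and not anybody's shipping connector.

```python
# Illustrative sketch only: a least-privilege "tool gateway" between a
# conversational assistant and a marketing platform. Action names, fields,
# and the audit sink are hypothetical, not real Marketo or Claude APIs.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Write actions (approve, clone, activate, export) are deliberately absent:
# anything not on this list gets blocked and routed to a human.
READ_ONLY_ACTIONS = {"get_program", "list_assets", "get_email_content"}

@dataclass
class AuditRecord:
    timestamp: str
    user: str                 # the human whose credentials are in play
    raw_request: str          # what was asked for, verbatim
    interpreted_action: str   # what the assistant decided to execute
    target_asset: str
    allowed: bool
    reason: str

def handle_request(user: str, raw_request: str, action: str,
                   target_asset: str, audit_log: list) -> bool:
    """Allow only read-only actions; block and log everything else for human review."""
    allowed = action in READ_ONLY_ACTIONS
    reason = "read-only action" if allowed else "write action blocked: needs human approval"
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user=user,
        raw_request=raw_request,
        interpreted_action=action,
        target_asset=target_asset,
        allowed=allowed,
        reason=reason,
    )
    audit_log.append(asdict(record))
    return allowed

if __name__ == "__main__":
    log: list = []
    handle_request("jane.doe", "show me the Q3 nurture email", "get_email_content",
                   "email: q3-nurture-01", log)
    handle_request("jane.doe", "go ahead and approve it", "approve_email",
                   "email: q3-nurture-01", log)
    print(json.dumps(log, indent=2))  # the trail: what was asked, what ran, under whose credentials
```

The specifics will vary by platform. The point is that "what was requested, what was executed, under whose permissions, against which assets" should be answerable from a log, not from somebody's memory.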

  • Campaign QA is eating your team alive and nobody wants to admit it

    There is nothing quite like campaign QA for making expensive, experienced enterprise teams do work that feels suspiciously close to digital scavenger hunting. Open the email. Check the links. Check the tokens. Check the form. Check the follow-up. Check the workflow. Check the audience. Check the field mapping. Check the suppression rules. Check the approval status. Check it all again because someone made a “tiny change” after sign-off. Then check it one more time because nobody wants to be the person who lets a broken campaign go live. It is not glamorous. It is not strategic. It is not the kind of work anybody brags about. But it quietly eats hours every week across most enterprise marketing teams. And the worst part is this: most of those hours are not being wasted because teams are careless. They are being wasted because campaign QA in large organisations has become bloated, manual, inconsistent, and held together by stressed people trying to stop avoidable mistakes from escaping into the wild. That is a problem in its own right. It is also exactly why tools like MOPsy matter. Because when highly capable Marketing Operations teams are spending huge chunks of their week doing repetitive campaign checks, something has gone wrong in the operating model. Campaign QA is necessary. The current way of doing it is the issue Nobody is suggesting campaign QA should disappear. In enterprise environments, quality control matters. A lot. One broken workflow, one wrong audience, one bad sync, one incorrect token, one missed suppression rule, and suddenly you have internal panic, external embarrassment, and a clean-up job that takes longer than the original build. The problem is not the existence of QA. The problem is how it happens. In too many enterprise teams, campaign QA is still heavily manual. It lives in checklists, spreadsheets, screenshots, Slack threads, email chains, approval comments, and whatever vague institutional memory happens to be sitting inside the heads of the people who have been there longest. Everyone knows what needs checking, broadly speaking. The trouble is that the checking process is often fragmented, inconsistent, and massively reliant on humans doing repetitive work over and over again. That is where the hours disappear. A campaign that looks simple from the outside can have a ridiculous number of moving parts underneath. Emails, forms, landing pages, hidden fields, workflow logic, list criteria, dynamic content, CRM integration, lead routing, alerting, timing rules, tracking codes, audience exclusions, nurture logic, webinar connections, regional variations, legal tweaks, and stakeholder edits that arrive three minutes before launch. Every extra layer adds risk. Every risk creates another check. Every check takes time. Very quickly, QA stops being a sensible final control and starts becoming a full-blown drain on the team. Most teams are not inefficient. They are compensating This is the bit many people get wrong. When enterprise teams spend too long on campaign QA, the lazy explanation is that they need to be more efficient. Usually, that is nonsense. More often, they are compensating for an environment that is too complex, too fragile, or too inconsistent to trust. That lack of trust shows up everywhere. People double-check because they have been burned before. Stakeholders insist on seeing final versions because something slipped through six months ago. Approvers keep asking for screenshots because they do not trust the build. 
Ops teams re-run tests because a last-minute change always seems to break something unexpected. Marketers ask for “just one more review” because they know one small error can become a very visible mess. This is not laziness. It is risk management by exhausted humans. The issue is that humans are doing too much of the safety work. And humans are a very expensive place to park repetitive validation. What campaign QA looks like in the real world Campaign QA sounds tidy until you look at what it actually involves. It is not just proofreading an email and clicking a few links. It is checking whether the segmentation logic is correct, whether the form writes cleanly to the right fields, whether the thank-you journey fires properly, whether the routing rules still behave as expected, whether the campaign naming follows standards, whether the audience exclusions are working, whether UTM parameters are consistent, whether cloned assets have carried over the wrong settings, whether alerts fire, whether wait steps are right, whether approval comments were actually actioned, whether the CRM sync is behaving, whether the preference centre is connected properly, whether the footer is compliant, and whether the one stakeholder who always spots the obscure edge case is going to have another moment just before launch. Then do that across multiple campaigns. Across regions. Across product lines. Across different teams. Across multiple systems. Across campaigns that were built by different people, in slightly different ways, to slightly different standards. That is where the wheels start wobbling. Because QA is rarely one neat, contained stage. It spills across the whole delivery cycle. It is rarely just one person doing one clean review. It is bits of time from multiple people, spread across multiple tools, with multiple interruptions and plenty of re-checking when something changes after the “final” review. That is not a quick task. That is death by a thousand tabs. Discover the 2026 AI Benchmark Report The hidden cost is bigger than most teams realise The obvious cost of campaign QA is time. The less obvious cost is the way that time gets shredded. A team might say a campaign takes two hours to QA. What they often mean is there are two visible hours of checking. What they usually do not include is the context switching, the waiting, the duplicated review, the stakeholder back-and-forth, the rechecks after edits, the confusion over versions, and the extra time spent validating things that should have been easier to verify in the first place. This is where enterprise teams quietly lose entire days. Not because somebody sat in a room for eight straight hours doing QA, but because ten people each lost twenty minutes here, forty minutes there, and another half hour because someone made a change after sign-off and nobody wanted to risk not checking it again. That kind of waste is hard to spot because it hides inside the flow of work. It feels normal. It feels responsible. It even feels unavoidable. But it is still waste. And worse, it is waste involving some of the most capable people in the team. Highly experienced Marketing Operations professionals should not be spending huge chunks of their week manually checking whether the same set of campaign rules were followed again. That is not strategic oversight. That is process debt. Manual QA does not scale nicely This is where things get especially grim. Manual QA might limp along when campaign volumes are low and the team is small. 
Once scale enters the picture, it starts to creak. More campaigns mean more checks. More regions mean more variations. More stakeholders mean more approvals. More platforms mean more handoffs. More complexity means more risk. And most teams respond to rising risk in the same way: they add more manual review. That feels sensible in the moment. It also creates a system where campaign velocity slows down, people become bottlenecks, and launch confidence drops rather than improves. So teams end up stuck between two bad options. They either keep throwing time at QA and slow everything down, or they cut corners and accept more risk. Neither is a particularly grown-up answer. Customers see the consequences, not the excuses Internally, a campaign error may look small. Externally, it looks sloppy. That is the uncomfortable truth. Customers and prospects do not see the tight deadline, the late-stage change request, the weird MAP behaviour, or the fact that three different teams touched the build. They see the thing that lands in front of them. An email with the wrong personalisation. A form that behaves strangely. A broken page. A follow-up that does not make sense. A message sent too early, too late, or to the wrong people. A clunky experience that makes the brand feel careless. Enterprise marketing teams know this, which is exactly why they overcompensate with extra QA. They are trying to avoid reputational damage. Fair enough. The trouble is that the answer has too often been more human effort instead of a smarter system. That is not sustainable. A lot of QA processes were never properly designed Let’s be honest. Many enterprise QA processes did not come from a thoughtful redesign workshop with a neat operating model at the end of it. They evolved. One person made a checklist. Another added a spreadsheet. Somebody started keeping screenshots. A stakeholder demanded final approval because of one painful incident two years ago. A platform migration added more steps. A reorg split ownership. Regional teams created local variations. Legal got more involved. Nobody really rebuilt the process from the ground up. They just kept adding layers. The result is predictable. Checks happen late. Standards vary by team. Known issues keep repeating. Approvals are inconsistent. Documentation is patchy. Too much knowledge sits in the heads of a few over-relied-upon people. And the team spends far too much energy catching preventable errors instead of building a cleaner, more resilient way of delivering campaigns. That is the real issue. Manual QA often looks like control, but in many cases it is just a workaround for a messy system upstream. The smarter question is not “who checks this?” but “why does this need checking this way?” This is where the conversation gets more useful. Most teams frame QA as a people problem. Who owns it? Who signs it off? Who catches errors? Who reviews the reviewer? That is understandable, but it is also limiting. A better question is why so much of the checking still depends on humans in the first place. Some things absolutely should. Brand judgement, tone, compliance nuance, context, audience appropriateness, stakeholder sensitivity. Those things still need human eyes and human brains. But a lot of campaign QA is not that. A lot of it is repetitive validation. Does this asset follow the right naming convention? Are these links structured correctly? Are these components present? Does the build align to known standards? Has this flow been configured the way it should be? 
Are the same rules being followed every time? That is not human brilliance. That is structured checking. And structured checking is exactly where many enterprise teams are still burning ridiculous hours because the process has not caught up with the complexity of the work. Where MOPsy comes in This is not about replacing your team. It is about protecting your team from work they should not still be buried in. MOPsy is built for Marketing Operations, which means it is not some generic AI gadget trying to force its way into a serious workflow wearing a shiny badge and a lot of confidence. It is designed to be useful in the kind of operational environments where campaign complexity, governance, and quality control actually matter. That makes campaign QA a very obvious fit. Because the problem with QA is not usually that teams do not care. It is that too much of the process still relies on manual review, repeated checking, and humans spotting patterns that a smarter system should be helping to identify much earlier and much more consistently. MOPsy can help teams review campaign builds against defined standards, flag inconsistencies, surface likely issues, support governance, and reduce the amount of repetitive checking that currently eats into experienced team time. That matters because enterprise QA is rarely just about spelling mistakes and rogue buttons. It is about checking campaign logic, process discipline, consistency, configuration, and execution quality across a lot of moving parts. It is exactly the sort of environment where repetitive validation should not still depend so heavily on humans clicking through the same things every week. MOPsy does not remove the need for judgment. It removes more of the grind. And that is the point. This is about more than saving time Saving time is useful. Nobody is going to argue with that. But the more interesting benefit is what happens when teams stop drowning in manual QA. Friction drops. Confidence improves. Campaigns move with less drama. Approvals become cleaner. Standards become easier to enforce. Fewer issues slip through. Ops talent gets used for higher-value work instead of repetitive campaign checking. This is where the real gain sits. Not in a vague promise of efficiency, but in a better operating model. One where the team is not constantly relying on heroic effort, invisible knowledge, and last-minute checks to keep quality intact. Because that is another truth most teams recognise instantly: QA often depends far too heavily on a small number of people who know exactly where problems usually hide. They know the awkward workflows, the strange field behaviour, the steps that always get forgotten, the stakeholders who make late changes, the assets most likely to break, and the checks that can never be skipped. That may feel reassuring. It is not resilience. It is a fragile process wearing a familiar face. A stronger QA model, supported by the right tooling, helps shift that knowledge into something more repeatable, more scalable, and less dependent on human memory and personal heroics. Which is, frankly, how enterprise Marketing Operations should be operating. The teams that improve this will move differently The best teams will not be the ones who keep tolerating more QA pain and calling it diligence. They will be the ones who take a hard look at where the hours are really going, separate genuine human review from repetitive validation, and start building a smarter system around campaign quality. That means improving standards. 
Tightening process. Reducing inconsistency. Strengthening governance. And using tools like MOPsy where they genuinely help make campaign delivery safer, sharper, and less painfully manual. Because enterprise teams are not wasting hours on campaign QA because they are bad at their jobs. They are wasting hours because the work has become too complex for old habits, too risky for guesswork, and too repetitive to keep throwing humans at it forever. That is the real opportunity. Not shiny AI nonsense. Not another toy with a big promise and a weak use case. Just a very practical shift in how campaign quality gets managed. And for a lot of enterprise teams, that shift is overdue. A better way to handle campaign QA If your team is spending hours every week manually checking campaigns, rechecking last-minute changes, chasing approvals, and relying on experienced people to catch the same issues over and over again, the problem is not just workload. It is the model. MOPsy helps enterprise Marketing Operations teams bring more consistency, more control, and less manual drag into campaign QA. That means fewer hours lost to repetitive checking and more time spent on the work that actually moves the needle. If campaign QA is still eating your team alive, it may be time to stop accepting that as normal. MOPsy was built for exactly this kind of problem. Discover MOPsy
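For the avoidance of doubt about what "structured checking" means in practice, here is a small sketch of the kind of validation that should not need a human clicking through tabs. The naming pattern, UTM requirements, and required components are invented standards for illustration; they are not MOPsy's rules or any platform's actual API.

```python
# Illustrative sketch: automated validation of a campaign build against
# defined standards. The standards and the campaign structure are invented;
# real checks would pull the build from the MAP via its API.
import re
from urllib.parse import urlparse, parse_qs

NAMING_PATTERN = re.compile(r"^\d{4}-(EMEA|NA|APAC)-[a-z0-9-]+$")   # e.g. 2026-EMEA-q1-webinar
REQUIRED_UTM = {"utm_source", "utm_medium", "utm_campaign"}
REQUIRED_COMPONENTS = {"email", "form", "thank_you_page", "suppression_list"}

def validate_campaign(campaign: dict) -> list[str]:
    """Return a list of issues; an empty list means the build passed these checks."""
    issues = []
    if not NAMING_PATTERN.match(campaign.get("name", "")):
        issues.append(f"naming convention not followed: {campaign.get('name')!r}")
    missing = REQUIRED_COMPONENTS - set(campaign.get("components", []))
    if missing:
        issues.append(f"missing components: {sorted(missing)}")
    for url in campaign.get("links", []):
        params = set(parse_qs(urlparse(url).query))
        if not REQUIRED_UTM <= params:
            issues.append(f"link missing UTM parameters: {url}")
    return issues

if __name__ == "__main__":
    build = {
        "name": "2026-EMEA-q1-webinar",
        "components": ["email", "form", "thank_you_page"],   # suppression list forgotten
        "links": ["https://example.com/register?utm_source=email&utm_medium=nurture"],
    }
    for issue in validate_campaign(build):
        print("FLAG:", issue)
```

None of this replaces judgement on tone, compliance nuance, or audience. It just means the repeatable rules get checked the same way every time, before an experienced human spends another twenty minutes confirming them by hand.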

  • AgentOps is the next Ops layer and nobody's staffed for it...

    Ask a MOPs team how many automated programs are running in their marketing automation platform right now and you'll get a rough answer. Maybe not a confident one, but something in the right postcode. They'll know the major nurtures, the scoring models, the lifecycle triggers. It's their system. They built it. Now ask how many AI agents are running. Across the CRM, the MAP, the service desk, the data enrichment layer. How many are live. What data they access. What actions they can take. Who activated them. When they were last reviewed. More often than not, you will get silence. Very occasionally, you'll even get " …what agents? " AI agents are multiplying inside the platforms MOPs teams already operate whether you know it or not, and nobody has built the operational layer to manage them.   Not IT. Not Marketing Ops. Not the RevOps team that's still arguing about lifecycle stage definitions. The result can be a growing fleet of autonomous processes running inside your revenue systems with no monitoring, no audit trail, and no clear owner. We've been here before with marketing automation - build it, launch it, orphan it. Except agents don't just execute static rules. They reason. They adapt. And they can go quietly wrong in ways that won't show up until someone asks why pipeline looks off. This is the AgentOps problem. And most organisations don't even know they have it yet. Agents aren't automations. They need a different kind of oversight. Traditional marketing automation runs a script. If the data says X, do Y. It's deterministic. Predictable. Boring in the best possible way. When a smart campaign breaks in Marketo, you can trace the logic, find the error, fix it. The system did what you told it to do. AI agents are different. Agentforce uses an LLM-powered reasoning layer to interpret context, plan actions, and execute across systems. HubSpot's Breeze agents - now running on GPT-5 for some marketplace agents - make judgement calls about how to qualify a lead, what to say to a customer, when to escalate. They don't follow a flowchart. They interpret . That distinction matters enormously for operations, because it means the failure mode is different. A broken automation sends the wrong email. You catch it in QA or someone complains. An agent that's reasoning poorly routes high-value prospects to the wrong sales team, or gives a customer an answer that's confidently wrong, or quietly updates CRM fields based on stale data - and it does all of this while looking like it's working perfectly. One Salesforce implementation partner published a detailed account of exactly this pattern earlier this year. A client deployed an Agentforce lead qualification agent that was routing high-value prospects to the wrong sales team. The cause? A territory assignment field that hadn't been updated after a recent re-org. The agent didn't flag the stale data. It didn't hesitate. It treated six-month-old field values as ground truth and processed 340 leads through incorrect routing before anyone noticed. Human reps would have caught it within the first few calls. The agent just kept going. That's the operational gap. The technology worked. The reasoning worked. The data was wrong, and nobody was watching. AI governance in Marketing Ops now means agent governance The governance conversation has been happening for a while. Policies about data usage, consent, content review. 
Most of it has centred on generative AI - who's allowed to use ChatGPT, what can be fed into a model, who reviews AI-generated copy before it ships. That conversation was necessary. It was also about the last  generation of AI use cases. Agents are a different governance surface entirely. They don't just generate content. They take actions. They modify records. They make routing decisions. They interact with customers. The governance questions aren't " is this content on brand? " - they're " did this agent just change a lead score based on data that's three months stale, and did anyone notice? " Agent governance requires a different set of capabilities. You need monitoring... not just logging what happened, but flagging when agent behaviour deviates from expected patterns. You need periodic review cycles, where someone checks that the agent's reasoning still aligns with current business rules, pricing, territories, product availability. You need escalation paths, so when an agent encounters something outside its boundaries, the right human gets involved instead of the agent improvising. And you need ownership. Clear, named, accountable ownership. Not "the team," not "IT handles the platform," not "we'll figure it out." A person who knows which agents are running, what they're doing, what data they depend on, and when they were last reviewed. That's AgentOps. It's not a product. It's not a platform. It's an operational discipline, and it doesn't exist yet in most organisations. Take part in the 2026 AI Benchmark Report Hallucination rates are a design reality, not a scare statistic Here's a number that should shape how you think about agent operations: hallucination rates for AI agents inside CRMs range from 3% to 27%, depending on configuration, grounding data, and prompt design. That's from published implementation data across dozens of enterprise deployments. At the low end - proper Knowledge article coverage, well-structured prompts, tight topic guardrails - agents get it right 95-97% of the time. That's genuinely useful. At the high end - minimal grounding data, broad topic definitions, no monitoring - you get an agent that fabricates pricing, invents product features, or confidently cites policies that don't exist. The point isn't that agents are unreliable. It's that they're probabilistic . They will occasionally get things wrong. That's not a bug. It's the nature of the technology. The operational question is whether your organisation has the capacity to detect when that happens, assess the damage, and correct course. Right now, for most teams, the answer is no. Some platforms are starting to ship transparency features - audit trails showing which CRM properties an agent modified and what actions it took. That's a step in the right direction. But a feature isn't a practice. An audit trail is useless if nobody's reading it. That's the operational equivalent of installing a smoke detector and never checking the batteries. What AgentOps actually looks like This doesn't require a new team or a new budget line. It requires treating agents as operational assets - not features you activate and forget. That means maintaining an inventory. How many agents are running in your systems right now? What data do they access? What actions can they take? Who activated them? If you can't answer those questions today, you have an agent sprawl problem and you don't know how big it is. It means defining review cadences. Not annual audits - practical, lightweight checks. 
Monthly: is the agent behaving as expected? Are the data fields it depends on still reliable? Quarterly: do the business rules baked into agent behaviour still match reality? Have territories shifted? Has pricing changed? It means setting performance baselines. What does "working" look like for each agent? If you can't define success, you can't detect failure. And the agent won't tell you it's failing. It'll just keep going with impressive confidence. And it means building escalation clarity. When an agent does something unexpected, who gets told? How fast? Salesforce learned this the hard way on its own Help portal - 26% abandonment rate before anyone intervened. Most orgs don't have Salesforce's engineering resources to react that quickly. The agents are already live. The ops layer isn't. Every ops discipline starts the same way. Something breaks. Leadership asks who was supposed to be watching. Nobody has a good answer. A process gets created under pressure, after the fact, while someone patches the damage. Marketing automation governance happened that way. Marketing automation data quality programmes happened that way. GDPR compliance happened that way for a depressingly large number of organisations. You can build AgentOps the same way - reactively, after an agent has been quietly misrouting leads for six weeks or breaching compliance boundaries for 48 hours because someone edited a topic description. Or you can look at the agents already running in your systems, admit that nobody's managing them, and start. The agents are already live. The ops layer isn't. That gap has an expiry date. It's just a question of whether you close it on your terms or someone else's. Discover our AI in Marketing Operations Services
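For teams wondering where to start, an agent inventory does not need a new platform or a new budget line. Even a structured register with a review check answers the basic questions: what is running, what it touches, who owns it, and when it was last looked at. The fields and the 30-day cadence below are illustrative assumptions, not any vendor's schema.

```python
# Illustrative sketch: a minimal agent inventory plus a review-cadence check.
# Field names and the 30-day review window are assumptions for illustration.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AgentRecord:
    name: str
    platform: str                # e.g. CRM, MAP, service desk
    data_accessed: list[str]
    actions_allowed: list[str]
    owner: str                   # a named person, not "the team"
    activated_on: date
    last_reviewed: date

def overdue_for_review(agents: list[AgentRecord], cadence_days: int = 30) -> list[AgentRecord]:
    """Flag agents whose last review is older than the agreed cadence."""
    cutoff = date.today() - timedelta(days=cadence_days)
    return [a for a in agents if a.last_reviewed < cutoff]

if __name__ == "__main__":
    inventory = [
        AgentRecord("lead-qualification-agent", "CRM",
                    ["lead fields", "territory assignments"],
                    ["update lead owner", "set routing queue"],
                    "ops.owner@example.com", date(2025, 6, 1), date(2025, 7, 1)),
    ]
    for agent in overdue_for_review(inventory):
        print(f"REVIEW OVERDUE: {agent.name} (owner: {agent.owner}, last reviewed {agent.last_reviewed})")
```

The format matters far less than the discipline of keeping it current.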

  • AI Beyond Productivity: Where are the real business gains?

    Productivity was always the starting point For the last year or two, most AI conversations in business have sounded oddly familiar. How can we write faster? Summarise faster. Analyse faster. Build presentations faster. Reply to emails faster. Produce more content with fewer people and less effort. Fair enough. That is where most organisations start. It is the easiest sell. Efficiency is measurable, non-threatening, and easy to explain in a board meeting. Nobody gets fired for saying they want a team to spend less time on repetitive work. But productivity is only the opening act. Doing the same work faster is useful. It is just not all that transformative. If AI simply helps a busy team clear its backlog at greater speed, that is an improvement. It is not reinvention. It is admin with a nicer user interface. Helpful, yes. Revolutionary, not quite. The more interesting shift is what happens when AI starts changing how work gets done in the first place. That is where the real gains start to show up. Not just shaved minutes. Not just reduced agency hours. Not just “we saved the team two days a month.” Those are nice wins, but they are rarely the ones that change a business. The bigger opportunity is when AI changes operating models across marketing, sales, customer success, and revenue operations. When it improves decisions, closes gaps between teams, reduces commercial friction, and helps organisations act with more consistency and confidence. That is where the conversation gets more serious. Because in most businesses, the real drag on growth is not that people type too slowly. It is that teams are misaligned. Data is messy. Processes are inconsistent. Handoffs are clunky. Campaigns take too long to launch. Reporting arrives too late to change anything. Sales does not trust marketing’s signals. Marketing does not trust sales follow-up. Customer success is left out of the loop. Everyone is busy, yet somehow the business still struggles to move faster in the places that matter. AI does not magically fix that. In fact, without structure, it can make the mess worse. But when it is applied properly, it can do something much more valuable than improve task efficiency. It can help organisations operate better. That is the real prize. The real gains start with better decisions The first leap beyond productivity is better decision-making. Many businesses are drowning in information while starving for clarity. Dashboards everywhere. Reports on reports. Endless exports from CRM, MAP, BI platforms, intent tools, web analytics, and customer systems. Everyone has data. Very few have a version of it that is timely, connected, and useful enough to support action. This is where AI can start earning its keep in a more meaningful way. Not by generating another summary nobody asked for, but by helping teams spot patterns, risks, and opportunities that would otherwise stay buried. Which segments are actually converting, not just engaging? Which campaign themes are influencing pipeline quality, not just volume? Which accounts are showing the kind of buying behaviour that deserves action now, rather than another nurture stream they will ignore with impressive consistency? That shift matters because the value is no longer about faster reporting. It is about better commercial judgement. A marketing team that can see which messages are moving buyers through complex journeys is in a stronger position than one that simply produces more assets. 
A sales leader who can prioritise outreach based on stronger signals is in a better position than one relying on a glorified hunch. A revenue team that can identify where conversion is breaking down can fix real problems before quarter-end panic sets in. This is where AI starts moving from labour-saving assistant to decision-support layer. That is a more serious role. It is also where the gains start to compound. AI gets more interesting when it improves orchestration The second leap is orchestration. Most revenue functions are still held together by a patchwork of systems, handoffs, habits, and crossed fingers. Marketing runs campaigns. Sales follows up, or does not. Ops tries to stitch the process together. Customer success gets involved later, sometimes with context, sometimes without. Everyone talks about journey orchestration, but the lived reality is usually closer to organised chaos. AI can help reduce that chaos, not by replacing teams, but by improving coordination between them. Think about how much commercial value is lost in the gaps. Leads routed too late. Follow-ups triggered with the wrong context. Accounts sitting untouched because one system says they are warm and another says they are dead. Customer signals ignored because they live in a platform nobody checks. Campaigns launched without real feedback from the field. Handoffs based on static rules that made sense eighteen months ago and now quietly sabotage performance every day. This is where AI becomes more than a content machine. It can help interpret signals across systems, recommend next-best actions, surface anomalies, and support more responsive plays across teams. Not in a science-fiction “the robot runs the revenue engine” kind of way. More in a very practical “the business is no longer relying on three spreadsheets and Claire from ops to hold everything together” kind of way. That may sound less glamorous, but it is far more valuable. When marketing and revenue teams operate with better timing, better context, and better coordination, the business feels different. Work flows more cleanly. Friction drops. Decisions get made earlier. Opportunities get acted on faster. That is not just efficiency. That is improved commercial execution. Consistency is not sexy, but it is where scale lives The third leap is consistency at scale. One of the least glamorous truths in business is that performance often suffers because execution is wildly inconsistent. Not because the strategy was terrible. Not because the technology stack is broken. Just because different teams, regions, markets, or managers are all doing things slightly differently, with varying levels of quality and discipline. AI can help standardise that. Not in a rigid, joyless, corporate-policy-manual way. In a way that makes good practice easier to repeat. It can support consistent QA, flag compliance issues, improve data hygiene, reinforce process standards, and reduce the kind of avoidable variation that causes downstream pain. In marketing operations especially, this matters more than many leaders realise. A campaign build process that is followed properly every time is not exciting. A lead management framework applied consistently across markets is not sexy. Metadata standards and naming conventions do not exactly set LinkedIn on fire. But these are the things that determine whether a business can scale without tripping over its own shoelaces. AI can strengthen those foundations if it is deployed with intent. 
It can act as a layer of support around governance, quality control, and operational discipline. That is important because scale usually breaks where standards are weakest. And this is where a lot of the current AI hype becomes mildly ridiculous. Too many organisations are still obsessing over how quickly AI can produce outputs, while ignoring whether those outputs sit inside a functioning operating model. Faster content in a broken system is not transformation. It is just more noise, delivered promptly. The businesses that will get real gains are not the ones generating the highest volume of AI-assisted activity. They will be the ones using AI to reduce variability, improve judgement, tighten execution, and create more reliable pathways from activity to revenue. That is a much less flashy story. It is also the one that actually affects business performance. The bigger shift is role redesign, not task acceleration The fourth leap is redesigning roles, not just accelerating tasks. This is where the conversation gets uncomfortable. A lot of leaders still talk about AI as a helper. Something that sits beside existing roles and makes them more productive. That framing is understandable, especially when companies are trying not to terrify their own workforce. But it is also limiting. Because the bigger question is not “how can AI help this person do their existing job faster?” It is “what should this job now include, exclude, or become?” That is a harder discussion because it forces teams to examine work that has existed for years and ask whether it still deserves to. It means challenging legacy processes, duplicated effort, manual review chains, bloated reporting habits, and all the odd little tasks that nobody likes but everyone keeps doing because “that is just how it works here.” AI gives organisations a reason to revisit those assumptions. In marketing, that may mean fewer hours spent producing first-draft material and more time spent on strategic planning, audience insight, experimentation, and commercial alignment. In operations, it may mean less manual policing and more proactive system design, governance, and optimisation. In revenue teams, it may mean moving people closer to decisions and away from repetitive admin that should have been automated years ago. That is where the gains become structural. Not because jobs vanish overnight, despite the breathless nonsense often pushed online, but because the mix of work changes. Teams that keep using AI as a glorified speed tool will get modest gains. Teams that redesign roles around better judgement, stronger systems thinking, and more intelligent coordination will get far more. And yes, this requires management courage. Which is inconvenient, because courage is in shorter supply than AI tools. Better internal operations create better customer experience The fifth leap is better customer experience, even if people do not label it that way. A lot of internal AI use cases are sold around productivity because it is easier to win budget with an internal efficiency story. But customers feel the impact when internal operations improve. They notice when handoffs are cleaner, messaging is more relevant, follow-up is better timed, and service teams have actual context instead of a blank screen and a forced smile. AI can help businesses become easier to buy from and easier to work with. That matters. In B2B especially, customer experience is often damaged by internal fragmentation. The buyer sees one company. 
Behind the scenes there are six teams, nine systems, conflicting definitions, and at least one dashboard that everyone pretends to understand. When AI helps join those dots, the customer gets a smoother experience, even if they never see the plumbing. That is a real gain. Not a vanity metric. Not an internal time-saving story dressed up as innovation. A proper improvement in how the business shows up to the market. Tools alone will not create business value Of course, none of this happens just because a company bought licences and told people to “have a play.” That is where many AI programmes drift into parody. Real gains do not come from random experimentation with no structure behind it. They do not come from telling every employee to use a chatbot and hoping transformation will emerge from the chaos like some sort of digital swamp creature. They come from identifying meaningful business problems, improving the operating environment around them, and applying AI where it can genuinely change the way teams work. That means process first, then tooling. It means governance before scale. It means data quality before grand promises. It means deciding where human judgement matters most, and where it is currently being wasted on tasks that do not deserve it. Most importantly, it means being honest about what kind of business gain you are actually chasing. If the goal is simple productivity, say that. There is nothing wrong with efficiency. Most organisations still have plenty of low-value work that can and should be reduced. But do not confuse that with transformation. Saving time is good. Changing performance is better. The businesses that win will be the ones that operate differently The next phase of AI value will not be defined by who can create the most content, automate the most tasks, or boast the loudest about “copilot” adoption. It will be defined by who can build a better operating model around it. Who can connect functions more intelligently. Who can improve decision quality. Who can standardise execution without suffocating teams. Who can reduce friction across the revenue engine. Who can turn AI from a productivity trick into a business capability. That is the real shift now underway. Most businesses are still on the first rung, using AI to do the same things a bit faster. That is understandable. It is where the market started, and for many teams it is still where the easiest wins live. But the bigger gains sit further ahead. They show up when AI starts helping businesses work differently, not just faster. And that is where the conversation gets worth having. Discover our AI Services
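As a deliberately simple illustration of the decision-support point, and nothing more: prioritising accounts on conversion-linked signals rather than raw activity can start with something as unglamorous as weighting the behaviours that historically precede pipeline. The signal names and weights below are invented for the sake of the example.

```python
# Toy illustration: prioritise accounts on conversion-linked signals rather
# than raw engagement volume. Signal names and weights are invented.
SIGNAL_WEIGHTS = {
    "pricing_page_visits": 3.0,          # historically precedes pipeline
    "multi_stakeholder_engagement": 4.0,
    "demo_request": 5.0,
    "newsletter_clicks": 0.5,            # high volume, weak link to revenue
}

def account_priority(signals: dict[str, int]) -> float:
    """Score an account by weighted, conversion-linked signals."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * count for name, count in signals.items())

accounts = {
    "acme-corp": {"newsletter_clicks": 40},   # busy, but shallow engagement
    "globex": {"pricing_page_visits": 3, "multi_stakeholder_engagement": 2, "demo_request": 1},
}
for name, signals in sorted(accounts.items(), key=lambda kv: -account_priority(kv[1])):
    print(name, round(account_priority(signals), 1))
```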

  • Thinking of moving from 6sense to Demandbase? Here’s why more B2B teams are making the switch

    There comes a point with some platforms where the issue is no longer capability. It is tolerance. Yes, the dashboards look clever. Yes, the intent signals sound impressive. Yes, everybody nodded politely during the demo. But once the thing is live, the questions start. Why does this account matter? Why is that one surging? Why does everything useful seem to sit behind another commercial conversation? And why, despite all this supposed intelligence, does the platform still feel like hard work? That is the point where teams stop asking whether they bought something powerful and start asking whether they bought something practical. For B2B organisations weighing up a move from 6sense to Demandbase, that is the real story. This is not about swapping one ABM badge for another because a partner deck said so. It is about choosing a platform that is easier to trust, easier to use, and easier to turn into actual pipeline. Demandbase is pitching exactly that, positioning its migration guide around faster time-to-value, transparent AI, and buying-group intelligence that helps teams drive revenue rather than just admire charts about it.  The biggest problem with black boxes is that eventually people stop believing them A lot of ABM and intent platforms suffer from the same issue. They promise precision, but deliver opacity. That is fine for about five minutes. After that, marketing wants to know what is really driving prioritisation. Sales wants to know why one account is apparently red hot while another with actual human conversations is being ignored. Leadership wants to know whether the investment is producing something tangible or simply generating prettier versions of uncertainty. Demandbase’s guide goes straight at this by calling out frustration with “black-box intent models” and contrasting that with its own pitch around transparent AI. That matters because transparency is not some whimsical product virtue. It is operationally useful. If teams can understand what the platform is doing, they can explain it internally, challenge it when needed, and build better workflows around it. If they cannot, adoption drops and the tool becomes one more expensive thing that only a small handful of people pretend to fully understand. And let’s be honest, nobody wants their pipeline strategy resting on “the model says so.” Faster time-to-value beats feature theatre every time There is a weird habit in B2B software buying where complexity gets mistaken for sophistication. The more complicated the platform sounds, the more “enterprise” it must be. Usually, that is nonsense. Teams do not win because they bought the most elaborate system. They win because they bought something that gets useful, quickly. Demandbase makes that point hard in its move-over guide, saying teams are choosing it for faster time-to-value and laying out what to expect when switching, how long it actually takes to get live, and how to avoid delays, adoption issues, and wasted spend.  That is not a side point. It is the point. Marketing ops and revenue ops teams are not judged on how advanced their tooling sounds in a procurement meeting. They are judged on whether campaigns run, sales trusts the signals, pipeline improves, and nobody has to sit through six months of “transformation” before seeing any value. A platform that gets there faster is not merely more convenient. It is more commercially sane. Discover our Podcast Buying groups reflect reality. Single-lead obsession does not. 
One of the stronger reasons to look at Demandbase is its emphasis on buying-group intelligence. Again, that is not just product wording. It is a reflection of how B2B buying actually works. Enterprise purchases are rarely driven by one heroic individual who reads an ebook and then wanders directly into closed-won status. They involve multiple stakeholders, competing priorities, internal politics, silent research, and at least one person who turns up late and somehow still gets veto power. Demandbase explicitly positions buying-group intelligence as part of the reason teams are making the switch.  That gives teams a better way to prioritise accounts. Instead of obsessing over isolated activity from one contact, they can see whether momentum is building across the people who actually influence a deal. That leads to better orchestration, more sensible sales prioritisation, and less time wasted pretending one engaged individual equals account readiness. In other words, it gets a little closer to the messy truth of B2B revenue. Pricing and support are not boring details. They are where goodwill goes to die. Vendors love innovation language. Buyers, meanwhile, are over here wondering how many extra invoices stand between them and the features they thought they were already paying for. Demandbase’s guide is pretty blunt on this front. It calls out “endless upcharges” and support that has gone from helpful to nonexistent as part of the frustration driving some teams away from 6sense. That is a pointed comparison, but it lands because every ops leader knows the feeling. Nothing sours platform confidence faster than realising the commercial model is built around drip-feeding value back to you one awkward upsell at a time. Support matters too, especially during transition periods. If your platform becomes harder to optimise, harder to troubleshoot, and harder to expand without vendor intervention, then every internal stakeholder feels it. Campaigns slow down. Confidence drops. Adoption gets patchy. What looked strategic in the sales cycle starts looking suspiciously like admin with branding. Demandbase is clearly making the case that it offers a smoother partnership model. Whether that is the deciding factor depends on the buyer, but it is often the thing that moves a team from interested to serious. Ease of use is not a compromise. It is the whole game. There is no prize for owning an ABM platform that only three people can operate without emotional damage. Demandbase includes customer proof points to reinforce this. PageUp said it compared Demandbase and 6sense across transparency, configurability, and partnership, and found Demandbase the better fit. Case IQ, meanwhile, chose Demandbase over 6sense because of its intuitive design, competitive pricing, existing familiarity within the team, and the support available.  That should not be underestimated. An intuitive platform is easier to adopt across teams. It is easier to train on. Easier to operationalise. Easier to build repeatable processes around. It reduces the gap between insight and action, which is kind of the whole point of having the thing in the first place. A platform does not become more valuable because it is difficult. It becomes more valuable when teams actually use it properly. A shocking concept, I know. Discover our ABM Services Migration is usually less scary than staying stuck The biggest thing that stops teams switching is not loyalty. It is fear of disruption. Fair enough. 
A platform migration can sound like a MarTech root canal. There are integrations to think about, reporting to preserve, sales teams to reassure, workflows to rebuild, governance to tighten, and at least one legacy process no one fully understands but everyone is terrified to touch. Demandbase leans directly into that concern, framing the guide around how to switch “without missing a beat” and how to move forward without losing momentum, trust, or pipeline. That is smart because most buyers are not asking whether change is possible. They are asking whether change is survivable. The reality is that a good migration is not a leap of faith. It is a structured operational project. Audit what matters, map dependencies, define success clearly, sort the integrations properly, clean up the mess you were going to have to deal with eventually anyway, and move with intent instead of panic. Done well, a switch does not create chaos. It removes it. And frankly, sometimes the bigger risk is staying with a platform your teams no longer trust just because the pain has become familiar. The real win is not the move itself. It is what the move forces you to fix. This is the bit that matters most. Switching from 6sense to Demandbase is not just a technology decision. It is a chance to reset how your go-to-market teams work. To tighten account selection. To rethink prioritisation. To align marketing and sales around signals they actually believe. To stop paying for platform complexity that sounds impressive but struggles to produce value in the real world. That is where the biggest payoff usually sits. The platform matters, obviously. But the migration process also forces better questions. What do we actually need from intent? Which insights do we trust? What counts as meaningful engagement? Where are we overcomplicating things? Which workflows are driving pipeline, and which ones are just keeping dashboards busy? A move to Demandbase can absolutely improve the tech stack. The smarter outcome is that it also improves the operating model. Final thought No ABM platform is magic. None of them can rescue poor process, vague ownership, or marketing teams that are still mistaking activity for progress. But platforms can make good teams better, or they can trap them inside expensive ambiguity. That is why the case for moving from 6sense to Demandbase is getting attention. Demandbase is making a straightforward pitch: less black-box nonsense, faster time-to-value, stronger support, more intuitive usability, and buying-group intelligence that better reflects how B2B buying really happens. That is the core promise behind its move-over guide, and it is a promise that will resonate with any team that is tired of paying premium rates for unnecessary friction.  If your current platform feels more like something you manage than something that helps you win, that is usually your answer. Read the Demandbase guide here:

  • MQLs are the hangover: Why marketing should stop celebrating leads and start building pipeline

    For years, marketing teams have had a favourite party trick. Take a person. Watch them click on a few things. Maybe they download a guide, attend a webinar, glance at a pricing page, or fill in a form because they were cornered by a decent headline and a mild identity crisis. Add some points. Push them over a threshold. Then declare, with a straight face, that they are now “qualified”. Cue the applause. Cue the dashboard. Cue the monthly report proudly announcing a rise in MQLs as if the revenue team should be popping champagne. And then, as usual, the hangover arrives. Because many of those leads do not become pipeline. Many do not become conversations worth having. Many were never serious buying signals in the first place. They were just activity. Nicely packaged activity, perhaps. But still activity. That is the problem. Marketing has spent years rewarding itself for creating moments that look like progress instead of conditions that actually lead to revenue. The result is a lot of businesses still measuring demand with a model that feels tidy, looks familiar, and increasingly tells them absolutely nothing useful. If lead scoring is cosplay, then the MQL is the morning after. It is the consequence of believing the costume was real. The MQL made sense once. That time has passed. To be fair, the MQL was not invented by idiots. It came from a reasonable desire to create order. Sales teams needed a way to separate random names from people showing signs of interest. Marketing teams needed a way to prove they were doing more than sending emails and fiddling with landing pages. Leadership wanted a metric that looked like a bridge between activity and pipeline. So the MQL was born. A neat little handoff point. A moment where marketing could say, “Here you go, this one looks promising,” and sales could at least pretend to believe them. The problem is that modern buying no longer behaves in a way that makes this model particularly trustworthy. Buying is rarely driven by a single person. It is messy, delayed, political, often irrational, and usually spread across multiple stakeholders who do not all leave the same digital breadcrumbs. The person who fills in the form is not always the one with budget. The person researching solutions is not always the one making the decision. The loudest signal in your system is often not the most commercially meaningful one. So the contact that becomes an MQL may be the least important person in the room. Or worse, there may not even be a room yet. That is where the model starts to crack. Because while buying happens at account level, many marketing teams are still measuring success at contact level and acting surprised when the story does not hold together. A lot of MQLs are just reporting events dressed up as buying signals This is the real issue, and it is worth saying plainly. An MQL often tells you that a person did something trackable. It does not reliably tell you that an account is becoming buyable. Those are two very different things. A person downloading an asset is a trackable event. A person attending a webinar is a trackable event. A person clicking around your site three times in a week is a trackable event. Useful, maybe. Interesting, perhaps. But still not the same as a buying condition emerging within an account. And yet businesses continue to build dashboards, goals, routing logic, and team incentives around exactly those kinds of moments. This is where the wheels start to come off. Marketing celebrates lead volume. 
Sales sees weak conversion. SDRs work lists they do not trust. Revenue leaders start asking why “qualified” leads are not turning into genuine opportunities. Marketing responds by refining the scoring model, tweaking the thresholds, and adding even more detail to the reporting. Which is a bit like trying to fix a bad haircut by measuring it more precisely. The problem is not always that the system lacks sophistication. Quite often, the problem is that the system is classifying the wrong thing. A reporting event helps explain activity. A buying signal helps you decide where commercial effort should go. Too many businesses confuse the two. Discover our Podcast Easy to count has become more important than useful to know This is one of the less glamorous reasons so many demand models quietly fail. It is far easier to count an individual conversion than it is to interpret account-level momentum. It is far easier to report a lead threshold than it is to understand whether a buying group is forming. It is far easier to tell the board that MQL volume is up 23 percent than it is to say, “We are seeing stronger commercial movement in accounts that match our best-fit profile and show genuine timing pressure.” One sounds neat. The other sounds like actual work. So guess which one most businesses default to. Marketing has been rewarded for what is visible, not necessarily for what is meaningful. That would be tolerable if the visible thing still behaved like a useful proxy for pipeline. In many cases, it no longer does. A single person from a target account engaging with content may mean nothing. Three stakeholders from the same account arriving within a short period, each looking at different pieces of decision-stage content, probably means a lot more. An implementation-related conversation means more than a webinar registration. A pricing discussion means more than a content download. A security review means more than someone clicking a nurture email while avoiding a meeting. The point is not that engagement is irrelevant. The point is that engagement without context is flimsy. And a flimsy signal should not be carrying the weight of your demand strategy. The MQL has become a permission slip for optimism That sounds harsh, but it is often true. In many organisations, the MQL is not a robust qualification model. It is simply the point at which marketing is allowed to feel good about itself. The lead crossed the line. The number moved. The target was hit. Everyone can now behave as though progress has occurred. This is comforting. It is also dangerous. Because once the metric becomes emotionally important, it stops being challenged properly. Teams begin defending the existence of the MQL rather than asking whether it still reflects how buying works. Sales gets blamed for weak follow-up. Campaign teams get asked for more volume. SDR teams get told to work harder. Nobody wants to say the obvious thing, which is that a lot of this so-called qualification may have very little to do with commercial readiness at all. And that is how businesses end up running entire revenue motions around glorified hand-raisers. Marketing does not need more lead theatre. It needs a better operating model. The answer is not to replace MQLs with chaos. Nor is it to delete every lifecycle stage and start speaking in mystical revenue riddles. What is needed is a shift in what marketing is actually trying to identify and influence. 
Instead of asking, “When is this lead qualified?” the better question is, “What conditions suggest this account is moving closer to a real buying decision?” That changes everything. It changes what you measure. It changes what you route. It changes what sales trusts. It changes how campaigns are judged. It also nudges marketing into a much more commercially useful role, which is long overdue. Because marketing’s job is not just to generate names. It is to create movement. To increase the likelihood that the right accounts engage, progress, and enter sales conversations with something resembling genuine intent. That is a more serious job than producing a pile of contacts and calling it pipeline. What should replace MQL obsession? Not a single new acronym, thankfully. The world does not need another one. What it does need is a model built around commercial conditions rather than arbitrary thresholds. That starts with account fit. Real account fit, not fantasy ICP nonsense where half the market somehow qualifies as ideal. Good fit should reflect whether the account has the right level of complexity, the right kinds of pain, the right operational reality, and the right commercial shape for your business to win and serve well. Fit should be a gate, not a decorative line in a strategy deck. Then there is buying-group emergence. One person engaging is a weak signal. Several relevant stakeholders showing up from the same account in a pattern that suggests evaluation is something else entirely. That is where things begin to get interesting. Not because it guarantees a deal, but because it starts to resemble the way decisions are actually made. Next comes timing pressure. This is one of the most underused and most commercially important pieces of the puzzle. Why now matters more than almost everything else. A replatforming plan, a looming renewal, an internal re-org, reporting chaos, a change in leadership, a compliance deadline, a broken process, a strategic mandate, these are the conditions that create movement. Someone downloading a whitepaper does not create urgency. It may simply indicate boredom between meetings. And finally, there are progression signals with actual weight behind them. Meetings involving multiple stakeholders. Implementation conversations. Commercial discussions. Timeline questions. Security reviews. Requests for technical validation. Internal language shifting from casual curiosity to practical decision-making. These are not perfect either, but they are much harder to fake. They also cost the buyer something, which is usually a very good sign. This is where marketing should be focusing its attention. Not on whether a lead scored 74 instead of 71. Not on whether a form fill should count double if it came from paid social. Not on endlessly polishing a framework that was built for a simpler buying environment and now survives mostly because everyone knows where it lives in the CRM. Discover our AI Coworker This is also why sales and marketing keep annoying each other The MQL model does not just distort measurement. It distorts trust. Marketing says it delivered qualified leads. Sales says those leads are rubbish. Marketing says sales is ignoring good demand. Sales says marketing is measuring engagement, not intent. Then both sides sit in a meeting staring at the same funnel with completely different levels of faith in it. It is a deeply inefficient way to run a revenue team. The deeper issue is that both sides are often reacting sensibly to a broken shared model. 
Marketing has been taught to optimise for visible conversion. Sales has been trained by experience to be sceptical of anything that looks too easy. The result is a constant tension between volume and credibility. A better model lowers that tension. If both teams are aligned around account fit, buying-group activity, timing pressure, and commercially meaningful progression, the conversation gets healthier fast. Marketing is no longer defending a pile of shiny contacts. Sales is no longer rolling its eyes every time a dashboard says pipeline is “warming up”. Both teams are looking at the same kinds of signals and asking the same practical question: is this account moving in a way that deserves serious effort? That is a much better conversation to have. You do not necessarily need to kill the MQL. But you should absolutely demote it. Some businesses will still need an MQL stage for workflow reasons. Fine. Use it as an internal signal if you must. Use it to trigger routing. Use it to mark a point in a process. Use it because your systems are held together by string and inherited logic and you cannot rip it all out in one go. But stop treating it like the headline metric for marketing contribution. That is where the damage happens. An MQL can still exist without being worshipped. It can be a checkpoint, not a trophy. It can serve operations without pretending to represent commercial truth. The trouble starts when businesses build their whole demand story around it. Because the story that matters is not whether marketing produced more qualified leads this quarter. The story that matters is whether marketing improved the conditions that make pipeline more likely in the accounts that actually matter. That is a much stronger claim. It is also much harder to fake. The next era of demand generation will be less flattering and more useful That is probably for the best. The old model produced very pretty dashboards. It also produced an awful lot of false confidence. Teams could point to rising lead volumes while pipeline quality quietly sagged underneath. Targets got hit. Reports got written. Revenue teams kept wondering why all this apparent demand still felt so anaemic in the real world. The businesses that move fastest now will be the ones willing to let go of neat-but-empty metrics and get more honest about what buying actually looks like. That means less worship of individual conversions. Less obsession with lead thresholds. Less applause for activity that happens to be easy to track. More attention to account movement. More weight given to urgency and buying conditions. More focus on signals that indicate real commercial effort from the buyer side. In other words, less theatre. More evidence. That may make some dashboards uglier for a while. Good. Ugly truth is still better than polished nonsense. Stop asking how to generate more MQLs That is the wrong question now. The better question is this: How do we help more of the right accounts become sales-ready in ways that look like deals we actually win? That question forces a more grown-up strategy. It pushes marketing closer to revenue. It exposes weak measurement. It sharpens targeting. It improves alignment with sales. And, perhaps most importantly, it stops teams mistaking form fills for progress. Because the brutal truth is that many MQLs were never a sign of momentum. They were just the easiest thing to celebrate. And marketing has celebrated enough easy things. It is time to build pipeline instead. Need help with that? Let's talk... 
Discover our Services
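A practical footnote for teams wondering what "account conditions instead of contact thresholds" could look like in practice: here is a minimal sketch in Python. The signal names, weights and thresholds are illustrative assumptions rather than a recommended scoring model; the point is the shape of it — fit acts as a gate, multiple stakeholders are a condition, and signals that cost the buyer something carry more weight than clicks.

# Illustrative sketch only. Signal names, weights and thresholds are
# assumptions chosen to show the idea, not a recommended model. The key
# move is aggregating at account level rather than scoring one contact.

SIGNAL_WEIGHTS = {
    "content_download": 1,        # trackable, but weak on its own
    "webinar_attended": 1,
    "pricing_discussion": 5,      # costs the buyer something
    "security_review": 5,
    "implementation_call": 6,
    "renewal_deadline_known": 4,  # a stand-in for timing pressure
}

def account_readiness(account):
    """Judge an account on conditions, not on any single contact's score."""
    if not account["icp_fit"]:    # fit is a gate, not a bonus point
        return {"ready": False, "reason": "outside best-fit profile"}

    stakeholders = {e["contact"] for e in account["events"]}
    score = sum(SIGNAL_WEIGHTS.get(e["signal"], 0) for e in account["events"])

    ready = len(stakeholders) >= 3 and score >= 12
    return {
        "ready": ready,
        "stakeholders_engaged": len(stakeholders),
        "weighted_signal_score": score,
    }

# A hypothetical account with three stakeholders showing up in a pattern
# that looks like evaluation, not boredom between meetings.
acme = {
    "icp_fit": True,
    "events": [
        {"contact": "ops lead", "signal": "pricing_discussion"},
        {"contact": "it manager", "signal": "security_review"},
        {"contact": "cmo", "signal": "webinar_attended"},
        {"contact": "ops lead", "signal": "renewal_deadline_known"},
    ],
}
print(account_readiness(acme))

Nothing about those numbers is sacred. What matters is that the question being answered is "is this account moving?", not "did this person cross 70 points?".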

  • HubSpot migration mistakes that quietly wreck reporting, automation and trust

There is a particular kind of confidence that appears right before a messy HubSpot migration goes live. It usually sounds something like this: “ We’ve got it all covered. ” The CRM has been mapped. The lists have been exported. The workflows are “mostly” rebuilt. Someone has checked the field names, someone else has built the lifecycle stages, and now the business is marching toward launch with the kind of calm optimism usually seen moments before the kitchen ceiling starts dripping water. Then, a few days later, the cracks begin to show. We've seen it so many times with new clients - having been asked to step in and rescue the situation... Sales starts complaining that contact records look odd. Marketing notices that reporting has gone wonky. Someone in leadership asks why leads have dropped off a cliff, even though they have not. Customer journeys no longer make sense. Attribution is suddenly telling a fairy tale. Emails are firing at the wrong people. Forms are creating duplicates. And the once-beautiful promise of a clean move into HubSpot starts to look less like transformation and more like a very expensive house move where half the boxes were labelled “misc.” This is the problem with bad migrations. They rarely fail in one dramatic, obvious moment. They fail quietly. They fail in ways that are easy to miss during testing and hard to fix once teams have adapted to the damage. They fail by slowly eroding the one thing every revenue team needs to function: Trust. That is what makes migration mistakes so dangerous. Not just the technical mess. Not just the wasted time. Not even the cost of putting it right. The real issue is that once people stop trusting the system, they stop using it properly. And when that happens, you do not just have a HubSpot problem. You have an operational credibility problem. A lot of businesses treat migration as a transfer exercise. Take what exists in your existing platform, move it into HubSpot, rebuild what matters, and carry on. Simple. Lovely. Box ticked. But a migration is never just a transfer. It is a redesign whether you admit it or not. The minute you move data, workflows, properties, scoring models, lifecycle logic, routing rules, forms, integrations and reports into a new environment, you are making decisions about how the business operates. Pretending otherwise is how teams end up recreating nonsense at speed. One of the most common mistakes is moving bad data as if it were valuable just because it already exists. There is something oddly sentimental about legacy CRM data. Businesses cling to it like an old cable drawer. “ We might need that. ” “ That field used to be important. ” “ We cannot delete those contacts because they came from a 2019 webinar series. ” So over it all comes. Dead properties. Duplicate records. Inconsistent country values. Zombie lifecycle stages. Fields with names like “ Lead Source Final Final 2. ” You know the type. The problem is that HubSpot is only as useful as the data structure you give it. If you migrate chaos, you do not get a fresh start. You just get a shinier version of the same confusion. Worse, people assume a new platform equals improved quality. So bad data becomes more dangerous because it carries an undeserved sense of legitimacy. Suddenly reporting is wrong, segmentation is unreliable, automation behaves strangely, and no one can quite work out whether the issue is the setup or the business itself. Spoiler: It is usually the setup. Another classic mistake is rebuilding automation too literally. 
This happens when teams approach migration like a museum restoration project. Every workflow, every trigger, every odd little workaround gets reproduced exactly as it existed before. It sounds sensible on paper. In reality, it is how you preserve years of bad decisions in a new home. Old systems often contain automations that were built for one campaign, one process, one team structure or one emergency patch three years ago. Over time, those automations become tangled. They overlap. They contradict each other. They run because no one is brave enough to switch them off. A migration should be the moment you ask whether that logic still deserves to exist. Instead, many teams just copy it all across and congratulate themselves on completeness. Then the new HubSpot portal goes live, and the same old operational weirdness returns wearing a cleaner interface. Contacts are enrolled in conflicting workflows. Sales notifications misfire. Leads skip important stages. Internal teams start saying things like, “ HubSpot doesn’t seem to do what we need, ” when what they really mean is, “ We imported our own bad habits and gave them a fresh postcode. ” Reporting is where the pain usually becomes undeniable. Bad migrations have a special talent for wrecking reporting in ways that are both subtle and deeply annoying. Dashboards still load. Charts still move. Numbers still appear. But the story underneath has been bent out of shape. This often starts with sloppy property mapping. A field in one platform looks similar to a field in HubSpot, so it gets matched without much thought. Job title goes somewhere sensible. Company name behaves itself. But then you get into the more delicate stuff. Original source, lead status, lifecycle stage, handoff dates, qualification criteria, owner history, pipeline movement. These are not just fields. They are the logic behind how performance is measured. Map them badly and you do not simply lose information. You break meaning. That is when leadership starts asking dangerous questions. Why are MQL numbers down? Why are conversion rates inconsistent? Why does sales say the leads are rubbish when marketing claims pipeline influence is up? Why does the dashboard disagree with the CRM export? Once that happens, the room fills with theories. The campaign must be weak. Sales follow-up must be slow. The market must have changed. Sometimes those things are true. But after a migration, it is often the plumbing. And nothing wastes more time than a business trying to solve a strategic problem that is actually a data structure problem... Then there is attribution, which is already a minefield before migration enters the chat. Move to HubSpot badly and attribution can become complete fiction with a straight face. Contacts lose source context. Historical interactions are only partially preserved. Form submissions are disconnected from original journeys. Campaign naming conventions collapse into inconsistency. Teams start reading reports that look polished enough to trust and flawed enough to mislead. That is a nasty combination. Attribution does not need to be perfect to be useful. Everyone sensible knows that. But it does need to be consistent. That is the bit poor migrations destroy. Once consistency goes, the business starts making budget and channel decisions based on warped signals. So now the damage is not only operational. It is commercial. You are not just reporting the wrong story. You are funding it. Trust, though, is the part that really hurts. 
Discover our Podcast Internal trust in systems is fragile. Much more fragile than most leadership teams realise. Users do not need many bad experiences before they start working around the platform instead of through it. Sales sees a contact record with missing fields and starts keeping their own notes elsewhere. A marketer notices lists pulling in the wrong contacts and starts exporting CSVs “just to be safe.” Ops teams lose faith in workflow logic and start manually checking everything. Before long, the platform becomes a place where data goes to look official rather than a place where teams actually operate with confidence. Once that behaviour sets in, it spreads fast. Workarounds breed more workarounds. Manual fixes create more inconsistency. Different teams start defining success differently because they no longer believe the system can hold a shared version of truth. And that is the quiet tragedy of a botched migration. You did not buy HubSpot to create more admin, more doubt and more internal politics. Yet somehow here you are, paying handsomely for all three. Part of the problem is timing. Businesses often migrate under pressure. A contract is ending. A platform has been outgrown. A leadership team wants faster reporting. A merger has created a systems mess. There is urgency, and urgency makes people do deeply optimistic things with timelines. They compress discovery. They skip governance decisions. They assume someone else has validated the data. They leave testing until the end as though it is a nice final polish rather than the point at which reality barges into the room carrying a baseball bat. Testing, by the way, is another area where migrations quietly come apart. Teams often test whether things exist, not whether they behave properly. Yes, the form submits. Great. Yes, the workflow triggers. Lovely. Yes, the record is in HubSpot. Champagne all round. But does the form map correctly for every scenario? Does the workflow fire only when it should? Does the record preserve history in a way that supports reporting, routing and future automation? That is where grown-up migration testing lives, and it is less glamorous than launch announcements but considerably more useful. The same goes for integrations. People love to underestimate integrations. They assume the sync will more or less work because the connector exists and the logos look reassuring. But integrations are where hidden operational assumptions come to die. Ownership fields sync strangely. Product data behaves differently. Custom objects do not align. Date formats become chaos merchants. One system overwrites another with all the confidence of a junior manager on their first day. Then everyone acts surprised when sales, service and marketing are reading different versions of the same account. A good migration is not just about getting data into HubSpot. It is about deciding which system owns which truth, and making that decision deliberately. Without that, integration is just a well-dressed argument between platforms. And then there is the most expensive mistake of all: Treating migration as finished the day the portal goes live. That is fantasy. A migration is not finished at go-live. Go-live is the point at which the real audit begins. That is when real users do real things in the system and expose all the logic gaps that polished workshops somehow missed. Businesses that do this well plan for a bedding-in period. They monitor. They review. They adjust. 
They keep a close eye on reporting, workflow behaviour, routing, duplicates, source data and user confidence. Businesses that do it badly declare victory too early and let small issues harden into accepted dysfunction. This is usually where resentment creeps in. Marketing feels blamed for reporting issues. Sales loses patience with lead handling. Leadership becomes suspicious of both. The internal consultant who led the migration has either vanished, gone defensive, or started using the phrase “edge case” far too often. And the team that has to live in HubSpot every day is left cleaning up the operational confetti. The good news is that these mistakes are avoidable. Not by magic, and not by buying more software, and certainly not by hoping HubSpot will somehow impose order on a messy operating model through sheer force of branding. They are avoidable when migration is treated as an operational design project, rather than a technical transfer. That means being ruthless about what deserves to move. It means defining property purpose before field mapping starts. It means rebuilding automation based on current business logic, not inherited superstition. It means deciding what reporting needs to mean before dashboards are recreated. It means testing behaviour, not appearances. It means planning for adoption, not just deployment. It means accepting that a faster migration is not always a better one if it leaves the business quietly bleeding trust for the next twelve months. HubSpot can be a brilliant platform. But it is not a miracle worker. It will not rescue poor governance, fuzzy definitions, inconsistent data, muddled ownership or years of operational corner-cutting just because you paid for onboarding and changed the logo on the login screen. If anything, it exposes those issues faster, because once a modern platform is in place, the excuses start to look a bit thin. And perhaps that is the uncomfortable truth underneath all of this. A migration does not create the mess. It reveals it. The move to HubSpot simply gives businesses a very expensive chance to decide whether they want to keep pretending. The companies that get real value from migration are usually the ones willing to be a bit unsentimental. They are prepared to challenge legacy logic. They accept that some old processes deserve a dignified death. They understand that clean reporting is built, not wished into existence. And they know that trust is not restored by telling teams the platform works. It is restored when the platform actually behaves in a way that deserves belief. So if a HubSpot migration is on the horizon, the question is not whether the data can be moved. Of course it can. The question is whether the business is willing to do the harder, less flashy work of deciding what should move, how it should behave, and what truth needs to survive the trip. Because when migrations go wrong, the damage is rarely loud at first. It is quieter than that. More boring. More dangerous. A missed field here. A broken report there. A workflow that sort of works. A sales team that starts keeping side notes. A marketing team that exports one more spreadsheet. A leadership team that stops trusting the dashboard. Death by a thousand “that’s odd” moments. And that is how reporting gets wrecked, automation gets compromised, and trust slips out through the floorboards while everyone is still admiring the new furniture. If you are going to migrate to HubSpot, do it properly. Not perfectly. Properly. There is a difference. Perfect is theatre. 
Proper is structure, discipline and enough honesty to admit where the old setup was nonsense. That is not the glamorous version. It is, however, the version that works. Migrating to HubSpot and need some guidance? Let's talk Discover our Services
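A practical footnote on the property mapping point above: one way to avoid fields drifting across by default is to force an explicit decision per legacy property before anything is imported. Here is a minimal sketch; the field names, decisions and rules are assumptions, not a prescribed process.

# Illustrative sketch only: a pre-migration property audit. Every legacy
# property needs an explicit decision (move, merge, retire) and an owner
# before data moves. Anything undecided gets flagged instead of imported.

LEGACY_PROPERTIES = [
    {"name": "lead_source", "used_in_last_12m": True},
    {"name": "lead_source_final_final_2", "used_in_last_12m": False},
    {"name": "lifecycle_stage_old", "used_in_last_12m": True},
]

MAPPING_DECISIONS = {
    # legacy property -> (decision, target HubSpot property, owner)
    "lead_source": ("move", "original_source_detail", "marketing ops"),
    "lifecycle_stage_old": ("merge", "lifecyclestage", "rev ops"),
}

def audit_mapping(legacy, decisions):
    """Flag properties that would otherwise slide into the new portal by default."""
    findings = []
    for prop in legacy:
        decision = decisions.get(prop["name"])
        if decision is None:
            findings.append(f"UNDECIDED: {prop['name']} has no mapping decision")
        elif not prop["used_in_last_12m"] and decision[0] != "retire":
            findings.append(f"QUESTION: {prop['name']} is unused but set to {decision[0]}")
    return findings

for finding in audit_mapping(LEGACY_PROPERTIES, MAPPING_DECISIONS):
    print(finding)

The output is deliberately boring: a list of properties nobody has made a decision about yet, which is exactly the list that otherwise ends up wrecking reporting six weeks after go-live.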

  • Big News: TLS Certificate validity moving to 199 Days

    Online security standards have changed - as of February 24, 2026 , Certificate Authorities (CAs) will issue public TLS/SSL certificates with a maximum validity of 199 days  (previously 397 days). This is an industry-wide update driven by the latest CA/Browser Forum Baseline Requirements , and it’s all about strengthening security across the web. Why the Shorter Validity? Shorter certificate lifespans enhance security in a few key ways: Reduced risk exposure  if a private key is compromised Faster cryptographic agility , allowing the industry to adapt more quickly to evolving threats and standards Lower long-term impact  of mis-issuance or outdated configurations In short: Smaller validity windows = tighter security controls and faster innovation. Important CA Cutoff Dates Here’s when the new 199-day maximum goes into effect: DigiCert:  February 24, 2026 Sectigo:  March 12, 2026 Any certificates issued on or after these dates will follow the new maximum validity rule. Two Ways to Navigate the Change You’ve got options, choose the workflow that best fits your team. Path 1: Manual Re-Issuance (Business as Usual) You can continue purchasing certificates as you do today (e.g., 1-year or 2-year products). The difference? You’ll need to reissue and reinstall the certificate every ~6 months , until the order term is complete. Best practice: Most SSL Management services offer   renewal notifications, ensure these are enabled in your account so you never miss a reissuance window. This approach works well for teams already comfortable managing certificate lifecycle tasks manually. Path 2: Embrace Automation  Want to set it and forget it? Automation is your friend. GoGetSSL  currently offers ACME-based SSL certificates , enabling automated issuance and renewal. Once configured, your certificates can reissue seamlessly without manual intervention. For enterprise-scale environments, consider DigiCert Trust Lifecycle Manager . It provides comprehensive certificate lifecycle management, including discovery, automation, policy enforcement, and centralized visibility. Technical Considerations Here’s what your development and operations teams should be aware of: API Certificate Order Requests After the cutoff dates: API requests specifying a validity greater than 199 days will still create an order for the requested duration. However, the issued certificate itself will be capped at 199 days . This design prevents API errors and ensures your public TLS/SSL orders continue processing smoothly. Pro tip: Use the getOrderStatus  detail response parameters to monitor the difference between: The order validity term The actual certificate expiration date Tracking both values will be important for lifecycle planning. DigiCert Validation Reuse Changes DigiCert   customers should also note adjustments to validation reuse periods: Domain Validation (DV) reuse Changing from 397 days → 199 days  (effective February 24, 2026) Organization Validation (OV) reuse Changing from 825 days → 397 days These updates align validation lifecycles more closely with the new certificate validity standards and reinforce stronger identity assurance practices. What this means for you This isn’t just a policy change, it's a strategic shift toward a more secure and agile internet. Continue managing certificates manually (with more frequent reissuance), or Transition to automation and streamline your operations. Some MOps platforms already have features enabled to keep it all in one place. 
For example, Eloqua offers Automated Certificate Management at no additional cost. Either way, planning ahead will ensure a smooth transition. If you’d like help evaluating or implementing automation options for your SSL certificates or updating your certificate management strategy, we’re here to support you. Discover our Email Services
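A quick technical footnote to the API point above: a minimal sketch, in Python, of tracking the order term and the actual certificate expiry separately. The response field names (order_valid_until, certificate_expires) and the buffer period are illustrative assumptions standing in for whatever your CA's getOrderStatus-style response actually returns, so adapt them to the real schema.

from datetime import date, timedelta

# A minimal sketch, not a client for any specific CA API. After the
# cutoff dates, the order term and the issued certificate's lifetime
# will no longer match, so both values need watching.

REISSUE_BUFFER_DAYS = 14  # assumed headroom for validation and reinstall

def reissuance_plan(order_status, today=None):
    """Compare the order term with the actual certificate expiry and
    work out when the next reissue needs to happen."""
    today = today or date.today()
    order_end = date.fromisoformat(order_status["order_valid_until"])
    cert_end = date.fromisoformat(order_status["certificate_expires"])
    reissue_by = cert_end - timedelta(days=REISSUE_BUFFER_DAYS)
    return {
        "days_until_cert_expiry": (cert_end - today).days,
        "reissue_by": reissue_by.isoformat(),
        "order_days_left_after_this_cert": max((order_end - cert_end).days, 0),
        "needs_reissue_now": today >= reissue_by,
    }

# Example with made-up dates: a one-year order whose first issued
# certificate is capped at 199 days.
status = {
    "order_valid_until": "2027-02-24",
    "certificate_expires": "2026-09-11",
}
print(reissuance_plan(status, today=date(2026, 8, 30)))

Run on a schedule, a check like this is usually enough to stop a 199-day certificate quietly lapsing halfway through a one-year order term.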

  • Guardrails aren’t optional when the tool can speak for you...

    A few years ago, most marketing mistakes were slow mistakes. Someone wrote the email, someone proofed it, someone hit send. If it went wrong, it went wrong at human speed. You had time to catch the awkward phrasing, the wrong link, the “Dear {FirstName}” horror. The damage was real, but it was usually contained to a campaign, a segment, a moment. Now you’ve got tools that can speak for you. Not just suggest, not just draft, not just “help”. Speak. In your tone. Under your brand. At scale. Across channels. With alarming confidence. That changes the deal. When a tool can produce customer facing language, take action in systems, and create outputs that look official, you’re no longer talking about productivity. You’re talking about authority. And if you hand authority to a system without guardrails, you are effectively outsourcing your standards to a probability machine and hoping your customers never notice. They will. The uncomfortable truth is that AI in Marketing Operations doesn’t fail like software used to fail. Traditional automation breaks loudly. Integrations fail, jobs error out, workflows stop. You get alerts. You get tickets. You get something you can point at. AI fails quietly. It produces something that looks plausible. It produces something that sounds like you. It produces something that passes a quick skim. And then it slips into the world and does its damage in the most painful way: It looks like you meant it. This is why guardrails are not optional. Not because the tool is evil. Not because everyone should panic. Because once the tool speaks, the brand is accountable. “It’s just a draft” is a comforting lie Most teams start with the safest narrative possible. The tool is “just drafting”. Someone will review it. Nothing goes out unapproved. It is assistance, not autonomy. And at the start, that is true. But the reality of modern marketing is volume. Too many emails, too many landing pages, too many ads, too many variations, too many segments, too many stakeholders. When the tool makes output easier, you produce more output. When you produce more output, review becomes thinner. When review becomes thinner, the definition of “approved” turns into “nobody complained”. That is how risk creeps in. Not through one dramatic decision to let the robot run your marketing. Through a thousand tiny shortcuts made by busy people who are rewarded for speed, not for diligence. A draft becomes a “close enough”. A “close enough” becomes a template. A template becomes a system. And then one day your brand voice is quietly shaped by whatever the model thinks sounds professional, persuasive, or reassuring. If you’ve ever read a company message that felt oddly hollow, oddly generic, oddly not human, you already know what that looks like. Customers do too. They might not say “this was generated”, but they feel the distance. They feel the lack of accountability. They feel the absence of a real person. In a market where trust is already fragile, that’s not a minor issue. It is the issue. When the tool speaks, it represents your intent This is where the conversation needs to get more serious than “accuracy”. Accuracy matters, of course. Nobody wants hallucinated features or invented pricing. But accuracy is only one slice of the problem. The bigger problem is implied intent. When your brand sends something, customers assume it reflects what you believe, what you value, how you operate, and how you’ll treat them. The tone matters. The promises matter. The certainty matters. 
The choice of words matters. The absence of empathy matters. AI is very good at sounding certain. It is very good at smoothing rough edges into confident statements. It is very good at making things sound resolved even when they’re not. That is a dangerous trait in a customer context. Because confidence is persuasive, and persuasion under your brand name is a promise. If you accidentally overpromise, if you accidentally mislead, if you accidentally claim compliance you haven’t earned, the customer doesn’t blame the tool. They blame you. They should. It is your logo at the top of the email. Your name on the website. Your ad account paying to put the message in front of them. Your sales team following up as if the claim was deliberate. Guardrails are how you protect intent. They are how you stop the tool from speaking with more authority than your business can actually support. The new failure mode is “looks fine” This is the part that catches even smart teams out. Most governance efforts are designed for obvious failures. Broken processes. Missing approvals. Wrong recipients. Compliance red flags. Things you can spot in a checklist. AI’s most common failure mode is more subtle: It produces output that looks fine at a glance and is wrong in a way that matters later. It might be wrong legally. It might be wrong commercially. It might be wrong ethically. It might be wrong in tone. It might be wrong in a way that sets the wrong expectation. It might take a sensitive topic and sand it down into corporate cheerfulness, which feels disrespectful. It might take a complex product limitation and simplify it into something misleading. It might take a customer concern and respond with “we value your feedback”, which is the fastest way to sound like you don’t. And because the output looks polished, it often bypasses the kind of scrutiny that a messy human draft would invite. Humans are suspicious of imperfect writing. We notice it. We challenge it. We ask questions. AI writing often arrives wearing a suit. People assume it has done the thinking because it has done the formatting. That’s how you end up publishing something that nobody would have consciously written, but everyone accidentally approved. Speed makes small mistakes expensive Marketing has always had risk. But speed changes the economics of risk. When a human team writes slowly, mistakes are slower too. When you have the ability to produce ten variants instead of one, you also have ten chances to be wrong. When you can spin up campaigns faster, you also shorten the time between a decision and the moment it reaches a customer. Less time means less reflection. Less reflection means more accidents. And the tool does not get tired, so you keep going. This is where teams often miss the point of guardrails. They think guardrails exist to slow things down. In reality, guardrails exist to allow speed without gambling your reputation every time you hit publish. The teams who win with AI will not be the ones who use it the most. They will be the ones who use it with enough discipline that they can trust their own output again. Your brand voice is an asset, not a formatting preference A lot of organisations treat brand voice as a style guide. A few adjectives. A list of do’s and don’ts. Maybe a handful of examples. Useful, but not sacred. When AI enters the picture, brand voice becomes something else. It becomes the training data for your outward identity. The guardrails around how you speak are no longer “nice to have”. 
They are the constraints that stop your company from slowly turning into generic marketing sludge. Because AI has a default voice. It’s the voice of polite certainty. Professional, helpful, mildly enthusiastic, oddly uncontroversial. That voice is fine for a toaster manual. It is terrible for differentiation. If your competitors use the same tools with the same defaults, you will all start sounding the same. Same phrases, same cadence, same vague confidence, same “we are committed to delivering value”. Customers will not remember you for that. They will remember you for the moments when your communication felt real, specific, and accountable. Guardrails are not only about preventing disaster. They are also about preventing dilution. They protect what makes you recognisable. The risk isn’t only what the tool says. It’s what it makes people do. Here’s the part many teams ignore because it feels less glamorous than content. Once AI is embedded in workflows, it stops being a writing assistant and starts being a decision shaper. It changes what people choose to ship, what they choose to test, what they choose to claim, what they choose to ignore. If the tool reliably produces something “good enough”, you stop pushing for “great”. If the tool can generate five angles quickly, you stop thinking deeply about the one angle that truly matters. If the tool can answer customer questions instantly, you stop investing in better documentation and clearer product truth. The tool doesn’t just produce content. It changes standards. That is why governance and guardrails sit in Marketing Operations, not only in legal or IT. This is an operational quality problem. It is about maintaining standards under acceleration. Customers don’t care how it happened When something goes wrong, organisations love to explain the internal story. It was an experiment. It was a vendor issue. It was a misconfiguration. It was a one off. It was an edge case. It was an isolated incident. It was unintended. Customers do not care. They care that you spoke to them in a way that felt careless, misleading, or disrespectful. They care that you used their data in a way you cannot clearly explain. They care that your messaging implied something that was not true. They care that you are now backpedalling. The moment you start defending the process instead of owning the outcome, you lose more trust. Because accountability is the whole point of a brand. Guardrails are how you avoid needing excuses in the first place. Guardrails are not a policy document nobody reads Let’s be blunt. A policy document is not a guardrail. It is a wish. Teams love policies because they create the feeling of control. They also love them because they can be written once and then forgotten. They become a box ticked. “We have an AI policy”. Great. Where is it used? Who follows it? What happens when someone ignores it? How do you know? Real guardrails show up where work happens. In the tools. In the templates. In the workflows. In the approvals. In the way you capture decisions. In the way you log what was generated and why. In the way you constrain what is allowed to be said in certain contexts. In the way you enforce brand voice and claims. If you cannot point to the guardrails inside the process, you don’t have guardrails. You have vibes. And vibes are a terrible risk strategy. The irony: Guardrails make AI more useful The fear some teams have is that guardrails will reduce the value of AI. That constraints will kill creativity. That approvals will slow delivery. 
That governance will turn an exciting tool into another corporate process. In practice, the opposite happens. Without guardrails, teams never fully trust what they generate. They second guess, they rewrite, they hesitate, they argue, they avoid using the tool for anything important. They keep it in the “nice to have” corner. They treat it like a toy. With guardrails, the tool becomes reliable. Not perfect, but reliable enough that teams can use it in real work without constantly worrying that it will embarrass them. Constraints create confidence. Confidence creates adoption. Adoption creates impact. The best marketing ops teams understand this instinctively. They know that freedom without control is not freedom. It is chaos. This is the moment to decide what kind of organisation you are AI is forcing a choice that many companies have been postponing for years. Do you operate with standards, or do you operate with output? Do you want to be trusted, or do you want to be fast? Do you want your marketing to be a real representation of your business, or a high volume content factory that occasionally hits the right note? Because once the tool can speak for you, every weak spot in your operation becomes louder. Every unclear rule becomes an argument. Every missing owner becomes a gap. Every undocumented decision becomes a risk. Some teams will respond by pretending it is fine. They will let the tool run, then scramble when something breaks. They will call it learning. Other teams will respond by putting simple, sensible constraints in place that protect customers and protect the brand, while still getting the productivity gains that made them adopt AI in the first place. That second group will be the one that looks competent in two years. Not because they had better tools, but because they had better discipline. And discipline is the real advantage right now. AI can speak for you. That is powerful. It is also a responsibility. Guardrails aren’t optional, not because you’re afraid of the tool, but because you respect what it means to speak under your name. Discover our AI Services
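To make "guardrails inside the process" slightly less abstract, here is a minimal sketch of a pre-publish check: it blocks claims the business cannot support and routes customer-facing output to a human. The banned phrases, channel names and tiers are assumptions; the point is that the rule runs in the workflow rather than living in a policy document.

# Illustrative sketch only. Phrases, channels and tiers are assumptions;
# real guardrails would be driven by your own claims register and brand
# rules, and would sit wherever content actually gets shipped.

UNSUPPORTED_CLAIMS = [
    "guaranteed roi",
    "fully compliant",
    "no setup required",
]

HIGH_RISK_CHANNELS = {"email", "paid_ad"}  # customer-facing, hard to recall

def review_tier(draft, channel):
    """Decide whether AI-generated copy can ship, needs review, or is blocked."""
    hits = [c for c in UNSUPPORTED_CLAIMS if c in draft.lower()]
    if hits:
        return {"tier": "blocked", "reason": f"unsupported claims: {hits}"}
    if channel in HIGH_RISK_CHANNELS:
        return {"tier": "human_review", "reason": "customer-facing channel"}
    return {"tier": "auto_ok", "reason": "internal or low-risk output"}

print(review_tier("Guaranteed ROI in 30 days or less.", "email"))
print(review_tier("Draft notes for the team stand-up.", "internal_doc"))

It is crude on purpose. A check this small still does more than a policy nobody reads, because it runs every time someone hits publish.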

  • AI Governance is not optional, it is the price of using the tool

    Every Marketing Operations team is having the same conversation right now. Someone has shipped a chatbot into the website. Someone else is feeding prospect data into a model to “improve targeting”. A third person has quietly wired an AI assistant into the CRM to auto log activities, write follow ups, and “clean” fields. And then the organisation pats itself on the back for being modern. But if you are using AI in production without governance, you are not innovative. You are careless. You are outsourcing risk to your future self, your legal team, and your customers. You are also guaranteeing a messy internal backlash later, because the first time it misfires you will watch the business slam the brakes on everything. Governance is not paperwork. It is the operating system that lets you use AI without turning your MarTech stack into a liability. Why Marketing Ops is uniquely exposed Marketing Ops sits in the blast radius of AI for three reasons. First, you handle a ridiculous amount of personal data, often across multiple systems, with varying consent states and hazy provenance. That is not a moral judgement, it is the reality of modern marketing. Second, your work touches revenue. When AI changes what gets sent, scored, routed, or reported, you are not “testing a feature”. You are changing the way the company makes money. Third, Marketing Ops tends to be the place where “quick wins” become permanent. A prototype becomes a workflow. A workflow becomes business as usual. Nobody writes down what it does, why it does it, or what it is allowed to touch. Then one day something breaks and everyone acts shocked. AI accelerates that pattern. It automates decisions. It generates content at scale. It can behave differently tomorrow than it did today. That is why governance matters more here than in a team building slide decks. Guardrails are not “compliance”, they are performance The common argument against governance is that it slows teams down. That only sounds true if you have never lived through the alternative: Chaos, rework, and a six month freeze after a public or internal incident. AI guardrails speed you up because they remove ambiguity. People know what tools are approved, what data they can use, what needs review, and what gets logged. They stop you shipping the same mistakes over and over again with increasing confidence. The NIST AI Risk Management Framework is a good way to think about this. It frames risk management around governance and lifecycle management, not one time approvals. The core idea is simple: Govern the approach, map the context, measure the risks, manage the controls. If you have no GOVERN function, the rest becomes theatre.  ISO/IEC 42001 points in the same direction from a management system angle: You need a structured way to establish, run, and continually improve how AI is used. This is not about one policy PDF. It is about ownership, controls, and continuous improvement.  The uncomfortable truth about “we are just using it for marketing” A lot of teams still talk about marketing use cases as if they are low stakes. They are not. If AI personalises a message, decides who gets an offer, changes lead routing, or rewrites copy based on customer data, you are in the realm of fairness, transparency, and accountability. You are also in the realm of data protection obligations, because personal data is often in the loop, even when people pretend it is not. Regulators are not buying the “it is just marketing” line either. 
The UK ICO’s guidance on AI and data protection is explicit about accountability and governance, and it ties it to concrete practices like impact assessments, documenting decision making, and involving appropriate stakeholders.  In Europe, the EU AI Act has put “trustworthy AI” into law, with a risk based approach and requirements that include risk management, data governance, transparency, and human oversight depending on the system and risk category. Whether or not your specific use case is classified as high risk, the direction of travel is clear. The bar is rising, and “we did not think about it” is not a defence.  What good governance actually looks like in Marketing Ops Governance fails when it is vague. “Be responsible” is not a control. It is a hope. Good governance is operational. It answers questions people actually have to answer on a Tuesday afternoon, under pressure, with a campaign deadline looming. Here is what we tend to come across in a Marketing Ops context. 1. A clear inventory of AI use cases If you do not know where AI is used, you cannot govern it. Most organisations already have shadow AI, including browser based tools, plug ins, CRM add ons, and “temporary” scripts. A proper inventory is not a spreadsheet that dies after week one. It is a living register: What the use case is, what system it touches, what data is involved, what model or vendor is used, what the failure modes are, and who owns it. 2. Data boundaries that are blunt, not poetic You need rules that can be enforced, not mission statements. What data is allowed into prompts and workflows. What must be masked or excluded. What cannot be used at all. How retention works. What happens to data sent to third parties. The UK ICO has been clear that organisations should think seriously about governance and accountability when processing personal data in AI systems, including assessing risks and documenting the rationale. That starts with knowing what you are feeding into the machine.  3. Human oversight that is real “Human in the loop” is often marketing theatre. People claim oversight exists, but in practice nobody checks anything until it goes wrong. Real oversight means defining which outputs are allowed to run automatically, which need review, and what “review” actually means. It also means training reviewers to spot the failure modes, not just grammar errors. The EU AI Act explicitly points to human oversight as a core requirement in higher risk contexts, because systems can fail in ways humans do not anticipate. Even if your specific use case is not formally high risk, the principle still applies.  4. Logging, traceability, and auditability This is the part Marketing Ops teams avoid because it feels technical. It is also the part that saves you when someone asks, “Why did this customer receive that message?” or “Why did this lead get marked as unqualified?” You need to be able to trace inputs, prompts, outputs, and downstream actions. That includes versioning of prompts and workflows, so you can explain behaviour changes over time. Without logs, you cannot learn. You also cannot defend yourself. 5. Vendor and model controls Most teams do not “build AI”. They buy it. That does not reduce responsibility. It changes the governance surface. You need procurement standards for AI vendors, clarity on data usage, model training policies, retention, and security. You need to know what happens when the vendor changes the model. You need exit plans. 
You need to treat AI features like critical infrastructure, not a shiny add on. ISO/IEC 42001 is useful here because it is designed for organisations providing or using AI based products or services, with an emphasis on responsible use and management system controls.  6. A governance cadence, not a one time workshop AI governance is not a launch task. It is a loop. New use cases appear. Old ones change. Vendors update. Regulations evolve. Teams find new ways to break things. If governance is a quarterly committee that nobody takes seriously, it will fail. If it is embedded in change control, release management, and campaign operations, it becomes normal. Risk management should apply across the lifecycle, not just at the start and lifecycle framing matters a lot in Marketing Ops as systems and workflows are constantly evolving.  The three failure modes that guardrails prevent Let’s make this painfully practical. Guardrails stop three common disasters. First, data leakage. Someone pastes customer data into a tool they should not be using. Someone connects a plugin that exports data to a vendor that stores it indefinitely. Someone uses a feature without understanding where the data goes. Regulators have been increasingly vocal about privacy harms in AI contexts, and not just in abstract terms.  Second, hallucinated operations. AI makes up a field value. It confidently “dedupes” records that should not be merged. It assigns a lead score based on nonsense. It rewrites copy and introduces claims you cannot substantiate. Marketing Ops teams love automation, which means they are especially vulnerable to quietly automating errors at scale. Third, accountability collapse. When things go wrong, nobody owns it. The vendor blames configuration. The marketer blames the tool. The Ops team blames “the model”. Leadership responds by banning everything. The outcome is predictable: Fear replaces learning. Governance is how you avoid turning one mistake into a full organisational retreat. “But we want to move fast” Move fast is fine. Move fast with rules. The teams that win with AI are not the ones with the most experiments. They are the ones that can experiment safely, keep what works, and kill what does not without drama. Guardrails are what make that possible. A strong governance setup does not mean every prompt needs legal approval. It means you have sensible tiers. Low risk tasks, like drafting internal summaries or rewriting existing public copy, can have light controls. Higher risk tasks, like using personal data for personalisation, changing routing, or automating outbound messages, should have stronger controls: Defined review, logging, and monitoring. This is exactly how risk based frameworks are designed to work. The EU AI Act is built around risk categories, and NIST’s RMF is intentionally flexible and context driven.  What to do next if your “governance” is basically vibes If you are reading this and realising your current stance is somewhere between “ad hoc” and “hope”, you are normal. Most organisations are there. The fix is not a 40 page policy. The fix is a working system. Start with a short inventory of every AI touchpoint in your marketing stack. Include the unofficial ones. Define data boundaries in plain language and make them enforceable. Create an approval and oversight model that matches risk, with clear ownership. Implement logging and traceability so you can explain what happened. Set vendor standards so you are not surprised by where data goes or what changes. 
Then run it as a process, not a project. If that sounds unsexy, good. Most things that save companies from expensive mistakes are unsexy. Marketing Ops is already the team that makes the unsexy work pay off. AI should not be the exception. Guardrails are not the thing stopping you from getting value from AI. Guardrails are the thing that lets you keep the value once you find it. Find out how we can help you with your AI governance and guardrails: Discover our AI Services
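As a companion to the tiering described earlier (light controls for low-risk tasks, stronger controls for higher-risk ones), here is a sketch of how risk tiers might translate into minimum controls. The tier names and the control lists are assumptions to adapt against your own policy and legal guidance, not a standard.

```python
# Illustrative mapping from risk tier to the minimum controls a use case should carry.
# Tiers and controls are assumptions; align them with your own governance policy.
TIER_CONTROLS: dict[str, list[str]] = {
    "low": [
        "inventory entry kept current",
        "named owner",
        "periodic spot-checks of outputs",
    ],
    "medium": [
        "inventory entry kept current",
        "named owner",
        "human review before anything is published or sent",
        "prompt and output logging",
    ],
    "high": [
        "inventory entry kept current",
        "named owner",
        "documented approval before go-live",
        "human review of outputs",
        "full logging and traceability",
        "ongoing monitoring for drift and incidents",
    ],
}

def required_controls(risk_tier: str) -> list[str]:
    """Return the minimum controls for a tier; unknown tiers fall back to the strictest set."""
    return TIER_CONTROLS.get(risk_tier, TIER_CONTROLS["high"])

# Example: a task that uses personal data for personalisation sits in the "high" tier.
print(required_controls("high"))
```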

  • Stack rationalisation is not downsizing. It’s a MarTech ROI rescue.

    Every few years, a Marketing Ops team looks at its technology stack and has the same realisation you get when you open the “misc” kitchen drawer. Nothing in there is individually  a bad idea. It’s just… a lot. Half of it does the same job. Some of it hasn’t been used since the last merger. One item is only kept because a former colleague swore it was “mission critical”, and nobody’s brave enough to ask what it actually does. That’s your MarTech stack. And here’s the uncomfortable truth: Most stacks don’t fail because the tools are bad. They fail because the stack stopped being designed and started being collected. The result is predictable. Costs creep up. Adoption fragments. Data gets weird. Reporting becomes interpretive dance. The team spends more time keeping systems alive than using them to create pipeline. Then someone says, “ We need to rationalise the stack ,” and everyone hears, “ We’re about to take your toys away. ” But that’s not what rationalisation should be. Done properly, it’s not a finance-led haircut. It’s a performance rescue. It’s how you turn “ we have loads of tools ” into “ we get value from what we pay for ”. It’s also one of the fastest ways to regain executive trust, because nothing screams “ adult supervision ” like knowing what you own, why you own it, and what it’s delivering. The ROI myth that keeps stacks bloated MarTech ROI is usually treated like a scoreboard. We bought tool X. Tool X has dashboard Y. Therefore we can report ROI. But “reporting ROI” and “having ROI” are not the same thing. Most MarTech spend is justified with a story, not evidence. A story about efficiency. A story about personalisation. A story about scale. A story about being “ data-driven ”. Great stories, honestly. Very fundable. Then reality arrives. The tool requires clean data you don’t have. It needs an integration nobody scoped. It assumes a process you’ve never standardised. It gets deployed halfway, then the team gets busy, then six months pass, then renewal comes around and you renew because the alternative is admitting you don’t know what you’re doing. That is not a tool problem. That is an operating model problem. And the longer it goes on, the harder it gets to unwind, because the stack becomes political. People attach their identity to platforms. Procurement decisions become legacy monuments. Usage becomes impossible to measure because “ using it ” might mean anything from logging in once a quarter to running mission-critical workflows. So the stack grows. Overlaps multiply. ROI gets fuzzier. Everyone gets used to it. Until a CFO asks a very fair question: “ What are we paying for, and what are we getting back? ” If you can’t answer that crisply, you don’t have a stack. You have a subscription museum. Rationalisation isn’t removing tools. It’s restoring design. A rationalised stack is not the smallest possible number of tools. It’s the fewest tools required to reliably execute your strategy. That’s a huge difference. Because the goal is not austerity. The goal is performance. It’s speed, consistency, measurable outcomes, and reduced dependency on heroics. When stacks are bloated, teams start compensating with workarounds and manual effort. They build fragile automations. They export spreadsheets. They invent processes to deal with tool limitations instead of choosing tools that fit the process. Rationalisation reverses that. It gets you back to intentional design: What do we actually need to do to win? What capabilities matter most to deliver that? 
What is the simplest architecture that supports those capabilities? Where are we paying twice for the same outcome? Where do we have “features” but not “adoption”? Where does the data fall apart? This is why stack rationalisation is not primarily a procurement exercise. It’s a strategy and operations exercise that happens to result in procurement changes. The hidden cost: Operational drag Most teams underestimate how expensive complexity is, because it doesn’t show up as a single line item. Complexity costs you in: Time : Training, troubleshooting, triage, and all the “small” tasks that become constant. Speed : Every new campaign takes longer because the workflow touches more systems and more handoffs. Risk : Data privacy, consent, access control, and governance failures become more likely as systems multiply. Insight : Reporting degrades because definitions split across tools and no one trusts the numbers. Morale : Nothing kills motivation like working inside a stack that feels unreliable. If you want a simple definition of “MarTech debt”, it’s the gap between the stack you have and the stack your team can actually operate confidently. Paying off that debt is where ROI rescue starts. Why most rationalisation attempts fail Plenty of teams try to rationalise. Many even reduce vendor count. And then, weirdly, not much improves. That happens when rationalisation is done as a cleanup rather than a redesign. Common failure patterns: It becomes a cost-only exercise. If the main goal is to cut spend, the team will keep anything that looks defensible and ditch anything that looks optional, regardless of whether the “optional” tool is the one actually driving outcomes. It ignores workflows. Tools get evaluated in isolation, not based on the end-to-end journey they support. You can’t rationalise your stack if you can’t describe your core workflows. It confuses usage with value. A heavily used tool might still be a net negative if it drives manual work, fragmented data, or duplicated processes. It avoids hard questions. Teams keep tools because “ someone uses it ”, but nobody can define the value, the owner, the success metrics, or the alternative. It forgets change management. Removing tools is easy. Removing habits is hard. If you don’t redesign workflows and retrain the team, the old problems will reappear inside the “new” stack. If you want stack rationalisation to stick, it has to be tied to operational clarity and measurable outcomes. The ROI rescue approach: stop measuring tools, start measuring capabilities A better way to think about MarTech ROI is this: You don’t buy tools. You buy capabilities. Tools are just one way to deliver those capabilities. So instead of asking, “What does this platform do?”, ask “What capability does this enable, and how will we prove it?” Capabilities might include: Reliable lifecycle email execution. Accurate attribution you trust enough to bet budget on. Lead management that doesn’t create sales distrust. Consent and preference management that reduces risk. Personalisation that actually moves conversion rates. Reporting that doesn’t require a therapist. Once you frame it this way, rationalisation becomes clearer. Overlap is not “two tools do similar things”. Overlap is “we’re paying twice for the same capability”. And gaps become obvious too. Sometimes teams have ten tools yet still can’t do one critical thing consistently because the foundations are missing: Data, governance, process ownership. That’s why the ROI rescue is not simply consolidation. 
It’s capability alignment. Step one: Name the outcomes you’re trying to buy Before you touch vendors, get specific about what the business expects MOPS to deliver, and what MOPS expects the stack to make easier. Not vague outcomes like "better engagement". Concrete outcomes like: Reduce campaign launch time from ten days to five. Increase lead-to-meeting conversion rate by 15 percent. Improve lifecycle email contribution to pipeline by X. Increase MQL to SQL acceptance by Y. Reduce manual list pulls and CSV-based processes by Z. If you can’t define outcomes, the stack will keep being evaluated based on opinion and politics. The fastest way to kill a rationalisation project is to make it about which tools people like. Step two: Map the workflows that create value You don’t need a massive process library. You need the handful of workflows where performance lives. For most B2B teams, that’s usually lead capture to routing, lifecycle email and nurture, campaign execution and measurement, attribution and reporting, data enrichment and deduplication, consent and preference management, and integration between CRM, MAP, and analytics. Map those workflows at a human level, not at a vendor feature level. Who does what, when, with what inputs, and where the system should automate vs where humans need control. This is where you’ll find the truth. The truth is usually that the stack is not too big. It’s too inconsistent. It allows different parts of the org to operate different versions of "the process", which creates downstream chaos. Rationalisation should standardise workflows, not just reduce logos on a slide. Step three: Assign ownership, or accept you’re buying waste Tools without owners become toys, then become liabilities. Every core system and every core workflow needs an accountable owner. Not a committee. A named person. Ownership means: Defining standards. Managing changes. Measuring performance. Training users. Deciding what gets built and what gets blocked. If nobody owns it, you’re not buying a platform. You’re buying entropy. This is also where ROI becomes measurable. You can’t prove ROI on something nobody is responsible for improving. Step four: Create a "keep, kill, consolidate, fix foundations" decision model This is where teams expect a dramatic tool-culling session. Sometimes you will cut tools. Often you should. But more often, the biggest ROI is in "fix foundations". Because you can consolidate your stack beautifully and still get terrible results if: Data is inconsistent. Lifecycle definitions are unclear. UTM governance is non-existent. CRM hygiene is a fantasy. Sales stages and lead statuses mean different things to different people. Consent tracking is messy. Rationalisation should result in decisions across four buckets: Keep: tools that directly support priority capabilities and are adopted properly. Kill: tools that are unused, redundant, or never delivered the promised capability. Consolidate: overlap where one tool can reasonably replace another without wrecking workflows. Fix foundations: areas where the tool is fine, but the operating model is broken. That last bucket is where ROI rescue often lives. Because you can save money by cutting a tool. You can make money by making the stack work. Step five: Measure a few things that actually matter ROI is not platform cost divided by vibes. Pick metrics that connect stack performance to business performance and operational efficiency.
Examples that tend to expose the truth quickly include time-to-launch for campaigns, percentage of leads routed correctly within SLA, sales acceptance rate of leads, percentage of lifecycle emails using approved templates and tracking, duplicate rate in CRM, percentage of records with required fields. And then other things such as report reliability: Do teams trust the dashboards enough to use them in decisions? And support load: How many hours per week are spent troubleshooting basic execution issues? These metrics do two important things. They prove value when things improve. And they make it painfully obvious when a tool is not the problem. The part nobody likes: Rationalisation changes power This is why it’s hard. A rationalised stack usually means fewer exceptions, more standards and clearer governance - less “I do it my way”. That feels restrictive if you’re used to improvising. But it’s the difference between creativity and chaos. High-performing teams don’t move faster because they have more tools. They move faster because they have fewer decisions to remake every week. Standards create speed. Governance creates confidence. Clarity creates adoption. And adoption is the thing that turns software into ROI. What “good” looks like when you’ve rescued ROI A rationalised stack doesn’t look exciting. It looks boring in the best way. Campaigns launch reliably. Reporting is trusted. Integrations are stable. Lead routing works without daily drama. New hires can learn the system without needing a private tour from the one person who understands it. You spend less time arguing about tools and more time improving outcomes. And the CFO stops asking awkward questions because you’ve already answered them. That’s the real goal. Not fewer vendors for the sake of it, but a stack that behaves like infrastructure, not a science project. The kicker: Rationalisation is an AI readiness project in disguise Most organisations are desperate to “use AI” and confused about why it isn’t magically working. Here’s why. AI can’t save a broken operating model. It will only automate the chaos faster. If your data is inconsistent, AI will generate inconsistent outputs. If your processes are unclear, AI will amplify the ambiguity. If nobody owns the system, AI will become another orphan tool. Stack rationalisation, done properly, is one of the best AI readiness moves you can make. Because it forces you to create the conditions where automation can be trusted: Clean data, standard workflows, and clear accountability. You don’t become AI-ready by buying an AI feature - You become AI-ready by becoming operationally serious. A final thought: If your stack can’t be explained, it can’t be defended If you can’t describe, in plain language, what each major tool is for, who owns it, what capability it supports, and how you measure its success, you’re not managing a stack. You’re hosting one. Stack rationalisation is not about being smaller. It’s about being deliberate. And MarTech ROI rescue is not about proving your spend was justified. It’s about ensuring your spend becomes productive. If you want a simple rule to start with, use this: If a tool doesn’t reduce time, reduce risk, or increase revenue, it’s either mismanaged or unnecessary. Either way, it’s on the list. Discover our MarTech Services
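To show how the capability framing and the four-bucket model above can be made operational, here is a minimal sketch that groups tools by the capability they are paid to deliver, flags overlap, and suggests a bucket. The tools, capabilities, adoption flags, and decision rules are invented for illustration; real decisions still need the workflow and foundations context described earlier.

```python
from collections import defaultdict

# Hypothetical stack entries: which capability each tool is paid to deliver, and whether it is adopted.
stack = [
    {"tool": "Marketing automation platform", "capability": "lifecycle email execution", "adopted": True},
    {"tool": "Point email tool",              "capability": "lifecycle email execution", "adopted": False},
    {"tool": "CRM",                           "capability": "lead management",           "adopted": True},
    {"tool": "Attribution add-on",            "capability": "attribution reporting",     "adopted": False},
]

# Overlap means paying twice for the same capability, not "two tools look similar".
by_capability = defaultdict(list)
for entry in stack:
    by_capability[entry["capability"]].append(entry)

def suggest_bucket(entries: list[dict]) -> str:
    """Toy rules for keep / kill / consolidate / fix foundations. Real decisions need workflow context."""
    if len(entries) > 1:
        return "consolidate: more than one tool is paid for this capability"
    if not entries[0]["adopted"]:
        return "fix foundations or kill: the tool exists but the capability is not actually delivered"
    return "keep: supports a priority capability and is properly adopted"

for capability, entries in by_capability.items():
    tools = ", ".join(entry["tool"] for entry in entries)
    print(f"{capability} ({tools}) -> {suggest_bucket(entries)}")
```

Even a toy pass like this tends to surface the awkward conversations quickly: duplicated spend, tools nobody has adopted, and capabilities that no tool is actually delivering.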

  • The EU AI Act will expose your Marketing Ops: Who’s accountable when AI breaks things?

    Marketing Ops has always been accountable. It just rarely looked like it. When a campaign misfires, it’s “a creative issue”. When data goes bad, it’s “a CRM issue”. When attribution turns into astrology, it’s “a market issue”. Marketing Ops sits in the middle quietly fixing everything while everyone else argues about the colour of the button. Now add AI to that mix. Because AI does not fail politely. It fails at scale, at speed, and with enough confidence to make the wrong answer look like policy. The EU AI Act is basically Europe’s way of saying: If you deploy AI, you do not get to shrug when it breaks. Someone has to own the risks, the controls, the monitoring, and the outcomes. And if your Marketing Ops function currently runs the stack, the workflows, the routing, the automation, the data, and increasingly the “helpful” AI features inside your tools, congratulations. You are about to get pulled into an accountability conversation you did not schedule. This article is not legal advice. It’s a practical, Marketing Ops view of what the EU AI Act changes, what it forces you to be clear about, and how to answer the uncomfortable question: Who is accountable when AI breaks things? And are you prepared for when it becomes applicable in August 2026? What the EU AI Act actually is, and why Marketing Ops should care... The EU AI Act is a regulation that sets risk-based rules for AI. It applies to public and private actors inside and outside the EU if they place AI systems or general-purpose AI models on the EU market, put them into service, or use them in the EU.  The timeline matters because this is not some distant future threat you can park in a Q4 roadmap and never touch again. The Act entered into force on 1st August 2024 and becomes fully applicable on 2nd August 2026 , with staged dates for different parts. Prohibited practices and AI literacy obligations have applied since 2 February 2025. Obligations for general-purpose AI models became applicable on 2 August 2025.  You do not need to be “building AI” to be on the hook. If your marketing team is using AI features in a CRM, marketing automation platform, ad platform, analytics tool, chatbot, content tool, sales engagement tool, or customer data platform, you are already in the system. Marketing Ops cares for one simple reason: The Act forces clarity about who is responsible for what. And Marketing Ops is usually the only function that can map what is actually being used, where, by whom, and with what data. The first accountability trap: “We didn’t build it, we just used it” Under the Act, obligations fall on different actors, including providers and deployers. The Commission’s guidance describes the framework applying to providers (for example, a developer of a tool) and deployers (for example, an organisation using that tool).  This is where a lot of Marketing Ops teams try to mentally exit the building. “ We’re not an AI company. We’re just using features in our tools. ” That may reduce some obligations, but it does not remove accountability. Even in the high-risk context, the Commission’s guidance describes deployer obligations that are very operational: Using the system according to instructions, monitoring operation, acting on identified risks or serious incidents, and assigning human oversight to people in the organisation.  So the real question is not “ are we a provider? ” It’s “ are we a deployer, and if so, are we operating the system responsibly? 
” In Marketing Ops terms, that translates into boring, unavoidable work: Governance, documentation, controls, training, monitoring, and incident response. The second accountability trap: “AI is everywhere, so nobody owns it” When everything has an AI button, it becomes culturally tempting to treat AI as a vibe rather than a system. But the EU AI Act is designed to do the opposite. It is trying to turn AI back into something you can audit. That means you will get asked questions like: Who approved this use case? Who decided what data goes into it? Who checked the output? Who is monitoring performance drift? Who is accountable when it produces misleading content, discriminatory outcomes, or security incidents? If your organisation cannot answer those questions, you do not have “ AI adoption ”. You have unmanaged operational risk. And unmanaged risk has a habit of becoming a budget line, a headline, or both. Where Marketing Ops is most exposed Most Marketing Ops teams are not deploying AI for medical triage or border control. That’s not the point. The exposure comes from how marketing actually uses AI in the real world. You run customer-facing AI interactions If you deploy chatbots or other interactive systems, someone needs to think about transparency, user expectations, and what happens when the system confidently says something untrue. The Commission’s guidance explains that the Act introduces transparency requirements for certain interactive or generative AI systems, such as chatbots, to address risks like manipulation, fraud, impersonation and consumer deception.  That is marketing territory. Customer experience, web journeys, lead capture, qualification, and support deflection are all places where Marketing Ops often owns the tooling and the workflow. When those systems break, the first question will be “ why did you deploy it like this? ” not “ which vendor did you buy it from? ” You publish AI-assisted content at scale Marketing teams are already generating images, audio, video, and written content with AI-assisted tools. The Act’s transparency obligations include requirements on deployers in certain situations, including disclosure for AI content, and disclosure when text is generated or manipulated and published with the purpose of informing the public on matters of public interest. The Commission notes these transparency obligations and that guidelines will further clarify how they apply.  Even if your content does not fall into those specific categories, the direction of travel is clear. You are expected to be honest about what is synthetic when that matters to the audience, and to avoid systems that create deception. Marketing Ops is exposed here because it often owns the content workflow tooling, approvals, templates, distribution and tracking. You are the function that can actually operationalise a disclosure rule without turning the team into a bureaucratic mess. You use AI for targeting, segmentation, and decisioning This is the area where marketing loves to pretend the model is “just helping”. If AI influences who sees what, who gets prioritised, who is suppressed, who is routed, or who gets categorised, you are using AI as a decisioning layer. Even when the Act does not label a specific marketing use case as “high-risk”, you still have obligations under other laws, and the AI Act does not replace those. 
The European Data Protection Board has been explicit that the AI Act and EU data protection laws should be considered complementary and mutually reinforcing, and that EU data protection law remains fully applicable to the processing of personal data involved in the lifecycle of AI systems.  So if your AI-driven segmentation relies on personal data, you are automatically in GDPR land as well, and your accountability picture now has at least two regulators’ expectations in it. You might accidentally wander into high-risk territory through HR and recruitment marketing A lot of marketing teams support recruitment, employer brand, internal comms, and candidate journeys. Some teams run targeted job advertising systems and automation. Some use tools that “optimise” job ads and candidate targeting. The Commission’s guidance lists employment-related AI systems as examples of high-risk use cases, including systems intended to be used for recruitment or selection, which includes placing targeted job advertisements.  If your marketing stack touches that area, you need a grown-up conversation with HR and Legal about who owns the system, who is the deployer, and what controls exist. Marketing Ops does not need to own HR compliance, but Marketing Ops often owns the platforms that make these workflows possible. That makes you part of the accountability chain. “When AI breaks things” , what counts as “ breaks ”? This is where organisations get dangerously vague. AI “breaking” is not just a system outage. It can mean: A chatbot gives incorrect product claims, pricing, security assurances, or legal statements. An AI feature generates content that creates deception, impersonation risk, or misleading communications. An optimisation system shifts targeting in a way that creates discriminatory outcomes, even unintentionally. A data pipeline feeds the wrong inputs, and the model output becomes systematically wrong. A generative tool produces content that breaches IP rules or internal policy. A vendor updates a model, performance changes, and your safeguards do not catch it. A workflow creates an outcome you cannot explain to an affected person, which becomes a practical problem in high-risk contexts where the Commission describes a right to an explanation for natural persons in certain situations.  The point is not to predict every failure mode. The point is to stop acting surprised when failure happens, and to have an accountable operating model ready. So who is accountable, legally? There is no single magical job title that makes the risk disappear. Accountability is shared, but not vague. At a legal role level, the Act places obligations on the relevant actor types (providers, deployers, and others depending on the scenario). The Commission’s guidance makes clear that deployers have concrete responsibilities in how they use and monitor certain systems, including assigning human oversight within their organisation.  At a governance level, enforcement is not theoretical. The Commission’s materials outline penalties, with maximum thresholds including up to €35m or 7% of worldwide annual turnover for certain infringements, and other tiers for other non-compliance categories.  At a data and privacy level, the AI Act does not push GDPR aside. The EDPB has stressed that data protection law remains fully applicable to personal data processing across the AI lifecycle, and the AI Act should be interpreted as complementary to GDPR and related laws.  So if your question is “ who will regulators look at? 
”, the honest answer is: They will look at the entity that deploys the system in the EU, the entity that provides it, and the people inside those entities who were supposed to provide oversight. Which brings us to the more useful question... Who should be accountable inside a company? This is the Marketing Ops version of “ stop pointing at each other like a Spiderman meme and design a process ”. The EU AI Act effectively rewards organisations that can do three things on demand. They can show what AI is in use, where it is used, and why. They can show who approved it, what data it uses, and what safeguards exist. They can show how they monitor it, how they handle incidents, and how they train staff. The Act’s AI literacy obligations have been in application since 2 February 2025.  That is not a “ nice to have ”. It is a forcing function that pushes companies to ensure the people using AI understand it well enough to use it responsibly. Inside most B2B companies, accountability ends up looking like this. Legal and Compliance sets rules, interprets obligations, and decides risk appetite. Security sets requirements for vendor assessments, access controls, and incident response. The DPO and privacy function owns the GDPR posture where personal data is involved, and the EDPB has been clear this remains fully relevant in AI systems.  Marketing leadership owns what the business chooses to do, and what it is willing to sign off. Marketing Ops owns how the work is actually done across platforms, workflows, data, and governance. If you want a single throat to choke, organisations are already trying to dump this on “ the AI person ” or “ the data person ”. That fails because the risk lives in operations. It lives in who can actually change how tools are configured and used. That is why the EU AI Act will expose Marketing Ops. It makes operational accountability visible. The uncomfortable part: Your vendor contracts will not save you... Vendors can promise compliance. They can offer documentation. They can add toggles and disclaimers. They can be very convincing in sales calls and contracts. But the moment you deploy the system in your environment, with your data, for your purpose, you become responsible for how it is used. The Commission’s guidance on deployer obligations in high-risk contexts is blunt about deployers needing to use systems according to instructions, monitor operation, act on identified risks, and assign human oversight. The spirit of that is useful even outside high-risk: You cannot outsource oversight. This is where Marketing Ops should stop accepting “ the vendor said it’s compliant ” as a meaningful internal control. A practical accountability model for Marketing Ops You do not need to turn your Marketing Ops team into a compliance department, but you do need a system that creates answers quickly when someone asks, “ What AI are we using, and what happens if it fails? ” Here is what that looks like in practice, without turning this into a checklist article. Start with an AI inventory that is brutally honest. Not a slide. A living list of tools and features, where they are used, what data they touch, and whether they interact with customers. If you cannot map it, you cannot govern it. Then define use-case ownership. Not tool ownership. Use cases. “ Website chatbot ”. “ Email content generation ”. “ Lead enrichment ”. “ Audience segmentation ”. “ Recruitment ad targeting ”. Every use case needs a named business owner and a named operational owner. 
The operational owner is often Marketing Ops. Then decide what “ human oversight ” means for each use case. The Commission’s language on assigning human oversight inside the organisation should not be treated as a high-risk-only curiosity. If a system can publish, route, prioritise, or decide, someone needs to be accountable for review points, guardrails, and escalation. Then put monitoring where it belongs: On outcomes, not activity. Monitor for things like hallucinated claims in customer-facing responses, unexpected shifts in routing, sudden performance drift after vendor updates, spikes in complaint patterns, and outputs that create deception risk. Then add an incident pathway that does not rely on panic. If AI produces a harmful or misleading output, who gets notified, who can shut it down, who contacts the vendor, who handles customer comms, and who documents what happened? Finally, train people like adults. The AI literacy obligations are already in application. Training should be specific to the tools and use cases your team actually uses, and it should include what not to do, what must be reviewed, and what needs disclosure. If your training is a generic “AI 101” webinar, you have technically done a thing. You have not reduced risk. The privacy and compliance overlap you cannot ignore! Marketing teams often treat GDPR as “ the cookie banner problem ”. That mindset is going to get expensive. The EDPB’s statement is clear that data protection law remains fully applicable to personal data processing across the AI lifecycle and should be interpreted as complementary with the AI Act.  On top of that, regulators are actively thinking about the interplay. The EDPB and EDPS have noted work on joint guidelines about the interplay between GDPR and the AI Act.  For Marketing Ops, that means your AI governance cannot be divorced from your data governance. If you cannot explain what data goes in, why it is lawful, how it is minimised, how it is secured, and how it is deleted, you are not “ doing AI ”. You are doing risk. One more complication: The rules are still being operationalised It’s tempting to read a regulation like it’s a final instruction manual. In practice, there will be standards, guidelines, and codes of practice that affect how organisations implement parts of the Act. For example, the Commission notes work on guidance for transparency obligations and a code of practice to support marking and labelling of AI-generated content.  The Commission has also proposed adjustments to the timeline for applying high-risk rules linked to the availability of support measures like standards and guidelines, and that proposal is in the legislative process.  So yes, some details will evolve. That is not a reason to wait. It is a reason to build an operating model that can adapt without chaos. The blunt reality: Marketing Ops is accountable for readiness When AI breaks things, the provider may be accountable for parts of compliance, depending on their role. The deployer is accountable for how it is used in their organisation. Regulators and stakeholders will not accept “ the tool did it ” as a defence, especially where transparency, oversight, and monitoring were expected.  Inside the company, Marketing Ops is rarely the legal owner of the risk, but it is often the operational owner of whether the business can prove it is acting responsibly. That is the exposure. Not because Marketing Ops is to blame, but because Marketing Ops is where reality lives. 
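Pulling that operating model together, here is a minimal sketch of a per-use-case accountability record: the kind of structure that lets you answer ownership, oversight, monitoring, and incident questions on demand. The field names and example values are illustrative assumptions, and none of this is legal advice.

```python
# A hypothetical record for one AI use case, capturing the answers a regulator, customer,
# or board would expect on demand. Field names and example values are illustrative assumptions.
use_case_record = {
    "use_case": "Website chatbot",
    "role_under_ai_act": "deployer",          # we use the feature; the vendor provides it
    "business_owner": "Head of Digital Marketing",
    "operational_owner": "Marketing Ops",
    "data_touched": ["first-party behavioural data", "contact details"],
    "human_oversight": {
        "review_point": "weekly transcript review plus escalation of flagged answers",
        "can_disable": "Marketing Ops on-call",
    },
    "monitoring": [
        "hallucinated product or pricing claims",
        "sudden behaviour change after vendor model updates",
        "spikes in complaints or escalations",
    ],
    "incident_pathway": {
        "notify": ["Marketing Ops", "Legal/Compliance", "DPO if personal data is involved"],
        "shut_off": "disable the chatbot feature flag",
        "document": "incident log entry with inputs, outputs, and actions taken",
    },
    "training_completed": True,               # AI literacy obligations have applied since 2 February 2025
}

def can_answer_on_demand(record: dict) -> bool:
    """Crude readiness check: every governance field must be filled in (illustrative, not legal advice)."""
    return all(record.get(key) for key in (
        "role_under_ai_act", "business_owner", "operational_owner",
        "human_oversight", "monitoring", "incident_pathway",
    ))

print(can_answer_on_demand(use_case_record))
```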
If you want a simple line to use internally, use this: Legal interprets the rules, Security protects the environment, Privacy governs personal data, and Marketing Ops makes the controls real across the stack. And the fastest way to find out whether your Marketing Ops is ready is to ask one question: "If we had to explain our AI usage to a regulator, a customer, and our board tomorrow, could we do it without improvising?" If the answer is no, the EU AI Act didn’t create the problem. It just stopped letting you hide it. Discover our AI Services


Sojourn Solutions is a growth-minded marketing operations consultancy that helps ambitious marketing organizations solve problems while delivering real business results.



© 2026 Sojourn Solutions, LLC. | Privacy Policy
