- AgentOps is the next Ops layer and nobody's staffed for it...
Ask a MOPs team how many automated programs are running in their marketing automation platform right now and you'll get a rough answer. Maybe not a confident one, but something in the right postcode. They'll know the major nurtures, the scoring models, the lifecycle triggers. It's their system. They built it.

Now ask how many AI agents are running. Across the CRM, the MAP, the service desk, the data enrichment layer. How many are live. What data they access. What actions they can take. Who activated them. When they were last reviewed. More often than not, you will get silence. Very occasionally, you'll even get "…what agents?"

AI agents are multiplying inside the platforms MOPs teams already operate, whether you know it or not, and nobody has built the operational layer to manage them. Not IT. Not Marketing Ops. Not the RevOps team that's still arguing about lifecycle stage definitions. The result can be a growing fleet of autonomous processes running inside your revenue systems with no monitoring, no audit trail, and no clear owner.

We've been here before with marketing automation - build it, launch it, orphan it. Except agents don't just execute static rules. They reason. They adapt. And they can go quietly wrong in ways that won't show up until someone asks why pipeline looks off. This is the AgentOps problem. And most organisations don't even know they have it yet.

Agents aren't automations. They need a different kind of oversight.

Traditional marketing automation runs a script. If the data says X, do Y. It's deterministic. Predictable. Boring in the best possible way. When a smart campaign breaks in Marketo, you can trace the logic, find the error, fix it. The system did what you told it to do. AI agents are different. Agentforce uses an LLM-powered reasoning layer to interpret context, plan actions, and execute across systems.
HubSpot's Breeze agents - now running on GPT-5 for some marketplace agents - make judgement calls about how to qualify a lead, what to say to a customer, when to escalate. They don't follow a flowchart. They interpret.

That distinction matters enormously for operations, because it means the failure mode is different. A broken automation sends the wrong email. You catch it in QA or someone complains. An agent that's reasoning poorly routes high-value prospects to the wrong sales team, or gives a customer an answer that's confidently wrong, or quietly updates CRM fields based on stale data - and it does all of this while looking like it's working perfectly.

One Salesforce implementation partner published a detailed account of exactly this pattern earlier this year. A client deployed an Agentforce lead qualification agent that was routing high-value prospects to the wrong sales team. The cause? A territory assignment field that hadn't been updated after a recent re-org. The agent didn't flag the stale data. It didn't hesitate. It treated six-month-old field values as ground truth and processed 340 leads through incorrect routing before anyone noticed. Human reps would have caught it within the first few calls. The agent just kept going.

That's the operational gap. The technology worked. The reasoning worked. The data was wrong, and nobody was watching.

AI governance in Marketing Ops now means agent governance

The governance conversation has been happening for a while. Policies about data usage, consent, content review. Most of it has centred on generative AI - who's allowed to use ChatGPT, what can be fed into a model, who reviews AI-generated copy before it ships. That conversation was necessary. It was also about the last generation of AI use cases.

Agents are a different governance surface entirely. They don't just generate content. They take actions. They modify records. They make routing decisions. They interact with customers.
The governance questions aren't "is this content on brand?" - they're "did this agent just change a lead score based on data that's three months stale, and did anyone notice?"

Agent governance requires a different set of capabilities. You need monitoring... not just logging what happened, but flagging when agent behaviour deviates from expected patterns. You need periodic review cycles, where someone checks that the agent's reasoning still aligns with current business rules, pricing, territories, product availability. You need escalation paths, so when an agent encounters something outside its boundaries, the right human gets involved instead of the agent improvising.

And you need ownership. Clear, named, accountable ownership. Not "the team," not "IT handles the platform," not "we'll figure it out." A person who knows which agents are running, what they're doing, what data they depend on, and when they were last reviewed.

That's AgentOps. It's not a product. It's not a platform. It's an operational discipline, and it doesn't exist yet in most organisations.

Take part in the 2026 AI Benchmark Report

Hallucination rates are a design reality, not a scare statistic

Here's a number that should shape how you think about agent operations: hallucination rates for AI agents inside CRMs range from 3% to 27%, depending on configuration, grounding data, and prompt design. That's from published implementation data across dozens of enterprise deployments. At the low end - proper Knowledge article coverage, well-structured prompts, tight topic guardrails - agents get it right 95-97% of the time. That's genuinely useful. At the high end - minimal grounding data, broad topic definitions, no monitoring - you get an agent that fabricates pricing, invents product features, or confidently cites policies that don't exist.

The point isn't that agents are unreliable. It's that they're probabilistic. They will occasionally get things wrong. That's not a bug.
It's the nature of the technology. The operational question is whether your organisation has the capacity to detect when that happens, assess the damage, and correct course. Right now, for most teams, the answer is no.

Some platforms are starting to ship transparency features - audit trails showing which CRM properties an agent modified and what actions it took. That's a step in the right direction. But a feature isn't a practice. An audit trail is useless if nobody's reading it. That's the operational equivalent of installing a smoke detector and never checking the batteries.

What AgentOps actually looks like

This doesn't require a new team or a new budget line. It requires treating agents as operational assets - not features you activate and forget.

That means maintaining an inventory. How many agents are running in your systems right now? What data do they access? What actions can they take? Who activated them? If you can't answer those questions today, you have an agent sprawl problem and you don't know how big it is.

It means defining review cadences. Not annual audits - practical, lightweight checks. Monthly: is the agent behaving as expected? Are the data fields it depends on still reliable? Quarterly: do the business rules baked into agent behaviour still match reality? Have territories shifted? Has pricing changed?

It means setting performance baselines. What does "working" look like for each agent? If you can't define success, you can't detect failure. And the agent won't tell you it's failing. It'll just keep going with impressive confidence.

And it means building escalation clarity. When an agent does something unexpected, who gets told? How fast? Salesforce learned this the hard way on its own Help portal - 26% abandonment rate before anyone intervened. Most orgs don't have Salesforce's engineering resources to react that quickly.

The agents are already live. The ops layer isn't.

Every ops discipline starts the same way. Something breaks.
Leadership asks who was supposed to be watching. Nobody has a good answer. A process gets created under pressure, after the fact, while someone patches the damage. Marketing automation governance happened that way. Marketing automation data quality programmes happened that way. GDPR compliance happened that way for a depressingly large number of organisations.

You can build AgentOps the same way - reactively, after an agent has been quietly misrouting leads for six weeks or breaching compliance boundaries for 48 hours because someone edited a topic description. Or you can look at the agents already running in your systems, admit that nobody's managing them, and start.

The agents are already live. The ops layer isn't. That gap has an expiry date. It's just a question of whether you close it on your terms or someone else's.

Discover our AI in Marketing Operations Services
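The inventory the article calls for - which agents are live, what data they read, what actions they can take, who owns them, and when they were last reviewed - can start as a very small, boring artefact. Here is a minimal sketch, not tied to any particular platform; the field names and the 90-day review window are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative cadence: flag any agent not reviewed within 90 days.
REVIEW_WINDOW = timedelta(days=90)

@dataclass
class AgentRecord:
    name: str             # e.g. a lead qualification agent
    platform: str         # CRM, MAP, service desk, ...
    data_sources: list    # fields/objects the agent reads
    actions: list         # what it is allowed to change
    owner: str            # a named human, not "the team"
    last_reviewed: date

    def overdue(self, today: date) -> bool:
        """True if the agent has gone unreviewed past the window."""
        return today - self.last_reviewed > REVIEW_WINDOW

# Hypothetical inventory entry for illustration.
inventory = [
    AgentRecord("Lead qualification", "CRM",
                ["territory_assignment", "lead_score"],
                ["route_lead"], "j.smith", date(2025, 1, 10)),
]

overdue = [a.name for a in inventory if a.overdue(date(2025, 6, 1))]
```

Even a list this crude answers the questions most teams currently cannot: what is running, what it touches, and whose job it is to check.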
- AI Beyond Productivity: Where are the real business gains?
Productivity was always the starting point

For the last year or two, most AI conversations in business have sounded oddly familiar. How can we write faster? Summarise faster. Analyse faster. Build presentations faster. Reply to emails faster. Produce more content with fewer people and less effort.

Fair enough. That is where most organisations start. It is the easiest sell. Efficiency is measurable, non-threatening, and easy to explain in a board meeting. Nobody gets fired for saying they want a team to spend less time on repetitive work. But productivity is only the opening act. Doing the same work faster is useful. It is just not all that transformative. If AI simply helps a busy team clear its backlog at greater speed, that is an improvement. It is not reinvention. It is admin with a nicer user interface. Helpful, yes. Revolutionary, not quite.

The more interesting shift is what happens when AI starts changing how work gets done in the first place. That is where the real gains start to show up. Not just shaved minutes. Not just reduced agency hours. Not just “we saved the team two days a month.” Those are nice wins, but they are rarely the ones that change a business.

The bigger opportunity is when AI changes operating models across marketing, sales, customer success, and revenue operations. When it improves decisions, closes gaps between teams, reduces commercial friction, and helps organisations act with more consistency and confidence. That is where the conversation gets more serious.

Because in most businesses, the real drag on growth is not that people type too slowly. It is that teams are misaligned. Data is messy. Processes are inconsistent. Handoffs are clunky. Campaigns take too long to launch. Reporting arrives too late to change anything. Sales does not trust marketing’s signals. Marketing does not trust sales follow-up. Customer success is left out of the loop.
Everyone is busy, yet somehow the business still struggles to move faster in the places that matter. AI does not magically fix that. In fact, without structure, it can make the mess worse. But when it is applied properly, it can do something much more valuable than improve task efficiency. It can help organisations operate better. That is the real prize.

The real gains start with better decisions

The first leap beyond productivity is better decision-making. Many businesses are drowning in information while starving for clarity. Dashboards everywhere. Reports on reports. Endless exports from CRM, MAP, BI platforms, intent tools, web analytics, and customer systems. Everyone has data. Very few have a version of it that is timely, connected, and useful enough to support action.

This is where AI can start earning its keep in a more meaningful way. Not by generating another summary nobody asked for, but by helping teams spot patterns, risks, and opportunities that would otherwise stay buried. Which segments are actually converting, not just engaging? Which campaign themes are influencing pipeline quality, not just volume? Which accounts are showing the kind of buying behaviour that deserves action now, rather than another nurture stream they will ignore with impressive consistency?

That shift matters because the value is no longer about faster reporting. It is about better commercial judgement. A marketing team that can see which messages are moving buyers through complex journeys is in a stronger position than one that simply produces more assets. A sales leader who can prioritise outreach based on stronger signals is in a better position than one relying on a glorified hunch. A revenue team that can identify where conversion is breaking down can fix real problems before quarter-end panic sets in.

This is where AI starts moving from labour-saving assistant to decision-support layer. That is a more serious role. It is also where the gains start to compound.
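The "converting, not just engaging" distinction is easy to make concrete. A toy sketch, with invented segment names and numbers purely for illustration, showing how ranking by conversion rate can invert a ranking by raw engagement:

```python
# Hypothetical per-segment counts: (engaged contacts, conversions).
segments = {
    "enterprise": (120, 18),
    "mid-market": (400, 16),
    "smb": (900, 9),
}

def conversion_rate(stats):
    """Fraction of engaged contacts that actually converted."""
    engaged, converted = stats
    return converted / engaged if engaged else 0.0

# Raw engagement ranks smb first; conversion rate tells the opposite story.
by_engagement = sorted(segments, key=lambda s: segments[s][0], reverse=True)
by_conversion = sorted(segments, key=lambda s: conversion_rate(segments[s]),
                       reverse=True)
```

With these numbers, the busiest segment (smb, 900 engaged) converts at 1%, while the quietest (enterprise, 120 engaged) converts at 15% - exactly the kind of buried pattern the paragraph above is describing.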
AI gets more interesting when it improves orchestration

The second leap is orchestration. Most revenue functions are still held together by a patchwork of systems, handoffs, habits, and crossed fingers. Marketing runs campaigns. Sales follows up, or does not. Ops tries to stitch the process together. Customer success gets involved later, sometimes with context, sometimes without. Everyone talks about journey orchestration, but the lived reality is usually closer to organised chaos.

AI can help reduce that chaos, not by replacing teams, but by improving coordination between them. Think about how much commercial value is lost in the gaps. Leads routed too late. Follow-ups triggered with the wrong context. Accounts sitting untouched because one system says they are warm and another says they are dead. Customer signals ignored because they live in a platform nobody checks. Campaigns launched without real feedback from the field. Handoffs based on static rules that made sense eighteen months ago and now quietly sabotage performance every day.

This is where AI becomes more than a content machine. It can help interpret signals across systems, recommend next-best actions, surface anomalies, and support more responsive plays across teams. Not in a science-fiction “the robot runs the revenue engine” kind of way. More in a very practical “the business is no longer relying on three spreadsheets and Claire from ops to hold everything together” kind of way. That may sound less glamorous, but it is far more valuable.

When marketing and revenue teams operate with better timing, better context, and better coordination, the business feels different. Work flows more cleanly. Friction drops. Decisions get made earlier. Opportunities get acted on faster. That is not just efficiency. That is improved commercial execution.

Consistency is not sexy, but it is where scale lives

The third leap is consistency at scale.
One of the least glamorous truths in business is that performance often suffers because execution is wildly inconsistent. Not because the strategy was terrible. Not because the technology stack is broken. Just because different teams, regions, markets, or managers are all doing things slightly differently, with varying levels of quality and discipline.

AI can help standardise that. Not in a rigid, joyless, corporate-policy-manual way. In a way that makes good practice easier to repeat. It can support consistent QA, flag compliance issues, improve data hygiene, reinforce process standards, and reduce the kind of avoidable variation that causes downstream pain.

In marketing operations especially, this matters more than many leaders realise. A campaign build process that is followed properly every time is not exciting. A lead management framework applied consistently across markets is not sexy. Metadata standards and naming conventions do not exactly set LinkedIn on fire. But these are the things that determine whether a business can scale without tripping over its own shoelaces. AI can strengthen those foundations if it is deployed with intent. It can act as a layer of support around governance, quality control, and operational discipline. That is important because scale usually breaks where standards are weakest.

And this is where a lot of the current AI hype becomes mildly ridiculous. Too many organisations are still obsessing over how quickly AI can produce outputs, while ignoring whether those outputs sit inside a functioning operating model. Faster content in a broken system is not transformation. It is just more noise, delivered promptly.

The businesses that will get real gains are not the ones generating the highest volume of AI-assisted activity. They will be the ones using AI to reduce variability, improve judgement, tighten execution, and create more reliable pathways from activity to revenue. That is a much less flashy story.
It is also the one that actually affects business performance.

The bigger shift is role redesign, not task acceleration

The fourth leap is redesigning roles, not just accelerating tasks. This is where the conversation gets uncomfortable. A lot of leaders still talk about AI as a helper. Something that sits beside existing roles and makes them more productive. That framing is understandable, especially when companies are trying not to terrify their own workforce. But it is also limiting. Because the bigger question is not “how can AI help this person do their existing job faster?” It is “what should this job now include, exclude, or become?”

That is a harder discussion because it forces teams to examine work that has existed for years and ask whether it still deserves to. It means challenging legacy processes, duplicated effort, manual review chains, bloated reporting habits, and all the odd little tasks that nobody likes but everyone keeps doing because “that is just how it works here.” AI gives organisations a reason to revisit those assumptions.

In marketing, that may mean fewer hours spent producing first-draft material and more time spent on strategic planning, audience insight, experimentation, and commercial alignment. In operations, it may mean less manual policing and more proactive system design, governance, and optimisation. In revenue teams, it may mean moving people closer to decisions and away from repetitive admin that should have been automated years ago.

That is where the gains become structural. Not because jobs vanish overnight, despite the breathless nonsense often pushed online, but because the mix of work changes. Teams that keep using AI as a glorified speed tool will get modest gains. Teams that redesign roles around better judgement, stronger systems thinking, and more intelligent coordination will get far more. And yes, this requires management courage. Which is inconvenient, because courage is in shorter supply than AI tools.
Better internal operations create better customer experience

The fifth leap is better customer experience, even if people do not label it that way. A lot of internal AI use cases are sold around productivity because it is easier to win budget with an internal efficiency story. But customers feel the impact when internal operations improve. They notice when handoffs are cleaner, messaging is more relevant, follow-up is better timed, and service teams have actual context instead of a blank screen and a forced smile. AI can help businesses become easier to buy from and easier to work with. That matters.

In B2B especially, customer experience is often damaged by internal fragmentation. The buyer sees one company. Behind the scenes there are six teams, nine systems, conflicting definitions, and at least one dashboard that everyone pretends to understand. When AI helps join those dots, the customer gets a smoother experience, even if they never see the plumbing. That is a real gain. Not a vanity metric. Not an internal time-saving story dressed up as innovation. A proper improvement in how the business shows up to the market.

Tools alone will not create business value

Of course, none of this happens just because a company bought licences and told people to “have a play.” That is where many AI programmes drift into parody. Real gains do not come from random experimentation with no structure behind it. They do not come from telling every employee to use a chatbot and hoping transformation will emerge from the chaos like some sort of digital swamp creature.

They come from identifying meaningful business problems, improving the operating environment around them, and applying AI where it can genuinely change the way teams work. That means process first, then tooling. It means governance before scale. It means data quality before grand promises. It means deciding where human judgement matters most, and where it is currently being wasted on tasks that do not deserve it.
Most importantly, it means being honest about what kind of business gain you are actually chasing. If the goal is simple productivity, say that. There is nothing wrong with efficiency. Most organisations still have plenty of low-value work that can and should be reduced. But do not confuse that with transformation. Saving time is good. Changing performance is better.

The businesses that win will be the ones that operate differently

The next phase of AI value will not be defined by who can create the most content, automate the most tasks, or boast the loudest about “copilot” adoption. It will be defined by who can build a better operating model around it. Who can connect functions more intelligently. Who can improve decision quality. Who can standardise execution without suffocating teams. Who can reduce friction across the revenue engine. Who can turn AI from a productivity trick into a business capability.

That is the real shift now underway. Most businesses are still on the first rung, using AI to do the same things a bit faster. That is understandable. It is where the market started, and for many teams it is still where the easiest wins live. But the bigger gains sit further ahead. They show up when AI starts helping businesses work differently, not just faster. And that is where the conversation gets worth having.

Discover our AI Services
- Thinking of moving from 6sense to Demandbase? Here’s why more B2B teams are making the switch
There comes a point with some platforms where the issue is no longer capability. It is tolerance. Yes, the dashboards look clever. Yes, the intent signals sound impressive. Yes, everybody nodded politely during the demo. But once the thing is live, the questions start. Why does this account matter? Why is that one surging? Why does everything useful seem to sit behind another commercial conversation? And why, despite all this supposed intelligence, does the platform still feel like hard work?

That is the point where teams stop asking whether they bought something powerful and start asking whether they bought something practical. For B2B organisations weighing up a move from 6sense to Demandbase, that is the real story. This is not about swapping one ABM badge for another because a partner deck said so. It is about choosing a platform that is easier to trust, easier to use, and easier to turn into actual pipeline. Demandbase is pitching exactly that, positioning its migration guide around faster time-to-value, transparent AI, and buying-group intelligence that helps teams drive revenue rather than just admire charts about it.

The biggest problem with black boxes is that eventually people stop believing them

A lot of ABM and intent platforms suffer from the same issue. They promise precision, but deliver opacity. That is fine for about five minutes. After that, marketing wants to know what is really driving prioritisation. Sales wants to know why one account is apparently red hot while another with actual human conversations is being ignored. Leadership wants to know whether the investment is producing something tangible or simply generating prettier versions of uncertainty.

Demandbase’s guide goes straight at this by calling out frustration with “black-box intent models” and contrasting that with its own pitch around transparent AI. That matters because transparency is not some whimsical product virtue. It is operationally useful.
If teams can understand what the platform is doing, they can explain it internally, challenge it when needed, and build better workflows around it. If they cannot, adoption drops and the tool becomes one more expensive thing that only a small handful of people pretend to fully understand. And let’s be honest, nobody wants their pipeline strategy resting on “the model says so.”

Faster time-to-value beats feature theatre every time

There is a weird habit in B2B software buying where complexity gets mistaken for sophistication. The more complicated the platform sounds, the more “enterprise” it must be. Usually, that is nonsense. Teams do not win because they bought the most elaborate system. They win because they bought something that gets useful, quickly.

Demandbase makes that point hard in its move-over guide, saying teams are choosing it for faster time-to-value and laying out what to expect when switching, how long it actually takes to get live, and how to avoid delays, adoption issues, and wasted spend. That is not a side point. It is the point. Marketing ops and revenue ops teams are not judged on how advanced their tooling sounds in a procurement meeting. They are judged on whether campaigns run, sales trusts the signals, pipeline improves, and nobody has to sit through six months of “transformation” before seeing any value. A platform that gets there faster is not merely more convenient. It is more commercially sane.

Discover our Podcast

Buying groups reflect reality. Single-lead obsession does not.

One of the stronger reasons to look at Demandbase is its emphasis on buying-group intelligence. Again, that is not just product wording. It is a reflection of how B2B buying actually works. Enterprise purchases are rarely driven by one heroic individual who reads an ebook and then wanders directly into closed-won status.
They involve multiple stakeholders, competing priorities, internal politics, silent research, and at least one person who turns up late and somehow still gets veto power.

Demandbase explicitly positions buying-group intelligence as part of the reason teams are making the switch. That gives teams a better way to prioritise accounts. Instead of obsessing over isolated activity from one contact, they can see whether momentum is building across the people who actually influence a deal. That leads to better orchestration, more sensible sales prioritisation, and less time wasted pretending one engaged individual equals account readiness. In other words, it gets a little closer to the messy truth of B2B revenue.

Pricing and support are not boring details. They are where goodwill goes to die.

Vendors love innovation language. Buyers, meanwhile, are over here wondering how many extra invoices stand between them and the features they thought they were already paying for. Demandbase’s guide is pretty blunt on this front. It calls out “endless upcharges” and support that has gone from helpful to nonexistent as part of the frustration driving some teams away from 6sense. That is a pointed comparison, but it lands because every ops leader knows the feeling. Nothing sours platform confidence faster than realising the commercial model is built around drip-feeding value back to you one awkward upsell at a time.

Support matters too, especially during transition periods. If your platform becomes harder to optimise, harder to troubleshoot, and harder to expand without vendor intervention, then every internal stakeholder feels it. Campaigns slow down. Confidence drops. Adoption gets patchy. What looked strategic in the sales cycle starts looking suspiciously like admin with branding. Demandbase is clearly making the case that it offers a smoother partnership model.
Whether that is the deciding factor depends on the buyer, but it is often the thing that moves a team from interested to serious.

Ease of use is not a compromise. It is the whole game.

There is no prize for owning an ABM platform that only three people can operate without emotional damage. Demandbase includes customer proof points to reinforce this. PageUp said it compared Demandbase and 6sense across transparency, configurability, and partnership, and found Demandbase the better fit. Case IQ, meanwhile, chose Demandbase over 6sense because of its intuitive design, competitive pricing, existing familiarity within the team, and the support available.

That should not be underestimated. An intuitive platform is easier to adopt across teams. It is easier to train on. Easier to operationalise. Easier to build repeatable processes around. It reduces the gap between insight and action, which is kind of the whole point of having the thing in the first place. A platform does not become more valuable because it is difficult. It becomes more valuable when teams actually use it properly. A shocking concept, I know.

Discover our ABM Services

Migration is usually less scary than staying stuck

The biggest thing that stops teams switching is not loyalty. It is fear of disruption. Fair enough. A platform migration can sound like a MarTech root canal. There are integrations to think about, reporting to preserve, sales teams to reassure, workflows to rebuild, governance to tighten, and at least one legacy process no one fully understands but everyone is terrified to touch.

Demandbase leans directly into that concern, framing the guide around how to switch “without missing a beat” and how to move forward without losing momentum, trust, or pipeline. That is smart because most buyers are not asking whether change is possible. They are asking whether change is survivable. The reality is that a good migration is not a leap of faith. It is a structured operational project.
Audit what matters, map dependencies, define success clearly, sort the integrations properly, clean up the mess you were going to have to deal with eventually anyway, and move with intent instead of panic. Done well, a switch does not create chaos. It removes it. And frankly, sometimes the bigger risk is staying with a platform your teams no longer trust just because the pain has become familiar.

The real win is not the move itself. It is what the move forces you to fix.

This is the bit that matters most. Switching from 6sense to Demandbase is not just a technology decision. It is a chance to reset how your go-to-market teams work. To tighten account selection. To rethink prioritisation. To align marketing and sales around signals they actually believe. To stop paying for platform complexity that sounds impressive but struggles to produce value in the real world.

That is where the biggest payoff usually sits. The platform matters, obviously. But the migration process also forces better questions. What do we actually need from intent? Which insights do we trust? What counts as meaningful engagement? Where are we overcomplicating things? Which workflows are driving pipeline, and which ones are just keeping dashboards busy? A move to Demandbase can absolutely improve the tech stack. The smarter outcome is that it also improves the operating model.

Final thought

No ABM platform is magic. None of them can rescue poor process, vague ownership, or marketing teams that are still mistaking activity for progress. But platforms can make good teams better, or they can trap them inside expensive ambiguity. That is why the case for moving from 6sense to Demandbase is getting attention. Demandbase is making a straightforward pitch: less black-box nonsense, faster time-to-value, stronger support, more intuitive usability, and buying-group intelligence that better reflects how B2B buying really happens.
That is the core promise behind its move-over guide, and it is a promise that will resonate with any team that is tired of paying premium rates for unnecessary friction. If your current platform feels more like something you manage than something that helps you win, that is usually your answer.

Read the Demandbase guide here:
- MQLs are the hangover: Why marketing should stop celebrating leads and start building pipeline
For years, marketing teams have had a favourite party trick. Take a person. Watch them click on a few things. Maybe they download a guide, attend a webinar, glance at a pricing page, or fill in a form because they were cornered by a decent headline and a mild identity crisis. Add some points. Push them over a threshold. Then declare, with a straight face, that they are now “qualified”. Cue the applause. Cue the dashboard. Cue the monthly report proudly announcing a rise in MQLs as if the revenue team should be popping champagne. And then, as usual, the hangover arrives. Because many of those leads do not become pipeline. Many do not become conversations worth having. Many were never serious buying signals in the first place. They were just activity. Nicely packaged activity, perhaps. But still activity. That is the problem. Marketing has spent years rewarding itself for creating moments that look like progress instead of conditions that actually lead to revenue. The result is a lot of businesses still measuring demand with a model that feels tidy, looks familiar, and increasingly tells them absolutely nothing useful. If lead scoring is cosplay, then the MQL is the morning after. It is the consequence of believing the costume was real. The MQL made sense once. That time has passed. To be fair, the MQL was not invented by idiots. It came from a reasonable desire to create order. Sales teams needed a way to separate random names from people showing signs of interest. Marketing teams needed a way to prove they were doing more than sending emails and fiddling with landing pages. Leadership wanted a metric that looked like a bridge between activity and pipeline. So the MQL was born. A neat little handoff point. A moment where marketing could say, “Here you go, this one looks promising,” and sales could at least pretend to believe them. The problem is that modern buying no longer behaves in a way that makes this model particularly trustworthy. 
Buying is rarely driven by a single person. It is messy, delayed, political, often irrational, and usually spread across multiple stakeholders who do not all leave the same digital breadcrumbs. The person who fills in the form is not always the one with budget. The person researching solutions is not always the one making the decision. The loudest signal in your system is often not the most commercially meaningful one. So the contact that becomes an MQL may be the least important person in the room. Or worse, there may not even be a room yet. That is where the model starts to crack. Because while buying happens at account level, many marketing teams are still measuring success at contact level and acting surprised when the story does not hold together. A lot of MQLs are just reporting events dressed up as buying signals This is the real issue, and it is worth saying plainly. An MQL often tells you that a person did something trackable. It does not reliably tell you that an account is becoming buyable. Those are two very different things. A person downloading an asset is a trackable event. A person attending a webinar is a trackable event. A person clicking around your site three times in a week is a trackable event. Useful, maybe. Interesting, perhaps. But still not the same as a buying condition emerging within an account. And yet businesses continue to build dashboards, goals, routing logic, and team incentives around exactly those kinds of moments. This is where the wheels start to come off. Marketing celebrates lead volume. Sales sees weak conversion. SDRs work lists they do not trust. Revenue leaders start asking why “qualified” leads are not turning into genuine opportunities. Marketing responds by refining the scoring model, tweaking the thresholds, and adding even more detail to the reporting. Which is a bit like trying to fix a bad haircut by measuring it more precisely. The problem is not always that the system lacks sophistication. 
Quite often, the problem is that the system is classifying the wrong thing. A reporting event helps explain activity. A buying signal helps you decide where commercial effort should go. Too many businesses confuse the two. Easy to count has become more important than useful to know This is one of the less glamorous reasons so many demand models quietly fail. It is far easier to count an individual conversion than it is to interpret account-level momentum. It is far easier to report a lead threshold than it is to understand whether a buying group is forming. It is far easier to tell the board that MQL volume is up 23 percent than it is to say, “We are seeing stronger commercial movement in accounts that match our best-fit profile and show genuine timing pressure.” One sounds neat. The other sounds like actual work. So guess which one most businesses default to. Marketing has been rewarded for what is visible, not necessarily for what is meaningful. That would be tolerable if the visible thing still behaved like a useful proxy for pipeline. In many cases, it no longer does. A single person from a target account engaging with content may mean nothing. Three stakeholders from the same account arriving within a short period, each looking at different pieces of decision-stage content, probably means a lot more. An implementation-related conversation means more than a webinar registration. A pricing discussion means more than a content download. A security review means more than someone clicking a nurture email while avoiding a meeting. The point is not that engagement is irrelevant. The point is that engagement without context is flimsy. And a flimsy signal should not be carrying the weight of your demand strategy. The MQL has become a permission slip for optimism That sounds harsh, but it is often true. In many organisations, the MQL is not a robust qualification model. It is simply the point at which marketing is allowed to feel good about itself. 
The lead crossed the line. The number moved. The target was hit. Everyone can now behave as though progress has occurred. This is comforting. It is also dangerous. Because once the metric becomes emotionally important, it stops being challenged properly. Teams begin defending the existence of the MQL rather than asking whether it still reflects how buying works. Sales gets blamed for weak follow-up. Campaign teams get asked for more volume. SDR teams get told to work harder. Nobody wants to say the obvious thing, which is that a lot of this so-called qualification may have very little to do with commercial readiness at all. And that is how businesses end up running entire revenue motions around glorified hand-raisers. Marketing does not need more lead theatre. It needs a better operating model. The answer is not to replace MQLs with chaos. Nor is it to delete every lifecycle stage and start speaking in mystical revenue riddles. What is needed is a shift in what marketing is actually trying to identify and influence. Instead of asking, “When is this lead qualified?” the better question is, “What conditions suggest this account is moving closer to a real buying decision?” That changes everything. It changes what you measure. It changes what you route. It changes what sales trusts. It changes how campaigns are judged. It also nudges marketing into a much more commercially useful role, which is long overdue. Because marketing’s job is not just to generate names. It is to create movement. To increase the likelihood that the right accounts engage, progress, and enter sales conversations with something resembling genuine intent. That is a more serious job than producing a pile of contacts and calling it pipeline. What should replace MQL obsession? Not a single new acronym, thankfully. The world does not need another one. What it does need is a model built around commercial conditions rather than arbitrary thresholds. That starts with account fit. 
Real account fit, not fantasy ICP nonsense where half the market somehow qualifies as ideal. Good fit should reflect whether the account has the right level of complexity, the right kinds of pain, the right operational reality, and the right commercial shape for your business to win and serve well. Fit should be a gate, not a decorative line in a strategy deck. Then there is buying-group emergence. One person engaging is a weak signal. Several relevant stakeholders showing up from the same account in a pattern that suggests evaluation is something else entirely. That is where things begin to get interesting. Not because it guarantees a deal, but because it starts to resemble the way decisions are actually made. Next comes timing pressure. This is one of the most underused and most commercially important pieces of the puzzle. Why now matters more than almost everything else. A replatforming plan, a looming renewal, an internal re-org, reporting chaos, a change in leadership, a compliance deadline, a broken process, a strategic mandate, these are the conditions that create movement. Someone downloading a whitepaper does not create urgency. It may simply indicate boredom between meetings. And finally, there are progression signals with actual weight behind them. Meetings involving multiple stakeholders. Implementation conversations. Commercial discussions. Timeline questions. Security reviews. Requests for technical validation. Internal language shifting from casual curiosity to practical decision-making. These are not perfect either, but they are much harder to fake. They also cost the buyer something, which is usually a very good sign. This is where marketing should be focusing its attention. Not on whether a lead scored 74 instead of 71. Not on whether a form fill should count double if it came from paid social. 
Not on endlessly polishing a framework that was built for a simpler buying environment and now survives mostly because everyone knows where it lives in the CRM. This is also why sales and marketing keep annoying each other The MQL model does not just distort measurement. It distorts trust. Marketing says it delivered qualified leads. Sales says those leads are rubbish. Marketing says sales is ignoring good demand. Sales says marketing is measuring engagement, not intent. Then both sides sit in a meeting staring at the same funnel with completely different levels of faith in it. It is a deeply inefficient way to run a revenue team. The deeper issue is that both sides are often reacting sensibly to a broken shared model. Marketing has been taught to optimise for visible conversion. Sales has been trained by experience to be sceptical of anything that looks too easy. The result is a constant tension between volume and credibility. A better model lowers that tension. If both teams are aligned around account fit, buying-group activity, timing pressure, and commercially meaningful progression, the conversation gets healthier fast. Marketing is no longer defending a pile of shiny contacts. Sales is no longer rolling its eyes every time a dashboard says pipeline is “warming up”. Both teams are looking at the same kinds of signals and asking the same practical question: is this account moving in a way that deserves serious effort? That is a much better conversation to have. You do not necessarily need to kill the MQL. But you should absolutely demote it. Some businesses will still need an MQL stage for workflow reasons. Fine. Use it as an internal signal if you must. Use it to trigger routing. Use it to mark a point in a process. Use it because your systems are held together by string and inherited logic and you cannot rip it all out in one go. But stop treating it like the headline metric for marketing contribution. 
That is where the damage happens. An MQL can still exist without being worshipped. It can be a checkpoint, not a trophy. It can serve operations without pretending to represent commercial truth. The trouble starts when businesses build their whole demand story around it. Because the story that matters is not whether marketing produced more qualified leads this quarter. The story that matters is whether marketing improved the conditions that make pipeline more likely in the accounts that actually matter. That is a much stronger claim. It is also much harder to fake. The next era of demand generation will be less flattering and more useful That is probably for the best. The old model produced very pretty dashboards. It also produced an awful lot of false confidence. Teams could point to rising lead volumes while pipeline quality quietly sagged underneath. Targets got hit. Reports got written. Revenue teams kept wondering why all this apparent demand still felt so anaemic in the real world. The businesses that move fastest now will be the ones willing to let go of neat-but-empty metrics and get more honest about what buying actually looks like. That means less worship of individual conversions. Less obsession with lead thresholds. Less applause for activity that happens to be easy to track. More attention to account movement. More weight given to urgency and buying conditions. More focus on signals that indicate real commercial effort from the buyer side. In other words, less theatre. More evidence. That may make some dashboards uglier for a while. Good. Ugly truth is still better than polished nonsense. Stop asking how to generate more MQLs That is the wrong question now. The better question is this: How do we help more of the right accounts become sales-ready in ways that look like deals we actually win? That question forces a more grown-up strategy. It pushes marketing closer to revenue. It exposes weak measurement. It sharpens targeting. 
It improves alignment with sales. And, perhaps most importantly, it stops teams mistaking form fills for progress. Because the brutal truth is that many MQLs were never a sign of momentum. They were just the easiest thing to celebrate. And marketing has celebrated enough easy things. It is time to build pipeline instead. Need help with that? Let's talk... Discover our Services
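The buying-group emergence signal described above (several relevant stakeholders from the same account engaging within a short window) is also straightforward to operationalise. Here is a minimal sketch in Python; the event shape and the thresholds (three people, fourteen days) are illustrative assumptions, not a standard, and a real implementation would pull engagement events from your MAP or CRM:

```python
from collections import defaultdict
from datetime import date, timedelta

def emerging_accounts(events, window_days=14, min_people=3):
    """Flag accounts where several distinct people engaged within a short window.

    events: iterable of (account, person, date) engagement records.
    Thresholds are arbitrary defaults for illustration.
    """
    by_account = defaultdict(list)
    for account, person, when in events:
        by_account[account].append((when, person))

    flagged = set()
    for account, recs in by_account.items():
        recs.sort()  # order by engagement date
        for i in range(len(recs)):
            start = recs[i][0]
            # distinct people engaging within window_days of this event
            people = {p for d, p in recs[i:] if d <= start + timedelta(days=window_days)}
            if len(people) >= min_people:
                flagged.add(account)
                break
    return flagged

events = [
    ("Acme", "cfo", date(2025, 3, 1)),
    ("Acme", "ops_lead", date(2025, 3, 4)),
    ("Acme", "it_dir", date(2025, 3, 9)),
    ("Globex", "intern", date(2025, 3, 2)),  # one person, no group forming
]
print(emerging_accounts(events))  # {'Acme'}
```

The point of the sketch is the unit of analysis: it scores accounts, not contacts, which is exactly the shift the article argues for.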
- HubSpot migration mistakes that quietly wreck reporting, automation and trust
There is a particular kind of confidence that appears right before a messy HubSpot migration goes live. It usually sounds something like this: “ We’ve got it all covered. ” The CRM has been mapped. The lists have been exported. The workflows are “mostly” rebuilt. Someone has checked the field names, someone else has built the lifecycle stages, and now the business is marching toward launch with the kind of calm optimism usually seen moments before the kitchen ceiling starts dripping water. Then, a few days later, the cracks begin to show. We've seen it so many times with new clients - having been asked to step in and rescue the situation... Sales starts complaining that contact records look odd. Marketing notices that reporting has gone wonky. Someone in leadership asks why leads have dropped off a cliff, even though they have not. Customer journeys no longer make sense. Attribution is suddenly telling a fairy tale. Emails are firing at the wrong people. Forms are creating duplicates. And the once-beautiful promise of a clean move into HubSpot starts to look less like transformation and more like a very expensive house move where half the boxes were labelled “misc.” This is the problem with bad migrations. They rarely fail in one dramatic, obvious moment. They fail quietly. They fail in ways that are easy to miss during testing and hard to fix once teams have adapted to the damage. They fail by slowly eroding the one thing every revenue team needs to function: Trust . That is what makes migration mistakes so dangerous. Not just the technical mess. Not just the wasted time. Not even the cost of putting it right. The real issue is that once people stop trusting the system, they stop using it properly . And when that happens, you do not just have a HubSpot problem. You have an operational credibility problem. A lot of businesses treat migration as a transfer exercise. Take what exists in your existing platform, move it into HubSpot, rebuild what matters, and carry on. 
Simple. Lovely. Box ticked. But a migration is never just a transfer. It is a redesign whether you admit it or not. The minute you move data, workflows, properties, scoring models, lifecycle logic, routing rules, forms, integrations and reports into a new environment, you are making decisions about how the business operates. Pretending otherwise is how teams end up recreating nonsense at speed. One of the most common mistakes is moving bad data as if it were valuable just because it already exists. There is something oddly sentimental about legacy CRM data. Businesses cling to it like an old cable drawer. “ We might need that. ” “ That field used to be important. ” “ We cannot delete those contacts because they came from a 2019 webinar series. ” So over it all comes. Dead properties. Duplicate records. Inconsistent country values. Zombie lifecycle stages. Fields with names like “ Lead Source Final Final 2. ” You know the type. The problem is that HubSpot is only as useful as the data structure you give it. If you migrate chaos, you do not get a fresh start. You just get a shinier version of the same confusion. Worse, people assume a new platform equals improved quality. So bad data becomes more dangerous because it carries an undeserved sense of legitimacy. Suddenly reporting is wrong, segmentation is unreliable, automation behaves strangely, and no one can quite work out whether the issue is the setup or the business itself. Spoiler: It is usually the setup. Another classic mistake is rebuilding automation too literally. This happens when teams approach migration like a museum restoration project. Every workflow, every trigger, every odd little workaround gets reproduced exactly as it existed before. It sounds sensible on paper. In reality, it is how you preserve years of bad decisions in a new home. Old systems often contain automations that were built for one campaign, one process, one team structure or one emergency patch three years ago. 
Over time, those automations become tangled. They overlap. They contradict each other. They run because no one is brave enough to switch them off. A migration should be the moment you ask whether that logic still deserves to exist. Instead, many teams just copy it all across and congratulate themselves on completeness. Then the new HubSpot portal goes live, and the same old operational weirdness returns wearing a cleaner interface. Contacts are enrolled in conflicting workflows. Sales notifications misfire. Leads skip important stages. Internal teams start saying things like, “ HubSpot doesn’t seem to do what we need, ” when what they really mean is, “ We imported our own bad habits and gave them a fresh postcode. ” Reporting is where the pain usually becomes undeniable. Bad migrations have a special talent for wrecking reporting in ways that are both subtle and deeply annoying. Dashboards still load. Charts still move. Numbers still appear. But the story underneath has been bent out of shape. This often starts with sloppy property mapping. A field in one platform looks similar to a field in HubSpot, so it gets matched without much thought. Job title goes somewhere sensible. Company name behaves itself. But then you get into the more delicate stuff. Original source, lead status, lifecycle stage, handoff dates, qualification criteria, owner history, pipeline movement. These are not just fields. They are the logic behind how performance is measured. Map them badly and you do not simply lose information. You break meaning. That is when leadership starts asking dangerous questions. Why are MQL numbers down? Why are conversion rates inconsistent? Why does sales say the leads are rubbish when marketing claims pipeline influence is up? Why does the dashboard disagree with the CRM export? Once that happens, the room fills with theories. The campaign must be weak. Sales follow-up must be slow. The market must have changed. Sometimes those things are true. 
But after a migration, it is often the plumbing. And nothing wastes more time than a business trying to solve a strategic problem that is actually a data structure problem... Then there is attribution, which is already a minefield before migration enters the chat. Move to HubSpot badly and attribution can become complete fiction with a straight face. Contacts lose source context. Historical interactions are only partially preserved. Form submissions are disconnected from original journeys. Campaign naming conventions collapse into inconsistency. Teams start reading reports that look polished enough to trust and flawed enough to mislead. That is a nasty combination. Attribution does not need to be perfect to be useful. Everyone sensible knows that. But it does need to be consistent. That is the bit poor migrations destroy. Once consistency goes, the business starts making budget and channel decisions based on warped signals. So now the damage is not only operational. It is commercial. You are not just reporting the wrong story. You are funding it. Trust, though, is the part that really hurts. Internal trust in systems is fragile. Much more fragile than most leadership teams realise. Users do not need many bad experiences before they start working around the platform instead of through it. Sales sees a contact record with missing fields and starts keeping their own notes elsewhere. A marketer notices lists pulling in the wrong contacts and starts exporting CSVs “just to be safe.” Ops teams lose faith in workflow logic and start manually checking everything. Before long, the platform becomes a place where data goes to look official rather than a place where teams actually operate with confidence. Once that behaviour sets in, it spreads fast. Workarounds breed more workarounds. Manual fixes create more inconsistency. Different teams start defining success differently because they no longer believe the system can hold a shared version of truth. 
And that is the quiet tragedy of a botched migration. You did not buy HubSpot to create more admin, more doubt and more internal politics. Yet somehow here you are, paying handsomely for all three. Part of the problem is timing. Businesses often migrate under pressure. A contract is ending. A platform has been outgrown. A leadership team wants faster reporting. A merger has created a systems mess. There is urgency, and urgency makes people do deeply optimistic things with timelines. They compress discovery. They skip governance decisions. They assume someone else has validated the data. They leave testing until the end as though it is a nice final polish rather than the point at which reality barges into the room carrying a baseball bat. Testing, by the way, is another area where migrations quietly come apart. Teams often test whether things exist, not whether they behave properly. Yes, the form submits. Great. Yes, the workflow triggers. Lovely. Yes, the record is in HubSpot. Champagne all round. But does the form map correctly for every scenario? Does the workflow fire only when it should? Does the record preserve history in a way that supports reporting, routing and future automation? That is where grown-up migration testing lives, and it is less glamorous than launch announcements but considerably more useful. The same goes for integrations. People love to underestimate integrations. They assume the sync will more or less work because the connector exists and the logos look reassuring. But integrations are where hidden operational assumptions come to die. Ownership fields sync strangely. Product data behaves differently. Custom objects do not align. Date formats become chaos merchants. One system overwrites another with all the confidence of a junior manager on their first day. Then everyone acts surprised when sales, service and marketing are reading different versions of the same account. A good migration is not just about getting data into HubSpot. 
It is about deciding which system owns which truth, and making that decision deliberately. Without that, integration is just a well-dressed argument between platforms. And then there is the most expensive mistake of all: Treating migration as finished the day the portal goes live. That is fantasy. A migration is not finished at go-live. Go-live is the point at which the real audit begins. That is when real users do real things in the system and expose all the logic gaps that polished workshops somehow missed. Businesses that do this well plan for a bedding-in period. They monitor. They review. They adjust. They keep a close eye on reporting, workflow behaviour, routing, duplicates, source data and user confidence. Businesses that do it badly declare victory too early and let small issues harden into accepted dysfunction. This is usually where resentment creeps in. Marketing feels blamed for reporting issues. Sales loses patience with lead handling. Leadership becomes suspicious of both. The internal consultant who led the migration has either vanished, gone defensive, or started using the phrase “edge case” far too often. And the team that has to live in HubSpot every day is left cleaning up the operational confetti. The good news is that these mistakes are avoidable. Not by magic, and not by buying more software, and certainly not by hoping HubSpot will somehow impose order on a messy operating model through sheer force of branding. They are avoidable when migration is treated as an operational design project, rather than a technical transfer. That means being ruthless about what deserves to move. It means defining property purpose before field mapping starts. It means rebuilding automation based on current business logic, not inherited superstition. It means deciding what reporting needs to mean before dashboards are recreated. It means testing behaviour, not appearances. It means planning for adoption, not just deployment. 
It means accepting that a faster migration is not always a better one if it leaves the business quietly bleeding trust for the next twelve months. HubSpot can be a brilliant platform. But it is not a miracle worker. It will not rescue poor governance, fuzzy definitions, inconsistent data, muddled ownership or years of operational corner-cutting just because you paid for onboarding and changed the logo on the login screen. If anything, it exposes those issues faster, because once a modern platform is in place, the excuses start to look a bit thin. And perhaps that is the uncomfortable truth underneath all of this. A migration does not create the mess. It reveals it. The move to HubSpot simply gives businesses a very expensive chance to decide whether they want to keep pretending. The companies that get real value from migration are usually the ones willing to be a bit unsentimental. They are prepared to challenge legacy logic. They accept that some old processes deserve a dignified death. They understand that clean reporting is built, not wished into existence. And they know that trust is not restored by telling teams the platform works. It is restored when the platform actually behaves in a way that deserves belief. So if a HubSpot migration is on the horizon, the question is not whether the data can be moved. Of course it can. The question is whether the business is willing to do the harder, less flashy work of deciding what should move, how it should behave, and what truth needs to survive the trip. Because when migrations go wrong, the damage is rarely loud at first. It is quieter than that. More boring. More dangerous. A missed field here. A broken report there. A workflow that sort of works. A sales team that starts keeping side notes. A marketing team that exports one more spreadsheet. A leadership team that stops trusting the dashboard. Death by a thousand “that’s odd” moments. 
And that is how reporting gets wrecked, automation gets compromised, and trust slips out through the floorboards while everyone is still admiring the new furniture. If you are going to migrate to HubSpot, do it properly. Not perfectly. Properly. There is a difference. Perfect is theatre. Proper is structure, discipline and enough honesty to admit where the old setup was nonsense. That is not the glamorous version. It is, however, the version that works. Migrating to HubSpot and in need of some guidance? Let's talk Discover our Services
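The "test behaviour, not appearances" point above lends itself to a simple automated spot check after go-live: pull the same records from the legacy export and from HubSpot, and diff the fields that carry reporting meaning. A minimal sketch, assuming flat dictionaries keyed by email and hypothetical field names (this is not HubSpot's actual API schema):

```python
# Illustrative post-migration spot check: compare a source-CRM export against
# records pulled from the new portal and report fields that changed in transit.
# Record shape and field names are assumptions for the sketch.
CRITICAL_FIELDS = ["lifecycle_stage", "original_source", "owner"]

def migration_diffs(source_records: dict, migrated_records: dict) -> list:
    """Both inputs map a record id (e.g. email) to a dict of field values.
    Returns (record_id, field, source_value, migrated_value) tuples."""
    diffs = []
    for record_id, source in source_records.items():
        migrated = migrated_records.get(record_id)
        if migrated is None:
            diffs.append((record_id, "<missing record>", None, None))
            continue
        for field in CRITICAL_FIELDS:
            if source.get(field) != migrated.get(field):
                diffs.append((record_id, field, source.get(field), migrated.get(field)))
    return diffs

source = {"a@example.com": {"lifecycle_stage": "SQL", "original_source": "webinar", "owner": "kim"}}
migrated = {"a@example.com": {"lifecycle_stage": "Lead", "original_source": "webinar", "owner": "kim"}}
print(migration_diffs(source, migrated))  # [('a@example.com', 'lifecycle_stage', 'SQL', 'Lead')]
```

Run something like this on a sample of records per lifecycle stage before declaring victory; it catches exactly the quiet mapping damage (lifecycle stages, source fields, ownership) described above.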
- Big News: TLS Certificate validity moving to 199 Days
Online security standards have changed: as of February 24, 2026, Certificate Authorities (CAs) will issue public TLS/SSL certificates with a maximum validity of 199 days (previously 397 days). This is an industry-wide update driven by the latest CA/Browser Forum Baseline Requirements, and it's all about strengthening security across the web.

Why the Shorter Validity? Shorter certificate lifespans enhance security in a few key ways:
- Reduced risk exposure if a private key is compromised
- Faster cryptographic agility, allowing the industry to adapt more quickly to evolving threats and standards
- Lower long-term impact of mis-issuance or outdated configurations

In short: smaller validity windows = tighter security controls and faster innovation.

Important CA Cutoff Dates. Here's when the new 199-day maximum goes into effect:
- DigiCert: February 24, 2026
- Sectigo: March 12, 2026

Any certificates issued on or after these dates will follow the new maximum validity rule.

Two Ways to Navigate the Change. You've got options; choose the workflow that best fits your team.

Path 1: Manual Re-Issuance (Business as Usual). You can continue purchasing certificates as you do today (e.g., 1-year or 2-year products). The difference? You'll need to reissue and reinstall the certificate every ~6 months until the order term is complete. Best practice: most SSL management services offer renewal notifications; ensure these are enabled in your account so you never miss a reissuance window. This approach works well for teams already comfortable managing certificate lifecycle tasks manually.

Path 2: Embrace Automation. Want to set it and forget it? Automation is your friend. GoGetSSL currently offers ACME-based SSL certificates, enabling automated issuance and renewal. Once configured, your certificates can reissue seamlessly without manual intervention. For enterprise-scale environments, consider DigiCert Trust Lifecycle Manager. 
It provides comprehensive certificate lifecycle management, including discovery, automation, policy enforcement, and centralized visibility.

Technical Considerations. Here's what your development and operations teams should be aware of:

API Certificate Order Requests. After the cutoff dates, API requests specifying a validity greater than 199 days will still create an order for the requested duration; however, the issued certificate itself will be capped at 199 days. This design prevents API errors and ensures your public TLS/SSL orders continue processing smoothly. Pro tip: use the getOrderStatus detail response parameters to monitor the difference between the order validity term and the actual certificate expiration date. Tracking both values will be important for lifecycle planning.

DigiCert Validation Reuse Changes. DigiCert customers should also note adjustments to validation reuse periods:
- Domain Validation (DV) reuse: changing from 397 days → 199 days (effective February 24, 2026)
- Organization Validation (OV) reuse: changing from 825 days → 397 days

These updates align validation lifecycles more closely with the new certificate validity standards and reinforce stronger identity assurance practices.

What this means for you: this isn't just a policy change, it's a strategic shift toward a more secure and agile internet. Continue managing certificates manually (with more frequent reissuance), or transition to automation and streamline your operations. Some MOps platforms already have features enabled to keep it all in one place. For example, Eloqua offers Automated Certificate Management at no additional cost. Either way, planning ahead will ensure a smooth transition. If you'd like help evaluating or implementing automation options for your SSL certificates or updating your certificate management strategy, we're here to support you. Discover our Email Services
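The lifecycle-planning tip above (tracking the order validity term separately from the actual certificate expiration date) can be sketched in a few lines of Python. The field names below are assumptions, not the real getOrderStatus response schema, so treat this as an illustration of the check rather than a drop-in API client:

```python
from datetime import date

def reissue_due(order: dict, today: date, buffer_days: int = 14) -> bool:
    """Return True when the issued certificate (now capped at 199 days) is
    close to expiry while the paid order term still has time remaining,
    i.e. a mid-term reissue is needed. Field names are hypothetical."""
    cert_expiry = date.fromisoformat(order["cert_valid_till"])   # actual certificate expiration
    order_end = date.fromisoformat(order["order_valid_till"])    # paid order validity term
    days_left = (cert_expiry - today).days
    return days_left <= buffer_days and order_end > cert_expiry

# A 1-year order whose first ~199-day certificate is about to lapse:
order = {"cert_valid_till": "2026-09-11", "order_valid_till": "2027-02-24"}
print(reissue_due(order, today=date(2026, 9, 1)))  # True: cert near expiry, order runs on
```

Wiring a check like this into a scheduled job, with the dates pulled from your CA's order-status API, is a lightweight middle ground between fully manual reissuance and full ACME automation.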
- Guardrails aren’t optional when the tool can speak for you...
A few years ago, most marketing mistakes were slow mistakes. Someone wrote the email, someone proofed it, someone hit send. If it went wrong, it went wrong at human speed. You had time to catch the awkward phrasing, the wrong link, the “Dear {FirstName}” horror. The damage was real, but it was usually contained to a campaign, a segment, a moment. Now you’ve got tools that can speak for you. Not just suggest, not just draft, not just “help”. Speak. In your tone. Under your brand. At scale. Across channels. With alarming confidence. That changes the deal. When a tool can produce customer facing language, take action in systems, and create outputs that look official, you’re no longer talking about productivity. You’re talking about authority. And if you hand authority to a system without guardrails, you are effectively outsourcing your standards to a probability machine and hoping your customers never notice. They will. The uncomfortable truth is that AI in Marketing Operations doesn’t fail like software used to fail. Traditional automation breaks loudly. Integrations fail, jobs error out, workflows stop. You get alerts. You get tickets. You get something you can point at. AI fails quietly. It produces something that looks plausible. It produces something that sounds like you. It produces something that passes a quick skim. And then it slips into the world and does its damage in the most painful way: It looks like you meant it. This is why guardrails are not optional. Not because the tool is evil. Not because everyone should panic. Because once the tool speaks, the brand is accountable. “It’s just a draft” is a comforting lie Most teams start with the safest narrative possible. The tool is “just drafting”. Someone will review it. Nothing goes out unapproved. It is assistance, not autonomy. And at the start, that is true. But the reality of modern marketing is volume. 
Too many emails, too many landing pages, too many ads, too many variations, too many segments, too many stakeholders. When the tool makes output easier, you produce more output. When you produce more output, review becomes thinner. When review becomes thinner, the definition of “approved” turns into “nobody complained”. That is how risk creeps in. Not through one dramatic decision to let the robot run your marketing. Through a thousand tiny shortcuts made by busy people who are rewarded for speed, not for diligence. A draft becomes a “close enough”. A “close enough” becomes a template. A template becomes a system. And then one day your brand voice is quietly shaped by whatever the model thinks sounds professional, persuasive, or reassuring. If you’ve ever read a company message that felt oddly hollow, oddly generic, oddly not human, you already know what that looks like. Customers do too. They might not say “this was generated”, but they feel the distance. They feel the lack of accountability. They feel the absence of a real person. In a market where trust is already fragile, that’s not a minor issue. It is the issue. When the tool speaks, it represents your intent This is where the conversation needs to get more serious than “accuracy”. Accuracy matters, of course. Nobody wants hallucinated features or invented pricing. But accuracy is only one slice of the problem. The bigger problem is implied intent. When your brand sends something, customers assume it reflects what you believe, what you value, how you operate, and how you’ll treat them. The tone matters. The promises matter. The certainty matters. The choice of words matters. The absence of empathy matters. AI is very good at sounding certain. It is very good at smoothing rough edges into confident statements. It is very good at making things sound resolved even when they’re not. That is a dangerous trait in a customer context. 
Because confidence is persuasive, and persuasion under your brand name is a promise. If you accidentally overpromise, if you accidentally mislead, if you accidentally claim compliance you haven’t earned, the customer doesn’t blame the tool. They blame you. They should. It is your logo at the top of the email. Your name on the website. Your ad account paying to put the message in front of them. Your sales team following up as if the claim was deliberate. Guardrails are how you protect intent. They are how you stop the tool from speaking with more authority than your business can actually support. The new failure mode is “looks fine” This is the part that catches even smart teams out. Most governance efforts are designed for obvious failures. Broken processes. Missing approvals. Wrong recipients. Compliance red flags. Things you can spot in a checklist. AI’s most common failure mode is more subtle: It produces output that looks fine at a glance and is wrong in a way that matters later. It might be wrong legally. It might be wrong commercially. It might be wrong ethically. It might be wrong in tone. It might be wrong in a way that sets the wrong expectation. It might take a sensitive topic and sand it down into corporate cheerfulness, which feels disrespectful. It might take a complex product limitation and simplify it into something misleading. It might take a customer concern and respond with “we value your feedback”, which is the fastest way to sound like you don’t. And because the output looks polished, it often bypasses the kind of scrutiny that a messy human draft would invite. Humans are suspicious of imperfect writing. We notice it. We challenge it. We ask questions. AI writing often arrives wearing a suit. People assume it has done the thinking because it has done the formatting. That’s how you end up publishing something that nobody would have consciously written, but everyone accidentally approved. 
Speed makes small mistakes expensive Marketing has always had risk. But speed changes the economics of risk. When a human team writes slowly, mistakes are slower too. When you have the ability to produce ten variants instead of one, you also have ten chances to be wrong. When you can spin up campaigns faster, you also shorten the time between a decision and the moment it reaches a customer. Less time means less reflection. Less reflection means more accidents. And the tool does not get tired, so you keep going. This is where teams often miss the point of guardrails. They think guardrails exist to slow things down. In reality, guardrails exist to allow speed without gambling your reputation every time you hit publish. The teams who win with AI will not be the ones who use it the most. They will be the ones who use it with enough discipline that they can trust their own output again. Your brand voice is an asset, not a formatting preference A lot of organisations treat brand voice as a style guide. A few adjectives. A list of do’s and don’ts. Maybe a handful of examples. Useful, but not sacred. When AI enters the picture, brand voice becomes something else. It becomes the training data for your outward identity. The guardrails around how you speak are no longer “nice to have”. They are the constraints that stop your company from slowly turning into generic marketing sludge. Because AI has a default voice. It’s the voice of polite certainty. Professional, helpful, mildly enthusiastic, oddly uncontroversial. That voice is fine for a toaster manual. It is terrible for differentiation. If your competitors use the same tools with the same defaults, you will all start sounding the same. Same phrases, same cadence, same vague confidence, same “we are committed to delivering value”. Customers will not remember you for that. They will remember you for the moments when your communication felt real, specific, and accountable. Guardrails are not only about preventing disaster. 
They are also about preventing dilution. They protect what makes you recognisable. The risk isn’t only what the tool says. It’s what it makes people do. Here’s the part many teams ignore because it feels less glamorous than content. Once AI is embedded in workflows, it stops being a writing assistant and starts being a decision shaper. It changes what people choose to ship, what they choose to test, what they choose to claim, what they choose to ignore. If the tool reliably produces something “good enough”, you stop pushing for “great”. If the tool can generate five angles quickly, you stop thinking deeply about the one angle that truly matters. If the tool can answer customer questions instantly, you stop investing in better documentation and clearer product truth. The tool doesn’t just produce content. It changes standards. That is why governance and guardrails sit in Marketing Operations, not only in legal or IT. This is an operational quality problem. It is about maintaining standards under acceleration. Customers don’t care how it happened When something goes wrong, organisations love to explain the internal story. It was an experiment. It was a vendor issue. It was a misconfiguration. It was a one off. It was an edge case. It was an isolated incident. It was unintended. Customers do not care. They care that you spoke to them in a way that felt careless, misleading, or disrespectful. They care that you used their data in a way you cannot clearly explain. They care that your messaging implied something that was not true. They care that you are now backpedalling. The moment you start defending the process instead of owning the outcome, you lose more trust. Because accountability is the whole point of a brand. Guardrails are how you avoid needing excuses in the first place. Guardrails are not a policy document nobody reads Let’s be blunt. A policy document is not a guardrail. It is a wish. Teams love policies because they create the feeling of control. 
They also love them because they can be written once and then forgotten. They become a box ticked. “We have an AI policy”. Great. Where is it used? Who follows it? What happens when someone ignores it? How do you know? Real guardrails show up where work happens. In the tools. In the templates. In the workflows. In the approvals. In the way you capture decisions. In the way you log what was generated and why. In the way you constrain what is allowed to be said in certain contexts. In the way you enforce brand voice and claims. If you cannot point to the guardrails inside the process, you don’t have guardrails. You have vibes. And vibes are a terrible risk strategy. The irony: Guardrails make AI more useful The fear some teams have is that guardrails will reduce the value of AI. That constraints will kill creativity. That approvals will slow delivery. That governance will turn an exciting tool into another corporate process. In practice, the opposite happens. Without guardrails, teams never fully trust what they generate. They second guess, they rewrite, they hesitate, they argue, they avoid using the tool for anything important. They keep it in the “nice to have” corner. They treat it like a toy. With guardrails, the tool becomes reliable. Not perfect, but reliable enough that teams can use it in real work without constantly worrying that it will embarrass them. Constraints create confidence. Confidence creates adoption. Adoption creates impact. The best marketing ops teams understand this instinctively. They know that freedom without control is not freedom. It is chaos. This is the moment to decide what kind of organisation you are AI is forcing a choice that many companies have been postponing for years. Do you operate with standards, or do you operate with output? Do you want to be trusted, or do you want to be fast? Do you want your marketing to be a real representation of your business, or a high volume content factory that occasionally hits the right note? 
Because once the tool can speak for you, every weak spot in your operation becomes louder. Every unclear rule becomes an argument. Every missing owner becomes a gap. Every undocumented decision becomes a risk. Some teams will respond by pretending it is fine. They will let the tool run, then scramble when something breaks. They will call it learning. Other teams will respond by putting simple, sensible constraints in place that protect customers and protect the brand, while still getting the productivity gains that made them adopt AI in the first place. That second group will be the one that looks competent in two years. Not because they had better tools, but because they had better discipline. And discipline is the real advantage right now. AI can speak for you. That is powerful. It is also a responsibility. Guardrails aren’t optional, not because you’re afraid of the tool, but because you respect what it means to speak under your name. Discover our AI Services
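One way to make "guardrails show up where work happens" concrete: a guardrail that lives in the workflow is something a pipeline can actually run. The sketch below is a hedged illustration, not a complete solution - a pre-review filter that flags unapproved claims in generated copy. The patterns are placeholders you would replace with your own claims policy from legal and brand.

```python
import re

# Placeholder patterns for claims the business has not approved.
# A real list would come from your legal/brand teams, not a blog post.
BLOCKED_CLAIMS = [
    r"\bguarantee[ds]?\b",
    r"\b100% compliant\b",
    r"\bbest[- ]in[- ]class\b",
]

def check_copy(text: str) -> list:
    """Return the blocked-claim patterns found in `text`.

    An empty list means the draft may proceed to human review - it does
    not mean the draft is approved. The guardrail narrows the review
    burden; it does not replace the reviewer.
    """
    return [p for p in BLOCKED_CLAIMS if re.search(p, text, re.IGNORECASE)]
```

Wired into the publishing workflow (rather than a policy PDF), a check like this turns "what is allowed to be said" from a hope into an enforced constraint.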
- AI Governance is not optional, it is the price of using the tool
Every Marketing Operations team is having the same conversation right now. Someone has shipped a chatbot into the website. Someone else is feeding prospect data into a model to “improve targeting”. A third person has quietly wired an AI assistant into the CRM to auto log activities, write follow ups, and “clean” fields. And then the organisation pats itself on the back for being modern. But if you are using AI in production without governance, you are not innovative. You are careless. You are outsourcing risk to your future self, your legal team, and your customers. You are also guaranteeing a messy internal backlash later, because the first time it misfires you will watch the business slam the brakes on everything. Governance is not paperwork. It is the operating system that lets you use AI without turning your MarTech stack into a liability. Why Marketing Ops is uniquely exposed Marketing Ops sits in the blast radius of AI for three reasons. First, you handle a ridiculous amount of personal data, often across multiple systems, with varying consent states and hazy provenance. That is not a moral judgement, it is the reality of modern marketing. Second, your work touches revenue. When AI changes what gets sent, scored, routed, or reported, you are not “testing a feature”. You are changing the way the company makes money. Third, Marketing Ops tends to be the place where “quick wins” become permanent. A prototype becomes a workflow. A workflow becomes business as usual. Nobody writes down what it does, why it does it, or what it is allowed to touch. Then one day something breaks and everyone acts shocked. AI accelerates that pattern. It automates decisions. It generates content at scale. It can behave differently tomorrow than it did today. That is why governance matters more here than in a team building slide decks. Guardrails are not “compliance”, they are performance The common argument against governance is that it slows teams down. 
That only sounds true if you have never lived through the alternative: Chaos, rework, and a six month freeze after a public or internal incident. AI guardrails speed you up because they remove ambiguity. People know what tools are approved, what data they can use, what needs review, and what gets logged. They stop you shipping the same mistakes over and over again with increasing confidence. The NIST AI Risk Management Framework is a good way to think about this. It frames risk management around governance and lifecycle management, not one time approvals. The core idea is simple: Govern the approach, map the context, measure the risks, manage the controls. If you have no GOVERN function, the rest becomes theatre. ISO/IEC 42001 points in the same direction from a management system angle: You need a structured way to establish, run, and continually improve how AI is used. This is not about one policy PDF. It is about ownership, controls, and continuous improvement. The uncomfortable truth about “we are just using it for marketing” A lot of teams still talk about marketing use cases as if they are low stakes. They are not. If AI personalises a message, decides who gets an offer, changes lead routing, or rewrites copy based on customer data, you are in the realm of fairness, transparency, and accountability. You are also in the realm of data protection obligations, because personal data is often in the loop, even when people pretend it is not. Regulators are not buying the “it is just marketing” line either. The UK ICO’s guidance on AI and data protection is explicit about accountability and governance, and it ties it to concrete practices like impact assessments, documenting decision making, and involving appropriate stakeholders. In Europe, the EU AI Act has put “trustworthy AI” into law, with a risk based approach and requirements that include risk management, data governance, transparency, and human oversight depending on the system and risk category. 
Whether or not your specific use case is classified as high risk, the direction of travel is clear. The bar is rising, and “we did not think about it” is not a defence. What good governance actually looks like in Marketing Ops Governance fails when it is vague. “Be responsible” is not a control. It is a hope. Good governance is operational. It answers questions people actually have to answer on a Tuesday afternoon, under pressure, with a campaign deadline looming. Here is what we tend to come across in a Marketing Ops context. 1. A clear inventory of AI use cases If you do not know where AI is used, you cannot govern it. Most organisations already have shadow AI, including browser based tools, plug ins, CRM add ons, and “temporary” scripts. A proper inventory is not a spreadsheet that dies after week one. It is a living register: What the use case is, what system it touches, what data is involved, what model or vendor is used, what the failure modes are, and who owns it. 2. Data boundaries that are blunt, not poetic You need rules that can be enforced, not mission statements. What data is allowed into prompts and workflows. What must be masked or excluded. What cannot be used at all. How retention works. What happens to data sent to third parties. The UK ICO has been clear that organisations should think seriously about governance and accountability when processing personal data in AI systems, including assessing risks and documenting the rationale. That starts with knowing what you are feeding into the machine. 3. Human oversight that is real “Human in the loop” is often marketing theatre. People claim oversight exists, but in practice nobody checks anything until it goes wrong. Real oversight means defining which outputs are allowed to run automatically, which need review, and what “review” actually means. It also means training reviewers to spot the failure modes, not just grammar errors. 
The EU AI Act explicitly points to human oversight as a core requirement in higher risk contexts, because systems can fail in ways humans do not anticipate. Even if your specific use case is not formally high risk, the principle still applies. 4. Logging, traceability, and auditability This is the part Marketing Ops teams avoid because it feels technical. It is also the part that saves you when someone asks, “Why did this customer receive that message?” or “Why did this lead get marked as unqualified?” You need to be able to trace inputs, prompts, outputs, and downstream actions. That includes versioning of prompts and workflows, so you can explain behaviour changes over time. Without logs, you cannot learn. You also cannot defend yourself. 5. Vendor and model controls Most teams do not “build AI”. They buy it. That does not reduce responsibility. It changes the governance surface. You need procurement standards for AI vendors, clarity on data usage, model training policies, retention, and security. You need to know what happens when the vendor changes the model. You need exit plans. You need to treat AI features like critical infrastructure, not a shiny add on. ISO/IEC 42001 is useful here because it is designed for organisations providing or using AI based products or services, with an emphasis on responsible use and management system controls. 6. A governance cadence, not a one time workshop AI governance is not a launch task. It is a loop. New use cases appear. Old ones change. Vendors update. Regulations evolve. Teams find new ways to break things. If governance is a quarterly committee that nobody takes seriously, it will fail. If it is embedded in change control, release management, and campaign operations, it becomes normal. Risk management should apply across the lifecycle, not just at the start, and lifecycle framing matters a lot in Marketing Ops because systems and workflows are constantly evolving. 
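As an illustration of point 4, a minimal audit record might look like the sketch below. The record shape is an assumption for this article, not a standard; the point is that every AI action carries a timestamp, a versioned prompt, and an attributable actor, so "Why did this lead get marked as unqualified?" has an answer.

```python
import hashlib
from datetime import datetime, timezone

def log_ai_action(store: list, *, prompt_version: str, prompt: str,
                  output: str, action: str, actor: str) -> dict:
    """Append one traceable record of an AI decision to `store`.

    In production, `store` would be a log pipeline or an append-only
    table, not an in-memory list - this is a shape sketch only.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt_version": prompt_version,   # version prompts like code
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
        "action": action,                   # the downstream change made
        "actor": actor,                     # which agent or workflow acted
    }
    store.append(record)
    return record
```

Hashing the prompt rather than storing it verbatim is one option when prompts contain personal data; store the full prompt in a restricted system if you need exact replay.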
The three failure modes that guardrails prevent Let’s make this painfully practical. Guardrails stop three common disasters. First, data leakage. Someone pastes customer data into a tool they should not be using. Someone connects a plugin that exports data to a vendor that stores it indefinitely. Someone uses a feature without understanding where the data goes. Regulators have been increasingly vocal about privacy harms in AI contexts, and not just in abstract terms. Second, hallucinated operations. AI makes up a field value. It confidently “dedupes” records that should not be merged. It assigns a lead score based on nonsense. It rewrites copy and introduces claims you cannot substantiate. Marketing Ops teams love automation, which means they are especially vulnerable to quietly automating errors at scale. Third, accountability collapse. When things go wrong, nobody owns it. The vendor blames configuration. The marketer blames the tool. The Ops team blames “the model”. Leadership responds by banning everything. The outcome is predictable: Fear replaces learning. Governance is how you avoid turning one mistake into a full organisational retreat. “But we want to move fast” Move fast is fine. Move fast with rules. The teams that win with AI are not the ones with the most experiments. They are the ones that can experiment safely, keep what works, and kill what does not without drama. Guardrails are what make that possible. A strong governance setup does not mean every prompt needs legal approval. It means you have sensible tiers. Low risk tasks, like drafting internal summaries or rewriting existing public copy, can have light controls. Higher risk tasks, like using personal data for personalisation, changing routing, or automating outbound messages, should have stronger controls: Defined review, logging, and monitoring. This is exactly how risk based frameworks are designed to work. 
The EU AI Act is built around risk categories, and NIST’s RMF is intentionally flexible and context driven. What to do next if your “governance” is basically vibes If you are reading this and realising your current stance is somewhere between “ad hoc” and “hope”, you are normal. Most organisations are there. The fix is not a 40 page policy. The fix is a working system. Start with a short inventory of every AI touchpoint in your marketing stack. Include the unofficial ones. Define data boundaries in plain language and make them enforceable. Create an approval and oversight model that matches risk, with clear ownership. Implement logging and traceability so you can explain what happened. Set vendor standards so you are not surprised by where data goes or what changes. Then run it as a process, not a project. If that sounds unsexy, good. Most things that save companies from expensive mistakes are unsexy. Marketing Ops is already the team that makes the unsexy work pay off. AI should not be the exception. Guardrails are not the thing stopping you from getting value from AI. Guardrails are the thing that lets you keep the value once you find it. Find out how we can help you with your AI Governance and Guardrails: Discover our AI Services
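That starting inventory does not need heavy tooling. As a sketch, a register entry could be as simple as the structure below (the field names are illustrative, not a standard schema); the useful part is that gaps like "never reviewed" or "no owner" become queryable instead of invisible.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in a living register of AI use cases (illustrative)."""
    name: str
    system: str                 # e.g. CRM, MAP, website chatbot
    data_involved: list         # what personal or business data it touches
    vendor_or_model: str
    owner: str                  # a named person, not a committee
    risk_tier: str              # e.g. "low", "elevated", "high"
    failure_modes: list = field(default_factory=list)
    last_reviewed: str = ""     # ISO date; empty means never reviewed

def unreviewed(register: list) -> list:
    """Names of entries with no recorded review - the governance gap."""
    return [u.name for u in register if not u.last_reviewed]
```

Whether this lives in code, a database, or a well-owned spreadsheet matters less than the discipline: every AI touchpoint gets an entry, and the entry gets reviewed.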
- Stack rationalisation is not downsizing. It’s a MarTech ROI rescue.
Every few years, a Marketing Ops team looks at its technology stack and has the same realisation you get when you open the “misc” kitchen drawer. Nothing in there is individually a bad idea. It’s just… a lot. Half of it does the same job. Some of it hasn’t been used since the last merger. One item is only kept because a former colleague swore it was “mission critical”, and nobody’s brave enough to ask what it actually does. That’s your MarTech stack. And here’s the uncomfortable truth: Most stacks don’t fail because the tools are bad. They fail because the stack stopped being designed and started being collected. The result is predictable. Costs creep up. Adoption fragments. Data gets weird. Reporting becomes interpretive dance. The team spends more time keeping systems alive than using them to create pipeline. Then someone says, “We need to rationalise the stack,” and everyone hears, “We’re about to take your toys away.” But that’s not what rationalisation should be. Done properly, it’s not a finance-led haircut. It’s a performance rescue. It’s how you turn “we have loads of tools” into “we get value from what we pay for”. It’s also one of the fastest ways to regain executive trust, because nothing screams “adult supervision” like knowing what you own, why you own it, and what it’s delivering. The ROI myth that keeps stacks bloated MarTech ROI is usually treated like a scoreboard. We bought tool X. Tool X has dashboard Y. Therefore we can report ROI. But “reporting ROI” and “having ROI” are not the same thing. Most MarTech spend is justified with a story, not evidence. A story about efficiency. A story about personalisation. A story about scale. A story about being “data-driven”. Great stories, honestly. Very fundable. Then reality arrives. The tool requires clean data you don’t have. It needs an integration nobody scoped. It assumes a process you’ve never standardised. 
It gets deployed halfway, then the team gets busy, then six months pass, then renewal comes around and you renew because the alternative is admitting you don’t know what you’re doing. That is not a tool problem. That is an operating model problem. And the longer it goes on, the harder it gets to unwind, because the stack becomes political. People attach their identity to platforms. Procurement decisions become legacy monuments. Usage becomes impossible to measure because “using it” might mean anything from logging in once a quarter to running mission-critical workflows. So the stack grows. Overlaps multiply. ROI gets fuzzier. Everyone gets used to it. Until a CFO asks a very fair question: “What are we paying for, and what are we getting back?” If you can’t answer that crisply, you don’t have a stack. You have a subscription museum. Rationalisation isn’t removing tools. It’s restoring design. A rationalised stack is not the smallest possible number of tools. It’s the fewest tools required to reliably execute your strategy. That’s a huge difference. Because the goal is not austerity. The goal is performance. It’s speed, consistency, measurable outcomes, and reduced dependency on heroics. When stacks are bloated, teams start compensating with workarounds and manual effort. They build fragile automations. They export spreadsheets. They invent processes to deal with tool limitations instead of choosing tools that fit the process. Rationalisation reverses that. It gets you back to intentional design: What do we actually need to do to win? What capabilities matter most to deliver that? What is the simplest architecture that supports those capabilities? Where are we paying twice for the same outcome? Where do we have “features” but not “adoption”? Where does the data fall apart? This is why stack rationalisation is not primarily a procurement exercise. It’s a strategy and operations exercise that happens to result in procurement changes. 
The hidden cost: Operational drag Most teams underestimate how expensive complexity is, because it doesn’t show up as a single line item. Complexity costs you in:
- Time: training, troubleshooting, triage, and all the “small” tasks that become constant.
- Speed: every new campaign takes longer because the workflow touches more systems and more handoffs.
- Risk: data privacy, consent, access control, and governance failures become more likely as systems multiply.
- Insight: reporting degrades because definitions split across tools and no one trusts the numbers.
- Morale: nothing kills motivation like working inside a stack that feels unreliable.
If you want a simple definition of “MarTech debt”, it’s the gap between the stack you have and the stack your team can actually operate confidently. Paying off that debt is where ROI rescue starts. Why most rationalisation attempts fail Plenty of teams try to rationalise. Many even reduce vendor count. And then, weirdly, not much improves. That happens when rationalisation is done as a cleanup rather than a redesign. Common failure patterns: It becomes a cost-only exercise. If the main goal is to cut spend, the team will keep anything that looks defensible and ditch anything that looks optional, regardless of whether the “optional” tool is the one actually driving outcomes. It ignores workflows. Tools get evaluated in isolation, not based on the end-to-end journey they support. You can’t rationalise your stack if you can’t describe your core workflows. It confuses usage with value. A heavily used tool might still be a net negative if it drives manual work, fragmented data, or duplicated processes. It avoids hard questions. Teams keep tools because “someone uses it”, but nobody can define the value, the owner, the success metrics, or the alternative. It forgets change management. Removing tools is easy. Removing habits is hard. 
If you don’t redesign workflows and retrain the team, the old problems will reappear inside the “new” stack. If you want stack rationalisation to stick, it has to be tied to operational clarity and measurable outcomes. The ROI rescue approach: stop measuring tools, start measuring capabilities A better way to think about MarTech ROI is this: You don’t buy tools. You buy capabilities. Tools are just one way to deliver those capabilities. So instead of asking, “What does this platform do?”, ask “What capability does this enable, and how will we prove it?” Capabilities might include:
- Reliable lifecycle email execution.
- Accurate attribution you trust enough to bet budget on.
- Lead management that doesn’t create sales distrust.
- Consent and preference management that reduces risk.
- Personalisation that actually moves conversion rates.
- Reporting that doesn’t require a therapist.
Once you frame it this way, rationalisation becomes clearer. Overlap is not “two tools do similar things”. Overlap is “we’re paying twice for the same capability”. And gaps become obvious too. Sometimes teams have ten tools yet still can’t do one critical thing consistently because the foundations are missing: Data, governance, process ownership. That’s why the ROI rescue is not simply consolidation. It’s capability alignment. Step one: Name the outcomes you’re trying to buy Before you touch vendors, get specific about what the business expects MOPS to deliver, and what MOPS expects the stack to make easier. Not vague outcomes like “better engagement”. Concrete outcomes like:
- Reduce campaign launch time from ten days to five.
- Increase lead-to-meeting conversion rate by 15 percent.
- Improve lifecycle email contribution to pipeline by X.
- Increase MQL to SQL acceptance by Y.
- Reduce manual list pulls and CSV-based processes by Z.
If you can’t define outcomes, the stack will keep being evaluated based on opinion and politics. 
The fastest way to kill a rationalisation project is to make it about which tools people like. Step two: Map the workflows that create value You don’t need a massive process library. You need the handful of workflows where performance lives. For most B2B teams, that’s usually: lead capture to routing; lifecycle email and nurture; campaign execution and measurement; attribution and reporting; data enrichment and deduplication; consent and preference management; and integration between CRM, MAP, and analytics. Map those workflows at a human level, not at a vendor feature level. Who does what, when, with what inputs, and where the system should automate vs where humans need control. This is where you’ll find the truth. The truth is usually that the stack is not too big. It’s too inconsistent. It allows different parts of the org to operate different versions of “the process”, which creates downstream chaos. Rationalisation should standardise workflows, not just reduce logos on a slide. Step three: Assign ownership, or accept you’re buying waste Tools without owners become toys, then become liabilities. Every core system and every core workflow needs an accountable owner. Not a committee. A named person. Ownership means: Defining standards. Managing changes. Measuring performance. Training users. Deciding what gets built and what gets blocked. If nobody owns it, you’re not buying a platform. You’re buying entropy. This is also where ROI becomes measurable. You can’t prove ROI on something nobody is responsible for improving. Step four: Create a “keep, kill, consolidate, fix foundations” decision model This is where teams expect a dramatic tool-culling session. Sometimes you will cut tools. Often you should. But more often, the biggest ROI is in “fix foundations”. Because you can consolidate your stack beautifully and still get terrible results if: Data is inconsistent. Lifecycle definitions are unclear. UTM governance is non-existent. CRM hygiene is a fantasy. 
Sales stages and lead statuses mean different things to different people. Consent tracking is messy. Rationalisation should result in decisions across four buckets: Keep: tools that directly support priority capabilities and are adopted properly. Kill: tools that are unused, redundant, or never delivered the promised capability. Consolidate: overlap where one tool can reasonably replace another without wrecking workflows. Fix foundations: areas where the tool is fine, but the operating model is broken. That last bucket is where ROI rescue often lives. Because you can save money by cutting a tool. You can make money by making the stack work.

Step five: Measure a few things that actually matter

ROI is not platform cost divided by vibes. Pick metrics that connect stack performance to business performance and operational efficiency. Examples that tend to expose the truth quickly include time-to-launch for campaigns, percentage of leads routed correctly within SLA, sales acceptance rate of leads, percentage of lifecycle emails using approved templates and tracking, duplicate rate in CRM, percentage of records with required fields. And then other things such as report reliability: Do teams trust the dashboards enough to use them in decisions? And support load: How many hours per week are spent troubleshooting basic execution issues? These metrics do two important things. They prove value when things improve. And they make it painfully obvious when a tool is not the problem.

The part nobody likes: Rationalisation changes power

This is why it’s hard. A rationalised stack usually means fewer exceptions, more standards and clearer governance - less “I do it my way”. That feels restrictive if you’re used to improvising. But it’s the difference between creativity and chaos. High-performing teams don’t move faster because they have more tools. They move faster because they have fewer decisions to remake every week. Standards create speed. Governance creates confidence.
Clarity creates adoption. And adoption is the thing that turns software into ROI.

What “good” looks like when you’ve rescued ROI

A rationalised stack doesn’t look exciting. It looks boring in the best way. Campaigns launch reliably. Reporting is trusted. Integrations are stable. Lead routing works without daily drama. New hires can learn the system without needing a private tour from the one person who understands it. You spend less time arguing about tools and more time improving outcomes. And the CFO stops asking awkward questions because you’ve already answered them. That’s the real goal. Not fewer vendors for the sake of it, but a stack that behaves like infrastructure, not a science project.

The kicker: Rationalisation is an AI readiness project in disguise

Most organisations are desperate to “use AI” and confused about why it isn’t magically working. Here’s why. AI can’t save a broken operating model. It will only automate the chaos faster. If your data is inconsistent, AI will generate inconsistent outputs. If your processes are unclear, AI will amplify the ambiguity. If nobody owns the system, AI will become another orphan tool. Stack rationalisation, done properly, is one of the best AI readiness moves you can make. Because it forces you to create the conditions where automation can be trusted: Clean data, standard workflows, and clear accountability. You don’t become AI-ready by buying an AI feature - you become AI-ready by becoming operationally serious.

A final thought: If your stack can’t be explained, it can’t be defended

If you can’t describe, in plain language, what each major tool is for, who owns it, what capability it supports, and how you measure its success, you’re not managing a stack. You’re hosting one. Stack rationalisation is not about being smaller. It’s about being deliberate. And MarTech ROI rescue is not about proving your spend was justified. It’s about ensuring your spend becomes productive.
If you want a simple rule to start with, use this: If a tool doesn’t reduce time, reduce risk, or increase revenue, it’s either mismanaged or unnecessary. Either way, it’s on the list. Discover our MarTech Services
- The EU AI Act will expose your Marketing Ops: Who’s accountable when AI breaks things?
Marketing Ops has always been accountable. It just rarely looked like it. When a campaign misfires, it’s “a creative issue”. When data goes bad, it’s “a CRM issue”. When attribution turns into astrology, it’s “a market issue”. Marketing Ops sits in the middle quietly fixing everything while everyone else argues about the colour of the button. Now add AI to that mix. Because AI does not fail politely. It fails at scale, at speed, and with enough confidence to make the wrong answer look like policy. The EU AI Act is basically Europe’s way of saying: If you deploy AI, you do not get to shrug when it breaks. Someone has to own the risks, the controls, the monitoring, and the outcomes. And if your Marketing Ops function currently runs the stack, the workflows, the routing, the automation, the data, and increasingly the “helpful” AI features inside your tools, congratulations. You are about to get pulled into an accountability conversation you did not schedule. This article is not legal advice. It’s a practical, Marketing Ops view of what the EU AI Act changes, what it forces you to be clear about, and how to answer the uncomfortable question: Who is accountable when AI breaks things? And are you prepared for when it becomes applicable in August 2026?

What the EU AI Act actually is, and why Marketing Ops should care...

The EU AI Act is a regulation that sets risk-based rules for AI. It applies to public and private actors inside and outside the EU if they place AI systems or general-purpose AI models on the EU market, put them into service, or use them in the EU. The timeline matters because this is not some distant future threat you can park in a Q4 roadmap and never touch again. The Act entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026, with staged dates for different parts. Prohibited practices and AI literacy obligations have applied since 2 February 2025.
Obligations for general-purpose AI models became applicable on 2 August 2025. You do not need to be “building AI” to be on the hook. If your marketing team is using AI features in a CRM, marketing automation platform, ad platform, analytics tool, chatbot, content tool, sales engagement tool, or customer data platform, you are already in the system. Marketing Ops cares for one simple reason: The Act forces clarity about who is responsible for what. And Marketing Ops is usually the only function that can map what is actually being used, where, by whom, and with what data.

The first accountability trap: “We didn’t build it, we just used it”

Under the Act, obligations fall on different actors, including providers and deployers. The Commission’s guidance describes the framework applying to providers (for example, a developer of a tool) and deployers (for example, an organisation using that tool). This is where a lot of Marketing Ops teams try to mentally exit the building. “We’re not an AI company. We’re just using features in our tools.” That may reduce some obligations, but it does not remove accountability. Even in the high-risk context, the Commission’s guidance describes deployer obligations that are very operational: Using the system according to instructions, monitoring operation, acting on identified risks or serious incidents, and assigning human oversight to people in the organisation. So the real question is not “are we a provider?” It’s “are we a deployer, and if so, are we operating the system responsibly?” In Marketing Ops terms, that translates into boring, unavoidable work: Governance, documentation, controls, training, monitoring, and incident response.

The second accountability trap: “AI is everywhere, so nobody owns it”

When everything has an AI button, it becomes culturally tempting to treat AI as a vibe rather than a system. But the EU AI Act is designed to do the opposite. It is trying to turn AI back into something you can audit.
That means you will get asked questions like: Who approved this use case? Who decided what data goes into it? Who checked the output? Who is monitoring performance drift? Who is accountable when it produces misleading content, discriminatory outcomes, or security incidents? If your organisation cannot answer those questions, you do not have “AI adoption”. You have unmanaged operational risk. And unmanaged risk has a habit of becoming a budget line, a headline, or both.

Where Marketing Ops is most exposed

Most Marketing Ops teams are not deploying AI for medical triage or border control. That’s not the point. The exposure comes from how marketing actually uses AI in the real world.

You run customer-facing AI interactions

If you deploy chatbots or other interactive systems, someone needs to think about transparency, user expectations, and what happens when the system confidently says something untrue. The Commission’s guidance explains that the Act introduces transparency requirements for certain interactive or generative AI systems, such as chatbots, to address risks like manipulation, fraud, impersonation and consumer deception. That is marketing territory. Customer experience, web journeys, lead capture, qualification, and support deflection are all places where Marketing Ops often owns the tooling and the workflow. When those systems break, the first question will be “why did you deploy it like this?” not “which vendor did you buy it from?”

You publish AI-assisted content at scale

Marketing teams are already generating images, audio, video, and written content with AI-assisted tools. The Act’s transparency obligations include requirements on deployers in certain situations, including disclosure for AI content, and disclosure when text is generated or manipulated and published with the purpose of informing the public on matters of public interest. The Commission notes these transparency obligations and that guidelines will further clarify how they apply.
Even if your content does not fall into those specific categories, the direction of travel is clear. You are expected to be honest about what is synthetic when that matters to the audience, and to avoid systems that create deception. Marketing Ops is exposed here because it often owns the content workflow tooling, approvals, templates, distribution and tracking. You are the function that can actually operationalise a disclosure rule without turning the team into a bureaucratic mess.

You use AI for targeting, segmentation, and decisioning

This is the area where marketing loves to pretend the model is “just helping”. If AI influences who sees what, who gets prioritised, who is suppressed, who is routed, or who gets categorised, you are using AI as a decisioning layer. Even when the Act does not label a specific marketing use case as “high-risk”, you still have obligations under other laws, and the AI Act does not replace those. The European Data Protection Board has been explicit that the AI Act and EU data protection laws should be considered complementary and mutually reinforcing, and that EU data protection law remains fully applicable to the processing of personal data involved in the lifecycle of AI systems. So if your AI-driven segmentation relies on personal data, you are automatically in GDPR land as well, and your accountability picture now has at least two regulators’ expectations in it.

You might accidentally wander into high-risk territory through HR and recruitment marketing

A lot of marketing teams support recruitment, employer brand, internal comms, and candidate journeys. Some teams run targeted job advertising systems and automation. Some use tools that “optimise” job ads and candidate targeting. The Commission’s guidance lists employment-related AI systems as examples of high-risk use cases, including systems intended to be used for recruitment or selection, which includes placing targeted job advertisements.
If your marketing stack touches that area, you need a grown-up conversation with HR and Legal about who owns the system, who is the deployer, and what controls exist. Marketing Ops does not need to own HR compliance, but Marketing Ops often owns the platforms that make these workflows possible. That makes you part of the accountability chain.

“When AI breaks things”: what counts as “breaks”?

This is where organisations get dangerously vague. AI “breaking” is not just a system outage. It can mean: A chatbot gives incorrect product claims, pricing, security assurances, or legal statements. An AI feature generates content that creates deception, impersonation risk, or misleading communications. An optimisation system shifts targeting in a way that creates discriminatory outcomes, even unintentionally. A data pipeline feeds the wrong inputs, and the model output becomes systematically wrong. A generative tool produces content that breaches IP rules or internal policy. A vendor updates a model, performance changes, and your safeguards do not catch it. A workflow creates an outcome you cannot explain to an affected person, which becomes a practical problem in high-risk contexts where the Commission describes a right to an explanation for natural persons in certain situations. The point is not to predict every failure mode. The point is to stop acting surprised when failure happens, and to have an accountable operating model ready.

So who is accountable, legally?

There is no single magical job title that makes the risk disappear. Accountability is shared, but not vague. At a legal role level, the Act places obligations on the relevant actor types (providers, deployers, and others depending on the scenario). The Commission’s guidance makes clear that deployers have concrete responsibilities in how they use and monitor certain systems, including assigning human oversight within their organisation. At a governance level, enforcement is not theoretical.
The Commission’s materials outline penalties, with maximum thresholds including up to €35m or 7% of worldwide annual turnover for certain infringements, and other tiers for other non-compliance categories. At a data and privacy level, the AI Act does not push GDPR aside. The EDPB has stressed that data protection law remains fully applicable to personal data processing across the AI lifecycle, and the AI Act should be interpreted as complementary to GDPR and related laws. So if your question is “who will regulators look at?”, the honest answer is: They will look at the entity that deploys the system in the EU, the entity that provides it, and the people inside those entities who were supposed to provide oversight. Which brings us to the more useful question...

Who should be accountable inside a company?

This is the Marketing Ops version of “stop pointing at each other like a Spiderman meme and design a process”. The EU AI Act effectively rewards organisations that can do three things on demand. They can show what AI is in use, where it is used, and why. They can show who approved it, what data it uses, and what safeguards exist. They can show how they monitor it, how they handle incidents, and how they train staff. The Act’s AI literacy obligations have been in application since 2 February 2025. That is not a “nice to have”. It is a forcing function that pushes companies to ensure the people using AI understand it well enough to use it responsibly. Inside most B2B companies, accountability ends up looking like this. Legal and Compliance sets rules, interprets obligations, and decides risk appetite. Security sets requirements for vendor assessments, access controls, and incident response. The DPO and privacy function owns the GDPR posture where personal data is involved, and the EDPB has been clear this remains fully relevant in AI systems. Marketing leadership owns what the business chooses to do, and what it is willing to sign off.
Marketing Ops owns how the work is actually done across platforms, workflows, data, and governance. If you want a single throat to choke, organisations are already trying to dump this on “the AI person” or “the data person”. That fails because the risk lives in operations. It lives in who can actually change how tools are configured and used. That is why the EU AI Act will expose Marketing Ops. It makes operational accountability visible.

The uncomfortable part: Your vendor contracts will not save you...

Vendors can promise compliance. They can offer documentation. They can add toggles and disclaimers. They can be very convincing in sales calls and contracts. But the moment you deploy the system in your environment, with your data, for your purpose, you become responsible for how it is used. The Commission’s guidance on deployer obligations in high-risk contexts is blunt about deployers needing to use systems according to instructions, monitor operation, act on identified risks, and assign human oversight. The spirit of that is useful even outside high-risk: You cannot outsource oversight. This is where Marketing Ops should stop accepting “the vendor said it’s compliant” as a meaningful internal control.

A practical accountability model for Marketing Ops

You do not need to turn your Marketing Ops team into a compliance department, but you do need a system that creates answers quickly when someone asks, “What AI are we using, and what happens if it fails?” Here is what that looks like in practice, without turning this into a checklist article. Start with an AI inventory that is brutally honest. Not a slide. A living list of tools and features, where they are used, what data they touch, and whether they interact with customers. If you cannot map it, you cannot govern it. Then define use-case ownership. Not tool ownership. Use cases. “Website chatbot”. “Email content generation”. “Lead enrichment”. “Audience segmentation”. “Recruitment ad targeting”.
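A living inventory like this does not need special tooling; even one small structured record per use case beats a slide. A hypothetical sketch of what a record could look like; every field name here is an illustrative assumption, not a schema from the Act or from any vendor:

```python
from dataclasses import dataclass

# Hypothetical sketch of one record in a living AI use-case inventory.
# All field names and values are illustrative assumptions.

@dataclass
class AIUseCase:
    name: str               # the use case, e.g. "Website chatbot"
    tool: str               # the platform or feature that delivers it
    data_touched: list      # categories of data the system accesses
    customer_facing: bool   # does it interact with customers directly?
    business_owner: str     # named person accountable for the use case
    operational_owner: str  # named person who configures and monitors it
    last_reviewed: str      # ISO date of the last governance review

inventory = [
    AIUseCase(
        name="Website chatbot",
        tool="(vendor chatbot feature)",
        data_touched=["product FAQs", "contact details"],
        customer_facing=True,
        business_owner="Head of Digital",
        operational_owner="Marketing Ops",
        last_reviewed="2025-06-01",
    ),
]

# Governance reviews can then start from simple, answerable queries,
# e.g. customer-facing use cases not reviewed since a cutoff date:
overdue = [u.name for u in inventory
           if u.customer_facing and u.last_reviewed < "2025-01-01"]
print(overdue)  # -> []
```

The format matters far less than the discipline: one record per use case, named owners, and a review date someone is actually held to.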
Every use case needs a named business owner and a named operational owner. The operational owner is often Marketing Ops. Then decide what “human oversight” means for each use case. The Commission’s language on assigning human oversight inside the organisation should not be treated as a high-risk-only curiosity. If a system can publish, route, prioritise, or decide, someone needs to be accountable for review points, guardrails, and escalation. Then put monitoring where it belongs: On outcomes, not activity. Monitor for things like hallucinated claims in customer-facing responses, unexpected shifts in routing, sudden performance drift after vendor updates, spikes in complaint patterns, and outputs that create deception risk. Then add an incident pathway that does not rely on panic. If AI produces a harmful or misleading output, who gets notified, who can shut it down, who contacts the vendor, who handles customer comms, and who documents what happened? Finally, train people like adults. The AI literacy obligations are already in application. Training should be specific to the tools and use cases your team actually uses, and it should include what not to do, what must be reviewed, and what needs disclosure. If your training is a generic “AI 101” webinar, you have technically done a thing. You have not reduced risk.

The privacy and compliance overlap you cannot ignore!

Marketing teams often treat GDPR as “the cookie banner problem”. That mindset is going to get expensive. The EDPB’s statement is clear that data protection law remains fully applicable to personal data processing across the AI lifecycle and should be interpreted as complementary with the AI Act. On top of that, regulators are actively thinking about the interplay. The EDPB and EDPS have noted work on joint guidelines about the interplay between GDPR and the AI Act. For Marketing Ops, that means your AI governance cannot be divorced from your data governance.
If you cannot explain what data goes in, why it is lawful, how it is minimised, how it is secured, and how it is deleted, you are not “doing AI”. You are doing risk.

One more complication: The rules are still being operationalised

It’s tempting to read a regulation like it’s a final instruction manual. In practice, there will be standards, guidelines, and codes of practice that affect how organisations implement parts of the Act. For example, the Commission notes work on guidance for transparency obligations and a code of practice to support marking and labelling of AI-generated content. The Commission has also proposed adjustments to the timeline for applying high-risk rules linked to the availability of support measures like standards and guidelines, and that proposal is in the legislative process. So yes, some details will evolve. That is not a reason to wait. It is a reason to build an operating model that can adapt without chaos.

The blunt reality: Marketing Ops is accountable for readiness

When AI breaks things, the provider may be accountable for parts of compliance, depending on their role. The deployer is accountable for how it is used in their organisation. Regulators and stakeholders will not accept “the tool did it” as a defence, especially where transparency, oversight, and monitoring were expected. Inside the company, Marketing Ops is rarely the legal owner of the risk, but it is often the operational owner of whether the business can prove it is acting responsibly. That is the exposure. Not because Marketing Ops is to blame, but because Marketing Ops is where reality lives. If you want a simple line to use internally, use this: Legal interprets the rules, Security protects the environment, Privacy governs personal data, and Marketing Ops makes the controls real across the stack.
And the fastest way to find out whether your Marketing Ops is ready is to ask one question: "If we had to explain our AI usage to a regulator, a customer, and our board tomorrow, could we do it without improvising?" If the answer is no, the EU AI Act didn’t create the problem. It just stopped letting you hide it. Discover our AI Services
- Lead scoring is cosplay: What actually predicts revenue now
Lead scoring used to feel like grown-up marketing. A neat little system that turned chaos into order. A tidy number that told sales who to call first. A dashboard that made everyone feel like the funnel was being managed by competent adults. And then real life happened. Buying committees got bigger. Intent got noisier. Forms got optional. Cookies got nerfed. Inboxes got hostile. Sales cycles became less linear and more like a drunken treasure hunt. Yet somehow, a lot of teams are still proudly running the same scoring model they built when people downloaded whitepapers for fun and marketing could pretend it “handed leads to sales” like a factory line. That’s why lead scoring is now often cosplay. Not because scoring is inherently bad, but because most scoring models are pretending the world works the way it did when the model was invented.

Why your lead score is confidently wrong

Most lead scoring systems break for three reasons. First, they’re built on activities that are easy to track, not activities that predict revenue. Email opens, page views, webinar attendance, “visited pricing page”, “downloaded asset”, “clicked CTA”. All observable. All measurable. Many only weakly tied to a buying decision. Second, they assume the buyer is a single person moving through a funnel. In reality, the person filling out the form is often not the person with budget. Sometimes they are not even the person with a problem. They might be a researcher, an intern, a manager asked to “look into it”, or someone collecting screenshots for an internal deck. Your model gives them 82 points and everyone panics, while the actual decision maker never touches your website. Third, they confuse engagement with intent. Engagement can be curiosity, education, boredom, or comparison shopping. Intent is “we have a problem, we are prioritising it, and we are moving towards a decision”. Most scoring models treat the first as a proxy for the second. That’s the fundamental lie.
If you’ve ever watched an account rack up score like a slot machine and then ghost you completely, you’ve seen this lie in the wild.

The hidden cost of lead scoring theatre

Bad scoring isn’t neutral. It doesn’t just fail quietly. It actively wastes time and damages trust. Sales loses faith and starts ignoring anything marketing sends. Marketing then tries to “fix adoption” with enablement sessions, new dashboards, or another scoring tweak. That makes it worse, because the problem is not communication. The problem is the signal. Meanwhile, truly winnable opportunities sit in the shadows because they don’t behave like your model expects. They don’t click the right emails. They don’t fill the right forms. They might come in through a partner. They might show up in pipeline because a rep already has a relationship. Your model shrugs and calls them “low score”. And when leadership asks, “Why are we not converting more MQLs?”, the answer becomes a shrug wrapped in charts. The goal isn’t a better score. The goal is better prioritisation. So let’s talk about what actually predicts revenue now.

What predicts revenue now: Fewer signals, better signals

Revenue prediction in B2B isn’t about counting more clicks. It’s about identifying the conditions that exist when a deal is genuinely likely to happen. Those conditions are usually not individual behaviours. They’re patterns. And they’re often account-level, not lead-level. Think in terms of three layers: Fit: Should this account buy from you, in a realistic universe? Readiness: Are they in a buying window, or just browsing? Momentum: Are they moving forward in a way that resembles real deals you’ve won? Lead scoring usually over-indexes on layer two, and mostly measures the wrong thing. The best predictors combine all three.

Predictor 1: Verified ICP fit that sales actually agrees with

This sounds obvious. It’s not. Most teams have a “target customer” slide and a CRM full of everyone anyway.
Fit is still the strongest baseline predictor of revenue, but only if you define it like you mean it. Fit is not “company size and industry”. That’s demographic cosplay, too. Fit is: Do they have the problem you solve, at the scale you solve it, with the constraints you can handle? If your scoring model can’t clearly separate “perfect fit but quiet” from “loud but wrong fit”, you’re going to keep feeding sales junk. Fit should be a gate. If fit is poor, you don’t “nurture harder”. You deprioritise and stop wasting time.

Predictor 2: Buying group emergence, not individual activity

Revenue happens when a group forms around a decision. So the question is not “Did Jamie click the pricing page?” The question is “Is a buying group forming inside this account?” Buying group emergence looks like: Multiple people engaging from the same domain within a short window. Engagement coming from different functions (for example, marketing plus ops plus leadership). One person’s activity causing another person to appear (forwarding, internal sharing, follow-on visits). Conversations that shift from “what is this?” to “how would this work for us?” A single person binge-reading your blog can be a fan. Or a competitor. Or someone building a business case they will never get approved. Three to six relevant people showing up within a month is the kind of pattern that starts to smell like revenue. And no, this doesn’t require creepy tracking. Even with imperfect tracking, you can observe account-level patterns: Domains, meeting attendees, inbound sources, and the pace of interactions across contacts.

Predictor 3: Problem intensity signals, not content consumption

Content consumption is often a lagging indicator of curiosity. Problem intensity is closer to a leading indicator of action. Problem intensity looks like: Operational disruption: Migration, re-org, new leadership, tool consolidation, compliance deadlines.
Performance pressure: Pipeline targets missed, CAC creeping up, SDR efficiency dropping, conversion rates flat. Technical pressure: Systems breaking, data quality issues, workflow debt, integration failures. Internal urgency: Hiring for ops roles, firing agencies, changing tools, leadership mandates. These signals rarely show up as “clicked email #3”. They show up in conversations, in CRM notes, in support tickets, in inbound form fields, in job descriptions, and in the way prospects describe their situation. If your model can’t ingest these, at least design your process to capture them when they appear. A simple “why now?” field that sales actually fills, plus a few required dropdowns about current state, can outperform 50 points of email clicks.

Predictor 4: High-intent actions that cost the buyer something

A strong signal often has a cost. Not a monetary cost, but a time cost, a political cost, or a commitment cost. High-intent actions include: Requesting a tailored demo (not a generic “learn more”). Bringing colleagues to a call. Asking about implementation, security, procurement, or contract terms. Sharing internal constraints and timelines. Asking for a proposal, SOW, or business case help. Engaging in mutual planning: Next steps with dates, not vibes. These are harder to fake. They’re harder to do casually. If your scoring model treats “webinar attended” as equal to “introduced their IT lead”, you’ve built a points costume, not a revenue predictor.

Predictor 5: Momentum patterns that match your won deals

Most teams score leads as if every deal moves the same way. But you already have the answer to “what predicts revenue”: it’s in your closed-won history. Not as a generic attribution report. As a behavioural pattern. Take your last 30 closed-won deals and ask: What happened in the 30 to 90 days before the opportunity was created? Look for common sequences like: Multi-contact engagement followed by a consult request.
A spike in product-related page views followed by a stakeholder call. Partner referral plus leadership attendee on call one. Pricing conversation within two meetings of first contact. Security review triggered early, not late. Then look at your last 30 closed-lost deals and ask: What did they do that looked promising but went nowhere? You will often find patterns that your score currently rewards, even though they correlate with failure. That’s a fun day. Momentum is not “more activity”. Momentum is “the right activity in the right order”.

Replace “lead scoring” with “pipeline readiness”

If you want a disruptive idea that actually works, stop calling it lead scoring. Call it pipeline readiness. This simple naming shift forces the right questions. Pipeline readiness asks: Is this person or account likely to enter pipeline soon, and if they do, is it likely to progress? That pushes you away from vanity engagement and towards decision conditions. Pipeline readiness is built from a small set of signals that you can defend in a room with sales leadership. And crucially, it’s not one number. It’s a simple classification that drives action. For example: Not ready: Wrong fit or no buying window. Warming: Fit is strong, early buying group signals. Active: Clear buying window, high-intent actions present. Sales engaged: Meetings happening, mutual plan forming. Give sales something they can understand without a training session. Give marketing something they can improve without inventing new points.

The scoring model you can actually run without hating your life

Here’s a practical approach that doesn’t require perfection.

Step 1: Set a “fit gate” that blocks nonsense

Create a fit classification based on a handful of fields that are stable: Segment (size band that matches your pricing and delivery). Use case match (the problem you actually solve). Environment match (tech, complexity, constraints).
Exclusions (industries you don’t serve, geographies you can’t support, unrealistic budgets). Fit should be a simple label: strong, medium, weak. If you can’t confidently label fit, default to medium, not strong. Strong should be earned. Step 2: Track buying group emergence at the account level Stop pretending lead-level data alone can guide prioritisation. Set up a rolling 14-to-30-day view of account engagement across contacts: Number of engaged contacts from the domain. Variety of roles engaged. Recency and frequency of meaningful interactions. Meaningful interactions are not all clicks. Weight things that indicate effort: Form submissions, meeting requests, product documentation, implementation content, pricing, comparison pages, and replies. If your tracking is imperfect, still do it. Imperfect account-level signals can outperform perfect lead-level vanity metrics. Step 3: Define 5 to 7 “high intent” events and treat them as sacred Pick a short list. No more than seven. These should be actions that are clearly tied to revenue outcomes in your world. Examples: Demo request with a real company email. Meeting booked that includes more than one attendee. Request for pricing, proposal, or security information. Reply that answers “why now?” Product trial activation plus meaningful usage milestone (if relevant). Then design your process so these events trigger immediate, human follow-up. Not a nurture email. Not a “wait until they hit 100 points”. If you can’t act on the event within a day, don’t pretend the score matters. Step 4: Bake “momentum” into your sales process, not just your dashboards Momentum is often captured in conversation, not clicks. So build lightweight capture into the workflow: A required field for timeline (even if it’s “unknown”). A dropdown for current solution or status quo. A simple “primary pain” field. A checkbox for “buying group identified” with a minimum of two named stakeholders. This is not admin theatre.
It’s the information you need to predict revenue. If reps won’t fill it, that’s feedback: Either the fields are junk, or the process has no consequence. Fix that before you blame the CRM. The uncomfortable truth: The best predictor is still a good salesperson Marketing Ops can build cleaner signals, better routing, and smarter prioritisation. But you cannot automate your way out of fundamental sales quality. If sales follow-up is slow, inconsistent, or purely transactional, no scoring model will save you. If reps can’t diagnose pain, map stakeholders, and create urgency ethically, then the problem isn’t your score. It’s execution. The goal of pipeline readiness is to make good sales teams faster and more consistent, not to create “hot leads” that close themselves. So what should you do this week, not this quarter? Kill anything that feels like scoring for scoring’s sake. Then do three practical moves. First, audit your last 20 opportunities that became real pipeline and identify what happened immediately before they did. Not what your dashboards say. What actually happened. Second, reduce your scoring inputs. If your model uses 40 signals, you are not sophisticated. You are overwhelmed. Third, move from lead-level obsession to account-level readiness. If your business sells to buying committees and you are still scoring individuals like it’s 2014, you’re choosing to be wrong. You don’t need a perfect model. You need a model you can defend, a process you can run, and signals that match how revenue actually happens now. Because the job isn’t to create high-scoring leads. It’s to create deals. Discover our Services
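The fit gate and readiness tiers described in this article can be sketched as a small rules function rather than a points total. This is an illustrative sketch only: the field names, thresholds, and tier logic below are hypothetical examples, not a product feature, and real cut-offs should come from your own closed-won analysis.

```python
# Sketch of a pipeline-readiness classifier: a label, not a score.
# All field names and thresholds are hypothetical examples; derive
# real ones from your own closed-won and closed-lost history.

def classify_readiness(account: dict) -> str:
    """Return a readiness tier for an account."""
    fit = account.get("fit", "medium")                   # strong / medium / weak
    engaged = account.get("engaged_contacts", 0)         # distinct contacts, last 30 days
    high_intent = account.get("high_intent_events", 0)   # demo request, pricing ask, etc.
    meetings = account.get("meetings_booked", 0)

    if fit == "weak":
        return "not ready"      # the fit gate blocks nonsense first
    if meetings > 0 and high_intent > 0:
        return "sales engaged"  # meetings happening, mutual plan forming
    if high_intent > 0:
        return "active"         # clear buying window, high-intent actions present
    if fit == "strong" and engaged >= 2:
        return "warming"        # buying group starting to emerge
    return "not ready"

print(classify_readiness({"fit": "strong", "engaged_contacts": 3}))  # prints: warming
```

The point of the shape is that each branch maps to an action a sales team can take, which is easier to defend in a room than a 40-input score.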
- Building an AI-ready HubSpot: The foundations that pay off
AI in HubSpot is not a magic layer you sprinkle on top of chaos. It is more like a turbocharger. If the engine is healthy, you feel the lift immediately. If the engine is full of duct tape and mystery fluids, you just reach the next breakdown faster. The good news is you do not need a “perfect” portal to benefit. You need a set of foundations that make HubSpot reliable, predictable, and safe to automate. Do that, and the AI features become genuinely useful: Better routing, faster content drafts, quicker summaries, more consistent service responses, and fewer tasks that exist purely to keep humans busy. HubSpot’s current AI experience sits under its Breeze umbrella, including assistants and agents that work across marketing, sales, and service. The exact features available will depend on your subscription, region, and the feature itself, but the pattern is consistent: The best outcomes come from clean data, clear definitions, controlled access, and strong reference material. Start with the boring truth: AI can only work with what you give it Most teams think their HubSpot issues are “AI readiness” issues. They are usually “we do not agree on what anything means” issues. If your lifecycle stages are used as vibes rather than definitions, nobody (human or machine) can make good decisions. If sales reps log activities in five different ways, any summary will be incomplete. If a single contact can be both “customer” and “open deal” with no rules for which wins, automation becomes a lottery. AI works best when your CRM behaves like a system, not a scrapbook. So your first foundation is not an AI setting. It is operational clarity. Foundation 1: Define your customer language (and lock it) AI gets better when your business is consistent. Consistency starts with shared definitions. The minimum set of definitions you need You do not need a dictionary of every edge case.
You need a handful of “truth anchors” that everyone agrees on: Lifecycle stages: What triggers each stage, who owns the change, and what evidence is required. Keep it simple and auditable. Lead statuses: What each status means, what the next action is, and who is responsible. Deal stages: What must be true for a deal to move forward, and what data must be captured at each stage. Company ownership: When the company record is the source of truth versus the contact record. If you have those nailed down, you have created something precious: Context that does not change depending on who is looking at it. Then you make it enforceable. Use required properties at key moments, pipeline rules, and validation where appropriate. The goal is not to police people; it is to stop “creative interpretation” from leaking into your data model. Foundation 2: Make your data fit for automation, not just reporting Most CRM clean-up projects aim for prettier dashboards. AI readiness aims for dependable behaviour. You want your data to answer questions like: Can we trust this field enough to route a lead? Can we trust this stage enough to trigger a customer experience? Can we trust this source enough to measure performance? If the answer is “sometimes”, automation turns into support tickets. The practical fix: design for the decisions you want to automate Pick the highest value decisions you want HubSpot to make faster. Then work backwards to the data required. Examples: If you want an agent to resolve common support questions, you need a strong knowledge base and clear categorisation of issues. If you want automated lead qualification, you need consistent capture of company size, territory, intent signals, and a definition of what “qualified” means in your world. If you want sales summaries that actually help, you need activity logging that is standardised, plus key properties that capture deal reality rather than hope.
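“Work backwards from the decision” can be made concrete. As a hedged sketch of the lead-qualification example (the property names below are hypothetical illustrations, not HubSpot defaults), an automated routing step should refuse to act until every input it depends on is actually populated:

```python
# Sketch: only automate a routing decision when its required inputs exist.
# Property names ("company_size_band", "territory", "use_case") are
# hypothetical examples, not HubSpot default properties.

REQUIRED_FOR_ROUTING = ["company_size_band", "territory", "use_case"]

def ready_to_route(contact: dict) -> bool:
    """A decision is only automatable if every field it depends on is populated."""
    return all(contact.get(field) not in (None, "") for field in REQUIRED_FOR_ROUTING)

def route(contact: dict) -> str:
    if not ready_to_route(contact):
        return "human_review"  # missing data: fall back to a person, not a guess
    if contact["territory"] == "EMEA":
        return "emea_queue"
    return "default_queue"
```

The useful habit is the `REQUIRED_FOR_ROUTING` list itself: it is the explicit answer to “which fields must we trust before this decision runs on its own?”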
HubSpot is increasingly building AI into features that rely on your CRM context, so the more structured and dependable that context is, the more you get out of it. Foundation 3: Get serious about consent, sensitive data, and guardrails If your AI rollout ignores privacy, it will get blocked, quietly sabotaged, or turned off after one uncomfortable meeting. HubSpot has an AI Trust and safety approach that includes controls like data masking for personal information in select features. It also publishes information about its AI infrastructure and how it works with AI service providers. For example, HubSpot states it does not allow the AI service providers it uses for Subscription Services to train on customer data, and it aims to minimise retention, including “zero-day” retention where possible. That said, you still need to govern what you put into HubSpot and how features are used. Your job: Decide what should never be used as input Create a simple rule-set for teams: What types of data are sensitive in your business? Which properties should be treated as restricted? Where should sensitive information live if it should not be in HubSpot at all? HubSpot’s own documentation notes that if you enable Sensitive Data, the sensitive data properties you create will not be used to train Breeze models. It also notes that other customer data in your account may be used to train Breeze models, and that you can opt out by contacting HubSpot. So do not pretend this is a purely technical decision. Make it a policy decision, then configure around it. If you need to opt out, do it early, not after you have trained habits across the team. Foundation 4: Fix your permissions model before you add more power AI makes it easier to act quickly. That is the point. But it is also the risk. If everyone can change lifecycle stages, edit key properties, create workflows, and rewrite templates, you do not have a CRM. You have a shared Google Doc with better branding. 
At minimum: Limit who can create and publish workflows. Limit who can edit critical properties, pipelines, and lifecycle settings. Use teams and partitioning where appropriate. Separate experimentation from production where possible. This is not about distrust. It is about protecting the system so you can move faster with confidence. Foundation 5: Build your “knowledge spine” (this is where agents win or fail) If you want AI to help customers, prospects, or your internal team, it needs reference material that is accurate and current. HubSpot’s Breeze Customer Agent is positioned as a way to qualify leads, answer questions, and resolve support issues 24/7, and HubSpot provides guidance on training and deploying it. It also announced expanded availability for Customer Agent via HubSpot Credits for Pro and Enterprise customers starting June 2, 2025. None of that matters if your help content is thin, outdated, or written like it was created under duress. The knowledge spine is not “more articles” It is: A clear structure: categories, tags, and consistent naming. Coverage of the top issues: the questions customers ask repeatedly. A single source of truth: avoid three competing answers across PDFs, old pages, and random internal docs. A refresh habit: ownership, review cycles, and expiry rules. When that exists, AI becomes a multiplier. When it does not, AI becomes a confident way to spread confusion. Foundation 6: Stop treating integrations like plumbing AI readiness is integration readiness. Breeze is designed to work inside HubSpot, but it also benefits from a connected ecosystem, because your team’s reality is spread across email, calls, meetings, documents, and support conversations. HubSpot highlights that its AI capabilities can connect with your broader tools and use CRM context to help with meeting prep, content, and analysis. If your integration layer is unreliable, your AI layer will inherit that unreliability. 
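A minimal version of that reliability check can be a scheduled pass over your sync logs that counts errors and duplicates before they compound. This is a sketch under stated assumptions: the record shape (a `status` field and an `email` field per log entry) and the alert threshold are hypothetical, and any real implementation would read from your actual integration tooling.

```python
# Sketch: flag integration trouble before it becomes permanent data damage.
# The log-record shape and threshold below are hypothetical examples.

from collections import Counter

def sync_health(records: list[dict], error_threshold: int = 5) -> dict:
    """Summarise a batch of sync log records and flag anomalies."""
    statuses = Counter(r.get("status", "unknown") for r in records)
    emails = [r.get("email") for r in records if r.get("email")]
    duplicates = len(emails) - len(set(emails))  # same email synced more than once
    return {
        "errors": statuses.get("error", 0),
        "duplicates": duplicates,
        "alert": statuses.get("error", 0) >= error_threshold or duplicates > 0,
    }
```

Even a crude check like this gives the integration owner something to review on a cadence, instead of discovering overwrites when a rep complains.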
The foundations here look like: One integration owner per system. Clear field mapping and documentation. A change control process that prevents “quick fixes” from becoming permanent data damage. Monitoring for sync errors, duplicates, and unexpected overwrites. If you do not have this, you will spend your AI rollout explaining why “the system said” something that is not true. Foundation 7: Standardise activity capture (because summaries depend on it) Teams love the idea of automatic summaries, meeting prep, and record insights. Breeze Assistant is positioned to help with things like refining content, preparing for meetings, and summarising data inside HubSpot. But a summary is only as good as the underlying trail. So decide: What counts as a meaningful activity? How do you log it? Where does it live? What must be captured after key moments like discovery calls, demos, and implementation milestones? This is where most teams need fewer fields and more discipline. Do not add fifteen properties for “completeness”. Add five that you will actually maintain, and design the process so they are easy to keep current. Foundation 8: Create a safe sandbox for experimentation AI features encourage experimentation. That is fine. It is also how production portals get trashed. Build a simple rule: Experiment in a controlled space. Publish changes through an agreed process. Document what you ship and why. If you have access to sandboxes or separate environments, use them. If you do not, create operational sandboxes: Test lists, test pipelines, and staging assets that do not touch live routing and reporting. Your goal is to make it easy to try things without making your CRM feel unstable. Foundation 9: Make brand voice a system, not a person Content generation is one of the first things teams try, because it is immediate. But “AI-ready content” is not about pushing a button for a blog post. It is about capturing what makes your content sound like you, then making it reusable. 
That means: Clear messaging pillars. Approved claims and proof points. A library of examples that represent your voice. Rules for tone by channel: support, sales outreach, marketing emails, landing pages. Then you build templates and prompts around those assets. Do that, and your drafts get closer to publishable. Skip it, and you get content that sounds like it was written by a polite stranger who read your homepage once. Foundation 10: Design human-in-the-loop on purpose The fastest way to make AI “not pay off” is to either let it run unchecked, then panic when it makes a mistake, or force a manual review of everything, then wonder why nobody uses it. Pick your risk points and add review there. For example: Customer-facing responses might require a tighter approval model at first. Internal summaries can be low risk and rolled out broadly. Lead qualification can start as recommendations before it becomes automated routing. This matches how HubSpot positions trust and controls around its AI features: build confidence, understand the flows, then scale usage. What “AI-ready” looks like in practice When the foundations are in place, you will notice a few things quickly: Sales reps stop arguing with the CRM because it starts reflecting reality. Marketing stops building lists that need three disclaimers. Service stops re-answering the same questions. Ops stops playing whack-a-mole with workflows. At that point, AI becomes less of a headline and more of a daily advantage: Faster handoffs, better consistency, and fewer “how did this happen” moments. And the best part is that these foundations pay off even if you never touch a single AI feature. They make HubSpot perform better as a platform, full stop. Discover our Services