- Big News: TLS Certificate validity moving to 199 Days
Online security standards have changed: as of February 24, 2026, Certificate Authorities (CAs) will issue public TLS/SSL certificates with a maximum validity of 199 days (previously 397 days). This is an industry-wide update driven by the latest CA/Browser Forum Baseline Requirements, and it’s all about strengthening security across the web.

Why the Shorter Validity?

Shorter certificate lifespans enhance security in a few key ways:

- Reduced risk exposure if a private key is compromised
- Faster cryptographic agility, allowing the industry to adapt more quickly to evolving threats and standards
- Lower long-term impact of mis-issuance or outdated configurations

In short: Smaller validity windows = tighter security controls and faster innovation.

Important CA Cutoff Dates

Here’s when the new 199-day maximum goes into effect:

- DigiCert: February 24, 2026
- Sectigo: March 12, 2026

Any certificates issued on or after these dates will follow the new maximum validity rule.

Two Ways to Navigate the Change

You’ve got options; choose the workflow that best fits your team.

Path 1: Manual Re-Issuance (Business as Usual)

You can continue purchasing certificates as you do today (e.g., 1-year or 2-year products). The difference? You’ll need to reissue and reinstall the certificate every ~6 months until the order term is complete.

Best practice: Most SSL management services offer renewal notifications; ensure these are enabled in your account so you never miss a reissuance window.

This approach works well for teams already comfortable managing certificate lifecycle tasks manually.

Path 2: Embrace Automation

Want to set it and forget it? Automation is your friend. GoGetSSL currently offers ACME-based SSL certificates, enabling automated issuance and renewal. Once configured, your certificates can reissue seamlessly without manual intervention.

For enterprise-scale environments, consider DigiCert Trust Lifecycle Manager. It provides comprehensive certificate lifecycle management, including discovery, automation, policy enforcement, and centralized visibility.

Technical Considerations

Here’s what your development and operations teams should be aware of:

API Certificate Order Requests

After the cutoff dates, API requests specifying a validity greater than 199 days will still create an order for the requested duration. However, the issued certificate itself will be capped at 199 days. This design prevents API errors and ensures your public TLS/SSL orders continue processing smoothly.

Pro tip: Use the getOrderStatus detail response parameters to monitor the difference between:

- The order validity term
- The actual certificate expiration date

Tracking both values will be important for lifecycle planning.
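To keep an eye on this in practice, a small monitoring script can poll the order status and compare the two values. The sketch below is illustrative only: the API base URL, endpoint path, and response field names (valid_till, certificate_expiry) are assumptions standing in for whatever your CA or reseller’s getOrderStatus response actually returns, so check your provider’s API documentation before relying on anything like it.

```python
# Minimal sketch: compare the order term against the actual certificate expiry.
# Endpoint path and field names ("valid_till", "certificate_expiry") are
# illustrative placeholders; consult your CA/reseller API docs for the real
# getOrderStatus response schema.
from datetime import datetime
import requests

API_BASE = "https://api.example-ssl-provider.com"  # placeholder base URL

def check_order(order_id: str, api_key: str) -> None:
    resp = requests.get(
        f"{API_BASE}/orders/status/{order_id}",
        params={"auth_key": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()

    # Order term: how long the order you paid for runs (e.g. 1 or 2 years).
    order_ends = datetime.strptime(data["valid_till"], "%Y-%m-%d")
    # Certificate lifetime: capped at 199 days after the cutoff dates.
    cert_expires = datetime.strptime(data["certificate_expiry"], "%Y-%m-%d")

    if cert_expires < order_ends:
        days_left = (cert_expires - datetime.now()).days
        print(f"Order {order_id}: reissue needed, cert expires in {days_left} "
              f"days (order itself runs until {order_ends:%Y-%m-%d}).")
```

Run on a schedule, a check like this gives you the reissuance warning that a 1-year order carrying a 199-day certificate now requires.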
DigiCert Validation Reuse Changes

DigiCert customers should also note adjustments to validation reuse periods:

- Domain Validation (DV) reuse: changing from 397 days → 199 days (effective February 24, 2026)
- Organization Validation (OV) reuse: changing from 825 days → 397 days

These updates align validation lifecycles more closely with the new certificate validity standards and reinforce stronger identity assurance practices.

What this means for you

This isn’t just a policy change; it’s a strategic shift toward a more secure and agile internet. Continue managing certificates manually (with more frequent reissuance), or transition to automation and streamline your operations. Some MOps platforms already have features enabled to keep it all in one place. For example, Eloqua offers Automated Certificate Management at no additional cost.

Either way, planning ahead will ensure a smooth transition. If you’d like help evaluating or implementing automation options for your SSL certificates or updating your certificate management strategy, we’re here to support you.

Discover our Email Services
- Guardrails aren’t optional when the tool can speak for you...
A few years ago, most marketing mistakes were slow mistakes. Someone wrote the email, someone proofed it, someone hit send. If it went wrong, it went wrong at human speed. You had time to catch the awkward phrasing, the wrong link, the “Dear {FirstName}” horror. The damage was real, but it was usually contained to a campaign, a segment, a moment.

Now you’ve got tools that can speak for you. Not just suggest, not just draft, not just “help”. Speak. In your tone. Under your brand. At scale. Across channels. With alarming confidence.

That changes the deal. When a tool can produce customer-facing language, take action in systems, and create outputs that look official, you’re no longer talking about productivity. You’re talking about authority. And if you hand authority to a system without guardrails, you are effectively outsourcing your standards to a probability machine and hoping your customers never notice. They will.

The uncomfortable truth is that AI in Marketing Operations doesn’t fail like software used to fail. Traditional automation breaks loudly. Integrations fail, jobs error out, workflows stop. You get alerts. You get tickets. You get something you can point at.

AI fails quietly. It produces something that looks plausible. It produces something that sounds like you. It produces something that passes a quick skim. And then it slips into the world and does its damage in the most painful way: It looks like you meant it.

This is why guardrails are not optional. Not because the tool is evil. Not because everyone should panic. Because once the tool speaks, the brand is accountable.

“It’s just a draft” is a comforting lie

Most teams start with the safest narrative possible. The tool is “just drafting”. Someone will review it. Nothing goes out unapproved. It is assistance, not autonomy. And at the start, that is true.

But the reality of modern marketing is volume. Too many emails, too many landing pages, too many ads, too many variations, too many segments, too many stakeholders. When the tool makes output easier, you produce more output. When you produce more output, review becomes thinner. When review becomes thinner, the definition of “approved” turns into “nobody complained”.

That is how risk creeps in. Not through one dramatic decision to let the robot run your marketing. Through a thousand tiny shortcuts made by busy people who are rewarded for speed, not for diligence. A draft becomes a “close enough”. A “close enough” becomes a template. A template becomes a system. And then one day your brand voice is quietly shaped by whatever the model thinks sounds professional, persuasive, or reassuring.

If you’ve ever read a company message that felt oddly hollow, oddly generic, oddly not human, you already know what that looks like. Customers do too. They might not say “this was generated”, but they feel the distance. They feel the lack of accountability. They feel the absence of a real person. In a market where trust is already fragile, that’s not a minor issue. It is the issue.

When the tool speaks, it represents your intent

This is where the conversation needs to get more serious than “accuracy”. Accuracy matters, of course. Nobody wants hallucinated features or invented pricing. But accuracy is only one slice of the problem. The bigger problem is implied intent. When your brand sends something, customers assume it reflects what you believe, what you value, how you operate, and how you’ll treat them. The tone matters. The promises matter. The certainty matters.
The choice of words matters. The absence of empathy matters.

AI is very good at sounding certain. It is very good at smoothing rough edges into confident statements. It is very good at making things sound resolved even when they’re not. That is a dangerous trait in a customer context. Because confidence is persuasive, and persuasion under your brand name is a promise.

If you accidentally overpromise, if you accidentally mislead, if you accidentally claim compliance you haven’t earned, the customer doesn’t blame the tool. They blame you. They should. It is your logo at the top of the email. Your name on the website. Your ad account paying to put the message in front of them. Your sales team following up as if the claim was deliberate.

Guardrails are how you protect intent. They are how you stop the tool from speaking with more authority than your business can actually support.

The new failure mode is “looks fine”

This is the part that catches even smart teams out. Most governance efforts are designed for obvious failures. Broken processes. Missing approvals. Wrong recipients. Compliance red flags. Things you can spot in a checklist.

AI’s most common failure mode is more subtle: It produces output that looks fine at a glance and is wrong in a way that matters later. It might be wrong legally. It might be wrong commercially. It might be wrong ethically. It might be wrong in tone. It might be wrong in a way that sets the wrong expectation. It might take a sensitive topic and sand it down into corporate cheerfulness, which feels disrespectful. It might take a complex product limitation and simplify it into something misleading. It might take a customer concern and respond with “we value your feedback”, which is the fastest way to sound like you don’t.

And because the output looks polished, it often bypasses the kind of scrutiny that a messy human draft would invite. Humans are suspicious of imperfect writing. We notice it. We challenge it. We ask questions. AI writing often arrives wearing a suit. People assume it has done the thinking because it has done the formatting. That’s how you end up publishing something that nobody would have consciously written, but everyone accidentally approved.

Speed makes small mistakes expensive

Marketing has always had risk. But speed changes the economics of risk. When a human team writes slowly, mistakes are slower too. When you have the ability to produce ten variants instead of one, you also have ten chances to be wrong. When you can spin up campaigns faster, you also shorten the time between a decision and the moment it reaches a customer. Less time means less reflection. Less reflection means more accidents. And the tool does not get tired, so you keep going.

This is where teams often miss the point of guardrails. They think guardrails exist to slow things down. In reality, guardrails exist to allow speed without gambling your reputation every time you hit publish. The teams who win with AI will not be the ones who use it the most. They will be the ones who use it with enough discipline that they can trust their own output again.

Your brand voice is an asset, not a formatting preference

A lot of organisations treat brand voice as a style guide. A few adjectives. A list of do’s and don’ts. Maybe a handful of examples. Useful, but not sacred. When AI enters the picture, brand voice becomes something else. It becomes the training data for your outward identity. The guardrails around how you speak are no longer “nice to have”.
They are the constraints that stop your company from slowly turning into generic marketing sludge.

Because AI has a default voice. It’s the voice of polite certainty. Professional, helpful, mildly enthusiastic, oddly uncontroversial. That voice is fine for a toaster manual. It is terrible for differentiation. If your competitors use the same tools with the same defaults, you will all start sounding the same. Same phrases, same cadence, same vague confidence, same “we are committed to delivering value”. Customers will not remember you for that. They will remember you for the moments when your communication felt real, specific, and accountable.

Guardrails are not only about preventing disaster. They are also about preventing dilution. They protect what makes you recognisable.

The risk isn’t only what the tool says. It’s what it makes people do.

Here’s the part many teams ignore because it feels less glamorous than content. Once AI is embedded in workflows, it stops being a writing assistant and starts being a decision shaper. It changes what people choose to ship, what they choose to test, what they choose to claim, what they choose to ignore. If the tool reliably produces something “good enough”, you stop pushing for “great”. If the tool can generate five angles quickly, you stop thinking deeply about the one angle that truly matters. If the tool can answer customer questions instantly, you stop investing in better documentation and clearer product truth.

The tool doesn’t just produce content. It changes standards. That is why governance and guardrails sit in Marketing Operations, not only in legal or IT. This is an operational quality problem. It is about maintaining standards under acceleration.

Customers don’t care how it happened

When something goes wrong, organisations love to explain the internal story. It was an experiment. It was a vendor issue. It was a misconfiguration. It was a one-off. It was an edge case. It was an isolated incident. It was unintended.

Customers do not care. They care that you spoke to them in a way that felt careless, misleading, or disrespectful. They care that you used their data in a way you cannot clearly explain. They care that your messaging implied something that was not true. They care that you are now backpedalling. The moment you start defending the process instead of owning the outcome, you lose more trust. Because accountability is the whole point of a brand. Guardrails are how you avoid needing excuses in the first place.

Guardrails are not a policy document nobody reads

Let’s be blunt. A policy document is not a guardrail. It is a wish. Teams love policies because they create the feeling of control. They also love them because they can be written once and then forgotten. They become a box ticked. “We have an AI policy”. Great. Where is it used? Who follows it? What happens when someone ignores it? How do you know?

Real guardrails show up where work happens. In the tools. In the templates. In the workflows. In the approvals. In the way you capture decisions. In the way you log what was generated and why. In the way you constrain what is allowed to be said in certain contexts. In the way you enforce brand voice and claims. If you cannot point to the guardrails inside the process, you don’t have guardrails. You have vibes. And vibes are a terrible risk strategy.
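To make “guardrails inside the process” concrete, here is a minimal sketch of one such control: a pre-publish check that rejects generated copy containing claims the business cannot support. The banned patterns are illustrative examples, not a complete policy.

```python
# Minimal sketch of an in-workflow guardrail: a pre-publish check that
# rejects AI-generated copy containing claims the business cannot support.
# The rules below are illustrative examples, not a complete policy.
import re

BANNED_CLAIMS = [
    r"\bguaranteed\b",
    r"\b100% (secure|compliant)\b",
    r"\bGDPR.certified\b",   # certification claim that does not exist
]

def review_copy(text: str) -> list[str]:
    """Return a list of guardrail violations; an empty list means pass."""
    violations = []
    for pattern in BANNED_CLAIMS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            violations.append(f"Unapproved claim matches: {pattern}")
    return violations

draft = "Our platform is 100% secure and GDPR-certified."
problems = review_copy(draft)
if problems:
    print("Blocked before publish:")
    for p in problems:
        print(" -", p)
```

The point is not the regex. The point is that the rule runs inside the publishing workflow, before anything ships, instead of living in a PDF nobody opens.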
The irony: Guardrails make AI more useful

The fear some teams have is that guardrails will reduce the value of AI. That constraints will kill creativity. That approvals will slow delivery. That governance will turn an exciting tool into another corporate process.

In practice, the opposite happens. Without guardrails, teams never fully trust what they generate. They second-guess, they rewrite, they hesitate, they argue, they avoid using the tool for anything important. They keep it in the “nice to have” corner. They treat it like a toy. With guardrails, the tool becomes reliable. Not perfect, but reliable enough that teams can use it in real work without constantly worrying that it will embarrass them.

Constraints create confidence. Confidence creates adoption. Adoption creates impact. The best marketing ops teams understand this instinctively. They know that freedom without control is not freedom. It is chaos.

This is the moment to decide what kind of organisation you are

AI is forcing a choice that many companies have been postponing for years. Do you operate with standards, or do you operate with output? Do you want to be trusted, or do you want to be fast? Do you want your marketing to be a real representation of your business, or a high-volume content factory that occasionally hits the right note?

Because once the tool can speak for you, every weak spot in your operation becomes louder. Every unclear rule becomes an argument. Every missing owner becomes a gap. Every undocumented decision becomes a risk.

Some teams will respond by pretending it is fine. They will let the tool run, then scramble when something breaks. They will call it learning. Other teams will respond by putting simple, sensible constraints in place that protect customers and protect the brand, while still getting the productivity gains that made them adopt AI in the first place. That second group will be the one that looks competent in two years. Not because they had better tools, but because they had better discipline. And discipline is the real advantage right now.

AI can speak for you. That is powerful. It is also a responsibility. Guardrails aren’t optional, not because you’re afraid of the tool, but because you respect what it means to speak under your name.

Discover our AI Services
- AI Governance is not optional, it is the price of using the tool
Every Marketing Operations team is having the same conversation right now. Someone has shipped a chatbot onto the website. Someone else is feeding prospect data into a model to “improve targeting”. A third person has quietly wired an AI assistant into the CRM to auto-log activities, write follow-ups, and “clean” fields. And then the organisation pats itself on the back for being modern.

But if you are using AI in production without governance, you are not innovative. You are careless. You are outsourcing risk to your future self, your legal team, and your customers. You are also guaranteeing a messy internal backlash later, because the first time it misfires you will watch the business slam the brakes on everything.

Governance is not paperwork. It is the operating system that lets you use AI without turning your MarTech stack into a liability.

Why Marketing Ops is uniquely exposed

Marketing Ops sits in the blast radius of AI for three reasons.

First, you handle a ridiculous amount of personal data, often across multiple systems, with varying consent states and hazy provenance. That is not a moral judgement; it is the reality of modern marketing.

Second, your work touches revenue. When AI changes what gets sent, scored, routed, or reported, you are not “testing a feature”. You are changing the way the company makes money.

Third, Marketing Ops tends to be the place where “quick wins” become permanent. A prototype becomes a workflow. A workflow becomes business as usual. Nobody writes down what it does, why it does it, or what it is allowed to touch. Then one day something breaks and everyone acts shocked.

AI accelerates that pattern. It automates decisions. It generates content at scale. It can behave differently tomorrow than it did today. That is why governance matters more here than in a team building slide decks.

Guardrails are not “compliance”, they are performance

The common argument against governance is that it slows teams down. That only sounds true if you have never lived through the alternative: Chaos, rework, and a six-month freeze after a public or internal incident.

AI guardrails speed you up because they remove ambiguity. People know what tools are approved, what data they can use, what needs review, and what gets logged. They stop you shipping the same mistakes over and over again with increasing confidence.

The NIST AI Risk Management Framework is a good way to think about this. It frames risk management around governance and lifecycle management, not one-time approvals. The core idea is simple: Govern the approach, map the context, measure the risks, manage the controls. If you have no GOVERN function, the rest becomes theatre.

ISO/IEC 42001 points in the same direction from a management system angle: You need a structured way to establish, run, and continually improve how AI is used. This is not about one policy PDF. It is about ownership, controls, and continuous improvement.

The uncomfortable truth about “we are just using it for marketing”

A lot of teams still talk about marketing use cases as if they are low stakes. They are not. If AI personalises a message, decides who gets an offer, changes lead routing, or rewrites copy based on customer data, you are in the realm of fairness, transparency, and accountability. You are also in the realm of data protection obligations, because personal data is often in the loop, even when people pretend it is not.

Regulators are not buying the “it is just marketing” line either.
The UK ICO’s guidance on AI and data protection is explicit about accountability and governance, and it ties them to concrete practices like impact assessments, documenting decision making, and involving appropriate stakeholders. In Europe, the EU AI Act has put “trustworthy AI” into law, with a risk-based approach and requirements that include risk management, data governance, transparency, and human oversight depending on the system and risk category. Whether or not your specific use case is classified as high risk, the direction of travel is clear. The bar is rising, and “we did not think about it” is not a defence.

What good governance actually looks like in Marketing Ops

Governance fails when it is vague. “Be responsible” is not a control. It is a hope. Good governance is operational. It answers questions people actually have to answer on a Tuesday afternoon, under pressure, with a campaign deadline looming. Here is what we tend to come across in a Marketing Ops context.

1. A clear inventory of AI use cases

If you do not know where AI is used, you cannot govern it. Most organisations already have shadow AI, including browser-based tools, plug-ins, CRM add-ons, and “temporary” scripts. A proper inventory is not a spreadsheet that dies after week one. It is a living register: What the use case is, what system it touches, what data is involved, what model or vendor is used, what the failure modes are, and who owns it. A minimal register entry is sketched below.
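As an illustration, a register entry can be as simple as a typed record. The fields below are assumptions about what a minimal entry needs, not a standard schema; the discipline is that every field is filled in for every use case, and the register is reviewed like code.

```python
# Minimal sketch of a living AI use-case register entry. Field names are
# illustrative; the point is that each use case has a named owner, a data
# scope, and known failure modes written down.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str                    # "website chatbot", "lead enrichment", ...
    system: str                  # which platform or tool hosts it
    data_in_scope: list[str]     # data categories allowed into the tool
    model_or_vendor: str         # who provides the model
    failure_modes: list[str]     # what "wrong" looks like for this use case
    owner: str                   # a named person, not a committee
    review_required: bool = True # does a human check outputs before action?

register: list[AIUseCase] = [
    AIUseCase(
        name="Email subject line drafting",
        system="Marketing automation platform",
        data_in_scope=["campaign brief"],  # deliberately no personal data
        model_or_vendor="Vendor-hosted LLM",
        failure_modes=["off-brand tone", "unsubstantiated claims"],
        owner="jane.doe",
    ),
]
```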
2. Data boundaries that are blunt, not poetic

You need rules that can be enforced, not mission statements. What data is allowed into prompts and workflows. What must be masked or excluded. What cannot be used at all. How retention works. What happens to data sent to third parties. The UK ICO has been clear that organisations should think seriously about governance and accountability when processing personal data in AI systems, including assessing risks and documenting the rationale. That starts with knowing what you are feeding into the machine.

3. Human oversight that is real

“Human in the loop” is often marketing theatre. People claim oversight exists, but in practice nobody checks anything until it goes wrong. Real oversight means defining which outputs are allowed to run automatically, which need review, and what “review” actually means. It also means training reviewers to spot the failure modes, not just grammar errors. The EU AI Act explicitly points to human oversight as a core requirement in higher-risk contexts, because systems can fail in ways humans do not anticipate. Even if your specific use case is not formally high risk, the principle still applies.

4. Logging, traceability, and auditability

This is the part Marketing Ops teams avoid because it feels technical. It is also the part that saves you when someone asks, “Why did this customer receive that message?” or “Why did this lead get marked as unqualified?” You need to be able to trace inputs, prompts, outputs, and downstream actions. That includes versioning of prompts and workflows, so you can explain behaviour changes over time. Without logs, you cannot learn. You also cannot defend yourself.

5. Vendor and model controls

Most teams do not “build AI”. They buy it. That does not reduce responsibility. It changes the governance surface. You need procurement standards for AI vendors, clarity on data usage, model training policies, retention, and security. You need to know what happens when the vendor changes the model. You need exit plans. You need to treat AI features like critical infrastructure, not a shiny add-on. ISO/IEC 42001 is useful here because it is designed for organisations providing or using AI-based products or services, with an emphasis on responsible use and management system controls.

6. A governance cadence, not a one-time workshop

AI governance is not a launch task. It is a loop. New use cases appear. Old ones change. Vendors update. Regulations evolve. Teams find new ways to break things. If governance is a quarterly committee that nobody takes seriously, it will fail. If it is embedded in change control, release management, and campaign operations, it becomes normal. Risk management should apply across the lifecycle, not just at the start, and lifecycle framing matters a lot in Marketing Ops because systems and workflows are constantly evolving.

The three failure modes that guardrails prevent

Let’s make this painfully practical. Guardrails stop three common disasters.

First, data leakage. Someone pastes customer data into a tool they should not be using. Someone connects a plugin that exports data to a vendor that stores it indefinitely. Someone uses a feature without understanding where the data goes. Regulators have been increasingly vocal about privacy harms in AI contexts, and not just in abstract terms.

Second, hallucinated operations. AI makes up a field value. It confidently “dedupes” records that should not be merged. It assigns a lead score based on nonsense. It rewrites copy and introduces claims you cannot substantiate. Marketing Ops teams love automation, which means they are especially vulnerable to quietly automating errors at scale.

Third, accountability collapse. When things go wrong, nobody owns it. The vendor blames configuration. The marketer blames the tool. The Ops team blames “the model”. Leadership responds by banning everything. The outcome is predictable: Fear replaces learning. Governance is how you avoid turning one mistake into a full organisational retreat.

“But we want to move fast”

Move fast is fine. Move fast with rules. The teams that win with AI are not the ones with the most experiments. They are the ones that can experiment safely, keep what works, and kill what does not without drama. Guardrails are what make that possible.

A strong governance setup does not mean every prompt needs legal approval. It means you have sensible tiers. Low-risk tasks, like drafting internal summaries or rewriting existing public copy, can have light controls. Higher-risk tasks, like using personal data for personalisation, changing routing, or automating outbound messages, should have stronger controls: Defined review, logging, and monitoring. This is exactly how risk-based frameworks are designed to work. The EU AI Act is built around risk categories, and NIST’s RMF is intentionally flexible and context-driven.

What to do next if your “governance” is basically vibes

If you are reading this and realising your current stance is somewhere between “ad hoc” and “hope”, you are normal. Most organisations are there. The fix is not a 40-page policy. The fix is a working system. Start with a short inventory of every AI touchpoint in your marketing stack. Include the unofficial ones. Define data boundaries in plain language and make them enforceable. Create an approval and oversight model that matches risk, with clear ownership. Implement logging and traceability so you can explain what happened (a minimal sketch follows below). Set vendor standards so you are not surprised by where data goes or what changes.
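For the logging and traceability step, here is a minimal sketch of what tracing inputs, prompts, outputs, and downstream actions can look like. The schema is an illustrative assumption rather than a standard; hashing the inputs is one way to keep personal data out of the log itself while still making runs comparable.

```python
# Minimal sketch of AI output logging for traceability. Assumes an
# append-only store; the fields are illustrative, not a standard schema.
import hashlib
import json
from datetime import datetime, timezone

def log_ai_event(store_path: str, *, use_case: str, prompt_version: str,
                 model: str, inputs: dict, output: str, action: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,              # e.g. "lead-routing-notes"
        "prompt_version": prompt_version,  # version prompts like code
        "model": model,                    # vendor/model identifier
        # Hash rather than store raw inputs, so personal data stays
        # out of the log while runs remain comparable.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "downstream_action": action,       # what the workflow did with it
    }
    with open(store_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```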
Then run it as a process, not a project.

If that sounds unsexy, good. Most things that save companies from expensive mistakes are unsexy. Marketing Ops is already the team that makes the unsexy work pay off. AI should not be the exception.

Guardrails are not the thing stopping you from getting value from AI. Guardrails are the thing that lets you keep the value once you find it.

Find out how we can help you with your AI governance and guardrails: Discover our AI Services
- Stack rationalisation is not downsizing. It’s a MarTech ROI rescue.
Every few years, a Marketing Ops team looks at its technology stack and has the same realisation you get when you open the “misc” kitchen drawer. Nothing in there is individually a bad idea. It’s just… a lot. Half of it does the same job. Some of it hasn’t been used since the last merger. One item is only kept because a former colleague swore it was “mission critical”, and nobody’s brave enough to ask what it actually does.

That’s your MarTech stack. And here’s the uncomfortable truth: Most stacks don’t fail because the tools are bad. They fail because the stack stopped being designed and started being collected.

The result is predictable. Costs creep up. Adoption fragments. Data gets weird. Reporting becomes interpretive dance. The team spends more time keeping systems alive than using them to create pipeline. Then someone says, “We need to rationalise the stack,” and everyone hears, “We’re about to take your toys away.”

But that’s not what rationalisation should be. Done properly, it’s not a finance-led haircut. It’s a performance rescue. It’s how you turn “we have loads of tools” into “we get value from what we pay for”. It’s also one of the fastest ways to regain executive trust, because nothing screams “adult supervision” like knowing what you own, why you own it, and what it’s delivering.

The ROI myth that keeps stacks bloated

MarTech ROI is usually treated like a scoreboard. We bought tool X. Tool X has dashboard Y. Therefore we can report ROI. But “reporting ROI” and “having ROI” are not the same thing.

Most MarTech spend is justified with a story, not evidence. A story about efficiency. A story about personalisation. A story about scale. A story about being “data-driven”. Great stories, honestly. Very fundable.

Then reality arrives. The tool requires clean data you don’t have. It needs an integration nobody scoped. It assumes a process you’ve never standardised. It gets deployed halfway, then the team gets busy, then six months pass, then renewal comes around and you renew because the alternative is admitting you don’t know what you’re doing.

That is not a tool problem. That is an operating model problem. And the longer it goes on, the harder it gets to unwind, because the stack becomes political. People attach their identity to platforms. Procurement decisions become legacy monuments. Usage becomes impossible to measure because “using it” might mean anything from logging in once a quarter to running mission-critical workflows.

So the stack grows. Overlaps multiply. ROI gets fuzzier. Everyone gets used to it. Until a CFO asks a very fair question: “What are we paying for, and what are we getting back?” If you can’t answer that crisply, you don’t have a stack. You have a subscription museum.

Rationalisation isn’t removing tools. It’s restoring design.

A rationalised stack is not the smallest possible number of tools. It’s the fewest tools required to reliably execute your strategy. That’s a huge difference. Because the goal is not austerity. The goal is performance. It’s speed, consistency, measurable outcomes, and reduced dependency on heroics.

When stacks are bloated, teams start compensating with workarounds and manual effort. They build fragile automations. They export spreadsheets. They invent processes to deal with tool limitations instead of choosing tools that fit the process.

Rationalisation reverses that. It gets you back to intentional design: What do we actually need to do to win? What capabilities matter most to deliver that?
What is the simplest architecture that supports those capabilities? Where are we paying twice for the same outcome? Where do we have “features” but not “adoption”? Where does the data fall apart?

This is why stack rationalisation is not primarily a procurement exercise. It’s a strategy and operations exercise that happens to result in procurement changes.

The hidden cost: Operational drag

Most teams underestimate how expensive complexity is, because it doesn’t show up as a single line item. Complexity costs you in:

- Time: Training, troubleshooting, triage, and all the “small” tasks that become constant.
- Speed: Every new campaign takes longer because the workflow touches more systems and more handoffs.
- Risk: Data privacy, consent, access control, and governance failures become more likely as systems multiply.
- Insight: Reporting degrades because definitions split across tools and no one trusts the numbers.
- Morale: Nothing kills motivation like working inside a stack that feels unreliable.

If you want a simple definition of “MarTech debt”, it’s the gap between the stack you have and the stack your team can actually operate confidently. Paying off that debt is where ROI rescue starts.

Why most rationalisation attempts fail

Plenty of teams try to rationalise. Many even reduce vendor count. And then, weirdly, not much improves. That happens when rationalisation is done as a cleanup rather than a redesign. Common failure patterns:

- It becomes a cost-only exercise. If the main goal is to cut spend, the team will keep anything that looks defensible and ditch anything that looks optional, regardless of whether the “optional” tool is the one actually driving outcomes.
- It ignores workflows. Tools get evaluated in isolation, not based on the end-to-end journey they support. You can’t rationalise your stack if you can’t describe your core workflows.
- It confuses usage with value. A heavily used tool might still be a net negative if it drives manual work, fragmented data, or duplicated processes.
- It avoids hard questions. Teams keep tools because “someone uses it”, but nobody can define the value, the owner, the success metrics, or the alternative.
- It forgets change management. Removing tools is easy. Removing habits is hard. If you don’t redesign workflows and retrain the team, the old problems will reappear inside the “new” stack.

If you want stack rationalisation to stick, it has to be tied to operational clarity and measurable outcomes.

The ROI rescue approach: Stop measuring tools, start measuring capabilities

A better way to think about MarTech ROI is this: You don’t buy tools. You buy capabilities. Tools are just one way to deliver those capabilities. So instead of asking, “What does this platform do?”, ask “What capability does this enable, and how will we prove it?”

Capabilities might include:

- Reliable lifecycle email execution.
- Accurate attribution you trust enough to bet budget on.
- Lead management that doesn’t create sales distrust.
- Consent and preference management that reduces risk.
- Personalisation that actually moves conversion rates.
- Reporting that doesn’t require a therapist.

Once you frame it this way, rationalisation becomes clearer. Overlap is not “two tools do similar things”. Overlap is “we’re paying twice for the same capability”. And gaps become obvious too. Sometimes teams have ten tools yet still can’t do one critical thing consistently because the foundations are missing: Data, governance, process ownership.

That’s why the ROI rescue is not simply consolidation.
It’s capability alignment.

Step one: Name the outcomes you’re trying to buy

Before you touch vendors, get specific about what the business expects MOPS to deliver, and what MOPS expects the stack to make easier. Not vague outcomes like “better engagement”. Concrete outcomes like:

- Reduce campaign launch time from ten days to five.
- Increase lead-to-meeting conversion rate by 15 percent.
- Improve lifecycle email contribution to pipeline by X.
- Increase MQL to SQL acceptance by Y.
- Reduce manual list pulls and CSV-based processes by Z.

If you can’t define outcomes, the stack will keep being evaluated based on opinion and politics. The fastest way to kill a rationalisation project is to make it about which tools people like.

Step two: Map the workflows that create value

You don’t need a massive process library. You need the handful of workflows where performance lives. For most B2B teams, that’s usually lead capture to routing, lifecycle email and nurture, campaign execution and measurement, attribution and reporting, data enrichment and deduplication, consent and preference management, and integration between CRM, MAP, and analytics.

Map those workflows at a human level, not at a vendor feature level. Who does what, when, with what inputs, and where the system should automate vs where humans need control. This is where you’ll find the truth. The truth is usually that the stack is not too big. It’s too inconsistent. It allows different parts of the org to operate different versions of “the process”, which creates downstream chaos. Rationalisation should standardise workflows, not just reduce logos on a slide.

Step three: Assign ownership, or accept you’re buying waste

Tools without owners become toys, then become liabilities. Every core system and every core workflow needs an accountable owner. Not a committee. A named person. Ownership means: Defining standards. Managing changes. Measuring performance. Training users. Deciding what gets built and what gets blocked.

If nobody owns it, you’re not buying a platform. You’re buying entropy. This is also where ROI becomes measurable. You can’t prove ROI on something nobody is responsible for improving.

Step four: Create a “keep, kill, consolidate, fix foundations” decision model

This is where teams expect a dramatic tool-culling session. Sometimes you will cut tools. Often you should. But more often, the biggest ROI is in “fix foundations”. Because you can consolidate your stack beautifully and still get terrible results if: Data is inconsistent. Lifecycle definitions are unclear. UTM governance is non-existent. CRM hygiene is a fantasy. Sales stages and lead statuses mean different things to different people. Consent tracking is messy.

Rationalisation should result in decisions across four buckets (a sketch of what such a decision record can look like follows below):

- Keep: Tools that directly support priority capabilities and are adopted properly.
- Kill: Tools that are unused, redundant, or never delivered the promised capability.
- Consolidate: Overlap where one tool can reasonably replace another without wrecking workflows.
- Fix foundations: Areas where the tool is fine, but the operating model is broken.

That last bucket is where ROI rescue often lives. Because you can save money by cutting a tool. You can make money by making the stack work.
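If it helps to see the decision model as a working artefact rather than a slide, here is a minimal sketch of a decision record. The tools, buckets, and rationales are invented for illustration; the discipline is that every tool gets an explicit bucket and a written reason.

```python
# Minimal sketch of a rationalisation decision record: every tool gets an
# explicit bucket and a written rationale. Tool names are made up.
from enum import Enum

class Decision(Enum):
    KEEP = "keep"
    KILL = "kill"
    CONSOLIDATE = "consolidate"
    FIX_FOUNDATIONS = "fix foundations"

decisions = [
    # (tool, capability it should deliver, decision, rationale)
    ("WebinarTool A", "event execution", Decision.KEEP,
     "adopted, owned, tied to pipeline reporting"),
    ("Enrichment B", "data enrichment", Decision.KILL,
     "unused for six months, no owner, duplicates a CRM-native feature"),
    ("Email add-on C", "lifecycle email", Decision.CONSOLIDATE,
     "the MAP already covers this workflow"),
    ("Attribution D", "attribution", Decision.FIX_FOUNDATIONS,
     "tool is fine; UTM governance and stage definitions are not"),
]

for tool, capability, decision, why in decisions:
    print(f"{tool:15} [{capability}] -> {decision.value}: {why}")
```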
Step five: Measure a few things that actually matter

ROI is not platform cost divided by vibes. Pick metrics that connect stack performance to business performance and operational efficiency. Examples that tend to expose the truth quickly include time-to-launch for campaigns, percentage of leads routed correctly within SLA, sales acceptance rate of leads, percentage of lifecycle emails using approved templates and tracking, duplicate rate in CRM, and percentage of records with required fields. Then add softer checks such as report reliability (do teams trust the dashboards enough to use them in decisions?) and support load (how many hours per week are spent troubleshooting basic execution issues?).

These metrics do two important things. They prove value when things improve. And they make it painfully obvious when a tool is not the problem.

The part nobody likes: Rationalisation changes power

This is why it’s hard. A rationalised stack usually means fewer exceptions, more standards, and clearer governance. Less “I do it my way”. That feels restrictive if you’re used to improvising. But it’s the difference between creativity and chaos. High-performing teams don’t move faster because they have more tools. They move faster because they have fewer decisions to remake every week. Standards create speed. Governance creates confidence. Clarity creates adoption. And adoption is the thing that turns software into ROI.

What “good” looks like when you’ve rescued ROI

A rationalised stack doesn’t look exciting. It looks boring in the best way. Campaigns launch reliably. Reporting is trusted. Integrations are stable. Lead routing works without daily drama. New hires can learn the system without needing a private tour from the one person who understands it. You spend less time arguing about tools and more time improving outcomes. And the CFO stops asking awkward questions because you’ve already answered them. That’s the real goal. Not fewer vendors for the sake of it, but a stack that behaves like infrastructure, not a science project.

The kicker: Rationalisation is an AI readiness project in disguise

Most organisations are desperate to “use AI” and confused about why it isn’t magically working. Here’s why. AI can’t save a broken operating model. It will only automate the chaos faster. If your data is inconsistent, AI will generate inconsistent outputs. If your processes are unclear, AI will amplify the ambiguity. If nobody owns the system, AI will become another orphan tool.

Stack rationalisation, done properly, is one of the best AI readiness moves you can make. Because it forces you to create the conditions where automation can be trusted: Clean data, standard workflows, and clear accountability. You don’t become AI-ready by buying an AI feature. You become AI-ready by becoming operationally serious.

A final thought: If your stack can’t be explained, it can’t be defended

If you can’t describe, in plain language, what each major tool is for, who owns it, what capability it supports, and how you measure its success, you’re not managing a stack. You’re hosting one.

Stack rationalisation is not about being smaller. It’s about being deliberate. And MarTech ROI rescue is not about proving your spend was justified. It’s about ensuring your spend becomes productive. If you want a simple rule to start with, use this: If a tool doesn’t reduce time, reduce risk, or increase revenue, it’s either mismanaged or unnecessary. Either way, it’s on the list.

Discover our MarTech Services
- The EU AI Act will expose your Marketing Ops: Who’s accountable when AI breaks things?
Marketing Ops has always been accountable. It just rarely looked like it. When a campaign misfires, it’s “a creative issue”. When data goes bad, it’s “a CRM issue”. When attribution turns into astrology, it’s “a market issue”. Marketing Ops sits in the middle quietly fixing everything while everyone else argues about the colour of the button.

Now add AI to that mix. Because AI does not fail politely. It fails at scale, at speed, and with enough confidence to make the wrong answer look like policy. The EU AI Act is basically Europe’s way of saying: If you deploy AI, you do not get to shrug when it breaks. Someone has to own the risks, the controls, the monitoring, and the outcomes. And if your Marketing Ops function currently runs the stack, the workflows, the routing, the automation, the data, and increasingly the “helpful” AI features inside your tools, congratulations. You are about to get pulled into an accountability conversation you did not schedule.

This article is not legal advice. It’s a practical, Marketing Ops view of what the EU AI Act changes, what it forces you to be clear about, and how to answer the uncomfortable question: Who is accountable when AI breaks things? And are you prepared for when it becomes applicable in August 2026?

What the EU AI Act actually is, and why Marketing Ops should care...

The EU AI Act is a regulation that sets risk-based rules for AI. It applies to public and private actors inside and outside the EU if they place AI systems or general-purpose AI models on the EU market, put them into service, or use them in the EU.

The timeline matters because this is not some distant future threat you can park in a Q4 roadmap and never touch again. The Act entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026, with staged dates for different parts. Prohibited practices and AI literacy obligations have applied since 2 February 2025. Obligations for general-purpose AI models became applicable on 2 August 2025.

You do not need to be “building AI” to be on the hook. If your marketing team is using AI features in a CRM, marketing automation platform, ad platform, analytics tool, chatbot, content tool, sales engagement tool, or customer data platform, you are already in the system. Marketing Ops cares for one simple reason: The Act forces clarity about who is responsible for what. And Marketing Ops is usually the only function that can map what is actually being used, where, by whom, and with what data.

The first accountability trap: “We didn’t build it, we just used it”

Under the Act, obligations fall on different actors, including providers and deployers. The Commission’s guidance describes the framework applying to providers (for example, a developer of a tool) and deployers (for example, an organisation using that tool). This is where a lot of Marketing Ops teams try to mentally exit the building. “We’re not an AI company. We’re just using features in our tools.”

That may reduce some obligations, but it does not remove accountability. Even in the high-risk context, the Commission’s guidance describes deployer obligations that are very operational: Using the system according to instructions, monitoring operation, acting on identified risks or serious incidents, and assigning human oversight to people in the organisation.

So the real question is not “are we a provider?” It’s “are we a deployer, and if so, are we operating the system responsibly?”
In Marketing Ops terms, that translates into boring, unavoidable work: Governance, documentation, controls, training, monitoring, and incident response.

The second accountability trap: “AI is everywhere, so nobody owns it”

When everything has an AI button, it becomes culturally tempting to treat AI as a vibe rather than a system. But the EU AI Act is designed to do the opposite. It is trying to turn AI back into something you can audit. That means you will get asked questions like: Who approved this use case? Who decided what data goes into it? Who checked the output? Who is monitoring performance drift? Who is accountable when it produces misleading content, discriminatory outcomes, or security incidents?

If your organisation cannot answer those questions, you do not have “AI adoption”. You have unmanaged operational risk. And unmanaged risk has a habit of becoming a budget line, a headline, or both.

Where Marketing Ops is most exposed

Most Marketing Ops teams are not deploying AI for medical triage or border control. That’s not the point. The exposure comes from how marketing actually uses AI in the real world.

You run customer-facing AI interactions

If you deploy chatbots or other interactive systems, someone needs to think about transparency, user expectations, and what happens when the system confidently says something untrue. The Commission’s guidance explains that the Act introduces transparency requirements for certain interactive or generative AI systems, such as chatbots, to address risks like manipulation, fraud, impersonation and consumer deception.

That is marketing territory. Customer experience, web journeys, lead capture, qualification, and support deflection are all places where Marketing Ops often owns the tooling and the workflow. When those systems break, the first question will be “why did you deploy it like this?” not “which vendor did you buy it from?”

You publish AI-assisted content at scale

Marketing teams are already generating images, audio, video, and written content with AI-assisted tools. The Act’s transparency obligations include requirements on deployers in certain situations, including disclosure for AI content, and disclosure when text is generated or manipulated and published with the purpose of informing the public on matters of public interest. The Commission notes these transparency obligations and that guidelines will further clarify how they apply.

Even if your content does not fall into those specific categories, the direction of travel is clear. You are expected to be honest about what is synthetic when that matters to the audience, and to avoid systems that create deception. Marketing Ops is exposed here because it often owns the content workflow tooling, approvals, templates, distribution and tracking. You are the function that can actually operationalise a disclosure rule without turning the team into a bureaucratic mess.

You use AI for targeting, segmentation, and decisioning

This is the area where marketing loves to pretend the model is “just helping”. If AI influences who sees what, who gets prioritised, who is suppressed, who is routed, or who gets categorised, you are using AI as a decisioning layer. Even when the Act does not label a specific marketing use case as “high-risk”, you still have obligations under other laws, and the AI Act does not replace those.
The European Data Protection Board has been explicit that the AI Act and EU data protection laws should be considered complementary and mutually reinforcing, and that EU data protection law remains fully applicable to the processing of personal data involved in the lifecycle of AI systems. So if your AI-driven segmentation relies on personal data, you are automatically in GDPR land as well, and your accountability picture now has at least two regulators’ expectations in it.

You might accidentally wander into high-risk territory through HR and recruitment marketing

A lot of marketing teams support recruitment, employer brand, internal comms, and candidate journeys. Some teams run targeted job advertising systems and automation. Some use tools that “optimise” job ads and candidate targeting. The Commission’s guidance lists employment-related AI systems as examples of high-risk use cases, including systems intended to be used for recruitment or selection, which includes placing targeted job advertisements.

If your marketing stack touches that area, you need a grown-up conversation with HR and Legal about who owns the system, who is the deployer, and what controls exist. Marketing Ops does not need to own HR compliance, but Marketing Ops often owns the platforms that make these workflows possible. That makes you part of the accountability chain.

“When AI breaks things”, what counts as “breaks”?

This is where organisations get dangerously vague. AI “breaking” is not just a system outage. It can mean:

- A chatbot gives incorrect product claims, pricing, security assurances, or legal statements.
- An AI feature generates content that creates deception, impersonation risk, or misleading communications.
- An optimisation system shifts targeting in a way that creates discriminatory outcomes, even unintentionally.
- A data pipeline feeds the wrong inputs, and the model output becomes systematically wrong.
- A generative tool produces content that breaches IP rules or internal policy.
- A vendor updates a model, performance changes, and your safeguards do not catch it.
- A workflow creates an outcome you cannot explain to an affected person, which becomes a practical problem in high-risk contexts where the Commission describes a right to an explanation for natural persons in certain situations.

The point is not to predict every failure mode. The point is to stop acting surprised when failure happens, and to have an accountable operating model ready.

So who is accountable, legally?

There is no single magical job title that makes the risk disappear. Accountability is shared, but not vague.

At a legal role level, the Act places obligations on the relevant actor types (providers, deployers, and others depending on the scenario). The Commission’s guidance makes clear that deployers have concrete responsibilities in how they use and monitor certain systems, including assigning human oversight within their organisation.

At a governance level, enforcement is not theoretical. The Commission’s materials outline penalties, with maximum thresholds including up to €35m or 7% of worldwide annual turnover for certain infringements, and other tiers for other non-compliance categories.

At a data and privacy level, the AI Act does not push GDPR aside. The EDPB has stressed that data protection law remains fully applicable to personal data processing across the AI lifecycle, and the AI Act should be interpreted as complementary to GDPR and related laws.

So if your question is “who will regulators look at?”, the honest answer is this:
They will look at the entity that deploys the system in the EU, the entity that provides it, and the people inside those entities who were supposed to provide oversight. Which brings us to the more useful question...

Who should be accountable inside a company?

This is the Marketing Ops version of “stop pointing at each other like a Spiderman meme and design a process”. The EU AI Act effectively rewards organisations that can do three things on demand. They can show what AI is in use, where it is used, and why. They can show who approved it, what data it uses, and what safeguards exist. They can show how they monitor it, how they handle incidents, and how they train staff.

The Act’s AI literacy obligations have been in application since 2 February 2025. That is not a “nice to have”. It is a forcing function that pushes companies to ensure the people using AI understand it well enough to use it responsibly.

Inside most B2B companies, accountability ends up looking like this. Legal and Compliance sets rules, interprets obligations, and decides risk appetite. Security sets requirements for vendor assessments, access controls, and incident response. The DPO and privacy function owns the GDPR posture where personal data is involved, and the EDPB has been clear this remains fully relevant in AI systems. Marketing leadership owns what the business chooses to do, and what it is willing to sign off. Marketing Ops owns how the work is actually done across platforms, workflows, data, and governance.

If you want a single throat to choke, organisations are already trying to dump this on “the AI person” or “the data person”. That fails because the risk lives in operations. It lives in who can actually change how tools are configured and used. That is why the EU AI Act will expose Marketing Ops. It makes operational accountability visible.

The uncomfortable part: Your vendor contracts will not save you...

Vendors can promise compliance. They can offer documentation. They can add toggles and disclaimers. They can be very convincing in sales calls and contracts. But the moment you deploy the system in your environment, with your data, for your purpose, you become responsible for how it is used. The Commission’s guidance on deployer obligations in high-risk contexts is blunt about deployers needing to use systems according to instructions, monitor operation, act on identified risks, and assign human oversight. The spirit of that is useful even outside high-risk: You cannot outsource oversight. This is where Marketing Ops should stop accepting “the vendor said it’s compliant” as a meaningful internal control.

A practical accountability model for Marketing Ops

You do not need to turn your Marketing Ops team into a compliance department, but you do need a system that creates answers quickly when someone asks, “What AI are we using, and what happens if it fails?” Here is what that looks like in practice, without turning this into a checklist article.

Start with an AI inventory that is brutally honest. Not a slide. A living list of tools and features, where they are used, what data they touch, and whether they interact with customers. If you cannot map it, you cannot govern it.

Then define use-case ownership. Not tool ownership. Use cases. “Website chatbot”. “Email content generation”. “Lead enrichment”. “Audience segmentation”. “Recruitment ad targeting”. Every use case needs a named business owner and a named operational owner.
The operational owner is often Marketing Ops.

Then decide what “human oversight” means for each use case. The Commission’s language on assigning human oversight inside the organisation should not be treated as a high-risk-only curiosity. If a system can publish, route, prioritise, or decide, someone needs to be accountable for review points, guardrails, and escalation.

Then put monitoring where it belongs: On outcomes, not activity. Monitor for things like hallucinated claims in customer-facing responses, unexpected shifts in routing, sudden performance drift after vendor updates, spikes in complaint patterns, and outputs that create deception risk.
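As a rough sketch of what outcome monitoring can look like, the fragment below watches a single routing metric against a rolling baseline and flags sudden shifts, such as after a vendor model update. The window size, tolerance, and metric are illustrative assumptions; the point is that drift gets detected by the deployer rather than discovered by customers.

```python
# Minimal sketch of outcome monitoring for an AI workflow: track a rolling
# baseline and flag sudden shifts (e.g. in routing) after a vendor update.
# Thresholds and the metric itself are illustrative assumptions.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 14, tolerance: float = 0.25):
        self.history = deque(maxlen=window)  # e.g. daily share routed to sales
        self.tolerance = tolerance           # allowed relative deviation

    def observe(self, value: float) -> bool:
        """Record today's value; return True if it drifted from baseline."""
        drifted = False
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            if baseline and abs(value - baseline) / baseline > self.tolerance:
                drifted = True   # escalate to the use case's named owner
        self.history.append(value)
        return drifted

monitor = DriftMonitor()
for day, share in enumerate([0.30] * 14 + [0.55]):  # vendor update on day 14
    if monitor.observe(share):
        print(f"Day {day}: routing share {share:.0%} drifted from baseline")
```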
If you want a simple line to use internally, use this: Legal interprets the rules, Security protects the environment, Privacy governs personal data, and Marketing Ops makes the controls real across the stack.

And the fastest way to find out whether your Marketing Ops is ready is to ask one question: “If we had to explain our AI usage to a regulator, a customer, and our board tomorrow, could we do it without improvising?”

If the answer is no, the EU AI Act didn’t create the problem. It just stopped letting you hide it. Discover our AI Services
- Lead scoring is cosplay: What actually predicts revenue now
Lead scoring used to feel like grown-up marketing. A neat little system that turned chaos into order. A tidy number that told sales who to call first. A dashboard that made everyone feel like the funnel was being managed by competent adults.

And then real life happened. Buying committees got bigger. Intent got noisier. Forms got optional. Cookies got nerfed. Inboxes got hostile. Sales cycles became less linear and more like a drunken treasure hunt. Yet somehow, a lot of teams are still proudly running the same scoring model they built when people downloaded whitepapers for fun and marketing could pretend it “handed leads to sales” like a factory line.

That’s why lead scoring now is often cosplay. Not because scoring is inherently bad, but because most scoring models are pretending the world works the way it did when the model was invented.

Why your lead score is confidently wrong

Most lead scoring systems break for three reasons.

First, they’re built on activities that are easy to track, not activities that predict revenue. Email opens, page views, webinar attendance, “visited pricing page”, “downloaded asset”, “clicked CTA”. All observable. All measurable. Many only weakly tied to a buying decision.

Second, they assume the buyer is a single person moving through a funnel. In reality, the person filling out the form is often not the person with budget. Sometimes they are not even the person with a problem. They might be a researcher, an intern, a manager asked to “look into it”, or someone collecting screenshots for an internal deck. Your model gives them 82 points and everyone panics, while the actual decision maker never touches your website.

Third, they confuse engagement with intent. Engagement can be curiosity, education, boredom, or comparison shopping. Intent is “we have a problem, we are prioritising it, and we are moving towards a decision”. Most scoring models treat the first as a proxy for the second. That’s the fundamental lie. If you’ve ever watched an account rack up score like a slot machine and then ghost you completely, you’ve seen this lie in the wild.

The hidden cost of lead scoring theatre

Bad scoring isn’t neutral. It doesn’t just fail quietly. It actively wastes time and damages trust. Sales loses faith and starts ignoring anything marketing sends. Marketing then tries to “fix adoption” with enablement sessions, new dashboards, or another scoring tweak. That makes it worse, because the problem is not communication. The problem is the signal.

Meanwhile, truly winnable opportunities sit in the shadows because they don’t behave like your model expects. They don’t click the right emails. They don’t fill the right forms. They might come in through a partner. They might show up in pipeline because a rep already has a relationship. Your model shrugs and calls them “low score”. And when leadership asks, “Why are we not converting more MQLs?”, the answer becomes a shrug wrapped in charts.

The goal isn’t a better score. The goal is better prioritisation. So let’s talk about what actually predicts revenue now.

What predicts revenue now: Fewer signals, better signals

Revenue prediction in B2B isn’t about counting more clicks. It’s about identifying the conditions that exist when a deal is genuinely likely to happen. Those conditions are usually not individual behaviours. They’re patterns. And they’re often account-level, not lead-level.

Think in terms of three layers:

Fit: Should this account buy from you, in a realistic universe?
Readiness: Are they in a buying window, or just browsing?

Momentum: Are they moving forward in a way that resembles real deals you’ve won?

Lead scoring usually over-indexes on layer two, and mostly measures the wrong thing. The best predictors combine all three.

Predictor 1: Verified ICP fit that sales actually agrees with

This sounds obvious. It’s not. Most teams have a “target customer” slide and a CRM full of everyone anyway. Fit is still the strongest baseline predictor of revenue, but only if you define it like you mean it. Fit is not “company size and industry”. That’s demographic cosplay, too. Fit is: Do they have the problem you solve, at the scale you solve it, with the constraints you can handle?

If your scoring model can’t clearly separate “perfect fit but quiet” from “loud but wrong fit”, you’re going to keep feeding sales junk. Fit should be a gate. If fit is poor, you don’t “nurture harder”. You deprioritise and stop wasting time.

Predictor 2: Buying group emergence, not individual activity

Revenue happens when a group forms around a decision. So the question is not “Did Jamie click the pricing page?” The question is “Is a buying group forming inside this account?”

Buying group emergence looks like:

Multiple people engaging from the same domain within a short window.

Engagement coming from different functions (for example, marketing plus ops plus leadership).

One person’s activity causing another person to appear (forwarding, internal sharing, follow-on visits).

Conversations that shift from “what is this?” to “how would this work for us?”

A single person binge-reading your blog can be a fan. Or a competitor. Or someone building a business case they will never get approved. Three to six relevant people showing up within a month is the kind of pattern that starts to smell like revenue. And no, this doesn’t require creepy tracking. Even with imperfect tracking, you can observe account-level patterns: Domains, meeting attendees, inbound sources, and the pace of interactions across contacts.

Predictor 3: Problem intensity signals, not content consumption

Content consumption is often a lagging indicator of curiosity. Problem intensity is closer to a leading indicator of action.

Problem intensity looks like:

Operational disruption: Migration, re-org, new leadership, tool consolidation, compliance deadlines.

Performance pressure: Pipeline targets missed, CAC creeping up, SDR efficiency dropping, conversion rates flat.

Technical pressure: Systems breaking, data quality issues, workflow debt, integration failures.

Internal urgency: Hiring for ops roles, firing agencies, changing tools, leadership mandates.

These signals rarely show up as “clicked email #3”. They show up in conversations, in CRM notes, in support tickets, in inbound form fields, in job descriptions, and in the way prospects describe their situation. If your model can’t ingest these, at least design your process to capture them when they appear. A simple “why now?” field that sales actually fills, plus a few required dropdowns about current state, can outperform 50 points of email clicks.

Predictor 4: High-intent actions that cost the buyer something

A strong signal often has a cost. Not a monetary cost, but a time cost, a political cost, or a commitment cost.

High-intent actions include:

Requesting a tailored demo (not a generic “learn more”).

Bringing colleagues to a call.

Asking about implementation, security, procurement, or contract terms.

Sharing internal constraints and timelines.
Asking for a proposal, SOW, or business case help.

Engaging in mutual planning: Next steps with dates, not vibes.

These are harder to fake. They’re harder to do casually. If your scoring model treats “webinar attended” as equal to “introduced their IT lead”, you’ve built a points costume, not a revenue predictor.

Predictor 5: Momentum patterns that match your won deals

Most teams score leads as if every deal moves the same way. But you already have the answer to “what predicts revenue”: it’s in your closed-won history. Not as a generic attribution report. As a behavioural pattern.

Take your last 30 closed-won deals and ask: What happened in the 30 to 90 days before the opportunity was created?

Look for common sequences like:

Multi-contact engagement followed by a consult request.

A spike in product-related page views followed by a stakeholder call.

Partner referral plus leadership attendee on call one.

Pricing conversation within two meetings of first contact.

Security review triggered early, not late.

Then look at your last 30 closed-lost deals and ask: What did they do that looked promising but went nowhere? You will often find patterns that your score currently rewards, even though they correlate with failure. That’s a fun day. Momentum is not “more activity”. Momentum is “the right activity in the right order”.

Replace “lead scoring” with “pipeline readiness”

If you want a disruptive idea that actually works, stop calling it lead scoring. Call it pipeline readiness. This simple naming shift forces the right questions. Pipeline readiness asks: Is this person or account likely to enter pipeline soon, and if they do, is it likely to progress? That pushes you away from vanity engagement and towards decision conditions. Pipeline readiness is built from a small set of signals that you can defend in a room with sales leadership. And crucially, it’s not one number. It’s a simple classification that drives action. For example:

Not ready: Wrong fit or no buying window.

Warming: Fit is strong, early buying group signals.

Active: Clear buying window, high-intent actions present.

Sales engaged: Meetings happening, mutual plan forming.

Give sales something they can understand without a training session. Give marketing something they can improve without inventing new points.

The scoring model you can actually run without hating your life

Here’s a practical approach that doesn’t require perfection.

Step 1: Set a “fit gate” that blocks nonsense

Create a fit classification based on a handful of fields that are stable:

Segment (size band that matches your pricing and delivery).

Use case match (the problem you actually solve).

Environment match (tech, complexity, constraints).

Exclusions (industries you don’t serve, geographies you can’t support, unrealistic budgets).

Fit should be a simple label: strong, medium, weak. If you can’t confidently label fit, default to medium, not strong. Strong should be earned.

Step 2: Track buying group emergence at the account level

Stop pretending lead-level data alone can guide prioritisation. Set up a rolling 14-to-30-day view of account engagement across contacts:

Number of engaged contacts from the domain.

Variety of roles engaged.

Recency and frequency of meaningful interactions.

Meaningful interactions are not all clicks. Weight things that indicate effort: Form submissions, meeting requests, product documentation, implementation content, pricing, comparison pages, and replies. If your tracking is imperfect, still do it.
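As a concrete starting point, here is a minimal sketch of that rolling view, assuming you can export engagement events with an account domain, contact email, role, event type, and timestamp. Column names, weights, and thresholds are illustrative assumptions to adapt, not a standard:

```python
# Minimal sketch: a rolling 30-day, account-level engagement view
# built from an export of engagement events (columns are illustrative).
import pandas as pd

events = pd.read_csv("engagement_events.csv", parse_dates=["timestamp"])
# expected columns: domain, contact_email, role, event_type, timestamp

EFFORT_WEIGHTS = {          # weight effort, not clicks
    "form_submission": 3,
    "meeting_request": 5,
    "pricing_page_view": 2,
    "email_reply": 3,
    "email_open": 0,        # vanity signal: tracked, but worth nothing
}
events["weight"] = events["event_type"].map(EFFORT_WEIGHTS).fillna(1)

cutoff = pd.Timestamp.now() - pd.Timedelta(days=30)
window = events[events["timestamp"] >= cutoff]

readiness = window.groupby("domain").agg(
    engaged_contacts=("contact_email", "nunique"),  # is a group forming?
    roles=("role", "nunique"),                      # cross-functional spread
    effort=("weight", "sum"),                       # meaningful interactions
    last_touch=("timestamp", "max"),                # recency
)

# Surface accounts where a group, not a lone reader, is showing up.
emerging = readiness[(readiness["engaged_contacts"] >= 3) & (readiness["roles"] >= 2)]
print(emerging.sort_values("effort", ascending=False).head(20))
```

The deliberate zero weight on email opens is the point: the view counts effort, and it surfaces accounts where a buying group is forming rather than one person binge-reading.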
Imperfect account-level signals can outperform perfect lead-level vanity metrics.

Step 3: Define 5 to 7 “high intent” events and treat them as sacred

Pick a short list. No more than seven. These should be actions that are clearly tied to revenue outcomes in your world. Examples:

Demo request with a real company email.

Meeting booked that includes more than one attendee.

Request for pricing, proposal, or security information.

Reply that answers “why now?”

Product trial activation plus meaningful usage milestone (if relevant).

Then design your process so these events trigger immediate, human follow-up. Not a nurture email. Not a “wait until they hit 100 points”. If you can’t act on the event within a day, don’t pretend the score matters.

Step 4: Bake “momentum” into your sales process, not just your dashboards

Momentum is often captured in conversation, not clicks. So build lightweight capture into the workflow:

A required field for timeline (even if it’s “unknown”).

A dropdown for current solution or status quo.

A simple “primary pain” field.

A checkbox for “buying group identified” with a minimum of two named stakeholders.

This is not admin theatre. It’s the information you need to predict revenue. If reps won’t fill it, that’s feedback: Either the fields are junk, or the process has no consequence. Fix that before you blame the CRM.

The uncomfortable truth: The best predictor is still a good salesperson

Marketing Ops can build cleaner signals, better routing, and smarter prioritisation. But you cannot automate your way out of fundamental sales quality. If sales follow-up is slow, inconsistent, or purely transactional, no scoring model will save you. If reps can’t diagnose pain, map stakeholders, and create urgency ethically, then the problem isn’t your score. It’s execution. The goal of pipeline readiness is to make good sales teams faster and more consistent, not to create “hot leads” that close themselves.

So what should you do this week, not this quarter?

Kill anything that feels like scoring for scoring’s sake. Then do three practical moves.

First, audit your last 20 opportunities that became real pipeline and identify what happened immediately before they did. Not what your dashboards say. What actually happened.

Second, reduce your scoring inputs. If your model uses 40 signals, you are not sophisticated. You are overwhelmed.

Third, move from lead-level obsession to account-level readiness. If your business sells to buying committees and you are still scoring individuals like it’s 2014, you’re choosing to be wrong.

You don’t need a perfect model. You need a model you can defend, a process you can run, and signals that match how revenue actually happens now. Because the job isn’t to create high-scoring leads. It’s to create deals. Discover our Services
- Building an AI-ready HubSpot: The foundations that pay off
AI in HubSpot is not a magic layer you sprinkle on top of chaos. It is more like a turbocharger. If the engine is healthy, you feel the lift immediately. If the engine is full of duct tape and mystery fluids, you just reach the next breakdown faster.

The good news is you do not need a “perfect” portal to benefit. You need a set of foundations that make HubSpot reliable, predictable, and safe to automate. Do that, and the AI features become genuinely useful: Better routing, faster content drafts, quicker summaries, more consistent service responses, and fewer tasks that exist purely to keep humans busy.

HubSpot’s current AI experience sits under its Breeze umbrella, including assistants and agents that work across marketing, sales, and service. The exact features available will depend on your subscription, region, and the feature itself, but the pattern is consistent: The best outcomes come from clean data, clear definitions, controlled access, and strong reference material.

Start with the boring truth: AI can only work with what you give it

Most teams think their HubSpot issues are “AI readiness” issues. They are usually “we do not agree on what anything means” issues. If your lifecycle stages are used as vibes rather than definitions, nobody (human or machine) can make good decisions. If sales reps log activities in five different ways, any summary will be incomplete. If a single contact can be both “customer” and “open deal” with no rules for which wins, automation becomes a lottery. AI works best when your CRM behaves like a system, not a scrapbook. So your first foundation is not an AI setting. It is operational clarity.

Foundation 1: Define your customer language (and lock it)

AI gets better when your business is consistent. Consistency starts with shared definitions.

The minimum set of definitions you need

You do not need a dictionary of every edge case. You need a handful of “truth anchors” that everyone agrees on:

Lifecycle stages: What triggers each stage, who owns the change, and what evidence is required. Keep it simple and auditable.

Lead statuses: What each status means, what the next action is, and who is responsible.

Deal stages: What must be true for a deal to move forward, and what data must be captured at each stage.

Company ownership: When the company record is the source of truth versus the contact record.

If you have those nailed down, you have created something precious: Context that does not change depending on who is looking at it. Then you make it enforceable. Use required properties at key moments, pipeline rules, and validation where appropriate. The goal is not to police people; it is to stop “creative interpretation” from leaking into your data model.

Foundation 2: Make your data fit for automation, not just reporting

Most CRM clean-up projects aim for prettier dashboards. AI readiness aims for dependable behaviour. You want your data to answer questions like: Can we trust this field enough to route a lead? Can we trust this stage enough to trigger a customer experience? Can we trust this source enough to measure performance? If the answer is “sometimes”, automation turns into support tickets.

The practical fix: design for the decisions you want to automate

Pick the highest value decisions you want HubSpot to make faster. Then work backwards to the data required. Examples: If you want an agent to resolve common support questions, you need a strong knowledge base and clear categorisation of issues.
If you want automated lead qualification, you need consistent capture of company size, territory, intent signals, and a definition of what “qualified” means in your world. If you want sales summaries that actually help, you need activity logging that is standardised, plus key properties that capture deal reality rather than hope. HubSpot is increasingly building AI into features that rely on your CRM context, so the more structured and dependable that context is, the more you get out of it. Foundation 3: Get serious about consent, sensitive data, and guardrails If your AI rollout ignores privacy, it will get blocked, quietly sabotaged, or turned off after one uncomfortable meeting. HubSpot has an AI Trust and safety approach that includes controls like data masking for personal information in select features. It also publishes information about its AI infrastructure and how it works with AI service providers. For example, HubSpot states it does not allow the AI service providers it uses for Subscription Services to train on customer data, and it aims to minimise retention, including “zero-day” retention where possible. That said, you still need to govern what you put into HubSpot and how features are used. Your job: Decide what should never be used as input Create a simple rule-set for teams: What types of data are sensitive in your business? Which properties should be treated as restricted? Where should sensitive information live if it should not be in HubSpot at all? HubSpot’s own documentation notes that if you enable Sensitive Data, the sensitive data properties you create will not be used to train Breeze models. It also notes that other customer data in your account may be used to train Breeze models, and that you can opt out by contacting HubSpot. So do not pretend this is a purely technical decision. Make it a policy decision, then configure around it. If you need to opt out, do it early, not after you have trained habits across the team. Foundation 4: Fix your permissions model before you add more power AI makes it easier to act quickly. That is the point. But it is also the risk. If everyone can change lifecycle stages, edit key properties, create workflows, and rewrite templates, you do not have a CRM. You have a shared Google Doc with better branding. At minimum: Limit who can create and publish workflows. Limit who can edit critical properties, pipelines, and lifecycle settings. Use teams and partitioning where appropriate. Separate experimentation from production where possible. This is not about distrust. It is about protecting the system so you can move faster with confidence. Foundation 5: Build your “knowledge spine” (this is where agents win or fail) If you want AI to help customers, prospects, or your internal team, it needs reference material that is accurate and current. HubSpot’s Breeze Customer Agent is positioned as a way to qualify leads, answer questions, and resolve support issues 24/7, and HubSpot provides guidance on training and deploying it. It also announced expanded availability for Customer Agent via HubSpot Credits for Pro and Enterprise customers starting June 2, 2025. None of that matters if your help content is thin, outdated, or written like it was created under duress. The knowledge spine is not “more articles” It is: A clear structure: categories, tags, and consistent naming. Coverage of the top issues: the questions customers ask repeatedly. A single source of truth: avoid three competing answers across PDFs, old pages, and random internal docs. 
A refresh habit: ownership, review cycles, and expiry rules. When that exists, AI becomes a multiplier. When it does not, AI becomes a confident way to spread confusion. Foundation 6: Stop treating integrations like plumbing AI readiness is integration readiness. Breeze is designed to work inside HubSpot, but it also benefits from a connected ecosystem, because your team’s reality is spread across email, calls, meetings, documents, and support conversations. HubSpot highlights that its AI capabilities can connect with your broader tools and use CRM context to help with meeting prep, content, and analysis. If your integration layer is unreliable, your AI layer will inherit that unreliability. The foundations here look like: One integration owner per system. Clear field mapping and documentation. A change control process that prevents “quick fixes” from becoming permanent data damage. Monitoring for sync errors, duplicates, and unexpected overwrites. If you do not have this, you will spend your AI rollout explaining why “the system said” something that is not true. Foundation 7: Standardise activity capture (because summaries depend on it) Teams love the idea of automatic summaries, meeting prep, and record insights. Breeze Assistant is positioned to help with things like refining content, preparing for meetings, and summarising data inside HubSpot. But a summary is only as good as the underlying trail. So decide: What counts as a meaningful activity? How do you log it? Where does it live? What must be captured after key moments like discovery calls, demos, and implementation milestones? This is where most teams need fewer fields and more discipline. Do not add fifteen properties for “completeness”. Add five that you will actually maintain, and design the process so they are easy to keep current. Foundation 8: Create a safe sandbox for experimentation AI features encourage experimentation. That is fine. It is also how production portals get trashed. Build a simple rule: Experiment in a controlled space. Publish changes through an agreed process. Document what you ship and why. If you have access to sandboxes or separate environments, use them. If you do not, create operational sandboxes: Test lists, test pipelines, and staging assets that do not touch live routing and reporting. Your goal is to make it easy to try things without making your CRM feel unstable. Foundation 9: Make brand voice a system, not a person Content generation is one of the first things teams try, because it is immediate. But “AI-ready content” is not about pushing a button for a blog post. It is about capturing what makes your content sound like you, then making it reusable. That means: Clear messaging pillars. Approved claims and proof points. A library of examples that represent your voice. Rules for tone by channel: support, sales outreach, marketing emails, landing pages. Then you build templates and prompts around those assets. Do that, and your drafts get closer to publishable. Skip it, and you get content that sounds like it was written by a polite stranger who read your homepage once. Foundation 10: Design human-in-the-loop on purpose The fastest way to make AI “not pay off” is to either let it run unchecked, then panic when it makes a mistake, or force a manual review of everything, then wonder why nobody uses it. Pick your risk points and add review there. For example: Customer-facing responses might require a tighter approval model at first. Internal summaries can be low risk and rolled out broadly. 
Lead qualification can start as recommendations before it becomes automated routing. This matches how HubSpot positions trust and controls around its AI features: build confidence, understand the flows, then scale usage. What “AI-ready” looks like in practice When the foundations are in place, you will notice a few things quickly: Sales reps stop arguing with the CRM because it starts reflecting reality. Marketing stops building lists that need three disclaimers. Service stops re-answering the same questions. Ops stops playing whack-a-mole with workflows. At that point, AI becomes less of a headline and more of a daily advantage: Faster handoffs, better consistency, and fewer “how did this happen” moments. And the best part is that these foundations pay off even if you never touch a single AI feature. They make HubSpot perform better as a platform, full stop. Discover our Services
- GDPR + ePrivacy changes your Marketing Operations team may have missed.
The big picture: ePrivacy didn’t get replaced, it got stuck… and then quietly killed

For years, everyone waited for the EU’s ePrivacy Regulation to replace the old ePrivacy Directive and finally standardise cookie rules across Europe. That wait is over. The European Commission formally withdrew the ePrivacy Regulation proposal in 2025, and the European Parliament’s legislative tracker lists the file as withdrawn, with the withdrawal announced in the Official Journal in October 2025.

What that means in practice: The ePrivacy Directive still runs the show for cookies and similar tracking tech in the EU, and it’s still implemented through national laws. So yes: you still have a “European” standard, but enforcement and cookie banner expectations can vary by country (and by regulator mood). If you’re a Marketing Ops team supporting multi-region websites, this is the part where you stop pretending one banner configuration works everywhere.

Change #1: “cookies” now means a lot more than cookies, and regulators are spelling it out

Most Marketing Ops teams still talk about “cookie consent” like it’s just GA + a couple of pixels. Regulators don’t. The European Data Protection Board published Guidelines 2/2023 (final version 2.0) clarifying the technical scope of the ePrivacy cookie rule (Article 5(3)). It’s explicitly aimed at newer tracking methods replacing third-party cookies.

Translation into Marketing Ops reality:

Tracking pixels, device fingerprinting approaches, identifiers stored in browser storage, SDK identifiers in apps, and other “cookie-like” tricks are still within scope if they store/access info on a user’s device (or gain access to info already stored).

“Server-side tagging” is not a magic cloak. If you’re still dropping identifiers or reading from the device, you’re still in the same consent conversation... you’ve just moved the furniture.

If you’re doing any of the following, you should treat it as part of your consent architecture, not a side quest:

Identity stitching / probabilistic matching

Fingerprinting (including “privacy-safe” variants)

Persistent IDs passed through tags or SDKs

Cross-domain tracking setups

Clean-room style matching where the web layer still drops identifiers

Change #2: Cookie banner UX is now a compliance surface, not just a conversion surface

Regulators have been consistent about one thing: If it’s easier to accept than reject, you’re nudging - and nudging is increasingly treated as non-compliance. Across Europe, “reject” being as visible/easy as “accept” has become a baseline expectation in cookie UX enforcement (country by country). Spain’s regulator (AEPD) is one of the clearer examples: Guidance updates moved Spain toward requiring a reject button at the first layer. France (CNIL) has also pushed hard against “dark pattern” cookie banners, including formal notices and enforcement attention around misleading designs.

Marketing Ops takeaway: Your cookie banner is now effectively a regulated UI component. It needs:

Symmetry (accept/reject equally prominent)

Clear purpose descriptions (no “enhance your experience” nonsense)

Granular choices that actually do something

A withdrawal path that’s as easy as consent

If you’re A/B testing consent banners: fine, but every variant must still meet valid consent requirements. Don’t “test” your way into an enforcement letter.
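To make “granular choices that actually do something” concrete on the server side (remember: server-side tagging is not a magic cloak), here is a minimal sketch of a default-deny, purpose-level gate for tag firing. It assumes a consent state handed over by your CMP; all names (ConsentState, TAG_PURPOSES, dispatch) are illustrative, not any vendor’s API:

```python
# Minimal sketch: default-deny, purpose-level gating of server-side tags.
from dataclasses import dataclass, field

@dataclass
class ConsentState:
    purposes: dict = field(default_factory=dict)  # e.g. {"analytics": True}

    def allows(self, purpose: str) -> bool:
        # Default-deny: no recorded choice means no consent.
        return self.purposes.get(purpose, False)

TAG_PURPOSES = {
    "analytics_event": "analytics",
    "ads_conversion": "advertising",
}

def fire_tag(tag: str, payload: dict) -> None:
    print(f"firing {tag}: {payload}")  # stand-in for your real dispatcher

def dispatch(tag: str, payload: dict, consent: ConsentState) -> bool:
    """Fire a tag only if its declared purpose has explicit consent."""
    purpose = TAG_PURPOSES.get(tag)
    if purpose is None or not consent.allows(purpose):
        return False  # unknown or unconsented tags are dropped, not "shadow fired"
    fire_tag(tag, payload)
    return True

# A rejection (or silence) keeps the ad tag dark:
dispatch("ads_conversion", {"value": 120}, ConsentState({"analytics": True}))
```

The design choice worth copying is the default-deny: a tag with no declared purpose, or a purpose with no recorded “yes”, simply does not fire.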
Change #3: You can record “no” but be careful what you store to remember it

Marketing teams often ask: “Can we remember a user’s rejection so we don’t keep pestering them?” Yes, sometimes you can store a refusal signal to reduce repeated prompts, but the details matter. In the draft joint guidance on the interplay between GDPR and the Digital Markets Act (DMA), the European Commission and the European Data Protection Board point out that recording a refusal may be necessary for effectiveness, but they recommend that a record of “negative consent” should not contain a unique identifier.

Marketing Ops implication: If your “remember rejection” mechanism becomes a stealth identifier, you’re creating the very tracking you claim you’re avoiding. Practical pattern that usually behaves better (a sketch follows after Change #5):

Store a short-lived, non-unique refusal flag (or a strictly local preference)

Avoid building a cross-session identity just to remember someone said “no”

Change #4: “Consent or pay” got official scrutiny and it spills into marketing patterns

While this is most famous in publisher/media land, it matters for Marketing Ops because the same logic shows up in:

“Download the whitepaper only if you consent to tracking”

“Use the site only if you accept marketing cookies”

“Access pricing only if you opt into marketing”

The European Data Protection Board adopted Opinion 08/2024 on “consent or pay” models used by large online platforms for behavioural advertising, warning that these models can undermine the idea of freely given consent and should offer real choice. Now, you’re probably not Meta. But enforcement logic spreads downhill.

What to do with your gated content and forms:

Separate “get the thing” (contract/legitimate interest) from “track me everywhere” (consent)

If you require an email for a download, don’t bundle it with behavioural advertising consent

Give a real alternative path if you’re asking for optional processing

If your consent mechanism starts sounding like a bouncer, regulators start acting like the police.

Change #5: The UK quietly raised the stakes for ePrivacy enforcement (massively)

If you operate in the UK (or have UK traffic/customers), this is not subtle. The UK’s Data (Use and Access) Act 2025 has been rolling out changes between June 2025 and June 2026. A major batch of provisions took effect on 5 February 2026.

The headline Marketing Ops change: PECR fines now look like GDPR fines

The UK regulator, the Information Commissioner’s Office, confirmed the Act gives it power to issue PECR fines up to £17.5m or 4% of global turnover (previously capped much lower). Why Marketing Ops should care: In the UK, a lot of “marketing enforcement” happens under PECR (cookies, email marketing), not just UK GDPR. Raising PECR penalties is basically putting a turbo engine on the thing that already hits marketers most often.

UK cookie rules: More exceptions, but don’t celebrate like it’s a free-for-all

The ICO has updated guidance on “storage and access technologies” to reflect PECR changes and added a section explaining exceptions. Depending on your exact use case, some low-risk cookies/tech may be easier to justify without consent in the UK than in many EU countries... but:

Advertising cookies are still advertising cookies

Cross-site tracking is still cross-site tracking

“Analytics” can be low-risk or very much not, depending on how it’s configured and shared

Marketing Ops action: Treat the UK as its own compliance configuration and not a copy/paste of your EU setup.
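Back to Change #3’s pattern: here is a minimal sketch of a refusal flag that remembers “no” without becoming an identifier. The cookie name and lifetime are illustrative assumptions:

```python
# Minimal sketch: remember a refusal without creating an identifier.
from datetime import timedelta

def refusal_cookie_header(days: int = 180) -> str:
    """Build a Set-Cookie header that records "user said no" as a
    constant, non-unique flag. Every refusing visitor gets the exact
    same value, so the cookie cannot be used to single anyone out."""
    max_age = int(timedelta(days=days).total_seconds())
    return (
        "cmp_refused=1; "       # constant value: no user ID, no hash, no timestamp
        f"Max-Age={max_age}; "  # short-lived: expire and re-ask later
        "Path=/; SameSite=Lax; Secure"
    )
```

Because every refusing visitor carries the identical constant value, the flag carries no identity, and the short Max-Age means you eventually re-ask rather than remember forever.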
Change #6: GDPR itself isn’t being rewritten, but targeted “simplification” is moving through the system

In the EU, there’s an active policy push to reduce admin burden - especially for SMEs and “small mid-caps”. In May 2025, the Commission published a proposal that would amend GDPR Article 30(5) (records of processing activities / ROPA). It aims to broaden exemptions and shift the trigger toward processing that’s likely to result in high risk. The European Data Protection Supervisor and the European Data Protection Board responded via a joint opinion in July 2025.

Important nuance: this is a proposal, not “GDPR changed yesterday”. But it signals direction: Regulators want to reduce paperwork for smaller orgs without weakening core principles.

Marketing Ops reality check: Even if ROPA thresholds loosen for some organisations, Marketing Ops still needs a working data inventory to survive:

DSARs

Vendor reviews

Cookie audits

Consent proof

Incident response

AI and enrichment governance

So yes, you might get less paperwork. No, you don’t get to be less organised.

Change #7: “Digital rulebook” overlap is becoming a real compliance factor

Marketing Ops used to treat GDPR like the privacy layer and everything else like “someone else’s problem”. That era is ending. The European Data Protection Board adopted guidelines on the interplay between the Digital Services Act (DSA) and GDPR in September 2025. And the European Commission + EDPB ran a public consultation on draft guidance for the interplay between the Digital Markets Act (DMA) and GDPR from October–December 2025, with finalisation expected sometime in 2026.

Why this matters for Marketing Ops:

Consent, personalised ads, profiling, and data-sharing can be scrutinised under multiple frameworks

Platform changes (especially by “gatekeepers”) can ripple into your tracking stack and measurement model

Your “compliance by CMP” strategy won’t cover everything if your downstream processing is messy

Discover our MOPs Maturity Indicator

So what should Marketing Ops do now?

Here’s a practical plan that doesn’t require you to become an EU lawyer or develop a sudden love for policy PDFs - although your company’s red-tape department really needs to be involved ASAP.

1) Treat consent as infrastructure, not a banner

If your CMP is just “a thing we installed,” you’re behind. You need:

A consent state that flows into tag management, CDP rules, ad platforms, and CRM sync logic

Proof trails (what was shown, what was chosen, when it was applied)

A way to prevent “shadow firing” tags when consent is missing

Also: Audit what your site actually does, not what your tag map says it does. Tag maps lie. Browsers don’t.

2) Reclassify your tracking methods using the EDPB’s broader scope

Use the EDPB technical scope guidelines as your internal taxonomy refresh. Specifically, update your tracking register to include:

Pixels and non-cookie identifiers

Fingerprinting-like techniques

SDK-based tracking in apps

Identity matching flows

If you can’t describe it clearly, you can’t justify it credibly.

3) Fix banner UX where it’s obviously indefensible

If your reject button is hidden behind “Manage options” but accept is a big shiny button… you already know how that looks. Aim for:

Equal prominence

Plain language

No guilt-tripping

No pre-ticked toggles

No “legitimate interest” switcheroo that behaves like consent

4) Split “marketing” into lawful buckets (and stop mixing them)

Marketing Ops teams get in trouble because they treat all growth activity as one blob.
You need separate rules for:

Service messaging (contract / legitimate interest)

Customer marketing (soft opt-in may apply in some jurisdictions; check local rules)

Prospecting (legitimate interest may be possible, but transparency + opt-out must be real)

Behavioural advertising (usually consent-heavy, especially once ePrivacy applies)

The ICO’s direct marketing guidance is a solid operational reference point for UK interpretations.

5) UK-specific: review PECR risk like it’s GDPR risk now

Because the penalty ceiling just moved into grown-up territory. Do a UK pass on:

Cookie classifications and exceptions (based on the ICO’s updated storage/access guidance)

Email marketing basis (consent vs soft opt-in vs B2B rules)

Suppression lists, opt-out mechanisms, and proof of consent where required

6) Prepare for the boring-but-deadly bits: DSARs and complaints

The UK reform programme is phased, and some obligations land later (including elements around complaints handling during 2026). Even if you’re EU-only, DSAR operational maturity is often where orgs fail in practice:

You can’t find data fast enough

You can’t delete it cleanly

You can’t explain why you have it

Marketing Ops is usually the owner of half the systems involved. Lucky you.

A quick “what to tell your team” summary

The EU ePrivacy Regulation is dead; the ePrivacy Directive lives on, so cookie rules stay fragmented across member states.

The EDPB has clarified that tracking beyond cookies still falls into the consent regime.

Banner UX is enforcement fuel: Reject must be easy; dark patterns are a liability.

“Consent or pay” scrutiny is real and the logic spreads into gated experiences.

The UK has escalated PECR enforcement: £17.5m / 4% is now on the table.

GDPR “simplification” is in motion (especially around ROPA thresholds), but it’s not a free pass to be messy.

Discover our Services
- The no-BS guide to cleaning up your HubSpot instance
You promised better pipeline, a cleaner CRM, and fewer late-night “where did that lead go?” panic emails next year. Good. This guide will get you there without buzzwords or vague platitudes. It’s a practical, step-by-step plan you can start today (yes, today) so your next quarter doesn’t smell like 2024’s data dumpster fire.

This isn’t a checklist you print and forget. It’s a playbook: Triage, triage again, fix the real problems first, then tidy up. Expect decisions, compromises, and a little corporate bravery. Someone will need to own it. Make that person you, or make it someone you can glare at until it gets done.

First things first... who needs to be in the room

Before you touch anything: Gather three roles (one person can play multiple roles, but don’t make a single martyr do everything):

An Operations lead (HubSpot admin or Marketing Ops),

A data owner (Sales ops or a senior rep who cares about lists and deals),

A stakeholder (Sales leader or CMO who will sign off on rules and deletions).

If your org can spare a business analyst or developer for an afternoon, bring them. If not, at least brief the person who handles integrations... those are the things that’ll embarrass you later.

Triage: Find the things that actually hurt

Not all mess is equal. Start by identifying what’s actively costing you time or money. Focus on four pain zones: Duplicate records, stale contacts/companies, broken automations, and bad reporting. Run these quick checks:

How many contacts haven’t been touched in 18 months? (Those are your stale candidates.)

How many automations have run in the past 90 days vs how many workflows exist? (Alert: Unused workflows are future liabilities.)

Which lists have more than 30% exclusions or errors? (Lists that lie are worse than no lists.)

Are there integrations writing bad or duplicate data (Forms, events, Salesforce, ad platforms)?

The goal is to find the high-impact fixes first. Don’t get lost prettifying contact property labels while the lead-to-deal conversion is leaking like a sieve.

Step 1. Duplicate cleanup (but don’t be a trigger-happy scrubber)

Duplicates are the low-hanging fruit. HubSpot has built-in de-duplication for contacts by email, and companies by domain, but it misses clever duplicates (e.g., same person with work and personal emails, or +aliases). Start with a conservative merge policy:

Identify duplicates by email and company domain,

Flag fuzzy duplicates (name + phone, email variations) for human review,

Merge confirmed duplicates, preserving the most complete record and timeline.

Important: Export a full backup of records you’re about to merge. Yes, export. If someone yells later (“where’s my notes?”), you can restore data or explain what changed. Also document your merging rules... future you will thank past you.

Step 2. Archive the dead (stale contacts & companies)

“Stale” is different by business, but a practical threshold is 12–18 months of no opens, clicks, site visits, form submissions, deals, or calls. Don’t delete on day one though... archive. Create an “archived - inactive 18m” lifecycle stage or property and:

Move contacts to a low-cost marketing list (or suppress them from campaigns),

Set a re-engagement campaign that runs for 45 days with two honest offers,

If no response, move them to a final archive (or mark them for deletion after 90 days).

This keeps your database lean, reduces send costs, and improves deliverability, and you can still re-activate a lead if they come back.
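Before you archive anything, quantify the problem. A minimal sketch, assuming a contact export with a last-activity-date column (column names are illustrative; match them to your actual export):

```python
# Minimal sketch: flag stale contacts from an export, for review - not deletion.
import pandas as pd

STALE_MONTHS = 18  # the threshold discussed above; tune per business

contacts = pd.read_csv("hubspot_contacts_export.csv",
                       parse_dates=["last_activity_date"])

cutoff = pd.Timestamp.now() - pd.DateOffset(months=STALE_MONTHS)

# No recorded activity at all also counts as stale.
is_stale = contacts["last_activity_date"].isna() | (
    contacts["last_activity_date"] < cutoff
)

# Write a review file rather than deleting anything: archive first.
contacts.loc[is_stale, ["email", "last_activity_date"]].to_csv(
    "stale_candidates_for_review.csv", index=False
)
print(f"{is_stale.sum()} of {len(contacts)} contacts flagged for archiving review")
```

The output is deliberately a review file, not a delete call: it feeds the archive-then-re-engage flow above, and it gives you the backup trail this guide keeps insisting on.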
Step 3. Stop the bad inputs at the source

Broken forms, weird API pushes, and over-eager Zapier paths make a mess faster than people can clean it. Audit every inbound source:

Public forms,

Landing pages,

Chatbots,

Integrations (Salesforce, e-commerce, ad platforms),

Manual CSV imports in the last 12 months.

For each source, ask: What fields are we writing? Are we mapping them consistently? Who owns that source? Fix the ones creating garbage: Normalise property mappings, add validation on forms, and lock down who can import. If you have external teams hitting HubSpot (agencies, contractors), revoke access and set up a controlled import process. No more “we’ll just upload a CSV.”

Step 4. Workflow triage: Keep the useful, kill the rest

Workflows that don’t run aren’t “assets.” They’re technical debt. Filter workflows by “last run” and owner. Then:

Archive workflows that haven’t run in 90 days and have no business purpose,

Fix workflows throwing errors (those red logs scream for attention),

Consolidate overlapping workflows (multiple flows doing the same thing = chaos),

Label workflows clearly: Purpose, owner, last modified date.

Add a naming convention: [team] - [purpose] - [owner initials] - [YYYYMMDD]. It’s boring, but future you will not have to guess who killed the lead.

Discover our Podcast

Step 5. Property tidy: Less is more

HubSpot instances accumulate properties like trophies from short, painful projects. Ask: Is this property used in lists, workflows, filters, or reports? If not, it’s probably a candidate for deletion or consolidation. Process:

Export a list of custom properties and where they’re used,

Mark properties as “in use,” “duplicate,” or “orphaned”,

Merge duplicates and delete orphaned properties after a 30-day warning period.

Rename properties only if you can update all dependencies. Keep an audit sheet: Property name, apiName, purpose, owner, and last used.

Step 6. Standardise lifecycle stages and lead scoring

If Sales and Marketing disagree about what a lead is, nothing works. Agree on definitions for lifecycle stages (lead, MQL, SQL, opportunity, customer) and who moves the stages. Make them simple and enforceable. For lead scoring: Keep it meaningful. Remove noisy signals (e.g., pageviews with low intent), prioritise fit and intent, and map scores to clear actions. Test scoring thresholds with a 30-day run and adjust. Document everything and publish the definitions to Sales. Then make sure workflows align to these definitions, otherwise you’ll have people operating on different planets.

Step 7. Fix reporting so you can stop guessing

If your dashboards are full of “last 90 days” widgets that mean nothing, rebuild them. Identify five core metrics your execs actually use (e.g., MQL to SQL conversion, average sales cycle, pipeline by stage, lead source ROI, email deliverability). Build one clean dashboard that tells the truth. When rebuilding reports:

Use consistent date ranges,

Standardise UTM tracking and source attribution,

Avoid duplicated metrics across dashboards (confusing).

If reports disagree, trace them back to source definitions. Often the disagreement is a definition problem, not a math problem.

Step 8. Lock down access & reduce human error

Too many admins = too many ways to break things. Audit user permissions. Make a strict admin group and a broader editor group. Enforce:

Two-person approval for automation that can change lifecycle stage or delete data,

Limited API keys with named owners and expiration dates,

Logging and a change request process for major modifications.
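To make “named owners and expiration dates” enforceable rather than aspirational, even a tiny script beats a forgotten spreadsheet. A minimal sketch (the register contents are illustrative):

```python
# Minimal sketch: an API key register with named owners and expiry checks.
from datetime import date

API_KEYS = [
    {"name": "zapier-prod", "owner": "a.smith", "expires": date(2026, 3, 1)},
    {"name": "warehouse-sync", "owner": "j.doe", "expires": date(2026, 9, 1)},
]

def expiring_keys(within_days: int = 30) -> list:
    """Return keys that expire soon, so rotation is planned, not panicked."""
    today = date.today()
    return [k for k in API_KEYS if 0 <= (k["expires"] - today).days <= within_days]

for key in expiring_keys():
    print(f"Rotate '{key['name']}' (owner: {key['owner']}) before {key['expires']}")
```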
Yes, it adds friction. You want friction for things that can break revenue.

Step 9. Communication and change management

Cleaning HubSpot is a political act. Tell people what you’re doing before you do it. Run a short internal campaign:

A kickoff email that explains why (no drama, just facts),

A 30-day “watch period” where changes are flagged but reversible,

Training docs and a recording for any new flows or dashboards.

Include a short FAQ: What will be deleted, who to contact if a record disappears, and where the backup lives. The goal is fewer surprise Slack freakouts. (If you want, use this subject line: “FYI: HubSpot clean-up. What’s changing and why.” Short, direct, no panic.)

Step 10. Create a maintenance rhythm

Once clean, keep it clean. Schedule:

Monthly duplicate and error reports,

Quarterly property reviews,

Bi-weekly workflow review for any new builds,

An annual archive purge.

Make these tasks part of someone’s role and include them in your ops calendar. If it’s not scheduled, it won’t happen.

Final safety net. Backup & rollback

Before you delete or merge anything irreversible: Export. Full exports of contacts, companies, deals, and properties should be saved with a timestamp and stored in a shared drive. If an automated process goes sideways, you need a rollback plan and a contact who can execute it. Also keep a change log: What was changed, why, who approved it, and links to the export. This is not busywork, it’s insurance.

Sample 30-/60-/90-day plan (high level)

30 days: Triage, duplicate merges, archive clearly stale records, stop bad inputs, start stakeholder comms.

60 days: Workflow consolidation, property cleanup, reporting rebuild, lock down access.

90 days: Finalise deletions/archives, train teams, schedule maintenance cadence.

Adjust timing to your org size; small teams move faster, big teams need more approvals. The point is momentum: Fix the biggest leaks first.

Wrap-up: What success looks like

Clean data, fewer manual fixes, reports you can trust, faster handoffs to sales, and a predictable ops rhythm. You’ll lose some vanity properties and bad automations, but you’ll get a CRM that earns its keep. If there’s one last thing: Stop treating HubSpot like a dumping ground. Make it a system of record, not a personal playground. When people know there’s ownership, standards, and consequences, behaviour changes. And your next quarter will thank you for it. Discover our Services
- Marketing technology governance: The unsexy discipline saving budgets
Marketing technology governance is the operational equivalent of flossing. Nobody brags about it. Nobody puts it in a slide deck with fireworks. And almost nobody does it properly. Yet the teams that take it seriously spend less, move faster, and avoid the slow, painful decay that turns once-promising MarTech stacks into expensive, brittle messes.

Governance has a branding problem. Say the word in a meeting and half the room hears “approval gates”, “process police”, or “IT says no”. The other half quietly checks out because it sounds like admin. That’s unfortunate, because good governance isn’t about slowing marketing down. It’s about stopping money, data, and momentum from leaking out of the system.

What governance actually means (and what it definitely doesn’t)

Let’s clear the air early. Marketing technology governance is not about adding more approval layers. It’s not about centralising power. And it’s not about locking tools away behind bureaucracy. At its core, governance answers four very simple questions:

Who owns this platform?

What is it for (and what is it not for)?

How do changes get made safely?

How do we know it’s still earning its keep?

That’s it. Good governance is clarity, not control. It creates shared understanding so teams can move independently without breaking things, duplicating effort, or quietly racking up unnecessary spend. In high-performing teams, governance is often invisible. It lives in lightweight documentation, clear ownership, and predictable change patterns. People don’t feel governed. They feel confident. Bad governance, on the other hand, is loud. It shows up as rigid approval workflows, endless forms, and blanket rules that ignore context. That’s not governance. That’s organisational anxiety wearing a process hat.

The silent cost of unowned platforms

Every MarTech stack has at least one orphaned tool. You know the one. It was bought for a very specific reason three years ago. The person who championed it has moved on. The integration “mostly works”. Nobody is quite sure who can make changes without breaking something. The invoice still arrives. Faithfully. Monthly. Annoyingly.

Unowned platforms are where budgets go to die. Without clear ownership, a tool slowly drifts. Features go unused. Configurations grow inconsistent. Integrations degrade as upstream and downstream systems evolve. Data quality erodes so gradually that nobody notices until reporting becomes unreliable. And when something breaks, everyone assumes someone else is responsible.

This is where governance earns its keep. Assigning ownership doesn’t mean one person does all the work. It means someone is accountable for:

Defining the platform’s purpose

Maintaining its configuration standards

Coordinating changes

Making the call on renewals or retirement

Ownership creates decision velocity. Without it, teams default to indecision, workarounds, or buying yet another tool to solve a problem they technically already own. That’s how stacks bloat. Not through bad intent, but through neglect.

Documentation that people actually use

Most teams don’t have a documentation problem. They have a documentation trust problem. Either it’s too high-level to be useful, or so detailed it’s immediately outdated. Often both. The result is predictable: People stop reading it and rely on tribal knowledge instead. Effective governance documentation has a very different goal. It’s not trying to capture everything. It’s trying to capture the decisions that matter.
At minimum, every core platform should have:

A clear purpose statement

Defined ownership and escalation paths

Key integrations and dependencies

Configuration principles (not step-by-step instructions)

Known risks and constraints

Notice what’s missing: Screenshots of every setting. Those rot fast. Principles last longer. Good documentation is opinionated. It explains why things are set up the way they are, not just how. That context is what allows new team members, agencies, or AI tools to work safely without reverse-engineering the system. And yes, it should be short. If it takes longer to read than to ask someone on Slack, you’ve already lost.

Change control without killing momentum

This is where most governance efforts fall apart. Marketing Ops teams hear “change control” and immediately imagine ticket queues, CAB meetings, and two-week waits to update a form. That fear isn’t irrational. Many organisations have experienced exactly that. But change control doesn’t have to mean friction. It means predictability.

High-performing teams distinguish between different types of change:

Low-risk changes that can be made freely

Medium-risk changes that require peer review

High-risk changes that need coordination and testing

This tiered approach keeps velocity high while protecting the foundations. Nobody needs approval to update an email template. But changes to lifecycle logic, scoring models, or core integrations probably deserve a second pair of eyes. The key is making these rules explicit. When people know what they can do safely, they stop hesitating. When they know when to slow down, incidents drop dramatically. Ironically, the teams with the strongest governance often move faster than those without it. They spend less time fixing mistakes, rolling back changes, or debating who broke what.

Discover our Podcast

Governance as an enabler, not a blocker

Here’s the uncomfortable truth: Most marketing teams already have governance. It’s just accidental. It lives in unspoken rules, personal preferences, and historical decisions nobody remembers making. That kind of governance is fragile. It only works while the same people stick around. Intentional governance externalises that knowledge. It turns “how we do things” into something the organisation owns, not just individuals. This matters enormously as teams scale, outsource, or adopt new capabilities. Without governance, every change becomes risky. With it, experimentation becomes safer because the blast radius is understood.

And this is exactly why governance “converts”. Executives don’t wake up excited about documentation. They care about predictability, cost control, and risk reduction. Governance delivers all three without asking for headcount increases or flashy new tools.

How governance enables AI safely

AI has poured accelerant on every existing weakness in MarTech stacks. Suddenly tools can generate campaigns, update data, create workflows, and personalise content at scale. That’s powerful. It’s also dangerous if the underlying systems aren’t well governed. AI doesn’t understand intent. It understands instructions and patterns. Without clear governance, those instructions are inconsistent, outdated, or simply wrong. Good MarTech governance creates the guardrails AI needs to be useful, rather than destructive. Clear ownership defines who is responsible for AI-driven changes. Documentation provides the context models need to operate correctly. Change control ensures automated actions are tested before they go live.
Most importantly, governance defines where AI is allowed to act autonomously and where it isn’t. This isn’t about fear. It’s about alignment. AI thrives in environments with clear rules, clean data, and consistent patterns. Governance creates exactly that. Teams that skip this step don’t move faster. They just accumulate invisible risk at machine speed.

The maturity curve nobody talks about

There’s a pattern that shows up again and again. Early-stage teams move fast with almost no governance. It works because everything is small and visible. As complexity grows, cracks appear. Data inconsistencies. Duplicate tools. Conflicting processes. At this point, many organisations double down on speed instead of structure. They add more tools, more automations, more “quick fixes”. This works briefly, then collapses under its own weight.

Governance is what allows teams to exit this cycle. It doesn’t require perfection. It requires intentionality. A willingness to decide how the stack should behave, not just react to how it currently behaves. Teams that make this shift don’t talk about governance much. They talk about clarity. About confidence. About finally trusting their numbers again.

Why this feels boring (and why that’s a good sign)

Governance doesn’t demo well. You can’t show it in a sales deck with animated arrows. You can’t easily quantify it in isolation. When it’s working, nothing dramatic happens. Campaigns launch smoothly. Data behaves. Renewals get questioned instead of rubber-stamped. That’s precisely why it’s valuable. The best operational disciplines feel dull because they remove drama. They turn chaos into routine. They replace heroics with systems. If your MarTech stack feels exciting all the time, something is probably wrong.

Getting started without boiling the ocean

The mistake most teams make is trying to “fix governance” all at once. You don’t need a framework, a steering committee, or a six-month initiative. You need to start answering the uncomfortable questions you’ve been avoiding:

Who actually owns each platform?

Which tools are critical, and which are just nice to have?

Where do changes currently go wrong?

What knowledge lives only in people’s heads?

Start there. Document the answers. Socialise them. Adjust as reality pushes back. Governance is iterative. It improves through use, not theory.

The commercial reality nobody says out loud

Here’s the part people rarely admit: Governance is one of the easiest ways to unlock budget without asking for more money. It reveals shelfware. It exposes overlapping capabilities. It highlights processes that cost more to maintain than they return. For consultancies and internal ops teams alike, this is where real value lives. Not in selling another tool, but in making the existing stack behave like a coherent system. That’s why governance conversations resonate so strongly once they land. They speak to pain executives already feel but struggle to articulate.

Final thought

Marketing technology governance will never win awards for creativity. It won’t make your brand more exciting. It won’t give you something flashy to post on LinkedIn. What it will do is stop your stack from quietly draining time, money, and trust. In a world obsessed with speed, governance is how you move fast without breaking everything. And yes, it’s unsexy. That’s exactly why it works. If your stack has grown faster than your confidence in it, governance isn’t a “nice to have”. It’s the discipline that makes everything else work properly again. Discover our Services
- Revenue Ops vs Marketing Ops: Stop arguing and start designing
There’s a familiar conversation happening inside a lot of B2B companies right now. Marketing Ops says, “This sits with us.” Revenue Ops says, “No, this is ours now.” Leadership nods politely, adds another role to the org chart, and hopes the noise dies down.

It rarely does. Because this isn’t really a role problem. It’s a design problem. And MarTech platforms have a habit of exposing design problems very quickly.

Why this debate exists in the first place

A few years ago, nobody was arguing about this. Marketing Ops had a fairly clear remit: own the tools, run the campaigns, keep the data usable, and try not to break anything important.

Then things shifted. Marketing automation stopped being “top of funnel software” and became core infrastructure. HubSpot is a great example: it evolved from a marketing platform into a CRM, a sales system, a service platform, and eventually a full revenue engine.

At the same time, leadership started asking better questions. Questions like why pipeline looked healthy but revenue didn’t. Why forecasts changed depending on who built the report. Why marketing and sales could sit in the same meeting and talk about entirely different numbers.

So organisations reacted. They created Revenue Ops. Not because Marketing Ops failed, but because the business outgrew the way responsibility had been set up.

Where marketing ops genuinely ends

Marketing Ops is still critical. That hasn’t changed. In a well-run HubSpot setup, Marketing Ops is usually responsible for how demand is generated, captured, and prepared for sales: campaign architecture, lifecycle logic at the marketing level, lead capture and enrichment, consent and compliance, attribution, and the overall health of the marketing side of the platform. This is not admin work. It’s skilled, technical, and often underappreciated.

But there’s a line that matters. Marketing Ops shouldn’t be responsible for defining how revenue works. Not how pipelines are structured. Not how deals progress. Not how forecasts are calculated. And not how success is measured once money is involved. When Marketing Ops is pulled into those decisions, it’s rarely because they want to be. It’s usually because nobody else has taken ownership.

Where revenue ops genuinely begins

Revenue Ops exists to answer a very simple but uncomfortable question: how does revenue actually move through this business?

In simple terms, that means owning the structure that sits underneath the numbers leadership cares about: the CRM data model, lifecycle alignment across teams, pipeline definitions, forecasting logic, reporting consistency, and the rules that govern handoffs between functions.

Revenue Ops is not a fancier name for Marketing Ops. And it’s not a replacement for Sales Ops either. It’s the layer that connects how teams work to how revenue is reported and predicted. When that layer is missing or unclear, everyone ends up filling the gap in their own way.

What good ownership really looks like

High-performing organisations don’t spend much time debating who owns what. They’ve already decided. Marketing Ops focuses on generating and qualifying demand. Revenue Ops focuses on how that demand converts, scales, and shows up in revenue numbers. Sales Ops focuses on enabling reps to execute within that model. Leadership focuses on priorities and trade-offs when things get messy.

No single role “owns MarTech” end to end. The system is shared. Responsibility is deliberately split.
That’s the difference between teams that argue about tools and teams that use them properly.

The real issue hiding underneath the debate

Most companies never design an operating model. They hire roles. They buy software. They assume clarity will emerge over time. It doesn’t.

Without an explicit operating model, people default to protecting their patch. Data becomes political. Reports become negotiable. HubSpot turns into a very expensive collection of half-working processes. When things go wrong, the conversation drifts back to job titles instead of structure.

Marketing Ops vs Revenue Ops is the wrong argument. The real question is whether the way your business operates has ever been intentionally designed.

Stop arguing. Start designing.

If your teams are debating boundaries, that’s not dysfunction. It’s a signal that the business has changed and the operating model hasn’t caught up yet. The fix isn’t another hire or another tool. It’s deciding how your revenue engine is meant to work, then aligning roles around that reality.

If your MarTech feels powerful but oddly underwhelming, and your teams spend more time debating ownership than improving performance, an operating model workshop is the fastest way to reset. Design the system once. Stop having the same argument every quarter.
- How UCLA Health cut Eloqua costs, simplified operations, and unlocked room to grow
Not every Marketing Operations success story is about doing more. Some of the most valuable work is about stopping waste before it quietly drains budget, time, and credibility.

For UCLA Health, the problem wasn’t campaign performance or engagement. It was structural. Their Oracle Eloqua environment was costing more than it should and was on track to cost even more. Sojourn Solutions was brought in to fix it. Permanently.

The challenge: When contacts quietly become a liability

UCLA Health’s Eloqua contract capped total contacts at 1.5 million across two instances. On paper, this seemed manageable. In reality, it wasn’t. Because Eloqua counts duplicate email addresses separately across instances, the combined contact volume exceeded the contractual limit by roughly 200,000 contacts.

The secondary instance, originally built to support strategic acquisition initiatives, contained approximately 175,000 contacts - but by this point, it was no longer delivering meaningful business value. Instead, it had become a workaround. For nearly six months, the secondary instance was being kept alive primarily to warm IP and domain reputation, requiring the team to repurpose and deploy a bi-weekly newsletter with minimal strategic impact.

Meanwhile, growth in the primary instance was constrained, and UCLA Health faced a looming choice: pay recurring monthly overage fees or commit to an expensive contract upgrade. Neither option was appealing.

The solution: Simplify, consolidate, and stop paying for nothing

Sojourn began by assessing usage trends across both Eloqua instances. The conclusion was clear: the second instance was underutilised, operationally expensive, and no longer aligned with UCLA Health’s marketing strategy. The recommendation was decisive - decommission the secondary instance entirely.

Sojourn led the full decommissioning process end to end. Active forms, landing pages, campaigns, and reports were identified and disabled. Embedded landing pages were updated and redirected appropriately to avoid broken experiences. Database records, activity history, and assets were archived to ensure nothing was lost and could be referenced in the future if needed. Engaged and relevant contacts from the secondary instance were carefully migrated into the primary instance, preserving data integrity and ensuring continuity for active marketing programs.

The result was a single, cleaner Eloqua environment that the team could actually focus on and scale. Just as importantly, ongoing operational noise disappeared. No more maintaining a second instance simply to keep it “warm.” No more duplicated effort for minimal return.

Building a healthier database for the long term

Instance consolidation solved the immediate overage risk, but Sojourn didn’t stop there. To enable sustainable growth, Sojourn also supports UCLA Health with an annual, ad hoc database hygiene process designed to reduce contact counts without compromising compliance or data integrity.

This process manually identifies and removes non-emailable contacts such as hard bounces, long-term inactive leads, and deceased patients. Given UCLA Health’s daily full-file patient feeds, this work is coordinated closely with the Office of Health Informatics and Analytics (OHIA) to ensure suppression happens at the source - preventing removed records from being unintentionally re-created in Eloqua.

Deceased patients are flagged using MRN or Patient ID to ensure they never re-enter the system. Hard bouncebacks are flagged by email address, allowing contacts to be reintroduced automatically if they update their email address in their patient record and become emailable again in future daily feeds.

This hygiene process typically reduces total contact counts by an additional 10–20% each year, creating ongoing headroom for growth without triggering contract penalties.
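To make that flagging logic concrete, here is a minimal sketch of what a suppression pass of this kind can look like. It is illustrative only: the field names, the inactivity threshold, and the in-memory records are assumptions, not UCLA Health’s actual implementation, and a real process would work through Eloqua and the source patient feed rather than toy data.

```python
# Minimal, illustrative suppression pass. Field names, thresholds, and
# the in-memory records are assumptions for the sake of example, not
# the actual UCLA Health / Eloqua implementation.

from datetime import date, timedelta

INACTIVITY_CUTOFF = date.today() - timedelta(days=3 * 365)  # assumed threshold

# Deceased suppression keys on a stable identifier (MRN / Patient ID),
# so the record never re-enters via the daily feed.
deceased_mrns = {"MRN-0001", "MRN-0042"}

# Bounce suppression keys on the email address itself, so a contact
# comes back automatically if the feed later carries a new, valid email.
hard_bounced_emails = {"old-address@example.com"}


def classify(contact: dict) -> str:
    """Decide whether a contact stays, or why it gets suppressed."""
    if contact["mrn"] in deceased_mrns:
        return "suppress:deceased"      # permanent, identity-based
    if contact["email"] in hard_bounced_emails:
        return "suppress:hard_bounce"   # reversible via email change
    if contact["last_activity"] < INACTIVITY_CUTOFF:
        return "suppress:inactive"
    return "keep"


contacts = [
    {"mrn": "MRN-0042", "email": "a@example.com", "last_activity": date(2024, 5, 1)},
    {"mrn": "MRN-0007", "email": "old-address@example.com", "last_activity": date(2024, 6, 2)},
    {"mrn": "MRN-0009", "email": "b@example.com", "last_activity": date(2019, 1, 15)},
]

for c in contacts:
    print(c["mrn"], classify(c))
```

Even in a toy version, the design choice from the case study carries through: keying deceased suppression on a stable identifier makes the block permanent, while keying bounce suppression on the email address makes it self-healing once a patient record gains a new, valid address.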
The results: Fewer contacts, lower costs, more flexibility

The impact was immediate and measurable. UCLA Health reduced its total Eloqua contact count by approximately 175,000 through instance decommissioning alone, with a further 10–20% reduction driven by ongoing database hygiene efforts.

By addressing the root cause rather than treating the symptoms, UCLA Health avoided recurring overage fees estimated at $5,000 per month, as well as the need for a costly contract upgrade. At the same time, Marketing Operations became simpler, leaner, and easier to manage. Most importantly, the primary Eloqua instance now has room to grow without fear that success will be punished with unexpected costs.

What the client had to say:

“Sojourn helped us take a strategic, long-term approach to managing our Eloqua environment. By consolidating instances and improving database hygiene, we were able to reduce unnecessary costs, simplify our operations, and create room for future growth without disrupting active marketing programs.”

The takeaway

Marketing Operations isn’t always about launching more campaigns or adding more tools. Sometimes, the smartest move is knowing what to turn off. By decommissioning an unused Eloqua instance, cleaning up contact data, and putting long-term governance in place, UCLA Health transformed a growing cost risk into a scalable, sustainable foundation.

No overages. No wasted effort. No unpleasant surprises in the renewal meeting. Just a cleaner system and a marketing team back in control.











