Marketing automation audit checklist: What to review and when
A marketing automation audit is a systematic review of everything running inside your marketing automation platform - the workflows, the data, the integrations, the scoring models, the consent records, and the campaigns - to identify what's working, what's broken, what's redundant, and what's creating risk. It's the single most effective way to improve platform performance, reduce operational debt, and ensure your automation environment is fit for purpose.
At Sojourn Solutions, audits are one of the most common starting points for our client engagements. The pattern is consistent: an organisation has been running their marketing automation platform for two or more years, things have accumulated, and nobody has taken a full accounting of what's actually in there. This checklist reflects what we review and what we recommend every marketing operations team checks at least annually.
When to run an audit
An audit should happen at least once a year as standard practice. Beyond that, specific triggers should prompt an immediate review:
- A platform migration is being planned
- A new AI feature or agent is being activated
- Campaign performance has declined without a clear cause
- The team has experienced significant turnover
- A regulatory change affects how personal data is processed
- The organisation has gone through a reorg, product pivot, or market shift
- Nobody can confidently explain what all active automations do
If any of these apply, the audit is already overdue.
1. Platform hygiene
This is the foundation. Before looking at strategy or performance, check whether the platform itself is clean and well-maintained.
Active vs inactive programmes. How many programmes, campaigns, or workflows are currently active? How many of those are actually in use vs running on autopilot with no owner? Most instances that have been running for two or more years have a significant percentage of active programmes that nobody monitors. Identify everything that's active, confirm whether it should be, and deactivate or archive anything that's no longer needed.
Orphaned assets. Emails, landing pages, forms, snippets, and templates that aren't connected to any active programme. These accumulate over time and create clutter that makes the instance harder to navigate and maintain. Flag anything that hasn't been used in six months and review whether it should be archived.
Folder structure and naming conventions. A consistent folder structure and naming convention makes the platform navigable, auditable, and manageable at scale. If your instance has evolved organically over several years, the folder structure has likely drifted. Review whether current naming conventions are documented, followed consistently, and make it possible to find any asset quickly.
User access and permissions. Review who has access to the platform, what permissions they hold, and whether those permissions are appropriate. People leave organisations, change roles, or accumulate access over time. Audit active users against current team members, remove access for anyone who no longer needs it, and verify that permission levels match current responsibilities.
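Parts of this review can be scripted. Below is a minimal sketch, assuming you can export the platform's user list and your current team roster as CSVs, each with an "email" column - the file names and column name are illustrative, not any particular platform's export format:

```python
# Compare platform users against the current team roster.
import csv

def emails_from_csv(path: str) -> set[str]:
    with open(path, newline="") as f:
        return {row["email"].strip().lower() for row in csv.DictReader(f)}

platform_users = emails_from_csv("map_users_export.csv")  # illustrative file name
current_team = emails_from_csv("team_roster.csv")         # illustrative file name

# Accounts that exist in the platform but not on the current roster
# are candidates for removal.
stale_access = platform_users - current_team
for email in sorted(stale_access):
    print(f"Review and remove: {email}")
```

This only catches departed users; permission levels still need a manual review against current responsibilities.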
2. Data quality and integrity
Bad data is the most common cause of automation failures, poor campaign performance, and compliance risk. This section is where most audits uncover the biggest problems.
Duplicate records. Check the volume and distribution of duplicate contacts in your database. Duplicates cause inflated reporting, inconsistent personalisation, and conflicting automation behaviour. Identify the sources of duplication - form submissions, list imports, CRM sync issues - and address both the existing duplicates and the root cause.
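A minimal sketch of the duplicate check, assuming a contact export as CSV with "email" and "source" columns (both illustrative - most platforms can produce something equivalent):

```python
# Find records sharing an email address and see where they originate.
import pandas as pd

contacts = pd.read_csv("contacts_export.csv")  # illustrative file name
contacts["email"] = contacts["email"].str.strip().str.lower()

dupes = contacts[contacts.duplicated("email", keep=False)]
print(f"{len(dupes)} records share an email with at least one other record")

# Grouping by source (if your export carries one) points at the root
# cause - forms, list imports, or CRM sync.
if "source" in contacts.columns:
    print(dupes.groupby("source").size().sort_values(ascending=False))
```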
Field completeness and consistency. Review critical fields across your database: email, company, job title, country, lifecycle stage, consent status. What percentage are populated? Are values consistent - or are there 15 variations of "United Kingdom" across the country field? Inconsistent field values break segmentation, scoring, and personalisation.
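A minimal sketch of the completeness and consistency pass, assuming the same kind of contact export; the field names below are illustrative and should be swapped for whatever your instance actually uses:

```python
# Measure how populated each critical field is, then surface
# inconsistent values in one of them.
import pandas as pd

contacts = pd.read_csv("contacts_export.csv")  # illustrative file name
critical = ["email", "company", "job_title", "country", "lifecycle_stage"]

for field in critical:
    filled = contacts[field].notna().mean() * 100
    print(f"{field}: {filled:.1f}% populated")

# Every distinct spelling of the country field - the 15 variations of
# "United Kingdom" show up here.
print(contacts["country"].str.strip().value_counts())
```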
Data decay. Contact data degrades over time. People change jobs, companies rebrand, email addresses become invalid. Check your bounce rate trends, the age distribution of your database, and when records were last updated. A database where 30% of records haven't been touched in two years is a database with a significant decay problem.
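The age distribution is easy to compute if your export carries a last-updated date. A minimal sketch, assuming a "last_updated" column (illustrative name):

```python
# Bucket records by how long ago they were last touched.
import pandas as pd

contacts = pd.read_csv("contacts_export.csv", parse_dates=["last_updated"])
age_days = (pd.Timestamp.now() - contacts["last_updated"]).dt.days

buckets = pd.cut(age_days, [0, 180, 365, 730, float("inf")],
                 labels=["<6 months", "6-12 months", "1-2 years", ">2 years"])
print(buckets.value_counts(normalize=True).mul(100).round(1))
```

If the ">2 years" bucket approaches 30%, you have the decay problem described above.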
Integration health. Review every integration between your MAP and other systems - CRM, enrichment tools, analytics platforms, webinar tools, event platforms. Is data syncing correctly in both directions? Are field mappings still accurate? Are there sync errors accumulating that nobody's monitoring? Integration failures are a common source of data quality issues that compound over time.
3. Lead management and scoring
Lead scoring and lifecycle management are the operational backbone of how marketing passes leads to sales. If these are misconfigured, everything downstream suffers.
Scoring model calibration. When was the lead scoring model last reviewed against actual conversion data? Pull your closed-won deals from the past six months and check what their lead scores were at the point of handoff to sales. If high-scoring leads aren't converting, or low-scoring leads are, the model needs recalibration. Scoring models should be reviewed at least every six months.
Score distribution. What does the current score distribution look like across your database? If a large percentage of records sit above your MQL threshold, either the threshold is too low or the scoring logic is too generous. If almost nobody reaches the threshold, it's too restrictive. A healthy distribution should show a clear concentration of records at lower scores with a meaningful but manageable volume at MQL level and above.
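A minimal sketch of the distribution check, assuming a "lead_score" column and an MQL threshold of 100 - both illustrative, so substitute your own field and threshold:

```python
# Measure how much of the database sits above the MQL threshold,
# then print a coarse histogram of score bands.
import pandas as pd

contacts = pd.read_csv("contacts_export.csv")  # illustrative file name
MQL_THRESHOLD = 100  # illustrative - use your instance's threshold

above = (contacts["lead_score"] >= MQL_THRESHOLD).mean() * 100
print(f"{above:.1f}% of records sit at or above the MQL threshold")

bands = pd.cut(contacts["lead_score"], bins=[0, 25, 50, 75, 100, float("inf")])
print(bands.value_counts().sort_index())
```

The same export, joined against closed-won deals from your CRM, supports the calibration check above.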
Lifecycle stage definitions. Are lifecycle stages clearly defined, documented, and consistently applied? Can everyone on the team explain the criteria for moving from one stage to the next? If lifecycle stages are being updated manually by different people using different criteria, the data is unreliable and reporting based on it is meaningless.
Lead routing and SLA. Review how leads are being routed to sales. Are routing rules current - do they reflect the current territory model, the current sales team structure, and the current qualification criteria? How quickly are routed leads being followed up? If leads are sitting in a queue for days, the routing is functional but the handoff process is broken.
4. Campaign and workflow operations
This is where operational debt accumulates fastest. Every campaign that gets built adds to the total complexity of the instance, and very few teams actively retire campaigns when they're no longer needed.
Active automation inventory. Create a complete list of every active automation, trigger campaign, nurture programme, and workflow. For each one, document what it does, what data it uses, when it was last reviewed, and who owns it. This is the single most valuable output of any audit - and in most organisations, it's never been done.
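Once the inventory exists, keeping it honest can be partly automated. A minimal sketch, assuming you maintain it as a CSV with "name", "owner", and "last_reviewed" columns - all illustrative, since most platforms require compiling this list by hand or via export:

```python
# Flag automations with no owner or no review in the past year.
import pandas as pd

inventory = pd.read_csv("automation_inventory.csv",
                        parse_dates=["last_reviewed"])

stale = inventory[
    inventory["owner"].isna()
    | (pd.Timestamp.now() - inventory["last_reviewed"]).dt.days.gt(365)
]
print(f"{len(stale)} automations need an owner or a review:")
print(stale[["name", "owner", "last_reviewed"]])
```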
Workflow logic review. For each active automation, trace the logic end to end. Are there conditional branches that reference deprecated fields? Wait steps that no longer make sense? Triggers that fire on activities that are no longer tracked? Workflow logic that was correct when built can drift as the platform, the data model, and the business change around it.
Email deliverability. Review key deliverability metrics: bounce rates, complaint rates, inbox placement, and sender reputation. Check authentication records - SPF, DKIM, and DMARC should all be properly configured. Review whether you're on any blocklists. Deliverability problems often have root causes in data quality (sending to invalid addresses) or consent management (sending to people who don't want to hear from you), so this section connects directly to the data and consent sections.
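The authentication records can be checked directly from DNS. A minimal sketch using the third-party dnspython package, assuming "example.com" stands in for your sending domain; inbox placement and blocklist status still need dedicated tools:

```python
# Check that SPF and DMARC records exist for the sending domain.
import dns.resolver

domain = "example.com"  # illustrative - use your sending domain

def txt_records(name: str) -> list[str]:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:", spf or "MISSING")
print("DMARC:", dmarc or "MISSING")
# DKIM lives at <selector>._domainkey.<domain>; the selector comes from
# your sending platform, so verify it there.
```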
A/B testing and optimisation. Are campaigns being tested and optimised, or are they running on the same configuration they launched with? Review whether subject lines, send times, content variants, and audience segments are being tested systematically. A campaign that's been running for six months without any testing is a campaign that's been stagnating for six months.
5. Reporting and attribution
If your reporting is wrong, every decision based on it is wrong too. This section is where most organisations discover that the numbers they've been presenting to leadership don't mean what they think they mean.
Report accuracy. Pull three reports your team relies on - pipeline contribution, campaign performance, lead volume by source. Now rebuild them from scratch using raw data. Do the numbers match? In most instances, they don't. Reports built years ago often reference fields, stages, or definitions that have since changed. Nobody updated the report because nobody questioned the output.
Attribution model configuration. If you're running multi-touch attribution, check whether the model reflects your actual buyer journey. First-touch, last-touch, linear, W-shaped - each tells a different story. The question isn't which model is best. It's whether the model you're using is configured correctly against current touchpoints and whether anyone on the team can explain what it's measuring and why.
Dashboard hygiene. How many dashboards exist? How many are actively used? Who built them, and do they still reflect current KPIs? Dashboards accumulate just like everything else in a MAP. The ones nobody looks at should be archived. The ones leadership relies on should be verified quarterly against actual data.
CRM-to-MAP reporting alignment. Does your MAP report the same numbers as your CRM for shared metrics like MQLs, pipeline contribution, and conversion rates? If not, find where the discrepancy originates. This is one of the fastest ways to lose credibility with sales and leadership - two systems showing different numbers for the same metric.
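A minimal sketch of the cross-system comparison, assuming you can export the MQL list from both systems with an "email" column (file names are illustrative):

```python
# Compare MQL counts and find the records that exist in only one system.
import pandas as pd

map_mqls = set(pd.read_csv("map_mqls.csv")["email"].str.lower())
crm_mqls = set(pd.read_csv("crm_mqls.csv")["email"].str.lower())

print(f"MAP: {len(map_mqls)}  CRM: {len(crm_mqls)}")
print(f"In MAP but not CRM: {len(map_mqls - crm_mqls)}")
print(f"In CRM but not MAP: {len(crm_mqls - map_mqls)}")
```

The records in one system but not the other are where to look for the root cause - sync filters, stage definitions, or timing differences.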
6. Templates and brand consistency
Templates drift. What was on-brand and functional two years ago may not be now. This section is quick but catches problems that affect every campaign going out the door.
Email template review. Are your email templates current, mobile-responsive, and consistent with your brand guidelines? Check rendering across major email clients - Outlook, Gmail, Apple Mail. Templates that look fine in preview but break in Outlook are more common than anyone wants to admit. If your templates haven't been updated in over a year, they're probably overdue.
Landing page and form templates. Same check - are they on-brand, mobile-responsive, and functional? Test every active form. Do submissions route correctly? Do they trigger the right automations? Do they capture the right data? A form that's been live for two years without being tested is a form that might have been broken for months without anyone knowing.
Template governance. Is there a process for creating new templates, or does anyone with platform access build their own? Ungoverned template creation leads to brand inconsistency and technical debt. Define which templates are approved, who can create new ones, and what the review process looks like.
7. Consent and compliance
This section has become significantly more important as AI features make autonomous decisions based on consent data. Consent that was valid two years ago may not cover current processing purposes.
Consent record currency. When was consent captured for the records in your database? What was it captured for? Does it cover your current campaign types and processing purposes? If your preference centre offers categories that don't map to the campaigns you actually run, the consent data is structurally misleading — technically valid but operationally inaccurate.
Suppression list integrity. Review every suppression list and suppression rule. Are contacts suppressed for valid, current reasons? Are there rules that were written for campaigns or business conditions that no longer exist? Suppression rules accumulate over time and can silently exclude contacts who should be contactable — or fail to exclude contacts who shouldn't be.
Regulatory alignment. Are your consent management practices aligned with the regulations that apply to your contacts? GDPR, CASL, CAN-SPAM, CCPA, and the EU AI Act all have implications for how marketing automation handles personal data. If your database spans multiple jurisdictions, review whether consent is managed appropriately for each one.
Privacy and data processing documentation. Can you produce a record of what personal data your MAP processes, where it comes from, who has access, and what automated decisions are made based on it? If a regulator, auditor, or enterprise customer asks this question, you need to be able to answer it clearly and quickly.
8. AI features and governance
This is the newest section of any marketing automation audit and one that most organisations haven't addressed yet. Every major MAP now includes AI-powered features, and many of them are active without anyone having made a deliberate decision to deploy them.
AI feature inventory. Which AI features are currently active in your platform? Predictive scoring, automated segmentation, content recommendations, send-time optimisation, AI-assisted campaign building - catalogue what's turned on and what it does. Many AI features get activated during platform upgrades or by individual team members without formal approval.
AI data dependencies. What data is each AI feature consuming? AI features are only as good as the data they act on. If a predictive scoring model is using fields that haven't been updated in two years, the predictions are based on stale information. Map each AI feature to the data it depends on and verify that data is current and accurate.
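A minimal sketch of that mapping, assuming you've catalogued which fields each feature consumes - the feature names and field lists below are purely illustrative:

```python
# Map each AI feature to its fields and check those fields still exist
# and are reasonably populated.
import pandas as pd

contacts = pd.read_csv("contacts_export.csv")  # illustrative file name

dependencies = {  # illustrative feature-to-field catalogue
    "predictive_scoring": ["job_title", "industry", "lead_score"],
    "send_time_optimisation": ["last_updated", "country"],
}

for feature, fields in dependencies.items():
    for field in fields:
        if field not in contacts.columns:
            print(f"{feature}: field '{field}' no longer exists")
        else:
            filled = contacts[field].notna().mean() * 100
            print(f"{feature}: {field} is {filled:.1f}% populated")
```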
AI decision documentation. Can you explain how each AI feature makes its decisions? Not at a technical level - at an operational level. If someone asks "why was this lead scored highly" or "why was this contact suppressed," can you trace the decision back to the AI feature's logic and the data it used? If not, the AI is operating as a black box inside your operations.
AI governance ownership. Who is responsible for monitoring, reviewing, and maintaining AI features in your platform? In most organisations, the answer is nobody - AI features were activated and then left to run. Assign a named owner for each AI feature with responsibility for periodic review.
How to use this checklist
Don't try to do everything at once. Prioritise based on where the biggest risks and inefficiencies are likely to sit.
- If your main concern is campaign performance, start with sections 2 (data quality), 3 (scoring), 4 (campaigns), and 5 (reporting).
- If your main concern is compliance and risk, start with sections 7 (consent) and 8 (AI governance).
- If you're preparing for a migration, start with section 1 (platform hygiene) and section 2 (data quality) - cleaning up before you migrate saves significant time and cost.
- If you're activating AI features, start with sections 2 (data quality) and 8 (AI governance) - AI amplifies whatever state your data and operations are in, good or bad.
- If leadership is questioning marketing's numbers, start with section 5 (reporting) - nothing kills credibility faster than two systems showing different pipeline figures.
For organisations that haven't audited their marketing automation platform in over a year, a full review across all eight sections is recommended. At Sojourn Solutions, our platform audits follow this structure - and every one ends the same way: a clear plan, named owners, and a team that finally knows what's actually running in their instance.
If your marketing automation platform hasn't been audited in over a year, it's overdue. We'd welcome the conversation.