
The EU AI Act will expose your Marketing Ops: Who’s accountable when AI breaks things?


Marketing Ops has always been accountable. It just rarely looked like it.


When a campaign misfires, it’s “a creative issue”. When data goes bad, it’s “a CRM issue”. When attribution turns into astrology, it’s “a market issue”. Marketing Ops sits in the middle quietly fixing everything while everyone else argues about the colour of the button.


Now add AI to that mix.


Because AI does not fail politely. It fails at scale, at speed, and with enough confidence to make the wrong answer look like policy.


The EU AI Act is basically Europe’s way of saying: If you deploy AI, you do not get to shrug when it breaks. Someone has to own the risks, the controls, the monitoring, and the outcomes. And if your Marketing Ops function currently runs the stack, the workflows, the routing, the automation, the data, and increasingly the “helpful” AI features inside your tools, congratulations. You are about to get pulled into an accountability conversation you did not schedule.


This article is not legal advice. It’s a practical, Marketing Ops view of what the EU AI Act changes, what it forces you to be clear about, and how to answer the uncomfortable question: Who is accountable when AI breaks things?


And are you prepared for when it becomes fully applicable in August 2026?



What the EU AI Act actually is, and why Marketing Ops should care...


The EU AI Act is a regulation that sets risk-based rules for AI. It applies to public and private actors inside and outside the EU if they place AI systems or general-purpose AI models on the EU market, put them into service, or use them in the EU. 


The timeline matters because this is not some distant future threat you can park in a Q4 roadmap and never touch again.


The Act entered into force on 1 August 2024 and becomes fully applicable on 2 August 2026, with staged dates for different parts. Prohibited practices and AI literacy obligations have applied since 2 February 2025. Obligations for general-purpose AI models became applicable on 2 August 2025.


You do not need to be “building AI” to be on the hook. If your marketing team is using AI features in a CRM, marketing automation platform, ad platform, analytics tool, chatbot, content tool, sales engagement tool, or customer data platform, you are already in the system.


Marketing Ops cares for one simple reason: The Act forces clarity about who is responsible for what. And Marketing Ops is usually the only function that can map what is actually being used, where, by whom, and with what data.



The first accountability trap: “We didn’t build it, we just used it”


Under the Act, obligations fall on different actors, including providers and deployers. The Commission’s guidance describes the framework applying to providers (for example, a developer of a tool) and deployers (for example, an organisation using that tool). 


This is where a lot of Marketing Ops teams try to mentally exit the building.


“We’re not an AI company. We’re just using features in our tools.”


That may reduce some obligations, but it does not remove accountability.


Even in the high-risk context, the Commission’s guidance describes deployer obligations that are very operational: Using the system according to instructions, monitoring operation, acting on identified risks or serious incidents, and assigning human oversight to people in the organisation. 


So the real question is not “are we a provider?” It’s “are we a deployer, and if so, are we operating the system responsibly?”


In Marketing Ops terms, that translates into boring, unavoidable work: Governance, documentation, controls, training, monitoring, and incident response.



The second accountability trap: “AI is everywhere, so nobody owns it”


When everything has an AI button, it becomes culturally tempting to treat AI as a vibe rather than a system. But the EU AI Act is designed to do the opposite. It is trying to turn AI back into something you can audit.


That means you will get asked questions like:


Who approved this use case? Who decided what data goes into it? Who checked the output? Who is monitoring performance drift? Who is accountable when it produces misleading content, discriminatory outcomes, or security incidents?


If your organisation cannot answer those questions, you do not have “AI adoption”. You have unmanaged operational risk. And unmanaged risk has a habit of becoming a budget line, a headline, or both.



Where Marketing Ops is most exposed


Most Marketing Ops teams are not deploying AI for medical triage or border control. That’s not the point. The exposure comes from how marketing actually uses AI in the real world.


You run customer-facing AI interactions


If you deploy chatbots or other interactive systems, someone needs to think about transparency, user expectations, and what happens when the system confidently says something untrue.


The Commission’s guidance explains that the Act introduces transparency requirements for certain interactive or generative AI systems, such as chatbots, to address risks like manipulation, fraud, impersonation and consumer deception. 


That is marketing territory. Customer experience, web journeys, lead capture, qualification, and support deflection are all places where Marketing Ops often owns the tooling and the workflow.


When those systems break, the first question will be “why did you deploy it like this?”, not “which vendor did you buy it from?”


You publish AI-assisted content at scale


Marketing teams are already generating images, audio, video, and written content with AI-assisted tools. The Act’s transparency obligations include requirements on deployers in certain situations, including disclosure of AI-generated or manipulated image, audio and video content, and disclosure when text is generated or manipulated and published with the purpose of informing the public on matters of public interest. The Commission notes these transparency obligations and that guidelines will further clarify how they apply.


Even if your content does not fall into those specific categories, the direction of travel is clear. You are expected to be honest about what is synthetic when that matters to the audience, and to avoid systems that create deception.


Marketing Ops is exposed here because it often owns the content workflow: tooling, approvals, templates, distribution, and tracking. You are the function that can actually operationalise a disclosure rule without turning the team into a bureaucratic mess.


You use AI for targeting, segmentation, and decisioning


This is the area where marketing loves to pretend the model is “just helping”.


If AI influences who sees what, who gets prioritised, who is suppressed, who is routed, or who gets categorised, you are using AI as a decisioning layer.


Even when the Act does not label a specific marketing use case as “high-risk”, you still have obligations under other laws, and the AI Act does not replace those.


The European Data Protection Board has been explicit that the AI Act and EU data protection laws should be considered complementary and mutually reinforcing, and that EU data protection law remains fully applicable to the processing of personal data involved in the lifecycle of AI systems. 


So if your AI-driven segmentation relies on personal data, you are automatically in GDPR land as well, and your accountability picture now has at least two regulators’ expectations in it.


You might accidentally wander into high-risk territory through HR and recruitment marketing


A lot of marketing teams support recruitment, employer brand, internal comms, and candidate journeys. Some teams run targeted job advertising systems and automation. Some use tools that “optimise” job ads and candidate targeting.


The Commission’s guidance lists employment-related AI systems as examples of high-risk use cases, including systems intended to be used for recruitment or selection, which includes placing targeted job advertisements. 


If your marketing stack touches that area, you need a grown-up conversation with HR and Legal about who owns the system, who is the deployer, and what controls exist.


Marketing Ops does not need to own HR compliance, but Marketing Ops often owns the platforms that make these workflows possible. That makes you part of the accountability chain.



“When AI breaks things”, what counts as “breaks”?


This is where organisations get dangerously vague. AI “breaking” is not just a system outage. It can mean:


  • A chatbot gives incorrect product claims, pricing, security assurances, or legal statements.

  • An AI feature generates content that creates deception, impersonation risk, or misleading communications.

  • An optimisation system shifts targeting in a way that creates discriminatory outcomes, even unintentionally.

  • A data pipeline feeds the wrong inputs, and the model output becomes systematically wrong.

  • A generative tool produces content that breaches IP rules or internal policy.

  • A vendor updates a model, performance changes, and your safeguards do not catch it.

  • A workflow creates an outcome you cannot explain to an affected person, which becomes a practical problem in high-risk contexts where the Commission describes a right to an explanation for natural persons in certain situations. 


The point is not to predict every failure mode. The point is to stop acting surprised when failure happens, and to have an accountable operating model ready.



So who is accountable, legally?


There is no single magical job title that makes the risk disappear. Accountability is shared, but not vague.


At a legal role level, the Act places obligations on the relevant actor types (providers, deployers, and others depending on the scenario). The Commission’s guidance makes clear that deployers have concrete responsibilities in how they use and monitor certain systems, including assigning human oversight within their organisation. 


At a governance level, enforcement is not theoretical. The Commission’s materials outline penalties, with maximum thresholds including up to €35m or 7% of worldwide annual turnover for certain infringements, and other tiers for other non-compliance categories. 


At a data and privacy level, the AI Act does not push GDPR aside. The EDPB has stressed that data protection law remains fully applicable to personal data processing across the AI lifecycle, and the AI Act should be interpreted as complementary to GDPR and related laws. 


So if your question is “who will regulators look at?”, the honest answer is: They will look at the entity that deploys the system in the EU, the entity that provides it, and the people inside those entities who were supposed to provide oversight.


Which brings us to the more useful question...



Who should be accountable inside a company?


This is the Marketing Ops version of “stop pointing at each other like a Spider-Man meme and design a process”.


The EU AI Act effectively rewards organisations that can do three things on demand.


  1. They can show what AI is in use, where it is used, and why.

  2. They can show who approved it, what data it uses, and what safeguards exist.

  3. They can show how they monitor it, how they handle incidents, and how they train staff.


The Act’s AI literacy obligations have applied since 2 February 2025. That is not a “nice to have”. It is a forcing function that pushes companies to ensure the people using AI understand it well enough to use it responsibly.


Inside most B2B companies, accountability ends up looking like this.


  • Legal and Compliance sets rules, interprets obligations, and decides risk appetite.

  • Security sets requirements for vendor assessments, access controls, and incident response.

  • The DPO and privacy function owns the GDPR posture where personal data is involved, and the EDPB has been clear this remains fully relevant in AI systems. 

  • Marketing leadership owns what the business chooses to do, and what it is willing to sign off.

  • Marketing Ops owns how the work is actually done across platforms, workflows, data, and governance.


If you are looking for a single throat to choke, be warned: organisations are already trying to dump this on “the AI person” or “the data person”. That fails because the risk lives in operations. It lives in whoever can actually change how tools are configured and used.


That is why the EU AI Act will expose Marketing Ops. It makes operational accountability visible.



The uncomfortable part: Your vendor contracts will not save you...


Vendors can promise compliance. They can offer documentation. They can add toggles and disclaimers. They can be very convincing in sales calls and contracts.


But the moment you deploy the system in your environment, with your data, for your purpose, you become responsible for how it is used.


The Commission’s guidance on deployer obligations in high-risk contexts is blunt about deployers needing to use systems according to instructions, monitor operation, act on identified risks, and assign human oversight. The spirit of that is useful even outside high-risk: You cannot outsource oversight.


This is where Marketing Ops should stop accepting “the vendor said it’s compliant” as a meaningful internal control.



A practical accountability model for Marketing Ops


You do not need to turn your Marketing Ops team into a compliance department, but you do need a system that creates answers quickly when someone asks, “What AI are we using, and what happens if it fails?”


Here is what that looks like in practice, without turning this into a checklist article.


Start with an AI inventory that is brutally honest. Not a slide. A living list of tools and features, where they are used, what data they touch, and whether they interact with customers. If you cannot map it, you cannot govern it.
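

To make that concrete, here is a minimal sketch of what an inventory entry could look like as a structured record. The field names, tools, and example entries are illustrative assumptions, not a prescribed schema; a shared spreadsheet with the same columns does the job just as well.

```python
# A minimal AI inventory sketch. Field names and example entries are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    tool: str                      # platform or product name
    feature: str                   # the specific AI capability in use
    where_used: str                # team, journey stage, or region
    data_touched: list[str] = field(default_factory=list)
    customer_facing: bool = False

inventory = [
    AIInventoryEntry("CRM", "predictive lead scoring", "EMEA demand gen",
                     data_touched=["contact records", "engagement history"]),
    AIInventoryEntry("Website", "support chatbot", "all web visitors",
                     data_touched=["chat transcripts"], customer_facing=True),
]

# The governance test: can you list every customer-facing AI feature on demand?
for entry in inventory:
    if entry.customer_facing:
        print(f"{entry.tool} / {entry.feature}: touches {entry.data_touched}")
```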


Then define use-case ownership. Not tool ownership. Use cases. “Website chatbot”. “Email content generation”. “Lead enrichment”. “Audience segmentation”. “Recruitment ad targeting”. Every use case needs a named business owner and a named operational owner. The operational owner is often Marketing Ops.
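

A registry in the same spirit makes ownership queryable instead of tribal. The use cases come from the paragraph above; the role names are hypothetical. The useful property is that any use case missing from the registry is, by definition, ungoverned.

```python
# A sketch of use-case ownership: every use case maps to a named business
# owner and a named operational owner. Role names are hypothetical.
use_case_owners = {
    "Website chatbot":          {"business": "Head of Digital", "operational": "Marketing Ops"},
    "Email content generation": {"business": "Content Lead",    "operational": "Marketing Ops"},
    "Lead enrichment":          {"business": "Demand Gen Lead", "operational": "Marketing Ops"},
    "Audience segmentation":    {"business": "Demand Gen Lead", "operational": "Marketing Ops"},
    "Recruitment ad targeting": {"business": "Talent Acquisition Lead", "operational": "Marketing Ops"},
}

def owner_of(use_case: str) -> str:
    """Answer 'who owns this?' without scheduling a meeting."""
    owners = use_case_owners.get(use_case)
    if owners is None:
        return f"UNGOVERNED: '{use_case}' has no named owner"
    return f"{use_case}: business={owners['business']}, operational={owners['operational']}"

print(owner_of("Website chatbot"))
print(owner_of("Ad copy optimisation"))  # flags the gap immediately
```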


Then decide what “human oversight” means for each use case. The Commission’s language on assigning human oversight inside the organisation should not be treated as a high-risk-only curiosity. If a system can publish, route, prioritise, or decide, someone needs to be accountable for review points, guardrails, and escalation.
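

One way to keep that from staying abstract is to write the oversight rules down per use case, in a form someone can check against. The review points, guardrails, and escalation paths below are illustrative assumptions about what a B2B team might choose, not a reading of the Act.

```python
# A sketch of per-use-case human oversight rules. All review points,
# guardrails, and escalation paths are illustrative assumptions.
oversight_rules = {
    "Email content generation": {
        "review_point": "human approves all copy before send",
        "guardrails": ["no pricing claims", "no legal or security assurances"],
        "escalation": "content lead, then Legal",
    },
    "Website chatbot": {
        "review_point": "weekly transcript sampling",
        "guardrails": ["answers restricted to approved knowledge base"],
        "escalation": "Marketing Ops on-call, documented kill switch",
    },
}

for use_case, rules in oversight_rules.items():
    print(f"{use_case}: review = {rules['review_point']}; "
          f"escalate via {rules['escalation']}")
```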


Then put monitoring where it belongs: On outcomes, not activity. Monitor for things like hallucinated claims in customer-facing responses, unexpected shifts in routing, sudden performance drift after vendor updates, spikes in complaint patterns, and outputs that create deception risk.
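

As a sketch of what that could look like in practice, the check below compares this period's outcome metrics against a baseline and flags anything that drifted. The metric names and the 25% tolerance are assumptions to tune against your own stack, not recommendations.

```python
# Outcome monitoring sketch: flag metrics that drift beyond a tolerance.
# Metric names and the 25% tolerance are assumptions; tune to your stack.
def check_drift(baseline: dict[str, float], current: dict[str, float],
                tolerance: float = 0.25) -> list[str]:
    """Return alerts for metrics that moved more than `tolerance` vs baseline."""
    alerts = []
    for metric, base in baseline.items():
        now = current.get(metric, 0.0)
        if base > 0 and abs(now - base) / base > tolerance:
            alerts.append(f"DRIFT: {metric} moved {base:.2f} -> {now:.2f}")
    return alerts

baseline = {"chatbot_escalation_rate": 0.08, "leads_routed_to_emea": 0.40,
            "content_flagged_in_review": 0.05}
current  = {"chatbot_escalation_rate": 0.19, "leads_routed_to_emea": 0.41,
            "content_flagged_in_review": 0.06}

for alert in check_drift(baseline, current):
    print(alert)  # e.g. a vendor model update quietly changed behaviour
```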


Then add an incident pathway that does not rely on panic. If AI produces a harmful or misleading output, who gets notified, who can shut it down, who contacts the vendor, who handles customer comms, and who documents what happened?
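

A minimal sketch of that pathway, assuming hypothetical roles: an ordered runbook where every step has a named owner, so nobody improvises under pressure.

```python
# Incident pathway sketch. Roles and steps are illustrative assumptions;
# the point is that the order and the owners exist before the incident does.
INCIDENT_STEPS = [
    ("notify",         "Marketing Ops on-call"),
    ("contain",        "Marketing Ops (pause workflow / disable AI feature)"),
    ("vendor contact", "Platform admin"),
    ("customer comms", "Marketing leadership, with Legal sign-off"),
    ("document",       "Marketing Ops (timeline, impact, root cause)"),
]

def run_incident(description: str) -> None:
    print(f"AI incident: {description}")
    for step, owner in INCIDENT_STEPS:
        print(f"  {step:>14}: {owner}")

run_incident("chatbot made an incorrect security assurance to a prospect")
```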


Finally, train people like adults. The AI literacy obligations already apply. Training should be specific to the tools and use cases your team actually uses, and it should include what not to do, what must be reviewed, and what needs disclosure.


If your training is a generic “AI 101” webinar, you have technically done a thing. You have not reduced risk.



The privacy and compliance overlap you cannot ignore!


Marketing teams often treat GDPR as “the cookie banner problem”. That mindset is going to get expensive.


The EDPB’s statement is clear that data protection law remains fully applicable to personal data processing across the AI lifecycle and should be interpreted as complementary with the AI Act. 


On top of that, regulators are actively thinking about the interplay. The EDPB and EDPS have noted work on joint guidelines about the interplay between GDPR and the AI Act. 


For Marketing Ops, that means your AI governance cannot be divorced from your data governance. If you cannot explain what data goes in, why it is lawful, how it is minimised, how it is secured, and how it is deleted, you are not “doing AI”. You are doing risk.



One more complication: The rules are still being operationalised


It’s tempting to read a regulation like it’s a final instruction manual. In practice, there will be standards, guidelines, and codes of practice that affect how organisations implement parts of the Act.


For example, the Commission notes work on guidance for transparency obligations and a code of practice to support marking and labelling of AI-generated content. 


The Commission has also proposed adjustments to the timeline for applying high-risk rules linked to the availability of support measures like standards and guidelines, and that proposal is in the legislative process. 


So yes, some details will evolve. That is not a reason to wait. It is a reason to build an operating model that can adapt without chaos.



The blunt reality: Marketing Ops is accountable for readiness


When AI breaks things, the provider may be accountable for parts of compliance, depending on their role. The deployer is accountable for how it is used in their organisation. Regulators and stakeholders will not accept “the tool did it” as a defence, especially where transparency, oversight, and monitoring were expected. 


Inside the company, Marketing Ops is rarely the legal owner of the risk, but it is often the operational owner of whether the business can prove it is acting responsibly.


That is the exposure.


Not because Marketing Ops is to blame, but because Marketing Ops is where reality lives.


If you want a simple line to use internally, use this: Legal interprets the rules, Security protects the environment, Privacy governs personal data, and Marketing Ops makes the controls real across the stack.


And the fastest way to find out whether your Marketing Ops is ready is to ask one question: "If we had to explain our AI usage to a regulator, a customer, and our board tomorrow, could we do it without improvising?"


If the answer is no, the EU AI Act didn’t create the problem. It just stopped letting you hide it.


