
The biggest data breach in your organisation is happening in a chat window


The risk isn't that your team is using AI. The risk is that they're feeding it data you're responsible for - client data, prospect data, CRM data - without any shared understanding of what's allowed, what's not, and who's accountable when something goes wrong. 


And right now, in the vast majority of organisations, that shared understanding doesn't exist. There's no policy. There's no guidance. There's just a team doing their best work with the best tools available and hoping nobody asks uncomfortable questions.



This isn't hypothetical. It's Tuesday.


The gap between "we use AI" and "we have rules about how we use AI" has become one of the biggest unmanaged risks in Marketing Operations. Not because the tools are dangerous. Because the tools are useful - so useful that adoption outran governance by about two years.


Consider what a typical MOPs practitioner might paste into an AI tool in a normal working week. Campaign performance data tied to specific accounts. CRM exports with lead scores, lifecycle stages, and sales notes. Email copy containing personalisation tokens that reference real people.


None of that is malicious. All of it is efficient. And most of it involves personal data that, depending on your jurisdiction, is covered by GDPR, CCPA, or equivalent regulations that your legal team spent significant time and money building compliance frameworks around.


Those compliance frameworks were designed for your CRM, your MAP, your data warehouse - systems with known data processing agreements, documented retention policies, and controlled access. They were not designed for a situation where someone copies 500 contact records into a third-party AI chat window that may or may not retain the input, may or may not use it for model training, and definitely wasn't included in your last data protection impact assessment.



The AI tools aren't the problem. The absence of rules is.


To be clear: This is not an argument against using AI. That ship sailed. AI is now fundamental to how MOPs work gets done, and pretending otherwise helps nobody.

The issue is that most organisations skipped the step between "AI is useful" and "we have clear guidelines for using it." They went straight from experimentation to daily dependence without ever defining the boundaries.


What data can go into an AI tool? Nobody decided. Which AI tools are approved? Nobody compiled a list. What happens if someone pastes data from a client engagement into a personal ChatGPT account? Nobody thought about it. Who is responsible if pasted data ends up in a training set and resurfaces in someone else's output? Nobody wants to answer that one.


The result is a team making individual judgement calls, dozens of times a day, with no framework to guide them. Some people are conservative and avoid pasting anything sensitive. Some people paste everything because it's faster and nobody told them not to. Most people are somewhere in the middle, vaguely uncomfortable but not uncomfortable enough to stop.


That's not a sustainable position. It's a pre-incident position.



What's actually at risk


There are three layers of risk, and they escalate.


The first is data retention. Different AI platforms handle inputs differently. Some retain inputs to improve their models unless you explicitly opt out. Some retain inputs for a period as part of their service. Some offer enterprise tiers with no-retention guarantees. Most practitioners don't know which tier their organisation is on, because nobody told them. If your team is using free or personal accounts - which many are - the retention defaults are almost certainly broader than your data compliance framework allows.


The second is data leakage. When data goes into an AI model's training set, it doesn't come back out in recognisable form - but elements can resurface in unexpected ways. The more specific and structured the input, the higher the theoretical risk. There hasn't been a major public incident tied to B2B marketing data leaking through an AI model. Yet. But "it hasn't happened publicly" is not the same as "it can't happen," and it's definitely not a position your legal team would endorse.


The third is regulatory exposure. If your organisation operates under GDPR, you have specific obligations about where personal data gets processed, by whom, and on what legal basis. Pasting personal data into an AI tool that wasn't included in your processing records, wasn't covered by a data processing agreement, and wasn't part of the consent basis you collected the data under is, technically, a compliance gap.


Whether anyone will notice is a different question from whether it's compliant. And the answer to the compliance question is almost certainly no.



The fix isn't complicated. It's just overdue.


The good news is that building a sensible AI usage policy for a Marketing Ops team is not a six-month governance project. It's a conversation, a document, and a decision about where the lines are.


Start with which tools are approved. If your organisation has enterprise accounts with data processing agreements in place - most major AI platforms offer these on paid tiers - those are the tools the team should use. Personal accounts, free tiers, and unvetted tools should be off limits for any work involving client or prospect data. That's not restrictive. That's basic hygiene.


Then define what data can and can't go in. A useful framework is to think about it in tiers. Publicly available information - company names, general industry categories, publicly listed job titles - is low risk. Internal operational data - campaign structures, automation logic, anonymised performance metrics - is medium risk and probably fine in an approved tool. Personal data - names, email addresses, contact records, anything tied to an identifiable individual - needs explicit rules, and in many cases the answer should be "don't paste it."
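The tiering above can be sketched as a simple triage helper. This is an illustrative assumption, not an official schema - the tier labels, keywords, and regex are placeholders a team would replace with the categories from its own policy:

```python
# Sketch of the three-tier risk triage described above.
# The keywords and regex are illustrative assumptions, not a complete detector:
# a real policy would enumerate its own field names and default conservatively.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

PERSONAL_MARKERS = ("lead score", "lifecycle stage", "sales note")
OPERATIONAL_MARKERS = ("campaign", "automation", "conversion rate")

def classify(text: str) -> str:
    """Return 'high', 'medium', or 'low' risk for a candidate AI prompt."""
    lowered = text.lower()
    if EMAIL_RE.search(text) or any(k in lowered for k in PERSONAL_MARKERS):
        return "high"    # personal data tied to identifiable individuals
    if any(k in lowered for k in OPERATIONAL_MARKERS):
        return "medium"  # internal operational data: approved tools only
    return "low"         # anything a human hasn't flagged as internal

print(classify("Q3 campaign conversion rate by segment"))       # medium
print(classify("jane.doe@example.com, lifecycle stage: MQL"))   # high
```

A helper like this can't replace judgement - it simply makes the tiers concrete enough to wire into a checklist or a pre-paste prompt.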


Make the policy specific enough to be useful. "Use AI responsibly" is not a policy. "Do not paste CRM contact records into any AI tool that is not on the approved list" is a policy. "If you need to use AI to clean or segment a list containing personal data, use [approved tool] on the enterprise account and remove names and email addresses first" is a policy. People follow rules they can understand. They ignore principles that sound nice but don't tell them what to do at 3pm on a Wednesday when they're trying to fix a list.
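The "remove names and email addresses first" rule can even be partially automated. A minimal sketch, assuming a CRM export as CSV with hypothetical column names ("First Name", "Email", and so on - adjust to whatever your export actually contains):

```python
# Minimal redaction pass over a CRM export before it goes near an AI tool.
# Column names in DROP_COLUMNS are hypothetical examples; a real version
# would mirror the identifier fields listed in your own policy.
import csv
import io

DROP_COLUMNS = {"First Name", "Last Name", "Email", "Phone"}

def redact_csv(raw: str) -> str:
    """Strip direct-identifier columns from a CSV string."""
    reader = csv.DictReader(io.StringIO(raw))
    kept = [f for f in reader.fieldnames if f not in DROP_COLUMNS]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=kept, lineterminator="\n")
    writer.writeheader()
    for row in reader:
        writer.writerow({k: row[k] for k in kept})
    return out.getvalue()

sample = "First Name,Email,Lifecycle Stage\nJane,jane@example.com,MQL\n"
print(redact_csv(sample))
# Lifecycle Stage
# MQL
```

Dropping identifier columns reduces risk but doesn't eliminate it - free-text fields can still contain names - so redaction supplements the policy rather than replacing it.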


Finally, tell people. The most common reason teams don't follow AI data policies is that nobody communicated the policy. It lives in a document nobody reads, or it was mentioned once in a meeting that half the team missed. If you want compliance, you need a one-page guide that's easy to find, easy to understand, and explicitly covers the five or six tasks where practitioners most commonly use AI with company data.



Waiting for an incident is not a strategy


Every organisation that eventually builds an AI data policy does so for one of two reasons. Either someone thought ahead and built it proactively, or something went wrong and they built it under pressure.


The proactive version takes a week. A conversation with legal, a conversation with IT, a conversation with the team leads, and a document. The reactive version takes months - because it comes with an incident review, a legal assessment, a communications plan, and the kind of organisational anxiety that makes everyone overcorrect.


Right now, most B2B marketing teams are sitting between these two scenarios. AI usage is widespread, data is flowing into tools that aren't governed, and nobody has had the conversation yet. It's working fine. Until it isn't.


The policy doesn't need to be perfect. It needs to exist. Because the alternative isn't "no rules and everything is fine." The alternative is "no rules and everyone is guessing."


And guessing, at scale, with other people's data, is not something any organisation should be comfortable with for long.


Discover our AI Services




Sojourn Solutions is a growth-minded marketing operations consultancy that helps ambitious marketing organizations solve problems while delivering real business results.

MARKETING OPERATIONS. OPTIMIZED.


© 2026 Sojourn Solutions, LLC. | Privacy Policy
