Generative AI is great until someone pastes confidential data into a public model.
ChatGPT is fast, smart, and helpful, and it’s wormed its way into your workforce faster than any technology before it. Employees are using it to draft emails and content, summarize notes, brainstorm products, and debug code. But it wasn’t built with enterprise data security in mind.
Unlike Microsoft Copilot, which lives inside your environment, ChatGPT sits on the public internet. And that means anything employees enter — including sensitive documents, financial details, and customer data — can leave the safe confines of your security program entirely.
The problem is how easy and casual prompting ChatGPT feels. That frictionless experience is exactly what makes the risks so real.
10 Risky ChatGPT Prompts (And What They Could Expose)
Once a prompt has been entered, it’s out of your hands. OpenAI states it doesn’t train on API or enterprise inputs, but that protection doesn’t extend to every tier of the service. And most employees aren’t using the API anyway; they’re in the free web version.
This makes prompt security a hidden but growing risk vector.
Below is a list of 10 real-world examples of prompts you don’t want your users to try — and why they’re dangerous. Then we’ll cover the underlying risks driving those exposures, and how to prepare your organization for the reality of prompt-based data leakage.
These examples reflect actual behaviors we’ve observed across industries where convenience often wins over caution.
- “Can you summarize this meeting transcript?” (uploaded file)
Risk: That all-hands recording from last week might contain discussions of layoffs, acquisitions, or compliance concerns, none of which belongs in a public AI model. This is probably the most common, and most overlooked, risk on the list.
- “Help me write a performance review for my direct report.”
Risk: ChatGPT now has access to HR notes or employee evaluations that could be linked to specific individuals. Managers seeking help with tone or structure might unknowingly copy and paste employee names, performance metrics, or internal development concerns.
- “Here’s our product roadmap doc. Give me a customer-friendly version.”
Risk: Strategic plans, launch dates, or competitive IP can be ingested by the model with zero data retention guarantees. One quick copy/paste and you’ve handed your go-to-market strategy to a system outside your control.
- “Can you debug this Python script?” (includes environment variables or keys)
Risk: Code often carries embedded secrets or internal URLs that shouldn’t be exposed outside the network. Engineers moving fast might forget to scrub AWS keys or API tokens before seeking help, turning a quick fix into a genuine security incident. (A local pre-paste secret scan, sketched after this list, catches the obvious offenders.)
- “Compare pricing between our tiers and those of competitor X.”
Risk: Internal pricing strategy and financial details could be leaked — intentionally or not. In-house pricing models, bundling tactics, and discount strategies might get folded into a prompt meant for market positioning or sales enablement.
- “I’m working on a contract. What’s a better way to phrase this NDA language?”
Risk: Legal documents, especially drafts, should never be uploaded to external AI platforms. NDA clauses, indemnification language, MSA terms, even partial contracts can expose sensitive relationships or negotiation positions.
- “Take this spreadsheet and find errors in the financial forecast.”
Risk: Forecasts, revenue targets, and budget allocations are now beyond your control. Even anonymized, these spreadsheets could include pipeline deals, burn rates, and investor-sensitive projections that should stay internal.
- “What’s the best way to de-identify this customer data set?”
Risk: Even anonymized data can retain patterns or residual PII, and uploading it to ChatGPT can pose privacy and compliance risks. Sharing data samples for ‘sanitization help’ can breach internal privacy policies or trigger compliance violations under GDPR, HIPAA, or DPDP. (If de-identification is the goal, do it locally first; see the second sketch after this list.)
- “Help me write a rebuttal to this customer complaint.” (pastes email thread)
Risk: Private communications and customer data can be exposed, and there are potential litigation risks. Sensitive client conversations, contract details, and support case histories don’t belong in prompts, especially when trust and reputation are on the line.
- “This RFP looks too long. Can you distill it down to key asks?”
Risk: RFPs often include confidential partner information, internal capabilities, or pricing thresholds. A simple attempt to save time can inadvertently reveal privileged bid content and potentially breach NDAs or damage future deal viability.
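To make the code-debugging risk concrete: the cheapest mitigation is a local pre-paste check that runs before anything leaves the developer’s machine. Here’s a minimal sketch in Python. The AWS “AKIA” access-key prefix is a real, documented format; the other patterns and the example snippet are purely illustrative, and a real deployment would use a dedicated secret scanner rather than three regexes.

```python
import re

# Illustrative patterns for a local pre-paste check. The AWS "AKIA" access-key
# prefix is a real, documented format; the others are rough heuristics.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Hardcoded token/secret": re.compile(
        r"(?i)(api[_-]?key|token|secret)\s*[=:]\s*['\"][^'\"]{8,}['\"]"
    ),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return human-readable findings for likely secrets in the text."""
    findings = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1  # 1-based line number
            findings.append(f"{label} on line {line_no}")
    return findings

# Demo with AWS's documented example key -- not a live credential.
snippet = 'AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"\nprint("debug me")'
for finding in scan_for_secrets(snippet):
    print("Do not paste:", finding)
```

A clean result from a check like this is a sanity gate, not a guarantee.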
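Similarly, for the de-identification prompt: pseudonymize locally before any record reaches a third-party tool. Below is a minimal sketch, assuming hypothetical field names and a salted-hash approach. Keep in mind that even pseudonymized records can be re-identified through quasi-identifiers, which is exactly the residual-PII risk described above.

```python
import hashlib

# Hypothetical salt -- in practice, manage it like any other secret.
SALT = b"rotate-me-and-store-securely"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted, truncated SHA-256 hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:12]

# Hypothetical record; 'name' and 'email' are the direct identifiers here.
record = {"name": "Jane Doe", "email": "jane@example.com", "plan": "enterprise"}
safe_record = {
    key: pseudonymize(val) if key in {"name", "email"} else val
    for key, val in record.items()
}
print(safe_record)  # identifiers hashed; non-identifying fields preserved
```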
Four Key Security Risks Behind Those Prompts
Sure, ChatGPT doesn’t have access to your data, but that’s not the point. Your users give it access when they paste sensitive information into prompts.
Here are the core security risks that go along with access to ChatGPT:
- Misclassification and lack of labels
Users can’t reliably judge what’s sensitive on their own, and ChatGPT certainly won’t judge it for them. If data isn’t labeled, protected, or flagged internally, it’s far too easy to paste it into a prompt without realizing the consequences.
- No access controls or audit trails
Unlike enterprise tools, ChatGPT has no native access management tied to your organization. There’s no way to restrict who can use it, what they upload, or who sees what. And once it’s gone… it’s gone.
- Lack of business context
ChatGPT doesn’t understand the difference between a public press release and an internal draft, nor does it know what matters to your business or what compliance requirements your company is subject to.
- Uncontrolled outputs and reuse
Even if the prompt seems harmless, the output may not be. Generated summaries, rewrites, or translations can contain sensitive context that employees can then reuse in decks, emails, or public forums.
The ChatGPT-Readiness Checklist
Here’s what organizations should consider before giving generative AI tools like ChatGPT the green light.
✅ Have you educated employees on what not to share?
Assume they’re using ChatGPT. Training is your first line of defense.
✅ Do you have visibility into AI tool usage across the organization?
Shadow AI is the new shadow IT, and it’s even harder to track.
✅ Is sensitive data consistently classified?
If your data isn’t labeled correctly, employees won’t know what they shouldn’t share.
✅ Do you apply DLP or CASB policies to browser-based tools?
You need controls at the edge, not just in your email or cloud apps. (The sketch after this checklist shows the kind of pattern matching such a policy applies to outbound prompts.)
✅ Are you monitoring downstream use of AI-generated content?
That summary pasted into a customer-facing deck might contain more than you think.
✅ Do you offer safer, approved alternatives for common ChatGPT use cases?
If employees need help drafting, summarizing, or rewriting, give them secure tools (where available) that don’t compromise data.
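To ground the DLP/CASB item above: edge controls typically evaluate outbound text against pattern rules before it ever reaches a public AI tool, then block or mask on a match. Here’s a minimal sketch of that rule evaluation, with hypothetical rule names and deliberately simple patterns; commercial DLP/CASB engines use far richer detection.

```python
import re

# Illustrative rules only -- rule names and patterns are hypothetical, and
# commercial DLP/CASB engines go well beyond bare regexes.
DLP_RULES = {
    "Credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email address": re.compile(r"[\w.+-]+@[\w-]+\.\w[\w.-]*"),
}

def evaluate_prompt(prompt: str) -> list[str]:
    """Return the names of every rule the outbound prompt text matches."""
    return [name for name, rule in DLP_RULES.items() if rule.search(prompt)]

prompt = "Refund jane@example.com, SSN 123-45-6789, card 4111 1111 1111 1111"
violations = evaluate_prompt(prompt)
if violations:
    print("Blocked or masked by policy:", ", ".join(violations))
```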
Are You Ready for GenAI?
ChatGPT isn’t the enemy here. Neither are your employees. But ungoverned use of any GenAI is the enemy hiding in plain sight.
Make sure your security strategy includes protections for AI-generated risk, because your users aren’t going to stop pasting and prompting. The question is whether you’re ready for what happens next.
Want to See ChatGPT Security in Action?
Concentric AI makes it easy to get ChatGPT-ready.
✅ Discover your data
✅ Monitor your data for risks
✅ Classify your data
✅ Fix permissions
✅ Block or mask sensitive data from being shared with ChatGPT
✅ Protect ChatGPT’s outputs
Book a demo and we’ll show you how to keep ChatGPT from becoming your biggest security liability.