The Shadow AI Problem: Why 57% of Employees Hide Their AI Usage at Work 

Author: Andrea Mondello
Date: March 12, 2026

Multiple surveys in 2025 found that 63% of companies lack any AI usage policies at all. A large global study cited by Business Insider found that 57% of employees admit to hiding their AI usage from their employers.

Read that again: more than half of employees using AI at work are hiding it from their employers. 

This behavior is commonly referred to as shadow AI, where employees use AI tools without visibility from leadership or IT teams. Shadow AI usually appears when organizations have not established a clear AI governance policy or framework. Employees experiment on their own because they see productivity benefits, but do not know what is allowed. 

The result is one of two failure modes. Either organizations experience chaos, including compliance violations, data leaks, and inconsistent output quality. Or they experience paralysis, where multiple approvals are required for simple automation that could have been implemented in an afternoon. 

Both outcomes prevent organizations from capturing real value from AI.  

Without clear boundaries, employees experiment randomly with AI: 

An employee pastes customer Personally Identifiable Information (PII) into a public AI tool to draft a collections email. The data is now in a third-party system with unknown retention policies. You just created a compliance incident that nobody knows about yet. 

Marketing uses AI in one way. Sales uses it differently. Finance has its own approach. Customer communication has no consistent voice. Internal documents have wildly varying quality. Nobody knows which outputs to trust. 

Employees upload confidential financial data to get “AI help” with analysis. Proprietary information goes into systems you don’t control. Competitive intelligence walks out the door inside AI prompts. 

AI tools with poor security practices get access to internal systems. Phishing attacks use AI-generated content that sounds exactly like internal communications. Nobody has visibility into what tools employees are actually using.

The 57% who hide their usage aren’t doing it maliciously. They’re doing it because nobody told them the rules, and they’re afraid of getting in trouble. So they hide it, which makes everything worse. 

Some organizations overcorrect with excessive controls: 

Want to use AI to draft a meeting summary? Submit a request to the AI governance committee. Wait six weeks. Attend three meetings. Justify the use case. By then, the project is over. 

“No AI allowed until we have a complete governance framework.” Employees watch competitors move faster while internal projects stall waiting for perfect policies that never arrive. 

Any AI use feels like career risk. Employees who could improve their work 30% with AI assistance do things the old way because asking for permission feels too dangerous.

The people most likely to find valuable AI applications are exactly the people who won’t risk their careers proposing them. Good ideas die in silence. 

The paradox: organizations without policies generate chaos. Organizations with too many policies generate paralysis. Both outcomes destroy value. 

You don’t need a 47-page policy document. You need three clear boundaries that employees can remember and actually follow. 

Boundary 1: What’s Allowed Without Asking

Define a safe zone where employees can experiment freely: 

  • Using AI to draft internal communications 
  • Summarizing internal documents 
  • Researching publicly available information 
  • Generating first drafts for human review 
  • Any use case where errors are easily caught and corrected 

Boundary 2: What Requires Approval 

Define a yellow zone that needs a manager or compliance sign-off: 

  • Client-facing communications (a human must review 100%) 
  • Use cases involving financial data 
  • Any automation that makes decisions without human review 
  • Connecting AI tools to internal systems 
  • Any use case where errors would be costly or embarrassing 

Boundary 3: What’s Prohibited 

Define a red zone. No exceptions: 

  • PII in unapproved AI tools 
  • Confidential financial data in external systems 
  • Client data in any AI tool without client consent 
  • Any use that violates existing data handling policies 
  • Autonomous decisions above defined thresholds 
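The three boundaries above are simple enough to express as a machine-readable policy that tooling (an intranet lookup page, a chatbot, a request form) can check. A minimal sketch in Python; the use-case categories here are illustrative assumptions drawn from the lists above, not an official taxonomy:

```python
# Minimal sketch: the three-boundary policy as a lookup table.
# Category names are hypothetical examples, not a complete or official list.
AI_USE_POLICY = {
    "internal_draft": "green",         # drafting internal communications
    "document_summary": "green",       # summarizing internal documents
    "public_research": "green",        # researching publicly available info
    "client_communication": "yellow",  # human must review 100%
    "financial_data_use": "yellow",    # needs manager/compliance sign-off
    "system_integration": "yellow",    # connecting AI to internal systems
    "pii_external_tool": "red",        # PII in unapproved AI tools
    "confidential_financials": "red",  # confidential data in external systems
    "client_data_no_consent": "red",   # client data without client consent
}

def zone_for(use_case: str) -> str:
    """Return the policy zone for a use case.

    Anything not explicitly listed is treated as genuinely unclear,
    which per the framework means: ask before acting.
    """
    return AI_USE_POLICY.get(use_case, "unclear")
```

The point of encoding the policy as data rather than prose is that the default case is explicit: an unlisted use case resolves to "unclear," not to silent permission.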

Clear boundaries only work if employees know them, trust them and can follow them easily. 

  • Communicate once, reinforce constantly: Don’t announce policies in one email and expect compliance. Include AI boundaries in onboarding, team meetings, and project kickoffs. Make them impossible to forget. 
  • Explain the “why”: “Don’t paste customer data into ChatGPT” lands differently than “We could lose a major client if their data ends up in a third-party system. Here’s how to get AI help safely instead.” 
  • Provide approved alternatives: If you prohibit certain uses, provide approved tools that accomplish the same goal. Prohibition without alternatives just creates hidden workarounds. 
  • Make compliance easy: If following the rules is harder than breaking them, people will break them. Pre-approved tools, simple approval processes, quick turnaround on requests. 
  • Create psychological safety: Employees who made mistakes before policies existed need amnesty. If asking questions feels risky, people won’t ask – they’ll just hide. 

Even clear boundaries have edge cases. Define how employees handle uncertainty: 

Green zone: Act, inform the manager later if relevant 

Yellow zone: Propose to manager, wait for approval, document decision 

Red zone: Don’t act, escalate to compliance, get written approval before proceeding 

Genuinely unclear: Ask before acting. “I wasn’t sure, so I asked” is always acceptable 
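The escalation paths above can be sketched as a simple routing function, continuing the hypothetical zone labels from the policy table. This is an illustration of the decision rule, not a real compliance system:

```python
def next_action(zone: str) -> str:
    """Map a policy zone to the employee's next step, per the escalation paths."""
    actions = {
        "green": "Act; inform your manager later if relevant.",
        "yellow": "Propose to your manager, wait for approval, document the decision.",
        "red": "Do not act; escalate to compliance and get written approval first.",
    }
    # Any unrecognized zone falls through to the 'genuinely unclear' rule:
    # asking before acting is always acceptable.
    return actions.get(zone, "Ask before acting.")
```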

When policies are clear and psychological safety exists: 

  • The 57% who were hiding usage start sharing what they learned 
  • Effective practices spread across teams 
  • Ineffective practices get caught and corrected 
  • Compliance concerns surface before they become incidents 
  • Good ideas don’t die in fear 

The competitive advantage isn’t having the best AI tools. It’s having an organization where people actually use them productively, openly and safely. 

A simple AI governance framework creates the boundaries employees need to experiment safely while protecting data, compliance and operational quality. Without governance, AI stays hidden. With the right structure, it becomes a visible source of operational improvement. 

Continue the AI Readiness Series 

Organizations exploring AI adoption often discover that success depends less on the technology and more on operational readiness. 

You can also evaluate your organization’s readiness using the 90-Day AI Readiness Checklist and self-assessment.