Shadow AI Is Already Inside Your Business: Here’s What to Do About It
Shadow AI is the use of artificial intelligence tools by employees without the knowledge, approval, or oversight of their organization's IT or security teams. It is the fastest-growing shadow IT risk category in 2026.
If your company lacks a formal AI acceptable use policy, employees likely use tools like ChatGPT, Google Gemini, Perplexity, Grok, Claude, and many AI-powered browser extensions for work, and each use may send organizational data to an external service. This is not a future concern; it is happening right now, inside your organization, on the devices you manage. Notably, employees using these tools are typically motivated by efficiency, not negligence. The core challenge is visibility: without knowing which tools are in use and what data they process, your organization cannot adequately safeguard its information.
What Is Shadow AI and How Is It Different From Shadow IT?
Shadow AI is a specific subset of shadow IT focused on artificial intelligence tools, platforms and features used inside an organization without IT’s approval or oversight. Businesses have dealt with unauthorized software for decades, but AI changes the equation significantly.
When someone uses an unauthorized file-sharing app, that represents a known risk. When someone pastes a client contract into ChatGPT, a proprietary pricing model into Google Gemini, or a draft legal filing into Perplexity, something different happens. Those AI platforms process the data, generate responses based on it and, in some cases, retain it for model training.
IBM’s widely circulated 2025 data breach report studied 600 organizations globally and found that 13 percent of them experienced a data breach directly connected to unauthorized AI use. Those breaches cost more than traditional breach types, and 63 percent of the organizations surveyed had no AI governance policy. They were not prepared because they did not know the risk existed.
What Does Shadow AI Actually Look Like Inside a Business?
Shadow AI does not require technical sophistication; any employee with a browser can create it. From what we see working with organizations across the Midwest, the unauthorized AI tools showing up most often fall into three categories:
Standalone AI Platforms
- ChatGPT (OpenAI) holds roughly 80 percent of the U.S. AI market with over 900 million weekly active users as of mid-2025. Most employees access it through personal accounts with default privacy settings that allow data to be used for model training.
- Google Gemini is embedded in Google Workspace and available as a standalone tool. The consumer version trains on user data by default. Employees who use personal Google accounts at work blur the line between personal and corporate data every time they prompt it.
- Perplexity AI is rapidly growing as an AI-powered research and answer engine. Employees use it to summarize documents, draft research and pull together competitive intelligence. It searches the web and processes uploaded documents, which means proprietary files can enter its pipeline.
- Grok (xAI) is integrated into the X/Twitter platform and gaining traction as a conversational AI tool. Its data-handling policies are less transparent than competitors’ policies, which introduces additional uncertainty about how submitted information is stored and processed.
- Claude (Anthropic) and Microsoft Copilot (personal editions) round out the most common platforms showing up in enterprise environments without IT approval.
AI-embedded SaaS Applications
- Tools your company already approved are adding AI features without requiring a new purchase or a new security review. Palo Alto Networks reported in 2025 that the average organization now has 66 GenAI-connected applications, with 10 percent classified as high risk.
- Writing assistants like Grammarly and QuillBot, note-taking tools like Otter.ai and Notion AI, and project management platforms are quietly integrating AI features that process your data on external servers. Your approved software stack may now include AI capabilities nobody evaluated for security or compliance.
Browser Extensions and Add-ons
- AI-powered browser extensions for grammar checking, email drafting, meeting transcription and sales prospecting are among the fastest-growing categories of unauthorized AI. These extensions often request broad permissions like access to all web page content, email content or clipboard data.
- Google’s AI Overviews in search results also deserve attention. When employees search for answers to work problems using Google, AI-generated summaries may incorporate data from their Google Workspace environment if they are signed into a personal account. The lines between “search” and “AI processing” are disappearing fast.
What Data Is at Risk From Unauthorized AI Use in 2026?
Harmonic Security analyzed 22.4 million enterprise AI prompts in a January 2026 study and found that ChatGPT alone accounted for over 71 percent of all incidents of sensitive data exposure. A separate 2024 study by CybSafe and the National Cybersecurity Alliance surveyed 7,000 workers and found that 38 percent share sensitive work data with AI tools without their employer’s knowledge.
The data at risk is specific and consequential:
- Customer records and personally identifiable information (PII) pasted into AI tools to draft communications or summarize accounts
- Financial projections, pricing models and proprietary business data entered into tools like Perplexity or Gemini for analysis
- Source code and technical documentation submitted for debugging. This is exactly what happened in the widely reported 2023 Samsung data leak, where three engineers pasted proprietary semiconductor code into ChatGPT within 20 days of the company allowing its use. Samsung banned the tool entirely afterward
- Protected health information (PHI) entered into consumer AI platforms that have no Business Associate Agreement in place, a HIPAA violation carrying fines of up to $1.5 million per violation category
- Controlled Unclassified Information (CUI) submitted to commercial AI platforms that lack FedRAMP authorization, creating potential CMMC compliance violations for defense contractors and government agencies
Here is a detail most business owners miss: ChatGPT’s Free, Plus and Pro tiers all train on user data by default. Google Gemini’s consumer version does the same. Grok’s data retention and training policies remain less transparent than those of its competitors. Even Perplexity processes uploaded files through its AI pipeline.
Enterprise-tier agreements typically do not train on user data, but employees using these tools are almost never on enterprise plans. They are using personal accounts with whatever the default settings are.
What Are the First Steps to Address Unauthorized AI Use?
The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, provides the governance foundation for tackling this. Its GOVERN function requires organizations to maintain inventories of AI systems in use and assign clear accountability for managing these risks. The framework’s core principle is straightforward: You cannot govern what you have not found yet.
Step 1: Run an AI discovery assessment.
Find shadow AI use by:
- Reviewing network traffic for connections to known AI service endpoints
- Auditing OAuth and SSO logs for third-party AI app authorizations
- Checking browser extensions deployed across company devices
- Reviewing expense reports and credit card statements for AI subscriptions
- Running a confidential employee survey asking what AI tools people use and why
The point is not to catch anyone. It is to understand the scope so you can make informed decisions about what to allow, what to restrict and where your data exposure sits right now.
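For the network-traffic step, here is a minimal sketch of what that review can look like. It assumes you can export proxy or DNS logs to a CSV with user and destination-host columns; the column names and the domain list are illustrative assumptions to adapt, not an exhaustive inventory of AI endpoints.

```python
import csv
from collections import Counter

# Illustrative, not exhaustive, list of AI service domains to watch for.
AI_DOMAINS = {
    "api.openai.com", "chatgpt.com", "gemini.google.com",
    "api.perplexity.ai", "www.perplexity.ai", "claude.ai",
    "api.anthropic.com", "grok.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests to known AI domains, grouped by (user, domain).

    Assumes a CSV export with 'user' and 'destination_host' columns;
    adjust the field names to match your proxy or DNS log format.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in scan_proxy_log("proxy_export.csv").most_common(25):
        print(f"{user:<30} {host:<35} {count}")
```

Even a rough count like this is usually enough to show leadership how widespread the usage is before you decide what to allow or restrict.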
Step 2: Write an AI acceptable use policy.
IBM’s 2025 data found that only 37 percent of organizations have any policy to manage AI use or detect unauthorized tools. A workable policy should define:
- The AI tools that are approved for business use, reviewed and updated quarterly.
- The types of data employees may and may not enter into an AI platform.
- A clear process for requesting new tools that includes a security and compliance review.
- Human review requirements for AI-generated outputs used in client-facing or regulated work.
The most effective policies frame AI governance around enablement, not prohibition. Banning AI outright pushes usage underground and kills your visibility. Provide approved tools with appropriate security controls and make it easy for people to use AI productively within guardrails that your IT team can monitor.
Step 3: Put technical controls in place.
Policy without enforcement is documentation without protection. Configure data loss prevention (DLP) rules to detect sensitive data flowing to AI service APIs. Use endpoint management to control browser extensions. For Microsoft 365 environments, leverage Purview sensitivity labels and insider risk management to flag AI-related data movement. Require AI tool access through federated identity (SSO with multi-factor authentication), so IT has visibility and employees cannot use personal accounts for work data.
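To make the DLP idea concrete, here is a simplified sketch of the kind of pattern matching a DLP rule performs on outbound content before it reaches an AI endpoint. Real DLP platforms such as Purview use validated detectors, checksum validation and contextual rules; the patterns and the flag_sensitive helper below are illustrative assumptions, not production-grade detection.

```python
import re

# Illustrative patterns only; production DLP uses validated detectors,
# checksum validation (e.g., Luhn for card numbers) and contextual rules.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in outbound text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

# Example: the kind of prompt an employee might paste into a consumer AI tool.
prompt = "Summarize this account: John Doe, SSN 123-45-6789, jdoe@example.com"
matches = flag_sensitive(prompt)
if matches:
    print("Flag or block before it leaves the network:", ", ".join(matches))
```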
How Does the NIST AI Risk Management Framework Apply to Shadow AI?
The NIST AI RMF 1.0 does not use the term “shadow AI” directly, but its GOVERN function maps to the problem precisely. GOVERN 1 requires that AI risk management policies exist and are implemented across the organization. GOVERN 2 requires clear accountability, meaning a named person or team owns the risk. GOVERN 6 addresses risks from third-party software and supply chains, which is exactly what happens every time an employee uses an external AI service to process company data.
The AI RMF Playbook goes a step further, recommending that organizations inventory all AI models or systems in use, not just the ones someone has flagged as high-risk. For business owners, the takeaway is this: Even the federal government’s own governance framework assumes you know what AI tools are running in your organization. If you do not, that is the gap to close first.
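The AI RMF does not prescribe an inventory format, so what follows is a minimal sketch of the fields an AI tool inventory entry might capture. Every field name here is an assumption for illustration, not NIST language.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIToolRecord:
    """One inventory entry per AI tool in use; field names are illustrative."""
    name: str                     # e.g., "ChatGPT", "Gemini", "Perplexity"
    vendor: str
    plan: str                     # consumer / team / enterprise
    business_owner: str           # accountable person or role (GOVERN 2)
    approved: bool
    data_classes_allowed: list[str] = field(default_factory=list)
    trains_on_inputs: Optional[bool] = None  # unknown until verified against vendor terms
    last_reviewed: str = ""       # ISO date of the most recent security review

inventory = [
    AIToolRecord("ChatGPT", "OpenAI", "Enterprise", "CIO", True,
                 data_classes_allowed=["public", "internal"],
                 trains_on_inputs=False, last_reviewed="2026-01-15"),
    AIToolRecord("Perplexity", "Perplexity AI", "consumer", "unassigned", False),
]
```

Even a spreadsheet with these columns satisfies the spirit of the recommendation; the point is that someone maintains the list and reviews it on a schedule.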
What Should Business Owners Do About Unauthorized AI Right Now?
This is not a problem you solve once and walk away from. It is an ongoing governance discipline that evolves as fast as the AI tools themselves. Gartner predicted in November 2025 that by 2030, more than 40 percent of enterprises will experience security or compliance incidents stemming from unauthorized AI use. The businesses that stay ahead of that curve will be the ones that started building governance before an incident forced the conversation.
G6 IT helps businesses across the Midwest identify, assess and govern AI use through our cybersecurity risk assessment and compliance management services. Our approach is practical, not theoretical. We find what is running in your environment, assess where your data is exposed, build a policy that fits your operations and implement the technical controls to enforce it.
If no one has audited your organization’s AI use yet, that is where to start. Schedule a conversation with our team to get a clear picture of where you stand.
Frequently Asked Questions About Shadow AI
What is the difference between shadow AI and shadow IT?
Shadow IT is any technology used within an organization without IT’s approval. Shadow AI is a subset focused specifically on AI tools and platforms. The critical distinction is that AI tools actively process, analyze and in some configurations learn from the data employees input. Traditional shadow IT stores or transmits data. Unauthorized AI consumes it. That makes the risk profile fundamentally different and the urgency for governance higher.
Can I just ban AI tools to solve the unauthorized AI problem?
Blanket bans rarely work and often make the problem worse. The 2024 Microsoft Work Trend Index found that a vast majority of AI users bring their own tools regardless of company policy. Banning AI pushes usage underground where you lose all visibility and control. A more effective approach is to provide approved AI tools with appropriate security configurations, establish clear data classification rules, and monitor compliance. An AI acceptable use policy built around safe enablement produces better security outcomes than prohibition.
Does unauthorized AI use create compliance risks for HIPAA or CMMC?
Yes. For healthcare organizations, employees entering protected health information into consumer AI tools like ChatGPT or Google Gemini violates the HIPAA Privacy Rule because no Business Associate Agreement exists with those providers. For defense contractors, submitting Controlled Unclassified Information (CUI) to commercial AI platforms that lack FedRAMP authorization can constitute a CUI spillage event and jeopardize CMMC certification. Both scenarios create compliance exposure with significant financial consequences.
How do I find out if my employees are using shadow AI?
Start with an AI discovery assessment. Review network traffic logs for connections to known AI endpoints (api.openai.com, gemini.google.com, api.perplexity.ai, etc.), audit OAuth logs for third-party app authorizations, inventory browser extensions across managed devices, check expense reports for AI subscriptions, and conduct a confidential employee survey. G6 IT includes AI tool discovery as part of our cybersecurity risk assessment process.
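For the browser-extension piece specifically, a small sketch like the one below can flag likely AI extensions from an endpoint-management export. It assumes a CSV with device and extension_name columns and a keyword list you maintain yourself; both are illustrative assumptions to adapt to your environment.

```python
import csv

# Illustrative keywords; tune this list to the AI extensions you actually see.
AI_KEYWORDS = ("gpt", "chatgpt", "gemini", "copilot", "perplexity",
               "ai assistant", "transcribe", "summarize")

def find_ai_extensions(path: str) -> list[tuple[str, str]]:
    """Flag installed browser extensions whose names suggest AI functionality.

    Assumes a CSV export from your endpoint management tool with
    'device' and 'extension_name' columns; adjust to your export format.
    """
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            name = row.get("extension_name", "").lower()
            if any(keyword in name for keyword in AI_KEYWORDS):
                flagged.append((row.get("device", "unknown"), row["extension_name"]))
    return flagged

for device, extension in find_ai_extensions("extension_export.csv"):
    print(f"{device}: {extension}")
```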
How much does an unauthorized AI data breach cost?
IBM’s 2025 Cost of a Data Breach Report found that unauthorized AI use adds an average of $670,000 to the total cost of a data breach. The global average breach cost overall reached $4.44 million in the same study. For small and mid-sized businesses, even a fraction of that figure threatens operational continuity. Investing in AI governance before an incident is substantially less expensive than responding to one afterward.
Does ChatGPT train on the data my employees enter?
ChatGPT’s Free, Plus and Pro tiers train on user data by default, though users can opt out in settings. ChatGPT Team, Business and Enterprise plans do not train on user data by default. The same pattern holds across other platforms: Google Gemini’s consumer version trains on inputs by default, while enterprise agreements do not. Employees using these tools are almost always on personal or free-tier accounts with default training enabled.
What is an AI acceptable use policy, and does my business need one?
An AI acceptable use policy defines which AI tools employees may use, what data can and cannot be entered into them, and how the organization monitors compliance. IBM’s 2025 research found that only 37 percent of organizations have such a policy. Given that 80 percent of employees at small businesses use AI tools without approval (per Microsoft’s 2024 research), the answer for most organizations is yes. The policy should be reviewed quarterly because the AI landscape changes that quickly.
Are AI-powered browser extensions an unauthorized AI risk?
Yes, and they are one of the most overlooked vectors. AI browser extensions for grammar checking, email composition, meeting transcription and sales outreach frequently request broad permissions, including access to all web page content and clipboard data. These extensions send data to external AI services, often with minimal transparency about retention or processing. Controlling browser extensions through endpoint management is a critical component of any AI governance strategy.
Is Perplexity AI a data risk for businesses?
Perplexity AI is increasingly popular as a research and answer engine, and yes, it creates governance risks when used without IT oversight. Employees upload documents, paste research and enter proprietary questions into the platform. Perplexity processes this data through its AI pipeline and may retain it in accordance with its own terms of service. Like any external AI tool used outside IT’s visibility, it creates data exposure that the organization cannot monitor, audit or control.
What framework should my business use for AI governance?
The NIST AI Risk Management Framework (AI RMF 1.0, published January 2023) is the most widely referenced governance structure in the United States. Its four core functions (Govern, Map, Measure and Manage) provide a practical structure for organizations of any size. ISO 42001 (the AI management systems standard) and the OWASP Top 10 for LLM Applications offer complementary guidance. For most small and mid-sized businesses, the NIST AI RMF is the strongest starting point because it aligns with the broader NIST Cybersecurity Framework that many organizations already follow.
Expert Author Bio
Blake King, co-founder and CEO of G6 Communications, launched the veteran-owned managed IT and cybersecurity firm in 2007 after serving as a tactical network engineer in the United States Marine Corps, where he was a non-commissioned officer honor graduate and received the Navy and Marine Corps Commendation Medal. He and his team have almost two decades of experience designing, building and securing enterprise-level IT environments for diverse organizations, from DoD and DOE agencies to small and mid-sized businesses. He now leads G6’s strategic advisory practice, helping business owners align technology decisions with operational goals and compliance.