What Is an AI Risk Assessment, and Does Your Business Need One?

AI Risk Assessment - G6IT

At G6, a managed IT and cybersecurity firm, we have been fielding more questions about AI governance in the past 12 months than any other topic. Your employees are already using AI. ChatGPT, Microsoft Copilot, Google Gemini and dozens of others are helping your team members draft emails, summarize documents and analyze data, often without IT approval. 

An AI risk assessment is the key to finding out what is actually happening and what it is costing you now.

Key Takeaways

  • Most businesses are already exposed; they just don’t know it. Shadow AI is the rule, not the exception. 
  • Microsoft 365 Copilot serves as a critical test of your organization’s permission management practices. Copilot can amplify existing vulnerabilities within your systems.
  • Regulated industries must adhere to mandatory compliance timelines rather than optional best practices. For healthcare, defense contractors and legal firms, regulations such as HIPAA, CMMC and ABA ethics opinions dictate AI governance requirements.

What It Is and What It Covers

An AI risk assessment is a structured evaluation of the security, compliance and operational risks associated with adopting AI tools within your organization. Unlike an audit focused on algorithmic bias, this assessment addresses practical concerns: what AI is being used for in your business and whether its use is secure.

The assessment typically addresses three areas that most businesses have not examined in detail:

  • Identification of all AI tools currently in use, including those not formally approved by the IT department
  • Analysis of the types of data being processed by these tools
  • Assessment of whether current AI usage results in regulatory exposure

What Does an AI Risk Assessment Look For?

A thorough evaluation examines eight risk categories:

Data Leakage

When employees paste customer records, financial data or source code into external AI tools, the vendor may store that data or use it to train future models. 

Shadow AI

Nearly two-thirds of employees engage shadow AI through personal accounts outside company oversight, and 57 percent enter sensitive data when doing so.

Compliance Exposure

AI tools do not know your regulatory obligations. HIPAA, CMMC, the FTC Safeguards Rule and state privacy laws all apply to data that flows through AI tools.

Access Permissions

This is the single most important category for Microsoft 365 Copilot users. Copilot does not introduce new security holes, but it will definitely surface existing ones.

Vendor Risk

Who has your data? Where is it stored? Is the vendor’s infrastructure FedRAMP authorized? 

Intellectual Property Exposure

Trade secrets or source code submitted to certain AI tools may no longer be exclusively yours under their terms of service.

Reliability Risk

AI tools can produce outputs with high confidence that are occasionally incorrect. In contexts such as billing, legal interpretation or clinical decision-making, these errors represent significant liabilities.

Security Vulnerabilities

Every AI integration expands your attack surface. Prompt injection is the top vulnerability in large language model deployments, according to OWASP.

The Shadow AI Problem

Shadow AI is not recklessness, but rather the predictable result of powerful free tools landing in the hands of everyone trying to do their jobs faster.

StatisticSourceYear
55% of employees use unapproved AI tools at workSalesforce2024
68% use AI via personal accounts; 57% enter sensitive dataMenlo Security2025
38% share confidential work data with AI without employer approvalCybSafe / Nat’l Cybersecurity Alliance2024
Only 37% of organizations have policies to manage shadow AIIBM Cost of a Data Breach Report2025
Shadow AI breaches cost an average of $670,000 moreIBM Cost of a Data Breach Report2025

Who Needs an AI Risk Assessment Most Urgently?

The majority of organizations would benefit from conducting an AI risk assessment.

Healthcare Organizations

HIPAA mandates a risk analysis prior to adopting AI tools that process protected health information (PHI). The average cost of a healthcare data breach is $7.42 million (IBM 2025).

Defense Contractors

CMMC 2.0 enforcement is currently active. Under the Department of Justice’s Civil Cyber-Fraud Initiative, contractors who certify compliance without implementing actual controls may be subject to False Claims Act liability.

Legal Firms

ABA Formal Opinion 512 states that using generative AI without understanding its data handling practices can violate attorney-client confidentiality as outlined in Model Rule 1.6.

Financial Services

Federal Reserve and OCC guidance (SR 11-7) on model risk management applies to AI tools utilized in credit, fraud detection and risk assessment processes.

What Does the AI Risk Assessment Process Look Like?

For a business adopting commercial AI tools, the process follows six steps:

  1. Create an inventory of all AI tools in use, cataloging both authorized and unauthorized tools, including those embedded in SaaS platforms and browser extensions.
  2. Identify and categorize risks. Map each tool to the eight risk categories above.
  3. Score and prioritize identified risks by evaluating their likelihood and potential impact. Address Controlled Unclassified Information (CUI) exposure and PHI in commercial AI tools as top priorities.
  4. Develop a mitigation plan that includes technical controls such as data loss prevention (DLP), access restrictions, audit logging and the implementation of an AI acceptable use policy.
  5. Remediate excessive permission allocations, which is particularly important prior to deploying Microsoft 365 Copilot.
  6. Implement ongoing monitoring, with quarterly reviews as a minimum standard. The introduction of new tools, regulatory changes and emerging vulnerabilities requires continuous oversight.

Frequently Asked Questions

What is the difference between an AI risk assessment and a standard cybersecurity risk assessment?

A standard cybersecurity assessment examines infrastructure, networks and endpoints for known vulnerabilities. In contrast, an AI risk assessment specifically addresses risks introduced by AI tools, including shadow AI, data leakage to external platforms, excessive access permissions, vendor risk and regulatory exposure related to AI usage. Most organizations require both assessments, yet few have conducted the latter.

How do I know if my employees are using AI tools I have not approved?

In most cases, organizations are unaware of unapproved AI tool usage without proactive investigation. Discovering shadow AI requires network traffic analysis, DNS filtering logs, or the use of a cloud access security broker (CASB). G6 Communications maps AI tool usage across your environment to identify both authorized and unauthorized tools, as well as the data processed by them.

Does using Microsoft Copilot require an AI risk assessment?

Yes, an AI risk assessment is recommended prior to deploying Microsoft Copilot. Copilot accesses data according to existing Microsoft 365 permissions, which are often broader than necessary in environments that have not undergone prior audits. G6 IT conducts pre-Copilot permission reviews to identify and remediate potential exposures before deployment.

Is an AI risk assessment required under HIPAA?

HIPAA’s Security Rule mandates a comprehensive risk analysis before adopting any technology that processes protected health information (PHI), including AI tools. Each AI vendor handling PHI on your behalf must also sign a Business Associate Agreement before implementation. In January 2025, HHS proposed the first major Security Rule update in two decades, with AI-specific requirements anticipated to become mandatory for covered entities.

What does an AI acceptable use policy cover?

An AI acceptable use policy specifies which tools employees are permitted to use, the conditions for their use and prohibited activities. A comprehensive policy includes an approved tools list, restrictions on certain data types (such as PHI, CUI, trade secrets and PII), mandatory training, vendor assessment requirements for new tools, incident reporting procedures and enforcement measures. This policy is a central component of the overall evaluation process.

How long does the process take?

For small to mid-sized businesses utilizing commercial AI tools, an initial assessment generally requires two to six weeks, depending on organizational complexity and regulatory obligations. G6 IT structures engagements to deliver a prioritized action plan rather than a prolonged consulting project.

What happens after the assessment?

The assessment produces a prioritized action plan. We collaborate with your team to implement the most critical items first, such as permission remediation in Microsoft 365, deployment of data loss prevention policies, establishment of an approved tools list and development of an AI acceptable use policy. Ongoing monitoring is integrated into your standard security practices.

Can my business be held liable for what an AI tool does with customer data?

Yes. Regulatory frameworks place responsibility on the organization that collected the data, not the AI vendor that processed it. Under HIPAA, covered entities are liable if a Business Associate mishandles PHI. Under CMMC, contractors are responsible for CUI regardless of which tool processed it. The FTC’s Section 5 authority applies to organizations that cause consumer harm through AI – even unintentionally. This evaluation documents your controls and creates a defensible record of due diligence.

Does a paid AI subscription make it safer than a free tool?

Paid tiers generally offer stronger data-handling commitments; ChatGPT Enterprise and Microsoft 365 Copilot both commit to not using customer data for model training, which free tiers do not always provide. However, a paid subscription does not resolve overpermissioned access, shadow AI from other tools, compliance configuration gaps or vendor security practices. It is one factor in vendor risk evaluation, not a substitute for governance.

How often should an AI risk assessment be repeated?

A quarterly review is the minimum recommended frequency for most businesses. A comprehensive reassessment should be conducted when a new AI tool or major platform update is introduced, a regulatory change occurs, a security incident involving AI takes place, there is a significant workforce change, or a new vendor introduces AI capabilities. G6 IT incorporates ongoing AI risk monitoring into managed IT engagements, ensuring clients maintain continuous oversight.

Ready to Find Out What AI Is Doing in Your Business?

G6 Communications is a managed IT and cybersecurity firm based in Fort Wayne, Indiana, with over 17 years helping businesses manage technology risk. Schedule a strategy session with our team. We will review your current AI environment, identify your specific exposure and give you a clear picture of where you stand – no jargon, no sales pitch.

Expert Author Bio

Blake King, co-founder and CEO of G6 Communications, launched the veteran-owned managed IT and cybersecurity firm in 2007 after serving as a tactical network engineer in the United States Marine Corps, where he was a non-commissioned officer honor graduate and received the Navy and Marine Corps Commendation Medal. He and his team have almost two decades of experience designing, building and securing enterprise-level IT environments for diverse organizations, from DoD and DOE agencies to small and mid-sized businesses. He now leads G6’s strategic advisory practice, helping business owners align technology decisions with operational goals and compliance.

Share This
Blake King
Blake King

The CEO and founder of G6 IT, Blake has more than 20 years of experience managing information technology and cyber security operations at Fortune 100/500/1000 companies in the Defense, Aerospace, Energy, and Healthcare sectors. He served as a tactical network engineer as a proud Marine and led a team that supported several thousand end users across three continents. Blake was awarded the Navy & Marine Corps Commendation Medal and was recognized for Outstanding Service as a Tactical Network Engineer.