
Like Humans, AI Requires Zero Trust

Feb. 23, 2026


Generative artificial intelligence (GenAI) workplace adoption has advanced much faster than anticipated, and many corporate security policies have failed to keep pace.

Nearly every company uses AI agents, but few have implemented technical controls. The result is a proliferation of shadow AI: employees using unauthorized AI tools.

AI chat tools and agentic assistants have quietly become the new shadow IT, not because people are reckless, but because these tools are fast, accessible, and genuinely helpful. They can also turn productivity habits into an untracked data-export pipeline.

According to the Organisation for Economic Co-operation and Development (OECD), an international public policy organization, an AI system is "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." Different AI systems vary in their levels of autonomy and adaptiveness after deployment. The European Union, Council of Europe, United States, United Nations, and other jurisdictions use the OECD's definition of an AI system in their legislative and regulatory frameworks.

"The most effective and efficient way to reduce risks... is by providing people with authorized platforms," said Marcelo Felman, Microsoft's general manager of enterprise security for Latin America. The goal, he said, is not to eliminate risk entirely by stopping the use of technology, but to strike a balance.

Shadow AI Means Unsanctioned Usage

When companies share data with AI systems without controls, they’re effectively exposing proprietary information to third-party models. That can increase risk by broadening the potential attack surface available to hackers.

There’s also a deeper operational challenge: unlike a calculator, an AI system is not deterministic.

A probabilistic system like AI works differently. It generates outputs based on statistical likelihoods learned from data patterns. Ask the same question twice and you might get two wildly different answers. Both may be reasonable; one might contain an error. That’s because the model calculates the most likely next word at each step rather than executing a fixed decision tree.
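The difference can be illustrated with a toy sketch. This is not a real language model; the three-word "vocabulary" and its probabilities are invented for illustration. It only shows the core mechanic: sampling the next word from a probability distribution, so identical prompts can produce different outputs.

```python
import random

# Toy illustration (not a real LLM): the model assigns probabilities to
# candidate next words and samples from them, so repeated runs can diverge.
vocab = {"profit": 0.45, "revenue": 0.35, "loss": 0.20}

def next_word(probs, rng):
    """Sample one word according to its probability weight."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Two "identical prompts" with different random states can yield different words.
print(next_word(vocab, random.Random(1)))
print(next_word(vocab, random.Random(2)))
```

A calculator run twice on the same input always returns the same result; the sampler above, by design, does not have that guarantee.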

Operational security risks expand further with what some refer to as the “double agent” problem. If an AI agent isn’t properly secured and validated, a malicious actor could manipulate it to act against your company’s interests. That’s why some cybersecurity leaders argue users should assume AI agents are already compromised.

"We talk a lot about securing people and devices, but AI agents demand the same level of scrutiny," said Rich Miller, Founder and CEO of STACK Cybersecurity. "If you can't see what an AI tool is accessing, you can't govern it. And if you can't govern it, you can't secure it. That's the shadow AI risk most businesses aren't prepared for."

Shadow AI is any AI usage that isn't reviewed, governed, or logged the way you'd expect for tools handling confidential information. That can be public AI chat tools, browser plugins, desktop apps, or AI features turned on inside Software-as-a-Service (SaaS) platforms.

At CruiseCon East 2026, Diana Kelley, Chief Information Security Officer (CISO) at Noma Security, captured the scale:

"The most important thing to know here is that pretty much everybody at your company has the ability to start creating agents."

Microsoft research shows that more than 80% of the Fortune 500 are deploying low-code/no-code agents. Creating an agent in the paid version of Microsoft Copilot takes less than a minute, with zero code or technical expertise required.

And it isn't only employees. Kelley also warned AI agents are appearing in the software you already pay for:

"Your SaaS providers are creating agentic systems, too, and are using them behind the scenes on the SaaS. So, can you get away from agents? Not really at this point, not the AI-driven."

Visibility Gap

A lot of companies still assume AI use is rare, centralized, or obvious. It's not.

  • In a recent survey, 38% of employees admitted to sharing sensitive information with AI tools without employer knowledge.
  • A Microsoft-linked report found 71% of workers surveyed in the UK had used unauthorized consumer AI tools at work.
  • A global KPMG and University of Melbourne study reported 57% of employees said they hide their AI use, and 48% said they've uploaded sensitive company data into AI tools.
  • Only about 25-41% of companies have a formal AI policy, according to KPMG.

Kelley put the governance issue in plain language:

"Get some visibility; you know this but we can't manage what we can't see."

Confidentiality Risk

When someone pastes client information, HR details, financial data, legal drafts, product plans, or proprietary code into a public AI tool, you have to assume it has left your controlled environment. Whether it can be retained, logged, reviewed, or used for product improvement depends on the provider and the service tier.

For example, OpenAI explains that data handling differs across offerings and settings, and it publishes its consumer and enterprise policy approach publicly. The details matter, because "we don't train on your data" isn't the same thing as "this is governed, logged, and contractually protected."

This kind of exposure has already happened in the real world. In 2023, Samsung restricted employee use of generative AI after engineers reportedly entered sensitive code into ChatGPT.

AI Agents Raise Forensic Stakes

AI agents can take actions, not just generate text. They can connect to tools, call APIs, retrieve files, and run workflows. If that activity isn't tied to identity and access management, incident response gets ugly fast.
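One way to keep incident response tractable is to tie every agent action to a distinct identity in a structured audit log. The sketch below is a hypothetical illustration, not any vendor's API; the field names (`principal`, `action`, `resource`) and the agent identifier format are assumptions for the example.

```python
import json
import datetime

def audit(agent_id: str, action: str, resource: str) -> str:
    """Emit one structured audit record tying an action to an agent identity."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "principal": agent_id,  # a distinct identity per agent, never a shared login
        "action": action,
        "resource": resource,
    }
    return json.dumps(record)

# Hypothetical agent identity; with a shared "golden login" this field would be
# identical for every agent, and forensics could not attribute the action.
entry = json.loads(audit("agent:sales-summarizer@corp", "files.read", "crm/accounts.csv"))
print(entry["principal"])
```

If every record carries a unique principal, investigators can reconstruct which agent touched which resource; with a shared login, that trail collapses.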

Kelley called out the forensics challenge directly:


"We will not be able to forensically track and understand what they did if we can't tie them into identities. If we give them that golden login, it's going to be really hard to figure out when what went wrong after the fact."

Privacy, Breach Obligations Still Apply

AI doesn't create a legal free pass. If regulated data is shared with a third party in an uncontrolled way, it can trigger contractual obligations, privacy requirements, and potentially breach notification duties depending on what happened and what data was involved.

All 50 states have data breach notification laws, which is a useful reminder that disclosure obligations aren't rare edge cases.

And the AI-specific regulatory landscape is accelerating too. The Transparency Coalition’s 2025 report found 73 new AI-related laws were enacted across 27 states, with California leading the country in 2025 enactments. We track that momentum in our AI Hub and our roundups on AI state laws and California AI legislation.

Governance, Not Panic

Bans usually don't work. People still need the capability, so usage goes underground. A better approach is setting clear rules, offering approved options, and making sure your controls extend to AI the same way they do to other third parties.

  1. Get visibility into what AI tools are already being used (logs, CASB, endpoint telemetry, surveys).
  2. Make your AI acceptable-use rules explicit, especially around sensitive data categories.
  3. Require review before deploying agents or automations that can take actions across systems.
  4. Make identity non-negotiable. No shared "golden logins" for agents, bots, or integrations.
  5. Assess AI vendors like any other third party handling confidential data, including retention, logging, and contractual safeguards.
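As a starting point for step 1, even simple log analysis can surface unsanctioned AI use. The sketch below is a minimal illustration, assuming a CSV-style egress log and an illustrative (not exhaustive) list of GenAI domains; a real deployment would pull from a CASB or proxy instead.

```python
import csv
import io

# Illustrative watchlist; a real program would maintain a far larger set.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

# Hypothetical egress log in CSV form, standing in for proxy/CASB telemetry.
LOG = """timestamp,user,host
2026-02-20T09:14:02,alice,chat.openai.com
2026-02-20T09:15:40,bob,intranet.example.com
2026-02-20T10:01:13,carol,claude.ai
"""

def flag_genai_usage(log_text: str):
    """Return (user, host) pairs that reached a known GenAI endpoint."""
    reader = csv.DictReader(io.StringIO(log_text))
    return [(row["user"], row["host"]) for row in reader
            if row["host"] in GENAI_DOMAINS]

print(flag_genai_usage(LOG))
# → [('alice', 'chat.openai.com'), ('carol', 'claude.ai')]
```

The output is an inventory, not a verdict: the point of step 1 is to see usage before deciding what to allow, govern, or replace with an approved tool.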

If you're building a program and want a deeper primer, start with our Shadow AI post, then browse the AI Hub for governance and legislation updates.

Related Resources

Questions about AI governance, visibility, or zero data retention? Contact Us

Cybersecurity Consultation

Do you know if your company is secure against cyber threats? Do you have the right security policies, tools, and practices in place to protect your data, reputation, and productivity? If you're not sure, it's time for a cybersecurity risk assessment (CSRA). STACK Cybersecurity's CSRA will meticulously identify and evaluate vulnerabilities and risks within your IT environment. We'll assess your network, systems, applications, and devices, and provide you with a detailed report and action plan to improve your security posture. Don't wait until it's too late.

Schedule a Consultation | Explore our Risk Assessment