
Vercel announced a security breach on April 19, 2026. However, the attack didn't start at Vercel.
The breach began two months earlier, when a Vercel employee signed up for Context.ai, an AI Office Suite, using their work Google account. There was no IT review or contract between the companies. The employee simply clicked "Allow All" on the OAuth permissions screen and continued working.
In February, a Lumma Stealer infection on a Context.ai endpoint exposed the OAuth token. Before Vercel found out, an attacker had already used it to access the employee's Google Workspace, move into Vercel's internal systems, and list environment variables. A threat actor then posted the stolen data on BreachForums for $2 million.
One employee, one AI tool, and one OAuth token with "Allow All" permissions made up the entire attack chain.
If you think this couldn't happen to your organization, the data shows otherwise.
Here are five lessons from the Vercel breach about Shadow AI, and what they mean for organizations that rely on assumptions instead of clear visibility.
Vercel did not have a contract with Context.ai. The integration happened only because an employee found the tool helpful and signed up using their work credentials.
This is not just a Vercel problem. Most organizations operate this way today.
78% of employees use AI tools like ChatGPT and Claude. Only 30% of organizations have full visibility into which AI tools are in use. Source: The Shadow AI Crisis, 2026
In a company with 500 employees, that means roughly 390 people are connecting AI tools to codebases, internal documents, customer data, and system credentials. Yet fewer than one in three IT teams can see it happening.
The Vercel breach began like many Shadow AI incidents: a single integration, unnoticed by IT, became the entry point for attackers.
What made this breach possible was not a complex exploit. It was simply an OAuth permission screen.
Context.ai's OAuth app asked for broad access. The employee approved it, and Vercel's Google Workspace settings allowed it. That single action gave the external AI tool the same permissions as the employee, and the access persisted even after Context.ai discovered its own breach and shut down its AWS environment.
The OAuth token was still valid, so the access remained open.
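This is the crux of the persistence problem: an OAuth grant lives independently of the vendor's infrastructure, so a Workspace admin has to find and revoke it explicitly. A minimal sketch of that triage step, assuming grant records shaped like a Google Admin SDK `tokens.list` response (the client IDs, users, and scope lists below are hypothetical examples, not real data):

```python
# Sketch: flag OAuth grants that survive a vendor-side breach.
# Grant records mimic the shape of Google Admin SDK tokens.list output;
# the client IDs, users, and scopes are hypothetical.

BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",   # full Drive access
    "https://mail.google.com/",                # full Gmail access
    "https://www.googleapis.com/auth/admin.directory.user",
}

def grants_to_revoke(grants, breached_client_ids):
    """Return grants that should be revoked: any grant to a breached
    vendor, plus any grant holding organization-wide broad scopes."""
    flagged = []
    for g in grants:
        breached = g["clientId"] in breached_client_ids
        broad = bool(BROAD_SCOPES.intersection(g["scopes"]))
        if breached or broad:
            flagged.append(
                {**g, "reason": "breached vendor" if breached else "broad scope"}
            )
    return flagged

grants = [
    {"user": "dev@example.com", "clientId": "context-ai.apps.example",
     "displayText": "Context.ai",
     "scopes": ["https://www.googleapis.com/auth/drive",
                "https://mail.google.com/"]},
    {"user": "dev@example.com", "clientId": "calendar-widget.apps.example",
     "displayText": "Calendar Widget",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

for g in grants_to_revoke(grants, breached_client_ids={"context-ai.apps.example"}):
    # In Google Workspace the actual revocation would go through the
    # Admin SDK Directory API: tokens().delete(userKey=..., clientId=...).
    print(f'revoke {g["displayText"]} for {g["user"]} ({g["reason"]})')
```

The point of the sketch is the asymmetry it exposes: granting the token took one click, but revoking it requires an admin who knows the grant exists.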
This is how Shadow AI risk works: the employee was not careless, just efficient. But clicking "Allow All" on an unapproved AI tool hands over your organization's identity to a third party you cannot see or hold accountable.
45% of employees say they would keep using AI tools even if their organization explicitly banned them. Source: Anagram Security, 2025
Banning these tools does not solve the problem. Employees find ways around restrictions because the tools help them work better. A ban only drives the activity underground, making it even harder for IT to know what permissions are being given.
When Shadow AI is the way in, the cost of a breach rises sharply.
Shadow AI-related breaches cost organizations $670,000 more on average than standard data breaches. Source: IBM Cost of a Data Breach Report, 2025
That extra cost comes from a specific issue: investigating a breach that was not on your risk list. In Vercel's case, responders had to piece together a two-month attack chain involving a third-party AI vendor's infected endpoint, a stolen OAuth token, a compromised Google Workspace account, and movement into internal systems. All this had to be understood before they could even see what data was exposed.
Every hour spent on that investigation costs money, time, and reputation. For Vercel, this process is happening now, in full view of their developer community.
Context.ai brought in CrowdStrike for its own incident response, but the stolen OAuth token was still not detected. Visibility into endpoints and infrastructure is different from tracking identity and permission chains across SaaS integrations. The Vercel breach happened in this gap, where most security tools offer no protection.
Organizations are not ignoring Shadow AI on purpose. They are struggling because governance frameworks have not kept up with how quickly AI is being adopted.
63% of organizations have no AI governance framework. Only 18% have any AI security policy at all. Source: IBM, 2025
The Vercel breach happened because of this gap. Traditional SaaS management tracks only contracted software, but Context.ai had no contract with Vercel. The integration was completely outside any governance process, so no one could check Context.ai's security, limit OAuth permissions, or revoke access when Context.ai was breached. This did not happen because of a lack of skill. It happened because no organization, no matter how advanced, can defend against threats they do not know about.
This is the lesson we keep seeing from these attack chains.
Banning tools does not work, because a ban just hides the activity. Policies fail when they address intent rather than actual use. Periodic audits fall short because Shadow AI spreads faster than any audit cycle. Even MFA does not help here: the attacker stole a valid OAuth token, not a password, and standard credential hygiene cannot protect against identity theft through SaaS-to-SaaS integrations.
The only way to match today's threats is with continuous visibility. You need to know which AI tools are active, what OAuth permissions they have, what data they access, and if their security meets your standards—before an incident forces you to find out.
Organizations with this level of visibility can manage Shadow AI risks. Those without it face the risk that any unapproved tool could lead to a Vercel-like incident if an attacker finds the right OAuth token.
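What that continuous visibility looks like in practice is an inventory keyed by app rather than by user: which employees granted access, what the combined scopes are, and whether any of them touch sensitive data. A rough sketch, assuming grant records pulled from an identity provider's token-listing API (the app names, users, and scope strings are hypothetical):

```python
# Sketch: aggregate per-user OAuth grants into a per-app visibility inventory.
# Input records mimic identity-provider token listings; the apps, users,
# and scopes are hypothetical.
from collections import defaultdict

# Scope substrings treated as touching sensitive data (illustrative only).
SENSITIVE_KEYWORDS = ("drive", "mail", "admin", "directory")

def build_inventory(grants):
    """Group grants by app: who granted access, the union of scopes,
    and whether any scope looks sensitive."""
    apps = defaultdict(lambda: {"users": set(), "scopes": set()})
    for g in grants:
        entry = apps[g["app"]]
        entry["users"].add(g["user"])
        entry["scopes"].update(g["scopes"])
    return {
        app: {
            "users": sorted(e["users"]),
            "scopes": sorted(e["scopes"]),
            "sensitive": any(k in s for s in e["scopes"]
                             for k in SENSITIVE_KEYWORDS),
        }
        for app, e in apps.items()
    }

grants = [
    {"user": "dev@example.com", "app": "Context.ai",
     "scopes": ["https://www.googleapis.com/auth/drive",
                "https://mail.google.com/"]},
    {"user": "pm@example.com", "app": "Context.ai",
     "scopes": ["https://www.googleapis.com/auth/drive"]},
    {"user": "pm@example.com", "app": "NotesBot",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

inventory = build_inventory(grants)
for app, info in sorted(inventory.items()):
    flag = "SENSITIVE" if info["sensitive"] else "ok"
    print(f'{app}: {len(info["users"])} user(s), {len(info["scopes"])} scope(s) [{flag}]')
```

Even this toy version surfaces the question the Vercel breach hinged on: which apps hold broad scopes, and on whose behalf.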

Josys designed its Shadow IT discovery features to address the blind spots revealed by the Vercel breach. For IT teams and MSPs managing environments where employees add AI tools with broad OAuth permissions, Josys offers three layers of control: discovery of every app in use, visibility into the OAuth permissions each app holds, and a map of where that access reaches across the organization.
As a result, IT teams and MSPs can shift from reacting to breaches to proactively managing Shadow AI, with a complete, always-updated picture of every app, its OAuth permissions, and its access across the organization.
The Vercel breach is not unusual. It is an early example of a pattern that will repeat in organizations that ignore Shadow AI risks.
Every IT and security leader should be asking not "How did this happen to Vercel?" but "How many AI tools in our environment hold OAuth tokens with broad permissions, and do we even know what they are?"
Most organizations do not know. Those who learn the hard way will have more in common with Vercel's April 19 disclosure than they would like.
The difference between operating in the dark and having a real-time map of every AI integration and its permissions is what separates organizations that read about breaches from those that experience them. Is Shadow AI running in your environment? Book a demo to learn more.
Sign up for a 14-day free trial and transform your IT operations.
