Two-Thirds of Leading AI Companies Leaking Secrets on GitHub, Report Finds
Wed | Nov 12, 2025 | 10:00 AM PST

A new study from cloud security firm Wiz revealed that 65% of leading private AI companies—those featured on the Forbes AI 50 list—have leaked verified secrets such as API keys, tokens, and credentials on GitHub.

According to Wiz researchers Shay Berkovich and Rami McCarthy, the findings demonstrate that the race to innovate in artificial intelligence is leaving serious gaps in security hygiene. "Even companies with minimal public repositories were found to have leaked information," they wrote. "Our research shows that rapid AI development and open collaboration can easily outpace security controls if organizations are not vigilant."

Collectively, the affected companies are valued at more than $400 billion, underscoring the significant risk posed when private AI models and training data are exposed through developer missteps.

Deeper scans reveal secrets hidden in plain sight

To identify the leaks, Wiz applied its "Depth, Perimeter, and Coverage" framework—an advanced scanning methodology that examines more than just public repositories. The researchers examined commit histories, deleted forks, gists, and contributors' personal repositories.

This approach uncovered secrets that conventional scanning tools miss. "By analyzing deeper layers of developer activity," the report notes, "we found credentials embedded in places that traditional tools never check."
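The core idea is reproducible in miniature. Below is a minimal sketch, assuming a locally cloned repository and a couple of illustrative token patterns (not Wiz's actual rules), of how a scanner can walk the full commit history rather than only the current tree, so secrets that were committed and later deleted still surface:

```python
# Minimal sketch: scan a repo's full commit history (not just HEAD)
# for secret-looking strings. Patterns are illustrative stand-ins,
# not Wiz's actual detection rules.
import re
import subprocess

PATTERNS = {
    # Hugging Face tokens start with "hf_"; the length bound here is
    # an approximation for demonstration purposes.
    "huggingface_token": re.compile(r"\bhf_[A-Za-z0-9]{30,}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_history(repo_path: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_line) pairs found anywhere in history."""
    # `git log -p --all` replays every diff on every branch, so a secret
    # that was committed and later removed still appears in the output.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True, errors="replace", check=True,
    ).stdout
    hits = []
    for line in log.splitlines():
        if not line.startswith("+"):  # only lines that were added
            continue
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((name, line.strip()))
    return hits

if __name__ == "__main__":
    for rule, line in scan_history("."):
        print(f"[{rule}] {line}")
```

Applying the same loop to deleted forks, gists, and contributors' personal repositories, as Wiz did, is what turns a routine scan into a much larger perimeter.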

Among the most frequently exposed credentials were API keys from Weights & Biases, Hugging Face, and ElevenLabs—platforms commonly used in AI training and model management. Some of these keys could have provided unauthorized access to private data, model weights, or inference endpoints.
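Leaked keys only matter if they still work, which is why Wiz counts "verified" secrets. As a hedged illustration, the sketch below checks whether a candidate Hugging Face token is live by calling the whoami-v2 identity endpoint (the endpoint name is our assumption based on the public huggingface_hub client's behavior); the same pattern applies to any vendor that exposes an authenticated identity call:

```python
# Minimal sketch: verify whether a candidate token is live before
# reporting it. The endpoint is an assumption based on the public
# huggingface_hub client; adapt per vendor.
import urllib.error
import urllib.request

def hf_token_is_live(token: str) -> bool:
    """Return True if Hugging Face accepts the token as valid."""
    req = urllib.request.Request(
        "https://huggingface.co/api/whoami-v2",
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200  # authenticated: the key is live
    except urllib.error.HTTPError:
        # 401 means revoked or never valid; anything else is ambiguous,
        # so report it as not verified.
        return False

# Example: hf_token_is_live("hf_...") -> True only for a working key
```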

In one notable example, a company with no public repositories and only 14 organization members still leaked sensitive information. Meanwhile, another firm with 60 public repositories avoided leaks entirely, which the researchers attributed to stronger security discipline and secret-management automation.

The human factor: disclosure gaps and governance debt

Wiz also attempted to notify affected companies about their exposed secrets. The researchers reported that nearly half of their responsible disclosure attempts went unanswered, suggesting that many AI startups still lack formal vulnerability-response processes.

That lack of maturity, according to Wiz, underscores a growing security debt within the AI ecosystem. As one line from the report put it, "Secrets management is still treated as an afterthought, even by companies whose entire business depends on safeguarding data and algorithms."

AI hasn't reinvented vulnerabilities—it's amplified them

Randolph Barr, CISO at Cequence Security, said Wiz's findings reflect a predictable outcome of "hyper-speed AI development colliding with long-standing security debt."

"The majority of these exposures stem from traditional weaknesses such as misconfigurations, unpatched dependencies, and exposed API keys in developer repositories," Barr explained. "What's changed is the scale and impact. In AI environments, a single leaked key doesn't just expose infrastructure; it can unlock private training data, model weights, or inference endpoints—the intellectual property that defines a company's competitive advantage."

Barr added that the AI revolution has introduced new, "AI-native" risks—including model and data poisoning, prompt injection, and autonomous agents chaining together API calls with minimal human oversight. These dynamics, he warned, "create an attack surface that grows faster than most security programs can respond."

He also emphasized how closely AI's growth is tied to API proliferation: "Every AI capability—model calls, data retrieval, inference requests—flows through APIs. Each connector or SDK adds new endpoints and tokens, often created automatically by AI agents." Without strong governance, he said, "shadow AI" and "shadow APIs" will continue to expand unchecked.

"If hyper-development is inevitable," Barr concluded, "so too must be hyper-defense. That means automating the fundamentals—secret hygiene, access control, anomaly detection, and policy enforcement—so human teams can focus on governance and strategic oversight."

Machine identities: the next blind spot

Shane Barney, CISO at Keeper Security, said the Wiz findings highlight the growing challenge of managing machine-based credentials at scale.

"Each of these credentials represents an access pathway that, if left unsecured, can expose sensitive systems or data," Barney noted. "As organizations adopt AI and cloud-native development, the number of non-human accounts and automated processes continues to rise. These machine identities are critical—but they often exist outside traditional identity and access management frameworks."

He urged CISOs to treat machine-to-machine credentials with the same rigor as human ones, combining Privileged Access Management (PAM) with enterprise secrets management to enforce boundaries and rotation. "The fundamentals still apply: know what identities exist, understand what they can access, and ensure those privileges are tightly governed," he said.
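In practice, that means application code never holds a long-lived literal key. Below is a minimal sketch, assuming a reachable HashiCorp Vault deployment with the KV v2 engine (any enterprise secrets manager works the same way; the path and key names are hypothetical), of a service fetching a credential at startup through the hvac client instead of reading it from a committed file:

```python
# Minimal sketch, assuming a HashiCorp Vault instance with the KV v2
# engine mounted at "secret" and a short-lived token in VAULT_TOKEN.
# The secret path and key name below are hypothetical.
import os
import hvac

def fetch_wandb_key() -> str:
    client = hvac.Client(
        url=os.environ["VAULT_ADDR"],
        token=os.environ["VAULT_TOKEN"],  # workload identity in production
    )
    if not client.is_authenticated():
        raise RuntimeError("Vault authentication failed")
    # Read the current version of the secret; rotation happens server-side,
    # so the application never pins a stale credential.
    secret = client.secrets.kv.v2.read_secret_version(
        path="ml-platform/wandb", mount_point="secret",
    )
    return secret["data"]["data"]["api_key"]
```

The point is the indirection: the repository carries only the path to the secret, never its value, and rotation becomes a server-side operation rather than a code change.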

Building security into the pipeline

Jason Soroko, Senior Fellow at Sectigo, agreed that avoiding leaks is about engineering secure defaults—not luck.

"If a company with many public repositories can avoid leaks, the lesson is not luck, but investment in plumbing that makes the safe path the fast path," Soroko said. "Organizations should default to secretless authentication—short-lived tokens, workload identity, scoped permissions—and block merges that add values to environment files. Pre-commit scanning and canary keys that alert on use are critical to tracking how fast leaks are detected and rotated."

He warned that even after key rotation, model artifacts may still contain old credentials: "Rotation fixes the immediate door lock, yet the old combination can live on inside models and evaluation artifacts for months."
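That residue is also scannable. A minimal sketch, assuming artifacts are ordinary files on disk and using the same illustrative token shape as above, is to sweep them after every rotation, since a key-shaped string surviving inside an artifact is a signal the artifact needs rebuilding:

```python
# Minimal sketch: sweep saved model/evaluation artifacts for
# token-shaped byte strings. Pattern is an illustrative stand-in;
# multi-gigabyte checkpoints would need chunked reads.
import re
from pathlib import Path

TOKEN_SHAPE = re.compile(rb"hf_[A-Za-z0-9]{30,}")  # bytes: artifacts are binary

def scan_artifacts(artifact_dir: str) -> list[str]:
    """Return paths of artifact files containing token-shaped strings."""
    flagged = []
    for path in Path(artifact_dir).rglob("*"):
        if path.is_file() and TOKEN_SHAPE.search(path.read_bytes()):
            flagged.append(str(path))
    return flagged

# Example (hypothetical output): scan_artifacts("./checkpoints")
# -> ["checkpoints/run-3/eval_config.json"]
```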

The bottom line

Wiz's research—and the expert reactions it sparked—underline a clear reality: AI innovation is moving faster than most organizations can secure it.

The solution, experts agree, isn't to slow innovation but to automate defense at the same speed: continuous secret scanning, runtime credential management, governance baked into pipelines, and AI-driven anomaly detection that augments human oversight.

As the Wiz team concluded: "Security must evolve as fast as the systems it protects. The cost of a single exposed secret can now extend far beyond infrastructure—it can define the future of an entire AI company."

Follow SecureWorld News for more stories related to cybersecurity.
