A new AI-powered exploit has thrown a spotlight on the risks of over-reliance on coding assistants, with crypto exchange Coinbase now in the hot seat. The so-called “CopyPasta License Attack”—uncovered by cybersecurity firm HiddenLayer—targets AI coding tools by hiding malicious prompts inside ordinary project files such as README.md or LICENSE.txt.
How It Works
Unlike traditional malware, CopyPasta embeds hidden markdown comments that AI models treat as legitimate instructions. Because license files are considered authoritative by coding assistants, the malicious payload gets replicated into every new or modified file without the developer’s knowledge.
HiddenLayer researchers demonstrated that once infected, AI tools like Cursor—Coinbase’s go-to assistant reportedly used by “every Coinbase engineer”—could be tricked into:
– Staging backdoors
– Silently exfiltrating sensitive data
– Running resource-draining commands
“All untrusted data entering LLM contexts should be treated as potentially malicious,” the firm warned, stressing the exploit’s ability to spread semi-autonomously across codebases.
Why Coinbase Is in the Spotlight
Coinbase CEO Brian Armstrong recently revealed that AI has already written up to 40% of the exchange’s code, with a target of 50% by next month. While Armstrong clarified that AI coding is currently limited to user interfaces and non-critical backends, the optics of a potential AI-driven exploit have fueled industry concerns.
The timing is notable: Coinbase, a publicly listed company and the largest U.S. crypto exchange, is a high-profile target. A successful CopyPasta attack could have sweeping consequences if malicious code made its way into production systems.
Why This Matters
AI prompt injections aren’t new, but CopyPasta raises the stakes by enabling semi-autonomous spread. Instead of infecting just one engineer’s environment, compromised files become carriers that can poison every AI assistant reading them, creating a chain reaction across repositories.
For comparison, earlier “AI worm” concepts like Morris II faltered because human checks often caught suspicious email activity. CopyPasta, by contrast, hides in plain sight—inside trusted project documentation that developers rarely scrutinize.
The Security Response
Experts are now urging companies to:
Scan files for hidden markdown comments
Manually review AI-generated code before deployment
Treat all external data feeding into large language models as “potentially hostile”
With Coinbase leaning heavily on AI to accelerate development, the CopyPasta exploit is more than a theoretical threat. It’s a reminder that in the AI era, documentation can be weaponized, and the line between code and compromise has never been thinner.