Flaw in Gemini CLI coding tool could allow hackers to run nasty commands
Flaw in Gemini CLI coding tool could allow hackers to run nasty commands

Flaw in Gemini CLI coding tool could allow hackers to run nasty commands

How did your country report this? Share your view in the comments.

Diverging Reports Breakdown

Flaw in Gemini CLI coding tool could allow hackers to run nasty commands

Gemini is a free, open-source AI tool that works in the terminal environment to help developers write code. By June 27, researchers at security firm Tracebit had devised an attack that overrode built-in security controls that are designed to prevent the execution of harmful commands. The malicious code package looked no different than millions of others available in repositories such as NPM, PyPI, or GitHub.

Read full article ▼
Researchers needed less than 48 hours with Google’s new Gemini CLI coding agent to devise an exploit that made a default configuration of the tool surreptitiously exfiltrate sensitive data to an attacker-controlled server.

Gemini CLI is a free, open-source AI tool that works in the terminal environment to help developers write code. It plugs into Gemini 2.5 Pro, Google’s most advanced model for coding and simulated reasoning. Gemini CLI is similar to Gemini Code Assist except that it creates or modifies code inside a terminal window instead of a text editor. As Ars Senior Technology Reporter Ryan Whitwam put it last month, “It’s essentially vibe coding from the command line.”

Gemini, silently nuke my hard drive

Our report was published on June 25, the day Google debuted the tool. By June 27, researchers at security firm Tracebit had devised an attack that overrode built-in security controls that are designed to prevent the execution of harmful commands. The exploit required only that the user (1) instruct Gemini CLI to describe a package of code created by the attacker and (2) add a benign command to an allow list.

The malicious code package looked no different than millions of others available in repositories such as NPM, PyPI, or GitHub, which regularly host malicious code uploaded by threat actors in supply-chain attacks. The code itself in the package was completely benign. The only trace of malice was a handful of natural-language sentences buried in a README.md file, which like all such files was included in the code package to provide basic information about its purpose, scope, and requirements.

That was the perfect place for the researchers to hide a prompt-injection, a class of AI attack that has emerged as the biggest single threat confronting the safety and security of AI chatbots. Developers frequently skim these files at most, decreasing the chances they’d notice the injection. Meanwhile, Gemini CLI could be expected to carefully read and digest the file in full.

Source: Arstechnica.com | View original article

Source: https://arstechnica.com/security/2025/07/flaw-in-gemini-cli-coding-tool-allowed-hackers-to-run-nasty-commands-on-user-devices/

Leave a Reply

Your email address will not be published. Required fields are marked *