Researchers from Cornell University [2] [3] [5], Technion-Israel Institute of Technology [2] [3] [5] [8], and Intuit have developed the Morris II worm [5], a zero-click cyber threat targeting AI applications powered by generative models like Gemini Pro, ChatGPT 4.0 [5] [6], and LLaVA [5].


This worm exploits vulnerabilities in AI email assistants [4] [9], using self-replicating prompts to spread across interconnected GenAI ecosystems [2]. Morris II injects adversarial prompts to manipulate AI models and breach security [2], posing significant risks to applications relying on GenAI services and those utilizing the retrieval augmented generation (RAG) application to enhance queries. The worm utilizes RAG to contaminate AI models, forcing them to exfiltrate sensitive data and propagate the malware further [6]. To combat this threat, researchers recommend using countermeasures against jailbreaking techniques to detect malicious propagation patterns [7] [8]. Additionally, a non-active RAG can be employed to prevent the spread of the RAG-based worm [8], highlighting the evolving cybersecurity landscape in the age of AI and the importance of ongoing vigilance [9], advanced security protocols [9], and collaboration between developers and researchers to mitigate emerging threats [9].


The Morris II worm [5], capable of stealing confidential data [4], sending spam emails [4] [9], and spreading malware through generative AI systems [4], targets AI email assistants [4] [7] [9], extracts data [4], and compromises security measures of AI-powered chatbots [4]. By using self-replicating prompts [2] [3] [4] [5] [8] [9], Morris II can navigate through AI systems undetected [4]. The researchers have alerted OpenAI and Google about the worm [4], with Google declining to comment and OpenAI working on improving system security [4]. The worm exploits vulnerabilities in AI systems to mine sensitive information like social security numbers and credit card details [4]. Security researchers have developed a self-replicating AI worm named Morris II [1], targeting generative AI-powered applications like OpenAI’s ChatGPT and Google’s Gemini [1]. This worm can infiltrate emails [1], steal data [1] [3] [9], and launch spamming campaigns without the victim clicking on anything [1], highlighting the risks associated with GenAI [1] [8]. The worm exploits the automatic actions of the AI tool to propagate itself and engage in malicious activities [1]. Cybersecurity experts have warned about the potential for hackers to use generative AI to carry out attacks [1], as it can realistically imitate human-generated text [1], making it easier for cyber criminals to create convincing fraudulent emails and texts [1]. Generative AI is expected to be used for cyber activities in 2024 [1], lowering the barrier of entry for sophisticated operations [1]. The researchers have also discovered a method to embed the self-replicating prompt within image files [3], allowing for the spread of spam [3], abuse material [3], or propaganda [3], emphasizing the need for more resilient systems and caution against working with harmful input [3].