The UK’s National Cyber Security Centre (NCSC) has expressed concerns about the potential risks associated with integrating artificial intelligence-driven chatbots, known as large language models (LLMs) [1] [2] [3] [4] [5] [6] [7] [8], into businesses [1] [2] [4] [7]. While the NCSC has generally been optimistic about LLMs [5], it now acknowledges their vulnerabilities [5].


Researchers have found that LLMs like ChatGPT, Google Bard [1], and Meta’s LlaMA can be manipulated and hijacked [6], leading to an increase in fraud [6], illegal transactions [6], and data breaches [3] [6]. One major risk highlighted is prompt injection attacks [2], where attackers manipulate the output of LLMs to launch scams or cyber-attacks [2]. These attacks could manipulate chatbots to perform malicious actions [5], such as transferring funds to an attacker’s account [3] [5].

The NCSC advises caution when integrating LLMs into services or businesses [2] [4], as their behavior is not fully understood and their capabilities, weaknesses [2] [3] [9], and vulnerabilities are not yet known [2]. The potential consequences of prompt injection attacks include the generation of offensive content and the disclosure of confidential information [8]. The NCSC emphasizes the importance of architecting systems and data flows to account for worst-case scenarios and the potential for undiscovered vulnerabilities [9].

Tech leaders are urged to exercise caution when deploying LLMs and consider the possibility of future vulnerabilities or weaknesses [9]. Ongoing research is being conducted to find mitigations for these attacks, but there are currently no guaranteed solutions [2]. Different techniques [2], such as social engineering-like approaches [2], may need to be applied to test applications based on LLMs [2]. Organizations using LLMs should exercise caution and not fully trust them with sensitive tasks [5].

With proper security measures [5], the impact of cyber attacks stemming from AI and machine learning can be reduced [5]. The NCSC has issued a warning about the cybersecurity risks associated with prompt injection attacks [8], which involve manipulating chatbots to behave in unintended ways by creating prompts [8]. Designing systems with security in mind and understanding the vulnerabilities in machine learning algorithms are crucial in addressing these risks [8].

Additionally, businesses should be cautious about relying on LLM APIs [7], as models may change or integrations may become obsolete [7]. The NCSC advises organizations to exercise caution when executing code downloaded from the internet [7], stay updated on vulnerabilities [7], and regularly upgrade software [7].


Authorities worldwide are grappling with the rise of LLMs [6], such as OpenAI’s ChatGPT [2] [4] [6], and the security implications of AI technology [6]. It is important to recognize the potential impacts of prompt injection attacks and take steps to mitigate these risks. Ongoing research and advancements in security measures are necessary to address the vulnerabilities associated with LLMs. Organizations should remain vigilant, exercise caution [5] [6] [7] [9], and stay informed about the evolving landscape of AI technology and its potential implications for cybersecurity.