The UK’s National Cyber Security Centre (NCSC) has expressed concerns about the potential risks associated with integrating artificial intelligence-driven chatbots, known as large language models (LLMs) [1] [2] [3] [4] [5] [6] [7] [8], into businesses [1] [2] [4] [7]. While the NCSC has generally been optimistic about LLMs [5], it now acknowledges their vulnerabilities [5].

Description

Researchers have found that LLMs like ChatGPT, Google Bard [1], and Meta’s LlaMA can be manipulated and hijacked [6], leading to an increase in fraud [6], illegal transactions [6], and data breaches [3] [6]. One major risk highlighted is prompt injection attacks [2], where attackers manipulate the output of LLMs to launch scams or cyber-attacks [2]. These attacks could manipulate chatbots to perform malicious actions [5], such as transferring funds to an attacker’s account [3] [5].

The NCSC advises caution when integrating LLMs into services or businesses [2] [4], as their behavior is not fully understood and their capabilities, weaknesses [2] [3] [9], and vulnerabilities are not yet known [2]. The potential consequences of prompt injection attacks include the generation of offensive content and the disclosure of confidential information [8]. The NCSC emphasizes the importance of architecting systems and data flows to account for worst-case scenarios and the potential for undiscovered vulnerabilities [9].

Tech leaders are urged to exercise caution when deploying LLMs and consider the possibility of future vulnerabilities or weaknesses [9]. Ongoing research is being conducted to find mitigations for these attacks, but there are currently no guaranteed solutions [2]. Different techniques [2], such as social engineering-like approaches [2], may need to be applied to test applications based on LLMs [2]. Organizations using LLMs should exercise caution and not fully trust them with sensitive tasks [5].

With proper security measures [5], the impact of cyber attacks stemming from AI and machine learning can be reduced [5]. The NCSC has issued a warning about the cybersecurity risks associated with prompt injection attacks [8], which involve manipulating chatbots to behave in unintended ways by creating prompts [8]. Designing systems with security in mind and understanding the vulnerabilities in machine learning algorithms are crucial in addressing these risks [8].

Additionally, businesses should be cautious about relying on LLM APIs [7], as models may change or integrations may become obsolete [7]. The NCSC advises organizations to exercise caution when executing code downloaded from the internet [7], stay updated on vulnerabilities [7], and regularly upgrade software [7].

Conclusion

Authorities worldwide are grappling with the rise of LLMs [6], such as OpenAI’s ChatGPT [2] [4] [6], and the security implications of AI technology [6]. It is important to recognize the potential impacts of prompt injection attacks and take steps to mitigate these risks. Ongoing research and advancements in security measures are necessary to address the vulnerabilities associated with LLMs. Organizations should remain vigilant, exercise caution [5] [6] [7] [9], and stay informed about the evolving landscape of AI technology and its potential implications for cybersecurity.

References

[1] https://www.silicon.co.uk/projects/devops/ncsc-warns-over-ai-chatbot-cyber-risks-527335
[2] https://www.infosecurity-magazine.com/news/ncsc-cyber-warning-ai-chatbots/
[3] https://www.computerweekly.com/news/366550142/NCSC-warns-over-possible-AI-prompt-injection-attacks
[4] https://multiplatform.ai/uks-national-cyber-security-centre-warns-about-cybersecurity-risks-tied-to-large-language-models-like-openais-chatgpt/
[5] https://www.dreaded.org/2023/08/30/ai/ncsc-issues-warnings-regarding-potential-ai-prompt-injection-attacks/
[6] https://www.itsecurityguru.org/2023/08/30/ncsc-issues-warning-over-chatbot-cyber-risks/
[7] https://www.forbes.com/sites/emmawoollacott/2023/08/30/businesses-warned-over-risks-of-chatbot-prompt-injection-attacks/
[8] https://www.theguardian.com/technology/2023/aug/30/uk-cybersecurity-agency-warns-of-chatbot-prompt-injection-attacks
[9] https://techmonitor.ai/technology/cybersecurity/large-language-models-ai-ncsc-cybersecurity