Introduction
On October 16, 2024, the New York State Department of Financial Services (NYDFS) issued a memorandum under its Cybersecurity Regulation (23 NYCRR Part 500) addressing the cybersecurity risks associated with artificial intelligence (AI) [1] [2] [3] [4] [5]. The guidance is directed at entities regulated by the New York Banking Law, the Insurance Law, and the Financial Services Law, collectively referred to as Covered Entities [2] [5]. It underscores the dual impact of AI on cybersecurity: AI can strengthen security measures, yet it also introduces new vulnerabilities.
Description
The memorandum, issued on October 16, 2024 under the Cybersecurity Regulation (23 NYCRR Part 500), addresses the cybersecurity risks associated with AI for entities regulated by the New York Banking Law, the Insurance Law, and the Financial Services Law, collectively referred to as Covered Entities [2] [4] [5]. It emphasizes using the existing framework established in Part 500 to assess and mitigate the evolving cybersecurity threats posed by advances in AI technology.
The memorandum highlights the dual impact of AI on cybersecurity [2]. While AI enhances capabilities for preventing cyberattacks, improving threat detection, and strengthening incident response, it also creates new opportunities for cybercriminals to exploit vulnerabilities at unprecedented scale [2]. The guidance underscores the increasing use of AI to create realistic deepfake audio, video, and text, which threat actors can exploit to access nonpublic information (NPI) and to manipulate employees into unauthorized actions [2] [5]. In one notable example, a finance worker was deceived into transferring $25 million to criminals after a video call featuring deepfake participants, including a fake Chief Financial Officer [2] [5].
Entities using AI may face new vulnerabilities, particularly when handling large volumes of NPI and sensitive data such as biometric information [2] [5]. The guidance also highlights the risks posed by third-party service providers (TPSPs) that employ AI platforms, which can further expose organizations to cyber threats [5]. Covered Entities are required to implement multiple overlapping layers of cybersecurity controls, so that if one control fails, others can still prevent or mitigate a cyberattack [2]. This includes conducting annual Risk Assessments that identify and address AI-related risks, including the threat of deepfakes [5]. These assessments should inform the development of programs, policies, and procedures to mitigate the identified risks, and updates to the assessments should prompt reviews of existing measures [3] [5]. Proactive strategies to investigate and respond to potential AI-related cyberattacks are also necessary [5].
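To make the layered-controls principle concrete, the following is a minimal sketch of overlapping checks on a high-risk action such as a wire transfer. The control functions (verified_callback, dual_approval, within_limits) and the threshold values are hypothetical illustrations, not requirements drawn from the guidance; the point is only that no single defeated control lets the action through.

```python
from typing import Callable

Check = Callable[[dict], bool]

def verified_callback(req: dict) -> bool:
    # Call the requester back on a number from an internal directory,
    # never one supplied in the request itself (counters deepfake calls).
    return req.get("callback_verified", False)

def dual_approval(req: dict) -> bool:
    # Require two distinct human approvers for any high-value transfer.
    return len(set(req.get("approvers", []))) >= 2

def within_limits(req: dict) -> bool:
    # Illustrative per-transaction ceiling; real limits come from policy.
    return req.get("amount", 0) <= 100_000

CONTROLS: list[Check] = [verified_callback, dual_approval, within_limits]

def release_transfer(req: dict) -> bool:
    """Every control must pass independently, so a deepfake convincing
    enough to defeat one layer is still stopped by the others."""
    return all(check(req) for check in CONTROLS)

# A request that cleared a (spoofed) video call but lacks a verified
# callback is still blocked:
print(release_transfer({"amount": 25_000_000,
                        "approvers": ["cfo", "controller"],
                        "callback_verified": False}))  # False
```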
For biometric authentication, the guidance recommends technologies that incorporate liveness detection or texture analysis to confirm the authenticity of biometric inputs [3] [5]. It also suggests combining different biometric modalities, or pairing biometric data with user behavior patterns, for stronger security [5].
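As a rough illustration of combining liveness detection with other modalities, the sketch below fuses hypothetical face-match, liveness, and behavioral scores into a single authentication decision. The score names, weights, and thresholds are assumptions for demonstration; in practice the scores would come from a biometric vendor SDK and a behavioral-analytics engine, and the guidance does not prescribe any particular scheme.

```python
from dataclasses import dataclass

@dataclass
class AuthSignals:
    face_match: float  # 0..1 similarity to the enrolled template (assumed)
    liveness: float    # 0..1 from liveness/texture detection (assumed)
    behavior: float    # 0..1 match to the user's typing/usage patterns (assumed)

def authenticate(signals: AuthSignals,
                 match_floor: float = 0.90,
                 liveness_floor: float = 0.85,
                 combined_floor: float = 0.80) -> bool:
    """Require a live capture AND a fused score across modalities, so a
    replayed or synthetic (deepfake) input fails even when the face
    match alone is high. All floors are illustrative."""
    if signals.liveness < liveness_floor:
        return False  # reject non-live input outright
    if signals.face_match < match_floor:
        return False
    fused = (0.5 * signals.face_match +
             0.3 * signals.liveness +
             0.2 * signals.behavior)
    return fused >= combined_floor

# A high face match with failed liveness is rejected:
print(authenticate(AuthSignals(face_match=0.97, liveness=0.40, behavior=0.88)))  # False
```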
Annual cybersecurity training for all personnel, including executives, must now cover AI-related risks and responses to AI-enhanced social engineering attacks [3] [5]. The guidance advocates deepfake simulation exercises and training on handling unusual requests that may signal a security threat [5]. Cybersecurity personnel should receive specialized training on the role of AI in social engineering and cyberattacks, as well as on leveraging AI to strengthen cybersecurity measures [3] [5]. Employees involved in the internal deployment of AI must be trained in secure design and defense against cybersecurity threats, while users of AI systems should be educated on safeguarding NPI [3].
Sound data management practices are essential to minimize the exposure of NPI during cybersecurity incidents [3] [5]. Organizations are required to implement data minimization strategies and to dispose of NPI that is no longer needed, including NPI used for AI purposes [3]. Covered Entities must monitor email and web traffic for malicious content and quickly identify vulnerabilities in their information systems [3] [5]. When using AI-enabled products or services, the guidance suggests monitoring for unusual query behavior that may indicate attempted exposure of NPI, as sketched below [3] [5].
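One simple way to approximate that monitoring is to flag users whose query volume against an AI-enabled service far exceeds their peers', which can be one rough proxy for bulk extraction of NPI. The sketch below uses only the Python standard library; the median-based heuristic and the multiplier are illustrative assumptions, not values taken from the guidance.

```python
from collections import Counter
from statistics import median

def flag_unusual_query_volume(query_log: list[tuple[str, str]],
                              multiplier: float = 10.0) -> set[str]:
    """Flag users whose query count far exceeds the typical (median)
    volume across users. `query_log` is (user_id, query_text) pairs;
    `multiplier` is an illustrative tuning knob."""
    counts = Counter(user for user, _ in query_log)
    if not counts:
        return set()
    typical = median(counts.values())
    return {user for user, n in counts.items() if n > multiplier * typical}

# A user issuing an order of magnitude more queries than peers is flagged:
log = ([("alice", "q")] * 5 + [("bob", "q")] * 6 +
       [("carol", "q")] * 4 + [("mallory", "q")] * 80)
print(flag_unusual_query_volume(log))  # {'mallory'}
```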
AI can also be used to enhance monitoring capabilities, for example by analyzing security logs, detecting anomalies, and predicting threats (a brief sketch follows this paragraph) [5]. Risk Assessments must be updated to reflect new material risks associated with AI tools and providers, and organizations should ensure that their policies and procedures remain aligned with these assessments [5]. Senior executives and governing body members should actively participate in evaluating AI usage and ensure they receive reports on compliance with AI and cybersecurity standards, including training and policy development related to AI risks [3] [5].
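As an example of AI-assisted log monitoring, the sketch below uses scikit-learn's IsolationForest to flag anomalous activity in feature vectors derived from security logs. The feature choices and all values are invented for illustration; a production pipeline would extract features from SIEM data and tune the model, and nothing here is prescribed by the memorandum.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature vectors per user-hour, extracted from security logs:
# [logins_per_hour, failed_login_ratio, bytes_out_mb, distinct_hosts]
baseline = np.array([
    [12, 0.02, 5.1, 3],
    [10, 0.01, 4.8, 2],
    [11, 0.03, 5.5, 3],
    [13, 0.02, 5.0, 4],
    [ 9, 0.01, 4.6, 2],
])

# Fit an unsupervised anomaly detector on normal activity.
model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A burst of failed logins and outbound data stands out as anomalous (-1);
# activity near the baseline is scored as an inlier (1).
new_events = np.array([[11, 0.02, 5.2, 3],
                       [40, 0.65, 80.0, 19]])
print(model.predict(new_events))  # e.g. [ 1 -1]
```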
Conclusion
The NYDFS memorandum serves as a critical reminder of the double-edged nature of AI in cybersecurity. While AI offers significant advances in threat detection and response, it simultaneously introduces new vulnerabilities that must be addressed. Covered Entities are urged to adopt comprehensive strategies, including robust risk assessments, enhanced training [3] [5], and vigilant data management practices, to mitigate these risks [1] [4] [5]. The guidance emphasizes proactive measures and continuous evaluation to safeguard against the evolving threats posed by AI technologies.
References
[1] https://www.jdsupra.com/topics/nydfs/
[2] https://www.lexology.com/library/detail.aspx?g=b900366d-d7a6-4821-89fb-ca8ee7544644
[3] https://www.whitecase.com/insight-alert/nydfs-releases-artificial-intelligence-cybersecurity-guidance-covered-entities
[4] https://www.blankrome.com/publications/new-york-department-financial-services-provides-ai-cybersecurity-guidance-what-you
[5] https://www.jdsupra.com/legalnews/nydfs-releases-artificial-intelligence-3440076/