Microsoft has launched the Secure Future Initiative (SFI) as part of its commitment to enhancing cybersecurity. The initiative focuses on three pillars: AI-based cyber defenses [1], advancements in software engineering [1] [2] [3] [4] [6], and advocacy for stronger application of international norms [1] [3].


To improve the security of its software and services [7], Microsoft plans to leverage automation and AI throughout software development [7]. It will use CodeQL, GitHub’s code analysis engine [5] [7], to automate security checks [7]. Microsoft also aims to build an AI-based cyber shield to protect customers worldwide and combat cyberattacks [3], and it is prioritizing responsible AI principles in its services [3]. In addition, Microsoft is implementing a new security standard across its design [3], build [3] [7], testing [3], and operations processes. The company has committed to cutting the time it takes to mitigate cloud vulnerabilities by 50 percent and to hardening the platforms that protect encryption keys [7]. Microsoft further plans to use AI to improve threat intelligence and analysis [1], strengthen identity protection against sophisticated attacks [1], and promote the acceptance of red lines in cyberspace [1]. By combining automation, AI [1] [2] [3] [5] [7], and advanced software engineering techniques [6], Microsoft aims to improve the security of its products and services, protect customer data [3] [7], and contribute to the establishment of international norms in cyberspace. It also plans to give customers more secure default settings, with Multi-Factor Authentication (MFA) enabled out of the box [7], providing users an added layer of security.
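The CodeQL automation described above typically runs as part of a repository's CI pipeline. As an illustrative sketch only (the workflow name, triggers, and language matrix below are assumptions, not Microsoft's actual configuration), a minimal GitHub Actions workflow using the publicly documented CodeQL action might look like this:

```yaml
# Hypothetical workflow file: .github/workflows/codeql.yml
# Runs CodeQL's standard security queries on every push and pull request.
name: codeql-security-scan

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # needed to upload results to code scanning
      contents: read
    strategy:
      matrix:
        language: [javascript, python]   # assumed languages, for illustration
    steps:
      - uses: actions/checkout@v4
      # Initialize the CodeQL tools for the selected language
      - uses: github/codeql-action/init@v3
        with:
          languages: ${{ matrix.language }}
      # Run the default security query suites and upload findings
      - uses: github/codeql-action/analyze@v3
```

Findings from a workflow like this surface as code scanning alerts on the repository, which is the kind of automated, always-on security check the initiative describes.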

As part of this new cybersecurity initiative, Microsoft has launched Microsoft Security Copilot [5], an AI tool that helps defend against attacks from state-backed actors. The product aligns with the AI-based cyber defense pillar of the Secure Future Initiative. Microsoft plans to use AI within its Threat Intelligence framework to make systems safer and reduce delays in vulnerability patching [2]. It also uses AI to assist security analysts [2], with Security Copilot providing recommendations based on data analysis. Microsoft acknowledges that its AI code of ethics must evolve alongside the technology [2]. The company is committed to advancing software engineering and development, advocating for better protection through international cybersecurity norms [2], and ensuring responsible AI principles in its services [3]. Additionally, Microsoft plans to propose that cloud services be recognized as critical infrastructure under international law [6]. These updates aim to improve the security of Microsoft’s platforms [4], with a focus on secure defaults [4], identity security [4], and cloud vulnerability mitigation [4].


Microsoft’s SFI is part of broader efforts to address nation-state attacks on public infrastructure [5]. The company uses AI to detect and analyze cyber threats and plans to extend this capability to customers of its security software [5]. Microsoft is developing the Security Copilot AI product and uses Microsoft Defender for Endpoint to find threats on unmanaged devices [5]. Ransomware attacks have increased by over 200 percent since September 2022 [5], and Microsoft tracks 123 sophisticated ransomware-as-a-service affiliates [5]. The Secure Future Initiative underscores Microsoft’s commitment to enhancing cybersecurity, protecting customer data [7], and contributing to the establishment of international norms in cyberspace.