Researchers at Praetorian have discovered critical misconfigurations in the continuous integration and continuous delivery (CI/CD) systems of the open-source TensorFlow machine learning framework [1]. These misconfigurations could have allowed attackers to compromise TensorFlow’s build agents [3] [4] [5], potentially leading to a compromise of TensorFlow releases on GitHub and PyPI [3] [4].


The misconfigurations in TensorFlow’s CI/CD systems exposed several avenues of attack. By submitting a malicious pull request, an attacker could compromise TensorFlow’s build agents [1] [3] [4] [5], achieving remote code execution [1] [2] [4], uploading malicious releases [4], and retrieving a GitHub Personal Access Token [4]. The self-hosted runners used in TensorFlow’s workflows also held extensive write permissions [4], allowing the upload of releases and the injection of malicious code [4]. This could have led to the theft of the AWS_PYPI_ACCOUNT_TOKEN repository secret and the compromise of the workflow’s GITHUB_TOKEN.
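In outline, the vulnerable pattern looks something like the following. This is a hypothetical sketch of a fork-triggered workflow on a self-hosted runner, not TensorFlow’s actual workflow file; the workflow name, labels, and script path are illustrative:

```yaml
# Hypothetical illustration of the misconfiguration class described above.
# A workflow like this runs code from a fork's pull request directly on a
# persistent self-hosted runner, so a malicious PR executes attacker code.
name: build
on: pull_request                    # fires for PRs, including from forks
jobs:
  test:
    runs-on: [self-hosted, linux]   # long-lived, privileged build agent
    steps:
      - uses: actions/checkout@v4   # checks out the attacker-modified code
      - run: ./ci/build.sh          # the attacker controls this script
```

Because the runner is persistent rather than ephemeral, code executed by one malicious pull request can tamper with the machine and affect later, trusted workflow runs on the same agent.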

To address these issues [1], the project maintainers have implemented measures such as requiring approval for workflows triggered by fork pull requests and changing the GITHUB_TOKEN permissions to read-only for self-hosted runners [4]. Had these misconfigurations been exploited first by an attacker, they could have enabled a supply chain compromise of TensorFlow releases on GitHub and PyPI [3] [4]. The incident highlights the growing threat of CI/CD attacks [4], particularly for AI/ML companies relying on self-hosted runners [4]; Praetorian has also identified other public GitHub repositories that are susceptible to code injection through self-hosted GitHub Actions runners.
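The token-permission side of the fix can be sketched as a workflow-level permissions block. This is an illustrative fragment, not TensorFlow’s actual configuration:

```yaml
# Hypothetical sketch of the hardening described above: restrict the
# automatically issued GITHUB_TOKEN to read-only for this workflow.
name: build
on: pull_request
permissions:
  contents: read                    # no write access to the repository
jobs:
  test:
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/build.sh
```

The approval requirement, by contrast, is a repository setting rather than a workflow change: under the repository’s Actions settings, fork pull request workflows can be configured to require maintainer approval before they run.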


The discovery of these misconfigurations in TensorFlow’s CI/CD systems has significant implications for the security of machine learning frameworks and the potential for supply chain attacks. Organizations should adopt mitigations such as restricting self-hosted runners to private repositories, so that untrusted fork pull requests can never reach them. The incident also underscores the need for greater awareness and stronger security measures in the CI/CD process, especially for AI/ML companies [4]. Praetorian’s findings serve as a reminder of the importance of maintaining a secure and robust CI/CD pipeline to protect against potential vulnerabilities and attacks.