Title: Microsoft Security Gaffe Exposes 38 Terabytes of Private Data
In a recent security lapse, tech giant Microsoft accidentally exposed a staggering 38 terabytes of private data. The leak, discovered by researchers at cloud security firm Wiz, traced back to one of the company's AI GitHub repositories, where the data was inadvertently exposed during the publication of open-source training data.
The repository was meant to provide open-source AI models and training files, but the exposed data included a disk backup of former employees' workstations, containing highly sensitive information such as secrets, private keys, passwords, and even internal Teams messages. The root cause was an overly permissive shared access signature (SAS) token, an Azure feature for sharing data from storage accounts.
The severity of the exposure was compounded by the fact that the SAS URL shared in the repository's README.md file granted access to the entire storage account rather than just the intended files, exposing additional private data. According to Microsoft, however, there is no evidence that customer data was exposed, and no other internal services were put at immediate risk.
To address the issue, Microsoft promptly revoked the SAS token, blocking any external access, and identified and fixed a bug in its scanning system. The company has also expanded its secret scanning service to detect and flag overly permissive SAS tokens in the future.
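Microsoft's internal scanning service is proprietary, but the general idea of flagging risky SAS links can be approximated with a simple heuristic. The Python sketch below scans text for blob-storage URLs that carry SAS query parameters and flags account-level tokens (recognizable by the srt resource-types parameter) or tokens granting write, delete, or create permissions; the regular expression, file name, and checks are illustrative assumptions, not Microsoft's actual detection logic.

```python
import re
from urllib.parse import parse_qs, urlparse

# Matches Azure Blob Storage URLs that carry a SAS signature ("sig=" query parameter).
SAS_URL_PATTERN = re.compile(
    r"https://[a-z0-9]+\.blob\.core\.windows\.net/\S*sig=\S+",
    re.IGNORECASE,
)


def flag_risky_sas_urls(text: str) -> list[str]:
    """Return SAS URLs that look overly permissive (illustrative heuristic only)."""
    findings = []
    for match in SAS_URL_PATTERN.finditer(text):
        url = match.group(0)
        params = parse_qs(urlparse(url).query)
        permissions = "".join(params.get("sp", []))
        is_account_sas = "srt" in params  # account-level SAS tokens carry a resource-types field
        too_broad = any(flag in permissions for flag in ("w", "d", "c"))  # write/delete/create
        if is_account_sas or too_broad:
            findings.append(url)
    return findings


# Hypothetical usage: check a README before publishing it.
with open("README.md", encoding="utf-8") as handle:
    for risky_url in flag_risky_sas_urls(handle.read()):
        print(f"Overly permissive SAS URL found: {risky_url}")
```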
This incident highlights the potential dangers of misconfigured Azure storage accounts, an issue security firm JUMPSEC Labs has previously warned about. Researchers strongly advise against using Account SAS tokens for external sharing, since they can grant broad access to an entire storage account and increase the risk of data exposure.
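As an illustration of the more cautious approach researchers recommend, the following sketch uses the azure-storage-blob Python SDK to mint a service-level SAS that is scoped to a single blob, read-only, and short-lived, instead of an account-level token covering the whole storage account. The account, container, blob, and key values are placeholders, not details from the incident.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholder values for illustration only -- not details from the incident.
ACCOUNT_NAME = "exampleaccount"
ACCOUNT_KEY = "<storage-account-key>"
CONTAINER = "public-datasets"
BLOB = "training-data.tar.gz"

# Scope the token to one blob, read-only, expiring in 24 hours,
# rather than issuing an account-level SAS over the whole storage account.
sas_token = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
)

download_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas_token}"
print(download_url)
```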
The security lapse comes shortly after China-based hackers breached Microsoft's systems earlier this year and stole a sensitive signing key. The incident serves as a reminder of the constant need for robust security measures, particularly when handling large amounts of data for AI projects.
Wiz CTO Ami Luttwak stressed the importance of implementing additional security measures to protect the sensitive data associated with AI projects, and said the incident should serve as a wake-up call for organizations to invest in comprehensive security protocols.
As Microsoft works to remediate the exposure, users are reminded to remain vigilant and follow best practices to keep their data safe.