Microsoft ensures data privacy for AI tool users

The company asserts that customer data is not used for training models without permission and is not shared with third parties.


Microsoft has outlined its commitment to safeguarding customer data privacy as businesses increasingly utilise generative AI tools such as Azure OpenAI Service and Copilot. In a blog post published on 28 March, the tech giant assured customers that organisations leveraging these services are protected under existing privacy policies and contractual agreements. Notably, Microsoft emphasised that an organisation's data is used to train OpenAI's foundational models only if the organisation explicitly permits it.

The company clarified that customer data used in its generative AI solutions, including Azure OpenAI Service and Copilot, is not made accessible for training open-source AI models, addressing concerns raised by data privacy experts in the past. Furthermore, Microsoft affirmed that it neither shares customer data with third parties such as OpenAI without explicit permission nor uses it to train OpenAI's foundational models. Any AI solutions fine-tuned on an organisation's data remain exclusive to that organisation and are not shared externally.

The blog post also highlights measures to protect organisations from copyright infringement lawsuits arising from the use of Azure OpenAI and Microsoft Copilot services. Through its 2023 Customer Copyright Commitment, Microsoft pledged to defend customers and cover settlements in the event of copyright infringement lawsuits, provided customers use the guardrails and content filters available within the products.

In addition to copyright protection, Microsoft is focused on safeguarding sensitive data associated with AI usage. Chief Privacy Officer Julie Brill detailed how Microsoft Purview enables corporate customers to identify risks linked to AI usage, such as sensitive information entered into prompts. Azure OpenAI and Copilot users can apply sensitivity labels and classifications to protect sensitive data, and Copilot summarises labelled content only for users authorised to access it. Copilot-generated output inherits the sensitivity labels of the files it references, preserving data protection policies and preventing unauthorised access.