Microsoft is committed to developing and deploying artificial intelligence (AI) technologies in an ethical and responsible manner.
This means ensuring that AI systems are designed, developed, and used in ways that prioritise fairness, transparency, accountability, privacy, and security. Microsoft’s approach to ethical AI rests on several key principles.
Microsoft integrates these principles into its AI research, product development, and customer engagements, aiming to build trust and advance the responsible use of AI technology across industries and applications.
This includes offering tools and resources for developers and organisations to incorporate ethical AI practices into their own projects and operations.
Implement responsible AI, your way
Responsible AI is an approach to evaluating, building, and deploying AI systems in a way that is safe, trustworthy, and ethical.
Microsoft’s Responsible AI Toolbox is a set of interconnected tools and features that help put responsible AI into practice. With it, you can test your models and make decisions for your users more quickly and easily.
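As one concrete illustration of the kind of model testing this involves, the sketch below computes a simple fairness measure, the demographic parity difference (the largest gap in positive-prediction rates between groups), using only the Python standard library. The function name and the sample data are hypothetical, not part of the Responsible AI Toolbox API.

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred else 0))
    # Selection rate (share of positive predictions) per group
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)


# Hypothetical predictions for two groups of four people each
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5 (0.75 vs 0.25)
```

A value of 0 would mean all groups receive positive predictions at the same rate; larger values flag a disparity worth investigating with the toolbox's fuller diagnostics.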
Discover the Responsible AI Toolbox

Microsoft ethical AI FAQs
Microsoft has created a Responsible AI Standard: a set of guidelines for building AI systems that follow six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.