Microsoft’s AI tool generates sexually harmful and violent images, engineer warns

Published by: Nazarii Bezkorovainyi

18 March 2024, 02:50PM

In Brief

Shane Jones, a Microsoft software engineer, expressed serious concerns to the FTC and Microsoft's board regarding the potential dangers of the Copilot Designer tool's AI image generation.

Jones highlighted the tool's capability to produce unsafe content, including sexualization, conspiracy theories, and drug-related imagery.

After repeatedly failing to persuade Microsoft internally, Jones urged the company to suspend public use of Copilot Designer until stronger safeguards are in place.

He called for an independent investigation by Microsoft's committee and stressed the need for industry collaboration to raise public awareness about AI risks and benefits.

Recent controversies in the AI sector, such as Google's suspension of image generation in its Gemini chatbot, add to the ongoing debate about responsible AI practices and the challenges tech companies face in ensuring safety.

In letters addressed to the Federal Trade Commission (FTC) and Microsoft's board, Shane Jones, a principal software engineering manager at Microsoft, voiced serious concerns about the company's AI image tool. Specifically, Jones highlighted potential dangers stemming from Microsoft's Copilot Designer tool, which he argued could be exploited to generate unsafe content, ranging from sexualization to conspiracy theories and drug use.

Jones emphasized his repeated attempts to persuade Microsoft to halt the public use of Copilot Designer until stronger safeguards could be implemented. He called for an independent investigation by Microsoft's environmental, social, and public policy committee to uphold responsible AI practices. In his letters, Jones also stressed the importance of robust collaboration across the industry, with governments and civil society, to build public awareness and education on the risks and benefits of AI.



"We are committed to addressing any and all concerns employees have in accordance with our company policies and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety."

Microsoft spokesperson

Jones, who has worked at Microsoft for six years, said he had been testing the AI image generator since December. During that period, he identified flaws and security vulnerabilities that could allow users to generate harmful and offensive images.



"As these new tools come to market from Microsoft and across the tech sector, we must take new steps to ensure these new technologies are resistant to abuse."

Brad Smith

Microsoft responded to Jones's concerns by stating that it is committed to addressing employee concerns in accordance with company policies. A company spokesperson acknowledged the employee's efforts in studying and testing the latest AI technology to enhance its safety.

This development comes at a time when tech companies, including Microsoft, are striving to ensure the safety of AI tools. Microsoft President Brad Smith, in a recent blog post, underlined the company's commitment to making AI safe for users, acknowledging the necessity of taking new steps to prevent abuse as these technologies come to market.

Jones's letters follow recent controversies in the AI sector, such as Google's suspension of image generation capabilities in its Gemini chatbot due to concerns about the tool's treatment of race and ethnicity. Jones also referenced his prior discovery of security vulnerabilities with OpenAI's DALL-E 3 model, a technology used in Microsoft's Copilot Designer. He had previously urged OpenAI's board to suspend the software's availability until the identified issues were addressed.

In conclusion, Jones urged the FTC to collaborate with Microsoft and other companies to enhance AI safety and promote transparency in public disclosures about AI risks. The ongoing debate surrounding responsible AI practices highlights the challenges tech companies face in balancing innovation with ethical considerations.



"Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place."

Shane Jones
