Microsoft ignored safety problems with AI image generator, engineer complains

Published by: Kelly Grace

19 March 2024, 11:20AM GMT+00:00

In Brief

Shane Jones, a principal software engineering manager at Microsoft, reported systemic issues with Copilot Designer, saying the tool generates inappropriate images, including sexual objectification, even from unrelated prompts.

Jones provided an example where the prompt "car accident" led to an image of a woman in underwear kneeling in front of a car, illustrating the tool's problematic outputs.

Microsoft disputed claims of neglecting safety concerns, asserting it has robust internal reporting channels for addressing generative AI issues, and expressed a commitment to enhancing safety measures.

The incident raises concerns about the responsibility of companies in developing AI tools and the need for transparency regarding potential risks.

It underscores the ongoing challenges in ensuring ethical AI use and emphasizes the importance of implementing robust safeguards in AI applications.

Shane Jones, a principal software engineering manager, revealed in a letter that despite internal awareness of systemic issues with Copilot Designer, Microsoft had not taken adequate action. He emphasized that the tool's unrestricted use led to the generation of inappropriate images, including instances of sexual objectification, even from unrelated prompts. Jones cited an alarming example in which the prompt "car accident" produced an image of a woman in underwear kneeling in front of a car. The engineer criticized Microsoft for promoting Copilot Designer as suitable for public use while failing to disclose the risks inherent in the tool.



Using just the prompt ‘car accident’, Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear. It also generated multiple images of women in lingerie sitting on the hood of a car or walking in front of the car.

Shane Jones

Microsoft, in response, disputed claims of neglecting safety concerns, asserting the existence of robust internal reporting channels for handling issues related to generative AI. The company maintained its commitment to addressing employee concerns in accordance with company policies. While acknowledging Jones's efforts to study and test the technology, Microsoft emphasized its dedication to enhancing safety measures.



We are committed to addressing any and all concerns employees have in accordance with our company policies and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety.

Microsoft spokesperson

This development raises questions about the responsibility of companies developing AI tools, especially when they have the potential to generate harmful and inappropriate content. It underscores the ongoing challenges in ensuring the ethical use of AI and the importance of robust safeguards in AI applications.
