Microsoft engineer says company’s AI image generator produces ‘harmful content’
Published by: Daniil Bazylenko
20 March 2024, 05:40PM
- Microsoft engineer Shane Jones raises alarm about harmful content generated by Copilot Designer AI.
- Jones identifies vulnerabilities in OpenAI's DALL-E 3 that affect Copilot Designer due to Microsoft's ties with OpenAI.
- Troubling images produced by Copilot Designer include sexual objectification and inappropriate content involving teenagers and infants.
- Debate arises over regulation, while Jones stresses transparency and urges an independent review of Microsoft's AI practices.
- Jones calls for voluntary disclosure of AI risks, particularly for products targeting children, and recommends adjusting Copilot Designer's rating.
A Microsoft engineer has raised concerns about the company's AI image generator, Copilot Designer, citing "harmful content" and urging an investigation by the FTC.

Shane Jones, the engineer, highlights vulnerabilities in OpenAI's DALL-E 3 that also affect Copilot Designer because of Microsoft's ties with OpenAI.

Jones describes troubling images generated by Copilot Designer, including sexually objectified women and inappropriate content involving teenagers and infants.

While some debate the need for regulation, Jones emphasizes the importance of transparency and calls for an independent review of Microsoft's AI incident reporting and disclosure practices.

Jones advocates voluntary disclosure of AI risks, especially for products marketed to children, and suggests changing Copilot Designer's content rating to reflect what it can produce.