Microsoft’s OpenAI-powered Copilot under fire for violent, sexual, and copyright-violating images

Microsoft CEO Satya Nadella speaks at Microsoft’s live event in New York.
Photo: Lucas Jackson (Reuters)

Microsoft engineer Shane Jones sent a letter to the Federal Trade Commission (FTC) Wednesday alleging that the company’s AI design tool Copilot is “not safe.”

Jones told CNBC in an interview that he was able to use Copilot to generate images of teenagers playing with assault rifles. He also said the tool would produce unsolicited violent, sexualized images of women and ones that may violate copyright laws.

Jones attributes the problem to DALL-E 3, OpenAI’s image generator, which powers Copilot. He alleges that DALL-E 3 has a vulnerability that allowed him to bypass the safeguards designed to prevent such content, and wrote in his letter to the FTC that the model has “systemic issues.”

“DALL-E 3 has a tendency to unintentionally include images that sexually objectify women even when the prompt provided by the user is completely benign,” Jones wrote. He noted that the issue has been documented by OpenAI itself (pdf). The startup said in a report in October 2023 that DALL-E 3 sometimes generates unsolicited “suggestive or borderline racy content.” OpenAI also noted that “language-vision AI models can demonstrate a tendency towards the sexual objectification of girls and women.” Jones said Microsoft did not resolve what he called a “known issue” with DALL-E 3 in the version used by Copilot Designer.

Microsoft and OpenAI did not immediately respond to Quartz’s requests for comment, but Microsoft told CNBC it’s “committed to addressing any and all concerns employees have in accordance with our company policies” and appreciates employees who look to “further enhance its [products’] safety.”

Microsoft’s Copilot chatbot has recently come under fire as well. The chatbot told a Meta data scientist using the tool “[m]aybe you don’t have anything to live for,” when asked whether he should “just end it all.” Chatbots from Microsoft, Google, and OpenAI have all been scrutinized for high-profile blunders, from citing fake lawsuits to creating historically inaccurate images of racially diverse Nazis.

Jones said Microsoft did not take action to resolve the issue after he made internal complaints, and the company made him take down a social media post outlining the problem. He pointed to Google as an example of how to handle the issue, noting that the company suspended the generation of people in images through Google Gemini when it faced similar complaints.

The engineer asked the FTC to investigate Microsoft’s management decisions, incident reporting processes, and whether the company interfered with his attempt to notify OpenAI of the issue.

The FTC confirmed to Quartz that it received Jones’ letter but declined to comment.
