Tech companies are releasing AI-generated content tools without robust safeguards, leading to the creation of inappropriate and offensive images.
Microsoft Bing's Image Creator, powered by OpenAI's DALL-E, has been used to generate images depicting violence, racism, and copyright infringement.
Meta's Messenger app also lets users generate stickers with AI, resulting in stickers that depict violence and nudity.
While some argue that the harm caused by these images is minimal, others highlight the potential for misuse and the responsibility of tech companies to ensure the safety of their tools.
Companies' responses to media inquiries have been vague, promising future improvements without committing to concrete actions.
The challenge lies in creating safeguards that can effectively prevent the creation of harmful content in all contexts.