Google is taking aim at potentially problematic generative AI apps with a new policy, to be enforced starting early next year, that will require developers of Android applications published on its Play Store to offer the ability to report or flag offensive AI-generated content. The new policy will insist that flagging and reporting can be done in-app, and developers should use the reports to inform their own approaches to filtering and moderation, the company says.

The change to the policy follows an explosion of AI-generated content apps, some of which users tricked into creating NSFW imagery, as with Lensa last year. Then there were the more recent issues with Microsoft's and Meta's AI tools, where people found ways to bypass the guardrails to make images like a pregnant Sonic the Hedgehog or fictional characters doing 9/11. Others, meanwhile, have more subtle issues. For instance, Remini, an app that went viral this summer for AI headshots, was found to be greatly enhancing the size of some women's breasts or cleavage, and thinning their bodies.

Of course, there are even more serious concerns around the use of AI image generators, as pedophiles were discovered using open source AI tools to create child sexual abuse material (CSAM) at scale. And with the coming elections, there are also concerns around using AI to create fake images, aka deepfakes, to mislead or misinform the voting public.