Ammon News - Meta.ai, a new AI-and-social app meant to compete with ChatGPT and others, launched a couple of months ago the way Meta's products often do: with a massive privacy fuckup. The app, which has been promoted across Meta's other platforms, lets users chat by text or voice, generate images, and, more recently, restyle videos. It also has a sharing function and a discover feed, designed in a way that led countless users to unwittingly post extremely private information into a public feed intended for strangers.
The issue was flagged in May by, among others, Katie Notopoulos at Business Insider, who found public chats in which people asked for help with insurance bills, private medical matters, and legal advice following a layoff.
Over the following weeks, Meta's experiment in AI-powered user confusion turned up weirder and more distressing examples of people who didn't know they were sharing their AI interactions publicly: young children talking candidly about their lives; incarcerated people accidentally sharing chats about possible cooperation with authorities; and users chatting about "red bumps on inner thigh" under identifiable handles. (New York Magazine)