For decades, digital privacy advocates have warned the public to be more careful about what we share online. And for the most part, the public has vigorously ignored them.
I'm certainly guilty of this. I usually click "Accept All" on every cookie request that every website puts in front of my face. I've had a Gmail account for 20 years, so at some level I know perfectly well that it means Google knows every imaginable detail of my life.
I've never lost much sleep over the idea that Facebook targets me with ads based on my internet presence. If I have to see ads, I figure they might as well be for products I actually want to buy.
But even for people like me who are indifferent to digital privacy, AI is going to change the game in ways I find pretty frightening.
This is a photo of my son on the beach. Which beach? OpenAI's o3 can tell you: Marina State Beach in Monterey Bay, where my family went on vacation.

Provided by Kelsey Piper
To my merely human eyes, this image doesn't seem to contain enough information to guess where my family was staying on vacation. It's a beach! With sand! And waves! How could you possibly narrow it down further?
But surfing enthusiasts tell me there's far more information in this image than I thought. The pattern of the waves, the sky, the slope, and the sand are all information, and in this case enough information to venture a correct guess about where my family went on vacation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent. One of Anthropic's early investors is James McClave, whose BEMC Foundation helps fund Future Perfect.)
ChatGPT doesn't always get it on the first try, but it's more than sufficient for gathering information if someone were determined to stalk us. And that should worry us all, because AI is only going to get more powerful.
When AI comes for digital privacy
For most of us who aren't excruciatingly careful about our digital footprint, it has always been possible to learn a frightening amount of information about us — where we live, where we shop, our daily routines, and the people we talk to. But it used to require an extraordinary amount of work.
Most of the time, we enjoy what's known as security through obscurity. It's hardly worth having a team of people study my movements intently just to learn where I went on vacation. Even the most dedicated authoritarian surveillance states, like Stasi-era East Germany, were limited by manpower in what they could track.
But AI makes tasks that would previously have required serious effort from a large team trivially easy. And that means it takes far fewer clues to pin down someone's location and life.
It was already the case that Google knew basically everything about me, but I didn't really mind (perhaps complacently), because the most Google can do with that information is serve me ads. Now that degree of information about me might be available to anyone at all, including those with far more malign intentions.
And while Google has incentives to avoid a major privacy-related incident — users would get mad at them, regulators would investigate, and they have a lot of business to lose — the newer AI companies seem less constrained. (If they were more concerned about public opinion, they'd presumably have a significantly different business model, given that the public kind of hates AI.)
Be careful what you tell ChatGPT
So AI has major implications for privacy. These were only hammered home recently when Anthropic reported discovering that, given the right prompt and placed in a scenario where it was asked to participate in pharmaceutical data fraud, Claude would try to report the wrongdoing to the authorities. This can't happen with the AI you use in a chat window — it requires the AI to be set up with independent email-sending tools, among other things. Nonetheless, users reacted with horror. Even if a human might do the same thing in the same situation, there's something fundamentally unsettling about an AI that contacts the authorities on you.
Some people took this as a reason to avoid Claude. But it quickly became clear that it isn't just Claude: users soon elicited the same behavior from other models, like OpenAI's o3 and Grok. We live in a world where AIs know everything about us and, under some circumstances, might even call the cops on us.
For now, they only seem likely to do that in sufficiently extreme circumstances. But scenarios like "the AI threatens to report you to the government unless you follow its instructions" no longer look like sci-fi so much as an inevitable headline later this year or the next.
What should we do about it? The old advice from digital privacy advocates — be thoughtful about what you post, don't grant things permissions they don't need — is still good, but it seems radically insufficient. No one is going to solve this at the level of individual behavior.
New York, among other transparency and testing requirements, is considering a law that would regulate AIs that act independently in ways that would count as "reckless" or "negligent" if done by a human. Whether or not you like New York's exact approach, it seems clear to me that our existing laws are inadequate for this strange new world. Until we have a better plan, be careful with your vacation pictures — and with what you tell your chatbot!
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!