There's a somewhat concerning new trend going viral: people are using ChatGPT to figure out the location shown in photos.
This week, OpenAI released its latest AI models, o3 and o4-mini, both of which can reason through uploaded images. In practice, the models can crop, rotate, and zoom in on photos, even blurry and distorted ones, to analyze them thoroughly.
These image-analysis capabilities, combined with the models' ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is remarkably good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.
Wow, I nailed it, not even the tree in front of me. pic.twitter.com/bvcoe1fq0z
– Swax (@swax) April 17, 2025
Often, the models don't appear to be drawing on past ChatGPT conversations or on EXIF data, the metadata attached to photos that can reveal details such as where a photo was taken.
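For context on what EXIF location data looks like: the GPS tags in EXIF metadata store latitude and longitude as degree/minute/second values plus a hemisphere reference ("N"/"S", "E"/"W"). A minimal sketch of converting those values to the signed decimal degrees used by mapping services (the function name and example coordinates are illustrative, not from the article):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ("N"/"S"/"E"/"W") to signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Example: 40°42'46" N, 74°0'22" W (roughly lower Manhattan)
lat = dms_to_decimal(40, 42, 46, "N")
lon = dms_to_decimal(74, 0, 22, "W")
print(round(lat, 4), round(lon, 4))  # → 40.7128 -74.0061
```

This is why stripping metadata before sharing a photo has traditionally been the privacy advice; the point of the trend above is that models like o3 can often locate a photo from visual content alone, with no metadata at all.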
X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snapshots, building facades, and self-portraits, and instructing o3 to play GeoGuessr, an online game that challenges players to guess locations from street-level images.
This is a fun ChatGPT o3 feature. GeoGuessr! pic.twitter.com/hrcmixs8yd
– Jason Burns (@vyrotek) April 17, 2025
There's an obvious potential privacy issue here. Nothing stops a bad actor from screenshotting, say, someone's Instagram Story and using ChatGPT to try to doxx them.
o3 is insane
I asked a friend of mine to give me a random photo
They gave me a random photo they took at a library
o3 figured it out in 20 seconds and it's correct pic.twitter.com/0k8dxifkoy
– Yumi (@izyuuumi) April 17, 2025
To be clear, this sort of location-guessing was possible before the release of o3 and o4-mini. TechCrunch ran a number of photos through o3 and an older model without image-reasoning capabilities, GPT-4o, to compare the models' location-guessing skills. Surprisingly, GPT-4o arrived at the same correct answer as o3 more often than not, and in less time.
There was at least one instance in our brief testing, though, where o3 found a place GPT-4o couldn't. Given a photo of a purple, mounted rhino head in a dimly lit bar, o3 correctly answered that it was from a Williamsburg speakeasy.
That's not to suggest o3 is flawless in this regard. Several of our tests failed: o3 got stuck in a loop, unable to arrive at an answer it was reasonably confident in, or volunteered a wrong location. Users on X also noted that o3's location deductions can be fairly wide of the mark.
Still, the trend illustrates some of the emerging risks presented by more capable, so-called reasoning AI models. There appear to be few safeguards in ChatGPT to prevent this kind of "reverse location lookup," and OpenAI, the company behind ChatGPT, doesn't address the issue in its safety reports for o3 and o4-mini.
We've reached out to OpenAI for comment and will update this piece if they respond.