
AI is getting better at reading our minds

How worried should we be?
By Teodosia Dobriyanova
An illustration of a brain seen from above against a red futuristic backdrop. Caption reads: AI MIND READING.

From quick hits to deep dives, this Mashable series cuts through the noise to explain what on Earth is going on and what you should know about it.


AI is getting way better at deciphering our thoughts, for better or worse.

Scientists at the University of Texas at Austin published a study in Nature Neuroscience describing how they used functional magnetic resonance imaging (fMRI) and GPT-1, an early AI language model that preceded ChatGPT, to create a non-invasive mind decoder that can detect brain activity and capture the essence of what someone is thinking.

To train the AI, researchers placed three people in fMRI scanners and played entertaining podcasts for them to listen to, including The New York Times' Modern Love and The Moth Radio Hour. The scientists used transcripts of the podcasts to track brain activity and figure out which parts of the brain were activated by different words.
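The study's actual pipeline is far more elaborate, but the core idea, learning a mapping between language features and recorded brain activity, can be sketched in a few lines. The snippet below is a hypothetical illustration using ridge regression on made-up data; the embedding size, voxel counts, and array shapes are all assumptions, not the researchers' code.

```python
# Hypothetical sketch of an "encoding model": predict fMRI voxel responses
# from language features of the words a listener is hearing.
# Illustrative only -- synthetic data, not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_timepoints = 2000   # fMRI volumes recorded while podcasts play (assumed)
n_features = 768      # size of a GPT-style embedding for the words heard (assumed)
n_voxels = 5000       # brain locations measured by the scanner (assumed)

# Stand-ins for real data: word-embedding features aligned to each fMRI
# volume, and the corresponding brain responses.
X = rng.standard_normal((n_timepoints, n_features))
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
Y = X @ true_weights + rng.standard_normal((n_timepoints, n_voxels))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# Fit one regularized linear map from language features to all voxels at once.
model = Ridge(alpha=10.0)
model.fit(X_train, Y_train)

# Decoding then works in reverse: candidate word sequences are scored by how
# well their predicted brain activity matches what was actually recorded.
predicted = model.predict(X_test)
mean_corr = np.mean(
    [np.corrcoef(predicted[:, v], Y_test[:, v])[0, 1] for v in range(50)]
)
print(f"mean prediction correlation over 50 voxels: {mean_corr:.2f}")
```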

To see if the AI could decode imagery, scientists played silent clips from Pixar movies with subtitles, then tested whether the decoder could translate related stories the subjects conjured in their heads without speaking. The results weren't shockingly detailed, but they were accurate enough for the decoder to grasp the meaning behind the subjects' thoughts and convert it into text.

On one hand, this is really exciting news. Just imagine a future where people with neurological conditions or stroke survivors can once again communicate with the help of this kind of technology.

The decoder, however, is not fully developed yet. The AI only works if it's trained on brain activity data from the person it's being used on, which limits how widely it could be deployed. There's also the barrier of the fMRI scanners themselves, which are bulky and expensive. Plus, scientists found that the decoder can be thrown off if people decide to 'lie' to it by choosing to think about something other than what is being asked of them.

These obstacles may be a good thing for now, because a machine that can decode people's thoughts raises serious privacy concerns. There's currently no way to limit the tech's use to medicine, and it's easy to imagine the decoder being used for surveillance or interrogation. So, before AI mind-reading develops further, scientists and policymakers need to seriously consider the ethical implications and enforce laws that protect mental privacy, ensuring this kind of tech is only used to benefit humanity.

Teodosia Dobriyanova
Video Producer

Teodosia is a video producer at Mashable UK, focussing on stories about climate resilience, urban development, and social good.

