Generative pre-trained transformers, or GPTs for short, such as those behind OpenAI’s ChatGPT chatbot and DALL-E image generator, are the current trend in AI research. Everyone wants to apply GPT models to almost everything, and that has sparked considerable debate for a variety of reasons. Now, in the latest example, researchers have unveiled a GPT-based artificial intelligence model that reads the human mind, although not in the way you might imagine.
Machines have entered a new phase of reading the human mind
Scientific American reports that a group of researchers has built a GPT model that can read the human mind. The program is not unlike ChatGPT in that it can generate coherent, continuous language from a single prompt. The main difference is that the input here is not text but human brain activity.

A team from the University of Texas at Austin recently published the study in Nature Neuroscience. The method uses imaging from an fMRI machine to interpret human brain activity while the subject hears, speaks, or imagines speech. The scientists call the technique completely “non-invasive,” which is somewhat ironic, since reading someone’s mind is about as deep an intrusion into a person’s being as there is.
From a scientific and medical point of view, of course, “non-invasive” simply means that the method requires no incision, surgery, or implanted foreign object in the subject’s body. This is also not the first time scientists have developed technology that can read thoughts, but it is the only successful method that does not require electrodes attached to the subject’s brain.

We trained and tested our decoder on brain responses while subjects listened to naturally narrated stories. Based on the brain’s responses to novel stories that were not used in training, the decoder successfully retrieved the meaning of the stories.
– Jerry Tang (@jerryptang) September 30, 2022
The machine does not read the mind word for word, but it interprets human thoughts
The model, called GPT-1, is the only method that renders brain activity as continuous language. Other techniques can extract only a single word or short phrase, but GPT-1 can generate complex descriptions that capture the gist of what a person is thinking.
For example, one participant listened to a recording of the phrase “I don’t have my driver’s license yet.” The language model interpreted the accompanying fMRI scan as “He hasn’t started learning to drive yet.” So while it does not read a person’s thoughts word for word, it can extract the general idea of what is on a person’s mind and describe it succinctly.
Invasive methods can decode exact words because they are trained to recognize specific motor signals in the brain, such as the movements of the lips forming a word. The GPT-1 model instead bases its output on blood flow in the brain, so thoughts cannot be reproduced verbatim; it works at a higher level of neural function.
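To see why meaning can be matched even when exact words cannot, it helps to look at how sentences compare in a semantic embedding space, which is the level the decoder works at. The Python sketch below is purely illustrative and is not from the study; it uses the off-the-shelf sentence-transformers package and the all-MiniLM-L6-v2 model as stand-ins to show that a paraphrase sharing almost no words with the original sentence can still land very close to it.

```python
from sentence_transformers import SentenceTransformer, util

# Off-the-shelf semantic embedding model; an illustrative choice,
# not the model used in the study.
model = SentenceTransformer("all-MiniLM-L6-v2")

heard = "I don't have my driver's license yet."
decoded = "He hasn't started learning to drive yet."
unrelated = "The stock market closed higher on Tuesday."

embeddings = model.encode([heard, decoded, unrelated])

# Cosine similarity: the paraphrase scores high against the original
# even though the two sentences share almost no words; the unrelated
# sentence scores low.
print(util.cos_sim(embeddings[0], embeddings[1]))  # high
print(util.cos_sim(embeddings[0], embeddings[2]))  # low
```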
Alexander Huth, an assistant professor of neuroscience and computer science at UT Austin, said at a press conference last Thursday:
“Our system works at a very different level. Instead of looking at that low-level motor activity, our system really works at the level of ideas, of semantics, and of meaning. That’s what it’s getting at.”
To achieve this result, the researchers enlisted volunteers. They trained the machine on the scans of three participants who each spent 16 hours listening to recorded stories inside an fMRI machine. This process allowed GPT-1 to associate neural activity with specific words and ideas.
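For a rough sense of what that training step can look like, here is a minimal Python sketch under heavy simplification: it fits a linear “encoding model” that predicts brain responses from the semantic features of the words being heard. Every name and array here (story_embeddings, fmri_responses, the ridge regression itself) is a hypothetical stand-in, not the authors’ pipeline, which also models hemodynamic delays and uses far more data.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical stand-ins for one subject's training data:
#   story_embeddings: (n_timepoints, n_features) semantic features of the
#                     words heard at each fMRI time point
#   fmri_responses:   (n_timepoints, n_voxels) measured blood-flow signal
rng = np.random.default_rng(0)
story_embeddings = rng.normal(size=(1000, 300))
fmri_responses = rng.normal(size=(1000, 5000))

# Fit a regularized linear "encoding model" that learns to predict brain
# activity from the semantic content of what the subject is hearing.
encoding_model = Ridge(alpha=10.0)
encoding_model.fit(story_embeddings, fmri_responses)

# Once trained, the model can predict the brain response a given phrase
# should evoke, which is what makes decoding-by-scoring possible.
predicted = encoding_model.predict(story_embeddings[:1])
print(predicted.shape)  # (1, 5000): one predicted voxel pattern
```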
After training, the volunteers listened to new stories while being scanned, and GPT-1 accurately determined the general idea of what they were hearing. The study also tested the technology with silent films and with stories the volunteers merely imagined, with similar results.
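At test time the decoder searches rather than transcribes. The hedged sketch below, self-contained with random stand-in data so it runs on its own, shows the core loop: a language model proposes candidate wordings, and the candidate whose predicted brain response best matches the new scan is kept. The published system runs a beam search over GPT-generated continuations; this compresses that idea to a few lines, and every name in it is hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Stand-in encoding model, fit as in the training sketch above.
train_embeddings = rng.normal(size=(1000, 300))
train_responses = rng.normal(size=(1000, 5000))
encoding_model = Ridge(alpha=10.0).fit(train_embeddings, train_responses)

# A new scan the decoder has never seen (random stand-in data).
observed_response = rng.normal(size=5000)

# In the real system a language model proposes these candidates;
# here they are hard-coded, with made-up embeddings, for illustration.
candidates = {
    "He hasn't started learning to drive yet.": rng.normal(size=300),
    "She ordered a pizza for dinner.": rng.normal(size=300),
    "They walked along the beach at sunset.": rng.normal(size=300),
}

def score(candidate_embedding):
    # How well does the brain activity predicted for this candidate
    # match what was actually measured?
    predicted = encoding_model.predict(candidate_embedding[None, :])[0]
    return np.corrcoef(predicted, observed_response)[0, 1]

# Keep the candidate that best explains the observed scan.
best = max(candidates, key=lambda text: score(candidates[text]))
print("Decoded gist:", best)
```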
Interestingly, GPT-1 was more accurate at interpreting the audio recordings than the participants’ imagined stories. This can be attributed to the abstract nature of imagined thoughts versus the more concrete ideas that come from listening to something. Even so, GPT-1 still came very close to the subject matter when reading unspoken thoughts.

The same decoder also worked on brain responses while subjects imagined telling stories, even though the decoder was trained only on perceived speech data. We expect that training the decoder on some imaginary speech data will further improve performance.
– Jerry Tang (@jerryptang) September 30, 2022
In one case, a volunteer imagined: “I went on a dirt road through a wheat field and over a stream and past some wooden buildings.” GPT-1 then interpreted this as: “He must have crossed a bridge to the other side and a very large building in the distance.” It dropped some essential details and vital context, but still captured the main elements of the person’s thinking.
Another new technology, another privacy concern
AI machines capable of reading human minds may be the most controversial application of GPT technology yet. The team’s stated goal is to help patients with ALS or aphasia speak and communicate, but they also acknowledge the potential for misuse. Although the system currently requires the subject’s cooperation to work, the study concedes that bad actors who got their hands on the technology would be bound by no such constraints.
On this issue, the paper states:
“Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder. However, future developments might enable decoders to bypass these requirements. Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes. For these and other unforeseen reasons, it is critical to raise awareness of the risks of brain decoding technology and to enact policies that protect each person’s mental privacy.”
Of course, this scenario assumes that fMRI technology can be miniaturized enough to be practical outside a clinical setting, which is likely still years of work away. Even so, the technology is a double-edged sword: it could help people with disabilities communicate, or it could be misused to extract people’s thoughts without their consent.