
AI creates images of what people are seeing by analyzing brain scans


The images in the bottom row were recreated from the brain scans of someone looking at the ones in the top row.

Yu Takagi and Shinji Nishimoto/Osaka University, Japan

A tweak to a popular text-to-image AI allows it to convert brain signals directly into images. However, the system requires extensive training on data collected with bulky and expensive imaging equipment, so everyday mind reading is still a long way off.

Several research groups have previously generated images from brain signals using power-hungry AI models that require fine-tuning millions to billions of parameters.

Now, Shinji Nishimoto and Yu Takagi at Osaka University in Japan have developed a much simpler approach using Stable Diffusion, a text-to-image generator released by Stability AI in August 2022. Their new method involves training thousands, rather than millions, of parameters.

When used normally, Stable Diffusion converts text to an image by starting with random visual noise and repeatedly adjusting it until the result resembles images in its training data that carry similar text captions.
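As a point of reference, this is what that normal text-to-image use looks like in code. The sketch below uses the open-source Hugging Face diffusers library and a public Stable Diffusion checkpoint; both are our choices for illustration, as the article does not say which implementation the researchers used.

```python
# A minimal sketch of Stable Diffusion's ordinary text-to-image use,
# via the Hugging Face diffusers library (an illustrative choice; the
# article does not specify an implementation).
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The pipeline starts from random latent noise and iteratively
# denoises it toward an image that matches the text prompt.
image = pipe("a photograph of a mountain landscape").images[0]
image.save("landscape.png")
```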

Nishimoto and Takagi built two complementary models to make the AI work with brain signals. The pair used data from four people who took part in a previous study that used functional magnetic resonance imaging (fMRI) to scan their brains as they viewed 10,000 different images of landscapes, objects and people.

Using about 90 percent of the brain imaging data, the pair trained a model to make links between fMRI data from a region of the brain that processes visual signals, called the early visual cortex, and the images people were seeing.

They used the same data set to train a second model to form links between the text descriptions of the images, made by five annotators in the previous study, and fMRI data from a region of the brain that processes the meaning of the images, called the ventral visual cortex.
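Taken together, the two models amount to fitting simple per-person mappings from brain activity onto the two kinds of input Stable Diffusion understands. The sketch below illustrates that training step with linear ridge regression and heavily scaled-down array shapes; the article says only that the models “make links”, so the estimator choice and every dimension here are assumptions.

```python
# Hypothetical sketch of training the two per-person mappings with
# linear ridge regression. All shapes are scaled-down stand-ins: the
# study used roughly 10,000 viewed images per person, and real fMRI
# and embedding data are far higher-dimensional.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_images, n_voxels = 1000, 500
rng = np.random.default_rng(0)
early_vc = rng.standard_normal((n_images, n_voxels))    # early visual cortex activity
ventral_vc = rng.standard_normal((n_images, n_voxels))  # ventral visual cortex activity
image_latents = rng.standard_normal((n_images, 256))    # Stable Diffusion image latents
text_embeds = rng.standard_normal((n_images, 256))      # caption (text) embeddings

# About 90 percent of the scans train the models; the rest are held out.
train_idx, test_idx = train_test_split(
    np.arange(n_images), test_size=0.1, random_state=0
)

# Model 1: early visual cortex -> latent image representation.
to_image = Ridge(alpha=1.0).fit(early_vc[train_idx], image_latents[train_idx])

# Model 2: ventral visual cortex -> text-embedding representation.
to_text = Ridge(alpha=1.0).fit(ventral_vc[train_idx], text_embeds[train_idx])
```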

After training, these two models, which had to be customized for each individual, could translate brain imaging data into forms that were fed directly into the Stable Diffusion model. The system could then reconstruct about 1,000 of the images people viewed with roughly 80 percent accuracy, without having been trained on those images. This level of accuracy is similar to that previously achieved in a study that analyzed the same data using a much more demanding approach.
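At reconstruction time, a held-out scan is pushed through the two fitted mappings and the results are handed to Stable Diffusion in place of a typed prompt and random noise. The sketch below is a simplified, hypothetical version of that step: feeding the predictions in through diffusers’ `prompt_embeds` and `latents` arguments is our approximation of the researchers’ pipeline, and the predicted values are random stand-ins.

```python
# Hypothetical reconstruction step (a simplification: the exact way the
# predicted representations are plumbed into Stable Diffusion is not
# described in the article).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Stand-ins for the outputs of the two fitted brain-to-representation
# models when given a held-out fMRI scan.
z = torch.randn(1, 4, 64, 64, dtype=torch.float16, device="cuda")  # predicted image latent
c = torch.randn(1, 77, 768, dtype=torch.float16, device="cuda")    # predicted text embedding

# Denoise starting from the predicted latent, conditioned on the
# predicted embedding, instead of random noise plus a typed prompt.
reconstruction = pipe(prompt_embeds=c, latents=z).images[0]
reconstruction.save("reconstruction.png")
```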

“I couldn’t believe my eyes. I went to the bathroom and looked in the mirror, then went back to my desk to take a look again,” says Takagi.

However, the study only tested the approach on four people, and mind-reading AIs work better on some people than others, Nishimoto says.

Also, since the models must be customized to each individual’s brain, this approach requires lengthy brain-scanning sessions and huge fMRI machines, he says. “This isn’t practical for everyday use at all,” says Sikun Lin at the University of California, Santa Barbara.

In the future, more practical versions of the approach could allow people to make art or alter images with their imagination, or add new elements to video games, Lin says.

