Meta Announces New Method for Real-Time Decoding of Images from Brain Activity

Brain decoding has improved a lot recently thanks to ML, to the point where visual perceptions can be read out of fMRI brain scans. But fMRI measures blood flow, which lags neural activity by seconds — too slow for real-time BCIs.

A new study from Meta's AI research team pushes brain reading toward real time using MEG (magnetoencephalography), which measures whole-brain activity at millisecond resolution.

They built a 3-part pipeline to decode MEG signals (a rough sketch in code follows the list):

  1. Embed images into latent spaces using pretrained models like CLIP.
  2. Train an MEG-specific ConvNet to predict those embeddings from MEG data.
  3. Generate images from the predicted embeddings with a diffusion model.
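
To make step 2 concrete, here's a minimal PyTorch sketch of what "train a ConvNet to predict image embeddings from MEG" could look like. This is not the paper's actual architecture — the layer sizes, channel/time dimensions, loss combination, and all function names are illustrative assumptions:

```python
# Minimal sketch of the MEG -> image-embedding regressor (step 2).
# Shapes, layer sizes, and the loss are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class MEGEncoder(nn.Module):
    """Maps an MEG window (channels x time) to an image-embedding vector."""
    def __init__(self, n_channels=272, embed_dim=768):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 320, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv1d(320, 320, kernel_size=3, padding=1),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),      # collapse the time axis
        )
        self.head = nn.Linear(320, embed_dim)

    def forward(self, meg):               # meg: (batch, channels, time)
        z = self.conv(meg).squeeze(-1)    # (batch, 320)
        return self.head(z)               # (batch, embed_dim)

def training_step(model, meg, clip_targets, optimizer):
    """One step of regressing MEG windows onto precomputed CLIP embeddings."""
    pred = model(meg)
    # Mix a regression loss with a cosine term so predictions align in
    # direction as well as magnitude (a common choice; an assumption here).
    mse = nn.functional.mse_loss(pred, clip_targets)
    cos = 1 - nn.functional.cosine_similarity(pred, clip_targets).mean()
    loss = mse + cos
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design idea is that the hard generative work is outsourced: the brain model only has to hit a pretrained latent space (step 1), and a frozen diffusion model turns that latent into pixels (step 3).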

They tested it on 20k+ natural images. MEG decoding retrieved the correct image about 7X better than previous methods, reaching 70% top-5 accuracy.
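
For anyone unfamiliar with the metric, here's a hedged sketch of how top-5 retrieval accuracy is typically computed in this kind of setup: rank all candidate images by similarity to the predicted embedding and check whether the true image lands in the top 5. The function name and the use of cosine similarity are my assumptions:

```python
# Top-5 retrieval accuracy: for each MEG-predicted embedding, rank the
# gallery of candidate image embeddings and check if the true image is
# among the 5 nearest. Names and similarity choice are illustrative.
import torch

def top5_retrieval_accuracy(pred, gallery, true_idx):
    """pred: (n, d) predicted embeddings; gallery: (m, d) image embeddings;
    true_idx: (n,) index of each trial's true image in the gallery."""
    pred = torch.nn.functional.normalize(pred, dim=1)
    gallery = torch.nn.functional.normalize(gallery, dim=1)
    sims = pred @ gallery.T                    # (n, m) cosine similarities
    top5 = sims.topk(5, dim=1).indices         # (n, 5) best candidates
    hits = (top5 == true_idx.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```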

Generated images matched the semantics decently but lacked the fine visual detail of fMRI reconstructions. MEG seems to carry mostly high-level category information, whereas fMRI captures more low-level features.

This could eventually enable visual BCIs for people with paralysis... honestly, a world where we can decode brain images in real time is pretty crazy. The findings also raise important ethical questions around the privacy of decoded mental content (wow, that was a weird sentence to write!).

TLDR: New MEG pipeline decodes visual perception from brain activity in real time. Image generation is semantically good but not yet photorealistic.

Full summary here. Paper is here.

submitted by /u/Successful-Western27