computer-vision

Can robots learn from machine dreams?

MIT CSAIL researchers used AI-generated images to train a robot dog to perform parkour, without any real-world data. Their LucidSim system demonstrates generative AI's potential for creating robotics training data.

Combining next-token prediction and video diffusion in computer vision and robotics

A new method can train a neural network to sort through corrupted data while anticipating next steps. It can make flexible plans for robots, generate high-quality video, and help AI agents navigate digital environments.

AI pareidolia: Can machines spot faces in inanimate objects?

A new dataset of "illusory" faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.

Helping robots zero in on the objects that matter

A new method called Clio enables robots to quickly map a scene and identify the items they need to complete a given set of tasks.

Helping robots practice skills independently to adapt to unfamiliar environments

A new algorithm helps robots practice skills like sweeping and placing objects, potentially helping them improve at important tasks in houses, hospitals, and factories.

Precision home robots learn with real-to-sim-to-real

CSAIL researchers introduce a novel approach allowing robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.

MIT researchers advance automated interpretability in AI models

MAIA is a multimodal agent that can iteratively design experiments to better understand various components of AI systems.

Researchers leverage shadows to model 3D scenes, including objects blocked from view

This technique could lead to safer autonomous vehicles, more efficient AR/VR headsets, or faster warehouse robots.

Understanding the visual knowledge of language models

LLMs trained primarily on text can generate complex visual concepts through code with self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.

Researchers use large language models to help robots navigate

The method uses language-based inputs instead of costly visual data to direct a robot through a multistep navigation task.