Researchers leverage shadows to model 3D scenes, including objects blocked from view.
This technique could lead to safer autonomous vehicles, more efficient AR/VR headsets, or faster warehouse robots.
LLMs trained primarily on text can generate complex visual concepts through code with self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.
The SPARROW algorithm automatically identifies the best molecules to test as potential new medicines, given the vast number of factors affecting each choice.
Combining natural language and programming, the method enables LLMs to solve numerical, analytical, and language-based tasks transparently.
The method uses language-based inputs instead of costly visual data to direct a robot through a multistep navigation task.
DenseAV, developed at MIT, learns to parse and understand the meaning of language just by watching videos of people talking, with potential applications in multimedia search, language learning, and robotics.
The startup Augmental allows users to operate phones and other devices using their tongue, mouth, and head gestures.
Using generative AI models, researchers combined robotics data from different sources to help robots learn skills more effectively.
A new approach could streamline virtual training processes or aid clinicians in reviewing diagnostic videos.
The “Alchemist” system adjusts the material attributes of specific objects within images, with potential uses in adapting video game models to different environments, fine-tuning visual effects, and diversifying robotic training data.