Multiple AI models help robots execute complex plans more transparently
A multimodal system uses models trained on language, vision, and action data to help robots develop and execute plans for household, construction, and manufacturing tasks.
A new method draws on 200-year-old geometric foundations to give artists control over the appearance of animated characters.
Using generative AI, MIT chemists created a model that can predict the structures formed when a chemical reaction reaches its point of no return.
Using machine learning, a new computational method can reveal how materials function as catalysts, semiconductors, or battery components.
A new, data-driven approach could lead to better solutions for tricky optimization problems like global package routing or power grid operation.
With the PockEngine training method, machine-learning models can efficiently and continuously learn from user data on edge devices like smartphones.
Computer vision enables contact-free 3D printing, letting engineers print with high-performance materials they couldn’t use before.
Researchers coaxed a family of generative AI models to work together to solve multistep robot manipulation problems.
By focusing on causal relationships in genome regulation, a new AI method could help scientists identify new immunotherapy techniques or regenerative therapies.
Researchers use synthetic data to improve a model’s ability to grasp conceptual information, which could enhance automatic captioning and question-answering systems.