Researchers enhance peripheral vision in AI models
By enabling models to see the world more like humans do, the work could help improve driver safety and shed light on human behavior.
PhD students interning with the MIT-IBM Watson AI Lab look to improve natural language usage.
A multimodal system uses models trained on language, vision, and action data to help robots develop and execute plans for household, construction, and manufacturing tasks.
This new method draws on 200-year-old geometric foundations to give artists control over the appearance of animated characters.
“Minimum viewing time” benchmark gauges image recognition complexity for AI systems by measuring the time needed for accurate human identification.
Justin Solomon applies modern geometric techniques to solve problems in computer vision, machine learning, statistics, and beyond.
MIT CSAIL researchers innovate with synthetic imagery to train AI, paving the way for more efficient and bias-reduced machine learning.
Computer vision enables contact-free 3D printing, letting engineers print with high-performance materials they couldn’t use before.
AI models that prioritize similarity falter when asked to design something completely new.
Amid the race to make AI bigger and better, Lincoln Laboratory is developing ways to reduce power, train efficiently, and make energy use transparent.