Advancing urban tree monitoring with AI-powered digital twins
The Tree-D Fusion system integrates generative AI and genus-conditioned algorithms to create precise, simulation-ready models of 600,000 existing urban trees across North America.
MIT CSAIL researchers used AI-generated images to train a robot dog in parkour without any real-world data. Their LucidSim system demonstrates generative AI’s potential for creating robotics training data.
MIT CSAIL researchers created an AI-powered method for low-discrepancy sampling, which uniformly distributes data points to boost simulation accuracy.
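To illustrate what low-discrepancy sampling means, here is a minimal sketch of a classical construction, the 2-D Halton sequence, which spreads points far more evenly over the unit square than uniform random sampling. This is a textbook example for illustration only, not the CSAIL team's AI-powered method:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base.

    Successive indices fill the unit interval evenly rather than randomly.
    """
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# A 2-D Halton point set pairs radical inverses in two coprime bases (2 and 3).
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 6)]
# First coordinate steps through 0.5, 0.25, 0.75, 0.125, 0.625 — each new
# point lands in the largest remaining gap, the hallmark of low discrepancy.
```

Because the points avoid the clusters and voids of random sampling, estimates computed from them (e.g., Monte Carlo integrals) typically converge faster, which is the simulation-accuracy benefit the blurb refers to.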
New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.
CSAIL researchers introduce a novel approach allowing robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.
MAIA is a multimodal agent that can iteratively design experiments to better understand various components of AI systems.
New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, questioning their true reasoning abilities versus reliance on memorization.
DenseAV, developed at MIT, learns to parse and understand the meaning of language just by watching videos of people talking, with potential applications in multimedia search, language learning, and robotics.
A new “consensus game,” developed by MIT CSAIL researchers, elevates AI’s text comprehension and generation skills.
For the first time, researchers use a combination of MEG and fMRI to map the spatio-temporal dynamics of the human brain as it recognizes a visual image.