Research

Ecologists find computer vision models’ blind spots in retrieving wildlife images

Biodiversity researchers tested vision systems on how well they could retrieve relevant nature images. More advanced models performed well on simple queries but struggled with more research-specific prompts.

Need a research hypothesis? Ask AI.

MIT engineers developed AI frameworks to identify evidence-driven hypotheses that could advance biologically inspired materials.

MIT engineers grow “high-rise” 3D chips

An electronic stacking technique could dramatically increase the number of transistors on chips, enabling more efficient AI hardware.

MIT researchers introduce Boltz-1, a fully open-source model for predicting biomolecular structures

With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage broader innovation.

Study reveals AI chatbots can detect race, but racial bias reduces response empathy

Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.

Teaching a robot its limits, to complete open-ended tasks safely

The “PRoC3S” method helps an LLM create a viable action plan by testing each step in a simulation. This strategy could eventually help in-home robots complete more ambiguous chore requests.

Researchers reduce bias in AI models while preserving or improving accuracy

A new technique identifies and removes the training examples that contribute most to a machine-learning model’s failures.

Study: Some language reward models exhibit political bias

Research from the MIT Center for Constructive Communication finds this effect occurs even when reward models are trained on factual data.

Enabling AI to explain its predictions in plain language

Using LLMs to convert machine-learning explanations into readable narratives could help users make better decisions about when to trust a model.

Citation tool offers a new approach to trustworthy AI-generated content

Researchers develop “ContextCite,” an innovative method to track AI’s source attribution and detect potential misinformation.