Data

Making it easier to verify an AI model’s responses

By letting users clearly see the data referenced by a large language model, this tool speeds up manual validation, helping users spot AI errors.

Modeling relationships to solve complex problems efficiently

Associate Professor Julian Shun develops high-performance algorithms and frameworks for large-scale graph processing.

How AI is improving simulations with smarter sampling techniques

MIT CSAIL researchers created an AI-powered method for low-discrepancy sampling, which uniformly distributes data points to boost simulation accuracy.
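For readers unfamiliar with the term, low-discrepancy (quasi-random) sampling spreads points across a space more evenly than independent random draws. The classic Halton sequence is a standard way to illustrate the idea; the sketch below is background on the general technique, not the MIT team's AI-powered method, and the function name `halton` is ours:

```python
import math

def halton(i, base):
    """Return the i-th element (1-indexed) of the Halton sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# 2-D low-discrepancy points: pair Halton sequences with co-prime bases 2 and 3.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
```

Unlike random sampling, each new point deliberately lands in a gap left by the previous ones, which is why simulations converge faster with fewer samples.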

New security protocol shields data from attackers during cloud-based computation

The technique leverages quantum properties of light to guarantee security while preserving the accuracy of a deep-learning model.

Study: AI could lead to inconsistent outcomes in home surveillance

Researchers find large language models make inconsistent decisions about whether to call the police when analyzing surveillance videos.

A fast and flexible approach to help doctors annotate medical scans

“ScribblePrompt” is an interactive AI framework that can efficiently highlight anatomical structures across different kinds of medical scans, helping medical workers delineate regions of interest and abnormalities.

Study: Transparency is often lacking in datasets used to train large language models

Researchers developed an easy-to-use tool that enables an AI practitioner to find data that suits the purpose of their model, which could improve accuracy and reduce bias.

New open-source tool helps to detangle the brain

The software tool NeuroTrALE is designed to process large amounts of brain imaging data quickly, efficiently, and semi-automatically.

MIT researchers use large language models to flag problems in complex systems

The approach can detect anomalies in data recorded over time, without the need for any training.

Method prevents an AI model from being overconfident about wrong answers

More efficient than other approaches, the “Thermometer” technique could help users know when to trust a large language model.
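As background, Thermometer builds on temperature scaling, a standard calibration idea: dividing a model's raw scores by a temperature before the softmax softens overconfident probabilities. The sketch below shows the generic technique only, not the Thermometer method itself, and the variable names are illustrative:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores into probabilities; a higher temperature flattens them."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.0]            # raw scores for three candidate answers
confident = max(softmax(logits, temperature=1.0))   # sharper distribution
calibrated = max(softmax(logits, temperature=2.0))  # softened confidence
```

Choosing the right temperature usually requires labeled validation data; the appeal of a method like Thermometer is estimating it without that extra labeling step.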