Study: AI could lead to inconsistent outcomes in home surveillance
Researchers find large language models make inconsistent decisions about whether to call the police when analyzing surveillance videos.