Large language models are biased. Can logic help save them?
MIT researchers trained logic-aware language models to reduce harmful stereotypes, such as gender and racial bias.