Large language models are biased. Can logic help save them?
MIT researchers trained logic-aware language models to reduce harmful stereotypes like gender and racial biases.