Scott Reed at DeepMind - Working on neural proof generation and inference by combining deep learning and symbolic logic (see the code sketch after this list).
Luke Hewitt at DeepMind - Developing graph neural networks and reinforcement learning for mathematical and logical reasoning.
Alex Graves at DeepMind - Pioneering new recurrent neural networks like the Differentiable Neural Computer for complex logical inference and reasoning problems.
Brenden Lake at NYU - Leading work on integrating neural learning and structured Bayesian models to achieve human-like concept learning and reasoning abilities.
Xavier Llora at Carnegie Mellon University - Advancing probabilistic logic neural networks that combine symbolic logic with deep neural models for stronger reasoning capabilities.
Tommi Jaakkola at MIT - Researching modular networks and theory-of-mind reasoning to unpack the logical structure of how agents interpret the world.
Jian Tang at Mila - Proposing and developing logic attention networks that inject inductive biases into transformers to nudge them toward logical consistency.
Matt Gardner at Allen Institute for AI - Working on the Aristo question-answering project, which trains AI models on scientific facts and logical reasoning skills.
Roba Abbas at Monash University - Designing explainable AI systems with formal argumentation to enable richer human-aligned justification chains.
Sanjay Subrahmanian at UCLA - Leading work on heterogeneous reasoning that combines search algorithms, formal logic, and deep learning for explainable, transparent inference.
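
A recurring theme across these groups is making symbolic constraints differentiable so they can be trained alongside a neural network. As a minimal, illustrative sketch (not tied to any specific group's method; the model, data, and rule are all made up for the example), the PyTorch snippet below adds a soft penalty for violating the propositional rule "A implies B" to an ordinary classification loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical two-label classifier: predicts p(A|x) and p(B|x) independently.
class TwoLabelClassifier(nn.Module):
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def loss_fn(probs, targets, rule_weight=0.5):
    p_a, p_b = probs[:, 0], probs[:, 1]
    # Standard data-fit term.
    bce = F.binary_cross_entropy(probs, targets)
    # Soft (product fuzzy logic) violation of "A -> B": large when the model
    # predicts A true but B false. Differentiable, so it trains end to end.
    rule_violation = (p_a * (1.0 - p_b)).mean()
    return bce + rule_weight * rule_violation

# Usage with random data standing in for a real dataset.
model = TwoLabelClassifier(in_dim=8)
x = torch.randn(16, 8)
y = torch.randint(0, 2, (16, 2)).float()
loss = loss_fn(model(x), y)
loss.backward()
```

The `rule_weight` term trades off fitting the data against respecting the logical constraint, which is one simple version of the inductive bias several of the projects above aim to build into neural models.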