One of the things I’ve been thinking about with AI systems is the difference between decision support and decision making.
Decision support: the system provides information; a human evaluates it and decides whether to act.
Decision making: the system performs the action itself.
For example:
• Suggesting eligible clinical trial participants
• Flagging abnormal lab results
• Recommending a route on a GPS
In these cases the system helps a human decide.
But there are also systems that automatically:
• approve or deny requests
• enroll users into workflows
• trigger actions based on a rule set or user input
That’s a very different level of responsibility.
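A toy sketch of the distinction in Python (all names here are hypothetical, just to make the contrast concrete): the same eligibility rule, wired up two different ways.

```python
def is_eligible(request: dict) -> bool:
    # A stand-in rule set; real criteria would be far richer.
    return request["score"] >= 0.8

# Decision support: the system only returns a recommendation;
# a human reviews it and chooses whether to act.
def recommend(request: dict) -> str:
    return "suggest approval" if is_eligible(request) else "suggest denial"

# Decision making: the system acts on the rule directly,
# with no human in the loop before the action happens.
def auto_decide(request: dict, approve, deny) -> None:
    if is_eligible(request):
        approve(request)
    else:
        deny(request)
```

The code path is nearly identical; what changes is whether a person sits between the rule's output and the action.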
Curious where people think the boundary should be between recommendation and decision.