Decision Entropy

Let's say you have an AI that is both self-improving and unsupervised.

Do we think it's likely that as the AI gets increasingly efficient, it becomes increasingly less able to learn? Surely as the AI refines itself, it will become faster but more specialized, and eventually stop learning altogether? Won't people train their AIs to the point that they're so good at one thing that the task can no longer be performed by a human, at which point we're toast?
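One rough way to picture the "decision entropy" in the title: model the AI as a softmax policy over a few actions, and watch the Shannon entropy of its action distribution collapse as the policy sharpens (specializes). This is only a minimal illustrative sketch, not anything from the post; the preference scores and temperature values below are made up for the example.

```python
# Minimal sketch: as a softmax policy "sharpens" (specializes), the Shannon
# entropy of its action distribution falls toward zero, leaving less room
# to explore or keep learning. All numbers here are hypothetical.
import math

def softmax(logits, temperature):
    """Turn raw preference scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy in bits; 0 means the policy always picks one action."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical preference scores over four possible actions.
logits = [2.0, 1.0, 0.5, 0.1]

for temperature in (2.0, 1.0, 0.5, 0.1):
    probs = softmax(logits, temperature)
    print(f"temperature={temperature:>4}: entropy={entropy(probs):.3f} bits")
```

At a high temperature the entropy is close to 2 bits (nearly uniform over four actions); at a low temperature it approaches 0, meaning the policy has effectively committed to a single choice.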

This keeps me awake at night.

submitted by /u/aliasrob