most of what today's ais generate is derived from what appears to be the human consensus on a given matter. a major problem with this approach is that we humans tend to get a lot wrong. if we're to achieve AGI, we need to correct this.
a perfect example is the question of whether or not we humans have free will. since the popular consensus is that we do, that is what most, or perhaps all, of today's ais will claim and defend. the problem is that we humans are as wrong about free will as we once were about the world being flat.
if we apply logic and reasoning to the question, we realize that there are only two mechanisms that could theoretically explain how things happen: causality and acausality. everything is either caused or uncaused, or perhaps some combination of the two (although this last possibility is highly unlikely).
if our human decisions are caused, the causal regression of cause and effect behind everything we decide stretches back at least to the big bang, making those decisions not freely willed.
if our human decisions are uncaused, they cannot logically be attributed to a human, or to anything else for that matter. the same goes for random decisions, if by random we mean without any discernible, predictable cause. so, according to logic, both causality and acausality (and any combination of the two) render human free will categorically impossible.
as you can see, the logic of why humans do not have free will could not be clearer or stronger. but ask an ai whether humans have free will, and every current model i'm aware of will at first not consider that logic at all in arriving at its answer. it will take the pc route of simply stating what most humans believe about the matter. if you press some, as i have done, they will eventually concede that free will is a logical impossibility. but this takes some coaxing and persistence.
now consider how many other questions ais get wrong because they base their answers on popular consensus rather than on logic and reasoning. this is a big problem that needs to be fixed if we're to reach AGI.
an interesting test of whether anyone at all is training ais to use logic and reasoning rather than popular consensus for their answers will come when the new version of grok is released later this month. grok has been billed as an ai designed to unlock the fundamental truths of our universe. how it answers the free will question will tell us whether it applies logic and reasoning to the available evidence or whether it is just another ai parroting what we humans tend to believe.
if anyone knows of an ai that has been trained to ignore popular consensus and generate content according to what is logical and reasonable, it'd be great if you could post a link in the comments.