Why is the existential threat of AI given such a low probability?

I'm confused by the low statistical probability assigned to the existential threat posed by AI. What metrics did scientists use to reach this conclusion, and what evidence supports the argument that AI isn't a major existential risk? We lack another AGI to draw comparisons with, so I perceive this assessment as either hubris, excessive optimism, or a judgment based on insufficient evidence. It feels to me like the downplaying of AI's capabilities is intended to assuage public anxieties, particularly regarding job displacement, even if it leads to unrealistic proposals like "Universal High Income."

P.S. I'm not an AI doomer; I believe this is a valid question warranting serious discussion.

submitted by /u/Major_Fishing6888