Reading through some of his literature posted here. I'd hardly dove in before he writes this in Chapter 1.4, "Organizational Risks":
The Challenger disaster, alongside other catastrophes, serves as a chilling reminder that even with the best expertise and intentions, accidents can still occur. As we progress in developing advanced AI systems, it is crucial to remember that these systems are not immune to catastrophic accidents. An essential factor in preventing accidents and maintaining low levels of risk lies in the organizations responsible for these technologies. In this section, we discuss how organizational safety plays a critical role in the safety of AI systems. First, we discuss how even without competitive pressures or malicious actors, accidents can happen—in fact, they are inevitable. We then discuss how improving organizational factors can reduce the likelihood of AI catastrophes.
It's hard to continue reading after something like that. I just find it difficult to trust experts when they can't catch stuff like this in their "own writing", which is obviously not their own writing.