I will use the following variation on the Turing test:
An entity is conscious, has human (or superhuman) intelligence, and has moral status and rights, if and only if a comfortable majority of ordinary people agree that it does. (This test can be applied to both AGI and transhuman endeavours.)
Many avenues toward something like this were investigated in Nick Bostrom's book Superintelligence (2014). However, iirc, most if not all of them involved some as-yet undeveloped technology. If, for the sake of argument, humanity decided to drop everything and begin work on a single massively parallel computer system [insert Douglas Adams reference] running entirely on hardware and software we can create today, would that system have enough compute power to simulate a human brain at a fine-grained enough level to pass the test described above?
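For a rough sense of scale, here's a minimal back-of-envelope sketch in Python. All the figures in it (neuron count, synapses per neuron, firing rate, operations per synaptic event, exascale peak) are my own assumptions based on commonly cited orders of magnitude, not measurements, and reasonable people pick values several orders of magnitude apart:

```python
# Back-of-envelope estimate: raw compute for a spiking-level brain simulation.
# Every figure below is a rough, commonly cited order of magnitude (assumption).

neurons = 8.6e10             # ~86 billion neurons in a human brain
synapses_per_neuron = 7e3    # ~10^3 to 10^4 synapses per neuron
mean_firing_rate_hz = 1.0    # average spike rate; estimates range ~0.1-2 Hz
ops_per_synaptic_event = 10  # arithmetic per spike per synapse (model-dependent)

flops_needed = (neurons * synapses_per_neuron
                * mean_firing_rate_hz * ops_per_synaptic_event)
print(f"Estimated simulation cost: ~{flops_needed:.1e} FLOP/s")  # ~6e15

exascale_machine = 1e18      # peak FLOP/s of a current exascale supercomputer
print(f"Fraction of an exascale machine: {flops_needed / exascale_machine:.1%}")
```

Under these (generous) assumptions a spiking-level simulation fits comfortably on today's hardware. But the Sandberg & Bostrom whole-brain-emulation roadmap that feeds into Superintelligence spans estimates from roughly 10^15 FLOP/s for coarse population models up to 10^22 or more for detailed electrophysiology, so the answer hinges entirely on how fine-grained "fine-grained enough" turns out to be.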
That's of course only one example; most likely you wouldn't need to simulate a human brain at all. And if I asked someone like Blake Lemoine this question, they might say AGI already exists. Personally though, as a lifelong supporter and AI optimist, I am still highly sceptical that AGI is possible right now, or even in the near to medium term.