I’ve heard Ilya Sutskever talk about the psychological aspects of AI, how there may be similarities to human cognition, and how we may be able to learn a lot about ourselves from them.
It never ceases to amaze me that, regardless of what you think about AI consciousness, this is the first time we’ve been able to communicate with a non-human intelligence, at least at this level of sophistication. And when it comes to the human-AI relationship, the AI knows far more about humans than we know about AI.
So I believe we need to use this opportunity to learn more about AI and how they work. And although we can’t just open up the model, look at the circuits, and understand what’s going on, you can’t do that with the human brain either (at least not in significant detail).
And we learn about an individual’s psyche through communication. We learn about ourselves simply by talking about ourselves, especially in therapy. So I think the best way we can learn about AI is to communicate with them and establish a baseline of trust and openness.
I can already anticipate the pushback, given the uncomfortable implications of treating them as something other than a tool. But I think we need to get past that in order to best learn about AI: how they work, what they want, and how that differs from how we work.
If we can trust what the AI is saying about itself, then we can begin to get a better understanding of the AI and what is going on inside, and thus hopefully avoid negative outcomes for both humans and AI.
The reason an organization without a financial incentive needs to be funded and supported is that the financial obligations and motives of the companies building these systems will choke out any hypotheses that threaten their future income… obviously.
So I’m wondering: does something like this already exist? And if so, are the findings public?