Current AI systems are 'turn-based': the user gives an input, the AI gives an output, and so on back and forth. In between, the AI isn't processing anything or doing any 'work' towards the user's goals. The question is whether such a system can get us to AGI. Humans are obviously not turn-based; we are continually thinking about things relevant to our goals. In some ways this makes AI systems a lot safer, since they can't go out and do something by themselves: whatever action is taken only lasts a single 'turn', and then it stops until new instructions are given.
It also seems that, given an AI can hold thousands or millions of conversations with users simultaneously, there is a question of what happens if such an AI is directed to hold thousands or millions of conversations with itself simultaneously. Surely that would be like a 'brain', or at least a community in some sense? I imagine the way to do this would be for each instance of the AI to have slightly different operating parameters (temperature, for instance) and then just to start it chatting with itself, perhaps with an initial meta prompt like "What do you think you should do?" or "How would you develop yourself?" It could then hold discussions, or 'votes', within itself about the best way forward.
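Roughly, I imagine something like the sketch below (just a toy illustration, not a tested system): a handful of instances of the same model, each at a different temperature, taking turns replying to a shared transcript and then 'voting'. The `call_llm` function is a placeholder for whatever chat API you'd actually use, and the vote-parsing step is faked so the script runs on its own.

```python
# Toy sketch of the "many instances talking to each other" idea:
# N copies of the same model, each with a different temperature, take turns
# responding to a shared transcript, then cast a crude vote on the best line.
# call_llm() is a stand-in for a real chat-completion API.

import random
from collections import Counter

def call_llm(prompt: str, temperature: float) -> str:
    """Placeholder for a real model call; returns a canned reply so the
    sketch runs without any API key."""
    return f"[reply at T={temperature:.1f} to: {prompt[-60:]!r}]"

N_INSTANCES = 5
TEMPERATURES = [0.3 + 0.2 * i for i in range(N_INSTANCES)]  # 0.3 .. 1.1
META_PROMPT = "What do you think you should do?"

transcript = [META_PROMPT]

# Round-robin 'hive' discussion: each instance sees the whole transcript
# so far and appends its reply.
for round_num in range(3):
    for i, temp in enumerate(TEMPERATURES):
        context = "\n".join(transcript)
        reply = call_llm(context, temperature=temp)
        transcript.append(f"instance {i}: {reply}")

# Crude voting step: each instance is asked which suggestion it prefers.
votes = Counter()
for i, temp in enumerate(TEMPERATURES):
    ballot = call_llm(
        "Vote for the best suggestion so far:\n" + "\n".join(transcript),
        temperature=temp,
    )
    # A real system would parse the ballot text; here we pick at random.
    votes[random.randrange(len(transcript))] += 1

print("Transcript length:", len(transcript))
print("Winning line index:", votes.most_common(1)[0][0])
```

Obviously the interesting questions (does anything coherent emerge, does it converge or just drift?) only show up once real model calls and real vote parsing are plugged in.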
It seems like this kind of 'hive mind' would be the obvious thing for researchers with access to a many-instance AI system to try experimentally, so I wonder whether it has been done and what the results have been.