Hey folks, I’ve been mulling something over. Every other day I see another article about progress in AI, and it just doesn’t seem to stop. Look at the AI models today: they’re getting massive, and their capabilities are already borderline scary. How much longer before the line between AI and human blurs, or even becomes indistinguishable? Take GPT-4 or Anthropic’s Claude, for instance — they’re so much better at understanding and even generating human-like text. Not to mention, these models are conditioned to avoid certain topics and tasks.
I don’t think people realize that part of why AutoGPT doesn’t work well may be that OpenAI and Anthropic have trained their AIs not to act autonomously. Could we possibly already be in the AI singularity?