If you are confident that recursive AI self-improvement is not possible, what makes you so sure?
We know computer programs and hardware can be optimized. We can foresee machines as smart as humans some time in the next 50 years. A machine like that could write computer programs and optimize hardware. What will prevent recursive self-improvement? …
Teaching LLMs to be more reasonable
Based on a bit of research and a lot of gut feeling, I offer the following speculation: if you self-trained an LLM with a Python interpreter or Java compiler in a feedback loop, where it learned from its own mistakes, then it could become dramatically b…
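The feedback loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not anyone's actual training setup: the `propose_fix` function stands in for the LLM (here it hard-codes a single repair), and the loop's job is just to show the run → collect error → feed back → retry cycle that the speculation relies on.

```python
import subprocess
import sys
import tempfile

def run_python(code: str) -> tuple[bool, str]:
    """Execute candidate code in a subprocess; return (success, stderr feedback)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=10
    )
    return result.returncode == 0, result.stderr

def propose_fix(code: str, error: str) -> str:
    """Stand-in for the LLM: hard-codes one repair.
    In the real loop this would be a model call conditioned on the error text."""
    if "NameError" in error and "sqrt" in error:
        return "import math\n" + code.replace("sqrt", "math.sqrt")
    return code

# The loop: run the candidate, and if it fails, feed the error back and retry.
candidate = "print(sqrt(2))"  # deliberately broken first attempt
ok = False
for attempt in range(3):
    ok, feedback = run_python(candidate)
    if ok:
        break
    candidate = propose_fix(candidate, feedback)
```

In an actual self-training setup, each (broken code, error, fixed code) triple from this loop would become a training example, which is where the "learning from its own mistakes" comes in.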