There’s a common idea in AGI theory (including work like Solomonoff induction and AIXI) that a truly general learning system should handle this task:
If a pattern is simple, the system should be able to infer the rule behind it with near-perfect accuracy, even when it sees only a tiny amount of data (something like a few hundred bits or less).
By "simple pattern," I mean things with a compact underlying rule: repeating sequences, small closed-form formulas, short programs, signals that can be described in very few bits. Humans can usually spot these quickly. Current ML models often fail unless they get large datasets or the pattern happens to match their built-in inductive biases.
Questions for discussion:
- Is this capability a realistic requirement for AGI?
- How close are current methods (e.g., program-induction approaches, neurosymbolic hybrids, etc.)?
- Are there good benchmarks that test this “small data, simple rule” ability?
- Are there arguments against treating this as a core requirement?
Looking for viewpoints from both theory-focused people and practitioners.