A basic capability AGI should have: solving any simple pattern from very small data

There’s a common idea in AGI theory (including work like Solomonoff induction and AIXI) that a truly general learning system should handle this task:

If a pattern is simple, the system should be able to figure out the rule behind it with (almost) perfect accuracy, even when it only sees a tiny amount of data (something like a few hundred bits or less).

By “simple pattern,” I mean things with an easy underlying rule: repeating sequences, small formulas, short algorithms, signals that can be described very compactly. Humans can usually spot these quickly. Current ML models often fail unless they get large datasets or the pattern happens to match their inductive biases.
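
To make the task concrete, here's a toy sketch of the idea in Python (my own illustration, not from any specific paper): enumerate a small, hand-picked space of candidate rules ordered roughly from simpler to more complex, and return the first one that reproduces the observed sequence. Real program-induction systems search a far richer program space, but the "shortest fitting hypothesis wins" flavor is the same.

```python
# Toy "simple rule from tiny data" induction: try candidate rules in
# order of (rough) simplicity and return the first that fits exactly.
# The candidate set here is hypothetical and hand-picked for illustration.

def induce(seq):
    # Each rule maps an index n (and the observed sequence) to a predicted term.
    candidates = [
        ("constant c",       lambda n, s: s[0]),
        ("arithmetic a+d*n", lambda n, s: s[0] + (s[1] - s[0]) * n),
        ("geometric a*r^n",  lambda n, s: s[0] * (s[1] // s[0]) ** n if s[0] else None),
        ("squares n^2",      lambda n, s: n * n),
        ("powers of two",    lambda n, s: 2 ** n),
    ]
    for name, rule in candidates:
        try:
            # Accept a rule only if it reproduces every observed term.
            if all(rule(i, seq) == x for i, x in enumerate(seq)):
                return name, rule
        except Exception:
            continue  # rule undefined on this input; skip it
    return None, None

name, rule = induce([2, 4, 8, 16])
print(name)                    # geometric a*r^n
print(rule(4, [2, 4, 8, 16]))  # predicted next term: 32
```

Even this crude version "learns" from four data points, because the hypothesis space is tiny and biased toward simple rules; the hard open problem is doing this over a general, open-ended program space without the search blowing up.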

Questions for discussion:

  1. Is this capability a realistic requirement for AGI?
  2. How close are current methods (e.g., program-induction approaches, hybrids of neural nets and symbolic reasoning, etc.)?
  3. Are there good benchmarks that test this “small data, simple rule” ability?
  4. Are there arguments against treating this as a core requirement?

Looking for viewpoints from both theory-focused people and practitioners.

submitted by /u/oaprograms