
Jensen Huang says AI scaling laws are continuing because there are not one but three dimensions along which development occurs: pre-training (like a college degree), post-training ("going deep into a domain"), and test-time compute ("thinking")

submitted by /u/MetaKnowing

The first decentralized training of a 10B model is complete… "If you ever helped with SETI@home, this is similar, only instead of helping to look for aliens, you will be helping to summon one."

submitted by /u/MetaKnowing

Dario Amodei says that although "AGI" is not a good term, because we're on a continuous exponential of improvement, "we're at the start of a 2-year period where we're going to pass successively all of those thresholds" for doing meaningful work

submitted by /u/MetaKnowing