/u/chris24H

Is alignment missing a dataset that no one has built yet?

LLMs are trained on language and text: what humans say. But language alone is incomplete. It misses the nuances that make each human individually unique, the secret sauce of who people actually are rather than what they say. I'm not aware of any training dataset…