What’s with all the people in this sub hating on V-JEPA or the guy behind it?

I mean, I’m excited for it too. I’m ALL for alternative, potentially more efficient routes to getting flexible reasoning to emerge from models,

And transformers are great at replicating the kinds of data they’re trained on, but it feels really brute force with all the chain of thought, prompt engineering, and stretching and warping of pre-trained transformer models,

It would be great if we had a large flexible reasoning dataset (whatever that even means) to train them on, but

Clarification: I love transformers. I do see them as being able to reason if we had a reasoning dataset or enough processing power,

but just like how transformers enabled parallel processing, making training way more efficient and faster than plain RNNs,

it would be great if it turns out there’s a better architecture for flexible reasoning.

submitted by /u/Impossible_Belt_7757