Hacker News

Oras · today at 11:42 AM · 7 replies

> But this is not an applied AI company.

There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.

It could be a management issue, though, and I sincerely hope we will see more competition, but from what I quoted above, it does not seem like it.

Understanding the world through videos (mentioned in the article) is just what video models have already done, and they are getting pretty good (see Seedance, Kling, Sora, etc.). So I'm not quite sure how what he proposed would work.


Replies

torginus · today at 2:03 PM

Most folks get paid a lot more in a corporate job than they would tinkering at home - by 'follow the money' logic, it would make sense that they produce their most inspired work as 9-5 full stack engineers.

But passion and the freedom to explore are often more important than resources.

stein1946 · today at 1:04 PM

> There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.

That's true for 99% of scientists, but dismissing their opinion because they haven't done world-shattering, groundbreaking research is probably not the way to go.

> I sincerely wish we will see more competition

I really hope we don't; science isn't a market.

> Understanding world through videos

The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.

nashadelic · today at 3:21 PM

Your take is brutal but spot on.

boccaff · today at 12:06 PM

Llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most of the fine-tuning ecosystem evolved from work on top of Llama models.

YetAnotherNick · today at 2:16 PM

> we didn't see anything.

Is this a troll? Even if we ignore Llama, Meta invented and released a great deal of foundational research and open-source code. I would say the computer vision field would be years behind if Meta hadn't published core research like DETR or MAE.

_giorgio_ · today at 11:57 AM

I can’t reconcile this dichotomy: most of the landmark deep learning papers were developed with what, by today’s standards, were almost ridiculously small training budgets — from Transformers to dropout, and so on.

So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D

the_real_cher · today at 11:44 AM

He was suffocated by the corporate aspect of Meta, I suspect.