Also stuff like this:
>That’s not exotic. That’s just model parallelism with extra suffering.
>That’s not product magic. That’s a checkbox.
What really triggers my internal AI slop detector is this:
>Their renders. Their prototype shots. Their exploded views. Their spec sheet.
>Nobody asked what silicon was inside. Nobody asked how 120B on LPDDR5X was supposed to work. Nobody spent
>No cloud. No GPU. No subscriptions.
>wrong class of chip, wrong power envelope, wrong everything
>The visual geometry matches. The licensing model matches. The China-based semiconductor ecosystem match
>Real researchers. Real papers. Real contributions.
LLMs love to overuse this pattern.
This also smells of an autoregressive model trying to argue that TiinyAI simply forked another repo and claimed it as its own invention, before realizing mid-paragraph that it's by the same people:
>So no, TiinyAI did not “launch” PowerInfer. SJTU researchers did.
>TiinyAI’s GitHub repo is a fork of the original PowerInfer repository. At least one of the original academic authors appears tied to the code history. So there is clearly some real overlap between the research world and the product world.