It's basically better than LoRA in every respect, and it could even be used to speed up inference. I wonder whether the big models aren't using it already... If not, we'll see a blow-up in capabilities very, very soon. What they've shown is that you can find the subset of parameters responsible for transferring a capability to new tasks. Does it apply to completely novel tasks? No, that would be magic. Tasks that need new features or representations break the method, but if the task fits within the same domain, the answer is "YES".
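To make that claim concrete, here's a rough sketch of what "only a small subset of weights carries the new task" could look like in practice: score every weight by its gradient magnitude on the new task, then fine-tune only the top-k. The toy model, the sizes, and the top-gradient selection rule are my assumptions for illustration, not the paper's actual procedure.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(512, 10)                           # toy stand-in for a pretrained model
x, y = torch.randn(64, 512), torch.randint(0, 10, (64,))   # toy "new task" data

# Score every weight with one backward pass on the new task.
torch.nn.functional.cross_entropy(model(x), y).backward()

k = 500                                     # tune only ~1% of the ~5K weights
scores = model.weight.grad.abs().flatten()
mask = torch.zeros_like(scores)
mask[scores.topk(k).indices] = 1.0
mask = mask.view_as(model.weight)

# Fine-tune, but zero out gradients everywhere outside the chosen subset.
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    torch.nn.functional.cross_entropy(model(x), y).backward()
    model.weight.grad *= mask               # only the selected subset moves
    model.bias.grad.zero_()                 # keep the rest frozen
    opt.step()

print("trainable weights:", int(mask.sum()), "of", model.weight.numel())
```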
Here's a very cool analogy from GPT 5.1 which hits the nail on the head, explaining the role of the subspace in learning new tasks in terms of 3D graphics.
Think of 3D character animation rigs:
• The mesh has millions of vertices (11M weights).
• Expressions are controlled via:
  • “smile”
  • “frown”
  • “blink”
Each expression is just:
mesh += α_i * basis_expression_i
Hundreds of coefficients modify millions of coordinates.
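A minimal numeric sketch of that rig picture, with toy sizes (the real "mesh" would be the model's millions of weights, and the basis would come from the method rather than from random numbers):

```python
import numpy as np

n_params = 100_000   # toy stand-in for millions of weights / mesh vertices
n_basis = 200        # toy stand-in for the hundreds of "expressions"

rng = np.random.default_rng(0)
weights = rng.standard_normal(n_params).astype(np.float32)           # frozen base "mesh"
basis = rng.standard_normal((n_basis, n_params)).astype(np.float32)  # fixed directions: "smile", "frown", "blink", ...
alpha = np.zeros(n_basis, dtype=np.float32)                          # the only numbers tuned per task

# Learning a new task = choosing a few hundred coefficients:
alpha[:3] = [0.8, -0.2, 0.1]
adapted = weights + alpha @ basis    # mesh += Σ_i α_i * basis_expression_i

print(adapted.shape)  # (100000,) -- the whole vector moves, steered by 200 scalars
```

The point of the analogy: per-task storage and optimization scale with the number of basis directions, not with the number of parameters; everything else stays frozen.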
It does seem to be working for novel tasks.

> Does it apply to completely novel tasks? No, that would be magic.
Are there novel tasks? Within the limits of physics, tasks are finite, and most of them are pointless. One can certainly entertain tasks that transcend physics, but that isn't necessary if one merely wants an immortal and indomitable electronic god.