Hacker News

jbentley1 · 01/22/2025

Tiny language models can do a lot if they are finetuned for a specific task, but IMO a few things are holding them back:

1. Getting the speed gains is hard unless you are able to pay for dedicated GPUs. Some services offer serverless LoRA, but you don't get the same performance, largely because the adapter is applied on top of a shared base model at request time instead of being merged into dedicated weights (a rough sketch of the merged setup is below the list).

2. Lack of talent to actually do the finetuning. Regular engineers can handle a lot of LLM implementation work, but actually running training is a scarcer skillset, and most small-to-medium orgs don't have people who can do it well.

3. Distribution. Sharing finetunes is hard. HuggingFace exists, but discoverability is an issue: it is flooded with undocumented models, and it isn't easy to find a good one for your task. Plus, a good finetune usually needs its prompt and possibly parsing code to work as intended, and that bundling hasn't been worked out well (a sketch of one possible bundle also follows below).
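
To make point 1 concrete, here is a minimal sketch (my own, not anything a provider documents) of the dedicated-GPU case: you merge the LoRA adapter into the base weights once with PEFT, so inference runs at plain base-model speed, whereas serverless LoRA services typically keep a shared base model and apply per-tenant adapters per request. The model and adapter IDs are placeholders.

    # sketch: merge a LoRA finetune into its base model for dedicated serving
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "Qwen/Qwen2.5-0.5B-Instruct"   # placeholder small base model
    adapter_id = "your-org/your-task-lora"   # placeholder finetuned adapter

    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()  # fold adapter into base weights

    tok = AutoTokenizer.from_pretrained(base_id)
    inputs = tok("Classify this ticket: refund not received", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tok.decode(out[0], skip_special_tokens=True))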

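And for point 3, a hypothetical sketch of what "bundling" could mean: ship the model ID together with the exact prompt template and output parser it was trained against, so whoever downloads the finetune doesn't have to reverse-engineer them. All names here are invented, not an existing HuggingFace feature.

    # sketch: a finetune "bundle" = model + prompt template + output parser
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class FinetuneBundle:
        model_id: str                    # e.g. a HuggingFace repo id
        prompt_template: str             # prompt format used during training
        parse: Callable[[str], dict]     # turns raw generations into structured output

    def parse_label(text: str) -> dict:
        # expects the model to answer with "label: <category>"
        return {"label": text.split("label:", 1)[1].strip()} if "label:" in text else {"label": None}

    ticket_router = FinetuneBundle(
        model_id="your-org/ticket-router-lora",                        # placeholder
        prompt_template="Ticket: {ticket}\nAnswer with 'label: <category>'.",
        parse=parse_label,
    )

    prompt = ticket_router.prompt_template.format(ticket="refund not received")
    # run `prompt` through the model loaded from ticket_router.model_id,
    # then ticket_router.parse(generation) yields structured output.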

Replies

grisaitis · 01/22/2025

When you say fine-tuning skills or talent are scarce, do you have specific skills in mind? Perhaps engineering for training models (e.g. making model parallelism work)? Or the more ML-type skills of designing experiments, choosing which methods to use, figuring out training datasets, hyperparameter tuning/evaluation, etc.?
