I mean anything in the 0.5B-3B range that's available on Ollama (for example). Have you built any cool tooling that uses these models as part of your workflow?
I've been using Llama models to identify cookie notices on websites, for the purpose of adding filter rules to block them in EasyList Cookie. Otherwise, this is normally done by, essentially, manual volunteer reporting.
Most cookie notices turn out to be pretty similar, HTML/CSS-wise, and then you can grab their `innerText` and filter out false positives with a small LLM. I've found the 3B models have decent performance on this task, given enough prompt engineering. They do fall apart slightly around edge cases like less common languages or combined cookie notice + age restriction banners. 7B has a negligible false-positive rate without much extra cost. Either way these things are really fast and it's amazing to see reports streaming in during a crawl with no human effort required.
Code is at https://github.com/brave/cookiemonster. You can see the prompt at https://github.com/brave/cookiemonster/blob/main/src/text-cl....
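For anyone curious how small that classification step can be, here's a minimal sketch of the idea (not the actual cookiemonster code; the model name and prompt are placeholders), calling the local Ollama API on an element's `innerText`:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
PROMPT = (
    "You are reviewing text scraped from a website overlay.\n"
    "Answer YES if it is a cookie consent notice, otherwise answer NO.\n\n"
    "Text: {text}\nAnswer:"
)

def is_cookie_notice(inner_text: str, model: str = "llama3.2:3b") -> bool:
    """Classify a candidate element's innerText with a small local model."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": PROMPT.format(text=inner_text[:2000]), "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip().upper().startswith("YES")
```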
I have ollama responding to SMS spam texts. I told it to feign interest in whatever the spammer is selling/buying. Each number gets its own persona, like a millennial gymbro or 19th century British gentleman.
I have a mini PC with an n100 CPU connected to a small 7" monitor sitting on my desk, under the regular PC. I have llama 3b (q4) generating endless stories in different genres and styles. It's fun to glance over at it and read whatever it's in the middle of making. I gave llama.cpp one CPU core and it generates slow enough to just read at a normal pace, and the CPU fans don't go nuts. Totally not productive or really useful but I like it.
I have a small fish script I use to prompt a model to generate three commit messages based off of my current git diff. I'm still playing around with which model comes up with the best messages, but usually I only use it to give me some ideas when my brain isn't working. All the models accomplish that task pretty well.
Here's the script: https://github.com/nozzlegear/dotfiles/blob/master/fish-func...
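The linked script is fish; for the general shape, a rough Python sketch of the same idea (the model name here is just an assumption) looks something like:

```python
import subprocess
import requests

# Grab the current diff and ask a local model for three candidate messages.
diff = subprocess.run(["git", "diff"], capture_output=True, text=True).stdout
prompt = (
    "Suggest three short, conventional-commit style messages for this diff, "
    "one per line, with no extra commentary:\n\n" + diff
)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
    timeout=120,
)
print(resp.json()["response"].strip())
```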
And for this change [1] it generated these messages:
1. `fix: change from printf to echo for handling git diff input`
2. `refactor: update codeblock syntax in commit message generator`
3. `style: improve readability by adjusting prompt formatting`
[1] https://github.com/nozzlegear/dotfiles/commit/0db65054524d0d...
We fine-tuned a Gemma 2B to identify urgent messages sent by new and expecting mothers on a government-run maternal health helpline.
https://idinsight.github.io/tech-blog/blog/enhancing_materna...
I have a tiny device that listens to conversations between two people or more and constantly tries to declare a "winner"
Micro Wake Word is a library and a set of on-device models for ESPs to wake on a spoken wake word. https://github.com/kahrendt/microWakeWord
Recently deployed in Home Assistant's fully local Alexa replacement. https://www.home-assistant.io/voice_control/about_wake_word/
https://gophersignal.com – I built GopherSignal!
It's a lightweight tool that summarizes Hacker News articles. For example, here’s what it outputs for this very post, "Ask HN: Is anyone doing anything cool with tiny language models?":
"A user inquires about the use of tiny language models for interesting applications, such as spam filtering and cookie notice detection. A developer shares their experience with using Ollama to respond to SMS spam with unique personas, like a millennial gymbro or a 19th-century British gentleman. Another user highlights the effectiveness of 3B and 7B language models for cookie notice detection, with decent performance achieved through prompt engineering."
I originally used LLaMA 3:Instruct for the backend, which performs much better, but recently started experimenting with the smaller LLaMA 3.2:1B model.
It’s been cool seeing other people’s ideas too. Curious—does anyone have suggestions for small models that are good for summaries?
Feel free to check it out or make changes: https://github.com/k-zehnder/gophersignal
"Comedy Writing With Small Generative Models" by Jamie Brew (Strange Loop 2023)
https://m.youtube.com/watch?v=M2o4f_2L0No
Spend the 45 minutes watching this talk. It is a delight. If you are unsure, wait until the speaker picks up the guitar.
Microsoft published a paper on their FLAME model (60M parameters) for Excel formula repair/completion which outperformed much larger models (>100B parameters).
We (avy.ai) are using models in that range to analyze computer activity on-device, in a privacy sensitive way, to help knowledge workers as they go about their day.
The local models do things ranging from cleaning up OCR, to summarizing meetings, to estimating the user's current goals and activity, to predicting search terms, to predicting queries and actions that, if run, would help the user accomplish their current task.
The capabilities of these tiny models have really surged recently. Even small vision models are becoming useful, especially if fine tuned.
I've made a tiny ~1M-parameter model that can generate random Magic: The Gathering cards, largely based on Karpathy's nanoGPT with a few more features added on top.
I don't have a pre-trained model to share, but you can train one yourself from the git repo, assuming you have an Apple Silicon Mac.
I simply use it to de-anonymize code that I run through Claude.
Maybe I should write a plugin for it (open source):
1. Put all your work-related questions into the plugin; a local LLM turns each one into an abstracted question that you preview and send.
2. Then you get the answer back with all your data restored.
E.g. `df["cookie_company_name"]` becomes `df["a"]` and back.
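A minimal sketch of that substitution step (the function and variable names are made up, not the plugin's actual code):

```python
import re
import string

def anonymize(code: str, secrets: list[str]) -> tuple[str, dict[str, str]]:
    """Swap sensitive identifiers for one-letter placeholders and keep the mapping."""
    mapping = {}
    for i, name in enumerate(secrets):
        placeholder = string.ascii_lowercase[i]  # sketch: assumes fewer than 26 names
        mapping[placeholder] = name
        code = re.sub(rf"\b{re.escape(name)}\b", placeholder, code)
    return code, mapping

def deanonymize(text: str, mapping: dict[str, str]) -> str:
    """Restore the original identifiers in the answer that comes back."""
    for placeholder, name in mapping.items():
        text = re.sub(rf"\b{placeholder}\b", name, text)
    return text

masked, mapping = anonymize('df["cookie_company_name"].sum()', ["cookie_company_name"])
# masked == 'df["a"].sum()' -> send to the remote LLM, then deanonymize() its reply
```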
Tiny language models can do a lot if they are fine tuned for a specific task, but IMO a few things are holding them back:
1. Getting the speed gains is hard unless you are able to pay for dedicated GPUs. Some services offer LoRA as serverless but you don't get the same performance for various technical reasons.
2. Lack of talent to actually do the finetuning. Regular engineers can do a lot of LLM implementation, but when it comes to actually performing training it is a scarcer skillset. Most small to medium orgs don't have people who can do it well.
3. Distribution. Sharing finetunes is hard. HuggingFace exists, but discoverability is an issue. It is flooded with random models with no documentation, and it isn't easy to find a good one for your task. Plus, a good finetune also needs its prompt, and possibly parsing code, to work as intended, and that bundling hasn't been worked out well.
I'm working on a plugin[1] that runs local LLMs from the Godot game engine. The optimal model sizes seem to be 2B-7B ish, since those will run fast enough on most computers. We recommend that people try it out with Gemma 2 2B (but it will work with any model that works with llama.cpp)
At those sizes, it's great for generating non-repetitive flavortext for NPCs. No more "I took an arrow to the knee".
Models at around the 2B size aren't really capable enough to act as a competent adversary, but they are great for something like bargaining with a shopkeeper, or some other role where natural language lets players do a bit more immersive roleplay.
I have it running on a Raspberry Pi 5 for offline chat and RAG. I wrote this open-source code for it: https://github.com/persys-ai/persys
It also does RAG on the apps there, like the music player, contacts app and to-do app. I can ask it to recommend similar artists based on my music library, for example, or ask it to quiz me on my PDF papers.
Not sure it qualifies, but I've started building an Android app that wraps bergamot[0] (the firefox translation models) to have on-device translation without reliance on google.
Bergamot is already used inside firefox, but I wanted translation also outside the browser.
[0]: bergamot https://github.com/browsermt/bergamot-translator
I used local LLMs via Ollama for generating H1's / marketing copy.
1. Create several different personas
2. Generate a ton of variation using a high temperature
3. Compare the variations head-to-head using the LLM to get a win / loss ratio
The best ones can be quite good.
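In case it helps anyone reproduce the idea, here's a rough sketch of that generate-then-judge loop against a local Ollama server (personas, model name and prompts are all made up):

```python
import itertools
import requests

API = "http://localhost:11434/api/generate"

def ask(prompt: str, temperature: float, model: str = "llama3.1:8b") -> str:
    r = requests.post(API, json={
        "model": model, "prompt": prompt, "stream": False,
        "options": {"temperature": temperature},
    }, timeout=120)
    return r.json()["response"].strip()

personas = ["a blunt CFO", "an excited indie hacker", "a skeptical engineer"]
brief = "Write a one-line H1 for a tool that summarizes Hacker News articles."

# Steps 1-2: lots of variation, one batch per persona, at a high temperature.
candidates = [ask(f"You are {p}. {brief}", temperature=1.2) for p in personas for _ in range(5)]

# Step 3: head-to-head comparisons, tallied into a win count per headline.
wins = {c: 0 for c in candidates}
for a, b in itertools.combinations(candidates, 2):
    verdict = ask(f"Which headline is better for a landing page?\nA: {a}\nB: {b}\nAnswer A or B only.", temperature=0.0)
    wins[a if verdict.upper().startswith("A") else b] += 1

for text, score in sorted(wins.items(), key=lambda kv: -kv[1])[:3]:
    print(score, text)
```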
JetBrains' local single-line autocomplete model is 0.1B (w/ 1536-token context, ~170 lines of code): https://blog.jetbrains.com/blog/2024/04/04/full-line-code-co...
For context, GPT-2-small is 0.124B params (w/ 1024-token context).
I made a shell alias to translate things from French to English, does that count?
function trans
    llm "Translate \"$argv\" from French to English please"
end
Llama 3.2:3b is a fine French-English dictionary IMHO.
I used a small (3B, I think) model plus tesseract.js to perform OCR on an image of a nutritional facts table and output structured JSON.
I'm making an agent that takes decompiled code and tries to understand the methods and replace variables and function names one at a time.
We're using small language models to detect prompt injection. Not too cool, but at least we can publish some AI-related stuff on the internet without a huge bill.
I've created Austen [0] to generate relationships between book characters using Mermaid.
I am using smollm2 to extract useful information (like remote, language, role, location, etc.) from the monthly "Who is hiring?" thread and create an RSS feed with specific filters. Still not ready for a Show HN, but working.
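A small sketch of how that extraction might look with Ollama's JSON mode (the field list and model tag are assumptions on my part):

```python
import json
import requests

FIELDS = ["company", "role", "location", "remote", "languages"]

def extract(post_text: str, model: str = "smollm2:1.7b") -> dict:
    """Pull structured fields out of a 'Who is hiring?' post with a small local model."""
    prompt = (
        "Extract these fields from the job post below and reply with JSON only, "
        f"using keys {FIELDS} and null for anything unknown:\n\n{post_text}"
    )
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "format": "json", "stream": False},
        timeout=120,
    )
    return json.loads(r.json()["response"])
```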
My husband and I made a stock market analysis thing that gets it right about 55% of the time, so better than a coin toss. The problem is that it keeps making unethical suggestions, so we're not using it to trade stocks. Does anyone have any idea what we can do with that?
I am doing nothing, but I was wondering if it would make sense to combine a small LLM and SQLite to parse human date-time expressions. For example, given an input like "last day of this month", the LLM would generate the following query: `SELECT date('now','start of month','+1 month','-1 day');`
It is probably over-engineering, considering that pretty good libraries already do this in different languages, but it would be fun. I did some tests with ChatGPT, and it worked sometimes. It would probably work with some fine-tuning, but I don't have the experience or the time right now.
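For what it's worth, a toy sketch of that combination (model name assumed; you'd obviously want to validate the SQL before executing anything the model produces):

```python
import sqlite3
import requests

def parse_human_date(expr: str, model: str = "llama3.2:3b") -> str:
    """Ask a small model for a SQLite date() query, then let SQLite evaluate it."""
    prompt = (
        "Convert this human date expression into a single SQLite SELECT using date(). "
        f"Return only the SQL, no explanation: {expr!r}"
    )
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=60,
    )
    sql = r.json()["response"].strip().rstrip(";")
    # e.g. "SELECT date('now','start of month','+1 month','-1 day')"
    return sqlite3.connect(":memory:").execute(sql).fetchone()[0]

print(parse_human_date("last day of this month"))
```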
I built auto-summarization and grouping in an experimental branch of my hobby-retrospective tool: https://github.com/Sebazzz/Return/tree/experiment/ai-integra...
I’m now just wondering if there is any way to build tests on the input+output of the LLM :D
Before Ollama and the others could do structured JSON output, I hacked together my own loop to correct the output. I used that for dummy API endpoints that pretend to be online services but are available locally, to pair with UI mockups. For my first test I made a recipe generator and then tried to see what it would take to "jailbreak" it. I also used uncensored models to let it generate all kinds of funny content.
I think the content you can get from the SLMs for fake data is a lot more engaging than say the ruby ffaker library.
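The correction loop described above is roughly this shape (a reconstruction from the description, not the original code):

```python
import json
import requests

def json_with_retries(prompt: str, model: str = "llama3.2:3b", attempts: int = 3) -> dict:
    """Re-ask until the model produces parseable JSON, feeding the parse error back in."""
    ask = prompt
    for _ in range(attempts):
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": ask, "stream": False},
            timeout=120,
        )
        text = r.json()["response"]
        try:
            return json.loads(text)
        except json.JSONDecodeError as err:
            # Tell the model what went wrong so the next attempt can fix it.
            ask = f"{prompt}\n\nYour previous reply was not valid JSON ({err}). Reply with JSON only."
    raise ValueError("model never produced valid JSON")
```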
I'm playing with the idea of identifying logical fallacies stated by live broadcasters.
No, but I use Llama 3.2 1B and Qwen 2.5 1.5B as bash one-liner generators, always running in a console.
Not sure if it is cool, but purely out of spite I'm building an LLM summarizer app to compete with an AI startup that I interviewed with. The founders were super egotistical and initially thought I was not worthy of an interview.
We are building a framework to run these tiny language models on the web so anyone can access private LLMs in their browser: https://github.com/sauravpanda/BrowserAI.
With just three lines of code, you can run small LLMs inside the browser. We feel this unlocks a ton of potential for businesses, since they can introduce AI without fear of cost and can personalize the experience using AI.
Would love your thoughts and what we can do more or better!
I think I am. At least I think I'm building things that will enable much smaller models: https://github.com/jmward01/lmplay/wiki/Sacrificial-Training
I built https://ffprompt.ryanseddon.com using the chrome ai (Gemini nano). Allows you to do ffmpeg operations on videos using natural language all client side.
I built a platform to monitor LLMs that are given complete freedom in the form of a Docker container bash REPL. Currently the models have been offline for some time because I'm upgrading from a single Dell to a TinyMiniMicro Proxmox cluster to run multiple small LLMs locally.
The bots don't do a lot of interesting stuff though, I plan to add the following functionalities:
- Instead of just resetting every 100 messages, I'm going to provide them with a rolling window of context.
- Instead of only allowing BASH commands, they will be able to also respond with reasoning messages, hopefully to make them a bit smarter.
- Give them a better docker container with more CLI tools such as curl and a working package manager.
If you're interested in seeing the developments, you can subscribe on the platform!
Although there are better ways to test, I used a 3B model to speed up replies from my local AI server when testing out an application I was developing. Yes I could have mocked up HTTP replies etc., but in this case the small model let me just plug in and go.
I am, in a way: using EHR/EMR data for fine-tuning so agents can query each other for medical records in a HIPAA-compliant manner.
I have this idea that a tiny LM would be good at canonicalizing entered real estate addresses. We currently buy a data set and software from Experian, but it feels like something an LM might be very good at. There are lots of weirdnesses in address entry that regexes have a hard time with. We know the bulk of addresses a user might be entering, unless it's a totally new property, so we should be able to train it on that.
I programmed my own version of Tic Tac Toe in Godot, using a Llama 3B as the AI opponent. Not for work flow, but figuring out how to beat it is entertaining during moments of boredom.
Using llama 3.2 as an interface to a robot. If you can get the latency down, it works wonderfully
I use a small model to rename my Linux ISOs. I gave it a custom prompt with examples of how I want the output filenames to be structured and then just feed it files to rename. The output only works 90ish percent of the time, so I wrote a little CLI to iterate through the files and accept / retry / edit the changes the LLM outputs.
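A stripped-down sketch of that loop (prompt, model name, and path are placeholders, and only the accept/skip part of the accept / retry / edit CLI is shown):

```python
import pathlib
import requests

PROMPT = (
    "Rename this video file using the pattern 'Title - S01E01 (1080p).mkv'. "
    "Reply with the new filename only.\nFile: {name}"
)

def suggest_name(path: pathlib.Path, model: str = "llama3.2:3b") -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT.format(name=path.name), "stream": False},
        timeout=60,
    )
    return r.json()["response"].strip()

for f in sorted(pathlib.Path("~/isos").expanduser().glob("*.mkv")):
    new = suggest_name(f)
    # The model is only right ~90% of the time, so confirm each rename by hand.
    if input(f"{f.name} -> {new}  [y/N] ").strip().lower() == "y":
        f.rename(f.with_name(new))
```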
I don’t know if this counts as tiny but I use llama 3B in prod for summarization (kinda).
Its effective context window is pretty small but I have a much more robust statistical model that handles thematic extraction. The llm is essentially just rewriting ~5-10 sentences into a single paragraph.
I’ve found the less you need the language model to actually do, the less the size/quality of the model actually matters.
I am moderating a playlists manager to restrict the playlists to a range of genres, so it classifies song requests as accepted/rejected.
Kinda? All local, so very much personal, non-business use. I made Ollama talk in a specific persona style, with the idea of speaking like Spider Jerusalem when I feel like retaining some level of privacy by avoiding phrases I would normally use. Uncensored Llama just rewrites my post in a specific persona's 'voice'. Works amusingly well for that purpose.
I've been working on a self-hosted, low-latency service for small LLM's. It's basically exactly what I would have wanted when I started my previous startup. The goal is for real time applications, where even the network time to access a fast LLM like groq is an issue.
I haven't benchmarked it yet but I'd be happy to hear opinions on it. It's written in C++ (specifically not python), and is designed to be a self-contained microservice based around llama.cpp.
Has anyone ever tried building automatic email workflow autoresponder agents?
Let's say I want some outcome: it would autonomously handle the process, prompt me and the other side for additional requirements if necessary, and then, based on that, handle the process and reach the outcome?
When I feel like casually listening to something, instead of Netflix/Hulu/whatever, I'll run a ~3B model (Qwen 2.5 or Llama 3.2) and generate an audio stream of water-cooler office gossip. (When it is up, it runs here: https://water-cooler.jothflee.com).
Some of the situations get pretty wild, for the office :)
Apple's on-device models are around 3B if I'm not mistaken, and they developed some nice tech around them that they published: they have just one model, but switchable fine-tunings of that model so it can perform different functions depending on context.
I built an Excel Add-In that allows my girlfriend to quickly filter 7000 paper titles and abstracts for a review paper that she is writing [1]. It uses Gemma 2 2b which is a wonderful little model that can run on her laptop CPU. It works surprisingly well for this kind of binary classification task.
The nice thing is that she can copy/paste the titles and abstracts into two columns and write e.g. `=PROMPT(A1:B1, "If the paper studies diabetic neuropathy and stroke, return 'Include', otherwise return 'Exclude'")` and then drag the formula down across 7000 rows to bulk process the data on her own, because it's just Excel. There is a GIF in the README on the GitHub repo that shows it.
[1] https://github.com/getcellm/cellm