Doesn't matter what domain and how big or small.
I learned that my favorite part of the Apennines is famous for its lack of light pollution and is thus an astro-tourism destination. I'd never paid attention to that aspect.
I also learned that on Aug 12th this year a total eclipse of the sun can be observed from certain parts of Spain.
That lodash-es doesn't ship an ESM build of lodash/fp, which means there is no straightforward way of using it with Vite after version 5. God help me.
I don’t even want to use it, I just want to get legacy code building on a modern version of Vite without rewriting a couple thousand lines of code. Aaaargh
Currently learning 3D "solid" design with Blender for my 3D printer. I am aware Blender is not the fit a CAD app would be, but it 'works' if you are careful about what you do (I am not). This morning I had to clean the poor 3D printer after a spaghetti-gone-wrong accident, because the hubris-laden design I submitted last night was not as clever as I naively thought. The advantage of using Blender is that, as a byproduct, I will eventually have learned Blender.
I'm exploring writing a point and click adventure, and I've found out that they're basically just hierarchical state machines with a pretty UI. This is useful because it simplifies a lot of things.
The downside is that now I'm wondering if I could write one in SQL.
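For what it's worth, here's a tiny sketch of the state-machine framing in Python (the rooms, hotspots, and items are all invented for illustration):

    # Minimal hierarchical-state-machine sketch for a point-and-click adventure.
    # Rooms are top-level states; each room has sub-states (e.g. exploring vs. dialog).
    class Game:
        def __init__(self):
            self.room = "kitchen"        # top-level state
            self.mode = "exploring"      # sub-state within the room
            self.inventory = set()

        def click(self, hotspot):
            # Transition table keyed by (room, mode, hotspot) -> action.
            transitions = {
                ("kitchen", "exploring", "door"):    lambda: self.enter("cellar"),
                ("kitchen", "exploring", "drawer"):  lambda: self.inventory.add("rusty_key"),
                ("cellar",  "exploring", "old_man"): lambda: self.set_mode("dialog"),
                ("cellar",  "dialog",    "goodbye"): lambda: self.set_mode("exploring"),
            }
            action = transitions.get((self.room, self.mode, hotspot))
            if action:
                action()

        def enter(self, room):
            self.room, self.mode = room, "exploring"

        def set_mode(self, mode):
            self.mode = mode

    game = Game()
    game.click("drawer")   # picks up the rusty_key
    game.click("door")     # moves to the cellar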
I've explored various VPN settings/configurations/protocols to see what works best with my ISP that tends to throttle traffic to my work VPN.
Some things work OK, but still not as well as commercial VPN providers.
I've been working on a little Raspberry Pi Pico project with Kafka. As someone who used to have an Arduino Uno, I'm genuinely shocked at how small microcontrollers have gotten and at their massive capabilities.
I learned that the large tree near where I live in London that has visibly grown in the last year is a Coast Redwood (a.k.a. California Redwood, a.k.a. Sequoia) and that there are half a million of them in the UK.
Loving this post.
Why Bluetooth is detecting peripherals but won't connect to them on the new PC I built. Getting an antenna today but only 50% sure this will fix the issue.
I am cleaning up some pointer arithmetic for multi-dimensional C-style arrays. I managed to replace the code with a std::inner_product minus a std::accumulate (to accommodate the fact that the upper array bound is exclusive, i.e. one-past-the-end).
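For anyone curious, the offset being computed is just a dot product of the index tuple with the row-major strides; a tiny Python sketch of the same arithmetic (the shape and index values are invented):

    # Flat offset into a multi-dimensional C-style (row-major) array:
    # offset = sum(index[d] * stride[d]), where stride[d] is the product
    # of the dimensions to the right of d.
    dims = (4, 3, 5)             # example shape
    index = (2, 1, 3)            # example multi-dimensional index

    strides = []
    acc = 1
    for d in reversed(dims):
        strides.append(acc)
        acc *= d
    strides.reverse()            # row-major strides: (15, 5, 1)

    offset = sum(i * s for i, s in zip(index, strides))
    print(offset)                # 2*15 + 1*5 + 3*1 = 38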
Syncthing-Fork, the primary Android app for Syncthing, isn't exactly reliable anymore. TL;DR: a completely unknown new dev has taken over maintenance and doesn't communicate well; it probably shouldn't be trusted without checking whether someone has recently audited the source code.
https://forum.syncthing.net/t/does-anyone-know-why-syncthing...
Started learning about Ollama with the goal of setting up some kind of self-hosted LLM.
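As a starting point, a minimal sketch of calling a local Ollama server over its REST API (this assumes Ollama is running on the default port and that the model named below has already been pulled):

    import requests

    # Ollama listens on localhost:11434 by default; /api/generate does a one-shot completion.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2",    # any model you've pulled with `ollama pull`
            "prompt": "Explain what a self-hosted LLM is in one sentence.",
            "stream": False,        # return one JSON object instead of a stream
        },
        timeout=120,
    )
    print(resp.json()["response"])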
As a student in embedded and systems programming who never did much in high-level languages, I found out how fun HTML+JS can be.
Learned to spin a dopa star from Cyrus, who traveled to spend some time with his parents, who work at a college near my village.
He gifted his dopa star to us.
Messed with Remix for the first time after seeing Shopify's Hydrogen sites. Previously only used Next.js.
Plumbing together stuff so that files from a service that can only push to an SFTP server end up delivered in a Dropbox folder.
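A rough sketch of that glue with paramiko and the Dropbox SDK (host, credentials, and paths are placeholders; no error handling or duplicate tracking):

    import paramiko
    import dropbox

    # Poll the SFTP drop directory and forward anything new to a Dropbox folder.
    transport = paramiko.Transport(("sftp.example.com", 22))
    transport.connect(username="ingest", password="secret")
    sftp = paramiko.SFTPClient.from_transport(transport)

    dbx = dropbox.Dropbox("DROPBOX_ACCESS_TOKEN")

    for name in sftp.listdir("/incoming"):
        with sftp.open(f"/incoming/{name}", "rb") as f:
            data = f.read()
        dbx.files_upload(data, f"/deliveries/{name}",
                         mode=dropbox.files.WriteMode.overwrite)
        sftp.remove(f"/incoming/{name}")   # don't re-deliver next time around

    sftp.close()
    transport.close()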
I am exploring various levels of prompting in GitHub Copilot at the org, repo, and personal levels.
Think of it as an exploration manual.
I am working on a layout library (inspired by the concepts behind Clay). There are two modes in which these things operate: immediate mode, where you recreate everything anew for every frame, and retained mode, where you create everything only once, mutate the existing state each frame, and render only the differences.
The HTML DOM uses the retained model, since it would be unimaginable to throw out the entire DOM and rebuild it anew every frame. But React, Vue, and other libraries expose an immediate-mode style of talking to the HTML DOM: they may keep retained state internally, but in the end they only apply mutations derived from their internal diffs.
So the whole web ui is layer upon layer of immediate, retained and hybrid modes all talking to each other. Now imagine how much wasted resources all of this layering implies :)
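A toy Python illustration of the two conventions (the API here is invented and has nothing to do with Clay or the DOM):

    # Immediate mode: caller re-describes the whole UI every frame; the library
    # throws its state away and "draws" everything each time.
    class ImmediateUI:
        def begin_frame(self):
            self.widgets = []                 # discard everything from last frame
        def label(self, text):
            self.widgets.append(text)
        def end_frame(self):
            print("draw all:", self.widgets)  # layout + draw everything

    # Retained mode: caller builds state once, then mutates it; only changed
    # nodes get redrawn.
    class RetainedUI:
        def __init__(self, texts):
            self.labels = list(texts)         # long-lived tree, built once
        def set_label(self, i, text):
            if self.labels[i] != text:
                self.labels[i] = text
                print("redraw only:", i, text)

    im = ImmediateUI()
    for frame_texts in (["a", "b"], ["a", "c"]):
        im.begin_frame()
        for t in frame_texts:
            im.label(t)
        im.end_frame()                        # redraws everything, both frames

    re = RetainedUI(["a", "b"])
    re.set_label(1, "c")                      # redraws just one label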
Another interesting fact: when you have opacity or translucency in a web UI, the browser renders the background elements off-canvas and uses the result as the background for the element with opacity, to avoid various elements seeping through in unexpected ways.
tl;dr: this topic has been thoroughly discussed here: https://www.youtube.com/watch?v=XYFBOIr6n_s
DuckDB can read JSON, so you can dump Kafka messages into a file and filter them with SQL.
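For example, with the DuckDB Python package (the file name and fields are made up; read_json_auto also handles newline-delimited JSON, which is the usual shape of a message dump):

    import duckdb

    # kafka_dump.jsonl: one JSON message per line, dumped from a consumer.
    rows = duckdb.sql("""
        SELECT user_id, count(*) AS n
        FROM read_json_auto('kafka_dump.jsonl')
        WHERE event = 'purchase'
        GROUP BY user_id
        ORDER BY n DESC
        LIMIT 10
    """).fetchall()
    print(rows)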
Today I recorded myself skateboarding and found out that I don't move nearly as much as I think I do! No wonder I'm going so slow!
DBSCAN for grouping locations. Well, still slow...
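A minimal scikit-learn sketch, assuming lat/lon points: the haversine metric wants radians, and eps becomes a radius expressed as a fraction of the Earth's radius (the coordinates below are made up):

    import numpy as np
    from sklearn.cluster import DBSCAN

    points_deg = np.array([
        [52.5200, 13.4050],   # Berlin
        [52.5205, 13.4049],
        [48.8566, 2.3522],    # Paris
        [48.8570, 2.3530],
    ])
    points_rad = np.radians(points_deg)   # haversine expects radians

    earth_radius_km = 6371.0
    eps_km = 1.0              # points within ~1 km end up in the same cluster
    db = DBSCAN(eps=eps_km / earth_radius_km, min_samples=2, metric="haversine")
    labels = db.fit_predict(points_rad)
    print(labels)             # e.g. [0 0 1 1]; -1 would mean noise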
Google Earth Engine's Foundation model via the ITU's seminar! This thing is incredible!
A NixOS Docker container where a non-root user can install packages.
replaced the broken spring on an ABANA style treadle hammer.
breaking it in the first place was more fun
That exporting an SVG from Illustrator and dropping it into Fusion as a sketch is not a very wise approach - the scale is completely off! DXF is the way to go; keep your units in mm across the workflow.
I've not used any CAD tools in a significant way in nearly three decades - all very familiar and yet not at the same time. Form-Z and ArchiCAD were my bread and butter back then; I despised AutoCAD, but here I am back in the Autodesk realm again with Fusion :-(
Adding arbitrary raw UDP connections to my browser-based web tool.
I've been trying to research drone navigation tech based on what we have learned so far from the Russia/Ukraine war. I'm very much not a hardware guy, but software by itself has been feeling kind of useless, or even crueler than usual.
I've read the adversarial attack paper, and I'm currently implementing a captcha based on images that have adversarial masks on them, so that any LLM agent with a vision model will classify them wrong.
The idea is to use something like a slider that shows different images, combined with a memory task like "find the matching pair of images", and then maybe offer a text input field where the user has to type the image numbers (1, 2, 3, or similar) to pass the captcha.
The TL;DR is that I'm abusing the famous panda-classified-as-a-gibbon image as a technique to build a bot captcha.
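The panda/gibbon image comes from the fast gradient sign method (FGSM); here's a generic sketch of that perturbation in PyTorch, where `model`, the input tensor, and `eps` are placeholders for whatever you actually use:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, true_label, eps=0.01):
        """Return an adversarially perturbed copy of `image` (FGSM).

        `model` is any differentiable classifier returning logits;
        `image` is a (1, C, H, W) float tensor in [0, 1].
        """
        image = image.clone().detach().requires_grad_(True)
        logits = model(image)
        loss = F.cross_entropy(logits, torch.tensor([true_label]))
        loss.backward()
        # Step in the direction that increases the loss, clamped to valid range.
        adv = image + eps * image.grad.sign()
        return adv.clamp(0, 1).detach()

Whether a perturbation crafted against one vision model transfers to whichever model a given agent uses is the part that would need testing.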
An intern had trouble with an outdated exercise in Elixir that used an old version of an Erlang dependency, so we got to figure out how to depend on a local copy, dig into the Erlang code, and do a little hack to make it work.
Basically, it relied on a checksum algorithm that was previously in yet another external library but is now in the standard library, so that call needed to be updated, and the variables carrying around the old external library had to be underscored out.
It was a good lesson in traversing error messages and going step by step from an angry VM to a clean success. Not too hairy for a junior to understand when explained, and not so time-consuming as to burn out interest, while still a bit of a challenge.
I'm sick of using React for personal projects, so I've been building a lightweight, functional, and minimalistic reactive web framework. Turns out there are a lot of decisions that go into something like this; it truly is an iceberg of complexity. It creates plenty of enjoyable problems to think about, though.
tweeks.io lets you edit any website.
I don't really understand reverb musically (to the point where I just know "oh, this needs a plate at X length"), so I've been learning future garage, which forces me to use a lot of reverb, hoping it will click. Inspired by Urban Waifs sharing some of their process in their YouTube videos. https://www.youtube.com/@UrbanWaifs/videos
I learned there was a group of people in NYC in the late '70s and early '80s that may have been the biggest serial killers in American history. They killed at least 200 people, sometimes 4–5 people per week.
Where and how Cursor stores all its workspaces, chats, prompts, thoughts, context, completions, tool calls, searches, schemas, and images:
"Cursor Mirror" Anthropic Skill and Python Sister Script:
https://news.ycombinator.com/item?id=46629604
cursor-mirror skill: https://github.com/SimHacker/moollm/tree/main/skills/cursor-...
cursor_mirror.py script: https://github.com/SimHacker/moollm/blob/main/skills/cursor-...
DATA-SCHEMAS.yml schema map: https://github.com/SimHacker/moollm/blob/main/skills/cursor-...
PR-CURSOR-MIRROR-GENESIS.md pr description: https://github.com/SimHacker/moollm/blob/main/designs/PR-CUR...
cursor-chat-reflection.md session log: https://github.com/SimHacker/moollm/blob/main/examples/adven...
I-BEAM-CHARACTER.yml a spirit familiar embodying Cursor's soul: https://github.com/SimHacker/moollm/blob/main/skills/cursor-...
IMAGE-GALLERY.md analysis of images dropped into Cursor chats, their context, and meaning: https://github.com/SimHacker/moollm/blob/main/skills/cursor-...
>Note: This gallery contains descriptions, analysis, and context — not the actual images. The images live in Cursor's workspace cache. The point: cursor-mirror can find them, the Read tool can see them, and I-Beam can narrate their significance within the conversation where they appeared. Image archaeology!
That my colleagues like me despite me being a bad fit for the role here at my company. I feel good, I really like them here at Chatwoot but I'm not who they need right now. It's my last week here, and it's only been 3.5 months but they feel like family. It's sad that I had to quit but I'll miss these good people.
I found out more ways in which our entire socio-economic system is a scam. I literally learn something new about this every day.
I need my Universal Basic Income now! Help.
Go:

    r, err := fn()

Compiles even if r is already declared in an outer scope: inside a new block, := declares a fresh r that shadows the outer one, so the outer r never gets set. And I get a bug!

I found out that reading 900 wpm and actually comprehending what you are reading is possible and not that difficult at all.
You can now use Windows apps on Linux without emulation: winehq.org
I found out it's easy to write Swift/AppKit apps without the dumpster fire that is Xcode! It turns out it's really easy to do it with good old `make`.
Published an edit today (post is dated in Nov., but I've rewritten it 5x now) on my tutorial on using llama3.2:3b to generate fine-tuning data to train tinyllama1.1b: https://seanneilan.com/posts/fine-tuning-local-llm/ It took a while to figure out that when I made llama3.2 generate JSON, it didn't have enough horsepower to generate training data that was varied enough to successfully fine-tune tinyllama1.1b! Figured that out :) Something you never learn with the bigger models. Every token costs something, even if it's a little bit.
Related to this, I spent three years looking up 1,000 things and writing about them.
https://deanebarker.net/huh/