What are you working on? Any new ideas that you're thinking about?
The Ubuntu DDoS got me thinking: if we had a critical need to respin machines (say, our data center caught fire), we would have been in for a real challenge. We run apt-cacher-ng, but it did nothing for us during the DDoS, and worse: every few weeks or so, ac-ng goes out to lunch and we have to fix it.
So: ac-ng didn't reduce the impact of the DDoS, but it does lead to impact when there is no DDoS. Worst of both worlds.
So I'm working on an apt-cacher that goes to great lengths to keep working when the upstream is down. It checks the repo metadata, keeps a list of your "hot packages", and downloads those before flipping the new metadata to be live, effectively creating a snapshot. During a DDoS it won't let you download a package you've never downloaded before, but packages you do download regularly (machine re-installs, apt updates) it will ensure are available in the repo.
I'm calling it apt-cacher-ultra. It is pretty early days, it'll probably be another week before it's ready for a beta. I'm running it in my dev cluster right now, successfully.
I've been working on my open source integration-platform-as-a-service (iPaaS) auth proxy. It provides an embeddable integration marketplace where users can connect 3rd-party apps, and it provides a proxy endpoint so the host application can send authenticated outbound requests. That way token refresh, audit, etc. stay with this system, leaving the host application/agent/whatever free to just focus on the business logic.
I'm working on a personal/family travel organizer. It started as a tool to let my SO and me plan a trip together. There's been steady progress over the last couple of years, with a focus on privacy and the ability to self-host. Of course, there is a managed version if one doesn't mind me having access to their data.
I am working on a task manager that’s way more informative and resource-efficient than the Windows Task Manager, and that also works on Linux. It also provides an informative dashboard for Docker containers and web servers, with proxy support and a preference for streaming sockets, supporting HTTP and WebSockets over the same ports.
A timelapse platform powered by community photos. The idea is to place a mount and QR code at fixed viewpoints around the neighbourhood. People scan, photograph the view, optionally add their name, and submit. Over time, the platform stitches those shots into a living record of how the place changes with seasons.
Just finished the software side using a boring technology and am about to order the materials for the first few locations. Curious to explore photo alignment once real submissions start coming in. Stitching all slightly different angled photos into a smooth animation seems interesting.
For a long time I wondered how SV startups got such pretty landing pages (here’s a comment I left 2 years back: https://news.ycombinator.com/item?id=37421273). I wanted one for my side projects but couldn’t afford an agency, and the templates online were boring. Creating the page was only half the problem. I also needed somewhere to collect emails for the waitlist.
After AI happened, I built an app (promptfunnels) to scratch my own itch and generate funnels (fancy name for landing pages with a purpose).
Then came the harder part: marketing it. Coming from a tech background, I knew nothing about marketing, so I started reading and came across the $100M Leads book. I realized codifying those principles together with funnels and marketing automation had a real market. My family, friends, and acquaintances became the first customers. A friend joined me as cofounder and we both quit our jobs to do this full time.
As we talked to other startup founders, they kept describing a tangential problem they called GTM. At the core it was the same thing we were solving: marketing for non-marketers. So we pivoted to RevMozi (https://revmozi.com/), which helps non-marketers do both inbound and outbound GTM.
We’re dogfooding the product and coming out of beta next month.
Wish us luck.
Slapping together an image dithering toolkit to help with album cover stylization. Partly making sure I can replicate it down the line... but also finding an aesthetic, non-commercial motivation I thought I'd forgotten at work.
Some finished covers (https://saltwatercowboy.github.io/albedo/pages/en-10-05-26.h...). Next up pixel sorting.
I'm newly mostly-retired as a VFX software developer & CTO. I'm writing about AI, climate change and more in my blog, https://oberbrunner.com, running Long Now Boston (https://longnowboston.org) to promote long-term thinking, and working through my lifetime backlog of "wouldn't it be great if somebody wrote this" ideas using Claude, at https://github.com/garyo.
You should check out my new open source software build tool, https://pcons.org.
I built BookDMV (https://bookdmv.app), which watches DMV offices for open appointments and either alerts you or books one automatically.
Working on https://gigspool.com
- Building a platform where talented people can list the services and skills they're experienced in. Clients can book paid sessions with them directly through the platform, and once a session is booked, they both meet online to discuss, collaborate, or get advice based on expertise.
I’ve been building (launched in Feb) a home phone service for families who want to put off the smartphone for as long as possible, but still give young kids the ability to communicate via landline. https://chatterboxphone.com
A desktop client for Repomix. Repomix is a CLI that summarizes all the code in a repo in one txt or md file so you can feed it to an AI model for analysis. It absolutely gets the job done in its current state, but it is a personal project, so there may be a few rough edges.
https://github.com/KevanMacGee/Repomix-Desktop
It's open source and has no official connection to Repomix. But the developer, yamadashy on Github, knows about it and seemed to like it enough to add it to the Repomix website under the community projects.
I like being able to paste all the code into a browser window and have lengthy discussions with ChatGPT, Gemini, and GLM. Doing so in the browser saves tokens over doing it in Cursor or Codex. I like using the Projects feature in ChatGPT in the browser and Notebooks with Gemini, because that gives the model context and history on whatever I am working on. It was one part scratching my own itch, one part learning about Python and CustomTkinter.
It's made specifically for when you just want to get the code and paste it, no muss or fuss. It doesn't have support for flags (yet?) like the CLI because again it is built for speed. Besides, when I want flags, I like using the CLI instead to get granular. Repomix Desktop is for "just give me the code."
I'm a self taught coder so I'm very open to feedback.
Mainly working on https://localhero.ai, automating i18n translations for product teams. It basically runs as a GitHub Action, translating new strings on PRs to match your brand voice and glossary. We got our first fully self-serve customer a few weeks back (they found us through the docs). Interesting work lately has been improving how the system learns from manual edits: when someone tweaks a translation in the UI, it feeds back into translation memory and influences future translations in a smart way. I also did things like improving our agent skill, so coding agents get glossary/style-guide context automatically and can write source copy that better matches the brand.
Been pushing some new stuff on https://infrabase.ai as well, my AI infrastructure tools directory. Traffic is growing steadily from comparison and alternatives pages. An interesting finding is that blog posts rank better but now get fewer clicks because of AI Overviews, while interactive comparison pages still earn clicks. ChatGPT has also started citing the site more as a source. I'm adding new content and polishing existing parts of it, and added a page focusing on EU-based services at https://infrabase.ai/european.
I’m working on a story time utility (https://bedtimebookhelper.com/).
You build up a library from your physical books by scanning them in or discover OpenLibrary books to read in app. Then as you mark books in your library as read, it starts building a rotation and recommending books you haven’t read recently. I’ve been using this nightly to track my son’s 1000 books before kindergarten for the last couple of months.
Currently, I’m working to get the app out on Google Play and adding multiple story time attendee support.
3 things
- AI-assisted academic progress reports so parents can effortlessly stay on top of kids' middle/high school academics. https://www.gpa.coach
- A family economy app where parents set the rules, kids earn credits for chores and good behavior and kids redeem credits for screen time, money, and other benefits. https://www.kredz.app
- AI first fun mobile media editor your parents could use. https://www.mix.photos/
My own browser game. I created a browser game engine and am building my first-ever game with it. I can’t wait to launch it; I think it’s pretty cool. I’ve been working on it for 6 years!
The tech surrounding the game is awesome: the game and engine are fully deterministic and discrete (not float-based), with bit-packed data structures throughout, powers of 2 everywhere for really fast operations, and logic and rendering fully decoupled.
I wrote a simulator for the game and can simulate 10,000+ games in around 50 seconds on my MacBook M1 Pro. The purpose of the simulations is Monte Carlo tuning of my enemy AI (not an LLM; conventional bots, etc.).
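Monte Carlo tuning of this kind usually boils down to: for each candidate AI parameter, run many seeded games and keep the parameter with the best win rate. A toy sketch under that assumption (the game, the parameter, and `play_game` are all stand-ins, not the commenter's engine):

```python
import random

# Toy Monte Carlo tuner: try candidate enemy-AI parameters, run many
# simulated games per candidate, keep the one with the best win rate.

def play_game(aggression, rng):
    # Stand-in simulator: pretend the enemy wins most often near 0.6.
    p_win = 1.0 - abs(aggression - 0.6)
    return rng.random() < p_win

def tune(candidates, games=2000, seed=42):
    rng = random.Random(seed)  # seeded so tuning runs are reproducible
    def win_rate(a):
        return sum(play_game(a, rng) for _ in range(games)) / games
    return max(candidates, key=win_rate)

best = tune([0.2, 0.4, 0.6, 0.8])
print(best)  # 0.6
```

A deterministic engine makes this especially clean: the same seed replays the same game exactly, so a surprising simulation result can always be reproduced and debugged.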
Working on https://tapitalee.com
- Deploy containerized apps to your own AWS account with minimal config!
- CLI tool with instant console sessions
- Set up SQL/Redis instantly with Heroku-like add-ons.
- For enterprise: Autoscaling, preview apps, audit trail, release approvals.

I'm building a Chrome extension that scans everything you read and highlights text if it maps to a market on Kalshi. On hover, a tooltip pops up allowing you to drop money on it.
Use this to doomscroll nba twitter and sports bet, or if you're feeling more highbrow, peruse the NYT and passively gamble on geopolitical events.
Try it out here: https://chromewebstore.google.com/detail/anywager/eebgbiogbb...
Working on https://kapturekafka.dev, a desktop app for Kafka protocol inspection. Think Wireshark or Fiddler, but native for Kafka.
Useful to debug local Kafka apps against any cluster, intercepts the traffic, decodes the protocol. You see interesting (and weird) things when you look at the protocol. Still early, though already useful for local debugging when you know what you want.
Anydrop.org: a zero-friction, cross-platform alternative to AirDrop. If your device has a browser, you can drop text or files to it; it doesn't matter whether the device is a PC, Mac, Linux box, phone, tablet, or smart TV. There's no installation or login, just load https://anydrop.org on the devices you need. It also supports live realtime notepad sync and a clipboard for easy sharing of text snippets. All shares are end-to-end encrypted.
We are working on DBConvert Streams:
A self-hosted database IDE with built-in migration, CDC, and DuckDB-powered federated SQL.
Mostly trying to remove the annoying gap between "I can inspect this database" and "I can safely move/sync this data somewhere else".
Current focus: resumable large loads and cleaner initial-load-to-CDC handoff for Postgres/MySQL.
Working on trying to give some guidance on solar panels.
Plug-in solar became legal here in the UK.
Still sussing it out, but I've started shipping something.
Finding the pitch direction of the roof is kinda hard.
It uses data from the house to try and get a rating.
A model framework for an in house suite of models.
From dataset harvesting, to training intricacies on CUDA/ROCm, to fun HIP kernels, and full circle to inference testing, building it around consumer hardware (the challenge). I'm using this as a "how it works" deep dive, letting me learn more about the how than endless papers will. It's a MoE, and I'm slowly running a human loop: research, build, correct, research.
I'm working on https://www.certkit.io. It started as a solution to handle TLS certificate automation for my other SaaS products, but we realized other people who run on-prem workloads might get something out of it.
It uses Let's Encrypt by default. We use delegated DNS to handle ACME challenge validation (we run the DNS, you just CNAME to us). This means you don't need to give us DNS credentials or anything. And for HA workloads it's great, because there's a central clearinghouse for certificates, so all the machines in your web farm (or whatever) get the same cert, but you don't run into rate limits with LE.
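For anyone unfamiliar with the pattern, delegated DNS-01 validation typically looks like a one-time CNAME in your zone (the target name below is hypothetical, not CertKit's actual domain):

```
; One-time setup in your zone file (target is illustrative only):
_acme-challenge.www.example.com.  CNAME  www-example-com.validate.certkit-dns.example.

; During issuance, the CA follows the CNAME and finds the TXT challenge
; record served by the provider's DNS -- your own DNS credentials are
; never shared.
```

The CNAME only delegates the challenge name, so the provider can answer ACME challenges for that host without touching the rest of the zone.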
We're recovering Windows Server guys, so we made sure our automation also works for painful Windows workloads like IIS, Exchange, etc.
We've had enough interest that we're building it out for real. Just left beta last month.
I built The Daily Baffle over at https://dailybaffle.com with a whole bunch of word and logic puzzles I designed.
There's Truthsorting, a logic puzzle where you have to order logical statements to make them true or false.
Pathword, a puzzle where you lay out letters along a path to spell out 4 words.
Morphology, a clued word ladder written by a different contributor daily.
And a few others!
I've been trying to promote it for a few months but I haven't had a ton of luck, to be honest. The audience hovers around 500 people and growing it beyond that has been pretty challenging.
FreeBSD 15.1. Released BETA2 on Friday, next Friday is BETA3 and the following week is scheduled to be a Release Candidate.
A non-profit to deconcentrate power over AI through better infrastructure for external auditing/oversight, and better infrastructure for local/federated inference/training https://openmined.org/
Also, we're hiring engineers and PMs (the eng position is about to be up). https://openmined.org/careers/#brxe-zgsziy
I'm working on turning our statically-typed formula engine -- that we use for Calcapp, our app builder -- into a real hosted solution (as well as a library). I discussed it in July last year (https://news.ycombinator.com/item?id=44702833#44704642) and have been working full-time on the project since the beginning of the year.
I figured "I already have a battle-tested solution, I just need to make it modern and spiffy, build a website for it and see if there's any interest -- in the age of Claude Code, this should be fast work!"
Wrong. Taking an internal library and offering it to others -- complete with documentation and modern tooling -- is an immense project, even with the help of AI agents.
Is there a market for a "formula engine in a box"? I don't know. But I also didn't know whether there would be a market for Calcapp either, and that has supported me working full-time for the past seven years. So I'm willing to take another chance.
Social Maps: a user reviews and ratings service for points-of-interest (e.g. cafes) in OpenStreetMap.
I’ve been trying to reduce and eliminate my reliance on Big Tech, and the lack of user reviews and ratings was always a big pain point each time I tried to switch away from Google Maps.
I’ve started building a service where users can write reviews and rate “places” (POIs) in the OpenStreetMap database, such as a cafe, a museum, or a shop. It’s a quite straightforward CRUD app with a bunch of OpenStreetMap-specific features, such as logging in with OpenStreetMap and querying places by their OpenStreetMap metadata.
It’s still in active development but it has good docs, a great API reference (including an OpenAPI spec), a demo app with the entire planet imported and queryable, and an early stage Android SDK.
Been working on https://searchcode.com/ again, which I bought back, albeit as a code search tool for LLMs. It solves the “should I use this library?” question by allowing the LLM to inspect, search, and analyse a library before integration. You can use it to compare multiple repositories before downloading. It comes with a large amount of token savings and can be really useful when wanting to learn about a codebase.
Since it does that anyway, I added dossier pages to it as well: https://searchcode.com/repo/github.com/rust-lang/rust which is useful for humans and shows what the system is creating.
Best part is that I get to use the tools I have built, so https://github.com/boyter/scc and https://github.com/boyter/cs to improve it which benefits anyone using those tools.
I've been working on a newish variant of Sudoku called Binku. It combines the traditional Sudoku rules and adds the rules from a game called Binario/Takuzu (with 1-4 as one color and 5-8 as the other color).
A sample puzzle can be found here: https://sudokupad.app/23x300ggzn
It's been well received by the (very kind!) Sudoku/puzzle communities, so I'm working on throwing a nice interface on it that fits the rules a bit better. I've found about five other examples of others doing a variation of this ruleset before in one way or another, and it's been fun trying to see how hard/deep I can get this puzzle to go.
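As I read the combined ruleset (1-4 as one color, 5-8 as the other), the Binairo/Takuzu side adds two constraints on top of Sudoku's no-repeats: equal counts of each color per line, and no three consecutive cells of the same color. A small sketch of that check (my interpretation of the rules, not the puzzle's official code):

```python
# Binairo-side validity check for one Binku row/column, assuming the
# 1-4 / 5-8 color mapping described above. Sudoku's no-repeat rule
# would be checked separately.

def color(v):
    return 0 if v <= 4 else 1  # 1-4 -> color 0, 5-8 -> color 1

def binairo_ok(line):
    colors = [color(v) for v in line]
    if colors.count(0) != colors.count(1):   # equal counts of each color
        return False
    # no window of three consecutive cells with only one color
    return all(len(set(colors[i:i + 3])) > 1 for i in range(len(colors) - 2))

print(binairo_ok([1, 5, 2, 6, 3, 7, 4, 8]))  # True: colors alternate
print(binairo_ok([1, 2, 3, 5, 6, 7, 4, 8]))  # False: 1,2,3 are all "low"
```

Composing this with a standard Sudoku checker gives a full row/column validator for the variant.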
As a data-obsessed golfer trying to get to single digits, I need a tracking app that picks up where Arccos leaves off. So I'm building one: https://shortgamewiz.com (still a bit WIP).
After a few rounds of using it, I already know a few things I didn't before: I suck at right-to-left breaking putts, I baby uphill putts too much, and getting out of bunkers consistently is not good enough if I can't sink the occasional save. So I know what to practice now.
I’ve been working on an OSS backend-in-a-box called [aepbase](https://aepbase.io/).
For the past few years, a group of us from Google, Microsoft, GM, IBM, Roblox, Rubrik + more have been working on a design standard for APIs called [AEP](https://www.aep.dev). The goal is twofold: learn from our companies mistakes around APIs and enable better tooling with less configuration.
We’re at a point where AEP-compliant APIs get a resource-oriented CLI, MCP server, full UI, and Terraform provider for near-zero configuration.
Aepbase has been my way to tie the whole ecosystem together. You run a single binary and define the schema for a resource with one API call. Now, you’ve got a full set of CRUD APIs and support for CLI/TF/MCP/UI. After one API call.
It’s a really cool way to tie together all of the work AEP has been doing.
Love to hear HN’s opinions on all of this. We’re still trying to figure out the best way to sell people on AEP.
I have been working on two opensource tools:
https://dhuan.github.io/mock/latest/examples.html Command line utility that lets you build APIs with just one command.
https://github.com/dhuan/dop JSON/YAML manipulation with AWK style approach.
I track my learning and schedule repetitions in Google Sheets. But Google Sheets sucks on the phone. So I built a dumb frontend reading off my (public) Google Sheet, which just has 4 columns for links, title, dates, and wait times, plus a formula. The webapp pulls the sheet as CSV and renders it as color-coded lists and a couple of charts. One chart shows what's due this week on a 15-week timeline. This is the simplest luddite version I could come up with. I don't have a way to share this with others except sharing the source; it avoids introducing complexity from auth, storage, managing updates from the app, etc.
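The core of a sheet-as-backend scheduler like this is tiny: parse the CSV, add the wait to the last-review date, and surface whatever has come due. A sketch under assumed column names (the real sheet's columns and formula are not shown in the comment):

```python
import csv, io
from datetime import date, timedelta

# Toy version: the sheet is 4 columns (link, title, date, wait_days);
# an item is due once its date + wait_days has passed. Column names
# are my guess at the scheme described, not the actual sheet.

SHEET_CSV = """link,title,date,wait_days
https://example.com/a,Linear algebra,2024-01-01,30
https://example.com/b,SQL window functions,2024-03-01,90
"""

def due_items(csv_text, today):
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["title"] for r in rows
            if date.fromisoformat(r["date"])
               + timedelta(days=int(r["wait_days"])) <= today]

print(due_items(SHEET_CSV, date(2024, 2, 15)))  # ['Linear algebra']
```

In the real setup the CSV text would come from the sheet's public export URL instead of a string literal; everything else stays the same.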
I am working on a WASM based procedural plant generator:
https://nodes.max-richter.dev https://github.com/jim-fx/nodarium
I've been working on a pure Clojure implementation of WebRTC Data Channels (SCTP over DTLS over UDP). The library provides a minimal, dependency-free (except for Clojure itself) way to establish peer-to-peer data channels on the JVM.
I've always wanted this and have used it to experiment with Gemini's cloud agent Google Jules.
https://aggly.com A beautifully opinionated news aggregator that reads rss, twitter, reddit, youtube, telegram & more
I made https://poemd.dev/ as an online markdown scratchpad that supports GitHub Flavoured Markdown and stores all data in the URL. This means there are no accounts to work with and everything is basically stored in bookmarks if you choose to.
The persistence model makes documents somewhat sharable, but I do find Open Graph previews to be mixed. In Messenger it renders the whole URL, which is quite long due to encoding, and that kills the conversation view.
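A common way to implement store-everything-in-the-URL is deflate plus base64url in the fragment; I don't know poemd's exact encoding, but the long-URL trade-off mentioned above falls out of any scheme along these lines:

```python
import base64, zlib

# Sketch of one URL-persistence scheme (not necessarily poemd's):
# compress the markdown, then base64url-encode it for the fragment.

def encode_doc(markdown: str) -> str:
    raw = zlib.compress(markdown.encode("utf-8"), 9)
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

def decode_doc(fragment: str) -> str:
    pad = "=" * (-len(fragment) % 4)  # restore stripped base64 padding
    return zlib.decompress(
        base64.urlsafe_b64decode(fragment + pad)).decode("utf-8")

doc = "# Notes\n\nSome **GFM** text."
frag = encode_doc(doc)
assert decode_doc(frag) == doc  # round-trips losslessly
print(len(frag), "chars in the fragment")
```

Keeping the document in the fragment (after `#`) also means it is never sent to the server, though as noted it makes link previews unwieldy.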
I am working on a research institute for East Africa, https://maiyoinstitute.org/. I want to tackle the dire lack of environmental data by using (1) low-cost hardware, (2) artificial intelligence, and (3) a long-term horizon. The problem set is huge, but for now I am focusing on low-cost sensors for air and water data collection, plus bioacoustics.
http://autonoma.ca/calculators/rocket/antimatter/
Given a distance, an allowable time to reach that distance, a payload to send, and an expected exhaust velocity, how would you calculate the time required to convert energy into antimatter fuel and how much antimatter needed to arrive at the destination (starting from the Moon)?
There are a few side calculations, such as the size of the radiator, estimated footprint of the fusion reactor itself, and how much metamaterial is needed. This is to help figure out timelines for a sci-fi novel, so ballpark answers are completely fine.
The calculations yield what appear to be values around the correct order of magnitude. Would be delighted to have insights, comments, and corrections.
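Since ballpark answers are the goal, here is one crude Newtonian sketch of the chain the question describes (my own sketch, not the calculator's method): brachistochrone delta-v, Tsiolkovsky for propellant mass, then annihilation energy to supply the exhaust kinetic energy. Relativity, engine efficiency, and radiator losses are all ignored, so at the velocities involved this is order-of-magnitude at best.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ballpark(dist_m, time_s, payload_kg, v_exhaust):
    # Brachistochrone: accelerate half way, flip, decelerate.
    dv = 4 * dist_m / time_s                 # total delta-v (Newtonian)
    ratio = math.exp(dv / v_exhaust)         # Tsiolkovsky mass ratio
    m_prop = payload_kg * (ratio - 1)        # propellant mass
    e_exhaust = 0.5 * m_prop * v_exhaust**2  # kinetic energy in exhaust
    # Annihilation of m kg matter + m kg antimatter yields 2*m*c^2.
    m_antimatter = e_exhaust / (2 * C**2)
    return dv, m_prop, m_antimatter

# Example inputs: 4.25 ly in 50 years, 1000 t payload, exhaust at 0.1c.
ly, yr = 9.461e15, 3.156e7
dv, m_prop, m_am = ballpark(4.25 * ly, 50 * yr, 1e6, 0.1 * C)
```

The time to produce the antimatter then follows from dividing `2 * m_am * C**2` by the reactor power times the assumed energy-to-antimatter conversion efficiency, which for known production methods is many orders of magnitude below one.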
A high-throughput multicast Bitcoin transaction distribution system, with a roadmap towards billions of transactions per second.
Features:
- Control channel for block header announcements, operational mechanisms, and network topology automation
- Separate channels for subtree, subtree grouping, and transaction load
- Transaction load sharding by deterministic multicast group membership based on TXID
- Transaction specialization filtering and retransmission both unicast and multicast, to connect edge networks only interested in a portion of the transaction load for whatever reason
- NACK-based retransmission of missed packets via hash chain gap sequence tracking (per sender, per shard) with automated caching endpoint beacon discovery and tiered network distribution
- BGP-AnyCast based transaction ingress
Basically all the topology pieces to scale the actual small-world network for Bitcoin miners or transaction processors; dense at the core, with layered and sharded group distribution towards users at the edges. Right now just site- or org-scope multicast is planned, but provisions are being made to extend via MP-BGP eventually.
For BSV Blockchain but could work for the other Bitcoin variants too, if they ever wanted to scale.
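The deterministic-sharding piece from the feature list is worth spelling out: because a TXID is already a uniform hash, every node can map it to the same multicast group with no coordination at all. A minimal sketch of the idea (my illustration, not the actual wire protocol, and the shard count is made up):

```python
# Deterministic shard assignment by TXID: every node computes the same
# multicast group for a transaction independently, with no coordination.

N_SHARDS = 16  # hypothetical: one multicast group per shard

def shard_for(txid_hex: str) -> int:
    # TXIDs are uniformly distributed hashes, so taking low-order bits
    # (here the last 8 hex chars) gives a balanced shard assignment.
    return int(txid_hex[-8:], 16) % N_SHARDS

tx = "ff" * 32
print(shard_for(tx))  # 15 -- identical on every node
```

Subscribers then join only the groups for shards they care about, which is what makes the edge-filtering described above possible.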
Building a custom feed for Bluesky which uses collaborative filtering over the likes data: https://foryou.club
How the algorithm works: it finds people who liked the same posts as you, and shows you what else they’ve liked recently.
Launched the feed a little over a year ago and it has become the most liked feed.
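The algorithm as described is classic user-based collaborative filtering over likes. A toy sketch of that recommendation loop (illustrative only; the real feed's scoring and data model are certainly more involved):

```python
from collections import Counter

# Toy collaborative filtering over likes: weight each other user by how
# many posts they share with you, then rank their other likes.

likes = {  # user -> set of liked posts
    "you":   {"p1", "p2"},
    "alice": {"p1", "p2", "p3", "p4"},
    "bob":   {"p2", "p4"},
    "carol": {"p9"},
}

def recommend(user, likes, top=3):
    mine = likes[user]
    scores = Counter()
    for other, theirs in likes.items():
        if other == user:
            continue
        overlap = len(mine & theirs)   # taste similarity
        if overlap == 0:
            continue                   # no shared likes, no signal
        for post in theirs - mine:     # their likes you haven't seen
            scores[post] += overlap
    return [p for p, _ in scores.most_common(top)]

print(recommend("you", likes))  # ['p4', 'p3']
```

At Bluesky scale the same idea would run over nearest-neighbor indexes rather than a full scan, but the scoring logic is this shape.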
AI basketball community, using computer vision to get highlights and stats http://ballers.gg
I recently switched to developing VST audio plugins and I'm loving it. I've already done three [0][1][2] and I want to keep doing this if I manage to generate some income from it. I develop them in TypeScript and then convert them to C++ with a WebView; this way I have a web demo of the plugin that is almost identical to the one you get for the DAW.
[0]: https://technokick.com/ (Techno Kick synth)
[1]: https://riviera-demo.surge.sh/ (Reverb effect)
[2]: https://ya3.surge.sh/ (TB-303 synth clone)
I'm working on a little local-first review tool called Review (though I sometimes refer to it as differ, since that's its original name). You can see screenshots here: https://x.com/rhyslikepb/status/2053149881104265599?s=20
The idea was borne out of wanting to use the review tools that you get on existing sites like GitHub, without having to push and start bloating PR lists. You'll be able to leave yourself comments and code suggestions after review, which you can then pull out in a Markdown file to feed back to your coding agent (or anything else for that matter).
I'm also trying to include some optional (very optional) AI extras where you can use your own keys, and then get a tour of what you've changed and a quick overview of the changes.
Paste Redactor. It redacts Personally Identifiable Information (PII) from your clipboard when you copy and paste text. It uses a custom-trained local AI model, so your PII never leaves your device. That's what it does now. I'm currently working on making it work for agents as a privacy-protection layer. The idea: the most powerful AI models live in the cloud but need access to your local files to be useful. We instead want everything to go through a local protection layer before it is sent to the cloud, possibly with labels, and then reconstructed locally when the cloud sends back its results. Kind of like an ad blocker, but for agents and private data.
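The label-and-reconstruct flow can be shown with a toy example; here a regex stands in for the local model, which the real product replaces with a trained detector:

```python
import re

# Toy PII round-trip: swap PII for labels before text leaves the machine,
# keep the label->value mapping locally, and swap back when the cloud
# reply returns. (Regex is a stand-in for the real local model.)

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text):
    mapping = {}
    def sub(m):
        label = f"<EMAIL_{len(mapping)}>"
        mapping[label] = m.group(0)   # mapping never leaves the device
        return label
    return EMAIL.sub(sub, text), mapping

def reconstruct(text, mapping):
    for label, original in mapping.items():
        text = text.replace(label, original)
    return text

safe, mapping = redact("Contact jane@example.com about the invoice.")
print(safe)  # Contact <EMAIL_0> about the invoice.
cloud_reply = "I emailed <EMAIL_0> as requested."  # cloud echoes the label
print(reconstruct(cloud_reply, mapping))
```

Because only labeled text goes to the cloud and the mapping stays local, the cloud model can still act on the structure of the request without ever seeing the raw PII.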
I am working on a framework that lets you easily create tools inside the Django Admin - https://djangocontrolroom.com
I've published several panels under this banner already (tools for redis, caches, celery, etc.); I am currently working on a base library layer for tools to inherit from and to make it easier to create new tools.
Essentially, the point of all of this is to make it so that you don't need so many external services; instead, DCR provides self-hosted alternatives. This in turn makes it a lot easier to build and productionize something using Django.
Reception has been decent so far and I estimate several thousand current adopters (it's hard to estimate based on download numbers alone). For May I will finalize a common design language, further formalize the plugin system and how it works, and likely release a new panel.
Been writing in my blog every day, reading more, created a poker equity calculator, and working on a city wide project where I document attractions, restaurants, and stays I've experienced in my city (very early stages).
Website: https://arkvis.com
Poker Equity Calculator: https://github.com/lodenrogue/poker-equity-calculator-web
Davao Explorer: https://github.com/lodenrogue/davao-explorer
Reading Summaries: https://github.com/lodenrogue/reading-summaries
I also created a couple of chrome extensions:
HN Dracula Dark Theme: https://github.com/lodenrogue/hackernews-dracula-theme-chrom...
Regex Search Chrome Extension: https://github.com/lodenrogue/regex-search-chrome-extension
Created a small command line util to get earthquake data in the Philippines:
Philquakes: https://github.com/lodenrogue/philquakes
I'm working on World Watcher (https://worldwatcher.live). It's an interactive map of livecams around the world.
The idea is to have a better experience for navigating livecam streams that are publicly available on YouTube. There are a few livecam aggregators that include maps, but I never felt that any of them were satisfying, as they always require you to open new pages to watch the streams. On World Watcher, you can jump from place to place seamlessly.
You can also filter the streams by type of place or features, for example beaches or cams with audio. And if you don't know where to go, just try out the Explore button.