Hacker News

Audio Reactive LED Strips Are Diabolically Hard

137 points by surprisetalk yesterday at 1:55 PM | 40 comments

Comments

doctorhandshake today at 2:30 PM

I like this writeup but I feel like the title doesn't really tell you what it's about ... to me it's about creativity within constraints.

The author finds, as many do, that naive or first-approximation approaches fail within certain constraints and that more complex methods are necessary to achieve simplicity. He finds, as I have, that for things that are perceptual and spectral, the perceptual and spectral domains are a better space to work in than the raw data.

What I don't see him get to (might be the next blog post, IDK) is constraints in the use of color - everything is in 'rainbow town' as we say, and it's there that things get chewy.

I'm personally not a fan of emissive green LED light in social spaces. I think it looks terrible and makes people look terrible. Just a personal thing, but putting it into practice with these sorts of systems is challenging as it results in spectral discontinuities and immediately requires the use of more sophisticated color systems.

I'm also about maximum restraint in these systems - if they have flashy tricks, I feel they should do them very very rarely and instead have durational and/or stochastic behavior that keeps a lot in reserve and rewards closer inspection.

I put all this stuff into practice in a permanent audio-reactive LED installation at a food hall/ nightclub in Boulder: https://hardwork.party/rosetta-hall-2019/

aleksiy123 today at 5:00 PM

Fun! I actually did a similar project during my time at UVic 10 years ago, but it was a hoodie.

https://youtu.be/-LMZxSWGLSQ

I remember thinking really hard about what to do with color. Though, like you say, mine is pretty much a naive FFT.

https://github.com/aleksiy325/PiSpectrumHoodie?tab=readme-ov...
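A naive FFT-to-LED mapping of that flavor fits in a few lines. A minimal sketch (not the repo's actual code; the frame size, LED count, and normalization are arbitrary choices):

```python
import numpy as np

def naive_fft_frame(samples, n_leds):
    """Map one audio frame to per-LED brightness with a plain FFT:
    take the magnitude spectrum, chop it into n_leds equal linear
    bins, and normalise to the loudest bin."""
    mag = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    bins = np.array_split(mag, n_leds)
    levels = np.array([b.mean() for b in bins])
    peak = levels.max()
    return levels / peak if peak > 0 else levels

# A 440 Hz test tone: all the energy lands in the lowest-frequency bin,
# which is exactly the "everything piles into the bass LEDs" problem
t = np.arange(1024) / 44100.0
levels = naive_fft_frame(np.sin(2 * np.pi * 440 * t), n_leds=8)
```

The equal linear bins are the naive part: most musical content sits below ~2 kHz, so a linear split leaves the upper LEDs nearly dark.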

Thanks for reminding me.

WarmWash today at 3:13 PM

The real killer is that humans don't hear frequencies, they hear instruments, which are stacks of frequencies that only roughly correlate with a frequency range.

I wonder if transformer tech is close to achieving real-time audio decoding, where you can split a track into its component instruments and drive a light show off of that. Think of those fancy Christmastime front-yard light shows, as opposed to random colors blinking along with what might be a beat.

serf today at 5:28 PM

the hard part is dousing a room in pulsing bright colorful LEDs tastefully.

I haven't seen that done yet. I think it's one of those Dryland myths.

nsedlet today at 5:21 PM

I also attempted to do real-time audio visualizations with LED strips. What was unsatisfying is that the net effect always seemed to be: the thing would light up with heavy beats and general volume. But otherwise the visual didn't FEEL like the music. This is the same issue I always had with the Winamp visualizations back in the day.

To solve this I tried pre-processing the audio, which only works with recordings obviously. I extract the beats and the chords (using Chordify). I made a basic animation and pulsed the lights to the beat, and mapped the chords to different color palettes.
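Not the project's actual code, but the beats-plus-chords-to-cues mapping described above can be sketched with made-up beat times, chord segments, and palettes:

```python
# Hypothetical pre-processing output: beat timestamps in seconds, and
# chord segments as (start_time, chord_name) pairs from an analysis tool
beats = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
chords = [(0.0, "C"), (1.0, "Am"), (2.0, "F")]

# Hypothetical chord -> RGB palette mapping
palettes = {"C": (255, 180, 0), "Am": (80, 0, 200), "F": (0, 150, 255)}

def build_cues(beats, chords, palettes):
    """Assign each beat the palette of the chord active at that time."""
    cues = []
    for t in beats:
        # Last chord whose start time is at or before this beat
        chord = [name for start, name in chords if start <= t][-1]
        cues.append((t, palettes[chord]))
    return cues

cues = build_cues(beats, chords, palettes)
```

At playback the cue list is just walked against the song clock, which is why this only works for recordings: the chord boundaries have to be known ahead of time.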

Some friends and I rushed to put it together as a Burning Man art project, and it wasn't perfect, but by the time we launched it felt a lot closer to what I'd imagined. Here's a grainy video of it working at Burning Man: https://www.youtube.com/watch?v=sXVZhv_Xi0I

It works pretty well with most songs that you pick. Just saying there's another way to go somewhere between (1) fully reactive to live audio, and (2) hand designed animations.

I don't think there's an easy bridge to make it work with live audio though unfortunately.

iamjackg today at 2:09 PM

Scott's work is amazing.

Another related project that builds on a similar foundation: https://github.com/ledfx/ledfx

menno-dot-ai today at 2:16 PM

Wow, this was my first hardware project, right around the time it was released! I remember stapling a bunch of LED strips around our common room and creating a case for the Pi + power supply by drilling a bunch of ventilation and cable holes in a wooden box.

And of course, by the time I got it to work perfectly I never looked at it again. As is tradition.

mdrzn today at 12:14 PM

I've always been very interested in audio-reactive LED strips and bulbs. I've been using a Windows app to control my LIFX lights for years, but lately it hasn't been maintained and it won't connect to my lights anymore.

I tried recreating the app (I can connect to the lights via BT), but writing the audio-reactive code was the hardest part (I still haven't figured out a good rule of thumb). I mainly use it when listening to EDM or club music, so it's always a classic 4/4 at 110-130 BPM, yet it's hard to get the lights to react on beat.
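For steady 4/4 club music, even a crude energy-flux detector gets lights reacting near the beat. A rough sketch (the frame size, history window, and threshold here are tuning guesses, not anything from the app):

```python
import numpy as np

def detect_onsets(samples, frame=1024, threshold=1.5):
    """Flag frames whose energy jumps well above the recent average.
    A crude stand-in for a real onset detector: works tolerably on
    four-on-the-floor kicks, poorly on anything subtler."""
    n = len(samples) // frame
    energy = np.array([np.sum(samples[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    onsets, history = [], []
    for i, e in enumerate(energy):
        # Compare against the mean energy of the last 8 frames
        avg = np.mean(history[-8:]) if history else 0.0
        if history and e > threshold * avg:
            onsets.append(i)
        history.append(e)
    return onsets

# Synthetic check: quiet tone with loud bursts in frames 8 and 16
t = np.arange(1024) / 44100.0
quiet = 0.01 * np.sin(2 * np.pi * 200 * t)
loud = 1.0 * np.sin(2 * np.pi * 200 * t)
frames = [quiet.copy() for _ in range(20)]
frames[8] = loud
frames[16] = loud
onsets = detect_onsets(np.concatenate(frames))
```

The hard part in practice is latency: by the time the energy spike is visible in a full frame, the beat has already happened, which is why lights driven this way always feel slightly behind.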

copypaper today at 4:08 PM

This is awesome! I did a similar project in college for one of my classes and ran into the same exact walls as you.

- The more filters I added, the worse it got. A simple EMA for smoothing gave the best results. Although, your pipeline looks way better than what I came up with!

- I ended up using the Teensy 4.0, which let me do real-time FFT and post-processing in under 10 ms (I want to say it was ~1 ms, but I can't recall; it's been a while). If anyone goes down this path, I'd heavily recommend checking out the Teensy. It removes the need for a RasPi or computer. Plus, Paul is an absolute genius and his work is beyond amazing [1].

- I started out with non-addressable LEDs also. I attempted to switch to WS2812s as well, but couldn't find a decent algorithm to make it look good. Yours came out really well! Kudos.

- Putting the LEDs inside an LED strip diffuser channel made the biggest difference. I spent so long trying to smooth it out to get it to look good when a simple diffuser was all I needed (I love the paper diffuser you made).
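The EMA mentioned in the first bullet is only a few lines. A generic sketch (alpha is a tunable guess; lower values mean heavier smoothing):

```python
def ema(values, alpha=0.3):
    """Exponential moving average: each output leans alpha of the way
    toward the newest sample, which damps frame-to-frame LED flicker
    while still tracking the overall envelope."""
    smoothed = []
    prev = values[0]
    for v in values:
        prev = alpha * v + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

out = ema([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
```

A common LED refinement is using two alphas, a fast one when the level rises and a slow one when it falls, so hits flash instantly but decay gently.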

RE: What's Still Missing: I came to a similar conclusion as well. Manually programmed animation sequences are unparalleled. I worked as a stagehand in college and saw what went into their shows. It was insane. I think the only way to have that same WOW factor is via pre-processing. I worked on this before AI was feasible, but if I were to take another stab at it I would attempt to do it with something like TinyML. I don't think real time is possible with this approach. Although, maybe you could buffer the audio with a slight delay? I know what I'll be doing this weekend... lol.

Again, great work. To those who also go down this rabbit hole: good luck.

[1]: https://www.pjrc.com/

milleramp today at 3:09 PM

This guy has been making music-controlled LED items, boxes and wristbands: https://www.kickstarter.com/projects/markusloeffler/lumiband...

JKCalhoun today at 1:20 PM

I made a decent audio visualizer using the MSGEQ7 [1]. It reports a level for each of seven frequency bands, which an Arduino can poll on every loop. Unfortunately, it looks like the MSGEQ7 is no longer a standard part.

(And it looks like the 7 frequencies are not distributed linearly—perhaps closer to the mel scale.)

I tried using one of the FFT libraries on the Arduino directly but had no luck. The MSGEQ7 chip is nice.

[1] https://cdn.sparkfun.com/assets/d/4/6/0/c/MSGEQ7.pdf
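On the spacing point: the seven MSGEQ7 band centres from the datasheet are indeed spaced roughly geometrically (evenly on a log axis) rather than linearly, which is easy to check:

```python
# MSGEQ7 band centres, straight from the datasheet
centres = [63, 160, 400, 1000, 2500, 6250, 16000]

# Ratios between neighbouring bands are nearly constant (~2.5),
# i.e. the bands are evenly spaced on a log axis, not a linear one
ratios = [b / a for a, b in zip(centres, centres[1:])]

# An exactly geometric 7-band layout over the same range, for comparison
step = (centres[-1] / centres[0]) ** (1 / 6)
geometric = [centres[0] * step ** i for i in range(7)]
```

Log spacing is why the chip's bars feel musically even: each band covers roughly the same number of octaves, much closer to how pitch is perceived than equal-Hz bins.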

londons_explore today at 1:02 PM

The mel spectrum is the first part of a speech recognition pipeline...

But perhaps you'd get better results if more of an ML speech/audio recognition pipeline were included?

Eg. the pipeline could separate out drum beats from piano notes, and present them differently in the visualization?

An autoencoder network trained to minimize perceptual reconstruction loss would probably have the most 'interesting' information at the bottleneck, so that's the layer I'd feed into my LED strip.
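The Hz-to-mel conversion at the front of such pipelines is a one-liner. A sketch of mel-spaced band edges (the band count and frequency range here are arbitrary; this is the common HTK-style formula, not any particular library's implementation):

```python
import math

def hz_to_mel(f):
    """HTK-style Hz-to-mel conversion."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(f_min, f_max, n_bands):
    """Band edges equally spaced on the mel axis, mapped back to Hz.
    Returns n_bands + 2 edges (each band shares edges with its
    neighbours, as in a triangular mel filterbank)."""
    m_lo, m_hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (m_hi - m_lo) / (n_bands + 1)
    return [mel_to_hz(m_lo + i * step) for i in range(n_bands + 2)]

edges = mel_band_edges(0.0, 8000.0, 10)
```

The edges come out narrow at the bottom and wide at the top, which is exactly the perceptual warping that makes mel bands a better LED mapping than raw FFT bins.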

panki27 today at 1:07 PM

I had a similar setup based on an Arduino: 3 hardware filters (highs/mids/lows) for the audio, plus a serial connection used to read the MIDI clock from DJ software.

This allowed the device to count the beats, and since most modern EDM is 4/4, you can trigger effects every time something "changes" in the music after syncing once.
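MIDI clock runs at 24 pulses per quarter note, so the beat counting described above is just integer arithmetic on the pulse stream. A sketch (the class and method names are made up, not from the project):

```python
PULSES_PER_BEAT = 24  # MIDI spec: 24 timing-clock pulses per quarter note

class BeatCounter:
    """Count MIDI clock pulses into beats and bars (4/4 assumed), so
    effects can fire on bar boundaries once synced to the DJ software."""

    def __init__(self, beats_per_bar=4):
        self.beats_per_bar = beats_per_bar
        self.pulses = 0

    def tick(self):
        """Call on every clock pulse; returns (beat_in_bar, is_downbeat)."""
        self.pulses += 1
        beat = (self.pulses // PULSES_PER_BEAT) % self.beats_per_bar
        is_downbeat = self.pulses % (PULSES_PER_BEAT * self.beats_per_bar) == 0
        return beat, is_downbeat

counter = BeatCounter()
ticks = [counter.tick() for _ in range(96)]  # one full 4/4 bar of pulses
```

Since the clock comes from the DJ software, there is no beat detection at all: the lights can never drift, which is the big advantage over audio analysis.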

rustyhancock today at 11:50 AM

More than 20 years ago I made a small LED display that used a series of LM567s (tone-decoder ICs) and LM3914s (bar-graph drivers) to make a simple histogram for music.

It was fiddly, and probably too inaccurate for a modern audience, but I can't claim it was diabolically hard. Tuning was a faff, but we were more willing to sit and tweak resistor and capacitor values back then.

wolvoleo today at 2:58 PM

Thanks for this! Exactly the thing I'm struggling with now: making a decent music visualisation based on the ESP32-S3.

8cvor6j844qw_d6 today at 12:51 PM

Are these available commercially for consumers?

p0w3n3d today at 11:49 AM

IANAE, but I would go for an electric circuit rather than software to steer the LEDs. I think that nowadays, with LLM support, it can be easier to build and better to optimise for the sake of latency.

IshKebab today at 3:40 PM

It's not that hard. I did a real-time version of the BeatRoot algorithm decades ago that worked pretty well for such a simple algorithm.

askl today at 11:41 AM

Interesting. I'm currently in the process of building something with an audio-reactive LED strip but hadn't come across this project yet. The WLED [1] ESP32 firmware seems to be able to do something similar, or potentially more.

[1] https://kno.wled.ge/

Edit: Oh wait, that project needs a PC or Raspberry Pi for audio processing. WLED does everything on the ESP32.


m3kw9 today at 2:20 PM

How is it hard? Do an A-to-D conversion, add a filter, do the compute, then D-to-A.
