Creator here. I built ChartGPU because I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points.
The core insight: Canvas2D is fundamentally CPU-bound. Even WebGL chart libraries still do most computation on the CPU. So I moved everything to the GPU via WebGPU:
- LTTB downsampling runs as a compute shader
- Hit-testing for tooltips/hover is GPU-accelerated
- Rendering uses instanced draws (one draw call per series)
The result: 1M points at 60fps with smooth zoom/pan.
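For readers unfamiliar with LTTB (Largest-Triangle-Three-Buckets): below is a minimal CPU reference implementation of the algorithm, just to show what the downsampler computes. ChartGPU runs the equivalent as a compute shader; this sketch is not its actual code.

```typescript
// Reference LTTB downsampler: keep the first and last points, then for each
// bucket pick the point forming the largest triangle with the previously
// selected point and the average of the next bucket.
function lttb(points: [number, number][], threshold: number): [number, number][] {
  const n = points.length;
  if (threshold >= n || threshold < 3) return points; // nothing to do

  const sampled: [number, number][] = [points[0]]; // always keep first point
  const bucketSize = (n - 2) / (threshold - 2);
  let a = 0; // index of previously selected point

  for (let i = 0; i < threshold - 2; i++) {
    // Average of the *next* bucket acts as the third triangle vertex.
    const nextStart = Math.floor((i + 1) * bucketSize) + 1;
    const nextEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, n);
    let avgX = 0, avgY = 0;
    for (let j = nextStart; j < nextEnd; j++) { avgX += points[j][0]; avgY += points[j][1]; }
    avgX /= nextEnd - nextStart;
    avgY /= nextEnd - nextStart;

    // Scan the current bucket for the point with the largest triangle area.
    const start = Math.floor(i * bucketSize) + 1;
    const end = Math.floor((i + 1) * bucketSize) + 1;
    let maxArea = -1, maxIdx = start;
    for (let j = start; j < end; j++) {
      const area = Math.abs(
        (points[a][0] - avgX) * (points[j][1] - points[a][1]) -
        (points[a][0] - points[j][0]) * (avgY - points[a][1])
      );
      if (area > maxArea) { maxArea = area; maxIdx = j; }
    }
    sampled.push(points[maxIdx]);
    a = maxIdx;
  }
  sampled.push(points[n - 1]); // always keep last point
  return sampled;
}
```

The bucket scans are independent of each other, which is what makes the algorithm a good fit for a parallel compute shader.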
Live demo: https://chartgpu.github.io/ChartGPU/examples/million-points/
Currently supports line, area, bar, scatter, pie, and candlestick charts. MIT licensed, available on npm: `npm install chartgpu`
Happy to answer questions about WebGPU internals or architecture decisions.
If you have tons of datapoints, one cool trick is to do intensity modulation of the graph instead of simple "binary" display. Basically for each pixel you'd count how many datapoints it covers and map that value to color/brightness of that pixel. That way you can visually make out much more detail about the data.
In electronics world this is what "digital phosphor" etc does in oscilloscopes, which started out as just emulating analog scopes. Some examples are visible here https://www.hit.bme.hu/~papay/edu/DSOdisp/gradient.htm
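The per-pixel counting idea above can be sketched in plain TypeScript. All names here are illustrative, not part of ChartGPU or any library; the only assumption is a flat array of samples and a target pixel grid.

```typescript
// "Digital phosphor" style density rendering: bin every data point into a
// pixel, then map the per-pixel count to brightness. Returns RGBA bytes
// suitable for ctx.putImageData(new ImageData(rgba, width, height), 0, 0).
function densityImage(
  xs: Float64Array, ys: Float64Array,
  width: number, height: number,
  xMin: number, xMax: number, yMin: number, yMax: number
): Uint8ClampedArray {
  const counts = new Uint32Array(width * height);
  let maxCount = 0;
  for (let i = 0; i < xs.length; i++) {
    const px = Math.floor(((xs[i] - xMin) / (xMax - xMin)) * (width - 1));
    const py = Math.floor(((yMax - ys[i]) / (yMax - yMin)) * (height - 1));
    if (px < 0 || px >= width || py < 0 || py >= height) continue;
    const c = ++counts[py * width + px];
    if (c > maxCount) maxCount = c;
  }
  // Log-scale mapping keeps low-density detail visible next to dense regions.
  const rgba = new Uint8ClampedArray(width * height * 4);
  for (let i = 0; i < counts.length; i++) {
    const v = counts[i]
      ? Math.round(255 * Math.log1p(counts[i]) / Math.log1p(maxCount))
      : 0;
    rgba[i * 4] = v;
    rgba[i * 4 + 1] = v;
    rgba[i * 4 + 2] = v;
    rgba[i * 4 + 3] = counts[i] ? 255 : 0;
  }
  return rgba;
}
```

The binning loop is embarrassingly parallel, so on a GPU it maps naturally to a compute shader with atomic increments per pixel.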
Right on time.
We’ve been working on a browser-based link-graph (OSINT) analysis tool for months now (https://webvetted.com/workbench). The graph charting tools on the market are pretty basic for the kind of charting we’re looking to do (think thousands of connected/disconnected nodes/edges). Being able to handle 1M points is a dream.
This will come in very handy.
Bug report: there is something wrong with the slider below the chart in the million-points example:
https://chartgpu.github.io/ChartGPU/examples/million-points/...
While dragging, the slider does not stay under the cursor, but instead moves by unexpected distances.
TimeLine maintainer here. Their demo for live-streamed data [0] in a line plot is surprisingly bad given how slick the rest of it seems. For comparison, this [1] is a comparatively smooth demo of the same goal, but running entirely on the main thread and using the classic "2d" canvas rendering mode.
[0]: https://chartgpu.github.io/ChartGPU/examples/live-streaming/...
[1]: https://crisislab-timeline.pages.dev/examples/live-with-plug...
> I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points
Haha, Highcharts is a running joke around my office because of this. Every few years the business will bring in consultants to build some interface for us, and every time we have to explain to them that Highcharts, even with its turbo mode enabled, chokes on our data streams almost immediately.
Congrats, but 1M is nothing spectacular for apps in finance.
Here’s a demo of the WIP rendering engine we’re working on, which boosted our previous capability of 10M data points to 100M data points.
plot.ly has been able to do WebGL scatter plots with > 10 million points for years. There are a lot of libraries that can do this, I think?
Quick update: Just shipped a fix for the data zoom slider bug that several of you reported (thanks d--b, azangru, and others).
The slider should now track the cursor correctly on macOS. If you tried the million-points demo earlier and the zoom felt off, give it another shot.
This is why I love launching on HN - real feedback from people actually trying the demos. Keep it coming! :)
I just rewrote all the graphs on Phrasing [1] to WebGL. Mostly because I wanted custom graphs that didn’t look like graphs, but also because I wanted to be able to animate several tens of thousands of metrics at a time.
After the initial setup and learning curve, it was actually very easy. All in all, way less complicated than all the performance hacks I had to do to get 0.01% of the data to render half as smoothly using d3.
Although this looks next-level. I make sure all the computation happens in a single O(n) loop, but the main loop still takes place on the CPU. Very well done.
To anyone on the fence: GPU charting seemed crazy to me beforehand (classic overengineering), but it ends up being much simpler (and much, much smoother) than traditional charts!
@huntergemmer - assuming you are the author: curious about your experience using .claude and .cursor, since I see sub-agents defined under those folders. What percent of your time on this project would you say was raw coding vs. prompting? And any other insights you may have on using these tools to build a library - I see your first commit was only 5 days ago.
Wow, this is great. I practically gave up on rendering large data in EasyAnalytica because plotting millions of points becomes a bad experience, especially in dashboards with multiple charts. My current solution is to downsample to give an “overview” and use zoom to allow viewing “detailed” data, but that code is fragile.
One more issue is that some browser and OS combinations do not support WebGPU, so we will still have to rely on existing libraries in addition to this, but it feels promising.
Some of these don't feel 60fps, like the streaming one. I don't really know how to verify that, though. Or maybe I'm just so used to 144fps.
Can it scroll while populating? I was trying to build a heart-rate chart (captured at 60fps from the camera, finger over the lens with the flashlight on) using various libs. Raw drawing with canvas was faster than any of them.
Drawing and scrolling live data was a problem for one lib (I don't remember which) because it redrew the whole thing on every frame.
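One common way around the redraw-everything problem is a fixed-capacity ring buffer plus incremental drawing. A minimal, library-agnostic sketch follows; the canvas side (browser-only) is only described in comments, and all names are illustrative.

```typescript
// Fixed-capacity ring buffer for live samples: pushing is O(1) and never
// reallocates, so a 60fps producer can't cause GC churn.
class RingBuffer {
  private buf: Float64Array;
  private head = 0;   // next write position
  private count = 0;  // number of valid samples
  constructor(capacity: number) { this.buf = new Float64Array(capacity); }

  push(v: number): void {
    this.buf[this.head] = v;
    this.head = (this.head + 1) % this.buf.length;
    if (this.count < this.buf.length) this.count++;
  }

  // Oldest-to-newest snapshot, i.e. the visible scrolling window.
  toArray(): number[] {
    const out: number[] = [];
    const start = (this.head - this.count + this.buf.length) % this.buf.length;
    for (let i = 0; i < this.count; i++) out.push(this.buf[(start + i) % this.buf.length]);
    return out;
  }
}

// Per frame (in the browser): shift the existing pixels left with
// ctx.drawImage(canvas, -dx, 0), then stroke only the newest sample's line
// segment, instead of re-plotting the whole series every frame.
```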
I've always been a bit skeptical of JS charting libs that want to bring the entire dataset to the client and do the rendering there, vs. at least having the option to render image tiles on the server and stream back tooltips and other interactive elements.
However, this is pretty great; there really aren't that many use cases that require more than a million points. You might finally unseat dygraphs as the gold standard in this space.
Zoom doesn't seem to work on Firefox mobile. Just zooms the whole page in.
The rendering is very cool, but what I really want is this as a renderer I can plug into Vega.
Vega/Vega-Lite have amazing charting expressivity in their spec language; most other charting libs don't come close. It would be very cool to be able to take advantage of that.
Very cool, I like the variety of demos! On the candlestick streaming demo (https://chartgpu.github.io/ChartGPU/examples/candlestick-str...), the 1s/5m/15m etc. buttons don't seem to do anything.
Cool to see that this project started 5 days ago! Unfortunately, I cannot make it work on my system (Ubuntu, Chrome, WebGPU enabled as described in the documentation). On the other hand, it works on my Android phone...
Funny enough, I am doing something very similar: a portable C++ (Windows, Linux, macOS) charting library that also compiles to WASM and runs in the browser...
I am still at day 2, so see you in 3 days, I guess!
All charts in the demo failed for me.
Error message: "WebGPU Error: Failed to request WebGPU adapter. No compatible adapter found. This may occur if no GPU is available or WebGPU is disabled.".
Will it be possible to plot large graphs/networks with thousands of nodes?
Very cool. Shame there's not a webgl fallback though. It will be a couple of years until webgpu adoption is good enough.
Safari on the latest Sequoia doesn't support this. Given that many people will not upgrade to the latest version, it's a shame Safari is behind on these things.
Have you tried rendering 30 different instances at the same time?
When did WebGPU become good enough at compute shaders? When I tried and failed at digging through the spec about a year ago, it was very touch and go.
Maybe I'm just bad at reading specifications or finding the right web browser.
Fun benchmark :) I'm getting 165 fps (screen refresh rate), 4.5-5.0 in GPU time and 1.0 - 1.2 in CPU time on a 9970x + RTX Pro 6000. Definitely the smoothest graph viewer I've used in a browser with that amount of data, nicely done!
Would be great if you had a button one could press that runs a 10-15 second benchmark and then prints a min/max report. Maybe it could even include loading/unloading the data, so we get ranges that are easier to share and compare between machines :)
Fantastic Hunter, congrats!
I've been looking for a follow-up to uPlot - Lee, who made uPlot, is a genius and that tool is so powerful. However, I need OffscreenCanvas running charts 100% in worker threads. Can ChartGPU support this?
I started an Opus 4.5 rewrite of uPlot to decouple it from DOM reliance, but your project is another level of genius.
I hope there is consideration for running your library 100% in a worker thread (the data munging pre-chart is very heavy in our case).
Again, congrats!
Very nice. There is an issue with panning on the million-point demo -- it currently does not redraw until the dragging velocity is below some threshold, but it should look like the points are simply panned into frame. It is probably enough to just remove the velocity threshold, but it sometimes also helps to cache an entire frame around the visible range.
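The frame-caching suggestion above reduces to a small piece of range math: render a span padded on both sides of the viewport, and only re-render when a pan escapes the cached span. A hedged, library-agnostic sketch (the function name and pad factor are illustrative, not ChartGPU API):

```typescript
// Decide whether a pan can be served from the cached frame or needs a
// re-render. Ranges are in data coordinates (e.g. x values).
function cachedRange(
  viewStart: number,
  viewEnd: number,
  cached: { start: number; end: number } | null,
  padFactor = 1.0 // pad by one viewport width on each side
): { start: number; end: number; needsRender: boolean } {
  if (cached && viewStart >= cached.start && viewEnd <= cached.end) {
    // Pan stayed inside the cached span: just translate/blit the old frame.
    return { ...cached, needsRender: false };
  }
  // Viewport escaped the cache: render a new padded span around it.
  const pad = (viewEnd - viewStart) * padFactor;
  return { start: viewStart - pad, end: viewEnd + pad, needsRender: true };
}
```

With one viewport of padding per side, a user can pan a full screen-width in either direction before any re-render is needed, which hides exactly the kind of stutter described above.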
What purposes have you found for rendering so many data points? It seems like at a certain point, say above a few thousand, it becomes difficult to discriminate between them, and rendering more is less useful in many cases.
This is so well done, thanks for sharing it. I've been trying to communicate to people how we are living in a golden age of dev, where things that previously couldn't have been created now can be. This is an amazing example of that.
Curious: how does TradingView et al. solve this problem? They should have the same limitations? (Actually, I'm a user of the site, though I never dug into how they made it.)
I'd love to know if this is compatible as embedded in a Jupyter Notebook.
I like how you used actual financial data for the candlestick example :)
Amazing. I can't express how thankful I am for you building this.
Nicely done. Will you be able to render 3D donuts? And even animations, say pick a slice & see it tear apart from the donut.
No Firefox support? It has had WebGPU support since version 141.
Even when I turn on dom.webgpu.enabled, I still get "WebGPU is disabled by blocklist" even though your domain is not in the blocklist, and even if I turn on gfx.webgpu.ignore-blocklist.
Doesn't work for me? Latest chrome, RTX 4080, what am I missing?
Doesn't work on my Android phone because there's no GPU (but I have WebGL, is that not enough?).
The number of points actually being rendered doesn't seem to warrant the webgpu implementation. It's similar to the number of points that cubism.js could throw on the screen 15 years ago.
Wow, this is smooth, man. This is so cool. This is really sexy and cool; the examples page (https://chartgpu.github.io/ChartGPU/examples/index.html) has many good ones.
I hope you have a way to monetize/productize this, because this has three.js potential. I love this. Keep goin! And make it safe (a way to fund, don't overextend via OSS). Good luck, bud.
Also, you are a master of naming. ChartGPU is a great name, lol!
This looks great. Quick feedback, scrollbars don't work well on my mac mini M1. The bar seems to move twice as fast as the mouse.
This is great, but I don't see it being useful for most use cases.
Most high-level charting libraries already support downsampling. Rendering data that is not visible is a waste of CPU cycles anyway. This type of optimization is very common in 3D game engines.
Also, modern CPUs can handle rendering of even complex 2D graphs quite well. The insanely complex frontend stacks and libraries, a gazillion ads and trackers, etc., are a much larger overhead than rendering some interactive charts in a canvas.
I can see GPU rendering being useful for applications where real-time updates are critical, and you're showing dozens of them on screen at once, in e.g. live trading. But then again, such applications won't rely on browsers and web tech anyway.
I don't really care about this, like at all. But I just wanted to say, that's an amazing name. Well done.
uPlot maintainer here. this looks interesting, i'll do a deeper dive soon :)
some notes from a very brief look at the 1M demo:
- sampling has a risk of eliminating important peaks, uPlot does not do it, so for apples-to-apples perf comparison you have to turn that off. see https://github.com/leeoniya/uPlot/pull/1025 for more details on the drawbacks of LTTB
- when doing nothing / idle, there is significant cpu being used, while canvas-based solutions will use zero cpu when the chart is not actively being updated (with new data or scale limits). i think this can probably be resolved in the WebGPU case with some additional code that pauses the updates.
- creating multiple charts on the same page with GL (e.g. dashboard) has historically been limited by the fact that Chrome is capped at 16 active GL contexts that can be acquired simultaneously. Plotly finally worked around this by using https://github.com/greggman/virtual-webgl
> data: [[0, 1], [1, 3], [2, 2]]
this data format, unfortunately, necessitates the allocation of millions of tiny arrays. i would suggest switching to a columnar data layout.
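For concreteness, the columnar layout being suggested looks like this (a generic sketch, not ChartGPU's actual API):

```typescript
// Array-of-pairs: one small heap-allocated array per point.
const rowData: [number, number][] = [[0, 1], [1, 3], [2, 2]];

// Columnar: two flat typed arrays, zero per-point allocations, and a memory
// layout that can be copied into a GPU buffer directly.
const colXs = new Float64Array([0, 1, 2]);
const colYs = new Float64Array([1, 3, 2]);

// One-time conversion from existing row data:
function toColumnar(rows: [number, number][]): { xs: Float64Array; ys: Float64Array } {
  const xs = new Float64Array(rows.length);
  const ys = new Float64Array(rows.length);
  for (let i = 0; i < rows.length; i++) {
    xs[i] = rows[i][0];
    ys[i] = rows[i][1];
  }
  return { xs, ys };
}
```

For a million points, the row format means a million tiny arrays for the GC to track, while the columnar form is two contiguous buffers.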
uPlot has a 2M datapoint demo here, if interested: https://leeoniya.github.io/uPlot/bench/uPlot-10M.html