Hacker News

Show HN: Utilyze – an open source GPU monitoring tool more accurate than nvtop

95 points by ManyaGhobadi yesterday at 1:55 PM | 23 comments

The standard GPU utilization metric reported by nvidia-smi, nvtop, Weights & Biases, Amazon CloudWatch, Google Cloud Monitoring, and Azure Monitor is highly misleading. It reports the fraction of time that any kernel is running on the GPU, which means a GPU can report 100% utilization even if only a small portion of its compute capacity is actually being used. In practice, we've seen workloads with ~1–10% real compute throughput while dashboards show 100%.
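A toy illustration of the gap described above (this is not nvidia-smi's actual implementation, just a simulation of the two metrics): the standard metric counts a sample as "busy" if any kernel is running at all, regardless of how much of the chip that kernel occupies.

```python
def reported_utilization(samples):
    """Standard-style metric: fraction of samples where ANY kernel was active."""
    return sum(1 for occupancy in samples if occupancy > 0) / len(samples)

def real_compute_throughput(samples):
    """Average fraction of peak compute actually used across samples."""
    return sum(samples) / len(samples)

# A workload that keeps one small kernel (5% of the SMs) running constantly:
trace = [0.05] * 100

print(reported_utilization(trace))    # -> 1.0: the dashboard shows 100%
print(real_compute_throughput(trace)) # -> ~0.05: only ~5% of peak compute
```

The same trace yields 100% on the time-based metric but 5% real throughput, which is exactly the 100%-vs-1–10% discrepancy described above.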

This becomes a problem when teams rely on that metric for capacity planning or optimization decisions: it can make underutilized systems look saturated.

We're releasing an open-source (Apache 2.0) tool, Utilyze, to measure GPU utilization differently. It samples hardware performance counters and reports compute and memory throughput relative to the hardware's theoretical limits. It also estimates an attainable utilization ceiling for a given workload.
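As a rough sketch of what "throughput relative to the hardware's theoretical limits" means (the matmul workload, timing, and peak figure below are illustrative assumptions, not Utilyze's actual implementation):

```python
def compute_utilization(achieved_flops_per_s, peak_flops_per_s):
    """Achieved compute throughput as a fraction of the theoretical peak."""
    return achieved_flops_per_s / peak_flops_per_s

# Example: an (m, n, k) matmul performs roughly 2*m*n*k FLOPs.
m = n = k = 4096
elapsed_s = 0.01                       # hypothetical measured kernel time
achieved = 2 * m * n * k / elapsed_s   # FLOP/s actually delivered
peak = 312e12                          # e.g. ~312 TFLOP/s (A100 BF16 peak)

print(f"{compute_utilization(achieved, peak):.1%}")  # -> 4.4%
```

A workload like this could still show 100% on the time-based metric while sitting in the low single digits against the hardware's peak.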

GitHub link: https://github.com/systalyze/utilyze

We'd love to hear your thoughts!


Comments

whatever1 today at 5:02 AM

I really don’t understand how Intel has not completely dominated the computational GPU market.

Nvidia’s toolsets and APIs are under-documented, and the commercial-grade hardware itself is super unreliable.

Developers and operators just bear with the whole situation because there is no alternative. To the point that they are ready to jump to things like TPUs or other custom silicon.

Say what you will about Intel, but their documentation and commercial-grade hardware were top-notch. I hope they find their footing and stay humble this time.

xrd yesterday at 9:32 PM

I feel like this is tangential to this conversation.

Does anyone know of a good tool for "load balancing" usage across local GPUs?

Why: I have two RTX 3090s (24GB). I've been using nvidia-smi to check usage of my RTX 3090. Mostly I'm running llama.cpp with unsloth/Qwen3.6-27B-GGUF:Q4_K_M and getting some pretty decent results for a self-hosted LLM (orchestrated via opencode). I'm surprised at how well it is working for a local model. nvidia-smi is great for determining total VRAM usage, and nvtop gives a little more insight.

But I'm also doing some experiments with other non-LLM models (video generation, etc.), and want to find a way to timeslice across these GPUs, for example when my coding is paused.

This "Utilyze" tool looks like it would give me better insight into the usage of one GPU. Can it be scripted to better utilize my GPUs across a diverse load?

Any suggestions on whether there are existing projects out there? I thought about vibe coding it, but wonder if there is prior art.
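One minimal by-hand approach, assuming `nvidia-smi`'s CSV query interface: before launching a job, pick the GPU with the most free VRAM and pin the job to it via CUDA_VISIBLE_DEVICES. The helper names below are hypothetical, and this is a sketch rather than a real scheduler.

```python
import os
import subprocess

def pick_least_loaded_gpu(csv_text):
    """Return the index of the GPU with the most free VRAM, given the output of
    `nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader,nounits`."""
    best_idx, best_free = None, -1
    for line in csv_text.strip().splitlines():
        idx, used, total = (int(x) for x in line.split(","))
        free = total - used
        if free > best_free:
            best_idx, best_free = idx, free
    return best_idx

def choose_gpu():
    """Query nvidia-smi and pin this process to the least-loaded GPU.
    Call this before initializing CUDA in your job."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,memory.used,memory.total",
         "--format=csv,noheader,nounits"], text=True)
    os.environ["CUDA_VISIBLE_DEVICES"] = str(pick_least_loaded_gpu(out))
```

Real timeslicing (preempting one workload for another) needs something heavier, like NVIDIA MPS or a job queue, but this covers the "place the next job on the emptier card" case.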

jhgg yesterday at 5:20 PM

We just track power utilization.

uberduper yesterday at 5:27 PM

There are a few dimensions you can look at for GPU load. Probably the easiest indirect metric to watch is power usage.

But if you really care about this, you should actually profile your application. Nsight Systems makes this pretty simple to do. Dunno how many people actually care about having a TUI.
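The power-as-a-proxy idea from the two comments above can be checked with `nvidia-smi`'s CSV query interface; this sketch (the function name is hypothetical) parses one line of `nvidia-smi --query-gpu=power.draw,power.limit --format=csv,noheader,nounits` into a draw-vs-limit fraction:

```python
def power_utilization(csv_line):
    """Parse 'power.draw, power.limit' (watts, no units) and return
    draw as a fraction of the board power limit."""
    draw, limit = (float(x) for x in csv_line.split(","))
    return draw / limit

# Example line as emitted with --format=csv,noheader,nounits:
print(power_utilization("285.3, 350.0"))  # -> ~0.815
```

Power tracks real work better than the kernel-activity metric, though it is still indirect: memory-bound kernels can draw high power at low compute throughput.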

Cynddl yesterday at 7:38 PM

This sounds super interesting and relevant. I run a small cluster with H100s (often research projects with vLLM) and being able to see not just usage but efficiency would be great.

I don't fully get the 100% utilisation vs. 1-10% real compute. Given you rely on telemetry from users to add new models, are you trying to predict how fast a model should be on vLLM, compared to how it runs in practice? What if users tweak some hyperparameters?

xtimecrystal yesterday at 4:30 PM

One small suggestion: add more GPU stats to your tool.

At the moment (v0.1.3) it is most helpful for compute visualization, but the lack of memory usage/processes/temperature/fan speed/etc. keeps it from becoming a full drop-in replacement for `nvidia-smi` for me.

nawi yesterday at 5:10 PM

Hi, many thanks. Can the tool run on NVIDIA Jetson and Orin, or is it just for server GPUs?

apitman yesterday at 8:06 PM

I believe recent versions of nvtop show efficiency, right?

vogje01 yesterday at 9:18 PM

Looks good for now.

I'll test it further.

latchkey yesterday at 6:06 PM

You mention rocm-smi in your blog post, but you don't actually support AMD GPUs?

SilentM68 yesterday at 7:29 PM

Great tool.

Just testing for now.

Are there removal instructions, or an uninstall function, for Utilyze beyond manually deleting the utilyze and utlz binaries from ~/.local/bin and /usr/local/bin and cleaning up PATH in ~/.profile? In particular, how do I revert the CAP_SYS_ADMIN capability and any other changes made?

324 yesterday at 8:30 PM

gdp means gross domestic product
