Hacker News

GAIA – Open-source framework for building AI agents that run on local hardware

133 points | by galaxyLogic | yesterday at 7:28 PM | 32 comments

Comments

coppsilgold | today at 12:11 AM

Nvidia went to a lot of effort to make CUDA operational on their entire lineup, and they did it before deep learning even took off.

You do this not because you expect consumers with 5-year-old hardware to provide meaningful utilization, but as a demo ("let me grab my old gaming machine and do some supercomputing real quick") and a signal that you intend to stay the course. AMD management hasn't realized this even after various Nvidia people said this was exactly why they did it. At some point, the absence of that signal is itself a signal that the AMD compute ecosystem is an unreliable investment, no?

xrd | yesterday at 10:12 PM

I wanted to believe, but anyone who has spent any time trying to run models locally knows this is not going to be solved by two lines of Python running on ROCm, as the example shows.

sabedevops | yesterday at 11:51 PM

ROCm is finally getting better thanks to a few well-meaning engineers.

But let’s be honest, AMD has been an extremely bad citizen to non-corporate users.

For my iGPU I have to fake GFX900 and build things from source or staging packages to get that working. Support for GFX90c is finally in the pipeline…
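For context, the "fake GFX900" trick usually means setting the `HSA_OVERRIDE_GFX_VERSION` environment variable so the ROCm runtime treats an unsupported Vega-based iGPU (such as gfx90c) as gfx900. A minimal sketch, assuming a ROCm-backed library like PyTorch is in use (the override must be set before that library is imported):

```python
import os

# Commonly reported workaround for unsupported Vega-based iGPUs (e.g. gfx90c):
# tell the ROCm runtime to treat the GPU as gfx900. This must be set before
# importing any ROCm-backed library (torch, tensorflow, etc.).
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "9.0.0"

# import torch  # only after the override is in place
```

Whether this works depends on how close the iGPU's ISA actually is to gfx900; it is an unsupported configuration, not an official AMD recommendation.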

The improvements feel like a bodyguard finally letting you through the door just because NVIDIA is eating their lunch and they don’t want their club to be empty.

They strong-arm their customers into using "Enterprise" GPUs to be able to play with ROCm, and are only broadening their offerings for market-share purposes.

Really shouldn’t reward this behavior.

madbo1 | today at 6:37 AM

This seems quite significant. Many people still think of AI as something cloud-bound, but that comes with limitations: latency, cost and, most importantly, lack of control.

By moving AI agents into an execution environment where they work locally, you get deterministic execution and reduced latency, and you avoid constantly transferring information to remote clouds. In certain scenarios, such as building a personal assistant or implementing automation routines, this makes a huge difference.

The problem is not only running the model locally (that seems increasingly easy to achieve with developments like Ollama) but also managing multiple agents and coordinating them in a way that doesn't require powerful hardware.
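To illustrate how easy the "running the model locally" half has become, here is a minimal sketch against Ollama's documented local HTTP API (assumptions: an Ollama server on its default port 11434, and an illustrative model name):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

# With a server running, one round trip gets a completion:
# req = build_request("llama3", "Summarize my meeting notes.")
# answer = json.loads(urllib.request.urlopen(req).read())["response"]
```

The hard part the comment points at, coordinating several such agents on modest hardware, is exactly what a single request/response call like this does not solve.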

If GAIA manages to simplify this enough to make local execution of multiple AI agents feasible, it could well drive a transition from 'AI as a service' to 'AI as personal infrastructure'.

Pretty exciting stuff, really.

galaxyLogic | yesterday at 7:34 PM

It's not so clear from their page, but from

https://www.tipranks.com/news/amd-stock-slips-despite-a-majo...

I read:

" In addition to that, the update allows these agents to be turned into desktop apps for multiple operating systems. "

This seems like a new way to create apps: create an (AI) app that creates apps.

0xbadcafebee | today at 1:25 AM

I would love to use your tool locally, AMD, if you'd support the AMD graphics card you sold me.

warwickmcintosh | yesterday at 10:42 PM

ROCm has improved, but the reality is you're still fighting the driver stack more than the models. If you're actually doing local inference on AMD, you're spending your time on CUDA compatibility layers, not the AI part. Two lines of Python is marketing; the gap between the demo and a working AMD setup is still real.

Mars008 | yesterday at 9:50 PM

In case you are interested:

Requirement | Minimum

Processor | AMD Ryzen AI 300-series
