
Framework for Artificial Intelligence Diffusion

147 points by chriskanan last Thursday at 8:00 PM | 127 comments

Comments

chriskanan last Thursday at 9:04 PM

I have no idea if comments actually have any impact, but here is the comment I left on the document:

I am Christopher Kanan, a professor and AI researcher at the University of Rochester with over 20 years of experience in artificial intelligence and deep learning. Previously, I led AI research and development at Paige, a medical AI company, where I worked on FDA-regulated AI systems for medical imaging. Based on this experience, I would like to provide feedback on the proposed export control regulations regarding compute thresholds for AI training, particularly models requiring 10^26 computational operations.

The current regulation seems misguided for several reasons. First, it assumes that scaling models automatically leads to something dangerous. This is a flawed assumption, as simply increasing model size and compute does not necessarily result in harmful capabilities. Second, the 10^26 operations threshold appears to be based on what may be required to train future large language models using today’s methods. However, future advances in algorithms and architectures could significantly reduce the computational demands for training such models. It is unlikely that AI progress will remain tied to inefficient transformer-based models trained on massive datasets. Lastly, many companies trying to scale large language models beyond systems like GPT-4 have hit diminishing returns, shifting their focus to test-time compute. This involves using more compute to "think" about responses during inference rather than in model training, and the regulation does not address this trend at all.

Even if future amendments try to address test-time compute, the proposed regulation seems premature. There are too many unknowns in future AI development to justify using a fixed compute-based threshold as a reliable indicator of potential risk. Instead of focusing on compute thresholds or model sizes, policymakers should focus on regulating specific high-risk AI applications, similar to how the FDA regulates AI software as a medical device. This approach targets the actual use of AI systems rather than their development, which is more aligned with addressing real-world risks.

Without careful refinement, these rules risk stifling innovation, especially for small companies and academic researchers, while leaving important developments unregulated. I urge policymakers to engage with industry and academic experts to refocus regulations on specific applications rather than broadly targeting compute usage. AI regulation must evolve with the field to remain effective and balanced.

---

Of course, I have no skin in the game since I barely have any compute available to me as an academic, but the proposed rules on compute just don't make any sense to me.

chriskanan last Thursday at 8:05 PM

The most salient thing in the document is that it puts export controls on releasing the weights of models trained with 10^26 operations. While there may be some errors in my math, I think that corresponds to training a model with over 70,000 H100s for a month.
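(For reference, a rough sketch of that arithmetic in Python. The per-GPU throughput and utilization figures below are assumptions, not numbers from the rule, and the answer moves considerably with either.)

    # Back-of-envelope check of the 10^26-operation threshold.
    # Assumptions: ~1e15 FLOP/s peak per H100 (BF16 dense) and ~50%
    # utilization over a 30-day month; neither number comes from the rule.
    H100_PEAK_FLOPS = 1e15
    UTILIZATION = 0.5
    SECONDS_PER_MONTH = 30 * 24 * 3600
    THRESHOLD_OPS = 1e26

    ops_per_gpu_month = H100_PEAK_FLOPS * UTILIZATION * SECONDS_PER_MONTH
    print(f"H100s for one month: {THRESHOLD_OPS / ops_per_gpu_month:,.0f}")
    # -> ~77,000, the same ballpark as the estimate above

    # Chinchilla-style scale check (compute ~ 6 * params * tokens, also an
    # assumption): a hypothetical 1T-parameter model trained on 15T tokens
    # needs 6 * 1e12 * 15e12 = 9e25 ops, just under the threshold.
    print(f"1T params x 15T tokens: {6 * 1e12 * 15e12:.1e} ops")

A higher assumed utilization pushes the GPU count down toward the ~70,000 figure above; a lower one pushes it toward 100,000 or more.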

I personally think the regulation is misguided, as it assumes we won't identify better algorithms/architectures. There is no reason to assume that this level of compute is what produces dangerous capabilities.

Moreover, given the emphasis on test-time compute nowadays, and that many companies seem to have hit a wall trying to scale LLMs at train time, this regulation doesn't seem especially meaningful.

geuis last Thursday at 8:47 PM

This smells a lot like the misguided crypto export laws in the 90s that hampered browser security for years.

cube2222 last Thursday at 9:10 PM

It’s worth noting that this splits countries into three tiers: the first with no restrictions, the second with medium restrictions, and the third with harsh restrictions.

And the second tier, for some reason, includes (among others) a bunch of countries that would normally be seen as close US allies, e.g. some NATO countries (most of Central/Eastern Europe).

casebash yesterday at 3:14 PM

Most of the comments here only make sense under a model where AI isn't going to become extremely powerful in the near term.

If you think upcoming models aren't going to be very powerful, then you'll probably default to business-as-usual positions, such as rejecting any policy that isn't perfect or insisting on a high bar of evidence before regulating.

On the other hand, if you have a world model where AI is going to hand malicious actors extremely powerful and dangerous technologies within the next few years, then instead of being radical, proposals like this start to appear extremely timid.

intunderflow last Thursday at 8:37 PM

"We're sorry, an error has occurred. A general error occurred while processing your request."

mlfreeman last Thursday at 8:47 PM

What do the regulators writing this intend for it to slow down or stop?

I can't seem to find any information about that anywhere.

clhodapp last Thursday at 9:20 PM

One interesting geopolitical fact about this document that's not being discussed much is the way it includes Taiwan in lists of "countries".

Usually, the US government tries not to do that.

veggieroll last Thursday at 9:27 PM

The compute limit is dead on arrival, because models are becoming more capable with less training anyway. (See DeepSeek, Phi-4.)

resters last Thursday at 8:34 PM

Strong opposition to this regulation seems to be one of the main things that led a16z, Oracle, etc. to go all in for Donald Trump. It's interesting that Meta, too, fought the regulation with its unprecedented open-sourcing of model weights.

Regardless of who is currently ahead, China has its own GPUs and a lot of very smart people figuring out algorithmic and model-design optimizations, so China will likely take a clear lead within 1-2 years, in both hardware and model design.

This rule is likely not going to be effective at its intended purpose, and it will prevent peaceful collaboration between US and Chinese firms, the kind of collaboration that helps prevent war.

The US is moving toward a system where government controls and throttles technology and picks winners. We should all fight to stop this.

pjmlp last Thursday at 9:32 PM

It is going to be like it was in the 1990s with PGP and such all over again.

ChrisArchitect last Thursday at 9:42 PM

Related:

WH Executive Order Affecting Chips and AI Models

https://news.ycombinator.com/item?id=42683251

miovoid yesterday at 6:57 AM

Perhaps this regulation will be a major force for next-gen symbolic AI systems.

United857 last Thursday at 9:48 PM

It seems some EU countries are unrestricted but others aren't. How is this compatible with the EU single market/customs union?

wslh last Thursday at 9:15 PM

This feels like déjà vu from the crypto wars (1990s). If that experience is any guide, it is impossible to repress knowledge without violence, and trying motivates more people to hack the system. Good times ahead: "PGP released its source code as a book to get around US export law" <https://news.ycombinator.com/item?id=7885238>

neilv last Thursday at 8:35 PM

This `regulations.gov` site is leaking info on who accesses what to Google (via the `www.google-analytics.com` tracker).

There should be a federal regulation about that.
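(A minimal way to spot-check that claim, sketched in Python. It only catches trackers referenced in the page's initial HTML, so a script-heavy site may still inject them dynamically, which a browser's network inspector would show; the Google Tag Manager hostname is an extra guess beyond the tracker named above.)

    # Fetch the page and look for tracker hostnames in the served HTML.
    import urllib.request

    URL = "https://www.regulations.gov/"
    # Some sites reject urllib's default user agent, so send a browser-like one.
    req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
    html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")

    for tracker in ("www.google-analytics.com", "googletagmanager.com"):
        print(tracker, "->", "present" if tracker in html else "not in static HTML")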

saberience yesterday at 12:48 PM

What’s the point of this? Isn’t Trump going to just cancel it immediately on Monday?

I don’t see how we can assume it will be enacted at all.

chriskanan last Thursday at 9:08 PM

I'm not sure why the link no longer works, but this one does. The submission should be updated to point to it: https://www.federalregister.gov/documents/2025/01/15/2025-00...
