Hacker News

DaiPlusPlus · 04/28/2025 · 2 replies

While I (loosely) understand the concept of using a custom (foundational?) machine-learning model to explore some problem-space and devise solutions, I don't understand why it says they used "AI" to "visualize" a structure. A layperson is going to think they simply asked ChatGPT to solve the problem for them and it just worked and now OpenAI owns the cure for Alzheimer's.

...I ask because bio/chem visualization and simulation were solved problems back in the 1980s (...back when bad TV shows used renders of spinning organic-chemistry hexagons on the protagonist's computer as a visual metaphor for doing science!).


Replies

HarHarVeryFunny · 05/01/2025

The popular press likes to call anything ML-related "AI", presumably because that's what they think the public wants to read about.

In any case, the percentage of the population that knows the difference between a transformer, a diffusion model, and a bespoke protein-folding model is tiny, so calling it all "AI" makes practical sense despite being a bit misleading (it's not all ChatGPT).

Just to be clear though, the "AI" (AlphaFold) isn't being used to visualize the protein - it's used to predict the protein's 3-D structure from its amino-acid sequence, which can then be visualized.
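
As a concrete sketch of that split (the AlphaFold DB URL pattern, the UniProt accession P05067 for the amyloid precursor protein, and the py3Dmol viewer are assumptions for illustration, not anything from the article): the prediction step is the hard part and has already been published to a database, while the visualization step is a few lines with any off-the-shelf molecular viewer.

    # Sketch of the two steps described above. The DB URL pattern, the
    # accession, and the viewer library are assumptions, not from the article.
    import requests
    import py3Dmol

    # Step 1: fetch a precomputed AlphaFold prediction (a plain PDB file of 3-D coordinates).
    url = "https://alphafold.ebi.ac.uk/files/AF-P05067-F1-model_v4.pdb"
    pdb_text = requests.get(url, timeout=30).text

    # Step 2: visualization is the decades-old easy part - just render the coordinates.
    view = py3Dmol.view(width=600, height=450)
    view.addModel(pdb_text, "pdb")
    view.setStyle({"cartoon": {"color": "spectrum"}})
    view.zoomTo()
    view.show()  # displays inline in a Jupyter notebook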

tibbar · 04/28/2025

Protein folding is one of the oldest and hardest problems in computational biology. It is fair to describe the result of protein folding as a 3D model/visualization of the protein. DeepMind's AlphaFold was a big breakthrough in predicting how arbitrary protein sequences fold. It isn't always correct, but when it is, it's often faster and cheaper than traditional methods. I believe the latest versions of AlphaFold incorporate transformers, but it's certainly not a large language model like ChatGPT.
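
To make the same point concretely (a sketch assuming Biopython's Bio.PDB parser and a locally downloaded AlphaFold model file, neither mentioned in the thread): the output of structure prediction is just a table of 3-D atomic coordinates, which any viewer, including the 1980s-era ones mentioned above, can render.

    # Sketch: an AlphaFold result is a PDB file of atomic coordinates.
    # Biopython's parser and the local filename are assumptions here.
    from Bio.PDB import PDBParser

    structure = PDBParser(QUIET=True).get_structure("model", "AF-P05067-F1-model_v4.pdb")

    # Pull out one (x, y, z) coordinate per residue (the alpha-carbon backbone).
    backbone = [
        residue["CA"].get_coord()   # numpy array of 3 floats
        for residue in structure.get_residues()
        if "CA" in residue          # skip waters/ligands without an alpha carbon
    ]
    print(f"{len(backbone)} residues, first CA at {backbone[0]}")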