What you’re describing is more like whole cell simulation. Whole cells are thousands of times larger than a protein and cellular processes can take days to finish. Cells contain millions of individual proteins.
So we just can't simulate all the individual proteins; it's far too costly, and it may remain that way permanently.
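To put rough numbers on "far too costly": a back-of-envelope sketch, assuming all-atom molecular dynamics with a typical 2 fs timestep and ballpark system sizes (both figures are my assumptions, not from the comment above):

```python
# Back-of-envelope cost of an all-atom whole-cell simulation.
# All figures are order-of-magnitude assumptions.

timestep_s = 2e-15        # typical MD integration timestep: 2 femtoseconds
sim_time_s = 24 * 3600    # one day of cellular time, in seconds
steps = sim_time_s / timestep_s

atoms_in_protein = 1e4    # a mid-sized protein, roughly 10^4 atoms
atoms_in_cell = 1e10      # a small bacterium with water, roughly 10^10 atoms

print(f"timesteps needed: {steps:.2e}")   # ~4.3e19 steps for one day
print(f"system size ratio: {atoms_in_cell / atoms_in_protein:.0e}")
```

Roughly 10^19 timesteps on a system a million times larger than a single protein, which is why "way too costly" is an understatement rather than pessimism.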
The problem is that biology is insanely tightly coupled across scales. Cancer is the prototypical example. A single mutated letter in the DNA of a single cell can cause a tumor that kills a blue whale. And it works the other way too. Big changes like changing your diet get funneled down into epigenetic molecular changes to your DNA.
Basically, we have to at least consider molecular detail when simulating things as large as a whole cell. With machine learning tools and enough data we can learn some common patterns, but I think both physical and machine learned models are always going to smooth over interesting emergent behavior.
Also you’re absolutely correct about not being able to “see” inside cells. But, the models can only really see as far as the data lets them. So better microscopes and sequencing methods are going to drive better models as much as (or more than) better algorithms or more GPUs.
Side note: whales rarely get cancer.
Personally, I think Arc's approach is more likely to produce usable scientific results in a reasonable amount of time. You would have to make a very coarse model of the cell to get any reasonable amount of sampling, and you would probably spend huge amounts of time computing things which are not relevant to the properties you care about. An embedding and graphical model seems well-suited to problems like this, as long as the underlying data is representative and comprehensive.
Edit: Never mind, I've googled the answer.
On a time frame of millions or billions of years, the organisms with the flexibility of ncRNA would have an advantage, but this is extremely hard to figure out from a "single point in time" viewpoint.
Anyway, that was the basic lesson I took from studying non-coding RNA 10 years ago. Projects like ENCODE definitely helped, but they really just exposed transcription of elements that are noisy, without providing the evidence that any of it is actually "functional". Therefore, I'm skeptical that more of the same approach will be helpful, but I'd be pleasantly surprised if wrong.
For example, we don't keep transposons because they're useful in general. They make up almost half of our genomes and are a major source of disruptive variation. They persist because we're just not very good at preventing them from spreading: we have some suppressive mechanisms, but they don't work all the time, and there's a bit of an arms race between transposons and host. Nonetheless, they can occasionally provide variation that is beneficial.
Can't emphasize enough how much working with DNA requires human data curation to make things work; even from day one, alignment models were driven by biological observations. Glad to see UBERON, which represents a massive amount of human insight and data curation and is for all intents and purposes a semantic-web product (OWL-based RDF at its heart), playing a significant role.
I’d pitch this paper as a very solid demonstration of the approach, and I'm sure it will lead to some pretty rapid developments (similar to what RoseTTAFold/AlphaFold did).
For instance, Evo2 by the Arc Institute is a DNA Foundation Model that can do some really remarkable things to understand/interpret/design DNA sequences, and there are now multiple open weight models for working with biomolecules at a structural level that are equivalent to AlphaFold 3.
To a man with a hammer…
There are technologies applicable broadly, across all business segments. Heat engines. Electricity. Liquid fuels. Gears. Glass. Plastics. Digital computers. And yes, transformers.
I parted ways with Google a while ago (sundar is a really uninspiring leader), and was never able to transfer into DeepMind, but I have to say that they are executing on my goals far better than I ever could have. It's nice to see ideas that I had germinating for decades finally playing out, and I hope these advances lead to great discoveries in biology.
It will take some time for the community to absorb this most recent work. I skimmed the paper and it's a monster, there's just so much going on.
I understand, but he made Google a cash machine. In the last quarter BEFORE he became CEO in 2015, Google made a quarterly profit of around $3B. Q1 2025 was $35B. Roughly 10x profit growth at this scale is unprecedented; the numbers are inspiring by themselves, and that's his job. He made mistakes, sure, but he stuck to Google's big gun, ads, and it paid off. The transition to AI started late, but Gemini is super competitive overall. DeepMind has been doing great as well.
Sundar is not a hypeman like Sam or Cook, but he delivers. He is very underrated imo.
Google's revenue in 2014 was $75B and in 2024 it was $348B. That's 4.64x growth in 10 years, or about 3.1x corrected for inflation.
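The nominal figure checks out; the inflation-adjusted one implies roughly 50% cumulative inflation over the decade, which is a deflator backed out from the quoted numbers, not an official CPI figure:

```python
rev_2014 = 75.0    # Google revenue 2014, in $B (from the comment above)
rev_2024 = 348.0   # Google revenue 2024, in $B (from the comment above)

nominal_growth = rev_2024 / rev_2014
print(f"nominal growth: {nominal_growth:.2f}x")   # 4.64x

# The quoted 3.1x "real" figure implies this cumulative price-level change:
implied_deflator = nominal_growth / 3.1
print(f"implied cumulative inflation: {implied_deflator - 1:.0%}")
```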
And during this time, Google failed to launch any significant new revenue source.
If by competitive you mean "we spent $75 billion and now have a middle-of-the-pack model somewhere between Anthropic and a Chinese startup", that's a generous way to put it.
I have incredibly mixed feelings on Sundar. Where I can give him credit is really investing in AI early on, even if they were late to productize it, they were not late to invest in the infra and tooling to capitalize on it.
I also think people are giving maybe a little too much credit to Demis and not enough to Jeff Dean for the massive amount of AI progress they've made.
One interesting example of such a problem and why it is important to solve it was recently published in Nature and has led to interesting drug candidates for modulating macrophage function in autoimmunity: https://www.nature.com/articles/s41586-024-07501-1
There is a concerning gap between prediction and causality. In problems like this one, where lots of variables are highly correlated, prediction methods that have only an implicit notion of causality don't perform well.
Right now, SOTA seems to use huge population data to infer causality within each linkage block of interest in the genome. These types of methods are quite close to Pearl's notion of causal graphs.
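As a toy illustration of the correlated-variables problem (synthetic data, not the genomics setting): two tightly "linked" variables x1 and x2, where only x1 actually drives the trait y. A naive marginal regression credits x2 with a strong effect; a joint regression, in the spirit of Pearl-style adjustment, isolates the real driver.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two tightly "linked" variables: only x1 actually drives y.
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)   # ~99% correlated with x1
y = 2.0 * x1 + rng.normal(size=n)    # true causal effect of x1 is 2.0

# Naive marginal regression: x2 alone looks strongly "predictive" of y.
naive_slope = np.polyfit(x2, y, 1)[0]

# Joint regression, adjusting each variable for the other:
X = np.column_stack([x1, x2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"naive slope for x2: {naive_slope:.2f}")   # near 2 -- misleading
print(f"joint coefs (x1, x2): {coef[0]:.2f}, {coef[1]:.2f}")  # ~2 and ~0
```

A pure predictor is happy with either variable, since both predict y about equally well; only a method that models the joint structure can say which one to intervene on.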
Please Google/Demis/Sergey, just release the darn weights. This thing ain't gonna be curing cancer sitting behind an API, and it's not gonna generate that much GCloud revenue when the model is this tiny.
You can state as a philosophical ideal that you prefer open source or open weights, but that's not something DeepMind has ever prioritized.
I think it's worth discussing:
* What are the advantages or disadvantages of bestowing a select few with access?
* What about having an API that can be called by anyone (although they may ban you)?
* Vs finally releasing the weights
But I think "behind locked down API where they can monitor usage" makes sense from many perspectives. It gives them more insight into how people use it (are there things people want to do that it fails at?), and it potentially gives them additional training data
But the submission blog post writes:
> To advance scientific research, we’re making AlphaGenome available in preview via our AlphaGenome API for non-commercial research, and planning to release the model in the future. We believe AlphaGenome can be a valuable resource for the scientific community, helping scientists better understand genome function, disease biology, and ultimately, drive new biological discoveries and the development of new treatments.
And at that point, they're painting this release as something they did in order to "advance scientific research" and because they believe "AlphaGenome can be a valuable resource".
So now they're at a crossroads: is this release actually for advancing scientific research, and if so, why aren't they doing it in a way that actually maximizes the advancement of scientific research? Which I think is the point of the parent's comment.
Even the most basic principle for doing research, being able to reproduce something, goes out the window when you put it behind an API, so personally I doubt their ultimate goal here is to serve the scientific community.
Edit: Reading further comments, it seems they've at least claimed they want to do a model+weights release of this (from the paper: "The model source code and weights will also be provided upon final publication."), so it remains to be seen whether they'll go through with it.
The precedent I'm going with is specifically in the gene regulatory realm.
Furthermore, a weight release would allow others to finetune the model on different datasets and/or organisms.
Page 59 from the preprint[1]
Seems like they do intend to publish the weights actually
[1]: https://storage.googleapis.com/deepmind-media/papers/alphage...
And if they don't, I'm not sure how this will gain adoption. There are tons of well-maintained and established workflows out there, in the cloud and on-prem, that do all of these things AlphaGenome claims to do very well - many that Google promotes on their own platform (e.g., GATK on GCP).
(People in tech assume people in science are like people in tech and just jump on the latest fads from Big Tech marketing, when it's quite the opposite: it's all about whether your results/methods will please the reviewers in your niche community.)
Apple's "life saving" Apple watch features are only accessible on premium devices. "Privacy is a human right" is also only possible if you buy their devices. It doesnt go around making it free to everyone, and nobody seem to be saying "if you believe in that, then why dont you make it accessible for people from all socio-economic classes?"
This is in the press release, so they are going to release the weights.
So out of my own frustration, I drew this. It's a cross-section of a single base pair, as if you are looking straight down the double helix.
Aka, picture a double strand of DNA as an earthworm. If one of the earthworm's segments is a base pair, and you cut the earthworm in half, turn it 90 degrees, and look into the body of the worm, you'd see this cross-sectional perspective.
Apologies for overly detailed explanation; it's for non-bio and non-chem people. :)
https://www.instagram.com/p/CWSH5qslm27/
Anyway, I think the way base pairs bond forces the major and minor groove structure observed in B-DNA.
My graduate thesis was basically simulating RNA and DNA duplexes in boxes of water for long periods of time (if you can call 10 nanoseconds "long"), and RNA could get stuck for very long stretches in the "wrong" conformation (i.e., not what we see in reality) due to phosphate / 2'-sugar-hydroxyl interactions.
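For a sense of what "long" means in MD terms, here is the step count for that trajectory, assuming the common 2 fs integration timestep (the comment doesn't state the actual timestep used):

```python
timestep_fs = 2        # common MD integration step, in femtoseconds (assumed)
trajectory_ns = 10     # the "long" trajectory from the thesis work

# 1 nanosecond = 1e6 femtoseconds
steps = trajectory_ns * 1e6 / timestep_fs
print(f"{steps:.0e} integration steps for {trajectory_ns} ns")
```

Five million sequential force evaluations to cover ten billionths of a second, which is why sampling rare conformational transitions is so painful.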
At least they got the handedness right.
> AlphaGenome will be available for non-commercial use via an online API at http://deepmind.google.com/science/alphagenome
So, essentially the paper is a sales pitch for a new Google service.