What you’re describing is more like whole cell simulation. Whole cells are thousands of times larger than a protein and cellular processes can take days to finish. Cells contain millions of individual proteins.
So we just can't simulate all the individual proteins; it's way too costly, and it may permanently remain that way.
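For a rough sense of just how costly, here's a back-of-envelope estimate (every number below is an order-of-magnitude assumption, not a measurement): all-atom molecular dynamics of even a small bacterium over one cell cycle works out to something like 10^29 atom-timesteps.

```python
# Back-of-envelope: why all-atom simulation of a whole cell is out of reach.
# Every number here is a rough order-of-magnitude assumption.

proteins_per_cell = 3e6    # millions of proteins in a small bacterium
atoms_per_protein = 5e3    # a typical few-hundred-residue protein
solvent_overhead = 10      # explicit water dwarfs the solute

atoms = proteins_per_cell * atoms_per_protein * solvent_overhead

timestep_s = 2e-15         # ~2 femtosecond MD timestep
cell_cycle_s = 3600        # ~1 hour for a fast-dividing bacterium
steps = cell_cycle_s / timestep_s

print(f"atoms to simulate: {atoms:.0e}")          # ~2e11
print(f"timesteps needed:  {steps:.0e}")          # ~2e18
print(f"atom-timesteps:    {atoms * steps:.0e}")  # ~3e29
```

State-of-the-art MD tops out around billions of atoms for microseconds of simulated time, so this is many orders of magnitude out of reach no matter how clever the software gets.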
The problem is that biology is insanely tightly coupled across scales. Cancer is the prototypical example. A single mutated letter in DNA in a single cell can cause a tumor that kills a blue whale. And it works the other way too: big changes like changing your diet get funneled down to epigenetic molecular changes to your DNA.
Basically, we have to at least consider molecular detail when simulating things as large as a whole cell. With machine learning tools and enough data we can learn some common patterns, but I think both physical and machine learned models are always going to smooth over interesting emergent behavior.
Also you’re absolutely correct about not being able to “see” inside cells. But, the models can only really see as far as the data lets them. So better microscopes and sequencing methods are going to drive better models as much as (or more than) better algorithms or more GPUs.
Side note: whales rarely get cancer.
Personally, I think Arc's approach is more likely to produce usable scientific results in a reasonable amount of time. You would have to make a very coarse model of the cell to get any reasonable amount of sampling, and you would probably spend huge amounts of time computing things which are not relevant to the properties you care about. An embedding and graphical model seems well-suited to problems like this, as long as the underlying data is representative and comprehensive.
Edit: Never mind, I've googled the answer.
On a timescale of millions or billions of years, organisms with the flexibility of ncRNA would have an advantage, but this is extremely hard to figure out from a "single point in time" viewpoint.
Anyway, that was the basic lesson I took from studying non-coding RNA 10 years ago. Projects like ENCODE definitely helped, but they really just exposed transcription of elements that are noisy, without providing the evidence that any of it is actually "functional". Therefore, I'm skeptical that more of the same approach will be helpful, but I'd be pleasantly surprised if wrong.
For example, we don't keep transposons around in general because they're useful; they make up almost half of our genomes and are a major source of disruptive variation. They persist because we're just not very good at preventing them from spreading: we have some suppressive mechanisms, but they don't work all the time, and there's a bit of an arms race between transposons and host. Nonetheless, they can occasionally provide variation that is beneficial.
I can't emphasize enough how much DNA analysis requires human data curation to make things work; even from day one, alignment models were driven by biological observations. Glad to see UBERON, which represents a massive amount of human insight and data curation of what is for all intents and purposes a semantic-web product (OWL-based RDF at the heart), playing a significant role.
I’d pitch this paper as a very solid demonstration of the approach, and I'm sure it will lead to some pretty rapid developments (similar to what RoseTTAFold/AlphaFold did).
For instance, Evo2 by the Arc Institute is a DNA Foundation Model that can do some really remarkable things to understand/interpret/design DNA sequences, and there are now multiple open weight models for working with biomolecules at a structural level that are equivalent to AlphaFold 3.
To a man with a hammer…
There are technologies applicable broadly, across all business segments. Heat engines. Electricity. Liquid fuels. Gears. Glass. Plastics. Digital computers. And yes, transformers.
I parted ways with Google a while ago (Sundar is a really uninspiring leader), and was never able to transfer into DeepMind, but I have to say that they are executing on my goals far better than I ever could have. It's nice to see ideas that I had germinating for decades finally playing out, and I hope these advances lead to great discoveries in biology.
It will take some time for the community to absorb this most recent work. I skimmed the paper and it's a monster, there's just so much going on.
I understand, but he made Google a cash machine. In the last quarter BEFORE he became CEO in 2015, Google made a quarterly profit of around $3B. Q1 2025 was $35B. Roughly 10x profit growth at this scale is, well, unprecedented; the numbers are inspiring by themselves, and that's his job. He made mistakes, sure, but he stuck to Google's big gun, ads, and it paid off. The transition to AI started late, but Gemini is super competitive overall. DeepMind has been doing great as well.
Sundar is not a hypeman like Sam or Cook, but he delivers. He is very underrated imo.
Satya looked like a genius last year with the OpenAI partnership, but it is becoming increasingly clear that MS has no strategy. Nobody is using GitHub Copilot (the pioneer) or MS Copilot (a joke). They don't have any foundational models, nor a consumer product. Bing is still... Bing, and has barely gained any market share.
100% it's Demis.
A Demis vs. Satya setup would be one for the ages.
Google's revenue in 2014 was $75B and in 2024 it was $348B; that's 4.64 times growth in 10 years, or 3.1 times if corrected for inflation.
And during this time, Google failed to launch any significant new revenue source.
Haven't you been watching the headlines here on HN? The volume of major high-quality Google AI releases has been almost shocking.
And, they've got the best data.
If by competitive you mean "we spent $75 billion and now have a middle-of-the-pack model somewhere between Anthropic and a Chinese startup", that's a generous way to put it.
I’m no Google lover — in fact I’m usually a detractor due to the overall enshittification of their products — but denying that Gemini tops the pile right now is pure ignorance.
I have incredibly mixed feelings on Sundar. Where I can give him credit is really investing in AI early on: even if they were late to productize it, they were not late to invest in the infra and tooling to capitalize on it.
I also think people are giving maybe a little too much credit to Demis and not enough to Jeff Dean for the massive amount of AI progress they've made.
One interesting example of such a problem and why it is important to solve it was recently published in Nature and has led to interesting drug candidates for modulating macrophage function in autoimmunity: https://www.nature.com/articles/s41586-024-07501-1
There is a concerning gap between prediction and causality. In problems like this one, where lots of variables are highly correlated, prediction methods that only have an implicit notion of causality don't perform well.
Right now, SOTA seems to use huge population data to infer causality within each linkage block of interest in the genome. These types of methods are quite close to Pearl's notion of causal graphs.
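As a toy illustration of that gap (a minimal sketch assuming only numpy, with simulated data rather than any of the actual fine-mapping methods): two SNPs in tight LD both "predict" the phenotype almost equally well, and only a joint model over the block starts to separate the causal variant from the tag.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two SNPs in tight linkage disequilibrium: snp_b is mostly a copy of snp_a.
snp_a = rng.binomial(2, 0.3, n)                   # the truly causal variant
snp_b = np.where(rng.random(n) < 0.95, snp_a,
                 rng.binomial(2, 0.3, n))         # highly correlated tag SNP
phenotype = 0.5 * snp_a + rng.normal(0, 1, n)     # only snp_a has an effect

# Marginal, prediction-style association: both variants look "causal".
for name, snp in [("snp_a", snp_a), ("snp_b", snp_b)]:
    r = np.corrcoef(snp, phenotype)[0, 1]
    print(f"marginal correlation with phenotype, {name}: {r:.3f}")

# Joint model over the block (closer in spirit to fine-mapping):
# regress on both SNPs at once; the tag SNP's effect collapses toward zero.
X = np.column_stack([np.ones(n), snp_a, snp_b])
beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
print(f"joint effects: snp_a={beta[1]:.3f}, snp_b={beta[2]:.3f}")
```

Real methods do this kind of conditioning at scale across whole linkage blocks, with priors on how many causal variants a block can contain, which is what pushes them toward Pearl-style causal structure rather than pure prediction.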
This has existed for at least a decade, maybe two.
> There is a concerning gap between prediction and causality.
Which can be bridged with protein prediction (AlphaFold) and non-coding regulatory predictions (AlphaGenome), amongst all the other tools that exist.
What is it that does not exist that you "found it disappointing that they ignored"?
Please Google/Demis/Sergey, just release the darn weights. This thing ain't gonna be curing cancer sitting behind an API, and it's not gonna generate that much GCloud revenue when the model is this tiny.
You can state as a philosophical ideal that you prefer open source or open weights, but that's not something deepmind has prioritized ever.
I think it's worth discussing:
* What are the advantages or disadvantages of bestowing a select few with access?
* What about having an API that can be called by anyone (although they may ban you)?
* Vs. finally releasing the weights?
But I think "behind locked down API where they can monitor usage" makes sense from many perspectives. It gives them more insight into how people use it (are there things people want to do that it fails at?), and it potentially gives them additional training data
But the submitted blog post says:
> To advance scientific research, we’re making AlphaGenome available in preview via our AlphaGenome API for non-commercial research, and planning to release the model in the future. We believe AlphaGenome can be a valuable resource for the scientific community, helping scientists better understand genome function, disease biology, and ultimately, drive new biological discoveries and the development of new treatments.
And at that point, they're painting this release as something they did in order to "advance scientific research" and because they believe "AlphaGenome can be a valuable resource".
So now they're at a crossroads: is this release actually for advancing scientific research, and if so, why aren't they doing it in a way that actually maximizes the advancement of scientific research? I think that's the point of the parent's comment.
Even the most basic principle for doing research, being able to reproduce something, goes out the window when you put it behind an API, so personally I doubt their ultimate goal here is to serve the scientific community.
Edit: Reading further comments it seems like they've at least claimed they want to do a model+weights release of this though (from the paper: "The model source code and weights will also be provided upon final publication.") so remains to be seen if they'll go through with it or not.
Similarly with AlphaGo: they claimed to do it "to advance Go" and help the Go community, but they played Lee Sedol, released a few curated self-play games, collected the publicity, and abandoned Go with no artifacts like source or weights.
But in hindsight their paper turned out to be almost 100% reproducible and resulted in a super-human open-source alternative less than a year later.
So the story might repeat here, and they will achieve the stated goal without releasing anything.
If you look at the frenzy of activity that happened after Midjourney became accessible, that was awesome for everyone. Midjourney probably got help running their model efficiently, and a ton of progress was quickly made.
I'm pretty sympathetic to a company doing a windowing strategy: prepare the API as a sort of beta release timed with the announcement. Spend some time cleaning up the code for public release (at Google this means ripping out internal dependencies that aren't open source), and then release a reference inference implementation along with the weights.
That's pretty reasonable. I wanted to push back on this idea that "the reason Google isn't dropping model + weights is because the corporate screws are coming down hard"
Google isn't waiting to release the weights so that they can profit from this. It's essentially the first step in the process, and serving via API gives them valuable usage data that they might not get if/when it's open sourced.
I think companies in the space should either totally open source or not publish at all.
I can see publishing like this as achieving one (or more) of several objectives:
1. Marketing software for sales / licensing
2. Marketing startup to investors
3. Crowdsourcing use cases or product features from academia
Now here are the problems with those:
1. Selling software (exclusively) to drug companies is a terrible business model. Very low ceiling there. You can make more from one drug.
2. Indicates company focus is producing models and not drugs. See point one.
3. Computational labs want to release open source, so it's not viable to build on restricted tooling. Experimental labs may just be using it to algo-wash prior hypotheses / biases.
Now weigh against disadvantage of letting competitors know what you are working on, how far you have progressed, as well as your methods.
I’d argue that the product providing some monetary value for Google will help ensure that this team doesn't get moved to some more profitable project instead. That way they can continue improving this tool and make more tools like it in the future.
The precedent I'm going with is specifically in the gene regulatory realm.
Furthermore, a weight release would allow others to finetune the model on different datasets and/or organisms.
This is a real tradeoff of freedom vs _. I agree that I'm not always a fan of Google being the one in control, but I'm much happier that they are even releasing an API. That's not something they did for Go! (Of course there was a book written, so someone got access.)
Page 59 from the preprint[1]
Seems like they do intend to publish the weights, actually.
[1]: https://storage.googleapis.com/deepmind-media/papers/alphage...
And if they don't, I'm not sure how this will gain adoption. There are tons of well-maintained and established workflows out there in the cloud and on-prem that already do the things AlphaGenome claims to do very well - many that Google promotes on their own platform (e.g., GATK on GCP).
(People in tech think people in science are like people in tech and just jump on the latest fads from BigTech marketing - when it's quite the opposite: it's all about whether your results/methods will please the reviewers in your niche community.)
Apple's "life saving" Apple Watch features are only accessible on premium devices. "Privacy is a human right" is also only possible if you buy their devices. It doesn't go around making it free to everyone, and nobody seems to be saying "if you believe in that, then why don't you make it accessible to people from all socio-economic classes?"
This is in the press release, so they are going to release the weights.
So out of my own frustration, I drew this. It's a cross-section of a single base pair, as if you are looking straight down the double helix.
Aka, picture a double-strand of DNA as an earthworm. If one of the earthworm's segments is a base pair, and you cut the earthworm in half, turn it 90 degrees, and look into the body of the worm, you'd see this cross-sectional perspective.
Apologies for overly detailed explanation; it's for non-bio and non-chem people. :)
https://www.instagram.com/p/CWSH5qslm27/
Anyway, I think the way base pairs bond forces the major and minor groove structure observed in B-DNA.
My graduate thesis was basically simulating RNA and DNA duplexes in boxes of water for long periods of time (if you can call 10 nanoseconds "long"), and RNA could get stuck for very long periods of time in the "wrong" (i.e., not what we see in reality) conformation, due to phosphate / 2' sugar hydroxyl interactions.
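For anyone curious what that kind of setup looks like with today's tooling, here's a minimal OpenMM sketch of a nucleic-acid duplex in a box of water (the input file name, force field choice, and run length are illustrative assumptions, not the exact setup from that thesis):

```python
from openmm.app import (PDBFile, Modeller, ForceField, Simulation,
                        DCDReporter, StateDataReporter, PME, HBonds)
from openmm import LangevinMiddleIntegrator
from openmm.unit import nanometer, kelvin, picosecond, femtoseconds

pdb = PDBFile("duplex.pdb")  # hypothetical starting structure of the duplex
ff = ForceField("amber14-all.xml", "amber14/tip3pfb.xml")

# Solvate the duplex in explicit water with 1 nm of padding around it.
modeller = Modeller(pdb.topology, pdb.positions)
modeller.addSolvent(ff, padding=1.0 * nanometer)

system = ff.createSystem(modeller.topology, nonbondedMethod=PME,
                         nonbondedCutoff=1.0 * nanometer, constraints=HBonds)
integrator = LangevinMiddleIntegrator(300 * kelvin, 1 / picosecond,
                                      2 * femtoseconds)

sim = Simulation(modeller.topology, system, integrator)
sim.context.setPositions(modeller.positions)
sim.minimizeEnergy()

sim.reporters.append(DCDReporter("traj.dcd", 5_000))
sim.reporters.append(StateDataReporter("log.csv", 5_000, step=True,
                                       potentialEnergy=True, temperature=True))
sim.step(5_000_000)  # 5M steps x 2 fs = 10 ns, the "long" runs mentioned above
```

The point of the anecdote stands either way: 10 ns is nothing on the timescale of conformational rearrangements, so a run like this can sit in a kinetically trapped state for its entire length.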
At least they got the handedness right.
> AlphaGenome will be available for non-commercial use via an online API at http://deepmind.google.com/science/alphagenome
So, essentially the paper is a sales pitch for a new Google service.