https://www.nature.com/articles/s41564-025-02142-0
Wanted to share what I thought were the interesting parts, from the university press release.
"To date, AI has been leveraged as a tool for predicting which molecules might have therapeutic potential, but this study used it to describe what researchers call “mechanism of action” (MOA) — or how drugs attack disease.
MOA studies, he says, are essential for drug development. They help scientists confirm safety, optimize dosage, make modifications to improve efficacy, and sometimes even uncover entirely new drug targets. They also help regulators determine whether or not a given drug candidate is suitable for use in humans... A thorough MOA study can take up to two years and cost around $2 million; however, using AI, his group did enterololin’s in just six months and for just $60,000.
Indeed, after his lab’s discovery of the new antibiotic, Stokes connected with colleagues at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) to see if any of their emerging machine learning platforms could help fast-track his upcoming MOA studies.
In just 100 seconds, he was given a prediction: his new drug attacked a microscopic protein complex called LolCDE, which is essential to the survival of certain bacteria.
“A lot of AI use in drug discovery has been about searching chemical space, identifying new molecules that might be active,” says Regina Barzilay, a professor in MIT’s School of Engineering and the developer of DiffDock, the AI model that made the prediction. “What we’re showing here is that AI can also provide mechanistic explanations, which are critical for moving a molecule through the development pipeline.”
It must be so cool to work at a university. You can just walk across campus to meet with experts and learn about or apply the cutting edge of any given field to solve whatever problem you're interested in.
Even when you work in administration, learning opportunities abound and are easy to seize.
I’m too shy to just walk into a random lab and ask questions, but three times a year, my boss likes to organize a tour of a different research facility and I really appreciate that.
Because that is what the general public believes AI means, and OpenAI says they are building thinking machines with it, and this headline says "predicted".
We have known that LolCDE is a vulnerability in E. coli since well before 2016, and we have known inhibitors of the complex, globomycin being one of them, since 1978.
https://journals.asm.org/doi/full/10.1128/jb.00502-16
https://pubmed.ncbi.nlm.nih.gov/353012/
Is enterololin just another form of globomycin?
Is AI smart or are scientists just getting dumber?
Beautiful, finally something for AI/machine learning that is not a coding autocomplete or image generation use case.
It would be very interesting to keep track of this area for the next 10 years: between AlphaFold for protein folding and this for predicting how a molecule will behave, seeing how cost is reduced and trials get fast-tracked.
> We thus frame molecular docking as a generative modeling problem—given a ligand and target protein structure, we learn a distribution over ligand poses.
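The quoted framing (docking as learning a distribution over ligand poses, then sampling from it) can be illustrated with a toy sampler. To be clear, this is not DiffDock and the energy function is made up; it's a minimal sketch that treats a pose as a rigid-body translation plus one rotation angle, draws candidates from a broad prior, and keeps the best-scoring one.

```python
import math
import random

# Toy "pose": a rigid-body placement of a ligand relative to a protein,
# represented as (x, y, z, rotation_angle). Real docking uses full 3-D
# rotations and ligand torsion angles; this is purely illustrative.

def toy_energy(pose):
    # Hypothetical score with a single minimum at (1, 2, 3) and angle 0.
    x, y, z, theta = pose
    return (x - 1) ** 2 + (y - 2) ** 2 + (z - 3) ** 2 + (1 - math.cos(theta))

def sample_poses(n, rng):
    # "Generative" step: draw candidate poses from a broad prior,
    # standing in for sampling from a learned pose distribution.
    return [
        (rng.uniform(-5, 5), rng.uniform(-5, 5),
         rng.uniform(-5, 5), rng.uniform(-math.pi, math.pi))
        for _ in range(n)
    ]

def best_pose(n=20_000, seed=0):
    # Score all sampled poses and return the lowest-energy candidate.
    rng = random.Random(seed)
    return min(sample_poses(n, rng), key=toy_energy)

pose = best_pose()
print(toy_energy(pose))  # small, near the toy minimum of 0
```

The point of the generative framing is exactly this separation: a model proposes plausible poses, and a scoring step (or downstream lab work) decides which proposal to trust.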
I just hope work on these very valid use cases doesn’t get negatively impacted when the AI bubble inevitably bursts.
"Stokes stresses that while the prediction was intriguing, it was just that — a prediction. He would still have to conduct traditional MOA studies in the lab.
“Currently, we can’t just assume that these AI models are totally right, but the notion that it could be right took the guesswork out of our next steps,”...so his team, led in large part by McMaster graduate student Denise Catacutan, began investigating enterololin’s MOA, using MIT’s prediction as a starting point.
Within just a few months, it became clear that the AI was in fact right.
“We did all of our standard MOA workup to validate the prediction — to see if the experiments would back up the AI, and they did,” says Catacutan, a PhD candidate in the Stokes Lab. “Doing it this way shaved a year-and-a-half off of our normal timeline.”
In reality, a lot of research uses a variety of different general ML tools that have almost nothing to do with transformers, much less LLMs.
I feel like the common refrain of most LLM success stories over the past year is that these tools are of significantly greater help to specialists with "skin in the game", so to speak, than they are to complete amateurs. I think a lot of complaints about hallucinations reflect the experience of people who aren't working at the edge of a field where they've read all the existing literature and there simply aren't other places to turn for further leads. At the frontier, moreover, the probability that there exists a paper or book that covers the exact combination of topics that interests you is actually rather low; peer discussions are terrific, but everyone is time-starved.
Thus I find the synthetic ability of LLMs to tie together one's own field of focus with those you've never thought about or are less familiar with to be of incomparable utility. On top of that, the ability to help formulate potential hypotheses and leads -- where of course you the researcher are ultimately going to carry out the investigation or, in the best case, attempt to replicate results. Conversely, when I'm uncertain of my own conclusions, I often find myself feeding the best LLM I have access to the data I reasoned from to see whether it independently gets to the same place. I'm not concerned about hallucinations because I know there's nobody but me ultimately responsible for error -- and, at the fringe of knowledge, even a total fabrication can inspire a new (correct) approach to the matter at hand.
I think if I had to succinctly describe my own experience it would be that I never get stuck any more for days, weeks, months without even a hint of where to turn next.
Related, there's an ancient Palantir blog post (2010!) that always stuck in my memory about a chess tournament that allowed computers, grandmasters, amateurs and any combination of the above to enter [0]. At that time, the winning combination turned out to be amateurs with the best workflow for interfacing with the machine. The moral of the story is probably still true (workflow is everything), but I think these new tools for the first time are really biased towards experts, i.e. the best workflow now is no longer "content neutral" but always emerges from a particular domain.
[0] https://web.archive.org/web/20120916051031/http://www.palant...
Rather, the gut of some people, especially people with IBD and people who have received broad-spectrum antibiotics, can be colonized by Enterobacteriaceae. These are bacteria (including some strains of E. coli) that are resistant to broad-spectrum antibiotics, and this overgrowth is not good for gut health. The researchers have discovered a compound that appears to fight these Enterobacteriaceae without destroying the larger gut microbiome. This could help people (especially people with IBD) whose gut has been taken over by this kind of bacteria get back to a more normal gut microbiome, although only mouse studies have been done so far.
“This new drug is a really promising treatment candidate for the millions of patients living with IBD... We currently have no cure for these conditions, so developing something that might meaningfully alleviate symptoms could help people experience a much higher quality of life.”
https://www.gastroenterologyadvisor.com/news/escherichia-col...
All the most effective treatments try to turn off parts of the immune system, and even then they have only minor success, with some patients going through multiple different immunosuppressants to find the right one, or even a cocktail, that adequately manages the disease.
During my own IBD journey, I've managed to stump the heck out of two different teams of GIs. I had been diagnosed with UC by biopsy during colonoscopy, and then at my last colonoscopy, despite not having been on medication for more than two years, they determined not only that I don't have it now, but that I never did. They told me "remission" would look different from "this bowel has never had IBD." But they also insisted I had not been misdiagnosed.
And yet they told me with a straight face that it is incurable. I had it in the past, confirmed by pathology. I don't have it now. And it's incurable. I give up.
In the end, I don't care enough to fight them about the contradiction, because the part I most care about is the "I don't have it now" part, and we're all in agreement on that.
(Note for any who are interested: I stopped medication after successfully reducing my inflammation markers to within normal limits by eating the exact same thing for every single meal for 20 months with no cheating of any kind. They told me that shouldn't have been possible either, but it worked. And yes, it was as miserable as it sounds, but less miserable than living with UC.)
Inflammation releases nitrates, which Enterobacteriaceae can use as a replacement for oxygen as a terminal electron acceptor.
They can also encroach more easily on the protective mucus layers, which are thinner and more porous during inflammatory conditions (which itself may be a result of a messed up microbiota, which is why broad-spectrum antibiotics are not a solution)
These Enterobacteriaceae blooms in turn can cause inflammation, which makes remission harder.
There has been some success in reducing inflammation levels in IBD by blocking some of the binding factors that E. coli uses to attach to the epithelium/mucus
Why could we ever assume that?
> but the notion that it could be right took the guesswork out of our next steps,
Devil's advocate here. Couldn't this just be a severe case of confirmation bias? You take 100 such cases, ask the AI "how does it work?", and in 99 of those the answer is somewhere on the spectrum between "total nonsense" and "clever formulation but wrong". One turns out to be right. That's the one we are seeing here, getting confirmed in the lab. That doesn't actually mean AI reduced the time by 75%.
A broken clock is also correct twice a day. We wouldn't say we have invented a clock that works without energy, sure it's wrong sometimes, but when it's correct, it's awesome! No, it's just a broken clock that's wrong most of the time.
I would also love to see that with "generative AI" we have discovered some helpful magic, but as long as we are not honest about those details (which would include publishing and owning up to mishaps), this is all just riding a hype train.
When we celebrate a scientist who makes a breakthrough, we're not crediting them for being right 100% of the time. We're recognizing that they were right more often than random chance and earlier in the process than would otherwise occur.
A researcher (or AI) who can identify promising directions at a 2/99 or 3/99 rate instead of 1/99 is genuinely valuable – they're effectively doubling or tripling the efficiency of the discovery process.
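The efficiency point above can be made concrete with back-of-envelope arithmetic (the hit rates and unit costs below are illustrative assumptions, not figures from the article): if each hypothesis costs a fixed amount of lab time to validate, the expected cost of finding a true mechanism scales inversely with the hit rate.

```python
# Expected number of independent trials before one validates is 1/p for
# hit probability p (mean of a geometric distribution). All numbers here
# are illustrative assumptions, not from the study.

def expected_tests(hit_rate):
    return 1.0 / hit_rate

def expected_cost(hit_rate, cost_per_test):
    return expected_tests(hit_rate) * cost_per_test

baseline = expected_cost(1 / 99, cost_per_test=1.0)  # ~99 units of lab time
assisted = expected_cost(3 / 99, cost_per_test=1.0)  # ~33 units
print(round(baseline / assisted, 3))  # ~3.0: tripling the hit rate triples efficiency
```

So even a modestly better-than-chance prioritizer pays for itself, as long as the validation step (the actual lab work) stays in the loop.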
Imagine if AI can test theories in under 100 seconds AND is slightly better out of 99 tries at getting things right. That blows the human out of the water.
They're still using the scientific method, the only thing they're getting from AI is hypotheses to test. And AI is great at brainstorming plausible hypotheses.
No legal slop, just the email address of a runpod/prime-intelect/x-gpu provider account, and deposit $5000 there directly. Let them waste it.
You can easily filter who's worth receiving it by their GitHub and Hugging Face history.
https://www.cbc.ca/news/canada/hamilton/headlines/5-big-mcma...
Is there something new?
I get that mainstream media is ignorant and happy to use incorrect terminology for the views/clicks, but why is NATURE calling it artificial intelligence?