Our bodies, or really all biological processes, can synthesize incredibly complicated molecules that would take human chemists enormous effort to make. It really is amazing how capable our bodies are.
†: My description here is a simplified one. For a more precise description see section 2 of Arguments in favour of remdesivir for treating SARS-CoV-2 infections, Wen-Chien Ko et al, https://www.sciencedirect.com/science/article/pii/S092485792...
> The adenosine analog NITD008 has been reported to directly inhibit the recombinant RNA-dependent RNA polymerase of the dengue virus by terminating its RNA chain synthesis. This interaction suppresses peak viremia and rise in cytokines and prevents lethality in infected animals, raising the possibility of a new treatment for this flavivirus.
Absolute gibberish to someone with limited knowledge of biology.
> inhibit the [...] RNA polymerase of the [...] virus
If it simply stopped RNA synthesis in general (e.g. inhibited any RNA polymerase), or if it actually broke down RNA in general, it would kill HUMANS just as well!
The point of antiviral compounds is to selectively inhibit/kill mechanisms/components of the virus and not the human host... there are hundreds of thousands of antiviral and antibiotic compounds that are not very useful because they'd kill humans just as well, or give them horrible cancers, or god knows what else...
As the saying goes... "Everything should be made as simple as possible, but not simpler."
Wikipedia isn't magical. If you don't care enough to edit it, it won't include the information you'd like to see.
https://blogs.sciencemag.org/pipeline/archives/2010/10/06/ch...
https://chemistry-europe.onlinelibrary.wiley.com/doi/10.1002...
What I’m curious about is why this huge group attached to the adenosine-like group is needed. It seems rather complex for a shoe being thrown into the cellular gears. Do you have an idea or a pointer to the mode of action of this group?
So then you add different greasy groups to new compounds and screen those. Some will be worse, but sometimes some will be better. Then you look at the better ones, like having a nitrile group off the 1' position of the ribose, and maybe that started as an amine (I'm making shit up here) and they decided to make it stick further out (IDK).
Anyway, I did some quick looking and it seems like remdesivir is a prodrug that gets modified by other enzymes to become triphosphorylated and then incorporated into the RNA genome of the virus (https://www.nature.com/articles/nature17180/figures/1). So they got super lucky finding it! Check out that paper for the story.
It's all part of ADMET (absorption, distribution, metabolism, excretion and toxicity) optimization. Pharmacokinetics is an important subject, and that's why you can use the "XY showed activity in vitro" papers only as starting points.
The adenosine bit is attached to a five carbon sugar (pentagon with O at top) which is identical to the sugar it would be attached to in RNA. The next thing along is a phosphate with some oxygens double bonded to it, which is part of the "backbone" of DNA. The stuff attached to that phosphate is nothing like DNA or RNA.
Hope this makes sense and provides a little insight for you :)
So, putting the three together, would it be possible to use actual biosynthesis for designed molecules by basically writing your own DNA/RNA and inserting it into a cell?
(Or is this already what's being done?)
The entire thing is so unimaginably complex. For example, for a lot of proteins that are catalysts (aka enzymes) the actual catalytic part is a metal ion and the protein mostly provides scaffolding. A nucleotide sequence alone doesn't directly tell you what ion is needed. In some cases, multiple ions can fit, but only one actually makes the protein work. This is the basis for how a lot of toxic metals exert their toxic effects.
It's also not as simple as "a nucleotide sequence codes for a protein and that's it". Proteins fold into their final shape from the chain of amino acids that DNA encodes. Protein folding in general is a hard problem. Biological proteins may have other proteins (called chaperonins) that help them fold into their proper configuration. Then proteins may also be modified after they've folded (again by other proteins). Some proteins are made up of multiple sub-units as well.
From 2009: https://www.nytimes.com/2009/02/07/business/07goatdrug.html
If you want an easily accessible jawdropping intro David Baker's youtube briefs here are pretty cool:
But if I had to guess, I'd say that in theory it might be relatively straightforward or easy to do, but it's probably a lot harder in practice. That's often the case anyway.
Long answer: No. It would be an incredibly complex undertaking.
Disclaimer: I'm terrible at organic chemistry.
As with software engineering, you develop a general sense (I didn't) of what might work and what probably won't. You aren't coming at it blind and reinventing the wheel every time. You learn to recognize patterns in chemical structures and reason about how they will interact under various conditions based on that. You do electron pushing in your head without giving it much thought, similar to a programmer reasoning about object lifetimes or dataflow in an application.
As to -100 C or +5 atm, that's the easy part. You alter environmental conditions when what you're working with is too reactive, or not reactive enough, or you have some other general problem. It's roughly analogous to determining the minimum amount of RAM the machine hosting your production database requires.
That part, at least in theory, is easy to understand.
Chemical reactions are typically of the form:
ingredients + energy -> products + byproducts
If the energy is on the left-hand side, making the environment warm makes the reaction go faster. If energy comes out (so it's on the right-hand side), making it cooler is better. You also need to control the temperature to be in a range where both the ingredients and products can survive.
As for the pressure, if the products and by-products have more total volume than the ingredients, low pressure is good. If it's the other way round, high pressure encourages the reaction.
The more complicated part is when multiple reactions can happen with the same ingredients, or when the products can undergo further, unwanted reactions. Then you have to balance the parameters to encourage just the reaction you want, and to discourage all the others.
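The "warmer makes it go faster" point above can be made concrete with the Arrhenius equation, which relates a reaction's rate constant to temperature. This is a minimal sketch; the pre-exponential factor and activation energy below are made-up values for a hypothetical reaction, not numbers from any specific synthesis:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(A, Ea, T):
    """Arrhenius equation: k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical reaction: A = 1e13 1/s, Ea = 80 kJ/mol
A, Ea = 1e13, 80e3
k_cold = rate_constant(A, Ea, 273.15)  # 0 degrees C
k_warm = rate_constant(A, Ea, 298.15)  # 25 degrees C

# A modest 25-degree warming speeds this hypothetical reaction up
# by roughly a factor of 20.
print(k_warm / k_cold)
```

The exponential dependence is why even small temperature changes matter so much, and why chemists bother cooling flasks to -100 C when a reaction is too eager.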
Back then, you had a certain number of mechanical properties in mind as well as well-known "parts" (gears, linkages, etc) that you could predict the properties of very well - and the task was then to assemble them into larger mechanisms that did what you wanted them to do.
Seems to me, the intuition here could be similar - except the number of dimensions in which parts can interact is larger, the "clockworks" are orders of magnitude more complex - and your tools are much more coarse, so mostly, even if you know what you need to build, the building itself can only be done indirectly.
I have a bachelor's in chemistry, and o-chem was one of the most painful classes because of the sheer amount you must know by heart for synthesis.
For biochemistry check out r/labrats
1). So many syntheses have horrible yields just like this one. You’d start with grams of material to end up with micrograms. I loved solving these problems as an undergrad in books, but reality was far different. You don’t think much about side products until you start doing novel chemistry.
2). So much trial and error. There were happy-go-lucky chemists that fell into projects that were smooth as butter, while brilliant chemists would toil 12-hour days to try and find something to write up as a thesis. I was neither brilliant nor lucky and took 4 different projects over two years before finally landing on something marginally MS-worthy. They need a journal of failed chemistry, because only the working stuff gets published. So many failures could be logged so I didn't waste my time on non-working or poor-yielding reactions.
3). Suspicious results in journals. I would read about a reaction where someone reported a 75% yield and I could barely get 20%. I always thought I was just bad, but a really smart chemist challenged me one day, tried it himself, and couldn't do much better. He tried it 30 different ways over the year as he did other stuff. He never could get a good yield. We talked to our advisor and wanted to challenge the result, but the advisor didn't want to start trouble. By then I had already decided to leave with a master's, but it made me feel a little better about my lousy abilities. No one could ever possibly double-check every result from every publication anyway.
All this said, there are some brilliant and patient scientists out there that drive the field forward. Just a few rough around the edge items I’d love to see change.
In a different but related world, clandestine chemists share failures more often than they do their successes, at least in the communities I was a part of a very long time ago.
I'd wager this is because our substrates, reagents and solvents are such a pain in the arse to get compared to an actual lab, that wasting any of them is a no-go if it can be avoided.
Related to that, but we reused solvents and recycled material a lot more than I did doing my B.Sci in Chemistry, for the same reasons!
I think most instrumentation scientists are sympathetic to this. There are also lots of instrumentation jobs that don't require stupid numbers of publications because it's not practical to find candidates. There are relatively few good hardware people in the sciences (especially fields where you can't get a mech/electrical engineer).
I suggest finding some conferences to start. They're a good venue for telling people what you did. There are also journals specifically for building stuff, SPIE has a lot for astrophysics, for example.
However, my take-away was that the successful researchers were the ones who could take any decent experiment and figure out what was publishable about it, or at least steer it into a publishable direction.
I think there's theoretical value in this, and many have tried, but the incentives/disincentives for doing so aren't favorable. Here are some reasons why I think it's difficult to motivate people to publish negative results:
1) Negative results, while important in advancing science, don't get you grants.
2) Negative results need to be peer-reviewed -- there are "good" negative results (good protocol, failed result) and "bad" negative results due to bad data collection, wrong conclusions (bad protocol, failed result).
3) Given that it's so much easier to get a negative result than a positive one (as in anything there are only few ways to be right, tons of ways to be wrong), the volume of papers to review is orders of magnitude higher. Reviewers have to really sift to find the needle in the haystack. Between teaching classes, sitting on mindless committees, writing grants, mentoring grad students, etc. academics don't have that kind of time.
4) It would incentivize poor/mediocre labs to publish a lot of negative results to get their pub count up (these are the ones that currently publish unsubstantiated positive results in fly-by-night journals).
5) Bad faith authors may publish fake negative results to throw others off a promising line of inquiry.
(Note: some of these disincentives also apply to positive results in journals today)
The current method I know for exchanging "good" negative results is word-of-mouth, usually during post-conference drinks at the bar. (works for tech too!) I'm not sure if it's possible to arrange incentives in such a way as to make publishing "good" negative results worthwhile.
EDIT: there are exceptions. If the space of solutions is known a priori and bounded (say only n ways to do something), then publishing n results if even all n are failures is worthwhile. This situation doesn't come up all the time (the solution space is often open), but when it does, it's worth publishing all n results.
Then these could be classified by methodology/process/chemicals/etc for people to look up before starting their research.
I've had to push my colleagues to cite a number of non-traditional sources: arXiv, github, and zenodo for example. Fortunately most of them agree that citations are cheap and that giving more people credit is generally a good thing.
One thing that helps is publicly stating how you want your research cited. If you don't have a peer reviewed publication in the pipeline, tell people how to cite your work on a blog or your github page or somewhere. Most people default to peer-reviewed journals for citations and get confused when one doesn't exist, so an explicit statement really helps.
If it makes you feel any better (I 'quit' at the start of my bioinformatics PhD), there's a high chance that their projects went smooth as butter because they were also happy-go-lucky about double-checking their results. See for example that 75% yield paper you mentioned.
I got similar stories about how to (not..) grow mammalian cells in vats.
I don't know organic chemistry, but in my field the authors are often willing to discuss their results, especially if it might mean more citations for them.
I am a chemical engineer, and in my studies there was no programming or algorithm training at all (we did a tiny bit of Scilab to solve some systems of equations, but that's all!).
Chemists in general (beyond theoretical and some open-minded exceptions) don't program and don't want to program. Synthetic/organic chemists still perceive synthesis as a form of "art" ;) Therefore it would require a huge shift in mentality.
It is going to happen, but not easily, and later than it could for social reasons :(
Besides, cleaning everything is very machine unfriendly.
Pretty much all chemistry is some sequence of the above.
There is a trend to get pharmaceuticals to a stage where flow chemistry can be used (like a small version of the full-blown basic chemicals processes). This is however still a research field, because a lot of processes don't lend themselves to continuous flow.
The most automation in chemistry can be found in the analytical side of things. A good example right now are the covid tests that are run on large automated liquid handling systems.
You know the whole deal with the three-body problem? How closed-form solutions become intractable in a hurry once you go past two or three mutually-influential orbiting bodies? As it was explained to me, that's why there's no SPICE for chemistry. Modeling exactly what happens when complex orbitals with dozens or hundreds of electrons interact with each other is one of those things we just have no clue how to implement in a practical application.
Having gone through a master's myself, I think when we pushed two generations of children into college, we generally lowered the bar - across the board, effectively. And that's related to what's happening in the US today.
This reminds me of the problems scaling up EUV lithography, which are bottlenecked on producing strong enough EUV light. They put in 20 kW of power to get out 200 W at the target wavelength of 13.5 nm, so light generation itself only has 1% efficiency, and then you need to reflect it off mirrors etc. to focus it (lenses don't work at those wavelengths), so only 2% of the light actually reaches the wafer [2].
[1]: https://www.laserfocusworld.com/blogs/article/16569161/the-s...
[2]: https://en.wikipedia.org/wiki/Extreme_ultraviolet_lithograph...
https://blogs.sciencemag.org/pipeline/archives/2010/02/23/th...
It seems there are multiple chemist-author hybrids out there!
(https://tribunist.com/technology/sr-71-blackbird-pilot-troll...)
Gentle and Forgiving nature...
“i always recommend a good pair of running shoes”, indeed.
It's interesting that chemistry for pharmaceutical purposes can involve similarly nasty substances.
The article also does a great job conveying how much of a frustration minuscule yields must be.
It is one of the oldest and most established fields. Unfortunately, practices aren't great. The preparation formulas are often vague, imprecise and difficult to reproduce. This comes from the fact that the sizes and types of glassware are often not specified, and some information is omitted (how quickly something is changed, not just to what value — e.g. "heat up to 100 degrees" without saying over what time), etc. Chemists usually (except for some theoretical/computational specialisations) don't have any training in algorithms or programming.
There are novel developments such as https://www.gla.ac.uk/news/archiveofnews/2018/november/headl... and references therein. I'm optimistic about them, but I expect strong opposition from older faculty. They see synthesis as an art and think that one has to have a "good hand" in order to be a good organic chemist.
I think some generational shift will be necessary to make this discipline more reproducible, rigorous and reliable. It will come, but not that soon :)
> 0.25 x 0.58 x 0.74 x 0.21 x 0.23 = .005 (0.05%)
The 0.005 seems to be correct, so that should be 0.5%. The rest of the article also uses "0.5%" correctly. I only noticed because something nagged at me when I read "0.005 (0.05%)" — it "felt wrong", so I double-checked.
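For anyone who wants to check the arithmetic themselves, a quick sketch multiplying the five step yields quoted above:

```python
# Step yields from the article's five-step synthesis
yields = [0.25, 0.58, 0.74, 0.21, 0.23]

overall = 1.0
for y in yields:
    overall *= y

print(f"{overall:.4f}")         # overall fraction: 0.0052
print(f"{overall * 100:.2f}%")  # i.e. about 0.5%, not 0.05%

# Starting from 1 kg of material, the final product would be:
print(f"{overall * 1000:.1f} g")
```

Multiplying per-step yields like this is exactly why long linear syntheses are so painful: each extra step eats into whatever survived the previous ones.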
I like HN in general, but this particular article gave me the same feeling that I had when I first discovered HN a couple of years ago.
Sticking to simple stuff; mines produce iron at a rate measured in thousand-tonnes per hour with yields of potentially sub-30% compared to volume of earth moved. Ammonia and many acids are presumably measured in tonnes or kilograms produced per day. Low yields make the process-oriented sad, but what matters is absolute ability to produce; not yield.
All that doesn't take anything away from this article; it just makes it hard to interpret what 'royal pain to synthesize' means in practice. The process isn't basic chemistry; but that isn't really saying much.
That would be true if reagents, labour and plant equipment were free, but unfortunately they are not. Consequently you have these strange creatures called process chemists who shave steps off the discovery synthesis, increase the yield, and get around difficult reactions. It's really quite magical.
When it comes to bulk chemicals like vinyl acetate (produced at scales of kiloton per day) another consideration is waste. It cuts into profit twice: you lose product and you pay for disposal.
If the inputs are expensive or the process can’t scale (to a billion doses in 12 months, say) then that’s worrying and perhaps indicates this isn’t a good candidate vaccine.
Something truly educational, and of course one of the people I wanted to educate was myself by forcing myself to learn much more than my high-school level of chemistry.
I've gone down this road a few times, each time giving up in the face of the absolute scale of even the most rudimentary understanding.
This really cracked me up. In my IT world the equivalent of a "mutant" would be refactoring, right? I did "mutate" this way a few times in the past, much to the horror of my boss(es)/manager(s) when they learned about it the next day.
If Remdesivir data looks good this month, there will be a rush to produce it, and if there’s only one published way to do that, then the ingredients for that one approach will potentially be hard to find. Thus we can benefit from different approaches which start from different raw materials.
Lots of cool Arxiv papers on this and Graph Neural Nets, Soft actor-critic, or Transformers can be interesting approaches. The transport theory seems like a good way to make a value function. How much time and money does it take to produce a given chemical by a given set of reactions? That’s a gajillion dollar question.
I spent way too much time last year looking at permutation-invariant distance metrics similar to Fused Gromov Wasserstein to invent an Atom Mover Distance, please let me know if you figure that out! DeepChem library is a solid framework, as are Tensorflow and Pytorch...
If anyone’s looking for a way to contribute to the COVID-19 response, open source data/algorithms to design synthesis pathways can be a strong approach. Everyone loves to use Deep Learning to design drugs, but it is valuable to design ways to make drugs, too!
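The "how much time and money does a given route cost" question above can be sketched as a simple recursive value function over a synthesis tree: the cost of a target is the cost of its precursors (recursively) divided by the step yield, plus a per-step overhead. Everything here — the compound names, catalog prices, yields and step cost — is made up purely for illustration:

```python
# Purchasable starting materials and their (hypothetical) price per mole
catalog = {"A": 2.0, "B": 5.0, "C": 1.0}

# Known routes: target -> (list of precursors, fractional step yield)
routes = {
    "D": (["A", "B"], 0.6),
    "E": (["D", "C"], 0.4),
}

STEP_COST = 10.0  # fixed labor/equipment cost per reaction step

def route_cost(compound):
    """Estimated cost per mole of `compound` via the known route."""
    if compound in catalog:
        return catalog[compound]
    precursors, y = routes[compound]
    # A step yield of y means you need 1/y moles' worth of
    # precursors per mole of product.
    return sum(route_cost(p) for p in precursors) / y + STEP_COST

print(route_cost("E"))
```

A real value function for a synthesis-planning RL agent would of course need to handle competing routes, reagent availability, and reaction feasibility predictions, but the recursive "divide by yield" structure is the core of why low-yield early steps are so expensive.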
Missed a trick not titling it:
“(1OO)OMG we made one gram...” :)
Scaleup from lab to pilot plant to production is another different beast.
I know that binding affinity has been shown to not always be the best indicator of efficacy, but I want to know if it's feasible, if someone can help.
Apologies for the assumptions in this question, but are there many reactions in organic chemistry that are completely unknown?
This actually seems pretty fun. I'd love to have a reason to study it and a means to do something with my studies.
Well there are things we've experimentally tested, and things we haven't. Most reactions fall into the latter, and we can only make educated guesses about them.
If we could analytically solve the Schrödinger equation (we can't), we could accurately predict outcomes under perfect reaction conditions.
> are there many reactions in organic chemistry that are completely unknown?
Yes, the vast majority. But bear in mind that we can make educated guesses based on patterns, so we're not completely clueless.
Chemical formulae are required study for 11-14 year olds in England, for what it's worth. (Probably starting with words: methane + oxygen → carbon dioxide, and later writing a balanced equation: CH₄ + 2O₂ → CO₂ + 2H₂O.) The reaction presented is obviously much, much more complicated, but the same concept.
Edit: catalysts are covered too, although I don't know if the notation of putting them above the → is introduced at this age. https://www.bbc.co.uk/bitesize/guides/zqd2mp3/revision/6
There are rules, yes, of course. But there's always the question: how much will react? And not just: will I get the (reaction) product? But: how many (different) reaction products will I get? This is actually the reason why you need the purification steps after each reaction. Otherwise you would get a "reaction tree" in the end, with byproducts reacting with other byproducts, leaving you with a basically infinite number of different end products.
Is there any evidence of this? It was a highly anticipated drug, but the first studies it was in showed only slight improvement over expected outcomes, far less than was seen with the chloroquine/zinc/antibiotic combo treatment.
I mean I think it's still interesting as a possible drug to add to the cocktail for maximum effectiveness. But let's not oversell it.
I did not understand this. Liquid chromatography scales up quite well. There are other methods, like electrophoresis, salt precipitations etc., that don't.
Not compared to recrystallization or distillation. Try running a column on 10kg scale and get back to me.
(I worked in an o-chem lab for 2 years in undergrad, and the biggest we could do was ~20 grams at once)
It looks like the author is overselling some of the dangers here.
While you really don't want to dump n-BuLi into water, you have no reason to either.
The problem child of the class to which n-BuLi belongs is t-BuLi. That will spontaneously ignite in air, whereas n-BuLi will not. There was a very high-profile case I believe at UCLA a few years back in which a student using t-BuLi in the lab caused a fire with it and ended up dying.
https://cen.acs.org/articles/87/i31/Learning-UCLA.html
Also, I find this article confusing in the way it's written. Take the title, for example. It gives the impression that the author is describing his own efforts to make remdesivir ("we").
What he's really describing is some preps he found in the literature. And with a little too much hyperbole for my taste.
You're in luck, the first dose is 200mg, and then 100 mg daily after that.
Unfortunately, it's also IV, so you have a number of extra steps after synthesis to ensure sterility.
Good things to have on hand during this covid-19 pandemic (because you can't rely on hospitals to give it to you) are:
1) hydroxychloroquine sulfate taken orally 400 mg per day for a week ($180/kg on alibaba)
2) azithromycin taken orally 500 mg per day for a week ($150/kg on alibaba)
3) Camostat mesilate taken orally 200 mg three times a day for a week [1] ($50/g on alibaba)
4) favipiravir one dose of 1600mg two times on the first day, and then 600mg twice per day after that for a week [2]. ($40/g on alibaba)
5) covid-19 rapid test kits that use blood antibody tests, produce results in 3 to 10 minutes, and cost about $1.50 per kit [3]. Although it looks like in the last week or so Alibaba has been blocking searches for these kits for some reason, the search below does work, but you have to look at the suppliers to find the ones that are actually selling it.
all this stuff can be bought on Alibaba and delivered in a week
[1] https://clinicaltrials.gov/ct2/show/NCT04321096
[2] https://www.medicalnewstoday.com/articles/anti-flu-drug-effe...
[3] https://m.alibaba.com/products/covid-19_rapid_test_kit.html
Stop suggesting people buy dangerous drugs from Alibaba. This should be bannable.