[syndicated profile] in_the_pipeline_feed

Let’s look at a couple of downsides of cancer treatment, but before doing so, let’s state some ground rules. Every drug, every therapy, every medical procedure is a tradeoff. There is always a risk/reward calculation to be made. For small molecule drugs, that shows itself in various ways, and two that should be more familiar to the general public are: every drug has side effects, and every drug can be toxic if taken in large enough quantities and/or in the wrong way. Paracelsus was right, folks - the dose makes the poison, and one of the things we have been trying to do for the last hundred years or so in this business is to widen the space between therapeutic doses of things and their toxic doses. But just as there is always a dose below which a drug will do no good, there is always a dose above which it will do harm.

This does not play very well in public debates. If someone asks you “Is selenium an essential trace element that everyone needs in their diet to avoid cardiac and central nervous system problems, or is it a poisonous substance that can make your hair and fingernails fall out?”, the only honest answer is “Yes”. It is both of those. Depends on how much you take. “Choose your poison” is actually a very honest phrase when it comes to pharmaceuticals, and that goes for familiar ones that people around the world take every day. Aspirin or ibuprofen can truly mess you up if you take a lot of them all at once (or too much all the time for a long time), and if you take a pile of acetaminophen/paracetamol with too much liquor to wash it down with you’d better have a fresh liver transplant lined up, because you’re probably going to need it. Does that mean you shouldn’t take ibuprofen for occasional aches and pains? Not at all: it’s really an excellent drug for its intended purposes, and its risk/benefit ratio is very good indeed at the recommended doses. I take it without hesitation if I feel I need it.

But this means that drugs for more severe conditions are very likely to come with more severe possibilities for harm. Chemotherapy is pretty much at the top of the scale, with some drugs in this class being outright poisons by any definition of the word. But you are trying to kill cancer cells before they kill you, and that can be a desperate business. It is for every patient and their physicians to decide if the benefits of any given cancer therapy are worth the costs and the risks. Many times they are, and frankly, sometimes they aren't.

One of those risks is that some forms of chemotherapy appear to stimulate disseminated dormant tumor cells, which can lead to later metastatic cancer even if treatment of the original tumor is successful. The good thing about medical science is that we don’t have to just sit back and take this as our lot in this fallen world, because if we had that attitude we wouldn’t be looking for cancer therapies in the first place. This new paper looks into the mechanisms behind this effect and a possible way around it. The authors confirm with new tracing methods that the problem (as earlier hypothesized by other groups) is that many forms of chemotherapy induce senescence of fibroblast cells, which in turn leads to a chronic inflammation phenotype. That in turn causes neutrophils to form so-called “neutrophil extracellular traps” (NETs). Those are an immune defense mechanism discovered in the early 2000s, made of stretches of DNA and chromatin proteins with other enzymes attached to them, and they can bind and kill pathogens.

But they also have effects on the extracellular matrix, and when this happens in proximity to dormant tumor cells it can start them growing again, leading to metastases. The new paper linked above found that the popular “senolytic” combination of dasatinib and quercetin, when dosed along with the chemotherapy agent doxorubicin, seems to prevent this process in animal models of disease - presumably by interrupting the senescent-fibroblast part of the process. This would seem to be relatively straightforward to test in human patients, and I hope this can be done soon. I would like to see the "senolytics" idea be of some clinical use!

Another cancer therapy that can go on to induce later cancer problems of its own is radiation treatment. That can of course be a direct consequence of the radiation itself inducing mutations, which is a big motivation for the various methods developed to reduce the overall dose as much as possible while still maintaining efficacy. But as reviewed here, radiotherapy can also induce metastatic disease through effects on EGFR signaling. Radiation induces the synthesis of an EGFR ligand protein called amphiregulin, which goes on to send myeloid cells into a more immunosuppressive state and stimulates growth of metastatic tumors. Some way of blocking this effect and of screening patients who have more EGFR-sensitive myeloid cells to start with could help alleviate this problem. There are still a number of mechanistic details to be worked out, but at this point the overall picture seems valid.

So yes, there are indeed tradeoffs and possible bad outcomes. But the key is to attack these in turn, to figure out what’s really happening at the cellular and biochemical level, and to come up with further strategies. Every time, we learn more about both healthy and tumor cells, and get the chance to take another step up what is a very long, very hard, but very important ladder.

[syndicated profile] in_the_pipeline_feed

Everyone who’s heard of amyloid protein at all knows about the Alzheimer’s story around it, but “amyloid” is a pretty wide-ranging term. It refers to a whole range of poorly soluble proteins that can be produced in living systems, whose deposition causes a range of amyloidosis conditions. There are dozens of known amyloidogenic proteins, but what they have in common is that they can end up in an antiparallel beta-pleated sheet structure that is all too stable. These tend to assemble into insoluble fibrils that are clearly very difficult for protein-housekeeping systems to degrade.

The most common of these as a systemic disease (although still rare) is the amyloid light-chain (AL) variety, and that comes from misfolding/misprocessing of antibody light chain proteins in plasma cells. Instead of being assembled into functional antibodies, aggregated forms of these start depositing in a number of organs. The damage to cardiac muscle function is particularly hard to overcome, as you can well imagine, but liver failure is certainly in the picture as well. It’s a bad disease, and the treatment for it involves hammering those plasma cells with chemotherapy agents and hoping that does the job before too much organ damage accumulates. Often, though, it doesn’t work out that way - many patients still die of cardiac failure even after treatment.

There have been efforts to come up with something better: monoclonal antibodies that target the fibrils themselves and thus flag them for destruction by macrophages. It’s a plausible idea and it seems like it would surely be better than ravaging your immune system with cytotoxic drugs (although that’s still a better option than dying of amyloidosis, which is the only other choice). But the past couple of months have been very difficult for those hoping for this new route.

Last month Prothena’s entry in this field (birtamimab) failed its endpoints in the clinic (overall survival, distance walked, etc.) and was immediately dropped. And just last week, AstraZeneca’s antibody anselamimab failed its clinical trial in turn, missing on overall mortality and on cardiovascular complications. Oddly, though, AZ is telling people that they saw a highly significant response in a prespecified subgroup of patients in the trial, but their press release does not tell us who those patients were.

The company had bought into that program when they acquired a small company called Caelum a few years ago, and had been making noises about it as a big potential rare-disease winner. The failure of the Prothena antibody must have dented that optimism a bit, you’d think, but there’s nothing that removes optimism like missing your own endpoints in your own trial. It’s still not clear where things went wrong. Was there insufficient targeting of the amyloid fibrils (i.e., not enough macrophage action)? Or was the protein removal more or less on track, but still not enough to affect the course of the disease? Perhaps the patients involved were too far along for it to make enough of a difference?

This leaves everyone wondering about that patient subgroup, and I’ll definitely report back when we hear anything. Subgroup analysis can be a bit of a high-wire act, though, statistically. Depending on the design and size of the trial, it is entirely possible to pick up on a genuine signal of efficacy in particular patients (who can now be targeted selectively) and it is also entirely possible to pick up on what is nothing more than a random chance result that will never reproduce because there was nothing real behind it in the first place. The only way to really be sure is. . .you guessed it, run another trial. Let’s see what AstraZeneca does.
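
To put a rough number on that second possibility, here is a minimal simulation sketch (Python, with entirely made-up trial parameters, nothing drawn from the actual anselamimab study): a trial in which the drug does nothing at all, sliced into ten prespecified subgroups, will still turn up a “significant” subgroup at p < 0.05 a disturbingly large fraction of the time.

```python
# Hypothetical illustration of the subgroup multiple-comparisons problem.
# All numbers are invented; nothing here comes from the anselamimab trial.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials = 2000      # simulated null trials
n_patients = 400     # patients per arm
n_subgroups = 10     # prespecified subgroups

hits = 0
for _ in range(n_trials):
    treated = rng.normal(0.0, 1.0, n_patients)          # no true effect in either arm
    control = rng.normal(0.0, 1.0, n_patients)
    groups = rng.integers(0, n_subgroups, n_patients)   # each patient lands in one subgroup
    pvals = [ttest_ind(treated[groups == g], control[groups == g]).pvalue
             for g in range(n_subgroups)]
    if min(pvals) < 0.05:
        hits += 1

# With ten roughly independent subgroups, expect about 1 - 0.95**10, i.e. ~40%,
# of these no-effect trials to show a "significant" subgroup somewhere.
print(f"Null trials with at least one p < 0.05 subgroup: {hits / n_trials:.0%}")
```

Which is the whole point: a prespecified subgroup with a plausible biological story deserves a follow-up trial, but “significant somewhere” on its own is cheap.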

Sarepta, Sarepta

Jul. 18th, 2025 11:55 am
[syndicated profile] in_the_pipeline_feed

I’ve written several times about Sarepta Therapeutics over the years, most recently about a year ago (in a post that has references to the earlier ones). They have been working for years on Duchenne muscular dystrophy, which is commendable, because it’s a disease (like many other rare genetic ones) that is terribly underserved. Unfortunately, though, the thrust of all those blog posts is that even after Sarepta’s small-molecule and gene therapy efforts, you might still regard it as underserved. Because there’s always been doubt that any of these are very efficacious. Or efficacious at all. I note that last year I mentioned that the confirmatory trial for the small-molecule work (such as golodirsen, provisionally approved in 2019) had yet to be completed, and as of this writing that statement still holds.

As detailed in that earlier post, their gene therapy (Elevidys) had a very controversial approval after the company’s clinical trial data seemed (to many outside observers) to come up well short of what would be needed. Outside observers? It came up short to plenty of people inside the FDA itself, who were recommending rejection until the CBER head Peter Marks overruled all of them. (I should add that although I had my differences with Marks and his decisions over the years, he was basically forced out of the agency earlier this year by the Trump administration in the cause of replacing him with people who are far worse.) But at the time I characterized the decision as “a huge mistake”. It was.

Elevidys is an AAV-vectored gene therapy that is supposed to produce, in muscle cells, a shortened version of the dystrophin protein that is defective in DMD patients. This “micro-dystrophin” is hoped to restore function even though it’s quite pared-down (138 kilodaltons versus 427 for the native protein), and the FDA’s approval was based on evidence of its production by the new gene, with clinical benefit to come later for full approval. Judging from the small-molecule work, that might very well end up being much later.

The problem is that AAV-based therapies inevitably come with their own risks, and (in my opinion) these should be balanced by impressive efficacy data to make a convincing profile. Obviously this wasn’t present here, which sort of leaves everyone out there to soak up the risk part - and that’s what has happened. Over the last few months, Sarepta has confirmed the deaths of two patients from treatment-related liver failure. These were older DMD patients who had lost the ability to walk (not exactly the most likely candidates for benefit), but to me the entire FDA approval was based around Delivering Hope, so there you are.

Sarepta as a company has not been faring well under this news. Their stock had slipped from over $150/share down to about $17, until on Wednesday evening they announced big layoffs, restructuring, and cost-cutting to try to survive. That sent the stock up about 20% in trading on Thursday from people hoping for a rebound, but they were greeted by news on Thursday night that a third Sarepta patient had died last month. This wasn’t Elevidys-related, but rather a patient in a gene therapy trial for another rare disease indication (limb-girdle muscular dystrophy). But the fact that the company had not come clean about this before announcing all that restructuring does not (and should not) inspire confidence.

This is just a sad story overall. Patients aren’t being served, first and foremost. We still haven’t seen definite evidence that any of these therapies are worth the risks, or are helping much at all. This makes the FDA look bad, and it makes rare disease research in general look bad. Eventually you come to the investors, who have not been having a good time of it either because they have been believing Sarepta’s press releases. None of this had to happen. None of it.

Tell Me Again About Neurons Now

Jul. 17th, 2025 11:26 am
[syndicated profile] in_the_pipeline_feed

One of the things that you have to get used to in science (and especially biomedical science) is the constant possibility that something that Everybody Knows will turn out to be wrong. Today’s installment is the way that Everybody Knows that the brain is an obligate user of glucose for fuel. You can find this in every textbook; it’s been known for years. That’s why there are so many glucose transporters up there, why gluconeogenesis kicks in under glucose deprivation conditions, why you don’t see lipid droplets in neuronal cells, and so on.

But the study of a set of rare diseases, the hereditary spastic paraplegias (HSPs), has undermined that confidence. There’s a long list of those, and they’re associated with an equally long list of underlying mutations. But one of them (HSP54) seems to be driven by mutations in an enzyme called DDHD2, which is a lipase. In fact, it’s a neuron-specific triglyceride lipase, which makes you wonder why neurons would need their own pathway for breaking down triglycerides that they’re not really supposed to be storing or using for much of anything. That question becomes more acute when you look at the neurons of HSP54 patients and find that they are full of (unhydrolyzed) triglyceride droplets.

Now, in any other metabolically active tissue this would be no mystery. But brain tissue has long been thought to be the exception: high metabolic turnover, but not through mitochondrial beta-oxidation of fatty acids. This new paper shows that DDHD2 is very active at synaptic terminals, and monitoring ATP production in axons shows that DDHD2 activity is directly upstream of it. Inhibition of the enzyme quickly leads to torpor in mice, as does inhibition of fatty acid import into mitochondria, and both of these show accumulation of triglyceride droplets as they take effect. It’s hard to escape the conclusion that at least some neurons do indeed use triglycerides for energy storage and rely on oxidation of the resulting fatty acids for a good part of their energy needs. Indeed, pre-loading such neurons with palmitic acid makes them much more resistant to glucose starvation conditions. So the observations that lipid droplets are present in very low abundance in neurons were correct, but they missed the fact that the droplets are scarce because they are constantly turning over and being used for fuel (!) We’re all going to have to adjust our thinking!

I wanted to take a moment, in the light of this work, to conduct a little thought experiment. I’ve been hearing an awful lot recently from various highly placed and deeply uninformed people that AI is going to pretty much solve all our outstanding scientific problems Real Soon Now (there will be another blog post just on that). So let’s ask ourselves: would AI have told us this? Remember, when people say AI they are about 95% saying “machine learning”, so would it really have told us about this after having been trained on years and years of the medical literature telling it that neurons are obligate glucose users and don’t really have a role for triglycerides? Of course not. And this is why I keep saying (and I’m sure not the only one) that we simply don’t know enough to teach the machine learning algorithms yet. Not to mention that some of what we’d be teaching them is just wrong to start with.  . .

Testing the Role of Consciousness

Jul. 16th, 2025 02:28 pm
[syndicated profile] in_the_pipeline_feed

The idea of psychedelics and other consciousness-altering drugs as possible therapies for treatment-resistant depression has been getting a lot of coverage for years now, and that’s for a lot of obvious reasons. 

One of the best of these is the possibility that it can indeed be helpful: intractable depressive states like this sound completely hellish, and they must be all the more so when other therapies are tried (one after the other) without success. Beyond that is the continued interest in (or fascination with) the consciousness-altering part itself, which really took off in the mid-20th century as the effects of LSD became known and the active natural products from hallucinogenic mushrooms, morning glory species, and others were isolated and characterized. Albert Hofmann is the obligatory mention here! He himself was very much interested in potential therapeutic effects of these drugs, but it is safe to say that their recreational use (and the legal complications thereof, in many jurisdictions) overshadowed this aspect for decades.

And that’s another angle that keeps coming up, a certain “Hah! Told you so!” aspect as possible beneficial uses appear. That’s understandable, but it’s on a sliding scale, too. If these compounds turn out to have some of the medical properties that are being claimed, then yes, absolutely, it will have been a terrible shame that law enforcement tangles and the social issues involved kept this from being appreciated sooner. But at the other end of that scale, it’s easy to find people who will tell you (at length, and not always very coherently) about how their own favorite recreational drugs are not just for fun, no indeed, they are actually also the cures for cancer, for autism, for whatever diseases you care to mention and how you people who aren’t into them just fail to realize all of it. There are a few people out there who would almost like LSD to be added to the water supply, and that always makes me think of Philip K. Dick’s “Faith of Our Fathers” (second PKD reference this month!)

This new paper proposes a very interesting clinical trial, and I very much hope it gets off the ground. That’s because there’s a key issue in the therapeutic-psychedelic story that remains unsolved: do you have to consciously experience the effects, or is there some neurochemical mechanism that happens (and would happen) whether you’re aware of anything unusual or not? There are serious arguments to be made on both sides of that one, and a trial like the one detailed looks like the best way to go about answering it - in fact, it shows you the state of the field that this hasn’t been done on a useful scale already under controlled conditions.

The authors propose three treatment groups, all of whom consist of patients with refractory depression. The first group gets psilocybin under the direction of a therapist. The second group gets the same dose of psilocybin, but only after being anaesthetized with propofol. And the third group gets the propofol anaesthesia but no hallucinogen, only a placebo. That gives you a lot more blinding than is usually possible in psychedelic trials.

These groups will be followed for changes in overt symptoms as well as through fMRI brain imaging before and right after the interventions, as well as during a follow-up period. The main complications I can see are (1) the conscious+psilocybin group will get integrative psychotherapy afterwards, while the other groups won’t (the authors acknowledge this problem) and (2) the possibility that propofol somehow interferes with whatever neurochemical mechanism is at work (and I don’t see a mention of that one).

But I still think that this trial (or some version thereof) would be quite valuable - medically, psychologically, and even philosophically. I very much hope it gets off the ground.

[syndicated profile] in_the_pipeline_feed

I wrote here in 2023 about some of the many issues surrounding the development of antibody-drug conjugates, which is a field that’s come back around several times over the years. We are nowhere near finished working out its complexities, and its promise (of extremely selective and high-effective-potency dosing) remains great enough to justify a lot of work and expense. 

I mentioned at the end of that post that a lot of new ideas were in the works, and one of those is moving into the clinic as we speak: the “dual-payload” ADCs. This is where you take an optimized antibody to (say) a particular tumor cell antigen, one chosen for its specificity over normal tissue, and attach two different sorts of “warhead” to it simultaneously. So instead of delivering just a topoisomerase inhibitor, you could deliver one of those at the same time as (say) a tubulin inhibitor, an RNA polymerase inhibitor, an ATR inhibitor, or what have you. Here’s a recent review of concepts in the area, and here’s another.

That process may not sound simple, and it’s even less simple than it sounds. As that last link notes, you have several factors to optimize. First, you have to make sure that your two mechanisms can reach their full potential against the tumor cells without interfering with each other’s modes of action. You also have to see how efficiently both of them are being delivered to those cells. That will probably involve some linker optimization, which is an issue that I spoke about recently here. (That lysosomal escape issue discussed there is a big part of getting ADCs to work in general). Optimizing one linker is not a lot of fun, so optimizing two simultaneously is pretty much guaranteed to be less so. You also have a number of choices to make about where those linkers are attached to the antibody structure and how you’d like to see them cleaved once inside the cell. And of course you have to end up with a process that can be reliably reproduced industrially, as always.

The rationale behind dual-payload dosing is easy to understand, though: resistance. Tumor cells are constantly dividing and throwing off mutations (much like bacteria do), and anticancer drug resistance is thus a similar problem to antibiotic resistance. You are trying to kill this easily mutated population, and all it takes is one successful pathway out of your drug’s mechanism to make it all for naught. Oh, you’ll see improvement for a while as the susceptible population of cells is killed off. But then you’ve just given the mutant survivors an open field to expand in, in as clear a case of selection pressure as you could ever not wish to see. If you’re hitting them with two mechanisms at once, though, then the cells have to roll for a much lower-probability double mutation that lets them escape both simultaneously. Dosing first with one agent and then with another when the first one fails improves the tumor’s odds of escape considerably, but facing both at once is a much bigger challenge.
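
Here’s a back-of-the-envelope sketch of that arithmetic, with purely illustrative round numbers (real per-cell resistance frequencies and tumor burdens vary widely):

```python
# Illustrative numbers only: order-of-magnitude arithmetic for pre-existing resistance.
tumor_cells = 1e9      # roughly a centimeter-scale tumor
p_resist_a = 1e-7      # assumed per-cell chance of resistance to payload A
p_resist_b = 1e-7      # assumed per-cell chance of resistance to payload B

single_resistant = tumor_cells * p_resist_a                # cells already resistant to A
double_resistant = tumor_cells * p_resist_a * p_resist_b   # cells resistant to both at once

print(f"Expected cells resistant to A alone: {single_resistant:.0f}")   # ~100
print(f"Expected cells resistant to A and B: {double_resistant:.0e}")   # ~1e-05, i.e. probably none
```

Sequential dosing undoes that advantage: the hundred or so A-resistant survivors get to expand back toward a full-sized tumor before payload B ever shows up, and then the single-mutation arithmetic starts all over again.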

The hope is that the tumors won’t be up to it. As you can see from the Nature overview linked in the second paragraph, a whole list of such agents is lining up to go into patients (even including some triple-payload ideas, which amount to a whole chemotherapy cocktail carried on a single antibody). It’s basically impossible to tell which of these ideas are going to work in the clinic, so I’m glad to see so many variations. There are different mechanistic targets, different modes and rates of release, all sorts of things. We should learn quite a bit about tumor response to these ideas, which could lead to an even better second generation.

But a big question - as with traditional chemotherapy cocktails - is whether you can get greater-than-additive (that is, synergistic) effects. There are a lot of claims for this sort of thing, particularly in immuno-oncology when combined with more classic methods, but proving that in humans is quite difficult. If you’re ever going to see it, you’d think that delivering the different agents directly into the target cells simultaneously would be the way to make it happen. Let’s hope.

[syndicated profile] in_the_pipeline_feed

You of course want to be attentive to the possibility of forming toxic impurities during the manufacture of pharmaceuticals, and there are many regulatory and analytical checkpoints designed for this. One example that’s become much better-known in recent years is the formation of nitrosamines that can be traced back to synthesis steps taking place in (relatively) high-temperature DMF and similar solvents. 

Another one that’s probably only known to certain groups is the possibility of sulfonate ester formation. Sulfonate esters are used as reactive leaving groups in organic chemistry (mesylates and tosylates are two widely-known examples), and their electrophilic nature makes them something that you will want to minimize your exposure to. The related sulfate esters (such as the prototype dimethyl sulfate) are indeed toxic from several directions (acutely and longer-term mutagenically), and methyl mesylate itself, while less acutely hazardous, will also alkylate DNA with alacrity and is considered a carcinogen. What’s probably the hottest of all of these reagents, methyl fluorosulfonate, is so acutely dangerous that it dropped out of industrial production entirely. It used to be sold as “Magic Methyl”, because it would indeed slap a methyl group on most anything that could accept one, but spilling it outside a fume hood could be fatal due to its profoundly irritating effects on lung tissue.

So I think everyone is in agreement that we don’t want sulfonate esters lacing the drug supply. But how many examples of that are known? This recent paper notes that many regulatory authorities require extra caution and analysis when a drug substance containing a free OH group is reacted with a sulfonic acid, under the assumption that small amounts of such esters could be formed. But mechanistically that doesn’t make much sense: you need strenuously acidic conditions for that to happen, and that’s not what’s going on when you make sulfonate salts of amine-containing drugs. And that salt formation is why you’d be running this step in the first place - you form more-soluble salts of those amines for a better formulation, in the same way that you see hydrochlorides, acetates, phosphates, citrates and other such derivatives in dosage forms. We don’t worry about stray OH groups being turned into alkyl chlorides when an HCl salt is formed, and formation of a sulfonate ester under similar conditions is just as (un)likely. Indeed, the paper just cited notes this fallacy with respect to the regulatory treatment of hydrobromide salts.

But as it shows, there are many literature citations that warn about this potential reaction, even though none of them actually point at a real example. In fact, there don’t appear to be any. The closest case is a 2007 incident with contaminated nelfinavir (Viracept), which did indeed have ethyl mesylate in it. The drug itself was a methanesulfonate salt, but the problem turned out not to be formation of the ester during salt formation - the mesylate ester was in the acid reagent from the start! It had been stored for months in a tank that had been cleaned with ethanol and never dried, and that will indeed give you some ester formation (because the pH is so low - conditions that just don’t obtain during pharmaceutical salt formation). The author of the new paper refers to the regulatory reports that appeared after this incident as “highly misleading” on this point and says that the European Directorate for the Quality of Medicines in particular has “for many years supported a policy on alkyl-sulfonate impurities that is contradicted by mechanistic and experimental evidence”.

So let’s see if anything comes of this gauntlet being thrown down! I’ll report back, but don’t expect anything to happen at any noticeable speed. . .

[syndicated profile] killsixbilliondemons_feed

Posted by Abbadon

Hi y’all,

It’s time for me to punish myself and double down by drawing a four-page spread. I imagine I will be done in 3-4 weeks, given how long the previous one took. I will announce a week ahead of time when it’s going to post. After that, we will be back to 1-2 pages/week on the regular schedule.

Thanks,

-Abbadon

Senolytic Update

Jul. 10th, 2025 01:43 pm
[syndicated profile] in_the_pipeline_feed

A couple of years ago here I wrote about “senolytic” therapy (not for the first time, as links there will show). That’s the idea that there are populations of senescent cells that develop as organisms age, including humans (a species that includes so many of my readers, as Chesterton put it). There’s a hypothesis that clearing out such cells could improve overall health, and a further one that their continued accumulation is one of the general mechanisms behind the bad effects of aging.

So that’s certainly of interest, and there are a lot of studies in model organisms that tend to point in that direction. But it’s quite hard to target just senescent cells (well, at least so far), and the human data that we have tend to be from very small studies. This leaves the whole field in the “open question” category, for sure (or maybe that should be “at best”). In that 2023 post I mentioned an effort to extend the idea into Alzheimer’s therapy (using the dasatinib/quercetin combination that is popular in the field), and those results have recently appeared.

It’s another small study (five patients!) but it’s a pretty thorough job, looking for all sorts of changes in blood, cerebrospinal fluid, and urine over a 12-week period. The authors pick up a few differences: increased fractalkine, MMP7, and IL-6 in the CSF, along with reduced circulating phosphatidylcholine, decreased levels of a lysophospholipid (LPC 16:1), and downregulation of a few stress-related genes in peripheral blood mononuclear cells. There were certainly no differences in amyloid or tau levels in the patients, though, and no noticeable clinical effects in general. The hope is that this can point the way to markers that could be used in a larger and longer trial, and to figuring out the significance (if there is any) of those phospholipid changes. Such a trial is in fact underway (with about 48 participants), and we’ll see what comes of it over the next few years. I’m agnostic: I would be very happy to see a benefit, but I think it’s very likely (as with all Alzheimer’s trials) that there won’t be one. The data will speak!

In the meantime, here’s a good overview of the senolytics field at Nature Biotechnology. I have to say that it’s not all that encouraging, particularly given the recent underwhelming clinical results from Unity Biotechnology and their UBX1325 compound in a Phase 2B trial in retinal disease. The compound missed its non-inferiority endpoint versus standard of care after 24 weeks, and that means some rethinking at the very least.

And it probably means more difficulty raising money if you come up with a new idea in the area, too. As the article notes, the existing “senolytic” compounds are a very mixed bag, with a number of other activities in vivo, and that can’t be helping. Even more troubling are recent reports from a number of labs that senescent cells themselves seem to have some beneficial functions, as opposed to being loafing cellular deadweight, and that if you did find a targeted way of clearing them out, you might end up wishing that you hadn’t. That’s going to have to be sorted out, too. Overall, I would not put money down on senolytic therapy emerging as a viable anti-aging strategy anytime in the near future. And hey, I’m 63, and I would be very happy to be wrong about that. But there we are.

[syndicated profile] in_the_pipeline_feed

Covalent drug discovery has made a comeback so many times over the years that the revivals are all starting to blur together. But by now I think we can all agree that it’s yet another perfectly acceptable tool in the kit, given the number of approved drugs that have been specifically designed with such properties (and given the number of legacy drugs that also exhibit covalent behavior!)

For those outside the business, what we’re talking about here stands in contrast to the “traditional” sort of mechanism, where your drug binds to its protein target by reversible means. These include hydrogen bonding, pi-electron interactions, dispersion/hydrophobic interactions, what have you. But no permanent bond is formed, and there is an equilibrium between the bound and unbound states. A covalent drug, on the other hand, makes a real chemical bond to the protein it’s targeting, generally by having some fairly reactive chemical group in its structure. And while there are some reversible-covalent mechanisms, in many cases the reaction is effectively irreversible: you have modified your protein target in vivo with a chemical reagent. If this happens at the active site of an enzyme, the near-invariable result is the inactivation of that particular enzyme molecule (the so-called “suicide inhibitor” technique). At other sites on a protein, you can modify its activity, properties, or interactions with other proteins in all sorts of ways.

But one thing you’ll want to watch out for is making sure that the new modified protein you’re creating is not immunogenic. Keep in mind that the active compound in poison ivy (to pick one example) causes its trouble in humans by covalently modifying proteins into species that then set off an immune response (redness, swelling, itching). And in general you also don’t want a covalent “warhead” that is so reactive that it hits a lot of proteins other than your target - that increases the possibility of that immune-system trouble, and it certainly increases the chances of unwanted toxicity and side effects. But done properly, covalent drugs can be very effective indeed.

The thing about covalent drug discovery is that it’s been pretty empirical, and that’s even as contrasted to traditional drug discovery, which is not exactly a domain ruled by cool rational calculations all the time itself. “Try it and see” is almost always sound advice in the business, and thus the adage to never talk yourself out of an easy experiment (or an easy analog compound). There are some recent efforts to prepare libraries of covalent drug-like molecules de novo and screen these across a variety of targets, but the most common way that covalent drug candidates have been developed is the other way around: you find (by conventional means) a small molecule that binds into a protein site that has a nearby residue that might be a partner for covalent modification. Then you add a covalently reactive warhead to your scaffold, using whatever structural information you can get to try to point it in the right direction to pick up your desired residue on the protein itself. Repeat as necessary!

One of the impressions many people have is that the molecules in these situations need to be optimized for strong binding before making that covalent jump - that’s supposed to give you better selectivity, and also allow for using a more weakly reactive covalent group in general. That also is supposed to cut down on unwanted side reactions, and you can get away with the less reactive group because ideally it’s going to be stuck in such close proximity to your desired residue and will have time to do its thing. But this paper is a useful call to rethink some of these assumptions.

The author, Bharath Srinivasan, is also reminding everyone of some of the fundamental facts about enzymes. First off, a lot of interactions between an enzyme and its natural substrate are simply not productive and don’t lead to the catalytic step that the enzyme performs - one estimate is that perhaps only one out of every ten thousand such interactions leads to a reaction (!) This means that enzymes that have higher affinity for their substrate are almost surely going to show higher rates of catalysis: the substrate is spending more time “in the zone”, and it needs all the time it can get. This takes us back to Michaelis-Menten enzyme kinetics - recall that Km is the substrate concentration at which an enzyme runs at half its maximum rate. But keep in mind that it doesn’t work for the substrate affinity to get too high! Enzymes work by lowering the energy of the transition state and speeding up the reaction, which means that what really counts is their affinity for the transition state (and that’s much higher than their affinity for the substrate - or certainly for the product, which has to get the heck out of the catalytic site to make way for the next reaction anyway). All this means that the best inhibitor for an enzyme is a molecule that most closely mimics the structure of the transition state, and that’s a time-honored principle of drug design.
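
For reference, the textbook relationships being invoked here (standard Michaelis-Menten notation, not anything specific to this paper) are:

```latex
% Michaelis-Menten scheme:  E + S <=> ES -> E + P
\[
v \;=\; \frac{k_{\mathrm{cat}}\,[\mathrm{E}]_{0}\,[\mathrm{S}]}{K_{\mathrm{M}} + [\mathrm{S}]},
\qquad
K_{\mathrm{M}} \;=\; \frac{k_{-1} + k_{\mathrm{cat}}}{k_{1}},
\]
\[
\text{so that } v = \tfrac{1}{2}\,k_{\mathrm{cat}}[\mathrm{E}]_{0}\ \text{when}\ [\mathrm{S}] = K_{\mathrm{M}},
\qquad
\text{and at low } [\mathrm{S}]:\quad v \;\approx\; \frac{k_{\mathrm{cat}}}{K_{\mathrm{M}}}\,[\mathrm{E}]_{0}[\mathrm{S}].
\]
```

That last form is why kcat/Km is the natural “catalytic efficiency” number, and it’s the quantity that gets compared (carefully) to the diffusion limit below.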

Second, there’s an upper bound to just how efficient that enzyme catalysis can get, and that’s when it gets up near the “diffusion limit”, which is the speed at which molecules can physically move into and out of position. Determining that rate is not trivial, because it can (and does) vary according to the molecule and the medium. The standard number is 10^9 per molar per second, but protons in water can move a hundred times faster than that, while other larger molecules in more viscous conditions can easily be much slower. As the paper notes, a value of about 10^6 or 10^7 M^-1 s^-1 is probably realistic under cellular conditions - i.e., well below the ideal values. Now there are certainly enzymes that have rates faster than that, but on closer inspection these seem to be either extracellular (like acetylcholinesterase) or part of multiprotein complexes where things are handed around outside of the bulk solvent world. The paper notes, though, that you should never compare the catalysis rate of an enzyme (kcat) directly with the diffusion limit (the units are different, for one thing). But a large comparison of enzymes plotted as kcat/Km shows a Gaussian distribution with a median around 10^5 M^-1 s^-1 or so, which probably does really reflect the limits of real-world diffusion.

The paper makes an explicit analogy between the relationship of kcat and Km and the relationship between kinact and KI for a covalent inhibitor. They’re quite similar indeed, with the biggest difference being that the covalent situation gradually decreases the concentration of active enzyme as things proceed! And in the same way that kcat and Km have a reciprocal relationship in classic enzyme kinetics, when you get up to the diffusion limit in a covalent setup, any attempts to increase inhibition by optimizing kinact are going to end up decreasing the noncovalent affinity as a consequence. They have to - mathematically there’s nowhere else to turn. So if you’re concentrating on increasing affinity (for example), you can probably get that into the hundreds-of-nanomolar range (more or less) without messing with the rate of inactivation. But the limits of the rate of diffusion won’t let you push it much more. The reactivity of your covalent compound is going to have to decrease as the affinity gets higher: you can’t have it all.
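
A minimal numerical sketch of that tradeoff, using my own round numbers rather than anything from the paper: if you take kinact/KI as capped by a diffusion-type ceiling of about 10^6 per molar per second (the realistic cellular figure mentioned above), then every gain in affinity lowers the fastest inactivation rate you can possibly achieve.

```python
import math

# Assumption for illustration: the second-order efficiency kinact/KI cannot exceed
# a diffusion-limited ceiling, taken here as ~1e6 M^-1 s^-1 (the cellular estimate above).
CEILING = 1e6   # M^-1 s^-1, assumed cap on kinact / KI

for ki_nM in (10_000, 1_000, 100, 10, 1):
    ki_M = ki_nM * 1e-9
    max_kinact = CEILING * ki_M          # s^-1, best case at this affinity
    t_half = math.log(2) / max_kinact    # shortest possible active-enzyme half-life at saturating inhibitor
    print(f"KI = {ki_nM:>6} nM -> max kinact ~ {max_kinact:.0e} s^-1 "
          f"(active enzyme half-life no shorter than ~{t_half:.3g} s)")
```

Reading down that table, every ten-fold improvement in affinity costs you (at best) a ten-fold hit in the maximum inactivation rate, which is exactly the reciprocity the paper is warning about.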

So optimizing a covalent inhibitor needs to be done by paying attention to both the binding affinity and the rate of inactivation at the same time - in fact, the paper recommends that for cases with rather flat, featureless binding sites (as with many “hard-to-drug” targets where people turn to covalent ideas in general!), you may well end up driving selectivity mostly by kinact, because you’re going to be hard-pressed to get the intrinsic affinity numbers up high due to the suboptimal binding sites.

Put another way: a good enzyme substrate has been evolutionarily optimized to strike a balance between binding and turnover. If the binding is too low, there won’t be enough enzyme/substrate complexes formed, and if the binding is too high, they’ll form readily but they’ll be too stable to go further! Evolution will have selected for an optimal kcat/Km. So when we’re stepping in to engineer covalent inhibitors, we should also never optimize just for binding or just for reactivity, because we too are looking for the optimum balance. And focusing on just one of those parameters with a promise that you’ll go back later and fix the other is a real mistake that the mathematics of enzyme kinetics will not take kindly to!

The paper goes into several real-world examples of these effects, and is highly recommended reading (and not just for covalent drug discovery folks, although they’ll definitely want to make sure that they’re thinking the right way about their work). There’s a lot more to say about benchmarking with so-called “standard” nucleophiles like glutathione, which is worth a post of its own (see here), but in general you shouldn’t be making too many assumptions about the reactivity of your warheads. Try them and see! It all comes down to that, once again. . .

Hexanitrogen Energies

Jul. 9th, 2025 12:14 pm
[syndicated profile] in_the_pipeline_feed

Going on yesterday as I did about enzyme mechanisms prompts me to return briefly to the unusual hexanitrogen molecule that was reported recently. I didn’t have time while writing that post to get into a key question about this species: why can it be isolated at all?

That takes us into some of the same territory as yesterday’s post. Since I’m sure everyone took notes, those who didn’t already appreciate that the role of an enzyme is to lower the energy of a reaction’s transition state surely know it now. But if you have not spent years steeping in these subjects like a forgotten teabag, I’ll illustrate what the heck I’m talking about with the diagram at right (from Wikipedia). (Those of you who have marinated in this subject can skip ahead, or read on to make sure I get it right!)

The Y-axis is in units of energy, and remember that a spontaneous reaction is going to go to a lower-energy state in its products. Look at the red “without enzyme” curve as it moves along the X-axis, which is a “progress of the reaction” dimension. You start out with the reactants at one energy level and you end up with products at a lower energy level. In this case, the reaction is the oxidation of a sugar molecule into carbon dioxide and water, and the free energy of those products is indeed lower than that of the starting materials. So this reaction is feasible. So why doesn’t sugar just spontaneously ignite while sitting around in air? You’ve got all the starting materials ready to go!

That’s because in between those two states is a higher-energy one, the “transition state” mentioned before. The energy you need to put into the system to get things over that energetic hump is the activation energy, as shown, and the black line is what the landscape looks like when you have an enzyme to help you out. The enzyme has (as shown) lowered the energy of the transition state, reducing the barrier to product formation and making the reaction much more feasible. If you lower that barrier enough, the thermal energy contained in the molecules at room temperature will be enough to send them over the hill. And since the reaction is giving off energy along the way (moving from higher to lower), that will almost always mean the release of more heat, which will send more starting materials onward, which will release more heat. . .well, you can see how you get fires and outright explosions once things like this really get going.

OK, with this in mind, let’s look at a similar diagram from that hexanitrogen paper, also shown at right. The energy diagram is at the left of the figure. Hexanitrogen is in the middle of this one (N6), and there are two separate modes of reaction, one heading to the left to give you two azides, and one heading to the right to give you three dinitrogen molecules. One thing you’ll note immediately is that the three-dinitrogen-molecules state is way lower in energy than anything else on the diagram. Plain ol’ nitrogen gas is far lower in energy than most anything you can come up with involving nitrogen atoms, and that’s why polynitrogen species can be so bracingly exciting. As mentioned numerous times in the “Things I Won’t Work With” posts, many such species are just longing to take that steep toboggan ride down to the lower-energy state, and the barriers to doing so are often rather minimal.

They’re not exactly tall here, either, honestly. You’ll see, though, that the activation energy for splitting into azides is about 26 kcal/mol, while the barrier to turning into three nitrogens is lower, at about 14.8 kcal/mol. In the grand scheme of things, that ain’t much - but it’s enough to keep a hexanitrogen molecule together for an extended period at low enough temperatures (and also low enough to make sure that you’re really not going to see it for long up at room temperature, either - its lifetime there is estimated at milliseconds). That’s why the authors were able to get spectral data on the compound at liquid nitrogen temperatures and below, but why you’re not going to be able to purchase a cylinder of the stuff. Thank God.
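
As a sanity check on that millisecond estimate, here’s a quick transition-state-theory calculation (a sketch that treats the computed 14.8 kcal/mol barrier as a free energy of activation and ignores tunneling, so take it as order-of-magnitude only):

```python
import math

# Eyring estimate of a unimolecular decomposition rate: k = (kB*T/h) * exp(-dG / (R*T))
KB = 1.380649e-23     # Boltzmann constant, J/K
H  = 6.62607015e-34   # Planck constant, J*s
R  = 1.987e-3         # gas constant, kcal/(mol*K)
BARRIER = 14.8        # kcal/mol, barrier to the three-N2 decomposition (from the paper)

for temp in (298.0, 77.0):    # room temperature and liquid nitrogen
    k = (KB * temp / H) * math.exp(-BARRIER / (R * temp))
    print(f"T = {temp:5.0f} K: k ~ {k:.1e} s^-1, lifetime ~ {1.0 / k:.1e} s")
```

That comes out to a lifetime of roughly ten milliseconds at room temperature and something absurdly longer than the age of the universe at 77 K, which is just what the isolation conditions imply.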

At the right of that figure are two depictions of the electrons around the molecule and its bonds - the top is the Laplacian of the computed electron density, and the bottom is the “electron localization function”, and I am not going to go into the details of those. But the two techniques agree about the bonding, and about the weak spot (lowest-electron-density) bonds. And those are the ones that would divide the molecule into three dinitrogens. By contrast, that central N-N bond that would break things into two N3 molecules is stronger (and that’s reflected in the larger activation energy for that pathway, of course). If you’re into it, you can see the bond distances involved as well in that table in the diagram, and it really does match up well with drawing the molecule as two azides stuck together in the middle like a hydrazine. And most hydrazines don’t just suddenly fall apart at their N-N bond at regular temperatures; it’s a reasonably stable situation.

So as mentioned, the only reason hexanitrogen can be dealt with at all is that even that (rather puny) 14.8 kcal/mol barrier to decomposition is a lot higher than you might have expected. Plenty of other hypothetical polynitrogen species turn out to have basically no barrier at all by comparison, which is why they’re still hypothetical, and you will be too if you try to make them on any kind of scale. The actual electron-pushing pathways that you need to engage in for that decomposition are just not as energetically favorable as they are in many other polynitrogens that have a slicker way down into nitrogen gas. Just why that should be gets into a lot of more advanced topics, not least of which is the option of quantum-mechanical tunneling through various energy barriers. But it’s clear from the calculations and the physical evidence that these have lined up in ways to make hexanitrogen feasible. Just.
