As a pioneer in RNAi therapeutics, Alnylam has really had some ups and downs over the years (some of them chronicled on this blog). Today would be one of the “up” moments, for sure. The company (in collaboration with Sanofi) has just announced positive Phase 3 data on their therapy for hereditary ATTR amyloidosis – the RNAi, patisiran, met the primary and secondary endpoints with what look like solid numbers. This is the first time that an RNA interference therapy has made it all the way through clinical trials, so that’s a landmark. The company plans to file for FDA approval early next year.
It’s worth thinking back a bit to the earlier days of both Alnylam and RNAi. At one point about fifteen years ago, this was the hottest area in biotech, with everyone piling into it. Then came the backlash – several backlashes, actually, with some of the larger companies pulling out of the field completely and writing off their investments in it. Alnylam has stuck with it the whole way, and in light of this recent discussion of drug development costs, Andy Biotech on Twitter estimates that they’ve spent at least $1.9 billion along the way (and I think that’s not including opportunity costs/cost of capital, either).
Alnylam’s most recent Phase III trial also ended suddenly just last month, when an RNAi for hemophilia had to be dropped after a patient death. The company’s previous attempt at ATTR amyloidosis was also disastrous – revusiran had gone into Phase III, but was abruptly pulled due to what was memorably described as a “mortality imbalance”. (As I understand it, patisiran is from a later chemical series (update: actually earlier!), and the data-monitoring committee for its trial recommended that it continue after the revusiran failure). So this is not exactly a well-worked-out path through the clinic, and a lot of onlookers were holding their breath waiting for these latest results.
hATTR amyloidosis is a nasty disease, and there is (until now) absolutely nothing that can be done about it. Note the American College of Cardiology’s recommendation of “supportive therapy and clinical trials”, which translates exactly to “Try to make the patients comfortable while they get worse, and hope that somebody discovers something that can help”. It’s caused by mutations in the transthyretin protein, which is produced in the liver and normally carries Vitamin A, among other things. The mutant form, though, forms amyloid fibrils, as many proteins will do under the right (wrong?) conditions. These manifest as toxic deposits of clumped protein in both the heart and nervous system, leading to heart problems and neuropathy of various kinds, which get worse and worse every year until you die.
This is a natural fit for an RNAi approach, on a couple of levels. Decreasing the amount of mutant TTR protein should slow the progression of the disease, and decreasing specific protein levels is exactly what RNAi does for you, since it goes after the relevant messenger RNA. And the liver is one of the few places where we know for sure that you can get even the most advanced RNAi molecules to show up. The problems of drug delivery and stability are some of the big reasons it’s taken so long to get to this point (the same goes for all oligonucleotide ideas), and even now they represent serious limitations. But the liver? Everything goes to the liver, just for circulatory and digestive plumbing reasons. Things don’t necessarily make it back out of the liver, but they do go in.
So congratulations to Alnylam! This really is a major achievement, and the amount of time, effort, money and heartbreak it’s taken to get this far should serve as a monument to how hard it is to advance something really new in drug therapy. The work continues.
Note: All opinions, choices of topic, etc. are strictly my own – I don’t in any way speak for my employer
This research got going when cells from pancreatic and colon tumor samples were co-cultured with human dermal fibroblasts. The cancer cell lines unexpectedly became more resistant to gemcitabine under these conditions, and it turned out that just transferring the medium from such cultures to other tumor cells was enough to reproduce the effect. However, if the medium was passed through a 0.45 micron filter beforehand, it did not bring on resistance at all. That immediately sets off alarm bells about bacterial contamination, and so it proved.
The human fibroblasts, on closer inspection, contained Mycoplasma DNA (at this point, the cell culture people in the audience can groan and say “What doesn’t?”). Treating the fibroblasts with an antibiotic also abolished the resistance effect on the later tumor cell cultures, an effect that could be reversed by re-inoculation with the M. hyorhinis species that seems to be the culprit. But it’s not just that one – they tried infecting the fibroblast cultures with 27 different bacteria, and half of them led to the same problem.
The Mycoplasma problem had been noted in cell culture just a few years ago, and the real culprit is a deaminase enzyme that turns gemcitabine into the (inactive) uridine analog. But this paper goes further. The authors (a large multinational team) show that rodent tumor xenografts can be inoculated with bacteria, and this treatment also makes the tumors much less responsive to gemcitabine, an effect reversed by antibiotics. They took this experiment all the way down to an implantable device inside the tumor mass to release antibiotic locally, and showed that only around the site of release did gemcitabine have the desired effect.
The final test was to see if this is happening in human patients. The paper details how tissue samples from pancreatic ductal carcinoma patients were carefully tested for bacterial ribosomal DNA, and the results were pretty stark: 76% of the tumor samples showed up with bacterial contamination, as compared to 15% of normal tissue controls. In situ hybridization confirmed the result. Sequencing suggests that these bacteria have migrated in from the duodenum, and the species detected all produce the deaminase enzyme. Finally, adding the bacteria from the human samples to the cell culture experiments showed that they indeed confer gemcitabine resistance – Robert Koch would be proud to read this paper, for sure.
This work immediately suggests that any cancer patients receiving gemcitabine be treated with antibiotics, and I hope that this affects clinical practice quickly. You wonder how many similar stories are out there that we don’t know about yet!
Chemists love crystals. We don’t do as much recrystallization as we used to, since there are higher-throughput (and less labor-intensive) ways of purifying things these days, but I don’t think I’ve ever met an organic chemist who isn’t happy when a product crystallizes out nicely. And we all know what crystals are like – straight-sided, hard, brittle, prone to fracture under stress into smaller shards, etc.
Or not. People who do grow a lot of crystals (typically for X-ray structures) can tell you that while most things fit that description, there are some oddball outliers. You see some compounds that grow long, curved crystals, for one, which makes you think that one face is behaving differently in solution than another. Some of the longer ones are surprisingly twangy, and can take a good deal of bending before shattering, and even the chunky, faceted ones vary quite a bit in how hard and friable they seem to be if you expose them to stress. There’s a lot going on in the crystalline solid state, and not all of it is well understood.
This paper illustrates the point, for sure. It’s looking at what you’d think would be an unremarkable coordination compound, copper(II) acetylacetonate, known to many chemists (as are its kin) as “copper ack-ack”. I would not want to guess how many (acac) metal complexes have been crystallized, alone or with other ligands, but let’s stipulate that it is a large heap, and that it’s a very well known species. But you can grow long needle-like crystals of the plain Cu(II) complex that act very weirdly indeed. They’re long and flexible, and as that photo shows, can actually be threaded into knots and then untied. What’s going on at the atomic and molecular level to allow them to bend this much without breaking?
The paper presents a careful X-ray study that figures it out. The authors (from Queensland) mounted a bent crystal in a synchrotron beam and carefully focused in on different parts of the bent shape. They found that a particular crystalline axis was significantly elongated on the outside of the curve, and compressed on the inside, as you’d well imagine. It turns out that while each individual copper-acac molecule stays identical as the crystal bends, the arrangement they make with each other certainly changes. The distances between the crystal planes don’t change, but the molecules themselves rotate with respect to each other (here’s a movie from the paper’s SI files illustrating that). The changes are not large at all, but when you add them up across several zillion molecules in a long crystal, it gives you some real wiggle room. A philosophical question comes up: if such a sample is not a regularly arranged array of molecules across its width (or not any more), is it still a crystal? If not, do we have a word for what it is?
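To get a feel for why tiny per-molecule rotations are enough, here’s a back-of-envelope sketch. All the numbers are illustrative assumptions of mine (crystal length, repeat distance, total bend), not values from the paper:

```python
# Illustrative assumptions, not figures from the paper:
length_m = 5e-3          # a 5 mm needle crystal
repeat_m = 1e-9          # ~1 nm molecular repeat along the needle axis
total_bend_deg = 180.0   # bent all the way into a semicircle

n_repeats = length_m / repeat_m                # molecules along the length
per_molecule_deg = total_bend_deg / n_repeats  # rotation each must supply

print(f"{n_repeats:.0e} repeats -> {per_molecule_deg:.1e} deg each")
# -> 5e+06 repeats -> 3.6e-05 deg each
```

Spread over millions of repeats, the distortion each molecule has to absorb is vanishingly small, which fits with the finding that the molecules merely rotate slightly rather than deform.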
This is a nice piece of crystallography, of course, but people into materials science will appreciate that there are a lot of interesting features that might emerge from such structural changes. The optical and magnetic properties of the different sides of such a crystal could well be different (in fact, might almost have to be), and these changes could well be valuable in real-world applications. These results apparently also go against at least one theory of what sorts of crystals can undergo such deformations and how they do it, which would make you think that there could be several different mechanisms available. There’s a whole world in between crystals and amorphous matter, and there’s a lot being discovered in it.
I left the project half-finished last night, intending to fill the radiator with the water that had been lost in pulling out the water temperature sensor. This morning I got up, intending to drive the Spitfire over to the Annual Little British Car Show, poured a bunch of water in, and watched it cascade out of the sensor recess. Tightening the nutbolt (a bolt with a hole through the center that the sensor lives in) down didn't help. I drove my normal car over, checked out some pretty cars, and drove back, and then removed the sensor and started poking at it. Halfway up the bulb that lives in the water, there's a tapered ring of metal. I thought it was a precision tapered ring, that sealed against the matching taper inside the water pump. But this is automotive: there is nothing precision outside of the innards of the engine and transmission. Instead there was secretly a rubber gasket that, when I removed the old sensor, had stayed inside the water pump housing. It was totally shot, and no amount of trying to carefully put it back in was going to save it. I ended up getting an o-ring from my collection of high temperature water-resistant o-rings and using that instead, but because it was smaller, the nutbolt no longer managed to press the sensor down well enough to seal. I had to cut a little collet on the lathe, like a thick washer but sawed in half so it could be put in two pieces around the sensor line. With that, everything sealed correctly, as far as I can tell, and the car is ready to go again. A quick jaunt around the block shows the water temperature gauge indicating roughly the right numbers. I'll check tonight to see if the radiator is full of water.
Yesterday I spent about five hours painting the house, getting a layer or two of exterior paint on all the sun-facing wood on the first floor, and getting a good start on the non-sun-facing wood. Today I'll get the small amount of wood on the second floor. Man this is sore work, all above my head, a lot of it from a ladder, but it should last several years and more importantly prevent the wood being damaged by being exposed, as it was. Looks a lot better, too, than all the flaking and peeling paint that had been there since we moved in.
So how’s it going out there in the land of the journals that will publish any flippin’ thing you send them? Apparently pretty well. I’m not sure if we’re still in the log phase of their growth or not, but there’s no shortage of quasi-open-access titles out there, the ones that (like reputable OA journals) do charge you for publication and make the resulting paper freely available (if the web site stays up). But the key difference is that they skip that pesky stage where they actually review the papers. Or even look at the papers at all. It’s much easier to make all such editorial decisions on the basis of “Have the funds deposited?”
My wife likes to quote an Iranian saying to the effect that “A thief robs a thief, and God smiles”, but not everyone who publishes in the junk journals is looking to hoodwink some faculty review committee or government agency into thinking that they have a legitimate trail of academic papers. That’s what I take away from this article, which shows that there are more people publishing in these journals than you’d think from institutions that should know better. For example, only 17% of a survey of such papers listed any sort of funding, but among those, the number one source was unfortunately the NIH. India was the number one source of publications in the total sample, but the US was a solid runner-up.
Now, if you adjust for the number of papers produced by each country, the proportion of US scientific papers going to predatory journals is still low, but it’s definitely higher than it should be. (As a percentage of total scientific output, the countries that look worst are India and Nigeria). And in case you’re thinking that the US papers must be from Wassamatta U. and the Univ. of Southern North Dakota at Hoople, Harvard was among the top US institutions with such publications. The article itself doesn’t hide its conclusions:
Whether authors are being duped or are overzealously seeking to lengthen their publication lists, this represents enormous waste. Just the subset of articles that we examined contained data from more than 2 million individuals and over 8,000 animals. By extrapolation, we estimate that at least 18,000 funded biomedical-research studies are tucked away in poorly indexed, scientifically questionable journals. Little of this work will advance science. It is too dodgily reported (and possibly badly conducted) and too hard to find.
The authors show that the papers in this layer of scientific publishing tend to report less well-controlled experiments with fewer details, making their value even less than you might have thought. They’re calling for funding agencies, governments, and university administrators to start paying more attention and stop rewarding such publications as if they were legitimate. That’s the only way anything will happen: if the benefit of publishing such stuff starts to disappear, so will the stuff itself. But we’ll see if that happens, and on what time scale.
In the meantime, thousands upon thousands of these things flood out every month. I realize that seeing a paper in an expensive, hard-to-publish in journal like Nature complaining about cheap open-access publishers might be open to charges of self-interest. But these complaints are still valid even if you fold the page down so you don’t see where they’re coming from: this stuff really is a waste of time and resources.
But here I’ve been talking about the problem being localized in academia, while this article from Bloomberg tells me that I’m being too complacent. It’s focused on the Omics Group, a rather large publisher these days that owns a number of other nameplates. In the past, they’ve let all kinds of unreviewed craziness through, and listed people on their editorial boards who had no idea that they’d been so honored. These aren’t just problems from their early days; here’s a story from June. And there’s more:
In August 2016, in U.S. District Court in Nevada, the FTC raised the stakes, accusing Omics and Gedela of violating the FTC Act by engaging in “deceptive academic publishing practices.” The agency calls Omics’s peer-review processes a “sham,” whereby manuscripts get approved within days of submission instead of the weeks or months it takes at more credible venues. It alleges that Omics claims distinguished experts as editorial board members and as speakers at its conferences without their consent; fails to disclose publishing fees ranging from hundreds to thousands of dollars until after articles are accepted; cites phony impact factors (a measure of prestige indicating how often a journal’s articles get cited elsewhere); and maintains that journals are indexed in PubMed when they aren’t.
Otherwise, things are pretty above-board, I guess. The company’s founder, Srinubabu Gedela, seems like an interesting character himself:
In Hyderabad, where Omics’s headquarters are spread over 250,000 square feet in two buildings, Gedela is convinced of a win. “The FTC is following the fake news,” he says. He exudes nervous energy as he walks the halls, rarely making eye contact with employees and cutting off conversations abruptly. For years he’s promoted a postdoctoral study he did at Stanford to boost his credibility. The school confirms he held a position for five months, an unusually short time, in 2009, a year after starting Omics. Gedela is cagey when asked about the details, saying he left early to return to India to build his company. But when he hooks his laptop up to a large TV screen in his office to show emails proving the dates of his Stanford sojourn, he accidentally projects an adviser’s email threatening to terminate his contract after saying he took vastly more vacation than allowed. When questioned, he brushes the discrepancy off.
Excellent, a real pro. But the problem, as the article shows, is that a number of biopharma companies have published papers in Omics journals, participated in its sponsored conferences (there are over a thousand every year!), and thus lined the pockets of this guy. This isn’t good. The implication is that the people doing the publishing are either clueless or trying to place papers somewhere where they know that they’ll get published no matter what, and neither of those burnishes anyone’s reputation. The closer such papers get to marketing, the worse this looks. Some of these papers have even appeared since the FTC lawsuit, and although it’s not clear when they were submitted, the Omics folks are not one of your wait-months-to-hear publishers. No, this is another thing that the industry has to watch out for; we’ve got enough people mad at us already without kicking the ball into our own nets. Ditch Omics. Ditch all of these people.
Mind you, the rest of the scientific publishing world is not such a quiet place these days, either. I wanted to note this study, which suggests that the number of pirated papers on the Sci-Hub site is now so large as to pose a potentially irreversible threat to the big publishing houses and their business model. They’d better adapt quickly – I don’t like the looks of the people who are coming up from the bottom looking to take their place.
Oh! I just discovered that Houston has a hacker space with lots of classes, including woodworking and soldering! I might have to see about taking some. https://txrxlabs.org
EDIT: Orrr...hm. TC has at least 3 custom millwork shops. That might be a way to bridge engineering and get into woodworking. Can't hurt to try.
If I wanted to take some free or cheap online courses in introductory programming to help me figure out if that's a potential career change (with reschooling) I should consider, are there particular programs you'd recommend? (I enjoy the problem solving aspect of what little I've done, but I do tend to yell at the computer a _lot_ in the process, which is why I hesitate.) I was just glancing at MIT's Open Courseware. I've heard of others like Coursera and Codecademy, and here's a little list with those and others. My background: I learned some Basic and LOGO as a kid in the dawn of PCs, struggled immensely to self-teach while completing honors calculus assignments in MATLAB in the early 90s, dabbled in class-taught FORTRAN '77 in the mid-90s for engineering courses, and have done a little VBA with Excel over the past decade (with a class to start). I have historically been able to figure out generally what code is doing by reading it. I played with LabVIEW a bit and found it pretty confusing, but I was slowly sorting it out.
Here’s one of those papers where you go “I’m really surprised that that even works”. But I shouldn’t be, I suppose, based on what’s led up to it. I last wrote about the work coming out of the University of Southampton on “clicked DNA” three years ago, but they’ve been busy. This latest paper shows that you can take ten different oligonucleotides, functionalized with azides and alkynes on their ends, and assemble them via triazole formation into a 350-base-pair gene sequence (in this case, one for a green fluorescent protein). It has, then, eight triazole linkages along its span.
When you do that, you get a surprisingly “life-like” DNA sequence. It’s got a somewhat lower melting point, but the secondary structure appears quite similar to the native stuff. This synthetic gene is recognized by the relevant enzymes in vitro, although PCR is slower, and slows down more with every triazole linkage. But the group didn’t stop there: when this clicked-DNA sequence is put into a plasmid for E. coli, the bacteria take it up and express the protein, which comes out the other end of the transcriptional machinery just as it’s supposed to. Comparing this to transfection with a conventional plasmid, with the gene assembled through T4 DNA ligase, showed that the error rate is the same or better with the triazole-linked sequence. The cellular machinery doesn’t care. They did look into the possibility that the gene was being assembled through nucleotide excision repair as a way of dealing with the triazole linkages, but a bacterial strain lacking this pathway also expressed the synthetic gene, with no big differences between it and the ligase-assembled version.
The next part of the paper shows that you can use the power of a fully synthetic gene to incorporate epigenetic markers like 5-methylcytosine wherever you want. This is much more of a pain with the existing enzymatic systems, as slick as they are with unmodified DNA. The authors make the case that synthetic assembly of genes (which doesn’t necessarily have to be via click triazoles, one would think) allows you entry into all sorts of modifications that the natural enzyme tools will balk at, and I think that they’re right. This should be of great interest to the epigenetics field, and to anyone looking to work with chemically modified DNA in general. The transcriptional machinery will put up with a lot more than we might have thought, and we should take advantage of its cooperation.
And conceptually, this continues to make the case that molecular biology is slowly turning into a branch of chemistry. I don’t think that enough chemists are aware of that, and I’m not sure that all molecular biologists are thrilled with that characterization, either, but here we have it. . .
The adjuster said he'd put in something for cleanup costs for us, at least. But that wouldn't be more than our deductible, I don't think. And he suggested we look at FEMA stuff too (conveniently, my friend from CA just texted me this morning that he's in town helping FEMA and suggested the same).
But it's just flooring, thankfully, and stuff that was pretty old already (he noted that the carpet is high quality, which would explain why it looks so good after the 13 years I've owned the place and however long it was there before that; it was definitely not new...probably late '90s vintage, if I had to guess). He also opined that he'd pull out the bar cabinets too. Which, if we add concrete, would have to happen anyway. In the meantime, I need to talk to my insurance agent about adjusting my policy, if it's not going to cover some 20% of my house.
Just need one of us to get a job elsewhere, then we can move north and sell this place as-is, probably for about what I bought it for. =/
A fair amount of what you read about the human microbiome is hype. There’s no way around it. It’s quite difficult to study this area in a meaningful, reproducible way, and even the best work in the area can only go so far, as things stand now. When differences in (say) gut flora are actually found and worked out, we generally don’t know what the chicken-and-egg relationship between that and human disease might be, or which particular bacteria (or ratio, or blend) is responsible, in either direction.
But just because an area is difficult, or because it has a lot of media noise in it, doesn’t mean that progress isn’t being made. There’s some new work, for example, that suggests that the gut microbiome might have a real connection to multiple sclerosis. That sounds like an exercise in headline-grabbing, but it looks more solid than that. This team found several bacterial types associated with MS, following up on numerous earlier studies, specifically noting that relapsing/remitting MS patients tend to have higher proportions of Acinetobacter (generally rare in gut flora) and Akkermansia, and lower proportions of Parabacteroides. The enriched species, when put into other animals via fecal transplant, have very noticeable effects on T-cell differentiation and also exacerbate the pathology seen in the widely used EAE rodent model of the disease. The authors also note that one of the Acinetobacter species has already been shown in the literature to produce peptides that mimic sequences in myelin basic protein, which makes you wonder if MS is (at least partly) a misfiring immune response to gut bacteria in general.
That hypothesis has been kicking around for a while, but this sort of work really strengthens it. So does the second paper mentioned above, which looks at 34 pairs of twins, one of whom has MS while the other does not (an extraordinarily powerful and useful data set). This team also found increased Akkermansia in the gut flora of the affected twins. They transplanted gut material from the affected subjects versus unaffected controls into a mouse model (RR mice) that have a mutation causing them to spontaneously develop inflammatory demyelinating disease. Only some of the human-derived bacteria were able to colonize the mouse gut (as you’d expect), but even so, the mice with the transplants from the MS-diagnosed twins showed a significantly higher rate of disease onset.
When they looked more closely at the altered gut microbiota in these mice, they found lower levels of the Sutterella genus in the animals colonized from the MS-affected twins, compared to those colonized from the unaffected ones. (Interestingly, they don’t seem to have noticed as much difference with Acinetobacter). To be sure, other studies of gut flora in MS patients versus controls have also disagreed on the raised and lowered profiles across taxa, which is one of the things that makes this such a hard area to work in. (One explanation is that the concentration and localization of various species probably vary a great deal across the whole intestinal tract, in ways that are difficult to sample and account for, and stool samples are not necessarily a good proxy).
But overall, there really does seem to be something here, although it’s way too early to start talking about the therapeutic implications (for starters, we’d better be able to agree on just which bacteria are responsible and how they might be having their effects). These studies on bacterial transfer are really doing a lot, though, to address that chicken-and-egg problem mentioned above. If adding in the relevant bacterial population can bring on trouble in this way, that significantly raises the odds for the bacterial trouble being ahead of the CNS phenotype.
These papers dovetail nicely with yet another new paper, from a team at Rockefeller/Mt. Sinai/Sloan-Kettering. They’ve found that (1) commensal gut bacteria are enriched in genes for the synthesis of N-acyl amides, (2) that the compounds thus produced are, in many cases, micromolar ligands for various GPCRs and look quite similar to endogenous human ligands for those same receptors and (3) that the GPCRs identified are, in turn, disproportionately localized in the gut as well. Taken together, it would appear that this is a signaling network that has evolved among the bacteria over time, one that modulates their own environment and provides a direct mechanism by which changes in gut flora could affect the host organisms (that is, us). These lipid-like signaling receptors (such as the endocannabinoid-related one, GPR119) are part of the same family, and can have a number of metabolic and inflammation-pathway effects.
Overall, it looks like the microbiota/disease connections are getting stronger, and getting some more detailed foundation under them. No doubt there are going to be a lot more twists and turns as these stories go on, and I would be very skeptical of anyone claiming (at this early stage) to have a great new therapeutic microbiome breakthrough for something as complex as MS. But the field in general is real stuff, generating some very interesting real results, and is worth keeping an eye on.
There are two low blips where we stopped for stoplights, and a high point where my heart got up to 185 or so, but the rest is a nice solid consistent 160-ish, the rate I can maintain for an hour without throwing up.
It was also quite warm today, just about body temp.
The result was that when we got back, everyone showered and ate and then we had a staff meeting and at the end of the staff meeting, when the department manager stood up and said "thanks, everyone", and the rest of us all stood up, I promptly put my back against a wall and slid down it to a seated position, my manager fell over and landed halfway in a chair, and the other manager, who had been drafting me, just had to sit right back down and put his head down on the table.
So it's not just me.
Department manager was all "what are you guys DOING out there?"
“Dear Dr. Astrology: I'm feeling lost, but am also feeling very close to finding my new direction. It hurts! It would be so helpful if I could just catch a glimpse of that new direction. I'd be able to better endure the pain and confusion if I could get a tangible sense of the future happiness that my pain and confusion are preparing me for. Can you offer me any free advice? -Lost Libra." Dear Libra: The pain and confusion come from the dying of the old ways. They need to die a bit more before the new direction will reveal itself clearly. I predict that will happen soon -- no later than October 1.

So very timely while considering what to do about the house, while looking for ways to move north (or jobs to ensure us income there, anyway) and where exactly to go (gut tells me TC or Madison...the latter's a lot better for jobs for me (and probably both of us), but it's not all that close to my family, being on the wrong side of a giant lake; Grand Rapids would be the theoretical best compromise), and if going back to school for a career change (and to what) would be better. Josh's is pretty good too, while he undergoes similar soul-searching for similar reasons.
Here’s another paper on the cost to develop a new drug, a topic about which, I’m convinced, debate will never end. This one is designed as a response to the Tufts estimates on these costs, and I’m not going to help much, because I have some things to debate about this paper myself.
The authors have looked at oncology-focused companies that got their first drug approvals during the period 2006-2015. There are ten of them for which 10K filings are available, and the total of R&D spending from the company’s inception up to the approval is taken as the cost of developing a new drug. The raw spending numbers run from $157 million to $1.9 billion, so you can see that there’s a bit of spread right from the beginning. They average the figures out to $648 million, and that is presented in the abstract as the cost to develop a new drug – substantially less, the authors note, than the Tufts estimate (most recently $2.7 billion).
My first thought was that it’s at least better than Donald Light’s figure of $43 million, which is my natural look-on-the-bright-side disposition coming through. But several important factors come to mind that I don’t think the $648 million figure takes into account. Famously, among those who argue about such things, the Tufts figure includes opportunity cost / cost of capital estimates. This drives some critics into what I can only describe as a state of rage, and my usual response to them is that I will be glad to hold any sum of money they wish and give them the exact dollar amount back in ten years. No one has taken me up on this offer. If you feel that this deal would be a loss to you, but that it would not be a loss to a business like a drug company, you need to examine your views a bit more closely. And if you feel that it would be a loss, but that this should not be included in the cost of developing a drug, my question is where are we going to put it, then? Because it’s real. The timelines involved in the drug business make it quite real indeed.
This paper, though, does include an adjusted column of figures, noting the years each company took to develop its drug and assuming a 7%/year cost. That takes the range from $203 million to $2.6 billion (basically, the Tufts figure), for an average of $757 million. But there’s an even bigger consideration than the cost of capital: the cost of failure.
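To make that adjustment concrete, here’s a toy version of the compounding (my own sketch, with invented figures rather than the paper’s actual data): carry each year’s R&D spending forward at 7% per year to the approval date, and compare the result with the nominal total.

```python
# Toy illustration of a cost-of-capital adjustment (not the paper's actual
# method or data). Each year's spend is compounded forward at 7% to the
# final (approval) year. All figures are made up for the example.

RATE = 0.07  # assumed annual cost of capital, as in the paper's adjustment

def capitalized_cost(annual_spend, rate=RATE):
    """Compound each year's spending forward to the final year."""
    n = len(annual_spend)
    return sum(spend * (1 + rate) ** (n - 1 - i)
               for i, spend in enumerate(annual_spend))

# Hypothetical company: $65M/year for ten years = $650M nominal spend
spend = [65.0] * 10
nominal = sum(spend)
adjusted = capitalized_cost(spend)
print(f"nominal: ${nominal:.0f}M, capitalized at 7%: ${adjusted:.0f}M")
# -> nominal: $650M, capitalized at 7%: $898M
```

Even at a modest 7%, a decade-long program picks up an extra third or so on top of the cash actually spent, which is why leaving the adjustment out flatters the totals.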
And that, to me, sinks this entire paper. We have in this business somewhere around a 90% failure rate in the clinic. Picking companies’ first approvals disproportionately selects for the fortunate ones who succeeded their first time out. In other words, this estimate ignores (as much as possible) the cost of clinical failure, and that cost is one of the central facts of the entire drug industry.
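The arithmetic of that failure rate is worth spelling out. Here’s a deliberately crude model (my own hypothetical numbers, not anyone’s published figures): with a ~10% clinical success rate, the expected cost per approved drug is the cost per attempt divided by the success probability, which is another way of saying that the one winner pays the bills for the nine losers.

```python
# Crude expected-cost model; all figures are hypothetical, for illustration.
SUCCESS_RATE = 0.10          # i.e. the ~90% clinical failure rate cited above
COST_PER_ATTEMPT_M = 250.0   # assumed spend per clinical program, in $M

# Expected cost per APPROVED drug: divide through by the success probability
expected_cost_per_approval = COST_PER_ATTEMPT_M / SUCCESS_RATE
print(f"${expected_cost_per_approval:.0f}M per approved drug")
# -> $2500M per approved drug

# The portfolio view gives the same answer: fund 10 programs, 1 succeeds,
# and all 10 bills land on the winner.
portfolio_cost = 10 * COST_PER_ATTEMPT_M
assert portfolio_cost == expected_cost_per_approval
```

Counting only the companies whose very first program succeeded is, in effect, quietly setting that divisor to one.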
Let’s do a thought experiment. Looking at the list of companies, most of them have not had a drug approval since the ones under study here (as you’d figure). So instead of taking the numbers up to the date of that first approval, how about taking them up to right now? The amount spent per drug will be creeping ever higher, since they’re still spending R&D money and still have only one drug to show for it. When they do get a second approval, we can change the denominator to a “2” and bring it back down again, but the point is that several of these companies will likely never get a second approval. That’s the odds of the game. You can argue, I suppose, that this is an unfair comparison because now you’re including costs of Drug Number 2, but you can be sure that many of the companies on this new paper’s list were working on other projects at the same time that they were getting their first drug through.
So how do we account for the cost of these failures, which are as real as can be? Matthew Herper has already done this calculation for us, as of a few years ago. He looked at a list of 98 drug companies over a ten year period, added up the stated R&D costs, and divided by the number of drug approvals (exactly as in the preceding paragraph). This is not the most elegantly nuanced financial analysis, but there’s a lot to be said for it, because that’s real money going into the hopper every year. And what you find is that you don’t even get down to the level of the Tufts estimate (as it was at the time) until you get down to nearly company #40 on the list (!) Now, I realize that the “R&D costs” category varies from company to company, but after you’ve done 98 of them, you should be closing in on something meaningful.
I have one more objection: not all the drugs on that list of ten are equal, by any means. The cheapest of the lot is a new liposomal formulation of vincristine (from Talon), and that is just not the same as coming up with a truly new drug from scratch. The estimate for the second cheapest, pralatrexate from Allos, does not take into account that there were years of collaboration between NIH, SRI, and Sloan-Kettering that identified the drug candidate itself (which is in the well-traveled antifolate area) some years before Allos even came into existence. I’m not saying anything against these companies, far from it. They actually identified ways to get useful compounds onto the market for much less money than usual, and good for them. But holding them up as examples of how much it costs to “develop a new drug” is a bit off.
So while this paper is certainly not useless, it’s not as useful as it advertises itself to be, either. Some of the points I’ve raised here do come up in it, but I don’t find the arguments against them persuasive. In the end, the universe of one-drug/first-drug companies is just not anywhere near a representative sample. (In fact, Herper’s analysis found just that – the one-drug companies in his ten-year window had the lowest cost per drug, and the companies that had more than one approval during that time spent more and more per drug, up to many billions). As far as I’m concerned, this paper sets a lower bound, and that’s all.
Friday brought news of a drug-company maneuver that I had never heard of, and didn’t even realize was possible. First, a bit of background; the stage needs to be set properly.
One of Allergan’s products is Restasis, used for dry eyes, which is an ophthalmic formulation of cyclosporine. It’s a valuable part of their portfolio (net revenues of more than a billion dollars per year), but it’s under threat from a patent challenge. Mylan and Teva are both trying to force the drug off patent before its appointed time (which is about 2024). Last December, the US Patent Office granted an inter partes review of the relevant patents, a decision that did not go down well with Allergan or its investors. That form of patent review has been around since 2011 and the America Invents Act, and its purpose is specifically for prior art objections to a granted patent. I’m going to pass on offering an opinion on whether Mylan’s challenge is justified or its chances for success, noting only that getting to the IPR stage does mean that it’s a serious one.
There things stood, as of Friday. Generic challenges to lucrative patented drugs are a regular feature of life in the business, but what happened next wasn’t (or not yet). Allergan announced that they had transferred the patent rights for Restasis to the St. Regis Mohawk Indian Nation, for an up-front payment and continuing annual payments to the tribe. Why would one do such a thing? Well, it turns out that patented IP owned by the tribe is protected from inter partes review challenges by the tribe’s sovereign immunity. The Mohawks are, then, immediately moving to dismiss the PTO’s actions. Let me tell you, on Friday afternoon the sound of people all over the biopharma world slapping their foreheads was echoing through the boardrooms, office suites, and hallways.
Is this going to hold up? I Am Not A Patent Lawyer, in this question even more than usual, but the opinions I’ve seen so far are that yes, it very likely is. There are apparently several relevant legal precedents, and clearly both Allergan and the St. Regis Mohawk Nation have received expensive legal counsel that it’s a worthwhile effort. I’m going to assume, for the sake of argument, that they’re going to get away with this one as the law stands. So the next question is, should they? Is this a good thing or a bad one?
Awful, as far as I’m concerned. Awful on several levels. For one, this is not how the patent system (for all its flaws) is supposed to work. “The validity of your patents is subject to review, unless you pay off some Indian tribe” does not seem like a good way to run an intellectual property system. This has changed the balance of the system towards whoever has the cash to cut such a deal. At the very least, the whole Hatch-Waxman framework has probably taken a hit. Second, this absolutely cannot help but look like a slimy legal trick, an association the drug industry absolutely does not need any more of. You don’t have to go into the nuances of prior art or the inter partes review system for people to think this deal smells. That fragrance, needless to say, will not stick just to Allergan; the whole industry gets to wear it, as far as the public’s concerned. And third, God help us, this sets a precedent. When CNBC asked the tribe’s lawyer if they were open to doing more deals like this, he asked them to be sure to print his phone number. And this will no doubt engage the attention of other tribes and other lawyers, compounding the damage done in reference to those first two points.
Is there anything that can be done about this? From what I understand, the answer is “Yes, but it’s a matter for Congress”. The law can be changed, and Congress has every right to do so. But think about what Allergan has done for us: now the drug industry is in a situation where it looks bad, once again, and the only thing that can be done about it is to bring Congress’ attention to drug patent law and pricing. Probably just in time for the 2018 midterm election. What a clever idea! Thanks so much.
Update: I should note that Allergan is also in Federal court in Texas, arguing that generic companies are infringing its Restasis patents. Casting the Sovereign Immunity Spell doesn’t (as I understand it) affect this, but I await clarification.