clamp / median / range
Jul. 2nd, 2025 02:45 am
https://dotat.at/@/2025-07-02-cmp.html
Here are a few tangentially-related ideas vaguely near the theme of comparison operators.
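The post itself is behind the link, so this is only a guess at its subject, but the title points at a tidy identity: clamping a value into a range [lo, hi] is the same as taking the median of the value and the two endpoints. A minimal sketch of that identity (the function names are mine, not the post's):

```python
def clamp(x, lo, hi):
    """Constrain x to the closed range [lo, hi]."""
    return max(lo, min(x, hi))

def median3(a, b, c):
    """Median of three values: sort and take the middle one."""
    return sorted((a, b, c))[1]

# For any lo <= hi, clamp(x, lo, hi) == median3(x, lo, hi).
assert clamp(5, 0, 10) == median3(5, 0, 10) == 5      # inside the range
assert clamp(-3, 0, 10) == median3(-3, 0, 10) == 0    # below the range
assert clamp(42, 0, 10) == median3(42, 0, 10) == 10   # above the range
```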
Only a few of you failed out of counter-terrorism theater club: when you are handed select choices to investigate, the money wants you to investigate those. The money does not appreciate when you question their premises/choices or study things they didn’t pay you to consider.
The goal of murder is to make someone(s) dead. If person(s) are considered socially/hierarchically important, we use a different term. To quote Booth in the Sondheim musical Assassins: “Murder is a tawdry little crime…shopkeepers get murdered, but a president is assassinated.” This changes how we view these nuclides. Classic mystery elements start attaching themselves when you consider them as murder weapons. Things like opportunity, methodology, and traceability. Committing murder is one thing. Getting away with it is an entire literary genre.
Which brings us to the killing nuclide of choice by popular acclaim, Po-210. You all know it works. You know it works because you watched a man die, more or less in real time on the news, by acute radiation sickness from an absolutely massive uptake. RIP Alexander Litvinenko. While this was most definitely an act of murder, just acquiring enough Po-210 to kill with in the first place makes it incredibly difficult to pull off. Like, Bond supervillain caper-grade levels of difficult. Unless, of course, it is state sanctioned.
It isn’t credible as it’s just short of an act of war. If a doctor hadn’t been observant, remembered their training, and grabbed a meter from Oncology, we might never have known what killed Litvinenko. And, within a few years, the Po-210 in his corpse would have decayed to a trace quantity of lead so small you wouldn’t notice it among all the rest.
This is similar to the doctor who twigged to what was going on in Goiânia with Cs-137, really. Cs-137 is a popular irradiator & calibration source, but less than it used to be. With a 30-year half-life, anything made in the last century is still out there, still radioactive enough to worry about, and we made a lot of sources with it. No, I won’t be sharing lethal dose numbers. Cs-137 is also damn annoying because of its chemical behavior and the form we normally work with it in, cesium chloride. As far as your body is concerned, cesium is just as good as sodium or potassium. Having it as a salt makes it very bioavailable in the body. Most of the Po-210 administered to Litvinenko was purged from his body by pooping because your body has no chemical use for it, but there was still plenty left to kill. But cesium? Anywhere your blood goes, Cs-137 goes. As a salt, CsCl would be very easy to administer in food. The drawback is that it, apparently, tastes awful. Let us give thanks and pity to Tom from Explosions & Fire and friends for doing this taste test so we don’t have to.
But as I said, Cs-137 is falling out of favor because of its long half-life relative to human attention span/civilization and its radiological dispersal device potential, rather than as a murder weapon. Where it had previously been used in industrial radiography applications, Cs-137 has been mostly replaced by Ir-192. Because Cs-137 (~30-year half-life) & Co-60 (~5 years) gamma cameras regularly got lost by radiographers, there was an interest in replacing them all with a suitable industrial radiography source with a much shorter half-life, so there’d be fewer long-term consequences from losing them. Ir-192 is much more friendly with its 74-day half-life. If you lose it, not much of a problem anymore within two years. Also, when you crack it open, there isn’t a highly dispersible salt in there, just a small piece of metal. That would be very rude to put in someone’s pillow. But much like Cs-137, Ir-192 is just as much of a hazard to the would-be murderer as to their victim. Also, that source is very easy to find, and a lethal one will be relatively new, which means it is also well documented and licensed. Stealing them is hard because, even when lost, they tend to still be locked up in shielded housings that are hard to crack open. So, you’re looking for something that’s widely available and easy to control as the would-be murderer.
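As an aside, to put that "not much of a problem anymore within two years" line in rough numbers, here's a quick decay calculation using the half-lives quoted above (a sketch only; whether a lost source is still a problem also depends on how hot it was to begin with):

```python
import math

def fraction_remaining(half_life_days, elapsed_days):
    """Fraction of a radionuclide's activity left after elapsed_days of decay."""
    return 0.5 ** (elapsed_days / half_life_days)

two_years = 2 * 365.25
# Half-lives quoted in the post: Ir-192 ~74 days, Co-60 ~5 years, Cs-137 ~30 years.
for name, half_life in [("Ir-192", 74), ("Co-60", 5 * 365.25), ("Cs-137", 30 * 365.25)]:
    print(f"{name}: {fraction_remaining(half_life, two_years):.2%} left after two years")

# Ir-192 is down to roughly a tenth of a percent of its original activity
# after two years; Cs-137 has barely budged.
```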
Enter, P-32.
With a 14-day half-life, ridiculously energetic beta decay, and phosphorus being a nutrient your body craves almost as much as electrolytes, P-32 is, like CsCl, definitely a decent murder weapon…if you have enough.
Good news, would-be murderer! It’s one of the most commonly used radionuclides.
GOOD NEWS EVERYONE! There are very few inspiring incidents to reference for this.
Sadly, not zero.
The sad commonality to all the attempted murders by radionuclide is that not only did they fail, but they almost always unintentionally exposed innocent bystanders. Often family members.
Even if mass casualty wasn’t the goal, it still happens. :(
~fin~
The post CHOOSE YOUR OWN RADIATION ADVENTURE: Murder By Nuclide appeared first on Funranium Labs.
Like anyone in drug discovery, I’m always ready to hear about new assays that will let us tackle unusual and difficult targets in as close to real-world living systems as possible. We have a lot of assay technologies already, but no one can say that we’ve got things covered as well as we’d like. There are always tradeoffs! Just to pick a few, you have to balance sensitivity, reproducibility, fidelity to the behavior of natural proteins (for example, if you have to label them), realism and disease fidelity of the cell lines that you can use, the speed with which you get assay readouts, and not least the availability and expense of the tools (biological and technological) you need to get the assays to work at all. Oh yeah, and assay throughput. Always throughput, at least on drug-industry scales.
You’ll be making some compromises along the way, inevitably, so your task is to try to show that these are not enough to invalidate your results. One way to do that is to set up another assay that should be measuring the same effects, but through different readout technologies, and see if they both point the same way. Which doesn’t speed things up, of course, but getting untrustworthy results more quickly is not exactly a winning strategy, either (!) Drug discovery in general is a game of "How far can I go in making these steps easier and faster without making them invalid?", because if you don't attack things in that manner you may never get anywhere at all.
This new paper has an interesting approach which I will be very glad to see put into action. The authors are using NanoLuc as a readout technology, which is a widely used system. It’s a light-emitting luciferase protein (whose precursor was isolated from a glowing deep-sea shrimp species) that has been around for ten years or so, and it has a number of advantages. It’s relatively small, and can be appended to larger proteins without necessarily disrupting their native function (although you will be well advised to try that in more than one way and check the results every time). It can be broken into two parts (the “NanoBit” assay technique), so that an active luciferase is only formed when these two pieces, one of which (HiBit) is quite small indeed, come into close proximity. So you can set up a system where each separate piece is attached to a different protein in a cell and you monitor whether or not these two interact, or tag some single protein with the HiBit piece and track it down separately by expressing a partner. You’ll want to try these in more than one way as well, since some of these splits almost always work better than others, for reasons that are not always clear. But NanoLuc in general has a very strong readout, and is also very stable in vivo. Its substrate (furimazine and derivatives thereof) is unique to the enzyme, and is also quite stable. There’s a lot to like.
What these authors are doing is engineering cells with a mixture of intact NanoLuc tags or split NanoBit techniques to perform what they call a “structural dynamics response” (SDR) assay. They’ve found that when a ligand binds to a protein involved in such a luminescent system, the light output is perturbed in detectable ways (often but not invariably a gain of signal). Just how this works is still being investigated - the paper suggests (plausibly) that outright structural shifts on ligand binding, effects on vibrational coupling, or some combination of the two could be at work. The paper shows this effect on seven proteins that are very different structurally and functionally, raising hopes that it could be a general effect. Overall, readouts from the SDR format track other assay data from known ligands/inhibitors well, with a few odd exceptions, and with generally better sensitivity. It’s striking that this effect per se has not been noticed or exploited before.
You can do this with crude cellular lysates in 1536-well plates, which can be read out in standard equipment, and another major advantage is that you don’t need specialized antibodies or other such reagents. So the barrier to entry is not high. The biggest hurdle looks like it will be experimenting with your target protein in the various ways that it can be Nano-luc-ified (as mentioned above), and the paper shows several examples of this. But that’s generally not too difficult a process. This should give assay access to a number of proteins that are difficult to express or purify, or ones that don’t work well outside of their cellular contexts or in overexpression systems. And those are just the situations you’d want to be able to address!
I am off the general internet this month, because we’re having a nice summer, I have a lot to do and I feel like just getting away from everything for a while.
My free weekly newsletter, Orbital Operations, continues to emit every Sunday morning.
Email and messaging apps are still on, but, honestly, everything else is off. Peaceful summer.
OK, this one’s going to be a bit chem-geeky, but it shows what you can accomplish when you really bear down on a particular reaction. This paper is looking at a constant concern in scaling up reactions: thermodynamics and heat transfer. I’ve written about this sort of thing before, and the problem is pretty easy to grasp.
A lot of reactions give off some amount of heat as they proceed, but that behavior varies over a wide range. There are even spontaneous reactions that are endothermic (i.e., the reaction mixture cools down!), and that’s usually due to some big effect on the entropy term in the Gibbs free energy equation. But if your reaction does give off a noticeable amount of heat, you have some real thinking to do as you scale it up. The rise in temperature that occurs can make the reaction rate increase even more, which will give off even more heat more quickly, which will make the rate go up even faster… well, you can see where this leads, and that’s often all over the inside surfaces of your fume hood (if not the walls and ceiling of your entire lab!)
We shall assume that you want to avoid this result, since most of the time most of us do. Your options are to find another reaction that isn’t so exothermic (which may not be possible) or to find a good way to remove the heat from the one you have. On a small scale, that’s generally not a concern. If you’ve got a 5 mL solvent volume, you’re going to have plenty of surface area relative to that, and cooling the walls of the vessel will almost surely do the trick: the good ol’ water bath / ice bath / dry ice bath, depending on circumstances. But here comes our old friend the square/cube law, which ruins the otherwise unassailable logic behind all those 1950s Giant Creature films. I will note in passing that a sentimental favorite of mine in that genre (although it admittedly still stinks) is The Deadly Mantis, which to my excitement as a kid not only featured a gigantic praying mantis but also stock footage of the Grumman Flying Wing bomber. No, I am not making any of this up. Where were we?
Right, thermodynamics. To a good approximation, that’s where we always are, anyway. As you go to larger reactions, the volume of the reaction vessel increases as a cubic function, but the surface area only increases as a square. So the cooling-via-the-outer-walls technique runs into trouble pretty quickly, as do numerous key functions of Giant Deadly Mantises, not least supporting their own weight with only an exoskeleton and taking in enough oxygen. You can ameliorate the cooling problem for a while with better stirring techniques, but in the end you’re either going to have to find more effective ways of soaking up that heat or you’re going to go back to that “run some other reaction” option. And at some point that one starts to look more attractive than you ever thought it could.
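To put rough numbers on that square/cube problem, here's a toy calculation treating the reaction mixture as a sphere (purely for the geometry; real vessels aren't spheres, and how much of the wall is actually jacketed matters too):

```python
import math

def wall_per_volume(volume_mL):
    """Surface-area-to-volume ratio (cm^2 of wall per cm^3) of a sphere of the given volume."""
    radius = (3 * volume_mL / (4 * math.pi)) ** (1 / 3)   # 1 mL == 1 cm^3
    return 3 / radius                                     # area/volume for a sphere = 3/r

for volume in (5, 5_000, 5_000_000):   # 5 mL flask, 5 L flask, 5 m^3 reactor
    print(f"{volume:>9,} mL: {wall_per_volume(volume):.3f} cm^2 of wall per cm^3")
```

Every thousandfold increase in volume costs you roughly a factor of ten in cooling surface per unit of reaction mixture, which is the whole scale-up problem in one number.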
So how do you model this sort of thing, in lieu of just setting up progressively larger reactions and seeing if they geyser out on you? The paper above discusses a common heat balance equation, which is a combination of Newton’s law of cooling and the intrinsic reaction thermodynamics. It should be emphasized that as a model system, the basic form of the equation assumes even temperature distribution and perfect mixing, which conditions may very well not obtain in your own reaction. And the “removal of heat” part of the equation is the hardest to predict reliably. Large-scale reactions are often brought up to temperature by circulating relatively thin layers of heating medium around them in jacketed reactors, but you still have that stirring problem and the way that larger and larger amounts of the reaction volume are out of contact with the outer walls. You’ll also have to consider radiative cooling, the heat capacity of the materials your reactor is made out of in the first place, and more.
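The basic shape of that heat balance (Arrhenius heat generation against Newton's-law jacket cooling) is simple enough to sketch. Every number below is an invented illustration value, and this is not the paper's fitted model; it just shows how the runaway feedback described above falls out of the balance between generation and removal:

```python
import math

def peak_temperature(UA, T_jacket=300.0, T0=300.0, C0=2.0, dt=0.5, seconds=3600):
    """Toy batch-reactor heat balance: exothermic first-order reaction vs. jacket cooling.

    All parameters are made-up illustration values, not anything from the paper.
    """
    A_pre, Ea, R = 5.0e8, 7.0e4, 8.314     # Arrhenius prefactor (1/s), activation energy (J/mol), gas constant
    dH, V, rho_cp = -1.5e5, 100.0, 2.0e3   # heat of reaction (J/mol), volume (L), heat capacity (J/(L*K))
    T, C, T_max = T0, C0, T0
    for _ in range(int(seconds / dt)):
        k = A_pre * math.exp(-Ea / (R * T))    # rate constant at the current temperature
        reacted = min(k * C * dt, C)           # mol/L converted in this time step
        heat_in = -dH * reacted * V            # J released by the reaction this step
        heat_out = UA * (T - T_jacket) * dt    # J removed through the jacket this step
        T += (heat_in - heat_out) / (rho_cp * V)
        C -= reacted
        T_max = max(T_max, T)
    return T_max

# Same chemistry, different cooling capacity: weak cooling lets the
# rate/temperature feedback loop win; strong cooling holds the batch near the jacket.
for UA in (200.0, 5000.0):                      # overall heat transfer coefficient * area, W/K
    print(f"U*A = {UA:>6.0f} W/K -> peak temperature {peak_temperature(UA):.0f} K")
```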
The paper linked above uses a deceptively simple reaction as a proving ground: aniline is N-benzylated (and di-benzylated) with benzyl bromide and some base. Now, any hack can set that up and get some sort of mixture (or push it pretty much all the way to the di-alkylated product), but just you try to make it selective for the mono-benzyl. You can see that pretty quickly by counting up the number of different techniques that have shown up in the literature to do it (one of my 1980s Laws of the Lab was “When there are twenty different ways to run a given reaction in the literature, it means that there is no good way to run the reaction”). Practically speaking, you’re probably going to run this as a protection/deprotection, or make the benzoyl amide and reduce it, rather than shooting for a direct bareback monobenzylation. But if that's what you really want instead, read on to see what you're going to need to do in order to accomplish that.
The authors have published before on this system when using a weaker base (like a tertiary amine), and here they investigate a much stronger one (lithium hexamethyldisilazide, LiHMDS). That’s going to change the reaction kinetics around substantially (which it needed), but preliminary tests showed that the process was strongly exothermic for an add-all-the-base-at-once process (even at -60 °C the internal temperature rose noticeably). They ran a series of reactions at different temperatures, different reactant ratios, and so on, and one thing they noticed was that the simplest reaction scheme (aniline goes to lithiated aniline, reacts with benzyl bromide to give product) simply did not line up with the experimental results at all when modeled that way. An equilibration model (where the unreacted aniline and its lithio-derivative are in equilibrium, as is the mono-benzyl form and its lithium anion) worked much better in predicting reaction rates and changes in temperature.
They went on to refine that model for a slow-addition reaction, which is what you’d do in the real world. But slow addition of what to what? If you slowly drop in the aniline to a mix of LiHMDS and benzyl bromide, then you get mostly dibenzylation, as you’d figure (since each bit of aniline is overwhelmed by excess base and alkylating agent). But slow addition of benzyl bromide to the equilibrating mixture of aniline/lithioaniline gives you much higher selectivity for the mono-benzyl form, and slow addition of the LiHMDS to the aniline/benzyl bromide is better yet. The final model uses five rate constants for the interplay of lithiation reactions and benzylations, and a key factor is that the bulky LiHMDS has a bit more trouble deprotonating the mono-benzyl product as compared to its reaction with plain aniline. That gives the mono a chance to survive and prosper.
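A toy version of that slow-addition picture shows the effect of that last factor. The little model below drips the base into an aniline/benzyl bromide pot and tracks mono- versus di-benzylation; all of the rate constants are invented, dimensionless illustration values, and the paper's actual five-rate-constant model (with its proton-transfer equilibria) is not reproduced here:

```python
def mono_vs_di(k_ratio, steps=20000):
    """Toy slow-addition benzylation: base dripped into aniline + BnBr.

    All rate constants are invented, dimensionless illustration values;
    this is not the paper's fitted model, and the proton-transfer
    equilibria are left out entirely.
    """
    k_deprot_aniline = 50.0           # aniline + base -> lithioaniline
    k_deprot_mono = 50.0 * k_ratio    # mono-benzyl product + base -> its anion
    k_alkyl = 50.0                    # either anion + BnBr -> benzylated product
    aniline, bnbr, base = 1.0, 1.0, 0.0
    anion_a = anion_m = mono = di = 0.0
    dt = 1.0 / steps
    for _ in range(steps):
        base += 1.0 * dt                         # slow addition: one equivalent over the whole run
        r1 = k_deprot_aniline * base * aniline   # lithiation of aniline
        r2 = k_deprot_mono * base * mono         # lithiation of the mono product
        r3 = k_alkyl * anion_a * bnbr            # benzylation -> mono product
        r4 = k_alkyl * anion_m * bnbr            # benzylation -> di product
        aniline -= r1 * dt
        mono    += (r3 - r2) * dt
        di      += r4 * dt
        anion_a += (r1 - r3) * dt
        anion_m += (r2 - r4) * dt
        base    -= (r1 + r2) * dt
        bnbr    -= (r3 + r4) * dt
    return mono, di

# The harder it is for the bulky base to deprotonate the mono-benzyl product
# (relative to aniline), the more of the mono survives to the end of the addition.
for ratio in (1.0, 0.2, 0.05):
    mono, di = mono_vs_di(ratio)
    print(f"k(deprot mono)/k(deprot aniline) = {ratio:4.2f}: mono {mono:.2f}, di {di:.2f}")
```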
The final step was making a more accurate simulation of heating (and heat removal). It appears that the heat capacity of the stirring paddle/shaft and glass vessel for these experiments was a significant factor in the heat removal, for one thing. The team also conducted “heat-pulse” experiments to see how well the system dissipated thermal energy, and the final model after all these were taken into account matched the experimental data very closely.
Now, a chemist like me is never going to do any of this. I work on small scale, and I am only distantly concerned with the yield of any of my reactions. But there are plenty of chemists who care a great deal, and the details of this sort of modeling are of great interest to them. A lot of real-world reactions have the sorts of nonlinear behavior that this one turned out to have, and being able to account for it is crucial to getting a handle on the real-world behavior of larger reactions. And hey, they discovered a good way to mono-benzylate aniline along the way!
The ware for June 2025 is shown below.
A big thanks to Chris Combs for this handsome contribution! Despite being ’80s vintage, the board is in mint condition.
The Ware for May 2025 is a Facebook (now Meta) Portal. I’ll give the prize to opticron for the quick guess; email me for your prize!
I actually considered redacting the audio DSP, but decided to leave it in because it’s an interesting detail. Despite having a big burly SoC with multiple cores and a GPU, the designers still opted to integrate a dedicated chip for audio DSP. I guess silicon has gotten cheap, and good software engineers have only gotten more expensive.
I’ll also give an honorable mention to Azeta for doing a great “spirit of the competition” analysis. Thanks for the contribution!