Two newly uncovered documents offer a more nuanced account of Rosalind Franklin’s contribution to the discovery of the DNA double helix. The findings challenge some of the prevailing narratives surrounding the discovery for which James Watson, Francis Crick and Maurice Wilkins received the Nobel prize in 1962.
By many popular accounts, the key insight that helped crack the mystery of DNA’s structure came when Wilkins showed Watson an x-ray image from Franklin’s lab without her permission. Writing in Nature, researchers Matthew Cobb and Nathaniel Comfort note that this image, known as photo 51, is ‘treated as the philosopher’s stone of molecular biology’ with Franklin often painted as having ‘sat on the image for months without realising its significance, only for Watson to understand it at a glance’.
However, during a recent visit to an archive at Churchill College in Cambridge, UK, Cobb and Comfort discovered two previously overlooked documents – an unpublished news item that was drafted for Time magazine at the time of the double helix discovery, and a letter from one of Franklin’s colleagues to Crick – that cast new light on the discovery of the double helix. ‘Franklin did not fail to grasp the structure of DNA. She was an equal contributor to solving it,’ they write.
Even if it might gross us out now, experts predict that edible insects will play a significant role in our future diets. Not only are these bugs rich in protein and nutrients, but they can also be farmed more sustainably. For example, farmed crickets have a water footprint roughly one-third that of beef cattle, require 50-90% less land, and emit 100 times less greenhouse gas during farming. In addition, whereas most of a cow is inedible to humans, roughly 80% of a cricket can be eaten, meaning less waste.
All things considered, it is in our best interest to get comfortable with eating insects. Unless you’re allergic to shellfish, that is.
Shellfish allergies are highly prevalent throughout the world, particularly in places where consumption is high. It’s estimated that 2% of people worldwide show immune responses to shellfish. In the U.S., roughly 6.5 million individuals have a shellfish allergy, making it twice as common as a peanut allergy. Shellfish plus seven other common allergens (milk, eggs, fish, tree nuts, peanuts, wheat and soy) make up 90% of food allergies in the U.S. that are not outgrown after childhood.
It’s not only through eating shellfish that one can have an allergic reaction. Occupational exposure at shellfish processing plants can cause reactions, often just through inhaling the airborne particles of crustaceans.
Unfortunately, the edible-insect revolution is not very welcoming for the many affected by a shellfish allergy. Upon consumption of crickets, mealworms or other insects, individuals with shellfish allergies are quite likely to have an allergic reaction. This holds for the occupational exposure route too, with reports of cricket-farm and pet-store workers sensitive to shellfish having reactions.
The high potential for cross-allergic reactions between insects and shellfish might only make sense once you look at their phylogenetic tree. Shellfish is a colloquial term encompassing any ocean-dwelling animal with an exoskeleton. None of them are actually fish. Bearers of the shellfish moniker come from three different phyla: Mollusca, containing mollusks like snails, clams, and octopuses; Echinodermata, containing echinoderms like sea urchins and sea cucumbers; and crustaceans like shrimp and lobster, which are actually a subset of the phylum Arthropoda.
You might know arthropods as all the things we typically call “bugs,” like spiders, millipedes, and all sorts of insects. Crustaceans like crayfish or prawns are, quite literally, just the bugs of the sea. With this evolutionary relationship in mind, it’s easier to imagine how immune systems often fail to differentiate between bugs of the land and the ocean.
Most shellfish allergies are due to the immune system reacting to proteins. Several different shellfish proteins can be responsible (around 34 have been identified), including arginine kinase and sarcoplasmic calcium-binding protein, but the overwhelming majority of those with shellfish allergies show sensitivity to the protein tropomyosin.
As Dr. Zachary Rubin, pediatric allergist, wrote to me, “The majority of people allergic to shellfish are allergic to a protein called tropomyosin, which is also found in many insects, so it’s not usually a good idea to consume insect-containing foods if you have a shellfish allergy.”
If you’ve studied biology or seen the famous “The Inner Life of the Cell” animation, you might recognize tropomyosin as one of the proteins integral to our muscles contracting. Without tropomyosin, I couldn’t move my fingers to type this article. But if tropomyosin is found in the muscles of most animals, why are so many people allergic to shellfish tropomyosins but not other animal tropomyosins?
It comes down to the sequence of amino acids that make up a tropomyosin protein. You can think of a protein as a chain, with each individual link being an amino acid. A protein chain can be constructed of many different combinations of amino acids in various orders to create countless distinct proteins with unique properties.
All tropomyosin proteins are similar to a certain degree—otherwise, they wouldn’t be tropomyosin proteins—but within those bounds, the amino acid sequence of tropomyosin varies significantly between animals, and some animals’ tropomyosins are more alike than others. Across crustacean species such as prawns, crabs, and lobsters, the amino acid sequences can be 95-100% identical. The tropomyosin sequence is also well conserved across shellfish more broadly, even though they span two different phyla: Arthropoda and Mollusca.
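To make the idea of "percent identity" concrete, here is a minimal Python sketch that compares two pre-aligned sequence fragments position by position. The two fragments below are made up for illustration; they are not real tropomyosin sequences, and real comparisons are done on full-length, properly aligned proteins.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of positions with the same amino acid, for pre-aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("this toy comparison assumes equal-length, pre-aligned sequences")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100 * matches / len(seq_a)

# Made-up 20-residue fragments standing in for two crustacean tropomyosins.
shrimp_fragment  = "MDAIKKKMQAMKLEKDNAMD"
lobster_fragment = "MDAIKKKMQAMKLEKDNALD"

print(f"{percent_identity(shrimp_fragment, lobster_fragment):.0f}% identical")  # -> 95% identical
```

One mismatched position out of twenty gives 95% identity, the low end of the range quoted above for real crustacean tropomyosins.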
One 2020 study found that certain insect species elicited less of an immune response in patients allergic to shrimp tropomyosin. Mealworm, waxworm and superworm larvae may represent less allergenic insect options.
Another aspect of shellfish tropomyosins that may contribute to their allergenicity is their high thermostability. When proteins are heated, most will denature, or “melt.” Typically, this means that they are no longer active. Having lost their 3D structure, the proteins can’t be recognized by receptors or other important molecules and are no longer detectable by the immune system.
Treating proteins with high heat to denature them can be a strategy for making hypoallergenic products. The tricky thing is that the temperature needed to denature a protein varies from protein to protein, and tropomyosins happen to have very high melting points. This thermostability is thought to contribute to their highly allergenic nature, since treatment with heat, or even high pressure, isn’t sufficient to render them anallergenic.
Unfortunately, due mainly to tropomyosin and a handful of other allergenic proteins, the edible insect revolution is not for those with shellfish allergies. Given the popularity of alternative protein choices for pet foods, you may want to keep the shellfish-insect connection in mind when choosing kibble for your furry friends.
The difference between hypoallergenic and anallergenic pet foods is subtle but important. The Greek prefix hypo- means “lacking” or “less.” Hence, someone who is hypothermic is lacking in heat, and someone who is hypoglycemic has less blood glucose than they should. The prefix a- (or an- when it precedes a word that starts with a vowel) is also Greek but means “no,” “not,” or “absence of.” We see this in terms like apolitical (not political) or anorexia (absence of appetite). Thus, hypoallergenic dog food contains fewer allergens, whereas anallergenic dog food contains none (or as close to none as possible).
What does that actually mean for pet food? Well, for most cats and dogs, proteins are the primary allergen, so hypoallergenic foods tend to use partially hydrolyzed proteins: proteins that have been broken down into smaller pieces and are therefore less likely to be recognized by an animal’s immune system and trigger a reaction. They tend to use a single source of protein instead of a blend and a single source of carbohydrates. Hypoallergenic pet foods often avoid the most common allergens for cats or dogs, sometimes employing specific parts of these animals—hydrolyzed chicken liver is common—or novel and “weird” protein sources like kangaroo, rabbit, or soybeans.
The problem is that roughly 25-50% of dogs will still have allergic reactions when fed hydrolyzed diets derived from proteins they’re allergic to. This may be due to incomplete hydrolyzation, leaving protein fragments still big enough to be recognized by the immune system, or even just cross-contamination from some part of the manufacturing process. For dogs like this, veterinarians often turn to anallergenic diets.
Proteins are long chains of amino acids. Their mass is measured in a unit called a Dalton (Da), or more commonly, a kilo-Dalton (kDa) because scientists prefer working with smaller numbers whenever possible. Protein masses can vary widely. For example, insulin has a mass of roughly 5.8 kDa, whereas ATP synthase, the enzyme responsible for powering everything we do, has a mass close to 600 kDa. Alcohol dehydrogenase, the enzyme that processes any alcohol we drink, weighs roughly 170 kDa.
There isn’t a consensus on how big a protein needs to be to potentially trigger an immune response, but we can confidently say that the smaller the protein, the lesser the chance. Hypoallergenic dog food tends to have proteins in the 3-15 kDa range. In contrast, Royal Canin’s Anallergenic food—debuted in 2012 after over a decade of research—was the first pet food considered to contain extensively hydrolyzed proteins. In it, 95% of the proteins are smaller than 1 kDa, and 88% are broken down to the level of single amino acids!
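As a rough illustration of those numbers, here is a small Python sketch that converts a peptide's mass into an approximate residue count (using the commonly cited average of about 110 Da per amino acid) and buckets it using the size ranges quoted in this article. The function name and category labels are mine, not anything used by pet food manufacturers.

```python
AVERAGE_AMINO_ACID_MASS_DA = 110  # commonly used average residue mass, in daltons

def describe_peptide(mass_kda: float) -> str:
    """Bucket a peptide by mass, using the size ranges quoted in this article."""
    approx_residues = round(mass_kda * 1000 / AVERAGE_AMINO_ACID_MASS_DA)
    if mass_kda < 1:
        return f"~{approx_residues} residues: extensively hydrolyzed (anallergenic range)"
    if mass_kda <= 15:
        return f"~{approx_residues} residues: partially hydrolyzed (typical hypoallergenic range)"
    return f"~{approx_residues} residues: intact or lightly processed protein"

# A sub-1-kDa fragment, a mid-sized hydrolyzed peptide, and alcohol dehydrogenase for scale.
for mass in (0.3, 5, 170):
    print(f"{mass:>6} kDa -> {describe_peptide(mass)}")
```

It makes the logic plain: the further a protein is broken down, the fewer residues remain per fragment, and the less there is for an immune system to recognize.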
In one randomized, double-blind crossover study of 10 dogs with cutaneous adverse food reactions, the Royal Canin Anallergenic diet did not trigger an allergy flare-up in a single participant. In contrast, a hydrolyzed chicken liver diet (a typical protein source for hypoallergenic dog foods) triggered a flare-up in 40%.
Despite being a feat of scientific engineering designed to help dogs and cats get relief from a condition without many other treatments, there remains a degree of controversy around Royal Canin’s Anallergenic food. Why? Because its protein source is hydrolyzed poultry feathers.
Pet food marketing has long relied on messages about feeding your dog as you would the other members of your family or avoiding “filler” ingredients. Unfortunately, this has resulted in demonizing ingredients like corn meal or hydrolyzed poultry feathers, even when the science supports their inclusion. Despite appeal-to-nature-infused commercials referring to domestic dogs as “wolves” or “carnivores,” your Shih Tzu has evolved quite a bit from her wolf days and has different dietary requirements.
Dogs are not carnivores and haven’t been for thousands of years. They can digest grains quite well and benefit significantly from modern advancements in food processing, just like humans. Raw diets are dangerous for a multitude of reasons, and just because you wouldn’t want to eat an ingredient like hydrolyzed poultry feathers doesn’t mean it isn’t perfectly beneficial to your pet. Not to mention, the poultry feathers are so extensively broken down before being included in kibble that it makes about as much sense to call them feathers as it would to call a single brick a cathedral.
All of the dogs in the above-mentioned crossover study readily ate the feather-based food, and such diets seem acceptable even for the more traditionally discerning cats. Despite claims to the contrary, hydrolyzed poultry feathers are well-digested by both cats and dogs.
Other attempts to demonize poultry feathers as a source of protein rely on characterizing them as a “waste product” of human meat processing—as if that’s a bad thing. For sustainability purposes, utilizing every part of an animal is far preferable to disposing of a perfectly functional ingredient. Claims that Royal Canin is only using feathers to line their own pockets are equally nonsensical, given how expensive the extensive hydrolyzation process is, the extensive research and development funding that went into perfecting it, and how much cheaper it would be to continue to use the conventional protein sources they already have.
Anallergenic food is a powerful tool for veterinarians and pet owners to diagnose and treat severe pet allergies that, prior to its invention, often had no great treatments. The demonization of by-products, highly processed ingredients, or whatever else you want to call hydrolyzed poultry feathers is unscientific rubbish that will almost certainly lead to pets who would benefit from this food not getting it. It’s high time we overcame our fear of the unknown and instead marvelled at how science can find unique new solutions to age-old problems.
“This year’s [Nobel] Prize in Chemistry deals with not overcomplicating matters,” says Johan Åqvist, Chair of the Nobel Committee for Chemistry. It has a simple and catchy name: click chemistry.
There is a certain chemical reaction that is often referred to as the click reaction, but that’s a bit of a misnomer. Click chemistry is a framework, or methodology, for doing chemistry: specifically, for making complex organic molecules, mainly pharmaceutical ones.
A very short aside: the meaning of “organic” in organic chemistry is very different from its meaning in “organic food”. A food being labelled organic means it adheres to a set of guidelines that differ in each country but generally are, at least hypothetically, better for the environment (while nutritionally the same) and avoid using certain fertilizers and pesticides. Organic chemistry, meanwhile, just means that the molecules involved are made up almost entirely of carbon-to-carbon and carbon-to-hydrogen bonds, with some oxygen and nitrogen atoms thrown in for good measure.
With that out of the way, let’s take a look at the click chemistry sensation that has been sweeping the nations!
Just prior to the 21st century, K. Barry Sharpless got to thinking about the way we research new potential drug molecules. The most complex chemical structures weren’t made by chemists; they were made by nature. Far more often than not, researchers were inspired by an effect observed in nature. Chemists would then spend months or years trying to synthesize the same compound that the plant, animal, or microorganism made nearly effortlessly.
For researchers, effortless it was not. To build a complicated chemical structure could take dozens and dozens of steps, each of which needed to be optimized, and which generated waste and by-products. It was time consuming, money consuming and energy consuming—both in the literal sense, as lab equipment can use a lot of electricity, and for the researchers. Purifying the desired product at each step was a pain, and properly disposing of the toxic waste generated at each step posed an environmental concern. Barry Sharpless wondered if there wasn’t a better way of approaching this challenge.
It was in 2001 that he realized that nature was the key, but not in the way we’d previously thought. Where other chemists tried to imitate nature, using scores of different highly specialized chemical reactions to perfectly mimic natural compounds, Sharpless took a different viewpoint. He observed that nature builds all of the molecules it needs out of around 35 carbon-based building blocks, none of which are really that complex. Nearly infinite compounds can be created from just a few relatively small molecules, just by linking them together with a nitrogen or oxygen atom. It’s how DNA is made, as well as sugars and proteins.
Sharpless argued for a minimalist, streamlined approach to synthesizing complex molecules, wherein only a handful of really good (i.e. high-yielding and widely applicable) reactions would be used. He envisioned using a small range of organic “building blocks” derived from petrochemicals joined together by these really good reactions, to synthesize large chemical structures that could function as drugs or have other purposes.
The Nobel committee likens it to the IKEA flatpack approach. Having built an apartment’s worth of IKEA furniture in the last week, I love this analogy. Basically, builders (chemists) are provided with all of the necessary furniture parts (the building blocks), along with simple to use hardware like Allen keys (the very good reactions), and instructions that are easy enough for anyone to follow to put it all together.
Barry Sharpless defined criteria for being considered one of these excellent reactions—which he named “click reactions”—since they worked so well that they essentially just clicked molecules together like Lego. To be considered part of click chemistry, a reaction needs a few characteristics:
Modular (they can be used with many different building blocks)
High yielding (they don’t make many, if any, by-products, and make large amounts of the desired product)
Simple to purify (if they did make by-products, they should be non-toxic and easy to remove)
Simple reaction conditions (no fancy equipment or working under vacuum)
Use readily available starting materials (no super weird and expensive compounds)
Use either no solvent, or something benign like water that is easily removed
As I wrote above, there is one reaction that has become synonymous with click chemistry, to the point that it’s often referred to as simply “the click reaction”: the copper-catalyzed azide-alkyne cycloaddition, or CuAAC (Cu being the chemical symbol for copper). Barry Sharpless discovered it, but across the world in Denmark, at almost exactly the same time, so did Morten Meldal. Although Sharpless described its potential as enormous, neither researcher really knew that they were ushering in a new age of organic chemistry.
CuAAC quickly became the epitome of click chemistry. Before long, it was the go-to for attaching nearly any two organic molecules together. Among the many reasons it was so heavily embraced are how well it works and the triazole linkage that it creates. Triazoles are quite stable and are part of several important drugs, like fluconazole, a widely used antifungal medication. The CuAAC reaction sped up drug development tremendously and enabled all kinds of research that would have previously been very impractical, if not impossible.
However, there was one problem with the CuAAC reaction: its copper catalyst. Unfortunately, copper is highly toxic to living things. Some attempts were made to develop and include other compounds that could sequester the copper and prevent it from harming cells, but the next breakthrough came in 2004. This is where the third winner of the 2022 Nobel Prize in Chemistry comes in: Carolyn R. Bertozzi.
Wanting to study the complex sugars that sit on the surface of certain cells, Bertozzi was inspired by click chemistry’s simple and efficient nature. But, wanting to apply click chemistry to living cells meant finding a copper-free way of catalyzing the reaction. Looking back to the literature of 1961, she was inspired to use a ring-shaped molecule that was very unhappy being a ring.
Much like if you bend a pool noodle into a circle, molecules can often be bent into rings. But just like the pool noodle, they are clearly strained. The moment they’re given a chance, the molecule will spring open, just like the pool noodle when you let go. Bertozzi was able to harness that energy and create an incredibly powerful tool for studying molecular biology: the strain-promoted azide-alkyne cycloaddition, or SPAAC. With the toxic copper catalyst eliminated, researchers were off to the races! The applications for labelling and creating compounds that can be used in living cells or animals are numerous. I should know—it was absolutely integral to my M.Sc. thesis research!
We haven’t yet seen the full research impact of click chemistry, and yet we’ve already seen a lot. I have no doubt that Sharpless, Meldal and Bertozzi’s innovations will go down as defining moments in scientific history. Their impact on scientific research can’t be overstated, nor can their potential to improve the lives of many. As the Royal Swedish Academy of Sciences wrote, “In addition to being elegant, clever, novel and useful, it also brings the greatest benefit to humankind.”
Flying in an airplane is incredibly safe despite what our anxieties and fears might tell us. According to the International Civil Aviation Organization (ICAO), aviation has become the first ultra-safe transportation system in history. That means that for every ten million cycles (one cycle involves both a takeoff and landing), there is less than one catastrophic failure.
And yet, aviation accidents and incidents do still happen. I recently became deeply interested in aviation safety and got to wondering: Are there monthly or seasonal trends in when aviation accidents occur? Essentially, is there a statistically safer time to fly?
To answer that, we need to define the difference between an accident and an incident. It’s a subtle but important differentiation, because incidents happen all the time, while accidents are quite rare.
The ICAO defines an accident as “an occurrence associated with the operation of an aircraft which takes place between the time any person boards the aircraft with the intention of flight until such time as all such persons have disembarked, in which a person is fatally or seriously injured” and/or “the aircraft sustains damage or structural failure … or the aircraft is missing or is completely inaccessible.” On the other hand, an incident is defined as “an occurrence, other than an accident, associated with the operation of an aircraft which affects or could affect the safety of operation.” In car terms, an accident would be something like a fender bender or crash, whereas an incident would be something like your check engine light coming on or your headlight burning out.
At the time of writing this article in late January 2023, globally, there have been seven accidents in 2023, only one involving fatalities (seventy-two people presumed dead after a Yeti Airlines ATR 72 crashed in Pokhara, Nepal). Compare this with incidents, of which there are usually around three or four every single day. If that seems like a lot, remember that the strict reporting of nearly any deviation from perfect plane operation and function is a big part of what has made aviation “ultra-safe.” No piece of machinery as complex as a plane will function perfectly 100 percent of the time. By strictly cataloging all incidents, we can continuously identify trends, issues, and ways to improve aviation safety even further.
If there are temporal trends in aviation safety, there are a few reasons they could exist. One potential factor is weather. There are definite seasonal trends in weather considered hazardous. For example, winter in Canada and the northern United States sees more ice and snow. But the question is whether these weather trends translate into accident trends.
A 2018 study examined all reported worldwide weather-related aircraft accidents from 1967 until 2010. The absolute number of weather-related accidents has increased over that period but so has the annual number of flights, so that is expected. More interesting is the percentage of accidents that are weather-related, which has also increased from about 40 percent to about 50 percent.
This rise could be due to changing weather patterns. The potential effects of climate change on airline safety are rarely discussed, but as incidences of severe weather continue increasing, presumably, so will weather-related incidents and accidents.
The authors of that study, however, believe that this increase is primarily due to “the aviation safety improvements conducted between 1967 and 2010 hav[ing] had a smaller effect on weather-caused aircraft accidents compared with other accidents.” Essentially, while improvements in areas such as crew resource management, training, and maintenance have had positive effects on aviation safety, weather-related accidents have been less sensitive to these improvements.
To look for seasonal trends, the authors of the study divided the globe into four symmetric zones according to latitude: Zone 1: within 12 degrees of the equator; Zone 2: between 12 and 38 degrees (roughly the middle of the United States); Zone 3: between 38 and 64 degrees (encompassing most of Canada); and Zone 4: the polar regions in the far north and south.
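For clarity, here is a minimal Python sketch of those zone boundaries. Since the zones are symmetric about the equator, only the absolute latitude matters; the function name and example cities are just illustrative, not part of the study.

```python
def accident_zone(latitude_deg: float) -> int:
    """Return the study's zone number (1-4) for a latitude in degrees."""
    lat = abs(latitude_deg)  # zones are symmetric about the equator
    if lat <= 12:
        return 1  # within 12 degrees of the equator
    if lat <= 38:
        return 2  # 12-38 degrees (roughly the middle of the United States)
    if lat <= 64:
        return 3  # 38-64 degrees (most of Canada)
    return 4      # polar regions, far north and south

print(accident_zone(45.5))  # Montreal -> 3
print(accident_zone(-1.3))  # Nairobi  -> 1
```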
While each zone experiences different weather and climate trends, in all but the polar regions “weather-caused accidents can be considered as uniformly distributed in the various meteorological seasons.”
The U.S. Federal Aviation Administration (FAA) agreed with the study’s conclusions, telling me that they “have not identified any other broad, seasonal or monthly incident trends.” So basically, no, there are not seasonal trends in weather-related aviation accidents, even though there are definite seasonal trends in weather considered severe.
There are three other very interesting takeaways from this study. First, the two zones nearest the equator show a much larger proportion of weather-related accidents, but that isn’t necessarily due to experiencing more severe or dangerous weather. Instead, the authors state that this is due to these zones containing a greater proportion of developing countries that, while adherent to the ICAO safety standards, tend to operate with older planes and equipment.
Second, weather is much less relevant in accidents in developed nations. While the global percentage of weather-related accidents is approaching 50 percent, in the United States and the United Kingdom, it was only 23 percent (in 2012 and between 1977 and 1986, respectively).
Third, despite snow being a widespread occurrence in Zone 4, it has never been reported as the primary cause of any accident. On the other hand, snow accounts for 7 percent of accidents in Zone 2 despite being far less common. This highlights both the disparities in safety between “developed” and “developing” nations and the increased danger associated with unusual weather. It is far safer to land in a snowstorm at an airport that frequently experiences snowstorms because it has systems in place to handle it. Unfortunately, climate change will likely only increase the incidences of unusual weather.
What about non-weather-related temporal trends in airline safety?
Dr. Daniel Bubb, former airline pilot and currently an associate professor at the University of Nevada, Las Vegas, explained to me that we tend to see more accidents in the months of June to September simply because a lot more people are flying. A 2020 analysis of airplane crash data echoed this, as did the National Transportation Safety Board: “the more risk exposure tends to track closely with the actual number of accidents,” which makes a lot of sense.
Another potential trend in aviation safety could come from something analogous to the “July Effect,” as it’s called in North America, or “Black Wednesday,” as it’s known in the United Kingdom—the idea that the day/week/month when new student doctors and nurses start at hospitals is associated with a rise in mortality or morbidity.
Luckily, the aviation industries have safeguards in place to avoid a sudden influx of new workers all starting at once. For example, both the FAA and NAV CANADA told me they specifically stagger the starts of their new air traffic controllers. A representative of Republic Airways (a regional U.S. airline) told me the same for new pilots and other employees.
An important thing to remember is just how frequently pilots have their skills and health evaluated. Dr. Bubb writes that pilots “undergo recurrent training each year” and “undergo physicals each year to maintain their licenses.” With so much oversight, intense training, and staggered starts, the potential for a “July Effect” in aviation is vanishingly small.
In fact, evidence is mounting against the existence of the July Effect in medicine. A 2022 comprehensive meta-analysis of 113 studies published between 1989 and 2019 demonstrates “no evidence of a July Effect on mortality, major morbidity, or readmission.” Studies comparing teaching versus nonteaching hospitals have found teaching hospitals safer year-round!
So, is there a time of the year you should avoid flying? No, not in terms of safety. And you likewise should not avoid heading to the hospital if you feel you need to. However, if you want to decrease how much you drive, that could help with both your safety and the environment.
Losing your hair might not be the most medically concerning symptom on its own, but its effects on mental health, social well-being, and personal identity can’t be understated. As Dr. Barbra Hanna, FACOG, NCMP, put it, “negative body image, poorer self-esteem, and feeling less control over their life” compound with other “menopause symptoms that can make one feel as if an alien has invaded their body” to make the time around menopause extremely difficult for many women. Many women suffer for years with thinning hair and widening parts before seeking help, sometimes only to have their concerns dismissed.
Menopause-related hair loss is normal. That being said, it is absolutely worth consulting a physician if it concerns you. It can sometimes be prevented or treated, and while an emotional subject, it should not be a cause of embarrassment.
Androgenetic Alopecia, or Pattern Hair Loss
Depending on the study, the prevalence of alopecia in women has been found to be between 20 and 40%. It seems to affect white women more than those of Asian or Black descent. And while it can occur at any point in life, it overwhelmingly occurs following menopause, that is, after 12 months of amenorrhea (the absence of menstruation).
Our current methods for decaffeinating coffee are far from ideal. There are a few different methods, all with their own nuanced details, but they all shake out to using some kind of solvent to dissolve and remove caffeine from green coffee beans before roasting. This extra processing means costs to produce decaf are higher, profit margins lower, and production times longer. An even bigger problem is that even the best methods for decaffeination take some aromatic compounds away along with the stimulant molecule, affecting the taste and smell of the resulting coffee.
What if, instead of removing the caffeine from coffee beans, we could grow naturally caffeine-free coffee? Doing just that might be closer than we expect.
To know how to stop a coffee plant from producing caffeine, it’s important to understand why the plant makes it in the first place. Caffeine is a very bitter compound (one of the reasons coffee is a bitter drink), and just as we don’t tend to enjoy overly bitter things, neither do bugs. Coffee plants are believed to produce caffeine in their leaves mainly as a pesticide to defend against being eaten by pests like the coffee berry borer, Hypothenemus hampei.
Interestingly, ancestors of the modern coffee species were probably much lower in caffeine or entirely caffeine-free. The caffeine defence is believed to have developed in central and west Africa, where the coffee berry borer is native. This is where the highest caffeine species of coffee, like Coffea arabica and Coffea canephora, are found. These two species account for nearly 100% of the world’s coffee production.
A fascinating potential method for developing caffeine-free coffee plants involves the subject of the 2021 Nobel Prize in Chemistry: CRISPR/Cas9. Often referred to as “molecular scissors,” the CRISPR/Cas9 tool is inspired by bacterial defence mechanisms against viruses and allows the very precise cutting of an organism’s DNA. In this way, a gene can be targeted and deactivated. A review paper from 2022 took a look at the feasibility of using these molecular scissors to disrupt the biosynthesis of caffeine in coffee plants.
As caffeine is a relatively complex molecule, it isn’t built in just one step. Several enzymes are responsible for precise chemical changes to the proto-caffeine molecule en route to its final form. This is good news for scientists looking to disrupt the synthesis process, as they have multiple enzymes to aim for. The authors of the 2022 review identified an enzyme called XMT as a prime target. XMT is responsible for converting xanthosine into 7-methylxanthosine during step 1 of 4 in the caffeine synthesis pathway. If that very first step is disrupted, the subsequent enzymes have no molecules to work on. Disabling XMT would lead to a build-up of xanthosine, but a different enzyme exists that can degrade it, so that shouldn’t be a problem.
Another potential target is called DXMT. This enzyme is responsible for the final step in caffeine synthesis, converting theobromine into caffeine. Theobromine is quite similar to caffeine structurally and shares some properties with it, like being bitter and toxic to cats and dogs. Importantly, however, theobromine does not have the stimulating effect of caffeine. Targeting and disabling DXMT would lead to a build-up of theobromine in coffee beans, which may actually be a good thing! The bitterness of caffeine is part of the flavour of coffee, meaning that a C. arabica bean without caffeine may still taste different than a C. arabica bean with caffeine, even if they’re otherwise the same. The authors of the study postulate that the increased theobromine content of a DXMT-disabled bean could compensate for the missing bitterness from caffeine.
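To make the knock-out logic concrete, here is a toy Python sketch of the four-step pathway as described above, with XMT catalyzing the first step and DXMT the last. The middle intermediates and enzymes aren't named in this article, so they appear as placeholders; this is purely illustrative, not real plant biochemistry.

```python
# Each step: (enzyme, substrate, product). Only the names given in the article are used;
# "enzyme 2/3" and "intermediate" are placeholders for the unnamed middle of the pathway.
STEPS = [
    ("XMT",      "xanthosine",         "7-methylxanthosine"),
    ("enzyme 2", "7-methylxanthosine", "intermediate"),
    ("enzyme 3", "intermediate",       "theobromine"),
    ("DXMT",     "theobromine",        "caffeine"),
]

def run_pathway(knocked_out: set[str]) -> str:
    """Walk the pathway; return the molecule that accumulates when an enzyme is disabled."""
    molecule = "xanthosine"
    for enzyme, substrate, product in STEPS:
        if enzyme in knocked_out:
            return molecule  # the pathway stalls here and the current substrate builds up
        molecule = product
    return molecule

print(run_pathway(set()))       # caffeine
print(run_pathway({"XMT"}))     # xanthosine builds up (another enzyme can degrade it)
print(run_pathway({"DXMT"}))    # theobromine builds up
```

The sketch just restates the article's point: cut the chain at step one and nothing downstream gets made; cut it at the last step and theobromine piles up instead of caffeine.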
Genetically engineered caffeine-free coffee could represent a better way of getting our java without the jolt of stimulation, but it will undoubtedly face societal hurdles. While backlash to genetically modified organisms has calmed down recently, anti-GMO sentiments are still present in consumers and regulators. The regulations for getting such a product approved in Europe are particularly stringent and pose a significant barrier.
Even if CRISPR/Cas9 coffee isn’t commercially viable, using these molecular scissors to disable specific genes can help us better understand the complex biosynthesis pathways in coffee plants. So-called knock-out mice, named for having a gene’s function stopped (knocked out), have been pivotal in our understanding of physiology and biology. Want to know what a particular gene does? Knock out its function and see what happens. Much of our understanding of complex diseases like Parkinson’s, cancer or addiction is built upon the findings from knock-out mice.
Another approach to making delicious coffee without the kick may lie with modern species of coffee that naturally produce little or no caffeine. For example, Coffea charrieriana is a caffeine-free variety endemic to Cameroon. C. pseudozanguebariae is native to Tanzania and Kenya, C. salvatrix and C. eugenioides to eastern Africa. Unfortunately, these species of coffee all produce beans that would make a cup of joe that tastes decidedly different from what we’re used to. Still, one potential way to get coffee that tastes like our usual beans, just without the caffeine, is to crossbreed them with C. arabica plants.
There’s one big problem, though—where the vast majority of coffee plants are diploid, meaning they have two sets of chromosomes (like humans), C. arabica is tetraploid and has four. Unfortunately, breeding between organisms with different ploidy is typically not successful. Recently, however, several low-caffeine varieties of C. arabica have been discovered in Ethiopia. Crossbreeding these low-caffeine varieties with other types of C. arabica may result in a caffeine-free bean that is otherwise the familiar morning starter we know and love.
For the caffeine-sensitive among us, there are interesting new caffeine-free coffee possibilities on the horizon. Even if the methods I’ve outlined here don’t pan out, CRISPR/Cas9 will hopefully enable discoveries regarding caffeine and coffee plants. And we never know what the future may hold.
A magnitude 9.1 earthquake occurred just off the northeast coast of Japan on March 11th, 2011, at 14:46 local time. The Fukushima Daiichi Nuclear Power Plant, like all nuclear power plants in Japan, features several safety mechanisms meant to mitigate damage to its reactors in such an event. It was built on top of solid bedrock to increase its stability, and all of its reactors featured systems that would automatically shut down—or SCRAM—the fission reactions in response to an earthquake. Luckily, only reactors 1, 2 and 3, out of six total, were in operation on that day and were successfully SCRAMed.
Even with fission stopped, however, the nuclear fuel rods continued to emit decay heat and required cooling to avoid a catastrophe. With connections to the main electrical grid cut off due to earthquake damage, the plant’s emergency backup diesel-run generators kicked in to power the cooling pumps.
Almost exactly one hour after the earthquake, the resulting tsunami struck Fukushima Daiichi with waves 14 metres (46 feet) high. All but one of the diesel generators were disabled by the seawater, and by 19:30, the water level in reactor 1 had dropped below the fuel rods. By the same time two days later, reactors 1, 2 and 3 had all totally melted down.
In response, over the subsequent days, over 150 000 people were relocated from areas within 40 km of Fukushima Daiichi. Farmers were ordered to euthanize livestock within the Fukushima exclusion zone, which was estimated to contain 3400 cows, 31 500 pigs, and 630 000 chickens.
Of those 3400 cows, the government euthanized 1500. CNN reports that roughly 1400 were released by farmers to free roam and potentially survive on their own. They are all thought to have starved to death. Three hundred of the remaining animals are unaccounted for, but some farmers, who defiantly refused either to cull their animals or to set them free, can account for 200 of the bovines.
Instead, these ranchers—made up almost entirely of cattle breeders—committed to travelling for hours every day into the potentially dangerous Exclusion Zone to continue to feed and care for what some of them refer to as the “cows of hope”.
Where dairy and meat livestock farmers tend to operate larger-scale, higher-throughput operations, cattle breeders often have small herds and are more attached to individual animals (some even have names). As one farmer told Miki Toda, “The cows are my family. How do I dare kill them?” These animals were spared simply because it was the right thing to do.
These cows and bulls will likely never be used for meat or have their milk collected for consumption. But that doesn’t mean they’re purposeless. Researchers from several universities, including Iwate University, University of Tokyo, Osaka International University, Tokai University, University of Georgia, Rikkyo University and Kitasato University, see the saved herds as an auspicious opportunity for knowledge acquisition.
The scientific research on how radiation affects large mammals is exceedingly sparse. According to Kenji Okada, an associate professor of veterinary medicine from Iwate University, “large mammals are different to bugs and small birds, the genes affected by radiation exposure can repair more easily that it’s hard to see the effects of radiation … We really need to know what levels of radiation have a dangerous effect on large mammals and what levels don’t.”
By studying the cattle exposed to radioactive fallout after the Fukushima Daiichi nuclear disaster, we stand not only to gain retrospective insights into the true effects of radiation on large bovine mammals but to be better prepared if such an event happens again.
The euthanasia of tens of thousands of farm animals represents a massive animal welfare challenge and has a drastic impact on the livelihoods of many. Not only farmers but workers from regulatory agencies, veterinary practices, slaughterhouses, processing plants, feed supply factories, exporters, and anyone else involved in any step of the agricultural process. Nonetheless, it is, of course, warranted if necessary for the safety of consumers of animal products. But was it necessary?
Research on the Fukushima Exclusion Zone herds has been ongoing for nearly a decade now, and while it will take more time to fully see the effects of chronic low-dose radiation exposure, scientists have published preliminary findings and are starting to see trends.
So far, the bovines have not shown any increased rates of cancer. The only abnormal health indicators are white spots that some have developed on their hides. A study of Japanese Black cattle residing on a farm 12 km to the west-northwest of the Fukushima Daiichi nuclear power plant in one of the areas the Japanese government has deemed the “difficult-to-return zone” found no significant increases in DNA damage in the cows. A different study found that horses and cattle fed with radiocesium-contaminated feed showed high radiocesium levels in their meat and milk. However, they found that after just eight weeks of “clean feeding” (feeding with non-contaminated food), “no detectable level of radiocesium was noted in the products (meat or milk) of herbivores that received radiocesium-contaminated feed, followed by non-contaminated feed.”
Much like Chornobyl (the Ukrainian spelling) has become a sanctuary for wild animals despite the residual radioactivity, signs are pointing to a natural “rewilding” of the Fukushima Exclusion Zone. With humans, cars and domestic animals gone, wildlife is able to move into empty urban and suburban environments and thrive. A trail cam study of wild animals around the Exclusion Zone has uncovered “no evidence of population-level impacts in mid- to large-sized mammals or [landfowl] birds.” Wild boars are abundant in the Fukushima region and present another good representative mammal to research. A study of 307 wild boars found no elevation in genetic mutation rates and that a certain amount of boar meat could even be safely consumed by humans.
Although nuclear radiation is a frightening threat, in part due to its invisible nature, evidence seems to be pointing to minimal, if any, health effects for animals exposed to the amount released by the Fukushima Daiichi disaster.
According to a study from the University of Bristol, it’s likely that the situation would be the same for humans had they not been evacuated/relocated. Due to the relatively low-dose nature of the event, the stigma and sometimes severe mental distress experienced by those displaced, as well as losses of life associated directly with relocation and indirectly via increases in alcohol-use disorders and suicide rates, the authors conclude that “relocation was unjustified for the 160,000 people relocated after Fukushima.”
Snow crystals—better known as snowflakes—are intricate, delicate, tiny miracles of beauty. Their very existence seems unlikely, yet incomprehensible numbers of them fall every year to iteratively construct wintery wonderlands.
Every snowflake is formed of around 100,000 water droplets in a process that takes roughly 30-45 minutes. Even with this level of complexity contributing to each and every snow crystal, it seems nearly impossible that every single flake is truly singularly matchless. Yet, the scientific explanation of snow formation can explain how every snowflake tells its life story, and every story is unique. The first known reference to snowflakes’ unique shapes was by a Scandinavian bishop, Olaus Magnus, in 1555, but he was a touch mistaken in some of his proposed designs.
Snowflakes’ six-fold symmetry was first identified in 1591 by English astronomer Thomas Harriot. Still, a scientific reasoning for this symmetry wasn’t proposed until 1611, when Johannes Kepler, a German astronomer, wrote The Six-Cornered Snowflake. Indeed, almost all snowflakes exhibit six-fold symmetry—for reasons explained here—though they can occasionally be found with 3- or 12-fold symmetry.
The notion that no two snowflakes are alike was put forth by Wilson Bentley, a meteorologist from Vermont who took the first detailed photos of snowflakes between 1885 and 1931. He went on to photograph over 5000 snow crystals and, in the words of modern snowflake expert Kenneth Libbrecht, “did it so well that hardly anybody bothered to photograph snowflakes for almost 100 years.” Bentley’s assertion of snowflakes’ unique natures might be 100 years old, but it has held up to scientific scrutiny. Understanding how snow forms can help us understand precisely how nature continues to create novel snowflake patterns.
Snow crystals begin forming when warm moist air collides with another mass of air at a weather front. The warm air rises, cooling as it does, and water droplets condense out of it, just like when your shower deposits steam onto your bathroom mirror. Unlike in your bathroom, however, these water droplets don’t have a large surface to attach to and instead form tiny droplets around microscopic particles in the air like dust or even bacteria. Big aggregates of these drops are what form clouds.
If the air continues to cool, the water enters what’s called a supercooled state. This means that the droplets are below 0˚C, the freezing point of pure water, but still liquid. Ice crystals will start to grow within a drop only once given a nucleation point, a position from which the crystal can begin to form. If you’ve ever seen the frozen beer trick, it relies on the same mechanics.
Once a droplet is frozen, water vapour in the surrounding air will condense onto it, forming snow crystals, aka snowflakes. Not every droplet freezes but those that don’t will evaporate, providing more water vapour to condense onto the frozen ones. Once roughly 100,000 droplets have condensed onto the crystal, it’s heavy enough that it falls to earth.
The crystal patterns formed when the water vapour condenses onto a growing flake are highly dependent on temperature, and how saturated the air around it is. Below you can see a Nakaya diagram. Created in the 1930s and named for its creator, Japanese physicist Ukichiro Nakaya, it shows the typical shapes of snow crystals formed under different supersaturation and temperature conditions.
Above roughly -2˚C, thin plate-like crystals tend to form. Between -2˚C and -10˚C, the formations are slender columns. Colder still, temperatures between -10˚C and -22˚C herald the production of the wider thin plates we’re most used to, and below -22˚C comes a rarely seen mix of small plates and columns. Snow crystals grow rapidly and form complex, highly branched designs when humidity is high and the air is supersaturated with water vapour. When humidity is low, the flakes grow more slowly, and the designs are simpler.
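Here is a minimal Python sketch encoding just the temperature bands described above, as one might read them off the temperature axis of a Nakaya diagram; humidity and supersaturation, which control how branched the crystal becomes, are deliberately left out.

```python
def typical_crystal_form(temp_c: float) -> str:
    """Map an air temperature to the typical crystal habit described in the text."""
    if temp_c > -2:
        return "thin plate-like crystals"
    if temp_c > -10:
        return "slender columns"
    if temp_c > -22:
        return "wider thin plates (the classic snowflake shape)"
    return "a rarely seen mix of small plates and columns"

for t in (-1, -5, -15, -30):
    print(f"{t:>4} degC -> {typical_crystal_form(t)}")
```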
As a growing snowflake moves through the air, it encounters countless different microenvironments with slightly different humidity and temperature, each affecting its growth pattern. In this way, the shape of a snowflake tells its life story—the second-by-second conditions it encounters determine its final form. That’s where the unique nature of each snowflake comes from.
Kenneth Libbrecht is a snowflake scholar—a professor of physics at California Institute of Technology who has dedicated years of his career to uncovering the mysteries of snow crystals. He was even a consultant on the movie Frozen! He grows snowflakes in his laboratory using specialized chambers under highly controlled environmental conditions. Growing multiple snow crystals very closely together under essentially identical conditions, Libbrecht can create ostensibly identical snowflakes. But even still, he considers them more like identical twins. Can you visually see a difference between them? No, not really. But if you were to zoom in, and in, and in, on some level, you would be able to find differences.
Libbrecht thinks that the question of whether there have ever been identical snowflakes is just silly. “Anything that has any complexity is different than everything else,” even if you have to go down to the molecular level to find it.
Shots, jabs, pricks—whatever you call it, having a needle inserted into your body is not most people’s idea of a fun afternoon activity. Even if you don’t have a specific needle phobia, injection reactions typically range from neutral at best to quite negative at worst. But what if needles didn’t have to hurt? Or, at least, what if they hurt less? It seems intuitively true that decreasing the size of a needle would make it hurt less, but is it really that simple?
The diameter of a needle (how big it is across) is measured in a unit called a gauge. Because the concept of a gauge pre-dates the 18th century and has been defined in many different, inconsistent ways, it’s worth specifying that needle width is measured in the Birmingham gauge. The bigger the gauge, the smaller the needle. For example, the width of a 7-gauge needle is roughly 4.6 mm (0.18 inch), while the width of a 30-gauge needle is about 0.31 mm (0.012 inch). To give you some context, a typical spaghetti noodle is roughly 14-gauge, and a regular stud earring is about 19-gauge.
There are a few factors that determine what size of needle a practitioner needs to use, including the body size of the patient and the body part being pricked, but a critical factor is the amount of fluid being injected or drawn out of the patient. If you try to inject a large amount of fluid through a very thin needle, it will both take longer and hurt more due to the high pressure.
For blood collection, which is typically a few millilitres of blood, clinicians use needles of 21-22 gauge. Vaccines are often <1 mL and accordingly use needles that are slightly smaller, around 22-25 gauge. Delivering insulin to diabetic patients requires even less fluid and can use needles as small as 29-31 gauge.
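Putting the numbers from the last few paragraphs together, here is a tiny Python sketch of gauge versus diameter and the typical gauge ranges by procedure. The values are the approximate ones quoted above; real choices vary with the patient, the injection site, and clinician judgment.

```python
# Approximate Birmingham-gauge sizes mentioned in this article.
# Note the scale runs backwards: a larger gauge number means a thinner needle.
gauge_outer_diameter_mm = {7: 4.6, 30: 0.31}

# Typical gauge ranges for the procedures discussed above (approximate).
typical_gauges = {
    "blood collection (a few mL)": (21, 22),
    "vaccines (<1 mL)": (22, 25),
    "insulin delivery": (29, 31),
}

for gauge, diameter in sorted(gauge_outer_diameter_mm.items()):
    print(f"{gauge}-gauge needle: roughly {diameter} mm across")

for procedure, (low, high) in typical_gauges.items():
    print(f"{procedure}: typically {low}-{high} gauge")
```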
Even with the limitations imposed by volume, there is some wiggle room in the gauge of needle used for a certain procedure. Medical practitioners can often use their own judgment, experience, and clinical guidelines to change the size of needle they use. Much like how artists may favour a certain size brush, some clinicians have personal preferences in the tools of their trade.
Luckily, it is actually relatively simple to study whether decreasing needle diameter decreases pain. Just find some volunteers who are willing to be stabbed for science (or who are already being treated with a needle-involved method), stick them with at least 2 needles of different gauges without telling them which is which, and ask them how much it hurt on a numeric scale. There are dozens of studies that take this form.
Regarding simple injections in the body, this study compared a 30-gauge needle with a 26-gauge one and found no significant difference in the reported pain. As did this study, which compared 27-gauge vs 23-gauge vs 21-gauge. For injecting Botox around patients’ eyes, this study found no difference in pain scores between a 32-gauge and a 30-gauge needle. These are by no means all of the studies on needle size and pain, but they are representative of the scientific literature on this topic. Again and again, trial participants seem to find no significant difference in their pain when comparing needle gauges.
For many people, the anesthetic injection is the worst part of any dentist visit. While it would be lovely to tell you that a quick swap to a thinner needle is all you need to decrease the pain of dental injections, there is a wealth of evidence to the contrary. For anesthetic injections in the mouth, smaller-width needles were not only ineffective at reducing pain; in one study, they actually increased it!