Is There a Safer Time to Fly? (Skeptical Inquirer)

7 minute read

Flying in an airplane is incredibly safe despite what our anxieties and fears might tell us. According to the International Civil Aviation Organization (ICAO), aviation has become the first ultra-safe transportation system in history. That means that for every ten million cycles (one cycle involves both a takeoff and landing), there is less than one catastrophic failure.

It may not feel intuitively true, but you’re much safer traveling in an airplane than in a motor vehicle. In the United States, there are around 1.13 fatalities per 100 million vehicle miles traveled, compared with just 0.035 fatalities per 100 million airplane miles traveled. Put another way, your chances of dying in a U.S. car crash are around one in 114, while your chances of dying in a U.S. plane crash are around one in 9,821.
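If you want to check that comparison yourself, here is a rough back-of-the-envelope sketch in Python using only the per-mile figures quoted above (illustrative only; see the cited statistics for precise values):

```python
# Back-of-the-envelope comparison using the per-mile rates quoted above.
# Illustrative only; see the cited statistics for precise figures.
car_fatalities_per_100m_miles = 1.13
plane_fatalities_per_100m_miles = 0.035

relative_risk = car_fatalities_per_100m_miles / plane_fatalities_per_100m_miles
print(f"Per mile traveled, driving is roughly {relative_risk:.0f}x deadlier than flying")
# -> Per mile traveled, driving is roughly 32x deadlier than flying
```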

And yet, aviation accidents and incidents do still happen. I recently became deeply interested in aviation safety and got to wondering: Are there monthly or seasonal trends in when aviation accidents occur? Essentially, is there a statistically safer time to fly?

To answer that, we need to define the difference between an accident and an incident. It’s a subtle but important differentiation, because incidents happen all the time, while accidents are quite rare.

The ICAO defines an accident as “an occurrence associated with the operation of an aircraft which takes place between the time any person boards the aircraft with the intention of flight until such time as all such persons have disembarked, in which a person is fatally or seriously injured” and/or “the aircraft sustains damage or structural failure … or the aircraft is missing or is completely inaccessible.” On the other hand, an incident is defined as “an occurrence, other than an accident, associated with the operation of an aircraft which affects or could affect the safety of operation.” In car terms, an accident would be something like a fender bender or crash, whereas an incident would be something like your check engine light coming on or your headlight burning out.

At the time of writing this article in late January 2023, there have been seven accidents globally in 2023, only one of them involving fatalities (seventy-two people presumed dead after a Yeti Airlines AT72 crashed in Pokhara, Nepal). Compare this with incidents, of which there are usually around three or four every single day. If that seems like a lot, remember that the strict reporting of nearly any deviation from perfect plane operation and function is a big part of what has made aviation “ultra-safe.” No piece of machinery as complex as a plane will function perfectly 100 percent of the time. By strictly cataloging all incidents, we can continuously identify trends, issues, and ways to improve aviation safety even further.

If there are temporal trends in aviation safety, there are a few reasons they could exist. One potential cause is weather. There are definite seasonal trends in weather considered hazardous; for example, winter in Canada and the northern United States brings more ice and snow. But the question is whether these weather trends translate into accident trends.

A 2018 study examined all reported worldwide weather-related aircraft accidents from 1967 until 2010. The absolute number of weather-related accidents has increased over that period but so has the annual number of flights, so that is expected. More interesting is the percentage of accidents that are weather-related, which has also increased from about 40 percent to about 50 percent.

This rise could be due to changing weather patterns. The potential effects of climate change on airline safety are rarely discussed, but as incidences of severe weather continue increasing, presumably, so will weather-related incidents and accidents.

The authors of that study, however, believe that this increase is primarily due to “the aviation safety improvements conducted between 1967 and 2010 hav[ing] had a smaller effect on weather-caused aircraft accidents compared with other accidents.” Essentially, while improvements in areas such as crew resource management, training, and maintenance have had positive effects on aviation safety, weather-related accidents have been less sensitive to these improvements.

To look for seasonal trends, the authors of the study divided the globe into four symmetric zones according to latitude: Zone 1, within 12 degrees of the equator; Zone 2, between 12 and 38 degrees (roughly the middle of the United States); Zone 3, between 38 and 64 degrees (which encompasses most of Canada); and Zone 4, the polar regions in the far north and south.

(Photo source: https://www.timeanddate.com/geography/longitude-latitude.html)
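To make the zoning scheme concrete, here is a minimal Python sketch of how a given latitude might be sorted into those four zones. The function and its exact boundary handling are my own illustration, not code from the study:

```python
def latitude_zone(lat_degrees: float) -> int:
    """Classify a latitude into the study's four symmetric zones.

    Boundaries follow the description above: within 12 degrees of the
    equator (Zone 1), 12-38 degrees (Zone 2), 38-64 degrees (Zone 3),
    and beyond 64 degrees toward either pole (Zone 4).
    """
    abs_lat = abs(lat_degrees)  # the zones are symmetric about the equator
    if abs_lat < 12:
        return 1
    if abs_lat < 38:
        return 2
    if abs_lat < 64:
        return 3
    return 4

print(latitude_zone(45.5))  # Montreal, ~45.5 N -> Zone 3
print(latitude_zone(-1.3))  # Nairobi, ~1.3 S  -> Zone 1
```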

While each zone experiences different weather and climate trends, in all but the polar regions “weather-caused accidents can be considered as uniformly distributed in the various meteorological seasons.”

The U.S. Federal Aviation Administration (FAA) agreed with the study’s conclusions, telling me that they “have not identified any other broad, seasonal or monthly incident trends.” So basically, no, there are not seasonal trends in weather-related aviation accidents, even though there are definite seasonal trends in weather considered severe.

There are three other very interesting takeaways from this study. First, the two zones nearest the equator show a much larger proportion of weather-related accidents, but that isn’t necessarily due to experiencing more severe or dangerous weather. Instead, the authors state that this is due to these zones containing a greater proportion of developing countries that, while adherent to the ICAO safety standards, tend to operate with older planes and equipment.

Second, weather is much less relevant in accidents in developed nations. While the global percentage of weather-related accidents is approaching 50 percent, in the United States and the United Kingdom, it was only 23 percent (in 2012 and between 1977 and 1986, respectively).

Third, despite snow being a widespread occurrence in Zone 4, it has never been reported as the primary cause of any accident. On the other hand, snow accounts for 7 percent of accidents in Zone 2 despite being far less common. This highlights both the disparities in safety between “developed” and “developing” nations and the increased danger associated with unusual weather. It is far safer to land in a snowstorm at an airport that frequently experiences snowstorms because it has systems in place to handle it. Unfortunately, climate change will likely only increase the incidences of unusual weather.

What about non-weather-related temporal trends in airline safety?

Dr. Daniel Bubb, a former airline pilot and currently an associate professor at the University of Nevada, Las Vegas, explained to me that we tend to see more accidents in the months of June to September simply because a lot more people are flying. A 2020 analysis of airplane crash data echoed this, as did the National Transportation Safety Board: “the more risk exposure tends to track closely with the actual number of accidents,” which makes a lot of sense.

Another potential trend in aviation safety could come from something analogous to the “July Effect,” as it’s called in North America, or “Black Wednesday,” as it’s known in the United Kingdom—the idea that the day/week/month when new student doctors and nurses start at hospitals is associated with a rise in mortality or morbidity.

Luckily, the aviation industry has safeguards in place to avoid a sudden influx of new workers. For example, both the FAA and NAV CANADA told me they specifically stagger the start dates of their new air traffic controllers. A representative of Republic Airways (a regional U.S. airline) told me the same is true for new pilots and other employees.

An important thing to remember is just how frequently pilots have their fitness to fly evaluated. Dr. Bubb writes that pilots “undergo recurrent training each year” and “undergo physicals each year to maintain their licenses.” With so much oversight, intense training, and staggered starts, the potential for a “July Effect” in aviation is vanishingly small.

In fact, evidence is mounting against the existence of the July Effect in medicine. A 2022 comprehensive meta-analysis of 113 studies published between 1989 and 2019 demonstrates “no evidence of a July Effect on mortality, major morbidity, or readmission.” Studies comparing teaching versus nonteaching hospitals have found teaching hospitals safer year-round!

So, is there a time of the year you should avoid flying? No, not in terms of safety. And you likewise should not avoid heading to the hospital if you feel you need to. However, if you want to decrease how much you drive, that could help with both your safety and the environment.

This article was written for Skeptical Inquirer. View the entire original for free here: https://skepticalinquirer.org/exclusive/is-there-a-safer-time-to-fly/

Menopausal Hair Loss: Why It Happens and What We Can Do About It (The Midlife)

8 minute read

Losing your hair might not be the most medically concerning symptom on its own, but its effects on mental health, social well-being, and personal identity shouldn’t be understated. As Dr. Barbra Hanna, FACOG, NCMP, put it, “negative body image, poorer self-esteem, and feeling less control over their life” compound with other “menopause symptoms that can make one feel as if an alien has invaded their body” to make the time around menopause extremely difficult for many women. Many women suffer for years with thinning hair and widening parts before seeking help, sometimes only to have their concerns dismissed.

Menopause-related hair loss is normal. That being said, it is absolutely worth consulting a physician if it concerns you. It can sometimes be prevented or treated, and while an emotional subject, it should not be a cause of embarrassment.

Androgenetic Alopecia, or Pattern Hair Loss

Depending on the study, the prevalence of alopecia in women has been found to be between 20 and 40%. It seems to affect white women more than those of Asian or Black descent. And while it can occur at any point in life, it overwhelmingly occurs following menopause, defined as 12 months of amenorrhea (the absence of menstruation).

This article was written for The Midlife. View the entire original for free here: https://themidlife.com/menopausal-hair-loss-why-it-happens-and-what-we-can-do-about-it/

The Potential for Caffeine-Free Coffee via Crispr/CAS9 or Crossbreeding (McGill OSS)

5 minute read

Our current methods for decaffeinating coffee are far from ideal. There are a few different methods, all with their own nuanced details, but they all shake out to using some kind of solvent to dissolve and remove caffeine from green coffee beans before roasting. This extra processing means costs to produce decaf are higher, profit margins lower, and production times longer. An even bigger problem is that even the best methods for decaffeination take some aromatic compounds away along with the stimulant molecule, affecting the taste and smell of the resulting coffee.

What if, instead of removing the caffeine from coffee beans, we could grow naturally caffeine-free coffee? Doing just that might be closer on the horizon than we expect.

To know how to stop a coffee plant from producing caffeine, it’s important to understand why the plant makes it in the first place. Caffeine is a very bitter compound (one of the reasons coffee is a bitter drink), and just as we don’t tend to enjoy overly bitter things, neither do bugs. Coffee plants are believed to produce caffeine in their leaves mainly as a pesticide to defend against being eaten by pests like the coffee berry borer, Hypothenemus hampei.

Interestingly, ancestors of the modern coffee species were probably much lower in caffeine or entirely caffeine-free. The caffeine defence is believed to have developed in central and west Africa, where the coffee berry borer is native. This is where the highest caffeine species of coffee, like Coffea arabica and Coffea canephora, are found. These two species account for nearly 100% of the world’s coffee production.

A fascinating potential method for developing caffeine-free coffee plants involves the subject of the 2020 Nobel Prize in Chemistry: CRISPR/Cas9. Often referred to as “molecular scissors,” the CRISPR/Cas9 tool is inspired by bacterial defence mechanisms against viruses and allows the very precise cutting of an organism’s DNA. In this way, a gene can be targeted and deactivated. A review paper from 2022 took a look at the feasibility of using these molecular scissors to disrupt the biosynthesis of caffeine in coffee plants.

As caffeine is a relatively complex molecule, it isn’t built in just one step. Several enzymes are responsible for precise chemical changes to the proto-caffeine molecule en route to its final form. This is good news for scientists looking to disrupt the synthesis process, as they have multiple enzymes to aim for. The authors of the 2022 review identified an enzyme called XMT as a prime target. XMT is responsible for converting xanthosine into 7-methylxanthosine during step 1 out of 4 in the caffeine synthesis pathway. Because this is the very first step in the process, blocking it leaves the subsequent enzymes with no molecule to work on. Disabling XMT would lead to a build-up of xanthosine, but another enzyme exists that can degrade it, so that shouldn’t be a problem.

Another potential target is called DXMT. This enzyme is responsible for the penultimate step in caffeine synthesis, converting theobromine into caffeine. Theobromine is quite similar to caffeine structurally and shares some properties with it, like being bitter and toxic to cats and dogs. Importantly, however, theobromine does not have the stimulating effect of caffeine. Targeting and disabling DXMT would lead to a build-up of theobromine in coffee beans, which may actually be a good thing! The bitterness of caffeine is part of the flavour of coffee, meaning that a C. arabica bean without caffeine may still taste different from a C. arabica bean with caffeine, even if they’re otherwise the same. The authors of the study postulate that the increased theobromine content of a DXMT-disabled bean could compensate for the missing bitterness from caffeine.

Genetically engineered caffeine-free coffee could represent a better way of getting our java without the jolt of stimulation, but it will undoubtedly face societal hurdles. While backlash to genetically modified organisms has calmed down recently, anti-GMO sentiments are still present in consumers and regulators. The regulations for getting such a product approved in Europe are particularly stringent and pose a significant barrier.

Even if CRISPR/Cas9 coffee isn’t commercially viable, using these molecular scissors to disable specific genes can help us better understand the complex biosynthesis pathways in coffee plants. So-called knock-out mice, named for having a gene’s function stopped (knocked out), have been pivotal in our understanding of physiology and biology. Want to know what a particular gene does? Knock out its function and see what happens. Much of our understanding of complex diseases like Parkinson’s, cancer or addiction is built upon the findings from knock-out mice.

Another approach to making delicious coffee without the kick may lie with modern species of coffee that naturally produce little or no caffeine. For example, Coffea charrieriana is a caffeine-free variety endemic to Cameroon. C. pseudozanguebariae is native to Tanzania and Kenya, C. salvatrix and C. eugenioides to eastern Africa. Unfortunately, these species of coffee all produce beans that would make a cup of joe that tastes decidedly different from what we’re used to. Still, one potential way to make coffee the same as our usual beans, just without caffeine, is by crossbreeding them with C. arabica plants.

There’s one big problem, though: where the vast majority of coffee plants are diploid, meaning they have two sets of chromosomes (like humans), C. arabica is tetraploid and has four. Unfortunately, breeding between organisms with different ploidy is typically not successful. Recently, however, several low-caffeine varieties of C. arabica have been discovered in Ethiopia. Crossbreeding these low-caffeine varieties with high-caffeine types of C. arabica may result in a caffeine-free bean that is otherwise the familiar morning starter we know and love.

For the caffeine-sensitive among us, there are interesting new caffeine-free coffee possibilities on the horizon. Even if the methods I’ve outlined here don’t pan out, CRISPR/Cas9 will hopefully enable discoveries regarding caffeine and coffee plants. And we never know what the future may hold.

When the Cows Come Home to Radioactive Ranches (McGill OSS)

5 minute read

A magnitude 9.1 earthquake occurred just off the northeast coast of Japan on March 11th, 2011, at 14:46 local time. The Fukushima Daiichi Nuclear Power Plant, like all nuclear power plants in Japan, features several safety mechanisms meant to mitigate damage to its reactors in such an event. It was built on top of solid bedrock to increase its stability, and all of its reactors featured systems that would automatically shut down—or SCRAM—the fission reactions in response to an earthquake. Luckily, only reactors 1, 2 and 3, out of six total, were in operation on that day and were successfully SCRAMed.

Even with fission stopped, however, the nuclear fuel rods continued to emit decay heat and required cooling to avoid a catastrophe. With connections to the main electrical grid cut off due to earthquake damage, the plant’s emergency backup diesel-run generators kicked in to power the cooling pumps.

Almost exactly one hour after the earthquake, the resulting tsunami struck Fukushima Daiichi with waves 14 metres (46 feet) high. All but one of the diesel generators were disabled by the seawater, and by 19:30, the water level in reactor one had drained below the fuel rods. By the same time two days later, reactors 1, 2 and 3 had all totally melted down.

In response, over the subsequent days, over 150 000 people were relocated from areas within 40 km of Fukushima Daiichi. Farmers were ordered to facilitate euthanasia for livestock from within the Fukushima exclusion zone, which was estimated to contain 3400 cows, 31 500 pigs, and 630 000 chickens.

Of those 3400 cows, the government euthanized 1500. CNN reports that roughly 1400 were released by farmers to free roam and potentially survive on their own; they are all thought to have starved to death. Three hundred of the remaining animals are unaccounted for, but some farmers who defiantly refused either to cull their animals or to set them free can account for 200 of the bovines.

Instead, these ranchers—made up almost entirely of cattle breeders—committed to travelling for hours every day into the potentially dangerous Exclusion Zone to continue to feed and care for what some of them refer to as the “cows of hope”.

Where dairy and meat livestock farmers tend to operate larger scale, higher throughput operations, cattle breeders often have small herds and are more attached to individual animals (some even have names). As one farmer told Miki Toda, “The cows are my family. How do I dare kill them?” These animals were spared simply because it was the right thing to do.

These cows and bulls will likely never be used for meat or have their milk collected for consumption. But that doesn’t mean they’re purposeless. Researchers from several universities, including Iwate University, University of Tokyo, Osaka International University, Tokai University, University of Georgia, Rikkyo University and Kitasato University, see the saved herds as an auspicious opportunity for knowledge acquisition.

The scientific research on how radiation affects large mammals is exceedingly sparse. According to Kenji Okada, an associate professor of veterinary medicine from Iwate University, “large mammals are different to bugs and small birds, the genes affected by radiation exposure can repair more easily that it’s hard to see the effects of radiation … We really need to know what levels of radiation have a dangerous effect on large mammals and what levels don’t.”

By studying the cattle exposed to radioactive fallout after the Fukushima Daiichi nuclear disaster, we stand not only to gain retrospective insights into the true effects of radiation on large bovine mammals but to be better prepared if such an event happens again.

The euthanasia of tens of thousands of farm animals represents a massive animal welfare challenge and has a drastic impact on the livelihoods of many: not only farmers, but also workers from regulatory agencies, veterinary practices, slaughterhouses, processing plants, feed supply factories, exporters, and anyone else involved in any step of the agricultural process. Nonetheless, it is, of course, warranted if necessary for the safety of consumers of animal products. But was it necessary?

Research on the Fukushima Exclusion Zone herds has been ongoing for nearly a decade now, and while it will take more time to fully see the effects of chronic low-dose radiation exposure, scientists have published preliminary findings and are starting to see trends.

So far, the bovines have not shown any increased rates of cancer. The only abnormal health indicators are white spots that some have developed on their hides. A study of Japanese Black cattle residing on a farm 12 km to the west-northwest of the Fukushima Daiichi nuclear power plant in one of the areas the Japanese government has deemed the “difficult-to-return zone” found no significant increases in DNA damage in the cows. A different study found that horses and cattle fed with radiocesium-contaminated feed showed high radiocesium levels in their meat and milk. However, they found that after just eight weeks of “clean feeding” (feeding with non-contaminated food), “no detectable level of radiocesium was noted in the products (meat or milk) of herbivores that received radiocesium-contaminated feed, followed by non-contaminated feed.”

Much like Chornobyl (the Ukrainian spelling) has become a sanctuary for wild animals despite the residual radioactivity, signs are pointing to a natural “rewilding” of the Fukushima Exclusion Zone. With humans, cars and domestic animals gone, wildlife is able to move into empty urban and suburban environments and thrive. A trail cam study of wild animals around the Exclusion Zone has uncovered “no evidence of population-level impacts in mid- to large-sized mammals or [landfowl] birds.” Wild boars are abundant in the Fukushima region and present another good representative mammal to research. A study of 307 wild boars found no elevation in genetic mutation rates and that a certain amount of boar meat could even be safely consumed by humans.

Although nuclear radiation is a frightening threat, in part due to its invisible nature, evidence seems to be pointing to minimal, if any, health effects for animals exposed to the amount released by the Fukushima Daiichi disaster.

According to a study from the University of Bristol, it’s likely that the situation would be the same for humans had they not been evacuated/relocated. Due to the relatively low-dose nature of the event, the stigma and sometimes severe mental distress experienced by those displaced, as well as losses of life associated directly with relocation and indirectly via increases in alcohol-use disorders and suicide rates, the authors conclude that “relocation was unjustified for the 160,000 people relocated after Fukushima.”

This article was written for the McGill Office of Science and Society. View the original here: https://www.mcgill.ca/oss/article/history-environment/when-cows-come-home-radioactive-ranches

Is it true that no two snowflakes are identical? (McGill OSS)

4 minute read

Snow crystals—better known as snowflakes—are intricate, delicate, tiny miracles of beauty. Their very existence seems unlikely, yet incomprehensible numbers of them fall every year to iteratively construct wintery wonderlands.

Every snowflake is formed of around 100,000 water droplets in a process that takes roughly 30-45 minutes. Even with this level of complexity contributing to each and every snow crystal, it seems nearly impossible that every single flake is truly singularly matchless. Yet the science of snow formation can explain how every snowflake tells its life story, and every story is unique.

The first known reference to snowflakes’ unique shapes was by a Scandinavian bishop, Olaus Magnus, in 1555, but he was a touch mistaken in some of his proposed designs.

Photo source: http://www.snowcrystals.com/history/history.html

Snowflakes’ six-fold symmetry was first identified in 1591 by English astronomer Thomas Harriot. Still, a scientific explanation for this symmetry wasn’t proposed until 1611, when Johannes Kepler, a German astronomer, wrote The Six-Cornered Snowflake. Indeed, almost all snowflakes exhibit a six-fold symmetry—for reasons explained here—but snowflakes with 3- or 12-fold symmetry can occasionally be found.

The notion that no two snowflakes are alike was put forth by Wilson Bentley, a meteorologist from Vermont who took the first detailed photos of snowflakes between 1885 and 1931. He went on to photograph over 5000 snow crystals and, in the words of modern snowflake expert Kenneth Libbrecht, “did it so well that hardly anybody bothered to photograph snowflakes for almost 100 years.” Bentley’s assertion of snowflakes’ unique natures might be 100 years old, but it has held up to scientific scrutiny. Understanding how snow forms can help us understand precisely how nature continues to create novel snowflake patterns.

Snow crystals begin forming when warm moist air collides with another mass of air at a weather front. The warm air rises, cooling as it does, and water droplets condense out of it, just like when your shower deposits steam onto your bathroom mirror. Unlike in your bathroom, however, these water droplets don’t have a large surface to attach to and instead form tiny droplets around microscopic particles in the air like dust or even bacteria. Big aggregates of these drops are what form clouds.

If the air continues to cool, the water droplets enter what’s called a supercooled state. This means that they are below 0˚C, the freezing point of pure water, but still liquid. Ice crystals will start to grow within a drop only once it is given a nucleation point, a position from which the crystals can begin to grow. If you’ve ever seen the frozen beer trick, it relies on the same mechanics.

Once a droplet is frozen, water vapour in the surrounding air will condense onto it, forming snow crystals, aka snowflakes. Not every droplet freezes but those that don’t will evaporate, providing more water vapour to condense onto the frozen ones. Once roughly 100,000 droplets have condensed onto the crystal, it’s heavy enough that it falls to earth.

The crystal patterns formed when the water vapour condenses onto a growing flake are highly dependent on temperature and on how saturated the surrounding air is. Below you can see a Nakaya diagram. Created in the 1930s and named for its creator, Japanese physicist Ukichiro Nakaya, it shows the typical shapes of snow crystals formed under different supersaturation and temperature conditions.

Photo source: http://www.snowcrystals.com/morphology/morphology.html

Above roughly -2˚C, thin plate-like crystals tend to form. Between -2˚C and -10˚C, the formations are slender columns. Colder still, between -10˚C and -22˚C, we find the wider thin plates we’re most used to, and below -22˚C comes a rarely seen mix of small plates and columns. Snow crystals grow rapidly and form complex, highly branched designs when humidity is high and the air is supersaturated with water vapour. When humidity is low, the flakes grow more slowly, and the designs are simpler.
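As a rough way to summarize those bands, here is an illustrative Python sketch based only on the approximate cutoffs described above (the real Nakaya diagram varies continuously with temperature and supersaturation, so treat this as a cartoon, not a lookup tool):

```python
def typical_crystal_habit(temp_c: float, supersaturated: bool) -> str:
    """Rough sketch of the temperature bands described above (illustrative only)."""
    if temp_c > -2:
        habit = "thin plate-like crystals"
    elif temp_c > -10:
        habit = "slender columns"
    elif temp_c > -22:
        habit = "wide, thin plates (the classic snowflake shape)"
    else:
        habit = "a rarely seen mix of small plates and columns"

    growth = "rapid, highly branched" if supersaturated else "slower, simpler"
    return f"{habit} with {growth} growth"

print(typical_crystal_habit(-15, supersaturated=True))
# -> wide, thin plates (the classic snowflake shape) with rapid, highly branched growth
```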

As a growing snowflake moves through the air, it encounters countless different microenvironments with slightly different humidity and temperature, each affecting its growth pattern. In this way, the shape of a snowflake tells its life story—the second-by-second conditions it encounters determine its final form. That’s where the unique nature of each snowflake comes from.

Kenneth Libbrecht is a snowflake scholar—a professor of physics at California Institute of Technology who has dedicated years of his career to uncovering the mysteries of snow crystals. He was even a consultant on the movie Frozen. He grows snowflakes in his laboratory using specialized chambers under highly controlled environmental conditions. Growing multiple snow crystals very closely together under essentially identical conditions, Libbrecht can create ostensibly identical snowflakes. But even still, he considers them more like identical twins. Can you visually see a difference between them? No, not really. But if you were to zoom in, and in, and in, on some level, you would be able to find differences.

Libbrecht thinks that the question of whether there have ever been identical snowflakes is just silly. “Anything that has any complexity is different than everything else,” even if you have to go down to the molecular level to find it.

This article was written for the McGill Office for Science and Society. View the original here: https://www.mcgill.ca/oss/article/environment-you-asked/it-true-no-two-snowflakes-are-identical

Does size matter when it comes to needles? (McGill OSS)

5 minute read

Shots, jabs, pricks—whatever you call it, having a needle inserted into your body is not most people’s idea of a fun afternoon activity. Even if you don’t have a specific needle phobia, injection reactions typically range from neutral at best to quite negative at worst. But what if needles didn’t have to hurt? Or, at least, what if they hurt less? It seems intuitively true that decreasing the size of a needle would make it hurt less, but is it really that simple?

The diameter of a needle (how big it is across) is measured in a unit called a gauge. Because the concept of a gauge pre-dates the 18th century and has been defined in many different, inconsistent ways, it’s worth specifying that needle width is measured in the Birmingham gauge. The bigger the gauge, the smaller the needle. For example, the width of a 7-gauge needle is roughly 4.6 mm (0.18 inch), while the width of a 30-gauge needle is about 0.31 mm (0.012 inch). To give you some context, a typical spaghetti noodle is roughly 14-gauge, and a regular stud earring is about 19-gauge.
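To make the “bigger gauge, smaller needle” relationship concrete, here is a small illustrative lookup in Python. The 7- and 30-gauge diameters come from the figures above; the others are approximate values I’ve added for context, so check a proper Birmingham gauge chart before relying on them:

```python
# Approximate outer diameters (in mm) for a few Birmingham-gauge needles.
# The 7- and 30-gauge figures come from the article; the rest are rough
# reference values -- consult an actual gauge chart for exact numbers.
gauge_to_mm = {
    7: 4.6,    # very large bore
    14: 2.1,   # roughly spaghetti-width
    21: 0.8,   # typical blood-draw size
    25: 0.5,   # common vaccine size
    30: 0.31,  # fine insulin-injection size
}

for gauge, diameter in sorted(gauge_to_mm.items()):
    print(f"{gauge}-gauge needle: ~{diameter} mm across")
# Notice the diameter shrinks as the gauge number grows.
```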

There are a few factors that determine what size of needle a practitioner needs to use, including the body size of the patient and the body part being pricked, but a critical factor is the amount of fluid being injected or drawn out of the patient. If you try to inject a large amount of fluid through a very thin needle, it will both take longer and hurt more due to the high pressure.

For blood collection, which is typically a few millilitres of blood, clinicians use needles of 21-22 gauge. Vaccines are often <1 mL and accordingly use needles that are slightly smaller, around 22-25 gauge. Delivering insulin to diabetic patients requires even less fluid and can use needles as small as 29-31 gauge.

Even with the limitations imposed by volume, there is some wiggle room in the gauge of needle used for a certain procedure. Medical practitioners can often use their own judgment, experience, and clinical guidelines to change the size of needle they use. Much like how artists may favour a certain size brush, some clinicians have personal preferences in the tools of their trade.

Luckily, it is actually relatively simple to study whether decreasing needle diameter decreases pain. Just find some volunteers who are willing to be stabbed for science (or who are already being treated with a needle-involved method), stick them with at least 2 needles of different gauges without telling them which is which, and ask them how much it hurt on a numeric scale. There are dozens of studies that take this form.

Regarding simple injections in the body, this study compared a 30-gauge needle with a 26-gauge one and found no significant difference in the reported pain. As did this study, which compared 27-gauge vs 23-gauge vs 21-gauge. For injecting Botox around patients’ eyes, this study found no difference in pain scores between a 32-gauge and a 30-gauge needle. These are by no means all of the studies on needle size and pain, but they are representative of the scientific literature on this topic. Again and again, trial participants seem to find no significant difference in their pain when comparing needle gauges.

For many people, the anesthetic injection is the worst part of any dentist visit. While it would be lovely to tell you that a quick swap to a thinner needle is all you need to decrease the pain of dental injections, there is a wealth of evidence to the contrary. For anesthetic injections in the mouth, smaller-width needles were not only ineffective at reducing pain; in one study, they actually increased it!

To continue reading, for free, click here- https://www.mcgill.ca/oss/article/medical/does-size-matter-when-it-comes-needles

When it comes to conservation, cat-fights only hurt our communication efforts (The Skeptic)

7 minute read

When non-native animals are introduced to an ecosystem, quite often, the very delicate balance of that environment is thrown off. Plants, animals, fungi, bacteria, and everything else in a biome are connected through the food web, meaning that small changes to any part of a habitat can have extensive consequences.

From zebra mussels in Canada to grey squirrels in the United Kingdom, invasive animals have become a massive problem with increases in global travel and shipping. We enact biosecurity laws and protocols, institute quarantine procedures and mandate pesticide treatments to try to limit their spread; but despite all our efforts to curb invasive invasions, there is one species that we tend to give a pass to: cats.

Domestic cats are not native to anywhere. While they are descended from Felis lybica, the African Wildcat, the domestic cat is a different species. They are even given a separate Latin species name: Felis catus.

Even when well fed at home, domestic cats often engage in predation and hunting behaviours. With some variance depending on location, cats tend to kill more birds and small mammals than anything else. Since domestic cats are an introduced species, they have tremendous potential to upset intricate ecological situations.

Some researchers strongly believe that domestic cats’ damaging influence on the environment has already been robustly demonstrated. They feel it is crucial to act immediately and decisively if we want to have any hope of counteracting the damage done by domestic felines. For example, in 2018, conservationists from Oklahoma State University and the Smithsonian Conservation Biology Institute published a paper wherein they denounced what they described as organised misinformation campaigns spreading junk science about domestic cats’ effects on ecosystems.

They invoke the Merchants of Doubt moniker—the name given to the “cabal of industry-beholden” contrarian scientists who denied evidence of harm by tobacco smoking, DDT and climate change for financial gain—and liken outdoor cat advocates to “cigarette and climate-change fact fighters” pushing “propaganda.”

Conversely, other researchers feel that many conservation scientists are fueling an unwarranted moral panic over outdoor cats with exaggerated claims and inadequate evidence. In response to the 2018 Merchants of Doubt publication, researchers from six universities around the world collaborated on a rebuttal. They wrote that:

equating the resources and power of global corporations and economic elites (e.g., Exxon Mobil) with the reach and advocacy of comparatively small non-profit organizations and university academics strains the [Merchants of Doubt simile] past the breaking point.

The authors take issue with conservationists concluding that cat advocates are acting with nefarious or bad faith motives and feel that calls for things such as “remov[ing cats] — once and for all — from the landscape” by “any means necessary” are sensationalist and premature. Instead, they call for better research to investigate the severity of the risks cats pose to habitats and the appropriate levels of interventions, and humane but effective alternatives to simply killing and banning outdoor cats.

A White-Hot Issue

If you’re not that familiar with the literary style research papers are usually written in, let me just say, it’s not usually quite like this. Usually, one side of an academic debate is not accusing the other of being corporate shills. The vast majority of the time, there are no mentions of “zombie apocalypse[s]” or calls to let things “weigh heavy on our shoulders.”

The rhetoric throughout the literature on outdoor cats is very inflammatory. The cats/birds issue isn’t just a problem to be solved. It is a fight, a conflict, a war. Solutions to this situation are needed urgently. Danger is imminent. “Drastic times call for drastic measures.” People “must ask themselves which animals should be saved but do so quickly because there is no time to [do both]… before extinctions occur”.

Clearly, the environmental impact of cats on birds, and the welfare of cats, are contentious and emotionally charged topics. It makes a lot of sense that they are. Environmental stewardship is an important role that humans are morally obligated to fulfill. Especially in the face of an existential threat. At the same time, cats also represent life that should be protected. Cats long ago transcended their status of just-another-animal. From their initial roles of pest control, they have become members of the family. Given as much, cat owners often take advice regarding their pets personally.

The thing is, this highly polarised landscape filled with provocative language and antagonistic interactions isn’t helping either side. And it isn’t helping the birds, or the cats, either.

Whether cats impact wildlife in a meaningful and long-lasting way is a question for the experts in this field. They do not seem to agree, which implies the need for more research on the matter. Either way, it doesn’t particularly matter who is “right” anymore.

What matters is how needlessly divided the debate has become.

A Birdy Binary

A false dichotomy has been created wherein one can either care about native wildlife or feline welfare, but never both. Either cats are the enemies — the representations of humans’ entitlement and disdain for the earth — or the most perfect companions, too often neglected and maligned, who are just following their natural instincts.

We do ourselves a massive disservice by reducing this complex and multifaceted issue to one side versus another, or ‘us versus them’. People are lumped into supposedly either loving birds and hating cats or vice-versa, when in truth, most conservationists and pet owners are motivated by similar loves of nature, flora, and fauna.

This artificial divide encourages more polarising solutions, more extreme takes and leads to fearmongering and moral panics. It not only creates this illusion of a lack of a middle ground, it eliminates any of the methods or solutions that would originate from there.

We can become so hyper-focused on advocating for one position that we become blinded to other parts of the issue. Habitat loss is displacing bird populations and climate change is affecting their ability to find food and water. As cities sprawl outward, they remove homelands for birds and disrupt migration routes. In Canada, around 100 million birds are estimated to die every year due to collisions with buildings, power lines and cars.

Such black-and-white thinking discourages the peer review process. With little room for nuance, any criticism of a study’s methods can be seen as dissent. Scientists need to feel free to question how research is performed and how it draws its conclusions without fear of being labelled as agents of misinformation.

It’s Getting Mean in Here

Outside of academic discussions, the binary division between perceived “bird lovers/cat haters” and “cat lovers/bird haters” is even wider. This pattern is seen to varying levels across social media, traditional media, and interpersonal relationships. Expressing the wrong opinion on Twitter about indoor/outdoor cats can lead to harassment and ostracisation.

We should all know that an anecdote is not good evidence for anything on its own. Nonetheless, let me tell you a short one.

I have written on a variety of “controversial” topics in the past — menstruation, copycat suicides, female ejaculation, transgender children, border walls — but only once have I been kicked out of a science-themed social media group. I was removed after sharing my (then) most recent article on whether bells on cat collars work to reduce the amount of prey that domestic cats kill. For the record, three studies (one published in 2005, one in 2006, and one in 2010) have shown that cats brought home less prey when they wore bells. But very quickly, the thread of responses devolved into name calling and insinuations of nefarious or financially motivated intentions.

Empathy works, not… whatever that is

What should be a logical debate on policies and practices has turned ugly. The cats and birds issue has become a hotbed for sensationalism and hyperbole, no matter your stance. And the worst part about it is that we know it won’t work as well as collaborative and kind approaches would.

We know that when trying to change somebody’s mind, what tends to work is empathy and ongoing dialogue. We want to avoid judgment, disdain, or anger. Scientists need to be transparent about how they draw their conclusions and accept legitimate criticisms. Science is not perfect or magic but just a tool to help us understand the world around us. Trust is crucial for effective communication of knowledge, and trust cannot be built on anything but honesty and openness.

Actually helping wildlife and domestic pets alike requires engaging with all stakeholders. Especially the ones that oppose your stance. As much as we may want to rant and kick and scream at the people who disagree with us, it’s pointless. Not only that, it’s actively detrimental to their understanding and your ability to communicate with them. Like with so many things, in science communication, kindness is key.

Article originally posted here- https://www.skeptic.org.uk/2022/09/when-it-comes-to-conservation-cat-fights-only-hurt-our-communication-efforts/

Problematic Perceptions of Probability of Precipitation (Skeptical Inquirer: But What Do I Know?)

5 minute read

As Benjamin Franklin wrote in 1789, “In this world, nothing can be said to be certain, except death and taxes.” For everything else, there is an inherent degree of uncertainty. We don’t often come face to face with quantitative probabilities in our everyday life, save for one: probability of precipitation (PoP).

A seemingly simple concept, PoP is present in most of our daily routines as we check the weather before getting dressed. You may not explicitly realize it, but we all have personalized thresholds for the PoP at which we will choose to bring an umbrella or cancel an outing. For some, a 60 percent PoP warrants carrying an umbrella; for others, only 80 percent or higher. Unfortunately, most of us don’t have an accurate idea of what PoP truly means, even though most of us are certain we do!

A 60 percent PoP does not mean that 60 percent of an area will receive precipitation. It also does not mean that it will rain for 60 percent of the time period, and it does not mean that a forecaster is 60 percent confident that it will rain. So, what does it mean?

As defined by the National Weather Service, a probability of precipitation (PoP) is the “chance that at least 0.01 [inch] of rain will fall at the point for which that forecast is valid over the period of time given.”

So, a 60 percent PoP means that when these meteorological conditions occur in this area, 6 times out of 10, there will be at least some rain. That’s what PoP is. But how is it calculated?

PoP = Confidence x Coverage

To find the PoP for a given area over a given time period, we take the confidence of the forecaster and multiply it by the area that will be affected by the precipitation. Say I was 100 percent confident that 50 percent of Cleveland would receive at least 0.01″ of precipitation tomorrow; the PoP would then be (1 * 0.5 = 0.5) 50 percent. Now, if I were only 80 percent confident in my prediction that 50 percent of Cleveland is getting wet tomorrow, the PoP would be (0.8 * 0.5 = 0.4) 40 percent!
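That arithmetic is simple enough to write out as a tiny Python sketch (my own illustration, reproducing the two Cleveland examples above):

```python
def pop(confidence: float, coverage: float) -> float:
    """Probability of precipitation = forecaster confidence x area coverage.

    Both arguments are fractions between 0 and 1, and so is the result.
    """
    return confidence * coverage

print(pop(1.0, 0.5))  # 0.5 -> 100% confident, 50% of the area: 50% PoP
print(pop(0.8, 0.5))  # 0.4 -> 80% confident, 50% of the area: 40% PoP
```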

If you stay in the same spot all day (like I did writing this article), then a 40 percent PoP means you have a 4 in 10 chance of being rained on. But, if you move around within an area, or between areas, your probability of encountering rain increases. As Brad Panovich, Chief Meteorologist at WCNC Charlotte put it, “It’s like buying more raffle tickets. Each one you buy increases your chances of winning.”

If you were mistaken about PoP until today, count yourself in good company. Weather forecasts have been available to the general public in the United States since the late 1960s, but in studies, between 35 percent and 73.8 percent of respondents defined PoP incorrectly, even when they were meteorologists! A viral TikTok from 2019 that taught people the untrue percent-of-land definition of PoP certainly hasn’t helped things.

It turns out that even the sort of wishy-washy terms meteorologists use to describe weather, such as “scattered flurries” or “isolated showers,” have fairly strict definitions too. The National Weather Service uses certain expressions to communicate the degree of certainty in a forecast: “slight chance,” “chance,” and “likely.” There are also particular qualifiers to convey the portion of the area that will be affected: “isolated/few”; “widely scattered”; “scattered”; “numerous”; or “occasional/periods of.”

Image source: https://www.weather.gov/bgm/forecast_terms

At least in Canada, the term “risk,” as in a “risk of thunderstorms,” indicates a 30–40 percent chance of said weather occurring. Fun fact: also in Canada, a PoP of 50 percent is never permitted, because it seems too indecisive.

So, to those who previously found themselves cursing the local weather forecaster for never getting it right, hopefully, this article helps explain that your own lack of knowledge was more likely at fault than theirs. Believe it or not, weather forecasts have actually been getting more and more accurate with time. In 1972, a National Weather Service forecast made three days in advance was off by an average of six degrees. Forty years later, it was down to three degrees. In the late 1980s, when trying to predict where hurricanes would make landfall three days in advance, the National Hurricane Center missed by an average of 350 miles. Now the average miss is only about 100 miles.

Now, that’s not to say that meteorologists can’t be biased. Many weather agencies previously biased their forecasting toward more precipitation than will actually occur. This so-called “wet bias” meant that for years when the Weather Channel predicted a 20 percent PoP, it actually rained only roughly 5 percent of the time. It’s unclear if the wet bias is still influencing forecasts today.

Take a look at the chart below. It shows the percent confidence on the top and the percent area on the side. By multiplying these together, we get the PoP at the intersection of the two. The chart is symmetric. For example, if your confidence is 90 percent and the area affected is 50 percent, the PoP equals 45 percent. If the values were swapped, the PoP would still be 45 percent. This means that even though there are multiple ways to arrive at each PoP, in the end, they mean similar things.
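If you’d like to recreate such a chart yourself, a few lines of Python will print the full grid of confidence times coverage values (again, purely illustrative):

```python
# Print a PoP grid: forecaster confidence across the top, area coverage down
# the side, and the resulting PoP (as a whole percentage) at each intersection.
levels = range(10, 101, 10)  # 10% through 100%, in steps of 10

print("cover%  " + " ".join(f"{c:>4}" for c in levels))
for coverage in levels:
    row = " ".join(f"{confidence * coverage // 100:>4}" for confidence in levels)
    print(f"{coverage:>6}  {row}")
```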

A 40 percent PoP can be arrived at by multiplying 100 percent and 40 percent or 50 percent and 80 percent. Therefore, a 40 percent PoP could mean that it is:

  • Absolute certainty (100 percent) that some (40 percent) of the area will receive rain, or
  • Quite likely (80 percent) that half (50 percent) of the area will receive rain, or
  • Somewhat possible (40 percent) that all (100 percent) of the area will receive rain, or
  • Possible (50 percent) that most (80 percent) of the area will receive rain.

In the end, all these basically mean “you probably won’t need an umbrella, but it’s not a bad idea.”

Similarly, you’ll notice that to get a PoP of 70 percent or above, one of either the confidence or area must be greater than 70 percent. Regardless of whether that’s the result of being very sure it’ll rain over more than half the area, or being fairly sure it’ll rain over the entire area, what matters is that you remember to close your bedroom window.

Original article posted here- https://skepticalinquirer.org/exclusive/new-column/

The Epidemic Facing Ash Trees (McGill OSS)

6 minute read

The Emerald Ash Borer (EAB) is a species of jewel beetle native to eastern Asia. In 2002, the beetle was detected for the first time in North America, first in Michigan and then in Ontario, although tree ring analysis suggests it had likely been present in those regions since the early 1990s. Since then, the number of EABs has increased year after year as the bugs have spread across Ontario, Quebec and more than half of the continental U.S.

An infestation of EABs can kill an otherwise healthy ash in 2-5 years. But how can an 8.5 mm long insect kill a tree anyway? One way would be by eating all of its leaves. Without foliage, a tree has no way to photosynthesize, and therefore no way to make energy. Adult EABs do munch on leaves—a loss of tree canopy is a warning sign of EAB infestation—but not usually to the degree that would kill an ash. Instead, it’s the EAB larvae that cause the majority of the damage.

EAB eggs are laid on ash branches, and the larvae, once hatched, chomp their way under the bark. The little grubs chew out S-shaped tunnels, called galleries, about 6 mm wide and up to 30 cm long, to live in. These galleries disrupt a tree’s internal water transport system, taking away its ability to send necessary nutrients up to its branches and leaves. As a result of nutrient deficiency, EAB-infested ash trees often show signs of chlorosis, or a lack of green colour in their uppermost leaves. Dying ash trees will sometimes send out epicormic shoots—little sprouts from the roots or lower trunk and branches—in an attempt to survive.

Most EABs spend winter inside ashes in their larval form. They’re able to withstand temperatures down to -30 ˚C, and are partially insulated by the tree bark. Eventually, come spring, the fully matured beetles will emerge from the ash trees, leaving small capital D-shaped exit holes about 4 mm wide.

The loss of one type of tree might not seem like such a cause for alarm, but the widespread death of ash trees is having many repercussions. In 2015, Montreal was home to roughly 200,000 ash trees. Mont Royal, the iconic park in the centre of the island, was, until recently, home to over 10,000 of those trees. But, as a result of the EAB infestation, the City of Montreal was forced to cut down about one-third of those ashes; the other two-thirds it chose to treat with preventative insecticides. To make up for the over 3000 lost trees, the city will plant 40,000 saplings, of which about 50% are expected to thrive. In 2016, Montreal committed $18 million to fighting the EAB and replacing the ashes it kills. In the U.S., affected states spend an average of $29.5 million per year to manage EAB populations.

The loss of ash trees can disturb ecosystems, bring down home values or disrupt food webs. During bad weather, sick or dying ashes can pose a safety risk if they fall or drop branches. And with the loss of these trees comes an increased risk of landslides and flooding, both of which tree roots help to prevent.

Read the entire article for free by clicking here- https://www.mcgill.ca/oss/article/epidemic-facing-ash-trees

Luciferin and GFP: The Fluorescent Chemicals Used by Insects, Sea Creatures and Humans! (McGill OSS)

5 minute read

How do fireflies create their telltale glow? It differs slightly depending on species—there are more than 2000 species of fireflies found across the world, including many that do not glow—but the one we know the most about is the North American Firefly (Photinus pyralis). It uses a molecule named luciferin and its enzyme buddy luciferase. Luciferase reacts with luciferin, causing it to break down into two compounds and release CO2. One of those two compounds has a bit of excess energy that it releases as light!

The production of this light has three requirements, other than luciferin and luciferase: magnesium, oxygen and ATP. That ATP requirement is a big part of why the luciferin assay has become an important tool for biochemical research. Adenosine-5′-triphosphate (ATP) is the universal “energy molecule” of all forms of life. So, luciferase and luciferin can be used to test if something like a cell is alive and still producing ATP.

……

One group of fireflies, however, use their glowing abdomens to hunt. Females of the genus Photuris engage in aggressive mimicry by imitating the flashing patterns of other species’ females to lure and eat the males who seek mates.

Unfortunately, due to habitat loss and climate change, firefly numbers are declining across much of the world. The lack of appropriate green spaces for fireflies to live and mate is compounded by the sedentary nature of many firefly species. The larvae of the common European glow-worm are reported to move only about 5 metres (16.4 feet) per hour. Light pollution may also be impacting fireflies’ ability to thrive. In one study, light pollution reduced the flashing of Photuris versicolor by almost 70%.

To read the entire article, click here.