Our collective vision of Christmas landscapes is so immersed in snow that the very phrase “It’s beginning to look a lot like Christmas” conjures up imagery that is nearly all frosted, sparkling and white. This is true even though a snow-covered Christmas is the exception rather than the rule for most of the world.
Despite what the song “White Christmas” would have you think, for more than half the continental U.S. there is less than a 50% chance of a white Christmas occurring. Snow on December 25th is rare in the U.K., and it’s not even as common as you might expect in the Great White North of Canada! So why do we pine for a pearly white holiday time?
Maybe Bing Crosby crooning, “I’m dreaming of a White Christmas, just like the ones I used to know,” has given you the impression that climate change is to blame for the seeming lack of modern-day snowy holidays. Global warming certainly has played a role in decreasing the chances of frosty festivities and will continue to do so. But the real reason behind our widespread association of Christmas and snow is less to do with changing weather patterns and more to do with our media.
Charles Dickens’ classic tale “A Christmas Carol” was written and published in England during the Victorian era. While nowadays you see far more fake snow than real, during Dickens’ early life, winters in the U.K. were snow-filled times of “piercing, searching, biting cold.” The 16th through 19th centuries spanned a climatic period known as the Little Ice Age, during which most of Europe saw colder, longer and snowier winters than previously known. Winters were cold enough for frost fairs to be held on a frozen-solid River Thames, something that hasn’t happened since 1814.
While familiar to us in much of Canada, lasting snowy landscapes and the beauty created by ice and frost were novelties to many artists, and Father Winter served as a muse. The Little Ice Age gave birth to the vast majority of European depictions of winter in paintings and inspired numerous enduring works of art.
Charles Dickens has been called the man who invented Christmas—a definite exaggeration. But we can thank him, Jacob Marley, and Ebenezer Scrooge for helping to cement a Christmas aesthetic that has persisted with impressive consistency. Christmas is a time of nostalgia for many of us, and it was no different for Dickens. His stories contain references to the snowy cold winters of his childhood, making it ironic, in a sense, that we should now feel a sort of nostalgia for Dickens’ childhood winters too.
Our views that Christmases should be snowy don’t exclusively come from the England of yore. New media and art through the years have iterated upon Dickens’ Christmas setting and only further enshrined our association of Christmastime with snow. The United States has contributed its fair share of frost-filled Christmas media, from “A Visit from St. Nicholas”—better known as “’Twas the night before Christmas”—with its newly fallen snow, to stories like “How the Grinch Stole Christmas” by Theodor “Dr. Seuss” Geisel, to the lithographic prints of Currier and Ives and the Christmas scenes of Norman Rockwell. The classic Christmas movie “It’s a Wonderful Life” even won an award for developing a new version of fake snow to replace the painted cornflakes used previously!
While Bing Crosby sings less about the white Christmases he personally knew and more about the ones we as a society used to know, the man who wrote the lyrics for “White Christmas,” Irving Berlin, was likely talking about both. A Jewish immigrant to the U.S., Berlin was born in Tyumen in modern-day Russia. With average daily December temperatures there of -12.9 ˚C, he very well may have been referencing both his childhood Christmases and the historic Victorian ones enshrined in our holiday ideals.
Evolution is often thought of as a solely long-term process. But the conception that its effects are only seen after millions of years ignores a crucial part of the evolutionary process: adaptation. Because we tend to fixate on the drastic changes caused by evolution over huge timescales, it’s easy to ignore the small variations between generations that add together over time to form the big evolutionary changes we focus on. This unintentional side-lining of small adaptations can blind us to the ways in which humans are directly affecting the evolutionary processes of nature. From tuskless elephants to fish that can’t smell, animals are developing specialized adaptations to allow them to live in ecosystems that have been disrupted and altered by mankind. These adaptations are one step in the evolutionary process that already bears the unmistakable marks of humanity’s influence.
Just as humans are changing the planet, they’re changing the fauna that inhabit it. Here are some examples of how.
Rome wasn’t built in a day, but from 165-180 CE, up to 2,000 of its citizens were killed per day.
The Antonine Plague, also known as the Plague of Galen (after the doctor who described it), decimated the Roman Empire. It was brought to Rome by armies returning from western Asia, causing fevers, skin sores, diarrhea and sore throats.
This plague, and the Plague of Cyprian that occurred about 70 years later, are generally thought to have been due to smallpox and measles. The Roman citizens of the time would not have been exposed to either virus and thus would have had no immunity, which could explain the mass casualties seen (the first plague had a mortality rate of 25%).
While smallpox has not been seen clinically since 1977, measles still kills upwards of 85,000 people every year, despite being vaccine preventable. While the measles virus is most famous for causing the red rash that begins at the hairline and slowly spreads over the entire body, it can also cause fevers, sore throats, nausea and diarrhea. Perhaps just as distinctive, if not as noticeable, are the tiny white Koplik spots that may appear inside a victim’s mouth. The good news is that the rash actually signals the end of the viral infection, and the skin usually flakes off as the rash goes away.
Most of our readers are safe from the Romans’ fate, as measles was officially eliminated from the Americas in 2016. However, this elimination is conditional on travellers not bringing the virus back from their vacations and causing an outbreak. That’s why the MMR vaccine, which provides immunity against measles, mumps, and rubella, is recommended for all, travellers and homebodies alike.
In 2014, a group of unvaccinated Amish missionaries brought measles back from the Philippines. It rapidly spread through their largely unvaccinated communities, resulting in 383 cases of measles across 9 counties. Luckily, thanks to modern medicine, no one died. We’ve come a long way from the plague that wiped out one third of the Roman Empire, and thanks to vaccines, we’ve got no plans for a measles plague of our own.
Gail Borden Jr. first condensed milk in 1853 in an attempt to create a shelf-stable milk product. He opened two factories to produce his product, but both ultimately failed, and it wasn’t until a third factory opened in 1864 that his condensed milk, sold under the name Eagle Brand, caught on. Truly, however, this baking staple owes its popularity to the American Civil War, as the U.S. government ordered huge amounts of sweetened condensed milk (sometimes called Borden’s Milk) for use as a field ration. After the war, soldiers spread the news of sweetened condensed milk, and its popularity only rose. The first Canadian condenser was built in 1871. A market bubble formed after the milk’s rapid rise in use, however, and its eventual popping left only a few companies in the condensed milk business, most notably Nestlé and the Eagle Brand that started it all.
When you spend hours proofreading and retyping essays, you get to wondering: why do we refer to large letters as upper case and small ones as lower case? It’s actually a remnant of a past where printing presses had manually set letters. Small letters, which were used the majority of the time, were kept in the lower, easier-to-access case, whereas large letters were kept in the upper one. Also interesting to note is that capitalization belongs to the script, not the language. So all languages using the Latin script, like English, have upper and lower case, but languages using Devanagari, such as Hindi or Sanskrit, do not.
Pencils do not contain any lead, and they never did! The mistake in terminology can be traced back to the ancient Romans, who drew lines on papyrus using pieces of actual lead, all the while not realizing it was incredibly toxic. Considering its toxicity, it’s really good that pencils never did contain lead. Could you imagine how much a child could ingest while chewing on their pencil? So, what is the dark stuff inside pencils if not lead? It’s actually a mixture of graphite and clay. Graphite is literally named for its ability to leave marks on paper; the name comes from the ancient Greek word graphein, meaning “to draw.” If you’re ever especially stressed during an exam, you could always try squeezing your pencil tip. Under enough pressure, and at a high enough temperature, graphite turns into diamond, and I expect that if you manage to manually make a diamond, you won’t mind a bad exam grade as much. But besides allowing students to take exams and artists to draw, graphite serves an important role in batteries, particularly lithium-ion batteries, due to its high conductivity.
Starting in 1932, 600 African American men from Macon County, Alabama were enlisted to partake in a scientific experiment on syphilis. The “Tuskegee Study of Untreated Syphilis in the Negro Male,” was conducted by the United States Public Health Service (USPHS) and involved blood tests, x-rays, spinal taps and autopsies of the subjects.
The goal was to “observe the natural history of untreated syphilis” in black populations. But the subjects were unaware of this and were simply told they were receiving treatment for bad blood. Actually, they received no treatment at all. Even after penicillin was discovered as a safe and reliable cure for syphilis, the majority of men did not receive it.
In 1865, the ratification of the Thirteenth Amendment of the U.S. Constitution formally ended the enslavement of black Americans. But by the early 20th century, the cultural and medical landscape of the U.S. was still built upon and inundated with racist concepts. Social Darwinism, predicated on the survival of the fittest, was on the rise, and “scientific racism” (a pseudoscientific practice of using science to reinforce racial biases) was common. Many white people already thought themselves superior to black people, and science and medicine were all too happy to reinforce this hierarchy.
Before the ending of slavery, scientific racism was used to justify the African slave trade. Scientists argued that African men were uniquely fit for enslavement due to their physical strength and simple minds. They argued that slaves possessed primitive nervous systems, so did not experience pain as white people did. Enslaved African Americans in the South were claimed to suffer from mental illness at rates lower than their free Northern counterparts (thereby proving that enslavement was good for them), and slaves who ran away were said to be suffering from their own mental illness known as drapetomania.
During and after the American Civil War, African Americans were argued to be a different species from white Americans, and mixed-race children were presumed prone to many medical issues. Doctors of the time testified that the emancipation of slaves had caused the “mental, moral and physical deterioration of the black population,” observing that “virtually free of disease as slaves, they were now overwhelmed by it.” Many believed that the African Americans were doomed to extinction, and arguments were made about their physiology being unsuited for the colder climates of America (thus they should be returned to Africa).
Scientific and medical authorities of the late 19th/early 20th centuries held extremely harmful pseudoscientific ideas specifically about the sex drives and genitals of African Americans. It was widely believed that, while the brains of African Americans were under-evolved, their genitals were over-developed. Black men were seen to have an intrinsic perversion for white women, and all African Americans were seen as inherently immoral, with insatiable sexual appetites.
This all matters because it was with these understandings of race, sexuality and health that researchers undertook the Tuskegee study. They believed, largely due to their fundamentally flawed scientific understandings of race, that black people were extremely prone to sexually transmitted infections (like syphilis). Low birth rates and high miscarriage rates were universally blamed on STIs.
They also believed that all black people, regardless of their education, background, economic or personal situations, could not be convinced to get treatment for syphilis. Thus, the USPHS could justify the Tuskegee study, calling it a “study in nature” rather than an experiment, meant to simply observe the natural progression of syphilis within a community that wouldn’t seek treatment.
The USPHS set their study in Macon County due to estimates that 35% of its population was infected with syphilis. In 1932, the initial patients between the ages of 25 and 60 were recruited under the guise of receiving free medical care for “bad blood,” a colloquial term encompassing anemia, syphilis, fatigue and other conditions. Told that the treatment would last only six months, they received physical examinations, x-rays, spinal taps, and when they died, autopsies.
Researchers faced a lack of participants due to fears that the physical examinations were actually for the purpose of recruiting men to the military. To assuage these fears, doctors began examining women and children as well. Men diagnosed with syphilis who were of the appropriate age were recruited for the study, while others received proper treatments for their syphilis (at the time these were commonly mercury- or arsenic-containing medicines).
In 1933, researchers decided to continue the study long term. They recruited 200+ control patients who did not have syphilis (simply switching them to the syphilis-positive group if at any point they developed it). They also began giving all patients ineffective medicines (ointments, or capsules with doses of neoarsphenamine or mercury too small to work) to reinforce their belief that they were being treated.
As time progressed, however, patients began to stop attending their appointments. To further incentivize them to remain a part of the study, the USPHS hired a nurse named Eunice Rivers to drive them to and from their appointments, provide them with hot meals and deliver their medicines, services that were especially valuable during the Great Depression. In an effort to ensure the autopsies of their test subjects, the researchers also began covering patients’ funeral expenses.
Multiple times throughout the experiment researchers actively worked to ensure that their subjects did not receive treatment for syphilis. In 1934 they provided doctors in Macon County with lists of their subjects and asked them not to treat them. In 1940 they did the same with the Alabama Health Department. In 1941 many of the men were drafted and had their syphilis uncovered by the entrance medical exam, so the researchers had the men removed from the army, rather than let their syphilis be treated.
It was in these moments that the Tuskegee study’s true nature became clear. Rather than simply observing and documenting the natural progression of syphilis in the community as had been planned, the researchers intervened: first by telling the participants that they were being treated (a lie), and then again by preventing their participants from seeking treatment that could save their lives. Thus, the original basis for the study, that the people of Macon County would likely not seek treatment and thus could be observed as their syphilis progressed, became a self-fulfilling prophecy.
The Henderson Act was passed in 1943, requiring tests and treatments for venereal diseases to be publicly funded, and by 1947, penicillin had become the standard treatment for syphilis, prompting the USPHS to open several Rapid Treatment Centers specifically to treat syphilis with penicillin. All the while they were actively preventing 399 men from receiving the same treatments.
By 1952, however, about 30% of the participants had received penicillin anyway, despite the researchers’ best efforts. Regardless, the USPHS argued that their participants wouldn’t seek penicillin or stick to the prescribed treatment plans. They claimed that their participants, all black men, were too “stoic” to visit a doctor. In truth, these men thought they were already being treated, so why would they seek out further treatment?
The researchers’ tune changed again as time went on. In 1965, they argued that it was too late to give the subjects penicillin, as their syphilis had progressed too far for the drug to help. While a convenient justification for their continuation of the study, penicillin is (and was) recommended for all stages of syphilis and could have stopped the disease’s progression in the patients.
In 1947 the Nuremberg Code was written, and in 1964 the World Medical Association published its Declaration of Helsinki. Both aimed to protect human subjects from unethical experimentation, but despite this, the Centers for Disease Control (which had taken over the study from the USPHS) actively decided to continue it as late as 1969.
It wasn’t until a whistleblower, Peter Buxtun, leaked information about the study to the New York Times and the paper published it on the front page on November 16th, 1972, that the Tuskegee study finally ended. By this time, only 74 of the test subjects were still alive; 128 patients had died of syphilis or its complications, 40 of their wives had been infected, and 19 of their children had acquired congenital syphilis.
There was mass public outrage, and the National Association for the Advancement of Colored People launched a class action lawsuit against the USPHS. The government settled the suit two years later for 10 million dollars and agreed to pay for the medical treatment of all surviving participants and infected family members, the last of whom died in 2009.
Largely in response to the Tuskegee study, Congress passed the National Research Act in 1974, and the Office for Human Research Protections was established within the USPHS. Obtaining informed consent from all study participants became required for all research on humans, with this process overseen by Institutional Review Boards (IRBs) within academia and hospitals.
The Tuskegee study has had lasting effects on America. It’s estimated that the life expectancy of black men fell by up to 1.4 years when the study’s details came to light. Many also blame the study for impacting the willingness of black individuals to willingly participate in medical research today.
We know all about evil Nazis who experimented on prisoners. We condemn the scientists in Marvel movies who carry out tests on prisoners of war. But we’d do well to remember that America has also used its own people as lab rats. Yet to this day, no one has been prosecuted for their role in dooming 399 men to syphilis.
Malaria is an infectious disease caused by a single-celled parasite that multiplies in human red blood cells as well as in the intestines of the Anopheles mosquito, the insect that transmits the disease. Researchers believe that malaria coevolved with humans in Africa. For its spread across the world, we can blame colonialism.
It is thought that malaria began to travel out of Africa about 3,000 years ago, after which its spread was hastened by wars and the import of human labour. Sardinia, an island off the west coast of Italy, was conquered by Carthage in 502 BCE. Seeking to use the land for agriculture, the Carthaginians clear-cut the trees and vegetation. These ecological changes allowed flooding to occur, creating standing water that attracted mosquitoes. To work the new farms, Carthage imported labourers from Northern Africa, who brought malaria with them.
About 200 years later, the Roman Empire took over Sardinia, allowing malaria to make the leap to Europe. By the 1400s, malaria was well established in France and England. As populations grew, agricultural demands led to lowlands and swamps being drained, often poorly, and what happened in Sardinia was repeated.
Once colonies in North America and the Caribbean were established, many of Europe’s poor emigrated there, bringing malaria with them. The agricultural practices used in the U.S. to grow cotton and rice, combined with the overcrowding and horrid conditions that slaves faced, resulted in epidemics of malaria that ravaged the Southern U.S.
Colonialism’s effects on malaria were not restricted to its spread, however. Even in areas where malaria was already present, colonial influence often worsened conditions and caused epidemics.
In the southern African country of Swaziland, malaria was common but nonfatal before colonial intervention. This was due to the immunity that can be acquired with repeated exposure. When colonists arrived, however, they removed Swazi inhabitants from their homelands, forcing them to move into lowlands with larger mosquito populations.
Taxes imposed on Swazis, drought conditions, and the exportation of Swazi crops led to famines that left the Swazi people vulnerable to malarial infection. Droughts would temporarily relieve infections, but at the cost of lost acquired immunity. Famines, meanwhile, forced many Swazis to travel to find work and food; these travelling labourers would likewise lose their acquired immunity, leaving them vulnerable to infection upon their return.