Blood Tests for Menopause (The Midlife)

4 minute read

One of the most common questions that we hear is, “How will I know if I am in menopause?” As you likely already know, that is not a simple yes-or-no question.

Menopause is defined clinically as 12 months of amenorrhea, or absence of menstruation. That seemingly straightforward definition, however, masks a complex condition affecting millions of people. Perimenopause (the transition period from fertility to menopause, which begins at an average age of 47) can only be identified by considering a set of wide-ranging, somewhat vague symptoms, and menopause itself can only be diagnosed in retrospect.

Given the ambiguity and interpretation required in menopause diagnosis, a simple test that could definitively state whether someone has reached menopause or not would be extremely helpful for clinicians and patients alike. Medical practitioners can use some hormone tests to gather information about your reproductive status, but none provide the definitive answer we’d like them to.

This article was written for The Midlife. View the entire original here:


Can Periods Really Sync Up? (McGill OSS)

1 minute read

The idea that periods can synchronize was first investigated in a 1971 paper by Martha McClintock, who examined the menstrual cycles of women living together in college dorms. McClintock found that after 7 months of living together, the women's period onsets had gone from an average of 6.5 days apart to 4.6 days apart, leading to the idea that proximity caused the periods of these women to synchronize via some chemical signal.

However, studies since then have been largely unable to replicate these findings. McClintock’s results are now largely believed to have occurred by chance or poor experimental design, with many researchers calling menstrual synchrony a methodological artifact.

While it may appear that periods are synchronizing, it is important to remember that not everyone has a 28-day cycle; cycles can range from 21 to 35 days. With that much variability, cycles naturally drift in and out of alignment, so periods will sometimes overlap and sometimes not, creating the illusion of synchrony.

This article was originally posted here:

There’s a Condition That Can Cause Human Blood to Turn Green

1 minute read
Originally posted here

If you have clear blood you may be a brachiopod, if you have blue blood you may be an octopus (or just a rich human), but if you have green blood you may have sulfhemoglobinemia.

This interesting phenomenon occurs when a hemoglobin molecule (the molecule that allows our red blood cells to transport oxygen around our bodies) incorporates a sulphur atom into its structure and becomes sulfhemoglobin. Hemoglobin contains an atom of iron that binds to oxygen. In sulfhemoglobin, the sulphur atom prevents the iron from binding to oxygen, and since it's the oxygen-iron bonds that make our blood appear red, blood containing sulfhemoglobin appears dark blue, green or black.

Patients with sulfhemoglobinemia exhibit cyanosis, a blueish tinge to the skin. This is caused by the tissues on the periphery of the body, like the fingertips, not receiving enough oxygen (since sulfhemoglobin can't transport oxygen like hemoglobin can).

Sulfhemoglobinemia is caused by excessive exposure to sulphur-containing compounds, like medications that contain sulfonamides (such as sumatriptan or furosemide), nitrate fertilizer, or the overconsumption of nitrate-rich vegetables like spinach (usually only in infants). Rest assured that it takes huge amounts of these compounds to cause sulfhemoglobinemia, so you aren't risking anything by taking your prescribed medications.

The treatment for sulfhemoglobinemia is a simple one: just wait it out. Red blood cells have a natural lifespan of about 100 days, after which they’re broken down and their components recycled. So, after about 3 months, any red blood cells that contain sulfhemoglobin will have been recycled into proper red blood cells, and any non-red tint to the blood will have disappeared.

Killer Tampons from Outer Space or Why We Don’t Hear About Toxic Shock Syndrome Anymore

5 minute read
Originally published here:

In the 1980s and early 90s, toxic shock syndrome was on everybody's mind. Its prevalence dominated headlines, inspiring fear in every tampon-using woman across North America. Young adults going through puberty were taught to watch out for toxic shock syndrome like it was hiding beneath every tampon wrapper.

My mom, going through puberty in the mid-80s, was inundated with warnings not to leave tampons in too long and to always pay attention to new or unexplained rashes. But by the time I hit puberty in the early 2010s, the flood of warnings had slowed to a trickle. I was made aware that toxic shock syndrome was a risk, but also that it was rare, unlikely and treatable. Almost a decade later, I don't think I've heard the words toxic shock syndrome in years.

So what happened? How did people stop dying of toxic shock syndrome, or if they didn’t, why did we stop hearing about it?

What is toxic shock syndrome?

Toxic shock syndrome, or TSS, is an infection caused by Staphylococcus aureus, the same bacterium responsible for “staph infections” on the skin. S. aureus is normally present in humans' respiratory tracts and on their skin, but it's what's called an opportunistic pathogen. Given an opening (a compromised immune system or an injury on the skin), S. aureus will infect its host, causing all sorts of nasty effects from pimples to pneumonia.

TSS is a condition resulting from an S. aureus infection. It can occur because S. aureus produces what are called superantigens. Antigens are substances that T-cells (a type of white blood cell and a main player in our immune system) bind to. Normally, some T-cells bind to antigens and then display them on their surface, to show other T-cells that the infection is being dealt with. Superantigens, however, skip this displaying step, causing more T-cells than usual (or necessary) to be activated.

These activated T-cells then go on to release cytokines, little proteins that cause inflammation. Normally, inflammation is actually a good sign. It’s the result of the body increasing blood flow to an injured area in order to heal it. But too many T-cells release too many cytokines which cause too much inflammation in a process called a cytokine storm. As the name suggests, it’s not good. Cytokine storms are associated with fevers, fatigue, nausea, rashes, diarrhea and dizziness, which are also the symptoms of TSS.

TSS is a tampon disease, right?

In 1983 over 2,200 cases of TSS were examined, and it was determined that 90% of the patients were menstruating when they fell ill. Of these menstruating patients, 99% of them were using tampons.

But TSS is not exclusive to tampons. It was first identified in five non-menstruating boys and girls in 1978. From 2001 to 2011 there were 11 cases of TSS associated with bandages used to treat burns in children, and in 2003, a man died as a result of TSS after having tattoo work done.

It’s estimated that 25-35% of TSS cases are unrelated to menstruation. These cases of nonmenstrual TSS can be caused by S. aureus or by Streptococcus pyogenes, and have a mortality rate 6 to 12 times higher than menstrual TSS. While the incidence of menstrual TSS has fallen sharply since its heyday in the 80s, the incidence of nonmenstrual TSS has remained essentially constant.

If nonmenstrual TSS is more prevalent and more dangerous, why do we only associate TSS with tampons?

Well, for one, because nonmenstrual TSS is still fairly rare. With an incidence rate of 2-4 cases per 100,000 people, nonmenstrual TSS is less common than dysentery (5.39 cases per 100,000) or Lyme disease (8.3 cases per 100,000).

Mostly, though, we don’t hear about nonmenstrual TSS because of the epidemic of menstrual TSS that took place in the early 80s.

Modern tampons were first patented in 1931, but not produced until Gertrude Tendrich bought the patent in 1933. They didn’t rise to mainstream popularity until WWII when women entering the workforce began to use them en masse.

Those tampons, marketed largely by the same brands as today (Tampax and o.b.), were made of cotton and rayon and were fairly similar to the tampons of today. One brand, however, decided to explore other materials to make their tampons more absorbent.

Rely tampons utilized compressed polyester beads and carboxymethylcellulose instead of cotton. These tampons were super-absorbent, holding nearly 20 times their own weight in blood, and opened inside the vagina to form a sort of cup to help prevent leakage. While these sound like fantastic features for a tampon, they turned out to also be fantastic features for a bacterial infection. 

Menstrual blood is less acidic than the vagina normally is, so during menstruation the pH of the vagina rises, which can hinder its ability to kill bacteria. But that shouldn't matter, so long as there are no cuts inside the vagina for bacteria to enter, right?

Well, the super-absorbent nature of Rely tampons meant that the vagina was left much dryer than usual. This caused tiny ulcerations to form when tampons were inserted or removed, giving bacteria the opening they needed. Couple this with the fact that people could leave Rely tampons in for longer (thereby maximizing the bacteria’s time to grow and infect) and you have the epidemic of TSS that occurred in 1980.

Rely tampons were recalled on September 22nd 1980, but cases of TSS kept occurring. It wasn’t until 1984 that researchers realized that TSS was associated with the use of any high absorbency tampon, cotton or polyester.

I don’t want TSS! What should I do?

First, don’t panic. TSS is really rare. While several high profile cases of TSS have occurred recently, the rates of TSS are lower than ever.

Tampon companies and government agencies have worked together to identify a strategy of use that minimizes your risk. Their recommendations are as follows:

  1. Use the lowest absorbency tampon that you can.
  2. Change your tampon every 4-8 hours.
  3. Wash your hands before inserting a tampon.
  4. Do not use tampons when you’re not on your period.

Can’t I just use a menstrual cup to avoid any risk of TSS?

Menstrual cups lessen the risk of TSS, but they don't eliminate it. While it's true they don't absorb any blood, and therefore don't cause the vaginal dryness that leads to ulcerations, they can be really difficult to put in, especially for new users, leading to scratches or cuts on the vaginal wall.

Case in point, a 37-year-old woman was diagnosed with TSS in 2015 after using a menstrual cup for the first time.

If you’re not going to change your tampon every 8 hours, you should consider a menstrual cup. They are approved by Health Canada to stay in the vagina for up to 12 hours at a time, making them great options for those who work 8-hour days or are just forgetful.

But do still remember to wash your hands before inserting the cup, and make sure to sanitize it between cycles.

If you’d like to learn more about TSS, click here for a short but very informative video.

Fingerprick Blood Sugar Tests: How They Work and Why We Still Use Them

Originally published here:

We are living in the future. We have robotic personal assistants, watches that replace credit cards, phones that recognize our faces, and self-driving cars just around the corner. But for all our advancement, patients with diabetes still need to stab themselves multiple times a day to check their blood glucose levels. There has to be a better way, right?

The history of glucose meters starts in 1956, with Leland Clark presenting a paper on an oxygen electrode, later to be renamed after him. Six years later the Clark electrode had been developed, with the help of Champ Lyons, into the first glucose enzyme electrode. These early glucose meters were large, bulky and only used in hospitals. It wasn't until 1981 that at-home monitors were popularized, sold on the market under the same names you'd recognize today: Glucometer and Accu-Chek.

These glucose meters worked by a method still used today that's quite similar to how breathalyzers detect blood alcohol content. Electrons are transferred from the glucose in blood through intermediary molecules until they reach the electrodes in the glucometer. These moving electrons create an electrical current proportional to the amount of glucose in the blood, and the resulting number appears on the monitor.
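In principle, the meter's job after the electrochemistry is simple arithmetic: map the measured current to a concentration using a stored calibration line. Here is a minimal sketch of that idea in Python, with made-up calibration constants (real sensitivities vary by device and test strip):

```python
# Sketch of the amperometric principle behind glucometers: the sensor
# current is (approximately) linear in glucose concentration, so the
# meter maps current -> concentration with a stored calibration.
# The slope and intercept below are illustrative, not real device values.

SLOPE_NA_PER_MMOL = 150.0   # hypothetical sensitivity: nA per (mmol/L)
INTERCEPT_NA = 20.0         # hypothetical background current in nA

def glucose_mmol_per_l(current_na: float) -> float:
    """Convert a measured sensor current (nA) to glucose (mmol/L)."""
    return (current_na - INTERCEPT_NA) / SLOPE_NA_PER_MMOL

# A current of 770 nA would read as (770 - 20) / 150 = 5.0 mmol/L,
# a typical fasting blood sugar.
print(round(glucose_mmol_per_l(770.0), 2))
```

The linearity only holds over the sensor's working range, which is why meters flag readings outside it rather than extrapolating.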

But what if we could measure our blood sugar without having to prick our fingers?

A lot of research and development has gone into that very idea.

Instead of measuring the glucose in blood directly, attempts have been made to measure the glucose in other fluids. Urine tests have been available for even longer than blood tests, but visiting a bathroom every time you need to test your sugar is far from ideal, as those with type 1 diabetes may need to test their sugar up to 12 times a day!

Newer technologies are looking at using tears. Since tears are naturally external to the body, measuring them needs no needles, something that would decrease the cost of testing and likely increase the reliable tracking of patients' blood sugar.

Google notably prototyped a contact lens in 2014 that would contain the chips and sensors to measure sugar levels and either change colour accordingly, or transmit that data to an external device. Because of the low volume of tears, the lenses need to be exceptionally accurate. Reliable relationships between the glucose in tears and in blood need to be established and contact lens solution that doesn’t inhibit the lenses needs to be developed.

A few other technologies have been investigated for non-invasive blood sugar testing. A device using near-infrared spectroscopy that would shine light through the earlobe to sense glucose was prototyped, but it required a lot of measurements (like earlobe width and blood oxygen levels) to calibrate (though a similar product has been sold outside of the US and Canada). Scientists have attempted to create devices that would pull glucose out through the skin using chemicals or electrical currents, as well as devices that would measure blood sugar via polarized light, but as of yet, none of these devices are commercially available in Canada.

One product that may soon be seen on market is Glucair, which functions similarly to a breathalyzer. It analyzes the acetone present in your breath to take a measurement of your blood glucose level. This system could be made quite small, like modern breathalyzers, and would require no finger pricking or needles of any kind. 

For now, the best alternative to finger prick tests are continuous glucose sensors. They consist of a tiny sensor filament embedded in the skin, which samples glucose very often (from the fluid between cells rather than from blood), plus the circuitry to measure the glucose content. The results are seen by scanning the sensor with a receiver or a smartphone, or via a Bluetooth connection. They give live results and can last up to 7 days, but tend to be very expensive, given the disposable nature of the inserts, and aren't always covered by insurance like glucometers are.

In Canada there are a few neat options available. The Freestyle Libre is what's called a flash glucose monitoring system. The small sensor is inserted into the skin and worn for 14 days, and can be scanned whenever needed by the receiving device to get blood sugar levels. The Dexcom G5 is also a small sensor that can be worn for 10-15 days, but it transmits wirelessly to your smart devices. This makes it especially useful for parents or caretakers wanting to monitor someone else's glucose levels.


Continual monitoring allows greater accuracy in insulin doses and lets patients provide more information about their blood sugars to their doctors. Ideally, continuous sensors will also be able to communicate directly with insulin pumps, so that type 1 diabetics can receive their correct dose without needing to finger prick first.

Considering how far we've come since the advent of blood glucose monitoring in the 1960s, I have faith that continuous and non-invasive technologies are coming. It's really just a question of how many needles diabetics will have to endure before they do.

Under the Microscope: Blood

Originally published here:

Human blood contains many different components, from white blood cells to platelets, but the most abundant component by far is red blood cells.

More properly known as erythrocytes, red blood cells make up 70% of an adult human's cells by count. They serve an integral purpose: transporting oxygen from the lungs to all other parts of the body and returning carbon dioxide to the lungs to be exhaled. To accomplish this, they have a few unique features.

In mammals, developing red blood cells contain a nucleus and other organelles, but before they mature fully, they extrude, or push out, these organelles. Having no nucleus, red blood cells are unable to create proteins or divide, but they can store hemoglobin, the iron-containing molecule that binds oxygen and carbon dioxide. Each red blood cell can hold approximately 270 million hemoglobin molecules, each of which can bind 4 oxygen molecules. In total, your red blood cells hold about 2.5 grams of iron.

Red blood cells are shaped kind of like donuts that didn't quite get their hole formed. They're biconcave discs, a shape that allows them to squeeze through small capillaries. This also gives them a high surface area to volume ratio, allowing gases to diffuse effectively in and out of them.

An adult human body produces around 2.4 million red blood cells every second, mostly within the bone marrow. A red blood cell will stay in circulation for 100-120 days, making a full circuit of the body every 60 seconds.
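Those figures can be cross-checked with some back-of-the-envelope arithmetic. The sketch below derives a total red-cell count two ways, once from the iron figures and once from the production rate and lifespan; the rough agreement (23-25 trillion cells) is an implication of the numbers above, not an independent measurement:

```python
# Back-of-the-envelope check of the red blood cell figures above.

AVOGADRO = 6.022e23          # molecules per mole
IRON_MOLAR_MASS = 55.85      # g/mol
HB_PER_CELL = 270e6          # hemoglobin molecules per red blood cell
IRON_PER_HB = 4              # one iron atom per oxygen-binding site

# Iron mass in a single red blood cell, in grams
iron_per_cell = HB_PER_CELL * IRON_PER_HB / AVOGADRO * IRON_MOLAR_MASS

# If all red cells together hold ~2.5 g of iron, how many cells is that?
implied_cell_count = 2.5 / iron_per_cell

# Steady state: production rate x lifespan should give a similar total.
production_per_s = 2.4e6
lifespan_s = 110 * 24 * 3600          # midpoint of 100-120 days

steady_state_count = production_per_s * lifespan_s

print(f"{implied_cell_count:.1e}")    # roughly 2.5e13 cells
print(f"{steady_state_count:.1e}")    # roughly 2.3e13 cells
```

Both routes land around 25 trillion cells, which is why the figures in the text hang together.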

After this period is up, the membrane of the red blood cell undergoes a change that allows it to be recognized by a type of white blood cell called a macrophage, which breaks it down. Many of the components, including iron, are recycled and used to make more red blood cells. The main non-recyclable component is broken down into bilirubin, which is excreted in urine and bile. If too much bilirubin is produced, though, its yellow colour can cause discoloration of the skin, as seen in jaundice.

Carbon monoxide has a 250 times greater binding affinity for hemoglobin than oxygen does, meaning that if any carbon monoxide is present, it will bind to hemoglobin instead of oxygen. This is why carbon monoxide is such a danger: it reduces our body's ability to get oxygen to our cells. It's also why many smokers are short of breath, as the carbon monoxide they inhale while smoking out-competes oxygen for hemoglobin's binding sites. In heavy smokers, up to 20% of oxygen binding sites may be blocked by carbon monoxide.
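That competition can be put into numbers with Haldane's classic relation: the ratio of CO-bound to O2-bound hemoglobin equals the affinity factor (about 250, per the text) times the ratio of the two gases' partial pressures. A small illustration in Python, using illustrative partial pressures rather than clinical data:

```python
# Haldane's relation: the ratio of CO-bound to O2-bound hemoglobin is
# proportional to the ratio of partial pressures, scaled by the affinity
# factor M (about 250, as stated in the text).

M = 250.0  # CO's binding-affinity advantage over O2

def fraction_blocked_by_co(p_co: float, p_o2: float) -> float:
    """Fraction of hemoglobin binding sites occupied by CO rather than O2."""
    ratio = M * p_co / p_o2       # [HbCO] / [HbO2]
    return ratio / (1.0 + ratio)  # convert odds to a fraction

# Alveolar O2 is around 100 mmHg. A CO partial pressure 250x smaller
# (0.4 mmHg) would already tie up half of hemoglobin's binding sites:
print(round(fraction_blocked_by_co(0.4, 100.0), 2))
```

This is why even tiny concentrations of CO in the air are dangerous: the 250-fold affinity advantage lets a trace gas claim a huge share of the binding sites.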

Because it is colourless and odourless, carbon monoxide's effects often aren't noticed until they become really severe. To avoid a scary situation, every home should be equipped with a carbon monoxide detector.

Did You Know that Colonialism Is Responsible for the Spread of Malaria?

Originally published here:

Malaria is an infectious disease caused by a single-celled parasite that multiplies in human red blood cells as well as in the intestines of the Anopheles mosquito, the insect that transmits the disease. Researchers believe that malaria coevolved with humans in Africa. For its spread across the world, we can blame colonialism.

It is thought that malaria began to travel out of Africa about 3,000 years ago, after which its spread was hastened by wars and the import of human labour. Sardinia is an island west of the Italian mainland that was conquered by Carthage in 502 BCE. Seeking to use the land for agriculture, the Carthaginians clear-cut the trees and vegetation. These ecological changes allowed flooding to occur, creating standing water that attracted mosquitoes. To work the new farms, Carthage imported labourers from Northern Africa who brought malaria with them.

About 200 years later, the Roman Empire took over Sardinia, allowing malaria to make the leap to Europe. By the 1400s, malaria was well established in France and England. As populations grew, agricultural demands led to lowlands and swamps being drained, often poorly, and what happened in Sardinia was repeated.

Once colonies in North America and the Caribbean were established, many of Europe's poor emigrated there, bringing malaria with them. The agricultural practices used in the U.S. to grow cotton and rice, combined with the overcrowding and horrid conditions that slaves faced, resulted in epidemics of malaria that ravaged the Southern U.S.

Colonialism’s effects on malaria were not restricted to its spread, however. Even in areas where malaria was already present, colonial influence often worsened conditions and caused epidemics.

In the southern African country of Swaziland, malaria was common but nonfatal before colonial intervention. This was due to the immunity that can be acquired with repeated exposure. When colonists arrived, however, they removed Swazi inhabitants from their homelands, forcing them to move into lowlands with larger mosquito populations.

Taxes imposed on Swazis, drought conditions, and the exportation of Swazi crops led to famines that left the Swazi people vulnerable to malarial infection. Droughts would temporarily relieve infections, but at the cost of acquired immunity. Famines also forced many Swazis to travel to find work and food; these travelling labourers would likewise lose their acquired immunity, leaving them vulnerable to infection upon their return.