Fingerprick Blood Sugar Tests: How They Work and Why We Still Use Them

Originally published here:

We are living in the future. We have robotic personal assistants, watches that replace credit cards, phones that recognize our faces, and self-driving cars just around the corner. But for all our advancement, patients with diabetes still need to stab themselves multiple times a day to check their blood glucose levels. There has to be a better way, right?

The history of glucose meters starts in 1956, with Leland Clark presenting a paper on an oxygen electrode that would later be renamed after him. Six years later the Clark electrode had been developed, with the help of Ann Lyons, into the first glucose enzyme electrode. These early glucose meters were large, bulky and only used in hospitals. It wasn’t until 1981 that at-home monitors were popularized, sold under brand names you’d recognize today: Glucometer and Accu-Chek.

These glucose meters worked by a method still used today, one quite similar to how breathalyzers detect blood alcohol content. An enzyme on the test strip pulls electrons from the glucose in blood and passes them, through mediator molecules, to the electrodes in the glucometer. These moving electrons create an electrical current proportional to the amount of glucose in the blood, and the corresponding number appears on the monitor.
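That last conversion, from current to concentration, is simple enough to sketch. A minimal example, assuming a linear response; the slope and intercept here are invented for illustration, since real meters are calibrated to each batch of test strips.

```python
# Sketch: convert a glucometer's measured current into a glucose reading.
# Calibration constants below are invented for illustration only.

def glucose_from_current(current_uA, slope_uA_per_mgdl=0.02, intercept_uA=0.1):
    """Invert the roughly linear current-vs-glucose relationship."""
    return (current_uA - intercept_uA) / slope_uA_per_mgdl

# With this made-up calibration, a strip producing 2.1 microamps
# corresponds to a reading of about 100 mg/dL.
print(round(glucose_from_current(2.1)))  # 100
```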

But what if we could measure our blood sugar without having to prick our fingers?

A lot of research and development has gone into that very idea.

Instead of measuring the glucose in blood directly, attempts have been made to measure the glucose in other fluids. Urine tests have been available for even longer than blood tests, but visiting a bathroom every time you need to test your sugar is far from ideal: those with type 1 diabetes may need to test up to 12 times a day!

New technologies are looking at using tears. Since tears are naturally external to the body, measuring them requires no needles, which would decrease the cost of testing and likely make patients’ tracking of their blood sugar more reliable.

Google notably prototyped a contact lens in 2014 that would contain the chips and sensors to measure sugar levels and either change colour accordingly, or transmit that data to an external device. Because of the low volume of tears, the lenses need to be exceptionally accurate. Reliable relationships between the glucose in tears and in blood need to be established, and a contact lens solution that doesn’t interfere with the sensors needs to be developed.

A few other technologies have been investigated for non-invasive blood sugar testing. A device using near-infrared spectroscopy that would shine light through the earlobe to sense glucose was prototyped, but required a lot of measurements (like earlobe width and blood oxygen levels) to calibrate (though a similar product has been sold outside of the US and Canada). Scientists have attempted to create devices that would pull glucose out from the blood through the skin, using chemicals or electrical currents, as well as devices that would measure blood sugar via polarized light measurements, but at least as of yet, none of these devices have been commercially available in Canada. 

One product that may soon be seen on the market is Glucair, which functions similarly to a breathalyzer. It analyzes the acetone present in your breath to take a measurement of your blood glucose level. This system could be made quite small, like modern breathalyzers, and would require no finger pricking or needles of any kind.

For now the best alternative to finger-prick tests is the continuous glucose monitor. It consists of a tiny sensor filament embedded in the skin, which samples glucose levels very frequently (technically from the interstitial fluid rather than blood), plus the circuitry to measure them. The results are seen by scanning the sensor with a receiver or a smartphone, or via a Bluetooth connection. These monitors give live results and can last up to 7 days, but tend to be very expensive, given the disposable nature of the inserts, and aren’t always covered by insurance like glucometers are.

In Canada there are a few neat options available. The Freestyle Libre is what’s called a flash glucose monitoring system. The small sensor is inserted into the skin and worn for 14 days, and can be scanned whenever needed by the receiving device to get blood sugar levels. The Dexcom G5 is also a small sensor, worn for about a week, but it transmits wirelessly to your smart devices. This makes it especially useful for parents or caretakers wanting to monitor someone else’s glucose levels.


Continual monitoring allows greater accuracy in insulin doses and lets patients share more information about their blood sugars with their doctors. Ideally, continual sensors will also be able to communicate directly with insulin pumps, so that type 1 diabetics can receive their correct dose without needing to finger prick first.

Considering how far we’ve come since the advent of blood glucose monitoring in the 1960s, I have faith that continuous and non-invasive technologies are coming. It’s really just a question of how many needles diabetics will have to endure before they do.

The Luminescent Chemistry of Lava Lamps

Originally posted here:

If you think back to the ’60s and ’70s, your memories are probably illuminated by a lamp filled with swirling globs of colourful goop that really didn’t shed much light at all.

Lava lamps were invented in 1963 by a British accountant, Edward Craven-Walker, and marketed under the name Astro Lamps. The name might have changed since then, but the chemistry largely hasn’t.

The whirling globs we remember are made mainly of paraffin wax, with compounds like carbon tetrachloride added to increase its density. The liquid the wax floats in can be water or mineral oil, with dyes and sparkles added for whimsy.

So what causes the wax to float and fall? When the lamp is turned on, the incandescent light bulb in the base begins to heat the interior of the lamp. The wax expands when heated, and since density is equal to mass divided by volume, when the volume increases, the density of the wax decreases, and it floats. 

When the ball of goop reaches the top of the lamp it cools down, decreases in volume, and thus increases in density, and falls back to the bottom to begin its journey again. A veritable Sisyphus of home decoration.
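The density argument can be put into numbers. In this toy calculation every figure is invented for illustration; the point is just that a small change in volume is enough to flip which fluid floats.

```python
# Density is mass over volume: heating the wax keeps its mass fixed but
# grows its volume, dropping its density below the surrounding liquid's.
# All figures below are invented for illustration.

wax_mass_g = 100.0
cold_volume_ml = 98.0    # assumed volume at the cool top of the lamp
warm_volume_ml = 102.0   # assumed ~4% expansion near the bulb
liquid_density = 1.0     # g/mL, roughly water

cold_density = wax_mass_g / cold_volume_ml   # ~1.02 g/mL: denser, sinks
warm_density = wax_mass_g / warm_volume_ml   # ~0.98 g/mL: lighter, floats

print(cold_density > liquid_density)  # True (cooled wax falls)
print(warm_density < liquid_density)  # True (heated wax rises)
```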

The exact compositions of the wax and liquid are trade secrets, but they’re constantly being improved upon. You can make a basic lava lamp at home with just oil, water, and an effervescent tablet like Alka-Seltzer.

The newest innovation in the lava lamp legacy was the addition of ferrofluid. These liquids have microscopic magnetic particles suspended in them that allow you to interact with your lava globs with a magnet!

Besides mood lighting, lava lamps have also been used as random number generators: cameras photograph a wall of lamps, and programs turn the unpredictable motion of the blobs into truly random numbers for use in cryptography. Whatever you use them for though, don’t drink them. Multiple people have been hospitalized after consuming the insides of these psychedelic accessories.
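The lamp-to-randomness step can be sketched: a photo of swirling wax is unpredictable pixel by pixel, so hashing a camera frame condenses that chaos into seed material. This is a generic sketch, not any real system’s pipeline, and `os.urandom` stands in for an actual camera capture.

```python
import hashlib
import os

def seed_from_frame(frame_bytes):
    """Condense an unpredictable camera frame into a 32-byte seed."""
    return hashlib.sha256(frame_bytes).digest()

# Stand-in for photographing a wall of lava lamps:
fake_frame = os.urandom(640 * 480)
seed = seed_from_frame(fake_frame)
print(len(seed))  # 32 bytes, ready to feed a cryptographic RNG
```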

Can Nike’s New Shoes Really Make You Run Faster?

Photo by Ada McVean
Originally posted here:

A New York Times study of 500,000 race times, set wearing Vaporflys and other shoes, confirmed Nike’s claims. It found that Vaporflys allowed a runner to run 1% faster than the next-fastest shoe, and 3-4% faster than a similarly skilled runner wearing different shoes.

These results, taken from race entries on the app Strava, show that runners were more likely to set a personal record when wearing Vaporflys (though not quite as likely as those wearing Nike Streak shoes). Runners were also more likely to run faster when switching to Vaporflys and complained of less leg fatigue.
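To put those percentages in perspective, here is the arithmetic for a hypothetical four-hour marathoner (an illustration, not a figure from the Times analysis):

```python
# Minutes saved by running a given percentage faster over a fixed course.
def minutes_saved(finish_time_min, pct_faster):
    return finish_time_min * pct_faster / 100

print(minutes_saved(240, 1))  # 2.4 minutes off a 4-hour marathon
print(minutes_saved(240, 4))  # 9.6 minutes
```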

So, what’s so great about these shoes? Carbon fibres. Each sneaker features a carbon fibre plate in the midsole which absorbs and releases energy, throwing the runner forward with every step.

Since the shoes don’t contain any springs or elastics, they’re not likely to be banned from future sports competitions. But given their $250 price tag, don’t expect to see me wearing them anytime soon.

NASA and SpaceX Owe Their Accomplishments to a Dog Named Laika

Originally published here:

In the late 1940s, both Soviets and Americans began investigating the expanse of space by sending animals up, up and away. It began with fruit flies in 1947, grew to include monkeys in 1949 and mice in 1950, but no animal actually entered orbit until November 3rd, 1957, when Laika, a Soviet-trained street dog, made history.

Sputnik 1 was the first satellite to orbit the Earth, but Sputnik 2 (or more appropriately Muttnik) was the first satellite to reach orbit with a creature aboard.

Laika was found on the streets of Moscow, which meant she was already adapted to survive extreme cold and hunger. She was chosen because she was calm, sweet, and, as a female, could pee with her leg down (this made designing her space suit much simpler). She underwent training like any cosmonaut: centrifuges, confined spaces, loud noise exposure, acclimatization to nutrient gel food and fitting for a space suit. However, unlike modern cosmonauts, her return was never planned for.

According to the Soviets, the plan was to euthanize Laika with medicated food just before reentry. Sadly, this humane method didn’t pan out: Laika’s vital signs stopped after 5-7 hours in orbit.

The Sputnik 2 mission was planned hastily, as then-Soviet leader Khrushchev wanted its launch to coincide with the 40th anniversary of the October Revolution. The vessel was built in only about 4 weeks. After news of Laika’s launch spread, the Soviet government alternately claimed that she had died from a lack of oxygen or been euthanized early. Years later, one of the mission’s scientists admitted Laika had died of overheating due to a mechanical problem in the spacecraft, a much less desirable way to go.

Laika’s flight spawned outrage from animal rights activists the world over. But it also piqued the curiosity of an American army physician, Duane Graveline. His desire to understand how the Soviets had received the biophysical data from Laika led him to research space’s effects on the human body and help develop the technologies that allowed NASA to send astronauts (which he later became!) to space.

Laika may not have survived, but her legacy did. She’s been memorialized in two Soviet statues, and even had a band named after her.

So next time NASA launches a shuttle, remember that they owe that technology, in part, to a small Russian mutt named Laika.

Spaceships recycle everything… except astronauts’ poop

Originally posted here:

Astronauts inhale oxygen and exhale carbon dioxide, just like you and me. On Earth, where exhaled air warmed by our bodies naturally rises away from us, the possibility of inhaling too much carbon dioxide isn’t usually a worry. But for astronauts, it’s a major one. Without the ventilator fans installed in shuttles and stations, carbon dioxide would accumulate around an astronaut. This is especially a concern at night, since we tend to stay still while sleeping, which would allow CO2 to collect around the sleeper and starve them of oxygen.

So what happens to the carbon dioxide once it’s sucked away by the fans? Like almost everything on a spacecraft, it’s recycled.

Carbon dioxide removed from the air by the aptly named ‘carbon dioxide removal system’ is combined with hydrogen (a byproduct of the oxygen generator system) to produce methane (which is vented into space) and water, which re-enters the oxygen generating system. This cycle allows astronauts to keep breathing, drinking and flying for long periods of time without having to lug to space all the oxygen they will need for the trip.
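That CO2-plus-hydrogen step is known as the Sabatier reaction: CO2 + 4 H2 → CH4 + 2 H2O. A quick mass balance (standard molar masses, assuming full conversion) shows how much water each kilogram of exhaled CO2 buys back:

```python
# Sabatier reaction mass balance: CO2 + 4 H2 -> CH4 + 2 H2O
# Approximate molar masses in g/mol:
M_CO2 = 44.01
M_H2O = 18.02

# Each mole of CO2 yields two moles of water, so per unit mass:
water_per_kg_co2 = 2 * M_H2O / M_CO2
print(round(water_per_kg_co2, 2))  # ~0.82 kg of water per kg of CO2
```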

So what isn’t recycled on the International Space Station? Human feces. But Mark Watney seems to have inspired a potential use for that.

Potatoes and Space Have a Long History

Originally posted here:

Intergalactic potatoes may seem like a side dish from the Mos Eisley Cantina, but potatoes and space have a common history.
In 1978, George Lucas began work on The Empire Strikes Back, but wanting to remain independent from Hollywood, he financed it all himself. This led to some interesting low-budget work-arounds. Most notably, the asteroid field of Hoth, whose asteroids were actually partially made of shoes and potatoes. Really!

Then again, in 2015’s The Martian, potatoes make an appearance on the space-themed silver screen with Matt Damon’s portrayal of Mark Watney, an astronaut/botanist who grows potatoes while stranded on Mars. Although Watney might be a fictional character, thanks to scientists, he now has a spot in history. A newly-discovered flower, which belongs to the same botanical family as the potato, has been named after him: Solanum watneyi.

Potatoes have even reached NASA’s radar. Growing food crops in space has been one of the space agency’s interests for years. Potatoes and sweet potatoes are serious contenders for space agriculture due to their high carbohydrate content and their tuberous nature, which gives them low light requirements. As well, the eyes of potatoes produce sprouts that can be used to grow more plants, making them a simple, reliable food source. For now, astronauts have to rely on freeze-dried versions as, so far, only lettuce has actually been grown in space, but if NASA’s potato experiments here on Earth are successful, we could soon see spuds that are out of this world.