Hey! I’ve got a new solo project/album out today! Magpie Pi- Love Songs in A Minor Crash. I’m hoping to put a band together and record this album properly someday, but for now here’s the garage band version I recorded over the last five days in my bedroom.
By Bradley Stockwell
A few years ago, if you had asked me whether religious institutions have impeded the progress of science, I would have given a vehement ‘hell yes’. I would have cited the accounts of Giordano Bruno, Tycho Brahe, Kepler, Galileo, Copernicus, and the many others who risked or gave their lives in the name of science. However, over the years I’ve learned that making such a blanket statement is rather prejudiced. This is not to say there haven’t been significant efforts by religious institutions to repress science, but without those institutions, most of the principles and methodologies of modern science and medicine would never have been established.
The Roman Catholic Church was vital in the development of systematic nursing and hospitals, and even today the Church remains the single greatest private provider of medical care and research facilities in the world. The Church also founded Europe’s first universities, and medieval Catholic mathematicians and philosophers such as John Buridan, Nicole Oresme and Roger Bacon are considered fathers of modern science. Furthermore, after the Fall of Rome, monasteries and convents became strongholds of academia, preserving the works of Euclid, Ptolemy, Plato, Aristotle, Galen, Simplicius and many more. Clergymen were the leading scholars of the day, studying nature, mathematics and the motion of the stars. And while some may blame Christianity for the Fall of Rome and the decline of intellectual culture during the Middle Ages, that claim is unjustified; it’s a much more complex issue, probably better reserved for a history class. Additionally, many forget that while the western half of the Roman Empire collapsed, the far more Christianized eastern half remained relatively strong and continued into the 15th century as the Byzantine Empire.
Not to focus solely on Christianity, Islam also had a part in the preservation and flourishing of science. An Arab Muslim named Ibn al-Haytham, considered one of the first theoretical physicists, made significant contributions in the fields of optics, astronomy and mathematics, and was an early advocate of the idea that a hypothesis must be proved by experiments based on confirmable procedures or mathematical evidence—essentially the scientific method. Caliphs during the Islamic Golden Age established research institutes, sent emissaries around the world in search of books, and then funded projects to translate, study and preserve them. Much of the Ancient Greek science we have today would have been lost, and the European Renaissance hundreds of years later would not have been possible, without their efforts. At one time, Arabic was arguably the language of science; the “al’s” in algebra, algorithm, alchemy and alcohol are just some of the remnants.
The Islamic world also imported ideas from Hindus, including the Arabic numerals we still use today and the concept of zero. Also, as mentioned in a previous post, The Spirituality of Science, I see many parallels between science and Dharmic beliefs, such as reincarnation and entropy: the universe is cyclical; life and death are just different stopping points in a grand recycling process; matter, like the body, is created and recycled, while energy, like the soul, is immortal and transferred. The correlation I find most fascinating, though, is between the Hindu concept of Brahman and the laws of thermodynamics. According to belief, Brahman is the source of all things in the universe, including reality and existence; everything comes from Brahman and everything returns to Brahman; Brahman is uncreated, eternal, infinite and all-embracing. You could substitute the word energy for Brahman and get a simple understanding of the first and second laws of thermodynamics. It’s funny how the world’s oldest religion, Hinduism, seemed to grasp these concepts thousands of years before science did.
In conclusion, although it’s still hard for me to look past some of the civil atrocities wrought by religious institutions—in particular when they’ve been intimately tied to a governing body—I think when you tally up the scores, science has benefited greatly from religion and any impediments are heavily outweighed. In a day when it seems popular to present everything in a dichotomous fashion—either you’re with us or against us—I think it’s important to remember that, for the most part, we all have what’s best for humanity in mind, and it’s when we work together that the best results are produced. Until next time, stay curious my friends.
By Bradley Stockwell
My two favorite arenas of academia are science and history, and the more I study the two, the more I see how interwoven they really are. There’s no greater example of this than something called the “bomb pulse”. Whether you know it or not, lurking inside of you is a piece of Cold War history—even if you weren’t alive at the time—and it is this little memento that finally solved one of biology’s most elusive secrets: How old are you? And I don’t mean how many times the cellular clump of mass known as you has swung around the sun, but how old are the individual cells that make up that mass? Your skin cells, heart cells, neurons—your body is constantly renewing itself with new cells, and it was only as of 2002 that we began to have a definitive way of answering how old each one is. With this post, my intentions are twofold. One: I want to tell you about one of the greatest scientific discoveries of the 21st century, and two: I’m hoping that by wrapping it in this titillating story, I can also slip in a few basic principles of nuclear chemistry. With that said, let’s begin!
Between 1945 and 1963, over four hundred nuclear bombs were detonated, unleashing an untold number of extra neutrons into the atmosphere. Some of these neutrons found their way into nitrogen atoms, causing them to eject a proton. If you’re familiar with some basic chemistry, when a seven-proton nitrogen atom loses a proton, it becomes a six-proton carbon atom. However, because these carbon atoms still carry extra neutrons from when they were nitrogen, they become something called an isotope: a variant of an element that differs in the number of neutrons but has the same number of protons. In this case, these slightly heavier, radioactive atoms become an isotope of carbon called carbon-14.
When I say radioactive, all I mean is that the atom’s nucleus is unstable; it is emitting energy in the form of ejected subatomic particles or energetic light waves to stabilize itself, until it becomes a stable isotope or a completely new element altogether. This radioactive decay comes in three main forms: alpha, beta and gamma. Alpha decay—which only happens with heavy elements like uranium—is the ejection of something physicists call an alpha particle but chemists just call a helium nucleus: a bundle of two protons and two neutrons. In fact, almost all the helium here on Earth came from this type of decay. Think about that the next time you’re sucking down a helium balloon; you’re inhaling the atomic leftovers of uranium, thorium and other heavy, radioactive elements. Beta minus decay is the ejection of an electron, and beta plus decay is the ejection of the electron’s antiparticle, the positron. Gamma decay is the emission of an extremely energetic light wave called a gamma ray, and it is often emitted in conjunction with alpha and beta decay.
The time it takes for a radioactive element to reach a stable form can be anywhere from an instant to far longer than the age of the universe. Because individual atoms decay unpredictably, we measure this loss through probability, with something called a half-life: the time it takes for half a quantity of radioactive material to decay into a more stable form. This is not to say that if you have four radioactive atoms you’ll necessarily have two after x amount of time; rather, each individual radioactive atom has a fifty percent chance of decaying to a more stable form within x amount of time. For example, carbon-14, the star of our story, has a half-life of 5,730 years. This means if you had a pound of it, after 5,730 years you’d have half a pound of carbon-14 and half a pound of nitrogen-14, carbon-14’s more stable form. After another 5,730 years you’d have a quarter pound of carbon-14 and three-quarters of a pound of nitrogen-14, and so forth. This is how carbon dating works: by measuring the relative proportions of carbon-12 and carbon-14 in a sample of organic matter, archeologists are able to determine its age.
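The halving arithmetic above is easy to check with a short calculation. A minimal sketch (the 5,730-year figure is the half-life quoted in the text; the function names are my own):

```python
import math

# Fraction of a carbon-14 sample remaining after t years,
# using the half-life relation N(t) = N0 * (1/2)**(t / half_life).
HALF_LIFE_C14 = 5730  # years

def fraction_remaining(t_years, half_life=HALF_LIFE_C14):
    return 0.5 ** (t_years / half_life)

# One pound of carbon-14:
print(fraction_remaining(5730))   # 0.5  -> half a pound left
print(fraction_remaining(11460))  # 0.25 -> a quarter pound left

# Carbon dating inverts this: given the fraction remaining,
# solve for t with t = half_life * log2(1 / fraction).
def age_from_fraction(fraction, half_life=HALF_LIFE_C14):
    return half_life * math.log2(1 / fraction)

print(round(age_from_fraction(0.25)))  # 11460 years
```

The same two lines of math are all a real carbon-dating calculation does once the carbon-12/carbon-14 ratio of a sample has been measured.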
The period between 1945 and 1963 in which all this atomic testing was happening is now called the “bomb pulse” by the scientific community. It earned this name because the amount of carbon-14 in the atmosphere doubled during this period from all those free neutrons crashing into nitrogen. In 1963, when the Soviet Union, the U.K. and the U.S. agreed to the Limited Test Ban Treaty, which prohibited all above-ground detonations, the amount of excess carbon-14 began to decrease by half every eleven years, and it will be all but depleted somewhere around 2030 to 2050. This isn’t because the carbon-14 is decaying into nitrogen-14 (remember, the half-life of carbon-14 is 5,730 years), but because it is being absorbed by the life inhabiting our planet, which includes us. Although carbon-14 is an altered carbon atom—a carbon isotope—it still behaves like a carbon atom, because it is the number of protons in an atom that determines its chemical behavior, while the number of neutrons determines its mass. Like regular carbon atoms, these carbon-14 atoms have been binding to oxygen, forming CO2, which is sucked up by plants during photosynthesis and then fed to the rest of us through the food chain. Like the plants, our bodies can’t tell the difference between carbon-12 and carbon-14, so for the last seventy-plus years all this extra carbon-14 has been used by every living creature to build new cells, proteins and DNA.
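The post-1963 decline described above follows the same halving arithmetic as radioactive decay, just with an eleven-year absorption time instead of a 5,730-year half-life. A rough sketch under the text’s own assumptions (excess peaking at double the natural level in 1963, halving every eleven years; the numbers are illustrative, not measured data):

```python
# Sketch of the bomb-pulse decline of *excess* atmospheric carbon-14.
# Assumes the excess peaked at roughly the natural baseline again (i.e.
# doubled total) in 1963 and has been absorbed by the biosphere with an
# ~11-year halving time, per the text. Not real atmospheric data.
def excess_c14(year, peak_year=1963, halving_years=11):
    """Excess C-14 relative to the pre-bomb baseline (1.0 = doubled)."""
    if year < peak_year:
        raise ValueError("model only covers the post-treaty decline")
    return 1.0 * 0.5 ** ((year - peak_year) / halving_years)

for y in (1963, 1974, 2030, 2050):
    print(y, round(excess_c14(y), 4))
```

Running this shows the excess dropping to half by 1974 and to roughly a percent or less of its peak between 2030 and 2050, which is why the text calls the signal “depleted” by then.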
While our bodies can’t tell the difference between carbon-12 and carbon-14 (because they have the same number of protons), scientists can, because of the slight difference in mass (remember, carbon-14 has two extra neutrons). The difference is measurable through a technique called mass spectrometry, which sorts atoms by weight. Without getting too technical, an instrument called a mass spectrometer strips atoms of some of their electrons and launches them into a magnetic field, which bends the atoms’ course; because of inertia, heavier atoms take a wider path than lighter ones. By counting how many atoms travel along each path, scientists can determine how much of a specific atom—in this case carbon-14—is in a sample.
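The weight-sorting step can be put in rough numbers. For an ion moving at speed v in a magnetic field B, the circular path has radius r = mv/(qB), so a heavier ion of the same charge and speed sweeps a wider arc. A toy calculation (the field strength and speed are made-up values, chosen only to show the 14/12 ratio between the two isotopes’ paths):

```python
# Toy mass-spectrometer calculation: radius of an ion's circular path
# in a magnetic field, r = m*v / (q*B). Field and speed are illustrative
# assumptions, not values from any real instrument.
AMU = 1.66054e-27   # kg per atomic mass unit
Q = 1.60218e-19     # charge of a singly ionized atom, coulombs
B = 0.5             # magnetic field strength, tesla (assumed)
V = 1.0e5           # ion speed, m/s (assumed)

def path_radius(mass_amu):
    return (mass_amu * AMU * V) / (Q * B)

r12 = path_radius(12)  # carbon-12
r14 = path_radius(14)  # carbon-14
print(r14 / r12)  # ~1.1667: carbon-14 swings about 17% wider
```

Whatever the field and speed actually are, the ratio of the radii is just the ratio of the masses, which is why the two isotopes land in cleanly separated spots.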
So what does this have to do with determining a cell’s age? Well, for a long time, nothing. But around 2002, Kirsty Spalding, a postdoc at the Karolinska Institute in Stockholm, Sweden, wanted to challenge the longtime doctrine that the human brain couldn’t create any new neurons after the age of four. There had been growing evidence that the adult hippocampus—a seahorse-shaped region deep in the brain that is important for memory and learning—could regenerate neurons, but no one knew for sure. Spalding and her postdoc advisor, Jonas Frisén, had a hunch that the “bomb pulse” could somehow offer a solution, and it did, culminating in a paper by Spalding, Frisén and their team published in June 2013, which conclusively found that the hippocampus does produce approximately 700 to 1,400 new neurons per day, and that these neurons last twenty to thirty years. How, you ask? There’s an episode of Radiolab (a wonderful science podcast I recommend you all listen to) with a much more colorful version of Spalding and Frisén’s journey here, but because I’m probably already pushing your attention spans, I’ll just give a brief overview. Atmospheric scientists have been measuring the amount of carbon-14 and other elements in the atmosphere every two weeks since the late 1950s, giving us an extremely accurate record of how much carbon-14 was in the atmosphere at any given time. By correlating this record to the amount of carbon-14 found in a cell’s DNA (while other molecules are regularly refreshed throughout a cell’s life, DNA remains constant), researchers can determine the age of not just a hippocampal neuron, but any cell. So, by accident, the nuclear age finally shed light on when tissues form, how long they last and how quickly they’re replaced.
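Stripped to its core, the dating trick is a reverse lookup on the atmospheric record: a cell’s DNA locks in the carbon-14 level of the year the cell was built, so you find the year whose atmospheric value matches. A minimal sketch with an invented, drastically simplified record (real work uses the biweekly measurements mentioned above):

```python
# Sketch of bomb-pulse cell dating: match a cell's DNA carbon-14 level
# to the year on the atmospheric record with the closest value.
# The record below is invented and highly simplified (values are excess
# C-14 relative to the pre-bomb baseline), purely to show the idea.
ATMOSPHERIC_RECORD = {
    1955: 1.02, 1960: 1.25, 1963: 1.95, 1970: 1.55,
    1980: 1.28, 1990: 1.15, 2000: 1.08, 2010: 1.04,
}

def birth_year(dna_c14, record=ATMOSPHERIC_RECORD):
    """Year whose atmospheric C-14 level best matches the DNA sample."""
    return min(record, key=lambda yr: abs(record[yr] - dna_c14))

# A cell whose DNA carries a C-14 excess of 1.27 was most likely
# built around 1980, on the declining side of the pulse:
print(birth_year(1.27))  # 1980
```

One wrinkle the sketch glosses over: because the pulse rises and then falls, a single carbon-14 value can match two different years; the real studies resolve the ambiguity using the person’s birth year and other constraints.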
You—and every other living organism—are continually creating new cells. Cells that make up your skin, hair and the lining of your gut are constantly being replaced, while others, like cells that make up the lens of your eye, the muscles of your heart and the neurons of the cerebral cortex, have been with you since birth and will stay with you until you die.
So why is this so important? Well, firstly, it gives us a key insight into the mechanisms behind many neurodegenerative diseases such as amyotrophic lateral sclerosis (Lou Gehrig’s disease), Parkinson’s, Alzheimer’s, Huntington’s and many more. Really, we’ve only just begun to open this Pandora’s box, and unfortunately time is limited (well, unless we start blowing up a bunch of atomic weapons again, but let’s hope humanity has moved past that), because, as I said, this measurable spike of carbon-14 in our atmosphere from the “bomb pulse” will eventually be depleted somewhere between 2030 and 2050.
Setting aside what the “bomb pulse” has offered and will offer to scientific research, isn’t it cool just knowing which cells have been at the party of you the longest? Or that, like the rings of a tree or the sedimentary layers of rock, our bodies too tell the story of our times? With that, until next time, stay curious my friends.
By Bradley Stockwell
First off, I want to apologize to all six followers of this blog. I know I left you in anxious anticipation of my follow-up post to Climate Change Part I on future green technologies. However, after three months of procrastination, I confess I still haven’t written it. I’m sorry, but I’m easily distracted, and while attempting to assemble it I came across a story too good not to tell, about a fascinating material called graphene. Graphene is the thinnest, strongest and stiffest material on Earth; it conducts electricity and heat better than any other known material; it is transparent and two-dimensional, and it is touted as a basis for future technologies and A.I. At the moment, its potential applications look limitless. Oh, did I mention it was discovered with nothing more than pencil lead and tape? They even gave the guys who discovered it the 2010 Nobel Prize in Physics. Shit, if I’d known it was that easy I could’ve scratched Nobel Prize in Physics off the bucket list a long time ago.
So what exactly is graphene then? In short, it’s a sheet of pencil lead (graphite) an atom thick. But to understand how we arrived at the discovery of graphene, we need to tell another story: the story of carbon. Graphene is an allotrope of carbon, which simply means it’s one possible way to structure carbon atoms. The carbon atom has six protons and typically six neutrons in its nucleus. Sometimes the nucleus has eight neutrons, in which case the atom is known as carbon-14. Carbon-14 is unstable, meaning it radioactively decays, but the decay is consistent over long periods of time. Because this form of carbon is found in many materials, measuring its presence gives us a way to date materials—what is known as carbon dating. Carbon-14, however, is not an allotrope of carbon; it is what is known as an isotope, something covered in detail in a previous post, Flight of the Timeless Photon.
Allotrope formation depends on the electrons of a carbon atom and the way they bond to the electrons of other carbon atoms. Carbon has six electrons: two buried in its innermost shell near the nucleus, and four in its outermost shell, called valence electrons. It is these four outermost electrons—and a ton of heat and pressure—that make the difference between a lump of coal and a diamond, another allotrope of carbon. In diamond, a carbon atom’s four valence electrons are bonded with the valence electrons of four other carbon atoms. This produces an extremely stiff crystalline structure. In fact, a typical diamond is made up of about a million billion billion atoms (1 with 24 zeros after it) all perfectly arranged into a single tetrahedrally bonded crystal, which is key to its extraordinary strength. But diamond is not the strongest and most stable allotrope of carbon. Although De Beers may want you to think otherwise, a diamond is not forever; every diamond in existence is actually slowly turning into graphite. The process, however, takes billions of years, so no need to worry about your wedding ring just yet.
Graphite is not a crystalline structure like diamond, but planes of carbon atoms connected in a hexagonal pattern, each plane having an extremely strong and stable structure—stronger and more stable than diamond’s. Some of you may be asking: is this not the same graphite we write with and grind up into fine powder lubricants? Yes, indeed it is, and this apparent contradiction can be blamed on electrons. In diamond, a carbon atom shares its four valence electrons with four other carbon atoms, whereas in graphite it shares its electrons with only three (see graphic below). This leaves graphite no electrons to form strong bonds between layers, leaving that job to something called van der Waals forces, a weak set of forces generated by fluctuations in a molecular electric field. Basically, they’re the universal glue of matter, something all molecules naturally possess. It is because these forces are so weak that you’re able to write with graphite—a.k.a. pencil lead. As you press your pencil to paper, you’re breaking the van der Waals bonds, allowing layers of graphite to slide across one another and deposit themselves on the page. If it weren’t for the weak van der Waals bonds, pencil lead would be stronger than diamond, and this insight is behind the advent of carbon fiber. Carbon fiber is spun graphite lathered in an epoxy glue to overcome the weak van der Waals forces. Circumventing van der Waals forces is also behind the phenomenal properties of graphene.
Since graphene is a single layer of graphite one atom thick, there are no weak van der Waals forces to worry about. This makes graphene the strongest and thinnest material known to man. Also, because its carbon atoms are not locked into a crystalline lattice like diamond’s—which leaves no free electrons—graphene conducts electricity and heat better than any known material. Because of its transparency and thinness, we could literally add touch sensitivity to any inanimate object, and possibly entire buildings. It also allows for something called Klein tunneling, an exotic quantum effect in which electrons can tunnel through a barrier as if it’s not there. Basically, it has the potential to be an electronic dynamo and may someday replace silicon chips and pave the way for quantum computing. Graphene was purely hypothetical until 2004, when Andre Geim and Konstantin Novoselov discovered it. As stated in the title of this post, they discovered it with nothing more than a lump of graphite and sticky tape. They placed the tape on the graphite and peeled off a layer. They then took another piece of tape, stuck it to the piece with the graphite layer, and halved the layer. They continued to do this until they were left with a layer of graphite one atom thick. I’m not exaggerating the simplicity of the procedure in any way. Watch the video below and you can replicate the experiment yourself; the only catch is that you need an electron microscope to confirm you indeed created graphene. Until next time my friends, stay curious.
By Bradley Stockwell
1878 World’s Fair: Augustin Mouchot’s solar-powered motor is a gold medal winner and initially receives generous government funding for development. However the funding is soon cut due to a dramatic decrease in the cost of coal production.
1900 World’s Fair: Commissioned by the French government, the Otto company displays the recently invented diesel engine running on peanut oil, without any modification to the original design. The engine’s inventor, Rudolf Diesel, learns of this and becomes a leading proponent of developing biodiesel fuels to spur agricultural development. However, after his death in 1913, and with the emerging petroleum market on the rise, the engine is redesigned to run solely on petroleum diesel fuel.
The Egyptian desert, 1913: Frank Shuman, the inventor of safety glass, presents a solar power plant that promises to make solar energy—a limitless, renewable energy source—more cost-efficient than coal. He too receives generous accolades and funding from the German and British governments, but with the outbreak of World War I shortly thereafter, the funding is cut and put into the exploding petroleum market, and Shuman’s solar collectors are recycled into weapons.
Detroit, Michigan, 1908: Henry Ford’s first Model T rolls off the assembly line, running on gasoline and/or corn ethanol. Ford envisions, however, that one day all vehicles will run solely on agricultural fuel sources. One of particular interest to him is hemp. In 1941 he even constructs a lightweight car that runs on hemp biofuel and is built with plastic panels made partially of hemp. Nevertheless, the Marijuana Tax Act of 1937—backed by the petrochemical company DuPont—would eventually kill the domestic hemp industry, and with the onset of World War II, gasoline engine technology would only see further dominance.
Now that we’re staring down the barrel of a global climate crisis, it’s easy to look back and see where it might have been averted. It’s not as if we weren’t warned; as far back as 1896 (read here) the scientific community has cautioned us about the consequences of a fossil-fueled civilization. But humanity’s myopic view of the future has not only undercut our ingenuity; it now endangers the survival of our species—and many others, I might add. However, there’s hope, and I’d like to pay tribute to it by highlighting the most innovative and coolest technologies of today and tomorrow on the front line of the fight against climate change. But first…
Believe it or not, our planet breathes. In the spring, the forests of the Northern Hemisphere inhale carbon dioxide to grow, and the amount of CO2 in the air decreases while the amount of oxygen (O2) increases. Then in the fall, when leaves drop and decay, that CO2 is released back into the atmosphere. The same respiratory cycle happens in the Southern Hemisphere, but there is far more ocean than forest in the South. This has been happening for tens of millions of years, but it wasn’t noticed until 1958, when the oceanographer Charles David Keeling devised a way to accurately measure the amount of CO2 in the atmosphere. This discovery also unearthed quite an elephant in the room for humanity: climate change.
You see, CO2 in our atmosphere acts as an insulator, trapping heat sent here from the sun. Without it, our planet would be a frozen wasteland; with too much of it, it’d be hell on earth—and the difference between the two is not much: six molecules of CO2 per ten thousand, to be exact. Since the formation of the earth, volcanoes have been spewing CO2 into the air. Then water and life came along, and the CO2 was absorbed into the oceans and harvested into organic matter. Over millions of years, this bled our atmosphere of CO2 (a good thing when you’re cultivating life) until CO2 comprised just three-hundredths of a percent of our atmosphere—three molecules per ten thousand. And for at least the last 800,000 years this percentage stayed relatively the same, until the rise of the Industrial Revolution. Hmm… anybody see a strange correlation? We know this because we’ve drilled into glaciers and extracted and measured air trapped that long ago. Since about the turn of the twentieth century, CO2 levels have risen a staggering 40%. And as of January 2015, we’ve officially added another molecule of CO2 per ten thousand—four per ten thousand in total—in the span of about 100 years. Earth hasn’t seen CO2 levels this high in over three million years, when horses and camels roamed the high arctic and sea levels were at least 30 feet higher—a level that would drown many major cities today.
While one more molecule per ten thousand may not sound like much, remember the difference between frozen wasteland and hell on earth is only six molecules per ten thousand, and the life-providing oasis sits delicately in the middle at three. And it’s not as if the earth is just naturally dumping all this additional CO2 into the air. We know it’s man-made because CO2 created from the burning of fossil fuels is slightly lighter than, say, volcanic CO2.
The strongest force driving climate change is us. It’s undeniable, and those who deny it, in my opinion, are just too scared to admit it. And it is scary. It’s not as if we can keep going along like this and still have another 200 years before we add two more CO2 molecules per ten thousand to the atmosphere. We’ve already set off a chain reaction of sorts. Because temperatures are rising, ground that’s been frozen for millennia is now beginning to thaw. That ground is densely packed with organic matter, and its thawing is releasing more CO2 into the air, causing the temperature to rise even higher and thaw the ground even quicker. This positive feedback loop is also happening with the melting of sea ice. As ocean temperatures rise, more sea ice melts, and more heat is absorbed into the oceans instead of being reflected back into space, which causes ocean temperatures to rise faster, which in turn melts the ice faster. Not only are we contributing heavily to climate change; we’ve now triggered Mother Earth to follow suit.
But as I stated previously, there is hope. We haven’t yet reached the “point of no return”—the point at which no amount of effort will save us from catastrophic global warming. That point is at 4.5 molecules per ten thousand (450 per million), so we are damn close. If we continue at our current rate—adding about two more CO2 molecules per million per year—we’ll reach the “point of no return” somewhere around 2042. But I have faith in humans; faith that we’re too smart and too adaptive to let that happen. After all, we come from a long pedigree of very successful survivors, so let’s put it to use. If not for the sake of saving the world, then at least for the sake of technological progress. We know fossil fuels won’t last forever, so why not start solving that problem now? Also, wouldn’t it be cool if we had concrete that healed itself and roads that talked to us while collecting solar energy? This is just a preview of some of the green technologies and innovations on the horizon that I’ll cover in part two of this series. Until then, stay curious my friends.
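The timing above is just back-of-the-envelope arithmetic once everything is converted to parts per million (three per ten thousand = 300 ppm, four per ten thousand = 400 ppm, 4.5 per ten thousand = 450 ppm). A quick check using only the figures from the text:

```python
# Back-of-the-envelope check of the "point of no return" estimate.
# Figures from the text: ~400 ppm CO2 as of January 2015, a threshold
# of 450 ppm, and a rise of roughly 2 ppm per year.
current_ppm = 400      # four molecules per ten thousand
threshold_ppm = 450    # 4.5 molecules per ten thousand
rate_ppm_per_year = 2

years_left = (threshold_ppm - current_ppm) / rate_ppm_per_year
print(2015 + years_left)  # 2040.0, close to the ~2042 in the text
```

Twenty-five years at two parts per million per year closes the fifty-ppm gap, landing within a couple of years of the 2042 estimate.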
By Bradley Stockwell
The further I progress in my adult life, the more I realize what an important role my first job played in building its foundation. Let’s be honest, working as a fast-food drive-thru attendant just plainly sucks. It’s extremely stressful and degrading; you make minimum wage, have some horrible bosses, and you go home every day smelling like greasy meat. I had everything from insults to water balloons to milkshakes to sex toys thrown at me (yes, for some reason people love playing practical jokes on drive-thru attendants). I witnessed physical altercations (I was even involved in one myself), auto accidents, arrests and people performing sex acts while I handed them their food (completely separate from the sex toys incident). Yet despite all this, I am continually grateful for the preview the drive-thru gave me of the real world in all its ugly and fascinating glory.
Customer service is an obvious requirement of a drive-thru attendant. Since food is a universal need, the walks of life I came in contact with—and had to make happy—were quite expansive. Also, if you get between hungry people and their food, be prepared to see some claws come out. To avoid—or if need be, remedy—angry customers, I learned there was no one-size-fits-all approach. Everyone has their own individual tics, stresses and personality, and to succeed in customer service you need to be an excellent reader of people. As a naïve 16-year-old boy raised in a nice suburban community, I wasn’t so good at this in the beginning. However, by the end of my two-year tenure in fast food, there wasn’t a soccer mom, senior citizen, crackhead or corporate businessman I couldn’t charm.
As the drive-thru attendant, you’re sort of the quarterback of the team. Every part of the transaction runs through your hands, from taking the order to collecting the money to handing the food off to the customer. All three of these things need to be double-checked for accuracy, because if something goes wrong, all the blame comes back to the customer’s only point of contact—you. While much of this depended on me, the food—the most important part—depended on my kitchen staff. I found out early in my fast-food career that having an open line of communication with them was crucial. For me this meant learning some Spanish. Repeating orders out loud in Spanish to my kitchen staff increased their accuracy tenfold, and I can’t tell you how many times knowing a little Spanish has come in handy later in life.
Teamwork and Leadership
Additionally, I also took interest in my kitchen staff’s job roles. While my initial motivation to get behind the grill and fryer was out of curiosity, it helped me realize some of the challenges and stresses they had to deal with day-to-day—such as getting burned constantly. I even let my kitchen staff take a few cracks at the headset to understand my job also. Although we preferred our own positions in the end, it built rapport between us. Understanding and respecting how each person contributes to the team’s success as a whole was an invaluable lesson that helped me succeed later in management. But most importantly, I found working with people who respect and have a positive relationship with each other can make even the worst job very enjoyable at times.
Stress & Time Management
To this day, it’s hard to put into words the stress I felt during a lunch or dinner rush. The headset was constantly ringing with nagging customers, orders needed to be bagged, drinks needed to be made, customers needed to be greeted at the window, and it only took one slip for the whole thing to fall into chaos. Along with the above-mentioned duties, it was also my job to make sure bags, condiments and cups were stocked and the shake machine and ice bins were regularly cleaned and filled. If these things weren’t done before a rush or shift change, it meant disaster. My fellow employees, and most importantly the customers, depended on me getting these tasks done, so I quickly learned not to procrastinate; instead, I got them done in small chunks throughout the day. To ensure rushes ran as smoothly as possible, I also memorized the totals with tax for almost every combo meal, and for the dollar menu up to ten items, so that I could fill orders while taking new ones without having to be in front of an order screen.
My most valuable lesson sort of happened by accident. Being a musician, I began treating rushes as performances just for the fun of it. Instead of focusing on how much my job sucked, I focused on how many people I could make smile or laugh. If the situation was appropriate, I sang to people over the intercom, did caricature voices and just really tried to be the most entertaining drive-thru attendant I could be. I learned that when I took pride in my job—no matter how menial it was—the day went by a lot faster, and at times I didn’t even want to go home.
By Bradley Stockwell
If we are ever to find alien life in our lifetimes, it will most likely be within our own solar system. And I'm not talking about little green men (sorry about the misleading picture), but more likely microbial alien life. In April of this year, NASA's chief scientist, Ellen Stofan, said, "I believe we are going to have strong indications of life beyond Earth in the next decade and definitive evidence in the next 10 to 20 years."
This is quite shocking considering that a little over a decade ago we thought the search for extraterrestrial life was all but dead. For many years scientists narrowly confined the search to something called the "habitable zone", informally known as the Goldilocks zone (I like this name better). This is the area within a star system that's close enough to a star to allow liquid water, but not so close as to boil it away. Because Earth is our only reference for life, it was once thought that liquid water was an essential ingredient to the porridge of it; but as we'll see later, even that is now being questioned.
Nevertheless, our best hope still lies with liquid water, and the only places thought to have it, or to have had it at one time, were Earth and Mars. Unsurprisingly, these are also the only two planets within our own solar system's Goldilocks zone. Since Mars has yet to yield any signs of life, the chances of finding alien life in our solar system once seemed pretty grim. However, very recent and very credible evidence is showing there are most likely oceans of liquid water far from where we thought water should be in our solar system. Because of this, and because of continuing research on how life forms, the race is back on to find alien life in our solar system. Here are the three most likely places we'll find it.
Enceladus and its geyser spray
Enceladus is a small, ice-covered moon of Saturn. Although some may disagree, I list this as the most likely place for three reasons: warm liquid water, organics and ease of access. In 2005, NASA's Cassini spacecraft photographed geysers of frozen water shooting up from cracks on the surface of Enceladus. It is almost certain that the culprits behind these geysers are reservoirs of liquid water beneath the frozen surface, formed by something called tidal flexing. Basically, there's a sort of gravitational tug-of-war between Enceladus, its neighboring moons and Saturn itself. These interactions stretch and contract the moon, creating heat, which causes the ice beneath the surface to melt and form liquid water. Additionally, Cassini's instruments detected organic compounds like methane in the geyser spray, and where there's warm water—speculated to be near boiling temperatures—and organics, there's the possibility of life. Life has formed in very similar conditions near hydrothermal vents here on Earth. However, what really sets Enceladus apart is how easy it would be to snatch up evidence of life if it's there. Whatever exists in those subsurface water reservoirs is continually being shot up by geysers into space, and all we'd have to do is fly by and grab it with a spacecraft. There'd be none of the complications of landing robot drillers on the surface, as with other icy-moon candidates. Unfortunately, partly due to NASA's shrinking planetary science budget, there are no planned missions to explore Enceladus.
Europa, a moon of Jupiter, is another ice-covered world a little smaller than our own Moon. Like Enceladus and Europa's two neighboring moons, Callisto and Ganymede, it most likely has liquid water beneath its surface, caused by tidal flexing between it, Jupiter and the other Jovian satellites. There may be more water here than in all of Earth's oceans. It is also speculated that the ocean contains salt, because the moon has an induced magnetic field, which means its interior is electrically conductive—something fresh water is not. With water, salt and carbon-based organic compounds from the comets that have inevitably hit the moon, you've got quite the recipe for life. While salt hasn't been directly detected yet, a probe mission has been proposed for 2025 that would answer many questions about the moon's chemical makeup. If this chemical makeup is considered life-friendly, it will be all the more reason to send a lander to Europa. The problem then lies in how to drill through 10 miles (16 km) of extremely hard ice. Nevertheless, drilling through 10 miles of ice to reach water seems more promising than drilling through the at least 60 miles (100 km) of rock covering Callisto's and Ganymede's water.
Titan is the only other place in our solar system to have rain and liquid lakes. It is also the only moon to have a dense atmosphere.
Titan, Saturn's largest moon and the second-largest in the solar system, would be my favorite outcome for alien life because it would completely rewrite our recipe book for life and immensely expand the possibility of it throughout the rest of the universe. Honestly, I was tempted to list it first, but I felt obligated to give preference to at least two water-bearing moons. Other than Earth, Titan is the only known place in our solar system where it rains and where there are liquid lakes. It is also the only moon in the solar system with a dense atmosphere. Granted, the rain and lakes are made of liquid ethane and methane, but ethane and methane are both saturated hydrocarbons—a.k.a. life-making stuff—and the atmosphere is largely made up of nitrogen, just like here on Earth. On Earth, ethane and methane are gases, but because the temperature on Titan averages about -290 Fahrenheit (-179 Celsius), they come in all three states of matter—liquid, solid and gas. There's also a whole methane cycle analogous to Earth's water cycle, which creates similar Earthly weather patterns on Titan.
Just because organic compounds found one way to form into life here on Earth doesn't mean that there aren't a multitude of other ways—in fact, I refuse to believe there aren't. In 2010, Sarah Horst of the University of Arizona found all five nucleotide bases (the building blocks of DNA and RNA) and amino acids (the building blocks of protein) among the compounds produced when energy was applied to a combination of gases like those found in Titan's atmosphere. It was the first time nucleotide bases and amino acids had been found in such an experiment without the presence of liquid water. In 2013, NASA ran an experiment of its own and also concluded, based on simulations of Titan's atmosphere, that complex organic chemicals could arise there. If life were to exist on Titan, it would most likely inhale hydrogen in place of oxygen, metabolize acetylene instead of glucose and exhale methane instead of carbon dioxide.
The other thing Titan has going for it over the former two candidates is that we already know how to get robot landers to the surface, because we've done it before. In 2005, the Cassini spacecraft dropped the Huygens probe, and you can see how eerily similar the surface looks to Earth in this video here. Unfortunately, as of right now there are no planned return trips to Titan.
Until next time my friends, stay curious.
By Bradley Stockwell
I once had a friend, after a long night of drinking, consult me on his living room couch: "What does quantum mechanics really mean?" I guess he asked because I blabbed about physics so much that he considered me an expert in the field rather than the casual student I really am. I was taken aback, for this particular friend and I had never discussed physics—let alone quantum mechanics—in our entire five-year friendship. He was the friend I turned to when I needed a break from intellectual studies to indulge in the simpler pleasures of life, such as beer and sports. He was also so heavily inebriated that I was pretty sure he wasn't even going to remember asking the question in the morning (I was indeed later proven right).
I answered casually, "Well, it's the physics of atoms, and atoms make up everything, so I guess it means everything." Not satisfied with my answer, he slurred in reply, "No really, what does it mean? We can't really see what goes on in an atom, so how do we really know? What if it's just some guys too smart for their own good making it all up? Can we really trust it? From what I know, we still don't completely understand it, so how do we know if it's really real? Maybe there are just some things we as humans weren't supposed to understand."
After a few moments of contemplation I answered: “Everything from your smartphone to the latest advances in medicine, computer and materials technology, to the fact you’re changing channels on the TV with that remote in your hand is a result of understanding quantum mechanics. But you’re right; we still don’t fully understand it and it’s continually showing us that the universe is probably a place we’ll never fully grasp, but that doesn’t mean we should give up…” I then continued with what might’ve been too highbrow of an explanation of quantum mechanics for an extremely drunk person at 3 a.m. because halfway through he fell asleep.
As my friend snored beside me, I couldn't help but be bothered that he and so many others still considered quantum mechanics such an abstract thing more than a hundred years after its discovery. I thought, if only I could ground it in some way to make people realize that they interact with quantum mechanics every day; that it really is rooted in reality and not part of some abstract world only understood by physicists. I myself am a layperson with no university-level education in science, and I learned to understand it with nothing more than some old physics books and free online classes. Granted, it wasn't easy and took a lot of work—work I'm still continuing—but it's extremely rewarding work, because the more I understand, the more exciting and wonderful the world around me becomes.
This was my inspiration behind The Party Trick Physicist blog: to teach others about the extraordinary world of science and physics in a format that drunk people at 3 a.m. might understand. I make no promises and do at times offer more in-depth posts, but I do my best. With that said, as unimaginative as a post about at-home physics experiments initially felt to me, there's probably no better way to ground quantum mechanics—even for a drunk person at 3 a.m.—than some hands-on experience. Below are four simple quantum mechanical experiments that anyone can do at home, or even at a party.
1. See Electron Footprints
For this experiment you'll be building an easy-to-make spectroscope/spectrograph to capture or photograph light spectra. For the step-by-step tutorial on how to build one, click here. After following the instructions, you should see a partial emission spectrum like the one below.
Now what exactly do these colored lines have to do with electrons? As detailed in a previous post, The Layman's Guide to Quantum Mechanics- Part 2: Let's Get Weird, they are electron footprints! You see, electrons can only occupy certain orbital paths within an atom, and in order to move up to a higher orbital path, they need energy, which they get by absorbing light—but only the right portions of light. They need specific ranges of energy, or colors, to make these jumps. When they jump back down, they emit the light they absorbed, and that's what you're seeing above: an emission spectrum. An emission spectrum shows the specific energies, or colors, an element's electrons need—in this case the mercury electrons within the fluorescent light bulb—to make these orbital, or 'quantum', leaps. Every element has a unique emission spectrum, and that's how we identify the chemical composition of something, or know what faraway planets and stars are made of, just by looking at the light they emit.
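For the numerically curious, the positions of hydrogen's emission lines can actually be predicted with the Rydberg formula. This little Python sketch is my own illustration (not part of the spectroscope tutorial), but the lines it prints are the very ones you'd hunt for with your homemade spectroscope pointed at a hydrogen lamp:

```python
# Predict hydrogen emission lines with the Rydberg formula:
#   1/λ = R * (1/n_lower² − 1/n_upper²)
R = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

def emission_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when an electron drops n_upper -> n_lower."""
    inv_wavelength = R * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_wavelength  # metres -> nanometres

# The visible (Balmer) lines all end on the second orbit (n = 2):
for n in range(3, 7):
    print(f"n={n} -> 2: {emission_wavelength_nm(n, 2):.1f} nm")
```

The n=3 line lands around 656 nm—the famous red hydrogen-alpha line you can see in nebula photos.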
2. Measure The Speed of Light With a Chocolate Bar
This is probably the easiest experiment as it only requires a chocolate bar, a microwave oven, a ruler and calculator. I’ve actually done this one myself at a party and while you’ll come off as a nerd, you’ll be the coolest one there. Click here for a great step-by-step tutorial and explanation from planet-science.com
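If you want to sanity-check your own measurement, the arithmetic behind the trick is simple: the melted spots on the chocolate sit half a wavelength apart, and multiplying the wavelength by the oven's frequency gives the speed of light. Here's a sketch with assumed example numbers (your spot spacing will vary):

```python
# Melted spots are half a wavelength apart, so λ = 2 × spacing, and c = f × λ.
frequency_hz = 2.45e9    # printed on the back of most microwave ovens
spot_spacing_m = 0.06    # example measurement: 6 cm between melted spots

wavelength_m = 2 * spot_spacing_m
speed_of_light = frequency_hz * wavelength_m
print(f"Estimated c = {speed_of_light:.2e} m/s (true value is about 3.00e8 m/s)")
```

Getting within a few percent of 3.00 × 10⁸ m/s with a candy bar is a pretty good party trick.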
3. Prove Light Acts as a Wave
This is how you can replicate Thomas Young’s famous double slit experiment that definitively proved (for about 100 years) that light acts as a wave. All you need is a laser pointer, electrical tape, wire and scissors. Click here for a step-by-step video tutorial.
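As a rough guide to what you should see on the wall, the bright fringes sit approximately λL/d apart, where d is the slit separation and L is the distance to the wall (the standard small-angle result). The numbers in this sketch are my own assumptions, not from the video tutorial:

```python
# Expected fringe spacing for a double slit: Δy ≈ λ * L / d (small angles).
wavelength_m = 650e-9   # typical red laser pointer
slit_sep_m = 0.2e-3     # ~0.2 mm gap, roughly what the wire-and-tape trick gives
screen_dist_m = 2.0     # distance from the slits to the wall

fringe_spacing_m = wavelength_m * screen_dist_m / slit_sep_m
print(f"Fringe spacing ≈ {fringe_spacing_m * 1000:.1f} mm")
```

A few millimetres between fringes—easily visible by eye, which is exactly why this experiment works so well at home.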
4. Prove Light Also Acts as a Particle
This experiment is probably only for the most ambitious at-home physicists because it is the most labor- and materials-intensive. However, this was the experiment that started it all; the one that gave birth to quantum mechanics and eventually led to our modern view of the subatomic world: that particles, whether of light or matter, act as both a wave and a particle. As explained in detail in my previous post The Layman's Guide to Quantum Mechanics- Part I: The Beginning, this was the experiment that proved Einstein's photoelectric effect theory, for which he won his only Nobel Prize. Click here to learn how to make your own photoelectric effect experiment.
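The heart of Einstein's theory is one small equation: the maximum kinetic energy of an ejected electron is the photon's energy minus the metal's work function, K_max = hf − φ. A sketch with assumed numbers (zinc's textbook work function of about 4.3 eV) shows why color, not brightness, is what matters:

```python
# Einstein's photoelectric relation: K_max = hf − φ.
# A photon only ejects an electron if its energy hf exceeds the work function φ.
h = 6.626e-34   # Planck's constant, J·s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def max_kinetic_energy_eV(wavelength_m, work_function_eV):
    photon_energy_eV = h * c / (wavelength_m * eV)
    return photon_energy_eV - work_function_eV  # negative means no electrons ejected

# 254 nm ultraviolet light on zinc (work function ≈ 4.3 eV):
print(f"UV:  {max_kinetic_energy_eV(254e-9, 4.3):+.2f} eV")
# Red light, no matter how bright, falls short:
print(f"red: {max_kinetic_energy_eV(650e-9, 4.3):+.2f} eV")
```

That negative result for red light—no electrons, ever, regardless of intensity—was the clue that light comes in quantized chunks.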
Good luck my fellow party trick physicists and until next time, stay curious.
By Bradley Stockwell
Because my last two posts were quite lengthy, I’ve decided to limit myself to 1000 words on this one. Before I begin, I must credit the physicist Brian Greene for much of the insight and some of the examples I’m going to use. Without his book The Elegant Universe, I wouldn’t know where to begin in trying to explain string theory.
In short, string theory is the leading candidate for a theory of everything: a solution to the problem of trying to connect quantum mechanics to relativity. Because it has yet to be proven experimentally, many physicists have a hard time accepting it and think of it as nothing more than a mathematical contrivance. However, I must emphasize, it has also yet to be disproven; in fact, many of the recent discoveries made in particle physics and cosmology were first predicted by string theory. Like quantum mechanics when it was first conceived, it has divided the physics community in two. Although the theory has enlightened us to some features of our universe and is arguably the most beautiful theory since Einstein's general relativity, it still lacks definitive evidence, for reasons that'll be obvious later. But there is some hope on the horizon. After two years of upgrades, the LHC—the particle accelerator that discovered the Higgs boson (the 'God particle')—will be starting up again in the upcoming month to dive deeper into some of these insights that string theory has given us, and may further serve as evidence for it.
So now that you have a general overview, let's get to the nitty-gritty. According to the theory, our universe is made up of ten to eleven dimensions; however, we only experience four of them. Think about the way in which you give someone your location. You tell them you're on the corner of Main and Broadway, on the second floor of such-and-such building. These coordinates represent the three spatial dimensions we're familiar with: left and right, forward and back, and up and down. Of course, you also give a time at which you'll be at this three-dimensional location, and that is dimension number four.
Where are these other six to seven dimensions hiding then? They're rolled up into tiny six-dimensional shapes called Calabi-Yau shapes, named after the mathematicians who discovered them, that are woven into the fabric of the universe. You can sort of imagine them as knots that hold the threads of the universe together. The seventh possible extra dimension comes from an extension of string theory called M-theory, which basically adds another height dimension, but we can ignore that for now. These Calabi-Yau 'knots' are unfathomably small; as small as you can possibly get. This is why string theory has remained unproven, and consequently it is also saved from being disproven: with all the technology we currently possess, we just can't probe down that far—down to something called the Planck length. To give you a reference point for the Planck length, imagine if an atom were the size of our entire universe; this length would be about as long as your average tree here on Earth.
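You can check that tree comparison with back-of-the-envelope arithmetic. The figures below are my own order-of-magnitude assumptions (an atom ~10⁻¹⁰ m across, the observable universe ~10²⁷ m), not from the post, but the answer does land in very-tall-tree territory:

```python
# Scale an atom up to the size of the observable universe, and see how big
# the Planck length becomes under the same magnification.
planck_length = 1.6e-35      # metres
atom_diameter = 1e-10        # metres, order of magnitude for an atom
universe_diameter = 8.8e26   # metres, rough diameter of the observable universe

scale = universe_diameter / atom_diameter  # the magnification factor
print(f"Planck length scales up to roughly {planck_length * scale:.0f} m")
```

Around a hundred metres or so—redwood-sized—which is why no conceivable particle accelerator gets anywhere near probing it.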
Calabi-Yau shapes, or ‘knots’ that hold the fabric of the universe together.
The exact shape of these six-dimensional knots is unknown, but it is important because it has a profound impact on our universe. At its core, string theory imagines everything in our universe as being made of the same material: microscopic strings of energy. And just the way air funneled through a French horn has vibrational patterns that create various musical notes, strings funneled through these six-dimensional knots have vibrational patterns that create various particle properties, such as mass, charge and something called spin. These properties dictate how a particle will influence our universe and how it will interact with other particles. Some particles become gravity; others become the forces that attract, glue together and pull apart matter particles. This sets the stage for particles like quarks to coalesce into protons and neutrons, which interact with electrons to become atoms. Atoms interact with other atoms to become molecules, and molecules interact with other molecules to become matter, until eventually you have this thing we call the universe. Amazing, isn't it? The reality we perceive could be nothing more than a grand symphony of vibrating strings.
Many string theorists have tried to pin down the exact Calabi-Yau shape that created our universe, but the mathematics seems to say it's not possible; that there is an infinite number of possibilities. This leads us down an existential rabbit hole of sorts and opens up possibilities about reality that the human brain may never comprehend. Multiverse theorists (the cosmology counterparts to string theorists) have proposed that because there is an infinite number of possible shapes, there is an infinite variety of universes that could all exist within one giant multidimensional form called the multiverse. This ties in with another component of the multiverse theory I've mentioned previously: that behind every black hole is another universe. Because the gravitational pull within a black hole is so great, it would cause these Calabi-Yau 'knots' to become untangled and re-form into another shape. Changing this shape would change the strings' vibrational patterns, which would change particle properties and create an entirely new universe with a new set of laws of physics. Some may be sustainable—as in the case of our universe—others unsustainable. Trying to guess the exact Calabi-Yau shape a black hole would form would be kind of like trying to calculate the innumerable factors that make up the unique shape of a single snowflake.
The multiverse theory along with M-theory also leads to the possibility that forces in other universes, or dimensions, may be stronger or weaker than within ours. For example gravity, the weakest of the four fundamental forces in our universe, may be sourced in a neighboring universe or dimension where it is stronger and we are just experiencing the residual effect of what bleeds through. Sort of like muffled music from a neighbor’s house party bleeding through the walls of your house. The importance of this possibility is gravity may be a communication link to other universes or dimensions—something that the movie Interstellar played off of.
Well I’ve gone over by 52 words now (sorry I tried my best!), so until next time, stay curious my friends.
By Bradley Stockwell
A great way to understand the continuous-wave and quantized-particle duality of quantum physics is to look at the differences between today's digital technology and its predecessor, analog technology. All analog means is that something is continuous, and all digital means is that something is granular, or comes in identifiable chunks. For example, the hand of an analog clock must sweep over every possible increment of time as it progresses; it's continuous. But a digital clock, even if it's displaying every increment down to milliseconds, has to change according to quantifiable bits of time; it's granular. Analog recording equipment transfers entire, continuous sound waves to tape, while digital cuts that signal up into small, sloping steps so that it can fit into a file (which is why many audiophiles profess that vinyl is always better). Digital cameras and televisions now produce pictures that, instead of having a continuum of colors, have pixels and a finite number of colors. This granularity in the digital music we hear, the television we watch, or the pictures we browse online often goes unnoticed; they appear continuous to our eyes. Our physical reality is much the same. It appears to be continuous, but in fact it went digital about 14 billion years ago. Space, time, energy and momentum are all granular, and the only way we can see this granularity is through the eyes of quantum mechanics.
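The digital-audio granularity described above can be sketched in a few lines of code. This is my own illustration (CD-style numbers: 44,100 samples per second, 16 bits per sample): a perfectly smooth sine wave gets chopped into a finite list of integer levels—granular in both time and amplitude, just like the paragraph describes:

```python
import math

def digitize(duration_s, sample_rate_hz, bits, freq_hz=440.0):
    """Turn a continuous 440 Hz tone into a finite list of integer levels."""
    levels = 2 ** bits
    samples = []
    for n in range(int(duration_s * sample_rate_hz)):
        t = n / sample_rate_hz                       # time becomes granular
        amplitude = math.sin(2 * math.pi * freq_hz * t)
        # ...and so does amplitude: snap it to one of the allowed levels
        quantized = round((amplitude + 1) / 2 * (levels - 1))
        samples.append(quantized)
    return samples

# One hundredth of a second of CD-quality audio: 441 discrete chunks of time,
# each forced into one of 65,536 allowed amplitude levels.
chunk = digitize(0.01, 44100, 16)
print(len(chunk), min(chunk), max(chunk))
```

Play it back and your ear hears a continuous tone—exactly the illusion of continuity the post is describing.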
Although the discovery of the wave-particle duality of light was shocking at the turn of the 20th century, things in the subatomic world—and the greater world, for that matter—were about to get a whole lot stranger. While it was known at the time that protons were grouped within a central region of an atom, called the nucleus, and electrons were arranged at large distances outside the nucleus, scientists were stumped in trying to figure out a stable arrangement for the hydrogen atom, which consists of one proton and one electron. The reason being that if the electron were stationary, it would fall into the nucleus, since the opposite charges would cause the two to attract. On the other hand, the electron couldn't be orbiting the nucleus, as circular motion requires constant acceleration to keep the circling body (the electron) from flying away. Since the electron has charge, it would radiate light, or energy, while being accelerated, and the loss of that energy would send the electron spiraling into the nucleus.
In 1913, Niels Bohr proposed the first working model of the hydrogen atom. Borrowing from Max Planck's solution to the UV catastrophe we mentioned previously, Bohr used energy quantization to partially solve the electron radiation catastrophe (not the actual name, just me having fun with a play on words), i.e., the problem of an orbiting electron spiraling into the nucleus due to energy loss. Just as a black body radiates energy in discrete values, so did the electron. These discrete values of energy radiation would therefore determine the discrete orbits around the nucleus the electron was allowed to occupy. In light of experimental evidence we'll soon get to, he decided to put aside the problem of an electron radiating away all its energy by simply declaring it didn't happen. Instead, he stated that an electron only radiated energy when it jumped from one orbit to another.
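Bohr's model boils down to one tidy formula: the n-th allowed orbit of hydrogen has energy E_n = −13.6 eV / n², and a jump between orbits emits or absorbs exactly the energy difference. A quick sketch (my own code, using the standard textbook value of 13.6 eV):

```python
# Bohr's quantized energy levels for hydrogen: E_n = −13.6 eV / n².
def bohr_energy_eV(n):
    return -13.6 / n**2

for n in range(1, 5):
    print(f"orbit n={n}: {bohr_energy_eV(n):.2f} eV")

# The jump from orbit 3 down to orbit 2 releases a photon carrying the difference:
print(f"E(3) - E(2) = {bohr_energy_eV(3) - bohr_energy_eV(2):.3f} eV")
```

That 1.89 eV difference corresponds to red light at about 656 nm—the hydrogen-alpha line, which is exactly the kind of agreement with measured spectra we'll get to next.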
So what was this strong evidence that made Niels Bohr so confident that these electron orbits really existed? Something called absorption and emission spectra, which were discovered in the early 19th century and used to identify the chemical compounds of various materials, but had never been truly understood. When white light is shined upon an element, certain portions of that light are absorbed and also re-radiated, creating a spectral barcode, so to speak, for that element. By looking at which parts of the white light (which frequencies) are absorbed and radiated, chemists can identify the chemical composition of something. This is how we're able to tell what faraway planets and stars are made of, by looking at the absorption lines in the light they radiate. When the energy differences between these absorbed and emitted sections of light were analyzed, they agreed exactly with the energy differences between Bohr's electron orbits in a hydrogen atom. Talk about the subatomic world coming out to smack you in the face! Every time light is shone upon an element, its electrons eat up this light and use the energy to jump up an orbit, then spit it back out to jump down an orbit. When you look at the absorption or emission spectrum of an element, you are literally looking at the footprints left behind by its electrons!
Left- The coordinating energy differences between electron orbits and emitted and absorbed light frequencies. Right- A hydrogen absorption and emission spectrum.
As always, this discovery only led to more questions. The quantum approach worked well in explaining the allowable electron orbits of hydrogen, but why were only those specific orbits allowed? In 1924, Louis de Broglie put forward sort of a 'duh' idea that would finally rip the lid off the can of worms quantum mechanics was becoming. As mentioned previously, Einstein and Planck had firmly established that light had characteristics of both a particle and a wave, so all de Broglie suggested was that matter particles, such as electrons and protons, could also exhibit this behavior. This was proven with the very experiment that had so definitively established light as a wave: the now-famous double slit experiment. It showed that an electron also exhibits the properties of a wave—unless you actually observe that electron, in which case it begins acting like a particle again. To find out more about this experiment, watch this video here.
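De Broglie's suggestion comes with its own tidy formula: any particle with momentum p has a wavelength λ = h/p. This sketch (the speeds are my own illustrative picks) shows why electrons behave so weirdly while baseballs don't:

```python
# De Broglie's matter-wave relation: λ = h / (m·v).
h = 6.626e-34  # Planck's constant, J·s

def de_broglie_wavelength(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.109e-31, 2.2e6)  # electron at a typical atomic speed
baseball = de_broglie_wavelength(0.145, 40.0)       # 145 g baseball thrown at 40 m/s

print(f"electron: {electron:.2e} m")  # about the size of an atom
print(f"baseball: {baseball:.2e} m")  # absurdly tiny -- why we never notice matter waves
```

An electron's wavelength is comparable to the atom it lives in, so its waviness dominates; a baseball's is some 24 orders of magnitude smaller than the ball itself, so it behaves like the particle we expect.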
As crazy as this all sounds, when the wave-like behavior of electrons was applied to Bohr's atom, it answered many questions. First, it meant that the circumferences of the allowed orbits had to be exact multiples of the wavelength calculated for the electron. Orbits outside these multiples would produce interfering waves and essentially cancel the electron out of existence. The circumference of an electron orbit must equal its wavelength, or twice its wavelength, or three times its wavelength, and so forth. Secondly, if an electron is also a wave, these orbits weren't really orbits in the conventional sense, but rather standing waves that surrounded the nucleus entirely, making the exact position and momentum of the electron impossible to determine at any given moment.
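That whole-number-of-wavelengths condition can be written out in two lines. Combining it with de Broglie's relation recovers exactly Bohr's rule that angular momentum comes in discrete chunks of ħ (a standard textbook derivation, sketched here for completeness):

```latex
2\pi r = n\lambda \quad\text{and}\quad \lambda = \frac{h}{m_e v}
\;\;\Longrightarrow\;\;
2\pi r = \frac{n h}{m_e v}
\;\;\Longrightarrow\;\;
m_e v r = n\,\frac{h}{2\pi} = n\hbar, \qquad n = 1, 2, 3, \dots
```

In other words, the mysterious quantization Bohr had to assume by fiat falls straight out of demanding that the electron's wave fit neatly around its orbit.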
This is where a physicist by the name of Werner Heisenberg (yes, the same Heisenberg that inspired Walter White's alter ego in Breaking Bad) stepped in. From de Broglie's standing-wave orbits, he postulated sort of the golden rule of quantum mechanics: the uncertainty principle. It states that the more precisely the position of an object is known, the less precisely its momentum is known, and vice versa. It meant that subatomic particles can exist in more than one place at a time, and can disappear and reappear in another place without existing in the intervening space—yeah, it basically took quantum mechanics to another level of strange. While this may be hard to wrap your head around, imagine wrapping a wavy line around the entire circumference of the earth. Now can you tell me a singular coordinate of where this wavy line is? Of course not; it's a wavy line, not a point. It touches numerous places at the same time. But what you can tell me is the speed at which this wavy line is orbiting the earth, by analyzing how fast its crests and troughs are cycling. On the other hand, if we crumple this wavy line up into a ball—into a point—you could now tell me the exact coordinates of where it is, but there are no longer any crests and troughs by which to judge its momentum. Hopefully this elucidates the conundrum these physicists faced in having something that is both a particle and a wave at the same time.
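The uncertainty principle is also quantitative: Δx·Δp ≥ ħ/2. Plugging in atom-sized numbers (my own illustrative choice) shows why a confined electron can never simply sit still:

```python
# Heisenberg's uncertainty principle: Δx · Δp ≥ ħ/2.
hbar = 1.055e-34  # reduced Planck constant, J·s

def min_momentum_uncertainty(delta_x_m):
    """Smallest possible momentum spread once position is pinned down to delta_x."""
    return hbar / (2 * delta_x_m)

# Confine an electron to an atom-sized region (~1e-10 m across):
dp = min_momentum_uncertainty(1e-10)
dv = dp / 9.109e-31  # the implied minimum speed spread for an electron
print(f"Δp ≥ {dp:.2e} kg·m/s  ->  Δv ≥ {dv:.2e} m/s")
```

Squeezing the electron into an atom forces a speed uncertainty of hundreds of kilometres per second—the very jitter that keeps it from collapsing into the nucleus.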
Like you probably are right now, the physicists of that time were struggling to adjust to this. You see, physicists like precision. They like to say exhibit A has such-and-such mass and moves with such-and-such momentum, and therefore at such-and-such time it will arrive at such-and-such place. This was turning out to be impossible within the subatomic world, and it required a shift in their rigid scientific fiber from certainty to probability. This was too much for some, including Einstein, who simply could not accept that "God would play dice with the universe." But probability is at the heart of quantum mechanics, and it is the only way it can produce testable results. I like to compare it to a well-trained composer hearing a song for the first time. While he may not know the exact direction the song is going to take—anything and everything is possible—he can take certain factors, like the key, the genre, the subject matter and the artist's previous work, and make probabilistic guesses as to what the next note, chord or lyric might be. When physicists use quantum mechanics to predict the behavior of subatomic particles, they do very much the same thing. In fact, the precision of quantum mechanics has become so great that Richard Feynman (here's my obligatory Feynman quote) compared it to "predicting a distance as great as the width of North America to an accuracy of one human hair's breadth."
So why exactly is quantum mechanics a very precise game of probability? Because when something is both a particle and a wave, it has the possibility of existing everywhere at every time. Simply put, a subatomic particle's existence is wavy. The wave-like behavior of a particle is essentially a map of its existence. When the wave changes, so does the particle. And by wavy, this doesn't mean random. Most of the time a particle will materialize into existence where the wave's amplitude is at a maximum and avoid the areas where it is at a minimum—again, I emphasize most of the time; there's nothing in the laws of physics saying it has to follow this rule. The equation that describes this motion and behavior of all things tiny is called a wave equation, developed by Erwin Schrödinger (whom you may know for his famous cat, which I'll get to soon). This equation not only correctly described the motion and behavior of particles within a hydrogen atom, but within every element in the periodic table.
Heisenberg did more than just put forth the uncertainty principle—he of course wrote an equation for it. This equation quantified the relationship between position and momentum. Combined with Schrödinger's, it gives us a comprehensive image of the atom and the designated areas in which a particle can materialize into existence. Without getting too complex, let's look at a simple hydrogen atom in its lowest energy state, with one proton and one electron. Since the electron has a very tiny mass, it can occupy a comparatively large region of space. A proton, however, has a mass roughly 1,800 times that of an electron and therefore can only occupy a very small region of space. The result is a tiny region in which the proton can materialize (the nucleus), surrounded by a much larger region in which the electron can materialize (the electron cloud). If you drew a line graph traveling outward from the nucleus representing the probability of finding the electron, you'd see it peaks right where the first electron orbit is located in the Bohr model of the hydrogen atom we mentioned earlier. The primary difference between this model and Bohr's, though, is that an electron occupies a cloud, or shell, instead of a definitive orbit. Now this is a great picture of a hydrogen atom in its lowest energy state, but of course an atom is not always found in its lowest energy state. Just as there are multiple orbits allowed in the Bohr model, there are higher energy states, or clouds, within a quantum mechanical hydrogen atom. And not all these clouds look like the symmetrical sphere of the first energy state. For example, the second energy state can have a cloud that comes in two forms: one that is double spherical (one sphere inside a larger one) and one shaped like a dumbbell. For higher energy states, the electron clouds can start to look pretty outrageous.
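That line-graph-peaking-at-the-first-Bohr-orbit claim can be checked numerically. For ground-state hydrogen, the standard textbook result is that the chance of finding the electron at radius r goes as P(r) ∝ r²e^(−2r/a₀); scanning that curve (my own sketch, not the author's code) shows the peak lands right at the Bohr radius a₀:

```python
import math

a0 = 5.29e-11  # Bohr radius, metres

def radial_probability(r):
    """Relative probability of finding the ground-state electron at radius r."""
    return r**2 * math.exp(-2 * r / a0)

# Scan radii from 0 out to 5·a0 and find where the probability peaks:
radii = [i * a0 / 1000 for i in range(1, 5001)]
peak_r = max(radii, key=radial_probability)
print(f"Most likely radius ≈ {peak_r:.3e} m (Bohr radius = {a0:.3e} m)")
```

The cloud's most probable radius is exactly Bohr's first orbit—the old model survives inside the new one as the peak of a probability curve.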
Left- Actual direct observations of a hydrogen atom changing energy states. Right- The many shapes of hydrogen electron clouds, or shells as they progress to higher energy states. Each shape is representative of the area in which an electron can be found. The highest probability areas are in violet.
The way these electron clouds transform from one energy state to the next is also similar to the Bohr model. If a photon is absorbed by an atom, the energy state jumps up; if an atom emits a photon, it jumps down. The color of these absorbed and emitted photons corresponds to how many energy states the electron has moved up or down. If you've ever thrown something into a campfire, or a Bunsen burner in chemistry class, and seen the flames turn a strange color like green, pink or blue, the electrons within that material are changing energy states, and the frequencies of those colors reflect how much energy the changes took. Again, this explains in further detail what we are seeing when we look at absorption and emission spectra: an absorption spectrum is all the colors in white light minus those absorbed by the element, and an emission spectrum contains only the colors that match the differences in energy between the electron energy states.
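To make those colors concrete, here's a short Python sketch using the standard Rydberg formula (again a textbook formula, not something derived in this post) for hydrogen's visible emission lines, where the electron drops down to the second energy level:

```python
# Wavelengths of hydrogen's visible (Balmer) emission lines from the Rydberg
# formula: 1/wavelength = R * (1/2^2 - 1/n^2) for an electron dropping
# from level n down to level 2.
R = 1.0973731568e7  # Rydberg constant in 1/m (standard value)

for n in range(3, 7):
    wavelength_nm = 1e9 / (R * (1/4 - 1/n**2))
    print(f"n={n} -> 2: {wavelength_nm:.0f} nm")
# The n=3 drop gives the familiar red hydrogen line near 656 nm; the higher
# drops give the blue-green and violet lines of hydrogen's emission spectrum.
```

Each of those wavelengths is one bright line in hydrogen's emission spectrum, and one dark line in its absorption spectrum.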
Another important feature of the quantum mechanical atom is that only two electrons can occupy each electron cloud, or orbital. This is because of something inherent in electrons called spin. You can think of electrons as spinning tops that can only spin in two ways: either upright or upside down. When these electrons spin, like the Earth, they create a magnetic field, and these fields have to be 180 degrees out of phase with each other for the two electrons to coexist. So in the end, each electron cloud can only hold two electrons: one with spin up and one with spin down. This is called the exclusion principle, formulated by Wolfgang Pauli. Spin is not something inherent only in electrons, but in all subatomic particles. This property is quantized as well, and all particles fall into one of two families defined by their spin. Particles with spin equal to 1/2, 3/2, 5/2 and so on form a family called fermions; electrons, quarks, protons and neutrons all fall into this family. Particles with spin equal to 0, 1, 2, 3 and so on belong to a family called bosons, which includes photons, gluons and the hypothetical graviton. Bosons, unlike fermions, don't have to obey the Pauli exclusion principle and can all gather together in the lowest possible energy state. An example of this is a laser, which requires a large number of photons to all be in the same energy state at the same time.
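The bookkeeping behind the exclusion principle can be sketched in a few lines of Python. This simply enumerates the standard quantum numbers from any chemistry textbook (nothing specific to this post) and recovers the familiar electron capacities of 2, 8, 18 and 32 for the first four energy levels:

```python
# Enumerate the allowed orbitals for each energy level n: the shape number l
# runs from 0 to n-1, each l allows 2l+1 orientations, and each orbital holds
# two electrons (spin up and spin down) per the Pauli exclusion principle.
capacity = {}
for n in range(1, 5):
    orbitals = sum(2 * l + 1 for l in range(n))
    capacity[n] = 2 * orbitals  # two spin states per orbital
    print(f"n={n}: {orbitals} orbitals, {capacity[n]} electrons")

# capacity comes out to {1: 2, 2: 8, 3: 18, 4: 32}, i.e. 2n^2 electrons per
# level, which is exactly the structure behind the rows of the periodic table.
```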
Since subatomic particles of the same kind all look identical and are constantly phasing in and out of existence, they can be pretty hard to keep track of. Spin, however, provides a way for physicists to distinguish the little guys from one another. Once they realized this, though, they happened upon probably the strangest and most debated feature of quantum mechanics: quantum entanglement. To understand entanglement, let's imagine two electrons happily existing together in the same electron cloud. As stated above, one is spinning upright and the other upside down. Because of their out-of-phase magnetic fields they can coexist in the same energy state, but this also means their properties, like spin, are dependent on one another: if electron A's spin is up, electron B's spin is down; they've become entangled. If these two electrons are suddenly emitted from the atom simultaneously and travel in opposite directions, each is now flip-flopping between a state of being up and a state of being down. One could say each is in both states at the same time. When Erwin Schrödinger was pondering this and subsequently coined the term entanglement, he somewhat jokingly used a thought experiment about a cat in a box that was in a state of being both alive and dead, and it wasn't until someone opened the box that the cat settled into one state or the other. This is exactly what happens to one of these electrons as soon as someone measures (or observes) it: the electron settles into a spin state of either up or down. Now here's where it gets weird. As soon as this electron settles into its state, the other electron it was entangled with settles instantaneously into the opposite state, whether it's right next to it or on the opposite side of the world.
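As a toy illustration (a hypothetical simulation I'm adding here, not an actual experiment), you can mimic measuring many such pairs in Python. Each measurement of electron A comes out randomly, but its partner always lands in the opposite state:

```python
import random

# A toy model of measuring many entangled electron pairs along the same axis:
# whichever way one electron is found, its partner is always found opposite.
# This only illustrates the perfect anti-correlation; it says nothing about
# the deeper question of *when* the outcomes were decided.
random.seed(0)

def measure_pair():
    a = random.choice(["up", "down"])  # outcome for electron A is random...
    b = "down" if a == "up" else "up"  # ...but electron B is always opposite
    return a, b

pairs = [measure_pair() for _ in range(10_000)]
anticorrelated = all(a != b for a, b in pairs)
print("Every pair anti-correlated:", anticorrelated)
```

Note that this classical sketch can't distinguish between "the spins were decided at emission" and "the spins were decided at measurement"; telling those apart is exactly what the experiments discussed next are designed to do.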
This ‘instantaneous’ coordination between one electron and the other seems to defy the golden rule of relativity, which states that nothing can travel faster than the speed of light. Logic probably tells you that the two electrons never changed states to begin with, and that one was always up and the other always down. People on one side of this debate would agree with you. However, recent experiments support the former scenario, and they've been performed with entangled particles more than 100 km apart. Quantum entanglement is also playing an integral role in emerging technologies such as quantum computing, quantum cryptography and quantum teleportation.
For as much as I use the words strange and weird to describe quantum mechanics, I actually want to dispel this perception. Labeling something as strange or weird creates a frictional division that I'm personally uncomfortable with. In a field that seeks to find unity in the universe, and a theory to prove it, I feel it's counterproductive to focus on strange differences. Just as someone else's culture may seem strange to you at first, after some time immersing yourself in it you begin to see it's not so strange after all; just a different way of operating. Quantum mechanics is much the same (give it some time, I promise). We also have to remember that although reality within an atom may seem strange to us, it is in fact our reality that is strange, not the atom's, because without the atom our reality would not exist. A way I like to put quantum mechanics in perspective is to think of what some vastly more macroscopic being, blindly probing into our reality, might think of it. He, she or it would probably look at something like spacetime, the fabric from which our universe is constructed, and think it too exhibits some odd properties, some very similar to the wave-particle duality of the quantum world. While Einstein's relativity has taught us that space and time are unquestionably woven together into a singular, four-dimensional entity, there's a duality there much like the one we find in subatomic particles. Time behaves somewhat like a wave in that it has a definite momentum but no definable position (after all, it exists everywhere), while space has a definable, three-dimensional position but no definable momentum, yet both make up our singular experience of this universe. See, if you look hard enough, both of our realities, the big and the small, are indeed weird yet fascinating at the same time. Until next time my friends, stay curious.