Title: Blog by Novelist William S. Frankl, MD

Archive for the ‘Science’ Category

On To Mars/Charles Krauthammer

Friday, June 22nd, 2018

Charles Krauthammer died on June 21, 2018. But his life and his treasure trove of writings will be with us forever. Below is one of his very finest, his case for mankind’s need to leave Earth and travel to the stars.

 

 

On To Mars

The Weekly Standard

January 31, 2000

If you were to say to a physicist in 1899 that in 1999, a hundred years later . . . bombs of unimaginable power would threaten the species; . . . that millions of people would take to the air every hour in aircraft capable of taking off and landing without human touch; . . . that humankind would travel to the moon, and then lose interest . . . the physicist would almost certainly pronounce you mad.

–Michael Crichton

I

WHAT MANNER OF CREATURE ARE WE? It took 100,000 years for humans to get inches off the ground. Then, astonishingly, it took only 66 to get from Kitty Hawk to the moon. And then, still more astonishingly, we lost interest, spending the remaining 30 years of the 20th century going around in circles in low earth orbit, i.e., going nowhere.

 

Last July, the unmanned Lunar Prospector probe was sent to find out whether the moon contains water. It was a remarkable venture, but even more remarkable was the fact that Prospector was the first NASA spacecraft, manned or unmanned, to land on the moon since the last Apollo astronaut departed in 1972. Twenty-seven years without even a glance back.

We remember the late 15th and 16th centuries as the Age of Exploration. The second half of the 20th was at one point known as the Space Age. What happened? For the first 20 years we saw space as a testing ground, an arena for splendid, strenuous exertion. We were in a race with the Soviets for space supremacy, and mobilized for it as for war. President Kennedy committed all of our resources: men, materiel, money, and spirit. And he was bold. When he promised to land a man on the moon before the decade was out, there were only eight and a half years left. At the time, no American had even orbited the earth.

 

The Apollo program was a triumph. But the public quickly grew bored. The interview with the moon-bound astronauts aboard Apollo 13 was not even broadcast, for lack of an audience. It was only when the flight turned into a harrowing drama of survival that an audience assembled. By Apollo 17, it was all over. The final three moonshots were canceled for lack of interest.

Looking to reinvent itself, NASA came up with the idea of a space shuttle ferrying men and machines between earth and an orbiting space station. It was a fine idea except for one thing: There was no space station. Skylab had been launched in May 1973, then manned for 171 days. But no effort was made to keep its orbit from decaying. It fell to earth and burned. We were left with an enormously expensive shuttle–to nowhere.

 

The shuttle has had its successes–the views of earth it brought back, the repairs to the Hubble space telescope it enabled. But it has been a dead end scientifically and deadening spiritually. There is today a palpable ennui with space. When did we last get excited? When a 77-year-old man climbed into the shuttle in November 1998 for a return flight. That was the most excitement the shuttle program had engendered in years–the first time in a long time that a launch and the preparations and even the preflight press conference had received live coverage. Televisions were hauled into classrooms so kids could watch.

 

But watch what? The fact is that we were watching John Glenn reprise a flight he’d made 36 years earlier. It is as if the Wright Brothers had returned to Kitty Hawk in 1939 to skim the sand once again, and the replay was treated as some great advance in aviation.

 

The most disturbing part of the Glenn phenomenon was the efflorescence of space nostalgia–at a time when space exploration is still in its infancy. We have not really gone anywhere yet, and we are already looking back with sweet self-satisfaction.

 

The other flutter of excitement generated by the shuttle program occurred a few years earlier when Shannon Lucid received the Congressional Space Medal of Honor for a long-duration flight in low earth orbit. A sign of the times. She is surely brave and spunky, but the lavish attention her feat garnered says much about the diminished state of our space program. Endurance records are fine. But the Congressional Space Medal of Honor? It used to be given to the likes of Alan Shepard and John Glenn, who had the insane courage to park themselves atop an unstable, spanking-new, largely untested eight-story bomb not knowing whether it would blow up under them. Now we give it for spending six months in an orbiting phone booth with a couple of guys named Yuri.

II

WHAT HAPPENED? Where is the national will to explore? We are stuck along some quiet historical sidetrack. The fascination today is with communication, calculation, miniaturization, all in the service of multiplying human interconnectedness. Outer space has ceded pride of place to the inner space of the Internet. In fact, space’s greatest claim on our interest and resources currently rests on the fact that satellites allow us to page each other and confirm that 9:30 meeting about the new Tostitos ad campaign.

 

The excitement surrounding Shannon Lucid’s six months of sponge baths and Russian food aboard Mir is a reflection of the quiet domesticity of this inward-turning time. Perhaps it is the exhaustion after 60 years of world war, cold and hot, stretching right up to the early 1990s. The “Seinfeld” era is not an era for Odyssean adventures. Now is a time for home and hearth–the glowing computer screen that allows endless intercourse with our fellow humans.

 

Another reason for the diminishing drive for planetary exploration is, perversely, the fruits of the moon landing itself–and in particular that famous photograph of earth taken by the Apollo astronauts during the first human circumnavigation of another celestial body.

“Earthrise” had an important effect on human consciousness. It gave us our first view of earth as it is seen from God’s perspective: warm, safe, serene, blessed. It created a kind of preemptive nostalgia for earth, at precisely the moment when earthlings were finally acquiring the ability to leave it.

 

It is no surprise that “Earthrise” should have become such a cultural icon, particularly for the environmental Left. It offered the cosmic equivalent of the call to “Come home, America” issued just four years after the picture was taken.

 

That photo and the ethos it promoted–global, sedentary, inward-looking–were the metaphysical complement to the political arguments made at the time, and ever since, for turning our gaze from space back to earth. These are the familiar arguments about social priorities: Why are we spending all this money on space, when there is poverty and disease and suffering at home?

 

It is a maddening question because, while often offered in good faith, it entirely misses the point. Poverty and disease will always be with us. We have spent, by most estimates, some $5 trillion trying to abolish poverty in the United States alone. Government is simply not very good at solving social problems. But it can be extremely good at solving technical problems. The Manhattan Project is, of course, the classic case. As are the various technological advances forged in war, from radar to computers.

Concerted national mobilization for a specific scientific objective can have great success. This is in sharp contrast to national mobilization for social objectives, which almost invariably ends in disappointment, waste, and unintended consequences (such as the dependency and deviancy spawned by the massive welfare programs and entitlements of the sixties and the seventies–the Left’s preferred destination for the resources supposedly squandered on space).

 

But more exasperating than the poor social science and the misapprehension about the real capacities of government is the tone-deafness of the earth-firsters to the wonder and glory of space, and to the unique opportunity offered this generation. How can one live at the turn of the 21st century, when the planets are for the first time within our grasp, and not be moved by the grandeur of the enterprise?

 

NASA administrators like to talk about science and spinoffs to justify the space program. Well, the study of bone decalcification in near-earth weightlessness is fine, but it is hardly the motor force behind President Kennedy’s ringing declaration, “We choose to go to the Moon.” That is not why we, as a people and as a species, ventured into the cosmos in the first place.

 

Teflon and pagers are nice, too, and perhaps effective politically in selling space. But they are hardly the point. We are going into space for the same reason George Mallory climbed Everest: Because it is there. For the adventure, for the romance, for the sheer temerity of venturing into the void.

And yet, amid the national psychic letdown that followed the moon landings and is still with us today, that kind of talk seems archaic, anachronistic. So what do we do? We radically contract our horizons. We spend three decades tumbling about in near-earth orbit. We become expert in zero-G nausea and other fascinations. And when we do venture out into the glorious void, we do it on the very cheap, to accommodate the diminished national will and the pinched national resources allocated for exploration.

 

The reason NASA administrator Daniel Goldin adopted the “faster, better, cheaper” approach is that he was forced to. He was rightly afraid that when you send a $1 billion probe loaded with experiments and hardware and it fails (as happened to the Mars Observer in 1993), you risk losing your entire congressional backing–and your entire program. He had little choice but to adopt a strategy of sending cheaper but more vulnerable probes in order to lessen the stakes riding on each launch. Probes like the Mars Polar Lander.

III

WHEN THE MARS POLAR LANDER disappeared last month, the country went into a snit. The public felt let down, cheated of the exotic entertainment NASA was supposed to deliver. The press was peeved, deprived of a nice big story with lovely pictures. Jay Leno, the nation’s leading political indicator, was merciless. (“If you’re stuck for something to get NASA for Christmas, you can’t go wrong with a subscription to Popular Mechanics. . . . But they’re not giving up. NASA said today they’re gonna continue to look for other forms of intelligent life in the universe. And when they find it, they’re gonna hire him.”) And Congress preened, displaying concern, pulling its chin and promising hearings on the failure of the last three Mars missions. This will be a bit of Kabuki theater in which clueless politicians, whose greatest mathematical feat is calculating last week’s fund-raising take, will pinion earnest scientists about why they could not land a go-cart on the South Pole of a body 400 million miles away on a part of the planet we had never explored.

 

In other words, we are in for a spell of national bellyaching and finger-pointing which will inevitably culminate in the crucifixion of a couple of NASA administrators, a few symbolic budget cuts, and a feeling of self-satisfaction all around.

 

The biggest scandal of the Mars exploration projects is not that a few have failed, but the way the nation has reacted to those failures. A people couched and ready, expectant and entitled, armed with a remote control yet denied Martian pictures to go with their “Today” show coffee, will be avenged.

Who is to blame for the Mars disasters? Not the scientists, but the people who will soon be putting them on trial.

 

Landing on another planet is very hard. And landing on its South Pole, terra incognita for us, is even harder. As one researcher put it, this is rocket science. “Look at the history of landers on Mars,” professor Howard McCurdy of American University told the Washington Post. “Of twelve attempts, three have made it. The Soviets lost all six of theirs. . . . Mars really eats spacecraft.”

 

Something this hard requires not just technology–which we have–but will, which we don’t. And national will is expressed in funding. Since the glory days of Apollo, space exploration has progressively been starved. Today, funding for NASA is one fifth what it was in 1965, less than 0.8 percent of the federal budget.

 

And not only has the overall NASA budget declined, but so has the fraction allocated to both manned and unmanned exploration of the moon and the planets. The budget has been eaten by the space shuttle and the low-earth-orbit space station being built two decades late to finally provide a destination for the wandering shuttle.

 

Then there is what NASA calls “mission to planet earth,” a program devoted to studying such terrestrial concerns as ozone, land use, climate variability, and such. A nice idea. But it used to be NASA’s mission to lift us above ozone and land and climate to reach for something higher. The whole idea of space exploration was to find out what is out there.

 

The cost of the Mars Polar Lander was $165 million. In an $8 trillion economy, that is a laughable sum. “Waterworld” cost more. The new Bellagio hotel in Vegas could buy eight Polar Landers with $80 million left over for a bit of gambling. To put it in terms of competing space outlays, $165 million is less than half the cost of a shuttle launch. For the price of a single shuttle mission (launch, flight time, landing, and overhead) we could have sent two Mars Polar Landers and gotten $70 million back in change.

Planetary exploration is so hamstrung financially that the Polar Lander–which NASA last week officially declared dead–sent no telemetry during its final descent onto the planet. That was to save money. We’ll never know what went wrong. Adding a black box, something to send simple signals to tell us what happened, would have cost $5 million. Five million! That doesn’t buy one minute of air time on the Super Bowl.

 

The hard fact is that the kind of cheap, fast spacecraft NASA has been forced to build does reduce the loss in case of failure. But it increases the chance of failure. You cannot build in the kind of backup systems that go into the larger craft we sent exploring in the past. The Viking missions that 25 years ago touched down on Mars and gave us those extraordinary first pictures of its surface, and the Voyager spacecraft that gave us magnificent flybys of the entire solar system, typically cost 10 to 20 times more than the new “faster, better, cheaper” projects.

 

It is a travesty that the very same Congress that has squeezed funding for these programs will now be conducting the inquisition to find out why this shoestring operation could not produce another spectacular success. But we can’t just blame the politicians. This is a democracy. They are responding to their constituency. Their constituency is disappointed that it received no entertainment from the Mars Polar Lander, for which the average American contributed the equivalent of half a cheeseburger. If we had had the will to devote a whole cheeseburger to a Mars lander, it could have been equipped with redundant systems, and might have succeeded.

IV

THE FAULT, dear Brutus, is not in our stars, but in ourselves. What then to do? If we are going to save resources in acknowledgment of the diminished national will to explore, we should begin by shutting the maw that is swallowing up so much of the space budget: the shuttle and the space station. It is not as if we have nowhere to go but endlessly around earth. Recent discoveries have given us new ways and new reasons for establishing a human presence on the moon and on Mars.

 

Until a few years ago, it could have been argued that a moon base was impractical, and human Mars exploration even more so. But there is evidence that there may be water on the moon (in the form of ice, of course). And water, there as here, is the key to everything. It could provide both life support and fuel. Similarly, the fact that there is ice on Mars has led to a revolution in thinking about how we can travel there and back. Instead of carrying huge stores of fuel, which would make the launch vehicle enormously expensive and cumbersome, we could send unmanned spacecraft ahead. They would land on Mars and turn the water into life support and fuel. (If you split water, you get hydrogen and oxygen, precisely the gases that you need for life and for propulsion.) Astronauts could travel fairly light, arriving at a place already prepared with life-sustaining water, oxygen, and hydrogen for the flight back.

 

The moon and Mars are beckoning. So why are we spending so much of our resources building a tinker-toy space station? In part because, a quarter-century late, we still need something to justify the shuttle. Yet the space station’s purpose has shrunk to almost nothing. No one takes seriously its claims to be a platform for real science. And the original idea–hatched in the 1950s–that it would be a way station to the moon and Mars, was overtaken in the sixties when we found more efficient ways to fully escape earth’s gravitational well.

 

The space station’s main purpose now appears to be . . . fostering international cooperation. It became too expensive for the United States to do alone, and so we decided to share the cost and control. It provides a convenient back door for American funding of the bankrupt Russian space program. We send Russia the money to build its space station modules. This is supposed to promote friendship and keep Russian rocket scientists from moving to Baghdad.

 

The cost to the United States? Twenty-one billion dollars, enough to support 127 Polar Landers. Instead of squandering $21 billion on a weightless United Nations (don’t we have one of these already?), we should be directing our resources at the next logical step: a moon base. It would be a magnificent platform for science, for observation of the universe, and for industry. It would also be good training for Mars. And it would begin the ultimate adventure: the colonization of other worlds.

 

In 1991, the Stafford Commission recommended the establishment of permanent human outposts on the moon and on Mars by the early decades of this century. Rather than frittering away billions on the space station, we should be going right now to the moon–where we’ve been, where we know how to go, and where we might very well discover life-sustaining materials. And from there, on to the planets.

 

In the end, we will surely go. But how long will it take? Five hundred years from now–a time as distant from us as is Columbus–a party of settlers on excursion to Mars’s South Pole will stumble across some strange wreckage, just as today we stumble across the wreckage of long-forgotten ships caught in Arctic ice. They’ll wonder what manner of creature it was that sent it.

 

What will we have told them? That after millennia of gazing at the heavens, we took one step into the void, then turned and, for the longest time, retreated to home and hearth? Or that we retained our nerve and hunger for horizons, and embraced our destiny?

 

Charles Krauthammer

 

 

The Hoover Institution/Victor Davis Hanson

Sunday, May 13th, 2018

Victor Davis Hanson was one of the top conservative thinkers of the 20th century and remains so in our early 21st century. He has just received a highly coveted award from the Hoover Institution at Stanford University.

Victor Davis Hanson Wins Edmund Burke Award
Wednesday, May 2, 2018
Hoover Institution, Stanford University

This week Victor Davis Hanson won the 2018 Edmund Burke Award, which honors people who have made major contributions to the defense of Western civilization.

The honor is given annually by The New Criterion, a monthly journal of the arts and intellectual life. Edmund Burke was an 18th century Irish political philosopher who is credited with laying the foundations of modern conservatism.

Hanson, the Martin and Illie Anderson Senior Fellow at the Hoover Institution, studies and writes about the classics and military history. He received the sixth Edmund Burke Award for Service to Culture and Society at an April 26 dinner in New York City.

“I was honored to receive the award because Edmund Burke is often identified as both a defender of republican values and traditions and a foe of both autocracy and the radical mob rule of the French Revolution. I grew up on a farm and still live there most of the week. I’ve learned over a lifetime from rural neighbors and friends that agrarianism can inculcate a natural conservatism that I think Burke and others saw as an essential check on radicalism and an independence necessary to resist authoritarianism,” Hanson wrote in an email afterwards.

He noted that “candor, truth, and defiance in the face of historical and unfounded attacks on the West are essential.”

Western civilization has always been the only nexus where freedom, tolerance, constitutional government, human rights, economic prosperity, and security can be found together in their entirety, Hanson added.

“We can see that in the one-way nature of migrations from non-West to the West and in the alternatives on the world scene today. The great dangers to the West, ancient and modern, have always been its own successes, or rather the combination of the affluence that results from free-market capitalism and the entitlement accruing from consensual government. The result is that Westerners can become complacent, hypercritical of their own institutions, and convinced that they are not good if not perfect, or that the sins of mankind are the unique sins of the West,” he said.

This complacence, he said, and the idea that “utopia is attainable often results in amnesia” about the past and a sort of ignorance about the often brutal way the world works outside the West.

“Obviously if we do not defend our unique past and culture, who else will?” he said.

In his remarks on April 26, Roger Kimball, the editor and publisher of The New Criterion, said “Victor cuts across the chattering static of the ephemeral, bringing us back to a wisdom that is as clear-eyed and disabused as it is generous and serene.”

Hanson is also the chairman of the Role of Military History in Contemporary Conflict Working Group at the Hoover Institution.

Is Mount Mantap Just “Tired” or “Completely Exhausted”?

Sunday, May 13th, 2018

While President Trump and Secretary of State Pompeo are touting how great it is that North Korea is dismantling its nuclear test site, it might not at all be due to a turn toward peace and a nuclear-free Korean Peninsula. Please read below what the real reason is likely to be.

Live Science
Why is North Korea Shutting Down Its Nuclear Test Site?
by Yasemen Saplakoglu
April 27, 2018

A train of mining carts and new structure are seen at the West Portal spoil pile within the Punggye-ri Nuclear Test Site in North Korea on April 20, 2018. The testing site sits on Mount Mantap, which seems to have “tired mountain syndrome.”
Credit: DigitalGlobe/38 North via Getty Images

Last week, North Korea announced that it will cease all nuclear testing and will shut down its main testing facility at Mount Mantap. Although some believe the decision came because of easing tensions between the country and the world, others think Mount Mantap may have come down with a bad case of “tired mountain syndrome.”

But what exactly is tired mountain syndrome, and how does a mountain “catch” it?

It turns out that repeated nuclear blasts can weaken the rock around underground nuclear test sites, eventually making them unsafe or unusable — which might have happened with North Korea’s preferred testing grounds.
Powerful explosions

The hermit country’s latest nuclear test, conducted in September 2017 at Punggye-ri, was at least 17 times more powerful than the bomb that was dropped on Hiroshima, Japan, in 1945, according to The Washington Post.

In fact, the explosion registered as a magnitude-6.3 earthquake, and before-and-after satellite shots showed visible movement at Mount Mantap — a 7,200-foot-high (2,200 meters) mountain under which deeply buried tunnels house most of the tests. Some geologists think that the mountain is cracking under the pressure.

“You can take a piece of rock and set it on the ground, take a hammer, tap it; nothing will happen,” said Dale Anderson, a seismologist at Los Alamos National Laboratory. You keep tapping it — and, say — the 21st time, “it will break and crack open.”

When a nuclear explosion goes off inside a mountain, it breaks the surrounding rock, and the energy propagates out like a wave (imagine throwing a pebble into a lake). But as more explosions go off around the same — but not exact — spot, rocks that are farther away also begin to crumble under repeated stress.

“The accumulated effect of these explosions that weaken rocks and create that fracturing [farther away from the point of explosion] is what we call tired mountain syndrome,” Anderson told Live Science.

Tired mountain syndrome can also stymie scientists trying to measure how strong an explosion is, he said. The propagating energy scatters around these fractured rocks before reaching the sensors, so the explosion registers as a lot weaker than it actually is, he added.

But this effect “has nothing to do with being able to use the facility,” Anderson said.

In fact, a country can keep using the site but must adjust the mathematical equations it uses so that the final magnitude of the explosion takes tired mountain syndrome into account.
Toxic seepage

If nuclear test sites are shut down, Anderson said, it’s usually an indirect consequence of the syndrome. Mountains with this condition become much more permeable, meaning that more pathways open up for gas and liquid to travel through the rock. This means there’s a greater chance for radioactive gas — with the most concerning being xenon — to escape the rock and seep out to the surface, Anderson said.

“Mother nature has already fractured the rock,” Anderson said. “When an explosion goes off, sometimes damage [from it] will connect with natural fractures, and you can conceivably get a pathway up to the surface, and gases will seep out.”

The process by which gas could be pulled up and through the rock is called barometric pumping.

A group of Chinese geologists said on Wednesday (April 25) that they believe the nuclear test site had collapsed and that Mount Mantap was in “fragile fragments,” according to The Washington Post. But William Leith, the senior science adviser for earthquake and geologic hazards at the U.S. Geological Survey — who with one other scientist coined the term to describe a Soviet nuclear testing site in 2001 — doesn’t think so. In an interview with CBC Radio in October, when asked if the mountain in North Korea was tired, he said, “I would say, ‘not very tired.’ And that’s because they’ve only had, as far as we know, six underground nuclear explosions, and there’s a lot of mountain left there.”

In comparison, he and his colleagues first used the term to describe Degelen Mountain in the former Soviet Union (now Kazakhstan), which was battered by more than 200 explosions.

North Korea’s mountain may be tired — but whether it’s completely exhausted is difficult to say.

Quantum Mechanics

Sunday, May 13th, 2018

LIVE SCIENCE

What Is Quantum Mechanics?
By Robert Coolman
9/26/2014

Quantum mechanics is the branch of physics relating to the very small.

It results in what may appear to be some very strange conclusions about the physical world. At the scale of atoms and electrons, many of the equations of classical mechanics, which describe how things move at everyday sizes and speeds, cease to be useful. In classical mechanics, objects exist in a specific place at a specific time. However, in quantum mechanics, objects instead exist in a haze of probability; they have a certain chance of being at point A, another chance of being at point B and so on.
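To make the “haze of probability” concrete, here is a minimal sketch (my own illustration, not from the article) of a particle described by a quantum state spread over two locations; the squared magnitudes of the amplitudes give the chance of finding it at point A or at point B.

```python
import numpy as np

# Illustrative two-site quantum state: complex amplitudes for "particle at
# point A" and "particle at point B". Their squared magnitudes are
# probabilities and must add up to 1.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # an equal-weight superposition of A and B

prob_A, prob_B = np.abs(psi) ** 2
print(f"P(found at A) = {prob_A:.2f}")              # 0.50
print(f"P(found at B) = {prob_B:.2f}")              # 0.50
print(f"Total probability = {prob_A + prob_B:.2f}") # 1.00
```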
Three revolutionary principles

Quantum mechanics (QM) developed over many decades, beginning as a set of controversial mathematical explanations of experiments that the math of classical mechanics could not explain. It began at the turn of the 20th century, around the same time that Albert Einstein published his theory of relativity, a separate mathematical revolution in physics that describes the motion of things at high speeds. Unlike relativity, however, the origins of QM cannot be attributed to any one scientist. Rather, multiple scientists contributed to a foundation of three revolutionary principles that gradually gained acceptance and experimental verification between 1900 and 1930. They are:

Quantized properties: Certain properties, such as position, speed and color, can sometimes only occur in specific, set amounts, much like a dial that “clicks” from number to number. This challenged a fundamental assumption of classical mechanics, which said that such properties should exist on a smooth, continuous spectrum. To describe the idea that some properties “clicked” like a dial with specific settings, scientists coined the word “quantized.”

Particles of light: Light can sometimes behave as a particle. This was initially met with harsh criticism, as it ran contrary to 200 years of experiments showing that light behaved as a wave, much like ripples on the surface of a calm lake. Light behaves similarly: it bounces off walls and bends around corners, and the crests and troughs of its waves can add up or cancel out. Added wave crests result in brighter light, while waves that cancel out produce darkness. A light source can be thought of as a ball on a stick being rhythmically dipped in the center of a lake. The color emitted corresponds to the distance between the crests, which is determined by the speed of the ball’s rhythm.

Waves of matter: Matter can also behave as a wave. This ran counter to the roughly 30 years of experiments showing that matter (such as electrons) exists as particles.
Quantized properties?

In 1900, German physicist Max Planck sought to explain the distribution of colors emitted over the spectrum in the glow of red-hot and white-hot objects, such as light-bulb filaments. When making physical sense of the equation he had derived to describe this distribution, Planck realized it implied that combinations of only certain colors (albeit a great number of them) were emitted, specifically those that were whole-number multiples of some base value. Somehow, colors were quantized! This was unexpected because light was understood to act as a wave, meaning that values of color should be a continuous spectrum. What could be forbidding atoms from producing the colors between these whole-number multiples? This seemed so strange that Planck regarded quantization as nothing more than a mathematical trick. According to Helge Kragh in his 2000 article in Physics World magazine, “Max Planck, the Reluctant Revolutionary,” “If a revolution occurred in physics in December 1900, nobody seemed to notice it. Planck was no exception …”

Planck’s equation also contained a number that would later become very important to future development of QM; today, it’s known as “Planck’s Constant.”
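As a rough illustration of what “quantized” means here (a sketch using standard constants, not Planck’s actual derivation): the energy exchanged with light of a given frequency comes only in whole-number multiples of h times that frequency, where h is Planck’s constant.

```python
# Energy of light of frequency nu can only be exchanged in whole-number
# multiples of h * nu (Planck, 1900). Standard values of the constants are used.
h = 6.626e-34          # Planck's constant, joule-seconds
nu = 5.0e14            # frequency of visible (greenish) light, hertz

for n in range(1, 4):  # n must be a whole number -- that is the quantization
    print(f"n = {n}: E = {n * h * nu:.3e} J")
# Energies between these steps (say, 1.5 * h * nu) are simply not allowed.
```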

Quantization helped to explain other mysteries of physics. In 1907, Einstein used Planck’s hypothesis of quantization to explain why the temperature of a solid changed by different amounts if you put the same amount of heat into the material but changed the starting temperature.

Since the early 1800s, the science of spectroscopy had shown that different elements emit and absorb specific colors of light called “spectral lines.” Though spectroscopy was a reliable method for determining the elements contained in objects such as distant stars, scientists were puzzled about why each element gave off those specific lines in the first place. In 1888, Johannes Rydberg derived an equation that described the spectral lines emitted by hydrogen, though nobody could explain why the equation worked. This changed in 1913 when Niels Bohr applied Planck’s hypothesis of quantization to Ernest Rutherford’s 1911 “planetary” model of the atom, which postulated that electrons orbited the nucleus the same way that planets orbit the sun. According to Physics 2000 (a site from the University of Colorado), Bohr proposed that electrons were restricted to “special” orbits around an atom’s nucleus. They could “jump” between special orbits, and the energy produced by the jump caused specific colors of light, observed as spectral lines. Though quantized properties were invented as but a mere mathematical trick, they explained so much that they became the founding principle of QM.
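For the curious, the hydrogen lines Rydberg described follow from a short formula, and the sketch below (standard textbook values, my own illustration rather than anything from the article) computes the visible Balmer lines that Bohr’s quantized “jumps” later explained.

```python
# Rydberg formula for hydrogen: 1/wavelength = R * (1/n_low^2 - 1/n_high^2)
R = 1.097e7  # Rydberg constant for hydrogen, per meter

def balmer_wavelength_nm(n_high, n_low=2):
    """Wavelength of the photon emitted when an electron jumps from n_high down to n_low."""
    inv_wavelength = R * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_wavelength  # convert meters to nanometers

for n in (3, 4, 5):
    print(f"jump {n} -> 2: {balmer_wavelength_nm(n):.0f} nm")
# ~656 nm (red), ~486 nm (blue-green), ~434 nm (violet): the observed spectral lines.
```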
Particles of light?

In 1905, Einstein published a paper, “Concerning an Heuristic Point of View Toward the Emission and Transformation of Light,” in which he envisioned light traveling not as a wave, but as some manner of “energy quanta.” This packet of energy, Einstein suggested, could “be absorbed or generated only as a whole,” specifically when an atom “jumps” between quantized vibration rates. This would also apply, as would be shown a few years later, when an electron “jumps” between quantized orbits. Under this model, Einstein’s “energy quanta” contained the energy difference of the jump; when divided by Planck’s constant, that energy difference determined the color of light carried by those quanta.

With this new way to envision light, Einstein offered insights into the behavior of nine different phenomena, including the specific colors that Planck described being emitted from a light-bulb filament. It also explained how certain colors of light could eject electrons off metal surfaces, a phenomenon known as the “photoelectric effect.” However, Einstein wasn’t wholly justified in taking this leap, said Stephen Klassen, an associate professor of physics at the University of Winnipeg. In a 2008 paper, “The Photoelectric Effect: Rehabilitating the Story for the Physics Classroom,” Klassen states that Einstein’s energy quanta aren’t necessary for explaining all of those nine phenomena. Certain mathematical treatments of light as a wave are still capable of describing both the specific colors that Planck described being emitted from a light-bulb filament and the photoelectric effect. Indeed, in Einstein’s controversial winning of the 1921 Nobel Prize, the Nobel committee only acknowledged “his discovery of the law of the photoelectric effect,” which specifically did not rely on the notion of energy quanta.
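In the simple photon picture, the photoelectric effect reduces to a one-line comparison: an electron is ejected only if the photon energy hν exceeds the metal’s work function. The sketch below is a hedged illustration; the work function is an approximate textbook value for sodium, chosen only as an example.

```python
h = 6.626e-34       # Planck's constant, J*s
c = 3.0e8           # speed of light, m/s
eV = 1.602e-19      # joules per electron-volt

work_function = 2.3 * eV   # rough textbook value for sodium, used only for illustration

def electron_ejected(wavelength_nm):
    """True if a photon of this wavelength carries enough energy to free an electron."""
    photon_energy = h * c / (wavelength_nm * 1e-9)
    return photon_energy > work_function

print(electron_ejected(400))   # True  -- violet light ejects electrons
print(electron_ejected(700))   # False -- red light does not, no matter how bright
```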

Roughly two decades after Einstein’s paper, the term “photon” was popularized for describing energy quanta, thanks to the 1923 work of Arthur Compton, who showed that light scattered by an electron beam changed in color. This showed that particles of light (photons) were indeed colliding with particles of matter (electrons), thus confirming Einstein’s hypothesis. By now, it was clear that light could behave both as a wave and a particle, placing light’s “wave-particle duality” into the foundation of QM.
Waves of matter?

Since the discovery of the electron in 1897, evidence that all matter existed in the form of particles was slowly building. Still, the demonstration of light’s wave-particle duality made scientists question whether matter was limited to acting only as particles. Perhaps wave-particle duality could ring true for matter as well? The first scientist to make substantial headway with this reasoning was a French physicist named Louis de Broglie. In 1924, de Broglie used the equations of Einstein’s theory of special relativity to show that particles can exhibit wave-like characteristics, and that waves can exhibit particle-like characteristics. Then in 1925, two scientists, working independently and using separate lines of mathematical thinking, applied de Broglie’s reasoning to explain how electrons whizzed around in atoms (a phenomenon that was unexplainable using the equations of classical mechanics). In Germany, physicist Werner Heisenberg (teaming with Max Born and Pascual Jordan) accomplished this by developing “matrix mechanics.” Austrian physicist Erwin Schrödinger developed a similar theory called “wave mechanics.” Schrödinger showed in 1926 that these two approaches were equivalent (though Swiss physicist Wolfgang Pauli sent an unpublished result to Jordan showing that matrix mechanics was more complete).
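De Broglie’s relation is compact enough to check directly: a particle with momentum p behaves like a wave with wavelength h/p. The sketch below (standard constants, illustrative speeds of my choosing) shows why the wave nature of an electron matters while that of a thrown baseball never does.

```python
h = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h / p, de Broglie (1924)."""
    return h / (mass_kg * speed_m_s)

# Electron at a typical atomic-scale speed (illustrative value)
print(de_broglie_wavelength(9.11e-31, 2.2e6))   # ~3.3e-10 m, roughly the size of an atom

# Baseball (illustrative mass and speed)
print(de_broglie_wavelength(0.145, 40.0))       # ~1.1e-34 m, far too small to ever observe
```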

The Heisenberg-Schrödinger model of the atom, in which each electron acts as a wave (sometimes referred to as a “cloud”) around the nucleus of an atom, replaced the Rutherford-Bohr model. One stipulation of the new model was that the ends of the wave that forms an electron must meet. In “Quantum Mechanics in Chemistry, 3rd Ed.” (W.A. Benjamin, 1981), Melvin Hanna writes, “The imposition of the boundary conditions has restricted the energy to discrete values.” A consequence of this stipulation is that only whole numbers of crests and troughs are allowed, which explains why some properties are quantized. In the Heisenberg-Schrödinger model of the atom, electrons obey a “wave function” and occupy “orbitals” rather than orbits. Unlike the circular orbits of the Rutherford-Bohr model, atomic orbitals have a variety of shapes ranging from spheres to dumbbells to daisies.
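The “ends of the wave must meet” stipulation can be illustrated with the simplest standing-wave case, a particle confined to a box. This is a standard textbook model rather than anything specific to the article: requiring a whole number of half-wavelengths to fit between the walls forces the energy into discrete steps.

```python
# Particle in a one-dimensional box of length L: the wave must vanish at both walls,
# so only whole numbers n of half-wavelengths fit, giving E_n = n^2 * h^2 / (8 * m * L^2).
h = 6.626e-34        # Planck's constant, J*s
m = 9.11e-31         # electron mass, kg
L = 1e-9             # box length, 1 nanometer (illustrative)

for n in range(1, 4):
    E = n**2 * h**2 / (8 * m * L**2)
    print(f"n = {n}: E = {E:.2e} J")
# The allowed energies are discrete; the boundary condition forbids anything in between.
```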

In 1927, Walter Heitler and Fritz London further developed wave mechanics to show how atomic orbitals could combine to form molecular orbitals, effectively showing why atoms bond to one another to form molecules. This was yet another problem that had been unsolvable using the math of classical mechanics. These insights gave rise to the field of “quantum chemistry.”
The uncertainty principle

Also in 1927, Heisenberg made another major contribution to quantum physics. He reasoned that since matter acts as waves, some properties, such as an electron’s position and speed, are “complementary,” meaning there’s a limit (related to Planck’s constant) to how well the precision of each property can be known. Under what would come to be called “Heisenberg’s uncertainty principle,” it was reasoned that the more precisely an electron’s position is known, the less precisely its speed can be known, and vice versa. This uncertainty principle applies to everyday-size objects as well, but is not noticeable because the lack of precision is extraordinarily tiny. According to Dave Slaven of Morningside College (Sioux City, IA), if a baseball’s speed is known to within a precision of 0.1 mph, the maximum precision to which it is possible to know the ball’s position is 0.000000000000000000000000000008 millimeters.
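The baseball figure can be checked against the uncertainty relation itself, delta-x times delta-p being at least hbar over 2. The sketch below assumes a standard baseball mass of about 0.145 kg (the article does not state one) and reproduces a minimum position uncertainty of roughly 8 x 10^-30 millimeters.

```python
hbar = 1.0546e-34          # reduced Planck constant, J*s
mass = 0.145               # baseball mass in kg (assumed typical value)
dv = 0.1 * 0.44704         # speed uncertainty: 0.1 mph converted to m/s

dp = mass * dv             # momentum uncertainty
dx_m = hbar / (2 * dp)     # minimum position uncertainty from the Heisenberg relation
print(f"{dx_m * 1000:.1e} mm")   # ~8.1e-30 mm, matching the figure quoted above
```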
Onward

The principles of quantization, wave-particle duality and the uncertainty principle ushered in a new era for QM. In 1927, Paul Dirac applied a quantum understanding of electric and magnetic fields to give rise to the study of “quantum field theory” (QFT), which treated particles (such as photons and electrons) as excited states of an underlying physical field. Work in QFT continued for a decade until scientists hit a roadblock: Many equations in QFT stopped making physical sense because they produced results of infinity. After a decade of stagnation, Hans Bethe made a breakthrough in 1947 using a technique called “renormalization.” Here, Bethe realized that all of the infinite results could be traced to two phenomena (specifically “electron self-energy” and “vacuum polarization”), and that the observed values of electron mass and electron charge could be used to make all the infinities disappear.

Since the breakthrough of renormalization, QFT has served as the foundation for developing quantum theories about the four fundamental forces of nature: 1) electromagnetism, 2) the weak nuclear force, 3) the strong nuclear force and 4) gravity. The first insight provided by QFT was a quantum description of electromagnetism through “quantum electrodynamics” (QED), which made strides in the late 1940s and early 1950s. Next was a quantum description of the weak nuclear force, which was unified with electromagnetism to build “electroweak theory” (EWT) throughout the 1960s. Finally came a quantum treatment of the strong nuclear force using “quantum chromodynamics” (QCD) in the 1960s and 1970s. The theories of QED, EWT and QCD together form the basis of the Standard Model of particle physics. Unfortunately, QFT has yet to produce a quantum theory of gravity. That quest continues today in the studies of string theory and loop quantum gravity.

Robert Coolman is a graduate researcher at the University of Wisconsin-Madison, finishing up his Ph.D. in chemical engineering. He writes about math, science and how they interact with history.

Einstein/Spooky Action At A Distance

Sunday, May 13th, 2018

If one finds this difficult to understand, read it over several times and then read the next post on Quantum Mechanics. Read that over slowly and more than several times.

Understanding all this is worth the effort. It is, in some ways, a look into the far distant future.

Live Science

Biggest Test Yet Shows Einstein Was Wrong About ‘Spooky Action at a Distance’

By Mindy Weisberger, Senior Writer | May 9, 2018 04:53pm ET

A groundbreaking quantum experiment recently confirmed the reality of “spooky action at a distance” — the bizarre phenomenon that Einstein hated — in which linked particles separated by great distances seemingly communicate faster than the speed of light. And all it took was 12 teams of physicists in 10 countries, more than 100,000 volunteer gamers and over 97 million data units — all of which were randomly generated by hand. The volunteers operated from locations around the world, playing an online video game on Nov. 30, 2016, that produced millions of bits, or “binary digits” — the smallest unit of computer data.

Physicists then used those random bits in so-called Bell tests, designed to show that entangled particles, or particles whose states are mysteriously linked, can somehow transfer information faster than light can travel, and that these particles seem to “choose” their states at the moment they are measured.

Their findings, recently reported in a new study, contradicted Einstein’s description of a state known as “local realism,” study co-author Morgan Mitchell, a professor of quantum optics at the Institute of Photonic Sciences in Barcelona, Spain, told Live Science in an email.

“We showed that Einstein’s world-view of local realism, in which things have properties whether or not you observe them, and no influence travels faster than light, cannot be true — at least one of those things must be false,” Mitchell said.

This introduces the likelihood of two mind-bending scenarios: Either our observations of the world actually change it, or particles are communicating with each other in some manner that we can’t see or influence.

“Or possibly both,” Mitchell added.

Einstein’s worldview — Is it true?

Since the 1970s, physicists have tested the plausibility of local realism by using experiments called Bell tests, first proposed in the 1960s by Irish physicist John Bell. To conduct these Bell tests, physicists compare randomly chosen measurements, such as the polarization of two entangled particles, like photons, that exist in different locations. If one photon is polarized in one direction (say, up), the other will be going sideways only a certain percentage of the time. If the number of times that the particle measurements mirror each other goes above that threshold — regardless of what the particles are or the order in which the measurements are selected — that suggests the separated particles “choose” their state only at the moment they are measured. And it implies that the particles can instantly communicate with each other — the so-called spooky action at a distance that bothered Einstein so much.
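To make the “threshold” idea concrete, here is a minimal textbook-style CHSH calculation (my own illustration, not the specific procedure used in the study described here): local realism caps a certain combination of correlations at 2, while quantum mechanics predicts up to about 2.83 for suitably chosen polarizer angles.

```python
import math

# Quantum prediction for the correlation between polarization measurements at
# angles a and b on a pair of entangled photons (standard textbook result): cos(2*(a - b)).
def E(a_deg, b_deg):
    return math.cos(math.radians(2 * (a_deg - b_deg)))

# Standard CHSH angle choices (illustrative)
a, a2, b, b2 = 0, 45, 22.5, 67.5
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(f"S = {S:.3f}")   # ~2.828, i.e. 2 * sqrt(2)
# Any local-realist model must give |S| <= 2; measured values above 2 are what
# Bell tests look for, and what rules out Einstein's local realism.
```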

These synched responses thereby contradict the notion of genuinely independent existence, a view that forms the foundation of the principle of local realism upon which the rules of classical mechanics are based. But, time after time, tests have shown that entangled particles do demonstrate correlated states that exceed the threshold; that the world is, indeed, spooky; and that Einstein was wrong.


However, Bell tests require that the choice of what to measure should be truly random. And that’s hard to show, since unseen factors can influence researchers’ selections, and even computers’ random data generation isn’t truly random. This creates a flaw in Bell tests known as the freedom-of-choice loophole — the possibility that “hidden variables” could influence the settings used in the experiments, the scientists reported. If the measurements aren’t truly random, the Bell tests can’t definitively rule out local realism.

For the new study, the researchers wanted to gather an enormous amount of human-produced data, to be certain they were incorporating true randomness in their calculations. That data enabled them to conduct a broader test of local reality than had ever been done before, and at the same time, it allowed them to close the persistent loophole, the researchers claimed.

“Local realism is a question we can’t fully answer with a machine,” Mitchell said in a statement. “It seems we ourselves must be part of the experiment, to keep the universe honest.”
Random number generators

Their effort, dubbed the Big Bell Test, engaged players — or “Bellsters” — in an online tapping game called Big Bell Quest. Players quickly and repeatedly tapped two buttons on a screen, with respective values of one and zero. Their choices streamed to laboratories on five continents, where the participants’ random choices were used to select measurement settings for comparing entangled particles, the researchers reported.


Each of the laboratories performed different experiments, using different particles — single atoms, groups of atoms, photons and superconducting devices — and their results showed “strong disagreement with local realism” in a variety of tests, according to the study, which was published online today (May 9) in the journal Nature.

The experiments also demonstrated an intriguing similarity between humans and quantum particles, related to randomness and free will. If the Bell tests’ human-influenced measurements were truly random — not influenced by the entangled particles themselves — then the behaviors of both humans and particles were random, Mitchell explained.
“If we are free, so are they,” he said.
Original article on Live Science.


William S. Frankl, MD, All Rights Reserved