Tuesday, May 19, 2015

ENERGY FOR FUTURE PRESIDENTS (Up to Part II, Chapter 4)

Muller, Richard A. Energy for Future Presidents: The Science Behind the Headlines. New York: W. W. Norton and Company, 2012.

The print version of this book is 368 pages in length.

In this book, the author presents lessons to future presidents on various sectors of energy use and alternative energy prospects, with the goal of clarifying, correcting, and expanding on the information behind the news headlines.  From the author’s perspective, the president has a responsibility to be knowledgeable about these areas and to be a “teacher” to the public, drawing on information that goes beyond the headlines so that people can make informed decisions about energy. He tackles a wide-ranging list of energy and energy-related topics including: energy-related disasters, global warming, shale oil, alternatives to transportation fuel, energy efficiency and conservation, solar energy, wind energy, energy storage, nuclear power, biofuels, synfuels, hydrogen fuel, hybrid autos, and carbon dioxide mitigation measures.

I chose this book despite its broad coverage because energy is a shared purview of both physics and chemistry.  The theme of the book is to look at headlines and provide a scientific and mathematical perspective that informs people’s interpretation and perception of these issues.  These are the same headlines that the president, my students, and I invariably read every day and that are, for many, the primary source of information.

In Part I, the author provides his perspectives on 3 major energy catastrophes, presenting some facts, his interpretation of the risks and ramifications, and his opinion on how these should inform government decisions and responses.

The first chapter deals with the Fukushima reactor meltdown following damage from the earthquake and tsunami of March 2011.  He predicts that the number of deaths from cancer caused by the radiation will be small, less than 1% of the death toll from the earthquake and tsunami themselves. On this basis, he proposes that nuclear reactors be built strong enough that fewer deaths result from radiation released when they are damaged than from whatever caused the damage.  He also proposes using the average annual radiation dose that people in Denver receive as a standard for determining the disaster response during a radiation release.  Against these two standards, he argues that Fukushima was actually adequately designed, based on the low projected human death toll, despite the fact that it was not built to withstand a 9.0 earthquake and a 50-foot tsunami.

In Chapter 2, the author questions the President’s characterization of the Gulf oil spill of 2010, caused by the Deepwater Horizon oil rig accident, as the “greatest environmental disaster” in history. He argues that the ensuing animal deaths, at around 6,000, were small relative to the hundreds of millions of bird deaths each year from collisions with glass windows and high-voltage lines.  The beaches remained relatively clean compared to the damage done to the Alaskan shores by the Exxon Valdez spill.  He “senses” that the overreaction did more damage to the region, in terms of its effect on tourism and the local economy, than the spill itself.

Chapter 3 covers quite a bit of material, starting with the author’s presentation of his group’s efforts to confirm temperature-increase data.  His group, through the Berkeley Earth Surface Temperature project, did extensive analysis of temperature data previously not included in the IPCC analysis and a re-analysis of temperature records (1.6 billion temperature measurements, 14 data sets, 38,000 stations), putting in measures to avoid data-selection and data-correction bias and station-quality bias, and testing for urban heat bias. To the author’s surprise, they came up with the same temperature rise reported by the IPCC, 0.9 Celsius over land, concluding that “none of the legitimate concerns of the skeptics had improperly biased the prior results”, which suggested to the author that “those groups had been vigilant in their analysis and treated the potential biases with appropriate care”. Furthermore, they demonstrated a close agreement between the temperature-rise curve and the carbon dioxide rise curve when a smooth fit incorporating volcanic eruption data was done. The excellent fit between the temperature and CO2 curves “suggests that most – maybe all – of the warming of the past 250 years was caused by humans”, according to the author.

Based on these results, the author offers the following prediction: if the CO2 concentration increases exponentially and the greenhouse-gas effect increases logarithmically, then the warming should grow linearly: doubling the time interval doubles the temperature rise.  For example, assuming exponential growth of CO2 concentration, by 2052 the CO2 concentration would have doubled to 560 ppm, with a corresponding rise in land temperature of 1.6 Celsius. Forty years after 2052 there will be an additional 1.6 Celsius rise, and so on every 40 years until the CO2 rise is mitigated.

In the section on tipping points, the author discusses some positive and negative feedbacks that may occur as a result of increased CO2 and warming. A strong positive feedback can lead to runaway greenhouse warming. The tipping points identified so far are: loosening of the Antarctic ice sheet, which could slip into the sea and produce over 100 feet of sea level rise; melting of freshwater in Greenland, which can disrupt the Gulf Stream and change sea current flow as far away as the Pacific; melting of permafrost and release of the potent greenhouse gas methane, leading to further warming; and release of methane from the seabed as the Arctic water warms. An example of a negative feedback is an increase in water vapor cloud cover; a mere 2% increase could cancel any expected further warming from a doubling of CO2 concentration.

The author believes that the only solid evidence of warming is the temperature data; all other effects attributed to warming are “either wrong or distorted”.  In this section, he presents his views on these effects and how they may or may not be accurate or correlated with warming temperatures: hurricanes, tornadoes, polar warming, the so-called hockey stick data, and sea level rise.

Toward the end, the author asks, “Can global warming be stopped, assuming it is a threat?” He highlights the important role of the developing nations in decreasing CO2 emissions, even though most of what is in the atmosphere now was put there by developed nations.  The emerging economies need to cut emissions intensity by 8-10% per year just to stabilize greenhouse emissions. Low-cost solutions and a switch from coal to natural gas are required to help China and other emerging nations cut emissions. The author believes that geoengineering solutions may never be taken seriously because of the danger of further altering the earth’s geochemistry and atmospheric chemistry without knowing the ultimate consequences.

Lastly, on the global warming controversy, the author’s statement is this: “The evidence shows that global warming is real, and the recent analysis of our team indicates that most of it is due to humans”.  He refers to global warming as both a scientific conclusion and a secular religion for both what he calls “alarmists” and “deniers”. He believes that it is a threat that needs to be addressed even if quantification is difficult.  He proposes that any solution should be inexpensive, because it is the developing world that will need it the most.  The lowest-hanging fruit right now is a switch from coal to natural gas while technologies are developed to make other sources affordable.  An electric car is an expensive solution that produces more CO2 if the electricity is provided by a coal-fired plant.

In Part II, the author gives an overview of the energy landscape.  In the introduction, he notes two complicating factors affecting the way this landscape is viewed: the out-of-whack pricing of energy, and the sheer enormity of the US energy requirement alone, equivalent to about 1 cubic mile of oil per year. With increasing per capita GDP comes a corresponding increase in per capita energy use.  He also notes that in exploring various alternative energy resources, the difference between the developed and developing worlds needs to be considered.

In Chapter 4, the author talks about the newest energy windfall: the development of extraction technology for recovering natural gas from the enormous reserves trapped in shale. According to the author, “the exploitability of these shale gases is the most important new fact for future US energy security – and for global warming”. US natural gas reserves have grown over the last 12 years, according to Department of Energy and US Energy Information Administration figures, from 192 trillion cubic feet (Tcf) in 2001 to 300 Tcf in 2010; the remarkable event, however, is the growth of this number to 862 Tcf in a single year (2011).  This increase is attributed to the development of key technologies to extract gas from shale reserves.  From 2005 to 2012, the fraction of natural gas extracted from shale increased from 4% to 30%; see Figure II.3 for a graph showing the growth of shale gas production.

Natural gas is released from coal and shale by pumping pressurized water down a pipe to crack the rock and release the gas. Hydraulic fracturing (fracking) and horizontal drilling are the two key technologies that have enabled economically viable extraction of natural gas from shale. In a US EIA survey (Figure II.8) of 32 countries, there are estimated to be about 6,622 Tcf of shale gas reserves, 13% of which are in the US. In 2013, natural gas provided about 27% of US energy needs (updated data from the LLNL energy flow chart for 2013). For the same dollar value (early 2012 data), natural gas can provide 2.5 times more energy than gasoline. Converting US energy needs to natural gas is not trivial in most cases.  Storage volume and delivery are issues: even when compressed, natural gas takes up three times the volume of gasoline for the same energy.  As a transportation fuel, CNG has ten times the energy per gallon of lithium-ion batteries, so it is a competitor to electric vehicles. Advantages of natural gas include producing only half the greenhouse gases of coal, with much lower local pollutants (sulfur, mercury, carbon particles).

Another potential source of methane being explored is methane hydrate, or clathrate, usually found along coasts and continental shelves.  At low temperatures and high pressures, at least 1,500 feet down, methane mixes with water in a 1:5 ratio (more water), causing the water to form an ice cage that traps the methane.  As shown in Figure II.9 in the book, methane hydrate looks like ice cubes that burn.  Estimates of methane hydrate deposits range from 10 to 100 times the amount of shale gas. The extraction process, at the time of writing (ATTOW), is not trivial: most methane hydrates are further mixed with clay, and the salt water is corrosive. There is also the danger of leaking methane, which contributes as a greenhouse gas; methane is 23 times more effective a greenhouse gas than carbon dioxide. Furthermore, some scientists believe that a release of methane hydrates led to the catastrophic extinction of 96% of all marine species about 250 million years ago, the Permian-Triassic extinction.


Richard A. Muller is a professor of physics at the University of California, Berkeley. He is the best-selling author of Physics for Future Presidents and The Instant Physicist. He and his wife live in Berkeley, California.

READING NOTES

PART I: ENERGY CATASTROPHES
·         Energy use in the United States alone is huge: about 20 million barrels of oil each day. Because of these huge numbers, energy accidents also make the news in a big way.
·         In this section, the author tackles 3 major energy catastrophes and offers facts and a suggestion on how to interpret the ramifications of these accidents.
·         “We need to get our facts right, put the consequences in perspective, clear up misimpressions, and get to the core of what really happened, or is still to happen.”


Chapter 1: Fukushima Meltdown
      In March 2011, a huge earthquake measuring 9.0 on the Richter scale hit Japan, generating a tsunami 30 feet high, and up to 50 feet in some places.  About 15,000 people died and 100,000 buildings were destroyed.
      One recipient of the huge energy unleashed by this earthquake, through a 50-foot tsunami, was the Fukushima nuclear reactor. At the site, two people died due to the earthquake and one due to the tsunami. No known deaths were reported due to the nuclear meltdown that ensued as a result of the damage.
      Nuclear energy releases are huge: fission of an atom of uranium-235 can produce 20 million times the energy released in the decomposition of a molecule of TNT.
      Along with energy, high-energy neutrons are also released, which is the basis for the enormously rapid and huge energy explosions that fissile material is capable of.  In a nuclear reactor, energy production must be moderated: only 4% of the uranium fuel is uranium-235, and moderators such as carbon or water are employed to slow the neutrons so that only one emitted neutron triggers a new fission, maintaining a steady release of energy.
      Reactivity accidents result from runaway chain reactions: the process undergoes uncontrolled fission that starts slowly at first and builds up to an energy density that results in a powerful explosion.
      In the Chernobyl reactivity accident of 1986, what killed most people was the radioactivity released, not the reactor explosion. In the Fukushima incident, the reactor did not explode, and pumps kept working to remove the heat produced by residual radioactivity after the reactor shut down on impact. The cooling pumps stopped working after 8 hours, because extensive infrastructure failure left no external source of electricity to keep them going.  Without the cooling pumps, the fuel overheated and melted, resulting in a release of radioactivity second only to the Chernobyl accident.
      The most dangerous radioactivity released is that from iodine-131 and cesium-137. I-131 has a half-life of 8 days and decays rapidly, releasing radioactivity as it does, making it the biggest source of radioactivity initially.  When it enters the body, it accumulates in the thyroid, where it can cause cancer.  I-131 absorption by the body can be mitigated by taking potassium iodide; normal iodine from this salt saturates the thyroid and prevents or slows absorption of the radioactive isotope.
      Cs-137 decays more slowly, so its initial impact is lower but it lasts longer; its half-life is 30 years.
      Sr-90 also undergoes slow decay. Slow decay means these isotopes are around longer and can deposit and accumulate in plants and animals that are consumed, concentrating in bones.
      An exposure of 100 rem or more will cause immediate radiation illness (nausea, weakness, loss of hair); at 250-350 rem, there is a 50% chance of death if untreated.
      The author offers the following formula for estimating deaths due to cancer: (population x average dose in rem) / 2,500.  In the example he gives, a population of 22,000 with an average exposure of 22 rem may develop about 194 extra cancers.  For perspective, a 20% incidence rate of cancer in a population of 22,000 is about 4,400 cancers.  Even though the cancers caused by radioactivity add less than 5%, they probably will be detectable, as most of them will be thyroid cancers due to radioactive iodine exposure. (A short sketch of this arithmetic appears at the end of these chapter notes.)
      Natural levels of exposure to radiation are about 0.3 rem per year from cosmic radiation and from uranium, thorium, and naturally radioactive potassium in the ground, plus another 0.3 rem from X-rays and other medical treatments. In Denver, Colorado, add another 0.3 rem from radon emitted by tiny concentrations of uranium in granite. Despite this, Denver has a lower cancer rate than the rest of the US; the excess probability of dying from cancer implied by that extra dose is 0.00012, a number so small that it prompts him to ask rhetorically, “Should an undetectable danger play a major role in determining policy?” He further states that the International Commission on Radiological Protection recommends evacuation when the radiation dose exceeds 0.1 rem per year, one-third the extra dose that Denver gets. Chernobyl used this threshold to mandate evacuations.
      The Fukushima Nuclear Reactor was not built to withstand a 9.0 earthquake and a 50-foot tsunami.
      Should the Fukushima accident be used as a reason for ending nuclear power?  The author offers the following guidelines: 1) “Make a nuclear power plant strong enough that if it is destroyed or damaged, the incremental harm is small compared to the damage done by the root cause”; and 2) the Denver dose should be used as the standard in planning a disaster response, e.g., the ICRP threshold for evacuation should be raised to at least 0.3 rem (3 millisieverts) per year.
      The author contends that the Fukushima reactor was designed adequately when viewed with these standards in mind.
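
A quick sketch of the chapter’s risk arithmetic, referenced above (my own illustration in Python, using only the numbers quoted in these notes; the 2,500 rem-per-excess-cancer divisor is the book’s linear-hypothesis figure):

```python
# Back-of-envelope sketch of the book's cancer estimate:
# excess cancers ~= (population x average dose in rem) / 2500.

def excess_cancers(population, avg_dose_rem, rem_per_cancer=2500):
    """Excess cancers predicted by the linear rule of thumb quoted in the book."""
    return population * avg_dose_rem / rem_per_cancer

# Fukushima example from the text: 22,000 people at an average of 22 rem.
print(excess_cancers(22_000, 22))     # ~194 extra cancers
# Baseline for comparison: ~20% lifetime cancer incidence in the same group.
print(0.20 * 22_000)                  # ~4,400 cancers from all causes

# Denver example: the extra ~0.3 rem/year radon dose implies, per person,
print(0.3 / 2500)                     # ~0.00012 excess risk, the number quoted
```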


CHAPTER 2: THE GULF OIL SPILL
·         The author takes a hard look at references made by the president and others to the Gulf oil spill as the “greatest environmental disaster of all time” by offering some factual perspectives on the damage wrought by the spill.  The accident killed 11 people and injured 17 more.
o   6,000 dead animals due to the oil spill, versus 100 million to 1 billion bird deaths each year due to glass windows and another 100 million due to high-voltage electric lines
o   Beaches remained clean because BP hired fishermen to distribute buoys and barriers and spread dispersants to break up the oil, versus the oil and tar that covered the Alaskan shores during the Exxon Valdez spill.
·         Author’s description of the Deepwater Horizon accident: The oil rig sits above 5,000 feet of water.  A flexible pipe 23,000 feet long connects it to the oil source 18,000 feet below the seafloor.  When the rig exploded, the pipe was damaged and oil started gushing out at 26 gallons per second. The leak was not plugged until July 15, 2010. It is estimated that the spill released 250 million gallons, about a million cubic meters. Despite the continued flow of oil, the affected area did not grow; the author surmises that this was likely due to the oil evaporating, dispersing in the water, sinking, or being cleaned up. On September 19, the well was officially sealed.
·         The author estimates that with a spill area of about 10,000 square miles, if all this oil were dispersed uniformly in the water below, the resulting concentration would be less than 1 ppm, “below what is considered a toxic level”. (A rough version of this estimate appears at the end of these chapter notes.) The surfactants were added to break up big blobs of oil and prevent them from forming, so that more of the oil is accessible to oil-digesting bacteria and doesn’t gum up the feathers of birds or animal fur.
·         Natural oil leaks do occur in the seabed but they probably are only about 1% of the Deepwater spill.
·         A year after the initial spill, measurements showed that 99% of the water samples tested in the entire region, including the 1,000 square miles closest to the wellhead, had no detectable oil residue or dispersant.  Tourism was severely affected, with one estimate claiming a loss of 23 billion dollars over the ensuing three years. A year later, however, the Governor of Louisiana declared the “region reborn”.
·         The author believes that the President’s characterization of the disaster is hyperbole and that the overreaction to the spill was even more damaging.
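
A rough version of the dilution estimate referenced in the notes above (my own sketch; the text gives only the spill volume and area, so the mixing depth is an assumption I am supplying):

```python
# Rough dilution estimate for the Deepwater Horizon spill.
# From the text: ~1 million cubic meters of oil over ~10,000 square miles.
# The mixing depth is my assumption; the rig stood in 5,000 ft (~1,500 m) of water.

OIL_M3 = 1.0e6             # spilled oil, cubic meters (text's estimate)
AREA_M2 = 10_000 * 2.59e6  # 10,000 square miles in square meters
DEPTH_M = 100              # assumed mixing depth, well short of the full column

water_m3 = AREA_M2 * DEPTH_M
ppm = OIL_M3 / water_m3 * 1e6
print(f"{ppm:.2f} ppm")    # ~0.39 ppm even with a shallow 100 m mixing layer
```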


CHAPTER 3: GLOBAL WARMING AND CLIMATE CHANGE

·         The level of carbon dioxide in the last century has increased by 40% due to human use of fossil fuels.  Carbon dioxide makes up 0.04% of the atmosphere.  Water is a more significant greenhouse gas but we have no control over the amount that evaporates from bodies of water. Methane is also an important greenhouse gas.  The oxygen, argon, and nitrogen in the atmosphere are transparent to infrared radiation.
·         Physical calculations estimate that the earth’s temperature would be below freezing if not for the presence of greenhouse gases.
·         In 2007, the IPCC reported that global temperature rose by 0.64 Celsius in the previous 50 years.  During those same years, the land temperature rose by 0.9 Celsius. Land temperatures rise more because heat concentrates near the surface of the land, while in the ocean heat spreads down to depths of 100 feet. In the same report, the IPCC states that global warming has been happening since the 1800s, but the anthropogenic contribution to that earlier warming is hard to determine because part of it was due to changes in the sun’s intensity.
·         Despite the smallness of this temperature rise, scientists including the author are more concerned about greater warming occurring in the future.
·         The author’s group, through the Berkeley Earth Surface Temperature project, did extensive analysis of temperature data previously not included in the IPCC analysis and a re-analysis of temperature records (1.6 billion temperature measurements, 14 data sets, 38,000 stations), putting in measures to avoid data-selection and data-correction bias and station-quality bias, and testing for urban heat bias. To the author’s surprise, they came up with the same temperature rise reported by the IPCC, 0.9 Celsius over land, concluding that “none of the legitimate concerns of the skeptics had improperly biased the prior results”, which suggested to the author that “those groups had been vigilant in their analysis and treated the potential biases with appropriate care”.
·         See page 76 (iBook version) for the group’s plot of the average global temperature rise over land from 1800 to the present. Dips in the otherwise rising temperature plot are attributed to volcanic eruptions and correlate with ice-core measurements of sulfate particles. There was close agreement between the temperature-rise curve and the carbon dioxide rise curve when a smooth fit incorporating volcanic eruption data was done, better than the author’s attempts using a parabola and other polynomial fits. “Our fit shows that one could ignore these (sunspot) cycles and get an excellent explanation of most of the data considering only carbon dioxide and volcanoes”.  The excellent fit between the temperature and CO2 curves “suggests that most – maybe all – of the warming of the past 250 years was caused by humans”, according to the author.
·         Based on these results, the author offers the following prediction: if the CO2 concentration increases exponentially and the greenhouse-gas effect increases logarithmically, then the warming should grow linearly: doubling the time interval doubles the temperature rise.  For example, assuming exponential growth of CO2 concentration, by 2052 the CO2 concentration would have doubled to 560 ppm, with a corresponding rise in land temperature of 1.6 Celsius. Forty years after 2052 there will be an additional 1.6 Celsius rise, and so on every 40 years until the CO2 rise is mitigated. (A sketch of this arithmetic appears at the end of these chapter notes.)
·         The logarithmic dependence of the greenhouse effect on CO2 concentration stems from, according to the author, “the fact that most of the effect comes from the edges of the CO2 absorption lines which only broaden logarithmically”.
·         In the section on tipping points, the author discusses some positive and negative feedbacks that may occur as a result of increased CO2 and warming:
·         A strong positive feedback can lead to runaway greenhouse warming, like the one that makes Venus a very hot planet. The tipping points identified so far are:
o   Loosening of the Antarctic ice sheet and slipping into the sea to produce over 100 feet of sea level rise
o   Melting of freshwater in Greenland which can disrupt the Gulf Stream and change sea current flow all the way in the Pacific
o   Melting of permafrost and release of the potent greenhouse gas methane leading to further warming
o   Release of methane from the seabed as the Arctic water warms
·         An example of a negative feedback is an increase in water vapor cloud cover; a mere 2% increase could cancel any expected further warming from a doubling of CO2 concentration.  Poor understanding of cloud-cover mechanisms contributes much of the uncertainty in warming predictions.
·         Local variability in temperature changes can mask the experience of global warming in different places.  About a third of temperature measurement stations report decreasing temperatures.  The author claims that a global increase of 2-3 Celsius will be felt globally and local temperature trends cannot negate it.
·         The author believes that the only solid evidence of warming is the temperature data; all other effects attributed to warming are “either wrong or distorted”.  He presents a review of some of these claims:
o   Hurricanes: the increase in hurricane frequency is more likely due to an increased capacity to detect them, even offshore.  Data for hurricanes that impact the US coast show no increase.  His conclusion: “the rate of hurricanes hitting the US has not been increasing”.
o   Tornadoes: measurements show a decreasing rate of tornadoes, verified by statistical analysis. Global warming theory predicted that tornadoes might increase, not that they would increase; more storms may be generated due to the energy available in a warming climate, but it is the temperature gradient, not the absolute temperature, that matters more for tornado formation. See graph.
o   Polar warming: older climate models actually predicted that Antarctic ice would increase, not decrease; a higher rate of evaporation due to sea warming can increase the amount of snow falling on Antarctica, which stays below freezing even with warming temperatures. Satellite measurements showed, however, that the Antarctic has lost 36 cubic miles of ice; models were tweaked and were able to reproduce this result.  Modeling Antarctic ice can produce unreliable results because Antarctica covers only 2.7% of the globe, too small an area for precise predictions.  The models and observations for the Arctic are consistent with each other: decreasing ice. The author states that it is difficult to determine the cause: global warming and/or decadal oscillations in sea surface temperature and pressure.
o   Hockey stick data: adjustment of temperature data, purportedly to “hide” data that seemed to indicate decreasing temperatures, by replacing proxy data with actual thermometer data.  See “Climategate”.
o   Sea level rise: the IPCC reports that sea level rose by 8 inches in the last century (from records of tide levels).  The rise could be attributed to warmer waters, which expand, and to the melting of glaciers, though it is difficult to determine the ultimate cause.  The melting of glaciers in Greenland is attributed to soot pollution. The IPCC predicts a further 1-2 feet of sea level rise through the remainder of the century.
·         “Can global warming be stopped, assuming it is a threat?” A treaty requiring an 80% cut in greenhouse emissions by the US by 2080 and a 70% cut in emissions intensity by China and the developing world by 2040 would still not result in decreased atmospheric CO2 concentrations, according to the author.  Under this treaty and the numbers involved, the author calculates that total atmospheric CO2 would increase to above 1,000 ppm (currently around 400 ppm), which, using IPCC models, would lead to a global temperature increase of 3 Celsius. In 2010, China’s CO2 emissions were 70% higher than those of the US, while its CO2 emissions per capita were only 45% of the US level. President Obama did not sign the Copenhagen treaty because of China’s refusal to allow inspections.  China’s emissions intensity is now 5 times that of the US; growing 6% per year relative to the US rate, China will surpass US per capita emissions by 2025.  Because energy use correlates with wealth, the author asks rhetorically, “If you were the president of China, would you endanger progress to avoid a few degrees of temperature change?” Slowing growth in China could trigger political instability, he adds.  See Figures 1.6 and 1.17.   “Every 10% cut in US emissions is negated by 6 months of China’s emission growth.” Reducing its dependence on coal by switching to natural gas can help reduce China’s CO2 emissions (natural gas releases only half the CO2).  The author highlights the important role of the developing nations in decreasing CO2 emissions, even though most of what is in the atmosphere now was put there by developed nations.  The emerging economies need to cut emissions intensity by 8-10% per year just to stabilize greenhouse emissions. Low-cost solutions and a switch from coal to natural gas are required to help China cut emissions.
·         Geoengineering: some proposed solutions below.  The author believes these solutions may never be taken seriously because of the danger of further altering the earth’s geochemistry and atmospheric chemistry without knowing the ultimate consequences.
·         Dumping iron in the ocean to encourage plant growth
·         Cloud-seeding methods to increase cloud formation
·         Releasing sulfate particles into the stratosphere to form aerosols that would reflect sunlight. “A simple calculation suggests that just one pound of sulfates injected into the stratosphere could offset the warming caused by thousands of pounds of carbon dioxide.”
·         On the global warming controversy, the author’s statement is this: “The evidence shows that global warming is real, and the recent analysis of our team indicates that most of it is due to humans”.  He refers to global warming as both a scientific conclusion and a secular religion for both what he calls “alarmists” and “deniers”. He believes that it is a threat that needs to be addressed even if quantification is difficult.  He proposes that any solution should be inexpensive, because it is the developing world that will need it the most.  The lowest-hanging fruit right now is a switch from coal to natural gas while technologies are developed to make other sources affordable.  An electric car is an expensive solution that produces more CO2 if the electricity is provided by a coal-fired plant.
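
The linear-warming arithmetic referenced in the notes above can be sketched in a few lines (my own illustration of the book’s logic, not code from the book; the 280 ppm pre-industrial baseline is a standard figure I am supplying):

```python
import math

# Book's logic: greenhouse forcing grows ~logarithmically with CO2 concentration,
# so exponentially growing CO2 (a doubling every 40 years) yields warming that
# grows linearly in time: +1.6 C of land warming per doubling, per the text.

SENSITIVITY = 1.6   # C of land warming per CO2 doubling (book's figure)

def warming_above_preindustrial(co2_ppm, baseline_ppm=280.0):
    """Logarithmic response; the 280 ppm baseline is my assumption."""
    return SENSITIVITY * math.log2(co2_ppm / baseline_ppm)

print(round(warming_above_preindustrial(560), 2))    # 1.6  (first doubling, ~2052)
print(round(warming_above_preindustrial(1120), 2))   # 3.2  (second doubling, ~2092)

# Equivalently, with CO2 doubling every 40 years the increments are constant:
for n, year in enumerate((2052, 2092, 2132), start=1):
    print(year, f"+{n * SENSITIVITY:.1f} C")   # linear growth, +1.6 C per 40 years
```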


PART II: ENERGY LANDSCAPES
Energy use per capita has been shown to increase as a function of per capita GDP (see Figure II.1). The author poses the important question of whether energy use creates wealth or wealth creates more energy use, and believes it is probably a little of both. Because of this correlation between energy use and wealth, the increasing per capita GDP of emerging nations will likely result in more energy use globally.

Related to this is the cost of energy, and the author gives an example of how “out-of-whack” energy pricing can be, which adds complexity to the issue of energy use and availability: energy from one AAA battery costs 10,000 times more than the equivalent energy from electric power plants ($1,000 per kWh versus $0.10 per kWh). Clearly the cost of energy depends on how it is delivered. Gasoline costs 2.5 times more than retail natural gas and 7 times more than wholesale gas. Despite this price difference, it is difficult for the US to wean itself off gasoline because of the high cost of switching away from the current gasoline delivery infrastructure, creating an “inefficient market” in energy. The author calculates a wide disparity in the cost per kWh of energy depending on the mode of delivery. Most of the cost of energy comes from the mining, the processing, and the delivery; for instance, the author points out that at the time of writing, the sum of these for solar energy was higher than for coal. He does note, however, that coal is not really that cheap if you take into account the environmental consequences.

Toward the end, the author points out that the cheapest form of energy is energy that is not used. There are two aspects to this: making appliances more efficient so that the same benefit is received for less energy, and storing already-generated energy that would otherwise go unused. According to the author, the two main concerns of the energy landscape are energy security and climate change.

An energy flow plot in the last section shows that only about 43% of the energy generated is used; the other 57% is lost as heat. 83% of it is generated using coal, natural gas, and petroleum. About 40% goes to generating electricity, while transportation comprises about 28% and industrial use about 24%. See the 2013 US energy flow chart downloaded from the LLNL website. The author puts this information in another perspective:

This energy use corresponds to a continuous power of 3,500 gigawatts, or the output of 3,500 large generating plants.  That is 12 kilowatts per person, assuming a US population of about 300 million.  It is equivalent to burning 300 tons of fossil fuel EVERY SECOND, or 1 cubic mile of oil per year if all the energy came from petroleum. “Any proposed alternative energy sources must cope with this enormity.”
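
A quick check of these scale claims (my arithmetic, not the book’s; the quad conversion factor is standard):

```python
# Checking the scale of US energy use quoted above.
SECONDS_PER_YEAR = 3.156e7
US_POWER_W = 3_500e9     # 3,500 GW continuous, per the text
POPULATION = 300e6

print(US_POWER_W / POPULATION / 1e3, "kW per person")   # ~11.7 kW, i.e. ~12

joules_per_year = US_POWER_W * SECONDS_PER_YEAR
print(joules_per_year / 1.055e18, "quads per year")     # ~105 quads
# Roughly 100 quads/year, consistent with the LLNL energy flow charts
# these notes reference.
```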

The US holds close to 730 million barrels of oil (ATTOW) in its Strategic Petroleum Reserve. The US has imported about 9 million barrels of oil per day over the past decade, so the Reserve would replace imports for only a little over two months.  Moreover, pumping capabilities can extract only 4.4 million barrels per day from this reserve, about half the import rate.
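
The reserve arithmetic, from the figures above (my calculation):

```python
# How long the Strategic Petroleum Reserve lasts against imports.
RESERVE_BBL = 730e6
IMPORTS_BBL_PER_DAY = 9e6
MAX_DRAWDOWN_BBL_PER_DAY = 4.4e6

print(RESERVE_BBL / IMPORTS_BBL_PER_DAY, "days at the import rate")   # ~81 days
print(MAX_DRAWDOWN_BBL_PER_DAY / IMPORTS_BBL_PER_DAY)                 # ~0.49
# Pumping limits mean the reserve can replace only about half of imports
# while it lasts (~166 days at the maximum drawdown rate).
```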

Margin of spare capacity has a big influence on the price of oil (margin of spare capacity is the amount of oil that could be pumped around the world minus the amount that is actually pumped).  According to the author, “it is the continuing growth of the economies of the developing world that keeps the spare capacity low and therefore the price of oil high”. The author has two suggestions for building the margin of spare capacity: producing diesel fuel and gasoline fuel (synfuels) from coal and natural gas and exploiting recognized shale oil reserves.

The author cautions that when considering any energy technology, the difference between developing and developed countries needs to be taken into account.  Installing and maintaining solar power in the US is expensive due to labor costs, and cheap natural gas is a strong competitor.  In other countries where labor costs are lower, solar power may actually compete with natural gas.


Chapter 4: The Natural Gas Windfall
In this chapter, the author talks about the newest energy windfall: the development of extraction technology for recovering natural gas (cheaply?) from the enormous reserves trapped in shale. According to the author, “the exploitability of these shale gases is the most important new fact for future US energy security – and for global warming …”.

US natural gas reserves have grown over the last 12 years, according to Department of Energy and US Energy Information Administration figures:
2001 – 192 trillion cubic feet (Tcf)
2010 – 300 Tcf
2011 – 862 Tcf
Between 2001 and 2010, the US extracted about 20-24 Tcf per year.
Some estimates are as high as 3,000 Tcf.
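
For scale, these reserve figures can be turned into years of supply (my arithmetic, assuming consumption stays near the ~24 Tcf/year extraction rate noted above):

```python
# Years of supply implied by the reserve estimates, at roughly constant use.
ANNUAL_USE_TCF = 24   # approximate US extraction rate noted above
for reserves_tcf in (300, 862, 3000):
    print(reserves_tcf, "Tcf ->", round(reserves_tcf / ANNUAL_USE_TCF), "years")
# 300 Tcf -> ~12 years, 862 Tcf -> ~36 years, 3,000 Tcf -> ~125 years
```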

The author differentiates how the government and companies make predictions.  He notes that government estimates are more conservative because they have to base their estimates on proven reserves (recoverable supply). Companies err on the side of a “good bet” of supply.

The fraction of natural gas extracted from shale has grown over the years:
1996 – 1.6%
2005 – 4%
2011 – 23%
2012 – 30%
See Figure II.3 for graph showing the growth of shale gas production.

For the same dollar value (early 2012 data), natural gas can provide 2.5 times more energy than gasoline.

Converting US energy needs to natural gas is not trivial in most cases.  Storage volume and delivery are issues: even when compressed, natural gas takes up three times the volume of gasoline for the same energy.  ATTOW, some 130,000 taxicabs and trucks have been converted to CNG; existing gasoline engines can easily be converted to use natural gas.  CNG has ten times the energy per gallon of lithium-ion batteries, so it is a competitor to electric vehicles.
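
A rough check of the ten-times claim (my sketch; the energy densities are typical early-2010s values I am supplying, not figures from the book):

```python
# Rough check of the CNG vs lithium-ion comparison above (my assumptions:
# typical 2012-era volumetric energy densities).
CNG_MJ_PER_L = 9.0       # methane at ~3,600 psi: ~55 MJ/kg and ~0.16 kg/L (assumed)
LI_ION_MJ_PER_L = 0.9    # early-2010s battery packs, ~250 Wh/L (assumed)

LITERS_PER_GALLON = 3.785
print(CNG_MJ_PER_L * LITERS_PER_GALLON, "MJ per gallon of CNG")   # ~34 MJ
print(CNG_MJ_PER_L / LI_ION_MJ_PER_L, "x lithium-ion by volume")  # ~10x
```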

In 2013, natural gas provided about 27% of the US energy needs (updated data from LLNL energy flow chart for 2013).

Natural gas is released from coal and shale by pumping pressurized water down a pipe to crack the rock and release the gas. Hydraulic fracturing (fracking) and horizontal drilling are the two key technologies that have enabled economically viable extraction of natural gas from shale.  In a US EIA survey (Figure II.8) of 32 countries, there are estimated to be about 6,622 Tcf of shale gas reserves, 13% of which are in the US. France is estimated to have about 100 years’ worth of natural gas recoverable from shale reserves (ATTOW, fracking is banned in France) but still imports 95% of its natural gas.  China is estimated to have about 400 years’ supply of natural gas from shale reserves. Advantages of natural gas include producing only half the greenhouse gases of coal, with much lower local pollutants (sulfur, mercury, carbon particles).

Another potential source of methane being explored is methane hydrate, or clathrate, discovered deep in the ocean, usually along coasts and continental shelves.  This form of methane is mixed with water in a 1:5 ratio (more water) and is thought to form when methane seeping from sea-bottom sediments mixes with cold water (4 Celsius) at high pressure (~50 atm, at least 1,500 feet down), causing the water to form an ice cage that traps the methane.  As shown in Figure II.9 in the book, methane hydrate looks like ice cubes that burn.  Estimates of methane hydrate deposits range from 10 to 100 times the amount of shale gas. The source of the methane is unknown; it could be a bacterial product or primordial methane, but it currently does not look like it is associated with fossil carbon. The extraction process, ATTOW, is not trivial: most methane hydrates are further mixed with clay, and the salt water is corrosive. The methane itself contains enough energy to pay for its recovery.  There is, however, a danger of leaking methane, which contributes as a greenhouse gas; methane is 23 times more effective a greenhouse gas than carbon dioxide. Furthermore, some scientists believe that a release of methane hydrates led to the catastrophic extinction of 96% of all marine species about 250 million years ago, the Permian-Triassic extinction.
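
The leakage concern can be quantified with the two numbers quoted above (a back-of-envelope sketch of my own, not a calculation from the book: methane at 23x CO2, and natural gas emitting half of coal's CO2 per unit energy):

```python
# Back-of-envelope: how much methane leakage would erase natural gas's CO2
# advantage over coal? Uses only the factors quoted in these notes.

CO2_PER_CH4_BURNED = 44 / 16   # kg CO2 per kg CH4 combusted (stoichiometry)
GWP_METHANE = 23               # kg CO2-equivalent per kg CH4 leaked (book's factor)
COAL_TO_GAS_RATIO = 2.0        # coal emits ~2x the CO2 of gas per unit energy

# Burning m kg of CH4 emits 2.75m kg CO2; the same energy from coal emits 5.5m,
# so gas's advantage is 2.75m kg CO2e. Leaking a fraction f of production adds
# 23 * f/(1-f) * m kg CO2e. Break-even when the two are equal.
advantage = (COAL_TO_GAS_RATIO - 1) * CO2_PER_CH4_BURNED
f = advantage / (GWP_METHANE + advantage)
print(f"break-even leak rate ~ {f:.0%}")   # ~11% of production
```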
