Muller, Richard A. Energy for Future Presidents: The Science Behind the Headlines. New York: W. W. Norton and Company, 2012.
The print version of this book is 368 pages in length.
In this book, the author presents lessons to future presidents on various sectors of energy use and on alternative energy prospects, with the goal of clarifying, correcting, and expanding on the information behind the news headlines. From the author’s perspective, the president has a responsibility to be knowledgeable about these areas and should act as a “teacher” to the public, drawing on information that goes beyond the headlines so that people can make informed decisions about energy. He tackles a wide-ranging list of energy and energy-related topics including: energy-related disasters, global warming, shale oil, alternatives to transportation fuel, energy efficiency and conservation, solar energy, wind energy, energy storage, nuclear power, biofuels, synfuels, hydrogen fuel, hybrid autos, and carbon dioxide mitigation measures.
I chose this book, despite its broad coverage, because energy is a shared purview of both physics and chemistry. The theme of the book is to look at headlines and provide a scientific and mathematical perspective that informs people’s interpretation and perception of these issues. These are the same headlines that the president, I, and my students read every day and that are, for many, the primary source of information.
In Part I, the author provides his perspectives on 3 major energy catastrophes, presenting some facts, his interpretation of the risks and ramifications, and his opinion on how these should inform government decisions and responses.
The first chapter deals with the Fukushima reactor meltdown following damage from the earthquake and tsunami of March 2011. He predicts that the number of deaths from cancer caused by the radiation will be small, less than 1% of the human death toll from the earthquake and tsunami themselves. On this basis, he proposes that nuclear reactors should be built strong enough that fewer deaths result from radiation released by a damaged reactor than from the event that caused the damage. He also proposes using the average annual radiation dose that people in Denver receive as a standard for determining the disaster response to a radiation release. Against these two standards, he argues that Fukushima was actually adequately designed, given the low projected human death toll, despite the fact that it was not built to withstand a 9.0 earthquake and a 50-foot tsunami.
In Chapter 2, the author questions the President’s characterization of the Gulf oil spill of 2010, caused by the Deepwater Horizon oil rig accident, as the “greatest environmental disaster” in history. He argues that the roughly 6,000 ensuing animal deaths were small relative to the hundreds of millions of bird deaths each year from collisions with glass windows and high-voltage lines. The beaches remained relatively clean compared to the damage done by the Exxon Valdez spill to the Alaskan shores. He “senses” that the overreaction did more damage to the region, through its effect on tourism and the local economy, than the spill itself.
Chapter 3 covers quite a bit of material, starting with the author’s presentation of his group’s efforts to confirm the temperature-increase data. His group, through the Berkeley Earth Surface Temperature project, did an extensive analysis of temperature data previously not included in the IPCC analysis and a re-analysis of existing temperature records (1.6 billion temperature measurements, 14 data sets, 38 stations), putting in measures to avoid data selection bias, correction bias, and station quality bias, and testing for urban heat bias. To the author’s surprise, they came up with the same temperature rise reported by the IPCC of 0.9 Celsius over land, concluding that “none of the legitimate concerns of the skeptics had improperly biased the prior results” and suggesting to the author that “those groups had been vigilant in their analysis and treated the potential biases with appropriate care”. Furthermore, they demonstrated a close agreement between the temperature rise curve and the carbon dioxide rise curve when a smooth fit was done that included volcanic eruption data. The excellent fit between the temperature and CO2 curves “suggests that most – maybe all – of the warming of the past 250 years was caused by humans”, according to the author. Based on these results, the author offers the following prediction: if the CO2 concentration increases exponentially and the greenhouse effect increases logarithmically with concentration, then the warming should grow linearly in time; doubling the time interval doubles the temperature rise. For example, assuming exponential growth of CO2 concentration, by 2052 the CO2 concentration would double to 560 ppm, with a corresponding rise in land temperature of 1.6 Celsius; 40 years after 2052 there will be an additional 1.6 Celsius rise, and so on every 40 years until the CO2 rise is mitigated.

In the section on tipping points, the author discusses some positive and negative feedbacks that may occur as a result of increased CO2 and warming. A strong positive feedback can lead to runaway greenhouse warming. The tipping points identified so far are: the Antarctic ice sheet loosening and slipping into the sea, producing over 100 feet of sea level rise; freshwater melting off Greenland, which could disrupt the Gulf Stream and change ocean current flow as far away as the Pacific; melting of permafrost and release of the potent greenhouse gas methane, leading to further warming; and release of methane from the seabed as Arctic water warms. An example of a negative feedback is an increase in water vapor cloud cover; a mere 2% increase could cancel the further warming expected from a doubling of CO2 concentration.

The author believes that the only solid evidence of warming is the temperature data; all other effects attributed to warming are “either wrong or distorted”. In this section, he presents his views on these effects and how they may or may not be accurate or correlated with warming temperatures: hurricanes, tornadoes, polar warming, the so-called hockey stick data, and sea level rise. Toward the end, the author asks, “Can global warming be stopped, assuming it is a threat?” He highlights the important role of the developing nations in decreasing CO2 emissions, even though most of the CO2 now in the atmosphere came from developed nations. The emerging economies would need to cut emissions intensity by 8 – 10% per year just to stabilize greenhouse emissions.
Low-cost solutions and a switch from coal to natural gas are required to help China and other emerging nations cut emissions. The author believes that geoengineering solutions may never be taken seriously because of the danger of further altering the earth’s geochemistry and atmospheric chemistry without knowing the ultimate consequences. Lastly, on the global warming controversy, the author’s statement is this: “The evidence shows that global warming is real, and the recent analysis of our team indicates that most of it is due to humans”. He refers to global warming as both a scientific conclusion and a secular religion for both what he calls “alarmists” and “deniers”. He believes that it is a threat that needs to be addressed even if it is difficult to quantify. He proposes that any solution should be inexpensive, because it is the developing world that would need it the most. The lowest-hanging fruit right now is a switch from coal to natural gas while technologies are developed to make other sources affordable. An electric car is an expensive solution that produces more CO2 if the electricity is provided by a coal-powered plant.
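The linear-warming rule of thumb the author states in this chapter follows directly from his two stated assumptions (exponentially rising CO2 and a logarithmic greenhouse response). A short sketch using his illustrative figure of 1.6 Celsius of land warming per doubling of CO2 makes the arithmetic explicit.

```latex
% Assumption 1 (the author's): CO2 concentration grows exponentially, C(t) = C_0 e^{kt}.
% Assumption 2 (the author's): warming grows logarithmically with concentration,
%   \Delta T = S \log_2 (C / C_0), with S \approx 1.6\,^{\circ}\mathrm{C} of land warming per doubling.
% Combining the two:
\Delta T(t) \;=\; S \log_2\!\frac{C_0 e^{kt}}{C_0} \;=\; \frac{S\,k}{\ln 2}\, t
% The warming is therefore linear in time: doubling the time interval doubles the
% temperature rise, and with the author's numbers it advances roughly 1.6 Celsius
% every 40 years until the CO2 rise is mitigated.
```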
In Part II, the author gives an overview of the energy landscape. In the introduction, he notes two complicating factors affecting how this landscape is viewed: the out-of-whack pricing of energy, and the sheer size of the energy requirement, equivalent in the US alone to about 1 cubic mile of oil per year; with increasing per capita GDP comes a corresponding increase in per capita energy use. He also notes that in exploring various alternative energy resources, the difference between the developed and developing worlds needs to be considered.
In Chapter 4, the author talks about the newest energy windfall: the development of extraction technology for recovering natural gas from the enormous reserves trapped in shale. According to the author, “the exploitability of these shale gases is the most important new fact for future US energy security – and for global warming – ”. US natural gas reserves have grown over the last 12 years, according to Department of Energy and US Energy Information Administration figures, from 192 trillion cubic feet (Tcf) in 2001 to 300 Tcf in 2010; the remarkable event, however, is the growth of this number to 862 Tcf in just one year (2011). This increase is attributed to the development of key technologies to extract gas from shale. From 2005 to 2012, the fraction of US natural gas extracted from shale increased from 4% to 30%; see Figure II.3 in the book for a graph showing the growth of shale gas production.
Natural gas is released from coal and shale by pumping pressurized water down a pipe to crack the coal or shale and release the gas. Hydraulic fracturing (fracking) and horizontal drilling are the two key technologies for extracting natural gas from shale; together they have made that extraction economically viable. In a US EIA survey of 32 countries (Figure II.8), there are estimated to be about 6,622 Tcf of shale gas reserves, 13% of which are in the US. In 2013, natural gas provided about 27% of US energy needs (updated data from the LLNL energy flow chart for 2013). For the same dollar value (early 2012 data), natural gas can provide 2.5 times more energy than gasoline. Converting US energy needs to natural gas is not trivial in most cases. Volume storage and delivery are issues: even when compressed, natural gas occupies about three times the volume of gasoline for the same energy. As a transportation fuel, CNG has ten times the energy per gallon of lithium-ion batteries, so it is a competitor to electric vehicles. Advantages of natural gas include producing only half the greenhouse gases that coal does, along with much lower local pollutants (sulfur, mercury, carbon particles).
Another potential source of methane being explored is methane hydrate, or clathrate, usually found along coasts and continental shelves. At low temperatures and high pressures, at depths of at least 1,500 feet, methane mixes with water in about a 1:5 ratio (more water), and the water forms an ice cage that traps the methane. As shown in Figure II.9 in the book, methane hydrate looks like ice cubes that burn. Estimates of the amount of methane hydrate deposits range from 10 to 100 times the amount of shale gas. The extraction process, at the time of writing (ATTOW), is not trivial, as most methane hydrates are further mixed with clay and the salt water is corrosive. There is also the danger of leaking methane, which would itself act as a greenhouse gas; methane is 23 times more effective as a greenhouse gas than carbon dioxide. Furthermore, some scientists believe that a release of methane hydrates led to the catastrophic extinction of 96% of all marine species about 250 million years ago, the Permian-Triassic extinction.
In Chapter 5, the author provides his perspective on the real energy crisis in the US. In a somewhat facetious tone, he rhetorically asks “What energy crisis?” (in the US), based on the following: enough coal reserves to last a century, huge reserves of natural gas and oil in shale, lots of sun and wind energy, and cheap uranium (uranium ore is only 2% of the cost of electricity). The author clarifies that what the US actually has is a “transportation fuel crisis” due to a shortage of oil and liquid fuels. In Figure II.11, the author shows that if you count US reserves of natural gas, coal, and oil, the US has 1,470 billion barrels of oil equivalent and leads a pack of countries including Saudi Arabia, making the US “the king of fossil fuels”. The energy source referred to as oil or petroleum (synonymous with gasoline, diesel, jet fuel) was once considered an alternative energy source, when the whale oil used to light homes and businesses started running out in the 1850s. Petroleum was primarily used as kerosene for lamps and later enabled the internal combustion engines used in automobiles and airplanes. Although coal was able to run automobiles, gasoline delivers 60% more energy for the same mass. It is also incredibly cheap: assuming a price of $3.50/gallon and 35 mpg, it costs 10 cents per mile to drive, with up to 5 people in the car, as the author notes.
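The 10-cents-per-mile figure is simple arithmetic; a minimal sketch using the author’s assumed price and mileage, plus the per-passenger split he alludes to, is below.

```python
# Rough cost-per-mile check using the author's assumed figures:
# gasoline at $3.50/gallon and a car that gets 35 miles per gallon.
price_per_gallon = 3.50   # dollars
miles_per_gallon = 35.0

cost_per_mile = price_per_gallon / miles_per_gallon
print(f"fuel cost per mile: ${cost_per_mile:.2f}")               # ~$0.10

# With up to 5 people in the car, the per-passenger cost is lower still.
passengers = 5
print(f"per passenger-mile: ${cost_per_mile / passengers:.3f}")  # ~$0.02
```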
The US Hubbert’s oil peak occurred in the 1970s, and the world is close to hitting its own Hubbert’s peak. The author points out that the obvious substitutes for petroleum are natural gas, synfuel, and shale oil. Alternative energy sources have a difficult time competing because of the cheapness of oil-based energy; Saudi Arabia can drill oil for about $3 a barrel, although the market price per barrel can fluctuate between $20 and $100, increasing as demand exceeds supply. Synthetic fuel, or synfuel, is one solution to the liquid fuels shortage. Synfuel can be derived from the liquefaction of coal (CTL, coal to liquid) or natural gas (GTL, gas to liquid). The Fischer-Tropsch process was the first chemical procedure used to manufacture synfuel. Synfuel can cost up to $60 per barrel to make, which makes its viability as an economical replacement questionable, especially since the Saudis can easily lower oil prices, according to the author.
In Chapter 6, the author talks about the surprising emergence of shale oil as an energy source. Along with natural gas, shale also contains oil that can be extracted. The amount of oil in shale is estimated at over 1.5 trillion barrels, 5 times the oil reserves of Saudi Arabia. As with any other resource, this source of oil was not considered until oil became so expensive that the price of shale oil extraction became competitive. In a nutshell, the author describes the classic idea of how this oil is extracted: the shale is mined and then heated to push out the oil-related material called kerogen, which can then be converted to diesel and gasoline in a process called retorting. The waste generated is huge, exceeding the volume of the rock actually mined. Companies like Shell, Chevron, and Exxon Mobil have been involved in developing shale oil extraction technology. Shell’s method, called the “In-Situ Conversion Process”, involves heating the rocks 1-2 km underground using electricity to temperatures of 650 – 700 Celsius, letting them simmer for 3-4 years, and then employing fracking and horizontal drilling to extract the smaller hydrocarbons broken off from the kerogen. As energy-intensive as it may sound, the author notes that this process actually produces a net of 3.5 times more energy than it uses. The cost estimated by Shell is $30/barrel; this industry may sustain profitability as long as the price of oil remains above $60/barrel. There are environmental consequences, of course: this is yet another carbon-based fuel, and there are concerns about oil leaking into the water table and about wastewater and water-shortage issues, as with the fracking of natural gas. Areas with significant extraction include the Colorado Plateau, the Bakken field in North Dakota, and the Eagle Ford Formation in Texas. It is estimated that by the end of this decade, 25% of US oil consumption may come from shale oil.
The author devotes Chapter 7 to what he calls a “cheaper-than-cheap” energy source: increasing energy productivity and efficiency. He distinguishes what he considers “great investments” that actually save money for users from “feel-good” actions. Two of the money-saving actions he highlights are adding insulation (a 17.8% return after payback) and replacing incandescent bulbs with compact fluorescent lights (a 209% return). He also summarizes the basis, the premise, and the results of a conservation program called Decoupling Plus implemented in California. In this program, the utility company invests money in helping Californians buy energy-efficient appliances and conserve energy overall. The return for the utility comes from diverting investment dollars away from building a new power plant (to increase capacity for growing energy use) toward conservation practices that reduce usage, plus a promise by the state to allow the utility to raise prices. Customers benefit from the increased energy productivity, which decreases their energy costs despite the price increase. The program is considered quite successful in California: per capita electricity use has been stable since 1980, while in the US overall it has increased by 50%. The catch is that electricity consumers should not increase their electricity use just because they are using more energy-efficient bulbs. Other “great investments” listed by the author include cool roofs, more efficient autos, energy-efficient refrigerators, and various actions listed in the McKinsey chart.

In the next section, the author lists and describes what he considers “feel-good measures” that may save energy, but only in limited circumstances: buses and recycling paper. In the case of buses, a study found that public transportation saves energy, or at least breaks even, only where there are more than 15 households per acre; at lower densities, buses actually use more energy.

In the last two sections, the author addresses issues involved in energy delivery, particularly electrical power. In “Power Blackouts”, he discusses the interconnection of large numbers of power plants, transmission lines, transformers, and users – the grid – that makes electricity delivery in the US more reliable, and how it can be used poorly. Operating problems at one plant can be overcome by another plant supplying the needed electricity. The grid cannot, however, handle sudden high demands, which can lead to cascading power plant failures like those in New York and Massachusetts in 2003. The author lists three solutions. One is to build small-scale natural gas power plants for use on high-demand days; already done in California, this is an expensive solution because of the capital investment and the poor returns, as these plants are used only a fraction of the time. Another solution is for utilities to decrease voltage on the line; air conditioners still run, but at reduced power. California has also used rotating brownouts to spread out the impact of sudden high demand. In “Smart Grid”, the author talks about controlling electricity use, an area where he welcomes the role of market forces. He favors dynamic pricing of electricity, with the price rising when demand is high. This is not a popular option, however, because of its unpredictability. The author suggests that smart meters can help consumers program appliances to turn on and off depending on when demand, and therefore price, peaks.
For example, electricity enters the home at two voltages: 120 volts for lighting and small appliances, and 240 volts for air conditioners, washers, dryers, and other appliances that draw high loads. One way to program a smart meter is to turn off the 240-volt circuits during peak pricing. In California, smart meters were installed primarily so that the utility company could collect more information about energy usage; they were also designed to reduce power automatically in an extreme emergency. As he describes in the last section, they did not start out very popular.
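As a rough illustration of the smart-meter idea described above, the sketch below shows the kind of rule a meter could apply under dynamic pricing: shed the high-load 240-volt circuits when the price crosses a threshold. The function name, circuit names, and threshold are hypothetical, not from the book.

```python
# Hypothetical sketch of a smart-meter rule under dynamic pricing:
# keep 120 V lighting circuits on, but shed 240 V high-load circuits
# (air conditioner, dryer, water heater) when electricity gets expensive.

PRICE_THRESHOLD = 0.30  # dollars per kWh; illustrative value only


def circuits_to_shed(current_price_per_kwh):
    """Return the circuits a smart meter might switch off at the given price."""
    high_load_240v = ["air_conditioner", "electric_dryer", "water_heater"]
    if current_price_per_kwh > PRICE_THRESHOLD:
        return high_load_240v   # peak pricing: defer the big loads
    return []                   # off-peak: run everything normally


print(circuits_to_shed(0.45))   # during a demand spike
print(circuits_to_shed(0.12))   # normal conditions
```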
Two major issues the author identifies related to energy are energy security and climate change. In Part III, the author devotes chapters to a description and discussion of alternative energy sources, noting that the “alternative energy field is wide, technically complex, and full of uncertainties”. He points to a table of data showing the cost of producing a kilowatt-hour of electricity using various methods (see Table III.1): coal, natural gas, nuclear, wind, solar PV, solar thermal, geothermal, biomass, and hydro. Some of these general types are further broken down into specific technologies. The table was published by the US Energy Information Administration in 2011. The author notes two caveats:
1) The data assumes that the cost of capital is 7.4%
2) The data assumes a carbon emission trading cost for coal and natural gas of about $15/ton.
The table shows that natural gas appears to be the cheapest way to provide a kilowatt-hour of energy. It also has the advantage of producing fewer greenhouse emissions than coal for equal energy produced: roughly half of natural gas’s energy comes from carbon combining with oxygen to form carbon dioxide, and the other half from hydrogen combining with oxygen to form water.
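The claim that natural gas releases roughly half its energy without producing CO2 follows from the combustion chemistry sketched below; the enthalpy figures are standard textbook values, not numbers taken from the book.

```latex
% Combustion of methane (the main component of natural gas):
\mathrm{CH_4 + 2\,O_2 \rightarrow CO_2 + 2\,H_2O}, \qquad \Delta H \approx -890\ \mathrm{kJ/mol}
% Oxidation of the carbon alone contributes
\mathrm{C + O_2 \rightarrow CO_2}, \qquad \Delta H \approx -394\ \mathrm{kJ/mol}
% so only about 44\% of methane's energy is tied to CO_2 release; the rest comes
% from hydrogen forming water. Coal, being mostly carbon, releases nearly all of
% its energy through CO_2-producing oxidation, which is why natural gas emits
% roughly half the CO_2 of coal per unit of energy.
```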
The author starts off Chapter 8, on the Solar Surge, by predicting that the price of solar panels will eventually go down but that installation and maintenance will still cost the consumer, and that on rainy days there has to be another alternative. Before launching into a discussion of PV cells, he provides a short synopsis of the physics of sunlight. Sunlight delivers about a kilowatt of power per square meter onto the surface of the earth, the equivalent of ten 100-watt bulbs. A square mile of solar panels therefore receives about 2.6 gigawatts of sunlight at peak, which reduces to about a gigawatt of electricity at 42% efficiency. Averaged over day, night, and weather, solar power is only 25% of the peak, about 250 watts of sunlight per square meter. He then goes on to discuss two types of solar energy sources. Solar thermal is a type in which the heat is focused, collected, and used to heat water to produce steam that runs a turbine. In Table III.1 this energy source is expensive, at 25.9 cents per kilowatt-hour. A solar thermal power plant in California, consisting of a tower toward which 24,000 moving mirrors direct sunlight, can generate 5 megawatts of power, 0.5% of a conventional gas, coal, or nuclear power plant. Because of the many moving parts, this type requires a lot of maintenance. Another solar thermal design uses a solar trough to focus the light with an optical arrangement that avoids having to repoint; this type has fewer moving parts. Spain is the biggest user of solar thermal, generating up to 4 gigawatts, or 3% of its energy use, by the end of 2010. The construction of these plants, however, depends on government subsidy. Disadvantages he notes include the requirement for sunny days, the need for subsidies to cover part of the still-high cost, and the need for extensive maintenance. The advantages are that the hot salt can be stored for later use and that the extreme temperatures reached with focused sunlight allow a high 50% efficiency in producing electricity. The efficiency of the trough design is not as high, because there is less focusing ability and the heated liquid has to flow.
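The chapter’s solar arithmetic can be checked in a few lines; the sketch below uses the figures quoted above (1 kW/m² of peak sunlight, the 42% efficiency figure for the best cells, and average sunlight equal to 25% of peak).

```python
# Rough check of the solar power-density arithmetic quoted in the chapter.
peak_sunlight = 1000.0            # watts per square meter at full sun
sq_meters_per_sq_mile = 2.59e6

incident_peak = peak_sunlight * sq_meters_per_sq_mile
print(f"sunlight on one square mile at peak: {incident_peak / 1e9:.1f} GW")     # ~2.6 GW

efficiency = 0.42                 # the 42% figure cited for the best (most expensive) cells
electric_peak = incident_peak * efficiency
print(f"peak electrical output at 42% efficiency: {electric_peak / 1e9:.1f} GW")  # ~1.1 GW

capacity_factor = 0.25            # average solar power is ~25% of peak
avg_sunlight = peak_sunlight * capacity_factor
print(f"average sunlight per square meter: {avg_sunlight:.0f} W")                 # ~250 W
```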
The other type of solar energy source is solar cells. Solar cells, or PV cells, use absorbed sunlight to produce electricity based on the photoelectric effect. When sunlight strikes the solar cell, an electron is ejected from an atom and travels, carrying some of the photon’s energy, to an electrode and through a wire, creating a current. ATTOW, reasonably priced cells can convert only 10% of the photon’s energy into electricity, though this rises to 42% for the most expensive cells. In 2011, the cost of PV cells dropped to $1/watt from $7/watt a few years earlier. This, however, is the price per peak watt: the effective output drops to 1/4 of the peak when the varying angle of the sun and its absence at night are considered, and to 1/8 when overcast days are accounted for. The author shows a sample calculation of the return and payback time for solar cells. They also need other electronic devices, such as an inverter, to run appliances, plus optional batteries, and they require maintenance. He considers them not yet worthwhile (“no profit”), as they are also heavily subsidized by the government. There are many competing PV technologies; the ones highlighted by the author are silicon, cadmium telluride, copper indium gallium selenide, and multijunction cells. Of these, the cheapest to make is silicon. The dominance of Chinese companies producing these at such low prices has had a negative impact on US companies, including those using different technologies. Another concern is that some of these materials may be in short supply, although he notes that increased demand may bring increased incentive for exploration. The different materials have varying but broadly similar efficiencies, except for the most expensive to make, the multijunction cells, which can reach efficiencies as high as 42%. These have been used on the Mars rovers, and with PV concentrators they can be made cheaper, because the higher efficiency allows a smaller piece. The other concern is that some of the materials used are toxic. In the end the author provides the following solar cell summary:
“The solar field is intensely competitive and developing fast. Prices are dropping so rapidly that the winners are likely to be decided by criteria other than solar-cell price, including cost of installation, cost of maintenance, cost of conversion to household voltages, lifetime of cells, and efficiency.”
In Chapter 9, the author discusses wind power. Wind power is normally harvested with very tall wind turbines, to take advantage of the higher-velocity winds at higher elevations, and long blades, to increase the area swept through the wind. Wind power increases as the cube of the wind velocity: doubling the wind speed yields 8 times the power. (Wind energy is just ½mv², but the power is proportional to that energy times v, hence the cubic dependence.) A blade 63 meters long sweeps an area of about 12,000 square meters; at wind speeds of 20 mph, the power derived is 10 megawatts. Because the blades spin fast, just 3 blades are enough to take more than half the energy of the wind blowing through the circular area they sweep. Betz’s law limits the amount of energy a turbine can extract from wind to 59%, as long as there are no other turbines nearby (turbines are spaced by 5-10 times the length of their blades). So the 10 megawatts calculated above is reduced to 5.9 megawatts, the maximum power that can be extracted. Wind power capacity has been doubling every 3 years, as turbines are relatively inexpensive to build and don’t require fuel. The US has built 45 gigawatts’ worth of wind turbine farms (2.3% of electric power generation). The anticipated capacity of China was 55 gigawatts at the end of 2011; see Figure III.8. Wind can produce electricity relatively cheaply, at 9.7 cents per kilowatt-hour. The last few paragraphs address issues that have been raised about wind power, and the author responds to each: a large grid of wind farms plus backups (such as batteries and emergency generators) can help stabilize wind power delivery in times of low wind; aesthetic issues concern some people; bird deaths are a concern, but the numbers due to wind turbines are small relative to collisions with tall buildings and windows; and there are concerns about delivery of electricity, because the strongest winds are generally in areas far from population centers.
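The cubic dependence of wind power on wind speed mentioned above comes from a standard derivation, and the Betz factor is likewise a textbook result; neither depends on the book’s specific turbine numbers.

```latex
% Kinetic energy per unit volume of air moving at speed v: \tfrac{1}{2}\rho v^2.
% The volume of air passing through the swept area A each second is A v, so
P_{\mathrm{wind}} = \tfrac{1}{2}\,\rho\,A\,v^{3}
% hence doubling the wind speed gives 2^3 = 8 times the power.
% Betz's law caps the extractable fraction at 16/27 of this:
P_{\mathrm{max}} = \tfrac{16}{27}\,P_{\mathrm{wind}} \approx 0.59\,P_{\mathrm{wind}}
```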
In Chapter 10, the author, as promised, tackles energy storage options, especially for solar and wind energy: batteries, compressed air energy storage, flywheels, supercapacitors, hydrogen and fuel cells, and natural gas.
For batteries, he touts the sodium-sulfur battery as the best option. Sodium-sulfur batteries have the advantage of a low price per charge-discharge cycle: they can be recharged about 4,500 times at 80% discharge, versus roughly 500 times for lead-acid and lithium-ion batteries. Here is what he had to say about lithium-ion batteries: “I expect that lithium-ion batteries will never be used for large-scale energy storage; they are too expensive. Lithium costs 40 times more per pound than sodium and 10 times more per atom – a more relevant measure for batteries. With a 9-fold recharge advantage and a 10-fold cost-per-atom advantage, sodium-sulfur has a 90-fold advantage over lithium-ion.” A disadvantage of sodium-sulfur batteries is that they cannot be scaled down and are not suitable for a wide range of applications; they have to be kept at a temperature of around 350 C and contain liquid sodium. A Japanese company is developing one that can operate below 100 C. The author is optimistic about the future of batteries. The market for newer, more expensive batteries was sustained because they were useful for even more expensive gadgets such as laptops, and the focus of research is on rechargeability and safety. He notes, however, that engineering development for batteries is linear, not exponential: improvements will come, but not at the fast pace seen in the past.
In compressed air energy storage, air is compressed to many times atmospheric pressure (200 atm is a typical figure), storing the energy expended by a motor-driven pump; compressed air is already used to run equipment in confined spaces with no ventilation, like mines. The energy is released when the compressed air is allowed to expand and run a turbine. One disadvantage is the weight of the tank, which is about 20 times the weight of the air it holds (or 5 times for a fiber composite tank). Another issue is that air heats up when compressed (up to 1370 C at 200 atm), so there must be a way to draw the heat away.
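The heating of compressed air mentioned above follows from the standard adiabatic relation for an ideal gas; the exact temperature reached depends on how the compression is staged and cooled, so this is only an order-of-magnitude guide, not a figure from the book.

```latex
% Reversible adiabatic compression of an ideal gas (\gamma \approx 1.4 for air):
\frac{T_2}{T_1} = \left(\frac{P_2}{P_1}\right)^{(\gamma-1)/\gamma}
% For P_2 / P_1 = 200 starting near room temperature, this predicts a final
% temperature of order a thousand degrees Celsius, which is why heat must be
% drawn away (or stored for reuse) during compression.
```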
Energy can be stored by using a motor to spin a heavy flywheel. When a load is added (a generator, for instance, to produce electricity), the rotation slows as the kinetic energy is converted to electricity. One of its advantages is its ability to condition energy and smooth out power delivery. The Bevatron atom smasher in Berkeley used flywheels of about 10 tons each. Flywheel energy storage density is comparable to that of lithium-ion batteries, about 30 watt-hours per pound. Beacon Power’s current setup costs $1.39 per kilowatt-hour. The high cost makes the author think that flywheels will continue to be used to condition energy but not for large-scale energy storage.
Capacitors are composed of two metal plates carrying opposite charges and separated by an electrical insulator. They can store energy for longer periods of time than batteries can. Newly developed supercapacitors can store as much as 14 watt-hours per pound, about a third of the energy of lithium-ion batteries of similar weight, and they cost about 3 times as much. Supercapacitors are probably best used in conjunction with a battery, providing quick bursts of power that batteries can’t; they can also improve the efficiency of regenerative braking, absorbing and releasing energy more quickly than batteries can.
A fuel cell is a battery that does not need to be recharged because the chemical reactants are added as “fuel”. In a hydrogen fuel cell, hydrogen and air are pumped in to generate electricity. Efficiency is low, about 25%, and the author is not sure fuel cells will replace batteries or generators.
In the last section, the author compares what he deems the best energy storage technology, the sodium-sulfur battery, to a natural gas generator. His calculations show that sodium-sulfur batteries cost $5 per deliverable watt of capital, while natural gas costs $1 per deliverable watt. If the cost of fuel is taken into account, natural gas easily wins out over solar or wind as the energy source. Batteries compete only if they are run at a low duty cycle, e.g., 1 hour per day, in which case the per-watt capital cost drops to 50 cents. The author concludes that natural gas is hard to beat.
The author starts off Chapter 11 with a list of key items he thinks are important to know about nuclear energy (he calls it an “executive summary”). A brief statement about each of these key items is given below. For more details, see reading notes.
Unlike nuclear bombs that contain highly enriched uranium, nuclear reactors cannot explode because they use low-enriched uranium.
Capital costs for nuclear power plants are high, but the electricity they deliver is relatively cheap because fuel and maintenance costs are low. Nuclear power plants now have a very high capacity factor, operating about 90% of the time (downtime is due only to maintenance), compared with the much lower capacity factors of earlier decades; this has raised revenue about 1.6-fold and lowered the cost of delivering electricity.
Small modular reactors (300 megawatts or less) may be the solution to the high capital cost of building a new reactor. They reduce the initial investment, and their modular design allows power capacity to be built up incrementally.
There is enough economically recoverable uranium to last 9,000 years at current usage if low-grade uranium ore is used. Uranium ore contributes only about 0.2 cents per kilowatt-hour to the cost of electricity.
The eventual death toll from the Fukushima nuclear accident and meltdown, after the plant was hit by an earthquake and tsunami in 2011, is estimated at only about 100 out of the total 15,000 deaths, and maybe fewer, since thyroid cancer is readily treatable.
Nuclear waste storage is technically feasible but suffers from bad public perception and political posturing. In the US, nuclear waste contains plutonium (in France it does not, because the plutonium is extracted). Here are the reasons the author thinks nuclear waste is not the problem it is made out to be: plutonium has a long half-life of 24,000 years and thus does not contribute much to the radioactivity of the waste, and it is highly insoluble in water, so very little will end up in groundwater. The greatest danger from plutonium is inhalation; it takes only 0.00008 g inhaled to cause one cancer (versus 0.5 g if dissolved in water).
Construction of new nuclear power plants will be “exploding” in the next several years in places like China and France; Japan is helping build some of these even as some of its own nuclear reactors are taken offline.
The author devotes Chapter 12 to a promising energy technology that has been in development for decades: fusion. Fusion is a promising source of energy because it can be fueled by the most abundant element in the ocean (by number of atoms), hydrogen. Fusion can also be fueled by deuterium, which, while only about 1/6,000 as abundant as ordinary hydrogen, can be inexpensively separated from it (the next heavier isotope of hydrogen, tritium, is too rare but can be generated). The optimism about fusion as an energy source has been around for decades. Fusion has actually been achieved, in the form of the hydrogen bomb, in 1953; as a safe source of energy, however, a more controlled process needs to be developed. Some of the advantages of fusion listed by the author include the abundance of the primary fuel, hydrogen, and its relative lack of radioactive waste. The author points out, however, that the neutrons produced in a typical fusion reaction (deuterium + tritium → helium + neutron) can stick to materials and make them radioactive, albeit at a level smaller than the radioactivity of a uranium fission plant. Because tritium is quite rare (16 pounds in all the world’s oceans), some fusion reactors are being designed so that the product neutrons are used to breed tritium by bombarding lithium atoms. In one other fusion reaction, hydrogen + boron → 3 helium + gamma ray, no neutrons are formed; the gamma rays don’t produce any significant radioactivity, just a lot of energy.
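The reactions mentioned above can be written out explicitly; the energy releases given here are standard values from nuclear physics rather than figures quoted from the book.

```latex
% Deuterium-tritium fusion (the reaction most tokamak and laser designs target):
\mathrm{^{2}H + {}^{3}H \rightarrow {}^{4}He + n} \qquad (\approx 17.6\ \mathrm{MeV\ released})
% The neutron carries most of this energy and can activate reactor materials.
% Proton-boron fusion produces no neutrons:
\mathrm{p + {}^{11}B \rightarrow 3\,{}^{4}He} \qquad (\approx 8.7\ \mathrm{MeV\ released})
% Tritium breeding uses the D-T neutrons on lithium, e.g.
\mathrm{n + {}^{6}Li \rightarrow {}^{4}He + {}^{3}H}
```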
In the next few sections, the author discusses 5 of the most talked-about proposals for developing fusion as an energy source. The tokamak, whose name is a Russian acronym for “toroidal chamber with magnetic coils,” was invented in Russia in the 1950s and has dominated the attention and research effort of the last 60 years of fusion exploration. In a tokamak, the fusion is thermonuclear: extremely high temperatures are used to overcome the electrical repulsion between hydrogen nuclei so they can get close enough to fuse through the short-range strong nuclear force. The National Ignition Facility is located at the Lawrence Livermore Lab; the approach being developed there uses lasers to heat a small amount of hydrogen to very high temperatures (tens of millions of degrees) and ignite it, getting the fusion of deuterium and tritium started. The author expects this design, when developed, to be the first to reach the break-even point in controlled fusion. In beam fusion, a beam of accelerated particles collides with target atoms; this technique is already used in commercial neutron generators. In muon fusion, discovered in 1956 in a cold liquid-hydrogen chamber, a negatively charged muon (207 times heavier than an electron) binds to a proton in a hydrogen atom, ejecting its electron; the neutral muon-proton atom can then collide with a deuteron and fuse, releasing energy and creating a helium nucleus. The author devotes the last section to the story of the 1989 claim that cold fusion had been achieved, “verified” by scientists from top institutions, only to fizzle out as the consensus evolved to declare the results unverifiable and the methods questionable.
In Chapter 13, the author discusses examples of biofuels and warns right away that some of what he is about to say may offend people passionate about biofuels; he uses a somewhat tongue-in-cheek tone in some of these sections. Right off the bat, he lists some of the contentious conclusions he has arrived at: “corn ethanol should not count as a biofuel as it does not reduce greenhouse emission; biodegradable and recycling are overhyped from a global warming perspective; ethanol from cellulose offers the best hope for a significant biofuel component to solving the energy problems; and the main value of biofuels is not reducing global warming but in increasing energy security.” The author gives the following reasons why ethanol from corn should not be considered a biofuel: it uses a lot of fertilizer; it takes a lot of oil and gasoline to run the farm machinery for growing corn; and the sugar it yields per acre, fermented into ethanol, is not enough to make the fuel carbon neutral or to yield a net carbon dioxide reduction. Using corn to make ethanol has also raised prices for corn-based food. Ethanol from corn does have the advantage of providing another local source of transportation fuel and contributing to energy security (about 3% of US consumption and 5% of US imports, by the author’s estimate), despite the fact that it provides only 2/3 of the energy of gasoline on a per-gallon basis.

From the global warming point of view, biodegradable materials are “bad” because they decompose to produce carbon dioxide. The author concedes, however, that biodegradability has benefits from an aesthetic and animal-welfare point of view, reducing the plastic that ends up in our oceans, kills animals, and clutters the landscape. The author does not consider waste cooking oil a biofuel; he argues that using waste oil as fuel adds carbon dioxide to the atmosphere and is no better than petroleum. He also considers recycling paper bad for global warming: letting paper biodegrade adds carbon dioxide to the air instead of burying and sequestering its carbon, and if paper is not recycled, more trees have to be grown to make paper, which removes carbon dioxide from the atmosphere. The Altamont landfill in California generates 13,000 gallons a day of liquefied natural gas, which it uses to operate its waste and recycling trucks; this captures 93% of the landfill methane, while the other 7% leaks into the atmosphere as a potent greenhouse gas.

Cellulose, normally indigestible by humans, can be converted to the liquid fuel ethanol by fermentation using enzymes from microorganisms, fungi, or yeast. The top candidates for the cellulose source are switchgrass and miscanthus, a grass that grows over 11 feet tall and can yield three crops per year. Miscanthus is projected, in theory, to produce 1,150 gallons of ethanol per acre, compared to only 440 gallons per acre for corn. Cellulose provides about 1/3 the energy of an equal weight of gasoline. The author estimates that replacing the 1 billion tons of oil we use each year would require growing miscanthus over an area 560 miles on each side (6 times the size of Iowa), assuming no energy loss in the conversion. The author thinks algae have even better potential for producing fuels: the “right kind of algae” can produce oil that can be used as diesel without expensive conversion steps in between.
Algae are very efficient at producing biomass from sunlight: every cell can produce biomass, compared to just the leaf surface cells in grasses. Proponents of algae for producing oil claim that algae can “produce ten times the energy per acre that Miscanthus can produce”. Commercial ventures lead the research and development of this oil-producing technology; genetic engineering, primarily by inducing mutations, is the technique being used to find the “right kind of algae”. Algae production can be very sensitive to environmental factors and biological contamination, whereas growing miscanthus is less vulnerable to extreme weather and invasive species. In the end, the author does not put a high value on bioethanol or other biofuels for limiting the greenhouse effect: even if biofuel replaced gasoline, there would be only a limited reduction in the predicted temperature rise. In terms of energy security, bioethanol may come too late and may be too expensive to compete with cheaper fuels like compressed natural gas, synfuel, or shale gas.
In the beginning of Chapter 14, the author reiterates that while the US is running low on oil, this is not the case for natural gas and coal. As he points out, while this helps energy security, it is not good for greenhouse emissions. A large supply of natural gas and coal also does not help energy sectors that require liquid fuels, especially transportation, whose infrastructure is built around oil. Shale oil and shale gas are fossil fuel alternatives discussed in a previous chapter; in this chapter the author discusses some other “unconventional” sources of fossil fuel.
Synfuel
The Fischer-Tropsch chemical process for converting coal to oil was developed in Germany and used extensively there during World War II. This process, referred to today as CTL (coal to liquid), has been used by the company Sasol in South Africa to produce oil during the embargo years of the apartheid era. In 2011, Sasol announced plans to build a gas-to-liquid (GTL) plant in Louisiana to produce oil from natural gas, projected at about 100,000 barrels per day of diesel fuel. The author predicts growth in the construction of synfuel facilities; subsidies are no longer necessary because of lower natural gas prices.
Coal Bed Methane
Coal bed methane is methane extracted from deep coal deposits by drilling down and allowing the methane to escape; fracking and horizontal drilling can be used as well. This methane is relatively pure, containing no hydrogen sulfide or heavier hydrocarbons like propane and butane, and is nicknamed “sweet gas”.
Coal Bed Gasification
In this process, deeply embedded coal is partially burned to extract its energy without having to dig it up and bring it to the surface. The partial combustion produces other fuels, carbon monoxide and hydrogen, a mixture called coal gas. Another advantage of this process is that the ash is left buried. The coal gas can also be collected as feed gas for the Fischer-Tropsch process and for methanol synthesis. The disadvantages include heat loss, wasted unburned coal, and potential pollution of the water table.
Enhanced Oil Recovery (EOR)
Only about 20% of the oil in a reservoir can be extracted through upward movement driven by its own pressure, because the oil is sparsely distributed in rock pores and cracks. In secondary oil recovery, the oil is flushed out by water, natural gas, or carbon dioxide, boosting recovery to 40%; using carbon dioxide has the added advantage of sequestering it, although this is a very small fraction of what would need to be removed from the atmosphere. Enhanced oil recovery methods aim to recover the remaining 60% through the following techniques: reducing the oil’s viscosity by heating it with injected steam, or by pumping down air or oxygen so that some of the oil burns and heats the rocks; pumping down soap (surfactant) to release the oil from the rocks; and sending down bacteria that can break down the more viscous, longer-chain hydrocarbons.
Oil Sands
Canada is third in the world, after Venezuela and Saudi Arabia, in recoverable oil reserves. Most of this oil is in the form of oil sands (or tar sands): heavy crude oil called bitumen mixed with clay and sand. Estimates run from a conservative 200 billion barrels to an optimistic 2 trillion barrels (from Shell Oil). Two trillion barrels would be enough to supply the US with oil for 250 years, or the world for 60 years, at current consumption. Objections to exploiting the Canadian oil sands include the ugly open-pit mines that recovery would leave (because the oil is largely near the surface), local water pollution, and the requirement for large amounts of water. The recovery process uses up about 12% of the energy of the oil extracted.
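The 250-year and 60-year supply figures quoted above are straightforward to sanity-check. The sketch below uses the roughly 20 million barrels per day of US consumption cited in Part I; the world figure of about 90 million barrels per day is my own assumed round number, not one from the book.

```python
# Order-of-magnitude check of the oil-sands supply estimates.
optimistic_reserve = 2e12          # barrels (Shell Oil's optimistic estimate)

us_barrels_per_day = 20e6          # cited in Part I of the book
world_barrels_per_day = 90e6       # assumed round number, not from the book

us_years = optimistic_reserve / (us_barrels_per_day * 365)
world_years = optimistic_reserve / (world_barrels_per_day * 365)

print(f"US supply:    {us_years:.0f} years")     # ~270 years, close to the ~250 quoted
print(f"World supply: {world_years:.0f} years")  # ~60 years, matching the book's figure
```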
The author devotes Chapter 15 to a discussion of other alternative sources that hold so little promise of cost-effectiveness and efficiency that he refers to them as “alternative alternatives”. The author thinks that hydrogen automobiles were never a good idea because of two disadvantages they share with electric cars. Hydrogen requires a lot of energy to extract from water by electrolysis or to produce by reacting methane with water, a process that also produces carbon dioxide, and using hydrogen as fuel returns only part of the energy put in; it is much cheaper to use methane as a fuel, by combustion or in a methane fuel cell. The author lumps geothermal, tidal power, and wave power into a low-power-density category: they mainly benefit areas where there is a high concentration of energy from these sources. Nevertheless, these have been commercialized, and some have proven economically viable while others are still working through their high initial capital costs. See the Reading Notes for more details.
From http://books.wwnorton.com/books/Author.aspx?id=8648 (Accessed May 15, 2015):
Richard A. Muller is a professor of physics at the University of California, Berkeley. He is the best-selling author of Physics for Future Presidents and The Instant Physicist. He and his wife live in Berkeley, California.
READING NOTES
PART I: ENERGY CATASTROPHES
· Energy use in the United States alone is huge: about 20 million barrels of oil each day. Because of these huge numbers, energy accidents normally make it on the news in a big way as well.
· In this section, the author tackles 3 major energy catastrophes and offers facts and a suggestion on how to interpret the ramifications of these accidents.
· “We need to get our facts right, put the consequences in perspective, clear up misimpressions, and get to the core of what really happened, or is still to happen.”
Chapter 1: Fukushima Meltdown
• In March 2011, a huge earthquake measuring 9.0 on the Richter scale hit Japan, generating a tsunami 30 feet high, and up to 50 feet in some places. About 15,000 people died and 100,000 buildings were destroyed.
• One recipient of the huge amount of energy unleashed by this earthquake, through a 50-foot tsunami, was the Fukushima nuclear reactor. At the site, two people died due to the earthquake and one due to the tsunami. No deaths were reported from the nuclear meltdown that followed.
• Nuclear energy releases are huge: fission of an atom of Uranium 235 can produce 20 million times the energy released in the decomposition of a molecule of TNT.
• Along with energy, high-energy neutrons are released; these are the basis for the enormously rapid and huge energy release that fissile material is capable of. In a nuclear reactor, the energy production must be moderated: only 4% of the uranium fuel is uranium-235, and materials such as carbon or water are used to slow the neutrons, so that only one of the emitted neutrons triggers a new fission and a steady release of energy is maintained.
• Reactivity accidents result from runaway chain reactions: the fission process becomes uncontrolled, building slowly at first and then reaching an energy density that results in a powerful explosion.
• In the Chernobyl reactivity accident of 1986, what killed most people was the radioactivity released, not the reactor explosion itself. In the Fukushima incident, the reactor did not explode, and pumps initially kept cooling away the heat produced by residual radioactivity after the reactors shut down on impact. The cooling pumps stopped working after 8 hours, as there was no external source of power to keep them going because the loss of electricity from extensive infrastructure failure. Without the cooling pumps, the fuel overheated and melted, resulting in a release of radioactivity second only to the Chernobyl accident.
• The most dangerous radioactivity released is that from iodine-131 and cesium-137. I-131 has a half-life of 8 days and decays rapidly, releasing radioactivity as it does, making it the biggest source of radioactivity initially. When it enters the body, it accumulates in the thyroid, where it can cause cancer. I-131 absorption by the body can be mitigated by taking potassium iodide; normal iodine from this salt saturates the thyroid and prevents or slows the absorption of the radioactive isotope.
• Cs-137 decays more slowly, so its initial impact is lower but it lasts longer; its half-life is 30 years.
• Sr-90 also decays slowly. The slow decay means these isotopes are around longer and can deposit and accumulate in plants and animals that are consumed, concentrating in bones.
• An exposure of 100 rem or more will cause immediate radiation illness (nausea, weakness, loss of hair); at 250-350 rem, there is a 50% chance of death if untreated.
• The author offers the following rule of thumb for estimating excess cancers caused by radiation: (population × average dose in rem) / 2500. In the example he gives, a population of 22,000 with an average exposure of 22 rem may suffer an estimated 194 extra cancers (a worked version of this rule appears just after these notes on Chapter 1). To give some perspective, a 20% incidence rate of cancer in a population of 22,000 is about 4,400 cancers. Even though the number of cancers caused by the radioactivity is less than 5% of that, they probably will be detectable, as most of them will be thyroid cancers due to radioactive iodine exposure.
• Natural levels of exposure to radiation are about 0.3 rem from cosmic radiation and from uranium, thorium, and naturally radioactive potassium in the ground, plus another 0.3 rem from X-rays and other medical treatments. In Denver, Colorado, add another 0.3 rem from radon emitted by tiny concentrations of uranium in granite. Despite this, Denver has a lower cancer rate than the rest of the US, and the probability of dying of cancer from that extra dose is 0.00012, a number so small that it prompts him to ask rhetorically, “Should an undetectable danger play a major role in determining policy?” He further states that the International Commission on Radiological Protection recommends evacuation when the radiation dose exceeds 0.1 rem per year, one-third the extra dose that Denver gets; this threshold was used to mandate evacuations at Chernobyl.
• The Fukushima Nuclear Reactor was not built to withstand a 9.0 earthquake and a 50-foot tsunami.
• Should the Fukushima accident be used as a reason for ending nuclear power? The author offers the following guidelines: 1) “Make a nuclear power plant strong enough that if it is destroyed or damaged, the incremental harm is small compared to the damage done by the root cause.” And 2), the Denver dose should be used as the standard in planning a disaster response, e.g., the ICRP threshold for evacuation should be raised to at least 0.3 rem or 3 millisieverts.
• The author contends that the Fukushima reactor was designed adequately when viewed with these standards in mind.
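The author’s rule of thumb for excess cancers (referenced in the notes above) is easy to reproduce; the sketch below applies it to the example population given in the book.

```python
# The author's linear-dose rule of thumb:
# excess cancers ~ population * average dose (rem) / 2500.
def estimated_excess_cancers(population, average_dose_rem):
    return population * average_dose_rem / 2500.0


# The book's example: 22,000 people with an average exposure of 22 rem.
excess = estimated_excess_cancers(22_000, 22.0)
print(f"estimated excess cancers: {excess:.0f}")     # ~194

# For perspective, a ~20% lifetime cancer incidence in that population:
baseline = 0.20 * 22_000
print(f"baseline cancers expected: {baseline:.0f}")  # ~4,400
```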
CHAPTER 2: THE GULF OIL SPILL
· The author takes a hard look at references made by the president and others to the Gulf Oil spill as the “greatest environmental disaster of all time” by offering some factual perspectives on the damage wrought by the spill. The accident killed 11 people and injured 17 more.
o 6,000 dead animals due to the oil spill, versus 100 million to 1 billion bird deaths each year due to glass windows and another 100 million due to high-voltage electric lines
o Beaches remained relatively clean because BP hired fishermen to distribute buoys and barriers and spread dispersants to break up the oil, in contrast to the oil and tar that covered the Alaskan shores during the Exxon Valdez spill.
· Author’s description of the Deepwater Horizon accident: The oil rig sat above 5,000 feet of water. A flexible pipe 23,000 feet long connected it to the oil source 18,000 feet below the seafloor. When the rig exploded, the pipe was damaged and oil started gushing out at 26 gallons per second. The leak was not plugged until July 15, 2010. It is estimated that the spill released 250 million gallons, or about a million cubic meters. Despite the continued flow of oil, the size of the affected area did not increase further; the author surmises that this was likely due to the oil evaporating, dispersing in the water, sinking, or being cleaned up. On September 19, the well was officially sealed.
· The author estimates that with a spill area of about 10,000 square miles, if all this oil were dispersed uniformly in that volume of water, the resulting concentration would be less than 1 ppm, “below what is considered a toxic level” (see the sketch at the end of this chapter’s notes). The surfactants were added to break up the oil and keep big blobs from forming, so that more of the oil is accessible to oil-digesting bacteria and it doesn’t gum up bird feathers and animal fur.
· Natural oil leaks do occur in the seabed but they probably are only about 1% of the Deepwater spill.
· A year after the initial spill, measurements showed that 99% of the water samples tested across the entire region, including the 1,000 square miles closest to the wellhead, had no detectable oil residue or dispersant. Tourism was severely affected, with one estimate claiming a loss of 23 billion dollars over the ensuing three years; a year later, however, the Governor of Louisiana declared the “region reborn”.
· The author believes that the President’s characterization of the disaster was hyperbole and that the overreaction to the spill was even more damaging.
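A rough check (my own) of the dilution estimate above: the 250 million gallons are converted to cubic meters and spread through the water beneath the roughly 10,000-square-mile affected area. The mixing depth is an assumption of mine, since the notes do not record one; the point is only that the result lands around or below 1 ppm.

```python
# Rough dilution check for the Deepwater Horizon spill (illustrative only).
GALLON_M3 = 0.003785              # cubic meters per US gallon
SQ_MILE_M2 = 2.59e6               # square meters per square mile

spill_m3 = 250e6 * GALLON_M3      # ~0.95 million cubic meters ("about a million")
area_m2 = 10_000 * SQ_MILE_M2     # affected surface area
mixing_depth_m = 50               # ASSUMED mixing depth, not a figure from the book
water_m3 = area_m2 * mixing_depth_m

ppm_by_volume = spill_m3 / water_m3 * 1e6
print(f"spill volume: {spill_m3:.2e} m^3")
print(f"concentration: ~{ppm_by_volume:.1f} ppm by volume")   # ~0.7 ppm, below 1 ppm
```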
CHAPTER 3: GLOBAL WARMING AND CLIMATE CHANGE
· The level of carbon dioxide in the atmosphere has increased by 40% over the last century due to human use of fossil fuels. Carbon dioxide makes up 0.04% of the atmosphere. Water is a more significant greenhouse gas, but we have no control over the amount that evaporates from bodies of water. Methane is also an important greenhouse gas. The oxygen, argon, and nitrogen in the atmosphere are transparent to infrared radiation.
· Physical calculations estimate that the earth’s temperature would be below freezing if not for the presence of greenhouse gases.
· In 2007, the IPCC reported that global temperature rose by 0.64 Celsius in the previous 50 years. During those same years, the land temperature rose by 0.9 Celsius. Land temperatures rise more because heat concentrates near the land surface, whereas in the ocean the heat spreads down to depths of 100 feet. In the same report, the IPCC states that global warming has been happening since the 1800s, but the anthropogenic contribution is hard to determine because part of that earlier warming was due to changes in the sun’s intensity.
· Despite the smallness of this temperature rise, scientists including the author are more concerned about greater warming occurring in the future.
· The author’s group, through the Berkeley Earth Surface Temperature project, did an extensive analysis of temperature data previously not included in the IPCC analysis and a re-analysis of existing temperature records (1.6 billion temperature measurements, 14 data sets, 38 stations), putting in measures to avoid data-selection and correction bias and station-quality bias, and testing for urban heat bias. To the author’s surprise, they came up with the same temperature rise over land reported by the IPCC, 0.9 Celsius, concluding that “none of the legitimate concerns of the skeptics had improperly biased the prior results,” which suggested to the author that “those groups had been vigilant in their analysis and treated the potential biases with appropriate care”.
· See page 76 (iBook version) for the group’s plot of the average global temperature rise over land from 1800 to the present. Dips in the otherwise rising temperature plot are attributed to volcanic eruptions and correlate with ice-core measurements of sulfate particles. There was close agreement between the temperature-rise curve and the carbon dioxide-rise curve when a smooth fit was done that included the volcanic eruption data, better than the author’s attempts using a parabola and other polynomial fits. “Our fit shows that one could ignore these (sunspot) cycles and get an excellent explanation of most of the data considering only carbon dioxide and volcanoes”. The precise fit between the temperature and CO2 curves “suggests that most – maybe all – of the warming of the past 250 years was caused by humans,” according to the author.
· Based on these results, the author offers the following prediction: if the CO2 concentration increases exponentially and the greenhouse effect increases logarithmically, then the warming should grow linearly, so doubling the time interval doubles the temperature rise (see the sketch at the end of this chapter’s notes). For example, assuming exponential growth of the CO2 concentration, by 2052 it would have doubled to 560 ppm, with a corresponding rise in land temperature of 1.6 Celsius. Forty years after 2052 there would be an additional 1.6 Celsius rise, and so on every 40 years until the CO2 rise is mitigated.
· The logarithmic dependence of the greenhouse effect on CO2 concentration stems from, according to the author, “the fact that most of the effect comes from the edges of the CO2 absorption lines which only broaden logarithmically”.
· In the section on tipping points, the author discusses some positive and negative feedbacks that may occur as a result of increased CO2 and warming:
· A strong positive feedback can lead to runaway greenhouse warming like the one that makes Venus a very hot planet. The tipping points for this to happen that have so far been identified are:
o The Antarctic ice sheet loosening and slipping into the sea, producing over 100 feet of sea level rise
o Melting of Greenland’s freshwater ice, which could disrupt the Gulf Stream and change ocean current flow as far away as the Pacific
o Melting of permafrost and release of the potent greenhouse gas methane leading to further warming
o Release of methane from the seabed as the Arctic water warms
· An example of a negative feedback is an increase in water vapor cloud cover; a mere 2% increase in cloud cover could cancel the warming expected from a doubling of the CO2 concentration. Poor understanding of cloud-cover mechanisms contributes much of the uncertainty in warming predictions.
· Local variability in temperature changes can mask the experience of global warming in different places. About a third of temperature measurement stations report decreasing temperatures. The author claims that a global increase of 2-3 Celsius will be felt globally and local temperature trends cannot negate it.
· The author believes that the only solid evidence of warming is the temperature data; all other effects attributed to warming are “either wrong or distorted”. He presents a review of some of these claims:
o Hurricanes: the apparent increase in hurricane frequency is more likely due to an increased capacity to detect them, even offshore. Data for hurricanes that hit the US coast show no increase. His conclusion: “the rate of hurricanes hitting the US has not been increasing”.
o Tornadoes: measurements show a decreasing rate of tornadoes, verified by statistical analysis. Global warming theory predicted that tornadoes might increase, not that they would increase; more storms may be generated because of the energy available in a warming climate, but it is the temperature gradient, not the absolute temperature, that matters more for tornado formation. See graph.
o Polar warming: older climate models actually predicted that Antarctic ice would increase, not decrease; a higher rate of evaporation due to sea warming can increase the amount of snow falling on Antarctica, which stays below freezing even with warming temperatures. Satellite measurements showed, however, that the Antarctic has lost 36 cubic miles of ice. The models were tweaked and were then able to reproduce this result. Modeling Antarctic ice can produce unreliable results because Antarctica covers only 2.7% of the globe, too small an area for precise predictions. The models and observations for the Arctic are consistent with each other: decreasing ice. The author states that it is difficult to determine the cause, global warming and/or decadal oscillations in sea surface temperature and pressure.
o Hockey stick data: adjustment of temperature data, purportedly to “hide” data that seemed to indicate decreasing temperatures, by replacing proxy data with actual thermometer data. See “Climategate”.
o Sea Level Rise: IPCC reports that sea level has risen by 8 inches in the last century (from records of tide levels). The rise could be attributed to warmer waters which expand and the melting of glaciers. It is difficult to determine the ultimate cause. The melting of glaciers in Greenland is attributed to soot pollution. IPCC predicts a further 1 – 2 feet rise in sea level through the remainder of the century.
· “Can global warming be stopped assuming it is a threat?” A treaty requiring an 80% cut in greenhouse emissions by the US by 2080 and a 70% cut in emissions intensity by China and the developing world by 2040 would not result in decreased atmospheric carbon dioxide concentrations, according to the author. Under this treaty and the numbers involved, he calculates that total atmospheric CO2 would rise above 1,000 ppm (currently around 400 ppm), which, using IPCC models, would lead to a global temperature increase of 3 Celsius. In 2010, China’s CO2 emissions were 70% higher than those of the US, while its CO2 emissions per capita were only 45% of the US level. President Obama did not sign the Copenhagen treaty because of China’s refusal to allow inspections. China’s emissions intensity is now 5 times that of the US; growing at 6% every year relative to the US rate, China will surpass US per capita emissions by 2025. Because energy use correlates with wealth, the author asks rhetorically, “If you were the president of China, would you endanger progress to avoid a few degrees of temperature change”? Slowing growth in China could trigger political instability, he adds. See figures 1.6 and 1.17. “Every 10% cut in US emissions is negated by 6 months of China’s emission growth.” Reducing its dependence on coal and switching to natural gas can help reduce China’s CO2 emissions (natural gas releases only half the CO2). The author highlights the important role of the developing nations in decreasing CO2 emissions, even though most of the CO2 now in the atmosphere came from the developed nations. The emerging economies would need to cut emissions intensity by 8-10% per year just to stabilize greenhouse emissions. Low-cost solutions and a switch from coal to natural gas are required to help China cut emissions.
· Geoengineering: some proposed solutions are listed below. The author believes these solutions may never be taken seriously because of the danger of further altering the earth’s geochemistry and atmospheric chemistry without knowing the ultimate consequences.
· Dumping iron in the ocean to encourage plant growth
· Cloud-seeding methods to increase cloud formation
· Releasing sulfate particles into the stratosphere to form aerosols that would reflect sunlight. “A simple calculation suggests that just one pound of sulfates injected into the stratosphere could offset the warming caused by thousands of pounds of carbon dioxide.”
· On the global warming controversy, the author’s position is this: “The evidence shows that global warming is real, and the recent analysis of our team indicates that most of it is due to humans”. He refers to global warming as both a scientific conclusion and a secular religion for both what he calls “alarmists” and “deniers”. He believes it is a threat that needs to be addressed even if it is difficult to quantify. He proposes that any solution should be inexpensive, because it is the developing world that will need it most. The lowest-hanging fruit right now is a switch from coal to natural gas while technologies are developed to make other sources affordable. An electric car is an expensive solution that produces more CO2 if the electricity is provided by a coal-fired plant.
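Returning to the author’s warming projection noted earlier in this chapter, here is a minimal sketch (my own) of why exponentially rising CO2 combined with a logarithmic greenhouse response gives warming that grows linearly in time. The 280 ppm pre-industrial level and the 1.6 Celsius of land warming per CO2 doubling come from the notes above; the 40-year doubling time is my own choice, picked only to reproduce the “extra 1.6 Celsius every 40 years” statement.

```python
import math

# If C(t) = C0 * 2**(t / tau) (exponential CO2) and dT = S * log2(C / C0)
# (logarithmic greenhouse response), then dT = S * t / tau: linear in time.
C0 = 280.0     # ppm, pre-industrial CO2 concentration
S = 1.6        # Celsius of land warming per CO2 doubling (author's figure)
tau = 40.0     # ASSUMED doubling time in years, chosen to match the notes

def land_warming(t_years):
    concentration = C0 * 2 ** (t_years / tau)
    return S * math.log2(concentration / C0)

for t in (40, 80, 120):
    print(f"after {t:>3} years: CO2 = {C0 * 2**(t/tau):4.0f} ppm, "
          f"warming = {land_warming(t):.1f} C")
# after  40 years: 560 ppm, 1.6 C
# after  80 years: 1120 ppm, 3.2 C
# after 120 years: 2240 ppm, 4.8 C
```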
PART II: ENERGY LANDSCAPES
Energy use per capita has been shown to increase as a function of per capita GDP (see Figure II.1). The author poses the important question of whether energy use creates wealth or wealth creates more energy use, and believes it is probably a little of both. Because of this correlation between energy use and wealth, the increasing per capita GDP of emerging nations will likely result in more energy use globally.

Related to this is the cost of energy, and the author gives an example of how “out-of-whack” energy pricing can be, which adds complexity to the issue of energy use and availability: energy from one AAA battery costs 10,000 times more than the equivalent energy from an electric power plant ($1,000 per kWh versus $0.10 per kWh). Clearly the cost of energy depends on how it is delivered. Gasoline costs 2.5 times more than retail natural gas and 7 times more than wholesale gas. Despite this price difference, it is difficult for the US to wean itself off gasoline because of the high cost of switching away from the current gasoline delivery infrastructure, creating an “inefficient market” in energy. The author calculates the wide disparity in the cost per kWh of energy depending on the mode of delivery. Most of the cost of energy comes from mining, processing, and delivery; at the time of writing, the sum of these for solar energy was higher than for coal. He does point out, however, that coal is not really that cheap once the environmental consequences are taken into account.

Toward the end, the author points out that the cheapest form of energy is the energy that is never used. There are two aspects to this: making appliances more efficient so that the same benefit is received for less energy, and storing energy that has already been generated but is not being used. According to the author, the two main concerns of the energy landscape are energy security and climate change. An energy flow plot in the last section shows that only about 43% of the energy generated is used; the other 57% is lost as heat. About 83% of the total is generated from coal, natural gas, and petroleum. About 40% goes to generating electricity, transportation comprises about 28%, and industrial use about 24%. See the 2013 US energy flow chart downloaded from the LLNL website. The author puts this information in another perspective:
This amount of energy use per year is equivalent to 3,500 gigawatts of continuous power, or 3,500 large generating plants. That is about 12 kilowatts per person, assuming a US population of about 300 million. It is equivalent to burning 300 tons of fossil fuel EVERY SECOND, or 1 cubic mile of oil per year if all the energy came from petroleum. “Any proposed alternative energy sources must cope with this enormity.”
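A quick arithmetic check (my own) of the per-person figure above, using the same 3,500 gigawatts and 300 million people:

```python
# 3,500 GW of continuous US energy use spread over ~300 million people.
total_power_w = 3_500e9
population = 300e6

per_person_kw = total_power_w / population / 1e3
annual_energy_j = total_power_w * 365.25 * 24 * 3600

print(f"per-person power: {per_person_kw:.1f} kW")    # ~11.7 kW, i.e. roughly 12 kW
print(f"annual energy: {annual_energy_j:.1e} J")      # ~1.1e20 joules per year
```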
The US holds close to 730 million barrels of oil (ATTOW) in its Strategic Petroleum Reserve. The US has imported about 9 million barrels of oil per day over the past decade, so the Reserve would cover imports for only a little over two months. Moreover, pumping capability limits withdrawals from the Reserve to 4.4 million barrels per day.
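The arithmetic behind “a little over two months,” plus the drawdown limit, using the figures just quoted (my own check):

```python
# Strategic Petroleum Reserve coverage at the quoted import and drawdown rates.
reserve_bbl = 730e6                 # barrels in the reserve (ATTOW)
imports_bbl_per_day = 9e6           # average daily imports over the past decade
max_drawdown_bbl_per_day = 4.4e6    # maximum pumping rate from the reserve

print(f"days of imports covered: {reserve_bbl / imports_bbl_per_day:.0f}")                           # ~81 days
print(f"days to empty at the maximum drawdown rate: {reserve_bbl / max_drawdown_bbl_per_day:.0f}")   # ~166 days
```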
Margin of spare capacity has a big influence on the price of oil (margin of spare capacity is the amount of oil that could be pumped around the world minus the amount that is actually pumped). According to the author, “it is the continuing growth of the economies of the developing world that keeps the spare capacity low and therefore the price of oil high”. The author has two suggestions for building the margin of spare capacity: producing diesel fuel and gasoline fuel (synfuels) from coal and natural gas and exploiting recognized shale oil reserves.
The author cautions that when considering any energy technology, there needs to be a consideration of the difference between developing and developed countries. Installation and maintenance of solar power in the US are expensive because of labor costs, and cheap natural gas remains a strong competitor. In other countries where labor costs are lower, solar power may actually compete with natural gas.
Chapter 4: The Natural Gas Windfall
In this chapter, the author discusses the newest energy windfall: the development of technology to extract recoverable natural gas (cheaply?) from enormous reserves trapped in shale. According to the author, “the exploitability of these shale gases is the most important new fact for future US energy security – and for global warming - …”.
US natural gas reserves have grown over the last 12 years, according to Department of Energy and US Energy Information Administration figures:
2001 – 192 trillion cubic feet (Tcf)
2010 – 300 Tcf
2011 – 862 Tcf
Between 2001 and 2010, the US extracted about 20-24 Tcf.
Some estimates are as high as 3,000 Tcf.
The author differentiates how the government and companies make predictions. He notes that government estimates are more conservative because they have to base their estimates on proven reserves (recoverable supply). Companies err on the side of a “good bet” of supply.
The fraction of natural gas extracted from shale has grown:
1966 – 1.6%
2005 – 4%
2011 – 23%
2012 – 30%
See Figure II.3 for graph showing the growth of shale gas production.
For the same dollar value (early 2012 data), natural gas can provide 2.5 times more energy than gasoline.
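An illustrative energy-per-dollar comparison (my own sketch, with placeholder prices that are assumptions rather than the author’s figures) showing how a ratio of roughly 2.5 arises when natural gas is priced at retail:

```python
# Energy per dollar: gasoline vs. retail natural gas (illustrative prices only).
GASOLINE_BTU_PER_GAL = 125_000       # typical heat content of gasoline
NATGAS_BTU_PER_CF = 1_030            # typical heat content of natural gas

gasoline_price = 3.50                # $/gallon, ASSUMED early-2012 placeholder
natgas_retail_price = 11.00          # $/thousand cubic feet, ASSUMED placeholder

gasoline_btu_per_dollar = GASOLINE_BTU_PER_GAL / gasoline_price
natgas_btu_per_dollar = NATGAS_BTU_PER_CF * 1_000 / natgas_retail_price

print(f"gasoline: {gasoline_btu_per_dollar:,.0f} BTU per dollar")
print(f"natural gas (retail): {natgas_btu_per_dollar:,.0f} BTU per dollar")
print(f"ratio: {natgas_btu_per_dollar / gasoline_btu_per_dollar:.1f}x")   # ~2.6x, close to the 2.5x quoted
```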
Converting US energy use to natural gas is not trivial in most cases. Volume storage and delivery are an issue: even when compressed, natural gas takes up three times the volume of gasoline. ATTOW, some 130,000 taxicabs and trucks have been converted to CNG, and existing gasoline engines can easily be converted to run on natural gas. CNG has ten times the energy per gallon of lithium-ion batteries, so it is a competitor to the electric vehicle.
In 2013, natural gas provided about 27% of the US energy needs (updated data from LLNL energy flow chart for 2013).
Natural gas is released from coal and shale by pumping pressurized water down a pipe to crack the rock and release the gas. Hydraulic fracturing (fracking) and horizontal drilling are the two key technologies that have made extraction of natural gas from shale economically viable. In a US EIA survey of 32 countries (Figure II.8), there are estimated to be about 6,622 Tcf of shale gas reserves, 13% of which are in the US. France is estimated to have about 100 years’ worth of natural gas recoverable from shale (ATTOW, fracking is banned in France) yet still imports 95% of its natural gas. China is estimated to have about a 400-year supply of natural gas in shale reserves. Advantages of natural gas include producing only half the greenhouse gases that coal does, with much lower local pollutants (sulfur, mercury, carbon particles).
Another potential source of methane being explored is methane hydrate, or clathrate, discovered deep in the ocean, usually along coasts and continental shelves. This form of methane is mixed with water in a 1:5 ratio (more water) and is thought to form when methane seeping from sea-bottom sediments mixes with cold water (4 Celsius) at high pressure (~50 atm, at least 1,500 feet down), causing the water to form an ice cage that traps the methane. As shown in Figure II.9 in the book, methane hydrate looks like ice cubes that burn. Estimates of methane hydrate deposits range from 10 to 100 times the amount of shale gas. The source of the methane is unknown; it could be a bacterial product or primordial methane, but it currently does not look like it is associated with fossil carbon. The extraction process, ATTOW, is not trivial, as most of the methane hydrates are further mixed with clay, and the salt water is corrosive. Methane itself contains enough energy to pay for its recovery. There is a danger, however, of leaking methane, which is itself a potent greenhouse gas, 23 times more effective than carbon dioxide. Furthermore, some scientists believe that a release of methane hydrates led to the Permian-Triassic extinction about 250 million years ago, which killed 96% of all marine species.
Part II - Chapter 5: Liquid Energy Security
In a somewhat facetious tone, the author rhetorically asks “What energy crisis?” (in the US) based on the following: enough coal reserves to last a century, huge reserves of natural gas and oil in shale, lots of sun and wind energy, and cheap uranium (uranium ore cost is only 2% of electricity cost). The author clarifies then, that what the US is having is a “transportation fuel crisis” due to an oil and liquid fuels shortage. In Figure II.11, the author shows that if you consider the US reserves of natural gas, coal, and oil, the US has 1,470 billion barrels of oil equivalent and leads a pack of countries, including Saudi Arabia, making the US “the king of fossil fuels”.
In the discussion of oil, the author lumps the following together as synonymous with oil: gasoline, diesel, jet fuel, and petroleum.
In the mid-1800s, whale oil was used for lighting homes and businesses until it ran out because of the decimation of the whale population. Whale oil production peaked in 1845 at 15,000 gallons a year and then began a decline that saw its price double by 1852. In 1859, rock oil, or petroleum, was discovered in Pennsylvania and was initially used primarily as kerosene for lamps. The discovery of petroleum, however, is what made the internal combustion engine possible, which led to automobiles and airplanes. The shortage of whale oil drove the search for new oil, and thus one can think of petroleum as having once been an “alternative” energy source.
Although coal was able to run automobiles, gasoline delivers 60% more energy for the same mass. It is also incredibly cheap: assuming a price of $3.50/gallon and 35 mpg, it costs 10 cents per mile to drive with up to 5 people in the car as the author noted.
A widely used concept for predicting resource availability is Hubbert’s peak: the point of maximum production of a resource commodity. The US hit its Hubbert’s oil peak in the 1970s; the world is close to hitting its own.
The author points out that the obvious substitutes for petroleum are natural gas, synfuel, and shale oil.
President Carter created the Department of Energy in the late 1970s to wean the US off its dependence on foreign oil and to explore alternative energy sources. By 1984, oil imports had dropped by 50%, but they rose again, and by 1994 imports exceeded the 1977 peak. In 2011, the US imported 3.05 billion barrels of oil, exceeding domestic oil production and accounting for 53% of the trade deficit. When Reagan became president, he eliminated the alternative energy programs as the price of a barrel of oil dropped from $111 during the Carter years to $22.
Alternative energy sources have a difficult time competing because of the cheapness of oil-based energy; Saudi Arabia can drill oil for about $3 a barrel. The market price of a barrel of oil can fluctuate between $20 and $100, rising when demand outstrips supply.
Synthetic fuel, or synfuel, is one solution to the liquid fuels shortage. Synfuel can be derived from the liquefaction of coal (CTL, coal to liquid) or natural gas (GTL, gas to liquid). The Fischer-Tropsch process was the first chemical procedure used to manufacture synfuel. Invented in the 1920s, it was used successfully to provide liquid fuels from abundant coal by Nazi Germany in the 1930s and 1940s and by South Africa during the apartheid era.
Synfuel can cost up to $60 per barrel to make which makes its viability as an economical replacement questionable especially if the Saudis can lower oil prices easily, according to the author.
Part II – Chapter 6: Shale Oil
Along with natural gas, shale also contains oil that can be extracted. The amount of oil deposits in shale is estimated to be over 1.5 trillion barrels, 5 times more than the oil reserves of Saudi Arabia. As with any other resource, this source of oil was not considered until oil became so expensive that the price of shale oil extraction became competitive. In a nutshell, the author describes the classic idea of how this oil is extracted: the shale is mined then heated to push out the oil-related material called kerogen. The kerogen can then be converted to diesel and gasoline in a process called retorting. The waste generated is huge, exceeding the volume of the actual rock mined.
Companies like Shell, Chevron, and Exxon Mobil have been involved in developing the technology for shale oil extraction. Shell’s method, called the “In-Situ Conversion Process”, involves heating the rock 1-2 km underground with electricity to temperatures of 650-700 Celsius, letting it simmer for 3-4 years, and then employing fracking and horizontal drilling to extract the smaller hydrocarbons broken off from the kerogen. As energy-intensive as it sounds, the author notes that the process produces about 3.5 times more energy than it uses. Shell’s estimated cost is $30/barrel; the industry may remain profitable as long as the price of oil stays above $60/barrel. There are environmental consequences, of course: this is yet another carbon-based fuel, oil can leak into the water table, and there are wastewater and water-shortage issues as with fracking for natural gas. Areas with significant extraction include the Colorado Plateau, the Bakken field in North Dakota, and the Eagle Ford Formation in Texas. It is estimated that by the end of this decade, 25% of US oil consumption may come from shale oil.
Part II – Chapter 7: Energy Productivity
The author devotes this chapter to what he calls “cheaper – than – cheap” energy source: increasing energy productivity and efficiency.
Half of the older homes in the US are estimated to benefit from added insulation, according to a Department of Energy official (Art Rosenfeld). In one calculation (which can be accessed through the energysavers.gov link given in the chapter), installing insulation has a payback time of 5.62 years: the amount of time it takes to recover the installation cost through energy savings. His calculations show that after those years, the “capital” continues to pay 17.8% per year in the form of reduced heating and cooling costs. This rate will go up or down as the price of electricity changes.
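A minimal sketch (my own) of the payback arithmetic above; note that the 17.8% annual return is simply the inverse of the 5.62-year payback time. The $1,000 / $178 pair in the usage example is hypothetical, chosen only to reproduce those numbers.

```python
# Payback time and annual return for an efficiency investment.
def payback_years(installed_cost, annual_savings):
    """Years to recover the up-front cost from energy savings."""
    return installed_cost / annual_savings

quoted_payback = 5.62
print(f"annual return: {1 / quoted_payback:.1%}")           # ~17.8% per year

# Hypothetical example: a $1,000 insulation job saving $178 per year.
print(f"payback: {payback_years(1_000, 178):.2f} years")    # ~5.62 years
```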
Replacing incandescent lightbulbs with compact fluorescent lights yields a 209% return, according to the author’s calculations. He also estimates that over the 10,000-hour lifetime of a CFL, you would otherwise need 6 ordinary incandescent bulbs.
In Decoupling Plus, a conservation program in California backed by the government and implemented by the utility companies, the utility invests money in helping Californians buy energy-efficient appliances and conserve energy overall. The return for the utility comes from diverting investment dollars away from building new power plants (to meet growing demand) and toward conservation measures that reduce energy use, together with a promise by the state to let the utility raise prices. Customers benefit from the increased energy productivity, which decreases their energy costs despite the higher prices. See the chapter for a more detailed, albeit simplified, numerical explanation by the author. The term decoupling refers to the utility being decoupled from having to build more power plants, and the plus to the company being allowed to raise rates based on successful conservation investment. This scheme, invented and named by Art Rosenfeld (who went on to win physics and energy awards), is considered quite successful in California at holding down per capita electricity use: it has been stable since 1980, while in the US overall it has increased by 50%. The catch is that electricity consumers must not increase their electricity use just because they are using more efficient bulbs; the success of the program depends not just on energy efficiency but on lower overall power requirements.
Other “great investments” listed by the author include:
· Cool roofs – for example, roofs made of a thermoplastic material that can be colored yet still reflects more than 50% of infrared. White roofs reflect even better, but many consider them too bright. The author notes (ATTOW) that if you use an air conditioner, installing a cool roof might be a better, less expensive alternative to installing solar panels.
· More efficient autos – ATTOW, the US average mileage is 30 mpg; in Europe it is 50 mpg. Cars are least efficient when accelerating; hybrid technology fixed this problem with a battery booster, and most hybrids get better mpg in city driving than on highways. Because of the finite life of the battery, the true cost of an electric car can soar. The author also addresses the use of lighter materials to increase efficiency. Lighter cars have a reputation for not being safe. He points to a study showing that, while the heavier cars made by Ford, Chrysler, and General Motors are indeed safer than the lighter ones they make, these heavier cars are no safer than the lightest Japanese- and German-made cars. The same researchers found an interesting correlation between resale value and safety (measured as driver deaths per year per million cars): the higher the resale value, the safer the car, regardless of its price when new. See Figure II.15.
· Energy-efficient refrigerators – The energy efficiency and price of refrigerators have both improved since 1974, which the author attributes to government mandates and market competition. The average refrigerator in 1974 was 14 cubic feet yet consumed more energy and cost more than today’s average 23-cubic-foot model. Today’s refrigerators have more efficient motors and better insulation. The author puts the national savings in perspective: if today’s refrigerators had the efficiency of the 1974 models, the country would need another 23 gigawatts of power plants.
· The McKinsey chart – This chart was created by the consulting firm McKinsey and Company. It resulted from a study done analyzing actions that may reduce carbon emissions and their profitability or added cost. See Figure II.16 for the detailed information. He also relates an excerpt from Amory Lovins’ book Natural Capitalism that told the story of Dow Chemical employees coming up with energy-saving proposals that gave huge returns, resulting in $110 million dollars in payments to shareholders every year by 1993.
In the next section, the author lists and describes what he opines as “feel-good measures” that may save energy but only in limited circumstances:
· Buses – a study by the Institute of Transportation Studies at Berkeley found that the break-even point for bus transportation in suburbs depends on population density: public transportation saves energy, or at least breaks even, where there are more than 15 households per acre; below that, buses actually use more energy.
· Recycling paper – according to the author, recycling paper neither saves trees nor reduces greenhouse emissions. Trees for paper are grown specifically for the purpose of making paper.
· Power blackouts – the interconnection of large numbers of power plants, transmission lines, transformers, and users (the grid) makes electricity delivery more reliable in the US. Operational problems at one plant can be covered by another plant supplying the needed electricity. The system, however, cannot handle sudden spikes in demand, which can lead to cascading power plant failures like those in New York and Massachusetts in 2003: there is no way to limit the current draw when many air conditioners are turned on at once, so generators simply start to overheat. The author lists three solutions. One is to build small-scale natural gas power plants for use on high-demand days; already done in California, this is an expensive approach because of the capital investment and the poor returns, since these plants run only a fraction of the time. Another is for utilities to decrease the voltage on the line; air conditioners still run, but at reduced power. A third, which California has also used, is rotating brownouts to spread out the sudden high demand.
In controlling electricity use, the author welcomes the role of market forces. He favors dynamic pricing of electricity, with the price rising when demand is high. This is not a popular option, however, because of its unpredictability. The author suggests that smart meters can help consumers program appliances to turn on and off depending on when demand, and therefore price, peaks. For example, electricity enters the home at two voltages: 120 volts for lighting and small appliances and 240 volts for air conditioners, washers, dryers, and other high-load appliances. One way to program a smart meter is to turn off the 240-volt circuits at peak times. When smart meters first came out in California, the three main complaints were overcharging, exposure to microwave radiation, and loss of privacy; the author addresses all three in the last few paragraphs of the chapter. Smart meters were installed primarily so that the utility can collect more information about energy usage; they were also designed to reduce power automatically in case of an extreme emergency.
PART III: ALTERNATIVE ENERGY
Two major issues the author identifies related to energy are energy security and climate change. In Part III, the author devotes chapters to describing and discussing alternative energy sources, noting that the “alternative energy field is wide, technically complex, and full of uncertainties”. He points to a table of data showing the cost of producing a kilowatt-hour of electricity using various methods (see Table III.1): coal, natural gas, nuclear, wind, solar PV, solar thermal, geothermal, biomass, and hydro. Some of these general types are further broken down into specific technologies. The table was published by the US Energy Information Administration in 2011. The author notes two caveats:
1) The data assume that the cost of capital is 7.4%
2) The data assume a carbon emission trading cost for coal and natural gas of about $15/ton.
The table shows that natural gas appears to be the cheapest way to provide a kilowatt-hour of electricity. It also has the advantage of producing less greenhouse gas than coal for equal energy produced: half of natural gas’s energy comes from carbon combining with oxygen to form carbon dioxide and the other half from hydrogen combining with oxygen to form water.
Part III – Chapter 8: Solar Surge
The author starts off this chapter by predicting that the price of solar panels will eventually go down but that installation and maintenance will still cost the consumer. On rainy days, there has to be another alternative.
First some physics about sunlight:
Sunlight delivers about a kilowatt of power per square meter onto the surface of the earth. This is equivalent to 10 100-watt bulbs.
Could solar power drive a car? With 2 square meters of solar cells at 42% efficiency (the best ATTOW), the 2 kilowatts of incident sunlight yields 840 watts, equivalent to about 1.1 horsepower. Typical cars require 10-20 horsepower while cruising on the freeway and 40-150 horsepower for acceleration.
A square mile of solar panels receives about 2.6 gigawatts of sunlight, which drops to roughly a gigawatt of electricity at 42% efficiency. The average output is lower still, because average solar power is only 25% of the peak (about 250 watts per square meter); see the sketch below.
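A short sketch (my own) of the sunlight arithmetic above, using the 1 kilowatt per square meter peak, the 42% best-case efficiency, and the 25% average-to-peak ratio from these notes:

```python
# Sunlight arithmetic: a car-sized panel and a square mile of panels.
SOLAR_PEAK_W_PER_M2 = 1_000     # sunlight power per square meter at the surface
EFFICIENCY = 0.42               # best cells ATTOW
AVG_TO_PEAK = 0.25              # average solar power is ~25% of peak
WATTS_PER_HP = 746
M2_PER_SQ_MILE = 2.59e6

car_watts = 2 * SOLAR_PEAK_W_PER_M2 * EFFICIENCY
print(f"car (2 m^2): {car_watts:.0f} W = {car_watts / WATTS_PER_HP:.1f} hp")   # 840 W, ~1.1 hp

incident_gw = M2_PER_SQ_MILE * SOLAR_PEAK_W_PER_M2 / 1e9
peak_gw = incident_gw * EFFICIENCY
avg_gw = peak_gw * AVG_TO_PEAK
print(f"square mile: {incident_gw:.1f} GW incident, {peak_gw:.1f} GW peak, {avg_gw:.2f} GW average")
# ~2.6 GW incident, ~1.1 GW peak electric, ~0.27 GW average
```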
Solar thermal is a type of solar energy source in which sunlight is focused, the heat collected, and the heat used to boil water into steam that runs a turbine. In Table III.1 this energy source is expensive, at 25.9 cents per kilowatt-hour. A solar thermal power plant in California, consisting of a tower toward which 24,000 moving mirrors direct sunlight, can generate 5 megawatts, about 0.5% of a conventional gas, coal, or nuclear power plant. Because of the many moving parts, this type requires a lot of maintenance. Another solar thermal design uses a solar trough to focus the light, an optical arrangement that avoids having to repoint; this type has fewer moving parts. Spain is the biggest user of solar thermal, generating up to 4 gigawatts, or 3% of its energy use, by the end of 2010. The construction of these plants, however, depends on government subsidy.
Disadvantages:
· Require sunny days although the heat can be stored in the hot salt
· Require subsidies
· High cost
· Maintenance
Advantages
· The hot salt can be stored for later use
· High efficiency, about 50%, in producing electricity, thanks to the extreme temperatures reached with focused sunlight. The efficiency of the trough design is not as high, because there is less focusing ability and the heated liquid has to flow.
Photovoltaic cells
Solar cells, or PV cells, use absorbed sunlight to produce electricity via the photoelectric effect. When sunlight strikes the cell, an electron is ejected from an atom and travels, carrying some of the photon’s energy, to an electrode and a wire, creating an electric current. ATTOW, reasonably priced cells convert only about 10% of the photons’ energy into electricity, but this can go up to 42% for the most expensive cells.
In 2011, the cost of PV cells dropped to $1 per watt, down from $7 per watt a few years earlier. This, however, is per PEAK watt. Average output is only about 1/4 of peak once the varying angle of the sun and its absence at night are considered, and about 1/8 of peak once overcast days are accounted for. The author shows a sample calculation of the return and payback time for solar cells. They also need other electronics, such as an inverter, to run household appliances, plus optional batteries, and they require maintenance. He considers them not yet a paying proposition (“no profit”), and they are also heavily subsidized by the government.
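A rough sketch (my own) of why the economics are marginal at $1 per peak watt. The 1/8 derating comes from the text; the $0.10 per kWh value of the electricity is the power-plant retail figure quoted earlier in these notes, and the calculation ignores inverters, installation, maintenance, and subsidies, which the author discusses separately.

```python
# Value of one peak watt of PV per year, after derating peak to average output.
cost_per_peak_watt = 1.00      # $ per peak watt (2011 figure quoted above)
derate = 1 / 8                 # peak -> average, including night and overcast days
electricity_price = 0.10       # $ per kWh, assumed value of the output

kwh_per_year = 1 * derate * 8760 / 1000          # ~1.1 kWh per year per peak watt
value_per_year = kwh_per_year * electricity_price

print(f"energy per peak watt: {kwh_per_year:.2f} kWh/yr")
print(f"value per peak watt: ${value_per_year:.2f}/yr")                                      # ~$0.11/yr
print(f"years to recover the cell cost alone: {cost_per_peak_watt / value_per_year:.0f}")    # ~9 years
```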
There are many competing technologies for PV. The author highlights some of them below:
Silicon
Silicon crystals were the original material used in the first solar cells
They have gone down in price from $5 to $1/watt.
Silicon itself is cheap but the cost of purifying it is not.
Renewable energy regulations enable competition in the market.
The largest manufacturer is in China (Suntech Power), producing cells that have a 15.7% efficiency and a capacity of 1 gigawatt a year. A close second is a US company (First Solar).
1 gigawatt of solar cells (delivering about 1/8 of that, on average) is small compared to the 50 gigawatts of coal plants being built in China every year.
Cadmium telluride (CdTe)
A layer of CdTe 3 microns thick (1/10th of human hair) can absorb sunlight and produce electricity with a 15% efficiency or more.
CdTe can be manufactured as very thin flexible sheets that are not fragile like silicon crystals (30 times thicker).
This is the material used by First Solar who, ATTOW, has been producing over 1 gigawatt of solar cells each year at 73 cents per installed watt.
Tellurium is produced at about 800 tons a year as a by-product of copper mining and there are worries that this source might run out soon. 1 gigawatt of solar cells takes about 100 tons of tellurium although increased demand may spur exploration and discovery of more deposits.
There is concern about cadmium’s toxicity which may be released in the event of a fire although the author does not think this is likely.
Copper indium gallium selenide (CIGS)
Like CdTe, CIGS can be produced in very thin sheets of 3-4 microns and has a good capacity for absorbing sunlight and producing electricity.
Don’t contain any material considered toxic.
Indium is in short supply because it is used in many electrical applications such as indium tin oxide, a transparent electrical conductor, used in TV’s, computers, and game boxes.
CIGS is the primary material used in solar manufacturing by the San Jose, CA based company Nanosolar, which produces about 640 megawatts of solar cells. Their efficiency is only 10%, although efficiencies as high as 20% have been reached in the lab.
Nanosolar and thin-film companies have been negatively impacted by the sudden drop in price of Chinese silicon solar cells (15-fold between 2006 and 2010).
Multijunction Cells
Typically made of gallium, germanium, indium, and other metals.
They are assembled in multiple layers, each one designed to absorb a wavelength range in the solar spectrum achieving efficiencies as high as 42%.
Very expensive to make, about $500 for a square centimeter. Using a PV concentrator, however, one can concentrate sunlight onto a smaller piece for a gain in efficiency, 2-4 times higher than competing cells; requires a thermal conductor to carry away the heat.
They have been used in the Mars Rover.
Solar Cell Summary
“The solar field is intensely competitive and developing fast. Prices are dropping so rapidly that the winners are likely to be decided by criteria other than solar-cell price, including cost of installation, cost of maintenance, cost of conversion to household voltages, lifetime of cells, and efficiency.”
Part III – Chapter 9: Wind
Wind turbines are designed with huge blades and great height to take advantage of the stronger winds at higher elevations; the author notes that wind speeds at 200 feet are typically twice those at 20 feet. Wind power increases as the cube of the wind velocity: doubling the wind speed gives 8 times the power. (The kinetic energy of the wind is just ½mv², but the power is proportional to the energy times v, hence the cubic dependence.) The blades are long so that they sweep a large area: a blade 63 meters long sweeps an area of about 12,000 square meters. At a wind speed of 20 mph, the power passing through that area is about 10 megawatts; the author’s formula is watts per square meter = (speed in mph)³ / 10. Because the blades spin fast, just 3 blades are enough to take more than half the energy of the wind blowing through the circle they sweep. Betz’s law limits the energy a turbine can extract to 59% of the wind’s energy, as long as no other turbines are nearby (turbines are spaced 5-10 blade lengths apart). So the 10 megawatts calculated above reduces to 5.9 megawatts, the maximum power that can be extracted. A home wind turbine sweeping 4 square meters in an average wind of 5 mph can generate only about 29 watts (using Betz’s law); see the sketch below. A solar cell of similar area can generate 600 watts at peak and average 75 watts over cloudy days and 24 hours.
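A sketch (my own) that reproduces the turbine numbers above from the author's rule of thumb, power density ~ (speed in mph)^3 / 10 watts per square meter of swept area, capped by Betz's law at 59%:

```python
import math

# Author's rule of thumb: watts per square meter of swept area ~ (speed in mph)**3 / 10,
# with Betz's law capping the extractable fraction at 59%.
BETZ_LIMIT = 0.59

def wind_power_watts(swept_area_m2, speed_mph, apply_betz=True):
    power = swept_area_m2 * speed_mph ** 3 / 10
    return power * BETZ_LIMIT if apply_betz else power

big_area = math.pi * 63 ** 2                     # 63 m blades sweep ~12,000 m^2
print(f"swept area: {big_area:,.0f} m^2")
print(f"wind power at 20 mph: {wind_power_watts(big_area, 20, apply_betz=False) / 1e6:.1f} MW")  # ~10 MW
print(f"maximum extractable: {wind_power_watts(big_area, 20) / 1e6:.1f} MW")                     # ~5.9 MW

home = wind_power_watts(4, 5)                    # 4 m^2 swept area, 5 mph average wind
print(f"home turbine: {home:.1f} W")             # ~29.5 W, i.e. the ~29 W quoted above
```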
Wind power capacity has been doubling every 3 years, as turbines are relatively inexpensive to build and don’t require fuel. The US has built 45 gigawatts’ worth of wind turbine farms (2.3% of US electric power generation). China’s anticipated capacity was 55 gigawatts at the end of 2011. See Figure III.8. Wind can produce electricity relatively cheaply, at 9.7 cents per kilowatt-hour.
The last few paragraphs are about issues that have been brought up about wind power. The author addresses each one of these:
1) Intermittency: a large grid of wind farms improves the reliability of pulling power from wind, stabilizing output when the wind stops in certain areas. Back-ups can also be employed, e.g., storage batteries and emergency generators.
2) Aesthetics are also an issue for some people.
3) Bird deaths by hitting turbines are small relative to bird deaths due to hitting windows and tall structures. Modern turbines are usually sited away from migratory paths.
4) Because the strongest winds occur in remote areas, there is a concern about transporting energy from wind farms sited too far away from population centers. The current grid loses about 7% in electrical energy due to transport.
Part III – Chapter 10: Energy Storage
In this chapter, the author, as promised, tackles energy storage options, especially for solar and wind energy.
Batteries
ATTOW, the most common storage batteries sold with solar power installations are lead-acid batteries. These batteries do not have a high energy density, but they are highly efficient, returning 80-90% of the energy pumped into them. Four car batteries weigh 250 pounds and can provide 5 kilowatt-hours of electricity, enough to power a small home for 5 hours. To contrast that with the energy density of gasoline: 250 pounds of gasoline contain 1,320 kilowatt-hours of heat energy, and even at 20% efficiency a generator can still provide about 50 times the energy of an equal weight of batteries.
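Checking that 50-fold figure with the numbers in the paragraph above (my own arithmetic):

```python
# Lead-acid batteries vs. an equal weight of gasoline run through a generator.
battery_kwh = 5             # four car batteries, ~250 pounds
gasoline_heat_kwh = 1_320   # heat content of 250 pounds of gasoline
generator_efficiency = 0.20

gasoline_electric_kwh = gasoline_heat_kwh * generator_efficiency
print(f"electricity from 250 lb of gasoline: {gasoline_electric_kwh:.0f} kWh")            # ~264 kWh
print(f"advantage over 250 lb of batteries: {gasoline_electric_kwh / battery_kwh:.0f}x")  # ~53x, roughly the 50x quoted
```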
The author believes, however, that the lead-acid battery is not the obvious choice for solar and wind power; he believes the sodium-sulfur battery is a better option. The largest battery of this type is “Bob” in Presidio, Texas. Used as an emergency back-up, Bob can provide up to 4 megawatts, enough to power 4,000 homes for 8 hours. Sodium-sulfur batteries are also used for power leveling, keeping power delivery constant in case of generator failures. They have the advantage of a low price per charge-discharge cycle: they can be recharged 4,500 times at 80% discharge, versus about 500 times for both lead-acid and lithium-ion batteries.
Writing in 2012, Muller did not anticipate that Elon Musk’s Tesla would come out with a home battery system in 2015 [my own note]. Here is what he had to say then about lithium-ion batteries: “I expect that lithium-ion batteries will never be used for large-scale energy storage; they are too expensive. Lithium costs 40 times more per pound than sodium and 10 times more per atom – a more relevant measure for batteries. With a 9-fold recharge advantage and a 10-fold cost-per-atom advantage, sodium-sulfur has a 90-fold advantage over lithium-ion.” A disadvantage of sodium-sulfur batteries is that they cannot be scaled down and are not suitable for a wide range of applications; they have to be kept at a temperature of around 350 C and contain liquid sodium. A Japanese company is developing one that can operate below 100 C.
Some interesting notes and language from the section on The Physics and Chemistry of Batteries
Metals and electrolytes are the fundamental components of batteries. Metals allow electron flow while electrolytes allow ion flow but not electron flow.
In lead-acid batteries, lead and its compounds comprise the metals and an aqueous acid solution acts as the electrolyte.
The Handbook of Battery Materials lists all the known metals and electrolytes used in batteries.
One of the challenges with batteries is making them rechargeable. In recharging a battery, a generator is used to force the electrons to flow the other way; their negative charges attract the positive ions back through the electrolyte for re-deposition on the electrode. A big problem is ensuring that the ions re-deposit in their original arrangement; often they don’t, forming dendrites that eventually make the battery unusable.
Typical recharging cycles are in the hundreds; sodium-sulfur batteries can be recharged thousands of times without failure.
The Future of Batteries
NiCad batteries had memory issues: unless they were discharged completely, they would “remember” a reduced capacity instead of going back to a full charge.
NiMH batteries don’t have this problem. They are still being used in Priuses.
Lithium-ion batteries are light-weight and have high energy density.
Lithium-polymer batteries can be made really thin and are useful for small electronic gadgets like cell phones and e-book readers.
The author is optimistic about the future of batteries. The market for the newer, more expensive batteries was sustained because they were useful in even more expensive gadgets such as laptops. The focus of research is on rechargeability and safety. The author notes, however, that engineering development for batteries is linear, not exponential: improvements will come, but not at the fast pace of the past.
Bottled Wind: Compressed Air Energy Storage (CAES)
In this technology, already used in confined spaces with no ventilation such as mines, air is compressed to many times atmospheric pressure (200 atm is a typical figure), storing the energy expended by a motor-driven pump. The energy is released when the compressed air is allowed to expand and run a turbine. One disadvantage is the weight of the tank, which is typically about 20 times the weight of the air (or 5 times for a fiber-composite tank). Another issue is that air heats up when compressed (up to 1,370 C at 200 atm), so there must be a way to draw the heat away.
There are a few places using CAES, one in Germany and another in Alabama. A plant planned in Ohio can deliver up to 2.7 gigawatts. With more advanced systems of reusing the generated heat, the expected efficiency can be up to 80%, comparable to that of batteries.
Flywheels
Energy is stored by using a motor to spin a heavy wheel. When a load is added, a generator for instance, the rotation slows as the kinetic energy is converted to electricity. One advantage of flywheels is their ability to condition energy and smooth out the power. The Bevatron atom smasher in Berkeley used flywheels of about 10 tons each. Beacon Power installed 200 carbon-fiber-composite flywheels (2,500 pounds each, 10 ft tall and 6 ft in diameter) in Stephentown, NY, moving at 1,500 mph; to reduce air friction, the flywheels spin in a high-vacuum chamber. Each flywheel can store 25 kilowatt-hours of energy, for a total of 5 megawatt-hours. The installation is designed to deliver 20 megawatts and therefore can run for 0.25 hours, or 15 minutes (checked in the sketch below).
Their energy storage density is comparable to that of lithium-ion batteries, about 30 watt-hours per pound. Beacon Power’s current set-up costs $1.39 per kilowatt-hour. The high cost makes the author think that flywheels will continue to be used to condition power but not for large-scale energy storage.
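The storage and runtime figures for the Beacon Power installation, checked with the numbers above (my own arithmetic):

```python
# Beacon Power flywheel farm: total storage and runtime at rated power.
n_flywheels = 200
kwh_each = 25
power_mw = 20

total_mwh = n_flywheels * kwh_each / 1000
runtime_min = total_mwh / power_mw * 60
print(f"storage: {total_mwh:.0f} MWh; runtime at {power_mw} MW: {runtime_min:.0f} minutes")  # 5 MWh, 15 minutes
```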
Supercapacitors
Capacitors are composed of two metal plates that are given opposite charges and separated by an electrical insulator. They can store energy for longer periods of time than batteries can. The higher the charge, the more energy is stored, but also the higher the voltage; very high voltages can cause a spark and destroy the capacitor. If the insulator is thin, more energy per unit volume can be stored without the high voltages. One advantage of capacitors is that they can deliver energy very quickly, and they don’t degrade over time because they don’t depend on chemical reactions.
Newly developed supercapacitors can store as much as 14 watt-hours per pound, about a third of the energy of lithium-ion batteries of similar weight, yet they cost 3 times as much. Supercapacitors are probably best used in conjunction with a battery, providing quick bursts of power that batteries can’t; they can also improve the efficiency of regenerative braking by absorbing and transferring energy at a faster rate.
Hydrogen and fuel cells
A fuel cell is a battery that does not need to be recharged because the chemical reactants are added as “fuel”. In a hydrogen fuel cell, hydrogen and air are pumped in to generate electricity. Efficiency is low, about 25%, and the author is not sure fuel cells will replace batteries or generators.
Natural Gas
In this last section, the author compares what he deems the best energy-storage technology, the sodium-sulfur battery, to a natural gas generator. His calculations show that the sodium-sulfur battery’s capital cost is $5 per deliverable watt, while natural gas costs $1 per deliverable watt. If the cost of the fuel is taken into account, natural gas easily wins over solar or wind as the energy source. Batteries compete only if they are run at a low duty cycle, e.g., 1 hour per day; then the per-watt capital cost goes down to 50 cents. The author concludes that natural gas is hard to beat.
Part III – Chapter 11: The Coming Explosion of Nuclear Power
The author starts off this chapter with a list of key items he thinks are important to know about nuclear energy (he calls it an “executive summary”).
Unlike nuclear bombs that contain highly enriched uranium, nuclear reactors cannot explode because they use low-enriched uranium.
· A nucleus of uranium releases 20 million times more energy than a molecule of TNT. A fission event can initiate a chain reaction: every fission produces 2 or 3 neutrons, each of which can cause another fission that creates 2-3 more neutrons, and so on. A rough calculation shows that, assuming 2 neutrons per fission, after 80 doublings the number of neutrons is about 1000^8, roughly 10^24. The Hiroshima bomb was equivalent to about 13,000 tons of TNT. Since one uranium fission releases 20 million times the energy of one TNT molecule, about 0.00065 tons, or 1.4 pounds, of uranium is needed to produce the same destructive energy (see the sketch after these bullets). However, this amount is less than the critical mass of uranium needed to initiate and sustain an explosion.
· The rapidity of the fission process (less than a millionth of a second) is essential to a nuclear bomb design. If the fissions took as long as a second, pre-detonation would occur: the energy of the first fissions would blow the uranium in the pile apart before the rest could fission.
· Heavy uranium, uranium-238, does not undergo fission in a way that can sustain a chain reaction. Creating a bomb requires nearly 100% uranium-235. Natural uranium is only 0.7% uranium-235, and it is very difficult to separate; the process of uranium enrichment must reach at least 90% U-235 for weapons-grade material.
· Moderators (like carbon and water) can help sustain a chain reaction even in the presence of U-238, which would otherwise absorb too many of the neutrons. With a moderator, the neutrons hit U-238 nuclei with less energy and simply bounce off, so the probability of hitting U-235 is higher than if the neutrons were simply absorbed by U-238. Using an expensive moderator like heavy water allows the use of natural uranium with only 0.7% U-235. Graphite works well too, but it burns (as happened at Chernobyl). With ordinary water as the moderator, the uranium needs to be enriched to only 3-4% U-235. See the chapter for a brief synopsis of why the Chernobyl reactor blew up with TNT-scale explosive force.
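A small sketch (my own) of the chain-reaction and Hiroshima-equivalence arithmetic in the first bullet above. The pound conversion assumes metric tons, and the per-mass step leans on the fact that a TNT molecule and a uranium atom have roughly comparable masses (about 227 vs. 235 atomic mass units).

```python
# Doubling growth of a chain reaction and the Hiroshima-equivalent uranium mass.
neutrons_after_80_doublings = 2 ** 80
print(f"2^80 = {neutrons_after_80_doublings:.1e}")     # ~1.2e24, i.e. about 1000^8

tnt_tons = 13_000            # Hiroshima yield in tons of TNT
energy_ratio = 20e6          # energy of one uranium fission vs. one TNT molecule

uranium_tons = tnt_tons / energy_ratio                 # ~0.00065 tons
uranium_lb = uranium_tons * 2204.6                     # metric tons to pounds
print(f"uranium needed: {uranium_tons:.5f} tons = {uranium_lb:.1f} lb")   # ~1.4 lb
```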
Capital costs for nuclear power plants are high, but the delivered electricity is relatively cheap because fuel and maintenance costs are low. It costs about 6-8 billion dollars to build a 1-gigawatt reactor, about 50% more than a 1-gigawatt coal plant. About 80% of the electricity cost for nuclear power goes to paying back the loan, compared with just 18% for natural gas plants. Nuclear power plants have a very high capacity factor, operating 90% of the time, with downtime only for maintenance; this has raised revenue 1.6 times. The history of nuclear power in the US has gone up and down, marred by the Three Mile Island accident, which caused no deaths, and the Chernobyl accident, which did cause fatalities. Another factor in the lack of new construction is that nuclear is not competitive with much cheaper natural gas.
Small modular reactors (300 megawatts or less) may be the solution to the high capital cost of building a new reactor. They reduce the initial investment, and their modular design allows power capacity to be built up in stages. One such reactor, built by Babcock and Wilcox, has a capacity of 125 megawatts and is designed to be buried underground, operating for 3-4 years without maintenance. Toshiba also makes one that can deliver 30-135 megawatts and uses a sodium coolant; it too is designed to be buried. The liquid sodium metal can be moved by electromagnetic pumps, and if the pumps fail, the reactor materials themselves damp the chain reaction while the sodium draws away the excess heat by natural convection. One consideration is that modular reactors don’t use moderators, so the chain reaction runs on fast neutrons, which these smaller reactors require; for a faster chain reaction they need more enriched uranium, at 19.9% U-235 (just below the IAEA’s 20% cutoff for low-enriched uranium). See the chapter for a detailed description of the safety design that prevents explosions in modular reactors. “Note that the safety is not based on an engineering system that requires maintenance…The safety is intrinsic to the physics of high temperatures. That’s why these reactors are sometimes called intrinsically safe.” Another factor that makes them intrinsically safe is that when “the fuel heats up, both the neutrons and the uranium atoms shake more; their instantaneous velocity is higher. U-238 has an important property in its nucleus; it becomes more efficient at absorbing neutrons when the relative velocity is increased”. A further safety feature is the sodium coolant itself, which expands on heating, becomes less dense, and rises to the top away from the core as cooler liquid replaces it near the core; thus cooling does not depend on pumps or engineered devices. There is also less plutonium waste, because any plutonium generated when neutrons stick to U-238 also fissions. Toward the end of this section, the author addresses in detail concerns about nefarious persons getting hold of the enriched uranium in these modular reactors. At the end, the author once again points to cheap natural gas as the main obstacle to increasing nuclear power’s contribution in the US, unless there is a big push for energy sources that do not produce carbon dioxide.
There is enough economically recoverable uranium to last 9,000 years at current usage if low-grade uranium ore is used; the cost of the ore contributes only about 0.2 cents per kilowatt-hour to the electricity.
The death toll from the Fukushima nuclear accident and meltdown that followed the 2011 earthquake and tsunami is estimated at only about 100 out of the total of roughly 15,000 deaths, and maybe fewer, since thyroid cancer is readily treatable.
Nuclear waste storage is technically feasible but suffers from bad public perception and political posturing. In the US, nuclear waste contains plutonium (in France it does not, because the plutonium is extracted). Here are the reasons the author thinks nuclear waste is not a serious problem: plutonium has a long half-life of 24,000 years and thus does not contribute much to the radioactivity of the waste, and it is highly insoluble in water, so very little will end up in groundwater. The greatest danger from plutonium is inhalation; it takes only 0.00008 g to cause one cancer (versus 0.5 g if dissolved in water). The author offers the following perspective on the danger of inhalation: botulinum toxin (used in Botox) has an LD50 of 0.000000003 g if inhaled, making it 27,000 times more toxic than plutonium. To address the radioactivity of nuclear waste, the author discusses Figure III.12 in the book, which shows that the radioactivity of nuclear waste, compared with that of the uranium when it was first mined, is a rapidly decreasing function of time. Thus, to the author, nuclear waste storage is not a difficult technical problem. He offers what he thinks are three reasons people are so concerned about nuclear waste: “most people consider radioactivity an unknown and invisible threat, people don’t recognize that they are surrounded by a level of natural radioactivity that is usually much higher than the dose that comes from a nuclear accident, and the threat of plutonium has been so hyped that many people consider its presence to be unacceptable at any level”.
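A quick arithmetic check of that inhalation comparison (my sketch, using only the numbers quoted above):

plutonium_lethal_inhaled = 0.00008       # grams to cause one cancer, per the chapter
botulinum_lethal_inhaled = 0.000000003   # grams, LD50 if inhaled, per the chapter
print(plutonium_lethal_inhaled / botulinum_lethal_inhaled)   # ~26,700, i.e. the "27,000 times" figure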
Construction of new nuclear power plants will be “exploding” in the next several years in places like China and France; Japan is helping build some of these even as some of their own nuclear reactors are taken offline. ATTOW:
· there were 31 US states with operating nuclear power plants, and in 7 of these, nuclear supplied 50% of the electricity
· in France, 75% of their electricity is supplied by nuclear
· in China, 27 new plants were being built, 50 planned and 110 proposed
· the UK has also come up with some proposed sites
In China, most of the coal is inland and it has to rely on imports from Australia to supply the coastal areas. For every 20,000 tons of coal shipped, only 1 ton of uranium needs to be shipped for the same energy, even less for the 19.9% enriched uranium used in modular reactors.
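As a rough sanity check of that shipping ratio (my own sketch; the burnup, enrichment overhead, and coal energy content below are typical values I am assuming, not numbers from the book):

burnup_GWd_per_ton = 45                     # assumed thermal burnup per ton of enriched reactor fuel
joules_per_GWd = 1e9 * 86400                # one gigawatt-day of heat, in joules
enriched_energy_per_kg = burnup_GWd_per_ton * joules_per_GWd / 1000.0
natural_U_energy_per_kg = enriched_energy_per_kg / 8.0   # assume ~8 tons of natural uranium per ton of 3-4% fuel
coal_energy_per_kg = 29e6                   # assumed ~29 MJ/kg for good coal
print(natural_U_energy_per_kg / coal_energy_per_kg)      # ~17,000 - the same order as the 20,000:1 in the text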
Part III – Chapter 12: Fusion
The author devotes this chapter to a promising energy technology that has been in development for decades: fusion. Fusion is a promising source of energy because it can be fueled by the most abundant element in the ocean (by number of atoms), hydrogen. Fusion can also be fueled by deuterium which, while only about 1/6000 as abundant as ordinary hydrogen, can be inexpensively separated from it (the next-heavier isotope, tritium, is too rare but can be generated). The optimism about fusion as an energy source has been around for decades. Fusion has actually been achieved, in the form of the hydrogen bomb, in 1953; as a safe source of energy, however, a more controlled process needs to be developed. Some of the advantages of fusion listed by the author include the abundance of the primary fuel, hydrogen, and the relative lack of radioactive waste. The author points out, however, that the neutrons produced in a typical fusion reaction (deuterium + tritium → helium + neutron) can stick to materials and make them radioactive, albeit at a level smaller than the radioactivity in a uranium fission plant. Because tritium is quite rare (16 pounds in all the world’s oceans), some fusion reactors are being designed so that the product neutrons are used to breed tritium by bombarding lithium atoms. In one other fusion reaction, hydrogen + boron → 3 helium + gamma ray, no neutrons are formed; the gamma rays don’t produce any significant radioactivity, just a lot of energy.
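For a sense of scale (my own sketch; the 17.6 MeV released per deuterium-tritium fusion is a standard physics value, not a number quoted in the book):

MeV_to_joules = 1.602e-13
energy_per_reaction = 17.6 * MeV_to_joules        # joules released per D-T fusion
amu = 1.66e-27                                    # kilograms per atomic mass unit
fuel_mass_per_reaction = 5 * amu                  # one deuterium (2 u) plus one tritium (3 u)
print(energy_per_reaction / fuel_mass_per_reaction)   # ~3.4e14 J per kg of fuel, roughly ten million times coal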
In the next few sections, the author discusses 5 of the most talked-about proposals for developing fusion as an energy source.
Tokamak
The tokamak, whose name is a Russian acronym for “toroidal chamber with magnetic coils,” was invented in Russia in the 1950s and has dominated the attention and research effort of the last 60 years of fusion exploration. In a tokamak the fusion is thermonuclear: extremely high temperatures are used to overcome the electrostatic repulsion between hydrogen nuclei so that they get close enough to fuse through the short-range strong nuclear force. The temperature must be high enough that the kinetic energy of the nuclei exceeds the repulsive energy. This is the same type of fusion that occurs in the sun, requiring core temperatures of about 15 million degrees Celsius (the surface of the sun is only 6,000 degrees Celsius). At this temperature the power generation in the sun’s core is only 0.3 watt per liter; human bodies can generate 1 watt per liter. Producing energy rapidly in a tokamak, however, requires about 100 million degrees Celsius. It also requires deuterium and tritium fuel for easy ignition, with their extra neutrons increasing the strong nuclear attraction and the rate at which they fuse. Because of the high temperatures required, the reacting particles are held in place by magnetic confinement. The most current development in tokamak technology is ITER (International Thermonuclear Experimental Reactor), a 60-foot reactor aiming to produce 500 megawatts for 400 seconds or more, 10 times more power than is needed to run it. Its construction cost has been rising in the last few years, up to 15 billion dollars, so there are questions as to whether it will be competitive. The first test of hot gases is scheduled for 2019, followed by running hydrogen fuel in 2026 and project completion in 2038. One objection to ITER comes from Greenpeace, which argues that the expense of the project does not warrant a technology that might come too late to help stem climate change, and that the 15 billion dollars should be spent instead on solar, wind, and other already proven renewables.
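Reading off the ITER numbers quoted above (my arithmetic, not the author’s): a tenfold power gain on 500 megawatts implies roughly 50 megawatts of heating power in, and a 400-second pulse at 500 megawatts is about 200 gigajoules of fusion energy.

fusion_power = 500e6            # watts, the ITER target output
gain = 10                       # "10 times more power than is needed to run it"
pulse_seconds = 400
print(fusion_power / gain)              # ~5e7 W, i.e. ~50 MW of heating power needed
print(fusion_power * pulse_seconds)     # ~2e11 J (~200 GJ) of fusion energy per pulse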
NIF, the National Ignition Facility
NIF is located at the Lawrence Livermore National Laboratory. The fusion technology being developed there uses lasers to heat a small amount of hydrogen to very high temperatures (tens of millions of degrees) and ignite the fusion of deuterium and tritium. The author expects this design to be the first to reach the breakeven point in controlled fusion. Another name for this approach is inertial confinement fusion: because ignition happens so rapidly (within a millionth of a second), the fuel’s own inertia is enough to confine it even at those high temperatures. 192 synchronized lasers deliver a huge amount of energy in bursts of about a billionth of a second, generating 500 terawatts of power. This energy heats a surrounding shell, causing it to emit x-rays; the x-rays heat the fuel and produce a shock wave that compresses the hydrogen inside. See the notes section for a detailed cost summary provided by the author. The critical number that will primarily determine its competitiveness is the cost of the hydrogen targets, which must be below a dollar each. The advanced system that Livermore scientists are developing goes by the acronym LIFE, for laser inertial fusion energy.
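Multiplying the round numbers quoted above (my sketch, not a figure from the book), 500 terawatts sustained for about a billionth of a second corresponds to roughly half a megajoule of laser energy per shot:

peak_power = 500e12       # watts
pulse_length = 1e-9       # seconds, "a billionth of a second"
print(peak_power * pulse_length)    # ~5e5 J, i.e. roughly 0.5 megajoule delivered per shot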
Beam Fusion
In beam fusion, nuclei are accelerated to high speed in a beam and collided with target atoms so that they fuse. This technique is already used in commercial neutron generators, in which a beam of deuterons (deuterium nuclei) is accelerated by electric fields and collides with tritium-rich targets; the tritium fuses with the deuterium, producing helium and neutrons. Neutron generators are used to characterize rock structure deep underground for oil drilling, in a process called oil well logging. Other commercial applications include coal analysis in factories, cement process control, wall measurements, etc. Beam fusion is currently not a viable fusion energy source because the energy input is higher than the energy output.
A company called Tri-Alpha is working on an undisclosed fusion technology that makes use of the reaction between hydrogen and boron, which produces three alpha particles (helium nuclei) and no neutrons (aneutronic). The technology involves accelerating the reactant particles in circular paths, with the hydrogen and boron presumably confined by magnetic fields and the electrons, which are there to keep the plasma neutral, confined by electric fields. Because the product particles are charged, this technique has the potential to convert the energy directly into electricity rather than heat.
Muon Fusion
A muon is a small, heavy particle that lives for about 2 millionths of a second before breaking apart into an electron or a positron plus neutrinos. In this type of fusion, discovered in 1956 in a cold liquid-hydrogen chamber, a negatively charged muon (207 times heavier than an electron) binds to a proton in a hydrogen atom, ejecting the electron. The neutral muon-proton pair can then collide with a deuteron and fuse, releasing energy and creating a helium nucleus. Even though hydrogen contains very little deuterium, the muon-proton pair takes only a few billionths of a second to find a deuteron to fuse with; a fusion reactor would use pure deuterium, so the reaction would be even faster. As with other fusion technologies, to be commercially viable the energy required to create the muon must be less than the energy produced. The trick to sustaining muon fusion is to get each muon to catalyze about 350 fusions before it is lost (muons tend to stick to the helium nucleus). The author offers his own suggestions for techniques that could make this process viable (see chapter): decreasing the energy input, getting the muon to catalyze more fusions before it sticks to helium, producing muons at a much lower energy cost, and so on. One company, Star Scientific, has claimed that it has developed a way to produce muons with less energy. The author himself has worked on aspects of this field alongside the original discoverer, Luis Alvarez.
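A rough look at the break-even arithmetic (my sketch; the 17.6 MeV per deuterium-tritium fusion is a standard value I am assuming, not one given in the chapter): if each muon catalyzes 350 fusions, it pays back roughly 6 GeV, so creating and handling each muon must cost less than that.

fusions_per_muon = 350          # the target quoted in the chapter
MeV_per_fusion = 17.6           # assumed standard D-T yield
print(fusions_per_muon * MeV_per_fusion / 1000.0)   # ~6.2 GeV returned per muon; muon production must cost less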
Cold Fusion
The author devotes this last section to the claim of cold fusion achieved in 1989, “verified” by scientists from top institutions, only to fizzle out as the consensus evolved to declare the results unverifiable and the methods questionable.
Part III – Chapter 13: Biofuels
In this chapter, the author discusses examples of biofuels and warns right away that some of what he is about to say may offend people passionate about biofuels. Right off the bat, he lists some of the contentious conclusions he has arrived at: “corn ethanol should not count as a biofuel as it does not reduce greenhouse emission; biodegradable and recycling are overhyped from a global warming perspective; ethanol from cellulose offers the best hope for a significant biofuel component to solving the energy problems; and the main value of biofuels is not reducing global warming but in increasing energy security.”
Ethanol from Corn
The author gives the following reasons for why ethanol from corn should not be considered a biofuel:
· It uses a lot of fertilizer
· It takes a lot of oil and gasoline to run farm machinery for growing corn
· The sugar it yields per acre, once fermented into ethanol, is not enough to make the process carbon neutral or to give a net reduction in carbon dioxide.
Using corn to make ethanol has also raised prices for corn-based food. Corn ethanol does have the advantage of providing another domestic source of transportation fuel and contributing to energy security (the author estimates it offsets about 3% of US consumption and 5% of US imports), despite providing only about 2/3 as much energy as gasoline on a per-gallon basis.
Biodegradable is bad?
From the global warming point of view, biodegradable materials are “bad” because they decompose and release carbon dioxide. The author concedes, however, that from an aesthetic and animal-welfare point of view (reducing the plastic that ends up in our oceans, kills animals, and clutters the landscape), biodegradability does have some benefits.
Pseudo-biofuels
The author does not consider waste cooking oil as a biofuel. He argues that using waste oil as fuel adds carbon dioxide to the atmosphere and is not better than petroleum.
He also considers recycling paper bad for global warming: burying the paper instead would sequester its carbon (letting it biodegrade releases carbon dioxide into the air), and if paper is not recycled, more trees have to be grown to make new paper, which removes carbon dioxide from the atmosphere.
The Altamont landfill in California generates 13,000 gallons of liquefied natural gas, which it uses to run its waste and recycling trucks. This captures 93% of the landfill’s methane; the other 7% leaks into the atmosphere, where it acts as a potent greenhouse gas.
The author uses a somewhat tongue-in-cheek tone in these sections.
Ethanol from Cellulose
Cellulose, normally indigestible by humans, can be converted to the liquid fuel ethanol by fermentation using enzymes from microorganisms, fungi, or yeast. The top candidate crops are switchgrass and miscanthus, a grass that grows over 11 feet tall and can yield three crops per year. Miscanthus is projected to produce, in theory, 1150 gallons of ethanol per acre, compared to corn at only 440 gallons per acre. Cellulose provides about 1/3 the energy of an equal weight of gasoline. The author estimates that replacing the 1 billion tons of oil the US uses each year would require growing miscanthus over an area 560 miles on a side (about 6 times the size of Iowa), assuming no energy loss in the conversion.
Ethanol from algae
The author thinks that algae have even better potential for producing fuels. The “right kind of algae” could produce oil that can be used as diesel without expensive conversion steps in between. Algae are very efficient at producing biomass from sunlight: every cell produces biomass, compared to just the leaf surface cells in grasses. Proponents claim that algae can “produce ten times the energy per acre that Miscanthus can produce”. Commercial ventures lead the research and development of this oil-producing technology; genetic engineering, primarily by inducing mutations, is the technique being used to find the “right kind of algae”. Algae production, however, can be very sensitive to environmental factors and biological contamination, whereas growing miscanthus is less vulnerable to extreme weather and invasive species.
In the end, the author does not put a high value on bioethanol or other biofuels as a way of limiting the greenhouse effect: even if biofuel replaced gasoline, there would be only a limited reduction in the predicted temperature rise. In terms of energy security, bioethanol may come too late and may be too expensive to compete with cheaper fuels like compressed natural gas, synfuel, or shale gas.
Part III – Chapter 14: Synfuel and High-Tech Fossil Fuels
In the beginning of this chapter, the author reiterates that while the US is running low on oil, this is not the case for natural gas and coal. And, as he points out, while this helps energy security, it is not good for greenhouse emissions. A large supply of natural gas and coal does not help energy sectors that require liquid fuels, especially transportation needs. Transportation infrastructure is built around using oil. Shale oil and shale gas are also fossil fuel alternatives discussed in a previous chapter. The author discusses some other “unconventional” sources of fossil fuel in this chapter.
Synfuel
The Fischer-Tropsch chemical process for converting coal to oil was developed in Germany and used there extensively during World War II. This process, referred to today as CTL (coal to liquid), has been used by the company Sasol in South Africa to produce oil since the embargo years of the apartheid era. In 2011, Sasol announced plans to build a gas-to-liquid (GTL) plant in Louisiana to produce about 100,000 barrels per day of diesel fuel from natural gas. According to the author, the US is shying away from building more of these plants, even with the glut of cheap natural gas, because of the uncertainty in oil prices: “Saudi Arabia can undercut any threatening technology as long as it has a surplus capacity since it can pump oil for under $3 a barrel”.
George W. Bush signed the Energy Independence and Security Act to reduce the vulnerability of US energy needs to OPEC control of the oil market, which could “emasculate our (military) forces by a simple embargo”. The act provided loan guarantees, tax credits, and subsidies. Before it passed, however, the synfuel provision was cut out because of concern that using coal as the source of oil would increase greenhouse emissions, “trumping” national security concerns. The author puts this in the following quantitative perspective: “Recall that the US automobile has contributed about 1/40 Celsius degrees to global warming. In the next 50 years, assuming we adopt reasonable automobile emission standards, we should be able to limit the temperature rise attributable to the US automobile to an additional 1/40 C. A switch to 100% synfuel would boost that to about 1/30 C. The danger of that much rise is what you need to balance against the possible national security needs. In addition, you might want to consider the role that synfuel might play in reducing the balance-of-payments deficit.”
The author predicts growth in the construction of synfuel facilities; subsidies are no longer necessary because of low natural gas prices. Chevron and Sasol have already started a joint GTL venture in Qatar.
Coal Bed Methane
Coal bed methane is methane extracted from deep coal deposits by drilling down and allowing the methane to escape; fracking and horizontal drilling can be used as well. This type of methane is relatively pure, free of hydrogen sulfide and of heavier hydrocarbons like propane and butane, and is nicknamed “sweet gas”.
Coal Bed Gasification
In this process, inspired by a fire ignited by lightning in an Australian coal deposit thousands of years ago, deeply embedded coal is partially burned to extract its energy without having to dig it up and bring it to the surface. The partial combustion produces other fuels, such as carbon monoxide and hydrogen, a mixture called coal gas. “It is the ultimate in remote chemistry.” Another advantage of this process is that the ash stays buried. The coal gas can also be collected as feed gas for the Fischer-Tropsch process and for methanol synthesis. The disadvantages include heat loss, wasted unburned coal, and potential pollution of the water table.
Enhanced Oil Recovery (EOR)
Oil sits under pressure, sparsely distributed in rock pores and cracks, and that pressure pushes only about 20% of it up to the surface on its own. Secondary recovery, in which the oil is flushed out with water, natural gas, or carbon dioxide, can boost this to 40%; injecting carbon dioxide has the added advantage of sequestering it, though the amount is a very small fraction of what needs to be removed from the atmosphere. Enhanced oil recovery methods aim to recover the remaining 60% through the following techniques:
· reducing the oil’s viscosity by heating it, either by injecting steam or by pumping down air or oxygen so that some of the oil burns and heats the rocks
· pumping soap (surfactant) to release the oil from the rocks
· sending down bacteria that can break down the more viscous, longer-chain hydrocarbons
Oil Sands
Canada is third in the world, after Venezuela and Saudi Arabia, in recoverable oil reserves. Most of this oil is in the form of oil sands (or tar sands): heavy crude oil, called bitumen, mixed with clay and sand. Estimates run from a conservative 200 billion barrels to an optimistic 2 trillion barrels (by Shell Oil). Two trillion barrels would be enough to supply the US for 250 years, or the world for 60 years, at current consumption. Objections to exploiting Canada’s oil sands include the ugly open-pit mines left behind (because the oil is largely near the surface), local water pollution, and the large amounts of water required. The recovery process also uses up about 12% of the energy of the oil extracted. The author believes that synfuel from natural gas and, in the long term, shale oil will compete with oil sands.
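A quick check of those supply figures (my arithmetic; the consumption rates are my assumptions, roughly 7 billion barrels per year for the US and about 33 billion for the world):

reserves = 2e12                # barrels, the optimistic oil-sands estimate
us_consumption = 7e9           # barrels per year (assumed, ~19 million barrels per day)
world_consumption = 33e9       # barrels per year (assumed, ~90 million barrels per day)
print(reserves / us_consumption)      # ~285 years, consistent with the "250 years" figure
print(reserves / world_consumption)   # ~60 years for the world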
Part III – Chapter 15: Alternative Alternatives: Hydrogen, Geothermal, Tidal, and Wave Power
The author devotes this chapter to other alternative energy sources that hold so little promise for cost-effectiveness and efficiency that he refers to them as “alternative alternatives”.
Hydrogen
The author thinks that hydrogen automobiles were never a good idea because of the following two disadvantages they share with electric cars:
Hydrogen requires a lot of energy to produce, whether extracted from water by electrolysis or made by reacting methane with water (a process that also releases carbon dioxide), and using the hydrogen as fuel returns only part of that energy input. It is much cheaper to use methane directly as a fuel, by combustion or in a methane fuel cell.
A hydrogen car would need a large tank to hold the larger volume of hydrogen required for the same energy, or it would have a short driving range. Even though hydrogen contains 2.6 times more energy per pound than gasoline, it takes up far more volume: at maximum pressure, it would take about 10 gallons of hydrogen to match the energy of 1 gallon of gasoline in an internal combustion engine, and about 6 gallons per gallon of gasoline in a fuel cell car. Liquefied hydrogen contains 3 times more energy per gallon but requires very low temperatures and specialized storage and delivery. Hydrogen is explosive at concentrations of 4% - 75% in air (natural gas only between 5% and 15%), which adds to the challenges of transport, delivery, and storage. The super-lightweight property of hydrogen does not add value to an automobile, although the author concedes that hydrogen works well for rockets for exactly this reason. Advocates argue that if cars are made lighter, a hydrogen car could reach a 300-mile range, but the author counters that the same weight reduction in an ordinary car could increase its mileage from 35 to 100 mpg. The main advantage of hydrogen is its potential to reduce or eliminate greenhouse emissions, provided the energy used for electrolysis also comes from a low-carbon source (solar, wind, nuclear). But because automobiles are projected to contribute only about 1/40 of a degree Celsius to warming, a switch to hydrogen would benefit energy security more than it would reduce greenhouse warming. The author reiterates that natural gas is still the more competitive alternative; he does not think a profitable market for hydrogen cars will emerge.
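A rough check of those energy-density comparisons (my sketch; the property values are my assumptions, not numbers from the book: hydrogen at ~120 MJ/kg, gasoline at ~44 MJ/kg and ~0.74 kg per liter, compressed hydrogen at ~40 kg per cubic meter at maximum tank pressure):

h2_energy_per_kg = 120e6           # J/kg, assumed heating value of hydrogen
gasoline_energy_per_kg = 44e6      # J/kg, assumed
print(h2_energy_per_kg / gasoline_energy_per_kg)      # ~2.7, close to the chapter's "2.6 times per pound"

h2_density = 40.0                  # kg/m^3 at maximum tank pressure (assumed)
gasoline_density = 740.0           # kg/m^3 (assumed)
h2_energy_per_liter = h2_energy_per_kg * h2_density / 1000.0
gasoline_energy_per_liter = gasoline_energy_per_kg * gasoline_density / 1000.0
print(gasoline_energy_per_liter / h2_energy_per_liter)    # ~7 gallons of hydrogen per gallon of gasoline on raw
                                                           # energy, in the range of the 6-10 quoted once engine
                                                           # efficiency is folded in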
Geothermal
Geothermal energy is a good alternative source, but only in areas where the heat is concentrated enough for efficient conversion to heating and electricity. For example, Iceland gets about 50% of its electrical and heating energy from geothermal sources, and California generates 6% of its electrical power (2.5 gigawatts) from steam in a 30-square-mile area known as the Geysers. Even though the interior of the earth generates about 44 terawatts of heat power (mostly from radioactivity in the Earth’s upper crust), that heat is so diffusely distributed that it averages only 0.1 watt per square meter; in contrast, solar power can deliver up to 1,000 watts per square meter, and an average of 250 watts night and day, north to south. The author’s Carnot efficiency calculation shows only about 9% efficiency for power extraction from low-grade geothermal heat, and “fracking” for this heat is not nearly as cost-effective as fracking for natural gas. The author criticizes a 2007 MIT report on geothermal mining as being “full of both optimism bias and skepticism bias”.
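For reference, the Carnot limit is 1 - Tcold/Thot (temperatures in kelvin). The temperatures below are my own illustrative choice of a low-grade geothermal source, picked to show how a figure near 9% arises; they are not the author’s numbers.

def carnot_efficiency(t_hot_k, t_cold_k):
    # maximum fraction of heat convertible to work between the two temperatures
    return 1.0 - t_cold_k / t_hot_k

t_hot = 330.0    # K (~57 C), an assumed low-grade geothermal source
t_cold = 300.0   # K (~27 C), assumed ambient
print(carnot_efficiency(t_hot, t_cold))    # ~0.09, i.e. about 9%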
Tidal Power
Tidal power is another low-density source of energy, at about 0.1 watt per square meter. Nevertheless, tidal power has been commercialized, most successfully in France across the entrance of the Rance River tidal basin. Two factors contributed to the success of this plant: a huge tidal range of 26 feet and the large amount of water flowing through the dam, generating an average of 100 megawatts of electric power (see picture in chapter) with a peak of 240 megawatts. Resonance effects in the water’s “sloshing” frequency create these huge tides. The plant’s construction loans were fully paid off after 46 years, and it can generate electricity at 1.8 cents per kilowatt-hour. Such huge tides are not common; New York and San Francisco get about 6-foot tides. The barrage, or dam, can also cause substantial environmental damage. In New Zealand, submerged generators extract power from 7-foot tides; the initial capital cost was high, at $3 per watt, but not out of range of other power plants. South Korea has the largest tidal station, with a peak production of 254 megawatts from 18-foot tides. The Bay of Fundy, which has 56-foot tides flowing through a very wide mouth, also has a tidal station, generating about 20 megawatts. Another station has been proposed by the Golden Gate Energy Company in San Francisco. Again, much like geothermal, tidal power is concentrated only in certain areas and so is not a widely available alternative.
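As a sketch of why the Rance site works so well (the basin area below is my assumption for illustration, not a figure from the book; the ~8-meter tidal range is the 26 feet quoted above), the potential energy released per tide by a barrage is roughly rho * g * A * h^2 / 2:

rho = 1025.0             # kg/m^3, seawater
g = 9.8                  # m/s^2
area = 22e6              # m^2, assumed basin area (~22 square kilometers)
h = 8.0                  # m, the ~26-foot tidal range
energy_per_tide = rho * g * area * h**2 / 2     # joules of potential energy in the filled basin
tide_period = 12.4 * 3600                       # seconds between high tides
print(energy_per_tide / tide_period)            # ~1.6e8 W ideal average; the real plant's ~100 MW average sits below this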
Wave Power
The author does not put much stock in this source either, again because of its low power density. He notes that although global wave power amounts to as much as 3 terawatts, that power is very diffuse because wave heights average only about 1 meter. Intercepting 100 meters of these waves would generate only 1 megawatt, compared to 7 megawatts from a single large wind turbine. The Pelamis Wave Energy Converter in Portugal has successfully extracted half the power of the waves, about 5 megawatts per kilometer, but at a very high cost: $7.50 per watt installed, and that does not include maintenance costs.