Sunday, May 31, 2015

Active learning increases student performance in science, engineering, and mathematics

Active learning increases student performance in science, engineering, and mathematics
Scott Freemana, Sarah L. Eddya, Miles McDonougha, Michelle K. Smithb, Nnadozie Okoroafora, Hannah Jordta, and Mary Pat Wenderotha
aDepartment of Biology, University of Washington, Seattle, WA 98195; and bSchool of Biology and Ecology, University of Maine, Orono, ME 04469
Edited* by Bruce Alberts, University of California, San Francisco, CA, and approved April 15, 2014 (received for review October 8, 2013)
PNAS 2014 111 (23) 8410-8415; published ahead of print May 12, 2014, doi:10.1073/pnas.1319030111

In this article, the authors meta-analyzed 225 studies comparing the performance of students in classes taught with active learning methods with that of students in traditional lectures. At the time of publication, this was the largest and most comprehensive meta-analysis of the STEM education literature.

The authors provide the following definitions of the two types of instruction:

Active learning engages students in the process of learning through activities and/or discussion in class, as opposed to passively listening to an expert. It emphasizes higher-order thinking and often involves group work.

The authors used Bligh’s definition of traditional lecturing (see article for reference): “. . .continuous exposition by the teacher.” Under this definition, student activity was assumed to be limited to taking notes and/or asking occasional and unprompted questions of the instructor.

The definition of active learning was derived by coding responses collected from 338 audience members before departmental biology seminars on active learning at colleges and universities across the United States.

The authors wanted to know whether active learning has any effect on student performance. The two metrics used for student performance are (from the supporting information document):
We evaluated performance using two metrics: (i) scores on identical or formally equivalent examinations, concept inventories, or other assessments, and (ii) failure rates—in most cases measured as the percentage of Ds, Fs, and/or withdrawals. These were relevant criteria for failure because students with a D, F, or W in a STEM course are usually barred from receiving credit in the major.

The paper and the supporting information provide the details on:

·         Literature search:
In addition to peer-reviewed resources, the authors also searched gray literature, primarily unpublished dissertations and conference proceedings “for studies that compared student performance in undergraduate STEM courses under traditional lecturing versus active learning”.

·         Literature selection for inclusion in the meta-analysis:
The papers were independently coded using the following criteria:
(i) contrasted traditional lecturing with any active learning intervention, with total class time devoted to each approach not differing by more than 30 min/wk;
(ii) occurred in the context of a regularly scheduled course for undergraduates;
(iii) were largely or solely limited to changes in the conduct of the regularly scheduled class or recitation sessions;
(iv) involved a course in astronomy, biology, chemistry, computer science, engineering, geology, mathematics, natural resources or environmental science, nutrition or food science, physics, psychology, or statistics; and (v) included data on some aspect of student academic performance.

The two coders reviewed and discussed the literature until they reached consensus on the basis of the five criteria above and the additional information below:
i) The five criteria listed above for admission to the study;
ii) Examination equivalence—meaning that the assessment given to students in the lecturing and active learning treatment groups had to be identical, equivalent as judged by at least one third-party observer recruited by the authors of the study in question but blind to the hypothesis being tested, or comprising questions drawn at random from a common test bank;
iii) Student equivalence—specifically whether the experiment was based on randomization or quasirandomization among treatments and, if quasirandom, whether students in the lecture and active learning treatments were statistically indistinguishable in terms of (a) prior general academic performance (usually measured by college GPA at the time of entering the course, Scholastic Aptitude Test, or American College Testing scores), or (b) pretests directly relevant to the topic in question;
iv) Instructor equivalence—meaning whether the instructors in the lecture and active learning treatments were identical, randomly assigned, or consisted of a group of three or more in each treatment; and
v) Data that could be used for computing an effect size.

·         Data Analysis
See article for a detailed summary.
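The article works with standardized effect sizes (the difference between active-learning and lecture exam means, expressed in units of their pooled standard deviation, with a small-sample correction). As a rough Python illustration of that kind of calculation, using made-up numbers and not the authors' actual code or data:

import math

def hedges_g(mean_active, mean_lecture, sd_active, sd_lecture, n_active, n_lecture):
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n_active - 1) * sd_active**2 + (n_lecture - 1) * sd_lecture**2)
                   / (n_active + n_lecture - 2))
    d = (mean_active - mean_lecture) / sp           # Cohen's d
    j = 1 - 3 / (4 * (n_active + n_lecture) - 9)    # small-sample correction factor
    return d * j

# Hypothetical single study: active-learning section averages 78 on a common exam,
# lecture section averages 74, with similar spreads and enrollments.
print(round(hedges_g(78, 74, 10, 11, 120, 115), 2))   # ~0.38, i.e., roughly 0.4 SD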

RESULTS
Overall conclusions by the authors, in their own words as stated in the Discussion section:
The data reported here indicate that active learning increases examination performance by just under half a SD and that lecturing increases failure rates by 55% (21.8% for active learning students and 33.8% for traditional lecture students). The heterogeneity analyses indicate that (i) these increases in achievement hold across all of the STEM disciplines and occur in all class sizes, course types, and course levels; and (ii) active learning is particularly beneficial in small classes and at increasing performance on concept inventories.
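For the record, the 55% figure is the relative increase in failure rate under traditional lecturing compared with active learning:

\[ \frac{33.8\% - 21.8\%}{21.8\%} \approx 0.55, \]

that is, the failure rate under lecturing is about 1.55 times the active-learning failure rate.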

See relevant figures below from the article.
Fig. 2. Effect sizes by discipline. (A) Data on examination scores, concept inventories, or other assessments. (B) Data on failure rates. Numbers below data points indicate the number of independent studies; horizontal lines are 95% confidence intervals.


A Process for Developing Introductory Science Laboratory Learning Goals To Enhance Student Learning and Instructional Alignment

A Process for Developing Introductory Science Laboratory Learning Goals To Enhance Student Learning and Instructional Alignment
Jennifer M. Duis, Laurel L. Schafer, Sophia Nussbaum, and Jaclyn J. Stewart
Journal of Chemical Education 2013 90 (9), 1144-1150
DOI: 10.1021/ed4000102

Supporting Information:
Lab report guide and instructions for completing a tutorial on notebook documentation using “Interactive Tutorials and Virtual Lab Activities” (5 pages)

ABSTRACT: http://pubs.acs.org/doi/abs/10.1021/ed4000102

Reader’s note:  The process described by the authors below for the development of learning goals and assessment methods may serve as a model to organize our program’s efforts to develop more student learning outcomes for our laboratory curriculum.  The steps seem well-defined but require an extensive amount of time and discussion.  This is the second article I have read describing the process of developing assessment methods and the work required to validate them.  This article reinforces the idea that coming up with meaningful learning outcomes, assessment methods, and analysis is not a trivial process!  This is something that I would like to do for our lab curriculum, but it requires a considerable amount of time, effort, and coordination from all participants.  This paper also introduced me to the existence of a set of interdisciplinary lab learning objectives developed at Rice University: http://www.owlnet.rice.edu/~labgroup/assessment/lab_objectives.html

This article describes the process and results of the development of learning goals and student assessment for the introductory chemistry lab curriculum at a large university with over 1700 chemistry students.  The researchers include an instructional research coordinator, the course instructor, a science educator, a chemical education postdoc researcher, and undergraduate researchers. 

The goals of the research and development team are to, verbatim:

·         optimize instructional use of limited laboratory contact hours
·         help students achieve learning gains in basic (e.g., maintaining a laboratory notebook, selecting measurement tools with appropriate precision, and completing “common” techniques properly) and transferable (i.e., valuable in other sciences) laboratory learning content, and
·         cultivate departmental consensus regarding what students should learn in the course

The 2-semester course is taken by students who have completed what in Canada is Grade 12 Chemistry and is required for most Faculty of Science majors. [They used the term “introductory” but I suspect this is equivalent to General College Chemistry.]  “Furthermore, approximately 1700 students complete the full, two-semester course series and complete five, mixed-mode guided inquiry style activities (supported by readings, technique guides, online tutorials, and virtual laboratories in off weeks) over each 13-week semester.”

The researchers used a set of 5 program-wide learning objectives developed by science and engineering instructors at Rice University as a basis for developing the learning goals:
1. Basic laboratory skills
2. Communication and record keeping
3. Maturity and responsibility
4. Context
5. Integration and application of knowledge/experience

Some key things about the development of the learning goals:
Chemistry department members came up with a draft of learning goals for each lab activity using the learning objectives above.  These were synthesized by a chemistry education researcher.  Prioritizing basic and transferable lab skills helped focus the learning goals.  These were then reviewed by the instructor and other members of the research and development team and rewritten in the active voice. 

A few challenges arose.  When reviewing members disagreed, the group went back to the learning objectives to rewrite or eliminate a learning goal.  Efforts were also made to articulate the goals clearly and unambiguously (see the example in the article about “titration technique”).

A final draft of no more than 2 pages of learning goals per lab activity was completed after iterative discussions reached consensus.  This consisted of clearly articulated and consistently aligned learning goals organized into a chosen global framework.  The first main section of the framework is basic laboratory skills: safety, laboratory procedures and techniques, laboratory records, data analysis, and laboratory procedure troubleshooting.

Some key things about the development of assessments:
The first step the group undertook was to look at quizzes, laboratory notebooks, and report expectations to see which learning goals were already adequately assessed within the course’s grading scheme.  Those that were not assessed in any of these laboratory deliverables were prioritized for assessment development.  The newly developed assessments were then validated with students and experts.

The learning goals were organized into categories of possible assessment methods: should be observed, can be in written form, can be tracked online, and/or part of currently gathered and graded material.

An example given for the assessment development process involves titration.  To assess the learning goal of using correct procedures during titration, students are observed using a rubric provided in the lab manual for the titration procedure.  To assess the ability to list the general steps of a titration procedure, students were originally given an open-ended question, but the authors found that it could not elicit the desired level of detail.  They then analyzed data collected from the open-ended questions and observations of procedure to create a set of multiple-choice and true-false questions to assess the aspects of titration most commonly not conducted or described correctly.  The authors state that, “In a best-case scenario, however, development of suitable assessments is delayed until student observations and think-aloud interviews are completed and analyzed. This prevents many false starts and reduces the lost effort when a large portion of collected data must be set aside because the assessments did not reveal sought information.”

Assessment validation
Validation of an assessment is required to:
·         ensure that students interpret questions as intended
·         ensure that experts, namely experienced chemistry professors, agree that the assessment is congruent with the corresponding learning goal (validation by experts from outside the team improves construct and face validity)
·         ensure that there is a single agreed-upon correct answer to each multiple-choice item and that all distractors are incorrect

To further improve face validity, assessments were validated with students to confirm that the wording is interpreted as intended, to ensure that the correct answer was not chosen for the wrong reason, and to understand student reasoning.  This was done through personal interviews of the target population (a small financial incentive was offered to student participants).  See the detailed description of the interview process in the article.

*See the example in the article of how the group arrived at revisions of an assessment question and elimination of jargon in both the question and the procedure, with the “realization that such terminology was beyond the scope of the first-year course”.

Examination of the laboratory activities and evaluation materials for alignment with the learning goals revealed some disconnects.  These disconnects were addressed either through a re-working of the course materials or a realignment of the grading scheme (e.g., the points assigned to notebooks were raised and the points for the lab report lowered, as the latter was deemed more appropriate for later in the curriculum).

The development of the learning goals and the assessments ultimately also functions to evaluate the success of the course and not just the student. To ensure that students’ responses are “honest” and not motivated by getting points, the authors suggest carrying out these assessments as tasks outside the grading rubric for the course. To encourage “sincere effort” from the students: “We have found that providing a very small bonus point allowance for completing the test, explicit knowledge that test results will be used to improve the course, and a final question where students indicate how seriously they regarded the test (shown below in Figure 3) resulted in considerable levels of serious participation and adequate controls to minimize frivolous responses.”

Here is what the authors have to say on the outcomes of the learning goals and assessment development process (verbatim):

“With data provided by the process and these course assessments, we have

·         Cultivated departmental consensus regarding what students should learn in the course
·         Provided students with clear descriptions of what they should learn (LGs)
·         Emphasized basic and transferable skill development (such as suitable notebook maintenance and correct volumetric glassware manipulation) through enhanced assessment of these skills
·         Better aligned course grading policies with identified LGs
·         Better aligned instructional materials and TA orientation with established LGs
·         Provided credible feedback regarding student attainment of LGs to target course improvements for optimizing instructional use of limited laboratory contact hours”


A SUMMARY OF THE STEPS, TAKEN VERBATIM FROM THE ARTICLE:
The potentially challenging task of identifying learning goals or performance objectives for an introductory science laboratory course can be accomplished systematically and productively. To begin, resolve to deal with inherent challenges: start with current laboratory instructional materials, identify an overarching theme or focus for the course, and engage invested parties throughout the process. Next, build initial LGs around an existing framework.23 If possible, engage an experienced person in discipline-based educational research to help in LG synthesis, alignment with course goals, employing clear, operational language, and organizing the resulting LGs into the agreed-upon framework. With iteratively refined LGs identified, re-examine current laboratory materials to avoid duplicating existing assessments and prioritize remaining LGs to focus assessment-development efforts. Decide on the best way to assess particular learning goals or types of learning goals, compare this to assessments currently used in the course, and develop new assessments to fill the gaps. Validate these assessments with experts (to enhance consensus, construct validity, and face validity) and students (to ensure assessments are correctly interpreted and further support face validity). Administer the assessments using a pretest−posttest strategy to assess the impact of new goal-centered course improvements.
Finally, ensure that current student assessments are aligned with the LGs and make any needed adjustments.




Saturday, May 30, 2015

Putting the Laboratory at the Center of Teaching Chemistry

Putting the Laboratory at the Center of Teaching Chemistry
A. M. R. P. Bopegedera
Journal of Chemical Education 2011 88 (4), 443-448
DOI: 10.1021/ed100045z


In this article, the author, a chemistry professor at Evergreen State College in Washington, describes non-traditional lab activities that represent a “holistic” approach (Lab-Centered Approach or LCA) to connect theory learned in lecture to practice done in lab.  The lab activities are described below:

·         Lab write-ups are used (I am assuming handouts of labs) instead of published labs.  The author did not give a description of whether these labs were created in-house.  The students are required to record everything in a lab notebook (something we already do at LPC) as opposed to the practice in many other colleges of using lab report forms from the publisher.  Students are asked to answer pre-lab questions as a requirement for doing the lab.  Students are asked to submit the lab report the next day (students have a week at LPC). The discussion section consists of questions that students need to answer to help them think through the lab (done in some labs at LPC).  Overall, this protocol sounds no different from what we already do at LPC.
·         Students are given a lab map to familiarize themselves with parts of the lab, something we do at LPC as well.
·         Students work with lab partners for every lab.  The author notes that when students have to learn how to use a new instrument, they work with each other and their mastery is closely monitored by lab staff (at LPC, the students don’t work in pairs for every lab.  I, myself, go around to make sure students know how to use our Lab-Pro and the different sensors we use for it.  Not very easy to do with the larger AA instrument. Lab staff do not normally have time to hang around in the class during the lab.)
·         The author also made special mention of students cleaning up after themselves, something required and routine in most of the labs  I have taught so I am not sure why this got special mention.
·         Grading is more process-oriented than results-oriented.  Students are graded on the basis of their work, their technical writing skills, and their suggestions for improvement.
·         Lab problems are incorporated into their workshop (a form of problem-solving discussion session), quizzes, and exams.  (I do a little bit of this already but I plan to do more of it to give students more incentive in producing good, thorough reports.)
·         He also gives a brief description of how he conducts his lecture, which he describes as being very interactive.  The students also attend a 90-minute workshop on problem solving in which groups of students work together on problems and the instructor rotates through them, spending more time with struggling students.
·         Students in the first year chemistry class take labs where the activities represent a progression from simple verification experiments, to guided-inquiry activities, and then to open-ended inquiries.  The author mentioned that in the first quarter, students only do 4 labs as it takes time to cycle through all the classes in 10 weeks (I am not sure what he meant by this).
·         The author did not give a description of what the guided inquiry labs are but they were included in the supporting document.  In this step, he did mention that students have to work with more analysis data and he seems to imply that there is more one-to-one interaction needed in this case because of the nature of the activities.
·         In their last quarter, spring, groups of 3-4 students carry out an open-inquiry project where they have to come up with their own question and hypothesis, do some background research, design their experiment, order their chemicals, and carry out the project to fruition.  The projects are then presented during a science carnival event at the College.  The author provides a timeline of expected outcomes for the open-inquiry project and also lists some of the interesting projects that have come out of the class.

It is hard for me to imagine, although it sounds ideal, being able to have our 1A and 1B students carry out guided-inquiry and open-inquiry projects.  We would have to eliminate a large portion of our lab curriculum and replace it with these projects.  I am not sure how much lab support it might take to carry out an endeavor like this.  Furthermore, students don’t always end up with the same instructor for the second semester.

The author provided some qualitative assessment of the effectiveness of this approach in the form of selected student comments, all of which were positive. He presented some quantitative data on ACS exam results but did not elaborate as to whether they were pre- and post-LCA or whether they are results from the two years of using this approach. In the notes section, he did add this: “Based on information from the Registration and Records office of TESC, 76% of the students continued their studies in the sciences in subsequent years. This number excludes students who graduated or left TESC at the end of the academic year of the LCA”.

The last section was devoted to a description of incorporating a lab practical, something I have always wanted to do for the General Chemistry course.  However, this takes time away from students conducting more labs (it is always a difficult decision to figure out which lab to eliminate to make room), and it is also very work-intensive from the lab support end.  He shares the same reasons I have for adding a lab practical to the curriculum (which I have done for the 30A lab).  Listed verbatim, they are: “(i) emphasizes that what is learned in the laboratory is as important as what is learned in lecture; (ii) holds students responsible for learning lab techniques; (iii) provides students an opportunity to synthesize data from multiple experiments to reach a single conclusion; (iv) tests students' ability to work independently and resourcefully in the laboratory; (v) rewards students for good record keeping (lab notebook was the only resource allowed during the lab practical); (vi) tests students' ability to write a formal lab report without help; and (vii) helps students gain confidence in the laboratory before starting independent lab projects.”  The 3-hour lab practical required each student to determine the identity of a metal nitrate using a variety of techniques: complexometric titration, Beer’s Law measurements, etc.  It sounds really interesting and I am intrigued.  More importantly, he discovered areas where students were weak and could have used more help: graphing, precision and accuracy, qualitative versus quantitative data, etc.


Friday, May 29, 2015

A syllabus for a one semester chemistry course for health professions: Report of the Task Force on Chemical Education for Health Professions

A syllabus for a one semester chemistry course for health professions: Report of the Task Force on Chemical Education for Health Professions
Mordecai Treblow, John M. Daly, and Jerry L. Sarquis
Journal of Chemical Education 1984 61 (7), 620
DOI: 10.1021/ed061p620

This very short (and old) article was a report by the task force on creating a one-semester course syllabus for health professions.  I looked up this article because it was cited in one of the other articles I read.  In this article, they give a suggested list of topics to constitute a one-semester general, organic, and biological chemistry course.  It was noted by the task force that this was created with no input from people in the health professions due to lack of response.  The previous article I presented on Views of Nursing Profession was a delayed response to this call for communication.  As far as I know, there was not a follow-up report on how to integrate the responses and views by the nursing and nursing education community into a revised model. Nevertheless, this was an interesting read as it gave me some history on the current structure of the one-semester GOB course (which I have taught many times) and the two-semester version.

It is also interesting to note that the authors and members of the task force put together this list with the goal of establishing “the chemical foundations for microbiology, physiology, nutrition, and pharmacology”.  In contrast, the participants in the Bridging Cultures study were asked about chemistry as it is useful for nursing clinical applications.  Personally, I have struggled with this, and it reinforces to me that we need to have both goals in mind (what the chemistry course is a pre-requisite for and how it may help students in their nursing clinical experience).  This dichotomy presents some level of conflict because, in my opinion, if the goal of the course is to serve as a pre-requisite for courses in physiology, microbiology, etc., then it warrants a more foundational approach to the topics rather than a directly applied one.  One has to find a good compromise between the two despite the limited time we have in teaching this course, even in a 2-semester mode.

The list of topics given seems overwhelming for a one-semester course.  It is notable that most texts still use this organization and selection of topics.


The authors do give a good list of advice, most of which still apply today (see article).

Valid and Reliable Assessments To Measure Scale Literacy of Students in Introductory College Chemistry Courses

Valid and Reliable Assessments To Measure Scale Literacy of Students in Introductory College Chemistry Courses
Karrie Gerlach, Jaclyn Trate, Anja Blecking, Peter Geissinger, and Kristen Murphy
Journal of Chemical Education 2014 91 (10), 1538-1545
DOI: 10.1021/ed400471a
ABSTRACT: http://pubs.acs.org/doi/abs/10.1021/ed400471a?journalCode=jceda8&quickLinkVolume=91&quickLinkPage=1538&selectedTab=citation&volume=91
The goal of this study was to develop and test valid and reliable assessments that measure the scale literacy of introductory chemistry course students (like Chemistry 31 and maybe 30A at LPC).  They define validity and reliability as follows: “Validity is determined by multiple methods, including experts constructing and analyzing the test items, the use of student responses and item statistics to edit and select test items, and comparison of the assessment measurement to other valid measures.16 The reliability of an assessment instrument can be defined as the consistency of its measurement each time it is used under the same set of conditions with the same group of subjects.”  Their study was guided by the following research questions:
1. How can scale literacy be measured for classwide assessment?
2. What is the scale literacy of introductory college chemistry students?
3. How does scale literacy predict performance in general chemistry?

The two assessment tests developed were (see article for how these were developed and validated):
Scale Literacy Skills Test (SLST): The final version had 45 items, 36 of which were assigned to categories of the Scale Concept Trajectory developed by Jones (see article). The other 9 items pertain to macroscopic and particle representations of matter (see distribution below).
Scale Concept Inventory (SCI): The final version contains 40 statements that are scored using a 5-point Likert scale: a five-option continuum from strongly agree (5) to strongly disagree (1). Twenty-three of the statements were written to elicit a positive or agree response, while 13 were written for a negative or disagree response. This technique was used to ensure that the students read each question. In addition, a verification item was used to identify those students who did not correctly utilize the SCI (for example, not reading the statements, not understanding the rating scale or simply entering random responses).
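To make the reverse-scoring and verification-item idea concrete, here is a minimal Python sketch; the item numbers, the expected verification response, and the scoring function are hypothetical and not taken from the authors' instrument:

# Sketch of scoring a Likert instrument that mixes positively and negatively
# worded statements and includes a verification item to flag careless responders.
NEGATIVE_ITEMS = {3, 7, 12}     # reverse-scored: disagreement is the expert-like answer
VERIFICATION_ITEM = 20          # e.g., "Select 'agree' (4) for this statement"
VERIFICATION_EXPECTED = 4

def score_response(responses):
    """responses maps item number -> rating, 1 (strongly disagree) to 5 (strongly agree)."""
    if responses.get(VERIFICATION_ITEM) != VERIFICATION_EXPECTED:
        return None  # discard: the student likely did not read the statements
    total = 0
    for item, rating in responses.items():
        if item == VERIFICATION_ITEM:
            continue
        total += (6 - rating) if item in NEGATIVE_ITEMS else rating
    return total

print(score_response({1: 5, 3: 1, 7: 2, 12: 1, 20: 4}))   # 5 + 5 + 4 + 5 = 19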
The participants were students from a 4-unit preparatory chemistry course with a college algebra or math placement test score requirement (like Chemistry 31 at LPC), students from a general chemistry I course with college algebra and preparatory chemistry course prerequisites (or placement tests) (like Chemistry 1A at LPC), and experienced graduate students as experts.
Test Administration:
Both the SCI and SLST were administered to participants in both courses at the beginning of the semester and at the end of the semester (only starting in 2010 for the SCI).
Results of the SLST: See graph above.  Authors’ summary: The validation studies of the inventory revealed areas of particular need where students again struggle with concepts related to scale. Included in this are concepts related to the continuity of matter, number sense, magnification and the definition of a macroscopic versus a particle-level property. Incidentally, these definitions were also an issue on the SLST where the two lowest performing items for both groups were these definitions
To come up with predictors for success in the course, the authors analyzed correlations between test scores in various assessments and performance in final exams.  Some notable results (copied verbatim to preserve precision of statement):
For General Chemistry, correlations were analyzed between scores on the various tests and performance on two different (ACS) final exams.  Here is the overall result for General Chemistry: The scale literacy measure was the best predictor for performance on the conceptual final exam and the same as the combined placement test for the other final exam.
For the Preparatory Chemistry course, correlations were analyzed between scores in various tests (excluding the chemistry placement) and performance in the final exam and overall class percent: Unlike the results found for general chemistry I, the results for preparatory chemistry show a better correlation between the final exam and the ACT composite or mathematics score and the final percent in the course and the mathematics placement test score. This suggests that the level of content understanding expected of the students in this course differs from that in general chemistry I, particularly with those concepts related to scale. Indeed, the focus of this course and many remediation or preparatory chemistry courses is the ability to solve problems and begin to understand some of the language of chemistry. However, the scale literacy score is still a moderate and significant predictor of success.
Overall summary by the authors on predictors for course success:

As individual assessments, the SLST and SCI had moderate, significant, positive correlations to final exam scores. Similar results were found for other traditional measures. However, when considering combined scores (ACT composite, combined placement or combined scale literacy score), the best predictor of success by conceptual final exam performance27 was the scale literacy score. In other words, students with higher scale literacy are predicted to perform better on conceptual test items in a general chemistry course. This relationship was also found when examining similar predictors for success in a preparatory chemistry course; however, the best predictor of success in this course were the traditional measures of mathematics knowledge or ACT composite or mathematics score.
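The predictor analysis described above comes down to correlating each measure (SLST, SCI, ACT, placement scores, or combinations of them) with final exam performance. A toy Python sketch of that kind of calculation, using simulated numbers rather than the study's data:

import numpy as np

rng = np.random.default_rng(0)
n = 200
scale_literacy = rng.normal(60, 12, n)          # e.g., SLST percent score (simulated)
act_math = rng.normal(24, 4, n)                 # e.g., ACT mathematics score (simulated)
final_exam = 0.5 * scale_literacy + 0.3 * act_math + rng.normal(0, 8, n)

for name, predictor in [("scale literacy", scale_literacy), ("ACT math", act_math)]:
    r = np.corrcoef(predictor, final_exam)[0, 1]
    print(f"{name}: r = {r:.2f}")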


Thursday, May 28, 2015

Bridge Between Two Cultures - J Chem Ed 2012

A Bridge between Two Cultures: Uncovering the Chemistry Concepts Relevant to the Nursing Clinical Practice
Corina E. Brown, Melissa L. M. Henry, Jack Barbera, and Richard M. Hyslop
Journal of Chemical Education 2012 89 (9), 1114-1121
DOI: 10.1021/ed200041x

The goal of this study is to collect data on the relative relevance of general, organic, and biological (GOB) chemistry topics to clinical nursing practice.

The article begins by listing the factors that have the most impact on student success in GOB courses: instructor teaching style, cognitive load, academic skills, prior science background, mathematics abilities, and anxiety about failure.  The authors point out that information from studies like this one can, to some extent, address instructor teaching style and cognitive load.

The researchers used phenomenography as the research methodology to collect qualitative data from a group of experts on their perceptions of the relevance and applications of different chemistry topics to clinical nursing practice.  This methodology is “suitable for description of differences and similarities in participants’ opinions.”  The experts are a diverse group of GOB chemistry instructors, nursing educators, and nursing graduate students with 2-5 years of clinical experience.  The experts were interviewed, and the verbatim transcripts were analyzed by at least three different people to look for patterns and identify categories of descriptions, using “no algorithm for the analysis of phenomenographic interviews, rather a series of iterations to distill the meaning by repeated reading of the transcripts”.

Based on key terms used in the participants’ responses (frequency of the terms “used”, “important”, “needed”, versus “needed for understanding” or “foundational” versus “not used” or “not important”), the researchers came up with the following three categories of their relevance and applications to clinical nursing practice:
·         Topics that are important have a direct application in the nursing clinical practice.
·         Topics that are foundational facilitate the understanding of the important topics, but are not directly important for nursing clinical practice.
·         Topics that are not important do not have a direct application or are not significant in nursing clinical practice.

The list of topics evaluated was divided into General Chemistry, Organic Chemistry, and Biological Chemistry.  Detailed data about the number of responses pertaining to the three categories above are given in the article.  Summary of results based on the authors’ interpretation of the data (with additional details from the responses):

General Chemistry
·         Important Topics
o   Acid-base-salt chemistry (including pH and buffers)
o   Measurements (useful for concentration and drug dosage calculations, metric system)
o   Solutions (osmotic pressure, osmolarity, concentration)
·         Foundational
o   Matter
o   Atomic structure
o   Periodic table
o   Ionic and covalent bonds
·         Chemical calculations (“working with mole ratios and stoichiometry was considered as not important in nursing, although an understanding of moles and molecular weight is useful in calculating concentrations”)
·         Gases, liquids, and solids are intermediate between foundational and not important, although both groups made special mention of the allosteric binding of oxygen to hemoglobin, “especially as it relates to the Bohr effect, metabolic acidosis, and oxygen perfusion in general”.
·         Nuclear Chemistry: all chemistry instructors considered this not important, but 70% from the nursing culture (educators and graduate students) considered it important. The authors summarized the nursing perspective as follows: “Even though nurses are not directly involved in taking an x-ray, they should be able to explain to the patients the basics of the procedure and what the effects of the radiation treatment are.”

Organic Chemistry
Topics in organic chemistry were perceived with the greatest disparity by the participants. The key finding is that while the nursing perspective values understanding of function, the chemistry educators’ perspective emphasized the importance of understanding structure as a pre-requisite for understanding function. Because there are no obvious groupings into the three main categories, the authors presented a summary of responses per topic:
·         Hydrocarbons: considered foundational by most experts, with unsaturated hydrocarbons seen as less relevant than saturated hydrocarbons.  “None of the nurse educators or nursing students could present a clinical application of saturated or unsaturated hydrocarbons.”
·         Alcohols, phenols, and ethers: mixed opinions, with about half considering these three not important.  Instructors consider alcohols important to understanding carbohydrates and metabolic intermediates.
·         Aldehydes and ketones: the nursing perspective considers aldehydes and ketones foundational or important in terms of metabolic by-products and their connection to diseases such as diabetes; again, function but not structure.
·         Carboxylic acids, esters, and amides: the nursing perspective considers these foundational, with amines as important or foundational. The chemical education perspective considers these important or foundational for understanding lipids, amino acids, and proteins.

The authors have the following recommendations on the basis of the responses: “The organic functional groups should be presented in connection with the structure and function of biomolecules, the nomenclature greatly simplified, and the connection of functional groups to properties, function, and metabolism of biomolecules and drugs well-emphasized.”

Biological Chemistry
There is a more consistent evaluation of topics in biological chemistry by both cultures, agreeing that all topics are important with the exception of nucleic acids. “Understanding enzymes and vitamins, carbohydrates and lipids, and proteins and their metabolism” was considered important by the experts.  75% of chemistry instructors considered nucleic acids not important, while 70% of the nursing educators and graduate students considered them important, “especially in terms of function, such as replication and mutations”.  Furthermore, the latter group made specific mention of the importance of understanding metabolism, including: “anaerobic metabolism of carbohydrates, glycolysis and ketone body production in understanding diabetes and metabolic acidosis in general, and the urea cycle, because it is connected with liver and brain disorders”. Biochemical energetics was considered important by both cultures, but counting ATPs was not considered important by the nursing group.

Table 4 lists the relevant topics that bubbled to the top from the participants’ responses.

Responses from the participants also indicate the following recommendations:
·         Elements that are potential electrolytes should be highlighted in the periodic table
·         Understanding cations and anions is important for understanding the role of electrolytes
·         Covalent bonds and polarity were important for understanding drugs and their excretion
·         Hydrogen bonds should be specially emphasized as they play an important role in structure, physical and chemical properties, and function of biomolecules.
·         “An understanding of metabolism will enable nurses to better understand implications and side effects of prescribed medications, provide more effective interventions in trouble-shooting situations, and ease in professional communications at the patient’s level of understanding.”

In conclusion, results from this study may help chemistry instructors meaningfully reduce the cognitive load for nursing chemistry students and, with the recommendation to use examples relevant to nursing practice, improve their teaching styles to encourage student success.










ENERGY FOR FUTURE PRESIDENTS (Part V - COMPLETE)

Muller, Richard A. Energy for Future Presidents: The Science Behind the Headlines. New York: W. W. Norton and Company, 2012.

The print version of this book is 368 pages in length.

In this book, the author presents lessons to future presidents on various sectors of energy use and alternative energy prospects, with the goal of clarifying, correcting, and expanding on the information behind the news headlines.  From the author’s perspective, the president has the responsibility to be knowledgeable about these areas and should be a “teacher” to the public when it comes to using information that goes beyond the news headlines to make informed decisions about energy. He tackles a wide-ranging list of energy and energy-related topics including: energy-related disasters, global warming, shale oil, alternatives to transportation fuel, energy efficiency and conservation, solar energy, wind energy, energy storage, nuclear power, biofuels, synfuels, hydrogen fuel, hybrid autos, and carbon dioxide mitigation measures.

I chose this book despite the broad coverage because energy is a shared purview of both physics and chemistry.  The theme of the book is looking at headlines and providing a scientific and mathematical perspective to inform people’s interpretation and perception of these issues.  These are the same headlines that the president, I, and my students invariably read every day and that are, for many, the primary source of information.

In Part I, the author provides his perspectives on 3 major energy catastrophes, presenting some facts, his interpretation of the risks and ramifications, and his opinion on how these should inform government decisions and responses.

The first chapter deals with the Fukushima reactor meltdown following damage from the earthquake and tsunami of March 2011.  He predicts that the number of deaths from cancer caused by the radiation will be small, less than 1% of the human death toll caused by the actual earthquake and tsunami. On the basis of this, he proposes that nuclear reactors should be built strong enough that fewer deaths result from radiation released by reactor damage than from whatever caused the damage.  He also proposes using the average annual radiation dose that people in Denver receive as a standard for determining what the disaster response should be during a radiation release.  Against these two standards, he argues that the Fukushima reactor design was actually adequate, based on the low projected human death toll, despite the fact that it was not designed to withstand a 9.0 earthquake and a 50-foot tsunami.

In Chapter 2, the author questions the President’s characterization of the Gulf Oil Spill of 2010, caused by the Deepwater Horizon oil rig accident, as being the “greatest environmental disaster” in history. He argues that the ensuing animal deaths, at around 6,000, were small relative to the hundreds of millions of bird deaths due to collisions with glass windows and high-voltage lines.  The beaches remained relatively clean compared to the damage done by the Exxon Valdez spill to the Alaskan shores.  He “senses” that the overreaction did more damage to the region in terms of its effect on tourism and the local economy.

Chapter 3 covers quite a bit of material, starting with the author’s presentation of his group’s efforts to confirm the temperature increase data.  His group, through the Berkeley Earth Surface Temperature project, did an extensive analysis of temperature data previously not included in the IPCC analysis and a re-analysis of temperature (1.6 billion temperature measurements, 14 data sets, 38 stations), putting in measures to avoid data selection, data correction, and station quality biases, and testing for urban heat bias. To the author’s surprise, they came up with the same temperature rise reported by the IPCC of 0.9 Celsius over land, concluding that “none of the legitimate concerns of the skeptics had improperly biased the prior results” and suggesting to the author that “those groups had been vigilant in their analysis and treated the potential biases with appropriate care”. Furthermore, they demonstrated a close agreement between the temperature rise curve and the carbon dioxide rise curve when smooth fitting was done with volcanic eruption data. The excellent fit between the temperature and CO2 curves “suggests that most – maybe all – of the warming of the past 250 years was caused by humans”, according to the author.

Based on these results, the author offers the following prediction: if the CO2 concentration increases exponentially and the greenhouse gas effect increases logarithmically, then the warming should grow linearly, so doubling the time interval doubles the temperature rise (a short sketch of this argument follows this summary).  For example, assuming exponential growth of the CO2 concentration, by 2052 the CO2 concentration would double to 560 ppm, and the corresponding rise in land temperature is 1.6 Celsius.  Forty years after 2052, there will be an additional 1.6 Celsius rise, and so on every 40 years until the CO2 rise is mitigated.

In the section on tipping points, the author discusses some positive and negative feedbacks that may occur as a result of increased CO2 and warming. A strong positive feedback can lead to runaway greenhouse warming. The tipping points for this that have so far been identified are: loosening of the Antarctic ice sheet and slipping into the sea to produce over 100 feet of sea level rise; melting of freshwater in Greenland, which can disrupt the Gulf Stream and change sea current flow all the way to the Pacific; melting of permafrost and release of the potent greenhouse gas methane, leading to further warming; and release of methane from the seabed as the Arctic water warms. An example of a negative feedback is an increase in water vapor cloud cover, a mere 2% increase in which can cancel any expected further warming if the CO2 concentration doubles.  The author believes that the only solid evidence of warming is the temperature data; all other effects attributed to warming are “either wrong or distorted”.  In this section, he presents his views on these effects and how they may or may not be accurate or correlated with warming temperatures: hurricanes, tornadoes, polar warming, the so-called hockey stick data, and sea level rise.

Toward the end, the author asks, “Can global warming be stopped, assuming it is a threat?” He highlights the important role of the developing nations in decreasing CO2 emissions, even though most of what is in the atmosphere now is due mostly to developed nations.  The emerging economies need to cut emission intensity by 8 – 10% per year just to stabilize greenhouse emissions.  Low-cost solutions and a switch from coal to natural gas are required to help China and other emerging nations cut emissions. The author believes that geoengineering solutions may never be taken seriously because of the danger of further altering the earth’s geochemistry and atmospheric chemistry without knowing the ultimate consequences. Lastly, on the global warming controversy, the author’s statement is this: “The evidence shows that global warming is real, and the recent analysis of our team indicates that most of it is due to humans”.  He refers to global warming as both a scientific conclusion and a secular religion for both what he calls “alarmists” and “deniers”. He believes that it is a threat that needs to be addressed even if quantification is difficult to do.  He proposes that any solution should be inexpensive because it is the developing world that will need it the most.  The lowest-hanging fruit right now is a switch from coal to natural gas while technologies are developed to make other sources affordable.  An electric car is an expensive solution that produces more CO2 if the electricity is provided by a coal-powered plant.
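A quick way to see the author’s linear-warming argument (my sketch of the reasoning, not a quote from the book): if CO2 grows exponentially, C(t) = C_0 e^{kt}, and the greenhouse forcing grows logarithmically in the CO2 concentration, then

\[ \Delta T \propto \ln\frac{C(t)}{C_0} = kt, \]

which is linear in time, so doubling the elapsed time doubles the temperature rise. This is consistent with the projection of 1.6 Celsius by 2052 followed by another 1.6 Celsius every subsequent 40 years.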

In Part II, the author gives an overview of the energy landscape.  In the introduction, he notes two complicating factors affecting the way this landscape is viewed: the out-of-whack pricing of energy, and the enormity of the US energy requirement alone, equivalent to about 1 cubic mile of oil per year; with increasing per capita GDP, there is a corresponding increase in per capita energy use.  He also notes that in exploring various alternative energy resources, the differences between the developed and developing worlds need to be considered.

In Chapter 4, the author talks about the newest energy windfall: the development of extraction technology for recoverable natural gas from the enormous reserves trapped in shale. According to the author, “the exploitability of these shale gases is the most important new fact for future US energy security – and for global warming – ”. US natural gas reserves have grown over the last 12 years according to Department of Energy and US Energy Information Administration information, from 192 trillion cubic feet (Tcf) in 2001 to 300 Tcf in 2010; the remarkable event, however, is the growth of this number to 862 Tcf in just one year (2011).  This increase is attributed to the development of key technologies to extract gas from shale reserves.  From 2005 to 2012, the fraction of natural gas extracted from shale increased from 4% to 30%; see Figure II.3 for a graph showing the growth of shale gas production.

Natural gas is released from coal and shale by pumping pressurized water down a pipe to crack the coal or shale and release the natural gas. Hydraulic fracturing (fracking) and horizontal drilling are two key technologies for extracting natural gas from shale; together they have enabled economically viable extraction. In a US EIA survey (Figure II.8) of 32 countries, there are estimated to be about 6,622 Tcf of shale gas reserves, 13% of which are in the US. In 2013, natural gas provided about 27% of US energy needs (updated data from the LLNL energy flow chart for 2013). For the same dollar value (early 2012 data), natural gas can provide 2.5 times more energy than gasoline. Converting US energy needs to natural gas is not that trivial in most cases.  Storage volume and delivery are issues, as even when compressed, natural gas takes up three times the volume of gasoline.  As a transportation fuel, CNG has ten times the energy per gallon of lithium ion batteries, so it is an electric vehicle competitor. Some advantages of natural gas include producing only half the greenhouse gases that coal does and much lower local pollutants (sulfur, mercury, carbon particles).

Another potential source of methane being explored is methane hydrate, or clathrate, usually found along coasts and continental shelves. At low temperatures and high pressures, at least 1,500 feet down, methane mixes with water in about a 1:5 ratio (more water) and causes the water to form an ice cage that traps the methane. As shown in Figure II.9 in the book, methane hydrate looks like ice cubes that burn. Estimates of methane hydrate deposits range from 10 to 100 times the amount of shale gas. The extraction process, ATTOW, is not trivial: most of the methane hydrates are mixed with clay, and the salt water is corrosive. There is also the danger of leaking methane, which is a potent greenhouse gas, about 23 times more effective than carbon dioxide. Furthermore, some scientists believe that a release of methane hydrates led to the Permian-Triassic extinction about 250 million years ago, a catastrophe that wiped out 96% of all marine species.

In Chapter 5, the author provides his perspective on the real energy crisis in the US. In a somewhat facetious tone, he rhetorically asks “What energy crisis?” (in the US), pointing to enough coal reserves to last a century, huge reserves of natural gas and oil in shale, lots of sun and wind energy, and cheap uranium (uranium ore is only 2% of the cost of electricity). What the US is actually having, he clarifies, is a “transportation fuel crisis” due to a shortage of oil and liquid fuels. In Figure II.11, the author shows that if you count US reserves of natural gas, coal, and oil together, the US has 1,470 billion barrels of oil equivalent and leads a pack of countries that includes Saudi Arabia, making the US “the king of fossil fuels”. The energy source referred to as oil or petroleum (the source of gasoline, diesel, and jet fuel) was itself once considered an alternative energy source, when the whale oil used to light homes and businesses started running out in the 1850s. Petroleum was primarily used as kerosene for lamps and later enabled the internal combustion engines of automobiles and airplanes. Although coal could run automobiles, gasoline delivers 60% more energy for the same mass. It is also incredibly cheap: assuming a price of $3.50/gallon and 35 mpg, it costs 10 cents per mile to drive, with up to 5 people in the car, as the author notes.
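The author's cheap-gasoline arithmetic is easy to reproduce; the per-passenger line below simply divides by the 5 occupants he mentions.

# The cost-per-mile arithmetic for gasoline quoted in the text.
price_per_gallon = 3.50   # dollars
miles_per_gallon = 35
cost_per_mile = price_per_gallon / miles_per_gallon
print(f"{cost_per_mile * 100:.0f} cents per mile")                              # 10 cents/mile
print(f"{cost_per_mile * 100 / 5:.0f} cents per passenger-mile with 5 people")  # 2 cents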

The US hit its Hubbert oil peak in the 1970s, and the world is close to hitting its own. The author points out that the obvious substitutes for petroleum are natural gas, synfuel, and shale oil. Alternative energy sources have a difficult time competing because oil-based energy is so cheap; Saudi Arabia can produce oil for about $3 a barrel, although the market price per barrel can fluctuate between $20 and $100, rising as demand exceeds supply. Synthetic fuel, or synfuel, is one solution to the liquid fuels shortage. Synfuel can be derived from the liquefaction of coal (CTL, coal to liquid) or natural gas (GTL, gas to liquid). The Fischer-Tropsch process was the first chemical procedure used to manufacture synfuel. Synfuel can cost up to $60 per barrel to make, which makes its viability as an economical replacement questionable, especially since the Saudis can lower oil prices easily, according to the author.

In Chapter 6, the author talks about the surprising emergence of shale oil as an energy source. Along with natural gas, shale also contains oil that can be extracted. The amount of oil in shale is estimated at over 1.5 trillion barrels, 5 times the oil reserves of Saudi Arabia. As with any other resource, this source of oil was not seriously considered until oil became expensive enough to make shale oil extraction competitive. In a nutshell, the classic approach is to mine the shale and heat it to drive out an oil-related material called kerogen; the kerogen can then be converted to diesel and gasoline in a process called retorting. The waste generated is huge, exceeding the volume of the rock mined. Companies like Shell, Chevron, and Exxon Mobil have been developing technology for shale oil extraction. Shell’s method, the “In-Situ Conversion Process”, involves heating the rock 1-2 km underground with electricity to temperatures of 650-700 Celsius, letting it simmer for 3-4 years, and then using fracking and horizontal drilling to extract the smaller hydrocarbons broken off from the kerogen. As energy-intensive as that sounds, the author notes that the process yields about 3.5 times more energy than it consumes. Shell’s estimated cost is $30/barrel, so the industry may remain profitable as long as the price of oil stays above $60/barrel. There are environmental consequences, of course: this is yet another carbon-based fuel, oil can leak into the water table, and there are wastewater and water shortage issues as with fracking for natural gas. Areas with significant extraction include the Colorado Plateau, the Bakken field in North Dakota, and the Eagle Ford Formation in Texas. It is estimated that by the end of this decade, 25% of US oil consumption may come from shale oil.


The author devotes Chapter 7 to what he calls the “cheaper-than-cheap” energy source: increasing energy productivity and efficiency. He distinguishes what he considers “great investments”, which actually save users money, from “feel-good” actions. Two of the money-saving actions he highlights are adding insulation (a 17.8% return after payback) and replacing incandescent bulbs with compact fluorescent lights (a 209% return). He also summarizes the basis, premise, and results of a conservation program called Decoupling Plus implemented in California. In this program, the utility invests money in helping Californians buy energy-efficient appliances and conserve energy overall. The utility’s return comes from diverting investment dollars away from building new power plants (to add capacity for growing use) and toward conservation practices that reduce usage, plus a promise by the state to let it raise prices. Customers benefit because the increased energy productivity lowers their total energy costs despite the higher prices. The program is considered quite successful: per capita electricity use in California has been stable since 1980, while in the US overall it has increased by 50%. The catch is that consumers should not increase their electricity use just because they are using more efficient bulbs. Other “great investments” listed by the author include cool roofs, more efficient autos, energy-efficient refrigerators, and various actions listed in the McKinsey chart.

In the next section, the author lists and describes what he considers “feel-good measures” that save energy only in limited circumstances: buses and recycling paper. In the case of buses, a study found that public transportation saves energy, or at least breaks even, when there are more than 15 households per acre; below that density, buses actually use more energy.

In the last two sections, the author addresses issues in energy delivery, particularly electrical power. In “Power Blackouts”, he discusses the grid – the interconnection of large numbers of power plants, transmission lines, transformers, and users – which makes electricity delivery in the US more reliable: operational problems at one plant can be overcome by another plant supplying the needed electricity. The grid cannot, however, handle sudden spikes in demand, which can lead to cascading power plant failures like the blackout that hit New York and much of the Northeast in 2003. The author lists three solutions. One is to build small natural gas power plants for use on high-demand days; already done in California, this is an expensive solution because of the capital investment and the poor returns when the plants run only a fraction of the time. Another is for utilities to decrease the voltage on the line; air conditioners still run, but at reduced power. California has also used rotating brownouts to spread out the burden of sudden high demand.

In “Smart Grid”, the author talks about controlling electricity use, and he welcomes the role of market forces. He favors dynamic pricing of electricity, with the price rising when demand is high. This is not a popular option, however, because of its unpredictability. The author suggests that smart meters can help consumers program appliances to turn on and off depending on when demand, and therefore price, peaks.
For example, electricity enters the home at two voltages: 120 volts for lighting and small appliances, and 240 volts for air conditioners, washers, dryers, and other appliances that pull heavy loads. One way to program a smart meter is to turn off the 240-volt circuits when prices spike. In California, smart meters were installed primarily so that the utility could collect more information about energy usage; they were also designed to reduce power automatically in an extreme emergency. They did not start out very popular, as he describes in the last section of the chapter.
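As an illustration of the kind of rule he has in mind, here is a minimal, hypothetical smart-meter sketch in Python; the price threshold and circuit labels are invented for the example and are not from the book.

# Hypothetical sketch of a smart-meter rule for dynamic pricing: shed the heavy 240 V
# circuits when the electricity price spikes. Threshold and circuit labels are made up.
HIGH_PRICE_PER_KWH = 0.30   # hypothetical price threshold, dollars per kWh

def circuits_to_keep_on(price_per_kwh):
    """Return which household circuits to leave energized at the current dynamic price."""
    if price_per_kwh >= HIGH_PRICE_PER_KWH:
        return ["120 V (lighting, small appliances)"]                        # shed the 240 V loads
    return ["120 V (lighting, small appliances)", "240 V (AC, washer, dryer)"]

print(circuits_to_keep_on(0.45))   # peak pricing: only the 120 V circuits stay on
print(circuits_to_keep_on(0.12))   # off-peak: everything stays on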

Two major issues the author identifies related to energy are energy security and climate change. In Part III, the author devotes chapters to describing and discussing alternative energy sources, noting that the “alternative energy field is wide, technically complex, and full of uncertainties”. He points to a table of data showing the cost of producing a kilowatt-hour of electricity by various methods (see Table III.1): coal, natural gas, nuclear, wind, solar PV, solar thermal, geothermal, biomass, and hydro. Some of these general types are further broken down into specific technologies. The table was published by the US Energy Information Administration in 2011. The author notes two caveats:
1)      The data assumes that the cost of capital is 7.4%
2)      The data assumes a carbon emission trading cost for coal and natural gas of about $15/ton.

The table shows that natural gas appears to be the cheapest way to provide a kilowatt-hour of energy. It also has the advantage of producing fewer greenhouse emissions than coal for equal energy produced: roughly half of natural gas’s energy comes from carbon combining with oxygen to form carbon dioxide, and the other half comes from hydrogen combining with oxygen to form water.
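A rough check of the “about half the greenhouse gases” claim, using standard textbook heats of combustion (these values are my assumption, not figures from the book):

# Why burning methane emits roughly half the CO2 of coal per unit of energy.
# Standard heats of combustion (not from the book); each reaction releases one CO2 per fuel molecule.
ch4_kj_per_mol_co2 = 890.0     # CH4 + 2 O2 -> CO2 + 2 H2O
carbon_kj_per_mol_co2 = 394.0  # C + O2 -> CO2
ratio = carbon_kj_per_mol_co2 / ch4_kj_per_mol_co2
print(f"CO2 per unit energy, methane relative to carbon: {ratio:.2f}")   # ~0.44, i.e., roughly half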

The author starts off Chapter 8, on the solar surge, by predicting that the price of solar panels will eventually come down but that installation and maintenance will still cost the consumer; on rainy days, there also has to be an alternative. Before launching into a discussion of PV cells, he provides a short synopsis of the physics of sunlight. Sunlight delivers about a kilowatt of power per square meter onto the surface of the earth, equivalent to ten 100-watt bulbs. A square mile of solar panels receives about 2.6 gigawatts of peak sunlight, which reduces to about a gigawatt of electricity at 42% efficiency; averaged over day, night, and weather, solar power is only about 25% of peak, roughly 250 watts of sunlight per square meter. He then goes on to discuss two types of solar energy sources. Solar thermal is a type in which sunlight is focused and collected to heat water, producing steam that runs a turbine. In Table III.1 this energy source is expensive, at 25.9 cents per kilowatt-hour. A solar thermal power plant in California, consisting of a tower toward which 24,000 moving mirrors direct sunlight, can generate 5 megawatts, about 0.5% of the output of a conventional gas, coal, or nuclear power plant. Because of the many moving parts, this type requires a lot of maintenance. Another kind of solar thermal plant uses a solar trough, an optical design that focuses the light without having to be repointed, so there are fewer moving parts. Spain is the biggest user of solar thermal, generating up to 4 gigawatts, or 3% of its energy use, by the end of 2010. The construction of these plants, however, depends on government subsidy. Disadvantages he notes include the requirement for sunny days, the continued dependence on subsidies to keep the cost down, and the need for extensive maintenance. The advantages are that the hot salt can be stored for later use and that the extreme temperatures reached with focused sunlight allow a high efficiency, around 50%, in producing electricity; trough designs are less efficient because they focus less sharply and the heated liquid has to flow.
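The solar numbers at the start of this chapter follow from the kilowatt-per-square-meter figure; here is the arithmetic spelled out (the 42% efficiency is the book's figure for the best, most expensive cells, discussed below).

# The solar arithmetic from this chapter.
peak_sunlight_w_per_m2 = 1000            # ~1 kW per square meter at the earth's surface
sq_mile_m2 = 1609.34 ** 2                # square meters in one square mile (~2.59 million)
cell_efficiency = 0.42                   # the book's figure for the best (most expensive) cells
capacity_factor = 0.25                   # average solar power is ~25% of peak
peak_sunlight_gw = peak_sunlight_w_per_m2 * sq_mile_m2 / 1e9
print(f"Peak sunlight on a square mile: {peak_sunlight_gw:.1f} GW")                            # ~2.6 GW
print(f"Peak electric output at 42% efficiency: {peak_sunlight_gw * cell_efficiency:.1f} GW")  # ~1.1 GW
print(f"Average sunlight: {peak_sunlight_w_per_m2 * capacity_factor:.0f} W per square meter")  # 250 W/m2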

The other type of solar energy source is solar cells. Solar cells, or PV cells, use absorbed sunlight to produce electricity based on the photoelectric effect: when sunlight strikes the cell, a photon ejects an electron from an atom, and the electron carries some of the photon’s energy to an electrode and a wire, creating an electric current. ATTOW, reasonably priced cells convert only about 10% of the photons’ energy into electricity, though this can reach 42% for the most expensive cells. In 2011, the cost of PV cells dropped to $1/watt from $7/watt a few years earlier. This, however, is the price per peak watt: average output drops to about 1/4 of peak once the varying angle of the sun and its absence at night are considered, and to about 1/8 of peak when overcast days are accounted for. The author shows a sample calculation of the return and payback time for solar cells. They also need other electronics, such as an inverter, to run appliances from their electricity, plus optional batteries, and they require maintenance. He considers them not optimal (“no profit”), as they are also heavily subsidized by the government. There are many competing PV technologies; the ones highlighted by the author are silicon, cadmium telluride, copper indium gallium selenide, and multijunction cells. Of these, silicon is the cheapest to make. The dominance of Chinese companies producing these at such low prices has had a negative impact on US companies, including those using different technologies. Another concern is that some of these materials may be in short supply, although he notes that increased demand may bring increased incentive for exploration. The different materials have varying but broadly similar efficiencies, except for the most expensive to make, multijunction cells, which can reach efficiencies as high as 42%. These have been used on the Mars Rover, and with PV concentrators they can be made more affordable because their higher efficiency allows a smaller piece to do the job. The other concern is that some of the materials used are toxic. In the end, the author provides the following solar-cell summary:
“The solar field is intensely competitive and developing fast. Prices are dropping so rapidly that the winners are likely to be decided by criteria other than solar-cell price, including cost of installation, cost of maintenance, cost of conversion to household voltages, lifetime of cells, and efficiency.”

In Chapter 9, the author discusses wind power. Wind power is normally harvested with very tall wind turbines, to take advantage of higher-velocity winds at higher elevations, and long blades, to increase the area swept through the wind. Wind power increases as the cube of the wind velocity: doubling the wind speed yields 8 times the power. (The kinetic energy of the wind is just ½mv², but the power is proportional to the energy times v, hence the cubic dependence.) A blade 63 meters long sweeps an area of about 12,000 square meters; at wind speeds of 20 mph, the power passing through that area is 10 megawatts. Because the blades spin fast, just 3 of them are enough to capture more than half the energy of the wind blowing through the circular area they span. Betz’s Law limits the energy a turbine can extract from the wind to 59%, provided there are no other turbines nearby (turbines are spaced 5-10 blade lengths apart). So the 10 megawatts calculated above is reduced to 5.9 megawatts, the maximum power that can be extracted. Wind power capacity has been doubling every 3 years, as turbines are relatively inexpensive to build and don’t require fuel. The US has built 45 gigawatts’ worth of wind farms (2.3% of electric power generation); China’s anticipated capacity was 55 gigawatts at the end of 2011 (see Figure III.8). Wind can produce electricity relatively cheaply, at 9.7 cents per kilowatt-hour. The last few paragraphs address issues that have been raised about wind power, and the author responds to each: a large grid of wind farms plus backups (such as batteries and emergency generators) can help stabilize delivery in times of low wind; aesthetics concern some people; bird deaths are a concern, but the numbers due to wind turbines are small relative to collisions with other tall structures and windows; and there are concerns about delivery of the electricity, because the strongest winds are generally far from population centers.
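The cube law and the Betz limit are easy to spell out; the sketch below simply applies them to the 10-megawatt figure quoted above rather than recomputing it from blade size and air density.

# Wind power scaling and the Betz limit, using the chapter's numbers.
betz_limit = 0.59                 # maximum fraction of the wind's energy a turbine can extract
power_through_blades_mw = 10      # power in the swept area at 20 mph, per the text
print(f"Doubling wind speed multiplies power by {2 ** 3}")                                 # cubic law -> 8x
print(f"Betz-limited extractable power: {power_through_blades_mw * betz_limit:.1f} MW")    # 5.9 MW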

In Chapter 10, the author, as promised, tackles energy storage options, especially for solar and wind energy: batteries, compressed air energy storage, flywheels, supercapacitors, hydrogen and fuel cells, and natural gas.

For batteries, he touts the sodium-sulfur battery as the best option. Sodium-sulfur batteries have the advantage of a low price per charge-discharge cycle: they can be recharged 4,500 times at 80% discharge, versus about 500 times for lead-acid and lithium-ion batteries. Here is what he had to say at the time about lithium-ion batteries: “I expect that lithium-ion batteries will never be used for large-scale energy storage; they are too expensive. Lithium costs 40 times more per pound than sodium and 10 times more per atom – a more relevant measure for batteries. With a 9-fold recharge advantage and a 10-fold cost-per-atom advantage, sodium-sulfur has a 90-fold advantage over lithium-ion.” A disadvantage of sodium-sulfur batteries is that they cannot be scaled down and are not suitable for a wide range of applications: they have to be kept at a temperature of around 350 C and contain liquid sodium. A Japanese company is developing one that can operate below 100 C. He is optimistic about the future of batteries. The market for newer, more expensive batteries was sustained because they were useful in even more expensive gadgets such as laptops, and the focus of research is on rechargeability and safety. The author notes, however, that engineering development for batteries is linear, not exponential: improvements will come, but not at the fast pace of the past.
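The 90-fold claim in the quote above is just the product of the two advantages:

# The author's 90-fold argument for sodium-sulfur over lithium-ion, as arithmetic.
recharge_advantage = 4500 / 500      # charge-discharge cycles: sodium-sulfur vs lead-acid/lithium-ion
cost_per_atom_advantage = 10         # sodium vs lithium, per atom
print(recharge_advantage * cost_per_atom_advantage)   # 90-fold advantage claimed in the quote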

In compressed air energy storage, already used in confined spaces with no ventilation such as mines, air is compressed to many times atmospheric pressure (200 atm is a typical figure), storing the energy expended by a motor-driven pump. The energy is released when the compressed air is allowed to expand and run a turbine. One disadvantage is the weight of the tank, which is about 20 times the weight of the air it holds (or 5 times for a fiber composite tank). Another issue is that air heats up when compressed (up to 1370 C at 200 atm), so there must be a way to draw the heat away.
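A standard adiabatic-compression estimate (my calculation, not the author's) gives a feel for how hot the compressed air gets; it lands near 1,360 kelvins, in the same ballpark as the figure quoted above, so the book's number presumably rests on slightly different assumptions.

# Textbook adiabatic-compression estimate of air heated by compression to 200 atm.
gamma = 1.4                      # heat capacity ratio for air
t_initial_k = 300                # ~room temperature, in kelvins
pressure_ratio = 200
t_final_k = t_initial_k * pressure_ratio ** ((gamma - 1) / gamma)
print(f"{t_final_k:.0f} K (~{t_final_k - 273:.0f} C)")   # ~1360 K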

Energy can also be stored by using a motor to spin a heavy flywheel. When a load is added, a generator for instance, the rotation slows as the kinetic energy is converted to electricity. One advantage is the ability to condition energy and smooth out power delivery. The Bevatron atom smasher in Berkeley used flywheels of about 10 tons each. Their energy storage density is comparable to lithium-ion batteries, about 30 watt-hours per pound. Beacon Power’s current setup costs $1.39 per kilowatt-hour. The high cost makes the author think flywheels will continue to be used to condition power but not for large-scale energy storage.

Capacitors are composed of two metal plates carrying opposite charges, separated by an insulator. They can hold their energy for longer periods of time than batteries can. Newly developed supercapacitors can store as much as 14 watt-hours per pound, about a third of the energy of a lithium-ion battery of similar weight, and they cost about 3 times as much. Supercapacitors are probably best used in conjunction with a battery, providing quick bursts of power that batteries can’t; they can also improve the efficiency of regenerative braking, absorbing and transferring energy at a faster rate.

A fuel cell is a battery that does not need to be recharged because the chemical reactants are supplied continuously as “fuel”. In a hydrogen fuel cell, hydrogen and air are pumped in to generate electricity. Efficiency is low, about 25%, and the author is not sure they will replace batteries or generators.

In the last section, the author compares what he deems the best energy storage technology, the sodium-sulfur battery, to a natural gas generator. His calculations show that sodium-sulfur batteries’ capital cost is $5 per deliverable watt, while natural gas capital cost is $1 per deliverable watt. Once the cost of fuel is taken into account, natural gas easily wins out over solar or wind as the energy source. Batteries compete only if they are run at a low duty cycle, e.g., 1 hour per day; then the per-watt capital cost comes down to 50 cents. The author concludes that natural gas is hard to beat.

The author starts off Chapter 11 with a list of key items he thinks are important to know about nuclear energy (he calls it an “executive summary”).  A brief statement about each of these key items is given below.  For more details, see reading notes.

Unlike nuclear bombs, which contain highly enriched uranium, nuclear reactors cannot explode like a bomb because they use low-enriched uranium.

Capital costs for nuclear power plants are high, but the delivered electricity is cheaper because fuel and maintenance costs are low. Nuclear power plants have a very high capacity factor, operating 90% of the time, with downtime due only to maintenance. This improvement in capacity factor has raised revenue by a factor of about 1.6, lowering the cost of delivered electricity.

Small modular reactors (300 megawatts or less) may be the solution to the high capital cost of building a new reactor: they reduce the initial investment, and their modular design allows power capacity to be added incrementally.

There is enough economically recoverable uranium to last 9,000 years at current usage if low-grade uranium ore is used. Uranium ore costs about 0.2 cents per kilowatt-hour of electricity produced.

The death toll from the Fukushima nuclear accident and meltdown that followed the 2011 earthquake and tsunami is estimated at only about 100 (mostly projected cancer deaths) out of the roughly 15,000 total deaths from the disaster; it may be fewer, as thyroid cancer is readily treatable.

Nuclear waste storage is technically feasible but suffers from bad public perception and political posturing. In the US, nuclear waste contains plutonium (in France it does not, because the plutonium is extracted by reprocessing). Here are the reasons the author thinks nuclear waste is not the problem it is made out to be: plutonium has a long half-life of 24,000 years and thus does not contribute much to the radioactivity of the waste, and it is highly insoluble in water, so very little will end up in groundwater. The greatest danger from plutonium is inhalation; it takes only 0.00008 g inhaled to cause one cancer (versus 0.5 g if dissolved in water).

Construction of new nuclear power plants will be “exploding” in the next several years in places like China and France; Japan is helping build some of these even as some of its own reactors are taken offline.

The author devotes Chapter 12 to a promising energy technology that has been in development for decades: fusion. Fusion is a promising source of energy because it can be fueled by the most abundant element in the ocean (by number of atoms), hydrogen. It can also be fueled by deuterium, which, while only about 1/6,000 as abundant as ordinary hydrogen, can be separated from it inexpensively (the next heavier hydrogen, tritium, is too rare but can be generated). Optimism about fusion as an energy source has been around for decades. Fusion has actually been achieved, in the form of the hydrogen bomb in the 1950s; as a safe source of energy, however, a more controlled process must be developed. Advantages of fusion listed by the author include the abundance of the primary fuel, hydrogen, and the relative lack of radioactive waste. He points out, however, that the neutrons produced in the typical fusion reaction (deuterium + tritium → helium + neutron) can stick to materials and make them radioactive, albeit far less so than in a uranium fission plant. Because tritium is quite rare (16 pounds in all the world’s oceans), some fusion reactors are being designed so that the product neutrons breed tritium by bombarding lithium atoms. In another fusion reaction, hydrogen + boron → 3 helium + gamma ray, no neutrons are formed; the gamma rays don’t produce any significant radioactivity, just a lot of energy.

In the next few sections, the author discusses 5 of the most talked-about proposals for developing fusion as an energy source. The tokamak, whose Russian name stands for “toroidal chamber with magnetic coils”, was invented in Russia in the 1950s and has dominated attention and research effort over the last 60 years of fusion exploration. In a tokamak, the approach is thermonuclear fusion: extremely high temperatures overcome the electrical repulsion between hydrogen nuclei so they can get close enough to fuse through the short-range strong nuclear force. The National Ignition Facility, located at the Lawrence Livermore Lab, is developing a technology that uses lasers to heat a small amount of hydrogen to tens of millions of degrees and ignite it, getting deuterium-tritium fusion started; the author considers this design likely to be the first to reach the break-even point in controlled fusion. In beam fusion, ions are accelerated so that fusion occurs when they collide with target atoms; this technique is already used in commercial neutron generators. In muon fusion, discovered in 1956 in a cold liquid-hydrogen chamber, a negatively charged muon (207 times heavier than an electron) binds to a proton in a hydrogen atom, ejecting its electron; the resulting neutral muon-proton atom can then approach a deuteron closely enough to fuse, releasing energy and creating a helium nucleus. The author devotes the last section to the story of the 1989 claim of cold fusion, “verified” by scientists from top institutions, only to fizzle out as the consensus evolved to declare the results unverifiable and the methods questionable.


In Chapter 13, the author discusses biofuels and warns right away that some of what he is about to say may offend people passionate about them; he uses a somewhat tongue-in-cheek tone in places. Right off the bat, he lists some of the contentious conclusions he has arrived at: “corn ethanol should not count as a biofuel as it does not reduce greenhouse emission; biodegradable and recycling are overhyped from a global warming perspective; ethanol from cellulose offers the best hope for a significant biofuel component to solving the energy problems; and the main value of biofuels is not reducing global warming but in increasing energy security.”

The author gives the following reasons why ethanol from corn should not be considered a biofuel: growing corn uses a lot of fertilizer; it takes a lot of oil and gasoline to run the farm machinery; and fermenting the sugar an acre of corn produces does not yield enough ethanol to be carbon neutral or to give a net carbon dioxide reduction. Using corn to make ethanol has also raised prices for corn-based food. Corn ethanol does have the advantage of providing another local source of transportation fuel and contributing to energy security (the author estimates about 3% of US consumption and 5% of US imports), even though it provides only 2/3 the energy of gasoline on a per-gallon basis.

From a global warming point of view, biodegradable materials are “bad” because they decompose to produce carbon dioxide. The author concedes, however, that from an aesthetic and animal-welfare point of view (reducing the plastic that ends up in our oceans, kills animals, and clutters the landscape), biodegradability has benefits. The author does not consider waste cooking oil a biofuel: using waste oil as fuel adds carbon dioxide to the atmosphere and is no better than petroleum. He also considers recycling paper bad for global warming: burying paper sequesters its carbon, whereas letting it biodegrade adds carbon dioxide to the air, and if paper is not recycled, more trees must be grown to make new paper, which removes carbon dioxide from the atmosphere. The Altamont landfill in California generates 13,000 gallons of liquefied natural gas that it uses to operate its waste and recycling trucks; this captures about 93% of the landfill’s methane, while the other 7% leaks into the atmosphere as a potent greenhouse gas.

Cellulose, normally indigestible by humans, can be converted to the liquid fuel ethanol by fermentation using enzymes from microorganisms, fungi, or yeast. The top candidates for the cellulose are switchgrass and miscanthus, a grass that grows over 11 feet tall and can yield three crops per year. Miscanthus is projected, in theory, to produce 1,150 gallons of ethanol per acre, compared to 440 gallons per acre for corn. Cellulose provides about 1/3 the energy of an equal weight of gasoline. The author estimates that replacing the 1 billion tons of oil we use each year would require growing miscanthus on an area 560 miles on each side (6 times the size of Iowa), assuming no energy loss in the conversion. The author thinks algae have even better potential for producing fuels: the “right kind of algae” could produce oil that can be used as diesel without expensive conversion steps in between.
Algae are very efficient at producing biomass from sunlight: every cell can produce biomass, compared to just the leaf surface cells in grasses. Proponents of algae for oil production claim that algae can “produce ten times the energy per acre that Miscanthus can produce”. Commercial ventures lead the research and development of this oil-producing technology; genetic engineering, primarily by inducing mutations, is the technique being used to find the “right kind of algae”. Algae production can be very sensitive to environmental factors and biological contamination, whereas growing miscanthus is less vulnerable to extreme weather and invasive species. In the end, the author does not put a high value on bioethanol or other biofuels for limiting the greenhouse effect: even if biofuel replaced gasoline, there would be only a limited reduction in the predicted temperature rise. In terms of energy security, bioethanol may come too late and may be too expensive to compete with cheaper fuels like compressed natural gas, synfuel, or shale gas.
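A quick check of the author's miscanthus figures quoted above, with Iowa's land area filled in as an outside assumption:

# Sanity check of the miscanthus estimates.
side_miles = 560
iowa_sq_miles = 56_273                        # approximate land area of Iowa (assumption, not from the book)
area_sq_miles = side_miles ** 2
print(f"Area: {area_sq_miles:,} sq mi, about {area_sq_miles / iowa_sq_miles:.1f} times Iowa")   # ~5.6x, i.e., ~6 Iowas
print(f"Yield advantage over corn: {1150 / 440:.1f}x gallons of ethanol per acre")              # ~2.6x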


At the beginning of Chapter 14, the author reiterates that while the US is running low on oil, this is not the case for natural gas and coal. As he points out, this helps energy security but not greenhouse emissions. A large supply of natural gas and coal also does not help the energy sectors that require liquid fuels, especially transportation, since the transportation infrastructure is built around oil. Shale oil and shale gas, fossil fuel alternatives, were discussed in previous chapters; in this chapter the author covers some other “unconventional” sources of fossil fuel.

Synfuel: The Fischer-Tropsch chemical process for converting coal to oil was developed in Germany in the 1920s and used extensively there during World War II. This process, referred to today as CTL (coal to liquid), was used by the company Sasol in South Africa to produce oil during the embargo years of the apartheid era. In 2011, Sasol announced plans to build a gas-to-liquid (GTL) plant in Louisiana to produce about 100,000 barrels per day of diesel fuel from natural gas. The author predicts growth in the construction of synfuel facilities; subsidies are no longer necessary because natural gas prices are so low.

Coal Bed Methane: Coal bed methane is methane extracted from deep coal deposits by drilling down and allowing the methane to escape; fracking and horizontal drilling can be used as well. This methane is relatively pure, free of hydrogen sulfide and of heavier hydrocarbons like propane and butane, and is nicknamed “sweet gas”.

Coal Bed Gasification: In this process, deeply buried coal is partially burned to extract its energy without digging it up and bringing it to the surface. The partial combustion produces other fuels, carbon monoxide and hydrogen, a mixture called coal gas. Another advantage of this process is that the ash is left buried. The coal gas can also be collected as feed gas for the Fischer-Tropsch process and for methanol synthesis. The disadvantages include heat loss, wasted unburned coal, and potential pollution of the water table.

Enhanced Oil Recovery (EOR): Only about 20% of the oil in a reservoir can be extracted by letting it rise under its own pressure, because it is sparsely distributed in rock pores and cracks. In secondary oil recovery, the oil is flushed out by water, natural gas, or carbon dioxide, boosting recovery to about 40%; using carbon dioxide has the added advantage of sequestering it, although the amount is a very small fraction of what would need to be removed from the atmosphere. Enhanced oil recovery methods aim to recover the remaining 60% through techniques such as: reducing the oil’s viscosity by heating it, either with steam injection or by pumping down air or oxygen and burning some of the oil to heat the rocks; pumping down soap (surfactant) to release the oil from the rocks; and sending down bacteria that can break down the more viscous, longer-chain hydrocarbons.

Oil Sands: Canada is third in the world, after Venezuela and Saudi Arabia, in recoverable oil reserves. Most of this oil is in the form of oil sands (or tar sands): heavy crude oil called bitumen mixed with clay and sand. Estimates run from a conservative 200 billion barrels to an optimistic 2 trillion barrels (by Shell Oil); 2 trillion barrels would be enough to supply the US for 250 years, or the world for 60 years, at current consumption. Objections to exploiting the Canadian oil sands include the ugly open-pit mines left behind (because the oil is largely near the surface), local water pollution, and the requirement for large amounts of water. The recovery process also uses up about 12% of the energy of the oil extracted.

The author devotes Chapter 15 to other alternative sources that hold so little promise in cost-effectiveness and efficiency that he refers to them as “alternative alternatives”. He thinks hydrogen automobiles were never a good idea because of two disadvantages they share with electric cars: hydrogen requires a lot of energy to extract from water by electrolysis or to produce by reacting methane with water (a process that also produces carbon dioxide), and using hydrogen as fuel returns only part of that energy input. It is much cheaper to use methane directly, by combustion or in a methane fuel cell. The author lumps geothermal, tidal power, and wave power into a low-power-density category: they mainly benefit areas where energy from these sources happens to be concentrated. Nevertheless, these have been commercialized; some have proven economically viable, while others are still working through high initial capital costs. See Reading Notes for more details.

The author begins Chapter 16, on electric cars, by listing the reasons for Americans’ (and others’, I suppose) dependence on gasoline automobiles (he uses the word addiction). Automobiles have contributed about 1/40 of a degree Celsius to the global temperature rise and will likely contribute about the same amount over the next 50 years. A 60% increase in vehicle miles driven in the US over the next 50 years, as projected by the US Energy Information Administration, could raise this contribution to 1/25 of a degree Celsius, unless increased mpg standards kick in and hold it at 1/40 of a degree Celsius (0.045 F). In 2010, hybrids constituted only 2.5% of all vehicle sales in the US, and electric cars an even smaller share. Another important issue in evaluating electric cars is the trade deficit, about half of which comes from importing oil. A couple of battery issues also need to be addressed: range and cost.

The author is actually a proponent of hybrid cars and thinks that in a decade or so all cars will be hybrids. He does think electric cars are a fad. He notes that energy density, cost, and recharge time are the three fundamental problems that need to be solved before electric cars become a viable option for the average American car buyer. [Note: this book was published in 2012. A lot has changed since then about electric cars.]

Per pound, electric batteries store only about 1% of the energy contained in gasoline. Because electric motors can be up to 4 times more efficient than internal combustion engines, the effective fraction rises to about 4%.

Most of the cost of running an electric car comes from the batteries themselves, not the electricity. Most electric cars use lithium-ion batteries, which range in price between $30 and $150 per pound. Calculations by the author, using assumed average values for battery price and number of recharges, put the energy cost for electric cars at about $2 per kilowatt-hour delivered, compared to 10 cents per kilowatt-hour for home electricity: “Replacement cost overwhelms electricity cost.” While it was an interesting read, this is where the numbers most need updating, as technology, prices, and the market have changed quite a bit since the book was published. His conclusions: the Tesla Roadster costs about 70 cents per mile, the Chevy Volt 75 cents per mile, and the Nissan Leaf 44 cents per mile. Adding to these operating costs is the expense of eventually replacing the batteries, which can run into the tens of thousands of dollars.
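The method behind the author's $2-per-delivered-kilowatt-hour figure is to spread the battery's replacement cost over its lifetime output. A minimal sketch with illustrative numbers (hypothetical, not the author's exact inputs) shows how such a figure arises:

# Illustrative only: hypothetical pack cost, capacity, cycle life, and consumption,
# chosen to show the method, not to reproduce the author's exact inputs.
pack_cost = 10_000            # dollars to replace the battery pack (hypothetical)
usable_kwh = 10               # usable capacity per full discharge (hypothetical)
cycles = 500                  # charge-discharge cycles before replacement (hypothetical)
electricity_per_kwh = 0.10    # home electricity price quoted in the text
replacement_per_kwh = pack_cost / (usable_kwh * cycles)     # $2.00 per delivered kWh
total_per_kwh = replacement_per_kwh + electricity_per_kwh
kwh_per_mile = 0.35                                         # rough EV consumption (hypothetical)
print(f"~${total_per_kwh:.2f} per delivered kWh, ~{total_per_kwh * kwh_per_mile * 100:.0f} cents per mile")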

Lead-acid batteries provide the cheapest energy storage for electric cars but also the shortest range; they carry only half the energy per pound of a lithium-ion battery. Their combined electricity and replacement cost per mile is comparable to that of gasoline.

The author offers a more nuanced take on the effective miles per gallon of electric cars once the source of the electricity is taken into account. While electric motors and batteries are quite efficient, generating and delivering the electricity and charging the car involve energy losses that are not accounted for in the EPA’s calculations. Based on his calculations, electric vehicles have about 27% overall energy efficiency, just 7 points higher than the 20% efficiency of an internal combustion engine. In addition, if the electricity used for charging comes from a coal-fired plant, the carbon dioxide emissions are much higher than for a gasoline car.
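The 27% figure comes from multiplying a chain of losses between the power plant and the wheels. The factors below are illustrative assumptions (not the author's published breakdown) that happen to land near his result:

# Illustrative chain of losses for charging an EV from a fossil-fuel power plant.
plant_efficiency = 0.40        # fuel energy -> electricity at the power plant (assumed)
transmission = 0.93            # grid transmission and distribution (assumed)
charge_discharge = 0.85        # charger plus battery round trip (assumed)
motor_drivetrain = 0.85        # electric motor and drivetrain (assumed)
overall = plant_efficiency * transmission * charge_discharge * motor_drivetrain
print(f"Well-to-wheels efficiency: {overall:.0%}")   # ~27%, vs ~20% for a gasoline engine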

Ordinary Hybrids: The Prius (which the author owns) uses a battery to run the car during its most inefficient cycle. The Prius has a small NiMH battery that stores about 2 kilowatt-hours, enough for about 4-6 miles. The battery kicks in when the car accelerates from a stop and recharges when it slows down: the Prius uses regenerative braking, in which the wheels turn the electric generator to recharge the battery, which in turn slows the car. The battery can also be recharged by a generator run by the gasoline motor. Because the battery is not used all the time, its life span is longer; a Consumer Reports test showed little degradation in the battery of a Prius driven 200,000 miles in 8 years, though most of those miles were probably highway miles and therefore did not use the battery much. A Prius battery replacement costs $2,200, which works out to about $1,100 per kilowatt-hour of capacity, comparable to the Roadster and Leaf.

Battery Challenges: The author offers the following factors that need to be improved to make electric vehicles economically competitive:
·         Low energy density (energy per pound), about 1/25 that of gasoline
·         High cost per mile, 5-8 times that of gasoline (if one includes electricity cost and battery replacement cost)
·         Large storage volume, about 10 times bigger for the same range
·         Long refill/recharge time
·         High initial capital cost of batteries, in the tens of thousands of dollars

Battery chemistry is well established; the challenge is in the nanotechnology of the electrodes and electrolytes. The author does not think this will be solved in the next 20 years to the point where electric cars are competitive with gasoline cars, though he still believes the future of autos lies in cheap batteries. Electric cars running on lead-acid batteries, with a modest 40-60 mile range, may be attractive in developing countries where long commutes are not the norm.

In Chapter 17, the author touts the idea of using natural gas to run automobiles, citing advantages and some disadvantages and showing his usual calculations of the economic benefits or lack thereof. He starts by pointing out that gasoline costs about 2.5 times more than natural gas (ATTOW) for the same energy content: he estimates that CNG costs 4 cents per mile versus gasoline’s 10 cents per mile. Running cars on natural gas requires some infrastructure and changes to the automobile itself. He favors compressed natural gas over liquefied natural gas, which must be kept at a temperature of -259 F. Natural gas can be compressed to about 250 atm but must be stored in a steel or fiber composite tank; at that pressure it contains 11 kilowatt-hours of energy per gallon, compared to 33 kilowatt-hours per gallon for gasoline. Honda has been selling a CNG car since 1998. Extra costs go into a device for compressing home natural gas to fill the car’s tank and a high-pressure fiber composite tank to store the CNG. The cost of the compressor is avoided if the car is used where there are numerous CNG filling stations, or reduced if the compressor can be shared; many companies with fleets of CNG cars have found this economical because the fleet shares one compressor. A regular car can be converted to run on both gasoline and CNG. He estimates a 20% return on investment if the price of the compressor is excluded, but only break-even if it is included. In areas where gasoline is much more expensive than natural gas, CNG is the more cost-effective option. Globally there are 12 million CNG cars: 3 million in Pakistan, 1 million in India, about half a million in the OECD, but only about 150,000 in the US (ATTOW). Other advantages of CNG noted by the author: it is less dense than air, so it rises when it leaks; it has a higher ignition temperature; it reduces engine wear; and the sturdy tank can survive crashes with hardly a dent. Its main disadvantage is that for the same tank volume, its range is only a third that of gasoline, so a car needs a larger tank to reach the 250-mile range that gasoline cars average; the heavier tank, in turn, reduces the car’s mpg.
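The range and cost comparisons in this chapter reduce to simple ratios:

# CNG versus gasoline, using the chapter's numbers.
cng_kwh_per_gallon = 11        # energy per gallon of CNG at ~250 atm
gasoline_kwh_per_gallon = 33
print(f"Range for the same tank volume: {cng_kwh_per_gallon / gasoline_kwh_per_gallon:.2f}x gasoline")   # ~1/3
print(f"Tank volume needed for equal range: {gasoline_kwh_per_gallon / cng_kwh_per_gallon:.0f}x larger")  # 3x
print(f"Fuel cost per mile: 4 cents (CNG) vs 10 cents (gasoline), a factor of {10 / 4:.1f}")              # 2.5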

In Chapter 18, the author discusses the viability of fuel cells as an energy source, primarily for electricity and transportation. He starts by describing a hydrogen fuel cell as simply a device that runs electrolysis in reverse to produce electrical energy. He points out that simply burning hydrogen and using the heat to generate electricity is less efficient, at 20-35%; fuel cells running on hydrogen have reported theoretical efficiencies of up to 83%. Other advantages of fuel cells are that they are relatively clean, small, simple, and quiet.

Fuel cells can also run on methane and air. Bloom Energy produces a fuel cell that runs at high temperatures and uses a solid ceramic electrolyte. At these high temperatures the reaction rates are fast, eliminating the need for a catalyst. In this fuel cell, methane reacts with water to liberate hydrogen (along with carbon monoxide), which acts as the fuel, avoiding the need for more expensive pure hydrogen; this process is called “reforming”. The solid ceramic electrolyte, while proprietary in its chemistry, is reported to be of the YSZ type, in which an yttrium compound stabilizes the zirconium oxide crystal. Doubly charged oxygen ions diffuse through the ceramic electrolyte and combine with the hydrogen on the other side, leaving behind electrons that flow through an external wire, generating electricity. Bloom Energy fuel cells have the advantage of being installed near the consumer, eliminating the roughly 7% energy loss in transmission. It is an expensive system at $7-8 per installed watt, 7 times more expensive than a natural gas turbine plant. The Bloom Energy installation at the Fireman’s Fund headquarters in California allows the waste heat to be used for heating nearby buildings.

Another fuel cell manufacturer, United Technologies Corporation, has been producing the fuel cells used in space missions and also has installed 75 megawatts of phosphoric acid fuel cells in 19 countries.  In these fuel cells, the hydrogen is externally produced in a reformer through the same methane-water reaction.  They are about 40% efficient.

Because it requires a high-temperature electrolyte, the Bloom Energy fuel cell technology is not easily transferable to automobiles. A catalyst can solve the reaction-rate problem at lower temperatures, but platinum catalysts are very expensive. The Honda Clarity uses a hydrogen fuel cell that depends on such expensive catalysts, and Toyota plans to release one this year, according to the author.

Coal is a primary source of electrical energy because it is relatively inexpensive. In Chapter 19, the author explains what “clean coal” means. The chapter starts with a list of reasons the author calls coal the “filthiest fuel we have”: twice the greenhouse emissions per kilowatt-hour, sulfur dioxide emissions that produce acid rain, mercury pollution, particulate pollutants such as fly ash and soot, and the severe air pollution in Beijing. A couple of processes can prevent or reduce the pollution from coal-burning plants in China: electrostatic precipitators (charged plates that attract the particles and remove them from the exhaust gas) and sulfur dioxide scrubbers, which spray the gas with soap and other chemicals such as sodium hydroxide, lime, sodium sulfite, and ammonia [all bases].
These add about 1-2 cents per kilowatt-hour to the cost but can do much to lower the health costs of the air pollution produced. China is “desperately working” on its air pollution problem by cleaning up the exhaust from its coal plants. China is also a leading and growing contributor to greenhouse emissions: a coal plant producing 1 gigawatt of electricity emits a ton of carbon dioxide every 2 seconds, and with China adding a new gigawatt-scale coal plant every week (ATTOW), that emission rate grows by another ton per 2 seconds each week.

What is clean coal? An oxymoron, to coal’s opponents. Clean coal is the term for coal combustion with carbon dioxide sequestration, going beyond scrubbers and particle traps; another term is carbon dioxide capture and sequestration, or CCS. A few methods have been proposed, the best of which requires redesigning the entire coal plant so that the carbon dioxide is captured within the plant itself rather than from the stack. The most developed design was called FutureGen: oxygen is separated from the air so that combustion takes place in purer oxygen, and the coal is gasified by reacting it with hot water, producing hydrogen and carbon dioxide, which are easily separated. The hydrogen is burned in a turbine, with the waste heat driving a second turbine (combined cycle), while the separated carbon dioxide is compressed and pumped underground. In Texas, about 10,000 wells already pump carbon dioxide underground for enhanced oil recovery. Other potential storage locations include empty coal mines, depleted oil and gas wells, and underground reservoirs containing salt brines (their impermeability is good for keeping the carbon dioxide stored for hundreds or thousands of years); Norway has stored 11 million tons of carbon dioxide in such brine reservoirs. In 2009, FutureGen was cancelled due to cost overruns.

Carbon sequestration, according to the US EIA, can add 3 cents per kilowatt-hour to the cost of electricity. While this is not a huge amount for people living in the US, it is substantial for a country like China, which produces about 4 trillion kilowatt-hours of electricity from coal. At 3 cents per kilowatt-hour, that is an annual bill of $120 billion, projected to grow about 10% a year along with China’s electricity use. The author is of the opinion that, in tackling greenhouse emissions, it might pay to help a powerhouse like China develop the low-cost technology to bring its energy usage and/or emissions down. China can also be encouraged to convert from coal to natural gas, cutting the associated greenhouse emissions roughly in half.
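The $120 billion figure is straightforward arithmetic; the five-year projection below is just an illustration of the 10% growth rate mentioned.

# The China carbon-capture bill quoted in the text.
china_coal_kwh = 4e12          # ~4 trillion kWh of coal-fired electricity per year
ccs_cost_per_kwh = 0.03        # ~3 cents per kWh added by sequestration (US EIA figure cited)
annual_bill = china_coal_kwh * ccs_cost_per_kwh
print(f"${annual_bill / 1e9:.0f} billion per year")                                           # $120 billion
growth = 1.10                  # ~10% annual growth in electricity use, per the text
print(f"After five years of 10% growth: ${annual_bill * growth**5 / 1e9:.0f} billion per year")  # ~$193 billion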

In the last section, Part V, the author offers a comprehensive summary of many of the recommendations and key information from the previous chapters, starting off with the following disclaimer: “The role of a science advisor should not be to advise but rather to inform and educate so that the president knows and understands enough to be able to make the right decisions…you will have to balance many things including diplomacy, economic, justice, and politics – things that scientists like me have not mastered. Any advice I give you is based on my narrow technological understanding of the world…”  The author repeats his previous statement that the “true energy crisis” in the United States and the world stems from energy security and global warming.

While there is no shortage of energy sources like coal and natural gas, the energy security issue comes from a shortage of oil: the widening gap between the US’s ability to produce petroleum domestically and the growing demand for gasoline, diesel, and jet fuel. This shortage imperils national and military security and also contributes greatly to the country’s trade deficit, which harms the economy. He suggests the following immediate action: “rapidly exploring our great shale gas and oil reserves, building an infrastructure to handle the distribution, and creating a strong synfuel capability”. Natural gas, synfuel, and shale oil have the potential to make the most meaningful impact on our energy security and trade balance.

The global warming issue stems from the significant greenhouse emissions from using coal. He reiterates, however, that most future emissions will come from developing countries, and therefore it is important that at least some of the solutions we come up with be affordable enough to apply in poorer countries. Accordingly, he recommends as an immediate action a “large scale conversion from coal to natural gas”, the “only plausible method that holds up to scrutiny and that may be economically viable”.

In the next sections, the author details his categorization and prioritization recommendations for the different technologies in terms of their long-term and short-term prospects.  These technologies are classified as (verbatim list):

Important part of our energy future: energy productivity (both efficiency and conservation), hybrid and other autos with improved mileage, shale gas (for coal replacement, autos, synfuels), synfuel (gas to liquid and coal to liquid), shale oil, and smart grid

Have breakout potential: photovoltaics, wind (and an improved grid to deliver it), nuclear power (both old and new generations), batteries (to back up PV’s and wind), biofuel (particularly grasses such as Miscanthus), fuel cells (particularly methane – based), and flywheels

Least likely to solve our problems: hydrogen economy, all-electric autos and plug-in hybrids, corn to ethanol, solar thermal, geothermal, wave and tidal power, methane hydrates,  and algae biofuel

He elaborates on many of these items but most of them are repetitions of statements the author has previously made in the other chapters. A few notable statements are:

Subsidies help with rapid competitive development; the rapid decrease in the cost of solar is a good example. But they can also prop up technologies with no long-term promise.

On energy catastrophes, the author recommends that dangers that are not measurable or detectable should not drive policy, and he repeats his suggestion to adopt the Denver dose standard for radioactivity exposure limits.

Risk-benefit calculations are complex and can be “misleading”. He cites the example of whether the virtual moratorium on new nuclear plants in the US saved lives over the last 30 years if it resulted in increased coal use that released harmful pollutants such as mercury and sulfuric acid. How should dangers from different modes of energy generation be compared in a quantitative way?

On skepticism and optimism bias, “claims based on conviction are not as valid as those based on objective analysis”.

The author notes that “green, renewable, and clean” are “feel-good” terms that can be interpreted to rule out important energy sources such as nuclear power, natural gas, and synfuels. “Better to use sustainable and to interpret that as sustainable over the next 20-40 years, since we can’t even guess at technology beyond those horizons.” [Reader’s note: the sun as an energy source is truly sustainable for another estimated 5 billion years, but our technologies and the material resources necessary to extract this energy may not be.] “Even better is to use alternative, since our balance of payments and transportation energy security do make for a current crisis.” Further, he notes, very practically I think, that energy sources such as shale gas may be greenhouse emitters and not sustainable over the very long term, but they will be helpful in keeping us energy secure “through a tough period” until better, more sustainable alternatives are found. He also ascribes more truth to the statement “to be truly sustainable, it must be profitable”, with profits taking into account indirect costs such as those to the environment (“Unfortunately, there is no true consensus on how to measure those costs”). Lastly, “a gallon saved is a gallon not imported” is another aphorism the author likes.

At the end of the book in the section “Your Legacy”, the author has the following advice for future presidents: “Your greatest challenge will be striking a balance between global warming and energy security…To earn your place in history, you have to have vision, trust in science and objective analysis, and think long-term.”


Richard A. Muller is a professor of physics at the University of California, Berkeley. He is the best-selling author of Physics for Future Presidents and The Instant Physicist. He and his wife live in Berkeley, California.


READING NOTES

PART I: ENERGY CATASTROPHES
·         Energy use in the United States alone is huge: about 20 million barrels of oil each day. Because of these huge numbers, energy accidents normally make it on the news in a big way as well.
·         In this section, the author tackles 3 major energy catastrophes and offers facts and a suggestion on how to interpret the ramifications of these accidents.
·         “We need to get our facts right, put the consequences in perspective, clear up misimpressions, and get to the core of what really happened, or is still to happen.”


Chapter 1: Fukushima Meltdown
      In March 2011, a huge earthquake measuring 9.0 on the Richter scale hit Japan, generating a tsunami 30 feet high, and up to 50 feet in some places. About 15,000 people died and 100,000 buildings were destroyed.
      One of the recipients of the huge amount of energy unleashed by this earthquake, through a 50-foot tsunami, was the Fukushima nuclear reactor. At the site, two people died due to the earthquake and one due to the tsunami. No deaths were reported due to the nuclear meltdown that ensued.
      Nuclear energy releases are huge: fission of an atom of Uranium 235 can produce 20 million times the energy released in the decomposition of a molecule of TNT.
      Along with energy, high-energy neutrons are also released, which is the basis for the enormously rapid and huge energy release that fissile material is capable of.  In a nuclear reactor, the energy production must be moderated: only 4% of the uranium fuel is uranium-235, and moderators such as carbon or water are employed to slow down the reaction (only one of the emitted neutrons triggers a new fission) while still maintaining a steady release of energy.
      Reactivity accidents result from runaway chain reactions when the process undergoes uncontrolled fission, which starts slowly at first and builds up to an energy density that then results in a powerful explosion.
      In the Chernobyl reactivity accident of 1986, what killed most people was the radioactivity released, not the reactor explosion. In the Fukushima incident, the reactor did not explode, and pumps kept working to remove the heat produced by residual radioactivity after the reactors shut down on impact.  The cooling pumps stopped working after 8 hours because extensive infrastructure failure left no external source of power to keep them going.  Without the cooling pumps, the fuel overheated and melted, resulting in a release of radioactivity second only to the Chernobyl accident.
      The most dangerous radioactivity released is that from iodine-131 and cesium-137. I-131 has a half-life of 8 days; it decays rapidly, releasing radioactivity as it does, making it the biggest source of radioactivity initially.  When it enters the body, it accumulates in the thyroid, where it can cause cancer.  I-131 absorption by the body can be mitigated by taking potassium iodide; normal iodine from this salt saturates the thyroid and prevents or slows down the absorption of the radioactive isotope.
      Cs-137 decays more slowly, so its initial impact is lower but it lasts longer.  Its half-life is 30 years.
      Sr-90 also undergoes slow decay. The slow decay means these isotopes are around longer and can deposit and accumulate in plants and animals that are consumed, concentrating in bones.
      An exposure of 100 rem or more will cause immediate radiation illness (nausea, weakness, loss of hair); at 250-350 rem, there is a 50% chance of death if untreated.
      The author offers the following formula for estimating excess cancer deaths: (population x average dose in rem) / 2500.  In the example he gives, this formula estimates that a population of 22,000 with an average exposure of 22 rem may see about 194 extra cancers.  To give some perspective, a 20% incidence rate of cancer for a population of 22,000 is about 4,400 cancers.  Even though the number of cancers caused by radioactivity is less than 5% of that baseline, they probably will be detectable because most of them will be thyroid cancers due to radioactive iodine exposure.
      Natural exposure to radiation is about 0.3 rem per year from cosmic radiation and from uranium, thorium, and naturally radioactive potassium in the ground, plus another 0.3 rem from x-rays and other medical treatments. In Denver, Colorado, add another 0.3 rem from radon emitted by tiny concentrations of uranium in granite. Despite this, Denver has a lower cancer rate than the rest of the US, and the probability of dying from cancer due to that extra dose is about 0.00012, a number so small that it prompts him to ask rhetorically, “Should an undetectable danger play a major role in determining policy?” He further states that the International Commission on Radiological Protection recommends evacuation when the radiation dose exceeds 0.1 rem per year, one-third the extra dose that Denver gets; this threshold was used to mandate evacuations at Chernobyl.
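      As a quick numerical sketch (in Python, not from the book), here is the author's rule of thumb applied to the two examples above; all of the inputs are the figures quoted in these notes.

    def excess_cancers(population, avg_dose_rem):
        # Author's rule of thumb: excess cancers ~ (population x average dose in rem) / 2500
        return population * avg_dose_rem / 2500

    # Fukushima-area example: 22,000 people at an average of 22 rem
    print(excess_cancers(22_000, 22))    # ~194 extra cancers
    print(0.20 * 22_000)                 # ~4,400 cancers expected anyway at a 20% incidence rate

    # Denver example: one person receiving the extra ~0.3 rem for one year
    print(excess_cancers(1, 0.3))        # 0.00012, matching the probability quoted above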
      The Fukushima Nuclear Reactor was not built to withstand a 9.0 earthquake and a 50-foot tsunami.
      Should the Fukushima accident be used as a reason for ending nuclear power?  The author offers the following guidelines: 1) “Make a nuclear power plant strong enough that if it is destroyed or damaged, the incremental harm is small compared to the damage done by the root cause.” And 2), the Denver dose should be used as the standard in planning a disaster response, e.g., the ICRP threshold for evacuation should be raised to at least 0.3 rem or 3 millisieverts.
      The author contends that the Fukushima reactor was designed adequately when viewed with these standards in mind.


CHAPTER 2: THE GULF OIL SPILL
·         The author takes a hard look at references made by the president and others to the Gulf Oil spill as the “greatest environmental disaster of all time” by offering some factual perspectives on the damage wrought by the spill.  The accident killed 11 people and injured 17 more.
o   6,000 dead animals due to the oil spill, versus 100 million to 1 billion bird deaths each year due to glass windows and another 100 million due to high-voltage electric lines
o   Beaches remained clean because BP hired fishermen to distribute buoys and barriers and spread dispersants to break up the oil, versus the oil and tar that covered the Alaskan shores during the Exxon Valdez spill.
·         Author’s description of the Deepwater Horizon accident: The oil rig sat above 5,000 feet of water.  A flexible pipe 23,000 feet long connected it to the oil source 18,000 feet below the seafloor.  When the rig exploded, the pipe was damaged and oil started gushing out at 26 gallons per second. The leak was not plugged until July 15, 2010. It is estimated that the spill released 250 million gallons, or about a million cubic meters. Despite the continued flow of oil, the affected area did not increase any further; the author surmises that this was likely due to the oil evaporating, dispersing in the water, sinking, or being cleaned up. On September 19, the well was officially sealed.
·         The author estimates that with a spill area of about 10,000 square miles, if all this oil were dispersed uniformly in that volume of water, the resulting concentration would be less than 1 ppm, “below what is considered a toxic level”. The surfactants were added to break up the oil and prevent big blobs from forming, so that more of it is accessible to oil-digesting bacteria and it doesn’t gum up the feathers of birds and the fur of animals.
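A minimal sketch of the dilution estimate above, assuming (my assumption, not a figure the notes give) that the oil mixes down to about 100 meters; the spill volume and area are the numbers quoted above.  Any mixing depth greater than roughly 40 meters keeps the average concentration under 1 ppm.

    oil_volume_m3  = 1.0e6      # ~250 million gallons, about a million cubic meters (from the text)
    spill_area_mi2 = 10_000     # affected area in square miles (from the text)
    mixing_depth_m = 100        # ASSUMPTION: depth over which the oil is dispersed

    area_m2  = spill_area_mi2 * 2.59e6              # one square mile is ~2.59 million square meters
    water_m3 = area_m2 * mixing_depth_m
    print(round(oil_volume_m3 / water_m3 * 1e6, 2)) # ~0.39 ppm, below the ~1 ppm toxicity threshold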
·         Natural oil leaks do occur in the seabed but they probably are only about 1% of the Deepwater spill.
·         A year after the initial spill, measurements showed that 99% of the water samples tested in the entire region, including the 1,000 square miles closest to the wellhead, had no detectable oil residue or dispersant.  Tourism was severely affected, with one estimate claiming a loss of 23 billion dollars over the ensuing three years. A year later, however, the Governor of Louisiana declared the “region reborn”.
·         The author believes that the President’s characterization of the disaster was hyperbole and that the overreaction to the spill was even more damaging.


CHAPTER 3: GLOBAL WARMING AND CLIMATE CHANGE

·         The level of carbon dioxide in the last century has increased by 40% due to human use of fossil fuels.  Carbon dioxide makes up 0.04% of the atmosphere.  Water is a more significant greenhouse gas but we have no control over the amount that evaporates from bodies of water. Methane is also an important greenhouse gas.  The oxygen, argon, and nitrogen in the atmosphere are transparent to infrared radiation.
·         Physical calculations estimate that the earth’s temperature would be below freezing if not for the presence of greenhouse gases.
·         In 2007, the IPCC reported that global temperature rose by 0.64 Celsius in the previous 50 years.  During those same years, the land temperature rose by 0.9 Celsius. Land temperatures rise by a greater amount because heat concentrates near the surface of the land, while in the ocean heat spreads down to depths of 100 feet. In the same report, the IPCC states that global warming has been happening since the 1800s, but the anthropogenic contribution is hard to determine because part of that earlier warming was due to changes in the sun’s intensity.
·         Despite the smallness of this temperature rise, scientists including the author are more concerned about greater warming occurring in the future.
·         The author’s group, through the Berkeley Earth Surface Temperature project, did an extensive analysis of temperature data not previously included in the IPCC analysis and a re-analysis of existing temperature records (1.6 billion temperature measurements, 14 data sets, 38 stations), putting in measures to avoid data-selection, data-correction, and station-quality bias, and testing for urban heat bias. To the author’s surprise, they came up with the same temperature rise reported by the IPCC of 0.9 Celsius over land, concluding that “none of the legitimate concerns of the skeptics had improperly biased the prior results”, suggesting to the author that “those groups had been vigilant in their analysis and treated the potential biases with appropriate care”.
·         See page 76 (iBook version) for the group’s plot of the average global temperature rise over land from 1800 to the present. Dips in the otherwise rising temperature plot are attributed to volcanic eruptions and correlate with ice-core measurements of sulfate particles. There was close agreement between the temperature-rise curve and the carbon dioxide-rise curve when a smooth fit was done including the volcanic eruption data, better than the author’s attempts at using a parabola and other polynomial fits. “Our fit shows that one could ignore these (sunspot) cycles and get an excellent explanation of most of the data considering only carbon dioxide and volcanoes.”  The precise fit between the temperature and CO2 curves “suggests that most – maybe all – of the warming of the past 250 years was caused by humans”, according to the author.
·         Based on these results, the author offers the following prediction: if the CO2 concentration increases exponentially and the greenhouse effect increases logarithmically with CO2, then the warming should grow linearly in time: doubling the time interval doubles the temperature rise.  For example, assuming exponential growth of the CO2 concentration, by 2052 the CO2 concentration will have doubled to 560 ppm; the corresponding rise in land temperature is 1.6 Celsius. Forty years after 2052 there will be an additional 1.6 Celsius rise, and so on every 40 years, until the CO2 rise is mitigated.
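The structure of this argument can be sketched numerically. In the snippet below, the 1.6 Celsius-per-doubling response and the 560 ppm-by-2052 figure are the numbers quoted above; the 40-year doubling time afterward is the assumption implied by the “+1.6 Celsius every 40 years” claim, not a number I can attribute to the book directly.

    import math

    dT_per_doubling = 1.6        # land warming per doubling of CO2, in Celsius (quoted above)
    C_2052          = 560.0      # ppm reached in 2052 (quoted above)
    doubling_years  = 40         # ASSUMPTION implied by "+1.6 C every 40 years"

    def warming_since_2052(ppm):
        # logarithmic greenhouse response: the same warming for each doubling of CO2
        return dT_per_doubling * math.log2(ppm / C_2052)

    for n in range(4):
        ppm = C_2052 * 2**n      # exponential growth: doublings equally spaced in time
        print(2052 + n * doubling_years, int(ppm), round(warming_since_2052(ppm), 1))
    # prints 0.0, 1.6, 3.2, 4.8 C -- warming that grows linearly in time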
·         The logarithmic dependence of the greenhouse effect on CO2 concentration stems from, according to the author, “the fact that most of the effect comes from the edges of the CO2 absorption lines which only broaden logarithmically”.
·         In the section on tipping points, the author discusses some positive and negative feedbacks that may occur as a result of increased CO2 and warming:
·         A strong positive feedback can lead to runaway greenhouse warming like the one that makes Venus a very hot planet. The tipping points for this to happen that have so far been identified are:
o   Loosening of the Antarctic ice sheet and slipping into the sea to produce over 100 feet of sea level rise
o   Melting of freshwater in Greenland which can disrupt the Gulf Stream and change sea current flow all the way in the Pacific
o   Melting of permafrost and release of the potent greenhouse gas methane leading to further warming
o   Release of methane from the seabed as the Arctic water warms
·         An example of a negative feedback is an increase in cloud cover: a mere 2% increase could cancel the expected warming from a doubling of the CO2 concentration.  Poor understanding of cloud-cover mechanisms contributes much of the uncertainty in warming predictions.
·         Local variability in temperature changes can mask the experience of global warming in different places.  About a third of temperature measurement stations report decreasing temperatures.  The author claims that a global increase of 2-3 Celsius will be felt globally and local temperature trends cannot negate it.
·         The author believes that the only solid evidence of warming is the temperature data; all other effects attributed to warming are “either wrong or distorted”.  He presents a review of some of these claims:
o   Hurricanes: the increase in hurricane frequency is more likely due to increased capacity to detect them, even offshore.  Data for hurricanes that impact the US coast show no increase.  His conclusion: “the rate of hurricanes hitting the US has not been increasing”.
o   Tornadoes: measurements show a decreasing rate of tornadoes, verified by statistical analysis. Global warming theory predicted that tornadoes might increase, not that they would increase; more storms may be generated due to the energy available in a warming climate.  However, it is the temperature gradient, not the absolute temperature, that matters more for tornado formation. See graph.
o   Polar warming: older climate models actually predicted that Antarctic ice would increase, not decrease; a higher rate of evaporation due to sea warming can increase the amount of snow falling in the Antarctic, which stays below freezing even with warming temperatures. Satellite measurements showed, however, that the Antarctic has lost 36 cubic miles of ice.  Models were tweaked and were able to reproduce this actual result.  Modeling Antarctic ice can produce unreliable results because Antarctica covers only 2.7% of the globe, too small an area for precise predictions.  The models and observations for the Arctic are consistent with each other: decreasing ice. The author states that it is difficult to determine the cause: global warming and/or decadal oscillations in sea surface temperature and pressure.
o   Hockey stick data: adjustment of temperature data, purportedly to “hide” data that seemed to indicate decreasing temperatures, by replacing proxy data with actual thermometer data.  See “Climategate”.
o   Sea level rise: the IPCC reports that sea level has risen by 8 inches in the last century (from records of tide levels).  The rise could be attributed to warmer waters, which expand, and to the melting of glaciers; it is difficult to determine the ultimate cause.  The melting of glaciers in Greenland is attributed to soot pollution. The IPCC predicts a further 1-2 foot rise in sea level through the remainder of the century.
·         “Can global warming be stopped, assuming it is a threat?” A treaty requiring an 80% cut in greenhouse emissions by the US by 2080 and a 70% cut in emissions intensity by China and the developing world by 2040 is not going to result in decreased atmospheric carbon dioxide concentrations, according to the author.  Under this treaty and the numbers involved, the author calculates that total atmospheric CO2 would increase to above 1,000 ppm (currently around 400 ppm), which, using IPCC models, would lead to a global temperature increase of 3 Celsius. In 2010, China’s CO2 emissions were 70% higher than those of the US, while its CO2 emissions per capita were only 45% of the US level. President Obama did not sign the Copenhagen treaty because of China’s refusal to allow inspections.  China’s emissions intensity is now 5 times that of the US; with a 6% increase every year compared to the US rate, China will surpass US per capita emissions by 2025.  Because energy use correlates with wealth, the author rhetorically asks, “If you were the president of China, would you endanger progress to avoid a few degrees of temperature change?”  Slowing growth in China could trigger political instability, adds the author.  See figures 1.6 and 1.17.   “Every 10% cut in US emissions is negated by 6 months of China’s emission growth.” Reducing its dependence on coal and switching to natural gas can help in reducing China’s CO2 emissions (natural gas releases only half the CO2).  The author highlights the important role of the developing nations in decreasing CO2 emissions, even though most of what is in the atmosphere now is due mostly to developed nations.  The emerging economies need to cut emissions intensity by 8-10% per year just to stabilize greenhouse emissions. Low-cost solutions and a switch from coal to natural gas are required to help China cut emissions.
·         Geoengineering: some proposed solutions are listed below.  The author believes these solutions may never be taken seriously because of the danger of further altering the earth’s geochemistry and atmospheric chemistry without knowing the ultimate consequences.
·         Dumping iron in the ocean to encourage plant growth
·         Cloud-seeding methods to increase cloud formation
·         Releasing sulfate particles into the stratosphere to form aerosols that would reflect sunlight. “A simple calculation suggests that just one pound of sulfates injected into the stratosphere could offset the warming caused by thousands of pounds of carbon dioxide.”
·         On the global warming controversy, the author’s position is this: “The evidence shows that global warming is real, and the recent analysis of our team indicates that most of it is due to humans”.  He refers to global warming as both a scientific conclusion and a secular religion for both what he calls “alarmists” and “deniers”. He believes that it is a threat that needs to be addressed even if quantification is difficult.  He proposes that any solution should be inexpensive, because it is the developing world that will need it the most.  The lowest-hanging fruit right now is a switch from coal to natural gas while technologies are developed to make other sources affordable.  An electric car is an expensive solution that produces more CO2 if the electricity is provided by a coal-powered plant.


PART II: ENERGY LANDSCAPES
Energy per capita use has been shown to increase as a function of per capita GDP (see Figure II.1). The author poses the important question of whether energy use creates wealth or wealth creates more energy use, and believes it is probably a little of both. Because of this correlation between energy use and wealth, the increasing per capita GDP of emerging nations will likely result in more energy use globally. Related to this is the cost of energy, and the author gives an example of how “out-of-whack” energy pricing can be, which adds complexity to the issue of energy use and availability: energy from a AAA battery costs 10,000 times more than the equivalent energy from an electric power plant ($1,000 per kWh versus $0.10 per kWh). Clearly the cost of energy depends on how it is delivered. Gasoline costs 2.5 times more than retail natural gas and 7 times more than wholesale gas. Despite this price difference, it is difficult for the US to wean itself from gasoline because of the high cost of switching away from the current gasoline delivery infrastructure, creating an “inefficient market” in energy.

The author provides an example of what he calculates as the wide disparity in the cost per kWh of energy depending on the mode of delivery. Most of the cost of energy comes from the mining, the processing, and the delivery; the author points out that, at the time of writing, the sum of these for solar energy is higher than for coal. However, he also points out that coal is not really that cheap if you take into account the environmental consequences. Toward the end, the author points out that the cheapest form of energy is energy that is generated but would otherwise go unused. There are two aspects to capturing it: making appliances more efficient, so that the same benefit is received for less energy, and storing already-generated energy that would otherwise be wasted.

According to the author, the two main concerns of the energy landscape are energy security and climate change. An energy flow plot in the last section shows that only about 43% of the energy generated is used and the other 57% is lost as heat; 83% of the energy is generated using coal, natural gas, and petroleum. About 40% goes to generating electricity, transportation comprises about 28%, and industrial use about 24%. See the 2013 US Energy Flow chart downloaded from the LLNL website. The author puts this information in another perspective:

This amount of energy use per year is equivalent to 3,500 gigawatts, or 3,500 large generating plants.  That is 12 kilowatts per person, assuming a US population of about 300 million.  It is equivalent to burning 300 tons of fossil fuel EVERY SECOND, or 1 cubic mile of oil per year if all the energy came from petroleum. “Any proposed alternative energy sources must cope with this enormity.”
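The per-person figure checks out with simple arithmetic (a sketch using only the numbers quoted above):

    total_power_gw = 3_500      # average US energy use, in gigawatts (from the text)
    population     = 300e6      # assumed US population, as in the text

    print(round(total_power_gw * 1e9 / population / 1e3, 1))   # ~11.7, i.e., about 12 kilowatts per person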

The US holds close to 730 million barrels of oil (ATTOW) in its Strategic Petroleum Reserve. The US has imported about 9 million barrels of oil per day over the past decade, so this reserve would last only a little over two months; moreover, pumping capabilities can extract only 4.4 million barrels per day from it.

The margin of spare capacity has a big influence on the price of oil (the margin of spare capacity is the amount of oil that could be pumped around the world minus the amount that is actually pumped).  According to the author, “it is the continuing growth of the economies of the developing world that keeps the spare capacity low and therefore the price of oil high”. The author has two suggestions for building the margin of spare capacity: producing diesel and gasoline (synfuels) from coal and natural gas, and exploiting recognized shale oil reserves.

The author cautions that when considering any energy technology, there needs to be a consideration of the difference between developing and developed countries.  The price of installation and maintenance of solar power in the US is high due to labor costs, and cheap natural gas is a strong competitor.  In other countries where labor costs are lower, solar power may actually compete with natural gas.


Chapter 4: The Natural Gas Windfall
In this chapter, the author talks about the newest energy windfall: the development of technology for extracting recoverable natural gas (cheaply?) from enormous reserves trapped in shale. According to the author, “the exploitability of these shale gases is the most important new fact for future US energy security – and for global warming - …”.

US natural gas reserves have grown over the last 12 years, according to Department of Energy and US Energy Information Administration figures:
2001 – 192 trillion cubic feet (Tcf)
2010 – 300 Tcf
2011 – 862 Tcf
Between 2001 and 2010, the US extracted about 20-24 Tcf.
Some estimates are as high as 3,000 Tcf.

The author differentiates how the government and companies make predictions.  He notes that government estimates are more conservative because they have to base their estimates on proven reserves (recoverable supply). Companies err on the side of a “good bet” of supply.

The fraction of natural gas extracted from shale has increased over the years:
1966 – 1.6%
2005 – 4%
2011 – 23%
2012 – 30%
See Figure II.3 for graph showing the growth of shale gas production.

For the same dollar value (early 2012 data), natural gas can provide 2.5 times more energy than gasoline.

Converting US energy needs to natural gas is not trivial in most cases.  Storage volume and delivery are issues: even when compressed, natural gas takes up three times the volume of gasoline.  ATTOW, some 130,000 taxicabs and trucks have been converted to CNG; existing gasoline engines can easily be converted to use natural gas.  CNG has ten times the energy per gallon of lithium-ion batteries, so it is a competitor to electric vehicles.

In 2013, natural gas provided about 27% of the US energy needs (updated data from LLNL energy flow chart for 2013).

Natural gas is released from coal and shale by pumping pressurized water down a pipe to crack the coal or shale and release the gas. Hydraulic fracturing (fracking) and horizontal drilling are the two key technologies that have enabled economically viable extraction of natural gas from shale.  In a US EIA survey (Figure II.8) of 32 countries, there are estimated to be about 6,622 Tcf of shale gas reserves, 13% of which are in the US. France is estimated to have about 100 years’ worth of natural gas recoverable from shale reserves (ATTOW, fracking is banned in France) but still imports 95% of its natural gas.  China is estimated to have about 400 years’ supply of natural gas from shale reserves. Advantages of natural gas include producing only half the greenhouse gases that coal does, and much lower local pollutants (sulfur, mercury, carbon particles).

Another potential source of methane being explored is methane hydrate, or clathrate, discovered deep in the ocean, usually along coasts and continental shelves.  This form of methane is mixed with water in a 1:5 ratio (more water) and is thought to form when methane seeping from sea-bottom sediments mixes with cold water (4 Celsius) at high pressures (~50 atm, at least 1,500 feet down), causing the water to form an ice cage that traps the methane.  As shown in Figure II.9 in the book, methane hydrate looks like ice cubes that burn.  Estimates of the amount of methane hydrate deposits range from 10 to 100 times the amount of shale gas. The source of the methane is unknown; it could be a bacterial product or primordial methane, but it currently does not look like it is associated with fossil carbon. The extraction process, ATTOW, is not trivial, as most of the methane hydrates are further mixed with clay, and salt water is corrosive. Methane itself contains enough energy to pay for its recovery.  There is a danger of leaking methane, however, which can contribute as a greenhouse gas: methane is 23 times more effective as a greenhouse gas than carbon dioxide. Furthermore, some scientists believe that a release of methane hydrates led to the Permian-Triassic extinction about 250 million years ago, which wiped out 96% of all marine species.


Part II - Chapter 5: Liquid Energy Security

In a somewhat facetious tone, the author rhetorically asks “What energy crisis?” (in the US), based on the following: enough coal reserves to last a century, huge reserves of natural gas and oil in shale, lots of sun and wind energy, and cheap uranium (uranium ore is only 2% of the cost of nuclear electricity). The author clarifies that what the US is having is a “transportation fuel crisis” due to an oil and liquid fuels shortage. In Figure II.11, the author shows that if you consider the US reserves of natural gas, coal, and oil, the US has 1,470 billion barrels of oil equivalent and leads a pack of countries that includes Saudi Arabia, making the US “the king of fossil fuels”.

In the discussion of oil, the author lumps the following together as synonymous with oil: gasoline, diesel, jet fuel, and petroleum.

In the mid-1800s, whale oil was used for lighting homes and businesses until it ran out due to the decimation of the whale population. Whale oil production peaked in 1845 at 15,000 gallons a year and started a decline that saw its price double by 1852.  In 1859, rock oil or petroleum was discovered in Pennsylvania and was initially used primarily as kerosene for lamps. The discovery of petroleum, however, was what made the internal combustion engine possible, which led to automobiles and airplanes. The shortage of whale oil drove the search for new oil, and thus one can think of petroleum as having once been considered an “alternative” energy source.

Although coal was able to run automobiles, gasoline delivers 60% more energy for the same mass. It is also incredibly cheap: assuming a price of $3.50/gallon and 35 mpg, it costs 10 cents per mile to drive, with up to 5 people in the car, as the author notes.

A widely used concept for predicting resources is Hubbert’s peak: the point of maximum production of any resource commodity.  The US hit its Hubbert’s oil peak in the 1970s; the world is close to hitting its own.

The author points out that the obvious substitutes for petroleum are natural gas, synfuel, and shale oil. 

President Carter created the Department of Energy in the late 1970s to work on weaning the US away from dependence on oil from other countries and to explore alternative energy sources.  By 1984, oil imports had dropped by 50%, but they rose again, and by 1994 imports exceeded the 1977 peak.  In 2011, the US imported 3.05 billion barrels of oil, exceeding domestic oil production and accounting for 53% of the trade deficit.  When Reagan became president, he eliminated the alternative energy programs as the price of a barrel of oil dropped from $111 during the Carter years to $22.

Alternative energy sources have a difficult time competing because of the cheapness of oil-based energy; Saudi Arabia can produce oil for about $3 a barrel.  The market price per barrel of oil can fluctuate between $20 and $100, increasing as demand exceeds supply.

Synthetic fuel or synfuel is one solution to the liquid fuels shortage.  Synfuel can be derived from the liquefaction of coal (CTL – coal to liquid) or natural gas (GTL – gas to liquid). The Fischer-Tropsch process was the first chemical procedure used to manufacture synfuel.  Invented in the 1920s, it was used successfully by Nazi Germany in the 1930s and by South Africa in the 1940s, during the Apartheid era, to provide liquid fuels from abundant coal.

Synfuel can cost up to $60 per barrel to make which makes its viability as an economical replacement questionable especially if the Saudis can lower oil prices easily, according to the author.

Part II – Chapter 6: Shale Oil

Along with natural gas, shale also contains oil that can be extracted.  The amount of oil in shale deposits is estimated at over 1.5 trillion barrels, 5 times the oil reserves of Saudi Arabia.  As with any other resource, this source of oil was not considered until oil became so expensive that shale oil extraction became price-competitive.  In a nutshell, the author describes the classic idea of how this oil is extracted: the shale is mined, then heated to drive out the oil-related material called kerogen.  The kerogen can then be converted to diesel and gasoline in a process called retorting. The waste generated is huge, exceeding the volume of the actual rock mined.

Companies like Shell, Chevron, and Exxon Mobil have been involved in developing the technology for shale oil extraction.  Shell’s method, called the “In-Situ Conversion Process”, involves heating the rocks 1-2 km underground using electricity to temperatures of 650-700 Celsius, letting them simmer for 3-4 years, and then employing fracking and horizontal drilling techniques to extract the smaller hydrocarbons broken off from the kerogen.  As energy-intensive as it may sound, the author notes that this process actually produces a net of 3.5 times more energy than it uses. The estimated cost by Shell is $30/barrel; the industry may sustain profitability as long as the price of oil remains above $60/barrel.  There are environmental consequences, of course: this is yet another carbon-based fuel, and there are issues with oil leaking into the water table, wastewater, and water shortages, as with fracking for natural gas. Areas with significant extraction include the Colorado Plateau, the Bakken field in North Dakota, and the Eagle Ford Formation in Texas. It is estimated that by the end of this decade, 25% of US oil consumption may come from shale oil.

Part II – Chapter 7: Energy Productivity

The author devotes this chapter to what he calls the “cheaper-than-cheap” energy source: increasing energy productivity and efficiency.

Half of the older homes in the US are estimated to benefit from added insulation, according to a Department of Energy official (Art Rosenfeld). In one calculation (which can be accessed through the energysavers.gov link given in the chapter), installing insulation has a payback time of 5.62 years; this is the amount of time it takes for the energy savings to equal the installation cost.  His calculations show that after those 5.62 years, the “capital” will continue to pay 17.8% per year in the form of reduced heating and cooling costs. This rate will go up and down as the price of electricity changes.
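The 17.8% figure is just the reciprocal of the payback time; a minimal sketch (the dollar amounts are hypothetical placeholders, since only their ratio matters):

    install_cost   = 1_000.00              # HYPOTHETICAL installation cost
    annual_savings = install_cost / 5.62   # chosen so the payback time matches the quoted 5.62 years

    payback_years = install_cost / annual_savings
    annual_return = 1 / payback_years
    print(round(payback_years, 2), f"{annual_return:.1%}")   # 5.62 years, ~17.8% per year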

Replacing incandescent lightbulbs with compact fluorescent lights yields a 209% annual return, according to the author’s calculations. He also estimates that over the 10,000-hour lifetime of a CFL, you would otherwise need 6 ordinary incandescent light bulbs.
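A rough sketch of the lightbulb comparison. The 10,000-hour CFL lifetime and the six-bulbs figure are from the text; the wattages, the incandescent lifetime, and the electricity price are assumptions I chose for illustration.

    cfl_lifetime_h       = 10_000
    incandescent_life_h  = 1_700     # ASSUMPTION, consistent with ~6 incandescents per CFL lifetime
    cfl_watts, inc_watts = 13, 60    # ASSUMPTION: typical equivalent-brightness ratings
    price_per_kwh        = 0.10      # ASSUMPTION

    print(round(cfl_lifetime_h / incandescent_life_h))          # ~6 incandescent bulbs replaced
    saved_kwh = (inc_watts - cfl_watts) * cfl_lifetime_h / 1000
    print(saved_kwh, saved_kwh * price_per_kwh)                 # ~470 kWh, roughly $47 saved

Against a bulb that costs only a few dollars, savings of that size are at least consistent with the triple-digit annual return the author quotes.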

In Decoupling Plus, a conservation program in California backed by the government and implemented by the utility companies, the utility invests money in helping Californians buy energy-efficient appliances and conserve energy overall.  The return for the utility comes from diverting investment dollars away from building new power plants (to increase capacity for increased energy use) toward conservation practices that reduce energy usage, plus a promise by the state to allow it to raise prices.  Customers benefit from the increased energy productivity, which decreases their energy costs despite the increase in prices.  See the chapter for a more detailed, albeit simplified, numerical explanation by the author. The term “decoupling” refers to the utility company being decoupled from having to build more power plants, and the “plus” refers to the company being able to raise rates based on successful conservation investment. This scheme, invented and named by Art Rosenfeld (who went on to win physics and energy awards), is considered quite successful in California: per capita electricity use there has been stable since 1980, while in the US overall it has increased by 50%.  The catch is that electricity consumers should not increase their electricity use just because they are using more efficient bulbs; the success of the program depends not just on energy efficiency but on lower overall power requirements.

Other “great investments” listed by the author include:

·         Cool roofs – an example is made of a thermoplastic material that can have color but is able to reflect more than 50% of infrared. White roofs are even better at reflecting, but many consider them too bright. The author notes, however (ATTOW), that if you use an air conditioner, installing a cool roof might be a better, less expensive alternative to installing solar panels.

·         More efficient autos – ATTOW, the US average mileage is 30 mpg; in Europe it is 50 mpg. Cars are least efficient when accelerating; hybrid technology fixes this problem by using a battery booster, and most hybrids get better mpg in city driving than on highways.  Because of the finite life of the battery, the true cost of an electric car can soar.  The author also addresses the issue of using lighter materials to increase efficiency.  Lighter cars have a reputation for not being safe.  He points to a study which found that the heavier cars made by Ford, Chrysler, and General Motors are indeed safer than the lighter ones they make, but that these heavier cars are no safer than the lightest Japanese- and German-made cars. The same researchers found an interesting correlation between resale value and safety (measured as driver deaths per year per million cars): the higher the resale value, the safer the car (fewer deaths), regardless of its price when new. See Figure II.15.

·         Energy-efficient refrigerators – The energy efficiency and price of refrigerators have both improved since 1974, which the author attributes to government mandates and market competition. The average refrigerator in 1974 was 14 cubic feet, yet it was both more energy-consuming and more expensive than today’s average 23-cubic-foot model. Today’s refrigerators have more efficient motors and better insulation.  The author puts the savings in a national perspective: if all of today’s refrigerators had the efficiency of the 1974 models, it would take another 23 gigawatts of power plants to run them (see the rough arithmetic sketched below).
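Here is the rough refrigerator arithmetic referred to above. The 23-gigawatt result is the author’s; the fleet size and the per-unit consumption figures below are my assumptions, chosen only to show how a number of that size arises.

    n_fridges           = 150e6     # ASSUMPTION: roughly one refrigerator per US household
    kwh_per_year_1974   = 1_800     # ASSUMPTION: annual use of a 1974-era unit
    kwh_per_year_modern = 450       # ASSUMPTION: annual use of a modern unit

    extra_kwh_per_year = (kwh_per_year_1974 - kwh_per_year_modern) * n_fridges
    print(round(extra_kwh_per_year / 8_766 / 1e6))   # ~23 GW of extra average generating capacity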

·         The McKinsey chart – This chart was created by the consulting firm McKinsey and Company.  It resulted from a study analyzing actions that may reduce carbon emissions and their profitability or added cost.  See Figure II.16 for the details. The author also relates an excerpt from Amory Lovins’ book Natural Capitalism telling the story of Dow Chemical employees coming up with energy-saving proposals that gave huge returns, resulting in $110 million in payments to shareholders every year by 1993.

In the next section, the author lists and describes what he opines as “feel-good measures” that may save energy but only in limited circumstances:

·         Buses – a study by the Institute of Transportation Studies at Berkeley found that the break-even point for bus transportation in suburbs depends on population density. Public transportation saves energy, or at least breaks even, when there are more than 15 households per acre; below that density, buses actually use more energy.

·         Recycling paper – according to the author, recycling paper neither saves trees nor reduces greenhouse emissions.  Trees for paper are grown specifically for the purpose of making paper.

·         Power blackouts – the interconnection of large numbers of power plants, transmission lines, transformers, and users – the grid – makes electricity delivery more reliable in the US. Operational problems in one plant can be overcome by another plant supplying the needed electricity.  This process, however, cannot handle sudden high demand and can lead to cascading power plant failures like those in New York and Massachusetts in 2003.  The system has no way of limiting the current draw when many air conditioners are turned on at once – generators just start to overheat.  The author lists three solutions. One is to build small-scale natural gas power plants for use on high-demand days; already done in California, this is an expensive solution because of the capital investment and poor returns, since these plants are used only a fraction of the time.  Another solution is to have utilities decrease the voltage on the line; air conditioners still run, but at reduced power. California has also used rotating brownouts to spread the burden of sudden high demand.

In controlling electricity use, the author welcomes the role of market forces.  He favors dynamic pricing of electricity, with the price rising when demand is high. This is not a popular option, however, because of its unpredictability.  The author suggests that smart meters can help consumers program appliances to turn on and off depending on when demand peaks and the price increases.  For example, electricity enters the home at two voltages: 120 volts for lighting and small appliances, and 240 volts for air conditioners, washers, dryers, and other appliances that pull high loads.  One way to program a smart meter is to turn off the 240-volt circuits at peak times.  When smart meters first came out in California, the three main complaints were overcharging, exposure to microwave radiation, and loss of privacy.  The author addresses these three in the last few paragraphs of the chapter.  Smart meters were installed primarily so that the utility company could collect more information about energy usage; they were also designed to reduce power automatically in case of an extreme emergency.


PART III: ALTERNATIVE ENERGY

Two major issues the author identifies related to energy are energy security and climate change. In Part III, the author devotes chapters to a description and discussion of alternative energy sources, noting that the “alternative energy field is wide, technically complex, and full of uncertainties”. He points to a table of data showing the cost of producing a kilowatt-hour of electricity using various methods (see Table III.1): coal, natural gas, nuclear, wind, solar PV, solar thermal, geothermal, biomass, and hydro. Some of these general types are further broken down into specific technologies.  The table was published by the US Energy Information Administration in 2011.  The author notes two caveats:
1)      The data assume that the cost of capital is 7.4%.
2)      The data assume a carbon emission trading cost for coal and natural gas of about $15/ton.

The table shows that natural gas appears to be the cheapest way of providing a kilowatt-hour of energy. It also has the advantage of producing less greenhouse emissions than coal for equal energy produced; half of natural gas’s energy comes from carbon combining with oxygen to form carbon dioxide, and the other half comes from hydrogen combining with oxygen to form water.


Part III – Chapter 8: Solar Surge

The author starts off this chapter by predicting that the price of solar panels will eventually go down, but installation and maintenance will still cost the consumer.  And on rainy days, there has to be another source of power.

First some physics about sunlight:
Sunlight delivers about a kilowatt of power per square meter onto the surface of the earth. This is equivalent to 10 100-watt bulbs.
Could solar power drive a car?  With 2 square meters of solar cells at 42% efficiency (the best available ATTOW), 42% of 2 kilowatts, or 840 watts, will be generated. This is equivalent to 1.1 horsepower.  Typical cars require 10-20 horsepower while cruising on the freeway and 40-150 horsepower for acceleration.
A square mile of solar panels receives about 2.6 gigawatts of sunlight, which reduces to about a gigawatt of electricity at 42% efficiency.  This further goes down to about 250 megawatts for the square mile (equivalently, an average insolation of about 250 watts per square meter) because the average solar power is only 25% of the peak.
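A sketch of the sunlight arithmetic above, using only the figures quoted in this chapter:

    solar_flux_w_per_m2 = 1_000        # ~1 kW of sunlight per square meter
    best_efficiency     = 0.42         # best cells ATTOW

    car_watts = 2 * solar_flux_w_per_m2 * best_efficiency        # 2 square meters on a car roof
    print(car_watts, round(car_watts / 746, 1))                  # 840 W, ~1.1 horsepower

    peak_gw    = 2.59e6 * solar_flux_w_per_m2 / 1e9              # one square mile of sunlight
    average_gw = peak_gw * best_efficiency * 0.25                # 42% efficiency, 25% average-to-peak
    print(round(peak_gw, 1), round(average_gw, 2))               # ~2.6 GW -> ~0.27 GW (roughly 250 MW)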

Solar thermal is a type of solar energy source in which sunlight is focused and collected as heat, which is used to boil water into steam that runs a turbine.  In Table III.1 this energy source is expensive, at 25.9 cents per kilowatt-hour.  A solar thermal power plant in California, consisting of a tower toward which 24,000 moving mirrors direct sunlight, can generate 5 megawatts, about 0.5% of a conventional gas, coal, or nuclear power plant.  Because of the many moving parts, this type requires a lot of maintenance. Another kind of solar thermal plant uses a solar trough to focus the light, with an optical design that obviates having to repoint; this type has fewer moving parts. Spain is the biggest user of solar thermal, generating up to 4 gigawatts, or 3% of its energy use, by the end of 2010.  The construction of these plants, however, depends on government subsidy.

Disadvantages:
·         Require sunny days although the heat can be stored in the hot salt
·         Require subsidies
·         High cost
·         Maintenance

Advantages
·         The hot salt can be stored for later use
·         High (about 50%) efficiency in producing electricity, due to the extreme temperatures that can be reached with focused sunlight.  The efficiency of the trough design is not as high, since it focuses less sharply and the heated liquid has to flow.

Photovoltaic cells
Solar cells or PV cells use absorbed sunlight to produce electricity based on the photoelectric effect. When sunlight strikes the solar cell, an electron is ejected from an atom and travels, carrying some of the photon’s energy, to an electrode and through a wire, creating a current.  ATTOW, reasonably priced cells can convert only about 10% of the photons’ energy into electricity, but this can go up to 42% for the most expensive cells.

In 2011, the cost of PV cells dropped to $1/watt from $7/watt a few years earlier. This, however, is per PEAK watt. Average output drops to 1/4 of the peak when the varying angle of the sun and its absence at night are considered, and further to about 1/8 of the peak when overcast days are accounted for.  The author shows a sample calculation of the return and payback time for solar cells.  They also need other electronics, such as an inverter, to run appliances, plus optional batteries.  And they require maintenance.  He considers them not yet economical (“no profit”), as they are also heavily subsidized by the government.
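The book’s own sample calculation is not reproduced in these notes, but the derating factors above imply something like the following sketch (the electricity price is my assumption):

    cost_per_peak_watt = 1.00       # $/peak watt for the cells, as quoted for 2011
    derating           = 1 / 8      # sun angle, night, and overcast days (from the text)
    price_per_kwh      = 0.10       # ASSUMPTION: retail electricity price

    kwh_per_year   = 8_766 * derating / 1_000        # ~1.1 kWh per year per peak watt installed
    value_per_year = kwh_per_year * price_per_kwh    # ~$0.11 of electricity per year
    print(round(cost_per_peak_watt / value_per_year, 1), "years to pay back the cells alone")
    # ~9 years, before installation, inverters, batteries, or maintenance are counted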

There are many competing technologies for PV.  The author highlights some of them below:

Silicon
Silicon crystals were the original material used in the first solar cells
They have gone down in price from $5 to $1/watt.
Silicon itself is cheap but the cost of purifying it is not.
Renewable energy regulations enable competition in the market.
The largest manufacturer is in China (Suntech Power), producing cells that have a 15.7% efficiency and a capacity of 1 gigawatt a year.  A close second is a US company (First Solar).
1 gigawatt worth of solar energy (producing 1/8 the electricity) is small compared to 50 gigawatts of coal plants being built in China every year.

Cadmium telluride (CdTe)
A layer of CdTe 3 microns thick (1/10th of human hair) can absorb sunlight and produce electricity with a 15% efficiency or more.
CdTe can be manufactured as very thin flexible sheets that are not fragile like silicon crystals (30 times thicker).
This is the material used by First Solar who, ATTOW, has been producing over 1 gigawatt of solar cells each year at 73 cents per installed watt.
Tellurium is produced at about 800 tons a year as a by-product of copper mining and there are worries that this source might run out soon.  1 gigawatt of solar cells takes about 100 tons of tellurium although increased demand may spur exploration and discovery of more deposits.
There is concern about cadmium’s toxicity which may be released in the event of a fire although the author does not think this is likely.

Copper indium gallium selenide (CIGS)
Like CdTe, CIGS can be produced in very thin sheets of 3-4 microns and has a good capacity for absorbing sunlight and producing electricity.
CIGS cells don’t contain any material considered toxic.
Indium is in short supply because it is used in many electrical applications such as indium tin oxide, a transparent electrical conductor, used in TV’s, computers, and game boxes.
CIGS is the primary material used in solar manufacturing by the San Jose, CA based company Nanosolar, which produces about 640 megawatts of solar cells.  Their efficiency is only 10%, but efficiencies as high as 20% have been shown in the lab.
Nanosolar and thin-film companies have been negatively impacted by the sudden drop in price of Chinese silicon solar cells (15-fold between 2006 and 2010).

Multijunction Cells
Typically made of gallium, germanium, indium, and other metals.
They are assembled in multiple layers, each one designed to absorb a wavelength range in the solar spectrum achieving efficiencies as high as 42%.
Very expensive to make, about $500 for a square centimeter.  Using a PV concentrator, however, one can concentrate sunlight onto a smaller piece for a gain in efficiency, 2-4 times higher than competing cells; requires a thermal conductor to carry away the heat.
They have been used in the Mars Rover.

Solar Cell Summary
“The solar field is intensely competitive and developing fast. Prices are dropping so rapidly that the winners are likely to be decided by criteria other than solar-cell price, including cost of installation, cost of maintenance, cost of conversion to household voltages, lifetime of cells, and efficiency.”


Part III – Chapter 9: Wind

Wind turbines are designed with huge blades and are very tall to take advantage of the stronger winds at higher elevations.  An example given by the author: wind speeds at 200 feet are typically twice those at 20 feet.  Wind power increases as the cube of the wind velocity: doubling the wind speed gives 8 times the power. (Wind energy is just ½mv², but the power is proportional to that energy times v, hence the cubic dependence.) The blades are big for more surface area interacting with the wind; a blade 63 meters long sweeps an area of about 12,000 square meters.  At a wind speed of 20 mph, the power in that area is about 10 megawatts.  The author gives the following rule of thumb: watts per square meter = (speed in mph)³ / 10. Because the blades spin fast, just 3 blades are enough to capture more than half the energy of the wind blowing through the circular area swept by the blades. Betz’s law limits the energy a turbine can extract to 59% of the wind’s energy, as long as there are no other turbines nearby (turbines are spaced 5-10 blade lengths apart). So the 10 megawatts calculated above is reduced to 5.9 megawatts, the maximum power that can be extracted.  A home wind turbine spanning 4 square meters and running at an average wind speed of 5 mph can generate only about 29 watts (using Betz’s law).  A solar cell of similar surface area can generate 600 watts at peak and average 75 watts over cloudy days and 24 hours.
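A sketch of the wind arithmetic above, using the author’s rule of thumb and the figures quoted in these notes:

    import math

    def watts_per_m2(speed_mph):
        # author's rule of thumb: watts per square meter ~ (wind speed in mph)^3 / 10
        return speed_mph**3 / 10

    swept_area_m2 = math.pi * 63**2                       # ~12,500 m^2 for a 63 m blade
    raw_mw  = watts_per_m2(20) * swept_area_m2 / 1e6      # ~10 MW available in a 20 mph wind
    betz_mw = raw_mw * 0.59                               # Betz's law: at most 59% is extractable
    print(round(raw_mw), round(betz_mw, 1))               # 10 MW -> ~5.9 MW

    home_watts = watts_per_m2(5) * 4 * 0.59               # 4 m^2 home turbine at 5 mph average
    print(home_watts)                                     # ~29 watts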

Wind power capacity has been doubling every 3 years, as turbines are relatively inexpensive to build and don’t require fuel. The US has built 45 gigawatts’ worth of wind turbine farms (2.3% of US electric power generation). China’s anticipated capacity was 55 gigawatts at the end of 2011. See Figure III.8. Wind can produce electricity relatively cheaply, at 9.7 cents per kilowatt-hour.

The last few paragraphs are about issues that have been brought up about wind power.  The author addresses each one of these:

1)      A large, interconnected grid of wind farms improves reliability, stabilizing power delivery when the wind stops in certain areas.  Back-ups can also be employed, e.g., storage batteries, emergency generators, etc.
2)      Aesthetics are also an issue for some people.
3)      Bird deaths by hitting turbines are small relative to bird deaths due to hitting windows and tall structures.  Modern turbines are usually sited away from migratory paths.
4)      Because the strongest winds occur in remote areas, there is a concern about transporting energy from wind farms sited too far away from population centers.  The current grid loses about 7% in electrical energy due to transport.


Part III – Chapter 10: Energy Storage

In this chapter, the author, as promised, tackles energy storage options, especially for solar and wind energy.

Batteries
ATTOW, the most common storage batteries that come with solar power installations are lead-acid batteries.  These batteries do not have a high energy density, but they are highly efficient, returning 80-90% of the energy pumped into them.  Four car batteries weigh 250 pounds and can provide 5 kilowatt-hours of electricity, enough to power a small home for 5 hours.  To contrast that with the energy density of gasoline: 250 pounds of gasoline contains 1,320 kilowatt-hours of heat energy, and even at 20% efficiency a generator can still provide about 50 times the energy of an equivalent weight of batteries.
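The “50 times” comparison follows directly from the figures quoted above:

    battery_kwh          = 5        # four lead-acid car batteries, ~250 pounds total
    gasoline_heat_kwh    = 1_320    # heat energy in 250 pounds of gasoline
    generator_efficiency = 0.20

    usable_kwh = gasoline_heat_kwh * generator_efficiency
    print(usable_kwh, round(usable_kwh / battery_kwh))   # 264 kWh, roughly 50x the batteries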

The author believes, however, that the lead-acid battery is not the obvious choice for solar and wind power; he believes the sodium-sulfur battery is a better option. The largest battery of this type is “Bob” in Presidio, Texas. Used as an emergency back-up, Bob can provide up to 4 megawatts, enough to power 4,000 homes for 8 hours. Sodium-sulfur batteries are also used for power leveling, keeping power delivery constant in case of generator failures. They have the advantage of a low price per charge-discharge cycle: they can be recharged 4,500 times at 80% discharge, versus about 500 times for both lead-acid and lithium-ion batteries.

Written in 2012, Muller did not anticipate that Elon Musk of Tesla would come out with a home battery system in 2015 [my own note].  Here is what he had to say then about lithium-ion batteries: “I expect that lithium-ion batteries will never be used for large-scale energy storage; they are too expensive. Lithium costs 40 times more per pound than sodium and 10 times more per atom – a more relevant measure for batteries. With a 9-fold recharge advantage and a 10-fold cost-per-atom advantage, sodium-sulfur has a 90-fold advantage over lithium-ion.” A disadvantage of sodium-sulfur batteries is that they cannot be scaled down and are not suitable for a wide range of applications; they have to be kept at a temperature of around 350 C and contain liquid sodium.  A Japanese company is developing one that can operate below 100 C.
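The 90-fold figure is just the product of the two ratios the author quotes:

    recharge_cycles_nas = 4_500      # sodium-sulfur, at 80% discharge (from the text)
    recharge_cycles_li  = 500        # lithium-ion (from the text)
    cost_per_atom_ratio = 10         # lithium costs ~10x more per atom (from the text)

    recharge_advantage = recharge_cycles_nas / recharge_cycles_li
    print(recharge_advantage, recharge_advantage * cost_per_atom_ratio)   # 9.0, 90.0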

Some interesting notes and language from the section on The Physics and Chemistry of Batteries

Metals and electrolytes are the fundamental components of batteries.  Metals allow electron flow while electrolytes allow ion flow but not electron flow.
In lead-acid batteries, lead and its compounds comprise the metals and an aqueous acid solution acts as the electrolyte.
The Handbook of Battery Materials lists all the known metals and electrolytes used in batteries.
One of the challenges with batteries is making them rechargeable. In recharging, a generator is used to force electrons to flow the other way; their negative charges attract the positive ions back through the electrolyte for re-deposition on the other electrode.  A big problem is ensuring that the ions re-deposit in the original way; many times they don’t, forming dendrites that eventually make the battery unusable.
Typical recharging cycles are in the hundreds; sodium-sulfur batteries can be recharged thousands of times without failure.

The Future of Batteries

NiCad batteries had memory issues – if not discharged completely from time to time, they would “remember” a reduced capacity and no longer charge fully.
NiMH batteries don’t have this problem.  They are still being used in Priuses.
Lithium-ion batteries are light-weight and have high energy density.
Lithium-polymer batteries can be made really thin and are useful for small electronic gadgets like cell phones and e-book readers.
The author is optimistic about the future of batteries.  The market for newer, more expensive batteries was sustained because they were useful for even more expensive gadgets such as laptops.  The focus of research is on rechargeability and safety. The author notes, however, that engineering development for batteries is linear, not exponential: improvements will come, but not at the same fast pace as in the past.

Bottled Wind: Compressed Air Energy Storage (CAES)
In this type of technology, already used in confined spaces with no ventilation such as mines, air is compressed to several times atmospheric pressure (200 atm is a typical figure), storing the energy expended by a motor-driven pump.  The energy is released when the compressed air is allowed to expand and run a turbine.  One disadvantage is the weight of the tank, which is about 20 times the weight of the air it holds (or 5 times for a fiber-composite tank). Another issue is that air heats up when compressed (up to 1370 C at 200 atm), so there must be a way to draw the heat away.

There are a few places using CAES, one in Germany and another in Alabama.  A plant planned in Ohio could deliver up to 2.7 gigawatts.  With more advanced systems that reuse the heat generated during compression, the expected efficiency can be up to 80%, comparable to that of batteries.

Flywheels

Energy is stored by using a motor to spin a heavy wheel.  When a load is added – a generator, for instance – the rotation slows as the kinetic energy is converted to electricity. One of its advantages is the ability to condition energy and smooth out power delivery.  The Bevatron atom smasher in Berkeley used flywheels of about 10 tons each. Beacon Power installed 200 2,500-pound carbon-fiber-composite flywheels (10 ft tall and 6 ft in diameter) in Stephentown, NY, with rims moving at 1,500 mph.  To reduce air friction, the flywheels spin in a high-vacuum chamber. Each flywheel can store 25 kilowatt-hours of energy, for a total of 5 megawatt-hours.  The installation is designed to deliver 20 megawatts and therefore can run for 0.25 hours, or 15 minutes.
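The 15-minute figure follows from the storage and delivery numbers quoted above:

    n_flywheels      = 200
    kwh_per_flywheel = 25
    delivery_mw      = 20

    total_mwh = n_flywheels * kwh_per_flywheel / 1_000
    hours     = total_mwh / delivery_mw
    print(total_mwh, hours, hours * 60)    # 5 MWh, 0.25 hours, 15 minutes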

Their energy storage density is comparable to lithium ion batteries, 30 watt-hours per pound.  Beacon Power’s current set-up costs $1.39 per kilowatt – hour. The high cost makes the author think that flywheels will continue to be used to condition energy but not for large-scale energy storage.

Supercapacitors
Capacitors are composed of two metal plates that are given opposite charges and separated by an electrical insulator.  They can store energy for longer periods of time than batteries can. The higher the charge, the more energy is stored, but also the higher the voltage; very high voltages can cause a spark and destroy the capacitor.  If the insulator is thin, more energy per unit volume can be stored without needing high voltages.  One advantage of capacitors is that they can deliver energy very quickly and don’t degrade over time, since they do not depend on chemical reactions.

Newly developed supercapacitors can store as much as 14 watt-hours per pound. This is about a third of the energy stored by a lithium-ion battery of similar weight, and supercapacitors cost about 3 times as much. They are probably best used in conjunction with a battery, providing quick bursts of power that batteries can't; they can also improve the efficiency of regenerative braking by absorbing and releasing energy faster than a battery can.

Hydrogen and fuel cells
A fuel cell is a battery that does not need to be recharged because the chemical reactants are supplied continuously as "fuel". In a hydrogen fuel cell, hydrogen and air are pumped in to generate electricity. Efficiency is low, about 25%, and the author is not sure that fuel cells will replace batteries or generators.

Natural Gas
In this last section, the author compares what he deems the best energy storage technology, the sodium-sulfur battery, to a natural gas generator. His calculations show that sodium-sulfur batteries' capital cost is $5 per deliverable watt, while natural gas capital cost is $1 per deliverable watt. Even when the cost of the fuel is taken into account, natural gas easily wins out over solar or wind as the energy source. Batteries compete only if they are run at a low duty cycle, e.g. 1 hour per day, in which case their per-watt capital cost drops to about 50 cents. The author concludes that natural gas is hard to beat.


Part III – Chapter 11: The Coming Explosion of Nuclear Power

The author starts off this chapter with a list of key items he thinks are important to know about nuclear energy (he calls it an “executive summary”).

·         Unlike nuclear bombs, which contain highly enriched uranium, nuclear reactors cannot explode like a bomb, because they use low-enriched uranium.
·         A nucleus of uranium releases 20 million times more energy than a molecule of TNT. A fission event can initiate a chain reaction: every fission produces 2 or 3 neutrons, each of which can cause another fission that creates 2-3 more neutrons, and so on. A rough calculation (see the sketch after this list) shows that, assuming 2 neutrons per fission, after 80 doublings the number of neutrons present is about 10^24. The Hiroshima bomb was equivalent to about 13,000 tons of TNT; since one uranium fission releases 20 million times the energy of one TNT molecule, only 0.00065 tons, or about 1.4 pounds, of uranium is needed to produce the same destructive energy. However, that amount is far less than the critical mass of uranium needed to initiate and sustain an explosive chain reaction.
·         The rapidity of the chain reaction (the full set of doublings takes less than a millionth of a second) is essential to a nuclear bomb design. If the chain reaction took as long as a second, the bomb would pre-detonate: the energy of the first fissions would blow the uranium apart before most of it could fission.
·         Heavy uranium, uranium-238, does not undergo fission in a way that can sustain a chain reaction. Creating a bomb requires nearly pure uranium-235. Natural uranium is only 0.7% uranium-235, and it is very difficult to separate the isotopes through uranium enrichment to reach the roughly 90% level of weapons-grade uranium.
·         Moderators (like carbon and water) can help sustain a chain reaction even in the presence of U-238, which would normally absorb too many of the neutrons. With a moderator, neutrons are slowed down so that when they hit uranium-238 nuclei they simply bounce off instead of being absorbed, raising the probability that they eventually hit U-235. Using an expensive moderator like heavy water allows a reactor to run on natural uranium with only 0.7% U-235. Graphite works well too, but it burns (as happened at Chernobyl). With ordinary water as the moderator, the uranium needs to be enriched only to 3%-4% U-235. See the chapter for a brief synopsis of why the Chernobyl reactor blew up with TNT-scale explosive force.
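
A quick check of the chain-reaction arithmetic in the bullets above; the doubling count and the 20-million ratio are the author's figures, while the unit conversions (and treating the tons as metric tons) are mine.

doublings = 80
neutrons = 2 ** doublings
print(f"2^80 ≈ {neutrons:.1e} neutrons")          # ≈ 1.2e24

# Uranium mass matching the 13,000-ton-TNT Hiroshima bomb, given that a uranium
# fission releases ~20 million times the energy of a TNT molecule (the two have
# similar molecular weights, so the same ratio holds per ton).
tnt_tons = 13_000
uranium_tons = tnt_tons / 20e6
print(f"uranium needed ≈ {uranium_tons:.5f} tons ≈ {uranium_tons * 1000 * 2.2:.1f} pounds")
# ≈ 0.00065 tons, about 1.4 pounds; far below the critical mass needed to
# sustain an explosive chain reaction.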


Capital costs for nuclear power plants are high, but the electricity they deliver is relatively cheap because fuel and maintenance costs are low. It costs about 6-8 billion dollars to build a 1-gigawatt reactor, about 50% more than a 1-gigawatt coal plant. About 80% of the cost of nuclear electricity goes to paying back the construction loan, compared to just 18% for natural gas plants. Nuclear power plants have a very high capacity factor, operating 90% of the time, with downtime due only to maintenance; this high capacity factor has raised revenue by a factor of about 1.6 compared with earlier decades. The history of nuclear power in the US has gone up and down, marred by the Three Mile Island accident (which caused no deaths) and the Chernobyl accident (which did cause fatalities). Another reason no new plants are being built is that nuclear is not competitive with much cheaper natural gas.
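
A small sketch of the capital-cost arithmetic implied above; the mid-range $7 billion figure and the hours-per-year conversion are my choices, not the author's.

plant_cost_dollars = 7e9        # mid-range of the quoted $6-8 billion for a 1-GW reactor (assumption)
capacity_watts = 1e9
nuclear_per_watt = plant_cost_dollars / capacity_watts
coal_per_watt = nuclear_per_watt / 1.5          # "about 50% more than a coal plant"
capacity_factor = 0.90
kwh_per_watt_per_year = capacity_factor * 8766 / 1000
print(f"nuclear capital ≈ ${nuclear_per_watt:.0f}/W, implied coal capital ≈ ${coal_per_watt:.1f}/W")
print(f"each watt of capacity delivers ≈ {kwh_per_watt_per_year:.1f} kWh per year at a 90% capacity factor")
# The large up-front cost spread over those kilowatt-hours is why loan repayment
# dominates the price of nuclear electricity even though the fuel is cheap.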

Small modular reactors (300 megawatts or less) may be the solution to the high capital cost of building a new reactor. They reduce the initial investment, and their modular design allows power capacity to be built up over time. One such reactor, built by Babcock and Wilcox, has a capacity of 125 megawatts and is designed to be buried underground and to operate for 3-4 years without maintenance. Toshiba makes one that can deliver 30-135 megawatts, uses a sodium coolant, and is also designed to be buried. The liquid sodium can be circulated by electromagnetic pumps; if the pumps fail, the reactor materials themselves damp the chain reaction and the sodium draws away the excess heat by natural convection.

One concern is that these modular reactors don't use moderators, so the chain reaction runs on fast neutrons, as these smaller designs require; a fast chain reaction needs more enriched uranium, at 19.9% U-235, just below the 20% threshold at which the IAEA classifies uranium as highly enriched. See the chapter for a detailed description of the safety design that prevents explosions in modular reactors. "Note that the safety is not based on an engineering system that requires maintenance…The safety is intrinsic to the physics of high temperatures. That's why these reactors are sometimes called intrinsically safe." Another factor that makes them intrinsically safe is that when "the fuel heats up, both the neutrons and the uranium atoms shake more; their instantaneous velocity is higher. U-238 has an important property in its nucleus; it becomes more efficient at absorbing neutrons when the relative velocity is increased". A further safety feature is the sodium coolant itself, which expands when heated, becomes less dense, and rises away from the core as cooler liquid replaces it, so cooling does not depend on pumps or other engineered devices. There is also less plutonium waste, because any plutonium generated when neutrons stick to U-238 also fissions.

Toward the end of this section, the author addresses in detail concerns about nefarious actors getting hold of the enriched uranium in these modular reactors. He closes by once again naming cheap natural gas as the main obstacle to increasing nuclear power's contribution in the US, barring a big push for energy that does not produce carbon dioxide.

There is enough economically recoverable uranium to last 9,000 years at current usage if low-grade uranium ore is used. The uranium ore itself contributes only about 0.2 cents per kilowatt-hour to the cost of the electricity.

The Fukushima nuclear meltdown, triggered by the 2011 earthquake and tsunami, is expected to cause only about 100 deaths from radiation-induced cancer, compared with the roughly 15,000 deaths from the tsunami itself; maybe fewer, since thyroid cancer is readily treatable.

Nuclear waste storage is technically feasible but suffers from bad public perception and political posturing. In the US, nuclear waste contains plutonium (in France it does not, because the plutonium is extracted for reuse). The author's reasons for thinking nuclear waste is not a major problem: plutonium has a long half-life of 24,000 years and thus does not contribute much to the radioactivity of the waste, and it is highly insoluble in water, so very little will end up in groundwater. The greatest danger from plutonium is inhalation; it takes only 0.00008 g inhaled to cause one cancer (versus 0.5 g if dissolved in water). The author offers the following perspective on the danger of inhalation: botulinum toxin (used in Botox) has an LD50 of 0.000000003 g if inhaled, making it 27,000 times more toxic than plutonium. To address the radioactivity of nuclear waste, the author discusses Figure III.12 in the book, which shows that the radioactivity of nuclear waste, relative to that of the uranium originally mined, falls off rapidly with time. Thus, to the author, nuclear waste storage is not a difficult technical problem. He offers what he thinks are the three reasons people are so concerned about nuclear waste: "most people consider radioactivity an unknown and invisible threat, people don't recognize that they are surrounded by a level of natural radioactivity that is usually much higher than the dose that comes from a nuclear accident, and the threat of plutonium has been so hyped that many people consider its presence to be unacceptable at any level".

Construction of new nuclear power plants will be “exploding” in the next several years in places like China and France; Japan is helping build some of these even as some of their own nuclear reactors are taken offline. ATTOW:
·         there were 31 US states with operating nuclear power plants, and in 7 of them nuclear supplied 50% of the electricity
·         in France, 75% of the electricity is supplied by nuclear
·         in China, 27 new plants were being built, 50 were planned, and 110 were proposed
·         the UK has also proposed some sites

In China, most of the coal is inland, so the coastal areas rely on imports from Australia. For every 20,000 tons of coal shipped, only 1 ton of uranium needs to be shipped to deliver the same energy, and even less of the 19.9%-enriched uranium used in modular reactors.


Part III – Chapter 12: Fusion

The author devotes this chapter to a promising energy technology that has been in development for decades: fusion. Fusion is attractive as an energy source because it can be fueled by the most abundant element in the ocean (by number of atoms), hydrogen. It can also be fueled by deuterium which, while only about 1/6,000th as abundant as ordinary hydrogen, can be separated from it inexpensively (the next heavier isotope, tritium, is too rare, but it can be generated). Optimism about fusion as an energy source has been around for decades. Fusion has actually been achieved, in the form of the hydrogen bomb, since the early 1950s; to be a safe source of energy, however, a much more controlled process needs to be developed. Some of the advantages of fusion listed by the author include the abundance of the primary fuel, hydrogen, and the relative lack of radioactive waste. He points out, however, that the neutrons produced in the typical fusion reaction (deuterium + tritium → helium + neutron) can stick to reactor materials and make them radioactive, albeit at a much lower level than in a uranium fission plant. Because tritium is quite rare (16 pounds in all the world's oceans), some fusion reactors are being designed so that the product neutrons are used to breed tritium by bombarding lithium atoms. In another fusion reaction, hydrogen + boron → 3 helium + gamma ray, no neutrons are formed; the gamma rays don't produce significant radioactivity, just a lot of energy.

In the next few sections, the author discusses 5 of the most talked-about proposals for developing fusion as an energy source.

Tokamak
The tokamak, whose name is a Russian acronym for "toroidal chamber with magnetic coils," was invented in Russia in the 1950s. It has dominated the attention and research effort of the last 60 years of fusion exploration. In a tokamak the fusion is thermonuclear: extremely high temperatures give the hydrogen nuclei enough kinetic energy to overcome their electrical repulsion, get close, and fuse through the short-range strong nuclear force. This is the same type of fusion that occurs in the sun, which requires core temperatures of about 15 million degrees Celsius (the surface of the sun is only 6,000 degrees Celsius). Even at that temperature, the power generated in the sun's core is only 0.3 watt per liter; human bodies generate about 1 watt per liter. Generating power rapidly enough to be useful in a tokamak, however, requires about 100 million degrees Celsius. It also requires deuterium and tritium fuel for easy ignition, since their extra neutrons increase the strong nuclear attraction and the rate at which they fuse. Because of the high temperatures, the reacting particles are held in place by magnetic confinement. The most current development in tokamak technology is ITER (International Thermonuclear Experimental Reactor), a 60-foot reactor aiming to produce 500 megawatts for 400 seconds or more, 10 times more power than is needed to run it. Its construction cost has been rising, up to 15 billion dollars, so there are questions about whether it will be competitive. The first test with hot gases is scheduled for 2019, followed by running hydrogen fuel in 2026 and project completion in 2038. One objection to ITER comes from Greenpeace, which argues that the expense of the project is not warranted for a technology that may come too late to help stem climate change; they argue that the 15 billion dollars should instead be spent on solar, wind, and other already proven renewables.

NIF, the National Ignition Facility
NIF is located at the Lawrence Livermore Lab. The fusion technology being developed there uses lasers to heat a small amount of hydrogen to very high temperatures (tens of millions of degrees) and ignite the fusion of deuterium and tritium. The author expects this design to be the first to reach the breakeven point in controlled fusion. Another name for this approach is inertial confinement fusion, because the ignition happens so rapidly (about a millionth of a second) that the fuel's own inertia is enough to confine the hydrogen even at such high temperatures. 192 synchronized lasers deliver a huge amount of energy in a very short burst, about a billionth of a second, generating 500 terawatts of power. This energy heats a surrounding shell, causing it to emit X-rays; the X-rays heat the fuel capsule and produce a shock wave that compresses the hydrogen inside. See the notes section for a detailed cost summary provided by the author. The critical number for competitiveness is the cost of the hydrogen targets, which must be below a dollar each. The advanced system that Livermore scientists are developing goes by the acronym LIFE, for laser inertial fusion energy.

Beam Fusion
In beam fusion, nuclei are accelerated to high speed in a beam and collide with target atoms. This technique is already used commercially in neutron generators, in which a beam of deuterons (deuterium nuclei) is accelerated by electric fields and collides with tritium-rich targets; the tritium fuses with the deuterium, producing helium and neutrons. Neutron generators are used to characterize rock structure deep underground for oil drilling, in a process called oil well logging. Other commercial applications include coal analysis in factories, cement process control, wall measurements, and so on. Beam fusion is currently not a viable fusion energy source because the energy input is higher than the energy output.

A company called Tri-Alpha is working on an undisclosed fusion technology that makes use of the reaction between hydrogen and boron, which produces three alpha particles (helium nuclei) and no neutrons (aneutronic). The technology involves accelerating the reactant particles in circular paths, with the hydrogen and boron presumably confined by magnetic fields and the electrons (present to keep the plasma neutral) confined by electric fields. Because the product particles are charged, this technique has the potential to convert their energy directly into electricity rather than heat.

Muon Fusion
A muon is a small, heavy particle that lives for about 2 millionths of a second before decaying into an electron (or a positron) and neutrinos. In this type of fusion, discovered in 1956 in a cold liquid-hydrogen chamber, a negative muon (207 times heavier than an electron) takes the place of the electron in a hydrogen atom, ejecting the electron and binding tightly to the proton. This nearly neutral muon-proton atom can then get close to a deuteron, and the two nuclei fuse, releasing energy and creating a helium nucleus. Even though ordinary hydrogen contains very little deuterium, the muon-proton atom finds a deuteron to fuse with within a few billionths of a second; a fusion reactor would use pure deuterium, making the reaction even faster. As with other fusion technologies, the energy required to create the muons must be less than the energy produced for the process to be commercially viable. The trick to sustaining muon-catalyzed fusion is to get each muon to catalyze about 350 fusions before it is lost (muons tend to stick to the helium nuclei they help create). The author offers his own suggestions for techniques that could make this process viable (see the chapter), focusing on lowering the energy needed to produce muons and getting each muon to catalyze more fusions before sticking to helium. One company, Star Scientific, has claimed that it has developed a way to produce muons with less energy. The author himself has worked on aspects of this field alongside the original discoverer, Luis Alvarez.

Cold Fusion
The author devotes this last section to the 1989 claim that cold fusion had been achieved, initially "verified" by scientists from top institutions, only to fizzle out as the consensus evolved to declare the results unreproducible and the methods questionable.


Part III – Chapter 13: Biofuels

In this chapter, the author discusses examples of biofuels and warns right away that some of what he is about to say may offend people passionate about biofuels. Right off the bat, he lists some of the contentious conclusions he has arrived at: "corn ethanol should not count as a biofuel as it does not reduce greenhouse emission; biodegradable and recycling are overhyped from a global warming perspective; ethanol from cellulose offers the best hope for a significant biofuel component to solving the energy problems; and the main value of biofuels is not reducing global warming but in increasing energy security."

Ethanol from Corn
The author gives the following reasons for why ethanol from corn should not be considered a biofuel:
·         It uses a lot of fertilizer
·         It takes a lot of oil and gasoline to run farm machinery for growing corn
·         The sugar it yields per acre, once fermented into ethanol, is not enough to make the process carbon neutral or to produce a net reduction in carbon dioxide.

Using corn to make ethanol has also raised prices for corn-based food. Corn ethanol does offer the advantage of providing another domestic source of transportation fuel and contributing to energy security (the author estimates it covers about 3% of US consumption, equivalent to about 5% of US imports), despite the fact that it provides only two-thirds of the energy of gasoline on a per-gallon basis.

Biodegradable is bad?
From the global warming point of view, biodegradable materials are "bad" because they decompose and release carbon dioxide. The author concedes, however, that from an aesthetic and animal-welfare point of view (reducing the plastic that ends up in our oceans, kills animals, and clutters the landscape), biodegradability does have benefits.

Pseudo-biofuels
The author does not consider waste cooking oil a true biofuel. He argues that burning waste oil as fuel adds carbon dioxide to the atmosphere and is no better than burning petroleum.

He also considers recycling paper bad for global warming: burying paper would sequester its carbon, whereas letting it biodegrade returns carbon dioxide to the air. And if paper is not recycled, more trees have to be grown to make new paper, which removes carbon dioxide from the atmosphere.

The Altamont landfill in California generates 13,000 gallons of liquefied natural gas per day from captured methane, which it uses to run its waste and recycling trucks. The capture amounts to 93% of the methane produced; the other 7% leaks into the atmosphere, where it acts as a potent greenhouse gas.

The author uses a somewhat tongue-in-cheek tone in these sections.


Ethanol from Cellulose
Cellulose, normally indigestible by humans, can be converted to the liquid fuel ethanol by fermentation, using enzymes from microorganisms, fungi, or yeast. The top candidate crops are switchgrass and miscanthus, a grass that grows over 11 feet tall and can yield three crops per year. Miscanthus is projected to produce, in theory, 1,150 gallons of ethanol per acre, compared with corn's 440 gallons per acre. Cellulose provides about 1/3 the energy of an equal weight of gasoline. The author estimates that replacing the 1 billion tons of oil we use each year would require growing miscanthus on an area 560 miles on each side (about 6 times the size of Iowa), assuming no energy loss in the conversion.
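
A rough check of the land-area arithmetic; Iowa's area and the acre conversion are my figures, while the per-acre yield is the author's.

area_sq_miles = 560 ** 2                  # a square 560 miles on a side
iowa_sq_miles = 56_273                    # approximate area of Iowa (assumption, not from the book)
acres = area_sq_miles * 640               # 640 acres per square mile
ethanol_gallons = acres * 1150            # at the quoted 1,150 gallons of ethanol per acre
print(f"area ≈ {area_sq_miles:,} sq mi ≈ {area_sq_miles / iowa_sq_miles:.1f} Iowas")
print(f"≈ {acres / 1e6:.0f} million acres, yielding ≈ {ethanol_gallons / 1e9:.0f} billion gallons of ethanol per year")
# ≈ 313,600 sq mi (about 5.6 Iowas) and roughly 230 billion gallons of ethanol,
# the same order of magnitude as the few hundred billion gallons of oil the US
# burns each year.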

Ethanol from algae
The author thinks algae have even better potential for producing fuels. The "right kind of algae" could produce oil usable as diesel without expensive conversion steps in between. Algae are very efficient at producing biomass from sunlight: every cell produces biomass, compared with just the leaf-surface cells in grasses. Proponents claim that algae can "produce ten times the energy per acre that Miscanthus can produce". Commercial ventures lead the research and development of this oil-producing technology, using genetic engineering, primarily induced mutations, to find the "right kind of algae". Algae production, however, is very sensitive to environmental factors and biological contamination, whereas miscanthus is less vulnerable to extreme weather and invasive species.

In the end, the author does not put a high value on bioethanol or other biofuels in terms of limiting the greenhouse effect. Even if biofuel replaces gasoline, there would only be a limited reduction in predicted temperature rise.  In terms of energy security, bioethanol may come too late and may be too expensive to compete with other cheaper fuels like compressed natural gas, synfuel, or shale gas.


Part III – Chapter 14: Synfuel and High-Tech Fossil Fuels

In the beginning of this chapter, the author reiterates that while the US is running low on oil, this is not the case for natural gas and coal.  And, as he points out, while this helps energy security, it is not good for greenhouse emissions.  A large supply of natural gas and coal does not help energy sectors that require liquid fuels, especially transportation needs.  Transportation infrastructure is built around using oil.  Shale oil and shale gas are also fossil fuel alternatives discussed in a previous chapter. The author discusses some other “unconventional” sources of fossil fuel in this chapter.

Synfuel
The Fischer-Tropsch chemical process for converting coal to oil was developed in Germany and used extensively there during World War II. The process, referred to today as CTL (coal to liquid), has been used by the company Sasol in South Africa, which produced oil during the embargo years of the apartheid era. In 2011 Sasol announced plans to build a gas-to-liquid (GTL) plant in Louisiana to produce oil from natural gas, projected at about 100,000 barrels of diesel fuel per day. According to the author, the US is shying away from building more of these plants, even with the low cost and glut of natural gas, because of the uncertainty in oil prices: "Saudi Arabia can undercut any threatening technology as long as it has a surplus capacity since it can pump oil for under $3 a barrel".

George W. Bush signed the Energy Independence and Security Act to reduce the vulnerability of US energy needs to OPEC control of the oil market, which could "emasculate our (military) forces by a simple embargo". The act provided loan guarantees, tax credits, and subsidies. Before it passed, however, the synfuel provision was cut out because of concern that using coal as the source of oil would increase greenhouse emissions, "trumping" national security concerns. The author puts this in the following quantitative perspective: "Recall that the US automobile has contributed about 1/40 Celsius degrees to global warming. In the next 50 years, assuming we adopt reasonable automobile emission standards, we should be able to limit the temperature rise attributable to the US automobile to an additional 1/40 C. A switch to 100% synfuel would boost that to about 1/30 C. The danger of that much rise is what you need to balance against the possible national security needs. In addition, you might want to consider the role that synfuel might play in reducing the balance-of-payments deficit."

The author predicts that there would be a growth in the construction of synfuel facilities; subsidies are no longer necessary because of lower natural gas prices. Chevron and Sasol have started a joint venture for GTL in Qatar.


Coal Bed Methane
Coal bed methane is methane extracted from deep coal deposits by drilling down and letting the methane escape; fracking and horizontal drilling can be used as well. This methane is relatively pure, free of hydrogen sulfide and of heavier hydrocarbons like propane and butane, and is nicknamed "sweet gas".

Coal Bed Gasification
In this process, inspired by a coal-deposit fire in Australia that was ignited by lightning thousands of years ago, deeply embedded coal is partially burned to extract its energy without digging it up and bringing it to the surface. The partial combustion produces other fuels, such as carbon monoxide and hydrogen, a mixture called coal gas. "It is the ultimate in remote chemistry." Another advantage of the process is that the ash stays buried. The coal gas can also be collected as feed gas for the Fischer-Tropsch process and for methanol synthesis. The disadvantages include heat loss, wasted unburned coal, and potential pollution of the water table.

Enhanced Oil Recovery (EOR)
Oil is distributed under pressure in rock pores and cracks, and that pressure pushes only about 20% of the oil up and out on its own. Secondary recovery, in which the oil is flushed out with water, natural gas, or carbon dioxide, can boost this to 40%; using carbon dioxide has the added advantage of sequestering it, although the amount is a very small fraction of what needs to be removed from the atmosphere. Enhanced oil recovery methods aim to recover the remaining 60% through techniques such as:
·         reducing the oil's viscosity by heating it, either by steam injection or by pumping down air or oxygen so that some of the oil burns and heats the rocks
·         pumping soap (surfactant) to release the oil from the rocks
·         sending down bacteria that can break down the more viscous, longer-chain hydrocarbons

Oil Sands
Canada is third in the world, after Venezuela and Saudi Arabia, in recoverable oil reserves. Most of this oil is in the form of oil sands (or tar sands): heavy crude oil called bitumen mixed with clay and sand. Estimates run from a conservative 200 billion barrels to an optimistic 2 trillion barrels (by Shell Oil). Two trillion barrels would be enough to supply the US for 250 years, or the world for 60 years, at current consumption. Objections to exploiting Canada's oil sands include the ugly open-pit mines left behind (because much of the oil is near the surface), local water pollution, and the large amounts of water required. The recovery process itself consumes about 12% of the energy of the oil extracted. The author believes that synfuel from natural gas and, in the long term, shale oil will compete with oil sands.
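
A quick check of the years-of-supply arithmetic; the consumption rates are my assumptions for the book's era, not the author's figures.

reserves_barrels = 2e12                  # Shell's optimistic estimate quoted above
us_use_per_year = 7e9                    # assumed US consumption, barrels/yr (~19 million barrels/day)
world_use_per_year = 32e9                # assumed world consumption, barrels/yr (~88 million barrels/day)
print(f"US supply ≈ {reserves_barrels / us_use_per_year:.0f} years")
print(f"world supply ≈ {reserves_barrels / world_use_per_year:.0f} years")
# Close to the book's round figures of 250 and 60 years.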


Part III – Chapter 15: Alternative Alternatives: Hydrogen, Geothermal, Tidal, and Wave Power

The author devotes this chapter to other alternative sources that show so little promise in cost-effectiveness and efficiency that he refers to them as "alternative alternatives".

Hydrogen
The author thinks that hydrogen automobiles were never a good idea because of the following two disadvantages they share with electric cars:
Hydrogen requires a lot of energy to extract from water by electrolysis or to produce by reacting methane with water (a process that also releases carbon dioxide). Using the hydrogen as fuel returns only part of that energy input. It is much cheaper to use methane directly as a fuel, either by combustion or in a methane fuel cell.
A hydrogen car would need either a very large tank to hold the larger volume of hydrogen required for the same energy, or it would have a short range. Even though hydrogen contains 2.6 times more energy per pound than gasoline, it takes up far more volume: at maximum practical pressure, it takes about 10 gallons of hydrogen to match the energy of 1 gallon of gasoline in an ICE, and about 6 gallons per gallon of gasoline in a fuel cell car. Liquefied hydrogen contains about three times more energy per gallon than the compressed gas but requires very low temperatures and specialized storage and delivery. Hydrogen is explosive over a wide range of fuel-to-air concentrations, 4%-75% in air (natural gas is explosive only between 5% and 15%), which adds to the challenge of transport, delivery, and storage. Hydrogen's extremely light weight adds no value in an automobile, although the author concedes that it is why hydrogen works so well as rocket fuel. Advocates argue that if cars are made lighter, a hydrogen car can reach a 300-mile range, but the author counters that the same weight reduction in an ordinary car would raise its mileage from 35 to 100 mpg. The main advantage of hydrogen is its potential to reduce or eliminate greenhouse emissions, provided the energy used for electrolysis is also a low carbon dioxide emitter (solar, wind, nuclear). Given that automobiles are projected to contribute only about 1/40 °C to warming, a switch to hydrogen would benefit energy security more than it would reduce greenhouse emissions. The author reiterates that natural gas remains the competitive alternative, and he does not think a profit-making market for hydrogen cars will emerge.

Geothermal
Geothermal energy is a good alternative source, but only in areas where the heat is concentrated enough for efficient conversion to heating and electricity. For example, Iceland gets about 50% of its electrical and heating energy from geothermal sources, and California generates 6% of its electrical power (2.5 gigawatts) from steam in a 30-square-mile area known as the Geysers. Although the interior of the Earth generates about 44 terawatts of heat (mostly from radioactivity in the upper crust), that heat is so diffusely distributed that it averages only about 0.1 watt per square meter of surface; by contrast, sunlight delivers up to 1,000 watts per square meter, and about 250 watts per square meter averaged over day and night and across latitudes. Carnot-efficiency calculations by the author give only about 9% efficiency for extracting power from low-grade geothermal heat, and "fracking" for this heat is not nearly as cost-effective as fracking for natural gas. The author criticizes a 2007 MIT report on geothermal mining as being "full of both optimism bias and skepticism bias".
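
Two of those numbers are easy to reproduce; in the sketch below, the Earth's surface area and the illustrative reservoir and ambient temperatures are my assumptions, not the author's.

import math
earth_radius_m = 6.371e6
surface_area_m2 = 4 * math.pi * earth_radius_m ** 2
geothermal_heat_w = 44e12
print(f"average geothermal flux ≈ {geothermal_heat_w / surface_area_m2:.2f} W per square meter")  # ≈ 0.09

# Carnot limit for low-grade geothermal heat, with assumed temperatures
t_hot, t_cold = 325.0, 295.0     # e.g. a ~52 °C reservoir against ~22 °C ambient (illustrative)
print(f"Carnot efficiency ≈ {(1 - t_cold / t_hot) * 100:.0f}%")   # ≈ 9%, matching the author's figure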

Tidal Power
Tidal power is another low-density source of energy, at about 0.1 watt per square meter. Nevertheless, tidal power has been commercialized, most successfully in France across the entrance of the Rance River tidal basin. Two factors contributed to that plant's success: a huge tidal range of 26 feet and a large amount of water flowing through the dam, generating an average of 100 megawatts of electric power (see the picture in the chapter), with a peak of about 240 megawatts. Resonance effects, which amplify the natural "sloshing" of water in the basin, create these huge tides. The construction loans were fully paid off after 46 years, and the plant can generate electricity at 1.8 cents per kilowatt-hour. Such huge tides are not common; New York or San Francisco gets tides of only about 6 feet. The barrage, or dam, can also cause substantial environmental damage. In New Zealand, submerged generators extract power from 7-foot tides; the installation had a high initial capital cost of $3 per watt, though not out of range of other power plants. South Korea has the largest tidal station, with a peak production of 254 megawatts from 18-foot tides. The Bay of Fundy, which has 56-foot tides spread across a very wide mouth, also has a tidal station, generating about 20 megawatts. Another has been proposed by the Golden Gate Energy Company in San Francisco. Much like geothermal, tidal power is concentrated in only a few places and so is not a widely available alternative.

Wave Power
The author does not give much credit to this source either because, again, of low power density. Although global wave power could deliver up to 3 terawatts, that energy is very diffuse, since wave heights average only about 1 meter. Intercepting 100 meters of such waves would generate only about 1 megawatt, compared with 7 megawatts from a single large wind turbine. The Pelamis Wave Energy Converter in Portugal has successfully extracted half the power of the waves, about 5 megawatts per kilometer, but at a very high cost of $7.50 per watt installed, and that includes maintenance costs. The high cost is why the author does not expect wave power to matter much.


Part III – Chapter 16: Electric Automobiles
The author begins this chapter on electric cars by listing the reasons for Americans’ (and others’, I suppose) dependence (he used the word addiction) on gasoline automobiles:
·         Fill rate – Gasoline can be delivered to a car at a rate of 2 gallons per minute, an energy transfer rate of about 4 megawatts, or an effective 1 megawatt once the engine's 20-25% efficiency is taken into account (electric motors are 80-90% efficient); see the sketch after this list.
·         Range – a 10-gallon fill-up can cover 300 miles in an average car
·         Residue – no waste to clean up in tanks
·         Cost – 10 cents per mile at $3.50 per gallon and 35 mpg. According to the author, gasoline is cheap enough in the US that people choose to live farther from work and commute in order to get better housing value. The average commute is 26 minutes for 16 miles, roughly a gallon of gasoline a day at the figures above. In Europe, the cost-effective decision leans toward driving less, because taxes push up the price of fuel.
·         Emissions – after pollution controls were added to remove most of the soot and nitrogen oxides, the exhaust is mainly carbon dioxide and water. Carbon dioxide, however, is now a concern because it is a potent greenhouse gas.
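
A quick sketch of the fill-rate and fuel-cost arithmetic referenced in the list above; the 33.4 kWh-per-gallon energy content of gasoline is my assumed value.

gasoline_kwh_per_gallon = 33.4           # assumed energy content of gasoline
fill_rate_gal_per_min = 2.0
fill_power_mw = fill_rate_gal_per_min * gasoline_kwh_per_gallon * 60 / 1000   # kWh per hour -> MW
print(f"fill power ≈ {fill_power_mw:.1f} MW, or ≈ {fill_power_mw * 0.25:.1f} MW after ~25% engine efficiency")

price_per_gallon, mpg = 3.50, 35.0
print(f"fuel cost ≈ {price_per_gallon / mpg * 100:.0f} cents per mile")    # ≈ 10 cents per mile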

Other key facts he noted are:
Global warming – automobiles have contributed 1/40 Celsius degrees to the rise and will likely contribute about the same amount for the next 50 years.

This contribution to global warming was calculated in Chapter 13 and also in the notes section. A 60% increase in vehicle miles driven in the US over the next 50 years, as projected by the US Energy Information Administration, may raise this contribution to 1/25 °C, unless stricter mpg standards kick in and hold it at 1/40 °C (0.045 °F). In 2010, hybrids constituted only 2.5% of all vehicle sales in the US; electric cars, an even smaller share.

Trade deficit – about half is from importing oil
Battery-auto range – a kilowatt-hour of electrical energy can power a car for only 2-3 miles, and it takes an hour to charge a battery by that amount using a 1-kilowatt source
Battery cost – electrical energy is cheap but the expensive batteries need to be replaced after 500 charges.

The Electric Auto Fad
The author is actually a proponent of hybrid cars and thinks that in a decade or so all cars will be hybrids. He does, however, have a bone to pick with electric cars. Here is the author's statement on why he thinks electric cars are a fad: "In virtually no other area of energy science and policy is there so much unchallenged hype as in the field of electric cars. And in no other field is there a group more enthusiastic, more optimistic, more…fanatical…than those excited about electric cars." He notes that energy density, cost, and recharge time are the three fundamental problems that need to be solved before electric cars become a more viable option for the average American car buyer. [Note: this book was published in 2012. A lot has changed since then about electric cars.]

Energy density – a pound of batteries stores only about 1% of the energy contained in a pound of gasoline. Because electric motors can be up to 4 times more efficient than internal combustion engines, the effective fraction rises to about 4%.

Price – most of the cost of electric cars comes from the batteries themselves, not the electricity. Most electric cars use lithium-ion batteries, which range in price between $30 and $150 per pound. Calculations by the author, using assumed average values for battery price and number of recharges, put the cost of energy for electric cars at about $2 per kilowatt-hour delivered, compared with 10 cents per kilowatt-hour for home electricity. "Replacement cost overwhelms electricity cost." In this section he compares three of the best-known electric cars: the Tesla Roadster, the Chevy Volt, and the Nissan Leaf. While it was an interesting read, this is where the numbers most need updating, as technology, prices, and the market have changed considerably since the book was published. His conclusions: the Tesla Roadster costs about 70 cents per mile, the Chevy Volt 75 cents per mile, and the Nissan Leaf 44 cents per mile. On top of these operating costs is the cost of eventually replacing the batteries, which can run into the tens of thousands of dollars. Based on the numbers of those years, the author concludes: "Don't buy an all-electric car hoping to save money", adding "I expect the interest in all-electric automobiles to die soon, when the batteries need replacement and the owners discover how much they cost."
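
A minimal sketch of the kind of battery-cost arithmetic the author describes; the specific price, energy density, and cycle life below are illustrative assumptions chosen from within the ranges he quotes, not his exact numbers.

battery_price_per_lb = 45.0      # assumed, within the $30-$150/lb range quoted above
energy_per_lb_wh = 45.0          # assumed lithium-ion energy density, Wh per pound
cycles = 500                     # recharges before replacement (figure quoted earlier in the chapter)

capacity_cost_per_kwh = battery_price_per_lb / (energy_per_lb_wh / 1000)   # $ per kWh of capacity
delivered_cost_per_kwh = capacity_cost_per_kwh / cycles                    # spread over the battery's life
electricity_per_kwh = 0.10
print(f"battery capacity cost ≈ ${capacity_cost_per_kwh:,.0f} per kWh")
print(f"cost per delivered kWh ≈ ${delivered_cost_per_kwh + electricity_per_kwh:.2f}")
# ≈ $2 per kWh delivered, dominated by battery replacement rather than the
# 10-cent electricity: the point of the author's "replacement cost overwhelms
# electricity cost" remark.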

Plug-in Hybrid Electric Vehicles
Based on numbers he presented, the author concludes the same thing with PHEV: if you want to save money, do not use the Electric Vehicle feature. 

The author does not expect the price of batteries to go down.

Lead-Acid Batteries
These batteries provide the cheapest energy source for electric cars but also the shortest range. Their combined electricity and replacement costs per mile are comparable to those of gasoline, but they carry only half the energy per pound of a lithium-ion battery.

The author reiterates that it is the cost of the batteries that “killed the electric car”.

Recharge Time
The Tesla Roadster takes close to 4 hours to charge at 240 volts and 70 amps. An alternative is a battery-swap program.
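
The implied charging power and energy follow directly from those numbers:

volts, amps, hours = 240.0, 70.0, 4.0
power_kw = volts * amps / 1000.0
print(f"charging power ≈ {power_kw:.1f} kW")
print(f"energy if held at full power for {hours:.0f} h ≈ {power_kw * hours:.0f} kWh")
# In practice the charge rate tapers and some energy is lost as heat, so the
# battery pack itself stores somewhat less than this.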


Effective Miles per Gallon
The author offers a more nuanced perspective on the effective miles per gallon of electric cars when the source of the electricity is taken into account. While electric motors and batteries are quite efficient, generating and delivering the electrical energy to the car involves losses that are not accounted for in the EPA's calculations. Based on his calculations, electric vehicles have about 27% overall energy efficiency, just 7 points higher than the 20% efficiency of an internal combustion engine. In addition, if the electricity used for charging comes from a coal-fired plant, the carbon dioxide emissions are much higher than those of a gasoline car.


Ordinary Hybrids
The Prius (which the author owns) uses a battery to run the car during the least efficient parts of the driving cycle. It uses a small NiMH battery that stores about 2 kilowatt-hours, enough for about 4-6 miles. The battery kicks in when the car first accelerates and recharges when it slows down: in regenerative braking, the wheels turn an electric generator that recharges the battery, which in turn slows the car. The battery can also be recharged by a generator run by the gasoline motor. Because the battery is not used all the time, its life span is longer. A Consumer Reports test showed little degradation in the battery of a Prius driven 200,000 miles in 8 years, though most of those miles were probably highway miles and therefore did not exercise the battery much. A Prius battery replacement costs $2,200, which works out to $1,100 per kilowatt-hour, comparable to the Roadster and the Leaf.

Battery Challenges
The author offers the following factors that need to be improved to make electric vehicles economically competitive:
·         Low energy density (energy per pound), about 1/25 that of gasoline
·         High cost per mile, 5-8 times that of gasoline (including electricity and battery-replacement costs)
·         Large storage volume, about 10 times bigger for the same range
·         Long refill/recharge time
·         High initial capital cost of batteries, in the tens of thousands of dollars

Battery chemistry is well established, but the challenge lies in the nanotechnology of the electrodes and electrolytes. The author does not think this can be solved within the next 20 years to the degree needed to make electric cars competitive with gasoline cars, though he still believes the future of autos lies in cheap batteries. Electric cars running on lead-acid batteries, with a modest 40-60 mile range, may be attractive in developing countries where people are not accustomed to long commutes.


Part III – Chapter 17: Natural Gas Automobiles

In this chapter, the author touts the idea of running automobiles on natural gas, citing advantages and some disadvantages and showing his usual calculations of the economic benefits or lack thereof. He starts by pointing out that gasoline costs about 2.5 times (ATTOW) what natural gas costs for the same energy content: he estimates that CNG costs 4 cents per mile versus gasoline's 10 cents per mile. Converting to natural gas as a car fuel requires some infrastructure and vehicle modifications. He favors compressed natural gas over liquefied natural gas, which must be kept at -259 °F. Natural gas can be compressed to about 250 atm but must then be stored in a steel or fiber composite tank; at that pressure it contains about 11 kilowatt-hours of energy per gallon, compared with 33 kilowatt-hours per gallon for gasoline. Honda has been selling a CNG car since 1998. Extra costs go into a device for compressing home natural gas to fill the car's tank and into the high-pressure fiber composite tank that stores the CNG. The cost of the home compressor is avoided if the car is used in an area with numerous CNG filling stations, or reduced if the compressor can be shared; this has proven economical for many companies that run fleets of CNG cars sharing a single compressor. A regular car can be converted to run on both gasoline and CNG. The author estimates a 20% return on investment if the compressor cost is excluded, but only breakeven if it is included. In areas where gasoline is much more expensive than natural gas, CNG is the more cost-effective option. Globally there are about 12 million CNG cars: 3 million in Pakistan, 1 million in India, about half a million in the OECD countries, but only about 150,000 in the US (ATTOW).

Other advantages of CNG noted by the author: natural gas is less dense than air, so it rises when it leaks; it has a higher ignition temperature; it reduces engine wear; and the sturdy tank it is stored in can survive crashes with hardly a dent. Its main disadvantage is that a tank of CNG gives only about a third of the range of the same-size tank of gasoline, so a car needs a larger tank to reach the 250-mile range that gasoline cars average. The larger tank, however, makes the car heavier, which reduces its mpg.
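
The one-third range figure and the 2.5x fuel-cost ratio follow directly from the numbers quoted above:

cng_kwh_per_gallon = 11.0        # at ~250 atm, as quoted above
gasoline_kwh_per_gallon = 33.0
print(f"range per gallon of tank volume ≈ {cng_kwh_per_gallon / gasoline_kwh_per_gallon:.2f} of gasoline's")  # ≈ 1/3

gasoline_cents_per_mile = 10.0
cng_cents_per_mile = 4.0
print(f"fuel cost ratio ≈ {gasoline_cents_per_mile / cng_cents_per_mile:.1f}x in gasoline's disfavor")        # 2.5x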


Part III – Chapter 18: Fuel Cells
In this chapter, the author discusses the viability of fuel cells as an energy source, primarily for electricity and transportation. He starts by describing a hydrogen fuel cell as simply a device that runs electrolysis in reverse to produce electrical energy. He points out that simply burning hydrogen and using the heat to generate electricity is less efficient, at 20-35%; hydrogen fuel cells have reported theoretical efficiencies of up to 83%. The other advantages of fuel cells are that they are relatively clean, small, simple, and quiet.

Fuel cells can also run on methane and air. Bloom Energy produces a fuel cell that runs at high temperatures and uses a solid ceramic electrolyte. At these high temperatures the reaction rates are fast, obviating the need for a catalyst. In addition, methane reacts with water to liberate hydrogen (along with carbon monoxide), which then serves as the fuel, avoiding the need for more expensive externally supplied hydrogen; this process is called "reforming". The solid ceramic electrolyte, while proprietary in its chemistry, is reported to be of the YSZ type, in which an yttrium compound stabilizes the zirconium oxide crystal. This type of ceramic allows the flow of heavy ions but not electrons (to physicists, "heavy ions are actually smaller than electrons in a quantum mechanical sense so they can sneak through the crystal structure"). Divalent oxygen ions diffuse through the ceramic electrolyte and combine with the hydrogen on the other side, releasing two electrons that flow through an external wire and generate electricity. Bloom Energy fuel cells have the advantage of being installed near the consumer, eliminating the roughly 7% energy loss from transmission. It is also an expensive system, at $7-8 per installed watt, about 7 times the cost of a natural gas turbine plant. At the Bloom Energy installation at the Fireman's Fund headquarters in California, the waste heat is also used to heat nearby buildings.

Another fuel cell manufacturer, United Technologies Corporation, has been producing the fuel cells used in space missions and also has installed 75 megawatts of phosphoric acid fuel cells in 19 countries.  In these fuel cells, the hydrogen is externally produced in a reformer through the same methane-water reaction.  They are about 40% efficient.

Because it requires a high-temperature electrolyte, the Bloom Energy technology is not easily transferable to automobiles. A catalyst can solve the reaction-rate problem at lower temperatures, but platinum catalysts are very expensive. The Honda Clarity uses a hydrogen fuel cell that depends on such expensive catalysts, and Toyota plans to release one this year, according to the author.


Part III – Chapter 19: Clean Coal

This chapter starts out with a list of the reasons why the author refers to coal as the “filthiest fuel we have”:

·         Coal produces twice the amount of carbon dioxide as natural gas for every kilowatt-hour of energy produced.
·         Combustion of low-grade coal produces sulfur dioxide which reacts with water to form acid rain that can kill trees, acidify bodies of water, and dissolve marble structures.
·         Use of coal releases mercury.
·         Burning coal produces fly ash and soot, some of which ends up as black carbon on Greenland's melting ice.
·         It produces the air pollution frequently reported in Beijing.

But coal is commonly used because it is an inexpensive source of energy. The electricity is even cheaper, 6-7 cents a kilowatt-hour, if the plant is located near a coal mine, as in Kentucky, Idaho, and Wyoming.

There are a couple of processes that can prevent or reduce the pollution from coal-burning plants, including China's:
·         Installation of electrostatic precipitators, charged plates that attract the particles and remove them from the gas.
·         Sulfur dioxide scrubbers, which spray the gas with soap and other chemicals such as sodium hydroxide, lime, sodium sulfite, and ammonia (all bases).
These add about 1-2 cents per kilowatt-hour to the cost but can do much to lower the health costs of the air pollution produced. China is "desperately working" to solve its air pollution problem by cleaning up the exhaust from its coal plants.

A coal plant that produces 1 gigawatt of electricity emits a ton of carbon dioxide every 2 seconds (see the author's notes for how he arrives at this estimate). With China adding a new gigawatt coal plant every week (ATTOW), the rate of carbon dioxide emissions grows by another ton every 2 seconds with each passing week.

What is clean coal? An oxymoron, to coal opponents. Clean coal is the term used for coal combustion with carbon dioxide sequestration, going beyond scrubbers and particle traps. Another term is carbon dioxide capture and sequestration, or CCS. A few methods have been proposed, the best of which requires redesigning the entire coal plant so that the carbon dioxide is captured within the plant itself rather than scrubbed from the stack.

The most well-developed process was called FutureGen:
·         Oxygen is separated from the air so that the coal burns in pure oxygen. This improves the efficiency of the turbine and also makes it easier to separate the carbon dioxide later.
·         The coal is gasified by reacting it with hot water, producing hydrogen and carbon dioxide, which are easily separated.
·         The hydrogen is burned in a turbine, with the waste heat driving a second turbine (combined cycle).
·         The separated carbon dioxide is compressed and pumped underground.

In Texas, about 10,000 wells already pump carbon dioxide underground for enhanced oil recovery.
Other potential storage locations include empty coal mines, depleted oil and gas wells, and underground reservoirs containing salt brines (their impermeability is good for keeping the carbon dioxide stored for hundreds or thousands of years). Norway has stored 11 million tons of carbon dioxide in the latter type of formation.

In 2009, FutureGen was cancelled due to cost overruns.

Carbon sequestration, according to the US EIA, can add 3 cents per kilowatt-hour to the cost of electricity. While this is not such a huge amount for people living in the US, it is substantial for a country like China, which produces 4 trillion kilowatt-hours of electricity from coal. At 3 cents a kilowatt-hour, that is an annual bill of $120 billion, projected to grow 10% a year along with China's electricity use.
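
The arithmetic behind that bill, together with the per-plant emission rate quoted a few paragraphs earlier:

china_coal_kwh_per_year = 4e12
ccs_cost_per_kwh = 0.03
print(f"annual CCS bill ≈ ${china_coal_kwh_per_year * ccs_cost_per_kwh / 1e9:.0f} billion")   # ≈ $120 billion

seconds_per_year = 365 * 24 * 3600
tons_per_year_per_plant = seconds_per_year / 2        # one ton of CO2 every 2 seconds per 1-GW plant
print(f"one 1-GW coal plant ≈ {tons_per_year_per_plant / 1e6:.0f} million tons of CO2 per year")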

The author is of the opinion that, in tackling greenhouse emissions, it might pay to help a powerhouse like China develop low-cost technology to bring its energy use and emissions down. China could also be encouraged to convert from coal to natural gas, which would cut the associated greenhouse emissions by about 50%.

PART IV – What is Energy?

In this optional chapter, the author gives a very brief primer on what energy is, starting from a high school/college freshman explanation and ending with the most abstract but profound explanation at the college senior/graduate level: energy is the quantity that is conserved because the equations of physics (physical laws) are invariant in time (momentum is the quantity conserved because they are invariant in space). At the beginning he presents a table comparing the energy content of different materials. What was surprising to me (and, as he admits, to the author) is that 1 pound of chocolate chip cookies contains 7.7 times the energy of 1 pound of TNT (see the table). I will be using this in my class to familiarize students with energy content and also, as the author does in his class, to illustrate the difference between energy and power.
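
For my class, a tiny sketch of where a ratio like that comes from; the per-gram energy values below are my assumed round numbers (the food-calorie content of cookies versus the chemical energy of TNT), not the exact entries from the author's table.

cookie_kcal_per_gram = 5.0    # assumed: typical food-label value for chocolate chip cookies
tnt_kcal_per_gram = 0.65      # assumed: chemical energy released by exploding TNT
print(f"cookies / TNT ≈ {cookie_kcal_per_gram / tnt_kcal_per_gram:.1f}x")   # ≈ 7.7x
# TNT is destructive not because it stores more energy but because it releases
# its energy in microseconds: the difference between energy and power.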

PART V – Advice for Future Presidents

This part offers a comprehensive summary of many of the recommendations and key points from the previous chapters. The author starts off with the following disclaimer: "The role of a science advisor should not be to advise but rather to inform and educate so that the president knows and understands enough to be able to make the right decisions…you will have to balance many things including diplomacy, economics, justice, and politics – things that scientists like me have not mastered. Any advice I give you is based on my narrow technological understanding of the world…".

The author repeats his previous statement that the “true energy crisis” in the United States and the world stems from energy security and global warming. He reiterates and summarizes many of the points he has brought in the previous chapters.

The energy security issue derives from the shortage of oil and the growing gap between domestic petroleum production and the demand for gasoline, diesel, and jet fuel. There is no shortage of energy sources like coal and natural gas. The oil shortage imperils national and military security and also contributes greatly to the country's trade deficit, which harms the economy.

He suggests the following immediate action: "rapidly exploring our great shale gas and oil reserves, building an infrastructure to handle the distribution, and creating a strong synfuel capability". Natural gas, synfuel, and shale oil have the potential to make the most meaningful impact on our energy security and trade balance.

The global warming issue stems from the significant greenhouse emissions from burning coal. He reiterates, however, that most future emissions will come from developing countries, so it is important that at least some of the solutions we come up with are affordable enough to be applied in poorer countries. With this in mind, he recommends as an immediate action a "large scale conversion from coal to natural gas", the "only plausible method that holds up to scrutiny and that may be economically viable".

In the next sections, the author details his categorization and prioritization recommendations for the different technologies in terms of their long-term and short-term prospects.  These technologies are classified as (verbatim list):
Important part of our energy future:
·         Energy productivity (both efficiency and conservation)
·         Hybrid and other autos with improved mileage
·         Shale gas (for coal replacement, autos, synfuels)
·         Synfuel (gas to liquid and coal to liquid)
·         Shale oil
·         Smart grid

Have breakout potential
·         Photovoltaics
·         Wind (and an improved grid to deliver it)
·         Nuclear power (both old and new generations)
·         Batteries (to back up PV’s and wind)
·         Biofuel (particularly grasses such as Miscanthus)
·         Fuel cells (particularly methane – based)
·         Flywheels

Least likely to solve our problems
·         Hydrogen economy
·         All-electric autos and plug-in hybrids
·         Corn to ethanol
·         Solar thermal
·         Geothermal
·         Wave and tidal power
·         Methane hydrates
·         Algae biofuel

He elaborates on many of these items but most of them are repetitions of statements the author has previously made in the other chapters. A few notable statements are:

Subsidies can help a technology become competitive quickly, the rapid decrease in the cost of solar being a good example; but they can also prop up technologies with no long-term promise.

On energy catastrophes, the author recommends that dangers too small to measure or detect should not drive policy-making, and he repeats his suggestion to adopt the Denver dose as the standard for radioactivity exposure limits.

Risk-benefit calculations are complex and can be "misleading". He cites the question of whether the virtual US moratorium on new nuclear plants over the last 30 years actually saved lives, given that it led to increased coal use, which releases harmful pollutants such as mercury and sulfuric acid. How should the dangers of different modes of energy generation be compared quantitatively?

On skepticism and optimism bias, “claims based on conviction are not as valid as those based on objective analysis”.

The author notes that "green, renewable, and clean" are "feel-good" terms that can be interpreted to rule out important energy sources such as nuclear power, natural gas, and synfuels. "Better to use sustainable and to interpret that as sustainable over the next 20-40 years, since we can't even guess at technology beyond those horizons." [Reader's note: the sun as an energy source is truly sustainable for an estimated 5 billion more years, but our technologies and the material resources needed to extract that energy may not be.] "Even better is to use alternative, since our balance of payments and transportation energy security do make for a current crisis." Further, he notes, very practically I think, that energy sources such as shale gas may be greenhouse emitters and not sustainable over the very long term, but they will help keep us energy secure "through a tough period" until better, more sustainable alternatives are found. He also ascribes more truth to the statement "to be truly sustainable, it must be profitable", with profits taking into account indirect costs such as those to the environment ("Unfortunately, there is no true consensus on how to measure those costs"). Lastly, "a gallon saved is a gallon not imported" is another aphorism the author likes.

At the end of the book in the section “Your Legacy”, the author has the following advice for future presidents: “Your greatest challenge will be striking a balance between global warming and energy security…To earn your place in history, you have to have vision, trust in science and objective analysis, and think long-term.”