Weatherdem's Weblog

Bridging climate science, citizens, and policy



Climate News & Opinion Links – March 26 2014

I’ve collected a number of interesting climate and energy related news releases, stories, and opinion pieces in the past couple of weeks.  In no particular order:

The only way we will take large-scale climate action is if there are appropriate price signals in markets – signals that reach individual actors and influence their activities.  One step in the right direction was phasing out federal subsidies for high-risk coastal properties’ flood insurance policies, as Congress did in 2012.  This had the expected effect of increasing premiums for policy holders.  Unsurprisingly, people don’t want to pay more to live in their high-risk homes.  So they complained to their representatives, who responded by passing new legislation … reinstating government subsidies.  Taxpayers across the country are shoveling good money after bad for a select handful of wealthy people to build without mitigating risk to their homes or paying the true economic costs of their lifestyle decisions.  We will pay for them to rebuild again and again (remember: sea levels will rise for centuries) unless we as a society decide to stop.

Tesla is entering the energy industry.  This could be a game changer in terms of home solar energy and electric vehicles, no matter how Tesla comes out in the long-term.

20 years of IPCC effort and “achievement”.  With no robust international climate agreement after 20 years of work, I have a hard time accepting the claim that the IPCC has achieved much of anything except an excessive bureaucracy and huge reports that few people read.

News that’s not really news: Asia will be among the regions hardest hit by climate change.  This isn’t a new result, but something that the IPCC’s WGII report will state with increased confidence in 2014 versus 2007 (see above statement).  The number of people living close to coasts in Asia dwarfs the total population of the countries that historically emitted the most greenhouse gases.  That was true in 2007 and will be true in the future.  It will take a generation or more before effects on developed nations generate widespread action.

New research (subs. req’d) indicates ice gains in Antarctica’s Ross Sea will reverse by 2050.  Temperature and wind patterns will shift from their current state to one that encourages rapid ice melt, similar to what the Arctic experienced in the past 20 years or so.

An El Nino might be developing in the tropical Pacific.  The anomalous heat content traveling east via an equatorial Kelvin wave rivals that of the 1997-1998 El Nino, the strongest in recorded history.  Earlier this month, NOAA’s Climate Prediction Center issued an El Nino Watch, citing a 50% probability that an El Nino would develop in summer or fall 2014, based in part on projections such as Columbia University’s.  El Nino is the warm phase of the ENSO phenomenon: warm ocean waters move from the western to the eastern Pacific, affecting global atmospheric circulations.  Related to science policy, one result of Congress’ austerity approach to the economy is the degradation of monitoring buoys in the Pacific Ocean.  NOAA helped deploy a widespread network of buoys following the 1982-1983 El Nino, which helped track the progress of the 1997-1998 El Nino with greatly improved fidelity.  That network is now operating at less than 75% of its designed capacity, hampering observations.  If we can’t observe these impactful events, we can’t forecast their effects, which negatively impacts businesses’ and people’s bottom lines.

Finally, I want to make some observations regarding goings-on within the climate activist community.  Vocal critics recently spent a lot of energy on hit pieces, this being only one example (poorly written with little on science, heavy on “he-saids”, with an overdose of personal insults and vindictive responses to anyone who didn’t agree with the piece, including my comments).  These writings demonstrate something rather simple to me: if you do not agree with 100% of the activist consensus, you’re no better than the people the activists label ‘deniers’.  Additionally, their argument is absurd: social scientists have no business analyzing climate data or commenting on activists’ claims.  Why is this absurd?  Because they simultaneously hold the contradictory belief that physical scientists should have exclusive input and decision-making power over climate policy (a social creation).  Furthermore, implicit in their messaging is the idea that social scientists don’t have the right kind of expertise to participate in “serious” discussions.  These efforts to delegitimize someone they don’t believe should participate (how very elitist of them) are reminiscent of efforts by many in the Republican party to delegitimize Barack Obama’s presidency simply because of his race.  Nothing is gained and much is lost by these efforts.  How does this advance the climate discussion among people not currently involved, which will need to happen if we are to ever take any kind of large-scale climate action?

Additional lack of critical thought appears in this post, mostly in its penultimate paragraph:

I’ve said before that I think people can believe what they want, as long as they don’t try to act on those beliefs in a way that interferes with others’ lives. When they deny the reality of global warming, and preach it to their flock, that’s exactly what they’re doing (incidentally, a large fraction of Americans believe to some extent the Bible is literally true).

The very same complaint is made, in reverse, by the people the author derides in this paragraph and post, and it’s one of the biggest reasons why we’ve taken so little climate action.  The author’s condescension toward those who don’t believe exactly as he does is plainly evident.  Instead of trying to reach out to people with different beliefs (and underlying value systems), he takes the lazy route and spends his time insulting them.  Have you ever come to believe something you previously didn’t because someone insulted you?  No; it’s an absurd and self-defeating strategy.  These basic problems underlie most climate change discussions, and people retrench their positions instead of trying to step into others’ shoes.  I’m not sure how much this has to change before we undertake more widespread and effective climate mitigation strategies.



Newest Climate Change Consensus Document Won’t Matter…

It won’t matter unless and until physical scientists leverage expertise outside of their silos and stop executing failed strategies.  In addition to summary after summary of government-sanctioned, peer-reviewed scientific conclusions, scientists now think they need to report on the perceived consensus underlying those conclusions in order to spur the public to action.  Regardless of their personal political leanings, scientists are very conservative actors in their professional roles.  They have long-held traditions that are upheld at every turn, which reduces the urgency of their statements.  As an analogy, think of a group of people sitting down who think for long periods before taking any action.  First, they calmly say there is a situation that requires near-immediate action.  Then they say it a little louder.  Then a handful start yelling because you’re not responding to their carefully crafted words and they think that you just didn’t hear them or you just aren’t smart enough to understand those carefully crafted words.  Then they start screaming because they’re convinced you’re an idiot and screaming will definitely work where yelling and saying those words didn’t work before.

Well, the screaming isn’t helping, is it?  You’re not an idiot.  The volume of words isn’t the issue.  The issue is you are motivated by things outside of the climate realm – things like having a job; a job that pays a living wage so you can pay for your mortgage and car payment and keep your children educated and happy.  An existence in an affluent world that allows you the time and energy to think of complex problems beyond your perceived immediate needs.  If those needs aren’t met – if you have insecure affluence – you place climate change and the environment far down on a list of priorities – just like a majority of other Americans.

But the newly released report from the American Association for the Advancement of Science (which describes itself as “the world’s largest general scientific society”, with a membership of 121,200 scientists and “science supporters” globally) won’t change this dynamic.  While it is important that the AAAS engages scientists and the society it serves, this report is unfortunately just the latest effort by a group of physical scientists that ignores science results outside of their discipline while trying to convince Americans that immediate and drastic action is necessary.  Like previous efforts, this one will not spur people to action, mostly because the actions listed are about limits, stopping, restricting, reversing, preventing, and regulating.  The conceptual model from which these words arise works in direct contrast to the fundamentals of American culture.  We are a people who are imaginative, who innovate, who invest.

As I have written before, there is no way we will achieve greenhouse gas emissions reductions without substantial investment in the innovation of new technologies that we research, develop, and deploy at scale.  There is nothing limiting or restrictive about this framework.  It is the opposite of those things.  This framework recognizes and sets out to achieve opportunities; it allows for personal and cultural growth; it is in sync with the underlying cultural fabric of this country.  It directly addresses people’s perception of the security of their affluence in the same way that developing countries’ economic growth allows people to move beyond basic material needs to higher-order needs.

The reality of insecure affluence among many Americans today might be an indirect outcome of the 1%’s efforts to increase wealth disparity, but it is real.  We have to address that disparity first in order to address the real, valid perceptions of insecure affluence.  Only after Americans feel their personal wealth is secure will they have the resources to devote to higher-order needs such as global climate change.  That can happen with a concerted focus on investing in and innovating a post-carbon economy.  But you won’t see that at the top of any policy prescription from the majority of climate scientists.



Carbon Price Already Part of Doing Business

A report issued yesterday by the Carbon Disclosure Project generated a number of news articles today, including in the New York Times.  The report identified 29 major US corporations that include future carbon prices in their financial planning.  This is a significant and logical development.  It is financially responsible for companies with billions of annual revenue dollars to consider upcoming costs in their planning.  These companies aren’t partisan; they’re interested only in making money.  If they think there is a way to make more money with carbon pricing than without, they’ll plan and act accordingly.

The NYT article notes that many Republicans might not like this development.  That’s due to the hyper-partisan character of today’s leading Republicans.  Their worldview demands that they yell loudly at developments like carbon prices.  And despite their very successful campaign to date to prevent policymakers from establishing a national carbon tax (which economists agree is the most economically efficient method) or a cap-and-trade system, they can’t and don’t control global policymakers.  A larger economic body than the US established a carbon price: the European Union.  China has begun limited implementation of carbon pricing.  Regional cap-and-trade systems encompassing US states and Canadian provinces with large economies exist and will only expand in the future.  What this means is that US corporations doing business in the EU and China (and soon in high-population US states) have to take carbon pricing into account.

The authors of the NYT and Huffington Post articles seem surprised most of all that companies like ExxonMobil are among those planning for carbon pricing.  As I stated above, this is really the only logical development left for Exxon and other companies.  They can either perform their fiduciary duties and protect their shareholders’ interests, or they can lose market share or fail.

This is one of the reasons I’ve supported regional carbon pricing following the continued failure to price carbon at the national and international levels.  If the price exists in a large enough portion of the larger economy, companies have to respond.  They can more easily lobby national politicians than local people who are very supportive of carbon pricing and who can run for local offices.  This is an example of my larger point that we need to implement climate mitigation and adaptation policies at the local level first.  Efforts to do this at the international level have failed time and time again.  But if thousands of communities implement their own strategies nationally and internationally, then higher levels of government have examples with which to work and grow.  More importantly, thousands of communities’ influence establishes political and social inertia that lobbying can only blunt.  This is the fastest way toward widespread policy implementation.



August 2013 CO2 Concentrations: 395.15 ppm

During August 2013, the Scripps Institution of Oceanography measured an average of 395.15 ppm CO2 concentration at their Mauna Loa, Hawai’i Observatory.

This value is important because 395.15 ppm is the largest CO2 concentration value for any August in recorded history.  This year’s August value is 2.74 ppm higher than August 2012’s!  Year-over-year differences for a given month typically range between 1 and 2 ppm, so this particular jump is clearly well outside of that range.  This change is in line with other months this year: February’s year-over-year change was +3.37 ppm and May’s change was +3.02 ppm.  Of course, the unending trend toward higher concentrations with time, no matter the month or specific year-over-year value, as seen in the graphs below, is more significant.
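
For readers who want to reproduce that comparison, here is a minimal sketch in Python.  The August 2013 value comes from this post and the August 2012 value is simply implied by the quoted 2.74 ppm difference, so treat the numbers as illustrative; the full Scripps/NOAA monthly series can be swapped in for a complete record.

```python
# Minimal sketch of the year-over-year comparison above. The August 2012
# value (392.41 ppm) is implied by the quoted 2.74 ppm difference.

monthly_co2_ppm = {
    (2012, 8): 392.41,
    (2013, 8): 395.15,
}

def year_over_year_change(data, year, month):
    """Change (ppm) relative to the same month one year earlier."""
    return data[(year, month)] - data[(year - 1, month)]

print(round(year_over_year_change(monthly_co2_ppm, 2013, 8), 2))  # 2.74
```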

The yearly maximum monthly value normally occurs during May. This year was no different: the 399.89 ppm mean concentration in May 2013 was the highest value reported this year and, prior to the last six months, in recorded history (neglecting proxy data).  I expected May of this year to produce another all-time record value and it clearly did.  May 2013’s value will hold onto first place all-time until February 2014, due to the annual CO2 oscillation that Figure 2 displays.


Figure 1 – Time series of CO2 concentrations measured at Scripps’ Mauna Loa Observatory in August from 1958 through 2013.

CO2Now.org added the “350s” and “400s” to the past few months’ graphics.  I suppose they’re meant to imply concentrations shattered 350 ppm back in the 1980s and are pushing up against 400 ppm now in the 2010s.  I’m not sure they add much value to this graph, but perhaps they make an impact on most people’s perception of milestones within the trend.

How do concentration measurements change over the course of a calendar year?  The following two graphs demonstrate this.


Figure 2 – Monthly CO2 concentration values (red) from 2009 through 2013 (NOAA). Monthly CO2 concentration values with seasonal cycle removed (black). Note the yearly minimum observation occurred ten months ago and the yearly maximum value occurred three months ago. CO2 concentrations will decrease through October 2013, as they do every year after May, before rebounding towards next year’s maximum value.  The red points and line demonstrate the annual CO2 oscillation that exists on top of the year-over-year increase, which the black dots and line represent.

This graph doesn’t look that threatening.  What’s the big deal about CO2 concentrations rising a couple of parts per million per year anyway?  The problem is the long-term rise in those concentrations and the increased heating they impart on our climate system.  Let’s take a longer view – say 50 years:


Figure 3 – 50 year time series of CO2 concentrations at Mauna Loa Observatory (NOAA).  The red curve represents the seasonal cycle based on monthly average values.  The black curve represents the data with the seasonal cycle removed to show the long-term trend (as in Figure 2).  This graph shows the relatively recent and ongoing increase in CO2 concentrations.
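
NOAA’s published seasonally corrected series (the black curves in Figures 2 and 3) uses a more careful statistical fit, but the basic idea can be sketched with a 12-month centered moving average, which cancels the annual oscillation and leaves only the long-term trend.  This is an illustration of the concept, not NOAA’s actual procedure.

```python
# Sketch of the idea behind the "seasonal cycle removed" curves: average
# each month with the surrounding year of data so the annual oscillation
# cancels out. NOAA's actual procedure uses a more careful fit.

def remove_seasonal_cycle(monthly_ppm):
    """monthly_ppm: consecutive monthly mean concentrations (ppm).
    Returns 12-month centered averages (None where the window is incomplete)."""
    smoothed = []
    for i in range(len(monthly_ppm)):
        if i < 6 or i + 6 > len(monthly_ppm):
            smoothed.append(None)                    # incomplete window
        else:
            smoothed.append(sum(monthly_ppm[i - 6:i + 6]) / 12.0)
    return smoothed
```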

As a greenhouse gas, CO2 increases the radiative forcing of the Earth, which increases the amount of energy in our climate system as heat.  This excess and increasing heat has to go somewhere or do something within the climate system because the Earth can only emit so much longwave radiation every year.  Additional figures below show where most of the heat has gone.

CO2 concentrations are increasing at an increasing rate – not a good trend with respect to minimizing future warming.  Natural systems are not equipped to remove CO2 emissions quickly from the atmosphere.  Indeed, natural systems will take tens of thousands of years to remove the CO2 we emitted in the course of a couple short centuries.  Human technologies do not yet exist that remove CO2 from any medium (air or water).  They are not likely to exist for some time.  Therefore, the general CO2 concentration rise in Figures 2 and 3 will continue for many years.

This month, I will once again present some graphs that provide additional context for CO2 concentration.  Here is a 10,000 year view of CO2 concentrations from ice cores to compare to the recent Mauna Loa observations:


Figure 4 – Historical CO2 concentrations from ice core proxies (blue and green curves) and direct observations made at Mauna Loa, Hawai’i (red curve).

Clearly, concentrations are significantly higher today than they were for thousands of years in the past.  While never completely static, the climate system our species evolved in was relatively stable in this time period.

Or we could take a really, really long view:


Figure 5 – Historical record of CO2 concentrations from ice core proxy data, 2008 observed CO2 concentration value, and 2 potential future concentration values resulting from lower and higher emissions scenarios used in the IPCC’s AR4.

Note that this graph includes values from the past 800,000 years, the 2008 observed value (10 ppm less than this year’s average will be), as well as the projected concentrations for 2100 derived from the lower- and higher-emissions scenarios used by the 2007 IPCC Fourth Assessment Report.  If our current emissions rate continues unabated, it looks like a tripling of the average pre-industrial (prior to 1850) concentration will be our future reality: 278 * 3 = 834 ppm.  This graph also clearly demonstrates how anomalous today’s CO2 concentration values are in the context of paleoclimate.  It further shows how significant projected emission pathways could be when we compare them to the past 800,000 years.  It is important to realize that we are currently on the higher emissions pathway (towards 800+ ppm; yellow dot).

The rise in CO2 concentrations will slow down, stop, and reverse when we decide it will.  It depends primarily on the rate at which we emit CO2 into the atmosphere.  We can choose 400 ppm or 450 ppm or almost any other target (realistically, 350 ppm seems out of reach within the next couple hundred years).  That choice is dependent on the type of policies we decide to implement.  It is our current policy to burn fossil fuels because we think doing so is cheap, although current practices are massively inefficient and done without proper market signals.  We will widely deploy clean sources of energy when they are cheap; we control that timing.  We will remove CO2 from the atmosphere if we have cheap and effective technologies and mechanisms to do so, which we also control to some degree.  These future trends depend on today’s innovation and investment in research, development, and deployment.  Today’s carbon markets are not the correct mechanism, as they are aptly demonstrating.  But the bottom line remains: We will limit future warming and climate effects when we choose to do so.

I mentioned above that CO2 is a greenhouse gas.  If CO2 concentrations were very low, the average temperature of the planet would be 50°F cooler.  Ice would cover much more of the planet’s surface than it does today.  So some CO2 is a good thing.  The problem with additional CO2 in the atmosphere is that it throws off the radiative balance of the past 10,000 years.  This sets in motion a set of consequences, most of which we cannot anticipate simply because our species has never experienced them.  The excess heat absorbed by the climate system went to the most efficient heat sink on our planet, the oceans:


Figure 6 – Heat content anomaly from 1950 to 2004 from Murphy et al., 2009 (subs. req’d).

Global surface temperatures rose about +0.8°C during the 20th century.  That relatively small increase, which is already causing widespread effects today, is a result of the comparatively tiny heat content anomaly shown in red in Figure 6.  This situation has continued since Murphy’s 2009 publication:


Figure 7 – Oceanic heat content by depth since 19

This figure shows where most of the excess heat went since 2000: the deep ocean (>700 m depth).  In that time the heat content of the upper 300 m increased by 5 * 10^22 Joules (most of that in the 2000-2003 time span), while the 300-700 m layer’s heat increased by an additional 5 * 10^22 J and the >700 m ocean’s heat increased by a further 8 * 10^22 J.  That’s a lot of energy.  How much energy is it?  In 2008 alone, the oceans absorbed as much energy as 6.6 trillion Americans would have used in the same year.  Since there are only 7 billion people on the planet, the magnitude of this energy surplus is staggering.

More to the point, deep water heat content continued to surge with time while heat content stabilized in the ocean’s top layers.   Surface temperature measurements largely reflect the top layer of the ocean.  If heat content doesn’t change with time in those layers, neither will sea surface temperatures.  The heat is instead going where we cannot easily measure it.  Does that mean “global warming has stopped” as some skeptics recently claimed?  No, it means the climate system is transferring the heat where and when it can.  If the deep ocean can more easily absorb the heat than other media, then the heat will go there.

The deep ocean will not permanently store this heat however.  The globe’s oceans turn over on long time scales.  The absorbed heat will come back to the surface where it can transfer to the atmosphere, at which point we will be able to easily detect it again.  So at some point in the future, perhaps decades or a century from now, a temperature surge could occur.  We have been afforded time that many scientists did not think we had to mitigate and adapt to the changing climate.  That time is not limitless.



Climate & Energy Links – Sep. 12, 2013

Here are some stories I found interesting this week:

California’s GHG emissions are already lower than the 2015 threshold established as part of California’s cap-and-trade policy.  The reasons emissions fell more than expected include the slow economy and relatively widespread renewable energy deployment.  The problem with this is the lack of innovation.  We have seen what companies do with no incentive to innovate their operations: nothing that gets in the way of profit, which is the way companies should operate.  That’s why we need regulations – to incentivize companies to act in the public interest.  Should CA adjust future cap thresholds in light of this news?

No surprise here: Alter Net had a story detailing the US Department of Energy’s International Energy Outlook and the picture isn’t pretty (and I’m not talking about the stock photo they attached to the story – that’s not helpful).  Experts expect fossil fuels to dominate the world’s energy portfolio through 2040 – which I wrote about last month.  This projection will stand until people push their governments to change.

Scientific American’s latest microgrid article got to the point: “self-sufficient microgrids undermine utilities’ traditional economic model” and “utility rates for backup power [need to be] fair and equitable to microgrid customers.”  To the first point, current utility models will have to change in 21st century America.  Too much depends on reliable and safe energy systems.  The profit part of the equation will take a back seat.  Whatever form utilities take in the future, customers will demand equitable pricing schemes.  That said, there is currently widespread unfair pricing in today’s energy paradigm.  For example, utilities continue to build coal power plants that customers don’t want.  Customers go so far as to voluntarily pay extra for non-coal energy sources.  In the end, I support microgrids and distributed generation for many reasons.

A Science article (subs. req’d) shared results of an investigation into the increasing amplitude of CO2 oscillations in the Northern Hemisphere over the past 50 years.  This increase is greater at high latitudes than at middle latitudes.  The reason for the increase could be a longer annual period of decomposition due to a warming climate (which is occurring faster at higher latitudes).  Additional microbial decomposition generates additional CO2 and aids new plant growth at increasing latitudes (which scientists have observed).  New plant growth compounds the uptake and release of CO2 by microbes.  The biosphere is changing in ways that were not predicted, as I’ve written before.  These changes will interact and generate other changes that will impact humans and ecosystems through the 21st century and beyond.

And the EPA has adjusted new power plant emissions rules: “The average U.S. natural gas plant emits 800 to 850 pounds of carbon dioxide per megawatt, and coal plants emit an average of 1,768 pounds. According to those familiar with the new EPA proposal, the agency will keep the carbon limit for large natural gas plants at 1,000 pounds but relax it slightly for smaller gas plants. The standard for coal plants will be as high as 1,300 or 1,400 pounds per megawatt-hour, the individuals said Wednesday, but that still means the utilities will have to capture some of the carbon dioxide they emit.”  This is but one climate policy that we need to revisit in the future.  This policy is good, but does not go far enough.  One way or another, we face increasing costs; some we can afford and others we can’t.  We can proactively increase regulations on fossil fuels which will result in an equitable cost comparison between energy sources.  Or we can continue to prevent an energy free market from working by keeping fossil fuel costs artificially lower than they really are and end up paying reactive climate costs, which will be orders of magnitude higher than energy costs.
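
To put those numbers in context, here is a back-of-the-envelope sketch using only the emission rates quoted above; actual compliance rules are more nuanced, so treat this as illustration.

```python
# Back-of-the-envelope comparison using the lbs CO2/MWh figures quoted above.
# Actual compliance calculations are more nuanced than this.

coal_rate = 1768.0   # average US coal plant emission rate
gas_rate = 825.0     # midpoint of the quoted 800-850 range for gas plants

for limit in (1300.0, 1400.0):
    needed_capture = 1.0 - limit / coal_rate
    print(f"Coal vs {limit:.0f} lbs/MWh limit: capture ~{needed_capture:.0%} of CO2")

print(f"Gas at ~{gas_rate:.0f} lbs/MWh already sits under the 1,000 lbs/MWh limit")
```

In other words, a typical coal plant would have to capture roughly a fifth to a quarter of its CO2 to meet the reported standard, which is consistent with the article’s point that utilities “will have to capture some of the carbon dioxide they emit.”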



Energy Generation Now & in the Future

I finished my last post with an important piece of data.  Of the 100 quads of energy the US generates every year, the vast majority (83%) comes from fossil fuel sources – sources that emit greenhouse gases when we burn them.  The same is true for the vast majority of other countries, and therefore for the global portfolio as well.  Here is a graphic showing the global energy consumption distribution by fuel type from 1990 through 2010 and into the future:


Figure 1. Global fuel-type energy consumption, 1990-2040 (EIA 2013 Energy Outlook).

The global picture is somewhat different from the US picture: liquids’ energy (e.g., oil) exceeds coal energy, which exceeds natural gas.  All three of these carbon-intensive energy sources, which power our developed, high-wealth lifestyles, greatly exceed renewables (dominated by hydropower), which exceed nuclear.  It is these types of energy forecasts that lead to the suite of IPCC emissions pathways:


Figure 2. IPCC Fifth Assessment Report Representative Concentration Pathway (RCP) CO2-eq concentrations.

Note that our current emissions trajectory more closely resembles the RCP8.5 pathway (red) than the other pathways.  This trajectory could lead to a 1000+ ppm CO2-eq concentration by 2100, or 2.5X today’s value.  Stabilizing global temperature increases at less than 2°C by 2100 requires stabilizing CO2-eq concentrations below 450 ppm and quickly decreasing them, which the RCP2.6 pathway above (green) best represents.  This pathway is technologically impossible to achieve as of today.  The only way to make it possible is to invest in innovation: research, development, and global deployment of low-carbon technologies.  We are not currently making that investment, nor does it look likely that we will in the near future.

Let’s take a further look at the recent past before we delve further into the future.  Environmental and renewable energy advocacy groups tout recent gains in renewable energy deployment.  We should quietly cheer such gains because they are real.  But they are also minuscule – far too little deployment at a time when we need exclusive and much wider deployment of renewable energy globally to shift our emissions pathway from RCP8.5 to RCP2.6.  Here is a graphic showing global use of coal over the past 10+ years:


Figure 3. Global coal use in million tonnes of oil-equivalent 2001-2011 (Grist).

Climate and clean energy advocates like to report their gains in percentage terms.  This is one way of looking at the data, but it’s not the only way.  For instance, coal usage increased by 56% from 2001 to 2011.  This is a smaller percentage than most renewable energy percentage gains in the same time period, but the context of those percentages is important.  As you’ll see below, renewable energy gains really aren’t gains in the global portfolio.  The above graph is another way to see this: if renewable energy gains were large enough, they would replace coal and other fossil fuels.  That’s the whole point of renewable energy and stabilizing carbon emissions, right?  If there is more renewable energy usage but also more coal usage, we won’t stabilize emissions.  Here is another way of looking at this statement:


Figure 4. Global Energy Consumption from Carbon-Free Sources 1965-2012 (Breakthrough).

Carbon-free energy as a share of the total global energy portfolio increased from 6% in 1965 to 13% in the late 1990s – it more than doubled, which is impressive.  What happened since the 1990s though?  The proportion was actually smaller in 2011 than it was in 1995.  At best, carbon-free energy’s share has stagnated since the 1990s.  Countries deployed more carbon-free energy in that time period, but not enough to increase its share because so much new carbon-based energy was also deployed.  What happened starting in the 1990s?  The rapid industrialization of China and India, predominantly.  Are developing countries going to stop industrializing?  Absolutely not, as Figure 1 showed.  It showed that while renewable energy consumption will increase in the next 30 years, it will likely do so at the same rate that natural gas and liquids will.  The EIA projects that the rate of increase of coal energy consumption might level off in 30 years, after we release many additional gigatonnes of CO2 into the atmosphere, ensuring that we do not stabilize at 450 ppm or 2°C.
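
A tiny worked example (with made-up numbers, not values from the dataset behind Figure 4) shows how carbon-free energy can grow in absolute terms while its share of the portfolio shrinks:

```python
# Illustrative only: made-up numbers showing how absolute growth in
# carbon-free energy can still mean a shrinking share of the total
# when overall consumption grows faster.

carbon_free_1995, total_1995 = 13.0, 100.0     # hypothetical units
carbon_free_2011, total_2011 = 19.5, 155.0     # carbon-free up 50%, total up 55%

print(f"1995 share: {carbon_free_1995 / total_1995:.1%}")   # 13.0%
print(f"2011 share: {carbon_free_2011 / total_2011:.1%}")   # 12.6% despite absolute growth
```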

Here is the EIA’s projection for China’s and India’s energy consumption in quads, compared to the US through 2040:


Figure 5. US, Chinese, and Indian energy consumption (quads) 1990-2040 (EIA 2013 Energy Outlook).

You can see that the US’s projected energy consumption remains near 100 quads through 2040.  China’s consumption exceeded the US’s in 2009 and will hit 200 quads (two US’s!) by 2030 before potentially leveling off near 220 quads by 2040.  India’s consumption will be about 1/4 of the US’s in 2020 (25 quads), and will likely double by 2040.  Where will an additional 1.5 US’s worth of energy come from in the next 30 years?  Figure 1 gave us this answer: mostly fossil fuels.  If that’s true, there is no feasible way to stabilize CO2 concentrations at 450 ppm or global mean temperature increases at 2°C.  That’s not just my opinion; take a look at a set of projections for yourself.
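
A quick sum of the approximate values discussed above (eyeballed from the EIA chart, so treat them as rough) shows where the “1.5 US’s worth” figure comes from:

```python
# Rough arithmetic using the approximate quad values discussed above
# (eyeballed from the EIA projection, so illustrative rather than exact).

us_quads = 100.0                              # roughly flat through 2040
china_growth = 220.0 - 100.0                  # ~100 quads when it passed the US, ~220 by 2040
india_growth = 50.0 - 25.0                    # ~25 quads in 2020, doubling by 2040

added = china_growth + india_growth
print(f"~{added:.0f} additional quads, or ~{added / us_quads:.2f} US's worth of new demand")
# ~145 quads, ~1.45 US's worth
```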

Here is one look at the future energy source by type:


Figure 6. Historical and Future Energy Source by Type (BNEF).

This projection looks rosy, doesn’t it?  Within 10 years, most new energy will come from wind, followed by solar thermal.  But look at the fossil fuels!  They’re on the way out.  The potential for reduced additional fossil fuel generation is good news.  My contention is that it isn’t happening fast enough.  Instead of just new energy, let’s look at the cumulative energy portfolio picture:


Figure 7. Historical and Future Total Energy Source by Type (BNEF).

This allows us to see how much renewable energy penetration is possible through 2030.  The answer: not a lot, and certainly not enough.  2,000 GW of coal (>20% of the total) remains likely by 2030 – the same time by which energy experts say fossil fuel use must reach zero if CO2 concentrations are to remain below 450 ppm by 2100.  But coal isn’t the only fossil fuel, and the addition of gas (another 1,700 GW) and oil (another 300 GW) demonstrates just how massive the problem we face really is.  By 2030, fossil fuels as a percentage of the total energy portfolio may no longer increase.  The problem is the percentages need to decrease rapidly towards zero, and nowhere on this graph, or the next one, is that evident.  The second, and probably more important, thing to note about this graph is this: total energy increases at an increasing rate through 2030 as developing countries … develop.
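
Putting those capacity figures together (rough readings from the BNEF chart, plus the “>20% of total” statement above) shows how large the 2030 fossil share remains:

```python
# Rough readings from the BNEF chart discussed above, in GW by 2030.
coal, gas, oil = 2000.0, 1700.0, 300.0
fossil = coal + gas + oil                      # ~4,000 GW

# Coal alone is noted above as >20% of the 2030 total, implying a total
# of roughly 10,000 GW; on that basis fossil fuels remain ~40% of capacity.
total_estimate = coal / 0.20
print(f"Fossil: ~{fossil:.0f} GW of ~{total_estimate:.0f} GW (~{fossil / total_estimate:.0%})")
```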


Figure 8. Global fuel-type energy consumption, 1990-2040 (EIA 2013 Energy Outlook).

The EIA analysis agrees with the BNEF analysis: renewables increase through 2030.  The EIA’s projection extends through 2040, where the message is the same: renewables increase, but so do fossil fuels.  The only fossil fuel that might stop increasing is the most carbon-intensive – coal – and that is of course a good thing.  But look at the absolute magnitudes: there could be twice as many coal quads in 2040 as there were in 2000 (50% more than 2010).  There could also be 50% more natural gas and 30% more liquid fuels.  The message remains: usage of fossil fuels will likely not decline in the next 30 years.  What does that mean for CO2 emissions?


Figure 9. Historical and projected global carbon dioxide emissions: 1990-2040 (EIA 2013 Energy Outlook).

Instead of 14 Gt/year (14 billion tonnes per year) in 2010, coal in 2040 will emit 25 Gt/year – almost a doubling.  CO2 emissions from natural gas and liquids will also increase, leading to a total of 45 Gt/year instead of 30 Gt/year.  The International Energy Agency (IEA) estimated in 2011 that “if the world is to escape the most damaging effects of global warming, annual energy-related emissions should be no more than 32Gt by 2020.”  The IEA 2012 World Energy Outlook report found that annual carbon dioxide emissions from fossil fuels rose 1.4 percent in 2012 to 31.6 Gt.  While that was the lowest yearly increase in four years, another similar rise pushes annual emissions over 32 Gt in 2014 – six years ahead of the IEA’s estimate.  Based on the similarity between our historical emissions pathway and the high end of the IPCC’s AR4 SRES scenarios (see figure below), 2°C is no longer a viable stabilization target.
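
The pace matters: compounding the IEA figures quoted above (31.6 Gt in 2012, growing at roughly the 2012 rate of 1.4% per year) puts emissions at the 32 Gt benchmark almost immediately, years ahead of the IEA’s 2020 date.  A minimal sketch:

```python
# Compounding the IEA figures quoted above: 31.6 Gt of energy-related CO2
# in 2012, growing at roughly the 2012 rate of 1.4% per year.

emissions_2012 = 31.6     # Gt CO2
growth_rate = 0.014

for year in (2013, 2014, 2015, 2020):
    projected = emissions_2012 * (1 + growth_rate) ** (year - 2012)
    print(year, round(projected, 1))
# 2013: 32.0, 2014: 32.5, 2015: 32.9, 2020: 35.3 -- well past the 32 Gt
# benchmark the IEA set for 2020
```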


Figure 10. IEA historical annual CO2 emissions and IPCC AR4 emissions scenarios: 1990-2012 (Skeptical Science).

The A2 pathway leads to 3 to 4°C warming by 2100.  Additional warming would occur after that, but most climate science focus ends at the end of this century.  A huge caveat applies here: that warming projection comes from models that did not represent cryosphere and other processes.  This is important because the climate system is highly nonlinear.  Small changes in input can induce drastically different results.  A simple example: a change in input from 1 to 2 doesn’t mean a change in output from 1 to 2.  The output could change to 3 or 50, and we don’t know when the more drastic case will take place.  Given our best current but limited understanding of the climate system, 3 to 4°C warming by 2100 (via pathway A2) could occur.  Given the projected emissions above, less warming than this estimate is much, much less likely than more warming.  Policymakers need to shift focus away from 2°C warming and start figuring out what a 3 to 4°C warmer world means for their areas of responsibility – things like the timing of different sea level rise thresholds and how much infrastructure we should abandon to the ocean; things like extensive, high-magnitude drought and dwindling freshwater supplies.  These changes will affect our lifestyle.  It is up to us to decide how much.  The graphs above and stories I linked to draw this picture for me: we need to change how we approach climate and energy policy.  The strategies employed historically were obviously inadequate to decarbonize at a sufficient rate.  We need to design, implement, and evaluate new strategies.



Climate & Energy Articles – Aug. 17, 2013

Since I can’t devote as much time to everything I read, here is a quick roundup of things I thought were interesting recently:

A Nature article (subs. req’d) describes some of the problems with a trending climate effort: decadal predictions.  In the past, agencies just made climate projections for a couple of centuries into the future.  In addition to that, interest grew in projections over the next 10 or 20 years.  Unfortunately, climate models aren’t well designed for these short time frames.  Thus, they miss high-frequency climate events that occur just after agencies issue the predictions.  Of particular concern are the high-impact events, as we tend to focus on them.  I would remind critics who point out these “misses” that very few financial models indicated the biggest economic disruption of our lifetime, the Great Recession, yet we continue to ascribe great status to the same financial titans who universally missed that high-impact event.  That means, of course, that critics remain within their tribal identities and look for any evidence to support their position, even as they ignore similar evidence in analogous cases.

A group made an interesting counterargument regarding the cause of the US’s recent drop in CO2 emissions.  Instead of the switch from carbon-intensive coal to slightly less intensive natural gas, as many analysts described, this group claims the drop occurred due to widespread, massive efficiency gains.  I characterize this as interesting because the group is countering the International Energy Agency, among others.  While not prescient, the IEA is the leading authority in these types of analyses.  We should take their analyses with a grain of salt, of course, as their methodologies are likely imperfect.  Instead, this new argument should encourage further research and analysis.  Was the coal-to-gas switch primarily responsible, or was efficiency?  Additional years’ data will help clarify the respective roles.  In the long term, efficiency can play as big a role as, or a bigger role than, the coal-to-gas switch has to date.  That’s where innovation funded by a carbon price comes into play.

Grist ran an informative series recently that included a short video of how much energy the US uses – the primary generators and consumers by type and sector.  The upshot is this: the US uses about 100 quads (a unit of energy) per year, which makes further discussion quite simple.  The US generates 81-83 quads (81-83%) via fossil fuels (oil, coal, and natural gas).  That leaves only 17-19% of US generation from non-fossil sources.  Most non-fossil energy generation is nuclear, which means renewables account for the smallest share of energy generation.  Most of that is hydropower from dams that we built in the first half of the 20th century.  This data will form the basis of my next post, which will examine the implications of this energy breakdown for climate policy.  What will it take to replace 83 quads of fossil fuel energy generation with renewable energy generation?
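
For a rough sense of scale (my own unit conversion, not a figure from the Grist series), 83 quads per year works out to roughly 2,800 GW of continuous average power:

```python
# Rough unit conversion (my own arithmetic, not from the Grist series):
# how much continuous average power 83 quads per year represents.

QUAD_J = 1.055e18          # joules per quad
SECONDS_PER_YEAR = 3.156e7

fossil_quads_per_year = 83.0
average_power_gw = fossil_quads_per_year * QUAD_J / SECONDS_PER_YEAR / 1e9
print(f"~{average_power_gw:.0f} GW of continuous average power to replace")  # ~2,775 GW
```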



Energy and Climate Stories Via Charts

The following charts show different pieces of a sobering story: the US and the world have not done enough, and are not on track to do enough in the foreseeable future, to reduce carbon-intensive energy.  This shouldn’t come as any great surprise, but I think these charts enable us to look at the story graphically instead of just hearing the words.  Graphics tend to have a larger impact on thought retention, so I’m going to use them to tell this story.


Figure 1. Annual global installations of new power sources, in gigawatts.  [Source: MotherJones via BNEF]

This figure starts the story off on a good note.  To the left of the dotted line is historical data and to the right is BNEF’s projected data.  In the future, we expect fewer new gigawatts generated by coal, gas, and oil.  We also expect many more new gigawatts generated by land-based wind, small-scale photovoltaic (PV), and solar PV.  Thus the good news: within the next couple of years, more new gigawatts will come from renewable energy sources than from dirty energy sources.  At the same time, this graph is slightly misleading.  What about existing energy production?  The next chart takes that into account.


Figure 2. Global energy use by generation type, in gigawatts.  [Source: MotherJones via BNEF]

The story just turned sober.  In 2030, coal should account for ~2,000 GW of generating capacity compared to ~1,200 GW today.  Coal is the dirtiest of the fossil fuels, so absent radical technological innovation and deployment, 2030 emissions will exceed today’s due to coal alone.  We find the same storyline for gas and, to a lesser extent, oil: more generation in 2030 than today means more emissions.  We need fewer emissions if we want to reduce atmospheric CO2 concentrations.  The higher those concentrations, the warmer the globe will get until it reaches a new equilibrium.

Compare the two graphs again.  The rapid increase in renewable energy generation witnessed over the last decade and expected to continue through 2030 results in what by 2030?  Perhaps ~1,400GW of wind generation (about the same as gas) and up to 1,600GW of total solar generation (more than gas but still less than coal).  This is an improvement over today’s generation portfolio of course.  But it will not be enough to prevent >2°C mean global warming and all the subsequent effects that warming will have on other earth systems.  The curves delineating fossil fuel generation need to slope toward zero and that doesn’t look likely to happen prior to 2030.

Here is the basic problem: there are billions of people without reliable energy today.  They want energy and one way or another will get that energy someday.  Thus, the total energy generated will continue to increase for decades.  The power mix is up to us.  The top chart will have to look dramatically different for the mix to tilt toward majority and eventually exclusively renewable energy.  The projected increases in new renewable energy will have to be double, triple, or more what they are in the top chart to achieve complete global renewable energy generation.  Instead of a couple hundred gigawatts per year, we need a couple thousand gigawatts per year.  That requires a great deal of innovation and deployment – more than even many experts are aware.

Let’s take a look at the next part of the story: carbon emissions in the US – up until recently the largest annual GHG emitter on the globe.


Figure 3. Percent change in the economy’s carbon intensity 2000-2010. [Source: ThinkProgress via EIA]

As Jeff notes, the total carbon intensity of the economy (the amount of carbon released for every million dollars the economy produces) dropped 17.9 percent over those ten years.  That’s good news.  Part of the reason is bad news: the economy became more energy-efficient in part due to the recession.  People and organizations stopped doing some of the most expensive activities, which also happened to be some of the most polluting activities.  We can attribute the rest of the decline to the switch from coal to natural gas.  That is a good thing for US emissions, but a bad thing for global emissions because we’re selling the coal that other countries burn – as Figure 2 shows.


Figure 4. Percent change in the economy’s total carbon emissions 2000-2010. [Source: ThinkProgress via EIA]

Figure 4 re-sobers the story.  While we became more efficient in terms of carbon emitted per dollar of economic output, total emissions from 2000 to 2010 dropped only 4.2%.  My own home state of Colorado, despite having a Renewable Energy Standard that mandates renewables in the energy mix, saw a greater than 10% jump in total carbon emissions.  Part of the reason is that Xcel Energy convinced the state Public Utilities Commission to approve new, expensive coal plants.  The reason?  Xcel is a for-profit corporation and new coal plants added billions of dollars to the positive side of its ledger, especially since it passed those costs on to its ratepayers.
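
The two percentages fit together through a simple identity: emissions = economic output × carbon intensity.  The implied GDP growth below is derived from the two figures above, not taken from the EIA data:

```python
# Emissions = GDP x carbon intensity of the economy. The implied GDP growth
# here is derived from the two percentages above, not quoted from the EIA.

intensity_change = -0.179     # carbon intensity of the economy, 2000-2010
emissions_change = -0.042     # total carbon emissions, 2000-2010

implied_gdp_growth = (1 + emissions_change) / (1 + intensity_change) - 1
print(f"Implied economic growth over the decade: ~{implied_gdp_growth:.1%}")  # ~16.7%
```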

In order for the US to achieve its Copenhagen goal (a 17% reduction from 2005 levels), more states will have to show total carbon emission declines post-2010.  While 2012 US emission levels were the lowest since 1994, we still emit more than 5 billion metric tons of CO2 annually.  Furthermore, the US deliberately chose 2005 as its baseline since it was the historical high-emissions mark.  The Kyoto Protocol, by contrast, challenged countries to reduce emissions compared to 1990 levels.  The US remains above 1990 levels, which were just under 5 billion metric tons of CO2; 17% of 1990 emissions is 850 million metric tons.  Once we achieve that decrease, we can talk about real progress.

The bottom line is this: the cumulative carbon emissions we put into the atmosphere determine how much warming will occur this century and over the next few tens of thousands of years.  There has been a significant lack of progress on limiting them:


Figure 5. Historical and projection energy sector carbon intensity index.

We are on the red line path.  If that is our reality through 2050, we will blow past 560 ppm atmospheric CO2 concentration, which means we will blow past the 2-3°C sensitivity threshold that skeptics like to talk about the most.  That temperature only matters if we limit CO2 concentrations to two times their pre-industrial value.  We’re on an 800-1100 ppm concentration pathway, which would mean up to 6°C warming by 2100 and additional warming beyond that.

The size and scope of the energy infrastructure required to achieve an 80% reduction in US emissions from 1990 levels by 2050 is mind-boggling.  It requires 300,000 10-MW solar thermal plants, or 1,200,000 2.5-MW wind turbines, or 1,300 1-GW nuclear plants (or some combination thereof) by 2050, because you have to replace the existing dirty energy generation facilities as well as meet increasing future demand.  And that’s just for the US.  What about every other country on the planet?  That is why I think we will blow past the 2°C threshold.  As the top graphs show, we’re nibbling around the edges of a massive problem.  We will not see a satisfactory energy/climate policy emerge on this topic anytime soon.  The once-in-a-generation opportunity to do so existed in 2009 and 2010 and national-level Democrats squandered it (China actually has a national climate policy, by the way).  I think the policy answers lie in local and state-based efforts for the time being.  There is too wide a gap between the politics we need and the politics we have at the national level.
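
Those plant counts are roughly consistent with one another once capacity factors enter the picture.  The capacity factors below are my own rough assumptions (about 30% for solar thermal and wind, about 90% for nuclear), not figures from the post:

```python
# Back-of-the-envelope check of the plant counts above. Capacity factors are
# my own rough assumptions (~0.3 for solar thermal and wind, ~0.9 for nuclear),
# not figures from the post.

options = {
    "solar thermal": (300_000, 10.0, 0.30),      # count, MW each, capacity factor
    "wind":          (1_200_000, 2.5, 0.30),
    "nuclear":       (1_300, 1_000.0, 0.90),
}

for name, (count, mw_each, capacity_factor) in options.items():
    nameplate_gw = count * mw_each / 1_000.0
    average_gw = nameplate_gw * capacity_factor
    print(f"{name}: {nameplate_gw:,.0f} GW nameplate -> ~{average_gw:,.0f} GW average output")
# Each option delivers on the order of 900-1,200 GW of average power:
# three different ways of doing roughly the same job.
```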



Ideology and Misperception in Energy and Climate

I could write a dissertation on this topic and spend the rest of my life researching and publishing on it.  I will have to settle for a short blog post for now, because my own research is in need of my attention.

People posted a number of tweets and articles on how “Political ideology affects energy-efficiency attitudes and choices”, which is the title of a new PNAS article.  The upshot: ideology trumps the free market.  This isn’t a surprise to me anymore – I’ve studied plenty of cases in the past two years that demonstrate this phenomenon.  In this case, people’s purchases of energy-efficient light bulbs were most influenced by the bulb’s labeling.  The study used two stickers: “Protect the Environment” or blank.  In both cases, the researchers made the same bulb benefits (energy use & cost) available to each potential purchaser; the only difference was the presence of a blank or pro-environment sticker on the packaging.  With the pro-environmental sticker, conservatives were less likely to purchase the CFL bulb.  Without it, conservatives and liberals were equally likely to purchase it.  That isn’t rational behavior, and rationality is a significant assumption of modern economic theory.  The result shows, unsurprisingly, that people’s behavior depends on their personal ideology and value system.  This has obvious implications for climate change activists: you have to operate within the value system of your targeted audience if you want them to receive your proposals well.  Beating the same drums harder won’t make conservatives care about climate change.

Climate groups are willfully failing elsewhere.  A new poll from the Yale Project on Climate Change Communication and the George Mason University Center for Climate Change Communication demonstrates that increasing numbers of Americans are drawing incorrect connections between recent weather events and climate change.  According to 50% of respondents, the warmest year on record in the US (2012) was made more severe by global warming.  A similar number believe the ongoing US drought is worse due to global warming.  The results go on and on.

Here is the rub: these beliefs have no basis in scientific fact.  2012 US temperatures were largely influenced by natural interannual variability.  2012 was warmer than 1998 by more than 1°F, which is significant.  But identifying a global warming signal in one year’s temperature data for the US is beyond the current capabilities of science.  We can say more robustly that the 2000s were significantly warmer than the 1990s, which were warmer than the 1980s, and so on.  2012’s temperatures were extreme and had implications that human and ecological systems still feel.  The important point is this: are existing systems capable of handling today’s weather extremes?  If not, we should do something.

The belief in climate change-enhanced drought is also unsupported, as I wrote a couple of weeks ago.  Initial findings from a NOAA-led team were unable to detect a global warming-related signal in the onset, magnitude, or extent of the extraordinary 2012 drought.  This isn’t particularly surprising when you consider that the last two droughts of similar extent and severity occurred in the 1950s and 1930s – prior to much anthropogenic forcing.  Specifically, they found that “The interpretation is of an event resulting largely from internal atmospheric variability having limited long lead predictability.”  Again, this drought is producing effects, but it isn’t directly attributable to climate change.  The question remains: are existing systems capable of handling these types of extreme events?  If they aren’t, we should do something about them, not draw unscientific causal linkages in an effort to build support for change.

The IPCC’s SREX report (Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation), issued just last year, reinforces this message.  There is a detectable global warming signal in a few measurable parameters such as temperature, water vapor, and sea level.  But the climate system retains a great deal of natural variability that scientists do not fully understand.  Climate conditions will change over the next 90 years, but the likelihood of specific changes varies.  Weather conditions may or may not change, and their inherent transience makes it difficult to ascribe causal factors to any changes.  Note further that climate projections for the 2090s are not the climate conditions of the 2090s, much less of the 2010s.  Identifying likely future changes does not translate to detecting those changes today.

Yale and George Mason should digest their poll results along with the latest guidance from scientific peer-reviewed literature to help guide their communication efforts moving forward.  Given the results of this latest poll, they have their work cut out for them.  Framing, whether it is related to selling CFLs to a diverse public or differentiating between weather and climate, is critically important in climate communication.



CO Public Utilities Commission Rejects Xcel Energy’s Bid To Collect Remaining $16.6 Million in SmartGridCity Costs

I last wrote on this topic a couple of months ago, following a Denver Post article that reported a judge’s decision that ratepayers should not be responsible for cost overruns associated with Xcel’s SmartGridCity program.  The judge’s decision was not the final step in the matter.  As a matter of course, the final step was the Colorado Public Utilities Commission’s decision on whether to grant Xcel’s request to collect $16.6 million from Colorado ratepayers.

If this is the first time you’ve read about this, here is a short history.  In 2008, Xcel proposed SmartGridCity, in which it would install approximately 50,000 smart meters in the city of Boulder by year’s end.  It was one of the most ambitious smart grid projects announced at the time.  Xcel’s proposal totaled $15 million in costs, which Xcel itself would bear completely; seven partner companies were supposed to pay for the remainder of the $100 million project.  A little something called the Great Recession got in the way, along with little transparency and project mismanagement on Xcel’s part.  Today, 23,000 smart meters are installed – at a cost of $44.5 million, triple the original estimate for less than half the planned deployment.  The PUC previously approved Xcel’s request for $27.9 million, which is currently collected through customer rates, not from Xcel’s assets.
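
For scale, here is a quick per-meter comparison using the figures above; both budgets covered more than just meter hardware, so treat this as an illustration rather than an accounting:

```python
# Rough per-meter comparison using the figures above. Both budgets covered
# more than meter hardware, so treat this as illustration, not accounting.

planned_meters, installed_meters = 50_000, 23_000
xcel_planned_cost, actual_cost = 15e6, 44.5e6    # dollars

print(f"Planned (Xcel's share): ${xcel_planned_cost / planned_meters:,.0f} per meter")  # ~$300
print(f"Actual to date:         ${actual_cost / installed_meters:,.0f} per meter")      # ~$1,935
```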

Thankfully, the PUC decided today to reject Xcel’s request with prejudice, which means Xcel cannot appeal the decision.  I support this decision mainly because I do not think Xcel should saddle regional ratepayers with costs for benefits they cannot receive.  That is a disgusting business practice and a terrible precedent to set for future projects.  In a similar vein, Xcel’s success in expanding a coal plant in Pueblo, CO seemed to many to be a grab at capital to pad profit.  Ratepayers overwhelmingly opposed the plant’s expansion both because it would generate more electricity than the population demands and because of its long life: Xcel stuck Colorado with this expanded plant for the next 50 years.

I have expressed my frustration with the PUC on occasion.  I do not think they exert the appropriate level of oversight over Xcel when the energy utility asks for rate increases, especially given Xcel’s lack of correctly forecasting generation capacity or demand.  This decision doesn’t atone for past decisions I didn’t agree with, but I am glad of this result.

I reiterate my general support for the smart grid.  I think we will eventually witness a significant transformation of the US’s power sector, including its infrastructure.  Smart grid technologies could usher in an era of increased efficiency.  Energy consumers currently do not have much access to data on their usage.  Many (not all) people could change their consumption habits if they had access to that data.
