Weatherdem's Weblog

Bridging climate science, citizens, and policy



REMI’s Carbon Tax Report

I came across former NASA climate scientist James Hansen's email last week supporting a carbon tax.  At the outset, I fully support this policy because it is the most economically effective way to achieve CO2 emission reductions.  An important point is this: it matters a great deal how we apply the tax and what happens to the revenue it raises.  Many policy analysts think the only way a carbon tax will ever pass is for the government to distribute the revenue to all households as dividends.  This has obvious appeal, not least because Americans love free stuff.  That is, we love to reap the benefits of policies so long as they cost us nothing.  That attitude is unsustainable – you simply have to look at the state of American infrastructure today to see its effects.

All that said, the specific carbon tax plan Hansen supported came from a Regional Economic Models, Inc. (REMI) report, which the Citizens Climate Lobby (CCL) commissioned.  The report found what CCL wanted it to find: deep emission cuts can result from a carbon tax.  There isn't anything surprising in this – many other studies found the same result.  What matters is how the emission cuts are achieved.  I think this study is another academic dead-end because I see little evidence for how the proposed tax actually achieves the cuts.  REMI does what the IPCC does – it assumes large-scale deployment of low-carbon energy technologies.  The steps of developing and deploying those technologies are not clearly demonstrated.  Does a carbon tax simply equate to low-carbon technology deployment?  I don't think so.

First, here is an updated graphic showing REMI’s carbon emission cuts compared to other sources:

[Figure: Historical CO2 emissions compared with EIA 2013 projections, EPA's 2014 power plant rule, Kyoto `Low`/`High` scenarios, and REMI 2014 model results]

The blue line with diamonds shows historical CO2 emissions.  The dark red line with squares shows EIA's 2013 projection of CO2 emissions through 2030.  EIA projections historically ran higher than observed emissions; this newest projection is much more realistic.  Next, the green triangles show the intended effect of EPA's 2014 power plant rule.  I compare these projections against the Kyoto `Low` and `High` emission cut scenarios.  An earlier post showed and discussed these comparisons.  I added the modeled result from REMI 2014 as orange dots.

Let me start by noting that I have written for years now that we will not achieve even the Kyoto `Low` scenario, which called for a 20% reduction from 1990 baseline emissions.  The report did not clearly specify its baseline year, so I gave it the benefit of the doubt in this analysis and chose 2015 as the baseline.  That makes the cuts easier to achieve, since 2015 emissions were 20% higher than 1990 levels.  Thus, REMI's "33% decrease from baseline" by 2025 results in emissions between Kyoto's `Low` and `High` scenarios.
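To make the baseline sensitivity concrete, here is a sketch of the arithmetic, using the 1990 emissions figure (5,039 MMT) that appears later in this post and my assumption that 2015 emissions run ~20% above 1990 levels:

```python
# Rough arithmetic on how the baseline year changes the implied target.
e_1990 = 5039.0          # 1990 US energy-related CO2 emissions (MMT)
e_2015 = 1.20 * e_1990   # assumption: 2015 runs ~20% above 1990

cut = 0.33  # REMI's "33% decrease from baseline" by 2025

target_from_2015 = (1 - cut) * e_2015   # baseline = 2015 (my choice)
target_from_1990 = (1 - cut) * e_1990   # baseline = 1990 (Kyoto-style)

print(f"2025 target, 2015 baseline: {target_from_2015:.0f} MMT")
print(f"2025 target, 1990 baseline: {target_from_1990:.0f} MMT")
```

The 2015-baseline target lands roughly 675 MMT higher than the same percentage cut measured from 1990 – which is exactly why the unspecified baseline matters.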

REMI starts with a $10/ton carbon tax in 2015 and increases it by $10/ton each year.  In 10 years, carbon costs $100/ton.  That is an incredibly aggressive taxing scheme, and the increase would have significant economic effects.  The report describes massive economic benefits.  I will note that I am not an economist and don't have the expertise to judge the economic model they used.  I will also note that, as a climate scientist, I know all models rest on fundamental assumptions that affect the results they generate.  The assumptions REMI made likely have some effect on their results.

Why won’t we achieve these cuts?  As I stated above, technologies are critical to projecting emission cuts.  What does the REMI report show for technology?

[Figure: REMI 2014 US electrical power generation – baseline scenario (left) vs. carbon tax scenario (right)]

The left graph shows US electrical power generation without any policy intervention (the baseline case).  The right graph shows generation resulting from the $10/ton/year carbon tax policy.  Here are the model's results: old unscrubbed coal plants go offline in 2022, while old scrubbed coal plants go offline in 2025.  Think about this: there are about 600 coal plants in the US, generating the largest single share of electricity of any power source.  The carbon tax model results assume that other sources will replace ~30% of US electricity in 10 years.  How will that be achieved?  This is the critical missing piece of the report.

Look again at the right graph.  Carbon-capture natural gas replaces conventional natural gas generation by 2040.  Is carbon capture technology ready for national-level deployment?  No, it isn't.  How does the report handle this?  That is, who pays for the research and development first, followed by scaled deployment?  The report is silent on this issue.  Simply put, we don't know when carbon capture technology will be ready for scaled deployment.  Given the historical performance of other technologies, it is safe to assume deployment would take a couple of decades once the technology is actually ready.

Nuclear power generation also grows modestly, as do geothermal and biopower.  The latter is interesting to note since biopower accounts for the majority of the percentage increase in US renewable power generation over the past 15 years (based on EIA data) – something their model does not capture.

The increase in wind generation is astounding.  It grows from a few hundred terawatt-hours to over 1,500 TWh in 20 years' time.  This source is the obvious beneficiary of a carbon tax.  But such units are hard to grasp intuitively.  What does it mean to replace the majority of coal plants with wind plants?  Let's step back from academic exercises that replace power generation wholesale and get into practical considerations.  It means deploying more than 34,000 2.5 MW wind turbines (operating at a 30% capacity factor) every single year.  (There are other metrics by which to convey the scale, but they deal with numbers few people intuitively understand.)  According to the AWEA, there were 46,100 utility-scale wind turbines installed in the US at the end of 2012.  How many years have utilities been installing wind turbines?  Think of the resources required to install almost as many wind turbines in just one year as already exist in the US.  Just to point out one problem with this installation plan: where do the required rare earth metals come from?  Another: are wind turbine supply chains up to the task of manufacturing 34,000 turbines per year?  Another: are wind turbine manufacturing plants equipped to handle this level of work?  Another: are there enough trained workers to supply, make, transport, install, and maintain this many turbines?  Another: how is wind energy stored and transmitted from source regions to use regions (thousands of miles away in many cases)?
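The turbine count can be reproduced with back-of-envelope arithmetic.  The 1,600 TWh of annual coal generation to replace and the 7-year window (before the old coal plants retire in 2022) are my assumptions, not figures from the REMI report:

```python
# Back-of-envelope check on the wind build-out scale.
HOURS_PER_YEAR = 8760
turbine_mw = 2.5         # nameplate capacity per turbine
capacity_factor = 0.30   # fraction of nameplate actually delivered

# Annual energy from one turbine, in MWh
mwh_per_turbine = turbine_mw * capacity_factor * HOURS_PER_YEAR  # 6,570 MWh

coal_twh = 1600.0        # assumed annual US coal generation to replace
turbines_total = coal_twh * 1e6 / mwh_per_turbine
turbines_per_year = turbines_total / 7   # assumed 7-year window to 2022

print(f"{turbines_total:,.0f} turbines total, ~{turbines_per_year:,.0f}/year")
```

Under these assumptions the build rate comes out just shy of 35,000 turbines per year – the same order as the figure above, and roughly three-quarters of the entire existing US fleet installed annually.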

Practical questions abound.  This report is valuable as an academic exercise, but I don't see how wind replaces coal in 20 years' time.  I want it to, but putting in a revenue-neutral carbon tax probably won't get it done.  I don't see carbon capture and sequestration ready for scaled deployment in 10 years' time.  I would love to be surprised by such a development, but does a revenue-neutral carbon tax generate enough demand for risk-averse private industry to perform the requisite R&D?  At best, I'm unconvinced it will.

A little checking reminded me that British Columbia implemented a carbon tax in 2008; currently it is $40 (Canadian).  Given that, you might think it serves as a good example of what the US could do with a similar tax.  Dig a little deeper, though, and you find that British Columbia gets 86% of its electricity from hydropower and only 6% from natural gas, making it a poor test-bed for evaluating how a carbon tax affects electricity generation in a large, modern economy.



EPA’s Proposed CO2 Emissions Rule in Context

[Figure: Historical US CO2 emissions vs. EIA 2013 projections, EPA's proposed rule, and Kyoto `Low`/`High` pathways]

If you follow climate and energy news, you have probably encountered, or will encounter, coverage of today's proposed CO2 emissions rule from the EPA.  Unfortunately, that coverage will probably not be clear about what the rule means in understandable terms.  I'm writing this in an attempt to make the proposed rule clearer.

The graph above shows US CO2 emissions from energy consumption.  This includes emissions from coal, oil, and natural gas.  I differentiated historical emissions (blue) from EIA's 2013 projections (red), the future emission levels today's EPA proposal implies, and the low and high reductions prescribed by the Kyoto Protocol, which the US never ratified.

In 2011, historical US energy-related emissions totaled 5,481 million metric tons (MMT) of CO2.  For the most part, you can ignore the units and just concentrate on the emissions' magnitude: 5,481.  If the EPA's proposed rule goes into effect and achieves what it sets out to achieve, 2020 emissions could be 4,498 MMT and 2030 emissions could be 4,198 MMT (see the two green triangles).  Those 2030 emissions would be lower than at any time since 1970 – a real achievement.  The other comparisons make it apparent, however, that this potential achievement isn't earth-shaking.

Before I get further into that, compare the EPA-related emissions with the EIA's projections out to 2030.  The EIA made these projections last year based on business as usual – i.e., no federal climate policy or EPA rule.  Because energy utilities closed many of their dirtiest fossil fuel plants following the Great Recession due to higher operating costs and the partial switch from coal to natural gas, the EIA now projects emissions just above 2011's level and below the all-time peak.  I read criticism of EIA projections this weekend (I can't find the piece now) that I think was too harsh.  The EIA historically projected emissions in excess of reality.  I don't think those over-predictions are bad news or preclude their use in decision-making.  If you know the predictions have a persistent bias, you can account for it.

So there is a measurable difference between EIA emission projections and what could happen if the EPA rule is enacted and effective.  With regard to that latter characterization, how effective might the rule be?

If you compare the EPA emission reductions to the Kyoto reductions, it is obvious that they fall short of the minimum required to avoid significant future climate change.  But first, recognize a key difference between Kyoto and the EPA rule: the Kyoto pathways are based on 1990 emissions, while the EPA rule is based on 2005 emissions.  What happened between 1990 and 2005 in the real world?  Emissions rose by 19%, from 5,039 MMT to 5,997 MMT.  The takeaway: emission reductions using a 2005 baseline result in higher final emissions than the same percentage reductions using a 1990 baseline.
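A quick check of the numbers in this post makes the baseline effect concrete.  The ~30% cut is my inference from the EPA's 4,198 MMT 2030 figure against 2005 emissions – treat it as my arithmetic, not the rule's text:

```python
# The same percentage cut lands in very different places
# depending on which baseline year you measure from.
e_1990 = 5039.0  # MMT CO2
e_2005 = 5997.0  # MMT CO2, 19% higher than 1990

cut = 0.30  # inferred from EPA's 2030 target vs. 2005 emissions
from_2005 = (1 - cut) * e_2005   # matches EPA's 2030 figure above
from_1990 = (1 - cut) * e_1990   # matches Kyoto `High`'s 2020 figure

print(f"30% below 2005: {from_2005:.0f} MMT")
print(f"30% below 1990: {from_1990:.0f} MMT")
```

A 30% cut from 2005 yields ~4,198 MMT, while the identical percentage cut from 1990 yields ~3,527 MMT – a 671 MMT gap created purely by the choice of baseline.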

If the US had ratified and implemented Kyoto on the `Low` pathway (it didn't), 2020 emissions would be 4,031 MMT (467 MMT less than EPA; 1,445 MMT less than EIA) and 2050 emissions would be 2,520 MMT (no EPA comparison exists that far out).  If the US had implemented the `High` pathway, 2020 emissions would be 3,527 MMT (971 MMT less than EPA!; 1,949 MMT less than EIA!) and 2050 emissions would be drastically slashed to 1,008 MMT!

Since we didn't implement the Kyoto Protocol, we will not attain even the 2020 `Kyoto Low` emissions by 2030.  Look at the graph again.  Connect the last blue diamond to the first green triangle.  Even though they're the closest together, you can immediately see we have a lot of work to do to achieve even the EPA's reduced emissions target.  Here is some additional context: to keep the 2100 global mean temperature rise below 2°C, we have to achieve the lowest emissions pathway modeled by the IPCC for the Fifth Assessment Report (see the blue line below):

[Figure: Observed global CO2 emissions compared with IPCC AR5 RCP pathways]

Note the comment at the bottom of the graph: global CO2 emissions have to turn negative by 2070, following decades of declines.  How will global emissions decline and turn negative if the US emits >3,000 MMT annually in 2050?  The short answer is easy: they won't.  I want to combine my messages so far in this post: we have an enormous amount of work to do just to reduce emissions to the EPA level.  That level is far less ambitious than Kyoto's `Low` level, which itself would have required a lot of work in historical terms.  That work now lies in front of us if we really want to avoid >2°C warming and its related effects.  I maintain that we will not reduce emissions commensurate with <2°C warming.  I think we will emit enough CO2 that our future will follow the RCP6.0 to RCP8.5 pathways seen above – 3-5°C warming and related effects.

Another important detail: the EPA's proposed rule has a one-year comment period, which will result in a final rule.  States then have another year to craft individual plans to achieve their reductions (a good idea).  The downside: the rule won't go into effect until 2016 – only four years before the first goal.  What happens if the first goal isn't achieved?  Will future EPA administrators reset the 2030 goal so it is more achievable (i.e., higher emissions)?  Will lawsuits delay implementation for years?  There are many potential setbacks for this rule.  And it doesn't achieve <2°C warming, not even close.



NASA & NOAA: April 2014 Warmest Globally On Record

According to data released by NASA and NOAA this month, April was the warmest April globally on record.  Here are the data for NASA's analysis; here are NOAA's data and report.  The two agencies use different analysis techniques, which in this case produced slightly different temperature anomaly values but the same overall rankings within their respective data sets.  (In most months, the analyses produce different rankings.)  The two techniques provide a check on one another and give us confidence that the results are robust.  To begin, I will remind readers that month-to-month and year-to-year values and rankings matter less than the long-term climatic warming.  Weather, not climate, is the dominant factor in monthly and yearly conditions.

The details:

April’s global average temperature was 0.73°C (1.314°F) above normal (14°C; 1951-1980), according to NASA, as the following graphic shows.  The past three months have a +0.63°C temperature anomaly.  And the latest 12-month period (May 2013 – Apr 2014) had a +0.62°C temperature anomaly.  The time series graph in the lower-right quadrant shows NASA’s 12-month running mean temperature index.  The 2010-2012 downturn was largely due to the last La Niña event (see below for more).  Since then, ENSO conditions returned to a neutral state (neither La Niña nor El Niño).  As previous anomalously cool months fell off the back of the running mean, the 12-month temperature trace tracked upward again throughout 2013 and 2014.
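The 12-month running mean NASA plots can be sketched in a few lines.  The anomaly values here are toy numbers I invented to illustrate the dip-and-recover behavior, not real GISTEMP data:

```python
# A minimal sketch of a trailing 12-month running mean.
def running_mean(values, window=12):
    """Trailing mean over the last `window` entries."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Toy monthly anomalies (deg C): a cool dip followed by renewed warmth.
anomalies = [0.45] * 12 + [0.35] * 6 + [0.70] * 6
smoothed = running_mean(anomalies)

# The smoothed trace dips as cool months enter the window, then recovers
# as those months "fall off the back" of the running mean.
print(smoothed[0], smoothed[-1])
```

This is exactly why the 12-month trace tracked upward through 2013-2014: the anomalously cool La Niña months exited the averaging window while warmer months entered it.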


Figure 1. Global mean surface temperature anomaly maps and 12-month running mean time series through April 2014 from NASA.

According to NOAA, April’s global average temperatures were +0.77°C (1.386°F) above the 20th century average of 13.7°C (56.7°F).  NOAA’s global temperature anomaly map for April (duplicated below) shows where conditions were warmer and cooler than average during the month.


Figure 2. Global temperature anomaly map for April 2014 from NOAA.

The preceding two figures also demonstrate the value of having two different analyses.  Despite differences in specific global temperature anomalies, both picked up the same spatial temperature patterns and their relative strengths.

Influence of ENSO


Figure 3. Time series of weekly SST data from NCEP (NOAA).  The highest interest region for El Niño/La Niña is `NINO 3.4` (2nd time series from top).

There has been neither El Niño nor La Niña in the past couple of years.  This ENSO-neutral phase is common.  As you can see in the NINO 3.4 time series (2nd from top in Figure 3), Pacific sea surface temperatures were relatively cool in January through March, then quickly warmed.  This switch occurred because normal easterly winds (blowing toward the west) across the equatorial Pacific relaxed and two significant westerly wind bursts occurred in the western Pacific.  These anomalous winds generated an eastward moving Kelvin wave, which causes downwelling and surface mass convergence.  Warm SSTs collect along the equator as a result.  These Kelvin waves eventually crossed the entire Pacific Ocean, as Figure 4 shows.


Figure 4.  Sub-surface Pacific Ocean temperature anomalies from Jan-Apr 2014.  Anomalously cool eastern Pacific Ocean temperatures in January gave way to anomalously warm temperatures by April.  Temperatures between 80W and 100W warmed further since April 14.

The Climate Prediction Center announced an El Niño Watch earlier this year.  The most recent update puts the chances of an El Niño during the rest of 2014 above 65%.  There is no reliable prediction of the potential El Niño's strength at this time.  Without another westerly wind burst, an El Niño will likely not be very strong.  Even moderate-strength El Niños impact global weather patterns.

An important detail is whether the potential 2014 El Niño will be an Eastern or Central Pacific El Niño (see the figure below).  Professor Jin-Yi Yu and colleagues first proposed the distinction in a 2009 Journal of Climate paper.  More recently, Yu's work suggested a recent trend toward Central Pacific El Niños has influenced the frequency and intensity of recent U.S. droughts.  This type of El Niño doesn't cause global record temperatures, but it still impacts atmospheric circulations and the jet stream, which in turn affect which areas receive more or less rain.  If the potential 2014 El Niño is an Eastern Pacific type, we can expect monthly global mean temperatures to spike, along with the precipitation anomalies commonly attributed to El Niño.


Figure 5. Schematic of Central-Pacific ENSO versus Eastern-Pacific ENSO as envisioned by Dr. Jin-Yi Yu at the University of California – Irvine.

If an El Niño does occur later in 2014, it will mask some of the deep ocean heat absorption by releasing energy back to the atmosphere.  If that happens, the second half of 2014 and the first half of 2015 will likely set global surface temperature records.  2014, 2015, or both could set the all-time global mean temperature record (currently held by 2010).  Some scientists recently postulated that an El Niño could also trigger a shift from the current negative phase of the Interdecadal Pacific Oscillation (IPO; or PDO for just the northern hemisphere) to a new positive phase.  This would be similar in nature, though different in detail, to the shift from La Niña or neutral conditions to El Niño.  If this happens, the likelihood of record hot years would increase.  I personally do not believe this El Niño will shift the IPO phase.  I don't think this El Niño will be strong enough, and I don't think the IPO is in a conducive state for a switch to occur.

The “Hiatus”

Skeptics have pointed out that warming has "stopped" or "slowed considerably" in recent years, hoping to introduce public confusion on this topic.  What is likely going on is quite different: an energy imbalance exists (less energy leaves the Earth than the Earth receives, due to atmospheric greenhouse gases), yet the surface temperature rise has seemingly stalled, so the excess energy must be going somewhere – energy doesn't just disappear.  That somewhere is likely the oceans, and specifically the deep ocean (see the figure below).  Before we all cheer about this (since few people want surface temperatures to continue rising quickly), consider the implications.  If you add heat to a material, it expands.  The ocean is no different; sea levels are rising in part because of heat added to the ocean in the past.  The heat that entered in recent years won't manifest as sea-level rise for some time, but it will happen.  Moreover, when the heated water returns to the surface, that heat will be released to the atmosphere, raising surface temperatures and adding water vapor to the warmer atmosphere.  Thus, the immediate warming rate might have slowed, but we have locked in higher future warming.


Figure 6. Recent research shows where anomalous ocean heat energy has accumulated since the late 1950s.  The purple lines show how the heat content of the whole ocean has changed over the past five decades.  The blue lines represent only the top 700 m, and the grey lines just the top 300 m.  Source: Balmaseda et al. (2013)

You can see in Figure 6 that the upper 300m of the world’s oceans accumulated less heat during the 2000s (5*10^22 J) than during the 1990s.  In contrast, accumulated heat greatly increased in ocean waters between 300m and 700m during the 2000s (>10*10^22 J).  We cannot and do not observe the deep ocean with great frequency.  We do know from frequent and reliable observations that the sea surface and relatively shallow ocean did not absorb most of the heat in the past decade.  We also know how much energy came to and left the Earth from satellite observations.  If we know how much energy came in, how much left, and how much the land surface and shallow ocean absorbed, it is a relatively straightforward computation to determine how much energy likely remains in the deep ocean.
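The residual bookkeeping in that last sentence can be sketched as a one-line budget.  The numbers below are invented purely for illustration (in units of 10^22 J per decade); they are not observations:

```python
# Sketch of the energy-budget residual described above.
# All values are made-up illustrative numbers, not measurements.
energy_in_minus_out = 18.0   # net top-of-atmosphere imbalance (satellites)
land_and_shallow = 11.0      # land surface + shallow ocean + ice melt

# Whatever the satellites say arrived but the well-observed reservoirs
# didn't absorb must reside somewhere else - likely the deep ocean.
deep_ocean = energy_in_minus_out - land_and_shallow
print(deep_ocean)
```

The real computation uses observed radiative fluxes and ocean heat content analyses, but the logic is this simple subtraction: energy in, minus energy out, minus what the well-sampled reservoirs absorbed, leaves the deep-ocean residual.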

Discussion

The fact that April 2014 was the warmest on record despite a negative IPO and neutral ENSO is eye-opening.  I think it highlights an even lower-frequency signal underlying the IPO, ENSO, and April weather: anthropogenic warming.  That signal is not oscillatory; it is increasing at an increasing rate and will continue to do so for decades to centuries.  How long that continues, and its eventual magnitude, depends on our policies and activities.  We continue to emit GHGs at or above the high end of the range simulated by climate models.  Growth in fossil fuel use at the global scale continues, and that growth dwarfs any effect of a switch to energy sources with lower GHG emissions.  I don't think that will change during the next 15 years, which would lock us into the warmer climate projections through most of the rest of the 21st century.  The primary reason is the sheer scale of humankind's energy infrastructure: switching from fossil fuels to renewable energy will take decades.  Acknowledging this isn't defeatist or pessimistic; it is, I think, critical to identifying appropriate opportunities and implementing the type and scale of policy responses needed to encourage that switch.



Climate and Energy Topics – 21 May 2014

The New York Times' Andy Revkin had this very interesting post last week: "Three Long Views of Life With Rising Seas".  He asked three people for their long-term views on how humans might deal with the centennial-scale effects of Antarctic glacier melt.  Some of their (partial) responses merit further thought:

Curt Stager, Paul Smith: Imagine the stink we would all raise if another nation tried to take even one inch of our coastline away from us – and yet here is a slow taking of countless square miles from our shores by a carbon-driven ocean-turned-invader.

David Grinspoon: But I think if our society is around for several more centuries we will have to have found different ways to deal collectively with our world-changing technologies. If we’ve made it that far, we’ll find ways to adapt.

Kim Stanley Robinson: It was when the ice core data in Greenland established the three-year onset of the Younger Dryas that the geologists had to invent the term “abrupt climate change” because they had so frequently abused the word “quick” sometimes meaning several thousand years when they said that. Thus the appearance of “Abrupt Climate Change” as a term (and a National Research Council book in 2002).

Andy Revkin finished with: The realities of sea-level rise and Antarctic trends and China’s emissions, etc., make me feel ever more confident that the [bend, stretch, reach, teach] shift I charted for my goals in my TEDx talk (away from numbers and toward qualities) is the right path.

Chinese coal use almost equals that of the rest of the world combined, according to the EIA:

[Figure: Chinese coal consumption compared with the rest of the world, EIA]

This is but one reason I believe <2°C warming is already a historical consideration.  All of this coal production and consumption would have to stop immediately if we had any hope of meeting that political goal.  That will not happen – absent coal-generated power, which constitutes the largest share of global generation, the global economy would spin into a depression.

On the good news front, U.S. consumers are expanding home energy efficiency and distributed power generation, according to Deloitte.  These practices started during the Great Recession but, for the first time, are continuing as the economy "recovers".  In 2013, new solar growth occurred among families making between $40,000 and $90,000.  The most engaged demographic could be Generation Y: one-third said they "definitely/probably" will buy a smart energy application, up from 28 percent in 2011.

I've let my drought series lapse but have kept watching conditions evolve across the country.  California has obviously been in the news due to its drought and wildfires.  All of California is currently in a "severe" drought for the first time since the mid-1970s (see the picture below).  The quick science point: this has happened before (many times, some worse than this) and isn't primarily caused by anthropogenic forcing.  The quick impacts point: California's population is double what it was in the mid-1970s, so the same type of drought has more impact.  Wrapping these points together: drought impacts could be greater in the 2010s than in the 1970s due to sociological, not physical, factors.  An important caveat: Californians are more adept now at planning for and responding to drought.  They recognize how dry normal conditions can get and have adapted more than other places in the U.S.  Drought conditions likely won't improve until next winter's rainy season, since last winter was a bust for them.

[Figure: California drought conditions, May 2014]

An incredible story comes from the New York Times about what it takes to engage communities on climate and energy issues.  Nebraska farmers and ranchers are fighting against the Keystone XL pipeline.  Why, you might ask?  Well, they're certainly not a bunch of hippie greens.  No, they're responding to their lifestyle and value system.  If KXL is built, it will be built on their land.  That means someone will take away small pieces of many farmers' land, because the locals have already refused $250,000 payments for them.  If KXL is built, it will put locals' cattle at risk.  Who do you think will suffer if the pipeline leaks?  The cows, the ranchers, and the Ogallala Aquifer, of course.  A critical passage from the piece is this:

Here was one of the best stories she’d ever seen: Conservative American farmers rise up to protect their land. She could use the image of the family farm to reframe the way Nebraskans thought about environmentalism. It wasn’t going to be Save the Sandhill Cranes. It was going to be Save the Neighbors.

To get Nebraskans to respond to environmental issues, you have to engage them on their values, not yours (unless, of course, you share them).  This is the key that environmentalists have missed for decades, and it's part of the reason why environmentalism is so politicized.  It's why conservatives tend not to respond to climate activism framing.

There’s plenty more where this came from.  Stay tuned.



State of Polar Sea Ice – March 2014: Arctic Sea Ice Maximum and Antarctic Sea Ice Minimum

Global polar sea ice area in March 2014 remained at or near climatological normal conditions (1979-2008).  This continues the pattern from early 2013 to the present, during which global sea ice area was at or above the average daily value.  Global sea ice area consists of two components: Arctic and Antarctic sea ice.  Conditions differ markedly between the two regions: Antarctic sea ice remains abundant, while Arctic sea ice stayed well below normal during the past five months.

The NSIDC made a very important change to its dataset in June.  With more than 30 years' worth of satellite-era data, it recalculated climatological normals to agree with World Meteorological Organization standards.  The new climatological era runs from 1981-2010 (see Figure 6 below).  What impact did this have on the data?  The means and standard deviations now encompass the period of fastest Arctic melt.  As a consequence, the 1981-2010 values are much lower than the 1979-2000 values.  This is one of the most challenging conditions to explain to the public: "normal", scientifically defined, often differs from "normal" as most people use the word.  U.S. temperature anomalies reported in the past couple of years refer to a similar 1981-2010 normal period.  Those anomalies are smaller than if we compared them to the earlier 1971-2000 normal period.  Thus, temperature anomalies don't seem to increase as much as they would if scientists kept referring to the same reference period.
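A tiny numeric sketch makes the baseline effect clear.  The baseline means below are hypothetical values I chose for illustration, not actual NSIDC numbers:

```python
# How the reference period changes a reported anomaly.
observed = 14.80           # e.g., a March extent in million sq. km

mean_1979_2000 = 15.90     # hypothetical older baseline mean
mean_1981_2010 = 15.53     # hypothetical newer baseline mean
                           # (lower: it includes the fastest-melt years)

anom_old = observed - mean_1979_2000   # anomaly vs. older normal
anom_new = observed - mean_1981_2010   # anomaly vs. newer normal
print(anom_old, anom_new)
```

The identical observation looks like a smaller departure from "normal" against the newer baseline, which is precisely why shifting reference periods makes trends harder to communicate.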

Arctic Sea Ice

According to the NSIDC, March 2014's average extent was 14.80 million sq. km., 730,000 sq. km. below normal conditions.  This value is the maximum for 2014, as more sunlight and warmer spring temperatures now allow ice to melt.  March 2014 sea ice extent continued a nearly two-year-long trend of below-normal values.  The deficit from normal varied month to month during that time due to weather conditions overlaying longer-term climate signals.  Arctic sea ice extent could increase during the next month or so depending on specific wind conditions but, as I wrote above, we likely witnessed 2014's maximum Arctic sea ice extent about 10 days ago.

Sea ice anomalies at the edge of the pack are of interest.  There is slightly more ice than normal in the St. Lawrence and Newfoundland Seas on the Atlantic side of the pack.  Barents Sea ice area, meanwhile, is slightly below normal.  Bering Sea ice recently returned to normal from below normal, while Sea of Okhotsk sea ice remains below normal.  The ice in these seas will melt first since they lie at the edge of the ice pack and the ice is thinnest, having formed only in the last month.

March average sea ice extent for 2014 was the fifth lowest in the satellite record, as Figure 1 shows.  The March linear rate of decline is 2.6% per decade relative to the 1981-2010 average (compared to a 13.7% per decade decline for September: summer ice is more affected by climate change than winter ice).
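For readers curious how a "percent per decade" trend is computed, here is a sketch: a least-squares slope through annual extents, expressed relative to a baseline mean.  The series below is synthetic, chosen only to produce a decline of roughly the quoted magnitude; it is not NSIDC data:

```python
# Least-squares trend expressed as percent per decade, on synthetic data.
years = list(range(1979, 2015))
# Toy March extents (million sq. km) declining 0.04 per year.
extents = [16.5 - 0.04 * (y - 1979) for y in years]

n = len(years)
xbar = sum(years) / n
ybar = sum(extents) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, extents)) / \
        sum((x - xbar) ** 2 for x in years)

baseline = ybar  # stand-in for the 1981-2010 baseline mean
pct_per_decade = 100 * slope * 10 / baseline
print(f"{pct_per_decade:.1f}% per decade")
```

With these toy numbers the fit recovers the built-in slope and reports roughly -2.5% per decade, close to the March figure quoted above.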


Figure 1 – Mean Sea Ice Extent for March: 1979-2014 [NSIDC].

Arctic Pictures and Graphs

The following graphic is a satellite representation of Arctic ice as of October 1st, 2013:


Figure 2 – UIUC Polar Research Group's Northern Hemispheric ice concentration from 20131001.

The following graphic is a satellite representation of Arctic ice as of January 15th, 2014:


Figure 3 – UIUC Polar Research Group's Northern Hemispheric ice concentration from 20140115.

The following graphic is a satellite representation of Arctic ice as of April 1st, 2014:

 photo Arctic_sea_ice_20140401_zpsdd9dbc04.png

Figure 4 – UIUC Polar Research Group's Northern Hemispheric ice concentration from 20140401.

I captured Figure 2 right after 2013's minimum ice extent occurred.  I wasn't able to put together a post on polar sea ice in January, but captured Figure 3 for future reference.  You can see the rapid growth of ice area and extent in three months' time.  Since January, additional sea ice formed, but not nearly as much as during the previous three months.  Figure 4 shows conditions just after the annual maximum sea ice area occurred.  From this point through late September, the overall trend will be melting ice – from the edge inward.

The following graph of Arctic ice volume from the end of January (PIOMAS updates were not available for the end of February or March) demonstrates the ongoing decline in ice health over time:

 photo SeaIceVolumeAnomaly_20140131_zpse02b6133.png

Figure 5 – PIOMAS Arctic sea ice volume time series through January 2014.

The blue line is the linear trend, identified as -3,000 km^3 (+/- 1,000 km^3) per decade.  In 1980, there was a +5,000 km^3 anomaly compared to 2013’s -6,000 km^3 anomaly – a difference of 11,000 km^3.  How much ice is that?  That volume of ice is equivalent to the volume in Lake Superior!
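The arithmetic behind that comparison is worth checking.  A quick back-of-envelope, using the anomaly values read off the figure and an assumed Lake Superior volume of roughly 12,100 km^3 (a commonly quoted figure, used here only as a reference point):

```python
# Back-of-envelope check of the PIOMAS anomaly swing against Lake Superior.
anomaly_1980 = 5_000     # km^3, positive anomaly read off Figure 5
anomaly_2013 = -6_000    # km^3, negative anomaly read off Figure 5
swing = anomaly_1980 - anomaly_2013   # total swing between the two anomalies

lake_superior = 12_100   # km^3, assumed volume of Lake Superior
print(swing, round(swing / lake_superior, 2))  # -> 11000 0.91
```

So the 34-year swing in ice volume anomaly comes out to about nine-tenths of Lake Superior – close enough that the comparison in the text holds up.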

Arctic Sea Ice Extent

Take a look at March’s areal extent time series data:

 photo N_stddev_timeseries_20140401_1_zps069b9c1d.png

Figure 6 – NSIDC Arctic sea ice extent time series through early April 2014 (light blue line) compared with four recent years' data, the climatological mean (dark gray line), and a +/-2 standard deviation envelope (light gray).

This figure puts winter 2013-14 into context against other recent winters.  As you can see, Arctic sea ice extent spent much of the winter at or below the lower bound of the +/-2 standard deviation envelope around the 1981-2010 mean.  That envelope covers roughly 95% of observations (assuming they are normally distributed), which means the past five winters were extremely low compared to climatology.  With the maximum ice extent reached in mid-March, 2014's extent now hovers near record lows for the date.  Previous winters saw a late-season surge of ice formation caused by specific weather patterns; those patterns are not likely to increase sea ice extent this boreal spring.  None of this says much about the coming minimum extent, as the NSIDC discusses in this month's report.
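For readers unfamiliar with how an envelope like the gray band in Figure 6 is built: take each calendar day, average the extent across the baseline years, and add and subtract two standard deviations.  NSIDC computes this from its own daily climatology; the function below is only a generic sketch with toy data:

```python
import numpy as np

def climatology_envelope(daily_extent_by_year):
    """Mean and +/-2 standard deviation envelope across baseline years.

    daily_extent_by_year: 2-D array of shape (n_years, n_days), one row per
    baseline year of daily extents. Under a normality assumption, the
    +/-2 sigma band covers about 95% of observations.
    """
    mean = daily_extent_by_year.mean(axis=0)            # day-by-day climatological mean
    sd = daily_extent_by_year.std(axis=0, ddof=1)       # day-by-day sample std. deviation
    return mean - 2 * sd, mean, mean + 2 * sd

# Toy example: 30 "years" of 3 "days", all at a constant 15.0 million sq. km.
data = np.full((30, 3), 15.0)
lo, mid, hi = climatology_envelope(data)
print(mid.tolist())  # -> [15.0, 15.0, 15.0]
```

A year that tracks below `lo` for months, as 2013-14 did, is therefore rarer than roughly 1 in 40 relative to the baseline climate – the point the post is making about how unusual recent winters are.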

Antarctic Pictures and Graphs

Here is a satellite representation of Antarctic sea ice conditions from October 1, 2013:

 photo Antarctic_sea_ice_20131001_zps2fb64db9.png

Figure 7 – UIUC Polar Research Group's Southern Hemispheric ice concentration from 20131001.

And here is the corresponding graphic from January 15th, 2014:

 photo Antarctic_sea_ice_20140115_zpsd2a383a2.png

Figure 8 – UIUC Polar Research Group's Southern Hemispheric ice concentration from 20140115.

The following graphic is a satellite representation of Antarctic ice as of April 2nd, 2014:

 photo Antarctic_sea_ice_20140401_zpsd15f0ddf.png

Figure 9 – UIUC Polar Research Group's Southern Hemispheric ice concentration from 20140402.

Antarctic sea ice clearly hit its minimum between mid-January and early April; that date was likely six weeks ago.  Antarctic sea ice is forming again as austral fall gets underway.  As in recent austral summers, the lack of sea ice at some locations in Figure 8 is related to melting land-based ice.  Likewise, sea ice presence at other locations is a good indication of less land-based ice melt.  Figure 8 looks different from Januaries prior to 2012 and 2013.  Additionally, Antarctic weather in recent summers differed from previous years in that winds blew land-based ice onto the sea, especially east of the Antarctic Peninsula (which juts up towards South America), replenishing the sea ice that did melt.  The net effect of these and other processes kept Antarctic sea ice at or above the positive 2nd standard deviation of the 1979-2008 climatology, as Figure 10 below shows.

Finally, here is the Antarctic sea ice extent time series through early April:

 photo S_stddev_timeseries_20140401_zpscadac617.png

Figure 10 – NSIDC Antarctic sea ice extent time series through early April 2014.

The fact that Arctic ice extent continues well below average while Antarctic ice extent continues well above average, as both have for the past couple of years, works against climate activists who claim climate change is nothing but disaster and catastrophe.  A reasonable person without polar expertise likely looks at Figures 6 and 10 and says, "I don't see evidence of catastrophe here.  I see something bad in one place and something good in another place."  For people without the time or inclination to invest in the layered nuances of climate, most activists come off sounding out of touch.  If climate change really were as clearly devastating as activists scream it is, wouldn't it be obvious in all these pictures and plots?  Or, as I've commented elsewhere recently, do you really think people who are insecure about their jobs and savings have the time for this kind of information?  I don't have one family member or friend who regularly asks me about the state of the climate, despite knowing that's what I research and keep tabs on.  Well, actually, I do have one family member, but he is also a researcher and works in supercomputing.  Neither he nor I is what most people would consider an "average Joe" on this topic.

Policy

Given the lack of climate policy development at a national or international level to date, Arctic conditions will likely continue to deteriorate for the foreseeable future.  This is especially true when you consider that climate effects today are largely due to greenhouse gas concentrations from 30 years ago.  It takes a long time for the additional radiative forcing to make its way through the entire climate system.  The Arctic Ocean will soak up additional energy (heat) from the Sun due to lack of reflective sea ice each summer.  Additional energy in the climate system creates cascading and nonlinear effects throughout the system.  For instance, excess energy pushes the Arctic Oscillation to a more negative phase, which allows anomalously cold air to pour south over Northern Hemisphere land masses while warm air moves over the Arctic during the winter.  This in turn impacts weather patterns throughout the year (witness winter 2013-14 weather stories) across the mid-latitudes and prevents rapid ice growth where we want it.

More worrisome for the long-term is the heat that impacts land-based ice.  As glaciers and ice sheets melt, sea-level rise occurs.  Beyond the increasing rate of sea-level rise due to thermal expansion (excess energy, see above), storms have more water to push onshore as they move along coastlines.  We can continue to react to these developments as we’ve mostly done so far and allocate billions of dollars in relief funds because of all the human infrastructure lining our coasts.  Or we can be proactive, minimize future global effects, and reduce societal costs.  The choice remains ours.

Errata

Here are my State of Polar Sea Ice posts from October and July 2013. For further comparison, here is my State of Polar Sea Ice post from late March 2013.



Guest Teaching This Week

I'm guest teaching for my adviser's Climate Policy Implications class while they are at a conference.  Yesterday was the easier task, as the class watched most of Leonardo DiCaprio's "11th Hour".  Like Gore's "An Inconvenient Truth", DiCaprio makes widespread use of catastrophic visuals in the first two-thirds of the film.  When I took this same class, I discussed the effects of these visuals with classmates and others.  Filmmakers design them to evoke strong emotional responses from viewers, which happens even when you know the intent.  Beyond that intent, the images generate unintended consequences: viewers are left overwhelmed and feeling helpless, the exact opposite of the reaction the film presumably aims for.

The film contains spoken references to the same effect: "destroy nature", "sick" and "infected" biosphere, "climate damage", "Revenge of Nature", "Nature has rights", "nobody sees beauty", "demise", "destruction of civilization", climate as a "victim", "ecological crisis", "brink", "devastating", and "environment ignored".  These phrases and analogies project a separation between humans and nature; they romanticize a mythologized purity of nature, where nothing bad ever happens until the evil of mankind is unleashed upon it.  These concepts perpetuate the very mindset the movie tries to address and change.  We know this as the result of … science.  As advocates of science, the interviewees in the film should support scientific results, yet they ignore critical social-science findings on psychological responses to framing and imagery.  Why?  Because they're locked into a tribal mindset and don't critically analyze their own belief system – all the while knocking the skeptics who don't either.  I stopped using catastrophic language once I learned about these important scientific results.  The best I can do is advocate that these students do the same.

We didn't finish watching the film during class, but the last handful of minutes we did watch did something few environment-related films manage: it told stories of action and opportunity.  Filmmakers and climate activists need to fill their efforts with these pieces, not pieces of destruction and hopelessness.  If you want to change the culture and mindset of society, you have to change your message.

Tomorrow, we'll discuss "The 11th Hour" as well as this video: http://www.imdb.com/title/tt0492931/.  I also want to talk to the class (mostly undergraduate seniors, plus a couple of graduate students) about the scope of GHG emissions.  I've graded a few weeks' worth of their homework essays and see clear parallels to the kind of essays I wrote before I took additional graduate-level science policy classes.  As my last post stated, too many scientists and activists get caught up using shorthand terms they don't really understand (I should know, I used to do it too).  What does 400 ppm mean?  8.5 W/m^2?  2C warming?  Many of my science policy classes required translating these shorthand terms into units we can grasp more intuitively: the number of renewable power plants required to reduce emissions to targets by certain dates.
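That translation exercise can be sketched in a few lines.  This is only a hedged back-of-envelope: every input below – the 1 Gt target, coal's emission intensity, the turbine size, and the capacity factor – is a round-number assumption chosen for illustration, not sourced data:

```python
# Back-of-envelope: how many wind turbines would displace 1 Gt CO2/yr of
# coal-fired generation? All inputs are assumed round numbers.
GT_CO2_TARGET = 1.0e12     # kg CO2 to displace per year (1 Gt)
COAL_INTENSITY = 1000.0    # kg CO2 per MWh, assumed typical for coal
TURBINE_MW = 2.0           # nameplate capacity per turbine, assumed
CAPACITY_FACTOR = 0.35     # assumed average wind capacity factor

mwh_needed = GT_CO2_TARGET / COAL_INTENSITY            # MWh/yr of clean power needed
mwh_per_turbine = TURBINE_MW * 8760 * CAPACITY_FACTOR  # MWh/yr one turbine delivers
turbines = mwh_needed / mwh_per_turbine
print(round(turbines))     # on these assumptions, roughly 163,000 turbines
```

Swap in different assumptions and the headline number moves accordingly – which is exactly the point of the exercise: the shorthand only becomes meaningful once you attach concrete hardware and dates to it.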

My hope is that resetting the frame might elicit a different kind of conversation than what they've had so far this semester.  I also really enjoy talking about these topics with folks, so tomorrow should be fun.
