Weatherdem's Weblog

Bridging climate science, citizens, and policy



Climate mitigation and adaptation

Twitter and the blogosphere are aflutter with references to David Roberts's post, “Preventing climate change and adapting to it are not morally equivalent”.  I read the post with the mindset that David was trying to continue recent climate-related public themes.  With that in mind, I wanted to respond to some points.

Climate hawks are familiar with the framing of climate policy credited to White House science advisor John Holdren, to wit: We will respond to climate change with some mix of mitigation, adaptation, and suffering; all that remains to be determined is the mix. [...] It makes them sound fungible, as though a unit of either can be traded in for an equivalent unit of suffering. That’s misleading. They are very different, not only on a practical level but morally.

I'll start by noting my disagreement that Holdren's framing establishes that mitigation and adaptation are fungible.  That's not the way I interpret it, anyway.  For me, it boils down to this: we have a finite amount of resources to devote to climate action, and what we spend them on remains undecided.  I don't think of one unit of mitigation equaling one unit of adaptation; such a frame strikes me as silly, to be quite frank.  Many factors will go into deciding where to spend resources.  I think local and state US governments are choosing adaptation because they've correctly assessed that mitigation is costlier.  Governments have responsibilities to their constituencies, not to far-off populations that are admittedly more at risk from climate change than ours.  That's one of the Big Pillar Problems: climate change disproportionately affects people who bear little responsibility for the problem.  It is psychologically predictable that we muster less action for “others” than for “selves” – for better or worse, altruism isn't rewarded in our society.  Unfortunately, that's the reality we live and operate in.  Wishes aren't going to change that.

Communities and organizations could split their resources: spend some to clean up potential dirty-energy projects in foreign countries, where mitigation is relatively cheaper, while allocating the remainder to address perceived threats locally.  That's a harder thing to do than what I describe above – adapting only locally – but it's also cheaper than mitigating locally (for now).

Say I pay $10 to reduce carbon by a ton. I bear the full cost, but because all of humanity benefits, I receive only one seven-billionth of the value of my investment (give or take).

With this statement, David contradicts what he said just prior.  The poorest and most vulnerable benefit more than he does.  But note the fundamental, critical point here: can anyone benefit by $10/7,000,000,000?  What can I do with 1.43*10^-9 dollars?  Absolutely nothing.  And neither can the primary beneficiaries, who have to share most of that calculable but meaningless number.  The second point, which follows quickly on the heels of the first, is that any mitigation investment requires multiple billions of dollars before anyone sees one dollar's worth of value, and multiple trillions before anyone sees something meaningful.  Where does that money come from, and how do we convince people to make the required investment given the aforementioned psychological barriers to doing so?
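To make the arithmetic concrete, here is a minimal sketch.  The $10-per-ton cost and seven-billion population come from the quoted example; the one-dollar-per-person threshold is my own assumption for illustration.

```python
# Back-of-envelope sketch of the per-capita benefit arithmetic above.
# The $10-per-ton cost and 7-billion population are the quoted example's
# numbers; the one-dollar-per-person threshold is an assumption.

cost_per_ton = 10.0            # dollars spent to abate one ton of CO2
population = 7_000_000_000     # rough global population

per_capita_benefit = cost_per_ton / population
print(f"per-capita benefit of one abated ton: ${per_capita_benefit:.2e}")  # ~1.43e-09

# Assuming each dollar spent yields roughly a dollar of globally diffuse
# benefit, how much total spending before the average person sees $1?
threshold_per_person = 1.0                        # dollars
total_spending = threshold_per_person * population
print(f"spending before anyone sees ~$1 of value: ${total_spending:,.0f}")  # ~$7 billion
```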

One obvious implication of this difference is that, to the extent spending favors adaptation over mitigation, it will replicate and reinforce existing inequalities of wealth and power. The benefits will accrue to those with the money to pay for them.

I'll look at this differently to help understand it better: will additional mitigation spending reduce wealth and power inequalities?  Is David arguing that developing countries will be equally wealthy and powerful if climate spending is directed toward mitigation rather than adaptation?  That's probably a logical extreme.  One question we could address is whether mitigation or adaptation does more to reduce inequalities between developing and developed countries.  I haven't seen anything that convinces me of either argument.  To be frank, I haven't seen anything that addresses either argument quantitatively.

It becomes more expensive to mitigate to an arbitrarily chosen threshold if the date by which to do so remains unchanged.  That is, if you accept <2C warming by 2100 as a goal (though I've detailed many times why such a goal is infeasible), then mitigation costs are lower if we begin mitigating today instead of 20 years from now.  But why do we accept unnecessarily firm boundaries on the problem?  If, as I've postulated, <2C warming by 2100 isn't technologically or politically feasible, then one or both boundaries must change.  The further out in time we set the goal, the likelier it is that technologies will exist to attain it more cheaply.  The higher the temperature goal, the likelier we are to achieve it.  And just as in the rest of our lives, the easier a goal is to attain, the likelier we are to attain it.  Once one goal is attained, subsequent along-the-road goals become easier to reach.
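Here is a minimal sketch of the timing point, purely my own illustration: it assumes a round-number remaining carbon budget and a round-number emissions rate, holds emissions flat until mitigation starts, and then ramps them linearly to zero.

```python
# Illustrative sketch (not from the post): why delay makes a fixed target
# steeper.  Assume a remaining carbon budget and current emissions, hold
# emissions flat until mitigation starts, then decline linearly to zero.
# Both numbers are rough, assumed round figures.

BUDGET_GTCO2 = 1000.0      # assumed remaining budget roughly consistent with ~2C
EMISSIONS_GTCO2_YR = 40.0  # assumed current global CO2 emissions

def years_to_zero(delay_years: float) -> float:
    """Years available for a linear ramp-down to zero after `delay_years`
    of flat emissions, if the budget is to be respected."""
    budget_left = BUDGET_GTCO2 - EMISSIONS_GTCO2_YR * delay_years
    if budget_left <= 0:
        return 0.0
    # A linear decline from E to 0 over t years emits E*t/2 in total.
    return 2.0 * budget_left / EMISSIONS_GTCO2_YR

for delay in (0, 10, 20):
    print(f"start in {delay:>2} yr -> {years_to_zero(delay):5.1f} yr to reach zero emissions")
# Starting today leaves ~50 years for the ramp-down; waiting 20 years leaves
# ~10 -- one sense in which later mitigation costs more for the same target.
```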

What's left out of these goals is developed-nation levels of energy generation in developing nations – in other words, the goals imply keeping poor people poor indefinitely.  Mitigation alone won't reduce wealth inequality.  If David wants to reduce wealth inequality, the best way to do so is to industrialize and post-industrialize developing nations as quickly as possible.  With reduced inequality comes increased power.  The side benefits?  If developed nations actually work on mitigation (again, saving costs by mitigating where it's cheaper), they can concentrate on adapting to climate effects along the way.

I recognize David's valid point that skeptics are likely to latch onto the “we need to adapt” frame as a way to keep avoiding the “we need to mitigate” concept.  But skeptics are going to continue avoiding the problem so long as we don't change how we talk with them.



Climate and Energy Links – 31Aug2014

Some goodies I’ve marked but don’t have time to go into detail on—

The recent slowdown in near-surface global temperature rise has been tackled by many researchers.  This is what research science is all about: proposing hypotheses to explain phenomena.  None of the hypotheses offered can, by itself, explain all of the slowdown.  They are likely co-occurring, which is one reason why pinning down the exact cause is so challenging.  The most recent hypothesis is that the Atlantic Meridional Overturning Circulation is transporting upper-ocean heat to intermediate depths, where satellites and surface observations cannot detect it.  This is in line with separate hypotheses that Pacific circulation is doing much the same thing.  I now think the Pacific is probably the largest contributor to heat transport from the surface to ocean depth.  GHG concentrations remain higher than at any point in the past 800,000 years (or more).  Their radiative properties are not changing – which means they continue to re-radiate longwave energy back toward the Earth's surface.  That energy is going somewhere in the Earth's climate system because we know it isn't escaping to space.  This process is hypothesized to last another 15-20 years – whether in the Pacific or Atlantic or both.
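To put rough numbers on "that energy is going somewhere," here is a sketch of my own; the energy imbalance and mixing depth are assumed values chosen for round numbers, not figures from the studies linked above.

```python
# Rough illustration (mine, not from the linked studies): where a planetary
# energy imbalance can "hide".  The imbalance value and ocean depth are
# assumptions chosen for round numbers.

SECONDS_PER_YEAR = 3.156e7
EARTH_SURFACE_M2 = 5.1e14        # total Earth surface area
OCEAN_SURFACE_M2 = 3.6e14        # roughly 71% of the surface
IMBALANCE_W_M2 = 0.6             # assumed net top-of-atmosphere imbalance

# Energy accumulated over one decade
joules_per_decade = IMBALANCE_W_M2 * EARTH_SURFACE_M2 * SECONDS_PER_YEAR * 10
print(f"energy gained per decade: {joules_per_decade:.1e} J")   # ~9.7e22 J

# Spread that heat through the upper 700 m of ocean
depth_m = 700.0
ocean_mass_kg = OCEAN_SURFACE_M2 * depth_m * 1025.0   # seawater density ~1025 kg/m3
heat_capacity = 3990.0                                # J/(kg*K) for seawater
delta_t = joules_per_decade / (ocean_mass_kg * heat_capacity)
print(f"bulk warming of the upper 700 m: ~{delta_t:.2f} C per decade")
# Order 0.1 C -- enormous in energy terms, small and hard to observe in
# temperature terms, which is why ocean heat uptake can mask surface warming.
```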

Some decent science gets written sloppily by an outfit that normally does a pretty good job of writing: meteorological organizations across the world continue to say there is a relatively high chance that 2014 will feature an El Niño.  Unfortunately, that's not exactly how this article reports it:

After initially predicting with 90 per cent certainty we’d see an El Niño by the end of the year, forecasters began scaling back their predictions earlier this month.

Number one – that's not what forecasters predicted, and the difference is important.  Forecasters predicted that there was a 90% probability that an El Niño would develop.  Probability and certainty are two very separate concepts – which is why we use two different words to describe two different things.  You'll notice the forecasters didn't predict a 100% probability, nor did they say with 100% certainty that an El Niño would develop.  A 90% probability is very high, but there remained a 10% probability that an El Niño wouldn't develop.  And so far, it hasn't.  It is still likelier than not that one will develop, but the chances that one won't are higher now than in June.  A number of factors have not yet come together to initiate an El Niño event.  If they don't come together, an El Niño likely won't form this year.  A blog devoted to climate science and energy policy should know how to write about these topics better than it did in this case.

Oh, and to all the climate activists who bet the farm that an El Niño would definitely form this year and prove all those skeptics wrong … you look just as foolish as the skeptics screaming about their closely held beliefs.  Scientists in particular should know better: wait until forecast groups actually observe an El Niño.  Predicting them remains much trickier than weather forecasting.  Because the next time you cry wolf…
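As an aside on the probability point, here is a toy illustration of my own (not the forecasters' method): a well-calibrated 90% forecast should still bust about one season in ten.

```python
# Toy illustration: a 90% probability forecast is expected to "bust"
# roughly one season in ten, which is not the same as being wrong.

import random

random.seed(42)
PROB_EL_NINO = 0.90
N_SEASONS = 10_000

busts = sum(1 for _ in range(N_SEASONS) if random.random() > PROB_EL_NINO)
print(f"seasons with no El Nino despite a 90% forecast: {busts / N_SEASONS:.1%}")
# ~10% -- the distinction between probability and certainty made above.
```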

On another note, a cool infographic:

Which means 50% of the U.S. population scattered across the entire rest of this big country is trying to tell urbanites how to lead their lives.  Something about tyranny and devotion to small government comes to mind…

Then,

This is certainly a small piece of good news.  Now the reality check: these numbers need to be orders of magnitude higher to keep global temperatures below 2C above the recent mean.  Furthermore, they need to be higher in every country.  China’s deployment of renewable energy dwarfs the U.S.’s and even that isn’t enough.  This is good, but we need much better.

More of this while we’re at it: dialogue between people and climate scientists.

Okay, that’s it.  I have my own paper to write.  Back to it.



UN Continues to Issue Irrelevant Pleas for Climate Action

The United Nations will issue yet another report this year claiming that deep greenhouse gas emission cuts are within reach.  As reported by Reuters (emphasis mine):

It says existing national pledges to restrict greenhouse gas emissions are insufficient to limit warming to 2 degrees Celsius (3.6 Fahrenheit) above pre-industrial times, a U.N. ceiling set in 2010 to limit heatwaves, floods, storms and rising seas.

“Deep cuts in greenhouse gas emissions to limit warming to 2 degrees C … remain possible, yet will entail substantial technological, economic, institutional, and behavioral challenges,” according to the draft due for publication in Copenhagen on Nov. 2 after rounds of editing.

Substantial is an understatement.  To have a better-than-even chance of keeping global mean annual temperatures from rising more than 2 degrees C, emissions have to peak in 2020 and go negative by 2050.  Technologies simply do not exist today that would achieve those difficult tasks while meeting today's energy demand, let alone the energy demand of 2050.
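A minimal sketch of what "peak in 2020, go negative by 2050" implies for annual cuts; the 2014 emissions level and growth rate below are assumed round numbers, not figures from the UN draft.

```python
# Sketch of what "peak in 2020, net-negative by 2050" implies for annual cuts.
# The 2014 emissions level and growth rate are assumed round numbers.

emissions_2014 = 50.0    # assumed global GHG emissions, GtCO2e/yr
growth_rate = 0.02       # assumed growth until the peak

# Grow to the 2020 peak
peak = emissions_2014 * (1 + growth_rate) ** (2020 - 2014)

# Then fall linearly from the peak to zero by 2050
years_of_decline = 2050 - 2020
annual_cut = peak / years_of_decline
print(f"peak emissions in 2020:  {peak:5.1f} GtCO2e/yr")
print(f"required cut every year: {annual_cut:5.1f} GtCO2e/yr "
      f"({annual_cut / peak:.0%} of the peak, each year, for 30 years)")
# Roughly the annual emissions of a major economy removed every single year --
# before even getting to the net-negative part.
```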

The following quote points toward understanding the scale of the problem:

Such a shift would also require a tripling or a quadrupling of the share of low-carbon energies including solar, wind or nuclear power, it said.

That's actually an underestimate of the required low-carbon energies, because again, achieving <2C warming will require net-negative carbon, not just low carbon.  But let's stick with their estimate for argument's sake.  Low-carbon technologies currently provide 16% of the global energy portfolio.  I'm not entirely certain whether the tripling quote refers to this 16%, for the following reason: "traditional biomass" (wood and similar materials) represents 10% of the global energy portfolio, or about 63% of the low-carbon share.  We're obviously not going to use more of this material to provide energy to the global energy-poor or to industrial nations.  Wind, solar, biomass, and geothermal together account for 0.7% of the global energy portfolio.  That is a key figure.  How many news stories have you seen touting wind and solar deployment?  All of those small utility-scale plants globally account for less than 1% of total global energy.

So perhaps the UN is referring to the 16% figure rather than the 0.7% figure, because even quadrupling the latter yields only 2.8% of total global energy.  But then what I just wrote is even more valid: new solar, wind, and nuclear deployment has to not only match 15.3% of today's global energy, but reach 45% of it.  How much new low-carbon energy is that?  A lot.  The US alone would require either 1 million+ 2.5MW wind turbines, or 300,000+ 10MW solar thermal plants, or 1,000+ 1GW nuclear power plants (more than the total number of today's nuclear plants – globally).  And this doesn't include any requirements to update national transmission grids, deploy CCS, or address sequestration.  As I said, the scale of this problem is vast, and it is completely glossed over by previous UN reports and, it looks like, current ones as well.
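Here is the kind of arithmetic behind unit counts like those.  The delivered-energy target and the capacity factors are my assumptions, chosen to be roughly consistent with the figures above; the unit sizes are the ones used in the post.

```python
# Sketch of the arithmetic behind the unit counts above.  The delivered-energy
# target and the capacity factors are assumptions chosen to be roughly
# consistent with the post's figures; the unit sizes are the post's.

HOURS_PER_YEAR = 8_760
TARGET_TWH_PER_YEAR = 7_500    # assumed low-carbon energy to be delivered

units = {
    # name: (nameplate MW, assumed capacity factor)
    "2.5 MW wind turbine":       (2.5,    0.33),
    "10 MW solar-thermal plant": (10.0,   0.25),
    "1 GW nuclear plant":        (1000.0, 0.90),
}

for name, (mw, capacity_factor) in units.items():
    twh_per_unit = mw * capacity_factor * HOURS_PER_YEAR / 1e6  # MWh -> TWh
    count = TARGET_TWH_PER_YEAR / twh_per_unit
    print(f"{name:<28} ~{count:,.0f} units")
# Roughly a million turbines, a few hundred thousand solar-thermal plants,
# or on the order of a thousand reactors -- the scale the reports gloss over.
```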

Look, the reasons to decarbonize are valid and well-recognized.  Emissions are driving planetary changes at rates that occur only very rarely in geologic history.  Those changes will accelerate throughout the 21st century and beyond.  Yet this remains the obsessive focus of most climate activists.  The problem remains how to achieve deep decarbonization – what policies will facilitate that effort?  The fact remains that no economy has decarbonized at requisite rates – and that includes economies that historically widely deployed nuclear and biomass energy.  The UN continues to issue reports that are wildly out-of-date the day they’re issued.  They do themselves and the world’s population no favors by doing so.  We need new methods and new frameworks within which to define and evaluate problems.
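For a sense of what "requisite rates" means, a quick sketch of the compound arithmetic; the target years and percentage cuts are the commonly cited ones, and the closing comparison simply restates the argument above.

```python
# Sketch of the decarbonization rate implied by commonly cited targets.

def annual_rate(cut_fraction: float, start_year: int, end_year: int) -> float:
    """Compound annual emissions-reduction rate for a total cut by end_year."""
    years = end_year - start_year
    return 1.0 - (1.0 - cut_fraction) ** (1.0 / years)

for cut in (0.80, 0.90):
    rate = annual_rate(cut, 2014, 2050)
    print(f"{cut:.0%} cut by 2050 -> ~{rate:.1%} per year, every year")
# ~4-6% per year, sustained for decades -- well beyond the rates any economy
# has managed, as argued above.
```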



Coal Plants: Colorado and the US

Colorado has a renewable energy portfolio standard for energy utility companies:

Investor-owned utilities: 30% by 2020
Electric cooperatives serving fewer than 100,000 meters: 10% by 2020
Electric cooperatives serving 100,000 or more meters: 20% by 2020
Municipal utilities serving more than 40,000 customers: 10% by 2020

The standard started with a ballot measure that voters approved in 2004 and was subsequently strengthened twice by legislative action.  The dominant utility in Colorado is Xcel Energy, based in Minneapolis, MN.  Despite spending money to defeat the initial ballot measure and the two subsequent strengthenings – which required first 10%, then 20%, and now 30% renewable energy by 2020 – Xcel has met, and will continue to meet, the standards.

As with most topics, implementing high-level policies turned out differently than many RES supporters envisioned.  After the 2004 ballot measure passed, Xcel convinced the Public Utilities Commission that it needed to build a 766MW coal plant (Comanche 3) in Pueblo, CO.  Colorado consumers overwhelmingly objected to the planned plant for a few reasons: nobody was in desperate need of those megawatts; the plant's cost (which ended up being over $1 billion) would be passed directly on to the same customers who didn't need the excess capacity; and they wanted Xcel to focus on renewable energy plants (wind and solar).  Since the PUC approved the plant, it hasn't run at capacity.  There's no surprise there.  Costs definitely went up for every customer in Xcel's service region, whether they received Comanche energy or not.  This is the primary problem with private and investor-owned utilities: the easiest way to make money is to force consumers to pay for expensive infrastructure.  And as I stated above, Xcel will easily meet its renewable energy standard.

How did Pueblo fare?  Well, that's a new part of the story for me.  Pueblo was served by a local utility that Black Hills Energy bought, and Black Hills opted to replace nearly all of its cheap coal capacity with natural gas essentially overnight.  That meant ratepayers footed several more big infrastructure bills all at once.  In fact, Pueblo's residential rate per kilowatt-hour has risen 26 percent since 2010.  What portion of Comanche 3's electricity made it to Pueblo?  None of it.  Instead, the northern half of the Front Range uses that energy – the same place that wouldn't allow Xcel to build a coal plant due to pollution and cost.




Climate and Energy Links – Jul 2014

Some things I’ve come across recently:
New mega-map details all the ways climate change will affect our everyday lives.  We’ll need more resources like this to help personalize climate change effects.  With personalization will come motivation to act.  It’s not a panacea, but a good start.

Is your state one of the 10 most energy-efficient US states?  Mine (Colorado) isn't.  More context: the US is good at buzzwords but lousy at implementing policies that increase energy efficiency.  It is a good thing that China is currently ranked #4 globally – they'll have much less legacy infrastructure than the US and other developed nations to upgrade in the future.

This might be news to some: climate models that did the best at portraying natural ocean cycles also did better than their peers at projecting the recent surface warming pause.  What most people don't understand is that each climate model run portrays one individual potential outcome.  That said, scientists don't claim that individual models make perfect predictions.  The recent warming trend is well within the range of available projections.  Many skeptics, of course, gloss over this important detail when they falsely claim the models are no good.  How much time do those same skeptics spend scrutinizing financial projections, anyway?

This has the potential for misinterpretation and misuse: climate worriers don't, on average, use less electricity than those who don't worry about the climate (at least according to a very small UK study).  They use more.  This will continue the claims of hypocrisy by skeptics, and perhaps justifiably so.  My net utility use is 14% to 17% of the average American's 903 kilowatt-hours (kWh) per month: 125-150 kWh per month during the past year.  That's in a modern home with AC, computers, and smartphones.  People can use much less than they currently do with a modern lifestyle.  They just don't prioritize it.
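For what it's worth, the percentages work out as stated:

```python
# Quick check of the percentages in the paragraph above.

us_average_kwh_per_month = 903
my_usage_kwh_per_month = (125, 150)

for kwh in my_usage_kwh_per_month:
    print(f"{kwh} kWh/month is {kwh / us_average_kwh_per_month:.0%} of the U.S. average")
# ~14% and ~17%, as stated.
```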

Continuing on the theme of energy efficiency and waste: we waste $80 billion per year due to inefficient electronic devices.  Wow.  And it doesn't have to be that way: simple measures could save billions of dollars if we implemented them.  Priorities.

Random thought: poverty-wage employers always ask whether people would be willing to pay more for products if employees were paid living wages.  Here's a rebuttal I haven't come across: were customers ever asked whether they were willing to pay more for products so that executives could be paid millions of dollars with guaranteed golden parachutes?  Guess which most people would rather support?  That's right, the folks in their communities, not executives in their fenced-off country-club homes.



What will 2040 US GHG emissions be

if this graph is anywhere close to accurate?

[Figure: EIA projection of U.S. electric generating capacity additions, 2000-2040]

That projection of electric generating capacity additions does not get us to stated emissions goals (e.g., 80% or 90% reductions from 2005 levels by 2050).  One can fairly point out that out-year EIA projections probably are not very accurate.  I doubt, for instance, that this graph takes the EPA's recent proposed rule into account.  The next 5-10 years, however, are probably close to what will happen – close enough that any difference will not significantly impact, say, 2030 or 2040 emissions.
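For reference, here is what those reduction goals mean in absolute terms; the 2005 baseline below is an assumed, approximate figure, and the point is the size of the gap rather than the precise number.

```python
# What "80% or 90% below 2005" means in absolute terms.  The 2005 U.S. GHG
# total is an approximate, assumed figure (~7 GtCO2e).

BASELINE_2005_GTCO2E = 7.0   # assumed approximate U.S. GHG emissions in 2005

for reduction in (0.80, 0.90):
    target = BASELINE_2005_GTCO2E * (1 - reduction)
    print(f"{reduction:.0%} below 2005 -> allowed 2050 emissions ~{target:.1f} GtCO2e/yr")
# ~0.7-1.4 GtCO2e/yr for the entire U.S. economy -- a level the projected
# capacity additions in the chart come nowhere near delivering.
```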

Note the vast difference between natural gas/oil additions in any single year between 2000 and 2005 and total renewable additions in any other year.  The only year in which renewables come close is 2015, and even that amounts to only one-third to one-half of the natural gas additions of ten years earlier.  To achieve stated emissions goals, renewable additions will have to double every year between now and 2040.  That's because new additions have to replace the oldest coal plants first, followed by the oldest natural gas plants, while also meeting increasing future demand and generating enough energy during peak production periods to exceed peak consumption (which do not occur at the same times of day).

Additionally, if we want to keep global mean annual temperature increases <2C, the projected natural gas additions have to tail off to zero (not stay constant) because they still emit GHGs.  And if all of that weren't challenging enough, we must also remove the carbon that historical combustion and leakage have already put into the atmosphere.  But the basic story of this graph remains: this projection will not enable us to achieve stated emission reduction goals.  The graph is therefore useful in helping us understand which policies are working and what needs to be done to approach our emission goals.  For instance, renewables appear to enter a period of no growth in the 2020s.  That is probably unrealistic, but what policies should we consider to boost their deployment above 2005-2010 levels during the 2020s and on into the 2030s and beyond?  How about finance policies for starters?  How about long-term federal and state guarantees?  If we enact the EPA's proposed power plant rule in anything close to its current form, the 2020s and 2030s will likely look very different from this projection.  That rule could be a good start toward meeting future goals (just not a 90% reduction by 2050 or <2C warming; more like a 30% reduction by 2050).



Deep Decarbonization Pathways Interim Report Released

An international group of folks put together an interim report analyzing “Deep Decarbonization Pathways”.  Decarbonization refers to the process of using less carbon within an economy.  The intent of the report is to show ways to keep global mean temperatures below 2C.  Readers of this blog know that I no longer think such a goal is achievable, given the scope and scale of the decarbonization required.  We have not moved off a “business-as-usual” path, and we have run out of time to reduce GHG emissions quickly enough to meet this goal.  I argue the exact opposite of what the authors describe in their summary:

We do not subscribe to the view held by some that the 2°C limit is impossible to achieve and that it should be weakened or dropped altogether.

Thus the main problem with this report: they're using a threshold that was determined without robustly analyzing the actions necessary to achieve it.  In other words, they constrain themselves a priori by adopting the 2C threshold.  A more useful result would be to ascertain what real-world requirements support different warming values, in terms real people can intuitively understand.  The report is not newsworthy in that it reaches the same results other reports reached by making similar assumptions.  Those assumptions are necessary and sufficient to meet the 2C threshold, but examination unveils something few people want to recognize: they are unrealistic.  I will say that this report goes into more detail about its assumptions than any report I've read to date.  The detail is only slightly deeper than the assumptions themselves, but it is illuminating nonetheless.

An important point here: the authors make widespread use of “catastrophe” in the report.  Good job there – it continues the bad habit of forcing the public to tune out anything the report has to say.  Why do people insist on using physical science, but not social science to advance policy?

On a related note, the report's graphics are terrible.  They're cool-color only, which makes copied-and-pasted figures look junky and interpretation harder than it should be.  So the authors put up multiple barriers to their own results.  I'm not sure why, if the intent is to persuade policymakers toward action, but …

