Wednesday, May 30, 2012

Saving the planet with ethanol

What could be better than ethanol? The mostly corn-based, renewable biofuel extends the US gasoline supply, reduces greenhouse gas emissions, and boosts the income of the American Farmer. Ethanol’s higher octane rating makes it more powerful than regular gasoline, and every gas pump in the land prominently displays a sticker or sign encouraging consumers to Save the Planet by using the ethanol blend. The ethanol blend is almost always priced 4-10 cents per gallon lower than regular unleaded gasoline, so consumers can not only help Save the Planet, they can also save money at the pump. What a deal!

Unfortunately, little of the above is true, although it is the way the ethanol story is reported by the major media, by corn and ethanol advocates and lobbyists, and by environmental activist organizations.

And because that’s the way the story gets reported, it’s become a major part of the green narrative, and therefore enjoys wide public, and of course, political support.

So let’s look at the claims made about ethanol and whether turning corn into fuel is on balance helpful or harmful to American farmers, ranchers, and consumers.

While there’s no doubt that ethanol extended the U.S. fuel supply in 2011 by about 10 percent (nearly 14 billion gallons), one has to wonder whether corn ethanol is really a renewable fuel. Can corn ethanol production be sustained at the levels federal law calls for?

The Energy Independence and Security Act of 2007 (EISA) is the law of the land regarding ethanol production. Under the aegis of that law, the Environmental Protection Agency (EPA) develops and implements regulations regarding EISA, setting the national Renewable Fuel Standard (RFS).

The RFS calls for a tiered increase of biofuel production and blending into the domestic fuel stream through 2022. In 2008 the RFS was revised and upgraded, calling for a production increase from 8 billion gallons of renewable fuel each year to 36 billion gallons annually by 2022.

According to the RFS, 31 billion of the 36 billion gallon annual requirement is to be ethanol. Fifteen billion gallons is required to come from traditionally fermented and distilled corn starch each year. Cellulosic biofuel – ethanol produced from feedstocks other than corn – is required to supply 16 billion gallons each year. The remaining five billion gallons is called “undifferentiated advanced biofuel,” described as fuel “other than that derived from corn starch,” including cellulosic biofuel, biomass-based diesel, and co-processed renewable diesel. While corn ethanol is very nearly meeting those legally binding goals, undifferentiated advanced biofuels are falling far short.

According to numbers published by the U.S. Energy Information Administration (USEIA), about 328.5 million barrels (bbl) of corn ethanol were expected to be produced in 2011 (about 900,000 bbl/day). At 42 gallons per barrel, 2011 corn ethanol production should total just under 13.8 billion gallons, roughly the quantity set for the year by the RFS and about 92 percent of the annual total called for by 2022.
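The barrel-to-gallon arithmetic above can be checked in a few lines (42 gallons per petroleum barrel is the standard conversion; the 328.5 million barrel figure is the USEIA estimate just quoted):

```python
# Convert the USEIA 2011 corn ethanol estimate from barrels to gallons.
GALLONS_PER_BARREL = 42          # standard petroleum barrel

barrels_2011 = 328.5e6           # USEIA estimate, barrels of corn ethanol
gallons_2011 = barrels_2011 * GALLONS_PER_BARREL
barrels_per_day = barrels_2011 / 365

print(f"{gallons_2011 / 1e9:.3f} billion gallons")   # 13.797
print(f"{barrels_per_day:,.0f} bbl/day")             # 900,000
print(f"{gallons_2011 / 15e9:.0%} of the 15 billion gallon mandate")  # 92%
```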

According to Kansas State economist Ted Schroeder, who spoke last month at the Range Beef Cow Symposium at Mitchell, Nebraska, this production accounted for 35 percent of U.S. corn production, a number which is expected to grow to 40 percent by next year. And corn ethanol produced this year came from last year’s corn crop (the 2011 corn harvest is still under way).

Is there enough farmland to grow the quantity of corn required by the RFS? The answer is yes, but the question of corn ethanol sustainability is complicated by the fact that corn is an important food crop. It feeds humans directly as bread, meal and syrup (sweetener), and indirectly as livestock feed.

According to the 2007 U.S. Census of Agriculture, U.S. cropland totals 406.5 million acres. In 2007 corn was cultivated on 86.2 million acres and produced 12.7 billion bushels. According to the USDA, 2009 corn production hit 13.2 billion bushels, the highest annual production on record, on 86.5 million acres. In 2010 corn production was 12.45 billion bushels on 81.4 million acres. The 2011 forecast is 12.3 billion bushels on 83.9 million acres.

If 35 percent of the 2010 yield of 12.45 billion bushels produced 92 percent of the 15 billion gallon corn ethanol RFS, then devoting only about three percentage points more of the crop to ethanol would meet the RFS goal. But corn production isn’t static, and the 2011 yield is expected to be lower, so it will take 40 percent of this year’s crop to match 2011 ethanol production in 2012.

In theory, it is possible to eke out enough corn to hit the 15 billion gallon annual RFS mandate, but is that target reliably sustainable? What happens when the weather doesn’t cooperate? What happens when there is drought, or flooding, or disease or pest problems?

Proponents of the RFS and the mandate to produce 15 billion gallons of corn ethanol annually opine that the solution is to plant more acres of corn. But is upping U.S. corn acreage possible? And at what cost? If you count sorghum, soybean, sugarbeet, and even dry bean acres, you could theoretically add about 70.6 million additional acres of high-yield corn cultivation, for a total of nearly 160 million acres. But that’s really all the suitable high-yield corn land available in the U.S. And if you convert all sorghum, soybean, dry bean and sugarbeet acres to corn, the law of supply and demand will become the driving force. As stocks of the non-planted crops dwindle, their value will increase. Farmers, if they remain at liberty to plant the crops which will bring them the highest economic return, will turn away from corn and begin planting the more valuable crops. Corn production will fall off.

Ethanol production and government subsidies are not new phenomena in the United States, but the quantity of corn used for fuel was fairly low through the 1970s, ’80s and ’90s. Production began picking up in 2002 and 2003, and has grown rapidly since, driven by the ethanol demand of the RFS. As fuel prices increased, auto manufacturers stepped up production of E-85 vehicles and states began mandating ethanol to replace other fuel additives. The increase in corn ethanol demand, which effectively links corn prices to oil prices, has caused corn prices to increase. This means more profitability for corn farmers, but it not only increases fuel prices for consumers, it also increases food prices for consumers. As corn prices rise, so do the prices of wheat, soybeans, and other crops. The price of plant-based food goes up, but since livestock are also fed with corn, grains and soybeans, the price of meat goes up as well.

Each 56-pound bushel of corn processed for fuel produces 2.8 gallons of ethanol and roughly 18 pounds of animal feed in the form of distillers’ grains, so about one-third of the corn used in ethanol production returns to livestock production. Even so, the use of corn for fuel has played a role in increasing the price of corn from $2 per bushel in 2005 to $6 in 2011. There are other factors, such as exports and food use, but Schroeder said that corn prices would be $1 to $1.50 per bushel lower if no corn went to ethanol production. Those lower prices would be reflected in lower food costs.
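The per-bushel figures above reduce to simple arithmetic; a quick sketch using the numbers given in the paragraph:

```python
# Per-bushel yields from corn ethanol processing, as cited above.
BUSHEL_LB = 56      # pounds of corn per bushel
ETHANOL_GAL = 2.8   # gallons of ethanol per bushel
FEED_LB = 18        # pounds of distillers' grains per bushel

feed_share = FEED_LB / BUSHEL_LB
print(f"{feed_share:.0%} of each bushel returns as feed")  # about one-third
print(f"{ETHANOL_GAL / BUSHEL_LB:.3f} gallons of ethanol per pound of corn")
```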

And while increased crop prices are a boon to farmers, ethanol production has a large negative effect on the economics of livestock production. According to Schroeder, for instance, a $1 per bushel increase in corn prices drives alfalfa hay prices up by about 15 percent, which increases the production costs for cow-calf operations and reduces their profitability. The same $1 increase raises feedyard cost of production by about $60 per head, resulting in lower prices at the sale barn for cow-calf producers while still increasing the retail price of beef.

There is a similar economic impact in every other livestock sector, including pork, lamb, and poultry. Consumers are paying more for meat and poultry, while producers, feeders and retailers find their profit margins reduced – or in some cases, gone completely.

The picture isn’t entirely bleak for those livestock producers who have survived. New technologies and improved management practices have allowed many producers to become more efficient. In the cattle sector, a smaller U.S. cow herd and small increases in domestic and international demand for beef have led to higher values for calves and feeder cattle. But high feed costs continue to erode profitability, and consumers continue to pay more for food.

Still, these things are a small price for food producers and food consumers to pay to Save the Planet, right?

Well, perhaps not.

Despite the dire and ongoing predictions of the environmental activists and global warming alarmists, the planet has actually been cooling since 1998. Although it runs counter to the popular narrative, there’s more and better evidence that global cooling poses a larger threat than global warming. Geologic and historical data show that global temperatures have been both significantly warmer and significantly cooler over the past 10,000 years. Periods of warmth have been times of plenty, and periods of cold have been times of famine.

As we’ve discussed in previous editions of this series, there’s also increasing evidence that greenhouse gases do not drive either global temperature or climate change, and that the role man made greenhouse gases play is insignificant.

One of the major arguments for the RFS and blending ethanol into the fuel stream has been to reduce man-made greenhouse gas emissions. If you discount the evidence militating against man-made, or anthropogenic, global warming (AGW), and believe the key to Saving the Planet lies in reducing man-made carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), then ethanol, with its lower carbon footprint, still has to be a good idea, right?

Again, perhaps not.

Combustion of most ethanol-blended gasolines produces essentially the same carbon footprint as gasoline. CO2 production is sharply reduced in high ethanol:gasoline ratios, such as E-85, but at the cost of increased ozone (O3) and carbon monoxide (CO) emissions. The total carbon footprint of E-85 is unchanged with respect to plain gasoline or lower ratio ethanol blends. Unlike gasoline, a hydrocarbon made up of hydrogen and carbon, ethanol is an alcohol, composed of hydrogen, carbon and oxygen. Because alcohols have higher octane, or anti-knock, ratings than gasoline, ethanol was considered as a replacement when lead anti-knock additives began to be phased out of gasoline in the mid-1970s. While lead was eventually replaced by safer compounds, ethanol was shown to reduce smog emissions, mainly through increased combustion efficiency and reduced detonation in the higher compression engines of the day, and ethanol additives began to be used in urban areas in the late 1970s.

Ethanol is able to reduce CO emissions because it features oxygen in each molecule, and when combusted, that oxygen combines with carbon monoxide in the exhaust (carbon plus one oxygen, CO) to form carbon dioxide (carbon plus two oxygen, CO2). CO2 is a less toxic gas, but nevertheless, it’s the one said to be a major player in global warming.

Reducing carbon monoxide levels is a reasonably good thing, particularly in densely populated areas with lots of automobile traffic. But ethanol combustion produces considerably more ozone than does gasoline combustion. And ozone, a major constituent of photochemical smog, is not a good thing in densely populated areas with lots of automobile traffic.

Recently a number of studies, including one conducted at Stanford University, showed that ethanol combustion produces at least two airborne carcinogens, formaldehyde and acetaldehyde. Since gasoline also produces carcinogens, benzene and butadiene, cancer rates attributed to motor vehicles powered by internal combustion engines will likely remain the same, regardless of fuel type. Unfortunately, the increased levels of smog associated with ethanol would probably increase smog related premature deaths by about four percent and spike asthma-related emergency room visits and hospitalizations.

So after spending the last forty-plus years reducing smog, ethanol may reintroduce the nasty “brown cloud” to generations who’ve never experienced it.

Ethanol blended gasoline is also more likely to contaminate groundwater than straight gasoline. Being hygroscopic, ethanol absorbs water, increasing corrosion in gasoline storage tanks and transport pipelines and increasing the incidence of leakage and spillage. Ethanol blending also makes the gasoline mixture more aqueous, allowing it to permeate soils more readily than straight gasoline. Ethanol has also been shown to increase soil porosity, in effect increasing the rate of soil permeation.

Still, ethanol is more powerful, more fuel efficient, and less expensive, right?

Well, not quite.

Gas station price boards and fuel pumps refer to the 10 percent ethanol blend (E-10) as “premium” unleaded gasoline, and tout the higher octane rating of E-10 blends. Many consumers believe that higher octane ratings equate to higher power ratings. This isn’t true, however. Octane ratings simply describe how much the fuel/air mixture can be compressed without detonation. Ethanol has a much higher octane rating than gasoline, and can therefore be compressed more without detonating. E-10 blends typically have an octane rating two to three points higher than regular, or unblended, gasoline. Detonation can quickly ruin an engine, so in general, a higher octane rating could potentially improve the life of the engine. But modern gasoline engines are designed with compression ratios low enough so that they can safely burn regular gasoline, so the octane advantage of ethanol blends is largely a moot point.

Rather than increasing the power of gasoline, blending it with ethanol actually reduces the power it can produce. Gasoline produces a lot of energy, about 115,000 Btu (British thermal units) per gallon, according to data produced by the Oak Ridge National Laboratory (ORNL) in Tennessee. ORNL is one of the main research institutions of the Department of Energy.

The same data show that ethanol is less energy dense than gasoline, containing only 75,700 Btu per gallon, or about 66 percent of the energy of gasoline. An engine converts a roughly constant fraction of its fuel’s energy into work, so putting in less energy means getting proportionally less work out. If your car goes 10 miles on one quart of gasoline, it’ll go only about 6.6 miles on one quart of ethanol.
Blending gasoline with ethanol dilutes the energy content of the fuel. A 10 percent blend has only .966 (96.6 percent) the energy of regular gasoline, a reduction of 3.4 percent. It seems a small number, but it reduces the range of your vehicle. If your car can travel 500 miles on a tank of regular gasoline, it will only go 483 miles on a tank of E-10. Still, that’s just a paltry 17 miles, right? Though E-10 may not be Saving the Planet, or more powerful than regular gas, and though it might reduce your range by 3.4 percent, it’s still cheaper. It says so right there on the gas pump. Right?
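The dilution arithmetic can be written as a short function, using the ORNL energy figures cited above (the 500-mile range is the hypothetical example from the text):

```python
# Energy density figures from ORNL, in Btu per gallon.
BTU_GASOLINE = 115_000
BTU_ETHANOL = 75_700

def blend_energy_ratio(ethanol_fraction: float) -> float:
    """Energy content of a blend relative to straight gasoline."""
    btu = (1 - ethanol_fraction) * BTU_GASOLINE + ethanol_fraction * BTU_ETHANOL
    return btu / BTU_GASOLINE

e10 = blend_energy_ratio(0.10)
print(f"E-10 energy ratio: {e10:.3f}")        # 0.966
print(f"Range on E-10: {500 * e10:.0f} mi")   # 483, down from 500
```

The same function reproduces the .949 figure for E-15 quoted later in the article.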

Ethanol blend prices typically run 4-10 cents lower than regular gasoline prices. On December 13, 2011, in Kimball, Neb., regular gas was $3.39 per gallon, while the E-10 ethanol blend was $3.36. To break even, or to pay the same price per unit of energy, the ethanol blend price would have to be $3.27 (.966 x $3.39). So the ethanol blend actually costs more per unit of energy. Ethanol blending doesn’t save the consumer money at the pump unless it beats the energy spread.
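The breakeven calculation generalizes to any blend; a sketch using the energy ratios given in this article:

```python
# Breakeven pump price: the blend price at which a gallon of blend
# delivers the same energy per dollar as a gallon of regular gasoline.
ENERGY_RATIO = {"E-10": 0.966, "E-15": 0.949, "E-85": 0.711}  # ratios from the text

def breakeven_price(regular_price: float, blend: str) -> float:
    return regular_price * ENERGY_RATIO[blend]

regular = 3.39  # Kimball, Neb., Dec. 13, 2011
print(f"E-10 breakeven: ${breakeven_price(regular, 'E-10'):.2f}")  # $3.27
print(f"E-85 breakeven: ${breakeven_price(regular, 'E-85'):.2f}")  # $2.41
```

Swapping in the other blend names gives the E-15 and E-85 breakevens worked out later in the article.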

As an aside, on May 30, 2012, regular gasoline was $3.76 in Kimball, while the 10 percent ethanol blend was $3.66 per gallon. The breakeven price in this case is $3.63 (.966 x $3.76). Though gasoline prices have come down in recent weeks, the ethanol blend still costs more money per unit of energy.

Recently the EPA okayed the use of a 15 percent blend, or E-15, in unmodified gasoline engines. The availability and use of E-15 hasn’t yet become widespread, for a number of reasons. Gasoline retailers would have to either switch from E-10 to E-15, or offer E-15 in addition to E-10. Each would cost time and money, and that cost would be passed on to consumers. There are also questions about whether most engines can function properly on a diet of E-15. Ethanol actually breaks down the rubber in fuel tanks and hoses, and while E-10 seems dilute enough to avoid this in most cases, there have been rubber breakdown problems in testing and road use with both blends. Ethanol is also highly hygroscopic, tending to absorb water, and as most of us know, water in the fuel is not a good thing.

Leaving aside those problems, there’s still the problem of energy dilution and price. A 15 percent blend has only .949 (94.9 percent) the energy of regular gas. To calculate the breakeven price, multiply the regular gas price by .949. Using our previous example, $3.39 times 0.949 equals $3.22. Therefore, a 15 percent ethanol blend must cost $3.22 per gallon, simply to provide the same energy for the same price. And since E-15 is 5.1 percent less energy dense than regular gasoline, it reduces our 500 mile range to 475 miles.

The news is even worse for E-85, which has an energy ratio of .711. If E-85 is priced at $2.50 today, it is nine cents above the breakeven price of $2.41. A tank of E-85 will make it only 355 miles, rather than the 500 miles for unleaded gasoline. E-85 also requires a special flex fuel engine, and will not work in a regular gasoline engine.

But E-85 isn't priced at $2.50. On May 30 at Scottsbluff, Neb., it was $3.38, against a same-day breakeven of $2.67 (.711 x $3.76). That's 71 cents above breakeven.

Ethanol also costs you more than the pump price reflects. In addition to higher food and fuel prices, ethanol costs you more in taxes. Because it’s more expensive to produce than gasoline, and because it’s a weaker, less energy dense fuel, the ethanol industry cannot compete in the marketplace or even exist without taxpayer money. Though in January Congress allowed both the 45 cent-per-gallon Volumetric Ethanol Excise Tax Credit and the 54 cent-per-gallon ethanol import tariff to expire, the Small Ethanol Producer Credit and the Alternative Vehicle Refueling Property Tax Credit are still in place, propping up the industry. As Senator Dianne Feinstein (D-CA) said in 2011, “The ethanol industry is the only one to ever receive the triple crown of government intervention. Ethanol use is mandated by law, its users receive federal subsidies and domestic production is protected by tariffs. That policy is not sustainable.”

Direct federal subsidies to the ethanol industry rang up at more than $6 billion in 2011. With about 200 corn ethanol plants in the U.S., that means that each plant receives subsidies worth on average $30 million. Looked at another way, the cost of the ethanol industry to each American taxpayer is more than $36 each year. There’s nothing fundamentally wrong with ethanol as a fuel, or with having an ethanol industry, or even with basing so much of it on corn. But like all for-profit industries receiving federal subsidies, ethanol should be able to make it on its own in the market on the merits of its product.
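The per-plant and per-taxpayer arithmetic is straightforward; a sketch (the roughly 165 million taxpayer count is my assumption, implied by the $36 figure rather than stated in the text):

```python
subsidy_total = 6e9   # direct federal ethanol subsidies, 2011
plants = 200          # approximate number of U.S. corn ethanol plants
taxpayers = 165e6     # assumed taxpayer count (not given in the text)

print(f"Per plant: ${subsidy_total / plants / 1e6:.0f} million")  # $30 million
print(f"Per taxpayer: ${subsidy_total / taxpayers:.0f}")          # about $36
```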

All in all, corn ethanol really doesn’t deliver very well on the promises made by the major media, corn and ethanol advocates and lobbyists, and environmental activist organizations. It costs more and delivers less energy, emits the same carbon footprint as gasoline, and emits both ozone and carcinogens. It does increase the profitability of corn farmers, but at the cost of increased food prices, fuel prices, and taxes.

Does corn ethanol help or hinder ag producers, consumers, or taxpayers? You be the judge.

Climate change and starvation: assessing the risk

“The most outrageous lies that can be invented will find believers if a man only tells them with all his might.” – Mark Twain

“A lie can travel halfway around the world while the truth is putting on its shoes.” – attributed to Mark Twain

There were a lot of stories in the major media late last year touting experts’ predictions that climate change will lead to mass starvation unless we do something immediately.

Fortunately, the experts are almost certainly wrong, as they have been for decades, which we’ve pointed out in this series. The stories, however, illustrate the ongoing and pervasive nature of the environmental alarmist narrative.

At least two of the stories appeared in the on-line agricultural journal Drovers CattleNetwork.

The first was a Reuters story reporting the position of the U.N. Food and Agriculture Organization (FAO): that the planet’s environment is seriously degraded and threatened by global warming, and that by 2050 agriculture will be unable to feed the growing global population.

The second story reported on nearly identical findings from researchers of the Commission on Sustainable Agriculture and Climate Change, convened by the Consultative Group on International Agricultural Research.

These stories and a number of dubious scientific reports were timed to coincide with environmental and food security concerns voiced at the 17th annual global warming conference in Durban, South Africa. The formal recommendations of the conference were “…to advance, in a balanced fashion, the implementation of the Convention and the Kyoto Protocol, as well as the Bali Action Plan, and the Cancun Agreements.”

These protocols, plans and agreements are intended to fight anthropogenic, or man-made, global warming (AGW) through: 1) carbon emission sequestration through the carbon market, 2) the Clean Development Mechanism, and 3) Joint Implementation, as outlined by the United Nations Framework Convention on Climate Change.

Carbon sequestration and carbon trading quickly fell apart. There’s simply no evidence that greenhouse gases such as carbon dioxide and methane drive global temperatures or have ever driven temperature on the planet. Neither does physics allow for such a thing to happen.

As for carbon trading, the Chicago Climate Exchange (CCX) closed in 2010 after carbon prices fell from $7.50/metric ton to less than a nickel/metric ton. The European Climate Exchange (ECX) is still trading carbon, but at great cost – at least $67 billion annually – to the economy of the European Union (EU). In the U.S. carbon trading was an unworkable scheme; in the EU it is an expensive, and failing, scheme.

The Clean Development Mechanism is perhaps the costliest swindle ever perpetrated on the third world. In simplest terms, it disallows third world development, because development causes greenhouse gas production, and greenhouse gases cause global warming. Those countries which have become dependent on UN money face the choice of losing UN cash if they attempt to develop on their own or continuing to barely subsist on UN rations. Since third world governments are the largest beneficiaries of the UN payouts, there is little if any incentive for them to change. The masses of the third world population continue to live a life of agrarian subsistence.

Joint Implementation is a combination of blackmail and swindle, where the developed countries are expected to pony up $100 billion annually to save the undeveloped world from global warming.

But a funny thing happened on the way to global warming. Despite the pronouncement by the International Energy Agency last month that global warming will become “catastrophic and irreversible” in 2017, the whole scheme has begun to fall apart. The Earth began to cool in 1998. Sea levels have not risen at all, let alone catastrophically. CO2 levels continue to rise, but none of the horrific greenhouse gas predictions have come true. Thousands of “climategate” emails now reveal the deeply unethical and flawed practices of the environmental alarmist and UN scientific “experts.”

And while the theory of man-made global warming is falling apart, so is the global economy. As Bret Stephens noted recently in the Wall Street Journal, first world nations can no longer afford to pour money into the invented notion of saving the planet by mitigating man-made greenhouse gases. Environmental alarmists and the UN have been barking up an invented tree, and the world can no longer afford to expend its wealth chasing imaginary demons.

“The U.S., Russia, Japan, Canada and the EU have all but confirmed they won’t be signing on to a new Kyoto,” said Stephens. “The Chinese and Indians won’t make a move unless the West does. The notion that rich (or formerly rich) countries are going to ship $100 billion every year to the Micronesias of the world is risible, especially after they’ve spent it all on Greece.

“Cap and trade is a dead letter in the U.S. Even Europe is having second thoughts about carbon-reduction targets that are decimating the continent’s heavy industries and cost an estimated $67 billion a year. “Green” technologies have all proved expensive, environmentally hazardous and wildly unpopular duds.

“That’s where the Climategate emails come in. First released on the eve of the Copenhagen climate summit two years ago and recently updated by a fresh batch, the “hide the decline” emails were an endless source of fun and lurid fascination for those of us who had never been convinced by the global-warming thesis in the first place.

“But the real reason they mattered is that they introduced a note of caution into an enterprise whose motivating appeal resided in its increasingly frantic forecasts of catastrophe. Papers were withdrawn; source material re-examined. The Himalayan glaciers, it turned out, weren’t going to melt in 30 years. Nobody can say for sure how high the seas are likely to rise – if much at all. Greenland isn’t turning green. Florida isn’t going anywhere.

“The reply global warming alarmists have made to these disclosures is that they did nothing to change the underlying science, and only improved it in particulars. So what to make of the U.N.’s latest supposedly authoritative report on extreme weather events, which is tinged with admissions of doubt and uncertainty? Oddly, the report has left climate activists stuttering with rage at what they call its “watered down” predictions. If nothing else, they understand that any belief system, particularly ones as young as global warming, cannot easily survive more than a few ounces of self-doubt.”

But if the theory of man-made climate change is falling apart, what about the specter of famine as the global population continues to grow? As we outlined in parts two and four of this series, there’s little evidence to support the notion that agriculture is destroying the planet or that it will be unable to continue to feed the global population. These are simply more alarmist myths, calculated to instill fear and loosen global purse strings in the pursuit of a political agenda.

Not only is agriculture keeping up with global food demand, it is improving the ecology of the planet. Agriculture’s adverse environmental impacts, already shrinking in number, continue to be mitigated by improved farming and ranching practices. At the same time, diets, health, and life spans continue to improve around the globe. This is hardly a catastrophe.

Global famine could happen, of course. The climate is changing, just as it has continually changed for more than four billion years, and an extended period of global cooling would doubtless lead to crop failures and hunger. Global cooling or the beginning of a new ice age is by far the most likely possible cause of widespread famine. As recently as the Little Ice Age (LIA, 1300-1850 A.D.) crops failed and human populations fell around the globe. And famine isn’t the only threat to humanity.

There are no guarantees in life.

As mortal beings, we all learn this at an early age. It’s an intellectual fact for the young, but it becomes real and visceral as the years begin to add up.

That’s for the individual of course.

But there are widespread risks of deadly peril to humanity as a whole. There’s no sense arguing the fact. A major asteroid colliding with Earth would probably kill all or nearly all of us. So would a large-scale exchange of nuclear weapons. So would the sudden onset of a planetary glaciation. So would world wide crop failure. And so, probably, would a world wide and long-term interruption of electricity.

Each of these things can happen. In fact, with the exception of nuclear war, each has happened, right here on this planet.

But what is the risk of one of these things – or some other catastrophe – devastating humanity in the near future? How does one assess such a risk, and then having made an assessment, how does one prepare for the coming crisis?

For some potential catastrophes, it doesn’t matter. Were a very large (greater than 500 kilometers in diameter) asteroid or comet found to be on a collision course with Earth and due to strike within the next decade, there would be nothing humanity could do to avert the disaster. Oh, there would doubtless be a crash program to destroy or deflect the object, but at our level of technical and political ability, these ideas are the stuff of science fiction – America hasn’t even finished rebuilding on the site of the former World Trade Center yet. What kind of crash program are we capable of? The object would strike and life as we know it – perhaps all life – on Earth would end.

The same is essentially true for nuclear war, sudden onset of glaciation (the onset of some glaciations, or ice ages, has happened within mere decades, according to the geological record), world wide crop failure, and the long-term interruption of our ability to use electricity.

There would almost certainly be one difference with these last four, however. Life on this planet would survive. Humans would be very hard hit – societies would collapse, populations would plunge, human lives would become nasty, brutish and short – but the rest of Earth’s life forms would quickly adapt and go on much as before.

Now, that’s a lot of doom and gloom. But if you objectively quantify the risks of each of those things happening, you find some rather good news. A massive impact could occur, but none on that scale has for about four billion years. Nuclear war could happen – the risk is far greater – but a world wide nuclear exchange is almost certainly beyond our present technical and political capabilities. As for a world wide crop failure, it’s hard to imagine more than one likely circumstance which would drive such a global catastrophe. It could happen, but the risk is quite small.

Perhaps the highest catastrophic risk of the five mentioned above is the long-term loss of our ability to use electricity. A massive solar flare could do it, as could a coordinated electromagnetic pulse (EMP) attack, where large nukes would be set off high in the atmosphere and the resulting rain of radiation and nuclear particles – which would be mostly harmless (at least in the short term) to human life – would nevertheless destroy all unshielded electrical equipment. No computers, or cell phones, or “cloud” based devices. No stock exchange. No electrical generation and none of the heating, cooling and lighting based upon such generation. No cars or motorcycles. No refrigeration. No ATMs. The list goes on and on. In such a suddenly changed regime, life would be very hard for people. Harder perhaps than you can imagine.

Still, the likelihood of any of these events actually occurring within the next twenty years is relatively small.

One way to assess risk is mathematically. We’ve just done this, on the back of an envelope as it were, for the above catastrophic scenarios. To describe it mathematically, or to “say it in math,” goes something like this: C=Rl/Dle, where C is the catastrophe quotient, Rl is the likelihood that the risk will occur (the quantified level of risk), and Dle is the expected devastation level.

Now comes the fun part: assigning numerical values to the various mathematical expressions. This yields a catastrophe quotient – given as a percentage – describing the relative threat of human devastation posed by a particular catastrophic event. It’s not a predictive tool; rather, it’s a way to wrap your mind around the problem.
Because Rl is expected to be small in the above scenarios, let’s scale it from 0-5, with zero being not expected to happen, ever, and five being unlikely to occur, but still a distinct possibility. As for Dle, let’s use a range of 0-100, where zero is no devastation and 100 is complete loss of human life on the planet.

In our massive impact scenario, Rl would be quite small, on the order of 0.02. Devastation to planetary life, however, would be huge, the maximum possible of Dle = 100. Therefore our equation would look like this: C = (0.02 × 100)/1000, or 0.002 percent. In this mathematical context, not a very big risk.
Now let’s look at scenario five from above, world wide long-term interruption of electricity, using the same formula.

On Nov. 3, 2011, a powerful solar flare erupted from a huge sunspot on the surface of our star. It was classified as an X1.9 flare. Although the flare wasn’t aimed directly at Earth, it still caused some radio and other communications interruptions and breakdowns.

Solar flare energy is rated on peak X-ray output in the 1-8 Ångström band and measured in Watts per square meter (W/m2). Each letter class (A, B, C, M, X) represents a tenfold increase over the one below it, and X-class – the strongest category – begins at 0.0001 W/m2 (10 to the minus fourth power). The November 3 flare, classified X1.9, peaked at about 0.00019 W/m2, nearly twice the X-class threshold. And unlike the lower classes, the X scale is open-ended: flares of X10 and beyond, more than five times as energetic as the November 3 event, have been recorded.

Stronger flares have been measured, including X-28+, X-20, X-17, X-15, X-14, X-12, X-10, and a cluster between X-9.0 and X-9.8, all during the last 30 years. None of these flares directly impacted Earth. However, should one do so, it could quickly overwhelm the Earth’s protective magnetic field, leading to a de facto EMP event.
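The letter-and-number classification maps directly onto peak X-ray flux, so converting between the two is mechanical. The class floors below are the standard GOES values; the little conversion function itself is just for illustration:

```python
# GOES flare classes are set by peak X-ray flux in the 1-8 Angstrom band.
# Each class floor is ten times the one below; the X class is open-ended.
CLASS_FLOOR_W_PER_M2 = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}

def flare_flux(classification):
    """Convert a class string like 'X1.9' or 'M5' to peak flux in W/m^2."""
    letter, magnitude = classification[0].upper(), float(classification[1:])
    return CLASS_FLOOR_W_PER_M2[letter] * magnitude

print(flare_flux("X1.9"))  # the Nov. 3, 2011 flare
print(flare_flux("X28"))   # the strongest flare on the list above
```

Run against the flares listed above, the function makes the open-ended nature of the X class obvious: an X-28 releases nearly fifteen times the peak flux of the November 3 event.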

How do you assess such a risk? Applying our simple math from above, we find that the Rl is quite high – with some 30 solar flares of X-9 magnitude or greater in less than 30 years, perhaps as high as 2.5. The Dle, in human terms, is also quite high, perhaps on the order of 90. (How long do you think you would survive without food, heat, light, and the rest?) This makes the equation come out quite differently: C = (2.5 × 90)/1000, or about 0.23 percent – two full orders of magnitude higher than the impact scenario.

What about widespread crop failure and famine caused by global cooling then? At the onset of the Little Ice Age, it took barely 20 years for food production, and then population, to begin falling off. So how do we assess a similar risk today?

In general, the risk would probably fall somewhere between the two extremes cited above.

We could reasonably set Rl at 1.0 and Dle at 50. Therefore our equation would look like this: C = (1.0 × 50)/1000, or 0.05 percent. Again, not a huge risk, but potentially a troubling one.

With history as a guide, however, it’s more likely that our climate will remain reasonably stable for at least hundreds of years. Also, agricultural techniques and technologies have improved greatly since the Little Ice Age, so the impact of a similar cooling period would probably be greatly reduced when it comes to food security.

Overall, an objective assessment of the likelihood of catastrophic climate change and global famine shows that the risk is there, but it appears to be quite low, at least for the foreseeable future.

Carbon Harvest: Feeding the World

This autumn, harvesters will be crawling in their thousands across 'the fruited plain,' reaping the bounty of the land to feed hundreds upon hundreds of millions of people. The annual American harvest, which passes unseen and unnoticed by most of the U.S. population, is a good time to think about the crops farmers and ranchers are harvesting, and how it is that there are crops to harvest in the first place.

Even in the largely rural Panhandle of Nebraska where I live, where the nearest cities are Denver (pop. 600,000) and Cheyenne (pop. 60,000), people get their food from a store. Most people in the region are physically closer to farms and ranches than their urban or suburban counterparts, and most have seen cattle grazing and tractors tilling and combines harvesting, but like their fellow citizens from the cities and the suburbs, they have little understanding of how crops and livestock become the packaged food items they purchase every week.

For the vast majority of Americans, farming and ranching are but a distant abstraction. Though fully half the population either farmed or ranched or worked on a farm or ranch at the turn of the twentieth century, and about 80 percent did so at the turn of the nineteenth century, at the turn of the twenty-first century fewer than two percent of the population were farmers or ranchers. Because of the way farmers and ranchers are counted today, the number of full-time working farmers and ranchers is probably closer to one percent. The remainder of the U.S. population, some 98-99 percent or about 305 million Americans, are two or more generations removed from the farm and ranch.

For those people, harvest is a word they learned to define in school; tractors, combines and tillage machinery are things they’ve seen on television or, rarely, at a county or state fair; the “job” that a farmer or rancher does is a mystery; and few if any can tell the difference between corn and milo, wheat and millet, soybeans and sugarbeets, cows and steers.

This is not to denigrate today’s non-farmers and non-ranchers. There are countless aspects of their collective jobs and environments which they understand intuitively but which farmers and ranchers understand not at all. We farmers and ranchers rely on their knowledge and skills to make our clothes, power our homes, manufacture our televisions and computers and the dozens of items we use every day, on and off the farm and ranch. And most of us, too, buy our food at the store.

The story of how freshly harvested commodities make their way to grocer’s shelves is fascinating, but we’ll cover that another time. This time, let’s take a look at the food plants and animals themselves – where they come from and how they grow. You might just find the story surprising.

It starts with carbon. Carbon is absolutely essential to life as we know it. At the most fundamental level, carbon plays a decisive role. DNA – the genetic blueprint and instruction manual for every living organism, inhabiting every cell of every living thing (other than a handful of RNA viruses and the little-understood prion, neither of which really meets the definition of “life”) – is made up of carbon, phosphorus, nitrogen, oxygen and hydrogen. Take any one of those elements away, and DNA is just a mishmash of decomposing molecules. But carbon is unique in another way. Because of its atomic structure, with four valence electrons, carbon provides the vital linkages between the other elements, providing the backbone of DNA. Without carbon, there would be no DNA, and without DNA there would be no life.

But carbon is more than the backbone of DNA. It is the backbone of life in general. Just as it provides the critical linkages in the DNA molecule, it provides the critical linkages in cells, tissues, proteins, amino acids, sugars, starch – and the list goes on.

If you’ve ever seen Star Trek: The Motion Picture (1979), you may recall that the threatening entity faced by the crew of the Enterprise, V-Ger, considered the crew of the starship to be an infestation of “carbon-based units.”

While the movie was fiction, of course, all life as we know it, from the tiniest bacteria to the largest whale or sequoia, does indeed consist of carbon-based units. Leaving aside water, carbon is the most abundant element in every animal and plant. The average human, for instance, contains 16 kilograms (kg) of carbon, roughly 35 pounds (lbs), or about 23 percent of total body mass. The ratio is pretty much the same for all animals. When it comes to plants, the carbon ratio is far higher. Consider grass, for instance. Again leaving aside water, and depending on the variety, grass is composed of 75-95 percent cellulose, hemi-cellulose and lignin, all of which are composed of complex carbon-based molecules. Lignin is about 59 percent carbon, cellulose about 42 percent carbon, and hemi-cellulose 38 percent carbon.
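Those figures are enough to estimate the carbon content of dry grass. In the sketch below, only the per-component carbon percentages come from the paragraph above; the 10/55/30 split among lignin, cellulose and hemi-cellulose is an assumed example, since the text gives only the combined 75-95 percent range:

```python
# (share of dry mass, carbon fraction) -- the shares here are an assumed
# example split; the remaining 5% would be protein, minerals, and ash.
components = {
    "lignin":         (0.10, 0.59),
    "cellulose":      (0.55, 0.42),
    "hemi-cellulose": (0.30, 0.38),
}

carbon_fraction = sum(share * carbon for share, carbon in components.values())
print(f"dry grass is roughly {carbon_fraction:.0%} carbon")  # about 40%
```

Under these assumptions, roughly two of every five kilograms of dry grass is carbon – well above the animal ratio cited above.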

In and of itself, however, carbon is little more than ash or a dirty rock. For carbon to become integrated into the very core of every plant and animal on the planet requires quite a feat of natural chemical engineering. Most of that engineering is done via metabolism, where life forms take in raw materials, use what they need, and discard the remainder as waste. The food chain is representative of this process.

From our human perspective, it’s fair to say that plants form the base of the food chain. In the simplest form of the food chain, plants are consumed by herbivores, which are consumed by carnivores, which are finally consumed by scavengers. Some animals, including humans, are omnivores, and consume both plants and animals. The food chain doesn’t start at the bottom and end at the top. Rather, it is a continuous strand of links – a closed loop or a continuous cycle. Though we often hear about the “top of the food chain,” in reality, there is no pinnacle. The stuff of life is constantly recycled from earth to plant to animal to scavenger and back to earth.

Though some creatures mimic a few aspects of agriculture (a handful of insect species, and fewer still among other animals), man is the only organism on the planet that truly practices agriculture, or grows his own food. But while man expends enormous energy, huge quantities of resources, and countless hours planting, nurturing and harvesting food plants and animals, all of the real work of assembling elements and nutrients into food occurs naturally.

When it comes to the plants we eat, farmers prepare the soil, plant the seeds, nurture the growing plants with fertilizer, pesticides, and in some cases water, and then harvest the crop when it reaches maturity at the end of the growing season. This all sounds simple, but if you’ve ever stood at the corner of a square mile of wheat or corn and tried to imagine raising such a crop by hand, from tilling to planting to nurturing to harvesting – well, I hope you get the picture.

All of this toil – the countless hours of work by man and tractor and machinery and harvester – is absolutely necessary to feed the human population. Farmers, ranchers, and those in the food supply system free everyone else to do things they couldn’t do if they had to grow and process their own food. Still, all of the work done by farmers and ranchers simply supports and takes advantage of a natural process. If the seeds planted by farmers contained no DNA (and if the DNA contained no carbon), no plants would grow. And man, regardless of the technological wonders he has wrought, cannot make DNA. Only nature can do so.

Neither can man “make” the plant grow. Farmers can support plant growth by providing for proper soil and nutrition and applying pesticides when needed, and in some cases by applying supplemental water, but plants grow by photosynthesis, using energy from sunshine to drive their internal metabolism, which combines elements from the ground, carbon from the air (carbon dioxide) and water from the sky and turns them into growing biomass, and ultimately, food. But man cannot “do” photosynthesis. Only nature can.
And so from nature alone, with a backbone of carbon and a relatively small but vastly important assist from man, springs the staff of life. And many other things that are good to eat, too.

But man cannot live on bread alone, right? I hope you’ll pardon the out-of-context usurpation of that line. Let me put it another way. Man, as an omnivore, lacks certain digestive capabilities, and finds it quite difficult to stay healthy on a diet of vegetation alone. Oh, it can be done, but at the considerable expense of purchasing and consuming the various vitamins and elements needed to maintain human health but not available in vegetation. Or at least not available in a form man can digest and make use of. Happily, those vitamins and elements are available to and digestible by humans in the form of meat. Particularly cooked meat, which is the kind most of us prefer.

And so farmers, in this case often called ranchers, grow meat animals as well. But just as with the farmers who grow crops, ranchers can only support and nurture the meat animals they grow. Animals arise from and grow according to the dictates of DNA, just like plants. And carbon is the backbone of animal DNA too.
Animals have a different metabolism than plants. While plants derive their energy from the sun through photosynthesis, animals – herbivores at least — get their energy from eating plants. In essence, they harvest the sunlight that was harvested by the plant. As we go further around the food chain, predators harvest the energy that the herbivore harvested from the plant which came from the sun.

As the growing season across North America comes to a close and farmers and ranchers harvest the bounty of the land, it’s good to remember that we can only harvest and consume those things which nature provides. We can tweak things a bit here and there, and we’ve done remarkable work in increasing yields and ensuring the quality and safety of the foods we all consume. But we do not control the sun, or the weather, or make seeds germinate or ova quicken. We do not “make” our food. Nature does. And without carbon, not even nature could make food.

Why the emphasis on carbon? For some time now, for more than 25 years, in fact, politicians, self-appointed experts, and an agenda-driven media have been pushing the story that manmade carbon dioxide is pushing the planet into a climate catastrophe. Around the globe, governments are spending hundreds of billions of dollars to mitigate the production of manmade “greenhouse gases” such as carbon dioxide. In the U.S., the Environmental Protection Agency has declared carbon dioxide a pollutant. For a short time, environmental alarmist groups and capitalists – ordinarily the most bitter of enemies – joined forces to make money and sequester carbon dioxide in the ground via the Chicago Climate Exchange (CCX). In a Ponzi scheme of ridiculous proportions, speculators could buy, sell and trade so-called carbon offsets through CCX. At one point carbon was trading at $7.50 per metric ton. When CCX folded in 2010 the price was five cents per metric ton – and falling.

Over the last few years legitimate climate science – not the “settled” or “consensus” science invented by climate alarmists – has finally come to the fore. Despite the vocal claims to the contrary, carbon dioxide and other “greenhouse gases” are not driving the climate. Every hypothesis proposed by the alarmists has been shown to be in error or to violate physical law. Every one. Of the 21 ongoing computer climate models operated under the auspices of the U.N.’s Intergovernmental Panel on Climate Change (I.P.C.C.), none have predicted any of the climate changes which have occurred over the last decade. None of the 21 agree with any of the others, in fact. Computer models are ruled by the Garbage In-Garbage Out (GIGO) principle – flawed or incomplete input will only yield flawed or incomplete output.

Just as man cannot make DNA or make food grow, neither can man change the climate. Nature, which does make DNA and does grow food, also runs the climate. And nature is far, far more powerful than man.
So, in fact, is carbon.

So as we celebrate the bounty of harvest this autumn on farms and ranches and in cities, towns and villages, perhaps we should remember that as wonderful as we are – and we truly are wonderful in many, many ways – we would be nothing were it not for carbon.

Agriculture and media bias

A small, rural Nebraska newspaper ran an Associated Press (AP) piece some time ago about cattle starving in Nebraska. The report said in essence that more than 240 cattle had starved to death because the cattle owners couldn’t afford to feed them. The AP reporter didn’t actually see any starved carcasses or directly interview his sources for the story. The research was done over the telephone from more than a thousand miles away. The writer concluded that cattle starvation was on the rise in Nebraska, was going to get worse, and that high feed costs were the culprit.

Further investigation of the claims the story made showed that cattle starvation numbers were actually down in Nebraska, and the sources cited by the AP writer had each been surprised that the printed story diverged so far from the truth. Each also complained that he or she had been quoted out of context or misquoted altogether.

Though the small, rural newspaper did run a piece rebutting the AP story, it was too little, too late. The small newspaper’s circulation was 5,000 rural High Plains readers, while the AP story had been picked up and printed nationwide.

Farmers and ranchers know that agricultural reporting in the major media is sharply biased against modern food production and toward today’s popular environmentalist ideology. Many Americans suspect that the major media is probably biased against conservative or classical liberal values and toward modern-day liberal values.

Most of us are quite sure that leftist media bias exists in this country, and now that suspicion has been tested empirically in well-crafted and well-executed studies, each of which has been published in the best peer-reviewed journals in the land.

A few of the scientists conducting media bias research include Tim Groseclose, Ph.D., professor of political science and economics at UCLA; Jeffrey Milyo, Ph.D., professor of economics and public policy at the University of Missouri; Stefano DellaVigna, Ph.D., associate professor of economics at U.C. Berkeley; Ethan Kaplan, Ph.D., professor of economics at Stockholm University; and a host of others.

Why does media bias exist? Why is media bias important to Americans in general? Why is media bias important to farmers, ranchers, and consumers?

Before we delve into the empirical proof that liberal media bias exists, let’s be clear on one thing. Bias is defined as an inclination to hold or present a particular perspective while ignoring or minimizing the validity of alternative perspectives. Though the media is significantly biased to the left, this does not mean that they are untruthful. A few are dishonest; most journalists, however, simply select the stories that interest them, then slant their reporting toward their own perspective while minimizing or ignoring alternative perspectives. They are, after all, human.

By and large, journalists exist in a self-selected, highly left-biased work environment populated by like-minded people. They rarely, if ever, have significant discussions or interactions with those who hold different viewpoints; each day they are surrounded by colleagues who share and support their world view. This strongly reinforces their sense of the correctness of the positions they hold within the framework of a leftist/liberal narrative.

To practice the journalism outlined in the Society of Professional Journalists Code of Ethics takes a great deal of diligence, rigor, and detailed research. Journalists are voluntarily bound by this code to provide objective reporting. Some do. But many, many more do not.

A complicating factor is that many newspapers and broadcast journalism outlets pick up and run stories from wire outlets like the Associated Press (AP), Reuters, etc. These stories are seldom if ever vetted by in-house staffs. And they nearly always follow the prevailing leftist political narrative in terms of content and “spin.”

Also, advocacy groups, including the Humane Society of the U.S., the ethanol industry, and others, often flood newspapers and other outlets with well-crafted advocacy pieces written in the style of news stories.
In these tough economic times, which have hit the journalism industry especially hard, resulting in less advertising revenue, shrinking profit margins, and smaller reporter staffs, these advocacy pieces are often seen by editors as a boon – free, page-filling copy that they didn’t have to pay their own reporters to write. Such advocacy pieces are often run as straight news, though they are anything but complete, objective, or verified.

Thus, over time, large, left-biased news organizations – and to some extent, well-heeled advocacy groups – shift the common knowledge of the newsroom to left-biased common knowledge. In newsrooms across the land, there is an ever-widening gap between what journalists actually know and what they think they know.

Subsequently, much of this left-biased common knowledge becomes incorporated into the common knowledge sets of news consumers, shifting their common knowledge base to the left as well.
This is one of the reasons media bias matters. It distorts our perception of reality. Farmers and ranchers aren’t under attack by mean people; they are under attack by consumers who are genuinely concerned about their food supply and about their planet, but whose basic understanding of agriculture and the Earth’s ecology has been distorted by the left-biased media, which bombards them ceaselessly with slanted reporting.

Now let’s look at how media bias can be scientifically demonstrated. First, a recap of the scientific method: a scientist forms an idea, or hypothesis, tests it and fleshes it out into a theory, then tests the theory and shares the results so fellow scientists can reproduce or falsify them on their own. A theory which passes this peer-review/reproduction process carries more weight (or a smaller level of uncertainty) than one which does not.

In 2002, Professors Tim Groseclose of UCLA and Jeffrey Milyo, then of the University of Chicago and now of the University of Missouri, began a research project with the goal of objectively measuring media bias. Their research was ultimately published in the very prestigious Quarterly Journal of Economics in November 2005.

Groseclose wrote a follow-on book, Left Turn, published in 2011, in which he expanded on the paper he and Milyo authored.

The tools they chose to objectively measure bias were the Political Quotient (PQ) and the Slant Quotient (SQ). PQs were determined by recording the roll-call votes of all members of Congress on 20 selected pieces of legislation, and then rating those votes on a scale running from roughly 0 to 100, where the higher the score, the more liberal the vote, and vice versa (the statistical adjustment involved allows scores to fall slightly outside that range). The 20 votes used in the study were chosen by the liberal interest group Americans for Democratic Action (ADA). Because Groseclose is an admitted conservative (a rarity in academia), he chose to use ADA-selected issue votes, which served to keep any personal conservative bias out of the research project. Calculating the PQ serves to quantify each Congress member’s objective conservatism, centrism, or liberalism. Michele Bachmann (R-MN) scored -4.1 and Jim DeMint (R-SC) 4.8 (very conservative). On the other end of the spectrum, Barney Frank (D-MA) scored 103.8 and Ron Dellums (D-CA) 107.4 (very liberal). Closest to the middle were Olympia Snowe (R-ME) at 47.9 and Ben Nelson (D-NE) at 55.6.

In calculating media slant quotients, the researchers compared news stories from 20 major media outlets and treated them as if they had been speeches made by members of Congress. To compute the SQs, Groseclose and Milyo counted citations of left- and right-leaning think tanks and “loaded political phrases” contained in the news stories. The “loaded political phrase” technique was pioneered by Matthew Gentzkow, Ph.D., and Jesse Shapiro, Ph.D., both professors of economics at the University of Chicago.
Gentzkow and Shapiro demonstrated that the use of loaded political phrases reveals the political bias not only of members of Congress, but of journalists and people in general. For instance, phrases a liberal would use include ‘veterans health care,’ ‘Arctic National Wildlife,’ ‘outing a CIA agent,’ ‘oil companies,’ ‘civil rights,’ and ‘Rosa Parks.’ Phrases a conservative would use include ‘personal retirement accounts,’ ‘global war on terror,’ ‘partial-birth abortion,’ ‘stem cell,’ ‘death tax,’ and ‘illegal aliens.’
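The intuition behind phrase counting can be shown with a toy scorer. To be clear, this is not the Gentzkow-Shapiro or Groseclose-Milyo statistical method – those involve careful econometrics – just a minimal sketch of the underlying idea, using a handful of the phrases listed above:

```python
# Toy slant scorer: 0 = uses only right-leaning phrases,
# 100 = uses only left-leaning phrases, 50 = neutral / no signal.
LEFT_PHRASES = ("oil companies", "civil rights", "veterans health care")
RIGHT_PHRASES = ("death tax", "illegal aliens", "stem cell")

def toy_slant(text):
    text = text.lower()
    left = sum(text.count(p) for p in LEFT_PHRASES)
    right = sum(text.count(p) for p in RIGHT_PHRASES)
    if left + right == 0:
        return 50.0
    return 100.0 * left / (left + right)

print(toy_slant("The oil companies fought the death tax over civil rights."))
```

A sentence using two left-leaning phrases and one right-leaning phrase scores about 67 – leaning left, as you would expect. The real method treats such counts as data for a statistical model rather than scoring them directly.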

Using a statistical technique that relies in part on the Gentzkow-Shapiro method, Groseclose and Milyo were able to determine a precise SQ for each of the media outlets they studied.

The researchers weren’t surprised to find left media bias. They were surprised, however, at how pervasive and far left the bias was.

They found that the least biased major media outlets were The Washington Times and Fox News’ Special Report with Brit Hume, coming in at about 44.2, similar to the PQ of Susan Collins (R-ME). The most biased outlets were The Wall Street Journal’s news pages and CBS Evening News, which came in at 85 and 73 respectively, similar to the PQs of Ted Kennedy (D-MA) and Joe Lieberman (D-CT). The remainder of the media outlets ranged from 55 to 73, on par with strongly left-leaning members of Congress.

Interestingly, the media outlets’ SQs grouped much more closely to the score of the “average Democrat” in Congress (PQ about 85) than to that of the “average Republican” (PQ about 15). In fact, the SQs for all of the media outlets examined were higher (that is, more leftist) than the mean value of 50.
Considering that Congress represents individual citizens rather than the more nebulous idea of “states,” the PQs of Congress probably make a fair representation of the PQ of the average American. That is, they describe a distribution curve with a few outliers on the fringes but with the bulk of the population grouped fairly tightly within 10 points or so of the middle, between 40 and 60.

Groseclose and Milyo’s research shows that the bulk of the media is considerably left-biased when compared to most of the population. The media’s distribution curve is similar to that of the population, with outliers on each fringe, but with the media clustering between 55 and 75.

The average PQ for the population in general is 50; the average for the media in general is 65. That’s quite a difference.

Erring on the side of objectivity, Groseclose and Milyo chose the estimates least likely to overstate liberal bias when data sets diverged slightly. Therefore, Groseclose believes the evidence shows that the media is actually about six points more left-biased than the findings he and Milyo published.

Now why is this bias important?

Many argue that the left-biased media is countered by right-biased media such as talk radio, conservative internet websites, and Fox News. However, the left-biased media is much larger and has been in place far longer than the right-biased media, and it reaches hundreds of millions of media consumers rather than the few million reached by right-biased outlets.

More importantly, we’re all susceptible to the fundamental trap of judging media bias. To determine whether the media is biased, we need to compare it to an outside, independent source of information. Yet most of us get the bulk of our information from the media. Now remember, the left-biased media rarely lies about details, they only present information in a biased light. If we don’t compare this biased information to an objective and independent source of information, we get only a biased, or distorted, world view.

To borrow from Groseclose, if our only source of information were televised basketball, we would think basketball of ultimate importance to humanity, and that 6’8” is the size of a normal person. We would conclude that a six-footer was a dwarf.

It is possible to get information from outside, independent sources, such as professional journals and personal interaction with experts. But this takes a lot of work. Many of us become, over time, absolute experts in our chosen fields. But in doing so, we sacrifice potential investigation time on the altar of our own professional development. We then turn to the media for information about the rest of the world. And the media gives us a distorted view of that world. Far better if the media were less biased, and gave us a clearer picture of the world.

To borrow from Groseclose again, let’s take guns, for example. Like any tool, guns may be used to good and bad purpose. Let’s imagine, and I’ll again use Groseclose’s numbers here, that guns are used for good 30 percent of the time and for bad 70 percent of the time. An objective media would then run gun stories at that ratio – 70 bad gun stories for every 30 good gun stories.

When did you last see a good gun story in the major media?

In judging media bias, there is also the problem of the alpha intellectual – that person who feels he is immune from the fundamental trap because he is more informed or better educated than most. Such a person is actually more likely to fall prey to the fundamental trap, because he self-selects the media he trusts, largely based on what his trusted media sources tell him. In the case of the alpha intellectual, the fundamental trap becomes a circular trap.

Farmers and ranchers come up against media bias and alpha intellectuals ensnared in their own circular fundamental traps all the time. Many people believe that farmers and ranchers are destroying the ecology of the planet with toxic chemicals and pumping their livestock full of hormones and antibiotics, putting humanity at great risk in their greed for ever-increasing profit.

Farmers and ranchers know this isn’t so, but it is still the distorted world view peddled by the left-biased media.

So far, we’re looking at a pretty grim picture. Farmers and ranchers are only 1-2 percent of the population, yet they are beset from all sides by distorted stories and beliefs about the way they grow food. The other 98-99 percent of the population is in a similar boat, because they must eat to survive, have only the farmers and ranchers to rely upon to supply them with food, and are beset from all sides by distorted stories and beliefs about food production.

But there is some good news.

Despite the fact that the left-biased media has over time shifted common knowledge, it hasn’t shifted common sense. American consumers continue to eat every day, and neither they, nor the people around them, routinely succumb to deadly agricultural toxins, hormones or antibiotics. Their common sense tells them that while there may be merit in the stories they hear and see in the media, there’s nothing wrong with the food they eat.

There are also things we can do to protect ourselves from the brunt of left-biased media.

First of all, we can stop believing in the myth of the objective journalist. No such creature exists. Therefore, there will always be at least some bias in the news you consume.

Secondly, knowing that the news you see and hear is biased, do two things: Check facts and use your own common sense. If reality doesn’t jibe with reporting, go with reality every time.

Finally, farmers and ranchers can invite members of the media to visit their operations. This will take courage, but what worthy goal doesn’t require courage? Understand that even in rural America, only a vanishing few journalists have ever been on a farm or ranch. Be prepared to spend a lot of time answering basic questions – and give good answers! Give them a tractor ride. Let them bottle-feed an orphan calf. Feed them a home-cooked lunch. Explain a bit about your economic situation, about “land-rich and cash-poor.” And never, ever, pressure them about what they will ultimately write. Journalists get “spun” all the time, and most are sick of it.

In the meantime, continue to do your best to educate the “99 percenters” whenever you have the opportunity. Remember, we are very few, and the best intentions of the very many can easily put us out of business. And then none of us would have anything to eat.

The trouble with carbon

In this series we’ve been analyzing a scholarly paper, “Solutions for a Cultivated Planet,” published Oct. 20, 2011 in the journal Nature. Lead author Jonathan A. Foley and his 20 co-authors argue that agriculture is destroying the planet through land clearing, pollution, and the generation of greenhouse gases, and will be unable in the future to feed a growing human population. The work published by Foley et al. is probably well intentioned, but the conclusions drawn are off the mark, relying as they do on a series of flawed, albeit popular and politically correct, assumptions.

In reading the paper, one quickly realizes that far from offering workable solutions to the problems alleged, the authors champion a vague political approach, short on concrete ideas and long on the generality that “someone” should take care of things. While they don’t identify any specific person or institution, it seems clear they are advocating for some type of world authority to take charge of agriculture. As we pointed out in the initial installment of this series, all previous centralized approaches to agricultural production have failed miserably at the cost of millions of lives.

This time we’ll look at the problem Foley et al. see in agriculture’s production of “greenhouse gases,” particularly carbon dioxide (CO2) and methane (CH4), both usually lumped under the general category of “carbon.”

Foley et al. point to agriculture as the source of 12 percent of anthropogenic (man-made) greenhouse gas emissions.

Before we delve more deeply into greenhouse gases, we’ll first put that number in perspective (which Foley et al. do not). All greenhouse gases, including the most abundant, water vapor, make up about 2 percent of the Earth’s atmosphere. The rest is nitrogen, oxygen, and trace gases. CO2 and CH4 make up a bit more than 3.62 percent of all greenhouse gases (3.62 percent CO2 plus 0.0017 percent CH4, a total of 3.6217 percent). Agriculture accounts for 12 percent of this total, which works out to a fraction of about 0.000087 of the atmosphere – call it 0.0087 percent (a bit under nine one-thousandths of one percent), or about 90 parts per million.
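For readers who want to check the arithmetic, the chain of percentages above can be multiplied out in a few lines. The sketch below uses only the figures quoted in this article (2 percent, 3.6217 percent, and 12 percent); it is a back-of-envelope check, not an independent atmospheric measurement.

```python
# Multiply out the shares quoted in the text. All three inputs are the
# article's own round figures, not independently sourced measurements.
ghg_share_of_atmosphere = 0.02    # all greenhouse gases: about 2% of the atmosphere
co2_ch4_share_of_ghg = 0.036217   # CO2 (3.62%) plus CH4 (0.0017%) as a share of greenhouse gases
agriculture_share = 0.12          # agriculture's share of anthropogenic CO2/CH4

fraction = ghg_share_of_atmosphere * co2_ch4_share_of_ghg * agriculture_share

print(f"{fraction:.9f}")          # 0.000086921 (as a fraction of the atmosphere)
print(f"{fraction * 100:.4f}")    # 0.0087      (the same figure as a percentage)
print(f"{fraction * 1e6:.1f}")    # 86.9        (parts per million; rounds to ~90 ppm)
```

However one chooses to state it, the product of the three shares is the roughly 90 ppm figure used in the rest of this discussion.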

This is a small number. But small numbers can be important, so let’s look at the argument about the significance of man-made greenhouse gases.

Greenhouse gases trap heat in the atmosphere, helping to keep the surface and the troposphere (the lowest level of the atmosphere) at an overall average temperature of about 15 C (59 F.).  As the popular argument goes (used by Foley et al. and thousands of others), the atmosphere was in a delicate balance before man started producing greenhouse gases during the industrial revolution by burning first coal, then oil. These added gases will reach a “tipping” point when the carbon dioxide level reaches about 400 parts per million (ppm). The present level is around 380 ppm. Once the tipping point is reached, the atmosphere could become a runaway greenhouse, sending temperatures soaring out of control, melting the ice caps, raising the sea level, and then boiling the seas away. All life on the planet would come to an end. With today’s atmosphere only 20 parts per million away from the tipping point, and agriculture already producing 90 parts per million of carbon dioxide, how much longer can man continue to grow food for a growing population before global warming destroys life?

This was a very, very scary question for most people in the late 1980s and into the early 2000s, when a small but vocal group of politicians, self-appointed experts, and an agenda-driven media began sounding the alarm. Prompted by the summary report of the U.N. Intergovernmental Panel on Climate Change (I.P.C.C.), governments around the globe began spending billions of dollars to reduce man-made CO2 emissions. The U.S. Environmental Protection Agency (EPA) even labeled CO2 as a pollutant. Ethanol was touted as a safe alternative to gasoline, even though it actually releases more CO2 into the air than gasoline. A former Vice President of the United States produced a very popular movie and declared the science settled and the debate closed. There was even a decade-long flirtation with carbon sequestration and carbon credit trading on the Chicago Climate Exchange.

But there were a couple of problems with the story, regardless of how popular it was (and unfortunately, it remains popular to this day).

Firstly, a group of 43 paleoclimatologists (scientists who study past climate) began fiddling with the data they used in their climate models. After discarding some data which countered their carbon-driven warming hypothesis, and then emphasizing or adding weight to data which supported their hypothesis, they produced a now-famous “hockey stick” graph which showed global temperatures rising rapidly with the onset of the industrial revolution in the 18th century, in step with rising CO2 levels. The researchers asserted that the correlation of warming and elevated CO2 proved causation; that is, carbon was driving warming. This is the conclusion they reported to the I.P.C.C.

Unfortunately, to produce this hockey stick graph, they had to delete the Medieval Warm Period (MWP, 900 A.D. to 1300 A.D.) and the Little Ice Age (LIA, 1300 to 1850). Curiously, both the MWP and the LIA were present in the 1996 I.P.C.C. report while the hockey stick was absent. In the 2001 report the MWP and LIA were gone, but the hockey stick was there. Statisticians and other scientists caught on to the deception, and the 43 paleoclimatologists were severely taken to task. Later, through the “climategate” scandal, the group was found to have actively hidden and destroyed data, refused to work with other scientists, and attempted to have publication of papers countering their theory quashed. Not very scientific behavior at all.

Secondly, the Earth began cooling in about 1998, despite the fact that man-made greenhouse gas emissions were rising. Though some countries, like those in North America and Europe, were attempting to curtail their emissions, other countries like China and India were producing much more as they grew increasingly industrialized. Rather than a decrease in man-made greenhouse emissions, there was a small but significant net increase. At the same time, geological and biological processes continued to add CO2, CH4, and other greenhouse gases to the atmosphere at a faster rate than humans possibly could.

When the planet began to cool, and greenhouse gases continued to rise, the core group of paleoclimatologists, along with many politicians and the major media, did what Professor Ian Plimer, Australia’s best-known geologist and climatologist, called “moving the goalposts.” Instead of global warming, the threat was now climate change, but the culprit was still man-made carbon.

Proponents of the catastrophic anthropogenic climate change theory hold that an ill-defined “something bad” is going to happen if carbon levels keep rising. Many in the major media and famous folks from all walks of life (including a former Vice President) now claim that increased atmospheric carbon is to blame for hurricanes, tornadoes, heat waves, cold snaps, sea-level rise and fall, earthquakes, tsunamis, floods, droughts, and all manner of weather-related and non-weather-related phenomena.
Unfortunately for the proponents of this theory, there’s no evidence to bear this out, and plenty of evidence to refute it.

Although science has a very long way to go in understanding climate, the main climate drivers are well known. They include the sun, perturbations in the Earth’s orbit around the sun, cosmic rays, geological processes here on Earth such as plate tectonics, volcanoes and earthquakes, cloud cover, and cloud formation.

In general, the sun heats the planet to an average of about 15 C (59 F.). Part of the sunshine, or electromagnetic radiation, which strikes the Earth is immediately reflected back to space by clouds and ice, and to a lesser extent, by land and water. Part of the energy warms the land and the oceans, and some of this warmth, or heat, is radiated back toward space as infrared radiation. Temperature differentials in the oceans cause currents, which move warm water from the equator toward the high latitudes, where it cools and flows back toward the equator. The warming land and oceans also transfer heat to the air, causing air currents to do essentially the same thing.

Critically, though, CO2 and CH4 are able to hold and slowly re-radiate infrared radiation in the 14-16.5 micron band. Without this slow release of collected warmth from the atmosphere’s greenhouse gases, the planet’s average temperature would be about –3 C (26.6 F., below the freezing point of water). At present CO2 and CH4 levels, these two gases capture and re-radiate nearly all the infrared energy radiated from the surface in these wavelengths. Even doubling CO2/CH4 levels to around 800 ppm would capture only a tiny bit more, perhaps enough to cause a 0.5 C increase in global temperature.
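The temperature figures in this discussion are quoted in both Celsius and Fahrenheit. As a quick sanity check (the conversion formula is the standard one; the temperature values are the article’s), the quoted pairs can be verified in a couple of lines:

```python
def celsius_to_fahrenheit(c):
    # Standard conversion: F = C * 9/5 + 32
    return c * 9 / 5 + 32

# The two pairs quoted in the text.
print(celsius_to_fahrenheit(15))   # 59.0 (average surface temperature)
print(celsius_to_fahrenheit(-3))   # 26.6 (claimed average absent greenhouse gases)
```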

All greenhouse gases reradiate infrared radiation toward the surface, which serves to hold, but not increase, heat. They also reradiate much more heat away from the surface and toward space. The narrow wavelength band captured and reradiated by CO2 and CH4 is only a fraction of the heat radiated from the surface, however. Much more heat (in the 7.5-14 and 16.5+ micron bands) is radiated and transported by atmospheric convection high into the atmosphere, where it is radiated to space. Far from acting as atmospheric warming agents, greenhouse gases are actually part of a planetary cooling system. If the Earth had no water, which provides a feedback loop in the warming/cooling cycle and atmospheric turnover, there would indeed be runaway warming on Earth, just as there is on Venus, which does lack water.
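The claim that the 14-16.5 micron band carries only a fraction of the heat radiated from the surface can be roughly quantified. The sketch below numerically integrates the Planck blackbody curve for a 288 K (15 C) surface; treating the surface as an ideal blackbody is my simplifying assumption for the estimate, not a claim from this article.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(lam, temp):
    """Blackbody spectral radiance at wavelength lam (meters), temperature temp (kelvin)."""
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * temp)) - 1)

def integrate(lo, hi, temp, n=20000):
    """Midpoint-rule integral of the Planck curve between wavelengths lo and hi (meters)."""
    dl = (hi - lo) / n
    return sum(planck(lo + (i + 0.5) * dl, temp) for i in range(n)) * dl

band = integrate(14e-6, 16.5e-6, 288)    # the 14-16.5 micron CO2/CH4 band
total = integrate(0.5e-6, 200e-6, 288)   # wide range, capturing nearly all emission
print(f"{band / total:.3f}")             # roughly 0.11: about a ninth of total emission
```

With these inputs the band fraction comes out near 0.11, consistent with the text’s point that the band in question is only a fraction of total surface emission.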

Leaving aside CO2’s effect on the atmosphere, which is indeed important but certainly not a climate driver, CO2 is vital for life as we know it to exist on this planet. All plants need CO2 for photosynthesis, the process in which plants take energy from the sun and combine CO2, water and nutrients to grow. If plants had no CO2 to “breathe,” they would fare no better than an animal without oxygen. In fact, the oxygen we animals cannot do without is released by plants as a waste product of photosynthesis, and plants use the CO2 we release as metabolic waste when we exhale.

One final problem with carbon for this installment. The climate alarmists have taken to attacking carbon when they really mean CO2. Perhaps it’s a trivial complaint, particularly since this crowd has rarely, if ever, said clearly what it means.

Nevertheless, carbon is absolutely essential to life as we know it. Carbon is the scaffolding for the other molecules that make up living organisms. Carbon is the backbone of life, providing the critical linkages in cells, tissues, proteins, amino acids, sugars, starch – and the list goes on.

There’s nothing wrong, and everything right, with being concerned about the ecology of our planet. Earth is the only home we have. Unfortunately, and sadly, a lot of people have seen fit to play fast and loose with the rules of science and journalism, and to gain in money or prestige while preying on the fears of their fellow human beings. And while doing so, they make a mockery of the fascinating and wonderful world we inhabit.
So the next time someone accuses you, or someone else (the accuser seldom feels responsible in any way), of destroying the planet with carbon, be a little skeptical and look at the world the way it really is, and not as they tell you it is.

Next time: Media bias, and why you seldom get the whole story.

Climate, weather and history

The next few posts are going to be recycled from my other blog. I don't have time to maintain two blogs, and I don't have the time, or really, the interest, to go dashing about chasing new readers. Serious bloggers are probably gasping at this point.

I'm serious about what I write about, and I want to share it with interested readers. But I'm 51 years old, I'm in a most-likely losing battle with B-cell lymphoma, and I don't feel like I have unlimited time to become a perfect, trendy blogger.

So what you see here is what you get here. To borrow a phrase from my nephew, "'s how I roll." If you haven't been here before, I'm a cow-calf operator from Nebraska -- that is, I'm an agricultural producer.

Climate hasn't been in the major media, at least not in a big way, for a while now. I think it's critical to understand that there's a difference between climate reality and climate politics. This, and the next few posts, represent my attempt to illustrate climate reality. I refuse to argue with cliches or sloganeering. However, feel free to comment, and if you have a reasoned and objective argument to make, perhaps we can discuss it.

In a scientific paper published Oct. 20 in Nature, ‘Solutions for a Cultivated Planet,’ researchers laid out an argument that agriculture is destroying the planet through land clearing, pollution, and the generation of greenhouse gases, and will therefore be unable to feed a growing human population. In this series we’ve looked at a number of the assumptions the research team based their conclusions on, and offered a number of science-based refutations.

Like any human enterprise, modern production agriculture is not perfect. Unlike some (perhaps many) human enterprises, modern agriculture, with the invaluable assistance of land-grant university and food-industry research, has vigorously sought solutions whenever problems have surfaced. The proof of this is in the unprecedented quality, quantity and safety of the food supply, as well as in the vast improvements in conservation and ecology practiced by farmers and ranchers in the U.S., and increasingly, in other countries.
Perhaps part of the reason agriculture has been under attack over the last few decades is fear. Most people in the developed world (about 99 percent in this country) are at least two generations removed from working with or in agriculture. Yet it is on agriculture that they rely utterly for their sustenance. Biased and sensationalistic news reporting of flawed science tends to enhance this fear. We’ll look at biased and misguided news about agriculture in a future article.

This time we’ll define climate and weather, take a close look at climate history around the globe, and explore some ideas about modern agriculture’s ability to feed the world under a warming climate and under a cooling climate.

One of the difficulties many of us have in thinking about and trying to understand the ongoing climate debate is that we tend to think that present conditions represent a sharp change from those of the near past, and that conditions of the near past were ideal and reasonably permanent. For instance, the American history taught nearly universally in this country is that Europeans essentially invaded the Americas, displaced the indigenous population, and denuded the landscape to make way for farms, destroying “old growth” forests and driving many native plants and animals to extinction.

This narrative is in some sense true, but heavily pejorative, and lacking essential context. Human populations have waxed and waned for more than 100,000 years, and whenever populations have grown, humans have pushed out into new territories. This scenario has played out countless times in every corner of the globe – including the pre-European Americas, where several civilizations were built by Asian and Pacific settlers, crested in good times, and eventually succumbed to the encroachment of new settlers arriving from the west.
Changing the landscape by farming is very different from “denuding” the landscape. Farming did bring environmental challenges, but most of these have been more than adequately addressed and those remaining are being mastered. It’s unrealistic in the extreme to think that the U.S. population could have grown to 311 million without a food supply for those millions. Growing food alters the landscape, but it does not destroy the land or the ecology of the region, or of the planet. The planet is dynamic; landscapes and bio-populations change over time naturally. Humans are a natural bio-population and alter the landscape, as do beavers with their dams and termites with their mounds and their CO2 and CH4 (methane) production, along with a host of other organisms such as bacteria and fungi. In fact, no natural bio-population leaves the landscape untouched; rather, they live in symbiosis with the land.

What every crest and trough of human population, and every expansion and contraction of humankind into and out of territories, has in common is a very tight correlation with climate change. Most of us tend to think the climate is essentially stable for very long periods of time. Most of us have lived through year after year of climate conditions that seem unendingly similar and ordinary. Remember, though, our lives are short in geologic time scales. We’ve all heard of ice ages and tropical ages, extremes which happened long ago and may happen again in the distant future, but we imagine that they will never visit our present world or the world of the foreseeable future.

The best scientific evidence we have shows that ice ages, or glaciations, and warm periods, or interglacials, have been common on Earth for at least 2.67 million years. Science has a hard time finding climate evidence earlier than that due to the ever-changing geology of the planet. In the last 730,000 years, there is solid evidence of 10 glacial periods separated by interglacial periods. We are presently in an interglacial period which began about 13,000 years ago.

The causes of these warming and cooling cycles are immensely complex, but major factors appear to be solar activity, variations in the Earth’s orbit, nuclear and thermal activity deep within the planet, and the active geology of the planet’s crust (plate tectonics, etc.). The atmosphere and the oceans certainly contribute to climate, but react to climate driving forces. They do not in themselves drive climate. It is fair to say that Sun, orbit, internal structure and crust drive climate, while the oceans and atmosphere make weather.

Over the last 20-25 years, a small but vocal group of self-appointed experts, politicians, and news reporters have continued to sound the “human-caused climate disaster” alarm, claiming that man-made carbon dioxide will set off an irreversible “runaway greenhouse effect.” Their science, which relies heavily on computer modeling, makes little if any sense. Their predictions stubbornly refuse to come true. After not falling for more than 30 years, the sky continues not to fall. Yet somehow, governments continue to pour hundreds of billions of dollars into an effort to mitigate human influence on the climate. This is perhaps the epitome of hubris: that some believe man to be more powerful than the Earth or the Sun.

We humans tend to think we perform mighty deeds, and some of them are indeed remarkable, such as the ability of a tiny fraction of the population to feed all mankind. Nevertheless, our power and our abilities are as nothing when it comes to changing the climate.

But that doesn’t mean that climate won’t change. If history and geology tell us anything, it’s that climate often changes quickly – over only a few years – and those changes profoundly impact all forms of life on the planet.

Climate and weather are not the same thing. Weather is the state of the atmosphere at a particular place and time, hence the old saying that “all weather is local.” Weather can mean a blizzard here, sunshine and warmth there, a hurricane on one coast and cool rain on another. Climate, on the other hand, is the set of conditions that prevails over time. For instance, the Earth is presently in the warm period that followed the Pleistocene Ice Age, which ended about 13,000 years ago. Since that time the climate has been generally warming, though there have been prolonged periods which were sharply cooler and periods which were considerably warmer.

In our everyday perspective, 13,000 years seems a very long time ago. In fact, little recorded human history exists from before that time. Yet the Pleistocene Ice Age lasted for nearly 100,000 years. During that time, Homo sapiens – modern man – walked the Earth, though usually in the more temperate regions of the planet and along the edges of the glaciers. The earliest known recorded human history exists as cave paintings dating back about 35,000 years. Before that, little more than bones and primitive tools exist to tell the story of early man.

In the 13,000 years since the last ice age, there have been periods of warming and periods of cooling – times during which the planet was much warmer than it is today and life, including human life, rapidly expanded – and times when the planet was much cooler than it is today and when cold, disease, famine, starvation and extinction events drove life savagely back. Using a plethora of techniques, including ice-core sampling, geologic study, advanced chemistry and physics, scientists have identified 14 such periods.
The last warming, the Medieval Warm Period, lasted from about 900 A.D. to 1300 A.D. and brought the world out of the Dark Ages. Global temperature averages were about 1.5-2.0 degrees Celsius (C) warmer than they are today. Human populations more than doubled, food was plentiful, wealth grew. This was the time in which most cathedrals were built in Europe and the Vikings colonized Greenland and Vinland, present-day Newfoundland.

The Little Ice Age began about 1300 A.D. and lasted until 1850 A.D. Though this was a cooling period, Europeans settled the Americas and the United States was born. Global temperatures fell overall, to an average of about 3.0 degrees C colder than today. While there were exceedingly harsh winters, there were also hot summers and drought. Though America was growing, it grew at a slow rate as outbreaks of famine and disease took their toll. The “starving time” of 1609-1610, which killed more than 80 percent of the residents of Jamestown Colony, coincided with one of the coldest winters of the period. The Little Ice Age was a global ice age, and despite exploration of the New World and other reaches of the globe, it was an exceedingly tough time to be alive. Global human populations fell off, famine and disease were common, life spans fell and infant mortality soared.

In 1850 the Earth entered what is today called the Late Twentieth Century Warming. As with most warming and cooling periods, this period was characterized by fluctuations in warming and cooling, rather than a steady rise in global temperatures. There was warming from 1850-1940, cooling from 1940-1976, and warming from 1976 to 1998. Nevertheless, it was on balance a warming period. During this time there was remarkable growth in the human population and wealth, relatively little famine, a marked decrease in disease, amazing gains in technology, and unfortunately, remarkable gains in the lethality of warfare. Climate-wise, this period was on balance a time of very easy living.

In 1998 the global climate began to cool once again, and has fallen about 0.1-0.3 degrees C over the past dozen or so years. Thus far the cooling has caused no major disruption for life on the planet. There is no way to know whether the present cooling will continue or whether it is merely a blip in a generally warming interglacial period. Only time will tell.

As climate changes, will modern agriculture be able to feed a growing population? This is an interesting question.

If the present cooling trend eases and global climate continues to warm or stay stable for several thousand years, the answer is almost certainly yes. The real threat most humans will face will be war, rather than hunger.

It is possible that warfare or even over-zealous governmental control could spell the end of modern agriculture, at which point starvation would ensue. But it’s at least as likely that agriculture will continue to imperfectly improve, and that the human population will reach sustainable equilibrium.

But if the climate continues to cool, and Earth begins once again to slide into a glacial period, the answer is no. Crops, livestock, and humans need sunshine, liquid water and micronutrients to grow. Although some equatorial populations would survive, just as in the last ice age, technology and most of the human population would not.

The Earth, the climate – they will do what they are going to do regardless of our wishes. For now, perhaps it’s better to set aside silly notions about our capacity to destroy the planet and concentrate on being better neighbors, and continue to improve the lot of both humanity and the planet.

Next time: The trouble with carbon