Climate Models Confirm More Moisture in Atmosphere Attributed to Humans

When it comes to using climate models to assess the causes of the increase in atmospheric moisture, it doesn’t much matter whether one model is better than another.

They all come to the same conclusion: Humans are warming the planet, and this warming is increasing the amount of water vapor in the atmosphere.

In new research appearing in the Aug. 10 online issue of the Proceedings of the National Academy of Sciences, Lawrence Livermore National Laboratory scientists and a group of international researchers found that model quality does not affect the ability to identify human effects on atmospheric water vapor.

[Image: Total amount of atmospheric water vapor over the oceans on July 4, 2009, from operational weather forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF).]

“Climate model quality didn’t make much of a difference,” said Benjamin Santer, lead author from LLNL’s Program for Climate Model Diagnosis and Intercomparison. “Even with the computer models that performed relatively poorly, we could still identify a human effect on climate. It was a bit surprising. The physics that drive changes in water vapor are very simple and are reasonably well portrayed in all climate models, bad or good.”

The atmosphere’s water vapor content has increased by about 0.4 kilograms per square meter per decade since 1988, and natural variability alone can’t explain this moisture change, according to Santer. “The most plausible explanation is that it’s due to human-caused increases in greenhouse gases,” he said.

More water vapor – which is itself a greenhouse gas – amplifies the warming effect of increased atmospheric levels of carbon dioxide.

Previous LLNL research had shown that human-induced warming of the planet has a pronounced effect on the atmosphere’s total moisture content. In that study, the researchers had used 22 different computer models to identify a human “fingerprint” pattern in satellite measurements of water vapor changes. Each model contributed equally in the fingerprint analysis. “It was a true model democracy,” Santer said. “One model, one vote.”

But in the recent study, the scientists first took each model and tested it individually, calculating 70 different measures of model performance. These “metrics” provided insights into how well the models simulated today’s average climate and its seasonal changes, as well as on the size and geographical patterns of climate variability.

This information was used to divide the original 22 models into various sets of “top ten” and “bottom ten” models. “When we tried to come up with a David Letterman type ‘top ten’ list of models,” Santer said, “we found that it’s extremely difficult to do this in practice, because each model has its own individual strengths and weaknesses.”

Then the group repeated their fingerprint analysis, but now using only “top ten” or “bottom ten” models rather than the full 22 models. They did this more than 100 times, grading and ranking the models in many different ways. In every case, a water vapor fingerprint arising from human influences could be clearly identified in the satellite data.
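The ranking-and-subsetting procedure can be pictured with a short sketch. The Python snippet below is illustrative only, not the study’s code: the metric scores, weighting schemes, and the detection step are placeholder assumptions standing in for the 70 performance metrics and the formal fingerprint test described above.

```python
# A minimal sketch of the workflow described above: score each of 22 models
# against many skill metrics, split them into "top ten" and "bottom ten"
# subsets under different ranking schemes, and repeat a (placeholder)
# fingerprint test on each subset. All values here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_metrics = 22, 70                 # 22 models, 70 skill metrics
skill = rng.random((n_models, n_metrics))    # stand-in for real metric scores

def rank_models(weights):
    """Rank models by a weighted composite of their metric scores (best first)."""
    composite = skill @ weights / weights.sum()
    return np.argsort(composite)[::-1]

def fingerprint_detected(model_subset):
    """Placeholder for projecting satellite data onto the subset's fingerprint
    and testing the result against natural variability."""
    return True  # in the study, detection succeeded for every subset tested

results = []
for trial in range(100):                     # many different ranking schemes
    weights = rng.random(n_metrics)
    order = rank_models(weights)
    top_ten, bottom_ten = order[:10], order[-10:]
    results.append((fingerprint_detected(top_ten),
                    fingerprint_detected(bottom_ten)))

print(f"fingerprint detected in all {len(results)} trials:",
      all(t and b for t, b in results))
```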

“One criticism of our first study was that we were only able to find a human fingerprint because we included inferior models in our analysis,” said Karl Taylor, another LLNL co-author. “We’ve now shown that whether we use the best or the worst models makes little difference to our ability to identify a human effect on water vapor.”

This new study links LLNL’s “fingerprint” research with its long-standing work in assessing climate model quality. It tackles the general question of how to make best use of the information from a large collection of models, which often perform very differently in reproducing key aspects of present-day climate. This question is not only relevant for “fingerprint” studies of the causes of recent climate change. It is also important because different climate models show different levels of future warming. Scientists and policymakers are now asking whether we should use model quality information to weight these different model projections of future climate change.

“The issue of how we are going to deal with models of very different quality will probably become much more important in the next few years, when we look at the wide range of models that are going to be used in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change,” Santer said.

Other LLNL researchers include Karl Taylor, Peter Gleckler, Celine Bonfils, and Steve Klein. Other scientists contributing to the report include Tim Barnett and David Pierce from the Scripps Institution of Oceanography; Tom Wigley of the National Center for Atmospheric Research; Carl Mears and Frank Wentz of Remote Sensing Systems; Wolfgang Brüggemann of the Universität Hamburg; Nathan Gillett of the Canadian Centre for Climate Modelling and Analysis; Susan Solomon of the National Oceanic and Atmospheric Administration; Peter Stott of the Hadley Centre; and Mike Wehner of Lawrence Berkeley National Laboratory.

Founded in 1952, Lawrence Livermore National Laboratory is a national security laboratory, with a mission to ensure national security and apply science and technology to the important issues of our time. Lawrence Livermore National Laboratory is managed by Lawrence Livermore National Security, LLC for the U.S. Department of Energy’s National Nuclear Security Administration.

[Source: Lawrence Livermore National Laboratory news release]

Web GIS in Practice VII: Stereoscopic 3-D Solutions for Online Maps and Virtual Globes

International Journal of Health Geographics 2009, 8:59

Maged N. Kamel Boulos, Larry R. Robinson

Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the ‘third dimension’ or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual cues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and more realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in “true 3D”, with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey’s Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.
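As a rough illustration of the underlying idea (not code from the paper), the sketch below offsets two virtual camera positions by roughly 6.5 cm and combines a left/right image pair into a red-cyan anaglyph, one common low-cost way of delivering a slightly different image to each eye on an ordinary screen. The scene arrays, function names, and camera layout are hypothetical stand-ins for whatever a map or globe renderer would actually produce.

```python
# Illustrative sketch: two horizontally offset viewpoints roughly 6.5 cm
# apart, with the resulting left/right views merged into a red-cyan anaglyph.
import numpy as np

EYE_SEPARATION_M = 0.065  # approximate human interpupillary distance

def camera_positions(scene_centre, eye_separation=EYE_SEPARATION_M):
    """Return left/right camera positions offset along the x-axis."""
    cx, cy, cz = scene_centre
    half = eye_separation / 2.0
    return (cx - half, cy, cz), (cx + half, cy, cz)

def anaglyph(left_rgb, right_rgb):
    """Red channel from the left view, green/blue channels from the right view."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Stand-in renders: in practice these would come from the map/globe renderer.
left = np.random.default_rng(1).random((240, 320, 3))
right = np.roll(left, shift=4, axis=1)   # crude horizontal disparity
print(camera_positions((0.0, 0.0, 1.5)))
print(anaglyph(left, right).shape)
```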

Map of the Day: Tulsa Existing Commuter Shed Statistics

…from the ESRI Map Book, Volume 24


“In analyzing the potential for passenger rail service, INCOG, the metropolitan planning organization for the Tulsa region, compiled data regarding commuter patterns and employment. This map shows the number of employed residents within suburban communities (3-mile radius) and potential Transit Oriented Development (1-mile radius) ‘selection areas’ along existing rail corridors with potential as high-capacity transit lines. In addition, this map illustrates the number of commuters residing within the selection areas who commute to the Central Business District.

“Courtesy of INCOG.”
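For readers curious how such a commuter-shed count might be assembled in a GIS, the sketch below is a simplified, hypothetical workflow, not INCOG’s actual process: it buffers rail stations by 1 mile to form “selection area” polygons and counts commuter points falling inside each one. The coordinates, layer names, and coordinate reference system are made up for illustration.

```python
# Hypothetical sketch of a commuter-shed count: buffer stations by 1 mile
# and tally commuter residence points inside each selection area.
import geopandas as gpd
from shapely.geometry import Point

MILE = 1609.34  # metres, assuming a metric projected CRS

# Hypothetical rail stations along an existing corridor
stations = gpd.GeoDataFrame(
    {"station": ["A", "B"]},
    geometry=[Point(0, 0), Point(5000, 0)], crs="EPSG:32615")

# Hypothetical commuter residence points (one row per employed resident)
commuters = gpd.GeoDataFrame(
    {"commutes_to_cbd": [True, False, True]},
    geometry=[Point(500, 200), Point(3000, 3000), Point(5200, -300)],
    crs="EPSG:32615")

# 1-mile Transit Oriented Development selection areas around each station
tod_areas = stations.copy()
tod_areas["geometry"] = stations.geometry.buffer(1 * MILE)

# Count residents (and CBD-bound commuters) inside each selection area
joined = gpd.sjoin(commuters, tod_areas, predicate="within")
summary = joined.groupby("station").agg(
    residents=("commutes_to_cbd", "size"),
    cbd_commuters=("commutes_to_cbd", "sum"))
print(summary)
```

The same pattern, with a 3-mile buffer around community centroids instead of stations, would produce the suburban-community counts the map describes.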