Tuesday, September 2, 2014

New paper finds another potential solar amplification mechanism: dust!

A new paper published in Chemical Geology finds the Iberian Peninsula was 'rather more humid' than today from 10,000-6,000 years ago, including during the Holocene Climate Optimum. The authors also find "Saharan dust [over Western Europe] is driven by solar forcing and 1.5–2.0 thousand-year cycles," and this dust affects regional climate via aerosol effects in the atmosphere.

According to the paper:

Effective humidity reconstruction indicates wetter conditions during the early Holocene and progressive aridification during middle–late Holocene time, boosting abrupt changes in the lacustrine system. Cyclostratigraphic analyses and transport mechanisms both point to solar irradiance and aridity as major triggering factors for dust supply over Western Europe during the Holocene.
Data from the paper show the Iberian Peninsula was much more humid during the early-to-mid Holocene [the Holocene spans roughly the past 11,000 years], in good agreement with solar insolation; per the Clausius–Clapeyron relation, warmer air can hold more water vapor. The data also show humidity decreased to the lowest levels of the Holocene during the Little Ice Age and has recovered slightly through the end of the record in 2006.
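As a rough illustration of the Clausius–Clapeyron point (my own sketch, not from the paper), the saturation vapor pressure of water, and hence the moisture-holding capacity of air, rises by roughly 6–7% per °C of warming. A minimal example using the common Bolton (1980) approximation:

```python
# Minimal sketch (not from the paper): saturation vapor pressure vs. temperature,
# using the Bolton (1980) approximation e_s = 6.112 * exp(17.67*T / (T + 243.5)) hPa,
# with T in degrees Celsius. Illustrates that warmer air can hold more water vapor.
import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Approximate saturation vapor pressure (hPa) over liquid water."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

for t in (0.0, 10.0, 20.0, 30.0):
    es0 = saturation_vapor_pressure_hpa(t)
    es1 = saturation_vapor_pressure_hpa(t + 1.0)
    print(f"{t:4.1f} C: e_s = {es0:6.2f} hPa, +1 C raises it by {100.0 * (es1 - es0) / es0:4.1f}%")
```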

Thus, the data suggests the Sun drives humidity/aridity and dust transport, both of which have large effects upon climate and may add to many other solar amplification mechanisms described in the scientific literature. 

Bottom graph shows the Iberian Peninsula was much more humid during the early-mid Holocene [past ~11,000 years] in good agreement with the solar insolation. The data shows humidity decreased to the lowest levels of the Holocene during the Little Ice Age [LIA] and has since recovered slightly to the end of the record in 2006 [at left side of graph]
Note: Another new paper claims dust from distant deserts can increase atmospheric heating by a factor of four.

Highlights

We reconstructed the Saharan dust input over Western Europe during the Holocene.
Saharan dust is driven by solar forcing and 1.5–2.0 ky [thousand year] cycles.
We described biogeochemical impact of Saharan dust in lacustrine systems.
Lacustrine evolution has been reconstructed and compared with paleo-archives.
We speculate on meridional dust transport mechanisms.

Abstract

Saharan dust inputs affect present day ecosystems and biogeochemical cycles at a global scale. Previous Saharan dust input reconstructions have been mainly based on marine records from the African margin, nevertheless dust reaching western-central Europe is mainly transported by high-altitude atmospheric currents and requires high altitude records for its reconstruction. The organic and inorganic geochemical study of sediments from a southern Iberia alpine lacustrine record has provided an exceptional reconstruction of Saharan dust impact and regional climatic variations during the Holocene. After the last deglaciation, results indicate that Saharan dust reached Western Europe in a stepwise fashion from 7.0 to 6.0 [thousand years before the present] and increased since then until present, promoting major geochemical changes in the lacustrine system. Effective humidity reconstruction indicates wetter conditions during the early Holocene and progressive aridification during middle–late Holocene time, boosting abrupt changes in the lacustrine system. Cyclostratigraphic analyses and transport mechanisms both point to solar irradiance and aridity as major triggering factors for dust supply over Western Europe during the Holocene.
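For readers unfamiliar with the "cyclostratigraphic analyses" mentioned in the abstract and highlights: millennial cycles such as the reported 1.5–2.0 thousand-year periodicity are typically identified from a spectral analysis (periodogram) of the proxy time series. A purely hypothetical sketch with synthetic data, not the paper's record:

```python
# Hypothetical sketch of a cyclostratigraphic-style spectral analysis.
# Synthetic proxy record (NOT the paper's data): a 1,800-year cycle plus noise,
# sampled every 50 years over 11,000 years; a simple periodogram recovers the cycle.
import numpy as np

dt = 50.0                                   # sample spacing, years
t = np.arange(0.0, 11000.0, dt)             # age axis, years BP
rng = np.random.default_rng(0)
proxy = np.sin(2 * np.pi * t / 1800.0) + 0.5 * rng.standard_normal(t.size)

proxy = proxy - proxy.mean()                # remove the mean before the FFT
power = np.abs(np.fft.rfft(proxy)) ** 2     # simple periodogram
freqs = np.fft.rfftfreq(t.size, d=dt)       # cycles per year

dominant = freqs[1:][np.argmax(power[1:])]  # skip the zero frequency
print(f"Dominant period ~ {1.0 / dominant:.0f} years")  # close to the 1,800-year input cycle
```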

Scientists obtain new data on the weather 10,000 years ago from sediments at the bottom of a lake in Sierra Nevada

Date: September 2, 2014
Researchers at the University of Granada participate in an international project which has revealed that during the early phase of the Holocene (10,000–6,000 years ago) the climate in the Iberian Peninsula was rather more humid than it currently is.

Scientists have found evidence of atmospheric dust from the Sahara in the depths of the Rio Seco lake, 3,020 meters above sea level, accumulated over the last 11,000 years.

A research project involving the University of Granada has revealed new data on the climate change that took place in the Iberian Peninsula around the mid Holocene (around 6,000 years ago), when the amount of atmospheric dust coming from the Sahara increased. The data come from a study of the sediments found in an alpine lake in Sierra Nevada (Granada).

This study, published in the journal Chemical Geology, is based on the sedimentation of atmospheric dust from the Sahara, a very frequent phenomenon in the south of the Iberian Peninsula. The phenomenon is easy to observe today, for instance when a thin layer of red dust occasionally settles on vehicles.

Scientists have studied an alpine lake in Sierra Nevada, 3,020 metres above sea level, called Rio Seco lake. They collected samples from sediments 1.5 metres deep, which represent approximately the last 11,000 years (a period known as the Holocene), and they found, among other paleoclimate indicators, evidence of atmospheric dust coming from the Sahara. According to one of the researchers in this study, Antonio García-Alix Daroca, from the University of Granada, "the sedimentation of this atmospheric dust over the course of the Holocene has affected the vital cycles of the lakes in Sierra Nevada, since such dust contains a variety of nutrients and/or minerals which do not abound at such heights and which are required by certain organisms which dwell there."

More atmospheric dust from the Sahara

This study has also revealed the existence of a relatively humid period during the early phase of the Holocene (approximately 10,000 to 6,000 years ago). The end of that period marked the onset of an aridification trend which has lasted to the present day and which has coincided with an increase in the deposition of atmospheric dust in the south of the Iberian Peninsula as a result of African dust storms.

"We have also detected certain climate cycles ultimately related to solar causes or the North Atlantic Oscillacion (NAO)," according to García-Alix. "Since we do not have direct indicators of these climate and environmental changes, such as humidity and temperature data, in order to conduct this research we have resorted to indirect indicators, such as fossil polen, carbons and organic and inorganic geochemistry within the sediments."

This research has been conducted as part of several projects involving scientists at the University of Granada, the Andalusian Institute of Earth Sciences (CSIC-UGR), the University of Murcia, the University of Glasgow, and Northern Arizona University.

Story Source: The above story is based on materials provided by University of Granada. Note: Materials may be edited for content and length.

Journal Reference:
F.J. Jiménez-Espejo, A. García-Alix, G. Jiménez-Moreno, M. Rodrigo-Gámiz, R.S. Anderson, F.J. Rodríguez-Tovar, F. Martínez-Ruiz, Santiago Giralt, A. Delgado Huertas, E. Pardo-Igúzquiza. Saharan aeolian input and effective humidity variations over western Europe during the Holocene from a high altitude record. Chemical Geology, 2014; 374–375: 1. DOI: 10.1016/j.chemgeo.2014.03.001

New paper explains why a new approach to climate modeling is necessary

A new paper by an international team of climate scientists explains why conventional climate models will continue to be unable to simulate the most essential aspects of climate such as convection, clouds, gravity waves, atmospheric circulation, ocean oscillations, etc. "for the foreseeable future" and "the fact that according to the last two assessment reports of the IPCC the uncertainty in climate predictions and projections has not decreased may be a sign that we might be reaching the limit of climate predictability, which is the result of the intrinsically nonlinear character of the climate system (as first suggested by Lorenz [father of chaos theory])."

The authors note that to properly simulate these features with the current crop of numerical climate models requires "extremely high resolutions" of 16 kilometers or less, compared with the much coarser grid resolutions of state-of-the-art climate models, which range from 50 to 100 kilometers up to 10,000 kilometers. For example, prior work has demonstrated that proper simulation of convection requires model resolutions of 1–2 km, up to 2 orders of magnitude [100X] higher resolution than today's fastest supercomputer models are capable of attaining. The same is true for proper simulation of clouds, the Earth's sunshade.
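To put those resolution figures in perspective, here is a back-of-the-envelope sketch (my own estimate, not from the paper) of how computational cost grows as the horizontal grid is refined, assuming cost scales roughly with the cube of the refinement factor (two horizontal dimensions plus a proportionally shorter time step); extra vertical levels and data handling would push real costs even higher:

```python
# Back-of-the-envelope sketch (not from the paper): relative computational cost of
# refining a climate model's horizontal grid, assuming cost ~ (dx_old/dx_new)^3
# (two horizontal dimensions plus a shorter time step via the CFL limit).
def relative_cost(dx_old_km: float, dx_new_km: float) -> float:
    return (dx_old_km / dx_new_km) ** 3

baseline = 100.0  # km, a typical current global-model grid spacing
for target in (16.0, 2.0, 1.0):
    print(f"{baseline:.0f} km -> {target:.0f} km grid: "
          f"~{relative_cost(baseline, target):,.0f}x the computation")
```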

To try to get around this limitation, climate models consist almost entirely of "parameterizations," which is a fancy word for fudge factors. As climate scientist Dr. Roger Pielke Sr. pointed out in a recent comment, and contrary to popular belief, climate models are not based on "basic physics" and are almost entirely comprised of parameterizations/fudge factors:



Thus, according to the authors, an entirely different approach to climate modeling is needed:
"there is a clear need for systematic stochastic approaches in weather and climate modelling. In this review we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspectives. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models."
"The last few decades have seen a considerable increase in computing power which allows the simulation of numerical weather and climate prediction models with ever higher resolution and the inclusion of ever more physical processes and climate components (e.g. cryosphere, chemistry). Despite this increase in computer power many important physical processes (e.g. tropical convection, gravity wave drag, clouds) are still not or only partially resolved in these numerical models. Despite the projected exponential increase in computer power these processes will not be explicitly resolved in numerical weather and climate models in the foreseeable future. For instance, Dawson et al. have demonstrated using the ECMWF integrated forecast system that extremely high resolutions (which corresponds to a grid spacing of about 16km) are required to accurately simulate the observed Northern hemispheric circulation regime structure. This resolution, typical for limited area weather and climate models used for short term prediction, remains unfeasible for the current generation of high resolution global climate models due to computational and data storage requirements. Hence, these missing processes need to be parameterized, i.e. they need to be represented in terms of resolved processes and scales 153. This representation is important because small-scale (unresolved) features can impact the larger (resolved) scales 84,162 and lead to error growth, uncertainty and biases. 
At present, these parameterizations [fudge factors] are typically deterministic, relating the resolved state of the model to a unique tendency representing the integrated effect of the unresolved processes. These “bulk parameterizations” are based on the notion that the properties of the unresolved subgrid-scales are determined by the large-scales. However, studies have shown that resolved states are associated with many possible unresolved states [22,144,167]. This calls for stochastic methods for numerical weather and climate prediction which potentially allow a proper representation of the uncertainties, a reduction of systematic biases and improved representation of long-term climate variability. Furthermore, while current deterministic parameterization schemes are inconsistent with the observed power-law scaling of the energy spectrum [5,142], new statistical dynamical approaches that are underpinned by exact stochastic model representations have emerged that overcome this limitation. The observed power spectrum structure is caused by cascade processes. Recent theoretical studies suggest that these cascade processes can be best represented by a stochastic non-Markovian Ansatz. Non-Markovian terms are necessary to model memory effects due to model reduction [19]. It means that in order to make skillful predictions the model has to take into account also past states and not only the current state (as for a Markov process)."
"To some extent, [conventional climate model] numerical simulations come to the rescue, by allowing us to perform virtual experiments. However, the grid spacing in climate models is orders of magnitude larger than the smallest energized scales in the atmosphere and ocean, introducing biases. 
This need [for stochastic modeling] arises since even state-of-the-art weather and climate models cannot resolve all necessary processes and scales. Stochastic parameterizations have been shown to provide more skillful weather forecasts than traditional ensemble prediction methods, at least on timescales where verification data exists. In addition, they have been shown to reduce longstanding climate biases, which play an important role especially for climate and climate change predictions.
The fact that according to the last two assessment reports (AR) of the IPCC (AR4 and AR5) the uncertainty in climate predictions and projections has not decreased may be a sign that we might be reaching the limit of climate predictability, which is the result of the intrinsically nonlinear character of the climate system (as first suggested by Lorenz [father of chaos theory]).
Our hope is that basing stochastic-dynamic prediction on sound mathematical and statistical physics concepts will lead to substantial improvements, not only in our ability to accurately simulate weather and climate, but even more importantly to give proper estimates on the uncertainty of these predictions." 

Stochastic Climate Theory and Modelling

Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations as well as for model error representation, uncertainty quantification, data assimilation and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modelling. In this review we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspectives. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
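As a purely illustrative toy (my own sketch, not the authors' code), the difference between a deterministic "bulk" parameterization and a stochastic one can be seen in a scalar model: the unresolved tendency is represented either as a fixed function of the resolved state, or as that function plus red (AR(1)) noise, which lets an ensemble spread and thereby represents the many possible unresolved states discussed above.

```python
# Toy sketch (not the paper's model): deterministic vs. stochastic parameterization
# of an unresolved tendency in a scalar "climate" variable x.
# Deterministic bulk closure:  dx/dt = -x + g(x),        g(x) = 0.5*tanh(x)
# Stochastic closure:          dx/dt = -x + g(x) + eta,  eta = AR(1) red noise
import numpy as np

rng = np.random.default_rng(42)
dt, nsteps, nens = 0.01, 5000, 20          # time step, number of steps, ensemble size
phi, sigma = 0.99, 0.3                     # AR(1) memory and noise amplitude

def bulk(x):
    """Deterministic 'bulk' closure: unresolved tendency as a function of resolved x."""
    return 0.5 * np.tanh(x)

x_det = np.full(nens, 1.0)                 # deterministic ensemble (members stay identical)
x_sto = np.full(nens, 1.0)                 # stochastic ensemble (members spread out)
eta = np.zeros(nens)                       # AR(1) noise state per member

for _ in range(nsteps):
    x_det += dt * (-x_det + bulk(x_det))
    eta = phi * eta + sigma * np.sqrt(1 - phi**2) * rng.standard_normal(nens)
    x_sto += dt * (-x_sto + bulk(x_sto) + eta)

print("deterministic ensemble spread:", x_det.std())  # exactly 0: no uncertainty represented
print("stochastic ensemble spread:   ", x_sto.std())  # > 0: uncertainty is represented
```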


Related papers demonstrating stochastic modeling outperforms conventional numeric climate models:
 
Simple climate model outperforms IPCC models, demonstrates climate effect of CO2 is miniscule

Paper: Global 'warming since 1850 is mainly the result of natural climate variations'
 
New paper seeks a grand "unification" of "quite different model physics" of convection

New paper finds sea surface temperatures were controlled by natural 60-year climate cycle during 20th century

How the journal Nature plays fast & loose with the facts about the "pause" in global warming

New paper finds models have a high rate of 'false alarms' in predicting drought

Natural Climate Change has been Hiding in Plain Sight

New paper finds the data do not support the theory of man-made global warming [AGW]

New paper finds a non-linear relationship between sunspots and global temperatures

Climate activists propose genetic modification of humans to reduce harmless CO2 emissions

Newsflash from the German ShortNews site, which although it claims to be a serious news site, is now apparently in competition with The Onion:

07/16/14 ShortNews (Germany) (Google translation + light editing)

Genetically alter humans: The last alternative to climate protection

If it is so hard to stop climate change, why not change people instead, in order to at least slow global warming? Some of the ideas are radical, but the more mankind feels the consequences, the more necessary biomedical modifications will become. Eighteen percent of greenhouse gas emissions come from livestock, so why not induce an artificial intolerance to meat in humans? A genetic variation for size reduction would be another possibility: a reduction in height of 15 cm means a reduction in mass of 25 percent. Genetic alteration of the human eye to see better in the dark would reduce power consumption. Chlorophyll in the skin could cover part of the energy demand with sunlight rather than food. Shutting down cell activity at rest would also be an option.

H/T to Die Kalte Sonne: Climate activists propose genetic modification of humans as steps to reduce CO2 emissions

Monday, September 1, 2014

New paper finds the last interglacial was warmer than today, not simulated by climate models

A new paper published in Climate of the Past compares temperature reconstructions of the last interglacial period [131,000–114,000 years ago] to climate model simulations and finds climate models significantly underestimated global temperatures of the last interglacial by ~0.67°C on an annual basis and by ~1.1°C during the warmest month.

This implies that climate models are unable to fully simulate natural global warming, and the error of the underestimation is about the same as the ~0.7°C of global warming since the end of the Little Ice Age in ~1850. Thus, the possibility that present-day temperatures, like those of the last interglacial, are entirely the result of natural processes cannot be ruled out.

Further, during the last interglacial, Greenland temperatures were naturally up to 8°C higher and sea levels up to 29 feet higher than today. And, during another interglacial, all of Greenland and West Antarctica melted & sea levels were 79 feet higher. Since this low-CO2 global warming occurred entirely naturally, there is no evidence that global warming during the present interglacial is unnatural or man-made.

Temperatures during the last interglacial period ~120,000 years ago were higher than during the present interglacial period.



First column is the warmest single period simulated by climate models, second column is the warmest period from a compilation of temperature reconstructions.
Clim. Past, 10, 1633-1644, 2014
www.clim-past.net/10/1633/2014/
doi:10.5194/cp-10-1633-2014 


P. Bakker [1,2] and H. Renssen [1]

[1] Earth and Climate Cluster, Department of Earth Sciences, VU University Amsterdam, 1081HV Amsterdam, the Netherlands
[2] Now at: College of Earth, Ocean and Atmospheric Sciences, Oregon State University, Corvallis, Oregon, USA

Abstract. The timing of the last interglacial (LIG) thermal maximum across the globe remains to be precisely assessed. Because of difficulties in establishing a common temporal framework between records from different palaeoclimatic archives retrieved from various places around the globe, it has not yet been possible to reconstruct spatio-temporal variations in the occurrence of the maximum warmth across the globe. Instead, snapshot reconstructions of warmest LIG conditions have been presented, which have an underlying assumption that maximum warmth occurred synchronously everywhere. Although known to be an oversimplification, the impact of this assumption on temperature estimates has yet to be assessed. We use the LIG temperature evolutions simulated by nine different climate models to investigate whether the assumption of synchronicity results in a sizeable overestimation of the LIG thermal maximum. We find that for annual temperatures, the overestimation is small, strongly model-dependent (global mean 0.4 ± 0.3 °C) and cannot explain the recently published 0.67 °C difference between simulated and reconstructed annual mean temperatures during the LIG thermal maximum. However, if one takes into consideration that temperature proxies are possibly biased towards summer, the overestimation of the LIG thermal maximum based on warmest month temperatures is non-negligible with a global mean of 1.1 ± 0.4 °C. 


note this post was updated from an earlier version to correct the link to the HS post regarding sea levels during the last interglacial

The Economist claims heat sinks & bottom of ocean will become hotter than surface

An article published last week in The Economist proclaims, "the mystery of the pause in global warming may have been solved. The answer seems to lie at the bottom of the sea."

No, sadly, the mystery of the "pause" and Trenberth's "missing heat" have not been solved, nor has the question of whether the "missing heat" ever existed in the first place. The article claims "People with a grasp of the law of conservation of energy are, however, sceptical in their turn of these positions and doubt that the pause is such good news," which is a false assumption because the only 'evidence' that there is any "missing heat" is the falsified output of climate models, which cannot provide 'data' or evidence.

Further, the article goes on to make astonishingly erroneous statements that violate physics/thermodynamics, including claims that heat from man-made CO2 sinks to the bottom of the ocean and that once the ocean depths become "hotter than the surface...global warming will resume." Thus, The Economist has become a denier of convection.

Other embarrassing basic physics/thermodynamics errors that fail grade-school science include:

1) the assumption that the tail can wag the dog, i.e. that the atmosphere, with roughly 1/1000th of the heat capacity and thermal inertia of the oceans, can significantly heat the oceans (a rough heat-capacity comparison is sketched after this list)

2) the assumption that an atmosphere which has not warmed in 18 years can warm the oceans, and do so undetected in the upper layers of the ocean
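A back-of-the-envelope sketch of point 1, using commonly cited round numbers of my own choosing (not taken from the article): the total heat capacity of the global ocean exceeds that of the entire atmosphere by roughly a factor of 1,000, so the energy needed to warm the whole atmosphere by 1°C would warm the whole ocean by only about 0.001°C.

```python
# Back-of-the-envelope sketch: heat capacity of the atmosphere vs. the global ocean.
# Approximate, commonly cited values; intended only as an order-of-magnitude check.
ATM_MASS = 5.1e18          # kg, total mass of the atmosphere
CP_AIR = 1004.0            # J/(kg K), specific heat of air at constant pressure

OCEAN_MASS = 1.4e21        # kg, total mass of the ocean
CP_WATER = 3990.0          # J/(kg K), specific heat of seawater

heat_cap_atm = ATM_MASS * CP_AIR
heat_cap_ocean = OCEAN_MASS * CP_WATER

print(f"atmosphere: {heat_cap_atm:.2e} J/K")
print(f"ocean:      {heat_cap_ocean:.2e} J/K")
print(f"ratio ocean/atmosphere ~ {heat_cap_ocean / heat_cap_atm:.0f}")
# ~1000: the energy that warms the whole atmosphere by 1 K warms the whole ocean by ~0.001 K.
```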





I had a subscription to the Economist for many years, but dropped it because they jumped on the fake-science CAGW bandwagon. 

Related tweets/comments:



In the article
Economist – Oceans and the climate: Davy Jones’ heat locker [http://www.economist.com/news/science-and-technology/21613161-mystery-pause-global-warming-may-have-been-solved-answer-seems?fsrc=scn/tw_ec/davy_jones_s_heat_locker]
the reporter made a factual scientific error. They wrote at the end of the article:
“The process of sequestration [of heat] must reverse itself at some point, since otherwise the ocean depths would end up hotter than the surface—an unsustainable outcome. And when it does, global warming will resume.”
The deeper ocean is very cold and involves an enormous amount of mass. There is no way that the ocean depths could become hotter due to the sequestration of heat from added CO2 in the atmosphere!
“No matter how warm the surface of the ocean gets, the ocean’s huge volume and deep basins keep temperatures at the bottom of the ocean at only slightly above freezing.”
P Gosselin (@NoTricksZone)

9/1/14, 11:27 AM
@newscientist Rubbish...atmos warms 1°...heat absorbed by oceans leads to 0.01° rise in ocean temp...heat exits ocean only if atmos cools


Davy Jones’s heat locker


The mystery of the pause in global warming may have been solved. The answer seems to lie at the bottom of the sea


Aug 23rd 2014 The Economist [with added comments and links by HS]



OVER the past few years one of the biggest questions in climate science has been why, since the turn of the century, average surface-air temperatures on Earth have not risen, even though the concentration in the atmosphere of heat-trapping carbon dioxide has continued to go up. This “pause” in global warming has been seized on by those sceptical that humanity needs to act to curb greenhouse-gas emissions or even (in the case of some extreme sceptics) who think that man-made global warming itself is a fantasy. People with a grasp of the law of conservation of energy are, however, sceptical in their turn of these positions and doubt that the pause is such good news. They would rather understand where the missing heat has gone, and why—and thus whether the pause can be expected to continue.

The most likely explanation is that it is hiding in the oceans, which store nine times as much of the sun’s heat as do the atmosphere and land combined. [Actually, the ocean stores 1000 times as much heat as the atmosphere, and the tail does not wag the dog] But until this week, descriptions of how the sea might do this have largely come from computer models. Now, thanks to a study published in Science by Chen Xianyao of the Ocean University of China, Qingdao, and Ka-Kit Tung of the University of Washington, Seattle, there are data [not according to Josh Willis of JPL who says there are no such "robust" data in the "real ocean" as opposed to the modeled ocean].

Dr Chen and Dr Tung have shown where exactly in the sea the missing heat is lurking. As the left-hand chart below shows, over the past decade and a bit the ocean depths have been warming faster than the surface. This period corresponds perfectly with the pause, and contrasts with the last two decades of the 20th century, when the surface was warming faster than the deep. The authors calculate that, between 1999 and 2012, 69 zettajoules of heat (that is, 69 x 10^21 joules—a huge amount of energy) have been sequestered in the oceans between 300 metres and 1,500 metres down. If it had not been so sequestered, they think, there would have been no pause in warming at the surface [why can't climate scientists ever admit that the other obvious possibility is that there never was any "missing heat" to sequester?]

Hidden depths

The two researchers draw this conclusion from observations collected by 3,000 floats launched by Argo, an international scientific collaboration. These measure the temperature and salinity of the top 2,000 metres of the world’s oceans. In general, their readings match the models’ predictions. But one of the specifics is weird.

Most workers in the field have assumed the Pacific Ocean would be the biggest heat sink, since it is the largest body of water. A study published in Nature in 2013 by Yu Kosaka and Shang-Ping Xie of the Scripps Institution of Oceanography, in San Diego, argued that cooling in the eastern Pacific explained most of the difference between actual temperatures and models of the climate that predict continuous warming. Dr Chen’s and Dr Tung’s research, though, suggests it is the Atlantic (see middle chart) and the Southern Ocean that are doing the sequestering. The Pacific (right-hand chart), and also the Indian Ocean, contribute nothing this way—for surface and deepwater temperatures in both have risen in parallel since 1999.




This has an intriguing implication. Because the Pacific has previously been thought of as the world’s main heat sink, fluctuations affecting it are considered among the most important influences upon the climate. During episodes called El Niño, for example, warm water from its west sloshes eastward over the cooler surface layer there, warming the atmosphere. Kevin Trenberth of America’s National Centre for Atmospheric Research has suggested that a strong Niño could produce a jump in surface-air temperatures and herald the end of the pause. Earlier this summer, a strong Niño was indeed forecast, though the chances of this happening seem to have receded recently.

But if Dr Chen and Dr Tung are right, then the fluctuations in the Atlantic may be more important. In this ocean, saltier tropical water tends to move towards the poles (surface water at the tropics is especially saline because of greater evaporation). As it travels it cools and sinks, carrying its heat into the depths—but not before melting polar ice, which makes the surface water less dense, fresh water being lighter than brine. This fresher water has the effect of slowing the poleward movement of tropical water, moderating heat sequestration. It is not clear precisely how this mechanism is changing so as to send heat farther into the depths. But changing it presumably is.

Understanding that variation is the next task. The process of sequestration must reverse itself at some point, since otherwise the ocean depths would end up hotter than the surface—an unsustainable outcome. And when it does, global warming will resume.

Correction: This article originally stated that 69 zettajoules of heat was 69 x 10^11 joules. In fact it is 69 x 10^21 joules. Sorry.

Pravda exposes the errors and frauds of global warming science

Ironically, Pravda has published today an article about the fundamental "errors and frauds of global warming science." One of the most important points mentioned is perhaps the most fundamental flaw of CAGW theory: equating radiation with heat transfer, when radiation ≠ heat due to both the first and second laws of thermodynamics. Radiation from a cold body cannot increase the heat content of a hot body; to do so would require an impossible decrease in entropy and violate the 2nd law of thermodynamics [and also the 1st law]. The false belief of many climate scientists that radiation = heat transfer is one of the most fundamental flaws of CAGW theory.

Note: there is indeed a ~33K "greenhouse effect" due to gravity/mass/pressure/lapse rate, but it is not significantly affected by man-made CO2. 

Errors and frauds of global warming science

9/1/2014

By Gary Novak [plus links added by HS]

Modern global warming science began in 1979 with the publication of Charney et al in response to a request from a U.S. governmental office to create a study group for answering questions about global warming. Charney et al modeled atmospheric effects and drew the conclusion that the average earth temperature would increase by about 3°C upon doubling the amount of carbon dioxide in the air.

Charney et al did not have a known mechanism for global warming to base their modeling on. Their publication was total fakery stating deliberate absurdities, such as modeling "horizontal diffusive heat exchange," which doesn't exist.

In 1984 and 1988, Hansen et al did similar modeling but added a concept for heat produced by carbon dioxide, which they derived from assumed history. Over the previous century, a temperature increase of 0.6°C was assumed to have been caused by an increase in CO2 of 100 parts per million in the atmosphere. Their modeling then had the purpose of determining secondary effects, primarily caused by an assumed increase in water vapor. In other words, a primary effect was based upon the historical record, while secondary effects were modeled.

This is the approach taken to this day, while refinements are developed. There were major problems in using history for the primary effect. Firstly, the historical effect included secondary effects which could not be separated out, and no attempt was made to do so. This means the assumed primary effect included secondary effects. Secondly, there was no place for other effects in attributing the entire history to CO2.

Therefore, an attempt to determine the primary effect was made by Myhre et al in 1998 (4) by using radiative transfer equations. Those equations only show the rate of depletion of radiation as the concentration of a gas increases. They say nothing about heat. An impossibly complex analysis would be required to evaluate the resulting heat, but no such analysis was mentioned in the publication by Myhre et al. Even worse, Myhre et al added more atmospheric modeling in determining the primary effect including the effects of clouds.

These publications cannot be viewed as honest. They lack a consistent logic and fabricate conclusions with no scientific method at arriving at such conclusions. Furthermore, these publications are not science as the acquisition of evidence, since modeling is the projection of assumptions with no method of acquiring evidence. Modeling may be a tool for sociologists and politicians but has no place in science. Science attempts to verify through reproducible evidence, while modeling is nothing but an expression of opinions with no new evidence being acquired.

Even after Myhre et al supposedly determined the primary effect (said to be 5.35 times the natural log of the final carbon dioxide concentration divided by the prior concentration, a three-component fudge factor) there was no known mechanism for carbon dioxide (or any greenhouse gas) creating global warming.
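For reference, the expression being criticized is the widely cited simplified CO2 forcing formula of Myhre et al. (1998), ΔF = 5.35 ln(C/C0) W/m². A minimal sketch of what that formula computes (shown only to make it concrete; it takes no position on the article's criticism):

```python
# Minimal sketch of the Myhre et al. (1998) simplified CO2 forcing expression
# referenced in the text: delta_F = 5.35 * ln(C / C0), in W/m^2.
import math

def co2_forcing_wm2(c_ppm: float, c0_ppm: float) -> float:
    """Simplified radiative forcing (W/m^2) for a CO2 change from c0_ppm to c_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"280 -> 400 ppm: {co2_forcing_wm2(400.0, 280.0):.2f} W/m^2")
print(f"280 -> 560 ppm (doubling): {co2_forcing_wm2(560.0, 280.0):.2f} W/m^2")  # ~3.71 W/m^2
```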

In 2001, three years after Myhre et al's publication, the IPCC described the mechanism this way: "Carbon dioxide absorbs infrared radiation in the middle of its 15 mm [sic] band to the extent that radiation in the middle of this band cannot escape unimpeded: this absorption is saturated. This, however, is not the case for the band's wings. It is because of these effects of partial saturation..."

Saturation means all available radiation gets used up. Heinz Hug stated in his publication that saturation occurs within 10 meters at the center of the absorption curve for the 15 µm band (http://nov79.com/gbwm/hnzh.html#ten). On the shoulders of the absorption curve are molecules with stretched bonds, causing them to absorb at slightly altered wavelengths. It is supposedly these molecules which do the heating for greenhouse gases, because they do not use up all available radiation, and therefore more of the gas absorbs more radiation.

Scientists said that 5% of the CO2 molecules were effective on the shoulders for creating global warming. This roughly means that radiation would travel 20 times farther before being absorbed. But 20 times 10 meters is only 200 meters. Air mixes in such a short distance, which means there is no temperature change. Absorbing radiation in 200 meters is no different than absorbing it in 10 meters. In other words, the 5% claim was nothing but a fake statement for rationalizing. The shamelessness and gall of making up this subject on whim and then claiming it is science is unprecedented. Real scientists are not that way.

Since this mechanism would not stand up to criticism, scientists changed their minds about the mechanism a few years ago and said the real mechanism occurs about 9 kilometers up in the atmosphere. (The troposphere extends up to about 17 km on average.) Trivial rationalizations were used, mainly that the absorption bands get narrower at lower air pressure, so they [allegedly] don't overlap with water vapor.

There are two major problems with the analysis for 9 km up. One, there is not much space left for adding heat. And two, the temperature increase required for radiating the heat back down to the surface is at least 24°C up there for each 1°C increase near the surface, not accounting for oceans (http://nov79.com/gbwm/satn.html#upa). Oceans will absorb the heat for centuries or millennia, which means 70% of the heat disappears during human influences. So the total would need to be 80°C at 9 km up to create the claimed 1°C near ground level. No temperature increase has been detected at 9 km up due to carbon dioxide [the missing "hot spot"].

Notice that the fakes didn't have a mechanism and didn't know where it was occurring 30 years after the first models were constructed in 1979 (said to be only off by 15%) and 10 years after the fudge factor was contrived for pinning down the primary effect, which the mechanism is supposed to represent. How could they get the primary effect (fudge factor) without knowing whether it was occurring at ground level or 9 km up?

Why do nonscientists assume it is self-evident that greenhouse gases create global warming, when scientists cannot describe a mechanism? Extreme over-simplification appears to be the reason. They assume that absorbing radiation is producing heat. Guess what. A jar of pickles absorbs radiation but it doesn't heat the kitchen. Total heat effects are complex, and they equilibrate.

What really happens is that the planet is cooled by radiation which goes around greenhouse gases, not through them.[I disagree in part, some IR is lost directly to space through the 'atmospheric window' but also greenhouse gases enhance cooling of the troposphere, tropopause, stratosphere, mesosphere, and thermosphere by increasing radiative surface area to space] Cooling results in an equilibrium temperature which is independent of how heat gets into the atmosphere. It means greenhouse gases have no influence upon the temperature of the planet.

The amount of carbon dioxide in the atmosphere is so low that all biology is on the verge of becoming extinct due to a shortage of CO2 which is needed for photosynthesis. There was twenty times as much CO2 in the atmosphere when modern photosynthesis evolved. Oceans continuously absorb CO2 and convert it into calcium carbonate and limestone. The calcium never runs out, and the pH of the oceans never drops below 8.1 for this reason. It's the pH which calcium carbonate buffers at. If not, why hasn't four billion years been long enough to get there?

Gary Novak

New paper finds a non-linear relationship between sunspots and global temperatures

A new paper by Dr. Nicola Scafetta published in Physica A: Statistical Mechanics and its Applications rebuts the assertion that sunspots and global temperatures are not related. Dr. Scafetta instead finds a non-linear relation between sunspots and temperatures that

"can be recognized only using specific techniques of analysis that take into account non-linearity and filtering of the multiple climate change contributions" and "Multiple evidences suggest that global temperatures and sunspot numbers are quite related to each other at multiple time scales. Thus, they are characterized by cyclical fractional models. However, solar and climatic indexes are related to each other through complex and non-linear processes."
A simple linear model based upon the "sunspot integral" and ocean oscillations explains 95% of climate change over the past 400 years (a toy illustration of a sunspot integral is sketched below).


Dr. Scafetta's model [black line] based upon solar and anthropogenic forcing is performing much better than the IPCC models [green band] which dismiss the role of the Sun in climate change
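For readers unfamiliar with the term, a "sunspot integral" is usually computed as the running cumulative sum of the sunspot number minus a reference level, so that persistently above-average solar activity accumulates over time. A purely hypothetical sketch with synthetic numbers (not Dr. Scafetta's data or code, and the choice of reference level is my own assumption):

```python
# Hypothetical sketch of a "sunspot integral": the cumulative departure of the
# sunspot number from a reference level. Synthetic sunspot series, NOT real data.
import numpy as np

years = np.arange(1850, 2015)
rng = np.random.default_rng(1)
# Synthetic ~10.4-year solar cycle with noise, standing in for observed sunspot counts.
sunspots = 80 + 60 * np.sin(2 * np.pi * (years - 1850) / 10.4) + 10 * rng.standard_normal(years.size)
sunspots = np.clip(sunspots, 0, None)

reference = sunspots.mean()                       # a simple choice of reference level
sunspot_integral = np.cumsum(sunspots - reference)

print("final value of the sunspot integral:", round(float(sunspot_integral[-1]), 1))
```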

Global temperatures and sunspot numbers. Are they related? Yes, but non-linearly. A reply to Gil-Alana et al. (2014)

Highlights

Gil-Alana et al. claimed that the sunspots and Earth’s temperature are unrelated.
I show that Gil-Alana et al.’s claims are based on a number of misunderstandings.
The global surface temperature does not present any “zero” frequency “pole” or “singularity”.
The temperature signature is made of oscillations plus an anthropogenic component.
Appropriate solar proxy models demonstrate the existence of a significant sun-climate relation.

Abstract

Recently Gil-Alana et al. (2014) compared the sunspot number record and the temperature record and found that they differ: the sunspot number record is characterized by a dominant 11-year cycle while the temperature record appears to be characterized by a “singularity” or “pole” in the spectral density function at the “zero” frequency. Consequently, they claimed that the two records are characterized by substantially different statistical fractional models and rejected the hypothesis that the Sun influences significantly global temperatures. I will show that: (1) the “singularity” or “pole” in the spectral density function of the global surface temperature at the “zero” frequency does not exist—the observed pattern derives from the post-1880 warming trend of the temperature signal and is a typical misinterpretation that discrete power spectra of non-stationary signals can suggest; (2) appropriate continuous periodograms clarify the issue and also show a signature of the 11-year solar cycle, which since 1850 has an average period of about 10.4 years, and of many other natural oscillations; (3) the solar signature in the surface temperature record can be recognized only using specific techniques of analysis that take into account non-linearity and filtering of the multiple climate change contributions; (4) the post-1880 temperature warming trend cannot be compared or studied against the sunspot record and its 11-year cycle, but requires solar proxy models showing short and long scale oscillations plus the contribution of anthropogenic forcings, as done in the literature. Multiple evidences suggest that global temperatures and sunspot numbers are quite related to each other at multiple time scales. Thus, they are characterized by cyclical fractional models. However, solar and climatic indexes are related to each other through complex and non-linear processes. Finally, I show that the prediction of a semi-empirical model for the global surface temperature based on astronomical oscillations and anthropogenic forcing proposed by Scafetta since 2009 has, up to date, been successful.

New paper finds another non-hockey-stick in Russian sub-Arctic, cooling over 4,500 years

A paper published today in Global and Planetary Change finds another non-hockey-stick in the Russian sub-Arctic with reconstructed temperatures showing a cooling trend over the past 4,500 years since the Holocene Climate Optimum. The paper adds to over 1,000 other worldwide non-hockey-sticks published in the scientific literature.

Proxy temperatures show a decreasing trend from 4,500 years before the present [BP] to the end of the record in the 20th century

Abstract

Especially in combination with other proxies, the oxygen isotope composition of diatom silica (δ18Odiatom) from lake sediments is useful for interpreting past climate conditions. This paper presents the first oxygen isotope data of fossil diatoms from Kamchatka, Russia, derived from sediment cores from Two-Yurts Lake (TYL). For reconstructing late Holocene climate change, palaeolimnological investigations also included diatom, pollen and chironomid analysis.
The most recent diatom sample (δ18Odiatom = +23.3‰) corresponds well with the present-day isotopic composition of the TYL water (mean δ18O = −14.8‰), displaying a reasonable isotope fractionation in the silica-water system. Nonetheless, the TYL δ18Odiatom record is mainly controlled by changes in the isotopic composition of the lake water. TYL is considered a dynamic system triggered by differential environmental changes closely linked with lake-internal hydrological factors.
The diatom silica isotope record displays large variations in δ18Odiatom, from +27.3‰ to +23.3‰, from about ~4.5 kyrs BP until today. A continuous depletion in δ18Odiatom of 4.0‰ is observed over the past 4.5 kyrs, which is in good accordance with other hemispheric environmental changes (i.e. a summer insolation-driven Mid- to Late Holocene cooling). The overall cooling trend is superimposed by regional hydrological and atmospheric-oceanic changes. These are related to the interplay between the Siberian High and Aleutian Low as well as to the ice dynamics in the Sea of Okhotsk. Additionally, combined δ18Odiatom and chironomid interpretations provide new information on changes related to meltwater input to lakes. Hence, this diatom isotope study provides further insight into the hydrology and climate dynamics of this remote, rarely investigated area.
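A quick arithmetic check on the first statement of the abstract (my own calculation, not from the paper): the apparent silica-water fractionation is simply the difference between the quoted diatom and lake-water δ18O values.

```python
# Quick arithmetic sketch: apparent oxygen-isotope fractionation between diatom
# silica and lake water, using the two values quoted in the abstract.
d18o_diatom = +23.3   # permil, most recent diatom sample
d18o_water = -14.8    # permil, present-day Two-Yurts Lake water (mean)

apparent_fractionation = d18o_diatom - d18o_water
print(f"apparent silica-water fractionation ~ {apparent_fractionation:.1f} permil")
# ~38 permil, roughly the range expected for silica formed in cold lake water,
# hence the abstract's description of a "reasonable" fractionation.
```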