Monday, 25 May 2015

TEMPERATURE

Temperature is the degree of hotness or coldness of any substance. It may be thought of as the level of average total mechanical and vibrational energy transfer between the molecules. This level is determined by a temperature scale based on a change of state such as the freezing or boiling point of water.

The SI unit is the kelvin, defined as the fraction 1⁄273.16 of the thermodynamic temperature of the triple point of water (exactly 0.01°C or 32.018°F). In other words, it is defined such that the triple point of water is exactly 273.16 K.
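The definition above fixes the offset between the Celsius and Kelvin scales. A minimal sketch of the conversions (illustrative only):

```python
# Converting between Celsius and kelvin, using the fixed point quoted above:
# the triple point of water is exactly 273.16 K (0.01 deg C).

def celsius_to_kelvin(t_c):
    """Convert a temperature in degrees Celsius to kelvin."""
    return t_c + 273.15

def kelvin_to_celsius(t_k):
    """Convert a temperature in kelvin to degrees Celsius."""
    return t_k - 273.15

print(celsius_to_kelvin(0.01))    # triple point of water, ~273.16 K
print(kelvin_to_celsius(273.16))  # ~0.01 deg C
```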

The most important climate property for establishing the theory that climate is influenced by so-called greenhouse gases is the global temperature which is considered to increase with increases in human induced emissions and therefore to cause warming.

Unfortunately a scientifically established global temperature does not exist, so it is also not possible to know whether it is increasing or decreasing1,2.

Essex et al1 have explained this as follows:
Physical, mathematical and observational grounds are employed to show that there is no physically meaningful global temperature for the Earth in the context of the issue of global warming. While it is always possible to construct statistics for any given set of local temperature data, an infinite range of such statistics is mathematically permissible if physical principles provide no explicit basis for choosing among them. Distinct and equally valid statistical rules can and do show opposite trends when applied to the results of computations from physical models and real data in the atmosphere. A given temperature field can be interpreted as both “warming” and “cooling” simultaneously, making the concept of warming in the context of the issue of global warming physically ill-posed.
 Temperature is an intensive property. It can only be defined as a property of a uniform substance. 
Thermodynamic variables come in two varieties: extensive and intensive. Extensive variables are proportional to the size of the system. They are additive. In this category we find volume, mass, energy, entropy, particle number etc. We can combine two systems and the values of extensive variables for the whole system will simply be the sum of the values from the two components. Correspondingly a mean subsystem (loosely called the average) will have this sum divided by the number of components. Such an average over a quantity like mass is meaningful because the sum is meaningful. For example average mass is of importance to airlines because it is helpful to estimate the total load of an aircraft without having to weigh every passenger.


Intensive variables, by contrast, are independent of system size and represent a quality of the system: temperature, pressure, chemical potential etc. In this case combining two systems will not yield an overall intensive quantity equal to the sum of its components. 

For example, two identical subsystems do not have a total temperature or pressure twice those of their components. A sum over intensive variables carries no physical meaning. Dividing meaningless totals by the number of components cannot reverse this outcome.

In special circumstances averaging might approximate the equilibrium temperature after mixing, but this is irrelevant to the analysis of an out-of-equilibrium case like the Earth’s climate.
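The point about mixing can be made concrete. A minimal sketch, assuming two water samples of equal specific heat: the equilibrium temperature after mixing is the mass-weighted mean, not the plain mean of the two readings.

```python
# For samples of the same substance (same specific heat), the equilibrium
# temperature after mixing is the MASS-weighted mean of the readings.
# The plain arithmetic mean of the two intensive values has no such meaning.

def mixing_temperature(masses, temps):
    """Equilibrium temperature after mixing (equal specific heat assumed)."""
    total_heat = sum(m * t for m, t in zip(masses, temps))
    return total_heat / sum(masses)

masses = [1.0, 3.0]   # kg (hypothetical)
temps = [10.0, 30.0]  # deg C (hypothetical)

naive = sum(temps) / len(temps)               # plain mean: 20.0
physical = mixing_temperature(masses, temps)  # mass-weighted: 25.0
print(naive, physical)
```

The two numbers differ, and only the second corresponds to anything measurable after mixing; and, as the text notes, even this is irrelevant to an out-of-equilibrium system.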
It is possible to argue that sensors for temperature such as mercury-in-glass thermometers, thermocouples and thicknesses of tree rings or varves are surrogates or proxies of temperature. They can therefore be subject to statistical treatment, and even some form of averaging. However they are usually not representative samples, either of the surface or of the atmosphere, so the averages are of limited value and without scientific significance.

No point anywhere in the climate is ever in equilibrium. The only temperatures that exist are therefore transient infinitesimal increments of the surface and of the atmosphere which are constantly changing. There is never a single or an average temperature, only a transient temperature field. A physical model of the climate which assumes equilibrium can only be considered to be valid on a transient basis and can only be useful for forecasting if combined with records and opinions of its likely future changes. This is essentially the procedure carried out for weather forecasting.

Geologists, climatologists and meteorologists have made attempts to derive rough estimates of trends in global surface temperatures from orbital and sun variability, fossil remains, tree rings, varves and historical evidence. These may be regarded as proxy temperatures. They are only partially based on actual scientific measurement and those that are used are often from unrepresentative samples. They have no scientifically plausible accuracy or bias estimates but they may represent collective opinion, often of self-styled experts, and are subject to the same differences of opinion as those shown amongst historians.

A comprehensive list of past proxy temperatures has been compiled by Bernie Lewin3.

An example from 1975 is from Stanley4:
 
Figure 7.1 Mean Northern Hemisphere temperature change from weather station records, as derived in 1975 (from Stanley4)


The 1990 Report of the Intergovernmental Panel on Climate Change5 gave a very useful summary of the then current opinions from a large number of references.
Figure 7.2 Global temperatures over different periods (IPCC 19905)
 
  

Harris and Mann6 have provided this 2014 version from 2500 BC to 2040. It should be noted that they have avoided mentioning any specific temperature value.



Figure 7.3 Global Temperatures 2500 BC to 2040 AD6

The last two recognise the existence of a Medieval Warm Period and a Little Ice Age, both of which are now questioned by the IPCC.

Meteorologists realise that the temperature in any locality varies over a single day in a manner that cannot be predicted or averaged, but they have the task of forecasting future temperatures and other properties, so they trap a sample of the surface air in a screen away from sunlight and buildings.

For some time they have measured the maximum and minimum temperature each day and used them to forecast future temperatures using computerised numerical models. These can only forecast with an accuracy of ±2ºC with a bias of ±1ºC for a week or so ahead. Nowadays many stations are able to measure hourly temperatures and frequently quote morning and afternoon figures besides the maximum and minimum.

The so-called Daily Mean Temperature, the mean of the maximum and minimum, is not a genuine average but merely a guide to the local range.

Gray7 has shown why no genuine local average surface temperature is possible by calculating the difference between a maximum/minimum average and an hourly average for a set of hourly temperature measurements for 24 weather stations in New Zealand8, for a typical summer's day (January 1st 2001) and a typical winter's day (July 1st 2001), intended as a student exercise.

For the winter series this difference varies from -0.2ºC to +1.9ºC for individual plots.

For the summer series the difference varies from -1.1ºC to +0.4ºC.
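The discrepancy Gray describes is easy to reproduce. A minimal sketch with a hypothetical 24-hour series (values invented for illustration, not from the New Zealand data set):

```python
# The "Daily Mean Temperature" is (max + min) / 2; a true daily average uses
# all the hourly readings. The two generally differ.

hourly = [8, 7, 7, 6, 6, 6, 7, 9, 12, 15, 17, 19,
          20, 21, 21, 20, 18, 16, 14, 12, 11, 10, 9, 8]  # deg C, hypothetical

max_min_mean = (max(hourly) + min(hourly)) / 2  # 13.5
hourly_mean = sum(hourly) / len(hourly)         # ~12.46
difference = max_min_mean - hourly_mean         # about +1 deg C
print(max_min_mean, hourly_mean, difference)
```

For this invented day the max/min "mean" overstates the hourly average by about a degree, the same order as the differences Gray reports.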

The hourly averages are not symmetrical, so a daily average plus an estimate of accuracy such as a standard deviation is not possible. Here are some samples:





Figures 7.4 a b c d  Hourly temperature measurements6
HANSEN’S INITIATIVE


On June 14 1988, Dr James Hansen of the National Aeronautics and Space Administration told a US Congress Committee that the globe was warming and that he had a method of measuring it.

He explained this on his website as follows9:

The Elusive Absolute Surface Air Temperature (SAT) 

Q. What exactly do we mean by SAT ? 

A. I doubt that there is a general agreement how to answer this question. Even at the same location, the temperature near the ground may be very different from the temperature 5 ft above the ground and different again from 10 ft or 50 ft above the ground. Particularly in the presence of vegetation (say in a rain forest), the temperature above the vegetation may be very different from the temperature below the top of the vegetation. A reasonable suggestion might be to use the average temperature of the first 50 ft of air either above ground or above the top of the vegetation. To measure SAT we have to agree on what it is and, as far as I know, no such standard has been suggested or generally adopted. Even if the 50 ft standard were adopted, I cannot imagine that a weather station would build a 50 ft stack of thermometers to be able to find the true SAT at its location. 
Q. What do we mean by daily mean SAT ? 

A. Again, there is no universally accepted correct answer. Should we note the temperature every 6 hours and report the mean, should we do it every 2 hours, hourly, have a machine record it every second, or simply take the average of the highest and lowest temperature of the day ? On some days the various methods may lead to drastically different results. 
Q. What SAT do the local media report ? 

A. The media report the reading of 1 particular thermometer of a nearby weather station. This temperature may be very different from the true SAT even at that location and has certainly nothing to do with the true regional SAT. To measure the true regional SAT, we would have to use many 50 ft stacks of thermometers distributed evenly over the whole region, an obvious practical impossibility. 
Q. If the reported SATs are not the true SATs, why are they still useful ? 

A. The reported temperature is truly meaningful only to a person who happens to visit the weather station at the precise moment when the reported temperature is measured, in other words, to nobody. However, in addition to the SAT the reports usually also mention whether the current temperature is unusually high or unusually low, how much it differs from the normal temperature, and that information (the anomaly) is meaningful for the whole region. Also, if we hear a temperature (say 70°F), we instinctively translate it into hot or cold, but our translation key depends on the season and region, the same temperature may be 'hot' in winter and 'cold' in July, since by 'hot' we always mean 'hotter than normal', i.e. we all translate absolute temperatures automatically into anomalies whether we are aware of it or not. 
Q. If SATs cannot be measured, how are SAT maps created ? 

A. This can only be done with the help of computer models, the same models that are used to create the daily weather forecasts. We may start out the model with the few observed data that are available and fill in the rest with guesses (also called extrapolations) and then let the model run long enough so that the initial guesses no longer matter, but not too long in order to avoid that the inaccuracies of the model become relevant. This may be done starting from conditions from many years, so that the average (called a 'climatology') hopefully represents a typical map for the particular month or day of the year. 
Q. What do I do if I need absolute SATs, not anomalies ? 

A. In 99.9% of the cases you'll find that anomalies are exactly what you need, not absolute temperatures. In the remaining cases, you have to pick one of the available climatologies and add the anomalies (with respect to the proper base period) to it. For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.
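Hansen's last answer amounts to simple arithmetic: an "absolute" figure is manufactured by adding the anomaly to a chosen climatology. A sketch using the numbers he quotes:

```python
# Hansen's recipe for an "absolute" global SAT: pick a climatology and add
# the anomaly. The values below are the ones quoted in his answer; the
# anomaly is illustrative.

climatology = 14.0   # deg C, the "roughly 14 C" most-trusted-model value
anomaly = 0.49       # deg C, an anomaly w.r.t. some base period (hypothetical)

absolute_estimate = climatology + anomaly
print(absolute_estimate)  # 14.49 deg C

# But his own stated spread for the climatology (56-58 F, i.e. roughly
# 13.3-14.4 C) is about 1.1 deg C wide -- larger than the anomaly added.
spread = (58 - 56) * 5 / 9
print(spread)
```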

Having agreed with me that there is no such thing as an absolute surface temperature, he then claims that it is possible to measure the rate at which this non-existent quantity changes.

He shows how he does it in the following paragraph, which actually appears at the beginning of the argument9.
The GISTEMP analysis concerns only temperature anomalies, not absolute temperature. Temperature anomalies are computed relative to the base period 1951-1980. The reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 198710) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.

Here he has changed his mind. Instead of absolute surface temperature being elusive, it now merely varies markedly over short distances.

But the temperature varies markedly at all times in the same place and therefore cannot legitimately be assumed constant. Hansen and Lebedeff10 have even gone one step further by choosing a monthly value, made up of the daily average of the maximum and minimum, as the constant temperature of each weather site. These are not measured temperatures, so they are called data.
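The anomaly scheme quoted above can be sketched in a few lines: each station's monthly value is referenced to that same station's mean over the 1951-1980 base period. The station values below are hypothetical.

```python
# A minimal sketch of per-station anomalies: subtract the station's own
# base-period (1951-1980) mean for that month from the current value.

def monthly_anomaly(value, base_period_values):
    """Anomaly of one monthly value relative to the station's base-period mean."""
    base_mean = sum(base_period_values) / len(base_period_values)
    return value - base_mean

# Hypothetical January means for one station over part of the base period:
base = [2.1, 1.8, 2.4, 2.0, 1.7]   # base mean = 2.0 deg C
print(monthly_anomaly(2.5, base))  # anomaly of +0.5 deg C relative to it
```

Referencing each station to itself is what makes stations at very different absolute temperatures comparable; it is also why every figure depends on the choice of base period.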

This assigned temperature for each weather station is then assumed to apply also over a circle of 1,200 km or more radius. Then, they are said to be strongly correlated with the assigned temperature for the neighbouring weather station.

In the abstract they say10
The temperature changes at mid- and high latitude stations separated by less than 1000 km are shown to be highly correlated; at low latitudes the correlation falls off more rapidly with distance for nearby stations
and
Error estimates are based in part on studies of how accurately the actual station distributions are able to reproduce temperature change in a global data set produced by a three-dimensional general circulation model with realistic variability.

At their 1200 km distance the "highly correlated" coefficient is around 0.5, and it gets smaller at lower latitudes. This is not a high enough figure to justify an assumption of good correlation. They cover up this unjustified assumption with models where the presumed correlation can be assumed higher.
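A standard statistical point underlies this objection: a correlation coefficient r explains only a fraction r² of the variance of one series given the other, so r = 0.5 leaves three quarters of the variance unaccounted for.

```python
# The fraction of variance one series explains of another is r squared
# (the coefficient of determination), so r = 0.5 shares only 25% of the
# variance between the two stations.

def shared_variance(r):
    """Coefficient of determination for a correlation coefficient r."""
    return r ** 2

print(shared_variance(0.5))  # 0.25
print(shared_variance(0.9))  # 0.81 -- what "highly correlated" usually means
```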

The whole world is divided into 80 equal-area boxes and each box into 100 sub-boxes. These cover most of the surface by including islands whose influence extends over a radius of 1200 km. A monthly and then an annual average of each sub-box is averaged to give one for the whole box, and that average is subtracted from a global mean to give the annual temperature anomaly. The huge uncertainties involved in each of these steps cannot be estimated and are largely ignored.
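The chain of averages just described can be sketched as follows. This is an illustration of the scheme as summarised here, not the actual GISS code, and the numbers are invented.

```python
# Sketch of the averaging chain: sub-box means -> box mean -> global mean
# -> anomaly against a reference value. Equal-area boxes need no weighting.

def box_mean(sub_box_values):
    """Average the sub-box values making up one equal-area box."""
    return sum(sub_box_values) / len(sub_box_values)

def global_mean(box_values):
    """Average over equal-area boxes."""
    return sum(box_values) / len(box_values)

boxes = [[1.0, 2.0], [3.0, 5.0]]  # two boxes, two sub-boxes each (hypothetical)
annual = global_mean([box_mean(b) for b in boxes])  # (1.5 + 4.0) / 2 = 2.75
reference = 2.0                                     # hypothetical global mean
anomaly = annual - reference                        # 0.75
print(annual, anomaly)
```

Each stage propagates the uncertainty of the stage before it, which is the point being made in the text: none of those uncertainties is carried through the chain.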

The number of stations, the length of their records and their coverage are all variable9. The figures below indicate

  • the number of stations with record length at least N years as a function of N ,
  • the number of reporting stations as a function of time,
  • the percent of hemispheric area located within 1200km of a reporting station.
Figure 7.5 Variability of temperature records with length, number and hemisphere

The anomaly record is not uniform. Standards of measurement vary from place to place and over time. Numbers and location of acceptable stations vary and they are not distributed uniformly over the earth. Whole regions may be omitted, particularly early ones.

The whole argument depends on correlation coefficients as low as 0.5 or less for temperature anomaly differences. The resulting plots should have much larger uncertainty estimates.

SEA SURFACE TEMPERATURE

Hansen’s system includes only land-based measurements. An average global temperature anomaly needs to include the 71% of the earth’s surface that is ocean. There are many temperature measurements made from ships, but the quality control is much worse than on land, and even then whole regions have no figures. Folland and Parker11 have claimed to have found a way of incorporating the data. One difficulty is that many early measurements were from buckets drawn from the sea, and it is sometimes uncertain whether the buckets were metal or wood. During the First World War measurements could not be made on deck. Also some measurements are from a weather station on board, often beneath the funnel.

Both American temperature compilers, the Goddard Institute for Space Studies (GISS) and the Global Historical Climatology Network (GHCN), have never accepted the use of the sea surface measurements for a global average, and as a result they have to resort to a whole host of dubious devices to claim that their figures are “global”. They use recent satellite measurements for the ocean12 and extrapolate them into the past. It is very suspicious that incorporating the sea surface measurements seems to make little difference to either system.


Although only 29% of the earth’s surface is land, the peak number of 5º x 5º grid-boxes containing land-based weather stations was 880 in 1980, covering 34% of the earth’s surface. This has been achieved because there are many stations on small islands surrounded by ocean, where the land temperature is assumed to be typical of the surrounding ocean. Also a proportion of sea surface measurements is from fixed buoys and weather ships. These and the land stations measure above the surface, whereas current sea surface measurements are made from ships’ engine intakes, which are below the surface.

The IPCC has manipulated this unreliable average from weather stations and sea surface readings to provide the Mean Global Surface Temperature Anomaly Record5 (MGSTAR), which they use as a proxy for global temperature. Estimates of error are rare and have to ignore unknown unknowns13.

No point on the curve is an actual temperature. The points are supposed to be a record of temperature anomalies, not a temperature record, and each is the result of a manipulation process applied to a different number of unrepresentative, unstandardized readings. No plausible estimate of accuracy is possible, but it must be greater than the ±2ºC and ±1ºC currently possible from weather forecasts, so “trends of a few tenths of a degree” can be regarded as without statistical significance.
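The size of the trends in question can be put next to the size of the stated uncertainty. A minimal sketch with invented anomaly values: the fitted slope is hundredths of a degree per year, far inside a ±1 to ±2ºC per-point uncertainty.

```python
# Ordinary least-squares slope of a short, hypothetical anomaly series.
# The point: a trend of a few tenths of a degree over the whole record is
# tiny next to per-point uncertainties of order +/-1 to +/-2 deg C.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = [0, 1, 2, 3, 4]                  # hypothetical
anoms = [0.00, 0.02, 0.01, 0.04, 0.03]   # deg C, hypothetical
print(ols_slope(years, anoms))           # slope of 0.008 deg C per year
```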

Like all proxies its value depends on the extent to which recognised experts and the general public are prepared to accept it. It cannot be validated by any scientific procedure and the IPCC does not claim that it can be. It cannot provide forecasts, but only projections which are evaluated by scientists who depend for their income and career on the IPCC. 

Figure 7.6 Global Air Temperature Anomaly 201314 (1901-2000 reference period)

Figure 7.6 shows the current version of the Hadley CRUT MGSTAR13, which finds a temperature anomaly of a mere 0.49ºC since 1850.

Frank14 has confirmed that the supposed rise is statistically indistinguishable from zero.

MICROWAVE SOUNDING UNITS

A similar proxy sequence has been derived from Microwave Sounding Units (MSU), satellite measurements of the microwave spectrum of oxygen at various levels of the atmosphere. The microwave spectrum of oxygen is dependent on temperature, so an average proxy temperature can be calculated for the various regions of the atmosphere. The segment for the upper troposphere represents global trends from 1958.
Figure 7.8 Temperature anomalies in the upper troposphere since 197016

The technique was developed by John Christy and Roy Spencer at the University of Alabama in Huntsville. The series for the Lower Troposphere (TLT) is presented as a comparison with the MGSTAR surface temperature record and is also given as anomalies, with the entire series as the reference period. It has been running since 1979.

At first this record appeared to differ markedly from the surface anomaly record preferred by the IPCC, and there were long arguments as to which was authentic. Christy and Spencer were not supporters of the IPCC dogmas, so a rival MSU group was set up (Remote Sensing Systems - RSS), headed by IPCC sympathisers, in the hope that it might prove to support the surface series. In the event the surface records and both rival MSU records now show considerable agreement, and all of them show an absence of warming for the past 18 years, which proves that the IPCC models are wrong.

The IPCC has responded by abandoning its favourite MGSTAR and replacing it with MDSTAR, the Mean Decadal Surface Temperature Record, as follows17:

Figure 7.9 Changes in decadal values of the Mean Global Decadal Temperature Record18



Even this system involves only a one degree rise over 150 years, and two periods (1870s to 1900s and 1940s to 1970s) when decadal temperatures fell.

It is amusing to quote the RSS version of the MSU system, which now also quotes by the decade.
Figure 7.10 RSS  MSU Temperature anomalies 19

They have the foolishness to regard a temperature rise of only one tenth of a degree, quoted to three decimal places, for a whole decade as having statistical significance.

FIDDLING THE FIGURES

The MGSTAR has been the object of various doubtfully justified attempts over the years to make its trend greater, with little apparent success. This has led the IPCC to argue that a trend should be calculated from decadal rather than annual temperature anomalies. If this does not work we may have to wait for whole centuries before we can become confident that the globe is really warming.

The record is very difficult to reconcile with the IPCC theory that there is a steady warming of the surface from increasing emissions of so-called greenhouse gases. The tiny claimed rise is easily explained by a modest overall upwards bias which would be expected from the increased population and urbanisation of the world since 1850.

Gray20 pointed this out in 2000. There are many records of urban influences on local temperature, and there are a number of long records from stations subject to little change which show no temperature rise. He also pointed out that liquid-in-glass thermometers automatically read high if they are not regularly calibrated, particularly old ones, and there are many other inaccuracies which seem to be ignored.

Michaels and McKitrick21,22 have shown an important influence of sociological change and also of inhomogeneities.

There have been repeated attempts to boost the supposed rise in the MGSTAR by adjustments. It began right at the beginning with attempts to argue that the record is not influenced by urban heating.

In 1990 two papers on the subject appeared in the two most respectable scientific journals. The first was by Jones, P. D., P. Ya. Groisman, M. Coughlan, N. Plummer, W. C. Wang & T. R. Karl (1990) in Nature23.

The IPCC have repeatedly quoted the paper by Jones et al.23 as evidence that urban heating is negligible.

These authors examined an extensive set of rural station temperature data for three regions of the world - European parts of the Soviet Union, Western Australia and Eastern China. When combined with similar analyses for the contiguous United States, the results are claimed to be representative of 20% of the land area of the Northern Hemisphere and 10% of the Southern Hemisphere.

They worked out the linear slope of temperature anomalies for the rural series in each case and compared it with the same slope for several gridded series. For the Western USSR, it covered the period 1901-1987 and 1930-1987, for Eastern Australia it was 1930-1988 compared with 1930-1997, for Eastern China it was 1954-1983 and for the contiguous United States it was 1901-1984. The differences between urban and rural slopes were only significant at the 5% level for Eastern Australia and for one set of Eastern China.

They concluded:

It is unlikely that the remaining unsampled areas of the developing countries in tropical climates, or other highly populated parts of Europe, could significantly increase the overall urban bias above 0.05ºC  during the twentieth century
It is unclear whether this small correction has been made for the most recent version of the Jones et al. global temperature series (Figure 5).

There are several things wrong with the Jones et al. (1990) paper.

The quality of the data is even worse than usual. They admit “It is unfortunate that separate maximum and minimum temperature data are not more widely available.”

The qualification for a “rural” site is a population below 10,000 for Western Soviet Union, below 35,000 for Eastern Australia, and below 100,000 for Eastern China. There is ample evidence (Gray 2000) that urban effects exist in such places.

They have chosen countries with a continuous record of effective scientific supervision. These are not representative of the rest of the world, where changes of country and adequate supervision are far less common.

Even these countries raise doubts. Russia had a tyrannical regime where statistics were frequently manipulated for political purposes. China had a major famine from the “Great Leap Forward” between 1958 and 1959 and also a manipulation of statistics.

In the very same year there appeared in Geophysical Research Letters another paper which included two of the authors of the previous paper, Wang and Karl (Wang et al. 1990)24.

The abstract of this paper reads:
We used 1954-1983 surface temperature from 42 Chinese urban (average population 1.7 million) and rural (average population 150,000) station pairs to study the urban heat island effects. Despite the fact that the rural stations are not true rural stations, the magnitude of the heat islands was calculated to average 0.23ºC over the thirty year period, with a minimum value (0.19ºC) during the 1964-1973 decade and maximum (0.28ºC) during the most recent decades.
This study appears to have used the same stations that were claimed to have no urban bias in the first paper, and now there is an urban bias, even though “rural” now includes places with populations as high as 150,000.

The earlier paper23 states, of Eastern China:
The stations were selected on the basis of station history: We chose those with few, if any, changes in instrumentation, location or observation times.
Wang et al24 say:
They were chosen based on station histories. We chose those without any changes in instrumentation, location, or observation times.
Both papers were written at the same time, yet different conclusions were drawn from the same data. Keenan25 (2007) found that they both used the same data, and he showed that some places did not exist and that many of the Chinese stations moved several times over the period in question, in one case 15 km. He accused Wang of outright fraud.

Although Wang was cleared of this charge by his university, Tom Wigley, in this exchange from the Climategate papers, was in no doubt26:

From: Tom Wigley
To: Phil Jones
Subject: [Fwd: CCNet Xtra: Climate Science Fraud at Albany University]
Date: Mon, 04 May 2009 01:37:07 -0600
Cc: Ben Santer

Phil, 
Do you know where this stands? The key things from the Peiser items are ... 
Wang had been claiming the existence of such exonerating documents for nearly a year, but he has not been able to produce them. Additionally, there was a report published in 1991 (with a second version in 1997) explicitly stating that no such documents exist. Moreover, the report was published as part of the Department of Energy Carbon Dioxide Research Program, and Wang was the Chief Scientist of that program. 
and 
Wang had a co-worker in Britain. In Britain, the Freedom of Information Act requires that data from publicly-funded research be made available. I was able to get the data by requiring Wang's co-worker to release it, under British law. It was only then that I was able to confirm that Wang had committed fraud.
Despite this plain evidence of fraud, the paper by Jones et al23 was still quoted in the next IPCC Report as evidence against urban heating, and it was even endorsed, by reference, in the most recent IPCC Report19.

D’Aleo and Watts27 have documented the devices that have been used to try to boost the alleged warming shown by the MGSTAR. Their Summary for Policymakers shows what they found:

1. Instrumental temperature data for the pre-satellite era (1850-1980) have been so widely, systematically, and uni-directionally tampered with that it cannot be credibly asserted there has been any significant “global warming” in the 20th century.

2. All terrestrial surface-temperature databases exhibit signs of urban heat pollution and post-measurement adjustments that render them unreliable for determining accurate long-term temperature trends.

3. All of the problems have skewed the data so as greatly to overstate observed warming both regionally and globally.

4. Global terrestrial temperature data are compromised because more than three quarters of the 6,000 stations that once reported are no longer being used in data trend analyses.
5. There has been a significant increase in the number of missing months with 40% of the GHCN stations reporting at least one missing month. This requires infilling which adds to the uncertainty and possible error.

6. Contamination by urbanization, changes in land use, improper siting, and inadequately calibrated instrument upgrades further increases uncertainty.
7. Numerous peer-reviewed papers in recent years have shown the overstatement of observed longer term warming is 30-50% from heat-island and land use change contamination.

8. An increase in the percentage of compromised stations with interpolation to vacant data grids may make the warming bias greater than 50% of 20th-century warming.

9. In the oceans, data are missing and uncertainties are substantial. Changes in data sets introduced a step warming in 2009.

10. Satellite temperature monitoring has provided an alternative to terrestrial stations in compiling the global lower-troposphere temperature record. Their findings are increasingly diverging from the station-based constructions in a manner consistent with evidence of a warm bias in the surface temperature record.

11. Additional adjustments are made to the data which result in an increasing apparent trend. In many cases, adjustments do this by cooling off the early record.

12. Changes have been made to alter the historical record to mask cyclical changes that could be readily explained by natural factors like multi-decadal ocean and solar changes.

13. Due to recently increasing frequency of eschewing rural stations and favoring urban airports as the primary temperature data sources, global terrestrial temperature databases are thus seriously flawed and can no longer be representative of both urban and rural environments. The resulting data are therefore problematic when used to assess climate trends or VALIDATE model forecasts.

14. An inclusive external assessment of the surface temperature record of CRU, GISS and NCDC is essential, chaired and panelled by mutually agreed climate scientists who do not have a vested interest in the outcome of the evaluations.

15. Reliance on the global data by both the UNIPCC and the US GCRP/CCSP should trigger a review of these documents assessing the base uncertainty of forecasts and policy language.
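Point 5 of the summary above refers to infilling missing months. A minimal sketch, assuming the simplest possible scheme (a missing value replaced by the mean of its neighbours), shows how an infilled figure is entirely a product of other stations:

```python
# Sketch of neighbour-mean infilling for a missing monthly value.
# An infilled value carries the neighbours' readings -- and their biases --
# rather than anything measured at the station itself.

def infill(value, neighbour_values):
    """Return the reading, or the neighbours' mean if it is missing (None)."""
    if value is not None:
        return value
    return sum(neighbour_values) / len(neighbour_values)

print(infill(None, [11.0, 13.0]))  # 12.0 -- entirely a product of neighbours
print(infill(10.5, [11.0, 13.0]))  # 10.5 -- an actual reading is kept
```

Real schemes are more elaborate, but the structural point is the same: every infilled month adds uncertainty that is difficult to quantify afterwards.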

The examples given in this report can be supplemented by more recent ones. Chiefio29 found that the hottest country in the world was Bolivia. When he tried to access the figures he found that there had not been any for several years. It seems that an automatic adjustment replaces a missing sequence with the average of nearby countries. As these averages came mainly from warm coastal resorts, Bolivia, despite its cold high altitude, ended up with a spuriously high value.

Gray27 has provided several other examples of how temperatures are being manipulated.

Paul Homewood28 has some recent examples.

Christopher Booker30 has found recent South American examples where the figures have been changed. Jennifer Marohasy31 finds that Australian compilers are also currently fiddling their figures.

UNITED STATES TEMPERATURES

In 1999 Hansen et al32 gave the following graph for the US temperature record:


Figure 7.11 US temperature in 199923


More recently it is as follows33:
Figure 7.12 US temperature33 in 2014

QUALITY OF US STATIONS

Using volunteers, Watts35 has assessed the quality of US weather stations, classified by means of a quality assessment system (CRN) developed by the US NOAA. His findings were as follows:


Figure 7.13 Assessment of Quality of United States Surface Stations35

This work has led to a complete reassessment of United States weather stations.

Only 2% reached the highest quality rating, 61% had an accuracy no better than 2°C, and 8% were out by 5°C or more.

Watts provides a large number of photographs illustrating the factors influencing reliability.

THE HOCKEY STICK

This title was applied to a graph purporting to give a temperature record of the Northern Hemisphere for the past 1000 years34.

Figure 7.14 The Hockey Stick34

It differed markedly from the opinions expressed in the first IPCC Report3 (Figure 2), which identified a Medieval Warm Period and a Little Ice Age. This representation abolishes both of them and claims that the current MGSTAR (Mean Global Surface Temperature Anomaly Record) anomalies exceed the temperatures of the past 1000 years by a few tenths of a degree Celsius.

It purports to represent global temperatures for the past 1000 years. It appeared three times in the early part of the 2001 IPCC Report: in the Summary for Policymakers, the Technical Summary, and Chapter 2, Observed Climate Variability and Change, which also contained several variations of it.

It depends on assuming that the annual changes in various geological specimens can represent global temperature changes. Many are tree-ring widths. Most are for the Northern Hemisphere only, and on land.

It is further assumed that the principal component of variance of a time series represents temperature change.
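The step being criticised can be illustrated in outline. This is a hedged sketch with invented data, not the actual reconstruction code: it builds a matrix of proxy series sharing a common signal, centres each series over the full period, and extracts the first principal component, which is the quantity that would then be read as a "temperature" despite carrying no units or calibration of its own:

```python
# Minimal sketch of extracting the first principal component (PC1) from a
# matrix of proxy series. The proxy data here are invented: a shared
# sinusoidal "signal" plus independent noise in each proxy.
import numpy as np

rng = np.random.default_rng(0)
years, n_proxies = 100, 8
signal = np.sin(np.linspace(0, 4 * np.pi, years))
proxies = signal[:, None] + 0.5 * rng.standard_normal((years, n_proxies))

# Centre each proxy over the full period, then take PC1 via SVD.
# (The choice of centring period is itself one of the contested details.)
centred = proxies - proxies.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = u[:, 0] * s[0]   # the series that would be read as "temperature"

# PC1 recovers the shared signal only up to an arbitrary sign and scale.
corr = np.corrcoef(pc1, signal)[0, 1]
print(abs(corr) > 0.9)  # → True
```

The point of the sketch is that PC1 is a statistical construct: its sign, scale and physical meaning are not determined by the data, and treating it as a temperature series requires exactly the kind of assumption the text describes.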

All the assumptions are difficult to believe and they must involve very large uncertainties which are impossible to estimate.

The comparison with the MGSTAR is tendentious. If there is anything in such a comparison, it is most likely to involve human urban effects (which they try to deny). The assumption that it is evidence for heating effects of greenhouse gases is simply absurd.
    
Soon and Baliunas35,36 gathered together many proxies and listed them. They concluded, firstly, that the coverage of the data, even for the Northern Hemisphere, was not sufficiently representative to justify deriving an average that could be considered reliable.

Their second conclusion was that both the Medieval Warm Period and the Little Ice Age appeared sufficiently frequently in the observations that they must have existed. There was also evidence that temperatures during the Medieval Warm Period were frequently higher than those found today.

The most devastating attack on the hockey stick came from papers by McIntyre and McKitrick37-39. They set out to see whether they could reproduce the Mann/Bradley calculations and were initially surprised to find that the data were not available and had not even been supplied to the journals publishing the work. The papers had been published, and believed, without any check on their validity. After a long period of wrangling they managed to obtain most of the original data. When they carried out the calculations, however, they found serious errors which, when corrected, changed the whole conclusion: they obtained a higher temperature in the year 1400 than is claimed for today. They also found that the shape of the curve had been automatically predetermined, and that the small number of actual data points before 1550 led to excessive reliance, including extrapolation, on several measurements not considered reliable by others.

Holland40 has documented the IPCC's determined resistance to accepting these facts.

Loehle et al41,42 questioned the reliability of tree-ring measurements, which apply only to summer and are influenced by precipitation: increased temperature lowers soil moisture, so the rings get thinner rather than thicker. When he used all the proxies except tree rings he obtained a modified record (Figure 7.15) which restored both the Medieval Warm Period and the Little Ice Age, and removed the supposedly "unprecedented" character of recent temperatures.

Figure 7.15 Proxy temperatures for the past 1000 years ignoring tree rings (Loehle and McCulloch42)

Despite all of these arguments, the latest IPCC Report19 retains this version of the hockey stick. Error estimates must surely be much greater for the paleo estimates, but if the graph is taken seriously we are again dealing with a few tenths of a degree of recent warming, most likely caused by human urban and land-use activity.


Figure 7.16 Current IPCC version of reconstructed Northern Hemisphere temperature19

THE LATEST IPCC REPORT19 

These quotations from the Summary for Policymakers and the Frequently Asked Questions embody most of the conclusions of the Report:

Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased. 

FAQ 2.1 |  How Do We Know the World Has Warmed?
Evidence for a warming world comes from multiple independent climate indicators, from high up in the atmosphere to the depths of the oceans. They include changes in surface, atmospheric and oceanic temperatures; glaciers; snow cover; sea ice; sea level and atmospheric water vapour. Scientists from all over the world have independently verified this evidence many times. That the world has warmed since the 19th century is unequivocal.

 Let us emphasize what it does not say.

Nowhere in this Report, or in any previous Report, is there a claim that a relationship has been established between emissions of so-called greenhouse gases and the earth's climate. Nor have they shown that greenhouse gases are responsible for whatever warming is claimed.

Their claim of changes unprecedented over "decades to millennia" is designed to obscure the fact that for the past 18 years there has been no warming at all when estimated by their Mean Global Surface Temperature Anomaly Record, which was the basis of the claim of warming in all the previous IPCC Reports.

They try to cover up this failure by converting the record to a decadal series and by arguing that, since global temperatures now seem less important, everything else is warming instead.

The arguments get thinner when they are scrutinised. There are no temperature measurements on ice, and the thickness of ice in the Arctic Ocean is controlled by the ocean oscillations beneath it. Glaciers are subject to many alternative influences. The total ice in the Antarctic is at a record level, and the Antarctic Peninsula has an underwater volcano.

AIRS  SATELLITE TEMPERATURES

Figure 7.17 Monthly average temperature measurements at different levels of the atmosphere42

The Atmospheric Infrared Sounder project (AIRS)43 has generated decadal-length, global, gridded data sets of temperature and specific humidity for several standard levels in the troposphere for the obs4MIPs project. The gridded data are based on combined retrievals from AIRS, an instrument sensitive to infrared radiation emitted from the surface and atmosphere, and the Advanced Microwave Sounding Unit (AMSU), an instrument sensitive to microwave frequencies that facilitates retrievals even in somewhat cloudy conditions. Both AIRS and AMSU are on board NASA's Aqua platform.

These maps are averages over entire layers of the atmosphere, and they are also daily or monthly averages. They cannot currently resolve much of the detail required for weather forecasting, and they are still prone to inaccuracies arising from the difficulty of allowing for clouds and other aerosols. They have, however, added an important extra facility for studying temperature behaviour in the atmosphere.
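For illustration, a single "global" figure is conventionally formed from such gridded fields by an area-weighted average, with weights proportional to the cosine of latitude. The grid values below are invented, and the choice of this particular weighting rule, rather than some other statistic, is exactly the kind of decision Essex et al1 argue has no unique physical justification:

```python
# Hedged sketch: forming one "global" number from a gridded field by a
# cos(latitude) area-weighted mean. The layer temperatures are invented.
import math

lats = [-60, -30, 0, 30, 60]                  # grid-cell centre latitudes (degrees)
temps = [250.0, 270.0, 295.0, 272.0, 248.0]   # invented layer temperatures (K)

# Grid cells shrink with latitude, so each value is weighted by cos(lat).
weights = [math.cos(math.radians(lat)) for lat in lats]
global_mean = sum(w * t for w, t in zip(weights, temps)) / sum(weights)
print(round(global_mean, 1))  # → 271.5
```

An unweighted mean of the same values would give a different number, which illustrates the point that the resulting statistic depends on the averaging rule chosen, not on physics alone.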

CONCLUSIONS 

The earth does not possess a single temperature, and a scientifically acceptable global average cannot currently be derived. It is therefore not possible to know whether the earth is warming or cooling. Rough estimates based on historical anecdotes and various proxies indicate variability over geological time. The IPCC's claims of global warming have been based on fraudulent manipulation of actual measurements and on selective emphasis on favourable anecdotes.

REFERENCES


1 Essex, C., R. McKitrick and B. Andresen. 2007. Does a Global Temperature Exist? J. Non-Equilib. Thermodyn. 32, 1-27.
2 Essex, C. and R. McKitrick. 2008. Taken by Storm. Key Porter Books, Toronto, Canada.
4 Stanley, S. M. 1989. Earth and Life through Time. Page 574, after J. M. Mitchell, in Energy and Climate, National Academy of Sciences, Washington. W. H. Freeman & Co.
5 Houghton, J. T., G. J. Jenkins and J. J. Ephraums (Eds). 1990. Climate Change: The IPCC Scientific Assessment. Cambridge University Press.
6 Harris, C. and R. Mann. 2014. Global Temperature Trends From 2500 B.C. To 2040 A.D. http://www.longrangeweather.com/global_temperatures.htm
8 Mackintosh, L. 2012. National Institute of Water and Atmospheric Research, New Zealand (NIWA). Meteorologist for a Day.
10 Hansen, J. and S. Lebedeff. 1987. Global trends of measured surface air temperature. J. Geophys. Res. 92, D11, 13,345-13,372.
11 Folland, C. K. and D. E. Parker. 1995. Correction of instrumental biases in historical sea surface temperature data. Quart. J. Met. Soc. 15, 1195-1218.
12 Reynolds, R. W., N. A. Rayner, T. M. Smith, D. C. Stokes and W. Wang. 2002. An Improved In Situ and Satellite SST Analysis for Climate. J. Climate 15, 1609-1624.
13 Brohan, P., J. J. Kennedy, I. Harris, S. F. B. Tett and P. D. Jones. 2006. Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. J. Geophys. Res. 111, D12106, doi:10.1029/2005JD006546.
14 Climatic Research Unit. 2014. http://www.cru.uea.ac.uk/
15 Frank, P. 2010. Imposed and neglected uncertainty in the global average surface air temperature index. Energy and Environment 21(8).
17 Comparing Global Surface Temperature Estimates. http://www.climate4you.com/GlobalTemperatures.htm
18 Remote Sensing Systems. http://www.remss.com/
19 IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T. F., D. Qin, G.-K. Plattner, M. Tignor, S. K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P. M. Midgley (Eds)]. Cambridge University Press.
20 Gray, V. R. 2000. The Cause of Global Warming. Energy and Environment.  11, 613-629.
21 McKitrick, R. R. and P. J. Michaels. 2006. A test of corrections for extraneous signals in gridded surface temperature data. Clim. Res. 26, 150-173.
22 McKitrick, R. R. and P. J. Michaels. 2007. Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data. J. Geophys. Res. 112, D24S09, doi:10.1029/2007JD008465.
23 Jones, P. D., P. Ya. Groisman, M. Coughlan, N. Plummer, W. C. Wang and T. R. Karl. 1990. Assessment of urbanization effects in time series of surface air temperature over land. Nature 347, 169-172.
24 Wang, W.-C., Z. Zeng and T. R. Karl. 1990. Urban Heat Islands in China. Geophys. Res. Lett. 17, 2377-2380.
25 Keenan, D. 2007. The Fraud Allegation Against Some Climatic Research of Wei-Chyung Wang. Energy and Environment 18, 985-995.
26 Reclaiming Climate Science http://www.greenworldtrust.org.uk/Science/Social/FOIA1241415427.txt.htm
27 Gray, V. R. 2008. The Global Warming Scam.
28 Homewood, P. https://notalotofpeopleknowthat.wordpress.com/
29 D'Aleo, J. and A. Watts. 2010. Surface Temperature Records: Policy-Driven Deception? http://scienceandpublicpolicy.org/originals/policy_driven_deception.html
30 Chiefio. 2010. https://chiefio.wordpress.com/2010/01/08/ghcn-gistemp-interactions-the-bolivia-effect/
Booker, C. 2015. http://www.telegraph.co.uk/news/earth/environment/globalwarming/11395516/The-fiddling-with-temperature-data-is-the-biggest-science-scandal-ever.html
33 Hansen, J., R. Ruedy, J. Glascoe and M. Sato. 1999. GISS analysis of surface temperature change. J. Geophys. Res. 104, 30997-31022.
35 Watts, A. 2009. Is the US Surface Temperature Record Reliable? https://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf
36 Houghton, J. T., Y. Ding, D. J. Griggs, M. Noguer, P. J. van der Linden, X. Dai, K. Maskell and C. A. Johnson (Eds). 2001. Climate Change 2001: The Scientific Basis. Cambridge University Press, page 29.
37 Soon, W. and S. Baliunas. 2003. Clim. Res. 23, 89-110.
38 Soon, W. 2005. Geophys. Res. Lett. 32, L16712, doi:10.1029/2005GL02342.
39 McIntyre, S. and R. McKitrick. 2003. Corrections to Mann et al. (1998) proxy data base and northern hemisphere average temperature series. Energy and Environment 14, 751-777.
40 McIntyre, S. and R. McKitrick. 2005. Hockey sticks, principal components and spurious significance. Geophys. Res. Lett. 32, L03710, doi:10.1029/2004GL021750.
41 McIntyre, S. and R. McKitrick. 2011. Discussion of: A Statistical Analysis of Multiple Temperature Proxies: Are Reconstructions of Surface Temperatures Over the Last 1000 Years Reliable? Annals of Applied Statistics 5(1), 56-6, doi:10.1214/10-AOAS398L.
42 Holland, D. 2007. Bias and Concealment in the IPCC Process, the "Hockey Stick" Affair and its Implications. Energy and Environment 18, 951-983.
43 Loehle, C. 2007. A Global Temperature Reconstruction Based on Non-Tree Ring Proxies. Energy and Environment 18, 1049-1057.
44 Loehle, C. and J. H. McCulloch. 2008. Correction to: A 2000-year global temperature reconstruction based on non-tree ring proxies. Energy and Environment 19, 91-100.


