On 3 Jul 2005 00:41:19 -0700, email@example.com wrote:
As you are so fond of saying, Google is your friend. Try to hearken
back to various debates in which the infamous "hockey stick" chart is shown
that attempts to show departures from the 1961 to 1990 average temperature
for the years 1000 AD to the current time, showing a sudden jump of +0.5 C
when the other temperatures were below; the difference is less than 0.5 C.
The "measurements" from tree rings, corals, ice cores and "historical
records" (remember that no calibrated met stations existed in 1000 AD) are
all being pegged at less than 0.5 C increments.
Yes, but it wouldn't be worth it, would it?
Fine, you are right. By raising the average temperature of the area of a
city by one degree, you will have raised the "average" temperature of the
earth (depending, of course, upon whether that city area is one of the
regions in which you take measurements to compute the average). Now, let's
see, a city on the order of 1000 square miles will contribute to the
overall average for the Earth's surface area of 197,000,000 square miles by
1/197,000, or a total influence of 5 microkelvin. Now, given that a fair
amount of that will be re-radiated into space (depending upon season, cloud
cover, etc.), this amount is typically what most people would call negligible.
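The arithmetic above can be sanity-checked in a few lines. The figures here (1000 square miles, 197,000,000 square miles, a one-degree city-wide rise) are the post's own assumptions, not independently verified:

```python
# Back-of-the-envelope check of the area-weighting argument above.
# Assumed figures (from the post): a 1000 sq mi city warmed by 1 C,
# against the Earth's ~197,000,000 sq mi of surface area.
city_area_sq_mi = 1_000
earth_area_sq_mi = 197_000_000
delta_t_city_c = 1.0

fraction = city_area_sq_mi / earth_area_sq_mi
global_effect_c = delta_t_city_c * fraction

print(f"area fraction: {fraction:.2e}")                          # ~5.08e-06
print(f"global effect: {global_effect_c * 1e6:.1f} microkelvin")  # ~5.1
```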
If you're gonna be dumb, you better be tough
My first response was snide and I've deleted it.
What caught my eye was your statement "The idea that by measuring
tree ring size, one can determine the average temperature of an
area to within tenths of a degree is ludicrous."
A quick google search using the search terms "tree ring" and "average
temperature" does not yield anyone making such a claim.
So, I remain skeptical that such a claim has been made.
A Google search for "hockey stick chart" yields a few pages that come
up 404, perhaps due to the NHL strike, and a few that criticize
"the hockey stick chart," but I haven't found any explanation of the objection.
One example is found here:
The chart appears to be a graph of temperature as a function
of time. Note the caption on the left side which indicates
the temperature origin is a "1961 to 1990 average." What is
meant by "1961 to 1990 average" is a mystery to me, but inasmuch
as choice of origin is arbitrary let's not worry about it.
It looks to me like the error bars (in grey--if those are
not error bars I don't know what they are) are about +/-
.5 degree for observations prior to about 1600, perhaps +/-
.3 degrees from 1600 - 1900 and I won't hazard a guess as
to what they are in the more recent data.
So, what again is your objection? Do you feel the variance
in the data prior to 1600 was underestimated? If so, what
do you allege has been mishandled in the error estimation?
What do you mean by "pegged at less than 0.5 C increments"?
"Pegged" is usually used to mean a hard limit; for example, met
sensor data showing relative humidity in excess of 100% may be
arbitrarily adjusted to ("pegged" at) 100% at ingest, though
the term more often refers to a hard limit on the measurement
device itself (e.g. "pegging the meter").
The only 0.5 degree C increments I see are the major tick spacing
on the vertical (temperature) axis. Again, like the choice of
origin, that is arbitrary.
IOW, I don't see anything here to the effect of "that by measuring
tree ring size, one can determine the average temperature of an
area to within tenths of a degree is ludicrous."
If it is your intent to make another point, that point is lost on me.
But people who use statistics know that statistics cannot answer
yes/no questions nor tell you how large an effect there is.
Statistics can only estimate the probability that the true
value of some measurable lies within some arbitrary amount from
a specific value.
That seems to frustrate a lot of people, but Nature doesn't really care.
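A minimal illustration of that last point, with invented numbers: all a statistical analysis can report is an interval around an estimate, for example a 95% confidence interval for a mean, never the true value itself.

```python
# Sketch of the point above: statistics yields interval estimates, not
# yes/no answers.  Given noisy measurements, we can only state a
# confidence interval for the true mean.  All data here are invented.
import random, math

random.seed(42)
true_mean = 0.3                       # hypothetical "true" anomaly, deg C
samples = [random.gauss(true_mean, 0.5) for _ in range(100)]

n = len(samples)
mean = sum(samples) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
sem = sd / math.sqrt(n)               # standard error of the mean

# ~95% confidence interval (normal approximation)
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
print(f"estimate: {mean:.3f} C, 95% CI: [{lo:.3f}, {hi:.3f}]")
```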
On 3 Jul 2005 20:51:22 -0700, firstname.lastname@example.org wrote:
You have the correct chart. This is the chart that various ("Earth in
the Balance") former presidential candidates have used to highlight the
future devastation to be caused by the alarming increase in temperature in
only the past several years. The 1961 to 1990 average temperature was
taken as a baseline and is the zero bar of said chart. The numbers below
zero indicate average temperatures below the reference bar and those above
indicate average temperatures greater than the reference. The large spike
at the end of the chart is intended to cause alarm due to a) its large
slope and b) the fact that it is fully 0.5 C above the average from the
previous 30 years and well above the average for the past millennium.
Given that the grey bars are error bars, the overall exercise and
alarmism raised by the presentation of said chart go beyond simple
hysterics and border on fraud. The blue and red lines are those focused
upon by the Chicken Little crowd. The error bars indicate that this
entire exercise is attempting to extrapolate future climate from noise.
Having spent the last 15 years of my career in various development projects
that rely heavily upon integration and test and data collection, I can
categorically state that attempting to extrapolate performance from noise
measurements is a fool's errand.
That the error bars are only 0.5 C is the first thing that anyone with some
degree of skepticism should focus upon. The second is that the deltas
being extrapolated for periods before the advent of the thermometer are
being assessed at less than 0.5 C, when the exact causes for tree ring size,
ice core sample depth, and other "indicators" are hardly precise enough to
estimate global average temperature to such a degree of precision.
Oh please, let's not play games with semantics; you know darned well what
I meant, i.e., that the error bars shown are at best 0.5 C, and the attempts to
show increments of less than 0.1 C are simply ludicrous. Substitute
"represented" or "reported" for "pegged" if that makes you feel any better.
Take a closer look at the graph: the numbers for the era before the
thermometer was invented are being estimated based upon tree ring
measurements, ice core samples and historical records (i.e., some current
era literati writing, "dang, it's cold this winter!" or "We had to order 5
more pairs of longjohns this winter"). Now, look at the blue lines, look
at the zero reference line: this graph is trying to tell you that global
average temperature was moving around 0.2 to 0.5 C below the 1961-1990 global average.
The point is that this is the kind of evidence that is "widely accepted"
and "peer reviewed" and critically acclaimed as showing the coming
environmental disaster that is global warming. It is also the kind of
evidence to which people are referring when they say, "it has been proven
that global warming is occurring."
On Sun, 03 Jul 2005 21:57:35 -0700, the opaque Mark & Juanita
Fraud, misreading, hysteria = the Greens.
That's what the Chicken Littles ARE, Mark. <g>
"How can we make our point with so little data to go on? Aha, make the
increments so small the data (with which we want to scare folks) is
off the charts!" Oh, and "Let's estimate data about 10x longer than
we have ANY data for."
The peers should be reviewed accordingly, wot?
Recommendation for Chicken Littles: Read Michael Crichton's book
"State of Fear" for both a great story and an excellent reference
work with detailed bibliography for further research. It will give
you a whole new perspective, I guarantee!
Annoy a politician: Be trustworthy, faithful, and honest!
http://www.diversify.com Comprehensive Website Development
SPLORF! I realize that is not your only criticism but it is hilarious
that you would base ANY criticism on the tic spacing on the temperature
axis. If they spaced the tics 10 degrees apart the plot would look the
same, it would just be harder to convert the picture to numbers.
Fiction or non-Fiction?
Daring advice! Let us know how that works out for you, unless they
take your internet access away ...
On 4 Jul 2005 12:01:09 -0700, the opaque email@example.com
Graph range has been used to hide data more than once, bubba. Here
they go the opposite direction to support falsehoods and hysteria.
Yes, global warming is real. We're coming out of the Little Ice Age.
But I don't expect to see anything like Hell on Earth any time soon,
nor do I believe that the other scientists, such as those the movie
"The Day After Tomorrow" concept was based on, have a solid data
set(read: clue), either. Chances are good that we may see a full ONE
DEGREE CENTIGRADE rise in temps this century. I'm more afraid of OJ
than I am of Global Warming.
Having read some of the news regarding the G8 summit, as well as some of
various accounts on <www.numberwatch.co.uk>, a rather interesting premise
for a science fiction story struck me. When you look at what the UK and
some of the other European nations are attempting to get Bush to agree to
regarding agreements regarding global climate change, he is not being asked
to ascribe to a political agreement backed by strong science so much as he
is being asked to sign a doctrinal statement agreeing that he and his
country "believe" in global warming and that humans are the cause for this
impending disaster. Couple that with the proposal by the UK to cut its CO2
emissions by 60% over the next decade, to issue all citizens a "carbon
allowance," and to ask the citizenry to perform various little "sacrifices,"
many of which have no real impact upon overall energy use (for example,
unplugging the VCR rather than letting it run in standby mode) but which get
the citizenry to "buy into doing their part," and you have an interesting
plot for a time in the future when the world is
dominated by the green religion whose high priests regulate the lives of
the average citizens who have been reduced to living in hovels and living a
pre-industrial lifestyle. The high priests of the religion live in
sparkling compounds high on the hills and possess all manner of
"magic" with which to assure compliance of the peasants with their lot in
life. Various rituals are practiced by which the average people are
indoctrinated with the knowledge that they are only a blight upon the
planet and that only by following the will of the Green Priests will they
be granted sufferance by the planet to live out their lives in quiet
submission and meager consumption.
Sure, had the author chosen a range from, say, -100 C to +100 C the
chart would be inscrutable. As it is, the range appears to be
chosen as any sensible person would choose it: to fit the data on the page
within comfortable margins.
BTW, why'd you change the subject from tic-spacing to range? Perhaps
you DO realize the tic spacing is arbitrary, just like the choice of origin.
The graph in question looks to me to have been prepared for some
sort of dog and pony show. If it was created by a climatologist
in the first place, I'll bet it was created to show to reporters
and politicians (and I'll also bet that they didn't understand it anyway).
It has been over a decade since I last attended a colloquium given
by a climatologist. At that time predictions were being made based
on climate models--not by looking at a graph and imagining it extended
beyond the right margin.
For example, this fellow (sorry I do not remember his name) explained
that one of the objections to a Kyoto type agreement (this was
before Kyoto) came about because some models predicted that average
annual rainfall in Siberia would decrease over about the next fifty
years but then increase over the following 100. So the Soviets
(this was back when there were still Soviets) were concerned about
not stabilizing global change at a time when Siberia was near the
driest part of the expected changes.
Note also that Siberia getting drier for fifty years and then
getting wetter for a hundred years after is a nonlinear change.
The prediction was not being made by simply extending a plot.
People who write as if the predictions made by climatologists
are based on extrapolating from dog and pony show style visual aids are:
1) Not very honest.
2) Not very bright.
3) Have been misled by people fitting 1) and/or 2) above.
I've never worked on a climate model but have no doubt that
climatologists rely on tried and true statistical methods
to fit data to their models and to make predictions from
those models, just like any other scientist.
If they underestimate the uncertainties in their data, or
overestimate the degrees of freedom in their models, their
reduced chi-squares will be too small, just like they were
when Gregor Mendel's data were fitted to his theory (not
by Mendel himself; he didn't do chi-squares). While Mendel's
theory of genetics overestimated the degrees of freedom, his
data fit modern genetic theory quite well.
If someone has a scientifically valid theory, they will have
the math to support it. The same is true for a scientifically
valid criticism of a theory.
If instead, their criticism is that the tic spacing on a graph
is too close, well, that conclusion is left as an exercise for the reader.
On 5 Jul 2005 15:51:36 -0700, the opaque firstname.lastname@example.org
<frown> Oh, never mind. <big sigh>
Would the range of the chart on a page be the same with smaller
increments, Fred? I didn't change the subject, you merely found a
way to argue semantics. But, hey, if you want to Chicken Little it,
feel free. Gotcher tinfoil headgear?
One of many criticisms. EOF, bubba.
Better Living Through Denial
http://diversify.com Dynamic Websites, PHP Apps, MySQL databases
Note followups. Please remove rec.woodworking from follow-ups.
Larry Jaques wrote:
Uh, what is bothering you? If you think some feature of the chart
was selected to deceive, why not point it out instead of making
ambiguous general statements that don't look to be relevant
to THIS particular plot?
No, that's why I don't understand how you went from 'increments'
to range, without explaining what aspect of either you thought had
been jiggered deceptively.
Perhaps you can make a criticism that addresses specific features
of the plot so somebody other than yourself can tell WTF it is to
which you refer?
Is it your opinion that the range is too large or too small?
Which, and why? What range do you think would be proper?
To what 'increments' do you refer, and what 'increment size'
do you think would be proper?
THAT one is plainly meaningless. How about some others?
On 5 Jul 2005 15:51:36 -0700, email@example.com wrote:
When the @#$% was the subject ever tic spacing? The issue is the
represented data and the range of the data that is based upon very gross
observables being used to predict global average temperature fluctuations
based upon ice core samples, tree ring size, and contemporary cultural
documentation going back over the past millennium. Those gross measurements
(again, which could be influenced by more than just temperature) were then
used to compute numbers with very small predicted increments. The
precision presented is not the precision that one would expect from such
gross measures. Had you explored the web site at which you found the
chart, you would have found that this was a conclusion from a paper by Mann
in 1998 that used the data that was summarized in that chart to predict
future global warming. The paper by Mann is one of the keystones of the
global warming adherents (not just a dog and pony show chart). The chart
is simply a summary of Mann's "research" and conclusions. There are
numerous objections to Mann's methods and his refusal to turn over *all*
of his data or algorithms <http://www.climateaudit.org/index.php?p#4>
despite being funded by the NSF. Further, problems with his methodology
are documented in <http://www.numberwatch.co.uk/2003%20October.htm#bathtub>
as well as other areas on the site. He deliberately omitted data that
corresponded to a medieval warm period, thus making his predictions for
the future look like the largest jump in history. Again, even if this
chart was only for consumption by politicians and policy makers, it was a
deliberately distorted conclusion that could only be intended to engender a
specific response regarding global warming. In order to get his infamous
2.5 C temperature rise prediction, he used the trend of the numbers to pad
the data fit rather than padding with the mean of the data (again, documented
on the numberwatch page).
... and if it was so created, it was created in order to drive a specific
conclusion and input to direct public policy. That is not a trivial, wave
your hands and dismiss-it kind of action. The politicians who used it
certainly understood the conclusions that Mann was trying to assert. The
fact that he omitted the medieval warm period further indicates that this
was not a harmless use of the data from an innocent scientist.
Where do you think that climatologists get the bases for their climate
models? Where do you think they get data that they can use to fine-tune
those models and validate them?
So, since it's been over a decade, were their models correct? Has
rainfall in Siberia been decreasing? From a quick perusal of the web, it
appears that significant flooding has occurred in Siberia in recent years
due to heavy rains as well as spring melt.
No, it was made by running a computer model. Do you know what goes into
computer models and simulations? Do you have any idea how much data and
effort is required to get a computer model to make predictions that are
reliable? I do; as I mentioned before, I've been involved in the area of
development, and integration & test for a considerable time. I know how
difficult it is to get a model to generate accurate predictions even when I
have control of a significant proportion of the test environment. To
believe that climatologists have the ability to generate models that
predict the future performance of such a complex system as the Earth's
climate yet cannot predict even short term with any significant degree of
accuracy is a stretch of epic proportions to say the least.
People who think that climatologists who generate such charts are not
attempting to influence policy and opinion are
1) Not very honest
2) Not very bright
3) Have misled themselves into believing that said climatologists are
simply objective scientists publishing reduced graphs that are being used
for purposes that they did not envision.
That Mann does not fall under the title of naive scientist can be seen in the links cited above.
Very well, and where are these climatologists getting *their* data to
validate their models? Generating models is easy, generating models that
produce accurate results is not.
Statistics does *not* make the math for a model. Statistics can be used
to validate the precision, or distribution of outcomes of a model run in a
Monte-Carlo sense, comparing the dispersion of the Monte-Carlo runs to the
dispersion of real data, but that assumes one has sufficient real data with
which to perform such a comparison and that the diversity of the variables
being modified in the model are sufficiently represented in the data set to
which the model is being compared. If all one is relying upon to predict
future events is past data being statistically processed, one has done
nothing beyond glorified curve fitting and extrapolation beyond the data
set. The real math behind models and simulations should be the
first-principles physics and chemistry that are properly applied to the
problem being modeled. Therein lies the rub: there are so many variables
and degrees of freedom (in a true modeling definition of that phrase) that
validation of the first-principles models, to the degree that one could
trust a model to predict future climate changes, is at this time insufficient.
Using such models in making public policy that can have devastating
economic effects upon peoples' lives would be a travesty. Finally, even
given that you have climatological models that have some degree of
precision, there is still the pesky problem of proving that human activity
is to blame for the phenomena being observed as root cause changes to the
future climate predictions.
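The Monte Carlo comparison described above can be sketched in a few lines. Everything here (the toy model, its coefficients, the "observations") is invented purely for illustration; it shows the shape of the check, not any real climate calculation:

```python
# Minimal sketch of the Monte-Carlo validation idea described above:
# run a toy model many times with perturbed inputs, then compare the
# spread of its outputs to the spread of the observed data.
import random, statistics

random.seed(0)

def toy_model(forcing):
    # hypothetical stand-in for a model: linear response plus noise
    return 0.8 * forcing + random.gauss(0.0, 0.1)

observations = [0.35, 0.42, 0.31, 0.50, 0.38]   # invented "real" data

# perturb the input within its assumed uncertainty and rerun
runs = [toy_model(random.gauss(0.5, 0.05)) for _ in range(1000)]

model_spread = statistics.stdev(runs)
data_spread = statistics.stdev(observations)
print(f"model spread {model_spread:.3f} vs data spread {data_spread:.3f}")
# If the two dispersions differ wildly, the model or its error budget
# is suspect -- exactly the comparison the post describes.
```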
Your statement above indicates that either you don't get it, or are being
deliberately obtuse regarding the referenced paper and the infamous "hockey
stick" chart. Think of it this way: the chart shown is the equivalent of
the final output from one of your revered climatologists' models that
predicts global average temperature will increase by 2.5C per decade
(Mann's original paper apparently stated 1C per decade, but the number was
later revised to 2.5C). This is the equivalent to your climatologists'
model prediction that rain in Siberia would decrease over the next 50
years, then increase over the next 100.
Fred, this is my last post on this subject, as it is clear that a) you
really don't get it and b) for all of your feigned objectivity and previous
comments upon how you take an objective view of all sides and then look at
the available data, you have shown that you look at that data only from a
particular worldview. You are welcome to the last word, I have better
things to do with my time.
Note followups. Please remove rec.woodworking from the distribution.
Executive summary: I'm skeptical that the "hockey stick" plot
has any predictive value. But if it does, that will be totally
dominated by the most recent data; temperatures a hundred
years ago or more are all but irrelevant.
Mark & Juanita wrote:
When Larry Jaques wrote:
"How can we make our point with so little data
to go on? Aha, make the increments so small
the data (with which we want to scare folks) is
off the charts!"
I thought he was referring to the tic spacing as 'increments'.
If not, perhaps he or you could identify at least one (1) such
'increment', such as by showing me the endpoints.
No I would not have found that because
that website was not written by Mann.
If I want to know what Bush said in
his state of the Union Message I go
to www.whitehouse.gov, not moveon.org.
If I want to know what Mann says about
the plot, I'll consult HIS writing.
A chart that is simply a summary
of someone's research and conclusions
is, by definition, a dog and pony
show style chart. Furthermore,
if any chart is a keystone in the
argument for Global Warming, that
argument is in trouble.
Nothing there appears to have
been posted by Mann.
Data destruction is a serious
problem that pervades scientific
society today. Obviously there
is good reason to keep data
proprietary to the researcher
for a reasonable period of time.
For the HST, that is ten years.
But scientists (civil servants)
working in Geophysics for NASA
and NOAA typically keep their
data proprietary forever and may
(often do) deliberately destroy
it after their papers are published.
Of course there is no honest
rational reason to destroy data
once the researcher is through
with his own analysis and publication.
No benefit accrues to the individual
researcher, to science or to humanity
from that destruction. The downside
is obvious, opportunity to learn
more from the data is lost. The upside
is completely nonexistent. Yet that
appalling practice persists.
As for his algorithms, the algorithms
ARE the science; if he didn't publish
his algorithms, he didn't publish anything.
This, er, discussion reminds me of
something written by Tolkien
in his foreword to _The Lord of the
Rings_: "Some who have read the
book, or at any rate have reviewed
it, have found it to be..."
Tolkien understood that some people
would not let a minor detail like not
having read something interfere with
their criticism and support of it. I
can't find Mann's own description of
the plot online so *I* do not know what
it is meant to portray. I'll
take a couple of educated guesses below.
The anonymous author(s) of that
webpage have not released their data
either, have they? Keeping that
in mind, let's take a look at the
"hockey stick" graph and compare
it to the "bathtub graph".
The data may be divided into three
ranges based on the error bar size.
The first range, on the left, has
the largest error bars, roughly
plus/minus 0.5 degrees, and extends
from c. AD 1000 to c. AD 1625.
The second range extends from circa
AD 1625 to c AD 1920 and looks to
have error bars of about 0.3 degrees.
The third region, beginning c AD 1920
and extending to the present time
looks to have error bars of maybe 0.1
degree. Actually there is a fourth region,
appearing to the right of AD 2000, that
does not appear to have any error bars
at all. That defies explanation, since
it post-dates the publication of the
paper, and were it a prediction, one
would expect uncertainties in the predicted
temperatures to be plotted along with
the predicted temperatures themselves.
It also seems reasonable to presume that
the most recent data are by far the most numerous
and those on the extreme left, the most sparse.
I think that one of your criticisms
is that the error bars toward the
left of the chart are too small.
Now, maybe the plot just shows data
after some degree of processing. For
example, each point may represent a
ten-year arithmetic mean of all the
temperature data within that decade,
or each point could be a running boxcar
average through the data, and so on.
I really don't know, but probably, due
to the data density, each point represents
more than a single observation. E.g.,
it is a 'meta' plot. If so, the error
bars may simply be plus/minus
two standard errors of those
means. If so,
the size of the error bars will scale
in inverse proportion to the square root
of the sample size.
That last is not opinion or deception.
That is simple statistical fact. Whether or
not the error bars are the right size (and keep
in mind, we don't even know if they ARE two-sigma)
is a matter that can ONLY be definitively
settled by arithmetic, though people who have
experience with similar data sets may be able
to take an educated guess based on analogy alone.
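The square-root scaling claimed above is easy to demonstrate. Here an assumed per-observation scatter of 0.5 degrees is used purely for illustration:

```python
# The scaling described above: the standard error of a mean shrinks in
# inverse proportion to the square root of the sample size.
import math

sigma = 0.5           # per-observation scatter, deg C (assumed value)
sems = {n: sigma / math.sqrt(n) for n in (4, 16, 64, 256)}
for n, sem in sems.items():
    print(f"n={n:4d}: standard error = {sem:.4f} C")
# Quadrupling the sample size halves the error bar.
```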
Of course if my GUESS about what is being plotted
is wrong that may also be totally irrelevant.
But supposing this is a plot of his data
set, with the error bars established by the
numerical precision within the data themselves:
A least squares fit
will be dominated by the data that are most
numerous and those with the lowest uncertainties.
ANY model fitted to those data will be
dominated by the data in the third region to the
extent that data in the second region will only have
a minor effect and those in the first region
may have a negligible effect.
So since the data that are numerous and precise are
the data from the third range, which show a rapid rise
in temperature, ANY model that is fitted to those data
will be dominated by the characteristics of that
data range. Given the steep upward slope of the data
in that third region, it is hard to imagine how
underestimating the errors in the earlier data would
actually reduce the estimated future temperature rise
extrapolated from that model.
What if the data out in that earlier flat region were
biased? What if they really should be lower or higher?
Again, that would have little effect on a model fitted
to the entire data range for precisely the same reasons.
So, what if the 'bathtub' plot data are more accurate?
They still will surely lack the precision and density of
the modern data and so still will have little effect on
any model parameters fitted to the data set.
To the untrained eye, the 'bathtub' plot looks VERY
different from the 'hockey stick' plot but both
would produce a similar result when fitting a model
that includes the third data range.
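The domination argument above can be shown numerically. In a least-squares fit with weights 1/sigma^2, shifting a noisy early point moves the fitted slope less than it would in an unweighted fit. All numbers below are invented for illustration:

```python
# Sketch of the weighting argument above: precise recent points dominate
# a weighted least-squares fit, so biasing an imprecise early point
# moves the weighted slope less than the unweighted one.
def wls_slope(xs, ys, sigmas):
    """Weighted least-squares slope, weights w_i = 1/sigma_i^2."""
    w = [1.0 / s ** 2 for s in sigmas]
    W = sum(w)
    Wx = sum(wi * x for wi, x in zip(w, xs))
    Wy = sum(wi * y for wi, y in zip(w, ys))
    Wxx = sum(wi * x * x for wi, x in zip(w, xs))
    Wxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    return (W * Wxy - Wx * Wy) / (W * Wxx - Wx ** 2)

years = [1200, 1400, 1600, 1940, 1960, 1980, 2000]
temps = [-0.2, -0.3, -0.25, 0.0, 0.1, 0.3, 0.5]
errs  = [0.5, 0.5, 0.3, 0.1, 0.1, 0.1, 0.1]   # early data far noisier
ones  = [1.0] * len(years)                     # equal-weight control

biased = temps[:]
biased[0] += 0.4        # bias the oldest, noisiest point upward

dw = abs(wls_slope(years, biased, errs) - wls_slope(years, temps, errs))
du = abs(wls_slope(years, biased, ones) - wls_slope(years, temps, ones))
print(f"weighted slope shift {dw:.2e}, unweighted shift {du:.2e}")
# The weighted fit is noticeably less sensitive to the early-data bias.
```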
Finally, there is a useful statistical
parameter, called the reduced chi-square
of the fit to the model, that is indicative
of whether or not the errors in one's data have
been estimated properly. Simply stated,
if they have, the value will be near
unity. If they have not, the value will
be above or below unity depending on whether
the uncertainties were over- or underestimated.
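That behavior can be demonstrated with toy data: quote the right error and the reduced chi-square sits near 1; understate it and the statistic balloons. The residuals here are simulated, not anyone's real data:

```python
# Numerical illustration of the reduced chi-square test described above.
import random

random.seed(1)
true_sigma = 0.5
residuals = [random.gauss(0.0, true_sigma) for _ in range(500)]
dof = len(residuals)        # no fitted parameters in this toy example

def reduced_chi2(res, quoted_sigma, dof):
    return sum((r / quoted_sigma) ** 2 for r in res) / dof

c_honest = reduced_chi2(residuals, 0.5, dof)    # errors quoted correctly
c_under = reduced_chi2(residuals, 0.25, dof)    # errors understated
c_over = reduced_chi2(residuals, 1.0, dof)      # errors overstated
print(f"honest: {c_honest:.2f}  understated: {c_under:.2f}  "
      f"overstated: {c_over:.2f}")   # roughly 1, 4, and 0.25
```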
Gregor Mendel, long after his death, was first
accused of culling his data, then exonerated
on the basis of statistics drawn from his data.
(Note, this was only possible because his data
were saved, not destroyed.) The key mistake made
by his critic was over-estimating the degrees of freedom.
How is the _chart_ a conclusion? If you refer
to the points to the right of AD 2000 as a
conclusion, how do you show they are not
properly extrapolated from the model?
Documented how? I can't even find the _word_ "pad" on that page.
What the hell do
"used trend of the numbers to pad the data"
"padding with the mean of the data."
mean? What the hell is "padding"?
What the page actually tells us
(it doesn't document that, either;
it just tells us) is that another
climatologist, Hans von Storch
et al. (HVS), using their own model,
obtained results that differed
from Mann's, but those differences
were less pronounced if noise
was added to the HVS model.
As the numberwatch author notes,
as more noise is added the
long term variability in the
data is reduced. One is inclined
to say "Doh!" Adding noise
ALWAYS reduces any measure of
variability in a data set.
Mann's data set may exhibit less
noise because he has more data,
or maybe he also added noise into
his analysis to bring his reduced
chi squares to unity. He ought to
say if he did, and for all I know,
maybe he did.
As noted above, data from medieval
times are not going to affect a fit to a
model unless, contrary to reason, they
are weighted equally with the modern
data that are far more numerous and
undoubtedly more accurate.
Why do you ask those questions? You indicated you already
know the answers. All I am saying is that the question of
whether or not the existing data base is large and precise
enough to justify a prediction is a mathematical question.
A criticism of the prediction without math is just blowing
smoke, no better than a prediction made without any mathematical support.
Irrelevant. The point is that a simple linear regression does
not have inflection points.
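That point in miniature, with invented rainfall numbers: a degree-1 least-squares line has no curvature, so it can only ever extend its trend, and could never produce a "decrease for 50 years, then increase for 100" forecast:

```python
# A straight-line (degree-1) fit can only extrapolate a straight line;
# a forecast that bends cannot come from simple linear extrapolation.
def linear_extrapolate(xs, ys, x_future):
    """Ordinary least-squares line, evaluated at x_future."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my + slope * (x_future - mx)

years = [1990, 1995, 2000, 2005]
rain = [400, 390, 380, 370]     # declining trend, mm/year (invented)

# The line just keeps going down forever; no model, no turning point.
for future in (2055, 2155):
    print(future, round(linear_extrapolate(years, rain, future)))
```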
Yes. I 'turn the crank' every day and
twice on Sundays on data sets
that include tens of thousands
of observations for medium precision
orbit determination and similar
work. We emphatically do not determine
where a satellite will be tomorrow
by simply extrapolating from where
it was today.
Nature presents numerous examples
where short-term variablity
obscures long-term trends. Take
geodetic measurements for example.
The long term movement over thousands
of years can be readily
determined by geological data, but
that long term movement is
punctuated with short-term seismic
events that, over the time
frame of an hour, are orders of
magnitude larger, making the short-term
prediction completely wrong.
Solar astronomers can better predict
the average sunspot number over
the next year than they can for
a day three days from now.
A weatherman can better predict
annual rainfall for next year than
he can how much it will rain next week.
My physical condition a hundred years
from now is much easier to predict
than my physical condition ten years from now.
There are many areas in nature in
which, due to variability, short-term
prediction is far more difficult
than long-term prediction.
Let's go back to the cornerstone
of global warming, the atmospheric
Carbon Dioxide data. The temperature
of a body is constant when the rate
at which it loses energy is the same
as the rate at which it receives energy.
The three largest sources of energy
for the Earth, by far, are radioactive
decay, dissipation of tidal energy,
and insolation. We have no significant
influence on the first two. There
are but two significant ways the Earth
loses energy, tidal dissipation and
radiative cooling. Again, we have
no influence on the former.
We have no influence on the natural
variation in the solar 'constant'.
But direct sampling of the
atmosphere makes it clear beyond
all doubt that we can influence
the Earth's albedo. We can, and
do change the balance in the
radiative transfer of energy
between the Earth and the rest of
the Universe. There is no question
that the short term effect,
meaning over a century or so,
of the introduction of more
greenhouse gasses into the atmosphere
will be a temperature rise, absent
other confounding factors.
That is predicted not by any
climate model but by the law
of the conservation of energy.
There may be confounding factors
that will counteract that temperature
rise. But unless it can be demonstrated
that there are such
factors and they are countering the
effect of greenhouse gasses, it
is not a question of whether we can
observe the change. It is a question
only of how soon we will be able to.
It won't shock me if we cannot see
a trend yet. That non-observation
will not disprove the law of the
conservation of energy. What remains
crucial is determining the magnitude
of _other_ influences on Global
Temperature and how the Earth
responds to all of them.
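For the record, the conservation-of-energy argument above reduces
to the standard zero-dimensional balance: absorbed solar power
equals emitted thermal power. This is a textbook back-of-the-
envelope calculation, not a climate model, and the numbers are
the usual rounded textbook values:

```python
# Energy balance: absorbed solar power = emitted thermal power.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0           # solar constant, W m^-2
ALBEDO = 0.30         # fraction of sunlight reflected

# Sunlight is intercepted over a disc but emitted over the whole
# sphere, hence the factor of 4.
absorbed = S0 * (1.0 - ALBEDO) / 4.0   # W m^-2

# Emitted flux is SIGMA * T^4; set absorbed = emitted and solve for T.
T_eff = (absorbed / SIGMA) ** 0.25

print(f"effective temperature: {T_eff:.0f} K")
```

That comes out near 255 K. Raise the albedo and T drops; trap more
outgoing radiation and the surface must warm above that effective
value to restore the balance. That is the whole point being made:
the sign of the effect follows from bookkeeping, not from any model.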
What would their motive be?
Your commentary is at least as impressive as
any commentary read in the Washington Times.
Hell, you probably at least know some math and
science, I'm less than confident of the same
for the editorial staff of the Washington Times.
You've indicated a variety of sources yourself, so why ask?
If one's real data are insufficient in quality
or quantity, this will result
in large uncertainties in the predictions.
I quite agree. Of course, I have no idea if that is what Mann
did, or not.
I have no revered climatologists. You earlier referred to the
chart as a plot of DATA; now you call it 'equivalent to the final
output...' I don't see how data input to a model can be considered
equivalent to the output from a model.
As I said before, I haven't read the paper. It appears that neither
Have you found anything written about the chart by the persons who
(allegedly) created it?
The choice of origin is still as arbitrary
as it was when Descartes introduced (or popularized) x-y plots.
E.g. they could have used 0 degrees C as their origin; the
interpretation would be the same, though John McCain would need
stilts or a very long pointer when using the chart.
"Intended to cause alarm?" That implies MOTIVE. Can you show that
the chart was drawn that way "to cause alarm" rather than to
conform to the data?
Planning for the future should be based on predictions for the future
from Climate models that are validated by close fits to historical
data. You can't extrapolate by 'looking at' a plot, for any but the
simplest of linear models. I don't think climate models fall into
that category so I don't see how the chart in question fits into
the scientific debate.
That doesn't make any sense. How well the data fit the model over
the period of observation is how one tests the validity of a model.
Not how noisy it 'looks'. Very seldom can one look at a plot of
real-world data and see something meaningful. The question one
needs to ask as a first step to determining the predictive value
of the model in question is how well it fits the data.
As you know, mathematically valid results may be extracted from data
that, to the human eye, appear to be randomly distributed, even as
the human eye may 'see' trends in data where mathematics tells us
there are none.
As you know, whether or not the estimated uncertainties in the data
are correct can be objectively tested. So, have they been?
I do not see that the chart extrapolates any deltas anywhere.
The chart shows mean temperatures (still undefined) vs time.
As you know, the standard deviation of a mean is inversely
proportional to the square root of the number of observations.
The size of the statistically correct error bars on any 'average'
can be made arbitrarily small simply by gathering enough data.
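If you doubt the square-root law, it takes about ten lines to
check numerically (Python; the per-observation scatter is a
made-up number, not anyone's proxy data):

```python
import random
import statistics

random.seed(1)

SIGMA = 0.5  # per-observation scatter (hypothetical)

def sd_of_mean(n, trials=200):
    """Scatter of the sample mean across repeated experiments of size n."""
    means = [statistics.mean(random.gauss(0.0, SIGMA) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# Quadrupling the sample size halves the scatter of the mean.
sd_100 = sd_of_mean(100)    # expect roughly SIGMA / 10 = 0.05
sd_400 = sd_of_mean(400)    # expect roughly SIGMA / 20 = 0.025
print(sd_100, sd_400)
```

So a point built from thousands of individual proxy observations
can legitimately carry a far smaller error bar than any one
observation does.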
Please do not lie about me. I am not playing games with semantics.
In Mathematics and Science words are carefully defined so as to
facilitate communication. When someone starts throwing them around
without regard to those definitions, communication is obstructed.
If you claim the errors are underestimated, what is your basis?
Are his chi squares too small? Also consider that we are only
GUESSING that those are error bars and even if we are, we do not
know for what confidence interval. Still, if you show your
arithmetic, I'll check it out.
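For anyone unfamiliar with the chi-square test being invoked here:
the reduced chi-square compares the claimed error bars to the
actual scatter; near 1 means the claimed errors match, well above
1 means they were underestimated. A sketch with simulated data
(both sigma values are made up):

```python
import random

random.seed(7)

TRUE_VALUE = 0.0
SIGMA = 0.3          # the error bar we CLAIM for each observation
N = 1000

data = [TRUE_VALUE + random.gauss(0.0, SIGMA) for _ in range(N)]
mean = sum(data) / N

# Reduced chi-square: sum of squared, error-weighted residuals per
# degree of freedom (N - 1, since the mean was fit from the data).
chi2 = sum(((x - mean) / SIGMA) ** 2 for x in data)
reduced = chi2 / (N - 1)
print(f"reduced chi-square: {reduced:.2f}")       # close to 1

# Now claim error bars half their true size: chi-square blows up,
# exposing the underestimate.
chi2_under = sum(((x - mean) / (SIGMA / 2)) ** 2 for x in data)
reduced_under = chi2_under / (N - 1)
print(f"with underestimated errors: {reduced_under:.2f}")
```

That is the objective test: an accusation of underestimated errors
is checkable arithmetic, not a matter of how the plot looks.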
Are you SERIOUSLY objecting to the _tic spacing_ on the vertical
axis? If so, you'd better be tough; that is just as arbitrary as
the choice of origin.
BTW, by 'arbitrary' I mean 'has no effect on interpretation',
hence my comment regarding toughness.
None of them do. Please explain what you mean, rather than play
games with semantics. Do you accuse the author of underestimating
the uncertainty in his data? If so, show your evidence.
The chart is not a claim by anyone "that by measuring tree ring size,
one can determine the average temperature of an area to within tenths
of a degree." THAT is obvious, just by looking at it. We don't
even know what each 'point' being plotted actually represents. It
is possible that each point on the chart is actually extracted from
its own database, each with a large number of observations. As you
know, the variance decreases in inverse proportion to the number of
data points. E.g. the chart may represent a so-called 'meta'
study, an examination of an ensemble of other persons' results,
treating their conclusions as data.
Why don't we know these things? Well, for starters, we haven't
yet found anything written about this chart by the author, have we?
I also don't believe any scientist is basing predictions about
future climate on THAT chart. That's not the way scientists make
predictions, especially about the future. That the chart gets
presented a lot does not mean that anyone who knows a burro
from a burrow actually uses it for anything other than illustrative
purposes.
Do you claim that the chart is a fake, not supported by data?
If so, what is your evidence?
I think that the scientific community is pretty heavily (95% vs 5% ??)
on the side of human causation of at least a great deal of the global
warming. But you're right, there is no absolute proof. But can we
afford to wait
till there is?
And it is pretty well established that human produced CFCs are
responsible for the loss of some of the protective ozone layer. That's
pretty global :-).
I believe that in some of my other postings I indicated that there is no
doubt that one can screw up one's local environment and that conclusive
evidence for this exists.
If you're gonna be dumb, you better be tough
1) Fire was never common or widespread over large areas in the
Eastern US the way it is in some other parts of the world.
Succession was more often set back by beaver and ice storms.
2) Much of the forest surrounding the lakes to which OP referred is
But I do agree that more of the forest should be left for Mother
Nature to manage. But she does have a lot of management tools