Too good to be true?

On Mon, 04 Jul 2005 17:11:53 -0700, Larry Jaques wrote:

... snip

Having read some of the news regarding the G8 summit, as well as various other accounts, a rather interesting premise for a science fiction story struck me. When you look at what the UK and some of the other European nations are attempting to get Bush to agree to regarding global climate change, he is not being asked to subscribe to a political agreement backed by strong science so much as he is being asked to sign a doctrinal statement agreeing that he and his country "believe" in global warming and that humans are the cause of this impending disaster. Couple that with the proposal by the UK to cut its CO2 emissions by 60% over the next decade and to issue all citizens a "carbon allowance," as well as the various little "sacrifices" the citizenry is being asked to perform, many of which have no real impact upon overall energy use (for example, unplugging the VCR rather than leaving it in standby mode) but do get the citizenry to "buy into doing their part."

An interesting plot for a time in the future when the world is dominated by the green religion, whose high priests regulate the lives of the average citizens, who have been reduced to living in hovels and leading a pre-industrial lifestyle. The high priests of the religion live in sparkling compounds high on the hills and possess all manner of "magic" with which to assure the peasants' compliance with their lot in life. Various rituals are practiced by which the average people are indoctrinated with the knowledge that they are only a blight upon the planet, and that only by following the will of the Green Priests will they be granted sufferance by the planet to live out their lives in quiet submission and meager consumption.

+--------------------------------------------------------------------------------+ If you're gonna be dumb, you better be tough +--------------------------------------------------------------------------------+
Reply to
Mark & Juanita

Sure, had the author chosen a range from, say, -100 C to +100 C, the chart would be inscrutable. As it is, the range appears to be chosen as any sensible person would choose it, to fit the data on the page within comfortable margins.

BTW, why'd you change the subject from tic-spacing to range? Perhaps you DO realize the tic spacing is arbitrary, just like the choice of origin?

The graph in question looks to me to have been prepared for some sort of dog and pony show. If it was created by a climatologist in the first place, I'll bet it was created to show to reporters and politicians (and I'll also bet that they didn't understand it anyway).

It has been over a decade since I last attended a colloquium given by a climatologist. At that time predictions were being made based on climate models, not by looking at a graph and imagining it extended beyond the right margin.

For example, this fellow (sorry, I do not remember his name) explained that one of the objections to a Kyoto-type agreement (this was before Kyoto) came about because some models predicted that average annual rainfall in Siberia would decrease over about the next fifty years but then increase over the following 100. So the Soviets (this was back when there were still Soviets) were concerned about not stabilizing global change at a time when Siberia was near the driest part of the expected changes.

Note also that Siberia getting drier for fifty years and then getting wetter for a hundred years after that is a nonlinear change. The prediction was not being made by simply extending a plot.

People who write as if the predictions made by climatologists are based on extrapolating from dog and pony show style visual aids are:

1) Not very honest, or
2) Not very bright, or
3) Have been misled by people fitting 1) and/or 2) above.

I've never worked on a climate model but have no doubt that climatologists rely on tried and true statistical methods to fit data to their models and to make predictions from those models, just like any other scientist.

If they overestimate the uncertainties in their data, or overestimate the degrees of freedom in their models, their reduced chi-squares will be too small, just as they were when Gregor Mendel's data were fitted to his theory. (Not by Mendel himself; he didn't do chi-squares.) While the original analysis of Mendel's data overestimated the degrees of freedom, his data fit modern genetic theory quite well.
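To make that concrete, here is a minimal sketch in Python of the kind of chi-square arithmetic involved, using illustrative counts for a 3:1 Mendelian ratio (the counts are made up for the example, not taken from Mendel's papers):

import numpy as np
from scipy.stats import chi2

observed = np.array([740, 260])                      # illustrative dominant/recessive counts
expected = observed.sum() * np.array([0.75, 0.25])   # expected counts under a 3:1 ratio

chi_sq = np.sum((observed - expected) ** 2 / expected)
dof = len(observed) - 1                              # categories minus one constraint
reduced = chi_sq / dof
p_value = chi2.sf(chi_sq, dof)

print(f"chi-square = {chi_sq:.3f}, reduced = {reduced:.3f}, p = {p_value:.3f}")

A reduced chi-square that sits far below unity across many such experiments is the sort of "too good to be true" fit that led to the (later disputed) suggestion that Mendel's data had been culled.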

If someone has a scientifically valid theory, they will have the math to support it. The same is true for a scientifically valid criticism of a theory.

If instead, their criticism is that the tic spacing on a graph is too close, well, that conclusion is left as an exercise for the reader.

Reply to
fredfighter

Oh, never mind.

Would the range of the chart on a page be the same with smaller increments, Fred? I didn't change the subject, you merely found a way to argue semantics. But, hey, if you want to Chicken Little it, feel free. Gotcher tinfoil headgear?

--snip--

One of many criticisms. EOF, bubba.

- Better Living Through Denial ------------

formatting link
Dynamic Websites, PHP Apps, MySQL databases

Reply to
Larry Jaques

after the sugars have been fermented into alcohol, what's left is mostly cellulose, right?

make it into MDF.

Reply to
bridger

On 5 Jul 2005 15:51:36 -0700, snipped-for-privacy@spamcop.net wrote:

When the @#$% was the subject ever tic spacing? The issue is the represented data and the range of that data, which is based upon very gross observables (ice core samples, tree ring size, and contemporary cultural documentation going back the past millennium) being used to predict global average temperature fluctuations. Those gross measurements (again, which could be influenced by more than just temperature) were then used to compute numbers with very small predicted increments. The precision presented is not the precision that one would expect from such gross measures.

Had you explored the web site at which you found the chart, you would have found that this was a conclusion from a paper by Mann in 1998 that used the data summarized in that chart to predict future global warming. The paper by Mann is one of the keystones of the global warming adherents (not just a dog and pony show chart). The chart is simply a summary of Mann's "research" and conclusions. There are numerous objections to Mann's methods and to his refusal to turn over *all* of his data or algorithms despite being funded by the NSF. Further, problems with his methodology are documented there, as well as in other areas on the site. He deliberately omitted data that corresponded to a medieval warm period, thus making his predictions for the future look like the largest jump in history. Again, even if this chart was only for consumption by politicians and policy makers, it was a deliberately distorted conclusion that could only be intended to engender a specific response regarding global warming. In order to get his infamous 2.5C temperature rise prediction, he used the trend of the numbers to pad the data fit rather than padding with the mean of the data (again, documented on the numberwatch page).
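To illustrate what "padding" means here: when a centered moving average runs off the end of a series, the series has to be extended somehow before smoothing, and extending it with the recent trend pulls the smoothed endpoint up (for a rising series) relative to extending it with the mean. The sketch below, in Python, is only an illustration of those two options; it is not a reproduction of Mann's actual procedure, and every number in it is invented:

import numpy as np

def smooth_with_padding(y, window, mode="mean"):
    # Extend the series past its last point, then run a centered moving average.
    half = window // 2
    if mode == "mean":
        pad = np.full(half, y.mean())                  # pad with the series mean
    else:                                              # "trend": continue the recent slope
        slope = (y[-1] - y[-window]) / (window - 1)
        pad = y[-1] + slope * np.arange(1, half + 1)
    extended = np.concatenate([y, pad])
    kernel = np.ones(window) / window
    return np.convolve(extended, kernel, mode="same")[: len(y)]

rng = np.random.default_rng(0)
y = np.linspace(0.0, 0.6, 50) + rng.normal(0, 0.05, 50)   # a rising, noisy series

print(smooth_with_padding(y, 11, "mean")[-3:])    # endpoint pulled toward the overall mean
print(smooth_with_padding(y, 11, "trend")[-3:])   # endpoint follows the recent rise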

... and if it was so created, it was created in order to drive a specific conclusion and input to direct public policy. That is not a trivial, wave your hands and dismiss-it kind of action. The politicians who used it certainly understood the conclusions that Mann was trying to assert. The fact that he omitted the medieval warm period further indicates that this was not a harmless use of the data from an innocent scientist.

Where do you think that climatologists get the bases for their climate models? Where do you think they get data that they can use to fine-tune those models and validate them?

So, since it's been over a decade, were their models correct? Has rainfall in Siberia been decreasing? From a quick perusal of the web, it appears that significant flooding has occurred in Siberia in recent years due to heavy rains as well as spring melt.

No, it was made by running a computer model. Do you know what goes into computer models and simulations? Do you have any idea how much data and effort is required to get a computer model to make predictions that are reliable? I do; as I mentioned before, I've been involved in development, integration, and test in this area for a considerable time. I know how difficult it is to get a model to generate accurate predictions even when I have control of a significant proportion of the test environment. To believe that climatologists have the ability to generate models that predict the future behavior of a system as complex as the Earth's climate, when those models cannot predict even the short term with any significant degree of accuracy, is a stretch of epic proportions, to say the least.

People who think that climatologists who generate such charts are not attempting to influence policy and opinion are

1) Not very honest, or
2) Not very bright, or
3) Have misled themselves into believing that said climatologists are simply objective scientists publishing reduced graphs that are being used for purposes they did not envision.

Evidence that Mann does not fall under the title of naive scientist can be found at the site referenced above.

Very well, and where are these climatologists getting *their* data to validate their models? Generating models is easy; generating models that produce accurate results is not.

Statistics does *not* make the math for a model. Statistics can be used to validate the precision, or distribution of outcomes, of a model run in a Monte Carlo sense, comparing the dispersion of the Monte Carlo runs to the dispersion of real data, but that assumes one has sufficient real data with which to perform such a comparison, and that the diversity of the variables being modified in the model is sufficiently represented in the data set to which the model is being compared. If all one is relying upon to predict future events is past data being statistically processed, one has done nothing beyond glorified curve fitting and extrapolation beyond the data set.

The real math behind models and simulations should be the first-principles physics and chemistry properly applied to the problem being modeled. Therein lies the rub: there are so many variables and degrees of freedom (in a true modeling definition of that phrase) that the first-principles models cannot, at this time, be validated to the degree that one could trust them to predict future climate changes. Using such models to make public policy that can have devastating economic effects upon people's lives would be a travesty. Finally, even given climatological models that have some degree of precision, there is still the pesky problem of proving that human activity is the root cause of the phenomena driving those future climate predictions.
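For what it's worth, here is a minimal sketch in Python of the kind of Monte Carlo comparison I mean. The "model", the noise levels, and the "observations" are all invented for illustration; the point is only the bookkeeping of comparing the dispersion of many perturbed model runs against the scatter in real data:

import numpy as np

rng = np.random.default_rng(0)

def toy_model(forcing, sensitivity):
    # Toy response: output is just sensitivity times forcing plus noise.
    return sensitivity * forcing + rng.normal(0, 0.05, size=forcing.shape)

forcing = np.linspace(0.0, 1.0, 100)
observations = 0.8 * forcing + rng.normal(0, 0.08, size=forcing.shape)   # fake "real" data

# Monte Carlo: perturb the uncertain parameter and collect the outputs.
runs = np.array([toy_model(forcing, rng.normal(0.8, 0.1)) for _ in range(1000)])

print("spread of model runs:   ", runs.std(axis=0).mean())
print("spread of observations: ", (observations - 0.8 * forcing).std())
# If the model-run dispersion is much smaller than the scatter in the real
# data, the model (or its assumed input uncertainties) is probably too optimistic.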

Your statement above indicates that either you don't get it, or you are being deliberately obtuse regarding the referenced paper and the infamous "hockey stick" chart. Think of it this way: the chart shown is the equivalent of the final output from one of your revered climatologists' models, a model that predicts global average temperature will increase by 2.5C per decade (Mann's original paper apparently stated 1C per decade, but the number was later revised to 2.5C). This is the equivalent of your climatologists' model prediction that rain in Siberia would decrease over the next 50 years, then increase over the next 100.

Fred, this is my last post on this subject, as it is clear that a) you really don't get it and b) for all of your feigned objectivity and previous comments about how you take an objective view of all sides and then look at the available data, you have shown that you look at that data only from a particular worldview. You are welcome to the last word; I have better things to do with my time.

+--------------------------------------------------------------------------------+ If you're gonna be dumb, you better be tough +--------------------------------------------------------------------------------+
Reply to
Mark & Juanita

Cellulose is sugar. Breaking it up might be useful.

Reply to
George

I think that is where the NEV would go negative... but I've not looked into the chemical process balance in depth as yet.

Reply to
Duane Bozarth

The good news is that you would not be ABLE to "examine each one". Google limits you to the first 1000 URLs (or, roughly, the first 100 pages) it returns, even if it DOES claim to have found thousands of them. To add to the annoyance, they also return the pages according to their "ranking" system... which gives you the pages with the most other links TO them first. As a researcher, I find this REALLY annoying, because the real treasures are not found on the paths that EVERYONE has plodded down; rather, they are found in the dusty back shelves where no one has been for years. However, Google's response to this is "if you are getting that many hits you are doing a bad search and should improve it".

Regards,
Dave Mundt

Reply to
Dave Mundt

The easiest way to break up the cellulose is to feed it to cows. Nothing negative to that; the cows make it into milk or beefsteak.

Reply to
George E. Cawthon

That is what is done with it at present; the suggestion was to process it further chemically as part of the ethanol extraction process, and that processing is what would take more energy in than the additional energy out.

As noted earlier, it's likely in my estimation that a limiting factor in the economics of biofuels will be the saturation of markets for the secondary products unless major new/additional usages can be created/found.

Reply to
Duane Bozarth

Well, no. Digestion by the same bacteria that fill the gut of ungulates, to yield methane, would be the more appropriate suggestion.

Reply to
George

That would be called a "cow"... :)

Sorry, I misinterpreted your first suggestion...

Reply to
Duane Bozarth

Yeah, I saw that episode, and I think that the BIG issue there was that they were driving at a fairly low speed. They were limited to 45 MPH, and at that rate I am not sure that the drag would make a difference. It ALSO might well have been the vehicles. I recall a sedan from some years ago that got about 15% better gasoline mileage when driving at interstate speeds with the A/C on. This was kind of surprising to me, but we ran several cycles of testing over tanks of gasoline, and it was quite consistent. Another factor is that the blast of wind through the windows can be PRETTY irritating after a bit... I much prefer the low hum of the A/C fans.

Regards,
Dave Mundt

Reply to
Dave Mundt

Uh, what is bothering you? If you think some feature of the chart was selected to deceive, why not point it out instead of making ambiguous general statements that don't look to be relevant to THIS particular plot?

No, that's why I don't understand how you went from 'increments' to range, without explaining what aspect of either you thought had been jiggered deceptively.

Perhaps you can make a criticism that addresses specific features of the plot so somebody other than yourself can tell WTF it is to which you refer?

In your opinion, is the range too large or too small?

Which and why? What range do you think would be proper?

To what 'increments' do you refer, and what 'increment size' do you think would be proper?

THAT one is plainly meaningless. How about some others?

Reply to
fredfighter

Note followups. Please remove rec.woodworking from the distribution.

Executive summary: I'm skeptical that the "hockey stick" plot has any predictive value. But if it does, that value will be totally dominated by the modern data, not the early proxy data.

When Larry Jaques wrote: "How can we make our point with so little data to go on? Aha, make the increments so small the data (with which we want to scare folks) is off the charts!"

I thought he was referring to the tic spacing as 'increments'. If not, perhaps he or you could identify at least one (1) such 'increment', such as by showing me the endpoints.

No, I would not have found that, because that website was not written by Mann. If I want to know what Bush said in his State of the Union message I go to

formatting link
not moveon.org. If I want to know what Mann says about the plot, I'll consult HIS writing.

A chart that is simply a summary of someone's research and conclusions is, by definition, a dog and pony show style chart. Furthermore, if any chart is a keystone in the argument for Global Warming it is this:

formatting link
You wrote that there are numerous objections to Mann's methods and his refusal to turn over *all* of his data or algorithms despite being funded by the NSF.

Nothing there appears to have been posted by Mann.

Data destruction is a serious problem that pervades scientific society today. Obviously there is good reason to keep data proprietary to the researcher for a reasonable period of time. For the HST, that is ten years. But scientists (civil servants) working in geophysics for NASA and NOAA typically keep their data proprietary forever and may (often do) deliberately destroy it after their papers are published.

Of course there is no honest, rational reason to destroy data once the researcher is through with his own analysis and publication. No benefit accrues to the individual researcher, to science, or to humanity from that destruction. The downside is obvious: the opportunity to learn more from the data is lost. The upside is completely nonexistent. Yet that appalling practice persists.

As for his algorithms, the algorithms ARE the science, if he didn't publish his algorithms, he didn't publish anything of value.

This, er, discussion reminds me of something written by Tolkien in his foreword to _The Lord of the Rings_: "Some who have read the book, or at any rate have reviewed it, have found it to be..."

Tolkien understood that some people would not let a minor detail like not having read something interfere with their criticism and support of it. I can't find Mann's own description of the plot online so *I* do not know what it is meant to portray. I'll take a couple of educated guesses below.

The anonymous author(s) of that webpage have not released their data either, have they? Keeping that in mind, let's take a look at the "hockey stick" graph and compare it to the "bathtub graph".

The data may be divided into three ranges based on the error bar size. The first range, on the left, has the largest error bars, roughly plus/minus 0.5 degrees, and extends from c. AD 1000 to c. AD 1625. The second range extends from c. AD 1625 to c. AD 1920 and looks to have error bars of about 0.3 degrees. The third region, beginning c. AD 1920 and extending to the present time, looks to have error bars of maybe 0.1 degree. Actually there is a fourth region, appearing to the right of AD 2000, that does not appear to have any error bars at all. That defies explanation, since it post-dates the publication of the paper, and were it a prediction, one would expect uncertainties in the predicted temperatures to be plotted along with the predicted temperatures themselves.

It also seems reasonable to presume that the most recent data are by far the most numerous and those on the extreme left, the most sparse.

I think that one of your criticisms is that the error bars toward the left of the chart are too small. Now, maybe the plot just shows data after some degree of processing. For example, each point may represent a ten-year arithmetic mean of all the temperature data within that decade, each point could be a running boxcar average through the data, and so on. I really don't know, but probably, due to the data density, each point represents more than a single observation. E.g., it is a 'meta' plot. If so, the error bars may simply be plus/minus two sigmas of the standard deviation of those means (i.e., two standard errors). If so, the size of the error bars will scale in inverse proportion to the square root of the sample size.
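To put a number on that scaling, here is a small sketch in Python. The per-observation scatter of 0.5 degrees is an assumption made up for this illustration, not something taken from the paper:

import numpy as np

sigma_single = 0.5        # assumed scatter of a single proxy observation, in degrees

for n in (1, 4, 25, 100):
    two_sigma_of_mean = 2 * sigma_single / np.sqrt(n)   # two-sigma error bar on a mean of n points
    print(f"N = {n:4d} observations per point: two-sigma error bar ~ {two_sigma_of_mean:.2f} deg")

# Roughly 1.0, 0.5, 0.2 and 0.1 degrees: the same ordering as the three
# regions described above, with no deception required.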

That last is not opinion or deception. That is simple statistical fact. Whether or not the error bars are the right size (and keep in mind, we don't even know if they ARE two-sigma) is a matter that can ONLY be definitively settled by arithmetic, though people who have experience with similar data sets may be able to take an educated guess based on analogy alone.

Of course if my GUESS about what is being plotted is wrong that may also be totally irrelevant.

But suppose this is a plot of his data set, with the error bars established by the numerical precision within the data themselves.

A least squares fit will be dominated by the data that are most numerous and those with the lowest uncertainties. ANY model fitted to those data will be dominated by the data in the third region, to the extent that data in the second region will have only a minor effect and those in the first region may have a negligible effect.

So since the data that are numerous and precise are the data from the third range, which show a rapid rise in temperature, ANY model that is fitted to those data will be dominated by the characteristics of that data range. Given the steep upward slope of the data in that third region, it is hard to imagine how underestimating the errors in the earlier data could appreciably change the estimated future temperature rise extrapolated from that model.

What if the data out in that earlier flat region were biased? What if they really should be lower or higher? Again, that would have little effect on a model fitted to the entire data range, for precisely the same reasons.

So, what if the 'bathtub' plot data are more accurate? They still will surely lack the precision and density of the modern data and so still will have little effect on any model parameters fitted to the data set.

To the untrained eye, the 'bathtub' plot looks VERY different from the 'hockey stick' plot but both would produce a similar result when fitting a model that includes the third data range.
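Here is a small sketch in Python of why that is so. The 'data' are a caricature of the two plots (a flat shaft with a rising modern blade, big error bars early and small ones late), fitted with a weighted straight line just to show where the weight sits; none of the numbers come from the actual papers:

import numpy as np

def weighted_line_fit(x, y, sigma):
    # Weighted least squares for y = a + b*x, with weights w = 1/sigma^2.
    w = 1.0 / sigma**2
    A = np.vstack([np.ones_like(x), x]).T
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)   # returns (a, b)

x_early = np.linspace(1000, 1900, 30)      # sparse early proxies, big error bars
x_late = np.linspace(1920, 2000, 300)      # dense modern record, small error bars
x = np.concatenate([x_early, x_late])
sigma = np.concatenate([np.full(30, 0.5), np.full(300, 0.1)])
y = np.where(x < 1900, 0.0, 0.01 * (x - 1900))         # flat shaft, rising blade

a0, b0 = weighted_line_fit(x, y, sigma)
# Now shift the entire early segment up by 0.3 degrees (a 'warm period').
a1, b1 = weighted_line_fit(x, y + np.where(x < 1900, 0.3, 0.0), sigma)

print(f"fitted slope, early data flat    : {b0:.5f} deg/yr")
print(f"fitted slope, early data +0.3 deg: {b1:.5f} deg/yr")
# The slope barely moves: the dense, precise modern data carry nearly all the weight.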

Finally, there is a useful statistical parameter called the reduced chi-square of the fit to the model, which is indicative of whether or not the errors in one's data have been estimated properly. Simply stated, if they have, the value will be near unity. If they have not, the value will be above or below unity depending on whether the uncertainties were under- or overestimated.
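Again, a minimal sketch in Python with synthetic data, just to show how the diagnostic behaves when the assumed error bars are right, too small, or too big:

import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 200)
true_sigma = 0.3
y = 1.5 + 0.2 * x + rng.normal(0, true_sigma, size=x.size)

coeffs = np.polyfit(x, y, 1)              # straight line, two fitted parameters
residuals = y - np.polyval(coeffs, x)
dof = x.size - 2                          # observations minus fitted parameters

for label, assumed_sigma in [("correct", 0.3), ("underestimated", 0.1), ("overestimated", 0.9)]:
    reduced = np.sum((residuals / assumed_sigma) ** 2) / dof
    print(f"errors {label:>14s}: reduced chi-square = {reduced:.2f}")

# Near 1 when the error bars are right, well above 1 when they are too small,
# well below 1 when they are too large.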

Gregor Mendel, long after his death, was first accused of culling his data, then exonerated on the basis of statistics drawn from his data. (Note, this was only possible because his data were saved, not destroyed.) The key mistake made by his critic was overestimating the degrees of freedom.

How is the _chart_ a conclusion? If you refer to the points to the right of AD 2000 as a conclusion, how do you show they are not properly extrapolated from the model?

Documented how? I can't even find the _word_ "pad" on that page.

What the hell do "used trend of the numbers to pad the data" and "padding with the mean of the data" mean? What the hell is "padding"?

What the page actually tells us (it doesn't document that either; it just tells us) is that other climatologists, Hans von Storch et al. (HVS), using their own model, obtained results that differed from Mann's, but those differences were less pronounced if noise was added to the HVS model.

As the numberwatch author notes, as more noise is added, the long-term variability in the data is reduced. One is inclined to say "Doh!" Adding noise ALWAYS reduces any measure of variability in a data set.

Mann's data set may exhibit less noise because he has more data, or maybe he also added noise into his analysis to bring his reduced chi-squares to unity. He ought to say if he did, and for all I know, maybe he did.

As noted above, data from medieval times are not going to affect a fit to a model unless, contrary to reason, they are weighted equally with the modern data, which are far more numerous and undoubtedly more accurate.

Why do you ask those questions? You indicated you already know the answers. All I am saying is that the question of whether or not the existing data base is large and precise enough to justify a prediction is a mathematical question. A criticism of the prediction without math is just blowing smoke, no better than a prediction made without any mathematical modeling.

Irrelevant. The point is that a simple linear regression does not have inflection points.

...

Yes. I 'turn the crank' every day and twice on Sundays on data sets that include tens of thousands of observations for medium precision orbit determination and similar work. We emphatically do not determine where a satellite will be tomorrow by simply extrapolating from where it was today.

Nature presents numerous examples where short-term variability obscures long-term trends. Take geodetic measurements, for example. The long-term movement over thousands of years can be readily determined from geological data, but that long-term movement is punctuated by short-term seismic events that, over the time frame of an hour, are orders of magnitude larger, making the short-term prediction completely wrong.

Solar astronomers can better predict the average sunspot number over the next year than they can the number for a day three days from now.

A weatherman can better predict annual rainfall for next year than he can how much it will rain next week.

My physical condition a hundred years from now is much easier to predict than my physical condition ten years from now.

There are many areas in nature in which, because of short-term variability, short-term prediction is far more difficult than long-term prediction.

Let's go back to the cornerstone of global warming, the atmospheric carbon dioxide data. The temperature of a body is constant when the rate at which it loses energy is the same as the rate at which it receives energy. The three largest sources of energy for the Earth, by far, are radioactive decay, dissipation of tidal energy, and insolation. We have no significant influence on the first two. There are but two significant ways the Earth loses energy, tidal dissipation and radiative cooling. Again, we have no influence on the former.

We have no influence on the natural variation in the solar 'constant'. But direct sampling of the atmosphere makes it clear beyond all doubt that we can influence the Earth's albedo. We can, and do, change the balance in the radiative transfer of energy between the Earth and the rest of the Universe. There is no question that the short-term effect, meaning over a century or so, of the introduction of more greenhouse gasses into the atmosphere will be a temperature rise, absent other confounding factors. That is predicted not by any climate model but by the law of the conservation of energy.
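To make the bookkeeping concrete, here is a minimal sketch of that energy balance in Python. The solar constant and the Stefan-Boltzmann constant are standard values; the one-percent albedo change is purely an assumed perturbation for illustration, not a measured figure:

import math

S = 1361.0          # solar constant, W/m^2
sigma = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(albedo):
    # Absorbed sunlight S*(1 - albedo)/4 balanced against emitted heat sigma*T^4
    return (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25

t0 = equilibrium_temp(0.30)
t1 = equilibrium_temp(0.29)   # hypothetical small change in reflectivity
print(f"baseline effective temperature: {t0:.1f} K")
print(f"with albedo reduced by 0.01:    {t1:.1f} K  (a change of {t1 - t0:+.1f} K)")

(Greenhouse gasses actually act on the outgoing, long-wave side of the ledger rather than on the albedo, but the bookkeeping is the same: shift either side of the balance and the equilibrium temperature must shift with it.)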

There may be confounding factors that will counteract that temperature rise. But unless it can be demonstrated that there are such factors and that they are countering the effect of greenhouse gasses, it is not a question of if we can observe the change. It is a question only of how soon we will be able to.

It won't shock me if we cannot see a trend yet. That non-observation will not disprove the law of the conservation of energy. What remains crucial is determining the magnitude of _other_ influences on global temperature and how the Earth responds to all of them.

...

What would their motive be?

Your commentary is at least as impressive as any commentary read in the Washington Times. Hell, you probably at least know some math and science, I'm less than confident of the same for the editorial staff of the Washington Times.

You've indicated a variety of sources yourself, so why ask?

If one's real data are insufficient in quality or quantity, this will result in large uncertainties in the predictions.

I quite agree. Of course, I have no idea if that is what Mann did, or not.

I have no revered climatologists. You earlier referred to the chart as a plot of DATA; now you call it 'equivalent to the final output...' I don't see how data input to a model can be considered equivalent to the output from a model.

As I said before, I haven't read the paper. It appears that neither have you.

Reply to
fredfighter
