UK's Climate Research Unit server Hacked -- Cat out of bag
-
- Posts: 20455
- Joined: Sat Jun 05, 2004 1:53 pm
- Title: Forum commie nun.
- Location: Stirring the porridge with my spurtle.
Interesting article in today's Grauniad.
What say our IT mavens?
Many climate scientists have refused to publish their computer programs. I suggest that this is both unscientific behaviour and, equally importantly, ignores a major problem: that scientific software has got a poor reputation for error.
There is enough evidence for us to regard a lot of scientific software with worry. For example Professor Les Hatton, an international expert in software testing resident in the Universities of Kent and Kingston, carried out an extensive analysis of several million lines of scientific code. He showed that the software had an unacceptably high level of detectable inconsistencies.
For example, interface inconsistencies between software modules which pass data from one part of a program to another occurred at the rate of one in every seven interfaces on average in the programming language Fortran, and one in every 37 interfaces in the language C. This is hugely worrying when you realise that just one error — just one — will usually invalidate a computer program. What he also discovered, even more worryingly, is that the accuracy of results declined from six significant figures to one significant figure during the running of programs.
Hatton and other researchers' work indicates that scientific software is often of poor quality. What is staggering about the research that has been done is that it examines commercial scientific software – produced by software engineers who have to undergo a regime of thorough testing, quality assurance and a change control discipline known as configuration management.
By contrast scientific software developed in our universities and research institutes is often produced by scientists with no training in software engineering and with no quality mechanisms in place and so, no doubt, the occurrence of errors will be even higher. The Climate Research Unit's "Harry ReadMe" files are a graphic indication of such working conditions, containing as they do the outpouring of a programmer's frustrations in trying to get sets of data to conform to a specification.
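For anyone who wants to see the precision-loss claim in action rather than take Hatton's word for it, here is a minimal, self-contained sketch -- nothing to do with CRU's actual code, just generic single-precision arithmetic on invented numbers -- showing how significant figures can quietly evaporate over the course of a long run:
Code:
import math
import numpy as np

N = 1_000_000
x = 0.1                        # not exactly representable in binary floating point

acc32 = np.float32(0.0)        # single-precision accumulator, as in much legacy Fortran code
for _ in range(N):
    acc32 += np.float32(x)     # each addition rounds; the error compounds as the total grows

exact = math.fsum([x] * N)     # correctly rounded double-precision reference

print(f"single-precision total : {float(acc32):.2f}")
print(f"reference (math.fsum)  : {exact:.2f}")
print(f"relative error         : {abs(float(acc32) - exact) / exact:.2%}")
# The single-precision result drifts from 100000 by roughly a percent: a value
# that nominally carries ~7 significant figures is now good to only 2 or 3.
The algorithm here is trivially correct; it is the arithmetic, repeated a million times, that eats the precision.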
-
- Posts: 5883
- Joined: Thu Jun 03, 2004 9:02 am
- Location: UK
False. If you pay you can buy as much raw data as you like. People like the Met Office don't care who buys their stuff.
Abdul Alhazred wrote: Plausible but not evidences.
asthmatic camel wrote: What say our IT mavens?
There are a zillion nits to pick with any scientific setup. That's how science gets done.
To me the damning bits (in general, not that particular article) are:
1) Unwillingness to share the raw data.
-
- Posts: 5883
- Joined: Thu Jun 03, 2004 9:02 am
- Location: UK
Open source software is good but outside some rather narrow areas (maths, voting machines) the lack of it is not normally considered suspicious. Even with voting machines it's not the broadest of groups who considered it to be a problem.
-
- Posts: 11741
- Joined: Fri Jun 11, 2004 4:52 am
- Title: mere ghost of his former self
That didn't answer AC's question.
Geni wrote: Open source software is good but outside some rather narrow areas (maths, voting machines) the lack of it is not normally considered suspicious. Even with voting machines it's not the broadest of groups who considered it to be a problem.
asthmatic camel wrote: What say our IT mavens?
-
- Posts: 23535
- Joined: Sun Jul 18, 2004 7:15 pm
- Title: Incipient toppler
- Location: Swimming in Lake Ed
I'm certainly not an IT maven, but reading this I find it misleading.
asthmatic camel wrote: Interesting article in today's Grauniad. What say our IT mavens? [...]
It isn't about the 'code' so much as it is about the algorithm. It's the algorithm that the auditors really want. The code is just the expression of it.
The difference seems subtle, and I suppose it is, given that the code is usually the only expression that exists for the algorithm. So if the code makers don't want to release their actual code, they should be required to release the exact algorithm in some other form.
Of course, even if the algorithm is released, it is useless without ALL the data on which the algorithm operated.
Got to have both and in almost all cases, they fight tooth and nail to keep it private.
And why shouldn't they? As one leading climate scientist put it, 'why should we give you the data when you're just going to try to find something wrong with it?' :(
-
- Posts: 11741
- Joined: Fri Jun 11, 2004 4:52 am
- Title: mere ghost of his former self
Really? Does the Met Office sell the Yamal data that Briffa used in his 2000 paper? More to the point -- Abdul's point I believe -- why did Briffa wait until just recently to release that data?
Geni wrote: False. If you pay you can buy as much raw data as you like. People like the Met Office don't care who buys their stuff. [...]
-
- Posts: 23535
- Joined: Sun Jul 18, 2004 7:15 pm
- Title: Incipient toppler
- Location: Swimming in Lake Ed
OH!!!! I KNOW, I KNOW, ASK ME, ASK ME!!
xouper wrote: More to the point -- Abdul's point I believe -- why did Briffa wait until just recently to release that data?
But I'll leave it first to Gini to quote a Wiki article edited by Connelly before I give the real (and demonstrable) answer.
Last edited by Rob Lister on Sat Feb 06, 2010 2:59 pm, edited 1 time in total.
-
- Posts: 11741
- Joined: Fri Jun 11, 2004 4:52 am
- Title: mere ghost of his former self
I don't find it misleading because even if there are no problems with the algorithm, there can be errors in the code. Both need to be validated.
Rob Lister wrote: I'm certainly not an IT maven, but reading this I find it misleading. [...]
It isn't about the 'code' so much as it is about the algorithm. It's the algorithm that the auditors really want. The code is just the expression of it.
The difference seems subtle, and I suppose it is, given that the code is usually the only expression that exists for the algorithm. So if the code makers don't want to release their actual code, they should be required to release the exact algorithm in some other form.
Can you imagine if creationists tried using that excuse to not show their work?
Rob Lister wrote: ... they fight tooth and nail to keep it private.
And why shouldn't they? As one leading climate scientist put it, 'why should we give you the data when you're just going to try to find something wrong with it?' :(
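A minimal sketch of xouper's point above (invented numbers and function names, nobody's real code): the stated algorithm is perfectly sound, but the code that claims to implement it has an off-by-one in the baseline period, and nothing in the output gives it away:
Code:
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2010)
temps = rng.normal(14.0, 0.3, size=years.size)           # made-up annual mean temperatures

# Stated algorithm: anomaly = temperature minus the 1961-1990 mean (a 30-year baseline).
def anomaly_as_specified(years, temps):
    ref = temps[(years >= 1961) & (years <= 1990)].mean()
    return temps - ref

# The "implementation": the comparison silently drops 1990, so the baseline is 29 years.
def anomaly_as_coded(years, temps):
    ref = temps[(years >= 1961) & (years < 1990)].mean()  # bug: < instead of <=
    return temps - ref

shift = anomaly_as_specified(years, temps) - anomaly_as_coded(years, temps)
print(f"offset silently introduced by the bug: {shift[0]:+.4f} degC")
# The algorithm was never wrong; only the code was. Without the code itself (or an
# independent re-implementation to compare against) the offset is invisible.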
-
- Posts: 23535
- Joined: Sun Jul 18, 2004 7:15 pm
- Title: Incipient toppler
- Location: Swimming in Lake Ed
The emphasis is what is misleading. It doesn't clearly differentiate between the two Houses of climatology: the modelers and the researchers.
xouper wrote: I don't find it misleading because even if there are no problems with the algorithm, there can be errors in the code. Both need to be validated.
But each can validate their code or not, as they see fit. It doesn't matter whether it is well written or sloppy, or even whether it works, because if the auditors have the algorithm they can express it in code some other way and, having done that, apply it to the same data set.
The result will either be identical (mathematically or statistically) or it will not.
If not, something is wrong, clearly.
Also, each premise is open to examination and it is up to the modeler or researcher to justify its use.
If they can't, something is wrong, clearly.
The burden of proof then lies where it should, on the modeler/researcher.
Last edited by Rob Lister on Sat Feb 06, 2010 3:40 pm, edited 1 time in total.
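As an illustration of the procedure Rob describes -- made-up data and a deliberately simple algorithm, not anything CRU actually runs -- here is one stated algorithm ("fit a least-squares linear trend to the annual series") expressed two independent ways and checked against itself on the same data set:
Code:
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1900, 2000, dtype=float)
series = 0.006 * (years - 1900) + rng.normal(0.0, 0.15, size=years.size)  # synthetic annual series

def trend_closed_form(x, y):
    """Auditor's expression of the algorithm: textbook least-squares slope, written out by hand."""
    xm, ym = x.mean(), y.mean()
    return ((x - xm) * (y - ym)).sum() / ((x - xm) ** 2).sum()

def trend_library(x, y):
    """Original team's expression: the same algorithm via np.polyfit."""
    return np.polyfit(x, y, 1)[0]

s1 = trend_closed_form(years, series)
s2 = trend_library(years, series)
print(f"slope, hand-rolled : {s1:.6f} degC/yr")
print(f"slope, polyfit     : {s2:.6f} degC/yr")
print("match within floating-point noise:", np.isclose(s1, s2, rtol=1e-6))
# Agreement says nothing about whether the algorithm is a sensible one -- that is a
# separate argument -- but disagreement would show at least one expression is broken.
The point is that the auditor never needs the original code to run this check, only the algorithm and the data.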
-
- Posts: 20299
- Joined: Sat Aug 12, 2006 2:01 am
- Title: Je suis devenu Français
- Location: USA
Or anybody else, really.
xouper wrote: Can you imagine if creationists tried using that excuse to not show their work? [...]
-
- Posts: 1499
- Joined: Wed Jun 02, 2004 11:04 pm
- Location: UK
It's generally faster to write new code than to try to understand and correct someone else's poorly written code.
If the raw data is made available and several different programs use it to produce the same answers, that's a better test (IMO) of the validity of the original team's conclusions.
Of course if different programs produce significantly different answers from the same raw data, then either some of the programs are wrong, or it's possible that the raw data is of poor and inconsistent quality such that meaningful results can't really be expected from it.
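A small sketch of the kind of check ceptimus describes, with made-up file contents and hypothetical column names: two independently written programs publish results from the same raw data, and we simply see whether the numbers agree within a stated tolerance:
Code:
import csv
import io

# Stand-ins for result files produced by two independently written programs
# from the same raw data (in real life these would be files on disk).
results_a = io.StringIO("year,anomaly\n1998,0.52\n1999,0.31\n2000,0.29\n")
results_b = io.StringIO("year,anomaly\n1998,0.52\n1999,0.33\n2000,0.29\n")

def load(f):
    return {int(row["year"]): float(row["anomaly"]) for row in csv.DictReader(f)}

a, b = load(results_a), load(results_b)
tolerance = 0.01   # how much disagreement still counts as "the same answer"

for year in sorted(a):
    diff = abs(a[year] - b[year])
    verdict = "OK" if diff <= tolerance else "DISAGREE"
    print(f"{year}: {a[year]:+.2f} vs {b[year]:+.2f}  diff = {diff:.2f}  {verdict}")
# Disagreement beyond the tolerance means at least one program is wrong, or the raw
# data is too noisy to support the precision being claimed for the results.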
-
- Posts: 2868
- Joined: Tue Jun 08, 2004 8:55 pm
- Location: That Good Night
"Third, it is apparent to me, and many others who have followed this exchange and your on-line discussions of how to proceed, that you are not acting in good faith in requests for data."robinson wrote:Or anybody else really.xouper wrote:Can you imagine if creationists tried using that excuse to not show their work?... they fight tooth and nail to keep it private.
And why shouldn't they? As one leading climate scientist put it, 'why should we give you the data when you're just going to try to find something wrong with it?' :(
Worked for Lenski...
-
- Posts: 17762
- Joined: Fri Oct 26, 2007 4:13 pm
- Location: Friar McWallclocks Bar -- Where time stands still while you lean over!
ceptimus. A giant among critical thinkers. And... winner of this week's SC good reasoning award.
Abdul Alhazred wrote: :clap:
ceptimus wrote: It's generally faster to write new code than to try to understand and correct someone else's poorly written code. [...]
Your pony is in the mail. :)
-
- Posts: 5883
- Joined: Thu Jun 03, 2004 9:02 am
- Location: UK
The Met Office was an example. There are other collecting agencies around; the Met Office is just the first one to come to mind (heh, if certain rumors are true, wait a bit and you can buy the whole Met Office).
xouper wrote: Really? Does the Met Office sell the Yamal data that Briffa used in his 2000 paper? [...]
-
- Posts: 23535
- Joined: Sun Jul 18, 2004 7:15 pm
- Title: Incipient toppler
- Location: Swimming in Lake Ed
A little off topic but I gotta ask: Geni, do you drink a bit? I mean, if you do, it's cool. You're among .... well, not friends exactly, but something that almost approaches it.
Geni wrote: The met office was example. There are other collecting agencies around the met office is just the first one to come to mind (heh if certian rumors are true wait a bit and you can buy the who met office).
-
- Posts: 5883
- Joined: Thu Jun 03, 2004 9:02 am
- Location: UK
Peer review doesn't mean what you think it means.
corplinx wrote: To truly be able to peer review a scientific conclusion reached based on computed results, you would need access to the custom source code written to produce the result.
Well that's one position. The British government took the position that just because something is publicly funded doesn't mean you shouldn't try to sell it and thus reduce the amount of public funding needed in future. Things are changing, but slowly, and I don't think the Tories will see any need to continue such change.
"But.... but..... its ugly!" is not an excuse. Especially for research in part funded by public funds.
-
- Posts: 1830
- Joined: Fri Aug 19, 2005 4:41 pm
- Location: New York
Just to pick a nit, the G.E.N.I is technically correct; an analysis on the level you suggest is closer to an audit than to mere peer review. That doesn't make it a bad idea. Indeed, I'd say that any alleged science that calls for the massive kinds of changes that the so-called climate scientists are calling for demands audit-level review even if so much of it hadn't already been revealed to be unfiltered sewage.
corplinx wrote: Yes it does, however, my ideal of being able to take the actual code/data, and rerun to see if the numbers actually match what is in the paper/study/findings/etc is a higher standard that the rest of the world has not yet caught up with.
Geni wrote: Peer review doesn't mean what you think it means.
-
- Posts: 11741
- Joined: Fri Jun 11, 2004 4:52 am
- Title: mere ghost of his former self
Exactly.
manny wrote: ... I'd say that any alleged science that calls for the massive kinds of changes that the so-called climate scientists are calling for demands audit-level review ...
The stakes are way too high for people like Briffa and Jones to hide behind the excuse that they don't have to share their data with those who would presume to "audit" them.
It is not sufficient for the scientists to just say "trust me". Audits of their work are mandatory before making massive public policy changes.
-
- Posts: 15132
- Joined: Sun Jun 06, 2004 1:16 pm
- Title: Curmudgeon
- Location: Hither, sometimes Yon
Over and above the GIGO problem and the difficulty of QCing massive digital datasets are the differential equation coding and boundary condition handling from cell to cell, and the 3D gridding algorithms needed for both internal processing and output display. One erroneous data point affects large areas after processing and is basically undetectable.
Rob Lister wrote: I'm certainly not an IT maven, but reading this I find it misleading. [...]
Last edited by hammegk on Mon Feb 08, 2010 10:42 pm, edited 1 time in total.
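To illustrate hammegk's point about gridding -- with hypothetical stations and a deliberately naive inverse-distance gridder, not any real CRU product -- here is how a single bad station value leaves its fingerprint on a large fraction of the output grid while the gridded field still looks perfectly plausible:
Code:
import numpy as np

rng = np.random.default_rng(0)
stations = rng.uniform(0.0, 10.0, size=(20, 2))   # 20 made-up station locations
temps = rng.normal(10.0, 1.0, size=20)            # their "observed" values

def idw_grid(stations, values, n=25, power=2.0):
    """Naive inverse-distance-weighted interpolation onto an n x n grid."""
    gx, gy = np.meshgrid(np.linspace(0, 10, n), np.linspace(0, 10, n))
    cells = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(cells[:, None, :] - stations[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power        # guard against a zero distance
    return (w @ values) / w.sum(axis=1)

clean = idw_grid(stations, temps)

bad = temps.copy()
bad[7] += 15.0                                    # one transcription error at one station
corrupt = idw_grid(stations, bad)

touched = np.abs(corrupt - clean) > 0.1           # cells shifted by more than 0.1 degrees
print(f"grid cells noticeably affected: {touched.sum()} of {touched.size}")
# Typically a large fraction of the 625 cells move, yet the corrupted field still looks
# perfectly plausible -- there is no obvious way to spot the bad point from the output alone.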
-
- Posts: 10271
- Joined: Tue Nov 13, 2007 11:00 pm
- Location: Hangar 18
So, introduce more errors to cancel them out.
hammegk wrote: ... One erroneous data point affects large areas after processing and is basically undetectable.
-
- Posts: 49740
- Joined: Fri Jun 04, 2004 2:50 pm
Actually, though, I find some of the responses amusing in their own right. Especially "Destroy these tablets".
Abdul Alhazred wrote: Meanwhile in the land of hysterical fantasy:
Assuming the worst, what would you say to distant future humans?
Democratic Underground
:lmao:
Personally, in that scenario I put up two tablets, one reading "Everything on this tablet is a lie"
-
- Posts: 20299
- Joined: Sat Aug 12, 2006 2:01 am
- Title: Je suis devenu Français
- Location: USA
Re:
Found my first posts about climate/global warming etc etc just now.
I was such a babe in the woods on this subject. It was three things that led to my looking at the data for myself.
One - climate gate
Two - the idiotic response to any skeptical inquiry into it
Three - the fuck you beyond all imagining blizzards of the winter of 2009/10
The following was in response to the old JREF forum acting all fascists about this shit.
robinson wrote: ↑Mon Nov 23, 2009 12:56 am Waitaminnut! You can't link to the emails?
That's beyond KoolAid, that is Jim Jones in the compound shit. Gun to your head, drink it fucker! Drink it!!
Fuck. I am so glad I got banned, this level of dumb is beyond comprehension.
I ran this story by a non internet person today, they want to know why the emails and documents that were hacked aren't already open to view.
As in, "It's weather, why would anybody hide it?".
Lot of memories reading this thread. And more than a few missing members, sadly many of them are dead.
-
- Posts: 20299
- Joined: Sat Aug 12, 2006 2:01 am
- Title: Je suis devenu Français
- Location: USA
Re: UK's Climate Research Unit server Hacked -- Cat out of bag
The libtards and global warmers who I ran into in 2009/10 all accused my humble self of having an agenda and being a "denier" (which was clearly an ad hom evoking "holocaust denial"), despite the blatant fact that until November 2009 I either didn't post about global warming, or I was one of the strident voices claiming it was all over, too late, we were all fucked.
That was when I realized the alarmists were not remotely interested in facts or science.
-
- Posts: 20299
- Joined: Sat Aug 12, 2006 2:01 am
- Title: Je suis devenu Français
- Location: USA
Re: UK's Climate Research Unit server Hacked -- Cat out of bag
So do I
Also Abdul
Dr Matt
Cool Hand Luke
….
I would go on but I am starting to become sad
We need a memorial thread