UK's Climate Research Unit server Hacked -- Cat out of bag

We are the Borg.
robinson
Posts: 20299
Joined: Sat Aug 12, 2006 2:01 am
Title: Je suis devenu Français
Location: USA

Post by robinson »

Good plan.
Rob Lister
Posts: 23535
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed

Post by Rob Lister »

Mentat wrote:I meant a scientific article using only the tree. If there isn't, then I can only chalk this up to a big strawman argument.
Sorry. I misunderstood your desire.


http://www.sciencedirect.com/science?_o ... 37955303c3
Geni
Posts: 5883
Joined: Thu Jun 03, 2004 9:02 am
Location: UK

Post by Geni »

Abdul Alhazred wrote:About those Himalayan glaciers:
Zey are shrinking yes:

http://www.asiasociety.org/onthinnerice
hammegk
Posts: 15132
Joined: Sun Jun 06, 2004 1:16 pm
Title: Curmudgeon
Location: Hither, sometimes Yon

Post by hammegk »

Damn good thing, too.

I'd hate to think another ice age was already beginning.
asthmatic camel
Posts: 20455
Joined: Sat Jun 05, 2004 1:53 pm
Title: Forum commie nun.
Location: Stirring the porridge with my spurtle.

Post by asthmatic camel »

Interesting article in today's Grauniad.
Many climate scientists have refused to publish their computer programs. I suggest that this is both unscientific behaviour and, equally importantly, ignores a major problem: that scientific software has got a poor reputation for error.

There is enough evidence for us to regard a lot of scientific software with worry. For example Professor Les Hatton, an international expert in software testing resident in the Universities of Kent and Kingston, carried out an extensive analysis of several million lines of scientific code. He showed that the software had an unacceptably high level of detectable inconsistencies.

For example, interface inconsistencies between software modules which pass data from one part of a program to another occurred at the rate of one in every seven interfaces on average in the programming language Fortran, and one in every 37 interfaces in the language C. This is hugely worrying when you realise that just one error — just one — will usually invalidate a computer program. What he also discovered, even more worryingly, is that the accuracy of results declined from six significant figures to one significant figure during the running of programs.

Hatton and other researchers' work indicates that scientific software is often of poor quality. What is staggering about the research that has been done is that it examines commercial scientific software – produced by software engineers who have to undergo a regime of thorough testing, quality assurance and a change control discipline known as configuration management.

By contrast scientific software developed in our universities and research institutes is often produced by scientists with no training in software engineering and with no quality mechanisms in place and so, no doubt, the occurrence of errors will be even higher. The Climate Research Unit's "Harry ReadMe" files are a graphic indication of such working conditions, containing as they do the outpouring of a programmer's frustrations in trying to get sets of data to conform to a specification.
What say our IT mavens?
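Hatton's precision finding (six significant figures decaying to one over a program's run) is easy to reproduce. A minimal Python sketch, not from the article, showing one common mechanism: catastrophic cancellation, where subtracting two nearly equal doubles leaves only low-order noise.

```python
# Catastrophic cancellation: the exact answer below is 1.0, but
# the computed value retains roughly one significant figure --
# the kind of decay Hatton measured in long-running programs.
a = 1.0 + 1e-15          # stored with ~16 significant decimal digits
b = 1.0
result = (a - b) * 1e15  # exact arithmetic would give 1.0
print(result)            # prints roughly 1.11, about 11% off
```

A single step like this, buried in millions of lines of gridding or averaging code, is enough to erode the precision of the final output.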
Geni
Posts: 5883
Joined: Thu Jun 03, 2004 9:02 am
Location: UK

Post by Geni »

Abdul Alhazred wrote:
asthmatic camel wrote:What say our IT mavens?
Plausible but not evidenced.

There are a zillion nits to pick with any scientific setup. That's how science gets done.

To me the damning bits (in general not that particular article) are:
1) Unwillingness to share the raw data.
False. If you pay, you can buy as much raw data as you like. People like the Met Office don't care who buys their stuff.
Geni
Posts: 5883
Joined: Thu Jun 03, 2004 9:02 am
Location: UK

Post by Geni »

asthmatic camel wrote: What say our IT mavens?
Open source software is good, but outside some rather narrow areas (maths, voting machines) the lack of it is not normally considered suspicious. Even with voting machines it's not the broadest of groups who consider it to be a problem.
xouper
Posts: 11741
Joined: Fri Jun 11, 2004 4:52 am
Title: mere ghost of his former self

Post by xouper »

Geni wrote:
asthmatic camel wrote:What say our IT mavens?
Open source software is good, but outside some rather narrow areas (maths, voting machines) the lack of it is not normally considered suspicious. Even with voting machines it's not the broadest of groups who consider it to be a problem.
That didn't answer AC's question.
Rob Lister
Posts: 23535
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed

Post by Rob Lister »

asthmatic camel wrote:Interesting article in today's Grauniad. [...] What say our IT mavens?
I'm certainly not an IT maven, but reading this I find it misleading.

It isn't about the 'code' so much as it is about the algorithm. It's the algorithm that the auditors really want. The code is just the expression of it.

The difference seems subtle, and I suppose it is, given that the code is usually the only expression of the algorithm that exists. So if the code makers don't want to release their actual code, they should be required to release the exact algorithm in some other form.

Of course, even if the algorithm is released, it is useless without ALL the data on which the algorithm operated.

Got to have both and in almost all cases, they fight tooth and nail to keep it private.

And why shouldn't they? As one leading climate scientist put it, 'why should we give you the data when you're just going to try to find something wrong with it?' :(
xouper
Posts: 11741
Joined: Fri Jun 11, 2004 4:52 am
Title: mere ghost of his former self

Post by xouper »

Geni wrote:
Abdul Alhazred wrote: [...] To me the damning bits (in general not that particular article) are:
1) Unwillingness to share the raw data.
False. If you pay, you can buy as much raw data as you like. People like the Met Office don't care who buys their stuff.
Really? Does the Met Office sell the Yamal data that Briffa used in his 2000 paper? More to the point -- Abdul's point I believe -- why did Briffa wait until just recently to release that data?
Rob Lister
Posts: 23535
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed

Post by Rob Lister »

xouper wrote:More to the point -- Abdul's point I believe -- why did Briffa wait until just recently to release that data?
OH!!!! I KNOW, I KNOW, ASK ME, ASK ME!@!!

But I'll leave it first to Gini to quote a Wiki article edited by Connelly before I give the real (and demonstrable) answer.
Last edited by Rob Lister on Sat Feb 06, 2010 2:59 pm, edited 1 time in total.
xouper
Posts: 11741
Joined: Fri Jun 11, 2004 4:52 am
Title: mere ghost of his former self

Post by xouper »

Rob Lister wrote:
asthmatic camel wrote:Interesting article in today's Grauniad. [...] What say our IT mavens?
I'm certainly not an IT maven, but reading this I find it misleading.

It isn't about the 'code' so much as it is about the algorithm. It's the algorithm that the auditors really want. The code is just the expression of it.

The difference seems subtle, and I suppose it is, given that the code is usually the only expression of the algorithm that exists. So if the code makers don't want to release their actual code, they should be required to release the exact algorithm in some other form.
I don't find it misleading because even if there are no problems with the algorithm, there can be errors in the code. Both need to be validated.
... they fight tooth and nail to keep it private.

And why shouldn't they? As one leading climate scientist put it, 'why should we give you the data when you're just going to try to find something wrong with it?' :(
Can you imagine if creationists tried using that excuse to not show their work?
Rob Lister
Posts: 23535
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed

Post by Rob Lister »

xouper wrote: I don't find it misleading because even if there are no problems with the algorithm, there can be errors in the code. Both need to be validated.
The emphasis is what is misleading. It doesn't clearly differentiate between the two Houses of climatology: the modelers and the researchers.

But each can validate their code or not, as they see fit. It doesn't matter if it is well written or sloppy or even if it works because if the auditors have the algorithm they can express it in code some other way and, having done that, apply it to the same data set.

The result will either be identical (mathematically or statistically) or it will not.

If not, something is wrong, clearly.

Also, each premise is open to examination and it is up to the modeler or researcher to justify its use.

If they can't, something is wrong, clearly.

The burden of proof then lies where it should, on the modeler/researcher.
Last edited by Rob Lister on Sat Feb 06, 2010 3:40 pm, edited 1 time in total.
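The audit procedure described above (re-express the published algorithm independently, run it on the same data, compare results) can be sketched in a few lines. A hypothetical Python example, with a simple running mean standing in for the real climate algorithm:

```python
def running_mean_loop(xs, w):
    """Straightforward accumulator version (the 'original' code)."""
    out = []
    for i in range(len(xs) - w + 1):
        s = 0.0
        for j in range(i, i + w):
            s += xs[j]
        out.append(s / w)
    return out

def running_mean_slice(xs, w):
    """Independent re-expression of the same published algorithm."""
    return [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]

data = [0.1 * k for k in range(20)]   # stand-in for the shared raw data
a = running_mean_loop(data, 5)
b = running_mean_slice(data, 5)
# The audit check: independent implementations must agree
# (mathematically or statistically) on the same data set.
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))
print("implementations agree on", len(a), "points")
```

If the assertion fails, something is wrong in one of the implementations or in the published description of the algorithm, which is exactly the signal the auditor wants.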
robinson
Posts: 20299
Joined: Sat Aug 12, 2006 2:01 am
Title: Je suis devenu Français
Location: USA

Post by robinson »

xouper wrote:
... they fight tooth and nail to keep it private.

And why shouldn't they? As one leading climate scientist put it, 'why should we give you the data when you're just going to try to find something wrong with it?' :(
Can you imagine if creationists tried using that excuse to not show their work?
Or anybody else really.
ceptimus
Posts: 1499
Joined: Wed Jun 02, 2004 11:04 pm
Location: UK

Post by ceptimus »

It's generally faster to write new code than to try to understand and correct someone else's poorly written code.

If the raw data is made available and several different programs use it to produce the same answers, that's a better test (IMO) of the validity of the original team's conclusions.

Of course if different programs produce significantly different answers from the same raw data, then either some of the programs are wrong, or it's possible that the raw data is of poor and inconsistent quality such that meaningful results can't really be expected from it.
Beleth
Posts: 2868
Joined: Tue Jun 08, 2004 8:55 pm
Location: That Good Night

Post by Beleth »

robinson wrote:
xouper wrote:
... they fight tooth and nail to keep it private.

And why shouldn't they? As one leading climate scientist put it, 'why should we give you the data when you're just going to try to find something wrong with it?' :(
Can you imagine if creationists tried using that excuse to not show their work?
Or anybody else really.
"Third, it is apparent to me, and many others who have followed this exchange and your on-line discussions of how to proceed, that you are not acting in good faith in requests for data."

Worked for Lenski...
sparks
Posts: 17762
Joined: Fri Oct 26, 2007 4:13 pm
Location: Friar McWallclocks Bar -- Where time stands still while you lean over!

Post by sparks »

Abdul Alhazred wrote:
ceptimus wrote:It's generally faster to write new code than to try to understand and correct someone else's poorly written code.

If the raw data is made available and several different programs use it to produce the same answers, that's a better test (IMO) of the validity of the original team's conclusions.

Of course if different programs produce significantly different answers from the same raw data, then either some of the programs are wrong, or it's possible that the raw data is of poor and inconsistent quality such that meaningful results can't really be expected from it.
:clap:
ceptimus. A giant among critical thinkers. And... winner of this week's SC good reasoning award.

Your pony is in the mail. :)
Geni
Posts: 5883
Joined: Thu Jun 03, 2004 9:02 am
Location: UK

Post by Geni »

xouper wrote:
Geni wrote: [...] False. If you pay, you can buy as much raw data as you like. People like the Met Office don't care who buys their stuff.
Really? Does the Met Office sell the Yamal data that Briffa used in his 2000 paper?
The met office was example. There are other collecting agencies around the met office is just the first one to come to mind (heh if certian rumors are true wait a bit and you can buy the who met office).
Rob Lister
Posts: 23535
Joined: Sun Jul 18, 2004 7:15 pm
Title: Incipient toppler
Location: Swimming in Lake Ed

Post by Rob Lister »

Geni wrote: The met office was example. There are other collecting agencies around the met office is just the first one to come to mind (heh if certian rumors are true wait a bit and you can buy the who met office).
A little off topic but I gotta ask: Geni, do you drink a bit? I mean, if you do, it's cool. You're among .... well, not friends exactly, but something that almost approaches it.
Geni
Posts: 5883
Joined: Thu Jun 03, 2004 9:02 am
Location: UK

Post by Geni »

Rob Lister wrote: A little off topic but I gotta ask: Geni, do you drink a bit?
Not often. As in, I'd need a calendar to work out the last time.
Geni
Posts: 5883
Joined: Thu Jun 03, 2004 9:02 am
Location: UK

Post by Geni »

corplinx wrote:To truly be able to peer review a scientific conclusion reached based on computed results, you would need access to the custom source code written to produce the result.
Peer review doesn't mean what you think it means.
"But.... but..... it's ugly!" is not an excuse. Especially for research in part funded by public funds.
Well, that's one position. The British government took the position that just because something is publicly funded doesn't mean you shouldn't try to sell it, and thus reduce the amount of public funding needed in future. Things are changing, but slowly, and I don't think the Tories will see any need to continue such change.
manny
Posts: 1830
Joined: Fri Aug 19, 2005 4:41 pm
Location: New York

Post by manny »

corplinx wrote:
Geni wrote: Peer review doesn't mean what you think it means.
Yes it does; however, my ideal of being able to take the actual code/data and rerun it to see if the numbers actually match what is in the paper/study/findings/etc. is a higher standard that the rest of the world has not yet caught up with.
Just to pick a nit, the G.E.N.I is technically correct; an analysis on the level you suggest is closer to an audit than to mere peer review. That doesn't make it a bad idea. Indeed, I'd say that any alleged science that calls for the massive kinds of changes that the so-called climate scientists are calling for demands audit-level review even if so much of it hadn't already been revealed to be unfiltered sewage.
xouper
Posts: 11741
Joined: Fri Jun 11, 2004 4:52 am
Title: mere ghost of his former self

Post by xouper »

manny wrote:... I'd say that any alleged science that calls for the massive kinds of changes that the so-called climate scientists are calling for demands audit-level review ...
Exactly.

The stakes are way too high for people like Briffa and Jones to hide behind the excuse that they don't have to share their data with those who would presume to "audit" them.

It is not sufficient for the scientists to just say "trust me". Audits of their work are mandatory before making massive public policy changes.
DrMatt
Posts: 29811
Joined: Fri Jul 16, 2004 4:00 pm
Location: Location: Location!

Post by DrMatt »

My cat is very much alive. No bag for him just yet.


insulin, diuretics, pepcid, metronidazole, Ringer's solution, and occasionally ciproheptadine, but no bag.
hammegk
Posts: 15132
Joined: Sun Jun 06, 2004 1:16 pm
Title: Curmudgeon
Location: Hither, sometimes Yon

Post by hammegk »

Rob Lister wrote:
asthmatic camel wrote:Interesting article in today's Grauniad. [...] What say our IT mavens?
I'm certainly not an IT maven, but reading this I find it misleading. [...] Got to have both and in almost all cases, they fight tooth and nail to keep it private.
Over and above the GIGO problem and the difficulty of QCing massive digital datasets are the differential-equation coding, the boundary-condition handling from cell to cell, and the 3D gridding algorithms needed for both internal processing and output display. One erroneous data point affects large areas after processing and is basically undetectable.
Last edited by hammegk on Mon Feb 08, 2010 10:42 pm, edited 1 time in total.
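The point about a single bad reading smearing out can be illustrated with a toy smoothing pass. A hypothetical Python sketch, with a 1-D running mean standing in for the 3-D gridding described above:

```python
field = [10.0] * 21
field[10] = 100.0   # a single erroneous reading (spike of +90)

def smooth(xs, half=2):
    """Running mean over a window of up to 2*half+1 cells."""
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

smoothed = smooth(field)
touched = [i for i, v in enumerate(smoothed) if abs(v - 10.0) > 1.0]
print(touched)   # [8, 9, 10, 11, 12]: the spike now spans five cells
```

After one pass, the obvious 90-unit outlier has become a modest 18-unit bump spread across five cells, much harder to flag as bad data; repeated gridding passes dilute it further.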
Mentat
Posts: 10271
Joined: Tue Nov 13, 2007 11:00 pm
Location: Hangar 18

Post by Mentat »

hammegk wrote:
Rob Lister wrote: [...]
Over and above the GIGO problem and the difficulty of QCing massive digital datasets are the differential-equation coding, the boundary-condition handling from cell to cell, and the 3D gridding algorithms needed for both internal processing and output display. One erroneous data point affects large areas after processing and is basically undetectable.
So, introduce more errors to cancel them out.
Nyarlathotep
Posts: 49740
Joined: Fri Jun 04, 2004 2:50 pm

Post by Nyarlathotep »

Abdul Alhazred wrote:Meanwhile in the land of hysterical fantasy:

Assuming the worst, what would you say to distant future humans?
Democratic Underground

:lmao:
Actually, though, I find some of the responses amusing in their own right. Especially "Destroy these tablets"

Personally, in that scenario I'd put up two tablets, one reading "Everything on this tablet is a lie".
robinson
Posts: 20299
Joined: Sat Aug 12, 2006 2:01 am
Title: Je suis devenu Français
Location: USA

Re:

Post by robinson »

Found my first posts about climate/global warming etc etc just now.
robinson wrote: Mon Nov 23, 2009 12:01 am Help me out here, cause I have always been confused on the GW/JREF fuckall conversations.

GW is accepted by "true skeptics" and any skepticism about GW is a bad thing? Is that the JREF stand?
hammegk wrote: Mon Nov 23, 2009 12:02 am Yup.
I was such a babe in the woods on this subject. It was three things that led to my looking at the data for myself.

One - climate gate
Two - the idiotic response to any skeptical inquiry into it
Three - the fuck you beyond all imagining blizzards of the winter of 2009/10


The following was in response to the old JREF forum acting all fascist about this shit.
robinson wrote: Mon Nov 23, 2009 12:56 am Waitaminnut! You can't link to the emails?

That's beyond KoolAid, that is Jim Jones in the compound shit. Gun to your head, drink it fucker! Drink it!!

Fuck. I am so glad I got banned, this level of dumb is beyond comprehension.

I ran this story by a non internet person today, they want to know why the emails and documents that were hacked aren't already open to view.

As in, "It's weather, why would anybody hide it?".

Lot of memories reading this thread. And more than a few missing members; sadly, many of them are dead.
sparks
Posts: 17762
Joined: Fri Oct 26, 2007 4:13 pm
Location: Friar McWallclocks Bar -- Where time stands still while you lean over!

Re: UK's Climate Research Unit server Hacked -- Cat out of bag

Post by sparks »

First.
robinson
Posts: 20299
Joined: Sat Aug 12, 2006 2:01 am
Title: Je suis devenu Français
Location: USA

Re: UK's Climate Research Unit server Hacked -- Cat out of bag

Post by robinson »

The libtards and global warmers who I ran into in 2009/10 all accused my humble self of having an agenda and being a "denier", (which was clearly an ad hom evoking "holocaust denial"), despite the blatant fact that until November 2009 I either didn't post about global warming, or I was one of the strident voices claiming it was all over, too late, we were all fucked.

That was when I realized the alarmists were not remotely interested in facts or science.
Pyrrho
Posts: 34112
Joined: Sat Jun 05, 2004 2:17 am
Title: Man in Black
Location: Division 6

Re: UK's Climate Research Unit server Hacked -- Cat out of bag

Post by Pyrrho »

I wish Lister was still with us.
robinson
Posts: 20299
Joined: Sat Aug 12, 2006 2:01 am
Title: Je suis devenu Français
Location: USA

Re: UK's Climate Research Unit server Hacked -- Cat out of bag

Post by robinson »

So do I

Also Abdul

Dr Matt

Cool Hand Luke

….

I would go on but I am starting to become sad

We need a memorial thread