The science goes round and round, and it comes out here
I was doing an archaeological dig in my garage and came across a stack of stuff I wrote when I was at university many years before. As a prerequisite for studying psychology, I also had to do some introductory anthropology and sociology. By coincidence, in the same week I picked up a book by Martin Gardner in which he wrote about Derek Freeman's criticism of the work of Margaret Mead. This reminded me of some examples of how to do, and how not to do, science that I came across in that particular Behavioural Sciences department. (Please don't bother to tell me that the social sciences are not sciences. I have heard it all before.)
It is pointless to talk about the science in sociology, because there wasn't any. At the time, the discipline was just a cesspit of insanity driven by the intellectual vacuity of man-hating feminists. All you had to remember to get good marks was that the words "man" and "rapist" were synonyms, although it helped if you were young, female and prepared to endure some sexual liberation weekends at the homes of female lecturers. (Outrageous, but true. One lecturer claimed that all women should be lesbians on principle and offered individual, personal training courses for first-year students.) A certain amount of this nonsense spilled over into anthropology. Discussion of Freeman's book and research was forbidden, as it was apparently quite clear that he had attacked Mead only because her lesbianism offended his well-known homophobia. The evidence provided for his homophobia was that he had attacked Mead, who was a lesbian. In the midst of this bizarre circular argument, any scientific comment about what either Mead or Freeman had to say was outlawed. She was right because of her sexuality; he was wrong because of her sexuality. Another bizarre thing was that most people who had commented on Mead's sexuality in the past had presented her as someone who chased after any available man in trousers. Of course, who Mead slept with had no bearing on the validity of her research findings, but that seemed to be all that anyone was interested in at the time.
Another wonderful circular piece of "science" also came out of the anthropology department. In 1985, the Australian government formally handed over ownership of Ayers Rock to the original Aboriginal owners. This was a great day in the history of the relationship between the original occupants of the continent and the newer arrivals, and one of my proudest possessions is a souvenir t-shirt commemorating the occasion. You can't buy those in the local tourist traps! One problem which arose at the time was identifying the traditional owners of the rock, and to settle this an anthropologist was called in. She was able to demonstrate that one particular tribal group were the traditional owners because they were the only people who knew the secret and sacred dances associated with the location. And how did she know what these dances were? Well, this same group of people had shown them to her some years before when she had been doing field work with them. To add to the absurdity, the oldest member of the tribe remembered walking thousands of kilometres over many months with his family when they first moved to the area. This meant that the identified traditional owners had only been living there for about 70 years, although there had been human occupants of the area for more than 30,000 years (and the rock had been there for about 200 million). I don't begrudge the transfer of ownership to the particular group because the handover was really only symbolic anyway, but if anything is going to be supported by science then the science should be good enough to do the job.
Most of the work I did was in perception and cognitive psychology, where it is possible to do something approaching scientific research. One of the criticisms often made against the social sciences (and medical research) is that the acceptable level of error in the results is too high, but this criticism misses the point about what is being measured and the sort of tools available to do the measuring. It is pointed out that, for initial research at least, a doctor or psychologist is prepared to say that they have found an effect if the result would have occurred by chance less than 5% of the time, but in physics the threshold might be a thousandth of one percent or even less. This is then used to suggest that the research is somehow less rigorous than physics. That is wrong on two counts. The first is what the accountants call materiality. Drug effects can be measured in time intervals from fractions of a second up to months or years. The variability in the results can be quite wide, unlike physical reactions which may have little or no variability within the equations which describe them (much of the variability can be an artefact of the measuring instrument anyway). Medicine and psychology do not have equations based on universal constants. The second count is that you can only measure things to the accuracy of the measuring instrument and the granularity of what is being measured. Physicists may be able to talk intelligibly about things like the Planck length, the charge on the electron and measuring the cosmic background temperature to within one twenty-millionth of a kelvin, but human beings are not that precise.
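To see the two conventions side by side, here is a minimal sketch in Python (my illustration, not part of the original argument). It assumes a two-tailed normal approximation and uses the particle-physics "five sigma" benchmark as an illustrative stand-in for the "thousandth of one percent or even less" threshold:

```python
# Translate significance thresholds between the two conventions.
# Assumption: two-tailed normal approximation; the 5-sigma benchmark
# is an illustrative stand-in for the physics threshold in the text.
from scipy.stats import norm

p_psych = 0.05
sigma_psych = norm.isf(p_psych / 2)   # ~1.96 standard deviations
print(f"p = {p_psych} is about {sigma_psych:.2f} sigma")

p_five_sigma = 2 * norm.sf(5)         # two-tailed p-value at 5 sigma
print(f"5 sigma is p = {p_five_sigma:.1e}, or about "
      f"{p_five_sigma * 100:.5f}% (below a thousandth of one percent)")
```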
In one study I carried out measuring reaction times to stimuli, the two groups produced beautiful normal curves of response times, with the same standard deviation but barely different means. Put another way, we failed to reject the null hypothesis and the experiment was a failure. (When we examined the data in a different way a startling and unexpected effect showed up which led to further research, but for the purpose here I am only talking about the original hypothesis.) I was asked to give a talk about the experiment to a junior-year tutorial group to illustrate the concept of statistical significance, or how a difference which makes no difference isn't a difference at all. After the talk I was approached by a student who told me that she was a mathematician and I was a fool and a poltroon, because anyone could see that the means of the two groups were not equal when they differed by some fraction of a thousandth of a second. I tried in vain to convince her that I had not said that the means were equal, just that they were not far enough away from each other to be useful. As far as she was concerned, numbers were either equal or they were not. In mathematics this may be true, but we were talking about the real world here.
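To make the tutorial point concrete, here is a toy simulation (all the numbers are invented; they are not the original data). Two groups of reaction times share the same spread and have true means half a millisecond apart: the means are not equal, but a t-test cannot distinguish them from noise, and the effect size is negligible.

```python
# A difference which makes no difference: simulated reaction times.
# Sample sizes, means and spreads here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Same standard deviation, true means 0.5 ms apart (times in seconds).
group_a = rng.normal(loc=0.3500, scale=0.05, size=1000)
group_b = rng.normal(loc=0.3505, scale=0.05, size=1000)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

print(f"mean A = {group_a.mean():.4f} s, mean B = {group_b.mean():.4f} s")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}  (almost certainly > 0.05)")
print(f"Cohen's d = {cohens_d:.3f}  (a negligible effect size)")
```

The means are unequal in the mathematician's sense, but the test retains the null hypothesis: they are not far enough apart to be useful.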
The final example shows how real science should be and is done. In one of my final-year perception classes the major assignment was broken into three components. The first was to submit an experimental design which described an optical illusion, proposed a hypothesis for the origin and mechanism of the illusion, and said how we planned to test that hypothesis. References and citations were not required. The second component was to conduct the experiment and write it up in the format expected by the scientific journals. The last part was to give an oral presentation of the results to the rest of the class, and this presentation had to be given before the mark for the written assignment would be released.
I picked an illusion which had fascinated me for years, wrote up a guess as to what was happening inside the head when the illusion was being experienced and dropped the paper into the appropriate assignment submission slot in the department's office door. Back it came a few days later with full marks. So far, so good. Then I hit the library, and the first reference I came up with was the professor's PhD thesis. This had been the man's life work, and he and his thesis supervisor were the acknowledged world experts on the phenomenon. My only problem was that I had suggested a different mechanism for the illusion. It was too late to pick another topic, so there was no option but to approach the professor and talk things out. Luckily I had been in his classes before so we knew each other, but I was still very nervous when I knocked on his office door. The first thing he said was that he had been expecting me to call. He then went on to talk in the way a real scientist should. What goes on in cognition and perception cannot be directly observed, and has to be inferred from observation and measurement. At any one time, the dominant theory is that which offers the best explanation, but that theory is always open to modification or even rejection if new data turns up. The professor didn't think that my idea was going to overthrow the dominant paradigm, but he accepted that it might help to explain some unexplained anomalies. He also said that he knew me well enough to know that I wasn't being contrary just for the sake of it and that I would be honest if the actual experimental work didn't support my hypothesis. He then gave me a copy of a paper about the illusion which had just been accepted for publication and we parted friends.
As it happened, the experiment showed some evidence for the validity of my hypothesis and didn't contradict the established research, but the oral presentation was still a tense affair. After I had spoken, one of the junior academic staff told me that she had never seen me look so nervous, although what I had to say appeared coherent and well thought out. I simply said "Look up Rod's PhD thesis in the library". And did I get good marks for challenging a scientist's findings and nudging his thinking in a slightly different direction? Well, modesty forbids me ...