Peer-reviewed article shows peer reviewing is bunk..?

"The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as "serious methodological flaws." A number of possible interpretations of these data are reviewed and evaluated."


Oh dear oh dear.

Reply to
The Natural Philosopher

It's the worst system... apart from all the others.

And since I'm wheeling out Churchill quotes: The maxim 'Nothing avails but perfection' may be spelt shorter: 'Paralysis.'

Theo

Reply to
Theo Markettos

That tends to suggest a weakness with their nonblind reviewing: well-known authors and mates of the reviewers find it much easier to get published. A rough working heuristic is that 10% of the peer-reviewed literature is wrong or will subsequently be revised or discredited.

Even for papers that are technically correct as drafted, last-minute improvements made by the journal's editor to improve readability can sometimes completely invert the meaning of a key claim.


Surprising that the plagiarism wasn't actually picked up more often.

The odds of getting published with a paper *assumed* to be decent but carrying an unfamiliar name and institution seem to be about half those of regular mainstream authors known to the journal and its reviewers: the journals' stated 80% rejection rate implies roughly 20% acceptance for known names, against one acceptance out of the nine disguised resubmissions that went to review (about 11%).

The big weakness is that it means well-known authors can get work published that is not actually that good, thanks to bias in the review process.

It presents a strong argument in favour of having blind reviews. It should not matter if the unknown author is working alone in a cave - the scientific evidence either stands up to scrutiny or it doesn't.

American psychology is probably about as scientific as the other dismal science - economics. I'd expect the hard science journals to be better.

Reply to
Martin Brown

:-)

Yerrs. But it does maybe imply that who wrote it and where it came from are significant factors in reviewing.

Perhaps the answer is to strip the material of originating authors and establishments before reviewing it..

Reply to
The Natural Philosopher


Wasn't the driving force behind Popper's analysis of the scientific method a strong conviction that psychology wasn't a science at all? More 'metaphysics'.

Reply to
The Natural Philosopher

My wife is a scientist at the Sanger Institute. When she was working in Dijon many years ago she knew of one French scientist (in Paris, I think) whose work turned out to be made obsolete or plain nonsense by other research elsewhere. The problem was that he was one of the few people the journals could call on to referee new publications in that field....

...which of course meant that any papers discrediting or making irrelevant his work were highly unlikely ever to see the light of day. I think it was years, decades, until his work was finally accepted as being debunked. He lived for many years in a position of power and influence over others.

Even afterwards, the science journals don't retract or "unpublish" the nonsense. It's there for ever.

From what Alena describes, I think the process of peer reviewing is pretty flawed and indeed open to all sorts of abuse and problems.

Not only that, but in many institutions a boss will insist on being added as an author to a paper he or she had little involvement in. He or she may insist on putting chums' names on it too, a bit of mutual back-scratching that disadvantages the scientists who actually did the work by bumping their names further down the pecking order in the list of authors. On a paper one wants to be either first or last author in the list (those two people being the one in charge and the one with the ideas and the work - or it might be vice versa, I forget the order).

So, scientists can be hindered by manipulative superiors and peer reviewers - a double-edged sword.

My understanding is that in the field of maths and physics these problems are increasingly being eliminated by a growing trend of self-publication and open dissemination/review, bypassing the so-called professional journals and their peer-review editing system, yes?

Myself, I think the whole peer-reviewing thing stinks.

Michael

Reply to
Michael Kilpatrick

formatting link

In the very early days of word processors ISTR one of my colleagues submitting a paper to a fairly prestigious journal and being told to shorten it by about 20%. He reformatted it in 10 pt instead of 12 pt and they accepted it.

Reply to
newshound

The Natural Philosopher wrote: [snip]


I've always had a jaundiced view of peer review. This started when a referee returned a paper stating that it omitted "key references". The references supplied by the referee were irrelevant and all published by the same person; my guess is that that person was the referee.

I published in another journal rather than give the scrote the pleasure of gaming the citation index.

Reply to
Steve Firth

I think you're being optimistic. The psych papers I've read have almost all been bunk. The subject is mostly pseudoscience, with articles mostly produced by people who fail to get the basics straight, fail to offer arguments that are actually logical, and come up with conclusions that are more guesses than anything established by the data. Let's face it, if you've got a proper set of critical thinking skills you're not likely to be working in psych.

There is always a drive to publish and be seen as great, and always a drive to approve other published work so that the odds of others approving yours are much better. There's also always the drive to take power over and profit from ever more people's lives, a very strong trend in psych.

At the end of the day, there's really no aspect of psych that stands up to scrutiny.

NT

Reply to
meow2222

I'd say this is not actually a surprise; wasn't there something similar done by New Scientist a while back, with similar outcomes?

I think it was the research into hidden bias that showed even people trying to be fair were not aware of the bias they were applying, and it's a sobering thought that we are in fact all biased in some way.

Brian

Reply to
Brian Gaff

But what is the alternative? You can't have laymen, or even scientists from other disciplines, reviewing research in specialist fields about which they may know little or nothing. How can they judge its merits? I tend to agree it's a poor system and open to abuse, but like democracy, the alternatives are worse.

Reply to
Chris Hogg

There's your problem; psychology.

Reply to
Huge

A mate of mine had that problem with his PhD. His research showed the accepted wisdom from a Grand Old Man in the field was bunk.

Reply to
Huge

There doesn't appear to be an obvious one. What can change is the naive assumption that anything peer reviewed is sound science: in the main, it's bunk. Policy makers need to quit being naive.

NT

Reply to
meow2222

Reply to
Java Jive

Certainly does.

The way papers are written also invites misunderstanding. One paper recently said that, according to their measure, the medicine did not improve quality of life, but when patients were asked "Did that medicine improve your quality of life?" around half answered "Yes!" and only a small percentage said "No". (Their measure was a really crummy questionnaire that is widely used.)

When mentioned elsewhere in future the impression will be given that the medicine did not improve QoL.

The above was a real shame because the question the paper asked, and was attempting to answer, is important. It was asking: is A better than B? Almost the whole medical establishment has assumed that B must be better than A, yet there has never been any trial to demonstrate that in the 50 to 60 years that B has come to totally dominate over A. That is despite patients who actually manage to get hold of A often experiencing a huge improvement. It is so very often put down as placebo effect or similar.

Reply to
polygonum

Current practice when submitting research to reputable journals is that you are permitted to nominate individuals whom you would prefer not to act as reviewers for your work (giving reasons). The publisher doesn't have to go along with it, but it's a good step in the right direction at the very least.

Reply to
Lobster


Public review?

Where people can put up comments on the paper in a way that allows a) corrections; b) enhancements; c) discussion.

Not at all sure how you can restrict the process to those who are suitably qualified. But maybe even that is not so very important. After all, your humble servant here has actually roundly criticised a BMJ article - and that was at least published on-line. I felt I knew enough to do so, but in formal academic terms am totally unqualified.

Reply to
polygonum

Not particularly, given that this article was published over 30 years ago - long before computers were used for this. Nowadays all the major publishers routinely run submitted material through plagiarism detection software.
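
For what it's worth, the basic idea behind those screening tools is nothing exotic: chop both texts into overlapping word sequences and see how much they share. A minimal sketch in Python (the names, the shingle length and the threshold below are made up purely for illustration, not how any actual publisher's software works):

# Toy overlap check: split each text into word n-grams ("shingles")
# and measure how much the two sets share. Purely illustrative.

def shingles(text, n=5):
    # lower-case, split on whitespace, collect every run of n words
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=5):
    # shared shingles divided by total distinct shingles (0.0 to 1.0)
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

# hypothetical new submission vs. an earlier published text
new_paper = "we selected twelve already published research articles for the test"
old_paper = "as test materials we selected twelve already published research articles for the test"
if jaccard(new_paper, old_paper) > 0.4:
    print("heavy overlap - flag for the editor")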

And these days, many journals do indeed do blind reviews (I'd be interested to know what the argument is against always doing it that way).

Reply to
Lobster

The problem with that is that while it'll take in more valid criticism, it also takes in vast swathes of comment of no value. It becomes unworkable.

But narrow it down and it simply becomes the self-serving bs-go-round that much research is today.

NT

Reply to
meow2222
