I’m not sure how many academics read EngineerBlogs, but I’m curious how many have run into the problem of false results. I personally spent a lot of time during my MS trying to reproduce data from various papers, only to find there were problems with the results or the setup. I can remember one paper in particular, from a fairly prestigious group, that presented an antenna design. I used exactly the same software and tried to replicate their design, only to find that my results were considerably different from what was shown in the paper. I finally broke down and emailed one of the authors. Their response was that the dimensions given in the design were not correct. The problems are multiplied, of course, if you’re dealing with different software, as you may not be able to get your simulation results to match up at all. How do you know your differences aren’t a result of the computational algorithms?
I was very interested, therefore, when I heard about an effort to make science more reproducible. On Tuesday, ECN published an article about Science Exchange’s “Reproducibility Initiative.” While the initiative appears to be geared toward the social and medical sciences, the idea is that Science Exchange will find a lab that can repeat experiments to validate the reproducibility of the results. Ideally, the cost of reproducing the results will be only 20% of the cost of the original study.
I think this says a lot about how the funding/publishing system is broken. What funding agency is really willing to fund several groups to do the same work? Part of the reason there is such a push to do ‘new research’ and so little effort to verify reproducibility is that there’s no money for replication. It also doesn’t get you very far in terms of original publications, if you can get it published at all: negative results often aren’t viewed favorably by reviewers.
Personally, however, I think this is a great idea. It is most certainly in the interest of academic researchers to have government agencies fund these validation studies. Compared to how much money is probably wasted by other groups attempting to replicate false results (a significant chunk, according to the article), increasing funding by 20% seems like small change.
The cynical part of me thinks the initiative is doomed to fail over money. Where should that additional 20% come from? Many corporations will pay their own people to replicate experiments, but are they willing to pay additional taxes instead of replicating the research themselves? If industry doesn’t want to help pay for it, where should the money come from? Or should we continue to leave replication in the hands of industry and let academic researchers suffer?
I’ve followed this discussion in the social sciences, particularly in development research. An extra 20%? Why not require grad students to reproduce at least one study or paper during their main research?
This may be ridiculous, but there are two problems with research: funding and review. Funding should come from an independent agency (not government; they are not independent), perhaps a pool that collects funds from companies with research money available, or crowd-funding. The money from the fund should be awarded by all of the researchers voting on submitted proposals. I have not fully thought this out. As far as review goes, a large industry has developed around the publishing of research papers. The papers should be made publicly available on the internet for all who have an interest to see. This would allow youngsters to read these papers and maybe develop an interest in becoming scientists. It would also allow cross-pollination of ideas and provide a wider review. I also believe that the business of research would then become research instead of business.
I think it would be an excellent idea for granting agencies to set aside 10% of all funding for replication studies. People who were dubious about a result could apply for a grant to replicate the experiment.
David’s idea of an “independent agency” is a bit strange—industrial pools of money have been one of the biggest sources of biased research in the past, and they are unlikely to be trustworthy for deciding which research to replicate.
Hello,
Regarding your first problem, replicating results that don’t come close to what you see in the paper, I don’t see a viable solution. I know Chris doesn’t like the IEEE, but certain publications have a proven track record of producing working results. For example, there are lots of circuit publications, but I would have a hard time believing that the results in JSSC are faked.
There are so many variables in an engineering paper that unless you know the author has a proven track record or the paper has been cited several times, you really can’t be sure about the results.
Regards,
Just because an author has been cited several times, or has several publications, even in the most prestigious journals, doesn’t mean they couldn’t commit fraud.
Have you ever checked out the Wikipedia page on Jan Hendrik Schön? He published in the most prestigious journals, the pinnacle of academic publishing. His retractions include:
9 in Science
7 in Nature
4 in APL
6 in Physical Review
plus his PhD was revoked.
Up until 2002, when this all started, he was probably considered THE rock star scientist. Why wouldn’t the editors of Science and Nature believe him?
Hello,
I agree that a healthy amount of skepticism is good, but the fact that you cannot reproduce the results does not make them fake. You read the paper and see whether the authors make a logical argument as to why their design is better in some respect. If you like the idea, you can mimic it, but if your results don’t match, there are many unaccounted-for variables that could make your results worse.
Perhaps mathematics is the only field where a paper cannot be faked, but if the paper is in a reputable journal and the author is widely cited, then I would accept the paper on its merits…at least when it comes to circuits. After all, being peer reviewed means it has passed the smell test. Sure, there are bad apples, but it’s a good screen, at the very least.
Cherish,
If you’re finding that results aren’t what the authors say they are, you’re obliged to let the journal know, especially if you can back it up with results of your own.
Research that shows published results to be false should be made public. That’s why we have things like Comments, Rebuttals, and the like. Otherwise, what’s the point of the peer review system?
Contact the editor of the journal and explain your results. It’s better to make a call and see what you should do from there. The editors are supposed to be the people guiding the journal, and I can assure you they don’t want false results in their journal.
And just an FYI: the higher a journal’s impact factor, generally the higher its retraction rate, because people do try to reproduce the results.