3. The Unscientific Method

The Truth Can Be Skewed

Scientific studies allow science to expand its knowledge, from finding connections between seemingly unrelated entities to testing and explaining phenomena that the world does not yet understand. Used correctly, these studies let scientists achieve feats that would otherwise be impossible: new cures can be found, larger trends can be identified, and each discovery can seed even more studies that elaborate on it. However, the massive influence of scientific studies is a double-edged sword. The “truthful” studies we believe because they are backed by scientific research may be completely wrong. Studies are fallible, and studies that push false claims can skew the truth and advance an agenda. This trend is detrimental to both the scientific community and the public.

As one might expect, scientific studies follow a rigid system that details what a study must accomplish to make a claim. For a study to support a claim (scientifically known as a hypothesis), it must show an undeniable relationship between the hypothesis and the data that is collected. To do this, scientists first form what is known as a null hypothesis, which assumes that there is no relationship between the two variables. For example, if the hypothesis is that a newly made drug increases dopamine levels, the null hypothesis would be that the drug produces no change in dopamine levels. The scientists then attempt to support the actual hypothesis by rejecting the null hypothesis.

The data, gathered under the carefully designed tests and conditions set by the researchers, is then analyzed to see whether it is statistically significant enough to reject the null hypothesis. This analysis essentially determines whether the data arose from random chance or whether the claimed effect is the reason behind it. Researchers use various methods to calculate the probability of obtaining data at least as extreme as what they observed, assuming the null hypothesis is true; this probability is known as the p-value. For a result to be called statistically significant, the p-value must be lower than 5 percent. This magic number of 5 percent is key, as any study that produces a p-value below it is deemed valid. Because the observed data would be improbable if the null hypothesis were true, the null hypothesis is rejected, and the scientist concludes that the actual hypothesis is supported. Any p-value of 5 percent or higher cannot reject the null hypothesis and cannot support the claim the study was trying to make, forcing the scientist to either retry the study or change the claim altogether.
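
To make the procedure concrete, here is a minimal sketch of this workflow in Python. The drug, the dopamine numbers, and the sample sizes are all hypothetical; the point is only to show how a p-value is computed and compared against the 5 percent threshold.

```python
# A minimal sketch of the hypothesis-testing workflow described above,
# using made-up numbers: simulated dopamine measurements for a treated
# group and a control group. The drug and its effect are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Null hypothesis: the drug does not change dopamine levels.
control = rng.normal(loc=100.0, scale=15.0, size=30)   # no drug
treated = rng.normal(loc=110.0, scale=15.0, size=30)   # drug (assumed effect)

# Welch's t-test: the p-value is the probability of data at least this
# extreme arising if the null hypothesis were true.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

if p_value < 0.05:
    print(f"p = {p_value:.4f}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis")
```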

Intentional errors have become a major issue as scientific studies, which people take at face value, become misleading or entirely untrue and flood the scientific journals. The proportion of published claims is distorted by effects such as publication bias and the file-drawer effect. Megan L. Head, in the article “The Extent and Consequences of P-Hacking in Science,” defines publication bias as “the phenomenon in which studies with positive results are more likely to be published than studies with negative results.” The file-drawer effect is the tendency for scientists to refrain from publishing negative studies, often because such studies bring in little money or attention. These effects are very detrimental, as they leave negative studies noticeably underrepresented in the published record. In “The File Drawer Problem and Tolerance for Null Results,” Robert Rosenthal describes this effect by saying, “the extreme view of the ‘file drawer problem’ is that journals are filled with the 5% of the studies that show Type I errors, while the file drawers are filled with the 95% of the studies that show nonsignificant results.” This is a direct result of scientists pushing the studies that innately get more attention, since intriguing positive results will always be more popular than negative ones.
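
Rosenthal’s extreme scenario is easy to reproduce in a toy simulation. The sketch below (all numbers hypothetical) runs thousands of studies in which the null hypothesis is actually true and “publishes” only the significant ones, so every published result is a Type I error:

```python
# A toy simulation of Rosenthal's "file drawer" scenario: every study
# tests a true null hypothesis, but only significant results are published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
published = 0
n_studies = 10_000

for _ in range(n_studies):
    # Both groups come from the same distribution: there is no real effect.
    a = rng.normal(size=25)
    b = rng.normal(size=25)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:          # "positive" result -> gets published
        published += 1    # every one of these is a Type I error

print(f"{published} of {n_studies} null studies reach journals "
      f"({100 * published / n_studies:.1f}%); the rest stay in the drawer")
```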

However, the bias can be even more direct through something known as p-hacking. Because a study’s conclusion rests primarily on comparing the p-value against the significance threshold, scientists can manipulate how the p-value is computed from any given data. In the web article “Science Isn’t Broken,” author Christie Aschwanden demonstrated how easy it is to find a statistically significant result for many different claims using the same data. In her interactive simulation, the reader chooses which political party, Republican or Democratic, the hypothesis should favor. Aschwanden then shows that by keeping or omitting parts of the data (such as which officeholders count as politicians and whether recessions are included), different combinations of the same data can support either hypothesis. The fact that p-hacking can prove completely opposite ideologies from the same data pool shows the massive influence it can have.
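
The mechanics behind Aschwanden’s demonstration can be sketched in a few lines. This is not a reproduction of her tool, just a hypothetical illustration in the same spirit: take one data set with no real effect, then keep re-slicing it by different combinations of control variables until some slice clears the 5 percent threshold.

```python
# A sketch of p-hacking on null data (all variables hypothetical):
# re-slice the same data set until some subset clears p < 0.05.
import numpy as np
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(2)

# One data set, four arbitrary binary "control" variables, and an outcome
# that is pure noise: no honest slicing should find an effect.
n = 200
group = rng.integers(0, 2, size=n)            # e.g., party A vs. party B
controls = rng.integers(0, 2, size=(n, 4))    # e.g., era, recession, chamber...
outcome = rng.normal(size=n)

best_p = 1.0
for k in range(5):
    for subset in combinations(range(4), k):
        # Keep only rows matching this combination of control variables.
        mask = np.ones(n, dtype=bool)
        for c in subset:
            mask &= controls[:, c] == 1
        a = outcome[mask & (group == 0)]
        b = outcome[mask & (group == 1)]
        if len(a) > 2 and len(b) > 2:
            _, p = stats.ttest_ind(a, b)
            best_p = min(best_p, p)

print(f"best p-value after trying many slices: {best_p:.4f}")
# With enough slices, best_p often dips below 0.05 despite no real effect.
```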

Because this rigid system governs their livelihood, scientists place enormous value on publication. Since innovation comes directly from scientists, they are under intense pressure to publish, and that pressure has directly produced publication rates that seem to climb without end. As a result, a large portion of studies contain only partial truths because of the many biases, intentional or not, that they are subjected to. The reason there is so much potential for bias is the fractured system that scientific studies are built on.

Due to the emphasis on quantity over quality in both payment and prestige, scientists are less inclined to publish the full extent of what their studies could have achieved, so more and more faulty studies with intriguing, misleading theses accumulate. To combat this, replication tests are very valuable: they attempt to rerun a study exactly in order to test its validity. These tests are essentially a fail-safe, in which a scientific group independent of the original repeats everything the study did to see whether it produces similar results. Erick Turner, formerly of the FDA (the Food and Drug Administration), reported in 2008 on a re-examination of 74 FDA-registered studies supporting the effectiveness of numerous antidepressants. Of those 74 studies, 23 showed no evidence of publication at all, leaving 51 published studies to examine. Forty-eight of those 51 published studies originally reported positive results, yet the FDA’s own reviews found that only 38 of the original 74 studies had positive results, undermining published studies that were effectively selling ineffective antidepressants.
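
The arithmetic behind those figures makes the distortion plain: the published literature suggested a far higher success rate than the FDA’s reviews supported. A quick check, using only the numbers quoted above:

```python
# Figures taken from the essay's summary of Turner's 2008 findings.
total, unpublished = 74, 23
published = total - unpublished          # 51 published trials
published_positive = 48                  # appeared positive in journals
actually_positive = 38                   # positive per FDA reviews

print(f"apparent success rate in journals: {published_positive / published:.0%}")  # 94%
print(f"actual success rate across all trials: {actually_positive / total:.0%}")   # 51%
```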

If replication is so valuable for catching incorrect results, there should not be so many published studies making essentially false claims. Sadly, these faulty studies are unlikely to be corrected, as there is little incentive within the scientific community to replicate tests; this shortfall is known as the replication crisis. Even though the FDA performs such re-examinations, the agency is not representative of the community as a whole, since it is a government-funded organization whose primary mission is to regulate problems such as biased studies. To keep harmful products from reaching patients and to reduce the need for replication tests, organizations such as the FDA impose very rigid requirements. However, regulatory agencies like the FDA are simply not enough to keep the influence of drug companies out of scientific studies.

Peter Barton Hutt’s paper, “Untangling the Vioxx-Celebrex Controversy: A Story about Responsibility,” describes the exact process by which the FDA approves a drug. The FDA first requires what is known as an NDA, or New Drug Application. The new drug then undergoes an Investigational New Drug (IND) review and three phases of safety testing. The IND review checks whether the production and analysis included “protection of the human research project, animal studies completed and analyzed, scientific merit, and qualifications of the investigator.” From the IND, the drug then undergoes Phases I, II, and III. Phase I tests the drug on a small number of subjects to check for adverse side effects and, if successful, moves on to Phase II. Phase II administers the drug repeatedly to a small group, which leads to Phase III, in which the drug is given to thousands of patients under many different methodologies to check for drug interactions and reactions. It is estimated that this entire process takes around 7 to 13 years before the application is finished. After the application is submitted, the FDA convenes a committee to review the new drug and either authorizes it or stops the process there.

This very methodical authorization system should be able to screen out unsafe drugs after numerous checks. However, the unreliability of the FDA was completely exposed by the Vioxx controversy. DrugWatch, in the web article “Vioxx Recall – Merck and FDA,” discusses the painkiller Vioxx and how it was marketed to many different doctors with the primary goal of getting the drug to as many patients as possible. Yet within only five years, this seemingly harmless drug was found to more than double the risk of heart attacks and death. Eventually, in 2004, Merck recalled Vioxx after the drug was put in the spotlight. DrugWatch described the havoc Vioxx caused, with over 38,000 deaths, as potentially “the worst drug disaster in history.”

The drug went through the FDA’s entire rigid approval process and was approved in 1999. Not once did the FDA stop the drug until the heart problems began to appear and an analysis was finally made. By the time the FDA caught on, Vioxx had already damaged thousands of lives. The disaster occurred because Merck manipulated the study’s data: to show the drug was safe enough for use, Merck’s scientists omitted the detrimental data pertaining to patients with heart complications. Otherwise, the drug could not have been released. In fact, Hutt stated that “the General Accounting Office found that of 198 drugs approved by the FDA between 1976-1985, about half had serious post-approval problems.”

Not only that, but the controversy also shed light on corruption within the FDA. It was noted that Merck persuaded the FDA to remove warning labels about digestive issues with Vioxx before the drug was even approved. The FDA also ignored numerous doctors’ complaints about their patients’ heart problems until 2002, when a study showed the relationship between heart complications and Vioxx. Even when that integral piece of information came out, all the FDA did was add a label. The FDA had numerous chances to prevent the disaster, and the organization was built to do just that. However, the bias Merck was pushing to validate its product slipped through, which shows that even the FDA struggles to mitigate the effect of bias in scientific studies.

As noted before, scientists’ pay incentivizes them to push whatever claims will help their careers. If researchers could sustain themselves by running replication tests, they would run them. However, replication tests carry no monetary value, as they only restate what someone else has already stated, so scientists avoid the very test that counteracts faulty claims. Because scientists are only human and will tend to prioritize their own living at the expense of integrity, they would rather push out a swarm of theses for money. This eliminates the fail-safe meant to weed out faulty studies, which means the number of fundamentally dishonest studies will steadily increase with little resistance.

This phenomenon is very detrimental to the future of science. In the article “Pressure to ‘Publish or Perish’ May Discourage Innovative Research, UCLA Study Suggests,” author Phil Hampton discusses a study led by Jacob Foster that measured how much risk and innovation published studies take on. Foster found that in the fields of biomedicine and chemistry, more than 60% of the studies analyzed explored no new connections. This means innovation is slowly grinding to a halt under the flawed system. Because scientists fixate on publication to maintain a steady income, they pursue whatever gives them the safest income. Even though a more innovative idea might produce a breakthrough that nets massive publication revenue, there is an even greater chance the study will not yield a positive result, which would leave the scientist with nothing. This risk-versus-reward scenario forces scientists to choose what they value more: being put in a textbook or eating the next day. The non-innovative route becomes the favored choice because scientists have no safety net that could justify the risk. Thus innovation slowly declines, and this is one of the worst possible outcomes, since only innovation lets science make new leaps and bounds. If innovation slows down, science slows down as well.

These issues could seemingly be solved with money, so funding from outside organizations appears to be one of the best solutions: money given to researchers removes the restraint of income, so better tests can be run. However, this seemingly harmonious relationship becomes detrimental because both parties benefit too much. A claim from a scientific study is very valuable to a business. People’s faith in the rigidity of scientific studies leads them to believe essentially anything a study proves, so companies are willing to invest heavily in studies that cast whatever the company is selling in a positive light, an investment that ultimately returns more money down the road. This interest creates a cycle that makes the problem worse: a business wants to push its claims to gain money or popularity, so it pays for studies and inevitably reaps the benefits, and because the business pays for the studies, scientists are further enticed to produce or alter claims that prove the business’s value in exchange for a better living.

This cycle results in countless biased articles that unjustifiably prove business claims at the public’s expense. Pharmaceutical and sports-drink companies are repeatedly caught in this obvious malpractice. For example, in the study “Association of Funding and Conclusions in Randomized Drug Trials,” Bodil Als-Nielsen examined 370 randomly selected drug trials to see whether funding by a nonprofit versus a for-profit organization affected the results. With only 16% of studies recommending the drug when funded by a nonprofit organization, against 51% when funded by a for-profit organization, the effect of funding sources is painfully obvious.
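
As a rough check that such a gap could not plausibly arise by chance, one can run a standard chi-square test on the two proportions. The 370-trial total comes from the study; the even split between funding types below is a hypothetical assumption for illustration only.

```python
# A rough significance check on the 16% vs. 51% gap reported above.
# NOTE: the even split of the 370 trials between funding types is an
# assumption made for illustration, not a figure from the study.
from scipy import stats

nonprofit_n, forprofit_n = 185, 185            # assumed split of 370 trials
nonprofit_yes = round(0.16 * nonprofit_n)      # trials recommending the drug
forprofit_yes = round(0.51 * forprofit_n)

table = [
    [nonprofit_yes, nonprofit_n - nonprofit_yes],
    [forprofit_yes, forprofit_n - forprofit_yes],
]
chi2, p, _, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p:.2e}")  # far below 0.05
```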

Biased studies can be detrimental even after they have been disproven. America has kick-started a movement of people who oppose vaccination and refuse to have their children vaccinated. The movement grew in popularity after Andrew Wakefield released a study purporting to show a correlation between vaccines and autism. However, the study was completely biased toward Wakefield’s claim: not only did it rely on very specific conditions to make the claim, but Wakefield was also accused of violating ethical rules. As Steven Novella recounts in the article “The Lancet Retracts Andrew Wakefield’s Article” on Science-Based Medicine, following the UK General Medical Council’s Fitness to Practise Panel ruling of January 28, 2010, it was officially stated that “it has become clear that several elements of the 1998 paper by Wakefield et al are incorrect.” Even though the original paper has been debunked repeatedly, the movement remains strong and unwavering. Once the headline of an audacious claim is made, the study’s impact remains regardless of the truth, giving even more power to biased claims.

This corruption of scientific studies must be addressed. Many scientists are aware of these biases but feel helpless to do anything about them, and the current system sets a precedent that dissuades scientists from reaching their highest potential. The issue can be resolved only if money stops being the primary factor. Giving scientists a steady income would incentivize them to work on what they deem important rather than what is safe, and the potential for corruption would disappear. As a result, scientific journals would fill with unbiased, reliable information, allowing science to progress as it never has before.

Works Cited

Head, Megan L., et al. “The Extent and Consequences of P-Hacking in Science.” PLoS Biology, 2015. Web. 18 Nov. 2016.

Aschwanden, Christie. “Science Isn’t Broken.” FiveThirtyEight, 19 Aug. 2015. Web. 15 Nov. 2016.

Rosenthal, Robert. “The File Drawer Problem and Tolerance for Null Results.” Psychological Bulletin 86.3 (1979): 638-641. PsycARTICLES. Web. 15 Nov. 2016.

Turner, Erick H. “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy — NEJM.” New England Journal of Medicine. N.p., 17 Jan. 2008. Web. 28 Nov. 2016.

Hampton, Phil. “Pressure to ‘Publish or Perish’ May Discourage Innovative Research, UCLA Study Suggests.” UCLA Newsroom, 08 Oct. 2015. Web. 18 Nov. 2016.

Als-Nielsen, Bodil, et al. “Association of Funding and Conclusions in Randomized Drug Trials.” JAMA, The JAMA Network, 20 Aug. 2003. Web. 01 Dec. 2016.

Hutt, Peter Barton. “Untangling the Vioxx-Celebrex Controversy: A Story about Responsibility.” N.p., 4 May 2005. Web. 18 Nov. 2016.

“Vioxx Recall – Merck and FDA.” DrugWatch. N.p., n.d. Web. 18 Nov. 2016.

Novella, Steven. “The Lancet Retracts Andrew Wakefield’s Article.” Science-Based Medicine, 03 Feb. 2010. Web. 25 Nov. 2016.

5 Responses to 3. The Unscientific Method

  1. davidbdale says:

    In a Reply to this page, answer these questions about your selected Research Argument.
    1. What is the counterintuitive thesis of the essay?
    2. What is the best evidence the author supplies?
    3. What strong counterargument does the author refute?
    4. Which are the most persuasive sources cited?
    5. What one big improvement would you recommend for a Rewrite before the final grade?
    6. How would you grade this essay: A, B, C, D, F.

  2. romanhsantiago says:

    1. Scientific studies may be completely wrong yet still fallible in efforts to push an agenda.
    2. The evidence the author provided was the story of the Vioxx recall when a seemingly harmless painkiller that went through the phases of FDA testing and passed and when was released increased heart problems and death in patients.
    3. The counterargument that the author is refuting is that the FDA’s phase system is rigorous enough to weed out any harmful drug. However he argues scientific studies can be biased and corrupt if the money and the agenda is there.
    4. The most persuasive source cited are “Vioxx Recall – Merck and FDA.” DrugWatch. N.p., n.d. Web. 18 Nov. 2016. it did a really good job of explaining the consequences of skewed testing
    5. Attempt to better explain the scientific part of it whether it be the study or medicine so the reader can better understand and follow the issue.
    6. A

    • dunkindonuts10 says:

      1. Claim-science helps expand its knowledge leading to new testing to achieve something unimaginable. On the other hand tests can be found to be wrong
      2. Not necessarily reliable yet there are many tests that are created in order to get these medicines done correctly
      3.the medicine that was created ended up killing/harming about 40,000 people, there is a flip side to the medicine if all tests are not done and recorded correctly
      4. Persuasive sources-Erick Turner from the FDA- he is from a government funded organization whose job is to make sure everything is reliable and safe for the people
      5.first couple paragraphs got me lost, even though they were sharing valuable information, to what the writer was talking about but then gradually got me more engaged as they went on
      6. B

      • davidbdale says:

        Dunkin, you replied to Roman’s Reply instead of starting your own.
        1. That’s confusing, but you might win a tie-breaker.
        2. OK
        3. Does this evidence suggest (as the author claims) that the harm resulted from DELIBERATE “cooking” of the results?
        4. I see your point. Turner was willing to call out the FDA.
        5. Agreed. Within one paragraph, readers should have no question about the author’s thesis and the direction the argument will take.
        6. Tough grader!
        3/3

    • davidbdale says:

      1. Sorry, Roman. I don’t understand the meaning of “wrong yet fallible.”
      2. That’s pretty good, but what is an acceptable risk? Remember the 3 in one million Polio vaccines that result in childhood paralysis?
      3. Sounds just right.
      5. You and I are both in favor of better explanations.
      3/3
