
BIO 482C - Human Genomics

Tips for evaluating articles

The best way to evaluate the quality of an article:

Replicate the study that's described in the article to see if you get the same results and conclusions.

The next best way to evaluate the quality of an article:

Thoroughly read the article and evaluate whether the study design and methods are robust. Evaluate the results for validity and determine whether the conclusions follow from the study design, the methods used, and the results gathered. Also check the quality and appropriate use of the references cited in the source.

The next, next best way to evaluate the quality of an article: 

Use heuristics. A heuristic is a mental shortcut for dealing with a problem: a more practical, realistic method that is not guaranteed to be optimal or perfect, but is often good enough. Below are some commonly used heuristics for judging the quality of a journal article:

Publisher:
- Is it well known and reputable?

Journal:
- Is it predatory?
- Is it peer-reviewed?
- What is its impact factor?
- How stringent are the submission requirements?

Article:
- Does it have good citation metrics?
- How rigorous is the study type and/or methods?

Author:
- Does the author have a good h-index rating?
- How prestigious is the author's employer?
- Has the author ever been listed on RetractionWatch?
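Two of the metrics above, the journal impact factor and the author h-index, have simple numeric definitions. Here is a minimal sketch of both calculations using made-up numbers (real values come from curated citation databases, not hand counts):

```python
def impact_factor(citations_to_prior_two_years, items_published_prior_two_years):
    """Journal impact factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_to_prior_two_years / items_published_prior_two_years

def h_index(citation_counts):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical journal: 1200 citations to 400 items from the prior two years.
print(impact_factor(1200, 400))      # 3.0

# Hypothetical author with six papers and these citation counts.
print(h_index([25, 8, 5, 3, 3, 1]))  # 3
```

Note that both numbers are easy to compute but, as the problems listed below show, easy to game and hard to interpret across fields.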

Problems with the publisher heuristic include:

1) It takes experience to recognize reputable publishers. 

2) Even reputable publishers sometimes do disreputable things.

3) This heuristic will lead one to discriminate against lesser-known but perfectly good publishers.

Problems with the journal heuristics include:

1) It's not always easy to recognize a predatory journal.

2) The peer-review process is more rigorous for some journals than others.

3) The peer-review process has lots of flaws.

4) Journal publishers regularly manipulate their impact factors to make them as high as possible.

5) It's time consuming to research differences in rigor for submission requirements, and it's not a guarantee of quality.

Problems with the article heuristics include:

1) Both authors and journal publishers manipulate citation metrics.

2) It takes time to accumulate citations, so newly published articles have low counts regardless of quality.

3) Study type and methodology rigor may not be easy to determine.

Problems with the author heuristics include:

1) Authors can manipulate their h-index.

2) A low h-index doesn't mean the author is a bad researcher.

3) It takes experience to recognize prestigious employers.

4) Even authors working at prestigious institutions can put out bad work.

5) This heuristic will lead one to discriminate against less prestigious but perfectly good employers.