# How to Critically Read Quantitative Research Using These 10 Questions

- April 17, 2017
- Posted by: Mike Rucker
- Category: Research

In some ways, appraisal of quantitative and qualitative work follows the same guideline: we need to establish whether a study has the rigor required to trust its results and conclusions (regardless of whether they are positive or negative). While there is some overlap, there are also some pointers specific to quantitative research methods. If you need help evaluating the quality of a qualitative paper, please read this post.

Regarding critically reading quantitative research, below are some principles you can apply when reading (or writing) a quantitative article or paper to better understand its strengths and weaknesses.

**Has a study design been identified?**

The write-up of the research should tell us what type of design was used, so the reader can assess whether it was an appropriate choice and whether it was applied correctly. Common designs in quantitative research include the cross-sectional study, randomized controlled trial (RCT), cohort study, and case-control study, to name just a few.

**Are hypotheses or research questions clearly stated?**

Generally, there should be mention of the hypotheses or research question(s) at the end of the introduction section. In this way, we can understand the focus of the research better, as well as the choice of statistical methods.

**Is the sample representative?**

The work needs to describe the sample group so the reader can assess whether it is representative of the population being studied. Ideally, this should be a random (probability) sample. Since this is not always possible, if the authors used a non-probability sample, such as a convenience sample, they need to explain why this was necessary.
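To make the distinction concrete, here is a minimal Python sketch (with hypothetical participant IDs) of simple random sampling, the textbook form of probability sampling in which every member of the population has an equal chance of being selected:

```python
import random

# Hypothetical population of 1,000 participant IDs
population = list(range(1000))

# Simple random (probability) sampling: each ID has an equal
# chance of selection. Seeded here only for reproducibility.
rng = random.Random(42)
sample = rng.sample(population, k=100)

print(len(sample))       # 100 participants drawn
print(len(set(sample)))  # sampled without replacement, so no duplicates
```

A convenience sample, by contrast, would simply take whoever is easiest to reach (say, the first 100 people who respond), which is why its representativeness must be justified.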

**Is the sample of sufficient size?**

The method section should tell the reader whether the sample size is appropriate and has enough statistical power. This means the researchers took steps to include enough participants to have a reasonable chance of detecting an effect of the expected size, if one truly exists.
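As a rough illustration of what a power calculation involves, the sketch below uses the standard normal-approximation formula for comparing two group means, n = 2((z_α + z_β)/d)² per group, where d is the standardized effect size. This is a simplified teaching example, not a substitute for dedicated power-analysis software:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a
    standardized effect of `effect_size` (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # z for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (Cohen's d = 0.5) at 80% power, alpha = 0.05
print(sample_size_per_group(0.5))  # 63 per group
```

Note how the required sample size grows quickly as the expected effect shrinks: detecting a small effect (d = 0.2) requires several hundred participants per group.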

**Is there a control group?**

Some quantitative designs use a control group; however, this is not always necessary. If the chosen design calls for a control group (for example, in controlled trials and case-control studies), it should be sufficiently described. Sometimes matching is also applied, so that cases and controls are matched on characteristics such as sex, income, etc.
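The idea behind matching can be sketched in a few lines of Python (the participants and the single matching variable here are hypothetical): each case is paired with a control that shares the matching characteristic, so that the characteristic cannot explain differences between the groups.

```python
# Hypothetical cases and pool of potential controls, matched on sex
cases = [{"id": 1, "sex": "F"}, {"id": 2, "sex": "M"}]
controls = [{"id": 10, "sex": "M"}, {"id": 11, "sex": "F"}, {"id": 12, "sex": "F"}]

pairs = []
remaining = list(controls)
for case in cases:
    # Take the first still-unused control with the same sex as the case
    match = next(c for c in remaining if c["sex"] == case["sex"])
    remaining.remove(match)
    pairs.append((case["id"], match["id"]))

print(pairs)  # [(1, 11), (2, 10)]
```

Real studies typically match on several variables at once, but the principle is the same.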

**Are inclusion and exclusion criteria presented?**

Inclusion and exclusion criteria should be clearly explained. Detail about these criteria helps the reader understand the sampling strategy and the sample's representativeness.

**Has validity been addressed?**

Researchers need to give the reader information about the validity of the measurement that was used. The research write-up should mention at least one (and sometimes more) type of validity, such as face, content, or construct validity.

**Have the drop outs been considered?**

If participants dropped out of the study, there should be some information on the dropouts and why they decided to leave.

**Has the method of statistical analysis been appropriately chosen?**

In most cases, one or both of two types of statistical analysis will be used in an article: descriptive and/or inferential statistics. Descriptive statistics are used to describe the sample (e.g., gender, age, marital status) and give the reader information on the basic features of the data (e.g., mean, median, standard deviation). Inferential statistics, on the other hand, are used to draw conclusions and examine relationships between variables. A reader would expect a good-quality quantitative article to go beyond descriptive statistics and give us deeper insight into a certain process and/or phenomenon. Also, if a researcher wants to make generalizations about a population broader than the sample, inferential statistics are almost always required.

**Have potential biases been considered?**

For example, have those who collected the data been “blinded” (e.g. have measures been taken to mitigate the testers’ preferences and/or preexisting expectations to reduce bias)? Could there be a selection bias (if there was no randomization)? Any points in this regard that have come up during the research process should be addressed in the limitations section of the paper.

This is just an introduction to some of the concepts that need to be considered when appraising quantitative work, and the list is hardly exhaustive. To better understand each of these concepts, you should probably do some additional reading. Of course, no article is perfect, and it is always possible to find potential weaknesses, even in the most rigorous of studies. With time, you will gain more confidence evaluating research and will be able to establish which studies are relevant to you and your field of work.