
SUNY Downstate Medical Research Library of Brooklyn: Evidence-Based Medicine Course

Therapy and Critical Analysis


The three main questions to ask yourself about any report before applying its findings to your patients are:
 

1) Are the results valid?
2) What are the results?
3) Will the results help me with my individual patient?

 

The data required to answer these questions vary between the domains and, depending upon the type of question, between the types of evidence used.

Are the results valid?

When reading the results of a single clinical trial, the measure of validity is whether the trial lives up to the expectations of a randomized, double-blinded, placebo-controlled clinical trial.

RANDOMIZATION: Almost all reports of randomized clinical trials have a Table 1. This table describes the characteristics of the participants in each arm of the trial. By checking this table, you can make sure that patients were properly randomized (that a particular characteristic is not represented more heavily in one group than in another). In the event that pure chance has placed more people with one characteristic (e.g. more men) in one group than another, you want to make sure that these differences were accounted for. The usual process for this is a sub-group analysis: researchers may recompute the necessary statistics accounting for the larger representation in one group. If the researchers do not report doing a sub-group analysis or do not present the characteristics of their patients, the validity of the study may be called into question.

BLINDING: It is vital that all participants in a study be blinded to the true nature of the "treatment" being received. Clinicians who know whether their patients are taking the true treatment or the placebo may treat their patients differently and thus affect the outcome of the trial. Patients who know they are getting the placebo may not wish to continue the trial. Study personnel collecting data may be inclined to prompt for specific comments if they know the person they are interviewing is taking the treatment being studied. Naturally, there are times when blinding is impossible; it is difficult to fool someone into believing they have undergone surgery when they have not, so allowances must be made. That said, if there is no discussion of blinding, the validity of the study may be called into question.

FOLLOW-UP: All patients must be accounted for at the end of the trial. They must be analyzed in the groups to which they were randomized.  It happens, sometimes, that patients are lost to follow-up for reasons other than those related to the purpose of the trial (a death clearly counts as a failure on the part of the treatment).  These patients must still be accounted for.  An intention-to-treat analysis assigns those patients lost to follow-up to the group to which they were originally randomized.  It counts those patients as failures of the treatment.  If there is no mention of an intention-to-treat analysis, the validity of the study may be called into question.
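As a hypothetical illustration of an intention-to-treat analysis: suppose 100 patients are randomized to a new drug and 8 of them are lost before the end of the trial. In an intention-to-treat analysis, those 8 patients remain in the drug group's denominator and, under the conservative approach described above, are counted as treatment failures; they are not simply dropped from the analysis or shifted to the placebo group. The numbers here are invented for illustration only.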


What are the results?

The results typically reported in an article on therapy are the Absolute Risk Reduction (or Number Needed to Treat) and the Relative Risk Reduction. At the very least, the data required to compute these numbers should be reported. You will also want to check the precision of the results, which is given by the confidence intervals for both numbers.
For a further discussion of these statistics, see the Important Concepts section on Therapy.
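
As a brief worked example (with hypothetical numbers): suppose 20% of patients in the control group have the adverse outcome and 15% of patients in the treatment group do. The Absolute Risk Reduction is 20% - 15% = 5% (0.05), the Relative Risk Reduction is 0.05 / 0.20 = 25%, and the Number Needed to Treat is 1 / 0.05 = 20 patients treated to prevent one additional adverse outcome.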

Will the results help me in caring for my patients?

In addition to the concerns suggested earlier, there are two issues to be aware of when reading an article on therapy.

OUTCOMES: When we discussed formulating a clinical question, we stressed the need to focus on clinically significant outcomes rather than lab tests. However, if the study used lab tests as its outcomes, you need to be sure that these "substitute endpoints" are valid.

BENEFITS: Any decision to institute therapy should weigh the possible benefits against the possible harms. The statistic that most helps you to do this is the Number Needed to Treat.
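
As a hypothetical illustration of such a trade-off: if a treatment had a Number Needed to Treat of 20 to prevent one stroke and a Number Needed to Harm of 200 to cause one serious bleed, then treating 200 patients would be expected to prevent about 10 strokes while causing about 1 serious bleed. The treatment, outcomes, and numbers here are invented purely to show how the comparison works.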

The University of Alberta provides a worksheet to analyze articles on therapy.

 



These criteria apply mainly to the report of a single clinical trial. If the article you are reading is a systematic review or meta-analysis, then you will also need to take the following information into consideration.

COMPLETENESS: Were the criteria used to select articles for inclusion appropriate? We have already discussed the importance of only using clinical trials for questions about therapy. If other types of articles were included (e.g. Cohort Studies), was there a good clinical reason for doing so? Is it unlikely that important, relevant studies were missed? To answer these questions, the authors must have described the process for finding studies and the reasons for inclusion and exclusion in sufficient detail.

VALIDITY: Were individual studies checked for validity? Were the criteria for confirming validity adequately explained?

HOMOGENEITY: This is particularly important for meta-analyses.  Were the patients in each study sufficiently similar that conclusions from one study could be applied to the patients of another?

 The University of Alberta provides a worksheet to analyze systematic reviews.

Diagnosis and Critical Analysis


The three main questions to ask yourself about any report before applying its findings to your patients are:
 

1) Are the results valid?
2) What are the results?
3) Will the results help me with my individual patient?

 

The data required to answer these questions vary between the domains and, depending upon the type of question, between the types of evidence used.

Are the results valid?

PATIENTS: There is no sense in testing a diagnostic test on patients who are unlikely to have the condition (if your attending asks you to perform a TVU on a 50-year-old man, you should ask why). So the first question to ask yourself is whether the patient sample included an appropriate spectrum of patients. Since you want to test the value of a negative result as well as a positive one, you want to make sure that healthy patients were included.

BLIND COMPARISON: You want to compare the test being studied to the reference standard, the recognized preferred test for the condition in question. You want to make sure that BOTH tests were applied independently to ALL patients, and that those interpreting one test did not know the results of the other.

REPRODUCIBILITY: Was the test described clearly enough that you could perform it in your setting? Were the analysis and interpretation of results clearly described? Otherwise, you cannot be sure of obtaining the same results.

What are the results?
As described in the Important Concepts section for diagnosis, the results you'd like to see are the positive and negative Likelihood Ratios. However, it is far more likely that studies will report Sensitivity and Specificity. From these, you can compute the Likelihood Ratios. You will also want to know how precise the results of the study are, as indicated by the confidence intervals.
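
As a brief worked example (with hypothetical numbers): if a test has a sensitivity of 90% and a specificity of 80%, the positive likelihood ratio is sensitivity / (1 - specificity) = 0.90 / 0.20 = 4.5, and the negative likelihood ratio is (1 - sensitivity) / specificity = 0.10 / 0.80 = 0.125.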

Will the results help me in patient care?
In addition to the concerns raised earlier, you will want to be aware of the following issues.

PATIENTS: Are your patients similar enough that the prevalence of the disease in the study population is similar to that in your patients? Is the severity of the disease in the test population similar to patients you are likely to see?
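
A hypothetical illustration of why prevalence matters: a test with 90% sensitivity and 90% specificity gives a positive predictive value (the chance that a patient with a positive result truly has the disease) of 90% when half of the tested patients have the disease, but only about 32% when 5% of the tested patients have it (0.9 × 0.05 divided by 0.9 × 0.05 + 0.1 × 0.95 ≈ 0.32). The same positive result therefore means something quite different in a low-prevalence population. The numbers here are invented for illustration only.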

BENEFITS:  Are there risks associated with the test?  Are these outweighed by the danger of an undiagnosed disease? 

The University of Alberta provides a worksheet for analyzing articles on diagnosis.

Harm and Critical Analysis


 

The three main questions to ask yourself about any report before applying its findings to your patients are:

1) Are the results valid?
2) What are the results?
3) Will the results help me with my individual patient?

The data required to answer these questions vary between the domains and, depending upon the type of question, between the types of evidence used.

Are the results valid?
If the study is a clinical trial, the measure of validity is the same as for therapy.
If the study is a cohort study, the measure of validity is the same as for prognosis.
If the study is a case-control study, then you will need to consider the following issues.

BIAS: Case-control studies are susceptible to recall bias. When you base your research on things that happened in the past, you are assuming that patients' memories and the written records actually represent that past. For example, for a brief period of time you could not count on patients who died of AIDS having been written up that way. Interviewer bias occurs when study personnel, by the nature of their questions, prompt survivors or clinicians for the data they are looking for.

PATIENTS: Except for the exposure under study, were the compared groups similar to each other? Were other known risk factors adjusted for?

RELATIONSHIP: In order to show that an exposure results in a particular outcome, two things must be true. The exposure must precede the outcome, and the risk of the outcome should increase with the degree of exposure (a dose-response relationship).

What are the results?
Studies of risk often produce results in terms of Relative Risk or Odds Ratios. These are discussed further in the Important Concepts section for harm. The precision of the estimates is measured by confidence intervals.
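
As a brief worked example (with hypothetical numbers): suppose 20 of 100 exposed patients and 10 of 100 unexposed patients develop the outcome. The Relative Risk is (20/100) / (10/100) = 2.0, while the Odds Ratio is (20 × 90) / (80 × 10) = 2.25. A case-control study can only yield the Odds Ratio directly, but when the outcome is rare the Odds Ratio approximates the Relative Risk.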


Will the results help me in patient care?
In addition to the issues discussed earlier, you need to consider the following when dealing with a question of harm.

THERAPY: Is there, in fact, a way to reduce or treat the consequences of the exposure?

BENEFITS:  Does the magnitude of the risk justify reducing exposure?  Are there adverse effects of reducing exposure?

The University of Alberta provides a worksheet for analyzing articles on harm.

Screening and Critical Analysis


The three main questions to ask yourself about any report before applying its findings to your patients are:
 

1) Are the results valid?
2) What are the results?
3) Will the results help me with my individual patient?

 

The data required to answer these questions vary between the domains and, depending upon the type of question, between the types of evidence used.


Are the results valid?


To the extent that a screening test is used to identify the presence or absence of a disease, many of the measures of validity are the same as those for diagnosis .

You will also want to know whether there is randomized trial evidence that earlier intervention works. This calls for a fairly complex trial, since the researchers will first have to randomize patients to screening or no screening. Then the patients who test positive will have to be randomized to treatment or no treatment. Because screening is performed on an asymptomatic population, many of the patients who were not screened may never develop the disease, which can skew the results.


What are the results?

More often than not, screening studies don't produce results so much as recommendations.  You will have to determine whether the recommendations are valid.  In support of the recommendations, researchers will often produce such statistics as Relative Risk Reduction, Absolute Risk Reduction, Number Needed to Treat, or even Number Needed to Screen.  These statistics are treated in greater depth in the Important Concepts section on screening.
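
As a brief worked example (with hypothetical numbers): if 0.4% of unscreened people and 0.3% of screened people die of the target disease over ten years, the Absolute Risk Reduction is 0.1% (0.001), the Relative Risk Reduction is 0.001 / 0.004 = 25%, and the Number Needed to Screen is 1 / 0.001 = 1,000 people screened to prevent one death over that period.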


Will the results help me in patient care?

To determine whether to follow the results of a screening study, you will need to determine three things:

1) What are the benefits?
        Is there, in fact, a treatment for this condition?
        Can early detection assist in preventing the condition from occurring?
        Does early detection of untreatable conditions assist in quality of life decisions?

2) What are the harms?
        Does the treatment produce side effects?
        Are there adverse effects of the screening?
        Will there be unnecessary treatment due to false positive results?
        Will there be anxiety generated by the investigations?
        What are the costs and inconveniences incurred during investigations and treatment?
        Is the test sensitive enough to prevent false reassurances for those who actually have the condition?

3) How do the benefits and harms compare in different people and with different screening strategies?
        Does screening affect people differently at different ages?  races? genders?
        Is the risk of disease the same for all people?

 



More often than not, reports on screening take the form of clinical practice guidelines.  Like systematic reviews, these reports carry special concerns of their own.

BIAS: Were the data identified, selected and combined in an unbiased fashion?  Were the criteria for selection and exclusion of data clear?  Did the researchers include data that did not support their theory?

VALIDITY:  Were all important options and outcomes specified?  Were important recent developments included?  Has the guideline had peer review and testing?

RECOMMENDATIONS: How strong are the recommendations? Could the uncertainty in the evidence change the guideline's recommendations? Are the recommendations applicable to my patients?

The University of Alberta provides a worksheet on analyzing practice guidelines.