Original Medthority Content

Looking under the surface of clinical trials: Are the findings valid and relevant to your patients?

Last updated: 19th Mar 2024
Published: 30th Mar 2023
Author: Debra Kiss, PhD; Senior Medical Writer at EPG Health.

Over the past decade there has been a seemingly endless list of concerns raised about the quality, accuracy and integrity of clinical trial publications1-10. These problems are not rare, and continue to be reported even in high-impact journals2-5,11-14. As clinical trial findings can influence real-world treatment decisions, these issues could result in the use of ineffective treatments, or may even lead to patient harm2,12,15,16.

In this final article of a 3-part series, we describe some approaches that may help healthcare professionals adopt a more critical approach to reading clinical trial publications. These approaches may enable a more informed interpretation of a study’s findings. They may also assist healthcare professionals in determining whether findings are of sufficient quality and standard to inform their clinical decision-making.

Consider the foundation of the study: Are the design and methods valid?

Before looking at the results of the study, consider the design and methods, as they provide important context and can indicate if results are valid17,18. This may involve consideration of17-19:

  • The trial design and whether it is appropriate to address the clinical question
  • Whether randomisation was valid, and if potential confounders were balanced between groups in baseline characteristics (including prognostic factors and co-administered interventions)
  • Whether participants were analysed in the groups to which they were allocated (intention-to-treat, ITT); or if a per-protocol or modified ITT analysis was used, was this adequately justified and appropriate?
  • The length of follow-up, and extent of loss to follow-up (particularly between groups)
  • Whether the study had sufficient statistical power to detect a minimum important difference in the primary outcome of interest

A sample size calculation is critical to determine the target number of participants in an RCT. This target should be met during recruitment, and maintained at each follow-up time point17. A high number of dropouts, crossovers or losses to follow-up can cause a study to lose adequate power19.

Underpowered studies can be difficult to interpret (particularly when not statistically significant); whereas over-powered studies may yield results that are not clinically important19
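To illustrate what a sample size calculation involves, the sketch below estimates the per-group sample size needed to compare two proportions, using the standard normal-approximation formula. The event rates (20% vs 10%), significance level and power are hypothetical and chosen purely for illustration; real trials would use validated statistical software and typically inflate the target to allow for attrition.

```python
import math

def sample_size_two_proportions(p_control, p_treatment):
    """Approximate per-group sample size to detect a difference between two
    proportions (two-sided alpha = 0.05, power = 80%, normal approximation)."""
    z_alpha = 1.96    # standard normal quantile for alpha/2 = 0.025
    z_beta = 0.8416   # standard normal quantile for power = 0.80
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p_control - p_treatment) ** 2
    return math.ceil(n)  # round up to the next whole participant

# Hypothetical trial: detect a drop in event rate from 20% to 10%
n_per_group = sample_size_two_proportions(0.20, 0.10)  # 197 per group
```

Because dropouts and losses to follow-up shrink the effective sample below this target, a trial that recruits exactly the calculated number can still end up underpowered.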

It is also important to note that a published article is not the entire study – it is not possible for the publication to convey full details on study design, execution and all raw data collected. Further details can be found in the study protocol and statistical analysis plan20.

The heart of an RCT: Study results and treatment effect

When interpreting the results of a study, consider aspects such as the magnitude of the treatment effect, its precision, the clinical importance of the estimate, and whether adherence to the intervention could have affected the outcome18,19.

Magnitude of effect: Absolute and relative measures

The magnitude of treatment effect is often assessed via measures of association, including relative and absolute risk measures18,21.

  • Relative risk measures include the risk ratio (relative risk), odds ratio and prevalence ratio21
  • Absolute risk measures include risk difference, prevalence difference and differences in proportions, or number needed to treat21

When reporting the magnitude of treatment effect for binary outcomes, both relative and absolute measures should be reported to facilitate accurate interpretation21. Despite clear guidance that both should be reported in RCTs22, a 2020 review of 200 individual RCTs found that only 9% presented both relative and absolute measures, while 28% reported the relative treatment effect only and 17% reported the absolute treatment effect only23.

Although relative measures can indicate differences between groups and the direction of association, in the absence of context from absolute measures they can overstate outcomes, particularly if outcomes are rare21.

A hypothetical example can help to put this into context: a relative risk reduction of 50% could represent an absolute risk reduction of 30% (if the absolute risk improved from 60% to 30%), or an absolute risk reduction of 2% (if the absolute risk improved from 4% to 2%)18.

Inclusion of absolute risk alongside relative risk measures can provide a more complete picture when interpreting data, and assist when discussing results with patients18
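The arithmetic behind the hypothetical example above can be made explicit. The sketch below (illustrative only, with made-up risks) computes relative and absolute measures for the two scenarios, showing how the same relative risk reduction can correspond to very different absolute benefits and numbers needed to treat.

```python
def risk_measures(control_risk, treatment_risk):
    """Relative and absolute effect measures for a binary outcome."""
    risk_ratio = treatment_risk / control_risk   # relative risk (RR)
    rrr = 1 - risk_ratio                         # relative risk reduction
    arr = control_risk - treatment_risk          # absolute risk reduction
    nnt = 1 / arr                                # number needed to treat
    return risk_ratio, rrr, arr, nnt

# Same 50% relative risk reduction, very different absolute benefit:
risk_measures(0.60, 0.30)  # common outcome: ARR = 0.30, NNT ≈ 3.3
risk_measures(0.04, 0.02)  # rare outcome:   ARR = 0.02, NNT = 50
```

A headline of "risk halved" is accurate in both scenarios, but treating 4 patients versus 50 patients to prevent one event is a very different clinical proposition, which is why both measures matter at the bedside.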

Precision

Where a confidence interval is provided, it can be used to indicate the precision of the treatment effect19. A 95% confidence interval represents the range of values within which we can be 95% confident the true treatment effect lies19. A narrow confidence interval indicates a precise estimate of treatment effect, whereas a wide confidence interval should be interpreted with caution, as the true effect could lie anywhere within a wide range of values19.

A key question to help clarify whether a result is adequately precise: Would the decision be the same whether the lower and the upper boundaries of the confidence interval were the truth? If so, then the results can be considered sufficiently precise18.
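Precision can also be made concrete. The sketch below computes a Wald (normal-approximation) 95% confidence interval for a risk difference, using hypothetical event counts: the same observed 10% risk difference yields a wide interval that crosses zero in a small trial, but a narrow, clearly positive interval in a large one.

```python
import math

def risk_difference_ci(events_c, n_c, events_t, n_t, z=1.96):
    """Risk difference with a 95% Wald (normal approximation) confidence interval."""
    p_c, p_t = events_c / n_c, events_t / n_t
    rd = p_c - p_t
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    return rd, rd - z * se, rd + z * se  # estimate, lower bound, upper bound

# Identical observed risk difference (10%), very different precision:
risk_difference_ci(12, 40, 8, 40)          # small trial: wide CI, crosses zero
risk_difference_ci(1200, 4000, 800, 4000)  # large trial: narrow CI, excludes zero
```

Applying the key question above: in the small trial the lower boundary (harm or no effect) and the upper boundary (large benefit) would lead to different decisions, so the result is not sufficiently precise.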

Clinical importance

To assess the clinical importance of the treatment effect, consider whether the observed treatment effect is likely to translate into meaningful improvements for the patient19. This may be based on clinical acumen and experience, or on minimally important differences established in the literature19.

The effect of the intervention on the primary outcome should be sufficiently different from the effect of the alternative that the average patient would have no hesitation in making a choice
Practice Committee of the American Society for Reproductive Medicine, 202017

However, decisions around clinical importance can be complicated by use of surrogate outcomes18. These may or may not lead to clinically meaningful improvements in outcomes that are important to patients, such as quality of life or overall survival (in oncology, for example)18.

Statistical significance does not necessarily translate into a clinically important difference19, as the result should be large enough to be clinically meaningful17

Could adherence have impacted the outcome?

It is important to consider whether the adherence (or compliance) of study participants to the intervention and control was reported, and whether there was a difference in adherence between groups18. This is particularly relevant in oncology trials, where adverse events and medication safety profiles can have a considerable impact on adherence18.

Consider applicability: Could the study’s results apply to real-world patients?

To determine if results of a clinical trial could be relevant to individual patients in clinical practice, consider the applicability of the population and intervention used in the trial, and how well they can be applied to patients in clinical settings17,19 (Table 1).

Table 1. Assessing applicability of a clinical trial to real-world patients17-19.

Key consideration: Patient population

Is the population used in the trial comparable to patients in your own practice?
Consider:
  • Inclusion/exclusion criteria in the trial
  • Baseline characteristics and demographics of participants in the trial

If inclusion/exclusion criteria are too strict or too broad, the trial’s results may not be applicable to people who might receive the intervention in the real world. Baseline characteristics and demographics may not reflect those who might receive the intervention in the real world (e.g. age, time since diagnosis, treatment history, barriers to access).

Key consideration: Delivery of the intervention

Is delivery of the intervention feasible, based on the resources and expertise of healthcare providers in your real-world setting? Does the intervention involve highly specialised procedures or training, such as surgery?

Interventions that are significantly affected by the level of provider expertise may lead to different treatment outcomes in the real world than in the clinical trial.

Key consideration: Adherence to the intervention

Is adherence to the intervention feasible? Does the intervention place reasonable compliance/adherence demands on the patient? Consider the nature of the intervention, such as the method and frequency of administration and patient acceptability.

If the intervention involves unreasonable demands on the patient or healthcare provider, it may produce different results in the real world than in the clinical trial.

Keep in mind the potential impact of bias

A study is considered biased when a component of its design or execution systematically distorts the results away from the truth24.

Bias can lead to an over- or underestimation of the truth, or can compromise the validity of the study’s findings – even if all other facets of the study were appropriate24

Various types of bias can affect how the results of a study are interpreted24. These include biases in participant selection, trial performance, detection of outcomes, attrition (withdrawal) of participants, and reporting24.

It has been suggested that when interpreting RCTs, healthcare professionals should carefully consider potential sources of bias, and the impact they may have on confidence in the trial’s results19. While healthcare professionals may not have sufficient time or resources to perform a full risk of bias assessment on each clinical trial publication, it is important to be aware that these types of bias do exist and can have a considerable impact on the validity of a trial’s findings24,25.

In conclusion: Deciding whether to use trial results to inform patient management

When interpreting clinical trial results, it is important to remember that results are influenced by a variety of factors and should not be taken at face value. Care should also be taken to avoid extrapolating results to patient populations or settings outside the context of the trial.

In summary, key considerations when reading a clinical trial publication include:

• Are the design and methods valid?
• What are the study results and treatment effect?
• Is the trial population relevant to real-world patients and settings?
• Are there potential sources of bias in the study that could compromise the validity of the study’s findings?
• Are the results of a sufficient quality and standard to inform your clinical decision-making?

Shared decision-making is an important consideration when applying evidence to individual patients18,20. This includes consideration of whether the treatment benefits outweigh the potential risks, harms and costs, as well as the patient’s individual preferences and circumstances17,18,20.

Ultimately, clinicians, often with patients, need to determine the importance of the (clinical trial) findings and the application in clinical care
Bauchner et al., 201920

Missed the earlier articles in this series?

Read article 1 in this series - How can flaws in clinical trial design, conduct and reporting impact clinical decision-making?

Read article 2 in this series - Interpreting clinical trial results through a frame of reference: Why take the time?

Further reading for healthcare professionals

Tips on reading clinical trial publications

Useful explanations on types of bias in clinical trials

Critical appraisal – where to start?

Critical appraisal checklists have been developed to guide the reader through the appraisal process, prompting them to ask certain questions of the study publication they are appraising26. The type of checklist is tailored to the type of study being appraised, for example if it is a randomised controlled trial or observational cohort study26.

Explore medical education and publication summaries on Medthority

Explore a range of independent and sponsored medical education across therapeutic areas including cardiology, immunology, oncology and more. Upskill on key developments via continuing medical education in various formats including Learning Zones, with expert insights from key opinion leaders in videos and podcasts, as well as webinars, quizzes and more. Medthority Learning Zones also feature publication digests, which provide concise summaries of key publications across different fields.

Explore medical education content by specialty on Medthority

References

  1. Mitra-Majumdar M, Kesselheim AS. Reporting bias in clinical trials: Progress toward transparency and next steps. PLoS Med. 2022;19(1):e1003894.
  2. Moore A, Fisher E, Eccleston C. Flawed, futile, and fabricated—features that limit confidence in clinical research in pain and anaesthesia: a narrative review. Br J Anaesth. 2022.
  3. Carlisle JB. False individual patient data and zombie randomised controlled trials submitted to Anaesthesia. Anaesthesia. 2021;76(4):472-479.
  4. Goldacre B, Drysdale H, Dale A, Milosevic I, Slade E, Hartley P, et al. COMPare: a prospective cohort study correcting and monitoring 58 misreported trials in real time. Trials. 2019;20(1):118.
  5. Hemming K, Javid I, Taljaard M. A review of high impact journals found that misinterpretation of non-statistically significant results from randomized trials was common. J Clin Epidemiol. 2022;145:112-120.
  6. Heneghan C, Goldacre B, Mahtani KR. Why clinical trial outcomes fail to translate into benefits for patients. Trials. 2017;18(1):122.
  7. Ioannidis JP, Caplan AL, Dal-Ré R. Outcome reporting bias in clinical trials: why monitoring matters. BMJ. 2017;356:j408.
  8. Ioannidis JPA. Hundreds of thousands of zombie randomised trials circulate among us. Anaesthesia. 2021;76(4):444-447.
  9. Steegmans PAJ, Di Girolamo N, Meursinge Reynders RA. Spin in the reporting, interpretation, and extrapolation of adverse effects of orthodontic interventions: protocol for a cross-sectional study of systematic reviews. Res Integr Peer Rev. 2019;4(1):27.
  10. West J, Bergstrom C. Misinformation in and about science. Proc Natl Acad Sci USA. 2021;118(15):e1912444117.
  11. Garegnani LI, Madrid E, Meza N. Misleading clinical evidence and systematic reviews on ivermectin for COVID-19. BMJ Evid Based Med. 2022;27(3):156-158.
  12. Bramstedt KA. The carnage of substandard research during the COVID-19 pandemic: a call for quality. J Med Ethics. 2020;46(12):803.
  13. Smith R. Time to assume that health research is fraudulent until proven otherwise? Br Med J. 2021.
  14. McErlean M, Samways J, Godolphin PJ, Chen Y. The reporting standards of randomised controlled trials in leading medical journals between 2019 and 2020: a systematic review. Ir J Med Sci. 2023;192(1):73-80.
  15. Poutoglidou F, Stavrakas M, Tsetsos N, Poutoglidis A, Tsentemeidou A, Fyrmpas G, et al. Fraud and deceit in medical research - insights and current perspectives. Voices in Bioethics. 2022;8.
  16. Turner M. University of Canberra. Evidence-Based Practice in Health. https://canberra.libguides.com/evidence.
  17. Practice Committee of the American Society for Reproductive Medicine. Interpretation of clinical trial results: a committee opinion. Fertil Steril. 2020;113(2):295-304.
  18. Sonbol MB, Firwana BM, Hilal T, Murad MH. How to read a published clinical trial: A practical guide for clinicians. Avicenna J Med. 2020;10(2):68-75.
  19. Thabane A, Phillips MR, Wong TY, Thabane L, Bhandari M, Chaudhary V, et al. The clinician’s guide to randomized trials: interpretation. Eye. 2022;36(3):481-482.
  20. Bauchner H, Golub RM, Fontanarosa PB. Reporting and Interpretation of Randomized Clinical Trials. JAMA. 2019;322(8):732-735.
  21. Turner EL, Platt AC, Gallis JA, Tetreault K, Easter C, McKenzie JE, et al. Completeness of reporting and risks of overstating impact in cluster randomised trials: a systematic review. Lancet Glob Health. 2021;9(8):e1163-e1168.
  22. Schulz KF, Altman DG, Moher D, for the CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Med. 2010;8(1):18.
  23. Rombach I, Knight R, Peckham N, Stokes JR, Cook JA. Current practice in analysing and reporting binary outcome data-a review of randomised controlled trial reports. BMC Med. 2020;18(1):147.
  24. Phillips MR, Kaiser P, Thabane L, Bhandari M, Chaudhary V, Wykoff CC, et al. Risk of bias: why measure it, and how? Eye. 2022;36(2):346-348.
  25. Tikkinen KAO, Guyatt GH. Understanding of research results, evidence summaries and their applicability—not critical appraisal—are core skills of medical curriculum. BMJ Evid Based Med. 2021;26(5):231-233.
  26. Deakin University. A brief guide to critical appraisal. https://deakin.libguides.com/c.php?g=558207&p=6505765. Accessed 28 February 2023.