
How we find and appraise the research evidence in Interventions and Therapies

The information for each intervention contains:

A description of the intervention; the costs, time and resources needed to complete it; and information to help decide whether the intervention might be suitable for an individual with cerebral palsy and his or her family.

In addition, we give a summary of the best available research evidence and whether or not the evidence suggests the intervention is effective.

The steps below outline the process we use to write the information for each intervention:

  1. A subject matter expert and a topic coordinator/researcher write the content. This includes:
    • Information about the intervention, covering the costs and resources involved in using it, considerations in choosing it, and assessments to measure its outcome
    • A summary of the research evidence
  2. Two experts on the intervention, with knowledge of research, review each topic for accuracy and rigour.
  3. An editorial expert edits the content for readability.
  4. Two consumers (people with cerebral palsy or their parents) review the content for readability, meaningfulness, clarity and usability.
  5. The content is published.

Searching for studies evaluating an intervention

The subject matter expert and topic coordinator work together to develop a clinical question following PICO guidelines, which helps them devise an efficient strategy for searching for research evidence about each intervention. Searching for research using the PICO format involves incorporating all or some of the following into the search strategy:

P = Patient or population of interest
I = Intervention under evaluation
C = Comparison intervention
O = Outcomes of interest

An example of a clinical question derived using PICO guidelines is: What is the evidence that, for children with hemiplegic cerebral palsy [Population of interest], occupational therapy following Botulinum toxin injections to the upper limb [Intervention under evaluation] is more effective than Botulinum toxin injections alone [Comparison intervention] for improving children’s ability to complete daily activities [Outcome of interest]?

Databases of evidence in medicine (Medline), allied health and nursing (CINAHL, PEDro, OTseeker), psychology (PsycInfo) and education (ERIC) are searched using the search strategy.
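As a rough illustration of how the PICO components above can be turned into a database query, the sketch below builds a simple boolean search string in Python. The synonym lists and query syntax are assumptions for demonstration only; real search strategies are tailored to each database's own fields, subject headings (for example MeSH in Medline) and filters.

```python
# Illustrative only: combining PICO components into a boolean search string.
# The synonym lists below are assumptions; real strategies use database-specific
# subject headings and additional filters.

pico = {
    "population": ["cerebral palsy", "hemiplegia"],
    "intervention": ["occupational therapy", "botulinum toxin"],
    "comparison": [],  # the comparison is often left out of the search itself
    "outcome": ["activities of daily living", "upper limb function"],
}

def build_query(components):
    """OR the synonyms within each PICO component, then AND the components."""
    blocks = []
    for terms in components.values():
        if terms:
            blocks.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(blocks)

print(build_query(pico))
# ("cerebral palsy" OR "hemiplegia") AND ("occupational therapy" OR ...) AND ...
```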

Selecting the best available research evidence to inform about an intervention

The subject matter expert and topic coordinator independently complete this part of the process and then confer to reach agreement.

There are two considerations in selecting the best available research evidence to inform practice:

Different types of studies which evaluate the effectiveness of interventions are considered to form a hierarchy. Randomised controlled trials (RCTs), for instance, compare the results of 2 or more groups of participants to find out if an intervention is more effective than no intervention or than a different intervention. Good RCTs are considered to be a high level of evidence and sit close to the top of the hierarchy. Other studies take measurements before and after an intervention in a single group of participants. These kinds of studies are towards the bottom of the hierarchy, and we are likely to trust their results less than those from an RCT.

We use the Oxford Centre for Evidence-Based Medicine 2011 Levels of Evidence [1].

The hierarchy includes the following types of studies, from the highest to the lowest level of evidence.

Systematic review

An appraisal and summary of all available research studies on a treatment. A good quality systematic review of randomised controlled trials is the best evidence to inform about the effects of intervention. A systematic review may include a meta-analysis, which is where the results from individual studies are analysed statistically.

Randomised controlled trial (RCT)

A type of research study where participants are randomly allocated (by chance) into one or more treatment groups or control (no treatment) groups. Outcomes are measured before and after treatment to work out if one treatment is more effective than another, or more effective than a control group. Participants are randomly allocated to achieve groups with similar characteristics, so that the only expected difference between groups is the treatment they receive. Any differences between groups after treatment are therefore considered to be due to the treatment.

Non-randomised controlled trial

Participants are allocated to 2 or more treatment or control groups, but not in a randomised way.

Case series

Evaluates the outcomes of an intervention in a single group of participants. Measurements of interest are usually taken before each person starts treatment and compared with results taken after treatment.

Case-control studies

Outcomes in a group of people with a condition (cases) are compared with those of another group without the condition (controls).

Historically controlled studies

Outcomes of a group of people who receive an intervention are compared with those of a group of people whose outcomes were measured in the past after a different, or no, intervention.

Expert opinion

Expert opinion is not necessarily based on research evidence. It may also include evidence inferred from a related mechanism, such as animal or laboratory studies.

Although RCTs are the ideal study design to inform us about the effects of intervention, there are a number of reasons why an RCT may not have been conducted: RCTs are complex and expensive, the number of available participants may be too small to enable a well-conducted RCT, or an intervention may be too complex to evaluate appropriately using an RCT. For newer and developing interventions, RCTs may only be completed once lower level studies have provided preliminary evidence that the intervention is safe, feasible, tolerated by participants and potentially effective.

We trust the results from studies at the top of the evidence hierarchy and give them more weight when making decisions about the evidence for interventions because they should have less risk of bias. Bias is where aspects of the design of a study, or the way the study is carried out, influence the results in addition to the intervention itself. One example of a source of bias is researchers choosing participants because they believe they are more likely to respond to an intervention. The results of studies with a high risk of bias may not be accurate, as the bias of the study, rather than the intervention itself, may be responsible for the results. Even studies at higher levels of evidence can have a high risk of bias if they are not carried out in a rigorous way. This is why we also critically appraise each study we use to inform us about an intervention, to determine the quality of the study.

Once we understand the level of evidence of each study (from the hierarchy) and also the quality (risk of bias) of each study, we select the best available evidence to inform us about the effects of intervention. To select the best available evidence we consider the following:

  • Highest level of evidence (from the top of the hierarchy). Systematic reviews, and any good quality RCTs published since the systematic review, are chosen as the highest level of evidence. If no systematic reviews exist, we progressively move down the hierarchy of evidence to RCTs and so on.
  • Quality of studies. Preference is given to high quality studies when multiple studies at the same level exist.
  • Relevance to cerebral palsy. Research which is completed with people with cerebral palsy is considered best available evidence. If no research on a particular intervention has been published with people with cerebral palsy we consider whether it is appropriate to use research completed with other groups of people. We usually use systematic reviews and RCTs when drawing on evidence from groups of people who do not have cerebral palsy.
  • Relevance of outcomes. The outcomes measured in the study are those we choose to focus on, usually those that are considered most meaningful to the daily lives of people with cerebral palsy and their families and caregivers.
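As a simple illustration of how these selection rules can be applied in order, here is a minimal sketch in Python. The Study fields, the numeric levels and the sorting rule are assumptions for illustration; the actual selection is a considered judgement made independently by the subject matter expert and topic coordinator.

```python
# Minimal sketch of the selection rules above (illustrative assumptions only).
# Lower "level" numbers sit higher in the evidence hierarchy (1 = systematic review).

from dataclasses import dataclass

@dataclass
class Study:
    title: str
    level: int               # 1 = systematic review, 2 = RCT, 3 = non-randomised trial, ...
    quality: int             # higher = better methodological quality (lower risk of bias)
    cerebral_palsy: bool     # participants have cerebral palsy
    outcomes_relevant: bool  # measures outcomes meaningful to daily life

def best_available(studies):
    """Prefer cerebral palsy research with relevant outcomes, then the highest
    level of evidence, then the highest quality study at that level."""
    candidates = [s for s in studies if s.outcomes_relevant]
    candidates.sort(key=lambda s: (not s.cerebral_palsy, s.level, -s.quality))
    return candidates[0] if candidates else None

studies = [
    Study("Systematic review in adults after stroke", 1, 8, False, True),
    Study("RCT in children with cerebral palsy", 2, 7, True, True),
]
print(best_available(studies).title)  # the cerebral palsy RCT is preferred
```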

References

A list of references and source material used in developing the information for each intervention is provided after the summary of evidence. Copyright laws do not allow us to link to the full article unless we are able to find it freely on the internet. We have provided a link to a freely available copy where possible. In other cases, we have included a link to the abstract/summary of the article, which in most cases is on the website of the journal in which the article was published. The journal website usually provides access to purchasing the article.

Assessing the overall quality of the best available evidence

Once we have selected the best available evidence, the subject matter expert and topic coordinator independently assess the overall quality of the evidence and then meet to reach agreement.

We use the principles of GRADE [2] (Grades of Recommendation, Assessment, Development, and Evaluation) to assess the overall quality of the best available evidence. GRADE was developed to rate quality of evidence and strength of recommendation of the entire body of evidence (including RCTs and lower levels of evidence) for interventions in health care. As we are using only a subset of the evidence for each intervention, we are using GRADE as a guide.

To rate the overall quality of the best available evidence we:

  1. Complete a preliminary rating based on the level of evidence to achieve an initial rating of High, Moderate, Low or Very Low quality [3].
  2. Then, we consider other aspects of the evidence and upgrade or downgrade the initial rating as follows:

Downgrading. The initial rating is downgraded if the body of evidence:

  • Has significant limitations or risk of bias [4]
  • Has inconsistencies in the results across the studies [5]
  • Is based on groups of people other than those with cerebral palsy (indirectness) [6]
  • Shows wide variability in participants' responses to the intervention (imprecision) [7]

Upgrading. The initial rating is upgraded [8] if the body of evidence:

  • Indicates that the intervention is very effective (large magnitude of effect)
  • Demonstrates that more of the intervention clearly results in better outcomes (dose effect)
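A minimal sketch of this rating logic in Python follows, assuming a simple count of downgrading and upgrading factors. The starting ratings and the one-step-per-factor rule are simplifications; in practice the rating reflects the assessors' judgement rather than a fixed formula.

```python
# Illustrative sketch of the rating steps described above (assumptions noted below).

RATINGS = ["Very Low", "Low", "Moderate", "High"]

def initial_rating(level_of_evidence):
    """Preliminary rating based on the level of evidence (assumed mapping)."""
    return "High" if level_of_evidence in ("systematic review", "RCT") else "Low"

def adjust(rating, downgrades=0, upgrades=0):
    """Move the rating down for limitations, inconsistency, indirectness or
    imprecision, and up for a large magnitude of effect or a dose effect."""
    index = RATINGS.index(rating) - downgrades + upgrades
    return RATINGS[max(0, min(index, len(RATINGS) - 1))]

# Example: RCT-level evidence with inconsistent results and no upgrading factors.
print(adjust(initial_rating("RCT"), downgrades=1))  # -> "Moderate"
```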

Guidance about the effectiveness of interventions

The experts writing the material combine their knowledge and expertise with the information from the best available evidence to provide a summary for people with cerebral palsy, their families and caregivers, clinicians, service providers and funders. This summary contains information about the amount and quality of evidence for an intervention, whether the intervention is considered effective, and guidance for pursuing it if it is considered worthwhile to use or to trial with a person with cerebral palsy or their family.

Some of the following conclusions about the effectiveness of an intervention may apply:

  • High quality evidence supports the effectiveness of an intervention, and it could be pursued if it is appropriate for the person with cerebral palsy and their family.
  • Low quality evidence suggests that the intervention may be effective, and it could be implemented in collaboration with an appropriate health professional and carefully monitored to check it is working and is not harmful.
  • Results from studies are unclear or inconclusive, so it is not yet known whether the intervention is effective or not.
  • There may be no research yet completed evaluating certain interventions. These interventions should be implemented carefully and monitored for effectiveness and safety.
  • There are times when research suggests that interventions are not effective, not effective for some people, or that the harms and inconveniences outweigh the benefits. We may suggest that such interventions are not used and that an alternative intervention which is considered effective and safe is used instead.

Guidance is also given to assist in deciding how to pursue an intervention if it is considered worthwhile for an individual with cerebral palsy and their family. Read more on using the information about interventions and therapies.

Updating evidence on interventions

New research is constantly being published. We undertake to review evidence and update each intervention every 2 years.