###### tags: `clinical trials` `randomisation` `blinding` `RCT` `bias` `intention-to-treat` `as-treated` `per-protocol` `PROC SEQDESIGN` `PROC SEQTEST`
# Terminology, designs and analyses of a clinical trial
### Different types of clinical research studies
[Learn About Studies|ClinicalTrials.gov](https://clinicaltrials.gov/study-basics/learn-about-studies)

---
### Interim Analysis
An interim analysis (IA) is conducted before data collection has been completed. Clinical trials are unusual in that enrollment of subjects is a continual process staggered in time. If a treatment can be proven to be clearly beneficial or harmful compared to the concurrent control, or to be obviously futile, based on a pre-defined analysis of an incomplete data set while the study is on-going, the investigators may stop the study early.
#### Sample size and power
If the purpose of an IA is to assess the primary endpoint, then it is the responsibility of the Lead Biostatistician or designee to ensure that the type I error is appropriately controlled. For example, p-values and/or decision boundaries could be calculated with a group sequential technique such as O’Brien-Fleming or Lan-DeMets, using validated software such as `SAS PROC SEQDESIGN` and `PROC SEQTEST`. The trial must have sufficient power to assess the primary endpoint after taking into account any planned IAs.
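`PROC SEQDESIGN` computes such boundaries directly; purely for intuition, here is a minimal Python sketch (a Monte Carlo illustration, not a substitute for validated software) that recovers the classical O’Brien-Fleming boundaries from the partial-sum correlation of the stagewise test statistics:
```python
import numpy as np

def obf_constant(n_stages=3, alpha=0.05, n_sim=200_000, seed=1):
    """Find c so that P(|Z_k| >= c*sqrt(K/k) at any stage k) = alpha for a
    group sequential trial with K equally spaced looks."""
    rng = np.random.default_rng(seed)
    # Z_k = S_k / sqrt(k), with S_k a partial sum of iid N(0,1) increments
    z = np.cumsum(rng.standard_normal((n_sim, n_stages)), axis=1)
    z /= np.sqrt(np.arange(1, n_stages + 1))
    k = np.arange(1, n_stages + 1)

    def crossing_prob(c):
        return np.mean((np.abs(z) >= c * np.sqrt(n_stages / k)).any(axis=1))

    lo, hi = 1.0, 4.0        # bisection: crossing_prob falls as c rises
    for _ in range(40):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if crossing_prob(mid) > alpha else (lo, mid)
    return (lo + hi) / 2

c, k = obf_constant(), np.arange(1, 4)
print(np.round(c * np.sqrt(3 / k), 3))   # approx. [3.47, 2.45, 2.00]
```
For three equally spaced looks at two-sided alpha = 0.05 this reproduces the familiar O’Brien-Fleming shape: very stringent early boundaries, close to the conventional 1.96 at the final analysis.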
If an IA is for the purpose of assessing the primary endpoint, the statistician who performs any IAs will be unblinded/unmasked early and therefore should not be the statistician who performs the final analysis of the primary endpoint. In addition, any unblinded/unmasked statistician must ensure that unblinded/unmasked results are only communicated to appropriate people.
#### Early stopping of a clinical trial
It is the responsibility of the **DSMB Biostatistician** or **designee** to decide whether the results of an IA should lead to the recommendation that a trial be stopped.
A trial can be **stopped due to efficacy** if an IA finds a beneficial effect of the trial intervention of such statistical strength that it passes a pre-specified criterion. During the planning stages of a trial, this criterion is set so as to maintain the overall type I error rate across the set of analyses, defined as all IAs plus the final analysis.
As the primary endpoint will usually be assessed with a two-sided statistical test, it is possible that a trial may be **stopped due to evidence that an intervention is harmful** with respect to the primary outcome. This will be done with the same procedure for controlling type I error as for efficacy.
Any criterion for early stopping based on statistical testing must not increase the overall type I error rate for either benefit or harm. Should asymmetric boundaries be required, they should be calculated in such a way that the reported type I error is correct.
If an IA for any outcome uncovers substantial evidence of AEs or SAEs, then this information must be forwarded to the Data Safety Monitoring Board (DSMB), which is responsible for deciding whether a trial should be stopped due to safety concerns.
#### Knowledge of unblinded/unmasked analysis results before trial completion
No blinded/masked persons involved in the management, administration or analysis of an ongoing trial shall have access to unblinded/unmasked results from an IA unless a decision to stop the trial has been made.
The DSMB shall at all times have access to the results of any unblinded/unmasked IAs. The statistician who performs an unblinded/unmasked IA will not communicate any information about the results of this analysis to any blinded/masked person.
---
### PICO
Look up [The impact of patient, intervention, comparison, outcome (PICO) as a search strategy tool on literature search quality: a systematic review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6148624/), and try the [PICO search tool](https://pubmedhh.nlm.nih.gov/nlmd/pico/piconew.php).
---
### Biases in trial stages
Bias can occur before the trial (the planning stage), during the trial (the implementation/data collection and data analysis stages), and after the trial (the publication stage). [Identifying and Avoiding Bias in Research](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2917255/)
**Trial planning stage:** flawed study design, selection bias, channeling bias
**Trial implementation stage:** interviewer bias, chronology bias, recall bias, transfer bias (loss to follow-up), misclassification of exposure or outcome, performance bias, attrition bias, detection bias
**Data analysis/publication stage:** citation bias, confounding, outcome reporting bias
---
### Sources of biases
* [8.4 Introduction to sources of bias in clinical trials](https://handbook-5-1.cochrane.org/chapter_8/8_4_introduction_to_sources_of_bias_in_clinical_trials.htm)
The reliability of the results of a randomized clinical trial (RCT) depends on the extent to which potential sources of bias have been avoided. A useful classification of biases is into **selection bias**, **performance bias**, **attrition bias**, **detection bias** and **outcome reporting bias**.
**Selection bias** occurs when individuals or groups in a study sample differ systematically from the population of interest, which leads to a systematic error in an association or outcome. For example, prospective cohort studies of dietary and lifestyle factors exhibit a “healthy participant effect”, reporting lower mortality rates among participants than among the general population. This suggests that people who are interested in healthy lifestyles, and therefore have more healthy behaviours, such as low smoking rates, are more likely to sign up to take part in a prospective study than those with less healthy lifestyles. This can also be considered a sampling bias. [Selection bias](https://catalogofbias.org/biases/selection-bias/)
Observational studies are particularly prone to selection bias.
**Preventive steps**
Randomisation
Generation of the allocation sequence
Concealment of the allocation sequence until assignment occurs
To assess the probable degree of selection bias, authors should include the following information at different stages of the trial or study:
– Numbers of participants screened as well as randomised/included.
– How intervention/exposure groups compared at baseline.
– To what extent potential participants were re-screened.
– Exactly what procedures were put in place to prevent prediction of future allocations and knowledge of previous allocations.
– What the restrictions were on randomisation, e.g. block sizes.
– Any evidence of unblinding.
– How missing data from participants lost to follow-up were handled.
**Performance bias** refers to systematic differences in the care provided to members of different study groups other than the intervention under investigation. Performance bias often occurs in trials where it is not possible to blind participants and/or researchers, such as trials of surgical interventions, nutrition or exercise. A [qualitative study](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3994506/) assessed the risk of performance bias in a weight-loss trial of a novel patient counselling programme compared to usual care in general practice. Participants in the control group reported being disappointed at having been offered usual care after taking part in the trial. Reactions to disappointment involved movements both toward and away from behaviour change. The researchers concluded that disappointment may introduce bias, as it leads the randomised groups to differ in ways other than the intended experimental contrast.
**Preventive steps**
Blinding of participants and personnel
Other potential threats to validity
Ideally, participants and researchers should be blinded to the interventions. If blinding is not feasible, the effect of performance bias can be mitigated by using objective outcomes. If subjective outcomes are used in a trial, performance bias can be mitigated by blinding the outcome assessor. For instance, in a trial assessing the effect of cognitive behavioural therapy (CBT), the researcher who delivers the intervention (who cannot be blinded), should be different to the researcher who assesses the patient-reported outcomes.
**Attrition bias** Attrition occurs when participants drop out during a study. A [trial investigating quality of life among patients randomised to aggressive treatment of renal cancer](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4688419/) had high rates of attrition owing to toxicity, disease progression, and deaths (64% in the control group; 70% in the intervention group). Analysis of those still in the trial showed no difference in quality of life. An assessment of attrition bias, however, suggested that even with similar drop-out rates in both groups the estimate was biased. [attrition bias](https://catalogofbias.org/biases/attrition-bias/)
**Preventive steps** Techniques for preventing losses to follow-up include (1) ensuring good communication between study staff and participants, (2) easy access to clinics, (3) effective communication channels, (4) incentives to continue, and (5) ensuring that the study is of relevance to the participants.
**Detection bias** refers to systematic differences between groups in how outcomes are determined. Larger men have bigger prostates, which makes diagnosing prostate cancer via biopsy more difficult (it is harder to hit the target). Therefore, men with larger prostates are less likely to be accurately diagnosed with prostate cancer. Thus, a real association between obesity and prostate cancer risk may be underestimated. [Detection bias](https://catalogofbias.org/biases/detection-bias/)
**Preventive steps**
Blinding of outcome assessment
Intervention studies should be designed to ensure that all groups have an equivalent chance of being affected by known factors that influence detection. The use of randomisation in intervention studies also aims to generate groups equivalent in unknown factors. In observational studies, potential sources of detection bias should be sought out, and if identified, adjusted for or stratified by to clarify the observed associations of interest.
**Outcome reporting bias** refers to selective reporting of pre-specified outcomes in published clinical trials. A [review of a cohort of Canadian studies](http://www.cmaj.ca/content/171/7/735) found that 88% of the randomised trials failed to report at least one pre-specified outcome.
**Preventive steps**
Clinical trial investigators must maintain a policy of transparency, and if they do change or omit planned outcomes, an adequate explanation should be provided for readers. Reviewers and journal editors should compare the final studies with their protocol or registry to assess for evidence of outcome reporting bias.
**Other biases**
[Reporting Biases](https://catalogofbias.org/biases/reporting-biases/): a systematic distortion that arises from the selective disclosure or withholding of information by parties involved in the design, conduct, analysis, or dissemination of a study or research findings.
**Recall bias** Patients with a particular disease are more likely to remember details of the disease course than people without the disease.
**Preventive steps**
Use prospective study design
Use validated patient questionnaires
---
### Selection bias subtypes
Selection bias is not a single phenomenon; subtypes include sampling bias, Berkson's bias, self-selection bias, and loss to follow-up (transfer bias).
---
### What are the factors to be considered while preparing the randomisation specification?
* Study design (e.g., blind or open label, parallel group, cross-over, ascending dose, adaptive, etc.)
* Treatment groups (number and details, including labelling)
* Number of subjects and ratio of subjects to treatment groups
* Requirement to replace subjects
* Block sizes
* Stratification requirements (centres, baseline characteristics, disease status, etc.)
* Method of randomisation (e.g., minimisation) and software to be used
* Policy for breaking the blind
* Handling of requests for additional codes
### What information should be included in a randomisation plan?
* Mode of randomisation
* Methods to be used to prepare the material
* Format and content of all the outputs to be produced
* A distribution list for electronic and, where applicable, paper copies of the material
* Details of storage and access
* Emergency access to the blinded code for individual subjects during the study
### Randomisation methods
**Simple randomisation** Randomization based on a single sequence of random assignments is known as simple randomization. This technique maintains complete randomness in the assignment of a subject to a particular group. The most common and basic method of simple randomization is flipping a coin.
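A one-line sketch (arm labels and seed are illustrative assumptions):
```python
import numpy as np

# each subject is assigned independently, like a coin flip per subject
rng = np.random.default_rng(2024)
print(rng.choice(["A", "B"], size=20))   # arm sizes may be unbalanced by chance
```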
**Block randomization**
The block randomization method is designed to randomize subjects into groups that result in equal sample sizes. After block size has been determined, all possible balanced combinations of assignment within the block (i.e., equal number for all groups within the block) must be calculated. Blocks are then randomly chosen to determine the patients’ assignment into the groups.
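A minimal Python sketch of permuted-block randomisation (block size, arm labels and seed are assumptions for illustration):
```python
import numpy as np

def block_randomise(n_subjects, block_size=4, arms=("A", "B"), seed=7):
    """Permuted-block randomisation: each block holds an equal number of
    each arm in shuffled order, keeping group sizes close to balanced."""
    rng = np.random.default_rng(seed)
    schedule = []
    while len(schedule) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_subjects]

print(block_randomise(10))   # balanced after every complete block of 4
```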
**Stratified randomization**
Stratified randomization is achieved by generating a separate block for each combination of covariates, and subjects are assigned to the appropriate block of covariates. After all subjects have been identified and assigned into blocks, simple randomization is performed within each block to assign subjects to one of the groups.
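A sketch with hypothetical strata (site x disease stage, both invented for illustration), each holding its own independent permuted-block schedule:
```python
import numpy as np

def stratum_schedule(n, seed, block_size=4, arms=("A", "B")):
    """One permuted-block schedule, as in the sketch above."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        out.extend(block)
    return out[:n]

strata = [(site, stage) for site in ("site1", "site2")
                        for stage in ("early", "late")]
schedules = {s: stratum_schedule(20, seed=i) for i, s in enumerate(strata)}
# a new subject draws the next free slot from their stratum's schedule
print(schedules[("site1", "early")][:8])
```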
**Covariate adaptive randomization**
In covariate adaptive randomization, a new participant is sequentially assigned to a particular treatment group by taking into account the specific covariates and previous assignments of participants. Covariate adaptive randomization uses the method of minimization by assessing the imbalance of sample size among several covariates.
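A rough sketch of Pocock-Simon style minimisation; the covariates, the records and the assignment probability `p = 0.8` are all assumptions for illustration:
```python
import numpy as np

def minimisation_assign(new_covs, allocated, arms=("A", "B"), p=0.8, seed=None):
    """Assign a new subject to the arm that minimises total covariate
    imbalance (range method) with probability p; otherwise another arm."""
    rng = np.random.default_rng(seed)
    imbalance = {}
    for arm in arms:
        total = 0
        for cov, level in new_covs.items():
            counts = {a: sum(1 for rec in allocated
                             if rec["arm"] == a and rec["covs"][cov] == level)
                      for a in arms}
            counts[arm] += 1   # pretend the new subject joins `arm`
            total += max(counts.values()) - min(counts.values())
        imbalance[arm] = total
    best = min(imbalance, key=imbalance.get)
    others = [a for a in arms if a != best]
    return best if rng.random() < p else rng.choice(others)

allocated = [{"arm": "A", "covs": {"sex": "F", "stage": "early"}},
             {"arm": "A", "covs": {"sex": "M", "stage": "late"}}]
print(minimisation_assign({"sex": "F", "stage": "late"}, allocated, seed=0))  # "B"
```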
**Online randomisation software**
[QuickCalcs- Run statistical analyses quickly and directly in your browser](https://www.graphpad.com/quickcalcs/index.cfm)
---
### Blinding
| Type | Description |
| -------- | -------- |
| Unblinded or open label | All parties are aware of the treatment the participant receives |
| Single blind or single-masked | Only the participant is unaware of the treatment they receive |
| Double blind or double-masked | The participant and the clinicians / data collectors are unaware of the treatment the participant receives |
| Triple blind | Participant, clinicians / data collectors and outcome adjudicators / data analysts are all unaware of the treatment the participant receives |
[The concept of blinding in clinical trials](https://www.eupati.eu/clinical-development-and-trials/concept-blinding-clinical-trials/#Types_of_blinding)
---
### Stages in the design of a clinical trial
[Stages in the design of a clinical trial](https://derangedphysiology.com/main/cicm-primary-exam/required-reading/research-methods-and-statistics/Chapter%202.0.1/stages-design-clinical-trial)
**Literature review**
* What current published evidence exists? One might find that the question has already been answered to a satisfactory degree
* How were those studies conducted? Deficiencies in their methodology may guide the design of your new study
* What other questions remain unanswered? This may guide you towards considering some different secondary outcome measures, as well as purely exploratory measures (e.g. using a novel, yet-to-be-validated biomarker)
**Define the research question and hypothesis**
* What does the trial aim to achieve?
* What question does it attempt to answer?
* What is the significance of this question?
* What are the endpoints?
* The primary endpoint should be clearly defined.
* An ill-defined endpoint invalidates the rest of the study. The primary endpoint should also be valuable enough to pursue. It should be readily measured, and ideally it should be a direct measure of outcome (as it is inferior to rely on surrogate endpoints).
* The hypothesis needs to be succinctly formulated.
**Develop a study protocol** The aim is to minimise bias and to maximise precision. That development has numerous facets to it:
* Define the study population. Determine which inclusion and exclusion criteria you are going to use
* Calculate the sample size. This will be largely determined by the expected effect size of the treatment on the primary outcome measure (see the sketch after this list).
* Define treatment groups. These should ideally be equal in almost every way.
* Decide on a method of randomisation, if you are going to randomise the patients. Or, come up with a really good reason as to why you cannot randomise.
* Determine the method of treatment allocation and how you're going to conceal allocation
* Define the timing of intervention. i.e., a protocol for when the randomised patients end up receiving the target intervention. The protocol should be sufficiently explicit and "fool-proof" so that protocol violations are kept to a minimum.
* Clearly define the data collection protocol and the instruments which will be used for this
* Arrange a convenient method of reporting adverse events so that no harm is done
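As a worked example of the sample-size step above, a minimal normal-approximation calculation for a two-arm comparison of means; the effect size, alpha and power are assumptions chosen for illustration:
```python
from scipy import stats

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate n per arm for a two-sample comparison of means:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return 2 * ((z_a + z_b) / effect_size) ** 2

print(round(n_per_group(0.5)))   # ~63 per arm for a medium effect (d = 0.5)
```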
**Gain ethics approval**
In Australia, the Declaration of Helsinki has informed and guided the NHMRC standards as laid out in the [National Statement on Ethical Conduct in Human Research (2007, updated 2015)](https://www.nhmrc.gov.au/guidelines-publications/e72). These guidelines direct the approval of the research by local [Human Research Ethics Committees (HRECs)](https://www.nhmrc.gov.au/health-ethics/human-research-ethics-committees-hrecs).
**Perform a pilot study**
This, in the Myles and Gin book, is described as "an important and often neglected process". This tests the feasibility of the full-scale trial, assessing the assumptions made in the course of making the trial protocol. Results may require that the final trial protocol be modified, or that the required sample size be recalculated.
**Funding**
**Conduct of the study**
Basic rules on how to do it properly can be found in the [Australian Clinical Trial Handbook](https://www.tga.gov.au/publication/australian-clinical-trial-handbook) from the TGA. In short, as the trial runs there needs to be:
* Registration in an online database, e.g. the Australian Clinical Trial Registry
* Collection of data which is preserved (7 years)
* Collection of consent documents which are preserved (7 years)
* Notification to sponsor and HREC of serious adverse events
* Regular reports of study progress.
* Regular review by an independent committee, for large trials
* No deviation from protocol without HREC endorsement, unless serious harm is being avoided
**Publication**
The final outcome of a trial is some sort of paper. That paper should be formatted according to [The Consolidated Standards of Reporting Trials (CONSORT)](http://www.consort-statement.org/) statement. In their media release there is a table (Table 1) expanding upon over thirty item numbers which must be satisfied for successful compliance. This table is not reproduced here even by this details-hungry author. The primary exam candidate is left to decide by themselves as to how much of their time this is worth.
**Follow-up of the study**
Long-term reassessment of the study population is sometimes warranted and brings about new information; this becomes more valid if it is planned well in advance and if the population and outcome measures are agreed upon before the original study is concluded.
---
### Designs of RCTs
The choice of appropriate trial designs and analysis depends on the study objectives, characteristics of the therapy and disease, the nature of the primary endpoints, and the availability of funding (Evans, 2010). In the following pages, I have compared some common randomised controlled trials (RCT), univariate statistical analyses, and analysis sets.
**Parallel group design** is used to compare two or more treatments. Subjects are randomly assigned to one of the treatment groups and receive just that one treatment during the trial. The simplest design is a two-arm parallel design consisting of (1) one treatment group and one control group, (2) two different treatment groups, or (3) two different doses of a common drug. The subjects are followed prospectively and the response is compared between the groups (Dimova and Allison, 2016).
**Crossover design** is a type of repeated-measures design in which randomised subjects cross over from one treatment to another during the trial. Unlike the parallel design, where subjects in a group receive only one treatment, subjects in a crossover design receive all treatments, in different sequences. For example, a 2 x 2 crossover design consists of 2 sequences, 2 periods, and 2 treatments (e.g. A and B). Subjects are randomly assigned to either sequence 1, receiving treatment A then B, or sequence 2, receiving treatment B then A. The crossover design has the advantage of requiring fewer subjects than a parallel design because subjects act as their own controls, eliminating inter-subject variation. This design may suffer from a carry-over effect, in which the residual effect of the treatment in the first period continues into the second period. However, a washout period is usually built into the design to separate the treatment periods and eliminate the carry-over effect, as in the sketch below.
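A tiny sketch of randomising subjects to the two sequences of a 2 x 2 crossover (labels and seed assumed for illustration):
```python
import numpy as np

# each subject is randomised to sequence AB or BA, with a washout between periods
rng = np.random.default_rng(11)
for subj, seq in enumerate(rng.choice(["AB", "BA"], size=6), start=1):
    print(f"subject {subj}: period 1 -> {seq[0]} | washout | period 2 -> {seq[1]}")
```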
**Factorial design** tests the effect of two or more interventions simultaneously using all combinations of the interventions. In a 2x2 factorial design, 2 interventions (e.g. A and B), each having 2 levels (e.g. presence or absence), are evaluated, giving a total of 4 groups that receive: (1) A only, (2) B only, (3) both A and B, and (4) neither A nor B. The main advantage of this design is its efficiency when study sample sizes are expected to be large. It allows direct assessment of the interaction between the interventions because all the intervention combinations are included. However, such trials can pose challenges, such as complex participant recruitment and protocol adherence.
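The four cells of a 2x2 factorial can be enumerated directly; a one-liner sketch:
```python
from itertools import product

# all combinations of two interventions, each present or absent
for group in product(("A", "no A"), ("B", "no B")):
    print(group)   # ('A', 'B'), ('A', 'no B'), ('no A', 'B'), ('no A', 'no B')
```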
**Noninferiority design**. The objective of this trial is to show that an intervention is at least as good as, or no worse than, an active control. The intervention of interest is expected to be preferable to the active control in other ways (e.g. less expensive, better safety, better quality of life, or less invasive). This design is appropriate when there is evidence for the efficacy and effect size of the selected active control. Selecting an active control requires further considerations (see Evans, 2010).
**Equivalence designs** Noninferiority designs use a one-sided test to determine whether a novel intervention is no worse than a standard intervention. Equivalence designs use a two-sided test: they ask whether the novel intervention is neither worse nor better than the standard one by more than a pre-specified margin.
Boundaries of equivalence and noninferiority for a 2-sided 95% confidence interval of the difference (Experimental Treatment – Standard Treatment)

[Noninferiority and Equivalence Designs: Issues and Implications for Mental Health Research](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2696315/)
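A sketch of the corresponding decision rules, assuming a higher outcome is better, `margin` is the pre-specified margin, and the CI is a 2-sided 95% confidence interval for (Experimental - Standard):
```python
def noninferior(ci_lower, margin):
    """Noninferiority: the lower CI bound must lie above -margin."""
    return ci_lower > -margin

def equivalent(ci_lower, ci_upper, margin):
    """Equivalence: the whole CI must lie within (-margin, +margin)."""
    return ci_lower > -margin and ci_upper < margin

print(noninferior(-0.02, margin=0.05))   # True: no worse by more than 0.05
print(equivalent(-0.02, 0.07, 0.05))     # False: upper bound exceeds +0.05
```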
In **cluster RCTs**, groups of individuals rather than individuals are randomized to different interventions. Cluster-randomized trials are also known as group-randomized trials. We say the 'unit of allocation' is the cluster, or the group. The groups may be, for example, schools, villages, medical practices or families. Such trials may be done for one of several reasons. It may be to evaluate the group effect of an intervention, for example herd-immunity of a vaccine. It may be to avoid ‘contamination’ across interventions when trial participants are managed within the same setting; for example, in a trial evaluating a dietary intervention, families rather than individuals may be randomized. A cluster-randomized design may also be used simply for convenience. [16.3 Cluster-randomized trials](https://handbook-5-1.cochrane.org/chapter_16/16_3_1_introduction.htm)
---
### Statistical analysis for clinical trials
The choice of an appropriate analysis depends on the study objectives, the nature of the response variable (Y), and sample sizes. In the following table, I have summarised some univariate analyses by the nature of Y and study objectives, and provided SAS procedures for conducting the analysis. The full list of analyses can be found at [CHOOSING THE CORRECT STATISTICAL TEST IN SAS, STATA, SPSS AND R](https://stats.idre.ucla.edu/other/mult-pkg/whatstat/).
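For example, with a continuous endpoint and two parallel groups, the usual choice is a two-sample t-test (`PROC TTEST` in SAS); a small Python sketch on simulated data, purely for illustration:
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
treatment = rng.normal(0.4, 1.0, 60)   # simulated continuous endpoint, treatment arm
control   = rng.normal(0.0, 1.0, 60)   # simulated continuous endpoint, control arm
t, p = stats.ttest_ind(treatment, control)   # continuous Y, two independent groups
print(f"t = {t:.2f}, p = {p:.4f}")
```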

---
### Analysis sets
The three main analysis sets are intention-to-treat (ITT), as-treated, and per-protocol (PP).
**ITT** includes every randomized subject according to the randomized treatment assignment, including those who drop out prematurely, are non-compliant with the treatment, or even take the wrong study treatment. The ITT set is also known as the full analysis set. ITT analysis is usually described as “once randomized, always analyzed” and is referred to as ‘as randomized’ - the opposite of ‘as treated’.

In the **as-treated set**, subjects are analysed according to the treatment they actually received. The term "as treated" means that when we do analysis/summaries, the treatment assignment is based on the actual treatment the patients receive, not the treatment the patients are supposed to receive. [‘As treated’ versus ‘As randomized’ analysis](http://onbiostatistics.blogspot.com/2013/09/as-treated-versus-as-randomized-analysis.html)
In the **per-protocol set**, only those subjects who strictly complied with the protocol are included in the analysis. The analysis is restricted to participants who fulfilled the protocol in terms of eligibility, adherence to the intervention, and outcome assessment. Subjects who had major protocol violations, had no measurements of the primary endpoint, or had insufficient exposure to the study treatment are excluded.
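A compact pandas sketch contrasting the three sets on hypothetical records (all values invented for illustration):
```python
import pandas as pd

# one row per randomised subject: subject 2 took the wrong drug, 5 was never dosed
df = pd.DataFrame({
    "subject":     [1, 2, 3, 4, 5],
    "randomised":  ["A", "A", "B", "B", "B"],
    "received":    ["A", "B", "B", "B", None],
    "protocol_ok": [True, False, True, True, False],
})

itt        = df.assign(arm=df["randomised"])                       # as randomised
as_treated = df.dropna(subset=["received"]).assign(arm=lambda d: d["received"])
per_proto  = df[df["protocol_ok"]].assign(arm=lambda d: d["randomised"])
print(len(itt), len(as_treated), len(per_proto))                   # 5 4 3
```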
---
### Clinical Trial Registries
[Australian New Zealand Clinical Trials Registry (ANZCTR)](https://www.anzctr.org.au/) ANZCTR is a register of clinical trials being undertaken in Australia and New Zealand.
[ClinicalTrials.gov](http://www.clinicaltrials.gov/) ClinicalTrials.gov is a US site listing clinical trials in the US and in other countries, including Australia.
---