Clinical Trial Simulation - Society for Clinical Trials

Transcription

Clinical Trial Simulation
Society for Clinical Trials Workshop
Seth Berry, PharmD
Russell Reeve, PhD
May 18th, 2014
Copyright © 2013 Quintiles
Outline
• Models
> Your simulations are only as good as the data and models going into the simulation
• Simulations
> Trial Design
- Fixed vs. Adaptive Designs
> Simulated Patients
- Virtual Subjects vs. Re-sampling
- Multivariate Distribution Sampling
- Disease Progression Models
> Inclusion / Exclusion Criteria
> Drug Model
- Dose-Response
- PK-PD (1-CMT)
> Study Conduct
- Compliance
- Sampling
> Study Statistical Analysis
> Replicates for Scenario / Sensitivity Testing
> Controlled vs. Un-controlled Variables (Taguchi Design)
> Maximizing Probability of Success (ie, Power), Efficiencies, Utility Indices, and Minimizing the
Type 1 Error Rate
Motivation for Trial Simulation
• Need to design a trial
• Standard methods are inadequate
> Consider the usual back-of-the-envelope calculation N = 2(z1−β + z1−α/2)^2·σ^2/Δ^2 (a quick R sketch follows this list).
> Do we know Δ? σ?
- Regional treatment effects?
> Missing observations? Effects of imputation?
> What about dropouts?
> What distribution of responses? AEs?
> Effect of inclusion/exclusion criteria?
> Timing of observations?
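As a minimal illustration (not part of the original slides), the back-of-the-envelope calculation above can be checked in R; the Δ and σ values are assumed purely for the example:

# Per-group sample size for comparing two means, two-sided alpha, power 1 - beta
n.per.arm <- function(delta, sigma, alpha = 0.05, power = 0.80) {
  z.alpha <- qnorm(1 - alpha/2)
  z.beta  <- qnorm(power)
  ceiling(2 * (z.alpha + z.beta)^2 * sigma^2 / delta^2)
}
n.per.arm(delta = 5, sigma = 10)   # about 63 per arm under the normal approximation
power.t.test(delta = 5, sd = 10, sig.level = 0.05, power = 0.80)   # cross-check (t-based, roughly 64 per arm)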
Example: Anakinra in RA
[Figure: Anakinra model, ACR20 response vs. time (weeks), 0-50 weeks]
• Treatment effect varies with time.
• Note that the placebo response drops back to baseline gradually beyond about week 15.
Other factors to be considered
• Placebo effect varies with the number of treatment arms
• Particularly true in CNS, but also true in other areas
> For example, RA
[Figure: plot of predictors for E0 and Emax vs. number of treatment arms]
Fixed vs Adaptive Designs
Fixed vs Adaptive
• Similarities
> Both types of designs benefit from trial simulation
> Patient recruitment → Treatment randomization → Drug effect (PK/PD) → Trial-level statistical analysis → Adjusting trial design parameters to optimize
• Differences
> Cannot get away with back-of-envelope calculations in adaptive trials
> Interim analyses create more complex statistical issues
- Recruitment rate, center (or nation or region) by treatment interaction
- More operating characteristics to worry about
» Fixed: Power function, Type I error rate (power under null)
» Adaptive: Power function, Type I error rate, expected sample size, probability of success
• Our Focus Today
> Due to limited amount of time, we will discuss simulations of fixed designs today
> Can talk with us after the class about adaptive design simulations
Modeling & Simulation
Software

Software | Use | Platform
SAS 9.2 | Statistical Programming and Traditional Statistical Analyses, Trial Simulations | Virtual Machine
R 3.0.3 and S+ 6.2 | Advanced Statistical Analyses, Trial Simulations, Graphical Analyses | Virtual Machine, HPC Cloud
NONMEM 6.2 | Population PK-PD Modeling and Simulation | Internal Web-Based Server Cluster
Pharsight Trial Simulator | Clinical Trial Simulation | Internal Server
Phoenix WinNonlin | Non-compartmental PK Analysis | Local Desktops
OpenBUGS | Bayesian Analysis | Virtual Machine
WinBUGS – PKBUGS | PK Bayesian Analysis | Virtual Machine
WinPOPT | D-Optimal Sampling Design | Virtual Machine
ADDPLAN and FACTS | Adaptive Trial Design | Virtual Machine
Simulation Tools
Each has their own strengths
• Pharsight’s Phoenix Trial Designer (Trial Simulator)
> Graphical User Interface makes programming a simulation easy, and explaining to others straightforward
> Can create quite complicated designs, and features the full trial behavior
(recruitment, compliance, input/output, data analysis, etc.)
> Expensive and needs a good computing environment
- Not particularly good on virtual machines
> Don’t save too much of the intermediate data, as it is stored in Microsoft’s Access
> Sometimes difficult to analyze scenarios properly unless you export the data
• SAS
> Preferred language of statisticians
> Data step fairly powerful for simulation, with many distributions built-in
> Requires a modular building of code, that is often counterintuitive and confusing
> For large simulations, can be difficult to explain to others, and to document and debug even for experienced statisticians
> SAS sells a version that works in their parallelized cloud (very fast)
Simulation Tools
Each has its own strengths
• R or S-PLUS
> Growing in popularity for simulation projects
> Functions are very convenient for plugging in modules
- Might have input/output function, another for compliance model, etc.
> Very flexible, and can build the simulation from the inside out
> All of the new statistics students know this language
> Revolution Analytics has a version that works in a parallel computing environment
(very fast, but with limited parallelization built into functions)
- But simulations are easy to parallelize even without all functions being parallelized
• FACTS
> Stands for Fixed and Adaptive Clinical Trial Simulation
> Very good for a wide variety of pre-canned adaptive designs
> Only simulates the decision algorithm in conjunction with the recruitment rate
- Recruitment much more an issue in adaptive designs than fixed
> Pretty easy to use, even if the output displays are a bit confusing
High Performance Computing
Turnaround Time
“If faced with a pending development decision on Friday,
would like the modeling & simulation work conducted over
the weekend and the results available on Monday morning.”
- Dennis Gillings, Quintiles CEO
• Scalable Cloud Computational Resources
o Case Study using R
- Revolution Analytics / Microsoft Azure Cloud Computing
- Implemented up to 1000 computational cores simultaneously
- Reduced modeling & simulation computational time
from 2 months down to approximately 1 hour.
- Linear relationship between number of cores and time
savings
- Successfully deployed on a critical trial simulation for a client for design of a Bayesian trial
Simulation Planning
Begin with the End in Mind
Overview of trial simulation process
• Assess effect of study parameters on operating characteristics
• Calculate operating characteristics (power, etc.)
• Statistically analyze endpoints of interest
• What endpoints are we collecting, and when?
• How are endpoints affected by dosing (PK/PD relationship)?
> What about compliance? Will need a drop out model.
• What treatment arms?
> Any modification to those treatment arms (e.g., sample size change, additions/subtractions, randomization changes)
• What population are we drawing from?
For complex designs, build each of these components modularly so that you can modify them later.
Build only those pieces you need for your question.
Overview of Plan
> Dose-response (exposure, etc.)
> Measurements
- Efficacy, safety
> Population of interest
- Subset based on covariates
> Timing of dosing/interventions, measurements
> Drop outs, other noncompliance
> Data analysis model
> Variations in design parameters
- Effect of changing timing, dosing, sample size, treatment effect size, etc.
Dose-Response (exposure)
• Model linking dose to clinical endpoint
> Efficacy
> Safety
• Often use the Hill model
> Y = a + d/(K + (x/c)^b)
• If PK models available, then
can use exposure parameters
> AUC, Cmax
> If you have PK model, can use
that to generate drug
concentrations
See http://aquaticpath.umd.edu/appliedtox/module1.html for more information
[Figure: drug effect vs. exposure for different population markers; log viral load change from baseline vs. drug concentration, with observed and predicted curves for example subjects]
Measurements
• Measurements are outcomes that are measured
• In TS, these are scheduled
> Dialog box for that
> Can be missed at random
> Lot of flexibility on probability of miss or hazard
function
• In SAS and R, can program in missingness
> Good way to assess impact of MAR or MNAR
> Create a model for AEs that yield drop-outs
• Measurements can have measurement error
• Can simulate measurements but ignore for later
analysis if changes are needed
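As a hedged sketch (not from the slides), missingness mechanisms can be programmed directly in R; the logistic coefficients below are assumed values chosen only to illustrate MAR vs. MNAR behaviour:

# Simulated week-12 measurement with MAR vs. MNAR missed visits (illustrative values)
set.seed(1)
n <- 200
baseline <- rnorm(n, 50, 10)
week12   <- baseline - 5 + rnorm(n, 0, 8)
p.mar  <- plogis(-2 + 0.03 * (baseline - 50))   # MAR: depends only on the observed baseline
p.mnar <- plogis(-2 + 0.05 * (week12 - 45))     # MNAR: depends on the unobserved value itself
obs.mar  <- ifelse(runif(n) < p.mar,  NA, week12)
obs.mnar <- ifelse(runif(n) < p.mnar, NA, week12)
# Compare the complete-data mean with what each mechanism leaves observed
c(truth = mean(week12), mar = mean(obs.mar, na.rm = TRUE), mnar = mean(obs.mnar, na.rm = TRUE))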
Population
• For a given trial/indication, there are many relevant covariates that could influence treatment outcome/disease progression.
• Want to specify distribution models for covariates that approximate real world
patients.
• If you have a large sampling of patients, can sample with replacement from
the list, OR create distributions and generate purely in silico participants
Randomization and Sample Size
• Randomization
> What if randomization is not balanced?
- Dunnett's: Optimal if ratio is close to √k : 1 : 1 : … : 1
> Can investigate effect of treatment x center interaction (especially for adaptive
where we have staggered starts and recruitment rates across centers)
> Effect of multiple populations
- Already on DMARD vs. DMARD-naïve
• Sample Size
> Can investigate range of sample size to characterize power function, other
performance metrics of the trial
Timing (Protocol scheduled times)
• Patients are scheduled for a series of treatments. For example, taking drug once daily for 2 months.
• Patients are scheduled for multiple follow-ups so that the treatment effect can be observed.
• Want to set up trial protocol: treatment and observation schedules.
Noncompliance
• Trial protocol could be violated: missing dose, missing follow-up, missing patients, etc.
• Want to set up models for noncompliance
• Easiest: P{ dropping out on visit t } = p
• What is effect if dropouts are dose-dependent?
> P{ dropping out on visit t } = p0 + p1 I{ treatment = active }
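A minimal R sketch of this dropout model (p0 and p1 below are assumed values, not from the slides):

# Per-visit dropout: P{ dropping out on visit t } = p0 + p1 * I{ treatment = active }
simulate.dropout <- function(n.visits = 6, active = TRUE, p0 = 0.05, p1 = 0.05) {
  p.drop <- p0 + p1 * as.numeric(active)
  for (t in 1:n.visits) {
    if (runif(1) < p.drop) return(t)   # visit at which the subject drops out
  }
  Inf                                  # subject completed all visits
}
set.seed(42)
mean(replicate(1000, simulate.dropout(active = TRUE))  < Inf)   # dropout rate, active arm
mean(replicate(1000, simulate.dropout(active = FALSE)) < Inf)   # dropout rate, placebo arm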
Data Analysis Model
• Simulated data is only useful with proper analysis.
• Want to specify analysis method (based on protocol), for each replicate or
each scenario of the simulation.
• Good place to investigate:
> Effects of adding covariates to the analysis (may not be as useful in categorical as
you would hope)
> Missing data strategies (e.g., LOCF vs. MMRM, etc.)
> Multiple comparison strategies
Simulation Design
• The point of simulation is to explore different scenarios.
• Want to specify many scenarios based on design parameters (sample size,
treatment effect size, variability of effect size, etc) and their combinations.
• Central Composite Design for optimizing responses (eg, Power)
Modeling & Simulation
Basic Concepts
Defining Modeling and Simulation
Modeling: “Looking backward”
• Develop mathematical models to describe and explain observations
Simulation: “Looking forward”
• Using a model to predict outcomes based on "what if" assumptions
For Biopharma: M&S is using diverse data sources to model and simulate relationships between drug exposure, response, and patient characteristics
The Learn and Confirm Cycle
An Iterative Approach
[Diagram: learn-and-confirm cycle of Question → Design → Execute → Data → Model → Answer]
Derived from Sheiner LB, Clin Pharmacol Therap 1997, 61:275-291
The Learn and Confirm Paradigm
Applied to the Drug Development Process
[Diagram: the Design → Execute → Data → Model cycle repeated for Phase I, Phase II, and Phase III]
Derived from Sheiner LB, Clin Pharmacol Therap 1997, 61:275-291
The Learn and Confirm Paradigm
No Longer Phase I, II, and III
[Diagram: Learning and Confirming cycles (Design → Execute → Data → Model) replacing the traditional Phase I/II/III boundaries]
Derived from Sheiner LB, Clin Pharmacol Therap 1997, 61:275-291
PK-PD
Principles
The Causal Chain
From Dose to Clinical Endpoint
Dose
• Represents the basic amount of drug being administered to a patient
• Often a static assignment that does not change over time

Concentration
• The amount of drug that is observed in the body
• Varies within a subject over time due to the effects of absorption, distribution, metabolism, and elimination (ADME) of the drug
• Additionally can vary between subjects due to differences in ADME

Response
• In basic clinical pharmacology receptor theory, the effect the drug has on the body
• Varies within a subject over time due to the concentration of drug available to elicit an effect
• Additionally can vary between subjects due to differences in the receptors / genetics
• Is not always quantifiable

Clinical Endpoint
• The observed patient outcome or assessment
• Can vary both within and between patients over time
Dose vs Concentration Response
Design and Analysis
• Dose-Response
[Diagram: subjects randomized (R) to placebo (O), dose D1, or dose D2]
• Concentration-Response
[Diagram: subjects randomized (R) to placebo (O), target concentration C1, or target concentration C2]
Dose vs Concentration Response
Implications of Low Pharmacokinetic Variability
• Low PK Variability = No Confounding & Little Overlap
[Figure: with low PK variability, the distributions for doses D1-D3 show little overlap on both the response (R) and concentration (C) scales]
Dose vs Concentration Response
Implications of High Pharmacokinetic Variability
• High PK Variability = Confounding & Overlap
[Figure: with high PK variability, the distributions for doses D1-D3 overlap substantially on both the response (R) and concentration (C) scales]
Dose vs Concentration Response
The Confounding Effect of High Correlation Between Pharmacokinetics
and Response
• Dose-Response
[Diagram: randomization (R) to placebo (O), D1, or D2, with response correlated with concentration (C)]
• Concentration-Response
> Concentration-response protects against disease and pharmacokinetic confounders that cause the down bias
[Diagram: randomization (R) to placebo (O), target concentration C1, or C2]
Case Study
Randomized Dose vs. Concentration Controlled Clinical Trials
Source: Reeve, R and M Hale (1994) Results and efficiency of Bayesian dose adjustment in a clinical trial with a binary endpoint. Proceedings of the Biopharmaceutical Section of the ASA 1994
Exposure
Terminology
Dose (poor metric) → Concentration → Cmax or AUC (best metric)
Due to the potential confounding that causes the down bias
Pharmacokinetic Models
Estimating Exposure Parameters
• Well-Stirred Pharmacokinetics Model
• Physiologically Based Pharmacokinetics
(PBPK)
[Diagrams: well-stirred model with central and peripheral compartments; PBPK model with right heart, left heart, lungs, upper body, liver, spleen, intestine, kidneys, lower body]
Variability Models
Estimating the Variability
Overall Variability
> Between Subject Variability (Inter-Subject Variability)
> Within Subject Variability (Intra-Subject Variability)
- True WSV
- Residual Error (Measurement Error)
Population PK Models
Nonlinear Mixed Effects Modeling
• Create Basic Structural Model
> 1-, 2-, 3-Compartment
> Route of Administration (Oral, IV)
> Estimate PK Parameters
- Volume of Distribution
- Clearance
- Absorption Rate Constant
• Create Error Model
> Between Subject Variability
- Exponential Error Model
> Within Subject Variability
- Proportional Error Model
- Additive Error Model
[Figure: concentration (mg/L) vs. time after dose (hr), 0-48 hr]
Source: Example developed from analysis of a large, proprietary, de-identified illustrative data set.
Population PK Models
Nonlinear Mixed Effects Modeling
> Evaluate Covariate Relationships
- Weight Adjusted Dosing
- Renal / Hepatic Impairment Adjustment
- Concomitant Medication Drug-Drug Interaction Signal
> Variability Estimates in Patients
- Exposure
- Response
> Justify Dose / Labeling
[Figures: clearance (L/hr) vs. weight (kg); clearance (L/hr) vs. creatinine clearance (mL/min)]
Source: Example developed from analysis of a large, proprietary, de-identified illustrative data set.
Population PK-PD Models
Nonlinear Mixed Effects Modeling
• Create Basic Structural Model
> Linear, Power, Emax, Sigmoidal Emax, Hysteresis
> Estimate PD Parameters
- Baseline
- Maximum Effect (Emax)
- Effective Concentration (EC50)
• Create Error Model
> Between Subject Variability
- Proportional Error
- Additive Error
> Within Subject Variability
- Proportional Error Model
- Additive Error Model
[Figures: Pharmacokinetics (concentration (mg/L) vs. time after dose (hr)); Pharmacodynamics (effect vs. concentration (mg/L)); Pharmacokinetics-Pharmacodynamics (effect vs. time after dose (hr))]
Source: Example developed from analysis of a large, proprietary, de-identified illustrative data set.
Disease Progression Models
[Figure: disease status vs. time from start of treatment, showing natural progression, partially protective, fully protective, and additive symptomatic with tolerance curves]
• Basic Disease Progression
Model
> Alzheimer's
> Parkinson's (UPDRS)
> Anemia
> Diabetes
> Thromboembolic Disorders
• Time-To-Event Models
> Myocardial Infarction
> Stroke
Population PK-PD Models
Model Validation
Stability
• Re-Sampling Techniques
> Bootstrap: Random Sampling with Replacement
> Jack-Knife: N-1
> Condition Number
• Exclusions
> Observations
> Subjects

Predictability
• Internal
> Visual & Numerical Predictive Checks
> Posterior Predictive Checks
• External
> Data Splitting
> Comparison with Separate Study / Data

Sensitivity
• Inputs
> Vary Covariates
> Vary Observations
> Vary Independent Variable
Protocol Deviations and
Execution Models
Protocol Deviations and Execution Models
Overview
• The evaluation of variation in the conduct of clinical study designs to identify
the strengths and weaknesses of a specific design
• Examples:
> Adherence to Dosing Regimen
- Wrong or Extra Doses
- Improper Timing of Doses / Dosing Holiday
> Dropout Models
- Time Patient Discontinues Study Procedures
- Time to Switch to Different Medication (eg, Rescue Medication)
Adherence Modeling
Collecting Adherence Data
• John Urquhart's Rule of 6's
• Pill Bottle Caps or Blister Packs
• Uses microprocessor in cap to collect
date and time data on when doses are
taken
• LCD Readout or Wireless Scanner
• Drawbacks
> Only solid dosage forms
> Cost : $274/patient for 6 months
> Limited acceptance
> Multiple dose removal and setting dose
aside
• Eg, MEMSCap™ Medication Event Monitoring System by Aardex
Adherence Modeling
Estimating Adherence – Markov Mixed Effects Regression Model
Source: Girard, et. al. “A Markov Mixed Effect Regression Model for Drug Compliance”, Statistics in Medicine.. Vol 17, pgs 2313-2333. (1998)
Adherence Modeling
Predicting Adherence – Markov Mixed Effects Regression Model
Source: Girard, et. al. “A Markov Mixed Effect Regression Model for Drug Compliance”, Statistics in Medicine.. Vol 17, pgs 2313-2333. (1998)
Adherence Modeling
Impact on Exposures
• Adherence holidays can drop
concentration levels below a
threshold for therapeutic
efficacy
• Non-compliance with administration (eg, double dosing) can also raise concentration levels above toxicity thresholds, leading to potential adverse events
Source: Comté, et. al. “Estimation of the comparative therapeutic superiority of QD and BID dosing regimens, based upon integrated analysis of dosing history and pharmacokinetics”,
Journal of Pharmacokinetics and Pharmacodynamics. Vol 34, pgs 549-558. (2007)
Dropout Modeling
Hazard Models
• Assuming the time to a specified discontinuation event u follows a distribution function F, then the probability of the event occurring is:
p(θ) = ∫₀^T f(u)·e^(−λ·u) du
where the hazard λ is defined as the probability the event happens given it has not happened at a specified time t during the period of the study T
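As a hedged sketch (assuming a constant hazard, the simplest case of the model above; the numbers are illustrative), dropout times can be drawn directly in R:

# Constant-hazard dropout: time to dropout ~ Exponential(rate = lambda)
set.seed(7)
lambda  <- 0.02    # assumed weekly dropout hazard
T.study <- 52      # study duration in weeks
t.drop  <- rexp(500, rate = lambda)
mean(t.drop < T.study)        # empirical P{ dropout before end of study }
1 - exp(-lambda * T.study)    # analytic probability, for comparison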
[Diagram: simulation schematic linking placebo effect, disease, drug therapy, PK, biomarker, adherence, adverse events, and dropout to the clinical outcome]
Source: Kimko & Duffull. Simulation for Designing Clinical Trials.: A Pharmacokinetic-Pharmacodynamic Modeling Perspective. Marcel Dekker, New York, NY. 2003
Additional Slides
Use in Early Drug Development
• Biomarkers/Surrogate Endpoints/Clinical Markers
> Early Start on Proof of Concept
> Early Read on Establishing Variability in PK / PD
> Helpful for Selection of Doses for Phase 2
(Dose Ranging Studies)
> Examples
- CD4 counts / viral load
- Serum Glucose
- CT / MRI / PET
- Genomics -> CYP2D6 Metabolizers
- QTc
- Blood Pressure
- INR
- Cure / No Cure
Additional Resources
Books
Simulation
Clinical Trial Simulation
• Clinical Trial Simulation
> Protocol Design
> Protocol Deviation
- Adherence / Compliance
- Drop-out
> Covariate Distribution /
Correlation
> Dose / Pharmacokinetic / Pharmacodynamic Model
> Data Analysis Plan
> Results
> Simulation Scenarios
Figure courtesy of Pharsight Trial Simulator
Trial Design
Trial Design
Plan this in Advance
• Dosing schedules
• Timing of endpoints
• Parallel, crossover, extension phase?
Simulated Patients
Covariate Distribution Model
• Specify subject covariate distributions.
> Trial Simulator sets up covariates before entering the subject loop.
> Population covariates cannot be assigned values during the simulation.
• Discrete and continuous distributions.
> Tip: Try to use nominal values for discrete distributions. For example, use male and female for Sex instead of 0 and 1.
• Covariates can be used as study inclusion and exclusion criteria during enrollment, and to stratify subjects during assignment to treatment groups.
• Covariate distributions may vary across a number of sub-populations; the covariate distribution model can capture sub-population and center effects.
• Variability in the covariate distribution model can be enabled or disabled
separately from that in the drug model.
Simulated Patients
Virtual Subjects vs. Re-sampling
• Virtual subjects much easier to create
> Just need distribution
> Keep in mind that many covariates will be
correlated
- Difficult to maintain direct correlation
among covariates
> Distribution of age and weight
[Figure: scatter plot of weight (kg) vs. age (yr)]
Brainard and Burmaster (1992) Risk Analysis: "For the U.S. population, we fit bivariate distributions to estimated numbers of men and women aged 18-74 years in cells representing 1 in. intervals in height and 10 lb intervals in weight. For each sex separately, the marginal histogram of height is well fit by a normal distribution. For men and women, respectively, the marginal histogram of weight is well fit and satisfactorily fit by a lognormal distribution. For men, the bivariate histogram is satisfactorily fit by a normal distribution between the height and the natural logarithm of weight. For women, the bivariate histogram is satisfactorily fit by two superposed normal distributions between the height and the natural logarithm of weight. The resulting distributions are suitable for use in public health risk assessments."
http://www.ncbi.nlm.nih.gov/pubmed/1502374
• Re-sampling from an existing database is more realistic
> Based on actual patients
Source: Example developed from analysis of a large, proprietary, de-identified illustrative data set.
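Following the Brainard and Burmaster idea of a bivariate distribution for height and log(weight), a minimal R sketch of generating virtual subjects is below; the means, SDs, and correlation are assumed for illustration and are not taken from that paper:

# Virtual subjects from a bivariate normal on (height, log weight)
library(MASS)    # for mvrnorm
set.seed(123)
mu    <- c(176, log(80))    # assumed mean height (cm) and log of mean weight (kg)
sds   <- c(7, 0.18)
rho   <- 0.5
Sigma <- diag(sds) %*% matrix(c(1, rho, rho, 1), 2, 2) %*% diag(sds)
draws <- mvrnorm(n = 1000, mu = mu, Sigma = Sigma)
subjects <- data.frame(height = draws[, 1], weight = exp(draws[, 2]))
cor(subjects$height, subjects$weight)    # correlation among covariates is retained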
Simulated Patients
Disease Progression Models
• Example: P{ TS | t } = a + b[1 − exp(−kt)] (see the R sketch below)
• Correlation among time points in the data, despite its binaryness, though it typically does not affect results
• Useful when you may need to involve many time points in your analysis, or if you are investigating time points to use as the primary
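A minimal R sketch of drawing binary responses from the model above (a, b, and k are assumed values for illustration):

# P{ TS | t } = a + b*(1 - exp(-k*t))
p.ts <- function(t, a = 0.2, b = 0.4, k = 0.15) a + b * (1 - exp(-k * t))
weeks <- c(2, 4, 8, 12, 24)
set.seed(11)
one.subject <- rbinom(length(weeks), size = 1, prob = p.ts(weeks))
rbind(week = weeks, p = round(p.ts(weeks), 2), response = one.subject)
# Note: independent draws per visit ignore the within-subject correlation mentioned
# above; a shared subject-level random effect on a or b would induce it.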
Simulated Patients
Disease Progression Models
• Useful to get realistic variation among molecules
• Expected treatment effects can be
estimated
• Prior for model parameters can be
constructed
Inclusion / Exclusion
Criteria
Inclusion/Exclusion
• Apply I/E criteria against simulated patients
> Chiefly for covariates that affect response
• Could have large effect on study performance
• Match up to your TPP
The Drug Model(s)
Drug Model (Input/Output)
• Drug model is the most important part of the trial design.
• Scheduled events: Formulations, Responses, Actions and Events
Pharmacokinetic – Pharmacodynamic Model
Integration Into Clinical Trial Simulation
• Inputs to Dose-PK-PD Model
> Protocol Deviation
- Adherence / Compliance
- Drop-out
> Model Parameters Adjusted Based Upon Virtual Patient Covariates
• Outputs from Dose-PK-PD
Model
> Exposures for Scenario Testing
of Sampling Design
> Response
- Efficacy
- Safety
Figure courtesy of Pharsight Trial Simulator
Drug Model
Dose-Response Model
• Two models that fit most situations
> Linear
> Hill (alias Emax, logistic, Michaelis-Menten)
Two forms of the same equation:
R = E0 + Emax·x^γ/(K^γ + x^γ) + e
R = E0 + Emax/{1 + exp[−b(log x − c)]}
Thompson et al (2013)
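A minimal R sketch (not from the slides) confirming that the two parameterizations trace the same curve when b = γ and c = log K; the parameter values are arbitrary:

# Hill / Emax model in its two algebraically equivalent forms (residual error e omitted)
hill1 <- function(x, E0, Emax, K, gamma) E0 + Emax * x^gamma / (K^gamma + x^gamma)
hill2 <- function(x, E0, Emax, b, c)     E0 + Emax / (1 + exp(-b * (log(x) - c)))
x <- c(0.1, 1, 3, 10, 30, 100)
all.equal(hill1(x, E0 = 2, Emax = 10, K = 5, gamma = 1.5),
          hill2(x, E0 = 2, Emax = 10, b = 1.5, c = log(5)))   # TRUE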
Drug Model
Pharmacokinetic – Pharmacodynamic Models
• Think in steps
> Dose → PK → PD
• C(t) = (D/V)[exp(−(CL/V)·t) − exp(−ka·t)]
PK Model (Input: Dose, t; Output: C(t))
• Instant effect
> R = E0 + Emax·C(t)^γ/(K^γ + C(t)^γ)
> QT prolongation is often modeled this way
• Delayed effect
> R = E0 + Emax·C(t − tdelay)^γ/(K^γ + C(t − tdelay)^γ)
PD Model (Input: C(t), t; Output: Response)
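A minimal R sketch chaining Dose → PK → PD with the formulas above; all parameter values are assumed for illustration:

# PK model: C(t) = (D/V) * [exp(-(CL/V)*t) - exp(-ka*t)]   (form used on this slide)
conc <- function(t, D = 100, V = 20, CL = 5, ka = 1.0) {
  (D / V) * (exp(-(CL / V) * t) - exp(-ka * t))
}
# Instant-effect PD model: R = E0 + Emax * C^gamma / (K^gamma + C^gamma)
effect <- function(C, E0 = 0, Emax = 1, K = 2, gamma = 1) {
  E0 + Emax * C^gamma / (K^gamma + C^gamma)
}
t <- seq(0, 24, by = 0.5)
round(head(cbind(time = t, conc = conc(t), response = effect(conc(t)))), 3)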
Models of Study
Conduct
Models of Study Conduct
Traditional Serial vs. (D-Optimal) Sparse PK Sampling Windows
Traditional Serial Sampling
D-Optimal Design Sparse Sampling Windows
[Figures: concentration (mg/L) vs. time after dose (hr), 0-24 hr, for each sampling scheme]
• Typical sampling scheme for Phase 1 Studies
• Intensive, rigid sampling difficult for patients
• Good for descriptive statistics, not as informative
for modeling
• Sparse sampling more common in Phase 2/3
• Limited sampling windows offer flexibility for
clinic and patients, but can also be too flexible
• More informative for modeling
• Need a priori information (ie, the PK model)
• Reduce cost associated with redundant sampling
• Optimizing sampling can help strengthen study
outcomes and can be evaluated by looking at
various sampling scenarios
Models of Study Conduct
Adherence (aka, Compliance)
• Simple Adherence Models (1-coin vs. 2-coin)
> 1-coin model: The likelihood a patient will take a
medication on any given day
> 2-coin model: The likelihood a patient will take a medication today, depending on whether they took the previously prescribed medication (see the R sketch after this list)
> n-coin model: Expandable out to n doses
• Advanced Adherence Models
> Correlation Between Adherence and Dose Frequency
- High rates for QD dosing
- Lower rates for BID, TID, QID
> Correlation Between Adherence and Dose Timing
- Different adherence rates for morning dose
vs lunch vs evening
> Correlation Between Adherence and Duration of
Treatment
- Decreases over time for chronic therapies
- Decrease of Efficacy (Tolerance) or Development of
Adverse Events
> Feedback Loop dependent upon Efficacy and Safety
Endpoints
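A minimal R sketch of the 1-coin and 2-coin models referenced above (the adherence probabilities are assumed values for illustration):

# 1-coin: same probability of taking the dose every day
one.coin <- function(n.days, p = 0.85) rbinom(n.days, 1, p)
# 2-coin: today's probability depends on whether yesterday's dose was taken
two.coin <- function(n.days, p.after.taken = 0.9, p.after.missed = 0.6) {
  taken <- numeric(n.days)
  taken[1] <- rbinom(1, 1, p.after.taken)
  for (d in 2:n.days) {
    p <- if (taken[d - 1] == 1) p.after.taken else p.after.missed
    taken[d] <- rbinom(1, 1, p)
  }
  taken
}
set.seed(5)
mean(one.coin(90))   # overall adherence under the 1-coin model
mean(two.coin(90))   # overall adherence under the 2-coin model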
Study Statistical
Analysis
Statistical Analysis
Now is the time to plan for effects of analysis on study operating
characteristics
• Build simple statistical analyses into your simulation to begin with
• Factors you may wish to investigate:
> Covariate adjustment: Help or hurt?
> Effect of categorization of endpoint
> Effect of different time points
- Alzheimer's: Treatment effect broadens over time
» Interplay between length of study and N
> Treatment of missing data
- LOCF, MMRM, etc.
> Endpoint summarization
- Slope, AUEC, change from baseline
> Analysis populations
- ITT: Variability in mean difference due to imputation method and dropouts
- PP: Variability in sample size due to dropouts
Scenario & Sensitivity
Testing
Scenario & Sensitivity Testing
Objectives and Method
• Objective: Investigate the effects of design
parameters on design performance metrics
• Looking for: Design with adequate power,
appropriate Type I error rate, adequate
estimation precision, and robust against
factors we cannot control
> Cannot control ED50, but there may be uncertainty
in it, so want trial that gets “correct” decision most
of the time regardless of ED50 value
• Design parameters: Vary by design
• Replicates: Repeatedly run trial to get
distribution
> Common to see 10,000 replicates
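As a hedged sketch (not from the slides), the replicate idea can be illustrated with a simple two-arm trial in R; the effect size, SD, and n are assumed values:

# Estimate power by replicating a simple two-arm trial
one.trial <- function(n = 50, delta = 5, sigma = 10, alpha = 0.05) {
  placebo <- rnorm(n, 0, sigma)
  active  <- rnorm(n, delta, sigma)
  t.test(active, placebo)$p.value < alpha
}
set.seed(2014)
mean(replicate(10000, one.trial()))   # Monte Carlo estimate of power
power.t.test(n = 50, delta = 5, sd = 10, sig.level = 0.05)$power   # analytic comparison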
Scenario & Sensitivity Testing
Controlled vs. Un-controlled Variables (Taguchi Design)
• Example:
> Dose adaptive design to estimate best dose for
Phase III in Alzheimer’s
> What can we vary?
- Sample size (N), time to endpoint, number of
doses, point of interim analysis
- Have estimate of effect size with uncertainty
» Based on the last study, our prior on treatment effect is Normal(3, 2.1^2)
• Two approaches to handling treatment effect
> Taguchi design
- Another factor in the design, to be controlled
> Bayesian approach
- Random variable, and to get power we average
conditional power across all values of treatment
effect
Factorial Design of Simulation (Trt Effect: vary this)
N | Time | Num Doses | IA Point
300 | 6 mo | 2 | 50%
500 | 6 mo | 2 | 50%
300 | 12 mo | 2 | 50%
500 | 12 mo | 2 | 50%
300 | 6 mo | 4 | 50%
500 | 6 mo | 4 | 50%
300 | 12 mo | 4 | 50%
500 | 12 mo | 4 | 50%
300 | 6 mo | 2 | 70%
500 | 6 mo | 2 | 70%
300 | 12 mo | 2 | 70%
500 | 12 mo | 2 | 70%
300 | 6 mo | 4 | 70%
500 | 6 mo | 4 | 70%
300 | 12 mo | 4 | 70%
500 | 12 mo | 4 | 70%
Scenario & Sensitivity Testing
Maximizing Probability of Success (ie, Power), etc
[Tables: simulated operating characteristics (n = 1000 simulated trials) for two candidate designs, shown under a Bonferroni-adjusted alpha of 0.05 and an unadjusted alpha of 0.05. For each combination of design parameters the tables report overall power, probability of picking the best dose, number of arms dropped, and N per arm.]
• Factorial design for parameters, including design options
• Statistical analysis included in the simulation design
• Many types of design characteristics can be investigated
Work through Specific
Examples
Outcome Optimization
Case Study
Clinical Trial Simulation - Outcome Optimization
Challenge
• Background
> Treatment:
- Piperacillin / Tazobactam (PTZ)
> Problem:
- Obtain probability of target attainment (PTA) > MIC for more than 50% of the dosing interval
- Identify if dosing needs to be adjusted in the obese population (including adjustments for CrCL)
- Compare traditional vs. extended-infusion dosing regimens

Solution
• Methodology:
> Monte Carlo simulations using Pharsight® Trial Simulator™ 2.2.2
> Using a previously developed PTZ population PK model with covariates

Results
• No weight-based PTZ dose adjustments are required in the obese population
• Validates the use of extended-infusion regimens in both normal and obese individuals
Source: TP Dumitrescu, R Kendrick, H Calvin, and NS Berry. “Using Monte Carlo Simulations to Assess Dosing Regimen Adjustments of Piperacillin/Tazobactam in Obese Patients with
Varying Renal Functions.” Journal of Pharmacokinetics and Pharmacodynamics. May 2013
Bioequivalence with Sample Size Reestimation
Basic Setup
Objectives and Prior Knowledge
• Generic drug, but want to compare with 2 existing on-market products
• Very long half-life (>2 weeks), so will do a parallel design
• Toxicity not an issue
• Will be comparing AUC and Cmax
• Standard statistical analysis
> log AUC and log Cmax modeled independently
> If the 90% confidence interval for the ratio AUC1/AUC2 (from the difference in log AUC) lies between 0.80 and 1.25, then bioequivalent (BE); otherwise, fail to be BE
> Comparison must be made to both comparators
• CV of AUC = 0.3 (based on literature of N=20 patients in a different
population)
• No data on Cmax variability, but we think it is smaller than for AUC
• Decided to do a sample size re-estimation
> But when?
• Of interest: Type I and Type II error rates
• We will program this in SAS
Flow Chart for Simulation
Will be programmed in SAS
SAS code to Simulate log(Cmax)
[Flow chart: Simulate log(Cmax) (for ease of programming, create all data and drop those not used in analysis) → Do Interim Analysis (recalculate n using only the first n_interim subjects, and keep n subjects overall) → Final Analysis (calculate confidence intervals, and from that power)]
data phase1;
  input CV DOrate InterimPoint;          * one scenario per data line below;
  do iter = 1 to &NumIter;               * replicate trials per scenario;
    NumDays = &TargetN / &PatientsPerDay;
    do patient = 1 to &MaxN;
      day = floor((patient-1)/6) + 1;    * 6 patients recruited per day;
      trt = mod(patient-1, 3) + 1;       * rotate through the 3 treatments;
      if patient <= &TargetN then PlannedGroup = 1;
      else PlannedGroup = 0;
      if patient <= &TargetN*InterimPoint then ForInterim = 1;
      else ForInterim = 0;
      logCmax = normal(0)*CV;            * simulate log(Cmax) with SD = CV;
      if uniform(0) <= DOrate then DO = 1; else DO = 0;   * dropout indicator;
      if trt = 1 then logCmax = logCmax + 0.05;           * small treatment shift;
      lastPK = day + &ADAPKtime;
      output;
    end;
  end;
cards;
0.33 0 0.333
Etc.
Flow Chart for Simulation
Will be programmed in SAS
SAS code to calculate the new sample size; but first we need to calculate the coefficient of variation from the blinded data, so we do not do a fancy mixed model, but a simple CV calculation ignoring the treatment effect.

[Flow chart: Simulate log(Cmax) → Do Interim Analysis → Final Analysis]

proc sql;
  create table BlindedInterimResults as
  select CV, DOrate, InterimPoint, iter, N(DO) as NsoFar,
         mean(DO) as DORateObs, std(logCmax) as CVhat,
         max(lastPK) + &PKanalysisTime + &AssayTime as StartAdditionalCohort
  from phase1
  where ForInterim = 1
  group by CV, DOrate, InterimPoint, iter;
quit;
Flow Chart for Simulation
Will be programmed in SAS
SAS code to calculate new sample size, based on results
from the PROC SQL on last slide
[Flow chart: Simulate log(Cmax) → Do Interim Analysis → Final Analysis]

data Predicted;
  set BlindedInterimResults;
  NumEvaluable = (1 - DORateObs)*&Cohort*&NumCohort/3;
  if NumEvaluable < 60 then MaxN = 10; else MaxN = 8;

  *** Calculate Sample Size ***;
  CVpt1 = 0.25;
  CVpt2 = 0.275;
  CVpt3 = 0.3;
  SS1 = 36;
  SS2 = 43;
  SS3 = 50;
  array CVpt {*} CVpt1-CVpt12;
  array SS {*} SS1-SS12;

  if CVhat <= CVpt1 then SSarm = SS1;
  else do i = 2 to 12;
    if CVpt{i-1} < CVhat <= CVpt{i} then SSarm = SS{i};
  end;

  Recruit = SSarm/(1 - DORateObs);        ** Per arm sample size **;
  Recruit = 3*Recruit;                    ** Sample size total **;
  Recruit = ceil(max(NsoFar, Recruit));   ** Keep what we already have **;
run;
Flow Chart for Simulation
Will be programmed in SAS
SAS code to calculate CI and power: analyze the complete data, and then determine whether the BE criteria are met (and in what proportion of in silico trials)
[Flow chart: Simulate log(Cmax) → Do Interim Analysis → Final Analysis]

ods listing close;
ods output Estimates=estimates;
proc mixed data=FinalData(where=(DO=0));   * evaluable subjects only;
  by CV ADArate InterimPoint iter;
  class trt;
  model logCmax = trt;
  estimate 'TrtDiff 1 vs 2' trt 1 -1  0 / cl alpha=0.1;   * 90% CIs for each comparison;
  estimate 'TrtDiff 1 vs 3' trt 1  0 -1 / cl alpha=0.1;
  estimate 'TrtDiff 2 vs 3' trt 0  1 -1 / cl alpha=0.1;
run;
ods listing;

data CIinLimits;
  set estimates;
  if log(0.8) <= lower and upper <= log(1.25) then InLimits = 1;   * BE limits on the log scale;
  else InLimits = 0;
run;

proc sql;
  select CV, ADArate, InterimPoint, mean(AnyIn) as AnyInRate, mean(AllIn) as AllInRate
  from TrialResults
  group by CV, ADArate, InterimPoint;
quit;
Dose Ranging Study
with One Interim to Drop Doses
Dose-Response Model
Basic Setup
Objectives and Prior Knowledge
• New molecular entity entering Phase II in a debilitating, progressive disease
with no alternative treatment
• We have 4 doses to investigate, plus placebo
• Endpoint is continuous, measured at baseline, 3 mo, 6 mo, 9 mo, and 12 mo
• Phase I data indicates
> Toxicity
> Linear dose-response curve, linear time-response curve
• We want to drop doses as fast as possible in non-efficacious dose groups
> Phase I indicates toxicity
• Generated this in R
• Will not have complete code, but will have key pieces
• Advantage to R in this context
> Can create sub-modules as functions, which we can then alter as assumptions change
Onion Approach to Building
Simulation
[Diagram: onion layers, from outer to inner: Summarize overall results > Generate many trials, varying assumptions > Generate trial data > Subject]
• Summary: What is the power of each parameter set, what sample size yields acceptable power, etc.
• Generate trials: Need to replicate to calculate power, while varying key parameters (sample size, type of design, etc.)
• Generate trial data: Generate in silico patients and get their results
• Subject data: Based on modeling of Phase I data
• Test is superiority over placebo, so Power = Prob{ p-value < α }
Response for a Given Subject
Created as a function so that we can use arguments to vary the effect
subject.data <- function(dose=300, dose.effect)
{
  # Create dose- and time-response function. Based on prior data, response is
  # linear in both dose and time. Potentially different model for placebo
  # than for treated.
  weeks   <- c(4, 8, 12, 16, 24)
  n       <- length(weeks)
  eta     <- rnorm(1, 0, 10)    # subject-level random effect
  epsilon <- rnorm(n, 0, 10)    # residual error at each visit
  if(dose == 0)
  {
    val <- 0 + eta + epsilon                             # placebo model
  }
  else
  {
    val <- 0 - xxxxxx*dose*dose.effect + eta + epsilon   # active arms (slope masked on the slide)
  }
  result <- data.frame(week=weeks, val=val)
  result
}
Trial Simulation
Calls function subject.data
run.trial <- function(sample.size, dose.effect, design)
{
  des2      <- design[design$status == 1,]
  DesignPts <- dim(des2)[1]
  NumRep    <- trunc( sample.size/DesignPts ) + 1
  doses     <- rep(design$dose1, sample.size)[1:sample.size]

  # Set up data structures needed for output
  data.out <- data.frame(
    dose1    = doses,
    Response = rep(0, length(doses)),
    dose     = as.factor(doses),
    volume   = as.factor(volume)    # volume assumed to come from the enclosing scope
  )

  # Simulate data for a trial: one in silico subject per row, keep the week-24 value
  for(i in 1:dim(data.out)[1])
  {
    subj <- subject.data(dose=data.out[i,"dose1"], dose.effect)
    row  <- subj$week == 24
    data.out[i,"Response"] <- subj[row, "val"]
  }

  # Perform data analysis, and return result of trial
  trial.lm <<- lm(Response~dose-1, data=data.out)
  pvalues  <- summary(trial.lm)$coefficients[,4]
  pvalues[summary(trial.lm)$coefficients[,1] > 0] <- 1   # treat estimates in the wrong direction as non-significant
  pvalues
}
Vary Parameters
Operating Characteristics of Trial under Different Scenarios
sim.all <- function(iter=1, dose.cond=c(0, 1, 2, 0, 1, 2), alpha=0.05)
{
  ##### Set everything up: basic structure in which results will be deposited
  fixed.power <- function(output)
  {
    output[(length(output)-2):length(output)]
  }
  K <- length(dose.cond)
  power.curve <- data.frame(P560=rep(0, K), overall=rep(0, K),
                            best.dose=rep(0, K))

  ##### Vary the parameters, and call the simulation function for each scenario
  for(i in 1:K)
  {
    # make.design2() returns the design parameters to be varied
    fixed.output <- fixed.design(iter, sample.size=180,
                                 dose.effect=dose.cond[i],
                                 alpha=0.05, make.design2())
    power.curve[i,] <- fixed.power(fixed.output)
  }

  # Return results to user (vol.cond assumed to come from the enclosing scope)
  cbind(data.frame(n=rep(iter, K), dose=dose.cond, volume=vol.cond), power.curve)
}
CNS phase III trial
simulation
Background
Program history and simulation objective
• Preparing for Phase III for NME
> Have data from earlier phases to produce models
> Phase I: Dose-ascending study with dense PK data, but no PD data
> Phase II: Have both sparse PK, and clinical endpoint
- Phase II clinical endpoint (standard questionnaire endpoint for this indication) will carry into Phase III as the primary
- Dose ranging, endpoint at 3 months
• Disease is a slowly progressive disease that ultimately results in death
> Instrument will be modeled as a linear decay over time
• Significant unmet need
• Simulation objectives
> Determine best sample size, time of endpoint, and probability of success
Design basics
Phase III standard design
• Will have up to 2 dose groups + placebo group
• Parallel, randomized, double-blind
• We expect treatment to gradually benefit the patients, or at least slow the
progression of the disease
> Treatment is symptomatic relief
> Would we change design if it were possibly disease modifying?
• Follow patients up to at least 1 year, but primary time point may be earlier
Our Onion
[Diagram: onion layers, from outer to inner: Summarize overall results > Generate many trials, varying assumptions > Generate trial data > Subject]
Start with building models for what we know, and fill in the remainder with assumptions. Effect of assumptions can be assessed in the simulation. Let's start with the subject model (input/output).
Subject Model
Dose → Drug Exposure → Clinical Outcomes
• Pharmacokinetics
> Build model from Phase II, which has the best data
> 2 compartment model is best fit
- Constructed population model in NONMEM
- Advantages of population model here: estimates of variability of parameters
[Diagram: source → central compartment (absorption rate ka) ↔ peripheral compartment; elimination from central via kel to sink]
Alternatives for ka:
ka = ka,tv exp(η2)
ka = ka,tv exp(θ·W)
Efficacy Response
Dose → Drug Exposure → Clinical Outcomes
• Response (change from baseline) at month 3 modeled as function of dose
> Can modify this function
> Assume linear effect over time
> Can modify treatment effect estimates to establish power curve
Patient Model
Progress in model building
• From Phase I: PK Model, steady state version (since out at 3 months or
beyond)
> Cpt = (D/V) exp(−ka t)/(1 − exp(− ka t )) + ω
- ka and V have subject-level random variables in them
Will need a
covariate model
• From Phase II: PD Model for efficacy
> E{Y} = β0 + β1·ALT + β2·AST + β3·Age + β4·Sex + β5·log(Cpt)
> Variability estimates come from the model as well as mean structure
Adherence Model
Had problems with this in earlier trials
• Adverse events related to drug concentration
> P(constipation) = logit(γ0 + γ1·Cpt)
> Will increase drop out hazard function
> Include in missed dose hazard function?
- h(t) = h1·exp(AEconst)
[Figure: probability of constipation vs. max(concentration); P-value = 0.058]
• Patients drop out as study progresses
> Hazard of drop out increases over time, and is
treatment dependent
> These are phase II patients, maybe phase III will
be more committed? Can modify hazard during
scenario building.
Covariate Model Building
Actual Data or Literature Review Alternative
• Best alternative would be data already in
hand for the existing patient population
> Can construct multivariate models
• Literature review can also give some
guidance
• Alternative source of data
> NAMCS
Source: Stragnes et al (2004) Body Fat Distribution, Relative Weight, and Liver Enzyme Levels: A Population-Based Study,
Hepatology, Vol 39, Issue, pp 754-763 (http://onlinelibrary.wiley.com/doi/10.1002/hep.20149/pdf)
Design Options—Scenarios
Possibilities to Investigate
Scenarios: interim analysis with possibility of stopping for futility; no interim analysis
Standard deviation of differences among means: 5, 10
N / arm: 50, 100
First futility sample size (fraction f): 30%, 50%, 70% (15/25/35 subjects for N=50 per arm; 30/50/70 subjects for N=100 per arm)
Scenario Analysis
Optimizing the design
• Design was factorial, so our analysis is straightforward
• Often "surprising" (i.e., outside of consensus) information is gleaned
[Figures: Comparing update frequencies: every week (black), every 2 weeks (red), every 4 weeks (green); probability of futility vs. fraction f for the null, U-shaped, and expected cases at N=50 and N=100 per arm; companion panels show probability of success, Pr SS Phase III, and N savings]
Take Home Message
Conclusions
• Timing
> Simulations don't have to be perfect, but they do have to be timely
• Quality of Predictions
> PK-PD Example
> Retrospective CNS discussed above
- Simulation results predicted outcome of study
- Analysis indicated probability of success to be low, and indeed the study failed to
show statistical significance for efficacy
> Mofetil example
- MMF is a drug to reduce organ rejection episodes.
- The drug is excreted renally.
- In prior trials for kidney transplantation, there was an inverse relationship between drug concentration in blood and probability of rejection.
- Trial was simulated in SAS.
Mycophenolate mofetil RCCT
Conclusions
• MMF is a drug to reduce organ rejection episodes.
• The drug is excreted renally.
• In prior trials for kidney transplantation, there was an inverse relationship
between drug concentration in blood and probability of rejection.
• Randomized Concentration-Controlled Trial (RCCT) was proposed. Was designed using simulation.
MMF Results
Trial Results and Information Gleaned from Simulations
• Information gleaned from the simulations
> Effects of dose adjustments
> Effects of maximum dose
> Power and Type I error rate
- Nonstandard null hypothesis
> Ability to discriminate among doses
> Distribution of doses
- Needed for manufacturing
- Saved several million dollars off the cost of the trial just on this finding
> Trial Result
- Logistic regression analysis showed a highly statistically significant relationship
between median ln(MPA AUC) and the occurrence of a biopsy-proven rejection
(P<0.001)
Van Gelder et al (1999) A randomized double-blind, multicenter plasma concentration controlled study of the safety
and efficacy of oral mycophenolate mofetil for the prevention of acute rejection after kidney transplantation,
Transplantation, 68(2) pp 261-66.
Take Home Message
General Considerations
• Powerful tool to support trial design
• Understanding what you know and what you don't know
> Your models and simulations are only as good as the data and assumptions that go
into them
> Creating the model is just as beneficial as conducting the simulations
• You don’t have to have all of the different pieces to do trial simulation.
> Just work with what you have
> If need be, make assumptions and test them