Abstracts

Slides are available by clicking on the title of the talk.

S1.1: Strategies to Accelerate Rare Disease Drug Development

Janet Maynard, FDA

Drug development for the approximately 7,000-10,000 rare diseases and conditions can be challenging and complex for many reasons. Challenges in rare disease drug development include small and sometimes very small populations, often poorly understood natural history, and limitations on available drug development tools and outcome measures. For these and other reasons, many rare diseases have few or no available treatments for patients who suffer from them. However, it is an exciting time in rare disease drug development, with new opportunities and scientific advancements that hold the potential to transform the treatment of many rare diseases. The Food and Drug Administration (FDA) has numerous initiatives and resources to facilitate rare disease product development, including the Center for Drug Evaluation and Research’s (CDER’s) Accelerating Rare disease Cures (ARC) Program and the Rare Disease Endpoint Advancement (RDEA) Pilot Program. Through these initiatives and collaboration, we are working to advance rare disease drug development.


S1.2: Innovative Thinking for Rare Disease Drug Development

Shein-Chung Chow, Duke University

For the development of a test treatment or drug product, it is necessary to conduct composite hypothesis testing for effectiveness and safety simultaneously, since some approved drug products have been recalled due to safety concerns. One major issue is that such composite hypothesis testing may require a huge sample size to achieve the desired power for both safety and effectiveness. The situation can be even more difficult in orphan (rare disease) drug development. In this presentation, a generalized two-stage innovative approach to test for effectiveness and safety is proposed. Additionally, to alleviate the requirement of a large randomized clinical trial (RCT) while still demonstrating effectiveness, real-world data (RWD) are suggested for use in conjunction with RCT data for rare disease drug development. The proposed approach can help investigators test for effectiveness and safety simultaneously with the limited sample size available. It also helps reduce the probability of approving a drug product with safety concerns.
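As a rough illustration of why simultaneous effectiveness-and-safety testing drives up sample size, the sketch below applies a simple intersection-union rule to simulated data: the composite claim is made only when both the efficacy null and the safety null are rejected at the same one-sided level. This is a minimal, hypothetical example, not the generalized two-stage procedure proposed in the talk; the endpoints, margin, and sample sizes are assumptions for illustration.

```python
# Minimal, hypothetical sketch (not the talk's generalized two-stage method):
# an intersection-union test that claims success only if BOTH the efficacy null
# and the safety null are rejected at the same one-sided level alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60                                  # assumed per-arm sample size
eff_trt = rng.normal(0.5, 1.0, n)       # efficacy score, treatment arm
eff_ctl = rng.normal(0.0, 1.0, n)       # efficacy score, control arm
ae_trt = rng.binomial(1, 0.15, n)       # adverse-event indicator, treatment arm
ae_ctl = rng.binomial(1, 0.20, n)       # adverse-event indicator, control arm

alpha = 0.025
# Effectiveness: one-sided two-sample t-test (treatment better than control).
p_eff = stats.ttest_ind(eff_trt, eff_ctl, alternative="greater").pvalue

# Safety: one-sided test that the excess AE rate is below a 10% margin.
margin = 0.10
diff = ae_trt.mean() - ae_ctl.mean()
se = np.sqrt(ae_trt.mean() * (1 - ae_trt.mean()) / n
             + ae_ctl.mean() * (1 - ae_ctl.mean()) / n)
p_safe = stats.norm.cdf((diff - margin) / se)   # H0: excess AE rate >= margin

# The composite claim requires both rejections, so power (and hence sample size)
# is driven by the weaker of the two components.
print(f"p_eff={p_eff:.3f}, p_safe={p_safe:.3f}, "
      f"composite success: {(p_eff < alpha) and (p_safe < alpha)}")
```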


S2.1: Adaptive Endpoint Selection in Rare Disease Drug Development

Cong Chen, Merck & Co.

In rare diseases, there are many unanswered questions during clinical development. Arguably the most important one is how to choose primary endpoints that translate into meaningful improvement of health outcomes for patients while maximizing the trial's probability of success. A natural history study is often recommended by regulatory agencies. This recommendation has dampened enthusiasm among many drug developers because it entails much higher cost and a longer timeline. We propose an innovative design strategy that allows adaptation of the primary endpoint(s), so that learning about disease endpoints can be done within the pivotal trial itself through a subset of patients (i.e., an informational cohort). The overall family-wise error rate (FWER) is controlled through the use of a combination test following the partition testing principle. A case example in patients with Pompe disease is used to show that the proposed innovative design maintains robust power across treatment effect scenarios, while a traditional fixed design bears a high risk of failure due to incorrect endpoint selection. Even when multiple endpoints can be included as primary, the proposed innovative design can still improve power over traditional designs by optimizing alpha allocations in cases with differential treatment effects.
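To make the combination-test idea concrete, the sketch below shows a standard inverse-normal combination of two stage-wise p-values, a common building block for adaptive designs of this kind; it is not necessarily the exact combination test or partition-testing scheme used in the talk, and the weights, stage sizes, and p-values are hypothetical.

```python
# Sketch of an inverse-normal combination of two independent stage-wise p-values
# (illustrative building block; not necessarily the exact test in this talk).
# Stage 1 is the informational cohort, stage 2 the remaining patients evaluated
# on the endpoint selected at the interim.
import numpy as np
from scipy import stats

def inverse_normal_combination(p1, p2, w1, w2):
    """Combine two independent one-sided p-values with pre-specified weights
    satisfying w1**2 + w2**2 = 1."""
    z = w1 * stats.norm.isf(p1) + w2 * stats.norm.isf(p2)
    return stats.norm.sf(z)

# Hypothetical weights proportional to the square roots of planned stage sizes.
w1, w2 = np.sqrt(40 / 120), np.sqrt(80 / 120)

p_stage1 = 0.20   # hypothetical p-value for the selected endpoint, informational cohort
p_stage2 = 0.01   # hypothetical p-value for the same endpoint, remaining patients

p_comb = inverse_normal_combination(p_stage1, p_stage2, w1, w2)
print(f"combined one-sided p-value = {p_comb:.4f}")
```

In an actual adaptive-endpoint design, the combined p-value for the selected endpoint would additionally be evaluated within the partition/closed testing structure so that the FWER is controlled across all candidate endpoints.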


S2.2: Evaluation of Biomarker Assessments in Conjunction with Clinical Endpoints for Early Decision Making in Rare Disease Drug Development

Carina Ittrich, Boehringer Ingelheim

Biomarkers are playing an increasing role in clinical drug development to characterize the mode of action of new compounds and to derive early signs of efficacy and safety, especially in rare disease areas. To assist with early decision making, it is necessary to demonstrate that short-term effects on biomarkers are very likely to translate into clinically meaningful efficacy on later clinical endpoints. Besides demonstrating the prognostic value of biomarkers, investigating their associations with clinical endpoints is important. We proposed applying surrogacy evaluation methods and mediation analyses to existing data from clinical or translational science studies to establish this needed disease link. We defined multidimensional decision rules based on a combination of cellular, molecular, or imaging biomarkers with clinical endpoints for early decision making in the development of new treatments. Multiplicity aspects as well as the incorporation of existing evidence have been considered in deriving the decision boundaries. Despite operational and computational challenges, the integration of biomarker information into early decision making provides a great opportunity to expand our insights into rare diseases and to deliver new efficacious compounds for these patients.
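As an illustration of the mediation-analysis component mentioned above, the sketch below estimates how much of a simulated treatment effect on a clinical endpoint is transmitted through a short-term biomarker using simple linear regressions; the data, variable names, and effect sizes are hypothetical, and the talk's actual surrogacy and decision-rule methodology is more elaborate.

```python
# Hypothetical mediation-analysis sketch: how much of a simulated treatment
# effect on a clinical endpoint is transmitted through a short-term biomarker.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
t = rng.integers(0, 2, n)                       # treatment indicator
m = 0.8 * t + rng.normal(0, 1, n)               # biomarker change (mediator)
y = 0.5 * m + 0.2 * t + rng.normal(0, 1, n)     # clinical endpoint

df = pd.DataFrame({"t": t, "m": m, "y": y})
a = smf.ols("m ~ t", df).fit().params["t"]      # treatment -> biomarker
fit_y = smf.ols("y ~ t + m", df).fit()
b = fit_y.params["m"]                           # biomarker -> endpoint, given treatment
direct = fit_y.params["t"]                      # direct (non-mediated) effect
indirect = a * b                                # effect mediated through the biomarker
print(f"indirect={indirect:.2f}, direct={direct:.2f}, "
      f"proportion mediated={indirect / (indirect + direct):.2f}")
```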


S2.3: Composite Endpoint in Cell/Gene Therapy Development

Yaohua Zhang and Bang Wang, Vertex

Evaluating the efficacy of a treatment requires a comprehensive analysis of both its benefits and potential harms, typically measured using multiple outcome variables. The prevailing approach clinicians use to synthesize these data into a single metric for decision-making is the construction of a composite endpoint. This involves weighing various outcomes based on their clinical importance. For instance, in cardiovascular outcome trials (CVOTs), fatal events such as death generally receive higher priority than non-fatal outcomes such as hospitalization for worsening heart failure. While this approach simplifies interpretation, it is often based on the assumption that certain outcomes are inherently more important than others. However, this assumption of unequal importance can introduce interpretative complexities when it is not clinically justified. Constructing a hierarchical composite endpoint under such circumstances can distort the real-world implications of treatment effects and potentially lead to misleading conclusions. To address these limitations, we propose a novel approach that constructs standardized or normalized composite endpoints under the assumption that multiple continuous outcomes are equally clinically relevant and pertain to the same domain. To validate the effectiveness of our proposed methods, we conducted numerical studies across various settings. Our findings suggest that our approaches offer a simple and robust interpretation of treatment effects when multiple outcomes of equal clinical importance are involved.
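A minimal sketch of the general idea of a standardized composite of equally important continuous outcomes follows; it is only an illustration of the concept, with hypothetical data and a plain equal-weight z-score average, and should not be read as the authors' exact construction.

```python
# Hypothetical sketch of a standardized (equal-weight) composite of two
# continuous outcomes from the same clinical domain.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 80
# Two outcomes on different scales; treatment improves both.
out1 = np.concatenate([rng.normal(12, 4, n), rng.normal(10, 4, n)])      # trt, then ctl
out2 = np.concatenate([rng.normal(250, 60, n), rng.normal(220, 60, n)])
arm = np.array(["trt"] * n + ["ctl"] * n)

def zscore(x):
    """Standardize to mean 0, SD 1 over the pooled sample."""
    return (x - x.mean()) / x.std(ddof=1)

composite = (zscore(out1) + zscore(out2)) / 2    # equal clinical importance by construction
res = stats.ttest_ind(composite[arm == "trt"], composite[arm == "ctl"])
effect = composite[arm == "trt"].mean() - composite[arm == "ctl"].mean()
print(f"composite effect = {effect:.2f} SD units, p = {res.pvalue:.3f}")
```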


S3.1: Challenges of Mucopolysaccharidosis II (MPS II) drug development and a path to accelerating pediatric rare disease drug development with novel endpoints

Yoonjin Cho, Regenxbio

Mucopolysaccharidosis type II (MPS II) is an X-linked lysosomal storage disorder caused by deficiency of iduronate-2-sulfatase, with an estimated incidence of between 0.3 and 0.71 per 100,000 births. The neuronopathic form of MPS II is progressive and causes central nervous system dysfunction and neurodevelopmental delay in children. Approximately two-thirds of patients with MPS II have progressive cognitive impairment. The current standard of care, enzyme replacement therapy, does not cross the blood–brain barrier at therapeutic concentrations and does not prevent cognitive decline in the neuronopathic form of MPS II. Drug development for MPS II presents several challenges. The diagnosis of neuronopathic MPS II often relies on the demonstration of neurodevelopmental decline, which may vary from patient to patient in degree of severity and timing of onset. The disease trajectory of MPS II is heterogeneous; the natural history of neuronopathic MPS II is not well characterized, and furthermore, the cognitive assessment tools and scores used to characterize disease severity and track changes over time are not consistently applied and lack methodological characterization. This presentation will lay out some of the challenges of MPS II drug development and efforts to overcome them. Data from a retrospective natural history study will be presented within a statistical framework, along with how these data could be used to form response criteria, support hypothesis testing, and evaluate the effectiveness of treatment as early as possible.


S3.2: Endpoints Selection and Efficacy Evaluation for Clinical Development in Rare Disease

Bo Huang, Pfizer

The clinical development of novel therapies in rare diseases is often faced with challenges, such as a lack of understanding of the disease’s natural history, the selection or development of clinically meaningful endpoints, benefit-risk assessment in small populations, and the ability to conduct adequate and well-controlled clinical trials. We will first go over some of the existing methods to address the general issues in rare diseases, and for the rest of the presentation focus on how to analyze binary outcomes that are often developed to measure the treatment effect that indicates response to the therapy. Commonly used summary measures for response include the cumulative and the current response rate at a specific time point. The current response rate is sometimes referred to as the “probability-of-being-in-response” (PBIR), which regards a patient as a responder only if he/she has achieved and remains in response at present. The methods utilized in practice for estimating these rates, however, may not be appropriate. Moreover, while an effective treatment is expected to achieve a rapid and sustained response, the response at a fixed time point does not provide information about the duration of response. As an alternative, one may consider a curve constructed from the current response rates over the entire study period, which can be used to visualize how rapidly patients responded to therapy and how long responses were sustained. The area under the probability-of-being-in-response curve is the mean duration of response. This connection between response and duration of response makes the curve attractive for assessing the treatment effect. In contrast to the conventional method for analyzing duration-of-response data, which uses responders only, the above procedure includes all comers in the study. We will go over the statistical methodology and provide illustrative examples.
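The sketch below illustrates the PBIR curve and its connection to the mean duration of response on simulated all-comer data; for simplicity it assumes complete follow-up with no censoring, whereas the methodology discussed in the talk properly handles censored response and progression times.

```python
# Simplified PBIR sketch on simulated all-comer data, assuming complete follow-up
# (no censoring) over 24 months.
import numpy as np

rng = np.random.default_rng(3)
n = 100
responded = rng.binomial(1, 0.6, n).astype(bool)               # ~60% ever respond
t_onset = np.where(responded, rng.uniform(0, 3, n), np.inf)    # months to response onset
duration = np.where(responded, rng.exponential(6, n), 0.0)     # months in response
t_end = t_onset + duration

grid = np.linspace(0, 24, 241)
# PBIR(t): proportion of ALL patients (responders and non-responders) in response at t.
pbir = np.array([np.mean((t_onset <= t) & (t < t_end)) for t in grid])

# Area under the PBIR curve (trapezoidal rule) = mean duration of response,
# restricted here to the 24-month window.
mean_dor = np.sum((pbir[1:] + pbir[:-1]) / 2 * np.diff(grid))

idx6 = np.argmin(np.abs(grid - 6))
print(f"current response rate at 6 months: {pbir[idx6]:.2f}")
print(f"restricted mean duration of response (all comers): {mean_dor:.1f} months")
```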


S3.3: Design considerations in expanding lines of therapy for unmet medical need

Xin Wang, Bristol Myers Squibb

Despite therapeutic advancements in hematology treatments, an unmet need for therapies that generate durable responses remains. Chimeric antigen receptor T-cell (CAR-T) products have demonstrated efficacy and manageable safety in hematologic malignancies. It is a common development strategy to first develop indications in later lines of therapy and subsequently expand to earlier lines.

Because patients in earlier lines of hematologic diseases usually have longer median survival times, a large sample size and long trial duration are usually required to adequately power an early-line study. While exploring line-of-therapy expansion, one possible approach is to design an overall trial that includes both late-line and earlier-line patients. The trial may be powered for the overall population while maintaining a sufficient sample size to demonstrate consistency of treatment effect in earlier lines. Different consistency criteria are discussed, along with analysis and operational considerations to maintain the feasibility of overall trial conduct.
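As one hypothetical example of what such a criterion might look like (the specific rule and the 50% threshold below are assumptions, not necessarily those discussed in the talk), a simple retention-of-effect check on the log hazard-ratio scale is sketched here.

```python
# Hypothetical retention-of-effect consistency check on the log hazard-ratio scale
# (the rule and the 50% threshold are assumptions for illustration).
import numpy as np

def retains_effect(loghr_overall, loghr_subgroup, fraction=0.5):
    """True if the earlier-line subgroup retains at least `fraction` of the
    overall log hazard-ratio (both estimates assumed negative, i.e., benefit)."""
    return loghr_subgroup <= fraction * loghr_overall

loghr_overall = np.log(0.70)   # hypothetical overall estimate, HR = 0.70
loghr_early = np.log(0.85)     # hypothetical earlier-line subgroup estimate
print(retains_effect(loghr_overall, loghr_early))   # False: HR 0.85 retains < 50% of the effect
```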


S4.1: SAM: Self-adapting Mixture Prior to Dynamically Borrow Information from Historical Data for Rare Disease Clinical Trials

Ying Yuan, University of Texas MD Anderson Cancer Center

Utilizing historical data represents a pivotal strategy for tackling the inherent accrual challenge of rare diseases. Mixture priors provide an intuitive way to incorporate historical data while accounting for potential prior-data conflict by combining an informative prior with a non-informative prior. However, pre-specifying the mixing weight for each component remains a crucial challenge. Ideally, the mixing weight should reflect the degree of prior-data conflict, which is often unknown beforehand, posing a significant obstacle to the application and acceptance of mixture priors. To address this challenge, we introduce self-adapting mixture (SAM) priors that determine the mixing weight using likelihood ratio test statistics. SAM priors are data-driven and self-adapting, favoring the informative (non-informative) prior component when there is little (substantial) evidence of prior-data conflict. Consequently, SAM priors achieve dynamic information borrowing. We demonstrate that SAM priors exhibit desirable properties in both finite and large samples and achieve information-borrowing consistency. Moreover, SAM priors are easy to compute, data-driven, and calibration-free, mitigating the risk of data dredging. Numerical studies show that SAM priors outperform existing methods in handling prior-data conflicts effectively. We developed the R package "SAMprior" and a web application, freely available at CRAN and www.trialdesign.org, to facilitate the use of SAM priors.

Joint work with Peng Yang (Rice University), Yuansong Zhao (The University of Texas Health Science Center), Lei Nie (FDA), Jonathon Vallejo (FDA)
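For intuition only, the sketch below computes a likelihood-ratio-based mixing weight for a binomial endpoint in the spirit described above; the exact weight definition used by SAM priors is given in the authors' paper and implemented in the SAMprior R package, so this fragment should be read as a rough illustration of the mechanism, with hypothetical rates and a hypothetical clinically significant difference delta.

```python
# Rough illustration of a likelihood-ratio-based mixing weight for a binomial
# endpoint (see the SAMprior package/paper for the exact SAM definition).
import numpy as np
from scipy import stats

def sam_like_weight(x, n, p_hist, delta):
    """Weight on the informative component: compare the likelihood of the current
    control data at the historical rate versus at a rate shifted by +/- delta."""
    p_shift = np.clip([p_hist - delta, p_hist + delta], 1e-6, 1 - 1e-6)
    lik_hist = stats.binom.pmf(x, n, p_hist)
    lik_shift = max(stats.binom.pmf(x, n, p_shift[0]),
                    stats.binom.pmf(x, n, p_shift[1]))
    lr = lik_hist / lik_shift
    return lr / (1 + lr)

p_hist = 0.30    # hypothetical response rate in historical controls
delta = 0.10     # hypothetical clinically significant difference
print(sam_like_weight(x=9, n=30, p_hist=p_hist, delta=delta))   # consistent with history -> larger weight
print(sam_like_weight(x=3, n=30, p_hist=p_hist, delta=delta))   # clear conflict -> weight near 0
```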


S4.2: A Bayesian Nonparametric Model to Create Synthetic Matching Populations

Peter Mueller, University of Texas at Austin

We propose a model-based approach using nonparametric Bayesian common atoms models to create synthetic matching populations from available data. The model and method are developed in the context of single-arm, treatment-only clinical trials. The single-arm cohort is complemented by a synthetic control arm created from readily available external data in the form of electronic health records (EHR). Although randomized clinical trials (RCTs) remain the gold standard for approvals by regulatory agencies, the increasing availability of such real-world data has opened opportunities to supplement increasingly expensive and difficult-to-carry-out RCTs with evidence from readily available real-world data.

S4.3: Leveraging Real-World Data and Real-World Evidence in Clinical Trial Design and Analysis for Rare Diseases

Chenguang Wang, Regeneron

Incorporating real-world data (RWD) is more widely accepted by regulatory agencies in cases of rare diseases, but the RWD must still undergo appropriate analysis to derive the right real-world evidence (RWE) and support regulatory decisions. Recently, methods leveraging external RWD in clinical trial design and analysis in the context of regulatory decision-making have been proposed. These methods use propensity scores or entropy balancing to pre-select a subset of RWD patients who are similar to those in the investigational study. In this presentation, we will review the methods and examine the implicit causal assumptions they require.
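As an illustration of the pre-selection step (one of the two balancing approaches mentioned; entropy balancing is not shown), the sketch below fits a propensity score for trial membership on hypothetical covariates and keeps only the external RWD patients whose scores overlap with the trial range. The covariates, trimming rule, and sample sizes are assumptions for illustration.

```python
# Hypothetical propensity-score pre-selection of external RWD patients
# (entropy balancing, the other option mentioned above, is not shown).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n_trial, n_rwd = 50, 500
# Two assumed baseline covariates; RWD patients are older/sicker on average.
x_trial = rng.normal([60, 1.0], [8, 0.3], size=(n_trial, 2))
x_rwd = rng.normal([68, 1.4], [10, 0.5], size=(n_rwd, 2))

x = np.vstack([x_trial, x_rwd])
in_trial = np.r_[np.ones(n_trial), np.zeros(n_rwd)]

# Propensity score of "being in the investigational study".
ps = LogisticRegression(max_iter=1000).fit(x, in_trial).predict_proba(x)[:, 1]
lo, hi = ps[:n_trial].min(), ps[:n_trial].max()       # range observed in the trial
keep = (ps[n_trial:] >= lo) & (ps[n_trial:] <= hi)    # trim non-overlapping RWD patients
print(f"selected {keep.sum()} of {n_rwd} RWD patients as the external comparator pool")
```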


S4.4: An extended Bayesian divide-and-conquer analysis approach for hybrid control trials

Jian Zhu, Servier

Recently, various methodologies have been proposed that utilize propensity scores and Bayesian dynamic borrowing methods to leverage real-world data (RWD) in hybrid control trials. After evaluating common caveats of such frameworks, we extended the borrowing-by-parts power prior with novel plausibility indexes to better control borrowing. This is particularly useful when there are temporal effects between the trial and the external control, and it is suitable for rare diseases. A simulation study demonstrates that the proposed method performs well and is more robust to violations of model assumptions.
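For background, the conventional (conditional) power prior that borrowing-by-parts constructions generalize can be written as follows; the schematic second line only conveys the idea of discounting separate parts of the external-control likelihood with their own powers and is not the authors' exact formulation (in particular, the plausibility indexes are not shown).

```latex
% Conventional (conditional) power prior: a_0 in [0,1] discounts the external data D_0.
\[
  \pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\, \pi_0(\theta),
  \qquad 0 \le a_0 \le 1 .
\]
% Schematically, a borrowing-by-parts variant discounts separate factors of the
% external likelihood with their own powers:
\[
  \pi(\theta \mid D_0, a_{01}, a_{02}) \;\propto\;
  L_1(\theta \mid D_0)^{a_{01}}\, L_2(\theta \mid D_0)^{a_{02}}\, \pi_0(\theta).
\]
```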


S5.1: A causal mediation model to evaluate the individual surrogacy of plasma neurofilament light chain (NfL) for SOD1 ALS using tofersen data

Peng Sun, Biogen

In April 2023, tofersen was approved for the treatment of SOD1-ALS, a devastating, uniformly fatal, and ultra-rare genetic form of ALS, with approximately 330 people in the US living with the disease. Tofersen is the first approved treatment that targets a genetic cause of ALS. This indication was approved under accelerated approval based on the reduction in plasma neurofilament light chain (NfL) observed in patients treated with tofersen. The accelerated approval required the establishment of plasma NfL as a surrogate endpoint reasonably likely to predict clinical benefit. Neurofilaments are proteins released from neurons when they are damaged, and they are a marker of neurodegeneration. We built a statistical model with a causal inference component to assess the relationship between early tofersen-induced reduction in plasma NfL at Week 16 and slowing of clinical progression over time on an individual basis. The model deconstructs the observed change from baseline in the clinical endpoint for a tofersen-treated participant into three components: change due to natural disease progression, change due to the tofersen effect through the plasma NfL pathway, and change due to the tofersen effect through non-biomarker pathways/factors. Many factors were considered in FDA’s evaluation, in conjunction with the observed efficacy and safety data from the pivotal study and its long-term extension, to support the accelerated approval. These in general include mechanistic, scientific, and empirical evidence. The causal inference model constitutes the key empirical evidence demonstrating the correlation between the observed reduction in plasma NfL and a reduction in the decline of clinical outcomes, and it was extensively evaluated by the FDA. In this talk, we will discuss the motivation, construction, and results of the causal inference model. In addition, we will discuss how the proposed model can be framed in the causal mediation analysis framework.
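In schematic notation (the symbols are ours, not the authors'), the decomposition described above can be written as:

```latex
\[
  \Delta Y_{\text{obs}}
  \;=\;
  \underbrace{\Delta Y_{\text{natural}}}_{\text{natural disease progression}}
  \;+\;
  \underbrace{\Delta Y_{\text{NfL}}}_{\text{tofersen effect via plasma NfL}}
  \;+\;
  \underbrace{\Delta Y_{\text{other}}}_{\text{tofersen effect via non-biomarker pathways}} .
\]
```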


S5.2: The use of balancing methods in ALS real-world settings and beyond

Marie-Abèle Bind, Massachusetts General Hospital Biostatistics Center

For years, researchers have proposed minimizing patient burden by using historical control data to reduce the number of concurrent control RCT participants (e.g., in ALS research). We will use Dorn's advice via Cochran (1965): How would the study be conducted if it were possible to do it by controlled experimentation? We will show how to construct hypothetical randomized experiments using concurrent controls. This last setting has been referred to as a "paired availability design for historical controls" (Baker et al., 2001). We will capitalize on a rich causal inference literature (e.g., Cochran, Rubin, Freedman, Rosenbaum, Stuart, Hernan), for which the big idea is to embed an observational study into a hypothetical RCT (or emulate an RCT). The operational strategy to assess causality from a non-randomized data set (Bind and Rubin, 2019, 2020, 2021) consists of multiple steps: (i) a conceptual stage, (ii) a design stage, (iii) a statistical analysis stage, (iv) a sensitivity analysis stage, and (v) a summary stage. The conceptual stage involves the precise formulation of the causal question using potential outcomes and a hypothetical intervention, for which the treatment is randomly assigned to participants given background covariates. This description includes the timing of the random assignment and defines the target population. This conceptual stage demands subject-matter knowledge and careful scientific argumentation to make the embedding plausible to scientific readers. The design stage attempts to reconstruct (or approximate) the design of a randomized experiment before any outcome data are observed. Matching (e.g., Cochran and Rubin, 1973; Rosenbaum and Rubin, 1983; Stuart, 2010) is one way to accomplish covariate balance between the treated and control groups.
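To illustrate the design-stage matching step described above, the sketch below performs greedy 1:1 nearest-neighbor matching on an estimated propensity score using hypothetical covariates, before any outcome data would be examined; it is a generic illustration of matching, not the specific design used in the ALS application.

```python
# Generic design-stage sketch: greedy 1:1 nearest-neighbor matching on an
# estimated propensity score, using hypothetical covariates and no outcome data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_t, n_c = 40, 200
x_t = rng.normal([55, 25], [10, 5], size=(n_t, 2))    # hypothetical covariates, treated
x_c = rng.normal([60, 22], [12, 6], size=(n_c, 2))    # hypothetical covariates, control pool

x = np.vstack([x_t, x_c])
z = np.r_[np.ones(n_t), np.zeros(n_c)]
ps = LogisticRegression(max_iter=1000).fit(x, z).predict_proba(x)[:, 1]
ps_t, ps_c = ps[:n_t], ps[n_t:]

# Match each treated patient to the nearest available control on the propensity score.
available = np.ones(n_c, dtype=bool)
matches = []
for i in np.argsort(-ps_t):                 # hardest-to-match treated patients first
    d = np.abs(ps_c - ps_t[i])
    d[~available] = np.inf
    j = int(np.argmin(d))
    available[j] = False
    matches.append((i, j))

matched_controls = x_c[[j for _, j in matches]]
print("treated covariate means:        ", x_t.mean(axis=0).round(1))
print("matched control covariate means:", matched_controls.mean(axis=0).round(1))
```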


S5.3: Conditional and unconditional analyses for Bayesian historical data borrowing

Xiaodong Luo, Sanofi

It is often important, and required by regulatory agencies, to show the operating characteristics when using Bayesian methods for data borrowing in confirmatory trials. The type-1 error rate in this scenario can be classified into two types: conditional and unconditional. The conditional type-1 error rate is computed conditioning on the observed data to be borrowed, whereas the unconditional type-1 error rate is calculated without conditioning on those data. We will illustrate that if the purpose is to control the conditional type-1 error rate, the Bayesian data borrowing approach reduces to the frequentist approach. If the purpose is to control the unconditional type-1 error rate, additional assumptions are needed to control the error rate within a “reasonable” range of scenarios. We hope the classification of the two types of type-1 error rates can help further the understanding of Bayesian methods in clinical trials.
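The toy simulation below (all numbers hypothetical) contrasts the two error rates for a simple discounted-borrowing rule on a single-arm binary endpoint: the conditional rate holds the historical data fixed at an observed value that conflicts with the null, while the unconditional rate also re-draws the historical data under an assumed model.

```python
# Toy contrast of conditional vs. unconditional type-1 error for a simple
# discounted-borrowing rule on a single-arm binary endpoint (all numbers hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
p0 = 0.30                    # null response rate; also the true rate under H0
n_hist, n_cur = 100, 50
a0 = 0.5                     # fixed discount applied to the historical data
n_sim = 20_000

def reject(x_hist, x_cur):
    """Declare success if Pr(p > p0 | data) > 0.975 under a Beta prior built
    from the discounted historical data."""
    a = 1 + a0 * x_hist + x_cur
    b = 1 + a0 * (n_hist - x_hist) + (n_cur - x_cur)
    return stats.beta.sf(p0, a, b) > 0.975

# Conditional type-1 error: historical data fixed at an optimistic observed value.
x_hist_obs = 40              # 40/100 observed historically, above the null rate
cond = np.mean([reject(x_hist_obs, rng.binomial(n_cur, p0)) for _ in range(n_sim)])

# Unconditional type-1 error: historical data re-drawn under an assumed null model.
uncond = np.mean([reject(rng.binomial(n_hist, p0), rng.binomial(n_cur, p0))
                  for _ in range(n_sim)])
print(f"conditional type-1 error ~ {cond:.3f}, unconditional ~ {uncond:.3f}")
```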


S5.4: Use of win statistics (win ratio, win odds and net benefit) for hierarchical analysis of clinical endpoints with low event rates and surrogate endpoints

Gaohong Dong, BeiGene

In many therapeutic areas, it is challenging to design a clinical trial with a feasible sample size and adequate statistical power using clinical endpoints, primarily because the event rates of these clinical outcomes are too low (e.g., mortality rates in diabetes studies). It is not uncommon for surrogate endpoints to be used together with the clinical endpoints to construct a composite endpoint, so that adequate statistical power can be achieved with a feasible sample size. In this talk, we will show how, using win statistics (win ratio, win odds, and net benefit), clinical endpoints and surrogate endpoints can be analyzed hierarchically in order of clinical importance. These statistics are based on pairwise comparisons. Each pairwise comparison starts with the most important outcome (e.g., death); less important endpoints (e.g., hospitalization) are considered only if the higher-priority outcomes do not result in a win. Unlike a conventional composite endpoint of outcomes of the same data type (e.g., all time-to-event outcomes), we will show that win statistics allow a composite of multiple endpoints of the same or mixed data types (e.g., time-to-event, continuous, ordinal). Recurrent and repeated events can also be incorporated. This flexibility particularly enables the hierarchical analysis of clinical endpoints with low event rates and surrogate endpoints. We will use real clinical trials (e.g., a liver transplant Phase III study and heart failure Phase III studies) to demonstrate the pros and cons of such innovative analyses with win statistics and how the results can be interpreted.
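The small example below (hypothetical data) spells out the pairwise "win" logic for a two-level hierarchy with a mortality indicator first and a continuous surrogate second; real analyses of time-to-event outcomes additionally restrict comparisons to shared follow-up and handle censoring, which this sketch ignores.

```python
# Hypothetical two-level hierarchy: mortality first, then a continuous surrogate.
# (Real time-to-event analyses also restrict comparisons to shared follow-up.)
treatment = [(0, 5.0), (0, 1.0), (1, -2.0), (0, 3.0)]   # (died, surrogate change; higher = better)
control   = [(0, 2.0), (1, -1.0), (1, -3.0), (0, 4.0)]

wins = losses = ties = 0
for died_t, surr_t in treatment:
    for died_c, surr_c in control:
        if died_t != died_c:                 # priority 1: mortality decides the pair
            wins += died_t < died_c
            losses += died_t > died_c
        elif surr_t != surr_c:               # priority 2: surrogate decides the pair
            wins += surr_t > surr_c
            losses += surr_t < surr_c
        else:
            ties += 1

n_pairs = len(treatment) * len(control)
print(f"win ratio   = {wins / losses:.2f}")
print(f"win odds    = {(wins + 0.5 * ties) / (losses + 0.5 * ties):.2f}")
print(f"net benefit = {(wins - losses) / n_pairs:.2f}")
```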


S6.1: An Exploration of Model-Based Dose-Finding Algorithms for Use in Dose-Finding Gene Therapy Trials

Kevin Roberts, Pfizer

Quantitative dose escalation in early phase clinical trials has been expanding across drug modalities and disease indications in response to the demonstrable deficiencies of non-quantitative or algorithmic methods of dose progression. In this presentation, we describe the state of the art for quantitative dose escalation with an eye to applications in cell and gene therapy modalities. The application of quantitative dose progression methods in this setting poses unique challenges due to the small sample sizes and often small number of dose levels under study. The regulatory, scientific, medical, and statistical environment has advanced to the point where new guidance is needed to help characterize the safety profile of these powerful products, so that appropriate decisions are made to protect patient safety and support the development of gene therapy drugs. I will discuss the current use of quantitative dose escalation, the limitations of these options for gene therapy indications, and offer recommendations and examples of new and existing models appropriate for the gene therapy space.
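As one example of a quantitative, model-based escalation rule of the kind surveyed here (the model, doses, prior, and decision rule below are all hypothetical), a one-parameter Bayesian dose-toxicity model can be updated on a grid and the next dose chosen to keep the estimated DLT probability near a target:

```python
# Hypothetical one-parameter Bayesian dose-toxicity model, updated on a grid.
import numpy as np
from scipy import stats

doses = np.array([1e12, 3e12, 1e13, 3e13])      # hypothetical vector-genome doses
skeleton = np.array([0.05, 0.12, 0.25, 0.40])   # prior guesses of DLT probability
target = 0.25                                   # target DLT probability

# Power model: p_tox(dose k) = skeleton[k] ** exp(theta), theta ~ N(0, 1.34).
theta = np.linspace(-4, 4, 801)
prior = stats.norm.pdf(theta, 0, np.sqrt(1.34))

def posterior_tox(n_per_dose, dlt_per_dose):
    """Posterior-mean DLT probability at each dose, given counts per dose."""
    ptox = skeleton[None, :] ** np.exp(theta)[:, None]            # grid x doses
    loglik = np.sum(dlt_per_dose * np.log(ptox)
                    + (n_per_dose - dlt_per_dose) * np.log1p(-ptox), axis=1)
    w = prior * np.exp(loglik - loglik.max())
    w /= w.sum()
    return w @ ptox

# Hypothetical data so far: 3 patients at dose 1 (0 DLTs), 3 at dose 2 (1 DLT).
n = np.array([3, 3, 0, 0])
x = np.array([0, 1, 0, 0])
est = posterior_tox(n, x)
next_dose = int(np.argmin(np.abs(est - target)))
print("posterior DLT probabilities:", est.round(2), "-> next dose index:", next_dose)
```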


S6.2: The Temporally-resolved Connectedness of Genetics, Imaging Biomarkers, and Clinical Endpoints in Alzheimer's disease

Ixavier A. Higgins, Eli Lilly and Company

Bayesian networks (BNs) are an approach for estimating the presence and direction of influence among several variables of interest. In recent years, static Bayesian networks have found wide application in drug discovery efforts, identifying established and novel pathways between genes from cross-sectional human gene expression data. A major limitation of the static Bayesian network is that its directed acyclic graphical structure does not allow the feedback loops that have been shown to occur naturally in biological systems. Dynamic Bayesian networks (DBNs) have arisen as a temporally resolved extension of the static setting in which a variable's current value or state can directly or indirectly influence its observations in the future. We explore the utility of the DBN in an examination of longitudinally observed genetics, imaging biomarkers, and cognitive endpoints in Alzheimer's disease. This fully data-driven approach connects aspects of the established Amyloid/Tau/Neurodegeneration framework to downstream changes in cognition and function as the disease progresses. Further, we evaluate the potential to use estimated DBNs to simulate functional and cognitive outcomes resulting from hypothetical therapeutic manipulation of upstream biomarkers.
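As a toy illustration of the last point, the sketch below simulates a small, entirely hypothetical linear-Gaussian dynamic network (amyloid driving tau, tau driving neurodegeneration, neurodegeneration lowering cognition) and compares cognitive trajectories with and without a hypothetical upstream reduction in amyloid accumulation; the structure and coefficients are invented for illustration and are not the fitted DBN from the talk.

```python
# Toy, entirely hypothetical linear-Gaussian dynamic network:
# amyloid -> tau -> neurodegeneration -> cognition, unrolled over visits.
import numpy as np

rng = np.random.default_rng(42)

def simulate(n_visits=8, amyloid_reduction=0.0):
    """Return the cognition trajectory; `amyloid_reduction` mimics a hypothetical
    therapeutic manipulation of the upstream biomarker."""
    amyloid, tau, ndeg, cog = 1.0, 0.5, 0.2, 0.0
    traj = []
    for _ in range(n_visits):
        amyloid = max(amyloid + 0.10 - amyloid_reduction + rng.normal(0, 0.02), 0.0)
        tau = tau + 0.15 * amyloid + rng.normal(0, 0.02)     # amyloid drives tau
        ndeg = ndeg + 0.20 * tau + rng.normal(0, 0.02)       # tau drives neurodegeneration
        cog = cog - 0.30 * ndeg + rng.normal(0, 0.05)        # neurodegeneration lowers cognition
        traj.append(cog)
    return traj

natural = simulate()
treated = simulate(amyloid_reduction=0.15)   # hypothetical upstream intervention
print(f"cognition at last visit: natural {natural[-1]:.2f}, intervened {treated[-1]:.2f}")
```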


S6.3: AI/ML for drug discovery

Haoda Fu, Eli Lilly and Company

AI and ML are revolutionizing drug discovery, particularly in the area of de novo drug design. These tools can predict the properties of potential drug candidates and identify promising drug targets by analyzing large amounts of data from various sources. Using AI and ML, researchers can generate new chemical entities with optimized drug-like properties. De novo drug design using AI and ML has led to the discovery of new treatments for cancer, infectious diseases, and metabolic disorders. However, challenges remain, such as the need for high-quality data and the exploration of larger chemical spaces. Despite these challenges, AI and ML have the potential to transform the way new drugs are designed and developed, accelerating the drug discovery process and bringing new treatments to patients faster than ever before.