Biostatistics and reporting are a critical part of any regulated clinical trial. We have mastered the use of RStudio in combination with SAS (for 21 CFR Part 11-compliant work) to support Phase I and Phase II clinical trials as Statistical Programmers and Regulatory Biostatisticians.

We can support these major areas:

  1. Study design and protocol development
  2. Statistical Consulting
  3. Sample Size Calculations
  4. Randomization Schemas
  5. Statistical Analysis Plans (SAP)
  6. Data Safety and Monitoring Board (DSMB) Support

Study Design and Protocol Development

We have the experience and knowledge to develop optimal trial designs and write protocols for clinical development programs. From Phase I/II pharmacokinetic (PK) and pharmacodynamic (PD) studies and early-phase dose-finding safety and efficacy studies to Phase II trials, we can address your biostatistics needs and recommend the best solutions for your study.

We can provide oversight of clinical development programs as members of clinical/scientific advisory boards, clinical trial steering committees and DSMBs. We have supported submissions to the FDA as part of our routine duties in the Army, participated in Investigational New Drug (IND) and Investigational Device Exemption (IDE) protocol discussions, and presented results of previous studies.

Statistical Consulting

In a consulting role, we support study planning, protocol development, and clinical advisory groups with drug development, scientific, statistical, and trial-design consulting. We also provide literature reviews, re-analysis/meta-analysis of reference studies, review and analysis of preclinical (toxicity, PK) studies, and analyses in support of the safety and efficacy sections of Investigator’s Brochures (IBs). In addition to simulation studies, we provide study design, sample size/power calculations, statistical considerations, and randomization, including adaptive methods. Finally, we can represent or support clients at IND and other pre-study regulatory meetings.

Sample Size Calculations

During the planning stages of a clinical study, we’ll work with your team to establish the hypotheses of interest that reflect the study objectives under the study design, per SOP. Based on the hypotheses, we will:

  1. Determine the number of subjects needed to detect a clinically meaningful difference at the required level of power
  2. Discuss the trade-off in power when budget constrains the number of subjects
  3. If prior knowledge about the endpoints of interest is limited or unreliable, design the study to allow for interim analyses and sample size re-estimation so that your study is neither under- nor overpowered
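As a minimal sketch of step 1, the standard normal-approximation formula for comparing two means can be computed with nothing beyond the Python standard library; the effect size and variance below are illustrative, not from any particular study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison
    of means: detect a difference `delta` with common SD `sigma`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = z.inv_cdf(power)           # required power
    return ceil(2 * ((z_alpha + z_beta) ** 2) * sigma ** 2 / delta ** 2)

# Detect a 0.5-SD difference at 80% power, two-sided alpha = 0.05
print(n_per_group(delta=0.5, sigma=1.0))  # → 63 per group
```

In practice the calculation is refined for the planned test (e.g. a t-test correction), but the trade-offs described above — power versus number of subjects — are already visible by varying `delta`, `sigma`, and `power` here.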

Randomization Schemas

We can assist Sponsors with developing randomization schemas that minimize the variability of the parameter of interest and avoid confounding from other factors, per SOP. We are experienced in the following types of randomization:

  1. Simple: randomize each subject independently to a treatment group, without regard to previous assignments
  2. Block: randomize subjects into treatment groups in blocks to ensure balance in sample size across groups over time. The block size is a multiple of the number of treatment groups.
  3. Stratified: randomize subjects within strata to achieve balance among the treatment groups with respect to covariates, e.g. subjects’ baseline characteristics.
  4. Dynamic: also known as adaptive randomization. An example is the minimization method in which subjects are randomized to a particular treatment group based on specific covariates and previous assignments of subjects.
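Permuted-block randomization (type 2 above) can be sketched in a few lines; the seed, arm labels, and block size are illustrative parameters, and a production schedule would be generated under a validated, documented process.

```python
import random

def block_randomization(n_subjects, treatments=("A", "B"), block_size=4, seed=2024):
    """Permuted-block randomization: each block contains every treatment
    equally often, keeping group sizes balanced over time."""
    assert block_size % len(treatments) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)  # fixed seed so the schedule is reproducible
    schedule = []
    while len(schedule) < n_subjects:
        block = list(treatments) * (block_size // len(treatments))
        rng.shuffle(block)  # random order within the block
        schedule.extend(block)
    return schedule[:n_subjects]

print(block_randomization(8))
```

Because every complete block is balanced, group sizes never differ by more than half a block at any point in enrollment.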

Statistical Analysis Plans

We have the knowledge and experience to write SAPs and develop the corresponding tables, listings and figures (TLF) shells based on the protocol and electronic case report forms (eCRFs), per SOP. SAPs can be written for:

  1. Interim analysis
  2. Final/main analysis
  3. DSMB meetings
  4. Manuscripts
  5. Integrated Summaries of Safety and Efficacy (ISS/ISE) analysis
  6. Post-hoc/exploratory analysis
  7. Meta-analysis

DSMB Support

We have the skills, knowledge and experience to support DSMBs in the following areas:

  1. Preparation of DSMB charters / manuals
  2. Data exports and analyses for DSMB meetings: blinded and unblinded data
  3. Participation in DSMB meetings
  4. Preparation of DSMB statistical reports (open and closed sessions)
  5. Presentation of DSMB results
  6. Documentation of open/closed sessions
  7. Unblinding requests


Integrated Summaries of Safety and Efficacy (ISS/ISE)

For ISS/ISE, we will:

  1. Develop a data specification plan to integrate (pool) safety and efficacy data across multiple studies within the program
  2. Implement the data specification plan to generate standardized datasets for analysis
  3. Develop the statistical analysis plan and tables, listings and figures shells to assess safety and efficacy at the program level
  4. Program the integrated safety and efficacy tables, listings and figures

With pooled ISS data, we will:

  1. Identify common related adverse events and serious adverse events
  2. Identify safety concerns that show a pattern across all studies
  3. Assess safety in subgroups of subjects, if applicable

With pooled ISE data, we will:

  1. Assess efficacy in subgroups of subjects (e.g. pediatric population)
  2. Assess efficacy for secondary endpoints across all studies, which would not have been possible within the single studies
  3. Explore inconsistency in results between studies
  4. Assess sensitivity of results
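To make the pooling step concrete, here is a minimal sketch of counting distinct subjects per adverse event term across studies; the study names, subject IDs, and preferred terms are entirely hypothetical, and real ISS work operates on standardized (SDTM/ADaM) datasets rather than inline lists.

```python
# Hypothetical per-study adverse event listings: (subject ID, preferred term)
study_101 = [("101-001", "Headache"), ("101-002", "Nausea"), ("101-002", "Headache")]
study_102 = [("102-001", "Headache"), ("102-003", "Dizziness")]

def pooled_ae_incidence(*studies):
    """Pool AE records across studies and count distinct subjects per term."""
    subjects_per_term = {}
    for study in studies:
        for subject, term in study:
            subjects_per_term.setdefault(term, set()).add(subject)
    # Count unique subjects, not records, so repeat events are not double-counted
    return {term: len(subjs) for term, subjs in sorted(subjects_per_term.items())}

print(pooled_ae_incidence(study_101, study_102))
```

Patterns that are invisible within a single small study — here, "Headache" reported in both studies — become assessable once the data are pooled.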

Clinical Data Interchange Standards Consortium (CDISC) Standards

CDISC is the standard for submitting clinical data to Regulatory Agencies. As a member of the CDISC organization, we maintain up-to-date knowledge on Study Data Tabulation Model (SDTM) implementation guidelines. Our SDTM services include developing SDTM datasets for ongoing studies or converting legacy databases to SDTM standards.

We utilize industry standards/references when creating SDTM datasets:

  1. CDISC SDTM Implementation Guide
  2. CDISC SDTM guidance
  3. SDTM Controlled Terminology
  4. Indication-specific SDTM specifications, if applicable

Our processes in SDTM development are as follows:

  1. Map raw data variables to SDTM domains (identify domains, required, expected, permissible and relationship variables)
  2. Create SDTM specification documents
  3. Program SDTM domains
  4. Validate SDTM domains
  5. Annotate eCRF
  6. Create Define.xml
  7. Create Study Data Reviewer’s Guide (SDRG)
  8. Produce Submission Package
  9. Quality Control at each step of the process, per SOP
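Step 1 of the process — mapping raw variables to an SDTM domain — can be illustrated with a small sketch. The raw column names (`SUBJ`, `GENDER`, `BIRTHDT`) and study identifier below are hypothetical stand-ins for whatever the EDC system exports; in practice the mapping is driven by the SDTM specification document and validated per SOP.

```python
# Hypothetical raw demographics export; real column names vary by EDC system.
raw_dm = [
    {"SUBJ": "001", "SITE": "01", "GENDER": "F", "BIRTHDT": "1980-05-14"},
    {"SUBJ": "002", "SITE": "01", "GENDER": "M", "BIRTHDT": "1975-11-02"},
]

def map_to_dm(raw_rows, studyid="ABC-123"):
    """Map raw demographics variables to a few required/expected
    SDTM DM variables (STUDYID, DOMAIN, USUBJID, SEX, BRTHDTC)."""
    dm = []
    for row in raw_rows:
        dm.append({
            "STUDYID": studyid,
            "DOMAIN": "DM",
            "USUBJID": f"{studyid}-{row['SITE']}-{row['SUBJ']}",  # unique subject ID
            "SEX": row["GENDER"],       # assumes source already uses F/M terminology
            "BRTHDTC": row["BIRTHDT"],  # ISO 8601 date, as SDTM requires
        })
    return dm

print(map_to_dm(raw_dm)[0]["USUBJID"])  # → ABC-123-01-001
```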

Standards for creating analysis-ready datasets are based on guidelines published by CDISC. We develop the specifications for Analysis Data Model (ADaM) datasets based on the SAP and the TLF shells. The preferred source data for ADaM datasets are the SDTM datasets, but they can also be created from the raw (native) study datasets.

We utilize industry standards/references when creating ADaM datasets:

  1. CDISC ADaM Implementation Guide
  2. ADaM Controlled Terminology
  3. Indication-specific ADaM dataset specifications, if applicable

Our processes in ADaM development are as follows:

  1. Map relevant raw or SDTM data variables to ADaM datasets
  2. Determine derived variables for analysis
  3. Determine analysis flags (population, sub-group, criterion-based) needed for analysis
  4. Create ADaM specification documents based on the above
  5. Program ADaM datasets
  6. Validate ADaM datasets
  7. Create Define.xml
  8. Create Analysis Data Reviewer’s Guide (ADRG)
  9. Produce Submission Package
  10. Quality Control at each step of the process, per SOP
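Step 3 — deriving population flags — can be sketched as follows. The input fields (`randomized`, `dosed`) and the flag definitions are hypothetical simplifications; the actual derivation rules always come from the SAP.

```python
# Hypothetical subject-level inputs; flag logic is defined in the SAP.
subjects = [
    {"USUBJID": "001", "randomized": True,  "dosed": True},
    {"USUBJID": "002", "randomized": True,  "dosed": False},
    {"USUBJID": "003", "randomized": False, "dosed": False},
]

def derive_flags(rows):
    """Derive ADSL-style population flags: ITTFL (randomized) and SAFFL
    (received at least one dose), coded 'Y'/'N' per ADaM convention."""
    out = []
    for row in rows:
        out.append({
            "USUBJID": row["USUBJID"],
            "ITTFL": "Y" if row["randomized"] else "N",
            "SAFFL": "Y" if row["dosed"] else "N",
        })
    return out

for rec in derive_flags(subjects):
    print(rec)
```

Downstream TLF programs then subset on these flags (e.g. `SAFFL == "Y"` for safety tables) instead of re-deriving population membership in each program.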

Finally, we have the experience and core competency in programming TLFs in support of:

  1. Final analyses
  2. Interim analyses
  3. DSMB meetings
  4. Annual safety updates / Development Safety Update Reports (DSURs)
  5. Integrated summaries of safety (ISS)
  6. Integrated summaries of efficacy (ISE)
  7. Abstracts/Manuscripts
  8. Post-hoc and exploratory analyses

The source data for TLF programming are the raw (native) study database, SDTM datasets or ADaM datasets. All TLF programming is based on the shells created from the SAP. If shells are not available, we will create them upon request to ensure transparency and consistency of the output.

All programming is done by our statistical programmers per the SAP and TLF shells. Our SOPs require a separate program for each individual TLF. Programs can be shared directly with the agency upon request. We maintain a QC log for each study that defines the extent and scope of quality control for TLFs. QC includes independent programming, code review and/or content review of a subset of programs and/or TLF outputs. All issues are documented and resolved in the log.
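The independent-programming form of QC can be sketched as follows: two programmers implement the same statistic separately from the same data, and QC passes only when the results agree. The data and statistic here are illustrative; real QC compares full TLF outputs and records any discrepancy in the study log.

```python
# Independent double programming: two separate implementations of the same
# summary statistic, compared at the end. Data below are illustrative.
data = [3.2, 4.1, 5.0, 4.7]

def mean_primary(values):
    """Production programmer's version."""
    return sum(values) / len(values)

def mean_qc(values):
    """QC programmer's independent re-implementation."""
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

primary, qc = mean_primary(data), mean_qc(data)
match = abs(primary - qc) < 1e-12  # tolerance for floating-point comparison
print("QC", "PASS" if match else "FAIL")  # → QC PASS
```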