Wednesday, January 21, 2026

"Quality by Design" in Clinical Trials

What is Quality by Design according to ICH GCP E6(R3)? 

E6(R3) Good Clinical Practice (GCP) places Quality by Design as a key and recurring principle across the guideline. The concept is introduced in the context of clinical study design, risk identification, and planning, and is reinforced at multiple points throughout the document:


  • "This guidance builds on key concepts outlined in the ICH guidance for industry E8(R1) General Considerations for Clinical Studies (April 2022).3 This includes fostering a quality culture and proactively designing quality into clinical trials and drug development planning, identifying factors critical to trial quality, engaging interested parties, as appropriate, and using a proportionate risk-based approach." (Introduction I, page 8)
  • "Clinical trials should be designed to protect the rights, safety and well-being of participants, and assure the reliability of results. Quality by design should be implemented to identify the factors (i.e., data and processes) that are critical to ensuring trial quality and the risks that threaten the integrity of those factors and ultimately the reliability of the trial results." (PRINCIPLES OF ICH GCP (II), page 10) 
  • "Quality by design involves focusing on critical to quality factors of the trial in order to maximize the likelihood of the trial meeting its objectives." (6.2, page 13)
  • "The sponsor should adopt a proportionate and risk-based approach to quality management, which involves incorporating quality into the design of the clinical trial (i.e., quality by design) and identifying those factors that are likely to have a meaningful impact on participants’ rights, safety and well-being, and the reliability of the results (i.e., critical to quality factors as described in ICH E8(R1))." (3.10, page 35)
  • "The systems and processes that help ensure this quality should be designed and implemented in a way that is proportionate to the risks to participants and the reliability of trial results." (D, page 52) 

https://www.fda.gov/regulatory-information/search-fda-guidance-documents/e6r3-good-clinical-practice-gcp

What is Quality?

Quality is not a single, universally fixed concept. Its meaning varies across disciplines and industries, and is often shaped by context and purpose. In technical and operational settings, quality is commonly understood in two complementary ways: 

  1. as the set of characteristics of a product or service that enable it to meet stated or implied needs (fit for purpose)
  2. as the absence of defects or deficiencies in what is delivered.

Well-known quality theorists have captured this distinction succinctly. Joseph Juran described quality as suitability for the intended use, emphasizing outcomes and user needs, while Philip Crosby framed quality as adherence to defined requirements, highlighting consistency and compliance. Together, these perspectives illustrate that quality depends both on how well something is designed and on how reliably it is delivered. (American Society for Quality, https://asq.org/quality-resources/quality-glossary/q)

How is quality measured and improved (design → execution → control)?

  • Quality by Design (QbD): designs quality into products and services from the outset by identifying critical factors and risks, aiming to prevent defects at the start of the process.
  • Total Quality Management (TQM): embeds quality across the organization during execution through shared responsibility, customer focus, and continuous improvement.
  • Lean: improves execution quality by eliminating waste, simplifying workflows, and reducing sources of error throughout the process.
  • Six Sigma: makes quality measurable by identifying root causes of variation and eliminating defects early through data-driven process control.
  • ISO 9001 Quality Management System: provides governance and control through defined processes, documentation, monitoring, and continual improvement.

What are alternatives to Quality by Design?

Quality by Design is not the only way quality can be pursued in clinical trials. Historically, and still in practice today, quality is often achieved through reactive or downstream approaches, rather than through upfront design decisions.

  1. Quality by inspection relies primarily on monitoring visits, source data verification, audits and inspections, and retrospective review of deviations and queries. In this model, quality issues are detected after they occur, and corrective actions are applied later in the trial lifecycle.
  2. Quality by correction (CAPA-driven quality) relies on identifying deviations and errors during execution, performing root cause analyses, and implementing corrective and preventive actions (CAPAs). This approach can restore compliance, but it is inherently reactive and often resource-intensive.
  3. Quality by over-control applies uniform controls everywhere, collects more data “just in case,” and increases monitoring intensity regardless of risk.
  4. Quality by experience and intuition is driven by experienced teams, informal knowledge, and “how it has always been done.”

How is quality viewed by GCP across design, execution, and operations?

ICH GCP E6(R3) frames quality as something that starts with study and protocol design and is then managed throughout the full clinical trial lifecycle. The guideline emphasizes that trials should be designed to protect participants and ensure reliable results, and it introduces quality by design as the identification of critical-to-quality data and processes, together with the risks that may compromise them. In this sense, protocol design is the primary entry point for quality, as design decisions determine what data are collected, how procedures are performed, and which elements of the trial are essential.

At the same time, GCP does not treat quality as fixed once the protocol is finalized. Trial conduct is defined broadly to include planning, execution, oversight, analysis, and reporting, making it clear that quality must be actively managed during operations, not only specified in design documents. Quality by design at the protocol level is therefore linked to ongoing quality management during trial execution, including monitoring and sponsor oversight.

Systems, software, and tools are addressed as supporting elements, not as sources of quality themselves. GCP requires that the systems and processes used are designed, implemented, and validated in a way that is proportionate to risks to participants and to the reliability of trial results, while deliberately avoiding prescriptive requirements for specific technologies. Manual, electronic, or hybrid approaches are all acceptable if they adequately support the identified critical-to-quality factors.

Beyond protocol design and execution activities, quality by design can also be applied to areas such as the design of roles and responsibilities, data flows across partners, and decision and escalation pathways. Although these aspects are not always explicitly labeled as quality by design in the guideline, they follow the same underlying logic of anticipating risks early rather than relying solely on downstream correction.

Overall, E6(R3) presents quality as a design-led, lifecycle-wide responsibility, supported by proportionate systems and processes, while intentionally leaving flexibility in how quality is operationally enabled.

Why is Quality by Design important for the development of a study protocol?

The study protocol defines what a clinical trial is intended to achieve and how it will be conducted. Design decisions made at this stage determine the scope of data collection, the complexity of procedures, the feasibility of execution at sites, and the ability to protect participants while generating reliable results. For this reason, Quality by Design is particularly important during protocol development.

Applying Quality by Design means identifying, upfront, which data and processes are critical to the study objectives and which risks could compromise participant safety or result reliability. This helps focus the protocol on what truly matters, rather than treating all procedures and data points as equally important. Without this prioritization, protocols often become overly complex, difficult to execute, and prone to amendments and deviations during conduct.

Quality by Design also supports operational feasibility. Protocols developed without considering site capabilities, patient burden, and realistic workflows may be technically compliant but operationally fragile. Incorporating quality considerations early allows unnecessary procedures to be reduced, ambiguous instructions to be clarified, and study requirements to be better aligned with routine clinical practice, improving consistency across sites.

From a data perspective, Quality by Design ensures that data collection is purpose-driven. Clearly defined endpoints, aligned assessments, and well-structured visit schedules reduce variability, data errors, and the need for retrospective correction. This strengthens the reliability of trial results while lowering the operational burden associated with excessive data collection and monitoring.

Finally, Quality by Design at the protocol stage provides the foundation for proportionate quality management during execution. When critical-to-quality factors and risks are clearly defined, oversight, monitoring, and data review can be aligned with those priorities rather than applied uniformly across all trial activities.
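As a purely illustrative sketch of this idea (the factor names, scores, and scoring heuristic below are hypothetical, not taken from the guideline), a critical-to-quality register could be modeled as a simple structure that links each factor to its risk and planned controls, so oversight can later be ranked by risk rather than applied uniformly:

```python
from dataclasses import dataclass, field

@dataclass
class CtQFactor:
    """One critical-to-quality factor with its risk and planned controls.

    All example values are hypothetical and for illustration only.
    """
    name: str
    impact: int        # 1 (low) to 5 (high): effect on safety / result reliability
    likelihood: int    # 1 (rare) to 5 (frequent): chance the risk materializes
    controls: list = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        # A common simple heuristic: risk = impact x likelihood
        return self.impact * self.likelihood

register = [
    CtQFactor("Primary endpoint assessment timing", 5, 3,
              ["visit window checks", "targeted monitoring"]),
    CtQFactor("Informed consent process", 5, 2,
              ["site training", "consent review at first visit"]),
    CtQFactor("Exploratory questionnaire completeness", 2, 4,
              ["routine edit checks only"]),
]

# Rank factors so oversight effort can be proportionate to risk.
for f in sorted(register, key=lambda f: f.risk_score, reverse=True):
    print(f"{f.risk_score:>2}  {f.name}")
```

The point of the sketch is not the scoring formula, which any team would replace with its own risk methodology, but that design-time priorities become an explicit, reusable input to execution-time oversight.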

In this sense, Quality by Design is not an additional layer imposed on protocol development, but a way to ensure that the protocol itself is clear, feasible, and focused on what is essential, reducing avoidable quality issues later in the trial lifecycle.

Common protocol issues when Quality by Design is lacking

  • Errors and ambiguities in protocol text
    Unclear procedures, inconsistent visit schedules, or contradictory instructions lead to variable interpretation across sites, deviations, and amendments.

  • Excessive complexity and operational burden
    Overly detailed or unrealistic requirements make protocols difficult to understand and execute, increasing noncompliance, data errors, and site workload.

  • Patient safety and ethical concerns
    Unnecessary tests, overly frequent procedures, or poorly justified assessments increase patient burden and may expose participants to avoidable risk without clear scientific benefit.

Which protocol elements benefit most from Quality by Design?

Quality by Design has the greatest impact on protocol elements that directly affect participant safety, data reliability, and operational feasibility. Applying Quality by Design to these elements helps ensure that the protocol is focused, executable, and aligned with its objectives.

Endpoints

Endpoints are among the most critical protocol elements. Quality by Design helps ensure that endpoints are:

  • clearly defined and measurable,

  • directly linked to study objectives,

  • supported by appropriate assessments and timing.

Poorly designed endpoints often lead to inconsistent data collection, interpretability issues, and protocol amendments. Prioritizing endpoints early helps avoid unnecessary data collection and reduces downstream complexity.

Visit schedules and procedures

Visit schedules strongly influence both patient burden and site workload. Applying Quality by Design means:

  • aligning visits with routine clinical practice where possible,

  • minimizing unnecessary procedures,

  • defining realistic visit windows.

Overly complex or dense visit schedules increase the risk of missed visits, deviations, and data gaps, even when sites are well trained.

Eligibility criteria

Eligibility criteria determine who can safely and appropriately participate in the trial. Quality by Design helps ensure that criteria are:

  • scientifically justified,

  • clearly operationalized,

  • neither unnecessarily restrictive nor ambiguous.

Poorly designed eligibility criteria can lead to slow recruitment, frequent protocol waivers, and inconsistent interpretation across sites.

Safety reporting and assessment

Safety-related procedures are inherently critical-to-quality. Quality by Design supports:

  • clear definitions of reportable events,

  • realistic reporting timelines,

  • proportionate safety assessments aligned with known risks.

Ambiguity or excessive safety requirements can either delay critical reporting or overwhelm sites with low-value assessments.

Data collection and case report forms

Data collection is often the most visible source of operational burden. Quality by Design focuses data collection on:

  • information essential for endpoints and safety,

  • data that supports regulatory and scientific decisions,

  • avoidance of “nice-to-have” variables that serve no clear purpose.

Excessive or poorly aligned data collection increases error rates, query volume, and monitoring effort without improving study quality.

Summary

Protocol elements that define what is measured, who participates, when activities occur, and how safety is managed benefit most from Quality by Design. Applying this approach helps transform the protocol from a comprehensive list of activities into a focused, feasible study plan that supports reliable results and participant protection.

How does Quality by Design influence trial execution once the protocol is approved?

Once a protocol is approved, Quality by Design influences trial execution by guiding how activities are prioritized, managed, and overseen, rather than by changing the protocol itself. The critical-to-quality factors and risks identified during design become reference points for operational decisions throughout trial conduct.

During execution, Quality by Design supports focused oversight. Monitoring, data review, and sponsor oversight can be directed toward data and processes that are essential for participant safety and the reliability of results, instead of being applied uniformly across all trial activities. This allows resources to be concentrated where failures would have the greatest impact.

Quality by Design also shapes operational workflows. Site training, data entry timelines, escalation rules, and issue management processes can be aligned with the risks identified in the protocol. This reduces ambiguity during execution and helps ensure that operational practices are consistent with the study’s quality objectives.

From a data management perspective, Quality by Design enables proportionate control. Data cleaning, query management, and review activities can focus on critical variables, reducing unnecessary rework while maintaining confidence in key results. This approach helps avoid both under- and over-control during trial conduct.
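For illustration only (the query records and the "critical" flag are invented assumptions, not anything prescribed by GCP), proportionate data review could be sketched as routing open queries on critical variables ahead of the rest, oldest first within each group:

```python
# Hypothetical open-query records; "critical" marks variables tied to
# endpoints or safety, as identified during protocol design.
queries = [
    {"id": "Q1", "variable": "primary_endpoint_value", "critical": True,  "age_days": 2},
    {"id": "Q2", "variable": "hobby_questionnaire",    "critical": False, "age_days": 30},
    {"id": "Q3", "variable": "serious_adverse_event",  "critical": True,  "age_days": 10},
]

# Review critical queries first, and oldest first within each group,
# so effort concentrates where failures would matter most.
review_order = sorted(queries, key=lambda q: (not q["critical"], -q["age_days"]))
print([q["id"] for q in review_order])
```

Here the long-open but non-critical questionnaire query is deliberately reviewed last, which is exactly the kind of proportionate trade-off a uniform "clean everything" approach cannot make.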

Finally, Quality by Design provides a framework for adaptive management during execution. As the trial progresses, emerging issues can be assessed against predefined critical-to-quality factors, allowing corrective actions to be targeted and justified rather than reactive or blanket in nature.

In this way, Quality by Design does not end with protocol approval. It continues to influence how the trial is executed, monitored, and adjusted, ensuring that operational decisions remain aligned with the study’s core objectives and risk profile.

What role do systems, software, and tools play in supporting Quality by Design during execution?

During trial execution, systems, software, and tools play a supporting role in Quality by Design by helping to implement and maintain the quality objectives defined during study and protocol design. They do not define quality themselves, but they can enable consistency, visibility, and control over critical-to-quality data and processes.

ICH GCP E6(R3) treats systems and processes as mechanisms that help ensure quality, and requires that they are designed, implemented, and validated in a way that is proportionate to risks to participants and to the reliability of trial results. The guideline is intentionally technology-neutral and does not mandate specific tools, levels of automation, or system architectures. As a result, manual processes, electronic systems, or hybrid approaches may all be acceptable, provided they adequately support the identified critical-to-quality factors.

In practice, systems such as EDC, CTMS, safety databases, and trial master file platforms can support Quality by Design by reinforcing design-time decisions during execution. For example, they can help ensure that critical data are captured consistently, that key activities are tracked against protocol requirements, and that deviations or delays affecting critical-to-quality elements are visible and addressed in a timely manner.

At the same time, systems cannot compensate for weak design. If critical-to-quality factors are not clearly defined in the protocol, or if operational risks are not anticipated, software tools may amplify complexity rather than improve quality. In such cases, additional controls, queries, or monitoring activities are often introduced reactively, increasing operational burden without addressing root causes.

When used appropriately, systems and tools support Quality by Design by enabling proportionate execution. They help translate design-level priorities into operational workflows, oversight activities, and data review processes that are aligned with study risk. Their value lies not in their sophistication, but in how well they reflect and reinforce the quality decisions made during trial design.

Can Quality by Design be achieved without advanced clinical trial software?

In principle, Quality by Design does not depend on the use of advanced clinical trial software. ICH GCP E6(R3) does not mandate specific technologies and allows quality to be supported through manual, electronic, or hybrid processes, provided they are appropriate for the risks and objectives of the trial.

In practice, however, the role of software depends strongly on whether it is absent, misaligned, or deliberately designed as a framework.

When no clinical software is used, trial processes may remain relatively simple and linear. In such cases, Quality by Design can be supported through clear protocols, well-defined procedures, and direct oversight, because risks and data flows are easier to understand and control.

When software is introduced without being aligned to the protocol design and critical-to-quality priorities, it can become a barrier rather than a support. Fragmented systems, duplicate data entry, delayed visibility into critical data, and forced workarounds can introduce new risks and shift quality management toward retrospective checking and correction.

Quality by Design is most effectively supported when software functions as a framework that reflects the study design. In this role, systems reinforce protocol logic, support critical timelines, make key risks visible during execution, and enable proportionate oversight. In such cases, software does not create quality, but it helps translate design decisions into consistent operational practice.

From a GCP perspective, this distinction is intentional. The guideline defines quality principles and requires that systems used are controlled and validated, but it leaves the design of operational frameworks to sponsors and CROs. As trial complexity increases, the absence of a coherent system framework, or the presence of poorly aligned systems, can undermine the practical application of Quality by Design during execution.

In summary, Quality by Design can exist without advanced software in simple settings, but once software is part of the trial environment, its role becomes critical. Software that is not aligned with study design and quality priorities risks obstructing quality, while software designed as an enabling framework can meaningfully support Quality by Design in complex clinical trials.

Does the increasing complexity of clinical trials require a new interpretation of Quality by Design in execution?

The increasing complexity of clinical trials challenges a purely design-time interpretation of Quality by Design. Modern studies involve more data sources, decentralized activities, multiple vendors, adaptive designs, and tighter timelines than earlier trial models. While Quality by Design remains anchored in protocol and study design, these changes raise the question of whether design-time decisions alone are sufficient to sustain quality during execution.

ICH GCP E6(R3) continues to frame Quality by Design around identifying critical-to-quality factors and risks upfront. However, as trials become more operationally complex, many quality risks emerge during execution, at the interfaces between systems, organizations, and processes. These risks are often not fully predictable at the time of protocol finalization, even when design is robust.

This does not imply that the regulatory meaning of Quality by Design has changed. Rather, it suggests that execution has become a more active extension of design. In complex trials, maintaining alignment between protocol intent and real-world conduct requires continuous attention to how workflows, data flows, and oversight mechanisms behave in practice. Quality by Design therefore increasingly depends on whether execution environments can reflect and adapt to design-level priorities.

As complexity grows, execution-level interpretation of Quality by Design often shifts toward:

  • maintaining visibility over critical-to-quality data and processes,

  • detecting deviations from design assumptions early,

  • and adjusting controls proportionately as risks materialize.
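As one hedged illustration of the points above (the metric names and tolerance values are invented for this sketch), "detecting deviations from design assumptions early" can amount to comparing observed execution metrics against design-time tolerances and flagging only the breaches:

```python
# Design-time assumptions: hypothetical tolerances set during protocol design.
design_assumptions = {
    "visit_window_deviation_rate": 0.05,    # max acceptable fraction of out-of-window visits
    "missing_primary_endpoint_rate": 0.02,  # max acceptable missing-data rate
}

# Observed values during execution (illustrative numbers only).
observed = {
    "visit_window_deviation_rate": 0.09,
    "missing_primary_endpoint_rate": 0.01,
}

# Flag metrics that exceed their design-time tolerance, so controls are
# adjusted where risk has actually materialized rather than uniformly.
breaches = [m for m, limit in design_assumptions.items() if observed[m] > limit]
print(breaches)
```

In this toy case only the visit-window metric breaches its tolerance, so only the controls around visit scheduling would be tightened, which is the proportionate adjustment the text describes.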

This evolution does not replace design-time Quality by Design, but it extends it into execution, where systems, processes, and decision pathways must consistently reinforce the original quality objectives of the study.

In this sense, increasing trial complexity does not require redefining Quality by Design, but it does require rethinking how design decisions are operationalized and sustained during execution. Quality by Design becomes less about a one-time protocol exercise and more about ensuring that execution environments remain aligned with what was designed to matter most.


Disclaimer

This article reflects the personal views and interpretations of the author and is intended for educational and illustrative purposes only. It does not represent regulatory guidance, official interpretations, or the position of any regulatory authority, sponsor, CRO, or organization.

The text has been streamlined and edited with the assistance of ChatGPT (version 5.2) to improve clarity and structure. Any opinions expressed are the author’s own and should not be construed as formal guidance or compliance advice.

The content is provided in the context of tutorial-style discussion materials, aimed at supporting learning and reflection on Quality by Design and related concepts in clinical research.
