2021 report

Improving Methods for Discrete Choice Experiments to Measure Patient Preferences

By: A. Ellis*, K. Thomas, K. Howard, M. Ryan, E. Bekker-Grob & E. Lancsar

Source: ORCID
Added: April 19, 2023

Results Summary

What was the project about?

Researchers can use experiments to learn about what patients prefer. Discrete choice experiments, or DCEs, describe treatments with different features, such as out-of-pocket costs or wait times. Patients fill out surveys about which treatments they prefer. From their choices, researchers learn what is most important to patients and how they weigh the different features.

DCEs can be hard to design and analyze. When surveys are complex, patients may ignore information or take shortcuts, which leads to inaccurate results. To make DCE results more accurate, researchers can:

- Change the design of the DCE
- Apply statistical methods

But current knowledge of how to do this is limited. In this project, the research team looked at improving methods to design and analyze DCEs.

What did the research team do?

First, the research team looked at how changes to the design of a DCE affected results. Using a computer program and data from two DCEs, the team created test data for 100,000 patients. The team used the test data to see how changes in DCE design, such as the number of patients taking part, affected results. DCEs are complex, so researchers often test the design in a small pilot study, which informs the design of the main study. The team also looked at how changes in pilot study designs affected the accuracy of results from the main studies.

Next, the research team looked at one type of statistical method used in DCEs, called random parameter logit estimation with Halton draws. This method lets researchers measure what patients prefer while accounting for differences in preferences across patients. The team tested the method under different conditions, such as how much preferences vary from patient to patient. They then looked at how many Halton draws were needed to get accurate results in a DCE study.

The research team worked with other DCE researchers to design this study.
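To make the idea of "test data" concrete, the sketch below simulates patients choosing between two treatments under the standard logit assumption (utility plus Gumbel noise). All feature values and preference weights here are invented for illustration; they are not the study's data or code.

```python
import math
import random

# Hypothetical preference weights: patients dislike cost and wait time.
BETA = {"cost": -0.03, "wait": -0.10}

def utility(alt, beta=BETA):
    """Deterministic utility plus a Gumbel error term (the logit assumption)."""
    v = beta["cost"] * alt["cost"] + beta["wait"] * alt["wait"]
    return v - math.log(-math.log(random.random()))

def simulate_choice(alt_a, alt_b):
    """Return the alternative with the higher realized utility."""
    return "A" if utility(alt_a) > utility(alt_b) else "B"

random.seed(0)
choices = [simulate_choice({"cost": 50, "wait": 2},   # treatment A
                           {"cost": 20, "wait": 6})   # treatment B
           for _ in range(1000)]
share_a = choices.count("A") / len(choices)
```

Fitting a logit model to choices generated this way recovers the weights that produced them, which is how a simulation study can check whether a given DCE design yields accurate estimates.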
What were the results?

When the DCE design included more patients, the results were more accurate for assessing patient preferences. If the pilot study had design errors, results from the main study were less accurate. For random parameter logit estimation with Halton draws, the research team determined the number of Halton draws needed to improve the accuracy of DCE results.

What were the limits of the project?

The research team used two DCEs and varied only a few study design aspects. Results may differ for other data sets and design changes. Future research could test random parameter logit estimation with Halton draws with other data sets and designs.

How can people use the results?

Researchers can use the results to improve how they design and analyze DCEs.

Professional Abstract

Background

Researchers use discrete choice experiments (DCEs) to measure individual patient preferences. In DCEs, researchers give patients a survey describing scenarios with different options from which to choose. For example, a hypothetical DCE offers two healthcare interventions that differ in their features, such as out-of-pocket costs and wait times. Patients choose the intervention they prefer. Patients' choices help researchers understand which features are most important to patients. Researchers also learn how patients weigh the different levels of each feature, such as different out-of-pocket costs.

Designing and analyzing a DCE is challenging. For example, in DCEs with complex options, patients may ignore information, which may lead to inconsistent responses and inaccurate analyses. Altering DCE design features and statistical model assumptions may increase the accuracy of DCE results.

Objective

To improve understanding of the effects of selected DCE design features and statistical model assumptions on DCE results.
Study Design

- Design: Simulations, empirical analysis
- Data Sources and Data Sets: Empirical data on 2 DCEs. Study 1 examined preferences for organ allocation among adults (N=2,051) in the Australian general public. Study 2 examined preferences for labor induction among women (N=362) who were participating in a randomized trial of labor induction alternatives in South Australia. Simulated data for 100,000 participants were based on results from the empirical data sets.
- Analytic Approach: Simulations; random parameter logit estimation
- Outcomes: Estimates of bias, relative standard error, and D-error (a measure of overall error in parameter estimation)

Methods

First, the research team examined how different DCE designs affect study estimates. DCEs have two parts: a pilot study and a main study. The team created simulated DCE pilot and main studies by replicating two empirical DCEs in a simulated population of 100,000 individuals. They generated 864 simulations representing variations in DCE design, such as sample size and the prevalence, correlations, and interactions of different variables. Using different analytic models, the team assessed estimation errors due to DCE design.

Next, the research team examined the effects of using Halton draws on estimates from a random parameter logit model. Halton draws are a quasi-random sampling technique that generates evenly spread data points to approximate a distribution. The random parameter logit model assumes that parameters, such as the strength of preference for a certain healthcare feature, are random and vary across individuals. The team identified the number of Halton draws and the number of parameters needed to generate accurate results.

DCE researchers helped design the study.

Results

In simulations, increasing the sample size decreased random error. Random errors due to small sample size in the main study increased if the pilot study had a small sample size (n=30), unmeasured interactions, and selection bias.
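The Halton sequence behind the Halton draws described in the methods can be generated in a few lines. This pure-Python sketch of the standard radical-inverse construction is an illustration, not the study's implementation:

```python
def halton(index, base):
    """Return the index-th Halton draw in the given base: a deterministic,
    evenly spread value in (0, 1) built by reversing the base-b digits of
    index. Indexing starts at 1."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# Base-2 draws fill the unit interval evenly: 1/2, 1/4, 3/4, 1/8, ...
draws = [halton(i, base=2) for i in range(1, 5)]
```

Because consecutive draws avoid clustering, fewer Halton draws than purely random draws are typically needed to approximate an integral to a given accuracy.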
Random parameter logit estimates had greater bias when model parameters were highly correlated: with correlations of 0.1, 0.2, and 0.3, bias reached 8%, 16%, and 24%, respectively. Too few Halton draws, or a greater number of random parameters, violated model assumptions and produced inaccurate results. Estimates were more accurate with fewer random parameters (fewer than 10). Up to 20,000 Halton draws were needed when the number of random parameters exceeded 15.

Limitations

The simulation scenarios did not cover the full range of possible study designs. Random parameters followed normal distributions; results may differ for other parameter distributions and data sets.

Conclusions and Relevance

Improving methods for designing and analyzing DCEs can help researchers study patient preferences. Using more Halton draws when more random parameters are present may increase the accuracy of random parameter logit models for DCEs.

Future Research Needs

Future research could examine additional DCE design features with other data sets.
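To show why the number of Halton draws matters for the random parameter logit model, the sketch below approximates a choice probability with a normally distributed preference weight by averaging standard logit probabilities over Halton draws. The distribution parameters and attribute values are invented for illustration; this is not the study's model or code.

```python
import math
import statistics

def halton(index, base=2):
    """index-th Halton draw in (0, 1); illustrative re-implementation of the
    radical-inverse construction."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def mixed_logit_prob(x_a, x_b, mu, sigma, n_draws):
    """Approximate P(choose A) when the preference weight is random,
    beta ~ Normal(mu, sigma), by averaging logit probabilities over
    Halton-based normal draws (simulated maximum likelihood style)."""
    norm = statistics.NormalDist()
    total = 0.0
    for i in range(1, n_draws + 1):
        beta = mu + sigma * norm.inv_cdf(halton(i))  # uniform -> normal draw
        total += 1.0 / (1.0 + math.exp(-beta * (x_a - x_b)))  # logit prob
    return total / n_draws

# Hypothetical values: one binary attribute, mean weight 0.5, spread 1.0.
p = mixed_logit_prob(x_a=1.0, x_b=0.0, mu=0.5, sigma=1.0, n_draws=500)
```

With one random parameter, a few hundred draws suffice; as the number of random parameters grows, the integral is higher-dimensional and many more draws are needed, which is consistent with the draw counts reported above.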