Collaborative Evaluation for the Utah All Payer Claims Database Capacity Enhancement


We present a process to evaluate the continuing development of All Payer Claims Databases (APCDs) using a collaborative evaluation process. Project teams enhanced the Utah APCD with improved analytic capacity to provide online pricing and cost-transparency reports to support health care reform in Utah. Our program evaluation efforts added key methods and tools, building on recommendations in the APCD Development Manual to provide evaluation data facilitating improvements [1]. These additions included a Collaborative Evaluation Model, logic models, and development and use of best practices as measures. Stakeholders found that the added use of best practices, logic models, and frequent feedback to practitioners facilitated the project’s success. Since the Collaborative Evaluation Model served a structural purpose, it was transparent to the project teams.

Share and Cite:

Garvin, J., Cardwell, J., Doing-Harris, K., Bolton, D., Snow, L., Hawley, C. and Xu, W. (2018) Collaborative Evaluation for the Utah All Payer Claims Database Capacity Enhancement. Technology and Investment, 9, 91-108. doi: 10.4236/ti.2018.92007.

1. Introduction

APCDs currently operate in thirteen states, with five more being implemented [2]. In broad terms, an APCD’s purpose is to facilitate curtailing the rising costs of healthcare [1] [3] [4] [5] [6] [7]. APCDs provide transparency in pricing across healthcare providers [7]. This transparency, it is reasoned, will allow market forces, instigated by the payers and the public, to drive down cost and increase quality [3] [4] [5]. Transparency, however, involves more than listing prices for services. Analytic capacity, which provides actionable information, is also necessary [5]. For example, providing the cost to consumers for a maternity episode should include all costs for all providers and all care settings from the first prenatal visit to the last postnatal visit. However, developing actionable information requires stakeholder participation to determine the use cases that should be addressed in a given location, the costs to be included, and how to provide value to consumers for useful decision-making [3] [4].

We present an evaluation process that stakeholders viewed favorably. Stakeholders, such as employers, the public, insurers, and payers, all participate in healthcare decisions. All are influential drivers in cost reduction [5]. Public health researchers at universities and the Utah Department of Health (UDOH), and policy makers are also interested in quality of care and in optimal use of APCDs [3]. Gathering and engaging a stakeholder pool with diverse interests is important to develop a useful APCD [1] [6]. Other successful evaluation efforts have noted that ongoing and systematic evaluation, including development and use of logic models, is also important [8] [9] [10]. Identifying the process of change is a key component of our evaluation program. Because of the critical nature of evaluation questions as part of the evaluation plan, we developed the questions iteratively with stakeholders.

Utah has had an APCD since September 2009, with 21 health insurance carrier plans submitting enrollment, pharmacy, and medical file data as early as 2007 [11]. The Health Data Committee (HDC) statute authorizes data collection and provides strategic and policy oversight of Utah’s healthcare data systems, including the APCD [12]. The Utah Partnership for Value Driven Healthcare (UPV), a statewide community collaborative hosted by Health Insight (the Quality Improvement Organization for Utah), provides recommendations to the HDC about analyses and enhancements to improve functioning of the Utah APCD [13]. Stakeholders external to UDOH’s Office of Healthcare Statistics (OHCS) participate in the governance structures. These stakeholders include consumers, providers, policy makers, researchers, and payers, as well as organizations related to health information exchange and healthcare quality improvement [12].

The All Payer Claims Database Development Manual (2015) suggests that several components are essential for APCD development [1]. Based on early learning about APCD development, these areas include engagement, governance, funding, technical build, analyses, and application development [2]. APCDs evolve over time, and funding may come from a variety of sources. The most recent funding resulted from the CMS Grants to States to Support Health Insurance Rate Review and Increase Transparency in Health Care Pricing, Cycle III (Cycle III grant), available through the UDOH. The Cycle III grant required an evaluation component, for which we developed a novel APCD evaluation process that builds on recommendations found in the APCD Development Manual [1].

2. Utah All Payer Claims Database

Before the Cycle III grant award, the OHCS had a limited ability to undertake needed analyses to disseminate meaningful information, including price transparency, to facilitate healthcare reform efforts. While OHCS hosted the APCD in SQL databases―where authorized users can access data files and perform analyses using statistical software such as SAS or Stata―OHCS was dependent upon vendors to produce standard reports. In addition, the APCD did not incorporate information on insurance premiums, rates, benefits, risk adjustment, or quality metrics. The lack of these data reduced the utility of the available information to inform consumer choices about healthcare services.

Figure 1 illustrates the proposed APCD improvements. The improvements described in the figure included plans to fulfill the three aims of the Cycle III grant. In Aim 1, data collection would be enhanced. In Aim 2, the analytic capacity of the Utah APCD would be expanded, and in Aim 3, dissemination of APCD results would be improved.

Figure 1. Proposed improved technical build and analytic capability.

3. Evaluation Approach

We used a Model for Collaborative Evaluation (MCE) built upon six precepts applied in concert with stakeholders throughout the project: 1) Identify the situation; 2) Clarify expectations; 3) Establish a shared commitment; 4) Ensure open communication; 5) Encourage best practices; and 6) Follow specific guidelines [14]. Evaluation questions serve as criteria for evaluating the project and as guideposts toward fulfilling the theory of change [15]. Identifying the theory of change is a key component of an evaluation plan, because it facilitates identifying the desired outcomes and the evaluation questions used to assess the project.

To strengthen our evaluation model, we also utilized community-based participatory research (CBPR) principles as described in Sandoval et al. 2011 [16]. We believed this to be an important inclusion, as Utah’s APCD development is a merger of large-scale IT implementation and public health collaboration. The use of the MCE provides a framework for our evaluation plan, while the CBPR principles describe methods to carry out the collaboration, facilitating equity and positive group dynamics among the evaluation team, the project team, and stakeholders.

We emphasized context, group dynamic processes, the APCD as an intervention, and outcomes in our work. Figure 2 provides a visualization of the collaborative evaluation model coupled with the core components of our evaluation process. In our work we considered the requirements for our evaluation, including our stakeholders, relationships between organizations, funding available to undertake the evaluation, and the legislative mandate for the APCD, as well as other important aspects of the context of the evaluation we were undertaking. We also used a process in which there was equity among stakeholder groups and participants. The concept of equity was critical to establishing the bi-directional communication essential to the collaborative evaluation approach. To iteratively develop the APCD, we focused on the improvements needed to produce required results, such as creating data to assist providers in reducing the cost of healthcare, assisting the Utah Insurance Department with insurance rate review, and empowering consumers to make better decisions about healthcare expenses. Last, we focused on the desired outcomes in terms of being able to use the APCD to provide data to fulfill important use cases determined by our stakeholders. Important use cases include developing asthma measures, assessing falls in the elderly, and increasing price transparency for maternity episodes of care.

Figure 2. Synthesized MCE and CBPR model that depicts our collaborative evaluation process throughout APCD development.

4. Utah APCD Evaluation Methods

We developed the theory of change statement as well as our evaluation plan, including the evaluation questions presented in Table 1, through collaborative feedback with project stakeholders in order to reach a unified goal. The theory of change states what will happen if the project is successful. The evaluation questions help the evaluation team and the stakeholders determine which aspects of the project have been successful and have value. Our theory of change for Utah’s APCD was: “By improving the existing capability and functionality of the APCD, price transparency information will be provided to consumers, employers, researchers and the general public in Utah to support public health and health reform efforts”. Because the evaluation questions are critical, we also developed them iteratively with stakeholders. Stakeholders, such as employers, the public, insurers, and payers, all participate in healthcare decisions. Any or all are influential drivers in cost reduction [5]. Public health researchers at universities and the UDOH, and policy makers, are also interested in the quality of care and influential in optimal use of APCDs [3]. Gathering and engaging a stakeholder pool with diverse interests is important for development of the APCD [1] [6]. As described in other successful evaluation efforts, ongoing and systematic evaluation, including development and use of logic models and use of evaluation questions to guide the evaluation, is also important [2]. Consistent with the collaborative approach we used, the theory of change, evaluation questions, logic models, and other methods used in the evaluation were developed in partnership with the project team and key stakeholders.

Table 1. Initial evaluation questions as developed by the evaluation team.

4.1. Comparison of APCD Manual Development Methods with Utah Methods

During the development of the evaluation, we considered the recommendations made in the APCD Development Manual, as listed in Table 2. Our program evaluation plan added key methods and tools, building on recommendations in the APCD Development Manual, to provide evaluation data facilitating the improvements discussed in the Cycle III grant [1]. In doing so, we built upon these recommendations and added further detail to the processes to match the needs of our development of the Utah APCD. We recognize the value of these components; they formed the backbone of our efforts.

Table 2. Components in the APCD Development Manual and Utah APCD program evaluation.

a. Additional refinement executed by the Evaluation Team to improve upon what is specified in the APCD Development Manual based on our program evaluation approach.

4.2. Iterative Use Case Development Process

We conducted meetings, using an iterative approach, to collaboratively establish use cases for effective evaluation of the APCD. In early 2014, we first met with the Principal Investigator, who provided names of key informants for our collaborative efforts. We then helped plan and facilitate a UDOH stakeholder brainstorming session. We prepared topics of interest for the session, including questions about data sources and needs, analytic tools, reports, and documents. In this session, we led an open but guided discussion to generate ideas from various teams in the Utah Department of Health. Some questions that we asked to direct the discussion were:

・ What kinds of reports do you want to have?

・ What data would you want in a de-identified data set?

・ What data sources do you typically use?

・ What statistical or analytic tools do you use and envision needing?

・ Is there anything else you would like to share with us about how the APCD can be used to support your work?

Based on suggestions from the event, we developed a large set of use cases that we shared with all project groups. Project teams continue to prioritize and iteratively develop the use cases.

During Years 1 and 2, we conducted 49 meetings and exchanged numerous emails with various stakeholders―including project leaders, development teams, other state APCD teams, consumer engagement groups, and individual key informants―to collaboratively establish the use cases and determine what we should use to evaluate the APCD effectively. The results are reflected at the bottom of Table 3. In Year 3, we assessed the APCD’s ability to effectively provide the data and information of value to stakeholders, based on five select use cases that best represented each stakeholder group critical to the theory of change.

4.3. Novel Methods in Addition to Those Recommended in the APCD Development Manual

In our program evaluation process, we used additional techniques that built upon the APCD Development Manual [1]. These included a Collaborative Evaluation Model, logic models, and the development and use of best practices as measures. We describe components in the APCD Development Manual as fundamental aspects of developing an APCD (Table 2, second column), grouped under six overarching foundational categories (Table 2, first column). The components we added are shown in the third column.

4.4. Collaborative Evaluation Model

In accordance with the collaborative evaluation model, we worked with internal stakeholders, including Cycle III collaborators and UDOH staff, as well as external stakeholders, to develop the model to evaluate the overall project and the vision for evaluation [3]. We engaged stakeholders by sharing information about how our program evaluation plan follows the six major precepts of the collaborative model mentioned previously.

We shared the draft evaluation plan with stakeholders in late 2013, shortly after initiation of the Cycle III grant, as part of a collaborative development of the final plan. Collaborative evaluation was a new concept to most project leaders and members, so, throughout the first half of 2014, we met frequently with various stakeholder groups to present formalized work plans, metrics, and logic models. We followed our dissemination efforts with an initial survey at the end of Year 1 to evaluate our process, surveying the leads for each aim regarding the use of collaborative evaluation in our plan development. We administered a second survey in Year 2 to gather feedback about our evaluation activities and to determine how well we had accomplished this. The Year 2 survey asked whether the collaborative and iterative engagement of the evaluation team contributed to the overall success of the project and the development of the use cases. This survey included three instruments with different questions tailored to specific project groups based on work assignments. We based questions on a five-point Likert scale assessing statements about the evaluation plan and its contribution to the APCD project. Response choices were Strongly Agree, Somewhat Agree, Neither Agree nor Disagree, Somewhat Disagree, and Strongly Disagree. All responses were anonymous.

We developed the evaluation questions in Table 1 based on the project aims. Aim 1 focuses on improving data quality through ensuring complete submissions. We aligned these efforts with the third objective for an APCD analytic plan and we established a process of continuous data improvement for increasing data quality. Data quality best practices and other strategies create an environment that promotes data reliability and confidence in the APCD as a resource. The emphasis in Aim 2 is analytics, operationalizing capacity and building infrastructure to produce meaningful information for internal UDOH use and online reporting. We accomplished dissemination of reports relevant to the use cases established by the stakeholders in Aim 3. We developed our evaluation plan through 71 communications with 16 project groups.

4.5. Logic Models

We used logic models as a bridge to understanding for the program teams associated with all aims of the work. We developed an overall logic model for the project (Appendix 1) as well as one for each aim. Especially at the beginning of the project, the logic models helped to identify critical inputs, activities and participation (outputs) to achieve the overall project outcomes as well as the interconnectivity of each part of the project. We used logic models because they provide an overview of critical elements and because they facilitate use of project management techniques by program staff for each project aim. As part of the Year 2 survey, we created two questions to gather feedback from project managers who worked with the evaluation team to design and distribute logic models throughout the APCD project.

4.6. Best Practices

We identified relevant best practices in large-scale health information technology (HIT) development, administrative data use and quality, healthcare information security and privacy, and the Solutions Development Life Cycle (SDLC). We identified best practices either through an initial literature search or by consulting an active UDOH Department of Technology Services internal policy. For each aspect of large-scale HIT, we searched on its name together with the term “best practice” and reviewed the top scientific and gray literature results for each category. We undertook literature searches to identify best practices for data quality and for healthcare information security and privacy, and we based SDLC best practices on an internal UDOH policy. We shared best practices with each project team and iteratively developed the practices through 56 meetings and nine email communications, during which we revised the practices 48 times. Final versions of the best practices were used annually to assess team performance. Each team’s work was assessed at year’s end with a designation of partial use, ongoing use, or complete use. We shared assessments with project teams and revised them for accuracy as necessary.
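The annual assessment described above reduces to a simple tally of designations per team per year. The sketch below illustrates one way to compute such a tally; the team names, years, and individual records are hypothetical illustrative data, not actual project results, and only the three designation levels come from the text.

```python
from collections import Counter

# Hypothetical assessment records: (team, year, designation).
# Teams and records are illustrative only; the designation levels
# (partial, ongoing, complete use) follow the evaluation described above.
assessments = [
    ("data-quality", 1, "partial use"),
    ("data-quality", 2, "complete use"),
    ("security-privacy", 1, "ongoing use"),
    ("security-privacy", 2, "complete use"),
    ("sdlc", 1, "partial use"),
    ("sdlc", 2, "ongoing use"),
]

def tally_by_year(records):
    """Count how many teams received each designation in each year."""
    tallies = {}
    for _team, year, designation in records:
        tallies.setdefault(year, Counter())[designation] += 1
    return tallies

for year, counts in sorted(tally_by_year(assessments).items()):
    print(f"Year {year}: {dict(counts)}")
```

A year-over-year comparison of these counts is what a table like Table 7 summarizes: movement of teams from partial toward complete use.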

4.7. Use Cases

We assessed five use cases to evaluate the success of APCD data application (Table 3). In accordance with our collaborative evaluation model, we worked with multiple stakeholders to best represent actual users of APCD data, such as internal UDOH researchers and staff, external collaborators, and consumers in the general public. Once the APCD implementation had been completed, we selected and interviewed key informants representing each of these stakeholder groups to obtain feedback about their experience with the APCD in fulfilling their requests and use cases. Interviewees included epidemiologists, professional and academic researchers, informaticists, consumers, and patient advocates.

Table 3. Final evaluation questions and use cases for each area of evaluation throughout the APCD project.

We carried out the interviews in a semi-structured format, obtaining consent before each interview. While we left the interviews as open-ended as possible, we used research questions to guide our discussion with informants. These research questions were:

1) What did the stakeholder express about:

a) Requesting data?

b) Applying APCD data to a use case?

c) Utility/value of APCD use?

d) Quality of the data?

e) Effectiveness of the staff?

f) Ease of use of the data?

g) Security of the data?

2) What other issues were communicated from the stakeholder?

Once we had completed our interviews, we performed a qualitative analysis of the gathered responses. Summarized snippets of feedback can be found in Appendix 1. Overall, feedback about the APCD was very positive. All of our interviewees were successful in utilizing APCD data for their needs and expressed positive results. The most common difficulty expressed by APCD users was initially learning to work with the APCD data: those without claims data experience faced a steeper learning curve than those who had previously used claims data in other applications. However, all interviewees were able to overcome difficulties along the way, largely due to positive interaction with and assistance from UDOH staff. Based on interviewee responses, this commitment to assistance from APCD staff was critical to user success. It should be noted, however, that we were only able to evaluate four of the five use cases, as we could not evaluate the use of the APCD to support effective rate review due to time constraints and technical barriers for the stakeholder.

In addition to our findings, there has been further evidence of successful application of the improved APCD. On December 15, 2016, a community showcase was held to describe the success of a variety of use cases that had utilized APCD data [6], including some of the use cases selected for our evaluation. The showcased use cases demonstrated the APCD’s utility across public health and academic research, quality control, and cost transparency. They were positively received by the community, demonstrate the extent to which the APCD is being used, and reflect the improved value and utility of the APCD.

5. Gathering Feedback from Stakeholders about the Evaluation Process

We gathered feedback about the evaluation process with surveys. We used survey results to clarify the goals of the collaborative evaluation model by sharing the formalized evaluation plan including the process of change and evaluation process framework. We began with project leadership, subsequently met with the various stakeholder groups, and then provided updates on our progress throughout Year 2.

The three surveys focused on HIT, logic models, and collaborative evaluation (Tables 4-6). Our logic models were well received, but there was confusion regarding the evaluation team’s role and questions about the utility of the collaborative evaluation model. Satisfaction with the model and understanding of its use were mixed: half of the respondents (50%) found the model useful, one quarter (25%) were ambivalent, and one quarter (25%) found it intrusive.

The majority of respondents specifically felt the evaluation plan positively contributed to the Cycle III grant project, while a minority felt neutral and an even smaller minority felt that the plan did not positively contribute. This feedback indicated the need for ongoing communication about evaluation goals. Several respondents expressed that the logic model and collaborative model provided “valuable guidance” for “what tasks/activities need attention”. Others reported being unclear about the role of the evaluation team and did not appear to understand the actual focus of the evaluation. They felt the team should be “focusing on an evaluation of the impact of the project”, although discussions emphasized the plan’s focus on processes.

Table 4. Results from the survey about HIT.

Table 5. Results from the survey about collaborative evaluation.

Table 6. Results from the survey about the use of logic models.

Engagement with our collaborative stakeholders remained positive throughout Year 2, and we again prepared a survey to gather feedback. We designed three surveys in Year 2 to gather information from specific groups based on their work, increasing the potential for all respondents to provide valid responses. Survey questions ranged from use of the collaborative model and the evaluation team’s role in the project, to some of our more visible contributions, such as best practices tracking and logic models.

Feedback from our Year 2 survey was overwhelmingly positive and showed that our stakeholders had increased confidence in our collaborative efforts. The majority of respondents felt the evaluation plan positively contributed to the Cycle III project, while a minority felt neutral and an even smaller minority felt that the plan did not positively contribute. This was a noticeable improvement over the previous year’s survey, demonstrating improvement in sharing and utilizing the collaborative evaluation model in the APCD development project. Overall, 90% of all responses fell under “Strongly Agree” or “Agree”; the remaining 10% were neutral, and all of these pertained to the use of best practices in SDLC. Respondents felt that the most valuable best practices were those in data quality (80% “Strongly Agree”; 20% “Agree”).

5.1. Model for Collaborative Evaluation

Out of 13 people offered the survey, eight responded. Respondents had varied opinions of the collaborative model’s usefulness. In the initial survey, there was confusion about the model and its role. However, the second-year survey showed an overall improvement in stakeholders’ views and understanding of the collaborative evaluation, as the majority of responses (63%) fell into the “Agree” and “Strongly Agree” categories. Interestingly, the question with the most consistent responses across all surveys was whether the collaborative and iterative approach facilitated the development of use cases, with most respondents (63%) agreeing or strongly agreeing and one respondent somewhat disagreeing.
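As a minimal illustration of how such a percentage is derived, the sketch below tallies the share of responses in the two agreement categories. The eight responses are hypothetical, chosen only so that 5 of 8 reproduces the approximately 63% (62.5%) figure reported above; the labels follow the five-point scale described earlier.

```python
# Hypothetical Likert responses from eight survey participants,
# constructed for illustration so that 5 of 8 (62.5%) fall in the
# two agreement categories, matching the ~63% reported in the text.
responses = [
    "Strongly Agree", "Somewhat Agree", "Somewhat Agree",
    "Somewhat Agree", "Strongly Agree",
    "Neither Agree nor Disagree", "Neither Agree nor Disagree",
    "Somewhat Disagree",
]

def percent_agreeing(likert_responses):
    """Percentage of responses in the two agreement categories."""
    agreeing = sum(
        r in ("Somewhat Agree", "Strongly Agree") for r in likert_responses
    )
    return 100 * agreeing / len(likert_responses)

print(percent_agreeing(responses))  # 62.5, reported in the text as 63%
```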

5.2. Logic Models

Of the six people offered the survey, three responded. We asked if logic models improved the teams’ understanding of Cycle III tasks and goals. The responses reconfirmed that the overall reception of logic models was positive: two respondents answered “Agree” and one answered “Strongly Agree”. Additionally, one respondent commented that the logic models could have benefited from another update at the beginning of Year 2.

In addition to discussion during program management and other project meetings, communication about the logic models occurred in 21 emails. All four logic models were modified twice during the grant to reflect progress toward completion. The models identify critical inputs, activities, and participation (outputs) needed to achieve the overall project outcomes, as well as the interconnectivity of each part of the project. Logic models were used because they not only provide an overview of critical elements but also facilitate use of project management techniques by program staff in each project aim.

5.3. Best Practices

During Years 1 and 2, we evaluated the use of best practices components to improve claims data quality, large-scale HIT development, healthcare information security and privacy, and software development. As reflected in Table 7, there was significant and increasing use of component parts of best practices. We also used logic models with each team.

6. Insights

UDOH obtained funds from the Cycle III rate review grant to further develop the APCD [17]. Other states report using the APCD to evaluate specific use cases such as asthma and medical homes, and to understand evolving needs of the healthcare system [18] [19] [20]. Our collaborative evaluation framework was instrumental in the development and success of Utah’s APCD implementation to date. We found that communication is essential to effective collaboration. We promoted stakeholder engagement through extensive, ongoing contact by email and meetings to explain the project, address concerns, and promote ownership of the APCD among different participants. We recognize that we used a newly applied practice in public health evaluation, and stakeholders were more accustomed to an outcome-based evaluation process. Although our model was initially viewed unfavorably by about 25% of survey respondents, we interpreted this response as stemming from the change in the evaluation process, the timing of which was new for our stakeholders.

Table 7. Results of best practices tracking across the first two years of the APCD project.

Framework elements such as best practices are foundational to the support of use cases and are essential to accomplishing the project’s objectives. As noted in the APCD Development Manual, obtaining funding for further development is important. Collaborative program evaluation provided additional data, processes, and value to facilitate successful completion of the project. The evaluation process will benefit stakeholders by improving online pricing and cost transparency reports for consumers, employers, researchers, and the public in Utah.

The additional methods used in Utah may be beneficial for other states developing an APCD, especially when high-value use cases and HIT (such as analytic and other software) are being developed and used. The collaborative approach requires a significant number of meetings with stakeholders, and this may not be feasible for all projects. Most importantly, best practices can play an important role in helping high-risk, large-scale HIT projects achieve success.

A collaborative program evaluation approach, including the use of best practices in developing and implementing enhancements for APCDs, builds on the foundation provided by the APCD Development Manual [1]. APCD development can be a dynamic process involving many constituent stakeholders. Because of the evolving nature of APCD development, the development team benefits from continual engagement of stakeholders and bi-directional feedback loops. Stakeholders found that the added use of best practices, logic models, and frequent feedback to practitioners facilitated the project’s success. Since the Collaborative Evaluation Model served a structural purpose, it was transparent to the project teams. Based on these results, when developing and improving APCDs, teams should consider using logic models, a collaborative evaluation process, as well as best practices in security and privacy, large-scale HIT development, software development, and data quality.

7. Conclusion

The use of a collaborative approach in this APCD evaluation included key methods and tools, recommendations from the APCD Development Manual, the use of a collaborative evaluation model, logic models, and development and use of best practices as measures. Stakeholders felt that our transparent evaluation efforts facilitated the project’s success.


This publication is funded by CMS Grants to States to Support Health Insurance Rate Review and Increase Transparency in Health Care Pricing, Cycle III (Grant Number: 1 PRPPR140059-01-00), through a subcontract with Utah Department of Health Office of Health Care Statistics.

Conflicts of Interest

No conflict of interest is declared by the authors.

Human Subject Research

Waiver for human subject research by IRB # 00070963.

Appendix 1. Snippets of Feedback from Interviews of Key Informants about Their Experience with the APCD



[1] Porter, J., Love, D., Costello, A., Peters, A. and Rudolph, B. (2015) All-Payer Claims Database Development Manual: Establishing a Foundation for Health Care Transparency and Informed Decision Making.
[2] APCD Council (2018) Interactive State Report Map. UNH, the APCD Council, and NAHDO.
[3] Gross, K., Brenner, J.C., Truchil, A., Post, E.M. and Riley, A.H. (2013) Building a Citywide, All-Payer, Hospital Claims Database to Improve Health Care Delivery in a Low-Income, Urban Community. Population Health Management, 16, S20-S25.
[4] Green, L., Lischko, A. and Bernstein, T. (2016) Realizing the Potential of All-Payer Claims Databases.
[5] Love, D., Custer, W. and Miller, P. (2010) All-Payer Claims Databases: State Initiatives to Improve Health Care Transparency.
[6] Miller, P.B., Love, D., Sullivan, E., Porter, J. and Costello, A. (2010) All-Payer Claims Databases: An Overview for Policymakers.
[7] Peters, A., Sachs, J., Porter, J., Love, D. and Costello, A. (2014) The Value of All-Payer Claims Databases to States. North Carolina Medical Journal, 75, 211-213.
[8] Fisher, E.S., Shortell, S.M., Kreindler, S.A., Van Citters, A.D. and Larson, B.K. (2012) A Framework for Evaluating the Formation, Implementation, and Performance of Accountable Care Organizations. Health Affairs, 31, 2368-2378.
[9] Kaplan, B. and Harris-Salamone, K.D. (2009) Health IT Success and Failure: Recommendations from Literature and an AMIA Workshop. Journal of the American Medical Informatics Association, 16, 291-299.
[10] McManus, J. and Wood-Harper, T. (2007) Understanding the Sources of Information Systems Project Failure. Journal of the Management Services Institute, 51, 38-43.
[11] Utah Office of Health Care Statistics (2018) Utah All Payer Claims Database: Description and Background.
[12] Utah Health Data Authority Act (2014) 26 Utah Code Ann. Chap. 33a. 103.
[13] Health Insight Utah (2018) Utah Partnership for Value-Driven Healthcare.
[14] Rodriguez-Campos, L. and Rincones-Gomez, R. (2013) Collaborative Evaluations: Step-by-Step. 2nd Edition, Stanford University Press, Stanford.
[15] Fitzpatrick, J.L., Sanders, J.R. and Worthen, B.R. (2010) Program Evaluation: Alternative Approaches and Practical Guidelines. 4th Edition, Pearson, New York.
[16] Sandoval, J.A., Lucero, J., Oetzel, J., Avila, M., Belone, L., Mau, M., Pearson, C., Tafoya, G., Duran, B., Rios, L.I. and Wallerstein, N. (2011) Process and Outcome Constructs for Evaluating Community-Based Participatory Research Projects: A Matrix of Existing Measures. Health Education Research, 27, 680-690.
[17] Porter, J., Love, D., Peters, A., Sachs, J. and Costello, A. (APCD Council) (2014) The Basics of All-Payer Claims Databases: A Primer for States.
[18] JSI Research & Training Institute, Inc. on Behalf of the Vermont Asthma Program (2014) The What, Who, Why, and How of All-Payer Claims Databases.
[19] Minnesota Department of Health (2015) MN APCD All Payer Claims Database.
[20] New York State Department of Health (2015) All Payer Database.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.