A Schematic Simulation for Health Economic Feasibility Studies: Results from a Model for Cardiac Remote Monitoring

Abstract

The market for active implants and biosensors is of high economic and medical interest. As health economic considerations come into focus for business planning and reimbursement, valid and flexible economic feasibility studies become more important. Unfortunately, the literature mostly provides isolated economic views on specific aspects, such as cost savings from reduced rehabilitation in a particular patient cohort. To make planning and technology value negotiation more effective and more valid, a methodology was developed to collect relevant data from different studies and normalize it to a common set of parameters, applied here to the field of cardiac monitoring in a mixed example population with either simple external weight, ECG and blood-pressure measurement or implanted devices for cardiac monitoring. The target entities taken into account by the simulation model were the impacts on heart attack, stroke, heart failure and the process of implant monitoring. The simulation was run on an example population of 500 patients with specific morbidity criteria. The health economic value was calculated over a period of three years and was split into a technology effectiveness measurement in quality-adjusted life years (QALYs) and a "cost-saving part". QALYs were chosen as the technology effectiveness parameter for a combined and weighted mortality and morbidity reduction. Allocating 24.000 Euro to a saved QALY, 42% of the value would be allocated to QALYs, i.e. money spent for gained lifetime years. The remaining 58% would be the different real cost savings: a per-patient gross saving of 3.308 € per year would result for that part (21% on heart attack, 3% on stroke, 68% on heart failure and 8% on implant monitoring). Up-to-date studies do not provide a simple mechanism to allow custom-tailored health economic feasibility study results in terms of other specific population mixes or outcome parameters. Target audiences for the methodology of the described simulation are payors and solution providers targeting a specific patient population or specific telemedical situations. This way, product development can address market-related needs more specifically and healthcare providers can compare different outcome parameters for the given entities.


Elsner, C., Boriani, G., Häckl, D., Desch, S. and Thiele, H. (2016) A Schematic Simulation for Health Economic Feasibility Studies: Results from a Model for Cardiac Remote Monitoring. Open Journal of Modelling and Simulation, 4, 118-128. doi: 10.4236/ojmsi.2016.43011.

Received 26 June 2016; accepted 25 July 2016; published 29 July 2016

1. Introduction

Several papers discuss the value of simulations in healthcare: in particular, the rising pressure to ensure the most efficient and effective use of limited health service resources brings decision makers and researchers to modelling solutions [1] . These solutions are far easier to use for sensitivity analyses, parameter changes and adaptations of settings, without the need for a new, large clinical trial. A systematic review also shows that the field of Health Technology Assessment in particular is a fertile area of research in economic modelling, which yields useful insights into the application of these techniques [2] .

The literature also points out that only few people in most healthcare organisations possess the ability to apply these modelling techniques. Further, there needs to be a greater understanding of how and where simulations can support decision-making if policy makers are to become "intelligent customers of the technology" [3] . If models are to have an impact in improving health, their outputs must be readily accessible and relevant.

HTA, let alone simulation, on the topic of cardiac monitoring does not seem to be an integral part of device development in most research groups or companies. Looking, in contrast, at the market with its reimbursement regulations and the payors' decision-makers, health economic feasibility analysis becomes an important topic for providers and payors.

The parties that have "health economic" studies as part of their business-planning process normally use in-house solutions based on a very basic set of considerations. Typically, the evaluation relies on "landmark publications" and "average populations" [2] . Such plans are good enough for a plain bottom-line argument in a management decision, but they start to fail for specific questions, e.g. the cost savings accruing to single parties ("How much money is saved due to altered drug therapy with the device?"). In addition, sensitivity analysis (e.g. taking the worst-case studies in the market, how much would the technology still save?) and the adaptation to mixed target populations are not possible.

The EU-funded Med Tec HTA joint project [4] points out the high relevance of simulations of health economics and of technology diffusion and adoption. In this way, technology and study findings can be lifted from a "study population" impact perspective to a "bird's eye" perspective, e.g. in terms of relevance for global or specific population mixes, targeted outcome parameters or whole countries and geographic areas. Up-to-date studies mostly do not provide this kind of mechanism to allow such custom-tailored health economic and/or feasibility results.

This paper describes the methodology for extracting and combining study information from different sources to generate a simulation for the described purpose, using cardiac remote monitoring for different purposes as an example. It follows the guideline and checklist given by the FDA and by Fone et al. [5] for best-practice healthcare simulations. Target audiences for the methodology of the described simulation are hospitals, payors and solution providers targeting a specific patient population, specific telemedical situations and/or payor negotiations. The tool tries to overcome the problems described above, especially for the field of cardiac remote monitoring, and to provide a reusable, valid and common data model for related health economic questions and product-development considerations.

2. Materials & Methods: Simulation Structure/Methodology

The modelling itself relies on a recently published methodology [6] . For the given model, this methodology was further refined and implemented in more detail.

Simulations and modelling are defined as a "...replicable sequence of computations used for generating estimates of quantities of concern [...] based on data from primary and/or secondary sources…" [7] and were recommended in the modernization act of the Food and Drug Administration (FDA) at the end of the 1980s as a valuable tool to support healthcare and social policy decisions. According to the FDA, simulations can also be a kind of "evidence based medicine" (EBM), which denotes the conscientious, explicit and judicious use of the current best external evidence in making decisions on the care of individual patients [8] . This definition implies that EBM is not limited to studies and meta-analyses. The general concept of the expected value of information (VOI) from decision theory [9] is also transferable to EBM: the VOI is defined as the difference between the expected consequences (benefits) of a decision made under consideration of specific information and the expected consequences (benefits) of that decision made without that information [10] . Because of these high expectations, several standards for testing and validating a healthcare simulation had to be met [11] . Figure 1 shows these accepted standards for the four dimensions of simulation model testing. It includes different functionality and plausibility tests of the simulation model.
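To make the decision-theoretic notion concrete, the value of perfect information can be written in the following standard form. This is a textbook formulation, not an equation taken from the cited sources; the notation (decision option d, uncertain parameters θ, net benefit NB) is ours.

```latex
% Standard decision-theoretic form of the expected value of perfect information
% (notation ours: d = decision option, \theta = uncertain parameters, NB = net benefit)
\[
\mathrm{EVPI}
  = \mathbb{E}_{\theta}\!\left[\max_{d} NB(d,\theta)\right]
  - \max_{d}\,\mathbb{E}_{\theta}\!\left[NB(d,\theta)\right]
\]
```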

The technical validation is comparable to common software function testing and can be done by automated routines and robots. The steps of plausibility testing and cross-over validation need extensive discussions, workshops and expert talks to be performed properly. The special step of "validation of prediction" is more complex and needs series of data on which the prediction ability of the model is tested.

Building the simulation itself is also a four-step process. Figure 2 shows this schematic process in an overview and states the resources used and the results after each step.

In a first step, the different data sources were identified and structured in a central repository. Normally, as in this case, five different kinds of data are integrated into the dataset:

1) Cost data from common cost tables on wages;

2) Treatment costs and costs of equipment;

3) Data on process workflows, mostly from specific studies (e.g. how long does it take to perform an ECG?);

4) Data on population epidemiology, such as incidence and prevalence rates for the targeted illnesses;

5) Data on the specific effects of the technology to be investigated in the simulation.

In a second step, the data is "normalized": all input and outcome values are projected onto a common "normalized" set of parameters. This normalization process would, for example, project different studies comparing the impact of a heart failure monitoring system onto the two parameters "number of hospitalisations p.a." and "length of stay", with and without the technology. In this way, values become comparable across studies dealing with a similar technology. Further normalisation steps also integrate, for example, regression analyses that model the correlation between New York Heart Association (NYHA) grades for the severity of heart failure and the savings in different outcome parameters due to cardiac monitoring, as reported in different studies. An example is given in Figure 3.
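As an illustration of this normalisation and regression step, the following minimal Python sketch projects study results onto the two common parameters and fits a linear regression of relative savings against NYHA grade, analogous to Figure 3. The study data, field names and the assumed cost per hospital day are invented for the example and are not the values used in the actual model.

```python
# Minimal sketch of the normalisation step (hypothetical data and field names).
# Each study is reduced to the two common parameters "hospitalisations p.a."
# and "length of stay", with and without the monitoring technology, and a
# linear regression of relative savings against NYHA grade is fitted.
from statistics import linear_regression  # Python >= 3.10

studies = [
    # id, NYHA grade of cohort, hospitalisations p.a. (control / telemonitoring),
    # mean length of stay in days (control / telemonitoring)
    {"id": "A", "nyha": 2, "hosp": (1.8, 1.3), "los": (9.5, 8.0)},
    {"id": "B", "nyha": 3, "hosp": (2.6, 1.7), "los": (11.0, 8.5)},
    {"id": "C", "nyha": 4, "hosp": (3.4, 2.0), "los": (12.5, 9.0)},
]

def normalised_saving(study, cost_per_day=600.0):
    """Project a study onto a relative cost saving per patient-year."""
    hosp_c, hosp_t = study["hosp"]
    los_c, los_t = study["los"]
    cost_control = hosp_c * los_c * cost_per_day
    cost_tele = hosp_t * los_t * cost_per_day
    return (cost_control - cost_tele) / cost_control

grades = [s["nyha"] for s in studies]
savings = [normalised_saving(s) for s in studies]

# Regression of relative saving against NYHA grade (cf. Figure 3).
slope, intercept = linear_regression(grades, savings)
print(f"relative saving ≈ {intercept:.2f} + {slope:.2f} * NYHA grade")
```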

The data is then extracted in eXtensible Markup Language (XML) format and stored in the database of the simulation, where different sets can be selected and correlations are stored and connected in the core model of the software tool.
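The XML schema of the actual repository is not published; the sketch below merely illustrates, with invented element and attribute names, how a normalised study record could be serialised with Python's standard library before being stored in the simulation database.

```python
# Illustrative only: invented element names for a normalised study record,
# serialised to XML with Python's standard library.
import xml.etree.ElementTree as ET

record = ET.Element("study", id="A", entity="heart_failure")
ET.SubElement(record, "parameter", name="hospitalisations_pa",
              control="1.8", telemonitoring="1.3")
ET.SubElement(record, "parameter", name="length_of_stay_days",
              control="9.5", telemonitoring="8.0")

print(ET.tostring(record, encoding="unicode"))
```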

Figure 1. The four dimensions of simulation validation.

Figure 2. Approach to build the simulation.

Figure 3. Example regression analysis over 6 studies for the correlation of NYHA grade and savings due to the use of cardiac monitoring.

In a third step, the scenario analysis takes place for different adaptations of the simulation. In this step, the technology to be used or compared is specified, the reference and benchmark publications are chosen, and the simulation is run with different sets of minimum and maximum parameters from the chosen publications.
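A minimal sketch of this scenario step is given below: the model is simply re-run with the minimum ("worst case"), mean ("base case") and maximum ("best case") effect sizes reported by the selected benchmark publications. The function, the effect sizes and the baseline cost are placeholders, not the calibrated values of the actual tool.

```python
# Hypothetical sketch of the scenario analysis: re-run the model with the
# minimum, mean and maximum effect sizes found in the selected publications.
def run_model(effect_size):
    """Placeholder for the core model; returns savings per patient-year."""
    baseline_cost = 5000.0  # assumed yearly cost per patient without technology
    return baseline_cost * effect_size

published_effects = [0.12, 0.21, 0.35]  # relative savings reported by studies (invented)
scenarios = {
    "worst case": min(published_effects),
    "base case": sum(published_effects) / len(published_effects),
    "best case": max(published_effects),
}
for name, effect in scenarios.items():
    print(f"{name}: {run_model(effect):.0f} EUR saved per patient-year")
```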

In the fourth and last step, the simulation is custom-tailored to a specific population mix according to the parameters delivered, e.g. by the payor, and the impact on the whole population and on single patients is calculated.

The simulation itself contains three software components: a pre-processing module, a core module running a Monte Carlo simulation, and a post-processing module. The pre-processing module generates datasets of all individuals as matrices of their different states, such as presence of atrial fibrillation, NYHA grade and other parameters. The simulation is based on a multistate model in the core entity: the life course of the 500 individuals is conceptualized as a sequence of transitions between different possible states. The transition intensities presented in Section 3 of this paper and the starting population described above are the key characteristics of this microsimulation. The model was designed as a classical continuous-time microsimulation model, with a focus on the trajectories of the individual entities which compose the aggregate model. In detail, this means that each individual's matrix is computed and altered in every simulated time period, e.g. a month, and the derived costs and outcomes are summarized. The post-processing module then aggregates these outcomes and presents them graphically in the desired manner at the end of all computed simulation loops.
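The following is a heavily simplified sketch, under assumed transition probabilities and costs, of how such a monthly-step microsimulation loop over individual trajectories can be organised; the real core module uses the calibrated transition intensities and the full state matrices described above.

```python
# Hedged sketch of the monthly-step microsimulation loop (all probabilities,
# costs and states are illustrative, not the model's calibrated values).
import random

random.seed(42)

MONTHS = 36                    # three-year horizon, monthly time steps
P_HOSPITALISATION = 0.020      # monthly hospitalisation probability (illustrative)
P_DEATH = 0.004                # monthly mortality probability (illustrative)
COST_HOSPITALISATION = 4200.0  # EUR per hospitalisation (illustrative)

def simulate_patient():
    """Run one individual trajectory and return (cost, months lived)."""
    cost, alive_months = 0.0, 0
    for _ in range(MONTHS):
        if random.random() < P_DEATH:
            break
        alive_months += 1
        if random.random() < P_HOSPITALISATION:
            cost += COST_HOSPITALISATION
    return cost, alive_months

# Run the loop over the 500-patient population and aggregate the outcomes,
# as the post-processing module would.
population = [simulate_patient() for _ in range(500)]
total_cost = sum(c for c, _ in population)
total_years = sum(m for _, m in population) / 12
print(f"total cost: {total_cost:,.0f} EUR, life-years lived: {total_years:,.1f}")
```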

3. Materials & Methods: Selection of Used Data

For the simulation model, a typical population mix was designed: the population had a spread of heart failure according to NYHA grading (I: 20%, II: 35%, III: 22%, IV: 23%), a male-to-female mix of 50:50, and a median age of 75 with a standard deviation of 13. 150 of the patients had a device implanted; 350 would have external cardiac monitoring implemented. The population had an atrial flutter (AF) prevalence of 6% and a 6% yearly risk of myocardial infarction (MI).
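As a sketch of how the pre-processing module could instantiate this starting population, the snippet below draws 500 synthetic patients with the stated NYHA distribution, sex mix, age spread, implant share and AF prevalence. The use of a normal age distribution and the concrete data structure are our own assumptions, not the published pre-processing logic.

```python
# Sketch of the pre-processing step: generate the 500-patient starting
# population with the distribution stated above (implementation details
# such as the normal age distribution are assumptions).
import random

random.seed(1)

NYHA_GRADES = ["I", "II", "III", "IV"]
NYHA_WEIGHTS = [0.20, 0.35, 0.22, 0.23]

def make_patient(has_implant):
    return {
        "nyha": random.choices(NYHA_GRADES, weights=NYHA_WEIGHTS)[0],
        "sex": random.choice(["female", "male"]),   # 50:50 mix
        "age": round(random.gauss(75, 13)),         # centred on 75, spread 13
        "implant": has_implant,
        "af": random.random() < 0.06,               # 6% AF prevalence
    }

population = [make_patient(i < 150) for i in range(500)]  # 150 implanted, 350 external
print(sum(p["implant"] for p in population), "patients with implanted device")
```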

The model implemented the four dimensions of effects shown in Figure 4.

The technology investigated comprised a "simple" setting of daily monitoring of blood pressure and weight in the 350 patients without an implanted device, and the same parameters monitored via the given internal biosensor in the 150 patients with an implanted device. The selected cost parameters represent a typical cross-section of publications covering the outcome entities chosen for the model with the best fit and quality. They were chosen primarily to demonstrate the methodology of the simulation work. The key point of the given simulation is that interchanging single parameters, exchanging single publications and re-simulating the outcomes is easy.

Figure 4. The 4 effects on effectiveness of care & corresponding patients in the population.

Table 1. Cost parameters used.

4. Results from the Simulation

The results were structured into different "outcome entities" according to the typical target parameters in the processes related to the different illnesses. While most parameters were direct cost savings in terms of technology effectiveness, QALYs were also chosen in the myocardial infarction, stroke and heart failure models as a combined mortality and morbidity reduction parameter. To graph the QALYs together with the cost savings, they were translated into Euros under the general assumption of 24.000 Euro per QALY [11] . Every cost-saving outcome entity was broken down to a Euro value to obtain maximum comparability within the model. The "sub-model" parameters implemented the cost of the typical in-hospital treatment process, derived from a mixture of normal-ward and intensive-care days in hospital. In the device remote-monitoring sub-model, these costs were derived from a specific process and workflow model for the device follow-up [15] . Additionally, costs for medication, transportation and rehabilitation were drawn from the database and the given publications, as different approaches typically describe these entities as additional cost drivers. Payors are normally able to track these cost blocks specifically and are interested in monitoring them. In the stroke model it was not possible to derive specific transportation costs, so this block was skipped.
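The monetisation used for graphing can be summarised by the following relation; the symbols are ours and merely restate the assumption of 24.000 Euro per saved QALY described above.

```latex
% Per-patient value as graphed in the model (notation ours):
% monetised QALY gain plus direct cost savings per outcome entity
\[
V_{\text{total}}
  = \underbrace{\Delta\mathrm{QALY} \times 24.000~\text{EUR}}_{\text{effectiveness part}}
  + \underbrace{\Delta C_{\text{direct}}}_{\text{cost-saving part}}
\]
```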

The results are shown separately for each entity in Figures 5(a)-(d) and summarized over all entities in Figure 6. For heart failure and myocardial infarction, the QALYs and the hospital treatment each account for the biggest "blocks". For stroke, interestingly, medication and QALYs account for over 70% of the savings. For device remote monitoring, the biggest block, at over 70%, lies in savings in the clinical processes. Overall, the results show quite different spectra for the different entities.

5. Analysis and Discussion

Looking at the specific results of the simulated population and the value generated through "simple" telemonitoring via implanted and external devices, the technology delivers a considerable saving. The technology value amounts to 275,67 € per patient per month; per year this sums up to 3.308,04 € per patient-year. This would be the technology's yearly value derived from the calculated components.

Unfortunately, "real" negotiations often differentiate strictly between the types of costs that are saved. Looking at these details and the spread of the savings in the given population, overall 42% of the generated value was contributed via QALYs. Interestingly, these QALY savings mostly accrue in the field of heart failure (28% of the total savings), followed by myocardial infarction with 13% and stroke with just 1% of the total saved amount. Looking at the source data of the simulation for the specific population, the QALYs are mostly a hard indicator for saved life years (36%) and only to a small extent for "quality-adjusted" years due to a smaller burden on the patients (6%). In a negotiation with payors, this "mortality reduction" through the technology can be a very good "hard argument".

In terms of a hard cost-effectiveness analysis that does not take QALYs into account, the break-even point for the ideal device for the given mixed population is 1.918,66 € per year: below that yearly cost, the technology would be cost-effective for the given population.
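The arithmetic behind these figures can be reconstructed from the stated monthly value and the 58% non-QALY share (our reconstruction, same numbers as above):

```latex
% Reconstruction of the stated yearly value and the 58% break-even share
% (German number format, EUR)
\[
275{,}67~\text{EUR/month} \times 12 = 3.308{,}04~\text{EUR per patient-year},
\qquad
0{,}58 \times 3.308{,}04~\text{EUR} \approx 1.918{,}66~\text{EUR per patient-year}
\]
```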

Negotiation outcomes mostly prove that only "real money" saved for a payor is relevant in negotiating a higher reimbursement for a new technology. Looking at the "real direct savings" in the model, these spread over medication costs (3%) and transportation costs (10%), which would mean a business value of 35,84 € per month per patient. The remaining 45% of the saved money is a mixture of the different treatment costs in hospital and rehabilitation facilities.

Figure 5. (a)-(d) Distribution of cost savings for myocardial infarction, stroke, heart failure, and efforts on implanted device monitoring.

Figure 6. Distribution of cost savings over the complete model for all entities.

For those 45%, the argumentation will be complex, as the target groups, payor and provider, will have different views.

Taking into account that negotiation calculations normally focus on the stated "real savings" and "reduced resource use" (which together sum up to 159,89 € p.m.) and, unfortunately, do not take QALYs into account, the value of 159,89 € p.m. may represent only part of the technology's complete business value. It may, however, be a reasonable mixture of the "positive" and "negative" negotiation views stated before and a kind of sharing of earnings and real savings between the parties to this negotiation.
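Again as our reconstruction, the stated monthly figures follow directly from the 275,67 € per patient-month and the respective shares:

```latex
% Shares of the 275,67 EUR monthly per-patient value (our reconstruction):
% medication (3%) + transportation (10%), and all direct savings (58%)
\[
(0{,}03 + 0{,}10) \times 275{,}67~\text{EUR} \approx 35{,}84~\text{EUR p.m.},
\qquad
0{,}58 \times 275{,}67~\text{EUR} \approx 159{,}89~\text{EUR p.m.}
\]
```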

Up-to-date studies are "flat" models and do not provide a simple mechanism to allow custom-tailored health economic feasibility study results, e.g. in terms of other specific population mixes or targeted outcome parameters. With the interchangeable model presented here, interchanging single parameters, exchanging single publications and re-simulating the outcomes is easy.

Target audiences for the methodology of the described simulation are payors and solution providers targeting a specific patient population or specific telemedical situations.

6. Conclusions

With the tool and methodology presented, a custom-tailored health economic feasibility study was produced, e.g. in terms of a specific population mix and/or targeted outcome parameters (e.g. saved cost at the provider, saved transportation cost, etc.).

Discussions with experts and the presentation of business proposals showed that the developed model helped considerably to reveal the detailed effects of introducing a new technology. Overall, it was able to ease decisions by healthcare payors to "buy in" to a new technology.

In these discussions, it also turned out that the major problems for innovations in the healthcare sector were:

1) Missing incentive systems due to a missing interlink between the budgetary systems and

2) The risk aversion and lack of traceability of the payors to change their systems and give innovations a chance.

It also turned out, however, that the problem of missing incentive systems due to the missing interlink between the budgetary systems could be reduced through the given transparency, but not fully eliminated. Additionally, even if the technology provided potential savings, the risk aversion and lack of traceability of the payors when changing their systems would still not enable them to "buy in" to the new technology as one would normally expect in such a business proposal.

Recommended scenarios for using such a simulation, from the authors' point of view, therefore include the following cases:

1) Missing evidence for a business case or use case, where results can only be predicted from a combination of different studies.

2) Situations in which obtaining evidence is not possible, would mean enormous effort, or is not ethically feasible.

3) Questions targeting a broad variation of input and output parameters, e.g. as asked by payors.

4) Fields with a high number of publications and studies, where an overview of variations and outcomes is hard to obtain.

As payors could also influence the incentive systems, a further development in progress for the simulation focuses on implementing reassurance algorithms into the financial model of the simulation, which may help against the given risk aversion. Further developments may also include geoinformation. This would allow, for example, projections of the impact of telemedical measures not only for different target populations but also for different regions, population densities and medical infrastructures.


Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Pitt, M., et al. (2016) Systems Modelling and Simulation in Health Service Design, Delivery and Decision Making. BMJ Quality & Safety, 25, 38-45. http://dx.doi.org/10.1136/bmjqs-2015-004430
[2] Brailsford, S., et al. (2009) An Analysis of the Academic Literature on Simulation and Modelling in Health Care. Journal of Simulation, 3, 130-140. http://dx.doi.org/10.1057/jos.2009.10
[3] Pearson, M., et al. (2013) Involving Patients and the Public in Healthcare Operational Research—The Challenges and Opportunities. Operations Research for Health Care, 2, 86-89.
http://dx.doi.org/10.1016/j.orhc.2013.09.001
[4] http://www.medtechta.eu/wps/wcm/connect/site/medtechta/home
[5] Fone, D., et al. (2003) Systematic Review of the Use and Value of Computer Simulation Modelling in Population Health and Health Care Delivery. Journal of Public Health, 25, 325-335.
http://dx.doi.org/10.1093/pubmed/fdg075
[6] Elsner, C. and Häckl, D. (2014) Health Economic Value Generation in the Azerbaijan Republic: Simulated Results for an Integrated Telecardiology Care Program. Journal of Economic Sciences: Theory & Practice, 71, 127-139.
[7] National Research Council (1991) Improving Information for Social Policy Decisions: The Uses of Microsimulation Modeling, Vol. 1, Review and Recommendations. National Academy Press, Washington DC.
[8] Sackett, D.L., Rosenberg, W.M., Gray, J.A., Haynes, R.B. and Richardson, W.S. (1996) Evidence Based Medicine: What It Is and What It Isn’t. British Medical Journal, 312, 71-72.
http://dx.doi.org/10.1136/bmj.312.7023.71
[9] (2015). http://en.wikipedia.org/wiki/Value_of_information
[10] Raiffa, H. (1968) Introductory Lectures on Choices under Uncertainty. Decision Analysis, 10, 345-356
[11] Weinstein, M.C., O’Brien, B., Hornberger, J., et al. (2003) Principles of Good Practice for Decision Analytic Modeling in Health-Care Evaluation: Report of the ISPOR Task Force on Good Research Practices-Modeling Studies. Value Health, 6, 9-17. http://dx.doi.org/10.1046/j.1524-4733.2003.00234.x
[12] Dakin, H., Devlin, N., Feng, Y., et al. (2014) The Influence of Cost-Effectiveness and Other Factors on NICE Decisions. HERC Research Paper, Vancouver.
[13] Selection of Column “Normal Ward” in MDC 5 DRGs on 1st of June 2015.
http://www.g-drg.de/cms/G-DRG-System_2013/Abschlussbericht_zur_Weiterentwicklung_des_G-DRG-Systems_und_Report_Browser/Report-Browser_2011_2013
[14] Kielblock, B., Frye, C.H., Kottmair, S., Hudler, T.H., Siegmund-Schultze, E. and Middeke, M. (2007) Einfluss einer telemedizinisch unterstützten Betreuung auf Gesamtbehandlungskosten und Mortalität bei chronischer Herzinsuffizienz. Deutsche Medizinische Wochenschrift, 132, 417-422.
http://dx.doi.org/10.1055/s-2007-970350
[15] Elsner, C.H., Sommer, P., Piorkowski, C., et al. (2006) A Prospective Multicenter Comparison Trial of Home Monitoring against Regular Follow-Up in MADIT II Patients: Additional Visits and Cost Impact. IEEE Computers in Cardiology, Valencia, 17-20 September 2006, 241-244.
[16] Hamby, L., Weeks, W.B. and Malikowski, C. (2000) Complications of Warfarin Therapy: Causes, Costs, and the Role of the Anticoagulation Clinic. Effective Clinical Practice, 3, 179-184.
[17] Kolominsky-Rabas, P., Heuschmann, P., Marschall, D., et al. (2006) Lifetime Cost of Ischemic Stroke in Germany: Results and National Projections from a Population-Based Stroke Registry the Erlangen Stroke Project. Stroke, 37, 1179-1183. http://dx.doi.org/10.1161/01.STR.0000217450.21310.90
[18] Gandjour, A., Kleinschmit, F. and Lauterbach, K.W. (2002) European Comparison of Costs and Quality in the Treatment of Acute Myocardial Infarction. European Heart Journal, 23, 858-868.
http://dx.doi.org/10.1053/euhj.2001.3080
