SmartCare value case development approach

An ASSIST assessment is carried out in four successive steps, following an implementation and deployment cycle that leads from initial idea formulation to deployment, either at pilot scale or in mainstreamed service provision.

A detailed description of the assessment approach can be found in D9.1 "SmartCare First Report on dissemination and exploitation activities".

Step 1 – Stakeholder identification

Work started with consolidating the initial assumptions (elaborated in WP1) made by the deployment sites as to which stakeholders would play a role in the service.

As a general rule, the value case should cover all stakeholders that are

  • involved in the service, i.e. playing an active role; or
  • affected by the service, i.e. in a passive manner.

Both cases, active and passive, are characterised by the stakeholder experiencing some kind of impact, positive or negative, due to the new or changed service.

As the first step in the process, the stakeholder identification was conceived as a pragmatic exercise, informed by the stakeholders at the site. Telephone conferences were organised to arrive at reasonable assumptions as to how the new service might affect each stakeholder involved. Usually, it took several sessions until all stakeholders were identified. The process was supported in a one-to-one manner by the task leader, who brought in supporting evidence from earlier projects or the literature to help formulate ideas, or to check existing ideas against proven practice. In that sense, the work was largely reciprocal, combining local context and pre-existing information.


Step 2 – Impact identification

The second step was to identify all relevant positive and negative impacts for each stakeholder, as well as to define suitable indicators to measure each impact. Again, the final shape of the impact model and indicator set depends largely on the local context. On the one hand, the indicators need to make sense in relation to the locally implementable service configuration, and to any given framework conditions that cannot be changed. On the other hand, populating the indicator set with data needs to be practically feasible under the given circumstances.

Picking up the results of Step 1, work now was more systematic, with a view to ensuring full coverage of all relevant impacts and a correct identification of indicators for each. This was achieved by employing a causal chain linking the outputs and outcomes of the service to its impacts. For example, the integration of an EHR system into existing care processes (output) makes certain information available to all professionals involved in the process (outcome). This in turn may lead to increased effort for data entry and maintenance (negative impact) as well as to increased efficiency in service provision due to improved availability of relevant data (positive impact). These impacts then create the value of the outputs and outcomes for each stakeholder. Whereas the outputs and outcomes are neutral, impacts are positive or negative. Indicators were then defined that allow each impact to be measured. For the example just given, an indicator for efficiency gains could measure the time spent by a doctor on a patient consultation before and after the introduction of the EHR. The efficiency gain would be commensurate with the time saved.
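The consultation-time indicator just described can be sketched in a few lines of code. All figures below are invented purely for illustration; the actual measurement protocol and sample sizes would be defined per site.

```python
# Hypothetical illustration of the efficiency-gain indicator: consultation
# time saved after introducing an EHR. All figures are invented.

def mean(values):
    """Arithmetic mean of a list of measurements."""
    return sum(values) / len(values)

# Consultation durations in minutes, sampled before and after the EHR rollout.
before = [22, 25, 20, 24, 23]   # baseline measurements
after = [18, 19, 17, 20, 18]    # measurements with the EHR in place

time_saved = mean(before) - mean(after)    # minutes saved per consultation
relative_gain = time_saved / mean(before)  # efficiency gain as a share

print(f"Time saved per consultation: {time_saved:.1f} min")
print(f"Relative efficiency gain: {relative_gain:.0%}")
```

In a real assessment, the before/after samples would come from the time sheet data described under Step 3, and the monetary value of the gain would depend on how (and whether) the saved time is realised, as discussed below.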

Sometimes non-monetary impacts need to be realised to be of utility for a stakeholder. Turning time savings into cost savings, for example, may necessitate a reduction in staff. Alternatively, in a growing service, efficiency gains can lead to the staff base growing more slowly than the client base. Usually, there are different ways to realise a given benefit, each with its own knock-on effects (e.g. public protest against staff lay-offs). Because of the high number of alternative ways of realising benefits, as well as their sensitivity to financial and political framework conditions, they are not included in the calculations of the value model. Instead, options for benefit realisation are discussed in the textual analysis of the value model (see Step 4). As with Step 1, impacts and indicators were checked against the knowledge gained from previous implementations or other sources.


Step 3 – Data collection

Data to populate the indicators defined in Step 2 usually comes from different sources. Primary sources include all data collected directly in the course of the pilot, such as log data stored in ICT systems, administrative data, and time sheet data specifically gathered for the purpose of the project. In addition, end-user and staff-related data is usually gathered by means of a dedicated questionnaire administered towards the end of the pilot duration. Where necessary, secondary data will be used, e.g. derived from official statistics, published studies, or administrative databases.


Step 4 – The value case: strengths and weaknesses of the service

The final step of the approach focuses on analysing the quantified costs and benefits for each stakeholder. This includes the calculation of key performance measures such as “socio-economic return”, “economic return” and “breakeven point”.
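A minimal sketch of two of these performance measures is given below. The yearly cost and benefit streams are invented, and "economic return" is simplified here to a plain benefit-cost ratio; the actual SmartCare model may define these measures differently.

```python
# Minimal sketch of two performance measures, using invented yearly cost
# and benefit streams (in EUR) for a hypothetical service. Year 0 carries
# the initial investment; benefits ramp up as the service matures.

costs = [50000, 20000, 20000, 20000, 20000]
benefits = [5000, 20000, 35000, 45000, 50000]

def breakeven_year(costs, benefits):
    """First year (0-indexed) in which cumulative benefits exceed
    cumulative costs, or None if break-even is not reached in the horizon."""
    cumulative = 0
    for year, (c, b) in enumerate(zip(costs, benefits)):
        cumulative += b - c
        if cumulative > 0:
            return year
    return None

# Simplified "economic return": total benefits over total costs.
economic_return = sum(benefits) / sum(costs)

print(f"Benefit-cost ratio: {economic_return:.2f}")
print(f"Break-even in year: {breakeven_year(costs, benefits)}")
```

With these invented figures, break-even is only reached in the fifth year of operation, consistent with the long time spans discussed below for integrated care services.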

The analysis also includes identification of the key levers available to the pilot service for further optimising the value case under the given conditions.

Overall, the analysis of the results will allow the deployment sites to:

  • Identify benefit shifts: These occur frequently when new services are being introduced or existing ones are changed. Wherever such a change is to the disadvantage of a stakeholder, that stakeholder is likely to become a veto player who will reduce the overall utility and performance of the service, especially if that stakeholder holds a powerful role. To avoid creating veto players, it may be necessary to find additional (financial) incentives for stakeholders who experience costs but no immediate benefits from the service.
  • Justify investment: The analysis of the overall performance of the service will allow responsible service managers and other decision makers to prove that the investment (both in terms of money and time) is worthwhile.
  • Calculate break-even: When communicating the costs and benefits to the persons involved, it is important to understand when the benefits surpass the costs. This allows stakeholders to be prepared for a prolonged phase of investment, again both in terms of money (e.g. cost of equipment) and of time (e.g. staff time for training and adapting to the new way of working). In integrated care, as in health and care in general, services may often take a comparatively long time to arrive at break-even. Time spans of five to seven years are not uncommon. This is especially the case when a value case depends on full-scale utilisation of the service, as compared to a more limited pilot scale. A countermeasure can be to identify quick wins for stakeholders affected by delayed benefits and high, early costs.
  • Understand service impacts: The understanding of all impacts (including secondary and long-term effects) may offer a new perspective on the service that is led by an economic and strategic view. This is a value in its own right, because it complements a technical and organisational point of view and explains and predicts why stakeholders behave as they do.
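The benefit-shift check described in the first bullet above can be sketched as a simple per-stakeholder comparison. Stakeholder names and figures below are invented for illustration only.

```python
# Hypothetical sketch of the benefit-shift check: compute each stakeholder's
# net position and flag potential veto players, i.e. stakeholders whose
# costs exceed their benefits. All names and figures are invented.

stakeholders = {
    # name: (annual costs, annual benefits) in EUR
    "hospital": (30000, 55000),
    "GP practice": (12000, 4000),
    "patients": (0, 20000),
}

def potential_veto_players(stakeholders):
    """Stakeholders who bear more costs than they receive in benefits."""
    return [name for name, (cost, benefit) in stakeholders.items()
            if cost > benefit]

print("Potential veto players:", potential_veto_players(stakeholders))
```

A stakeholder flagged in this way would be a candidate for the additional (financial) incentives mentioned above, so that the benefit shift does not undermine the service as a whole.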