Selection of Indicators for Hospital Performance Measurement

A report on the 3rd and 4th Workshops
Barcelona, Spain, June and September 2003
This report has been prepared by:
Jérémy Veillard, Technical Officer
Ann-Lise Guisset, Technical Officer
Mila Garcia-Barbero, Head of WHO Office for
Integrated Health Care Services, Division of
CONTENTS
Page
1. INTRODUCTION .....................................................................................................................1
1.1. RATIONALE ...........................................................................................................................1
1.2. BACKGROUND.......................................................................................................................1
1.3. OBJECTIVES ..........................................................................................................................2
1.4. STRUCTURE OF THE REPORT...................................................................................................2
2. MATERIAL AND METHODS................................................................................................2
2.1. REVIEW OF THE LITERATURE .................................................................................................2
2.2. SURVEY IN 11 EUROPEAN COUNTRIES ...................................................................................3
2.3. PRE-SELECTION OF INDIVIDUAL INDICATORS .........................................................................4
3. RESULTS...................................................................................................................................4
3.1. SUB-DIMENSIONS OF THE OPERATIONAL AND CONCEPTUAL MODELS OF HOSPITAL
PERFORMANCE..............................................................................................................................4
3.1.1. It was agreed that prerequisites to the conceptual model were:...................................4
3.1.2. Conceptual and operational model ...............................................................................4
3.2. SELECTION OF INDICATORS ....................................................................................................7
3.2.2. Selection of indicators by dimension.......................................................................9
3.3 FEEDBACK OF RESULTS TO PARTICIPATING HOSPITALS ...................................................16
4. CONCLUSIONS..................................................................................................................17
4.1. SUMMARY OF PRODUCTS ................................................................................................17
4.2. RECOMMENDATIONS ON DATA RELATED ISSUES .............................................................18
4.2.1. Collection and quality control of hospital data...........................................................18
4.2.2. Data aggregation ........................................................................................................18
4.3. RECOMMENDATIONS ON ROLES AND RESPONSIBILITIES FOR PILOTING THE FRAMEWORK FOR
HOSPITAL PERFORMANCE ASSESSMENT ......................................................................................18
4.3.1. Beneficiaries................................................................................................................18
4.3.2. Project management/leadership..................................................................................19
4.3.3. Selection of indicators at local /national level............................................................19
4.3.6. Training implications ..................................................................................................20
4.4. GENERAL RECOMMENDATIONS FOR IMPLEMENTATION OF THE FRAMEWORK FOR HOSPITAL
PERFORMANCE ASSESSMENT.......................................................................................................20
4.5. THE STEPS FORWARD ...........................................................................................................20
ANNEX 1 .....................................................................................................................................22
ANNEX 2 .....................................................................................................................................25
ANNEX 3 .....................................................................................................................................27
ANNEX 4 .....................................................................................................................................32
EUR/03/5038066
page 1
1. Introduction
1.1. Rationale
The restructuring of health care services in several European countries aims to increase
accountability, cost-effectiveness and sustainability, to strengthen quality improvement
strategies, and reflects a growing interest in patient satisfaction. These reforms highlight a major
challenge throughout Europe: achieving efficient, high-quality hospitals. They demand
evidence-based policies and management strategies for hospital performance assessment. In this
context, the WHO Regional Office for Europe provides a flexible and comprehensive framework
called the Performance Assessment Tool for quality improvement in Hospitals (PATH). It includes:
Product 1. A conceptual model of performance (dimensions, sub-dimensions and how they
relate to each other),
Product 2. Criteria for the selection of indicators,
Product 3. Lists of indicators (including rationale, operational definition, data collection
issues and support for interpretation),
Product 4. An operational model of performance (how indicators relate to each other, and
also to explanatory variables and to standards),
Product 5. Strategies for feedback of results to hospitals, mainly through a “balanced
dashboard”,
Product 6. Strategies to foster benchmarking.
1.2. Background
This report summarizes the last two workshops in a series of four dedicated to building a
framework for hospital performance assessment.
The first two workshops on hospital performance assessment led to an agreement on the
objectives of the project, definitions of the main concepts (performance, quality, indicators, etc.),
identification of six dimensions of performance (product 1) and criteria for indicator selection
(product 2). The conceptual model encompasses six dimensions: clinical effectiveness, safety,
patient centeredness, responsive governance, staff orientation and efficiency. The criteria for
indicator selection are their importance and relevance to European hospitals, reliability and
validity (of each individual indicator and of the set of indicators as a whole), and burden of data
collection.
A preliminary list of indicators was established, based on an extensive review of the literature
and of current national and/or regional performance assessment projects (almost 300 indicators
were reviewed). The indicators were tested against the selection criteria described above and a
shortlist of indicators was drawn up.
The group decided to organize indicators into two “baskets”:
- a “core” basket gathering a limited number of indicators that are generally available,
applicable and valid, rely on the best scientific evidence, have data available in most
European countries and are responsive in different contexts; and
- a “tailored” basket gathering indicators proposed for use only in specific situations
because of variability of data availability, applicability to specific settings (e.g. teaching
hospitals, rural hospitals, etc.) or validity (cultural, financial, organisational contexts).
1.3. Objectives
Based on this previous work, the objectives of the third and fourth workshops were:
• To refine the conceptual model by clarifying sub-dimensions of performance,
• To select a core basket of indicators and propose a tailored list,
• To build an operational model of performance, and
• To discuss strategies for dissemination and follow-up of the framework, and more
specifically its pilot implementation.
1.4. Structure of the report
In this report we first describe the steps, material and methods used to achieve these objectives
(section 2: material and methods), then the main products of both workshops (section 3: results),
and conclude with a discussion of the next steps of the project, with a focus on the challenges
and opportunities for implementation (section 4: conclusions).
2. Material and methods
2.1. Review of the literature
The first step taken was the identification of indicators for sub-dimensions of performance not
covered, or only partly covered, by hospital performance assessment systems currently in use. In
this way, the list of potential indicators pre-selected during the second workshop was enlarged to
cover the sub-dimensions added during the further conceptualization phase, and to include
indicators used in research projects but not widely used by hospitals.
Next, an extensive review of the grey and scientific literature was performed. For each indicator,
evidence was collected on the rationale for use, prevalence, validity and reliability, current scope
of use, supposed and demonstrated relationships with other performance indicators, exogenous
factors and verification of standards.
The review of the literature showed that some dimensions and indicators, such as clinical
effectiveness, have been well researched and build on a scientific tradition of evaluation, while
others, such as responsive governance and efficiency, are less well represented in the literature
and tend to be based primarily on empirical evidence or expert judgement.
A distinction between “reflective” and “formative” indicators was drawn. Formative indicators
(causal) determine changes in the value of the latent variable, while reflective indicators (effect)
work the other way around. For instance, length of stay is a formative indicator of efficiency, as
efficiency is partly determined by length of stay; but at the same time clinical effectiveness
affects length of stay, and hence length of stay is also a reflective indicator of clinical
effectiveness. This distinction is important from a methodological point of view when evaluating
indicator validity. During the implementation phase it will support the interpretation of indicator
results.
2.2. Survey in 11 European countries
A survey was conducted in 11 European countries in May 2003. It aimed to define the hospital
management’s scope for decision-making, the relevance of various indicators and the burden of
data collection. Questions were circulated to volunteer members of the Health Promoting
Hospitals network and to countries participating in the pilot project.
One questionnaire was sent to each of the 11 countries, and twelve responses were received.
Surveys were filled in either by individuals or by large multi-professional working groups. Each
working group was asked to fill in the survey for a so-called “lay hospital” in the country.
Survey results have to be interpreted with great caution. Inference is limited because of sampling
bias: recipients of the questionnaire were identified from a self-selected group (the Health
Promoting Hospitals network). There may also be a “social desirability” bias, meaning that
respondents answer the way they believe they are expected to answer and overrate the
importance of socially desirable components of performance (e.g. health promotion, staff
satisfaction). The survey only captured limited information on the content and quality of national
data sets. Moreover, two questionnaires from one country showed intra-country discrepancies.
Although these factors limit the interpretation of the survey, they do not render it unhelpful. The
empirical findings of the survey were considered crucial to reconcile theory with practice and to
develop a strategy to monitor the integrity of the model and its application to different health
care systems. Evaluating applicability was extremely important because indicators were drawn
from a mainly Anglo-Saxon literature, and the applicability of tools and their extrapolation to
other contexts are often questionable. The survey met its purpose of facilitating the selection of
indicators with a first input from the countries. This purpose will be furthered in the next steps
with the input from countries that will pilot the balanced dashboard of performance indicators.
Responses to the survey showed wide variations in data availability and data quality, including:
• continuing use of ICD-9 instead of ICD-10,
• relative or absolute lack of secondary diagnosis coding,
• over/under recording reflecting funding and culture,
• delineation of episodes, readmissions, attribution to hospitals, and
• variable, usually limited, linkage between hospitals and primary health care.
In general, respondents supported the values and the measures proposed in the survey. Many of
the issues, such as staff orientation, were considered to be very important, although few countries
actually have systems to measure it.
2.3. Pre-selection of individual indicators
The pre-selection was based on evidence in the literature, results of the survey in participating
countries and expert judgement. Discussions took place at the third and fourth workshops.
During the third workshop, four working groups composed of international experts (see
appendix) in the different dimensions (clinical effectiveness and patient safety, staff orientation
and staff safety, efficiency and patient centeredness, responsive governance and environmental
safety) were asked to select indicators using a modified nominal group technique. They first
scored each indicator individually on a scale from 1 to 10 according to importance, validity and
burden of data collection. Individual scores were reported to the group and discussed. Then
indicators were allocated to the “core” or “tailored” basket or excluded from the framework.
During the fourth workshop, the list of indicators was reviewed to guarantee the content validity
of the set of indicators as a whole.
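The scoring and basket-allocation procedure described above can be sketched in code. In this minimal Python sketch the panel scores, the equal weighting of the three criteria and the cut-off thresholds are all illustrative assumptions, not the workshop's actual rules (burden is scored here so that a higher value means a lower data-collection burden):

```python
from statistics import median

# Hypothetical panel scores (1-10) from four experts for two indicators;
# higher is better on every criterion (for "burden", higher = lower
# data-collection burden). Scores are invented for illustration.
scores = {
    "caesarean_section_rate": {"importance": [9, 8, 9, 7],
                               "validity":   [8, 8, 7, 9],
                               "burden":     [8, 7, 8, 8]},
    "staff_overtime":         {"importance": [6, 7, 5, 6],
                               "validity":   [4, 5, 4, 3],
                               "burden":     [6, 5, 6, 7]},
}

def allocate(indicator_scores, core_cutoff=7.0, exclude_cutoff=5.0):
    """Allocate an indicator to a basket from median panel scores.

    The cut-off values are assumed for illustration only.
    """
    criterion_medians = [median(v) for v in indicator_scores.values()]
    overall = median(criterion_medians)
    if overall >= core_cutoff:
        return "core"
    if overall >= exclude_cutoff:
        return "tailored"
    return "excluded"

for name, s in scores.items():
    print(f"{name}: {allocate(s)}")
```

In the actual nominal group technique the individual scores were discussed before allocation, so any automated rule like the above would only be a starting point for the group's judgement.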
3. Results
3.1. Sub-dimensions of the operational and conceptual models of
hospital performance
3.1.1. It was agreed that prerequisites to the conceptual model were:
- to be consistent with WHO policy and language;
- to share a common understanding of seemingly very similar concepts (e.g.
dimensions/perspectives; indicators/standards/criteria; outputs/results/performance), whose
complexity is increased by translation from English into other languages;
- to clearly define the concepts included under each sub-dimension (e.g. emotional support,
empowerment and autonomy);
- to develop a glossary of terms for the purposes of the project; and
- to design indicators around practical customers (hospitals).
3.1.2. Conceptual and operational model
The conceptual model encompasses four vertical dimensions (clinical effectiveness, efficiency,
staff orientation and responsive governance) that cut across two horizontal perspectives (patient
centeredness and safety) (see figure 1). Sub-dimensions for each of the six
dimensions/perspectives are described in table 1.
Figure 1: The WHO Regional Office for Europe theoretical model for hospital performance
[Figure: four vertical dimensions (staff orientation, responsive governance, clinical
effectiveness, efficiency) crossed by two horizontal perspectives (safety, patient centeredness)]
Table 1: Description of the dimensions and sub-dimensions of performance
Dimension Sub-dimensions
Clinical effectiveness - Conformity of processes of care
- Outcomes of processes of care
- Appropriateness of care
Efficiency - Appropriateness (added after discussions during the workshop)
- Input related to outputs of care
- Use of available technology for best possible care
Staff orientation - Practice environment
- Perspectives and recognition of individual needs
- Health promotion activities and safety initiatives
- Behavioural responses and health status
Responsive governance - System / Community integration
- Public health orientation
Safety - Patient safety
- Staff safety
- Environmental safety
Patient centeredness - Client orientation
- Respect for patients
It was made clear that several issues deserve special consideration:
1. Highlight the central role of both patient centeredness and safety as values guiding
health systems and hospital management: a patient’s perspective on clinical
effectiveness, efficiency, staff orientation and responsive governance.
2. Make explicit the relationships between indicators. The distinction between determinants
and measures of performance is helpful in constructing and balancing the indicator set, as
many measures (e.g. length of stay) may be seen as associated with a range of variables
which may be characterised as formative drivers or reflective images.
3. The distinction between formative and reflective indicators is crucial. However, it may
be difficult to understand and might cause confusion among potential users of the
indicators. In the educational material, wordings such as “the indicator reflects…” and
“the indicator acts upon…” will therefore be preferred.
The overall structure for each dimension and the main points described are presented below.
a. Clinical effectiveness
Within clinical effectiveness, a focus on team working and on clinical conditions, rather than on
individual specialties or professions, was recommended.
Although it was acknowledged that several indicators have major limitations, they should
nevertheless be considered to guarantee the content validity of the set of indicators as a whole.
Some of the main problems are that:
- complications and sentinel events are seriously underreported,
- indicators based on data extracted from the medical record, e.g. on appropriate and timely
care, depend on content which is commonly not recorded, and represent a very high burden
of data collection.
b. Efficiency
There are practical limitations to linking inputs to health care outputs or outcomes due to:
• lack of activity-based costing;
• inconsistency of case-mix classification; and
• difficulty in standardising costs in monetary terms between countries.
Despite these limitations, opportunities for measuring efficiency include optimal use of available
technology (e.g. machine time), utilisation rates, staffing ratios, and financial management.
Following the discussion, appropriateness of health services utilization was added as a sub-
dimension: efficiency without appropriateness is considered a meaningless dimension. Given the
wide variations in the availability, training and functions of personnel between and even within
countries, indicators based on staffing ratios would be difficult to interpret. Moreover, they are
largely outside the hospital’s control. Hence, they are not treated as performance indicators but
as background information, an important aid to understanding and interpreting other
performance indicators.
In market economies hospitals manage their finances; in other contexts, hospitals only manage
or even merely administer line budgets. Because of these wide disparities in financial
responsibilities, indicators on financial performance and profitability are included only in the
tailored set.
c. Staff orientation
Many potential indicators are sensitive to context. In this area there are wide variations between
countries and priorities differ widely. In some countries the preoccupations are overstaffing, job
security and timely payment, while in others overwork, turnover and vacancy rates, professional
identity, self-regulation, team working and a heavy reliance on nurses (who are usually in short
supply) are the overarching challenges.
These conditions largely affect indicators on staff orientation. Staffing levels, team working and
continuity of care also bear on patient safety and clinical effectiveness.
Staff orientation should recognize knowledge management and its application, i.e. competence
and practical skills.
d. Responsive governance
Responsive governance relates to the hospital’s role, responsibilities and influence within the
health care system. It is also very sensitive to context and culture, and there is a general lack of
literature on responsive governance indicators.
Attention should focus on:
- continuity of care, focusing on patient perception (patient surveys) or factual
issues (discharge letters);
- patient discharge planning (over which the hospital has control); and
- responsiveness to the health needs of the community served.
e. Patient centeredness
Patient centeredness is usually assessed through patient surveys. There are three broad
approaches: surveys may measure patient experience with care received, patient satisfaction, or
the gap between patients’ expectations and perceived experience. The three approaches are
complementary and none is advocated over the others. What is really important is that hospitals
listen to patients, use the results of the survey to improve services, and do so in a standardized
way that allows comparisons across all major sub-dimensions. But it is unrealistic to expect the
same standardized questionnaire to be used in all hospitals in Europe.
f. Safety
This transversal dimension is divided into patient, staff and environmental safety. It should link
clusters of ideas such as:
- patient centeredness and continuity of care and
- staff orientation and patient safety: training / adequacy.
Sub-dimensions of patient safety include issues such as quality monitoring, development and use
of standardized guidelines, drug prescribing and delivery organization, infection control
mechanisms, continuity of care, professional qualifications and job content.
3.2. Selection of indicators
The selection of indicators was a very complicated process because of differing understandings,
systems and purposes. The methodological discussions summarized below helped clarify a
number of issues and facilitated the selection of indicators.
- Indicators or standards?
Discussion centred on the definition of an indicator in relation to criteria, standards and norms.
On the one hand, some members of the group considered that indicators have to be quantified,
continuous variables and related to a denominator. On the other hand, the Ontario definitions
include qualitative 0/1 variables, which may present a confusing message to many European
countries. It was concluded that structural characteristics might be assessed more appropriately
through periodic surveys than through continuous measurement.
It was decided that ultimately the selection of indicators should be based on functionality rather
than academic classifications. This allows the inclusion of rate/ratio measures, supported by
dichotomous questions, which may relate to internal or external organisational assessment.
The final set of indicators does not include self-assessment against standards, because of limited
contextual validity (standards which proved useful in one setting may not be applicable to all
other settings) and the burden of tool development.
- Evidence and usefulness
Evidence of validity may be relatively weak for some measures. For example, the indicator
“return to ICU” is widely adopted by hospitals, but there is very little hard evidence that it is a
construct-valid measure. Indicators such as waiting times and caesarean section rates may be
affected by local values, practices and norms; there is no robust evidence that they inform about
the quality of clinical practice. Furthermore, even where evidence exists, an indicator may not be
useful in one country yet very useful in others. For instance, the implications of “overtime” in
the employment environment of the USA may not be directly transferable to Europe. Similarly,
turnover is only an indicator of staff satisfaction and morale in contexts where nurses have the
opportunity to change jobs and unemployment rates are very low.
However, it was agreed that, despite this, some indicators might be valuable for individual
hospitals to use as comparative measures between hospitals, even if there is no agreement on
best practices for clinical decision-making. When little or no evidence is available to support an
indicator but it is considered useful and is used by many hospitals or included in many systems,
it has strong face validity.
It was agreed that, unless there is clear evidence to the contrary, it is acceptable to recommend
measures that are based on usefulness rather than hard scientific evidence. Indicators included in
the core set have been selected on the basis of best available evidence and relevance to the
European hospitals context.
- Content validity of the set as a whole
Indicator lists aim to support a balanced and realistic view of hospital performance, progressing
from a comprehensive list of known measures to a structured core set appropriate for most acute
general hospitals, and a supplementary “tailored” set for more specific situations or where data
are available.
Whether the balance is correct or not depends on the agreed aims and use of the indicator set.
The core set is more outcome-focused, the tailored set more process-focused, and structure is
amenable to simple descriptive measures. But many outcomes (e.g. staff satisfaction) may also
be seen as structural inputs to the care process. A good overall mix of input (or structure), output
(or process) and outcome measures seems the best strategy for having an impact on quality
improvement.
- Challenges with data collection and operational definitions
Ultimately the reliability of hospital performance indicators rests upon the quality of data from a
variety of sources, such as:
• Clinical and administrative databases: require linkage, standardized definitions, coding
procedures and clinical validation
• Self-assessment surveys: much used in Ontario, but liable to inconsistent application and
thus inconsistent results
• Patient surveys
• Staff surveys
• Abstraction of medical records: e.g. occurrence screening, retrospective clinical audit
A conclusion was that indicators (e.g. complications) should not be excluded merely because the
data they require are often missing or inaccurate. On the contrary, they should be used as an
opportunity to identify and respond to a need for education and improvement, leading to more
effective information systems. Similarly, indicators based on data abstracted manually from
records should not be excluded; the exercise is educational for staff and improves the quality of
the clinical records.
If indicators are to be used for international comparisons, operational definitions (and the
underlying data) need to be standardised rather than left for local determination within national
contexts. Although standardisation between countries should be aimed for, its achievement will
be gradual. A commitment to start working towards convergence is preferable to the unrealistic
aim of seeking immediate conformity. International comparisons are a secondary objective, to be
pursued at a later stage of the project.
3.2.2. Selection of indicators by dimension
In this section, indicators are presented by crossing the vertical dimensions with the horizontal perspectives.
Clinical effectiveness and patient safety
Initially 25 indicators were selected: 11 indicators for the core basket and 14 for the tailored
basket (both baskets are shown in table 2). Indicators which use return home as an endpoint, or
which merely describe volume of care, were excluded.
The following indicators were selected:
- Sentinel events, especially related to surgery
- Mortality in hospital (core) and out of hospital (tailored), disease-specific at 30 days,
e.g. neonatal, Coronary Artery Bypass Graft (CABG), hip fracture, Acute Myocardial
Infarction (AMI)
- Readmission within 28 days to the same hospital (core) or another hospital (tailored) for
asthma and diabetes, separately for children and adults
- Return to ICU within 48 hours; admission after day surgery
- Appropriate use of services: for the core set, caesarean section rate and prophylactic
antibiotic use (by audit of indications rather than overall rate). A tailored-level questionnaire
could be used to assess the availability and application of hospital policies and clinical guidelines.
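An indicator such as 28-day readmission to the same hospital depends on precise operational choices (the window, the denominator, the age split). The following Python sketch shows one possible operationalisation; the record fields, the 18-year adult cut-off and the denominator convention (all tracer admissions) are assumptions for illustration, not the project's agreed definition:

```python
from datetime import date

# Assumed tracer conditions and minimal discharge records (invented data).
TRACERS = {"asthma", "diabetes"}

admissions = [
    {"patient": "P1", "diagnosis": "asthma",   "admitted": date(2003, 5, 1),  "discharged": date(2003, 5, 4),  "age": 9},
    {"patient": "P1", "diagnosis": "asthma",   "admitted": date(2003, 5, 20), "discharged": date(2003, 5, 23), "age": 9},
    {"patient": "P2", "diagnosis": "diabetes", "admitted": date(2003, 5, 2),  "discharged": date(2003, 5, 6),  "age": 54},
]

def readmission_rate(records, tracers, adult_cutoff=18):
    """28-day same-hospital readmission rate for tracer conditions,
    reported separately for children and adults.

    Denominator: all tracer admissions in the group (an assumption).
    """
    counts = {"child": [0, 0], "adult": [0, 0]}  # [readmissions, admissions]
    last_discharge = {}
    for r in sorted(records, key=lambda rec: rec["admitted"]):
        if r["diagnosis"] not in tracers:
            continue
        group = "adult" if r["age"] >= adult_cutoff else "child"
        counts[group][1] += 1
        prev = last_discharge.get(r["patient"])
        if prev is not None and (r["admitted"] - prev).days <= 28:
            counts[group][0] += 1
        last_discharge[r["patient"]] = r["discharged"]
    return {g: (n / d if d else None) for g, (n, d) in counts.items()}

print(readmission_rate(admissions, TRACERS))
```

Capturing readmissions to other hospitals (the tailored variant) would additionally require record linkage across institutions, which is exactly the data-availability issue discussed in section 3.2.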
Table 2: Final list of indicators for clinical effectiveness and patient safety

Appropriateness of care
- Core: Caesarean section rate
- Tailored: Result of audit of indications for Caesarean section
- Interpretative information: Caesarean section rate in area

Conformity of processes of care
- Core: Result of audit of medical records for prophylactic antibiotic use
- Tailored: Door to needle time; percent of patients with CT scan (3 hours) after stroke;
percent of AMI patients discharged on aspirin

Outcomes of care and safety processes
- Core: Mortality rates for selected tracers; readmission rates for selected tracers; rate of
admission after day surgery; rate of return to ICU for selected tracer conditions
- Tailored: Ditto core, with more advanced risk-adjustment procedures and follow-up of
patients (e.g. different hospitals for readmission and fixed follow-up for mortality)

Prevalence of sentinel events
- Tailored: Post-tonsillectomy bleeding; rate of pressure ulcers for stroke and fracture
patients; rate of nosocomial infections; rate of third degree perineal tear; rate of
ureteric/bladder damage associated with hysterectomy
- Interpretative information: Reporting procedures for sentinel events, surveillance systems
The tailored basket includes many of the core indicators, but refined by record linkage and
adding other specific conditions, for instance readmission within 28 days after surgery, door-to-
needle time, computer-assisted tomography scan within 3 hours of stroke, acute myocardial
infarction patients discharged on aspirin, post-tonsillectomy bleeding, pressure ulcers on
neurology (stroke) and orthopaedic wards (hip fracture), hospital-acquired infection (central
venous percutaneous lines, artificial ventilation, total hip replacement), third degree perineal
tear, ureteric/bladder damage associated with hysterectomy, and diabetes control (see COMAC
guidelines).
Technical problems arise with tracers due to diagnostic variability, low prevalence rates and
small samples. As a result of a focus on clinical groups, many small hospitals with statistically
small samples may be excluded. Because of this limitation, further work on the selection of
tracer conditions is needed.
To support the interpretation of indicators, a preliminary questionnaire on safety structures and
standards might include: guidelines development, committees, existence of an emergency trauma
register, triage system, blood transfusion-related safety procedure (standard ordering list,
haemovigilance), autopsies, technical maintenance e.g. lasers.
Other questions and comments need to be taken into consideration:
- Sentinel events: are the American Medical Association (AMA) list and definitions of
incidents transferable to Europe? Should reporting mechanisms be standardised in order
to make any results comparable? Does the prevailing culture promote “zero reporting”
and the denial of adverse events? Sentinel events should be used as reflections of safety –
by their analysis rather than their measurement.
- Appropriateness and conformity: these should be combined as “process of care”,
assuming the availability of evidence-based or locally defined guidelines to define what
is appropriate.
- Staff overtime: this is an unvalidated determinant of safety but is an important issue for
many hospitals. It may be better focused on nursing care (where the evidence is clearer than
for medical care) as a measure of staff welfare rather than patient safety.
Efficiency
Efficiency needs to be linked to complexity, which can be measured by DRGs where data are
available. It should also include waste of resources, e.g. blood, operating rooms, CT scanning,
clinical time, X-ray film and food.
The use of the Appropriateness Evaluation Protocol (AEP) was discussed and was postponed
because its usefulness in the European context was still unclear. A further analysis still needs to
be done before AEP can be recommended in a wider context than the one it was designed for.
The final selection of efficiency indicators is presented in table 3.
Table 3: Final list of indicators for efficiency
Appropriateness of services
- Core: Ambulatory surgery rate (extension: medical acute care) for selected tracers
- Tailored: Result of audit of Appropriateness Evaluation Protocol (AEP – European version)
Productivity (input related to output)
- Core: Median length of stay for selected tracers; Percent of patients admitted on day of surgery, for selected tracers
- Tailored: LOS case-mix adjusted; Number of dosage units (or cost) of antibiotics per patient day; Cost of corporate services per patient day
- Interpretative information: Staff ratios (per professional category and per department)
Use of capacity
- Core: Average inventory in stock, for pharmaceuticals, blood products, surgical disposable equipment; Operating room unused sessions
- Tailored: Operating room utilization rate
- Interpretative information: Bed occupancy rate
Financial performance
- Core: Cash-flow/Debt
Patient centeredness
Patient centeredness is primarily assessed through patient surveys. Many hospitals will need help
in introducing patient surveys; others may be encouraged to ensure broad coverage for internal
benchmarking even if they do not use a standard instrument. Hospitals should include the results
of their “home-made”, non-standardized surveys in the reporting scheme and monitor evolution
over time. Although such results may only be used for internal comparisons, the introduction of
standardized questionnaires tested for validity and reliability on a large scale is strongly
preferred.
Table 4: Final list of indicators for patient centeredness
Dimension / Sub-dimension Core Tailored Interpretative
information
Patient centeredness - Average score on overall
perception/satisfaction items in
patient surveys
Interpersonal aspects - Average score on
interpersonal aspects items in
patient surveys
Client orientation: access - Percent of one-day surgical - Average score on access
procedures cancelled on day of items in patient surveys
surgery
Client orientation: amenities - Average score on basic
amenities items in patient
surveys
Client orientation: comprehensiveness ? ?
Client orientation: information and - Average score on
empowerment information and empowerment
items in patient surveys
Client orientation: continuity - Average score on
continuity of care items in patient
surveys
16. EUR/03/5038066
page 12
Staff orientation and staff safety
The following issues around indicators were raised:
- Operational definition of training days: how are “training days” to be defined? How
would in-service training be included? Are they measures of structure or of process?
Would the training budget as a percentage of the staff budget be a better measure? Both
training days and training budget as a percentage of the staff budget were retained as
complementary tools and will be tested through the pilot implementation.
- Staff surveys should be a priority for the further development of indicators. However, no
indicator based on staff survey results is included in the core set: even where
standardised staff survey tools exist, they are not widely applicable in the participating
countries and would have to be substantially adapted to each situation. Moreover, many
countries do not have a culture of surveying either patients or staff and would be slow to
adopt such a recommendation.
Table 5: Final list of indicators for staff orientation and staff safety
Economic factors
- Core: Wages paid on time
- Tailored: Salary and benefits; Variation in workforce
Practice environment
- Core: Results of staff survey on job content
- Tailored: HR survey on strategies to match staffing to needs
Perspective and recognition of individual needs
- Core: Number of training hours over total number of working hours; Training budget over total budget dedicated to staff
Health promotion and safety initiatives
- Core: Budget dedicated to staff HP activities over total number of full-time equivalent staff
- Tailored: Percent of job descriptions with risk assessment
Staff experience
- Core: Result of staff survey on organizational climate
Behavioural responses
- Core: Number of days of short-term absenteeism (1 to 3 days) over total number of days contracted (stratified by department and profession); Number of days of long-term absenteeism (more than 42 days) over total number of days contracted (stratified by department and profession)
- Tailored: Turnover rate
Staff safety
- Core: Number of work-related injuries (stratified by type) over total number of staff
- Tailored: Number of assaults on staff
Safety processes
- Core: Staff excessive working hours
Responsive governance and environmental safety
Waiting time must be analysed by urgency and interpreted so as to distinguish the efficiency of
waiting-list management from the stewardship of health system resources. Medical conditions
should be added alongside surgical procedures when measuring waiting times.
17. EUR/03/5038066
page 13
Table 6: Final list of indicators for responsive governance and environmental safety
Responsive governance and environmental safety
System integration and continuity
- Core: Average score on items on perceived continuity in patient surveys; Percent of discharge letters sent to GPs within 2 weeks
- Tailored: Result of audit of discharge preparation; Result of Appropriateness Evaluation Protocol for geriatric patients
- Interpretative information: Description of roles and functions implemented to foster integration of care
Public Health Orientation: access
- Core: Waiting time for selected tracers (median & variance)
- Tailored: Score on items on perceived financial access in patient surveys
- Interpretative information: Description of strategies implemented for the management of waiting lists
Public Health Orientation: Health promotion
- Core: Percent of women breastfeeding at discharge
- Tailored: Percent of AMI and CHF patients with lifestyle counselling documented in the record (audit)
- Interpretative information: Self-assessment against WHO Baby Friendly standards
Equity and ethics ? ?
Environmental concerns ? ?
Summary definitions of the core set of indicators
Table 7: Definitions, numerators and denominators of the final core set of indicators
Dimension / Sub-dimension Definition of the indicator
1- Clinical effectiveness
Processes of care
Primary caesarean section delivery
Numerator: Number of cases within the denominator with caesarean section
Denominator: Number of cases with first-time deliveries
Alternative definition: caesarean section delivery rate
Numerator: total number of caesarean section delivery cases
Denominator: total number of deliveries
Appropriateness of prophylactic antibiotic use for selected tracer procedures
Numerator: Number of patients who receive prophylactic antibiotics in adherence to accepted guidelines
for selected procedures
Denominator: Total number of patients for selected procedures in the random sample of medical records
audited
Outcomes of processes of care
Readmission for selected tracer conditions / procedures within the same hospital
Numerator: Total number of patients readmitted to the emergency department of the same hospital within
a fixed follow-up period relevant to the initial condition / procedure, and with a readmission diagnosis
relevant to the initial care
Denominator: Total number of patients admitted for selected tracer conditions (e.g. asthma, diabetes,
pneumonia, CABG)
Exclusion criteria: Patients admitted for the same tracer condition who died during the first spell
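As an illustrative sketch (the record structure and data are invented, not part of the WHO definition), the readmission indicator above can be computed from paired admission and readmission records:

```python
from datetime import date

# Hypothetical admission records: (patient_id, admit_date, diagnosis, died)
index_admissions = [
    ("p1", date(2003, 3, 1), "asthma", False),
    ("p2", date(2003, 3, 5), "asthma", True),   # died during first spell -> excluded
    ("p3", date(2003, 3, 9), "asthma", False),
]
# Readmissions to the same hospital with a related diagnosis:
readmissions = [("p1", date(2003, 3, 20), "asthma")]

def readmission_rate(index_adms, readms, window_days=30):
    """Readmissions within a fixed follow-up window over eligible index admissions."""
    eligible = [(pid, d) for pid, d, _, died in index_adms if not died]
    readmitted = {
        pid for pid, d in eligible
        for rpid, rd, _ in readms
        if rpid == pid and 0 < (rd - d).days <= window_days
    }
    return len(readmitted) / len(eligible)

rate = readmission_rate(index_admissions, readmissions)  # 1 of 2 eligible -> 0.5
```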
Admission after one-day surgery
Numerator: Number of patients transferred from the day procedure facility following an operation
procedure (by selected procedure, e.g. cardiac catheterization, digestive, respiratory or urinary system
diagnostic endoscopy, laparoscopic cholecystectomy, one-day cataract surgery, curettage and dilatation of
uterus) to an overnight facility, directly or within 12 hours
Denominator: Total number of patients who have an operation / procedure performed in the procedure
facility
Exclusion criteria: Readmission for further planned operation to be excluded from both numerator and
denominator
Return to ICU
Numerator: Total number of patients with selected conditions / procedures discharged from intensive care
unit who return to ICU within 48 hours
Denominator: Total number of patients with selected conditions / procedures discharged
alive from ICU
2- Efficiency
Appropriateness
Ambulatory surgery use
Numerator: Number of laparoscopic cholecystectomies, one-day cataract surgeries, curettage and
dilatation of the uterus and oncology procedures performed in the day procedure facility (no overnight stay
expected) over a period
Denominator: Total number of procedures over the same period
Input related to output
Admissions on day of surgery
Numerator: Total number of admissions on day of surgery
Denominator: Total number of patients admitted for surgery
Median (or average) length of stay for specific procedures and conditions: hip replacement, CABG,
diabetes and asthma, appendectomy
Numerator: Total number of days for specific procedures and conditions: hip replacement, CABG,
diabetes and asthma, appendectomy
Denominator: Total number of patients admitted for hip replacement, CABG, diabetes and asthma,
appendectomy
Exclusion criteria: transfer to / from other hospitals.
Use of capacity
Inventory in stock
Numerator: Total value of inventory at the end of the year for pharmaceuticals, blood products, surgical
disposable equipment
Denominator: Total expenditures for pharmaceuticals, blood products, surgical disposable equipment / 365
days
Operating rooms unused sessions
Numerator: Number of sessions used
Denominator: Number of sessions staffed
Exclusion: Night surgical sessions (8 PM – 8 AM?)
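Since the denominator of the inventory-in-stock indicator above is average daily expenditure, the ratio reduces to days of consumption held in stock. A minimal sketch with invented figures:

```python
def inventory_days(stock_value_year_end, annual_expenditure):
    """Average days of consumption held in stock at year end."""
    daily_spend = annual_expenditure / 365
    return stock_value_year_end / daily_spend

# e.g. 120 000 of pharmaceuticals in stock against 1 460 000 spent per year:
days = inventory_days(120_000, 1_460_000)  # -> 30.0 days of stock
```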
3- Staff orientation (or staff responsiveness)
Practice environment
Perspectives and recognition of individual needs
Staff training
Training days
Numerator: total number of training hours
Denominator: total number of working hours
Stratification proposed: by professional category
Training budget
Numerator: total amount of budget dedicated to staff training
Denominator: total amount of budget dedicated to staff
Health Promotion activities
Health Promotion budget
Numerator: total amount of budget dedicated to staff health promotion activities
Denominator: total number of full-time equivalent (FTE) staff
Consequences
Absenteeism
Short-term absenteeism
Numerator: total number of short-term absenteeism days (from 1 to 3 days)
Denominator: total number of working days
Disaggregation proposed: to be considered at hospital level but also stratified by department and
professional category
Long-term absenteeism
Numerator: total number of long-term absenteeism days (> 42 days) over a period
Denominator: total number of working days over a period
Stratification proposed: to be considered at hospital level but also stratified by department and professional
category
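A sketch of how the stratified absenteeism rates defined above might be computed (the records and contracted-day totals are invented; the 1–3 day and > 42 day thresholds follow the definitions):

```python
from collections import defaultdict

# Hypothetical absence spells: (department, profession, days_absent)
absences = [
    ("surgery", "nurse", 2),    # short-term (1 to 3 days)
    ("surgery", "nurse", 50),   # long-term (> 42 days)
    ("medicine", "doctor", 1),
]
contracted_days = {("surgery", "nurse"): 10_000, ("medicine", "doctor"): 5_000}

def absenteeism_rates(records, contracted):
    """Short- and long-term absenteeism rates per (department, profession) stratum."""
    short = defaultdict(int)
    long_ = defaultdict(int)
    for dept, prof, days in records:
        if 1 <= days <= 3:
            short[(dept, prof)] += days
        elif days > 42:
            long_[(dept, prof)] += days
    return (
        {k: short[k] / contracted[k] for k in contracted},
        {k: long_[k] / contracted[k] for k in contracted},
    )

short_rates, long_rates = absenteeism_rates(absences, contracted_days)
```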
4- Responsive governance
System / community integration
Perceived continuity through patient survey (see patient centeredness)
Discharge letters to general practitioners
Numerator: Number of discharge letters sent to general practitioners within a maximum period of two
weeks
Denominator: Total number of discharges
Waiting time for selected procedures and conditions
Variance of waiting time for specific surgical procedures and conditions: total hip replacement, hallux
valgus, varicose veins surgery, breast cancer surgery, cataract surgery, cardiac surgery (differentiated by
degree of emergency)
Public health orientation
Breastfeeding at discharge
Numerator: Total number of women breastfeeding at discharge
Denominator: Total number of deliveries
Criteria for inclusion: singleton, born at or after 37 weeks, weight > 2,500 grams
Environmental safety
5- Patient centeredness
Client orientation
Score on patient experience/satisfaction questionnaire, including items on:
- Overall perception / satisfaction
- Interpersonal aspects
- Information and empowerment
- Continuity
Respect for patients
Cancelled one-day surgical procedures
Numerator: Number of patients booked for a one-day surgical procedure that was cancelled on the day of
the procedure or after admission
Denominator: Total number of patients booked for a one-day surgical procedure
3.3 Feedback of results to participating hospitals
The main message to convey is that indicators should not be interpreted in isolation because:
- the six dimensions are interrelated;
- each dimension has its own conceptual model and sub-dimensions;
- each indicator relates to other indicators within its dimension or other dimensions.
Trade-offs between indicators need to be highlighted and taken into consideration when
reporting. Reporting is a crucial step towards the interpretation of indicator results as part of a
process of quality improvement.
The discussions on the reporting tool focused on its function as a retrospective, strategic
summary (“scorecard”) or as a real-time operational system (“dashboard”). The second
approach was chosen for this project. Moreover, the term “scorecard” implies a score, which is
heavily value-loaded and implies a judgement, whereas the opposite message should be
conveyed: indicators cannot be used as a definitive judgement on a hospital's quality; they should
be used as flags and as a starting point in a quality improvement process.
The purpose of the balanced dashboard is to provide information to guide decision-making and
quality improvement. Therefore the reporting scheme will relate results to external references as
well as internal comparisons over time, and give guidance on interpretation.
The structure of a balanced dashboard to report results to participating hospitals was proposed.
Indicators are organized in “embedded levels”. The first page gives a synthetic overview across
all dimensions. The following pages focus on specific dimensions, and the dashboard ends with
a detailed description of each individual indicator, including comparisons with different
references, its evolution over past assessments, and identification of relevant variables and other
indicators that it may influence or be influenced by.
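The “embedded levels” could be represented, purely as an illustrative sketch (all keys and values are invented), as nested data with a drill-down from the overview page to an individual indicator's detail:

```python
# Hypothetical sketch of a balanced dashboard's embedded levels:
# an overview page, per-dimension pages, and per-indicator detail.
dashboard = {
    "overview": {"clinical_effectiveness": "flag", "efficiency": "ok"},
    "dimensions": {
        "efficiency": {
            "indicators": {
                "median_los_hip_replacement": {
                    "value": 9.0,
                    "reference": {"peer_group_median": 8.0},   # external reference
                    "trend": [10.5, 9.8, 9.0],                 # past assessments
                    "related": ["readmission_rate"],           # indicators it may influence
                },
            },
        },
    },
}

def drill_down(board, dimension, indicator):
    """Navigate from the overview level down to one indicator's detail."""
    return board["dimensions"][dimension]["indicators"][indicator]

detail = drill_down(dashboard, "efficiency", "median_los_hip_replacement")
```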
The specifications of the dashboard will initially be defined during the field implementation of
the project in a limited number of countries. Constant feedback from the field will be
incorporated to ensure that the tool is really valuable and usable by future participating hospitals.
The design of reports should comply with the structure of accountability and authority within the
institution.
Implementation: the Danish experience
Two current projects (one national, one in Copenhagen hospitals) provide practical experience of
the development and use of clinical indicators relying largely on routine clinical data held in
disease specific registers. These produce monthly and quarterly reports focused by clinical
specialty with comparative data and thresholds defined by peer group providers. Evaluation
showed that clinicians need to learn skills in the use and interpretation of data as well as
inducement to supervise the quality of clinical data abstraction and coding.
4. Conclusions
4.1. Summary of products
A number of products were developed in the frame of this project:
- identification of WHO strategic orientations related to hospital performance
- emerging conceptual model of performance standards and measures, identification of the
key dimensions of hospital performance (Product 1)
- a framework for evidence-based indicator selection (Product 2)
- growing catalogue of performance standards and indicators, and review of the literature
on their importance and usefulness, reliability and validity, contextual factors, current
uses, etc. (Product 3)
- the definition of a core set of indicators that represent the different dimensions and sub-
dimensions of performance in a balanced way (Product 3)
- the identification of the relationships between indicators within and between dimensions,
and of the exogenous factors that affect them (Product 4)
- an insight into the importance, usefulness, impact on quality and general availability of
potential indicators, obtained through a survey in ten European countries (May 2003)
- a framework for the design of a reporting instrument called “balanced dashboard”
(Product 5).
After having agreed on the final selection of indicators, the following orientations were agreed
on:
- indicators are not measures but flags signalling potential problems, which call for deeper
analysis of variations and an understanding of the factors that influence them. These
variations may reflect variations in the quality of care, in the quality of data, or in the
context and exogenous variables.
- evidence is often used as an absolute value, but the discussions made clear that evidence
may change over time and may depend on the context and not be of global value.
- specific training is needed for handling complex tools, while simple tools can be handled
without advanced expertise. Some target countries do not have a tradition of using
complex tools in health care quality management, so the hospital performance model
must be tailored to them.
- the aim of this project is to develop a model giving maximal value for quality
improvement at hospital level and not especially for international benchmarking.
A tool for performance measurement may deviate from its purpose and therefore it must be
carefully assessed if the tool developed could have any unwanted incentives built in. This must
be considered carefully in the pilot implementation period.
4.2. Recommendations on data related issues
4.2.1. Collection and quality control of hospital data
Collection and quality control of hospital data should be the responsibility of each hospital; i.e.
clinical data capture, coding, validation. This could be monitored by meta-indicators such as
average number of ICD codes per discharge for each hospital or by showing compliance with
agreed internal processes and checks (such as “data accreditation”). If hard measures are not
available for quality control of hospital data, a self-assessment of data quality will be performed
by participating hospitals.
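The meta-indicator mentioned above, the average number of ICD codes per discharge, is a one-line computation; the coded discharges here are invented:

```python
# Hypothetical coded discharges: one list of ICD-10 codes per discharge.
# A persistently low mean may suggest under-coding rather than healthier patients.
discharges = [["I21.0", "E11.9"], ["J18.9"], ["I50.0", "I10", "N18.3"]]

mean_codes_per_discharge = sum(len(codes) for codes in discharges) / len(discharges)
```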
Given its resources and overall objective, WHO's role may be limited in terms of concrete
implementation support for collecting, validating, aggregating and presenting indicators. The
role of WHO is to support its Member States in developing national capacities for assessing
hospital performance.
The indicators could be used (piloted initially) by individual hospitals and by aggregated
databases at national level. This will support the simultaneous validation of the model, the
indicators and the implementation.
4.2.2. Data aggregation
Data aggregation would ideally be done by an independent agency at national or international
level. To maintain objectivity, this agency would audit data, standardise aggregation, make
adjustments, and calculate distributions, norms and significance. The agency's needs for time
and resources should be realistically projected and funded.
However, even if hospital performance networks are configured at national level, they should be
linked at international level. WHO could support this linkage either directly or through other
institutions or NGOs such as the International Hospital Federation. In addition to data aggregation,
there will be a continuing need for revising guidance on the application and interpretation of the
indicators.
The demonstration of “success stories” will depend heavily on internal assessment and external
benchmarking that will have to be adjusted for risk and case-mix, and stratified to promote
genuine peer comparison. In the short term, reference comparisons will be mostly internal until
results are available for aggregation and pooling.
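Stratified peer comparison can be sketched as a percentile rank within a peer group (the peer-group rates below are invented for illustration):

```python
def percentile_rank(value, peer_values):
    """Share of peer results at or below the hospital's own result."""
    return sum(v <= value for v in peer_values) / len(peer_values)

# Caesarean section rates in an assumed peer group of comparable hospitals:
peers = [0.11, 0.12, 0.15, 0.18, 0.20, 0.22, 0.25, 0.30]
rank = percentile_rank(0.22, peers)  # 6 of 8 peers at or below -> 0.75
```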
4.3. Recommendations on roles and responsibilities for piloting the
framework for hospital performance assessment
4.3.1. Beneficiaries
The main beneficiaries are the hospitals themselves. The first contact should be the chief
executive or governing body of an institution, although it is important to ensure that clinicians
are involved, at least enough to be committed to contributing to the accuracy of clinical data.
This assumes that managers actually have the authority to manage and to improve performance.
For this reason, pilot sites need to be selected on the basis of shared objectives and timescale and
managers need to be able to take executive decisions to respond to the issues addressed by the
indicators.
Governments could also support the project by providing funding for the initial testing and
validation of data and development of norms and benchmarks. Even if government funding is not
provided, hospitals may have difficulties in keeping the resulting data from their regulatory
bodies. It is therefore important to make clear the limitations of the indicators when applied for
purposes other than internal management.
Hospital-specific results are not for public reporting. However, a communication plan focusing
on the public to describe the nature of the project and subsequent quality improvement actions
could be designed by hospitals and national bodies. The initiative is fostering quality
improvement actions and hence the public will indirectly benefit from the project.
An agreement on explicit conditions, including data quality, information management, use of
indicators, etc., should be fostered with the WHO European Hospital Performance project.
Mechanisms for monitoring and reporting back to WHO should also be established.
4.3.2. Project management/leadership
Hospitals need to define terms of reference to identify the general requirements on skills and
experience, including leadership and credible academic links. The leadership role may
correspond to a senior manager or clinician, assuming he/she has the required technical skills.
The project manager would need to be a charismatic leader and should be supported by a
technical team.
A national-level committee or group may be valuable at governmental level (especially if the
project is centrally sponsored), or through an NGO such as a hospital or medical association
where these exist (especially if private and public hospitals are to be involved).
4.3.3. Selection of indicators at local /national level
There should not be a minimum number of indicators collected by individual hospitals in order
to participate in the WHO programme, but sites should be strongly encouraged to take as many
indicators as possible and to avoid the misuse of data. There would be no upper limit to the
number of indicators in the “tailored” basket, to which any existing local indicators could be
added.
In practice, hospitals are likely to adopt any indicators that can be derived at minimal costs and
effort from their existing data, but they should be strongly advised to include some core
measures – preferably all – in each dimension and stick to the strategy of the project, which
consists in assessing interrelated indicators in the context of the overall performance of the
hospital. Certainly each country should be encouraged to identify some hospitals that will collect
data for all of the core measures.
4.3.6. Training implications
Most potential users would not have the necessary skills to make effective use of the proposed
indicators. There must be arrangements for initial and continuing training for pilot sites, and for
cascading this at national level.
WHO will prepare a user manual (in progress), but local induction will be essential for the
principal users. An initial two-day workshop for hospital project leaders in each country would
facilitate understanding and implementation. Further training of data technicians and coding
staff in technical procedures and in meeting data accreditation standards will also be needed.
This could be integrated with formal management and clinical training but can also be achieved
informally through user networks.
4.4. General recommendations for implementation of the
framework for hospital performance assessment
The following points were agreed on:
- It is important to identify the project leader (senior manager, clinician, other) in order to
avoid political barriers and to seek local support from authorities
- The validity of the model and indicators (the “product”) should be evaluated, including
their uptake and impact on hospital performance (improvement as well as perverse
behaviour), over a defined period of years
- The overall exercise should be referred to as implementation rather than pilot testing;
this avoids the connotation of scientific validation and emphasizes the practical use and
refinement of the model
- The implementation should be designed according to the national context (i.e. a country-
by-country approach)
- Relationships between WHO and potential partners, e.g. NGOs such as the IHF, should
be considered for possibly hosting in the future a European database allowing
benchmarking between hospitals in different European countries
- Develop expanded guidance on the collection and interpretation of individual indicators
- Identify the resources (time, data, training, money) which hospitals need to implement
the package
4.5. The steps forward
Hospital performance assessment is ongoing work, and indicators will need to be regularly
reviewed in light of new evidence. Remaining gaps that reduce the content validity of the core
set were identified; for example, indicators of comprehensiveness and internal continuity are
lacking. When such indicators become available, they will need to be included in the set.
In summary, the group has developed a prototype but has not tested it or defined “after-sales”
service or longer-term development. Remaining actions will include:
- develop user manual,
- define data quality standards and capacity needs for hospital data systems,
- define selection criteria for hospitals participating in the pilot implementation,
- define a strategy for cascade training of hospital project leaders,
- develop and maintain participant networks,
- identify agencies or mechanisms for data validation and aggregation,
- design scope and schedule of programme for development and revision of indicators.
Within these actions, the next steps agreed upon are the following:
- Refinement of the indicator model and the specifications of individual measures using,
where available, existing published guidance on numerators, denominators, sample
frames, coding criteria and interpretation notes (January 2004 workshop).
- Develop an introductory manual to describe the project (as work in progress), the
conceptual model, some well-tested and agreed indicators and examples of others which
need further development
- Further presentation of each indicator selected in the core or tailored basket. Include
preface guidance on each indicator to identify the values and related standards.
- Development of educational support as a primary product for pilot sites and then other
users.
- Pilot implementation in 6 countries in a limited number of hospitals to evaluate the
usefulness of the framework for hospitals and identify the resources (time, data, training,
money) which hospitals need to implement the package. Define expected outcomes of the
pilot implementation.
- Run a three-day workshop for country representatives (January 2004 workshop).
- As participants and experience grow, a mechanism should be developed to evaluate the
indicators (technical) and the impact of the project (behavioural); this could be
channelled through periodic meetings of user hospitals to pool and exchange data and
indicator results.
A number of WHO Member States showed interest in hospital performance management during
2003, and are potential users of the indicator set; the maximization of the use of this model for
country-specific work was consequently addressed.
Annex 1
SCOPE AND PURPOSE
June 2003
The WHO Office for Integrated Health Care Services in Barcelona, Division of Country Support,
is organizing a third workshop on Hospital Performance measurement from 13-14 June 2003.
Continuing the second meeting, held in March 2003, this workshop is part of a WHO initiative to
develop a Hospital Quality Improvement Strategy to support Member States in the
implementation of Hospital performance assessment strategies and use of key indicators.
The project has three main objectives:
1. Collect evidence on the use of hospital performance assessment models to support
countries in their implementation.
2. Support the Member States in producing benchmarking tools to allow hospitals to
compare themselves to peer groups in order to improve the quality of care provided.
3. Build an experts’ network on hospital performance assessment to support country
implementation and analyze outcomes.
The work will be done in three stages: analysis of existing models worldwide and definition of a
model, congruent with WHO’s policy orientations, which could be used throughout Europe;
piloting of the agreed model, validated by groups of experts in a range of different countries
(between 6 and 9 countries); and development of guidelines to facilitate country implementation.
Conclusions of the first workshop were the proposal of generic definitions adapted to the context
of this project, definitions of key dimensions of hospital performance promoting a
comprehensive model of hospital performance measurement and recommendations regarding the
design of a benchmarking network allowing participants to compare their own performance to
peer hospitals through relevant performance indicators.
The group of experts agreed on six key dimensions for assessing hospital performance:
• Clinical effectiveness
• Safety
• Patient centeredness
• Production efficiency
• Staff orientation
• Responsive governance
During the second workshop, the expansion of the key dimensions of hospital performance, the
design and the test of a framework to select evidence-based performance indicators, the review
and first assessment of performance indicators, and the pilot test of the future set of indicators
were discussed.
The following conclusions were reached: progress in the definition of the main concepts of
hospital performance; agreement on the sub-dimensions of hospital performance; and agreement
on a framework for selecting evidence-based indicators and on the orientations of the pilot test.
The purpose of the third workshop will be to:
i. discuss the results of the questionnaire on indicators conducted in 25 European countries in
May 2003;
ii. select the set of indicators and performance tools which will be pilot tested in a range of
European countries (October 2003-April 2004);
iii. discuss the overall model of Hospital performance, agree on all definitions, dimensions,
sub-dimensions and on the general architecture of the model; and
iv. discuss the orientations and agree on the principles of the pilot test.
September 2003
The WHO Office for Integrated Health Care Services in Barcelona, Division of Country Support,
is organizing a fourth and final workshop on Hospital Performance measurement from 12-13
September 2003.
Continuing the third meeting, held in June 2003, this workshop is part of a WHO initiative to
develop a Hospital Quality Improvement Strategy to support Member States in the
implementation of Hospital performance assessment strategies and use of key indicators.
The project has three main objectives:
1. Collect evidence on the use of hospital performance assessment models to support
countries in their implementation
2. Support Member States in producing benchmarking tools that allow hospitals to
compare themselves with peer groups in order to improve the quality of care provided
3. Build a network of experts on hospital performance assessment to support country
implementation and analyse outcomes
The work will be done in three stages: analysis of existing models worldwide and definition of a
model, congruent with WHO’s policy orientations, which could be used throughout Europe;
piloting of the agreed model, validated by groups of experts in a range of different countries
(between 6 and 9 countries); and development of guidelines to facilitate country implementation.
The following outcomes were achieved between January 2003 and June 2003:
- Identification of the key dimensions of hospital performance assessment
- Identification of WHO strategic orientations related to the project
- Clarification of the key dimensions and definition of the general architecture of the model
- Review of literature on hospital performance indicators and definition of a framework to
pre-select evidence-based indicators
- Survey of hospital managers in different European countries on the importance, usefulness,
impact on quality and general availability of potential indicators (May 2003 – 12 European
countries)
- Pre-selection of performance indicators on a scientific basis
- Selection of the sets of indicators and completion of the first draft of the model
The overall purpose of the fourth workshop is to finalize the design of a balanced scorecard
model that could be pilot tested in six European countries.
The three major objectives of the workshop will be to:
1. Finalize the core set of performance indicators included in the balanced scorecard and
characterize the trade-offs between the selected indicators
2. Design a dashboard (balanced scorecard model) that enhances evidence-based management,
and identify the related challenges
3. Define the relevant strategies for the preparation of the pilot test accordingly
The expected detailed outcomes of the workshop are:
For the first objective
1.1. To reach an agreement on the final core set of indicators to be included in the balanced
scorecard
1.2. To agree on the trade-offs between the hospital performance indicators selected for the core
set
For the second objective
2.1. To reach an agreement on the design of a dashboard that enhances evidence-based management
2.2. To agree on the main challenges and strategies for facilitating the appropriation of the results
For the third objective
3.1. To agree on the strategies for preparing the pilot test in six European countries
(implementation test of the balanced scorecard in Albania, Denmark, France, Germany, Georgia,
Lithuania)
Annex 2
PROVISIONAL PROGRAMME
Friday, 13 June 2003
09.00 – 09.10 Introduction: Jeremy Veillard
09.10 – 09.20 Outline of the project: Jeremy Veillard
09.20 – 09.30 Discussion
09.30 – 09.50 Theoretical frame of Hospital Performance Assessment: Niek Klazinga
09.50 – 10.30 Discussion
Chair: Vahé Kazandjian
10.30 – 11.00 COFFEE BREAK
11.00 – 11.30 Presentation of the pre-selected list of indicators: François Champagne and Ann-Lise
Guisset
11.30 – 12.00 Discussion
Chair: Brian Collopy
12.00 – 12.15 Presentation of the results of the European questionnaire on indicators
12.15 – 12.45 Discussion
Chair: Brian Collopy
12.45 – 14.00 LUNCH BREAK
14.00 – 16.00 Sub working groups – selection of indicators
16.00 – 16.30 COFFEE BREAK
16.30 – 18.00 Sub working groups – selection of indicators (continuation)
18.00 Closure of the first day
Saturday, 14 June 2003
09.00 – 10.30 Sub working groups – selection of indicators (continuation)
10.30 – 11.00 COFFEE BREAK
11.00 – 13.00 Plenary session – presentation of the indicators selected by the working groups
Chair: Ann Rooney
13.00 – 14.00 LUNCH BREAK
14.00 – 14.30 Synthesis and identification of major issues
François Champagne and Ann-Lise Guisset
14.30 – 15.30 Discussion
Chair: Niek Klazinga
15.30 – 16.00 COFFEE BREAK
16.00 – 16.15 First orientations for pilot testing the set of indicators: Jeremy Veillard
16.15 – 16.45 Discussion
Chair: Dr Pierre Lombrail
16.45 – 16.55 Wrap-up of the meeting: Charles Shaw
16.55 – 17.00 Conclusions: Jeremy Veillard
17.00 Closure of the meeting
Friday, 12 September 2003
09.00 – 09.15 Opening and introduction: Jeremy Veillard
Selection and interrelations of indicators
09.15 – 10.00 Selection of the core set of indicators: principles, choices and main challenges
Ann-Lise Guisset and François Champagne
10.00 – 10.30 Discussion
Chair: Vahé Kazandjian
10.30 – 11.00 COFFEE BREAK
11.00 – 13.00 Discussion (continuation)
Chair: Vahé Kazandjian
13.00 – 14.00 LUNCH BREAK
14.00 – 14.30 Interrelations and trade-offs between performance indicators selected in the core set
Ann-Lise Guisset and François Champagne
14.30 – 15.30 Discussion
Chair: Niek Klazinga
15.30 – 16.00 COFFEE BREAK
Appropriation of the results and related challenges
16.00 – 16.45 Identification of main challenges: from indicators to interpretation to action
Ann-Lise Guisset and François Champagne
16.45 – 17.45 Discussion
Chair: Adalsteinn Brown
17.45 – 18.00 Wrap-up: Svend Jorgensen
18.00 Closure of the first day: Jeremy Veillard
Saturday, 13 September 2003
09.00 – 09.30 Educational aspects of assessing hospital performance: Part 1: Presentation of the
dashboard
09.30 – 10.30 Discussion
Chair: Adalsteinn Brown
10.30 – 11.00 COFFEE BREAK
11.00 – 11.30 Educational aspects of assessing hospital performance: Part 2: Tools for facilitating
the use of the dashboard and preliminary steps for pilot testing the model
Ann-Lise Guisset and François Champagne
11.30 – 13.00 Discussion
Chair: Johan Kjaergaard
13.00 – 13.15 Wrap-up of the workshop: Charles Shaw
13.15 – 13.30 Conclusions and future steps: Jeremy Veillard
13.30 Closure of the workshop
Annex 3
LIST OF PARTICIPANTS
Temporary Advisers
Mr Onye Arah Telephone: +31 20 566 5049
Health Services & Systems Research. Room K2- Fax: +31206972316
203, Department of Social Medicine E-mail: o.a.arah@amc.uva.nl
Division of Clinical Methods and Public Health
Academic Medical Center
P.O. Box 22700
1100 DE Amsterdam
NETHERLANDS
Dr Adalsteinn D. Brown Telephone: +1 416 946 5023
Department of Health Policy, Management and Fax: +1 416 978 1466
Evaluation E-mail: adalsteinn.brown@utoronto.ca
University of Toronto
150 College Street, Fitzgerald Bldg., Room 147A
M5S 1A8 Toronto, ON
CANADA
Dr François Champagne Telephone: +15143432226
Professeur titulaire Fax: +15143432207
GRIS et Département d'administration de la santé E-mail: francois.champagne@umontreal.ca
Université de Montreal
B.P. 6128, succursale Centre-ville
H3C 3J7 Montreal, Quebec
CANADA
Dr Brian T. Collopy Telephone: +61 3 9419 3377
Director Fax: +61 3 9416 1192
CQM Consultants E-mail: cqm@sprint.net.au
Level 4. 55 Victoria Pde
Fitzroy, Victoria 3065
AUSTRALIA
Mr Thomas Custers Telephone: +31205664786
Department of Social Medicine Fax: +31206972316
Academic Medical Center E-mail: t.custers@amc.uva.nl
P.O. Box 22700
1100 DE Amsterdam
NETHERLANDS
Ms Pilar Gavilán Telephone: +34 93 482 43 33
Head of the Projects Unit Fax: +34 93 482 45 27
Directorate on Organization, Information Systems, E-mail: pgavilan@ics.scs.es
Projects and Evaluation (DOSIPA)
Catalan Institute for Health
Gran via de les Corts Catalanes, 587
08007 Barcelona
SPAIN
Dr Alicia Granados Navarrete Telephone: +34 93 200 22 53
c/ Alfons XII, 23-27, 3r 3a Fax:
08006 Barcelona E-mail: aliciagranados3@hotmail.com
SPAIN
Dr Ann-Lise Guisset Telephone:
GRIS et Département d'administration de la santé Fax:
Université de Montreal E-mail: ann-lise.guisset@umontreal.ca
B.P. 6128, succursale Centre-ville
H3C 3J7 Montreal, Quebec
CANADA
Dr Svend Juul Jørgensen Telephone: +34 93 241 82 70
WHO Consultant Fax: +34 93 241 82 71
WHO Office for Integrated Health Care Services E-mail: sjj@es.euro.who.int
Marc Aureli, 22-36
08006 Barcelona
SPAIN
Mr Vytenis Kalibatas Telephone: +370 37 32 63 23
Deputy Managing Director Fax: +370 37 32 66 01
Kaunas Medical University Hospital E-mail: kalibata@kmu.lt
Eiveniu str. 2
LT-3007 Kaunas
LITHUANIA
Dr Vahé Kazandjian Telephone: +1 410 379 9540
President Fax: +1 410 379 9558
Center for Performance Sciences (CPS) E-mail: Vkazandjian@cpsciences.com
6820 Deerpath Road
Elkridge, MD 21075-6234
UNITED STATES OF AMERICA
Dr Johan Kjaergaard Telephone: +45 3531 2852
Head of Unit for Clinical Quality Fax: +45 3531 6317
Copenhagen Hospital Corporation E-mail: jk02@bbh.hosp.dk
Bispebjerg Bakke 20C
2400 København NV
DENMARK
Dr Niek Klazinga Telephone: +31 20 5664892
Department of Social Medicine Fax: +31 20 6972316
Academic Medical Center E-mail: n.s.klazinga@amc.uva.nl
P.O.Box 22700 (Meibergdreef, 9)
1100 DD Amsterdam
NETHERLANDS
Dr Pierre Lombrail Telephone: +33 2 40 84 69 20
Director Fax: +33 2 40 84 69 21
Pôle Information Médicale, d'Evaluation & de E-mail: pierre.lombrail@chu-nantes.fr
Santé Publique (PIMESP)
Centre Hospitalier Universitaire de Nantes.
Hôpital Saint Jacques. 85, rue Saint Jacques
44 093 Nantes cedex 1
FRANCE
Ms Ehadu Mersini Telephone: +355 4 364614
Chief of the Planning and Medical Programs Sector Fax: +355 4 364270
Hospitals Department E-mail: ehadmers@hotmail.com
Ministry of Health
Tirana
ALBANIA
Dr Etienne Minvielle Telephone: +33144236000
INSERM Fax: +33145856856
101 rue de Tolbiac E-mail: minviel@kb.inserm.fr
75654 Paris Cedex 13
FRANCE
Ms Anne L. Rooney Telephone: +1 630 268 7445
Executive Director Fax: +1 630 268 7405
International Services E-mail: ARooney@jcrinc.com
Joint Commission Resources, Inc.
One Lincoln Centre, Suite 1340
Oakbrook Terrace, IL 60181
UNITED STATES OF AMERICA
Dr Henner Schellschmidt Telephone: +49 228 843 135
Wissenschaftliches Institut der AOK Fax: +49 228 843 144
Kortrijker Str. 1 E-mail:
53177 Bonn henner.schellschmidt@wido.bv.aok.de
GERMANY
Dr Rosa Suñol Telephone: +34 93 207 66 08
Foundation Avedis Donabedian Fax: +34 93 459 38 64
Provença 293 pral E-mail: fad@fadq.org
08037 Barcelona
SPAIN
Rapporteur
Dr Charles Shaw Telephone: +44 20 7307 2879
Director, Audit and Quality Fax: +44 20 7307 2422
CASPE Research E-mail: cshaw@kehf.org.uk
11-13 Cavendish Square
London W1G 0AN
UNITED KINGDOM
Observer
Professor Mohammed Hoosen Cassimjee Telephone: +27 33 3879000
Head Fax: +27 33 3979768
Department of Family Medicine E-mail: professorcassimjee@mail.com
Pietermaritzburg Metropolitan Hospital Complex
and Midlands Region
Northdale Hospital. Old Greytown Road. Private
Bag X9006
Pietermaritzburg 3200
SOUTH AFRICA
World Health Organization
Regional Office for Europe
Dr Manuela Brusoni Telephone: +34932418270
Intern Fax: +34932418271
WHO Office for Integrated Health Care Services E-mail: mbr@es.euro.who.int;
Division of Country Support manuela.brusoni@uni-bocconi.it
c/ Marc Aureli, 22-36
08006 Barcelona
SPAIN
Dr Mila Garcia-Barbero Telephone: +34932418270
Head of the Office Fax: +34932418271
WHO Office for Integrated Health Care Services E-mail: mgb@es.euro.who.int
Division of Country Support
c/ Marc Aureli, 22-36
08006 Barcelona
SPAIN
Mr Oliver Gröne Telephone: +34932418270
Technical Officer, Health Services Fax: +34932418271
WHO Office for Integrated Health Care Services E-mail: ogr@es.euro.who.int
Division of Country Support
Dr Isuf Kalo Telephone: +45 39 17 12 65
Regional Adviser for Quality of Health Systems Fax: +45 39 17 18 64
Division of Country Support E-mail: ika@who.dk
8, Scherfigsvej
DK 2100 Copenhagen Ø
DENMARK
Mr Sergio Pracht Telephone: +34932418270
STP- Patients Pathways Fax: +34932418271
WHO Office for Integrated Health Care Services E-mail: spr@es.euro.who.int
Division of Country Support
Dr Carles Triginer Borrell Telephone: +34932418270
Technical Officer, Emergency Medical Services Fax: +34932418271
WHO Office for Integrated Health Care Services E-mail: ctr@es.euro.who.int
Division of Country Support
Mr Jeremy Veillard Telephone: +34 93 241 82 70
Technical Officer, Hospital Management Fax: +34 93 241 82 71
WHO Office for Integrated Health Care Services E-mail: jveillard@es.euro.who.int
Division of Country Support