Principles for Best Practice in Clinical Audit (Radcliffe Medical Press)


Principles for Best Practice in Clinical Audit

Radcliffe Medical Press Ltd, 18 Marcham Road, Abingdon, Oxon OX14 1AA, United Kingdom
www.radcliffe-oxford.com
The Radcliffe Medical Press electronic catalogue and online ordering facility. Direct sales to anywhere in the world.

© 2002 National Institute for Clinical Excellence

All rights reserved. This material may be freely reproduced for educational and not-for-profit purposes within the NHS. No reproduction by or for commercial organisations is permitted without the express written permission of NICE.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.
ISBN 1 85775 976 1

Typeset by Aarontype Ltd, Easton, Bristol
Printed and bound by TJ International Ltd, Padstow, Cornwall

Acknowledgements

The preparation of this book was funded by the National Institute for Clinical Excellence. We would like to thank Steve Barrett and Paul Sinfield, formerly of CGRDU, Leicester, for assistance in the early stages of the literature review, and Laura Price, for her work in editing the text of the book. Finally, we thank all those, too numerous to mention by name, who reviewed the book during its development.

Foreword

The time has come for everyone in the NHS to take clinical audit very seriously. Anything less would miss the opportunity we now have to re-establish the confidence and trust upon which the NHS is founded. Public and professional belief in the essential quality of clinical care has been hit hard in recent years, not least by a number of highly public failures. We can no longer think about effectiveness of care as an isolated professional matter.
Clinical governance is the organisational approach for quality that integrates the perspectives of staff, patients and their carers, and those charged with managing our health service. But real commitment is needed from everyone involved if governance is to fulfil its promise.

Concerns about the quality of NHS care have attracted national publicity, public inquiries and a focus on failure. While we must do everything we can to put in place systems to avoid such failings in future, these isolated cases should not dominate our thinking about quality of care. It is just as important that clinical governance should support a process of continuous quality improvement throughout the NHS. Clinical audit is at the heart of clinical governance.

- It provides the mechanisms for reviewing the quality of everyday care provided to patients with common conditions like asthma or diabetes.
- It builds on a long history of doctors, nurses and other healthcare professionals reviewing case notes and seeking ways to serve their patients better.
- It addresses quality issues systematically and explicitly, providing reliable information.
- It can confirm the quality of clinical services and highlight the need for improvement.

This book provides clear statements of principle about clinical audit in the NHS. The authors have reviewed the literature concerned with the development of audit over recent years, and are able to speak about clinical audit with considerable personal authority. Too often in the past local and national clinical audits have failed to bring about change. The Report of the Public Inquiry into Children's Heart Surgery at the Bristol Royal Infirmary 1984–1995 (2001) provides salutary reading for anyone in the NHS who is still inclined to dismiss the importance of clinical audit. But audit cannot be expected to bear fruit unless it takes place within a supportive organisation committed to a mature approach to clinical quality: clinical governance.
Clinical audit does not provide a straightforward or guaranteed solution for each problem. Local audit programmes in primary and secondary care will need to use the principles set out in this book to devise and agree local programmes tailored to address local issues. Nevertheless, we hope you will find that the distillation of evidence and wisdom about audit presented in this book will help you to create audit programmes that are capable of bringing about real improvements.

The National Institute for Clinical Excellence and the Commission for Health Improvement will each have an important part to play in setting the national context within which the NHS addresses the need to review the quality of healthcare. But the real worth of clinical audit will depend on the commitment of local NHS staff and organisations. We hope that this book will help provide a framework for clinical audit that maximises local enthusiasm and commitment to high-quality patient care.

Dame Deirdre Hine, Chair, Commission for Health Improvement
Sir Michael Rawlins, Chairman, National Institute for Clinical Excellence

Introduction: using the method, creating the environment

What is clinical audit?

Clinical audit is a quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change. Aspects of the structure, processes, and outcomes of care are selected and systematically evaluated against explicit criteria. Where indicated, changes are implemented at an individual, team, or service level and further monitoring is used to confirm improvement in healthcare delivery. This definition is endorsed by the National Institute for Clinical Excellence.

Who is this book for?

This book is written primarily for staff leading clinical audit and clinical governance projects and programmes in the NHS.
It should also prove useful to many other people involved in audit projects, large or small and in primary or secondary care.

Why should I read it?

Every NHS health professional seeks to improve the quality of patient care. The concept that clinical audit can provide the framework in which this can be done collaboratively and systematically is reflected in current NHS policy statements.

- As a first step, clinical audit was integrated into clinical governance systems (Department of Health, 1997; Welsh Office, 1996).
- Full participation in clinical audit by all hospital doctors was subsequently made an explicit component of clinical governance (Department of Health, 1998; Welsh Office, 1998).
- The NHS Plan (Department of Health, 2000) has taken these policies further, with proposals for mandatory participation by all doctors in clinical audit and developments to support the involvement of other staff, including nurses, midwives, therapists and other NHS staff.

Improving Health in Wales (Minister for Health and Social Services, 2001) introduced annual appraisals that address the results of audit. The General Medical Council now advises all doctors that they: 'must take part in regular and systematic medical and clinical audit, recording data honestly. Where necessary, you must respond to the results of audit to improve your practice, for example by undertaking further training' (General Medical Council, 2001). The UK Central Council for Nursing, Midwifery and Health Visiting states that clinical governance, assisting the coordination of quality improvement initiatives such as clinical audit, is: 'the business of every registered practitioner' (UK Central Council for Nursing, Midwifery and Health Visiting, 2001).
The recommendations of Learning from Bristol: the Report of the Public Inquiry into Children's Heart Surgery at the Bristol Royal Infirmary 1984–1995 (Department of Health, 2001) (referred to hereafter as 'the Bristol Royal Infirmary Inquiry') can now be added to these statements. In particular, the Inquiry makes the following recommendations.

143 The process of clinical audit, which is now widely practised within trusts, should be at the core of a system of local monitoring of performance.

144 Clinical audit must be fully supported by trusts. They should ensure that healthcare professionals have access to the necessary time, facilities, advice, and expertise in order to conduct audit effectively. All trusts should have a central clinical audit office that coordinates audit activity, provides advice and support for the audit process, and brings together the results of audit for the trust as a whole.

145 Clinical audit should be compulsory for all healthcare professionals providing clinical care and the requirement to participate in it should be included as part of the contract of employment.

The Government has welcomed the recommendations of the Bristol Royal Infirmary Inquiry (Learning from Bristol: the Department of Health's Response to the Report of the Public Inquiry into Children's Heart Surgery at the Bristol Royal Infirmary 1984–1995, 2002). (The full set of recommendations relevant to audit and the Government's response are to be found at Appendix VIII.) It follows that all healthcare professionals need to understand the principles of clinical audit, and the organisations in which they work must support them in undertaking clinical audit.

Using the method

Clinical audit can be described as a cycle or a spiral (see Figure 1).
Within the cycle there are stages that follow a systematic process of establishing best practice, measuring care against criteria, taking action to improve care, and monitoring to sustain improvement. The spiral suggests that as the process continues, each cycle aspires to a higher level of quality.

Clinical audit requires the use of a broad range of methods from a number of disciplines, for example, organisational development, statistics, and information management. Clinical audit can be undertaken by individual healthcare staff, or groups of professionals in single or multidisciplinary teams, usually supported by clinical audit staff from NHS trusts or primary care organisations. At the opposite end of the scale, a clinical audit project may involve all services in a region or even in the country. Effective systems for managing the audit project and implementing change are important whether a large number of people or only a few are involved in the audit project. At the start of an audit project, spending time on creating the right environment may be more important than spending time on the method itself.

Creating the environment

The Government has introduced clinical governance to support organisational change in the way care is delivered within the NHS. Clinical governance has been defined as: '. . . a framework through which NHS organisations are accountable for continuously

[Figure 1. The clinical audit cycle. The cycle poses the questions: What are we trying to achieve? Are we achieving it? Why are we not achieving it? Doing something to make things better. Have we made things better? Methods supporting the stages include guidelines, evidence, outcomes, sampling, patient and public involvement, benchmarking, consensus, data analysis, process re-design, questionnaire design, data collection, facilitation, change management, monitoring and continuous quality improvement.]
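The cycle shown in Figure 1 can be mirrored in a short loop: measure care against a criterion, act if the target level of performance is not met, then re-measure. The sketch below is purely illustrative; every name and number is invented, and a real audit is an organisational process rather than a program.

```python
# Hypothetical sketch of the clinical audit cycle as an iterative loop.
# All names and figures are invented; this only mirrors the structure
# of the cycle: measure against a criterion, act, and re-measure.

def audit_cycle(cases, criterion, target, improve, max_rounds=3):
    """Return the final level of performance (%) after auditing."""
    performance = 0.0
    for round_no in range(1, max_rounds + 1):
        met = sum(1 for case in cases if criterion(case))
        performance = 100.0 * met / len(cases)   # level of performance
        print(f"Round {round_no}: {performance:.0f}% of cases met the criterion")
        if performance >= target:
            break                  # target achieved: monitor to sustain
        cases = improve(cases)     # implement change, then re-audit

    return performance

# Invented example: systolic blood pressure maintained below 160 mmHg
records = [{"sbp": 150}, {"sbp": 170}, {"sbp": 158}]
controlled = lambda r: r["sbp"] < 160
intensify = lambda rs: [{"sbp": r["sbp"] - 15} for r in rs]

final = audit_cycle(records, controlled, target=90, improve=intensify)
```

Each pass round the loop corresponds to one turn of the audit spiral, with re-measurement confirming (or refuting) that the change improved care.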
The book also includes:

- a guide to online resources for clinical audit
- a list of national audit projects, sponsored by the National Institute for Clinical Excellence
- recommendations from the Bristol Royal Infirmary Inquiry and the Government's response
- lessons learnt from the National Sentinel Audit Programme
- information from the Commission for Health Improvement on examining clinical audit during a clinical governance review
- a list of the desirable characteristics of audit review criteria
- a further reading list.

Also included are checklists developed from the key points and key notes from each stage. These are designed to complement other assessment tools, summarising the important elements of clinical audit highlighted within the book. Reviewing audit projects, or plans for projects, can help to improve their quality, and these checklists can aid the design and conduct of audits. They can be used by clinicians or audit staff before an audit starts, or after it has finished to look at what might have been done differently. A checklist for reviewing audit programmes is also included, and those who lead audit in health service organisations may use it to identify ways in which their programmes could be strengthened.

Although the checklists are intended as learning aids, they are not suited to use as part of a formal assessment process, for which other audit review systems are available. The Commission for Health Improvement (CHI) assesses audit programmes as part of its reviews of health service organisations (the key elements included in the CHI review are described in an appendix). A particularly useful review system for trusts enables self-assessment of the performance of the audit programme and can be used to complement the checklists in this book (Walshe and Spurgeon, 1997); this can be downloaded from www.hsmc3.bham.ac.uk/hsmc. The findings of the literature review are set out in Appendix XI.
Electronic access

All the resources associated with this book and the full literature review are available on the CD-ROM and via the NICE website (www.nice.org.uk).

References

Department of Health. The New NHS: Modern, Dependable. London: The Stationery Office, 1997.
Department of Health. A First Class Service. Quality in the New NHS. London: Department of Health, 1998.
Department of Health. The NHS Plan: A Plan for Investment – A Plan for Reform. London: The Stationery Office, 2000.
Department of Health. Learning from Bristol: the Report of the Public Inquiry into Children's Heart Surgery at the Bristol Royal Infirmary 1984–1995. Command paper CM 5207. London: The Stationery Office, 2001.
Department of Health. Learning from Bristol: the Department of Health's Response to the Report of the Public Inquiry into Children's Heart Surgery at the Bristol Royal Infirmary 1984–1995. Command paper CM 5363. London: The Stationery Office, 2002.
Dixon N. Good Practice in Clinical Audit – A Summary of Selected Literature to Support Criteria for Clinical Audit. London: National Centre for Clinical Audit, 1996.
General Medical Council. Good Medical Practice. London: General Medical Council, 2001.
Minister for Health and Social Services. Improving Health in Wales – A Plan for the NHS and its Partners. Cardiff: National Assembly for Wales, 2001.
UK Central Council for Nursing, Midwifery and Health Visiting. Professional Self-Regulation and Clinical Governance. London: United Kingdom Central Council for Nursing, Midwifery and Health Visiting, 2001.
Walshe K, Spurgeon P. Clinical Audit Assessment Framework. HSMC Handbook Series 24. Birmingham: University of Birmingham, 1997.
Welsh Office. Framework for the Development of Multi-professional Clinical Audit. Cardiff: Welsh Office, 1996.
Welsh Office. Quality Care and Clinical Excellence. Cardiff: Welsh Office, 1998.
Stage One: preparing for audit

Good preparation is crucial to the success of an audit project. National audit projects reviewed by the National Institute for Clinical Excellence (NICE) suggest that two broad areas of preparation must be addressed (see Appendix IX):

- project management, including topic selection, planning and resources, and communication
- project methodology, including design, data issues, implementability, stakeholder involvement, and the provision of support for local improvement.

In practical terms, preparing for audit can be broken down into five elements that are discussed through the chapter:

- involving users in the process (for the purpose of this book, the terms 'users' and 'service users' include patients, other service users and carers, and members of groups and organisations that represent their interests)
- topic selection
- defining the purpose of the audit
- providing the necessary structures
- identifying the skills and people needed to carry out the audit, and training staff and encouraging them to participate.

An example of the factors that contributed to a successful audit (in secondary care) is shown in Table 1.

Involving users

The focus of any audit project must be those receiving care. Users can be genuine collaborators, rather than merely sources of data (Balogh et al., 1995).

Table 1. An example of factors contributing to the success of an audit (secondary care). The audit took place in a Walsall clinic for survivors of myocardial infarction; coronary heart disease is a major health issue in Walsall (Giles et al., 1998).

- Support from the health authority
- Partnership with primary care
- A good link with the patient support group
- Involvement of patients
- A good evidence base for guidelines
- Effective distribution of guidelines
- Use of information technology
- Improved record keeping
- Audit used as an inbuilt element of work

Sources of user information

The concerns of users can be identified from various sources, including:

- letters containing comments or complaints
- critical incident reports
- individual patients' stories or feedback from focus groups
- direct observation of care
- direct conversations.

The most common method of involving users in clinical audit is the satisfaction survey. Involvement of users in the planning and negotiation of topics for audit is much less common. Some sources of guidance on how to involve users and the public at different stages of the audit cycle are given in Appendix IV.

New systems for user involvement

Systems are being introduced into the NHS locally to identify and discuss the issues that are of most concern to service users; for example, in England, each trust will have a Patient Forum and a Patient Advocacy and Liaison Service (Department of Health, 2000). These systems are not focused on audit, but they will provide a route through which topics for audit can be identified. Trusts will also be required to undertake regular user surveys.

The involvement of users in decisions about their health is also central to the new direction in health and social policy in Wales (Minister for Health and Social Services, 2001). For example, in Wales:

- Local Health Groups and NHS trusts produce public involvement plans
- 'signpost' guidance has been issued to the NHS to assist preparation of baseline assessments of public involvement
- Community Health Councils have been retained and strengthened to ensure the most effective representation of patients.

The publication A Guide to Involving Older People in Local Clinical Audit Activity: National Sentinel Audits Involving Older People (Kelson, 1999) offers practical advice and many examples of how older people can assist at many stages of the audit cycle, from selection of topics to dissemination of findings.
One example is a project in Fife, in which user panels consisting of housebound people over 75 years of age contributed to the development of a hospital discharge policy. In a project to involve patients with brain tumours in an assessment of the service at King's College Hospital, London, a process map of the patient's journey through the service was developed and randomly selected patients were interviewed in their own homes (Grimes, 2000). After analysing patients' comments and identifying problems, new documentation was produced to help staff through issues requiring discussion with patients during their stay in hospital. Aspects of outpatient activity, such as turn-around times for biopsy results and availability of clinical scans, were also addressed.

National involvement

At a national level, there is a responsibility to ensure that clinical audit is an integral part of the quality improvement and clinical governance strategies. NICE provides guidance on clinical audit with its guidelines, and as part of its clinical governance reviews the Commission for Health Improvement (CHI) ensures that NHS trusts and primary care organisations undertake audit. CHI's reports give a detailed assessment of the state of clinical audit within an organisation, citing examples of good and poor practice (Table 2). Further details of the review process and clinical governance reports are available from CHI's website (www.chi.nhs.uk). In addition, the Royal Colleges and professional bodies are involved, with their members, in raising awareness and support for clinical audit.

Users in audit project teams

Users are increasingly involved as members of clinical audit project teams. Where users are involved in this way, careful thought needs to be given to issues of access, preparation and support (Kelson, 1998).
Selecting a topic

The starting point for many quality improvement initiatives – selecting a topic for audit – needs careful thought and planning, because any clinical audit project needs a significant investment of resources.

Audit priorities

The clinical team has an important role in prioritising clinical topics, and the following questions may be a useful discussion guide.

Table 2. Poor practice identified in one trust during a clinical governance review carried out by the CHI. The trust was urged to make greater use of clinical audit to improve services for users, encourage multidisciplinary audits, and ensure that findings were implemented, monitored, and evaluated.

- Clinical audits in response to reported incidents, complaints, NICE guidance or National Service Frameworks were seldom performed
- Few multidisciplinary audits were undertaken
- Patients' perspectives were not generally considered
- There was no systematic implementation or follow-up of audit findings, despite examples of good practice in some directorates

audit staff with the breadth of skills to work across the range of issues encompassed within clinical governance is significant. Clinical staff will struggle to complete effective clinical audit projects unless they have expert support in terms of project management, knowledge of clinical audit techniques, facilitation, data management, staff training and administration. Funding is also required for clinical staff to participate in audit (see Stage Two: selecting criteria).

Clinical audit projects are expensive and their costs must be justifiable. Project assessments should include cost as part of the review (Walshe and Spurgeon, 1997). It should be remembered, however, that the topics selected for clinical audit are priorities within a given service, and the clinical audit process can provide valuable data to assist decision-making about the use of resources locally within that service.
Budget holders must seriously consider any findings that a service needs further resources in order to improve. One example of this is an audit project undertaken to identify all patients taking angiotensin-converting enzyme (ACE) inhibitors in one general practice, focusing on those whose blood pressure was not maintained below 160/90 mmHg. The impact of various interventions on the cost of improving care was analysed at the end of the audit cycle. The audit showed that it was possible to reduce blood pressure further in a significant number of patients receiving ACE inhibitors, but drug costs and the number of referrals to specialist services would both rise (Jiwa and Mathers, 2000).

Making time

The main barriers to audit reported in the literature are lack of resources, especially time. Both protected time to investigate the audit topic and collect and analyse data, and time to complete an audit cycle are in short supply. Clearly, if clinical audit is to fulfil its potential as a model for quality improvement, staff of all grades need to be allocated the time to participate fully.

Identifying and developing skills for audit projects

To be successful, a clinical audit project needs to involve the right people with the right skills from the outset. Therefore, identifying the skills required and organising the key individuals should be priorities. Certain skills are needed for all audit projects, and these include:

- project leadership, project organisation, project management
- clinical, managerial, and other service input and leadership
- audit method expertise
- change management skills
- data collection and data analysis skills
- facilitation skills.

Audit project teams

The usual approach, even for small projects, is to set up an audit project team customised to the specific audit project, with team members providing many of the skills needed.
For example, clinical service representatives and audit staff are usually included in audit project teams. It is also important that the team includes members from all the relevant groups involved in care delivery, and not just those with clinical experience. So, according to the project topic, an audit project team in a primary care setting may include a surgery receptionist, while a team in secondary care may include porters or catering staff. All audit projects need direct access to people with a full understanding of the processes of clinical care and the information systems used within the service, and this essential real-world knowledge is most likely to be found from the staff working in the service.

All project team members should have:

- a basic understanding of clinical audit (one barrier to successful audit highlighted in the review of the evidence is lack of training and audit skills)
- an understanding of and commitment to the plans and objectives of the project
- an understanding of what is expected of the project team – this needs to be clarified at the outset and may be expressed in a 'terms of reference' document.

It may also be useful to establish ground rules for meetings, so that everyone is clear about the way in which the team will function. A trained facilitator can guide and enable effective team working. Finally, if the audit team is to improve the performance of a clinical service, team members must be able to communicate effectively with their colleagues. Members of the project team must, therefore, have the full confidence and support of the staff and organisation and be able to promote the audit and plans for quality improvement.

Role of clinical audit staff in audit projects

A good understanding of audit methods, as well as significant organisational and analytical skills, is needed when carrying out many clinical audits. Local audit staff can provide expert help.
Clinical audit staff have a number of important roles, though these may differ between organisations.

- Information/knowledge support – in collaboration with colleagues in library and information services, audit teams should have access to information technology (IT) facilities to help gather evidence for standard setting and search for other projects on the same topic.
- Data management – clinical audit staff have expertise in data collection, entry, analysis, and presentation.
- Facilitation – some clinical audit staff have particular training and skills in group dynamics. The role of a facilitator in the context of clinical audit is to help the team to assimilate the evidence, to come to a common understanding of the clinical audit methodology, to guide the project from planning to reporting, and to enable the group to work together effectively.
- Project management – project management and leadership is an important factor in quality improvement projects. In the words of McCrea (1999), 'Since both health care and clinical audit depend on the quality of teamwork, more attention needs to be given to the development of appropriate skills of team leadership.' Achieving improvements in quality through clinical audit often depends on managing relationships and resources across the wider organisation as well as addressing issues within the team immediately involved in the audit.
- Training – in many NHS organisations, audit staff are involved in training and support on a wide range of quality improvement skills for clinicians, managers and others involved in clinical governance.

Healthcare Quality Quest (1999) and the Clinical Audit Association (www.the-caa-ltd.demon.co.uk) have developed organisational roles and competencies related to clinical effectiveness and clinical audit to make explicit the way in which designated audit staff and clinical staff work together to improve the quality of care.
Developing skills

Lack of training and audit skills is highlighted in the review of the evidence as a barrier to successful audit. One assessment framework states that an ongoing programme of training in clinical audit for clinical professionals should be available to members of clinical staff from different departments/services and different professions (Walshe and Spurgeon, 1997). Advice and support for clinical audit are, in fact, available to staff working in most NHS organisations, and may include:

- advice, including the selection of methods
- ongoing help in the use of methods
- access to training in clinical audit methods.

Although many NHS trusts and primary care organisations run excellent 'in-house' clinical audit training, staff are often unable to attend because of their other duties. Providing sufficient cover for staff development and training has budgetary implications – indeed, staff salaries are the major expense involved in clinical audit. This is a key issue in developing organisational strategies to support clinical governance, and needs to be taken seriously if clinical audit is to be successful.

Encouraging and supporting staff participation in audit

In any clinical audit project, the people involved in delivering and receiving care should be involved, either directly or by means of representation, from start to finish.

References

Bate SP. Strategies for Cultural Change. Oxford: Butterworth-Heinemann, 1998.
Buttery Y. Implementing evidence through clinical audit. In: Evidence-based Healthcare. Oxford: Butterworth-Heinemann, 1998: 182–207.
Cox S, Wilcock P, Young J. Improving the repeat prescribing process in a busy general practice. A study using continuous quality improvement methodology. Quality in Health Care 1999; 8: 119–125.
Department of Health. The NHS Plan: A Plan for Investment – A Plan for Reform. London: The Stationery Office, 2000.
Dickinson K, Edwards J. Clinical audit: failure or hidden success?
Journal of Clinical Excellence 1999; 1: 97–100.
Giles PD, Cunnington AR, Payne M, Crothers DC, Walsh MS. Cholesterol reduction for the secondary prevention of coronary heart disease: a successful multidisciplinary approach to implementing evidence-based treatment in a district general hospital. Journal of Clinical Effectiveness 1998; 3: 156–60.
Grimes K. Using patients' views to improve a health care service. Journal of Clinical Excellence 2000; 2: 99–102.
Healthcare Quality Quest. Clinical Audit Manual: Using Clinical Audit to Improve Clinical Effectiveness. Romsey: Healthcare Quality Quest, 1999.
Houghton G, O'Mahoney D, Sturman SG, Unsworth J. The clinical implementation of clinical governance: acute stroke management as an example. Journal of Clinical Excellence 1999; 1: 129–32.
Jiwa M, Mathers N. Auditing the use of ACE inhibitors in hypertension. Reflecting the cost of clinical governance? Journal of Clinical Governance 2000; 8: 27–30.
Kelson M. Promoting Patient Involvement in Clinical Audit: Practical Guidance on Achieving Effective Involvement. London: College of Health, 1998.
Kelson M. A Guide to Involving Older People in Local Clinical Audit Activity: National Sentinel Audits Involving Older People. London: College of Health, 1999.
McCrea C. Good clinical audit requires teamwork. In: Baker R, Hearnshaw H, Robertson N, eds. Implementing Change with Clinical Audit. Chichester: Wiley, 1999: 119–32.
Minister for Health and Social Services. Improving Health in Wales – A Plan for the NHS and its Partners. Cardiff: National Assembly for Wales, 2001.
Morrell C, Harvey G. The Clinical Audit Handbook. London: Baillière Tindall, 1999.
Schein EH. Organizational Culture and Leadership. 2nd edition. San Francisco: Jossey Bass, 1997.
Walshe K, Spurgeon P. Clinical Audit Assessment Framework. HSMC Handbook Series 24. Birmingham: University of Birmingham, 1997.

Stage Two: selecting criteria

Key points
- Clinical audit can include assessment of the process and/or outcome of care. The choice depends on the topic and objectives of the audit.
- Explicit rather than implicit criteria should be preferred.
- Systematic methods should be used to derive criteria from evidence. These include methods for deriving criteria from good-quality guidelines or from reviews of the evidence.
- Criteria should relate to important aspects of care and be measurable.
- Provided that research evidence confirms that clinical care processes have an influence on outcome, measurement of the process of care is generally more sensitive and provides a direct measure of the quality of care.
- Measurement of outcome can be used to identify problems in care, provided outcomes are clear, influenced by process, and occur within a short period.
- Adjustment for case mix is generally required for comparing the outcomes of different providers.
- If the criteria incorporate, or are based on, the views of professionals or other groups, formal consensus methods are preferable.
- There is insufficient evidence to determine whether it is necessary to set target levels of performance in audit. However, reference to levels achieved in audits undertaken by other professionals is useful.
- In some audits, benchmarking techniques could help participants in audit to avoid setting unnecessarily low or unrealistically high target levels of performance.

Defining criteria

Within clinical audit, criteria are used to assess the quality of care provided by an individual, a team, or an organisation. These criteria:

- are explicit statements that define what is being measured
- represent elements of care that can be measured objectively.

Recent Government publications indicate that health professionals will be expected to develop criteria and standards that measure a wide range of features of quality in healthcare, such as access to care as well as satisfaction with the care received (Department of Health, 2000).
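To illustrate what ‘measured objectively’ can mean in practice, a review criterion can be treated as an explicit statement plus a yes/no check applied to each record. This is only a sketch; the record fields, the criterion wording, and the 12-month threshold are hypothetical and not taken from any audit protocol in this book:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    statement: str                  # explicit statement of what is being measured
    check: Callable[[dict], bool]   # objective test applied to a single record

# Hypothetical criterion: blood pressure recorded within the last 12 months
bp_recorded = Criterion(
    statement="BP recorded in the last 12 months",
    check=lambda r: r.get("months_since_bp") is not None
                    and r["months_since_bp"] <= 12,
)

# Toy records; in a real audit these would be abstracted from clinical notes
records = [{"months_since_bp": 3}, {"months_since_bp": 20}, {"months_since_bp": None}]
met = sum(bp_recorded.check(r) for r in records)
performance = 100 * met / len(records)
print(f"{performance:.0f}% of records meet the criterion")  # 33% of records meet the criterion
```

The point of the sketch is that a well-formed criterion leaves no room for interpretation when a record is assessed: either the check passes or it does not.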
Different professional groups have used different definitions of ‘criteria’ and ‘standards’ (Tables 3 and 4). For clarity, this book uses the definition of criteria from the Institute of Medicine and the phrase ‘level of performance’ rather than the potentially more confusing term ‘standard’.

Criteria can be classified into those concerned with:

- structure (what you need)
- process (what you do)
- outcome of care (what you expect).

The advantage of categorising the criteria in this way is that if an outcome is not achieved and the structure and processes necessary have already been identified, the source of the problem should be easier to identify.

Structure criteria

Structure criteria refer to the resources required. They may include the numbers of staff and skill mix, organisational arrangements, the provision of equipment and physical space.

Table 4. Definitions of a ‘standard’

- An objective with guidance for its achievement given in the form of criteria sets which specify required resources, activities, and predicted outcomes (Royal College of Nursing, 1990)
- The level of care to be achieved for any particular criterion (Irvine and Irvine, 1991)
- The percentage of events that should comply with the criterion (Baker and Fraser, 1995)

Table 3. Definitions of a ‘criterion’

- An item or variable which enables the achievement of a standard (broad objective of care) and the evaluation of whether it has been achieved or not (Royal College of Nursing, 1990)
- A definable and measurable item of healthcare which describes quality and which can be used to assess it (Irvine and Irvine, 1991)
- A systematically developed statement that can be used to assess the appropriateness of specific healthcare decisions, services, and outcomes (Institute of Medicine, 1992)

Prioritising the evidence method

This method of developing criteria reviews the evidence in the source guidelines or systematic reviews for each element of care identified as important in determining outcome (Fraser et al., 1997). The criteria that have most impact on outcome are then categorised as ‘must do’ or ‘should do’ (Tables 5 and 6). The process can be summarised as follows.

- Identify key elements of care from review of good-quality guidelines or systematic reviews.
- Carry out focused systematic literature reviews in relation to each key element of care to develop, when it is justified by evidence, one or more criteria for each element of care.
- Prioritise the criteria into ‘must do’ or ‘should do’ on the strength of research evidence and impact on outcome.
- Present the criteria in a protocol.
- Include data collection forms, instructions etc.
- Submit the protocol to external peer review.

Table 6. Additional (‘should do’) criteria for benzodiazepine prescribing. There is some research evidence for these criteria, but their impact on outcome is less certain (Shaw and Baker, 2001)

- The records show that, if the patient is aged 65 years or over, they or their carer(s) have been given advice on the risks for elderly patients
- Chronic users (use for 4 weeks or longer) should be identified and encouraged to reduce
- The drug taper should be gradual, with a reduction of 2–2.5 mg diazepam equivalent every 2 weeks
- Before drug reduction is started, the patient has been switched to an equivalent dose of diazepam

Table 5. Essential (‘must do’) criteria for reviewing benzodiazepine prescribing. There is firm research evidence to justify their inclusion (Shaw and Baker, 2001)
- New benzodiazepine prescriptions must only be issued for short-term relief (no longer than four weeks) of severe anxiety or insomnia
- The records show that a patient receiving a prescription (either new or repeat) for a benzodiazepine has been advised on non-drug therapies for anxiety or insomnia
- The records show that the patient has been given appropriate advice on the risks, including the potential for dependence
- The records show that patients prescribed benzodiazepines are reviewed regularly, at least three-monthly

RAND/UCLA appropriateness method

This modified panel process, based on the RAND appropriateness method, was originally developed for assessing the performance of various investigative and surgical procedures in the USA (Kahn et al., 1986). The findings of a literature review are submitted to a panel of clinicians, chosen for their clinical expertise and professional influence, who are asked to rate the appropriateness of a set of possible indications for the particular procedure on a 9-point scale from 1 (extremely inappropriate) to 9 (extremely appropriate). A first round of ratings is undertaken without allowing any discussion between the panellists, and a second round is undertaken after a structured panel meeting.

Criteria for assessing the care of people with stable angina, asthma, and non-insulin-dependent diabetes have been developed in the UK using an updated version of these methods (Campbell et al., 1999). Ratings of expert panels can closely reflect the views of clinicians (Ayanian et al., 1998), but different panels produce slightly different criteria, and when they are used to evaluate the quality of care, very different results may be obtained (Shekelle et al., 1998).

The advantages of this method are that it:

- combines systematic review of the scientific literature with expert opinion
- yields specific criteria that can be used for review criteria or practice guidelines, or both
- provides a quantitative description of the expert judgement of a multidisciplinary group of practitioners
- gives equal weight to each panellist in determining the final result.

AHCPR method

Yet another method of developing criteria from guidelines has been produced by the Agency for Health Care Policy and Research (AHCPR), with its own evidence-based guidelines as the starting point (Agency for Health Care Policy and Research, 1995a and 1995b). The procedure is relatively complex, because the guidelines cover most elements of care, taking note of different levels of evidence. The method uses a panel to rate elements of care on the basis of their importance to quality of care and feasibility for monitoring (Hadorn et al., 1996). Several sets of criteria have been developed in the UK from guidelines supplemented by consultation with expert panels (Hutchinson et al., 2000).

Criteria based on professional consensus

If criteria incorporate or are based on the views of professional groups, it is better to use formal consensus methods. However, different consensus groups are likely to produce different criteria. A checklist is useful to ensure that an explicit process is used to identify, select, and combine the evidence for the criteria, and that the strength of the evidence is assessed in some way (Naylor and Guyatt, 1996).

Several sets of locally based criteria have been developed by involving clinical experts and consensus panels. For example, in an initiative to transfer outpatient follow-up after cardiac surgery from secondary to primary care, protocols for optimal care in general practice were developed in collaboration with a consultant cardiologist, with the criteria and standards being agreed between the cardiologist, general practitioners and nurses (Lyons et al., 1999).
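Panel ratings of the kind used in the RAND/UCLA and similar consensus methods are commonly summarised by the panel median on the 9-point scale. The sketch below illustrates that aggregation step only; the classification bands (median 7–9 appropriate, 1–3 inappropriate, otherwise uncertain) are a simplification that omits the method’s formal disagreement rules, and the ratings shown are invented:

```python
from statistics import median

def classify(ratings):
    """Classify one indication from panellists' 1-9 appropriateness ratings.

    Simplified sketch: real RAND/UCLA panels also apply disagreement rules
    before accepting the median-based classification.
    """
    m = median(ratings)
    if m >= 7:
        return "appropriate"
    if m <= 3:
        return "inappropriate"
    return "uncertain"

# Hypothetical second-round ratings from a nine-member panel
print(classify([8, 9, 7, 8, 6, 9, 8, 7, 8]))  # appropriate
print(classify([2, 3, 1, 4, 2, 3, 2, 1, 3]))  # inappropriate
print(classify([5, 6, 4, 7, 5, 3, 6, 5, 4]))  # uncertain
```

Using the median rather than the mean gives each panellist equal weight and limits the influence of a single extreme rating, which matches the equal-weighting property noted above.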
Locally developed criteria have the advantage that it is easier to take into account local factors such as the concerns of local users. In practice, the most efficient approach is likely to be the use of criteria developed by experts from evidence, together with criteria based on the preferences of users determined locally.

Involving users

Practitioners and users may assess the quality of care in different ways. Practitioners are likely to place greater value on clinical competence and measurable benefits to patient health status or outcome. Users, on the other hand, although they value competence, might also be concerned that a holistic approach to care is adopted and be more interested in process criteria. In addition, different patient groups will have different perspectives. For example, older people may have very specific views on communication skills, convenience and accessibility (Table 7). Issues like these need to be translated into measurable criteria in collaboration with healthcare professionals.

Service users can also become usefully involved in developing criteria that take account of the needs of people with their particular condition, from specific age groups, or ethnic or social backgrounds. Audit teams can collaborate with users to establish their experience of the service and the important elements of care from which criteria can be developed. Several qualitative methods are available to help with understanding users’ experiences. These include:

- the critical incident technique (Powell et al., 1994)
- focus groups (Kelson et al., 1998)
- consumer audit (Fitzpatrick and Boulton, 1994).

In a focus group involving people who had suffered strokes and their carers, perceived deficiencies were reported in:

- diagnosis
- treatment and care in hospital

Table 7. Outcome measures that older people may consider important (Kelson, 1999)

- The attitude and manner in which a treatment or intervention was carried out
- The effect of treatment and care on quality of life and socio-psychological and emotional outcomes, as well as purely clinical outcomes
- The level and effectiveness of cooperation between different sectors and agencies, taking into account the older person’s expectations, aspirations, and preferences

In Wales, the National Assembly’s Innovations in Care Team (IiC) coordinate the best practice programme, which includes seedcorn funding for innovative schemes, learning events, and information on best practice. The National Assembly for Wales’ Clinical Governance Support and Development Unit (CGSDU) provides learning opportunities through clinical governance network support arrangements.

Care pathways

Integrated care pathways define the expected timing and course of events in the care of a patient with a particular condition (Kitchiner and Bundred, 1996). They describe explicitly all the expected processes of care. The topics selected are usually high-volume conditions, and the development of the pathway begins with a review of the scientific evidence. A group consisting of representatives of all the staff involved in care identifies key milestones and maps the process so that duplications or wasteful activities can be highlighted. A care pathway indicates how care should be provided at each stage of the patient’s management and makes measuring performance easier.

A copy of the pathway can be included in the patient’s records, to be used by all professional groups caring for the patient. This minimises duplication and documentation, and allows variations from the pathway to be identified and investigated, and appropriate action to be taken.

Care pathways are easier to introduce when there is established routine practice and little variation between users. Their introduction requires appreciable time and effort, but they offer an alternative approach that incorporates both systems of care and clinical management.
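The milestone-mapping and variance-tracking idea behind care pathways can be illustrated with a toy sketch. The milestones and day targets below are invented for illustration and do not represent any published pathway:

```python
# A care pathway as an ordered list of expected milestones, each with a
# target day after admission; actual care is compared against the pathway
# so that variances can be identified and investigated.
pathway = [
    ("admission assessment", 0),
    ("start of treatment", 1),
    ("mobilisation", 2),
    ("discharge planning", 3),
]

def variances(actual_days):
    """Return milestones achieved later than the pathway target (days late)."""
    return [
        (name, actual_days[name] - target)
        for name, target in pathway
        if name in actual_days and actual_days[name] > target
    ]

# One hypothetical patient's recorded milestone days
patient = {"admission assessment": 0, "start of treatment": 1,
           "mobilisation": 4, "discharge planning": 3}
print(variances(patient))  # [('mobilisation', 2)]
```

In practice the variance record would also capture the reason for the deviation, since investigating why a milestone was missed is the point of the exercise.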
More pathways have been written for the management of surgical than medical conditions. Although detailed evidence about their benefits is limited, encouraging reports from some services are available. For example, the introduction of care pathways over a period of eight years in one hospital was associated with improvements in the management of several conditions (Layton et al., 1998).

References

Agency for Health Care Policy and Research. Using Clinical Practice Guidelines to Evaluate Quality of Care. Volume 1: Issues. AHCPR publication no 95-0045. US Department of Health and Human Services, 1995a.
Agency for Health Care Policy and Research. Using Clinical Practice Guidelines to Evaluate Quality of Care. Volume 2: Methods. AHCPR publication no 95-0046. US Department of Health and Human Services, 1995b.
Ayanian JZ, Landrum MB, Normand SL, Guadagnoli E, McNeil BJ. Rating appropriateness of coronary angiography – do practising physicians agree with an expert panel and with each other? New England Journal of Medicine 1998; 338: 1896–1904.
Baker R, Fraser RC. Development of audit criteria: linking guidelines and assessment of quality. British Medical Journal 1995; 311: 370–3.
Campbell SM, Roland MO, Shekelle PG, Cantrill SA, Buetow SA, Cragg DK. Development of review criteria for assessing the quality of management of non-insulin dependent diabetes mellitus in general practice. Quality in Health Care 1999; 8: 61–5.
Department of Health. The NHS Plan: A Plan for Investment – A Plan for Reform. London: The Stationery Office, 2000.
Department of Health. The Essence of Care. Patient-Focused Benchmarking for Health Care Practitioners. London: Department of Health, 2001.
Dixon N. Good Practice in Clinical Audit – A Summary of Selected Literature to Support Criteria for Clinical Audit. London: National Centre for Clinical Audit, 1996.
Ellis JM.
Sharing the evidence: clinical practice benchmarking to improve continuously the quality of care. Journal of Advanced Nursing 2000; 32: 215–25.
Fitzpatrick R, Boulton M. Qualitative methods for assessing health care. Quality in Health Care 1994; 3: 107–13.
Fraser RC, Khunti K, Baker R, Lakhani M. Effective audit in general practice: a method for systematically developing audit protocols containing evidence-based audit criteria. British Journal of General Practice 1997; 47: 743–6.
Hadorn DC, Baker DW, Kamberg CJ, Brook RH. Phase II of the AHCPR-sponsored heart failure guideline: translating practice recommendations into review criteria. Journal on Quality Improvement 1996; 22: 265–76.
Hearnshaw HM, Harker RM, Cheater FM, Baker RH, Grimshaw GM. Expert consensus on the desirable characteristics of review criteria for the improvement of healthcare quality. Quality in Health Care 2001; 10: 173–8.
Howitt A, Armstrong D. Implementing evidence based medicine in general practice: audit and qualitative study of antithrombotic treatment for atrial fibrillation. British Medical Journal 1999; 318: 132–47.
Hutchinson A, McIntosh A, Anderson JP, Gilbert CL, Field R. Evidence Based Review Criteria for Type 2 Diabetes Foot Care. Sheffield: RCGP Effective Clinical Practice Unit, University of Sheffield, 2000.
Institute of Medicine. Guidelines for Clinical Practice: From Development to Use. Washington DC: National Academic Press, 1992.
Irvine D, Irvine S. Making Sense of Audit. Oxford: Radcliffe Medical Press, 1991.
Kahn LK, Roth CP, Fink A, Keesey J, Brook RH, Park RE, Chassin MR, Solomon DH. Indications for Selected Medical and Surgical Procedures – a Literature and Ratings of Appropriateness. Colonoscopy. RAND R-3204/5-CWF/HF/PMT/RWJ. Santa Monica, 1986.
Kahn LK, Rubenstein LV, Sherwood MJ, Brook RH. Structured Implicit Review for Physician Implicit Measurement of Quality of Care: Development of the Form and Guidelines for Its Use. RAND note N-3016-HCFA. Santa Monica, 1989.
Kelson M. A Guide to Involving Older People in Local Clinical Audit Activity: National Sentinel Audits Involving Older People. London: College of Health, 1999.
Kelson M, Ford C, Rigge M. Stroke Rehabilitation: Patients’ and Carers’ Views. London: Royal College of Physicians, 1998.
Kitchiner D, Bundred P. Integrated care pathways. Archives of Disease in Childhood 1996; 75: 166–8.
Layton A, Moss F, Morgan G. Mapping out the patient’s journey: experiences of developing pathways of care. Quality in Health Care 1998; 7 Suppl: S30–6.
Lyons C, Thomson A, Emmanuel J, Sharma R, Robertson D. Transferring cardiology out-patient follow-up from secondary to primary care. Journal of Clinical Governance 1999; 7: 52–6.
National Assembly for Wales. Fundamentals of Care Project. Cardiff: National Assembly for Wales, 2001.
Naylor CD, Guyatt GH. Users’ guide to the medical literature IX. How to use an article about a clinical utilization review. Journal of the American Medical Association 1996; 275: 1435–9.
Powell J, Lovelock R, Bray J, Philp I. Involving users in assessing service quality: benefits of using a qualitative approach. Quality in Health Care 1994; 3: 199–202.
Royal College of Nursing. Quality Patient Care – the Dynamic Standard Setting System. Harrow: Scutari, 1990.
Scottish Intercollegiate Guidelines Network. Secondary Prevention of Coronary Heart Disease Following Myocardial Infarction. Edinburgh: Scottish Intercollegiate Guidelines Network, 2000 (www.sign.ac.uk).
Shaw E, Baker R. Audit protocol: benzodiazepine prescribing in primary care. Journal of Clinical Governance 2001; 9: 45–50.
Shekelle PG, Kahan JP, Bernstein SJ, Leape LL, Kamberg CJ, Park RE. The reproducibility of a method to identify the overuse and underuse of medical procedures. New England Journal of Medicine 1998; 338: 1888–95.
In addition, medical coding systems can be very unreliable for identifying users, their conditions, and the nature of their care. Audit staff must be very careful about the accuracy, timeliness and completeness of clinical records. It can help to use certain data collection strategies, including:

- multiple sources of information
- direct observation
- encounter sheets completed at the time by the healthcare professional.

Table 9. An example of identifying the data to be collected to audit completeness of data on physiotherapy and medical treatment for ankylosing spondylitis (Lubrano et al., 1998)

- Measures of spinal movement:
  - height
  - chest expansion
  - cervical rotation
  - tragus or occiput to wall distance
  - modified Schober’s flexion and extension
  - side lumbar flexion
  - intermalleolar abduction
  - interfingertip abduction
- Medical information:
  - non-steroidal anti-inflammatory drug usage
  - sulfasalazine usage
  - eye disease
  - aortic incompetence
  - renal disease
- General information:
  - exercise frequency
  - duration of early morning stiffness

Table 10. Setting time periods in an audit of GP referrals for lumbar spine radiography (Garala et al., 1999)

- An initial 3-month retrospective audit examined:
  - the number of lumbar spine radiographs requested by GPs
  - the percentage of these with a positive result
  - the percentage of people experiencing a change in their clinical management as a result of radiography
- A prospective audit of the same practices for the same time period 1 year later showed:
  - a 61% reduction in requests for lumbar spine radiographs
  - an increase in those with positive results

It is always tempting to collect more data than necessary, but only the minimum amount required by the objectives of the audit should be collected. It is better to
There is an inevitable trade-off between data quality and the costs and practicality of collecting data. Sampling users Once the group or population of users has been precisely defined by specifying the ‘inclusion criteria’, it is time to decide on the records fromwhich data will be collected. It may not always be practical or feasible to include each and every user, and in this case, a representative sample is usually chosen from which inferences about the total population can be made. When choosing a sample, two questions need to be answered. . How many of the users (study population) do I need to select? . How do I choose a representative sample? When the sample size has been determined, the sample can be identified. The number needed in the sample is determined by two factors: . the degree of confidence wanted in the findings . resource constraints (time, access to data, costs). Various methods can be used for calculating sample sizes, depending on the type of data. In audit, it is usual to compare the proportion of users whose care is in accord- ance with the criteria before changes in care with the proportion after the changes. The calculation of sample size for proportions is relatively simple (see the example below), but if the data are in a format other than proportions, statistical advice should be sought. Sampling methods range from very simple to highly sophisticated. Random sampling should be used whenever possible to minimise the risk of bias. This means that each case in the group is allotted a number, and a published random numbers table (e.g. Altman, 1991) is used to identify the case numbers to include. Pocket calculators and computers can also generate random numbers. Calculating sample sizes for proportions – an example A primary care team is planning an audit of the care of people with hypertension. They have 300 people being treated for the disorder, but do not have time to review all the records. 
They select one key criterion – those on treatment should have had their blood pressure checked and the result should have been below 150/90 mmHg on three occasions in the past 12 months – and hope to achieve a performance level of 70%. They are willing to accept 5% inaccuracy due to sampling – in other words, if their findings give a level of 70%, on 95% of occasions the true value would lie between 65% and 75%. They use the public domain software programme Epi Info to calculate the sample size using these parameters, and the sample required is found to be 155. (Epi Info is produced by the Centers for Disease Control and Prevention in the USA, and may be downloaded from www.cdc.gov/epiinfo.)

Interval sampling

Random sampling assumes that the sample can be drawn from a defined population of users or cases. However, users do not form a static population, and the individuals making up the user population (i.e. those attending clinics, practices or who are admitted to hospital) will change during the audit. In these circumstances, the sample is often determined by intervals of time; for example, people admitted to the coronary care unit from January to March inclusive. This is a reasonable approach provided that admission rates and the quality of care are not influenced by major seasonal factors.

Two-stage sampling

Two-stage sampling may improve efficiency (Alemi et al., 1998). A small sample is selected first, and if unequivocal conclusions can be drawn, no more data are collected. If the results are ambiguous a larger sample is selected.

Rapid-cycle sampling

The traditional audit cycle often involves collecting relatively large amounts of data over a long period, with a similar protracted data collection after changes are introduced. Although this approach, if correctly applied, provides good information about performance, it can make the process of change slow.
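Returning briefly to the hypertension sampling example above: its figure of 155 can be checked with the usual normal-approximation formula for estimating a proportion, with a finite population correction. This is a sketch only; Epi Info’s exact algorithm is not documented here and its rounding may differ slightly from the version below:

```python
from math import ceil

def sample_size_proportion(N, p, d, z=1.96):
    """Sample size to estimate a proportion p in a population of N
    to within +/- d at ~95% confidence (z = 1.96),
    applying the finite population correction."""
    n0 = z**2 * p * (1 - p) / d**2        # infinite-population sample size
    return ceil(n0 / (1 + (n0 - 1) / N))  # corrected for finite population N

# Hypertension example: 300 patients, expected level 70%, +/- 5% precision
print(sample_size_proportion(300, 0.70, 0.05))  # 156 (Epi Info reports 155)
```

The one-case difference from Epi Info’s 155 comes down to rounding conventions; either figure is adequate for planning purposes.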
A recently introduced alternative involves the use of small samples, with many repeated data collections to monitor serious fluctuations or changes in care. The cycle is completed quickly, and reliability is improved by the repeated data collections (Alemi et al., 2000; Plsek, 1999).

The Cancer Services Collaborative (CSC) has used rapid cycles of improvement as a key feature of its quality improvement strategy. PDSA cycles (plan, do, study, act) involve testing change ideas on a small scale, usually on a small number of clinicians and small user samples, before introducing the change to other clinics or user groups. Further information on this method can be found in the Service Improvement Guides available from the CSC on the National Patient’s Access Team website (http://195.92.252.217/channels/npat/). An evaluation of the method is available from the Health Services Management Centre at the University of Birmingham (www.hsmc3.bham.ac.uk/hsmc/).

Data abstraction tools

Data for an audit are generally collected retrospectively, in other words some time after care has been provided. Typically, the data are collected from records, and may be extracted onto standard forms or entered directly into a computer database. Figure 4 shows a data collection form used in an audit of the assessment of urinary incontinence by community nurses.

Data collection forms must specify precisely the information to be abstracted from the record, and they should be clear and easy to use. It is good practice to pilot the data collection form to enable any inherent problems to be detected and corrected. Different data collectors will inevitably interpret some record entries in the same record in different ways. It is essential that data collectors undergo training on the use of the data collection form, so any confusing items are identified and a clear policy is established on how data items should be recorded.
A protocol should also be provided for data collectors to follow when deciding whether the patient notes provide sufficient information to suggest that a criterion has definitely been met. Data collectors should be able to seek advice if they encounter entries in records that are particularly confusing. Before starting an audit, the reliability of data collection should be checked by asking data collectors to independently extract data from the same sample of records and then compare their findings. The percentage of items that are the same, or the kappa statistic, is calculated to estimate inter-rater reliability (Altman, 1991). If reliability is low, the data collection procedures must be reviewed.

Retrospective or concurrent data collection?

Retrospective data collection provides a picture of care provided during a time period in the past, for example, the previous six months. Although this provides a baseline of care provision, it may not be as useful as working with concurrent data. Concurrent data collection gives a team more immediate feedback on its current performance and can act as a positive reinforcement to improve or maintain practice.

Concurrent data can be collected and presented on paper or electronically. Appropriately designed and used electronic records can also provide concurrent data that can be used to support the continuous improvement of practice. As IT systems in the NHS improve, concurrent audit and continuous improvement are likely to become more common.

Concurrent data collection and analysis have been used to improve the timeliness of giving thrombolytics to people admitted to an accident and emergency department with chest pain (Plsek, 1999). Each time thrombolytic therapy was administered, a cross was placed in the appropriate 10-minute column of a check-sheet to indicate the time elapsed since presentation.
As the histogram eventually developed, the mean, spread, and characteristic shape of the time distribution could be read directly from the check-sheet (Figure 5).

Data analysis

The type of analysis to be used should be identified at an early stage, as it influences both the type and amount of data collected. The analysis can range from a simple calculation of percentages, through to relatively sophisticated statistical techniques. On most occasions, however, simple methods are preferable, and indeed, if the results are to stimulate change, the analysis must be simple enough for everyone in the care process to understand (Plsek, 1999). Furthermore, provided samples have not been used, statistical tests are superfluous. If samples have been taken, the most appropriate calculation to perform is confidence intervals (Gardner and Altman, 1989).

Just as the analysis should be as simple as possible, the findings should be presented simply and clearly. Bar charts have become the most common format, but the numbers should be available in separate tables rather than presenting the charts alone. The example in Figure 6 demonstrates methadone prescribing issues audited in 16 general practices. From these findings, it was possible to draw some conclusions about the impact of the audit and the education sessions on the prescribing practice of the general practitioners involved (Beaumont, 1997).

Statistical quality control charts can help to develop understanding of process performance and provide longitudinal information that may not otherwise be detected. For example, a control chart of the number of patient falls per month, with non-constant control limits due to the varying number of patients, shows three atypical out-of-control events in an otherwise stable process (Figure 7) (Benneyan, 1998).
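The non-constant control limits described for the falls chart follow the standard u-chart formula for a count per unit of exposure (3-sigma limits that widen as the denominator shrinks). The sketch below illustrates the calculation; the monthly falls and patient-day figures are invented, not taken from Benneyan (1998):

```python
from math import sqrt

# Invented monthly data: number of falls and patient-days of exposure
falls = [4, 6, 3, 9, 5]
patient_days = [300, 320, 280, 310, 290]

# Centre line: overall falls per patient-day across all months
u_bar = sum(falls) / sum(patient_days)

for f, n in zip(falls, patient_days):
    u = f / n                          # this month's rate
    sigma = sqrt(u_bar / n)            # standard error varies with exposure n
    ucl = u_bar + 3 * sigma            # upper control limit
    lcl = max(0.0, u_bar - 3 * sigma)  # lower limit, truncated at zero
    flag = "out of control" if not (lcl <= u <= ucl) else "in control"
    print(f"rate={u:.4f}  limits=({lcl:.4f}, {ucl:.4f})  {flag}")
```

A point outside its month's limits (or a systematic run of points) signals special-cause variation worth investigating, rather than routine month-to-month noise.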
Although more sophisticated statistical procedures can be used to analyse audit data, expert advice should be sought while the audit is being prepared if this level of analysis is thought to be necessary.

Figure 5. Concurrent data collection for the administration of thrombolytic therapy in the accident and emergency department (‘door-to-needle time’): a frequency histogram of times in 10-minute bands, median 38 minutes (Kendall and McCabe, 1996).

Figure 6. An example of a bar chart used in a clinical audit: percentages achieved in a first and second audit of methadone prescribing (notification to the Home Office; HIV, hepatitis B and hepatitis C status discussed; positive urine test before scripting; named pharmacist in the notes; patient in contact with community drug team or street agency) (data from Beaumont, 1997).

Figure 7. Example of a statistical control chart used in a clinical audit: patient falls per patient day plotted by month, with upper and lower control limits (Benneyan, 1998).
NHS organisations should appoint Caldicott guardians who can advise on local arrangements and who are responsible for:

. agreeing and reviewing internal protocols governing the protection and use of user-identifiable information by staff in the organisation
. disclosure of user-identifiable information across organisational boundaries under the auspices of clinical governance (NHS Executive, 1999a; Welsh Office, 1999).

At the time of press, the Office of the Information Commissioner has issued, for consultation, draft guidance on the use and disclosure of medical data; policy and guidance in this area is subject to change (www.doh.gov.uk/ipu).

References

Alemi F, Moore S, Headrick L, Neuhauser D, Hekelman F, Kizys N. Rapid improvement teams. Joint Commission Journal on Quality Improvement 1998; 24: 119–29.
Alemi F, Neuhauser D, Ardito S, Headrick L, Moore S, Hekelman F, Norman L. Continuous self-improvement: systems thinking in a personal context. Joint Commission Journal on Quality Improvement 2000; 26: 74–86.
Altman DG. Practical Statistics for Medical Research. London: Chapman and Hall, 1991.
Beaumont B. Methadone prescribing in general practice. Audit Trends 1997; 5: 90–95.
Benneyan JC. Use and interpretation of statistical quality control charts. International Journal for Quality in Health Care 1998; 10: 69–73.
Cheater F, Lakhani M, Cawood C. Audit Protocol: Assessment of Patients with Urinary Incontinence. CT14. Leicester: Eli Lilly National Clinical Audit Centre, Department of General Practice & Primary Health Care, University of Leicester, 1998.
Data Protection Act 1998. www.hmso.gov.uk/acts/acts1998/19980029.htm (accessed June 2000).
Garala M, Craig J, Lee J. Reducing the general practitioner referral for lumbar spine X-ray. Journal of Clinical Governance 1999; 7: 186–9.
Gardner MJ, Altman DG. Statistics with Confidence. London: BMJ Publishing Group, 1989.
General Medical Council. Confidentiality: Protecting and Providing Information.
London: General Medical Council, 2000.
Giles PD, Cunnington AR, Payne M, Crothers DC, Walsh MS. Cholesterol reduction for the secondary prevention of coronary heart disease: a successful multidisciplinary approach to implementing evidence-based treatment in a district general hospital. Journal of Clinical Effectiveness 1998; 3: 156–60.
Kalayi C, Rimmier F, Maxwell M. Improving referral for cardiac rehabilitation – an interface audit. Journal of Clinical Governance 1999; 7: 177–80.
Kendall JM, McCabe SE. The use of audit to set up a thrombolysis programme in the accident and emergency department. Emergency Medicine Journal 1996; 13: 49–53.
Lubrano E, Butterworth M, Hesselden A, Wells S, Helliwell P. An audit of anthropometric measurements by medical and physiotherapy staff in patients with ankylosing spondylitis. Clinical Rehabilitation 1998; 12: 216–20.
Mays N, Pope C. Qualitative research in health care. Assessing quality in qualitative research. British Medical Journal 2000; 320: 50–2.
NHS Executive. Information for Health. An Information Strategy for the Modern NHS 1998–2005. London: Department of Health, 1998.
NHS Executive. Clinical Governance. Quality in the New NHS. London: Department of Health, 1999a.
NHS Executive. Quality and Performance in the NHS: Clinical Indicators. London: Department of Health, 1999b.
Plsek PE. Quality improvement methods in clinical medicine. Pediatrics 1999; 103: 203–14.
Pope C, Ziebland S, Mays N. Qualitative research in health care. Analysing qualitative data. British Medical Journal 2000; 320: 114–16.
Welsh Office. Protecting Patient Identifiable Information: Caldicott Guardians in the NHS. WHC(99)92. Cardiff: Welsh Office, 1999.

Stage Four: making improvements

Key points

. A systematic approach to implementation appears to be more effective.
Such an approach includes the identification of local barriers to change, the support of teamwork, and the use of a variety of specific methods.
. An investigation of potential barriers to change assists in the development of implementation plans.
. Teams undertaking audit that are appropriately supported and able to use a variety of techniques can identify potential barriers and develop practical implementation plans.
. Contextual factors influence the likelihood of change. These include the significance of change to service users, the effectiveness of teamwork, and the organisational environment.
. Those planning audits should avoid relying on feedback alone as the method of implementing change; although feedback of data alone can occasionally be effective, change is much more likely if it forms part of a more complex set of change processes/interventions.
. The dissemination of educational materials, such as guidelines, has little effect unless accompanied by the use of selected implementation methods.
. Interactive educational interventions including outreach, service user and/or professional reminders (whether manual or computerised), decision support, and system changes can sometimes, but not always, be effective.
. In audit, the use of multifaceted interventions chosen to suit the particular circumstances is more likely to be effective in changing performance than the use of a single intervention alone.

Key note

. Clinical governance programmes offer a structure to support efforts to make improvements, including personal professional development, support of teams, and clear accountability.

One relatively practical framework that incorporates the concept of barriers to change has five principal steps (Grol, 1997).

. The required change is clearly defined, based on evidence, and is presented in a way that staff can easily understand.
. The barriers to change are identified (e.g.
using the methods in Table 13), including those relating to professionals and to the healthcare organisation.
. Implementation methods are chosen that are appropriate to the particular circumstances, the change itself, and the obstacles to be overcome. An understanding of selected theories of behaviour change may be used to inform the choice of methods.
. An integrated plan is developed for coordinated delivery and monitoring of the interventions. The plan should describe the sequence in which interventions will be made, the staff and resources required to make them, and the target groups.
. The plan is carried out, and progress is evaluated, with modifications to the plan or additional interventions being used as required.

This model and others like it make clear that implementation of change is a process that must be carefully planned and systematically managed. The particular interventions or implementation methods used form only one aspect of the process of improving care.

Implementing change

A recent review of tools, models and approaches to change management in the NHS (Iles and Sutherland, 2001) provides a helpful overview of how change can be implemented, addressing issues affecting the management of health services rather than just clinical care, which tends to be the focus of clinical audit. The effectiveness of the available strategies in various settings and for all clinical activities is considered in detail in the literature review that is presented in the accompanying CD-ROM, and in Appendix XI. It confirms that although many interventions are available for implementing change, no single method is always effective. Indeed, a diagnostic analysis should be undertaken to identify factors that will influence the likelihood of change (Table 13) before selecting the most appropriate strategies for implementing change.

Table 13. Some methods of identifying barriers to change
. Interviews of key staff and/or users
. Discussion at a team meeting
. Observation of patterns of work
. Identification of the care pathway
. Facilitated team meetings with the use of brainstorming or fishbone diagrams

Promoting successful audit

Most health professionals have taken part in audit before, and their experiences support the more formal reviews of implementing change: it is possible to change practice, but it is not a simple process. Although participation levels in audit are generally high, the benefits in terms of improved care have usually been modest (Buttery et al., 1995a; Hearnshaw et al., 1998). It is important to understand the reasons for the limited achievements in order to learn how to make audit more effective in the future.

In a recent review of 93 studies concerned with a wide variety of clinical audits involving different professional groups, the barriers to successful audit included:

. lack of resources
. lack of expertise in project design and analysis
. lack of an overall plan for audit
. poor relationships between professional groups or agencies and within teams
. organisational problems, such as lack of a supportive relationship between clinicians and managers (Johnston et al., 2000).

Hierarchical relationships, lack of commitment from senior doctors and managers, poor organisational links between departments, and lack of time and practical support can also be obstacles to nurses taking part in clinical audit and changing practice (Cheater and Keane, 1998).

Different chief executives in a sample of 29 provider organisations allocated different priorities to developing audit, but most felt that fuller integration with other activities would improve effectiveness (Buttery et al., 1995a). Managerially competent, enthusiastic leaders tended to be most effective, but lack of clarity about aims and an over-concentration on data collection were problems.

Factors that promote the success of clinical audit include:

. sound leadership
. a conducive/supportive organisational environment
. structures and systems to support audit, including mechanisms to make data collection easier
. a well-managed audit programme
. addressing a range of issues important to the trust and individual clinicians
. giving adequate attention to all stages of audit (Buttery et al., 1995b; Rumsey et al., 1996a; Johnston et al., 2000; Rumsey et al., 1996b).

In primary care, second data collections to follow up initial audits are often not completed (Hearnshaw et al., 1998), though better organised practices with adequate resources and a positive attitude to audit may be more successful (Chambers et al., 1996).

Establishing the right environment

The environment in which audit is performed needs the appropriate structures in place and a culture that supports them. Many features of the environment for audit are also important for implementation (Table 14). Indeed, as change is so dependent on behavioural factors, the nature of the environment is even more important for implementation than for other aspects of audit.

Individual environments

Individuals need time to devote to implementing improvements, even if they do not have a role in planning the changes. Giving people an opportunity to think through the implications, and to discuss practical issues with others, can make change less of a challenge. Time alone is not sufficient, however, and systems and support must also be available to help individuals improve their existing skills or develop new ones.

Table 14.
Aspects of an environment that promote clinical audit

Structure

Individuals:
. Time
. Personal development plans
. Access to advice about change management
. Access to a system for reporting concerns
. Occupational health service available

Teams:
. Leadership
. Clear and shared objectives
. Effective communication
. Training in improvement methods
. Opportunities for the team to meet to share ideas and develop plans

Organisations:
. Explicit commitment to clinical audit within the organisation
. Clear system for managing a clinical governance programme
. Staff with responsibility for audit are fully trained and encouraged to develop new solutions to old problems
. Good systems for understanding the views of users
. Good communication with other health and social care agencies

Culture

Individuals:
. Positive attitude to audit and improvement
. Lack of fear – of change and of confronting less than desired or even poor performance
. Open to new ideas
. Focus on the user's experience

Teams:
. Interprofessional respect and cooperation
. Users' perspectives genuinely regarded as the focus of quality improvement

Organisations:
. Open to interest from external agencies in quality of performance, and not afraid of inspection
. 'No blame' approach to errors
. Audit given a high priority

. Local authorities are to be given powers to scrutinise the NHS locally.
. Financial rewards to trusts are to be linked to the results of the annual National Patient Survey.
. User versions of guidelines and other forms of information for users about the care of particular conditions are to be routinely available.

These new systems should both inform and consult users, who need information both about what services and care they can expect in general, and about their own individual care.

In Wales, the new arrangements for user involvement, announced in Improving Health in Wales (Minister for Health and Social Services, 2001), include:

. Patient Support Managers, to support patients in their dealings with NHS staff
. Local Health Alliances, set up by local health authorities to engage with the community
. a Health and Social Care Charter to clarify how people can access NHS and social care services and the rights and responsibilities of patients
. annual prospectuses, published by all trusts and Local Health Groups, that set out the services available
. a network of 'expert patients' to support individual patients in the treatment of specific conditions.

The arrangements for involving users and the public, described above, could be operated without a real conviction that users are central to improving the quality of care. Leadership from the top of the organisation is required to show that the design of services around the needs of users is possible, rewarding, and necessary.

Health and social care organisations

Changes that are implemented to improve care may have knock-on effects on other agencies: for example, early discharge schemes may affect both health and local social services. In addition, improvement should not stop at organisational boundaries, but follow users as they make use of different services. This means that both day-to-day operational systems and strategic systems are required to ensure close cooperation between agencies. Agencies may also work together to develop shared objectives for quality improvement. Such an approach is essential in the agreement of health improvement programmes and the implementation of National Service Frameworks.

It is often difficult for organisations, teams, and individual professionals to appreciate the consequences of their decisions for other agencies, or to fully understand their working practices. As a result, the other agencies are often misunderstood and a cycle of poor cooperation and recrimination is established. The beliefs and assumptions that are associated with these problems should be challenged.
Opportunities to make progress may arise during the course of local audit projects or in the development of policies related to the health improvement programme or National Service Frameworks. Once the user's experience is understood, the importance of improving care across boundaries becomes clear.

Examples of implementing change

The three examples described below share common features and illustrate some of the points discussed in the previous sections. In each case, the changes were developed from evidence about appropriate care and were accepted by the professionals involved. Users were also involved in each example, either in giving their views about aspects of care or the design of services, or by being given additional information based on research evidence to enable them to make an informed decision about their care.

The examples also show how multiple interventions were used to improve care. In each case the selection of interventions was based on an appreciation of the local circumstances and the particular obstacles or issues that needed addressing. The planning and coordination involved in each example indicates that in each case the environment was conducive to successful audit.

Antibiotic prescribing for otitis media in children (Cates, 2000)

A Cochrane review questioned the use of antibiotics in the initial management of acute otitis media in children. The doctors in the practice agreed a new policy, but recognised that it might be difficult to implement change. They therefore agreed to offer parents a deferred prescription so that they could wait and see if their child improved without antibiotics. They also prepared an information leaflet for parents explaining the results of research and recommending alternative management, including paracetamol. In the months that followed, fewer children with otitis media were given antibiotics.
Pain control after Caesarean section (Antrobus, 1999)

Pain control was identified as a problem from an audit involving case-note review and interviews with women after Caesarean section. A new protocol was developed from a review of the evidence, and this was supported by formal pain assessments, pre-printed prescription labels to apply to drug charts, and the introduction of self-medication by women. Education was delivered to doctors and nurses in individual face-to-face sessions (as in educational outreach). At the second data collection a few months later:

. women were more satisfied with pain control
. the incidence of pain was reduced
. mobility was improved
. the length of hospital stay was reduced.

Management of acute stroke (Dunning et al., 1999)

An initial audit demonstrated a lack of systematic coordination of care and variability in clinical practice. A project team was established, and an integrated care pathway developed. The communication strategy involved presentations and discussions with members of the directorate, the trust board, the public, and local councillors. A user support group was set up. New documentation was introduced, and a new psychological assessment framework was adopted. Stroke beds were designated, and the referral process was streamlined. Subsequent data collections showed that:

. the proportion of people discharged to their own homes had increased
. hospital stays were shorter
. there was a lower incidence of hospital-acquired complications.

References

Antrobus H. Do-it-yourself pain control. ImpAct 1999; 1: 6–7.
Brearley M. Teams: lessons from the world of sport. British Medical Journal 2000; 321: 1141–3.
Buttery Y, Walshe K, Rumsey M, Amess M, Bennett J, Coles J. Provider Audit in England. London: CASPE Research, 1995a.
Buttery Y, Rumsey M, Bennett J, Coles J. Dorset HealthCare NHS Trust's Clinical Audit Programme. A Case Study. London: CASPE Research, 1995b.
Cates C.
Promoting interest in evidence-based practice in primary care. ImpAct 2000; 2: 1–3.
Chambers R, Bowyer I, Campbell I. Investigation into the attitude of general practitioners in Staffordshire to medical audit. Quality in Health Care 1996; 5: 13–19.
Cheater FM, Keane M. Nurses' participation in audit: a regional study. Quality in Health Care 1998; 7: 27–36.
Department of Health. A First Class Service. Quality in the New NHS. London: Department of Health, 1998.
Department of Health. The NHS Plan: A Plan for Investment – A Plan for Reform. London: The Stationery Office, 2000.
Dunning M, Abi-Aad G, Gilbert D, Hutton H, Brown C. Experience, Evidence and Everyday Practice. Creating Systems for Delivering Health Care. London: King's Fund, 1999.
Ferlie E, Wood M, Fitzgerald L. Some limits to evidence-based medicine: a case study from elective orthopaedics. Quality in Health Care 1999; 8: 99–107.
Grol R. Beliefs and evidence in changing clinical practice. British Medical Journal 1997; 315: 418–21.
Hayes N. Foundations of Psychology. London: Routledge, 1994.
Hearnshaw H, Baker R, Cooper A. A survey of audit activity in general practice. British Journal of General Practice 1998; 48: 979–81.

Monitoring and evaluating changes

Collecting data for a second time, after changes have been introduced, is central to both assessing and maintaining the improvements made during clinical audit. The same procedures of sample selection, information collection, and analysis (see Stage Three: measuring level of performance) should be used throughout the process, to ensure that the data are valid and comparable with each other. Rapid-cycle data collection may also be appropriate, in which only absolutely essential data are collected from small samples (again, see Stage Three). If performance targets have not been reached during implementation, modifications to the plan or additional interventions will be needed.
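Comparing a second data collection with the first can be sketched as below. This is an illustrative sketch with hypothetical figures; a confidence interval on the change helps distinguish genuine improvement from sampling variation.

```python
import math

def audit_change(met_first, n_first, met_second, n_second, z=1.96):
    """Change in the proportion of cases meeting a criterion between the
    first and second data collections, with a normal-approximation 95%
    confidence interval on the difference. Figures are hypothetical."""
    p1 = met_first / n_first
    p2 = met_second / n_second
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n_first + p2 * (1 - p2) / n_second)
    return diff, diff - z * se, diff + z * se

# e.g. criterion met in 52/80 sampled cases before the changes, 68/80 after
diff, low, high = audit_change(52, 80, 68, 80)
improved = low > 0  # whole interval above zero suggests genuine improvement
```

If the target level of performance has still not been reached, the plan is modified or additional interventions are used, as described in the text.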
Using IT

A well thought out and integrated IT strategy can help data collection. For example, it is already possible to link a patient record to a specially constructed, audit-collection computer program to record levels of care automatically and continuously. Those with access to the program – individuals, team leaders, and managers with responsibility for quality control – have immediate access to current levels of care (see the NHS Beacons Learning Network website, www.nhs.uk/beacons).

It may be easier to sustain improvement within an environment that accepts re-audit at regular intervals. In some cases, regular re-audit is similar to quality control, in which the process continues provided that a sample of events is within the acceptable limits. Again, computerised patient records can provide automatic and instantaneous audit data (www.nhs.uk/beacons; NHS Executive, 1998).

Clinical performance indicators

Time and planning are both needed in setting up systems for long-term monitoring of indicators, and unrealistically short timescales should be avoided. However, although organisations must invest in facilities, personnel, and training to monitor indicators, it is important to realise that only the minimum number of essential indicators should be included in monitoring.

The work involved in third or later data collections can be minimised if monitoring is based on routinely available or easily collected indicators, such as those that contribute to the NHS Performance Assessment Framework (NHS Executive, 1999) and the National Survey of Patient and User Experience (see www.doh.gov.uk/nhsperformanceindicators) in England and the Performance Management Framework in Wales.
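Computing a clinical indicator from routinely collected records can be sketched as below. The record structure and field names here are assumptions for illustration; the principle is simply calculating the indicator as a proportion of the eligible population, as with the coronary heart disease indicators discussed in this section.

```python
# Illustrative sketch: an indicator of the form 'proportion of people aged
# 35-74 with recognised coronary heart disease whose records document
# advice about the use of aspirin'. Record fields are hypothetical.
records = [
    {"age": 62, "chd": True, "aspirin_advice_documented": True},
    {"age": 58, "chd": True, "aspirin_advice_documented": False},
    {"age": 49, "chd": False, "aspirin_advice_documented": False},
    {"age": 71, "chd": True, "aspirin_advice_documented": True},
]

# Eligible population: recognised CHD and within the age band
eligible = [r for r in records if r["chd"] and 35 <= r["age"] <= 74]
documented = sum(1 for r in eligible if r["aspirin_advice_documented"])
indicator = documented / len(eligible) if eligible else 0.0
```

Tracking such a proportion at regular intervals, rather than repeating a full audit, keeps the burden of third and later data collections low.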
The small number of high-level clinical performance indicators being developed for use in the NHS Performance Assessment Framework (England) and the Performance Management Framework (Wales) relate to only a very limited range of care, and they are unlikely to be directly useful on most occasions. However, performance indicators (England) and clinical indicators (Wales) are also being developed within each NHS National Service Framework. For example, the Framework for Coronary Heart Disease (Department of Health, 2000a) includes clinical indicators (Table 15). These indicators draw on existing sources of information whenever possible, to minimise the problem of collecting additional data.

If no data source is available and the performance indicator is a key measure, systems for providing data must be created. For example, many primary care organisations do not routinely monitor the proportion of people with recognised coronary heart disease who have been advised about aspirin, and so they will have to develop a monitoring system for this indicator.

Whenever possible, authoritative, evidence-based sources of guidance on selecting performance indicators and advice on audit criteria (such as those in the technology appraisals and guidelines produced by NICE) should be used. Such sources are likely to also be used by other healthcare organisations, which will facilitate comparison of performance. Development of local indicators is sometimes required, but care should be taken to ensure that they are valid and reliable (Sheldon, 1998).

When performance indicators are used to monitor sustained improvement, it is vitally important to ensure that data are collected accurately and analysed and interpreted appropriately. The findings should be reviewed on a regular basis and used

Table 15.
Coronary heart disease performance indicators (Department of Health, 2000a). The indicators cover two areas: acute myocardial infarction (AMI) and preventing coronary heart disease among those at high risk.

Health improvement
. Coronary heart disease mortality rates by Health Authority (from existing public health common dataset)

Fair access, effective delivery
. AMI: number and percentage of patients eligible for thrombolysis receiving it within 60 minutes of call for professional help ('call-to-needle time')
. Prevention: number and proportion of people aged 35–74 years with recognised coronary heart disease whose records document advice about the use of aspirin

Efficiency
. Reference costs for AMI (HRG codes E11 and E12)

User experience
. National coronary heart disease survey of NHS patients

Health outcome
. AMI: proportion of people aged 35–74 years in a primary care organisation and health authority area with a diagnosis of AMI who die in hospital within 30 days of their infarct
. Prevention: rate of cardiovascular events in people with a prior diagnosis of coronary heart disease, peripheral vascular disease, transient ischaemic attack, or occlusive stroke

to guide service development. Any decline in performance should be investigated through more detailed audits, and new improvement strategies activated as necessary. In this way, monitoring can be linked to an overall quality strategy and becomes a routine part of managing the service.

Other methods of continued monitoring

Errors, adverse incidents, and significant event audit can also be used for continued monitoring. Comments from users may be included as sources of information about performance. Although these informal mechanisms can detect declining performance and initiate formal investigations, they depend on an environment that fosters the reporting of errors and adverse incidents and they are no substitute for systematic monitoring.

Evaluating audit quality

The quality of clinical audit programmes must be evaluated as part of the wider clinical governance agenda.
A useful framework for trusts assessing, reviewing, and improving the effectiveness of their clinical audit programmes has been developed and can be downloaded from www.hsmc3.bham.ac.uk/hsmc (Walshe and Spurgeon, 1997), and a scale to measure the quality of audit projects through audit project reports has been developed and tested (Millard, 2000).

CHI undertakes regular reviews of clinical governance in NHS trusts and primary care organisations, and assessment of audit is an integral part of these assessments. The framework currently used by CHI is given in Appendix VII.

A set of checklists is included in this book (Appendix VI), based on the key points and key notes highlighted in each section. The checklists can be used by audit leaders and clinicians to evaluate the methods they have used, or are planning to use, in their audits. They may also help those responsible for managing audit programmes. The checklists are intended as practical aids to learning, and are not designed for external assessments of audits or audit programmes.

Maintaining and reinforcing improvement

Maintaining and reinforcing improvement over time is a complex process. In UK projects in which improvements have been sustained, some common factors have been identified (Dunning et al., 1999), including:

. reinforcing or motivating factors built in by the management to support the continual cycle of quality improvement
. integration of audit into the organisation's wider quality improvement systems
. strong leadership.

can flourish in response to patients' needs.' Throughout this book we have sought to make clear the fundamental importance of involving people who use the health service in clinical audit and other methods of quality improvement, and the report of the Bristol Royal Infirmary Inquiry explains why.
Changing the organisational culture is a core aspect of plans to identify and reduce the number of severe adverse incidents in the NHS (Department of Health, 2000b). The ideal culture is informed by four principal elements.

. Staff are prepared to report errors or near-misses, which the organisation analyses and provides feedback on any action being taken.
. The culture is just, and staff are able to trust the organisation to distinguish acceptable from unacceptable behaviours.
. The culture is flexible, respecting the skills and abilities of front-line staff and allowing them to take control.
. The culture is prepared to learn and has the will to implement necessary major reforms.

The learning organisation

An organisation that is committed to quality improvement can be thought of as a learning organisation (Argyris, 1991). This concept distinguishes organisations by how supportive they are of new ideas. A learning organisation is responsive to change and seeks to improve the quality of its output through single-, double- or triple-loop learning.

. Single-loop learning involves incremental change to close the gap between current and target levels of performance.
. Double-loop learning allows organisations to change the existing assumptions about performance, including the goals of the organisation and the levels of performance that can be attained.
. Triple-loop learning generalises developments learnt from one audit to other areas of healthcare, so that improvements are generated simultaneously.

Organisational learning and other aspects of organisational change are discussed in more detail in a guide published by the NHS Service Delivery and Organisation Research and Development Programme (Iles and Sutherland, 2001).

Knowledge management

Knowledge management, another developing area of interest, concentrates on how organisations can become more intelligent and work more effectively.
This approach (see www.eknowledgecenter.com/articles/1010/1010.htm) demonstrates a very important principle: that tacit knowledge of how to improve performance is often already present in an organisation, but is not necessarily shared by the workforce. Knowledge management recognises that organisations need to develop a culture and structures to spread that knowledge so that it is useful to the organisation.

Sustained quality improvement in practice

Many attempts to support quality improvement by clinical audit have been reported in NHS Beacons (NHS Beacon Services, 2000/2001) and in ImpAct (a supplement of Bandolier, see Table 17), an Internet resource devoted to supporting quality improvement audit projects and quality improvement programmes. Although these examples provide useful reports of the success of quality improvement initiatives, it is not clear how generally applicable they are, because the supporting environments of the projects are often unknown.

CHI, the Modernisation Agency in England, and the Innovations in Care (IiC) Team and Clinical Governance Support and Development Unit (CGSDU) at the National Assembly for Wales also have roles in helping to sustain quality improvements in NHS organisations. CHI does this by providing feedback on the implementation of clinical management strategies within organisations, in which a wide range of indicators of performance are considered, and the Modernisation Agency (England) and CGSDU (Wales) by facilitating the development of an environment within the NHS in which clinical audit can thrive. Health Authorities are already responding to their responsibilities and creating structures to deal with critical incident reporting and systems for dealing with poorly performing practitioners.

References

Argyris C. Teaching smart people how to learn. Harvard Business Review 1991; 69: 99–109.
Bate SP. Strategies for Cultural Change. Oxford: Butterworth-Heinemann, 1998.
PRINCIPLES FOR BEST PRACTICE IN CLINICAL AUDIT

Table 17. Quality improvement activities featured in ImpAct

Salford – Diabetes Clinical Team
• www.jr2.ox.ac.uk/bandolier/ImpAct/imp04/i4-04.html
South Tees – colposcopy service
• www.jr2.ox.ac.uk/bandolier/ImpAct/imp04/i4-04.html
Leicester – Royal Infirmary Gynaecology Department
• www.jr2.ox.ac.uk/bandolier/ImpAct/imp05/i5-1.html
East Kent – Primary Care Clinical Effectiveness (PRICCE) programme
• www.jr2.ox.ac.uk/bandolier/ImpAct/imp01/EASTKENT.html
Dorset – Successful Cardiac Care based on Evidence of Effectiveness in Dorset (SUCCEED) project
• www.jr2.ox.ac.uk/bandolier/ImpAct/imp09/i9-4.html

Berwick DM. A primer on leading the improvement of systems. British Medical Journal 1996; 312: 619–22.
Davies HTO, Nutley ST. Organisational culture and the quality of the service provided. Quality in Health Care 2000; 9: 111–19.
Department of Health. National Service Framework for Coronary Heart Disease. London: Department of Health, 2000a.
Department of Health. An Organisation with a Memory. Report of an Expert Group on Learning From Adverse Events in the NHS. London: Department of Health, 2000b.
Dunning M, Abi-Aad G, Gilbert D, Hutton H, Brown C. Experience, Evidence and Everyday Practice. Creating Systems for Delivering Health Care. London: King's Fund, 1999.
Garside P. Organisational context for quality: lessons from the fields of organisational development and change management. Quality in Health Care 1998; 7 Suppl: S8–15.
Greenhalgh T. Change and the organisation: culture and context. British Journal of General Practice 2000; 50: 340–1.
Huntington J, Gillam S, Rosen R. Clinical governance in primary care: organisational development for clinical governance. British Medical Journal 2000; 321: 679–82.
Millard AD. Measuring the quality of clinical audit projects. Journal of Evaluation in Clinical Practice 2000; 6: 359–70.
Iles V, Sutherland K. Organisational Change. Managing Change in the NHS.
London: National Co-ordinating Centre for NHS Service Delivery and Organisation Research and Development Programme, 2001. (Can be downloaded from www.sdo.lshtm.ac.uk)
NHS Beacon Services. NHS Beacons Learning Book, 2000/2001. Petersfield: NHS Beacon Programme (www.nhs.uk/beacons).
NHS Centre for Reviews and Dissemination. Getting evidence into practice. Effective Health Care; 5. York: University of York, 1999.
NHS Executive. Information for Health. An Information Strategy for the Modern NHS 1998–2005. London: Department of Health, 1998.
NHS Executive. Quality and Performance in the NHS: Clinical Indicators. London: Department of Health, 1999.
Sheldon T. Promoting health care quality: what role performance indicators? Quality in Health Care 1998; 7 Suppl: S45–50.
Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Quarterly 1998; 76: 593–624.
Walshe K, Spurgeon P. Clinical Audit Assessment Framework, HSMC Handbook Series 24. Birmingham: University of Birmingham, 1997.

Clinical effectiveness
The extent to which specific clinical interventions, when deployed in the field for a particular patient or population, do what they are supposed to do, i.e. maintain and improve health and secure the greatest possible health gain from available resources.
NHS Executive. Promoting Clinical Effectiveness. A Framework for Action In and Through the NHS. Leeds: NHS Executive, 1996.

Clinical governance
. . . a framework through which NHS organisations are accountable for continuously improving the quality of their services and safeguarding high standards of care by creating an environment in which excellence in clinical care will flourish.
Department of Health. A First Class Service: Quality in the New NHS. London: HMSO, 1998.
Welsh Office. Quality Care and Clinical Excellence.
Cardiff: Welsh Office, 1998.

Clinical guidelines
. . . systematically developed statements to assist practitioner and patient decisions about appropriate healthcare for specific circumstances.
Institute of Medicine. Guidelines for Clinical Practice: from Development to Use. Washington, DC: National Academy Press, 1992.

Criteria
Systematically developed statements that can be used to assess the appropriateness of specific healthcare decisions, services, and outcomes.
Institute of Medicine. Guidelines for Clinical Practice: from Development to Use. Washington, DC: National Academy Press, 1992.

Evidence-based practice
The conscientious, explicit, and judicious use of current best evidence, based on systematic review of all available evidence – including patient-reported, clinician-observed and research-derived evidence – in making and carrying out decisions about the care of individual patients. The best available evidence, moderated by the patient circumstances and preferences, is applied to improve the quality of clinical judgements.
National Centre for Clinical Audit. Glossary of Terms Used in the NCCA Criteria for Clinical Audit. London: National Centre for Clinical Audit, 1997.

APPENDIX I

Facilitator
In the context of clinical audit, the role of the facilitator is to help the clinical audit group to assimilate the evidence and come to a common understanding of clinical audit methodology, to guide the clinical audit project from planning to reporting, and to enable the group to work effectively to that end.
Morrell C, Harvey G. The Clinical Audit Handbook. London: Baillière Tindall, 1999.

Health technology appraisals
Technology appraisals provide patients, health professionals, and health service managers with a single, authoritative source of advice on new and existing health technologies.
National Institute for Clinical Excellence website (www.nice.org.uk). Accessed October 2000.

Level of performance
In this book the term 'level of performance' is used in preference to the potentially confusing term 'standard' (see page 28).

National Service Frameworks
National Service Frameworks set national standards and define service models for a specific service or care group, put in place programmes to support implementation, and establish performance measures against which progress within an agreed timescale will be measured.
National Service Frameworks website (www.doh.gov.uk/nsf/nsfhome.htm). Accessed October 2000.

Outcome
Result of an intervention. Outcomes can be desirable, such as improvement in the patient's condition or quality of life, or undesirable, like side-effects.
NHS Executive. Evidence-Based Health Care. An Open Learning Resource for Health Care Practitioners. CASP and HCLU, 1999.

Research
A systematic investigation undertaken to discover facts or relationships and reach conclusions using scientifically sound methods.
Hockey L. The nature and purpose of research. In: The Research Process in Nursing. 3rd edition. London: Blackwell, 1996.

Standard
See Stage Two: selecting criteria, page 22.

Systematic review
A review in which all the trials on a topic have been systematically identified, appraised, and summarised according to predetermined criteria. It can, but need not, involve meta-analysis as a statistical method of adding together the results of trials that meet minimum quality criteria.
Clinical Evidence. BMJ Publishing Group and the American College of Physicians, American Society of Internal Medicine, 1999.

User
In this book, the terms 'user' and 'service user' include patients, service users, and carers, and members of groups and organisations that represent their interests.

The map of resources has been divided into sections.

• The first section looks at publications that are likely to influence clinical decision-making – clinical guidelines and systematic reviews.
• The next section looks at the related area of standard setting for services and where these standards are found.
• The section entitled Sharing know-how covers a range of information types. The stated purpose of each of these resources is to provide actual examples of quality improvement initiatives. These might take the form of projects collected in searchable databases, or access to people through discussion groups.
• Assessing the impact cites resources to help document and define clinical audit projects and programmes of work.

Each of these sections is only a partial guide to the resources available. For further resources it is preferable to use specialist gateways such as OMNI and NMAP, which are developed using explicit evaluation criteria.

Critical appraisal

Developing an awareness of where things are and familiarity with the different types of information available go hand-in-hand with the need to adopt a critical attitude to the information retrieved. One way of doing this is to develop specific evaluation criteria.

Criterion – Focus on
Context – Scope, audience, authority
Content – Coverage, currency, valid alternatives
Access – Usability

The websites included in this guide have been assessed against these criteria. The descriptions of each resource have been written to give as much information as possible about the context and content of each site. Where there are issues about usability, such as the need to register before gaining access to material or to download software in order to read documents, this has been made explicit.

Internet resources for clinical audit

Please note that, while every effort has been made to ensure that the Internet addresses in this section (and in the book as a whole) are correct at the time of press, some may change over time.

APPENDIX II

A map of useful resources for clinical audit

Finding the evidence
• Clinical guidelines: NICE; SIGN
• Systematic reviews: Cochrane Library; Clinical Evidence; Centre for Reviews and Dissemination; Health Evidence Bulletins

Setting standards
• Service standards: National Service Frameworks; National Centre for Health Outcomes Development

Sharing know-how
• Quality improvement initiatives: Beacons programme; Innovations in Care programme; Service delivery practice database; Evidence in practice database; ImpAct and Bandolier; National Primary Care Collaborative; CLIP database
• Discussion groups: CHAIN; Clinical audit; Clinical governance group
• Clinical governance: Clinical Governance Research and Development Unit; NHS Clinical Governance Support Team; National Assembly for Wales Clinical Governance Support and Development Unit (CGSDU); Commission for Health Improvement; WISDOM

Assessing the impact
• Clinical audit assessment: Health Services Management Centre, University of Birmingham

Finding further resources
• Specialist gateways: OMNI; NMAP; NeLH and its Virtual Branch Libraries
• Organisations offering support: Medical Royal Colleges and other professional bodies; Clinical Audit Association

Clinical guidelines

National electronic Library for Health (NeLH) clinical guidelines database
• www.nelh.nhs.uk/guidelines_database.asp
The NeLH provides a database of evidence-based guidelines. These include guidelines from NICE and professional bodies such as the Royal College of Nursing.

National Guideline Clearing House (NGC)
• www.guideline.gov/index.asp
The NGC provides a searchable database of clinical practice guidelines. Guidelines posted on the NGC site meet several criteria, including having been published in the past five years, being written in English, and being based on a systematic literature search of existing scientific evidence published in peer-reviewed journals. The NGC is sponsored by the Agency for Healthcare Research and Quality in partnership with the American Medical Association and the American Association of Health Plans.

National Institute for Clinical Excellence (NICE)
• www.nice.org.uk
NICE is a Special Health Authority for England and Wales that provides patients, health professionals, and the public with authoritative, robust, and reliable guidance on current best practice. The guidance covers individual health technologies (including medicines, medical devices, diagnostic techniques, and procedures) and the clinical management of specific conditions. The site includes:
• technical and summary reports of guidelines commissioned by NICE
• health technology appraisals
• referral practice guidelines.

These are available in PDF format and can be viewed with Adobe Acrobat software, which is easily downloaded from the Internet.

Scottish Intercollegiate Guidelines Network (SIGN)
• www.sign.ac.uk
SIGN is a network of clinicians and healthcare professionals, including representatives of all the UK Royal Medical Colleges as well as nursing, pharmacy, dentistry, and professions allied to medicine. Its objective is to improve the effectiveness and efficiency of clinical care for patients in Scotland by developing, publishing, and disseminating guidelines that identify and promote good clinical practice.
Launched in late 1999, the 18-month programme is being piloted by nine cancer networks (one in each region and two in London). Attention is being given to breast, colorectal, lung, ovarian, and prostate cancer services. The objective is to optimise service delivery from the patient’s perspective to support effective clinical care. Particular attention is being given to: . coordinating the patient’s cancer journey . improving the patient’s/carer’s experience . optimising care delivery . matching capacity and demand. CLIP database . www.eguidelines.co.uk/clip/clip_main.htm The CLIP database contains summaries of completed or ongoing local clinical effectiveness initiatives contributed by staff across the NHS. Each record presents contact details for further information. New users must register with eGuidelines to access the database; registration is free of charge. eGuidelines is part of the Medendium Group Publishing Limited. Commission for Health Improvement (CHI) . www.doh.gov.uk/chi/index.htm CHI works at a local and national level to monitor and improve clinical care throughout England and Wales. 80 PRINCIPLES FOR BEST PRACTICE IN CLINICAL AUDIT . Locally, it inspects clinical governance arrangements through clinical governance reviews, and conducts investigations or inquiries into serious service failures. . Nationally, CHI undertakes studies reviewing the implementation of the National Service Frameworks, guidance from the National Institute for Clinical Excellence and other NHS priorities. . CHI also has a role in providing leadership for spreading good practice in clinical governance. The site covers the range of CHI activities including details of Clinical Governance Reviews. It publishes the programme of reviews, information on the review process, review results and work on the National Service Frameworks. ImpAct . 
www.jr2.ox.ac.uk/bandolier/ImpAct/index.html ImpAct is a publication that focuses on ways of raising standards and improving the delivery of services to patients. It identifies ways of improving performance which have been successful and which are transferable. Reports will include successful local initiatives and material developed locally that could be adapted for use elsewhere. ImpAct focuses on: . clinical governance and questions about clinical quality, such as the application of National Service Frameworks, emergency pressures, demand, and waiting times . integration of services across institutional boundaries . primary care groups and questions about service delivery . involving patients and the public . developments in human resources such as staffing and skill mix issues. Criteria for guiding choice of initiatives include: . availability of information to describe the benefits to patients and organisations . transferability and general applicability of projects to other situations . affordability of projects within normal budgets. The site includes a searchable archive of back numbers, and a PDF version of the current issue, which can be viewed with Adobe Acrobat software, easily downloaded from the Internet. Institute for Healthcare Improvement (IHI) . www.ihi.org IHI is a Boston-based, independent, non-profit organisation that has worked since 1991 to accelerate improvement in healthcare systems in the USA, Canada, and Europe, by fostering collaboration among healthcare organisations. APPENDIX II 81 The site gives details of conferences, courses and project work. Certain resources are available online, including a publication on reducing medical errors under the IHI patient safety resources homepage, which is available as a PDF file that can be viewed with Adobe Acrobat software, easily downloaded from the Internet. National Co-ordinating Centre for NHS Service Delivery and Organisation (NCCSDO) research and development programme . 
www.sdo.lshtm.ac.uk The NCCSDO research and development programme is a national research pro- gramme established to consolidate and develop the evidence base on the organisation, management and delivery of healthcare services. The site contains publications including two on managing change in the NHS: . Organisational Change. A Review for Health Care Managers, Professionals and Researchers – a review of models of change management to help managers, profes- sionals and researchers find their way around the literature and consider the evidence available about different approaches to change . a summary version called Making Informed Decisions on Change. Key Points for Health Care Managers and Professionals. NHS Beacons programme . www.nhsbeacons.org.uk The Beacon programmewas established to underpin the spread of good practice across the service. Beacon status is awarded to those organisations offering patients access to ‘faster, more convenient and more appropriate care’. The areas highlighted so far include: . outpatient services . coronary heart disease . stroke . palliative care . human resources . health improvement . mental health . personality disorder. The site provides a database of Beacon sites and their schemes. This is searchable by key area (e.g. primary care), text phrase, topic, dissemination activity, or NHS region. Dates when the Beacon sites can be visited can also be found. The Innovations in Care Programme in the National Assembly forWales is running a similar programme in Wales. 82 PRINCIPLES FOR BEST PRACTICE IN CLINICAL AUDIT Clinical Governance Association (CGA) . www.bamm.co.uk/CGA Website – Home.html The CGA is a membership association set up to provide a support network for staff whose primary role is to lead or assist with the implementation of clinical governance across the health economy. It provides training and development programmes, forums for problem solving and opportunities for skill sharing. 
Clinical Governance Research and Development Unit (CGRDU)
• www.le.ac.uk/cgrdu/index.html
CGRDU came into existence on 1 April 1999, succeeding the Eli Lilly National Clinical Audit Centre, which since 1992 had been a national resource in the field of clinical audit, particularly in the setting of primary healthcare and at the interface between primary and secondary care. The principal function of CGRDU is research and development within the emerging field of clinical governance. The site includes audit protocols to view and download.

Clinical Governance Support and Development Unit (Wales) (CGSDU)
Website is under development at time of press, but will be accessible via www.wales.gov.uk/.
The CGSDU was established in April 2001 to provide leadership and support to the NHS to develop, strengthen, and improve clinical governance in Wales. Its programme of work includes:
• a Board Support Programme: creating the vision of what clinical governance should look like, integrating the component parts, spreading across the whole organisation in a multiprofessional way, incorporating cross-sector and public/patient views
• a Clinical Governance Development Programme: to support clinical team working aimed at implementation of priority areas (e.g. NSFs, clinical networks)
• a Clinical Governance Learning Network: supporting clinical governance leads, facilitators, and others to identify, develop, and disseminate useful tools, techniques, etc.
• direct training and information
• work with NHS organisations in specific areas: e.g. implementing CHI recommendations, progressing activity against clinical governance performance measures.

Clinical Resource and Audit Group (CRAG)
• www.show.scot.nhs.uk/crag
CRAG is the lead body within the Scottish Executive Health Department, promoting clinical effectiveness in Scotland.
The main committee of CRAG, together with its subcommittees, provides advice to the Health Department, acts as a national forum to support and facilitate the implementation of the clinical effectiveness agenda, and funds a number of clinical effectiveness programmes and projects. The site incorporates key documents on clinical effectiveness, including a national initiative on the management of diabetes.

Clinical Governance resources – Library and Information Service, Health Services Management Centre, University of Birmingham
• spp3.bham.ac.uk/hsmc/library/hot_topic_clinicalgov.htm
The Library and Information Service incorporates the collections and resources of the Health Services Management Centre Library and the West Midlands NHS Executive Library. The site includes a collection of clinical governance resources under a 'Hot Topics' section, bringing together material from around the NHS health regions.

NHS Clinical Governance Support Team
• www.cgsupport.org
The NHS Clinical Governance Support Team has been created to help deliver the successful implementation of clinical governance 'on the ground'. The aim is to support the delivery of high-quality, patient-centred healthcare that is accountable, systematic, and sustainable.

WISDOM
• www.wisdomnet.co.uk
The WISDOM project delivers networked professional development for primary healthcare, using Internet technologies for information sharing and communication. The project contains resources about clinical governance and quality assurance, as well as evidence-based practice, Primary Care Group organisation, and change management. The WISDOM Centre is based at the Institute of General Practice and Primary Care, Community Sciences Centre, Northern General Hospital, Sheffield. The site runs a number of 'virtual conferences', discussion groups for networked professional learning. These include a group for clinical governance and clinical updates.
The site contains an extensive library of online resources relevant to primary healthcare, including the Resource Pack for Clinical Governance.

Selection of Medical Royal Colleges and professional bodies

College of Occupational Therapists – Clinical Audit
• www.cot.co.uk
The British Association and College of Occupational Therapists is a trade union and professional association. The site provides information on:
• publications about audit
• a database of audits completed by occupational therapists that is used to facilitate networking
• workshops and study days
• participation in relevant national audits
• clinical guideline development.

Community Practitioners and Health Visitors Association (CPHVA)
• www.msfcphva.org/index1.html
CPHVA is the UK professional body that represents registered nurses and health visitors who work in a primary or community health setting. The site gives details of CPHVA publications, including a clinical effectiveness resource pack, and links to systematic reviews relevant to the profession.

Joint Royal Colleges Ambulance Liaison Committee (JRCALC)
• www.asancep.org.uk/JRCALC
The JRCALC was created in 1989 to provide a focus for the UK Ambulance Service in its interactions with other professional healthcare groups. The site provides:
• information about a number of quality improvement initiatives, including a national clinical audit of acute myocardial infarction by ambulance services
• updates on the work of the clinical guidelines sub-committee and its work in developing pre-hospital guidelines.

Royal College of Anaesthetists
• www.rcoa.ac.uk
The College has produced a guide for departments of anaesthesia summarising the methods by which the medical profession is currently regulated, and giving guidance to anaesthetists about how departments of anaesthesia can set, maintain, and monitor standards of good practice within this changing environment.
The College has also established an ongoing national reporting system for recording critical incidents and sharing information about such incidents on a national basis. The professional standards section of the site includes College publications such as Raising the Standard: a Compendium of Audit Recipes for Continuous Improvement in Anaesthesia and information about the College's critical incident reporting scheme. All online versions are available as PDF files, which can be viewed using Adobe Acrobat software, easily downloaded from the Internet.