eNeonatal Review
April 2008: VOLUME 5, NUMBER 8

Quality Improvement:
Translating Effort into Outcomes


In this Issue...

While health care workers are among the most dedicated and well trained of any labor force in the world, preventable errors and quality issues remain a leading cause of morbidity and mortality across the healthcare industry, including among patients cared for in the neonatal intensive care unit. Quality improvement methodology, based on scientific principles different from those employed in clinical trials, holds promise for improving patient outcomes and allows for robust testing of change in situations where clinical trials are not ethical or practical.

In this issue, we review the methodology of quality improvement (QI) and provide a model for effective implementation of quality improvement methods, including the use of measurement, which will enable our readers to start QI projects in their units.
IN THIS ISSUE
COMMENTARY from our Guest Authors
THE PLAN-DO-STUDY-ACT CYCLE
MEASUREMENT AND DATA COLLECTION IN QUALITY IMPROVEMENT
CHANGE PACKAGES, CHANGE CONCEPTS, AND CHECKLISTS
STATISTICAL PROCESS CONTROL CHARTS AND THE SCIENCE OF IMPROVEMENT
STANDARD OPERATIONAL DEFINITIONS
Course Directors

Edward E. Lawson, MD
Professor
Department of Pediatrics
Division of Neonatology
The Johns Hopkins University
School of Medicine

Christoph U. Lehmann, MD
Associate Professor
Department of Pediatrics
Division of Neonatology
The Johns Hopkins University
School of Medicine

Lawrence M. Nogee, MD
Associate Professor
Department of Pediatrics
Division of Neonatology
The Johns Hopkins University
School of Medicine

Mary Terhaar, DNSc, RN
Assistant Professor
Undergraduate Instruction
The Johns Hopkins University
School of Nursing

Robert J. Kopotic, MSN, RRT, FAARC
President, Kair Medical Innovations
San Diego, CA
GUEST AUTHORS OF THE MONTH
Reviews:
Dan Ellsbury, MD
Director of Continuous Quality Improvement
Pediatrix Medical Group
Children’s Center at Mercy Medical Center
Des Moines, IA
Commentary:
Robert L. Ursprung, MD, MMSc
Associate Director of Continuous Quality Improvement
Pediatrix Medical Group
Cook Children’s Medical Center
Fort Worth, TX
Guest Faculty Disclosure

Dr. Ellsbury has disclosed that he is employed by the Pediatrix Medical Group and serves as a clinical neonatologist as well as the Director of Continuous Quality Improvement.

Dr. Ursprung has disclosed that he is employed by the Pediatrix Medical Group and serves as a clinical neonatologist as well as the Associate Director of Continuous Quality Improvement, and has received honoraria from Vermont Oxford meetings.


Unlabeled/Unapproved Uses

The authors have indicated that there will be no reference to unlabeled/unapproved uses of drugs or products in the presentation.

Program Directors' Disclosures
LEARNING OBJECTIVES
At the conclusion of this activity, participants should be able to:

Identify to colleagues an appropriate topic within the NICU suitable for a quality improvement project
Organize colleagues into a quality improvement team to implement the quality improvement project
Incorporate measurements into their quality improvement efforts so that colleagues may analyze the success of the project
Program Information
CE Info
Accreditation
Credit Designations
Intended Audience
Learning Objectives
Internet CME/CNE Policy
Faculty Disclosure
Disclaimer Statement

Length of Activity
1.0 hours Physicians
1 contact hour Nurses

Expiration Date
April 9, 2010

Next Issue
May 8, 2008
COMPLETE THE
POST-TEST


Step 1.
Click on the appropriate link below. This will take you to the post-test.

Step 2.
If you have participated in a Johns Hopkins on-line course, log in. Otherwise, please register.

Step 3.
Complete the post-test and course evaluation.

Step 4.
Print out your certificate.

Physician Post-Test

Nurse Post-Test

Respiratory Therapists
Visit this page to confirm that your state will accept the CE Credits gained through this program or click on the link below to go directly to the post-test.

Respiratory Therapist Post-Test
APRIL PODCAST
eNeonatal Review is proud to continue our accredited PODCASTS for 2008.
Listen here.

In this audio interview, Drs. Robert Ursprung and Dan Ellsbury discuss the importance of quality improvement initiatives in the NICU, how to implement such a program, and the improved outcomes that can be attained.

Participants can now receive 0.5 credits per podcast after completing an online post-test. In addition to our monthly newsletters, there will be 6 podcasts throughout the year.

To learn more about podcasting and how to access this exciting new feature of eNeonatal Review, please visit this page.
Podcasts
Please remember that you don't need an iPod Nano to listen to our podcasts. You can listen directly from your computer.
Listen to our Podcast
COMMENTARY
Improving quality and safety in healthcare is a major concern of healthcare providers, the general public, and policy makers.1,2,3,4 While health care workers are among the most dedicated and well trained of any labor force in the world, preventable errors and quality issues are a leading cause of morbidity and mortality across the healthcare industry.5,6,7,8,9

Variation in risk-adjusted neonatal intensive care unit (NICU) outcomes suggests there are unmeasured, modifiable factors contributing to poor neonatal outcomes.10,11,12,13,14 There is a growing body of evidence that quality improvement (QI) methodology can lead to changes in care practices resulting in improved patient outcomes.15,16,17

In the February 2008 edition of eNeonatal Review, Edwards and Suresh discussed the history of QI in our young field of neonatology, including reviews of several of the sentinel manuscripts. In this edition, we discuss a model for effective implementation of quality improvement methods, including the use of measurement, as outlined below.18,19,20


The improvement process can begin with identification of a clinical outcome that is perceived as suboptimal (eg, our incidence of surgical retinopathy of prematurity [ROP] is 50% higher than in similar NICUs) or a clinical curiosity (eg, it seems like we have a lot of catheter-related bloodstream infections). Below is a hypothetical QI project focusing on ROP:

  1. We noted that the incidence of severe ROP among very low birth weight (VLBW) infants in our NICU is more than 50% higher than in similar NICUs, a finding that has been consistent for several years. Upon review of the literature, we determined oxygen management to be a key modifiable factor affecting the outcome of ROP. Furthermore, we found published examples of successful ROP QI projects upon which to model the project.
  2. We then assembled a six-member, multidisciplinary ROP improvement team, including representatives from every discipline that would be affected by the improvement project. Our team included representatives from nursing, respiratory therapy, neonatology, ophthalmology, and hospital management.
  3. Our multidisciplinary ROP improvement team then created a specific aim statement: we aim to reduce the incidence of stage 3 or worse ROP by 50% over the next 2 years without an increase in mortality or other morbidities among VLBW infants who are admitted to our NICU before the 5th day of life. Our hypothesis was that a reduction in ROP could be achieved via use of a multidisciplinary approach including: (1) the use of a guideline concerning oxygen management, (2) a multidisciplinary education program, and (3) use of an oxygen management contract that all frontline providers in the NICU sign.
  4. Key elements of our VLBW ROP reduction guideline included the following items:21,15,22,23,24
    • Treat oxygen like a drug with known toxicities
    • Standardize oxygen saturation goals to 85-93%
    • Standardize oximeter alarm limits to 80-95%
    • Assess the infant prior to increasing FiO2 for hypoxemia; if FiO2 is increased, the provider is to remain at the bedside until the patient has stabilized
    • Notify the physician of any FiO2 increase >10% over baseline
    • No “prophylactic” increases in FiO2 (eg, prior to procedures)
    • Use of blended oxygen at all times, in all locations (including delivery room)
    • Use of an oximeter in the delivery room
  5. Our primary outcome was the incidence of severe ROP (≥ stage 3) in VLBW infants. In addition to this primary outcome, “balancing measures” were needed to monitor for potential adverse consequences of our intervention. For these, we chose mortality, severe intraventricular hemorrhage, periventricular leukomalacia, chronic lung disease, necrotizing enterocolitis, and length of stay. Intermediate outcome measures included the percentage of time oxygen saturations resided in the target range, the oximeter’s alarm settings, and whether target oxygen saturation reminders were posted at each bedside.
  6. Our data collection plan was to use the Pediatrix Clinical Data Warehouse to monitor clinical outcomes. Four respiratory therapists (2 from day shift, 2 from night shift) audited each VLBW bedside 4 times weekly. They recorded time spent in the target saturation range (many oximeters capture this information electronically), oximeter alarm settings, and whether the oxygen saturation targets were posted at each VLBW bedside.
  7. Our intermediate outcomes were evaluated each month and our clinical outcome measures every 6 months; a sketch of how such monthly audit data might be tallied follows this list. Data from these evaluations were organized and interpreted at monthly ROP improvement team meetings, and then relayed to management and frontline providers via email, presentations at staff meetings, and education conferences.
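Where data systems allow, this monthly roll-up can be automated. Below is a minimal, hypothetical Python sketch of tallying bedside audits into the three intermediate measures named above; the record fields, numbers, and function are illustrative assumptions, not part of the Pediatrix Clinical Data Warehouse.

```python
# Hypothetical roll-up of one month of bedside audits into the three
# intermediate process measures; all field names and values are invented.
from statistics import mean

audits = [
    {"pct_time_in_target": 78.0, "alarm_lo": 80, "alarm_hi": 95, "sign_posted": True},
    {"pct_time_in_target": 64.5, "alarm_lo": 75, "alarm_hi": 100, "sign_posted": False},
    {"pct_time_in_target": 82.0, "alarm_lo": 80, "alarm_hi": 95, "sign_posted": True},
]

def monthly_summary(audits, lo=80, hi=95):
    """Summarize audits against the guideline's 80-95% alarm settings."""
    alarms_ok = [a["alarm_lo"] == lo and a["alarm_hi"] == hi for a in audits]
    return {
        "mean_pct_time_in_target": mean(a["pct_time_in_target"] for a in audits),
        "pct_alarms_set_correctly": 100 * sum(alarms_ok) / len(audits),
        "pct_reminders_posted": 100 * sum(a["sign_posted"] for a in audits) / len(audits),
    }

print(monthly_summary(audits))
```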
It is critical to recognize that the typical QI project is not clinical research, and that the goal is not pure knowledge acquisition. The typical QI project is undertaken to improve the effectiveness of a care process, with the goal of improving clinical outcomes, patient safety, patient satisfaction, and/or resource utilization. With this difference in mind, the approach to team assembly and measurement differs between a QI project and a clinical trial.

  • Team Assembly: A team of 4 to 8 people is commonly a good size for an improvement team. Too small a team limits multidisciplinary input and provides too few people to share the workload, while too large a team may lose focus and spend more time talking about improvement than implementing it.
  • Intermediate Measures: For a clinical trial, intermediate measures are often required only if there is a concern for safety. In QI projects, rapid cycle tests of change with frequent feedback of data to frontline providers (and concomitant encouragement) are critical to the success of a project. Outcomes that are relatively rare (such as severe ROP in our example) may take a considerable period of time to become evident, often greater than one year. Therefore, it is important to monitor some “intermediate” outcomes or “process” measures to allow the team to evaluate, earlier in the project time frame, whether the project is being implemented effectively.
  • Measure Definitions: Clear, consistent definitions for key measures are critical to QI projects. Not only do they ensure that the team understands what is being measured, but they also allow data to be comparable across institutions. When possible, QI teams should use definitions currently employed by large databases (eg, the NICHD Neonatal Network Database, Pediatrix Medical Group’s Clinical Data Warehouse, or the Vermont Oxford Network Database).
In summary, quality improvement projects are rapid cycle interventions with frequent measurements, adjustments, and deployments. Quality improvement methodology allows for robust testing of change in situations where clinical trials are not ethical or practical. Healthcare is fortunate to possess such a dedicated and well-trained labor force. By following the principles outlined herein, providers can capitalize on this unique workforce, achieving measurable gains important to improving clinical outcomes.


References

1. Kohn LT, Corrigan JM, Donaldson MS. To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
2. Richardson WC, Briere R. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
3. Page A. Keeping Patients Safe: Transforming the Work Environment of Nurses. Washington, DC: National Academies Press; 2003.
4. Performance Measurement: Accelerating Improvement. Washington, DC: National Academies Press; 2005.
5. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370-376.
6. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324:377-384.
7. Wilson RM, Runciman WB, Gibberd RW, et al. The quality in Australian health care study. Med J Aust. 1995;163:458-471.
8. Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ. 2001;322:517-519.
9. Baker GR, Norton PG, Flintoft V, et al. The Canadian adverse events study: the incidence of adverse events among hospital patients in Canada. CMAJ. 2004;170:1678-1686.
10. Rogowski JA, Staiger DO, Horbar JD. Variations in the quality of care for very-low-birthweight infants: implications for policy. Health Affairs. 2004;23:88-97.
11. Horbar JD, Rogowski J, Plsek PE, et al. Collaborative quality improvement for neonatal intensive care. NIC/Q Project Investigators of the Vermont Oxford Network. Pediatrics. 2001;107:14-22.
12. Horbar JD, Carpenter JH, Kenny M, eds. Vermont Oxford Network 2004 Very Low Birth Weight Database Summary. Burlington, VT: Vermont Oxford Network; 2005.
13. Eichenwald EC, Blackwell M, Lloyd JS, et al. Inter-neonatal intensive care unit variation in discharge timing: influence of apnea and feeding management. Pediatrics. 2001;108:928-933.
14. Brodie SB, Sands KE, Gray JE, et al. Occurrence of nosocomial bloodstream infections in six neonatal intensive care units. Pediatr Infect Dis J. 2000;19:56-65.
15. Chow LC, Wright KW, Sola A. Can Changes in Clinical Practice Decrease the Incidence of Severe Retinopathy of Prematurity in Very Low Birth Weight Infants? Pediatrics. 2003; 111:339-345.
16. Bloom BT, Mulligan J, Arnold C, et al. Improving Growth of Very Low Birth Weight Infants in the First 28 Days. Pediatrics. 2003;112:8-14.
17. Horbar JD, Carpenter JH, Buzas J, et al. Timing of Initial Surfactant Treatment for Infants 23 to 29 Weeks’ Gestation: Is Routine Practice Evidence Based? Pediatrics. 2004;113:1593-1602.
18. Nelson EC, Splaine ME, Batalden PB. Building Measurement and Data Collection into Medical Practice. Ann Intern Med. 1998;128:460-466.
19. Berwick DM. Developing and Testing Changes in Delivery of Care. Ann Intern Med. 1998;128:651-656.
20. Nelson EC, Splaine ME, Plume SK. Good Measurement for Good Improvement Work. Q Manage Health Care. 2004;13:1-16.
21. Deulofeut R, Critz A, Adams-Chapman I, et al. Avoiding hyperoxia in infants ≤1250 g is associated with improved short- and long-term outcomes. J Perinatol. 2006;26:700-705.
22. Saugstad OD. Oxygen and retinopathy of prematurity. J Perinatol. 2006;26:S46-S50; discussion S63-S64.
23. Saugstad OD, Ramanathan R, Speer CP. Evidence vs experience. J Perinatol. 2006;26:S63-S64.
24. Goldsmith JP, Greenspan JS. Neonatal intensive care unit oxygen management: a team effort. Pediatrics. 2007;119:1195-1196.
THE PLAN-DO-STUDY-ACT CYCLE
Berwick DM. Developing and testing changes in delivery of care. Ann Intern Med. 1998; 128(8):651-656.

Plsek PE. Quality improvement methods in clinical medicine. Pediatrics. 1999; 103(1 Suppl E):203-214.

In these two papers, Berwick and Plsek describe the basic methods of developing and testing changes in the healthcare setting. The neonatal intensive care unit is a complex adaptive system – introducing change into such a complex nonlinear system often results in unpredicted outcomes with the potential for patient harm. The “Plan-Do-Study-Act” (PDSA) cycle is the primary tool used in performance improvement science to test change. The PDSA cycle uses short-cycle, small-scale tests to enable learning in complex dynamic systems. With small-scale tests of change, incremental and more predictable improvements can be accomplished.


The papers cited detail each element:
  • Plan: After assessment of the problem and determination of a potential change, a specific plan is constructed. The objective of the change is clearly stated, and the proposed change is defined. A small-scale implementation of the change is then designed. Specific outcome measures are identified and a data collection process is developed.
  • Do: The change is implemented and measurements are obtained. Problems and unexpected events are recorded for analysis.
  • Study: Data are analyzed and outcomes are compared to predicted results. Problems and unexpected events are evaluated, and findings summarized.
  • Act: The knowledge obtained from the tested change is used to refine the change. The cycle may need to be repeated multiple times on a progressively larger scale until enough confidence is gained to implement the change on a wide scale.
Use of this simple yet powerful methodology is the cornerstone of quality improvement. The PDSA cycle provides the mechanism to test change in a rapid fashion, enabling meaningful change to occur in complex adaptive systems such as the neonatal intensive care unit.
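To make the cycle concrete, the record of one PDSA iteration can be as lightweight as a structured note. The sketch below is a hypothetical Python record (all fields and values are invented, not drawn from the cited papers); its main point is that the prediction is written down in Plan so that Study has something to compare observed results against.

```python
# One hypothetical PDSA cycle captured as a plain record. Writing the
# prediction down in "plan" is what gives "study" something to test.
pdsa_cycle = {
    "plan": {
        "objective": "raise % time in the 85-93% SpO2 target on one pod",
        "change": "post target-range reminders at each bedside",
        "prediction": "time in target rises from ~65% to >75% within 2 weeks",
    },
    "do": {
        "scale": "4 beds, 2 weeks",
        "observations": ["two reminders removed during cleaning"],
    },
    "study": {
        "result": "time in target rose to 72%",
        "vs_prediction": "improved, but short of the >75% prediction",
    },
    "act": {
        "decision": "adapt",  # adopt / adapt / abandon
        "next_cycle": "add an alarm-limit check to nursing handoff and retest",
    },
}
```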
MEASUREMENT AND DATA COLLECTION IN QUALITY IMPROVEMENT
Nelson EC, Splaine ME, Batalden PB, Plume SK. Building measurement and data collection into medical practice. Ann Intern Med. 1998; 128(6):460-466.

Nelson EC, Splaine ME, Plume SK, Batalden P. Good measurement for good improvement work. Qual Manag Health Care. 2004; 13(1):1-16.

Improvement activities require some form of measurement to determine the effect of changes in practice. In these two papers, Nelson and Batalden describe methods of data collection and measurement for daily clinical improvement work and for more elaborate quality improvement research. They highlight 8 key principles of measurement for use in busy, complex clinical settings:
  1. Seek usefulness, not perfection, in the measurement. Extensive data collection is time consuming, costly, and often not practical in the clinical setting: collect only the information that is essential to answer your question.
  2. Use a balanced set of process, outcome, and cost measures. Process measures provide short-term feedback on the effectiveness of change, enabling rapid adjustment and refinement of practice changes. Outcome measures reflect your primary objective or long-term goal. Cost or balancing measures are the potential negative outcomes or adverse effects that could potentially occur. All of these measures are important in the evaluation of your project.
  3. Keep measurement simple – think big, but start small. Focus on the most important measurements for the initial stages of a project. As the project matures, more broad-ranging and detailed measures may be required.
  4. Use both qualitative and quantitative data. Objective quantitative measures are easily and commonly used. However, many problems have a subjective component that can be measured qualitatively (eg, user survey).
  5. Create standard operational definitions of measures. Clearly and explicitly define measures so that results are meaningful. Definitions of measures and data collection methods must be standardized – failure to standardize measures will cause team members to question the validity of the project and the results.
  6. Measure small but representative samples. While all-inclusive measurement is ideal, it is rarely practical. Subsets of patients can be sampled to provide a more manageable data collection plan. Again, the emphasis is on usefulness, not perfection.
  7. Build measurement into daily work. Ideally, data collection can be built into the standard work process. If the mechanics of obtaining data are burdensome and time-consuming, compliance will be low.
  8. Develop a measurement team. Quality improvement is a multidisciplinary endeavor. Sharing the work of data collection provides varied perspectives and improves teamwork.
In order to motivate continued participation in quality improvement activities, clinicians must receive feedback on the effectiveness of their work. Unlike scientific studies that require rigorous and expensive data collection, quality improvement is focused on measuring a relatively small group of important process, outcome, and balancing measures. Use of these 8 principles of measurement provides a framework in which useful and practical measurement can be incorporated into daily clinical practice.
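As a small illustration of principles 3 and 6, a sampling plan can be as simple as drawing a few beds at random each week for audit. The Python sketch below is hypothetical (the census names, sample size, and seed are invented); it fixes the random seed so the week's audit list is reproducible.

```python
# Illustration of "measure small but representative samples": audit a
# random handful of beds each week instead of every bed. Values invented.
import random

census = [f"NICU-bed-{i:02d}" for i in range(1, 25)]  # 24 occupied beds

random.seed(20080409)                        # fixed seed -> reproducible list
weekly_sample = random.sample(census, k=4)   # 4 bedside audits this week
print(weekly_sample)
```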
CHANGE PACKAGES, CHANGE CONCEPTS, AND CHECKLISTS
Pronovost P, Needham D, Berenholtz S, Sinopoli D, Chu H, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006; 355(26):2725-2732.

Hales BM, Pronovost PJ. The checklist—a tool for error management and performance improvement. J Crit Care. 2006; 21(3):231-235.

Gawande A. The checklist: if something so simple can transform intensive care, what else can it do? New Yorker. 2007; 86-101.

In the Keystone ICU project, Pronovost dramatically demonstrated a large and sustained reduction (up to 66%) in rates of catheter-related bloodstream infections in a group of 108 adult intensive care units in Michigan.

The study intervention combined 5 simple evidence-based procedures into a single intervention package: hand washing, using full-barrier precautions during the insertion of central venous catheters, cleaning the skin with chlorhexidine, avoiding the femoral site if possible, and removing unnecessary catheters. A key aspect of this change package was the use of a checklist during central line insertion. If any component of the change package was not properly performed, the observer was empowered to stop the procedure until compliance was achieved. In the 108 ICUs studied, the mean rate of catheter-related bloodstream infections decreased from 7.7 infections per 1000 catheter-days at baseline to 1.4 at 16 to 18 months.
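The stop-the-line logic of such a checklist is easy to express in code. Below is a minimal, hypothetical Python sketch of a gate that blocks the procedure until every element is verified; the item wording paraphrases the five steps above, and the function itself is an invention for illustration, not the Keystone tool.

```python
# Hypothetical stop-the-line gate for the five-element change package.
# Item wording paraphrases the steps described above; not the actual tool.
CENTRAL_LINE_CHECKLIST = [
    "hand washing",
    "full-barrier precautions",
    "chlorhexidine skin prep",
    "femoral site avoided if possible",
    "unnecessary catheters removed",
]

def may_proceed(verified):
    """Return True only when every checklist element has been verified."""
    missing = [item for item in CENTRAL_LINE_CHECKLIST if item not in verified]
    if missing:
        print("STOP - not yet verified:", "; ".join(missing))
        return False
    return True

may_proceed({"hand washing", "chlorhexidine skin prep"})  # -> False, lists gaps
```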

This landmark paper demonstrated the power of a simple “change package” of a few evidence-based interventions, reinforced by the use of a checklist. The checklist is just one example of a “change concept” — a general approach to change that has been found to be useful in developing specific ideas for changes that lead to improvement. As described by Gawande, checklists have been successfully used for decades in industries outside of healthcare. Pronovost has powerfully demonstrated the effectiveness of this change concept in healthcare as well.

The Hales article documents the details of successful checklist use in aviation, industry, and healthcare. Appropriate use and common barriers to use in healthcare are described. Despite the success and simplicity of checklists in many arenas, why have physicians been so slow to embrace this methodology? The answer is not clear. The Hales and Pronovost paper provocatively contrasts the use of checklists in healthcare and aviation:


“Checklists have contributed to prevention of error under stressful conditions, maintenance of precision, focus, clarity, and memory recall. Although pilots are expected to use their professional judgment and critical thinking skills, they are also provided with tools to aid them in recalling the masses of catalogued information at the appropriate time. If pilots are not expected to recall from memory each crucial step of their complex tasks—why is this required of clinicians who are also responsible for the lives of others? Is the aviation industry willing to take these extra measures because their own lives are put at risk by their performance?”
STATISTICAL PROCESS CONTROL CHARTS AND THE SCIENCE OF IMPROVEMENT
Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care. 2003; 12(6):458-464.

Matthes N, Ogunbo S, Pennington G, Wood N, Hart MK, Hart RF. Statistical process control for hospitals: methodology, user education, and challenges. Qual Manag Health Care. 2007; 16(3):205-214.

Berwick DM. The science of improvement. JAMA. 2008; 299(10):1182-1184.

As described by Benneyan, “Statistical process control (SPC) is a branch of statistics that combines rigorous time series analysis methods with graphical presentation of data, often yielding insights into the data more quickly and in a way more understandable to lay decision makers.” Changing data analysis from static annualized data displays to dynamic “real-time” displays provides a method to rapidly identify significant changes in outcomes.

Individual measurements from any process will exhibit variation. SPC methodology provides a way to determine whether data variation is stable (in statistical control) or unstable (out of statistical control); stable variation is termed “common cause variation,” while unstable variation signals “special cause variation.” Multiple statistical rules and tests can be used to determine the type of variation occurring, even with relatively small numbers of data points. Use of SPC methodology provides tremendous power to rapidly assess the effect of change on a system, and is a key element of quality improvement practice.
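For event counts over varying exposure (eg, infections per 1000 catheter-days), a u-chart is one common SPC tool: the center line is the pooled rate, and each month's 3-sigma limits widen or narrow with that month's exposure. A minimal Python sketch follows, assuming Poisson-distributed counts; all data are invented for illustration.

```python
# Minimal u-chart: monthly infection counts over varying catheter-day
# exposure. Center line = pooled rate; limits = u_bar +/- 3*sqrt(u_bar/n_i).
from math import sqrt

infections    = [9, 7, 11, 4, 3, 2]                 # invented monthly counts
catheter_days = [1100, 950, 1300, 1250, 1000, 1200]

n = [d / 1000 for d in catheter_days]               # exposure per 1000 days
u_bar = sum(infections) / sum(n)                    # center line (pooled rate)

for month, (c, ni) in enumerate(zip(infections, n), start=1):
    u = c / ni                                      # this month's rate
    ucl = u_bar + 3 * sqrt(u_bar / ni)
    lcl = max(0.0, u_bar - 3 * sqrt(u_bar / ni))
    kind = "special cause" if (u > ucl or u < lcl) else "common cause"
    print(f"month {month}: {u:.1f}/1000 days (limits {lcl:.1f}-{ucl:.1f}) {kind}")
```

Plotted over time, points outside the limits (or long runs on one side of the center line) signal special cause variation worth investigating.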

Matthes et al describe the implementation of SPC methodology in the hospital setting, including educational approaches and tactics to increase the use of SPC. They also discuss the use of SPC as required by the Joint Commission and in the context of public reporting of outcomes such as the National Hospital Quality Measures.

In “The Science of Improvement,” Berwick describes the progress of quality improvement science in healthcare and addresses criticism from proponents of randomized clinical trials (RCTs), who regard RCTs as the gold standard for learning. For many problems faced today in healthcare, RCTs are not possible, ethical, or practical. Quality improvement science can fill the gap in many of these situations. Only by combining the power of RCTs with the practical science of quality improvement can we continue to move forward in improving the success and quality of healthcare.

[Figure: graphical depiction of the SPC process]
The science of statistical process control continues to evolve. SPC provides dynamic and unique ways to learn and improve in healthcare. Further education and training in the use of this methodology is essential to the sustained progress of healthcare quality improvement.
STANDARD OPERATIONAL DEFINITIONS
Stover BH, Shulman ST, Bratcher DF, Brady MT, Levine GL, Jarvis WR; Pediatric Prevention Network. Nosocomial infection rates in US children's hospitals' neonatal and pediatric intensive care units. Am J Infect Control. 2001; 29(3):152-157.

Braun BI, Kritchevsky SB, Kusek L, Wong ES, Solomon SL, Steele L, Richards CL, Gaynes RP, Simmons B; Evaluation of Processes and Indicators in Infection Control (EPIC) Study Group. Comparing bloodstream infection rates: the effect of indicator specifications in the evaluation of processes and indicators in infection control (EPIC) study. Infect Control Hosp Epidemiol. 2006; 27(1):14-22. Epub 2006 Jan 6.

A standard operational definition is a clear, quantifiable description of what to measure and how to measure it. While this seems an obvious concept (comparing apples with apples, not apples with oranges), a lack of standard definitions is a major source of difficulty in many quality improvement projects. Standardization of definitions is essential to enable meaningful measures and comparisons.

Stover and the Pediatric Prevention Network highlighted the importance of standard definitions. They surveyed 50 children’s hospitals to determine nosocomial infection rates and surveillance methods used in neonatal and pediatric intensive care units. Reported infection rates varied by hospital; some reported overall rates, others focused on particular sites of infection. Many did not provide NICU device-associated rates stratified by birth-weight group, and denominators used to calculate device-associated infection rates also varied (patient days versus device days). Therefore, any meaningful inter-hospital comparison of nosocomial infection rates was impossible.

Braun and the EPIC study group showed that even seemingly clear outcome definitions may yield conflicting results. They compared the median rate of bloodstream infections per 1000 central line days in a group of 28 hospitals. Interestingly, they found very discordant results depending on how the data were obtained. Administrative data, clinical data, combined clinical and administrative data, inclusion of hospital-wide versus intensive care unit data, and the specific technique of counting central line days all resulted in different infection rates, despite the seemingly identical definition. The authors note that meaningful inter-hospital comparisons of infection rates are not possible unless all aspects of data definition and collection are standardized.
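The arithmetic behind that discordance is simple to reproduce: the same numerator divided by denominators captured in different ways yields different “rates per 1000 line-days.” A toy Python example follows; all numbers are invented.

```python
# Same infections, different denominator sources -> different "rates".
# All numbers are invented to illustrate the EPIC study's point.
infections = 12

line_days = {
    "daily bedside device census": 1500,  # counted at the bedside each day
    "administrative/billing data": 1700,  # inferred from billing records
}

for source, days in line_days.items():
    rate = infections / days * 1000
    print(f"{source}: {rate:.1f} infections per 1000 line-days")
# -> 8.0 vs 7.1 per 1000 for an identical-sounding definition
```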
CME/CNE INFORMATION
 Accreditation Statement — back to top
Physicians
The Johns Hopkins University School of Medicine is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians.

Nurses
The Institute for Johns Hopkins Nursing is accredited as a provider of continuing nursing education by the American Nurses Credentialing Center's Commission on Accreditation.

Respiratory Therapists
Respiratory therapists should visit this page to confirm that AMA PRA Category 1 Credit(s)™ is accepted toward fulfillment of RT requirements.
 Credit Designations — back to top
Physicians
eNewsletter: The Johns Hopkins University School of Medicine designates this educational activity for a maximum of 1.0 AMA PRA Category 1 Credit(s)™. Physicians should only claim credit commensurate with the extent of their participation in the activity.

Podcast: The Johns Hopkins University School of Medicine designates this educational activity for a maximum of 0.5 AMA PRA Category 1 Credit(s)™. Physicians should only claim credit commensurate with the extent of their participation in the activity.

Nurses
eNewsletter: This 1.0 contact hour Educational Activity (Provider Directed/Learner Paced) is provided by The Institute for Johns Hopkins Nursing. Each newsletter carries a maximum of 1.0 contact hours.

Podcast: This 0.5 contact hour Educational Activity (Provider Directed/Learner Paced) is provided by The Institute for Johns Hopkins Nursing. Each podcast carries a maximum of 0.5 contact hours.

Respiratory Therapists
For United States: Visit this page to confirm that your state will accept the CE Credits gained through this program.

For Canada: Visit this page to confirm that your province will accept the CE Credits gained through this program.
 Post-Test — back to top
To take the post-test for eNeonatal Review you will need to visit The Johns Hopkins University School of Medicine's CME website or The Institute for Johns Hopkins Nursing. If you have already registered for another Hopkins CME program at these sites, simply enter the requested information when prompted. Otherwise, complete the registration form to begin the testing process. A passing grade of 70% or higher on the post-test/evaluation is required to receive CME/CNE credit.
 Statement of Responsibility — back to top
The Johns Hopkins University School of Medicine and The Institute for Johns Hopkins Nursing take responsibility for the content, quality, and scientific integrity of this CME/CNE activity.
 Intended Audience — back to top
This activity has been developed for neonatologists, NICU nurses and respiratory therapists working with neonatal patients. There are no fees or prerequisites for this activity.
 Learning Objectives — back to top
At the conclusion of this activity, participants should be able to:

Identify to colleagues an appropriate topic within the NICU suitable for a quality improvement project
Organize colleagues into a quality improvement team to implement the quality improvement project
Incorporate measurements into their quality improvement efforts so that colleagues may analyze the success of the project
 Internet CME/CNE Policy — back to top
The Office of Continuing Medical Education (CME) at The Johns Hopkins University School of Medicine (SOM) is committed to protecting the privacy of its members and customers. The Johns Hopkins University SOM CME maintains its Internet site as an information resource and service for physicians, other health professionals and the public.

Continuing Medical Education at The Johns Hopkins University School of Medicine and The Institute for Johns Hopkins Nursing will keep your personal and credit information confidential when you participate in a continuing education (CE) Internet based program. Your information will never be given to anyone outside The Johns Hopkins University program. CME/CE collects only the information necessary to provide you with the service you request.
 Faculty Disclosure — back to top
As a provider accredited by the Accreditation Council for Continuing Medical Education (ACCME), it is the policy of Johns Hopkins University School of Medicine to require the disclosure of the existence of any significant financial interest or any other relationship a faculty member or a provider has with the manufacturer(s) of any commercial product(s) discussed in an educational presentation. The Program Directors reported the following:

Edward E. Lawson, MD has indicated a financial relationship of grant/research support from the National Institutes of Health (NIH). He also receives financial/material support from Nature Publishing Group as the Editor of the Journal of Perinatology.
Christoph U. Lehmann, MD has received grant support from the Agency for Healthcare Research and Quality and the Thomas Wilson Sanitarium of Children of Baltimore City.
Lawrence M. Nogee, MD has received grant support from the NIH.
Mary Terhaar, DNSc, RN has indicated no financial relationship with commercial supporters.
Robert J. Kopotic, MSN, RRT, FAARC has indicated a financial relationship with the ConMed Corporation.

Guest Authors Disclosures
 Disclaimer Statement — back to top
The opinions and recommendations expressed by faculty and other experts whose input is included in this program are their own. This enduring material is produced for educational purposes only. Use of The Johns Hopkins University School of Medicine name implies review of educational format design and approach. Please review the complete prescribing information of specific drugs or combination of drugs, including indications, contraindications, warnings and adverse effects before administering pharmacologic therapy to patients.
© 2008 JHUSOM, IJHN, and eNeonatal Review

Created by DKBmed.
 