What We Learned from the Pilot Project
Common successes and challenges emerged from the Community Health Center CKD Pilot Project, particularly related to effective interventions, patient education, project implementation, and data collection/reporting.
Effective and Sustainable Interventions
Of the CCM elements utilized, interventions in decision support (DS), clinical information systems (CIS), and delivery system design (DSD) were most successful, including:
- Creating prompts in the EHR for elevated BP and abnormal or missing labs. EHR heterogeneity led to variation in how prompts were implemented; prompts embedded within the provider workflow were more usable. (DS)
- Creating CKD- and/or CKD-HTN-specific templates in the EHR, including recommended testing and treatment. Centers created these templates specifically for the project. (CIS)
- Delegating tasks to reception staff, medical assistants, and nurses to help in care management; integrating a team "huddle" to review needed patient testing and vaccinations. This approach helped with patient flow. (DSD/Organization of Health Care)
- Developing algorithms for CKD labs and annual screening for patients with diabetes; developing standing orders for an annual UACR for diabetic patients. Lab slips and standing orders helped providers consider which tests were needed and facilitated delegation to medical assistants. (DS/DSD)
- Performing automated monthly reporting; developing custom clinical reports. Centers varied in what data were measured and how they were reported. The different systems used by CHCs offer variations of a report card that can be viewed by disease over designated timeframes. (CIS)
- Conducting patient education using self-management strategies with NKDEP's Explaining Your Kidney Test Results tear-off pad for appropriate patients. The pad engaged providers and patients, and the format was more useful than pamphlets/other formats, partly because it saved space in exam rooms. Some centers included the tear-pad PDF within their EHR. (Self Management)
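The decision-support prompts and screening algorithms above amount to simple rule checks against a patient record. As a minimal sketch, the logic might look like the following; the thresholds, field names, and prompt wording are illustrative assumptions, not the pilot centers' actual EHR rules.

```python
from datetime import date, timedelta

# Illustrative thresholds -- assumptions, not the pilot's actual EHR logic.
BP_LIMIT = (140, 90)                   # systolic/diastolic mmHg prompt threshold
SCREEN_INTERVAL = timedelta(days=365)  # annual UACR/eGFR screening window

def ckd_prompts(patient, today):
    """Return workflow prompts for one patient record (a plain dict)."""
    prompts = []
    sys_bp, dia_bp = patient.get("bp", (None, None))
    # Elevated-BP prompt (DS element).
    if sys_bp is not None and dia_bp is not None and (
            sys_bp >= BP_LIMIT[0] or dia_bp >= BP_LIMIT[1]):
        prompts.append("Elevated BP: reassess and document plan")
    # Annual CKD screening for diabetic patients (standing-order logic).
    if patient.get("diabetes"):
        last_uacr = patient.get("last_uacr")
        if last_uacr is None or today - last_uacr > SCREEN_INTERVAL:
            prompts.append("UACR missing or overdue: order annual UACR")
        last_egfr = patient.get("last_egfr")
        if last_egfr is None or today - last_egfr > SCREEN_INTERVAL:
            prompts.append("eGFR overdue: order serum creatinine/eGFR")
    return prompts

# Example: diabetic patient with elevated BP and no UACR on file.
example = {"bp": (152, 88), "diabetes": True,
           "last_uacr": None, "last_egfr": date(2012, 1, 15)}
print(ckd_prompts(example, today=date(2012, 10, 10)))
```

Embedding checks like these directly in the charting workflow, rather than in a separate report, is what the pilot centers found made prompts usable.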
Patient Education
All centers implemented CKD patient education, which was divided into four related performance measures (CKD and Risk, Testing, Treatment, and Dialysis/Transplantation). Common themes included:
- NKDEP's tear-off pad was extremely helpful to all centers for increasing CKD education. Most centers used the pad to ensure key topics were discussed.
- Involving allied health professionals (e.g., clinical and medical assistants) in patient education was effective for many centers.
- Providers were interested in patient self-management but needed help incorporating patient-centered goal setting within short visits.
Project Implementation and Maintenance
Several activities improved project implementation and maintenance, and centers acknowledged common challenges:
- A project champion from the CHC community was critical to conceptualizing the project.
- Several activities aided center recruitment and staff participation:
  - Finding centers that were involved in HRSA's HDCs and had systems-change experience and management support.
  - Supporting regular communication among centers for help and sharing of best practices.
  - Identifying a CKD expert to present the project, facilitating provider buy-in.
  - Integrating CKD care into existing diabetes care rather than implementing a new, stand-alone activity.
  - Allowing implementation to be tailored to each center's needs.
- The support network of collaborating centers was more important to centers than funding.
- Centers suggested that, in place of ongoing technical support, a kickoff presentation by a local champion and a nephrologist, with baseline data and ideas for change, would be sufficient.
- Common challenges affected implementation:
  - Updating EHRs to collect CKD-related data and create reports.
  - Getting providers to order a UACR, especially after the initial assessment.
  - Starting and maintaining project activities with limited time and competing priorities.
  - High staff turnover, particularly among provider champions.
Data Collection and Reporting
Data collection and reporting challenges limited centers' ability to access and use their data. Several centers experienced major health record system disruptions (e.g., transitioning from PECSYS to an EHR) and had difficulty setting up the CKD report within new systems. Some centers were unable to populate their templates electronically and had to transfer data manually. In addition, EHR platforms varied even across this small sample of centers.
Key findings included:
- Centers wanted to use data to monitor impact at the population level. Centers that could access data regularly used it to discuss findings and future focus at team, staff, and QI meetings; provide patient counseling and intervention; and post graphs so providers could evaluate and compare performance. Presenting data to providers efficiently and objectively was challenging; however, many centers found it helpful to put the data in the context of their system and outline resulting actions. Centers also wanted to compare data with one another so they could model the practices of high achievers.
- Changing systems for QI initiatives presents time and resource challenges. The EHRs employed by participating centers were difficult to use for data extraction and performance improvement, and centers allocated significant time and resources trying to establish registry-like functionality. Centers facing these challenges experienced delays and inconsistencies in data collection. The presence of a lab interface, standing orders, and accurate manual data entry all affected data collection and sustainability.
  - When implementing new measures, some centers did not understand the difference between structured data (constrained default options) and unstructured data (open input fields), which created issues because fields could not be modified later. Unstructured fields were a particular challenge because providers did not enter data in them, especially for the urine albumin-to-creatinine ratio (UACR).
- The project required too many data points; race/ethnicity data were not utilized and increased data collection challenges. Centers initially wanted race/ethnicity data to analyze disparities but ultimately did not use the data.
- Variation in sample size and inaccurate data limited quantitative analysis, hindering evaluation of the project's effect on patient outcomes. Rigorous data analysis was not possible because differences in some centers' patient populations could bias results. Several centers identified data inaccuracies stemming from manual entry or system changes. Data were therefore used to inform programmatic changes rather than to evaluate impact.
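The structured-versus-unstructured distinction noted above can be illustrated with a small sketch. The field names, records, and parsing rule below are hypothetical, not drawn from any pilot center's EHR; they show why a typed field is reportable while free text is not.

```python
import re

# Hypothetical visit records: one structured numeric field, one free-text note.
structured_visit = {"uacr_mg_g": 42.0}                      # constrained field
unstructured_visit = {"note": "pt counseled; uacr approx 42 per lab"}

def uacr_from_structured(visit):
    """Structured data: a direct, reliable read of a typed field."""
    return visit.get("uacr_mg_g")

def uacr_from_note(visit):
    """Unstructured data: fragile regex scraping of free text.
    Misses the value whenever the wording or spelling varies."""
    match = re.search(r"uacr\D*(\d+(?:\.\d+)?)", visit.get("note", ""), re.I)
    return float(match.group(1)) if match else None

print(uacr_from_structured(structured_visit))  # 42.0
print(uacr_from_note(unstructured_visit))      # 42.0, but only while phrasing matches
print(uacr_from_note({"note": "albumin/creatinine ratio forty-two"}))  # None
```

This is the reporting problem the centers ran into: once a UACR lives only in an open input field, automated monthly reports silently undercount it.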
Page last updated: October 10, 2012