
Tackling the Triple Aim in Primary Care Residencies: The I3 POP Collaborative

Katrina E. Donahue, MD, MPH; Alfred Reid, MA; Ann Lefebvre, MSW, CPHQ; Michele Stanek, MHS; Warren P. Newton, MD, MPH

Background and Objectives: The I3 POP Collaborative’s goal is to improve care of populations served by primary care residencies in North Carolina, South Carolina, and Virginia by dramatically improving patients’ experience, quality of care, and cost-effectiveness. We examine residency baseline triple aim measures, compare with national benchmarks, and identify practice characteristics associated with data reporting.

Methods: We used a cross-sectional design with 27 primary care residency programs caring for over 300,000 patients. Outcome measures were obtained via data pulls from electronic health records and practice management systems submitted by residencies; they include quality measure sets for chronic illness and prevention, patient experience (usual provider continuity and time to third available appointment), and utilization (emergency visits, hospitalizations, referrals, high-end radiology).

Results: Thirteen practices (48%) reported all required baseline measures. We found associations between data reporting ability and registry use (59% versus 0%) and having a faculty member involved in data management (69% versus 29%). Reported measures varied widely; examples include colorectal cancer screening (median: 61%, range: 28%–80%), provider continuity (median: 52%, range: 1%–68%), and subspecialty referral rate (median: 24%, range: 10%–51%). Seventy percent of patient-centered medical home (PCMH)-recognized practices had usual provider continuity (UPC) at or above the collaborative median versus 0% of non-PCMH-recognized practices. Median data were similar to national comparisons for chronic disease measures, lower for prevention, and better for utilization.

Conclusions: Baseline triple aim data are highly variable among residencies, but residency care is comparable to available national standards. Registry use and faculty leadership in data management are critical success factors for assessing practice performance.

(Fam Med 2015;47(2):91-7.)

Health care transformation depends upon successful primary care transformation. How to transform practices into true patient-centered medical homes (PCMH),1 how to build primary care into accountable care organizations (ACOs),2 and how to train the physician workforce needed for this new world, however, remain unclear. Moreover, given their large organizational size, often rigid culture, and multifocal missions, academic settings are traditionally difficult environments for quality improvement.3 To address these issues, the I3 Collaborative was developed.4 I3 stands for impact “cubed”: first, on residency patients; second, on the practices the residents go to; and third, on the patients of the local practices that residency faculty may support. Initially developed in 2005 to address chronic disease improvement in family medicine residencies in North and South Carolina,4 I3 expanded both regionally and across other primary care disciplines with a focus on PCMH development and recognition in 2009.5 I3 POP, the third iteration of the I3 Collaborative, now seeks to implement the triple aim6 in primary care residencies in North Carolina, South Carolina, and Virginia, with the goal, over 30 months, of dramatically improving patient experience of care, measured quality of care, and outcomes for populations while curbing inappropriate utilization of services.

The purpose of this cross-sectional study is to examine baseline I3 POP data for the triple aim and (1) describe the characteristics of the participating residencies, (2) compare measures of quality, patient access, continuity, and utilization with nationally reported benchmarks, and (3) identify practice characteristics associated with data reporting ability and higher measured outcomes at baseline.

Methods




Setting and Collaborative Design

The I3 POP Collaborative tailored the Institute for Healthcare Improvement Breakthrough Series Collaborative design7 to address practice transformation specifically in primary care residencies. As pre-work in early 2012, the participating programs (20 family medicine, three internal medicine, and four pediatric residencies from the Carolinas and Virginia) responded to an initial survey reporting program and practice characteristics, including size and setting of the residency, number of providers and staff, electronic health record (EHR) used, faculty physician involvement in quality, and program priorities for improvement. An initial planning meeting finalized measures and endorsed the overall design of the collaborative. Participating programs identified improvement teams including faculty, residents, and staff and participated in face-to-face meetings every 6 months.

Core Measures

Core measures for the I3 Collaborative include descriptive measures of each site’s patient population, including number of active patients (at least one clinic visit in the 18 months between January 1, 2011, and June 30, 2012) and number of annual visits. In addition, measures for each domain of the triple aim were chosen to maximize consistency with established national standards (eg, National Quality Forum [NQF], National Committee for Quality Assurance [NCQA]) and existing or emerging practice requirements such as the Meaningful Use of health information technology mandated by the HITECH Act. Table 1 lists the full I3 core measure set. These measures were submitted at baseline and annually. To find population comparisons, rates for all these measures were pulled from available published population data, including NCQA measures,8 the National Ambulatory Medical Care Survey,9 the National Health Interview Survey,10 the CDC National Immunization Survey,11 the National Hospital Discharge Survey,12 the National Hospital Ambulatory Medical Care Survey,13 and other health care groups.14-17


The collaborative included three threads: quality of care, patient experience, and utilization. Residencies participated in at least one thread but could participate in more. Each thread held a separate monthly webinar, organized into 6-month segments between the face-to-face collaborative meetings. The webinars required some data collection according to the thread topic and included a focused discussion. Topics included case management, special populations, reducing readmissions, quality improvement tools, cycle times, and patient advisory councils.


All statistical analyses were performed using Stata 10.1. Univariate and bivariate analyses of continuous and categorical variables helped identify missing data, sparse numbers, extreme values, and linearity. Thirteen programs in the collaborative were able to meet the agreed-upon 6-month deadline for reporting baseline core measures. To assess whether any program characteristics were associated with the ability to meet the data reporting deadline, we compared the two groups using Student’s t or Mann-Whitney tests for continuous variables and χ2 tests for categorical variables. To examine characteristics associated with baseline performance, we compared practices above and below the median of baseline performance. Baseline data and initial assessment protocols were approved by the University of North Carolina Institutional Review Board. Data shared among the collaborative were exempted from review, since they were extracted from existing records and contained no personal identifiers.
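The comparisons described above can be sketched as follows. This is a minimal illustration in Python with SciPy, not the authors' Stata code, and all data values and variable names are hypothetical:

```python
from scipy import stats

# Hypothetical data: number of providers for programs that did vs. did not
# meet the 6-month reporting deadline (a continuous characteristic).
reported = [44, 51, 38, 60, 47, 55]
not_reported = [30, 28, 41, 33, 36]

# Continuous characteristics: Mann-Whitney test (rank-based, no normality assumption)
u_stat, p_continuous = stats.mannwhitneyu(reported, not_reported, alternative="two-sided")

# Categorical characteristics: chi-square on a 2x2 table, eg,
# registry use (rows) by reporting status (columns).
table = [[10, 0],   # registry present: reported / not reported
         [3, 14]]   # no registry
chi2, p_categorical, dof, expected = stats.chi2_contingency(table)

# Median split of baseline performance: practices at or above vs. below
# the group median, as used for the performance comparisons.
performance = [61, 47, 52, 70, 55, 48, 63]
median = sorted(performance)[len(performance) // 2]
above_median = [v for v in performance if v >= median]
```

With small cell counts, as in a 27-program collaborative, rank-based and exact tests are the usual choice over parametric alternatives.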




Results
Twenty-seven programs caring for over 313,000 patients participated in I3 POP (20 family medicine, three internal medicine, and four pediatric residencies) (Table 1). Residency practices included eight university departments, 15 community-based programs, one rural-track residency, one urban private-practice residency, and two military residencies. A high proportion of patients are uninsured (45%) or have Medicaid (24%). All practices use an EHR, and although many practices used nominally the same EHR, we found little commonality in data acquisition. Forty-four percent of programs had switched EHRs in the past 12 months or anticipated changing their EHR in the next 12 months. Over 80% had a registry (defined as a searchable list of patient data that the practice actively uses to assist in patient care), and 48% described a faculty physician involved in data management (defined as the person who provides data reporting or programming services for the residency program’s EHR and/or registry).

Reported measures varied widely among practices (Table 2). Examples include the proportion of adult diabetic patients with A1C <8% (median: 61%, range: 47%–85%), LDL <100 mg/dL (median: 47%, range: 23%–70%), and nephropathy screening (median: 61%, range: 7%–97%). Overall hypertension control (blood pressure <140/90 mm Hg) ranged from 47%–76%. Preventive measures also varied widely; examples include colorectal cancer screening (median: 61%, range: 28%–80%) and mammography screening (median: 54%, range: 18%–83%). Variation was also noted for usual provider continuity (median: 52%, range: 5%–68%) and subspecialty referral rate (median: 24%, range: 10%–51%).
In comparison to other reported population measures, baseline I3 participant measures were similar to reported chronic disease measures and patient satisfaction measures, below reported prevention measures (except tobacco), and lower for emergency department use, hospitalization, and readmission measures; high-end radiology referrals and specialty referrals were higher than other reported population measures. Thirteen practices (48%) were able to report all required measures at baseline by 6 months. We found associations between practices’ ability to report data and the use of a registry (59% versus 0%, P=.020) and having a faculty member involved in data management (69% versus 29%, P=.040) (Table 3). We found no association with NCQA PCMH recognition or with any of the other characteristics we measured, including EHR stability (current EHR in place for at least 1 year; no EHR migration anticipated in next year), number of physician providers, number of active patients, payor mix, or university versus community setting.


By 12 months, five additional practices were able to report baseline data retrospectively, so we based performance comparisons on that sample of 18. Characteristics associated with lower ED visit utilization (median split: baseline performance under the group median) included university departments (100% versus 25%, P=.030), a larger number of providers (over 43, P=.040), and a larger number of active patients (over 10,000, P=.040). Having a larger proportion of Medicare patients (>25%) was associated with a lower admission rate (P=.020). There were no other baseline differences in characteristics (PCMH, EHR stability, faculty involvement in data management, and other types of insurance). For the ED, referral, and hospitalization measures, there were no differences in outcomes by age (a proxy for the different populations seen by different residencies).

In terms of patient experience measures, usual provider continuity (UPC) was associated with PCMH recognition; 70% of PCMH-recognized practices had UPC at or above the collaborative median versus 0% of non-PCMH-recognized practices (P=.03). Having a higher proportion of Medicare patients was also associated with better usual provider continuity. There were no baseline differences in either of these measures on other characteristics (university/other setting, EHR stability, faculty involvement in data management, registry use, number of providers, number of patients, or other types of insurance). Practice-specific patient satisfaction measures were used in 48% of practices; however, the variety of instruments used (Press-Ganey, CG-CAHPS, Army Provider Level Satisfaction Survey, practice-developed; data not shown) precluded comparison.
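The UPC measure above is conventionally computed per patient as the share of visits made to the provider seen most often. A minimal sketch of that standard definition follows; the function name is illustrative, and the collaborative's exact specification (eg, minimum visit counts or attribution rules) may differ:

```python
from collections import Counter

def usual_provider_continuity(visit_providers):
    """Fraction of a patient's visits made to the provider seen most often.

    `visit_providers` lists one provider identifier per visit. This is the
    standard UPC definition; a practice's exact specification may differ.
    """
    if not visit_providers:
        return 0.0
    most_seen_count = Counter(visit_providers).most_common(1)[0][1]
    return most_seen_count / len(visit_providers)

# A patient seen three times by provider A and once each by B and C:
upc = usual_provider_continuity(["A", "A", "B", "A", "C"])  # 3/5 = 0.6
```

A practice-level UPC is then typically the visit-weighted or patient-level average of these per-patient fractions.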

When examining practice characteristics associated with better measures, practices with higher baseline A1C quality measures for diabetes were more likely to have a stable EHR (83% versus 0%, P=.006). There were no baseline differences in other characteristics for A1C control, hypertension control, or mammography rates (PCMH recognition, faculty involvement in data management, registry use, number of providers, number of patients, or other types of insurance; data not shown).

Discussion




The I3 POP collaborative seeks to dramatically improve quality of care and patient experience while reducing cost in 27 primary care residency programs in North Carolina, South Carolina, and Virginia. Our baseline data underscore the difficulty of obtaining population-based data from hospitals and health systems, despite the use of measures and specifications mandated by meaningful use legislation and incentives, as well as marked variation in baseline quality, patient experience, and cost across residencies. All of these residencies use electronic records, but almost half have recently undergone, or are about to undergo, a “rip and replace” of their EHR, with all that implies in terms of clinical upheaval. Faculty leadership is a key contributor to having the right data, as is the relationship with the hospital technical personnel assigned to provide reports.

It is important to keep in mind the limitations of our data. Importantly, I3 is a pragmatic collaborative, with secondary data obtained from actual clinical care records; these data matter because they are used to drive QI as well as quality reporting and financial incentives. We attempted to improve data validity through extensive discussion and consensus about the study measures and their specifications, allowing residencies to choose what they wanted to work on and providing an opportunity to align with their health care system priorities. We additionally provided ongoing one-on-one interaction and support regarding collection and submission of data. Without diving into the details of the proprietary reporting specifications or program mapping done at individual health system IT departments, however, it is difficult to be sure data are complete and comparable. For the cost data, some residencies reported significant proportions of patients seeking care outside of their system hospitals or facilities, thus lowering reported utilization rates. However, the data are reasonable to use for tracking trends over time and quality improvement. For patient experience, different systems use different measures, making head-to-head comparison difficult; this may improve as standardized surveys like the Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS) spread. Finally, these data represent 27 Southeastern residencies and may not be generalizable to other residencies and regions. However, the residencies include a wide variety of types, locations, and governance and, in the aggregate, care for over 300,000 patients across all three primary care disciplines. I3 represents the only study of quality of care in residency programs to date.

Within these limitations, then, what do the data mean? Health policy analysts and others assume that the new health care systems will be built on a foundation of primary care, with wrap-around case management and IT services, with the goal of improving the health of the population. Key to this vision will be the availability of triple aim data that can drive care at the level of the individual practices that will be the building blocks of integrated systems. Our data suggest, however, that this vision is a long way off. Despite using measures required by meaningful use, and despite giving residencies some flexibility about which measures they track, getting population-based data organized by the population of patients cared for by residencies has been very difficult, even with more than 6 months’ time to do so. Surprisingly, 41% of practices with registries were still unable to report data in the initial phase. Residencies trying to obtain triple aim data fell victim to competing health system priorities, a lack of health system understanding of what is necessary for population health or primary care, or just the chaos of contemporary health care. We have had better success as I3 POP has gone on, but the implication is obvious: it will be impossible to implement the triple aim until we have actionable data at the individual, practice, and population levels.

Another important finding from our baseline data collection is the striking variance among different health care systems’ use of identical software products. Both during the build process and during the adoption process, health care systems make different decisions about their use of a product. Despite “standard data specifications,” data can be housed in different places in the EHR and/or analyzed differently, which raises the question, in both our regional and national population data, of whether the data captured are truly complete and comparable. Moreover, our data demonstrate that the ability to measure and report quality metrics is fraught with difficulty during an EHR transition. These EHR transitions are a notable trend nationally,18 and organizations considering or planning for EHR transitions need to develop contingency plans to ensure both continuous capacity for data reporting and consistent performance on measured quality of care.

What do we know about the baseline quality, cost-effectiveness, and patient experience of care delivered to the over 300,000 patients in the I3 collaborative? How do residencies compare with private practice? In family medicine, there has long been an informal assumption that the care delivered in Family Medicine Centers is poorer than that in robust private practices. Our data suggest, however, that there are no significant differences. A key question is, compared to what? For quality of care, there are known national benchmarks, and we chose representative chronic disease and clinical preventive services. What is most striking is the variation across residencies and measures; beyond this, residency care for chronic disease seems relatively better than that for preventive services. This is consistent with the prior experience of many of these residencies with chronic disease.4 Further study is needed to learn whether this is because the reported data for the prevention measures are too disparate to provide an adequate comparison or because of the greater difficulty of developing preventive services office systems, which require consensus across a large spectrum of patients. Also impressive is the negative association of EHR transition with both quality of care (A1C control) and prevention (mammography rates); EHRs are the central nervous system of modern practice, and brain transplants remain challenging!

Patient experience is difficult to compare across residencies, given the variation in measures used by health care systems. Key drivers of patient experience are access and continuity. Unsurprisingly, given our residency settings, UPC is somewhat lower than published reports from private practice settings, but reported time to third-available appointment (a measure of access) is better than published reports. Key drivers of cost, such as hospitalization and ED utilization rates, also show striking variation but are significantly lower than in private practice, while the referral rate is higher. Ideally, we should address the appropriateness of ED visits and hospitalizations, and benchmarks should be updated regularly, given the significant secular trends in this area. More broadly, these data underscore the need for benchmarks and guidelines across the triple aim and across regions.

What do these data mean for the future development of primary care residency training? All ACGME residencies are now implementing milestones,19 and family medicine has new standards, including attention to the care of populations.20-22 What seems to be underemphasized in the new residency review processes, however, is attention to the actual measured quality of care, patient experience, and cost of care delivered in the residencies. We believe that the curriculum is the practice, and therefore it is important to examine the care given in residencies as part of the regular review of residencies. Do we really want our residents to learn the habits of a practice in which 25% of patients have poor A1C control or where the continuity rate is 25%? In conclusion, as the health care landscape continues to change dramatically, it is important to transform residency practices and the education that takes place in these settings. Overall, at baseline, the care given by residencies is comparable to national benchmarks, but there are major challenges in obtaining actionable triple aim data for the populations cared for by residencies. For residencies that have been able to obtain data, there is huge variation across residencies. Attention to the data and faculty leadership are important early success factors.

Corresponding Author: Address correspondence to Dr Donahue, University of North Carolina, Department of Family Medicine, 590 Manning Drive, Chapel Hill, NC 27599. 919-966-5090. Fax: 919-966-6125.

Acknowledgments: Financial support for the I3 collaborative was provided by grants from the Duke Endowment, the Fullerton Foundation, and the North Carolina Area Health Education Centers Program.

Oral presentation was made at the 2013 North American Primary Care Research Group Annual Meeting, Ottawa, Ontario.

Special thanks to Sam Weir, MD, Cristen Page, MD, MPH, and Mark Gwynne, MD, for their thoughtful reviews of the drafts.



  1. Rosenthal TC. The medical home: growing evidence to support a new approach to primary care. J Am Board Fam Med 2008;Sep-Oct;21(5):427-40.
  2. Luft HS. From small area variations to accountable care organizations: how health services research can inform policy. Annu Rev Public Health 2012 Apr;33:377-92.
  3. Removing obstacles to quality improvement for academic health centers. Grant results reports. Robert Wood Johnson Foundation.
  4. Newton W, Baxley E, Reid A, Stanek M, Robinson M, Weir S. Improving chronic illness care in teaching practices: learnings from the I(3) collaborative. Fam Med 2011 Jul-Aug;43(7):495-502.
  5. Reid A, Baxley E, Stanek M, Newton W. Practice transformation in teaching settings: lessons from the I3 PCMH Collaborative. Fam Med 2011;43(7):487-94.
  6. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood) 2008 May-Jun;27(3):759-69.
  7. Institute for Healthcare Improvement. The breakthrough series: IHI’s collaborative model for achieving breakthrough improvement. Cambridge, Mass: Institute for Healthcare Improvement, 2003.
  8. NCQA State of Healthcare Quality 2013. Accessed December 2013.
  9. Stafford RS, Monti V, Ma J. Underutilization of aspirin persists in US ambulatory care for the secondary and primary prevention of cardiovascular disease. PLoS Med 2005 Dec;2(12):e353.
  10. Jamal A, Malarcher AM, Shaw L, Engstrom MC. Tobacco use screening and counseling during physician office visits among adults— National Ambulatory Medical Care Survey and National Health Interview Survey, 2005–2009. Accessed October 13, 2013.
  11. Centers for Disease Control and Prevention. Estimated vaccination coverage with individual vaccines and selected vaccinations before 24 months of age by state and local area, US, National Immunization Survey, Q1/2012–Q4/2012. Accessed October 2013.
  12. Centers for Disease Control and Prevention. National Hospital Discharge Survey, 2010. Accessed October 2013.
  13. Centers for Disease Control and Prevention. National Hospital Ambulatory Medical Care Survey: 2010 Emergency Department summary tables. 2010. Accessed September 2013.
  14. Dartmouth Atlas Project. After hospitalization: a Dartmouth Atlas Report for Medicare Beneficiaries. Accessed October 2013.
  15. Fisher LW. Comparison of specialty referral patterns of primary care providers. J Healthc Manag 2002 May-Jun;47(3):197-204;discussion 205.
  16. Blachar A, Tal S, Mandel A, et al. Preauthorization of CT and MRI examinations: assessment of a managed care preauthorization program based on the ACR Appropriateness Criteria and the Royal College of Radiology guidelines. J Am Coll Radiol 2006 Nov;3(11):851-9.
  17. Agency for Healthcare Research and Quality. National Healthcare Quality Report/National Healthcare Disparities Report (NHQR/NHDR) quality measures. December 2013.
  18. Jamoom E, Beatty P, Bercovitz A, Woodwell D, Palso K, Rechtsteiner MS. Physician adoption of electronic health record systems: United States, 2011. NCHS data brief, no 98. Hyattsville, MD: National Center for Health Statistics, 2012.
  19. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med 2012 Mar 15;366(11):1051-6.
  20. Program Requirements for GME in Family Medicine, effective 7/1/14. Accessed March 9, 2014.
  21. Combes JR, Arespacochaga E. Physician competencies for a 21st century health care system. J Grad Med Educ 2012 Sep;4(3):401-5.
  22. Bodenheimer T, Chen E, Bennett HD. Confronting the growing burden of chronic disease: can the US health care workforce do the job? Health Aff (Millwood) 2009 Jan-Feb;28(1):64-74.
  23. Jacobsen D, Sevin C. Improved care for patients with congestive heart failure. Jt Comm J Qual Patient Saf 2008 Jan;34(1):13-9.
  24. Lee DS, Stukel TA, Austin PC, et al. Improved outcomes with early collaborative care of ambulatory heart failure patients discharged from the emergency department. Circulation 2010 Nov 2;122(18):1806-14.
  25. Nguyen DL, Dejesus RS, Wieland ML. Missed appointments in resident continuity clinic: patient characteristics and health care outcomes. J Grad Med Educ 2011 Sep;3(3):350-5.
  26. Phan K, Brown SR. Decreased continuity in a residency clinic: a consequence of open access scheduling. Fam Med 2009 Jan;41(1):46-50.
  27. Merritt Hawkins & Associates. 2009 Survey of Physician Appointment Wait Times. Accessed December 2013.

From the Department of Family Medicine, University of North Carolina (Drs Donahue and Newton, Mr Reid and Ms Lefebvre); Cecil G. Sheps Center for Health Services Research, Chapel Hill, NC (Dr Donahue); North Carolina Area Health Education Centers (Ms Lefebvre); and the University of South Carolina (Ms Stanek).


Copyright 2017 by Society of Teachers of Family Medicine