E-Briefings – Volume 16, No. 4, July 2019

Welcome to The Governance Institute’s E-Briefings!


This newsletter is designed to inform you about new research and expert opinions in the area of hospital and health system governance, as well as to update you on services and events at The Governance Institute.



HROs in Healthcare: More Mindset Than Standardized Processes


There is no way to “error proof” the human beings delivering healthcare, and the “name, blame, and train” approach only makes employees less willing to report mistakes. This article explains how hospitals and health systems can become high-reliability organizations by creating systems where healthcare professionals become wired to detect and address vulnerabilities and error-provoking conditions before they lead to patient harm.


By Anna Grome, Principal Consultant, TiER1 Performance Solutions, and Richard Corder, Managing Director, TiER1 Healthcare


Key Board Takeaways


Sample questions to ask fellow board members and management include:

  • Are we on a journey toward being highly reliable?

  • Do we have a blueprint or a roadmap for this journey?

  • Have we committed to a future state of “zero preventable harm”?

  • How many cases of avoidable harm did we have here last month?

  • When did we last conduct a survey of our staff to learn about whether they feel supported in their work?

Mistakes. We all make them—punctuation errors in emails, missed exits, pulling door handles designed to be pushed. Basic fallibility is a symptom of an incurable condition: humanity.


Thankfully, most of our daily mistakes are inconsequential. But healthcare workers live with the reality that their mistakes can sometimes have serious consequences, including permanent harm to another person and even death. Studies estimate that medical errors may account for as many as 250,000 deaths in the United States annually, making them the nation’s third-leading cause of death, a toll that exceeds that of other developed nations.1


If we want healthcare to be safer and more effective, we must come face to face with our basic human nature. There is no way to “error proof” the human beings delivering care, and the “name, blame, and train” approach only makes employees less willing to report mistakes. The answer lies in recognizing human error as a factor and creating an environment (physical and psychological) dedicated to preventing those errors from becoming harm events. By creating a system where healthcare professionals become wired to detect and address vulnerabilities and error-provoking conditions before they lead to patient harm, we can become high-reliability organizations (HROs).


What Is an HRO?


An HRO is an organization that operates in a complex, high-risk environment without serious accidents or failures. Think of the flight deck of an aircraft carrier, the cockpit of a jumbo jet, or a nuclear submarine as robust analogies of environments with constantly changing conditions, hierarchical organizational structures, and personnel all consistently prioritizing safety as the number one goal. Sound familiar? Given the complexity and high-risk nature of healthcare, it is not surprising that many healthcare organizations have begun looking to the disciplines of HROs to apply to their own environments.


Is Your Organization an HRO?


Be on the lookout for mindsets and behaviors such as:

Preoccupation with Failure

  • Explore:

    • Does the hospital’s event reporting system track “near-misses”?

    • Are leaders actively encouraging (and rewarding) open, candid conversations about vulnerabilities and conditions that may cause harm?

    • Are investments being made in training and leadership development to shape the mindsets and build the capabilities needed to become an HRO?

    • Are practices—such as safety huddles—in place that enable staff to express concerns and make adjustments to prevent harm?

    • Are leaders modeling transparency by sharing error and “near-miss” data openly?

    • Do existing metrics or productivity goals compete with (or are they perceived as competing with) safety? Do they promote behaviors that undermine openness, problem detection, and safety?

  • Ask for employee stories that exemplify someone speaking up and stopping a potentially unsafe event or procedure.

  • Be on the lookout for discussions in which mistakes or errors are explained away, brushed over, or dismissed as inconsequential or “the way we do business around here.”

Reluctance to Simplify

  • Guard against over-simplification by using the rule of “Five Whys” when trying to understand why an event occurred. Seek multiple perspectives from differing roles when trying to understand the sources of a problem.

  • Ask probing questions to understand the impact of various factors on performance (e.g., work process, physical environment, team dynamics, tools and equipment, and incentives).

Sensitivity to Operations

  • Seek opportunities to experience operations at the front line of care.

  • Partner a board member, an executive, and a frontline caregiver for “30-60-30 rounds”—over time, commit to covering all shifts and all days:

    • 30 minutes of leader-led preparation for a departmental “walkabout”

    • 60 minutes in the department/area/huddle/environment—observing and inquiring

    • 30 minutes in a debrief to share observations, ask questions, and provide feedback

  • Invite and actively listen to input from frontline staff (e.g., Are you concerned about anything that could pose harm to patients? What ideas do you have for improving operations?).

Deference to Expertise

  • Adopt a mindset of “crowd over core” for sourcing solutions and ideas for improving operations. A larger number of employees in daily contact with the work are typically better positioned to solve and create than a small number of leaders more removed from the front line of care.

  • Listen for different voices over the usual suspects.

  • Ask whether the customers being served (internal = employees, external = patients and families) have been involved in problem solving and decision making.

Commitment to Resilience

  • When serious harm events occur, remember that staff will also be in need of support, the opportunity to debrief, and the time to begin the healing process.

  • Keep conversations patient-centered rather than ego-centered.

It is tempting to think that straightforward “standardization” of healthcare processes and practices will achieve high reliability. But the teams on aircraft carriers and nuclear submarines will tell you that the principles of high reliability go far beyond simply standardizing processes. High reliability is a mindset. There are people involved. Think of it as a condition of persistent mindfulness within an organization. HROs build and nurture resilience by “relentlessly prioritizing safety over all other performance pressures.”2


Characteristics of HROs


HROs use systems thinking to evaluate and design for safety and are keenly aware that safety is an ever-evolving and dynamic property. While disastrous outcomes may appear similar, the details, people involved, and other nuances render no two accidents or incidents exactly the same. Highly reliable workplaces create an environment in which potential problems, or harm events, are detected early, responded to, and resolved before they become catastrophic events.


The HRO mindset is supported by five characteristics:3

1. Preoccupation with failure
2. Reluctance to simplify
3. Sensitivity to operations (situational awareness)
4. Deference to frontline expertise
5. Commitment to resilience

Characteristics of High Reliability in Healthcare


1. Preoccupation with Failure


It is estimated that less than 14 percent of medical errors are reported today.4 For an organization aiming for high reliability, this is an exciting opportunity for improvement. In HROs, employees at all levels of the organization are encouraged (and rewarded) to think proactively about how to break down work processes, anticipate lapses, and take corrective action. Employees are empowered to report mistakes and respectfully share their concerns about how things might go wrong without personal risk or fear of reprisal. Leaders model those behaviors to create an environment that destigmatizes failure, eschews complacency and self-protection, and promotes transparency, learning, and improvement. “Near-misses” are celebrated as medical errors that were caught before they harmed a patient and as opportunities to learn and improve the system.


2. Reluctance to Simplify


Highly reliable organizations resist the urge to make generalized excuses for bad outcomes or reach broad-brushed conclusions when things fail. They appreciate the complexity and interdependence of the system and work hard to understand the real reasons why a process failed—believing that the apparent cause on the surface usually hides a more nuanced and complicated reality. They adopt a systems perspective and take the time to dig and ask questions so that they can understand and address the (often multiple) factors that contributed to the problem.


3. Sensitivity to Operations


Every employee in a highly reliable organization is in tune with the environment where the work is happening. In healthcare, this means at the point of care or service, such as at the bedside, in the operating room, when pulling supplies from a shelf, or when delivering a food tray to a family member. This sensitivity can only be gained by going to the source—observing operations first-hand and asking frontline staff questions that invite reflection and candor. This focus on the details of the operation is a reminder not to assume that something is working as planned, designed, or expected. It also better informs decisions about improvement efforts.


4. Deference to Expertise


Leaders and supervisors in highly reliable organizations actively solicit input from, listen to, and respect the people who have the most knowledge about the work, regardless of seniority or place in the hierarchy. These are often referred to as the people “closest to the work.”


Organizations that are exemplars understand the importance of this principle as a way to resist the all-too-common phenomenon of “work as imagined vs. work as done” that impedes improvement and safety. Leaders actively downplay hierarchy, create a climate of psychological safety, and work to encourage and reward employee input and ideas.


5. Commitment to Resilience


Behind this attribute is a leadership commitment to recover swiftly when the unexpected does occur. Employees are explicitly trained and supported on how to manage the very real emotional and physical realities of both “near-misses” and the difficult outcomes that healthcare employees witness every day. Of all the components of a highly reliable environment, this is arguably the one that requires immediate, sensitive, and compassionate leadership and support.


Creating an HRO is so much more than simply standardizing practice—it’s a mindset that persists throughout all roles and levels of an organization. It’s a journey that requires strong and committed leadership who understand that organizations are people. And if people are fallible by nature, then so are organizations. Designing systems with this understanding at the core is the key to high reliability.


The Governance Institute thanks Anna Grome, Principal Consultant, TiER1 Performance Solutions, and Richard Corder, Managing Director, TiER1 Healthcare, for contributing this article. They can be reached at a.grome@tier1performance.com and r.corder@tier1performance.com.



1Martin A. Makary and Michael Daniel, “Medical Error—The Third Leading Cause of Death in the U.S.,” BMJ, May 3, 2016.

2Agency for Healthcare Research & Quality, “Patient Safety Primer: High Reliability,” PSNET, January 2019.

3Karl E. Weick and Kathleen M. Sutcliffe, “Managing the Unexpected: Resilient Performance in an Age of Uncertainty,” July 3, 2001.

4U.S. Department of Health & Human Services, “Hospital Incident Reporting Systems Do Not Capture Most Patient Harm,” January 2012.


Managing the Disruption in Access to Care


As we prepare to weather a new round of political debate and shifting alliances among the largest players in the healthcare industry, boards and management teams must continue to move forward on their transformation journey. This article highlights why carefully constructed compensation plans for the executive leaders enacting this change can provide a useful and effective tool for boards.


By Steve Sullivan, Managing Director, Pearl Meyer


Key Board Takeaways


  • What are two real challenges to healthcare delivery in the next several years that could impact executives’ roles and responsibilities and their compensation?

  • Why will compensation committees need to monitor “access to care” as a key criterion for organizational success and pay-for-performance?

  • How might management’s pursuit of new revenue streams require compensation committees to rethink their overall approach to executive compensation?

    The healthcare industry has had no shortage of upheaval and sizeable business challenges. As we prepare to weather a new round of political debate and shifting alliances among the largest players, boards and management teams of healthcare organizations must continue to move forward on their transformation journey. They will need to carefully consider all possible avenues for financial stability and a competitive edge, even as significant disruption is taking shape.


    There are several current and pressing areas for healthcare organizations where compensation may play a significant role. Two of the biggest changes happening are the onslaught of new industry partnerships that are changing how consumers access non-urgent or minor emergency care and the search for new revenue sources.


    The Impact of Disruptive Industry Partnerships


    Many different types of companies are now entering the healthcare marketplace and disrupting medicine’s traditional delivery channels by focusing directly on healthcare consumers. Some recent examples include:


    • CVS acquired Aetna to “remove barriers to high-quality care.”

    • Walgreens and Microsoft are partnering to provide immediate connectivity between consumers, providers, pharmaceutical manufacturers, and payers.

    • Anthem is partnering with Walmart to deliver a new and convenient Medicaid program.

    • Humana and Walmart have established the Humana Walmart Prescription Drug Plan.

    • Amazon, Berkshire Hathaway, and JPMorgan Chase are partnering to “enable improved quality and overall transparency and speed within the healthcare sector.”

    These collaborations between retail, finance, technology, and insurance companies constitute true disruptive innovation and will upend the way individuals and families encounter immediate and primary care, as well as obtain and pay for prescriptions. Patients’ waiting time to access medical care is a prime example. “Access to care” measurements continue to be critically important to providers for care quality and reimbursement and often appear in a healthcare executive’s incentive compensation program. Traditionally those leaders able to drive down the number of days or weeks between a patient requesting an appointment and seeing a physician were rewarded with some portion of an incentive award. These new disruptive partnerships are already changing the conversation from days to hours and minutes. More traditional providers are likely to be perceived by consumers as “too little, too late.”


    Healthcare compensation committees should be aware, however, that accelerating service or utilizing technology alone will not constitute the type of innovation needed to survive and succeed against these newer for-profit retailers, who are experienced in giving the customer what they want while turning a profit. As digital connectedness enables real-time video and audio interactions between medical personnel and patients in disparate locations, traditional benchmarks measuring gradual improvements like “third next available appointment” will fall away. True disruptive innovation only succeeds when it replaces high cost and complexity with simplicity, convenience, accessibility, and affordability—and achieves effective outcomes.


    Changing Revenue Sources


    Further, the revenue associated with traditional acute care services will continue to decline faster than non-profit healthcare providers can reduce expenses, mostly due to reimbursement reductions, the tremendous shift to outpatient care, and the expansion of ambulatory care competitors. Revenue growth attributable to outpatient care has exceeded inpatient revenue growth for the fifth straight year in the U.S. Hospitals and health systems that focus solely on expense reduction will disappear.


    Many healthcare organizations will continue to repurpose and sell off existing assets (like inpatient facilities) and monetize peripheral businesses (food services, transportation, parking, etc.). Others continue to establish or partner with physician-led ambulatory surgery centers and enroll patients in risk-managed care plans.


    More recently, traditional provider organizations have begun actively seeking completely new sources of revenue. Some have created venture capital branches to identify opportunities for non-traditional collaborations. Others have invested in outside telemedicine businesses so they may fully automate their patients’ access to care experience. Key in all of these initiatives is the need to drive market share, convert prospective patients into repeat customers, and heed the customers’ voice in order to improve the patient experience.


    The ongoing proliferation of non-traditional money-making strategies will change the composition of the leadership team and the roles and responsibilities of those leaders. Forward-thinking boards and compensation committees will work with their CEOs to understand anticipated changes to their business and will research the executive talent and compensation characteristics of targeted areas of operation. Many of the leaders who will migrate into the C-suite from for-profit businesses will be familiar with executive compensation programs featuring larger pay-at-risk components than those associated with traditional non-profit healthcare. They will likely have participated in annual and long-term incentive arrangements, often directed through an executive employment agreement.


    Compensation committees will be challenged to develop compensation arrangements that are equally compelling and fair to all participants. Performance incentive plans for diverse management teams deploying their expertise into far-flung businesses will need to feature high-level shared goals in which all have a vested interest, not unlike those found in public companies. Challenging too will be the task of identifying competitive levels of total direct compensation for hybrid and non-traditional executive roles. These compensation arrangements will also need to shift to align with and track an organization’s ability to accelerate simultaneous improvements across clinical outcomes, length of service cycle, patient experience, consumer cost, financial efficiency, and operational simplicity.


    As we have learned in the past decade or so, there are no simple answers to healthcare reform. While the scope of the challenges facing the healthcare industry and its leaders is vast, there are many advancements taking place and new models are becoming the norm. And while they are not a magic wand, carefully constructed compensation plans for the executive leaders enacting this change can provide a useful and effective tool for boards.


    The Governance Institute thanks Steve Sullivan, Managing Director in the Houston office of Pearl Meyer, for contributing this article. He can be reached at steven.sullivan@pearlmeyer.com.



    The AI-Enhanced Patient Experience: What Boards Should Know


    Not long ago, the prospect of artificial intelligence making a meaningful contribution to healthcare seemed almost fanciful. Today it seems inevitable, and medicine has already felt the impact of several AI implementations. This article discusses new and emerging AI technologies that shape patient experience, and the constraints boards will need to navigate to ensure an effective deployment.


    By Steve Jackson, President, NRC Health


    Key Board Takeaways

    • Consider where your organization falls on the AI adoption curve. AI’s advantages are both significant and cumulative. Has your organization deployed an AI-augmented experience system yet? If not, what are the major points of resistance?

    • Test for data security. For any possible AI initiative, seek an audit from internal IT staff. Ask about data-handling practices. Ensure that the vendor observes the strictest possible standards for PHI and sensitive internal data.

    • Test for clarity. Ask any AI vendor to explain, in plain English, how their algorithms operate. It may take some time, but press on to ensure you understand how it works. If you can’t understand it, neither will the hospital’s staff. And if they can’t understand it, they won’t trust it.

    Not long ago, the prospect of artificial intelligence (AI) making a meaningful contribution to healthcare seemed almost fanciful. Today, it seems inevitable.


    Medicine has already felt the impact of several AI implementations. Researchers use AI to select candidates for clinical trials.1 Algorithms routinely outperform human radiologists in spotting certain cancers.2 Academics at Oxford and Yale predict that by the year 2053, all surgical work could be conducted by AI-powered machines.3


    These technologies all affect the practice of medicine. Clinical work will never be the same. What, then, might advances in AI mean for the business of healthcare? How will it affect the patient experience? For hospital boards, this is an essential question—and one they should consider sooner rather than later. Like any important innovation, experience-centered AI will disproportionately reward early adopters. That does not mean, however, that hospitals and health systems should rush to deploy an AI solution. For any AI implementation to be successful, leadership must understand AI’s limits, as well as its promise.


    This article will discuss 1) new and emerging AI technologies that shape patient experience, and 2) the constraints boards will need to navigate to ensure an effective deployment.


    Poised for an AI Revolution


    The moment is ripe for AI to augment how hospitals and health systems serve their customers.4 Three specific technologies—one in use today, and two others in early phases of development—represent a landmark shift in AI’s healthcare capacities.


    Natural Language Processing


    Verbal comments from patients—on feedback surveys and on social media—can easily number into the hundreds of thousands per year. No human workforce, however large, could possibly parse them all. Natural Language Processing (NLP) makes this verbal information legible.5


    NLP is an AI process that algorithmically “reads” patients’ comments, en masse and instantly. It then classifies comments according to subject matter and the sentiments that they express. The end result is a robust, at-a-glance understanding of how patients feel about their care experiences. It’s an invaluable tool for strategic planning.
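
    To make this concrete, the toy Python sketch below mimics, on a tiny scale, what an NLP engine does with patient comments: tag each one with a topic and a sentiment, then roll the results up into counts. The category names and keyword lists are invented for illustration only; a real NLP product uses trained language models rather than hand-built word lists.

        from collections import Counter

        # Hypothetical topic keywords, invented for this example; a production
        # NLP engine learns categories from data rather than hand-built lists.
        TOPICS = {
            "wait times": {"wait", "waiting", "delay", "hours"},
            "staff courtesy": {"rude", "kind", "friendly", "dismissive"},
        }
        POSITIVE = {"kind", "friendly", "great", "excellent", "helpful"}
        NEGATIVE = {"rude", "delay", "dismissive", "confusing", "worst"}

        def tag_comment(comment):
            """Assign coarse topics and a sentiment label to one patient comment."""
            words = set(comment.lower().split())
            topics = [name for name, keys in TOPICS.items() if words & keys]
            score = len(words & POSITIVE) - len(words & NEGATIVE)
            sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
            return topics, sentiment

        comments = [
            "The nurse was kind and helpful",
            "Two hours of waiting and the desk staff were dismissive",
        ]
        summary = Counter()
        for comment in comments:
            topics, sentiment = tag_comment(comment)
            for topic in topics or ["other"]:
                summary[(topic, sentiment)] += 1
        print(summary)  # counts of comments by (topic, sentiment)

    The value to a board is not the mechanics but the output: an aggregate, quantified picture of what patients are saying and how they feel about it.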


    Predictive Analytics


    This nascent technology uses historical data to make predictions about the future. Healthcare organizations have only recently begun to embrace it, but it could have a profound effect on the industry.


    A well-configured predictive-analytics engine generates insight on patient preferences from two datasets: patient health information (PHI) and observable patient behaviors. This puts hospitals and health systems in a proactive position. They will be able to anticipate patient needs and desires, instead of reacting to them—giving healthcare organizations time to prepare.


    For instance, predictive analytics could help hospitals and health systems manage population health. Algorithms can use social-determinant data to more effectively stratify health risks among specific patient populations, and thereby inform leaders where to focus resources.6 Predictive analytics may also enable healthcare organizations to explore alternative payment models, including those that embrace downside risk.7 Such models are untenable when organizations can’t forecast their future expenses. With predictive analytics, however, hospitals and health systems will be able to discern their future spending with increasing accuracy, as algorithms absorb more case data. They can then confidently proceed in negotiations with payers, certain of the cost-burden they will be expected to share.
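
    As a simplified illustration of the risk-stratification use case described above, the sketch below trains a basic logistic regression (using the open-source scikit-learn library) on invented synthetic records and then scores new patients so that the highest-risk tier can be flagged for outreach. The features, outcome, and thresholds are all assumptions made for the example; production engines draw on far richer clinical and social-determinant data and more sophisticated models.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Synthetic stand-in for historical cases (all values invented):
        # columns = [age, admissions in the prior year, lives alone (0/1)]
        rng = np.random.default_rng(0)
        X = np.column_stack([
            rng.integers(20, 90, 500),   # age
            rng.poisson(0.8, 500),       # prior admissions
            rng.integers(0, 2, 500),     # lives alone
        ])
        # Invented outcome to predict: readmitted within 30 days (True/False)
        y = (0.02 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2]
             + rng.normal(0, 1, 500)) > 2.5

        model = LogisticRegression(max_iter=1000).fit(X, y)

        # Score a new patient panel and flag the highest-risk patients for outreach
        new_patients = np.array([[82, 3, 1], [35, 0, 0]])
        risk = model.predict_proba(new_patients)[:, 1]
        for row, p in zip(new_patients, risk):
            tier = "high" if p > 0.5 else "low"
            print(f"age={row[0]}, prior admissions={row[1]}, lives alone={row[2]}"
                  f" -> readmission risk {p:.0%} ({tier})")

    The same pattern, trained on richer historical case data, is what would let an organization forecast spending and enter risk-based contracts with more confidence.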


    Personalized Engagement


    Personalized engagement engines take AI’s capabilities one step further. They don’t just predict consumer needs and behaviors—they help to direct them.


    Think of the recommendation systems you might find on Amazon or Netflix. These absorb trillions of datapoints about their customers, and then algorithmically produce suggestions for what they might want to buy or watch. This kind of technology is not yet available for use in guiding patients through their healthcare experiences (although so-called “cognitive aides” already assist with clinical care8). Once this innovation matures, however, hospitals and health systems will be able to find new ways to maximize satisfaction, manage volumes, and pre-empt emergent clinical risks. For now, it’s science fiction. But it’s only a matter of time before it becomes fact.
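
    The basic mechanics behind such engines can be shown with a very small sketch. The example below uses an invented service list and usage matrix, together with simple item-to-item co-occurrence (one building block behind retail recommenders), to suggest a service a consumer has not yet used. Real systems ingest far more data and use far more sophisticated models.

        import numpy as np

        # Invented example: rows = consumers, columns = services used (1 = used)
        services = ["urgent care", "telehealth visit", "flu shot", "sleep clinic"]
        usage = np.array([
            [1, 1, 1, 0],
            [1, 1, 0, 0],
            [0, 1, 1, 1],
            [1, 0, 1, 1],
        ], dtype=float)

        def recommend(user_row, k=1):
            """Suggest unused services that co-occur with the ones this user has used."""
            co = usage.T @ usage            # how often pairs of services are used together
            scores = co @ user_row          # weight each service by its overlap with the user
            scores[user_row > 0] = -np.inf  # never re-recommend something already used
            top = np.argsort(scores)[::-1][:k]
            return [services[i] for i in top]

        # A consumer who has used urgent care and a telehealth visit
        print(recommend(np.array([1.0, 1.0, 0.0, 0.0])))  # -> ['flu shot']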


    AI’s Complications


    For hospitals and health systems, AI’s value is indisputable. However, AI solutions don’t operate in a vacuum. Their utility wholly depends on the human beings who develop and use them. To function within an organization, an AI solution must have the following.


    1. Expert Data Management


    In manufacturing, the quality of raw materials must be carefully vetted, or else the final product will be compromised. The same holds true for AI—except that in this case, the “raw materials” are data.


    Data must be labelled and sorted by human analysts before an AI process can use it. It’s these analysts’ work that determines the quality of data, and data quality ultimately determines the value of an AI process.9 In healthcare, these analysts enter a demanding arena. Dueling classification systems (e.g., ICD-9 versus ICD-10), incompatible EHR systems, and rigid privacy constraints all make healthcare data uniquely prone to mislabeling and compromise. Board members should be careful to audit any vendor’s data-management practices before allowing them access.


    2. Bias Controls


    Machines may power AI algorithms, but humans design them. As such, any AI process inevitably reflects the viewpoint of its authors. This calls for a special conscientiousness from AI suppliers. They must avoid letting their biases affect the systems they build.


    When AI products operate in hospitals and health systems, they work with the most deeply personal data imaginable—medical records. Even further, predictive analytics processes frequently use patient demographic and socioeconomic information (like race, income, or ZIP code) to perform their analyses. Access to such sensitive datapoints could easily lead to harmful outcomes if biases aren’t carefully controlled. AI vendors should demonstrate—and document—their controls before serving any organization.


    3. A Transparent, Legible Methodology


    An algorithm, no matter how clever, is of no use to a hospital or health system if the organization’s staff doesn’t trust it. Designers of AI products must therefore spare a thought for clinicians.


    At the root of this issue is AI’s “black box problem.”10 Because AI products are so complex, few people are equipped to assess their conclusions. The inner workings of AI are opaque to anyone without a background in computer science. This gives rise to credibility questions. Physicians—or any other professionals, for that matter—are unlikely to accept clinical input from an algorithm they can’t understand. To respect the autonomy of clinical staff, then, and to earn their trust, AI processes must offer a fully explicable methodology. Without that transparency and trust, an AI initiative is unlikely to succeed.


    AI’s Very Human Future


    Powerful as they may be, AI processes are no substitute for humanity in healthcare. They may offer direction, but it will always be providers who deliver the care. In AI, today’s hospitals and health systems have a once-in-a-generation opportunity to shape their organizations. It will fall largely to board members’ responsible assessment of AI solutions to ensure that this shaping is for the better.


    The Governance Institute thanks Steve Jackson, President of NRC Health, for contributing this article. He can be reached at sjackson@nrchealth.com.



    1Maxine Bookbinder, “The Intelligent Trial: AI Comes to Clinical Trials,” Clinical Research News, September 2019.

    2Jeff Zagoudis, “Artificial Intelligence Improves Lung Cancer Detection,” Imaging Technology News, April 18, 2018.

    3Katja Grace et al., “When Will AI Exceed Human Performance? Evidence from AI Experts,” ArXiv, May 24, 2017.

    4Jacques Bughin et al., “How Artificial Intelligence Can Deliver Real Value to Companies,” McKinsey Global Institute, June 2017.

    5NRC Health, “Authentic Voices: What Natural Language Processing Reveals about Your Patients,” Becker’s Hospital Review, May 2019.

    6Jennifer Bresnick, “10 High-Value Use Cases for Predictive Analytics in Healthcare,” HealthITAnalytics, September 4, 2018.

    7Michael E. Chernew and Austin B. Frakt, “The Case for Downside Risk (or Not),” Health Affairs, October 16, 2018.

    8Bronwyn Middleton, Dean F. Sittig, and Adam Wright, “Clinical Decision Support: A 25-Year Retrospective and a 25-Year Vision,” Yearbook of Medical Informatics, May 20, 2016.

    9Trevor Strome, “Productive Healthcare Analytics Results from High-Quality Data,” SearchHealthIT, September 2017.

    10Dave Gershgorn, “If AI Is Going to Be the World’s Doctor, It Needs Better Textbooks,” Quartz, September 6, 2018.

