As the Covid-19 pandemic continues to challenge our world, technology is playing a central role in accelerating new discovery, innovation and efficiencies across the healthcare industry. Artificial intelligence (AI), the newest general-purpose technology, is expected to usher in this sea change in health tech.

However, even as AI advancements are celebrated and new techniques adopted, concerns are emerging. Significant questions are arising around accuracy, transparency and inclusivity, among other issues. A recent KPMG study highlights that business leaders see AI adoption moving too fast and overwhelmingly believe the technology requires safeguards, such as ethics standards and regulatory controls.

Further, a recent investigation of Epic Systems, the United States’ largest health records vendor, revealed errors in its AI technology’s detection of patient illnesses. And echoing broader concerns about the technology, experts are asking how AI works and who reviews the decisions made by algorithms. In addition, a PwC survey reveals that data privacy remains a key trust concern among business leaders (70 percent), employees (62 percent) and consumers (62 percent) alike. There are also calls for greater diversity in the datasets used to train algorithms, as well as in the teams that develop the technology.

And while policymakers recognize the many benefits of AI, they also understand the potential societal risks, which is why U.S. policymakers have established the following:

  • The White House’s National AI Initiative Office, which coordinates research and policymaking across government and includes a task force that is responsible for strategizing how to enhance access to AI tools and resources.
  • The National Institute of Standards and Technology’s (NIST) development of an AI risk framework.
  • AI-related legislation and proposals introduced by at least 17 states.
  • The Food and Drug Administration’s (FDA) issuance of the “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.”

Policymakers in the EU are also making strides in this area. Earlier this year, the European Commission introduced the Artificial Intelligence Act, which sets out a new legal framework for governing AI.

Healthcare a top concern, trust in tech declines

The 2021 Edelman Trust Barometer: Healthcare reveals that improving healthcare is a top societal concern in 26 of the 28 countries measured. Meanwhile, trust has declined across healthcare subsectors: hospitals (-2 pts), biotech (-4 pts), consumer health (-3 pts), health insurance (-3 pts) and pharmaceutical companies (-1 pt). Still, the healthcare sector overall remains trusted, and more so (+7 pts) than business in general.

But trust in technology points in the opposite direction. The 2021 Edelman Trust Barometer: Trust in Technology reveals a dramatic drop in technology trust during the pandemic (-7 pts from May 2020 to January 2021). The U.S. technology sector has seen the greatest decline (-9 pts) of any sector. Further, the pandemic has accelerated concerns about the impact of AI and robots on the human workforce.

Maintaining trust is key for organizations as they navigate public perception and policymaker scrutiny around health tech. And while experts say AI can improve health outcomes and create efficiencies in care delivery, healthcare organizations must deploy AI responsibly.

Responsible AI

To help mitigate and prepare for reputational risk and ensure AI is used responsibly, healthcare organizations developing, deploying and adopting AI should ask the following questions:

  • How will AI impact the workforce and advance shared economic prosperity?
  • How does your organization ensure that AI is fair, secure and explainable across stakeholder groups?
  • How does your organization solve for bias in your AI tools? How diverse is your team that builds and manages the AI systems?
  • What processes are in place for safe, ethical, and effective management of data? Is your organization “earning the right” to patient data and striving for a meaningful value exchange?
  • How does your organization limit the risk of carbon impact of AI? How is your organization using AI to create measurable efficiencies and achieve sustainability goals?

Trust focus drives business value

It is now critical to build and nurture trust as new technologies accelerate across industries. Organizations must show leadership around all aspects of these technologies, from development and deployment to adoption and governance, in order to realize their full business value. Deploying new technologies that behave responsibly and are trustworthy can also be a significant differentiator and competitive advantage.

Kelly Finneran is Senior Account Supervisor in Edelman’s Technology Sector, Lynn Hanessian is Chief Strategist in the Health Sector, Paul Ratzky is Senior Vice President in the Health Tech Sector, Anna Sekaran is Executive Vice President and Head of Technology East, and Hyun Shin is Vice President of Technology Policy.

 


 
