
How America can seize its rare opportunity to nail AI regulation for health care
What does successful regulation for health care-specific AI models look like?
President Biden's executive order on AI was a start. And there's a rare opportunity for Congress to take it a step further.
That opportunity matters especially to AI companies building health care-specific models meant to assist health care professionals. Overly rigid regulations can make development stagnate, as we've seen across a multitude of industries. At the same time, however, regulators can't afford to be lenient for the sake of progress or potential profits.
With such a broad consensus, it would be a massive missed opportunity if Congress didn't seize on this sentiment to build an effective bipartisan framework for AI regulation beyond an executive order. But what does success here look like for health care-specific AI?
Regulation as a catalyst for AI development
Too often, sweeping regulation in a particular industry comes either as a response to a massive crisis, such as the Dodd-Frank reforms after the 2008 financial crisis, or appears to be an unnecessary hurdle to innovation. Successful AI regulation would be the opposite: Rather than stifling innovation, it would guide companies to develop models that are more accurate and reliable—and therefore more powerful and capable of improving human lives.
For health care-specific AI, that bar for accuracy and reliability is especially high.
The main gap here is that large language models don't reason like a doctor. They produce their output by filling in the blank with the most statistically probable word, without any mechanism for understanding the context of those words. The result is an unreliable model that gets things right much of the time but sometimes gets things wrong or hallucinates information outright.
Because doctors and other health care professionals can’t trust such a tool, they don’t even come close to harnessing AI’s true potential in their industries. This is why a firm regulatory backbone is fundamental to catalyzing more sustainable and credible development.
Proactive regulation must include health care leaders
These flaws of generative AI models should be fully understood and taken into account when crafting legislation to regulate them. That's in addition to the massive number of other issues that need to be addressed, such as privacy surrounding the personal user data that goes into training these models, labor market impacts, and the way AI-produced content shakes up longstanding intellectual property law.
Biden's executive order kicked things off in the right direction here. But it should be viewed as just a starting point, one where regulators can take a proactive approach to AI regulation that involves consulting with the industry itself. That includes the AI giants, such as OpenAI and Google. But it must also include the smaller startups in the health care space.
Since AI giants aren't prioritizing professional-level expertise in every field their models could be used in, relying on them to dictate regulation will likely produce a "one-size-fits-all" approach that ignores the nuances of health care-specific AI and unwittingly stalls industry-wide adoption.
In health care, for example, proper regulation can establish a baseline of criteria that health systems can look for when deciding whether an AI tool fits their own specific needs. That will encourage health care-specific AI companies to build models that smartly mitigate risk and foster transparency by ensuring their output is clinically validated and trustworthy.
But if regulators don't consult with AI companies already working in the health care space, there's no guarantee that this baseline will accurately reflect what the industry needs or is even capable of doing. This is why working with smaller, industry-specific companies is so critical: they can offer the ground-level understanding and expertise that large-scale AI companies simply cannot provide. By going in blind, regulators do themselves a disservice, creating rules that miss key knowledge and will inevitably require working backward to close those gaps.
In this scenario, regulators get to play two roles. They still set the rules, but they also help forge industry-spanning business relationships in the shadow of regulatory clarity. By creating this sort of "regulatory safety net" that takes the industry's nuances and requirements into account, physicians and other health care professionals will have an easier time trusting and incorporating AI models into their workflows. Knowing that an external force is keeping AI companies in line, and making sure their output is clinically verifiable and understandable, will only expedite adoption by physicians and health systems.
That requires giving health care AI builders already blazing a trail in the space a seat at the table preemptively, rather than when things go awry. In doing so, these regulatory bodies can catalyze the industry instead of slowing it down.
Michal Tzuchman-Katz, MD, is CEO and Co-Founder at