Legal Issues Raised by Deploying AI in Healthcare

Joseph P. ("Joe") McMenamin

Date: Friday, June 21, 2019
Time: 10:00 AM PDT | 1:00 PM EDT
Duration: 60 Minutes

Product Id: 502602

Price Details

Live: $150
Corporate Live: $290
Recorded: $190
Corporate Recorded: $390

Combo Offers

Live + Recorded: $289 (regularly $340)
Corporate Live + Recorded: $599 (regularly $680)
Overview:

Classically, the law reasons by analogy, and from precedent. The theory is that the law should deal with like situations in like ways.

In some respects, however, Artificial Intelligence, especially the concept of machine learning, is virtually unprecedented, so the law is struggling with how to deal with it, or will be soon. Consider a few of the difficulties that the law will probably need to address:

Who will pay for healthcare services dependent on AI, and who will be entitled to such payments? Will those payments be keyed to "value," the currently orthodox yardstick? If so, by what means will "value" be measured, especially if, as many predict, outcomes may change unforeseeably?

Who will own the massive trove of data AI learns from and bases decisions on, and how will the rights of the owner be protected?

What governmental agencies will have a voice in regulating the use of AI in health care, and how will they rule? How will federalism issues be addressed?

Who will own the AI system's intellectual property, and how will that owner's rights be protected? Can a machine that has learned, as it was programmed to do, and then acted upon its learning, be seen as a creator, or as an inventor? If so, can it hold intellectual property rights in its own creations, and if so, how will those be protected, and for whose benefit? If not, who does hold such rights?

What are the implications of AI on competition law, and will antitrust authorities be implicated?

What happens if a patient is injured, or even killed, while getting AI-influenced or AI-controlled diagnosis or treatment? Will the owner of the AI system face liability in such circumstances? If so, under what theories? Fundamental to product liability claims is the proposition that the allegedly defective product reached the consumer in substantially the same condition as it was in when it left the hands of the manufacturer.

How can we evaluate product liability claims when, as a result of machine learning, the product is no longer in the condition it was in at manufacture, but rather in a condition that no one, including the programmers who created the AI, could have foreseen?

Will health care professionals, or institutions, face liability for unexpected outcomes alleged to have resulted from deployment of AI? If so, under what theories? Is it possible for an AI system, which in theory is based on and improves upon the best care known, ever to breach the standard of care? Will early-adopter doctors be accused of breach because AI is not yet used by "reasonably prudent" colleagues? Will the late adopter be liable because he waited too long to jump on the bandwagon?

What defenses, if any, will be available to defendants?

Could AI aggravate health disparities, or itself be a source of bias, and if so, what, if anything, should or can be done about it?

Can AI be deployed in those jurisdictions that prohibit the corporate practice of medicine? If so, what are the implications for patients in those jurisdictions?

This list is intended to be illustrative, not comprehensive. And it is US-centric. The complexities grow exponentially when one thinks about issues arising when AI is exported across national borders, as it almost certainly will be.

Historically, the genius of the common law has been its ability to adapt to circumstances unseen when it arose. We can be confident it will do so again. It is much harder to be confident in predicting how.

Why should you Attend: The states have regulated the practice of medicine since the earliest days of the Republic. Since at least the enactment of Medicare in 1965, however, the role of the Federal government has grown enormously, and as fast as an aggressive tumor.

The welter of state and federal statutes and regulations thus governing health care in the U.S. today is probably more complex than it is in any other country at any other time in history.

Depending on the circumstances, violations of these authorities, even unwitting ones, can result in sanctions that are severe, or even crippling. Apart from the risks associated with violating numerous statutory and regulatory enactments, healthcare in the United States faces a constant and evolving risk of litigation, under tort, breach of contract, and a variety of other theories.

And all this is just the state of affairs before artificial intelligence came on the scene. With its arrival, we face a bevy of new issues that may well take years to sort out entirely. In the meantime, uncertainty is unavoidable.

No one has the crystal ball needed to foresee all the questions, never mind all the answers. By looking at some of the problems the law will need to address before they arise, or at least before they arise at your organization or for you personally, you can be better prepared to know what to look for, to understand what developments mean, and to take action to reduce your risk and perhaps improve your future.

Areas Covered in the Session:

  • AI's healthcare capabilities
  • AI tech 101
  • Research
  • Practicing medicine
  • Corporate practice of medicine
  • Negligence
  • Product liability
  • Privacy
  • Reimbursement
  • Regulation
  • Intellectual property

Who Will Benefit:
  • Innovation Officers
  • Technical Officers
  • Legal Counsel
  • CFOs
  • Physicians and Other Healthcare Providers
  • Risk Managers
  • Privacy Officers
  • Hospital and Health System Administrators


Speaker Profile
Joseph P. ("Joe") McMenamin is a physician-attorney with McMenamin Law Offices in Richmond, Virginia. His practice concentrates exclusively on the law of health care, with special emphasis on digital health. With respect to the legal issues pertinent to this form of care, he has advised providers, hospital associations, consultancies, private equity firms, insurers, telecoms and several organizations facilitating telemedical services. The decisions of several digital health clients to enter the field of AI stimulated Joe’s immersion in the subject.

Before being admitted to the Bar, Joe practiced emergency medicine at hospitals in Pennsylvania and Georgia on a part-time and full-time basis over a seven-year period overlapping his specialty training in internal medicine and his legal education. He presently serves as general counsel to the Virginia Telemedicine Network and as a member of the Legal Resource Team of CTeL, the Center for Telemedicine and eHealth Law. An associate professor of Legal Medicine at VCU, he is board-certified in Legal Medicine and a Fellow of the College of Legal Medicine.




Contact Us

NetZealous LLC,
161 Mission Falls Lane, Suite 216,
Fremont, CA 94539, USA.

Phone: +1-800-447-9407
Fax: 302-288-6884
Email: support@compliance4All.com