Artificial intelligence is about to make cybersecurity even more complicated

Expect hackers and network managers to employ AI in a ‘cat-and-mouse’ game to steal or protect patient records.

Report cover: “Artificial Intelligence, Cybersecurity and the Health Sector” © U.S. Department of Health and Human Services’ Health Sector Cybersecurity Coordination Center

In the near future, expect hackers and information technology specialists to enlist artificial intelligence (AI) in the ongoing computer battle to steal, or to secure, health care records.

The U.S. Department of Health and Human Services’ Health Sector Cybersecurity Coordination Center (HC3) and Office of Information Security this month offered the threat briefing “Artificial Intelligence, Cybersecurity and the Health Sector.”

Users and systems trained in AI can better detect phishing attempts and malicious programs that are themselves enhanced by AI, the report said.

“Moving forward, expect a cat-and-mouse game,” the report said. “As AI capabilities enhance offensive efforts, they’ll do the same for defense; staying on top of the latest capabilities will be crucial.”
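
Where the report describes AI-trained systems flagging AI-enhanced phishing, the defensive side can be pictured as a simple text classifier scoring inbound messages. The sketch below is a hypothetical illustration in Python using scikit-learn with made-up training examples; it is not the method described in the HC3 briefing, and a real deployment would need far more data and features.

# Minimal sketch of an AI-assisted phishing filter. Assumes a labeled set of
# example emails ("phish" vs. "legit") is available; illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: message text paired with a label.
emails = [
    "Hey, I need your help with a matter that's incredibly urgent. Wire the payment today.",
    "Your account will be suspended unless you verify your password at this link.",
    "Attached are the meeting notes from Tuesday's staff huddle.",
    "Reminder: the flu shot clinic runs Thursday in the main lobby.",
]
labels = ["phish", "phish", "legit", "legit"]

# TF-IDF features plus logistic regression form a basic text classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score an incoming message before it reaches an employee's inbox.
incoming = "Urgent: a partner company is struggling, please process this invoice now."
print(model.predict([incoming])[0])      # predicted label
print(model.predict_proba([incoming]))   # class probabilities

The same cat-and-mouse dynamic applies here: as attackers use AI to write more convincing lures, defenders retrain filters like this one on the newest examples.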

ChatGPT … uh-oh

The bad news: Hackers already are discussing in online forums how AI might help them gain entry to computer networks and trick workers toward their goal of stealing valuable patient data. AI can help hackers design and execute attacks, with better impersonation, faster actions, greater complexity, and more automation, according to HC3.

Available tools include ChatGPT, the AI computer program by OpenAI that has sparked wide public interest since its public debut last year, followed by the GPT-4 program this year. HC3 staff used ChatGPT to create slides used in the report and to design sample phishing email templates.

“Hey [Employee], I need your help with a matter that’s incredibly urgent,” the sample email said.

“It attempts to appeal to the recipient’s loyalty to the company, and their desire to help a partner company that is struggling,” the HC3 report said. Another sample phishing email “attempts to appeal to the recipient’s desire to protect themselves from financial fraud,” the report said. Customized with names, web links, and attachments, the email texts were just about ready to send to unsuspecting health care workers.

Hackers could use ChatGPT to write programs that allow them to collect keystrokes or gain access to computer systems.

Staying secure

At least two resources are in development to help cybersecurity experts in health care and beyond.

The National Institute of Standards and Technology released its “Artificial Intelligence Risk Management Framework” in January and created the Trustworthy & Responsible AI Resource Center in March. The center is intended to assist “in the development and deployment of trustworthy and responsible AI technologies.”

This year, Microsoft and The MITRE Corp. published MITRE ATLAS, or Adversarial Threat Landscape for Artificial-Intelligence Systems, an online database of tactics, techniques, and case studies involving machine learning systems, drawn from corporate and academic sources. It was developed “to raise awareness of these threats and present them in a way familiar to security researchers.”
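
For teams that want to browse such a resource programmatically, a short script could summarize a locally downloaded copy of the ATLAS data. The sketch below is a hypothetical Python illustration: the file name atlas.yaml and the assumption that the data exposes top-level “tactics” and “techniques” lists with “name” fields are placeholders, not a documented schema, so check the project’s published format before relying on it.

# Hypothetical sketch: summarize a locally saved copy of the ATLAS dataset.
# The file path and YAML layout below are assumptions for illustration.
import yaml  # requires the PyYAML package

with open("atlas.yaml", "r", encoding="utf-8") as fh:
    atlas = yaml.safe_load(fh)

tactics = atlas.get("tactics", [])
techniques = atlas.get("techniques", [])

print(f"{len(tactics)} tactics, {len(techniques)} techniques")
for tactic in tactics:
    # Each entry is assumed to carry a human-readable name.
    print("-", tactic.get("name", "<unnamed>"))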

Getting to know AI

The 59-page report includes a primer on the history of artificial intelligence. There also are definitions distinguishing terms such as AI, sometimes called weak or narrow AI; artificial general intelligence, which has “a significantly wider scope of capabilities”; and machine learning, used for fraud detection, social media content and search engine results, and image recognition.

In medicine, Massachusetts Institute of Technology researchers have described a machine learning algorithm that can analyze brain scans more than 1,000 times more quickly than a human.
