Monday, August 28, 2023

Artificial Intelligence – A Danger to Patient Privacy?

Industries worldwide have integrated artificial intelligence (AI) into their systems because it promotes efficiency, increases productivity, and speeds decision-making. ChatGPT certainly raised eyebrows by demonstrating these capabilities from its debut in November 2022.

According to Insider Intelligence, the healthcare sector alone has seen significant improvements in medical diagnoses, mental health assessments, and the speed of treatment discovery since deploying AI.

Risks of AI in Healthcare 

As more healthcare software systems add AI-based features, the need to gather more data grows, and with it the need to assess AI's potential privacy and security issues. Using artificial intelligence in healthcare poses risks to privacy and to compliance with regulatory frameworks such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security Rule.

In this article, we highlight protocols that will help combat these risks, keep artificial intelligence systems HIPAA compliant, and maintain patient trust.

The Difference Between Artificial Intelligence and Machine Learning 

When talking about artificial intelligence, machine learning is often referenced as if the terms were synonymous. Artificial intelligence is an umbrella term that covers a wide variety of specific technological mechanisms and algorithms. Machine learning sits under that umbrella as one of the major subfields, alongside robotics and natural language processing.

Hence, it’s important to highlight the nuances in this area. When we refer to artificial intelligence in this article, we use the term broadly to encompass both artificial intelligence and machine learning.

What is Artificial Intelligence? 

Artificial intelligence is a set of technologies that enable computers to learn to perform tasks traditionally performed by humans.  

What is Machine Learning? 

Machine learning is a type of AI application that uses algorithms to automatically learn insights and recognize patterns from historical data. It then applies that knowledge to make increasingly complex decisions with minimal additional programming.
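To make that learn-from-data loop concrete, here is a minimal Python sketch, assuming scikit-learn is installed; the features, labels, and decision behavior are entirely synthetic and illustrative, not a real clinical model.

# A minimal sketch of supervised machine learning: the model learns a
# pattern from past (synthetic) data, then applies it to a new case
# without that rule ever being hand-coded.
from sklearn.tree import DecisionTreeClassifier

# Each row is [age, resting_heart_rate]; label 1 = flagged for follow-up.
X_train = [[34, 62], [71, 88], [45, 70], [68, 91], [29, 58], [74, 95]]
y_train = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict([[65, 90]]))  # e.g. [1]: a pattern learned from data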

HIPAA and Patient Trust Using AI in Healthcare  

Let’s briefly review the three main requirements of HIPAA:

  1. Appropriate safeguards must be in place to protect the privacy of protected health information (PHI), and PHI must only be accessed by authorized parties.  
  2. The confidentiality, integrity, and availability of ePHI must be protected via administrative, physical, and technical safeguards.  
  3. Notification must be provided in the event of a breach of any unsecured ePHI.  

Currently, HIPAA does not include language that specifically targets artificial intelligence. However, it’s important to remain compliant with each HIPAA control, as each remains applicable even in light of this relatively new technology.

Arguing the Pros and Cons of Artificial Intelligence in Healthcare 

As mentioned above, artificial intelligence offers a whirlwind of possibilities, including quickly diagnosing diseases, recommending treatment options, and decreasing surgical errors.

However, public opinion on the use of AI in health and medicine is mixed. According to the Pew Research Center, approximately three-quarters of Americans are concerned that healthcare providers will move too quickly to implement AI in medical systems without understanding the full scope of risks it could bring to patients.

The Pew Research Center conducted the survey of 11,004 US adults between December 12 and 18, 2022. It found that 38% believed AI would lead to better health outcomes, 33% felt it would lead to worse outcomes, and 27% expected it to make no difference at all.

Addressing Privacy and Security Issues in AI  

When using AI in healthcare, here are the key factors to address to uphold HIPAA compliance and patient trust.

1. Transparency  

As artificial intelligence works its way into the medical field, patient data will be continuously absorbed into its systems. Patients and consumers have the right to ask how transparently that data is used and whether it is safe.

To remain transparent, health organizations employing AI need to clearly disclose where AI is used within their systems, along with a general overview of why AI benefits the organization’s processes when serving its patients.

In addition to this disclosure, reveal the scope of the patient data sets (e.g., blood type, weight, gender, age, disease) that will be used within AI systems and the purpose of the AI system itself (e.g., diagnosis or tracking public health trends).

Also, where applicable, allow patients to choose which kinds of ePHI may be used within your AI systems, as illustrated in the sketch below.
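As a rough illustration, a hypothetical consent filter might look like the following Python sketch; the field names and consent categories are invented for this example rather than drawn from any particular system.

# Hypothetical sketch: honor per-patient choices about which ePHI
# categories an AI system may consume. All field names are illustrative.
ALLOWED_CATEGORIES = {"blood_type", "weight", "gender", "age", "disease"}

def filter_record_by_consent(record: dict, consented: set) -> dict:
    """Keep only the ePHI fields this patient has consented to share."""
    permitted = ALLOWED_CATEGORIES & consented
    return {k: v for k, v in record.items() if k in permitted}

record = {"blood_type": "O+", "weight": 82, "age": 54, "disease": "T2D"}
print(filter_record_by_consent(record, consented={"age", "disease"}))
# -> {'age': 54, 'disease': 'T2D'}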

2. Potential AI Risks – Manage and Mitigate 

The use of artificial intelligence in the healthcare environment is not without risk, and here, as elsewhere in cybersecurity, mitigating cyberattacks depends heavily on structured and efficient risk management.

Organizations should choose the risk management framework that best aligns with their objectives, keeping HIPAA compliance and patient trust in mind.

3. Protection of Data 

Artificial intelligence often requires enormous amounts of data to produce the most accurate and illustrative results, which poses high privacy risks. One way to prioritize the protection of PHI is to apply detective and preventative controls designed to preserve the confidentiality, integrity, and security of datasets and information systems.

Preventative Controls (controls designed to prevent threat events by omitting or blocking the potential threat; see the sketch after this list): 

  • Firewalls 
  • Physical barriers  
  • Anonymization  
  • Access control 
  • Segregation of duties 
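
For the anonymization bullet above, one common preventative technique is keyed pseudonymization. The Python sketch below assumes the secret key is managed outside the dataset (for example, in a vault or HSM); the identifier format is invented for illustration.

# Keyed pseudonymization as a preventative control: raw identifiers are
# replaced with HMAC-based tokens, so records stay linkable for analysis
# without exposing the underlying identifier.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # illustrative only

def pseudonymize(patient_id: str) -> str:
    """Return a stable, non-reversible pseudonym for a patient identifier."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("MRN-0012345"))  # same input always yields the same token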

Detective Controls (controls designed to detect an event once it has already occurred, aiming to reduce its impact; see the sketch after this list): 

  • Internal and external audit reviews 
  • Log monitoring 
  • Vulnerability management 
  • Incident alerting 
  • File integrity monitoring  
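
As one example of the file integrity monitoring bullet, the sketch below hashes watched files and compares them against a known-good baseline; the paths and baseline storage are assumptions for illustration.

# File integrity monitoring as a detective control: detect unauthorized
# changes after they occur by comparing current file hashes to a baseline.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_integrity(baseline: dict) -> list:
    """Return the paths whose current hash no longer matches the baseline."""
    return [p for p, digest in baseline.items()
            if sha256_of(Path(p)) != digest]

# The baseline would be captured at a known-good point in time, e.g.:
# baseline = {"/etc/app/config.yaml": "ab12..."}; alert on any mismatch.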

4. Anonymization and Access Control According to HIPAA 

It is essential to understand the methods needed to manage anonymization and access control in line with HIPAA’s requirements.

Anonymization or De-Identification

According to HIPAA, removing or modifying personally identifiable information is one of the requirements for protecting PHI. A covered entity or its business associate may create information that is not individually identifiable by following the de-identification standard and implementation specifications in §164.514(a)-(b).

HIPAA provides two de-identification methods:

1. Expert Determination

  • The HIPAA-covered entity or business associate engages a qualified statistical expert to determine that the risk of re-identifying individuals is very small. 
  • That expert’s qualification is based on professional experience, academic or other training, and practical experience. 

2. Safe Harbor

  • The removal of designated identifiers of the patient and of the patient’s relatives, household members, and employers, as sketched below. 
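
A toy Python sketch of that removal step follows; Safe Harbor actually enumerates 18 identifier categories (and constrains dates and ages over 89), so the handful of fields below is illustrative only.

# Safe Harbor-style de-identification sketch: drop designated identifiers
# before a record is released. Only a few of the 18 categories are shown.
SAFE_HARBOR_FIELDS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    """Remove designated identifier fields; keep the rest."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}

record = {"name": "Jane Doe", "mrn": "0012345", "age": 54, "disease": "T2D"}
print(deidentify(record))  # -> {'age': 54, 'disease': 'T2D'}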

Access Control

Under HIPAA, users are granted only the rights and privileges needed to access ePHI and perform the functions appropriate to their roles and responsibilities.

Below are specific access control protocols in alignment with the HIPAA Security Rule; a minimal sketch follows the lists:

Required: 

  • Assign a unique identifier to identify and track user activity.  
  • Have procedures for obtaining ePHI during an emergency.  

Addressable:  

  • Set up systems to automatically log off a workstation. 
  • Use encryption and decryption mechanisms for ePHI. 
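
To tie the required specifications together, here is a minimal role-based sketch in Python: each user carries a unique identifier, permissions derive from role, and every access attempt is logged so activity can be tracked. The roles, actions, and log format are assumptions for illustration.

# Minimal access control sketch: unique user IDs, role-derived rights,
# and an audit log entry for every access attempt.
import logging

logging.basicConfig(level=logging.INFO)

ROLE_PERMISSIONS = {
    "physician": {"read_ephi", "write_ephi"},
    "billing": {"read_ephi"},
}

def access_ephi(user_id: str, role: str, action: str) -> bool:
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    logging.info("user=%s role=%s action=%s allowed=%s",
                 user_id, role, action, allowed)  # audit trail per user ID
    return allowed

access_ephi("u-1001", "billing", "write_ephi")  # denied, and logged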

Review the full list of HIPAA Security Rule specifications when updating your organization’s security policies. 

5. Staff Attestation to AI Safeguards 

Governance can begin by ensuring the organization’s privacy and security policies are up to date and that security awareness training covering the usage of AI is delivered.

Various GRC software tools can help manage policy updates (policies should be renewed annually) and can automatically notify staff members to read and attest to each policy.

Update policies annually to limit the risk of AI within the organization. As health organizations employ AI, procedures should give staff the appropriate HIPAA-aligned steps to follow within the privacy policy and security policy.

Again, AI safeguards are not explicitly addressed by HIPAA itself; however, its technology-related clauses provide general protocols that apply to AI integration.

Lastly, communicate the most up-to-date privacy policy to PHI owners annually, as this document remains public facing.

These techniques, once integrated end-to-end across all AI systems, can enhance data protection and mitigate the risk of cyberattacks such as data breaches, ransomware, and denial of service.

Stay Updated 

Be sure to keep up with HIPAA’s News Releases page to remain up-to-date with the latest HIPAA guidance. 

Also, check out our LogRhythm blog page, where we discuss tech news, security insights, and other compliance related topics.  
