Canadian cities poised to be leaders in AI health care, but privacy concerns could leave some Canadians behind

Concerns over the safety and security of patient data could mean not everyone will benefit equally from the next technological revolution.

Hospitals serving diverse, populous cities like Toronto have created an incredibly valuable network of patient data. GETTY
Curated, representative, diverse data sets. These collections are going to be worth their (metaphorical) weight in gold once the AI revolution in health care really takes off. But serious concerns about patient privacy and how much information doctors will need from these models may mean AI health-care tools flourish in wealthy, diverse cities like Toronto, while smaller clinics are left to fend for themselves.

Large language models (LLMs), like ChatGPT, have the incredible capability to sift through massive data sets and use this information to provide analyses and predictions. In the world of health care, where the worldwide medical literature doubles every eighteen months or so, this ability will prove not just useful but necessary to keep treatments in line with the explosion in research findings.

The conclusions drawn by LLMs are only as good as the data they pull from. That is one of the reasons why, when asked a question, ChatGPT — which pulls from publicly available information across the internet — can come up with some pretty dubious responses. The same will be true for health care. If an artificial intelligence (AI) model only has patient data on a certain subset of the population, then the conclusions drawn by that model will really only be accurate for that same demographic. For example, if a cardiac centre primarily treats heart attacks in Caucasian men aged 50 to 70, its data set will skew to that demographic. If practitioners then try to use this information to help a young woman of South Asian descent experiencing symptoms similar to a heart attack, they may miss crucial information.
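
To make that skew concrete, here is a minimal, hypothetical sketch in Python (entirely made-up numbers, not any hospital's actual model) showing how a classifier trained almost exclusively on one demographic group can look accurate on that group while performing close to chance on an under-represented group whose risk profile differs:

```python
# Illustrative sketch only: synthetic data, not real patient records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, risk_weights):
    """Generate synthetic patients: two features and a binary outcome."""
    X = rng.normal(size=(n, 2))
    # The outcome depends on the features differently for each group.
    logits = X @ risk_weights
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A (well represented) and group B (under-represented) have
# different feature-to-outcome relationships.
Xa, ya = make_group(5000, np.array([2.0, 0.1]))
Xb, yb = make_group(5000, np.array([0.1, 2.0]))

# The training set is 95% group A and 5% group B: a skewed data set.
X_train = np.vstack([Xa[:4750], Xb[:250]])
y_train = np.concatenate([ya[:4750], yb[:250]])

model = LogisticRegression().fit(X_train, y_train)

print("accuracy on group A:", model.score(Xa[4750:], ya[4750:]))
print("accuracy on group B:", model.score(Xb[250:], yb[250:]))
# Typically prints high accuracy for group A and near-chance for group B.
```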

“It’s extremely important that the data that you use to construct your model is diverse,” says Dr. Barry Rubin, chair and medical director of the University Health Network’s Peter Munk Cardiac Centre (PMCC) in Toronto. “This is where Toronto has a huge advantage, because we’re the most diverse city on earth, [with] over 200 languages spoken. You want models that are going to apply to everybody, not just one demographic.”

Along with Bo Wang, PhD, AI lead at the PMCC, Rubin’s team has been developing models to address what they see as some of the most critical concerns in health care today, such as practitioners spending too much time on non-skilled tasks, like taking notes, that could be handled by a computer. Wang and his team have also released a demo called Clinical Camel — an “open-source health care-focused chatbot,” according to their website — which has been trained to pull information from the medical literature.

In May, the WHO called for caution in the use of AI-generated large language models, warning that the “adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world.” Chief among the listed concerns is that the data used to train these models may be “biased, generating misleading or inaccurate information that could pose risks to health.”

“It’s the database and the training of that data, because that’s what informs the prediction model that answers questions,” explains Hamed Shahbazi, founder and CEO of WELL Health Technologies, the largest owner and operator of outpatient health clinics in Canada.

And if the data could make or break an LLM, that means access to the data will be paramount. But how much access can be shared across health networks?

Burnout, data and paperwork: AI could be the answer to our health-care woes

A patient journey creates huge amounts of data: blood tests, X-rays, ultrasounds, CT scans and MRIs. Genetic testing and family medical history also add integral glimpses into a person’s health. However, the more data we generate, the more time it takes to go through the information and contextualize it with the latest research findings. Multiply that time by the hundreds to thousands of patients a clinic may see each year, and the task quickly becomes unmanageable.

“If I’ve seen you for the first time, and even if I have access to your records, what are the chances that I’m going to be able to browse through all that?” says Shahbazi. “It’s unrealistic. People say doctors make mistakes; well, it’s a little bit unfair, because sometimes they don’t have access to the information, and sometimes it just doesn’t even make sense for them to have the time or energy to go through everything in a very substantial patient record.”

This is why WELL Health Technologies has launched WELL AI Voice, an AI model that works by “privately and securely capturing a patient encounter conversation and automatically generating a succinct and medically relevant chart note,” according to WELL’s press release.

At the PMCC, Rubin and Wang are also developing a model that can take notes during a medical appointment, freeing the practitioner from the double duty of taking notes while also trying to hold a conversation with their patient. The AI model will then be able to check the important points from the meeting against the patient’s previous medical history and the scientific literature, and flag possible conditions, tests that may be required and other information that would take health-care staff precious time to investigate.

“It takes a doctor five minutes to generate a note, and they see 30 patients a day,” says Rubin. “That’s 150 minutes that you just saved from that doctor’s day, and that’s 150 minutes that they could be spending talking to you, or not have to go home at night [and continue working].”
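
The arithmetic behind that estimate is simple; the short sketch below just makes it explicit, with the weekly figure resting on our own assumption of a five-day clinic week:

```python
# Rubin's figures: five minutes per note, 30 patients a day.
minutes_per_note = 5
patients_per_day = 30

minutes_saved_per_day = minutes_per_note * patients_per_day   # 150 minutes
# Assumption (ours, not Rubin's): a five-day clinic week.
hours_saved_per_week = minutes_saved_per_day * 5 / 60         # 12.5 hours

print(f"{minutes_saved_per_day} minutes per day, "
      f"roughly {hours_saved_per_week:.1f} hours per five-day week")
```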

The picture Rubin paints is one that still critically hinges on the physician, not least because liability for patient outcomes will still fall on the shoulders of practitioners rather than computer programmers. The models described by Rubin and Shahbazi have AI function more as a super-scribe, taking notes and comparing them against millions of data points and scientific studies to make informed suggestions to the care team. The decisions will always — at least for now — be made by practitioners.

“We are not at the stage where artificial intelligence is managing patients,” says Rubin. “But we are at the stage where artificial intelligence can provide clinicians, doctors, nurses, allied health professionals, pharmacists with additional information that they can consider when making a recommendation to a patient.”

Protecting patient data will require resources, funding

As great as taking notes and checking them against published studies is, the real secret sauce will be the insights gleaned from these massive data sets themselves.

“What we’ve been hearing from people is … ‘I want my data to work for me. The data that I generate needs to give me insight,’” says Shahbazi. “We want it protected and secure, but we also really want the brightest minds and technologies to tell us what that data is telling us, and to correlate that with other people.”

Putting data to work generally means adding it to a database that can be read by an LLM, which comes with risks. Among the warnings listed by the WHO: “LLMs may be trained on data for which consent may not have been previously provided for such use, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response.”

Wang agrees.

“We need to provide enough education to doctors and patients about the pros and cons so that they understand [how] their data may potentially be used,” says Wang. But in a field in the midst of a technological revolution, specifying exactly how patient data may be used in the future could be tricky.

The risks are also difficult to quantify. Patient data is usually anonymized when added to a database, meaning personally identifiable information, like names, addresses and anything else that obviously points to a specific person, is removed. However, while many AI models rely on that anonymized information, other models have been built specifically to re-identify individuals by cross-referencing anonymized records with other available data sets.
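
A toy example (with entirely invented records) shows how simple that cross-referencing can be: joining an “anonymized” table with a public one on quasi-identifiers such as postal code, birth year and sex is enough to re-attach names to diagnoses.

```python
# Toy illustration of re-identification; every record here is made up.
import pandas as pd

# "Anonymized" lab results: names removed, but quasi-identifiers kept.
lab_results = pd.DataFrame({
    "postal_code": ["M5G", "M5G", "K1A"],
    "birth_year":  [1971, 1984, 1990],
    "sex":         ["F", "M", "F"],
    "diagnosis":   ["type 2 diabetes", "hypertension", "asthma"],
})

# A public or leaked data set (voter roll, social profile, breach dump)
# containing the same quasi-identifiers alongside names.
public_records = pd.DataFrame({
    "name":        ["Jane Doe", "Joan Roe"],
    "postal_code": ["M5G", "K1A"],
    "birth_year":  [1971, 1990],
    "sex":         ["F", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
reidentified = lab_results.merge(
    public_records, on=["postal_code", "birth_year", "sex"]
)
print(reidentified[["name", "diagnosis"]])
# Jane Doe -> type 2 diabetes, Joan Roe -> asthma
```

Re-identification research has repeatedly shown that a handful of such quasi-identifiers can single out most individuals in a population, which is why stripping names alone is not considered sufficient protection.
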
And malicious actors have already begun to target health-care organizations. The laboratory results of 15 million Canadians were compromised when the country’s largest laboratory service, LifeLabs, was hacked in 2019. In 2021, hackers were able to access patient data going back 14 years in the Eastern Health region of Newfoundland (an area that includes the capital city, St. John’s), among other regions in the province. Sensitive data may be used for financial gain, either through credit card or banking information, or by using sensitive health information for coercion, extortion and intimidation.

Protecting large swaths of patient data to be used with AI means that considerable investment will have to be poured into cybersecurity — on top of managing the database and the LLMs. Those costs mean there is a chance we will see yet another iteration of postal-code health care, where Canadians living in wealthier areas have better access to care than those who live in less well-resourced areas.

“It’s not clear that there are resources for everybody to have this,” says Rubin.

The obvious solution to the privacy issue would be to make the AI interface as opaque as possible, protecting proprietary information and patient data. The problem with that, Wang explains, is that providers — liable for any decision made through an AI model — will want to have a thorough understanding of how the model works before trusting it.

The ability to check information is also important to guard against hallucinations, a phenomenon in which the algorithm produces answers that are not supported by its data. (OpenAI, the company that operates ChatGPT, is being sued by a Georgia radio host after the LLM allegedly claimed he was involved in a court case for “misappropriat[ing] funds for personal expenses without authorization or reimbursement.”)

For now, Wang and Rubin are focusing on developing an LLM that is tied to their hospital, accessible by staff who already have access to the data under the Personal Health Information Protection Act.

Emma Jones is a multimedia editor with Healthing. You can reach her at emjones@postmedia.com or on Instagram and Twitter @jonesyjourn.
