
Compassionate Convergence of Man and Machine: Preserving Humanity in Digital Health

Moderated by Olivia Kersey (ISPEP) | Panellists: Lisa Kerr (Evinova), Keith Berelowitz (pRx Engage), Hannah Humphrey (Lived Experience Expert), Derick Mitchell (PFMD)



The closing panel for Day One of the Patient Engagement Solutions & Innovation World Congress 2026 brought together a range of perspectives to grapple with one of healthcare's most pressing tensions: how do we harness the undeniable power of digital technology without losing the human connection that defines good care?



Technology in service of people


Moderator Olivia Kersey opened by asking Lisa Kerr, Director of Patient Engagement at Evinova, how technological advancement can be reconciled with humanity in care delivery. Lisa's answer set the tone for the session: "The best digital health tools make either somebody's job in the healthcare industry easier so that they have more time for patients, or they improve the education and information available to patients so that they can better understand the choices they have to make."


Keith Berelowitz, Founder and CEO of pRx Engage, brought the conversation to considerations of accessibility. He noted that AI-powered translation, adjustable font sizes, and colour-blindness settings are not expensive to implement – around €2,000 per year for a given platform.



The dangers of simulated empathy


One of the most energised threads of the discussion concerned patients' growing reliance on AI tools like ChatGPT for health information – and what that reliance actually means.


Derick Mitchell, Executive Director of PFMD, observed that patients are increasingly trusting AI tools even above their GPs, drawn in by the empathetic and non-judgmental tone these tools adopt. Lisa pushed back with a sobering counter-point, citing a study published in Nature showing that use of ChatGPT Health in the US resulted in overdiagnosis of people who didn't need to see a doctor, and – more dangerously – underdiagnosis of those who urgently did.


Olivia reinforced the concern, noting that AI's simulated empathy can actually validate dangerous thinking: making users feel heard and reassured when they are, in fact, heading in completely the wrong direction. For parent advocate Hannah Humphrey, this was acutely felt in the world of rare disease, where patients and families have often already been failed by the system and are desperately seeking answers. 


Keith's response – that AI tools must carry clear warnings that they are not a substitute for clinical opinion – was expanded upon by Olivia. While the absence of such warnings is indeed an early flag for untrustworthiness, we are, she argued, all increasingly desensitised to disclaimers and Ts&Cs, given the bombardment that has come with the digitisation and scale of access to information. She emphasised that disclaimers alone are not enough to ensure safe use – especially for users under the pressure of acute illness, or desperately seeking a diagnosis for their child where ‘the system’ has let them down. No legal small print is going to interrupt that search.



Managing health information: the burden at ground level


Next, Hannah shared her experiences of the systemic failures that place unnecessary health information management burdens on caregivers/families, making the case for a unified patient record. "Technology should reduce the burden on caregivers, not increase it," she said. "Ideal world: a single patient record accessible from everywhere in the country – not just in health, but also in education and social care."


She went further, stating that emergency care plans should be the first thing visible when a clinician opens a patient's record, and that shift handovers should require the same mandatory review, noting: "The amount of information not being passed over across shifts is terrifying."


On a separate note, Hannah called for the integration of digital health solutions into the NHS app as standard, noting that collation of the six different apps she currently has to use to support her daughter’s care would be “really life changing” for her family, providing a single point of reference for the various hospitals involved in appointments and communications. Keith expanded on this by highlighting the need for a “common denominator” across geographic regions so that such solutions can be scaled across borders.



A solution for clinical trials


Olivia then invited the panel to share a concrete example of a digital health solution and its impact. Lisa responded by describing Evinova's remote patient monitoring solution for oncology trials, which tracks side effects using a combination of device measurements, patient-reported outcomes, and intelligent logic that asks only the questions relevant to a patient's current state. Critically, it doesn't wait for a scheduled visit. When a patient's condition escalates, the system alerts the clinical team in real time. The result: patients stay on trial longer, which means more robust data, smoother progression of treatment to market, and potentially better outcomes for patients.



Transparency for trust


As the conversation broadened to questions of trust and adoption, an audience member described a peer who had received a letter from their doctor and viewed it with scepticism, suspecting it had been generated by an AI tool. Lisa’s thought-provoking response can be summarised as follows: transparency about why technology is being used can fundamentally transform its reception. She went on to explain that, if the letter was indeed AI-generated, one could tell the patient that yes, it was written with AI, but that the benefit was the doctor being able to use the time otherwise spent writing to support an additional patient, or to go home and get a good night's sleep so they could provide better care the next day.


The point cuts across contexts: whether it's AI-generated clinical letters, algorithmic triage, or a chatbot answering out-of-hours queries, explaining the purpose behind the technology – and what it frees humans to do – helps to build the trust that drives adoption.



Critical thinking in the age of AI: an urgent warning


Olivia's closing comments invited reflection on the realities of AI integration for individual professionals. While she acknowledged that AI stands to absorb vast amounts of routine administrative work – freeing clinicians, researchers, and strategists for higher-order thinking – she voiced a fundamental concern about the direction of travel.

"I've also seen how AI can give people the perceived licence to clock out and let it think for them when it shouldn't. We could substitute the boring stuff and be more smart with our cognitive resources – or it could substitute our cognitive resources and leave us frankly as brain-dead zombies. Which path we go down is very much going to be up to the individual. And human nature does tend to take the easy option, even if it's not the right one. So personally, I'm concerned that we as a population, as a profession, may lose the ability for that critical thinking, even though technically, we're freeing up resource to do so."


This wariness was echoed by an audience member who described patients arriving at appointments with two pages of AI-generated diagnoses and treatment plans – leaving clinicians to spend most of the consultation correcting misconceptions rather than advancing care. A clear call from the floor followed, advocating for a coordinated public education campaign, from government and the NHS, that helps people understand both the power and the limits of AI health tools and, more broadly, how to navigate health information responsibly.


Speaking more generally, Hannah reminded the room that AI is a catch-all term for a wide range of technologies – and some are already being used successfully in areas of healthcare such as radiology. "People's fear of AI is painting everything with the same brush," she noted. This was a useful reminder of the growing importance of nuance and specificity in discussions of the role of AI in healthcare and research.



Key takeaways


Reflecting on the session, our key takeaways are as follows:


  1. In industry, promising solutions are constantly arriving, and it is encouraging to see the evolution of remote patient monitoring in trials and the potential for platform accessibility adaptations at the click of a button. However, when it comes to healthcare ‘on the ground’, the basics of effective digital information management remain hugely fragmented, resulting in unacceptable burden for patients and carers. Arguably, efforts from the NHS and other provider bodies should be focused on addressing this foundation before pursuing ‘jazzier’ solutions.


  2. While patients and carers are increasingly empowered by digital technologies enabling access to information and AI tools, the exposure this brings also puts them at risk. A sustained and expertly delivered public education campaign – embedded long-term into messaging from health systems and government – is urgently required if we are to prevent avoidable harm. Further, if relationships between patients and clinicians are to be successful and outcomes are to be protected, it is imperative that healthcare professionals are also actively equipped to navigate the unavoidable influence of AI on the information and expectations that patients and carers bring to clinic.


To conclude – is it truly possible for the meeting of man and machine to preserve humanity in health research and care? Yes – but only if this is actively embedded. If technology frees up more face-to-face time, if it eases effective personalisation of materials, if it provides a safety net for crucial health information falling through the gaps of chaotic hospital shifts, then great. However, it will be up to all of us, as individuals and as employees of companies and providers, to consistently challenge digitisation for its own sake and, for every ‘innovation’, to keep asking: “What are we digitising, why are we doing it, and ultimately, who for?”


