We were delighted to extend our partnership with the absolute legends at Latrobe Health Services, a non-profit health insurer based in Gippsland, Victoria. Latrobe are dedicated to improving the lives of rural and regional Australians and chose Phoenix Australia as their inaugural partner for the launch of their Foundation, the largest investment of its sort in Australia.
We will be co-designing and testing a new version of our evidence-based and trauma-informed SOLAR app tailored to rural and regional Australians.
My firm belief is that everyone can, nay must, develop an intuitive understanding of how modern generative artificial intelligence algorithms work. Doing so helps us make rational decisions about their benefits and potential harms. That’s why it was a pleasure to be invited to share these insights with colleagues at this lovely beachside conference.
Citation
Khanna, R. (2024). Artificial Intelligence: Under the hood and in the clinic. Royal Australian and New Zealand College of Psychiatrists’ Victorian Branch Conference, Lorne, Australia. October 2024.
This was my PhD thesis. The full version is not yet available while I prepare the manuscript for journal submission, but the following provides a quick overview.
Can smartphones, computerised tasks and webcams help spot PTSD earlier?
PTSD is common in Australia—especially among veterans and police—but it often goes undetected because thorough clinical assessments take time and specialist staff. My PhD tested whether a fully remote, low-burden assessment (done on a regular laptop/phone) could help.
What we did
Forty-two veterans and police completed short, at-home activities:
a brief trauma story in their own words (audio/video)
quick picture-based attention tasks
a few days of very short symptom check-ins on their phone (EMA)
We analysed language, voice, and facial expression patterns alongside the phone micro-surveys, and compared results with the gold-standard PTSD interview.
What we found
A combined computer model correctly flagged every PTSD case in our sample (no missed cases) while keeping false alarms reasonably low.
Surprisingly, some single sources worked brilliantly on their own:
short phone app check-ins (micro-surveys) over four days,
the way people told their trauma story (word use, flow), and
facial expression during the story.
Tasks that rely on split-second reaction times or precise heart-rate estimates from webcams were less reliable at home.
Why it matters
This early work shows that remote, low-cost tools can capture meaningful PTSD signals and could support large-scale screening and ongoing monitoring—especially where specialist services are scarce. Patterns over time (how symptoms rise and fall during daily life) were more informative than one-off snapshots.
Caveats
This was a small, fairly similar group (mostly male veterans/police) over a short period. Results need testing in larger, more diverse samples. These methods are not a diagnosis; they’re meant to support, not replace, clinicians.
What’s next
We hope to run bigger, more diverse studies, extend monitoring beyond four days, and focus on the most helpful pieces (phone micro-survey check-ins + brief narrative), with strong privacy safeguards. In the meantime, we’re using some of these ideas in our DECODE PTSD trial, seeing if similar digital assessments can help predict who is most likely to respond to which treatment.
The integration of computational methods into psychiatry presents profound ethical challenges that extend beyond existing guidelines for AI and healthcare. While precision medicine and digital mental health tools offer transformative potential, they also raise concerns about privacy, algorithmic bias, transparency, and the erosion of clinical judgment. This article introduces the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework, developed through a conceptual synthesis of 83 studies. The framework comprises five procedural stages – Identification, Analysis, Decision-making, Implementation, and Review – each informed by six core ethical values – beneficence, autonomy, justice, privacy, transparency, and scientific integrity. By systematically addressing ethical dilemmas inherent in computational psychiatry, the IEACP provides clinicians, researchers, and policymakers with structured decision-making processes that support patient-centered, culturally sensitive, and equitable AI implementation. Through case studies, we demonstrate framework adaptability to real-world applications, underscoring the necessity of ethical innovation alongside technological progress in psychiatric care.
Citation
Putica, A., Khanna, R., Bosl, W., Saraf, S., & Edgcomb, J. (2025). Ethical decision‑making for AI in mental health: The Integrated Ethical Approach for Computational Psychiatry (IEACP) framework. Psychological Medicine, 55, e213. DOI: 10.1017/S0033291725101311