Electronic Spring 2026 | Issue 66

Psychiatry Meets AI and Tech Policy: Reflections from a Fireside Chat

By Vivien Choi, MD, Psychiatry PGY-3 Resident | Rosalind Franklin University/Chicago Medical School

As a psychiatry resident preparing to subspecialize in Child and Adolescent Psychiatry, I find that between overnight calls, patient loads, and the general blur of training, most continuing education events blend together pretty quickly. I'd been hearing more about AI tools at my training sites, and the topic had come up repeatedly at the 2025 AACAP Annual Meeting in Chicago this past October, enough that I was starting to feel like I needed a better handle on where all of it was actually heading. It was with that mindset that I walked into a fireside chat hosted by the Illinois Psychiatric Society featuring Dr. Sudhir Shenoy, MS, PhD, on February 3rd, 2026. I expected a broad conversation about AI in healthcare. What I didn't expect was to leave feeling genuinely energized about my own role in shaping that future.

Dr. Shenoy holds a doctorate in computer engineering with a specialization in artificial intelligence for social robots, and honestly, that distance from medicine was exactly what made his perspective so useful. He has spent years at the intersection of AI research and federal technology policy, including direct engagement with policymakers in Washington, D.C. His talk reflected a technical depth and policy fluency that you only get from operating in both worlds at once. For those of us in medicine who sometimes feel like AI is happening to us rather than with us, hearing from someone who genuinely understands the machinery behind these tools, and who has sat in the rooms where these discussions happen, was a rare and welcome experience.

The AI Tools Are Already Here

Dr. Shenoy opened by grounding us in the present. AI is not a future concern for psychiatry. It is a current one. He described a range of AI tools already entering clinical spaces: ambient AI scribes that generate progress notes in real time, risk stratification algorithms embedded in electronic health record systems, and clinical decision support tools that flag potential diagnoses or medication interactions. For those of us in residency, this is not theoretical. Many of us are already encountering these systems on our training rotations, often without any formal education about how they work, how they were validated, or what their limitations are.

As a future child and adolescent psychiatrist, I find myself looking at these tools with particular wariness. The populations I work with, children and adolescents and the families navigating some of the most sensitive developmental periods of their lives, make me ask: how are they represented in the datasets used to train these AI systems? What does it mean to apply a risk algorithm built primarily on adult data to a 14-year-old? Who asked that question before the tool was deployed? Dr. Shenoy's talk gave me language and context for concerns I had already been carrying but hadn't fully been able to articulate.

A Framework Worth Keeping

One of the most useful things Dr. Shenoy introduced was a simple framework for how innovation actually moves into clinical practice:

Technology → Policy → Law → Regulation → Clinical Practice

The technology gets built, often in academic or commercial research labs with limited clinician input. Policy discussions follow, sometimes years later; legislative proposals emerge, agencies interpret and implement them, and eventually healthcare systems adapt. Dr. Shenoy explained that by the time an AI tool has traveled through each of those stages, the decisions about how it works, what data it uses, how it's validated, and what recourse exists when it fails have largely already been made.

"By the time something shows up in your clinic," he said, "the debate about it has usually already happened. And often without psychiatrists in the room."

That sentence stuck with me. As a trainee, I've always assumed the administrative decisions get made somewhere above me: someone else chose the EHR, set the workflow, and decided which tools the team would use, and I just learned to work within it. But Dr. Shenoy was making a different point. The process he described isn't locked once it starts. It moves through stages, and each stage is a moment where a different voice could have shaped the outcome. The problem is that clinicians, and psychiatrists especially, tend to show up at the very end, if at all. That doesn't have to be the default.

Advocacy as Clinical Responsibility

The part of the session I keep coming back to was the conversation about advocacy. Like a lot of residents, I've tended to think of policy as something that belongs to other people: legislators, lobbyists, academics with endowed chairs and time to testify on Capitol Hill or in state capitols. Dr. Shenoy, drawing on his own experience navigating federal AI policy discussions in Washington, pushed back on that in a way that felt specific rather than just motivational.

"You don't need to be a policy expert to influence policy," he told us. "Policymakers deeply need clinical perspectives on how a tool affects patient trust, workflow, safety, and therapeutic relationships. That insight doesn't come from technologists. It comes from you."

Coming from someone with Dr. Shenoy's technical background, that landed differently than it would from most speakers. He knows what engineers and data scientists can and cannot see. He has been in rooms where technology decisions were made with a lot of confidence about model accuracy and algorithmic efficiency and real blind spots about the clinical realities those tools would actually encounter. The therapeutic relationship in psychiatry, the trust, the disclosure, the deeply human work of sitting with someone in their suffering, is not something that gets adequately captured in a statistical loss function. Psychiatrists are among the few people positioned to articulate why that matters in terms policymakers can actually do something with.

For those of us in child and adolescent psychiatry training, the stakes feel even more specific. Children cannot consent for themselves. Adolescents are navigating identity, autonomy, and privacy in ways that adult-centered frameworks consistently underestimate. If we are not at the table when AI tools for pediatric behavioral health are being evaluated and regulated, our patients end up bearing the consequences of decisions made without them in mind.

Illinois, Springfield, and the Bigger Picture

Dr. Shenoy also walked us through the current landscape of AI legislation, which is moving at the state and federal levels simultaneously and not always in coordination. Illinois has been one of the more active states in technology regulation. The Biometric Information Privacy Act, passed in Illinois in 2008, gets cited nationally as a model because it already regulates how biometric data such as facial scans and voice data can be collected and used. These are the same kinds of signals some AI tools in mental health rely on.

There is real ongoing legislative interest in how AI intersects with healthcare and mental health specifically. This is where organizations like the Illinois Psychiatric Society become genuinely important, not just as professional development infrastructure but as a real policy mechanism. IPS and its national counterpart, the American Psychiatric Association, engage in public comment processes, legislative testimony, and direct outreach to regulators on issues affecting our field. They are the way that clinicians, residents very much included, can have a lasting impact on policy without needing individual access to legislators or a second career in law or policy.

Before this session, I thought of my IPS membership mostly in terms of conferences, networking, and CME. I'm thinking about it differently now.

What I'm Taking Forward

I'm in a phase of training where most of my energy goes toward just becoming competent at the core work of psychiatry. Learning to read a room, to sit with ambiguity, to make sound clinical judgments in hard moments. That's what fills my days. But Dr. Shenoy's talk reminded me that becoming a good psychiatrist isn't only about developing those skills in isolation. It's also about understanding the environment I'll be practicing in, who shapes it, how it changes, and where my voice might fit.

For my fellow residents and medical students reading this: you are entering a field that is changing faster than our training programs have fully caught up with. The AI tools you encounter during residency are early versions of systems that will be far more capable and far more embedded in clinical practice by the time you're early-career. The regulatory frameworks governing those systems are being written right now, and some of that writing is being done by people who have never treated a patient, never sat with a teenager in crisis, never experienced the particular weight of a psychiatric hold.

You have knowledge those people don't have. It's worth finding ways to share it.

I'm grateful to the Illinois Psychiatric Society for creating the space for this kind of conversation, and to Dr. Shenoy for bringing a perspective that genuinely expanded how I think about the field I'm entering. If you weren't able to attend, I'd encourage you to seek out his work and start thinking about where, in the framework he described, there might be room for your voice.