We often say that children are the future. But when we imagine the future of health care with artificial intelligence (AI), are the voices of children and youth included in that vision?
More and more studies are demonstrating the particular tasks for which AI may be helpful to patients and clinicians. For example, AI algorithms can improve radiologists’ assessment of bone age among pediatric patients and better support childhood asthma management. But beyond demonstrations of clinical benefit lies the normative question of how we should conduct AI research and clinical implementation in a manner consistent with a “social licence.”
Social licence refers to the alignment of practices with societal expectations, independent of formal regulations. While regulatory frameworks and approval processes can manage access to and use of data for established purposes, understanding how patients expect and prefer these activities to be carried out can go a long way toward establishing and maintaining trust. Where there is a disconnect between current practice and patients’ expectations, we can explore whether better education is needed or perhaps modify practices to suit patient preferences.
While there is ample research querying adults’ perspectives on AI in health care, few studies have recruited children and youth to understand their thoughts and perspectives. This gap represents a failure to imagine children and youth as stakeholders – a failure that has also been observed during the COVID-19 pandemic.
When we explore issues of social licence, children and youth (i.e., those under 18 years of age) face a particular disadvantage. Children’s views on moral questions (and those of youth, to varying extents) are often not afforded the same weight readily granted to the views of adults, without consideration of the underlying reasons. But growing up in a world where technology inhabits every realm of being means that young people “are on the front lines of change” – their conceptual maps for this world shape the future. It is important that we meet these young patients where they’re at. We should take this step to engage them in defining what the future should look like for two reasons: 1) on the practical front, to realign expectations; and 2) on the ethical front, to value the autonomy and voice of young people.
On the practical front, we adults are in a position of power. We make decisions for young people after weighing the pros and cons, ideally with a full understanding of children’s experiences factored in. If we haven’t really understood those experiences – if we don’t even ask children – we run the risk of developing ineffective normative guidance. Normative guidance, such as the standards and principles that regulate consent and assent in health AI, will enable us to integrate pediatric health AI at the point of care. If we want to know what would make pediatric patients comfortable with an AI system, we have to facilitate this discussion and invite them to participate in their care by asking them what makes something “comfortable” for them. Only then can the use of AI for pediatric health be in the best interests of children and youth.
On the ethical front, there is a duty to involve young people in decisions that affect their well-being. The United Nations Convention on the Rights of the Child (CRC), an international legal rights document, is a resource used in pediatric bioethics to outline normative moral concepts and values. The CRC recognizes that children have not only the basic human rights afforded to all, but an additional set of rights because of the obligations a society has toward its youth. Beyond being legally binding, the CRC outlines ways to operationalize the core commitments and values of bioethics that are consistent with the best interests of children. Article 12, for example, states that children have a right to have their voices heard in decisions affecting them, in accordance with their maturity. Article 13 asserts a right to receive information in a manner of the young person’s choosing.
For health-care AI, these rights may be translated to a duty to seek out and duly consider the values and views of youth for how AI should be utilized to inform their care. Moreover, we should consider how they wish to receive information about AI systems in health care.
Beyond these basic rights are the additional moral obligations central to pediatric bioethics, including truth-telling, child- and family-centred care and developing autonomy.
Truth-telling refers to the notion that honesty and transparency are fundamental to the therapeutic alliance; the relative immaturity of children is not a substantive reason in and of itself to withhold information from patients.
With child- and family-centred care, children’s needs are recognized as primary, but the family is designated as central, highlighting the importance of family as the child’s support system.
And most importantly, developing autonomy recognizes the importance of continually and increasingly engaging children in the decision-making surrounding their health care. Many children are capable of providing consent, given their understanding and appreciation of the relevant risks and benefits of the options. Promoting autonomy also means allowing all children a say in their health-care delivery, even when they are not strictly capable of making a medical decision. A young child might not be capable under the law but can nonetheless participate in decision-making. For example, children may not be capable of consenting to receiving intravenous medication but can be asked which arm the injection should be delivered in, whether they want to be distracted, or how they want to be positioned (lying down, on a parent’s lap, etc.).
To this end, our team at The Hospital for Sick Children (SickKids) and Holland Bloorview Kids Rehabilitation Hospital (HBKH) is conducting a research study to begin exploring social licence. Explain AI 4 Kids is focused on understanding the moral intuitions, values and views of children and youth pertaining to important ethical questions inherent to the integration of health AI. We intend for these initial findings to generate discussion and further research that can lead to incorporating children and youths’ views into AI initiatives, policies and ethical guidelines.
The study uses vignettes that illustrate to participants hypothetical scenarios of how AI could be integrated into their care. Each vignette takes place in the hospital and incorporates real-world examples of AI-based methods (e.g., precision medicine) and technology (e.g., wearables, fitness trackers and social media). These stories set the foundation for participants to contemplate the benefits and risks of AI and to provide insight into the research process, including who should have access to the data, how consent should be obtained and what should be done when mistakes are made.
AI’s opacity and complexity prompt many to consider how we can reasonably achieve sufficient understanding among laypersons, let alone children. To this, one of our PhD advisors (during a graduate degree in neuroscience, a field not known for its simplicity) once said, “If you can’t explain something to a typical 12-year-old and get them to understand it, you probably don’t understand it as well as you think you do.”
It’s time to stop making excuses for excluding children from these important conversations that will shape the next generation of health-care provision.
Ask them. Listen. Take their views seriously. That’s the future of health care.