Main

Artificial intelligence (AI) systems are fundamentally changing the world and affecting present and future generations of children. Children already interact with AI technologies in many different ways: embedded in the connected toys, smart-home Internet-of-Things (IoT) technologies, and the apps and services they use on a daily basis1,2. Such AI systems provide children with many benefits, such as enjoyment and convenience from connected devices3, personalized education and learning from intelligent tutoring systems4, and online content monitoring and filtering by algorithms that proactively identify potentially harmful content or contexts5. Going forwards, AI systems will in all likelihood become even more pervasive in children’s lives, simply because of their unprecedented capability to create compelling, adaptive and personal user experiences. Yet despite its enormous potential, AI presents challenges for children, including biases affecting vulnerable sub-groups6, unforeseen negative consequences7 and looming privacy risks from extensive data collection practices8. Over recent years, substantial efforts have been made to regulate ethical AI9. Despite a growing consensus about what these principles require in general, engagement with children’s issues remains largely absent or limited. Although general AI principles remain valid in cases involving children, the unique characteristics and rights of children necessitate a more nuanced approach. Various ethical principles have been proposed to safeguard children’s rights, but effective implementation and practical application remain relatively unexplored. It is thus crucial to address the challenges that emerge when translating these principles into real-world scenarios involving children, ensuring that the benefits of AI are maximized while its potential harms are minimized.

In this Perspective we analyse existing AI ethics guidelines, considering the unique characteristics, rights and needs of children in the digital world. From this examination, we develop a synthesized set of terminologies, drawing on ethical principles both within the digital environment and beyond it. Recognizing the challenges inherent in tailoring these principles to children’s distinct needs, we offer a roadmap for prospective inquiries aiming to establish child-centred AI.

A review of ethical AI principles and how they relate to children

To establish the scope of this Perspective we start by asking how ‘children’ are characterized in current ethical AI principles, and how their rights are framed. According to the United Nations Convention on the Rights of the Child, a ‘child’ is defined as anyone under the age of 18 (ref. 7). Although this is a broad definition, it is adopted directly by numerous major ethical AI guidelines, which often treat ‘children’ as a single category without further differentiation.

Meanwhile, the UN Committee on the Rights of the Child’s endorsement of General Comment 25 (ref. 7) in February 2021 provided milestone guidance on children’s rights within the digital domain, outlining four key principles essential for upholding children’s rights in a digital context: (1) non-discrimination: ensure that all children have equal access to meaningful digital experiences; (2) best interests of the child: prioritize children’s welfare in all decisions and actions affecting them; (3) right to life and development: protect children from digital threats such as violent content, harassment and exploitation, emphasize the influence of technology during early childhood and adolescence and educate caregivers on safe usage; (4) respect for the child’s views: promote children’s expression on digital platforms, integrate their input into policies and ensure that service providers respect their privacy and freedom of thought. Our review of major ethical AI frameworks identified key themes and principles for upholding the children’s rights set out in General Comment 25. Moreover, we examine the varied terminologies employed within these principles across different disciplines and domains.

Fairness, equality, inclusion and access

This theme resonates deeply with the principle of children’s right to non-discrimination. Setting a standard of non-discrimination makes it crucial for AI system designers to prevent unfair and unequal consequences across different communities10. However, there is a noticeable gap in research focusing on child-related fairness. Frameworks, particularly those from entities such as the United Nations Children’s Fund (UNICEF), stand out for their emphasis on advocating for marginalized children and on the importance of diversifying datasets to minimize biases2. Several resources also champion universal digital access for every child, ensuring no discrimination based on gender, disability or ethnicity7. The push for diversity in AI system design and increasing calls for active child participation in AI policymaking and design processes are evident2,11. The conceptualization of fairness within AI is often viewed through different lenses, depending on professional background. For educators, fairness is about tailoring learning experiences to meet individual needs and ensuring equitable access to quality education for all children, irrespective of their backgrounds12. Child psychologists focus on age-appropriate AI interactions and equal treatment for children with developmental challenges13. AI engineers often aim to address fairness technically, developing unbiased algorithms and striving for uniform service quality14. However, the practicality and common use of such methods in industry remain a matter of debate. Experts in AI ethics15,16 have noted that this technical approach to fairness is not always feasible or consistent across the industry. Moreover, a purely technical perspective on fairness often overlooks the complex, varied realities of how individuals experience fairness in real-world scenarios.
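To make the contrast concrete, the ‘technical approach to fairness’ mentioned above often reduces to computing a group parity statistic. The following is a minimal sketch of one such metric, demographic parity difference; the data, group labels and setting are illustrative assumptions, not part of any cited framework.

```python
# Minimal sketch of a group-fairness check of the kind AI engineers often
# rely on: demographic parity difference between two user groups.
# All data and labels here are invented for illustration.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: 0/1 model outputs (e.g. 1 = content recommended)
    groups: parallel list with exactly two distinct group labels
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "sketch assumes exactly two groups"
    rates = []
    for label in labels:
        selected = [p for p, g in zip(predictions, groups) if g == label]
        rates.append(sum(selected) / len(selected))
    return abs(rates[0] - rates[1])

# Example: a recommender surfacing enrichment content to two child cohorts.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
cohort = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
print(demographic_parity_difference(preds, cohort))  # 0.75 - 0.25 = 0.5
```

A single number like this says nothing about how a particular child experiences unfair treatment, which is precisely the limitation the critique above points to.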

Transparency and accountability

Transparency and accountability are often brought up together in the literature, and are closely related to supporting children’s best interests in the design of systems. Accountability requires an ability to identify a chain of responsibility for system (mis-)behaviours, and to justify how the designers and developers of AI systems should be held accountable10. Very few AI frameworks mention the importance of paying extra attention to systems that could be accessed by children, particularly the importance of impact assessments17. The AI frameworks proposed by UNICEF and the UN were among the few that urged constant review, updating and refinement to integrate children’s rights2. References to transparency encompass efforts to improve the understandability of the information given8, to enable the caretakers of children to understand the impact on children18 and to make information accessible8,13. It is essential to recognize that interpretations of transparency and accountability can vary, emphasizing different aspects across sectors. In education, transparency clarifies AI-driven learning, personalization and data protection19. For children’s online safety, it involves clear data handling and algorithmic safety communication8. In healthcare, it addresses the role of AI in diagnostics and the handling of children’s health data20.

Privacy, manipulation and exploitation

Privacy is a recurring theme in the current literature, typically discussed in relation to data protection8, data security21 and data trust22. For child-specific data privacy, numerous frameworks have highlighted the importance of regulating data practices for children, including setting higher default privacy settings on child-accessible systems, retaining minimal personal data and avoiding sharing such data if detrimental effects are foreseeable8,10,17. Privacy is closely linked to preventing data exploitation and manipulation, particularly concerning AI’s data-driven personalized targeting methods and their potential harms in various contexts17. Current principles emphasize heightened scrutiny of AI systems, particularly with respect to their impact on children’s behaviour or emotions, offering concrete examples of behavioural manipulation for practitioners to avoid, such as using personal data to incentivize engagement, nudging children to continue playing by implying potential loss or exploiting children’s vulnerabilities through the profiling of their personal data7,8,23. Beyond the commercial aspect of children’s data, the professional use of their data in areas such as paediatric bioethics brings to light the intricate balance between the privacy rights of professionals, parents, caregivers and the children themselves. An important challenge is whether a child possesses the capacity to provide informed consent, given their age or cognitive ability. The enduring ethical obligation remains: protect a child’s confidentiality, respect parental access and foster a constructive parent–child relationship.
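As an illustration of what ‘higher default privacy settings’ might look like in code, the sketch below shows a hypothetical configuration for a child-accessible service. Every key, value and threshold is an assumption made for illustration; none is prescribed by the frameworks cited above.

```python
# Hypothetical defaults for a child-accessible service, in the spirit of
# the data-practice recommendations above. All keys and values are
# illustrative assumptions, not drawn from any standard or regulation.

CHILD_DEFAULT_PRIVACY = {
    "profiling_for_ads": False,         # no personalized targeting by default
    "data_retention_days": 30,          # retain minimal personal data
    "share_with_third_parties": False,  # avoid sharing where harm is foreseeable
    "geolocation": "off",
    "loss_framed_nudges": False,        # no 'keep playing or lose X' prompts
}

def default_settings(user_age: int) -> dict:
    """Apply the strictest defaults to any account under 18."""
    if user_age < 18:
        return dict(CHILD_DEFAULT_PRIVACY)
    return {}  # adult defaults would be defined elsewhere

print(default_settings(9)["data_retention_days"])  # 30
```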

Safety and safeguarding

The term ‘harm’ is central to safety and safeguarding in AI usage, the aim being to prevent harm to users and to shield them from harmful effects24. Broad references to safety include general pleas for safety and security, or an expectation that AI should avoid causing predictable or unintentional harm. This concept is closely related to children’s right to life and development, necessitating more nuanced considerations that take the biological and psychological distinctions between children and adults into account, and recognize that children may interact with digital services and apps in unforeseeable ways. Meanwhile, interpretations of safety can also vary across domains. In the educational sector, safety relates to educational disparities caused by AI tools, misinformation affecting learning outcomes or the emotional distress caused by biased AI assessment. Within healthcare, safety can encompass erroneous AI-generated diagnoses, flawed treatment recommendations or the misuse of personal health data that results in privacy breaches20. When considering media interaction, safety entails protecting children from inappropriate content, cyberbullying, online grooming, excessive screen time and exploitative information8,25,26. This underscores the necessity for AI systems that can effectively shield children from harmful and unreliable content while simultaneously upholding their right to freedom of expression.

Age-appropriateness and right to be heard

This theme is most closely aligned with children’s right to be heard. The term ‘developmental stage’ is frequently brought up in related references7,11,17, encouraging stakeholders to respect the evolving capacities of children as an enabling principle that addresses their gradual acquisition of competencies, understanding and agency7. Statements have also been made about how special attention should be paid to the effects of technology in children’s earliest years of life, and to supporting relationships with parents and caregivers, which is crucial for shaping children’s cognitive, emotional and social development7,19. This sentiment often extends to fostering children’s voices, appropriate to their age and developmental stage. AI design codes for children have underlined the significance of designing not just for children, but with them27. Practical implementations of these developmental considerations could include age-adapted transparency28 and control mechanisms for children at different developmental stages2, as sketched below. Adopting methodologies that actively involve children and encourage their contributions could also cater to their developmental needs27.
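One possible reading of ‘age-adapted transparency’ is choosing how a system explains itself according to a coarse developmental band. The sketch below is purely illustrative: the age cut-offs and wording are assumptions, and, as discussed later, fixed age categories are themselves a simplification.

```python
# Illustrative sketch of age-adapted transparency: varying how a
# recommendation is explained by developmental band. The bands and
# phrasings are assumptions for illustration, not a validated scheme.

def explain_recommendation(age: int, item: str) -> str:
    if age < 7:     # early childhood: simple, concrete language
        return f"We picked '{item}' because you liked things like it."
    elif age < 13:  # middle childhood: introduce the idea of history
        return f"'{item}' was suggested from what you watched before."
    else:           # adolescence: name the mechanism and offer control
        return (f"'{item}' was ranked by a recommender trained on your "
                f"viewing history. You can reset or edit that history.")

print(explain_recommendation(6, "Puppy Songs"))
print(explain_recommendation(15, "Space Documentary"))
```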

Challenges in translating ethical AI principles into practice for children

Translating principles into practice is a well-known challenge, yet the distinct context of doing so for children introduces specific complexities. Children differ inherently from adults in their moral significance. Irrespective of their intellectual capabilities, children do not possess the same level of autonomy and responsibility as adults. As such, AI principles for children should not be treated merely as an extension of general user guidelines or a subcategory of guidelines for socially vulnerable groups. Given children’s distinct attributes and circumstances, we underscore four major challenges in adapting ethical AI principles for their benefit.

Lack of consideration of the developmental aspect of childhood

The key challenge in translating AI principles into practice, as highlighted by numerous studies, is the absence of consistent professional codes and norms for AI applications, owing to the vast range of application forms and technologies aiming at different goals with different stakeholders29,30,31. Incorporating children into the AI ethics conversation introduces a new layer of complexity because of their diverse needs, age ranges, developmental stages, backgrounds and characteristics. Their unique physical and psychological traits necessitate special care in the deployment of AI systems that shape their information, services and opportunities.

As previously indicated, current ethical AI frameworks either overlook the distinct role of children or categorize them as a homogeneous group. The integration of children into these principles seems more like a superficial gesture of compliance, failing to truly address their distinctive needs and viewpoints. In fact, in many instances, the term ‘children’ could be swapped out for ‘socially vulnerable groups’ without substantial change to the content or context. To contemplate the ethical nuances associated with the involvement of children and adolescents in research, we must delve deeper into the wide-ranging notion of ‘childhood’. A critical feature distinguishing children from general users is developmental progression: the transition from the utter dependence of infancy to the comparative self-reliance of youth. Implementing straightforward age categories is insufficient, given the vast differences in children’s intellectual abilities, pace of growth, maturity and experiences. Therefore, to ensure that ethical considerations match the unique requirements and circumstances of each child, it is crucial to adopt a more nuanced approach that goes beyond mere age-specific classifications.

The current deficiency in diverse considerations further reflects the lack of concrete conceptualization of what children’s best interests truly signify. Often referenced in the literature, the term is generally employed to denote a child’s overall well-being, development and protection in a vague manner. This traditional interpretation of ‘children’s best interests’ may not fully apply, or may need modification, when considering children in diverse circumstances. Essentially, this suggests that our usual understanding of certain terms may be inadequate or may necessitate refinement, particularly when considering the specific contexts and characteristics of children.

Lack of consideration of the role of guardians in childhood

One aspect that distinctly separates children from other general users is the presence of parents or guardians, who hold important legal and ethical roles in making decisions for their children. It is crucial to acknowledge the meaningful moral distinction between ‘competent children’ and ‘adults’. Despite their intellectual abilities, children do not possess the same rights or responsibilities as adults; ethically and legally, parents bear this responsibility. Consequently, it is essential to examine the roles of parents in this context and consider their unique interests, which may differ from those of other ‘adults’. However, existing references on ethical AI principles reveal a critical gap in addressing surrogate or substitute decision-making for children, such as decisions made by parents. Given that many ethical AI frameworks barely address the roles of children, it is unsurprising that the interactions between children and their guardians are infrequently discussed. Such a gap in examining parental decision-making may inadvertently increase children’s risk exposure.

On the other hand, the few frameworks that do address parents and families, such as those used by UNICEF, often adhere to a traditional assumption, portraying parents as possessing superior expertise and skills to steer their children through the digital landscape and facilitate their learning. The topic of effectively bolstering children’s personal resilience, however, is rarely addressed. Although parents undoubtedly have a pivotal role in protecting their children in the digital world, children, born into the digital era, interact with the AI environment with an ease that is almost instinctive; their parents, by contrast, may lack the same depth of understanding in this domain32. This potential shift in expertise underscores the need to transition from a parent/teacher-led approach to a child-centred approach33, moving from instruction to support in children’s experiences and fostering self-determination, values and self-identity. This aligns with the child–computer interaction community’s trend34 of promoting children’s autonomy and resilience in AI interactions, including critical comprehension of digital environments and informed choices. Guardians (parents, teachers or carers) and children should collaborate, yet current ethical AI principles lack guidance on enhancing joint consent and decision-making processes.

Lack of child-centred evaluations considering children’s best interests and rights

One key challenge in translating ethical AI principles into practice for children is the difficulty of translating high-level principles into quantifiable outcomes or technical standards. A recent survey of 188 AI systems developed for children showed that almost all of them relied solely on technical evaluations to measure their performance, such as the accuracy, precision and recall of the results generated by the systems31. Indeed, quantitative measurements have long been considered the gold standard for performance assessment in the fields of AI, algorithms and related disciplines, providing a seemingly objective and standardized method for evaluating the effectiveness of various systems and models. However, as these fields continue to advance, it has become increasingly clear that relying solely on quantitative metrics can fall short of certain empirical requirements. For instance, within the current AI community it is not surprising to see a principle or requirement for safety and safeguarding translated directly into identifying inappropriate online content, evaluated by the accuracy of its classification, as illustrated below. Another example is the principle of sustainability and age-appropriateness: how the developmental needs and long-term well-being of children could even be evaluated by technical measures remains questionable.
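The evaluation pattern just described typically looks like the following: a content classifier is scored by precision and recall and the safety principle is considered ‘satisfied’. The labels and data here are invented for illustration.

```python
# The purely technical evaluation pattern described above, in miniature:
# scoring an 'inappropriate content' classifier by precision and recall.
# All labels are invented for illustration.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 1, 0, 0, 1, 0]  # 1 = inappropriate content
y_pred = [1, 0, 0, 1, 1, 0]  # classifier outputs
print(precision_recall(y_true, y_pred))  # (0.666..., 0.666...)
```

Such scores say nothing about whether the system respects a child’s developmental needs or long-term well-being, which is exactly the gap discussed next.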

This is not to say that technical evaluations are unimportant; they are, but translating principles into practice requires more than that. The current trend of relying solely on quantitative measurements may steer the AI community away from prioritizing human-centred factors when designing for children. As a result, critical needs, such as understanding how users wish to be treated by the systems developed, may be overlooked. This mandates a more balanced approach that weighs empirical, human-centred factors alongside quantitative measurements. More fundamentally, it also necessitates a paradigm shift towards a more human-centred approach within the AI community.

Lack of a coordinated, cross-sector and cross-disciplinary approach

Meanwhile, a pertinent challenge arises from the absence of supportive resources, theories and empirical evidence to facilitate the translation from principle to practice. Ethical AI principles pertaining to children’s rights, such as those championed by organizations like UNICEF and the UK Information Commissioner’s Office (ICO)2,8, predominantly focus on domains such as education, health, privacy and online safety. Discussions concerning potential risks, the disadvantages arising from fairness and non-discrimination oversights, and the deficiency of robust legal and professional accountability mechanisms have been inadequate.

Conversely, it is interesting to see that while shared resources on developing AI for children are limited, experts from other domains (for example, law and psychology) often have their own established codes of practice and substantial knowledge bases to draw on. It is even more interesting to observe that experts from different domains sometimes work on analogous issues yet use completely different vocabularies and methodologies. The discussion in ‘A review of ethical AI principles and how they relate to children’ above illustrates how diverse terminologies can convey disparate meanings and contexts depending on the domain of expertise. The crux of the challenge lies in the adaptability of these terms and methodologies across different AI principles. This highlights the need for strengthened collaboration within the child–computer interaction community, to harmonize multidisciplinary domains and encourage knowledge transfer that avoids duplicated effort. Ultimately, such cross-sector and cross-disciplinary cooperation could play a pivotal role in safeguarding children’s interests, rights and well-being in AI system development.

Future directions for ethical AI for children

While there is a rising sense of urgency to focus on child-centred AI, marked by notable initiatives such as Exploring Children’s Rights and AI at the Alan Turing Institute and Child Rights by Design by the Digital Futures Commission, our approach uniquely aims to bridge the gap between theoretical ethical AI principles and their real-world application in contexts centred on children. Within this larger agenda-setting imperative, we present preliminary recommendations that explicitly link ethical AI considerations to actionable guidelines in the realm of child-centred AI technologies.

Increased stakeholder involvement

Our analysis indicates that the current principled approach to AI for children fails to take into account critical considerations regarding the best interests of children, and is ambiguous about the actual needs and empirical requirements of children and their families, who are rarely consulted. On the other hand, research indicates that stakeholders, including children, parents and other caretakers, strongly desire a say in defining how they should be treated by AI systems35. We recommend that future designers and developers of AI for children take a more participatory and inclusive approach, including stakeholders such as parents, schools and teachers, practitioners and, most importantly, children themselves from diverse backgrounds, to better understand what different stakeholders actually need. In recent years, participatory methods involving stakeholders have gained popularity in the field of human–computer interaction (HCI). These approaches include organizing focus groups with parents, collaborating with educators to develop AI tools that align with curriculum needs and enhance learning, and consulting child psychologists to ensure that AI technology is age-appropriate36. There has also been substantial effort in specialized fields such as healthcare to incorporate the perspectives of children and young people regarding the role of AI in medicine37 and clinical care38. Specifically, the child–computer interaction community has increasingly emphasized methodological approaches that involve children directly in the design process39,40,41. This movement advocates empowering children by giving them a voice in these processes, thus supporting their autonomy and fostering resilience. However, such efforts and methodologies have typically been confined to the HCI field; as discussed above, the prevailing convention within the AI and algorithm community still revolves around purely technical and quantitative evaluation metrics, which leads us to the second recommendation.

Direct support for industry designers and developers

To ensure that ethical AI principles are actually implemented in practice, it is crucial to involve designers and developers directly in the process of developing these principles and building the associated best-practice guidelines that transform them into practice. Recent research has shown that the lack of guidance in navigating the often abstract landscape of AI principles for children, as well as the treacherous landscape of existing tools such as third-party services42, remains a critical bottleneck for industrial practitioners in creating ethical technologies: industry support for developers interpreting the guidelines is lacking, and supporting resources (for example, existing libraries and tools) for translating such principles are scarce. One way to address this open challenge is to create mechanisms that incentivize community building among practitioners, designers and developers, facilitating the sharing and building of knowledge and even the development of momentum for fundamental change. Furthermore, we suggest that ethical AI practitioners and organizations increase their collaboration with developers and designers to take a bottom-up approach and create a shared foundation for industry standards and best practices. We anticipate that bringing together the knowledge and expertise of these multiple stakeholders will not only catalyse practical and workable guidelines and resources, but also promote a culture of accountability and continuous improvement.

Establishment of legal and professional accountability mechanisms

Recent regulatory efforts such as the Online Safety Bill26, the EU Artificial Intelligence Act43 and the Algorithmic Accountability Act44 have taken the first steps towards addressing the challenges of regulating AI. While these regulations have made a good start in promoting the responsible and accountable use of AI technology, the importance of child-specific legislation is often underestimated. Although certain legislation, such as the US Children’s Online Privacy Protection Act (COPPA) and the Online Safety Bill, mentions aspects relevant to children, it typically focuses more on children’s general well-being online and less on AI-specific impacts and harms. Meanwhile, existing AI regulatory efforts often focus on specific sectors or applications of AI, and often underscore the importance of collaboration between different stakeholders, including policymakers, industry players and civil society groups. Finally, we often hear statements suggesting that regulations may never fully catch up with the rapid pace of technological development, a sentiment sometimes used as an excuse for the absence of up-to-date legislation and professional standards. Future legislative efforts could use the position of the UN Convention on the Rights of the Child on children’s digital rights as a foundation, and explore how to adapt children’s basic human rights to the context of the digital world.

Increased multidisciplinary collaboration around a child-centred approach

One of the main challenges in converting abstract principles into a reliable set of guidelines for children is the inadequate availability of resources. For instance, despite all working towards designing better experiences for children, researchers from the HCI and design domains typically focus on the interaction between children and AI, along with their user experience and perceptions of a specific topic45, whereas researchers from the education domain may focus more on children’s learning performance and long-term behavioural change46. Likewise, work from researchers in the policy guidance domain may be more heavily oriented towards how AI for children could be associated with greater societal impact2. Fostering collaborations among experts from domains such as HCI, design, algorithms, policy guidance, data protection law and education, along with various practices from within the AI and related communities, could unite voices that might deploy different terminologies. Ultimately, we assert that there is a compelling necessity to forge a transformative discipline, as we have noticed a disconnect in the knowledge and methodologies employed by researchers and practitioners across various disciplines. This necessitates a radical rethinking of our disciplinary approaches, transcending boundaries and integrating the strengths and perspectives of numerous disciplines. In particular, this new interdisciplinary sphere should strongly advocate a child-centred approach, wherein the needs, experiences and perspectives of individuals, particularly children, are at the forefront of design and implementation. This integrated focus will enable us to devise future ethical AI systems that are equipped to address, and are well prepared for, the unique socio-technical challenges of creating AI systems for children.