AI Children: Kids born after 2020 are growing up inside a technology experiment with no control group
April 22, 2026

The First Generation That Will Never Know a World Without AI. And Nobody Asked Them.
Children born after 2020 are growing up inside a technology experiment with no control group, no consent form, and no exit. The companies running the experiment are not scientists. They are corporations. And the results will not be known until it is too late to change them.
There is a child alive right now who has never seen a world without a voice assistant answering questions in the kitchen. She has never watched a cartoon that was not algorithmically selected for her before she could speak. Her first stuffed animal had a microphone inside it. Her parents monitored her sleep with an AI-powered camera that uploaded her breathing patterns to a server in a data center she will never visit. She is four years old.
She is not a hypothetical. She is representative. She is one of the hundreds of millions of children born globally since 2020 who are growing up as the first full generation of what researchers have begun calling Generation AI: children whose cognitive, emotional, and social development is being shaped by artificial intelligence systems from before they can walk, during the most critical developmental window of their entire lives, in ways that no scientist has yet been able to fully measure, predict, or reverse.
The technology companies marketing products to her parents do not know what they are doing to her. They know they do not know. They sold the products anyway.
Before She Could Talk, the Algorithms Were Already Listening
The AI encounter for children born after 2020 does not begin when they pick up a phone or sit in front of a tablet. It begins in the nursery.
The global AI baby monitors market was valued at $621 million in 2025 and is projected to reach $1.41 billion by 2034, growing at 12.5 percent annually. These are not passive cameras. They are AI systems that track breathing patterns, classify sleep cycles, analyze crying, and send behavioral profiles of infants to cloud servers where the data is retained, analyzed, and in many cases sold to third parties.
The FTC took action against robot toy maker Apitor for allowing a Chinese third party to collect sensitive data from children, in violation of COPPA. The company faced a $500,000 penalty, suspended because it could not pay, and was required to delete the illegally collected data. Despite the rules, violations continue, because there is no pre-clearance review of toys before they are sold: companies can sell privacy-violating toys until someone catches them.
Amazon paid $25 million to settle claims that Alexa-enabled devices retained children's voice recordings for years without proper consent. Many families use these devices as baby monitors or keep them in children's rooms, meaning young children's voices have been collected and stored throughout their earliest years of language development.
About six in ten parents of children aged 2 to 8 report that their children interact with a voice assistant such as Siri or Alexa, and half say their child does this at least once a day. Research from the Boston Children's Digital Wellness Lab found that the youngest children are also most likely to attribute human-like thoughts and emotions to technology.
This last finding deserves to be read slowly. The children most cognitively vulnerable to forming false beliefs about what AI is and what it understands are the youngest children. The children who spend the most time interacting with AI voices are the youngest children. These two facts are not a coincidence or an accident. They are the predictable outcome of a market structure in which the products with the most persistent access to the youngest children are the most profitable ones.
A booming market of AI-enabled baby monitors, smart toys, and educational apps means even the youngest infants can encounter AI before they can walk or talk. In September 2025, researchers issued a formal statement cautioning that current evidence is insufficient to determine the effects of AI companion interactions on babies and toddlers, and noted that technology companies' primary goals are profit and attention, which could skew product design away from what the science would recommend.
That statement was issued in September 2025. The products have been on sale since 2021. The experiment was already four years old before the scientists felt confident enough to say they did not know what it was doing.
The Toy in the Bedroom Is a Data Collection Device
Today's AI toys connect to the internet and use chatbots like ChatGPT to have conversations with children. The toys come with built-in microphones that record what children say to them. Because chatbots are programmed with a degree of randomness, AI toys will not typically respond the same way twice and can sometimes behave differently day to day.
Testing of AI chatbot toys found that some will talk in depth about sexually explicit topics, offer advice on where a child can find matches or knives, express dismay when a child says they have to leave, and have limited or no parental controls. Privacy concerns include the fact that these toys can record a child's voice and collect sensitive data through methods such as facial recognition scans.
AI-enabled toys, primarily manufactured in China, are raising alarms among US officials over potential data privacy risks for children. The global smart toy market is projected to reach $25 billion by 2030, with $14 billion in China alone.
In January 2026, California State Senator Steve Padilla introduced SB 867, which proposes something no other state has attempted. The bill proposes a complete moratorium on AI chatbot toys marketed to children: no sales, no manufacturing, for four years, until January 1, 2031. Padilla's argument is that current safety regulations for such technology are in their infancy. Nobody knows what these things do to children, and they are being sold anyway. The bill extends earlier protections from California's SB 243, which requires chatbot operators to implement safety guardrails and allows families to sue developers whose products harm children.
Almost half of parents have purchased or considered purchasing AI-enabled toys or devices for their children, despite risks and their own stated concerns. Common Sense Media's CEO observed that most toys are required to undergo rigorous safety testing before they hit the market, but the industry still lacks meaningful child safeguards for AI.
The Children's Online Privacy Protection Act, COPPA, was written in 1998 and was intended for websites that collect data from children under 13, not for toys that can hold conversations or form emotional bonds. When a smart doll learns a child's behaviors, remembers their preferences, and adapts over time, COPPA says nothing about what that doll can say back or how it might shape a child's feelings. Congress has proposed COPPA 2.0 to raise the protection age to 16 and require data deletion options, but even that would not fully address the psychological risks that AI companions pose.
The law designed to protect children's data online was written when Google was one year old. It is now governing a market of AI toys that form emotional relationships with three-year-olds.
The Screen Time Problem Has Been Upgraded
The screen time debate predates artificial intelligence. Pediatricians, researchers, and public health officials have argued for years about how much television, tablet time, and social media exposure is safe for developing children. That debate has now been rendered almost quaint. The question is no longer how much time a child spends in front of a screen. It is what the screen is doing back.
Children aged 0 to 8 average approximately two and a half hours of screen time daily, rising to nearly three and a half hours for children aged 5 to 8. YouTube, which is increasingly embedding generative AI, has seen daily use among children under 2 increase from 24 to 35 percent over the past five years, and from 38 to 51 percent among children aged 2 to 4.
A child passively watching a broadcast television program in 1995 was receiving a fixed input. The program did not change based on the child's emotional state, did not remember what held the child's attention longest, and did not optimize its next output to maximize the probability that the child would keep watching. YouTube's recommendation algorithm does all three, and it does them from the first time a toddler picks up a parent's phone.
Brain development is greatest during the earliest years of life, and the relationships between a child, their caregivers, and other children are core to what it is to be human and essential for healthy development. AI is often invisible, but it still shapes the way a parent or caregiver raises a child. The child cannot choose, question, consent to, or even recognize the technologies that are shaping their earliest experiences.
This is the dimension of the problem that the political and legal debate has almost entirely failed to address. The chatbot suicide cases, devastating as they are, involve teenagers old enough to type. The children born after 2020 are being shaped by AI before they have language to describe the experience. There is no chat log. There is no transcript. There is only a pattern of development that researchers will spend decades trying to untangle from every other variable in a child's life.
What the Research Actually Shows, and What It Does Not
Early childhood, specifically the period from birth to 6 years of age, is a critical developmental window of rapid cognitive, social, and emotional growth. Experiences with AI technologies during this sensitive window, whether through educational tools, digital play, or interactions mediated by caregivers, may have long-lasting implications for learning and development.
What research exists is not uniformly negative. Studies have shown that children can learn effectively from AI, as long as the AI is designed with learning principles in mind. AI companions that ask questions during activities like reading can improve children's comprehension and vocabulary. AI tutoring systems that adapt to a child's pace and provide immediate feedback have produced measurable gains in literacy and numeracy in controlled studies. AI-powered speech therapy tools have helped children with language delays practice in ways that would otherwise require expensive professional time their families could not afford.
The problem is not that AI is uniformly harmful to children. The problem is that the potentially beneficial AI and the potentially harmful AI look identical to a three-year-old, are often built on the same underlying models, and are governed by the same absent regulatory framework. A learning app designed by a child development researcher and an engagement-maximizing app designed by a growth team share a platform, share a recommendation algorithm, and share a data infrastructure. The child cannot tell the difference.
Research on 37 children aged 6 to 10 found that children had difficulty distinguishing human-created from AI-generated information. Prior research has shown that children tend to anthropomorphize AI agents, ascribing human-like characteristics to technologies like Alexa or robots, such as the ability to think, feel, and know things.
Even adults often perform no better than chance in differentiating AI from human responses. Children also had difficulty telling AI from human responses if they were not told ahead of time which they were interacting with.
This finding has a consequence that goes beyond the laboratory. A child who cannot tell whether the voice answering their question is a person or an algorithm is a child who may be forming models of human relationship, trust, and emotional connection based on interactions that are fundamentally unlike human ones. Researchers do not yet know what that does to the development of empathy, attachment, and social cognition. What they know is that it is happening, at scale, right now, in millions of bedrooms.
The Cognitive Stakes: Skills Adults Lose, Children Never Build
A meaningful distinction needs to be made between what AI does to a 45-year-old and what it does to a 14-year-old. An adult who uses AI to summarize a research paper can evaluate the output against years of existing knowledge; they are offloading a task they already know how to do. A child encountering the same topic for the first time has no reference point to compare the AI's output against. For the adult, AI use is substitution. For the child, substitution becomes foreclosure.
Research on software developers using AI showed that adults who fully delegated coding tasks to AI produced working code but failed conceptual tests afterward. They could not debug what the AI had written for them. They had the output without the understanding. These were adults with existing expertise, who performed 17 percent worse than the group without AI assistance. Now consider a child encountering a subject for the first time with zero expertise to fall back on.
LLMs homogenize not just language but also perspective and reasoning strategies, converging toward the Western, educated, mainstream norms that dominate their training data. For an adult, the result is prose that sounds generic. For a child who has never formed independent reasoning, generic is not a style problem but an identity problem. The model's reasoning does not compete with the child's reasoning. It becomes the child's reasoning.
Among Gen Z students currently in school, 42 percent believe AI will be harmful rather than helpful to their ability to think carefully about information, compared to 25 percent who believe it will be helpful. Agreement that AI can accelerate learning has fallen seven points in one year. Gen Zers are less optimistic today than last year that AI will enhance their creativity and research skills.
Crucially, that skepticism is coming from teenagers, children old enough to have formed cognitive frameworks before the current wave of generative AI arrived. The children born after 2020 will not have that reference point. They will not know what thinking felt like before the tool was available, because the tool will have always been available. Whether that produces a generation of more capable, AI-augmented thinkers or a generation whose capacity for independent reasoning was foreclosed before it was built is the most important unanswered question in child development today.
In School: 600 AI Policies and No Consensus
The share of K-12 students who report that their school has AI rules jumped from 51 percent in 2025 to 74 percent in 2026. Access to AI tools from school computers rose from 36 to 49 percent over the same period. Among students whose school has a policy, 65 percent are now permitted to use AI for schoolwork, up from 55 percent in 2025. Still, only 28 percent of students say their school provides them with AI tools to use for schoolwork.
The gap between those two numbers is the story. Nearly three quarters of schools now have an AI policy. Only a quarter have provided students with the tools those policies are supposed to govern. The policies are arriving faster than the infrastructure, and both are arriving faster than the research that should be guiding them.
Seventy-two percent of parents believe AI tools should be part of their child's education. Students aged 13 to 17 are more optimistic about AI than their parents: 86 percent view AI as a helpful learning tool, compared to 64 percent of their parents. Students are also more confident in their ability to use AI responsibly. However, parents are far more worried about AI dependency than students are, and the optimism gap may indicate that students are too close to the technology to see the risks clearly.
Research has shown that children as young as preschool age can be taught AI literacy, which helps them more effectively assess the strengths and limitations of AI. But most AI literacy programs exist in the research world, not in commercially available or widely deployed curricula. The implementation gap between what the research shows is possible and what is actually happening in classrooms is wide and growing.
The Political Failure and What It Will Cost
There is a pattern running through every layer of this story: the technology arrives, the harms emerge, the research begins, and the regulation follows years or decades later, long after the architecture of daily life has been built around the technology and the cost of changing it has become prohibitive.
This pattern played out with leaded gasoline, with cigarettes, with social media. In each case, the companies that profited from the harm spent lavishly to delay the research, contest the findings, and capture the regulatory bodies that should have acted sooner. In each case, the damage done in the years of delay was irreversible. Children who grew up breathing leaded air, who grew up with addicted parents, who grew up with algorithmically optimized social media shaping their adolescent identity, carry that exposure permanently.
The children born after 2020 are inside the delay period right now. The research is beginning to identify the risks. The regulation has not arrived. The products are on sale, in millions of homes, interacting with children whose brains are in the most rapid phase of development they will ever experience.
Since February 2025, the EU AI Act has explicitly banned AI applications that pose an unacceptable risk to children, including voice-activated toys that encourage dangerous behavior. The UK is examining AI toys under existing product safety frameworks. California is the only US state attempting a temporary moratorium, and even that bill has not yet passed.
The United States, the country that built most of these products and is home to most of the companies selling them into children's bedrooms, has no binding federal framework governing AI interactions with children under six. It has a children's privacy law written in 1998. It has a reactive enforcement model that fines companies after the harm has occurred, and that has suspended at least one such fine because the company could not pay.
What it does not have, and what no country fully has, is an answer to the question that matters most: what does it do to a human being, to their capacity for love and reason and independent thought, to grow up inside an AI system from the first days of consciousness?
We will find out. The children born after 2020 are already telling us. We just have not figured out how to listen yet.
Sources: Brookings Institution, Common Sense Media 2025 Census, Gallup Panel Survey April 2026, Psychology Today, Harvard Graduate School of Education, Temple University/PMC research, California SB 867, California SB 243, FTC COPPA enforcement actions, US PIRG Trouble in Toyland 2025, Springer AI Brain and Child journal, Built In, State of Surveillance, April 2026.