Tech firms launch child-specific AI amid safety concerns

OpenAI announced on the 16th, local time, that it plans to release a dedicated version of ChatGPT for users under 18 by the end of this month. The company said that users identified as minors will automatically be directed to a ChatGPT environment tailored to their age, which blocks explicit or violent content and allows law enforcement to intervene in severe crisis situations. Parental controls have also been strengthened: parents can link their ChatGPT account to their child's, set time limits on chatbot use, and receive alerts if their child shows signs of acute emotional distress.

Big tech companies are rolling out artificial intelligence (AI) services tailored to children and adolescents one after another. Until now, children using AI chatbots have been exposed to explicit or violent content and have grown excessively dependent on them, harms that drew sharp public criticism. In response, big tech firms are adding safety measures such as parental controls to adult-oriented AI and developing AI specialized for education, both to quell criticism and to expand their user base over the long term. Still, concerns persist that these safeguards are ineffective, as incidents involving AI chatbots continue to occur.

◇A Steady Stream of Child-Specific AI

xAI, the AI company founded by Tesla CEO Elon Musk, announced in July that it would develop a child-specific AI. At the time, Musk said on social media that he would create "Baby Grok," a version of the Grok chatbot tailored for children.

Beyond xAI, Google's Gemini requires parental consent when minors aged 13 to 18 create accounts and incorporates protection features for children and adolescents. Character.AI, a conversational AI platform where users can chat freely with AI characters, has also introduced a model exclusively for minors that automatically blocks or restricts sexual conversations and other sensitive content.

AI specialized for children's education and play is also under development. Mattel, maker of the Barbie doll, announced a collaboration with OpenAI to launch an "AI Barbie" that can converse with children and adolescents to support their emotional and social development. Unlike traditional Barbie dolls, which play pre-recorded lines when a button is pressed, AI Barbie is expected to hold real-time conversations much like ChatGPT.

◇"Easy to Bypass": Incidents Keep Occurring

While tech companies race to develop child-friendly AI, significant concerns remain. Even with parental oversight and content filters, AI's tendency to hallucinate means children and adolescents can still be exposed to distorted information or develop emotional dependence on chatbots.

Incidents continue despite the introduction of AI models for minors, because safety measures are often bypassed. In California, a boy conversed with ChatGPT for months before attempting self-harm; the chatbot reportedly advised him multiple times to visit a counseling center, but he easily circumvented its safeguards by claiming he was "writing a novel."

On Character.AI, which introduced a minor-specific model, a 13-year-old girl took her own life after conversing with an AI chatbot, and her parents filed a lawsuit against the company in a Colorado court. As such harms mount, the U.S. Federal Trade Commission (FTC) recently opened an inquiry into the potential effects of AI chatbots on children and adolescents.

U.S. politicians are also calling for stricter regulation of AI chatbots aimed at children. In July, Republican Senator Josh Hawley convened a hearing at the U.S. Capitol in Washington after reports that Meta's internal policies allowed its AI to engage in romantic or sensual conversations with children. The Senate subsequently instructed Meta to submit internal assessments of its products' impact on children and of the effectiveness of its parental controls.

Some argue that minors should be barred from AI outright until it is proven safe. Josh Golin, head of the U.S. child online safety group Fair Play, stated, "AI should be inaccessible to children until it is proven safe."
