Classroom AI Tool Shows X-Rated Images To Kids

The rapid push to bring artificial intelligence into classrooms is colliding with real-world complications, and a recent incident in California has ignited fresh debate over whether schools are moving too quickly. What began as a simple fourth-grade homework assignment in the Los Angeles Unified School District has instead become a case study in the risks and unanswered questions surrounding AI use among young students.

In December 2025, a fourth-grade class at an LAUSD elementary school received an assignment asking students to design a book cover inspired by the beloved children’s character Pippi Longstocking. One student used Adobe Express for Education, a graphic design platform that includes an AI-powered image generator, to create an illustration. The prompt entered was straightforward: “long stockings, red haired girl with braids sticking out.”

What came back was anything but child-friendly.

According to a parent identified as Julie, the AI tool generated several images that appeared sexualized. Three of the images reportedly depicted a woman wearing a tight short skirt with long black stockings, while a fourth image showed a woman in what appeared to be black lingerie paired with stockings and high heels. The images were generated on a school-issued Chromebook through Schoology, the district’s learning management system that hosts approved digital tools and course materials.

The troubling part, Julie explained, was that the output did not appear to be an isolated glitch. Other parents reportedly tested the same prompt on their children’s school devices and received similar results.

“So it’s not like this was an edge case or a one-off,” Julie said, arguing that the results point to deeper issues with how AI tools are being deployed in classrooms.

The incident has also raised questions about transparency. Julie said that tools like Google Gemini appeared to be accessible within the school’s digital environment without clear communication to parents or possibly even teachers. From her perspective, the technology simply appeared in the platform without meaningful notice.

California began formally exploring AI integration in education in 2023 following the explosion of interest in generative tools such as ChatGPT. The California Department of Education released guidance titled “Learning with AI, Learning about AI,” which emphasized both the educational opportunities and ethical considerations surrounding the technology.

Pilot programs and implementation efforts followed, and updated guidance for safe AI use in TK-12 classrooms was released last year. The guidance aimed to support school districts through training resources and policy frameworks designed to help educators manage the new technology responsibly.

But California’s system of local control means that individual school districts ultimately craft and enforce their own policies. According to Christian Pinedo of The AI Education Project, who serves on the state’s AI working group, that structure can lead to inconsistent safeguards across districts.

“Each individual school board creates their own policies,” Pinedo explained, noting that statewide organizations can only provide recommendations rather than enforce uniform standards.

That gap between policy and practice appears to be at the heart of the LAUSD controversy. The district’s own policy states that students must be at least 13 years old to use generative AI tools and must complete digital citizenship training beforehand. Yet parents say AI-related activities have been promoted across a wide range of grade levels.

Julie pointed to a district-wide “Hour of AI” activity held at the end of the last school year, in which K–12 students were encouraged to participate in a Code.org exercise involving choreographed dance creation. She chose to opt her kindergartner out, but questioned how such initiatives align with the district’s stated age restrictions.

Experts in AI education say the situation highlights the importance of careful supervision. Amy Eguchi, a computer science education professor at the University of California, San Diego, emphasized that elementary school students should not be left alone with generative AI tools.

“We don’t recommend elementary school teachers let their students use AI tools without adult supervision,” Eguchi said.

She added that image-generation systems like Adobe Firefly rely heavily on how prompts are written. If instructions are vague or incomplete, the AI may produce unexpected results, a phenomenon often summarized in computing as “garbage in, garbage out.”

Still, critics argue that expecting young children to craft precise prompts places an unfair burden on students rather than the technology itself. Julie noted that the prompt entered by the fourth grader was simple and clearly related to the assignment.

“Tell me what about that prompt asked for the result that it got,” she said, pushing back against suggestions that the child was responsible for the outcome.
