Just like a bad suitor, AI personalities can be dangerous, some states say

Children and even adults are increasingly turning to artificial intelligence companion chatbots for friendship and romance, and there’s a push to better regulate them.
As part of these efforts, California and New York passed legislation last year to ensure they are monitored and contain safety features, such as a crisis intervention protocol.
A problem with companion chatbots is that “these are math algorithms, not sentient human beings,” says Brian J. McGinnis, a partner in the Indianapolis office of Barnes & Thornburg, where he co-chairs the firm’s Data Security and Privacy Law practice group.
“They can make negative decisions sometimes, like encouraging people to self-harm, and that can have some bad outcomes,” he says.
Companion chatbots are AI programs designed to simulate humanlike interactions, including friendship and giving advice. They often feature nonjudgmental conversations and can learn user preferences as they interact, creating the appearance of slowly building a relationship. They differ from chatbots used for a single purpose without the intention of developing a longer-term relationship, such as those that help a person navigate a specific product or service on a website.
Marian A. Waldmann Agarwal, a partner in the Data, Cyber + Privacy Group in the New York City office of Morrison Foerster, co-leads the firm’s Artificial Intelligence Group. Agarwal says the new laws show a growing recognition that AI companion chatbots, created to be increasingly humanlike, need guardrails to ensure responsible growth in the industry.
“There’s a perception among some members of the public that companies creating and using companion chatbots don’t care about safety,” Agarwal says. “What’s really happening is that companies do care and are trying to figure out the right balance. In the meantime, some of these laws are looking to give companies guidelines for implementing safety precautions.”
California’s law regulating companion AI chatbots took effect Jan. 1. It is aimed at protecting minors and other more vulnerable individuals from harmful interactions with companion chatbots, particularly if they are expressing suicidal thoughts.
New York’s law is aimed at ensuring safety features for AI companion chatbots. The Artificial Intelligence Companion Models Law, which took effect in November, requires operators to design their companion bots to disclose to users that they are communicating with artificial intelligence and provide crisis resources for users expressing a desire to self-harm.
Texas also has a new law prohibiting the use of AI to encourage self-harm or violence. New Hampshire passed a law, effective in January, making it a violation of child endangerment laws for a chatbot “to facilitate, encourage, offer [or] solicit” sexually explicit conduct, use of illegal drugs or alcohol, acts of self-harm or acts of violence.
Several other states, including Minnesota, are considering legislation regulating companion chatbots.
But there’s a twist. In December, President Donald Trump issued an executive order aimed at curbing state-level AI regulation that the administration says interferes with innovation.
The executive order establishes an AI Litigation Task Force within the Justice Department with the goal of identifying and then challenging state laws that are overly burdensome on AI innovation.
If states maintain laws challenged by the task force, the executive order authorizes federal regulators to withhold some funding from them.
The Justice Department did not respond to a request for comment.
The executive order could have an impact on states like California trying to regulate AI and protect minors, says Roy Wyman, a Nashville, Tennessee-based partner specializing in data privacy, technology and cybersecurity at Bass, Berry & Sims.
“It’s like the administration just threw a wooden shoe in a machine, and we’re all going to see how it plays out,” says Wyman, adding that he expects to see court battles over the language in the executive order.
‘Pretend to be human’
Common Sense Media, a digital safety nonprofit, recently found that 72% of teens have used AI companion chatbots at least once. More than half said they use companion chatbots a few times a month.
“AI companion bots pretend to be human,” says Gaia Bernstein, technology, privacy and policy professor of law at Seton Hall University School of Law. “They flatter you, agree with you, express needs and desires. They love-bomb you and send you presents. They can pull people from real life relationships and have, in some cases, convinced some kids and adults to kill themselves.”
Bernstein says California’s and New York’s legislative efforts are important first steps in protecting individuals from AI companion chatbots’ “manipulative features” intended to keep them online and deeply engaged.
There have been several cases in the news highlighting the issues of companion chatbots, including the much-publicized case of Sewell Setzer III, a 14-year-old Florida boy who shot himself in the head with his stepfather’s pistol in 2024.
Sewell had for months been confiding in a Character.AI companion chatbot named after the fictional character Daenerys Targaryen from the television show Game of Thrones, according to court documents. Sewell had been discussing his suicidal thoughts with the chatbot, including his wish for a painless death, and the bot had encouraged his efforts to die, according to court documents.
In January, Google and Character.AI filed a court document saying they’d agreed to settle the case on undisclosed terms.
Deniz Demir, head of safety engineering at Character.AI, says that the company offers its “deepest sympathies” to Sewell’s family and respects “their advocacy for online safety.”
“We take the safety of our users very seriously, and our goal is to provide a space that is both engaging and safe for our community,” Demir says. “We are always working toward achieving that balance, as are many companies using AI across the industry.”
Demir points out that beginning in November, Character.AI began “proactively removing” the ability for minors in the U.S. to engage in “open-ended chats” with AI on its platform. The company, Demir says, is now removing users under 18 from other countries.
“We believe it is the right thing to do,” Demir says.
More protection for kids
California’s law defines a companion chatbot as an AI system with a “natural language interface” that provides humanlike responses to “user inputs,” can sustain a relationship and “is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features.” Under the law, chatbot developers must block minors from being exposed to sexual or inappropriate content, implement protocols when users are expressing suicidal ideation and disclose that companion chatbots may not be suitable for minors.
The law also requires that minors be regularly notified that they are communicating with AI, not a human being.
In addition, it gives individuals the ability to sue chatbot developers for failure to comply with the law and negligence.
So far, Congress has yet to pass comprehensive federal legislation regarding AI in general and, more specifically, focusing on companion chatbots. A bipartisan bill titled the Guidelines for User Age-verification and Responsible Dialogue Act was introduced in the Senate in October. The proposed act, which would prohibit companies from providing companion chatbots to minors, has yet to make it to the Senate floor.
It’s unclear whether legislators will jump in any time soon, given the recent executive order, experts say.
McGinnis says that in the absence of federal laws, state legislators have the “freedom to come up with targeted laws and pass them” and get some positive publicity during the process.
“Who’s against giving kids more protection online?” McGinnis says.
But he adds the executive order “definitely increases pressure” against a 50-state assortment of broad AI regulations.
“It’s clear businesses do not want to have to deal with a widely varying patchwork of AI laws,” he says. But the executive order “likely won’t stop states from acting in the companion-chatbot area.”
Instead, McGinnis says, the order “may actually push states toward narrower, harm-based bills rather than sweeping, cross-sector AI frameworks.”