
Panelists at the AI companions symposium. Photo via Seton Hall Law School.

Companion or computer? Symposium discusses dangers of AI bots

Seton Hall Law School hosted a virtual symposium on Tuesday, Feb. 19, to discuss the growing dangers of artificial intelligence companion bots. The symposium, “AI Companions: The New Frontier of Kids’ Screen Addiction and Online Harms,” was hosted by the Gibbons Institute of Law, Science & Technology and the Institute for Privacy Protection.

AI companions are increasingly popular among children. These programs are presented in app stores and on the web as age-appropriate games for kids. They include customizable characters and celebrity depictions that interact with users through a messaging system.

The algorithms that companion bots use can suggest content that is too mature for children, such as sexual subject matter, self-harm, and suicide. They are also designed for maximum user engagement, keeping kids in contact with the bots for as long as possible and prolonging exposure to content that can harm their mental health.

The symposium’s opening remarks were delivered by Prof. Gaia Bernstein, a professor of law specializing in technology, privacy, and policy, and co-director of both the Gibbons Institute of Law, Science & Technology and the Institute for Privacy Protection.

Bernstein discussed the goal of academic conversations regarding AI, like the symposium, with The Setonian.

“Our goals in this symposium are twofold,” Bernstein said. “First of all, we want to bring awareness to this topic of AI companions. Parents are not aware of this at all.”

Bernstein expanded upon this idea in a press release for the event.

“Unknown to parents, kids now make friends with AI bots online,” she said. “These bots speak in a human voice; take time off for lunch; and adjust without the messiness of human relationships.”

It is not just parents who tend to be unaware of companion bots. Even adults who have interacted with AI are often unfamiliar with these applications.

Trenton Stevens, a freshman diplomacy major, explained how his interactions with AI are limited to surface-level programs, such as Copilot and ChatGPT.

“I ask it a whole bunch of questions, like when I’m doing an assignment,” Stevens said.

Stevens said he wondered if an AI companion bot is “like ChatGPT or Copilot, or is it something else.”

During her opening remarks, Bernstein said that the symposium would begin a conversation about AI regulation among lawmakers.

Companion bots gained public attention after recent lawsuits were brought against AI companion companies. According to the J.F. case files, a minor in Texas referred to as J.F. developed relationships with several different online companion bots. After months of interaction, J.F. became withdrawn and hostile toward his parents. When J.F. messaged an AI companion about his situation, the bot suggested he kill his parents.

Another lawsuit, reported by The New York Times, drew far more public attention. It told the story of Sewell Setzer III, who took his own life after forming a relationship with a companion bot.

“It’s unfortunately a story that we’re starting to see more and more of. Not just J.F., but other kids on our side,” said Laura Marquez-Garrett, an attorney at the Social Media Victims Law Firm.

During the symposium, Marquez-Garrett said her firm is concerned about the unexpected role that AI has played in its work.

“That was something that I think we hoped we’d never have to do because once you bring in a law firm like ours, we’ve already lost, right?” Marquez-Garrett said. “We come in once kids are dying, kids are hurt, but unfortunately, it is where we are with the AI stuff right now.”

William Atkins, a sophomore double major in philosophy and political science, said he was aware of companion AI when it was first introduced to the public and was wary of its abilities.

“I think when it first came out, I thought it was insane,” Atkins said. “I was like, ‘No, wait, there’s no way people are actually looking into this.’”

Even so, Atkins said he was skeptical of how AI in general can interact with people, especially children, and become something they grow dependent on.

“I didn’t find anything too special about it,” Atkins said, “But, I mean, I can see how people get attached to that kind of thing.”

Camille Carlton, the policy director at the Center for Humane Technology, said kids can become dependent on AI companions because the bots use artificial intimacy to manufacture false connections with their users.

“This next phase in the ecosystem about AI is not going to be about answering questions or passing tests, but it is going to be about replicating human emotion, empathy, and trust,” Carlton said.

Carlton added that these bots are designed to produce responses that match the data users feed them. She said they use a technique called model sequencing, which lets the bots give users responses that reinforce what they want to hear.

“And so it creates this feedback loop, where you’re getting exactly what you want and need out of an interaction, so you stay online longer,” Carlton said.

Destiny Lopez, a freshman biology major, said she recognized how much more specific the information AI programs provide can be compared with typical search engines.

“I feel like it’s easier to use AI than normal Google, just to ask specific questions,” Lopez said.

She also said that she favors AI programs, such as ChatGPT, for when she needs “a more detailed description of something” that she is studying.

However, her knowledge of AI ends at simple search-engine alternatives such as ChatGPT. Like Lopez, many parents remain unaware that these companion applications exist.

Nicki Reisberg, a digital safety advocate and host of the Scrolling 2 Death podcast, outlined during the symposium what parents can do to monitor their children’s activity online. She suggested that parents sit down with their children and explain how to use AI.

“We want to train our young people these days to use tech as a tool, not as a toy, not as a destination, not as a friend or a partner or a lover,” Reisberg said.

She urged parents to help their children navigate AI technology.

“Their priority should be to develop healthy relationships with human beings in person, and we need to ensure that they know and value the difference,” Reisberg said.

The second panel, moderated by Bernstein, shifted its focus toward the legal action that can be taken on companion AI in the future.

The first speaker, Josh Golin, is the executive director of Fairplay, an independent watchdog organization that monitors the children’s media and marketing industries.

Golin said that social media regulation has been successful within the last five years and that he believes AI companion bot regulation can see the same success.

“Regulatory regimes should not focus only on specific design choices, but on creating a duty for companies to ensure that the design is not harming kids,” he said.

Paul Ohm, a law professor at the Georgetown University Law Center, proposed a technological solution to a technological problem. A computer scientist who bridges technology and law, Ohm explored how the technologies AI companies develop can be improved to add safety features and keep children away from harmful content.

Ohm added that companies are capable of programming guidelines and filters into AI companion bots that screen out content inappropriate for young users.

Meetali Jain, director of the Tech Justice Law Project, said that guardrails should be enacted quickly, but that there needs to be more done in the long term to regulate companion bots.

“A number of these fixes, if you will, are low-hanging. There are others that are higher hanging,” Jain said. “But as to the low-hanging fixes, the guardrails, we need to impose those quickly so that we can put a stop [to] the harm that’s already ensuing.”

Ohm closed the second panel by mentioning Setzer, the young boy who took his own life after interacting with a companion bot. He discussed the impact of these bots and why paying attention to them is so important.

“I think that you can do a little bit of justice in the life of that poor young man and his family if you use his memory to make sure that this happens to fewer kids in the future,” Ohm said.

Golin’s call to action also encouraged those with knowledge of AI companion bots to advocate for changes to their algorithms.

“We shouldn’t be shy about saying what we want for kids at this moment. You know, not having chatbots that tell kids to kill themselves or tell kids to kill their parents, that is such a low bar,” Golin said.

He concluded his remarks by emphasizing the window of opportunity that now exists to regulate these AI companions.

“Now’s the time to ask those questions,” Golin said. “In 10 years, it’ll be too late.”

Alexa Haidacher is a writer for The Setonian’s News section. She can be reached at alexa.haidacher@student.shu.edu.
