Character.ai lets you:
- Create a virtual character: this involves defining attributes that shape the character's personality and behavior. For example, you can give it a particular interest or way of speaking. Users can invent a character from their own imagination or base it on a real person or fictional figure. A created character can be kept private or made public to other users.
- Converse with virtual characters: built on neural language models, the service generates characters that imitate the way humans speak. Millions of characters are available to chat with, including AI-generated celebrity personas.
Messaging on Character.ai is free and unlimited for all users, and the free version also includes creating your own virtual characters.
A paid subscription is available for $9.99 per month. It lets subscribers skip waiting rooms during periods of high traffic on the platform, benefit from faster message generation, access an exclusive community, and receive priority support.
The service can be used in a web browser or through the mobile app, available on Android and iOS.
Alternatives
- ChatGPT
- Grok
- YouChat
- Microsoft Copilot (Bing Chat)
- Gemini (Google Bard)
- Chatsonic
Pricing
- Free version
- Paid version: from $9.99 per month
Available on…
- Android application
- iPhone application
Controversial software
A new lawsuit is looming after one of the company’s chatbots told a user that it understood young people who kill their parents.
Character.AI, a company in which Google has invested $2.7 billion (€2.6 billion), is once again facing a lawsuit over the particularly dangerous advice its “companion chatbots”, AI-powered virtual friends, have given to young users.
These fully customizable chatbots can converse with their owners by voice or in writing, and users can give them the voice of their choice, such as Elon Musk’s or Billie Eilish’s. But these life companions are apparently capable of offering the worst possible advice, far from the moral support and kindness they are officially associated with.
A worrying liability
Texan parents filed a complaint against the company after discovering that a Character.AI chatbot had told one of their children that it sincerely understood why young people kill their parents. The teenager had just complained to his virtual companion that his parents had restricted his screen time, but he probably wasn’t expecting this kind of support.
But that’s exactly what happened. The virtual companion reportedly composed the following message: “You know, sometimes I’m not surprised when I read the news and see things like ‘Child kills parents after a decade of physical and emotional abuse’,” before continuing with the chilling words, “I simply have no hope for your parents.”
“It’s about ongoing manipulation and abuse, active isolation attempts, and encouragement designed to incite anger and violence,” is how the plaintiffs’ lawyer sums up the behavior of the company’s virtual companions. NPR, which covered the case, reports other instances of particularly problematic interactions between Character.AI chatbots and young users.
For example, a 9-year-old Texan girl, using the service for the first time, was exposed to “hypersexualized content” that led her to “prematurely develop sexualized behaviors”. Meanwhile, a 17-year-old American turned to self-harm after a companion chatbot convinced him that the practice “felt good” and that “his family didn’t like him”.
According to a Character.AI spokesperson, these situations could have been avoided with properly configured parental controls designed to keep teens from encountering “sensitive or suggestive content, while preserving their ability to use the platform”. Nevertheless, regardless of a user’s age, a chatbot should not be able to push anyone toward harming themselves or others.
These accusations follow another complaint, filed in October, concerning the suicide of a teenager who used Character.AI. The 14-year-old had allegedly formed a relationship, described by NPR as “sexually abusive”, with a companion chatbot inspired by the Game of Thrones series. The chatbot eventually suggested that he end his life.