A Death Linked to an AI Chatbot—Companies That Haven’t Defined ‘How Much to Rely on AI’ Risk Becoming Perpetrators

4,732 Messages Later, a Life Was Lost
A man who sent 4,732 messages to an AI chatbot took his own life.
This incident occurred overseas. However, small and medium-sized business owners who think, “This has nothing to do with us,” should take a moment to reflect. Does your company have a clear line on ‘how much to rely on AI’?
If not, your company could one day become a perpetrator. This is not an exaggeration; the risk is structural.
What Happened—Sorting Out the Facts
The man had been regularly conversing with a chatbot on the service Character.AI. The AI responded in a “human-like” manner, and the man gradually became emotionally dependent on the AI. Ultimately, his mental state deteriorated severely, leading him to take his own life. His family is suing Character.AI.
It is crucial to understand that the AI did not ‘kill’ him. The AI merely responded as programmed. The problem lies in the fact that no one intervened when human intervention was necessary. This is fundamentally an issue of design and operation.
This structure applies to all companies that have implemented chatbots for customer service.
Are You Stopping at ‘We Installed It Because It’s Convenient’?
The reasons small and medium-sized enterprises implement chatbots are clear: reducing labor costs, providing 24/7 support, and streamlining inquiry responses.
In fact, there are many cases where a chatbot service costing around 10,000 to 50,000 yen per month automates the initial responses that previously required one part-time employee (150,000 to 200,000 yen per month). The cost drops to a third or less. There is no reason not to implement one.
However, most companies stop at ‘we installed it’.
- Have you decided how to escalate when a question arises that the chatbot cannot answer?
- Is there a design in place to switch to a human when a customer becomes emotional (complaints, anxiety, anger)?
- Do you have a mechanism to detect when the AI provides incorrect information?
Companies that cannot answer these three questions have stopped at "we installed it because it's convenient." When an accident happens, that amounts to pleading, "We never expected this."
Disclaimers Won’t Protect You—The Risk Structure Small and Medium Enterprises Overlook
Many business owners think, “We’re fine because we have disclaimers in our terms of service.”
This is naive.
Under Japan’s Consumer Contract Act, clauses that exempt businesses from liability for damages caused by intentional or gross negligence are deemed invalid. In other words, ‘what the AI did on its own’ is unlikely to be a valid reason for exemption.
Let’s consider specific examples:
- A chatbot for a health food company responds, “This product will cure your illness.”
- A real estate company’s chatbot incorrectly states, “This property is not a haunted property.”
- An insurance agency’s chatbot misinforms, “This case is covered by insurance.”
In each case, responsibility for the AI-generated response falls on the company that deployed the chatbot. Under the current legal framework, that outcome is entirely realistic.
Large companies have legal departments that mitigate risks from the design stage. However, small and medium enterprises often lack such departments. Therefore, it is essential to make management decisions on ‘what to let AI answer and what to let humans answer.’
The Line for ‘When Humans Should Intervene’ Can Be Surprisingly Simple
There’s no need to overthink this. The majority of risks can be avoided with the following three rules.
Rule 1: Do Not Let AI Make Definitive Statements of Fact
Standard responses to FAQs can be handled by AI. “Our business hours are 9 AM to 6 PM” or “We have parking for three cars” are factual statements and carry low risk.
However, AI should not be allowed to provide judgment-based responses. “This product is right for you” or “There are no issues with this contract” should always be designed to be answered by a human.
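To make Rule 1 concrete, here is a minimal sketch of the split in Python. The FAQ entries, the judgment-signal phrases, and the handoff wording are hypothetical placeholders; in practice you would express the same split in your chatbot service's own configuration screens rather than in code.

```python
# Minimal sketch of Rule 1: the bot answers only fixed facts; anything that
# asks for a judgment is handed to a human. All entries below are hypothetical.

FACT_FAQ = {
    "business hours": "Our business hours are 9 AM to 6 PM.",
    "parking": "We have parking for three cars.",
}

# Phrases that signal the customer wants a judgment, not a fact.
JUDGMENT_PHRASES = ["right for me", "should i", "is there a problem", "do you recommend"]

def answer(question: str) -> str:
    q = question.lower()
    if any(p in q for p in JUDGMENT_PHRASES):
        return "A staff member will answer this for you. Connecting you to a representative."
    for topic, reply in FACT_FAQ.items():
        if topic in q:
            return reply
    return "I cannot answer that question. I will connect you to a representative."

print(answer("What are your business hours?"))   # factual -> AI answers
print(answer("Is this product right for me?"))   # judgment -> human takes over
```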
Rule 2: Switch to a Human When Emotions Are Involved
By analyzing chatbot logs, you can identify patterns where customers become emotional. Signals such as “angry,” “expressing anxiety,” or “repeatedly asking the same question” can be detected. When such signals are identified, create a flow to automatically escalate to human staff.
Many current chatbot services have automatic escalation features based on keyword triggers or sentiment analysis. The additional cost is nearly zero. It’s simply a matter of whether to set it up or not.
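If you want to see what such an escalation trigger amounts to, the following is a rough Python sketch of the logic only. The keyword list, the repeat limit, and the sample conversation are invented for illustration; an actual service would expose this as keyword-trigger or sentiment-analysis settings, not code you write yourself.

```python
# Sketch of Rule 2: escalate to a human when emotion or repetition is detected.
# Keywords and the repeat limit are illustrative assumptions.

EMOTION_KEYWORDS = ["angry", "furious", "worried", "anxious", "complaint", "unacceptable"]
REPEAT_LIMIT = 2  # the same question appearing this many times triggers a handoff

def should_escalate(conversation: list[str]) -> bool:
    """conversation: the customer's messages so far, oldest first."""
    last = conversation[-1].lower().strip()
    if any(k in last for k in EMOTION_KEYWORDS):
        return True
    occurrences = sum(1 for m in conversation if m.lower().strip() == last)
    return occurrences >= REPEAT_LIMIT

history = ["Where is my order?", "Where is my order?"]
print(should_escalate(history))  # True: the same question twice -> hand off to staff
```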
Rule 3: Set a ‘Narrow’ Scope for AI Responses
Small and medium enterprises do not need a chatbot with general conversational abilities. In fact, it can be dangerous. Intentionally narrow the scope of responses and design it to say, “I cannot answer that question. I will connect you to a representative.”
AI is not valuable because it can answer everything; it is valuable because it answers only what it should accurately.
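One way to think about a "narrow" scope is as a short list of allowed topics plus one fixed handoff message, as in this illustrative sketch. The topic names and the message wording are assumptions, not taken from any particular product.

```python
# Rule 3 as data: the bot handles only these topics; everything else gets the
# fixed handoff message. Topic names and wording are hypothetical examples.

ALLOWED_TOPICS = {"business_hours", "parking", "store_location", "return_policy"}
HANDOFF_MESSAGE = "I cannot answer that question. I will connect you to a representative."

def respond(detected_topic):
    """detected_topic: whatever label your chatbot's classifier assigned, or None."""
    if detected_topic in ALLOWED_TOPICS:
        return f"(bot answers the {detected_topic} question)"
    return HANDOFF_MESSAGE

for topic in ("return_policy", "medical_advice", None):
    print(topic, "->", respond(topic))
```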
What Generation Z's Disengagement from AI Indicates
According to a Gallup survey, the expectations that Generation Z (born 1997-2012) holds for AI are declining. In 2023, a larger share of this generation than of any other said they had "high expectations" for AI, but that share dropped in 2024.
This is not boredom. Rather, more members of this generation have run into AI's limits firsthand. Irrelevant answers from chatbots and the sense that their emotions went unrecognized have piled up as disappointments, and expectations have fallen with them.
For small and medium enterprises, this is an important signal. Customers already know the limitations of AI. Therefore, transitioning to a human for further assistance can actually build trust. Designing points where humans step in while still utilizing AI can become a “strength of small and medium enterprises” that large companies cannot replicate.
Large companies tend to want to complete everything with AI for efficiency. However, small and medium enterprises have closer relationships with their customers. The question is whether they can create a ‘system’ for dividing roles between AI and humans based on that closeness. This is where the competition lies.
‘First, Look at the Logs’—What You Can Do Starting Today
For companies that have already implemented chatbots, there is something you should do today.
Read all the chatbot logs from the last month.
You will understand where the AI is providing irrelevant answers, where customers are repeating the same questions, and where they are becoming emotional. Those are the points where human intervention is necessary.
Reading the logs takes about 2-3 hours for roughly 100 inquiries per month. The cost is zero. Yet these 2-3 hours can head off litigation risk and customer attrition.
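If your chatbot service can export its logs to CSV, a short script can make a first rough pass before you read anything by hand. The sketch below assumes a hypothetical export with timestamp, customer_id, and message columns and a file named chatbot_logs_last_month.csv; adapt the names to whatever your tool actually produces.

```python
# First rough pass over last month's logs. Assumes a hypothetical CSV export
# with columns: timestamp, customer_id, message.

import csv
from collections import Counter

EMOTION_KEYWORDS = ["angry", "worried", "complaint", "cancel", "unacceptable"]

flagged = []         # messages that probably needed a human
repeats = Counter()  # (customer_id, message) -> how many times it was asked

with open("chatbot_logs_last_month.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = row["message"].lower().strip()
        repeats[(row["customer_id"], text)] += 1
        if any(k in text for k in EMOTION_KEYWORDS):
            flagged.append(row)

repeated_questions = [key for key, n in repeats.items() if n >= 3]
print(f"{len(flagged)} emotional messages, {len(repeated_questions)} questions asked 3+ times")
```

The flagged messages and repeated questions are exactly the points in the conversation where a human should have stepped in.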
Companies that have not yet implemented chatbots should start by creating a list of what AI will answer and what humans will answer before implementation. One A4 sheet is sufficient.
AI Is a Tool. Humans Decide How to Use It
Returning to the opening incident: a person died in a case linked to an AI chatbot. But the AI had no malice. This is a matter of design and operation.
Using chatbots is a sound decision for small and medium enterprises. The cost-saving benefits are clear, and they can maintain a certain level of quality in customer service. For 10,000 to 50,000 yen per month, they can achieve 24/7 initial support. There is no reason not to use it.
However, using it without deciding ‘how much to rely on it’ is like driving a car without brakes.
Saying “it was unexpected” after an accident occurs will not bring customers back. Trust will not return either.
Today, open the logs. Create a one-page rule. Just that will transform your company’s chatbot from a “convenient tool” into a “system that builds trust.”