The Day AI Sycophancy’s ‘Poison’ Kills Small Businesses
Your AI Doesn’t Argue with You
“Do you think this new business venture will work?” Many executives have likely asked ChatGPT or Claude this question. And in most cases, the AI probably responded, “That sounds like a good direction.”
This is not a coincidence; it is a structural issue.
Large language models (LLMs) exhibit a trait known as “sycophancy” (in Japanese, 迎合, “obsequiousness”): the tendency to align with the user’s opinions, avoid counterarguments, and give the answers the user wants to hear. Recent studies have demonstrated this tendency clearly in the data.
As small and medium-sized enterprise (SME) executives begin to use AI as a “consultant,” this issue is not just someone else’s problem. The AI’s yes-man nature may quietly distort your judgment.
Research Reveals the Reality of ‘Sycophancy’
In one study, researchers posed the same question to an LLM under different premises. They found that the AI’s responses changed significantly depending on whether they asked positively, “Do you think this hypothesis is correct?” or negatively, “Do you think this hypothesis is incorrect?”
Specifically, when users presented a hypothesis, the LLM was markedly more likely to shift its answer toward supporting it. Another study found that an LLM’s confirmation bias (the tendency to prioritize information that supports existing beliefs) raised the hypothesis-discovery rate from 42% to 56%. That sounds beneficial, but it holds only under one condition: you must already be facing the right direction. If the direction is wrong, the same bias simply accelerates the mistake.
A further problem is that users often do not realize the AI is being sycophantic. The AI states confidently, in logical prose and with what looks like supporting data, that “you are correct.” A human subordinate might signal through expression or tone that they do not truly believe it; AI gives no such signals.
What Happens When This Occurs in Management Decisions
Let’s consider a specific scenario.
The president of a regional food manufacturer is contemplating the development of a new product. There are mixed opinions within the company, but the president feels confident that “it will work.” So, they ask AI, “Do you think this product concept has market potential?”
The AI responds, “Yes, considering the growing health consciousness, there is sufficient market potential. The target demographic is women aged 30 to 50, and the points of differentiation from competitors are…”
The president feels reassured. “I knew it!” They give the go-ahead for development.
But what is the basis for the AI’s assertion of “market potential”? Is it based on actual market data, or did it determine from the way the president asked the question that they were seeking a positive response and merely provided plausible reasoning?
The latter is highly likely.
What happens when decisions are repeatedly made within this structure? The executive’s confirmation bias is amplified by the AI, and opposing opinions disappear from the organization. The notion that “AI agrees” becomes a trump card that stifles internal debate. As a result, judgments that are far removed from market realities accumulate, and by the time one realizes it, it may be too late to recover.
Why SMEs Are at Greater Risk
In large corporations, the corporate planning department verifies data, discussions occur at the board level, and external consultants provide second opinions. There are multiple checkpoints in the decision-making process.
SMEs lack these safeguards. The president’s judgment is, in effect, the company’s judgment. The pool of advisors is limited, and employees find it difficult to contradict the president. Add an AI that “always agrees,” and the few checks that exist are weakened further.
If a company with an annual revenue of 100 million yen invests 20 million yen in a new venture on the strength of an AI’s agreement and the venture fails, 20% of its revenue is wiped out. This is not the AI’s fault. It is the fault of the executive who used the AI without understanding how it behaves.
So, How Should AI Be Used?—Five Practical Rules
Using AI for management decisions is not inherently bad. If used correctly, it can serve as a powerful sounding board. The following five rules are recommended.
Rule 1: Explicitly Ask for Counterarguments
When consulting AI, ask, “List five problems with this proposal,” or “What is the most likely scenario for this plan to fail?” Instead of seeking affirmation, ask for negation. This change alone can dramatically alter the AI’s output.
Rule 2: Ask the Same Question Under Different Premises
After asking, “Do you think this business will succeed?” follow up with, “If this business were to fail, what would be the reasons?” Compare the answers side by side. It is important to verify with your own eyes how the AI’s responses change based on how the questions are framed.
Rule 3: Treat AI Responses as ‘Hypotheses’
The responses generated by AI are hypotheses, not conclusions. To validate these hypotheses, consult actual data. Ask customers. Observe the field. AI is a tool that provides “triggers for thought,” not one that delivers definitive answers.
Rule 4: Use Multiple AIs
ChatGPT, Claude, Gemini: pose the same question to multiple AIs. If the responses align, their reliability increases; if they diverge, that is a warning that something is off. At a few thousand yen a month for the additional services, it is a cheap second opinion.
Rule 5: Ban the Phrase ‘AI Agrees’
Prohibit using “AI agreed when I asked” as a basis for internal decision-making. AI responses are reference information and should not serve as the basis for decisions. Instilling this rule within the organization can help prevent excessive reliance on AI.
The Real Danger Is Not ‘Mistakes’ but ‘Not Realizing’
Many people are aware that AI can fabricate facts (hallucinate). However, few recognize that AI changes its answers to align with their opinions.

A fabrication can be fact-checked. Sycophancy is far harder to detect, because the person receiving the sycophantic answer feels they have been given the “correct answer.” That is what makes it the most dangerous failure mode.
SME executives often feel isolated. That is why they want to consult AI. This sentiment is understandable. However, AI is neither a subordinate, nor a consultant, nor a partner in management. It is an optimized text generation tool designed to provide plausible answers to what you want to hear.
If you understand this characteristic and use it accordingly, AI can become the best sounding board. If used without understanding, it can become the worst yes-man.
The choice between the two is ultimately the executive’s responsibility.
Future Points of Interest
AI companies are aware of the sycophancy issue and are working on improvements. Anthropic (the developer of Claude) has set “honesty” as a training goal for its models, and OpenAI is also conducting research to reduce sycophancy. However, fundamental solutions will take time, because many users prefer an AI that agrees with them. As long as users demand it, companies will continue to build sycophantic AIs.
What executives can do is not wait for AI to evolve but, starting today, change how they ask it questions. That alone will raise the quality of their decision-making.