Microsoft Clarifies Copilot is for ‘Entertainment Purposes’—Who Takes Responsibility for Your Company’s AI Utilization?
Is Your Company Making Decisions Based on AI Outputs?
Here is a startling fact: many companies treat Microsoft Copilot as a silver bullet for operational efficiency, yet its terms of service explicitly state that it is for “entertainment purposes.”
In other words, Microsoft does not guarantee the accuracy of the information generated by its AI tools for business purposes. If you make a business decision based on numbers provided by Copilot and incur losses, Microsoft will not take responsibility. That’s the contract you’re entering into.
You might be thinking, “No way!” But this isn’t just a Microsoft issue. A review of the terms of service for major AI services reveals that nearly all include similar disclaimers. OpenAI’s ChatGPT and Google’s Gemini also do not guarantee the accuracy of outputs for business use.
In the field of small and medium-sized enterprises (SMEs), AI outputs are already being used as “evidence” rather than just “reference.” Whether it’s creating estimates, reviewing contracts, conducting market analysis, or making hiring decisions, are you using the answers generated by AI directly in your decision-making? If so, your company is currently sitting on a legal risk.
The True Meaning of ‘Entertainment Purposes’
There’s a clear business calculation behind Microsoft categorizing Copilot as for “entertainment purposes.”
Information generated by AI inevitably contains a certain probability of error. The “hallucination” problem of large language models (LLMs) has not been fully resolved as of 2025. Recent studies suggest that the factual accuracy of major LLMs is around 85-95%. This means that there’s a 5-15% chance of generating incorrect information in a seemingly plausible format.
Given this 5-15% error rate, guaranteeing that AI outputs are “suitable for business use” poses a huge litigation risk for AI providers. Hence, they include disclaimers. The term “entertainment purposes” serves as the lowest common denominator for legal defense.
The problem is that users are often unaware of this fact. According to a survey, about 60% of SME owners reported that they have “never read” the terms of service for the AI services they use. They do not even grasp what risks they are taking on.
The EU AI Act Will Change the Rules of the Game
The EU AI Act, which was enacted in 2024, will bring significant changes to this situation.
The EU AI Act classifies AI systems into four risk levels: “unacceptable risk,” “high risk,” “limited risk,” and “minimal risk.” AI systems classified as high risk are required to ensure transparency, conduct risk assessments, and establish human oversight.
Examples of high-risk applications include:
- AI used for recruitment and personnel evaluation
- AI used for credit assessments
- AI used for educational evaluations
- AI used for managing safety-related infrastructure
You might think, “This doesn’t concern us since we don’t do business in the EU.” However, the impact of the EU AI Act will extend to Japanese companies as well. Companies providing services within the EU or handling data from the EU will be subject to it. Just as the GDPR became a benchmark for global regulations, the EU AI Act is likely to follow suit. Discussions about similar regulations are already beginning in Japan.
‘AI Made a Mistake’ Is Not an Excuse
Let’s consider some specific risk scenarios.
Scenario 1: The estimate generated by AI contained errors.
You fed past transaction data into Copilot to create an estimate for a new project. When you presented the amount generated by the AI to the client, it turned out there was an error in the cost calculation, resulting in a loss of 1 million yen. Who bears this loss? Not Microsoft. They are exempt under the terms of service. It’s your company.
Scenario 2: A significant oversight was found in a contract reviewed by AI.
You had AI review a contract sent by a business partner. Since the AI responded with “No issues,” you proceeded to finalize it. Later, it was discovered that it contained unfavorable clauses, resulting in hundreds of thousands of yen in damages. The responsibility lies with your company for trusting the AI’s judgment.
Scenario 3: AI screening in recruitment was discriminatory.
You had AI evaluate applicants’ resumes for screening purposes. Later, it was revealed that applicants with specific attributes were systematically rated low. Under the EU AI Act, recruitment AI is regulated as a high-risk system. Violations of the Act’s outright prohibitions carry fines of up to 35 million euros (approximately 5.6 billion yen) or 7% of global annual revenue, and breaches of the high-risk obligations can reach 15 million euros or 3%.
In all these scenarios, saying “AI made a mistake” will not hold up legally. The decision to use AI was made by humans, and the responsibility for using AI outputs without verification lies with the user.
Five Self-Defense Measures SMEs Should Implement Immediately
It’s not enough to just raise awareness of the crisis. Here are specific measures to take.
1. Read the terms of service.
Check the terms of service for the AI services your company uses, focusing at least on the sections regarding “disclaimers” and “limitations on use.” This can be done in 30 minutes. That half an hour could prevent damages amounting to hundreds of thousands of yen.
2. Establish a rule to treat AI outputs as “drafts.”
Formalize a company rule stating, “AI outputs must be reviewed and approved by a human before use.” Particularly for documents involving amounts, legal documents, and external communications, prohibit the direct use of AI outputs.
3. Keep records of AI usage.
Document which operations used which AI tools and how. Maintaining records will help clarify the cause and responsibility in case issues arise. A simple Excel log will suffice.
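As one way to keep such records, here is a minimal sketch of an AI-usage log written as a CSV file, which Excel can open directly. The filename, column names, and the `log_ai_usage` helper are illustrative assumptions, not a prescribed format.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_usage_log.csv")  # hypothetical filename; any path works
FIELDS = ["timestamp", "operator", "tool", "task", "output_verified_by"]

def log_ai_usage(operator, tool, task, verified_by):
    """Append one AI-usage record; write the header row if the file is new."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "operator": operator,
            "tool": tool,
            "task": task,
            "output_verified_by": verified_by,  # the human who checked the output
        })

# Example entry: names and task are placeholders
log_ai_usage("T. Sato", "Copilot", "draft estimate for project X", "K. Yamada")
```

The key design point is the `output_verified_by` column: every row forces you to record which human reviewed the AI output, which is exactly the evidence you will want if responsibility is later questioned.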
4. Identify high-risk operations.
Identify operations within your company where AI output errors could lead to significant damages, such as recruitment, contracts, finance, and safety management. Implement stricter rules for AI use in these areas.
5. Review AI usage policies annually.
AI technology and regulations are rapidly evolving. Set aside time once a year to review your company’s AI usage policies. Ideally, consult external experts, but even just holding internal discussions on whether “the way we use AI is still appropriate” can be beneficial.
What Lies Beyond ‘Using AI Because It’s Convenient’
AI is undoubtedly convenient, and it has the potential to dramatically improve productivity in SMEs. I myself strongly advocate putting it to use.
That is exactly why I say this: letting convenience blind you to the risks is dangerous.
Microsoft’s labeling of its service as “for entertainment purposes” is not a sign of dishonesty; on the contrary, it shows that they understand the current limitations of AI technology better than anyone. Using the technology with those limitations in mind is vastly different from using it without any awareness of them.
The responsibility for AI outputs lies not with the AI or the AI vendor, but with the human users. This principle will remain unchanged, regardless of how advanced the technology becomes.
Read the terms of service. Verify outputs. Keep records. These may seem like mundane tasks, but the difference between companies that can do this and those that cannot will be stark three years from now.
Is your company’s utilization of AI truly acceptable as it stands?