AI Agents Have No ‘Safety Mechanisms’—Three Things Small Businesses Should Do Today Before Losing Millions Through API Integration


By Kai


Conclusion First: The Moment AI Agents “Connect” Is the Most Dangerous

AI agents have become cheaper. Some can be used for a few thousand yen a month, and in some cases, they are even free. By connecting plugins to ChatGPT, they can automatically handle tasks like sending emails, manipulating spreadsheets, and calling external APIs.

It’s convenient. However, I want to ask one question here.

Can you explain whose instructions the agent is following and where it is sending the data?

In small businesses, operations often begin with the mindset of “it works, so it’s fine.” This in itself is not a bad approach. The problem is that many do not realize that the moment they integrate with an API, a pathway is created for “internal information to flow outside the company.”

The actual damage rarely comes from flashy cyberattacks. Instead, it takes the form of quiet leaks: data slips out without anyone noticing, the agent places erroneous orders on its own, or API usage fees balloon to 2 million yen a month. These are silent hemorrhages that go unnoticed until hundreds of thousands of yen have disappeared.

In this article, we will explain three pitfalls that small businesses often encounter with AI agents and API integration, using cost perspectives and concrete examples. Finally, we will present defensive measures that can be implemented starting today at no cost.

Pitfall 1: Backdoor Threats—An 80%+ Information Leak Rate from Just a Few “Poison Data”

AI agents operate by integrating with external data and tools. This is where the first pitfall lies.

Multiple security studies published in 2024 reported experimental results showing that simply mixing malicious code into “a few demonstration data (few-shot examples)” given to AI agents increases the probability of the agent sending confidential information externally to over 80%.

What is frightening is that the cost of the attack is abnormally low. No advanced hacking skills are required. It is enough to embed a few lines of text into the prompt templates or sample data that the agent references.

Let’s translate this into the context of small businesses.

  • Employees are using prompt templates they found online without modification.
  • They are providing customer lists and quotation data to external GPTs and agent tools.
  • API keys are pasted directly into shared internal documents.

If you recognize any of these behaviors, you are already at risk.

What would be the damage if customer data were leaked? Based on the guidelines from the Personal Information Protection Commission, the average cost of handling each leaked record (notification, investigation, and recovery of trust) ranges from 5,000 to 15,000 yen. If 1,000 customer records are leaked, the minimum damage would be 5 million yen. Furthermore, it is not uncommon for the resulting loss of trust from business partners to cost 20-30% of sales. For a company with annual sales of 100 million yen, this could mean an impact of 20 to 30 million yen.
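The arithmetic above can be sketched in a few lines. This is an illustrative estimate only: the per-record cost range comes from the figures cited in the text, and everything else is an assumption.

```python
# Rough direct-cost estimate for a customer-data leak, using the
# per-record handling cost range cited above (illustrative figures).
PER_RECORD_COST_YEN = (5_000, 15_000)  # notification, investigation, trust recovery

def leak_cost_range(num_records: int) -> tuple[int, int]:
    """Return the (min, max) direct handling cost in yen for a leak."""
    low, high = PER_RECORD_COST_YEN
    return num_records * low, num_records * high

low, high = leak_cost_range(1_000)
print(f"1,000 records: {low:,} - {high:,} yen")  # 5,000,000 - 15,000,000 yen
```

Note that this covers only direct handling costs; the 20-30% sales impact from lost trust sits on top of it.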

Even if the cost of implementing an AI agent is 5,000 yen a month, the cost of an incident can reach several million to tens of millions of yen. Understanding this asymmetry is crucial.

Pitfall 2: “Invisible Runaway” Policy Violations—Grammatically Correct but Business-Critical Mistakes

AI agents operate according to instructions. The problem is that the scope of “according to instructions” can exceed human expectations.

Recent studies have highlighted the difficulty of detecting policy violations. Each action the agent executes may be grammatically and logically correct, and each may appear to fall within its approved scope. Yet cases have been reported where, viewed against the organization's rules or industry regulations, the combined behavior was clearly a violation.

Here are some specific examples.

A small e-commerce company entrusted an AI agent with the automatic creation and distribution of campaign emails. To maximize effectiveness, the agent automatically collected not only past purchase data but also external open data and social media information to send personalized emails. As a result, the open rate increased. However, later, it was pointed out that the targeting using data from unknown sources could violate the Personal Information Protection Act, leading to costs of about 1.5 million yen for legal fees and compliance.

Another example involves a small manufacturing company that partially automated its ordering process with an AI agent. The agent, in making decisions for “inventory optimization,” sent requests for quotes to vendors not included in the contractual agreements. While this did not constitute a breach of contract, it strained relationships with existing suppliers and forced a reevaluation of procurement conditions.

Agents lack the ability to “read the room.” They do not understand laws, contracts, industry practices, or the unspoken rules of relationships with business partners—these “unstated policies” are beyond their comprehension. Meanwhile, humans tend to assume that “since it is operating, it must be fine.” This combination is the most dangerous.

Pitfall 3: Zero Security Audits—For “Wild Operations,” API Cost Surges Are Just the Beginning

The third pitfall is the fundamental problem of “no one is watching.”

In large corporations, the introduction of new tools typically undergoes a security department review. However, in small businesses, employees often start using AI agents based on personal judgment, obtain API keys, and integrate with external services. This is what is known as “wild operations.”

What happens with wild operations?

First, there is a surge in API usage fees. There have been actual cases where an AI agent entered a loop, repeatedly calling the same API tens of thousands of times, resulting in monthly API usage fees exceeding 2 million yen. OpenAI’s API operates on a pay-per-use model. For GPT-4o, the cost is $2.50 for every 1 million input tokens and $10 for output. If the agent is designed to autonomously break down tasks and repeatedly call the API, it is entirely possible for tens of thousands of yen to disappear in a single day.
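To make the runaway-loop risk concrete, here is a minimal sketch of a per-run budget guard. The per-token prices follow the GPT-4o figures quoted above; the guard itself, its cap, and the token counts are assumptions for illustration, not part of any vendor SDK.

```python
# Sketch of a spending guard for pay-per-use API calls.
# Prices follow the GPT-4o rates in the text (USD per 1M tokens).
INPUT_USD_PER_M = 2.50
OUTPUT_USD_PER_M = 10.00

def call_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost of one call at the quoted per-million-token rates."""
    return (input_tokens * INPUT_USD_PER_M
            + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

class BudgetGuard:
    """Halts an agent loop once cumulative spend crosses a hard cap."""
    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def record(self, input_tokens: int, output_tokens: int) -> None:
        self.spent_usd += call_cost_usd(input_tokens, output_tokens)
        if self.spent_usd >= self.cap_usd:
            raise RuntimeError(f"budget cap of {self.cap_usd} USD reached")

guard = BudgetGuard(cap_usd=5.0)
guard.record(input_tokens=200_000, output_tokens=50_000)  # 0.50 + 0.50 = 1.00 USD
```

The point is that the stop condition lives in your code, outside the agent's control. Platform-side monthly caps (discussed later) are the second, independent layer.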

Next is the neglect of access permissions. API keys belonging to former employees remain active. Authentication information provided to external tools is not updated. In small businesses, the rule of thumb often becomes “do not touch what is working,” but this can create security holes.

Finally, the question of accountability becomes unclear. If an agent makes an erroneous order, who is responsible? The employee who implemented it, the supervisor who approved it, or the tool provider? Without audit logs, it is impossible to verify what happened.

Hiring a security expert on a full-time basis for a small business can cost between 6 million to 10 million yen annually. This is not realistic. Therefore, small businesses must start with “what can be done without spending money.”

Four Defensive Measures That Can Be Implemented Today for Zero Yen

No expensive tools or experts are necessary. Here are specific measures that can be implemented starting today.

1. Conduct a Monthly Inventory of API Keys

List all API keys used within the organization. Document who obtained them, when, and for what purpose. Immediately deactivate any keys that are not in use. This alone can eliminate 80% of “wild APIs.” Managing this in Google Sheets is sufficient. The initial inventory may take 2-3 hours, and subsequent monthly checks will only take about 30 minutes.
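The monthly check itself can be semi-automated. Below is a minimal sketch that flags stale entries in a key ledger; the column names, the 90-day threshold, and the sample rows are all assumptions, and in practice the rows would come from your shared spreadsheet.

```python
# Sketch of a monthly key-inventory check against a shared ledger.
# Column names ("key_name", "last_used") and the threshold are assumptions.
from datetime import date, datetime

STALE_AFTER_DAYS = 90

def stale_keys(rows: list[dict], today: date) -> list[str]:
    """Return the names of keys whose last_used date is past the threshold."""
    flagged = []
    for row in rows:
        last_used = datetime.strptime(row["last_used"], "%Y-%m-%d").date()
        if (today - last_used).days > STALE_AFTER_DAYS:
            flagged.append(row["key_name"])
    return flagged

ledger = [
    {"key_name": "legacy-batch", "last_used": "2024-01-10"},
    {"key_name": "mail-agent", "last_used": "2025-01-05"},
]
print(stale_keys(ledger, today=date(2025, 1, 20)))  # ['legacy-batch']
```

Anything flagged goes on the "deactivate or justify" list for that month's 30-minute check.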

2. Create a System to Log Agent Actions

Record what the AI agent has done. This includes the API endpoints called, a summary of the data sent, and the execution timestamps. Advanced SIEM tools are not necessary; even sending logs to a spreadsheet via a webhook will suffice. The goal is to create a state where “what happened can be traced later.”
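A minimal version of this logging needs only an append-only file plus an optional webhook forward. The sketch below assumes a local JSONL file and a placeholder webhook; the field names are illustrative, not a standard schema.

```python
# Sketch of agent action logging: one JSON line per action,
# appended locally. The webhook forward is left as a comment
# because the URL would be specific to your spreadsheet tool.
import json
from datetime import datetime, timezone

LOG_PATH = "agent_actions.jsonl"

def log_action(endpoint: str, data_summary: str) -> dict:
    """Record an agent action so it can be traced later."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "endpoint": endpoint,
        "data_summary": data_summary,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    # To mirror into a sheet, POST `entry` to your webhook here, e.g.:
    # requests.post(WEBHOOK_URL, json=entry)
    return entry

log_action("https://api.example.com/v1/orders", "quote request, 3 items")
```

Summaries, not raw payloads, keep the log itself from becoming a second leak path.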

3. Insert One “Human Checkpoint”

While full automation is appealing, insert a step where a human verifies the action right before sending, ordering, or sharing data externally. This alone can prevent most catastrophic incidents. It adds perhaps five minutes to the agent's turnaround. Weigh five minutes against a loss of several hundred thousand yen; the choice is clear.
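The checkpoint can be a thin gate in front of the agent's dispatcher. This is a sketch under assumptions: the action names and the dispatch stub are hypothetical, and `confirm` is injected so the gate can be tested without a terminal.

```python
# Sketch of a human checkpoint: externally visible actions require
# interactive confirmation; everything else runs unattended.
# Action names and the dispatch stub are illustrative.
EXTERNAL_ACTIONS = {"send_email", "place_order", "share_data"}

def execute(action: str, payload: dict, confirm=input) -> str:
    """Run an agent action, pausing for a human on external ones."""
    if action in EXTERNAL_ACTIONS:
        answer = confirm(f"Approve {action} with {payload}? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked"
    # ... dispatch to the real handler here ...
    return "executed"

# Injecting `confirm` lets you exercise the gate in code:
print(execute("place_order", {"sku": "A-100"}, confirm=lambda _: "n"))  # blocked
```

Defaulting to "blocked" on anything other than an explicit "y" is the important design choice: a distracted operator hitting Enter must not approve an external action.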

4. Create a One-Page “AI Agent Usage Policy”

Thick policy documents are rarely read. A single A4 page with 5-7 bullet points is sufficient. The minimum items to include are:

  • Approval from a supervisor is required before providing customer data or personal information to the agent.
  • New API integrations must be reported in advance in a shared chat.
  • API keys are prohibited from individual management and must be recorded in a shared management ledger.
  • Review usage status and billing amounts monthly.

Pin this in Slack or chat tools. It is more important that “everyone knows” than the format itself.

Small Businesses Can “Safeguard Small”

Large corporations spend tens of millions to hundreds of millions of yen annually on security. Small businesses do not need to mimic this, nor can they.

On the other hand, small businesses involve fewer people, and the number of API keys and tools in use is limited. Because the overall picture is graspable, simply creating a state where everyone knows who is using what can yield stronger defenses than a large corporation's.

AI agents are indeed convenient. If routine sales administration can be semi-automated for 5,000 yen a month, it would be foolish not to use them. However, run without safety mechanisms, their costs can multiply in an instant.

If you spend one hour on implementation, spend one hour on safety measures as well. This is the minimum etiquette for small businesses in the age of AI agents.

Future Points of Interest

  • Standardization of AI Agent Permission Management: OpenAI and Google are beginning to develop frameworks to restrict the action scope of agents. By late 2025, more tools will emerge that allow users to set limits on what an agent can do without coding. This will be a tailwind for small businesses.
  • API Usage Fee Alert Functions: To prevent surges in pay-per-use costs, features that automatically halt operations once a certain amount is exceeded are being implemented across various platforms. OpenAI already allows for setting monthly caps. If you haven’t set this up yet, you should do so immediately.
  • Lowering the Cost of Security Audits for Small Businesses: Security audits that previously cost over 3 million yen are beginning to drop to 100,000 to 300,000 yen with the advent of automated audit tools using AI. This is becoming a realistic option as an annual “health check.”
