From $1,000 a Month to $200—India’s ‘Frugal AI’ Challenges the Need for Massive Models
GPT-4, Claude, Gemini—The evolution of AI has been a “race to make models bigger.” The number of parameters has surged into the hundreds of billions and even trillions, with usage fees often reaching thousands of dollars per month.
But let’s pause and think.
Does your company really need a trillion parameters for its operations?
In India, an approach known as "Frugal AI" has reached the practical stage: lightweight models that minimize parameter counts and specialize in specific tasks. At the same time, U.S.-based Liquid AI has demonstrated practical performance with a Vision-Language model of only 450 million parameters that can run on smartphones.
To put it simply, small and specialized models are increasingly proving to be more cost-effective and productive for small and medium-sized enterprises (SMEs) than massive models. Let’s look at some concrete numbers.
Cost Comparison: A 5x Difference That Changes Business Decisions
First, let’s lay out the real numbers.
| Item | Large Models (e.g., Claude Max) | Specialized Lightweight Models (e.g., Liquid AI) |
|---|---|---|
| Monthly Usage Fee | Approximately $1,000–$3,000 (¥120,000–¥450,000) | Approximately $200 (¥30,000) |
| Required Hardware | Cloud GPU Servers | Edge Devices / Existing PCs |
| Initial Setup Cost | Several million yen or more | Tens of thousands of yen and up |
| Scope of Tasks | General-purpose (adequate for various tasks) | High accuracy for specific tasks |
$1,000 versus $200. When calculated annually, that’s $12,000 (approximately ¥1.8 million) compared to $2,400 (approximately ¥360,000). The difference is ¥1.44 million per year. For a small business with around ten employees, this difference could determine whether they can hire a new employee.
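The arithmetic behind that annual gap can be laid out explicitly. The prices are the article's examples; the exchange rate of ¥150 per dollar is an assumption for illustration.

```python
# Sketch of the annual cost gap between a large hosted model and a
# lightweight specialized one, using the article's example prices.
USD_TO_JPY = 150        # assumed exchange rate

large_monthly = 1_000   # USD/month, large-model plan
small_monthly = 200     # USD/month, specialized lightweight model

large_annual = large_monthly * 12   # 12,000 USD
small_annual = small_monthly * 12   #  2,400 USD
gap_usd = large_annual - small_annual
gap_jpy = gap_usd * USD_TO_JPY

print(f"Annual gap: ${gap_usd:,} (~¥{gap_jpy:,})")
# -> Annual gap: $9,600 (~¥1,440,000)
```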
Moreover, while general-purpose large models can do "a bit of everything," they are increasingly being outperformed by specialized models on specific tasks. Paying more for lower accuracy: that is the reversal now taking place.
India’s ‘Frugal AI’—Rationality Born from Constraints
The spread of Frugal AI in India remarkably parallels the situation of small and medium-sized enterprises in rural Japan.
- There is no abundant IT budget.
- Some areas have unstable high-speed internet.
- There is variability in IT literacy among the workforce.
- However, there are clearly defined operational challenges that need solving.
AI startups in India, such as Sarvam and Krutrim, treat these constraints as design premises from the outset. They did not simply shrink large models; they built from the ground up on the philosophy that "this is all that is needed for this task."
For instance, in the education sector, lightweight chatbots that support multiple languages are providing learning assistance in rural areas facing teacher shortages. In agriculture, models that diagnose crop diseases using smartphone cameras operate without cloud connectivity. In healthcare, systems that triage high-priority patients from limited test data have been implemented in local clinics.
What they all have in common is the commitment to “achieve only the necessary accuracy for that task at the minimum cost.” Instead of creating a general-purpose AI that scores 100 points, they are developing specialized AIs that can achieve 90 points at one-tenth the cost.
This is not a compromise born from poverty. It is a rational business decision.
Liquid AI’s 450M Model—Vision-Language AI That Runs on Smartphones
Another noteworthy development is the LFM2.5-VL-450M announced by U.S. company Liquid AI.
With only 450 million parameters, it is less than one five-hundredth the size of GPT-4, whose parameter count is estimated to exceed one trillion. Yet it delivers practical performance on Vision-Language tasks, which require simultaneous image recognition and language understanding.
What is most shocking is the operating environment:
- NVIDIA Jetson Orin (an edge device costing about ¥70,000–¥80,000)
- Smartphones equipped with Snapdragon 8 Elite
- Common laptops
No cloud GPU servers are needed. In other words, running costs approach zero.
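A back-of-the-envelope calculation shows why a 450M-parameter model fits on these devices: weight memory is roughly the parameter count times the bytes per parameter at a given precision (activations and runtime overhead excluded, so the real footprint is somewhat larger).

```python
# Rough weight-memory footprint of a 450M-parameter model
# at common inference precisions. Activations excluded.
params = 450_000_000

for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / (1024 ** 3)
    print(f"{name}: ~{gib:.2f} GiB of weights")
# fp16 comes out to roughly 0.84 GiB -- comfortably within the
# memory budget of a modern smartphone or a Jetson-class board.
```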
Consider this in the context of SMEs. For tasks like visual inspection on manufacturing lines, inventory counting in warehouses, and customer traffic analysis in stores, businesses previously faced a choice between paying tens of thousands of yen monthly for expensive cloud AI services or investing hundreds of thousands of yen in dedicated hardware.
Now, a world is emerging where tasks can be performed with an ¥80,000 device and a free model. An investment of ¥3 million can be reduced to ¥80,000. This price disruption fundamentally changes how SMEs can leverage AI.
“So, what should our company do?”
This is the crux of the matter. Hearing about the Indian cases or Liquid AI and simply saying "That's amazing" changes nothing.
Here are three actions that small and medium-sized enterprises in rural Japan can take starting today.
1. First, Decide on Just One Use Case
Implementing general-purpose AI and thinking, “Let’s use it for something” is the worst approach. I’ve seen many companies pay thousands of dollars a month only to end up using ChatGPT to create meeting minutes.
The opposite should be done. Identify just one specific task you want to automate. Whether it’s sorting invoices, inspecting product appearance, or handling initial customer inquiries, it doesn’t matter. By narrowing it down to one, the required model size and costs will dramatically decrease.
2. Choose a Model That Runs Locally from the Start
Cloud AI is convenient, but monthly fees accumulate. Pay ¥1 million a year for cloud services, and after five years the total is ¥5 million.
On the other hand, if you run a lightweight model like Liquid AI on a local device, the initial investment can be just tens of thousands of yen. The only running cost is electricity. When comparing total costs over five years, there are cases where it can be less than one-tenth.
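The five-year comparison can be made concrete. The cloud figure is the article's example; the local electricity cost of ¥20,000 per year is an assumption added for illustration.

```python
# Sketch: 5-year total cost of ownership, cloud vs. local.
# All values in JPY; electricity cost is an assumed figure.
YEARS = 5

cloud_annual = 1_000_000           # cloud service fees per year
cloud_total = cloud_annual * YEARS

local_initial = 80_000             # edge device (Jetson-class board)
local_annual = 20_000              # assumed electricity cost per year
local_total = local_initial + local_annual * YEARS

print(f"cloud: ¥{cloud_total:,} / local: ¥{local_total:,}")
print(f"local is {local_total / cloud_total:.1%} of the cloud cost")
```

Even with a generous allowance for electricity, the local total lands well under one-tenth of the cloud total in this example.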
Moreover, data does not leave the company. For SMEs handling inspection data and customer information, this is a significant security advantage.
3. Don’t Seek Perfection. Aim for 80 Points
The greatest lesson Frugal AI teaches us is this: Trying to create a 100-point AI can increase costs tenfold. There are countless tasks where 80 points are sufficient.
If demand forecasting for inventory management is accurate 80% of the time, that alone can halve the out-of-stock rate. If an appearance inspection can automatically reject 80% of defective products, humans can handle the remaining 20%. Automating just 80% of the tasks previously done by humans can dramatically change productivity on the ground.
Behind the Competition for Massive Models, the True Winning Strategy is Emerging
OpenAI, Google, and Anthropic are pouring trillions of yen into the race for massive models. This competition drives the technology forward, but the biggest beneficiaries are not necessarily the large corporations running those massive models.
Insights gained from research on large models are enhancing the performance of lightweight models. Techniques such as distillation and quantization make it possible to compress the "knowledge" of a large model into a smaller one. In other words, a structure is emerging in which SMEs can access, for just tens of thousands of yen, the fruits of technology that large corporations developed at a cost of trillions of yen.
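To make "quantization" less abstract, here is a minimal sketch of symmetric post-training int8 quantization of a weight matrix. Real toolchains are far more sophisticated (per-channel scales, calibration data, mixed precision), but the core idea (trading a little accuracy for a 4x smaller footprint) is just this:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 plus one scale."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"size: {w.nbytes:,} -> {q.nbytes:,} bytes (4x smaller)")
print(f"max abs error: {np.abs(w - w_hat).max():.5f}")  # bounded by scale/2
```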
This is akin to the revolution sparked by cloud computing. SMEs that could not afford their own server rooms can now use the same infrastructure as large corporations through AWS and GCP. Now, a similar phenomenon is about to occur with AI models themselves.
Conclusion: Let’s Change the Question
The question of whether to implement AI is now outdated. The answer is undoubtedly yes.
What we should be asking is, “What is the minimum AI necessary for our operations?”
$200 a month, an initial investment of ¥80,000. What Frugal AI and Liquid AI have proven is that practical AI can operate at this scale.
The era of idolizing massive models is coming to an end. For SMEs, the correct answer is to be small, specialized, and operate locally. That’s all there is to it.
First, select one task and try it out starting next week.