Yotta is offering firms access to its Nvidia graphics processing units (GPUs) at less than 10% of the global average cost, a fraction of what companies pay its competitors for access to Nvidia GPUs for their artificial intelligence needs.
“We have just deployed the first batch of 4,000 Nvidia GPUs at our Navi Mumbai data centre, all of which is now sold out to enterprises. We have now asked Nvidia to expedite supply of the remaining 12,000 GPUs in order to complete our order, based on perceived demand for the AI cloud platform,” Sunil Gupta, co-founder and chief executive of Yotta, told Mint.
In December, Yotta had announced that it had placed an order for 16,000 Nvidia H100 GPUs. Gupta had told Mint in March this year that the company’s order and project to build cloud infrastructure that supports training of AI models was worth “almost $1 billion”.
He had added that the entire GPU deployment would be completed over two fiscal years, and that the company would look to offer equity to an investment partner to raise funds to complete the order.
Processing confidence
Gupta now says a higher-than-anticipated order book has led the company to expedite its order. Talks to find an investor, meanwhile, remain in the works.
Gupta’s confidence, interestingly, comes in contrast to what two industry veterans told Mint was a case of “empty demand hype” in the industry.
“India’s generative AI market is yet to show any substantial maturity in terms of tech spending, which remains conservative. Even startups building and training locally contextual AI models do not have very deep pockets for sustained cloud infrastructure expenditure. Any infrastructure provider offering a GPU cloud to domestic firms is almost certainly going to struggle to find buyers,” one of the executives said, requesting to remain anonymous.
The GPU, which until the advent of mainstream generative AI applications was largely restricted to gaming, has been transformational for some companies.
Nvidia, for instance, is today the world’s most valuable company at a market capitalisation of $3.35 trillion, having surpassed both Apple and Microsoft in the past month.
Since OpenAI’s ChatGPT set off a global craze for training AI models, Nvidia’s market cap has soared nearly tenfold, a rise that many have likened to the dot-com bubble and subsequent crash of over two decades ago.
Critical component
Nvidia’s chips now remain in short supply. Yotta’s Gupta said that currently, global hyperscalers offer enterprises on-cloud GPU access for up to $25 per GPU per hour, while the median price worldwide for such access is $12 per GPU per hour.
Yotta, on the other hand, is undercutting its competition by nearly 90% with one aim: drawing in clientele from North America and Europe.
“At present, 70% of our clients for the AI cloud are from global markets, while 30% are from India,” he noted.
Local clients, two industry officials said, are likely to come from India’s large-cap IT services base, as well as from firms looking to build local large language models such as BharatGPT. The latter, though, is currently not working with Yotta, BharatGPT co-founder Ankush Sabharwal told Mint.
Underlining the conservative GPU demand among local firms, Sabharwal said the need for GPU cloud access to train the BharatGPT model has “reduced to 1/6th” of what it was before.
“We’re now deploying our models with commercial and government clients to enable use cases, which is developing the datasets and models further. We’re not generating heavy revenue—we’ve just crossed $1 million, and we’re rationalizing costs since we want to develop more natural AI use cases such as voice interactions,” Sabharwal added.
Patchy demand
Could this demand mix spell trouble? Jayanth Kolla, an AI industry expert and co-founder of industry consultant Convergence Catalyst, said, “Yotta’s business model isn’t really new—its business model is akin to that of budget airlines. By placing a huge order, it has created a supply centre in a constricted market. By undercutting its competitors’ pricing to such an extent, it may look to create volume—but such a model is not sustainable in the long run.”
Yotta would be hoping otherwise. In FY23, prior to the AI boom, Yotta reported annual revenue of $12.49 million. The company now targets annual revenue of $1 billion by FY28, a towering projection implying roughly 140% compound annual growth.
At its present pricing of $2.6 per GPU per hour, Yotta’s installed base of 4,000 GPUs would fetch around $91 million annually. Once ramped up to the full 16,000 GPUs, the GPU cloud service should fetch nearly $365 million annually, a figure the company could increase by gradually raising prices once it has established long-term client relationships.
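Those revenue figures assume every GPU is billed for every hour of the year; a quick back-of-envelope calculation (the full-utilization assumption is ours, and real billing would come in lower) shows how the quoted numbers line up:

```python
# Sanity check of the revenue and growth figures quoted in the article.
# Assumes 100% utilization: every GPU billed for every hour of the year.
HOURS_PER_YEAR = 24 * 365

def annual_revenue(price_per_gpu_hour: float, gpus: int) -> float:
    """Annual revenue at full, round-the-clock utilization."""
    return price_per_gpu_hour * HOURS_PER_YEAR * gpus

# Current installed base: 4,000 GPUs at $2.6 per GPU per hour
current = annual_revenue(2.6, 4_000)    # ~$91.1 million

# Full order of 16,000 GPUs at the same price
full = annual_revenue(2.6, 16_000)      # ~$364.4 million

# Implied CAGR from FY23 revenue ($12.49M) to the FY28 target ($1B), 5 years
cagr = (1_000 / 12.49) ** (1 / 5) - 1   # ~1.40, i.e. ~140% per year

print(f"current: ${current/1e6:.1f}M, full: ${full/1e6:.1f}M, CAGR: {cagr:.0%}")
```

The ~$91 million and ~$365 million figures in the article, as well as the 140% CAGR, all check out under that assumption.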
Industry experts, however, say that while the plan looks legitimate on paper, implementing it could be a challenge. The senior executive cited above said, “If we were to look at domestic demand generation for access to compute, Yotta’s model does not do anything different. Eventually, more cloud operators are bound to catch up with Yotta in terms of pricing, which then might make ramping up the pricing harder.”
Convergence’s Kolla added, “Startups would look for access to AI compute through the centre’s subsidized India AI Mission. For global clients, latency will be an issue in routing their usage from North America to India. They will likely only stick around for as long as there is a cost advantage. Why would they be around for longer?”
Yotta’s Gupta said that Yotta is also in talks with the IT ministry to participate in the India AI Mission. “We’ve been speaking, and an outlay of ₹4,500 crore ($540 million) is expected for the AI compute. The final details are expected soon.”