Building India's Content From Inside Out: How ShareChat's CTO Is Rewiring A Plat ...

At A Glance:

  • Strategic Shift: Moved from ‘growth-at-any-cost’ to sustainable expansion; achieved EBITDA positivity ahead of target.
  • TikTok Learnings: Data-driven experimentation, global talent acquisition, and AI-first culture now embedded in ShareChat’s playbook.
  • Cost Discipline: 85 per cent server cost reduction via tech consolidation, multi-cloud adoption, third-party alternatives, and custom infra (e.g., in-house ML training platform).
  • AI/ML Core: Personalization and engagement prediction drive operations; Singapore AI hubs secure world-class talent, complemented by India’s fast-scaling pool.
  • Recommendation Systems: Built for implicit signals and discovery, unlike e-commerce intent-driven models; optimized for real-time responsiveness.
  • Vernacular Moat: Deep integration of 15 to 20 Indian languages into AI/ML architecture, a unique edge over global competitors.
  • Generative AI: Active pilots in content creation, dialect-sensitive translation, creator support, and new content generation.
  • Microdrama Integration: Distinct recommendation models for short-form vs. microdrama, unified by shared AI/ML principles.
  • Cloud Strategy: 100 per cent cloud-hosted (India regions); optimised costs make on-premise ROI unattractive.
  • Future Tech Stack: Unified, simplified architecture to decode user behaviour across formats; advanced AI models adopted as compute costs fall.
There is something quietly poetic about the setting. Two people sitting across from each other, one of whom spent a formative stretch of his career inside one of the most algorithmically sophisticated companies in the world, the other now asking him how that experience is shaping what might become India's most distinctive answer to short-form video. The conversation begins with that observation hanging in the air.
ShareChat CTO Nitin Jain does not bristle at the TikTok comparison. He leans into it, but with a correction that turns out to be the defining frame for everything that follows. TikTok, he explains, is not what the outside world thinks it is. From the outside, it reads as a social media company. From the inside, it is something else entirely: a data-driven and AI-powered content distribution company with an unusually deep commitment to experimentation and speed. The social layer is, in a sense, the product. The machinery underneath is the point that drives the company’s ambitions.
What defined TikTok, in his telling, was a "huge focus on experimentation, learning and then feeding that into the loopback so that people can create quickly," combined with a deliberate ability to "tap the talent across the world in various geographies like Singapore, China, obviously United States".
That internal worldview, he says, is precisely what he has been trying to transplant into ShareChat. Not the aesthetic, not the format, not the scroll-stopping feed, but the philosophy. The obsession with building tight feedback loops. The willingness to experiment fast, learn fast, and iterate. The deliberate hunt for talent across geographies, from Singapore to China to the United States, because the best work in AI and machine learning does not cluster neatly within any single country's borders.
ShareChat, he is clear, must be understood the same way. At its heart, it is a technology company, a data company, an AI and machine learning company. And with roughly 200 million monthly active users and billions of pieces of content moving through the platform, the core engineering challenge is exactly what it was at TikTok: how do you collect signals, understand them, and translate them into something a machine can act on to personalise what each individual sees?
"We are a tech product and, as part of a tech product, considering our large user base, we have about 200 million monthly active users. To be able to serve this user base at this scale, where we have billions of pieces of content, it basically is about how do we really collect and understand how users are engaging with content," explained Nitin Jain.
Jain also praised India's homegrown talent in product engineering and platform engineering. However, he did point out a few areas where it falls short.
"I think India is slightly thin when it comes to homegrown AI ML capabilities, specifically when it comes to state of the art, so bringing in this talent outside India and hiring at different geographical locations and then pairing some of these smarter people within India, to be able to form a formidable team that is able to deliver similar goals," he added.
The Mandate That Changed Everything

The story of ShareChat's recent years is, at one level, a very familiar one. It is the story of a company built during a period of abundant capital discovering, as that capital dried up, that growth at any cost is not a strategy. It is a gamble.
By late 2023 and early 2024, it had become clear that a reset was necessary. The leadership made a deliberate call: before the business could grow again, it had to control what it was spending. Technology infrastructure was identified as the most significant lever. It was also where the incoming CTO was handed his mandate.
The terms were stark. Within six to nine months of joining, the technology cost base had to be turned cash-flow neutral. Not reduced. Not trimmed. Neutral. The company needed to be earning more than it was burning. "Within me joining, in the next six to nine months you will have to basically turn cash flow neutral so that we are earning more than what we are spending," Jain added.
What followed was, by his own description, a process of reinvention at multiple levels simultaneously. The results, when they came, were dramatic. In the financial year 2024, technology burn fell by 70 per cent year on year. The company remained largely flat in revenue growth during that period, but the cost discipline was unmistakable. In FY25, EBITDA losses were reduced by nearly 80 per cent. Then, in FY26, as the company got back to growth, revenue expanded by 40 per cent while losses continued to shrink to approximately 100 crore.
By April 2026, as the conversation takes place, the CTO expects ShareChat to cross into EBITDA positive, PAT positive, and cash-flow positive territory simultaneously. Not quarter two. Not the second half of the year. This month. From the second quarter of FY27 onwards, the expectation is sustained profitability across all three dimensions.
"Doing things sustainably has become the DNA of the company itself," he reflects. He added that it has stopped being a mandate and started being the company’s innate way of doing things. What began as an instruction handed down from the top has become the instinct of the organisation.
The 85 Per Cent Problem

"It's not actually half. We almost reduced it by close to 85 per cent from the peak. From peak to my today's spend, we have reduced the cost by 85 per cent," Jain stated when asked about the server cost per user.
The true headline number in ShareChat's cost story is 85 per cent. That is the reduction in server costs from peak to present day, a figure larger than many in the industry would assume is even achievable without fundamentally changing what a platform does.
The CTO explains that no single lever achieved it. It was, instead, a series of choices made simultaneously across multiple dimensions, each small in isolation, collectively transformative.
The first was workload consolidation. In 2022 and 2023, ShareChat's infrastructure was spread across multiple cloud providers without a clear unifying logic. The process of consolidating those workloads began, but with a deliberate choice to remain multi-cloud. The reasoning was not contradictory: staying with multiple providers was a decision made to continue leveraging the different strengths of each, while managing the relationships with those providers more deeply and therefore more cost-effectively.
"We made a conscious choice to stay multi-cloud as well, but that was more of a conscious choice in that we want to continue leveraging the capabilities of multiple cloud providers, but simultaneously managing those relationships deeper as well," he says.
The second lever was a move away from native cloud provider services towards third-party alternatives. In many cases, third-party services that run on top of major cloud platforms are both more capable and significantly cheaper than the native equivalents. ShareChat systematically evaluated what it was using and replaced native services where better alternatives existed.
The third, and perhaps most revealing, lever was the decision to build internal infrastructure from scratch. The most vivid example involves machine learning model training. Training large ML models is computationally intensive and GPU-dependent. GPUs are expensive. The conventional approach, especially for a company trying to move fast, is to pay cloud providers for GPU access. ShareChat instead built its own custom training platform, one that intelligently combines the characteristics of CPUs and GPUs to drive both types of hardware more effectively. Rather than adopt what the giants of the industry were doing, the team invented their own solution, keeping costs low while maintaining the scale required.
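The article does not detail the internals of ShareChat's training platform, but the general idea of driving CPUs and GPUs together can be sketched as a producer-consumer pipeline: CPU workers keep a bounded buffer of preprocessed batches ahead of the trainer, so neither type of hardware sits idle waiting on the other. Everything below is an illustrative assumption, not ShareChat's actual system.

```python
import queue
import threading

# Hypothetical sketch: CPU workers preprocess batches into a bounded
# queue while the GPU-side trainer consumes them. Names, batch counts,
# and "work" are stand-ins for illustration only.

BATCHES = 8
prefetch = queue.Queue(maxsize=2)  # bounded: CPU stays ~2 batches ahead

def cpu_preprocess():
    """CPU side: decode, augment, and batch raw samples."""
    for i in range(BATCHES):
        batch = [x * 0.5 for x in range(i, i + 4)]  # stand-in for real work
        prefetch.put(batch)   # blocks when the trainer falls behind
    prefetch.put(None)        # sentinel: no more batches

def gpu_train():
    """GPU side: consume ready batches and run training steps."""
    steps = 0
    while True:
        batch = prefetch.get()
        if batch is None:
            break
        _ = sum(batch)        # stand-in for a forward/backward pass
        steps += 1
    return steps

producer = threading.Thread(target=cpu_preprocess)
producer.start()
steps_done = gpu_train()
producer.join()
print(steps_done)  # all 8 batches consumed
```

The bounded queue is the key design choice: it applies backpressure, so expensive GPU time is never spent waiting on preprocessing, and CPU memory never fills with unconsumed batches.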
The fourth lever was granular: an almost obsessive scrutiny of every line item in the technology footprint. Storage costs. Data transfer costs. Every byte being sent out through a provider's network. Every disk being written to. The CTO describes going after every single aspect of the footprint the company was leaving behind, and optimising each one.
"We literally look at every single aspect of the footprint that we are leaving behind, storing on the disks or we are sending out, and then optimise those", Jain added.  
The result of all this is a company that is now, paradoxically, more confident staying on cloud than it would have been otherwise. When Nitin Jain joined, he was genuinely considering a hybrid or on-premise strategy. Having pushed cloud optimisation as far as they have, the ROI calculation for on-premise has become unappealing. All of ShareChat's data sits in the cloud. For privacy reasons, it all sits in India, concentrated primarily in the Mumbai region. The data does not leave the country.
Singapore, Talent, And The Two-Year Gap

If the cost story is about what has already been built, the talent story is about what still needs to be.
ShareChat runs AI centres in Singapore, with additional hiring activity across Europe. The choice of Singapore is not accidental. The CTO spent time at TikTok in Singapore and came away with a specific conviction: the talent density there is extraordinary.
"I'm quite convinced the talent which is out there, that talent basically matches the talent which comes from other geographies like the Western World and Silicon Valley, that is the reason we want to be out there", Jain explained.
It is not merely competitive with India; it is competitive with the best technical talent anywhere in the world, including the Western centres and Silicon Valley. When you need people who are operating at the absolute frontier of what is possible in AI and machine learning, Singapore is where you go.
The role of those centres is specifically to build ShareChat's ML stack, and at the heart of that, its recommendation systems. These are the systems that must infer, from the implicit signals a user leaves behind through their behaviour, what each individual likes, dislikes, and wants to see next. Users do not tell you what they want. They reveal it, one scroll and one pause at a time, and the machine has to read those signals in real time.
However, Jain is careful not to frame this as a simple indictment of Indian AI talent. The word he uses is density, not quality. India has had a software engineering industry for fifty to sixty years. The depth of product engineering and platform engineering capability built up over that time is real and considerable. The gap in AI and machine learning is not one of raw ability; it is one of volume and, critically, of experience. The people in Singapore and the Western centres who have already solved these kinds of problems at scale bring something that cannot be replicated quickly: the lived knowledge of having done it before.
His estimate of the lag is approximately two years. Whatever was the state of the art in 2023 and 2024, India has largely reached it now. "Whatever was the state of art, let's say in 2023, 2024, we are already there. But then in two years the world has advanced so much", he explains.
But the frontier has moved. Generative AI, large language models, the architectures currently being explored in the West: ninety per cent of that work is still happening outside India. The knowledge will transfer, he believes, but it has not fully transferred yet. Until this class of technology matures and that knowledge circulates globally, the gap will persist.
The hybrid model ShareChat runs attempts to work around this. Global talent, often experienced practitioners who have already built comparable systems elsewhere, is paired with young Indian engineers who are ambitious, capable, and moving fast. The CTO's observation is that this pairing tends to work. The Indian engineers step up. In some cases, they exceed what the global talent brings to the table. The gap, he insists, is closing.
Jain adds, "90 per cent of this work is being done in the Western world. That knowledge will still have to transfer back to India. Until this class of technology matures and that knowledge circulates globally, we'll still be playing, in my opinion, a catch-up game."
What Recommendation Systems Are And What They Actually Do

When the conversation turns to recommendation systems, Nitin Jain makes a distinction that is easy to miss but important. The world of recommendation, he explains, is not one world. It is at least two, and they are fundamentally different.
In e-commerce, the user tells you what they want. When someone goes to Amazon and types a search query, they are providing explicit intent. Amazon knows immediately what kind of buyer it is dealing with, whether they are value-driven or convenience-driven, what category of product interests them. The signals are abundant and direct. The recommendation problem, while complex, starts with something concrete.
Social media is different. The user tells you nothing. The problem begins the moment they open the app, before they have done anything at all. What is the very first piece of content you show them? What will make them stay? The entire enterprise of content discovery, the process of surfacing something worth watching to someone who has not asked for anything, is what recommendation means in this context. It is a problem of inference, not of lookup.
"The problem really starts as soon as you land onto the app. It's like, what is that very first content that I show you which you find interesting and then you continue to look forward to?”, he explains.
ShareChat's technology stack, he says, resembles TikTok's and Meta's at a high level. The broad architecture is similar. He adds, however, "the devil lies in the details.” The differences are in the details, and the most important details are about data quality and real-time responsiveness.
Data quality means maintaining clean, high-integrity signals about how users engage with content, without compromising privacy. It means being disciplined about what you feed into your models. The recommendation system is only as good as the data it learns from, and dirty data produces systems that learn the wrong things.
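The data-hygiene discipline described above can be sketched as a validation pass over raw engagement events before they ever reach a model: drop malformed records, implausible timestamps, and duplicate signals. The field names and checks here are illustrative assumptions, not ShareChat's schema.

```python
import time

# Illustrative sketch: keep only well-formed, non-duplicate, plausible
# engagement events. Real pipelines would add bot filtering, privacy
# scrubbing, and schema versioning.

def clean_events(events, now=None, seen=None):
    """Return only events that are safe to feed into a model."""
    now = now if now is not None else time.time()
    seen = set() if seen is None else seen
    out = []
    for e in events:
        if not {"user_id", "item_id", "action", "ts"} <= e.keys():
            continue                       # malformed: missing fields
        if e["ts"] > now:
            continue                       # implausible: future timestamp
        key = (e["user_id"], e["item_id"], e["action"])
        if key in seen:
            continue                       # duplicate signal
        seen.add(key)
        out.append(e)
    return out

raw = [
    {"user_id": 1, "item_id": 7, "action": "like", "ts": 100},
    {"user_id": 1, "item_id": 7, "action": "like", "ts": 101},  # duplicate
    {"user_id": 2, "item_id": 9, "action": "share"},            # malformed
]
cleaned = clean_events(raw, now=200)
print(len(cleaned))  # 1
```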
Real-time responsiveness means something specific in this context: the system must respond at the microsecond level. When a user likes or dislikes a piece of content, the very next piece surfaced must already account for that signal. Non-social platforms can afford to wait. They can look at the last hour of engagement and update accordingly. A social content platform cannot. The feedback loop must be essentially instantaneous.
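A toy version of that instantaneous feedback loop: each like or dislike immediately reweights the user's per-topic affinities, so the very next candidate scored already reflects the signal. Tags, weights, and the learning rate are illustrative assumptions, not ShareChat's model.

```python
# Hypothetical sketch of a real-time feedback loop: the profile is
# updated in place on every signal, and the next item is ranked against
# the already-updated profile.

def update_profile(profile, item_tags, liked, lr=0.3):
    """Nudge per-tag affinities toward (or away from) the item's tags."""
    delta = lr if liked else -lr
    for tag in item_tags:
        profile[tag] = profile.get(tag, 0.0) + delta
    return profile

def next_item(profile, candidates):
    """Pick the candidate whose tags best match current affinities."""
    return max(candidates,
               key=lambda c: sum(profile.get(t, 0.0) for t in c["tags"]))

profile = {}
update_profile(profile, ["cricket"], liked=True)    # user liked a video
update_profile(profile, ["cooking"], liked=False)   # user skipped another

candidates = [
    {"id": "a", "tags": ["cooking"]},
    {"id": "b", "tags": ["cricket"]},
]
chosen = next_item(profile, candidates)
print(chosen["id"])  # "b": the like is already reflected
```

In production the same idea plays out across feature stores and low-latency serving layers rather than an in-memory dict, but the contract is identical: the signal must be visible to ranking before the next item is served.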
And then there is the layer that makes ShareChat's recommendation problem structurally different from any other platform in the world.
The Vernacular Architecture

ShareChat was built from the ground up to handle between fifteen and twenty of India's top languages. This is not a feature added on top of an existing system. It is a foundational architectural choice that shapes every decision made about how the AI and machine learning systems are designed.
"Our technology stack was built keeping in mind that we are going to deal with somewhere between 15 to 20 top languages in India," the CTO says. "Versus when you look at global systems, they predominantly are more focused on building around English and the other popular languages."
Global platforms, Jain notes, have largely been built around English and a small set of widely spoken languages. The underlying assumptions baked into their architectures reflect that. When you build for linguistic diversity at the scale India requires, those assumptions break. The design choices, the ways you structure the data, the ways you train your models, all of it must be rethought.
This, he argues, is one of ShareChat's genuine moats. It is not the only one, and he does not claim it is. But it is real, and it is hard to replicate. For any competitor to build what ShareChat has built in this dimension, they would have to start again from the foundation. The incumbency advantage here is not cosmetic.
The broader strategic frame Jain offers is that ShareChat is not positioning itself as a TikTok alternative in the straightforward sense. It is not simply trying to do what TikTok does, in India, in Indian languages. It is providing something different: deeper regional reach, for both users and creators, in markets and communities that global platforms have largely not engaged with meaningfully.
The goal is to build micro and macro regional communities, to give people a platform that is genuinely close to their community rather than generically global.
"It's just so hard to build," he says. "For anybody else to be able to build the similar capability that we have built, that's going to be very hard. That continues to be one of the big differentiators as well."
The Microdrama Bet

One of the more technically interesting challenges ShareChat has taken on is the integration of its microdrama platform, launched in May 2023, with its existing short-form video offering.
"We have both contents on the same platform for the users, and both contents have got a very, very different characteristic. One is sort of scroll-worthy, another one is depth of consumption," Jain says of the complexity of having two content types on the same platform.
The two content types look superficially similar: they are both video, both consumed on a phone, both served through an algorithm. But they are behaviourally distinct in ways that matter enormously for how a recommendation system works.
Short-form video is scroll-driven. The engagement signals are immediate: a like, a share, a download, the absence of a scroll. The model is optimised to read these signals and surface the next piece of content that will generate similar engagement. The unit of consumption is the single video.
Microdrama is different. It is serial. The user is following a narrative across episodes. The relevant signal is not whether they liked a specific piece of content but whether they watched the next episode in a series, and the one after that, and the tenth episode, and the twentieth. Depth of consumption is the metric, not breadth of engagement. "Microdrama is more like how deep did you go in terms of consuming that content", he adds.
These two different signal structures require two different model architectures. The principles underlying the models are similar, but the specific data fed into them, the signals they are asked to optimise for, and the outputs they produce are all different.
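The two signal shapes being contrasted can be sketched side by side: short-form optimises per-item engagement across many videos, while microdrama optimises how deep into a single series a user goes. The metrics below are illustrative assumptions, not ShareChat's actual objectives.

```python
# Illustrative sketch of the two signal structures: breadth of
# engagement for short-form, depth of consumption for microdrama.

def shortform_signal(events):
    """Breadth: fraction of served videos with positive engagement."""
    if not events:
        return 0.0
    positive = sum(1 for e in events if e in {"like", "share", "download"})
    return positive / len(events)

def microdrama_signal(episodes_watched, total_episodes):
    """Depth: how far into the series the user has gone."""
    if not episodes_watched:
        return 0.0
    return max(episodes_watched) / total_episodes

# A scroller who engages with half of what they see:
breadth = shortform_signal(["like", "scroll", "share", "scroll"])
# A viewer who has reached episode 10 of a 20-episode series:
depth = microdrama_signal({1, 2, 3, 10}, total_episodes=20)
print(breadth, depth)  # 0.5 0.5
```

The same numeric score means very different things in each world, which is why, as the article notes, each content type gets its own model rather than one shared objective.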
ShareChat's solution is an aggregator layer that sits above both models. The short-form model surfaces, say, four short videos. The microdrama model surfaces two series. The aggregator then combines these recommendations and sequences them in a way that reflects the individual user's balance of interest between the two content types. A user who is deeply engaged with a particular drama series will see more of it. A user who is primarily a short-form scroller will see mostly that. Someone in between gets a blended feed, calibrated to their behaviour.
The challenge the CTO articulates is that the platform itself cannot tell, when a new user arrives, whether they are a short-form person or a microdrama person. It has to figure that out by exposing them to both types of content and reading the signals.
"The short-form video model says these are the four short form videos that should be seen. The microdrama model says these are the two series that you should see. Now I need to combine these four plus two in a way that picks up your interest towards microdrama or short form and then appropriately lays out those contents in front of you," Jain explains.
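A toy version of the aggregator layer described above: each model proposes its slate, and the aggregator interleaves them in proportion to the user's observed share of microdrama interest. The interleaving rule, slate sizes, and ratio are illustrative assumptions.

```python
# Hypothetical sketch of the aggregator: interleave two model slates so
# microdrama items appear roughly in proportion to the user's share of
# microdrama interest (0.0 = pure scroller, 1.0 = pure drama viewer).

def aggregate(short_slate, drama_slate, drama_share):
    """Merge two slates using a credit counter: each position accrues
    `drama_share` of credit, and a drama item is placed whenever a full
    unit of credit has built up."""
    feed, s_i, d_i, credit = [], 0, 0, 0.0
    while s_i < len(short_slate) or d_i < len(drama_slate):
        credit += drama_share
        if credit >= 1.0 and d_i < len(drama_slate):
            feed.append(drama_slate[d_i]); d_i += 1; credit -= 1.0
        elif s_i < len(short_slate):
            feed.append(short_slate[s_i]); s_i += 1
        else:
            feed.append(drama_slate[d_i]); d_i += 1
    return feed

shorts = ["sf1", "sf2", "sf3", "sf4"]   # 4 from the short-form model
dramas = ["md1", "md2"]                 # 2 from the microdrama model

# A mixed user: roughly a third of their interest is in microdrama.
print(aggregate(shorts, dramas, drama_share=0.34))
# ['sf1', 'sf2', 'md1', 'sf3', 'sf4', 'md2']
```

A heavy drama viewer (`drama_share` near 1.0) would see the series surfaced first; a pure scroller (near 0.0) would see the drama items pushed to the tail, matching the calibrated-feed behaviour the article describes.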
The Future Stack

Five years from now, the CTO's vision for ShareChat's technology stack is one of simplification and unification. Today, there are effectively two content worlds on the platform: short-form video and microdrama. Both are available on a single surface, but the underlying systems that power them are still somewhat separate. The direction of travel is to bring them together into a single, more elegant stack that can learn user behaviour across both content types, and across whatever new content formats emerge in the future.
The other horizon he is watching is transformer-based architectures. These are the model architectures that have driven the most dramatic advances in AI over the last several years, the same family of architectures that underlies large language models and the most capable generative AI systems. They are, however, extremely computationally expensive to train and run. ShareChat has not yet gone in this direction, primarily because of the cost.
His expectation is that this changes as these models become more affordable. "I'm just hoping that we will eventually be able to make it there as well. I think we'll get there to start using those as those models become cheaper and our capabilities improve as well," he adds.
The CTO's expectation is that this changes. As transformer-based architectures become more cost-effective, as compute becomes cheaper, as optimisation techniques improve, he expects to begin integrating these models into ShareChat's recommendation infrastructure. That is when the gap he describes between where India's AI ecosystem is today and where the global frontier sits will matter most. Closing it is not just a talent question. It is a cost question. And if FY26 has demonstrated anything, it is that ShareChat knows how to handle cost.
The Creator's Argument

The conversation ends where it began: with a question about why. Why would a creator choose ShareChat over the platforms where the numbers are bigger, the audiences are larger, and the algorithms have had more years to learn?
The CTO's answer is not about features. It is about visibility. On global platforms, he argues, the barrier to being seen is very high. The algorithm needs to see a certain volume of views and engagement before it starts amplifying a creator's content. For a regional creator making videos in a specific dialect, about specific communities, for a specific audience, that volume is almost impossible to accumulate. The platform's default is to surface content that is already popular, which means regional creators are perpetually starting from behind.
ShareChat's platform, built as it is for regional depth, does not have this problem. A creator making content in Tamil Nadu's regional dialects, or covering events specific to a city in Haryana, or building a community around a particular local culture, can find their audience on ShareChat in a way that a global platform's algorithm simply will not facilitate. The micro-network can form. The community can build. The barrier is lower because the platform was designed with these creators in mind, not as an afterthought.
The CTO is careful not to position this as an either-or choice. He expects creators and users to be present across platforms simultaneously. ShareChat is not asking for exclusivity. It is asking for a legitimate reason to be chosen, and his argument is that for the long tail of India's regional creators, that reason is structural. It is baked into the architecture.
What’s Next For ShareChat And Moj?

The perfect way to define ShareChat would be to call it a data and AI company with a focus on experimentation. With 200 million Monthly Active Users (MAU), a deep understanding of user engagement with content and personalised experiences, a focus on deeper regional reach and language diversity in India (unlike global platforms), and an aim to build micro and macro regional communities, ShareChat is on the cusp of achieving something genuinely unique.
ShareChat has successfully navigated a challenging financial period, achieving profitability through aggressive cost optimization and a focused tech strategy centered on data, AI, and a deep understanding of India's vernacular markets.
Its differentiated approach to regional content and creator support positions it uniquely against global competitors. The company is actively investing in AI, including generative AI, to enhance content creation, accessibility, and user experience, while managing the talent gap and cost implications of advanced AI.
Nitin Jain, with his strong grasp of both technical details and business strategy, highlights a clear, disciplined approach to cost management, strategic talent acquisition, and leveraging AI to address unique market challenges. The company's regional focus is presented as a significant competitive advantage and a foundational element of its tech architecture.
ShareChat has built something genuinely distinctive, a platform architected for India's linguistic and regional complexity at a level no global competitor has matched. Whether that architecture, that vernacular moat, can translate into the kind of scale that makes a platform indispensable rather than merely useful: that is the question that the next several years will answer.
The technology, for the first time in a while, is no longer the constraint.