
AI-optimised Cloud Infra Set For Major Growth: Report


AI-optimised infrastructure as a service (IaaS) is emerging as the next disruptive growth engine for AI infrastructure, according to a report from Gartner. This specialised cloud service segment is poised for significant expansion as organisations increasingly adopt artificial intelligence (AI) and generative AI (GenAI).
The AI-optimised IaaS market encompasses spending on high-performance computing (HPC) resources—such as graphics processing units (GPUs), application-specific integrated circuits (ASICs), and other AI accelerators—designed for large-scale AI processing. Gartner projects end-user spending on this infrastructure will grow 146 per cent by the end of 2025.
"Traditional IaaS is maturing; however, AI-optimised IaaS spending growth projections are higher than that of traditional IaaS over the next five years," said Hardeep Singh, Principal Analyst at Gartner. "As organisations expand their use of AI and GenAI, they will need specialised infrastructure such as GPUs, tensor processing units (TPUs) or other AI ASICs, high-speed networking and optimised storage for fast parallel processing and data movement. As such, traditional central processing unit (CPU)-based IaaS will face significant challenges in meeting these demands."
Gartner estimates worldwide end-user spending on AI-optimised IaaS will total $18.3 billion by the end of 2025 and $37.5 billion in 2026.
A key driver of this growth will be rising demand for running inference workloads. As AI adoption scales across industries, inference will become the dominant workload driving demand for AI-optimised IaaS: Gartner projects that end-user spending on inferencing will surpass that of training-intensive workloads by 2026. Spending on inference-focused applications is expected to reach $20.6 billion in 2026, up from $9.2 billion in 2025. In 2026, 55 per cent of AI-optimised IaaS spending will support inference workloads, a share projected to exceed 65 per cent in 2029.
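The quoted figures can be cross-checked with a little arithmetic; the following sketch derives the 2026 inference share and the implied year-over-year growth from the dollar amounts cited above (the growth percentage is derived here, not stated in the report):

```python
# Arithmetic check of the Gartner figures quoted above.
# Dollar amounts are as cited; percentages are derived for illustration.

spend_2026_total = 37.5   # $B, projected total AI-optimised IaaS spend, 2026
inference_2025 = 9.2      # $B, inference-focused spend, 2025
inference_2026 = 20.6     # $B, projected inference-focused spend, 2026

# Inference share of total AI-optimised IaaS spend in 2026
share_2026 = inference_2026 / spend_2026_total
print(f"2026 inference share: {share_2026:.0%}")  # ~55%, matching the report

# Implied year-over-year growth of inference spend, 2025 -> 2026
growth = inference_2026 / inference_2025 - 1
print(f"Inference spend growth 2025 to 2026: {growth:.0%}")
```

The share works out to roughly 55 per cent, consistent with the report's stated figure.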
"Unlike training, which involves intensive, large-scale compute cycles that occur during model development and ongoing updates, inference happens continuously — powering real-time applications such as chatbots, recommendation engines, fraud detection systems and industry-specific applications," said Singh.