Artificial Intelligence as Infrastructure: Industrialization, Capital Expansion, and the Probability of AGI
Artificial Intelligence is undergoing a structural transformation. Instead of functioning primarily as standalone software applications, AI is increasingly evolving into a foundational infrastructure layer upon which digital systems, services, and industries are built. This shift parallels earlier technological transitions such as the development of electricity grids, telecommunications networks, and cloud computing platforms. The transition toward AI as infrastructure is accompanied by massive capital investment, rapid expansion of computational hardware, and the construction of large-scale data center ecosystems.
This blog post examines the economic, technological, and probabilistic dimensions of this transformation. It analyzes infrastructure investment projections, compute scaling trends, the role of the Jevons Paradox in AI demand, and current probability estimates for Artificial General Intelligence (AGI) based on expert surveys and forecasting models.
Introduction
Artificial Intelligence has historically been developed and deployed as software applications designed to perform specific tasks. Examples include search engines, recommendation systems, and enterprise analytics tools. In this traditional model, the value of AI was primarily embedded within the application itself.
However, the current technological trajectory indicates a shift toward AI functioning as infrastructure rather than software. In this new paradigm, AI becomes a foundational computational layer that supports a wide range of applications, platforms, and services. Similar to utilities such as electricity, cloud computing, or the internet, AI systems provide capabilities that other technologies build upon.
This transformation is driven by advances in large-scale machine learning models, the rise of AI agents capable of interacting with external systems, and massive investment in computational infrastructure. As a result, AI is increasingly treated as a core industrial capability rather than a discrete software product.
The Traditional Software Model
Before the emergence of infrastructure-scale AI systems, software development largely followed a product-based structure.
Key characteristics of the traditional software model include:
Software functions as a finished product.
Applications are installed locally or accessed through a specific interface.
Programs execute predefined functions written directly into the codebase.
Most of the economic value is generated by the individual application.
In this framework, software products operate independently and are optimized for specific tasks. Although APIs and integrations exist, the underlying intelligence typically remains embedded within the application itself.
The Infrastructure Model of AI
The infrastructure model represents a fundamental shift in how AI systems are developed and utilized.
Core characteristics of this model include:
AI functions as a foundational computational layer.
Multiple applications and services interact with the same AI system.
Thousands of products can connect to a shared intelligence platform.
AI systems are continuously updated and retrained rather than deployed as static products.
Operation requires large-scale compute infrastructure, specialized chips, and high-capacity data centers.
Recent developments in AI agents illustrate this model clearly. Many modern agents can connect to external tools, software platforms, and databases, allowing a single AI system to orchestrate complex workflows across multiple applications.
As AI systems become increasingly integrated into this kind of infrastructure, the economic and technological dynamics surrounding them begin to resemble other large-scale industrial systems.
Capital Investment and Industrial Scale
The shift towards AI infrastructure requires enormous capital investment. Unlike traditional software, which can often be developed with relatively limited physical resources, infrastructure-level AI systems depend on large-scale hardware deployments and energy-intensive compute environments.
Investment firm Brookfield Asset Management projects that the global AI infrastructure sector will experience sustained growth driven by rising demand for compute capacity. According to Brookfield, the next decade could see approximately $7 trillion invested globally in AI infrastructure.
A projected breakdown of this investment includes:
$4 trillion allocated to semiconductor manufacturing and chip supply chains
$2 trillion allocated to AI data centers
$0.5 trillion allocated to energy generation and grid infrastructure
$0.5 trillion allocated to networking and fiber connectivity
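As a quick arithmetic check, the four allocations above can be summed to confirm they match Brookfield's headline figure. A minimal sketch:

```python
# Sanity check: the projected breakdown should sum to the headline
# figure of roughly $7 trillion (all values in $ trillions).
breakdown = {
    "semiconductors and chip supply chains": 4.0,
    "AI data centers": 2.0,
    "energy generation and grid infrastructure": 0.5,
    "networking and fiber connectivity": 0.5,
}

total = sum(breakdown.values())
print(f"Total projected investment: ${total:.1f} trillion")  # → $7.0 trillion
```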
The magnitude of this investment is comparable to earlier industrial infrastructure projects such as railway networks, telecommunications systems, and national electrical grids. This scale of spending suggests that AI development is transitioning from a software-driven industry into a capital-intensive industrial sector.
The Jevons Paradox and AI Demand
An important economic principle relevant to AI infrastructure growth is the Jevons Paradox. In 1865, economist William Stanley Jevons observed that improvements in the efficiency of coal-powered steam engines in the United Kingdom led to increased coal consumption rather than reduced consumption. The paradox occurs because efficiency improvements reduce costs, which in turn increases demand.
A similar pattern has appeared in modern electricity markets. Over the past seventy years, the real price of electricity declined by approximately 65 percent due to productivity improvements. Despite this decrease in cost, global electricity consumption increased roughly fifteenfold as more industries and households adopted electricity-intensive technologies.
This dynamic suggests that improvements in AI efficiency may not reduce demand for computation. Instead, cheaper and more efficient AI systems could significantly increase total compute consumption. As a result, AI computation may evolve into a utility-like service with continuously growing demand, similar to electricity or internet bandwidth.
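The mechanism behind this dynamic can be sketched with a constant-elasticity demand curve: when demand is sufficiently price-elastic, a drop in unit cost increases total consumption rather than reducing it. This is a toy illustration only; the elasticity value is hypothetical, not an empirical estimate for AI compute.

```python
# Toy illustration of the Jevons Paradox: if demand is sufficiently
# price-elastic, a drop in unit cost *increases* total consumption.
# The elasticity value here is hypothetical, chosen for illustration.

def demand(unit_cost, elasticity=1.5, scale=100.0):
    """Constant-elasticity demand curve: quantity ~ cost^(-elasticity)."""
    return scale * unit_cost ** (-elasticity)

before = demand(unit_cost=1.0)  # baseline consumption
after = demand(unit_cost=0.5)   # efficiency gains halve the unit cost

print(f"Consumption before: {before:.0f}")
print(f"Consumption after:  {after:.0f}")
# With elasticity > 1, consumption more than doubles when cost halves,
# so total spending (cost x quantity) rises rather than falls.
```

With an elasticity above 1, the rebound effect overwhelms the efficiency saving, which is the pattern Jevons observed for coal and the pattern described above for electricity.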
Compute and Hardware Expansion
The physical compute layer supporting AI is expanding exponentially.
Market projections indicate significant growth in AI hardware infrastructure:
The global AI data center GPU market is projected to grow from approximately $10.5 billion in 2025 to $77 billion by 2035, representing a compound annual growth rate of about 22 percent.
The number of installed AI GPUs worldwide is expected to increase from roughly 7 million units to approximately 45 million by 2034.
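The quoted figures are internally consistent, which can be verified by computing the implied compound annual growth rate (CAGR) from the start and end values:

```python
# Cross-check the quoted growth figures: a market growing from
# $10.5B (2025) to $77B (2035) implies a compound annual growth
# rate (CAGR) of roughly 22 percent.
start, end, years = 10.5, 77.0, 10

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # → Implied CAGR: 22.0%
```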
Currently, the market for AI data center GPUs is highly concentrated. NVIDIA controls approximately 92 percent of the AI data center GPU market, indicating the emergence of a dominant hardware layer similar to Intel’s role during the early personal computer era. Such concentration suggests that the foundational infrastructure of AI computing may become dominated by a small number of key hardware providers.
Data Center Expansion
Another clear signal of the AI infrastructure shift is the rapid expansion of data center capacity. Current estimates indicate:
Approximately 11,800 data centers operate globally.
Around 4,000 of these facilities are already optimized for AI workloads.
AI data center capacity is growing at approximately 33 percent per year.
Total AI data center power capacity is projected to reach roughly 82 gigawatts by 2034, representing nearly a tenfold increase compared to current levels.
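The growth rate and the capacity projection above can be checked against each other: at roughly 33 percent annual growth, a tenfold increase takes about eight years, which lines up with reaching ~82 gigawatts by 2034 from roughly a tenth of that today.

```python
# Consistency check on the data center projections: at ~33% annual
# growth, how long does a tenfold capacity increase take?
import math

growth_rate = 0.33
years_to_10x = math.log(10) / math.log(1 + growth_rate)
print(f"Years to grow tenfold at 33%/yr: {years_to_10x:.1f}")
# Roughly 8 years, consistent with a near-tenfold increase to ~82 GW
# by 2034 from current levels.
```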
The construction of these facilities requires significant investments in land, power infrastructure, cooling systems, and networking equipment, reinforcing the idea that AI development is becoming an industrial-scale process.
Probability Estimates for Artificial General Intelligence
Although infrastructure expansion suggests rapid technological progress, there is no universally accepted probability for the emergence of Artificial General Intelligence. Instead, researchers rely on expert surveys and forecasting models to estimate the likelihood of AGI development.
Expert Survey Estimates
One of the largest surveys conducted among AI researchers asked 2,778 experts to estimate when AGI might emerge.
Estimated probabilities include:
2027: approximately 10 percent
2047: approximately 50 percent
2100: approximately 90 percent
These results suggest that the probability of AI reaching human-level general capabilities approaches 50 percent by the middle of the twenty-first century.
In practical terms:
Success before 2047: approximately 50 percent
Failure before 2047: approximately 50 percent
This implies that the current expert consensus places mid-century AGI at roughly coin-flip odds.
Meta-Analysis of Multiple Surveys
A meta-analysis combining results from ten surveys covering more than 5,000 experts produced a similar estimate. The central result suggests a 50 percent probability that AGI appears between 2040 and 2061. This reinforces the conclusion that current expert forecasts cluster around the middle of the century.
Short-Term Forecasts
Forecasts focusing on the next decade produce significantly lower probabilities.
Typical estimates include:
Approximately 10–20 percent probability of AGI by 2030
Experimental forecasting exercises produce estimates ranging from 3 percent to 47 percent
Median predictions cluster around 12–13 percent
These results indicate that while AGI within the next decade is possible, it remains relatively unlikely.
Step-Based Probability Models
Some researchers approach the problem by modeling AGI development as a sequence of uncertain steps. These steps may include:
hardware scaling
algorithmic breakthroughs
alignment solutions
economic deployment
societal integration
When each stage has an independent probability of success, the overall probability declines due to multiplication of uncertainties.
For example:
Step A success probability: 70 percent
Step B success probability: 60 percent
Step C success probability: 50 percent
Total probability:
0.7 × 0.6 × 0.5 = 21 percent
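The calculation above generalizes directly: under the independence assumption, the overall probability is simply the product of the per-stage probabilities. A minimal sketch, using the illustrative values from the text:

```python
# Step-based probability model: when AGI requires several independent
# stages to succeed, the overall probability is the product of the
# per-stage probabilities. Values below are the illustrative ones
# from the text, not empirical estimates.
from math import prod

stage_probabilities = {
    "Step A": 0.7,
    "Step B": 0.6,
    "Step C": 0.5,
}

overall = prod(stage_probabilities.values())
print(f"Overall probability: {overall:.0%}")  # → Overall probability: 21%
```

Note that the independence assumption matters: if success at one stage makes success at the next more likely, the true overall probability would be higher than the simple product suggests.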
Because of this compounding uncertainty, some studies estimate extremely low probabilities for transformative AI in the near future. One analysis estimated roughly 0.4 percent probability of transformative AGI by 2043.
Researcher Sentiment
Survey data also reveals skepticism regarding current AI development methods. Approximately 76 percent of AI researchers believe that scaling existing systems alone is unlikely to produce AGI.
This implies that significant algorithmic or architectural breakthroughs may still be required.
Long-Term Outcome Estimates
Researchers also estimate the potential long-term impacts of advanced AI systems.
Approximate probability ranges include:
AI broadly benefits humanity: 60–70 percent
Major global disruption: 20–30 percent
Catastrophic outcome: approximately 10 percent
These estimates remain highly uncertain but illustrate the range of potential scenarios considered by experts.
Combined Probability Timeline
Combining multiple surveys produces a simplified probability trajectory:
2025–2030
Success: approximately 10–15 percent
Failure: approximately 85–90 percent
2030–2050
Success: approximately 50 percent
Failure: approximately 50 percent
2050–2100
Success: approximately 80–90 percent
Failure: approximately 10–20 percent
These projections suggest that the likelihood of AGI increases substantially over time as technological capabilities accumulate.
Key Sources of Uncertainty
Three major factors contribute to the uncertainty surrounding AGI timelines.
Algorithmic breakthroughs
Current AI architectures may not be sufficient to achieve general intelligence.
Compute scaling
Training compute for frontier AI systems has historically increased roughly tenfold every two to three years.
Data availability
High-quality training data may become scarce, potentially slowing future progress.
Changes in any of these variables could significantly accelerate or delay the development of AGI.
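The compute-scaling trend cited above can be annualized for intuition: a tenfold increase every two to three years corresponds to multiplying training compute by roughly 2x to 3x per year.

```python
# Annualizing the compute-scaling trend: a tenfold increase every
# t years corresponds to a per-year growth multiple of 10^(1/t).
for span_years in (2, 2.5, 3):
    annual_multiple = 10 ** (1 / span_years)
    print(f"10x every {span_years} years ≈ {annual_multiple:.1f}x per year")
```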
Artificial Intelligence is transitioning from a software-based technology into a large-scale infrastructure system supported by massive industrial investment. This transformation is characterized by high growth in compute capacity, data center construction, semiconductor manufacturing, and global capital flows.
At the same time, uncertainty remains high regarding the timeline for Artificial General Intelligence. Current expert surveys suggest relatively low probabilities of AGI in the near term but increasing likelihood over the course of the century. Understanding this transition requires analyzing both the economic scale of AI infrastructure and the probabilistic uncertainty surrounding the future development of general intelligence.
Sources
https://www.brookfield.com/sites/default/files/documents/Brookfield_Building_the_Backbone_of_AI.pdf
https://www.leanrs.com/insights/global-data-center-market-outlook-2030
https://arxiv.org/abs/2401.02843
https://arxiv.org/abs/2412.09385
https://arxiv.org/abs/2306.02519
https://research.aimultiple.com/future-of-ai/
https://www.yahoo.com/news/agi-could-now-arrive-early-130000193.html

