
Enterprise AI Pulse — H1 2025

Sample Report

WAVE: H1-2025  |  SAMPLE: 147  |  PUBLISHED: JUN 2025

How 147 F1000 executives are spending, deploying, and betting on enterprise AI — and where the market is headed in the next 12 months.

01

Executive Summary

The State of Enterprise AI in 2025

Enterprise AI has moved past the experimentation phase. The median F1000 company now spends $2.8M annually on AI — up 34% from H2 2024 — and 65% of respondents plan to increase budgets further in the next 12 months. This is no longer a discretionary line item; it's a core infrastructure investment.

But the picture is more nuanced than the headline numbers suggest. While foundation model spend remains the largest single budget line, the real heat is in agent orchestration — the technology with the highest momentum score in our index despite only 18% production adoption. Enterprises are betting big on agentic workflows while simultaneously struggling with data quality (cited by 58% as the top barrier) and multi-vendor complexity (73% now use two or more model providers).

The vendor landscape is consolidating around a few clear winners, but switching risk is real: Snowflake, LangChain, and Google Cloud all show switching intent above 20%. Enterprises are locking in infrastructure bets while hedging their model layer.

This report presents aggregated, anonymized data from our H1 2025 research panel. Individual company data is never disclosed. For a personalized benchmark comparing your organization to peers in your exact industry and size bracket, join the research panel at the end of this report.

02

Key Findings

What the Data Shows

Median AI Spend

$2.8M

Median annual AI spend across F1000 respondents, up 34% from H2 2024

Increasing Budgets

65%

of enterprises plan to increase AI spend in the next 12 months

Heat Index Leader

Agent Orchestration

Highest heat score (88) — 18% in production, 32% running pilots

Top Challenge

Data Quality

58% cite data quality & readiness as the primary barrier to AI ROI

Multi-Model Default

73%

of enterprises use 2+ foundation model providers — vendor lock-in fears persist

Hybrid Compute

48%

run AI workloads across public cloud + on-premises or colocation infrastructure

03

Respondent Profile

Who Participated

Industry Breakdown

n=147

Financial Services: 23%
Technology: 19%
Healthcare & Life Sciences: 14%
Manufacturing: 12%
Retail & Consumer: 10%
Energy & Utilities: 8%
Media & Telecom: 7%
Government & Defense: 6%

Company Size

By employee count

10,000+ employees: 29%
5,000-9,999: 24%
2,500-4,999: 21%
1,000-2,499: 16%
500-999: 10%

04

Spend

Where the Money Goes

AI budgets have bifurcated. The $2M–5M bucket contains the largest share of respondents (26%), representing the new "standard" enterprise AI investment. But the distribution has a long tail: 11% of respondents spend north of $10M, and the gap between the median and mean ($2.8M vs. $4.1M) reveals a significant population of heavy spenders pulling the average up — primarily F500 financial services and technology firms.
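The median-versus-mean mechanics described above can be shown with a minimal Python sketch; the spend figures below are invented for illustration and are not the survey data:

```python
import statistics

# Hypothetical annual AI spend figures ($M) for a right-skewed panel:
# most respondents cluster in the low single digits, while a few heavy
# spenders pull the mean well above the median.
spend = [1.5, 2.0, 2.5, 2.8, 3.0, 3.2, 12.0, 18.0]

median = statistics.median(spend)  # robust to the heavy tail -> 2.9
mean = statistics.mean(spend)      # dragged upward by the $10M+ spenders -> 5.625
```

The same mechanism explains the report's $2.8M median sitting well below its $4.1M mean.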

The more interesting question is what they're buying. Build-side line items (foundation model access, infrastructure, and the data foundation) consume roughly three-quarters of the median budget, while consulting and third-party labor take the rest. This split has shifted 8 points toward infrastructure since H2 2024, as enterprises move from "tell us what to do" to "help us build it."

AI Spend Distribution

Annual AI spend buckets across all respondents

Buckets: $0-500K · $500K-1M · $1M-2M · $2M-5M · $5M-10M · $10M-25M · $25M+

Median bucket: $2M-5M (26% of respondents); 11% of respondents spend above $10M.

05

Allocation

Budget Breakdown

Foundation Models & AI Vendors: 28% of budget · Median: $784K · Mean: $990K

Infrastructure (GPU/Cloud): 25% of budget · Median: $700K · Mean: $875K

Data Foundation & Structured Data: 20% of budget · Median: $560K · Mean: $700K

Consulting & Advisory: 15% of budget · Median: $420K · Mean: $525K

Third-Party Labor & Contractors: 12% of budget · Median: $336K · Mean: $420K

06

Trajectory

Where Budgets Are Heading

The directional signal is unambiguous: budgets are going up. But the rate of increase varies dramatically by category. Foundation model spend is the most aggressively increasing (72%), driven by new model releases, expanded context windows, and the shift from single-model to multi-model architectures. Infrastructure is close behind at 68%, reflecting the GPU build-out.

The one category showing meaningful deceleration is third-party labor (17% decreasing) — a leading indicator that enterprises are internalizing AI capabilities rather than outsourcing them. This is consistent with the "build, don't buy" sentiment we're hearing in practitioner interviews.

Foundation Models & AI Vendors: 72% increasing
Infrastructure (GPU/Cloud): 68% increasing
Data Foundation & Structured Data: 62% increasing
Consulting & Advisory: 55% increasing
Third-Party Labor & Contractors: 48% increasing · 17% decreasing
07

Emerging Technology

Heat Index

Technologies earning their way to the majors. The Heat Index tracks emerging technologies by momentum — a composite of adoption, pilot activity, and planning intent. When adoption crosses 50%, a technology graduates to the AI Core Categories.

Agent orchestration leads with a heat score of 88, despite only 18% production adoption. This is the "pilot purgatory" pattern: massive interest, significant pilot activity (32%), but real deployment challenges around reliability, observability, and governance. RAG systems (85) and AI monitoring (78) round out the top three — both are enabling layers that need to be in place before agentic workflows can scale.

At the bottom: federated learning (38) and AI governance tools (52) show limited momentum. The governance gap is particularly notable given that 42% of respondents cite governance as a top challenge — they know they need it, but the tooling hasn't earned enough trust yet.

08

Vendor Landscape

Market Share by Category

The vendor landscape is a story of breadth vs. depth. OpenAI dominates in raw adoption (78%) but its primary vendor share (42%) is lower than you'd expect — a direct consequence of the multi-model trend. Enterprises are using OpenAI alongside Anthropic, Google, and open-source models, treating foundation models as a commodity layer rather than a strategic commitment.

Anthropic has the strongest primary-to-adoption ratio among OpenAI's challengers in foundation models (22% primary out of 52% adoption), suggesting that companies that adopt Claude tend to make it their primary model — a retention signal worth watching. In agentic AI, LangChain/LangGraph leads adoption (45%), but CrewAI and Amazon Bedrock Agents are growing fast from smaller bases.
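The primary-to-adoption ratio is straightforward to compute from the vendor tables in this section; the helper name below is ours, and the inputs are the report's foundation-model figures:

```python
def primary_to_adoption(primary_pct: float, adoption_pct: float) -> float:
    """Share of a vendor's adopters who made it their primary choice --
    a rough commitment/retention signal."""
    return primary_pct / adoption_pct

# Figures from the foundation-model table in this report
anthropic = primary_to_adoption(22, 52)  # ~0.42: most Claude adopters commit
google = primary_to_adoption(15, 41)     # ~0.37
meta = primary_to_adoption(8, 34)        # ~0.24: broad trials, fewer commitments
```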

In compute infrastructure, the hyperscaler duopoly (AWS + Azure = 65% primary) remains entrenched, but specialist providers like CoreWeave (18% adoption) are gaining ground with AI-native workloads that demand bare-metal GPU access.

Foundation Models

Adoption % · primary vendor share · switching intent

OpenAI: 78% adoption · 42% primary · 12% switching
Anthropic: 52% adoption · 22% primary · 6% switching
Google (Gemini): 41% adoption · 15% primary
Meta (Llama): 34% adoption · 8% primary
Cohere: 18% adoption · 5% primary · 21% switching
Mistral: 15% adoption · 4% primary

Agentic AI

Adoption % · primary vendor share · switching intent

LangChain / LangGraph: 45% adoption · 22% primary · 25% switching
Microsoft (Copilot Studio): 38% adoption · 18% primary
CrewAI: 22% adoption · 8% primary
Amazon Bedrock Agents: 20% adoption · 7% primary
Anthropic (Claude Code): 18% adoption · 6% primary
AutoGen: 14% adoption · 4% primary

Compute Infrastructure

Adoption % · primary vendor share · switching intent

AWS: 72% adoption · 35% primary · 8% switching
Microsoft Azure: 65% adoption · 30% primary · 18% switching
Google Cloud: 38% adoption · 14% primary · 22% switching
CoreWeave: 18% adoption · 6% primary
Lambda: 12% adoption · 4% primary
NVIDIA DGX Cloud: 10% adoption · 3% primary

Consulting & System Integrators

Adoption % · primary vendor share · switching intent

Accenture: 32% adoption · 18% primary · 12% switching
Deloitte: 28% adoption · 15% primary
EY: 22% adoption · 12% primary
McKinsey: 15% adoption · 8% primary
Bain: 10% adoption · 5% primary

Third-Party Labor

Adoption % · primary vendor share · switching intent

TCS: 30% adoption · 16% primary · 18% switching
Infosys: 25% adoption · 14% primary
HCL: 20% adoption · 11% primary
Wipro: 18% adoption · 9% primary
Cognizant: 15% adoption · 8% primary

Data Foundation

Adoption % · primary vendor share · switching intent

Snowflake: 45% adoption · 28% primary · 22% switching
Databricks: 40% adoption · 25% primary · 10% switching
MongoDB: 22% adoption · 10% primary
Google BigQuery: 18% adoption · 8% primary
AWS Redshift: 15% adoption · 7% primary
09

Switching Risk

Vendor Vulnerability Index

We asked respondents: "Are you actively evaluating alternatives to any of your current AI vendors?" The results reveal which vendors are most at risk of displacement.

Snowflake shows the highest switching intent (28%), with Databricks as the primary alternative — driven by the convergence of data lakehouse and AI workloads. LangChain (25%) faces headwinds from framework fatigue and the rise of simpler agent toolkits. The pattern is consistent: vendors that became popular during the experimentation phase are now facing scrutiny as enterprises move to production-grade deployments.

At the other end, Anthropic (6%) and AWS (8%) show the lowest switching intent — a signal of deep integration and satisfaction.

Snowflake (Data Foundation)
Switching intent: 28% · Usage: 45%
Alternatives: Databricks, BigQuery

LangChain / LangGraph (Agentic AI)
Switching intent: 25% · Usage: 45%
Alternatives: CrewAI, custom frameworks

Google Cloud (Compute Infrastructure)
Switching intent: 22% · Usage: 38%
Alternatives: AWS, CoreWeave

Cohere (Foundation Models)
Switching intent: 21% · Usage: 18%
Alternatives: Anthropic, Mistral

Microsoft Azure (Compute Infrastructure)
Switching intent: 18% · Usage: 65%
Alternatives: AWS, Google Cloud

TCS (Third-Party Labor)
Switching intent: 18% · Usage: 30%
Alternatives: Infosys, HCL

OpenAI (Foundation Models)
Switching intent: 12% · Usage: 78%
Alternatives: Anthropic, Google (Gemini)

Accenture (Consulting & System Integrators)
Switching intent: 12% · Usage: 32%
Alternatives: Deloitte, EY

Databricks (Data Foundation)
Switching intent: 10% · Usage: 40%
Alternatives: Snowflake, BigQuery

AWS (Compute Infrastructure)
Switching intent: 8% · Usage: 72%
Alternatives: Google Cloud, CoreWeave

Anthropic (Foundation Models)
Switching intent: 6% · Usage: 52%
Alternatives: OpenAI, Google (Gemini)

10

Infrastructure

Compute Strategy

The "cloud-only" era for AI compute is over. While 68% of respondents use public cloud, 48% have adopted hybrid architectures that combine cloud with on-premises or colocation infrastructure. The driver is straightforward: GPU costs at scale. Enterprises running fine-tuning workloads or high-throughput inference are discovering that reserved cloud instances quickly approach the cost of owned hardware.

GPU-as-a-Service (40% adoption) has emerged as the middle path — access to dedicated GPU clusters without the capital expenditure of building your own. CoreWeave, Lambda, and NVIDIA DGX Cloud are the primary providers serving this segment. For enterprises that aren't ready to commit to 3-year cloud reservations but need more control than on-demand pricing offers, this category is growing fastest.
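The cloud-versus-owned cost crossover described above reduces to simple break-even arithmetic. The sketch below illustrates it; every rate is an assumption for illustration, not a figure from this study:

```python
# Hypothetical break-even: reserved cloud GPU vs. owned hardware.
# All rates below are illustrative assumptions, not survey data.
CLOUD_RATE = 2.50        # $/GPU-hour, reserved-instance rate (assumed)
OWNED_CAPEX = 30_000.0   # $ per GPU incl. server/network share (assumed)
OWNED_OPEX = 0.40        # $/GPU-hour power + operations (assumed)
HOURS_PER_YEAR = 8_760

def annual_cloud_cost(utilization: float) -> float:
    """Annual cost of a reserved cloud GPU at a given utilization (0-1)."""
    return CLOUD_RATE * HOURS_PER_YEAR * utilization

def annual_owned_cost(utilization: float, amortize_years: int = 3) -> float:
    """Annual cost of an owned GPU, capex amortized over several years."""
    return OWNED_CAPEX / amortize_years + OWNED_OPEX * HOURS_PER_YEAR * utilization
```

Under these assumed rates the crossover lands somewhere above 50% sustained utilization, which is why fine-tuning and high-throughput inference workloads are the ones pushing enterprises toward owned or GPU-as-a-Service capacity.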

AI Compute Approaches

% of respondents using each approach (may select multiple)

Public Cloud: 68%
Hybrid (Cloud + On-Prem): 48%
GPU-as-a-Service: 40%
On-Premises and Colocation trail these approaches.

11

Barriers

Top Challenges to AI Adoption

The top three barriers — data quality (58%), talent (52%), and ROI measurement (48%) — have barely changed since 2023. This persistence is the real story. Enterprises have dramatically increased AI spending, but the fundamental blockers haven't been solved.

Data quality is not an AI problem — it's a data engineering problem that predates AI by a decade. The organizations making the most progress are those that treat data readiness as a prerequisite infrastructure investment, not a side project of the AI team. Talent remains constrained, but the nature of the gap is shifting: it's less about finding ML engineers and more about finding practitioners who can bridge AI capabilities with domain expertise.

Hallucination and reliability (22%) appears low on the list, but interviews suggest it's underreported — many respondents consider it a subset of "governance" rather than a standalone challenge. In practice, it's the primary blocker to agentic deployment at scale.

Data Quality & Readiness: 58%
Talent & Skills Gap: 52%
ROI Measurement: 48%
Governance & Compliance: 42%
Integration Complexity
Vendor Lock-in Concerns
Security & Privacy
Change Management
Compute Cost Management
Hallucination / Reliability: 22%
12

Methodology

About This Research

This report is based on the Arcana Research Enterprise AI Pulse study, conducted in Q1–Q2 2025. The panel consists of 147 respondents across 8 industries, all at companies with 500+ employees. Respondents hold titles of VP or above with direct budget authority or influence over AI spending decisions.

Data is collected through a structured research instrument with branching logic based on industry, company size, and current AI maturity. All responses are anonymized and aggregated. No individual company data is disclosed in this report. Vendor-specific data points are validated against publicly available market data where possible.

The heat index is a composite score (0–100) calculated from weighted production adoption (40%), pilot activity (30%), and near-term planning intent (30%). Vendor vulnerability scores are based on self-reported switching intent and should be interpreted as directional signals, not predictions.
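Given the stated weights, the composite can be sketched as below; how each input is normalized to the 0-100 scale is not published, so feeding in raw percentages, as this sketch does, is an assumption:

```python
def heat_index(production_pct: float, pilot_pct: float, planning_pct: float) -> float:
    """Composite heat score per the stated weights: production adoption 40%,
    pilot activity 30%, near-term planning intent 30%. Input normalization
    is assumed; the study does not publish its exact scaling."""
    return 0.40 * production_pct + 0.30 * pilot_pct + 0.30 * planning_pct

# Hypothetical inputs for illustration (not the study's internal data):
score = heat_index(18, 32, 50)  # approx. 31.8 on this assumed scaling
```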

Personalized Benchmark

How Does Your Organization Compare?

The aggregated view tells a market story. The personalized benchmark tells your story — spend percentiles against your exact peer set, vendor positioning relative to your industry, heat index status mapped to your technology stack, and tailored recommendations from our research team.
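Mechanically, a spend percentile against a peer set reduces to an empirical rank; the helper function and peer figures below are hypothetical, for illustration only:

```python
def percentile_rank(value: float, peers: list[float]) -> float:
    """Empirical percentile: share of peers at or below `value`."""
    return 100.0 * sum(1 for p in peers if p <= value) / len(peers)

# Hypothetical peer spends ($M) for one industry/size bracket:
peers = [0.8, 1.2, 2.1, 2.8, 3.5, 6.0, 9.5, 14.0]
rank = percentile_rank(2.8, peers)  # -> 50.0: at the median of this peer set
```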

Join the Research Panel

Data from the Arcana Research Enterprise AI Pulse study, H1 2025. All figures are aggregated and anonymized.