WAVE: H1-2025 | SAMPLE: 147 | PUBLISHED: JUN 2025
How 147 F1000 executives are spending, deploying, and betting on enterprise AI — and where the market is headed in the next 12 months.
Executive Summary —
Enterprise AI has moved past the experimentation phase. The median F1000 company now spends $2.8M annually on AI — up 34% from H2 2024 — and 65% of respondents plan to increase budgets further in the next 12 months. This is no longer a discretionary line item; it's a core infrastructure investment.
But the picture is more nuanced than the headline numbers suggest. While foundation model spend dominates (35% of total budgets), the real heat is in agent orchestration — the technology with the highest momentum score in our index despite only 18% production adoption. Enterprises are betting big on agentic workflows while simultaneously struggling with data quality (cited by 58% as the top barrier) and multi-vendor complexity (73% now use two or more model providers).
The vendor landscape is consolidating around a few clear winners, but switching risk is real: Snowflake, LangChain, and Google Cloud all show switching intent above 20%. Enterprises are locking in infrastructure bets while hedging their model layer.
This report presents aggregated, anonymized data from our H1 2025 research panel. Individual company data is never disclosed. For a personalized benchmark comparing your organization to peers in your exact industry and size bracket, join the research panel at the end of this report.
Key Findings —
Median AI Spend
$2.8M
Median annual AI spend across F1000 respondents, up 34% from H2 2024
Increasing Budgets
65%
of enterprises plan to increase AI spend in the next 12 months
Heat Index Leader
Agent Orchestration
Highest heat score (88) — 18% in production, 32% running pilots
Top Challenge
Data Quality
58% cite data quality & readiness as the primary barrier to AI ROI
Multi-Model Default
73%
of enterprises use 2+ foundation model providers — vendor lock-in fears persist
Hybrid Compute
48%
run AI workloads across public cloud + on-premises or colocation infrastructure
Respondent Profile —
n=147 · Respondent breakdown by employee count
Spend —
AI budgets have bifurcated. The $2M–5M bucket contains the largest share of respondents (26%), representing the new "standard" enterprise AI investment. But the distribution has a long tail: 11% of respondents spend north of $10M, and the gap between the median and mean ($2.8M vs. $4.1M) reveals a significant population of heavy spenders pulling the average up — primarily F500 financial services and technology firms.
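The median-vs-mean gap is the signature of a right-skewed distribution, and it can be reproduced with a small synthetic sample. The figures below are illustrative only, chosen to match the reported $2.8M median and $4.1M mean; they are not actual panel data.

```python
from statistics import mean, median

# Synthetic spend sample (in $M): chosen to reproduce the reported
# $2.8M median and $4.1M mean, NOT actual panel data.
spend = [1.2, 1.8, 2.2, 2.5, 2.8, 3.1, 3.5, 8.0, 11.8]

print(f"median: ${median(spend):.1f}M")  # $2.8M, the "typical" respondent
print(f"mean:   ${mean(spend):.1f}M")    # $4.1M, pulled up by the heavy-spender tail
```

Two heavy spenders out of nine are enough to pull the mean roughly 46% above the median, which is why the median is the better "standard enterprise" benchmark here.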
The more interesting question is what they're buying. Foundation model access and infrastructure together consume 65% of budgets, while consulting and third-party labor take the remaining 35%. This split has shifted 8 points toward infrastructure since H2 2024, as enterprises move from "tell us what to do" to "help us build it."
Annual AI spend buckets across all respondents
▲ Median bucket: $2M–5M
Allocation —
Annual spend per allocation category, largest to smallest (median · mean): $784K · $990K; $700K · $875K; $560K · $700K; $420K · $525K; $336K · $420K
Trajectory —
The directional signal is unambiguous: budgets are going up. But the rate of increase varies dramatically by category. Foundation model spend is the most aggressively increasing (72%), driven by new model releases, expanded context windows, and the shift from single-model to multi-model architectures. Infrastructure is close behind at 68%, reflecting the GPU build-out.
The one category showing meaningful deceleration is third-party labor (17% decreasing) — a leading indicator that enterprises are internalizing AI capabilities rather than outsourcing them. This is consistent with the "build, don't buy" sentiment we're hearing in practitioner interviews.
Emerging Technology —
Technologies earning their way to the majors. The Heat Index tracks emerging technologies by momentum — a composite of adoption, pilot activity, and planning intent. When adoption crosses 50%, a technology graduates to the AI Core Categories.
Agent orchestration leads with a heat score of 88, despite only 18% production adoption. This is the "pilot purgatory" pattern: massive interest, significant pilot activity (32%), but real deployment challenges around reliability, observability, and governance. RAG systems (85) and AI monitoring (78) round out the top three — both are enabling layers that need to be in place before agentic workflows can scale.
At the bottom: federated learning (38) and AI governance tools (52) show limited momentum. The governance gap is particularly notable given that 42% of respondents cite governance as a top challenge — they know they need it, but the tooling hasn't earned enough trust yet.
Vendor Landscape —
The vendor landscape is a story of breadth vs. depth. OpenAI dominates in raw adoption (78%) but its primary vendor share (42%) is lower than you'd expect — a direct consequence of the multi-model trend. Enterprises are using OpenAI alongside Anthropic, Google, and open-source models, treating foundation models as a commodity layer rather than a strategic commitment.
Anthropic's primary-to-adoption ratio in foundation models is notably high (22% primary out of 52% adoption), suggesting that companies that adopt Claude tend to make it their primary model — a retention signal worth watching. In agentic AI, LangChain/LangGraph leads adoption (45%), but CrewAI and Amazon Bedrock Agents are growing fast from smaller bases.
In compute infrastructure, the hyperscaler duopoly (AWS + Azure = 65% primary) remains entrenched, but specialist providers like CoreWeave (18% adoption) are gaining ground with AI-native workloads that demand bare-metal GPU access.
Adoption % · Gold = primary vendor share
Switching Risk —
We asked respondents: "Are you actively evaluating alternatives to any of your current AI vendors?" The results reveal which vendors are most at risk of displacement.
Snowflake shows the highest switching intent (28%), with Databricks as the primary alternative — driven by the convergence of data lakehouse and AI workloads. LangChain (25%) faces headwinds from framework fatigue and the rise of simpler agent toolkits. The pattern is consistent: vendors that became popular during the experimentation phase are now facing scrutiny as enterprises move to production-grade deployments.
At the other end, Anthropic (6%) and AWS (8%) show the lowest switching intent — a signal of deep integration and satisfaction.
LangChain / LangGraph · Agentic AI · switching intent: 25%
Snowflake · Data Foundation · switching intent: 28%
Google Cloud · Compute Infrastructure · switching intent: above 20%
Cohere · Foundation Models
Microsoft Azure · Compute Infrastructure
TCS · Third-Party Labor
OpenAI · Foundation Models
Accenture · Consulting & System Integrators
Databricks · Data Foundation
AWS · Compute Infrastructure · switching intent: 8%
Anthropic · Foundation Models · switching intent: 6%
Infrastructure —
The "cloud-only" era for AI compute is over. While 68% of respondents use public cloud, 48% have adopted hybrid architectures that combine cloud with on-premises or colocation infrastructure. The driver is straightforward: GPU costs at scale. Enterprises running fine-tuning workloads or high-throughput inference are discovering that reserved cloud instances quickly approach the cost of owned hardware.
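The cost crossover driving this shift reduces to simple break-even arithmetic. Every number in the sketch below is an illustrative assumption for the sake of the calculation, not a figure from the survey.

```python
# Back-of-envelope GPU cost crossover. Every number here is an illustrative
# assumption for the sketch, not a figure from the survey.
cloud_rate_per_gpu_hour = 2.50    # assumed reserved-instance $/GPU-hour
owned_capex_per_gpu = 30_000.0    # assumed purchase + install cost per GPU
owned_opex_per_gpu_hour = 0.40    # assumed power/cooling/ops $/GPU-hour

# Cumulative cloud cost overtakes ownership once
#   cloud_rate * hours > capex + opex * hours
break_even_hours = owned_capex_per_gpu / (cloud_rate_per_gpu_hour - owned_opex_per_gpu_hour)

print(f"break-even at ~{break_even_hours:,.0f} GPU-hours "
      f"(~{break_even_hours / 8760:.1f} years of 24/7 use)")
```

Under these assumed prices, continuous utilization reaches the cost of owned hardware in well under two years, which is the arithmetic behind the hybrid shift for sustained fine-tuning and high-throughput inference workloads.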
GPU-as-a-Service (40% adoption) has emerged as the middle path — access to dedicated GPU clusters without the capital expenditure of building your own. CoreWeave, Lambda, and NVIDIA DGX Cloud are the primary providers serving this segment. It is the fastest-growing category among enterprises that aren't ready to commit to three-year cloud reservations but need more control than on-demand pricing offers.
% of respondents using each approach (may select multiple)
68%
Public Cloud
Barriers —
The top three barriers — data quality (58%), talent (52%), and ROI measurement (48%) — have barely changed since 2023. This persistence is the real story. Enterprises have dramatically increased AI spending, but the fundamental blockers haven't been solved.
Data quality is not an AI problem — it's a data engineering problem that predates AI by a decade. The organizations making the most progress treat data readiness as a prerequisite infrastructure investment, not a side project of the AI team. Talent remains constrained, but the nature of the gap is shifting: it's less about finding ML engineers and more about finding practitioners who can bridge AI capabilities with domain expertise.
Hallucination and reliability (22%) appears low on the list, but interviews suggest it's underreported — many respondents consider it a subset of "governance" rather than a standalone challenge. In practice, it's the primary blocker to agentic deployment at scale.
Methodology —
This report is based on the Arcana Research Enterprise AI Pulse study, conducted in Q1–Q2 2025. The panel consists of 147 respondents across 8 industries, all at companies with 500+ employees. Respondents hold titles of VP or above with direct budget authority or influence over AI spending decisions.
Data is collected through a structured research instrument with branching logic based on industry, company size, and current AI maturity. All responses are anonymized and aggregated. No individual company data is disclosed in this report. Vendor-specific data points are validated against publicly available market data where possible.
The heat index is a composite score (0–100) calculated from weighted production adoption (40%), pilot activity (30%), and near-term planning intent (30%). Vendor vulnerability scores are based on self-reported switching intent and should be interpreted as directional signals, not predictions.
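The composite described above can be expressed directly. Since the report does not publish how each component is normalized to a 0–100 score, the inputs in the example are illustrative assumptions rather than panel data.

```python
# Heat index per the stated methodology: production adoption weighted 40%,
# pilot activity 30%, near-term planning intent 30%. How each component is
# normalized to a 0-100 score is not published, so the inputs below are
# illustrative assumptions.
WEIGHTS = {"production": 0.40, "pilot": 0.30, "planning": 0.30}

def heat_index(production: float, pilot: float, planning: float) -> float:
    """Composite 0-100 score from three normalized 0-100 component scores."""
    return round(WEIGHTS["production"] * production
                 + WEIGHTS["pilot"] * pilot
                 + WEIGHTS["planning"] * planning, 2)

print(heat_index(production=60, pilot=80, planning=90))  # 75.0
```

The weighting explains how a technology like agent orchestration can top the index at 88 despite only 18% production adoption: pilot activity and planning intent together carry 60% of the score.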
Personalized Benchmark —
The aggregated view tells a market story. The personalized benchmark tells your story — spend percentiles against your exact peer set, vendor positioning relative to your industry, heat index status mapped to your technology stack, and tailored recommendations from our research team.
Join the Research Panel

Data from the Arcana Research Enterprise AI Pulse study, H1 2025. All figures are aggregated and anonymized.