Subcategory · AI Citation Index

GPU Cloud

GPU cloud is a three-horse race for AI attention. CoreWeave is the consensus pick — surfaces in 90% of discovery prompts across two engines and wins more head-to-head comparisons than it loses. RunPod and Vast.ai trail within ten points of each other on shortlist share, while Lambda Labs, Oracle, and Microsoft hover in the mid-30s. Below the top tier, Replicate, Modal, and Anyscale each show up as kingmakers in evaluation — they appear in dozens of head-to-head matchups but barely surface in discovery queries, meaning buyers compare them once they know about them but AI rarely volunteers their names. The category is contested — no single brand owns more than half the shortlist, and the kingmaker brands dilute the top tier's share of evaluation wins.

50 discovery queries · 178 head-to-heads · refreshed May 4, 2026

Discovery stage

The shortlist

Across 50 buyer-style "GPU Cloud" queries

CoreWeave shows up in 90% of buyer queries about GPU cloud, visible on ChatGPT and Gemini. RunPod lands in 74% of those same queries, Vast.ai in 70%. Lambda Labs, Oracle, and Microsoft each surface in roughly a third of discovery prompts across two engines, while Crusoe Cloud sits at 42%. Replicate and Modal trail further — both surface on two engines but only in 20% and 16% of queries, respectively.

[Bubble chart: coverage (share of discovery prompts where the brand surfaces) vs. engine diversity]


X = coverage across discovery prompts · Y = engine diversity · Bubble size = total mentions
Tracked across ChatGPT, Gemini, Claude
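
How the three plotted quantities fit together: a minimal sketch in Python, assuming each prompt run is stored as (prompt id, engine, brands mentioned in the answer). The record shape, field names, and sample data are illustrative assumptions, not the index's actual schema.

    from collections import defaultdict

    # Illustrative prompt-run records: (prompt_id, engine, brands mentioned in the answer).
    # The shape and values are assumptions for this sketch, not the index's real data.
    runs = [
        ("q1", "chatgpt", ["CoreWeave", "RunPod"]),
        ("q1", "gemini",  ["CoreWeave", "Vast.ai"]),
        ("q2", "chatgpt", ["CoreWeave", "RunPod", "Vast.ai"]),
        ("q2", "claude",  ["RunPod"]),
    ]

    engines_tracked = {"chatgpt", "gemini", "claude"}
    all_prompts = {prompt_id for prompt_id, _, _ in runs}

    prompts_seen = defaultdict(set)   # brand -> prompts where it surfaced (X axis: coverage)
    engines_seen = defaultdict(set)   # brand -> engines where it surfaced (Y axis: engine diversity)
    mentions = defaultdict(int)       # brand -> total mentions (bubble size)

    for prompt_id, engine, brands in runs:
        for brand in brands:
            prompts_seen[brand].add(prompt_id)
            engines_seen[brand].add(engine)
            mentions[brand] += 1

    for brand in sorted(mentions):
        coverage = len(prompts_seen[brand]) / len(all_prompts)
        diversity = len(engines_seen[brand]) / len(engines_tracked)
        print(f"{brand}: coverage {coverage:.0%}, diversity {diversity:.0%}, mentions {mentions[brand]}")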


Signal by intent

By topic

Top 5 most-cited brands per intent cluster. Brands with zero citations in a topic are not shown.

Legend: ≥50% cited · 25–49% · <25%
Topics are discovery-stage prompt clusters
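
One way the by-topic table could be assembled, sketched in Python under the same caveat: per intent cluster, compute each brand's cited share, hide zero-citation brands, keep the top five, and bucket the share into the three bands above. The cluster names and numbers below are placeholders, not index data.

    # Placeholder data: intent cluster -> brand -> share of that cluster's prompts citing the brand.
    by_topic = {
        "pricing": {"CoreWeave": 0.62, "RunPod": 0.48, "Vast.ai": 0.31, "Modal": 0.0},
        "training workloads": {"CoreWeave": 0.71, "Lambda Labs": 0.22, "RunPod": 0.40},
    }

    def band(share):
        # Map a cited share to the legend's three bands.
        if share >= 0.50:
            return ">=50% cited"
        if share >= 0.25:
            return "25-49%"
        return "<25%"

    for topic, shares in by_topic.items():
        cited = {b: s for b, s in shares.items() if s > 0}  # brands with zero citations are not shown
        top5 = sorted(cited.items(), key=lambda kv: kv[1], reverse=True)[:5]
        print(topic)
        for brand, share in top5:
            print(f"  {brand}: {share:.0%} ({band(share)})")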

Evaluation stage

Head-to-head

How brands fare on comparison queries · category median 61/100

CoreWeave wins most head-to-head comparisons, averaging 84 across 19 matchups. Azure Machine Learning scores 81 but appears in only four comparison queries. Replicate, Modal, and Anyscale each appear in 30 or more head-to-heads: Replicate averages 66, while Modal and Anyscale score 42 and 48, below both the category median and the 50-point break-even, meaning they lose more matchups than they win. Crusoe Cloud also loses more than it wins, averaging 42 across 19 comparisons.

[Bubble chart: evaluation score (head-to-head win rate, 0–100) vs. number of head-to-head queries, with median win rate and median exposure reference lines]


X = head-to-head win rate · Y = number of head-to-head queries · Bubble size = head-to-head exposure
Median win rate 61/100

Each brand's evaluation score is its average result across the head-to-head comparison queries that mention it. Brands above the median win their comparisons more often than they lose; bubble size reflects how many head-to-heads they appear in.
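
A rough sketch of that rollup, assuming each head-to-head query yields a per-brand result on a 0–100 scale and the evaluation score is simply the mean of those results; the numbers below are invented for illustration.

    from statistics import mean, median

    # Placeholder head-to-head results: brand -> per-query scores on a 0-100 scale.
    head_to_head = {
        "CoreWeave": [90, 85, 78, 83],
        "Replicate": [70, 64, 62, 68],
        "Modal": [40, 45, 38, 46],
    }

    scores = {brand: mean(results) for brand, results in head_to_head.items()}
    category_median = median(scores.values())

    for brand, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        exposure = len(head_to_head[brand])  # bubble size: number of head-to-heads the brand appears in
        side = "above" if score > category_median else "at or below"
        print(f"{brand}: score {score:.0f} across {exposure} matchups, {side} the category median of {category_median:.0f}")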

Citation sources

Where AI pulls citations from

808 citations captured across GPU Cloud prompt runs.

Vendor pages · 285
Product, help, and marketing pages from tracked vendors

Independent sources · 279
Reviews, encyclopedias, forums, and press not owned by vendors
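
For the vendor/independent split, a hedged sketch of how a citation URL could be classified, assuming the index keeps a list of tracked-vendor domains; the domain list and URLs here are made up for illustration, and real classification would need more buckets than two.

    from urllib.parse import urlparse

    # Hypothetical tracked-vendor domains; a real list would be longer.
    vendor_domains = {"coreweave.com", "runpod.io", "vast.ai", "lambdalabs.com"}

    citations = [
        "https://www.coreweave.com/pricing",
        "https://www.reddit.com/r/MachineLearning/",
        "https://docs.runpod.io/overview",
        "https://en.wikipedia.org/wiki/CoreWeave",
    ]

    def is_vendor_page(url):
        # A citation counts as a vendor page if its host is, or is a subdomain of, a tracked domain.
        host = urlparse(url).netloc.lower()
        return any(host == d or host.endswith("." + d) for d in vendor_domains)

    vendor = sum(is_vendor_page(u) for u in citations)
    independent = len(citations) - vendor
    print(f"vendor pages: {vendor}, independent sources: {independent}")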

Buyer questions

What AI cites for top GPU Cloud questions

Buyers ask AI for GPU cloud options filtered by pricing model, deployment speed, and workload — phrasings like 'GPU cloud with reserved instance pricing', 'GPU cloud for video generation models', 'GPU cloud vs on-premise GPU servers'. Head-to-head comparisons dig into specific brands for enterprise AI training or cheap access: 'Nebius vs CoreWeave for enterprise AI', 'TensorDock vs RunPod for cheap GPU access'. The queries skew technical — no trust or conversion prompts in the current set.

Discovery · Buyers exploring the category

Evaluation · Buyers comparing options

Want to know if AI cites your brand for GPU Cloud?

Free audit. ChatGPT, Perplexity, Gemini, Claude.

Run an audit →

See the full GPU Cloud leaderboard →