Announcement · March 19, 2026

Why we raised $19M to fix GPU infrastructure

Ditlev Bredahl, Co-founder

Today we're announcing that hosted·ai has raised a $19M seed round led by Creandum, with Repeat VC, and with participation from People Ventures, Z21 Ventures, Golden Sparrow, Hersir Ventures and Tekton.

I want to explain why we raised, what we're building, and why we believe the GPU infrastructure market is at an inflection point.

The GPU market has a waste problem, not a scarcity problem

The conventional narrative is that GPUs are scarce. That's not wrong - but it misses the bigger picture.

Today's GPU infrastructure is profoundly wasteful. AI workloads consume only around 30% of GPU capacity on average. That means roughly 70% of the GPUs that service providers invest in - and customers pay for - sit idle.

This isn't a technology problem. It's an architecture problem. Unlike traditional cloud compute, which scales dynamically to match demand, GPUs are static: customers rent fixed instances based on estimated peak requirements. The result is waste at every level of the stack.

For service providers, this means enormous CAPEX requirements and persistent profitability challenges. For customers, it means paying for capacity they don't use. For the industry, it means artificial scarcity created by inefficiency - not by a lack of silicon.

What we're building

hosted·ai is building the software layer that fixes this.

hosted·ai - our core platform - delivers GPU pooling, optimized multi-tenant workload placement and GPU overcommit. The result: up to 5x improvement in GPU utilization. That means up to a 5x reduction in CAPEX requirements for service providers, better margins, and lower prices for customers.
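The utilization arithmetic behind that claim can be sketched with a back-of-the-envelope model. This is an illustrative sketch only - the workload counts, utilization figure and overcommit factor are assumptions for the example, not hosted·ai's actual placement logic:

```python
import math

# Back-of-the-envelope model of GPU pooling economics.
# All numbers are illustrative assumptions.

def gpus_required(workloads: int, avg_utilization: float,
                  pooled: bool, overcommit: float = 1.0) -> int:
    """GPUs needed to serve `workloads` concurrent tenants.

    Static allocation: one dedicated GPU per workload,
    sized to estimated peak demand.
    Pooled: GPUs are sized to aggregate average demand,
    scaled down further by an overcommit factor
    (tenants rarely peak at the same time).
    """
    if not pooled:
        return workloads  # one fixed instance per tenant
    aggregate_demand = workloads * avg_utilization
    return math.ceil(aggregate_demand / overcommit)

static = gpus_required(100, avg_utilization=0.3, pooled=False)  # 100 GPUs
pooled = gpus_required(100, avg_utilization=0.3, pooled=True,
                       overcommit=1.5)                          # 20 GPUs
print(static / pooled)  # 5.0
```

With 100 tenants averaging 30% utilization, pooling alone gets you to roughly 30 GPUs; the rest of the gap to 5x comes from overcommit, which is why the improvement is stated as "up to" rather than guaranteed.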

packet·ai - our own neocloud - runs on our customers' optimized infrastructure, delivering GPU compute at market-leading prices. It generates direct demand for our customers' GPUs while proving the thesis: efficient infrastructure means better economics for everyone.

GPUaaS.com - our wholesale marketplace - connects enterprise buyers with hosted·ai customers and partners who can fulfil custom GPU cluster requirements at scale.

Next on the roadmap: GPU Mesh, a resource exchange that lets service providers buy and sell spare GPU capacity - extending their reach, or launching a full neocloud, without additional hardware CAPEX.

Why now

As AI shifts from model training to inference, the market is changing. Companies increasingly need local, low-latency, sovereign GPU infrastructure. The hyperscalers can't be everywhere. Regional service providers can - but only if the economics work.

Our software makes those economics work.

We've spent 25 years building infrastructure software that makes service providers competitive. We built UK2 Group and OnApp to bring cloud infrastructure to the mainstream service provider market. The GPU opportunity is the biggest we've seen.

What the funding means

This round lets us move faster: more platform capabilities, more partners, more regions. We're building the operating system for the GPU economy, and this puts us in a strong position to do exactly that.

If you're a service provider looking to launch or scale GPU infrastructure, or an enterprise looking for GPU capacity, we'd love to talk.