Ultralight Sovereign AI Environment
Forging the Sovereign AI Edge
An ultra-light infrastructure investment and deployment platform purpose-built for enterprise AI: combining hyper-scale cloud experience, mission-critical systems engineering, security, governance, and disciplined capital structuring.

Purpose-built sovereign AI nodes — deployed at your facility, governed by us.
AI runs on infrastructure never designed for it
Generic cloud and NeoCloud models were built for a different era. Enterprises running AI on them are paying a heavy price — in dollars, risk, and lost control.
Data Sovereignty
Generic cloud means your data lives on someone else's servers, in someone else's jurisdiction. You don't control where it goes, how it's processed, or who can access it.
- ✗ Data stored on hyperscaler infrastructure you don't control
- ✗ Subject to foreign jurisdiction and legal access requests
- ✗ No guarantee of geographic boundaries
Data Migration Pain
Moving data to and from NeoCloud is slow, expensive, and risky. Once you're in, you're stuck — egress fees and migration friction create vendor lock-in that grows with your data volumes.
- ✗ High egress fees lock you into a provider
- ✗ Migration to and from NeoCloud is costly and complex
- ✗ AI workloads need data close to compute — cloud creates latency
Security & Compliance
Regulated industries can't afford the compliance ambiguity of shared cloud. NeoCloud providers weren't designed for GDPR, HIPAA, or government data classification — and it shows.
- ✗ NeoCloud compliance certifications are incomplete for regulated sectors
- ✗ Shared infrastructure means shared risk surface
- ✗ Audit trails and data residency guarantees are difficult to enforce
Our answer: sovereign AI nodes deployed close to your data — purpose-built, fully compliant, and owned by a fund that makes you money.
Cut your AI bill in half
Enterprises running AI on NeoCloud or hyperscalers are renting at peak-scarcity prices — paying for capacity they don't control and data they can't easily move.
Our model flips the equation. You rent sovereign compute — your data, your infrastructure — at a fraction of the cost. Think of it as the difference between staying in a hotel every month versus leasing your own place.
Cost Model Comparison
Illustrative single-site example. Every fund creates $1.6B in value for its enterprise clients across 200 sites.

Each Apex Foundry node is a purpose-built sovereign AI compute unit — installed at your site, financed by our fund.
How We Deploy
We act as design authority and capital orchestrator — certified execution partners handle delivery under our governance layer.
Qualify & Upgrade Sites
Rigorous site assessment — power envelope, cooling topology, compliance posture, and financial modeling. If a site doesn't fit, we don't deploy.
Deploy AI Nodes
Standardized high-density AI nodes — sub-2MW, vendor-neutral, and repeatable. Frozen reference architectures mean no bespoke engineering creep.
Structure Capital
Capital-efficient financing models with institutional-grade governance. 75% of CAPEX financed — you invest 25% equity, we handle the rest.
Orchestrate Delivery
Coordinated execution across OEMs, system integrators, cooling, power, and storage partners. Gate-based acceptance: sites either pass or don't get commissioned.
Disciplined. Insurable. Financeable.
Our objective is to build and scale a disciplined AI infrastructure portfolio with strong enterprise demand and long-term capital partners — structured for institutional investors from day one.
- Distributed portfolio of sub-2MW AI nodes
- Standardized reference architectures — no bespoke creep
- Vendor-neutral engineering across all sites
- Repeatable deployment model with identical go-live gates
- Portfolio-level governance and risk controls
- SLA stacking: OEM → SI → Apex Foundry → client
Capital Strategy
Multi-site infrastructure fund targeting 200 sites. At fund close, ownership transfers to private equity. Cumulative revenue per fund exceeds $480M. Profitable from Day 1 of live operations.
Insurance Architecture
Heterogeneous sites are insurable when bounded. Every site maps to an insurance class. After 5–7 sites, insurers price the fleet — not each individual site. Mirrors how solar farms and cell towers became insurable at scale.
How the Platform Works Together
Apex Foundry sits at the center of a disciplined ecosystem — connecting enterprise demand to capital, sites, engineering, and compute in a single governed platform.
- Capital: Financing 75% of CAPEX delivers the benefits of NeoCloud with improved compliance and data convenience.
- Sites: Need upgraded low-voltage power and cooling close to enterprise data. The project pays rent and funds upgrades, improving site value, utilization, and attractiveness.
- Engineering partners: Must deliver AI rack solutions on demand. Cooling and power engineering are central to every deployment, so any site can be upgraded.
- Compute vendors: Compute and storage vendors deliver the core value; we secure delivery timelines and payment terms through our partnerships. Cluster design and networking are key.
- Site owners: Go-to-market depends on how we work with the site owner; an active site owner is a great help.
- Enterprises: The choice is between staying in a hotel and leasing your own place. With just a 25% equity investment, enterprises make money while solving scarcity, data-migration, and cost-uncertainty challenges.
Core Founding Team
A global leadership team with decades of experience at AWS, IBM, Nokia, EY, Shell, and leading infrastructure vendors.

Guido Bartels
President & CEO
AWS · IBM
Global cloud and energy executive with 35+ years scaling high-growth technology businesses. Former AWS MD and IBM GM leading billion-dollar expansions, hyperscale region launches, and strategic alliances. Advises enterprises on AI infrastructure, cloud transformation, and resilient, sustainable data-center strategy.
LinkedIn
Martin Rapos
Co-CEO / COO
Shell · IBM · AKULAR
AI data-center builder and AKULAR co-founder with two decades in digital twins, liquid-cooled compute design, and operational telemetry. Builds modular, high-density micro-data-center networks optimized for enterprise AI workloads. Bridges 3D spatial intelligence with real-world infrastructure.
LinkedIn
Eric Ashman
CTO
Nokia
Senior systems engineer with 30 years designing and operating mission-critical compute environments. Expert in GPU clusters, distributed storage, and automation. Aligns engineering strategy with operational reliability, scaling complex workloads from prototype to production.
LinkedIn
Roy Timor-Rousso
CMO / BD EMEA & APAC
Flexnode · 11Stream
Global technology and business leader specializing in modular data centers, sovereign cloud, and AI infrastructure. Drives multimillion-dollar initiatives across government and hyperscale sectors. Leads Sovereign Cloud & AI at 11Stream and global BD at Flexnode.
LinkedIn
Taco de Vries
CRO / BD Americas
EY · IBM
Energy-tech strategist with 25+ years modernizing distribution grids for electrification and data-center growth. Specializes in grid-edge forecasting, DER valuation, and advanced analytics. Led major eMobility and smart-grid programs across North America, Europe, and LATAM.
LinkedIn
AKULAR Digital Twin Team
AKULAR provides the digital twin backbone — enabling real-time 3D visibility of every rack, node, and facility before, during, and after deployment. Customers see exactly what we see, from day one.
Advisory Board
300 years of combined expertise spanning data centers, cloud, telecom, security, power & cooling, utilities, military, and higher education.
Ready to Deploy Your Sovereign AI Environment?
Whether you're an enterprise seeking purpose-built AI compute or an institutional investor exploring the platform, we'd like to hear from you.








