
Gcore

Global edge cloud with NVIDIA GPUs and inference at the edge

Fast-rising neocloud, based in Luxembourg (LU)

Last reviewed May 8, 2026

Gcore is a global edge cloud provider offering NVIDIA GPU instances and bare-metal clusters, including GB200 capacity, alongside an inference platform that runs models close to end users across its CDN footprint.

We're actively tracking prices for Gcore. Check back soon, or browse other providers with current pricing.

Pros & Cons

Advantages

  • Wide global edge presence beyond typical neocloud regions
  • Access to Blackwell-generation GB200 systems
  • Combined CDN, networking and GPU stack from a single provider

Limitations

  • Catalog spans many products beyond GPU compute, which can complicate pricing comparisons
  • GPU SKU availability varies meaningfully by region
  • Less GPU-specialized than dedicated training neoclouds

Key Features

Global Edge Footprint

GPU and inference workloads deployable across Gcore's worldwide edge points of presence

GB200 Capacity

Blackwell-generation GB200 systems available alongside H100 and H200

Inference at the Edge

Managed inference platform that serves models from regions closest to users

Bare-Metal and Cloud GPUs

Both managed cloud GPU instances and dedicated bare-metal nodes

Pricing Options

  • On-Demand Cloud GPUs: Hourly billing for self-serve GPU virtual machines
  • Bare-Metal Clusters: Dedicated bare-metal GPU servers, typically reservation-led
  • Inference Platform: Per-request or per-second billing for hosted model endpoints at the edge
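As a rough way to compare these billing models, the sketch below estimates a month's cost for an on-demand GPU instance versus a per-second-billed inference endpoint. All rates and usage figures are hypothetical placeholders, not Gcore prices.

```python
# Rough comparison of hourly vs per-second billing models.
# All rates and usage numbers below are HYPOTHETICAL, not Gcore prices.

def on_demand_cost(hourly_rate: float, hours_used: float) -> float:
    """Cost of an on-demand cloud GPU instance billed per hour."""
    return hourly_rate * hours_used

def inference_cost(rate_per_second: float, compute_seconds: float) -> float:
    """Cost of a hosted inference endpoint billed per second of compute."""
    return rate_per_second * compute_seconds

if __name__ == "__main__":
    # Hypothetical: a $2.50/hr GPU VM running 200 hours in a month.
    vm = on_demand_cost(2.50, 200)
    # Hypothetical: a $0.0008/s endpoint serving 300,000 s of compute.
    ep = inference_cost(0.0008, 300_000)
    print(f"On-demand VM:       ${vm:,.2f}")
    print(f"Inference endpoint: ${ep:,.2f}")
```

The crossover depends entirely on utilization: a mostly idle model usually favors per-request or per-second billing, while a saturated GPU favors the hourly instance.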

Availability & Support

Regions

Global edge presence across Europe, North America, Asia, Latin America and the Middle East

Support

Documentation, support tickets and enterprise contracts

Getting Started

  1. Create an account

     Sign up at the Gcore customer portal

  2. Select a GPU product

     Choose between cloud GPU instances, bare-metal clusters, or the inference platform

  3. Provision and connect

     Launch via the portal or API and connect over SSH or supported orchestrators
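The final step above typically ends in an SSH session to the provisioned node. The minimal helper below just assembles that command; the username, key path, and IP address are illustrative values, not Gcore defaults — use whatever the portal reports for your instance.

```python
# Build an SSH command for a freshly provisioned GPU node.
# The default user and key path are ILLUSTRATIVE, not provider defaults.

def ssh_command(host: str,
                user: str = "ubuntu",
                key_path: str = "~/.ssh/id_ed25519") -> list[str]:
    """Return the argv list for connecting to `host` over SSH."""
    return ["ssh", "-i", key_path, f"{user}@{host}"]

if __name__ == "__main__":
    # Example with a documentation-reserved IP address.
    print(" ".join(ssh_command("203.0.113.10")))
    # ssh -i ~/.ssh/id_ed25519 ubuntu@203.0.113.10
```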