
Sesterce
European GPU cloud for AI training and inference
GPU-focused · 🇫🇷 FR · gpu-cloud · inference
Sesterce is a European GPU cloud platform offering on-demand NVIDIA GPUs for AI training and inference, with bare-metal, VM, and container deployment options across multiple regions.
21 GPU models · from $0.17/hour
Available GPUs
Hourly on-demand pricing.
Prices last updated: March 31, 2026
| GPU Model | Memory | GPUs | vCPUs | RAM | Price / hr | Updated | Source |
|---|---|---|---|---|---|---|---|
| A10 | 24GB | 1x | 30 | 200 GB | $0.83/hr | 3/31/2026 | |
| A100 SXM | 80GB | 1x | 14 | 117 GB | $1.38/hr | 3/31/2026 | |
| A100 SXM | 80GB | 2x | 60 | 400 GB | $1.42/hr | 3/31/2026 | |
| A100 SXM | 80GB | 4x | 120 | 800 GB | $1.42/hr | 3/31/2026 | |
| A100 SXM | 80GB | 8x | 120 | 940 GB | $1.38/hr | 3/31/2026 | |
| A16 | 64GB | 1x | 6 | 64 GB | $0.56/hr | 3/31/2026 | |
| A16 | 64GB | 2x | 12 | 128 GB | $0.56/hr | 3/31/2026 | |
| A16 | 64GB | 4x | 24 | 256 GB | $0.56/hr | 3/31/2026 | |
| A16 | 64GB | 8x | 48 | 496 GB | $0.56/hr | 3/31/2026 | |
| A16 | 64GB | 16x | 96 | 960 GB | $0.63/hr | 3/31/2026 | |
| A30 | 24GB | 1x | 16 | 48 GB | $0.39/hr | 3/31/2026 | |
| A30 | 24GB | 2x | 30 | 96 GB | $0.39/hr | 3/31/2026 | |
| A30 | 24GB | 4x | 50 | 192 GB | $0.39/hr | 3/31/2026 | |
| A30 | 24GB | 8x | 94 | 384 GB | $0.39/hr | 3/31/2026 | |
| A40 | 48GB | 1x | 6 | 60 GB | $1.21/hr | 3/31/2026 | |
| A40 | 48GB | 2x | 12 | 120 GB | $1.21/hr | 3/31/2026 | |
| A40 | 48GB | 4x | 24 | 240 GB | $1.21/hr | 3/31/2026 | |
| A40 | 48GB | 8x | 48 | 480 GB | $1.21/hr | 3/31/2026 | |
| B200 | 192GB | 1x | 26 | 360 GB | $5.82/hr | 3/31/2026 | |
| B200 | 192GB | 2x | 62 | 360 GB | $5.29/hr | 3/31/2026 | |
| B200 | 192GB | 4x | 124 | 736 GB | $4.84/hr | 3/31/2026 | |
| B200 | 192GB | 8x | 96 | 2048 GB | $4.11/hr | 3/31/2026 | |
| Gaudi 2 | 96GB | 8x | 152 | 940 GB | $1.21/hr | 3/31/2026 | |
| GH200 | 96GB | 1x | 64 | 432 GB | $1.64/hr | 3/31/2026 | |
| H100 SXM | 80GB | 1x | 16 | 128 GB | $1.83/hr | 3/31/2026 | |
| H100 SXM | 80GB | 2x | 60 | 360 GB | $2.09/hr | 3/31/2026 | |
| H100 SXM | 80GB | 4x | 124 | 720 GB | $2.09/hr | 3/31/2026 | |
| H100 SXM | 80GB | 8x | 128 | 1536 GB | $1.97/hr | 3/31/2026 | |
| H200 | 141GB | 1x | 16 | 200 GB | $2.70/hr | 3/31/2026 | |
| H200 | 141GB | 2x | 88 | 370 GB | $3.75/hr | 3/31/2026 | |
| H200 | 141GB | 4x | 176 | 740 GB | $3.30/hr | 3/31/2026 | |
| H200 | 141GB | 8x | 480 | 2048 GB | $2.48/hr | 3/31/2026 | |
| HGX B300 | 288GB | 1x | 30 | 275 GB | $7.25/hr | 3/31/2026 | |
| HGX B300 | 288GB | 2x | 60 | 550 GB | $6.35/hr | 3/31/2026 | |
| HGX B300 | 288GB | 4x | 120 | 1100 GB | $5.90/hr | 3/31/2026 | |
| HGX B300 | 288GB | 8x | 240 | 2200 GB | $5.67/hr | 3/31/2026 | |
| L4 | 24GB | 1x | 8 | 48 GB | $0.99/hr | 3/31/2026 | |
| L4 | 24GB | 2x | 16 | 96 GB | $0.95/hr | 3/31/2026 | |
| L4 | 24GB | 4x | 32 | 192 GB | $0.93/hr | 3/31/2026 | |
| L4 | 24GB | 8x | 64 | 384 GB | $0.92/hr | 3/31/2026 | |
| L40 | 48GB | 1x | 26 | 192 GB | $1.09/hr | 3/31/2026 | |
| L40 | 48GB | 2x | 50 | 384 GB | $1.09/hr | 3/31/2026 | |
| L40 | 48GB | 4x | 126 | 232 GB | $1.10/hr | 3/31/2026 | |
| L40 | 48GB | 8x | 252 | 464 GB | $1.10/hr | 3/31/2026 | |
| L40S | 48GB | 1x | 16 | 128 GB | $0.81/hr | 3/31/2026 | |
| L40S | 48GB | 2x | 44 | 256 GB | $0.97/hr | 3/31/2026 | |
| L40S | 48GB | 4x | 88 | 512 GB | $0.97/hr | 3/31/2026 | |
| L40S | 48GB | 8x | 64 | 1536 GB | $0.76/hr | 3/31/2026 | |
| L40S | 48GB | 10x | 80 | 1470 GB | $1.60/hr | 3/31/2026 | |
| RTX 4000 Ada | 20GB | 1x | 8 | 32 GB | $0.87/hr | 3/31/2026 | |
| RTX 4090 | 24GB | 1x | 12 | 70 GB | $0.66/hr | 3/31/2026 | |
| RTX 4090 | 24GB | 2x | 24 | 140 GB | $0.66/hr | 3/31/2026 | |
| RTX 4090 | 24GB | 4x | 60 | 352 GB | $0.66/hr | 3/31/2026 | |
| RTX 4090 | 24GB | 8x | 48 | 256 GB | $0.44/hr | 3/31/2026 | |
| RTX 6000 Pro | 96GB | 1x | 14 | 46 GB | $0.55/hr | 3/31/2026 | |
| RTX 6000 Pro | 96GB | 2x | 26 | 160 GB | $1.07/hr | 3/31/2026 | |
| RTX 6000 Pro | 96GB | 4x | 52 | 320 GB | $1.07/hr | 3/31/2026 | |
| RTX 6000 Pro | 96GB | 8x | 128 | 512 GB | $0.66/hr | 3/31/2026 | |
| RTX A4000 | 16GB | 1x | 4 | 21 GB | $0.17/hr | 3/31/2026 | |
| RTX A4000 | 16GB | 2x | 8 | 43 GB | $0.17/hr | 3/31/2026 | |
| RTX A4000 | 16GB | 4x | 16 | 86 GB | $0.17/hr | 3/31/2026 | |
| RTX A4000 | 16GB | 8x | 32 | 172 GB | $0.17/hr | 3/31/2026 | |
| RTX A4000 | 16GB | 10x | 56 | 215 GB | $0.17/hr | 3/31/2026 | |
| RTX A5000 | 24GB | 1x | 10 | 48 GB | $0.48/hr | 3/31/2026 | |
| RTX A5000 | 24GB | 2x | 20 | 96 GB | $0.24/hr | 3/31/2026 | |
| RTX A5000 | 24GB | 4x | 40 | 192 GB | $0.48/hr | 3/31/2026 | |
| RTX A5000 | 24GB | 8x | 62 | 384 GB | $0.48/hr | 3/31/2026 | |
| RTX A6000 | 48GB | 1x | 6 | 48 GB | $0.54/hr | 3/31/2026 | |
| RTX A6000 | 48GB | 2x | 14 | 96 GB | $0.54/hr | 3/31/2026 | |
| RTX A6000 | 48GB | 4x | 30 | 192 GB | $0.54/hr | 3/31/2026 | |
| RTX A6000 | 48GB | 8x | 62 | 384 GB | $0.54/hr | 3/31/2026 | |
| Tesla V100 | 32GB | 1x | 6 | 23 GB | $0.43/hr | 3/31/2026 | |
| Tesla V100 | 32GB | 2x | 10 | 45 GB | $0.43/hr | 3/31/2026 | |
| Tesla V100 | 32GB | 8x | 48 | 180 GB | $0.43/hr | 3/31/2026 | |
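To turn the hourly rates above into a budget figure, a small calculation helps. The sketch below assumes the listed prices are per GPU per hour (consistent with the multi-GPU rows, where the per-unit rate stays flat or drops as the GPU count grows); confirm the billing unit with the provider before relying on it.

```python
def monthly_cost(num_gpus: int, price_per_gpu_hr: float, hours: float = 730.0) -> float:
    """Estimate on-demand cost for a month (~730 hours).

    Assumes the listed price is per GPU per hour, not per instance.
    """
    return num_gpus * price_per_gpu_hr * hours

# Example: an 8x H100 SXM node at $1.97/GPU-hr, running continuously
print(f"${monthly_cost(8, 1.97):,.2f} / month")
```

The same function also makes cross-SKU comparisons quick, e.g. weighing a single B200 against two H100s for a given workload.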
Pros & Cons
Advantages
- Multiple deployment types: VM, container, and bare-metal
- European-based with multi-region availability
- Developer-friendly REST API
- Dedicated AI inference hosting with unlimited-token pricing
- NVLink-equipped multi-GPU configurations
- Cloud-init support for automated provisioning
Limitations
- Smaller provider with limited brand recognition
- Fewer GPU models compared to major hyperscalers
- Credit-based billing requires pre-funded balance
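Since cloud-init is listed among the supported provisioning mechanisms, a standard `#cloud-config` user-data file can bootstrap an instance at first boot. The sketch below uses only generic cloud-init directives (nothing Sesterce-specific) and assumes the base image already ships NVIDIA drivers and the NVIDIA Container Toolkit:

```yaml
#cloud-config
# Hypothetical user-data sketch: installs Docker and runs a GPU smoke test.
# Assumes NVIDIA drivers and the container toolkit are preinstalled.
package_update: true
packages:
  - docker.io
runcmd:
  - systemctl enable --now docker
  - docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

Passing a file like this as user data at instance creation avoids manual SSH setup for each node.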
Getting Started
Getting started guide coming soon.