Tesla V100 GPU

The V100 remains relevant for legacy HPC and AI training workloads, with 16GB/32GB HBM2 options and Tensor Cores. While outperformed by newer GPUs, it's still used in scientific computing and older inference pipelines.

Starting Price
$0.19/hr
Available on 6 cloud providers

Key Specifications

💾 Memory

32GB VRAM

🏗️ Architecture

Volta

⚙️ Compute Units

N/A

🧮 Tensor Cores

640

Technical Specifications

Hardware Details

Manufacturer: NVIDIA
Architecture: Volta
CUDA Cores: 5120
Tensor Cores: 640
RT Cores: N/A
Compute Units: N/A
Generation: N/A

Memory & Performance

VRAM: 32GB
Memory Interface: 4096-bit
Memory Bandwidth: 900 GB/s
FP32 Performance: 14 TFLOPS
FP16 Performance (Tensor Core): 112 TFLOPS
INT8 Performance: N/A
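These headline numbers can be cross-checked from the architectural parameters. A quick sketch in Python, assuming a ~1.38 GHz boost clock and HBM2's 1.75 Gb/s per-pin data rate (neither figure is stated in the table above):

```python
# Cross-check the V100's headline specs from first principles.
CUDA_CORES = 5120
BOOST_CLOCK_HZ = 1.38e9   # assumed ~1380 MHz boost clock (PCIe variant)
BUS_WIDTH_BITS = 4096     # HBM2 memory interface, from the table above
PIN_RATE_BPS = 1.75e9     # assumed HBM2 per-pin data rate

# Each CUDA core retires one FMA (2 floating-point ops) per clock.
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_HZ / 1e12

# Bandwidth = bus width in bytes x per-pin data rate.
bandwidth_gbs = BUS_WIDTH_BITS / 8 * PIN_RATE_BPS / 1e9

print(f"FP32: {fp32_tflops:.1f} TFLOPS")       # ~14.1, matching 14 TFLOPS
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s")  # 896, rounded to 900 GB/s
```

The small gaps versus the quoted figures come from rounding in the marketing numbers, not from the formula.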

Performance

Computing Power

CUDA Cores: 5,120
Tensor Cores: 640

Computational Performance

FP32 (TFLOPS): 14
FP16 (TFLOPS): 112
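The 8x jump from FP32 to FP16 comes from the Tensor Cores: each of the 640 cores executes a 4x4x4 matrix FMA (64 multiply-adds, i.e. 128 FLOPs) per clock. A sketch of the arithmetic, again assuming a ~1.38 GHz boost clock not stated above:

```python
# Derive the V100's Tensor Core FP16 throughput.
TENSOR_CORES = 640
FLOPS_PER_CORE_PER_CLOCK = 4 * 4 * 4 * 2  # 4x4x4 matrix FMA = 128 FLOPs
BOOST_CLOCK_HZ = 1.38e9                   # assumed boost clock

fp16_tflops = TENSOR_CORES * FLOPS_PER_CORE_PER_CLOCK * BOOST_CLOCK_HZ / 1e12
print(f"FP16 (Tensor Core): {fp16_tflops:.0f} TFLOPS")  # ~113, quoted as 112
```

Note this peak is only reachable for FP16 matrix-multiply workloads; scalar FP16 throughput is far lower.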

Common Use Cases

AI training, HPC

Machine Learning & AI

  • Training large language models and transformers
  • Computer vision and image processing
  • Deep learning model development
  • High-performance inference workloads

Graphics & Compute

  • 3D rendering and visualization
  • Scientific simulations
  • Data center graphics virtualization
  • High-performance computing (HPC)

Market Context

The Tesla V100 was the flagship data-center GPU of NVIDIA's Volta architecture lineup; today it sits in the entry performance tier relative to newer generations.

Cloud Availability

Available across 6 cloud providers with prices ranging from $0.19/hr to $4.40/hr. Pricing and availability may vary by region and provider.

Market Position

Released in 2017, this GPU is positioned for professional workloads.

Current Pricing

Provider        Hourly Price
RunPod          $0.19/hr
Vast.ai         $0.20/hr
Lambda Labs     $4.40/hr
Paperspace      $2.30/hr
Cudo Compute    $0.39/hr
Datacrunch      $0.19/hr
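Hourly rates this far apart translate into very different monthly bills. A small comparison helper, using the provider prices from the table above (730 hours/month for an always-on instance is an assumption):

```python
# Hourly V100 prices from the table above, in USD.
PRICES = {
    "RunPod": 0.19, "Vast.ai": 0.20, "Lambda Labs": 4.40,
    "Paperspace": 2.30, "Cudo Compute": 0.39, "Datacrunch": 0.19,
}
HOURS_PER_MONTH = 730  # average month, assumed always-on usage

# Print providers from cheapest to most expensive with monthly cost.
for provider, rate in sorted(PRICES.items(), key=lambda kv: kv[1]):
    print(f"{provider:13s} ${rate:.2f}/hr -> ${rate * HOURS_PER_MONTH:,.2f}/mo")
```

At $0.19/hr an always-on instance costs about $138.70/month, versus roughly $3,212/month at $4.40/hr, so spot-style marketplaces are an order of magnitude cheaper for sustained V100 workloads.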

Prices are updated regularly. Last updated: 6/17/2025