Tesla V100 GPU
The V100 remains relevant for legacy HPC and AI training workloads, with 16GB/32GB HBM2 options and Tensor Cores. While outperformed by newer GPUs, it's still used in scientific computing and older inference pipelines.
Starting Price
$0.19/hr
Available on 6 cloud providers

Key Specifications
Memory
32GB VRAM
Architecture
Volta
Compute Units
N/A
Tensor Cores
640
Technical Specifications
Hardware Details
Manufacturer: NVIDIA
Architecture: Volta
CUDA Cores: 5,120
Tensor Cores: 640
RT Cores: N/A
Compute Units: N/A
Generation: N/A
Memory & Performance
VRAM: 32GB
Memory Interface: 4096-bit
Memory Bandwidth: 900 GB/s
FP32 Performance: 14 TFLOPS
FP16 Performance: 112 TFLOPS
INT8 Performance: N/A
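The FP32 and bandwidth figures above can be cross-checked from the core count and memory interface. A minimal sketch, assuming a ~1.38 GHz boost clock (the PCIe variant's, not stated above) and an HBM2 per-pin data rate of ~1.75 Gbps (also an assumption):

```python
# Cross-check the listed peak figures from first principles.
CUDA_CORES = 5120
BOOST_CLOCK_HZ = 1.38e9        # assumption: V100 PCIe boost clock
MEMORY_BUS_BITS = 4096         # from the spec sheet
HBM2_GBPS_PER_PIN = 1.75       # assumption: effective HBM2 data rate per pin

# Each CUDA core can retire one FMA (2 FLOPs) per clock.
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_HZ / 1e12
print(f"FP32 peak: {fp32_tflops:.1f} TFLOPS")        # ~14.1, matching the listed 14 TFLOPS

bandwidth_gbs = MEMORY_BUS_BITS / 8 * HBM2_GBPS_PER_PIN
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # ~896, matching the listed 900 GB/s
```

The same arithmetic with the SXM2 variant's higher boost clock yields the ~15.7 TFLOPS figure often quoted elsewhere.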
Common Use Cases
AI training, HPC
Machine Learning & AI
- Training large language models and transformers
- Computer vision and image processing
- Deep learning model development
- High-performance inference workloads
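For the training workloads above, whether a model fits in the V100's 32GB is easy to estimate from its parameter count. A rough sketch using a common heuristic of ~16 bytes per parameter for mixed-precision Adam training (the heuristic and the example model sizes are illustrative, not from the spec sheet); activation memory is ignored:

```python
def training_vram_gb(params_billion, bytes_per_param=16):
    """Rough VRAM estimate for mixed-precision Adam training.

    16 bytes/param heuristic: 2 (FP16 weights) + 2 (FP16 grads)
    + 4 (FP32 master weights) + 8 (FP32 Adam moments).
    Activations and framework overhead are not counted.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9

VRAM_GB = 32  # V100 32GB variant
for p in (0.35, 1.3, 7.0):
    need = training_vram_gb(p)
    verdict = "fits" if need <= VRAM_GB else "needs multi-GPU or offload"
    print(f"{p}B params -> ~{need:.0f} GB ({verdict})")
```

By this estimate a ~1.3B-parameter model trains comfortably on one 32GB card, while 7B-class models require multiple GPUs or memory-offloading techniques.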
Graphics & Compute
- 3D rendering and visualization
- Scientific simulations
- Data center graphics virtualization
- High-performance computing (HPC)
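For HPC kernels, the ratio of the two peak figures in the spec table tells you when a kernel is compute-bound versus bandwidth-bound. A quick sketch of that roofline balance point, using the listed 14 TFLOPS and 900 GB/s:

```python
# Roofline balance point: the arithmetic intensity (FLOPs per byte moved)
# at which peak compute and peak bandwidth are reached simultaneously.
FP32_PEAK_FLOPS = 14e12      # 14 TFLOPS, from the spec sheet
BANDWIDTH_BPS = 900e9        # 900 GB/s, from the spec sheet

balance = FP32_PEAK_FLOPS / BANDWIDTH_BPS
print(f"Balance point: ~{balance:.1f} FLOPs/byte")  # ~15.6

# Example: FP32 SAXPY (y = a*x + y) does 2 FLOPs per 12 bytes moved
# (read x, read y, write y), well below the balance point.
saxpy_intensity = 2 / 12
bound = "compute" if saxpy_intensity > balance else "bandwidth"
print(f"SAXPY: {saxpy_intensity:.2f} FLOPs/byte ({bound}-bound)")
```

Kernels below ~15.6 FLOPs/byte (most stencil and BLAS-1/2 work) are limited by the 900 GB/s of HBM2 rather than by the CUDA cores.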
Market Context
The Tesla V100 was the flagship data center GPU of NVIDIA's Volta architecture lineup; today it occupies the entry performance tier of cloud GPU offerings.
Cloud Availability
Available across 6 cloud providers with prices starting at $0.19/hr. Pricing and availability may vary by region and provider.
Market Position
Released in 2017, this GPU targets professional and data center workloads.