Tesla T4 GPU

The T4 is a low-power data center GPU optimized for AI inference, offering mixed-precision performance in a compact design. It's commonly used in cloud deployments for cost-efficient scaling of NLP and recommendation systems.
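
The card's FP16 throughput comes from its Tensor Cores, which frameworks reach through mixed-precision inference. Below is a minimal sketch of FP16 inference with PyTorch autocast; it assumes PyTorch with CUDA support and a T4 (or any CUDA GPU) is visible, and the model and input are placeholders rather than anything from this listing.

    # Minimal sketch: FP16 mixed-precision inference with PyTorch autocast.
    # Assumes a CUDA-capable PyTorch install; the model and input are placeholders.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.ReLU(),
        torch.nn.Linear(4096, 1024),
    ).to(device).eval()

    x = torch.randn(32, 1024, device=device)

    # autocast runs eligible matmuls in FP16, which the T4 executes on Tensor Cores.
    with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
        y = model(x)

    print(y.shape, y.dtype)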

Starting Price
$0.15/hr
Available on 1 cloud provider

Key Specifications

Memory: 16GB VRAM
Architecture: Turing
Compute Units: N/A
Tensor Cores: 320

Technical Specifications

Hardware Details

Manufacturer: NVIDIA
Architecture: Turing
CUDA Cores: 2,560
Tensor Cores: 320
RT Cores: 40
Compute Units: N/A
Generation: N/A

Memory & Performance

VRAM: 16GB
Memory Interface: 256-bit
Memory Bandwidth: 320 GB/s
FP32 Performance: 8.1 TFLOPS
FP16 Performance: 65 TFLOPS
INT8 Performance: 130 TOPS
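
Reading the bandwidth and throughput figures together gives a rough "ridge point": how many arithmetic operations a kernel must perform per byte moved before it is limited by compute rather than by the 320 GB/s of memory bandwidth. A back-of-the-envelope sketch using only the listed peak numbers (real kernels achieve less):

    # Ridge points (ops per byte) from the listed peak figures above.
    MEM_BW_GBS = 320
    FP32_TFLOPS = 8.1
    FP16_TFLOPS = 65
    INT8_TOPS = 130

    def ridge_point(peak_tera_ops, bandwidth_gbs):
        """Operations per byte needed before a kernel is compute-bound."""
        return (peak_tera_ops * 1e12) / (bandwidth_gbs * 1e9)

    print(f"FP32: {ridge_point(FP32_TFLOPS, MEM_BW_GBS):.0f} FLOP/byte")  # ~25
    print(f"FP16: {ridge_point(FP16_TFLOPS, MEM_BW_GBS):.0f} FLOP/byte")  # ~203
    print(f"INT8: {ridge_point(INT8_TOPS, MEM_BW_GBS):.0f} OP/byte")      # ~406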

Performance

Computing Power

CUDA Cores: 2,560
Tensor Cores: 320
RT Cores: 40

Computational Performance

FP32: 8.1 TFLOPS
FP16: 65 TFLOPS
INT8: 130 TOPS
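
The FP32 figure is consistent with the usual peak-throughput estimate of two operations (one fused multiply-add) per CUDA core per clock. The boost clock used below, roughly 1,590 MHz, is not part of this listing and is an assumption for illustration only:

    # Sanity check of the FP32 peak: 2 ops (1 FMA) per CUDA core per clock.
    # The ~1,590 MHz boost clock is an assumed value, not from the table above.
    cuda_cores = 2560
    boost_clock_hz = 1.59e9

    fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
    print(f"Estimated FP32 peak: {fp32_tflops:.1f} TFLOPS")  # ~8.1 TFLOPS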

Common Use Cases

Inference, video transcoding
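
For inference jobs scheduled on shared cloud capacity, it can help to confirm at startup that the process actually landed on a T4-class device. A minimal sketch assuming PyTorch with CUDA support; the compute-capability check (7.5 for Turing) comes from general NVIDIA documentation rather than this listing:

    # Confirm the job is running on a T4-class (Turing, sm_75) GPU.
    # Assumes PyTorch with CUDA support.
    import torch

    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA device visible")

    name = torch.cuda.get_device_name(0)
    major, minor = torch.cuda.get_device_capability(0)
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3

    print(f"{name}, sm_{major}{minor}, {vram_gb:.1f} GB VRAM")
    if "T4" not in name:
        print("Warning: expected a Tesla T4 for this workload")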

Machine Learning & AI

  • Training large language models and transformers
  • Computer vision and image processing
  • Deep learning model development
  • High-performance inference workloads

Graphics & Compute

  • 3D rendering and visualization
  • Scientific simulations
  • Data center graphics virtualization
  • High-performance computing (HPC)

Market Context

The Tesla T4 sits within NVIDIA's Turing architecture lineup, positioned in the entry performance tier.

Cloud Availability

Available from 1 cloud provider at $0.15/hr. Pricing and availability may vary by region and provider.
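
At the listed $0.15/hr, a rough monthly cost projection is simple arithmetic. The hours and utilization figures below are illustrative assumptions, not data from any provider:

    # Rough cost projection at the listed on-demand rate.
    # hours_per_month and utilization are illustrative assumptions.
    rate_per_hour = 0.15
    hours_per_month = 730      # average month
    utilization = 0.5          # assumed fraction of time the instance runs

    monthly_cost = rate_per_hour * hours_per_month * utilization
    print(f"~${monthly_cost:.2f}/month at {utilization:.0%} utilization")
    # Running full-time would be about $109.50/month.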

Market Position

Released in 2018, this GPU is positioned for professional workloads.

Current Pricing

Provider: Vast.ai
Hourly Price: $0.15/hr

Prices are updated regularly. Last updated: 7/31/2025