GPU Plans

GPU plans are sold as part of cloud servers and are listed here with their specifications and prices. Each plan includes one or more GPUs plus CPU, memory, disk, and data transfer.
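
Because plans bundle different GPU counts, the listed hourly prices are easiest to compare when normalized to dollars per GPU-hour. A minimal sketch — the helper name is illustrative, and the example figures are taken from the AWS tables further down:

```python
def price_per_gpu_hour(hourly_price: float, gpu_count: int) -> float:
    """Normalize a plan's hourly price to dollars per GPU-hour."""
    return hourly_price / gpu_count

# Example figures from the AWS plans listed below (on-demand, USD/hr).
plans = {
    "p4d.24xlarge": (21.96, 8),  # 8 x NVIDIA A100 40 GB
    "p5.48xlarge": (55.04, 8),   # 8 x NVIDIA H100 80 GB
}
for name, (price, gpus) in plans.items():
    print(f"{name}: ${price_per_gpu_hour(price, gpus):.2f}/GPU-hr")
```

Normalizing this way makes it clear, for instance, that an 8-GPU plan at a higher sticker price can still be the cheaper option per GPU.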

GPU Types

Cloud providers offer a variety of GPUs from vendors like NVIDIA and AMD, each optimized for different workloads. From AI training to graphics rendering, these powerful processors enable diverse applications. Here's a look at some common GPU types:

  • A10: A versatile data center GPU balancing AI inference and graphics rendering. Offers strong performance for diverse workloads, including virtual workstations and AI-powered video processing. See also the A10G.
  • A40: A professional workstation GPU built for demanding graphics and AI tasks. It delivers exceptional performance for visual effects, 3D rendering, and AI-accelerated workflows in professional environments.
  • A100: A high-performance data center GPU designed for accelerating AI training and high-performance computing. Delivers exceptional computational power for complex simulations and large-scale deep learning models.
  • H100: NVIDIA's next-generation AI GPU, providing a significant leap in performance over the A100. Engineered for massive AI workloads, with improved transformer engine performance, and increased memory bandwidth.
  • H200: An enhanced version of the H100, designed to tackle the most demanding AI workloads. It offers increased memory capacity and bandwidth, enabling faster processing of massive datasets for large language models and generative AI.
  • L4: An energy-efficient GPU optimized for AI video and inference workloads in the data center. It excels at tasks like video transcoding, AI-powered video analysis, and real-time inference, while maintaining a low power footprint.
  • T4: An entry-level inference GPU, widely used in cloud environments for AI inference and graphics virtualization. Provides a cost-effective solution for deploying AI models and delivering virtual desktops.
  • L40S: A powerful data center GPU designed for professional visualization and demanding AI workloads. Ideal for rendering complex 3D models, running simulations, and accelerating AI-driven design and content creation.
  • NVIDIA V100: A previous-generation high-performance GPU, still widely used for AI training and scientific computing. It offers substantial computational power and memory bandwidth for demanding workloads. See also NVIDIA V100S.
  • AMD Radeon Pro V520: A professional workstation GPU designed for visualization and graphics-intensive applications. It delivers reliable performance for tasks like 3D modeling, rendering, and video editing.
  • Nvidia RTX 4000: The NVIDIA RTX 4000 Ada Generation is a powerful professional GPU with 20GB GDDR6 ECC memory. Featuring 6144 CUDA cores, 192 Tensor cores, and 48 RT cores, it excels in demanding creative, design, and AI workflows. Its single-slot, power-efficient design delivers high performance for complex tasks.
  • Nvidia Quadro RTX 6000: The NVIDIA RTX 6000 is a high-end professional workstation graphics card. It boasts 48GB of GDDR6 ECC memory, 18,176 CUDA cores, 568 Tensor cores, and 142 RT cores, delivering exceptional performance for demanding tasks like 3D rendering, AI, and data science. Its 960 GB/s memory bandwidth and advanced features like DLSS 3 and SER accelerate workflows, making it a top choice for professionals.

This is not a comprehensive list, and prices may change before VPSBenchmarks can update them.

Amazon AWS
GPUs - P6

Amazon EC2 P6 instances, powered by NVIDIA Blackwell GPUs, are designed for high-performance AI training and inference. Featuring 5th Generation Intel Xeon Scalable processors, these instances provide substantial leaps in GPU memory and compute throughput compared to previous generations. They are optimized for large-scale distributed AI workloads, including mixture of experts and trillion-parameter models, utilizing Elastic Fabric Adapter (EFA) networking and high-bandwidth local NVMe storage to accelerate data-intensive machine learning and high-performance computing applications efficiently in the cloud.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
p6-b200.48xlarge 8 x NVIDIA B200 1432 GB 192 2048 GB 30720 GB $113.93/hr
GPUs - G5

G5 instances feature up to 8 NVIDIA A10G Tensor Core GPUs and second generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.6 TB of local NVMe SSD storage.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
g5.xlarge 1 x Nvidia A10G 24 GB 4 16 GB 250 GB $1.01/hr
g5.16xlarge 1 x Nvidia A10G 24 GB 64 256 GB 1900 GB $4.10/hr
g5.12xlarge 4 x Nvidia A10G 96 GB 48 192 GB 3800 GB $5.67/hr
g5.48xlarge 8 x Nvidia A10G 192 GB 192 768 GB 7600 GB $16.29/hr
g5.2xlarge 1 x Nvidia A10G 24 GB 8 32 GB 450 GB $1.21/hr
GPUs - P4d

Amazon Elastic Compute Cloud (Amazon EC2) P4d instances deliver high performance for machine learning (ML) training and high performance computing (HPC) applications in the cloud. P4d instances are powered by NVIDIA A100 Tensor Core GPUs and deliver industry-leading high throughput and low-latency networking. These instances support 400 Gbps instance networking. P4d instances provide up to 60% lower cost to train ML models, including an average of 2.5x better performance for deep learning models compared to previous-generation P3 and P3dn instances.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
p4d.24xlarge 8 x NVIDIA A100 320 GB 96 1152 GB 8000 GB $21.96/hr
p4de.24xlarge 8 x NVIDIA A100 640 GB 96 1152 GB 8000 GB $27.45/hr
GPUs - G6

G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
g6.xlarge 1 x Nvidia L4 24 GB 4 16 GB 250 GB $0.80/hr
gr6.4xlarge 1 x Nvidia L4 24 GB 16 128 GB 600 GB $1.54/hr
g6.4xlarge 1 x Nvidia L4 24 GB 16 64 GB 600 GB $1.32/hr
g6.12xlarge 4 x Nvidia L4 96 GB 48 192 GB 3760 GB $4.60/hr
g6.48xlarge 8 x Nvidia L4 192 GB 192 768 GB 7520 GB $13.35/hr
GPUs - P5

P5 instances provide up to 8 NVIDIA H100 GPUs with a total of up to 640 GB HBM3 GPU memory per instance. P5e and P5en instances provide up to 8 NVIDIA H200 GPUs with a total of up to 1128 GB HBM3e GPU memory per instance. Both instances support up to 900 GB/s of NVSwitch GPU interconnect (total of 3.6 TB/s bisectional bandwidth in each instance), so each GPU can communicate with every other GPU in the same instance with single-hop latency.
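
The 3.6 TB/s bisection figure follows from simple arithmetic: split the eight GPUs into two halves of four, and each GPU can drive its full 900 GB/s across the cut. A quick check (the helper name is illustrative):

```python
def bisection_bandwidth_gb_s(gpu_count: int, per_gpu_gb_s: float) -> float:
    """Bandwidth crossing a cut that splits the GPUs into two equal halves."""
    return gpu_count * per_gpu_gb_s / 2

# 8 GPUs x 900 GB/s / 2 = 3600 GB/s, i.e. the 3.6 TB/s quoted for P5 instances.
print(bisection_bandwidth_gb_s(8, 900))
```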

Plan GPU Type GPU RAM vCPUs RAM Storage Price
p5en.48xlarge 8 x H200 1128 GB 192 2000 GB 30720 GB $63.30/hr
p5.48xlarge 8 x H100 640 GB 192 2000 GB 30720 GB $55.04/hr
p5.4xlarge 1 x H100 80 GB 16 256 GB 3840 GB $6.88/hr
GPUs - G4

Amazon EC2 G4 instances are the industry’s most cost-effective and versatile GPU instances for deploying machine learning models such as image classification, object detection, and speech recognition, and for graphics-intensive applications such as remote graphics workstations, game streaming, and graphics rendering. G4 instances are available with a choice of NVIDIA GPUs (G4dn) or AMD GPUs (G4ad).

Plan GPU Type GPU RAM vCPUs RAM Storage Price
g4dn.metal 8 x NVIDIA T4 128 GB 96 384 GB 1800 GB $7.82/hr
g4dn.xlarge 1 x NVIDIA T4 16 GB 4 16 GB 125 GB $0.53/hr
g4ad.xlarge 1 x AMD Radeon Pro V520 8 GB 4 16 GB 150 GB $0.38/hr
g4ad.2xlarge 1 x AMD Radeon Pro V520 8 GB 8 32 GB 300 GB $0.54/hr
g4ad.8xlarge 2 x AMD Radeon Pro V520 8 GB 32 128 GB 1200 GB $1.73/hr
g4ad.16xlarge 4 x AMD Radeon Pro V520 8 GB 64 256 GB 2400 GB $3.47/hr
g4dn.12xlarge 4 x Nvidia T4 16 GB 48 192 GB 900 GB $3.91/hr
Database Mart
GPU Server - Datacenter

Dedicated servers with Xeon CPUs and various GPU types. One month billing minimum.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
Multi-GPU Dedicated Server - 4xA100 4 x A100 40 GB 44 512 GB 20480 GB $3.42/hr
Advanced GPU VPS - RTX Pro 4000 1 x NVIDIA RTX Pro 4000 24 GB 24 56 GB 320 GB $0.27/hr
Enterprise GPU Dedicated Server - A40 1 x Nvidia A40 48 GB 36 256 GB 10240 GB $0.75/hr
Enterprise GPU Dedicated Server - A100 1 x Nvidia A100 40 GB 36 256 GB 10240 GB $1.10/hr
Enterprise GPU Dedicated Server - H100 1 x Nvidia H100 80 GB 36 256 GB 10240 GB $3.56/hr
Enterprise GPU VPS - RTX Pro 6000 1 x NVIDIA RTX Pro 6000 24 GB 32 84 GB 400 GB $0.82/hr
DigitalOcean
GPU Droplets - H100

Reliably run training and inference on AI/ML models, process large data sets and complex neural networks for deep learning use cases, and serve additional use cases like high-performance computing (HPC).

Plan GPU Type GPU RAM vCPUs RAM Storage Price
NVIDIA H100x8 8 x NVIDIA H100 640 GB 160 1920 GB 2000 GB $23.92/hr
NVIDIA H100 1 x NVIDIA H100 80 GB 20 240 GB 720 GB $3.39/hr
Genesis Public Cloud
GPU Server - T4

Plan GPU Type GPU RAM vCPUs RAM Storage Price
g6as.xlarge 1 x Tesla T4 16 GB 2 16 GB 0 GB $0.41/hr
g6as.2xlarge 1 x Tesla T4 16 GB 4 32 GB 0 GB $0.45/hr
g6as.3xlarge 1 x Tesla T4 16 GB 6 48 GB 0 GB $0.48/hr
g6as.4xlarge 1 x Tesla T4 16 GB 8 64 GB 0 GB $0.52/hr
g6as.6xlarge 1 x Tesla T4 16 GB 12 96 GB 0 GB $0.60/hr
g6as.8xlarge 1 x Tesla T4 16 GB 16 128 GB 0 GB $0.68/hr
Google Compute Engine
Accelerator Optimized - A3 High

A3 High VMs come preconfigured with NVIDIA H100 GPUs, in shapes ranging from one to eight GPUs per VM.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
a3-highgpu-1g 1 x Nvidia H100 80 GB 26 234 GB 750 GB $11.06/hr
a3-highgpu-2g 2 x Nvidia H100 160 GB 52 468 GB 1500 GB $22.12/hr
a3-highgpu-4g 4 x Nvidia H100 320 GB 104 936 GB 3000 GB $44.25/hr
a3-highgpu-8g 8 x Nvidia H100 640 GB 208 1872 GB 6000 GB $88.49/hr
Accelerator Optimized - G2

G2 VMs come with NVIDIA L4 GPUs.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
g2-standard-4 1 x Nvidia L4 24 GB 4 16 GB 0 GB $0.71/hr
g2-standard-8 1 x Nvidia L4 24 GB 8 32 GB 0 GB $0.85/hr
g2-standard-16 1 x Nvidia L4 24 GB 16 64 GB 0 GB $1.15/hr
g2-standard-32 1 x Nvidia L4 24 GB 32 128 GB 0 GB $1.73/hr
g2-standard-12 1 x Nvidia L4 24 GB 12 48 GB 0 GB $1.00/hr
g2-standard-24 2 x Nvidia L4 48 GB 24 96 GB 0 GB $2.00/hr
g2-standard-48 4 x Nvidia L4 96 GB 48 192 GB 0 GB $4.00/hr
g2-standard-96 8 x Nvidia L4 192 GB 96 384 GB 0 GB $8.00/hr
LayerStack
Cloud GPU - vGPU

Virtualized GPU technology for sharing graphics processing & computing power.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
1 A40-48Q 1 x A40-48Q 48 GB 24 128 GB 1500 GB $1.57/hr
Linode
GPU - RTX 6000

Balanced price-performance for machine learning, data analytics, and gaming harnessing the power of CUDA, Tensor, and RT cores. Currently available in limited core compute regions. Contact our sales team to get started.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
Dedicated 32 GB + RTX6000 GPU x1 1 x RTX6000 32 GB 8 32 GB 640 GB $1.50/hr
Dedicated 96 GB + RTX6000 GPU x3 3 x RTX6000 32 GB 20 96 GB 1920 GB $4.50/hr
Dedicated 128 GB + RTX6000 GPU x4 4 x RTX6000 32 GB 24 128 GB 2560 GB $6.00/hr
GPU - RTX 4000

Dedicated virtual machines equipped with NVIDIA GPUs to speed up complex compute jobs.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
RTX4000 Ada GPU x1 Small 1 x RTX4000 Ada 20 GB 4 16 GB 500 GB $0.52/hr
RTX4000 Ada GPU x1 Large 1 x RTX4000 Ada 20 GB 16 64 GB 500 GB $0.96/hr
RTX4000 Ada GPU x4 Medium 4 x RTX4000 Ada 20 GB 48 196 GB 2000 GB $3.57/hr
Microsoft Azure
GPU Compute - ND-H200-v5

The ND H200 v5-series is a flagship Azure virtual machine designed for high-end AI training and generative inference. It features eight NVIDIA H200 Tensor Core GPUs with 141GB of HBM3e memory each, providing 4.8 TB/s of bandwidth. Powered by Sapphire Rapids processors, it offers 96 vCPUs and 1850 GiB of RAM. With 3.2 Tb/s of InfiniBand interconnect, it is optimized for large-scale, low-latency AI clusters and complex scientific computing workloads.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
ND96is_H200_v5 8 x NVIDIA H200 1128 GB 96 1800 GB 28000 GB $84.80/hr
GPU Compute - NCsv3

NCv3-series VMs are powered by NVIDIA Tesla V100 GPUs. From 6 to 24 Intel Xeon E5-2690 v4 vCPUs.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
NC24rs v3 4 x V100 64 GB 24 448 GB 2944 GB $13.46/hr
NC24s v3 4 x V100 64 GB 24 448 GB 2944 GB $12.41/hr
NC12s v3 2 x V100 32 GB 12 224 GB 1472 GB $6.20/hr
NC6s v3 1 x V100 16 GB 6 112 GB 736 GB $3.10/hr
GPU compute - NCasT4_v3

The NCasT4_v3-series virtual machines are powered by NVIDIA Tesla T4 GPUs and AMD EPYC 7V12 (Rome) CPUs. The VMs feature up to 4 NVIDIA T4 GPUs with 16 GB of memory each, up to 64 non-multithreaded AMD EPYC 7V12 (Rome) processor cores (base frequency of 2.45 GHz, all-cores peak frequency of 3.1 GHz, and single-core peak frequency of 3.3 GHz), and 440 GiB of system memory.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
NC4as T4 v3 1 x T4 16 GB 4 28 GB 180 GB $0.53/hr
NC8as T4 v3 1 x T4 16 GB 8 56 GB 360 GB $0.76/hr
NC16as T4 v3 1 x T4 16 GB 16 110 GB 360 GB $1.22/hr
NC64as T4 v3 4 x T4 64 GB 64 440 GB 2880 GB $4.35/hr
GPU Compute - NVv3

The NVv3-series virtual machines are powered by NVIDIA Tesla M60 GPUs and NVIDIA GRID technology with Intel E5-2690 v4 (Broadwell) CPUs and Intel Hyper-Threading Technology. These virtual machines are targeted at GPU-accelerated graphics applications and virtual desktops where customers want to visualize their data, simulate results to view, work on CAD, or render and stream content.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
NV12s v3 1 x M60 8 GB 12 112 GB 320 GB $1.16/hr
NV24s v3 2 x M60 16 GB 24 224 GB 640 GB $2.31/hr
NV48s v3 4 x M60 32 GB 48 448 GB 1280 GB $4.62/hr
GPU Compute - NCads_H100_v5

The NCads H100 v5-series virtual machines are part of the Azure GPU family designed for Applied AI training and batch inference workloads. Powered by NVIDIA H100 NVL GPUs and 4th-generation AMD EPYC Genoa processors, these instances provide high-performance compute capabilities. They feature up to 2 GPUs with 94GB memory each and up to 96 non-multithreaded processor cores. This series is ideal for GPU-accelerated analytics, video processing, and autonomy model training, supporting high-throughput local NVMe storage and accelerated networking.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
NC80adis H100 v5 2 x NVIDIA H100 188 GB 80 640 GB 7152 GB $13.96/hr
NC40ads H100 v5 1 x NVIDIA H100 94 GB 40 320 GB 3576 GB $6.98/hr
GPU Compute - NDsr H100 v5

The ND H100 v5-series virtual machines are Azure's flagship generative AI infrastructure, designed for massive-scale AI training and inference. These instances feature eight NVIDIA H100 Tensor Core GPUs interconnected via 400 Gb/s NVIDIA Quantum-2 InfiniBand. They are powered by 4th Gen Intel Xeon Scalable processors and provide significant performance leaps for large language models. With high-speed DDR5 memory and local NVMe storage, they deliver the throughput necessary for the most demanding deep learning workloads and high-performance computing applications.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
ND96isr H100 v5 8 x NVIDIA H100 640 GB 96 1900 GB 28000 GB $98.32/hr
OVHcloud
Public Cloud - Cloud GPU

Cloud servers specially designed for processing AI, graphics, and massively parallel tasks.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
t2-le-180 4 x Tesla V100S 32 GB 60 180 GB 500 GB $3.74/hr
t2-le-90 2 x Tesla V100S 32 GB 30 90 GB 500 GB $1.87/hr
t2-le-45 1 x Tesla V100S 32 GB 15 45 GB 300 GB $0.94/hr
t2-180 4 x Tesla V100S 32 GB 60 180 GB 50 GB $8.42/hr
t2-90 2 x Tesla V100S 32 GB 30 90 GB 800 GB $4.21/hr
t2-45 1 x Tesla V100S 32 GB 15 45 GB 400 GB $2.11/hr
t1-le-180 4 x Tesla V100 16 GB 32 180 GB 400 GB $3.28/hr
t1-le-90 2 x Tesla V100 16 GB 16 90 GB 400 GB $1.64/hr
t1-le-45 1 x Tesla V100 16 GB 8 45 GB 300 GB $0.82/hr
t1-180 4 x Tesla V100 16 GB 36 180 GB 50 GB $7.72/hr
t1-90 2 x Tesla V100 16 GB 18 90 GB 800 GB $3.86/hr
t1-45 1 x Tesla V100 16 GB 8 45 GB 400 GB $1.93/hr
l4-360 4 x L4 24 GB 90 360 GB 400 GB $3.51/hr
l4-180 2 x L4 24 GB 45 180 GB 400 GB $1.76/hr
l4-90 1 x L4 24 GB 22 90 GB 400 GB $0.88/hr
h100-1520 4 x H100 80 GB 120 1520 GB 200 GB $13.10/hr
h100-760 2 x H100 80 GB 60 760 GB 200 GB $6.55/hr
h100-380 1 x H100 80 GB 30 380 GB 200 GB $3.28/hr
a100-720 4 x A100 80 GB 60 720 GB 500 GB $12.87/hr
a100-360 2 x A100 80 GB 30 360 GB 500 GB $6.44/hr
a100-180 1 x A100 80 GB 15 180 GB 300 GB $3.22/hr
l40s-360 4 x L40S 48 GB 60 360 GB 400 GB $6.55/hr
l40s-180 2 x L40S 48 GB 30 180 GB 400 GB $3.28/hr
l40s-90 1 x L40S 48 GB 15 90 GB 400 GB $1.64/hr
a10-180 4 x A10 24 GB 120 180 GB 400 GB $3.56/hr
a10-90 2 x A10 24 GB 60 90 GB 400 GB $1.78/hr
a10-45 1 x A10 24 GB 30 45 GB 400 GB $0.89/hr
OVHcloud US
Public Cloud - Cloud GPU

The most powerful Public Cloud instances for parallel processing: up to 1,000 times faster than a CPU.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
t2-45 1 x Tesla V100S 32 GB 15 45 GB 400 GB $2.19/hr
t2-90 2 x Tesla V100S 32 GB 30 90 GB 800 GB $4.38/hr
t2-180 4 x Tesla V100S 32 GB 60 180 GB 4050 GB $8.76/hr
t2-le-45 1 x Tesla V100S 32 GB 15 45 GB 300 GB $0.88/hr
t2-le-90 2 x Tesla V100S 32 GB 30 90 GB 500 GB $1.76/hr
t2-le-180 4 x Tesla V100S 32 GB 60 180 GB 500 GB $3.53/hr
l4-90 1 x L4 24 GB 22 90 GB 400 GB $1.00/hr
l4-180 2 x L4 24 GB 45 180 GB 400 GB $2.00/hr
l4-360 4 x L4 24 GB 90 360 GB 400 GB $4.00/hr
Server Optima
GPU Servers - Nvidia A100

The A100 is based on NVIDIA’s Ampere architecture, which brings significant performance improvements over previous generations. It features advanced Tensor Cores that accelerate deep learning computations, enabling faster training and inference times.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
1 x NVIDIA A100 40G Dedicated 1 x NVIDIA A100 40 GB 12 16 GB 500 GB $1.16/hr
UpCloud
GPU Servers - Nvidia L40S

Premium AMD CPUs and up to 100k IOPS with MaxIOPS.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
3 x NVIDIA L40S 32 cores 3 x NVIDIA L40S 144 GB 32 384 GB 0 GB $5.69/hr
1 x NVIDIA L40S 16 cores 1 x NVIDIA L40S 48 GB 16 192 GB 0 GB $1.79/hr
1 x NVIDIA L40S 8 cores 1 x NVIDIA L40S 48 GB 8 64 GB 0 GB $1.30/hr
2 x NVIDIA L40S 16 cores 2 x NVIDIA L40S 96 GB 16 192 GB 0 GB $3.09/hr
Vultr
Cloud GPU - Nvidia L40S

Breakthrough multi-workload acceleration for large language model (LLM) inference and training, graphics, and video applications based on the latest Ada Lovelace architecture. The prices shown in the table below reflect on-demand pricing. 36-month prepaid pricing for the NVIDIA L40S starts at $0.848/GPU/hr.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
NVIDIA L40S 1gpu 1 x NVIDIA L40S 48 GB 16 180 GB 1200 GB $1.67/hr
NVIDIA L40S 4gpus 4 x NVIDIA L40S 192 GB 64 750 GB 2600 GB $6.68/hr
NVIDIA L40S 2gpus 2 x NVIDIA L40S 96 GB 32 375 GB 2200 GB $3.34/hr
NVIDIA L40S 8gpus 8 x NVIDIA L40S 384 GB 128 1500 GB 3400 GB $13.37/hr
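
Setting the on-demand rate from the table above ($1.67/hr for the single-GPU plan) against the quoted 36-month prepaid rate of $0.848/GPU/hr, the prepaid discount works out to roughly half. A minimal sketch of the calculation:

```python
# Rates taken from the Vultr L40S listing above (USD per GPU per hour).
on_demand = 1.67      # single-GPU plan, on-demand
prepaid_36mo = 0.848  # quoted 36-month prepaid rate

savings = 1 - prepaid_36mo / on_demand
print(f"36-month prepaid saves about {savings:.0%} vs on-demand")
```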
Cloud GPU - Nvidia HGX H100

Designed specifically for AI and HPC workloads, featuring fourth-generation Tensor Cores and the Transformer Engine with FP8 precision for accelerated performance. Hourly price is calculated based on 730 hours per month. Contact us for additional contract duration and payment options.
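
That 730-hours-per-month convention makes it easy to move between hourly and monthly figures; a minimal sketch, with illustrative helper names:

```python
HOURS_PER_MONTH = 730  # Vultr's stated billing convention

def monthly_from_hourly(hourly_price: float) -> float:
    return hourly_price * HOURS_PER_MONTH

def hourly_from_monthly(monthly_price: float) -> float:
    return monthly_price / HOURS_PER_MONTH

# The 8-GPU HGX H100 plan below at $23.92/hr corresponds to roughly $17,462/month.
print(f"${monthly_from_hourly(23.92):,.2f}/month")
```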

Plan GPU Type GPU RAM vCPUs RAM Storage Price
NVIDIA HGX H100 8 x NVIDIA HGX H100 640 GB 224 2048 GB 32640 GB $23.92/hr
Cloud GPU - Nvidia A40

Combining professional graphics with powerful compute and AI, to meet today's design, creative, and scientific challenges.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
NVIDIA A40 1gpu 1 x NVIDIA A40 48 GB 24 120 GB 1400 GB $1.71/hr
NVIDIA A40 4gpus 4 x NVIDIA A40 192 GB 96 480 GB 1400 GB $6.85/hr
Cloud GPU - AMD MI355X

Providing remarkable acceleration with breakthrough memory capacity and memory bandwidth. The prices shown in the table below reflect on-demand preemptible instances for AMD MI355X GPUs.

Plan GPU Type GPU RAM vCPUs RAM Storage Price
MI355X 8 GPUs 8 x AMD MI355X 2304 GB 256 3000 GB 61000 GB $20.72/hr