| GPU | Price ($/h) | VRAM | $ |
|---|---|---|---|
| RTX Pro 6000 | 1.38 | 96 GB | 0.29 |
| A100 80G | 1.32 | 80 GB | 0.04 |
| H100 | 2.09 | 80 GB | 0.00 |
| A6000 | 0.54 | 48 GB | 0.01 |
| RTX 6000 Ada | 0.85 | 48 GB | 0.33 |
| L40S | 0.76 | 48 GB | 0.19 |
| L40 | 1.09 | 48 GB | 0.00 |
| A100 | 1.38 | 40 GB | 0.00 |
| RTX 5090 | 0.71 | 32 GB | 0.06 |
| V100 32G | 2.57 | 32 GB | 0.00 |
| L4 | 0.92 | 24 GB | 0.00 |
| A5000 | 1.56 | 24 GB | 0.00 |
| RTX 4090 | 0.54 | 24 GB | 0.12 |
| A4000 | 0.17 | 16 GB | 0.00 |
| A16 | 0.56 | 16 GB | 0.00 |
| V100 | 2.57 | 16 GB | 2.15 |
| CPU | 0.39 | 0 GB | 0.00 |
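One way to compare the listed instance types is price per GB of VRAM per hour, a rough value metric for memory-bound workloads. A minimal sketch using a subset of the prices above (the CPU row is omitted since it has no VRAM):

```python
# Hourly price and VRAM (GB) taken from the pricing table above (subset).
gpus = {
    "RTX Pro 6000": (1.38, 96),
    "A100 80G": (1.32, 80),
    "H100": (2.09, 80),
    "A6000": (0.54, 48),
    "L40S": (0.76, 48),
    "RTX 4090": (0.54, 24),
    "A4000": (0.17, 16),
}

# $/h per GB of VRAM: lower means more memory for the money.
per_gb = {name: price / vram for name, (price, vram) in gpus.items()}
best = min(per_gb, key=per_gb.get)
print(best, round(per_gb[best], 4))
```

On these numbers the A4000 comes out cheapest per GB (0.17 / 16 ≈ $0.0106/GB/h), just ahead of the A6000 ($0.01125/GB/h).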
Inference
Quickly deploy a public or custom model to a dedicated inference endpoint.
- Model
- Name
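The deployment form above asks for a model and an endpoint name. The provider's actual API is not shown on this page, so the request shape below is purely a hypothetical sketch: the field names and the helper are illustrative, not the real interface.

```python
import json

def build_deploy_request(name: str, model: str, gpu: str = "A6000") -> str:
    """Build a JSON body for a hypothetical 'create inference endpoint' call.

    All field names here are assumptions; consult the provider's API docs
    for the real schema.
    """
    payload = {
        "name": name,    # endpoint name ("Name" field in the form)
        "model": model,  # public or custom model id ("Model" field)
        "gpu": gpu,      # one of the GPU types priced above
    }
    return json.dumps(payload)

body = build_deploy_request("demo-endpoint", "my-org/my-model")
```

The resulting JSON string could then be sent with any HTTP client once the real endpoint URL and authentication scheme are known.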