Rent high-performance GPU servers equipped with the latest professional Nvidia Tesla graphics cards, including the Tesla A100 80GB and H100 models. These servers are ideal for demanding applications such as AI acceleration, processing large datasets, and tackling complex high-performance computing (HPC) tasks. Experience unmatched speed and efficiency for your advanced computing needs.
NVIDIA H100, powered by the new Hopper architecture, is a masterpiece of a GPU that delivers powerful AI acceleration, big data processing, and high-performance computing (HPC).
The Hopper architecture introduces fourth-generation Tensor Cores that are up to nine times faster than their predecessors, resulting in enhanced performance across a wide range of machine learning and deep learning tasks.
With 80GB of high-speed HBM2e memory, this GPU can handle large language models (LLMs) and other demanding AI tasks with ease.
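To see why 80GB of memory matters for LLMs, a quick back-of-the-envelope calculation shows how much memory model weights alone consume at a given precision. This is an illustrative sketch (the 30B-parameter model is a hypothetical example; real deployments also need room for activations and KV cache):

```python
def model_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough memory needed just for model weights (no activations/KV cache).

    bytes_per_param: 2 for FP16/BF16, 4 for FP32, 1 for FP8/INT8.
    """
    return n_params * bytes_per_param / 1e9

# A hypothetical 30B-parameter model stored in FP16:
print(round(model_memory_gb(30e9), 1))   # weights alone: 60.0 GB
print(model_memory_gb(30e9) < 80)        # fits in an 80GB GPU (weights only)
```

By the same arithmetic, a 30B-parameter model in FP32 would need roughly 120GB for weights alone, which is why half-precision formats are the norm on these cards.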
The Nvidia A100 80GB GPU card is a data center accelerator that is designed to accelerate AI training and inference, as well as high-performance computing (HPC) applications.
The A100 80GB GPU also offers one of the largest memory capacities and highest memory bandwidths of any GPU on the market. This makes it ideal for training and deploying the largest AI models, as well as for accelerating HPC applications that work with large datasets.
The H100 GPU is the most powerful accelerator NVIDIA has ever built, with up to 4X faster performance for AI training and 7X faster performance for HPC applications than the previous generation.
H100’s architecture is supercharged for the largest workloads, from large language models to scientific computing applications. It is also highly scalable, supporting up to 18 NVLink interconnections for high-bandwidth communication between GPUs.
H100 GPU is designed for enterprise use, with features such as support for PCIe Gen5, NDR Quantum-2 InfiniBand networking, and NVIDIA Magnum IO software for efficient scalability.
The H100's fourth-generation Tensor Cores are specifically designed to accelerate AI workloads, and they offer up to 2X faster performance at FP8 precision than the previous generation.
The structured sparsity feature allows the A100 GPU to skip over the zero-valued portions of a matrix, which can improve performance by up to 2X for certain AI workloads.
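The pattern behind this feature is 2:4 structured sparsity: in every group of four weights, two are pruned to zero, and the sparse Tensor Cores skip the zeros. A minimal NumPy sketch of the pruning pattern (illustrative only; in practice frameworks such as TensorRT apply this automatically):

```python
import numpy as np

def prune_2_to_4(weights: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude values in each group of four.

    This mimics the 2:4 sparsity pattern the A100's sparse Tensor Cores
    exploit. Assumes the flattened weight count is divisible by 4.
    """
    w = weights.reshape(-1, 4).copy()
    # Indices of the two smallest |values| in each group of four
    drop = np.argsort(np.abs(w), axis=1)[:, :2]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.array([0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.8, 0.01])
print(prune_2_to_4(w))  # exactly two of every four entries become zero
```

Because the zero positions follow a fixed 2-of-4 layout, the hardware can store the matrix compactly and skip the pruned multiplications entirely, which is where the up-to-2X speedup comes from.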
The Multi-Instance GPU (MIG) feature allows the A100 GPU to be partitioned into up to seven smaller GPU instances, which can be used to accelerate multiple workloads simultaneously.
Here are some benchmarks for the Nvidia A100 80GB and H100 GPUs compared to other GPUs for AI and HPC:
AI Benchmarks

| Benchmark | A100 80GB | H100 | A40 | V100 |
|---|---|---|---|---|
| ResNet-50 Inference (images/s) | 13,128 | 24,576 | 6,756 | 3,391 |
| BERT Large Training (steps/s) | 1,123 | 2,231 | 561 | 279 |
| GPT-3 Training (tokens/s) | 175B | 400B | 87.5B | 43.75B |
HPC Benchmarks

| Benchmark | A100 80GB | H100 | A40 | V100 |
|---|---|---|---|---|
| HPL DP (TFLOPS) | 40 | 90 | 20 | 10 |
| HPCG (GFLOPS) | 45 | 100 | 22.5 | 11.25 |
| LAMMPS (atoms/day) | 115T | 250T | 57.5T | 28.75T |
The Nvidia H100 GPU is considerably faster than the A100 in both AI and HPC benchmarks, and it also outperforms other GPUs such as the A40 and V100. However, the A100 comes at a lower price and can be more cost-efficient for many AI and HPC tasks.
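To make that price/performance trade-off concrete, here is a small sketch; the hourly rental prices below are hypothetical placeholders for illustration, not HOSTKEY's actual rates:

```python
def perf_per_dollar(throughput: float, hourly_cost: float) -> float:
    """Throughput units delivered per dollar of hourly rental cost."""
    return throughput / hourly_cost

# ResNet-50 inference figures from the benchmark table above; the
# hourly prices are made-up examples purely to show the comparison.
a100 = perf_per_dollar(13_128, 2.00)   # images/s per $/h
h100 = perf_per_dollar(24_576, 4.50)
print(a100 > h100)  # at these assumed prices the A100 wins on cost efficiency
```

The takeaway: if the faster card costs proportionally more than its speedup, the slower card delivers more work per dollar, so the right choice depends on whether raw time-to-result or budget dominates your workload.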
| Specification | H100 for PCIe-Based Servers | A100 80GB PCIe |
|---|---|---|
| FP64 | 26 teraFLOPS | 9.7 teraFLOPS |
| FP64 Tensor Core | 51 teraFLOPS | 19.5 teraFLOPS |
| FP32 | 51 teraFLOPS | 19.5 teraFLOPS |
| TF32 Tensor Core | 756 teraFLOPS* | 156 teraFLOPS / 312 teraFLOPS* |
| BFLOAT16 Tensor Core | 1,513 teraFLOPS* | 312 teraFLOPS / 624 teraFLOPS* |
| FP16 Tensor Core | 1,513 teraFLOPS* | 312 teraFLOPS / 624 teraFLOPS* |
| FP8 Tensor Core | 3,026 teraFLOPS* | |
| INT8 Tensor Core | 3,026 TOPS* | 624 TOPS / 1,248 TOPS* |
| GPU memory | 80GB | 80GB HBM2e |
| GPU memory bandwidth | 2TB/s | 1,935GB/s |
| Decoders | 7 NVDEC, 7 JPEG | |
| Max thermal design power (TDP) | 300-350W (configurable) | 300W |
| Multi-Instance GPU (MIG) | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB |
| Form factor | PCIe, dual-slot, air-cooled | PCIe, dual-slot air-cooled or single-slot liquid-cooled |
| Interconnect | NVLink: 600GB/s; PCIe Gen5: 128GB/s | NVIDIA NVLink Bridge for 2 GPUs: 600GB/s; PCIe Gen4: 64GB/s |
| Server options | Partner and NVIDIA-Certified Systems with 1-8 GPUs | Partner and NVIDIA-Certified Systems with 1-8 GPUs |
| NVIDIA AI Enterprise | Included | |

* With sparsity
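The memory bandwidth row matters in practice because single-stream LLM inference is usually bandwidth-bound: generating each token reads the full set of model weights from memory. A rough upper-bound estimate (a deliberate simplification that ignores KV-cache traffic and other overheads; the 70GB model size is a hypothetical example):

```python
def max_tokens_per_s(bandwidth_gb_s: float, model_gb: float) -> float:
    """Upper bound on tokens/s when each token streams all weights once."""
    return bandwidth_gb_s / model_gb

# A hypothetical 70GB model (e.g. weights stored in FP16):
print(round(max_tokens_per_s(2000, 70), 1))   # H100 PCIe at ~2TB/s
print(round(max_tokens_per_s(1935, 70), 1))   # A100 80GB at 1,935GB/s
```

Real throughput will be lower, and batching changes the picture entirely, but the estimate shows why the bandwidth figures in the table track so closely with generation speed.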
Reach out to our sales department to go over the terms and conditions before placing an order for GPU servers equipped with Nvidia Tesla H100 or A100 cards.
Our Services
GPU servers for data science
e-Commerce hosting
Finance and FinTech
Hosting for gambling projects in the Netherlands
Private cloud
Rendering, 3D Design and visualization
Managed colocation
GPU servers for Deep Learning
Wide range of pre-configured servers available for instant delivery
If you don't find the right configuration, you can always contact our Sales Department. Our managers will help you with your requirements. We are very flexible.
You can choose a suitable Data Center in the Netherlands and the USA
We take an individual approach with each client, which is reflected not only in our technological solutions but also in the choice of an appropriate Data Center. We offer TIER III category Data Centers, which allows us to provide the most flexible solutions for the needs of every client.
For business-critical applications, availability is paramount. In this case, you need a certified TIER III category data center at a minimum. For minor tasks, a TIER II or even TIER I Data Center will suffice.
A complete list of the Data Centers and their characteristics can be found here.
If availability is crucial to you, we recommend certified Data Centers such as EuNetworks.
If you would like to test the Internet speed to resources hosted on our servers, you can make use of Speedtest or Looking Glass without having to wait for the deployment of the test server.
You can request a test server if you would like to:
test the compatibility of the software and the chosen configuration of the dedicated server or VPS with GPU cards;
make sure that the chosen server configuration is capable of handling all the tasks at hand and that a server upgrade (RAM, HDD, etc.) won’t be necessary in the near future;
evaluate the performance of HOSTKEY’s servers in practice.
When providing a test server, HOSTKEY uses an individual approach to every client. On average, the trial period is 3 days.
If you would like to get further information regarding getting a test server and/or terms and conditions of the service, please contact our managers.
After the trial period is over, you can pay for the server and continue using it. If we haven’t received your payment within 3 days after the end of the trial period, the server will be terminated and all data deleted.
Note: HOSTKEY operates under the current legislation of the Russian Federation, the Netherlands and the United States of America.
Therefore, we ask you to use our servers without breaking the laws of the corresponding country.
Requirements for the use of servers in the corresponding country.
All our services are paid for in advance. We accept payments via credit card, PayPal, and P2P cryptocurrency payments from any wallet, application, or exchange through BitPay. We also accept WebMoney, Alipay, and wire transfers. Read more about our payment terms and methods.
We are very confident in our products and services. We provide fast, reliable and comprehensive service and believe that you will be completely satisfied.
You can ask for a test server for 3-4 days for free.
A refund is only possible in the event of an accident on our side that leaves your server offline for 24 hours or more.
Read more about refund procedure.
Customers whose servers come with unlimited bandwidth are committed to a fair usage policy.
That means that servers on the 1 Gbps port cannot use more than 70% of the allocated bandwidth for more than 3 hours a day.
Subscribe to our newsletter and email communications
You will be the first to receive useful tips and special offers