An open-source, node-based program for generating images from a series of text prompts.
ComfyUI pre-installed on servers in the Netherlands, Finland, Germany, Iceland, Turkey and the USA
Rent a virtual (VPS) or a dedicated server with pre-installed ComfyUI, a free and open-source program for generating images from a series of text prompts. Simply choose the desired plan, configure a server, and start working in just 15 minutes.
ComfyUI is provided only for leased HOSTKEY servers. To get ComfyUI, select it in the "Software" tab while ordering the server.
Rent a reliable VPS in the Netherlands, Finland, Germany, Iceland, Turkey and the USA.
Server delivery ETA: ≈15 minutes.
Rent a dedicated server with full out-of-band management in the Netherlands, Finland, Germany, Turkey and the USA.
Server delivery ETA: ≈15 minutes.
ComfyUI is an open source program that allows users to generate images from a series of text prompts for free. It is available under the GNU General Public License v3.0.
We guarantee that our servers are running safe and original software.
To install ComfyUI, you need to select it while ordering a server on the HOSTKEY website. Our auto-deployment system will install it on your server.
If you have any difficulties or questions when installing or using this software, carefully study the documentation on the developer's official website, read about common problems and their solutions, or contact ComfyUI support.
A ComfyUI server is designed to run the ComfyUI interface smoothly for Stable Diffusion. You can use it to generate AI images, automate tasks, and try out custom models easily.
It saves a lot of setup time and spares you compatibility problems. Since everything is already configured by HOSTKEY, you can start creating right away.
Yes. Since the project is open source, you can use Python and the existing APIs to add new models, build custom nodes, or create new features.
Absolutely. You get full root privileges, you can create private networks, and your data is never shared externally. HOSTKEY provides strong privacy and control over your assets.
Yes. ComfyUI supports multi-GPU execution via configuration flags and environment variables. By default it detects all available GPUs, and you can assign specific GPUs to certain workflows or divide batch rendering across cards. Our out-of-the-box server images come with the multi-GPU configuration enabled, so you can hit the ground running.
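As a minimal sketch of the environment-variable approach (the ports and working directory are illustrative, not HOSTKEY defaults), two independent ComfyUI instances can each be pinned to their own GPU:

```shell
# Run two ComfyUI instances, each restricted to one GPU via CUDA_VISIBLE_DEVICES.
# Ports and the working directory are example values.
cd ComfyUI
CUDA_VISIBLE_DEVICES=0 python main.py --listen 0.0.0.0 --port 8188 &
CUDA_VISIBLE_DEVICES=1 python main.py --listen 0.0.0.0 --port 8189 &
```

Each instance then sees only its assigned card, so separate workflows or batch jobs can render in parallel.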
For SDXL at 1024x1024, you will need at least 24 GB of VRAM to prevent out-of-memory errors. For more demanding renders (2K, 4K, and heavier nodes), 48 GB of VRAM or a multi-GPU ComfyUI setup is recommended. With multi-GPU parallelism, memory load is shared across cards.
Yes, but with limits. Technically it is possible to run a ComfyUI multi-GPU configuration with mixed GPUs (e.g. RTX 4090 + RTX A5000). However, parallel tasks will by default be limited by the slower card. For optimal stability and throughput we recommend using identical GPU models.
ComfyUI lets you assign nodes to particular devices. This is useful for workflows where some nodes are memory-heavy (e.g. the SDXL base model) and others are lightweight (e.g. upscalers). Pinning is applied through the node execution settings or launch arguments, which makes your multi-GPU ComfyUI pipeline more efficient.
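For whole-instance pinning via launch arguments, ComfyUI's `--cuda-device` flag restricts the process to a single device; finer node-level assignment depends on the nodes in your graph. A minimal sketch (the device id is an example):

```shell
# Pin this ComfyUI instance to CUDA device 1 (device id is illustrative):
python main.py --cuda-device 1
```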
There are a number of optimizations that can greatly decrease the time it takes to render:
Prebuilt ComfyUI multi-GPU configuration images come with these options ready to use.
Yes. ComfyUI pipelines can be run on AMD GPUs that are supported by ROCm. While NVIDIA CUDA remains the most optimized option, AMD's MI200/MI300 series are currently supported in a ComfyUI multi-GPU setup with ROCm enabled. Availability depends on the particular model and driver stack.
You can monitor performance with built-in utilities such as nvidia-smi (for NVIDIA) or rocm-smi (for AMD). For more in-depth monitoring, our GPU servers offer dashboards with real-time GPU utilization, VRAM usage, and temperature. This makes it easy to find bottlenecks in a multi-GPU ComfyUI pipeline and optimize accordingly.
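For example, the following commands poll per-GPU load, memory, and temperature from the command line (the 2-second refresh interval is an arbitrary choice):

```shell
# NVIDIA: CSV report of per-GPU utilization, memory and temperature, refreshed every 2 s.
nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total,temperature.gpu \
           --format=csv -l 2

# AMD equivalent via rocm-smi:
rocm-smi --showuse --showmemuse --showtemp
```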
Yes. You can install and run Automatic1111, InvokeAI, or other Stable Diffusion toolkits alongside ComfyUI. Many users run multi-GPU ComfyUI for production workflows and A1111 for UI-based experimentation on the same hardware. Dockerized setups or separate virtual environments make switching a breeze.
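As a minimal sketch of the virtual-environment route (the directory names are examples), each toolkit gets its own isolated Python environment so their dependency trees never clash:

```shell
# One venv per toolkit keeps dependencies isolated.
python -m venv ~/venvs/comfyui
source ~/venvs/comfyui/bin/activate
pip install -r ComfyUI/requirements.txt
deactivate

python -m venv ~/venvs/a1111
# ...activate it and install Automatic1111's requirements the same way.
```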
ComfyUI, when deployed on-premise, delivers a range of key features designed to provide a flexible, customizable, and efficient user interface for image generation workflows.
ComfyUI is not just a graphical interface: it is a modular, node-based builder for Stable Diffusion pipelines that lets users build, test, and run AI workflows precisely the way they want. This visual approach removes the barriers between imagination and technical execution, and every step of your pipeline becomes a flexible node.
When combined with ComfyUI multi-GPU support, this tool becomes an enterprise-level solution for creators, agencies, and research teams. GPUs are available for parallel execution, which means you can:
Work with multiple image graphs at once.
Both NVIDIA CUDA and AMD ROCm are supported, giving users freedom of hardware choice. With prebuilt images, rapid activation, and 24/7 technical support, our platform eliminates setup friction and lets you focus entirely on creativity.
The true power of multi-GPU ComfyUI lies in parallelism. Tasks are divided between multiple GPUs, which allows greater throughput and significantly faster render speeds. Complex workflows that would take hours on a single card can be completed in a fraction of the time with a ComfyUI multi-GPU configuration.
Many pipelines are memory-bound. With access to GPUs with up to 80 GB of VRAM, you can run demanding upscales, large diffusion graphs, and advanced inpainting without crashes. For multi-GPU ComfyUI, VRAM means stability.
Get started in minutes using pre-configured ComfyUI images. No need to worry about drivers, dependencies, or compatibility. Our templates ship with optimized CUDA/ROCm stacks and an already-tested ComfyUI multi-GPU setup.
Fast I/O is a prerequisite for efficient workflows. That's why each GPU server comes with NVMe SSDs for model storage, embeddings, and caching. This provides seamless switching between models, checkpoints, and LoRAs without delays.
Deploy servers on several continents. Whether your team is in Europe, North America, or Asia, you'll benefit from lower latency and faster collaboration.
Our infrastructure includes DDoS protection, firewalls, live monitoring, and automated alerts. Your ComfyUI multi-GPU configuration stays safe, whether for individual projects or enterprise workloads.
Perfect for solo creators who want rapid prototyping and iteration. These GPUs perform extremely well with SD/SDXL models, offering 24 GB of VRAM and strong CUDA optimization.
Professional-grade GPUs with large pools of VRAM (24–48 GB). These are perfect for a ComfyUI multi-GPU setup where bigger batches and complex workflows require more headroom.
Datacenter-class cards are built for intensive workloads. With up to 80 GB of VRAM and multi-user queue capabilities, they power studio pipelines, high-volume render farms, and AI research labs.
For users who prefer AMD's ecosystem, these GPUs bring ROCm compatibility to ComfyUI. Where supported, they deliver good performance and offer a CUDA alternative for multi-GPU ComfyUI users.
Launch in minutes with all the required software already installed. The best choice for beginners and teams who value speed over manual tweaking.
Experienced users can install ComfyUI manually to fine-tune environments, driver stacks, and dependencies. This option gives you full control over every detail.
Containerization guarantees isolated environments, easy version management, and reproducibility. Run a ComfyUI multi-GPU setup inside Docker with GPU passthrough for easy portability.
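A minimal sketch, assuming the NVIDIA Container Toolkit is installed on the host (the image name and mount paths are placeholders, not a HOSTKEY image):

```shell
# Expose all GPUs to the container and mount a host directory for models.
docker run --gpus all \
  -p 8188:8188 \
  -v "$(pwd)/models:/app/models" \
  comfyui-image:latest
```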
From entry-level RTX cards to high-end datacenter GPUs such as H100, we offer the flexibility to fit all needs and budgets.
Get your ComfyUI multi-GPU configuration up and running in minutes, not hours.
Options for hourly billing, monthly rental or long-term reserved servers. Pay only for what you use.
Presence in Europe, USA and Asia ensures low latency all over the globe.
Our engineers are available 24/7 to help with ComfyUI multi-GPU setup, scaling, or troubleshooting.
GPUs speed up diffusion steps, upscaling, and complex graph nodes, keeping workflows responsive and near real-time.
CPU-only rendering can take hours per image. With GPU acceleration, the same workflow runs in seconds.
With a ComfyUI multi-GPU setup, you can assign different GPUs to different tasks, running graphs or massive batch rendering in parallel.
8 GB for base models (SD 1.5).
24 GB of VRAM or more is advised for a ComfyUI multi-GPU configuration with complex nodes.
High-core CPUs and 64–128 GB RAM ensure smooth multitasking. Fast NVMe storage speeds up caching and model switching.
Quickly start a ComfyUI ready GPU server.
Install CUDA/ROCm, drivers and ComfyUI manually for total customization.
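A minimal manual-install sketch, assuming GPU drivers and a matching PyTorch build are already in place (the bind address and port are examples):

```shell
# Fetch ComfyUI, install its Python dependencies, and start the server.
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt
python main.py --listen 0.0.0.0 --port 8188
```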
Self-contained, isolated, and repeatable production environments.
Scale to a multi-GPU ComfyUI environment with more cards or servers.
Higher VRAM = stability and larger batches.
Single GPU for individuals. Multi-GPU ComfyUI for teams and studios.
Choose hourly for tests, monthly for projects, or long-term for enterprise workloads.