We tested the Radeon AI PRO R9700 with 32GB of memory in real-world tasks such as LLM inference, graphics and video generation, and 3D rendering, and compared it to NVIDIA’s products. The results were inconclusive.
I write a lot, frequently and on a wide range of topics. My interests range from contributing patches to open-source software and deploying MkDocs to pushing GPUs with neural networks and working with robotics. I’m also the creator of a robotics kit and programming-themed board games.
Testing the RTX PRO 2000 (70W power consumption, 16GB GDDR7 memory) with Ollama, ComfyUI, and Blender. We checked what this card is actually capable of and whether it’s worth the price.
04.12.2025
How did we create our LLM benchmark for GPU servers using Ollama? We developed a script, tested it with DeepSeek R1, and configured the necessary contexts. We identified some patterns, compared the performance of different GPUs, and published everything on GitHub.
24.09.2025
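The core of a benchmark like this is deriving a tokens-per-second figure from each run. A minimal sketch, assuming Ollama’s non-streaming `/api/generate` response, which reports `eval_count` (generated tokens) and `eval_duration` (in nanoseconds); the sample values below are illustrative, not measurements from the article:

```python
def tokens_per_second(response: dict) -> float:
    """Derive generation speed from an Ollama /api/generate response."""
    # eval_duration is reported in nanoseconds, so scale back to seconds.
    return response["eval_count"] / response["eval_duration"] * 1e9

# Illustrative response fragment: 418 tokens generated in 2.09 seconds.
sample = {"eval_count": 418, "eval_duration": 2_090_000_000}
print(round(tokens_per_second(sample), 1))  # → 200.0
```

Averaging this figure over several prompts and context sizes gives a per-GPU number that is easy to compare across machines.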
If you’re in the market for a replacement for Google Meet—just like we were—we’ve got options for you: Zoom, NextCloud, or self-hosted solutions. After thorough testing, we decided on Jitsi Meet on a VPS and have put it to use in real-world scenarios. We’d love to share our insights and any potential pitfalls you should be aware of.
Want to automate your daily tasks — without touching a single line of code? In just 15 minutes, you’ll have a fully functional Telegram bot powered by n8n. And that’s just the beginning.
Which server is best for running a blockchain validator? Learn the optimal CPU, RAM, and storage specs for high-performance blockchain validation.
Dual RTX 5090 server: Scaling performance or multiplying problems in AI tasks?
Ready to accelerate your localization process? Use Ollama and the OpenWebUI API from the command line, write a bash script, tune your prompts, and streamline your workflow!
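The idea can be sketched with Ollama’s local `/api/generate` endpoint. This is a hypothetical minimal example, not the article’s actual script: the model name, prompt template, and `translate` helper are placeholder assumptions.

```python
import json
import urllib.request

# Default Ollama endpoint; adjust host/port for your setup.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(text: str, target_lang: str, model: str = "llama3") -> dict:
    """Assemble a non-streaming translation request payload for Ollama.

    The prompt wording and default model are illustrative placeholders.
    """
    return {
        "model": model,
        "prompt": (
            f"Translate the following UI string to {target_lang}. "
            f"Reply with the translation only:\n{text}"
        ),
        "stream": False,
    }

def translate(text: str, target_lang: str) -> str:
    """Send one string to a running Ollama instance and return the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(text, target_lang)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Wrapped in a loop over your string files, `translate("Save changes", "German")` turns the same pattern into a batch localization pipeline; tightening the prompt is where most of the tuning effort goes.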
OpenWebUI 0.5.x takes your interaction with language models to the next level! Explore asynchronous chats, Code Interpreter capabilities, image generation directly from prompts, Google Drive integration, flexible user permission settings, and a multitude of other improvements.