
21.02.2025

What's New in OpenWebUI Versions 0.5.x

HOSTKEY

Back on December 25th, OpenWebUI was upgraded to version 0.5.0, and one of the best interfaces for working with models in Ollama began a new chapter. Let's take a look at what has emerged in the 1.5 months since that release and what it now offers in version 0.5.12.

  • Asynchronous Chats with Notifications. You can now start a chat, switch to other chats to look something up, and return without missing anything, which wasn't possible before. Model processing happens asynchronously, and you'll receive a notification when the output is complete.

  • Offline Swagger Documentation for OpenWebUI. You no longer need an internet connection to access the OpenWebUI API documentation. Remember: when launching the OpenWebUI Docker image, you need to pass the environment variable ENV='dev' (e.g. -e ENV='dev' in the docker run command); otherwise it starts in prod mode, with no access to the API documentation.
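As a sketch, assuming the project's standard ghcr.io/open-webui/open-webui:main image and its usual port and volume mapping, a dev-mode launch might look like this:

```shell
# Run OpenWebUI in dev mode so the interactive API docs are served locally
docker run -d \
  -p 3000:8080 \
  -e ENV='dev' \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
# The Swagger UI should then be reachable at http://localhost:3000/docs
```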

  • Support for Kokoro-JS TTS. Voices are currently available only for American and British English, but the model runs directly in your browser with good voice quality. We're looking forward to voices for other languages!


  • Code Interpreter Mode Added. This feature lets models execute code through Pyodide or Jupyter, improving output results for tasks like calculations and data processing. Enable it under Settings - Admin Settings - Code Interpreter. Jupyter access is provided through an external server.
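If you want the Jupyter backend rather than Pyodide, one way to stand up an external server is the official Jupyter Docker image; the token value here is a placeholder you should change:

```shell
# Start a Jupyter server that OpenWebUI can connect to
docker run -d -p 8888:8888 \
  -e JUPYTER_TOKEN='change-me' \
  --name jupyter \
  jupyter/base-notebook
# Then enter the server URL (http://<host>:8888) and the token
# under Settings - Admin Settings - Code Interpreter
```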

  • Support for "Thinking" Models with Thought Output. You can now use models like DeepSeek-R1 and see how they interpret prompts by displaying their "thoughts" in separate tabs.
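Assuming you serve models through Ollama, pulling a reasoning model is a one-liner (the 14b tag is just one of the sizes published in the Ollama library):

```shell
# Pull a "thinking" model from the Ollama library
ollama pull deepseek-r1:14b
# Once it appears in the model list, its reasoning is shown
# separately from the final answer in the chat
```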

  • Direct Image Generation from Prompts. With a connected service like ComfyUI or Automatic1111, you can generate images directly from your input prompt. Simply toggle the Image button under your prompt field.
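For Automatic1111, the web UI must be started with its API enabled before OpenWebUI can use it; a minimal sketch, assuming the default port:

```shell
# Start Stable Diffusion WebUI with the API enabled
# (--listen exposes it beyond localhost)
./webui.sh --api --listen
# Then set the AUTOMATIC1111 base URL (http://<host>:7860) under
# Settings - Admin Settings - Images and enable image generation
```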

  • Document Uploading from Google Drive. While you can now upload documents directly from your Google Drive, there's no straightforward way to authorize access through the menu. You'll need to set up a Google Cloud project and OAuth client, obtain API keys, and pass the resulting credentials to the OpenWebUI instance as variables. The same applies to accessing S3 storage. We hope a more user-friendly solution arrives soon.
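A sketch of that launch-time configuration, assuming the Google-Drive-related variable names from OpenWebUI's environment configuration (verify the exact names against your version's docs, and treat all values as placeholders):

```shell
# Placeholder credentials; obtain the client ID and API key
# from your Google Cloud project before launching
docker run -d -p 3000:8080 \
  -e ENABLE_GOOGLE_DRIVE_INTEGRATION='true' \
  -e GOOGLE_DRIVE_CLIENT_ID='your-oauth-client-id' \
  -e GOOGLE_DRIVE_API_KEY='your-api-key' \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```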

  • Persistent Web Search. You can now enable web search permanently to get relevant results, similar to ChatGPT. Find this option in Settings - Interface, under "Allows users to enable Web Search by default".

  • Redesigned Model Management Menu. This new menu lets you include and exclude models and fine-tune their settings. If you're missing the Delete Models option, it's now hidden under a small download icon labeled Manage Models in the top right corner of the section. Clicking on it will reveal the familiar window for adding and deleting models in Ollama.

  • Flexible Model and User Permissions. You can now create user groups and assign them access to specific models and OpenWebUI functions. This allows you to control actions within both Workspaces and chats, similar to workspace permissions.

  • New Chat Actions Menu. A new menu with additional chat functions is accessible by clicking the three dots in the top right corner. It allows you to share your chat and collaborate on it. You can also view a chat overview, see real-time HTML and SVG generation output (Artifacts section), download the entire chat as JSON, TXT, or PDF, copy it to the clipboard, or add tags for later search.

  • LDAP Authentication. For organizations using OpenWebUI, you can now connect it to your authentication server by specifying email and username attributes. However, manual user group allocation is still required.

  • Channels. These are chat rooms within OpenWebUI allowing users to communicate with each other. After creation, they become visible to all users or specific user groups defined by you. To enable this feature, go to Settings - Admin Settings - General.

And Many More Improvements! These include OAuth support, model-driven tool and function execution, minor UI tweaks, API enhancements, TTS support via Microsoft solutions or models like MCU-Arctic, and more. Stay on the cutting edge by watching for new OpenWebUI release notifications and updating regularly. That said, we recommend waiting a few days after a major update, since several minor fixes are usually released within 2-3 days.
