
Ollama Installation


Introduction to Ollama

Ollama is a framework for running and managing large language models (LLMs) on local computing resources. It enables the loading and deployment of selected LLMs and provides access to them through an API.
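Once a model is installed (covered below), the API can be queried with a plain HTTP request. The sketch below assumes the service is running on the default port 11434 and that the llama3 model has already been downloaded:

```shell
# Ask a locally installed model a question through the Ollama REST API
# (assumes the service listens on the default port 11434 and that
# the llama3 model has already been pulled).
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

With "stream": false the server returns one JSON object containing the full response instead of a stream of partial tokens.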


If you plan to use GPU acceleration for working with LLMs, install the NVIDIA drivers and the CUDA toolkit first.
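A quick way to confirm the driver is in place is to check for nvidia-smi, which is installed together with the NVIDIA driver; this sketch falls back to a plain message on CPU-only machines:

```shell
# Check whether an NVIDIA driver is installed before relying on
# GPU acceleration; nvidia-smi ships with the driver itself.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi            # prints GPU model, driver and CUDA versions
else
    echo "No NVIDIA driver detected; Ollama will run on the CPU."
fi
```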

System Requirements:

Operating System: Linux, Ubuntu 22.04 or later
RAM: 16 GB for running models up to 7B
Disk Space: 12 GB for installing Ollama and the basic models; additional space is needed for model data, depending on the models used
Processor: a modern CPU with at least 4 cores; for running models up to 13B, at least 8 cores is recommended
Graphics Processing Unit (optional): a GPU is not required for running Ollama, but it can improve performance, especially when working with large models


The system requirements may vary depending on the specific LLMs and tasks you plan to perform.

Installing Ollama on Linux

  1. Download and install Ollama:

    sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
    sudo chmod +x /usr/bin/ollama
  2. Create a user and group for the service:

    sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
  3. Create the Ollama service:

    sudo tee /usr/lib/systemd/system/ollama.service > /dev/null <<EOF
    [Unit]
    Description=Ollama Service

    [Service]
    ExecStart=/usr/bin/ollama serve
    User=ollama
    Group=ollama

    [Install]
    WantedBy=default.target
    EOF
  4. Enable and start the service:

    sudo systemctl daemon-reload
    sudo systemctl enable ollama
    sudo systemctl start ollama

Ollama will be accessible at http://localhost:11434 or http://<your_server_IP>:11434.

Updating Ollama on Linux

To update Ollama, you will need to re-download and install its binary package:

sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama

For ease of future updates, you can create a script (run as root or with sudo):

sudo systemctl stop ollama
sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
sudo systemctl start ollama

Installing Language Models (LLMs)

You can find the list of available language models in the Ollama model library.

To install a model, click on its name and then select the size and type of the model on the next page. Copy the installation command from the right-hand window and run it in your terminal/command line:

ollama run llama3


Recommended models are marked with the latest tag.


To ensure acceptable performance, the model should be at most half the size of the server's RAM and at most two-thirds of the available video memory on the GPU. For example, an 8 GB model requires at least 16 GB of RAM and 12 GB of video memory.
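The rule of thumb above can be checked with simple shell arithmetic; the 8 GB figure here is just the example from the text:

```shell
# Sizing rule of thumb: a model needs at least 2x its size in RAM
# and 1.5x its size in GPU memory (the inverse of the two-thirds rule).
model_gb=8
echo "Model size:  ${model_gb} GB"
echo "RAM needed:  $(( model_gb * 2 )) GB"      # prints 16 GB
echo "VRAM needed: $(( model_gb * 3 / 2 )) GB"  # prints 12 GB
```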

After downloading the model, restart the service:

sudo systemctl restart ollama

For more information about Ollama, you can read the developer documentation.