How-to install Ollama on Ubuntu with vGPU on the ITS Private Cloud

Run large language models (LLMs) with ollama on Ubuntu Linux, using vGPU resources provided by the U of T ITS Private Cloud. This guide uses VSS-CLI commands throughout.

We have identified a bug in NVIDIA driver version 525_525.147.05 when used with Ubuntu kernel 5.15.0-105-generic or newer. To ensure this tutorial works smoothly, we recommend staying on kernel version 5.15.0-102-generic. For further details, refer to the following link: Bug Report

ollama is an LLM serving platform written in Go. It makes Llama-family and other open models easy to run and exposes them through an API.

Steps

Virtual Machine Deployment

1. Download the VM specification file ubuntu-llm-ollama.yaml and update the following attributes (a sketch of the relevant sections follows this list):

    1. machine.folder: target logical folder. List available folders with vss-cli compute folder ls

    2. metadata.client: your department client name.

    3. metadata.inform: email address for automated notifications.

  2. Deploy your file as follows:

    vss-cli --wait compute vm mk from-file ubuntu-llm-ollama.yaml
3. Add a 16GB virtual GPU, specifically the 16q profile. For more information on the profile used, check the following document: How-to Request a Virtual GPU

    vss-cli compute vm set <VM_ID> gpu mk --profile 16q
  4. Once the VM has been deployed, a confirmation email will be sent with the assigned IP address and credentials.

5. Power on the virtual machine:

    vss-cli compute vm set ubuntu-llm state on
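A sketch of the relevant sections of ubuntu-llm-ollama.yaml from step 1 (the folder ID, client name, and email address are placeholders, and the full file carries additional attributes):

    machine:
      folder: <FOLDER_ID>
    metadata:
      client: <DEPARTMENT_CLIENT>
      inform: <EMAIL_ADDRESS>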

NVIDIA Driver and Licensing

The currently supported driver is nvidia-linux-grid-535_535.183.01_amd64.deb.

1. Log in to the server via SSH. Note that the username may differ if you further customized the VM with cloud-init:
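    For example (the username and address are placeholders; use the credentials from the confirmation email):

    ssh <username>@<VM_IP_ADDRESS>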

  2. Download the NVIDIA drivers from VKSEY-STOR:
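    A sketch using scp (the username and remote path on VKSEY-STOR are placeholders):

    scp <username>@VKSEY-STOR:<path>/nvidia-linux-grid-535_535.183.01_amd64.deb .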

3. Install the drivers as a privileged user:
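    For example, letting apt resolve the package's dependencies:

    sudo apt install ./nvidia-linux-grid-535_535.183.01_amd64.deb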

4. Create the ClientConfigToken directory:
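    The NVIDIA licensing client looks for tokens under /etc/nvidia/ClientConfigToken:

    sudo mkdir -p /etc/nvidia/ClientConfigToken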

  5. Create the NVIDIA token file:
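    A sketch, assuming you have been issued a client configuration token (the file name is a placeholder; NVIDIA names tokens client_configuration_token_<date>.tok):

    sudo cp client_configuration_token_<date>.tok /etc/nvidia/ClientConfigToken/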

6. Set permissions on the NVIDIA token:
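    NVIDIA's licensing guide makes the token readable (file name is the placeholder from the previous step):

    sudo chmod 744 /etc/nvidia/ClientConfigToken/client_configuration_token_<date>.tok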

  7. Set the FeatureType to 2 for “NVIDIA RTX Virtual Workstation” in /etc/nvidia/gridd.conf with the following command:
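    A sketch using sed, assuming the default FeatureType line is already present in the file:

    sudo sed -i 's/^FeatureType=.*/FeatureType=2/' /etc/nvidia/gridd.conf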

8. Restart the nvidia-gridd service to pick up the new license token:
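    Using systemd:

    sudo systemctl restart nvidia-gridd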

9. Check the service log for errors or successful license activation:
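    For example, by filtering the service log:

    sudo journalctl -u nvidia-gridd --no-pager | grep -iE 'error|license'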

    output: look for a message indicating the license was acquired; failures show Error entries.

  10. Verify GPU status with nvidia-smi:
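    nvidia-smi ships with the driver; the GPU and its vGPU profile should be listed:

    nvidia-smi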

11. You can also monitor GPU usage in the console with nvtop:
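    nvtop is available from the Ubuntu repositories:

    sudo apt install -y nvtop
    nvtop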



Install the Ollama service

Prerequisites

1. Download the Anaconda package:
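    A sketch pulling the Linux installer from the Anaconda archive (the release file name is an assumption; pick the current one from the archive listing):

    curl -O https://repo.anaconda.com/archive/Anaconda3-2023.09-0-Linux-x86_64.sh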

2. Install the Anaconda package:
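    Run the installer and follow the prompts, then open a new shell so conda is on your PATH (file name matches the download sketch above):

    bash Anaconda3-2023.09-0-Linux-x86_64.sh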



Install and configure Ollama service

  1. Download & install ollama

  2. For testing purposes, open a terminal and start the ollama server:
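    If the installer did not already start ollama as a service, run it in the foreground:

    ollama serve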

    1. In a second terminal, run the following command:
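      This pulls the model on first use; llama2 is the model used later in this guide:

      ollama run llama2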

      1. You can now test by asking questions:

    2. When finished, cancel (Ctrl+C) the server terminal.

  3. You can exit the model prompt by typing Ctrl+D.

Install & configure Ollama Web UI

Prerequisites

1. Download and install nvm:
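    nvm's documented install script (the version tag is an assumption; check the nvm repository for the latest release):

    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash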

  2. Load the environment or execute the command below:
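    The installer appends this snippet to your shell profile; running it loads nvm into the current shell:

    export NVM_DIR="$HOME/.nvm"
    [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"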

3. Install nodejs:
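    For example, the current long-term-support release:

    nvm install --lts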

4. Install python 3:
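    python3 is usually preinstalled on Ubuntu; otherwise:

    sudo apt install -y python3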


Install and Configure Ollama Web UI

 

  1. Download and install ollama-webui:
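    A sketch, cloning the project's GitHub repository (the project has since been renamed open-webui; the old URL redirects):

    git clone https://github.com/ollama-webui/ollama-webui.git
    cd ollama-webui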

2. Create the ollama-webui environment file .env:
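    Assuming the repository ships an example environment file, copy it as a starting point (file name taken from the project README at the time of writing):

    cp -RPp .env.example .env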

3. Install libraries and build the ollama-webui project:
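    Using the node toolchain installed above:

    npm install
    npm run build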

  4. Install python3-pip and python3-venv:
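    From the Ubuntu repositories:

    sudo apt install -y python3-pip python3-venv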

5. Create a virtual environment to isolate dependencies:
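    A sketch, creating the environment inside the repository's backend directory (path assumes the checkout above):

    cd backend
    python3 -m venv venv
    source venv/bin/activate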

6. Install libraries and run the ollama-webui backend:
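    Assuming the backend's requirements file and start script from the project README:

    pip install -r requirements.txt
    bash start.sh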

  7. Test the UI by opening a browser to your server's IP address on port 8080:
    http://XXX.XXX.XXX.XXX:8080

  8. Select a model (e.g., llama2) and send it a message.

Troubleshoot

1. How do I verify that the NVIDIA license has been installed properly?
    Solution: run the following command as root:
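    A sketch using nvidia-smi's full query output, which includes the vGPU licensing section:

    nvidia-smi -q | grep -i -A 2 'license'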

    output: the vGPU licensing section should report the License Status as Licensed.

 

Reference links

VSS Cloud documentation

Anaconda

nvidia-smi commands

Ollama

Ollama-webui

nvtop

Linux

 
