How to install Ollama on Ubuntu with a vGPU on the ITS Private Cloud
Run large language models (LLMs) with ollama on Ubuntu Linux, using vGPU resources on the U of T ITS Private Cloud. This guide uses VSS-CLI commands throughout.
We've pinpointed a bug in the NVIDIA driver version 525_525.147.05 when used alongside Ubuntu kernel 5.15.0-105-generic or newer. To ensure smooth operation of this tutorial, we recommend sticking to kernel version 5.15.0-102-generic. For further details, refer to the following link: Bug Report
ollama is an LLM serving platform written in Go. It makes LLMs built on Llama standards easy to run with an API.
Steps
Virtual Machine Deployment
Download the ubuntu-llm-ollama.yaml deployment file and update the following attributes (a sketch of the file follows the list):
machine.folder: target logical folder. List available folders with vss-cli compute folder ls.
metadata.client: your department client.
metadata.inform: email address for automated notifications.
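A hypothetical excerpt of ubuntu-llm-ollama.yaml covering only these attributes; the dotted names suggest this nesting, but consult the VSS-CLI documentation for the authoritative from-file schema:
machine:
  folder: group-folder-id        # target logical folder, from: vss-cli compute folder ls
metadata:
  client: DEPT-CLIENT            # your department client
  inform: admin@utoronto.ca      # address for automated notifications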
Deploy your file as follows:
vss-cli --wait compute vm mk from-file ubuntu-llm-ollama.yaml
Add a 16GB virtual GPU, specifically the 16q profile. For more information on the profile used, check the following document: How-to Request a Virtual GPU
vss-cli compute vm set <VM_ID> gpu mk --profile 16q
Once the VM has been deployed, a confirmation email will be sent with the assigned IP address and credentials.
Power on the virtual machine:
vss-cli compute vm set ubuntu-llm state on
NVIDIA Driver and Licensing
The currently supported driver is nvidia-linux-grid-535_535.183.01_amd64.deb.
Log in to the server via ssh. Note that the username may differ if you further customized the VM with cloud-init.
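For example, assuming the default ubuntu account (the actual username is in the confirmation email and may have been changed via cloud-init):
ssh ubuntu@XXX.XXX.XXX.XXX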
Download the NVIDIA drivers from VKSEY-STOR:
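A sketch using scp; the VKSEY-STOR hostname and path are placeholders for the values provided with your account:
scp <USER>@<VKSEY-STOR>:/<PATH>/nvidia-linux-grid-535_535.183.01_amd64.deb .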
Install the drivers as a privileged user:
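For example, with dpkg, using the driver package named above:
sudo dpkg -i nvidia-linux-grid-535_535.183.01_amd64.deb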
Create the ClientConfigToken directory:
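/etc/nvidia/ClientConfigToken is the location the nvidia-gridd service scans for client configuration tokens:
sudo mkdir -p /etc/nvidia/ClientConfigToken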
Create the NVIDIA token file:
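A sketch assuming you have obtained a client configuration token from the licensing portal (the filename is illustrative):
sudo cp client_configuration_token_05-30-2024.tok /etc/nvidia/ClientConfigToken/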
Set permissions to the NVIDIA token:
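NVIDIA's vGPU licensing guide suggests making the token readable by all users:
sudo chmod 744 /etc/nvidia/ClientConfigToken/client_configuration_token_*.tok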
Set the FeatureType to 2 for “NVIDIA RTX Virtual Workstation” in /etc/nvidia/gridd.conf with the following command:
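A minimal sketch using sed, assuming gridd.conf already contains a FeatureType line (possibly commented out):
sudo sed -i 's/^#\?FeatureType=.*/FeatureType=2/' /etc/nvidia/gridd.conf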
Restart the nvidia-gridd service to pick up the new license token, then check the log for any error or a successful activation:
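A sketch of the restart and log check; the grep pattern is illustrative, and on success the log should report that a license was acquired from the license server:
sudo systemctl restart nvidia-gridd
sudo journalctl -u nvidia-gridd --no-pager | grep -Ei 'license|error'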
Verify GPU status with nvidia-smi. You can also monitor GPU usage in the console with nvtop:
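For example; nvtop is available from the Ubuntu repositories:
nvidia-smi
sudo apt install -y nvtop
nvtop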
Install the Ollama service
Prerequisites
Download the Anaconda package.
Install the Anaconda package.
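A sketch of both steps; the installer version is illustrative, so pick the latest from https://repo.anaconda.com/archive/:
curl -LO https://repo.anaconda.com/archive/Anaconda3-2024.02-1-Linux-x86_64.sh
bash Anaconda3-2024.02-1-Linux-x86_64.sh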
Install and configure Ollama service
Download and install ollama:
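Ollama's published install script:
curl -fsSL https://ollama.com/install.sh | sh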
For testing purposes, open a terminal and start the server:
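Start the server in the foreground (if the installer already registered a systemd service, this may report that the port is in use):
ollama serve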
In a second terminal, run the following command:
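This pulls the llama2 model on first use and opens an interactive prompt:
ollama run llama2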
You can now test by asking questions:
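For example (the prompt is illustrative):
>>> Why is the sky blue?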
You can exit the model by typing control + d, then close the terminal running the server.
Install & configure Ollama Web UI
Prerequisites
Download and install nvm:
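The nvm project's documented install script; the version tag is illustrative, so check the nvm README for the current one:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash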
Load the environment or execute the command below:
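The standard loader snippet from the nvm README:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"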
Install nodejs:
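For example, the current LTS release (the exact version ollama-webui requires may differ; check its README):
nvm install --lts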
Install Python 3:
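For example (Ubuntu images usually ship python3 already, in which case this is a no-op):
sudo apt install -y python3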
Install and Configure Ollama Web UI
Download and install ollama-webui:
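A sketch assuming the project's GitHub repository; the project has since been renamed open-webui, so adjust the URL if the clone fails:
git clone https://github.com/ollama-webui/ollama-webui.git
cd ollama-webui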
Create the ollama-webui environment file .env:
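Assuming the repository ships an example environment file to copy from:
cp -RPp .env.example .env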
Install libraries and build the ollama-webui project
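The standard Node build steps for the project:
npm install
npm run build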
Install python3-pip and python3-venv:
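For example:
sudo apt install -y python3-pip python3-venv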
Create a virtual environment to isolate dependencies:
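A sketch assuming the backend code lives in the repository's backend directory:
cd backend
python3 -m venv venv
source venv/bin/activate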
Install the libraries and run the ollama-webui backend:
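Assuming the repository's requirements file and start script:
pip install -r requirements.txt
bash start.sh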
Test the UI application by opening a browser at your server's IP address on port 8080:
http://XXX.XXX.XXX.XXX:8080
Select the llama2 model and ask a question.
Troubleshooting
How do I verify that the NVIDIA license has been installed properly?
Solution: run the following command as root and check the license status in the output.
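A sketch using nvidia-smi's query mode; on a licensed system the output includes a License Status line similar to the comment below:
nvidia-smi -q | grep -i license
# License Status : Licensed (Expiry: ...)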
Reference links
VSS Cloud documentation
Anaconda
nvidia-smi commands
Ollama
Ollama-webui
nvtop
Linux