How to deploy PrivateGPT on Ubuntu with a vGPU on the ITS Private Cloud

In December 2023, we announced the launch of virtual GPU capabilities on the ITS Private Cloud, as detailed in our blog post. Now we are working on practical examples that harness the power, affordability, security, and privacy of the ITS Private Cloud to run Large Language Models (LLMs).

This how-to covers deploying an Ubuntu virtual machine with a 16GB vGPU using the vss-cli to host PrivateGPT, an open-source Artificial Intelligence project that lets you ask questions about your documents using the power of LLMs, without data leaving the runtime environment.

 

Virtual Machine Deployment

  1. Download the vss-cli configuration spec (ubuntu-llm-privategpt.yaml) and update the following attributes:

    1. machine.folder: target logical folder. List available folders with vss-cli compute folder ls

    2. metadata.client: your department client.

    3. metadata.inform: the email address for automated notifications.

  2. Deploy with the following command:

    vss-cli --wait compute vm mk from-file ubuntu-llm-privategpt.yaml
  3. Add a 16GB virtual GPU, specifically the 16q profile. For more information on the profile used, check the following document

    vss-cli compute vm set ubuntu-llm gpu mk --profile 16q
  4. Once the VM has been deployed, a confirmation email will be sent with the assigned IP address and credentials.

  5. Power on virtual machine

    vss-cli compute vm set ubuntu-llm state on
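The deployment steps above can be consolidated into a single script; the VM name (ubuntu-llm) and spec filename come from the commands in this guide:

```shell
# List available folders to fill in machine.folder in the spec
vss-cli compute folder ls

# Deploy the VM from the edited configuration spec and wait for completion
vss-cli --wait compute vm mk from-file ubuntu-llm-privategpt.yaml

# Attach a 16GB vGPU using the 16q profile
vss-cli compute vm set ubuntu-llm gpu mk --profile 16q

# Power on the virtual machine
vss-cli compute vm set ubuntu-llm state on
```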

NVIDIA Driver and Licensing

  1. Log in to the server via ssh. Note that the username may differ if you further customized the VM with cloud-init

  2. Download the NVIDIA drivers from VKSEY-STOR:

  3. Install the drivers as privileged user:

  4. Create the NVIDIA token file:

  5. Set permissions to the NVIDIA token:

  6. Set the FeatureType to 2 for NVIDIA RTX Virtual Workstation in /etc/nvidia/gridd.conf with the following command:

  7. Restart nvidia-gridd service to pick up the new license token:

  8. Check the output for any errors or a successful license activation:

    output:

  9. Verify GPU status with nvidia-smi:

  10. You can also monitor GPU usage in the console with nvtop:
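The licensing steps above can be sketched as follows. The driver installer and token filenames are examples based on typical NVIDIA vGPU guest installs; substitute the actual files downloaded from VKSEY-STOR:

```shell
# Install the NVIDIA vGPU guest driver (filename is an example; use the
# installer downloaded from VKSEY-STOR)
sudo sh ./NVIDIA-Linux-x86_64-grid.run

# Place the client configuration token where nvidia-gridd expects it
# (token filename is an example) and set its permissions
sudo mkdir -p /etc/nvidia/ClientConfigToken
sudo cp client_configuration_token.tok /etc/nvidia/ClientConfigToken/
sudo chmod 744 /etc/nvidia/ClientConfigToken/client_configuration_token.tok

# FeatureType=2 selects NVIDIA RTX Virtual Workstation licensing
sudo sed -i 's/^FeatureType=.*/FeatureType=2/' /etc/nvidia/gridd.conf

# Restart the licensing daemon so it picks up the new token
sudo systemctl restart nvidia-gridd

# Check for errors or a successful license acquisition, then verify the GPU
sudo journalctl -u nvidia-gridd | grep -iE 'license|error'
nvidia-smi
```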

Install PrivateGPT

Dependencies

  1. Log in to the server via ssh. Note that the username may differ if you further customized the VM with cloud-init

  2. Install OS dependencies:

  3. Install Python 3.11, either from source or via ppa:deadsnakes/ppa:

  4. Install the NVIDIA CUDA Toolkit, which is needed to recompile llama-cpp-python later.
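A minimal sketch of the dependency installation, assuming the deadsnakes PPA route and the Ubuntu-packaged CUDA toolkit (the original guide may install CUDA from NVIDIA's own repository instead):

```shell
# Build tools and git, needed to clone and compile dependencies
sudo apt update
sudo apt install -y build-essential git

# Python 3.11 from the deadsnakes PPA
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
sudo apt install -y python3.11 python3.11-venv python3.11-dev

# CUDA toolkit (provides nvcc), needed to recompile llama-cpp-python
sudo apt install -y nvidia-cuda-toolkit
nvcc --version
```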

Install PrivateGPT

  1. Clone the source repository:

  2. Create and activate virtual environment:

  3. Install poetry to get all python dependencies installed:

  4. Update pip and poetry, then install the PrivateGPT dependencies:

  5. Install llama-cpp-python
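The installation steps above can be sketched as follows. The repository URL and the poetry extras (ui, local) reflect the upstream PrivateGPT project as of early 2024 and may have changed since:

```shell
# Clone the PrivateGPT source (upstream URL as of early 2024)
git clone https://github.com/imartinez/privateGPT
cd privateGPT

# Create and activate a Python 3.11 virtual environment
python3.11 -m venv .venv
source .venv/bin/activate

# Update pip and install poetry to manage the Python dependencies
pip install --upgrade pip poetry

# Install PrivateGPT with the web UI and local-model support
poetry install --with ui,local

# Download the default local models (embeddings and LLM)
poetry run python scripts/setup
```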

Enable GPU support

  1. Export the following environment variables:

  2. Reinstall llama-cpp-python:
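A sketch of the GPU-enabled rebuild, using the cuBLAS CMake flag that llama-cpp-python accepted at the time this guide was written (newer releases have renamed the flag):

```shell
# Force a from-source rebuild of llama-cpp-python with CUDA (cuBLAS) support
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```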

 

Run PrivateGPT

  1. Run python3.11 -m private_gpt to start:

  2. Open a web browser to the assigned IP address on port 8001: http://XXX.XXX.XXX.XXX:8001

  3. Upload a few documents and start asking questions:

    (Screen recording: CleanShot 2024-02-01 at 12.44.24.mp4)
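Besides the web UI, PrivateGPT also exposes an HTTP API. As a hypothetical quick check (endpoint path per the PrivateGPT API documentation of that release; substitute your VM's assigned IP address):

```shell
# Ask the PrivateGPT completions API a question over HTTP
curl -s http://XXX.XXX.XXX.XXX:8001/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Summarize the uploaded documents."}'
```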

 

University of Toronto - Since 1827