How to deploy Danswer for AI-powered Virtual Assistants on the ITS Private Cloud


Introduction

Danswer is an AI assistant that can be connected to many sources, such as Atlassian Confluence, SharePoint, Slack, the web, files, MS Teams, and more. It retrieves documents, generates embeddings, and stores them locally. When a query comes in, a semantic similarity search is performed and only the most relevant results are passed to the LLM instead of full documents, which reduces the noise sent to the model (System Overview).

You can use the ITS Private Cloud GPU offering to deploy both Danswer and a local LLM server such as Ollama or vLLM, so your data never leaves U of T.

Additionally, you could deploy Danswer on the ITS Private Cloud and use a remote LLM inference service such as Azure OpenAI, ChatGPT, or Claude; in that case, only the most relevant vectorized data would leave our infrastructure.

This how-to focuses on using the vss-cli to deploy an Ubuntu virtual machine hosting Danswer, sized to hold ~1000 indexed documents, with the following specs:

  • 8 vCPUs.

  • 500 GB SSD storage.

  • 16 GB reserved memory.

  • 16 GB vGPU.

Instructions

Virtual Machine deployment

  1. Download the deployment file (ubuntu-danswer.yaml) and update the following attributes:

    1. machine.folder: target logical folder. List available folders with vss-cli compute folder ls

    2. metadata.client: your department client.

    3. metadata.inform: email address for automated notifications.

  2. Deploy your file as follows:

    vss-cli --wait compute vm mk from-file ubuntu-danswer.yaml
  3. Disable secure boot as a workaround for nvidia-gridd issues:

    vss-cli compute vm set <VM_ID> secure-boot --off
  4. (Optional) If you plan to use Ollama, add a 16 GB virtual GPU, specifically the 16q profile. For more information on the profile used, check the following document

    vss-cli compute vm set <VM_ID> gpu mk --profile 16q
  5. Once the VM has been deployed, a confirmation email will be sent with the assigned IP address and credentials.

  6. Power on the virtual machine:
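The power-on command was not preserved above. A likely form, based on the vss-cli `compute vm set ... state` subcommand used elsewhere in this guide (verify against your vss-cli version), is:

```
# Power on the VM (use the VM ID from the confirmation email);
# --wait blocks until the task completes.
vss-cli --wait compute vm set <VM_ID> state on
```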

Docker

  1. Add the docker gpg key:

  2. Add the repository to Apt sources:

  3. Install Docker:
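The commands for the three steps above were not preserved. The standard procedure from Docker's official Ubuntu install documentation is sketched below; check docs.docker.com for the current version:

```
# 1. Add the Docker GPG key
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# 2. Add the repository to Apt sources
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# 3. Install Docker Engine and the compose plugin
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```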

(Optional) Ollama

Follow steps 2 and 3 of
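The linked instructions were not captured here. Assuming they refer to Ollama's official Linux install, the typical steps (verify against ollama.com) are:

```
# Install Ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull the models referenced later in this guide
ollama pull llama3
ollama pull phi3
```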

Danswer

  1. Clone the Danswer repo:

  2. Go to danswer/deployment/docker_compose:
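The commands for steps 1 and 2 were not preserved. Assuming the public danswer-ai/danswer GitHub repository, they would be:

```
# Clone the Danswer repository and enter the docker compose directory
git clone https://github.com/danswer-ai/danswer.git
cd danswer/deployment/docker_compose
```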

  3. Configure Danswer by creating a file in danswer/deployment/docker_compose/.env with the following contents:

    1. More information about configuration settings can be found here
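The .env contents were not preserved. As an illustrative sketch only (variable names should be verified against the configuration docs referenced above), a minimal file might look like:

```
# .env — illustrative example; see the Danswer configuration
# docs for the authoritative list of settings
AUTH_TYPE=disabled
GEN_AI_API_KEY=your-key-here   # only needed for remote LLM providers
```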

  4. Build the containers:
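The build command was not preserved. Based on Danswer's docker compose deployment documentation, it is typically:

```
# Build and start the full Danswer stack in the background
docker compose -f docker-compose.dev.yml -p danswer-stack up -d --build
```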

  5. Danswer will now be running at http://{ip-address}:3000.

  6. To stop the stack:
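The stop command was not preserved. Assuming the stack was started with the compose file and project name used by Danswer's deployment docs, it would be:

```
# Stop and remove the stack's containers (named volumes are preserved)
docker compose -f docker-compose.dev.yml -p danswer-stack down
```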

  7. (Optional) If you are using Ollama on the same instance, use the following settings:

    1. Display name: ollama

    2. Provider Name: ollama

    3. [Optional] API Base: http://host.docker.internal:11434

    4. Model Names:

      1. llama3

      2. phi3

    5. Default model: llama3

    6. Fast Model: phi3

  8. (Optional) If you are using remote inference such as OpenAI or Azure OpenAI, refer to the official danswer.ai docs:

  9. Create your first connector.


University of Toronto - Since 1827