
Introduction

Danswer is an AI assistant that can be connected to many sources, such as Atlassian Confluence, SharePoint, Slack, the web, files, MS Teams, and more. It retrieves documents, generates embeddings, and stores them locally. When a query comes in, a semantic similarity search is performed and only the most relevant results are passed to the LLM instead of full documents, which reduces noise to the model (System Overview).
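
The retrieval flow described above can be sketched in a few lines. This is a toy illustration of the idea, not Danswer's actual code: the character-bigram "embedding" below stands in for the real embedding model, and the documents are made up.

```python
import math

def embed(text):
    # Toy embedding: normalized character-bigram counts stand in for a
    # real embedding model (Danswer uses a local transformer model).
    vec = {}
    for i in range(len(text) - 1):
        bigram = text[i:i + 2].lower()
        vec[bigram] = vec.get(bigram, 0) + 1
    norm = math.sqrt(sum(v * v for v in vec.values()))
    return {k: v / norm for k, v in vec.items()}

def cosine(a, b):
    # Dot product of two normalized sparse vectors = cosine similarity.
    return sum(a[k] * b[k] for k in a if k in b)

# Documents are embedded once, at ingestion time.
docs = [
    "How to reset your VPN password",
    "Requesting a virtual GPU for a VM",
    "Slack channel naming conventions",
]
index = [(d, embed(d)) for d in docs]

# At query time, documents are ranked by semantic similarity and only the
# most relevant ones are passed to the LLM, keeping noise out of the prompt.
query = embed("request a gpu")
ranked = sorted(index, key=lambda pair: cosine(query, pair[1]), reverse=True)
top_doc = ranked[0][0]
```

In the real system the top hits (not the single best document) are inserted into the LLM prompt as context.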

...

  • 8 vCPUs.

  • 500 GB SSD storage.

  • 16 GB memory reserved.

  • 16 GB vGPU.

...

📘 Steps

Instructions

Virtual Machine deployment

  1. Download the ubuntu-danswer.yaml file and update the following attributes:

    1. machine.folder: target logical folder. List available folders with vss-cli compute folder ls

    2. metadata.client: your department client.

    3. metadata.inform: the email address for automated notifications.
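
For reference, a sketch of how these attributes might sit in ubuntu-danswer.yaml. Only the three attribute names come from the steps above; the nesting and placeholder values are assumptions.

```yaml
machine:
  folder: <target-folder-path>    # list available folders with: vss-cli compute folder ls
metadata:
  client: <your-department-client>
  inform: user@example.org        # address for automated notifications (placeholder)
```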

  2. Deploy your file as follows:

    Code Block
    vss-cli --wait compute vm mk from-file ubuntu-danswer.yaml
  3. Disable secure boot as a workaround for nvidia-gridd issues:

    Code Block
    vss-cli compute vm set <VM_ID> secure-boot --off
  4. (Optional) If you plan to use Ollama, add a 16 GB virtual GPU using the 16q profile. For more information on the profile used, see How-to Request a Virtual GPU:

    Code Block
    vss-cli compute vm set <VM_ID> gpu mk --profile 16q
  5. Once the VM has been deployed, a confirmation email will be sent with the assigned IP address and credentials.

  6. Power on virtual machine:

    Code Block
    vss-cli compute vm set ubuntu-llm state on

...

  1. Clone the Danswer repo:

    Code Block
    git clone https://github.com/danswer-ai/danswer.git
  2. Go to danswer/deployment/docker_compose:

  3. Change into the directory:

    Code Block
    cd danswer/deployment/docker_compose
  4. Configure Danswer by creating a .env file at danswer/deployment/docker_compose/.env with the following contents:

    Code Block
    # Configures basic email/password based login
    AUTH_TYPE="basic"
    
    # Rephrasing the query into different languages to improve search recall
    MULTILINGUAL_QUERY_EXPANSION="English,Spanish"
    
    # Set a cheaper/faster LLM for the flows that are easier (such as translating the query etc.)
    FAST_GEN_AI_MODEL_VERSION="gpt-3.5-turbo"
    
    # Setting more verbose logging
    LOG_LEVEL="debug"
    LOG_ALL_MODEL_INTERACTIONS="true"
    
    DISABLE_TELEMETRY="true"
    1. More information about configuration settings can be found at https://docs.danswer.dev/configuration_guide.
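
The MULTILINGUAL_QUERY_EXPANSION setting above rephrases each query into the listed languages and searches with every variant, improving recall for documents written in either language. A minimal sketch of that idea (not Danswer's implementation; translate and search are hypothetical stand-ins):

```python
def expansion_languages(env_value):
    """Parse a MULTILINGUAL_QUERY_EXPANSION-style comma-separated list."""
    return [lang.strip() for lang in env_value.split(",") if lang.strip()]

def expanded_search(query, translate, search):
    # One search per language; results are merged and de-duplicated while
    # preserving rank order, so a document matching any phrasing is recalled.
    seen, merged = set(), []
    for lang in expansion_languages("English,Spanish"):
        for doc in search(translate(query, lang)):
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged
```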

  5. Build and start the containers:

    Code Block
    docker compose -f docker-compose.dev.yml -p danswer-stack up -d --build --force-recreate
  6. Danswer will now be running on http://{ip-address}:3000.

  7. To stop the stack:

    Code Block
    docker compose -f docker-compose.dev.yml -p danswer-stack down
  8. (Optional) If you are using Ollama on the same instance, configure the LLM provider with the following settings:

    1. Display name: ollama

    2. Provider Name: ollama

    3. [Optional] API Base: http://host.docker.internal:11434

    4. Model Names:

      1. llama3

      2. phi3

    5. Default model: llama3

    6. Fast Model: phi3
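
Before saving these settings, you may want to confirm the model names match what the Ollama server actually serves; Ollama lists its local models at GET /api/tags. The helper below is an illustrative sketch, not part of Danswer:

```python
import json
import urllib.request

def missing_models(configured, tags_response):
    """Return configured model names absent from an Ollama /api/tags payload.

    `tags_response` is the parsed JSON dict; Ollama reports each model under
    "models" with a "name" field like "llama3:latest".
    """
    served = {m["name"].split(":")[0] for m in tags_response.get("models", [])}
    return [name for name in configured if name not in served]

def check_ollama(base="http://host.docker.internal:11434",
                 configured=("llama3", "phi3")):
    # Queries the running server; requires Ollama to be up and reachable.
    with urllib.request.urlopen(base + "/api/tags") as resp:
        return missing_models(configured, json.load(resp))
```

An empty result from check_ollama() means every configured model is available; otherwise pull the missing ones with `ollama pull <model>` first.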

  9. (Optional) If you are using remote inference like OpenAI or Azure OpenAI, refer to the official danswer.ai docs: https://docs.danswer.dev/quickstart#generative-ai-api-key

  10. Create your first connector (https://docs.danswer.dev/connectors/overview).
