Seafile AI extension

From Seafile 13 onward, users can enable Seafile AI to support the following features:

  • File tags, file and image summaries, text translation, sdoc writing assistance
  • Given an image, generate its corresponding tags (including objects, weather, color, etc.)
  • Detect faces in images and encode them
  • Detect text in images (OCR)

AIGC statement in Seafile

Seafile AI builds on large language models and face recognition models to support image recognition and text generation. The generated content is diverse and non-deterministic, and users must evaluate it themselves. Seafile will not be responsible for AI-generated content (AIGC).

At the same time, Seafile AI supports the use of custom LLM and face recognition models. Different large language models affect AIGC differently (in both functionality and performance), so Seafile will not be responsible for the resulting generation rate (i.e., tokens/s), token consumption, or generated content. Relevant factors include, but are not limited to:

  • Base model (including its underlying algorithm)
  • Parameter count
  • Quantization level

When users run their own OpenAI-compatible LLM service (e.g., LM Studio, Ollama) with self-ablated or abliterated models, Seafile will not be responsible for possible bugs (such as infinite loops that output the same meaningless content). Seafile also does not recommend using documents such as SeaDoc to evaluate the performance of ablated models.

Deploy Seafile AI basic service

Deploy Seafile AI on the same host as Seafile

The Seafile AI basic service makes API calls to an external large language model service to implement file tagging, file and image summaries, text translation, and sdoc writing assistance.

Seafile AI requires a Redis cache

To deploy Seafile AI correctly, you must use Redis as the cache. Set CACHE_PROVIDER=redis in .env and fill in the Redis-related configuration correctly, as in the sketch below.
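
A minimal sketch of those Redis-related fields (the host, port, and password below are placeholders; replace them with your own values):

    CACHE_PROVIDER=redis
    REDIS_HOST=127.0.0.1
    REDIS_PORT=6379
    REDIS_PASSWORD=<your redis password>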

  1. Download seafile-ai.yml

    wget https://manual.seafile.com/13.0/repo/docker/seafile-ai.yml
    
  2. Modify .env, insert or modify the following fields:

    If you use the default OpenAI model:

    COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml
    
    ENABLE_SEAFILE_AI=true
    SEAFILE_AI_LLM_KEY=<your openai LLM access key>
    
    If you use a custom (OpenAI-compatible) model:

    COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml
    
    ENABLE_SEAFILE_AI=true
    SEAFILE_AI_LLM_TYPE=other
    SEAFILE_AI_LLM_URL=https://api.openai.com/v1 # your LLM API endpoint
    SEAFILE_AI_LLM_KEY=<your LLM access key>
    SEAFILE_AI_LLM_MODEL=gpt-4o-mini # your model id
    

    About using a custom model

    Seafile AI supports the use of custom large models, provided the following conditions are met (see the sketch below):

    • OpenAI-compatible API
    • The model supports multi-modality (e.g., image input)
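
    For illustration, a hypothetical configuration pointing Seafile AI at a local Ollama instance (Ollama serves an OpenAI-compatible API under /v1 and ignores the API key; the model name below is only an example of a multimodal model):

    SEAFILE_AI_LLM_TYPE=other
    SEAFILE_AI_LLM_URL=http://localhost:11434/v1
    SEAFILE_AI_LLM_KEY=ollama # any non-empty value; Ollama does not validate it
    SEAFILE_AI_LLM_MODEL=llama3.2-vision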

  3. Restart Seafile server:

    docker compose down
    docker compose up -d
    

Deploy Seafile AI on a different host from Seafile

  1. Download seafile-ai.yml and .env:

    wget https://manual.seafile.com/13.0/repo/docker/seafile-ai/seafile-ai.yml
    wget -O .env https://manual.seafile.com/13.0/repo/docker/seafile-ai/env
    
  2. Modify .env on the host that will deploy Seafile AI according to the following table:

    | Variable | Description |
    | --- | --- |
    | SEAFILE_VOLUME | The volume directory of Seafile AI server data |
    | JWT_PRIVATE_KEY | JWT key, the same as in the Seafile server's .env file |
    | INNER_SEAHUB_SERVICE_URL | Intranet URL for accessing the Seahub component, e.g. http://<your Seafile server intranet IP> |
    | REDIS_HOST | Redis server host |
    | REDIS_PORT | Redis server port |
    | REDIS_PASSWORD | Redis server password |
    | SEAFILE_AI_LLM_TYPE | Large Language Model (LLM) type. openai (default) uses OpenAI's gpt-4o-mini model; other is for user-custom models that support multi-modality |
    | SEAFILE_AI_LLM_URL | LLM API endpoint; only needs to be specified when SEAFILE_AI_LLM_TYPE=other. Default is https://api.openai.com/v1 |
    | SEAFILE_AI_LLM_KEY | LLM API key |
    | FACE_EMBEDDING_SERVICE_URL | Face embedding service URL |
    | SEAFILE_AI_LLM_MODEL | LLM model id (or name); only needs to be specified when SEAFILE_AI_LLM_TYPE=other. Default is gpt-4o-mini |
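
    For reference, a hypothetical filled-in .env using the default OpenAI model (the IP address and secrets below are placeholders):

    SEAFILE_VOLUME=/opt/seafile-ai
    JWT_PRIVATE_KEY=<same value as in the Seafile server's .env>
    INNER_SEAHUB_SERVICE_URL=http://192.168.0.10
    REDIS_HOST=192.168.0.10
    REDIS_PORT=6379
    REDIS_PASSWORD=<your redis password>
    SEAFILE_AI_LLM_TYPE=openai
    SEAFILE_AI_LLM_KEY=<your openai LLM access key>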

    then start your Seafile AI server:

    docker compose up -d
    
  3. Modify .env on the host where Seafile is deployed:

    SEAFILE_AI_SERVER_URL=http://<your seafile ai host>:8888
    

    then restart your Seafile server

    docker compose down && docker compose up -d
    

Deploy face embedding service (Optional)

The face embedding service detects and encodes faces in images and is an extension component of Seafile AI. We generally recommend deploying it on a machine with a GPU whose graphics driver supports ONNX Runtime (it can therefore also be deployed on a different machine from the Seafile AI basic service). Currently, the Seafile AI face embedding service only supports the following modes:

  • Nvidia GPU, using the CUDA 12.4 acceleration environment (requires at least the Nvidia GeForce 531.18 driver) and the Nvidia container toolkit
  • Pure CPU mode

If you plan to deploy the face embedding service in a GPU environment, make sure your graphics card is supported by the acceleration environment (e.g., CUDA 12.4) and is correctly mapped into the /dev/dri directory. In some cases, cloud servers and WSL with certain driver versions may therefore not be supported.
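
As a quick sanity check for GPU mode (a sketch, assuming the Nvidia container toolkit is already installed), you can verify that containers see the GPU before deploying:

    docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi

If nvidia-smi prints your GPU, the container runtime is configured correctly.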

  1. Download the Docker compose file (choose one):

    For Nvidia GPU (CUDA) mode:

    wget -O face-embedding.yml https://manual.seafile.com/13.0/repo/docker/face-embedding/cuda.yml
    
    For pure CPU mode:

    wget -O face-embedding.yml https://manual.seafile.com/13.0/repo/docker/face-embedding/cpu.yml
    
  2. Modify .env, insert or modify the following fields:

    COMPOSE_FILE='...,face-embedding.yml' # add face-embedding.yml
    
    FACE_EMBEDDING_VOLUME=/opt/face_embedding
    
  3. Restart Seafile server:

    docker compose down
    docker compose up -d
    
  4. Enable face recognition in the repo's settings.


Deploy the face embedding service on a different machine than the Seafile AI basic service

Since the face embedding service may need to be deployed on hosts with GPU(s), it may not be deployed together with the Seafile AI basic service. In this case, you need to make some changes to the Docker compose file so that the service can be accessed over the network.

  1. Modify face-embedding.yml and uncomment the lines that expose the service port:

    services:
      face-embedding:
        ...
        ports:
          - 8886:8886
    
  2. Modify the .env on the host where Seafile AI is deployed:

    FACE_EMBEDDING_SERVICE_URL=http://<your face embedding service host>:8886
    
  3. Make sure JWT_PRIVATE_KEY is set in the face embedding service's .env and matches the key on the Seafile server (see the sketch below).
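
    A hypothetical pair of entries showing how the two hosts line up (the hostname and secret are placeholders):

    # .env on the face embedding host
    JWT_PRIVATE_KEY=<shared secret, same as the Seafile server>
    
    # .env on the host running Seafile AI
    FACE_EMBEDDING_SERVICE_URL=http://face-embedding.internal:8886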

  4. Restart Seafile server

    docker compose down
    docker compose up -d
    

Persistent volume and model management

By default, the persistent volume is /opt/face_embedding. It contains two subdirectories:

  • /opt/face_embedding/logs: contains the startup log and access log of the face embedding service
  • /opt/face_embedding/models: contains the model files of the face embedding service. The latest applicable models are downloaded automatically at each startup from our Hugging Face repository. You can also place model files in this directory manually (for example, if the automatic download fails).
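
After a successful start, the volume should look roughly like this (a sketch; the model files present depend on what was pulled):

    /opt/face_embedding
    ├── logs
    └── models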

Customizing model serving access keys

By default, the access key used by the face embedding service is the same as the Seafile server's JWT_PRIVATE_KEY. You may want to change this for security reasons. To customize the access key for the face embedding service, follow these steps:

  1. Modify the .env file for both the face embedding service and Seafile AI:

    FACE_EMBEDDING_SERVICE_KEY=<your custom access key>
    
  2. Restart Seafile server

    docker compose down
    docker compose up -d