
Setup With S3 Storage

Deployment notes

  • This feature is available only in the Pro Edition.
  • If your Seafile server is deployed from binary packages, you have to complete the following steps before deployment:

    1. Install boto3 on your machine:

      sudo pip install boto3
      
    2. Install and configure memcached or Redis.

      For best performance, Seafile requires a memory cache for objects. We recommend allocating at least 128 MB of memory to memcached or Redis.

      The configuration options differ between S3 storage providers; we describe them in separate sections below. You also need to add the memory cache configuration.
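As a sketch of the memory cache configuration: assuming a memcached server listening on localhost:11211, the cache section in seafile.conf typically takes the following form (adjust the server address and pool sizes to your deployment):

```ini
[memcached]
# Point --SERVER at your memcached instance; pool sizes are tunable.
memcached_options = --SERVER=localhost:11211 --POOL-MIN=10 --POOL-MAX=100
```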

New feature in Pro Edition 12.0

If you deploy the Seafile server with Docker, you can modify the following fields in .env before starting the services:

INIT_S3_STORAGE_BACKEND_CONFIG=true
INIT_S3_COMMIT_BUCKET=<your-commit-objects>
INIT_S3_FS_BUCKET=<your-fs-objects>
INIT_S3_BLOCK_BUCKET=<your-block-objects>
INIT_S3_KEY_ID=<your-key-id>
INIT_S3_SECRET_KEY=<your-secret-key>
INIT_S3_USE_V4_SIGNATURE=true
INIT_S3_AWS_REGION=us-east-1 # your AWS Region
INIT_S3_HOST=s3.us-east-1.amazonaws.com # your S3 Host
INIT_S3_USE_HTTPS=true

These settings generate the same configuration file described in this manual and take effect the first time the service is started.

How to configure S3 in Seafile

Seafile configures S3 storage by adding or modifying the following section in seafile.conf:

[xxx_object_backend]
name = s3
bucket = my-xxx-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
use_https = true
... ; other optional configurations

You have to create at least 3 buckets for Seafile, corresponding to the sections commit_object_backend, fs_object_backend and block_backend. For the configuration of each backend section, refer to the following table:

  • bucket — Bucket name for commit, fs, or block objects. Make sure it follows the S3 naming rules (see the notes below the table).
  • key_id — Required to authenticate to S3. You can find the key_id in the "security credentials" section of your AWS account page, or get it from your storage provider.
  • key — Required to authenticate to S3. You can find the key in the "security credentials" section of your AWS account page, or get it from your storage provider.
  • use_v4_signature — There are two versions of the S3 authentication protocol: Version 2 (older, may still be supported by some regions) and Version 4 (current, used by most regions). If you don't set this option, Seafile uses the v2 protocol. The v4 protocol is recommended.
  • use_https — Use https to connect to S3. Recommended.
  • aws_region — (Optional) If you use the v4 protocol with AWS S3, set this option to the region you chose when creating the buckets. If it's not set and you use the v4 protocol, Seafile defaults to us-east-1. Ignored with the v2 protocol.
  • host — (Optional) The endpoint by which you access the storage service; it usually starts with the region name. Required if you use a storage provider other than AWS, otherwise Seafile uses AWS's address (i.e., s3.us-east-1.amazonaws.com).
  • sse_c_key — (Optional) Any 32-character random string, e.g. as generated by openssl rand -base64 24. SSE-C requires the v4 authentication protocol and https.
  • path_style_request — (Optional) Makes Seafile use URLs like https://192.168.1.123:8080/bucketname/object to access objects. Amazon S3's default URL format is virtual-host style, such as https://bucketname.s3.amazonaws.com/object, but that style relies on advanced DNS setup, so most self-hosted storage systems implement only the path-style format. We recommend setting this option to true for self-hosted storage.

Bucket naming conventions

Whether you use AWS or another S3-compatible object storage, we recommend following the S3 naming rules. Before creating buckets, please read the S3 naming rules first. In particular, do not use capital letters in bucket names (no camel-case names such as MyCommitObjects).

Good examples:

  • seafile-commit-object
  • seafile-fs-object
  • seafile-block-object

Bad examples:

  • SeafileCommitObject
  • seafileFSObject
  • seafile block object
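A quick way to sanity-check candidate bucket names is a short script. The regex below covers only a simplified subset of the S3 rules (3-63 characters of lowercase letters, digits, dots and hyphens, starting and ending with a letter or digit), not the full AWS rule set:

```python
import re

# Simplified subset of the S3 bucket naming rules: 3-63 characters,
# lowercase letters, digits, dots and hyphens only, and the name must
# start and end with a letter or digit.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` passes the simplified S3 naming check."""
    return bool(BUCKET_RE.match(name))

for name in ["seafile-commit-object", "SeafileCommitObject", "seafile block object"]:
    print(name, "->", is_valid_bucket_name(name))
```

Names with capital letters or spaces fail the check, matching the bad examples above.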

Use server-side encryption with customer-provided keys (SSE-C) in Seafile

Since Pro Edition 11.0, you can use SSE-C with S3. Add the sse_c_key option to seafile.conf (as shown in the variables table above):

[commit_object_backend]
name = s3
......
use_v4_signature = true
use_https = true
sse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P

[fs_object_backend]
name = s3
......
use_v4_signature = true
use_https = true
sse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P

[block_backend]
name = s3
......
use_v4_signature = true
use_https = true
sse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P

sse_c_key is a string of 32 characters.

You can generate sse_c_key with the following command. Note that the key doesn't have to be base64 encoded; it can be any 32-character random string. The command below is just one way to generate such a key.

openssl rand -base64 24
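Since any 32-character random string works, you can also generate the key in Python using only the standard library, for example:

```python
import secrets
import string

# Any 32-character random string is acceptable as sse_c_key.
alphabet = string.ascii_letters + string.digits
sse_c_key = "".join(secrets.choice(alphabet) for _ in range(32))
print(sse_c_key)
```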

Warning

If you have existing data in your S3 storage bucket, turning on the above configuration will make your data inaccessible, because the Seafile server doesn't support mixing encrypted and non-encrypted objects in the same bucket. You have to create a new bucket and migrate your data to it by following the storage backend migration documentation.

Examples

Amazon S3:

[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
use_https = true

[fs_object_backend]
name = s3
bucket = my-fs-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
use_https = true

[block_backend]
name = s3
bucket = my-block-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
use_https = true

Exoscale:

[commit_object_backend]
name = s3
bucket = your-bucket-name
host = sos-de-fra-1.exo.io
key_id = ...
key = ...
use_https = true
path_style_request = true

[fs_object_backend]
name = s3
bucket = your-bucket-name
host = sos-de-fra-1.exo.io
key_id = ...
key = ...
use_https = true
path_style_request = true

[block_backend]
name = s3
bucket = your-bucket-name
host = sos-de-fra-1.exo.io
key_id = ...
key = ...
use_https = true
path_style_request = true

Hetzner object storage:

[commit_object_backend]
name = s3
bucket = your-bucket-name
host = fsn1.your-objectstorage.com
key_id = ...
key = ...
use_https = true
path_style_request = true

[fs_object_backend]
name = s3
bucket = your-bucket-name
host = fsn1.your-objectstorage.com
key_id = ...
key = ...
use_https = true
path_style_request = true

[block_backend]
name = s3
bucket = your-bucket-name
host = fsn1.your-objectstorage.com
key_id = ...
key = ...
use_https = true
path_style_request = true

There are other S3-compatible cloud storage providers on the market, such as Backblaze and Wasabi. Configuration for those providers differs only slightly from AWS. We can't guarantee that the following configuration works for all providers; if you run into problems, please contact our support.

[commit_object_backend]
name = s3
bucket = my-commit-objects
host = <access endpoint for storage provider>
key_id = your-key-id
key = your-secret-key
# v2 authentication protocol will be used if not set
use_v4_signature = true
# required for v4 protocol. ignored for v2 protocol.
aws_region = <region name for storage provider>
use_https = true

[fs_object_backend]
name = s3
bucket = my-fs-objects
host = <access endpoint for storage provider>
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = <region name for storage provider>
use_https = true

[block_backend]
name = s3
bucket = my-block-objects
host = <access endpoint for storage provider>
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = <region name for storage provider>
use_https = true

Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and MinIO. You can use these S3-compatible storage systems as the backend for Seafile. Here is an example configuration:

[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
host = <your s3 api endpoint host>:<your s3 api endpoint port>
path_style_request = true
use_v4_signature = true
use_https = true

[fs_object_backend]
name = s3
bucket = my-fs-objects
key_id = your-key-id
key = your-secret-key
host = <your s3 api endpoint host>:<your s3 api endpoint port>
path_style_request = true
use_v4_signature = true
use_https = true

[block_backend]
name = s3
bucket = my-block-objects
key_id = your-key-id
key = your-secret-key
host = <your s3 api endpoint host>:<your s3 api endpoint port>
path_style_request = true
use_v4_signature = true
use_https = true

Run and Test

Now you can start Seafile and test it.
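The exact commands depend on how you deployed Seafile. As a sketch, assuming a standard binary-package install under /opt/seafile:

```shell
# Binary package deployment: restart the Seafile services
cd /opt/seafile/seafile-server-latest
./seafile.sh restart
./seahub.sh restart

# Docker deployment: recreate the containers so the new settings take effect
# docker compose down && docker compose up -d
```

After restarting, upload a file through the web interface and verify that objects appear in your commit, fs and block buckets.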