2.5. Configure#

The vantage6-node requires a configuration file to run. This is a yaml file with a specific format.

The next sections describe how to configure the node. They first provide a few quick answers on setting up your node, then show an example of all configuration file options, and finally explain where your vantage6 configuration files are stored.

2.5.1. How to create a configuration file#

The easiest way to create an initial configuration file is via v6 node new. This allows you to configure the basic settings. For more advanced configuration options, which are listed below, you can view the example configuration file.
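For example, the following command starts a short questionnaire for the basic settings; afterwards you can open the generated YAML file by hand to add any of the advanced options listed below:

v6 node new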

2.5.2. Where is my configuration file?#

To see where your configuration file is located, you can use the following command:

v6 node files

Warning

This command will not work if you have put your configuration file in a custom location. Also, you may need to specify the --system flag if you put your configuration file in the system folder.
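For example, if your configuration file is stored in the system folder (see Configuration file location below), use:

v6 node files --system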

2.5.3. All configuration options#

The following configuration file is an example that lists all available configuration options.

You can download this file here: node_config.yaml

# API key used to authenticate at the server.
api_key: ***

# URL of the vantage6 server
server_url: https://cotopaxi.vantage6.ai

# port the server listens to
port: 443

# API path prefix that the server uses. Usually '/api' or an empty string
api_path: ''

# subnet of the VPN server
vpn_subnet: 10.76.0.0/16

# set the devices the algorithm container is allowed to request.
algorithm_device_requests:
  gpu: false

# Add additional environment variables to the algorithm containers. In case
# you want to supply database-specific environment variables (e.g. usernames
# and passwords) you should use the `env` key in the `databases` section of
# this configuration file.
# OPTIONAL
algorithm_env:

  # in this example the environment variable 'player' has
  # the value 'Alice' inside the algorithm container
  player: Alice

# Add additional environment variables to the node container. This can be useful
# if you need to modify the configuration of certain python libraries that the
# node uses. For example, if you want to use a custom CA bundle for the requests
# library you can specify it here.
node_extra_env:
  REQUESTS_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt

# Add additional volumes to the node container. This can be useful if you need
# to mount a custom CA bundle for the requests library for example.
node_extra_mounts:
  - /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro

node_extra_hosts:
  # In Linux (no Docker Desktop) you can use this (special) mapping to access
  # the host from the node.
  # See: https://docs.docker.com/reference/cli/docker/container/run/#add-host
  host.docker.internal: host-gateway
  # For testing purposes, it can also be used to map a public domain to a
  # private IP address, allowing you to avoid breaking TLS hostname verification
  v6server.example.com: 192.168.1.10

# specify custom Docker images to use for starting the different
# components.
# OPTIONAL
images:
  node: harbor2.vantage6.ai/infrastructure/node:cotopaxi
  alpine: harbor2.vantage6.ai/infrastructure/alpine
  vpn_client: harbor2.vantage6.ai/infrastructure/vpn_client
  network_config: harbor2.vantage6.ai/infrastructure/vpn_network
  ssh_tunnel: harbor2.vantage6.ai/infrastructure/ssh_tunnel
  squid: harbor2.vantage6.ai/infrastructure/squid

# path or endpoint to the local data source. The client can request a
# certain database by using its label. The type is used by the
# auto_wrapper method used by algorithms. This way the algorithm wrapper
# knows how to read the data from the source. The auto_wrapper currently
# supports: 'csv', 'parquet', 'sql', 'sparql', 'excel', 'omop'. If your
# algorithm does not use the wrapper and you have a different type of
# data source you can specify 'other'.
databases:
  - label: default
    uri: C:\data\datafile.csv
    type: csv

  - label: omop
    uri: jdbc:postgresql://host.docker.internal:5454/postgres
    type: omop
    # additional environment variables that are passed to the algorithm
    # containers (or their wrapper). This can be used for usernames and
    # passwords, for example. Note that these environment variables are
    # only passed to the algorithm container when the user requests that
    # database. In case you want to pass some environment variable to all
    # algorithms regardless of the data source the user specifies, you can
    # use the `algorithm_env` setting.
    env:
      user: admin@admin.com
      password: admin
      dbms: postgresql
      cdm_database: postgres
      cdm_schema: public
      results_schema: results


# end-to-end encryption settings
encryption:

  # whether encryption is enabled or not. This should be the same
  # as the `encrypted` setting of the collaboration to which this
  # node belongs.
  enabled: false

  # location of the private key file
  private_key: /path/to/private_key.pem

# Define who is allowed to run which algorithms on this node.
policies:
  # Control which algorithm images are allowed to run on this node. This is
  # expected to be a valid regular expression. If you don't specify this, all algorithm
  # images are allowed to run on this node (unless other policies restrict this).
  allowed_algorithms:
    - ^harbor2\.vantage6\.ai/[a-zA-Z]+/[a-zA-Z]+
    - ^myalgorithm\.ai/some-algorithm

  # It is also possible to allow all algorithms from particular algorithm stores. Set
  # these stores here. They may be strings or regular expressions. If you don't specify
  # this, algorithms from any store are allowed (unless other policies prevent this).
  allowed_algorithm_stores:
    # allow all algorithms from the vantage6 community store
    - https://store.cotopaxi.vantage6.ai
    # allow any store that is a subdomain of vantage6.ai
    - ^https://[a-z]+\.vantage6\.ai$

  # If you define both `allowed_algorithm_stores` and `allowed_algorithms`, you can
  # choose to only allow algorithms that both policies allow, or to allow
  # algorithms that either of them allows. The default value `false` means that
  # only algorithms allowed by *both* `allowed_algorithms` and
  # `allowed_algorithm_stores` may run; set it to `true` to allow algorithms
  # that match either policy.
  allow_either_whitelist_or_store: false

  # Define which users are allowed to run algorithms on your node by their ID
  allowed_users:
    - 2
  # Define which organizations are allowed to run images on your node by
  # their ID or name
  allowed_organizations:
    - 6
    - root

  # The basics algorithm (harbor2.vantage6.ai/algorithms/basics) is whitelisted
  # by default. It is used to collect column names in the User Interface to
  # facilitate task creation. Set to false to disable this.
  allow_basics_algorithm: true

  # Set to true to always require that the algorithm image is successfully pulled. This
  # ensures that no potentially outdated local images are used if internet connection
  # is not available. This option should be set to false if you are testing with local
  # algorithm images. Default value is false.
  require_algorithm_pull: false

# credentials used to login to private Docker registries
docker_registries:
  - registry: docker-registry.org
    username: docker-registry-user
    password: docker-registry-password

# Create SSH Tunnel to connect algorithms to external data sources. The
# `hostname` and `tunnel:bind:port` can be used by the algorithm
# container to connect to the external data source. This is the address
# you need to use in the `databases` section of the configuration file!
ssh-tunnels:

  # Hostname to be used within the internal network. I.e. this is the
  # hostname that the algorithm uses to connect to the data source. Make
  # sure this is unique and the same as what you specified in the
  # `databases` section of the configuration file.
  - hostname: my-data-source

    # SSH configuration of the remote machine
    ssh:

      # Hostname or ip of the remote machine, in case it is the docker
      # host you can use `host.docker.internal` for Windows and MacOS.
      # In the case of Linux you can use `172.17.0.1` (the ip of the
      # docker bridge on the host)
      host: host.docker.internal
      port: 22

      # fingerprint of the remote machine. This is used to verify the
      # authenticity of the remote machine.
      fingerprint: "ssh-rsa ..."

      # Username and private key to use for authentication on the remote
      # machine
      identity:
        username: username
        key: /path/to/private_key.pem

    # Once the SSH connection is established, a tunnel is created to
    # forward traffic from the local machine to the remote machine.
    tunnel:

      # The port and ip on the tunnel container. The ip is always
      # 0.0.0.0 as we want the algorithm container to be able to
      # connect.
      bind:
        ip: 0.0.0.0
        port: 8000

      # The port and ip on the remote machine. If the data source runs
      # on this machine, the ip most likely is 127.0.0.1.
      dest:
        ip: 127.0.0.1
        port: 8000

# Whitelist URLs and/or IP addresses that the algorithm containers are allowed to reach
# using the http protocol. Note that the addresses given below are examples and should
# be replaced with the actual addresses that you want to whitelist.
whitelist:
  domains:
    - google.com
    - github.com
    - host.docker.internal # docker host ip (windows/mac)
  ips:
    - 172.17.0.1 # docker bridge ip (linux)
    - 8.8.8.8
  ports:
    - 443

# Containers that are defined here are linked to the algorithm containers and
# can therefore be accessed by the algorithm when it is running. Note that
# to use this option, the container with 'container_name' should already be
# running before the node is started. Also, if you are using this option together with
# the `whitelist` option, make sure to whitelist the `container_label` under `ips`,
# as well as the port(s) that you want to reach on the container.
docker_services:
    container_label: container_name

# Settings for the logger
logging:
  # Controls the logging output level. Could be one of the following
  # levels: CRITICAL, ERROR, WARNING, INFO, DEBUG, NOTSET
  level:        DEBUG

  # whether the output needs to be shown in the console
  use_console:  true

  # The number of log files that are kept, used by RotatingFileHandler
  backup_count: 5

  # Size in kB of a single log file, used by RotatingFileHandler
  max_size:     1024

  # Format: input for logging.Formatter
  format:       "%(asctime)s - %(name)-14s - %(levelname)-8s - %(message)s"
  datefmt:      "%Y-%m-%d %H:%M:%S"

  # (optional) set the individual log levels per logger name, for example
  # mute some loggers that are too verbose.
  loggers:
    - name: urllib3
      level: warning
    - name: requests
      level: warning
    - name: engineio.client
      level: warning
    - name: docker.utils.config
      level: warning
    - name: docker.auth
      level: warning

# Additional debug flags
debug:

  # Set to `true` to put Flask/socketio into debug mode.
  socketio: false

  # Set to `true` to set the Flask app used for the LOCAL proxy service
  # into debug mode
  proxy_server: false


# directory where local task files (input/output) are stored
task_dir: C:\Users\<your-user>\AppData\Local\vantage6\node\mydir

# Whether or not your node shares some configuration (e.g. which images are
# allowed to run on your node) with the central server. This can be useful
# for other organizations in your collaboration to understand why a task
# is not completed. Obviously, no sensitive data is shared. Default true
share_config: true

2.5.4. Configuration file location#

The directory where the configuration file is stored depends on your operating system (OS). It is possible to store the configuration file at system or at user level. By default, node configuration files are stored at user level, which makes this configuration available only to your user.

The default directories per OS are as follows:

Operating System | System-folder                                | User-folder
Windows          | C:\ProgramData\vantage\node\                 | C:\Users\<user>\AppData\Local\vantage\node\
MacOS            | /Library/Application Support/vantage6/node/  | /Users/<user>/Library/Application Support/vantage6/node/
Linux            | /etc/vantage6/node/                          | /home/<user>/.config/vantage6/node/

Note

The command v6 node looks in these directories by default. However, it is possible to use any directory and specify its location with the --config flag. Note that doing so requires you to specify the --config flag every time you execute a v6 node command!

Similarly, you can put your node configuration file in the system folder by using the --system flag. Note that in that case, you have to specify the --system flag for all v6 node commands.
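For example (the configuration path below is only an illustration; replace it with your own file location):

v6 node start --config /path/to/node_config.yaml
v6 node start --system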

2.5.5. Security#

As a data owner it is important that you take the necessary steps to protect your data. Vantage6 allows algorithms to run on your data and share the results with other parties. It is therefore important that you review algorithms before allowing them to run on your data.

Once you have approved an algorithm, it is important that you can verify that the approved algorithm is indeed the one that runs on your data. There are two important steps to accomplish this:

  • Set the (optional) allowed_algorithms option in the policies section of the node-configuration file. You can specify a list of regex expressions here. Some examples of what you could define:

    1. ^harbor2\.vantage6\.ai/[a-zA-Z]+/[a-zA-Z]+: allow all images from the vantage6 registry

    2. ^harbor2\.vantage6\.ai/algorithms/glm$: only allow the GLM image, but allow all builds of this image

    3. ^harbor2\.vantage6\.ai/algorithms/glm@sha256:82becede498899ec668628e7cb0ad87b6e1c371cb8a1e597d83a47fac21d6af3$: allow only this specific build of the GLM image to run on your data

  • Enable DOCKER_CONTENT_TRUST to verify the origin of the image. For more details see the documentation from Docker.
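Docker content trust is controlled by an environment variable in the shell from which you run Docker (and the v6 node commands). For example, on Linux or MacOS you can enable it with:

export DOCKER_CONTENT_TRUST=1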

Warning

By enabling DOCKER_CONTENT_TRUST you might not be able to use certain algorithms. You can check this by verifying that the images you want to use are signed.

2.5.6. Logging#

To configure the logger, look at the logging section in the example configuration file in All configuration options.

Useful commands:

  1. v6 node files: shows you where the log file is stored

  2. v6 node attach: shows live logs of a running node in your current console. This can also be achieved when starting the node with v6 node start --attach