# Resources

# Neo4j Resource

This resource allows the user to include a Neo4j Database in the ecosystem.

# Configuration

  • Resource Name: (required) The name of the resource
  • host: (required) The host to use for the connection to the Neo4j Database server
  • port: (required) The port to use for the connection to the Neo4j Database server; providing 0 will ignore the port
  • user: (required) The username for the authenticated request to the Neo4j Database server
  • password: (required) The password for the authenticated request to the Neo4j Database server
  • database: (optional) The database name (only Neo4j v4.0 and above)
  • read_only: (optional, default false) Whether the resource should open a read-only connection to the Neo4j server

Refer to the Neo4j driver URI schemes documentation to construct the correct host value:

https://neo4j.com/docs/driver-manual/4.0/client-applications/#driver-configuration-examples

Providing a host without a scheme (for example just neo4j) will default to using the bolt:// scheme.
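
For illustration, a host value typically combines a scheme and a hostname, while the port is supplied in the separate port field (7687 is the default Bolt port); the hostnames below are only examples:

bolt://localhost
neo4j://neo4j.example.com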

TIP

When the database configuration option is not specified, the neo4j database will be used on a Neo4j 4.x server; on 3.5.x servers the single default database will be used.

TIP

When a resource is used in read-only mode, Hume ensures that a read transaction is opened towards the Neo4j server. However, this is not the most secure approach. For stronger security, it's best to create a service account on the Neo4j server with read-only access and use that account in Hume when creating the resource (see the example below).
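
As an illustration, on a Neo4j 4.x Enterprise server such a service account could be created with the built-in reader role; the user name and password below are placeholders:

// Run against the system database
CREATE USER hume_reader SET PASSWORD 'changeme' CHANGE NOT REQUIRED;
GRANT ROLE reader TO hume_reader;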


# RabbitMQ Resource

This resource allows the user to include a RabbitMQ Message Broker in the ecosystem. For more information about RabbitMQ, visit https://www.rabbitmq.com/.

# Configuration

  • Resource Name (required) The name of the resource
  • Host (required) The host to use for the connection to the RabbitMQ server
  • Port (required) The port to use for the connection to the RabbitMQ server
  • user (optional) The username for an authenticated request to the RabbitMQ server
  • password (optional) The password for an authenticated request to the RabbitMQ server

# Elasticsearch Resource

This resource allows the user to include an Elasticsearch server index in the ecosystem.

# Configuration

  • Resource Name (required) The name of the resource
  • Host (required) The host to use for the connection to the Elasticsearch server
  • Port (required) The port to use for the connection to the Elasticsearch server
  • user (optional) The username for an authenticated request to the ES server
  • password (optional) The password for an authenticated request to the ES server
  • index (required) The index name to use for this resource

# Amazon S3

This resource allows the user to include an Amazon S3 bucket in the ecosystem.

# Configuration

Note: Access keys consist of two parts: an access key ID and a secret access key. You use access keys to sign programmatic requests that you make to AWS.

  • Resource Name (required) The name of the resource
  • Access Key Id (required) An access key ID
  • Access Key Secret (required) A secret access key
  • Region (required) The AWS Region where the bucket has been created (ex. eu-central-1)
  • Bucket (required) The name of the bucket
  • Prefix (optional) A prefix that limits the results to only those keys that begin with the specified prefix (see the example below)
  • Delete After Read (optional) When enabled, deletes the object from the bucket once it has been downloaded by Orchestra
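
For example, setting Prefix to invoices/2021/ (a placeholder key prefix) would make the resource process only objects whose keys start with invoices/2021/.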

# Azure Blob Storage

This resource allows the user to include an Azure blob storage in the ecosystem.

# Configuration

Note: Authentication consists of two parts: the storage account name and an account key. These are used to authorize the requests that Orchestra makes to the Azure Blob service.

  • Resource Name (required) The name of the resource
  • Account name (required) The Azure storage account name to be used for authentication with the Azure Blob service
  • Account Key (required) The account key
  • Blob Endpoint Suffix (optional) If not specified, '.core.windows.net' will be used. Note that for Azure Government a suffix such as '.core.usgovcloudapi.net' is needed
  • Container (required) The blob container name

# Local FileSystem resource

This resource provides access to the local file system, allowing files to be processed by Orchestra.

# Configuration

  • Resource Name (required) The name of the resource
  • Path (required) The starting directory

# JDBC Resource

This resource provides access to any JDBC-compliant RDBMS supported by Hume.

Please note that Orchestra supports PostgreSQL and MySQL out of the box, and it is fully configurable to interact with any other RDBMS vendor.

This section also explains how to extend Orchestra to support an RDBMS that is not natively supported.

# Configuration

  • Resource name (required): The name of the resource
  • JDBC URL (required): The connection string that the JDBC DataSource will use to connect to the target database; it contains information such as the database vendor, host and port (see the examples below)
  • user (required) The username for an authenticated request to the target database server
  • password (required) The password for an authenticated request to the target database server
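
For example, JDBC URLs for the two natively supported vendors typically look as follows (hosts, ports and database names are placeholders):

jdbc:postgresql://db-host:5432/mydatabase
jdbc:mysql://db-host:3306/mydatabase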

# Suitable JDBC drivers

As a first step, in order to set up your JDBC Resource for reading from a given RDBMS, you have to download the proper JDBC driver from the vendor's website.

Below you can find links to the download pages of some important DB vendors that are not natively included in Orchestra and, consequently, need to be uploaded separately:

There are then three main steps to follow, which are accomplished differently depending on the type of installation:

  1. Tell Orchestra where to search for these new JAR files; technically speaking, this means adding one folder to the class path, which will be scanned at Orchestra start-up.
  2. Copy the driver into that folder.
  3. Restart Orchestra: the new driver will be loaded at startup.

Let's detail these steps one by one, according to the type of installation we're handling:

# The Docker way
# 1. Specify the classpath

Our Orchestra Docker container maps its internal /plugins folder as part of the Java class search path; so, to load our driver smoothly, we just need to mount that folder as a Docker volume pointing to a folder on the host filesystem.

Let's say we have a /orchestra-plugins directory on the filesystem of the Docker host. In our docker-compose.yml file, under the scope of the orchestra service, we will create a new volume like the one in the following example.


orchestra:
...
  volumes:
      ...
      - /orchestra-plugins:/plugins
...

# 2. Copy the JAR into the plugins folder

Copy the JDBC driver into the /orchestra-plugins folder of your host: the JAR will then automatically appear in the /plugins folder of the container.
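
For example, assuming the PostgreSQL driver JAR was downloaded to the current directory (the file name will vary with the driver version):

cp postgresql-42.2.23.jar /orchestra-plugins/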

# 3. Restart Orchestra

Head to the folder containing the docker-compose.yml file, then type:

docker-compose up -d --force-recreate orchestra

Once the container has finished restarting, you'll be able to create a JDBC resource that can be used in a workflow for querying your target RDBMS.

# No Docker
# 1. Specify the classpath

Choose a folder on the server's filesystem (absolute path) which will become part of the Java class path. Let's assume, as an example, that we'll use /plugins.

# 2. Copy the JAR into the plugins folder

Copy the driver JAR into your selected folder (/plugins); please make sure to launch Orchestra as a user with sufficient permissions to read from that directory.

# 3. Restart Orchestra

Assuming that hume-orchestra.jar is the name of your Orchestra executable JAR, run:

java \
    -classpath hume-orchestra.jar \
    -Dloader.path=/plugins \
    org.springframework.boot.loader.PropertiesLauncher

Of course, you'll have to change the JAR name (-classpath) and the loader path (-Dloader.path) directory if they're different from the example above.


# Kafka Resource

Available since: 2.3

This resource provides access to Kafka. For more information about Kafka, visit https://kafka.apache.org/.

# Configuration

  • Resource Name (required) The name of the resource
  • brokers (required) The host to use for the connection to the target broker (see the example below)
  • port (required) The port to use for the connection to the target broker
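
As an illustration, for a broker reachable at kafka.example.com (a placeholder hostname) on the default Kafka port, the resource would be configured with values such as:

brokers: kafka.example.com
port: 9092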

# SMTP Resource

Available since: 2.5.0

This resource provides access to an SMTP server to be used in combination with the Email component in Orchestra.

# Configuration

  • Resource Name (required) The name of the resource
  • uri (required) The URI of the SMTP server, in the form smtp://<mail-server-address>:<mail-server-port>, e.g. smtp://smtp.gmail.com:1025
  • user (optional, default null) The username to use for the connection to the SMTP server
  • password (optional, default null) The password to use for the connection to the SMTP server

Note: The user and password options work in tandem, meaning that if one of them is provided, the other must be provided as well.


# Webhook

Available since: 2.8.0

This resource will expose a webserver on the Orchestra service with the defined port and credentials. Users can then use a Webhook component to accept incoming JSON POST requests from third-party services, which will be processed in their workflows.

The exposed webservers require basic authentication to be configured, as unauthenticated requests are not authorized on the webservers.
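
For illustration, a third-party service (or a manual test) could post JSON to a webhook exposed on port 8665 as follows; the path, credentials and payload are placeholders, and https applies only when TLS is configured (see the Security section below):

curl -X POST \
     -u webhook_user:webhook_password \
     -H "Content-Type: application/json" \
     -d '{"event": "example"}' \
     http://orchestra-host:8665/my-webhook-path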

# Configuration

  • Resource Name (required) The name of the resource
  • host (required) The host on which the webserver will listen. Use 0.0.0.0 to listen on all network interfaces or localhost to listen only on the local network interface
  • port (required) The port on which the webserver will be exposed
  • user (required) The user name for the basic authentication
  • password (required) The password for the basic authentication

# Lifecycle of the Webhook server

Creating the resource does not start anything by itself. The webhook server will only be created once a workflow that uses it through a Webhook component is started.

Multiple webhook components can expose different paths using the same server, and stopping a workflow on a specific path doesn't affect other workflows exposing different paths on the same server.

If all the workflows using a webhook server are stopped, the webhook server will be shut down.

# Security

# Authentication

All webhook server resources require a username and password to be configured. Note that this information is passed over the wire from the API to Orchestra; depending on your security requirements, it is advised to use encrypted ecosystem variables to configure such information.

# TLS

When Orchestra is configured to make use of TLS certificates, for example:

server:
  port: 8100
  ssl:
    key-store: /opt/hume/ssl/hume.jks
    key-store-type: "pkcs12"
    key-store-password: "password"
    key-password: "password"
    key-alias: "hume-orchestra"

The configured webhook servers will automatically re-use this TLS configuration and will only accept requests over TLS.

# Exposure of the webhook ports

By nature, Orchestra is not a service that should be exposed to the outside; this is why webhook servers do not re-use the internal webserver of Orchestra used by the API, but rather expose dedicated webservers for webhooks.

This means that, depending on your infrastructure configuration, you will need to expose those ports to the third parties.

DOCKER

With Docker, you will generally need to map the host ports to the container ports. The following configuration exposes port 8665 to the host.

services:
    orchestra:
        image: docker.graphaware.com/public/hume-orchestra:{{hume_version}}
        ports:
        - 8100:8100
        - 8665:8665

NON-DOCKER

When running without Docker, an NGINX server will generally be used for exposing Hume. You can proxy incoming requests received by NGINX to the webhook server port on the Orchestra service via the localhost interface.

Add the following section to the NGINX configuration:

location /webhook {
    proxy_pass https://localhost:8665;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
  }

With this setup, requests to <url>/webhook will be forwarded to localhost:8665 without having to expose Orchestra itself to the outside world.