Installation on OpenShift

This page describes the Everyware Cloud installation process for OpenShift Container Platform (OCP) and OKD.

Project Configuration

The first step is to create a new project. In this documentation, we will use the project name ec-demo.

oc new-project ec-demo --display-name='Everyware Cloud Demo'

Once the project is created, we can configure the extra service account and Security Context Constraints (SCC). This step is required only if you are deploying the VPN service since the VPN container needs a privileged SCC.

First, we use the provided template to create a new SCC for the VPN, allowing the container to run as privileged, with the NET_ADMIN capability, and as the root user.

oc create -f scc-vpn.yml

Then we can create a new service account and associate it to the SCC.

oc create sa ec-vpn-privileged -n ec-demo
oc adm policy add-scc-to-user ec-vpn-privileged -z ec-vpn-privileged -n ec-demo

For older versions of OpenShift you may need to use oadm instead of oc adm.

Docker Images

First, create a secret containing the credentials for the Everyware Cloud Docker registry and grant pull permission to the default and ec-vpn-privileged service accounts. Replace <username> and <password> with the values provided by Eurotech.

oc create secret docker-registry \
    --docker-server=registry.everyware-cloud.com \
    --docker-username=<username> \
    --docker-password=<password> \
    --docker-email=unused \
    -n ec-demo \
    eurotech-docker-registry
    
oc secrets add serviceaccount/default secrets/eurotech-docker-registry --for=pull
oc secrets add serviceaccount/ec-vpn-privileged secrets/eurotech-docker-registry --for=pull

Then import the images to OpenShift image streams.

IMAGE_TAG="5.5.0-ubi"

COMPONENTS=("ec-api" "ec-broker" "ec-console" "ec-events-broker" "ec-vpn")

for COMPONENT in "${COMPONENTS[@]}"; do
    oc import-image ${COMPONENT}:${IMAGE_TAG} --from=registry.everyware-cloud.com/eurotech/${COMPONENT}:${IMAGE_TAG} -n ec-demo --confirm
done

At this point only the image metadata is retrieved. The image itself will be downloaded from the Everyware Cloud registry only when a corresponding container is started for the first time.
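
A quick way to check that the image streams have been created (using the ec-demo project from this guide):

oc get imagestreams -n ec-demo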

Secrets

Before deploying Everyware Cloud it is advisable to configure a few secrets that will be used by the containers. In particular, EC uses secrets for both the database credentials and the certificates.

First, configure the secrets for the database using the command shown below. Please change the values according to your own environment:

oc create secret generic ec-db --from-literal=username=ecdbuser --from-literal=password=ecdbpass -n ec-demo

Then create a secret containing the certificate, the private key and the CA chain. This command can be used multiple times if you are not using wildcard certificates or if you want separate certificates for each service.

oc create secret generic ec-crt --from-file=crt=cert.pem --from-file=key=key.pem --from-file=ca=ca.pem -n ec-demo

Configure the secrets for the events broker. These are used for the internal communication between the EC components and the events broker itself.

oc create secret generic ec-events-broker --from-literal=username=ec-user --from-literal=password=ec-pass -n ec-demo

Lastly create the secrets for the transport connections between the various services and the message broker.

oc create secret generic ec-transport-api --from-literal=username=ec-sys --from-literal=password=ec-pass -n ec-demo

oc create secret generic ec-transport-console --from-literal=username=ec-sys --from-literal=password=ec-pass -n ec-demo

oc create secret generic ec-transport-broker --from-literal=username=ec-sys --from-literal=password=ec-pass -n ec-demo

Template Installation

The installation of Everyware Cloud on OpenShift is just a matter of importing the various templates and processing them.
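
The general pattern is the same for every template: process it with oc process, passing the required parameters, and feed the result to oc create. The template file name below is illustrative; use the file names shipped with your Everyware Cloud distribution.

oc process -f <template.yml> -p PARAMETER=value | oc create -f - -n ec-demo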

First, install the configmaps/configs.yml template. This contains the common configurations used by the various services, such as database and storage connections. These are some of the parameters you must configure for your deployment (an example follows the list):

EVENTS_BROKER_HOST: host address where the events broker can be found; this can be the internal service name (ec-events-broker.ec-demo.svc.cluster.local)
DB_HOST: host address where the database can be found
DB_PORT: port exposed by the database
DB_NAME: name of the database schema used by EC
STORAGE_HOST: host address where the storage can be found
STORAGE_PORT: port exposed by Elasticsearch
STORAGE_SSL: indicates whether Elasticsearch connections should use SSL/TLS
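
As an example, the configuration template could be processed as follows (a sketch; all values are placeholders to be replaced with the ones from your environment):

oc process -f configmaps/configs.yml \
    -p EVENTS_BROKER_HOST=ec-events-broker.ec-demo.svc.cluster.local \
    -p DB_HOST=<db-host> \
    -p DB_PORT=<db-port> \
    -p DB_NAME=<db-name> \
    -p STORAGE_HOST=<elasticsearch-host> \
    -p STORAGE_PORT=9200 \
    -p STORAGE_SSL=true \
    | oc create -f - -n ec-demo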

Once the Config Map is created it is possible to deploy the Everyware Cloud components. The events broker should be installed first; the order of the other components is not important.

The templates are split in two: deployments and services. They should be deployed together per component.

Common parameters for the deployment templates are the following (an example follows the list):

DEPLOYMENT_NAME: name of the deployment
IMAGE_NAMESPACE: the OpenShift namespace where the ImageStream resides
IMAGE_STREAM: the name of the ImageStream for the product
IMAGE_VERSION: tag of the image matching the one used when importing to the local registry (e.g. 5.0.1)
CONFIG: name of the config map configured in the previous step
DB_SECRET: name of the secret containing the DB credentials, in this example ec-db
CRT_SECRET: name of the secret containing the certificates, in this example ec-crt
TRANSPORT_SECRET: name of the secret containing the broker credentials
EVENTS_BROKER_SECRET: name of the secret containing the events broker credentials, in this example ec-events-broker
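
For instance, the broker deployment could be processed as follows (a sketch; the template file name and the config map name ec-config are illustrative, while the secret names are the ones created earlier):

oc process -f deployments/ec-broker.yml \
    -p DEPLOYMENT_NAME=ec-broker \
    -p IMAGE_NAMESPACE=ec-demo \
    -p IMAGE_STREAM=ec-broker \
    -p IMAGE_VERSION=5.5.0-ubi \
    -p CONFIG=ec-config \
    -p DB_SECRET=ec-db \
    -p CRT_SECRET=ec-crt \
    -p TRANSPORT_SECRET=ec-transport-broker \
    -p EVENTS_BROKER_SECRET=ec-events-broker \
    | oc create -f - -n ec-demo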

There are specialised service templates for the main public cloud providers. The most important parameters are the following (an example follows the list):

SERVICE_NAME: name of the service
DEPLOYMENT_NAME: name of the deployment whose pods should be used by the service
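
For example (a sketch, with an illustrative template file name):

oc process -f services/ec-broker.yml \
    -p SERVICE_NAME=ec-broker \
    -p DEPLOYMENT_NAME=ec-broker \
    | oc create -f - -n ec-demo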

The various templates contain the description of all the possible options in case you need to further customise the installation. The list of the environment variables which can be set for a specific container can be found in the Container Properties page.

Kafka integration

In EC it is possible to add routes to Kafka to forward client messages to an external broker (see Kafka Route). For this purpose, the EC broker will need to be configured with the correct credentials to authenticate itself to Kafka.

The broker currently supports two types of authentication: SASL/PLAIN and SASL/GSSAPI.

SASL/PLAIN

This authentication method uses plain username and password to authenticate with Kafka.

Create a JAAS configuration file containing the following text, replacing the username and password with values valid for your Kafka cluster.

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="ec_kafka_username"
    password="ec_kafka_password";
};

Create a secret containing the created jaas.conf file.

oc create secret generic ec-jaas-conf --from-file=jaas.conf=path_to_local/jaas.conf

Once this is done, modify the broker configuration in OpenShift, adding a volume pointing to the newly created secret. The YAML should look similar to this snippet.

[...]
          volumeMounts:
            - mountPath: /opt/amq/data
              name: ec-broker-volume-1
            - mountPath: /etc/opt/ec/jaas
              name: ec-broker-volume-2
              readOnly: true
[...]
      volumes:
        - emptyDir: {}
          name: ec-broker-volume-1
        - name: ec-broker-volume-2
          secret:
            defaultMode: 420
            items:
              - key: jaas.conf
                path: jaas.conf
            secretName: ec-jaas-conf

Also append the following value to the variable called ACTIVEMQ_OPTS.

-Ddynamic_route.kafka.jaas_configuration_file=/etc/opt/ec/jaas/jaas.conf
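
In the deployment YAML this could look like the following snippet (append the option to any value already present in ACTIVEMQ_OPTS):

          env:
            - name: ACTIVEMQ_OPTS
              value: "-Ddynamic_route.kafka.jaas_configuration_file=/etc/opt/ec/jaas/jaas.conf"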

Be sure to give producer ACLs to the user used to authenticate to Kafka. This user will need permissions to write to all topics matching EC account names.
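
For example, using the kafka-acls tool shipped with Kafka (a sketch; the principal and the topic must match your setup, and older Kafka versions may require the ZooKeeper-based options instead of --bootstrap-server):

bin/kafka-acls.sh --bootstrap-server <kafka-host>:9092 \
    --add --allow-principal User:ec_kafka_username \
    --producer --topic <ec-account-name>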

SASL/GSSAPI

This authentication method uses Kerberos to authenticate with Kafka.

Create a JAAS configuration file containing the following text replacing the principal with the one configured in Kerberos.

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/opt/ec/security/kafka.keytab"
  principal="<principal>@EXAMPLE.COM";
};

Create a secret containing the created jaas.conf file.

oc create secret generic ec-jaas-conf --from-file=jaas.conf=path_to_local/jaas.conf

Create a keytab file containing the credentials for your Kerberos principal and create a ConfigMap containing it. You need OCP/OKD and an oc client of at least version 3.10 to create ConfigMaps from binary files.
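
The keytab can be exported, for example, with kadmin (a sketch; replace the principal with the one used in the JAAS file):

kadmin -q "ktadd -k kafka.keytab <principal>@EXAMPLE.COM"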

oc create configmap ec-keytab --from-file=kafka.keytab

Now create the krb5.conf file containing a configuration matching your Kerberos infrastructure. The file should look something like this:

# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
 default_realm = EXAMPLE.COM
 default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 EXAMPLE.COM = {
  kdc = mykdc.example.com
  admin_server = mykdc.example.com
 }

[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM

Create a configMap containing the krb5.conf file.

oc create configmap ec-krb5-conf --from-file=krb5.conf

Once this is done, modify the broker configuration in OpenShift, adding the volumes pointing to the newly created secrets/ConfigMaps. The YAML should look similar to this snippet.

[...]
          volumeMounts:
            - mountPath: /opt/amq/data
              name: ec-broker-volume-1
            - mountPath: /etc/opt/ec/krb5
              name: ec-broker-volume-2
            - mountPath: /etc/opt/ec/security
              name: ec-broker-volume-3
            - mountPath: /etc/opt/ec/jaas
              name: ec-broker-volume-4
              readOnly: true
[...]
      volumes:
        - emptyDir: {}
          name: ec-broker-volume-1
        - configMap:
            defaultMode: 420
            name: ec-krb5-conf
          name: ec-broker-volume-2
        - configMap:
            defaultMode: 420
            name: ec-keytab
          name: ec-broker-volume-3
        - secret:
            defaultMode: 420
            secretName: ec-jaas-conf
          name: ec-broker-volume-4

Also append the following value to the variable called ACTIVEMQ_OPTS. If it is not present in the list of the deployment environment variables, just create it.

-Ddynamic_route.kafka.jaas_configuration_file=/etc/opt/ec/jaas/jaas.conf -Djava.security.krb5.conf=/etc/opt/ec/krb5/krb5.conf
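
One way to create or update the variable is oc set env, assuming the broker runs as a DeploymentConfig called ec-broker (adjust the resource type and name to your deployment):

oc set env dc/ec-broker ACTIVEMQ_OPTS="-Ddynamic_route.kafka.jaas_configuration_file=/etc/opt/ec/jaas/jaas.conf -Djava.security.krb5.conf=/etc/opt/ec/krb5/krb5.conf" -n ec-demo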

Be sure to give producer ACLs to the user used to authenticate to Kafka. This user will need permissions to write to all topics matching EC account names.

Limitations

Only one JAAS configuration file is allowed in the platform (due to a limitation of the Camel-Kafka client currently used), so it is not possible to have different routes with different login credentials.

Monitoring

Everyware Cloud comes with a set of tools to monitor itself. The broker is instrumented to expose several metrics, and Prometheus is used to retrieve these metrics and to check the status of the services.

The installation is done, as for the other components, via OpenShift template. First of all, a new service account is needed to bring up the pods. This service account requires permission to list the services and pods of the project where Everyware Cloud is deployed.

oc create sa ec-monitoring -n ec-demo
oc adm policy add-role-to-user view -z ec-monitoring -n ec-demo

Then the templates can be loaded and executed. The template will also create basic configuration files as ConfigMaps. The defaults should be fine for most deployments, assuming the names of the EC services and deployments are not customised; otherwise the service names in the configuration ConfigMap should be updated accordingly.

By default there is no persistence for the monitoring pod. However, this can be added by mounting a persistent volume instead of the emptyDir, as in the snippet below. It is also possible to configure Prometheus to send metrics to an external storage or to have an external Prometheus pull metrics from the internal one, as described in the Prometheus documentation.
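
As a sketch, the emptyDir volume of the monitoring deployment could be replaced with a persistent volume claim (keep the volume name used by the template; the claim name ec-monitoring-data is illustrative and the PVC must be created beforehand):

      volumes:
        - name: prometheus-data
          persistentVolumeClaim:
            claimName: ec-monitoring-data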

To expose Prometheus to the outside, either create a route (see the example below) or expose the service via NodePort. Note that, as there is no authentication, it is highly advised not to expose the service to untrusted networks.
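
For example, a route with edge TLS termination could be created as follows (assuming the Prometheus service created by the template is called ec-monitoring):

oc create route edge ec-monitoring --service=ec-monitoring -n ec-demo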

Multiple service instances

It is possible to deploy multiple instances of the same Everyware Cloud service. To do so, just process the service template again, changing the SERVICE_NAME parameter.

One common scenario where it is desirable to have multiple instances of the same service is per-tenant isolation of the message broker (see the messaging section). In this case, the cluster name inside Everyware Cloud will have the same value as the SERVICE_NAME parameter.
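
For example, a second broker service dedicated to a single tenant could be created like this (a sketch; the template file name and the service name are illustrative):

oc process -f services/ec-broker.yml \
    -p SERVICE_NAME=ec-broker-tenant1 \
    -p DEPLOYMENT_NAME=ec-broker \
    | oc create -f - -n ec-demo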

Node affinity

It is possible to force certain applications to run on particular cluster nodes using node affinity. To do so, we first need to add a label to the node and then change the template to force the pods to be run on the nodes matching that label.

The following snippet shows how to add an app label to the node called my-node-id-1. Note that to run this command you need cluster admin privileges.

oc label node my-node-id-1 app=broker

At this point, change the template adding the affinity configuration. In this case, we are forcing the pod to run on nodes whose app label contains the value broker.

[...]
  template:
    metadata:
      labels:
        app: "${EC_SERVICE_NAME}"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - broker
      containers:
      - env:
[...]

More information about node affinity can be found in the official OpenShift documentation.

Routes and Load Balancer Configuration

Broker and VPN use node ports, since plain TCP/TLS is not yet supported by the OpenShift routers. It is recommended to use an external load balancer and to avoid connecting your devices directly to the OpenShift nodes. SSL termination is already managed by these two services, so you won't need to add certificates outside of OpenShift.
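
The node ports to configure on the external load balancer can be retrieved from the services (the service names are illustrative; adjust them to the ones used in your deployment):

oc get svc ec-broker ec-vpn -n ec-demo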

The API and the Console require the setup of OpenShift routes. It is possible to configure SSL termination at route level. You will also need to configure your DNS service, creating records for the routes in OpenShift.
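
For example, a route with edge TLS termination for the Console could be created as follows (a sketch; the host name is an assumption and the certificate files are the ones used earlier for the ec-crt secret):

oc create route edge ec-console --service=ec-console \
    --hostname=console.example.com \
    --cert=cert.pem --key=key.pem --ca-cert=ca.pem -n ec-demo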