Installation
Project Configuration
The first step is to create a new project. In this documentation, we will use the project name ec-demo.
oc new-project ec-demo --display-name='Everyware Cloud Demo'
Once the project is created, we can configure the extra service account and Security Context Constraints (SCC). This step is required only if you are deploying the VPN service, since the VPN container needs a privileged SCC.
First we use the provided template to create a new SCC for the VPN, allowing the container to run as privileged, with the NET_ADMIN capability, and as the root user.
oc create -f scc-vpn.yml
Then we can create a new service account and associate it to the SCC.
oc create sa ec-vpn-privileged -n ec-demo
oc adm policy add-scc-to-user ec-vpn-privileged -z ec-vpn-privileged -n ec-demo
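The -z flag is shorthand for the fully qualified service-account user name that ends up in the SCC's users list. A small bash sketch of that mapping (the system:serviceaccount prefix is the standard Kubernetes convention):

```shell
# Build the fully qualified user name that "oc adm policy add-scc-to-user -z" expands to.
sa_user() {
  local namespace="$1" sa="$2"
  echo "system:serviceaccount:${namespace}:${sa}"
}

sa_user ec-demo ec-vpn-privileged
# → system:serviceaccount:ec-demo:ec-vpn-privileged
```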
For older versions of OpenShift you may need to use oadm instead of oc adm.
Docker Images
First, download the images from the Eurotech Docker registry and import them into the OpenShift local registry.
Change IMAGE_TAG to the desired version of Everyware Cloud.
IMAGE_TAG="5.2.2"
COMPONENTS=("ec-api" "ec-broker" "ec-console" "ec-events-broker" "ec-vpn")
docker login registry.everyware-cloud.com
for COMPONENT in "${COMPONENTS[@]}"; do
    docker pull "registry.everyware-cloud.com/eurotech/${COMPONENT}:${IMAGE_TAG}"
done
The login credentials for the Docker registry are provided by Eurotech.
Once the images are downloaded they need to be tagged and pushed to the registry. The following commands will tag and push the container images. Note that we are using the project name ec-demo as part of the registry URL. If you are running these operations directly on an OpenShift node you can probably use docker-registry.default.svc.cluster.local:5000 as your DOCKER_REGISTRY.
IMAGE_TAG="5.2.2"
DOCKER_REGISTRY="docker-registry.default.svc.cluster.local:5000"
COMPONENTS=("ec-api" "ec-broker" "ec-console" "ec-events-broker" "ec-vpn")
OPENSHIFT_PROJECT="$(oc project -q)"
docker login -u $(oc whoami) -p $(oc whoami -t) "${DOCKER_REGISTRY}"
for COMPONENT in "${COMPONENTS[@]}"; do
    docker tag "registry.everyware-cloud.com/eurotech/${COMPONENT}:${IMAGE_TAG}" "${DOCKER_REGISTRY}/${OPENSHIFT_PROJECT}/${COMPONENT}:${IMAGE_TAG}"
    docker push "${DOCKER_REGISTRY}/${OPENSHIFT_PROJECT}/${COMPONENT}:${IMAGE_TAG}"
done
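The loop above derives the in-cluster reference by swapping the registry host for the internal one and inserting the project name. A minimal helper showing the naming scheme, using the values from this example:

```shell
# Compose the in-cluster image reference: <registry>/<project>/<component>:<tag>
target_image() {
  local registry="$1" project="$2" component="$3" tag="$4"
  echo "${registry}/${project}/${component}:${tag}"
}

target_image docker-registry.default.svc.cluster.local:5000 ec-demo ec-api 5.2.2
# → docker-registry.default.svc.cluster.local:5000/ec-demo/ec-api:5.2.2
```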
Secrets
Before deploying Everyware Cloud it's advised to configure a few secrets that will be used by the containers. In particular, EC uses secrets for both the database credentials and the certificates.
First, configure the secrets for the database using the command shown below. Please change the values according to your own environment:
oc create secret generic ec-db --from-literal=username=ecdbuser --from-literal=password=ecdbpass -n ec-demo
Then create a secret containing the certificate, the private key and the CA chain. This command can be run multiple times if you are not using wildcard certificates or you want to have separate certificates for each service.
oc create secret generic ec-crt --from-file=crt=cert.pem --from-file=key=key.pem --from-file=ca=ca.pem -n ec-demo
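If you are not using a wildcard certificate, one secret per service can be created in a loop. A sketch (the commands are printed with echo for review; drop the echo to execute, and note that the secret and file names here are examples, not fixed EC names):

```shell
# Hypothetical per-service certificate secrets; file names are placeholders.
for SERVICE in api broker console; do
  echo oc create secret generic "ec-crt-${SERVICE}" \
    --from-file=crt="${SERVICE}-cert.pem" \
    --from-file=key="${SERVICE}-key.pem" \
    --from-file=ca=ca.pem -n ec-demo
done
```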
Configure the secrets for the events broker. These are used for the internal communication between the EC components and the events broker itself.
oc create secret generic ec-events-broker --from-literal=username=ec-user --from-literal=password=ec-pass -n ec-demo
Lastly create the secrets for the transport connections between the various services and the message broker.
oc create secret generic ec-transport-api --from-literal=username=ec-sys --from-literal=password=ec-pass -n ec-demo
oc create secret generic ec-transport-console --from-literal=username=ec-sys --from-literal=password=ec-pass -n ec-demo
oc create secret generic ec-transport-broker --from-literal=username=ec-sys --from-literal=password=ec-pass -n ec-demo
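The three commands above differ only in the secret name, so they can also be generated in a loop; a sketch printed with echo for review (drop the echo to execute):

```shell
# Same credentials for each transport secret, as in the example above.
for SUFFIX in api console broker; do
  echo oc create secret generic "ec-transport-${SUFFIX}" \
    --from-literal=username=ec-sys --from-literal=password=ec-pass -n ec-demo
done
```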
Template Installation
The installation of Everyware Cloud on OpenShift is just a matter of importing the various templates and processing them.
First, install the configs.yml template. This contains the common configurations used by the various services, such as the database and storage connections. These are some of the parameters you must configure:
STORAGE_HOST | host address where the storage can be found
DB_HOST | host address where the DB can be found
EVENTS_BROKER_HOST | host address where the events broker can be found; this can be the internal service name (ec-events-broker.ec-demo.svc.cluster.local)
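Assuming the parameter names above, processing the template could look like the sketch below. The host values are placeholders for your environment, and the command is printed with echo for review; for a real run, drop the echo and pipe the output to oc create -f -.

```shell
# Placeholder host values; replace with your environment's endpoints.
echo oc process -f configs.yml \
  -p STORAGE_HOST=storage.example.com \
  -p DB_HOST=db.example.com \
  -p EVENTS_BROKER_HOST=ec-events-broker.ec-demo.svc.cluster.local
```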
Then you can go on and configure the rest of the templates. The order in which they are installed is not important.
SERVICE_NAME | value used to generate OpenShift resource names (e.g. names of the services and deployments)
IMAGE_VERSION | tag of the image, matching the one used when importing into the local registry (e.g. 5.2.2)
NAMESPACE | project name, in this example ec-demo
CONFIG | name of the config map created in the previous step
DB_SECRET | name of the secret containing the DB credentials, in this example ec-db
CRT_SECRET | name of the secret containing the certificates, in this example ec-crt
TRANSPORT_SECRET | name of the secret containing the broker credentials
EVENTS_BROKER_SECRET | name of the secret containing the events broker credentials, in this example ec-events-broker
The various templates contain the description of all the possible options in case you need to further customise the installation.
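As a sketch, processing one of the service templates with the parameters above could look like this. The template file name ec-api.yml and the config map name ec-config are assumptions, not fixed EC names; the command is printed with echo for review (drop the echo and pipe to oc create -f - to apply):

```shell
# Hypothetical template file and config map names; other values follow this example's setup.
echo oc process -f ec-api.yml \
  -p SERVICE_NAME=ec-api \
  -p IMAGE_VERSION=5.2.2 \
  -p NAMESPACE=ec-demo \
  -p CONFIG=ec-config \
  -p DB_SECRET=ec-db \
  -p CRT_SECRET=ec-crt \
  -p TRANSPORT_SECRET=ec-transport-api \
  -p EVENTS_BROKER_SECRET=ec-events-broker
```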
Kafka integration
In EC it is possible to add routes to Kafka to forward client messages to an external broker (see Kafka Route). For this purpose, the EC broker will need to be configured with the correct credentials to authenticate itself to Kafka.
The broker currently supports two types of authentication: SASL/PLAIN and SASL/GSSAPI.
SASL/PLAIN
This authentication method uses plain username and password to authenticate with Kafka.
Create a JAAS configuration file containing the following text replacing username and password with values valid for your Kafka.
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="ec_kafka_username"
    password="ec_kafka_password";
};
Create a secret containing the created jaas.conf file.
oc create secret generic ec-jaas-conf --from-file=jaas.conf=path_to_local/jaas.conf
Once this is done, modify the broker configuration in OpenShift, adding a volume pointing to the newly created secret. The YAML should look similar to this snippet.
[...]
        volumeMounts:
        - mountPath: /opt/amq/data
          name: ec-broker-volume-1
        - mountPath: /etc/opt/ec/jaas
          name: ec-broker-volume-2
          readOnly: true
[...]
      volumes:
      - emptyDir: {}
        name: ec-broker-volume-1
      - name: ec-broker-volume-2
        secret:
          defaultMode: 420
          items:
          - key: jaas.conf
            path: jaas.conf
          secretName: ec-jaas-conf
Also append the following value to the variable called ${ACTIVEMQ_OPTS}.
-Ddynamic_route.kafka.jaas_configuration_file=/etc/opt/ec/jaas/jaas.conf
Be sure to give producer ACLs to the user used to authenticate to Kafka. This user will need permissions to write to all topics matching EC account names.
SASL/GSSAPI
This authentication method uses Kerberos to authenticate with Kafka.
Create a JAAS configuration file containing the following text replacing the principal with the one configured in Kerberos.
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/opt/ec/security/kafka.keytab"
    principal="[email protected]";
};
Create a secret containing the created jaas.conf file.
oc create secret generic ec-jaas-conf --from-file=jaas.conf=path_to_local/jaas.conf
Create a keytab file containing the credentials for your Kerberos principal and create a configMap containing it. You need OCP/OKD and an oc client of at least version 3.10 to create binary-file configMaps.
oc create configmap ec-keytab --from-file=kafka.keytab
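The keytab itself is exported from your KDC; with MIT Kerberos this is typically done via kadmin's ktadd command. A sketch, where the principal and admin credentials are placeholders and the command is printed with echo for review:

```shell
# Placeholder principal and admin account; requires MIT Kerberos client tools and KDC access.
echo kadmin -p admin/[email protected] -q "ktadd -k kafka.keytab [email protected]"
```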
Now create the krb5.conf file containing a configuration matching your Kerberos infrastructure. The file should look something like this:
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = EXAMPLE.COM
 default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 EXAMPLE.COM = {
  kdc = mykdc.example.com
  admin_server = mykdc.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
Create a configMap containing the krb5.conf file.
oc create configmap ec-krb5-conf --from-file=krb5.conf
Once this is done, modify the broker configuration in OpenShift, adding the volumes pointing to the newly created secrets/configMaps. The YAML should look similar to this snippet.
[...]
        volumeMounts:
        - mountPath: /opt/amq/data
          name: ec-broker-volume-1
        - mountPath: /etc/opt/ec/krb5
          name: ec-broker-volume-2
        - mountPath: /etc/opt/ec/security
          name: ec-broker-volume-3
        - mountPath: /etc/opt/ec/jaas
          name: ec-broker-volume-4
          readOnly: true
[...]
      volumes:
      - emptyDir: {}
        name: ec-broker-volume-1
      - configMap:
          defaultMode: 420
          name: ec-krb5-conf
        name: ec-broker-volume-2
      - configMap:
          defaultMode: 420
          name: ec-keytab
        name: ec-broker-volume-3
      - secret:
          defaultMode: 420
          secretName: ec-jaas-conf
        name: ec-broker-volume-4
Also append the following value to the variable called ${ACTIVEMQ_OPTS}.
-Ddynamic_route.kafka.jaas_configuration_file=/etc/opt/ec/jaas/jaas.conf -Djava.security.krb5.conf=/etc/opt/ec/krb5/krb5.conf
Be sure to give producer ACLs to the user used to authenticate to Kafka. This user will need permissions to write to all topics matching EC account names.
Limitations
Only one JAAS configuration file is allowed in the platform (due to a limitation of the current Camel-Kafka client used), so it's not possible to have different routes with different login credentials.
Multiple service instances
It is possible to deploy multiple instances of the same Everyware Cloud service. To do so, just process the service template again, changing the SERVICE_NAME parameter.
One common scenario where it's desirable to have multiple instances of the same service is per-tenant message broker isolation (see the messaging section). In this case the cluster name inside Everyware Cloud will have the same value as the SERVICE_NAME parameter.
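For example, two broker instances for two tenants could be created by processing the broker template twice. The template file name broker.yml and the service names are assumptions; the commands are printed with echo for review (drop the echo and pipe to oc create -f - to apply):

```shell
# Hypothetical template file and per-tenant service names.
for NAME in ec-broker-tenant-a ec-broker-tenant-b; do
  echo oc process -f broker.yml -p SERVICE_NAME="${NAME}" -p NAMESPACE=ec-demo
done
```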
Node affinity
It is possible to force certain applications to run on particular cluster nodes using node affinity. To do so, we first need to add a label to the node and then change the template to force the pods to run on the nodes matching that label.
The following snippet shows how to add an app label to the node called my-node-id-1. Note that to run this command you need cluster admin privileges.
oc label node my-node-id-1 app=broker
At this point, change the template adding the affinity configuration. In this case, we are forcing the pod to run on nodes whose app label contains the value broker.
[...]
  template:
    metadata:
      labels:
        app: "${EC_SERVICE_NAME}"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: app
                operator: In
                values:
                - broker
      containers:
      - env:
[...]
More information about node affinity can be found in the official OpenShift documentation.
Routes and Load Balancer Configuration
Broker and VPN use node ports, since plain TCP/TLS is not yet supported by the OpenShift routers. It is recommended to use an external load balancer and to avoid connecting your devices directly to the OpenShift nodes. SSL termination is already managed by these two services, so you won't need to add certificates outside of OpenShift.
The API and the Console require the setup of OpenShift routes. It is possible to configure SSL termination at the route level. You will also need to configure your DNS service, creating records for the routes in OpenShift.
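A route with SSL termination at the route level can be created with oc create route. A sketch for the Console, where the service name, hostname and certificate files are assumptions for this example; the command is printed with echo for review (drop the echo to execute):

```shell
# Hypothetical service/hostname; "edge" terminates TLS at the OpenShift router.
echo oc create route edge ec-console \
  --service=ec-console \
  --hostname=console.example.com \
  --cert=cert.pem --key=key.pem --ca-cert=ca.pem -n ec-demo
```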