Deployment
Kubernetes
PostgreSQL
We recommend running PostgreSQL as a managed cloud service such as Google Cloud SQL, Amazon RDS for PostgreSQL, or Microsoft Azure Database for PostgreSQL, rather than inside the same Kubernetes cluster as the other applications, so that the database is not affected by node failures.
Example Deployment Files
The entire distribution service platform can be deployed in a Kubernetes cluster with only a few commands. We provide YAML deployment files that you can use as a base for your own deployment. Some configuration is better stored as ConfigMaps or Secrets. The example manifests use pod anti-affinity rules so that replicas are spread across different nodes in the cluster; scale the replica counts to your own availability needs. Below you can find the deployment files for a Google Cloud Kubernetes (GKE) environment.
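Assuming the manifests below are saved as separate files (the file names here are illustrative, not prescribed by the platform), a deployment could look like:

```shell
# Create the namespace and backend config first, then the configuration objects,
# then the workloads. The migration jobs must complete before Kong starts.
kubectl apply -f namespace-backend.yaml
kubectl apply -n t1c -f configmap.yaml -f secrets.yaml
kubectl apply -n t1c -f kong-migrations.yaml
kubectl apply -n t1c -f keycloak.yaml -f kong.yaml -f ds.yaml -f rmc.yaml -f ingress.yaml
```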
Namespace & Backend
---
apiVersion: v1
kind: Namespace
metadata:
  name: t1c
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: t1c-backendconfig
spec:
  timeoutSec: 300
  connectionDraining:
    drainingTimeoutSec: 60
Configmaps
apiVersion: v1
kind: ConfigMap
metadata:
  name: t1c-ds-configmap
data:
  KONG_PROXY_LISTEN: "0.0.0.0:8000, 0.0.0.0:8443 ssl http2"
  KONG_ADMIN_LISTEN: "0.0.0.0:8001, 127.0.0.1:8444 ssl"
  KONG_STATUS_LISTEN: "0.0.0.0:8100"
  KONG_DATABASE: "postgres"
  KONG_ADMIN_ACCESS_LOG: "/dev/stdout"
  KONG_ADMIN_ERROR_LOG: "/dev/stderr"
  KONG_LOG_LEVEL: "warn"
  KONG_NGINX_PROXY_LARGE_CLIENT_HEADER_BUFFERS: "4 256k"
  KONG_NGINX_WORKER_PROCESSES: "1"
  KONG_PROXY_ACCESS_LOG: "/dev/stdout"
  KONG_PROXY_ERROR_LOG: "/dev/stderr"
  GCP_DB_CONNECTION_NAME: "t1t-saas-signbox:europe-west1:t1c-ds"
  DS_ALLOWED_HOST: ".t1t.io"
  DS_APP_TOKEN_VALIDITY_SECONDS: "600"
  DS_APPLICATION_ISSUER: "t1cds-app"
  DS_GATEWAY_ADMIN_URL: "http://kong-admin.t1c.svc.cluster.local:8001"
  DS_GATEWAY_BASE_PATH: ""
  DS_GATEWAY_CONSUMER_APPLICATION: "t1cds-app"
  DS_GATEWAY_CONSUMER_REGISTRATION: "t1cds-reg"
  DS_GATEWAY_CONSUMER_USER: "t1cds-user"
  DS_GATEWAY_ENABLED: "true"
  DS_GATEWAY_URL: "https://acc-ds.t1t.io"
  DS_IDP_ISSUER: "https://acc-ds.t1t.io/auth/realms/trust1connector"
  DS_KEYSTORE_PATH: "/mnt/t1cds.p12"
  DS_REG_TOKEN_VALIDITY_SECONDS: "600"
  DS_REGISTRATION_ISSUER: "t1cds-reg"
  DS_SECURITY_ENABLED: "true"
  DS_MAX_PAGE_SIZE: "100"
  INCLUDE_STACKTRACE: "true"
  JAVA_OPTS: "-Xms512m -Xmx1024m -Dpidfile.path=/dev/null -Dconfig.resource=k8s.conf -Dlogger.resource=logback-cloud.xml -Dplay.evolutions.db.default.autoApply=true"
  REQUIRE_GATEWAY_HEADERS: "false"
  T1C_DOMAIN: "t1c.t1t.io"
  T1C_PORT: "51883"
  T1C_DB_URL: "jdbc:postgresql://127.0.0.1:5432/t1c-ds"
  T1C_EVOLUTIONS_AUTO: "true"
  T1C_EVOLUTIONS_AUTO_DOWNS: "false"
  T1C_EVOLUTIONS_ENABLED: "true"
  T1C_EVOLUTIONS_SCHEMA: "public"
  RMC_LABEL: "rmc"
  RMC_URL: "https://acc-rmc.t1t.io"
  TZ: "Europe/Brussels"
  T1C_IMPLICIT_VERSION_CREATION: "true"
  T1C_VERSION_URI_TEMPLATE: "https://storage.googleapis.com/t1c-dependencies-acc/[[OS]]/v[[VERSION]]/Release/trust1team/[[FILENAME]]"
  T1C_VERSION_FILENAME_TEMPLATE_VALUES: |
    {"MACOS":"Trust1Connector-x86.dmg","MACOSARM":"Trust1Connector-arm.dmg","UNIX":"trust1connector.deb","WIN32":"t1c-x86.msi","WIN64":"t1c-x64.msi"}
  T1C_VERSION_OS_TEMPLATE_VALUES: |
    {"MACOS":"mac","MACOSARM":"mac","UNIX":"unix","WIN32":"win","WIN64":"win"}
  KEYCLOAK_HOSTNAME: acc-ds.t1t.io
  PROXY_ADDRESS_FORWARDING: "true"
  DB_VENDOR: POSTGRES
  DB_ADDR: 127.0.0.1
  DB_DATABASE: keycloak
  DB_SCHEMA: public
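The T1C_VERSION_URI_TEMPLATE contains [[OS]], [[VERSION]], and [[FILENAME]] placeholders that are filled in from the two template-value maps. As a quick sketch of the substitution (the real expansion happens inside the DS; the values here are taken from the ConfigMap above, where WIN64 maps to OS "win" and filename "t1c-x64.msi"):

```shell
TEMPLATE='https://storage.googleapis.com/t1c-dependencies-acc/[[OS]]/v[[VERSION]]/Release/trust1team/[[FILENAME]]'

# usage: expand_uri <os> <version> <filename>
expand_uri() {
  echo "$TEMPLATE" \
    | sed -e "s|\[\[OS\]\]|$1|" -e "s|\[\[VERSION\]\]|$2|" -e "s|\[\[FILENAME\]\]|$3|"
}

expand_uri win 3.5.0 t1c-x64.msi
# → https://storage.googleapis.com/t1c-dependencies-acc/win/v3.5.0/Release/trust1team/t1c-x64.msi
```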
Secrets
Sensitive information such as usernames, passwords, and related data should be stored as Secrets. Using Kustomize, you can create Secrets from string literals, which can then be set as environment variables in your deployment specs.
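For example, a kustomization.yaml could generate the Secret below from literals (the values shown are placeholders, not real credentials):

```yaml
# kustomization.yaml — generates the t1c-ds-secrets Secret from string literals
secretGenerator:
  - name: t1c-ds-secrets
    type: Opaque
    literals:
      - DS_KEYSTORE_ALIAS=t1cds
      - DS_KEYSTORE_PASSWORD=change-me
      - PLAY_SECRET=change-me
generatorOptions:
  disableNameSuffixHash: true  # keep the name stable for secretRef references
```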
apiVersion: v1
kind: Secret
metadata:
  name: t1c-ds-secrets
type: Opaque
stringData:
  DS_KEYSTORE_ALIAS: ""
  DS_KEYSTORE_PASSWORD: ""
  PLAY_SECRET: ""
  T1C_DS_DB_USER: ""
  DB_USER: ""
  T1C_DS_DB_PWD: ""
  DB_PASSWORD: ""
  KEYCLOAK_USER: ""
  KEYCLOAK_PASSWORD: ""
  KONG_PG_DATABASE: ""
  KONG_PG_HOST: ""
  KONG_PG_USER: ""
  KONG_PG_PASSWORD: ""
Keycloak IDP
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  namespace: t1c
  labels:
    app: keycloak
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"t1c-backendconfig"}}'
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
  selector:
    app: keycloak
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: t1c
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      restartPolicy: Always
      volumes:
        - configMap:
            name: t1cds-jks
          name: keystore
        - secret:
            secretName: t1cds-svc-account
          name: svc-account
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:14.0.0
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
            - secretRef:
                name: t1c-ds-secrets
          volumeMounts:
            - mountPath: "/mnt"
              name: keystore
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
        - name: cloud-sql-proxy
          # It is recommended to use the latest version of the Cloud SQL proxy.
          # Make sure to update on a regular schedule!
          image: gcr.io/cloudsql-docker/gce-proxy:1.23.1
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
          command: [ "/cloud_sql_proxy" ]
          args: [ "-log_debug_stdout=true", "-verbose=false", "-instances=$(GCP_DB_CONNECTION_NAME)=tcp:5432", "-credential_file=/secrets/service_account.json" ]
          securityContext:
            # The default Cloud SQL proxy image runs as the
            # "nonroot" user and group (uid: 65532) by default.
            runAsNonRoot: true
          volumeMounts:
            - name: svc-account
              mountPath: /secrets/
              readOnly: true
Kong DB Bootstrapping
These jobs should only be run once, when bootstrapping the Kong database.
---
# The pod for this job may show warnings, but once the job has completed
# (the database is bootstrapped) you can delete it.
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-migrations-bootstrap
spec:
  template:
    metadata:
      name: kong-migrations-bootstrap
    spec:
      volumes:
        - name: svc-account
          secret:
            secretName: t1cds-svc-account
      containers:
        - name: kong-migrations
          image: kong:2.5.0-alpine
          command:
            - /bin/sh
            - -c
            - kong migrations bootstrap
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
            - secretRef:
                name: t1c-ds-secrets
        - name: cloud-sql-proxy
          # It is recommended to use the latest version of the Cloud SQL proxy.
          # Make sure to update on a regular schedule!
          image: gcr.io/cloudsql-docker/gce-proxy:1.23.1
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
          command: [ "/cloud_sql_proxy" ]
          args: [ "-log_debug_stdout=true", "-instances=$(GCP_DB_CONNECTION_NAME)=tcp:5432", "-credential_file=/secrets/service_account.json" ]
          securityContext:
            # The default Cloud SQL proxy image runs as the
            # "nonroot" user and group (uid: 65532) by default.
            runAsNonRoot: true
          volumeMounts:
            - name: svc-account
              mountPath: /secrets/
              readOnly: true
      restartPolicy: OnFailure
---
# The pod for this job may show warnings, but once the job has completed
# you can delete it.
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-migrations-up
spec:
  template:
    metadata:
      name: kong-migrations-up
    spec:
      volumes:
        - name: svc-account
          secret:
            secretName: t1cds-svc-account
      containers:
        - name: kong-migrations
          image: kong:2.5.0-alpine
          command:
            - /bin/sh
            - -c
            - kong migrations up
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
            - secretRef:
                name: t1c-ds-secrets
        - name: cloud-sql-proxy
          # It is recommended to use the latest version of the Cloud SQL proxy.
          # Make sure to update on a regular schedule!
          image: gcr.io/cloudsql-docker/gce-proxy:1.23.1
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
          command: [ "/cloud_sql_proxy" ]
          args: [ "-log_debug_stdout=true", "-instances=$(GCP_DB_CONNECTION_NAME)=tcp:5432", "-credential_file=/secrets/service_account.json" ]
          securityContext:
            # The default Cloud SQL proxy image runs as the
            # "nonroot" user and group (uid: 65532) by default.
            runAsNonRoot: true
          volumeMounts:
            - name: svc-account
              mountPath: /secrets/
              readOnly: true
      restartPolicy: OnFailure
---
# The pod for this job may show warnings, but once the job has completed
# you can delete it.
apiVersion: batch/v1
kind: Job
metadata:
  name: kong-migrations-finish
spec:
  template:
    metadata:
      name: kong-migrations-finish
    spec:
      volumes:
        - name: svc-account
          secret:
            secretName: t1cds-svc-account
      containers:
        - name: kong-migrations
          image: kong:2.5.0-alpine
          command:
            - /bin/sh
            - -c
            - kong migrations finish
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
            - secretRef:
                name: t1c-ds-secrets
        - name: cloud-sql-proxy
          # It is recommended to use the latest version of the Cloud SQL proxy.
          # Make sure to update on a regular schedule!
          image: gcr.io/cloudsql-docker/gce-proxy:1.23.1
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
          command: [ "/cloud_sql_proxy" ]
          args: [ "-log_debug_stdout=true", "-instances=$(GCP_DB_CONNECTION_NAME)=tcp:5432", "-credential_file=/secrets/service_account.json" ]
          securityContext:
            # The default Cloud SQL proxy image runs as the
            # "nonroot" user and group (uid: 65532) by default.
            runAsNonRoot: true
          volumeMounts:
            - name: svc-account
              mountPath: /secrets/
              readOnly: true
      restartPolicy: OnFailure
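Once the migration jobs have completed you can verify and remove them; a sketch (run in the namespace the jobs were applied to):

```shell
# Wait for every migration job to report completion, then clean up.
# Note: "--all" targets all jobs in the current namespace.
kubectl wait --for=condition=complete job --all --timeout=300s
kubectl delete job --all
```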
Kong Gateway
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kong-serviceaccount
  namespace: t1c
---
apiVersion: v1
kind: Service
metadata:
  name: kong-service
  namespace: t1c
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"t1c-backendconfig"}}'
spec:
  type: NodePort
  ports:
    - name: proxy
      port: 80
      protocol: TCP
      targetPort: 8000
    - name: proxy-ssl
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    app: kong-gateway
---
apiVersion: v1
kind: Service
metadata:
  name: kong-admin
  namespace: t1c
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"t1c-backendconfig"}}'
spec:
  type: NodePort
  ports:
    - name: admin
      port: 8001
      protocol: TCP
      targetPort: 8001
    - name: admin-ssl
      port: 8444
      targetPort: 8444
      protocol: TCP
  selector:
    app: kong-gateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kong-gateway
  name: kong-gateway
  namespace: t1c
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kong-gateway
  template:
    metadata:
      labels:
        app: kong-gateway
    spec:
      serviceAccountName: kong-serviceaccount
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - kong-gateway
              topologyKey: "kubernetes.io/hostname"
      volumes:
        - name: svc-account
          secret:
            secretName: t1cds-svc-account
      containers:
        - name: kong-gateway
          image: kong:2.5.0-alpine
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
            - secretRef:
                name: t1c-ds-secrets
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - kong quit
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /status
              port: 8100
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - containerPort: 8000
              name: proxy
              protocol: TCP
            - containerPort: 8443
              name: proxy-ssl
              protocol: TCP
            - containerPort: 8100
              name: metrics
              protocol: TCP
            - containerPort: 8001
              name: admin
              protocol: TCP
            - containerPort: 8444
              name: admin-ssl
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /status
              port: 8100
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          securityContext:
            runAsUser: 1000
        - name: cloud-sql-proxy
          # It is recommended to use the latest version of the Cloud SQL proxy.
          # Make sure to update on a regular schedule!
          image: gcr.io/cloudsql-docker/gce-proxy:1.23.1
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
          command: [ "/cloud_sql_proxy" ]
          args: [ "-log_debug_stdout=true", "-instances=$(GCP_DB_CONNECTION_NAME)=tcp:5432", "-credential_file=/secrets/service_account.json" ]
          securityContext:
            # The default Cloud SQL proxy image runs as the
            # "nonroot" user and group (uid: 65532) by default.
            runAsNonRoot: true
          volumeMounts:
            - name: svc-account
              mountPath: /secrets/
              readOnly: true
Distribution Service
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: t1c-ds-service-v3-5
  name: t1c-ds-service-v3-5
  namespace: t1c
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"t1c-backendconfig"}}'
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 9000
  selector:
    app: t1c-ds-v3-5
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: t1c
  name: t1c-ds-v3-5
  labels:
    app: t1c-ds-v3-5
spec:
  replicas: 1
  selector:
    matchLabels:
      app: t1c-ds-v3-5
  template:
    metadata:
      labels:
        app: t1c-ds-v3-5
    spec:
      restartPolicy: Always
      volumes:
        - configMap:
            name: t1cds-keystore
          name: keystore
        - secret:
            secretName: t1cds-svc-account
          name: svc-account
      containers:
        - name: t1c-ds-v3-5
          image: eu.gcr.io/t1t-pre-prod/t1cds:3.5.0-SNAPSHOT
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 9000
          volumeMounts:
            - mountPath: "/mnt"
              name: keystore
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
            - secretRef:
                name: t1c-ds-secrets
          resources:
            requests:
              memory: "600Mi"
          readinessProbe:
            httpGet:
              path: /v3_5/system/ready
              port: http
            periodSeconds: 10
            failureThreshold: 10
            initialDelaySeconds: 20
          livenessProbe:
            httpGet:
              path: /v3_5/system/alive
              port: http
            periodSeconds: 10
            initialDelaySeconds: 20
        - name: cloud-sql-proxy
          # It is recommended to use the latest version of the Cloud SQL proxy.
          # Make sure to update on a regular schedule!
          image: gcr.io/cloudsql-docker/gce-proxy:1.23.1
          envFrom:
            - configMapRef:
                name: t1c-ds-configmap
          command: [ "/cloud_sql_proxy" ]
          args: [ "-log_debug_stdout=true", "-verbose=false", "-instances=$(GCP_DB_CONNECTION_NAME)=tcp:5432", "-credential_file=/secrets/service_account.json" ]
          securityContext:
            # The default Cloud SQL proxy image runs as the
            # "nonroot" user and group (uid: 65532) by default.
            runAsNonRoot: true
          volumeMounts:
            - name: svc-account
              mountPath: /secrets/
              readOnly: true
ReadMyCards
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: read-my-cards-service
  name: read-my-cards-service
  namespace: t1c
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"t1c-backendconfig"}}'
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: read-my-cards
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: read-my-cards
  namespace: t1c
  labels:
    app: read-my-cards
spec:
  replicas: 1
  selector:
    matchLabels:
      app: read-my-cards
  template:
    metadata:
      labels:
        app: read-my-cards
    spec:
      restartPolicy: Always
      containers:
        - name: read-my-cards
          image: eu.gcr.io/t1t-pre-prod/read-my-cards:0.1.3
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8080
          env:
            - name: VUE_APP_T1C_URL
              value: "https://t1c.t1t.io"
            - name: VUE_APP_T1C_PORT
              value: "51883"
          resources:
            requests:
              cpu: 0.2
              memory: "200Mi"
          readinessProbe:
            httpGet:
              path: /index.html
              port: 8080
            periodSeconds: 10
            failureThreshold: 10
            initialDelaySeconds: 20
          livenessProbe:
            httpGet:
              path: /index.html
              port: 8080
            periodSeconds: 10
            initialDelaySeconds: 20
Ingress
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
  name: "trust1connector-lb"
  namespace: t1c
  annotations:
    kubernetes.io/ingress.global-static-ip-name: acc-trust1connector-ip
    ingress.gcp.kubernetes.io/pre-shared-cert: "t1t-io-ssl-2022-02-18"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: kong-service
    servicePort: 80
  rules:
    - host: "acc-ds.t1t.io"
      http:
        paths:
          - path: "/auth"
            backend:
              serviceName: "keycloak"
              servicePort: 80
          - path: "/auth/*"
            backend:
              serviceName: "keycloak"
              servicePort: 80
    - host: "acc-rmc.t1t.io"
      http:
        paths:
          - path: "/*"
            backend:
              serviceName: "read-my-cards-service"
              servicePort: 80
ConfigMaps From Files
The DS keystores can be stored as ConfigMaps in the cluster and mounted as volumes in the pod containers. We require a Java keystore (jks) file to configure the IDP, and a PKCS12 keystore (p12) for the DS API. The contents of both keystores must be identical.
kubectl create configmap t1cds-keystore --from-file=conf/t1cds.p12
kubectl create configmap t1cds-jks --from-file=conf/t1cds.jks
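If you only have one of the two keystores, keytool can convert between the formats so that both contain identical entries (the paths match the commands above; you will be prompted for the store passwords):

```shell
# Convert the JKS keystore into a PKCS12 keystore with identical contents
keytool -importkeystore \
  -srckeystore conf/t1cds.jks -srcstoretype JKS \
  -destkeystore conf/t1cds.p12 -deststoretype PKCS12
```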
GKE Guide
Database
Create a PostgreSQL 12 database instance
When creating the database instance, configure the connectivity and backup options according to your needs. The database instance must be reachable from the Kubernetes cluster.
Create the necessary databases:
t1c-ds
keycloak
kong
Create Kubernetes Cluster
Ubuntu 18.04
PostgreSQL
1) Add PostgreSQL repository:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" |sudo tee /etc/apt/sources.list.d/pgdg.list
2) Install PostgreSQL:
sudo apt update
sudo apt -y install postgresql-12 postgresql-client-12
3) Configure PostgreSQL. The PostgreSQL server should be reachable from the DS API, Kong Gateway, and Keycloak application server(s). We refer you to the documentation: https://www.postgresql.org/docs/12/
4) Create the users and the 3 databases (t1c-ds, kong, keycloak). We recommend creating different users for each database, but the same user can also be used for all databases.
CREATE USER [INSERT_DATASTORE_USERNAME_HERE];
ALTER USER [INSERT_DATASTORE_USERNAME_HERE] PASSWORD '[INSERT_DATASTORE_PASSWORD_HERE]';
CREATE DATABASE [INSERT_DATABASE_NAME_HERE] OWNER [INSERT_DATASTORE_USERNAME_HERE];
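As a concrete example (the user name and password are placeholders), the kong database could be created with psql:

```shell
# Run the statements above as the postgres superuser
sudo -u postgres psql <<'SQL'
CREATE USER kong_user;
ALTER USER kong_user PASSWORD 'replace-me';
CREATE DATABASE kong OWNER kong_user;
SQL
```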
Distribution Service API
1) Obtain the Distribution Service API server distributable. If you wish to build a package from source, run sbt ";clean;compile;dist" from the project root. A zip archive containing the application will be available under the target/universal folder.
2) Install Java:
wget -qO - https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public | sudo apt-key add -
sudo add-apt-repository --yes https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/
# If you get a command not found error run the command below:
# sudo apt-get install -y software-properties-common
sudo apt-get update
sudo apt-get install adoptopenjdk-11-hotspot
3) Unzip to a folder of your choice. We recommend using a subdirectory of the /opt folder.
4) Configure the Distribution Service API. See Configuration for a detailed description of the available options.
5) Create a service. We recommend using systemctl. Create a file called t1cds.service in the /etc/systemd/system/ folder and configure it as follows:
[Unit]
Description=T1C-DS API
After=syslog.target network.target
Before=httpd.service

[Service]
Environment=PLAY_SECRET=nf8dqrQM9_?XUm]JCxKu7Jyo9cMf`Eqh<VmOTlj`QWJAiKDqp?fD3J=zvOm3v9L:
ExecStart=/opt/t1cds/bin/t1c-ds

[Install]
WantedBy=multi-user.target
We strongly recommend placing sensitive information in the service definition as environment variables. See Configuration for a list of configuration keys.
6) Enable and start the service:
chmod 664 /etc/systemd/system/t1cds.service
systemctl enable /etc/systemd/system/t1cds.service
service t1cds start
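You can then verify that the service came up correctly:

```shell
service t1cds status                  # or: systemctl status t1cds
journalctl -u t1cds --no-pager -n 50  # inspect the most recent log lines
```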
Kong Gateway
We refer you to the Kong installation guides for the platform of your choice. The Kong gateway should be configured to run in database mode, and the Admin API must be available on a port accessible only by the DS API.
Keycloak
We refer you to the Keycloak Installation documentation.
Docker Compose
For development and testing purposes we offer a Docker Compose setup to run the platform easily. Note that you must have access to the Trust1Team Docker container registry, or import the DS API image into your own registry. After executing docker-compose up, you must still bootstrap the gateway and configure the IDP keystore.
Bootstrapping Gateway Request
This is an example request to bootstrap the gateway for a Docker Compose deployment following the example docker-compose.yml below.
curl --location --request POST 'http://localhost:4600/v3_5/gateway/bootstrap' \
  --header 'Authorization: Bearer ey...ngA' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "dsServiceName": "t1c-ds",
    "dsServiceHost": "t1c-ds",
    "dsPort": 4600
  }'
Example
version: "3"
services:
  t1c-db:
    image: "postgres:13-alpine"
    container_name: "t1c-db"
    networks:
      - t1c-io
    volumes:
      - "./postgres:/docker-entrypoint-initdb.d"
      - "t1c-data:/var/lib/postgresql/data"
    command: ["-c", "shared_buffers=256MB", "-c", "max_connections=200"]
    ports:
      - 5433:5432
    environment:
      TZ: "Europe/Brussels"
      PGTZ: "Europe/Brussels"
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
  t1c-idp:
    image: "jboss/keycloak:latest"
    container_name: "t1c-idp"
    networks:
      - t1c-io
    command:
      - "-Dkeycloak.profile.feature.upload_scripts=enabled"
      - "-Dkeycloak.profile.feature.token_exchange=enabled"
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: t1c-db
      DB_DATABASE: keycloak
      DB_USER: postgres
      DB_SCHEMA: public
      DB_PASSWORD: postgres
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      TZ: "Europe/Brussels"
    volumes:
      - ./conf/t1cds.jks:/mnt/t1cds.jks
    ports:
      - 9999:8080
    depends_on:
      - t1c-db
  t1c-gtw-migration:
    image: "kong:2.5.0-alpine"
    container_name: "t1c-gtw-migration"
    command: kong migrations bootstrap
    depends_on:
      - t1c-db
    environment:
      KONG_DATABASE: postgres
      KONG_PG_DATABASE: kong
      KONG_PG_HOST: t1c-db
      KONG_PG_USER: postgres
      KONG_PG_PASSWORD: postgres
      TZ: "Europe/Brussels"
    networks:
      - t1c-io
    restart: on-failure
    deploy:
      restart_policy:
        condition: on-failure
  t1c-gtw-migrations-up:
    image: "kong:2.5.0-alpine"
    container_name: "t1c-gtw-migrations-up"
    # "&&" requires a shell; a plain string command is not run through one
    command: /bin/sh -c "kong migrations up && kong migrations finish"
    depends_on:
      - t1c-db
    environment:
      KONG_DATABASE: postgres
      KONG_PG_DATABASE: kong
      KONG_PG_HOST: t1c-db
      KONG_PG_USER: postgres
      KONG_PG_PASSWORD: postgres
      TZ: "Europe/Brussels"
    networks:
      - t1c-io
    restart: on-failure
    deploy:
      restart_policy:
        condition: on-failure
  t1c-gtw:
    image: "kong:2.5.0-alpine"
    container_name: "t1c-gtw"
    depends_on:
      - t1c-db
    environment:
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: 0.0.0.0:8001
      KONG_PROXY_LISTEN: 0.0.0.0:8000
      KONG_DATABASE: postgres
      KONG_PG_DATABASE: kong
      KONG_PG_HOST: t1c-db
      KONG_PG_PASSWORD: postgres
      KONG_PG_USER: postgres
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      TZ: "Europe/Brussels"
    networks:
      - t1c-io
    ports:
      - "8000:8000/tcp"
      - "8001:8001/tcp"
    healthcheck:
      test: ["CMD", "kong", "health"]
      interval: 10s
      timeout: 10s
      retries: 10
    restart: always
  t1c-ds:
    image: "eu.gcr.io/t1t-pre-prod/t1cds:latest"
    container_name: "t1c-ds"
    environment:
      JAVA_OPTS: "-Dconfig.resource=k8s.conf -Dlogger.resource=logback-docker.xml -Dplay.evolutions.db.default.autoApply=true"
      DS_ALLOWED_HOST: ".t1t.io"
      DS_APP_TOKEN_VALIDITY_SECONDS: 600
      DS_GATEWAY_ADMIN_URL: "http://t1c-gtw:8001"
      DS_GATEWAY_CONSUMER_REGISTRATION: "t1cds-reg"
      DS_GATEWAY_CONSUMER_APPLICATION: "t1cds-app"
      DS_GATEWAY_CONSUMER_USER: "t1cds-user"
      DS_GATEWAY_BASE_PATH: ""
      DS_GATEWAY_ENABLED: "true"
      DS_GATEWAY_URL: "http://localhost:8000"
      DS_IDP_ISSUER: "http://localhost:9999/auth/realms/trust1connector"
      DS_KEYSTORE_PATH: "/mnt/t1cds.p12"
      DS_KEYSTORE_PASSWORD: "password"
      DS_KEYSTORE_ALIAS: "t1cds"
      DS_REG_TOKEN_VALIDITY_SECONDS: 600
      DS_SECURITY_ENABLED: "true"
      DS_MAX_PAGE_SIZE: 100
      INCLUDE_STACKTRACE: "true"
      PLAY_SECRET: "superdupersecret"
      REQUIRE_GATEWAY_HEADERS: "false"
      T1C_DOMAIN: "t1c.t1t.io"
      T1C_PORT: "51983"
      T1C_EVOLUTIONS_ENABLED: "true"
      T1C_EVOLUTIONS_AUTO: "true"
      T1C_EVOLUTIONS_AUTO_DOWNS: "true"
      T1C_DB_URL: "jdbc:postgresql://t1c-db:5432/t1c-ds"
      T1C_DS_DB_USER: "postgres"
      T1C_DS_DB_PWD: "postgres"
      T1C_EVOLUTIONS_SCHEMA: "public"
      T1C_IMPLICIT_VERSION_CREATION: "true"
      T1C_VERSION_URI_TEMPLATE: "https://storage.googleapis.com/t1c-dependencies-dev/[[OS]]/v[[VERSION]]/Release/trust1team/[[FILENAME]]"
      T1C_VERSION_FILENAME_TEMPLATE_VALUES: |
        {"MACOS":"Trust1Connector-x86.dmg","MACOSARM":"Trust1Connector-arm.dmg","UNIX":"trust1connector.deb","WIN32":"t1c-x86.msi","WIN64":"t1c-x64.msi"}
      T1C_VERSION_OS_TEMPLATE_VALUES: |
        {"MACOS":"mac","MACOSARM":"mac","UNIX":"unix","WIN32":"win","WIN64":"win"}
      RMC_LABEL: "rmc"
      TZ: "Europe/Brussels"
    volumes:
      - ./conf/t1cds.p12:/mnt/t1cds.p12
    networks:
      - t1c-io
    ports:
      - 4600:9000
    depends_on:
      - t1c-db
networks:
  t1c-io:
    driver: bridge
volumes:
  t1c-data:
    driver: local
You can run Docker Compose in detached mode via the command:
$ docker compose up -d
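A few useful follow-up commands (a sketch, using the service names from the example above):

```shell
docker compose ps              # show container status and health
docker compose logs -f t1c-ds  # follow the DS API logs
docker compose down            # stop the stack; named volumes are preserved
```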