
Integration with VM (Legacy) Services

Pain Point

When adopting Kubernetes as a platform and shifting your architecture to Docker containers, there is a critical need to manage and secure the communication between services, including integration with VM (legacy) services.

Integrating VM (legacy) services with Docker containers is complex. If your container-based application needs to call a number of VM (legacy) services running behind a firewall, each service requires extensive manual and error-prone configuration.
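For illustration, one piece of that per-service manual work is a hand-written Istio ServiceEntry that makes the VM-hosted service addressable from the mesh, which must then be applied to the cluster and kept in sync with the VM. This is a sketch only; the service name, host, port, and IP address below are hypothetical placeholders.

```shell
# Sketch of the cluster-side manual step for ONE VM service: an Istio
# ServiceEntry manifest. All names, the host, and the IP are hypothetical.
SE_MANIFEST=$(cat <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vm-product-catalog
  namespace: default
spec:
  hosts:
  - productcatalog.internal
  location: MESH_INTERNAL
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.128.0.5
EOF
)
echo "$SE_MANIFEST"   # apply with: echo "$SE_MANIFEST" | kubectl apply -f -
```

Each VM service needs its own entry like this, on top of firewall rules, certificates, and DNS changes, which is exactly the manual effort the rest of this page deals with.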

The following script fetches configuration data from the Kubernetes cluster and applies the configuration to the VM:

Sample Script

#!/usr/bin/env bash
set -euo pipefail
log() { echo "$1" >&2; }

# NOTE - if you are on a mac and you don't have gsed, uncomment this line:
#  brew install gnu-sed

# vars
PROJECT_ID="${PROJECT_ID:?PROJECT_ID env variable must be specified}"
CLUSTER_NAME="${CLUSTER_NAME:?CLUSTER_NAME env variable must be specified}"
ZONE="${ZONE:?ZONE env variable must be specified}"
CTX="${CTX:?CTX (kubectl context name) env variable must be specified}"
export VM_NAME="istio-gce"
export SERVICE_NAMESPACE="default" # put VM-based ProductCatalog's istio services in the GKE default ns

# configure cluster context
gcloud config set project "$PROJECT_ID"
gcloud container clusters get-credentials "$CLUSTER_NAME" --zone "$ZONE"
kubectl config use-context "$CTX"

# generate cluster.env from the GKE cluster
GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

ISTIO_SERVICE_CIDR=$(gcloud container clusters describe ${CLUSTER_NAME?} \
                       --zone ${ZONE?} --project ${PROJECT_ID?} \
                       --format "value(servicesIpv4Cidr)")

log "istio CIDR is: ${ISTIO_SERVICE_CIDR}"

echo "ISTIO_SERVICE_CIDR=${ISTIO_SERVICE_CIDR}" > scripts/cluster.env
echo "ISTIO_INBOUND_PORTS=8080" >> scripts/cluster.env

# Get istio control plane certs
kubectl -n ${SERVICE_NAMESPACE?} get secret istio.default \
  -o jsonpath='{.data.root-cert\.pem}' | base64 --decode | tee scripts/root-cert.pem
kubectl -n ${SERVICE_NAMESPACE?} get secret istio.default \
  -o jsonpath='{.data.key\.pem}' | base64 --decode | tee scripts/key.pem
kubectl -n ${SERVICE_NAMESPACE?} get secret istio.default \
  -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode | tee scripts/cert-chain.pem

# Populate the 7-configure-mesh script with the GWIP IP by replacing its
# GWIP placeholder (this script is sent to the VM, to run there.)
gsed -r -i "s|GWIP|${GWIP}|g" scripts/7-configure-mesh.sh

# scp certs, env file, and script to the GCE instance
log "sending cluster.env, certs, and script to VM..."
# scp everything over to the VM
gcloud compute --project "${PROJECT_ID?}" scp --zone "${ZONE?}" \
  scripts/7-configure-mesh.sh scripts/cluster.env scripts/*.pem "${VM_NAME?}":
log "...done."


# ----- The commands below run on the VM, not on your workstation -----
# (where GWIP refers to your GKE cluster's Istio IngressGateway IP address)

# setup -- install docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce

# update /etc/hosts for DNS resolution
echo -e "\n$GWIP istio-citadel istio-pilot istio-pilot.istio-system" | \
   sudo tee -a /etc/hosts

# install + run the istio remote - version ${ISTIO_VERSION}
ISTIO_VERSION="${ISTIO_VERSION:?ISTIO_VERSION env variable must be specified}"
curl -L "https://storage.googleapis.com/istio-release/releases/${ISTIO_VERSION}/deb/istio-sidecar.deb" > istio-sidecar.deb

sudo dpkg -i istio-sidecar.deb

ls -ld /var/lib/istio

sudo mkdir -p /etc/certs
sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
sudo cp cluster.env /var/lib/istio/envoy
sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy

ls -l /var/lib/istio/envoy/envoy_bootstrap_tmpl.json
ls -l /var/lib/istio/envoy/sidecar.env
sudo systemctl start istio
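After starting the service, a quick sanity check on the VM can confirm the sidecar came up. This is a sketch only: 15000 is Envoy's default admin port and may differ if your sidecar.env overrides it.

```shell
# Post-install sanity check for the VM's Istio sidecar (a sketch; assumes
# Envoy's default admin port 15000 -- adjust if sidecar.env overrides it).
check_istio_sidecar() {
  systemctl is-active --quiet istio \
    || { echo "istio service not active"; return 1; }
  pgrep -f envoy >/dev/null \
    || { echo "no envoy process found"; return 1; }
  curl -fsS localhost:15000/server_info >/dev/null \
    || { echo "envoy admin endpoint unreachable"; return 1; }
  echo "istio sidecar looks healthy"
}
```

Run `check_istio_sidecar` on the VM; any failing step prints which part of the sidecar did not come up.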

How CloudPlex addresses your pain

CloudPlex offers a VM (legacy) service for which a developer only needs to provide the service information; all the configuration required to integrate VM (legacy) services with containers is generated automatically by the platform. As shown below, this visual process is simple, easy, and fast.