
Working with WSO2 API Manager in the CLOUD


WSO2 has been on the market since 2005 and was one of the first open source solutions available for API management. It provides a software suite based on “Carbon” that allows the implementation of a complete Service Oriented Architecture (SOA). The company has been growing steadily and is seen as a visionary by Gartner when it comes to full lifecycle API management.

GFI has many years of experience working with SOA architectures and using WSO2 products to implement them. We have been a strategic partner since 2012 and have experience designing, implementing and maintaining WSO2 environments.

 

[Figure: POC WSO2AM & AWS 1]

 

Following the SOA architecture, the first WSO2 product would be the Enterprise Integrator, which is used as the Enterprise Service Bus (ESB) to transport data between the different internal data providers such as databases or services. B2B connectors are provided as well, so data exchange with third parties is also possible.

On top of that sits the API Manager, which can expose services based on microservices or APIs provided by the ESB. These services can be grouped or translated to be used by different frontends such as mobile clients. More importantly, they can be presented to developers via a portal so new applications can be built both internally and externally.

In order to secure the traffic between the different environments and authenticate users, the Identity Server comes into play. It provides different authentication methods such as OAuth2, JWT and OpenID. It can also be used to implement federated authentication, for which different connectors are provided. This would, for example, make it possible for users with a Google account to access APIs provided by the API Manager.

Last but not least, all the analytics data from the different WSO2 components can be sent to the Analytics engine, which can do real-time analysis and reporting on the performance and usage of the connected systems.

 

Objective

Nowadays most applications can be run in the cloud and WSO2 is no exception. However, running an application in the cloud is only useful if there is added value, for example rapid scalability and configuration updates. Luckily the WSO2 platform has been created in a highly scalable form that is compatible with most major cloud providers.

Although WSO2 also provides its own cloud solution, many will opt for other cloud providers, often because they already have a preferred one. In this document we have chosen a configuration with the most commonly used cloud solutions, Google and Amazon, but many other providers can be used for a similar setup.

In order to deploy on AWS or Google Cloud, the preferred way is to use Kubernetes/Docker. WSO2 provides predefined Docker images that can be deployed with scripts they provide. Using this setup, a clustered, easily expandable infrastructure configuration can be created. For example, a typical WSO2 API management cluster would be set up as presented below.

 

[Figure: POC WSO2AM & AWS 2]

 

In the following document the architecture and technology behind such a cluster will be explained, together with the advantages and disadvantages, so the proper choices can be made. All the software proposed in this document is open source.

 

 

POC Scenario

In this scenario a cluster will be created based on two technologies. The first one is Docker, which is used to create a minimal container for a specific application, optimized for usage in virtualized environments. Docker has become one of the main players in the virtualization market because of its agility and ease of use. On top of that, like WSO2, it is open source.

 

[Figure: POC WSO2AM & AWS 3]

 

The container is placed in a Kubernetes cluster, which is the second technology, used to create a virtualized environment that is highly scalable. Kubernetes is also open source.

In general, a Kubernetes cluster consists of several cluster nodes containing containerized applications (usually Docker containers). The cluster is controlled via the master, from which, for example, new Docker images are deployed to the nodes. The standard cluster setup is pictured below:

 

[Figure: POC WSO2AM & AWS 4]

 

WSO2 provides different prearranged cluster configurations for Kubernetes clusters via their GitHub site. This makes the task of selecting the optimal architecture and installing it via predefined Kubernetes/Docker container images easier. However, the disadvantage is that the predefined setup is based on a slightly older version of WSO2AM (version 2.1.0). On top of that, the predefined architectures might not necessarily fit your needs.

For this POC we will use a simple scenario with the installation of only one node; the intention is to explain how a basic deployment works technically. The installation is based on a predefined image (created especially for this POC). The image is based on the latest WSO2AM version at the time of writing (version 2.2.0).

These basics are of course vital to understanding more complicated deployments where more cluster nodes are installed. The plan is to expand on this in later related documentation.

 

Networking

When creating a cluster, it is important to think about the network configuration. This means you need to think about which ports the containers will expose and how the outside world is able to connect. In the Kubernetes environment, groups of functionality are presented as services. Typically, services and pods have IP addresses that can only be routed internally (private IP ranges). Below you will find an example of how services relate to pods and nodes.

 

[Figure: POC WSO2AM & AWS 5]

 

With WSO2, for example, the port used for configuring is usually 9443. While you would generally allow developers to connect to it, you might not want to expose it to the outside world. In order to guarantee the security of sensitive components, you can opt to use a proxy which only connects the local developer machine to a certain port on the cluster.
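One way to arrange such restricted access, assuming the “wso2am” namespace used later in this tutorial, is kubectl’s built-in port forwarding. The pod name below is a placeholder; look up the real one first:

```shell
# List the pods in the wso2am namespace to find the pod name
kubectl get pods --namespace=wso2am

# Forward the WSO2 management port (9443) from the pod to the local machine;
# <pod name> is whatever the previous command printed
kubectl port-forward <pod name> 9443:9443 --namespace=wso2am
```

The management console is then reachable from the developer machine only, at https://localhost:9443/carbon, without exposing the port to the outside world.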

Besides thinking of how to configure internal access, you might want to expose your API via a load-balanced solution. Kubernetes offers two solutions for allowing inbound traffic to connect to cluster services: Ingress and LoadBalancer. Ingress is usually a free or cheap option within the cloud but has its limits (it can only expose ports 80 and 443). Ingress can be used based on relative paths linked to services, or hostnames linked to services.
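As a sketch, a hostname-based Ingress for the API manager could look like the following. The host and service name are assumptions, not part of the predefined WSO2 artifacts, and the backend port must be one of the ports the service actually exposes:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wso2am-ingress
  namespace: wso2am
spec:
  rules:
  - host: api.example.com       # hostname linked to the service
    http:
      paths:
      - path: /                 # relative path linked to the service
        backend:
          serviceName: wso2am-service   # assumed service name
          servicePort: 9443
```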

 

Security

In a cloud setup, security is an important component. The WSO2AM installation comes with a standard setting of OAuth2/OpenID for exposed APIs, which is used in this POC. On top of that, WSO2 can be configured to use different authentication methods for connecting to the administrative frontends.

Below you will find the standard configuration screen where OAuth2 and the different “grant” types can be selected.
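As an illustration of the OAuth2 flow, once an application has been subscribed in the API store and a consumer key/secret pair generated, a token can be requested from the gateway’s token endpoint. The host, port and credentials below are assumptions based on a default WSO2AM installation:

```shell
# Request an access token using the "password" grant type.
# Replace consumerKey/consumerSecret and the user credentials with real values.
curl -k -X POST https://localhost:8243/token \
  -H "Authorization: Basic $(echo -n 'consumerKey:consumerSecret' | base64)" \
  -d "grant_type=password&username=admin&password=admin"
```

The response is a JSON document containing the access token and its expiry time.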

 

[Figure: POC WSO2AM & AWS 6]

 

In case different security scenarios are needed, a combination of API Manager and Identity Server can be installed. For more advanced cluster environments this is certainly advisable. Another option would be to use the security modules made available by the cloud provider; these are usually easily integrated.

Whatever the choice is, it is always important to keep thinking about the data you are using and to make sure its usage is in line with the GDPR.

 

Technical solution

This tutorial was created using a Fedora Linux workstation. For the cloud services Amazon was used, but with some small changes Google Cloud services might also be used (the kubectl/docker commands are the same). If you are using Windows some commands may differ; it could be useful to create a virtual machine with a basic Linux installation.

 

Preparation

First, we will need to create two accounts to be able to use the different services. The first is an account for cloud services: you can use either Amazon AWS or Google Cloud to provide the services we need. The second is a Docker Hub account, to provide a registry to store Docker images (or to use predefined images). All accounts can be created for free, although Amazon can charge you for some of the services you install (check this before installing).

After the accounts have been created, we need some basic tools to administer and create the different components. Most of them can be installed using apt-get or dnf (depending on the Linux version):

  • Container tools
    • Docker
    • Kubernetes (Kubectl)
  • Amazon tools
    • Kops
    • Awscli
    • S3cmd
  • Gcloud (If Google cloud is used)

Now we can start configuring the account we need within the Amazon cloud. This can be done via the Amazon AWS console.

 

[Figure: POC WSO2AM & AWS 7]

 

Select the IAM service (see picture above) and create an Amazon IAM user with the following permissions:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • AmazonVPCFullAccess

The previous task will provide you with the data to configure AWS access for the Linux environment. Set up the AWS CLI connection by providing the Access Key, Secret Access Key and the AWS region in which you want the Kubernetes cluster to be installed:

aws configure

AWS Access Key ID [None]: AccessKeyValue
AWS Secret Access Key [None]: SecretAccessKeyValue
Default region name [None]: us-east-1 (the region that was assigned to you)
Default output format [None]:

 

YAML

The big advantage of the setup using Kubernetes is that, once the initial configuration of the cloud environment is done, the configuration and updates of the cluster and its sizing can be arranged by uploading YAML files. These YAML files trigger actions in the Kubernetes cluster and will in general just take a few minutes. Kubernetes can also automatically take care of updates without service interruption.

Kubernetes distinguishes between several types of YAML files that each have their own structure and function within the configuration. In this document we will first create a namespace for the cluster using a “Namespace” YAML. Next, we will use a Deployment YAML to create the pods and containers for the cluster. Afterwards, we will use Service YAMLs to create the ports connected to the different nodes. Finally, an Ingress YAML can be used to expose the services to the outside world.

For a more detailed description of the YAML files you can check the Kubernetes documentation (https://kubernetes.io/docs/home).

 

Images

While you could create the images needed from the ground up, it is easier to use a predefined image and change the configuration to fit your needs. If you do wish to create the image from scratch, you can use the following tutorial:

https://github.com/wso2-attic/kubernetes-artifacts

For this tutorial we will use a predefined image which is delivered with the document. It’s a standard dump of a docker image (Tar file). The image can be loaded into a locally running docker via the following command:

docker load -i image.tar

You can check if the image has been loaded properly using the command:

docker images

The output lists the images available on your server. If you need to edit the configuration of the image, you can connect to it and use an editor to change the config files (nano is preinstalled). If you need to install new software in the image, you can use “apt-get”. The steps would be the following:

Step 1 (Start the image in background)

docker run -d <imageid>

to check if the image is running you can use the command:

docker ps -a

Step 2 (Attach to image with console)

docker exec -t -i <containerid> /bin/bash

Step 3 (Edit the config files and save desired changes)

nano <config file>

Step 4 (Exit the container & commit the changes)

docker commit <containerid> <image name>

Step 5 (Tag the image)

docker tag <imageid> <dockerhub user>/<repository name>:version

Step 6 (Login to the docker hub and upload the image)

docker login
docker push <dockerhub user>/<repository name>:version

After this you should log on to Docker Hub to check if the image was uploaded properly (this is vital to be able to continue with the rest of the tutorial).

 

Installation Google Cloud

If you choose to use Google instead of Amazon, you can use the information in this chapter instead of the Amazon-specific configuration in the next chapter.

First create a Google Cloud account and create a Kubernetes Engine cluster via the Google console. You can select “create cluster” (see picture below) and follow the steps of the wizard. Afterwards it takes a few minutes to initialize.

 

[Figure: POC WSO2AM & AWS 8]

 

The following parts can be configured from the command line. Use the gcloud tool to create a new cluster in a “compute zone” that best fits you. To list all available zones use:

gcloud compute zones list

When you have selected the proper zone, you can configure the following:

gcloud config set project example
gcloud config set compute/zone europe-west1-d
gcloud container clusters create my-cluster --num-nodes 1 --machine-type g1-small

After this, the configuration is the same as for Amazon and you can use the kubectl command.

 

Installation AWS

Create the Amazon S3 bucket for the installation (can also be done graphically within Amazon console):

s3cmd mb s3://myregistry

Enable versioning for the above S3 bucket (can also be done graphically within Amazon console):

aws s3api put-bucket-versioning --bucket ${bucket_name} --versioning-configuration Status=Enabled

Provide a name for the Kubernetes cluster and set the S3 bucket URL in the following environment variables:

export KOPS_CLUSTER_NAME={KOPS_CLUSTER_NAME}
export KOPS_STATE_STORE=s3://${bucket_name}
export AWS_ACCESS_KEY={ACCESS_KEY}
export AWS_SECRET_KEY={ACCESS_SECRET}

The previous code block can be added to the ~/.bash_profile or ~/.profile depending on the operating system to make it available on all terminal environments.

Create a Kubernetes cluster definition using kops by providing the required node count, node size, and AWS zones:

kops create cluster \
--node-count=2 \
--node-size=t2.medium \
--zones=us-east-1a \
--name=${KOPS_CLUSTER_NAME}

(Use the zone that was assigned to you in place of us-east-1a.)
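The create command only writes the cluster definition to the S3 state store. A plausible next step, following the standard kops workflow, is to actually provision the cluster and then check that it is healthy:

```shell
# Apply the stored cluster definition to AWS (provisions the real resources)
kops update cluster --name=${KOPS_CLUSTER_NAME} --yes

# After a few minutes, verify the cluster state and list the worker nodes
kops validate cluster
kubectl get nodes
```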

 

Deploy pods & services

In this tutorial the namespace “wso2am” is used. It is important to add this to the cluster since all the following steps rely on it. However, you can easily edit the namespace to your own liking. The tutorial comes with a file called namespace-wso2-cluster.yaml, which contains the following code:

apiVersion: v1
kind: Namespace
metadata:
  name: wso2am

You can change the namespace by changing the “name” variable. You can create the namespace with the following command:

kubectl create -f namespace-wso2-cluster.yaml

Now you have to configure the cluster so it can access your Docker Hub registry to pull the images you created. You can use the following command for this (if you created a new namespace, don’t forget to add --namespace=<namespace>):

kubectl create secret docker-registry <secret_name> --docker-server="https://index.docker.io/v1/" --docker-username=MYUSER --docker-password=MYPASS --docker-email=MYEMAIL

After the cluster is created, we can start setting up Kubernetes. The easiest way to do this is with YAML files, which can be uploaded to the cluster using the command:

kubectl create -f <yaml file>

Besides the namespace YAML, two more YAMLs have been provided. The first one is for the deployment. It contains information about the image that will be deployed and the ports the pods will be listening on. Besides this, you can specify how many replicas need to be created (the default is 1). This gives you the opportunity to quickly scale up the application if needed.
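As an indication of what such a deployment YAML contains, a minimal sketch could look like the following. The names, labels and ports are assumptions based on the WSO2 defaults; the actual file delivered with this document may differ:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wso2am-deployment
  namespace: wso2am
spec:
  replicas: 1                   # increase to scale up the application
  selector:
    matchLabels:
      app: wso2am
  template:
    metadata:
      labels:
        app: wso2am
    spec:
      containers:
      - name: wso2am
        image: <dockerhub user>/<repository name>:version   # image pushed earlier
        ports:
        - containerPort: 9443   # management console
        - containerPort: 8243   # API gateway (HTTPS)
      imagePullSecrets:
      - name: <secret_name>     # the registry secret created above
```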

The second YAML that has been provided is related to the services that Kubernetes will expose to the outside world. It is linked to the deployment and describes three types of ports: the target port, which is the port the application in the pod listens on; the node port, which is the port the node exposes to the outside world or to the other nodes; and the cluster port, which is a port exposed via the cluster IP address that is possibly shared by the cluster nodes.
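A minimal sketch of such a service YAML, showing the three port types for the management port, could look like this (the names and the node port number are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wso2am-service
  namespace: wso2am
spec:
  type: NodePort
  selector:
    app: wso2am               # links the service to the deployment's pods
  ports:
  - name: servlet-https
    port: 9443                # cluster port, exposed on the cluster IP
    targetPort: 9443          # port the application listens on in the pod
    nodePort: 30443           # port exposed on the node (30000-32767 range)
```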

 

Conclusion

The previous chapters show that there is a good set of open source tools to create an API management cluster in the cloud. The open source implementations have reached a level of maturity that can challenge the proprietary software in this field. Setting it up requires investigating your needs so that a fitting configuration can be selected.

 

Pros & Cons

A typical cluster setup for WSO2 API Manager can perfectly well run in the cloud and is easily expandable. There are, however, some issues that have to be considered beforehand. Overall, the following pros and cons should be considered.

Pros

  • Scalability
  • No downtime when updating
  • OAuth2/OpenID available “out of the box”
  • Low total cost of ownership
  • Open Source
  • Predefined WSO2 images can be used to speed up installation

Cons

  • Creating a Docker image from scratch for Kubernetes with WSO2 is fully supported, although it is labor intensive the first time, until you automate the setup
  • Once images have been created, changes to the image are complicated (using ConfigMaps can simplify this)
  • Ingress is limited to ports 80 & 443
  • Using load balancing in the cloud can be pricey
  • Predefined images might not fit everyone’s needs

 

References

Building a Kubernetes Docker image | https://github.com/wso2-attic/kubernetes-artifacts
Kubernetes documentation and tutorials | https://kubernetes.io/docs/home
Predefined images for WSO2 API management | https://github.com/wso2/kubernetes-apim

 
 

Gfi España

Gfi is an IT Consulting and Services company with more than 2,500 professionals in Spain and 14,500 internationally.

