Creating a Hybrid GigaSpaces Cluster – Cloud and On-Premise

In addition to creating a cluster that is either On-Premise or in the cloud, it is also possible to create a hybrid cluster – a cluster that is provisioned in both environments.

In this topic, we will create a hybrid cluster that has both local on-premise and AWS cloud components.

We will use gsctl, a simple CLI tool for creating GigaSpaces clusters, to do the following:

  • Create a GigaSpaces cluster.
  • Deploy GigaSpaces services in the cluster.
  • Undeploy the GigaSpaces services and tear down the cluster.

Prerequisites

Before beginning to work with the gsctl tool, ensure that you have the following:

  • Java 8 or higher installed on your local workstation.

  • AWS account and appropriate credentials as described in the AWS Command Line Interface documentation.

    To deploy a GigaSpaces cluster in AWS you need the following:

    • A valid AWS account
    • Credentials that include aws_access_key_id and aws_secret_access_key
    • Configuration parameters that include a defined aws_region

    Sample credentials and config files are shown after this list.

  • A VPC and subnet with a connection to your network environment (VPN). Connectivity between a single EC2 instance and one of your on-premise machines must work in both directions.

  • Auto-assignment of public IP addresses must be enabled on the subnet.

  • Windows only: Set the following environment variables manually:
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • AWS_REGION
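
For reference, these files follow the standard AWS CLI INI format. A minimal sample, with placeholder values that you should substitute with your own:

    # ~/.aws/credentials
    [default]
    aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
    aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    # ~/.aws/config
    [default]
    region = us-east-1
    output = json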

In addition, perform the following setup tasks for On-Premise components:

  • Disable the password prompt when using sudo
  • Install unzip on all the cluster machines
  • Ensure that you have a single pem file for all your on-premise machines
  • Verify connectivity to all the machines from your workstation
  • CentOS machines only - disable the firewall (firewalld service) and make sure it is set to stay disabled on startup (for example, when the machine reboots)
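
As a rough sketch of these setup tasks on a CentOS machine (the user name, key file, and IP address below are placeholders), you might run:

    # Allow passwordless sudo for the cluster user (add via visudo):
    #   gsuser ALL=(ALL) NOPASSWD: ALL

    # Install unzip on each cluster machine:
    sudo yum install -y unzip

    # Disable firewalld now and keep it disabled after reboot (CentOS only):
    sudo systemctl stop firewalld
    sudo systemctl disable firewalld

    # Verify connectivity from your workstation using the single pem file:
    ssh -i mykey.pem gsuser@172.17.0.2 "echo ok"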

Downloading and Running the Utility

The gsctl tool can be downloaded from a dedicated repository. You can start an interactive shell for gsctl and run commands using short notation, or you can run commands from outside the interactive shell (from the OS shell) by adding java -jar gsctl.jar before each command.

You must run the utility from the same directory where you placed the gsctl.jar file.
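
For example, the create command described later in this topic can be run in short notation from inside the interactive shell:

    create

or in full notation from the OS shell in the same directory:

    java -jar gsctl.jar create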

To download the gsctl utility and launch the interactive shell:

  1. Create an empty directory on your machine. In this example, the drive is C: and the directory name is gsDemo.

  2. Download the most current version of the utility (gsctl.jar file) from the dedicated repository to the directory you created (gsDemo).

  3. To start the interactive shell, open a command window in the directory you created and type the following:

    java -jar gsctl.jar

    Using the full path from this example (drive C:, directory gsDemo), the equivalent command is:

    java -jar C:\gsDemo\gsctl.jar

Creating a GigaSpaces Cluster

Follow the steps in the procedure to create a GigaSpaces cluster and deploy the GigaSpaces services. All of the procedures on this page describe the short notation as used from inside the interactive shell.

To create a cluster:

  1. Verify that an .aws folder exists in the home directory on your local machine, and that it contains the config and credentials files. The credentials file should include your aws_access_key_id and aws_secret_access_key, and the config file should include your aws_region and output definitions (see the sample files in the Prerequisites section).

  2. Create an empty directory on your machine. In this example, the directory name is gsDemo.
  3. Download the most current version of the gsctl.jar file from the dedicated repository to the directory you created (gsDemo) and extract the contents.
  4. Add the pem file to this directory.
  5. Open a command window and type the following to define a cluster called gs_demo_cluster:

    init --cluster-name=gs_demo_cluster --aws=1 --on-prem=1

    This command also creates a cluster.yaml file, which you can modify to supply your AWS resources. In this case, you must provide the VPC ID and subnet ID that your IT team created previously. Other resources can be created automatically by the create command if you leave their values as <auto-generate>.

    By default, the utility creates all of the required resources; in hybrid mode, however, you must supply the VPC ID and subnet ID yourself.

    Also modify the yaml file as follows, in the on-premise component section:

    • keyName - name of your pem file (without the .pem extension)

    • userName - the user name used to connect to the on-premise machines (root in the sample file below)

    • profiles - the on-premise worker (client) name, along with the IP addresses of the host machines

      You must have 3 masters (gsManagers) and at least 1 worker across the cluster. In the sample file below, two masters run in AWS and one runs on-premise, for a total of 3.

    name: gs_demo_cluster
    gsManagers: 3
    clusterComponents:
    - type: "AWS"
      name: "AWS_1"
      userName: "<auto-generate>"
      keyName: "<auto-generate>"
      vpcId: "<auto-generate>"
      vpcSubnetId: "<auto-generate>"
      securityGroup: "<auto-generate>"
      amiId: "<auto-generate>"
      #iamInstanceProfileArn: "<auto-generate>"
      # Uncomment the lines below in order to use volumes:
      #volumes:
      #  ebs:
      #  - name: "default aws master name"
      #    id: "<required parameter>"
      masters:
        label: "GS Cluster [gs_demo_cluster_aws] Master Group"
        profiles:
        - name: "default aws master name"
          type: "m4.xlarge"
          tags: []
          count: 2       # Make sure that the total master count from all components together equals 3
      workers:
        label: "GS Cluster [gs_demo_cluster_aws] Worker Group"
        profiles:
        - name: "default aws worker name"
          type: "m4.xlarge"
          tags: []
          count: 3
    - type: "OnPremise"
      name: "OnPremise_1"
      userName: "root"
      keyName: "sshkey"
      masters:
        label: "GS Cluster [gs_demo_cluster_on_prem] Master Group"
        profiles:
        - name: "default on premise master name"
          tags: []
          hosts:
          - "172.17.0.2"
      workers:
        label: "GS Cluster [gs_demo_cluster_on_prem] Worker Group"
        profiles:
        - name: "default on premise worker name"
          tags: []
          hosts:
          - "172.17.0.5"
          - "172.17.0.6"
          - "172.17.0.7"
  6. To create the hybrid cluster, run the following command:

    create
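
To recap the flow inside the interactive shell: run init, edit cluster.yaml as described above, and then run create:

    init --cluster-name=gs_demo_cluster --aws=1 --on-prem=1
    create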

The cloud platform begins to create the cluster.

You can monitor the progress of the cluster in your cloud platform dashboard, for example, the VPC Dashboard in AWS.

The process of creating the cluster takes only a few minutes until the nodes are up and running. In the dashboard, you can see when the Master (server) nodes and Worker (client) nodes are running and have passed the status checks.

Deploying the GigaSpaces Services

After your cluster is up and running, you can deploy GigaSpaces services and microservices.

The deploy operation can be done using the GigaSpaces CLI (see Deploy with Command Line), the REST API (see Deploy with REST), or the Ops Manager (see Deploy and Undeploy Services in Ops Manager).

The cluster starts in secured mode; access to the Ops Manager and the REST API requires the user to be authenticated. Three built-in users are pre-defined: gs-admin, gs-mngr, and gs-viewer. The password for all of them is the Nomad token from the output of the create command.
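
As a rough sketch of authenticated access from the command line (the manager host, port 8090, and the /v2/pus endpoint are assumptions here; see the Deploy with REST topic for the exact URL and payload), you would pass one of the built-in users with basic authentication:

    # Placeholders: <manager-host> is one of your master nodes;
    # <nomad-token> is taken from the output of the create command.
    curl -u gs-admin:<nomad-token> http://<manager-host>:8090/v2/pus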

The gsctl tool comes with sample processor (stateful) and feeder (stateless) services in the default artifacts repository. To deploy them, specify 'data-processor.jar' and 'data-feeder.jar' as the URL in the deploy screen.

For more information about the artifact repository, see the Managing the GigaSpaces Product Version topic.

Monitoring the GigaSpaces Services

After you deploy your GigaSpaces services and microservices, you can monitor them using the following built-in web interfaces:

  • Ops Manager
  • Grafana
  • Zipkin

To access the administration and monitoring tools:

  1. Run the following command:

    list-services

    This returns all the services with their URLs.

  2. Copy any of the GigaSpacesManager URLs into your browser to open Ops Manager and view the deployed GigaSpaces services.

  3. Copy the grafana URL into your browser to open Grafana, and navigate to the pre-defined dashboards.

  4. Select the Telegraf system metrics dashboard in Grafana to view the cluster metrics.

Removing a GigaSpaces Cluster

You can delete your cluster when you no longer need it, in order to release the cloud resources.

To remove the GigaSpaces cluster:

In the directory where you created the cluster, run the following command from the interactive shell:

    destroy

This tears down the cluster.
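
Equivalently, from the OS shell, add the java -jar prefix as described earlier:

    java -jar gsctl.jar destroy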