Space Clustering SLA (Magic xpa 3.x)

Space clustering defines the number of Space partitions, the number of partition backups, and the way they are spread across the available Grid Service Containers (GSCs).

Space clustering is governed by the Service Level Agreement (SLA) definitions. This means that the grid will always try to maintain the defined clustering when deploying the Space.

Clustering is defined in the MgxpaGSSpace_sla.xml file, which is found under the GigaSpaces-xpa\config folder. By default, this file defines two partitions with one backup each (four in total), and with a restriction that a primary partition and its backup partition cannot run under the same process.

For the grid to comply with the SLA definition, you need to ensure that you define enough GSCs.

With the above default configuration, since a primary partition and its backup cannot run under the same process, you need at least two GSCs running for a successful Space deployment.
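
Based on the default configuration described above, the SLA definition in the MgxpaGSSpace_sla.xml file reads roughly as follows (a sketch of the defaults as described here; the rest of the file stays unchanged):

    <os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="2" number-of-backups="1" max-instances-per-vm="1">

This is the same definition as the second example further below.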

The most common SLA settings are:

  1. cluster-schema – This should always be set to partitioned-sync2backup, which means that the data is divided into partitions and each partition can have a backup that is kept synchronized with it.

  2. number-of-instances – The required number of Space partitions, that is, the number of instances of the Magic processing unit that will be loaded. The default is 1. If you hold a lot of data in memory, you may need to increase this number.

  3. number-of-backups – The number of backup partitions for each primary partition. During development you can decide that you do not need a backup and set this value to 0 (see the sketch after this list). If number-of-instances="2" and number-of-backups="1", there will be four instances of the Magic processing unit.

  4. max-instances-per-vm – The maximum number of instances of the same partition that can be deployed in the same JVM (GSC), that is, under the same process. When max-instances-per-vm="1", a primary partition and its backup(s) will not be provisioned to the same GSC.

  5. max-instances-per-machine – When this is set to 1, you ensure that a primary partition and its backup(s) cannot be provisioned to the same machine. Setting this to 1 should be restricted to a cluster containing a minimum of three machines; then, if one of the machines fails, the lost partitions will move to the third machine. It can also be used in a two-machine cluster, but there is a risk of having primary partitions with no backup until the second machine is back up and running.
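
As mentioned in item 3, backups can be switched off during development. A minimal sketch of such a definition, using the same file and attributes described above (an illustration, not a shipped default):

    <os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="2" number-of-backups="0" max-instances-per-vm="1">

With no backups there is only one instance per partition, so the max-instances-per-vm restriction has nothing to separate and a single running GSC can be enough for deployment.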

Here are some SLA examples:

  1. For a single partition with two backups, and primary and backup partitions on separate GSCs, set the following in the MgxpaGSSpace_sla.xml file:

    <os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="1" number-of-backups="2" max-instances-per-vm="1">

The above example requires at least three containers on a single machine. Each container will hold a single partition instance.

Note: Using two backups is not recommended. This example is provided here only to show how the required number of GSCs is calculated.
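
Roughly, a deployment creates number-of-instances × (number-of-backups + 1) Space instances; here that is 1 × (2 + 1) = 3 instances, and because max-instances-per-vm="1" forbids two instances of the same partition in one GSC, each of the three instances needs its own container.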

  2. For two partitions with one backup each, and primary and backup partitions on separate GSCs, set the following in the MgxpaGSSpace_sla.xml file:

    <os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="2" number-of-backups="1" max-instances-per-vm="1">

The above example requires at least two containers on a single machine. Each container will hold two partition instances (one instance of each partition).

  3. For two partitions with one backup each, and primary and backup partitions on separate machines, set the following in the MgxpaGSSpace_sla.xml file:

    <os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="2" number-of-backups="1" max-instances-per-machine="1">

The above example requires at least two machines with at least one container on each machine. On each machine, the container will hold two partition instances. If there is a cluster of two machines and one of the machines fails, the Magic Space deployment will be incomplete (compromised), and no backup partitions will replace the lost ones until the failed machine starts up again.

*** The use of max-instances-per-machine="1" should be restricted to a cluster containing a minimum of three machines. Then, if one of the machines fails, the lost partitions will move to the third machine.

*** The number of GSCs is defined in the gs-agent.bat file, found under the GigaSpaces-xpa\bin folder. In the command starting with call gs-agent.bat, set the number of GSCs to match the number required by your SLA by modifying the number next to the gsa.gsc parameter.
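
For example, assuming an agent command of the usual GigaSpaces form (the other parameters shown here are only illustrative), starting three GSCs might look roughly like this; only the value after gsa.gsc needs to be changed to match your SLA:

    call gs-agent.bat gsa.global.lus 2 gsa.global.gsm 2 gsa.gsc 3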

Note:

If you are running your projects on a cluster, make sure that all of the machines’ clocks are synchronized.
