SIOS SANless clusters

Step-By-Step: ISCSI Target Server Cluster In Azure

June 13, 2020 by Jason Aw Leave a Comment

I recently helped someone build an iSCSI target server cluster in Azure and realized that I never wrote a step-by-step guide for that particular configuration. So to remedy that, here are the step-by-step instructions in case you need to do this yourself.

Pre-Requisites

I’m going to assume you are fairly familiar with Azure and Windows Server, so I’ll spare you some of the details. Let’s assume you have at least done the following as prerequisites:

  • Provisioned two servers (SQL1, SQL2), each in a different Availability Zone (an Availability Set is also possible, but Availability Zones have a better SLA)
  • Assigned static IP addresses to them through the Azure portal
  • Joined the servers to an existing domain
  • Enabled the Failover Clustering feature and the iSCSI Target Server feature on both nodes
  • Added three Azure Premium Disks to each node.
    NOTE: this is optional; one disk is the minimum required. For increased IOPS we are going to stripe three Premium Azure Disks together in a storage pool and create a simple (RAID 0) virtual disk
  • SIOS DataKeeper is going to be used to provide the replicated storage used in the cluster. If you need DataKeeper you can request a trial here.

Create Local Storage Pool

Once again, this step is completely optional, but for increased IOPS we are going to stripe three Azure Premium Disks together into a single Storage Pool. You might be tempted to use dynamic disks and a spanned volume instead, but don’t do that! If you use dynamic disks you will run into an incompatibility that will prevent you from creating iSCSI targets later.

Don’t worry, creating a local Storage Pool is pretty straightforward if you are aware of the pitfalls described below. The official documentation can be found here.

Pitfall #1 – although the documentation says the minimum size for a volume to be used in a storage pool is 4 GB, I found that the P1 Premium Disk (4GB) was NOT recognized. So in my lab I used 16GB P3 Premium Disks.

Pitfall #2 – you HAVE to have at least three disks to create a Storage Pool.

Pitfall #3 – create your Storage Pool before you create your cluster. If you try to do it after you create your cluster you are going to wind up with a big mess as Microsoft tries to create a clustered storage pool for you. We are NOT going to create a clustered storage pool, so avoid that mess by creating your Storage Pool before you create the cluster. If you have to add a Storage Pool after the cluster is created you will first have to evict the node from the cluster, then create the Storage Pool.

Based on the documentation found here, below are the screenshots that represent what you should see when you build your local Storage Pool on each of the two cluster nodes. Complete these steps on both servers BEFORE you build the cluster.

You should see the Primordial pool on both servers.

Right-click and choose New Storage Pool…

Choose Create a virtual disk when this wizard closes

Notice here you could create storage tiers if you decided to use a combination of Standard, Premium and Ultra SSD

For best performance use Simple storage layout (RAID 0). Don’t be concerned about reliability since Azure Managed Disks have triple redundancy on the backend. Simple is required for optimal performance.

For performance purposes use Fixed provisioning. You are already paying for the full Premium disk anyway, so there is no reason not to use it all.

Now you will have a 45 GB X drive on your first node. Repeat this entire process for the second node.
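If you prefer scripting the wizard steps, the same pool and virtual disk can be built with the Storage module cmdlets. This is just a sketch based on my lab: the pool name, column count, and X: drive letter are assumptions, so adjust them for your environment.

```powershell
# Run on EACH node BEFORE creating the cluster.
# Gather the three unattached Azure Premium Disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create the local Storage Pool from those disks
New-StoragePool -FriendlyName Pool1 `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Create a Simple (RAID 0), fixed-provisioned virtual disk striped across all three
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VDisk1 `
    -ResiliencySettingName Simple -ProvisioningType Fixed `
    -UseMaximumSize -NumberOfColumns 3

# Initialize the new disk, partition it as X:, and format it NTFS
Get-VirtualDisk -FriendlyName VDisk1 | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter X -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "iSCSI Storage"
```

Run the same script on the second node so both servers end up with a matching X: volume.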

Create Your Cluster

Now that each server has its own 45 GB X drive, we are going to create the basic cluster. Creating a cluster in Azure is best done via PowerShell so that we can specify a static IP address. If you do it through the GUI, you will soon realize that Azure assigns your cluster IP a duplicate IP address that you will have to clean up, so don’t do that!

Here is an example PowerShell command to create a new cluster.

 New-Cluster -Name mycluster -NoStorage -StaticAddress 10.0.0.100 -Node sql1, sql2

The output will look something like this.

PS C:\Users\dave.DATAKEEPER> New-Cluster -Name mycluster -NoStorage 
-StaticAddress 10.0.0.100 -Node sql1, sql2
WARNING: There were issues while creating the clustered role that 
may prevent it from starting. 
For more information view the report file below.
WARNING: Report file location: C:\windows\cluster\Reports\Create Cluster 
Wizard mycluster on 2020.05.20 
At 16.54.45.htm

Name     
----     
mycluster

The warning in the report will tell you that there is no witness. Because there is no shared storage in this cluster you will have to create either a Cloud Witness or a File Share Witness. I’m not going to walk you through that process as it is pretty well documented at those links.

Don’t put this off, go ahead and create the witness now before you move to the next step!
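For example, a Cloud Witness takes a single command once you have an Azure storage account; the account name and key below are placeholders, not values from this lab.

```powershell
# Cloud Witness: requires an Azure storage account (name and key are placeholders)
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage-account-key>"

# Or, if you prefer a File Share Witness on a share reachable by both nodes:
# Set-ClusterQuorum -FileShareWitness "\\fileserver\witness"
```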

You now should have a basic 2-node cluster that looks something like this.

Configure A Load Balancer For The Cluster Core IP Address

Clusters in Azure are unique in that the Azure virtual network does not support gratuitous ARP. Don’t worry if you don’t know what that means; all you really need to know is that cluster IP addresses can’t be reached directly. Instead, you have to use an Azure Load Balancer, which redirects the client connection to the active cluster node.

There are two steps to getting a load balancer configured for a cluster in Azure. The first step is to create the load balancer. The second step is to update the cluster IP address so that it listens for the load balancer’s health probe and uses a 255.255.255.255 subnet mask which enables you to avoid IP address conflicts with the ILB.

We will first create a load balancer for the cluster core IP address. Later we will edit the load balancer to also address the iSCSI cluster IP resource that we will create at the end of this document.

Notice that the static IP address we are using is the same address that we used to create the core cluster IP resource.

Once the load balancer is created, you will edit it as shown below.

Add the two cluster nodes to the backend pool.

Add a health probe. In this example we use 59999 as the port. Remember that port; we will need it in the next step.

Create a new rule to redirect all HA ports. Make sure Floating IP is enabled.
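If you would rather script the portal steps above, the Az PowerShell module can build the same internal load balancer. The resource group, vnet, subnet, location, and resource names below are all assumptions; substitute your own.

```powershell
# Assumed names: resource group "cluster-rg", vnet "vnet1", subnet "subnet1"
$vnet   = Get-AzVirtualNetwork -Name vnet1 -ResourceGroupName cluster-rg
$subnet = Get-AzVirtualNetworkSubnetConfig -Name subnet1 -VirtualNetwork $vnet

# Frontend uses the same static IP as the core cluster IP resource (10.0.0.100)
$fe = New-AzLoadBalancerFrontendIpConfig -Name ClusterFE `
    -PrivateIpAddress 10.0.0.100 -SubnetId $subnet.Id
$bp = New-AzLoadBalancerBackendAddressPoolConfig -Name ClusterBE

# Health probe on TCP 59999, matching the ProbePort we set on the cluster IP
$probe = New-AzLoadBalancerProbeConfig -Name ClusterProbe -Protocol Tcp `
    -Port 59999 -IntervalInSeconds 5 -ProbeCount 2

# HA ports rule (Protocol All, port 0) with Floating IP enabled
$rule = New-AzLoadBalancerRuleConfig -Name ClusterHARule -Protocol All `
    -FrontendPort 0 -BackendPort 0 -FrontendIpConfiguration $fe `
    -BackendAddressPool $bp -Probe $probe -EnableFloatingIP

# HA ports requires a Standard SKU internal load balancer
New-AzLoadBalancer -Name cluster-ilb -ResourceGroupName cluster-rg `
    -Location eastus -Sku Standard -FrontendIpConfiguration $fe `
    -BackendAddressPool $bp -Probe $probe -LoadBalancingRule $rule
```

Note that the two nodes’ NICs still need to be added to the ClusterBE backend pool afterwards, just as in the portal walkthrough.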

Step 2 – Edit The Cluster Core IP Address To Work With The Load Balancer

As I mentioned earlier, there are two steps to getting the load balancer configured to work properly. Now that we have a load balancer, we have to run a PowerShell script on one of the cluster nodes. The following is an example script.

$ClusterNetworkName = "Cluster Network 1"
$IPResourceName = "Cluster IP Address"
$ILBIP = "10.0.0.100"
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter `
  -Multiple @{Address=$ILBIP;ProbePort=59999;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}

The important thing about the script above, besides getting all the variables correct for your environment, is making sure the ProbePort is set to the same port you defined in your load balancer settings for this particular IP address. You will see later that we will create a second health probe for the iSCSI cluster IP resource that will use a different port. The other important thing is making sure you leave the subnet mask set to 255.255.255.255. It may look wrong, but that is what it needs to be.

After you run it the output should look like this.

 PS C:\Users\dave.DATAKEEPER> $ClusterNetworkName = "Cluster Network 1" 
$IPResourceName = "Cluster IP Address" 
$ILBIP = "10.0.0.100" 
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter 
-Multiple @{Address=$ILBIP;ProbePort=59999;SubnetMask="255.255.255.255"
;Network=$ClusterNetworkName;EnableDhcp=0}
WARNING: The properties were stored, but not all changes will take effect 
until Cluster IP Address is taken offline and then online again.

You will need to take the core cluster IP resource offline and bring it back online again before it will function properly with the load balancer.
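This can be done from Failover Cluster Manager or, for example, from PowerShell; "Cluster Group" below is the default name of the core resources group.

```powershell
# Cycle the core cluster IP resource so the new probe settings take effect
Stop-ClusterResource "Cluster IP Address"
Start-ClusterResource "Cluster IP Address"

# Bring the rest of the core group (e.g. the cluster Name resource) back online
Start-ClusterGroup "Cluster Group"
```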

Assuming you did everything right in creating your load balancer, your Server Manager on both servers should list your cluster as Online as shown below.

Check Server Manager on both cluster nodes. Your cluster should show as “Online” under Manageability.

Install DataKeeper

I won’t go through all the steps here, but basically at this point you are ready to install SIOS DataKeeper on both cluster nodes. It’s a pretty simple installation; just run the installer and choose all the defaults. If you run into any problems with DataKeeper, it is usually one of two things. The first issue is the service account. You need to make sure the account you are using to run the DataKeeper service is in the Local Administrators group on each node.

The second issue involves firewalls. Although the DataKeeper installer will update the local Windows Firewall automatically, if your network is locked down you will need to make sure the cluster nodes can communicate with each other across the required DataKeeper ports. In addition, you need to make sure the ILB health probe can reach your servers.

Once DataKeeper is installed you are ready to create your first DataKeeper job. Complete the following steps for each volume you want to replicate using the DataKeeper interface.

Use the DataKeeper interface to connect to both servers

Click on create new job and give it a name

Click Yes to register the DataKeeper volume in the cluster

Once the volume is registered it will appear in Available Storage in Failover Cluster Manager

Create The ISCSI Target Server Cluster

In this next step we will create the iSCSI Target Server role in our cluster. In an ideal world I would have a PowerShell script that does all this for you, but for the sake of time, for now I’m just going to show you how to do it through the GUI. If you happen to write the PowerShell code, please feel free to share it with the rest of us!

There is one problem with the GUI method. You will wind up with a duplicate IP address when the IP resource is created, which will cause your cluster resource to fail until we fix it. I’ll walk through that process as well.
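For reference, the role can also be created from PowerShell, which lets you specify a free static IP up front and sidestep the duplicate-address problem. I haven’t scripted the whole flow, so treat this as a sketch: the role name, address, and DataKeeper volume name are assumptions, and you should verify the parameters against your module version.

```powershell
# Create the clustered iSCSI Target Server role using the DataKeeper volume
# shown in Available Storage and a static IP that is unused on your subnet
Import-Module iSCSITarget
Add-ClusteriSCSITargetServerRole -Name iscsi-cluster `
    -Storage "DataKeeper Volume X" -StaticAddress 10.0.0.110
```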

Go to the Properties of the failed IP Address resource, choose Static IP, and select an IP address that is not in use on your network. Remember this address; we will use it in our next step when we update the load balancer.

You should now be able to bring the iSCSI cluster resource online.

Update Load Balancer For ISCSI Target Server Cluster Resource

As I mentioned earlier, clients can’t connect directly to the cluster IP address (10.0.0.110) we just created for the iSCSI Target Server cluster. We will have to update the load balancer we created earlier as shown below.

Start by adding a new frontend IP address that uses the same IP address that the iSCSI Target cluster IP resource uses.

Add a second health probe on a different port. Remember this port number; we will use it again in the PowerShell script we run next.

Add one more load balancing rule. Make sure to change the Frontend IP address and Health probe to use the ones we just created. Also make sure Floating IP (direct server return) is enabled.

The final step to allow the load balancer to work is to run the following PowerShell script on one of the cluster nodes. Make sure you use the new health probe port, IP address, and IP resource name.

$ClusterNetworkName = "Cluster Network 1"
$IPResourceName = "IP Address 10.0.0.0"
$ILBIP = "10.0.0.110"
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter `
  -Multiple @{Address=$ILBIP;ProbePort=59998;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}

Your output should look like this.

 PS C:\Users\dave.DATAKEEPER> $ClusterNetworkName = "Cluster Network 1" 
$IPResourceName = "IP Address 10.0.0.0" 
$ILBIP = "10.0.0.110" 
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter 
-Multiple @{Address=$ILBIP;ProbePort=59998;SubnetMask="255.255.255.255"
;Network=$ClusterNetworkName;EnableDhcp=0}
WARNING: The properties were stored, but not all changes will take effect 
until IP Address 10.0.0.0 is taken offline and then online again.

Make sure to take the resource offline and online for the settings to take effect.

Create Your Clustered ISCSI Targets

Before you begin, it is best to check that Server Manager on BOTH servers can see the two cluster nodes plus the two cluster name resources, and that they all appear “Online” under Manageability as shown below.

If either server has an issue querying either of those cluster names, the next steps will fail. If there is a problem, double-check all the steps you took to create the load balancer and the PowerShell scripts you ran.

We are now ready to create our first clustered iSCSI targets. From either of the cluster nodes, follow the steps illustrated below as an example of how to create iSCSI targets.

Of course, assign this to whichever server or servers will be connecting to this iSCSI target.

And there you have it, you now have a functioning iSCSI target server in Azure.
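For reference, the target-creation wizard steps can also be scripted with the iSCSI Target cmdlets. The target name, initiator IQN, path, and size below are assumptions from a hypothetical lab, so substitute your own values.

```powershell
# Run on the node that currently owns the iSCSI Target Server role.
Import-Module iSCSITarget

# Create a target and restrict it to the initiator(s) allowed to connect
New-IscsiServerTarget -TargetName target1 `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:sql3.datakeeper.local"

# Create a VHDX-backed virtual disk on the replicated X: drive and map it
New-IscsiVirtualDisk -Path X:\iSCSIVirtualDisks\lun1.vhdx -SizeBytes 10GB
Add-IscsiVirtualDiskTargetMapping -TargetName target1 `
    -Path X:\iSCSIVirtualDisks\lun1.vhdx
```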

If you build this, leave a comment and let me know how you plan to use it!

Articles reproduced with permission from Clusteringwithmeremortals

Filed Under: Clustering Simplified Tagged With: Azure, ISCSI Target Server Cluster

Solution Brief: SANless Clusters for Hybrid Cloud Environments

June 9, 2020 by Jason Aw Leave a Comment

SIOS SANless clusters are an easy, cost-efficient way to add disaster protection to your physical server-based cluster environment without the cost and complexity of an additional data center or disaster recovery site. Add a SIOS SANless cluster node in a cloud to your physical server-based cluster environment for efficient, real time, block level replication and disaster protection for your business critical applications. SIOS software enables failover of application instances across geographic locations and cloud availability zones or regions to provide site-wide, local, and regional disaster protection. SIOS SANless software lets you build a cluster using the local storage available to your physical, virtual, or cloud systems. SIOS software keeps local storage synchronized for high availability protection without the need for shared storage.

Configuration flexibility

Whether you want to protect applications in a physical server, a private cloud within your organization, in a public cloud or a hybrid cloud, SIOS SANless software gives you the flexibility to build a fully automated, application-centric cluster and replication solution with your choice of industry standard hardware, replication schema, and deployment (active/active, active/passive).

SANless Clusters for Hybrid Cloud Environments
SIOS SANless clusters span environments letting you protect data with high availability and disaster recovery without the cost and complexity of a remote disaster recovery site.

SIOS software lets you replicate between the configurations of your choice – between SAN and SANless environments and any combination of physical, virtual, and cloud configurations. No vendor lock in. No need for identical hardware at the source and destination.

Easy to use. Easy to own

You can build a SIOS SANless cluster and configure it in minutes using our intuitive interface. SIOS also makes monitoring and management of your clusters easy. The user-friendly management console lets you monitor the status of protected servers, communication paths, resources, and applications at a glance.

Key Benefits

Disaster Protection

• Easy, cost efficient high availability and disaster protection for business critical applications

Flexibility

• Mix physical server and cloud environments for maximum efficiency.

Ease of Use

• Intuitive console for easy ongoing monitoring and management.

Download Solution Brief on SANless Clusters for Hybrid Cloud Environments

Filed Under: Clustering Simplified

Solution Brief: SANless Cluster Solutions for Virtual Server Environments

June 9, 2020 by Jason Aw Leave a Comment

SIOS SANless software lets you build a cluster in a virtualized environment without the need for shared storage. You can use any local storage types available and provided by the hypervisor. SIOS software uses efficient block-level replication to keep local storage synchronized, enabling the standby servers in your cluster to continue to operate after a failover with access to the most recent data.

Cluster virtual machines

SIOS SANless software lets you create a cluster using virtual machines sitting on top of any hypervisor (VMware, Xen, Microsoft Hyper-V, and others). It uses real-time replication to synchronize storage on the primary VM with storage on a standby VM located in the same data center, in your disaster recovery site, or both. In the event of a disaster, the standby VM can be brought into service immediately, eliminating the hours needed for restoration from back-up media. You simply access the replicated VMs in the DR site directly.

Hyper-V host clustering that supports Live Migrations

In Microsoft Hyper-V environments, SIOS SANless software allows you to cluster entire Hyper-V host machines at the hypervisor level for complete VM portability and failover protection. By keeping a real-time copy of the running VM synchronized on an alternate Hyper-V host, SIOS software allows you to easily failover or Live Migrate a VM from one Hyper-V host to another. You get complete portability to move individual VMs or all of the VMs on a host to another Hyper-V host in the cluster.

Build a SIOS SANless cluster using virtual servers (A). In Microsoft Hyper-V environments (B), SIOS SANless clusters can be used at the virtual machine level for easy Live Migrations and complete server portability.

Easy disaster recovery testing

SIOS software also lets you restore replicated VMs to perform disaster recovery testing without disruption to the production site. When testing is complete, SIOS software eliminates changes that were made on the target server during the testing and resumes the replication from where it stopped.

Download Solution Brief on SANless Cluster Solutions for Virtual Server Environments

Filed Under: Clustering Simplified

Solution Brief: High Availability for SQL Server in Amazon Cloud Environments

May 17, 2020 by Jason Aw Leave a Comment

SIOS software provides a simple, cost-efficient way to provide high availability protection for SQL Server in the Amazon Web Services Cloud. Add SIOS DataKeeper Cluster Edition software to a Windows Server Failover Clustering environment such as SQL Server Always On Failover Cluster Instance (FCI) to create a cloud-friendly SANless cluster. Use AWS Quick Start deployment templates to create a SIOS SANless cluster in minutes.

Fast, Cost-Efficient Way to Add High Availability

Like all traditional failover clustering solutions, SQL Server FCI environments require the use of shared storage. This requirement makes them impractical or impossible in public cloud environments, including Amazon Web Services. SIOS SANless clustering software eliminates this requirement in an environment that is fully integrated with Windows Server Failover Clustering. SIOS software adds the flexibility to protect your business critical applications such as SQL Server Standard or Enterprise Edition in Windows or Linux and any combination of physical, virtual, and cloud environments.

Fast, Efficient Synchronization

SIOS software uses highly efficient block-level replication to synchronize storage on all cluster nodes in real time to create a SANless cluster. By replicating data volumes at the block level, SIOS software uses significantly fewer system resources, makes more efficient use of the available bandwidth, and transfers data faster than file-based replication alternatives. As a result, SIOS software delivers incredibly fast replication speeds—without hardware accelerators or compression devices. You get efficient storage without the cost or configuration limitations of a traditional SAN-based environment.

Failover Across Availability Zones for Disaster Protection

SIOS software keeps real-time copies of data synchronized across multiple nodes and across EC2 Availability Zones (AZs) for availability and disaster protection.

High Availability with SQL Server Standard Edition

SIOS DataKeeper Cluster Edition software can be used with SQL Server Standard Edition FCI to create a cost-efficient high availability cluster without the need for more costly SQL Server Enterprise Edition licenses.

Key Benefits

Enables Clustering in the Cloud

• Makes cluster failover protection in cloud environments possible by eliminating the need for shared storage.

• Fully integrated with Windows Server Failover Clustering (WSFC)

Protection for Applications and Data

• High availability and disaster protection in a cloud environment.

Ease of Use

• AWS Quick Start deployment templates

• Intuitive console for easy ongoing AWS monitoring and management.

Download our Solution Brief High Availability for SQL Server in Amazon Cloud Environments

Filed Under: Clustering Simplified Tagged With: Amazon Web Services Cloud, High Availability, SQL Server

High Availability and DR for S/4HANA and other SAP platforms

April 28, 2020 by Jason Aw 1 Comment

SAP is the market leader in enterprise application software. Over the span of many years, SAP has helped companies of all sizes and in all industries run efficiently and effectively. As a result, it has built an ecosystem of enterprises heavily reliant on its platform: 77% of the world’s transaction revenue touches an SAP system.

SAP applications touch many critical parts of a company, such as ERP, manufacturing, business processes, and customer service. They have become the lifeline of many enterprises that depend on them for their business to operate properly. As such, high availability has become one of the top concerns of company management when it comes to their SAP systems.

In this article, we will discuss at a high level what HANA System Replication is, how it works, what its limitations are when it comes to high availability, and how we can overcome them. We will also discuss the options for HANA high availability and the key differences between them.

To select the right HA solution, ask yourself at the end of the day whether it can:

  • Meet Recovery Time Objectives (RTO)

—– How long can SAP be down before you recover?

  • Meet Recovery Point Objectives (RPO)

—– How old can your data be when service is restored?

  • Meet Availability Service Level Agreements (SLA)

—– How much uptime do you need?

SAP HANA system replication

SAP HANA System Replication is a reliable data protection and disaster recovery solution that provides continuous synchronization of a HANA database to a secondary location either in the same data center, remote site or in the cloud.

System Replication is a standard SAP HANA feature that comes with the software. Using this feature, all data is replicated to the secondary site, and data is pre-loaded into memory on the secondary site, which helps to reduce the recovery time objective (RTO) significantly. So in case of a failover, the secondary site will be able to take over without even performing a HANA DB (re)start and will act as the primary DB instantly upon failover. However, the failover has to be triggered manually by the admin using the sr_takeover command. For the replication to be reversed, or to fail back to the primary, separate commands will need to be issued as well.

HANA System Replication failover high-availability and DR
Figure 1: HANA System Replication failover high-availability and DR

Below are some key points of the HANA system replication method for HA and DR:

  • Redundant Servers / Nodes
  • In-memory database replicated by HANA system replication (in “log replay” mode)
  • Multiple replication options: sync, sync-mem, async
  • Supports active-active (read-only on secondary)
  • Setup and admin through HANA cockpit, HANA studio or command line

Limitations

  • No monitoring of application processes or replication failures, and no automated failover
  • Failover, reverse replication and failback have to be performed manually – many manual steps are needed
  • No virtual IP
  • No integrated HA failover orchestration together with SAP ASCS etc. components

As you can probably deduce from the points above, HANA System Replication is designed to protect against data loss: when an issue happens with the primary node, an admin can manually run the sr_takeover command so that a problem with the primary system does not take down the entire SAP setup, which depends on the HANA database, for a prolonged period of downtime. However, much of this work depends on manual human intervention, which, although good enough for DR, is not ideal for HA (where downtime needs to be prevented).

SIOS High Availability Clustering

SIOS high availability software for SAP lets you protect SAP S/4HANA in any configuration (or combination) of physical, virtual, cloud (public, private, and hybrid) and high performance flash storage environments. SIOS software provides easy and flexible configuration, fast replication, and comprehensive monitoring and protection of the entire SAP S/4HANA environment.

Specifically for SAP S/4HANA and the HANA database, SIOS can be used to complement what SAP is already doing with HANA System Replication. SIOS adds to what SAP provides to deliver true high availability: automated monitoring of key HANA application processes, plus automated failover and failback, including virtual IP(s), even if you have multiple instances within a single HANA node.

SIOS HANA System Replication failover high-availability and DR
Figure 2: SIOS HANA System Replication failover high-availability and DR

Below are some key points of the SIOS Protection Suite for SAP HANA HA and DR:

  • Works in the cloud across availability zones (AZs) and regions
  • Provides automated failure detection and failover for key SAP HANA DB components:
    — SAP HANA Host Agent
    — SAP HANA sapstartsrv
    — SAP HANA Replication
  • Enables automated SAP HANA replication takeover, switchback
  • Automatically reverse replication
  • Verifies and monitors that the HANA DB is running
  • Provides Virtual IP
  • “Full stack” failover orchestration with ASCS etc. SAP components

Four steps to install and configure HA for HANA database

We will not discuss the specific steps of how to configure SAP HANA, since there are already many online resources that cover them. But at a high level, there are four basic steps:

  1. Install SAP HANA
  2. Configure HANA system Replication
    See – https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/676844172c2442f0bf6c8b080db05ae7.html
  3. Install SIOS protection suite
    See – http://docs.us.sios.com/spslinux/9.4.1/en/topic/sios-protection-suite-for-linux-installation-guide
  4. Use HANA recovery kit (wizard) in GUI to protect HANA
    See – http://docs.us.sios.com/spslinux/9.4.1/en/topic/sap-hana-recovery-kit

The installation process flow is similar for other SAP components (ASCS, ERS, PAS, Web Dispatcher, etc.) as well.

With the HANA Recovery Kit included in the SIOS Protection Suite software, you can use a wizard in the SIOS LifeKeeper management GUI to quickly protect a HANA database instance. You can also assign the virtual IP address that clients use to connect, and manage the entire stack from the same interface. In a multi-instance environment, the solution manages all the instances, virtual IPs, and so on within a fully integrated GUI, making it easy to configure and manage the entire SAP landscape protected by SIOS HA.

Figure 3: SIOS LifeKeeper Management GUI for SAP HANA ASCS and ERS

Comprehensive HA/DR stack for SAP

Beyond the HANA database, SIOS Protection Suite also protects key SAP services and supporting applications, all of which can be managed from the same GUI:

  • Primary Application Server (PAS)
  • ABAP SAP Central Service (ASCS)
  • SAP Central Services (SCS)
  • Enqueue and message servers
  • Enqueue Replication Server (ERS)
  • Database (Oracle, Sybase, MaxDB, HANA, etc.)
  • Shared and/or Replicated File Systems
  • Logical Volumes (LVM)
  • NFS Mounts and Exports
  • Virtual IPs

Clustering in the cloud

When moving SAP to the cloud, one of the key challenges is how to protect the SAP database, as well as the SAP application stack, in an SAP-supported architecture. SIOS has been at the forefront of this move, and its solutions are certified and supported by SAP as well as all the major cloud providers.

The diagram below shows a high-level design of how a pair of S/4HANA systems can be deployed across different Availability Zones, or even Regions. Because cloud providers maintain very low latencies between AZs, it is entirely possible to use synchronous replication across them, creating a pair of S/4HANA systems that deliver both HA and DR at the same time. AZs are geographically separate datacenters, much like on-premises DR datacenters, with highly redundant high-speed network connectivity between them.

Figure 4: SIOS Protection Suite for SAP S/4HANA cloud architecture

Why use SIOS over open-source HA for SAP?

This question invariably comes up: since some Linux vendors already provide HA extensions (HAE) or clustering, why would anyone use a commercial third-party HA solution like SIOS?

  1. Open-source HA is offered as part of certain OS vendors’ “enterprise SAP” extension subscriptions: it comes at a cost, it is definitely not free, and not all Linux flavors are supported. SIOS supports all the major Linux flavors, including Red Hat, SUSE, CentOS, and Oracle Linux. For customers who want to run Windows for their ASCS or Content Server, SIOS also has a Windows-based solution with Windows clustering support, making it a one-stop shop for the entire SAP landscape regardless of platform.
  2. Commercial HA support – OS vendors depend on the open-source community for bug fixes, which can be a problem if a bug takes a long time to be resolved by a less active contributor. SIOS provides commercial support with dedicated support and development teams focused solely on its high-availability solution. It offers 24×7 support, giving customers much more confidence when issues arise.
  3. Open-source tools require complex command-line setup and administration. They are made up of different components, such as Pacemaker and Corosync, maintained by different open-source projects. SIOS provides an all-in-one GUI for wizard-based setup and administration, allowing you to deploy SAP HA in a matter of hours instead of weeks or months.
  4. SIOS provides pre-built application monitoring and failover orchestration for all SAP and cloud components requiring HA through a wizard in the GUI, as opposed to HA extensions that still require a lot of manual configuration.
  5. Automatically ensures SAP ERS always runs on the opposite node from ASCS. SIOS provides this intelligence even in a multi-node ASCS setup: if a failover occurs and ASCS fails over to the node running ERS, then when the original ASCS node recovers, ERS is automatically switched back so that the enqueue locks always have the redundancy they need. Open-source solutions require this to be done manually, which impacts reliability and availability, especially during multiple failures and recoveries.
  6. SIOS reduces implementation and management time and costs; the less time you spend implementing and maintaining HA, the more time you have for other, more important tasks.
  7. Open-source clustering relies on its STONITH fencing mechanism, which has proven unreliable, especially in cloud environments. SIOS takes a multi-pronged approach to preventing false failovers and split-brain: a quorum witness and multiple communication (heartbeat) paths, proven highly reliable over more than 20 years in many scenarios.

Summary

The SAP HANA System Replication feature comes as part of the software and works well to protect the database from data loss when hardware or system failures occur. However, if high availability is the requirement, a third-party solution is still needed to provide automated monitoring, failover orchestration, virtual IPs, and so on. While there are open-source options in the form of enterprise Linux OS subscriptions for SAP, they certainly do not come free, and technical support is still limited because they rely on the open-source community to maintain projects such as Pacemaker and Corosync and to get support from contributors. There are also limitations in native System Replication and open-source HAE that can be overcome by a commercial software vendor like SIOS.

Hence, SIOS, as a reliable third-party high-availability solution provider, can help ensure enterprise customers get the reliability and high availability they need for their mission-critical SAP operations. For peace of mind, SIOS proves to be a very viable complement to SAP HANA System Replication, and it is fully supported by SAP and all the major OS and platform vendors.

Author:

Jason Aw
An IT professional who has been focused on high-availability and disaster recovery for over 20 years. Currently employed at SIOS Technology Corp. as Strategic Business Development for APAC.
