SIOS SANless clusters

How To Install A SIOS Protection Suite for Linux License Key

February 23, 2022 by Jason Aw

Once you have installed the SIOS Protection Suite for Linux software and activated your license, you will need to install your license key before you can begin to run it. This 4-minute video reviews how to install the SIOS Protection Suite for Linux software and demonstrates how to activate your license so you can get started.

Watch as a SIOS support representative shows you how to check that your SPS image file is mounted, verify that you have the license file, and install it by entering the complete path name. Use our simple license key manager to validate your activated licenses from purchased entitlements, download and apply license keys, and start your SIOS Protection Suite for Linux software.

This video also walks through the process of how to access our SIOS Documentation portal, where you can find release notes, installation guides, technical documentation and information detailing SIOS Protection Suite for Linux as well as a wide range of topics on everything SIOS.

View tips and convenient insights on how to complete these steps quickly and simply. Then you can begin protecting your critical applications with SIOS Protection Suite for Linux.

Reproduced with permission from SIOS

Understanding and Avoiding Split Brain Scenarios

September 23, 2021 by Jason Aw

Split brain. Most readers of our blogs will have heard the term (in the computing context, that is), yet we cannot help but sympathize with those whose first mental image is of the chaos that would result if someone had two brains, both equally in control at the same time.

What is a Failover Cluster Split Brain Scenario?

In a failover cluster split brain scenario, neither node can communicate with the other, and the standby server may promote itself to become an active server because it believes the active node has failed. This results in both nodes becoming ‘active’, as each sees the other as failed. As a result, data integrity and consistency are compromised as data changes on both nodes. This is referred to as split brain.

There are two types of split-brain scenarios which may occur for an SAP HANA resource hierarchy if appropriate steps are not taken to avoid them.

  • HANA Resource Split Brain: The HANA resource is Active (ISP) on multiple cluster nodes. This situation is typically caused by a temporary network outage affecting the communication paths between cluster nodes.
  • SAP HANA System Replication Split Brain: The HANA resource is Active (ISP) on the primary node and Standby (OSU) on the backup node, but the database is running and registered as the primary replication site on both nodes. This situation is typically caused by either a failure to stop the database on the previous primary node during failover, having Autostart enabled for the database, or a database administrator manually running “hdbnsutil -sr_takeover” on the secondary replication site outside of the clustering software environment.
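Before acting on either scenario, it helps to see which node believes it is the replication primary. A minimal check, assuming the <sid>adm user naming convention and instance HDB00 used in the examples below (run on each cluster node):

# su - <sid>adm -c "hdbnsutil -sr_state"

If both nodes report "mode: primary", you are in one of the split-brain states described above.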

Avoiding Split Brain Issues

Recommendations for avoiding or resolving each type of split-brain scenario in the SIOS Protection Suite clustering environment are given below.

HANA Resource Split Brain Resolution

While in this split-brain scenario, a message similar to the following is logged and broadcast to all open consoles every quickCheck interval (default 2 minutes) until the issue is resolved.

EMERG:hana:quickCheck:HANA-SPS_HDB00:136363:WARNING: 
A temporary communication failure has occurred between servers 
hana2-1 and hana2-2. 
Manual intervention is required in order to minimize the risk of 
data loss. 
To resolve this situation, please take one of the following resource 
hierarchies out of service: HANA-SPS_HDB00 on hana2-1 
or HANA-SPS_HDB00 on hana2-2. 
The server that the resource hierarchy is taken out of service on 
will become the secondary SAP HANA System Replication site.

Recommendations for resolution:

  1. Investigate the database on each cluster node to determine which instance contains the most up-to-date or relevant data. This determination must be made by a qualified database administrator who is familiar with the data.
  2. The HANA resource on the node containing the data that needs to be retained will remain Active (ISP) in LifeKeeper, and the HANA resource hierarchy on the node that will be re-registered as the secondary replication site will be taken entirely out of service in LifeKeeper. Right-click on each leaf resource in the HANA resource hierarchy on the node where the hierarchy should be taken out of service and click Out of Service… (a command-line alternative is sketched after this list).
  3. Once the SAP HANA resource hierarchy has been successfully taken out of service, LifeKeeper will re-register the Standby node as the secondary replication site during the next quickCheck interval (default 2 minutes). Once replication resumes, any data on the Standby node which is not present on the Active node will be lost. Once the Standby node has been re-registered as the secondary replication site, the SAP HANA hierarchy has returned to a highly available state.
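As a command-line alternative to the GUI right-click in step 2, LifeKeeper's perform_action utility can take a resource out of service. A minimal sketch, run on the node that should become the secondary replication site, assuming the default /opt/LifeKeeper path and the hierarchy tag HANA-SPS_HDB00 from the warning message above (check the SIOS documentation for how the action propagates through dependent resources):

# /opt/LifeKeeper/bin/perform_action -t HANA-SPS_HDB00 -a remove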

SAP HANA System Replication Split Brain Resolution

While in this split-brain scenario, a message similar to the following is logged and broadcast to all open consoles every quickCheck interval (default 2 minutes) until the issue is resolved.

EMERG:hana:quickCheck:HANA-SPS_HDB00:136364:WARNING: 
SAP HANA database HDB00 is running and registered as 
primary master on both hana2-1 and hana2-2. 
Manual intervention is required in order to 
minimize the risk of data loss. To resolve this situation, 
please stop database instance 
HDB00 on hana2-2 by running the command 
'su - spsadm -c "sapcontrol -nr 00 -function Stop"' 
on that server. Once stopped, 
it will become the secondary SAP HANA System Replication site.

Recommendations for resolution:

  1. Investigate the database on each cluster node to determine whether important data exists on the Standby node which does not exist on the Active node. If important data has been committed to the database on the Standby node while in the split-brain state, the data will need to be manually copied to the Active node. This determination must be made by a qualified database administrator who is familiar with the data.
  2. Once any missing data has been copied from the database on the Standby node to the Active node, stop the database on the Standby node by running the command given in the LifeKeeper warning message:

    su - <sid>adm -c "sapcontrol -nr <Inst#> -function Stop"

    where <sid> is the lower-case SAP System ID for the HANA installation and <Inst#> is the instance number for the HDB instance (e.g., the instance number for instance HDB00 is 00).

  3. Once the database has been successfully stopped, LifeKeeper will re-register the Standby node as the secondary replication site during the next quickCheck interval (default 2 minutes). Once replication resumes, any data on the Standby node which is not present on the Active node will be lost. Once the Standby node has been re-registered as the secondary replication site, the SAP HANA hierarchy has returned to a highly available state.
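To confirm that the database instance on the Standby node has actually stopped (step 2 above), you can query its process list; a sketch assuming instance number 00 and the <sid>adm convention from the warning message:

# su - <sid>adm -c "sapcontrol -nr 00 -function GetProcessList"

All processes should report a stopped (GRAY) state before the next quickCheck interval re-registers the node.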

Being aware of common split-brain scenarios and taking these steps to mitigate them can save you time and protect data integrity.

Reproduced with permission from SIOS

Seven Skills That Your Team Needs if You are Going with Open Source High Availability

March 31, 2021 by Jason Aw

In the realm of High Availability (HA), there are certain important skills your team needs if you decide to go the open-source route. Open source, by definition, denotes software that is freely available to use.

Today, there are numerous commercial implementations of high-availability clusters for many operating systems, provided by vendors like Microsoft and SIOS Technology Corp.  These commercial solutions provide resource monitoring, dependency management, failover and cluster policies, and some form of management, prepackaged and priced.  An alternative to these commercial implementations is the range of open-source options that also give companies the opportunity to provide high availability for their enterprise.

As companies continue to look for optimizations, cost savings, and potentially tighter control, a growing number of companies and customers are considering moving to open-source availability solutions.

Here are seven skills that your team may need for a move to Open Source HA:

1. Coding skills

In many cases the lack of pre-packaged and bundled support for enterprise applications means that your team will need to be able to develop solutions to protect components, fix issues with bundled components, or write application connectors to ensure application awareness is properly handled.  Lots of people can write scripts, but your team will need to know how to create and adhere to sound development practices and standards.  The basics of this include things such as:

  • Design and Architecture Requirements
  • Design Reviews
  • Code / Code Reviews and Unit Tests (preferably automated)

2. Knowledge of the technology environment

Many enterprise applications require integration with multiple systems in order to provide high availability that meets the Service Level Agreements (SLA) and Service Level Objectives (SLO).  Your team will require deep application awareness and knowledge of the technology environment to build protection and solutions for this integration with multiple enterprise systems.  You need people who know the ins and outs of the critical applications, the technology environment for those applications, networking, hardware, hypervisors, and an understanding of the environmental and application dependencies.  You’ll also need team members who understand the architecture, features, and limitations of the set of HA technologies that you intend to use from the Open Source community. Consider how much of these areas your team knows and understands:

  • Data passing and node communication
  • Node failure
  • Application management
  • System recovery and restart
  • Logging and messages
  • Data resilience and protection

3. Business process knowledge

You need someone to understand your business requirements, and the business process.  Your team needs professionals who understand the enterprise’s business and the processes that drive it.  Your team will need to know and understand how much budget is available to spend for developing the solution, how much risk the business is willing to take, and how to gather additional requirements that may be unspoken or unspecified.

The team will also need to know, or to hire someone who knows how to convert those business requirements into software requirements and how to manage a process that brings a minimum viable high availability solution to fruition that meets the needs of the business, the speed of the business, and fits within the processes of the business.

4. Experience with OS, Applications and Infrastructure

If you are looking to go all open, your team will need experience with Operating Systems, Applications, and Infrastructure.  You'll need to understand the various OS release cycles, including kernel versions for Linux and updates and hotfixes for Windows.  You have applications in house that need to be supported, but you'll also need to be diligent about understanding the application update cycle, application dependencies, and the intersection of application and OS support matrices.  If your environment is homogeneous, great.  Otherwise, your team will need to know the differences between RHEL, RHEL derivatives, and SUSE.  If you run both Linux and Windows, you'll need to know both.  You'll also need to understand the difference the infrastructure makes to the application and OS combination.  AWS and Azure present high-availability considerations that differ from GCP, on-premises environments, and other hypervisors.

5. Change management capabilities

Imagine that you have the development team to create the solution, with technical and business knowledge along with a firm grasp of the OS, infrastructure, and applications.  But getting the scripts together is just the beginning.  Your team will also need change management capabilities.  How will your team keep track of code changes and versions, packages, and package locations?  How will your team manage the release of updates and changes?  Your team will need to be versed in a source repository such as Git, project management tools such as Jira, and running a release train.  You'll need a team that understands how to make updates to code and deliver patches and fixes, all while avoiding unwanted impact.

6. Data analytics and troubleshooting experience

When you enter the space of delivering your own HA solution, your team will need data analytics and troubleshooting experience.  You'll need resources who understand the intersection of application code, system messages, and application error logs and trace files.  When a system crash occurs, you'll have to dig deeper into the logs to troubleshoot and find the root cause, analyze the data to make recommendations, and be prepared to roll out changes (see #5 above).  Don't forget, your team will also need to know what the data from these logs and trace files can tell you about the health of your environment even when there isn't an error, failure, or system crash.

7. Connections (Dev, QA, Partners, Community)

Let's be honest, your business isn't about delivering high availability, but if you decide to dive into the realm of open source HA you are going to need more help than just the brilliance on your team.  Key to getting that additional help will be understanding where to start, then making the right connections to community developers, testing experts, HA and application partners, and the open-source community.  Open forums can be very helpful, but you'll need to check whether their response times are compatible with your SLAs and SLOs.

Many companies pursue open-source solutions for a perception of flexibility, lower cost, and less risk.  But buyer beware: there may be hidden costs in the form of new skills and management overhead, and hidden risks in the open-source programs that any "roll your own" HA solution will depend on.

– Cassius Rhue, VP, Customer Experience

Reproduced from SIOS

How to Deliver High Availability for SQL Server in Linux Environments

September 10, 2020 by Jason Aw

If your organization is running business-critical Microsoft SQL Server on Linux, your IT team undoubtedly knows how challenging continually maintaining high availability, performance and security can be. Particularly difficult is how to ensure high availability with robust replication and automatic failover. Using open-source software and an easily configured HA SANless cluster solution can offer a simpler maintenance approach without sacrificing the safety and performance your organization requires.

Limited High Availability Options for Linux

Most Linux distributions give IT departments two inferior choices for high availability: either pay more for the SQL Server Enterprise Edition to implement Always On Availability Groups, or struggle to make complex do-it-yourself HA Linux configurations work well—something that can be extraordinarily difficult to do.

The problem with using the Enterprise Edition is that it undermines the cost-saving strategy for using an open-source operating system on commodity hardware. For a limited number of small SQL Server applications, it might be possible to justify the additional cost. But it’s too expensive for many database applications and will do nothing to provide general-purpose HA for Linux.

Providing HA across all applications running in a Linux environment is possible using open-source software, such as Pacemaker and Corosync, or SUSE Linux Enterprise High Availability Extension. But getting the full software stack to work as desired requires creating (and testing) custom scripts for each application, and these scripts often need to be retested and updated after even minor changes are made to any of the software or hardware being used. Availability-related capabilities that are unsupported in both SQL Server Standard Edition and Linux can make this effort more challenging.

Finding an Alternative High Availability Solution for SQL Server in Linux

To make HA both cost-effective and easy to implement, you may want to consider two different, general-purpose approaches.

One is using storage-based systems that protect data by replicating it within a redundant and resilient storage area network (SAN). This approach is agnostic with respect to the host operating system, but it requires that the entire SAN infrastructure be acquired from a single vendor, and it relies on separate failover provisions to deliver high availability.

The other approach is host-based and involves creating a storage-agnostic SANless cluster across Linux server instances. As an HA overlay, these clusters are capable of operating across both the LAN and WAN in private, public and hybrid clouds. The overlay is also application-agnostic, enabling organizations to have a single, universal HA solution across all applications. While this approach does consume host resources, these are relatively inexpensive and easy to scale in a Linux environment.

Most HA SANless cluster options provide a combination of real-time block-level data replication, continuous application monitoring, and configurable failover/failback recovery policies to protect all business-critical applications, including those using Always On Failover Cluster Instances available in the Standard Edition of SQL Server.

SIOS Technology Corp. offers more robust HA SANless cluster solutions for Linux with advanced capabilities that are designed to free IT from the complexity and daily challenges of supporting and optimizing computing infrastructures. The SIOS Protection Suite solution with LifeKeeper provides:

  • Continuous monitoring of the entire Linux application stack
  • Complete Application-Aware Protection with its application recovery kits (ARK) for fast, safe recovery or failover of complex applications and databases
  • Wizard-driven setup for Linux clustering
  • Configuration flexibility, such as using a traditional shared-storage cluster or software to synchronize local storage in a SANless cluster configuration

For example, a SANless cluster can handle two concurrent failures. The basic operation is the same in the LAN and WAN, as well as across private, public, and hybrid clouds.

In a typical two-node cluster, server #1 is initially the primary and replicates data to server #2. If server #1 experiences a problem, a failover is automatically triggered and server #2 becomes the primary.

In this situation, the IT department would likely begin diagnosing and repairing whatever problem caused server #1 to fail. Once fixed, server #1 could take over again as the primary, or server #2 could continue in that capacity, replicating data to server #1.

With most HA SANless clustering configurations, failovers are automatic, and both failovers and failbacks can be controlled by a browser-based console.

For further information about SIOS LifeKeeper and Protection Suite solutions, visit SIOS SAN and SANless High Availability Clusters for Cluster Server Environments.

Reproduced with permission from SIOS

Step-By-Step: How to configure a SANless MySQL Linux failover cluster in Amazon EC2

August 18, 2020 by Jason Aw

In this step-by-step guide, I will take you through all the steps required to configure a highly available, 2-node MySQL cluster (plus witness server) in Amazon's Elastic Compute Cloud (Amazon EC2). The guide includes screenshots, shell commands, and code snippets as appropriate. I assume that you are somewhat familiar with Amazon EC2 and already have an account. If not, you can sign up today. I'm also going to assume that you have basic familiarity with Linux system administration and failover clustering concepts like virtual IPs.

Failover clustering has been around for many years. In a typical configuration, two or more nodes are configured with shared storage to ensure that, in the event of a failure on the primary node, the secondary or target node(s) will access the most up-to-date data. Using shared storage not only enables a near-zero recovery point objective, it is a mandatory requirement for most clustering software. However, shared storage presents several challenges. First, it is a single point of failure: if the shared storage (typically a SAN) fails, all nodes in the cluster fail. Second, SANs can be expensive and complex to purchase, set up, and manage. Third, shared storage in public clouds, including Amazon EC2, is either not possible or not practical for companies that want to maintain high availability (99.99% uptime), near-zero recovery time and recovery point objectives, and disaster recovery protection.

The following demonstrates how easy it is to create a SANless cluster in the cloud to eliminate these challenges while meeting stringent HA/DR SLAs. The steps below use a MySQL database on Amazon EC2, but the same steps could be adapted to create a 2-node cluster in AWS to protect SQL Server, SAP, Oracle, or any other application.

NOTE: Your view of features, screens and buttons may vary slightly from screenshots presented below

1. Create a Virtual Private Cloud (VPC)
2. Create an Internet Gateway
3. Create Subnets (Availability Zones)
4. Configure Route Tables
5. Configure Security Group
6. Launch Instances
7. Create Elastic IP
8. Create Route Entry for the Virtual IP
9. Disable Source/Dest Checking for ENIs
10. Obtain Access Key ID and Secret Access Key
11. Linux OS Configuration
12. Install EC2 API Tools
13. Install and Configure MySQL
14. Install and Configure Cluster
15. Test Cluster Connectivity

Overview

This article will describe how to create a cluster within a single Amazon EC2 region. The cluster nodes (node1, node2, and the witness server) will reside in different Availability Zones for maximum availability. This also means that the nodes will reside in different subnets.

The following IP addresses will be used:

  • node1: 10.0.0.4
  • node2: 10.0.1.4
  • witness: 10.0.2.4
  • virtual/”floating” IP: 10.1.0.10

Step 1: Create a Virtual Private Cloud (VPC)

First, create a Virtual Private Cloud (aka VPC). A VPC is an isolated network within the Amazon cloud that is dedicated to you. You have full control over things like IP address blocks and subnets, route tables, security groups (i.e. firewalls), and more. You will be launching your EC2 instances (VMs) into your VPC.

From the main AWS dashboard, select “VPC”

Under “Your VPCs”, make sure you have selected the proper region at the top right of the screen. In this guide the “US West (Oregon)” region will be used, because it is a region that has 3 Availability Zones. For more information on Regions and Availability Zones, click here.

Give the VPC a name, and specify the IP block you wish to use. 10.0.0.0/16 will be used in this guide:

You should now see the newly created VPC on the “Your VPCs” screen:
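If you prefer scripting these console steps, the VPC can also be created with the modern AWS CLI (a sketch; assumes the CLI is installed and configured with credentials, and the Name tag is illustrative):

# aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region us-west-2
# aws ec2 create-tags --resources vpc-xxxxxxxx --tags Key=Name,Value=MyVPC

Here vpc-xxxxxxxx is a placeholder for the VpcId returned by the first command.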

Step 2: Create an Internet Gateway

Next, create an Internet Gateway. This is required if you want your Instances (VMs) to be able to communicate with the internet.

On the left menu, select Internet Gateways and click the Create Internet Gateway button. Give it a name, and create:

Next, attach the internet gateway to your VPC:

Select your VPC, and click Attach:

 

Step 3: Create Subnets (Availability Zones)

Next, create 3 subnets. Each subnet will reside in its own Availability Zone. The 3 instances (VMs: node1, node2, witness) will be launched into separate subnets (and therefore Availability Zones) so that the failure of an Availability Zone won't take out multiple nodes of the cluster.

The US West (Oregon) region, aka us-west-2, has 3 availability zones (us-west-2a, us-west-2b, us-west-2c). Create 3 subnets, one in each of the 3 availability zones.

Under VPC Dashboard, navigate to Subnets, and then Create Subnet:

Give the first subnet a name ("Subnet1"), select availability zone us-west-2a, and define the network block (10.0.0.0/24):

Repeat to create the second subnet in availability zone us-west-2b:

Repeat to create the third subnet in availability zone us-west-2c:

Once complete, verify that the 3 subnets have been created, each with a different CIDR block, and in separate Availability Zones, as seen below:
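The equivalent subnet creation with the AWS CLI would look roughly like this (vpc-xxxxxxxx is a placeholder; one call per Availability Zone):

# aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24 --availability-zone us-west-2a
# aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24 --availability-zone us-west-2b
# aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.2.0/24 --availability-zone us-west-2c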

Step 4: Configure Route Tables

Update the VPC’s route table so that traffic to the outside world is sent to the Internet Gateway created in a previous step. From the VPC Dashboard, select Route Tables. Go to the Routes tab, and by default only one route will exist which allows traffic only within the VPC.

Click Edit:

Add another route:

The Destination of the new route will be “0.0.0.0/0” (the internet) and for Target, select your Internet Gateway. Then click Save:

Next, associate the 3 subnets with the Route Table. Click the “Subnet Associations” tab, and Edit:

Check the boxes next to all 3 subnets, and Save:

Verify that the 3 subnets are associated with the main route table:

Later, we will come back and update the Route Table once more, defining a route that will allow traffic to communicate with the cluster’s Virtual IP, but this needs to be done AFTER the linux Instances (VMs) have been created.
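For those scripting these steps, the internet route and subnet associations have rough AWS CLI equivalents (the rtb-, igw-, and subnet- IDs are placeholders for your route table, internet gateway, and three subnets):

# aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
# aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-aaaaaaaa
# aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-bbbbbbbb
# aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-cccccccc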

Step 5: Configure Security Group

Edit the Security Group (a virtual firewall) to allow incoming SSH and VNC traffic. Both will later be used to configure the Linux instances and to install/configure the cluster software.

On the left menu, select “Security Groups” and then click the “Inbound Rules” tab. Click Edit:

Add rules for both SSH (port 22) and VNC. VNC generally uses ports in the 5900 range, depending on how you configure it, so for the purposes of this guide we will open the 5900-5910 port range. Configure accordingly based on your VNC setup:
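Scripted, the two inbound rules might look like this (sg-xxxxxxxx is a placeholder; 0.0.0.0/0 opens the ports to the whole internet, so restrict the source CIDR for production use):

# aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0
# aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 5900-5910 --cidr 0.0.0.0/0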

Step 6: Launch Instances

We will be provisioning 3 instances (virtual machines) in this guide. The first two VMs (called "node1" and "node2") will function as cluster nodes with the ability to bring the MySQL database and its associated resources online. The 3rd VM will act as the cluster's witness server for added protection against split-brain.

To ensure maximum availability, all 3 VMs will be deployed into different Availability Zones within a single region. This means each instance will reside in a different subnet.

Go to the main AWS dashboard, and select EC2:

 

Create “node1”

Create your first instance (“node1”). Click Launch Instance:

Select your linux distribution. The cluster software used later supports RHEL, SLES, CentOS and Oracle Linux. In this guide we will be using RHEL 7.X:

Size your instance accordingly. For the purposes of this guide and to minimize cost, t2.micro size was used because it’s free tier eligible. See here for more information on instance sizes and pricing.

Next, configure instance details. IMPORTANT: make sure to launch this first instance (VM) into “Subnet1“, and define an IP address valid for the subnet (10.0.0.0/24) – below 10.0.0.4 is selected because it’s the first free IP in the subnet.
NOTE: .1/.2/.3 in any given subnet in AWS is reserved and can’t be used.

Next, add an extra disk to the cluster nodes (this will be done on both "node1" and "node2"). This disk will store our MySQL databases and will later be replicated between nodes.

NOTE: You do NOT need to add an extra disk to the "witness" node, only "node1" and "node2". Add a New Volume and enter the desired size:

Define a Tag for the instance, Node1:

Associate the instance with the existing security group, so the firewall rules created previously will be active:

Click Launch:

IMPORTANT: If this is the first instance in your AWS environment, you’ll need to create a new key pair. The private key file will need to be stored in a safe location as it will be required when you SSH into the linux instances.

Create “node2”

Repeat the steps above to create your second linux instance (node2). Configure it exactly like Node1. However, make sure that you deploy it into “Subnet2” (us-west-2b availability zone). The IP range for Subnet2 is 10.0.1.0/24, so an IP of 10.0.1.4 is used here:

Make sure to add a 2nd disk to Node2 as well. It should be the same exact size as the disk you added to Node1:

Give the second instance the tag "Node2":

Create “witness”

Repeat the steps above to create your third Linux instance (witness). Configure it exactly like Node1 and Node2, EXCEPT you DON'T need to add a 2nd disk, since this instance will only act as a witness for the cluster and won't ever bring MySQL online.

Make sure that you deploy it into "Subnet3" (us-west-2c availability zone). The IP range for Subnet3 is 10.0.2.0/24, so an IP of 10.0.2.4 is used here:

NOTE: default disk configuration is fine for the witness node. A 2nd disk is NOT required:

Tag the witness node:

It may take a little while for your 3 instances to provision. Once complete, you'll see them listed as running in your EC2 console:

Step 7: Create Elastic IP

Next, create an Elastic IP, which is a public IP address that will be used to connect to your instance from the outside world. Select Elastic IPs in the left menu, and then click "Allocate New Address":

 

Select the newly created Elastic IP, right-click, and select “Associate Address”:

Associate this Elastic IP with Node1:

Repeat this for the other two instances if you want them to have internet access or be able to SSH/VNC into them directly.

Step 8: Create Route Entry for the Virtual IP

At this point all 3 instances have been created, and the route table will need to be updated one more time in order for the cluster’s Virtual IP to work. In this multi-subnet cluster configuration, the Virtual IP needs to live outside the range of the CIDR allocated to your VPC.

Define a new route that directs traffic destined for the cluster's Virtual IP (10.1.0.10) to the primary cluster node (Node1).

From the VPC Dashboard, select Route Tables and click Edit. Add a route with destination "10.1.0.10/32" and a target of Node1:
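The same route entry can also be added with the AWS CLI (a sketch; rtb-xxxxxxxx and i-xxxxxxxx are placeholders for your route table and the Node1 instance ID):

# aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 10.1.0.10/32 --instance-id i-xxxxxxxx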

Step 9: Disable Source/Dest Checking for ENIs

Next, disable Source/Dest Checking for the Elastic Network Interfaces (ENI) of your cluster nodes. This is required in order for the instances to accept network packets for the virtual IP address of the cluster.

Do this for all ENIs.

Select “Network Interfaces”, right-click on an ENI, and select “Change Source/Dest Check”.

Select “Disabled“:
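This can also be scripted per network interface (eni-xxxxxxxx is a placeholder; repeat for each cluster node's ENI):

# aws ec2 modify-network-interface-attribute --network-interface-id eni-xxxxxxxx --no-source-dest-check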

Step 10: Obtain Access Key ID and Secret Access Key

Later in the guide, the cluster software will use the AWS Command Line Interface (CLI) to manipulate a route table entry for the cluster’s Virtual IP to redirect traffic to the active cluster node. In order for this to work, you will need to obtain an Access Key ID and Secret Access Key so that the AWS CLI can authenticate properly.

In the top-right of the EC2 Dashboard, click on your name, and underneath select “Security Credentials” from the drop-down:

Expand the “Access Keys (Access Key ID and Secret Access Key)” section of the table, and click “Create New Access Key”. Download Key File and store the file in a safe location.

Step 11: Configure Linux OS

Connect to the linux instance(s):

To connect to your newly created linux instances (via SSH), right-click on the instance and select “Connect”. This will display the instructions for connecting to the instance. You will need the Private Key File you created/downloaded in a previous step:

Example:

Here is where we will leave the EC2 Dashboard for a little while and get our hands dirty on the command line, which as a Linux administrator you should be used to by now.

You aren't given the root password to your Linux VMs in AWS (nor a password for the default "ec2-user" account), so once you connect, use the "sudo" command to gain root privileges:

$ sudo su -

Unless you already have a DNS server set up, you'll want to create host file entries on all 3 servers so that they can properly resolve each other by name. Add the following lines to the end of your /etc/hosts file:

10.0.0.4 node1
10.0.1.4 node2
10.0.2.4 witness
10.1.0.10 mysql-vip

Disable SELinux

Edit /etc/sysconfig/selinux and set "SELINUX=disabled":

# vi /etc/sysconfig/selinux

 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#   targeted - Targeted processes are protected,
#   mls - Multi Level Security protection.
SELINUXTYPE=targeted

Set Hostnames

By default, these Linux instances will have a hostname that is based upon the server’s IP address, something like “ip-10-0-0-4.us-west-2.compute.internal”

You might notice that if you attempt to modify the hostname the "normal" way (i.e., editing /etc/sysconfig/network, etc.), it reverts to the original after each reboot! A great thread in the AWS discussion forums describes how to get hostnames to remain static after reboots.

Details here: https://forums.aws.amazon.com/message.jspa?messageID=560446

Comment out the modules that set the hostname in the "/etc/cloud/cloud.cfg" file. The following modules can be commented out using #:

# - set_hostname
# - update_hostname

Next, also change your hostname in /etc/hostname.
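On RHEL 7.x/CentOS 7.x systems you can make the /etc/hostname change with hostnamectl instead of editing the file by hand (run the matching command on each node):

# hostnamectl set-hostname node1

With the cloud-init modules commented out as above, the new name should survive reboots.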

Reboot Cluster Nodes

Reboot all 3 instances so that SELinux is disabled, and the hostname changes take effect.

Install and Configure VNC (and related packages)

In order to access the GUI of our Linux servers, and to later install and configure our cluster, install a VNC server as well as a handful of other required packages (the cluster software needs the redhat-lsb and patch rpms).

# yum groupinstall "X Window System"

# yum groupinstall "Server with GUI"

# yum install tigervnc-server xterm wget unzip patch redhat-lsb

# vncpasswd

For RHEL 7.x/CentOS 7.x, the following URL is a great guide to getting VNC Server running:

https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-vnc-remote-access-for-the-gnome-desktop-on-centos-7

NOTE: This example configuration runs VNC on display 2 (:2, aka port 5902) and as root (not secure). Adjust accordingly!

# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:2.service

# vi /etc/systemd/system/vncserver@:2.service

[Service]
Type=forking
# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/sbin/runuser -l root -c "/usr/bin/vncserver %i -geometry 1024x768"
PIDFile=/root/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

# systemctl daemon-reload

# systemctl enable vncserver@:2.service

# vncserver :2 -geometry 1024x768

For RHEL/CentOS 6.x systems:

# vi /etc/sysconfig/vncservers

VNCSERVERS="2:root"
VNCSERVERARGS[2]="-geometry 1024x768"

# service vncserver start

# chkconfig vncserver on

Open a VNC client and connect to <ElasticIP>:2. If you can't connect, your Linux firewall is likely in the way. Either open the VNC port we are using here (port 5902), or, for now, disable the firewall (NOT RECOMMENDED FOR PRODUCTION ENVIRONMENTS):

# systemctl stop firewalld

# systemctl disable firewalld
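A safer alternative to disabling firewalld entirely is to open just the VNC port used in this guide (a sketch, assuming the default firewalld zone):

# firewall-cmd --permanent --add-port=5902/tcp
# firewall-cmd --reload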

Partition and Format the “data” disk

When the Linux instances were launched, an extra disk was added to each cluster node to store the application data we will be protecting. In this case it happens to be MySQL databases.

The second disk should appear as /dev/xvdb. You can run the "fdisk -l" command to verify. You'll see that /dev/xvda (OS) is already being used.

# fdisk -l

Disk /dev/xvda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt

#  Start  End       Size  Type             Name
1  2048   4095      1M    BIOS boot parti
2  4096   20971486  10G   Microsoft basic

Disk /dev/xvdb: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Here I will create a partition (/dev/xvdb1), format it, and mount it at the default location for MySQL, which is /var/lib/mysql. Perform the following steps on BOTH "node1" and "node2":

# fdisk /dev/xvdb

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x8c16903a.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-4194303, default 2048): <enter>
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4194303, default 4194303): <enter>
Using default value 4194303
Partition 1 of type Linux and of size 2 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# mkfs.ext4 /dev/xvdb1
# mkdir /var/lib/mysql

On node1, mount the filesystem:

# mount /dev/xvdb1 /var/lib/mysql

Step 12: Install EC2 API Tools

The EC2 API Tools (EC2 CLI) must be installed on each of the cluster nodes so that the cluster software can later manipulate Route Tables, enabling connectivity to the Virtual IP.

The following URL is an excellent guide to setting this up:

http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/set-up-ec2-cli-linux.html

Here are the key steps:

Download, unzip, and move the CLI tools to the standard location (/opt/aws):

# wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip

# unzip ec2-api-tools.zip

# mv ec2-api-tools-1.7.5.1/ /opt/aws/

# export EC2_HOME="/opt/aws"

If java isn’t already installed (run “which java” to check), install it:

# yum install java-1.8.0-openjdk

# export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-…"

Example (based on the default config of a RHEL 7.2 system; the exact JDK path will vary, so adjust accordingly).

You’ll need your AWS Access Key and AWS Secret Key. Keep these values handy, because they will be needed later during cluster setup too! Refer to the following URL for more information:

https://console.aws.amazon.com/iam/home?#security_credential

# export AWS_ACCESS_KEY=your-aws-access-key-id

# export AWS_SECRET_KEY=your-aws-secret-key
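Since these exports only last for the current shell session, you may want to persist them across logins; a sketch appending them to root's ~/.bash_profile (the JAVA_HOME value is a placeholder; use your actual JDK path):

# cat >> ~/.bash_profile <<'EOF'
export EC2_HOME="/opt/aws"
export JAVA_HOME="/usr/lib/jvm/jre"
export AWS_ACCESS_KEY=your-aws-access-key-id
export AWS_SECRET_KEY=your-aws-secret-key
EOF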

Test CLI utility functionality:

# /opt/aws/bin/ec2-describe-regions
REGION eu-west-1 ec2.eu-west-1.amazonaws.com
REGION ap-southeast-1 ec2.ap-southeast-1.amazonaws.com
REGION ap-southeast-2 ec2.ap-southeast-2.amazonaws.com
REGION eu-central-1 ec2.eu-central-1.amazonaws.com
REGION ap-northeast-2 ec2.ap-northeast-2.amazonaws.com
REGION ap-northeast-1 ec2.ap-northeast-1.amazonaws.com
REGION us-east-1 ec2.us-east-1.amazonaws.com
REGION sa-east-1 ec2.sa-east-1.amazonaws.com
REGION us-west-1 ec2.us-west-1.amazonaws.com
REGION us-west-2 ec2.us-west-2.amazonaws.com

Step 13: Install and Configure MySQL

Next, install the MySQL packages, initialize a sample database, and set the "root" password for MySQL. In RHEL 7.x, the MySQL packages have been replaced with the MariaDB packages.

On “node1”:

# yum install mariadb mariadb-server

# mount /dev/xvdb1 /var/lib/mysql

# /usr/bin/mysql_install_db --datadir="/var/lib/mysql/" --user=mysql

# mysqld_safe --user=root --socket=/var/lib/mysql/mysql.sock --port=3306 --datadir="/var/lib/mysql/" &

# # NOTE: This next command allows remote connections from ANY host. NOT a good security practice!
# echo "update user set Host='%' where Host='node1'; flush privileges" | mysql mysql

# # Set MySQL's root password to 'SIOS'
# echo "update user set Password=PASSWORD('SIOS') where User='root'; flush privileges" | mysql mysql

Create a MySQL configuration file. We will place it on the data disk so that it will later be replicated (/var/lib/mysql/my.cnf). Example:

# vi /var/lib/mysql/my.cnf

 

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
pid-file=/var/run/mariadb/mariadb.pid
user=root
port=3306
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

[client]
user=root
password=SIOS

Move the original MySQL configuration file aside, if it exists:

# mv /etc/my.cnf /etc/my.cnf.orig

On "node2", you ONLY need to install the MariaDB/MySQL packages. The other steps aren't required:

[root@node2 ~]# yum install mariadb mariadb-server

Step 14: Install and Configure the Cluster

At this point, we are ready to install and configure our cluster. SIOS Protection Suite for Linux (aka SPS-Linux) will be used in this guide as the clustering technology. It provides both high availability failover clustering features (LifeKeeper) and real-time, block-level data replication (DataKeeper) in a single, integrated solution. SPS-Linux enables you to deploy a "SANless" cluster, aka a "shared nothing" cluster, meaning that cluster nodes don't have any shared storage, as is the case with EC2 instances.

Install SIOS Protection Suite for Linux

Perform the following steps on ALL 3 VMs (node1, node2, witness):

Download the SPS-Linux installation image file (sps.img) and obtain either a trial license or purchase permanent licenses. Contact SIOS for more information.

You will loopback mount it and run the "setup" script inside as root (or first run "sudo su -" to obtain a root shell). For example:

# mkdir /tmp/install

# mount -o loop sps.img /tmp/install

# cd /tmp/install

# ./setup

During the installation script, you’ll be prompted to answer a number of questions. You will hit Enter on almost every screen to accept the default values. Note the following exceptions:

  • On the screen titled “High Availability NFS” you may select “n” as we will not be creating a highly available NFS server
  • Towards the end of the setup script, you can choose to install a trial license key now, or later. We will install the license key later, so you can safely select “n” at this point
  • In the final screen of the “setup” select the ARKs (Application Recovery Kits, i.e. “cluster agents”) you wish to install from the list displayed on the screen.
    • The ARKs are ONLY required on "node1" and "node2". You do not need to install them on "witness". Navigate the list with the up/down arrows, and press SPACEBAR to select the following:
        • lkDR – DataKeeper for Linux
        • lkSQL – LifeKeeper MySQL RDBMS Recovery Kit
      • This will result in the following additional RPMs installed on “node1” and “node2”:
        • steeleye-lkDR-9.0.2-6513.noarch.rpm
        • steeleye-lkSQL-9.0.2-6513.noarch.rpm

Install Witness/Quorum package

The Quorum/Witness Server Support Package for LifeKeeper (steeleye-lkQWK) combined with the existing failover process of the LifeKeeper core allows system failover to occur with a greater degree of confidence in situations where total network failure could be common. This effectively means that failovers can be done while greatly reducing the risk of “split-brain” situations.

Install the Witness/Quorum rpm on all 3 nodes (node1, node2, witness):

# cd /tmp/install/quorum

# rpm -Uvh steeleye-lkQWK-9.0.2-6513.noarch.rpm

On ALL 3 nodes (node1, node2, witness), edit /etc/default/LifeKeeper, set NOBCASTPING=1
On ONLY the Witness server (“witness”), edit /etc/default/LifeKeeper, set WITNESS_MODE=off/none

Install the EC2 Recovery Kit Package

SPS-Linux provides specific features that allow resources to failover between nodes in different availability zones and regions. Here, the EC2 Recovery Kit (i.e. cluster agent) is used to manipulate Route Tables so that connections to the Virtual IP are routed to the active cluster node.

Install the EC2 rpm (node1, node2):

# cd /tmp/install/amazon

# rpm -Uvh steeleye-lkECC-9.0.2-6513.noarch.rpm

Install a License key

On all 3 nodes, use the “lkkeyins” command to install the license file that you obtained from SIOS:

# /opt/LifeKeeper/bin/lkkeyins <path_to_file>/<filename>.lic

Start LifeKeeper

On all 3 nodes, use the “lkstart” command to start the cluster software:

# /opt/LifeKeeper/bin/lkstart

Set User Permissions for LifeKeeper GUI

On all 3 nodes, create a new Linux user account (e.g., "tony" in this example). Edit /etc/group and add the "tony" user to the "lkadmin" group to grant access to the LifeKeeper GUI. By default only "root" is a member of the group, and we don't have the root password here:

 

# useradd tony

# passwd tony

# vi /etc/group

 

lkadmin:x:1001:root,tony
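Equivalently, instead of editing /etc/group by hand, you can append the user to the group with usermod:

# usermod -aG lkadmin tony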

Open the LifeKeeper GUI

Make a VNC connection to the Elastic IP (Public IP) address of node1. Based on the VNC configuration from above, you would connect to <Public_IP>:2 using the VNC password you specified earlier. Once logged in, open a terminal window and run the LifeKeeper GUI using the following command:

# /opt/LifeKeeper/bin/lkGUIapp &

You will be prompted to connect to your first cluster node (“node1”). Enter the linux userid and password specified during VM creation:

Next, connect to both “node2” and “witness” by clicking the “Connect to Server” button highlighted in the following screenshot:

You should now see all 3 servers in the GUI, with a green checkmark icon indicating they are online and healthy:

Create Communication Paths

Right-click on “node1” and select Create Comm Path

Select BOTH "node2" and "witness" and then follow the wizard. This will create comm paths between:

node1 & node2
node1 & witness


A comm path still needs to be created between node2 & witness. Right click on “node2” and select Create Comm Path. Follow the wizard and select “witness” as the remote server:


At this point the following comm paths have been created:

node1 <-> node2
node1 <-> witness
node2 <-> witness

The icons in front of the servers have changed from a green “checkmark” to a yellow “hazard sign”. This is because we only have a single communication path between nodes.

If the VMs had multiple NICs, you would create redundant comm paths between each server (creating instances with multiple network interfaces won't be covered in this article).


To remove the warning icons, go to the View menu and de-select “Comm Path Redundancy Warning”:


Result:

 

Verify Communication Paths

Use the “lcdstatus” command to view the state of cluster resources. Run the following commands to verify that you have correctly created comm paths on each node to the other two servers involved:

# /opt/LifeKeeper/bin/lcdstatus -q -d node1
MACHINE  NETWORK  ADDRESSES/DEVICE    STATE  PRIO
node2    TCP      10.0.0.4/10.0.1.4   ALIVE  1
witness  TCP      10.0.0.4/10.0.2.4   ALIVE  1

# /opt/LifeKeeper/bin/lcdstatus -q -d node2
MACHINE  NETWORK  ADDRESSES/DEVICE    STATE  PRIO
node1    TCP      10.0.1.4/10.0.0.4   ALIVE  1
witness  TCP      10.0.1.4/10.0.2.4   ALIVE  1

# /opt/LifeKeeper/bin/lcdstatus -q -d witness
MACHINE  NETWORK  ADDRESSES/DEVICE    STATE  PRIO
node1    TCP      10.0.2.4/10.0.0.4   ALIVE  1
node2    TCP      10.0.2.4/10.0.1.4   ALIVE  1

Create a Data Replication cluster resource (i.e. Mirror)

Next, create a Data Replication resource to replicate the /var/lib/mysql partition from node1 (source) to node2 (target). Click the “green plus” icon to create a new resource:


Follow the wizard with these selections:

Please Select Recovery Kit: Data Replication
Switchback Type: intelligent
Server: node1
Hierarchy Type: Replicate Existing Filesystem
Existing Mount Point: /var/lib/mysql
Data Replication Resource Tag: datarep-mysql
File System Resource Tag: /var/lib/mysql
Bitmap File: (default value)
Enable Asynchronous Replication: No

After the resource has been created, the “Extend” (i.e. define backup server) wizard will appear.

Use the following selections:

Target Server: node2
Switchback Type: Intelligent
Template Priority: 1
Target Priority: 10
Target Disk: /dev/xvdb1
Data Replication Resource Tag: datarep-mysql
Bitmap File: (default value)
Replication Path: 10.0.0.4/10.0.1.4
Mount Point: /var/lib/mysql
Root Tag: /var/lib/mysql

The cluster will look like this:

Create Virtual IP

Next, create a Virtual IP cluster resource. Click the “green plus” icon to create a new resource:


Follow the wizard to create the IP resource with these selections:

Select Recovery Kit: IP
Switchback Type: Intelligent
IP Resource: 10.1.0.10
Netmask: 255.255.255.0
Network Interface: eth0
IP Resource Tag: ip-10.1.0.10

Extend the IP resource with these selections:

Switchback Type: Intelligent
Template Priority: 1
Target Priority: 10
IP Resource: 10.1.0.10
Netmask: 255.255.255.0
Network Interface: eth0
IP Resource Tag: ip-10.1.0.10

The cluster will now look like this, with both Mirror and IP resources created:

Configure a Ping List for the IP resource

By default, SPS-Linux monitors the health of IP resources by performing a broadcast ping. In many virtual and cloud environments, broadcast pings don't work. In a previous step, we set "NOBCASTPING=1" in /etc/default/LifeKeeper to turn off broadcast ping checks. Instead, we will define a ping list.

This is a list of IP addresses to be pinged during IP health checks for this IP resource.

In this guide, we will add the witness server (10.0.2.4) to our ping list.

Right-click on the IP resource (ip-10.1.0.10) and select Properties:

You will see that initially, no ping list is configured for our 10.1.0.0 subnet. Click “Modify Ping List”:

Enter “10.0.2.4” (the IP address of our witness server), click “Add address” and finally click “Save List”:


You will be returned to the IP properties panel, and can verify that 10.0.2.4 has been added to the ping list. Click OK to close the window:

Create the MySQL resource hierarchy

Next, create a MySQL cluster resource. The MySQL resource is responsible for stopping/starting/monitoring of your MySQL database.

Before creating MySQL resource, make sure the database is running. Run “ps -ef | grep sql” to check.

If it’s running, great – nothing to do. If not, start the database back up:

# mysqld_safe --user=root --socket=/var/lib/mysql/mysql.sock --port=3306 --datadir="/var/lib/mysql/" &

To create it, click the "green plus" icon to create a new resource, then follow the wizard with these selections:

Select Recovery Kit: MySQL Database
Switchback Type: Intelligent
Server: node1
Location of my.cnf: /var/lib/mysql
Location of MySQL executables: /usr/bin
Database Tag: mysql

Extend the MySQL resource with the following selections:

Target Server: node2
Switchback Type: intelligent
Template Priority: 1
Target Priority: 10

As a result, your cluster will look as follows. Notice that the Data Replication resource was automatically moved underneath the database (dependency automatically created) to ensure it’s always brought online before the database:

Create an EC2 resource to manage the route tables upon failover

SPS-Linux provides specific features that allow resources to failover between nodes in different availability zones and regions. Here, the EC2 Recovery Kit (i.e. cluster agent) is used to manipulate Route Tables so that connections to the Virtual IP are routed to the active cluster node.

To create, click the “green plus” icon to create a new resource:


Follow the wizard to create the EC2 resource with these selections:

Select Recovery Kit: Amazon EC2
Switchback Type: Intelligent
Server: node1
EC2 Home: /opt/aws
EC2 URL: ec2.us-west-2.amazonaws.com
AWS Access Key: (enter Access Key obtained earlier)
AWS Secret Key: (enter Secret Key obtained earlier)
EC2 Resource Type: RouteTable (Backend cluster)
IP Resource: ip-10.1.0.10
EC2 Resource Tag: ec2-10.1.0.10

Extend the EC2 resource with the following selections:

Target Server: node2
Switchback Type: intelligent
Template Priority: 1
Target Priority: 10
EC2 Resource Tag: ec2-10.1.0.10

The cluster will look like this. Notice how the EC2 resource is underneath the IP resource:

Create a Dependency between the IP resource and the MySQL Database resource

Create a dependency between the IP resource and the MySQL Database resource so that they failover together as a group. Right click on the “mysql” resource and select “Create Dependency”:

On the following screen, select the “ip-10.1.0.10” resource as the dependency. Click Next and continue through the wizard:

At this point the SPS-Linux cluster configuration is complete. The resource hierarchy will look as follows:

Step 15: Test Cluster Connectivity

At this point, all of our Amazon EC2 and Cluster configurations are complete! Cluster resources are currently active on node1:

Test connectivity to the cluster from the witness server (or another Linux instance if you have one). SSH into the witness server, run "sudo su -" to gain root access, and install the mysql client if needed:

[root@witness ~]# yum -y install mysql

Test MySQL connectivity to the cluster:

[root@witness ~]# mysql --host=10.1.0.10 mysql -u root -p

Execute the following MySQL query to display the hostname of the active cluster node:

MariaDB [mysql]> select @@hostname;

+------------+
| @@hostname |
+------------+
| node1      |
+------------+
1 row in set (0.00 sec)

MariaDB [mysql]>

Using the LifeKeeper GUI, fail over from Node1 to Node2. Right-click on the mysql resource underneath node2, and select "In Service…":

After failover has completed, re-run the MySQL query. You’ll notice that the MySQL client has detected that the session was lost (during failover) and automatically reconnects:

Execute the following MySQL query to display the hostname of the active cluster node, verifying that now “node2” is active:

MariaDB [mysql]> select @@hostname;

ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    12
Current database: mysql

+------------+
| @@hostname |
+------------+
| node2      |
+------------+
1 row in set (0.53 sec)

MariaDB [mysql]>

Reproduced with permission from SIOS

 
