Deployment of a SQL Server Failover Cluster Instance on Huawei Cloud

September 28, 2021 by Jason Aw


*DISCLAIMER: While the following fully covers the high-availability portion within the scope of our product, it is a setup guide only and should be adapted to your own configuration.

Overview

HUAWEI CLOUD is a leading cloud service provider, not only in China but globally, with many datacenters around the world. It brings together Huawei's 30-plus years of expertise in ICT infrastructure products and solutions, and is committed to providing reliable, secure, and cost-effective cloud services that empower applications, harness the power of data, and help organizations of all sizes grow in today's intelligent world. HUAWEI CLOUD is also committed to bringing affordable, effective, and reliable cloud and AI services through technological innovation.

DataKeeper Cluster Edition provides replication in a virtual private cloud (VPC) within a single region, across availability zones, in Huawei Cloud. In this particular SQL Server clustering example, we will launch four instances (one domain controller instance, two SQL Server instances, and a quorum/witness instance) across three availability zones.

Huawei Cloud SIOS Datakeeper HA Architecture

DataKeeper Cluster Edition provides support for a data replication node outside of the cluster, with all cluster nodes in Huawei Cloud. In this particular SQL Server clustering example, four instances are launched (one domain controller instance, two SQL Server instances, and a quorum/witness instance) across three availability zones. An additional DataKeeper instance is then launched in a second region, with a VPN instance in each region. Please see Configuration of Data Replication From a Cluster Node to External DR Site for more information. For additional information on using multiple regions, please see Connecting Two VPCs in Different Regions.

Huawei Cloud SIOS Datakeeper DR architecture

DataKeeper Cluster Edition also provides support for a data replication node outside of the cluster with only the node outside of the cluster in Huawei Cloud. In this particular SQL Server clustering example, WSFC1 and WSFC2 are in an on-site cluster replicating to a Huawei Cloud instance. Then an additional DataKeeper instance is launched in a region in Huawei Cloud. Please see Configuration of Data Replication From a Cluster Node to External DR Site for more information.

Huawei Cloud SIOS Datakeeper Hybrid DR Architecture

Requirements

  • Virtual Private Cloud: in a single region with three availability zones
  • Instance Type: minimum recommended instance type s3.large.2
  • Operating System: see the DKCE Support Matrix
  • Elastic IP: one elastic IP address connected to the domain controller
  • Instances: four instances (one domain controller instance, two SQL Server instances, and one quorum/witness instance)
  • Each SQL Server: an ENI (Elastic Network Interface) with four IPs
    o    the primary ENI IP, statically defined in Windows and used by DataKeeper Cluster Edition
    o    three IPs maintained by ECS and used by Windows Failover Clustering, DTC, and SQLFC
  • Volumes: three volumes (EVS, formatted NTFS only)
    o    one primary volume (C: drive)
    o    two additional volumes: one for Failover Clustering and one for MSDTC

Release Notes

Before beginning, make sure you read the DataKeeper Cluster Edition Release Notes for the latest information. It is highly recommended that you read and understand the DataKeeper Cluster Edition Installation Guide.

Create a Virtual Private Cloud (VPC)

A virtual private cloud is the first object you create when using DataKeeper Cluster Edition.

*A Virtual Private Cloud (VPC) is an isolated private cloud consisting of a configurable pool of shared computing resources in a public cloud.

  1. Using the email address and password specified when signing up for Huawei Cloud, sign in to the Huawei Cloud Management Console.
  2. From the Services dropdown, select Virtual Private Cloud.
  3. On the right side of the screen, click Create VPC and select the region that you want to use.
  4. Input the name that you want to use for the VPC.
  5. Define your virtual private cloud subnet by entering your CIDR (Classless Inter-Domain Routing) block as described below.
  6. Input the subnet name, then click Create Now.

*A Route Table will automatically be created with a “main” association to the new VPC. You can use it later or create another Route Table.

*HELPFUL LINK:
Huawei’s Creating a Virtual Private Cloud (VPC)

Launch an Instance

The following walks you through launching instances into your subnet. You will launch two instances into one availability zone: one for your domain controller and one for your first SQL instance. Then you will launch the second SQL instance into another availability zone, and a quorum witness instance into yet another availability zone.

*HELPFUL LINKS:
Huawei Cloud ECS Instances

  1. Using the email address and password specified when signing up for Huawei Cloud, sign in to the Huawei Cloud Management Console.
  2. From the Service List dropdown, select Elastic Cloud Server.
  3. Click the Buy ECS button and choose the Billing Mode, Region, and AZ (Availability Zone) in which to deploy the instance.
  4. Select your Instance Type. (Note: select s3.large.2 or larger.)
  5. Choose an Image. Under Public Image, select the Windows Server 2019 Datacenter 64bit English image.
  6. Configure the network settings:
    1. For Configure Network, select your VPC.
    2. For Subnet, select the subnet that you want to use, select Manually-specified IP address, and input the IP address that you want to use.
    3. Select the Security Group to use, or Edit and select an existing one.
    4. Assign an EIP if you need the ECS instance to access the internet.
    5. Click Configure Advanced Settings, provide a name for the ECS, use Password for Login Mode, and provide a secure password for Administrator login.
    6. Click Configure Now under Advanced Options, add a Tag to name your instance, and click Confirm.
  7. Perform a final review of the instance and click Submit.

*IMPORTANT: Make a note of this initial administrator password. It will be needed to log on to your instance.

Repeat the above steps for all instances.

Connect to Instances

You can connect to your domain controller instance via Remote Login from the ECS pane.

Log in as administrator and enter your administrator password.

*BEST PRACTICE: Once logged on, it is best practice to change your password.

Configure the Domain Controller Instance

Now that the instances have been created, we start by setting up the domain controller instance.

This guide is not a tutorial on how to set up an Active Directory domain controller. We recommend reading articles on how to set up and configure an Active Directory server. It is very important to understand that even though the instance is running in Huawei Cloud, this is a regular installation of Active Directory.

Static IP Addresses

Configure Static IP Addresses for your Instances

  1. Connect to your domain controller instance.
  2. Click Start, then Control Panel.
  3. Click Network and Sharing Center.
  4. Select your network interface.
  5. Click Properties.
  6. Click Internet Protocol Version 4 (TCP/IPv4), then Properties.
  7. Obtain the current IPv4 address, default gateway, and DNS server for the network interface from the Huawei Cloud console.
  8. In the Internet Protocol Version 4 (TCP/IPv4) Properties dialog box, under Use the following IP address, enter your IPv4 address.
  9. In the Subnet mask box, type the subnet mask associated with your virtual private cloud subnet.
  10. In the Default Gateway box, type the IP address of the default gateway, then click OK.
  11. For the Preferred DNS Server, enter the primary IP address of your domain controller (ex. 15.0.1.72).
  12. Click OK, then select Close. Exit Network and Sharing Center.
  13. Repeat the above steps on your other instances.
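The same settings can be applied from PowerShell if you prefer. This is a minimal sketch; the interface alias, prefix length, and addresses (following the 15.0.1.x example used in this guide) are illustrative and must be replaced with the values shown for your instance in the Huawei Cloud console.

# Illustrative values: replace with the address, gateway, and DNS shown in the console
$alias = "Ethernet"

# Statically assign the IPv4 address and default gateway on the interface
New-NetIPAddress -InterfaceAlias $alias -IPAddress 15.0.1.72 -PrefixLength 24 -DefaultGateway 15.0.1.1

# Point DNS at the domain controller (on the DC itself, this is its own address)
Set-DnsClientServerAddress -InterfaceAlias $alias -ServerAddresses 15.0.1.72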

Join the Two SQL Instances and the Witness Instance to Domain

*Before attempting to join a domain, make these network adjustments: on your network adapter, add or change the Preferred DNS server to the new domain controller's address, which is also its DNS server. Run ipconfig /flushdns to refresh the DNS resolver cache after this change, before attempting to join the domain.

*Ensure that Core Networking and File and Printer Sharing options are permitted in Windows Firewall.

  1. On each instance, click Start, then right-click Computer and select Properties.
  2. On the far right, select Change Settings.
  3. Click on Change.
  4. Enter a new Computer Name.
  5. Select Domain.
  6. Enter the Domain Name (ex. docs.huawei.com).
  7. Click Apply.
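The rename and domain join can also be scripted. A minimal PowerShell sketch, assuming the example domain name from step 6; the computer name SQL1 is hypothetical:

# Refresh the DNS resolver cache after changing the preferred DNS server
Clear-DnsClientCache

# Rename the machine and join the domain in one step, then reboot
# (you will be prompted for domain credentials)
Add-Computer -DomainName "docs.huawei.com" -NewName "SQL1" -Credential (Get-Credential) -Restart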

*Use Control Panel to make sure all instances are using the correct time zone for your location.

*BEST PRACTICE: It is recommended that the System Page File be set to system managed (not automatic) and always use the C: drive.

Control Panel > Advanced system settings > Performance > Settings > Advanced > Virtual Memory. Select System managed size, Volume C: only, then select Set to save.
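If you would rather script the page file setting, it can be done through CIM. A sketch, with the caveat that if no Win32_PageFileSetting instance exists yet for C: (for example, immediately after disabling automatic management), you would first have to create one:

# Turn off "Automatically manage paging file size for all drives"
Get-CimInstance Win32_ComputerSystem | Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

# InitialSize/MaximumSize of 0/0 means "system managed size" for this page file
Get-CimInstance Win32_PageFileSetting -Filter "Name='C:\\pagefile.sys'" | Set-CimInstance -Property @{ InitialSize = 0; MaximumSize = 0 }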

Assign Secondary Private IPs to the Two SQL Instances

In addition to the Primary IP, you will need to add three additional IPs (Secondary IPs) to the elastic network interface for each SQL instance.

  1. From the Service List dropdown, select Elastic Cloud Server.
  2. Click the instance for which you want to add secondary private IP addresses.
  3. Select NICs > Manage Virtual IP Address.
  4. Click Assign Virtual IP Address, select Manual, and enter an IP address that is within the subnet range for the instance (ex. for 15.0.1.25, enter 15.0.1.26). Click OK.
  5. Click the More dropdown on the IP address row, select Bind to Server, then select the server to bind the IP address to and the NIC.
  6. Click OK to save your work.
  7. Perform the above on both SQL Instances.

*HELPFUL LINKS:
Managing Virtual IP Addresses
Binding a Virtual IP Address to an EIP or ECS

Create and Attach Volumes

DataKeeper is a block-level volume replication solution and requires that each node in the cluster have additional volume(s) (other than the system drive) that are the same size and same drive letters. Please review Volume Considerations for additional information regarding storage requirements.

Create Volumes

Create two volumes for each SQL Server instance, in the same availability zone as that instance, for a total of four volumes.

  1. From the Service List dropdown, select Elastic Cloud Server.
  2. Click the instance that you want to manage.
  3. Go to the Disks tab.
  4. Click Add Disk to add a new volume of your chosen type and size; make sure you select the same AZ as the SQL server you intend to attach it to.
  5. Select the check box to agree to the SLA and click Submit.
  6. Click Back to Server Console.
  7. Attach the disk to the SQL instance if necessary.
  8. Do this for all four volumes.

*HELPFUL LINKS:
Elastic Volume Service
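Once a disk is attached, it must be brought online and formatted NTFS, with identical sizes and drive letters on both nodes as noted above. A minimal PowerShell sketch for a single new disk; the drive letter and label are illustrative:

# Initialize the first RAW (new) disk, create a partition, and format it NTFS
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Select-Object -First 1 |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data' -Confirm:$false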

Configure the Cluster

Prior to installing DataKeeper Cluster Edition, it is important to have Windows Server configured as a cluster using either a node majority quorum (if there is an odd number of nodes) or a node and file share majority quorum (if there is an even number of nodes). Consult the Microsoft documentation on clustering in addition to this topic for step-by-step instructions. Note: Microsoft released a hotfix for Windows 2008R2 that allows disabling of a node’s vote which may help achieve a higher level of availability in certain multi-site cluster configurations.

Add Failover Clustering

Add the Failover Clustering feature to both SQL instances.

  1. Launch Server Manager.
  2. Select Features in the left pane and click Add Features in the Features pane. This starts the Add Features Wizard.
  3. Select Failover Clustering.
  4. Select Install.
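Equivalently, the feature can be added from PowerShell on each SQL instance:

# Add the Failover Clustering feature and its management tools
Install-WindowsFeature Failover-Clustering -IncludeManagementTools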

Validate a Configuration

  1. Open Failover Cluster Manager.
  2. In Failover Cluster Manager, select Validate a Configuration.
  3. Click Next, then add your two SQL instances.

Note: To search, select Browse, then click Advanced and Find Now. This will list the available instances.

  4. Click Next.
  5. Select Run Only Tests I Select and click Next.
  6. In the Test Selection screen, deselect Storage and click Next.
  7. At the resulting confirmation screen, click Next.
  8. Review the Validation Summary Report, then click Finish.
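The same validation can be run from PowerShell, skipping the storage tests that do not apply to a SANless cluster. The node names here are illustrative:

# Validate both nodes, ignoring the storage tests
Test-Cluster -Node sqlnode1,sqlnode2 -Ignore "Storage"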

Create Cluster

  1. In Failover Cluster Manager, click Create a Cluster, then click Next.
  2. Enter your two SQL instances.
  3. On the Validation Warning page, select No, then click Next.
  4. On the Access Point for Administering the Cluster page, enter a unique name for your WSFC cluster. Then enter the Failover Clustering IP address for each node involved in the cluster. This is the first of the three secondary IP addresses added previously to each instance.
  5. IMPORTANT! Uncheck the "Add all available storage to the cluster" checkbox. DataKeeper mirrored drives must not be managed natively by the cluster; they will be managed as DataKeeper Volumes.
  6. Click Next on the Confirmation page.
  7. On the Summary page, review any warnings, then select Finish.
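The whole sequence can also be done with one PowerShell command; -NoStorage is the equivalent of unchecking "Add all available storage to the cluster". The cluster name, node names, and address are illustrative:

# Create the cluster with a static address and no clustered storage
New-Cluster -Name WSFCCLUSTER -Node sqlnode1,sqlnode2 -StaticAddress 15.0.1.26 -NoStorage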

Configure Quorum/Witness

  1. Create a folder on your quorum/witness instance (witness).
  2. Share the folder.
    1. Right-click the folder and select Share With / Specific People….
    2. From the dropdown, select Everyone and click Add.
    3. Under Permission Level, select Read/Write.
    4. Click Share, then Done. (Make note of the path of this file share; it is used below.)
  3. In Failover Cluster Manager, right-click the cluster, choose More Actions, then Configure Cluster Quorum Settings. Click Next.
  4. On the Select Quorum Configuration page, choose Node and File Share Majority and click Next.
  5. On the Configure File Share Witness screen, enter the path to the file share previously created and click Next.
  6. On the Confirmation page, click Next.
  7. On the Summary page, click Finish.
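The quorum change can also be made from PowerShell; the share path below is illustrative and should point at the file share created in step 2:

# Switch the cluster to a node and file share majority quorum
Set-ClusterQuorum -NodeAndFileShareMajority \\witness\quorum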

Install and Configure DataKeeper

After the basic cluster is configured but prior to any cluster resources being created, install and license DataKeeper Cluster Edition on all cluster nodes. See the DataKeeper Cluster Edition Installation Guide for detailed instructions.

  1. Run DataKeeper setup to install DataKeeper Cluster Edition on both SQL instances.
  2. Enter your license key and reboot when prompted.
  3. Launch the DataKeeper GUI and connect to server.

*Note: The domain or server account used must be added to the Local System Administrators Group. The account must have administrator privileges on each server that DataKeeper is installed on. Refer to DataKeeper Service Log On ID and Password Selection for additional information.

  4. Right-click Jobs and connect to both SQL servers.
  5. Create a Job for each mirror you will create: one for your DTC resource and one for your SQL resource.
  6. When asked if you would like to auto-register the volume as a cluster volume, select Yes.

*Note: If installing DataKeeper Cluster Edition on Windows “Core” (GUI-less Windows), make sure to read Installing and Using DataKeeper on Windows 2008R2/2012 Server Core Platforms for detailed instructions.

Configure MSDTC

  1. For Windows Server 2012 and 2016, in the Failover Cluster Manager GUI, select Roles, then select Configure Role.
  2. Select Distributed Transaction Coordinator (DTC), and click Next.

*For Windows Server 2008, in the Failover Cluster Manager GUI, select Services and Applications, then select Configure a Service or Application and click Next.

  3. On the Client Access Point screen, enter a name, then enter the MSDTC IP address for each node involved in the cluster. This is the second of the three secondary IP addresses added previously to each instance. Click Next.
  4. Select the MSDTC volume and click Next.
  5. On the Confirmation page, click Next.
  6. Once the Summary page displays, click Finish.

Install SQL on the First SQL Instance

  1. On the domain controller server, create a folder and share it.
    1. For example, "TEMPSHARE" with Everyone permission.
  2. Create a subfolder "SQL" and copy the SQL .iso installer into that subfolder.
  3. On the SQL server, map a network drive to the shared folder on the domain controller, for example: net use S: \\<domain controller>\TEMPSHARE
  4. The S: drive will appear on the SQL server. Change to the SQL folder and locate the SQL .iso installer. Right-click the .iso file and select Mount; the mounted image (F: in this example) contains the setup.exe installer. Run setup from the command line:

F:\>Setup /SkipRules=Cluster_VerifyForErrors /Action=InstallFailoverCluster

  1. On Setup Support Rules, click OK.
  2. On the Product Key dialog, enter your product key and click Next.
  3. On the License Terms dialog, accept the license agreement and click Next.
  4. On the Product Updates dialog, click Next.
  5. On the Setup Support Files dialog, click Install.
  6. On the Setup Support Rules dialog, you will receive a warning. Click Next, ignoring this message, since it is expected in a multi-site or non-shared storage cluster.
  7. Verify Cluster Node Configuration and click Next.
  8. Configure your Cluster Network by adding the “third” secondary IP address for your SQL instance and click Next. Click Yes to proceed with multi-subnet configuration.
  9. Enter passwords for service accounts and click Next.
  10. On the Error Reporting dialog, click Next.
  11. On the Add Node Rules dialog, skipped operation warnings can be ignored. Click Next.
  12. Verify features and click Install.
  13. Click Close to complete the installation process.

Install SQL on the Second SQL Instance

Installing the second SQL instance is similar to the first one.

  1. On the SQL server, map a network drive to the shared folder on the domain controller, as explained above for the first SQL server.
  2. Once the .iso installer is mounted, run SQL setup once again from the command line in order to skip the validation checks. Open a Command window, browse to your SQL install directory, and type the following command:

Setup /SkipRules=Cluster_VerifyForErrors /Action=AddNode /INSTANCENAME=”MSSQLSERVER”

(Note: This assumes you installed the default instance on the first node)

  1. On Setup Support Rules, click OK.
  2. On the Product Key dialog, enter your product key and click Next.
  3. On the License Terms dialog, accept the license agreement and click Next.
  4. On the Product Updates dialog, click Next.
  5. On the Setup Support Files dialog, click Install.
  6. On the Setup Support Rules dialog, you will receive a warning. Click Next, ignoring this message, since it is expected in a multi-site or non-shared storage cluster.
  7. Verify Cluster Node Configuration and click Next.
  8. Configure your Cluster Network by adding the “third” secondary IP address for your SQL Instance and click Next. Click Yes to proceed with multi-subnet configuration.
  9. Enter passwords for service accounts and click Next.
  10. On the Error Reporting dialog, click Next.
  11. On the Add Node Rules dialog, skipped operation warnings can be ignored. Click Next.
  12. Verify features and click Install.
  13. Click Close to complete the installation process.

Common Cluster Configuration

This section describes a common 2-node replicated cluster configuration.

  1. The initial configuration must be done from the DataKeeper UI running on one of the cluster nodes. If it is not possible to run the DataKeeper UI on a cluster node, such as when running DataKeeper on a Windows Core only server, install the DataKeeper UI on any computer running Windows XP or higher and follow the instructions in the Core Only section for creating a mirror and registering the cluster resources via the command line.
  2. Once the DataKeeper UI is running, connect to each of the nodes in the cluster.
  3. Create a Job using the DataKeeper UI. This process creates a mirror and adds the DataKeeper Volume resource to the Available Storage.

!IMPORTANT: Make sure that Virtual Network Names for NIC connections are identical on all cluster nodes.

  4. If additional mirrors are required, you can Add a Mirror to a Job.
  5. With the DataKeeper Volume(s) now in Available Storage, you are able to create cluster resources (SQL, File Server, etc.) in the same way as you would with a shared disk resource in the cluster. Refer to the Microsoft documentation for step-by-step cluster configuration instructions.

Connectivity to the cluster (virtual) IPs

In addition to the Primary IP and secondary IP, you will also need to configure the virtual IP addresses in the Huawei Cloud so that they can be routed to the active node.

  1. From the Service List dropdown, select Elastic Cloud Server.
  2. Click one of the SQL instances for which you want to add a cluster virtual IP address (one for MSDTC, one for the SQL failover cluster).
  3. Select NICs > Manage Virtual IP Address.
  4. Click Assign Virtual IP Address, select Manual, and enter an IP address that is within the subnet range for the instance (ex. for 15.0.1.25, enter 15.0.1.26). Click OK.
  5. Click the More dropdown on the IP address row, select Bind to Server, then select both the server to bind the IP address to and the NIC.
  6. Repeat steps 4 and 5 for the MSDTC and SQLFC virtual IPs.
  7. Click OK to save your work.

Management

Once a DataKeeper volume is registered with Windows Server Failover Clustering, all of the management of that volume will be done through the Windows Server Failover Clustering interface. All of the management functions normally available in DataKeeper will be disabled on any volume that is under cluster control. Instead, the DataKeeper Volume cluster resource will control the mirror direction, so when a DataKeeper Volume comes online on a node, that node becomes the source of the mirror. The properties of the DataKeeper Volume cluster resource also display basic mirroring information such as the source, target, type and state of the mirror.

Troubleshooting

Use the following resources to help troubleshoot issues:

  • Troubleshooting issues section
  • For customers with a support contract – http://us.sios.com/support/overview/
  • For evaluation customers only – Pre-sales support

Additional Resources:

Step-by-Step: Configuring a 2-Node Multi-Site Cluster on Windows Server 2008 R2 – Part 1 — http://clusteringformeremortals.com/2009/09/15/step-by-step-configuring-a-2-node-multi-site-cluster-on-windows-server-2008-r2-%E2%80%93-part-1/

Step-by-Step: Configuring a 2-Node Multi-Site Cluster on Windows Server 2008 R2 – Part 3 — http://clusteringformeremortals.com/2009/10/07/step-by-step-configuring-a-2-node-multi-site-cluster-on-windows-server-2008-r2-%E2%80%93-part-3/


On-Demand Webinar: Voice of Experience: SQL Expert Shares 5 Best Practices for SQL Server HA in the Cloud

May 18, 2019 by Jason Aw


Everyone is moving to the cloud! But are you thinking about high availability?

In this webinar, Heath Carroll discusses some of the best high availability options available to you when it comes to protecting Microsoft SQL Server in a cloud environment. Heath reviews templates you should get familiar with as well as tips to get it right, every time.

Register Webinar: Voice of Experience: SQL Expert Shares 5 Best Practices for SQL Server HA in the Cloud


A Guide To Configure A SQL Server Failover Cluster Instance in Azure

March 31, 2019 by Jason Aw

Step-By-Step: How To Configure A SQL Server 2008 R2 Failover Cluster Instance in Azure

If you need a guide to configure a SQL Server Failover Cluster Instance in Azure, you are probably still using SQL Server 2008/2008 R2 and want to take advantage of the extended security updates that Microsoft is offering if you move your SQL Server 2008/2008 R2 into Azure. I previously wrote about this topic in this blog post.

You may be wondering how to make sure your SQL Server Failover Cluster Instance remains highly available once you make the move to Azure. Today, most people have business-critical SQL Server 2008/2008 R2 configured as a clustered instance (SQL Server FCI) in their data center. When looking at Azure, you have probably come to the realization that, due to the lack of shared storage, it might seem you can't bring your SQL Server FCI to the Azure cloud. However, that is not the case, thanks to SIOS DataKeeper.

SIOS DataKeeper enables you to build a SQL Server Failover Cluster instance in Azure, AWS, Google Cloud, or anywhere else where shared storage is not available or where you wish to configure multi-site clusters where shared storage doesn’t make sense. DataKeeper has been enabling SANless clusters for Windows and Linux since 1999. Microsoft documents the use of SIOS DataKeeper for SQL Server Failover Cluster instance in their documentation: High availability and disaster recovery for SQL Server in Azure Virtual Machines.

I've written about SQL Server FCIs running in Azure before, but I never published a step-by-step guide specific to SQL Server 2008/2008 R2. The good news is that it works just as well with SQL 2008/2008 R2 as it does with SQL 2012/2014/2016/2017 and the soon-to-be-released 2019. Also, regardless of the version of Windows Server (2008/2012/2016/2019) or SQL Server (2008/2012/2014/2016/2017), the configuration process is similar enough that this guide should be sufficient to get you through any configuration.

If your flavor of SQL or Windows is not covered in any of my guides, don't be afraid to jump in and build a SQL Server FCI using this guide as a reference. I think you will figure out any differences, and if you ever get stuck, just reach out to me on Twitter @daveberm and I'll be glad to give you a hand.

This guide uses SQL Server 2008 R2 with Windows Server 2012 R2. As of the time of this writing, I did not see an Azure Marketplace image of SQL 2008 R2 on Windows Server 2012 R2, so I had to download and install SQL 2008 R2 manually. Personally I prefer this combination, but if you need to use Windows Server 2008 R2 or Windows Server 2012, that is fine. If you use Windows Server 2008 R2, don't forget to install kb3125574, the Convenience Rollup Update for Windows Server 2008 R2 SP1. Or if you are stuck with Server 2012 (not R2), you need the hotfix in kb2854082.

Don't be fooled by this article, which says you must install kb2854082 on your SQL Server 2008 R2 instances. If you search for that update for Windows Server 2008 R2, you will find that only the version for Server 2012 is available. The fix for Server 2008 R2 is instead included in the Convenience Rollup Update for Windows Server 2008 R2 SP1.

PROVISION AZURE INSTANCES

I’m not going to go into great detail here with a bunch of screenshots, especially since the Azure Portal UI tends to change pretty frequently, so any screenshots I take will get stale pretty quickly. Instead, I will just cover the important topics that you should be aware of.

FAULT DOMAINS OR AVAILABILITY ZONES?

In order to ensure your SQL Server instances are highly available, you have to make sure your cluster nodes reside in different Fault Domains (FDs) or in different Availability Zones (AZs). Not only do your instances need to reside in different FDs or AZs, but your File Share Witness (see below) also needs to reside in an FD or AZ different from the ones your cluster nodes reside in.

Here is my take on it. AZs are the newest Azure feature, but they are only supported in a handful of regions so far. AZs give you a higher SLA (99.99%) than FDs (99.95%), and protect you against the kind of cloud outages I describe in my post Azure Outage Post-Mortem. If you can deploy in a region that supports AZs, then I recommend you use AZs.

In this guide I used AZs, which you will see when you get to the section on configuring the load balancer. However, if you use FDs, everything will be exactly the same, except the load balancer configuration will reference Availability Sets rather than Availability Zones.

WHAT IS A FILE SHARE WITNESS YOU ASK?

Without going into great detail, Windows Server Failover Clustering (WSFC) requires you to configure a "witness" to ensure failover behaves properly. WSFC supports three kinds of witnesses: Disk, File Share, and Cloud. Since we are in Azure, a Disk Witness is not possible. A Cloud Witness is only available with Windows Server 2016 and later, so that leaves us with a File Share Witness. If you want to learn more about cluster quorums, check out my post on the Microsoft Press Blog, From the MVPs: Understanding the Windows Server Failover Cluster Quorum in Windows Server 2012 R2.

ADD STORAGE TO YOUR SQL SERVER INSTANCES

As you provision your SQL Server instances, you will want to add additional disks to each instance. Minimally you will need one disk for the SQL data and log files and one disk for tempdb. Whether you should have a separate disk for log and data files is somewhat debated when running in the cloud. On the back end the storage all comes from the same place, and your instance size limits your total IOPS. In my opinion there isn't much value in separating your log and data files, since you cannot ensure that they are running on two physical sets of disks. I'll leave that for you to decide, but I put log and data on the same volume.

Normally a SQL Server 2008 R2 FCI would require you to put tempdb on a clustered disk. However, SIOS DataKeeper has this really nifty feature called a DataKeeper Non-Mirrored Volume Resource. This guide does not cover moving tempdb to this non-mirrored volume resource, but for optimal performance you should do this. There really is no good reason to replicate tempdb since it is recreated upon failover anyway.

As far as the storage is concerned you can use any storage type, but certainly use Managed Disks whenever possible. Make sure each node in the cluster has the identical storage configuration. Once you launch the instances you will want to attach these disks and format them NTFS. Make sure each instance uses the same drive letters.

NETWORKING

It’s not a hard requirement, but if at all possible use an instance size that supports accelerated networking. Also, make sure you edit the network interface in the Azure portal so that your instances use a static IP address. For clustering to work properly you want to make sure you update the settings for the DNS server so that it points to your Windows AD/DNS server and not just some public DNS server.

SECURITY

By default, the communications between nodes in the same virtual network are wide open, but if you have locked down your Azure Security Group, you will need to know what ports must be open between the cluster nodes and adjust your security group accordingly. In my experience, almost all of the issues you will encounter when building a cluster in Azure are caused by blocked ports.

DataKeeper requires certain ports to be open between the clustered instances. Those ports are as follows:
UDP: 137, 138
TCP: 139, 445, 9999, plus ports in the 10000 to 10025 range

Failover cluster has its own set of port requirements that I won’t even attempt to document here. This article seems to have that covered. http://dsfnet.blogspot.com/2013/04/windows-server-clustering-sql-server.html

In addition, the Load Balancer described later will use a probe port that must allow inbound traffic on each node. The port that is commonly used and described in this guide is 59999.

And finally, if you want your clients to be able to reach your SQL Server instance, make sure your SQL Server port is open; by default, that is 1433.

Remember, these ports can be blocked by the Windows Firewall or Azure Security Groups, so be sure to check both to ensure they are accessible.
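As a sketch, the Windows Firewall side of this can be opened with PowerShell; the rule names are arbitrary, and if you use a named SQL instance, substitute its port for 1433:

# DataKeeper replication ports
New-NetFirewallRule -DisplayName 'DataKeeper UDP' -Direction Inbound -Protocol UDP -LocalPort 137,138 -Action Allow
New-NetFirewallRule -DisplayName 'DataKeeper TCP' -Direction Inbound -Protocol TCP -LocalPort 139,445,9999,'10000-10025' -Action Allow

# SQL Server default port and the load balancer probe port
New-NetFirewallRule -DisplayName 'SQL Server 1433' -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
New-NetFirewallRule -DisplayName 'ILB Probe 59999' -Direction Inbound -Protocol TCP -LocalPort 59999 -Action Allow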

JOIN THE DOMAIN

A requirement for SQL Server 2008 R2 FCI is that the instances must reside in the same Windows Server domain. So if you have not done so already, make sure you have joined the instances to your Windows domain.

LOCAL SERVICE ACCOUNT

When you install DataKeeper, it will ask you to provide a service account. You must create a domain user account and then add that user account to the Local Administrators Group on each node. When asked during the DataKeeper installation, specify that account as the DataKeeper service account. Note: don't install DataKeeper just yet!

DOMAIN GLOBAL SECURITY GROUPS

You will be asked to specify two Global Domain Security Groups as you install SQL 2008 R2. You might want to look ahead at the SQL install instructions and create those groups now. Also, create a domain user account and place it in each of these security groups. You will specify this account as part of the SQL Server cluster installation.

OTHER PRE-REQUISITES

You must enable both Failover Clustering and .NET 3.5 on each of the two cluster instances. When you enable Failover Clustering, also be sure to enable the optional "Failover Cluster Automation Server"; this is required for a SQL Server 2008 R2 cluster on Windows Server 2012 R2.
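All three prerequisites can be installed in one pass from PowerShell. These are the standard feature names on Windows Server 2012 R2; note that NET-Framework-Core sometimes requires -Source pointing at the sources\sxs folder of the installation media:

# Failover Clustering, the Automation Server, and .NET 3.5
Install-WindowsFeature Failover-Clustering, RSAT-Clustering-AutomationServer, NET-Framework-Core -IncludeManagementTools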

CREATE THE CLUSTER AND DATAKEEPER VOLUME RESOURCES

We are now ready to start building the cluster. The first step is to create the base cluster. Because of the way Azure handles DHCP, we MUST create the cluster using PowerShell rather than the Cluster UI. PowerShell lets us specify a static IP address as part of the creation process, whereas the UI would see that the VMs use DHCP and automatically assign a duplicate IP address. To avoid that situation, use PowerShell as shown below.

New-Cluster -Name cluster1 -Node sql1,sql2 -StaticAddress 10.0.0.100 -NoStorage

After the cluster creates, run Test-Cluster. This is required before SQL Server will install.

Test-Cluster

You will get warnings about Storage and Networking. Thankfully, you can ignore those as they are expected in a SANless cluster in Azure. However, address any other warnings or errors before moving on.

After the cluster is created, you will need to add the File Share Witness. On the third server we specified as the file share witness, create a file share and give Read/Write permissions to the cluster computer object we just created above. In this case, cluster1$ is the computer object that needs Read/Write permissions at both the share and NTFS security levels.
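A sketch of creating that share from PowerShell; the folder, share name, and domain are illustrative, and the trailing $ on the cluster computer account is required:

# Create the witness folder and share it to the cluster computer object
New-Item -Path C:\Quorum -ItemType Directory
New-SmbShare -Name Quorum -Path C:\Quorum -FullAccess 'CONTOSO\cluster1$'

# Grant matching NTFS modify rights to the computer object
icacls C:\Quorum /grant 'CONTOSO\cluster1$:(OI)(CI)M'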

Once the share is created, you can use the Configure Cluster Quorum Wizard as shown below to configure the File Share Witness.

INSTALL DATAKEEPER

It is important to wait until the basic cluster is created before we install DataKeeper, since the DataKeeper installation registers the DataKeeper Volume resource type in failover clustering. If you jumped the gun and installed DataKeeper already, that is okay; simply run the setup again and choose Repair Installation.

The screenshots below walk you through a basic installation. Start by running the DataKeeper Setup.

The account you specify below must be a domain account. It must be part of the Local Administrators group on each of the cluster nodes.

When presented with the SIOS License Key Manager, you can browse out to your temporary key. Or, if you have a permanent key, you can copy the System Host ID and use that to request your permanent license. If you ever need to refresh a key, the SIOS License Key Manager is installed as a separate program that you can run to add a new key.

CREATE DATAKEEPER VOLUME RESOURCE

Once DataKeeper is installed on each node you are ready to create your first DataKeeper Volume Resource. The first step is to open the DataKeeper UI and connect to each of the cluster nodes.

If everything is done correctly the Server Overview Report should look something like this.

You can now create your first Job as shown below.

After you choose a Source and Target you are presented with the following options. For a local target in the same region, the only thing you need to select is Synchronous.

Choose Yes and auto-register this volume as a cluster resource.

Once you complete this process, open Failover Cluster Manager and look under Disks. You should see the DataKeeper Volume resource in Available Storage. At this point WSFC treats this as if it were a normal cluster disk resource.

SLIPSTREAM SP3 ONTO SQL 2008 R2 INSTALL MEDIA

SQL Server 2008 R2 is only supported on Windows Server 2012 R2 with SQL Server SP2 or later. Unfortunately, Microsoft never released SQL Server 2008 R2 installation media that includes SP2 or SP3. Instead, you must slipstream the service pack onto the installation media BEFORE you do the installation. If you try to do the installation with the standard SQL Server 2008 R2 media, you will run into all kinds of problems. I don't remember the exact errors you will see, but I do recall they didn't really point to the actual problem, and you will waste a lot of time trying to figure out what went wrong.

As of the date of this writing, Microsoft does not have a Windows Server 2012 R2 with SQL Server 2008 R2 offering in the Azure Marketplace, so bring your own SQL license if you want to run SQL 2008 R2 on Windows Server 2012 R2 in Azure. If they add that image later, or if you choose to use the SQL 2008 R2 on Windows Server 2008 R2 image, you must first uninstall the existing standalone instance of SQL Server before moving forward.

I followed the guidance in Option 1 of this article to slipstream SP3 onto my SQL 2008 R2 installation media. You will of course have to adjust a few things, as that article references SP2 instead of SP3. Make sure you slipstream SP3 onto the installation media used for both nodes of the cluster. Once that is done, continue to the next step.

INSTALL SQL SERVER ON THE FIRST NODE

Using the SQL Server 2008 R2 media with SP3 slipstreamed, run setup and install the first node of the cluster as shown below.

If you use anything other than the default instance of SQL Server, you will have some additional steps not covered in this guide. The biggest difference is that you must lock down the port that SQL Server uses, since by default a named instance of SQL Server does NOT use 1433. Once you lock down the port, you also need to specify that port instead of 1433 wherever we reference port 1433 in this guide, including the firewall settings and the Load Balancer settings.

Here, make sure to specify a new IP address that is not in use. This is the same IP address we will use when we configure the Internal Load Balancer later.

As I mentioned earlier, SQL Server 2008 R2 utilizes AD Security Groups. If you have not already created them, go ahead and create them now, as shown below, before you continue to the next step of the SQL install.

Specify the Security Groups you created earlier.

Make sure the service accounts you specify are a member of the associated Security Group.

Specify your SQL Server administrators here.

If everything goes well you are now ready to install SQL Server on the second node of the cluster.

INSTALL SQL SERVER ON THE SECOND NODE

On the second node, run the SQL Server 2008 R2 with SP3 install and select Add Node to a SQL Server Failover Cluster.

Proceed with the installation as shown in the following screenshots.

Assuming everything went well, you should now have a two node SQL Server 2008 R2 cluster configured that looks something like the following.

However, you probably will notice that you can only connect to the SQL Server instance from the active cluster node. The problem is that Azure does not support gratuitous ARP, so your clients cannot connect directly to the cluster IP address. Instead, the clients must connect to an Azure Load Balancer, which will redirect the connection to the active node. To make this work there are two steps: create the Load Balancer, and fix the SQL Server cluster IP to respond to the Load Balancer probe and use a 255.255.255.255 subnet mask. Those steps are described below.

CREATE THE AZURE LOAD BALANCER

I'm going to assume your clients can communicate directly with the internal IP address of the SQL cluster, so we will create an Internal Load Balancer (ILB) in this guide. If you need to expose your SQL instance on the public internet, use a Public Load Balancer instead.

In the Azure portal, create a new Load Balancer following the screenshots shown below. The Azure portal UI changes rapidly, but these screenshots should give you enough information to do what you need to do. I will call out important settings as we go along.

Here we create the ILB. The important thing to note on this screen is that you must select "Static IP address assignment" and specify the same IP address that we used during the SQL cluster installation.

Since I used Availability Zones, I see Zone Redundant as an option. If you used Availability Sets your experience will be slightly different.

In the Backend pool, be sure to select the two SQL Server instances. You DO NOT want to add your File Share Witness to the pool.

Here we configure the Health Probe. Most Azure documentation uses port 59999, so we will stick with that port for our configuration.

Then we will add a load balancing rule. In our case we want to redirect all SQL Server traffic to TCP port 1433 of the active node. It is also important that you select Floating IP (Direct Server Return) as Enabled.

RUN POWERSHELL SCRIPT TO UPDATE SQL CLIENT ACCESS POINT

Now we must run a PowerShell script on one of the cluster nodes to allow the Load Balancer probe to detect which node is active. The script also sets the subnet mask of the SQL cluster IP address to 255.255.255.255 so that it avoids IP address conflicts with the Load Balancer we just created.

# Define variables
$ClusterNetworkName = ""
# the cluster network name (use Get-ClusterNetwork on Windows Server 2012 or higher to find the name)
$IPResourceName = ""
# the IP Address resource name
$ILBIP = ""
# the IP address of the Internal Load Balancer (ILB) and the SQL cluster

Import-Module FailoverClusters

# If you are using Windows Server 2012 or higher:
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{Address=$ILBIP;ProbePort=59999;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}

# If you are using Windows Server 2008 R2, use this (at a command prompt, not PowerShell):
# cluster res $IPResourceName /priv enabledhcp=0 address=$ILBIP probeport=59999 subnetmask=255.255.255.255

This is what the output will look like if run correctly.


The end of that script includes a commented alternative for Windows Server 2008 R2. If you are running Windows Server 2008 R2, run that cluster.exe command at a command prompt; it is not PowerShell.

NEXT STEPS

If you get to this point and still cannot connect to the cluster remotely, you're not the first. There are a lot of things that can go wrong in terms of security, the load balancer, SQL ports, etc. I wrote this guide to help troubleshoot connection issues.

In fact, I ran into some strange issues in terms of my SQL Server TCP/IP Properties in SQL Server Configuration Manager. When I looked at the properties I did not see the SQL Server Cluster IP address as one of the addresses it was listening on. As such I had to add it manually. I’m not sure if that was an anomaly. Although it certainly was an issue I had to resolve before I could connect to the cluster from a remote client.

As I mentioned earlier, one other improvement you can make to this installation is to use a DataKeeper Non-Mirrored Volume Resource for TempDB. If you set that up please be aware of the following two configuration issues people commonly run into.

The first issue: if you move tempdb to a folder on the first node, you must be sure to create the exact same folder structure on the second node. If you don't, SQL Server will fail to come online after a failover, since it can't create tempdb.

The second issue occurs anytime you add another DataKeeper Volume Resource to a SQL Cluster after the cluster is created. You must go into the properties of the SQL Server cluster resource and make it dependent on the new DataKeeper Volume resource you added. This is true for the TempDB volume and any other volumes you may decide to add after the cluster is created.
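Setting that dependency can be done in Failover Cluster Manager or with PowerShell. A sketch with illustrative resource names; check Get-ClusterResource for the exact names in your cluster:

# Make the SQL Server resource depend on the newly added DataKeeper volume
Add-ClusterResourceDependency -Resource 'SQL Server' -Provider 'DataKeeper Volume E'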

If you have any questions about this configuration or any other cluster configurations, please feel free to reach out to me on Twitter @DaveBerm.

Reproduced with permission from Clusteringformeremortals.com


Azure ILB In ARM For SQL Server Failover Cluster Instances

June 15, 2018 by Jason Aw

Configuring The #AZURE ILB In ARM For SQL Server Failover Cluster Instance Or AG Using AZURE Powershell 1.0

In an earlier post I went into some great detail about how to configure the Azure ILB in ARM for SQL Server Failover Cluster or AG resources. The directions in that article were written prior to the GA of Azure PowerShell 1.0. With the availability of Azure PowerShell 1.0, the main script that creates the ILB needs to be slightly different. The rest of the article is still accurate; however, if you are using Azure PowerShell 1.0 or later, the script to create the ILB described in that article should be as follows.

# Replace the values for the variables listed below
$ResourceGroupName = 'SIOS-EAST'      # resource group in which the SQL nodes are deployed
$FrontEndConfigurationName = 'FEEAST' # any name for the front-end configuration
$BackendConfigurationName = 'BEEAST'  # any name for the back-end pool
$LoadBalancerName = 'ILBEAST'         # name for the Internal Load Balancer object
$Location = 'eastus2'                 # data center location of the SQL deployments
$subname = 'public'                   # subnet name in which the SQL nodes are placed
$ILBIP = '10.0.0.201'                 # IP address for the listener or load balancer

$subnet = Get-AzureRmVirtualNetwork -ResourceGroupName $ResourceGroupName |
    Get-AzureRmVirtualNetworkSubnetConfig -Name $subname

$FEConfig = New-AzureRmLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName -PrivateIpAddress $ILBIP -SubnetId $subnet.Id

$BackendConfig = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name $BackendConfigurationName

New-AzureRmLoadBalancer -Name $LoadBalancerName -ResourceGroupName $ResourceGroupName -Location $Location -FrontendIpConfiguration $FEConfig -BackendAddressPool $BackendConfig

The rest of that original article is the same, but I have just copied it here for ease of use…

Using GUI

Now that the ILB is created, we should see it in the Azure Portal in the Resource Group.


The rest of the configuration can also be completed through PowerShell, but I’m going to use the GUI in my example.

If you want to use PowerShell, you could probably piece together the script by looking at this article. Unfortunately, that article confuses me. I'll figure it out some day and try to document it in a user-friendly format. For now, I think the GUI is fine for the next steps.

Let’s Get Started

Follow along with the screen shots below. If you get lost, follow the navigation hints at the top of the Azure Portal to figure out where we are.

First Step

  • Click the Backend Pool settings tab. Select the backend pool, then update the Availability Set and Virtual Machines. Save your changes.


  • Configure the Load Balancer's probe by clicking Add on the Probe tab. Give the probe a name and configure it to use TCP port 59999. I have left the probe interval and the unhealthy threshold at the default settings, which means it will take 10 seconds before the ILB removes the passive node from the list of active nodes after a failover; your clients may therefore take up to 10 seconds to be redirected to the new active node. Be sure to save your changes.


Next Step

  • Navigate to the Load Balancing Rule tab and add a new rule. Give the rule a sensible name (SQL1433 or something). Choose the TCP protocol and port 1433 (assuming you are using the default instance of SQL Server). Choose 1433 for the Backend port as well. For the Backend Pool, choose the pool we created earlier (BE), and for the Probe, choose the probe we created earlier.

We do not want to enable Session persistence, but we do want to enable Floating IP (Direct Server Return). I have left the idle timeout at the default setting, but you might want to consider increasing it to the maximum value; I have seen some applications, such as SAP, log error messages each time the connection is dropped and needs to be re-established.


  • At this point the ILB is configured, and there is only one final step that needs to take place for a SQL Server Failover Cluster: we need to update the SQL IP cluster resource just as we had to in the Classic deployment model. To do that, run the following PowerShell script on just one of the cluster nodes (reproduced after this bullet). Make note: SubnetMask="255.255.255.255" is not a mistake; use the 32-bit mask regardless of what your actual subnet mask is.
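The script in question is the same probe-port script shown in the Step-by-Step 2008 R2 guide earlier on this page, reproduced here for convenience. Fill in the three variables for your own cluster:

$ClusterNetworkName = ""   # the cluster network name (use Get-ClusterNetwork to find it)
$IPResourceName = ""       # the IP Address resource name
$ILBIP = ""                # the IP address of the ILB and the SQL cluster

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{Address=$ILBIP;ProbePort=59999;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}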

One Final Note

In my initial test I still was not able to connect to the SQL Resource name even after I completed all of the above steps. After banging my head against the wall for a few hours I discovered that for some reason the SQL Cluster Name Resource was not registered in DNS. I’m not sure how that happened or whether it will happen consistently, but if you are having trouble connecting I would definitely check DNS and add the SQL cluster name and IP address as a new A record if it is not already in there.

And of course don’t forget the good ole Windows Firewall. You will have to make exceptions for 1433 and 59999 or just turn it off until you get everything configured properly like I did. You probably want to leverage Azure Network Security Groups anyway instead of the local Windows Firewall for a more unified experience across all your Azure resources.

Good luck and let me know how you make out.

Head over here to see how SIOS helped companies across the globe with creating SQL Server Failover Cluster.

Reproduced with permission from Clustering For Mere Mortals.


Why Build A SQL Server Failover Cluster Instance In Azure Cloud?

March 13, 2018 by Jason Aw

Why Would You Want To Build A SQL Server Failover Cluster Instance In The Azure Cloud?

There was an interesting discussion happening today in the Twitterverse. Basically, someone asked, "Has anyone set up a SQL Server Failover Cluster Instance in Azure?" The ensuing conversation involved some well-respected SQL Server experts, and it led to the following question: "Why would you want to build a SQL Server AlwaysOn Failover Cluster Instance in the cloud?"

That question could be interpreted in two ways: “Why do you need High Availability in the Cloud” or “Why wouldn’t you use AlwaysOn Availability Groups instead of Failover Cluster Instances?”

Let’s address each question one at a time.

Question 1 – Why do you need High Availability in the Azure Cloud?

  • You might think that just because you host your SQL Server instance in Azure, you are covered by their 99.95% uptime SLA. If you think that, you would be wrong. In order to take advantage of the 99.95% SLA, you have to have at least two instances of SQL running in an Availability Set. With a single instance of SQL running, you can expect that there will at minimum be downtime during maintenance periods, and you are also susceptible to unplanned failures.
  • Two instances of SQL Server cannot generally be load balanced, so you have to implement some sort of mechanism to keep the servers in sync, ensuring that if there is a problem with one of the servers, the other will be able to continue to service the requests. High availability solutions like AlwaysOn Availability Groups, AlwaysOn Failover Cluster Instances, and even the deprecated Database Mirroring can provide high availability for SQL Server in that scenario. Other solutions like log shipping and transactional replication may help keep data synchronized between servers, but they are not typically considered high availability solutions and will not ensure the availability of your SQL Server.
  • Microsoft does occasionally need to perform maintenance on Azure that could bring down an entire Upgrade Domain and all the instances running in it, and you don't have any say on when this will happen. So you need a mechanism in place to ensure that if they do have to bring down your primary SQL Server instance, your secondary SQL Server instance will take over the workload without missing a beat. All of the high availability solutions mentioned above can ensure that you will continue to run in the event that Microsoft is doing maintenance on the Upgrade Domain of your primary server. Microsoft will only do maintenance on a single Upgrade Domain at a time, which ensures that your secondary server will still be online, assuming you put them both in the same Availability Set.
  • What do you do if YOU want to perform maintenance on your production SQL Server? Maybe you want to install a Service Pack or other hotfix. Without a secondary server to fail over to, you will have to schedule planned downtime. One of the primary benefits of any high availability solution is the ability to do rolling upgrades, minimizing the impact of planned downtime.

Question 2 – Why wouldn’t you use AlwaysOn Availability Groups instead of Failover Cluster Instances?

  • Save money! SQL Server AlwaysOn Availability Groups requires the Enterprise Edition of SQL Server. Why not save money, deploy SQL Server Standard Edition, and build a simple 2-node Failover Cluster Instance? Unless you need Enterprise Edition for some other reason, this is a no-brainer.
  • Protect the ENTIRE SQL Server instance. AlwaysOn Availability Groups only protects user defined databases; you cannot protect the System and MSDB databases. If you build a SQL Server Failover Cluster Instance instead, you are protecting the ENTIRE instance, including the System and MSDB databases.
  • Ease administration. In Azure, you are limited to just one client listener, which limits you to just one Availability Group. In contrast, with a Failover Cluster Instance, one client listener is all you need, so there is no such limitation.
  • Worker thread exhaustion. With AlwaysOn AG, you have to keep an eye on the available worker threads, which limit the number of databases you can protect. In contrast, AlwaysOn Failover Clustering with DataKeeper block-level replication does not consume more resources for each database you add, so you can scale to protect hundreds of databases without the additional overhead associated with AlwaysOn AG.
  • Distributed transaction support. AlwaysOn AG does not support distributed transactions (DTC). If your application requires DTC support, you are going to have to look at an AlwaysOn Failover Cluster Instance instead.
  • Support for other replication technologies. If you plan on setting up peer-to-peer replication between two databases protected by AlwaysOn AG, you can forget about it. In fact, there are many restrictions you have to be aware of once you deploy AlwaysOn Availability Groups. AlwaysOn FCIs do not have any of those restrictions.

Conclusion

Knowing what you know from the above, shouldn't the question really be, "Why would I want to implement AlwaysOn AG in the cloud when I can have a much more robust and inexpensive solution by building an AlwaysOn Failover Cluster Instance?"

If you are interested in building an AlwaysOn Failover Cluster Instance in Azure, check out my blog post Step-by-Step: How to configure a SQL Server Failover Cluster Instance (FCI) in Microsoft Azure IaaS #SQLServer #Azure #SANLess

Reproduced with permission from https://clusteringformeremortals.com/2015/03/05/why-would-you-want-to-build-a-sqlserver-failover-cluster-instance-in-the-azure-cloud/

