
How To Configure SQL Server 2008 R2 Failover Cluster Instance In Azure

February 10, 2019 by Jason Aw

Step-By-Step: How To Configure SQL Server 2008 R2 Failover Cluster Instance In Azure

Still figuring out how to keep your SQL Server instances highly available once you make the move to Azure? Today, many organizations run business-critical SQL Server 2008/2008 R2 as a clustered instance (SQL Server FCI) in their data center. When looking at Azure, you have probably come to the realization that, due to the lack of shared storage, it might seem you can't bring your SQL Server FCI to the Azure cloud. However, that is not the case, thanks to SIOS DataKeeper.

SIOS DataKeeper enables you to build a SQL Server FCI in Azure, AWS, Google Cloud, or anywhere else shared storage is not available. DataKeeper has been enabling SANless clusters for Windows and Linux since 1999. Microsoft documents the use of SIOS DataKeeper for SQL Server FCI in its documentation: High availability and disaster recovery for SQL Server in Azure Virtual Machines.

I've written about SQL Server FCIs running in Azure before, but I never published a step-by-step guide specific to SQL Server 2008/2008 R2. The good news is that it works just as well with SQL 2008/2008 R2 as it does with SQL 2012/2014/2016/2017 and the soon-to-be-released 2019. Also, regardless of the version of Windows Server (2008/2012/2016/2019) or SQL Server (2008/2012/2014/2016/2017), the configuration process is similar enough that this guide should be sufficient to get you through any of these configurations.

If your flavor of SQL or Windows is not covered in any of my guides, don’t be afraid to jump in and build a SQL Server FCI and reference this guide.

This guide uses SQL Server 2008 R2 with Windows Server 2012 R2. As of the time of this writing, I did not see an Azure Marketplace image of SQL 2008 R2 on Windows Server 2012 R2, so I had to download and install SQL 2008 R2 manually. Personally, I prefer this combination, but it's fine if you need to use Windows Server 2008 R2. If you do, don't forget to install the hotfix described in this article; it allows Windows Server 2008 R2 to be part of an FCI in Azure.

Provision Azure Instances

I'm not going to go into great detail here with a bunch of screenshots, because the Azure Portal UI changes frequently and screenshots get stale quickly. Instead, I will just cover the important topics that you should be aware of.

Fault Domains Or Availability Zones?

To ensure your SQL Server instances are highly available, you have to make sure your cluster nodes reside in different Fault Domains (FDs) or in different Availability Zones (AZs). In addition, the File Share Witness (see below) needs to reside in an FD or AZ different from the ones your cluster nodes reside in.

Here is my take on it. AZs are the newer Azure feature, but they are only supported in a handful of regions so far. AZs give you a higher SLA (99.99%) than FDs (99.95%), and they protect you against the kind of cloud outages I describe in my post Azure Outage Post-Mortem. If you can deploy in a region that supports AZs, then I recommend you use AZs.

In this guide I used AZs, which you will see when you get to the section on configuring the load balancer. If you use FDs instead, everything will be exactly the same, except that the load balancer configuration will reference Availability Sets rather than Availability Zones.
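If you prefer to script the VM deployment rather than use the portal, a minimal PowerShell sketch (using the Az module) of placing the two nodes in different Availability Zones might look like the following. The resource group, network names, VM size and image defaults are examples, not values from this guide; the file share witness VM would go in a third zone the same way.

# Sketch: provision the two cluster nodes into different Availability Zones (Az module)
$rg  = "sql-fci-rg"
$loc = "eastus2"        # pick a region that supports Availability Zones

New-AzVM -ResourceGroupName $rg -Location $loc -Name "sql1" `
    -VirtualNetworkName "sql-vnet" -SubnetName "sql-subnet" `
    -Size "Standard_DS3_v2" -Zone "1" -Credential (Get-Credential)

New-AzVM -ResourceGroupName $rg -Location $loc -Name "sql2" `
    -VirtualNetworkName "sql-vnet" -SubnetName "sql-subnet" `
    -Size "Standard_DS3_v2" -Zone "2" -Credential (Get-Credential)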

What Is A File Share Witness You Ask?

Without going into great detail, Windows Server Failover Clustering (WSFC) requires that you configure a "witness" to ensure failover behaves properly. WSFC supports three kinds of witnesses: Disk, File Share, and Cloud. Since we are in Azure, a Disk Witness is not possible, and a Cloud Witness is only available with Windows Server 2016 and later, so that leaves us with a File Share Witness. If you want to learn more about cluster quorums, check out my post on the Microsoft Press Blog, From the MVPs: Understanding the Windows Server Failover Cluster Quorum in Windows Server 2012 R2.

Add Storage To Your SQL Server Instances

As you provision your SQL Server instances, you will want to add additional disks to each instance. At a minimum you will need one disk for the SQL data and log files and one disk for tempdb. Whether you should have a separate disk for log and data files is somewhat debated when running in the cloud; on the back end the storage all comes from the same place, and your instance size limits your total IOPS. In my opinion there isn't much value in separating your log and data files, since you cannot ensure that they are running on two physical sets of disks. I'll leave that for you to decide, but I put log and data on the same volume.

Normally a SQL Server 2008 R2 FCI would require you to put tempdb on a clustered disk. However, SIOS DataKeeper has this really nifty feature called a DataKeeper Non-Mirrored Volume Resource. This guide does not cover moving tempdb to this non-mirrored volume resource, but for optimal performance you should do this. There really is no good reason to replicate tempdb since it is recreated upon failover anyway.

As far as the storage is concerned, you can use any storage type, but use Managed Disks whenever possible. Make sure each node in the cluster has an identical storage configuration. Once you launch the instances, attach these disks and format them with NTFS, making sure each instance uses the same drive letters.
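If you want to script the disk preparation, here is a rough sketch using the standard Storage cmdlets. The disk numbers, drive letters (F: for data/log, G: for tempdb) and labels are examples only; run it on each node after confirming which disk numbers map to the data disks you attached.

# Initialize all raw (newly attached) disks with a GPT partition table
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PartitionStyle GPT

# Create and format the volumes identically on each node (disk numbers are examples)
New-Partition -DiskNumber 2 -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLData" -Confirm:$false
New-Partition -DiskNumber 3 -DriveLetter G -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "TempDB" -Confirm:$false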

Networking

It's not a hard requirement, but if at all possible use an instance size that supports accelerated networking. Also, edit the network interface in the Azure portal so that your instances use a static IP address. For clustering to work properly, update the DNS server settings so that they point to your Windows AD/DNS server and not just some public DNS server.
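If you would rather script this than click through the portal, a hedged sketch with the Az module follows; the NIC name, resource group and DNS server address (10.0.0.4) are assumptions you would replace with your own values.

# Make the node's private IP static and point DNS at the AD/DNS server
$nic = Get-AzNetworkInterface -Name "sql1-nic" -ResourceGroupName "sql-fci-rg"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.DnsSettings.DnsServers.Clear()
$nic.DnsSettings.DnsServers.Add("10.0.0.4")   # your Windows AD/DNS server
Set-AzNetworkInterface -NetworkInterface $nic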

Security

By default, communications between nodes in the same virtual network are wide open. However, if you have locked down your Azure Security Group, you will need to know what ports must be open between the cluster nodes and adjust your security group accordingly. In my experience, almost all the issues you will encounter when building a cluster in Azure are caused by blocked ports.

DataKeeper requires some ports to be open between the clustered instances.

Those ports are as follows:
UDP: 137, 138
TCP: 139, 445, 9999, plus ports in the 10000 to 10025 range

Failover cluster has its own set of port requirements that I won’t even attempt to document here. This article seems to have that covered. http://dsfnet.blogspot.com/2013/04/windows-server-clustering-sql-server.html

In addition, the Load Balancer described later will use a probe port that must allow inbound traffic on each node. The port that is commonly used and described in this guide is 59999.

And finally if you want your clients to be able to reach your SQL Server instance you want to make sure your SQL Server port is open, which by default is 1433.

Remember, these ports can be blocked by the Windows Firewall or Azure Security Groups, so be sure to check both to ensure they are accessible.
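If you manage the Windows Firewall with PowerShell, a sketch of the inbound rules for the ports listed above might look like this. Run it on each node; the rule names are arbitrary, and you may not need the probe and SQL rules if those are already handled elsewhere.

New-NetFirewallRule -DisplayName "DataKeeper TCP" -Direction Inbound -Protocol TCP `
    -LocalPort @('139','445','9999','10000-10025') -Action Allow
New-NetFirewallRule -DisplayName "DataKeeper UDP" -Direction Inbound -Protocol UDP `
    -LocalPort @('137','138') -Action Allow
New-NetFirewallRule -DisplayName "ILB Probe 59999" -Direction Inbound -Protocol TCP `
    -LocalPort 59999 -Action Allow
New-NetFirewallRule -DisplayName "SQL Server 1433" -Direction Inbound -Protocol TCP `
    -LocalPort 1433 -Action Allow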

Join The Domain

A requirement for a SQL Server 2008 R2 FCI is that the instances must reside in the same Windows Server domain. So if you have not done so already, make sure you have joined the instances to your Windows domain.
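A one-line way to do this from an elevated PowerShell prompt on each node (the domain name here is an example):

# Join the instance to the domain and reboot
Add-Computer -DomainName "contoso.local" -Credential (Get-Credential) -Restart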

Local Service Account

When you install DataKeeper it will ask you to provide a service account. You must create a domain user account and then add that user account to the Local Administrators Group on each node. When asked during the DataKeeper installation, specify that account as the DataKeeper service account. Note – Don’t install DataKeeper just yet!
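A rough sketch of creating that account and granting it local admin rights follows. The account and domain names are examples; New-ADUser needs the ActiveDirectory RSAT module, and Add-LocalGroupMember requires PowerShell 5.x (on older builds use the net localgroup command shown in the comment).

# Create the DataKeeper service account in the domain
New-ADUser -Name "dksvc" -AccountPassword (Read-Host -AsSecureString "Password") -Enabled $true

# On each cluster node, add it to the local Administrators group
Add-LocalGroupMember -Group "Administrators" -Member "contoso\dksvc"
# (equivalent: net localgroup Administrators contoso\dksvc /add)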

Domain Global Security Groups

When you install SQL 2008 R2, you will be asked to specify two Global Domain Security Groups. Look ahead at the SQL install instructions and create those groups now. Set up a domain user account and place it in each of these security groups, and remember to specify this account as part of the SQL Server cluster installation.
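For reference, here is a hedged sketch of creating those objects from PowerShell; all of the names are examples, so use whatever the SQL install instructions call for.

# Two Global security groups for the SQL Server and SQL Agent cluster services
New-ADGroup -Name "SQLServerGroup" -GroupScope Global -GroupCategory Security
New-ADGroup -Name "SQLAgentGroup"  -GroupScope Global -GroupCategory Security

# A SQL service account placed in each group
New-ADUser -Name "sqlsvc" -AccountPassword (Read-Host -AsSecureString "Password") -Enabled $true
Add-ADGroupMember -Identity "SQLServerGroup" -Members "sqlsvc"
Add-ADGroupMember -Identity "SQLAgentGroup"  -Members "sqlsvc"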

Other Pre-Requisites

You must enable both Failover Clustering and .NET 3.5 on each of the two cluster instances. As you enable Failover Clustering, be sure to enable the optional "Failover Cluster Automation Server"; it is required for a SQL Server 2008 R2 cluster on Windows Server 2012 R2.
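These can be enabled from PowerShell on each node. The feature names below are the standard Windows Server 2012 R2 names; .NET 3.5 may need a -Source parameter pointing at the installation media if the payload has been removed.

Install-WindowsFeature Failover-Clustering -IncludeManagementTools
Install-WindowsFeature RSAT-Clustering-AutomationServer   # "Failover Cluster Automation Server"
Install-WindowsFeature NET-Framework-Core                 # .NET 3.5; add -Source <path> if required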

Create The Cluster And Datakeeper Volume Resources

We are now ready to start building the cluster. The first step is to create the base cluster. Because of the way Azure handles DHCP, we MUST create the cluster using PowerShell rather than the Cluster UI: PowerShell lets us specify a static IP address as part of the creation process, whereas the UI would see that the VMs use DHCP and automatically assign a duplicate IP address. We want to avoid that situation, so we use PowerShell as shown below.

New-Cluster -Name cluster1 -Node sql1,sql2 -StaticAddress 10.0.0.100 -NoStorage

After the cluster creates, run Test-Cluster. This is required before SQL Server will install.

Test-Cluster

You will get warnings about Storage and Networking, but you can ignore those as they are expected in a SANless cluster in Azure. If there are any other warnings or errors you must address those before moving on.

After the cluster is created you will need to add the File Share Witness. On the third server we designated as the file share witness, create a file share and give Read/Write permissions to the cluster computer object we just created above. In this case cluster1$ is the name of the computer object that needs Read/Write permissions at both the share and NTFS security levels.

Once the share is created, you can use the Configure Cluster Quorum Wizard as shown below to configure the File Share Witness.
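If you prefer PowerShell to the wizard, a sketch of the same steps follows; the witness server name, share path and domain are examples, while cluster1$ is the cluster computer object created above (remember to grant it Modify rights on the folder's NTFS ACL as well).

# On the witness server: create the folder and share it to the cluster computer object
New-Item -Path "C:\FSW" -ItemType Directory
New-SmbShare -Name "FSW" -Path "C:\FSW" -FullAccess "contoso\cluster1$"

# On a cluster node: point the quorum at the file share witness
Set-ClusterQuorum -NodeAndFileShareMajority "\\witness1\FSW"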

Install DataKeeper

It is important to wait until the basic cluster is created before installing DataKeeper, since the DataKeeper installation registers the DataKeeper Volume Resource type in failover clustering. If you jumped the gun and installed DataKeeper already, that is okay: simply run the setup again and choose Repair Installation.

The screenshots below walk you through a basic installation. Start by running the DataKeeper Setup.

The account you specify below must be a domain account and must be part of the Local Administrators group on each of the cluster nodes.

When presented with the SIOS License Key Manager, you can browse out to your temporary key. Or, if you have a permanent key, you can copy the System Host ID and use that to request your permanent license. If you ever need to refresh a key, the SIOS License Key Manager is installed as a separate program that you can run at any time to add a new key.

Create DataKeeper Volume Resource

Once DataKeeper is installed on each node you are ready to create your first DataKeeper Volume Resource. The first step is to open the DataKeeper UI and connect to each of the cluster nodes.

If everything is done correctly the Server Overview Report should look something like this.

You can now create your first Job as shown below.

After you choose a Source and Target you are presented with the following options. For a local target in the same region the only thing you need to select is Synchronous.

Choose Yes and auto-register this volume as a cluster resource.

Once you complete this process, open up Failover Cluster Manager and look under Storage > Disks. You should see the DataKeeper Volume resource listed in Available Storage. At this point WSFC treats it as if it were a normal cluster disk resource.

Slipstream SP3 Onto SQL 2008 R2 Install Media

SQL Server 2008 R2 is only supported on Windows Server 2012 R2 with SQL Server SP2 or later. Unfortunately, Microsoft never released SQL Server 2008 R2 installation media that includes SP2 or SP3. Instead, you must slipstream the service pack onto the installation media BEFORE you do the installation. If you try to do the installation with the standard SQL Server 2008 R2 media, you will run into all kinds of problems. I don't remember the exact errors, but I do recall they didn't really point to the actual problem, and you will waste a lot of time trying to figure out what went wrong.

As of the date of this writing, Microsoft does not have a Windows Server 2012 R2 with SQL Server 2008 R2 offering in the Azure Marketplace, so you will probably be bringing your own SQL license if you want to run SQL 2008 R2 on Windows Server 2012 R2 in Azure. If they add that image later, or if you choose to use the SQL 2008 R2 on Windows Server 2008 R2 image, you must first uninstall the existing standalone instance of SQL Server before moving forward.

I followed the guidance in Option 1 of this article to slipstream SP3 onto my SQL 2008 R2 installation media. You will of course have to adjust a few things, as that article references SP2 instead of SP3. Make sure you slipstream SP3 onto the installation media that will be used for both nodes of the cluster.

Install SQL Server On The First Node

Using the SQL Server 2008 R2 media with SP3 slipstreamed, run setup and install the first node of the cluster as shown below.

If you use anything other than the default instance of SQL Server, you will have some additional steps not covered in this guide. The biggest difference is locking down the port that SQL Server uses: by default, a named instance of SQL Server does NOT use 1433. Once you lock down the port, you also need to use that port instead of 1433 everywhere it appears in this guide, including the firewall settings and the load balancer settings.

Here, make sure to specify a new IP address that is not in use. This is the same IP address we will use later when we configure the Internal Load Balancer.

As I mentioned earlier, SQL Server 2008 R2 utilizes AD Security Groups. Go ahead and configure SQL server now as shown below before you continue.

Specify the Security Groups you created earlier.

Make sure the service accounts you specify are a member of the associated Security Group.

Specify your SQL Server administrators here.

If everything goes well, you are now ready to configure SQL server on the second node of the cluster.

Install SQL Server On The Second Node

On the second node, run the SQL Server 2008 R2 with SP3 install and select Add Node to a SQL Server FCI.

Proceed with the installation as shown in the following screenshots.

Assuming everything went well, you should now have a two node SQL Server 2008 R2 cluster configured that looks something like the following.

However, you will probably notice that you can only connect to the SQL Server instance from the active cluster node. The problem is that Azure does not support gratuitous ARP, so your clients cannot connect directly to the cluster IP address. Instead, clients must connect to an Azure Load Balancer, which redirects the connection to the active node. Making this work takes two steps: first, create the load balancer; then, run a PowerShell script that fixes the SQL Server cluster IP address to respond to the load balancer probe and to use a 255.255.255.255 subnet mask. Those steps are described below.

Create The Azure Load Balancer

I'm going to assume your clients can communicate directly with the internal IP address of the SQL cluster, so we will create an Internal Load Balancer (ILB) in this guide. If you need to expose your SQL instance on the public internet, you can use a Public Load Balancer instead.

In the Azure portal create a new Load Balancer following the screenshots as shown below. The Azure portal UI changes rapidly, but these screenshots should give you enough information to do what you need to do. I will call out important settings as we go along.

Here we create the ILB. The important thing to note on this screen is you must select “Static IP address assignment” and specify the same IP address that we used during the SQL Cluster installation.

Since I used Availability Zones I see Zone Redundant as an option. If you used Availability Sets your experience will be slightly different.

In the Backend pool be sure to select the two SQL Server instances. You DO NOT want to add your File Share Witness in the pool.

Here we configure the Health Probe. Most Azure documentation has us using port 59999, so we will stick with that port for our configuration.

Here we will add a load balancing rule. In our case we want to redirect all SQL Server traffic to TCP port 1433 of the active node. It is also important that you select Floating IP (Direct Server Return) as Enabled.
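For completeness, here is a hedged Az PowerShell sketch of the same ILB. The resource group, vnet/subnet and resource names are examples; the frontend IP should be the SQL cluster IP you used during installation, and the probe port (59999), ports (1433) and floating IP setting match this guide. The two SQL node NICs still have to be added to the backend pool afterwards (in the portal or by updating each NIC).

$vnet   = Get-AzVirtualNetwork -Name "sql-vnet" -ResourceGroupName "sql-fci-rg"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "sql-subnet" -VirtualNetwork $vnet

$fe    = New-AzLoadBalancerFrontendIpConfig -Name "sql-fe" -Subnet $subnet -PrivateIpAddress "10.0.0.101"
$pool  = New-AzLoadBalancerBackendAddressPoolConfig -Name "sql-pool"
$probe = New-AzLoadBalancerProbeConfig -Name "sql-probe" -Protocol Tcp -Port 59999 `
         -IntervalInSeconds 5 -ProbeCount 2
$rule  = New-AzLoadBalancerRuleConfig -Name "sql-rule" -FrontendIpConfiguration $fe `
         -BackendAddressPool $pool -Probe $probe -Protocol Tcp `
         -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP

New-AzLoadBalancer -Name "sql-ilb" -ResourceGroupName "sql-fci-rg" -Location "eastus2" -Sku Standard `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule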

Run PowerShell Script To Update SQL Client Access Point

Now we must run a PowerShell script on one of the cluster nodes so that the load balancer probe can detect which node is active. The script also sets the subnet mask of the SQL cluster IP address to 255.255.255.255 so that it avoids IP address conflicts with the load balancer we just created.

# Define variables
$ClusterNetworkName = ""
# the cluster network name (use Get-ClusterNetwork on Windows Server 2012 or higher to find the name)
$IPResourceName = ""
# the IP Address resource name
$ILBIP = ""
# the IP address of the Internal Load Balancer (ILB) and the SQL cluster

Import-Module FailoverClusters

# If you are using Windows Server 2012 or higher:
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{Address=$ILBIP;ProbePort=59999;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}

# If you are using Windows Server 2008 R2, use this instead:
# cluster res $IPResourceName /priv enabledhcp=0 address=$ILBIP probeport=59999 subnetmask=255.255.255.255

This is what the output will look like if run correctly.

Next Steps

If you get to this point and you still cannot connect to the cluster remotely, you wouldn’t be the first person. There are a lot of things that can go wrong in terms of security, load balancer, SQL ports, etc. I wrote this guide to help troubleshoot connection issues.

In fact, in this very installation I ran into some strange issues in terms of my SQL Server TCP/IP Properties in SQL Server Configuration Manager. When I looked at the properties, I did not see the SQL Server Cluster IP address as one of the addresses it was listening on, so I had to add it manually. I’m not sure if that was an anomaly. It certainly was an issue I had to resolve before I could connect to the cluster from a remote client.

As I mentioned earlier, one other improvement you can make to this installation is to use a DataKeeper Non-Mirrored Volume Resource for TempDB. If you set that up please be aware of the following two configuration issues people commonly run into.

The first issue: if you move tempdb to a folder on the first node, you must be sure to create the exact same folder structure on the second node. If you don't, SQL Server will fail to come online after a failover because it cannot create tempdb.

The second issue occurs anytime you add another DataKeeper Volume Resource to a SQL Cluster after the cluster is created. You must go into the properties of the SQL Server cluster resource and make it dependent on the new DataKeeper Volume resource you added. This is true for the TempDB volume and any other volumes that you may decide to add after the cluster is created.
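A quick sketch of doing that from PowerShell rather than the Failover Cluster Manager properties dialog (the resource names are examples; check yours with Get-ClusterResource):

# Make the SQL Server resource depend on the newly added DataKeeper Volume resource
Add-ClusterResourceDependency -Resource "SQL Server" -Provider "DataKeeper Volume G"

# Verify the dependency tree
Get-ClusterResourceDependency -Resource "SQL Server"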

Read here to learn how to configure SQL Server and ensure high availability.

Reproduced with permission from Clusteringformeremortals.com

Filed Under: Clustering Simplified Tagged With: Clustering, configure sql server, SQL Server

The Availability Equation – High Availability Solutions

December 9, 2018 by Jason Aw


The Availability Equation

Are you familiar with the Availability Equation? In a nutshell, this equation shows how the total time needed to restore an application to usability is equal to the time required to detect that an application is experiencing a problem plus the time required to perform a recovery action:

T_RESTORE = T_DETECT + T_RECOVER

Key Concepts Of High Availability Solutions

The equation introduces the key concepts of high availability (HA): clustering, problem detection, and subsequent recovery. HA solutions monitor the health of business application components; when problems are detected, these solutions act to restore them to service. The objective of deploying high availability solutions is to minimize downtime.

Reducing detection time and recovery time are two important tasks of any HA solution you choose to deploy. Today's applications are combinations of technologies: servers, storage, network infrastructure, and so on. When reviewing your HA options, be certain that you understand the technologies that each solution uses to detect and recover from all outage types. Each technology has a direct impact on service restoration times.

Local Detection And Recovery

High availability solutions are conceptually straightforward. One technology that is crucial to providing the fastest possible restoration time is known as local detection and recovery (also called service-level problem detection and recovery). In a basic clustering solution, servers are connected and configured so that one or more servers can take over the operations of another in the event of a server failure. The server nodes in the cluster continuously send small data packets, often called heartbeat signals, to each other to indicate that they are "alive".

In simple clustered environments, when one server stops generating heartbeats, the other cluster members assume that the server is down and begin the process of taking over responsibility for that server's domain of operation. This approach is adequate for detecting failure at the server level, but unless a problem actually interrupts or stops the heartbeat signals, server-level detection will not catch it. More than that, server-level recovery can actually magnify the extent and impact of an outage.

For example, if Apache processes hang, the server may still send heartbeats even though the web server subsystem has ceased to perform its primary function. Rather than restart the Apache subsystem on the same or a different server, a basic server-level clustering solution would restart the entire software stack of the failed server on a backup server, interrupting users and extending recovery time.

How It Works

Using local detection and recovery, advanced clustering solutions deploy health-monitoring agents within individual cluster servers, to monitor individual system components such as a file system, a database, user-level application, IP address, and so on. These agents use heuristics that are specific to the monitored component. Therefore, the agents can predict and detect operational issues and then take the most appropriate recovery action. Often, the most efficient recovery method is to stop and restart the problem subsystem on the same server.

The time to restore an application to user availability can be greatly reduced by enabling recovery within the same physical server and by detecting failures at a more granular level than server-level heartbeats. Solutions such as the SteelEye Protection Suite for Linux from SIOS provide this level of detection and recovery for your environment. Make certain that whichever HA solution you deploy can also support local detection and recovery.

Would you like to enjoy high availability solutions for your projects? Check in with us. Need more references? Here are our success stories.
Reproduced with permission from Linuxclustering

Filed Under: Clustering Simplified, Datakeeper Tagged With: Clustering, high availability solutions

Maximise replication performance for Linux Clustering with Fusion-io

November 27, 2018 by Jason Aw


Tips To Maximise Replication Performance For Linux Clustering With Fusion-io

When most people think about setting up a cluster, it usually involves two or more servers and a SAN, or some other type of shared storage. SANs are typically costly and complex to set up and maintain, and they technically represent a potential Single Point of Failure (SPOF) in your cluster architecture. These days, more and more people are turning to companies like Fusion-io, with their lightning-fast ioDrives, to accelerate critical applications. These storage devices sit inside the server (i.e., they aren't "shared disks"), so they can't be used as cluster disks with many traditional clustering solutions. Fortunately, there are ways to maximise replication performance for Linux clustering with Fusion-io: solutions that allow you to form a failover cluster when there is no shared storage involved, i.e., a "shared nothing" cluster.

[Diagrams: Traditional Cluster vs. "Shared Nothing" Cluster]

When leveraging data replication as part of a cluster configuration, it’s critical that you have enough bandwidth so that data can be replicated across the network just as fast as it’s written to disk.  The following are tuning tips that will allow you to get the most out of your “shared nothing” cluster configuration, when high-speed storage is involved:

Network

  • Use a 10Gbps NIC: Flash-based storage devices from Fusion-io (or other similar products from OCZ, LSI, etc.) are capable of writing data at speeds in the hundreds of MB/sec (750+ MB/sec) or more. A 1Gbps NIC can only push a theoretical maximum of ~125 MB/sec, so anyone taking advantage of an ioDrive's potential can easily write data much faster than could be pushed through a 1Gbps network connection. To ensure that you have sufficient bandwidth between servers to facilitate real-time data replication, a 10Gbps NIC should always be used to carry replication traffic.
  • Enable Jumbo Frames: Assuming that your Network Cards and Switches support it, enabling jumbo frames can greatly increase your network’s throughput while at the same time reducing CPU cycles.  To enable jumbo frames, perform the following configuration (example from a RedHat/CentOS/OEL linux server)
    • ifconfig <interface_name> mtu 9000
    • Edit /etc/sysconfig/network-scripts/ifcfg-<interface_name> file and add “MTU=9000” so that the change persists across reboots
    • To verify end-to-end jumbo frame operation, run this command: ping -s 8900 -M do <IP-of-other-server>
  • Change the NIC’s transmit queue length:
    • /sbin/ifconfig <interface_name> txqueuelen 10000
    • Add this to /etc/rc.local to preserve the setting across reboots

TCP/IP Tuning

  • Change the NIC’s netdev_max_backlog:
    • Set “net.core.netdev_max_backlog = 100000” in /etc/sysctl.conf
  • Other TCP/IP tuning that has shown to increase replication performance:
    • Note: these are example values and some might need to be adjusted based on your hardware configuration
    • Edit /etc/sysctl.conf and add the following parameters:
      • net.core.rmem_default = 16777216
      • net.core.wmem_default = 16777216
      • net.core.rmem_max = 16777216
      • net.core.wmem_max = 16777216
      • net.ipv4.tcp_rmem = 4096 87380 16777216
      • net.ipv4.tcp_wmem = 4096 65536 16777216
      • net.ipv4.tcp_timestamps = 0
      • net.ipv4.tcp_sack = 0
      • net.core.optmem_max = 16777216
      • net.ipv4.tcp_congestion_control=htcp

Adjustments

Typically you will also need to make adjustments to your cluster configuration, which will vary based on the clustering and replication technology you decide to implement. In this example, I'm using the SteelEye Protection Suite for Linux (aka SPS, aka LifeKeeper) from SIOS Technologies. It allows users to form failover clusters leveraging just about any back-end storage type: Fibre Channel SAN, iSCSI, NAS, or, most relevant to this article, local disks that need to be synchronized/replicated in real time between cluster nodes. SPS for Linux includes integrated, block-level data replication functionality that makes it very easy to set up a cluster when there is no shared storage involved.

Recommendations

To maximise replication performance for Linux clustering with Fusion-io, here are the SteelEye Protection Suite (SPS) for Linux configuration recommendations:

  • Allocate a small (~100 MB) disk partition, located on the Fusion-io drive to place the bitmap file.  Create a filesystem on this partition and mount it, for example, at /bitmap:
    • # mount | grep /bitmap
    • /dev/fioa1 on /bitmap type ext3 (rw)
  • Prior to creating your mirror, adjust the following parameters in /etc/default/LifeKeeper
    • Insert: LKDR_CHUNK_SIZE=4096
      • Default value is 64
    • Edit: LKDR_SPEED_LIMIT=1500000
      • (Default value is 50000)
      • LKDR_SPEED_LIMIT specifies the maximum bandwidth that a resync will ever take — this should be set high enough to allow resyncs to go at the maximum speed possible
    • Edit: LKDR_SPEED_LIMIT_MIN=200000
      • (Default value is 20000)
      • LKDR_SPEED_LIMIT_MIN specifies how fast the resync should be allowed to go when there is other I/O going on at the same time — as a rule of thumb, this should be set to half or less of the drive’s maximum write throughput in order to avoid starving out normal I/O activity when a resync occurs

From here, go ahead and create your mirrors and configure the cluster as you normally would.

Interested in maximising replication performance for Linux clustering with Fusion-io? See what else SIOS can offer.
Reproduced with permission from LinuxClustering

Filed Under: Clustering Simplified, Datakeeper Tagged With: Clustering, Fusion-io, Linux, maximise replication performance for linux clustering with fusion io, replication

Automated Disaster Recovery Protection Clustering Solution For Hedge Fund 

May 6, 2018 by Jason Aw


Automated Disaster Recovery Protection Clustering Solution For Leading International Macro Hedge Fund

netConsult Selects SteelEye LifeKeeper For Disaster Recovery Protection Clustering Solution of Exchange, Oracle9i and SQL Server Trading Systems.

netConsult is a leading consulting and systems integration provider specializing in data security management for financial services firms. It has implemented SteelEye LifeKeeper to provide automated disaster recovery protection for one of its key clients, a leading international macro hedge fund based in London, England.

Complete, Automated Disaster Recovery Protection Clustering Solution

"Due to the real-time, high-value nature of our client's business, any kind of failure in their mission-critical systems prevents them from completing timely trades and can have a significant impact upon their business, particularly in terms of the profitability of transactions and the confidence of their investment clients," said Richard McDonald, principal and founder of netConsult. "SteelEye LifeKeeper is one of four disaster recovery software solutions we researched on behalf of our client. The deciding factors in favor of LifeKeeper were that it was the only truly complete, automated solution for disaster recovery; it was the only product that was capable of being demonstrated out of the box; and it was by far the most reasonably priced given the complexity of our client's environment, including the need to cluster a heterogeneous mix of storage and servers."

Increased Server And Application Availability

"Coupled with overall heightened security concerns, it became imperative for our client to implement a comprehensive solution for business continuity, one that functions reliably and efficiently regardless of the scale of interruption, from a simple server failure to a disaster requiring complete site recovery," McDonald added. "We appreciated the assurance of disaster recovery protection. Additionally, our client has benefited greatly from the overall increase in server and application availability that the LifeKeeper clustering solution has made possible."

Effectiveness & Stability Of LifeKeeper

LifeKeeper is being used to ensure complete disaster recovery protection of netConsult’s client’s core, business-critical trading systems. These include: Calypso a currency trading system based on Oracle 9i from Calypso Technology; Tradar Portfolio, also a currency trading system based on Microsoft SQL Server from Tradar Limited; and Microsoft Exchange Server, a messaging and collaboration platform being used for internal communication and confirmation of trading orders. The SteelEye LifeKeeper solution provides automated monitoring, failover and failback of these systems, coupled with integrated disk-level data replication between the customer’s primary business location in the heart of London, and its managed recovery site some distance outside of London, operated by Sungard Data Systems.

SteelEye LifeKeeper is installed on eight servers in a geographically dispersed, heterogeneous stretch-cluster configuration of four server pairs, all running the Windows 2000 operating system: four HP ProLiant BL20p G2 blade servers accessing Hitachi Thunder 9500 V Series modular storage at the primary location, individually clustered over a wide-area network with four HP ProLiant DL380 G2 and DL380 G3 servers accessing HP MSA1000 storage at the recovery site. Three of the server pairs provide individual high availability protection for the Calypso, Tradar and Exchange systems respectively, while the fourth server pair ensures high availability of file and print services. The integrated, disk-level data replication capabilities within LifeKeeper enable the continuous synchronization of the server pairs, and thereby the creation of a stretch-cluster configuration that supports disaster recovery over a wide-area network.

Disaster Recovery Clustering of Exchange

McDonald went on to say, "LifeKeeper proved to be a hands-off solution that was easy to implement in an environment known for its complexities. Since going live we have experienced several server outages involving our client's Oracle and SQL Server systems. LifeKeeper handled them so well that no interruption or change in level of service was noticed by any users."

In addition to providing complete disaster recovery protection, the LifeKeeper solution has proved to be very useful during systems maintenance. Previously, it was necessary to undertake the manual and time-consuming process of configuring and migrating users to backup systems. Now, with LifeKeeper, production systems and user connectivity can be automatically failed over to the recovery location while maintenance is applied to the primary server. Once this task is complete, systems and users are failed back in a similar automated fashion.

“The support we have received from the SteelEye technical team during and after implementation has been excellent,” McDonald concluded. “The deployment of disaster recovery solutions for environments like Exchange and Oracle is no simple task. SteelEye continues to demonstrate a broad range of experience and knowledge, which we value greatly.”

Ensure Business Continuity

"We are excited by netConsult's selection of our SteelEye LifeKeeper disaster recovery solution; they are a very well-respected financial services IT consultancy," said John Banfield, European sales director for SteelEye. "The ability of the LifeKeeper solution to address their client's critical business requirements and technical complexity is a strong validation of our approach, and solid proof that LifeKeeper delivers industrial-strength assurance of continuity in the most demanding of environments."

To find out how our Disaster Recovery Protection Clustering Solution can benefit you, go here.

Filed Under: Success Stories Tagged With: Business Continuity, Clustering, disaster recovery, disaster recovery protection clustering solution, Exchange

Join My Session On Deploying Highly Available SQL Server in Azure

March 31, 2018 by Jason Aw

@Sqlsatnash Deploying Highly Available SQL Server In #Azure Session At SQL Saturday Nashville, Jan 16th

I'll be heading to Nashville to share about deploying highly available SQL Server. While there, there are a couple of things I can't wait to catch up on: technology and music. I certainly hope I can catch some good music at The Station Inn.

Come By My Session On Deploying Highly Available SQL Server in Azure

Jan 16th is going to be a great day of learning and networking. Hang out with my #SQLPass family and join my session. This hour-long session is great for those who are keen on learning about deploying SQL Server in Azure.

On Cloud Database/Application Development & Deployment

As we are already aware, Windows Azure is an excellent IaaS platform on which to deploy SQL Server. Even though Microsoft manages the infrastructure, you still need to plan for high availability and disaster recovery. In this session, learn how to leverage Azure Fault Domains, Upgrade Domains, and Internal Load Balancers to ensure high availability of SQL Server deployments within the Azure cloud. You will also learn the difference between Azure Classic and Azure Resource Manager, and how each affects your SQL Server availability. While Microsoft Azure offers SLAs of 99.95%, make sure your SQL Server deployment qualifies. This session is best suited for those who intend to move, or have already moved, SQL Server instances to Azure. Participants should have a basic knowledge of SQL Server AlwaysOn Failover Clustering as well as Availability Groups; if you don't, no fear, you should be able to catch up pretty fast with a little practice and experimenting.

Reproduced with permission from https://clusteringformeremortals.com/2015/12/21/sqlsatnash-deploying-highly-available-sql-server-in-azure-session-at-sql-saturday-nashville-jan-16th/

Filed Under: Clustering Simplified Tagged With: Azure, Clustering, Highly Available SQL Server, SQL Server
