SIOS SANless clusters


Making Sense Of Virtualization Availability Options

January 21, 2018 by Jason Aw

What Are The Virtualization Availability Options?

With Microsoft Windows Server 2008 R2 and vSphere 4.0 newly released, let’s take a look at some Virtualization Availability Options to consider for your virtual servers and the applications running on them.

I will also take this opportunity to describe some of the features that enable virtual machine availability, grouped by functional role to highlight their purpose.

Planned Downtime

Microsoft’s Live Migration and VMware’s VMotion are both solutions that allow an administrator to move a virtual machine from one physical server to another with no perceivable downtime. There is one key thing to remember: the move must be a planned event, because the virtual machine’s memory has to be synchronized between the servers before the actual switchover occurs. This is true of both Microsoft’s and VMware’s solutions. Also keep in mind that both of these technologies require shared storage to hold the virtual hard disks (VMDK and VHD files), which limits Live Migration and VMotion to local area networks. It also means that any planned downtime for the storage array must be handled in a different way, which is important to note if you want to limit the impact on your virtual machines.
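
To make the “synchronize memory, then switch” idea concrete, here is a toy Python sketch of the iterative pre-copy approach that live migration technologies are generally based on. It is purely illustrative: the page counts, dirty rates, and round logic are made-up assumptions, not how Hyper-V or VMotion are actually implemented.

```python
import random

PAGES = 1024     # toy VM with 1024 memory pages
THRESHOLD = 8    # switch over once this few pages remain to copy

def pages_dirtied_this_round():
    """Simulate the guest dirtying a small set of pages while it keeps
    running during the pre-copy phase (assumed workload, not measured)."""
    return set(random.sample(range(PAGES), random.randint(0, 32)))

def live_migrate():
    pending = set(range(PAGES))  # round 1: copy every page, VM still running
    round_no = 0
    while len(pending) > THRESHOLD:
        round_no += 1
        copied = len(pending)
        pending = pages_dirtied_this_round()  # re-copy what got dirtied
        print(f"round {round_no}: copied {copied} pages, {len(pending)} dirtied")
    # Only now is the VM paused: copy the last few pages, then resume it
    # on the destination host. This brief pause is the only "downtime".
    print(f"pause VM, copy final {len(pending)} pages, resume on destination")

if __name__ == "__main__":
    live_migrate()
```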

Unplanned Downtime

Microsoft’s Windows Server Failover Clustering and VMware’s High Availability (HA) are solutions that protect virtual machines in the event of unplanned downtime. Both solutions work similarly: they monitor virtual machines for availability, and if a failure is detected, the VMs are moved to a standby node and rebooted there as the recovery process. Because the failure is unplanned, there is no time to synchronize memory before the failover, which is why a reboot is required.
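
The mechanics are simple enough to sketch. Below is a minimal, hypothetical Python model of the monitor-and-restart loop; the heartbeat timeout and method names are my own illustration, not how WSFC or VMware HA are implemented.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # assumed seconds of silence before declaring failure

class VmMonitor:
    """Toy model: after an unplanned failure there is no chance to sync
    memory, so recovery is a cold boot of the VM on a standby node."""

    def __init__(self, vm_name, standby_node):
        self.vm_name = vm_name
        self.standby_node = standby_node
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called whenever the host reports the VM is alive."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Call periodically; fires failover once the timeout elapses."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.failover()

    def failover(self):
        # The virtual hard disk lives on shared storage, so the standby
        # node can register the same VHD/VMDK and boot the VM from it.
        print(f"{self.vm_name}: heartbeat lost, rebooting on {self.standby_node}")

monitor = VmMonitor("sql-vm", standby_node="node2")
monitor.check()
```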

Disaster Recovery

How do I recover my virtual machines in the event of a complete site loss? The good news is that virtualization makes this process a whole lot easier, since a virtual machine is simply a file that can be picked up and moved to another server. Up to this point, VMware and Microsoft are pretty similar in their availability features and functionality. However, here is where Microsoft really shines. VMware offers Site Recovery Manager, which is a fine product, but its support is limited to SRM-certified array-based replication solutions. Also, the failover and failback process is not trivial and can take the better part of a day to do a complete round trip from the DR site back to the primary data center. It does have some nice features like DR testing. In my experience, however, Microsoft has the much better solution when it comes to disaster recovery.

Microsoft’s Hyper-V DR solution

Microsoft’s Hyper-V DR solution is Windows Server Failover Clustering in a multi-site cluster configuration (see video demonstration). In this configuration, the performance and behavior are the same as in a local area cluster, yet it can span data centers, so you can move your virtual machines across data centers with little to no perceivable downtime. Failback is the same process: just point and click to move the virtual machine resource back to the primary data center. There is no built-in “DR testing”, although I think it is preferable to be able to do an actual DR test in a matter of a minute or two with no perceivable downtime.

Host-Based Replication Vendors

One other thing I like about WSFC multi-site clusters is that the replication options include not only array-based replication vendors, but also host-based replication vendors. This really gives you a wide range of replication solutions in all price ranges and does not require that you upgrade your existing storage infrastructure.

Fault Tolerance

Fault tolerance basically eliminates the need to reboot a virtual machine in the event of an unexpected failure. VMware has the edge here in that it offers VMware FT, and a few other 3rd party hardware and software vendors play in this space as well. There are plenty of limitations and requirements when it comes to implementing FT systems, but this is an option if you need to ensure that a hardware component failure results in zero downtime vs. the minute or two it takes to boot up a VM in a standard HA configuration. Before you invest here, you probably want to make sure that your existing servers are already chock full of hot standby CPUs, RAM, power supplies, etc., and that you have redundant paths to the network and storage; otherwise you may be throwing good money after bad. Fault tolerance is great for protection from hardware failures, but what happens if your application or the virtual machine’s operating system is behaving badly? That is when you need application-level clustering, as described below.

Application Availability

Everything I have discussed up to this point really only takes into consideration the health of your physical servers and your virtual machines as a whole. This is all well and good, but what happens if your virtual machine blue screens? Or what if the latest SQL service pack broke your application? In those cases, none of these solutions are going to do you one bit of good. For your most critical applications, you really must cluster at the application layer. Look into clustering solutions that run within the OS on the virtual machine vs. within the hypervisor; in the Microsoft world, this means MSCS/WSFC or 3rd party clustering solutions. Your storage options, when clustering within the virtual machine, are limited in scope to either iSCSI targets or host-based replication solutions. Currently, VMware really does not have a solution to this problem; it defers to solutions that run within the virtual machine for application-layer monitoring.
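
The difference between VM-level and application-level monitoring is easy to illustrate. Here is a small, hypothetical Python health check for SQL Server using the pyodbc driver; the connection string and the failover action are placeholder assumptions, but the point stands: a VM can be “up” while the application inside it is not, and only a probe like this notices.

```python
import pyodbc  # third-party ODBC driver: pip install pyodbc

# Placeholder connection string -- adjust the driver, server, and auth
# details for your own environment.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlclus;DATABASE=master;Trusted_Connection=yes;"
)

def sql_is_healthy(timeout_sec=5):
    """Return True only if SQL Server accepts a connection and answers a
    trivial query -- a far stricter test than 'the VM is powered on'."""
    try:
        conn = pyodbc.connect(CONN_STR, timeout=timeout_sec)
        try:
            conn.execute("SELECT 1").fetchone()
        finally:
            conn.close()
        return True
    except pyodbc.Error:
        return False

if __name__ == "__main__":
    if not sql_is_healthy():
        print("VM may be up, but SQL is not: trigger application-level failover")
```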

Summary

With the advent of virtualization, it is really not a question of whether you need availability, but rather which Virtualization Availability Options will help you meet your SLA and/or DR requirements. I hope this information helps you make sense of the options available to you.

Reproduced with permission from https://clusteringformeremortals.com/2009/08/14/making-sense-of-virtualization-availability-options-2/

Read our success stories to understand how SIOS can help you.

Filed Under: Clustering Simplified Tagged With: DataKeeper, DR, Virtualization, Virtualization Availability Options, Vmotion, VMware, VMware High Availability

Remove The Weakest Link, Ensure High Availability Cluster Configuration

January 21, 2018 by Jason Aw

Build A High Availability Cluster Configuration

When you build a High Availability Cluster Configuration, your application availability is only as good as its weakest link. Suppose you bought great servers with redundant everything (CPU, fans, power, RAID, RAM, etc.) and a super deluxe SAN with multi-path connectivity, coupled it with multiple SAN switches, and clustered your application with your favorite clustering software. You probably have a very reliable application – right? Well, not necessarily. Are the servers plugged into the same UPS? Are they on the same network switch? Are they cooled by the same AC unit? Are they in the same building? Is your SAN truly reliable? Any one of these issues, among others, is a single point of failure in a High Availability Cluster Configuration.

Seek And Remove The Weakest Link in Cluster Configuration

Of course, you have to know when “good enough” is “good enough”. Your budget and your SLAs will help decide what exactly is good enough. However, one area where I am concerned that people may be skimping is in the area of storage. With the advent of cheap or free iSCSI target software solutions, I am seeing some people recommend that you just throw some iSCSI target software on a spare server and voilà – instant shared storage.

Mind you, I’m not talking about OEM iSCSI solutions that have built-in failover technology and/or other availability features, or even storage virtualization solutions such as FalconStor. I’m talking about the guy who has a server running Windows Server 2008 that he has loaded up with storage and wants to turn into an iSCSI target. This is great in a lab, but if you are serious about HA, you should think again. Even Microsoft only provides its iSCSI target software to qualified OEM builders experienced in delivering enterprise-class storage arrays.

What Are You Actually Getting?

First of all, this is Windows, not some hardened OS built only to serve storage. It will require maintenance, security updates, hardware fixes, etc. It basically has the same reliability as the application server you are trying to protect. Does it make sense to cluster your application servers, yet use the same class of server and OS to host your storage? You have basically moved your single point of failure from your application server to your storage server. That’s not a smart move as far as I am concerned.

Some enterprise-class iSCSI target software includes synchronous and/or asynchronous replication and snapshot capabilities. This functionality certainly helps in terms of your recovery point objective (RPO), although it won’t help your recovery time objective (RTO) unless the failover is automatic and seamless to your clustering software. Let’s say the primary iSCSI storage array fails in the middle of the night: who is going to be there to activate the replicated copy? You may be down for quite some time before you even realize there is a problem. Again, this may be “good enough”; you just need to be aware of what you are signing up for. Is that the High Availability Cluster Configuration you’re seeking?

SIOS DataKeeper

One thing you can do to improve the reliability of your iSCSI target server is to use a replication product such as SteelEye DataKeeper Cluster Edition to eliminate the single point of failure. Let me illustrate.

Figure 1 – Typical shared storage configuration. In the event that the iSCSI target becomes unavailable, all the nodes go offline.

If we take the same configuration shown above and add a hot-standby iSCSI target, using SteelEye DataKeeper Cluster Edition to do replication AND automatic failover, you have just given your iSCSI target solution a whole new level of availability. That solution would look very much like this.

Figure 2 – In this scenario, DataKeeper Cluster Edition is replicating the iSCSI attached volume on the active node to the iSCSI attached volume on the passive node, which is connected to an entirely different iSCSI target server.

The key difference between the solution which utilizes SteelEye DataKeeper Cluster Edition and the replication solutions provided by some iSCSI target vendors is the integration with WSFC. The question to ask of your iSCSI solution vendor is this…

What happens if I pull the power cord on the active iSCSI target server?

If the recovery process is a manual procedure, it is not a true HA solution. But what if it is automatic and completely integrated with WSFC? Then you have a much higher level of availability and have eliminated the iSCSI array as a single point of failure.

Chat with us to learn how you can also achieve a High Availability Cluster Configuration.

Reproduced with permission from Clusteringformeremortals.

Filed Under: Clustering Simplified, Datakeeper Tagged With: cluster, DataKeeper Cluster Edition, High Availability, high availability cluster configuration, SANLess Clustering

SteelEye DataKeeper Cluster Edition Wins Windows IT Pro Best High Availability/Disaster Recovery Awards

January 20, 2018 by Jason Aw

I am pleased to announce that Windows IT Pro has awarded SteelEye DataKeeper Cluster Edition its Best High Availability and Disaster Recovery Product award in two categories: the Community Choice Gold Award and the Editors’ Best Silver Award.


I am really proud to be a part of the SteelEye DataKeeper team, and I appreciate everyone in the Windows IT Pro community who voted for us in the Community Choice award!

Reproduced with permission from https://clusteringformeremortals.com/2009/11/20/steeleye-datakeeper-cluster-edition-wins-windows-it-pro-best-high-availabilitydisaster-recovery-awards/

Filed Under: Clustering Simplified, Datakeeper Tagged With: DataKeeper, DataKeeper Cluster Edition, disaster recovery, High Availability, Windows IT Pro

How Can Asynchronous Replication Be Used In A Multi-Site Cluster? Isn’t The Data Out Of Sync?

January 18, 2018 by Jason Aw

I have been asked this question more than a few times, so I thought I would answer it in my first blog post.  The basic answer is yes, you can lose data in an unexpected failure when using asynchronous replication in a multi-site cluster.  In an ideal world, every company would have a dark fiber connection to their DR site and use synchronous replication with their multi-site cluster, eliminating the possibility of data loss.  However, the reality is that in many cases, the WAN connectivity to the DR site has too much latency to support synchronous replication.  In such cases, asynchronous replication is an excellent alternative.
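
To see why WAN latency rules out synchronous replication, a little arithmetic helps. In synchronous mode every write must be acknowledged by the remote site before it completes, so the round-trip time caps write throughput. The numbers below are illustrative assumptions, not measurements.

```python
# Back-of-envelope math with assumed (not measured) latencies.
rtt_ms = 40.0          # assumed WAN round trip to the DR site
local_write_ms = 0.5   # assumed local disk write latency

# A synchronous write can't complete until the remote ack arrives.
sync_write_ms = local_write_ms + rtt_ms

print(f"synchronous write latency: {sync_write_ms:.1f} ms")
print(f"max serialized writes/sec: ~{1000 / sync_write_ms:.0f} "
      f"(vs ~{1000 / local_write_ms:.0f} locally)")
# Roughly 25 serialized writes/sec over this WAN vs ~2000 locally --
# an 80x hit, which is why asynchronous replication is used on such links.
```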

What Are My Options?

There are more than a few options when choosing an asynchronous replication solution to use with your WSFC multi-site cluster. These include array-based solutions from companies like EMC, IBM, HP, etc., and host-based solutions, like the one that is near and dear to me, “SteelEye DataKeeper Cluster Edition“.  Since I know DataKeeper best, I will explain how this all works from DataKeeper’s perspective.

What About SteelEye DataKeeper?

When using SteelEye DataKeeper and asynchronous replication, we allow a certain number of writes to be stored in the async queue.  The number of writes which can be queued is determined by the “high water mark”. This is an adjustable value used by DataKeeper to determine how much data can be in the queue before the mirror state is changed from “mirroring” to “paused”.  A “paused” state is also entered anytime there is a communication failure between the secondary and primary server. While in a paused state, automatic failover in a multi-site cluster is disabled, limiting the amount of data that can be lost in an unexpected failure.  If the original data set is deemed “lost forever”, then the remaining data on the target server can be manually unlocked and the cluster node can then be brought into service.

While in the “paused” state, DataKeeper allows the async queue to drain until we reach the “low water mark”, at which point the mirror enters a “resync” state until all of the data is once again in sync.  At that point, the mirror returns to the “mirroring” state and automatic failover is once again enabled.
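
The mirroring/paused/resync behavior described above is essentially a small state machine, which the following Python sketch models. The water-mark values and method names are illustrative assumptions, not DataKeeper’s actual defaults or internals.

```python
from enum import Enum

class MirrorState(Enum):
    MIRRORING = "mirroring"  # automatic failover enabled
    PAUSED = "paused"        # automatic failover disabled
    RESYNC = "resync"

class AsyncMirror:
    """Toy model of the water-mark behavior described above."""
    HIGH_WATER = 1000  # assumed: queued writes before the mirror pauses
    LOW_WATER = 100    # assumed: queue level at which resync begins

    def __init__(self):
        self.queue = 0
        self.state = MirrorState.MIRRORING

    def write(self):
        """A write lands on the source and is queued for the target."""
        self.queue += 1
        if self.queue >= self.HIGH_WATER:
            self.state = MirrorState.PAUSED  # too far behind: pause

    def ack(self):
        """The target confirmed a write, so the queue drains by one."""
        self.queue = max(0, self.queue - 1)
        if self.state is MirrorState.PAUSED and self.queue <= self.LOW_WATER:
            self.state = MirrorState.RESYNC
        elif self.state is MirrorState.RESYNC and self.queue == 0:
            self.state = MirrorState.MIRRORING  # fully in sync again

    def comm_failure(self):
        """Loss of contact with the target also pauses the mirror."""
        self.state = MirrorState.PAUSED
```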

As long as your WAN link is not saturated or broken, you should never see more than a few writes in this async queue at any given time.  In an unexpected failure (think pulled power cord), you will lose any write that is in the async queue.  This is the trade-off you make when you want the awesome recovery point objective (RPO) and recovery time objective (RTO) that you achieve with a multi-site cluster, but your WAN link has too much latency to effectively support synchronous replication.

Try SteelEye DataKeeper

Take time to monitor the DataKeeper async queue via Windows Performance Logs and Alerts. I think you will be pleasantly surprised to find that most of the time the async queue is empty, thanks to the efficiency of the DataKeeper replication engine.  Even in times of heavy writes, the async queue seldom grows very large and almost always drains immediately, so the amount of data at risk at any given time is minimal.  Compared to the alternative of restoring from last night’s backup after a disaster, the number of writes you could lose in an unexpected failure using asynchronous replication is minimal!
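
If you want to script that check rather than watch Performance Monitor, something like the following works. It shells out to the built-in Windows typeperf tool from Python; note that the counter path shown is a hypothetical example, so look up the exact name the DataKeeper counters are registered under on your own system.

```python
import subprocess

# Hypothetical counter path -- confirm the real object/counter names in
# Performance Monitor on a node where DataKeeper is installed.
COUNTER = r"\SIOS DataKeeper Volume(E:)\Queue Current Length"

def sample_counter(counter):
    """Grab one sample using typeperf, which prints CSV lines of the
    form "timestamp","value" after a header row."""
    out = subprocess.run(
        ["typeperf", counter, "-sc", "1"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        parts = [p.strip().strip('"') for p in line.split('","')]
        if len(parts) == 2:
            try:
                return float(parts[1])  # the sampled counter value
            except ValueError:
                continue  # header row, not a sample
    return None

if __name__ == "__main__":
    depth = sample_counter(COUNTER)
    print(f"async queue length: {depth}")
    if depth and depth > 0:
        print("writes are queued: this much data is at risk right now")
```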

Of course, there are some instances where losing even a single write is not tolerable.  In those cases, it is recommended to use SteelEye DataKeeper’s synchronous replication option across a high-speed, low-latency LAN or WAN connection.

Reproduced with permission from Clusteringformeremortals.com

Filed Under: Clustering Simplified, Datakeeper Tagged With: cluster

Why I Should Convert My #Azure Clusters To Managed Disks Today!

August 9, 2017 by Jason Aw

You may have heard about the recent storage outage that impacted some instances in the US East region back on March 16th. A root cause analysis of the outage is posted here.

March 16th US East Storage Outage

Customer impact: A subset of customers using Storage in the East US region may have experienced errors and timeouts while accessing their storage account in a single Storage scale unit

You might be asking, “What is a single Storage scale unit?” Well, you can think of it as a single storage cluster, or a single SAN, or however you want to think about it. I don’t think Azure publishes its exact infrastructure, but you can probably assume that behind the scenes they are using Scale-Out File Servers for backend storage.

So the question is, how could I have survived this outage with minimal downtime? If you read further down that root cause analysis, you come across this little nugget.

Virtual Machines using Managed Disks in an Availability Set would have maintained availability during this incident.

What’s Managed Disks, you ask? Well, just on February 8th, Corey Sanders announced the GA of Managed Disks. You can read all about Managed Disks here: https://azure.microsoft.com/en-us/services/managed-disks/

The reason why Managed Disks would have helped in this outage is that by leveraging an Availability Set combined with Managed Disks, you ensure that each of the instances in your Availability Set is connected to a different “Storage scale unit”. So in this particular case, only one of your cluster nodes would have failed, leaving the remaining nodes to take over the workload.

Prior to Managed Disks being available (anything deployed before 2/8/2017), there was no way to ensure that the storage attached to your servers resided on different Storage scale units. Sure, you could use a different storage account for each instance, but in reality that did not guarantee that those storage accounts provisioned storage on different Storage scale units.

So while an Availability Set ensured that your instances resided in different Fault Domains and Update Domains to protect the availability of the instances themselves, the storage attached to each instance still represented a single point of failure. Although the storage itself is highly resilient, with three copies of your data and geo-redundant options available, in this case a power failure took down the entire Storage scale unit along with all the servers attached to it.
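
For reference, here is a minimal sketch of deploying a cluster node with managed disks into an availability set using the azure-mgmt-compute Python SDK. All of the names, IDs, and sizes are placeholders; the point is simply that the os_disk specifies managed_disk instead of a vhd blob URI in a storage account.

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUB_ID = "<subscription-id>"   # placeholder
RG = "cluster-rg"              # placeholder resource group

client = ComputeManagementClient(DefaultAzureCredential(), SUB_ID)

poller = client.virtual_machines.begin_create_or_update(
    RG,
    "sqlnode1",
    {
        "location": "eastus",
        # Same availability set for every cluster node: with managed
        # disks, Azure also spreads their disks across storage scale units.
        "availability_set": {
            "id": f"/subscriptions/{SUB_ID}/resourceGroups/{RG}"
                  "/providers/Microsoft.Compute/availabilitySets/cluster-as"
        },
        "hardware_profile": {"vm_size": "Standard_DS2_v2"},
        "storage_profile": {
            "image_reference": {
                "publisher": "MicrosoftWindowsServer",
                "offer": "WindowsServer",
                "sku": "2016-Datacenter",
                "version": "latest",
            },
            # Managed disk: no "vhd" blob URI, no storage account to manage.
            "os_disk": {
                "create_option": "FromImage",
                "managed_disk": {"storage_account_type": "Premium_LRS"},
            },
        },
        "os_profile": {
            "computer_name": "sqlnode1",
            "admin_username": "azureuser",
            "admin_password": "<password>",   # placeholder
        },
        "network_profile": {
            "network_interfaces": [{"id": "<nic-resource-id>"}]  # placeholder
        },
    },
)
vm = poller.result()  # block until the deployment finishes
```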

So, long story short: migrate to Managed Disks as soon as possible in order to help minimize downtime.

https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-migrate-to-managed-disks

And if you really want to minimize downtime you should consider Hybrid Cloud Deployments that span cloud providers or on-prem to cloud!

 

Reposted from the original post by Dave Bermingham, Microsoft Clustering MVP – https://clusteringformeremortals.com/2017/03/22/why-i-should-convert-my-azure-clusters-to-managed-disks-today/

Filed Under: Clustering Simplified Tagged With: #SANLess Clusters for SQL Server Environments, #SANLess Clusters for Windows Environments, Awards, Azure, azure availability managed disk, Clustering 101, High Availability
