How to Remove SIOS DataKeeper Storage from SIOS LifeKeeper

March 23, 2022 by Jason Aw

In this three-minute video, Greg Tucker, SIOS Senior Product (Windows) Support Engineer, demonstrates how to properly remove SIOS DataKeeper storage from SIOS LifeKeeper.

It is highly recommended that you remove the DataKeeper resource from the cluster prior to removing DataKeeper.

At the end of the video, Greg shares the SIOS Support contact info in the event there are other questions or issues.

[Video: How to Remove SIOS DataKeeper Storage from SIOS LifeKeeper | SIOS]

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: DataKeeper, storage

Managing a Real-Time Recovery in a Major Cloud Outage

January 19, 2019 by Jason Aw

Disasters happen, making sudden downtime a reality. But there are things all customers can do to survive virtually any cloud outage.

Stuff happens. Failures—both large and small—are inevitable. What is not inevitable is extended periods of downtime.

Consider the day the South Central US Region of Microsoft’s Azure cloud experienced a catastrophic failure. A severe thunderstorm led to a cascading series of problems that eventually knocked out an entire data center. In what some have called “The Day the Azure Cloud Fell from the Sky,” most customers were offline, not just for a few seconds or minutes, but for a full day. Some were offline for over two days. While Microsoft has since addressed the many issues that led to the outage, the incident will long be remembered by IT professionals.

That’s the bad news. The good news is that there are things all Azure customers can do to survive virtually any outage, from a single server failing to an entire data center going offline. In fact, Azure customers who implement robust high-availability and/or disaster recovery provisions, complete with real-time data replication and rapid, automatic failover, can expect to experience no data loss and little or no downtime whenever catastrophe strikes.

Managing The Cloud Outage

This article examines four options for providing disaster recovery (DR) and high availability (HA) protections in hybrid and purely Azure cloud configurations. Two of the options are specific to the Microsoft SQL Server database, which is a popular application in the Azure cloud; the other two options are application-agnostic. The four options, which can also be used in various combinations, are compared in the table and include:

  • The Azure Site Recovery (ASR) Service
  • SQL Server Failover Cluster Instances with Storage Spaces Direct
  • SQL Server Always On Availability Groups
  • Third-party Failover Clustering Software

[Table: comparison of the four DR/HA options]

RTO and RPO 101

Before describing the four options, it is necessary to have a basic understanding of the two metrics used to assess the effectiveness of DR and HA provisions: Recovery Time Objective and Recovery Point Objective. Those familiar with RTO and RPO can skip this section.

RTO is the maximum tolerable duration of an outage. Online transaction processing applications generally have the lowest RTOs, and those that are mission-critical often have an RTO of only a few seconds. RPO is the maximum period during which data loss can be tolerated. If no data loss is tolerable, then the RPO is zero.

The RTO will normally determine the type of HA and/or DR protection needed. Low recovery times usually demand robust HA provisions that protect against routine system and software failures, while longer RTOs can be satisfied with basic DR provisions designed to protect against more widespread, but far less frequent disasters.

The data replication used with HA and DR provisions can create the need for a potential tradeoff between RTO and RPO. In a low-latency LAN environment, where replication can be synchronous, the primary and secondary datasets can be updated concurrently. This enables full recoveries to occur automatically and in real-time, making it possible to satisfy the most demanding recovery time and recovery point objectives (a few seconds and zero, respectively) with no tradeoff necessary.

Across the WAN, by contrast, forcing the primary to wait for the secondary to confirm the completion of updates for every transaction would adversely impact performance. For this reason, data replication in the WAN is usually asynchronous. This can create a tradeoff between accommodating RTO and RPO that normally results in an increase in recovery times. Here’s why: to satisfy an RPO of zero, manual processes are needed to ensure all data (e.g. from a transaction log) has been fully replicated on the secondary before the failover can occur. This extra effort lengthens the recovery time, which is why such configurations are often used for DR and not HA.

Azure Site Recovery (ASR) Service

ASR is Azure’s DR-as-a-service (DRaaS) offering. ASR replicates both physical and virtual machines to other Azure sites, potentially in other regions, or from on-premises instances to the Azure cloud. The service delivers a reasonably rapid recovery from system and site outages, and also facilitates planned maintenance by eliminating downtime during rolling software upgrades.

Like all DRaaS offerings, ASR has some limitations, the most serious being the inability to automatically detect and fail over from many failures that cause application-level downtime. Of course, this is why the service is characterized as being for DR and not for HA.

With ASR, recovery times are typically 3-4 minutes, depending, of course, on how quickly administrators are able to manually detect and respond to a problem. As described above, the need for asynchronous data replication across the WAN can further increase recovery times for applications with an RPO of zero.

SQL Server Failover Cluster Instance with Storage Spaces Direct

SQL Server offers two of its own HA/DR options: Failover Cluster Instances (discussed here) and Always On Availability Groups (discussed next).

FCIs afford two advantages: The feature is available in the less expensive Standard Edition of SQL Server, and it does not depend on having shared storage like traditional HA clusters do. This latter advantage is important because shared storage is simply not available in the cloud—from Microsoft or any other cloud service provider.

A popular choice for storage in the Azure cloud is Storage Spaces Direct (S2D), which supports a wide range of applications, and its support for SQL Server protects the entire instance and not just the database. A major disadvantage of S2D is that the servers must reside within a single data center, making this option suitable for some HA needs but not for DR. For multi-site HA and DR protections, the requisite data replication will need to be provided by either log shipping or a third-party failover clustering solution.

SQL Server Always On Availability Groups

While Always On Availability Groups is SQL Server’s most capable offering for both HA and DR, it requires licensing the more expensive Enterprise Edition. This option is able to deliver a recovery time of 5-10 seconds and a recovery point of seconds or less. It also offers readable secondaries for querying the databases (with appropriate licensing), and places no restrictions on the size of the database or the number of secondary instances.

An Always On Availability Groups configuration that provides both HA and DR protections consists of a three-node arrangement with two nodes in a single Availability Set or Zone, and the third in a separate Azure Region. One notable limitation is that only the database is replicated and not the entire SQL instance, which must be protected by some other means.

In addition to being cost-prohibitive for some database applications, this approach has another disadvantage: because it is application-specific, IT departments must implement other HA and DR provisions for all other applications. The use of multiple HA/DR solutions can substantially increase complexity and costs (for licensing, training, implementation and ongoing operations), making this another reason why organizations increasingly prefer application-agnostic third-party solutions.

Third-party Failover Clustering Software

With its application-agnostic and platform-agnostic design, failover clustering software is able to provide a complete HA and DR solution for virtually all applications in private, public and hybrid cloud environments, on both Windows and Linux.

Being application-agnostic eliminates the need for having different HA/DR provisions for different applications. Being platform-agnostic makes it possible to leverage various capabilities and services in the Azure cloud, including Fault Domains, Availability Sets and Zones, Region Pairs, and Azure Site Recovery.

As complete solutions, the software includes, at a minimum, real-time data replication, continuous monitoring capable of detecting failures at the application level, and configurable policies for failover and failback. Most solutions also offer a variety of value-added capabilities that enable failover clusters to deliver recovery times below 20 seconds with minimal or no data loss to satisfy virtually all HA/DR needs.

Making It Real

All four options, whether operating separately or in concert, can have roles to play in making the continuum of DR and HA protections more effective and affordable for the full spectrum of enterprise applications, from those that can tolerate some data loss and extended periods of downtime to those that require real-time recovery to achieve five-nines of uptime with minimal or no data loss.

To survive the next cloud outage in the real world, make certain that whatever DR and/or HA provisions you choose are configured with at least two nodes spread across two sites. Also be sure to understand how well the provisions satisfy each application’s recovery time and recovery point objectives, as well as any limitations that might exist, including the need for manual processes to detect all possible failures and trigger failovers in ways that ensure both application continuity and data integrity.

About Jonathan Meltzer

Jonathan Meltzer is Director, Product Management, at SIOS Technology. He has over 20 years of experience in product management and marketing for software and SaaS products that help customers manage, transform, and optimize their human capital and IT resources.

Reproduced from RTinsights

Filed Under: News and Events Tagged With: Azure, Cloud, cloud outage, cybersecurity, microsoft azure, multi-cloud, recovery, server failover, SQL, storage

How To Set Up Low Cost SAN With Linux Software iSCSI Target

December 12, 2018 by Jason Aw

Step-By-Step Guide To Set Up Low Cost SAN With Linux Software iSCSI Target

A software iSCSI target can be a great way to set up shared storage when you don’t have enough dough to afford pricey SAN hardware. The iSCSI target acts just like a real hardware iSCSI array, except it’s just a piece of software running on a traditional server (or even a VM!). Setting up an iSCSI target is an easy and low-cost way to get the shared storage you need, whether you’re using a clustering product like Microsoft Windows Server Failover Clustering (WSFC), a cluster filesystem such as GFS or OCFS, or simply want to get the most out of your virtualization platform (be it VMware, XenServer, or Hyper-V) by enabling storage pooling and live migration.

About LIO-Target

Recently, the Linux kernel has adopted LIO-Target as the standard iSCSI target for Linux. LIO-Target is available in Linux kernels 3.1 and higher. LIO-Target supports SCSI-3 Persistent Reservations, which are required by Windows Server Failover Clustering, VMware vSphere, and other clustering products. The LUNs (disks) presented by the iSCSI target can be entire disks, partitions, or even just plain old files on the filesystem. LIO-Target supports all of these options.
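
As an example of a non-file backstore, a whole disk or partition can be presented through the iblock backstore instead of fileio. A minimal sketch, assuming a spare device at /dev/sdb (hypothetical; substitute your own disk):

/backstores/iblock> create lun1 /dev/sdb   (present the entire /dev/sdb disk as a backstore)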

Below, we’ll walk through the steps to configure LIO-Target on an Ubuntu 12.04 server. Other recent distros will probably work also, but the steps may vary slightly.

Configuration Steps

First, install the LIO-Target packages:

# apt-get install --no-install-recommends targetcli python-urwid

LIO-Target is controlled using the targetcli command-line utility.

The first step is to create the backing store for the LUN. In this example, we’ll use a file-backed LUN, which is just a normal file on the filesystem of the iSCSI target server.

# targetcli

/> cd backstores/
/backstores> ls
o- backstores …………………………………………………… […]
o- fileio …………………………………………. [0 Storage Object]
o- iblock …………………………………………. [0 Storage Object]
o- pscsi ………………………………………….. [0 Storage Object]
o- rd_dr ………………………………………….. [0 Storage Object]
o- rd_mcp …………………………………………. [0 Storage Object]

/backstores> cd fileio

/backstores/fileio> help create  (for help)

/backstores/fileio> create lun0 /root/iscsi-lun0 2g  (create 2GB file-backed LUN)
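
Before moving on, you can sanity-check from another shell that targetcli created the 2GB backing file at the path given above:

# ls -lh /root/iscsi-lun0   (should show a 2.0G file)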

Second Step

Now the LUN is created. We’re halfway to our low-cost SAN. Next we’ll set up the target so client systems can access the storage.

/backstores/fileio/lun0> cd /iscsi

/iscsi> create   (create iqn and target port group)

Created target iqn.2003-01.org.linux-iscsi.murray.x8664:sn.31fc1a672ba1.
Selected TPG Tag 1.
Successfully created TPG 1.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.murray.x8664:sn.31fc1a672ba1/tpgt1

/iscsi/iqn.20…a672ba1/tpgt1> set attribute authentication=0   (turn off chap auth)

/iscsi/iqn.20…a672ba1/tpgt1> cd luns

/iscsi/iqn.20…a1/tpgt1/luns> create /backstores/fileio/lun0   (create the target LUN)
Selected LUN 0.
Successfully created LUN 0.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.murray.x8664:sn.31fc1a672ba1/tpgt1/luns/lun0

/iscsi/iqn.20…gt1/luns/lun0> cd ../../portals

iSCSI traffic can consume a lot of bandwidth. You’ll probably want it on a dedicated (SAN) network rather than your public network.

/iscsi/iqn.20…tpgt1/portals> create 10.10.102.164  (create portal to listen for connections)
Using default IP port 3260
Successfully created network portal 10.10.102.164:3260.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.murray.x8664:sn.31fc1a672ba1/tpgt1/portals/10.10.102.164:3260

/iscsi/iqn.20….102.164:3260> cd ..

/iscsi/iqn.20…tpgt1/portals> create 10.11.102.164
Using default IP port 3260
Successfully created network portal 10.11.102.164:3260.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.murray.x8664:sn.31fc1a672ba1/tpgt1/portals/10.11.102.164:3260

/iscsi/iqn.20…102.164:3260> cd ../../acls

Final Step

The final step is to register the iSCSI initiators (client systems) with the target. To do this, you’ll need to find the initiator names of the systems. For Linux, this will usually be in /etc/iscsi/initiatorname.iscsi. For Windows, the initiator name is found on the Configuration tab of the iSCSI Initiator Properties panel.
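
On a Linux initiator, for instance, you can read the IQN straight from that file (the value shown is the example initiator registered below; yours will differ):

# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:f5b312caf756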

/iscsi/iqn.20…a1/tpgt1/acls> create iqn.1994-05.com.redhat:f5b312caf756   (register initiator — this IQN is the IQN of the initiator — do this for each initiator that will access the target)
Successfully created Node ACL for iqn.1994-05.com.redhat:f5b312caf756
Created mapped LUN 0.
Entering new node /iscsi/iqn.2003-01.org.linux-iscsi.murray.x8664:sn.31fc1a672ba1/tpgt1/acls/iqn.1994-05.com.redhat:f5b312caf756

/iscsi/iqn.20….102.164:3260> cd /

Now, remember to save the configuration. Without this step, the configuration will not be persistent.
/> saveconfig  (SAVE the configuration!)

/> exit


You’ll now need to connect your initiators to the target. Generally you’ll need to provide the IP address of the target to connect to it. After the connection is made, the client systems will see a new disk. The disk will need to be formatted before use.
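
For a Linux client running the Open-iSCSI initiator, connecting typically amounts to a discovery followed by a login; a minimal sketch against the first portal created above (the device name the new disk appears as, e.g. /dev/sdc, is an assumption; check dmesg on your system):

# iscsiadm -m discovery -t sendtargets -p 10.10.102.164   (discover targets on the portal)
# iscsiadm -m node --login                                (log in to the discovered target)
# mkfs.ext4 /dev/sdc                                      (format the new disk before use)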

And that’s it! You’re ready to use your new SAN. Have fun!

Having problems setting up a low-cost SAN with a Linux software iSCSI target? Read our other helpful articles.
Reproduced with permission from Linuxclustering

Filed Under: Clustering Simplified Tagged With: iscsi, set up low cost san with linux software iscsi target, storage

Platforms to replicate data (Host-Based Replication vs SAN Replication)

December 10, 2018 by Jason Aw

Choosing Platforms To Replicate Data – Host-Based Or Storage-Based?

Two common points from which to replicate data are the server host that operates against the data and the storage array that holds the data.

When creating remote replicas for business continuity, the decision whether to deploy a host- or storage-based solution depends heavily on the platform that is being replicated and the business requirements for the applications that are in use. If the business demands zero impact to operations in the event of a site disaster, then host-based techniques provide the only feasible solution.

Host-Based Replication

The first approach, host-based replication, doesn’t lock users into a particular storage array from any one vendor. SIOS SteelEye DataKeeper, for example, can replicate from any array to any array, regardless of vendor. This ability ultimately lowers costs and gives users the flexibility to choose what is right for their environment. Most host-based replication solutions can also replicate data natively over IP networks, so users don’t need to buy expensive hardware to achieve this functionality.

Host-based solutions are storage-agnostic, giving IT managers complete freedom to choose any storage that matches the needs of the enterprise. The replication software functions with any storage hardware that can be mounted to the application platform, offering heterogeneous storage support. Because it can operate at the block or volume level, it is also ideally suited for cluster configurations.

One disadvantage is that host-based solutions consume server resources and can affect overall server performance. Despite this possibility, a host-based solution might still be appropriate when IT managers need a multi-vendor storage infrastructure or have a legacy investment or internal expertise in a specific host-based application.

Storage-Based Replication

The other approach, storage-based replication, is OS-independent and adds no processing overhead. However, vendors often demand that users replicate from and to similar arrays. This requirement can be costly, especially when you use a high-performance disk at your primary site and now must use the same at your secondary site. Also, storage-based solutions natively replicate over Fibre Channel and often require extra hardware to send data over IP networks, further increasing costs.

A storage-based alternative does provide the benefit of an integrated solution from a dedicated storage vendor. These solutions leverage the controller of the storage array as an operating platform for replication functionality. The tight integration of hardware and software gives the storage vendor unprecedented control over the replication configuration and allows for service-level guarantees that are difficult to match with alternative replication approaches. Most storage vendors have also tailored their products to complement server virtualization and use key features such as virtual machine storage failover. Some enterprises might also have a long-standing business relationship with a particular storage vendor; in such cases, a storage solution might be a relevant fit.

Choices

High quality of service comes at a cost, however. Storage-based replication invariably sets a precondition of like-to-like storage device configuration. This means that two similarly configured high-end storage arrays must be deployed to support replication functionality, increasing costs and tying the organization to one vendor’s storage solution.

This lock-in to a specific storage vendor can be a drawback. Some storage vendors have compatibility restrictions within their storage-array product line, potentially making technology upgrades and data migration expensive. When investigating storage alternatives, IT managers should pay attention to the total cost of ownership: the cost of future license fees and support contracts will affect expenses in the longer term.

Cost is a key consideration, but it is affected by several factors beyond the cost of the licenses. Does the solution require dedicated hardware, or can it be used with pre-existing hardware? Will the solution require network infrastructure expansion, and if so, how much? If you are using replication to place secondary copies of data on separate servers, storage, or sites, realize that this approach implies certain hardware redundancies. Replication products that provide options to redeploy existing infrastructure to meet redundant hardware requirements demand less capital outlay.

Pros And Cons

Before deciding between a host- or storage-based replication solution, carefully consider the pros and cons of each, as summarized below.

Host-Based Replication

Pros:
  • Storage agnostic
  • Sync and async
  • Data can reside on any storage
  • Unaffected by storage upgrades

Cons:
  • Use of computing resources on host

Best Fit:
  • Multi-vendor storage environment
  • Need option of sync or async
  • Implementing failover cluster
  • Replicating to multiple targets

Storage-Based Replication

Pros:
  • Single vendor for storage and replication
  • No burden on host system
  • OS agnostic

Cons:
  • Vendor lock-in
  • Higher cost
  • Data must reside on array
  • Distance limitations of Fibre Channel

Best Fit:
  • Prefer single vendor
  • Limited distance and controlled environment
  • Replicating to single target

To understand how SIOS solutions can help you replicate data on either platform, do read our success stories.

Reproduced with permission from Linuxclustering

Filed Under: Clustering Simplified Tagged With: data replication, platforms to replicate data, storage

Microsoft Wants Your Input On The Next Version Of Windows Server

March 13, 2018 by Jason Aw

Windows Server has a new UserVoice page: http://windowsserver.uservoice.com/forums/295047-general-feedback with subsections:

  • Clustering: http://windowsserver.uservoice.com/forums/295074-clustering
  • Storage: http://windowsserver.uservoice.com/forums/295056-storage
  • Virtualization: http://windowsserver.uservoice.com/forums/295050-virtualization
  • Networking: http://windowsserver.uservoice.com/forums/295059-networking
  • Nano Server: http://windowsserver.uservoice.com/forums/295068-nano-server
  • Linux Support: http://windowsserver.uservoice.com/forums/295062-linux-support

This is where YOU get to provide Microsoft with your feedback directly.

Reproduced with permission from https://clusteringformeremortals.com/2015/05/12/microsoft-wants-your-input-on-the-next-version-of-windows-server/

Filed Under: Clustering Simplified Tagged With: Clustering, Linux Support, Microsoft, Nano Server, Networking, storage, UserVoice, Virtualization, Windows Server
