SIOS SANless clusters



Disaster Recovery Planning in an Unpredictable World

April 4, 2026 by Jason Aw

Computer systems and computerized infrastructure have become a load-bearing part of the modern business environment. As such, the potential for downtime is not just annoying – it is costly. Though the world is unpredictable, effective disaster recovery planning can ensure that an unexpected issue does not become an extended outage. This is the role of a High Availability and Disaster Recovery solution.

Understanding High Availability and Disaster Recovery

High Availability and Disaster Recovery is a multi-faceted endeavor of mutually supportive efforts. Though the two concepts work in tandem to uplift one another, it is important to understand the boundaries between them.

What is High Availability?

High Availability refers to the capacity of a system, application, or other infrastructure component to readily continue operation. This encompasses the ability of an infrastructure component to be restarted, migrated, or otherwise recovered with minimal loss or regression in the operational state.

This is to say, the infrastructure is able to continue serving its designated role with access to up-to-date information. Additionally, highly available infrastructure may allow multiple components to act in a primary role simultaneously to provide availability.

What is Disaster Recovery?

Disaster recovery refers to the capacity of a system, application, or infrastructure component to withstand a catastrophic failure. Often, disaster recovery is concerned with the catastrophic and irrecoverable loss of some infrastructure component.

A simple example of a disaster recovery solution can be seen any time a data backup is taken and stored off-site. Doing this to protect the data against building-wide disasters that would make the original storage media unrecoverable meets the criteria of a disaster recovery solution, though via an implementation that leaves room for improvement.

How High Availability and Disaster Recovery Work Together

When combining High Availability and Disaster Recovery, each can work to aid the other's stated goals. A High Availability solution ensures systems can resume their operative role in a timely manner, and the infrastructure that resumes that role is frequently part of the disaster recovery solution.

When planned accordingly, the ability to migrate workloads to a healthy infrastructure can enable a disaster recovery solution to operate quickly and effectively, minimizing downtime. These two elements work hand in hand to produce environments that prioritize resilience and uptime equally.

The Real Cost of Downtime

Every computer system, infrastructure component, or other element of a production environment is susceptible to failure. When failure occurs, it is easy to measure the opportunity cost of lost revenue, reduced productivity, or remediation of the issues from which the downtime originated. These direct costs alone are substantial: in a 2024 study by International Technology Intelligence Consulting, 91% of medium to large-sized companies estimated the cost of downtime at $300,000 or more per hour.

Often not considered, though, is the “soft cost” of downtime. Outages can erode customer confidence, blemish the reputation of an organization, and apply additional pressure to the personnel responsible for the environment. Though downtime does pose a very real and very immediate cost to business, the ripples of such an occurrence may send shockwaves through a business for months or years to come.

Make Resilience a Design Requirement

Infrastructure reaches the peaks of High Availability and the highest capacity for disaster recovery when it is designed with the intention of being a highly available environment that has a strong disaster recovery plan.

The first stage of honoring HA/DR as a design requirement entails setting realistic expectations. Often, these expectations can be summarized via the “Recovery Point Objective” (RPO) and “Recovery Time Objective” (RTO).

To briefly describe these metrics:

  • Recovery Point Objective describes how much data an organization can stand to lose when restoring from a backup.
  • Recovery Time Objective describes the desired amount of time before an unavailable environment returns to operation.

Defining these metrics naturally sidesteps a common pitfall: treating every system as equally critical. As systems are prioritized by their HA/DR needs, systems that are more tolerant of downtime can make use of simpler implementations. Systems that require extremely low RTO and RPO, in turn, can be allocated more effort to ensure that the solutions protecting them meet the higher operational standard.
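As a rough illustration of how these objectives can be applied in practice (function and field names here are hypothetical, not part of any SIOS product), an incident can be compared against RPO/RTO targets:

```python
from datetime import datetime, timedelta

def meets_objectives(last_backup, failure_time, recovery_time,
                     rpo: timedelta, rto: timedelta) -> dict:
    """Compare a single incident against RPO/RTO targets.

    RPO: data written after the last backup is lost, so the exposure
    window is failure_time - last_backup.
    RTO: downtime is recovery_time - failure_time.
    """
    data_loss_window = failure_time - last_backup
    downtime = recovery_time - failure_time
    return {
        "rpo_met": data_loss_window <= rpo,
        "rto_met": downtime <= rto,
        "data_loss_window": data_loss_window,
        "downtime": downtime,
    }

# Example: hourly backups (RPO 1h) and a 15-minute recovery target (RTO 15m)
result = meets_objectives(
    last_backup=datetime(2026, 4, 4, 9, 0),
    failure_time=datetime(2026, 4, 4, 9, 40),
    recovery_time=datetime(2026, 4, 4, 9, 50),
    rpo=timedelta(hours=1),
    rto=timedelta(minutes=15),
)
print(result["rpo_met"], result["rto_met"])  # True True
```

Here a failure 40 minutes after the last backup stays within the one-hour RPO, and a 10-minute recovery stays within the 15-minute RTO.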

Use Automation to Reduce Risk in Disaster Recovery Planning

When addressing the strategies for High Availability and Disaster Recovery, the topic is often business-critical systems. These systems often require speedy issue resolution performed in a reliable manner so that an issue does not spiral out of control. Though the personnel responsible for these systems are experts in the nuances of the environment, the potential of human error during issue resolution is an avoidable risk factor.

A robust High Availability and Disaster Recovery solution can incorporate automated failure detection along with automated recovery actions. Not only is the response faster when an issue is automatically detected and a recovery plan executed in kind, but an automated response also acts methodically and consistently, removing the opportunity for human error.
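As a minimal sketch of this detect-then-act policy (the `check` and `recover` callables are placeholders for real health probes and failover scripts, not SIOS APIs), an automated monitoring loop might look like:

```python
import time

def monitor(check, recover, max_failures=3, interval=5.0, cycles=None):
    """Poll a health check; run the recovery action after repeated failures.

    `check` returns True when the protected service is healthy;
    `recover` performs the scripted failover/restart. Both are supplied
    by the caller -- this loop only encodes the policy itself.
    `cycles` bounds the loop for demonstration; None runs forever.
    """
    failures = 0
    ran = 0
    while cycles is None or ran < cycles:
        if check():
            failures = 0                # healthy: reset the failure streak
        else:
            failures += 1
            if failures >= max_failures:
                recover()               # methodical, scripted response
                failures = 0
        ran += 1
        if cycles is None:
            time.sleep(interval)

# Simulated run: the service fails every poll, so after every third
# consecutive miss the recovery action fires.
events = []
monitor(check=lambda: False, recover=lambda: events.append("failover"),
        max_failures=3, cycles=6)
print(events)  # ['failover', 'failover']
```

Requiring several consecutive misses before acting is one simple way to avoid failing over on a transient blip.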

Build Redundancy Beyond Technology

Though it is important to design with HA/DR in mind and ensure that solutions can provide automated responses, there is still a human element to designing, creating, and maintaining critical systems. The key to leveraging personnel in these solutions is to allow teams to work in a low-stress environment that allows for careful and methodical problem-solving approaches. When a person is involved in any work, the outcomes should undergo a validation process to ensure that the solution functions as intended.

Even further than the conditions in which work is done, it is also important to ensure that personnel have access to the knowledge that they need to work effectively. If only one person on a team is capable of a particular maintenance task, then there is potential for a gap in operations should they become unavailable.

Planning for operational continuity extends beyond on-system considerations. Ensuring that teams operate to reduce knowledge silos and can put their outcomes to the test before moving into production can protect systems by avoiding issues entirely.

Disaster Recovery Planning Best Practices for Resilient Systems

While there is no one-size-fits-all approach to implementing High Availability and Disaster Recovery solutions, there are guidelines and best practices that can help build out a disaster recovery planning strategy that suits your organization. The aforementioned points serve as a great foundation. Additionally, improvements can be found via some generally applicable goals:

  • Find and eliminate single points of failure
  • Document processes with clear roles and responsibilities
  • Maintain an identical QA copy of the production environment to validate procedures
  • Distribute systems across geographically distinct regions
  • Frequently review and update documentation

Preparing for the Next Disruption with Disaster Recovery Planning

Disruptions are inevitable, and no organization wants to experience an outage from a failure that could have been predicted and avoided. Taking an approach of intentional planning and implementing a layered solution to provide environments with High Availability and Disaster Recovery ensures that, whether predictable or not, an environment is prepared to weather issues and continue operating at full capacity, so business can operate without a hiccup.

Request a demo to see how SIOS high availability and disaster recovery solutions help protect critical systems and keep your business running.

Author: Philip Merry, SIOS Technology Corp.

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: disaster recovery

Active-Active vs. Active-Passive

March 30, 2026 by Jason Aw

High Availability Architecture Guide

Active-Active and Active-Passive are two different architectural configurations for server nodes in a high availability cluster. Active-Active architecture refers to both servers being powered up and processing data. Active-Passive is quite different in that only one server is actively working to process data, and the secondary server is waiting in an inactive state to take over control if the Active server has a failure.

High Availability Systems and Core Components

High availability is all about eliminating single points of failure, meaning that, should a problem occur with a particular node, another node is available to take on the work.

Key components of a highly available system:

  • a primary processing core node with memory and power
  • a standby processing core node with memory and power
  • communication links between the two core components
  • storage at the local level or shared between core components

Active-Active Architecture

In an Active-Active architecture, two identical servers are run at the same time, both active, each capable of processing transactions. Transactions can be handled by either server.

Benefits of Active-Active Architecture

Both servers are on all the time, unlike configurations in which some nodes sit unused during normal operation. The potential benefits are as follows:

  • Scalability, especially on cloud platforms, where capacity can grow to absorb peak usage
  • Workloads can be balanced across servers so neither is overloaded
  • Increased overall throughput for the same amount of hardware

Scalability

On a cloud platform, Active-Active architecture is very scalable. For example, AWS AutoScale can be used to add more EC2 instances on demand to allow the cluster to grow to handle data peaks.
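The scaling decision itself can be sketched independently of any cloud API. The function below is a simplified, hypothetical target-tracking policy in the spirit of AWS Auto Scaling, not the actual AutoScale algorithm:

```python
def desired_capacity(current_nodes, avg_cpu, target_cpu=60.0,
                     min_nodes=2, max_nodes=10):
    """Target-tracking scale decision (illustrative only).

    Scales the node count proportionally so the average CPU utilization
    moves toward the target, clamped to the configured bounds.
    """
    if avg_cpu <= 0:
        return min_nodes
    proposed = round(current_nodes * (avg_cpu / target_cpu))
    return max(min_nodes, min(max_nodes, proposed))

print(desired_capacity(4, avg_cpu=90))   # 6  -> scale out under load
print(desired_capacity(4, avg_cpu=30))   # 2  -> scale in when idle
```

Keeping a floor of two nodes preserves the high availability property even when the cluster is otherwise idle.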

Load balancing

A load balancer can be provisioned upstream of the nodes to send each transaction to the lighter-loaded server, keeping traffic balanced across the cluster and throughput of work items high.
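As a simplified illustration of least-loaded routing (a sketch, not a production balancer), a heap keyed on per-node load makes picking the lighter-loaded server cheap:

```python
import heapq

class LeastLoadedBalancer:
    """Route each transaction to the node with the lowest recorded load."""

    def __init__(self, nodes):
        # Heap of (requests routed so far, node name). A real balancer
        # would also decrement the count when a request completes.
        self.heap = [(0, n) for n in sorted(nodes)]
        heapq.heapify(self.heap)

    def route(self):
        load, node = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + 1, node))
        return node

lb = LeastLoadedBalancer(["node-a", "node-b"])
print([lb.route() for _ in range(4)])  # ['node-a', 'node-b', 'node-a', 'node-b']
```

With equal starting loads this degenerates to round-robin; once loads diverge, new work naturally flows to the less busy node.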

Active-Active Use Cases

Heavy data volumes, transactional processing, and multi-node hosted applications are the best fit for Active-Active configurations. Here are some examples:

  • Multi-node, globally distributed database systems
  • Mathematical data processing for real-time applications
  • Big data and data warehousing
  • High-traffic website hosting
  • Telecom networking and SMS

Active-Passive Architecture

In an Active-Passive architecture, a clustered environment employs two servers. One will be designated to be in active mode, performing processing. The other server will be in standby mode, not performing any data processing, but ready to take over should there be a failover from the active node or a user-issued switchover from the active node.

Benefits of Active-Passive Architecture

As only one server is active at a time, the other effectively enjoys downtime: it is powered up and keeps up with any data copying from the active unit, ready to take over control if needed, but performs no active processing. The potential benefits are as follows:

  • Reduced power needs for the cluster
  • Increased hardware longevity – components last longer when they operate under less strain and aren’t consistently pushed to their limits
  • Reduced cooling needs, and lower power bills as a result
  • A simplified resource view – the resources will always be active on the active node
  • No load balancer is needed

Cost-Effectiveness of Active-Active vs. Active-Passive

Because only half of the cluster's processing power performs real work, an Active-Passive configuration carries a higher hardware cost per unit of processing, making it slightly less cost-effective than an Active-Active configuration.

Simplified Management

The resources will be active on the active node – there is no guessing which node is currently actively hosting a particular resource.

Active-Passive Use Cases

Important systems that must stay up with low data loss, such as:

  • financial processing systems
  • backend retail systems
  • disaster recovery solutions
  • relational databases
  • cost-reduced high availability for small to mid-sized companies
  • legacy systems that require a simple hosting solution

Active-Active vs. Active-Passive in Disaster Recovery Solutions

Role of Active-Active vs. Active-Passive

Active-Active Disaster Recovery (DR) systems are implemented on geographically dispersed nodes, both handling production traffic. If one goes down, the workload is funneled to the system that is still up. Downtime and user disruption are virtually undetectable, although workload processing may drop to lower levels than normal with one system down.

An Active-Passive Disaster Recovery (DR) system implements a disaster recovery solution whereby the standby system will take over if the primary system fails. A small amount of downtime on the transition of activity will occur in the event of failure of the active node, but workload levels should be indistinguishable when the standby node takes over from the old active system.

Integration with Redundant Systems

Implementing Disaster Recovery using redundant systems is a strategy of providing the capability to switch activity to a synchronized backup system, where data is at the same level as on the old active system, and the new active system comes online within a short time period. Redundancy planning should also address hardware redundancy, communication path redundancy, and software redundancy (via high availability) when choosing to implement a redundant system.

Choosing Between Active-Active vs. Active-Passive Architecture for Your Business

Factors to Consider

Selecting the right architecture for your business depends on factors such as:

  • Cost, including ongoing cloud costs if using cloud-hosted nodes
  • Whether the system is mission-critical, handles high transactional data volumes, or both
  • User tolerance of occasional small amounts of downtime, and performance requirements – e.g., FCC penalties for non-compliance in uptime
  • Geographical dispersion of nodes and storage for lower latency, and the ability to add nodes on demand to accommodate peaks

Performance and Uptime Requirements

Performance and uptime obligations for the business should be established prior to deciding on an architecture.

For businesses providing services with a three-nines (99.9%) uptime obligation, which allows roughly 8.8 hours of downtime per year, it’s certainly possible to meet the target with Active-Passive, if failovers are swift and the system is well monitored and maintained. Four nines (99.99%) uptime, allowing only about 53 minutes per year, is mostly in the domain of Active-Active systems.
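The arithmetic behind these "nines" is straightforward; the short helper below (illustrative only, assuming a 365-day year) converts an availability target into allowed annual downtime:

```python
def annual_downtime(availability: float) -> float:
    """Allowed downtime, in minutes per 365-day year, for a given availability."""
    return (1.0 - availability) * 365 * 24 * 60

three_nines = annual_downtime(0.999)    # ~525.6 minutes ≈ 8.76 hours
four_nines = annual_downtime(0.9999)    # ~52.6 minutes
print(round(three_nines / 60, 2), round(four_nines, 1))  # 8.76 52.6
```

The order-of-magnitude gap between the two budgets is why each additional nine typically demands a step change in architecture rather than incremental tuning.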

Levels of transactional processing should also be considered. If large continuous data transaction rates are expected, an Active-Active configuration may be a better fit.

Active-Active vs. Active-Passive: Which Architecture Is Right for Your Business?

Both active-active and active-passive systems have their place. As an organization, you may like to host critical systems that can’t go down on active-active architecture systems. For the other systems that can tolerate occasional downtime, active-passive may be the correct choice. A blend of technologies may be right to cover all systems. Companies have great options to suit their needs: larger, spread-out businesses can benefit from the flexibility of a cloud-hosted active-active system, while smaller companies can enjoy the simplicity and cost savings of an active-passive setup. There’s a solution for everyone.

If you’re evaluating Active-Active vs. Active-Passive for your high availability strategy, request a demo to see how SIOS can help you design the right architecture for your business.

Author: Paul Scrutton, Software System Engineer at SIOS

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: High Availability

Broadcom/VMware: Time To Decouple High Availability From Your Hypervisor

March 24, 2026 by Jason Aw

If you are an IT Architect, Admin, or Site Reliability Engineer (SRE) managing critical workloads on VMware, your 2026 likely began with a singular headache: The Renewal. Since the Broadcom acquisition, the “Broadcom Tax” has become a well-known cost. Between the elimination of perpetual licenses, mandatory shifts to massive subscription bundles, and aggressive 72-core minimums, “standardizing on VMware” has become an exercise in forced over-provisioning.

But there is a risk greater than the price hike: the cost of application downtime.

The “VM Restart” Fallacy: Why VMware HA Isn’t True High Availability

For years, the industry has mistaken “VMware HA” for true High Availability. If a host fails, VMware restarts the VM on another server. While this is a fast reboot, it is not High Availability.

VMware HA only monitors the physical server’s “heartbeat” to determine whether the host is operational or not. It is blind to the world inside the VM. It cannot detect a database that is hung, application services that are deadlocked, or storage that is unavailable.

Today’s mission-critical ecosystems—SAP HANA, SQL Server, Oracle, and AI-driven GPU systems—require more than a “power cycle” approach. They require application-level protection.
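To make the distinction concrete, the sketch below contrasts a heartbeat-style check with an application-aware one. The dictionary fields are hypothetical stand-ins for real probes (ping, service status, a test query), not any vendor's API:

```python
def host_heartbeat(host: dict) -> bool:
    """Hypervisor-style check: is the machine powered on and reachable?"""
    return host["powered_on"] and host["network_up"]

def application_health(host: dict) -> bool:
    """Application-aware check: the host heartbeat plus service-level
    probes, such as a test query against the database and a storage
    reachability check."""
    return (host_heartbeat(host)
            and host["db_answers_queries"]
            and host["storage_mounted"])

# A hung database: the host heartbeat looks fine, so a hypervisor-level
# monitor takes no action, while an application-aware monitor detects
# the failure and can trigger recovery.
host = {"powered_on": True, "network_up": True,
        "db_answers_queries": False, "storage_mounted": True}
print(host_heartbeat(host), application_health(host))  # True False
```

The failure mode in the example, a healthy host running an unresponsive application, is exactly the gap that heartbeat-only monitoring cannot see.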

SIOS LifeKeeper: True HA via Application-Aware Intelligence

SIOS LifeKeeper provides visibility across your application environment: network, storage, OS, and database layers. It ensures rapid, Application-Aware Failover in compliance with application-specific best practices to deliver reliable uptime, not just a fast reboot.

While Broadcom’s licensing model effectively taxes your growth and tethers you to their ecosystem, SIOS offers true architectural freedom. Our platform-agnostic licensing allows you to migrate workloads to AWS, Azure, or alternative hypervisors without losing your HA protection. With SIOS, you aren’t just buying software; you’re securing an exit strategy from vendor lock-in.

Slashing TCO After VMware Pricing Changes: Protect the App, Not the Hypervisor

Broadcom not only requires you to buy subscription licenses, but it often requires you to upgrade your entire VMware stack or purchase bloated subscription tiers just to access the HA features needed for a single Tier-1 application.

Why upgrade your entire infrastructure license to protect one SQL Server or SAP instance? SIOS provides enterprise-class HA that lives with your application, regardless of which VMware “bundle” Broadcom mandates. SIOS also gives you the flexibility to purchase subscription or perpetual licenses.

Eliminate the Cost and Complexity of SANs and vSAN Dependencies

Many new VMware bundles push customers toward vSAN. In environments where every millisecond counts, SIOS DataKeeper allows you to build clusters using local, high-performance NVMe storage. You get the protection of a cluster without the proprietary complexity or the “storage tax” of a virtual SAN.

SIOS delivers the capabilities—such as advanced data replication—that VMware typically gates behind its most expensive tiers. By decoupling HA from the hypervisor, you can maintain world-class uptime on more economical VMware licenses, potentially saving six or seven figures on your next renewal.

VMware HA vs. SIOS LifeKeeper and DataKeeper

Feature by feature, VMware HA (vSphere Foundation) vs. SIOS LifeKeeper & DataKeeper:

  • Failover Trigger: VMware HA responds to host/hardware failure only; SIOS responds to application, OS, storage, or network failure.
  • App Intelligence: VMware HA has none – it’s a “black box” restart; SIOS provides Recovery Kits for SAP, SQL, Oracle, and more.
  • Cloud Flexibility: VMware HA requires specific VMware Cloud stacks; SIOS is native in AWS, Azure, GCP, or hybrid environments.
  • Storage Model: VMware HA depends on vSAN or shared storage; SIOS builds SANless clusters via local NVMe/SSD.
  • Licensing: VMware HA is complex, core-based, and bundle-heavy; SIOS is predictable, portable, and application-focused, with your choice of perpetual or subscription.

Reclaim Your Infrastructure Freedom with Application-Level High Availability

SIOS gives you the flexibility to maintain high availability on your own terms while you evaluate your long-term relationship with Broadcom.

By choosing SIOS, you gain the freedom to move workloads between VMware, Nutanix, or the Public Cloud without rewriting scripts or retraining your team. You get uptime determined by the health of the application environment, not just the server’s power light.

If your upcoming renewal feels like a dead end, it’s time to move your High Availability out of the hypervisor and into the application layer.

Request a demo today to see how SIOS delivers application-level high availability across VMware, cloud, and hybrid environments.

Author: Margaret Hoagland, VP Global Sales and Marketing at SIOS

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: Application availability

Keeping Buildings Safe: High Availability in Maintenance and Security Systems

March 13, 2026 by Jason Aw

In this episode of TFiR: Let’s Talk, host Swapnil Bhartiya speaks with Dave Bermingham, Director of Customer Success at SIOS Technology, about why high availability and resiliency are critical for building maintenance and security systems. Bermingham explains how these systems differ from, but often interact with, other building technologies, and why uninterrupted operation is essential to occupant safety and building functionality. The conversation explores how organizations can balance security with accessibility, the role of emerging technologies such as AI, machine learning, and IoT in improving reliability, and best practices for ensuring system availability through redundancy, monitoring, and risk planning.

Author: Beth Winkowski, SIOS Technology Corp. Public Relations

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: High Availability

Designing High Availability Through Modularity and Abstraction

March 6, 2026 by Jason Aw

Thus far, this series has explored parallels between technical design and rhetoric. The “rhetoric” of a technical solution, the strategy of communicating meaning and purpose, is presented via the design patterns and concepts. The design patterns and concepts exist as a conceptual foundation, upon which the meaning is translated into an applied form when put into practice during implementation.

As previously discussed, the continuity and integrity of this conceptual foundation are paramount to ensuring that solutions are kept to a standard conducive to maintenance, improvement, and long-term reliability. External factors influencing a solution’s design challenge the goal of upholding its conceptual foundations. These factors can conflict with the standing principles, and thus the tools, applications, and platforms used in a solution must be chosen mindfully.

In the third and final part of this blog series, modularity and abstraction will be explored as a means to put boundaries in place and ensure that projects with a wide scope can continue to reap the benefits of a well-formed, rhetorically sound design.

High Availability Design Principles: Why Modularity and Abstraction Matter

Before addressing modularization and abstraction as strategies, it is important to understand why these should be implemented. Starting broadly with an analogy, a speaker trying to convince their audience to agree with their plan might first need to outline multiple foundational points. In doing so, each pillar of their argument’s foundation gets put forth and justified.

The speaker first must set up the “A implies B” and “C implies D” basis, upon which they can form the argument “B and D imply E”. This strategy ensures that the reasoning in which “A implies B” does not cross-contaminate and detract from the separate point “C implies D”. This strategy is frequently used because it allows each component of the speaker’s argument to stand independently of others. If the argument “C implies D” is flawed, it can be reconciled while the argument “A implies B” remains sound.

The reason for this structure is the same reason why technical systems are decentralized – a problem in a point of sale system can be remediated without the need to expand the remediation efforts to the databases, APIs, network architecture, and so on. The strategies referenced above are, of course, in reference to the concepts of modularity and abstraction.

Modularity in High Availability Architectures

First, addressing modularity, this is the practice of creating systems from components that are self-contained. In the rhetorical sense, the arguments “A implies B” and “C implies D” are simply modules of reasoning that get assembled into the argument as a whole.

More technically, modularized components (such as the point of sale system in the previous example) allow issues to be addressed entirely within the module where the issue originates. Each module in the solution acts as a building block, and a problem in a single building block can be resolved without dismantling the entire solution.

Abstraction as a Strategy for Scalable Infrastructure Design

Closely related to modularity is “abstraction”. Abstraction is the practice of ensuring the design of the overall solution is independent and agnostic to the design of the modules that compose the overall solution.

Further, abstraction as a design strategy also holds that each module is independent and agnostic to the design of every other module. When a solution is designed from abstracted elements, those elements can be reused across use cases, and understanding one element amplifies understanding throughout the project.
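As a minimal sketch of this principle (the class and method names are illustrative only, not SIOS APIs), an abstract interface lets the overall solution stay agnostic to each module's internals:

```python
from abc import ABC, abstractmethod

class ReplicationBackend(ABC):
    """Abstract boundary: cluster logic depends only on this interface,
    never on how a particular module replicates data."""

    @abstractmethod
    def replicate(self, block: bytes) -> bool: ...

class LocalDiskMirror(ReplicationBackend):
    """One self-contained module behind the boundary."""

    def __init__(self):
        self.mirror = []

    def replicate(self, block: bytes) -> bool:
        self.mirror.append(block)   # stand-in for a block-level write
        return True

def failover_safe_write(backend: ReplicationBackend, block: bytes) -> bool:
    # The caller never inspects the backend's internals, so swapping in
    # a SAN-based or cloud-based module requires no change here.
    return backend.replicate(block)

backend = LocalDiskMirror()
print(failover_safe_write(backend, b"journal-entry"))  # True
```

A flaw inside `LocalDiskMirror` can be fixed, or the whole module replaced, without touching `failover_safe_write` – the technical analogue of repairing "C implies D" while "A implies B" stands.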

Designing High Availability That “Stays Out of the Way”

When designs are built of modular components, boundaries are drawn. These boundaries ensure that each module can “stay out of the way” of the other modules. When the components are abstracted, the contents of each module can be understood more easily.

In turn, the boundaries serve as a structure by which the design can be understood, and the abstraction within these boundaries serves as an entry point to understand the foundations of the use case. The structure provided via modularity and abstraction mirrors the role of rhetoric in providing a framework by which purpose is understood.

Managing Complex Network Architectures with Modular HA Solutions

As technical solutions are being developed to address more complex problems, the need for a solid framework in that solution’s design grows as well. Network architecture, often the culmination of many solutions that are complex in their own right, serves as a fantastic example of the increasingly complex problem and growing requirement for solid frameworks in design. Furthermore, network architecture often suffers from continual growth as it has to absorb the sprawling web of systems that contribute to the purpose of a growing business.

Layered on top of this, the solution architecture must then employ solutions for High Availability and/or Disaster Recovery. This creates a hot spot for design conflicts to arise, but can be easily mitigated with the strategies of modularization and abstraction.

Applying Modularity and Abstraction with SIOS High Availability Software

The benefits of High Availability software can be achieved without the baggage of complexity and hacked solutions. SIOS LifeKeeper, as an example of a design-compliant High Availability tool, is created in a way that the principles of its operation can mesh seamlessly with the environment in which it is used.

LifeKeeper is modular, as it does not impose requirements outside of the LifeKeeper-protected systems. LifeKeeper also facilitates the abstraction of infrastructure components to bite-sized elements – systems that work together to ensure availability are grouped into a “cluster”.

Through this abstraction, the rhetoric of the environment remains strong – understanding the makeup of one cluster lays the foundation to understand all clusters. Layers of the design can be understood for their purpose; there is no need for asterisks and special considerations on how implementations differ across the design. As the clusters act independently of other clusters or external solution components, a boundary can be drawn where the design elements of each respective layer are contained, avoiding conflict with other layers of the infrastructure.

Building Long-Term Resilient Infrastructure with SIOS Protection Suite

As with any software or tool, SIOS Protection Suite (SIOS LifeKeeper and/or SIOS DataKeeper) influences the design of the environments in which it is used. Though these patterns are brought in by virtue of having a LifeKeeper- and DataKeeper-protected environment, the patterns in use were carefully selected to enable abstraction and modularity within the solution as a whole. As a result of the layered abstraction enabled by both LifeKeeper and DataKeeper, introducing these utilities facilitates integration with the IT infrastructure while maintaining cohesion in the solution’s design.

As a result of the design patterns employed, clusters protected by SIOS Protection Suite (LifeKeeper and/or DataKeeper) compose an abstract and modular element that fits seamlessly into existing designs and solutions. LifeKeeper and DataKeeper do more than simplify the administration of single systems or each respective cluster; LifeKeeper and DataKeeper work with the principles at play in a deployment.

Creating infrastructure becomes simplified and more efficient as the use of SIOS Protection Suite allows for a simple method of understanding the system’s role in the design, while at the same time providing a simple method for implementing High Availability and Disaster Recovery. Administrators may use LifeKeeper and DataKeeper as a tool to improve their ability to understand, operate, and improve upon the solution for years to come.

See how high availability can support your infrastructure’s design—without adding complexity. Request a demo today!

Author: Philip Merry, CX Software Engineer at SIOS

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: disaster recovery, High Availability

