SIOS SANless clusters


High Availability Health-Check Services, Optimization, and Training

August 27, 2025 by Jason Aw

Customers regularly engage SIOS for consultancy services such as high availability (HA) health check services and training. This helps customers keep their IT infrastructure in good working order and keeps their staff trained to operate it.

High Availability Health-Check Services

Health check services are a Professional Services offering from SIOS that checks the customer’s SIOS server infrastructure and produces a report for the customer. A detailed review of the SIOS HA LifeKeeper environment is performed, as well as a thorough examination of the product logs and customer run logs. During this review, version levels, communication paths, quorum, Application Recovery Kits, and tuning parameters are examined and compared against recommended settings. A report is generated with any potential risks outlined and actionable suggestions for improvements.
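In code terms, the comparison step amounts to diffing observed settings against a recommended baseline and collecting actionable findings. A minimal sketch (the parameter names, recommended values, and finding texts below are illustrative, not the actual SIOS checklist):

```python
# Sketch: compare observed cluster settings against recommended values
# and collect actionable findings. Parameter names and recommendations
# are illustrative, not the actual SIOS Professional Services checklist.

RECOMMENDED = {
    "comm_paths": 2,            # at least two independent communication paths
    "quorum_mode": "majority",
    "heartbeat_interval_s": 5,
}

def health_check(observed: dict) -> list[str]:
    findings = []
    if observed.get("comm_paths", 0) < RECOMMENDED["comm_paths"]:
        findings.append("Risk: fewer than two communication paths (split-brain exposure).")
    if observed.get("quorum_mode") != RECOMMENDED["quorum_mode"]:
        findings.append("Risk: quorum mode differs from recommended 'majority'.")
    if observed.get("heartbeat_interval_s", 0) > RECOMMENDED["heartbeat_interval_s"]:
        findings.append("Suggestion: lower heartbeat interval for faster failure detection.")
    return findings

for line in health_check({"comm_paths": 1, "quorum_mode": "majority",
                          "heartbeat_interval_s": 10}):
    print(line)
```

The real health-check report covers far more (version levels, Application Recovery Kits, log review), but the shape is the same: observed state versus recommended state, with each deviation turned into a concrete suggestion.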

High Availability Optimization

Optimization of High Availability services can be broken down into two areas:

  • High availability optimization – this optimization reduces downtime and maintains the operational uptime of a system. This is achieved by the use of system-initiated failovers and user-initiated switchovers to a backup hardware system that can take over when a primary system fails or is manually switched over. A Disaster Recovery (DR) node may be positioned on a WAN so that if the main LAN-based nodes go down, fast recovery can be achieved by failing over to the DR node. Backups may also be routinely made, so that recovery of specific files can be performed if needed.
  • Cost-optimized high availability – this optimization examines a customer’s system to determine the best way to offer redundancy, while keeping costs down. This may involve using cloud services for scaling and utilizing lower-tiered services to reduce cost. Serverless architectures with a pay-per-use model can also be used. All of this reduces hardware costs.
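The system-initiated failover described above is, at its core, a monitor-and-promote decision. A minimal sketch of that decision logic (node roles and the consecutive-failure threshold are illustrative; a real HA product such as LifeKeeper also handles fencing, quorum, and resource orchestration on top of this):

```python
# Sketch of the failover decision in a monitor loop: if the primary node
# fails its health check `max_failures` times in a row, the standby is
# promoted. The health-check history and threshold are illustrative.

def choose_active(primary_health_history: list, max_failures: int = 3) -> str:
    """Return which node should be active, given recent primary health checks."""
    recent = primary_health_history[-max_failures:]
    if len(recent) == max_failures and not any(recent):
        return "standby"   # consecutive failures: system-initiated failover
    return "primary"       # healthy, or not yet enough evidence to fail over

print(choose_active([True, True, True]))           # primary stays active
print(choose_active([True, False, False, False]))  # fail over to standby
```

Requiring several consecutive failures before promoting the standby is what keeps a transient network blip from triggering an unnecessary failover; user-initiated switchovers simply bypass this check.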

High Availability Training

SIOS offers on-demand high availability product training for LifeKeeper for Linux and DataKeeper for Windows products through the Udemy training platform.

Additionally, SIOS offers remote custom training for organizations through its Professional Services organization for these products, which comes with training materials and self-guided exercises. For the DataKeeper course, an individualized breakout/consultancy session is available.

SIOS Technology Corporation provides high availability cluster software that protects and optimizes IT infrastructures with cluster management for your most important applications. Request a demo today and see how easy clustering can be.

Author: Paul Scrutton, Software System Engineer at SIOS Technology Corp.

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: High Availability

Eliminate Shadow IT High Availability Problems

August 20, 2025 by Jason Aw

Many of us are familiar with the term Shadow IT. Most often, the term refers to technology systems, software, subscriptions, and other services used by employees of a company without the approval, knowledge, or oversight of the company’s official IT department. These systems, services, or subscriptions are typically downloaded and installed, or used and managed, by individuals outside of the IT department.

For example, perhaps your company officially uses Microsoft 365, but some employees prefer Dropbox, so they configure a Dropbox account to share files instead of OneDrive. Another example of Shadow IT occurs when a company has settled on one messaging platform, but other teams or departments within the company download and configure Zoom, Slack, or WhatsApp instead.

Common Examples of Shadow IT in the Workplace

Shadow IT occurs in many different areas, from messaging to meetings, coding tools to storage.  While most teams and organizations that have some form of Shadow IT do not deploy them maliciously or with evil intent, the presence of Shadow IT nevertheless introduces risks.

These services, software, systems, and subscriptions introduce potential risks, including:

  • Security issues
  • Data compliance violations
  • Support challenges
  • Management and maintenance issues (due to sprawl)
  • Additional cost (licensing and manpower)

How Shadow IT Impacts High Availability (HA)

In addition to security and data compliance risks, Shadow IT may also be introducing a significant High Availability (HA) risk.

While many examples of Shadow IT mentioned online are related to messaging applications, meeting tools, IDEs, and development applications, the breadth of Shadow IT can also impact High Availability (HA).  When Shadow IT includes the deployment of systems that store critical information and data, this creates a High Availability risk.

These systems, because of the nature of the data stored on them, need to be monitored and protected by a commercial HA solution.  In addition, critical data that is essential to business functions needs to be highly available and protected against data loss by a replication solution, backup solution, or both.

Business Risks of Unprotected Shadow IT Critical Applications

Lack of High Availability Protection

Often, when a team has deployed a system without input or authorization from IT, it may not be monitored, protected, backed up, or even paired with an HA system for failover recovery. This is a significant risk to the organization’s HA strategy. If the data is critical to an internal organization or project, leaving it unprotected could jeopardize the business.

Financial Losses and Business Disruption from Shadow IT Downtime

Shadow IT risks also arise when essential applications are downloaded, installed, and configured without the official IT department’s oversight. If essential applications are running on an unprotected system, or without HA monitoring and recovery protection, the results can be catastrophic. Imagine a scenario where an application is essential to the Sales workflow and orders system. Because the software is part of the Shadow IT infrastructure, the IT team has no knowledge of its use or its impact on the business. If the application fails, the business will be impacted. Depending on the type of failure, the impact on operations could cost hundreds of thousands to millions of dollars.

When the critical application fails, without proper HA protection, the manual recovery process can be cumbersome, complex, and prone to errors.  This risk to the operation is due in part to the growing complexity of application environments and technical requirements. Exacerbating the complexity, when an application falls into the category of Shadow IT, the limited knowledge of the application’s existence and recovery procedures can lead to unplanned and unprepared actions to restore full operation.

Steps to Identify and Eliminate Shadow IT HA Problems

Identify All Shadow IT Systems That Impact High Availability

The first step in avoiding HA disasters due to Shadow IT is to identify the subscriptions, services, systems, applications, data, and software that have become part of the unmanaged IT infrastructure. Gain visibility into what tools are being used, by whom, and for what purpose. This can be done with existing network monitoring, cloud monitoring, or endpoint detection tools. You can also engage IT security and infrastructure analysis vendors to perform an audit of tools, services, systems, and subscriptions.

Remediate Risks and Remove Unnecessary Shadow IT Assets

Once this identification has been done, the next step is to start with remediation.  Remediation includes eliminating unused and unnecessary systems as well as implementing controls and processes for the administration of each acquired item. Be sure to adjust workflows for eliminated systems, as the removal of systems can impact several teams and activities within the organization.

Protect Critical Applications with High Availability and Replication

For systems, applications, and services that must remain, especially those housing critical data and applications, deploy a commercially available HA and replication solution to protect the business from the key threats of application downtime, data loss, system unavailability, and downtime of the systems hosting the critical data, applications, or tools.

Educate Teams on the Risks of Shadow IT to HA Systems

Lastly, educate the organization about the dangers and risks associated with Shadow IT, including the risks due to dependencies, architecture complexities, data vulnerability, and unexpected downtime of unprotected systems.

Build a Resilient HA Architecture to Eliminate Shadow IT Downtime

Shadow IT is not limited to meeting and messaging tools, development systems and services, or apps like Dropbox, OneDrive, Box, and other online services. Shadow IT tools often lack proper backup and recovery mechanisms, as well as uptime guarantees. As a result, critical business processes and data could be inaccessible or even permanently lost in a failure scenario. When not officially integrated into HA protection, failures at the system, application, network, or storage layer can lead to broken workflows, inefficiencies in processing, or business downtime and reputational loss.

Eliminate Shadow IT HA problems by creating a well-architected HA environment for the systems, services, applications, and workloads that your company identifies and chooses to incorporate into the official IT department offerings.  This architecture should include a commercially available HA, data replication, and backup solution that is deployed on an enterprise-ready hypervisor.

Ready to strengthen your HA architecture with proven expertise? Request a demo today and see how SIOS can help you design and deploy a high availability solution that protects your business from Shadow IT downtime.

Author: Cassius Rhue, VP, Customer Experience

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: High Availability

Achieving High Availability Cost-Effectively

August 15, 2025 by Jason Aw

These days, applications and data are the lifeblood of most organizations, and everyone expects that the applications and data they need to get their work done will be available. But ensuring true application and data high availability (HA) – by which we mean the assurance that you can interact with them at least 99.99% of the time – can sound like a costly proposition. A clear understanding of where to apply HA, and how to do so cost-effectively, can more than justify the cost; in fact, it can protect you from the extensive costs and consequences of downtime and disaster. The question is, how can you determine how best to invest in an HA infrastructure?
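The 99.99% figure translates directly into a yearly downtime budget; a quick back-of-the-envelope calculation:

```python
# Convert an availability percentage into the maximum downtime it allows
# per year (365 * 24 * 60 = 525,600 minutes in a non-leap year).

def downtime_minutes_per_year(availability_pct: float) -> float:
    return (1 - availability_pct / 100) * 365 * 24 * 60

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% availability -> {downtime_minutes_per_year(nines):.1f} min/year")
```

At 99.99% ("four nines"), the budget is roughly 52.6 minutes of downtime per year; each additional nine shrinks it tenfold, which is why the cost of protection rises steeply with the availability target.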

This Network Computing article by a SIOS solutions architect looks at the considerations for determining which applications and data justify the investment in an HA infrastructure, and suggests cost-effective steps that organizations can take to improve the availability of those systems and data stores that they opt not to protect with a full HA infrastructure.

Want to take the next step with SIOS? Request a demo today to see how SIOS can help you protect critical workloads, minimize downtime, and ensure seamless high availability.

Author: Beth Winkowski, Public Relations

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: High Availability

Why Company History Matters in HA

August 5, 2025 by Jason Aw

There are many places to start when building the plan, strategy, design, and architecture for a highly available cluster. Of course, wise builders want to understand the basic requirements: two nodes or three, RTO under ten minutes or under five, RPO of near zero or absolutely zero. Architects also want to understand how the hardware and network can be made resilient. Will you deploy in the datacenter, in the cloud, or a mixture of both? Beyond the underlying hardware, requirements gathering and design also mean understanding the critical applications, the High Availability (HA) software, the processes and governance procedures that will need to be followed, and any additional dashboards and integrations required for reporting, monitoring, and alert distribution. All team members will also want to understand the basics of recovery and failover orchestration.

Why Company and Solution Provider History Matters in High Availability

But one thing that often gets overlooked in the deployment of High Availability is company history. Of course, if you are going to entrust your enterprise environment to a monitoring, alerting, recovery, and failover orchestration solution, you’d want to know and understand who the vendor is, what they do, and how long they’ve been doing it well. Is this a new startup located in Buford, Wyoming, a company available only in the US, or a global company that happens to have an HA offering in mothballs that is only trotted out to close other parts of a deal?

As you build your architecture, of course, you need to know that the HA company knows, understands, and does HA well.  But, and this is a big one, the most important history your team needs to know when architecting your HA solution isn’t theirs but yours.

As VP of Customer Experience, I’ve worked with numerous customers, teams, architects, and solution integration teams on deploying HA solutions, both on-premises and off. In many of these discussions, one overlooked factor in deploying a sound infrastructure and HA architecture is the history of the company itself. So, why does the history of your company, or the company you are architecting HA for, matter?

Five Ways Company History Shapes HA Architecture

Here are five ways in which company history should impact your HA architecture:

1. Company Size (Too big or too small)

What is your company’s history with regard to the HA team? Does your company have too many people on the team, with conflicting or overlapping roles and responsibilities? Or does it have a team that is undersized, even as it overachieves? Depending on the history of your company and its size over that time, you may need to adjust your design for additional authentication, more granular permissions and restrictions, and so on. If your team is small, the added work of developing and maintaining a free solution may be too great a burden. If your team is large, with many roles and overlaps, and time to develop custom solutions, consider whether a commercial solution would be a better fit to free those resources up for new development, additional improvements, or greater efficiency in day-to-day operations.

2. Company Life Cycle (Every five years or not until it breaks)

What is your company’s life cycle history?  Does your CIO/CTO revamp your entire infrastructure on a fixed cycle, or are they more of an “If it ain’t broke, don’t fix it,” type?  If your company has a long history of trading out and replacing solutions and providers, then your architecture will need to be more robust to handle the swapping in and out of components and pieces. In this case, your HA architecture will also need to factor in offboarding, end of life, and onboarding of a potentially new solution within a short period of time.  A key for this type of high turnover will be to limit custom work and hard dependencies.

On the other hand, if your HA solution will be in place for ten years or more, you’ll want to make sure that your vendors provide maintenance and extended support for the critical components within your infrastructure.  Your architecture will also need to heavily weigh the challenges that might be encountered with various software solutions and interoperability as the solution ages past the standard support lifecycle, and how to mitigate those risks.

3. Company Staffing (The revolving door or the lone ranger)

As VP of Customer Experience, one of my most shocking memories was working with a company to architect a solution for HA. Within one week of the go-live date, the project manager for that team announced that he and his whole team had been terminated. The go-live would be transferred to a new team, both new to the company and new to HA. As I would later learn, company Z had a revolving-door policy with IT and the administrators for their HA environment; most, if not all, of their resources were contractors. If your company has a history of high turnover, then your architecture and design must include a runbook, and the processes and procedures for maintenance need to also include training: formal product training, procedural testing, administration training, and chaos scenarios.

The revolving door isn’t the only company staffing history to be aware of. The Lone Ranger is another scenario that is critical to know and understand. At SIOS, our team joined a bewildered project manager looking for any answers and information regarding their enterprise systems, both involving SIOS and beyond. The Lone Ranger had left the company for unspecified reasons, and upon their departure, new members of the team discovered that a great deal of tacit knowledge was undocumented and unaccounted for in any documents they could find. When designing and building your architecture, knowing the type and history of staffing can help you design solutions properly, and may lead your team to choose a solution that is commercially available and backed by vendor services to cover unfortunate Lone Ranger departures.

4. Company Past Disasters

Company disasters and downtime are another historical point that needs to be well understood by designers of HA solutions. Typically, company disasters make their way into future architecture designs as requirements. Past disasters, including their root causes, risk mitigation strategies, and detection, prevention, and reporting recommendations, are often added to the deck of initial requirements. However, digging into the history of those disasters may uncover more requirements and factors that need to be accounted for. As VP of Customer Experience, our team has gained tremendous insight into building a better experience for several of our clients by understanding their companies’ disasters. In one instance, unattended VM maintenance was a big part of the company’s strategy, but also a source of many of its availability issues. While working with the architects, our services team not only addressed application availability but also helped the design team account for backup and recovery, maintenance and upgrades, and rollback strategies that maintain availability in the event of an automated failure.

5. Company Culture

As VP of Customer Experience, our team works closely with customers and partners who are passionate about application availability, adhering to the most stringent Service Level Agreements (SLAs) and Service Level Objectives (SLOs). As we worked with these teams, their designs and architecture specifications reflected a company culture that considered availability (architecture, design, hardware, networking, applications, cluster software, people, and process) an indispensable part of their business. Sadly, not all companies have this type of culture. Knowing the history of your company’s culture will definitely shape the way you implement HA, bringing out the best in design and architecture, either in adherence to the culture or as a means to improve culture and business success.

Don’t Overlook the Role of Company History in HA Decisions

Yes, the company history of the datacenter or cloud provider is important.  Knowing the history of Lou’s Low Cost Cloud, LLC (no offense to Lou), which has been hemorrhaging equipment, while running from the mostly un-air-conditioned garage of Lou’s parents’ home, is important if you were considering Lou for your datacenter.  Yes, the company history of the application and the HA vendor is also important.  Knowing the history of your ERP, Database, and frontend application provider is key to assessing and mitigating risks, understanding deployment patterns and methodology, and gaining confidence that timely fixes, updates, security, and support will be a cornerstone of your architecture.  But, do not underestimate the importance of knowing your own company history and how the critical failures should shape your new and ongoing HA decisions and infrastructure.

Ready to strengthen your HA architecture with proven expertise? Request a demo today and see how SIOS can help you design and deploy a high availability solution built for your company’s unique history and future needs.

Author: Cassius Rhue, VP, Customer Experience

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: High Availability

What’s the Best Setting for an Operating System Paging File for Maximum Performance and Stability?

July 27, 2025 by Jason Aw

DataKeeper depends on many configuration settings in the operating system. Because of this, when a configuration change is made to the cluster, the impacts of the change are often not completely understood by our customers, which in turn can affect how SIOS DataKeeper operates. Knowing these dependencies in advance can help when upgrading or changing your cluster configuration. One of the key operating system features that DataKeeper depends upon is the location of the paging file.

What is the Operating System Paging File?  

The operating system paging file is a hidden file that the operating system uses when the server’s physical memory is full. The paging file acts as extra server memory and actually resides on a drive in the server, allowing the server to continue to operate and maintain system performance even when physical memory is low.

Where Should the Operating System Paging File be located?

By default, the operating system paging file is placed on C:\ or the <root> drive.  The operating system configuration includes an option to allow automatic management of the paging file.   When this is set, the operating system can move the paging file automatically to any disk in the system after a reboot.  With DataKeeper, it is recommended that the automatic management of the paging file be disabled so that the paging file is not moved to other volumes that may be used by DataKeeper.  The operating system is not aware of which volumes are being used by DataKeeper and may unexpectedly move the paging file to a volume with a DataKeeper mirror.   With DataKeeper in your cluster, the paging file needs to be located on a volume that is not used for DataKeeper mirroring (such as the C drive).
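The placement rule described above reduces to a simple validation: the paging-file volume must not be one of the volumes used for DataKeeper mirroring. A minimal sketch (drive letters are illustrative; DataKeeper v8.11.0 and later enforce this check within the product itself):

```python
# Sketch: validate a proposed paging-file location against the set of
# volumes used for DataKeeper mirroring. Drive letters are illustrative.

def paging_file_location_ok(paging_volume: str, mirrored_volumes: set) -> bool:
    """True if the paging file is on a volume not used for mirroring."""
    return paging_volume.upper() not in {v.upper() for v in mirrored_volumes}

mirrored = {"E:", "F:"}          # DataKeeper mirror source/target volumes
print(paging_file_location_ok("C:", mirrored))  # True  -> safe placement
print(paging_file_location_ok("E:", mirrored))  # False -> would block volume locking
```

With automatic paging-file management disabled, this is effectively the invariant the administrator must maintain by hand: the paging file stays on a non-mirrored volume such as C:, no matter how the cluster configuration changes.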

Why is the Location of the Operating System Paging File Important to SIOS DataKeeper?

Why does it matter where the paging file is if DataKeeper is being used on your servers? The location of the paging file can affect the operation of DataKeeper. If the paging file is on a DataKeeper volume that is currently the source of the mirror, everything will appear to work fine. However, when a switchover or failover occurs, the source of the mirror becomes the target, and with a paging file present on the target volume, DataKeeper will fail to lock that volume. Locking the target of the mirror is required: if DataKeeper cannot lock the volume, switchovers and failovers will fail, affecting High Availability. DataKeeper v8.11.0 added a feature to help customers with this: the product now prevents a paging file from being created on a DataKeeper volume.

Summary: What Happens When the Paging File is Located on a DataKeeper Volume?

DataKeeper purposely locks the volume on the target system to prevent writes from occurring on the target system.  In order for DataKeeper to lock a target volume, there cannot be an operating system paging file on the volume.  Many times, systems are configured at the OS level to “Automatically Manage Paging Files,” and sometimes page files end up getting placed on the DataKeeper volumes by the OS.  To overcome this, we recommend that this OS setting be changed.  Refer to the product documentation for further details.  Also, we recommend upgrading to DataKeeper v8.11.0 so you can benefit from this new DataKeeper feature that prevents paging files from being created on DataKeeper mirrored volumes.

Want to take the next step with SIOS? Request a demo today to see how SIOS can help you protect critical workloads, minimize downtime, and ensure seamless high availability.

Author: Sandi Hamilton, Director of Product Support Engineering at SIOS Technology Corp.

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: SIOS Datakeeper
