High Availability Applications For Business Operations – An Interview

February 1, 2019 by Jason Aw

About High Availability Applications For Business Operations – An Interview with Jerry Melnick

We are in conversation with Jerry Melnick, President & CEO, SIOS Technology Corp. Jerry is responsible for directing the overall corporate strategy for SIOS Technology Corp. and leading the company’s ongoing growth and expansion. He has more than 25 years of experience in the enterprise and high availability software markets. Before joining SIOS, he was CTO at Marathon Technologies, where he led business and product strategy for the company’s fault tolerant solutions. His experience also includes executive positions at PPGx, Inc. and Belmont Research, where he was responsible for building a leading-edge software product and consulting business focused on data warehouse and analytical tools.

Jerry began his career at Digital Equipment Corporation. He led an entrepreneurial business unit that delivered highly scalable, mission-critical database platforms to support enterprise-computing environments in the medical, financial and telecommunication markets. He holds a Bachelor of Science degree from Beloit College with graduate work in Computer Engineering and Computer Science at Boston University.

What is the SIOS Technology survey and what is the objective of the survey?

SIOS Technology Corp., together with ActualTech Media, conducted a survey of IT staff to understand current trends and challenges related to the general state of high availability applications in organizations of all sizes. An organization’s HA applications are generally the ones that ensure that the business remains in operation. Such systems can range from order-taking systems to CRM databases to anything that keeps employees, customers, and partners working together.

We’ve learned that the news is mixed when it comes to how well high availability applications are supported.

Who responded to the survey?

For this survey, we gathered responses from 390 IT professionals and decision makers from a broad range of company sizes in the US. Respondents included those managing databases, infrastructure, architecture, systems, and software development, as well as those in IT management roles.

What were some of the key findings uncovered in the survey results?

The following are key findings based on the survey results:

  • Most (86%), but not all, organizations are operating their HA applications with some kind of clustering or high availability mechanism in place.
  • A full 95% of respondents report that they have an occasional failure in the underlying HA services that support their applications.
  • Ninety-eight percent (98%) of respondents to our survey indicated that they see either regular or occasional application performance issues.
  • When such issues occur, most organizations take between three and five hours, and use between two and four tools, to identify the cause and correct the issue.
  • Small companies are leading the way by going all-in on operating their HA applications in the cloud. More than half (54%) of small companies intend to be running 50% or more of their HA applications in the cloud by the end of 2018.
  • For companies of all sizes, control of the application environment remains a key reason why workloads remain on premises, with 60% of respondents indicating that this factored into keeping one or more HA applications on premises rather than moving them into the cloud.

Tell us about the Enterprise Application Landscape. Which applications are in use most; and which might we be surprised about?

We focused on tier 1 mission-critical applications, including Oracle, Microsoft SQL Server, and SAP/HANA. For most organizations operating these kinds of services, they are the lifeblood. They hold the data that enables the organization to achieve its goals.

56% of respondents to our survey are operating Oracle workloads while 49% are running Microsoft SQL Server. Rounding out the survey, 28% have SAP/HANA in production. These are all clearly critical workloads in most organizations, but there are others. For this survey, we provided respondents an opportunity to tell us what, beyond these three big applications, they are operating that can be considered mission critical. Respondents who took that opportunity indicated that they’re also operating various web databases, primarily from Amazon, as well as MySQL and PostgreSQL databases. To a lesser extent, organizations are also operating some NoSQL services that are considered mission critical.

How often does an application performance issue affect end users?

Application performance issues are critical for organizations. 98% of respondents indicate these issues impact end users in some way, ranging from daily (experienced by 18% of respondents) to just once per year (experienced by 8% of respondents) and everywhere in between. Application performance issues lead to customer dissatisfaction and can lead to lost revenue and increased expenses. But there appears to be some disagreement around such issues depending on your perspective in the organization. Respondents holding decision maker roles have a more positive view of the performance situation than others. Only 11% of decision makers report daily performance challenges compared to around 20% of other respondents.

Is it easier to resolve cloud-based application performance issues?

Most IT pros would like to fully eliminate performance issues for applications that operate in a cloud environment. But the fact is that such situations can and will happen. There is a variety of tools available in the market to help IT understand and address application performance issues, and IT departments have, over the years, cobbled together troubleshooting toolkits. In general, the fewer tools you need to work with to resolve a problem, the more quickly you can bring services back into full operation. That’s why it’s particularly disheartening to learn that only 19% of respondents turn to a single tool to identify cloud application performance issues. This leaves 81% of respondents having to use two or more tools. But it gets worse: 11% of respondents need to turn to five or more tools in order to identify performance issues with their cloud applications.

Now that we know cloud-based application performance issues can’t be totally avoided, how long until we can expect a fix?

The real test of an organization’s ability to handle such issues comes when measuring the time it takes to recover when something does go awry. 23% of respondents can typically recover in less than an hour. Fifty-six percent (56%) of respondents take somewhere between one and three hours to recover, and 23% take three or more hours. This isn’t to say that these people are recovering from a complete failure somewhere. They are reacting to a performance fault somewhere in the application, one serious enough to warrant attention. A goal for most organizations is to reduce the amount of time it takes to troubleshoot problems, which in turn reduces the amount of time it takes to correct them.

Do future plans for moving HA applications to the cloud show stronger migration?

We asked respondents about their future plans for moving additional high availability applications to the cloud. Nine percent (9%) of respondents indicate that all of their most important applications are already in the cloud. By the end of 2018, one-half of respondents expect to have more than 50% of their HA applications migrated to the cloud, while 29% say that they will have less than half of their HA applications there. Finally, 12% of respondents say that they will not be moving any more HA applications to the cloud in 2018.

How would you sum up the SIOS Technology survey results?

Although this survey and report represent people’s thinking at a single point in time, there are some potentially important trends that emerge. First, it’s clear that organizations value their mission-critical applications, as they’re protecting them via clustering or other high availability technology. A second takeaway is that even with those safeguards in place, there’s more work to be done, as those apps can still suffer failures and performance issues. Companies need to look at the data and ask themselves whether they’re doing everything they can to protect their crucial assets. You can download the report here.

Contact us if you would like to bring high availability applications to your project.

Reproduced from TechTarget


Five Cloud Predictions for 2019 by SIOS

January 16, 2019 by Jason Aw

From HA and IT service management to DevOps and IT operations analytics

SIOS Technology Corp.‘s president and CEO Jerry Melnick reveals his top cloud predictions for 2019.

The cloud has a rich history of continual improvements. 2019 will usher in some fairly significant ones that enhance capabilities, simplify operations and reduce costs.

Five Major Trends That Guide His Cloud Predictions For 2019:

1. Advances in Technology Will Make the Cloud Substantially More Suitable for Critical Applications

IT staff have now become more comfortable with the cloud for critical applications. Their concerns about security and reliability, especially about achieving five nines (99.999%) of uptime, have diminished substantially. Initially, organizations will prefer to use whatever HA failover clustering technology they currently use in their data centers to protect critical applications being migrated to the cloud. This clustering technology will also be adapted and optimized for enhanced operation in the cloud. At the same time, cloud service providers will continue to advance their ability to provide higher service levels, leading to the cloud ultimately becoming the preferred platform for all enterprise applications.

2. Dynamic Utilization Will Make HA and DR More Cost-effective for More Applications, Further Driving Migration to the Cloud

With its virtually unlimited resources spread around the globe, the cloud is the ideal platform for delivering high uptime. But provisioning standby resources that sit idle most of the time has been cost-prohibitive for many applications. The increasing sophistication of fluid cloud resources deployed across multiple zones and regions, all connected via high-quality internetworking, now enables standby resources to be allocated dynamically, only when needed. This will dramatically lower the cost of provisioning HA and DR protections.
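A minimal Python sketch of this allocate-on-demand pattern: rather than paying for a standby instance that sits idle, the failover logic provisions one only at the moment the primary fails its health check, and releases it once the primary recovers. The CloudProvider class and its methods here are hypothetical stand-ins, not any real provider’s SDK.

    import time

    class CloudProvider:
        """Hypothetical stand-in for a real cloud provider SDK."""
        def provision_instance(self, region):
            print(f"provisioning standby in {region}")
            return {"region": region, "state": "running"}

        def release_instance(self, instance):
            print(f"releasing standby in {instance['region']}")

    def primary_healthy():
        # Placeholder health check; a real one would probe the service endpoint.
        return True

    def failover_loop(cloud, standby_region, poll_seconds=10):
        standby = None
        while True:
            if primary_healthy():
                if standby is not None:
                    cloud.release_instance(standby)  # stop paying for an idle standby
                    standby = None
            elif standby is None:
                # Allocate the standby only at the moment it is needed.
                standby = cloud.provision_instance(standby_region)
            time.sleep(poll_seconds)

In practice the provisioning step would also attach replicated storage and redirect clients, but the cost logic is the same: standby capacity is billed only while a failure is actually in progress.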

3. The Cloud Will Become a Preferred Platform for SAP Deployments

As the platforms offered by cloud service providers continue to mature, their ability to host SAP applications will become commercially viable and, therefore, strategically important. For CSPs, SAP hosting will be a way to secure long-term engagements with enterprise customers. For the enterprise, “SAP-as-a-Service” will be a way to take full advantage of the enormous economies of scale in the cloud without sacrificing performance or availability.

4. Cloud ‘Quick-start’ Templates Will Become the Standard for Complex Software and Service Deployments

Quick-start templates are wizard-based interfaces that employ automated scripts to dynamically provision, configure and orchestrate the resources and services needed to run specific applications. Among their key benefits are reduced training requirements, improved speed and accuracy, and the ability to minimize or even eliminate human error as a major source of problems. By making deployments more turnkey, quick-start templates will substantially decrease the time and effort it takes for DevOps staff to set up, test and roll out dependable configurations.
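To make the mechanics concrete, here is a toy Python sketch of what a quick-start template reduces to: a declarative description of resources and their dependencies, plus a script that provisions them in the right order. The resource names and the provision() helper are illustrative, not any provider’s actual API.

    # A toy "quick-start template": resources declared with their dependencies.
    TEMPLATE = {
        "network":  {"depends_on": []},
        "storage":  {"depends_on": ["network"]},
        "database": {"depends_on": ["network", "storage"]},
        "app":      {"depends_on": ["database"]},
    }

    def provision(name):
        # Stand-in for the provider-specific call that creates the resource.
        print(f"provisioning {name}")

    def deploy(template):
        """Provision resources in dependency order (a simple topological sort)."""
        done = set()
        while len(done) < len(template):
            for name, spec in template.items():
                if name not in done and all(d in done for d in spec["depends_on"]):
                    provision(name)
                    done.add(name)

    deploy(TEMPLATE)

Because the ordering and configuration live in the template rather than in an operator’s head, the same deployment runs identically every time, which is precisely how these templates remove human error.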

5. Advanced Analytics And AI Will Be Everywhere and in Everything, Including Infrastructure Operations

Advanced analytics and artificial intelligence will simplify IT operations, improve infrastructure and application robustness, and lower overall costs. Along with this trend, AI and analytics will become embedded in HA and DR solutions, as well as in cloud service provider offerings, to improve service levels. With the ability to quickly, automatically and accurately understand issues and diagnose problems across complex configurations, the reliability, and thus the availability, of critical services delivered from the cloud will vastly improve.

Concluding his cloud predictions for 2019, according to Melnick, “2019 is set to be an exciting year for the cloud with new capabilities and enhancements further driving migration to the cloud.  With these new improvements, built atop an already-solid foundation, the cloud may well achieve that long-anticipated tipping point where it becomes the preferred platform for a majority of enterprise applications for a majority of organizations.”

SIOS Technology Corp.’s president and CEO Jerry Melnick’s cloud predictions for 2019 are reproduced with permission from SIOS.

Read SIOS Success Stories to learn how SIOS could benefit your projects.


Database Trends and Applications: 10 Ways to Save Money and Provide More Comprehensive Availability Protection in SQL Server Environments

May 14, 2015 by Margaret Hoagland

Microsoft SQL Server has become a business-critical database for a growing number of enterprises that rely on it to run a wide range of essential business processes. As enterprises look to continuously improve the efficiency of their data centers, they face the challenges involved in improving their ability to provide high availability and disaster protection for SQL Server.

A common strategy for providing high availability protection for SQL Server is to use AlwaysOn Availability Groups, a high availability feature included with SQL Server 2012 Enterprise Edition. It is positioned as an evolution of SQL Server Database Mirroring and an alternative to AlwaysOn Failover Clustering.

You can also use AlwaysOn Failover Clustering, which is included in both the SQL Server Enterprise and Standard Editions. While AlwaysOn Failover Clustering allows you to create a cluster in a physical server environment, it requires shared storage, which is not available in a cloud environment and may not be practical in a virtual server environment. A third option is to add SANLess clustering software to AlwaysOn Failover Clustering. SANLess clustering provides high availability and more comprehensive data protection for a fraction of the cost of AlwaysOn Availability Groups, which requires the very expensive Enterprise Edition of SQL Server.

SANLess clustering software is an ingredient that enhances a Windows Server Failover Clustering (WSFC) environment by providing real-time, block-level replication to synchronize local attached storage. The resulting synchronized storage appears to WSFC as a virtual SAN, enabling you to create a SANLess cluster that eliminates the cost, complexity, and single point of failure risk of a SAN. SANLess clusters also help you save money and enhance availability in a variety of ways.
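As a rough illustration of the mechanism, the Python sketch below models block-level replication: every block written to local storage is also shipped to a peer node so that both copies stay identical. This is a conceptual model only, assuming a peer already listening on a TCP socket; the real product implements this as a driver beneath the file system rather than in application code.

    import socket
    import struct

    BLOCK_SIZE = 4096

    class ReplicatedVolume:
        """Toy model of a volume whose writes are mirrored to a peer node."""
        def __init__(self, local_path, peer_addr):
            self.local = open(local_path, "r+b")
            self.peer = socket.create_connection(peer_addr)  # (host, port)

        def write_block(self, block_no, data):
            assert len(data) == BLOCK_SIZE
            # 1. Commit the block to locally attached storage.
            self.local.seek(block_no * BLOCK_SIZE)
            self.local.write(data)
            self.local.flush()
            # 2. Ship the same block to the peer so its copy stays in sync.
            header = struct.pack("!Q", block_no)  # 8-byte block number
            self.peer.sendall(header + data)

Because both nodes end up with byte-identical volumes, the cluster software can treat the pair of local disks as one logical disk, which is what lets the synchronized storage appear to WSFC as a virtual SAN.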

1. Use SQL Server Standard Edition with AlwaysOn Failover Clustering to Save Licensing Costs.

AlwaysOn Availability Groups requires SQL Server 2012 Enterprise Edition. SANLess clustering software lets you use AlwaysOn Failover Clustering, which is included in both Standard and Enterprise Editions of SQL Server as a more cost-efficient failover and disaster protection solution.

Figure 1 shows a side-by-side comparison of software licensing costs for a traditional cluster using AlwaysOn Availability Groups to protect SQL Server Enterprise Edition versus a SANLess cluster using AlwaysOn Failover Clustering and SANless clustering software (SIOS DataKeeper™ Cluster Edition) to protect SQL Server Standard Edition.

Costs are calculated for comparable two-node clusters with four, eight, and sixteen cores. Software Assurance licensing costs are also included. As shown, the SANLess cluster saves $13,124 in a four-core cluster, $33,448 in an eight-core cluster, and $74,096 in a 16-core cluster configuration. These savings include the purchase of SANLess clustering software, which is licensed per node. When used in multiple SQL Server environments, SANLess clusters can save several hundred thousand dollars in software licenses.

Figure 1 – Cost Comparison of Different High Availability Solutions
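The structure of that comparison is simple enough to check with a few lines of Python. The sketch below reproduces the calculation pattern only: the per-core and per-node prices are placeholders (the article’s exact list prices are not given here), and the model is simplified in that it licenses every core on every node, so substitute current pricing and your actual licensing terms before drawing conclusions.

    def cluster_license_cost(per_core_price, cores, nodes=2, per_node_addon=0.0):
        """Total software licensing cost for an N-node failover cluster.
        Simplified model: licenses every core on every node."""
        return per_core_price * cores * nodes + per_node_addon * nodes

    # Placeholder prices -- substitute current list prices.
    ENTERPRISE_PER_CORE = 7000.0  # hypothetical SQL Server Enterprise, per core
    STANDARD_PER_CORE   = 1800.0  # hypothetical SQL Server Standard, per core
    SANLESS_PER_NODE    = 3000.0  # hypothetical SANless clustering software, per node

    for cores in (4, 8, 16):
        enterprise = cluster_license_cost(ENTERPRISE_PER_CORE, cores)
        sanless = cluster_license_cost(STANDARD_PER_CORE, cores,
                                       per_node_addon=SANLESS_PER_NODE)
        print(f"{cores:2d} cores: Enterprise ${enterprise:,.0f} vs "
              f"Standard + SANless ${sanless:,.0f} "
              f"(savings ${enterprise - sanless:,.0f})")

Note how the Enterprise cost scales with core count while the SANless add-on is a flat per-node charge, which is why the savings reported in the article grow from $13,124 at four cores to $74,096 at sixteen.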

View the Complete Article at Database Trends and Applications


VMblog.com: High Availability vSphere for SQL Server: 5 Things You Need to Know

February 11, 2015 by Margaret Hoagland

SQL Server administrators have many options for implementing high availability (HA) in a VMware environment. VMware offers vSphere HA. Microsoft offers Windows Server Failover Clustering (WSFC). And SQL Server in WSFC has its own HA options with AlwaysOn Availability Groups and AlwaysOn Failover Clusters.

Third party vendors also provide solutions purpose-built for HA and disaster recovery, and these often integrate with other solutions to create even more options. For example, some solutions leverage the AlwaysOn Failover Cluster feature included with SQL Server to deliver robust HA and data protection for less than the cost of AlwaysOn Availability Groups that require the more expensive Enterprise Edition.

This article highlights five things every SQL Server administrator should know before formulating a high availability strategy for mission-critical applications in a vSphere environment. Such a strategy is likely to resemble the multi-site configuration shown in Figure 1, which is not possible with some HA options.

1. High-Availability Clusters for vSphere require Raw Device Mapping

The layers of abstraction used in virtualized servers afford substantial flexibility, but such abstractions can cause problems when a virtual machine (VM) must interface with a physical device. This is the case for vSphere with Storage Area Networks (SANs).

To enable compatibility with certain SAN and other shared-storage features, such as I/O fencing and SCSI reservations, vSphere utilizes a technology called Raw Device Mapping (RDM) to create a direct link through the hypervisor between the VM and the external storage system. The requirement for using RDM with shared storage exists for any cluster, including a SQL Server Failover Cluster.

In a traditional cluster created with WSFC in vSphere, RDM must be used to provide virtual machines (VMs) direct access to the underlying storage (SAN). RDM is able to maintain 100 percent compatibility with all SAN commands, making virtualized storage access seamless to the operating system and applications, which is an essential requirement of WSFC.

RDM can be made to work effectively, but achieving the desired result is not always easy, and may not even be possible. For example, RDM does not support disk partitions, so it is necessary to use “raw” or whole LUNs (logical unit numbers), and mapping is not available for direct-attached block storage and certain RAID devices.

2. Use of Raw Device Mapping means Sacrificing Popular VMware Features

Another important aspect of being fully informed about RDM involves understanding the hurdles it can create for using other VMware features, many of which are popular with SQL Server administrators. When these hurdles are deemed unacceptable, as they often are, they eliminate Raw Device Mapping as an option for implementing high availability.

The underlying problem is how RDM interferes with VMware features that employ virtual machine disk (VMDK) files. For example, RDM prevents the use of VMware snapshots, and this in turn prevents the use of any feature that requires snapshots, such as VMware Consolidated Backup (VCB).

Raw Device Mapping also complicates data mobility, which creates impediments to using the features that make server virtualization so beneficial, including converting VMs into templates to simplify deployment, and using vMotion to migrate VMs dynamically among hosts.

Another potential problem for transaction-intensive applications like SQL Server is the inability to utilize Flash Read Cache when RDM is configured.

3. Shared Storage can create a Single Point of Failure

The traditional need for clustered servers to have direct access to shared storage can create limitations for high availability and disaster recovery provisions, and these limitations can, in turn, create a barrier to migrating business-critical applications to vSphere.

In a traditional failover cluster, two or more physical servers (cluster nodes) are connected to a shared storage system. The application runs on one server, and in the event of a failure, clustering software, such as Windows Server Failover Clustering, moves the application to a standby node. Similar clustering is also possible with virtualized servers in a vSphere environment, but this requires a technology like Raw Disk Mapping so that the VMs can access the shared storage directly.

Whether the servers are physical or virtual, the use of shared storage can create a single point of failure. A SAN can have a high availability configuration, of course, but that increases its complexity and cost, and can adversely affect performance, especially for transaction-intensive applications like SQL Server.

4. HA vSphere Clusters can be built without Sacrificing VMware Functionality

Some third-party solutions are purpose-built to overcome the limitations associated with shared storage and the requirement to use RDM with SQL Server’s AlwaysOn Failover Clusters and Windows Server Failover Clusters.

Figure 1 – A multi-site high-availability configuration protects applications from outages that affect an entire data center.

The best of these solutions provide complete configuration flexibility, making it possible to create a SANLess cluster to meet a wide range of needs – from a two-node cluster in a single site, to a multinode cluster, to a cluster with nodes in different geographic locations for disaster protection as shown in Figure 1. Some of these solutions also make it possible to implement LAN/WAN-optimized, real-time block-level replication in either a synchronous or asynchronous manner. In effect, these solutions are capable of creating a RAID 1 mirror across the network, automatically changing the direction of the data replication (source and target) as needed after failover and failback.
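The synchronous/asynchronous distinction these solutions offer comes down to when the write is acknowledged. Here is a compressed Python sketch of the two modes, assuming local_write and remote_send are callables that either complete or raise:

    import queue
    import threading

    def replicate_sync(local_write, remote_send, data):
        """Synchronous: acknowledge only after BOTH copies are committed.
        No data loss on failover, but every write pays the network round
        trip, so it suits low-latency LAN links."""
        local_write(data)
        remote_send(data)  # blocks until the target confirms
        return "ack"

    class AsyncReplicator:
        """Asynchronous: acknowledge after the local write and ship the data
        to the target in the background. Better suited to WAN links, but the
        most recent writes can be lost if the source fails before the
        backlog drains."""
        def __init__(self, remote_send):
            self.backlog = queue.Queue()
            threading.Thread(target=self._drain, args=(remote_send,),
                             daemon=True).start()

        def _drain(self, remote_send):
            while True:
                remote_send(self.backlog.get())

        def write(self, local_write, data):
            local_write(data)
            self.backlog.put(data)  # replicated later, off the write path
            return "ack"

Reversing direction after failover and failback is then just swapping which node’s writes feed the mirror, which is the RAID 1-across-the-network behavior described above.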

Just as importantly, a SANLess cluster is often easier to implement and operate with both physical and virtual servers. For example, for solutions that are integrated with WSFC, administrators are able to configure high-availability clusters using a familiar feature in a way that avoids the use of shared storage as a potential single point of failure. Once configured, most solutions then automatically synchronize the local storage in two or more servers (in one or more data centers), making it appear to WSFC as if it were a shared storage device.

5. HA SANLess Clusters deliver Superior Capabilities and Performance

In addition to creating a single point of failure, replicating data on a SAN can significantly reduce throughput performance in VMware environments. Highly transactional applications like SQL Server are particularly vulnerable to these performance-related factors.

Figure 2 – Testing of SQL Server’s AlwaysOn Availability Groups and SIOS #SANLess clusters shows the throughput advantage possible with replication techniques purpose built for high availability and high performance.

Figure 2 summarizes test results that show the 60-70 percent performance penalty associated with using SQL Server AlwaysOn Availability Groups to replicate data. These test results also show how a purpose-built high-availability SANLess cluster, which utilizes local storage, is able to perform nearly as well as configurations not protected with any data replication or mirroring.

The #SANLess cluster tested is able to achieve this impressive performance because its driver sits immediately below NTFS. As writes occur on the primary server, the driver writes one copy of the block to the local VMDK and another copy simultaneously across the network to the secondary server which has its own independent VMDK.

SANLess clusters have many other advantages, as well. For example, those that use block-level replication technology that is fully integrated with WSFC are able to protect the entire SQL Server instance, including the database, logons, and agent jobs, all in an integrated fashion. Contrast this approach with AlwaysOn Availability Groups, which fail over only user-defined databases and require IT staff to manage other data objects for every cluster node separately and manually.


About the Author

Jerry Melnick, COO, SIOS Technology Corp.

Jerry Melnick (jmelnick@us.sios.com) is responsible for defining corporate strategy and operations at SIOS Technology Corp. (www.us.sios.com), maker of SIOS SAN and #SANLess cluster software (www.clustersyourway.com). He has more than 25 years of experience in the enterprise and high availability software industries. He holds a Bachelor of Science degree from Beloit College with graduate work in Computer Engineering and Computer Science at Boston University.

