SIOS SANless clusters



High Availability Applications For Business Operations – An Interview

February 1, 2019 by Jason Aw Leave a Comment

About High Availability Applications For Business Operations – An Interview with Jerry Melnick

We are in conversation with Jerry Melnick, President & CEO, SIOS Technology Corp. Jerry is responsible for directing the overall corporate strategy for SIOS Technology Corp. and leading the company’s ongoing growth and expansion. He has more than 25 years of experience in the enterprise and high availability software markets. Before joining SIOS, he was CTO at Marathon Technologies where he led business and product strategy for the company’s fault tolerant solutions. His experience also includes executive positions at PPGx, Inc. and Belmont Research. There he was responsible for building a leading-edge software product and consulting business focused on supplying data warehouse and analytical tools.

Jerry began his career at Digital Equipment Corporation. He led an entrepreneurial business unit that delivered highly scalable, mission-critical database platforms to support enterprise-computing environments in the medical, financial and telecommunication markets. He holds a Bachelor of Science degree from Beloit College with graduate work in Computer Engineering and Computer Science at Boston University.

What is the SIOS Technology survey and what is the objective of the survey?

SIOS Technology Corp. with ActualTech Media conducted a survey of IT staff to understand current trends and challenges related to the general state of high availability applications in organizations of all sizes. An organization’s HA applications are generally the ones that ensure that a business remains in operation. Such systems can range from order taking systems to CRM databases to anything that keeps employees, customers, and partners working together.

We’ve learned that the news is mixed when it comes to how well high availability applications are supported.

Who responded to the survey?

For this survey, we gathered responses from 390 IT professionals and decision makers from a broad range of company sizes in the US. Respondents included those managing databases, infrastructure, architecture, systems, and software development, as well as those in IT management roles.

What were some of the key findings uncovered in the survey results?

The following are key findings based on the survey results:

  • Most (86%), but not all, organizations are operating their HA applications with some kind of clustering or high availability mechanism in place.
  • A full 95% of respondents report that they have an occasional failure in the underlying HA services that support their applications.
  • Ninety-eight percent (98%) of respondents to our survey indicated that they see either regular or occasional application performance issues.
  • When such issues occur, it takes most organizations between three and five hours to identify the cause and correct the issue, using between two and four tools to do so.
  • Small companies are leading the way by going all-in on operating their HA applications in the cloud. More than half (54%) of small companies intend to be running 50% or more of their HA applications in the cloud by the end of 2018.
  • For companies of all sizes, control of the application environment remains a key reason why workloads remain on premises, with 60% of respondents indicating that this has played a factor in retaining one or more HA applications on-premises rather than moving them into the cloud.

Tell us about the Enterprise Application Landscape. Which applications are in use most; and which might we be surprised about?

We focused on tier 1 mission critical applications, including Oracle, Microsoft SQL Server, SAP/HANA. For most organizations operating these kinds of services, they are the lifeblood. They hold the data that enables the organization to achieve its goals.

56% of respondents to our survey are operating Oracle workloads while 49% are running Microsoft SQL Server. Rounding out the survey, 28% have SAP/HANA in production. These are all clearly critical workloads in most organizations, but there are others. For this survey, we gave respondents an opportunity to tell us what, beyond these three big applications, they are operating that can be considered mission critical. Respondents who availed themselves of this option indicated that they’re also operating various web databases, primarily from Amazon, as well as MySQL and PostgreSQL databases. To a lesser extent, organizations are also operating some NoSQL services that are considered mission critical.

How often does an application performance issue affect end users?

Application performance issues are critical for organizations. 98% of respondents indicate these issues impact end users in some way, ranging from daily (experienced by 18% of respondents) to just one time per year (experienced by 8% of respondents) and everywhere in between. Application performance issues lead to customer dissatisfaction and can lead to lost revenue and increased expenses. But there appears to be some disagreement around such issues depending on your perspective in the organization. Respondents holding decision maker roles have a more positive view of the performance situation than others: only 11% of decision makers report daily performance challenges compared to around 20% of other respondents.

Is it easier to resolve cloud-based application performance issues?

Most IT pros would like to fully eliminate the potential for performance issues in applications that operate in a cloud environment. But the fact is that such situations can and will happen. There is a variety of tools available in the market to help IT understand and address application performance issues, and IT departments have, over the years, cobbled together troubleshooting toolkits. In general, the fewer tools you need to work with to resolve a problem, the more quickly you can bring services back into full operation. That’s why it’s particularly disheartening to learn that only 19% of respondents turn to a single tool to identify cloud application performance issues. This leaves 81% of respondents having to use two or more tools. But it gets worse: 11% of respondents need to turn to five or more tools in order to identify performance issues with their cloud applications.

Now that we know cloud-based application performance issues can’t be totally avoided, how long until we can expect a fix?

The real test of an organization’s ability to handle such issues comes when measuring the time it takes to recover when something does go awry. 23% of respondents can typically recover in less than an hour. Fifty-six percent (56%) of respondents take somewhere between one and three hours to recover. After that, 23% take three or more hours. This isn’t to say that these people are recovering from a complete failure somewhere. They are reacting to a performance fault somewhere in the application, and one that’s serious enough to warrant attention. A goal for most organizations is to reduce the amount of time it takes to troubleshoot problems, which in turn reduces the time it takes to correct them.

Do future plans about moving HA applications to the cloud show stronger migration?

We requested information from respondents about their future plans for moving additional high availability applications to the cloud. Nine percent (9%) of respondents indicate that all of their most important applications are already in the cloud. By the end of 2018, one-half of respondents expect to have more than 50% of their HA applications migrated to the cloud, while 29% say that they will have less than half of their HA applications in such locations. Finally, 12% of respondents say that they will not be moving any more HA applications to the cloud in 2018.

How would you sum up the SIOS Technology survey results?

Although this survey and report represent people’s thinking at a single point in time, there are some potentially important trends that emerge. First, it’s clear that organizations value their mission-critical applications, as they’re protecting them via clustering or other high availability technology. A second takeaway is that even with those safeguards in place, there’s more work to be done, as those apps can still suffer failures and performance issues. Companies need to look at the data and ask themselves whether they’re doing everything they can to protect their crucial assets. You can download the report here.

Contact us if you would like to enjoy High Availability Applications in your project.

Reproduced from Tech Target

Filed Under: News and Events Tagged With: High Availability, high availability applications, Jerry Melnick, SIOS

Five Cloud Predictions for 2019 by SIOS

January 16, 2019 by Jason Aw Leave a Comment


From HA and IT service management to DevOps and IT operations analytics

SIOS Technology Corp.‘s president and CEO Jerry Melnick reveals his top cloud predictions for 2019.

The cloud has a rich history of continual improvements. 2019 will usher in some fairly significant ones that enhance capabilities, simplify operations and reduce costs.

Five Major Trends That Guide His Cloud Predictions For 2019:

1. Advances in Technology Will Make the Cloud Substantially More Suitable for Critical Applications

IT staff have now become more comfortable with the cloud for critical applications. Their concerns about security and reliability, especially for five-9’s of uptime, have diminished substantially. Initially, organizations will prefer to use whatever HA failover clustering technology they currently use in their data centers to protect critical applications being migrated to the cloud. This clustering technology will also be adapted and optimized for enhanced operations in the cloud. At the same time, cloud service providers will continue to advance their ability to provide higher service levels, leading to the cloud ultimately becoming the preferred platform for all enterprise applications.

2. Dynamic Utilization Will Make HA and DR More Cost-effective for More Applications, Further Driving Migration to the Cloud

With its virtually unlimited resources spread around the globe, the cloud is the ideal platform for delivering high uptime. But provisioning standby resources that sit idle most of the time has been cost-prohibitive for many applications. The increasing sophistication of fluid cloud resources deployed across multiple zones and regions, all connected via high-quality internetworking, now enables standby resources to be allocated dynamically only when needed. This will dramatically lower the cost of provisioning HA and DR protections.

3. The Cloud Will Become a Preferred Platform for SAP Deployments

As the platforms offered by cloud service providers continue to mature, their ability to host SAP applications will become commercially viable and, therefore, strategically important. For CSPs, SAP hosting will be a way to secure long-term engagements with enterprise customers. For the enterprise, “SAP-as-a-Service” will be a way to take full advantage of the enormous economies of scale in the cloud without sacrificing performance or availability.

4. Cloud ‘Quick-start’ Templates Will Become the Standard for Complex Software and Service Deployments

Quick-start templates are wizard-based interfaces that employ automated scripts to dynamically provision, configure and orchestrate the resources and services needed to run specific applications. Among their key benefits are reduced training requirements, improved speed and accuracy, and the ability to minimize or even eliminate human error as a major source of problems. By making deployments more turnkey, quick-start templates will substantially decrease the time and effort it takes for DevOps staff to set up, test and roll out dependable configurations.
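The core idea behind such templates can be sketched in a few lines of code. The snippet below is a deliberately simplified, hypothetical illustration (the template schema, resource names and `provision` helper are invented for this example, not any vendor's real format): a declarative list of resources with dependencies drives an automated script that provisions everything in a correct order, removing the manual steps where human error creeps in.

```python
# Toy sketch of a quick-start template: resources are declared with their
# dependencies, and the script resolves a valid provisioning order.
TEMPLATE = {
    "parameters": {"region": "us-east-1", "node_count": 2},
    "resources": [
        {"name": "vnet",    "type": "network", "depends_on": []},
        {"name": "storage", "type": "disk",    "depends_on": ["vnet"]},
        {"name": "cluster", "type": "compute", "depends_on": ["vnet", "storage"]},
    ],
}

def provision(template):
    """Return the resource names in an order where every dependency
    is provisioned before the resources that need it."""
    done, order = set(), []
    pending = list(template["resources"])
    while pending:
        progressed = False
        for res in list(pending):
            if set(res["depends_on"]) <= done:
                # A real template engine would call the cloud API here.
                order.append(res["name"])
                done.add(res["name"])
                pending.remove(res)
                progressed = True
        if not progressed:
            raise ValueError("circular dependency in template")
    return order

print(provision(TEMPLATE))  # ['vnet', 'storage', 'cluster']
```

The wizard's role is simply to fill in the `parameters` section; the ordering and orchestration are fully automated, which is where the speed and accuracy benefits described above come from.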

5. Advanced Analytics And AI Will Be Everywhere and in Everything, Including Infrastructure Operations

Advanced analytics and artificial intelligence will simplify IT operations, improve infrastructure and application robustness, and lower overall costs. Along with this trend, AI and analytics will become embedded in HA and DR solutions, as well as in cloud service provider offerings, to improve service levels. With the ability to quickly, automatically and accurately understand issues and diagnose problems across complex configurations, the reliability, and thus the availability, of critical services delivered from the cloud will vastly improve.

Concluding his cloud predictions for 2019, according to Melnick, “2019 is set to be an exciting year for the cloud with new capabilities and enhancements further driving migration to the cloud.  With these new improvements, built atop an already-solid foundation, the cloud may well achieve that long-anticipated tipping point where it becomes the preferred platform for a majority of enterprise applications for a majority of organizations.”

SIOS Technology Corp.‘s president and CEO Jerry Melnick’s cloud predictions for 2019 are reproduced with permission from SIOS

Read SIOS Success stories to learn how SIOS could benefit your projects

Filed Under: News and Events Tagged With: cloud predictions for 2019, Jerry Melnick, SIOS

S2D For SQL Server Failover Cluster Instances 

September 8, 2018 by Jason Aw Leave a Comment

Storage Spaces Direct (S2D) For SQL Server Failover Cluster Instances

Windows Server 2016 Datacenter Edition introduced a new feature called Storage Spaces Direct (S2D). At a very high level, S2D allows you to pool together locally attached storage and present it to the cluster as a CSV for use in a Scale-Out File Server, which can then be accessed over SMB 3 and used to hold cluster data such as Hyper-V VHDX files. This can also be configured in a hyper-converged (HCI) fashion such that the application and data all run on the same set of servers. This is a grossly over-simplified description, but for details, you will want to look here.

Storage Spaces Direct Stack

Image taken from https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-overview

The main use case targeted is hyper-converged infrastructure for Hyper-V deployments. However, there are other use cases, including leveraging this SMB storage to store SQL Server data to be used in a SQL Server Failover Cluster Instance.

Why would anyone want to do that?

Well, for starters, you can now build a highly available 2-node SQL Server Failover Cluster Instance (FCI) with SQL Server Standard Edition, without the need for shared storage. Previously, if you wanted HA without a SAN, you were pretty much driven to buy SQL Server Enterprise Edition and make use of Always On Availability Groups, or to purchase SIOS DataKeeper and leverage the 3rd-party solution which lets you build SANless clusters with any version of Windows or SQL Server. SQL Server Enterprise Edition can really drive up the cost of your project, especially if you were only buying it for the Availability Groups feature.

In addition to the cost associated with Availability Groups, there are a number of other technical reasons why you might prefer a Failover Cluster over an AG. Application compatibility, instance vs. database level protection, large number of databases, DTC support, trained staff, etc., are just some of the technical reasons why you may want to stick with a Failover Cluster Instance.

SIOS DataKeeper Solution Vs S2D For SQL Server Failover Cluster Instances 

Microsoft lists both the SIOS DataKeeper solution and the S2D solution as two of the supported solutions for SQL Server FCI in their documentation here.


https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-high-availability-dr

When comparing the two solutions, you have to take into account that SIOS has been letting you build SANless clusters since 1999, while S2D for SQL Server Failover Cluster Instances is still in its infancy. Having said that, there are bound to be some areas where S2D has some catching up to do, or features that it will never support due to limitations of the technology.

Before Choosing Your SANless Cluster Solution

Have a look at the following table for an overview of some of the things you should consider before you choose your SANless cluster solution.

(Comparison table: SIOS DataKeeper vs. S2D for SQL Server Failover Cluster Instances)

If we go through this chart, we see that SIOS DataKeeper clearly has some significant advantages. For one, DataKeeper supports a much wider range of platforms, going all the way back to Windows Server 2008 R2 and SQL Server 2008 R2. The S2D solution only supports the latest releases of Windows and SQL Server 2016/2017. S2D also requires the Datacenter Edition of Windows, which can add significantly to the cost of your deployment. In addition, SIOS delivers the ONLY HA/DR solution for SQL Server on Linux that works both on-prem and in the cloud.

Analysis Of The Differences

But beyond the cost and platform limitations, I think the most glaring gap comes when we start to consider disaster recovery options for your SANless cluster. Allan Hirt, SQL Server cluster guru and fellow Microsoft Cloud and Datacenter Management MVP, recently posted about this S2D limitation. In his article Revisiting Storage Spaces Direct and SQL Server FCIs, Allan points out that due to the lack of support for stretching S2D clusters across sites or including an S2D-based cluster as a leg in an Always On Availability Group, the best option for DR in the S2D scenario is log shipping!

Don’t get me wrong. Log shipping has been around forever and will probably be around long after I’m gone. But that is taking a HUGE step backwards when we think about all the disaster recovery solutions we have become accustomed to, like multi-site clusters, Availability Groups, etc.

In contrast, the SIOS DataKeeper solution fully supports Always On Availability Groups. Better yet, it can allow you to stretch your FCI across sites to give you the best HA/DR solution you could hope to achieve in terms of RTO/RPO. In an Azure environment, DataKeeper also supports Azure Site Recovery (ASR), giving you even more options for disaster recovery.

The rest of this chart is pretty self-explanatory. It basically consists of a list of hardware, storage and networking requirements that must be met before you can deploy an S2D cluster. An exhaustive list of S2D requirements is maintained here: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-hardware-requirements

SIOS DataKeeper: What’s Good

The SIOS DataKeeper solution is much more lenient. It supports any locally attached storage, and as long as the hardware passes cluster validation, it is a supported cluster configuration. The block-level replication solution has been working great since 1 Gbps was considered a fast LAN and a T1 WAN connection was considered a luxury.
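To make the term concrete, block-level replication copies changed disk blocks rather than whole files. The following is a toy sketch of that idea only, not DataKeeper's actual implementation (a real product hooks the volume driver and tracks dirty blocks in the write path; the fixed block size and the in-memory "volumes" here are assumptions for illustration). It compares source and replica one fixed-size block at a time and sends across only the blocks that differ:

```python
import hashlib

BLOCK_SIZE = 4096  # replicate in fixed-size blocks

def changed_blocks(source: bytes, replica: bytes, block_size: int = BLOCK_SIZE):
    """Yield (offset, data) for each source block that differs from the replica."""
    for offset in range(0, len(source), block_size):
        src = source[offset:offset + block_size]
        dst = replica[offset:offset + block_size]
        # Digest comparison stands in for the dirty-block tracking a real
        # replication driver would perform at the volume level.
        if hashlib.sha256(src).digest() != hashlib.sha256(dst).digest():
            yield offset, src

def replicate(source: bytes, replica: bytearray) -> int:
    """Apply only the changed blocks to the replica; return bytes transmitted."""
    sent = 0
    for offset, data in changed_blocks(source, bytes(replica)):
        replica[offset:offset + len(data)] = data
        sent += len(data)
    return sent

# Example: a 64 KiB "volume" where 10 bytes inside one block changed.
source = bytearray(b"\x00" * 65536)
replica = bytearray(source)
source[8192:8192 + 10] = b"new-data!!"
sent = replicate(bytes(source), replica)
print(sent)  # 4096: only the one dirty block crosses the wire, not 64 KiB
```

The payoff is the same one the article describes: even over a slow WAN link, keeping a remote replica in sync costs bandwidth proportional to what changed, not to the size of the volume.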

SANless clustering is particularly interesting for cloud deployments. The cloud does not offer traditional shared storage options for clusters, so users in the middle of a “lift and shift” to the cloud who want to take their clusters with them must look at alternate storage solutions. For cloud deployments, SIOS is certified for Azure, AWS and Google and is available in the relevant cloud marketplaces. While there doesn’t appear to be anything blocking deployment of S2D-based clusters in Azure or Google, there is a conspicuous lack of documentation or supportability statements from Microsoft for those platforms.

Make A Safe Choice

SIOS DataKeeper has been doing this since 1999. SIOS has heard all the feature requests, uncovered all the bugs, and has a rock solid solution for SANless clusters that is time tested and proven. While Microsoft S2D is a promising technology, as a 1st generation product I would wait until the dust settles and some of the feature gap closes before I would consider it for my business critical applications.

To learn more about S2D for SQL Server Failover Cluster Instances, see SIOS DataKeeper.

Reproduced with permission from Clusteringformeremortals.com

Filed Under: Clustering Simplified, Datakeeper Tagged With: DataKeeper, s2d for sql server failover cluster instances, SIOS, SQL Server Failover Cluster Instance

Deploy SQL Server Alwayson Failover Clusters In Amazon EC2 With AWS Cloud

February 12, 2018 by Jason Aw Leave a Comment

Webinar Invite!

DEPLOYING YOUR BUSINESS CRITICAL SQL SERVER APPS ON AMAZON EC2

Amazon Web Services (AWS) and SIOS Technology Corp, an AWS Partner Network (APN) Technology Partner, invite you to attend this live webinar to learn how to optimize mission critical SQL Server deployments on Amazon EC2.

Learn how to take advantage of the cost benefits and flexibility of Amazon EC2 while maintaining protection with native Microsoft Windows Server Failover Clustering – all without shared storage.

WHO SHOULD ATTEND:

Solution Architects, Developers, Development Leads and other SQL Professionals

PRESENTERS:

Miles Ward, Solutions Architect, Amazon Web Services

Tony Tomarchio, Director of Field Engineering, SIOS Technology Corp

DATE / TIME:

Wednesday, June 5, 2013 – 10AM PT / 1PM ET

CLICK HERE TO REGISTER

http://bit.ly/10VLtDu

Reproduced with permission from https://clusteringformeremortals.com/2013/05/23/webinar-invite-how-to-deploy-sql-server-alwayson-failover-clusters-in-amazon-ec2-with-awscloud-amazonaws/

Filed Under: Clustering Simplified Tagged With: Amazon EC2, Microsoft Windows Server Failover Clustering, Miles Ward, SIOS, SQL Server, Tony Tomarchio, Webinar

Hurricane Sandy Disaster Recovery For Business

February 4, 2018 by Jason Aw Leave a Comment

This disaster taught us the importance of disaster recovery. Have you prepared well for it?

My thoughts and prayers go out to those affected by this massive storm. Although I live in NJ, my neighborhood remained relatively unscathed other than some downed trees and power lines. The pictures coming in from the coastal communities up and down the eastern seaboard show that many people did not fare as well. I’m hopeful that most of the damage is property that can be rebuilt, but I am sorry to hear that some people lost their lives and I can only imagine the pain of their friends and family. I am truly sorry for their loss.

Need help with Disaster Recovery?

As an employee of a company that specializes in disaster recovery software, I am also privy to many stories of companies that lost data that cannot be replaced. Many of these companies never recover from such catastrophes, but those that do are usually the ones who immediately put in place a plan that includes some sort of real-time data protection, replicating their critical data offsite or to a cloud repository, so they are never caught in such a predicament again. If that is your story, or even if you were lucky enough to avoid disaster this time but want to prepare ahead, please contact me immediately so I can help you assess your risks and recommend data protection and DR solutions to help mitigate them.

Reproduced with permission from https://clusteringformeremortals.com/2012/10/30/hurricane-sandy-disaster-recovery-for-business/

Filed Under: Clustering Simplified Tagged With: disaster recovery, SIOS, SIOS DataKeeper Cluster Edition

