Multi-Cloud Disaster Recovery

October 30, 2021 by Jason Aw

If this topic sounds confusing, we get it. With our experts’ advice, we hope to temper your apprehensions – while also raising some important considerations for your organization before or after going multi-cloud. Planning for disaster recovery is a common point of confusion for companies employing cloud computing, especially when it involves multiple cloud providers.

It’s taxing enough to ensure data protection and disaster recovery (DR) when all data is located on-premises. But today many companies have data on-premises as well as with multiple cloud providers, a hybrid strategy that may make good business sense but can create challenges for those tasked with data protection. Before we delve into the details, let’s define the key terms.

What is multi-cloud?

Multi-cloud is the utilization of two or more cloud providers to serve an organization’s IT services and infrastructure. A multi-cloud approach typically consists of a combination of major public cloud providers, namely Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

Organizations choose the best services from each cloud provider based on costs, technical requirements, geographic availability, and other factors. This may mean that a company uses Google Cloud for development/test, while using AWS for disaster recovery, and Microsoft Azure to process business analytics data.

Multi-cloud differs from hybrid cloud, which refers to computing environments that mix on-premises infrastructure, private cloud services, and a public cloud.

Who uses multiple clouds?

  • Regulated industries – Many organizations run different business operations in different cloud environments. This may be a deliberate strategy of optimizing their IT environments based on the strengths of individual cloud providers or simply the product of a decentralized IT organization.
  • Media and Entertainment – Today’s media and entertainment landscape is increasingly composed of relatively small and specialized studios that meet the swelling content-production needs of the largest players, like Netflix and Hulu. Multi-cloud solutions enable these teams to work together on the same projects, access their preferred production tools from various public clouds, and streamline approvals without the delays associated with moving large media files from one site to another.
  • Transportation and Autonomous Driving – Connected car and autonomous driving projects generate immense amounts of data from a variety of sensors. Car manufacturers, public transportation agencies, and rideshare companies are among those motivated to take advantage of multi-cloud innovation, gaining access to data across multiple clouds without the risks of significant egress charges and slow transfers, while maintaining the freedom to leverage the optimal public cloud services for each project.
  • Energy Sector – Multi-cloud adoption can help lower the significant costs associated with finding and drilling for resources. Engineers and data scientists can use machine learning (ML) analytics to identify places that merit more resources to prospect for oil, to gauge environmental risks of new projects, and to improve safety.

Multi-cloud disaster recovery pain points

  • Not reading before you sign. Customers may face issues if they fail to read the fine print in their cloud agreements. The cloud provider is responsible for its computer infrastructure, but customers are responsible for protecting their applications and data. There are many reasons for application downtime that are not covered under cloud SLAs. Business critical workloads need high availability and disaster recovery protection software as well.
  • Developing a centralized protection policy. A centralized protection policy must be created to cover all data, no matter where it lives. Each cloud provider has its unique way of accessing, creating, moving and storing data, with different storage tiers. It can be cumbersome to create a disaster recovery plan that covers data across different clouds.
  • Reporting. This is important for ensuring protection of data in accordance with the service-level agreements that govern it. Given how quickly users can spin up cloud resources, it can be challenging to make sure you’re protecting each resource appropriately and identifying all data that needs to be incorporated into your DR plan.
  • Test your DR plan. Customers must fully vet and test their DR strategy. A multi-cloud strategy compounds the need for testing. Some providers may charge customers for testing, which reinforces the need to read the fine print of the contract.
  • Resource skill sets. Finding an expert in one cloud can be challenging; with multi-cloud you will either need to find expertise in each cloud, or the rare individual with expertise across multiple clouds.

Overcoming the multi-cloud DR challenge

Meeting these challenges requires companies to develop a data protection and recovery strategy that covers numerous issues. Try asking yourself the following strategic questions:

  • Have you defined the level of criticality for all applications and data? How much money will a few minutes of downtime for critical applications cost your organization in end-user productivity, customer satisfaction, and IT labor?
  • Will data protection and recovery be handled by IT or application owners and creators in a self-service model?
  • Did you plan for data optimization, using a variety of cloud- and premises-based options?
  • How do you plan to recover data? Restoring data to cloud-based virtual machines or using a backup image as the source of recovery?

Obtain the right multi-cloud DR solution

The biggest key to success in data protection and recovery in a multi-cloud scenario is ensuring you have visibility into all of your data, no matter how it’s stored. The right tools enable you to define which data and applications should be recovered in a disaster scenario and how to do it – whether from a backup image or by moving data to a newly created VM in the cloud, for example.

The tool should help you orchestrate the recovery scenario and, importantly, test it. If the tool is well integrated with your data backup tool, it can also allow you to use backups as a source of recovery data, even if the data is stored in different locations – like multiple clouds. Our most recent SIOS webinar discusses this same point; watch it here if you’re interested. SIOS DataKeeper lets you run your business-critical applications in a flexible, scalable cloud environment, such as Amazon Web Services (AWS), Azure, and Google Cloud Platform, without sacrificing performance, high availability or disaster protection. SIOS DataKeeper is available in the AWS Marketplace and is the only Azure-certified high availability software for WSFC offered in the Azure Marketplace.

Reproduced from SIOS

Filed Under: Clustering Simplified Tagged With: Amazon AWS, Azure, Cloud, disaster recovery, GCP, Google Cloud Platform, multi-cloud, public cloud

12 Questions to Uncomplicate Your Cloud Migration

September 10, 2021 by Jason Aw

Cloud migration best practices 

“The cloud is becoming more complicated.” That was the first statement in an hour-long webinar detailing the changes and opportunities that have come with the boom in cloud computing and cloud migration. The presenter continued with an outline of cloud-related issues that traditional IT now faces in the journey to AWS, Azure, GCP, or other providers.

There were nine areas that surfaced as complications in the traditional transition to cloud:

  • Definitions
  • Pricing
  • Networking
  • Security
  • Users, Roles, and Profiles
  • Applications and Licensing
  • Services and Support
  • Availability
  • Backups

As VP of Customer Experience for SIOS Technology Corp, I’ve seen how the following areas can impact a transition to the cloud. To mitigate these complications, consumers turn to managed service providers, cloud solution architects, contractors and consultants, and a bevy of related services, guides, blog posts, and articles. Yet even after turning to outside or outsourced resources, the complications of the cloud are not entirely removed. Instead, companies and the teams they have employed to assist or to transition them to the cloud still encounter roadblocks, speed bumps, hiccups, and setbacks.

Most often these complications and slowdowns in migrating to the cloud come from twelve unanswered questions:

  1. What are our goals for moving to the cloud?
  2. What is your current on-premises architecture? Do you have a document, list, flow chart, or cookbook?
  3. Are all of your application, database, availability and related vendors supported on your target cloud provider platform?
  4. What are your current on-premises risks and limitations?  What applications are unprotected, what are the most common issues faced on-premises?
  5. Who is responsible for the cloud architecture and design?  How will this architecture and design account for your current definitions and the definitions of the cloud provider?
  6. Who are the key stakeholders, and what are their milestones, business drivers, and deadlines for the business project?
  7. Have you shared your project plan and milestones with your vendors?
  8. What are the current processes, governance, and business requirements?
  9. What is the migration budget and does it include staff augmentation, training, and services? What are your estimates for ongoing maintenance, licensing, and operating expenses?
  10. What are your team’s existing skills and responsibilities?
  11. Who will be responsible for updating governance, processes, new cloud models, and the various traditional roles and responsibilities?
  12. What are the applications, services, or functions that will move from IaaS to SaaS models?

Know Your Goals for the Cloud

So, how will answering these twelve questions improve your cloud migration? As you can see from the questions, understanding your goals for the cloud is the first, and most important, step. It is nearly universally accepted that “a cloud service provider such as AWS, Azure, or Google can provide the servers, storage, and communications resources that a particular application will require,” but for many customers, this only eliminates “the need for computer hardware and personnel to manage that hardware.” Because of this, customers often focus on equipment or data center consolidation or reduction, without considering the additional cloud opportunities and gaps that they still need to address. For example, the cloud does eliminate management of hardware, but it “does not eliminate all the needs that an application and its dependencies will have for monitoring and recovery,” so if your goal was to get all your availability from the cloud, you may not reach that goal, or it may require more than just moving on-premises systems to an IaaS model. Knowing your goals will go a long way in helping you map out your cloud journey.

Know Your Current On-Premises Architecture

A second critical category of questions needed for a proper migration to the cloud (or any new platform) is understanding the current on-premises architecture. This step not only helps with the identification of your critical applications that need availability, but also their underlying dependencies, and any changes required for those applications, databases, and backup solutions based on the storage, networking, and compute changes of the cloud. Answering this question is also a key step in assessing the readiness of your applications and solutions for the cloud and quantifying your current risks.

A third area that will greatly benefit from working through these questions occurs when you discuss and quantify current limitations. Frequently, we see this phase of discovery opening the door to limitations of current solutions that do not exist in the cloud. For example, recently our services team worked with a customer impacted by performance issues in their SQL database cluster. A SIOS expert assisting with their migration inquired about the solution, architecture, and VM sizing decisions. After a few moments, a larger, more appropriately sized instance was deployed, correcting limitations that the customer had accepted due to their on-premises restrictions on compute, memory, and storage. Similarly, we have worked with customers who were storage sensitive. They would run applications with smaller disks and a frequent resizing policy, due to disk capacity constraints. While storage costs should be considered, running with minimal margins can become a limitation of the past.

Understand Business and Governance Changes

The final group of questions helps your team understand schedules, business impacts, deadlines, and governance changes that need to be updated or replaced because they may no longer apply in the cloud. Migrating to the cloud can be a smooth transition and journey. However, failing to assess where you are on the journey and when you need to complete the journey can make it into a nightmare. Understanding timing is important and can be keenly aided by considering stakeholders, application vendors, business milestones, and business seasons. Selfishly, SIOS Technology Corp. wants customers to understand their milestones because, as a service provider, it minimizes the surprises. But we also encourage customers to answer these questions because they often uncover misalignment between departments and stakeholders: the DBAs believe that the cutover will happen on the last weekend of the month, but Finance is intent on closing the books over that same weekend; the IT team believes that cutover can happen on Monday, but the applications team is unavailable until Wednesday; and, perhaps most importantly, the legal team hasn’t combed through the list of new NDAs, agreements, licensing, and governance changes necessary to pull it all together.

As customers work through the questions, with safety and empathy, what often emerges is a puzzle of pieces, ownership, processes, and decision makers that needs to be put back together using the cloud provider box top and honest conversations on budget, staffing, training, and services.  The end result may not be a flawless migration, but it will definitely be a successful migration.

For help with your cloud migration strategy and high availability implementation, contact SIOS Technology Corp.

– Cassius Rhue, VP, Customer Experience

Learn more about common cloud migration challenges.

Read about some misconceptions about availability in the cloud.

Reproduced from SIOS

Filed Under: Clustering Simplified Tagged With: Amazon AWS, Amazon EC2, Azure, Cloud, High Availability, migration

Cloud Migration Best Practices for High Availability

March 25, 2021 by Jason Aw

In 2020 we saw more enterprises migrating more of their mission-critical applications, ERPs, and databases to the cloud. However, not all of these migrations were smooth. I have personally witnessed cloud migration projects dramatically slowed and even stopped due to a lack of planning for application availability, the complexity of retrofitting ‘DIY high availability’, misunderstanding of what a ‘lift and shift’ entails, and unexpected costs.

There are a number of best practices, cloud checklists, and other ways for organizations to prepare for the cloud. The following best practices should be factored into every migration strategy for high availability clustering for those who have either hit pause on their 2020 cloud migration, or plan to forge ahead in 2021.

Cloud Migration Best Practices

Gather the requirements

Many organizations moving to the cloud assume that the cloud is simply their on-premises architecture relocated to the cloud. This misunderstanding often leads to stalls and delays when on-premises assumptions about networking, storage, disk speeds, and system sizes collide with the cloud reality. A smoother transition to cloud begins by gathering the real requirements for the infrastructure, governance and compliance, security, sizing, and related controls and resources.

Design and Document

In the design phase, the architecture of the on-premises environment is mapped to the chosen cloud environment for maximum availability and thoroughly documented. As the architecture takes shape, you identify the strategy for IPs, load balancers, IOPS, and data availability. Teams need to look at how availability native to the cloud should be augmented with a robust application and infrastructure availability solution capable of automating the complexities of the cloud. At SIOS, our experts in AWS and Azure clustering and availability work with customers to swap on-premises NFS for AWS EFS, Azure ANF, or a standalone NFS cluster tier. Additionally, a key part of a successful implementation in this phase is documenting everything. Documentation is an often-neglected but essential element of migration success.

Plan for High Availability

Achieving high availability in the cloud requires understanding the requirements, creating the design, and documenting a plan that lays out a strategy for achieving those requirements. A basic plan should include staffing, staff training, deploying a QA system, testing, pre-production steps, deployment, post-deployment validation, and ongoing iterations. The best outcomes for cloud migration arise from a deliberate, planned process, not an ad hoc, break-fix approach.

Staff

How well is your team staffed for the cloud migration? Traditional help desk and client/server IT teams may not be enough for a cloud migration. If your team is new to the cloud, it may be time to consider adding more resources or professional services-based solutions. Migrating to the cloud can be taxing, tedious, and difficult without the proper insight, information, or training. Does your staff need training related to the cloud environment? And while you are looking into training and professional services to assist your IT team, check with your vendor for training related to the availability solution. Many vendors provide flexible training for the HA solution, and cloud training can be obtained from the cloud vendors or popular sites such as Udemy.

Deploy QA

The QA deployment phase is the phase in which the team executes the plans for deploying the actual systems into the cloud. Successful deployment teams validate their plans and strategy, understand the data migration process, uncover any missing dependencies, and prepare for the next step in the process, especially testing. When this step is skipped or skimped on, the once-promising migrations often stall or fail. When you reach the QA system deployment phase, your team will do the heavy lifting of the initial migration and configuration of the applications, databases, and critical data in the cloud.

Test Your High Availability

Testing in your QA environment is a critical step. These tests are not a waste of time; they are a time saver. Deploying environments in the cloud is often easier than deploying on-premises. Your QA environment can be scripted with tools like Ansible, deployed quickly as templates from the cloud marketplace or a cloned image, or built from CloudFormation templates. Once deployed, disaster scenarios can be ironed out and optimized before a disaster, not during one. Test scenarios can be leveraged to identify overprovisioning, under-provisioning, or bottlenecks with networking or disk speeds. A full test scenario can also be used as part of an onboarding strategy for new staff. Additionally, testing should be performed on snapshots and backups as well.

Deploy Production

When the testing phase completes and your team has validated the test results, the next phase is to move from QA to pre-production, and from pre-production to go-live. This is the last phase of the heavy lifting: final user acceptance testing, a final cutover and update of the production data, and then the cutover of users.

Review, Revise, and Repeat

A successful migration does not end once you reach the go-live phase, but continues through the lifecycle phases. In the post go-live phase of the cloud migration strategy, your team continues to review, revise, and repeat the steps from ‘Gather’ through ‘Deploy Production’. In fact, your team should repeat this process again and again, based on requirements specific to releases, application updates, security updates, related system maintenance, operating system versions, disaster recovery planning, as well as the requirements from your high availability vendor’s own best practices. The cloud platform is always evolving and adding new features, functionality, and updates that can enhance your existing HA solution and architecture. Reviewing, revising, and repeating the process will be a necessary step in successful onboarding.

In 2021 we’ll see more enterprises migrating more mission-critical applications, ERPs, and databases to the cloud. A major factor in their success will be utilizing cloud migration best practices to avoid delays and failures throughout the process. Understanding your business requirements and needs, documenting the design and plan, deploying in a QA environment with purpose-built clustering solutions, and executing extensive testing before go-live will be essential. Contact SIOS Technology to understand how the SIOS Protection Suite can be included in your thoughtful cloud migration best practices.

-Cassius Rhue, VP, Customer Experience

Reproduced from SIOS

 

Filed Under: Clustering Simplified Tagged With: Amazon AWS, Amazon EC2, Azure, Cloud, High Availability

Six Reasons Your Cloud Migration Has Stalled

December 22, 2020 by Jason Aw

More and more customers are seeking to take advantage of the flexibility, scalability and performance of the cloud. As the number of applications, solutions, customers, and partners making the shift increases, be sure that your migration doesn’t stall.

Avoid the Following Six Reasons Cloud Migrations Stall

1. Incomplete cloud migration project plans

Project planning is widely thought to be a key contributor to project success. Planning plays an essential role in guiding stakeholders, diverse implementation teams, and partners through the project phases. It helps identify desired goals, align resources and teams to those goals, reduce risks, avoid missed deadlines, and ultimately deliver a highly available solution in the cloud. Incomplete plans and incomplete planning are often a big cause of stalled projects: at the eleventh hour a key dependency is identified, or during an unexpected server reboot an application monitoring and HA hole is exposed (see below). Be sure that your cloud migration has a plan, and work the plan.

2. Over-engineering on-premises

“This is how we did it on our on-premises nodes” was the phrase that started a recent customer conversation. The customer engaged Edmond Melkomian, Project Manager for SIOS Professional Services, when their attempts to migrate to the cloud stalled. During a discovery session, Edmond was able to uncover a number of over-engineered items related to on-premises versus cloud architecture. For some projects, reproducing what was done on-premises can be a recipe for bloat, complexity, and delays. Analyze your architecture and migration plans and ruthlessly eliminate over-engineered components and designs, especially with networking and storage.

3. Under-provisioning

Controlling cost and preventing sprawl are critical aspects of cloud migrations. However, some customers, anxious about per-hour charges and the associated costs for disks and bandwidth, fall into the trap of under-provisioning. In this trap, resources are improperly sized, be that disks with the wrong speed characteristics, compute resources with the wrong CPU or memory footprint, or clusters with the wrong number of nodes. In such under-provisioned cases, issues arise when User Acceptance Testing (UAT) begins and expected workloads create a logjam on undersized resources, or a cost-optimized target node is unable to properly handle resources in a failover scenario. While resizing virtual machines in the cloud is a simple process, these sizing issues often create delays while architects and Chief Financial Officers try to understand the impact of re-provisioning resources.

4. Internal IT processes

Every great enterprise company has a set of internal processes, and chances are your team and company are no exception. IT processes are usually key among those that can have a large impact on the success of your cloud migration strategy. In the past, many companies had long requisition and acquisition processes, including bids, sizing guides, order approvals, server prep and configuration, and final deployment. The cloud has dramatically altered the way compute, storage, and network resources, among others, are acquired and deployed. However, if your processes haven’t kept up with the speed of the cloud, your migration may hit a snag when plans change.

5. Poor High Availability planning

Another reason that cloud migrations can stall involves high availability planning. High availability requires more than a bundle of tools or enterprise licenses.  HA requires a careful, thorough and thoughtful system design.  When deploying an HA solution your plan will need to consider capacity, redundancy, and the requirements for recovery and correction. With a plan, requirements are properly identified, solutions proposed, risks thought through, and dependencies for deployment and validation managed. Without a plan, the project and deployment are vulnerable to risks, single point of failure issues, poor fit, and missing layers and levels of application protection or recovery strategies.  Often when there has been a lack of HA planning, projects stall while the requirements are sorted out.

6. Incomplete or invalid testing

Ron, a partner migrating his end customer to the cloud, planned to go live over an upcoming three-day weekend. The last decision point for ‘go/no-go’ was a batch of user acceptance testing on the staging servers. The first test failed. To make up for time lost to other migration snags, Ron and team had skipped a number of test cases related to integrating the final collection of security and backup software on the latest OS with supporting patches. The simulated load, the first on the newly minted servers, tripped a series of issues within Ron’s architecture, including a kernel bug, a CPU and memory provisioning issue, and storage layout and capacity issues. The project was delayed more than four weeks to restore customer confidence, complete proper testing and validation, resize the architecture, and apply software and OS fixes.

The promises of the cloud are enticing, and a well planned cloud migration will position you and your team to take advantage of these benefits. Whether you are beginning or in the middle of a cloud migration, we hope this article helps you be more aware of common pitfalls so you can hopefully avoid them.

– Cassius Rhue, Vice President, Customer Experience

Reproduced from SIOS

Filed Under: Clustering Simplified Tagged With: Amazon AWS, Amazon EC2, Azure, Cloud

Step-By-Step: ISCSI Target Server Cluster In Azure

June 13, 2020 by Jason Aw

I recently helped someone build an iSCSI target server cluster in Azure and realized that I never wrote a step-by-step guide for that particular configuration. So to remedy that, here are the step-by-step instructions in case you need to do this yourself.

Prerequisites

I’m going to assume you are fairly familiar with Azure and Windows Server, so I’m going to spare you some of the details. Let’s assume you have at least done the following as prerequisites:

  • Provision two servers (SQL1, SQL2), each in a different Availability Zone (an Availability Set is also possible, but Availability Zones have a better SLA)
  • Assign static IP addresses to them through the Azure portal
  • Join the servers to an existing domain
  • Enable the Failover Clustering feature and the iSCSI Target Server feature on both nodes (see the sketch after this list)
  • Add three Azure Premium Disks to each node.
    NOTE: this is optional, one disk is the minimum required. For increased IOPS we are going to stripe three Premium Azure Disks together in a storage pool and create a simple (RAID 0) virtual disk
  • SIOS DataKeeper is going to be used to provide the replicated storage used in the cluster. If you need DataKeeper you can request a trial here.
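
If you are scripting the prep work, the feature-install bullet above can be handled from an elevated PowerShell prompt. A minimal sketch (run on both SQL1 and SQL2; these are the standard Windows Server feature names):

# Enable Failover Clustering and the iSCSI Target Server on this node
Install-WindowsFeature -Name Failover-Clustering, FS-iSCSITarget-Server -IncludeManagementTools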

Create Local Storage Pool

Once again, this step is completely optional, but for increased IOPS we are going to stripe together three Azure Premium Disks into a single Storage Pool. You might be tempted to use Dynamic Disks and a spanned volume instead, but don’t do that! If you use dynamic disks you will find that there is some general incompatibility that will prevent you from creating iSCSI targets later.

Don’t worry, creating a local Storage Pool is pretty straightforward if you are aware of the pitfalls described below. The official documentation can be found here.

Pitfall #1 – although the documentation says the minimum size for a volume to be used in a storage pool is 4 GB, I found that the P1 Premium Disk (4GB) was NOT recognized. So in my lab I used 16GB P3 Premium Disks.

Pitfall #2 – you HAVE to have at least three disks to create a Storage Pool.

Pitfall #3 – create your Storage Pool before you create your cluster. If you try to do it after you create your cluster you are going to wind up with a big mess as Microsoft tries to create a clustered storage pool for you. We are NOT going to create a clustered storage pool, so avoid that mess by creating your Storage Pool before you create the cluster. If you have to add a Storage Pool after the cluster is created you will first have to evict the node from the cluster, then create the Storage Pool.
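
If you would rather script the pool than click through Server Manager, the sketch below shows one way to do it (the pool and disk names and the X: drive letter are examples; run on each node before creating the cluster):

# Gather the three poolable Premium disks from the Primordial pool
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool, then a Simple (RAID 0), fixed-provisioned virtual disk across all three
New-StoragePool -FriendlyName "iSCSIPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "iSCSIPool" -FriendlyName "iSCSIDisk" -ResiliencySettingName Simple -ProvisioningType Fixed -UseMaximumSize

# Initialize, partition, and format the new disk as the X: drive
Get-VirtualDisk -FriendlyName "iSCSIDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter X -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false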

Based on the documentation found here, below are the screenshots that represent what you should see when you build your local Storage Pool on each of the two cluster nodes. Complete these steps on both servers BEFORE you build the cluster.

You should see the Primordial pool on both servers.

Right-click and choose New Storage Pool…

Choose Create a virtual disk when this wizard closes

Notice here you could create storage tiers if you decided to use a combination of Standard, Premium and Ultra SSD

For best performance use Simple storage layout (RAID 0). Don’t be concerned about reliability since Azure Managed Disks have triple redundancy on the backend. Simple is required for optimal performance.

For performance purposes use Fixed provisioning. You are already paying for the full Premium disk anyway, so there is no reason not to use it all.

Now you will have a 45 GB X drive on your first node. Repeat this entire process for the second node.

Create Your Cluster

Now that each server has its own 45 GB X drive, we are going to create the basic cluster. Creating a cluster in Azure is best done via PowerShell so that we can specify a static IP address. If you do it through the GUI you will soon realize that Azure assigns your cluster IP a duplicate IP address that you will have to clean up, so don’t do that!

Here is example PowerShell code to create a new cluster.

 New-Cluster -Name mycluster -NoStorage -StaticAddress 10.0.0.100 -Node sql1, sql2

The output will look something like this.

PS C:\Users\dave.DATAKEEPER> New-Cluster -Name mycluster -NoStorage 
-StaticAddress 10.0.0.100 -Node sql1, sql2
WARNING: There were issues while creating the clustered role that 
may prevent it from starting. 
For more information view the report file below.
WARNING: Report file location: C:\windows\cluster\Reports\Create Cluster 
Wizard mycluster on 2020.05.20 
At 16.54.45.htm

Name     
----     
mycluster

The warning in the report will tell you that there is no witness. Because there is no shared storage in this cluster you will have to create either a Cloud Witness or a File Share Witness. I’m not going to walk you through that process as it is pretty well documented at those links.

Don’t put this off, go ahead and create the witness now before you move to the next step!
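
If you opt for a Cloud Witness, it is a one-liner from either node; this sketch assumes you already have an Azure storage account (the account name and key are placeholders):

# Point the cluster quorum at an Azure storage account acting as a Cloud Witness
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"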

You now should have a basic 2-node cluster that looks something like this.

Configure A Load Balancer For The Cluster Core IP Address

Clusters in Azure are unique in that the Azure virtual network does not support gratuitous ARP. Don’t worry if you don’t know what that means, all you have to really know is that cluster IP addresses can’t be reached directly. Instead, you have to use an Azure Load Balancer, which redirects the client connection to the active cluster node.

There are two steps to getting a load balancer configured for a cluster in Azure. The first step is to create the load balancer. The second step is to update the cluster IP address so that it listens for the load balancer’s health probe and uses a 255.255.255.255 subnet mask, which enables you to avoid IP address conflicts with the ILB.

We will first create a load balancer for the cluster core IP address. Later we will edit the load balancer to also address the iSCSI cluster resource IP address that we will create at the end of this document.

Notice that the static IP address we are using is the same address that we used to create the core cluster IP resource.

Once the load balancer is created, you will edit it as shown below.

Add the two cluster nodes to the backend pool

Add a health probe. In this example we use 59999 as the port. Remember that port, we will need it in the next step.

Create a new rule to redirect all HA ports. Make sure Floating IP is enabled.
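
For reference, step 1 can also be scripted. The sketch below uses the Az PowerShell module rather than the portal; the resource group, VNet, and resource names are assumptions, while the 10.0.0.100 frontend IP and 59999 probe port match the values above.

# Sketch: internal Standard load balancer with a TCP 59999 probe and an HA-ports rule
$vnet   = Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myRG"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "default" -VirtualNetwork $vnet
$fe     = New-AzLoadBalancerFrontendIpConfig -Name "fe-cluster" -PrivateIpAddress "10.0.0.100" -Subnet $subnet
$be     = New-AzLoadBalancerBackendAddressPoolConfig -Name "be-cluster"
$probe  = New-AzLoadBalancerProbeConfig -Name "probe-cluster" -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
$rule   = New-AzLoadBalancerRuleConfig -Name "rule-cluster" -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -Protocol All -FrontendPort 0 -BackendPort 0 -EnableFloatingIP
New-AzLoadBalancer -Name "lb-cluster" -ResourceGroupName "myRG" -Location $vnet.Location -Sku Standard -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -LoadBalancingRule $rule
# The NICs of both cluster nodes still need to be added to the backend pool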

Step 2: Edit The Cluster Core IP Address To Work With The Load Balancer

As I mentioned earlier, there are two steps to getting the load balancer configured to work properly. Now that we have a load balancer, the second step is to run a PowerShell script on one of the cluster nodes. The following is an example script.

$ClusterNetworkName = "Cluster Network 1"
$IPResourceName = "Cluster IP Address"
$ILBIP = "10.0.0.100"
Import-Module FailoverClusters
# ProbePort must match the health probe port defined in the load balancer (59999 here)
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple `
    @{Address=$ILBIP;ProbePort=59999;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}

The important thing about the script above, besides getting all the variables correct for your environment, is making sure the ProbePort is set to the same port you defined in your load balancer settings for this particular IP address (59999 in this example). You will see later that we will create a second health probe for the iSCSI cluster IP resource that will use a different port. The other important thing is making sure you leave the subnet mask set at 255.255.255.255. It may look wrong, but that is what it needs to be set to.

After you run it the output should look like this.

 PS C:\Users\dave.DATAKEEPER> $ClusterNetworkName = “Cluster Network 1” 
$IPResourceName = “Cluster IP Address” 
$ILBIP = “10.0.0.100” 
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter 
-Multiple @{Address=$ILBIP;ProbePort=59999;SubnetMask="255.255.255.255"
;Network=$ClusterNetworkName;EnableDhcp=0}
WARNING: The properties were stored, but not all changes will take effect 
until Cluster IP Address is taken offline and then online again.

You will need to take the core cluster IP resource offline and bring it back online again before it will function properly with the load balancer.
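
Cycling the resource can be done from the same PowerShell session; a quick sketch using the resource names from the script above:

# Take the core cluster IP offline and back online so the probe settings take effect
Stop-ClusterResource "Cluster IP Address"
Start-ClusterResource "Cluster IP Address"
Start-ClusterResource "Cluster Name"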

Assuming you did everything right in creating your load balancer, your Server Manager on both servers should list your cluster as Online as shown below.

Check Server Manager on both cluster nodes. Your cluster should show as “Online” under Manageability.

Install DataKeeper

I won’t go through all the steps here, but basically at this point you are ready to install SIOS DataKeeper on both cluster nodes. It’s a pretty simple setup: just run the installer and choose all the defaults. If you run into any problems with DataKeeper it is usually one of two things. The first issue is the service account. You need to make sure the account you are using to run the DataKeeper service is in the Local Administrators Group on each node.

The second issue is in regards to firewalls. Although the DataKeeper install will update the local Windows Firewall automatically, if your network is locked down you will need to make sure the cluster nodes can communicate with each other across the required DataKeeper ports. In addition, you need to make sure the ILB health probe can reach your servers.
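
If your network is locked down, the sketch below opens the two probe ports used in this walkthrough with Windows Firewall (59999 and 59998; check the SIOS documentation for the list of ports DataKeeper itself requires):

# Allow the ILB health probes to reach this node; run on both cluster nodes
New-NetFirewallRule -DisplayName "Allow ILB Probe Ports" -Direction Inbound -Protocol TCP -LocalPort 59999, 59998 -Action Allow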

Once DataKeeper is installed you are ready to create your first DataKeeper job. Complete the following steps for each volume you want to replicate using the DataKeeper interface.

Use the DataKeeper interface to connect to both servers

Click on create new job and give it a name

Click Yes to register the DataKeeper volume in the cluster

Once the volume is registered it will appear in Available Storage in Failover Cluster Manager

Create The ISCSI Target Server Cluster

In this next step we will create the iSCSI target server role in our cluster. In an ideal world I would have a PowerShell script that does all this for you, but for the sake of time, for now I’m just going to show you how to do it through the GUI. If you happen to write the PowerShell code please feel free to share it with the rest of us!
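
As a starting point, the iSCSITarget module does ship an Add-ClusteriSCSITargetServerRole cmdlet that creates the clustered role in one step. The one-liner below is an untested sketch rather than a substitute for the GUI walkthrough; the role name, storage resource name, and static address are examples:

# Sketch: create the clustered iSCSI Target Server role against the replicated DataKeeper volume
Add-ClusteriSCSITargetServerRole -Name "iscsi-target" -Storage "DataKeeper Volume X" -StaticAddress "10.0.0.110"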

There is one problem with the GUI method. You will wind up with a duplicate IP address when the IP Resource is created, which will cause your cluster resource to fail until we fix it. I’ll walk through that process as well.

Go to the Properties of the failed IP Address resource and choose Static IP and select an IP address that is not in use on your network. Remember this address, we will use it in our next step when we update the load balancer.

You should now be able to bring the iSCSI cluster resource online.

Update Load Balancer For ISCSI Target Server Cluster Resource

As I mentioned earlier, clients can’t connect directly to the cluster IP address (10.0.0.110) we just created for the iSCSI target server cluster. We will have to update the load balancer we created earlier, as shown below.

Start by adding a new frontend IP address that uses the same IP address that the iSCSI Target cluster IP resource uses.

Add a second health probe on a different port. Remember this port number; we will use it again in the PowerShell script we run next.

We add one more load balancing rule. Make sure to change the Frontend IP address and Health probe to use the ones we just created. Also make sure direct server return is enabled.
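
These edits can also be made with the Az PowerShell module; the sketch below assumes the load balancer and subnet names from the earlier sketch, and uses the 10.0.0.110 address and 59998 probe port from this step.

# Sketch: add a second frontend IP, probe, and HA-ports rule for the iSCSI target IP
$lb     = Get-AzLoadBalancer -Name "lb-cluster" -ResourceGroupName "myRG"
$vnet   = Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myRG"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "default" -VirtualNetwork $vnet
$lb | Add-AzLoadBalancerFrontendIpConfig -Name "fe-iscsi" -PrivateIpAddress "10.0.0.110" -Subnet $subnet |
    Add-AzLoadBalancerProbeConfig -Name "probe-iscsi" -Protocol Tcp -Port 59998 -IntervalInSeconds 5 -ProbeCount 2 |
    Set-AzLoadBalancer | Out-Null

# Re-fetch so the new configs have IDs, then wire up the HA-ports rule with floating IP
$lb    = Get-AzLoadBalancer -Name "lb-cluster" -ResourceGroupName "myRG"
$fe    = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "fe-iscsi"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "probe-iscsi"
$be    = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "be-cluster"
$lb | Add-AzLoadBalancerRuleConfig -Name "rule-iscsi" -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -Protocol All -FrontendPort 0 -BackendPort 0 -EnableFloatingIP |
    Set-AzLoadBalancer | Out-Null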

The final step to allow the load balancer to work is to run the following PowerShell script on one of the cluster nodes. Make sure you use the new health probe port, IP address, and IP resource name.

$ClusterNetworkName = "Cluster Network 1"
$IPResourceName = "IP Address 10.0.0.0"
$ILBIP = "10.0.0.110"
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple `
    @{Address=$ILBIP;ProbePort=59998;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}

Your output should look like this.

 PS C:\Users\dave.DATAKEEPER> $ClusterNetworkName = “Cluster Network 1” 
$IPResourceName = “IP Address 10.0.0.0” 
$ILBIP = “10.0.0.110” 
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter 
-Multiple @{Address=$ILBIP;ProbePort=59998;SubnetMask="255.255.255.255"
;Network=$ClusterNetworkName;EnableDhcp=0}
WARNING: The properties were stored, but not all changes will take effect 
until IP Address 10.0.0.0 is taken offline and then online again.

Make sure to take the resource offline and online for the settings to take effect.

Create Your Clustered ISCSI Targets

Before you begin, it is best to check to make sure Server Manager from BOTH servers can see the two cluster nodes, plus the two cluster name resources, and they both appear “Online” under manageability as shown below.

If either server has an issue querying either of those cluster names then the next steps will fail. If there is a problem I would double check all the steps you took to create the load balancer and the Powershell scripts you ran.

We are now ready to create our first clustered iSCSI targets. From either of the cluster nodes, follow the steps illustrated below as an example of how to create iSCSI targets.

Of course, assign this to whichever server or servers will be connecting to this iSCSI target.
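
If you would rather script the target creation than use the wizard, a sketch with the iSCSITarget cmdlets follows. The target name, virtual disk path and size, and the initiator IQN are all examples, and -ComputerName points the cmdlets at the clustered target server name rather than the local node:

# Sketch: a clustered target, a virtual disk on the replicated X: drive, and the mapping
New-IscsiServerTarget -TargetName "target1" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:sql3.datakeeper.local" -ComputerName "iscsi-target"
New-IscsiVirtualDisk -Path "X:\VHDs\lun1.vhdx" -SizeBytes 10GB -ComputerName "iscsi-target"
Add-IscsiVirtualDiskTargetMapping -TargetName "target1" -Path "X:\VHDs\lun1.vhdx" -ComputerName "iscsi-target"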

And there you have it, you now have a functioning iSCSI target server in Azure.
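
If the next step is connecting a client, a minimal initiator sketch looks like this (run on the server that was granted access above; 10.0.0.110 is the iSCSI cluster IP behind the load balancer):

# Start the initiator service, register the portal, and connect persistently
Set-Service msiscsi -StartupType Automatic
Start-Service msiscsi
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.110"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true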

If you build this, leave a comment and let me know how you plan to use it!

Articles reproduced with permission from Clusteringwithmeremortals

Filed Under: Clustering Simplified Tagged With: Azure, ISCSI Target Server Cluster
