Six Reasons Your Cloud Migration Has Stalled

December 22, 2020 by Jason Aw Leave a Comment

More and more customers are seeking to take advantage of the flexibility, scalability and performance of the cloud. As the number of applications, solutions, customers, and partners making the shift increases, be sure that your migration doesn’t stall.

Avoid the Following Six Reasons Cloud Migrations Stall

1. Incomplete cloud migration project plans

Project planning is widely thought to be a key contributor to project success. Planning plays an essential role in guiding stakeholders, diverse implementation teams, and partners through the project phases. It helps identify desired goals, align resources and teams to those goals, reduce risks, avoid missed deadlines, and ultimately deliver a highly available solution in the cloud. Incomplete plans and incomplete planning are a frequent cause of stalled projects: at the eleventh hour a key dependency is identified, or during an unexpected server reboot an application monitoring and HA hole is discovered (see below). Be sure that your cloud migration has a plan, and work the plan.

2. Over-engineering on-premises

“This is how we did it on our on-premises nodes,” was the phrase that started a recent customer conversation. The customer engaged with Edmond Melkomian, Project Manager for SIOS Professional Services, when their attempts to migrate to the cloud stalled. During a discovery session, Edmond was able to uncover a number of over-engineered items related to on-premises versus cloud architecture. For some projects, reproducing what was done on premises can be a recipe for bloat, complexity, and delays. Analyze your architecture and migration plans and ruthlessly eliminate over-engineered components and designs, especially with networking and storage.

3. Under-provisioning

Controlling cost and preventing sprawl are critical aspects of cloud migrations. However, some customers, anxious about per-hour charges and the associated costs for disks and bandwidth, fall into the trap of under-provisioning. In this trap, resources are improperly sized, be that disks with the wrong speed characteristics, compute resources with the wrong CPU or memory footprint, or clusters with the wrong number of nodes. In such under-provisioned cases, issues arise when User Acceptance Testing (UAT) begins and the anticipated workloads create a logjam on undersized resources, or a cost-optimized target node is unable to handle the workload in a failover scenario. While resizing virtual machines in the cloud is a simple process, these sizing issues often create delays while architects and Chief Financial Officers try to understand the impact of re-provisioning resources.

4. Internal IT processes

Every great enterprise company has a set of internal processes, and chances are your team and company are no exception. IT processes in particular can have a large impact on the success of your cloud migration strategy. In the past, many companies had long requisition and acquisition processes, including bids, sizing guides, order approvals, server prep and configuration, and final deployment. The cloud has dramatically altered the way compute, storage, and network resources, among others, are acquired and deployed. However, if your processes haven’t kept up with the speed of the cloud, your migration may hit a snag when plans change.

5. Poor High Availability planning

Another reason that cloud migrations can stall involves high availability planning. High availability requires more than a bundle of tools or enterprise licenses.  HA requires a careful, thorough and thoughtful system design.  When deploying an HA solution your plan will need to consider capacity, redundancy, and the requirements for recovery and correction. With a plan, requirements are properly identified, solutions proposed, risks thought through, and dependencies for deployment and validation managed. Without a plan, the project and deployment are vulnerable to risks, single point of failure issues, poor fit, and missing layers and levels of application protection or recovery strategies.  Often when there has been a lack of HA planning, projects stall while the requirements are sorted out.

6. Incomplete or invalid testing

Ron, a partner migrating his end customer to the cloud, planned to go live over an upcoming three-day weekend. The last decision point for ‘go/no-go’ was a batch of user acceptance testing on the staging servers. The first test failed. In order to make up for time lost to other migration snags, Ron and team had skipped a number of test cases related to integrating the final collection of security and backup software on the latest OS with supporting patches. The simulated load, the first on the newly minted servers, tripped a series of issues within Ron’s architecture, including a kernel bug, a CPU and memory provisioning issue, and storage layout and capacity issues. The project was delayed for more than four weeks to restore customer confidence, perform proper testing and validation, resize the architecture, and apply software and OS fixes.

The promises of the cloud are enticing, and a well planned cloud migration will position you and your team to take advantage of these benefits. Whether you are beginning or in the middle of a cloud migration, we hope this article helps you be more aware of common pitfalls so you can hopefully avoid them.

– Cassius Rhue, Vice President, Customer Experience

Reproduced from SIOS

Filed Under: Clustering Simplified Tagged With: Amazon AWS, Amazon EC2, Azure, Cloud

Step-By-Step: ISCSI Target Server Cluster In Azure

June 13, 2020 by Jason Aw Leave a Comment

I recently helped someone build an iSCSI target server cluster in Azure and realized that I never wrote a step-by-step guide for that particular configuration. So to remedy that, here are the step-by-step instructions in case you need to do this yourself.

Pre-Requisites

I’m going to assume you are fairly familiar with Azure and Windows Server, so I’m going to spare you some of the details. Let’s assume you have at least done the following as pre-requisites:

  • Provision two servers (SQL1, SQL2), each in a different Availability Zone (an Availability Set is also possible, but Availability Zones have a better SLA)
  • Assign static IP addresses to them through the Azure portal
  • Join the servers to an existing domain
  • Enable the Failover Clustering feature and the iSCSI Target Server feature on both nodes (a PowerShell one-liner for this step follows the list)
  • Add three Azure Premium Disks to each node.
    NOTE: this is optional; one disk is the minimum required. For increased IOPS we are going to stripe three Azure Premium Disks together in a storage pool and create a simple (RAID 0) virtual disk
  • SIOS DataKeeper is going to be used to provide the replicated storage used in the cluster. If you need DataKeeper you can request a trial here.
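If you prefer to script the feature installation referenced above, a one-liner like the following (run on both nodes) should cover it.

 # Installs the Failover Clustering feature and the iSCSI Target Server role service
 Install-WindowsFeature -Name Failover-Clustering, FS-iSCSITarget-Server -IncludeManagementTools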

Create Local Storage Pool

Once again, this step is completely optional, but for increased IOPS we are going to stripe together three Azure Premium Disks into a single Storage Pool. You might be tempted to use Dynamic Disk and a spanned volume instead, but don’t do that! If you use dynamic disks you will find out that there is some general incompatibility that will prevent you from creating iSCSI targets later.

Don’t worry, creating a local Storage Pool is pretty straightforward if you are aware of the pitfalls you might encounter as described below. The official documentation can be found here.

Pitfall #1 – although the documentation says the minimum size for a volume to be used in a storage pool is 4 GB, I found that the P1 Premium Disk (4GB) was NOT recognized. So in my lab I used 16GB P3 Premium Disks.

Pitfall #2 – you HAVE to have at least three disks to create a Storage Pool.

Pitfall #3 – create your Storage Pool before you create your cluster. If you try to do it after you create your cluster you are going to wind up with a big mess as Microsoft tries to create a clustered storage pool for you. We are NOT going to create a clustered storage pool, so avoid that mess by creating your Storage Pool before you create the cluster. If you have to add a Storage Pool after the cluster is created you will first have to evict the node from the cluster, then create the Storage Pool.

Based on the documentation found here, below are the screenshots that represent what you should see when you build your local Storage Pool on each of the two cluster nodes. Complete these steps on both servers BEFORE you build the cluster.
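If you would rather script these steps than click through the wizard, a sketch like the following should be roughly equivalent; the pool, disk, and volume names and the X drive letter are placeholders chosen for this example, so adjust them to your environment.

 # Gather the poolable disks on this node; confirm these are your three Premium data disks
 $disks = Get-PhysicalDisk -CanPool $true

 # Create the Storage Pool from those disks
 New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

 # Create a Simple (RAID 0), fixed-provisioned virtual disk using all available capacity
 New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Simple -ProvisioningType Fixed -UseMaximumSize

 # Initialize, partition, and format it as the X: drive
 Get-VirtualDisk -FriendlyName "VDisk1" | Get-Disk |
     Initialize-Disk -PartitionStyle GPT -PassThru |
     New-Partition -DriveLetter X -UseMaximumSize |
     Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"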

You should see the Primordial pool on both servers.

Right-click and choose New Storage Pool…

Choose Create a virtual disk when this wizard closes

Notice here you could create storage tiers if you decided to use a combination of Standard, Premium and Ultra SSD

For best performance use Simple storage layout (RAID 0). Don’t be concerned about reliability since Azure Managed Disks have triple redundancy on the backend. Simple is required for optimal performance.

For performance purposes use Fixed provisioning. You are already paying for the full Premium disk anyway, so there is no reason not to use it all.

Now you will have a 45 GB X drive on your first node. Repeat this entire process for the second node.

Create Your Cluster

Now that each server has its own 45 GB X drive, we are going to create the basic cluster. Creating a cluster in Azure is best done via PowerShell so that we can specify a static IP address. If you do it through the GUI you will soon realize that Azure assigns your cluster IP a duplicate IP address that you will have to clean up, so don’t do that!

Here is example PowerShell code to create a new cluster.

 New-Cluster -Name mycluster -NoStorage -StaticAddress 10.0.0.100 -Node sql1, sql2

The output will look something like this.

PS C:\Users\dave.DATAKEEPER> New-Cluster -Name mycluster -NoStorage 
-StaticAddress 10.0.0.100 -Node sql1, sql2
WARNING: There were issues while creating the clustered role that 
may prevent it from starting. 
For more information view the report file below.
WARNING: Report file location: C:\windows\cluster\Reports\Create Cluster 
Wizard mycluster on 2020.05.20 
At 16.54.45.htm

Name
----
mycluster

The warning in the report will tell you that there is no witness. Because there is no shared storage in this cluster you will have to create either a Cloud Witness or a File Share Witness. I’m not going to walk you through that process as it is pretty well documented at those links.

Don’t put this off, go ahead and create the witness now before you move to the next step!
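For example, a Cloud Witness can be created with a single PowerShell command; the storage account name and access key below are placeholders for your own witness storage account.

 Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"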

You now should have a basic 2-node cluster that looks something like this.

Configure A Load Balancer For The Cluster Core IP Address

Clusters in Azure are unique in that the Azure virtual network does not support gratuitous ARP. Don’t worry if you don’t know what that means; all you really need to know is that cluster IP addresses can’t be reached directly. Instead, you have to use an Azure Load Balancer, which redirects the client connection to the active cluster node.

There are two steps to getting a load balancer configured for a cluster in Azure. The first step is to create the load balancer. The second step is to update the cluster IP address so that it listens for the load balancer’s health probe and uses a 255.255.255.255 subnet mask which enables you to avoid IP address conflicts with the ILB.

We will first create a load balancer for the cluster core IP address. Later we will edit the load balancer to also address the iSCSI cluster resource IP address that we will create at the end of this document.
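If you prefer to script this step, here is a rough sketch using the Az PowerShell module; the resource group, virtual network, subnet, and resource names are placeholders, and an HA-ports rule like the one below requires a Standard SKU internal load balancer. The steps that follow show the equivalent portal configuration.

 $rg     = "my-resource-group"
 $vnet   = Get-AzVirtualNetwork -Name "my-vnet" -ResourceGroupName $rg
 $subnet = Get-AzVirtualNetworkSubnetConfig -Name "my-subnet" -VirtualNetwork $vnet

 # The frontend uses the same 10.0.0.100 address as the core cluster IP resource
 $fe    = New-AzLoadBalancerFrontendIpConfig -Name "cluster-core-ip" -PrivateIpAddress "10.0.0.100" -SubnetId $subnet.Id
 $be    = New-AzLoadBalancerBackendAddressPoolConfig -Name "cluster-nodes"
 $probe = New-AzLoadBalancerProbeConfig -Name "cluster-core-probe" -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
 $rule  = New-AzLoadBalancerRuleConfig -Name "cluster-core-ha-ports" -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -Protocol All -FrontendPort 0 -BackendPort 0 -EnableFloatingIP

 New-AzLoadBalancer -Name "cluster-ilb" -ResourceGroupName $rg -Location $vnet.Location -Sku Standard -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -LoadBalancingRule $rule

 # The cluster node NICs still need to be added to the backend pool, as shown below.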

Notice that the static IP address we are using is the same address that we used to create the core cluster IP resource.

Once the load balancer is created you will edit the load balancer as shown below

Add the two cluster nodes to the backend pool

Add a health probe. In this example we use 59999 as the port. Remember that port, we will need it in the next step.

Create a new rule to redirect all HA ports. Make sure Floating IP is enabled.

Step 2 – Edit The Cluster Core IP Address To Work With The Load Balancer

As I mentioned earlier, there are two steps to getting the load balancer configured to work properly. Now that we have a load balancer, we have to run a PowerShell script on one of the cluster nodes. The following is an example.

$ClusterNetworkName = "Cluster Network 1"
$IPResourceName = "Cluster IP Address"
$ILBIP = "10.0.0.100"
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple `
    @{Address=$ILBIP;ProbePort=59999;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}

The important thing about the script above, besides getting all the variables correct for your environment, is making sure the ProbePort is set to the same port you defined in your load balancer settings for this particular IP address. You will see later that we will create a second health probe for the iSCSI cluster IP resource that will use a different port. The other important thing is making sure you leave the subnet mask set to 255.255.255.255. It may look wrong, but that is what it needs to be set to.

After you run it the output should look like this.

 PS C:\Users\dave.DATAKEEPER> $ClusterNetworkName = “Cluster Network 1” 
$IPResourceName = “Cluster IP Address” 
$ILBIP = “10.0.0.100” 
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter 
-Multiple @{Address=$ILBIP;ProbePort=59999;SubnetMask="255.255.255.255"
;Network=$ClusterNetworkName;EnableDhcp=0}
WARNING: The properties were stored, but not all changes will take effect 
until Cluster IP Address is taken offline and then online again.

You will need to take the core cluster IP resource offline and bring it back online again before it will function properly with the load balancer.
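For example, from PowerShell on one of the nodes (the resource names below are the defaults used in the script above):

 # Cycle the core cluster IP resource, then bring the cluster name resource back online
 Stop-ClusterResource "Cluster IP Address"
 Start-ClusterResource "Cluster IP Address"
 Start-ClusterResource "Cluster Name"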

Assuming you did everything right in creating your load balancer, your Server Manager on both servers should list your cluster as Online as shown below.

Check Server Manager on both cluster nodes. Your cluster should show as “Online” under Manageability.

Install DataKeeper

I won’t go through all the steps here, but basically at this point you are ready to install SIOS DataKeeper on both cluster nodes. It’s a pretty simple installation; just run the setup and choose all the defaults. If you run into any problems with DataKeeper it is usually one of two things. The first issue is the service account. You need to make sure the account you are using to run the DataKeeper service is in the Local Administrators Group on each node.

The second issue involves firewalls. Although the DataKeeper install will update the local Windows Firewall automatically, if your network is locked down you will need to make sure the cluster nodes can communicate with each other across the required DataKeeper ports. In addition, you need to make sure the ILB health probe can reach your servers.
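For example, one way to open the two health probe ports used in this guide (59999 and 59998) in Windows Firewall on both nodes is a rule like the following; adjust it to your own firewall policy.

 New-NetFirewallRule -DisplayName "Azure ILB Health Probe" -Direction Inbound -Protocol TCP -LocalPort 59999,59998 -Action Allow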

Once DataKeeper is installed you are ready to create your first DataKeeper job. Complete the following steps for each volume you want to replicate using the DataKeeper interface.

Use the DataKeeper interface to connect to both servers

Click on create new job and give it a name

Click Yes to register the DataKeeper volume in the cluster

Once the volume is registered it will appear in Available Storage in Failover Cluster Manager

Create The ISCSI Target Server Cluster

In this next step we will create the iSCSI target server role in our cluster. In an ideal world I would have a PowerShell script that does all this for you, but for the sake of time I’m just going to show you how to do it through the GUI. If you happen to write the PowerShell code, please feel free to share it with the rest of us!

There is one problem with the GUI method. You will wind up with a duplicate IP address when the IP Resource is created, which will cause your cluster resource to fail until we fix it. I’ll walk through that process as well.
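In the meantime, here is a rough, untested sketch of what the wizard steps below do; the role name is a placeholder, 10.0.0.110 is the iSCSI cluster IP used later in this guide, and "DataKeeper Volume X" must match the name of your DataKeeper volume resource in Available Storage. Specifying -StaticAddress up front may also let you avoid the duplicate IP address issue described above.

 Add-ClusteriSCSITargetServerRole -Storage "DataKeeper Volume X" -Name "iscsitarget" -StaticAddress 10.0.0.110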

Go to the Properties of the failed IP Address resource and choose Static IP and select an IP address that is not in use on your network. Remember this address, we will use it in our next step when we update the load balancer.

You should now be able to bring the iSCSI cluster resource online.

Update Load Balancer For ISCSI Target Server Cluster Resource

As I mentioned earlier, clients can’t connect directly to the cluster IP address (10.0.0.110) we just created for the iSCSI target server cluster. We will have to update the load balancer we created earlier as shown below.

Start by adding a new frontend IP address that uses the same IP address that the iSCSI Target cluster IP resource uses.

Add a second health probe on a different port. Remember this port number, we will use it again in the PowerShell script we run next.

We add one more load balancing rule. Make sure to change the Frontend IP address and Health probe to use the ones we just created. Also make sure direct server return is enabled.

The final step to allow the load balancer to work is to run the following PowerShell script on one of the cluster nodes. Make sure you use the new health probe port, IP address, and IP resource name.

$ClusterNetworkName = "Cluster Network 1"
$IPResourceName = "IP Address 10.0.0.0"
$ILBIP = "10.0.0.110"
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple `
    @{Address=$ILBIP;ProbePort=59998;SubnetMask="255.255.255.255";Network=$ClusterNetworkName;EnableDhcp=0}

Your output should look like this.

 PS C:\Users\dave.DATAKEEPER> $ClusterNetworkName = “Cluster Network 1” 
$IPResourceName = “IP Address 10.0.0.0” 
$ILBIP = “10.0.0.110” 
Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter 
-Multiple @{Address=$ILBIP;ProbePort=59998;SubnetMask="255.255.255.255"
;Network=$ClusterNetworkName;EnableDhcp=0}
WARNING: The properties were stored, but not all changes will take effect 
until IP Address 10.0.0.0 is taken offline and then online again.

Make sure to take the resource offline and online for the settings to take effect.

Create Your Clustered ISCSI Targets

Before you begin, it is best to check that Server Manager on BOTH servers can see the two cluster nodes plus the two cluster name resources, and that they all appear “Online” under Manageability as shown below.

If either server has an issue querying either of those cluster names then the next steps will fail. If there is a problem, I would double-check all the steps you took to create the load balancer and the PowerShell scripts you ran.

We are now ready to create our first clustered iSCSI targets. From either of the cluster nodes, follow the steps illustrated below as an example of how to create iSCSI targets.
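If you would rather script this part, the IscsiTarget module cmdlets below are a minimal sketch; the VHDX path, size, target name, and initiator IQN are placeholders, and the virtual disk should live on the replicated X: drive.

 # Create a virtual disk on the replicated X: drive and expose it as an iSCSI target
 New-IscsiVirtualDisk -Path "X:\iSCSIVirtualDisks\LUN1.vhdx" -SizeBytes 20GB
 New-IscsiServerTarget -TargetName "Target1" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:sql3.yourdomain.local"
 Add-IscsiVirtualDiskTargetMapping -TargetName "Target1" -Path "X:\iSCSIVirtualDisks\LUN1.vhdx"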

Of course, assign this to whichever server or servers will be connecting to this iSCSI target.

And there you have it, you now have a functioning iSCSI target server in Azure.

If you build this, leave a comment and let me know how you plan to use it!

Articles reproduced with permission from Clusteringformeremortals.com

Filed Under: Clustering Simplified Tagged With: Azure, ISCSI Target Server Cluster

Toyo Gosei Ltd. Migrates SAP Enterprise System to Azure

April 26, 2020 by Jason Aw Leave a Comment

Toyo Gosei Ltd. Migrates SAP Enterprise System to Azure: To Build a “System That Never Stops” With Replication

“We got several proposals for both on-premises and cloud, and decided to migrate to Microsoft Azure with Fujitsu’s proposal of SIOS DataKeeper, which best fits our requirements,” said Akihiko Kobayashi, a System Representative.

Toyo Gosei is a long-established chemical manufacturer that has been operating business for 65 years. The company’s main product, photosensitive materials for photoresist, is an indispensable material for manufacturing liquid crystal displays and semiconductor integrated circuits. The company is also focusing on technological development for the most advanced photosensitive materials.

In 2007, the company was required to select the successor to GLOVIA/Process C1, which had been used as its core business system. While receiving proposals from several companies, they chose to introduce SAP, an ERP system from the German company SAP, because of its solid J-SOX support. Time passed, and around 2015 the servers installed when introducing SAP came to the end of their maintenance.

IT Infrastructure

The company’s IT infrastructure is an on-premises VMware-based data center and a remote data center for business continuity/disaster protection. Since most of their applications run on the Microsoft Windows operating system, they used guest-level Windows Server failover clustering in their VMware environment to provide high availability and disaster protection.

The Challenge – Migrating to Azure

Behind the migration to the cloud were the need to be free from on-premises system maintenance, the demand for flexible scale-up, and the need to prepare for disaster based on the experience of the Great East Japan Earthquake.

Their decision to go to the cloud was driven by the fact that the servers on their premises physically moved during the earthquake, which almost led to a failure.

When migrating to Azure, the company built a backup system to address system failures and in case of disasters. “SAP has all the data necessary for our business. If SAP stops, the production process also stops. If the outage continues for two or three days, shipment, payment and billing is also stopped. The SAP system cannot be stopped,” said Kobayashi.

The first step was to set up a SAP backup system on Azure to take a daily backup of the production system in the East Japan region of Azure and a weekly backup of the standby system in the West Japan region.

Implementation

“However, backup is just a backup. Basically we need to make production system redundant in order to prevent it from stopping. On AWS, which was used for information systems, shared disks were available with a redundant configuration. However, Azure does not support shared disks. For this reason, we decided to use DataKeeper of SIOS Technology that enables data replication on Azure,” said Kobayashi.

They created a cluster configuration between storage systems connected to the redundant SAP production system and replicated the data using DataKeeper to keep it consistent. This provides the same availability as when using shared disks, even on Azure where a shared disk configuration is not supported.

“We have been in stable operations after the initial stage where a failover occurred,” said Kobayashi. “Regarding SIOS DataKeeper, the only thing we have to do is renew the maintenance contract.”

The Results

As a mid-term plan, they need to prepare for the “SAP 2025 problem,” when support for the current SAP version will expire. They have not built a specific plan, but Kobayashi said, “When moving to the new architecture S/4HANA, if clustering is required, we will implement SIOS DataKeeper because we trust it.”

SIOS DataKeeper is a reliable partner for Kobayashi. “Because you cannot stop the production system, it is IT personnel’s responsibility to choose a reliable tool,” he said.

Get a Free Trial of SIOS DataKeeper

Learn more about SAP high availability on Azure

Download the PDF version of the Case Study

Filed Under: Success Stories Tagged With: Azure, migration

Achieving Application Consistent Recovery Points of SQL Server 2008 R2 With Azure Site Recovery In Azure

June 20, 2019 by Jason Aw Leave a Comment

If you want to use ASR to replicate SQL Server 2008 R2 standalone or clustered instances, you will need to update the SQL Writer to 2012 or later.

You can use the SQL Server Express version, as it is a free download.

https://www.microsoft.com/en-us/download/details.aspx?id=29062

Once downloaded, navigate to the download location and run the executable with /x.  This will give you an option to specify a location to extract the files to.

ENU\x64\SQLEXPRADV_x64_ENU.exe /x

Once the extraction completes, navigate to the extracted location and the following location:

SQL\1033_enu_lp\x64\setup\x64

Within that folder you should find SQLWriter.msi. Run this on the system where you want to update the SQL Writer.
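If you need to push it out silently, the standard Windows Installer switches should work, for example:

 msiexec /i SQLWriter.msi /qn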

You now will be able to use ASR to do application consistent recovery points of SQL Server 2008 R2.

Reproduced with permission from Clusteringformeremortals.com

Filed Under: Clustering Simplified Tagged With: Azure, SQL Server

New Azure “SQL Server Settings” Blade In The Azure Portal

May 30, 2019 by Jason Aw Leave a Comment

There is a new blade in the Azure portal when creating a new SQL Server virtual machine. I’ve been looking for an announcement regarding this new Azure portal experience, but to no avail. This feature wasn’t available when I took the screenshots for my last post on creating a SQL Server 2008 R2 FCI in Azure on April 19th, so I presume it must be relatively new.

New Azure "SQL Server Settings" Blade In The Azure Portal
New Azure “SQL Server Settings” blade on the Azure portal

Most of the settings are pretty self-explanatory. Under Security and Networking, you can specify the port you want SQL to listen on. It also appears as if the Azure Security Group will be updated to allow different levels of access to the SQL instance: Local, Private or Public. Authentication options are also exposed in this new SQL Server settings blade.

Security, Networking and Authentication options are part of your SQL Server deployment

The rest of the features include licensing, patching and backup options. In addition, if you are deploying the Enterprise Edition of SQL Server 2016 or later, you also have the option to enable SQL Server R Services for advanced analytics.

Licensing, Patching, Backup and R Services options can be automatically configured

All of those options are welcome additions to the Azure portal experience when provisioning a new SQL Server instance. I’m sure the seasoned DBA probably has a list of a few dozen other options they would like to tweak before a SQL Server deployment, but this is certainly a step in the right direction.

Storage Configuration Options

The most interesting new feature I have found on this blade is the Storage Configuration option.

When you click on Change Configuration, you get the following blade.

As you slide the IOPS slider to the right you will see the number of data disks increase, the Storage Size increase, and the Throughput increase. You will be limited to the max number of IOPS and disks supported by that instance size. You see in the screenshot below I am able to go as high as 80,000 IOPS when provisioning storage for a Standard E64-16s_v3 instance.

The Standard E64-16s_v3 instance size supports up to 80,000 IOPS

There is also a “Storage optimization” option. I haven’t tried all the different combinations to know exactly what the Storage optimization setting does. If you know how the different options change the storage configuration, leave me a comment, or we will just wait for the official documentation to be released.

For my test, I provisioned a Standard DS13 v2 instance and maxed out the IOPS at 25600, the max IOPS for that instance size. I also optimized the storage for Transactional processing.

I found that when this instance was provisioned, six P30 premium disks were attached to the instance. This makes sense, since each P30 delivers 5000 IOPS, so it would take at least six of them to deliver the 25,600 IOPS requested. This also increased the Storage Size to 6 TB, since each P30 gives you 1 TB of storage space. Read-only host caching was also enabled on these disks.

The six disks were automatically provisioned and attached to the instance

I logged in to the instance to see what Azure had done with those disks. Fortunately, they had done exactly what I would have done; they created a single Storage Pool with the six P30 disks, created a Simple (aka RAID 0) Storage Space, and provisioned a single 6 TB F:\ drive.

This storage configuration wizard validates some of the cloud storage assumptions I made in my previous blog post, Storage Considerations for Running SQL Server in Azure. It seems like a single, large disk should suffice in most circumstances.

A Simple Storage Space consisting of the six P30s is presented as a single F:\ drive

This storage optimization is not available in every Azure Marketplace offering. For example, if you are moving SQL Server 2008 R2 to Azure for the extended security updates, you will find that this storage optimization is not available in the SQL2008R2/Windows Server 2008 R2 Azure Marketplace image. Of course, Storage Spaces was not introduced until Windows Server 2012, so that makes sense. I did verify that this option is available with the SQL Server 2012 SP4 on Windows Server 2012 R2 Azure Marketplace offering.

There is a minor inconvenience, however. In addition to adding this new Storage configuration option on the SQL Server settings blade, they also removed the option to add Data Disks on the Disks blade. Let’s say I wanted to provision additional storage without creating a Storage Space. To do that, I would have to create the instance first and then come back and add Data disks after the virtual machine is provisioned.

Final Thoughts

All of the SQL Server configuration options in this new Azure blade are welcome additions. I would love to see the list of tunable settings grow, and the information text should include guidance on current best practices for each tunable.

What SQL Server or Windows OS tunables would you like to see exposed as part of the provisioning process to make your life as a DBA easier? They would also make a junior DBA look like a seasoned pro by guiding them through all the current SQL Server configuration best practices.

I think the new Storage configuration option is probably the most compelling new addition. Prior to this wizard, users had to be aware of the limits of their instance size and of the storage they were adding, and have the wherewithal to stripe together multiple disks in a Simple Storage Space to get the maximum IOPS. A few years ago, I put together a simple Azure Storage Calculator to help people make these decisions. My calculator is currently outdated, but this new Storage configuration option may make it obsolete anyway.

I would love to see this Storage configuration wizard included as a standard offering in the Disks blade of every Windows instance type, instead of just the SQL Server instances. I would let the user choose between the new Storage configuration “Wizard” experience and the “Classic” experience where you manually add and manage storage.

Reproduced with permission from Clusteringformeremortals.com

Filed Under: Clustering Simplified Tagged With: Azure, SQL Server
