SIOS SANless clusters


A Different Name On Azure Portal GUI

August 15, 2018 by Jason Aw Leave a Comment

“BadRequest: The Virtual Network Public-Azure-East Does Not Exist”: The Virtual Network Name Displayed In The Portal Can Be Wrong #Azure #AzureClassic

Today I learned something new: the name the Azure Portal GUI displays for a virtual network is not always its real name. I was trying to help a customer deploy some VMs in Azure Classic that have two NICs. No problem, I thought; it had been a while since I worked with Azure Classic, but from what I recalled it was pretty straightforward, although it has to be done via PowerShell as there is no GUI option in the portal for deploying two NICs.

The basic directions can be found here.

https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-multiple-nics/
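Under Azure Classic the second NIC can only be added in PowerShell. Here is a minimal sketch assuming the classic (Service Management) Azure module; the VM name, size, credentials, subnets, IP addresses and service name are all hypothetical placeholders, and multi-NIC requires a Large or bigger instance size:

```powershell
# Pick a Windows Server image (classic Service Management module assumed).
$image = Get-AzureVMImage |
    Where-Object { $_.Label -like "*Windows Server 2012 R2*" } |
    Select-Object -First 1 -ExpandProperty ImageName

# Build the VM config with a primary NIC plus one additional NIC.
$vm = New-AzureVMConfig -Name "SQL1" -InstanceSize "Large" -ImageName $image
Add-AzureProvisioningConfig -VM $vm -Windows -AdminUsername "clusteradmin" -Password "P@ssw0rd!"
Set-AzureSubnet -SubnetNames "Frontend" -VM $vm                      # primary NIC subnet
Add-AzureNetworkInterfaceConfig -Name "BackendNic" `
    -SubnetName "Backend" -StaticVNetIPAddress "10.0.1.5" -VM $vm    # second NIC

# -VNetName must be the network's ACTUAL name, which can differ from the
# name the portal displays.
New-AzureVM -ServiceName "SQLService" -VNetName "MyVNet" -VMs $vm
```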

However, after banging my head against the wall for a few hours, I stumbled across this nugget of information.

https://thelonedba.wordpress.com/2015/07/17/new-azurevm-badrequest-the-virtual-network-foo-does-not-exist

Different Name On Azure Portal GUI

It seems that the name the Azure Portal GUI shows for your virtual network can sometimes be completely different from the actual name, which is returned when you run Get-AzureVNetSite | Select Name. Essentially, there is a different name on the Azure Portal GUI.

See the screenshots below. The virtual network that I created as “Public-Azure-East” is actually called “Group Group Azure Public East”. How that happened, and why the portal displays a different name, is beyond my comprehension.

As you can see, my attempts at creating the virtual machine failed with “BadRequest: The virtual network Public-Azure-East does not exist.” I was sure it had something to do with the multiple subscriptions I use, but it turned out to be this bug where the portal displays a different name than the actual one.

[Screenshot: the error message, and the virtual network name shown in the Azure Portal GUI vs. the name returned by PowerShell]

Why something as simple as creating a VM with two NICs can't be accomplished via the GUI is another story completely.

Reproduced with permission from Clusteringformeremortals.com

Filed Under: Clustering Simplified Tagged With: Azure Classic, Azure Portal GUI, different name on azure portal gui

Disaster Recovery For SQL Server Standard Edition

August 12, 2018 by Jason Aw Leave a Comment


Replicating A 2-Node SQL Server 2012/2014 Standard Edition Cluster To A 3rd Server For Disaster Recovery

Disaster recovery for SQL Server Standard Edition is possible with SIOS DataKeeper Cluster Edition. Here’s how.

Many people have found themselves settling for SQL Server Standard Edition due to the cost of SQL Server Enterprise Edition. SQL Server Standard Edition has many of the same features, but it comes with a few limitations. One is that it does not support AlwaysOn Availability Groups; another is that it only supports two nodes in a cluster. With Database Mirroring deprecated, and supporting only synchronous replication in Standard Edition anyway, you have limited disaster recovery options.

Disaster recovery for SQL Server Standard Edition

One of those options is SIOS DataKeeper Cluster Edition. DataKeeper will work with your existing shared storage cluster. The software allows you to extend it to a 3rd node using either synchronous or asynchronous replication. If you are using SQL Server Enterprise, simply add that 3rd node as another cluster member for a true multisite cluster. However, since we are talking about SQL Server Standard Edition, you can’t add a 3rd node directly to the cluster. The good news is DataKeeper will allow you to replicate data to a 3rd node so your data is protected.

Disaster recovery for SQL Server Standard Edition means using DataKeeper to bring that 3rd node online as the source of the mirror, then using SQL Server Management Studio to mount the databases that are on the replicated volumes. Your clients will also need to be redirected to this 3rd node. But it is a very cost-effective solution with an excellent RPO and a reasonable RTO.

The SIOS documentation describes how to do disaster recovery for SQL Server Standard Edition. Here I have summarized the steps I recently used for one of my clients.

Configuration

  • Stop the SQL Resource
  • Remove the Physical Disk Resource From The SQL Cluster Resource
  • Remove the Physical Disk from Available Storage
  • Online Physical Disk on SECONDARY server. Add the drive letter (if not there)
  • Run emcmd . setconfiguration <drive letter> 256
    and reboot the SECONDARY server. This will cause the SECONDARY server to block access to the E drive. It's an important step because you don't want two servers to have access to the E drive at the same time if you can avoid it.
  • Online the disk on PRIMARY server
  • Add the Drive letter if needed
  • Create a DataKeeper Mirror from Primary to DR
    You may have to wait a minute for the E drive to appear available in the DataKeeper Server Overview Report on all the servers before you can create the mirror properly. If done properly, you will create a mirror from PRIMARY to DR. As part of that process DataKeeper will ask you about the SECONDARY server which shares the volume you are replicating.
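The EMCMD portion of the steps above can be sketched as follows. The install path and drive letter E are assumptions from this example, and getconfiguration is the companion verb I would use to verify the flag; check the SIOS EMCMD reference for your version:

```powershell
# On the SECONDARY node: set the volume configuration flag to 256 so this
# node blocks access to the E drive, then reboot.
cd "C:\Program Files (x86)\SIOS\DataKeeper"   # hypothetical install path
.\emcmd . setconfiguration E 256
Restart-Computer

# After the reboot, verify the flag took effect.
.\emcmd . getconfiguration E
```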

In The Event Of Disaster ….

ON DR NODE

  • Run EMCMD . switchovervolume <drive letter>
  • The first time make sure the SQL Service account has read/write access to all data and log files. You WILL have to explicitly grant this access the very first time you try to mount the databases.
  • Use SQL Management Studio to mount the databases
  • Redirect all clients to the server in the DR site. Better yet have the applications that reside in the DR site pre-configured to point to the SQL Server instance in the DR site.
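Granting the SQL Service account read/write access to the data and log files (which you must do explicitly the first time) can be scripted ahead of time; the folder paths and the service account name below are hypothetical:

```powershell
# Grant the (hypothetical) SQL Server service account full control of the
# data and log folders on the replicated volume; (OI)(CI) makes the grant
# inherit to all files and subfolders.
icacls "E:\SQLData" /grant "DOMAIN\sqlservice:(OI)(CI)F"
icacls "E:\SQLLogs" /grant "DOMAIN\sqlservice:(OI)(CI)F"
```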

AFTER DISASTER IS OVER

  • Power the servers (PRIMARY, SECONDARY) in the main site back on
  • Wait for mirror to reach mirroring state
  • Determine which node was the previous source (run PowerShell as an administrator):
    get-clusterresource -Name "<DataKeeper Volume Resource name>" | get-clusterparameter
  • Make sure no DataKeeper Volume Resources are online in the cluster
  • Start the DataKeeper GUI on one cluster node. Resolve any split brain conditions (most likely there are none) ensuring the DR node is selected as the source during any split-brain recovery procedures
  • On the node that was reported as the previous source run EMCMD . switchovervolume <drive letter>
  • Bring SQL Server online in Failover Cluster Manager
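The failback steps above might look like this from an elevated PowerShell prompt; the resource, role and volume names are hypothetical placeholders:

```powershell
Import-Module FailoverClusters

# 1. Find which node was the previous source of the mirror.
Get-ClusterResource -Name "DataKeeper Volume E" | Get-ClusterParameter

# 2. Confirm no DataKeeper Volume resources are online before switching over.
Get-ClusterResource |
    Where-Object { $_.ResourceType.Name -eq "DataKeeper Volume" -and $_.State -eq "Online" }

# 3. On the node reported as the previous source, switch the volume back
#    (run from the DataKeeper install directory):
#    .\emcmd . switchovervolume E

# 4. Bring the SQL Server role back online in the cluster.
Start-ClusterGroup -Name "SQL Server (MSSQLSERVER)"
```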

The above steps assume you have SIOS DataKeeper Cluster Edition installed on all three servers (PRIMARY, SECONDARY, DR). PRIMARY and SECONDARY are a two node shared storage cluster. You are replicating data to DR which is just a standalone SQL Server instance (not part of the cluster) with just local attached storage. The Disaster Recovery Server will have a volume(s) that is the same size and drive letter as the shared cluster volume(s). This works rather well and will even let you replicate to a target that is in the cloud if you don’t have your own Disaster Recovery site configured.

You can also build the same configuration using all replicated storage if you want to eliminate the SAN completely.

Here is a nice short video that illustrates some of the possible configurations for disaster recovery for SQL Server Standard Edition. http://videos.us.sios.com/medias/aula05u2fl

Reproduced with permission from Clusteringformeremortals.com

Filed Under: Clustering Simplified, Datakeeper Tagged With: disaster recovery, disaster recovery for sql server standard edition, SQL Server Standard Edition

Supported Services With Azure Resource Manager (ARM)

June 20, 2018 by Jason Aw Leave a Comment

Azure Service Management (Classic) or Azure Resource Manager (ARM)?

I deal with users every week that are moving business critical workloads to Azure. The first question I usually ask is whether they are using Azure Service Management (Classic) or Azure Resource Manager (ARM).

I usually recommend ARM. It is the new way of doing things, and all the new features are being developed for ARM. However, there are a few things that are not compatible with ARM yet; as time goes by, this list of unsupported features gets smaller and smaller. In the meantime, it is good to know there is an existing document, which seems to be updated on a regular basis, listing all of the features and whether they are supported with ARM. https://azure.microsoft.com/en-us/documentation/articles/resource-manager-supported-services/

It's a decent list, but it's not complete. For example, App Service Environments are not supported, yet that fact is missing from this page; I only found it out on the App Service Environment page itself.

Reproduced with permission from Clustering For Mere Mortals.

Filed Under: Clustering Simplified Tagged With: Azure Resource Manager

Fix Azure ILB Connection In SQL Server AlwaysOn Failover Cluster Instance

June 19, 2018 by Jason Aw Leave a Comment

Troubleshooting Azure ILB Connection Issues In A SQL Server Failover Instance Cluster Connection


I use the following tools to help me deal with troubleshooting SQL Server Failover Cluster Instance Connectivity issues. Especially those pesky Azure ILB Connection Issues. I’ll try to update this article whenever I find a new tool.

NETSTAT

The first tool is a simple test to verify whether the SQL Cluster IP is listening on the port it should be listening on. In this case the SQL Cluster IP address is 10.0.0.201, and it is using the default instance, which listens on port 1433.

Here is the command which will help you quickly identify whether the active node is listening on that port. In our case below everything looks normal.

C:\Users\dave.SIOS>netstat -na | find "1433"
TCP    10.0.0.4:49584         10.0.0.201:1433        ESTABLISHED
TCP    10.0.0.4:49592         10.0.0.201:1433        ESTABLISHED
TCP    10.0.0.4:49593         10.0.0.201:1433        ESTABLISHED
TCP    10.0.0.4:49595         10.0.0.201:1433        ESTABLISHED
TCP    10.0.0.201:1433        0.0.0.0:0              LISTENING
TCP    10.0.0.201:1433        10.0.0.4:49584         ESTABLISHED
TCP    10.0.0.201:1433        10.0.0.4:49592         ESTABLISHED
TCP    10.0.0.201:1433        10.0.0.4:49593         ESTABLISHED
TCP    10.0.0.201:1433        10.0.0.4:49595         ESTABLISHED

Once I can be sure SQL is listening to the proper port, I use PSPING to try to connect to the port remotely.

PSPING

PSPing is part of the PSTools package available from Microsoft. I usually download the tool and put PSPing directly in my System32 folder so I can use it whenever I want without having to change directories.

Now, assuming everything is configured properly from the ILB, Cluster and Firewall perspective, you should be able to ping the SQL Cluster IP address and port 1433 from the passive server. You will get the results shown below…

C:\Users\dave.SIOS>psping 10.0.0.201:1433
PsPing v2.01 - PsPing - ping, latency, bandwidth measurement utility
Copyright (C) 2012-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
TCP connect to 10.0.0.201:1433:
5 iterations (warmup 1) connecting test:
Connecting to 10.0.0.201:1433 (warmup): 6.99ms
Connecting to 10.0.0.201:1433: 0.78ms
Connecting to 10.0.0.201:1433: 0.96ms
Connecting to 10.0.0.201:1433: 0.68ms
Connecting to 10.0.0.201:1433: 0.89ms
If things are not configured properly, you may see results similar to the following…

C:\Users\dave.SIOS>psping 10.0.0.201:1433
TCP connect to 10.0.0.201:1433:
5 iterations (warmup 1) connecting test:
Connecting to 10.0.0.201:1433 (warmup):
This operation returned because the time out period expired.
Connecting to 10.0.0.201:1433:
This operation returned because the time out period expired.
Connecting to 10.0.0.201:1433:
This operation returned because the time out period expired.
Connecting to 10.0.0.201:1433:
This operation returned because the time out period expired.
Connecting to 10.0.0.201:1433:
This operation returned because the time out period expired.

If PSPing connects but your application still has a problem connecting, you may need to dig a bit deeper. I have seen some applications, like Great Plains, also want to make a connection to port 445. If your application can't connect but PSPing connects fine to 1433, you may need to do a network trace to see what other ports your application is trying to connect to. Your last step would be to add load balancing rules for those ports as well.

NAMED INSTANCES

Planning to use a named instance? You need to make sure you lock down your TCP service to use a static port. You also need to add a rule to your load balancer to redirect UDP 1434 for the SQL Browser Service; otherwise you won't be able to connect to your named instance.
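Assuming the same AzureRM cmdlets used elsewhere on this blog, and hypothetical load balancer and resource group names, the UDP 1434 rule for the SQL Browser Service might be added like this:

```powershell
# Forward UDP 1434 (SQL Browser) to the backend pool so clients can
# resolve the named instance's port. All names are placeholders.
$lb = Get-AzureRMLoadBalancer -Name "ILBEAST" -ResourceGroupName "SIOS-EAST"
$lb | Add-AzureRMLoadBalancerRuleConfig -Name "SQLBrowserUDP1434" `
    -Protocol Udp -FrontendPort 1434 -BackendPort 1434 `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe $lb.Probes[0]
$lb | Set-AzureRMLoadBalancer   # commit the change
```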

FIREWALL

Opening up TCP ports 1433 and 59999 should cover the manual steps required. But when troubleshooting connection issues, I generally turn the Windows Firewall off to eliminate it as a possible cause of the problem. And don't forget: Azure also has a firewall called Network Security Groups. If anyone changed that from the default, it could be blocking traffic as well.
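If you would rather open just the two ports than disable the firewall, a one-liner with the built-in NetSecurity module does it (the rule name is a placeholder):

```powershell
# Allow inbound SQL Server (1433) and the ILB health probe (59999).
New-NetFirewallRule -DisplayName "SQL FCI and ILB probe" `
    -Direction Inbound -Protocol TCP -LocalPort 1433, 59999 -Action Allow
```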

NAME RESOLUTION

Try pinging the SQL cluster name. It should resolve to the SQL Server cluster IP address. I have seen, on more than a few occasions, the DNS A record associated with the SQL cluster network name mysteriously disappear from DNS. If that is the case, go ahead and re-add the SQL cluster name and IP address as an A record in DNS.
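The missing record can be re-added from any machine with the DnsServer module; the zone name, host name and address below are assumptions from this example:

```powershell
# Re-create the missing A record for the SQL cluster network name.
Add-DnsServerResourceRecordA -ZoneName "contoso.local" `
    -Name "sqlcluster" -IPv4Address "10.0.0.201"
```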

SQL CONFIGURATION MANAGER

In SQL Configuration Manager, you should see the SQL Cluster IP address listed along with port 1433. If you installed a named instance, you will of course need to go in here, lock the port to a specific port, and make your load balancing rules reflect that port. Because of the Azure ILB limitation of only one ILB per availability group, I really don't see a valid reason to use a named instance. Make it easier on yourself and just use the default instance of SQL. (Update: as of Oct 2016 you CAN have multiple IP addresses per ILB, so you CAN have multiple instances of SQL installed in the cluster.)

 

Reproduced with permission from Clustering For Mere Mortals.

Filed Under: Clustering Simplified Tagged With: AZURE ILB CONNECTION, Failover Cluster Instances, SQL SERVER ALWAYSON FCI CLUSTER

Azure ILB In ARM For SQL Server Failover Cluster Instances

June 15, 2018 by Jason Aw Leave a Comment

Configuring The #AZURE ILB In ARM For SQL Server Failover Cluster Instance Or AG Using AZURE Powershell 1.0


In an earlier post I went into some great detail about how to configure the Azure ILB in ARM for SQL Server Failover Cluster or AG resources. The directions in that article were written prior to the GA of Azure PowerShell 1.0. With the availability of Azure PowerShell 1.0, the main script that creates the ILB needs to be slightly different. The rest of the article is still accurate; however, if you are using Azure PowerShell 1.0 or later, the script to create the ILB described in that article should be as follows.

#Replace the values for the below listed variables
$ResourceGroupName = 'SIOS-EAST' # Resource Group Name in which the SQL nodes are deployed
$FrontEndConfigurationName = 'FEEAST' # You can provide any name for this parameter
$BackendConfigurationName = 'BEEAST' # You can provide any name for this parameter
$LoadBalancerName = 'ILBEAST' # Provide a name for the Internal Load Balancer object
$Location = 'eastus2' # Input the data center location of the SQL deployments
$subname = 'public' # Provide the subnet name in which the SQL nodes are placed
$ILBIP = '10.0.0.201' # Provide the IP address for the Listener or Load Balancer

$subnet = Get-AzureRMVirtualNetwork -ResourceGroupName $ResourceGroupName |
    Get-AzureRMVirtualNetworkSubnetConfig -Name $subname
$FEConfig = New-AzureRMLoadBalancerFrontendIpConfig -Name $FrontEndConfigurationName `
    -PrivateIpAddress $ILBIP -SubnetId $subnet.Id
$BackendConfig = New-AzureRMLoadBalancerBackendAddressPoolConfig `
    -Name $BackendConfigurationName
New-AzureRMLoadBalancer -Name $LoadBalancerName -ResourceGroupName $ResourceGroupName `
    -Location $Location -FrontendIpConfiguration $FEConfig `
    -BackendAddressPool $BackendConfig

The rest of that original article is the same, but I have just copied it here for ease of use…

Using GUI

Now that the ILB is created, we should see it in the Azure Portal in the Resource Group, as shown in the screenshot below.

[Screenshot: the new ILB listed in the Resource Group in the Azure Portal]

The rest of the configuration can also be completed through PowerShell, but I’m going to use the GUI in my example.

If you want to use PowerShell, you could probably piece together the script by looking at this article. Unfortunately, this article confuses me. I’ll figure it out some day and try to document it in a user friendly format. As of now, I think the GUI is fine for the next steps.

Let’s Get Started

Follow along with the screen shots below. If you get lost, follow the navigation hints at the top of the Azure Portal to figure out where we are.

First Step

  • Click the Backend Pool settings tab. Select the backend pool and update the Availability Set and Virtual Machines. Save your changes.

[Screenshot: Backend Pool configuration]

  • Configure the Load Balancer's probe by clicking Add on the Probe tab. Give the probe a name and configure it to use TCP port 59999. I have left the probe interval and the unhealthy threshold at their default settings, which means it will take 10 seconds before the ILB removes the passive node from the list of active nodes after a failover, so your clients may take up to 10 seconds to be redirected to the new active node. Be sure to save your changes.

[Screenshot: health probe configuration on TCP port 59999]

Next Step

  • Navigate to the Load Balancing Rules tab and add a new rule. Give the rule a sensible name (SQL1433 or similar). Choose the TCP protocol and port 1433 (assuming you are using the default instance of SQL Server); choose 1433 for the backend port as well. For the Backend Pool, choose the backend pool we created earlier (BE), and for the Probe, choose the probe we created earlier.

We do not want to enable Session Persistence, but we do want to enable Floating IP (Direct Server Return). I have left the idle timeout at the default setting, but you might want to consider increasing it to the maximum value: I have seen some applications, such as SAP, log error messages each time the connection is dropped and needs to be re-established.

[Screenshot: load balancing rule for TCP 1433 with Floating IP enabled]

  • At this point the ILB is configured, and only one final step needs to take place for a SQL Server Failover Cluster: we need to update the SQL IP cluster resource just as we had to in the Classic deployment model. To do that, you will need to run the following PowerShell script on just one of the cluster nodes. Note that SubnetMask="255.255.255.255" is not a mistake; use the 32-bit mask regardless of what your actual subnet mask is.
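A sketch of that script, with the cluster network name and IP resource name as placeholders you must replace with your own values (check them with Get-ClusterResource and Get-ClusterNetwork):

```powershell
# Point the SQL IP cluster resource at the ILB address and its probe port.
$ClusterNetworkName = "Cluster Network 1"                # hypothetical
$IPResourceName     = "SQL IP Address 1 (sqlcluster)"    # hypothetical
$ILBIP              = "10.0.0.201"

Import-Module FailoverClusters
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    Address    = $ILBIP
    ProbePort  = 59999
    SubnetMask = "255.255.255.255"   # always the 32-bit mask
    Network    = $ClusterNetworkName
    EnableDhcp = 0
}
```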

One Final Note

In my initial test I still was not able to connect to the SQL Resource name even after I completed all of the above steps. After banging my head against the wall for a few hours I discovered that for some reason the SQL Cluster Name Resource was not registered in DNS. I’m not sure how that happened or whether it will happen consistently, but if you are having trouble connecting I would definitely check DNS and add the SQL cluster name and IP address as a new A record if it is not already in there.

And of course, don't forget the good ole Windows Firewall. You will have to make exceptions for 1433 and 59999, or just turn it off until you get everything configured properly, like I did. You probably want to leverage Azure Network Security Groups instead of the local Windows Firewall anyway, for a more unified experience across all your Azure resources.

Good luck and let me know how you make out.

Head over here to see how SIOS helped companies across the globe with creating SQL Server Failover Cluster.

Reproduced with permission from Clustering For Mere Mortals.

Filed Under: Clustering Simplified Tagged With: SQL Server Failover Cluster
