SIOS SANless clusters


How To Create A 2-Node MySQL Cluster Without Shared Storage – Part 1

November 29, 2018 by Jason Aw


Step-by-Step: How to Create a 2-Node MySQL Cluster Without Shared Storage, Part 1

The primary advantage of running a MySQL cluster is obviously high availability (HA). To get the most from this type of solution, you will want to eliminate as many potential single points of failure as possible. Conventional wisdom says that you can’t form a cluster without some type of shared storage, which technically represents a single point of failure in your clustering architecture. However, there are solutions: the SteelEye Protection Suite (SPS) for Linux allows you to eliminate storage as a single point of failure by providing real-time data replication between cluster nodes. Let’s look at a typical scenario: a cluster that leverages local, replicated storage to protect a MySQL database.

In order to create a 2-node MySQL cluster without shared storage, we’ll assume you’re working with an evaluation copy of SPS in a lab environment. We’re also presuming that you’ve confirmed that the primary and secondary servers and the network all meet the requirements for running this type of setup. (You can find details of these requirements in the SIOS SteelEye Protection Suite for Linux MySQL with Data Replication Evaluation Guide.)

First Step to Create a 2-Node MySQL Cluster Without Shared Storage

Before you begin setting up your cluster, you’ll need to configure the storage. The data that you want to replicate needs to reside on a separate file system or logical volume. Keep in mind that the size of the target disk, whether you’re using a partition or logical volume, must be equal to or larger than the source.

In this example, we presume that you’re using a disk partition. (However, LVM is also fully supported.) First, partition the local storage for use with SteelEye DataKeeper. On the primary server, identify a free, unused disk partition to use as the MySQL repository or create a new partition. Use the fdisk utility to partition the disk, then format the partition and temporarily mount it at /mnt. Move any existing data from /var/lib/mysql/ into this new disk partition (assuming a default MySQL configuration). Unmount and then remount the partition at /var/lib/mysql. You don’t need to add this partition to /etc/fstab, as it will be mounted automatically by SPS.
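
For reference, the whole sequence looks something like this (a sketch only; the device name /dev/sdb1 and the ext3 filesystem are assumptions, so substitute your own):

———-

# fdisk /dev/sdb (create a new partition, e.g. /dev/sdb1)
# mkfs.ext3 /dev/sdb1
# mount /dev/sdb1 /mnt
# mv /var/lib/mysql/* /mnt/ (only if existing data is present)
# umount /mnt
# mount /dev/sdb1 /var/lib/mysql

———-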

On the secondary server, configure your disk as you did on the primary server.

Installing MySQL

Next you’ll deal with MySQL. On the primary server, install both the mysql and mysql-server RPM packages (if they don’t already exist on the system) and apply any required dependencies. Verify that your local disk partition is still mounted at /var/lib/mysql. If necessary, initialize a sample MySQL database. Ensure that all the files in your MySQL data directory (/var/lib/mysql) have the correct permissions and ownership, and then manually start the MySQL daemon from the command line. (Note: Do not start MySQL via the service command or the /etc/init.d/ script.)

Connect with the mysql client to verify that MySQL is running.
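
On a Red Hat-style system of this vintage, those steps look roughly like this (a sketch; package names and the initialization command can vary with your MySQL version):

———-

# yum install mysql mysql-server
# mount | grep /var/lib/mysql
/dev/sdb1 on /var/lib/mysql type ext3 (rw)
# mysql_install_db --datadir=/var/lib/mysql
# mysqld_safe --datadir=/var/lib/mysql &
# mysql -u root

———-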

Update and verify the root password for your MySQL configuration. Then create a MySQL configuration file, such as the sample file shown here:

———-

# cat /var/lib/mysql/my.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
pid-file=/var/lib/mysql/mysqld.pid
user=root
port=3306
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1

# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
# symbolic-links=0

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[client]
user=root
password=SteelEye

———-

In this example, we place this file in the same directory that we will later replicate (/var/lib/mysql/my.cnf). Delete the original MySQL configuration file (in /etc).
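
On a default Red Hat-style installation, that original configuration file is typically /etc/my.cnf, so for example:

# rm /etc/my.cnf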

On the secondary server, install both the mysql and mysql-server RPM packages if necessary, apply any dependencies, and ensure that all the files in your MySQL data directory (/var/lib/mysql) have the correct permissions and ownership.

Installing SPS for Linux

Next, install SPS for Linux. For ease of installation, SIOS provides a unified installation script (called “setup”) for SPS for Linux. Instructions for how to obtain this software are in an email that comes with the SPS for Linux evaluation license keys.

Download the software and evaluation license keys on both the primary and secondary servers. On each server, run the installer script, which will install a handful of prerequisite RPMs, the core clustering software, and any optional ARKs that are needed. In this case, you will want to install the MySQL ARK (steeleye-lkSQL) and the DataKeeper (i.e., Data Replication) ARK (steeleye-lkDR). Apply the license key via the /opt/LifeKeeper/bin/lkkeyins command and start SPS for Linux via its start script, /opt/LifeKeeper/lkstart.
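
Put together, the installation sequence on each server looks something like this (a sketch; the license file name is a placeholder):

———-

# ./setup
# /opt/LifeKeeper/bin/lkkeyins <path-to-license-file>
# /opt/LifeKeeper/lkstart

———-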

At this point you have SPS installed, licensed, and running on both of your nodes, and your disk and the MySQL database that you want to protect are configured.

In the next post, we’ll look at the remaining steps in the shared-nothing clustering process, creating the following:

  • Communication (comm) paths, i.e. heartbeats, between the primary and target servers
  • An IP resource
  • The mirror, launching data replication
  • A MySQL database resource
  • A dependency between the MySQL database resource and the IP resource

Interested in knowing how to create a 2-node MySQL cluster without shared storage for your project? Chat with us or read our success stories.
Reproduced with permission from Linuxclustering


Failover Clustering With VMware High Availability: A Perfect Match?

November 28, 2018 by Jason Aw


Failover Clustering With VMware High Availability: Overkill Or A Perfect Match?

Implementing high availability (HA) at the VMware layer is a useful solution that helps to protect against some types of failures. However, VMware HA alone simply doesn’t cover all the bases. Let’s explore the possibility of Failover Clustering with VMware High Availability.

According to Gartner Research, most unplanned outages are caused by application failure (40 percent of outages) or admin error (40 percent). Hardware, network, power, or environmental problems cause the rest (20 percent total). VMware HA focuses on protection against hardware failures, but a good application-clustering solution picks up the slack in other areas.

Having A Good Strategy Is Essential For Failover Clustering with VMware High Availability

Here are a few things to consider when architecting the proper HA strategy for your VMware environment.


Shorten Outages With Application-level Monitoring And Clustering

What about recovery speed? In a perfect world, there would be no failures, outages or downtime. But if an unplanned outage does occur, the next best thing is to get up and running again fast. This equation represents the total availability of your environment:
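
The equation image isn’t reproduced here; a standard formulation, consistent with the point below about detection time, is:

Total Availability = Uptime / (Uptime + Time to Detect + Time to Recover)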

As you can see, detection time is a crucial piece of the equation. Here’s another place where VMware HA alone doesn’t quite cut it. VMware HA treats each virtual machine (VM) as a “black box” and has no real visibility into the health or status of the applications that are running inside. The VM and OS running inside might be just fine, but the application could be stopped, hung, or misconfigured, resulting in an outage for users.

Even when a host server failure is the issue, you must wait for VMware HA to restart the affected VMs on another host in the VMware cluster. That means that applications running on those VMs are down until 1) the outage is detected, 2) the OS boots fully on the new host system, 3) the applications restart, and 4) users reconnect to the apps.

By clustering at the application layer between multiple VMs, you are not only protected against application-level outages, you also shorten your outage-recovery time. The application can simply be restarted on a standby VM, which is already booted up and waiting to take over. To maximize availability, the VMs involved should live on different physical servers, or better still, in separate VMware HA clusters or even separate datacenters!

Eliminate Storage As A Potential Single Point Of Failure (SPOF)

Traditional clustering solutions, including VMware HA, require shared storage and typically protect applications or services only within a single data center. Technically, the shared-storage device represents an SPOF in your architecture. If you lose access to the back-end storage, your cluster and applications are down for the count. The goal of any HA solution is to increase overall availability by eliminating as many potential SPOFs as possible.

So how can you augment a native VMware HA cluster to provide greater levels of availability? To protect your entire stack, from hardware to applications, start with VMware HA. Next, you need a way to monitor and protect the applications. Clustering at the application level (i.e., within the VM) is the natural choice. Be sure to choose a clustering solution that supports host-based data replication (i.e., a shared-nothing configuration) so that you don’t need to go through the expense and complexity of setting up SAN-based replication. SAN replication solutions also typically lock you into a single storage vendor. On top of that, to cluster VMs by using shared storage, you generally need to enable Raw Device Mapping (RDM). This means that you lose access to many powerful VMware functions, such as vMotion.

Going with a shared-nothing cluster configuration eliminates the storage tier as an SPOF while still allowing you to use vMotion to migrate your VMs between physical hosts. It’s a win-win! A shared-nothing cluster is also an excellent solution for disaster recovery because the standby VM can reside at a different data center.

Cover All The Bases 

Application-failover clustering, layered over VMware HA, offers the best of both worlds. You can enjoy built-in hardware protection and application awareness, greater flexibility and scalability, and faster recovery times. Even better, the solution doesn’t need to break the bank.

Want to understand how Failover Clustering With VMware High Availability could work for your projects? Check in with SIOS.
Reproduced with permission from LinuxClustering


Maximise replication performance for Linux Clustering with Fusion-io

November 27, 2018 by Jason Aw


Tips To Maximise Replication Performance For Linux Clustering With Fusion-io

When most people think about setting up a cluster, it usually involves two or more servers and a SAN, or some other type of shared storage. SANs are typically costly and complex to set up and maintain, and they technically represent a potential Single Point of Failure (SPOF) in your cluster architecture. These days, more and more people are turning to companies like Fusion-io, with their lightning-fast ioDrives, to accelerate critical applications. These storage devices sit inside the server (i.e. they aren’t “shared disks”), so they can’t be used as cluster disks with many traditional clustering solutions. Fortunately, there are ways to maximise replication performance for Linux clustering with Fusion-io: solutions that allow you to form a failover cluster when there is no shared storage involved, i.e. a “shared nothing” cluster.

 

(Diagrams in the original post contrast a traditional shared-storage cluster with a “shared nothing” cluster.)

When leveraging data replication as part of a cluster configuration, it’s critical that you have enough bandwidth so that data can be replicated across the network just as fast as it’s written to disk.  The following are tuning tips that will allow you to get the most out of your “shared nothing” cluster configuration, when high-speed storage is involved:

Network

  • Use a 10Gbps NIC: Flash-based storage devices from Fusion-io (or other similar products from OCZ, LSI, etc.) are capable of writing data at speeds in the hundreds of MB/sec (750+). A 1 Gbps NIC can only push a theoretical maximum of ~125 MB/sec, so anyone taking advantage of an ioDrive’s potential can easily write data much faster than could be pushed through a 1 Gbps network connection. To ensure that you have sufficient bandwidth between servers to facilitate real-time data replication, a 10 Gbps NIC should always be used to carry replication traffic
  • Enable Jumbo Frames: Assuming that your Network Cards and Switches support it, enabling jumbo frames can greatly increase your network’s throughput while at the same time reducing CPU cycles.  To enable jumbo frames, perform the following configuration (example from a RedHat/CentOS/OEL linux server)
    • ifconfig <interface_name> mtu 9000
    • Edit /etc/sysconfig/network-scripts/ifcfg-<interface_name> file and add “MTU=9000” so that the change persists across reboots
    • To verify end-to-end jumbo frame operation, run this command: ping -s 8900 -M do <IP-of-other-server>
  • Change the NIC’s transmit queue length:
    • /sbin/ifconfig <interface_name> txqueuelen 10000
    • Add this to /etc/rc.local to preserve the setting across reboots

TCP/IP Tuning

  • Change the NIC’s netdev_max_backlog:
    • Set “net.core.netdev_max_backlog = 100000” in /etc/sysctl.conf
  • Other TCP/IP tuning that has shown to increase replication performance:
    • Note: these are example values, and some might need to be adjusted based on your hardware configuration (see after this list for how to apply them)
    • Edit /etc/sysctl.conf and add the following parameters:
      • net.core.rmem_default = 16777216
      • net.core.wmem_default = 16777216
      • net.core.rmem_max = 16777216
      • net.core.wmem_max = 16777216
      • net.ipv4.tcp_rmem = 4096 87380 16777216
      • net.ipv4.tcp_wmem = 4096 65536 16777216
      • net.ipv4.tcp_timestamps = 0
      • net.ipv4.tcp_sack = 0
      • net.core.optmem_max = 16777216
      • net.ipv4.tcp_congestion_control=htcp
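
To apply the /etc/sysctl.conf changes without a reboot, run the standard sysctl reload (nothing SPS-specific here):

# sysctl -p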

Adjustments

Typically you will also need to make adjustments to your cluster configuration, which will vary based on the clustering and replication technology you decide to implement. In this example, I’m using the SteelEye Protection Suite for Linux (aka SPS, aka LifeKeeper) from SIOS Technologies, which allows users to form failover clusters leveraging just about any back-end storage type: Fibre Channel SAN, iSCSI, NAS, or, most relevant to this article, local disks that need to be synchronized/replicated in real time between cluster nodes. SPS for Linux includes integrated, block-level data replication functionality that makes it very easy to set up a cluster when there is no shared storage involved.

Recommendations

To maximise replication performance for Linux clustering with Fusion-io, apply the following SteelEye Protection Suite (SPS) for Linux configuration recommendations (a consolidated sketch follows the list):

  • Allocate a small (~100 MB) disk partition on the Fusion-io drive to hold the bitmap file. Create a filesystem on this partition and mount it, for example, at /bitmap:
    • # mount | grep /bitmap
    • /dev/fioa1 on /bitmap type ext3 (rw)
  • Prior to creating your mirror, adjust the following parameters in /etc/default/LifeKeeper
    • Insert: LKDR_CHUNK_SIZE=4096
      • Default value is 64
    • Edit: LKDR_SPEED_LIMIT=1500000
      • (Default value is 50000)
      • LKDR_SPEED_LIMIT specifies the maximum bandwidth that a resync will ever take — this should be set high enough to allow resyncs to go at the maximum speed possible
    • Edit: LKDR_SPEED_LIMIT_MIN=200000
      • (Default value is 20000)
      • LKDR_SPEED_LIMIT_MIN specifies how fast the resync should be allowed to go when there is other I/O going on at the same time — as a rule of thumb, this should be set to half or less of the drive’s maximum write throughput in order to avoid starving out normal I/O activity when a resync occurs
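
Taken together, the relevant lines in /etc/default/LifeKeeper end up looking something like this (based on the values above):

———-

LKDR_CHUNK_SIZE=4096
LKDR_SPEED_LIMIT=1500000
LKDR_SPEED_LIMIT_MIN=200000

———-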

From here, go ahead and create your mirrors and configure the cluster as you normally would.

Interested in maximising replication performance for Linux clustering with Fusion-io? See what else SIOS can offer.
Reproduced with permission from LinuxClustering


Moving A Google Form Between Google Domains

November 23, 2018 by Jason Aw


Moving A Google Form Between Google Domains

If you are anything like me, you might have a few different Google accounts that you work with on a regular basis. Recently, I needed to move a Google Form between Google domains: I had spent a fair amount of time creating a Google Form, only to realize I had done so while logged in with my personal account rather than my work account. I didn’t really want to redo the work I had done, and when I searched for answers online, nothing specific came up that addressed my situation.

It’s not hard to do, so I figured I’d write it down just in case it happens to you. I stumbled upon the fix just by trying a few things. Let’s assume this is a new form with no data.

To move a Google Form between Google domains, all you have to do is the following:

  1. Add your second Google account as a Collaborator on the form
  2. Log in to your second Google account, open the form and “Make a copy” of the form


That’s it! Now you have a copy of the form in your second Google account. Of course, if you had already collected some data on the first form, you would want to copy that Sheet into your second Google account as well and attach the form to that copy of the data. Be sure to delete the old form so you don’t accidentally use it.

Read more tips like moving a Google Form between Google domains here.
Reproduced with permission from Clusteringformeremortals


Receiving Email Alerts With SIOS DataKeeper

November 19, 2018 by Jason Aw


Receiving Email Alerts With SIOS DataKeeper

Over the past few weeks, I wrote a 3-part series on how to configure email alerts based on Perfmon counters, System Event Log entries, and a specific Windows service start or stop event. These guides are relevant to any environment, though all of my examples were geared towards monitoring SIOS DataKeeper and covered some specific customer requests, including monitoring the SIOS DataKeeper service and alerting should the RPO exceed 5 seconds. I also included monitoring of the basic DataKeeper events that you would want to know about.

This video shows some of this alerting in action.

Interested in finding out more about SIOS DataKeeper? Read our SIOS success stories.
Reproduced with permission from Clusteringformeremortals

