SIOS SANless clusters


SIOS LifeKeeper vs SUSE HAE

December 14, 2022 by Jason Aw Leave a Comment

Controlling Cloud Costs without Sacrificing Availability or Performance


In many enterprises, SAP systems are so essential to core business operations that even brief periods of downtime can have devastating consequences. However, Linux-based high availability solutions can be complicated and error-prone. Even SUSE HAE and other open source clustering options are highly manual and only protect individual components. Read the white paper to learn the differences between SUSE HAE and SIOS Protection Suite and find out the fastest, most accurate way to manage and optimize Linux environments.

Download the white paper here

Filed Under: Clustering Simplified Tagged With: Linux, SIOS LifeKeeper for Linux

Installing SAP MaxDB in an HA Environment

November 1, 2022 by Jason Aw Leave a Comment

Installing SAP MaxDB in an HA Environment

General SAP documentation on MaxDB is here: https://maxdb.sap.com/documentation/

MaxDB is a relational database management system (RDBMS) sold by SAP for large environments (SAP and non-SAP) that require enterprise-level database functionality. The first step to delivering high availability for any application is ensuring it is installed according to best practices. This blog provides important insight for installing MaxDB in a SIOS LifeKeeper for Linux high availability clustering environment. It includes links to detailed installation documentation provided by SAP.

These instructions assume that you will perform the MaxDB installation steps on all nodes in your SIOS LifeKeeper cluster that will be “production” nodes.

1. Downloading the MaxDB software

  • Use your SAP account to download the latest MaxDB package; in this example, 51054410_2.
  • Upload the package to your Linux instance (here, /mnt/software/) and extract the file using SAPCAR with the -xvf switches.
  • cd into the “MaxDB_7.9___SP10_Build_05_” folder, then into “DATA_UNITS”, and finally into “MAXDB_LINUX_X86_64”.
  • SAP document describing the installation: https://maxdb.sap.com/doc/7_7/44/eb166db6f0108ee10000000a11466f/content.htm

2. Using the CLI Installer

Run SDBINST, the MaxDB installation manager, which will begin the installation process.

Walk through the options, either specifying values or accepting the defaults:

Select 0 for all components. You will then be prompted for the installation name, installation path, installation description, privatedata, and a port number.

This installation’s instance data location is privatedata, and the port number is the port the instance will use while running; the default is 7200 for the first installation.

If you need to uninstall, follow the steps in this SAP document: https://maxdb.sap.com/doc/7_8/44/d8fc93daba5705e10000000a1553f6/content.htm

3. GUI Installer

To use the GUI installer, you will need to set up xauth and use xming (or similar X-Windows emulator), see https://superuser.com/questions/592185/how-do-i-get-x11-forwarding-to-work-on-windows-with-putty-and-xming

Note that some graphics library links may need to be fixed: newer Linux versions ship newer graphics libraries with different names. We can still use the newer libraries, but MaxDB expects the older names, so we create symbolic links to the existing libraries with the names MaxDB expects to find:

ln -s /usr/lib64/libpangoxft-1.0.so.0 /usr/lib64/libpangox-1.0.so.0

ln -s /usr/lib64/libpng12.so.0 /usr/lib64/libpng.so.3

ln -s /usr/lib64/libtiff.so.5 /usr/lib64/libtiff.so.3

Now run setup:

cd /mnt/software/MaxDB_7.9___SP10_Build_05_/DATA_UNITS/MAXDB_LINUX_X86_64/

./SDBSETUP

These templates simply pre-define parameters for the MaxDB instance that will be created as part of the installation. I used Desktop PC/Laptop because it is aimed at small single-user installations; you can change most of the parameters after installation completes. See this note for more details.

By default, the global owner user created while setting up MaxDB gets /bin/false added to its entry in /etc/passwd. This restricts the account used for the MaxDB installation for security reasons; e.g., you cannot log in with this account. In our case we will use this user, so we change the shell in its /etc/passwd entry to /bin/bash, which lets us log in as the user created for us in this example.
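Rather than editing /etc/passwd by hand, the shell change can be sketched with usermod. This assumes the owner user was created as sdb, as in this example, and requires root:

```shell
#!/bin/sh
# Change the MaxDB owner user's login shell from /bin/false to
# /bin/bash so the account can be used interactively.
# "sdb" is the owner user created in this example installation.
usermod -s /bin/bash sdb

# Confirm the shell field of the passwd entry was updated.
getent passwd sdb | awk -F: '{ print $NF }'
```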

4. Setting up a database

Once we have the actual MaxDB software installed, we need to create a database and then start that database. In this example I will call my database SPS and the default admin user will be dbm with the password dbm.

sudo su - sdb

dbmcli -s -R /sapdb/MAXDB/db db_create SPS dbm,dbm

dbmcli -d SPS -u dbm,dbm

user_put dbm PASSWORD=dbadmin

This should drop you to a prompt like “dbmcli on SPS>”, which means you are connected to the SPS database as sdb. We will now configure some parameters required to run the database.

param_startsession

param_init OLTP

param_put CAT_CACHE_SUPPLY 5000

param_put CACHE_SIZE 3000

param_put MAXDATAVOLUMES 5

param_put RUNDIRECTORYPATH /sapdb/MAXDB/run

param_checkall

param_commitsession

param_addvolume 1 DATA /sapdb/MAXDB/data/DISKD0001 F 2560

param_addvolume 1 LOG  /sapdb/MAXDB/log/DISKL001  F 2048

quit

Now it’s time to start the DB:

dbmcli -d SPS -u dbm,dbadmin db_start

All of the above param and dbmcli commands should output OK when you execute them. If they do not, the output will generally give you at least a vague idea of what is wrong.
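A quick sanity check after db_start is to query the instance state, which should report ONLINE. This is a sketch assuming dbmcli is on the PATH of the user running it (sdb in this example):

```shell
#!/bin/sh
# Verify the SPS instance is up. db_state prints "OK" on success,
# followed by the current state (ONLINE once the database is running).
dbmcli -d SPS -u dbm,dbadmin db_state
```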

dbmcli -d SPS -u dbm,dbadmin

util_connect dbm,dbadmin

db_activate dba,dba

dbmcli -d SPS -u dbm,dbadmin load_systab -u dba,dba -ud domain

dbmcli -d SPS -u dbm,dbadmin

sql_connect dba,dba

sql_execute CREATE USER test PASSWORD test DBA NOT EXCLUSIVE

medium_put data datasave FILE DATA 0 8 YES

medium_put auto autosave FILE AUTO

util_connect dbm,dbadmin

backup_save data

autosave_on

load_tutorial

auto_extend on

quit

OK, now we need to create a DEFAULT key to allow SPS-L (SIOS Protection Suite for Linux) to connect to the resource. This is done as follows:

xuser -U sdb -d SPS -u dbm,dbadmin

Make sure this is executed on all production nodes, or copy /home/sdb/.XUSER.62 to all production nodes.
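Creating the key and distributing it to a second production node can be sketched as follows. The host name node2 is a hypothetical placeholder for your other production node:

```shell
#!/bin/sh
# Create the xuser key store entry as the sdb user, then copy the
# resulting key file to the other production node so SPS-L can
# connect on either node. "node2" is a placeholder host name.
su - sdb -c "xuser -U sdb -d SPS -u dbm,dbadmin"
scp /home/sdb/.XUSER.62 node2:/home/sdb/.XUSER.62
```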

Once these items are complete, we can start the global DB listener using:

/sapdb/programs/bin/sdbgloballistener start

Once the global DB listener is running, you should be able to connect to the DB using something like MaxDB Studio or a SQL client.

Filed Under: Clustering Simplified Tagged With: Linux, MaxDB, SIOS LifeKeeper

How To Install A SIOS Protection Suite for Linux License Key

February 23, 2022 by Jason Aw Leave a Comment

How To Install A SIOS Protection Suite for Linux License Key


Once you have installed the SIOS Protection Suite for Linux software and activated your license, you will need to install your license key before you can run the software. This four-minute video reviews how to install SIOS Protection Suite for Linux software and demonstrates how to activate your license to get started.

Watch as a SIOS support representative shows you how to check that your SPS image file is mounted, ensure you have the license file, and install the license by entering its complete path name. Use our simple license key manager to validate your activated licenses from purchased entitlements, download and apply license keys, and start your SIOS Protection Suite for Linux software.

This video also walks through the process of how to access our SIOS Documentation portal, where you can find release notes, installation guides, technical documentation and information detailing SIOS Protection Suite for Linux as well as a wide range of topics on everything SIOS.

View tips and convenient insights on how to complete the steps quickly and simply. Now you can begin protecting your critical applications with SIOS Protection Suite for Linux.

How To Install A SIOS Protection Suite for Linux License Key

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: High Availability Linux, Linux, Linux Cluster

Understanding and Avoiding Split Brain Scenarios

September 23, 2021 by Jason Aw Leave a Comment

Understanding and Avoiding Split Brain Scenarios


Split brain. Most readers of our blogs will have heard the term (in the computing context, that is), yet we cannot help but sympathize with those whose first mental image is of the chaos that would result if someone had two brains, both equally in control at the same time.

What is a Failover Cluster Split Brain Scenario?

In a failover cluster split brain scenario, neither node can communicate with the other, and the standby server may promote itself to active because it believes the active node has failed. This results in both nodes becoming ‘active’, as each sees the other as failed. As a result, data integrity and consistency are compromised as data changes on both nodes. This is referred to as split brain.

There are two types of split-brain scenarios which may occur for an SAP HANA resource hierarchy if appropriate steps are not taken to avoid them.

  • HANA Resource Split Brain: The HANA resource is Active (ISP) on multiple cluster nodes. This situation is typically caused by a temporary network outage affecting the communication paths between cluster nodes.
  • SAP HANA System Replication Split Brain: The HANA resource is Active (ISP) on the primary node and Standby (OSU) on the backup node, but the database is running and registered as the primary replication site on both nodes. This situation is typically caused by either a failure to stop the database on the previous primary node during failover, having Autostart enabled for the database, or a database administrator manually running “hdbnsutil -sr_takeover” on the secondary replication site outside of the clustering software environment.

Avoiding Split Brain Issues

Recommendations for avoiding or resolving each type of split-brain scenario in the SIOS Protection Suite clustering environment are given below.

HANA Resource Split Brain Resolution

While in a HANA resource split-brain scenario, a message similar to the following is logged and broadcast to all open consoles every quickCheck interval (default 2 minutes) until the issue is resolved.

EMERG:hana:quickCheck:HANA-SPS_HDB00:136363:WARNING: 
A temporary communication failure has occurred between servers 
hana2-1 and hana2-2. 
Manual intervention is required in order to minimize the risk of 
data loss. 
To resolve this situation, please take one of the following resource 
hierarchies out of service: HANA-SPS_HDB00 on hana2-1 
or HANA-SPS_HDB00 on hana2-2. 
The server that the resource hierarchy is taken out of service on 
will become the secondary SAP HANA System Replication site.

Recommendations for resolution:

  1. Investigate the database on each cluster node to determine which instance contains the most up-to-date or relevant data. This determination must be made by a qualified database administrator who is familiar with the data.
  2. The HANA resource on the node containing the data that needs to be retained will remain Active (ISP) in LifeKeeper, and the HANA resource hierarchy on the node that will be re-registered as the secondary replication site will be taken entirely out of service in LifeKeeper. Right-click on each leaf resource in the HANA resource hierarchy on the node where the hierarchy should be taken out of service and click Out of Service …
  3. Once the SAP HANA resource hierarchy has been successfully taken out of service, LifeKeeper will re-register the Standby node as the secondary replication site during the next quickCheck interval (default 2 minutes). Once replication resumes, any data on the Standby node which is not present on the Active node will be lost. Once the Standby node has been re-registered as the secondary replication site, the SAP HANA hierarchy has returned to a highly available state.

SAP HANA System Replication Split Brain Resolution

While in this split-brain scenario, a message similar to the following is logged and broadcast to all open consoles every quickCheck interval (default 2 minutes) until the issue is resolved.

EMERG:hana:quickCheck:HANA-SPS_HDB00:136364:WARNING: 
SAP HANA database HDB00 is running and registered as 
primary master on both hana2-1 and hana2-2. 
Manual intervention is required in order to 
minimize the risk of data loss. To resolve this situation, 
please stop database instance 
HDB00 on hana2-2 by running the command ‘su – spsadm -c 
“sapcontrol -nr 00 -function Stop”’ 
on that server. Once stopped, 
it will become the secondary SAP HANA System Replication site.

Recommendations for resolution:

  1. Investigate the database on each cluster node to determine whether important data exists on the Standby node which does not exist on the Active node. If important data has been committed to the database on the Standby node while in the split-brain state, the data will need to be manually copied to the Active node. This determination must be made by a qualified database administrator who is familiar with the data.
  2. Once any missing data has been copied from the database on the Standby node to the Active node, stop the database on the Standby node by running the command given in the LifeKeeper warning message:

    su - <sid>adm -c “sapcontrol -nr <Inst#> -function Stop”

    where <sid> is the lower-case SAP System ID for the HANA installation and <Inst#> is the instance number for the HDB instance (e.g., the instance number for instance HDB00 is 00)

  3. Once the database has been successfully stopped, LifeKeeper will re-register the Standby node as the secondary replication site during the next quickCheck interval (default 2 minutes). Once replication resumes, any data on the Standby node which is not present on the Active node will be lost. Once the Standby node has been re-registered as the secondary replication site, the SAP HANA hierarchy has returned to a highly available state.
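When investigating either scenario, each node's current replication role can be checked directly with hdbnsutil, run as the <sid>adm user (spsadm matches the example messages above):

```shell
#!/bin/sh
# Report this node's HANA system replication state. On a healthy
# cluster exactly one node reports mode "primary"; in the split-brain
# case above, both do. "spsadm" is the <sid>adm user in this example.
su - spsadm -c "hdbnsutil -sr_state"
```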

Being aware of common split-brain scenarios and taking these steps to mitigate them can save you time and protect data integrity.

Reproduced with permission from SIOS

Filed Under: Clustering Simplified Tagged With: high availability - SAP, Linux, SAP S/4HANA, split brain

Seven Skills That Your Team Needs if You are Going with Open Source High Availability

March 31, 2021 by Jason Aw Leave a Comment

Seven Skills That Your Team Needs if You are Going with Open Source High Availability

Seven Skills That Your Team Needs if You are Going with Open Source High Availability

In the realm of High Availability (HA) there are certain important skills your team needs if you decide to go the route of open source. Open source by definition denotes software that is freely available to use.

Today, there are numerous commercial implementations of high availability clusters for many operating systems provided by vendors like Microsoft and SIOS Technology Corp.  These commercial solutions provide resource monitoring, dependency management, failover and cluster policies, and some form of management, prepackaged and priced.  Alternatives to commercial implementations include several open source options that also give companies the opportunity to provide high availability for their enterprise.

As companies continue to look for optimizations, cost savings, and potential tighter control, a growing number of companies and customers are also considering moving to open source availability solutions.

Here are seven skills that your team may need for a move to Open Source HA:

1. Coding skills

In many cases the lack of pre-packaged and bundled support for enterprise applications means that your team will need to be able to develop solutions to protect components, fix issues with bundled components, or write application connectors to ensure application awareness is properly handled.  Lots of people can write scripts, but your team will need to know how to create and adhere to sound development practices and standards.  The basics of this include things such as:

  • Design and Architecture Requirements
  • Design Reviews
  • Code / Code Reviews and Unit Tests (preferably automated)

2. Knowledge of the technology environment

Many enterprise applications require integration with multiple systems in order to provide high availability that meets the Service Level Agreements (SLA) and Service Level Objectives (SLO).  Your team will require deep application awareness and knowledge of the technology environment to build protection and solutions for this integration with multiple enterprise systems.  You need people who know the ins and outs of the critical applications, the technology environment for those applications, networking, hardware, hypervisors, and an understanding of the environmental and application dependencies.  You’ll also need team members who understand the architecture, features, and limitations of the set of HA technologies that you intend to use from the Open Source community. Consider how much of these areas your team knows and understands:

  • Data passing and node communication
  • Node failure
  • Application management
  • System recovery and restart
  • Logging and messages
  • Data resilience and protection

3. Business process knowledge

You need someone to understand your business requirements, and the business process.  Your team needs professionals who understand the enterprise’s business and the processes that drive it.  Your team will need to know and understand how much budget is available to spend for developing the solution, how much risk the business is willing to take, and how to gather additional requirements that may be unspoken or unspecified.

The team will also need to know, or to hire someone who knows how to convert those business requirements into software requirements and how to manage a process that brings a minimum viable high availability solution to fruition that meets the needs of the business, the speed of the business, and fits within the processes of the business.

4. Experience with OS, Applications and Infrastructure

If you are looking to go all open, your team will need experience with operating systems, applications, and infrastructure.  You’ll need to understand the various OS release cycles, including kernel versions for Linux and updates and hotfixes for Windows.  You have applications in house that need to be supported, but you’ll also need to be diligent in understanding the application update cycle, application dependencies, and the intersection of application and OS support matrices.  If your environment is homogeneous, great.  Otherwise, your team will need to know the differences between RHEL, RHEL derivatives, and SUSE.  If you run both Linux and Windows, you’ll need to know both.  You’ll also need to understand the difference the infrastructure makes to the application and OS combination.  AWS and Azure present high availability differences that differ from GCP, on-premises platforms, and other hypervisors.

5. Change management capabilities

Imagine that you have the development team to create the solution, with technical and business knowledge along with a firm grasp of the OS, Infrastructure and Applications.  But, getting the scripts together is just the beginning.  Your team will also need change management capabilities.  How will your team keep track of the code changes and versions, packages, and package locations?  How will your team manage the releases of updates and changes?  Your team will need to be versed in a source repository, such as git, project management tools, such as Jira, and release train proficiency.  You’ll need a team that understands how to make updates to code, deliver patches and fixes, all while avoiding unwanted impact.

6. Data analytics and troubleshooting experience

When you enter the space of delivering your own HA solution, your team will need analytics and troubleshooting experience.  You’ll need resources who understand the intersection of application code, system messages, and application error logs and trace files.  When a system crash occurs, you’ll have to dig into the logs to troubleshoot and find the root cause, analyze the data to make recommendations, and be prepared to roll out changes (see #5 above).  Don’t forget, your team will also need to know and understand what the data from these logs and trace files can tell you about the health of your environment even when there isn’t an error, failure, or system crash.

7. Connections (Dev, QA, Partners, Community)

Let’s be honest, your business isn’t about delivering high availability, but if you decide to dive into the realm of open source HA you are going to need more help than just the brilliance on your team.  Key to getting that additional help will be understanding where to start and then making the right connections to community developers, testing experts, HA and application partners, and the open source community.  Open forums have been really helpful, but you’ll need to double-check whether their response times are compatible with your SLAs and SLOs.

Using open source solutions is an option many companies pursue out of cost concerns and a perception of flexibility, lower cost, and less risk.  But, buyer beware: there may be hidden costs in the form of new skills and management overhead, and hidden risks in the open source programs you will rely on for any “roll your own” HA solution.

– Cassius Rhue, VP, Customer Experience

Reproduced from SIOS

Filed Under: Clustering Simplified Tagged With: High Availability, high availability - SAP, Linux, Open Source

