SIOS SANless clusters

SIOS SANless clusters High-availability Machine Learning monitoring


Part 1: AI is All About the Data: The Shift from Computer Science to Data Science

April 10, 2017 by sios2017 Leave a Comment

This is the first post in a two-part series. Part 2 is available here. We are highlighting the shifting roles of IT as artificial intelligence (AI) driven data science evolves.

You may think that “artificial intelligence” and “machine learning” sound like trendy buzzwords. In reality, much of the hype about this technology is true. Unlike past periods of excitement over artificial intelligence, today’s interest is no longer an academic exercise. IT now has a real-world need for faster solutions to problems that are too complex for humans alone. With virtualization, IT teams gain access to a huge variety and volume of real-time machine data that they want to use to understand and solve issues in their IT operations environments. What’s more, businesses are seeing the value in dedicating budget and resources to artificial intelligence, specifically machine learning and deep learning, using this powerful technology to analyze that data to increase efficiency and performance.

Data Science to the Rescue

The complexity of managing virtual IT environments is stressing out traditional IT departments. However, IT pros are discovering that the solution lies in the data and in the artificial intelligence-based tools that can leverage it. Most are in the process of understanding how powerful data is in making decisions about configuring, optimizing, and troubleshooting virtual environments. Early-stage virtualization environments were monitored and managed the same way physical server environments were. That is, IT pros operated in discrete silos (network, storage, infrastructure, application). They used multiple threshold-based tools to monitor and manage them, focusing on individual metrics – CPU utilization, memory utilization, network latency, etc. When a metric exceeds a preset threshold, these tools create alerts – often thousands of alerts for a single issue.

If you compare a computer science approach to a data science (AI) approach, several observations become clear. The traditional approach is based on the computer science principles IT has used for the last 20 years. This threshold-based approach originated in relatively static, low-volume physical server environments, where IT staff could analyze individual alerts to determine what caused a problem, how critical it is, and how to fix it. However, unlike physical server environments, components in virtual environments are highly interdependent and constantly changing. Given the enormous growth of virtualized systems, IT pros cannot make informed decisions by analyzing alerts from a single silo at a time.
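The alert-storm problem described above can be illustrated with a minimal sketch of threshold-based monitoring. The metric names and threshold values here are hypothetical, chosen only to show how per-metric checks turn one underlying issue into many independent alerts:

```python
# Minimal sketch of threshold-based monitoring (hypothetical metric
# names and threshold values, for illustration only).
THRESHOLDS = {"cpu_util_pct": 85, "mem_util_pct": 90, "net_latency_ms": 50}

def check_sample(sample: dict) -> list[str]:
    """Return one alert per metric that exceeds its preset threshold."""
    return [
        f"ALERT: {metric}={value} exceeds {THRESHOLDS[metric]}"
        for metric, value in sample.items()
        if metric in THRESHOLDS and value > THRESHOLDS[metric]
    ]

# A single storage bottleneck can push many metrics past their limits
# at once, so one root cause yields a storm of per-metric alerts.
sample = {"cpu_util_pct": 97, "mem_util_pct": 95, "net_latency_ms": 120}
alerts = check_sample(sample)
print(len(alerts))  # three alerts for what is really one problem
```

Because each check knows nothing about the other metrics or about normal behavior over time, nothing in this model can group the alerts by cause, which is exactly the gap the data science approach addresses.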

Artificial Intelligence, Deep Learning, and Machine Learning

To get accurate answers to key questions in large virtualized environments, IT teams need an artificial intelligence-based analytics solution capable of simultaneously considering all of the data arising across the IT infrastructure silos and applications. In virtual environments, components share IT resources and interact with one another in subtle ways. You need a solution that understands these interactions and the changing patterns of their behavior over time – how behavior shifts through a business week and as seasonal changes occur over the course of a year. Most importantly, IT needs AI-driven solutions that do the work for IT: identifying root causes of issues, recommending solutions, predicting future problems, and forecasting future capacity needs.


Filed Under: Blog posts Tagged With: #AI, Artificial Intelligence, Machine Learning, VMware

Webinar Explains How to Eliminate Oversizing in Virtual Environments Without Risking Application Performance

April 4, 2017 by Margaret Hoagland Leave a Comment

April 6th at 2:00 PM Eastern/11:00 AM Pacific

Register Here
According to experts, virtual environments are over-provisioned by as much as 80%. IT is wasting tens of thousands of dollars a year on hardware, software, and IT time that doesn’t benefit the company. Without an effective way to see across the virtual infrastructure silos and into the interactions between components, IT is blindsided by performance issues, capacity overruns, and other unexpected consequences. As more important applications are moved into virtual environments, the pressure is even greater to deliver uninterrupted high performance at any cost. This limited view into virtual infrastructures is also causing IT to keep unnecessary snapshots, rogue VMDKs, and idle VMs. In this webinar, ActualTech founder and noted vExpert David Davis and SIOS’s director of product management, Jim Shocrylas, discuss simple solutions to right-sizing virtual environments that are possible with machine learning-based analytics.

Join this webinar to learn how machine learning-based analytics solutions deliver the precise, accurate information you need to right-size your virtual environment without risking performance or availability.

Watch a demonstration of a machine learning-based analytics tool showing how to eliminate application performance issues, configure virtual resources for optimal performance and efficiency, and forecast performance requirements.

  • vSphere admin challenges and solutions
  • Complex relationships and how to identify root causes
  • Identifying wasted resources and recouping costs
  • How machine learning can help you
  • Which VMs/apps need SSD caching, and what kind
  • Preventing problems before they happen and quickly solving them if they ever do

This live webinar is interactive so bring your questions.

Register Here

 

Filed Under: Blog posts, News and Events

Are You Over Provisioning Your Virtual Infrastructure?

March 1, 2017 by sios2017 1 Comment

Right-Sizing VMware Environments with Machine Learning

According to leading analysts, today’s virtual data centers are as much as 80 percent overprovisioned – an issue that wastes tens of thousands of dollars annually. The risks of overprovisioning virtual environments are urgent and immediate. IT managers face a variety of challenges related to correctly provisioning a virtual infrastructure. They need to stay within budget while avoiding downtime, delivering high performance for end-user productivity, ensuring high availability, and meeting a variety of other service requirements. IT often deals with the fear of application performance issues by simply throwing hardware at the problem, avoiding any possibility of under-provisioning. However, this strategy drives costly overspending and drains precious IT time. Even worse, when it comes time to compare the economics of on-premises hosting vs. cloud, the costs of on-premises infrastructure are greatly inflated when resources aren’t used efficiently. This can lead to poor decisions when planning a move to the cloud.

With all of these risks in play, how do IT teams know when their VMware environment is optimized?

Having access to accurate information that is simple to understand is essential. The first step in right-sizing application workloads is understanding the patterns of the workloads and the resources they consume over time. However, most tools take a simplistic approach when recommending resource optimization: they use simple averages of metrics about a virtual machine. This approach doesn’t give accurate information, because peaks and valleys of usage and the interrelationships of resources cause unanticipated consequences for other applications when you reconfigure them. To get the right information and make the right decisions for right-sizing, you need a solution such as SIOS iQ, which applies machine learning to learn the patterns of behavior of interrelated objects over time and across the infrastructure, and accurately recommends optimizations that help operations rather than hurt them. Intelligent analytics beats averaging every time.
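A tiny synthetic example shows why averaging misleads. This is not SIOS iQ’s actual algorithm, just an illustration of the difference between sizing on a mean versus sizing on observed peak behavior:

```python
# Sketch: why sizing from a simple average misleads (synthetic data,
# not SIOS iQ's actual method).
import statistics

# Hourly CPU demand (in vCPU-equivalents) for a bursty workload:
# mostly idle, with a nightly two-hour batch spike.
demand = [0.5] * 22 + [7.0, 8.0]

avg = statistics.mean(demand)                  # ~1.1 vCPU
p95 = sorted(demand)[int(0.95 * len(demand))]  # captures the spike

print(f"mean={avg:.2f} vCPU, p95={p95:.2f} vCPU")
# Sizing to the mean (~1 vCPU) would starve the nightly batch job;
# a pattern-aware approach sizes to observed peak behavior instead.
```

A real analytics solution would also account for when the peaks occur and which other workloads share the host, but even this toy case shows the average alone hiding an 8x peak.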

The second step towards a right-sizing strategy is eliminating the fear of dealing with performance issues when a problem happens or even preventing one in the first place.  This means having confidence that you have the accurate information needed to rapidly identify and fix an issue instead of simply throwing hardware at it and hoping it goes away.

Today’s tools are not very accurate. They lead IT through a maze of graphs and metrics without clear answers to key questions. IT teams typically operate and manage environments in separate silos – storage, networks, applications, and hosts, each with its own tools. Understanding the relationships among all the infrastructure components requires a lot of manual work and digging. Further, these tools don’t deliver information; they deliver only marginally accurate data, and they require IT to do a lot of work to get it. That’s because they are threshold-based: IT has to set individual thresholds for each metric it wants to measure – CPU utilization, memory utilization, network latency, etc. A single environment may need thousands of individual thresholds, each set, monitored, and continuously tuned. Every time the environment changes, such as when a workload is moved or a new VM is created, the thresholds have to be readjusted. When a threshold is exceeded, these tools often create thousands of alerts, burying important information in “alert storms” with no root cause identified or resolution recommended.

Even more importantly, because these alerts are triggered by measurements of a single metric on a single resource, IT has to interpret their meaning and importance. Ultimately, the accuracy of that interpretation depends on the skill and experience of the admin. When systems are changing and growing so fast that IT simply can’t keep up, the easiest course of action is to over-provision, wasting time and money in the process. Moreover, the actual root cause of the problem is often never fully addressed.

IT teams need smart tools that leverage advanced machine learning analytics to provide an aggregated, analyzed view of their entire infrastructure. A solution such as SIOS iQ helps to optimize provisioning, characterize underlying issues and identify and prioritize problems in virtual environments. SIOS iQ doesn’t use thresholds. It automatically analyzes the dynamic patterns of behavior between the related components in your environment over time. It automatically identifies a wide variety of wasted resources (rogue vmdks, snapshot waste, idle VMs). It also recommends changes to right-size all over- and under-provisioned VMs.

When it detects anomalous patterns of behavior, it provides a complete analysis of the root cause of the problem, the components affected by the problem, and recommended solutions to fix the problem. It not only recommends optimal provisioning of vCPU, vMem, and VMs, but also provides a detailed analysis of cost savings that its recommendations can deliver. Learn more about the SIOS iQ Savings and ROI calculator.

Here are four ways machine learning analytics can help avoid overprovisioning:

  1. Understand the causes of poor performance: By automatically and continuously observing resource utilization patterns in real time, machine learning analytics can identify over- and undersized VMs and recommend configuration settings to right-size each VM for performance. If something changes, machine learning can dynamically update the recommendations.
  2. Reduce dependency on IT teams for resource sizing: App owners often request as much storage capacity as possible, while VMware admins want to limit storage as much as possible. Machine learning analytics takes the guesswork out of resource sizing and eliminates the finger-pointing that often happens among enterprise IT teams when there’s a problem.
  3. Eliminate unused or wasted IT resources: SIOS iQ provides a savings and ROI analysis of wasted resources, including over-provisioned VMs, rogue VMDKs, unused VMs, and snapshot waste. It also provides recommendations for eliminating them and calculates the associated cost savings in both CapEx and OpEx.
  4. Determine whether a cluster can tolerate host failure: With machine learning analytics, IT pros can easily right-size CPU and storage without putting SQL Server or end-user productivity at risk. IT teams gain a deeper understanding of the capacity of the organization’s hosts and know whether a cluster can tolerate a failure or other issues.
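To make the waste-elimination idea concrete, here is a toy heuristic for flagging idle and oversized VMs and estimating reclaimable cost. The utilization thresholds and per-vCPU cost are illustrative assumptions, not SIOS iQ’s actual model:

```python
# Toy heuristic for flagging wasted VM capacity. Thresholds and the
# per-vCPU cost are illustrative assumptions, not SIOS iQ's model.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    vcpus: int
    avg_cpu_pct: float   # average utilization over the observation window
    peak_cpu_pct: float  # peak utilization over the same window

COST_PER_VCPU_MONTH = 25.0  # assumed dollar cost per provisioned vCPU

def classify(vm: VM) -> str:
    if vm.peak_cpu_pct < 5:
        return "idle"        # candidate for reclamation
    if vm.peak_cpu_pct < 40:
        return "oversized"   # candidate for right-sizing
    return "ok"

fleet = [
    VM("web-01", 4, 35.0, 80.0),
    VM("batch-07", 8, 1.0, 3.0),
    VM("dev-test", 4, 5.0, 20.0),
]
idle_vcpus = sum(vm.vcpus for vm in fleet if classify(vm) == "idle")
print(f"reclaimable: {idle_vcpus} vCPUs, "
      f"~${idle_vcpus * COST_PER_VCPU_MONTH:.0f}/month")
```

A production tool would look at patterns over weeks rather than two summary numbers, and would cover rogue VMDKs and snapshot waste as well, but the savings arithmetic works the same way.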

To learn more about how right-sizing your VMware environment with machine learning can save time and resources, check out our webinar: “Save Big by Right Sizing Your SQL Server VMware Environment.”

Filed Under: Blog posts, News and Events Tagged With: #over provisioning, Machine Learning, rogue VMDKs, snapshot waste

Understanding the Emerging Field of AIOps – Part II

February 23, 2017 by Margaret Hoagland 1 Comment

This is the second post in a two-part series highlighting how AIOps is changing IT performance optimization. Part 1 explained the basic principles of AIOps. The original text of this series appeared in an article on Information Management.  Here we look at the business requirements driving the trend to AIOps.

Why do businesses need AIOps?

As IT pros move more of their business-critical applications into virtualized environments, finding the root cause of application performance issues becomes more complicated than ever. IT managers have to find problems in a complex web of VMs, applications, storage devices, network devices, and services – components that are connected in ways IT can’t always understand.

Often, the components of a VMware or other virtual environment are interdependent and intertwined. When IT managers move a workload or change one component, they can unknowingly cause problems in several other components. If the components are in different so-called silos (network, infrastructure, application, storage, etc.), IT pros have even more trouble figuring out the actual cause of a problem.

Too Many Tools Required to Find Root Causes of Performance Issues

SIOS AIOps Survey

Correlating IT performance issues to their root causes is difficult, if not impossible, for IT leaders. According to a recent SIOS report, 78 percent of IT professionals use multiple tools – such as application monitoring, reporting, and infrastructure analytics – to identify the cause of application performance issues in VMware.

Often, when faced with an issue, IT assembles a team with representatives from each IT silo or area of expertise. Each team member uses his or her own diagnostic tools and looks at the problem from a silo-specific perspective. The team members then compare the results of their individual analyses to identify common elements, such as changes in infrastructure that show up in several analyses in the same time frame. This process is highly manual. As a result, IT departments waste more and more of their budget on manual work and inaccurate trial-and-error inefficiencies.

To solve this problem and reduce wasted time, IT teams are adopting an AIOps approach. AIOps applies artificial intelligence (i.e., machine learning and deep learning) to automate problem-solving. The AIOps trend is an important shift away from traditional threshold-based approaches that measure individual qualities (CPU utilization, latency, etc.) toward a more holistic, data-driven approach. IT managers are using analytics tools that analyze data across the infrastructure silos in real time – advanced deep learning and machine learning analytics tools that learn the patterns of behavior between interdependent components over time. As a result, these tools can automatically identify behaviors between components that may indicate a problem. More importantly, they automatically recommend the specific steps to resolve it.
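The core shift from fixed thresholds to learned behavior can be sketched in a few lines. This toy z-score model stands in for the far richer learned models real AIOps platforms use; it simply shows a baseline of normal behavior being fitted from history rather than set by hand:

```python
# Sketch of the AIOps idea: learn a baseline of normal behavior per
# metric and flag deviations, instead of hand-set fixed thresholds.
# (A toy z-score model; real platforms use much richer learned models.)
import statistics

def fit_baseline(history: list[float]) -> tuple[float, float]:
    """Learn the mean and stdev of a metric's normal behavior."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, baseline: tuple[float, float],
                 z: float = 3.0) -> bool:
    mean, stdev = baseline
    return abs(value - mean) > z * stdev

# Latency that normally hovers around 10 ms needs no hand-tuned
# threshold; the baseline is learned from observed history.
history = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4]
baseline = fit_baseline(history)
print(is_anomalous(10.5, baseline), is_anomalous(25.0, baseline))
```

Unlike a fixed threshold, a learned baseline adapts automatically when the environment changes, which is what removes the continuous manual tuning burden described above.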

What’s Next for AIOps?

Virtual IT environments are creating an enormous volume of data and an unprecedented level of complexity. As a result, IT managers cannot manage these environments effectively with traditional, manual methods. Over the next few years, the IT profession will rapidly move from the traditional computer science approach to a modern “data science” AIOps approach. For IT teams, this means embracing machine learning-based analytics solutions and understanding how to use them to solve problems efficiently and effectively. Finally, executives need to work with their IT departments to identify the right AIOps platform for their business.

Read Part 1

Filed Under: Blog posts, News and Events Tagged With: #AIOps, Machine Learning, Sergey Razin, VMware

What You Need to Know About the Emerging Field of AIOps – Part 1

February 16, 2017 by sios2017 Leave a Comment

This is the first post in a two-part series. We are highlighting how AIOps is changing IT performance optimization. The original text of this series appeared in an article on Information Management.

During the next two years, companies are set to spend $31.3 billion on cognitive systems tools. Today, companies are using tools based on these technologies (i.e., data analytics and machine learning) to solve problems in a wide range of areas – for example, artificial intelligence (AI)-powered customer service bots and trucking routes designed by data scientists. Ironically, information technology (IT) departments have not yet fully leveraged the power of machine learning-based analytics for IT itself.

Survey Shows More Critical Apps in VMware

However, that is changing because IT environments are becoming increasingly complex as they move from physical servers to virtual environments. According to a recent study from SIOS Technology, 81 percent of IT teams are running business-critical applications in VMware environments.

Virtual environments are made up of components – VMs, applications, storage, and network – that are highly interrelated and constantly changing. To manage and optimize these environments, IT managers have to analyze an enormous volume of data and learn the patterns of behavior between components. This lets them accurately correlate application service issues to their root cause in the virtual environment. As a result, a new field has emerged: AIOps.

What is AIOps?

AIOps (algorithmic IT operations platforms) is a new term that Gartner uses to describe the next phase of IT operations analytics. These platforms use machine learning and deep learning technology to automate the process of finding performance issues in IT operations.

Right now, Gartner estimates only five percent of businesses have an AIOps platform in place. However, more businesses will adopt these platforms during the next two years, bringing that number to 25 percent. Importantly, AIOps replaces human intelligence with machine intelligence. It deciphers interactions within virtual IT environments and, consequently, can uncover infrastructure issues, correlate them to application operations problems, and recommend solutions.

AIOps platforms use machine learning to understand how these environments behave over time to identify abnormal behavior. Furthermore, IT can even use AIOps platforms to find and stop potential threats before they become application performance issues.

Filed Under: Blog posts, News and Events Tagged With: #AIOps, IT operations analytics, root cause analysis, VMware performance
