SIOS SANless clusters: High-Availability Machine Learning Monitoring


The IT Shift from Computer Science to Data Science

November 13, 2017 by Jason Aw Leave a Comment

You may think that the words “artificial intelligence” or “machine learning” sound like trendy buzzwords. In reality, much of the hype about this technology is true. Unlike past periods of excitement over artificial intelligence, today’s interest is no longer an academic exercise. Now, IT has a real-world need for faster solutions to problems that are too complex for humans alone. This includes identifying the root causes of performance issues in virtual infrastructures.

Today, almost every large enterprise has virtualized part, or all, of their data centers. With virtualization, IT teams gain access to a huge variety and volume of real-time machine data they want to use to understand and solve the issues in their IT operations environments. However, the complexity of managing virtual IT environments is stressing out traditional IT departments. As a result, IT pros are discovering that the solution lies in the data and in the artificial intelligence-based tools that can leverage it.

Data Science to the Rescue

As worldwide digital data levels continue to climb, companies are working to find the business value in their data and to adapt their computer science strategies to the evolving data science market. Legacy management and monitoring tools used the same approach they used for physical server environments – that is, looking at discrete silos (network, storage, infrastructure, application). They used multiple manually set thresholds to focus on individual metrics – CPU utilization, memory utilization, network latency, etc. – within each silo.

This threshold-based approach originated in relatively static, well-understood physical server environments and has proven ineffective in handling the complexity of today’s virtual environments. Unlike their counterparts in physical server environments, components in virtual environments share host resources, creating complex, highly interdependent relationships between them. They are also highly dynamic, enabling IT to continually create and move workloads across VMs. IT pros can no longer make informed decisions using yesterday’s manual computer science approaches, analyzing alerts from a single silo at a time. This is why companies are turning to a “data science” approach that leverages the sophisticated AI disciplines of machine learning and deep learning – a holistic, automated alternative to the time-consuming, manual process of troubleshooting performance issues and optimizing virtual environments.

Machine Learning Analytics Tools Provide the Answers

Rather than monitoring individual metrics as threshold-based tools do, advanced machine learning-based solutions learn the complex behavior of interrelated components as they change over time. They can consider multiple metrics of related components simultaneously. As a result, they deliver much more precise, accurate information about virtual environments than either primitive machine learning tools or traditional threshold-based tools. Instead of creating “alert storms”, they identify the meaningful incidents associated with abnormal behavior at a specific time of day, week, month, or year. And because machine learning is central to the design, no manual configuration is required. Advanced machine learning solutions can be up and running in minutes, learning behaviors immediately. This shift to a data-centric, behavior-based approach has major implications that significantly empower IT professionals. IT pros will always need domain expertise in computer science, but what analytical skills will IT need to become effective in this new AI-driven world?

Instead of spending their days reacting to and reworking application performance issues, IT will shift their focus from diagnosing problems to proactively predicting and avoiding them in the first place. Freed of the need to over-provision to ensure performance and reliability, they will be able to look for ways to optimize efficiency and spend their time focusing on larger goals. This allows IT to provide true business value and work on projects that drive company objectives forward. That kind of value gives IT an important voice in senior management, bringing them into the decision-making process and closing the gap between IT and operations. And as IT pros’ understanding and use of machine learning-based analytics tools advances, they will be at the forefront of building the foundation for automation and the future of the self-driving data center.

Jim’s Bio:

Jim Shocrylas is the Director of Product Management at SIOS. Jim has more than 20 years of experience in the IT industry, most recently as Portfolio Manager for EMC’s Emerging Technologies Division.

Filed Under: News and Events, News posts, Press Releases Tagged With: #AIOps, analytics, Artificial Intelligence, Machine Learning

Put an End to Trial and Error with Machine Learning Analytics

June 6, 2017 by Margaret Hoagland 2 Comments

When end-users report slow performance in business-critical applications, IT teams do everything they can to fix the problem as quickly as possible. In virtual environments, where the root causes of problems are rarely straightforward, they may spend days trying and testing multiple different solutions. Troubleshooting this way creates a huge drain on IT time and resources – and occasionally, morale. IT teams want to be innovators who add value to their business operations with new technologies that automate manual tasks, increase end-user productivity, streamline costs, and respond to business needs quickly and flexibly. Unfortunately, without the insights and automation that machine learning analytics provides, IT departments are wasting more and more time and resources on low-value problem-solving.

Virtual Infrastructures are Too Complex for One-Dimensional Approaches

What is causing this problem-solving quagmire? IT is running more business critical applications in complex, dynamic virtual infrastructures where traditional diagnostic and monitoring tools cannot identify root causes of application performance issues or provide specific steps to solve them. IT teams are still looking at their virtual infrastructures in individual operational silos – compute, application, storage, and network. They are using multiple tools to gather information about each silo and then piecing the results together manually to devise a theory about the root cause and a strategy for resolution.

Threshold-based Tools and Old-School Approaches

In a recent survey SIOS conducted, 78 percent of respondents said they are using multiple tools to identify the cause of application performance issues in VMware. Only 20 percent of respondents said the strategies they use to resolve these issues are completely accurate the first time.

Legacy monitoring tools use threshold-based technology that was originally developed for physical server environments. They help you keep physical components operating within specific parameters, such as CPU utilization, storage latency, and network latency. You manually set the parameter thresholds for every metric you want to monitor in every silo and these tools will alert you every time a threshold is exceeded – often hundreds of times for a single incident.
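As a rough illustration of the threshold model described above, the following Python sketch shows how a single incident can fan out into a flood of alerts. The metric names and threshold values here are invented for demonstration; they are not taken from any particular monitoring tool.

```python
# Illustrative sketch of threshold-based alerting (all values assumed).
THRESHOLDS = {
    "cpu_utilization_pct": 85.0,
    "storage_latency_ms": 20.0,
    "network_latency_ms": 50.0,
}

def check_thresholds(samples):
    """Emit one alert per metric, per sampling interval, that exceeds its threshold."""
    alerts = []
    for sample in samples:  # one dict of metric readings per interval
        for metric, value in sample.items():
            limit = THRESHOLDS.get(metric)
            if limit is not None and value > limit:
                alerts.append((metric, value, limit))
    return alerts

# A single incident (e.g., a noisy-neighbor VM) pushes several metrics
# over their limits for many consecutive intervals, so one root cause
# yields a storm of alerts rather than one actionable incident.
incident = [{"cpu_utilization_pct": 96.0, "storage_latency_ms": 42.0,
             "network_latency_ms": 12.0}] * 100
print(len(check_thresholds(incident)))  # prints 200
```

Two breached metrics over 100 intervals produce 200 alerts for one underlying problem, which is exactly the "hundreds of times for a single incident" pattern the article describes.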

More Data is Not More Information

In virtual environments, virtual resources share the physical host, storage, and network resources. These components work together in complex interrelationships that often mask the root causes of performance issues. IT pros responsible for each silo have to decipher hundreds of alerts and pinpoint what matters using their subjective opinions and good old trial and error.

Fortunately, new machine learning analytics solutions like SIOS iQ use deep learning techniques to look across the silos, factor in the interrelationships of virtual resources, and identify the root causes of application performance issues. They use predictive analytics technology to identify conditions that will cause performance issues in the future so you can avoid them. They provide a degree of automation, precision, and accuracy that humans with threshold-based tools cannot approximate.
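The predictive idea can be sketched with something as simple as a linear trend projection: fit growth from recent history and estimate when capacity runs out. Real analytics products use far richer models; this function and its numbers are purely illustrative and do not describe SIOS iQ's implementation.

```python
# Hedged sketch of predictive capacity analysis via a least-squares
# linear trend. All names and figures are assumptions for illustration.

def days_until_full(daily_usage_gb, capacity_gb):
    """Project days until capacity is exhausted from daily usage history."""
    n = len(daily_usage_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_usage_gb) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, daily_usage_gb))
             / sum((x - mean_x) ** 2 for x in xs))  # GB of growth per day
    if slope <= 0:
        return None  # usage flat or shrinking; no exhaustion forecast
    remaining = capacity_gb - daily_usage_gb[-1]
    return remaining / slope

# 10 GB/day growth against a 1000 GB datastore currently at 400 GB:
usage = [310 + 10 * d for d in range(10)]   # 310..400 GB over ten days
print(round(days_until_full(usage, 1000)))  # prints 60
```

Even this toy model turns raw metrics into a forward-looking answer ("about two months of headroom") instead of an after-the-fact alert.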

Machine Learning Analytics Eliminates Trial and Error

Machine learning analytics tools tell you how to resolve the issues. You don’t need to weed through hundreds of alerts or compare dashboards filled with charts to diagnose the problem. You get the information you need without the expertise of a data scientist. With machine learning analytics, no data selection, modeling, preparation, extraction, or configuration is necessary. SIOS iQ tells IT which infrastructure anomalies are important and which are minor so they can prioritize their valuable time.

With new and advanced machine learning and deep learning tools, IT teams can move from a reactive to proactive state. That means you can spend more time innovating and less time on trial-and-error.

Filed Under: Blog posts Tagged With: #ML, IT Analytics, Machine Learning

Part 2: AI is All About the Data: The Shift from Computer Science to Data Science

April 14, 2017 by sios2017 Leave a Comment

This is the second post in a two-part series. Part One is available here. We are highlighting the shifting roles of IT with the emergence of machine learning-based IT analytics tools.

Machine Learning Provides the Answers

The newest data science approach to managing and optimizing virtual infrastructures applies the AI discipline of machine learning (ML).

Rather than monitoring individual components in the traditional computer science way, ML tools analyze the behavior of interrelated components. They track the normal patterns of these complex behaviors as they change over time. Machine learning-based analytics tools automatically identify the root causes of performance issues and recommend the steps needed to fix them.
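To make the contrast with thresholds concrete, here is a minimal, hypothetical sketch of the behavior-based idea: learn the normal joint pattern of related metrics for each hour of the day, then make one anomaly decision per interval across all metrics together. The z-score rule and every name below are illustrative assumptions, not a description of any vendor's algorithm.

```python
# Hedged sketch of behavior-based anomaly detection (names assumed).
import statistics
from collections import defaultdict

def learn_baseline(history):
    """history: list of (hour_of_day, {metric: value}) observations."""
    buckets = defaultdict(lambda: defaultdict(list))
    for hour, metrics in history:
        for name, value in metrics.items():
            buckets[hour][name].append(value)
    # Normal behavior per hour: (mean, stdev) for each metric.
    return {
        hour: {name: (statistics.mean(vals), statistics.pstdev(vals) or 1.0)
               for name, vals in per_metric.items()}
        for hour, per_metric in buckets.items()
    }

def is_anomalous(baseline, hour, metrics, z_limit=3.0):
    """One decision per interval across all metrics together --
    instead of one alert per metric per threshold breach."""
    scores = []
    for name, value in metrics.items():
        mean, stdev = baseline[hour][name]
        scores.append(abs(value - mean) / stdev)
    return max(scores) > z_limit
```

Because the baseline is bucketed by time, a CPU spike at 9 a.m. on a batch server that always spikes at 9 a.m. is normal, while the same reading at 3 a.m. is flagged, and no thresholds were ever set by hand.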

This shift to a data-centric, behavior-based approach has major implications that significantly empower IT professionals. IT pros will always need domain expertise in computer science. But what analytical skills will IT need to become effective in this new AI-driven world?

Earlier analytics tools were general purpose or provided relatively low-level primitives or APIs, leaving IT to determine how to apply them for specific purposes. Those early tools were largely impractical because they had limited applicability, and IT pros using them had to have a deep analytical background. New tools are much different. They allow IT pros to leapfrog ahead – to use advanced data science approaches without specialized training. They automatically deliver fast, accurate solutions to complex problems like root cause analysis, rightsizing, and capacity planning.

First, IT will shift their emphasis from diagnosing problems to avoiding them in the first place. Next, freed of the need to over-provision to ensure performance and reliability, they will look for ways to optimize efficiency. Finally, they will use ML tools to implement strategies to evolve and scale their environments to support their business’s operations.

And as IT pros’ understanding and use of machine learning-based analytics tools matures, they will be at the forefront of building the foundation for automation and the future of the self-driving data center.

Read Part 1


Filed Under: Blog posts Tagged With: Artificial Intelligence, Machine Learning

Part 1: AI is All About the Data: The Shift from Computer Science to Data Science

April 10, 2017 by sios2017 Leave a Comment

This is the first post in a two-part series. Part 2 is available here. We are highlighting the shifting roles of IT as artificial intelligence (AI) driven data science evolves.

You may think that the words “artificial intelligence” or “machine learning” sound like trendy buzzwords. In reality, much of the hype about this technology is true. Unlike past periods of excitement over artificial intelligence, today’s interest is no longer an academic exercise. Now, IT has a real-world need for faster solutions to problems that are too complex for humans alone. With virtualization, IT teams gain access to a huge variety and volume of real-time machine data that they want to use to understand and solve the issues in their IT operations environments. What’s more, businesses are seeing the value in dedicating budget and resources to leveraging artificial intelligence, specifically machine learning and deep learning. They are using this powerful technology to analyze this data to increase efficiency and performance.

Data Science to the Rescue

The complexity of managing virtual IT environments is stressing out traditional IT departments. However, IT pros are discovering that the solution lies in the data and in the artificial intelligence-based tools that can leverage it. Most are in the process of understanding how powerful data is in making decisions about configuring, optimizing, and troubleshooting virtual environments. Early-stage virtualization environments were monitored and managed in the same way physical server environments were. That is, IT pros operated in discrete silos (network, storage, infrastructure, application). They used multiple threshold-based tools to monitor and manage them, focusing on individual metrics – CPU utilization, memory utilization, network latency, etc. When a metric exceeds a preset threshold, these tools create alerts – often thousands of alerts for a single issue.

If you compare a computer science approach to a data science (AI) approach, several observations become clear. The traditional approach is based on computer science principles that IT has used for the last 20 years. This threshold-based approach originated in relatively static, low-volume physical server environments, where IT staff analyze individual alerts to determine what caused a problem, how critical it is, and how to fix it. However, unlike physical server environments, components in virtual environments are highly interdependent and constantly changing. Given the enormous growth of virtualized systems, IT pros cannot make informed decisions by analyzing alerts from a single silo at a time.

Artificial Intelligence, Deep Learning, and Machine Learning

To get accurate answers to key questions in large virtualized environments, IT teams need an artificial intelligence-based analytics solution. They need a solution capable of simultaneously considering all of the data arising from across the IT infrastructure silos and applications. In virtual environments, components share IT resources and interact with one another in subtle ways. You need a solution that understands these interactions and the changing patterns of their behavior over time – how that behavior shifts through a business week and as seasonal changes occur over the course of a year. Most importantly, IT needs AI-driven solutions that do the work for them: identifying the root causes of issues, recommending solutions, predicting future problems, and forecasting future capacity needs.


Filed Under: Blog posts Tagged With: #AI, Artificial Intelligence, Machine Learning, VMware

Are You Over Provisioning Your Virtual Infrastructure?

March 1, 2017 by sios2017 1 Comment

Right-Sizing VMware Environments with Machine Learning

According to leading analysts, today’s virtual data centers are as much as 80 percent overprovisioned – an issue that wastes tens of thousands of dollars annually. The risks of overprovisioning virtual environments are urgent and immediate. IT managers face a variety of challenges related to correctly provisioning a virtual infrastructure. They need to stay within budget while avoiding downtime, delivering high performance for end-user productivity, ensuring high availability, and meeting a variety of other service requirements. IT often deals with the fear of application performance issues by simply throwing hardware at the problem and avoiding any possibility of under-provisioning. However, this strategy drives costly overspending and drains precious IT time. Even worse, when it comes time to compare the economics of on-premises hosting vs. cloud, the costs of on-premises infrastructures are greatly inflated when resources aren’t being used efficiently. This can lead to poor decisions when planning a move to the cloud.

With all of these risks in play, how do IT teams know when their VMware environment is optimized?

Having access to accurate information that is simple to understand is essential. The first step in right-sizing application workloads is understanding the patterns of the workloads and the resources they consume over time. However, most tools take a simplistic approach when recommending resource optimization: they use simple averages of metrics about a virtual machine. This approach doesn’t give accurate information, because averages hide the peaks and valleys of usage, and the interrelationships of resources mean that reconfiguring one application can have unanticipated consequences for others. To get the right information and make the right decisions for right-sizing, you need a solution such as SIOS iQ, which applies machine learning to learn patterns of behavior of interrelated objects over time and across the infrastructure, accurately recommending optimizations that help operations rather than hurt them. Intelligent analytics beats averaging every time.
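A tiny worked example shows why averaging misleads: a VM that idles 20 hours a day and runs hot for four looks under-used on average, even though shrinking it would starve the daily peak. The workload and numbers below are invented purely for illustration.

```python
# Illustrative sketch: average vs. peak view of the same daily workload.
hourly_cpu_pct = [5] * 20 + [95] * 4  # idle most of the day, busy 4 hours

avg = sum(hourly_cpu_pct) / len(hourly_cpu_pct)  # 20.0 percent
peak = max(hourly_cpu_pct)                       # 95 percent

# An average-based tool would call this VM heavily over-provisioned and
# recommend shrinking it, starving the recurring busy period. A
# pattern-aware approach keeps headroom for the daily peak.
print(f"average={avg:.0f}% peak={peak}%")  # prints average=20% peak=95%
```

The two numbers describe the same machine, yet lead to opposite sizing decisions, which is why the article argues for learning usage patterns rather than averaging them away.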

The second step towards a right-sizing strategy is eliminating the fear of dealing with performance issues when a problem happens or even preventing one in the first place.  This means having confidence that you have the accurate information needed to rapidly identify and fix an issue instead of simply throwing hardware at it and hoping it goes away.

Today’s tools are not very accurate. They lead IT through a maze of graphs and metrics without clear answers to key questions. IT teams typically operate and manage environments in separate silos – storage, networks, applications, and hosts – each with its own tools. Understanding the relationships among all the infrastructure components requires a lot of manual work and digging. Further, these tools don’t deliver information; they only deliver marginally accurate data, and they require IT to do a lot of work to get it. That’s because they are threshold-based. IT has to set individual thresholds for each metric it wants to measure – CPU utilization, memory utilization, network latency, etc. A single environment may need to set, monitor, and continuously tune thousands of individual thresholds. Every time the environment changes, such as when a workload is moved or a new VM is created, the thresholds have to be readjusted. When a threshold is exceeded, these tools often create thousands of alerts, burying important information in “alert storms” with no root cause identified or resolution recommended.

Even more importantly, because these alerts are triggered off measurements of a single metric on a single resource, IT has to interpret their meaning and importance. Ultimately, the accuracy of that interpretation is left to the skill and experience of the admin. Systems are changing and growing so fast that IT simply can’t keep up with it all, and the easiest course of action is to over-provision – wasting time and money in the process. Moreover, the actual root cause of the problem is often never fully addressed.

IT teams need smart tools that leverage advanced machine learning analytics to provide an aggregated, analyzed view of their entire infrastructure. A solution such as SIOS iQ helps to optimize provisioning, characterize underlying issues and identify and prioritize problems in virtual environments. SIOS iQ doesn’t use thresholds. It automatically analyzes the dynamic patterns of behavior between the related components in your environment over time. It automatically identifies a wide variety of wasted resources (rogue vmdks, snapshot waste, idle VMs). It also recommends changes to right-size all over- and under-provisioned VMs.

When it detects anomalous patterns of behavior, it provides a complete analysis of the root cause of the problem, the components affected by the problem, and recommended solutions to fix the problem. It not only recommends optimal provisioning of vCPU, vMem, and VMs, but also provides a detailed analysis of cost savings that its recommendations can deliver. Learn more about the SIOS iQ Savings and ROI calculator.

Here are four ways machine learning analytics can help avoid overprovisioning:

  1. Understand the causes of poor performance: By automatically and continuously observing resource utilization patterns in real time, machine learning analytics can identify over- and undersized VMs and recommend configuration settings to right-size the VM for performance. If there’s a change, machine learning can dynamically update the recommendations.
  2. Reduce dependency on IT teams for resource sizing: App owners often request as much storage capacity as possible, while VMware admins want to limit storage as much as possible. Machine learning analytics takes the guesswork out of resource sizing and eliminates the finger-pointing that often happens among enterprise IT teams when there’s a problem.
  3. Eliminate unused or wasted IT resources: SIOS iQ will provide a savings and ROI analysis of wasted resources, including over-provisioned VMs, rogue VMDKs, unused VMs, and snapshot waste. It also provides recommendations for eliminating them and calculates the associated cost savings in both CapEx and OpEx.
  4. Determine whether a cluster can tolerate host failure: With machine learning analytics, IT pros can easily right-size CPU and storage without putting SQL Server or end user productivity at risk. IT teams gain a deeper understanding into the capacity of the organization’s hosts and know whether a cluster can tolerate failure or other issues.
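One simple way to picture right-sizing recommendations like those above is a percentile-based classifier: size against the 95th-percentile utilization rather than the average, so recurring peaks are respected. The thresholds, names, and sizing rule below are assumptions for illustration only, not SIOS iQ's actual method.

```python
# Hypothetical sketch of percentile-based right-sizing (values assumed).
def rightsize(vm_samples, low=0.20, high=0.85):
    """vm_samples: {vm_name: list of observed CPU utilization fractions}.
    Classify each VM by its 95th-percentile utilization."""
    recommendations = {}
    for vm, samples in vm_samples.items():
        p95 = sorted(samples)[int(0.95 * (len(samples) - 1))]
        if p95 < low:
            recommendations[vm] = "shrink"   # over-provisioned
        elif p95 > high:
            recommendations[vm] = "grow"     # under-provisioned
        else:
            recommendations[vm] = "keep"     # right-sized
    return recommendations
```

A real analytics engine would also weigh interrelated resources and time-of-day patterns before recommending a change; the point here is only that sizing off a high percentile, not an average, preserves headroom for peaks.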

To learn more about how right-sizing your VMware environment with machine learning can save time and resources, check out our webinar: “Save Big by Right Sizing Your SQL Server VMware Environment.”

Filed Under: Blog posts, News and Events Tagged With: #over provisioning, Machine Learning, rogue VMDKs, snapshot waste

