Category Archives: Blog posts

Announcing AWS Quick Start Deployment Templates for SIOS SQL Failover Cluster

AWS Quick Start Templates Deploy SQL High-Availability Failover Cluster in the Cloud

Many businesses struggle to deploy a high-availability failover cluster for SQL Server and other important applications in the cloud. This is because a failover cluster traditionally requires shared storage, which is not available or practical in most public clouds. As a result, many IT teams kept SQL Server on premises, where their network, storage, and server experts would take months to plan, order, install, and configure the physical environment for HA failover clustering. Finally, they would spend thousands of dollars upgrading to SQL Server Enterprise Edition to gain advanced clustering capabilities.

SANless Failover Clustering Enables Cost-Efficient SQL High Availability Protection in the Cloud

Today, SIOS DataKeeper Cluster Edition is the first HA/DR solution to combine fully automated, application-centric clustering and efficient data replication. By integrating seamlessly into Windows Server Failover Clustering (WSFC), it enables a WSFC to work in a cloud where shared storage is not possible. SIOS DataKeeper works by synchronizing local storage in real time using highly efficient block-level replication. In this way, it creates a SANless cluster to protect your Windows applications in the cloud. You can use it to protect SQL Server Standard Edition without the need for costly upgrades to SQL Server Enterprise Edition.

Quick Start Templates Make Deploying a Failover Cluster in AWS Easy

Now companies can easily deploy a two-node high-availability failover cluster automatically using an AWS Quick Start deployment. System administrators and managers can simply purchase the SIOS Amazon Machine Images (AMIs) on AWS Marketplace and use them to deploy a two-node SQL Server Standard Edition cluster in the AWS cloud with an AWS Quick Start template.

Quick Start templates are automated reference deployments for key workloads on AWS. Each Quick Start launches, configures, and runs the AWS services required to deploy a specific workload on AWS. Importantly, the templates follow AWS best practices for security and availability. As a result, Quick Starts eliminate manual steps with a single click – they are fast, low-cost, and customizable.
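For teams that script their deployments, launching a Quick Start is essentially a CloudFormation stack creation call. Here is a minimal sketch using the AWS SDK for Python (boto3); the template URL and parameter keys are illustrative placeholders rather than the actual SIOS DataKeeper Quick Start values, so consult the Quick Start documentation for the real ones.

    # Minimal sketch: launching a Quick Start (CloudFormation) stack with boto3.
    # The TemplateURL and parameter keys are placeholders, not the actual
    # SIOS DataKeeper Quick Start values.
    import boto3

    cloudformation = boto3.client("cloudformation", region_name="us-east-1")

    response = cloudformation.create_stack(
        StackName="sios-datakeeper-sql-cluster",
        TemplateURL="https://example-bucket.s3.amazonaws.com/sios/template.yaml",  # placeholder
        Parameters=[
            {"ParameterKey": "KeyPairName", "ParameterValue": "my-keypair"},                  # placeholder
            {"ParameterKey": "AvailabilityZones", "ParameterValue": "us-east-1a,us-east-1b"}, # placeholder
        ],
        Capabilities=["CAPABILITY_IAM"],  # Quick Starts typically create IAM resources
    )
    print("Stack creation started:", response["StackId"])

    # Block until the two-node cluster stack finishes deploying.
    cloudformation.get_waiter("stack_create_complete").wait(
        StackName="sios-datakeeper-sql-cluster"
    )
    print("Stack created.")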

The SIOS AMIs on AWS Marketplace provide an easy, convenient way for customers to purchase SIOS DataKeeper software to protect business-critical applications in AWS. You can use them to deploy a high-availability cluster using cost-efficient SQL Server Standard Edition in the cloud.

Customers can purchase SIOS DataKeeper through the AWS Marketplace at: https://aws.amazon.com/marketplace/seller-profile?id=3c91e2f7-fc8d-4cce-a8aa-1e37abcb4408

To learn more about the SIOS DataKeeper Quick Start for AWS Cloud, visit: https://aws.amazon.com/quickstart/architecture/sios-datakeeper/

To learn more about the SIOS DataKeeper Cluster Edition for High Availability in Cloud Deployments:

SAN and SANless Clusters Resources

Part 2 – AI: It’s All About the Data: The Shift from Computer Science to Data Science

This is the second post in a two-part series. Part One is available here. We are highlighting the shifting roles of IT with the emergence of machine learning-based IT analytics tools.

Machine Learning Provides the Answers

The newest data science approach to managing and optimizing virtual infrastructures applies the AI discipline of machine learning (ML).

Rather than monitoring individual components in the traditional computer science way, ML tools analyze the behavior of interrelated components. They track the normal patterns of these complex behaviors as they change over time. Machine learning-based analytics tools automatically identify the root causes of performance issues and recommend the steps needed to fix them.

This shift to a data-centric, behavior-based approach has major implications that significantly empower IT professionals. IT pros will always need domain expertise in computer science. But what analytical skills will IT need to become effective in this new AI-driven world?

Earlier analytics tools were general purpose or provided relatively low-level primitives and APIs, leaving IT to determine how to apply them for specific purposes. They were largely impractical because they had limited applicability, and the IT pros using them needed a deep analytical background. New tools are much different. They allow IT pros to leapfrog ahead and use advanced data science approaches without specialized training. They automatically deliver fast, accurate solutions to complex problems such as root cause analysis, right-sizing, and capacity planning.
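To make the contrast with per-metric thresholds concrete, here is a minimal, generic sketch of behavior-based analysis (not SIOS iQ’s actual algorithm, just an illustration of the idea). Instead of alerting on each metric in isolation, it learns the normal joint behavior of related metrics and flags samples that break that learned relationship, even when every individual value looks ordinary.

    # Generic sketch of behavior-based anomaly detection on interrelated metrics.
    # This is NOT SIOS iQ's algorithm; it only illustrates the general idea.
    import numpy as np
    from sklearn.covariance import EllipticEnvelope

    rng = np.random.default_rng(0)

    # Under normal load, storage latency rises and falls with CPU utilization.
    cpu = rng.normal(50.0, 10.0, 2000)                        # % CPU utilization
    latency = 2.0 + 0.10 * cpu + rng.normal(0.0, 0.3, 2000)   # ms, tracks CPU
    normal_samples = np.column_stack([cpu, latency])

    # Learn the normal joint pattern of behavior across the two metrics.
    model = EllipticEnvelope(contamination=0.01, random_state=0).fit(normal_samples)

    # Each value alone looks ordinary (35% CPU and 8.5 ms latency both occur
    # routinely), so per-metric thresholds stay quiet. The combination, however,
    # breaks the learned relationship and is flagged as anomalous behavior.
    suspicious = np.array([[35.0, 8.5]])
    print(model.predict(suspicious))   # -1 means the joint behavior is anomalous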

With these tools in place, IT’s focus will change in three ways. First, IT will shift their emphasis from diagnosing problems to avoiding them in the first place. Next, freed of the need to over-provision to ensure performance and reliability, they will look for ways to optimize efficiency. Finally, they will use ML tools to implement strategies to evolve and scale their environments to support their business’s operations.

And as IT pros mature their understanding and use of machine learning-based analytics tools, they will be at the forefront of building the foundation for automation and the future of the self-driving data center.

Read Part 1

Part 1: AI is All About the Data: The Shift from Computer Science to Data Science

This is the first post in a two-part series. Part 2 is available here. We are highlighting the shifting roles of IT as artificial intelligence (AI)-driven data science evolves.

You may think that the words “artificial intelligence” or “machine learning” sound like trendy buzzwords. In reality, much of the hype about this technology is true. Unlike past periods of excitement over artificial intelligence, today’s interest is no longer an academic exercise. Now, IT has a real-world need for faster solutions to problems that are too complex for humans alone. With virtualization, IT teams gain access to a huge variety and volume of real-time machine data, and they want to use it to understand and solve the issues in their IT operations environments. What’s more, businesses are seeing the value in dedicating budget and resources to leverage artificial intelligence, specifically machine learning and deep learning. They are using this powerful technology to analyze this data to increase efficiency and performance.

Data Science to the Rescue

The complexity of managing virtual IT environments is stressing out traditional IT departments. However, IT pros are discovering that the solution lies in the data and in the artificial intelligence-based tools that can leverage it. Most are in the process of understanding how powerful data is in making decisions about configuring, optimizing, and troubleshooting virtual environments. Early-stage virtualization environments were monitored and managed in the same way physical server environments were. That is, IT pros operated in discrete silos (network, storage, infrastructure, application). They used multiple threshold-based tools to monitor and manage them, focusing on individual metrics – CPU utilization, memory utilization, network latency, etc. When a metric exceeded a preset threshold, these tools created alerts – often thousands of alerts for a single issue.

If you compare a computer science approach to a data science (AI) approach, several observations become clear. The traditional approach is based on the computer science principles IT has used for the last 20 years. This threshold-based approach originated in relatively static, low-volume physical server environments, where IT staff analyze individual alerts to determine what caused a problem, how critical it is, and how to fix it. However, unlike physical server environments, components in virtual environments are highly interdependent and constantly changing. Given the enormous growth of virtualized systems, IT pros cannot make informed decisions by analyzing alerts from a single silo at a time.

Artificial Intelligence, Deep Learning, and Machine Learning

To get accurate answers to key questions in large virtualized environments, IT teams need an artificial intelligence-based analytics solution capable of simultaneously considering all of the data arising across the IT infrastructure silos and applications. In virtual environments, components share IT resources and interact with one another in subtle ways. You need a solution that understands these interactions and the changing patterns of their behavior over time, both through a business week and as seasonal changes occur over the course of a year. Most importantly, IT needs AI-driven solutions that do the work for IT: identifying the root causes of issues, recommending solutions, predicting future problems, and forecasting future capacity needs.

Stopping Alert Storms and Finding Root Causes of Performance Issues in VMware vSphere Infrastructures with Machine Learning

View this recorded webinar to hear noted vExpert and principal analyst for ActualTech Media, David M. Davis, and Jim Shocrylas, SIOS Technology’s Director of Product Management, discuss techniques for stopping alert storms and dealing with a wide range of problems facing IT managers in VMware environments. View now.

David discusses the changes in IT that led to the creation of the IO “blender” that we see today and the ways traditional threshold-based monitoring and management tools are falling short. He reviews the challenges this situation poses for IT managers who are trying to solve problems, eliminate wasted resources, and meet service levels – from overwhelming alert storms, to a “siloed” view of the infrastructure, to inefficient (and costly) trial-and-error problem-solving.

He also discusses the ways new machine learning-based IT analytics are answering the questions that traditional threshold-based solutions cannot – what is the root cause of the problem and how do you fix it. Jim Shocrylas provides a demo of the SIOS iQ machine learning analytics solution and shows how easy it is to:

  • Be aware of important issues without alert storms
  • Identify root causes of performance issues quickly, easily, and accurately
  • Right-size performance and capacity in vSphere infrastructures without risk
  • Prevent problems before they happen

View now

Recorded Webinar Explains How to Eliminate Oversizing in Virtual Environments without Risking Application Performance

View the Webinar Now: Easy, Risk-Free Ways to Right Size Your VMware Environment

According to experts, virtual environments are over-provisioned by as much as 80%. IT is wasting tens of thousands of dollars a year on hardware, software, and IT time that doesn’t benefit the company. Without an effective way to see across the virtual infrastructure silos and into the interactions between components, IT is blind-sided by performance issues, capacity over-runs, and other unexpected consequences. As more important applications are moved into virtual environments, the pressure is even greater to deliver uninterrupted high performance at any cost. This limited view into virtual infrastructures is also causing IT to keep unnecessary snapshots, rogue VMDKs, and idle VMs. In this webinar, ActualTech founder and noted vExpert David Davis and SIOS’s director of product management, Jim Shocrylas, discuss simple solutions to right-sizing virtual environments that are possible with machine learning-based analytics.

Join this webinar to learn how machine learning based analytics solutions are delivering the precise, accurate information you need to right size your virtual environment without risking performance or availability.

Watch a demonstration of a machine learning-based analytics tool showing how to eliminate application performance issues, configure virtual resources for optimal performance and efficiency, and forecast performance requirements. Topics include:

  • vSphere Admin challenges and solutions
  • Complex relationships and how to identify root cause
  • Identify wasted resources and recoup costs
  • Machine learning and how it can help you
  • What VMs/Apps need SSD caching and what kind
  • Prevent problems before they happen and quickly solve them if they ever do

View the Webinar Now: Easy, Risk-Free Ways to Right Size Your VMware Environment

Are You Over Provisioning Your Virtual Infrastructure?

Right-Sizing VMware Environments with Machine Learning

According to leading analysts, today’s virtual data centers are as much as 80 percent overprovisioned – an issue that wastes tens of thousands of dollars annually. The risks of overprovisioning virtual environments are urgent and immediate. IT managers face a variety of challenges related to correctly provisioning a virtual infrastructure. They need to stay within budget while avoiding downtime, delivering high performance for end-user productivity, ensuring high availability, and meeting a variety of other service requirements. IT often deals with its fear of application performance issues by simply throwing hardware at the problem to avoid any possibility of under-provisioning. However, this strategy drives costly overspending and drains precious IT time. Even worse, when it comes time to compare the economics of on-premises hosting vs. cloud, the costs of on-premises infrastructures are greatly inflated when resources aren’t being used efficiently. This can lead to poor decisions when planning a move to the cloud.

With all of these risks in play, how do IT teams know when their VMware environment is optimized?

Having access to accurate information that is simple to understand is essential. The first step in right-sizing application workloads is understanding the patterns of the workloads and the resources they consume over time. However, most tools take a simplistic approach when recommending resource optimization. They use simple averages of metrics about a virtual machine. This approach doesn’t give accurate information: peaks and valleys of usage and the interrelationships of resources cause unanticipated consequences for other applications when you reconfigure them. To get the right information and make the right decisions for right-sizing, you need a solution such as SIOS iQ. SIOS iQ applies machine learning to learn the patterns of behavior of interrelated objects over time and across the infrastructure, and to accurately recommend optimizations that help operations, not hurt them. Intelligent analytics beats averaging every time.
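As a simple illustration of why averaging misleads (a generic numerical sketch, not how SIOS iQ computes its recommendations): a VM whose demand is low most of the week but spikes during business hours looks tiny on average, yet sizing it to that average would starve every peak.

    # Why simple averages mislead when right-sizing: a generic numerical sketch.
    import numpy as np

    rng = np.random.default_rng(1)

    # One sample every 5 minutes for a week (2016 samples) of CPU demand, in MHz.
    demand = rng.normal(400.0, 50.0, 2016)        # mostly idle baseline
    busy = rng.random(2016) < 0.15                # ~15% of samples fall in busy periods
    demand[busy] += rng.normal(2200.0, 300.0, busy.sum())

    print(f"mean demand:            {demand.mean():7.0f} MHz")            # looks small
    print(f"95th percentile demand: {np.percentile(demand, 95):7.0f} MHz")
    print(f"peak demand:            {demand.max():7.0f} MHz")

    # Sizing to the mean would throttle every busy period; a percentile-based size
    # plus headroom covers the real workload without keeping all the idle capacity.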

The second step towards a right-sizing strategy is eliminating the fear of dealing with performance issues when a problem happens or even preventing one in the first place.  This means having confidence that you have the accurate information needed to rapidly identify and fix an issue instead of simply throwing hardware at it and hoping it goes away.

Today’s tools are not very accurate. They lead IT through a maze of graphs and metrics without clear answers to key questions. IT teams typically operate and manage environments in separate silos – storage, networks, applications, and hosts, each with its own tools. Understanding the relationships among all the infrastructure components requires a lot of manual work and digging. Further, these tools don’t deliver information; they only deliver marginally accurate data, and they require IT to do a lot of work to get even that. That’s because they are threshold-based. IT has to set individual thresholds for each metric they want to measure – CPU utilization, memory utilization, network latency, etc. A single environment may need to set, monitor, and continuously tune thousands of individual thresholds. Every time the environment is changed, such as when a workload is moved or a new VM is created, the thresholds have to be readjusted. When a threshold is exceeded, these tools often create thousands of alerts, burying important information in “alert storms” with no root cause identified or resolution recommended.

Even more importantly, because these alerts are triggered off measurements of a single metric on a single resource, IT has to interpret their meaning and importance. Ultimately, the accuracy of interpretation is left to the skill and experience of the admin. Systems are changing and growing so fast that IT simply can’t keep up with it all, and the easiest course of action is to over-provision, wasting time and money in the process. Moreover, the actual root cause of the problem is often never fully addressed.

IT teams need smart tools that leverage advanced machine learning analytics to provide an aggregated, analyzed view of their entire infrastructure. A solution such as SIOS iQ helps to optimize provisioning, characterize underlying issues, and identify and prioritize problems in virtual environments. SIOS iQ doesn’t use thresholds. It automatically analyzes the dynamic patterns of behavior between the related components in your environment over time. It automatically identifies a wide variety of wasted resources (rogue VMDKs, snapshot waste, idle VMs). It also recommends changes to right-size all over- and under-provisioned VMs.

When it detects anomalous patterns of behavior, it provides a complete analysis of the root cause of the problem, the components affected by the problem, and recommended solutions to fix the problem. It not only recommends optimal provisioning of vCPU, vMem, and VMs, but also provides a detailed analysis of cost savings that its recommendations can deliver. Learn more about the SIOS iQ Savings and ROI calculator.

Here are four ways machine learning analytics can help avoid overprovisioning:

  1. Understand the causes of poor performance: By automatically and continuously observing resource utilization patterns in real time, machine learning analytics can identify over- and undersized VMs and recommend configuration settings to right-size each VM for performance. If there’s a change, machine learning can dynamically update the recommendations.
  2. Reduce dependency on IT teams for resource sizing: App owners often request as much storage capacity as possible, while VMware admins want to limit storage as much as possible. Machine learning analytics takes the guesswork out of resource sizing and eliminates the finger-pointing that often happens among enterprise IT teams when there’s a problem.
  3. Eliminate unused or wasted IT resources: SIOS iQ provides a savings and ROI analysis of wasted resources, including over-provisioned VMs, rogue VMDKs, unused VMs, and snapshot waste. It also provides recommendations for eliminating them and calculates the associated cost savings in both CapEx and OpEx (a simple sketch of that kind of arithmetic follows this list).
  4. Determine whether a cluster can tolerate host failure: With machine learning analytics, IT pros can easily right-size CPU and storage without putting SQL Server or end-user productivity at risk. IT teams gain a deeper understanding of the capacity of the organization’s hosts and know whether a cluster can tolerate failure or other issues.
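Once the wasted resources are identified, the savings estimate itself is straightforward arithmetic. The sketch below uses made-up unit costs and quantities purely for illustration; it is not SIOS iQ’s pricing model.

    # Back-of-the-envelope savings estimate for reclaimed resources.
    # All unit costs and quantities are hypothetical, for illustration only.
    COST_PER_VCPU = 35.0      # $ per vCPU per month (hypothetical)
    COST_PER_GB_RAM = 8.0     # $ per GB RAM per month (hypothetical)
    COST_PER_GB_DISK = 0.10   # $ per GB disk per month (hypothetical)

    reclaimable = {
        "overprovisioned_vcpus": 120,    # vCPUs freed by right-sizing
        "overprovisioned_ram_gb": 512,   # GB of RAM freed by right-sizing
        "rogue_vmdk_gb": 4000,           # orphaned VMDKs
        "snapshot_waste_gb": 2500,       # stale snapshots
    }

    monthly_savings = (
        reclaimable["overprovisioned_vcpus"] * COST_PER_VCPU
        + reclaimable["overprovisioned_ram_gb"] * COST_PER_GB_RAM
        + (reclaimable["rogue_vmdk_gb"] + reclaimable["snapshot_waste_gb"]) * COST_PER_GB_DISK
    )
    print(f"Estimated savings: ${monthly_savings:,.2f}/month "
          f"(${monthly_savings * 12:,.2f}/year)")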

To learn more about how right-sizing your VMware environment with machine learning can save time and resources, check out our webinar: “Save Big by Right Sizing Your SQL Server VMware Environment.”

Understanding the Emerging Field of AIOps – Part II

This is the second post in a two-part series highlighting how AIOps is changing IT performance optimization. Part 1 explained the basic principles of AIOps. The original text of this series appeared in an article on Information Management.  Here we look at the business requirements driving the trend to AIOps.

Why do businesses need AIOps?

IT pros are moving more of their business-critical applications into virtualized environments. As a result, finding the root cause of application performance issues is more complicated than ever. IT managers have to find problems in a complex web of VM applications, storage devices, network devices, and services. These components are connected in ways IT can’t always understand.

Often, the components of a VMware or other virtual environment are interdependent and intertwined. When IT managers move a workload or make a change to one component, they can cause problems in several other components without knowing it. If the components are in different so-called silos (network, infrastructure, application, storage, etc.), IT pros have even more trouble figuring out the actual cause of the problem.

Too Many Tools Required to Find Root Causes of Performance Issues

The process of correlating IT performance issues to their root causes is difficult, if not impossible, for IT leaders. According to a recent SIOS report, 78 percent of IT professionals are using multiple tools to identify the cause of application performance issues in VMware. For example, they are using tools such as application monitoring, reporting, and infrastructure analytics.

Often, when faced with an issue, IT assembles a team with representatives from each IT silo or area of expertise. Each team member uses his or her own diagnostic tools and looks at the problem from their own silo-specific perspective. Next, the team members compare the results of their individual analyses to identify common elements, such as changes in infrastructure that show up in several analyses in the same time frame. Frequently, this process is highly manual. As a result, IT departments are wasting more and more of their budget on manual work and inaccurate trial-and-error inefficiencies.

To solve this problem and reduce wasted time, they are using an AIOps approach. AIOps applies artificial intelligence (i.e., machine learning and deep learning) to automate problem-solving. The AIOps trend is an important shift away from traditional threshold-based approaches that measure individual qualities (CPU utilization, latency, etc.) to a more holistic, data-driven approach. IT managers are using analytics tools to analyze data across the infrastructure silos in real time. These advanced deep learning and machine learning analytics tools learn the patterns of behavior between interdependent components over time. As a result, they can automatically identify behaviors between components that may indicate a problem. More importantly, they automatically recommend the specific steps to resolve problems.

What’s Next for AIOps?

Virtual IT environments are creating an enormous volume of data and an unprecedented level of complexity. As a result, IT managers cannot manage these environments effectively with traditional, manual methods. Over the next few years, the IT profession will rapidly move from the traditional computer science approach to a modern “data science” AIOps approach. For IT teams, this means embracing machine learning-based analytics solutions and understanding how to use them to solve problems efficiently and effectively. Finally, executives need to work with their IT departments to identify the right AIOps platform for their business.

Read Part 1

What You Need to Know About the Emerging Field of AIOps – Part 1

This is the first post in a two-part series. We are highlighting how AIOps is changing IT performance optimization. The original text of this series appeared in an article on Information Management.

During the next two years, companies are set to spend $31.3 billion on cognitive systems tools. Today, companies are using tools based on these technologies (i.e., data analytics and machine learning) to solve problems in a wide range of areas, such as artificial intelligence (AI)-powered customer service bots and trucking routes designed by data scientists. Ironically, one department has not yet fully leveraged the power of machine learning-based analytics: information technology (IT) itself.

Survey Shows More Critical Apps in VMware

However, that is changing because IT environments are becoming increasingly complex as they move from physical servers to virtual environments. According to a recent study from SIOS Technology, 81 percent of IT teams are running business-critical applications in VMware environments.

Virtual environments are made up of components, such as VMs, applications, storage, and network, that are highly interrelated and constantly changing. To manage and optimize these environments, IT managers have to analyze an enormous volume of data and learn the patterns of behavior between components. This lets them accurately correlate application service issues to the root cause of the problem in the virtual environment. As a result, a new field has emerged – AIOps.

What is AIOps?

AIOps (algorithmic IT operations platforms) is a new term that Gartner uses to describe the next phase of IT operations analytics. These platforms use machine learning and deep learning technology to automate the process of finding performance issues in IT operations.

Right now, Gartner estimates only five percent of businesses have an AIOps platform in place. However, more businesses will adopt these platforms during the next two years, bringing that number to 25 percent. Importantly, AIOps replaces human intelligence with machine intelligence: it deciphers interactions within virtual IT environments, uncovers infrastructure issues, correlates them to application operations problems, and recommends solutions.

AIOps platforms use machine learning to understand how these environments behave over time to identify abnormal behavior. Furthermore, IT can even use AIOps platforms to find and stop potential threats before they become application performance issues.

Roadblocks to Optimizing Application Performance in VMware Environments – Part II

This is the second blog post in a two-part series examining challenges IT teams face in optimizing application performance and other issues in VMware environments. The original text of this series appeared in an article on Data Informed.  

In part one of this series, we uncovered that IT teams are currently using multiple tools to understand application performance issues in VMware. Read on to learn about the other challenges IT teams are facing in virtual environments.

Application Performance Issues Are Eating Away at Time and Resources

While IT professionals are consulting their VMware environment application monitoring tools, critical hours are ticking by. For smaller businesses that have limited IT staff, this can cause considerable delays in day-to-day operations. IT teams cannot afford to waste time chasing false positives or focusing their energy on areas of the environment that are not truly the root cause of their application performance issue. Additionally, many IT teams are inundated by alerts from their VMware environment monitoring tools, making it difficult to pinpoint which alerts are meaningless and which are worth diagnosing to solve a potential application performance issue.

These interruptions are significant, considering that our recent survey found more than half of IT professionals are facing application performance issues every month. Additionally, 44 percent indicated that it takes them more than three hours to resolve application performance issues as they arise. Overall, it’s clear that IT teams are frequently facing issues in VMware environments, and they are wasting critical manpower and resources solving these issues.

The Causes of Application Performance Issues Remain a Mystery

Despite the wide variety of tools available and the volume of time spent solving business-critical application performance issues, IT professionals remain uncertain that they can attack these problems head-on. Of the IT professionals surveyed, only 20 percent believe the strategies they implement to resolve application performance issues are 100 percent accurate the first time. Even more alarming, seven percent would characterize their application performance issue resolutions as an “educated guess.” And across the board, it is rare for IT teams to implement a perfect solution to a performance issue – they frequently require a level of adjustment or even a complete rework.

What’s Next?

This trend towards moving business-critical data off of physical servers and onto virtual environments will continue for the foreseeable future, and the relationships between VM applications, network devices, storage devices, and services will only grow more complex. Many CIOs are turning to machine learning solutions to help them better understand their infrastructure and learn to optimize the relationships that exist between the different IT disciplines. As a result, the core approach used by IT professionals is changing from a traditional computer science approach to a data science-centric approach. We’ve also seen the rise of “AIOps,” or algorithmic IT operations platforms, in the last year. Gartner, which coined the term to describe machine learning applications in IT, estimates only five percent of businesses currently have AIOps platforms in place. However, that number is expected to mushroom to 25 percent in the next two years as IT becomes increasingly complex and difficult to manage.

Read Roadblocks to Optimizing Application Performance in VMware Environments part one

Roadblocks to Optimizing Application Performance in VMware Environments – Part I

This is the first post in a two-part series highlighting challenges IT teams face in optimizing VMware performance. The original text of this series appeared in an article on Data Informed.

When virtual computing first became popular, it was primarily used for non-business critical applications in pre-production environments, while critical applications were kept on physical servers. However, IT has warmed up to virtualization, recognizing the many benefits (reduced cost, increased agility, etc.) and moving more business-critical and database applications into virtual environments. In a recent survey of 518 IT professionals we conducted, we found that 81 percent of respondents are now running their business-critical applications, including SQL Server, Oracle or SAP, in their VMware environments.

VMware Performance Becomes Critical as More Important Applications Virtualized

While there are numerous benefits, virtualized environments introduce a new set of challenges for IT professionals. For IT teams tasked with finding and resolving VMware performance issues, specifically those that can impact business-critical applications, many find they are hitting the same cumbersome roadblocks related to tools, time and strategy.

IT Pros Need Multiple Tools to Gain a Holistic View of their VMware Environments

According to the survey results, 78 percent of IT professionals are using multiple tools – including application monitoring, reporting, and infrastructure analytics – to identify the cause of VMware performance issues for important applications. Even further, ten percent of IT professionals are using more than seven tools to understand their VMs and the issues that affect VMware performance. Optimizing VMware performance and availability is incredibly complex, and the dynamic nature of these environments requires highly advanced tools to address even the most standard performance issues.

Relying on several reporting tools every time an issue arises just isn’t sustainable for most IT teams. This is partly due to the fact that solving application performance issues requires a view of multiple IT disciplines or “silos” such as application, network, storage, and compute. In larger organizations, that means each time an issue arises, representatives from each discipline need to come together and compare their findings – and the analysis results from the application team’s tool may point to a somewhat different cause than the storage team’s or the network team’s tool. The current strategy of relying on multiple tools and teams to evaluate each silo leaves IT with the manual, trial-and-error task of finding all the relevant data, assembling it, and analyzing it to figure out what went wrong and what changed to cause the problem.

Stay tuned for part two of this series, where we’ll discuss issues related to time and resources wasted in uncovering issues, as well as finding the root cause of VMware performance issues.

Read Roadblocks to Optimizing Application Performance in VMware Environments – Part II