Houston We Have a Problem (or How to Understand & Respond to Availability Alerts)

Reading Time: 3 minutes

A Successful Failure

“Houston, we have a problem!”  It is an iconic line that reminds countless space buffs and movie fans of the great difficulty, potential disaster, and perilous state of the Apollo 13 space mission, a mission NASA now calls “A Successful Failure.”  Ignoring your own application availability alerts may not go down in history as a defining moment, but it can wreak similar havoc.
Now back to 1970:
“A routine stir of an oxygen tank ignited damaged wire insulation inside it, causing an explosion that vented the contents of both of the Service Module’s (SM) oxygen tanks to space. Without oxygen, needed for breathing and for generating electric power, the SM’s propulsion and life support systems could not operate. The Command Module’s (CM) systems had to be shut down to conserve its remaining resources for reentry, forcing the crew to transfer to the Lunar Module (LM) as a lifeboat. With the lunar landing canceled, mission controllers worked to bring the crew home alive.”

An explosion of oxygen tanks triggered alarms, warnings, pressure and voltage drops, interrupted communications, and then the now famous radio exchange between the astronauts and Mission Control.  But what if, after the explosion, the crew had done nothing?  What if they had never checked on the explosion, never responded to the warnings and gauges, and never informed Mission Control that there was an issue?  What if Mission Control, after being alerted back at their dashboards in the control center, had never attempted to provide any assistance?  What if the team had buried their heads in the sand, or resigned themselves to fate and chance, never trying to learn, improvise, or improve from the failure they encountered?  The result would have been tragic!  It might have made it into a documentary, but hardly a blockbuster movie featuring an iconic line.

What Do You Do When an Alert is Triggered in Your Environment?

Space walks are a far cry from our day-to-day activities, unless of course you work for NASA, but recent blogs on Apollo 13 do spark a question applicable to availability.  What do you do when an alert is triggered in your environment?  Do you just ignore it?  Do you downplay it, waiting to see if the alerts, log messages, or other indicators will simply go away?  Do you contact your vendor support to learn how to disable these alerts, warnings, and messages?  Or do you say, “We have a problem here and we need to work it out”?

As VP of Customer Experience at SIOS Technology Corp., I have experienced both sides of alerts and indicators.  We have painstakingly walked with customers who chose to ignore warnings, turning off critical alerts that indicated issues ranging from application thresholds to network instability to potential data inconsistency.  And we have also seen customers who tuned into their alerts, investigated why their alarms were going off, uncovered the root cause, and enjoyed the fruit of their labor.  That fruit is most often the sweet reward of improved stability, innovation and learning, or an averted disaster.

4 Things You Can Do When Your Availability Product Triggers an Alert

1. Determine the type and criticality of the alert. Is the alert indicative of a warning, an error, or a critical issue? A good place to start when assessing criticality is the available documentation: check the product documentation, online forums, knowledge base articles (KBAs), and your internal team’s data and process manuals.

2. Assess the immediacy of the alert. For warnings and errors, how likely are they to progress into a critical issue or event?  For critical alerts the answer may be obvious, but even then an assessment will provide some guidance on your next steps: self-correction, issue isolation, or immediate escalation (a simple triage sketch appears below).

3. Consult additional sources. What other sources can you access to make a determination about the alert condition? For example, if the alert is storage related, are there other tools that can expose the health of your storage?  If the issue is a network alert, are there hypervisor tools, traffic analyzers, NIC statistics, or other specialized monitoring tools deployed that can help with the analysis?

4. Contact support.  In other words, if you are unsure, alert Mission Control. After determining the type, assessing the immediacy, and consulting additional sources, it is a good idea to contact your vendor for support.  A warning about a threshold on API calls may seem innocent, but if those API calls will fail once the limit is reached, it could be cause for immediate action. Getting an authoritative answer from a specialist helps preserve peace of mind and avoid disaster.

An experienced vendor like SIOS can help you quickly identify the causes of problems and recommend the best solution.
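To make the decision points above concrete, here is a minimal triage sketch in Python. It is purely illustrative and assumes nothing about any particular product’s alerting API; the Alert and Severity types and the triage function are hypothetical names, and the logic simply encodes the three next steps named in step 2: self-correction, issue isolation, or immediate escalation.

# Hypothetical alert-triage sketch; not a SIOS LifeKeeper API.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    WARNING = 1
    ERROR = 2
    CRITICAL = 3

@dataclass
class Alert:
    source: str         # e.g. "storage", "network", "api-threshold"
    severity: Severity
    recurring: bool     # has the same alert fired repeatedly?

def triage(alert: Alert) -> str:
    """Map an alert to one of the next steps named in step 2."""
    if alert.severity is Severity.CRITICAL:
        return "escalate: contact vendor support immediately"
    if alert.severity is Severity.ERROR or alert.recurring:
        return "isolate: gather logs and cross-check other monitoring tools"
    return "self-correct: review documentation and tune thresholds"

if __name__ == "__main__":
    example = Alert(source="api-threshold", severity=Severity.WARNING, recurring=True)
    print(triage(example))  # a recurring warning is worth isolating, not ignoring

The exact mapping will differ in your environment; the point is to decide, in advance, what each class of alert should trigger rather than deciding in the moment (or not at all).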

Repeatedly ignoring problems in your availability environment can lead to unexpected but no less devastating results. Addressing the problems indicated by alerts, log messages, warning indicators, or other installed and configured monitors gives your customers, your business, your teams, and yourself the “opportunity to solve the problems” before they become disasters, and it strengthens your availability strategy and infrastructure.  Which will you choose?

–  Cassius Rhue, VP, Customer Experience

