
Automation requires built-in, not bolt-on, monitoring

The industry is ablaze with talk about automation and DevOps. The proliferation of tools across the rest of IT infrastructure has created pressure on networking teams to automate. And with the dynamic requirements underpinning cloud, they must.

However, automation is not a new thing in networking. For more than a decade, enterprises have tried—and largely failed—to automate much of their networking operations. If automation has held promise for so long, why has it not taken root?

In short, the architectural discipline required to build out an extensible framework hasn’t emerged. The foundation for automated operations has not been properly constructed. Enterprises that look to automate their network operations must begin not with scripts, but with monitoring.

Workflows are at the center of automation

I have written before that automation is about verbs, not nouns. When enterprises talk about automation in a networking context, the target of their automation desire is typically the network. The challenge in trying to automate the network is that, as a “noun,” it is not something that can be automated.

Automation is about workflows—activities strung together to achieve an objective. Executing these workflows automatically allows enterprises to streamline provisioning or simplify troubleshooting, all in pursuit of the all-too-elusive agility that companies need during these cloudy times.

But while workflows must be the core focus area, there is something that must precede them to make automation come to life.

Even if a workflow is elegantly scripted, if it requires human intervention to initiate, the value of automation is only partially delivered. Yes, the workflow will be executed more quickly, but the benefit here is largely one of keystroke removal. The transformational value of automation is more than accelerating keystrokes—it’s the dynamic orchestration of activities within an operating context.

And this requires a connection to the events that trigger and initiate these workflows.

See something, do something

To really unlock automation, the premise is simple: see something, do something.

And while the do something component gets most of the attention, enterprises that fail to explicitly architect the see something foundation will find that their attempts to automate yield predictably disappointing results. This means that enterprises need to architect monitoring as a top-tier component of their operations frameworks.

Workflows are a lot like software functions: they require inputs. If the automation harness does not have access to data, the workflows that can effectively be automated are limited. So automation architects should naturally start with the data. What data is available? How is it collected? How is it structured?
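To make the function analogy concrete, here is a minimal sketch in Python of a see something, do something dispatcher. The event types and workflow handlers are invented for illustration and are not from any particular product or framework; the point is simply that workflows are functions, and monitoring events supply their inputs.

    # A toy "see something, do something" dispatcher. The event kinds and
    # workflow functions here are hypothetical, purely for illustration.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Event:
        source: str      # the device or system that saw something
        kind: str        # e.g. "interface_down", "high_latency"
        payload: dict    # the raw monitoring data that triggered the event

    def remediate_interface(event: Event) -> None:
        # The "do something" half: an automated workflow, not a human.
        print(f"restarting interface on {event.source}: {event.payload}")

    def open_ticket(event: Event) -> None:
        print(f"opening ticket for {event.source}: {event.kind}")

    # Workflows are functions; events supply their inputs.
    WORKFLOWS: Dict[str, Callable[[Event], None]] = {
        "interface_down": remediate_interface,
        "high_latency": open_ticket,
    }

    def on_event(event: Event) -> None:
        # Without monitoring data feeding this function, nothing runs.
        handler = WORKFLOWS.get(event.kind)
        if handler:
            handler(event)

    on_event(Event("router-1", "interface_down", {"ifname": "ge-0/0/1"}))

Notice that the dispatcher itself is trivial. The hard architectural work is everything upstream of on_event: getting trustworthy events to arrive at all.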

Past vs. future

Most data that is collected in IT is performance data. That is to say, it is collected and used to assess in hindsight how the infrastructure performed. This is a hugely useful thing to do, especially when evaluating the efficacy of a deployed solution or analyzing trends.

But automation is not about the past; it is about the future. This means that data has to be available in the moment. Real-time actions require real-time data, and that changes the monitoring architecture substantially. Periodic polling is not sufficient for any enterprise that is serious about becoming more automated.

This is why there is so much emphasis on streaming telemetry and collection techniques in the DevOps world. Talk to any highly automated IT team, and they will probably start with discussions of technologies like gRPC and data distribution mechanisms like message buses.
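As a rough illustration of why push beats poll, the sketch below simulates a streaming consumer using only the Python standard library. A real deployment would subscribe via something like a gRPC telemetry stream or a message-bus client rather than the in-process queue used here; the queue just stands in for the bus so the example is self-contained.

    # Toy contrast between periodic polling and push-based streaming.
    # The "telemetry" is simulated; the field names are hypothetical.
    import queue
    import threading
    import time

    bus: "queue.Queue[dict]" = queue.Queue()

    def producer() -> None:
        # Stands in for a device streaming telemetry onto a bus.
        for i in range(3):
            time.sleep(0.5)
            bus.put({"seq": i, "cpu_pct": 40 + 20 * i})

    def streaming_consumer() -> None:
        # Push model: react the instant data arrives, instead of
        # discovering it on the next polling cycle.
        for _ in range(3):
            sample = bus.get()          # blocks until something is seen
            if sample["cpu_pct"] > 70:  # "see something"
                print("trigger workflow:", sample)  # "do something"

    threading.Thread(target=producer, daemon=True).start()
    streaming_consumer()

With polling, the worst-case reaction time is the polling interval; with a push model like this, it is effectively the propagation delay of the bus.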

Diversity matters

Think of data as the lifeblood of automation. The more data you have, the more workflows you can automate. Explicitly architecting a monitoring solution for access to diverse data will serve automation enthusiasts well.

One of the problems with networking is that the data aperture is frequently bound to the network. Put more simply, networking people collect networking data from networking devices. And while this is useful, it means that a world of otherwise useful data is left unmonitored. For example, resource performance and application experience are non-networking data that might nonetheless be used to drive automation within the network.

Automation architects should therefore actively deploy monitoring and visualization tools that extend their reach beyond just the network. If you can combine device-level information about compute, storage, and networking with application and user data, you can start to develop a broader view of what is happening within the ecosystem. Doing this in real time will make the automation landscape far richer.
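One hedged sketch of what that broader view can look like in practice: correlating samples from the network, compute, and application domains by host, so a single workflow decision can draw on all three at once. Every field name and threshold below is invented for illustration.

    # Correlate samples from different domains (network, compute,
    # application) by host. All fields and thresholds are hypothetical.
    from collections import defaultdict

    samples = [
        {"host": "web-1", "domain": "network", "if_errors": 120},
        {"host": "web-1", "domain": "compute", "cpu_pct": 35},
        {"host": "web-1", "domain": "application", "p99_ms": 900},
    ]

    by_host: dict = defaultdict(dict)
    for s in samples:
        by_host[s["host"]][s["domain"]] = s

    for host, view in by_host.items():
        app = view.get("application", {})
        net = view.get("network", {})
        cpu = view.get("compute", {})
        # A decision no single-domain monitor could make on its own:
        # slow application + errored interfaces + idle CPU points at
        # the network, not the server.
        if (app.get("p99_ms", 0) > 500
                and net.get("if_errors", 0) > 100
                and cpu.get("cpu_pct", 100) < 50):
            print(f"{host}: likely network issue, trigger network workflow")

A pattern like this is what lets the do something half act on application symptoms even when the root cause sits in the network.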

Start with workflows, but emphasize monitoring

The somewhat obvious conclusion of this line of reasoning is that enterprises looking to gain an agility advantage need to elevate operations to a leading architectural consideration. When designing new network architectures, this means the operations team needs to have a seat at the table.

But more than that, if we look to the automation leaders in the cloud companies (both cloud providers and SaaS providers), we see that they don’t just elevate operations—they lead with operations. They decide on their data strategy, including how they interface with systems and how they collect and correlate data. Everything else is subordinate to their operational requirements.

Enterprises that look to become more cloud-like in how they manage their networks would do well to learn something here. What data sources are required? What tools are necessary to collect that data? And how is that data used to either trigger or facilitate automated workflows?

All of this means that monitoring will necessarily shift from a bolt-on to a built-in attribute.

By Michael Bushong

Published with permission from forums.juniper.net/t5/Blogs/ct-p/blogs