If you’ve been working in the IT operations management (ITOM) space for a while, it’s pretty likely that at some point, someone told you that agents were on the way out. There are a lot of good reasons for this. But it might surprise you to know that agents are about to make a bit of a comeback.
Historically, there were some good reasons to use agents: Occasionally, you can get more detailed data from an agent. Or maybe an endpoint owner is more comfortable installing an agent for you than handing your monitoring platform the login credentials for their device. But for the past decade or so, many IT practitioners have been flocking toward modern monitoring platforms that tout the ability to deliver rich discovery and functionality without agents.
The truth is that agents can be a huge pain to deal with. They’re often proprietary. They have to be installed on every individual endpoint. They have to be compatible with the environment available on the endpoint. Sometimes, you have to ask for permission to even install the agent. The version installed has to be consistent with the version of the monitoring platform you’re running. They’re often licensed individually, which adds both cost and complexity. Moving to a new monitoring platform can seem daunting due to all the installed agents that need to be ripped and replaced. Some critical endpoints, like network appliances and hypervisors, can be so locked down that they won’t even permit you to install an agent. And the updates! In a world where urgent security patches are coming out on almost an hourly basis, the prospect of keeping thousands of installed agents updated and secure can give some IT Ops teams a severe case of the shakes.
Hang on, though, because something changes in a world where IT infrastructure becomes software-defined.
Many IT organizations today are moving toward DevOps team structures, which roll out infrastructure changes at a much more rapid clip than has traditionally been the case. Even if your organization hasn’t embraced DevOps yet, you’re probably finding that your IT consumers are relying more heavily on services and microservices that are ephemeral by nature than they ever have before. In extreme cases, a microservice may spin up, live a rich, full life of about three minutes, then quietly go off into the good night to be replaced by something else. Hundreds and hundreds of times a day.
In that kind of environment, it becomes extremely difficult for agentless monitoring platforms to keep up in real time with the pace of changes. Many IT practitioners continue to cite the increased network demands of agentless architectures as a key disadvantage — imagine how much that network traffic increases if the monitoring platform has to interrogate every ephemeral service. The only reasonable way to remove that bottleneck is to place collection agents near where the services are being created and managed.
“But wait,” you might say, “agents are terrible!” You wouldn’t be wrong. But the good news is we have the opportunity to learn from the struggles of the past. For starters, we can use nonproprietary tools that already exist, like collectd, to assemble and disseminate all the monitoring data we need. Additionally, DevOps practices allow us to ensure that new applications are built with monitoring in mind from the start, which means that those applications will have functionality that facilitates metrics reporting baked in. In effect, the application becomes its own agent.
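To make "the application becomes its own agent" concrete, here's a minimal sketch of an instrumented application emitting its own metrics in the plain-text StatsD format over UDP, which collectd can ingest via its statsd plugin. The collector address, metric names, and `handle_order` function are hypothetical placeholders, not anything prescribed by a particular platform:

```python
import socket
import time

# Hypothetical collector address -- point this at whatever aggregator
# (statsd, collectd's statsd plugin, etc.) your environment runs.
COLLECTOR = ("127.0.0.1", 8125)

def emit_metric(name: str, value, metric_type: str = "c") -> bytes:
    """Format and send one StatsD-style datagram, e.g. b'orders.count:1|c'."""
    payload = f"{name}:{value}|{metric_type}".encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, COLLECTOR)  # fire-and-forget; no agent process needed
    sock.close()
    return payload  # returned so callers can inspect what was sent

def handle_order():
    """Pretend business logic, instrumented at the source."""
    start = time.monotonic()
    # ... real application work would go here ...
    elapsed_ms = (time.monotonic() - start) * 1000
    emit_metric("orders.count", 1, "c")                  # counter
    emit_metric("orders.latency_ms", elapsed_ms, "ms")   # timer
```

Because the application emits its metrics itself, there is no separate agent binary to install, license, version-match, or patch on the endpoint; the monitoring concern ships with the code.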
Many organizations, including a lot of our own clients, are already exploring these practices as they slowly transition to software-defined IT. There are a lot of things ITOM platforms can do to set these IT shops up for success through integrated, unified IT management that’s ready for the dynamically scaling, microservice-oriented environments that are coming.
If you’re interested in learning more about the future of ITOM platforms in a world full of microservices, consider attending GalaxZ18.