This is an Open Source ZenPack developed by Zenoss, Inc. Enterprise support for this ZenPack is available to commercial customers with an active subscription.
This project is a Zenoss extension (ZenPack) that provides specialized modeling and monitoring capabilities for OpenVZ hosts.
While Zenoss has always had the ability to monitor an OpenVZ host system and its containers (also called "VEs") as standalone devices, by connecting to the host and containers directly using SSH or SNMP, it could not understand the relationship between the OpenVZ host and its containers, nor could it "see" containers by simply monitoring the host.
This ZenPack allows you to see the containers as components within an OpenVZ host even if you are not actively monitoring the individual containers. A number of metrics are made available for the containers without requiring monitoring to be configured on the containers themselves. This is a great benefit for hosting providers as well as large enterprise OpenVZ deployments.
With this ZenPack, it is still possible to monitor containers "the old-fashioned way", as Linux devices using SSH or SNMP, and if you do, Zenoss will now be able to "connect" the OpenVZ host to the container device you are monitoring. For containers running production workloads, this dual-monitoring approach allows you to use traditional Zenoss monitoring and alerting functions within the container.
As of version 1.0.2, this ZenPack typically requires no manual post-install steps for OpenVZ host devices. All you need to do is ensure that Zenoss has root SSH credentials for your OpenVZ host devices and that these devices are in the /Server/Linux or /Server/SSH/Linux device classes. Once this is done, remodeling a device should result in its OpenVZ containers being monitored and appearing as components of the device.
To see OpenVZ containers in Zenoss right away, simply add your OpenVZ hosts to Zenoss if you have not already. Once a host is discovered or added, you should see an OpenVZ Containers entry under the device's Components list, as well as a new OpenVZ Container Memory Utilization graph at the bottom of the host device's Graphs page.
For any existing OpenVZ hosts that were added to Zenoss prior to ZenPack installation, choose Model Device... from the device's "gear" menu in the lower left of the detail screen to immediately remodel the device and display any OpenVZ containers that exist on the system.
Again, note that Zenoss must be configured with root access to the OpenVZ host, either by password or via an RSA/DSA public key; root access is required to retrieve all OpenVZ-related information. These credentials are specified on the device's Configuration Properties page.
With the modeler plugin enabled, remodeling the device should cause OpenVZ Containers to be displayed as Components of the modeled device. You should also see relevant information for each container on the system, such as its VEID, name, hostname, IP Address(es) (if assigned via venet), a link to the device (if you are monitoring the container directly via SSH or SNMP), the OS Template that was used to create the VE, the state of its "On Boot" flag, and its status (running, stopped, etc.). In addition, you should see an OpenVZ Container Memory Utilization graph on the OpenVZ host device's Graphs page.
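The modeler plugin gathers this container inventory over SSH. Its actual implementation is not shown here, but as a minimal sketch under stated assumptions: the OpenVZ `vzlist -a -H -o veid,hostname,status` command lists all containers without a header, and the `parse_vzlist` helper below is hypothetical, not part of the ZenPack.

```python
def parse_vzlist(output):
    """Parse `vzlist -a -H -o veid,hostname,status` output into dicts.

    Illustrative sketch only -- the ZenPack's modeler plugin
    (zenoss.cmd.linux.OpenVZ) may gather this data differently.
    """
    containers = []
    for line in output.strip().splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        containers.append({
            "veid": int(fields[0]),       # numeric container ID
            "hostname": fields[1],
            "status": fields[2],          # running, stopped, etc.
        })
    return containers

# Hypothetical sample output from an OpenVZ host:
sample = """\
      101  web01.example.com     running
      102  db01.example.com      stopped
"""
print(parse_vzlist(sample))
```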
The OpenVZ host device's detail page will now reflect these components.
There will also be a new graph showing memory utilization of all containers on the system; this graph is initially empty and will populate with data over the next hour.
Clicking on a Container in the OpenVZ Containers list will display its predefined graphs.
As mentioned earlier, typically no post-install steps are required to enable the OpenVZ ZenPack other than installing it and adding OpenVZ hosts (or remodeling any existing OpenVZ hosts already in Zenoss). However, if you have a highly customized Zenoss install, some manual steps may still be required to get the OpenVZ ZenPack up and running. This section describes what the OpenVZ ZenPack does "behind the scenes" to enable itself automatically, so that you can perform these steps manually if necessary and validate that the ZenPack is fully functional in your environment.
When the OpenVZ ZenPack is installed, it automatically adds the zenoss.cmd.linux.OpenVZ modeler plugin to the /Server/Linux and /Server/SSH/Linux device classes. This modeler plugin is the heart of the OpenVZ ZenPack: it connects to your Linux system, determines whether OpenVZ is running, and if so models the containers on the system as components that appear under the OpenVZ Containers component list. If you are using device classes for your Linux devices other than those for which the plugin is automatically enabled, you will need to manually add zenoss.cmd.linux.OpenVZ as a modeler plugin for those device classes.
Once the zenoss.cmd.linux.OpenVZ modeler plugin is enabled, it connects to devices and determines whether they in fact have OpenVZ enabled. If OpenVZ is detected, the plugin automatically binds the OpenVZHost monitoring template to the OpenVZ host device, and each container detected on the device automatically has the OpenVZContainer monitoring template bound to it. These monitoring templates run every few minutes to collect new RRD metrics and utilization information. The charts under each Container in the OpenVZ Containers list are collected by the OpenVZContainer monitoring template, while the OpenVZ Container Memory Utilization graph in the host device's Graphs list is collected and calculated by the OpenVZHost monitoring template. The OpenVZHost monitoring template also fires events when a new container is created, a container is destroyed, or a container's status otherwise changes (for example, when it is started, stopped or suspended).
At this point, all of the functionality provided by the OpenVZ ZenPack should be enabled; if it is not, you now know where to look for troubleshooting purposes. If you get stuck, you may have encountered a bug, so file an issue at https://github.com/zenoss/ZenPacks.zenoss.OpenVZ with detailed information about the problem you are experiencing.
The OpenVZContainer monitoring template collects data for each container and uses this data to populate data points in its openvz data source with new metrics every few minutes.
Note: These settings can be viewed by navigating to Advanced, Monitoring Templates, OpenVZContainer, /Server in the UI.
By default, the OpenVZContainer monitoring template defines four graphs that will appear for each Container component on an OpenVZ host:
The first three graphs are generated using data extracted from /proc/user_beancounters on the OpenVZ host. The CPU utilization graph is generated using /proc/vz/vestat information.
The openvz datasource has several data points pre-defined for you that are sourced from the /proc/vz/vestat file on the OpenVZ Host Device. These data points will appear with the prefix openvz. in the Data Points list:
The following data points can be defined by you (typically as GAUGE, though you could use DERIVE if you want to see a delta), and if found, the OpenVZContainer monitoring template will populate them with data:
The "raw" form of the name, such as vestat.user, is also supported, but it's recommended that you use the explicit .jiffies suffix above.
In addition, variants of these data points are available, with the CPU time conveniently converted to seconds (1 second = 100 jiffies):
The following cycles-based counters are also available:
The OpenVZ ZenPack does not provide .seconds equivalents for CPU cycles metrics. This may be added in a future release.
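To make the jiffies-to-seconds conversion concrete, here is a minimal sketch of parsing the per-VE CPU counters, assuming the conventional /proc/vz/vestat layout (a version line, a column header, then one row per VEID with user, nice and system counters in jiffies). The `parse_vestat` helper and the sample values are hypothetical, not part of the ZenPack.

```python
JIFFIES_PER_SECOND = 100  # per the docs above: 1 second = 100 jiffies

def parse_vestat(text):
    """Extract per-VE user/nice/system CPU counters (in jiffies) and
    derive the .seconds variants the monitoring template exposes."""
    stats = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 4 or not fields[0].isdigit():
            continue  # skip the "Version:" line and the column header
        veid = int(fields[0])
        user, nice, system = (int(f) for f in fields[1:4])
        stats[veid] = {
            "vestat.user.jiffies": user,
            "vestat.nice.jiffies": nice,
            "vestat.system.jiffies": system,
            # the .seconds variants described above
            "vestat.user.seconds": user / JIFFIES_PER_SECOND,
            "vestat.system.seconds": system / JIFFIES_PER_SECOND,
        }
    return stats

# Hypothetical sample contents of /proc/vz/vestat:
sample = """\
Version: 2.2
   VEID       user       nice     system     uptime
    101    1755800          0      95700  453120000
"""
```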
The openvz datasource also pulls data from the OpenVZ host device's /proc/user_beancounters file, which contains a number of container-specific metrics. This ZenPack includes some beancounters data points that are already defined for you, but additional ones you may be interested in can also be defined, and the OpenVZContainer monitoring template will populate them with data if found.
These data points will appear with the prefix openvz. in the Data Points list, but do not have an additional prefix like vestat.. This means that if a data point does not begin with openvz.vestat in the Data Points list, it is a beancounters data point. Here is a list of the data points that are defined for you:
Additional data points can be added to the openvz datasource. All you need to do is name the data point according to the naming convention described here, and the OpenVZ ZenPack will populate the data point with RRD data.
The name of the Data Point should be of the following format:
Any resource name that is visible in /proc/user_beancounters can be used. These Data Points should typically be created as type GAUGE with the appropriate name. The monitoring template will correlate the beancounter name with the metric name and populate it with data.
Note: OpenVZ allows individual resource limits to be disabled by setting the barrier and/or limit value to LONG_MAX, typically 9223372036854775807 on 64-bit systems. The OpenVZ monitoring template will detect LONG_MAX when it is set and will *not* write this data out to RRD, as it indicates "Unlimited" rather than a measured numerical value. This will result in NaN data for "Unlimited" barrier and limit values.
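To illustrate the naming convention and the LONG_MAX handling described above, here is a hedged sketch of parsing /proc/user_beancounters. The `parse_beancounters` helper and the sample data are illustrative only; the ZenPack's own parser is not shown here.

```python
LONG_MAX = 9223372036854775807  # "unlimited" sentinel on 64-bit systems

def parse_beancounters(text):
    """Parse /proc/user_beancounters into {veid: {resource: fields}}.

    Barrier/limit values equal to LONG_MAX mean "unlimited" and are
    returned as None, mirroring the template's skip-to-NaN behaviour.
    """
    data, veid = {}, None
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0] in ("Version:", "uid"):
            continue  # skip the version line and the column header
        if fields[0].endswith(":"):  # "101:" marks a new container block
            veid = int(fields[0].rstrip(":"))
            data[veid] = {}
            fields = fields[1:]
        if veid is None or len(fields) < 6:
            continue
        name, held, _maxheld, barrier, limit, failcnt = fields[:6]
        data[veid][name] = {
            "held": int(held),
            "barrier": None if int(barrier) == LONG_MAX else int(barrier),
            "limit": None if int(limit) == LONG_MAX else int(limit),
            "failcnt": int(failcnt),
        }
    return data

# Hypothetical excerpt from /proc/user_beancounters:
sample = """\
Version: 2.5
       uid  resource      held   maxheld   barrier     limit  failcnt
      101:  kmemsize   2670652   2801664  11055923  11377049        0
            privvmpages   1752      2963  9223372036854775807  9223372036854775807  0
"""
```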
In addition, the OpenVZ ZenPack implements a number of enhanced capabilities regarding Data Points:
This ZenPack also collects various container-specific I/O metrics:
The above metrics are incrementing counts.
The above metrics are currently-active sync statistics that will go back to zero when there is no disk activity.
Note: These settings can be viewed by navigating to Advanced, Monitoring Templates, OpenVZHost, /Server in the UI.
The OpenVZHost monitoring template has two data sources: openvz and openvz_util. openvz is used for collecting container status and firing events on container status change. It is not intended to be changed.
The openvz_util data source is used for monitoring host utilization and can be modified by the user. It works similarly to the Container's openvz Data Source in that a sampling of data points have been added by default, but more can be added by the end user for metrics of interest. The data point names that are recognized are:
containers.[resource] and host.[resource] data points can be created, where [resource] is any resource name listed in /proc/user_beancounters. Any data point beginning with containers. will contain the total current value of that resource across all containers on the system. For example, containers.oomguarpages will contain the sum of oomguarpages for all containers on the host. The host.[resource] prefix can be used to extract the current value of the corresponding resource for the host itself, that is, VEID 0.
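As a sketch of what these two prefixes compute, assuming per-VEID held values already parsed from /proc/user_beancounters (the `held` sample data and the `openvz_util_values` helper are hypothetical):

```python
# Hypothetical held values per VEID, as parsed from /proc/user_beancounters.
held = {
    0:   {"oomguarpages": 12000},   # VEID 0 is the host itself
    101: {"oomguarpages": 4096},
    102: {"oomguarpages": 2048},
}

def openvz_util_values(held, resource):
    """Compute the two values the openvz_util data source would expose:
    containers.<resource> (sum over all VEs) and host.<resource> (VEID 0)."""
    containers_total = sum(
        v.get(resource, 0) for veid, v in held.items() if veid != 0
    )
    host_value = held.get(0, {}).get(resource, 0)
    return {
        "containers." + resource: containers_total,
        "host." + resource: host_value,
    }

print(openvz_util_values(held, "oomguarpages"))
# -> {'containers.oomguarpages': 6144, 'host.oomguarpages': 12000}
```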
A very useful graph has been defined for the OpenVZ host, called "OpenVZ Container Memory Utilization." Using data from /proc/user_beancounters, a number of key metrics related to the memory utilization of all containers on the host are calculated and presented in percentage form, based on the formulas described at http://wiki.openvz.org/UBC_systemwide_configuration.
This graph can be used to optimize the capacity of your OpenVZ hosts. In general, you want to maximize memory utilization without hitting too high a value for "RAM and Swap Used".
Note: OpenVZ also has commitment-level formulas. These have not yet been integrated into the OpenVZ ZenPack, but will be in the future. For commitment levels to work correctly, all containers on the host must have active memory resource limits. The metrics described above, however, are available for all OpenVZ hosts, whether or not memory resource limits are active.
Future plans for development of this ZenPack include:
To submit feature requests, report bugs, or contribute improvements, visit the OpenVZ ZenPack on GitHub: https://github.com/zenoss/ZenPacks.zenoss.OpenVZ
This ZenPack is developed and supported by Zenoss Inc. Contact Zenoss to request more information regarding this or any other ZenPacks.