The EMC Isilon ZenPack adds support for monitoring Dell EMC Isilon storage management clusters.
This ZenPack is included with commercial versions of Zenoss and enterprise support for this ZenPack is provided to Zenoss customers with an active subscription.
Compatible with Zenoss 5.3.3 - 6.2 and Zenoss Cloud; incompatible with Zenoss Resource Manager 4.x.
Compatible with Zenoss Core 4.2.x, Zenoss Core 5.0.x, Zenoss Core 5.1.x, Zenoss Core 5.2.x, Zenoss Resource Manager 4.2.x, Zenoss Resource Manager 5.0.x, Zenoss Resource Manager 5.1.x, Zenoss Resource Manager 5.2.x
The EMC Isilon ZenPack provides support for monitoring the Dell EMC Isilon storage platform. Discovery and monitoring are performed over HTTP using the Isilon OneFS HTTP API, which provides the bulk of the discovery and monitoring functionality.
SNMP is optionally supported as well, although the scope of data available via SNMP is fairly limited by comparison.
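As a rough illustration of how HTTP-based collection is addressed, the sketch below builds request URLs under common OneFS conventions (HTTPS on port 8080, resources under the /platform namespace). The host name and endpoint path are illustrative assumptions, not taken from this ZenPack's implementation.

```python
# Minimal sketch: constructing OneFS HTTP API URLs. Port 8080 and the
# /platform namespace follow common OneFS conventions; the endpoint path
# shown is illustrative, not an authoritative list.
from urllib.parse import urlunsplit

def onefs_url(host, path, port=8080):
    """Return an HTTPS URL for a OneFS API resource."""
    return urlunsplit(("https", f"{host}:{port}", path, "", ""))

# Hypothetical cluster host name:
cluster_config = onefs_url("isilon.example.com", "/platform/1/cluster/config")
# -> "https://isilon.example.com:8080/platform/1/cluster/config"
```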
The features added by this ZenPack can be summarized as follows. They are each detailed further below.
The following components will be automatically discovered through the Isilon Cluster IP address, port and credentials you provide. The properties and relationships will be continually updated on Zenoss' normal remodeling interval which defaults to every 12 hours.
Relationships: Endpoint, Quota, License, File Pool, Access Zone, Flexnet Pool, GroupNet, SubNet, Rule, Cluster Node, Protocol, SMB Share, NFS Export, NTP Server, NDMP Context, NDMP User, NDMP Session, HDFS Proxy User, HDFS Rack, SmartPool Tier, SmartPool Node, Cloud Pool, CloudAccount, Cloud Policy
Relationships: Cluster Node, Fiber Channel Ports, ZenModel (platform) component relationships
Relationships: Isilon Cluster, Isilon Node, SmartPool Node, SmartPool Tier
Relationships: Isilon Cluster, SmartPool Node, Cluster Node
Relationships: Isilon Cluster, SmartPool Tier, Cluster Node
Relationships: Isilon Cluster
Relationships: Isilon Cluster, GroupNet, Flexnet Pool
Relationships: Isilon Cluster, SubNet, Access Zone, GroupNet, Interface, Rule
Relationships: Isilon Cluster, Access Zone, SubNet, Flexnet Pool, Interface, Rule, IP Service
Relationships: Isilon Cluster, Flexnet Pool, GroupNet, Interface, Rule
Relationships: Isilon Cluster, Cluster Interface, GroupNet, SubNet, Flexnet Pool
Relationships: SMB Share, NFS Export, NTP Server, NDMP Context, NDMP User, NDMP Session, HDFS Proxy User, HDFS Rack, Isilon Cluster
Relationships: Isilon Cluster, Protocol
Relationships: Isilon Node
Relationships: Storage Enclosure
Relationships: Rule, Interface
Relationships: Flexnet Pool, SubNet, GroupNet, Cluster Interface
Relationships: Isilon Cluster, Cloud Pool
Relationships: Isilon Cluster, CloudAccount, Cloud Policy
The following metrics are collected every 5 minutes by default. The average statistic is collected, and graphed values are shown per second for any metric that represents a rate.
Metrics: nodeCount, clusterHealth, clusterCPUUser, clusterCPUNice, clusterCPUSystem, clusterCPUInterupt, clusterCPUIdlePct, clusterNetworkInBytes, clusterNetworkOutBytes, clusterIfsInBytes, clusterIfsOutBytes, ifsTotalBytes, ifsUsedBytes, ifsAvailableBytes, ifsFreeBytes
Metrics: cluster_cpu_count, cluster_cpu_idle_avg, cluster_cpu_user_avg, cluster_cpu_sys_avg, cluster_cpu_intr_avg, cluster_cpu_nice_avg, cluster_disk_bytes_out_rate, cluster_disk_bytes_in_rate
Metrics: cluster_net_ext_bytes_in_rate, cluster_net_ext_bytes_out_rate, cluster_net_ext_errors_in_rate, cluster_net_ext_errors_out_rate, cluster_net_ext_packets_in_rate, cluster_net_ext_packets_out_rate, cluster_net_int_bytes_in_rate, cluster_net_int_bytes_out_rate, cluster_net_int_errors_in_rate, cluster_net_int_errors_out_rate, cluster_net_int_packets_in_rate, cluster_net_int_packets_out_rate
Metrics: filesystemStats, cluster_disk_xfer_size_in, cluster_disk_xfer_size_out, cluster_disk_xfers_in_rate, cluster_disk_xfers_out_rate
Metrics: cluster_dedupe_estimated_deduplicated_bytes, cluster_dedupe_estimated_saved_bytes, cluster_dedupe_logical_deduplicated_bytes, cluster_dedupe_logical_saved_bytes, cluster_dedupe_total_physical_bytes, cluster_dedupe_total_used_bytes
Metrics: cluster_node_count_all, cluster_node_count_diskless, cluster_node_count_down, cluster_node_count_readonly, cluster_node_count_smartfailed, cluster_node_count_up
Metrics: node_uptime, node_cpu_count, node_cpu_idle_avg, node_cpu_user_avg, node_cpu_sys_avg, node_cpu_intr_avg, node_cpu_nice_avg, node_load_1min, node_load_5min, node_load_15min, node_memory_used, node_memory_free, node_memory_cache, node_open_files, node_process_count, node_disk_bytes_out_rate_avg, node_disk_bytes_in_rate_avg, norm_load_1min, norm_load_5min, norm_load_15min
Metrics: node_ifs_bytes_deleted_rate, node_ifs_bytes_in_rate, node_ifs_bytes_out_rate, node_ifs_files_created_rate, node_ifs_files_removed_rate, node_ifs_num_lookups_rate, node_ifs_ops_in_rate, node_ifs_ops_out_rate, node_ifs_heat_write_total, node_ifs_journal_stats, abort_attempted_pct, commit_attempted_pct, flush_attempted_pct, btl_drain_attempted_pct, blocks_meta_attempted_pct, blocks_data_attempted_pct, drive_replay_attempted_pct, prepare_attempted_pct, node_ifs_ssd_bytes_avail, node_ifs_ssd_bytes_total
Metrics: node_net_ext_bytes_in_rate, node_net_ext_bytes_out_rate, node_net_ext_errors_in_rate, node_net_ext_errors_out_rate, node_net_ext_packets_in_rate, node_net_ext_packets_out_rate, node_net_int_bytes_in_rate, node_net_int_bytes_out_rate, node_net_int_errors_in_rate, node_net_int_errors_out_rate, node_net_int_packets_in_rate, node_net_int_packets_out_rate
Metrics: node_disk_access_latency_avg, node_disk_access_slow_avg, node_disk_busy_avg, node_disk_iosched_latency_avg, node_disk_iosched_queue_avg, node_disk_xfer_size_in_avg, node_disk_xfer_size_out_avg, node_disk_xfers_in_rate_avg, node_disk_xfers_out_rate_avg
Metrics: node_clientstats_active_cifs, node_clientstats_active_ftp, node_clientstats_active_hdfs, node_clientstats_active_http, node_clientstats_active_irp, node_clientstats_active_jobd, node_clientstats_active_lsass_out, node_clientstats_active_nfs, node_clientstats_active_nfs3, node_clientstats_active_nfs4, node_clientstats_active_nlm, node_clientstats_active_papi, node_clientstats_active_siq, node_clientstats_active_smb1, node_clientstats_active_smb2, node_clientstats_connected_cifs, node_clientstats_connected_ftp, node_clientstats_connected_hdfs, node_clientstats_connected_http, node_clientstats_connected_ndmp, node_clientstats_connected_nfs, node_clientstats_connected_nlm, node_clientstats_connected_papi, node_clientstats_connected_siq, node_clientstats_connected_smb
Metrics: svc_counters, v3_counters, v4_counters, ccb, icb, replay_tcp, replay_udp, sec_principal, sec_sid, sec_uid_gid, sec_username, v4_dircache
Metrics: nodeHealth, nodeCPUUser, nodeCPUNice, nodeCPUSystem, nodeCPUInterrupt, nodeCPUIdle, nodeNetworkInBytes, nodeNetworkOutBytes, nodeIfsInBytes, nodeIfsOutBytes
Metrics: GET, PUT, POST, DELETE, HEAD, UNSUPPORTED
Metrics: health, node_net_iface_bytes_in_rate, node_net_iface_bytes_out_rate, node_net_iface_packets_in_rate, node_net_iface_packets_out_rate, node_net_iface_errors_in_rate, node_net_iface_errors_out_rate
Metrics: protocolOpCount, inAvgBytes, inStdDevBytes, inBitsPerSecond, outAvgBytes, outStdDevBytes, outBitsPerSecond, latencyAverage, latencyStdDev
Metrics: sessions, openfiles
Metrics: nlm_locks, nlm_waiters
Metrics: node_sysfs_bytes_avail, node_sysfs_percent_avail, availBlocks, usedBlocks
Metrics: health, node_disk_xfers_in_rate_all, node_disk_xfers_out_rate_all, node_disk_xfer_size_in_all, node_disk_xfer_size_out_all, node_disk_bytes_in_rate_all, node_disk_bytes_out_rate_all, node_disk_busy_all, node_disk_access_latency_all, node_disk_access_slow_all, node_disk_iosched_queue_all, node_disk_iosched_latency_all, node_disk_ifs_bytes_total_all, node_disk_ifs_bytes_free_all, node_disk_ifs_inodes_used_all
Metrics: diskPerfOpsPerSecond, diskPerfInBitsPerSecond, diskPerfOutBitsPerSecond
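The per-second graphing described above amounts to deriving a rate from successive counter samples. The sketch below shows that arithmetic; the sample values and the 5-minute (300-second) interval are illustrative.

```python
# Sketch: converting two cumulative counter samples into the per-second
# rate that the graphs display. Values here are made-up sample data.
def per_second_rate(prev_value, prev_time, curr_value, curr_time):
    """Derive a per-second rate from two counter samples."""
    elapsed = curr_time - prev_time
    if elapsed <= 0:
        return None  # no rate without forward time progress
    return (curr_value - prev_value) / elapsed

# e.g. a byte counter sampled 300 seconds apart (5-minute cycle):
rate = per_second_rate(1_000_000, 0, 4_000_000, 300)
# -> 10000.0 bytes per second
```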
A monitoring plugin collects events detected by the Isilon cluster and mirrors the events displayed in the OneFS web interface.
When combined with the Zenoss Service Dynamics product, this ZenPack adds built-in service impact and root cause analysis capabilities for services running on EMC Isilon. The service impact relationships shown in the diagram and described below are automatically added. These will be included in any services that contain one or more of the explicitly mentioned components.
Service Impact Relationships
Use the following steps to start monitoring an Isilon cluster using the Zenoss web interface.
SNMP discovery and monitoring are disabled by default, since most of the same data is available via HTTP. To use SNMP monitoring, set the zSnmpMonitorIgnore zProperty to False at either the Device Class or Device level. Please note that each node must have its own externally available IP address in order for SNMP monitoring to function.
Debug Mode can be enabled by setting the zEMCIsilonDebugMode property to True on the device class or Isilon cluster device. Debug Mode provides additional monitoring of the API usage itself, creating components that represent available URI endpoints. Collected metrics include counts for HTTP transactions against each URI by type (GET, POST, PUT, etc). Because this adds overhead to an already monitored system, these components are disabled by default and must be enabled by the user (component grid Monitoring dialog). Debug mode is intended to assist with identifying potential bottlenecks and query inefficiencies caused either by this ZenPack or by other applications utilizing the OneFS HTTP API.
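The per-URI, per-method transaction counts that Debug Mode exposes amount to a simple tally. The sketch below illustrates that bookkeeping; the transaction log entries are made-up sample data, not output from the ZenPack.

```python
# Sketch: tallying HTTP transactions by (URI, method), the kind of count
# Debug Mode components report. Entries below are hypothetical samples.
from collections import Counter

transactions = [
    ("/platform/1/cluster/config", "GET"),
    ("/platform/1/cluster/config", "GET"),
    ("/platform/1/statistics/current", "GET"),
    ("/platform/1/quota/quotas", "PUT"),
]

counts = Counter(transactions)  # keyed by (uri, method)
# counts[("/platform/1/cluster/config", "GET")] -> 2
```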
Several scripts are included for troubleshooting and sample data collection. These scripts reside in the
directory, and calling --help on each will display usage information.
Currently, modeling of the Isilon cluster environment (including the nodes) is conducted using the Cluster IP address. This is so that IP configuration changes to nodes can be updated even in cases where a node no longer has an external IP.
Please note that IP address changes are picked up when the Cluster device is remodeled (not the Node device).
For performance data collection, queries are conducted on a randomly selected external IP address belonging to either the cluster or one of the nodes. If a SmartHost IP is used as the host when initially adding the Cluster, then the SmartHost IP will be included in this selection.
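The random selection described above can be pictured as a simple draw from the pool of eligible addresses. The sketch below is an illustration under that assumption; the IP addresses are hypothetical.

```python
# Sketch: choosing one external IP at random from the pool of cluster and
# node addresses for the next performance query. IPs are illustrative.
import random

def pick_query_ip(pool, rng=random):
    """Choose the address the next performance query will target."""
    return rng.choice(pool)

pool = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]  # cluster + node external IPs
target = pick_query_ip(pool)
assert target in pool
```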
Monitoring of the Isilon cluster and its nodes is designed with the expectation that each node will have at least one external IP interface that is accessible for monitoring.
Although version 1.0.1 removes this requirement (nodes can now be modeled/monitored without an external interface), it should still be considered a strong recommendation since monitoring multiple nodes against a single IP address could impact monitoring performance and/or reliability.
Operating with a small pool of IPs against a larger number of nodes means that API queries (used by this ZenPack) for each node, in addition to the cluster itself, will be directed at a subset of the nodes for the cluster. The zEMCIsilonHttpPoolSize zProperty has been introduced (version 1.0.1) and may ameliorate this concern by limiting the number of simultaneous API queries.
The EMC Isilon documentation seems to support this perspective in that it recommends a scaling formula for determining the number of IP Addresses that should be made available depending on the size of a cluster:
For example, the "Isilon External Network Connectivity Guide" recommends a total of 19 IP addresses (divided into pools) for a 3-node cluster.
Note also that the optional SNMP monitoring will not work unless each node has at least one external IP address assigned.
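The throttling that zEMCIsilonHttpPoolSize provides can be pictured as a semaphore capping in-flight queries. The sketch below shows that pattern with a stub in place of the real HTTP call; it is an analogy, not the ZenPack's actual code.

```python
# Sketch: capping simultaneous API queries, analogous to what the
# zEMCIsilonHttpPoolSize zProperty does. The query body is a stub.
import threading

POOL_SIZE = 2  # analogous to zEMCIsilonHttpPoolSize
pool = threading.Semaphore(POOL_SIZE)
results = []
lock = threading.Lock()

def query_node(node):
    with pool:  # at most POOL_SIZE queries in flight at once
        with lock:
            results.append(node)  # stand-in for the real HTTP call

threads = [threading.Thread(target=query_node, args=(n,)) for n in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sorted(results) == [0, 1, 2, 3, 4]
```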
The default "admin" account's privileges are more than sufficient for this ZenPack.
Security concerns (such as API write access for the admin user), however, make the creation of a separate "monitoring" User/Role a good idea, and we strongly recommend it.
Performance may also be improved by using a minimally configured account, since API calls with read/write access use two client connections (according to the "isi statistics client" command output) instead of just one.
Fortunately, the creation of a Role with the needed Privileges is straightforward and can be done from the Isilon Admin UI.
The list of (read) permissions consists of:
Creating a Role with this list of Privileges and assigning a User to this Role will be sufficient for the ZenPack's monitoring requirements.
Installing this ZenPack will add the following items to your Zenoss system.
The EMC Isilon ZenPack can be upgraded. To upgrade, install the latest version over the existing one. No user action is required to migrate data; the previous ZenPack's performance data and events are retained according to the retention policy settings.
This ZenPack is developed and supported by Zenoss Inc. Commercial ZenPacks are available only to Zenoss commercial customers. Contact Zenoss to request more information regarding this or any other ZenPack.