Edge compute as a modern networking best practice

In modern networks, best practice involves detecting and remediating faults as quickly as possible, reducing or eliminating downtime and its associated costs.

This fault detection is performed by network monitoring applications, which pull telemetry data from networking equipment via monitoring interfaces such as SNMP, NetFlow, Syslog, and NETCONF/OpenConfig. The data is then analysed, and potential or actual fault conditions are alerted on so that networking staff can remediate the fault as quickly as possible. This telemetry data is generally transmitted over a distinct physical or logical management network, to keep it separate from in-band production traffic.
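To illustrate the pull side of this model, the sketch below polls a single OID from a device over SNMPv2c. It assumes the classic synchronous pysnmp (4.x) high-level API; the device address, community string, and polled OID are illustrative placeholders rather than values from any particular deployment.

```python
# Minimal pull-based telemetry sketch, assuming the classic synchronous
# pysnmp (4.x) high-level API. Host, community, and OID are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

def poll_sysuptime(host: str, community: str = "public") -> dict:
    """Poll a single OID (sysUpTime) from a device over SNMPv2c."""
    error_indication, error_status, error_index, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),   # mpModel=1 -> SNMPv2c
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)),
        )
    )
    if error_indication:                           # e.g. timeout: device unreachable
        raise RuntimeError(f"SNMP poll failed: {error_indication}")
    if error_status:
        raise RuntimeError(f"SNMP error: {error_status.prettyPrint()}")
    return {str(name): str(value) for name, value in var_binds}

if __name__ == "__main__":
    print(poll_sysuptime("192.0.2.10"))            # RFC 5737 documentation address
```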

In simple networks that are functioning correctly, there is ample bandwidth and connectivity for the network monitoring applications to poll the networking devices directly, or to subscribe to data streams from them. However, in today’s more complex networks, particularly those split between data centres, edge deployments in offices or production facilities (including employee homes), and the cloud, direct connectivity can become much harder or even infeasible due to bandwidth constraints, firewalling, NAT, and so on.

As a result, when a network fault occurs, the centralised monitoring applications can lose visibility of the networking equipment affected by the fault, depriving network administrators of valuable diagnostic information right when they need it most.

In these scenarios, the network monitoring applications often provide agents that are deployed on compute hosts close to the network equipment. These agents perform the monitoring tasks, aggregate the data, and push it back to the management application, minimising the impact of firewalling and NAT by using a push model, and the impact of bandwidth constraints by aggregating the data before transmission.
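A minimal sketch of such an agent is shown below: it polls local devices, aggregates the results into a single batch, and pushes that batch outbound over HTTPS, so no inbound firewall or NAT rules are required at the site. The collector URL, API token, site name, and the poll_device() helper are hypothetical placeholders, not part of any vendor's actual agent.

```python
# Sketch of a push-model edge agent: poll locally, aggregate, push outbound.
# The collector endpoint, token, and poll_device() are hypothetical.
import time
import requests

COLLECTOR_URL = "https://monitoring.example.com/api/telemetry"  # hypothetical endpoint
API_TOKEN = "changeme"                                          # hypothetical credential
DEVICES = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]            # local devices to monitor
INTERVAL_S = 60

def poll_device(host: str) -> dict:
    """Placeholder for a local poll (e.g. the SNMP query shown earlier)."""
    return {"host": host, "status": "up", "polled_at": time.time()}

def run_agent() -> None:
    while True:
        # Aggregate all local polls into one payload to reduce bandwidth use.
        batch = {"site": "branch-office-01",
                 "samples": [poll_device(h) for h in DEVICES]}
        try:
            # Outbound HTTPS push traverses NAT and stateful firewalls
            # without any inbound rules on the branch network.
            requests.post(
                COLLECTOR_URL,
                json=batch,
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                timeout=10,
            ).raise_for_status()
        except requests.RequestException as exc:
            print(f"push failed, will retry next cycle: {exc}")
        time.sleep(INTERVAL_S)

if __name__ == "__main__":
    run_agent()
```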

Examples of monitoring applications in the networking space that use this architecture include Cisco ThousandEyes, SolarWinds, and Zabbix.

Traditionally, this has meant network operators deploying extra hardware to host these agents, hardware which itself needs to be managed, monitored, and administered, and which takes up valuable rack space and power budget.

Gearlinx NR appliances perform the role of traditional out-of-band management (OOBM) devices while also allowing the easy deployment of compute containers or virtual machines, so network monitoring agents from multiple vendors can be hosted directly on the NR. This saves power and rack space while still providing full visibility to management applications.
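As a rough illustration of what hosting an agent as a container can look like, the sketch below launches a monitoring agent using the Docker SDK for Python. It assumes the edge host exposes a standard Docker-compatible runtime; this is an illustrative assumption, not a description of the Gearlinx deployment workflow, and the image name and account token are hypothetical.

```python
# Illustrative container deployment of a vendor monitoring agent, assuming a
# standard Docker-compatible runtime on the edge host (an assumption, not a
# statement of the Gearlinx API). Image name and token are hypothetical.
import docker

def deploy_agent() -> None:
    client = docker.from_env()  # connect to the local container runtime
    container = client.containers.run(
        "vendor/monitoring-agent:latest",                 # hypothetical agent image
        name="edge-monitoring-agent",
        detach=True,
        network_mode="host",                              # reach local devices directly
        restart_policy={"Name": "always"},                # survive appliance reboots
        environment={"AGENT_ACCOUNT_TOKEN": "changeme"},  # hypothetical credential
    )
    print(f"started {container.name} ({container.short_id})")

if __name__ == "__main__":
    deploy_agent()
```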

A concrete example of this architecture is an edge deployment at a small branch office or retail location. These deployments often have a single network link for in-band traffic and several network devices, such as a router, a network switch, a UPS, and some wireless access points. All of this infrastructure needs to be monitored, but if the in-band link or the router itself fails, the network monitoring system no longer has any visibility of the other devices.

Deploying a Gearlinx NR gives traditional OOB management capabilities to the network administrators responsible for the site, allowing remote remediation of network issues. It also provides the ability to deploy a Cisco ThousandEyes agent which, combined with the OOB network connectivity that Gearlinx provides via the built-in cellular link, gives the network monitoring systems 24/7 visibility of the devices, even during outages.