Providing Out-of-Band Connectivity to Mission-Critical IT Resources

Applications of Edge Computing


The edge computing market is huge and continuing to grow. A recent study projected that spending on edge computing will reach $232 billion in 2024. Organizations across nearly every industry are taking advantage of edge computing’s real-time data processing capabilities to get immediate business insights, respond to issues at remote sites before they impact operations, and much more. This blog discusses some of the applications of edge computing for industries like finance, retail, and manufacturing, and provides advice on how to get started.

What is edge computing?

Edge computing involves decentralizing computing capabilities and moving them to the network’s edges. Doing so reduces the number of network hops between data sources and the applications that process and use that data, which mitigates latency, bandwidth, and security concerns compared to cloud or on-premises computing.

Learn more about edge computing vs cloud computing or edge computing vs on-premises computing.

Edge computing often uses edge-native applications that are built from the ground up to harness edge computing’s unique capabilities and overcome its limitations. Edge-native applications leverage some cloud-native principles, such as containers, microservices, and CI/CD. However, unlike cloud-native apps, they’re designed to process transient, ephemeral data in real time with limited computational resources. Edge-native applications integrate seamlessly with the cloud, upstream resources, remote management, and centralized orchestration, but can also operate independently as needed.
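To make the pattern concrete, here is a minimal Python sketch (all names are illustrative, not any specific product's API) of an edge-native loop: make decisions locally in real time, buffer transient results, and sync upstream only when a connection is available.

```python
import json
from collections import deque

class EdgeProcessor:
    """Toy edge-native pattern: process readings locally, buffer
    summaries, and flush them upstream opportunistically."""

    def __init__(self, alert_threshold):
        self.alert_threshold = alert_threshold
        self.outbox = deque()  # transient, ephemeral local buffer

    def process(self, reading):
        # Real-time local decision -- no round trip to the cloud.
        alert = reading["value"] > self.alert_threshold
        self.outbox.append({"device": reading["device"],
                            "value": reading["value"],
                            "alert": alert})
        return alert

    def flush(self, connected):
        # The app keeps operating independently when the link is down;
        # summaries sync upstream only when connectivity returns.
        if not connected:
            return 0
        sent = len(self.outbox)
        while self.outbox:
            json.dumps(self.outbox.popleft())  # stand-in for an upload call
        return sent
```

The key design point is that `process` never blocks on the network: the upstream sync in `flush` is best-effort, so the application degrades gracefully during outages.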

Applications of edge computing


Financial services

  • Mitigate security and compliance risks of off-site data transmission

  • Gain real-time customer and productivity insights

  • Analyze surveillance footage in real time

Industrial manufacturing

  • Monitor and respond to OT equipment issues in real time

  • Create more efficient maintenance schedules

  • Prevent network outages from impacting production

Retail operations

  • Enhance the in-store customer experience

  • Improve inventory management and ordering

  • Aid loss prevention with live surveillance analysis

Healthcare

  • Monitor and respond to patient health issues in real time

  • Mitigate security and compliance risks by keeping data on-site

  • Reduce networking requirements for wearable sensors

Oil, gas, & mining

  • Ensure continuous monitoring even during network disruptions

  • Gain real-time safety, maintenance, and production recommendations

  • Enable remote troubleshooting and recovery of IT systems

AI & machine learning

  • Reduce the costs and risks of high-volume data transmissions

  • Unlock near-instantaneous AI insights at the edge

  • Improve AIOps efficiency and resilience at branches

Financial services

The financial services industry collects a lot of edge data from bank branches, web and mobile apps, self-service ATMs, and surveillance systems. Many firms feed this data into AI/ML-powered data analytics software to gain insights into how to improve their services and generate more revenue. Some also use AI-powered video surveillance systems to analyze video feeds and detect suspicious activity. However, there are enormous security, regulatory, and reputational risks involved in transmitting this sensitive data to the cloud or an off-site data center.

Financial institutions can use edge computing to move data analytics applications to branches and remote PoPs (points of presence) to help mitigate the risks of transmitting data off-site. Additionally, edge computing enables real-time data analysis for more immediate and targeted insights into customer behavior, branch productivity, and security. For example, AI surveillance software deployed at the edge can analyze live video feeds and alert on-site security personnel about potential crimes in progress.

Industrial manufacturing

Many industrial manufacturing processes are mostly (if not completely) automated and overseen by operational technology (OT), such as supervisory control and data acquisition systems (SCADA). Logs from automated machinery and control systems are analyzed by software to monitor equipment health, track production costs, schedule preventative maintenance, and perform quality assurance (QA) on components and products. However, transferring that data to the cloud or centralized data center increases latency and creates security risks.

Manufacturers can use edge computing to analyze OT data in real time, gaining faster insights and catching potential issues before they affect product quality or delivery schedules. Edge computing also allows industrial automation and monitoring processes to continue uninterrupted even if the site loses Internet access due to an ISP outage, natural disaster, or other adverse event in the region. Edge resilience can be further improved by deploying an out-of-band (OOB) management solution like Nodegrid that enables control plane/data plane isolation (also known as isolated management infrastructure), as this will give remote teams a lifeline to access and recover OT systems.
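As an illustration of the kind of local check this enables, the sketch below flags readings that deviate sharply from a trailing-window baseline. It is a toy example, not a production OT analytics algorithm, and the window and tolerance values are assumptions.

```python
def anomalies(readings, window=5, tolerance=0.5):
    """Flag indices where a reading deviates from the mean of the
    previous `window` readings by more than `tolerance` (fractional).
    This is the kind of lightweight check an edge app can run in real
    time on equipment logs, with no cloud round trip."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if abs(readings[i] - baseline) > tolerance * baseline:
            flagged.append(i)
    return flagged
```

Because the check runs next to the machinery, an out-of-range reading can trigger an on-site response in milliseconds instead of waiting on a centralized analytics pipeline.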

Retail operations

In the age of one-click online shopping, the retail industry has been innovating with technology to enhance the in-store experience, improve employee productivity, and keep operating costs down. Retailers have a brief window of time to meet a customer’s needs before they look elsewhere, and edge computing’s ability to leverage data in real time is helping address that challenge. For example, some stores place QR codes on shelves that customers can scan if a product is out of stock, alerting a nearby representative to provide immediate assistance.

Another retail application of edge computing is enhanced inventory management. An edge computing solution can make ordering recommendations based on continuous analysis of purchasing patterns over time combined with real-time updates as products are purchased or returned. Retail companies, like financial institutions, can also use edge AI/ML solutions to analyze surveillance data and aid in loss prevention.
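A classic reorder-point rule gives a feel for the kind of recommendation such a solution can compute locally as sales stream in. The function below is a simplified sketch with illustrative parameters, not any vendor's actual logic.

```python
def reorder_point(daily_sales, lead_time_days, safety_stock):
    """Average recent demand, projected over supplier lead time,
    plus a safety buffer."""
    avg_daily = sum(daily_sales) / len(daily_sales)
    return avg_daily * lead_time_days + safety_stock

def should_reorder(on_hand, daily_sales, lead_time_days, safety_stock):
    """Recommend an order when stock on hand drops to the reorder point."""
    return on_hand <= reorder_point(daily_sales, lead_time_days, safety_stock)
```

An edge deployment can re-evaluate this rule on every purchase or return event, rather than on a nightly batch job run in the cloud.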

Healthcare

The healthcare industry processes massive amounts of data generated by medical equipment like insulin pumps, pacemakers, and imaging devices. Patient health data can’t be transferred over the open Internet, so getting it to the cloud or data center for analysis requires funneling it through a central firewall via MPLS (for hospitals, clinics, and other physical sites), overlay networks, or SD-WAN (for wearable sensors and mobile EMS devices). This increases the number of network hops and creates a traffic bottleneck that prevents real-time patient monitoring and delays responses to potential health crises.

Edge computing for healthcare allows organizations to process medical data on the same local network, or even the same onboard chip, as the sensors and devices that generate most of the data. This significantly reduces latency and mitigates many of the security and compliance challenges involved in transmitting regulated health data offsite. For example, an edge-native application running on an implanted heart-rate monitor can operate without a network connection much of the time, providing the patient with real-time alerts so they can modify their behavior as needed to stay healthy. If the app detects any concerning activity, it can use multiple cellular and AT&T FirstNet connections to alert the cardiologist without exposing any private patient data.

Oil, gas, & mining

Oil, gas, and other mining operations use IoT sensors to monitor flow rates, detect leaks, and gather other critical information about equipment deployed in remote sites, drilling rigs, and offshore platforms all over the world. Drilling rigs are often located in extremely remote or even human-inaccessible locations, so ensuring reliable communications with monitoring applications in the cloud or data center can be difficult. Additionally, when networks or systems fail, it can be time-consuming and expensive – not to mention risky – to deploy IT teams to fix the issue on-site.

The energy and mining industries can use edge computing to analyze data in real time even in challenging deployment environments. For example, companies can deploy monitoring software on cellular-enabled edge computing devices to gain immediate insights into equipment status, well logs, borehole logs, and more. This software can help establish more effective maintenance schedules, uncover production inefficiencies, and identify potential safety issues or equipment failures before they cause larger problems. Edge solutions with OOB management also allow IT teams to fix many issues remotely, using alternative cellular interfaces to provide continuous access for troubleshooting and recovery.

AI & machine learning

Artificial intelligence (AI) and machine learning (ML) have broad applications across many industries and use cases, but they’re all powered by data. That data often originates at the network’s edges from IoT devices, equipment sensors, surveillance systems, and customer purchases. Securely transmitting, storing, and preparing edge data for AI/ML ingestion in the cloud or centralized data center is time-consuming, logistically challenging, and expensive. Decentralizing AI/ML’s computational resources and deploying them at the edge can significantly reduce these hurdles and unlock real-time capabilities.

For example, instead of deploying AI on a whole rack of GPUs (graphics processing units) in a central data center to analyze equipment monitoring data for all locations, a manufacturing company could use small edge computing devices to provide AI-powered analysis for each individual site. This would reduce bandwidth costs and network latency, enabling near-instant insights and providing an accelerated return on the investment into artificial intelligence technology.
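A back-of-envelope calculation shows why. Using assumed, illustrative stream rates (not vendor figures), compare uploading raw surveillance video to a central site against uploading only edge-generated event metadata:

```python
def monthly_upload_gb(cameras, mbps_per_stream, hours_per_day=24, days=30):
    """Estimate monthly upload volume in GB for a site streaming
    `cameras` feeds at `mbps_per_stream` megabits per second."""
    seconds = hours_per_day * 3600 * days
    return cameras * mbps_per_stream * seconds / 8 / 1000  # Mbit -> GB

# e.g. 16 cameras at 4 Mbps of raw video vs ~0.01 Mbps of event metadata
raw_gb = monthly_upload_gb(16, 4.0)    # backhauling raw footage
edge_gb = monthly_upload_gb(16, 0.01)  # edge inference, metadata only
```

Under these assumptions the raw backhaul is roughly 20 TB per site per month versus about 50 GB for metadata, which is where the bandwidth savings come from.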

AIOps can also be improved by edge computing. AIOps solutions analyze monitoring data from IT devices, network infrastructure, and security solutions and provide automated incident management, root-cause analysis, and simple issue remediation. Deploying AIOps on edge computing devices enables real-time issue detection and response. It also ensures continuous operation even if an ISP outage or network failure cuts off access to the cloud or central data center, helping to reduce business disruptions at vital branches and other remote sites.

Getting started with edge computing

The edge computing market has focused primarily on single-use-case solutions designed to solve specific business problems, forcing businesses to deploy many individual applications across the network. This piecemeal approach to edge computing increases management complexity and risk while decreasing operational efficiency.

The recommended approach is to use a centralized edge management and orchestration (EMO) platform to monitor and control edge computing operations. The EMO should be vendor-agnostic and interoperate with all the edge computing devices and edge-native applications in use across the organization. The easiest way to ensure interoperability is to use vendor-neutral edge computing platforms to run edge-native apps and AI/ML workflows.

For example, the Nodegrid platform from ZPE Systems provides the perfect vendor-neutral foundation for edge operations. Nodegrid integrated branch services routers, such as the Gate SR with its onboard Nvidia Jetson Nano, run the open, Linux-based Nodegrid OS, which can host Docker containers and edge-native applications for third-party AI, ML, data analytics, and more. These devices use out-of-band management to provide 24/7 remote visibility, management, and troubleshooting access to edge deployments, even in challenging environments like offshore oil rigs. Nodegrid's cloud-based or on-premises software provides a single pane of glass to orchestrate operations at all edge computing sites.

Streamline your edge computing deployment with Nodegrid

The vendor-neutral Nodegrid platform can simplify all applications of edge computing with easy interoperability, reduced hardware overhead, and centralized edge management and orchestration. Schedule a Nodegrid demo to learn more.

Cisco 4351 EOL Replacement Guide

The Cisco 4351 comes from the Integrated Services Router (ISR) product line of enterprise branch WAN solutions. The ISR 4351 works with Cisco’s software-defined wide area networking (SD-WAN) solution and the Cisco Digital Network Architecture (Cisco DNA) infrastructure management platform. It has a modular design that uses removable Network Interface Modules (NIMs) to extend its capabilities, for example, adding out-of-band (OOB) serial console management for up to 60 devices. Cisco announced end-of-life (EOL) dates for the entire ISR 4300 product line in 2022, and the 4351 is already past the end-of-sale and last ship dates. This guide compares Cisco 4351 EOL replacement options and discusses the innovative features and capabilities offered by modern, Gen 3 branch networking solutions. Click here for a list of Cisco ISR 4351 EOL products and replacement SKUs.

Upcoming Cisco ISR 4351 EOL dates

  • November 6, 2024 – End of routine failure analysis, end of new service attachment
  • August 31, 2025 – End of software maintenance releases and bug fixes
  • February 5, 2028 – End of service contract renewal
  • November 30, 2028 – Last date of support

Looking to replace a different Cisco EOL model? Read our guides Cisco ISR 4431 EOL Replacement Guide and Cisco ISR EOL Replacement Options.

Cisco 4351 EOL replacement options

Cisco ISR 4351 (EOL) vs. Cisco Catalyst C8300 vs. Nodegrid NSR:

Out-of-band (OOB) management

  • Cisco ISR 4351 (EOL): Gen 1 OOB
  • Cisco Catalyst C8300: Gen 2 OOB
  • Nodegrid NSR: Gen 3 OOB

Extensibility

  • Cisco ISR 4351 (EOL): Integrates with Cisco partners only
  • Cisco Catalyst C8300: Integrates with Cisco partners only
  • Nodegrid NSR: Supports virtualization, containers, and integrations

Automation

  • Cisco ISR 4351 (EOL) and Catalyst C8300: Policy-based automation; cloud-based automated device provisioning (ZTP); automated deployment of network services (Cisco DNA)
  • Nodegrid NSR: Zero Touch Provisioning (ZTP) via LAN/DHCP, WAN/ZPE Cloud, or USB; auto-discovery via network scan and custom probes; integrated orchestration and automation with Puppet, Chef, Ansible, RESTful APIs, ZPE Cloud, and Nodegrid Manager

Security

  • Cisco ISR 4351 (EOL) and Catalyst C8300: Intrusion prevention; Cisco Umbrella Branch; encrypted traffic analytics; IPSec tunnels; DMVPN; FlexVPN; GETVPN; content filtering; NAT; zone-based firewall
  • Nodegrid NSR: Edgified, hardened device with BIOS protection, TPM 2.0, UEFI Secure Boot, signed OS, Self-Encrypted Disk (SED), and geofencing; X.509 SSH certificate support with 4096-bit encryption keys; SDLC validated by Synopsys to eliminate CVEs and vulnerabilities from third-party integrations; selectable cryptographic protocols for SSH and HTTPS (TLSv1.3); SSL VPN (client and server); IPSec, WireGuard, and strongSwan with multi-site support; local, AD/LDAP, RADIUS, TACACS+, and Kerberos authentication; SAML support via Duo, Okta, and Ping Identity; local backup-user authentication; per-port user-access lists; fine-grained and role-based access control (RBAC); firewall with IP packet and security filtering and IP forwarding support; two-factor authentication (2FA) with RSA and Duo

Hardware services

  • Cisco ISR 4351 (EOL): Serial console ports; USB console ports; IP management ports; voice functionality; compute module
  • Cisco Catalyst C8300: Serial console ports; USB console ports; voice functionality
  • Nodegrid NSR: Serial console ports; USB console ports; IP management ports; PDU management; IPMI device management; optional compute module; optional storage module

Network services

  • Cisco ISR 4351 (EOL) and Catalyst C8300: Cisco SD-WAN software; WAN optimization; AppNAV; application visibility and control; multicast; Overlay Transport Virtualization (OTV); Ethernet VPN (EVPNoMPLS); IPv6 support
  • Nodegrid NSR: IPv4/IPv6 support; embedded Layer 2 switching; VLAN; Layer 3 routing; BGP; OSPF; RIP; QoS; DHCP (client and server)

Operating system

  • Cisco ISR 4351 (EOL) and Catalyst C8300: Cisco IOS
  • Nodegrid NSR: Built-in x86-64-bit Linux kernel Nodegrid OS

CPU

  • Cisco ISR 4351 (EOL) and Catalyst C8300: Multi-core processor
  • Nodegrid NSR: Intel x86-64 multi-core

Storage

  • Cisco ISR 4351 (EOL): 4GB-8GB Flash memory
  • Cisco Catalyst C8300: 16GB M.2 SSD storage
  • Nodegrid NSR: 32GB Flash (mSATA SSD), upgradeable, Self-Encrypted Drive (SED)

RAM

  • Cisco ISR 4351 (EOL): 4GB-8GB DRAM
  • Cisco Catalyst C8300: 8GB DRAM
  • Nodegrid NSR: 8GB DDR DRAM (upgradeable)

Size

  • Cisco ISR 4351 (EOL): 2RU
  • Cisco Catalyst C8300: 2RU
  • Nodegrid NSR: 1RU

The Cisco Catalyst C8300

Cisco recommends replacing the 4351 with the Catalyst C8300, but this platform does not go far enough to improve upon the limitations of the EOL model. For instance, both the ISR 4351 and the Catalyst C8300 replacement models are 2RU devices, making them too large for some branch and edge deployment use cases where space is limited. And while both platforms integrate with some of Cisco's third-party partners (like ThousandEyes), Cisco is a closed ecosystem that may not support all the management, automation, and security tools needed to run an enterprise branch. Additionally, Cisco's DNA software may not be able to control mixed-vendor infrastructure, leaving critical coverage gaps.

The Nodegrid Net SR (NSR)

ZPE Systems offers a range of enterprise branch network management solutions called Nodegrid that serve as an upgrade to Cisco 4351 EOL models. In particular, the Nodegrid Net Services Router (NSR) makes an ideal 4351 replacement due to its modular design, which can be extended with expansion modules for functionality like edge compute, PoE, USB OCP debug, and third-generation (Gen 3) out-of-band management. Gen 3 OOB allows teams to deploy third-party automation and orchestration workflows over the OOB network to streamline branch provisioning, management, and recovery. It also ensures 24/7 remote access to branch infrastructure even during network outages, provides a safe environment to recover from ransomware and other breaches, and keeps resource-intensive management workflows from bogging down the production network.

Want to see how Nodegrid stacks up against Cisco’s 4351 EOL replacement options? Click here to download the services routers comparative matrix.

All Nodegrid solutions are completely vendor-neutral, integrating with or even directly hosting third-party software and extending complete visibility and control to legacy and mixed-vendor infrastructure. Nodegrid is essentially a branch-in-a-box, allowing companies to deploy infrastructure automation, network orchestration, branch security, and more on a single device that's 1RU or smaller. Plus, this entire toolkit is available on an isolated, out-of-band network, ensuring remote teams have 24/7 access to keep the business operating even during outages and ransomware attacks.

Ready to replace your Cisco 4351 EOL solutions?

Nodegrid delivers vendor-neutral, branch-in-a-box solutions that streamline remote infrastructure management while improving network resilience. See our Cisco 4351 EOL replacement SKUs below or contact ZPE Systems for help choosing the right Nodegrid solution for your business.


 

Cisco 4351 EOL replacement SKUs

All rows below apply to the following Cisco 4351 EOL product SKUs: ISR4351-AX/K9, ISR4351-DNA, ISR4351-PM20, ISR4351-SEC/K9, ISR4351/K9, ISR4351-V/K9, and ISR4351-VSEC/K9.

In-scope features and Nodegrid replacement product SKUs:

  • Serial Console Module, Routing, 16 serial ports: ZPE-NSR-816-DAC with 1 x 16-port serial module (1 x ZPE-NSR-16SRL-EXPN)
  • Serial Console Module, Routing, 32 serial ports: ZPE-NSR-816-DAC with 2 x 16-port serial modules (2 x ZPE-NSR-16SRL-EXPN)
  • Serial Console Module, Routing, 48 serial ports: ZPE-NSR-816-DAC with 3 x 16-port serial modules (3 x ZPE-NSR-16SRL-EXPN)
  • Serial Console Module, Routing, 60 serial ports: ZPE-NSR-816-DAC with 4 x 16-port serial modules (4 x ZPE-NSR-16SRL-EXPN)
  • Serial Console Module, Routing, 80 serial ports (no Cisco equivalent): ZPE-NSR-816-DAC with 5 x 16-port serial modules (5 x ZPE-NSR-16SRL-EXPN)

The Future of Edge Computing

Edge computing moves computing resources and data processing applications out of the centralized data center or cloud, deploying them at the edges of the network and allowing companies to use their edge data in real time. An explosion in edge data generated by Internet of Things (IoT) sensors, automated operational technology (OT), and other remote devices has created a high demand for edge computing solutions. A recent report from Grand View Research valued the edge computing market at $16.45 billion in 2023 and predicted it would grow at a compound annual growth rate (CAGR) of 37.9% through 2030.

The current edge computing landscape comprises solutions focused on individual use cases, lacking interoperability and central orchestration. The future of edge computing, as described by leading analysts at Gartner, depends on unifying the edge computing ecosystem with comprehensive strategies and centralized, vendor-neutral management and orchestration. This future relies on edge-native applications that integrate seamlessly with upstream resources, remote management, and orchestration while still being able to operate independently.

Where is edge computing now?

Many organizations already use edge computing technology to solve individual problems or handle specific workloads. For example, a manufacturing department may deploy an edge computing application to analyze log data and provide predictive maintenance recommendations for a single type of machine or assembly line. A single company may have a dozen or more disjointed edge computing solutions in use throughout the network, creating visibility and management headaches for IT teams. This piecemeal approach to edge computing results in what Gartner calls “edge sprawl”: many disparate solutions deployed without centralized control, security, or visibility. Edge sprawl increases management complexity and risk while decreasing operational efficiency, creating significant roadblocks for digital transformation initiatives.

Additionally, many organizations misunderstand edge computing by thinking it's just about moving computing resources as close to the edge as possible to collect data. In reality, the true potential of the edge involves using edge data in real time, gaining “cloud-in-a-box” capability that works in concert with the network's upstream resources.

Anticipating the future of edge computing

At Gartner’s 2023 IT Infrastructure Operations & Cloud Strategies Conference, edge technology experts predicted that, by 2025, enterprises will create and process more than 50% of their data outside the centralized data center or cloud. Surging edge data volume will accelerate the challenges caused by a lack of strategy or orchestration.

Gartner’s 6 Edge Computing Challenges

Lack of extensibility

Many purpose-built edge computing solutions can’t adapt as use cases change or expand as the business scales, limiting agility and preventing efficient growth.

Inability to extract value from edge data

Much of the valuable data generated by edge sensors and devices goes unused because companies lack the resources to run all their data analytics and AI applications at the edge, leaving them stuck collecting data rather than acting on it.

Data storage constraints

Edge computing deployments are typically smaller and more storage-constrained than data center or cloud deployments, yet quickly distinguishing valuable data from disposable junk is difficult with limited edge resources.

Knowledge debt from edge-native apps

Edge-native applications are designed for edge computing architectures from the ground up. Edge containers are similar to cloud-native apps, but clustering and cluster management work quite differently, creating what's known as “knowledge debt” and straining IT teams.

Lack of security controls, policies, & visibility

Edge deployments often lack many of the security features used in data centers. Sometimes other departments install edge computing solutions without onboarding them with IT, so security policies and monitoring agents are never applied, adding risk and increasing the attack surface.

Inability to remotely orchestrate, monitor, & troubleshoot

When equipment failures, configuration errors, or breaches take down edge networks, remote teams are often cut off, unable to troubleshoot or recover without traveling on-site or paying for managed services, which increases the duration and cost of the outage. Many current edge solutions are also siloed, failing to connect to or integrate with the full networking stack.

At the Gartner conference, analyst Thomas Bittman gave multiple presentations echoing his advice from the Building an Edge Computing Strategy report published earlier in the year. In preparing for the future of edge computing, Bittman urges companies to proactively develop a comprehensive edge computing strategy encompassing all potential use cases and addressing the challenges described above. His recommendations include:

  • Enabling extensibility by utilizing vendor-neutral platforms that allow for expansion and integration, which supports growth and agility at the edge.
  • Looking for opportunities to deploy artificial intelligence, data analytics, and machine learning alongside edge computing units, for example, with system-on-chip technology or all-in-one edge networking and computing devices.
  • Anticipating data storage and governance challenges at the edge by defining clear policies and deploying AI/ML data management solutions that dynamically determine data value.
  • Reducing knowledge debt by utilizing vendor-neutral platforms that support familiar container and cluster management technologies (like Docker and Kubernetes).
  • Securing the edge with a multi-layered defense, including hardware security, frequent patches, zero-trust policies, strong authentication, network micro-segmentation, and comprehensive security monitoring.
  • Centralizing edge management and orchestration (EMO) with a vendor-neutral platform that unifies control, supports environmental monitoring, and uses out-of-band (OOB) management while interoperating with automated edge management workflows (such as zero-touch provisioning and infrastructure configuration management).

Bittman’s recommended edge computing strategy uses the central EMO as a hub for all the technologies, processes, and workflows involved in operating and supporting the edge. This strategy will prepare companies for the future of edge computing and support efficient, agile growth and innovation.

Enter the future of edge computing with Nodegrid

Nodegrid is a vendor-neutral edge management and orchestration platform from ZPE Systems. Nodegrid easily interoperates with your choice of edge solutions and can directly run third-party AI, ML, data analytics, and data governance applications to help you extract more value from your edge data. The open, Linux-based Nodegrid OS can also host Docker containers and edge-native applications to reduce hardware overhead and knowledge debt.

Nodegrid devices protect your edge management interfaces with hardware security features like TPM and geofencing, support for strong authentication like 2FA, and integrations with leading zero-trust providers like Okta and Ping Identity. The Nodegrid OS and ZPE Cloud are Synopsys-validated to address security at every stage of the SDLC. Plus, you can run third-party security solutions for SASE, next-generation firewalls, and more.

Nodegrid edge networking solutions use out-of-band technology to give teams 24/7 remote visibility, management, and troubleshooting access to edge deployments. Nodegrid freely interoperates with third-party solutions for infrastructure automation, monitoring, and recovery to support network resilience and operational efficiency. It acts as a cloud-in-a-box solution, incorporating edge computing and the full networking stack, while its edge management and orchestration platform provides single-pane-of-glass visibility, control, and resilience to support future edge growth.

Use Nodegrid for your Gartner-approved edge computing strategy

The Nodegrid EMO platform helps you anticipate the future of edge computing with vendor-neutral, single-pane-of-glass visibility and control. Watch a free Nodegrid demo to learn more.


Edge Computing Requirements


The Internet of Things (IoT) and remote work capabilities have allowed many organizations to conduct critical business operations at the enterprise network’s edges. Wearable medical sensors, automated industrial machinery, self-service kiosks, and other edge devices must transmit data to and from software applications, machine learning training systems, and data warehouses in centralized data centers or the cloud. Those transmissions eat up valuable MPLS bandwidth and are attractive targets for cybercriminals.

Edge computing involves moving data processing systems and applications closer to the devices that generate the data at the network’s edges. Edge computing can reduce WAN traffic to save on bandwidth costs and improve latency. It can also reduce the attack surface by keeping edge data on the local network or, in some cases, on the same device.

Running powerful data analytics and artificial intelligence applications outside the data center creates specific challenges. For example, space is usually limited at the edge, and devices might be outdoors where power and climate control are more complex. This guide discusses the edge computing requirements for hardware, networking, availability, security, and visibility to address these concerns.

Edge computing requirements

The primary requirements for edge computing are:

1. Compute

As the name implies, edge computing requires enough computing power to run the applications that process edge data. The primary concerns are:

  • Processing power: CPUs (central processing units), GPUs (graphics processing units), or SoCs (systems on chips)
  • Memory: RAM (random access memory)
  • Storage: SSDs (solid state drives), SCM (storage class memory), or Flash memory
  • Coprocessors: Supplemental processing power needed for specific tasks, such as DPUs (data processing units) for AI

The specific edge computing requirements for each will vary, as it’s essential to match the available compute resources with the needs of the edge applications.
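As a trivial illustration of that matching exercise, a sizing check might look like the sketch below (the field names are hypothetical, not from any real sizing tool):

```python
def fits(device, app):
    """Does an edge device's spec cover an application's compute
    requirements? A toy capacity check over CPU, RAM, storage, and
    an optional GPU/coprocessor requirement."""
    return (device["cpu_cores"] >= app["cpu_cores"]
            and device["ram_gb"] >= app["ram_gb"]
            and device["storage_gb"] >= app["storage_gb"]
            and (not app.get("needs_gpu") or device.get("has_gpu", False)))
```

Running each candidate workload through a check like this before deployment helps avoid over-buying hardware for small sites or starving AI workloads of the coprocessors they need.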

2. Small, ruggedized chassis

Space is often quite limited in edge sites, and devices may not be treated as delicately as they would be in a data center. Edge computing devices must be small enough to squeeze into tight spaces and rugged enough to handle the conditions they’ll be deployed in. For example, smart cities connect public infrastructure and services using IoT and networking devices installed in roadside cabinets, on top of streetlights, and in other challenging deployment sites. Edge computing devices in other applications might be subject to constant vibrations from industrial machinery, the humidity of an offshore oil rig, or even the vacuum of outer space.

3. Power

In some cases, edge deployments can use the same PDUs (power distribution units) and UPSes (uninterruptible power supplies) as a data center deployment. Non-traditional implementations, which might be outdoors, underground, or underwater, may require energy-efficient edge computing devices using alternative power sources like batteries or solar.

4. Wired & wireless connectivity

Edge computing systems must have both wired and wireless network connectivity options because organizations might deploy them somewhere without access to an Ethernet wall jack. Cellular connectivity via 4G/5G adds more flexibility and ideally provides network failover/out-of-band capabilities.
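The failover priority described above can be sketched as a simple selection rule: prefer wired, fall back to Wi-Fi, then cellular. Interface names are hypothetical, and a real deployment would let the router or OS handle the actual route changes; this only illustrates the priority logic.

```python
# Uplink preference order: wired > Wi-Fi > 4G/5G cellular (hypothetical names).
UPLINK_PRIORITY = ["eth0", "wlan0", "wwan0"]

def choose_uplink(link_status):
    """Return the highest-priority uplink that is currently up, or None.

    link_status maps interface name -> True (up) / False (down).
    """
    for iface in UPLINK_PRIORITY:
        if link_status.get(iface, False):
            return iface
    return None
```

For example, if the wired and Wi-Fi links are down, `choose_uplink({"eth0": False, "wlan0": False, "wwan0": True})` selects the cellular interface.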

5. Out-of-band (OOB) management

Many edge deployment sites don’t have any IT staff on hand, so teams manage the devices and infrastructure remotely. If something happens to take down the network, such as an equipment failure or ransomware attack, IT is completely cut off and must dispatch a costly and time-consuming truck roll to recover. Out-of-band (OOB) management creates an alternative path to remote systems that doesn’t rely on any production infrastructure, ensuring teams have continuous access to edge computing sites even during outages.
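The core idea, an access path that survives a production outage, can be sketched as follows. The reachability probe is injected so the logic is testable without a live network; the host and console addresses are hypothetical.

```python
def management_path(host, oob_console, reachable):
    """Pick an access path to a remote device.

    Uses the in-band production path when the host is reachable;
    otherwise falls back to the out-of-band console, which does not
    depend on the production network.
    """
    if reachable(host):
        return ("in-band", host)
    return ("out-of-band", oob_console)
```

During an outage the probe fails, so `management_path("10.0.0.5", "console.example:7001", lambda h: False)` returns the out-of-band console path instead of the production address.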

6. Security

Edge computing reduces some security risks but can create new ones. Security teams carefully monitor and control data center solutions, but systems at the edge are often left out. Edge-centric security platforms such as SSE (Security Service Edge) help by applying enterprise Zero Trust policies and controls to edge applications, devices, and users. Edge security solutions often need hardware to host agent-based software, which should be factored into edge computing requirements and budgets. Additionally, edge devices should have secure Roots of Trust (RoTs) that provide cryptographic functions, key management, and other features that harden device security.

7. Visibility

Because of a lack of IT presence at the edge, it’s often difficult to catch problems like high humidity, overheating fans, or physical tampering until they affect the performance or availability of edge computing systems. This leads to a break/fix approach to edge management, where teams spend all their time fixing issues after they occur rather than focusing on improvements and innovations. Teams need visibility into environmental conditions, device health, and security at the edge to fix issues before they cause outages or breaches.
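A minimal sketch of that kind of proactive monitoring: evaluate sensor readings against alert thresholds so out-of-range conditions are flagged before they cause an outage. The thresholds below are illustrative, not vendor recommendations.

```python
# Illustrative (min, max) alert thresholds per environmental sensor.
THRESHOLDS = {
    "temperature_c": (10, 45),
    "humidity_pct": (20, 80),
}

def check_readings(readings):
    """Return alert strings for any readings outside their thresholds."""
    alerts = []
    for sensor, (lo, hi) in THRESHOLDS.items():
        value = readings.get(sensor)
        if value is not None and not lo <= value <= hi:
            alerts.append(f"{sensor}={value} outside [{lo}, {hi}]")
    return alerts
```

An overheating enclosure, say `check_readings({"temperature_c": 50, "humidity_pct": 50})`, produces a temperature alert that a monitoring pipeline could forward to the NOC.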

Streamlining edge computing requirements

An edge computing deployment designed around these seven requirements will be more cost-effective while avoiding some of the biggest edge hurdles. Another way to streamline edge deployments is with consolidated, vendor-neutral devices that combine core networking and computing capabilities with the ability to integrate and unify third-party edge solutions. For example, the Nodegrid platform from ZPE Systems delivers computing power, wired & wireless connectivity, OOB management, environmental monitoring, and more in a single, small device. ZPE’s integrated edge routers use the open, Linux-based Nodegrid OS capable of running Guest OSes and Docker containers for your choice of third-party AI/ML, data analytics, SSE, and more. Nodegrid also allows you to extend automated control to the edge with Gen 3 out-of-band management for greater efficiency and resilience.

Want to learn more about how Nodegrid makes edge computing easier and more cost-effective?

To learn more about consolidating your edge computing requirements with the vendor-neutral Nodegrid platform, schedule a free demo!

Request a Demo

What is a Hyperscale Data Center?


As today’s enterprises race toward digital transformation with cloud-based applications, software-as-a-service (SaaS), and artificial intelligence (AI), data center architectures are evolving. Organizations rely less on traditional server-based infrastructures, preferring the scalability, speed, and cost-efficiency of cloud and hybrid-cloud architectures using major platforms such as AWS and Google. These digital services are supported by an underlying infrastructure comprising thousands of servers, GPUs, and networking devices in what’s known as a hyperscale data center.

The size and complexity of hyperscale data centers present unique management, scaling, and resilience challenges that providers must overcome to ensure an optimal customer experience. This blog explains what a hyperscale data center is and compares it to a normal data center deployment before discussing the unique challenges involved in managing and supporting a hyperscale deployment.

What is a hyperscale data center?

As the name suggests, a hyperscale data center operates at a much larger scale than traditional enterprise data centers. A typical data center houses infrastructure for dozens of customers, each with tens of servers and devices. A hyperscale data center deployment supports at least 5,000 servers dedicated to a single platform, such as AWS. These thousands of individual machines and services must seamlessly interoperate and rapidly scale on demand to provide a unified and streamlined user experience.

The biggest hyperscale data center challenges

Operating data center deployments on such a massive scale is challenging for several key reasons.

 
 

Hyperscale Data Center Challenges

Complexity

Hyperscale data center infrastructure is extensive and complex, with thousands of individual devices, applications, and services to manage. This infrastructure is distributed across multiple facilities in different geographic locations for redundancy, load balancing, and performance reasons. Efficiently managing these resources is impossible without a unified platform, but different vendor solutions and legacy systems may not interoperate, creating a fragmented control plane.

Scaling

Cloud and SaaS customers expect instant, streamlined scaling of their services, and demand can fluctuate wildly depending on the time of year, economic conditions, and other external factors. Many hyperscale providers use serverless, immutable infrastructure that’s elastic and easy to scale, but these systems still rely on a hardware backbone with physical limitations. Adding more compute resources also requires additional management and networking hardware, which increases the cost of scaling hyperscale infrastructure.

Resilience

Customers rely on hyperscale service providers for their critical business operations, so they expect reliability and continuous uptime. Failing to maintain service level agreements (SLAs) with uptime requirements can negatively impact a provider’s reputation. When equipment failures and network outages occur - as they always do, eventually - hyperscale data center recovery is difficult and expensive.

Overcoming hyperscale data center challenges requires unified, scalable, and resilient infrastructure management solutions, like the Nodegrid platform from ZPE Systems.

How Nodegrid simplifies hyperscale data center management

The Nodegrid family of vendor-neutral serial console servers and network edge routers streamlines hyperscale data center deployments. Nodegrid helps hyperscale providers overcome their biggest challenges with:

  • A unified, integrated management platform that centralizes control over multi-vendor, distributed hyperscale infrastructures.
  • Innovative, vendor-neutral serial console servers and network edge routers that extend the unified, automated control plane to legacy, mixed-vendor infrastructure.
  • The open, Linux-based Nodegrid OS which hosts or integrates your choice of third-party software to consolidate functions in a single box.
  • Fast, reliable out-of-band (OOB) management and 5G/4G cellular failover to facilitate easy remote recovery for improved resilience.

The Nodegrid platform gives hyperscale providers single-pane-of-glass control over multi-vendor, legacy, and distributed data center infrastructure for greater efficiency. With a device like the Nodegrid Serial Console Plus (NSCP), you can manage up to 96 devices with a single piece of 1RU rack-mounted hardware, significantly reducing scaling costs. Plus, the vendor-neutral Nodegrid OS can directly host other vendors’ software for monitoring, security, automation, and more, reducing the number of hardware solutions deployed in the data center.

Nodegrid’s out-of-band (OOB) management creates an isolated control plane that doesn’t rely on production network resources, giving teams a lifeline to recover remote infrastructure during outages, equipment failures, and ransomware attacks. The addition of 5G/4G LTE cellular failover allows hyperscale providers to keep vital services running during recovery operations so they can maintain customer SLAs.

Want to learn more about Nodegrid hyperscale data center solutions from ZPE Systems?

Nodegrid’s vendor-neutral hardware and software help hyperscale cloud providers streamline their operations with unified management, enhanced scalability, and resilient out-of-band management. Request a free Nodegrid demo to see our hyperscale data center solutions in action.

Request a Demo

Healthcare Network Design

Edge Computing in Healthcare
In a healthcare organization, IT’s goal is to ensure network and system stability to improve both patient outcomes and ROI. The National Institutes of Health (NIH) provides many recommendations for how to achieve these goals, and they place a heavy focus on resilience engineering (RE). Resilience engineering enables a healthcare organization to resist and recover from unexpected events, such as surges in demand, ransomware attacks, and network failures. Resilient architectures allow the organization to continue operating and serving patients during major disruptions and to recover critical systems rapidly.

This guide to healthcare network design describes the core technologies comprising a resilient network architecture before discussing how to take resilience engineering to the next level with automation, edge computing, and isolated recovery environments.

Core healthcare network resilience technologies

A resilient healthcare network design includes resilience systems that perform critical functions while the primary systems are down. The core technologies and capabilities required for resilience systems include:

  • Full-stack networking – Routing, switching, Wi-Fi, voice over IP (VoIP), virtualization, and the network overlay used in software-defined networking (SDN) and software-defined wide area networking (SD-WAN)
  • Full compute capabilities – The virtual machines (VMs), containers, and/or bare metal servers needed to run applications and deliver services
  • Storage – Enough to recover systems and applications as well as deliver content while primary systems are down

These are the main technologies that allow healthcare IT teams to reduce disruptions and streamline recovery. Once organizations achieve this base level of resilience, they can evolve by adding more automation, edge computing, and isolated recovery infrastructure.

Extending automated control over healthcare networks

Automation is one of the best tools healthcare teams have to reduce human error, improve efficiency, and ensure network resilience. However, automation can be hard to learn and scripts take time to write, so it's critical to have systems that are easily deployable with low technical debt. Tools like ZTP (zero-touch provisioning) and Infrastructure as Code (IaC) accelerate recovery by automating device provisioning. Healthcare organizations can combine automation technologies such as AIOps with resilience technologies like out-of-band (OOB) management to monitor, maintain, and troubleshoot critical infrastructure.
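The IaC approach can be sketched as a desired-state comparison: declare the configuration a device should have, compute the difference from its actual state, and apply only that difference, so re-running the provisioning step is idempotent. The setting names and values below are hypothetical.

```python
def plan_changes(desired, actual):
    """Return only the settings that differ from the desired state."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def apply_state(desired, actual):
    """Apply the planned changes and return the resulting device state."""
    new_state = dict(actual)
    new_state.update(plan_changes(desired, actual))
    return new_state
```

For example, if a replacement device boots with a default VLAN, the plan contains only the VLAN change; applying the same desired state a second time yields an empty plan, which is what makes automated recovery safe to repeat.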

Using automation to observe and control healthcare networks helps prevent failures from occurring, but when trouble does occur, resilience systems ensure infrastructure and services are quickly restored or rerouted as needed.

Improving performance and security with edge computing

The healthcare industry is one of the biggest adopters of IoT (Internet of Things) technology. Remote, networked medical devices like pacemakers, insulin pumps, and heart rate monitors collect a large volume of valuable data that healthcare teams use to improve patient care. Transmitting that data to a software application in a data center or cloud adds latency and increases the chances of interception by malicious actors. Edge computing for healthcare eliminates these problems by relocating applications closer to the source of medical data, at the edges of the healthcare network. Edge computing significantly reduces latency and security risks, creating a more resilient healthcare network design.

Note that teams also need a way to remotely manage and service edge computing technologies. Find out more in our blog Edge Management & Orchestration.

Increasing resilience with isolated recovery environments

Ransomware is one of the biggest threats to network resilience, with attacks occurring so frequently that it’s no longer a question of ‘if’ but ‘when’ a healthcare organization will be hit.

Recovering from ransomware is especially difficult because of how easily malicious code can spread from the production network into backup data and systems. The best way to protect your resilience systems and speed up ransomware recovery is with an isolated recovery environment (IRE) that’s fully separated from the production infrastructure.

 

A diagram showing the components of an isolated recovery environment.

An IRE ensures that IT teams have a dedicated environment in which to rebuild and restore critical services during a ransomware attack, as well as during other disruptions or disasters. An IRE does not replace a traditional backup solution, but it does provide a safe environment that’s inaccessible to attackers, allowing response teams to conduct remediation efforts without being detected or interrupted by adversaries. Isolating your recovery architecture improves healthcare network resilience by reducing the time it takes to restore critical systems and preventing reinfection.

To learn more about how to recover from ransomware using an isolated recovery environment, download our whitepaper, 3 Steps to Ransomware Recovery.

Resilient healthcare network design with Nodegrid

A resilient healthcare network design is resistant to failures thanks to resilience systems that perform critical functions while the primary systems are down. Healthcare organizations can further improve resilience by implementing additional automation, edge computing, and isolated recovery environments (IREs).

Nodegrid healthcare network solutions from ZPE Systems simplify healthcare resilience engineering by consolidating the technologies and services needed to deploy and evolve your resilience systems. Nodegrid’s serial console servers and integrated branch/edge routers deliver full-stack networking, combining cellular, Wi-Fi, fiber, and copper into software-driven networking that also includes compute capabilities, storage, vendor-neutral application & automation hosting, and cellular failover required for basic resilience. Nodegrid also uses out-of-band (OOB) management to create an isolated management and recovery environment without the cost and hassle of deploying an entire redundant infrastructure.

Ready to see how Nodegrid can improve your network’s resilience?

Nodegrid streamlines resilient healthcare network design with consolidated, vendor-neutral solutions. Request a free demo to see Nodegrid in action.

Request a Demo