Providing Out-of-Band Connectivity to Mission-Critical IT Resources

Edge Computing Platforms: Insights from Gartner’s 2024 Market Guide

Interlocking cogwheels containing icons of various edge computing examples are displayed in front of racks of servers

Edge computing allows organizations to process data close to where it’s generated, such as in retail stores, industrial sites, and smart cities, with the goal of improving operational efficiency and reducing latency. However, edge computing requires a platform that can support the necessary software, management, and networking infrastructure. Let’s explore the 2024 Gartner Market Guide for Edge Computing, which highlights the drivers of edge computing and offers guidance for organizations considering edge strategies.

What is an Edge Computing Platform (ECP)?

Edge computing moves data processing close to where it’s generated. For bank branches, manufacturing plants, hospitals, and others, edge computing delivers benefits like reduced latency, faster response times, and lower bandwidth costs. An Edge Computing Platform (ECP) provides the foundation of infrastructure, management, and cloud integration that enables edge computing. The goal of having an ECP is to allow many edge locations to be efficiently operated and scaled with minimal, if any, human touch or physical infrastructure changes.

Before we describe ECPs in detail, it’s important to first understand why edge computing is becoming increasingly critical to IT and what challenges arise as a result.

What’s Driving Edge Computing, and What Are the Challenges?

Here are the five drivers of edge computing described in Gartner’s report, along with the challenges that arise from each:

1. Edge Diversity

Every industry has its unique edge computing requirements. For example, manufacturing often needs low-latency processing to ensure real-time control over production, while retail might focus on real-time data insights to deliver hyper-personalized customer experiences.

Challenge: Edge computing solutions are usually deployed to address an immediate need, without taking into account the potential for future changes. This makes it difficult to adapt to diverse and evolving use cases.

2. Ongoing Digital Transformation

Gartner predicts that by 2029, 30% of enterprises will rely on edge computing. Digital transformation is catalyzing its adoption, while use cases will continue to evolve based on emerging technologies and business strategies.

Challenge: This rapid transformation means environments will continue to become more complex as edge computing evolves. This complexity makes it difficult to integrate, manage, and secure the various solutions required for edge computing.

3. Data Growth

The amount of data generated at the edge is increasing exponentially due to digitalization. Initially, this data was often underutilized (referred to as the “dark edge”), but businesses are now shifting towards a more connected and intelligent edge, where data is processed and acted upon in real time.

Challenge: Enormous volumes of data make it difficult to efficiently manage data flows and support real-time processing without overwhelming the network or infrastructure.

4. Business-Led Requirements

Automation, predictive maintenance, and hyper-personalized experiences are key business drivers pushing the adoption of edge solutions across industries.

Challenge: Meeting business requirements poses challenges in terms of ensuring scalability, interoperability, and adaptability.

5. Technology Focus

Emerging technologies such as AI/ML are increasingly deployed at the edge for low-latency processing, which is particularly useful in manufacturing, defense, and other sectors that require real-time analytics and autonomous systems.

Challenge: AI and ML force organizations to strike a balance between computing power and infrastructure costs without sacrificing security.

What Features Do Edge Computing Platforms Need to Have?

To address these challenges, here’s a brief look at three core features that ECPs need to have according to Gartner’s Market Guide:

  1. Edge Software Infrastructure: Support for edge-native workloads and infrastructure, including containers and VMs. The platform must be secure by design.
  2. Edge Management and Orchestration: Centralized management for the full software stack, including orchestration for app onboarding, fleet deployments, data storage, and regular updates/rollbacks.
  3. Cloud Integration and Networking: Seamless connection between edge and cloud to ensure smooth data flow and scalability, with support for upstream and downstream networking.

Image: A simple diagram showing the computing and networking capabilities that can be delivered via Edge Management and Orchestration.


How ZPE Systems’ Nodegrid Platform Addresses Edge Computing Challenges

ZPE Systems’ Nodegrid is a Secure Service Delivery Platform that meets these needs. Nodegrid covers all three feature categories outlined in Gartner’s report, allowing organizations to host and manage edge computing via one platform. Not only is Nodegrid the industry’s most secure management infrastructure, but it also features a vendor-neutral OS, hypervisor, and multi-core Intel CPU to support necessary containers, VMs, and workloads at the edge. Nodegrid follows isolated management best practices that enable end-to-end orchestration and safe updates/rollbacks of global device fleets. Nodegrid integrates with all major cloud providers, and also features a variety of uplink types, including 5G, Starlink, and fiber, to address use cases ranging from setting up out-of-band access, to architecting Passive Optical Networking.

Here’s how Nodegrid addresses the five edge computing challenges:

1. Edge Diversity: Adapting to Industry-Specific Needs

Nodegrid is built to handle diverse requirements, with a flexible architecture that supports containerized applications and virtual machines. This architecture enables organizations to tailor the platform to their edge computing needs, whether for handling automated workflows in a factory or data-driven customer experiences in retail.

2. Ongoing Digital Transformation: Supporting Continuous Growth

Nodegrid supports ongoing digital transformation by providing zero-touch orchestration and management, allowing for remote deployment and centralized control of edge devices. This enables teams to perform initial setup of all infrastructure and services required for their edge computing use cases. Nodegrid’s remote access and automation provide a secure platform for keeping infrastructure up-to-date and optimized without the need for on-site staff. This helps organizations move much of their focus away from operations (“keeping the lights on”), and instead gives them the agility to scale their edge infrastructure to meet their business goals.

3. Data Growth: Enabling Real-Time Data Processing

Nodegrid addresses the challenge of exponential data growth by providing local processing capabilities, enabling edge devices to analyze and act on data without relying on the cloud. This not only reduces latency but also enhances decision-making in time-sensitive environments. For instance, Nodegrid can handle the high volumes of data generated by sensors and machines in a manufacturing plant, providing instant feedback for closed-loop automation and improving operational efficiency.

4. Business-Led Requirements: Tailored Solutions for Industry Demands

Nodegrid’s hardware and software are designed to be adaptable, allowing businesses to scale across different industries and use cases. In manufacturing, Nodegrid supports automated workflows and predictive maintenance, ensuring equipment operates efficiently. In retail, it powers hyper-personalization, enabling businesses to offer tailored customer experiences through edge-driven insights. The vendor-neutral Nodegrid OS integrates with existing and new infrastructure, and the Net SR is a modular appliance that allows for hot-swapping of serial, Ethernet, computing, storage, and other capabilities. Organizations using Nodegrid can adapt to evolving use cases without having to do any heavy lifting of their infrastructure.

5. Technology Focus: Supporting Advanced AI/ML Applications

Emerging technologies such as AI/ML require robust edge platforms that can handle complex workloads with low-latency processing. Nodegrid excels in environments where real-time analytics and autonomous systems are crucial, offering high-performance infrastructure designed to support these advanced use cases. Whether processing data for AI-driven decision-making in defense or enabling real-time analytics in industrial environments, Nodegrid provides the computing power and scalability needed for AI/ML models to operate efficiently at the edge.

Read Gartner’s Market Guide for Edge Computing Platforms

As businesses continue to deploy edge computing solutions to manage increasing data, reduce latency, and drive innovation, selecting the right platform becomes critical. The 2024 Gartner Market Guide for Edge Computing Platforms provides valuable insights into the trends and challenges of edge deployments, emphasizing the need for scalability, zero-touch management, and support for evolving workloads.

Click below to download the report.

Get a Demo of Nodegrid’s Secure Service Delivery

Our engineers are ready to walk you through the software infrastructure, edge management and orchestration, and cloud integration capabilities of Nodegrid. Use the form to set up a call and get a hands-on demo of this Secure Service Delivery Platform.

Network Virtualization Platforms: Benefits & Best Practices


Simulated network virtualization platforms overlaying physical network infrastructure.

Network virtualization decouples network functions, services, and workflows from the underlying hardware infrastructure and delivers them as software. In the same way that server virtualization makes data centers more scalable and cost-effective, network virtualization helps companies streamline network deployment and management while reducing hardware expenses.

This guide describes several types of network virtualization platforms before discussing the benefits of virtualization and the best practices for improving efficiency, scalability, and ROI.

What do network virtualization platforms do?

There are three forms of network virtualization that are achieved with different types of platforms. These include:

Virtual Local Area Networking (VLAN): Creates an abstraction layer over physical local networking infrastructure so the company can segment the network into multiple virtual networks without installing additional hardware. Example platforms: SolarWinds Network Configuration Manager, ManageEngine Network Configuration Manager.

Software-Defined Networking (SDN): Decouples network routing and control functions from the actual data packets so that IT teams can deploy and orchestrate workflows across multiple devices and VLANs from one centralized platform. Example platforms: Meraki, Juniper.

Network Functions Virtualization (NFV): Separates network functions like routing, switching, and load balancing from the underlying hardware so teams can deploy them as virtual machines (VMs) and use fewer physical devices. Example platforms: Red Hat OpenStack, VMware vCloud NFV.

While network virtualization is primarily concerned with software, it still requires a physical network infrastructure to serve as the foundation for the abstraction layer (just like server virtualization still requires hardware in the data center or cloud to run hypervisor software). Additionally, the virtualization software itself needs storage or compute resources to run, either on a server/hypervisor or built-in to a networking device like a router or switch. Sometimes, this hardware is also referred to as a network virtualization platform.

The benefits of network virtualization

Virtualizing network services and workflows with VLAN, SDN, and NFV can help companies:

  • Improve operational efficiency with automation. Network virtualization enables the use of scripts, playbooks, and software to automate workflows and configurations. Network automation boosts productivity so teams can get more work done with fewer resources.
  • Accelerate network deployments and scaling. Legacy deployments involve configuring and installing dedicated boxes for each function. Virtualized network functions and configurations can be deployed in minutes and infinitely copied to get new sites up and running in a fraction of the time.
  • Reduce network infrastructure costs. Decoupling network functions, services, and workflows from the underlying hardware means you can run multiple functions from one device, saving money and space.
  • Strengthen network security. Virtualization makes it easier to micro-segment the network and implement precise, targeted Zero-Trust security controls to protect sensitive and valuable assets.
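
To make the automation benefit above concrete, here’s a minimal sketch of the kind of task these platforms script away: rendering per-site VLAN configurations from a single template. The site names, VLAN IDs, and config syntax are all hypothetical.

```python
# Hypothetical per-site VLAN plan -- illustrative only, not a real network.
SITES = {
    "branch-nyc": {"voice": 10, "guest": 20},
    "branch-sfo": {"voice": 10, "guest": 30},
}

def render_vlan_config(site: str, vlans: dict) -> str:
    """Render a simple VLAN configuration stanza for one site."""
    lines = [f"! {site}"]
    for name, vid in sorted(vlans.items()):
        lines.append(f"vlan {vid}")
        lines.append(f" name {name}")
    return "\n".join(lines)

# Generate every site's config in one pass instead of hand-typing each one.
configs = {site: render_vlan_config(site, vlans) for site, vlans in SITES.items()}
print(configs["branch-nyc"])
```

In practice the generated output would be pushed to devices by an automation tool; the point is that one template scales to any number of sites with no extra manual work.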

Network virtualization platform best practices

Following these best practices when selecting and implementing network virtualization platforms can help companies achieve the benefits described above while reducing hassle.

Vendor neutrality

Ensuring that the virtualization software works with the underlying hardware is critical. The struggle is that many organizations use devices from multiple vendors, which makes interoperability a challenge. Rather than using different virtualization platforms for each vendor, or replacing perfectly good devices with ones that are all from the same vendor, it’s much easier and more cost-effective to use virtualization software that interoperates with any networking hardware. This type of software is called ‘vendor neutral.’

To improve efficiency even more, companies can use vendor-neutral networking hardware to host their virtualization software. Doing so eliminates the need for a dedicated server, allowing SDN software and virtualized network functions (VNFs) to run directly from a serial console or router that’s already in use. This significantly consolidates deployments, saving money and reducing the amount of space needed. This can be a lifesaver in branch offices, retail stores, manufacturing sites, and other locations with limited space.

A diagram showing how multiple VNFs can run on a single vendor-neutral platform.

Virtualizing the WAN

We’ve mostly discussed virtualization in a local networking context, but it can also be extended to the WAN (wide area network). For example, SD-WAN (software-defined wide area networking) streamlines and automates the management of WAN infrastructure and workflows. WAN gateway routing functions can also be virtualized as VNFs that are deployed and controlled independently of the physical WAN gateway, significantly accelerating new branch launches.

Unifying network orchestration

The best way to maximize network management efficiency is to consolidate the orchestration of all virtualization with a single, vendor-neutral platform. For example, the Nodegrid solution from ZPE Systems uses vendor-neutral hardware and software to give networking teams a single platform to host, deploy, monitor, and control all virtualized workflows and devices. Nodegrid streamlines network virtualization with:

  • An open, x86-64 Linux-based architecture that can run other vendors’ software, VNFs, and even Docker containers to eliminate the need for dedicated virtualization appliances.
  • Multi-functional hardware devices that combine gateway routing, switching, out-of-band serial console management, and more to further consolidate network deployments.
  • Vendor-neutral orchestration software, available in on-premises or cloud form, that provides unified control over both physical and virtual infrastructure across all deployment sites for a convenient management experience.

Want to see vendor-neutral network orchestration in action?

Nodegrid unifies network virtualization platforms and workflows to boost productivity while reducing infrastructure costs. Schedule a free demo to experience the benefits of vendor-neutral network orchestration firsthand.

Schedule a Demo

Applications of Edge Computing

A healthcare worker presents various edge computing concepts to highlight some of the applications of edge computing

The edge computing market is huge and continuing to grow. A recent study projected that spending on edge computing will reach $232 billion in 2024. Organizations across nearly every industry are taking advantage of edge computing’s real-time data processing capabilities to get immediate business insights, respond to issues at remote sites before they impact operations, and much more. This blog discusses some of the applications of edge computing for industries like finance, retail, and manufacturing, and provides advice on how to get started.

What is edge computing?

Edge computing involves decentralizing computing capabilities and moving them to the network’s edges. Doing so reduces the number of network hops between data sources and the applications that process and use that data, which mitigates latency, bandwidth, and security concerns compared to cloud or on-premises computing.

Learn more about edge computing vs cloud computing or edge computing vs on-premises computing.

Edge computing often uses edge-native applications that are built from the ground up to harness edge computing’s unique capabilities and overcome its limitations. Edge-native applications leverage some cloud-native principles, such as containers, microservices, and CI/CD. However, unlike cloud-native apps, they’re designed to process transient, ephemeral data in real time with limited computational resources. Edge-native applications integrate seamlessly with the cloud, upstream resources, remote management, and centralized orchestration, but can also operate independently as needed.
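
The “operate independently, sync when connected” behavior described above can be sketched in a few lines. This is an illustrative pattern only, not any product’s actual API; the class name, alert threshold, and buffer size are all made up for the example.

```python
from collections import deque

class EdgeBuffer:
    """Sketch of an edge-native pattern: process readings locally in real
    time, buffer the results, and flush upstream only when a cloud link is
    available. All names and thresholds here are hypothetical."""

    def __init__(self, maxlen: int = 1000):
        self.pending = deque(maxlen=maxlen)  # bounded: edge storage is limited

    def ingest(self, reading: float) -> str:
        # Local, low-latency decision -- no round trip to the cloud.
        verdict = "alert" if reading > 90.0 else "ok"
        self.pending.append({"reading": reading, "verdict": verdict})
        return verdict

    def flush(self, cloud_up: bool) -> int:
        # Sync buffered results upstream only when connectivity exists.
        if not cloud_up:
            return 0
        sent = len(self.pending)
        self.pending.clear()  # stand-in for an actual upload
        return sent

buf = EdgeBuffer()
buf.ingest(95.2)           # handled locally, immediately
buf.ingest(42.0)
buf.flush(cloud_up=False)  # offline: nothing leaves the site, nothing is lost
```

The key design point is that the decision (`ingest`) never depends on the upstream link, which is exactly the independence edge-native apps need.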

Applications of edge computing

Financial services

  • Mitigate security and compliance risks of off-site data transmission
  • Gain real-time customer and productivity insights
  • Analyze surveillance footage in real time

Industrial manufacturing

  • Monitor and respond to OT equipment issues in real time
  • Create more efficient maintenance schedules
  • Prevent network outages from impacting production

Retail operations

  • Enhance the in-store customer experience
  • Improve inventory management and ordering
  • Aid loss prevention with live surveillance analysis

Healthcare

  • Monitor and respond to patient health issues in real time
  • Mitigate security and compliance risks by keeping data on-site
  • Reduce networking requirements for wearable sensors

Oil, gas, & mining

  • Ensure continuous monitoring even during network disruptions
  • Gain real-time safety, maintenance, and production recommendations
  • Enable remote troubleshooting and recovery of IT systems

AI & machine learning

  • Reduce the costs and risks of high-volume data transmissions
  • Unlock near-instantaneous AI insights at the edge
  • Improve AIOps efficiency and resilience at branches

Financial services

The financial services industry collects a lot of edge data from bank branches, web and mobile apps, self-service ATMs, and surveillance systems. Many firms feed this data into AI/ML-powered data analytics software to gain insights into how to improve their services and generate more revenue. Some also use AI-powered video surveillance systems to analyze video feeds and detect suspicious activity. However, there are enormous security, regulatory, and reputational risks involved in transmitting this sensitive data to the cloud or an off-site data center.

Financial institutions can use edge computing to move data analytics applications to branches and remote PoPs (points of presence) to help mitigate the risks of transmitting data off-site. Additionally, edge computing enables real-time data analysis for more immediate and targeted insights into customer behavior, branch productivity, and security. For example, AI surveillance software deployed at the edge can analyze live video feeds and alert on-site security personnel about potential crimes in progress.

Industrial manufacturing

Many industrial manufacturing processes are mostly (if not completely) automated and overseen by operational technology (OT), such as supervisory control and data acquisition systems (SCADA). Logs from automated machinery and control systems are analyzed by software to monitor equipment health, track production costs, schedule preventative maintenance, and perform quality assurance (QA) on components and products. However, transferring that data to the cloud or centralized data center increases latency and creates security risks.

Manufacturers can use edge computing to analyze OT data in real time, gaining faster insights and catching potential issues before they affect product quality or delivery schedules. Edge computing also allows industrial automation and monitoring processes to continue uninterrupted even if the site loses Internet access due to an ISP outage, natural disaster, or other adverse event in the region. Edge resilience can be further improved by deploying an out-of-band (OOB) management solution like Nodegrid that enables control plane/data plane isolation (also known as isolated management infrastructure), as this will give remote teams a lifeline to access and recover OT systems.

Retail operations

In the age of one-click online shopping, the retail industry has been innovating with technology to enhance the in-store experience, improve employee productivity, and keep operating costs down. Retailers have a brief window of time to meet a customer’s needs before they look elsewhere, and edge computing’s ability to leverage data in real time is helping address that challenge. For example, some stores place QR codes on shelves that customers can scan if a product is out of stock, alerting a nearby representative to provide immediate assistance.

Another retail application of edge computing is enhanced inventory management. An edge computing solution can make ordering recommendations based on continuous analysis of purchasing patterns over time combined with real-time updates as products are purchased or returned. Retail companies, like financial institutions, can also use edge AI/ML solutions to analyze surveillance data and aid in loss prevention.
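
As a rough illustration of that ordering logic (not any vendor’s actual algorithm; the lead time and safety factor are invented for the example):

```python
def reorder_recommendation(daily_sales, on_hand, lead_time_days=3, safety=1.5):
    """Sketch of an edge inventory heuristic: recommend a reorder quantity
    from recent sales velocity, current stock, and supplier lead time.
    The parameters and safety factor are hypothetical."""
    avg = sum(daily_sales) / len(daily_sales)      # recent sales velocity
    needed = avg * lead_time_days * safety          # cover lead time plus a buffer
    return max(0, round(needed - on_hand))          # never recommend negative units

# 12 units/day on average, 20 on the shelf, 3-day supplier lead time:
reorder_recommendation([10, 12, 14], on_hand=20)
```

Because this runs at the edge, the `daily_sales` figures can include purchases and returns from minutes ago, not just last night’s batch export.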

Healthcare

The healthcare industry processes massive amounts of data generated by medical equipment like insulin pumps, pacemakers, and imaging devices. Patient health data can’t be transferred over the open Internet, so getting it to the cloud or data center for analysis requires funneling it through a central firewall via MPLS (for hospitals, clinics, and other physical sites), overlay networks, or SD-WAN (for wearable sensors and mobile EMS devices). This increases the number of network hops and creates a traffic bottleneck that prevents real-time patient monitoring and delays responses to potential health crises.

Edge computing for healthcare allows organizations to process medical data on the same local network, or even the same onboard chip, as the sensors and devices that generate most of the data. This significantly reduces latency and mitigates many of the security and compliance challenges involved in transmitting regulated health data offsite. For example, an edge-native application running on an implanted heart-rate monitor can operate without a network connection much of the time, providing the patient with real-time alerts so they can modify their behavior as needed to stay healthy. If the app detects any concerning activity, it can use multiple cellular and AT&T FirstNet connections to alert the cardiologist without exposing any private patient data.

Oil, gas, & mining

Oil, gas, and other mining operations use IoT sensors to monitor flow rates, detect leaks, and gather other critical information about equipment deployed in remote sites, drilling rigs, and offshore platforms all over the world. Drilling rigs are often located in extremely remote or even human-inaccessible locations, so ensuring reliable communications with monitoring applications in the cloud or data center can be difficult. Additionally, when networks or systems fail, it can be time-consuming and expensive – not to mention risky – to deploy IT teams to fix the issue on-site.

The energy and mining industries can use edge computing to analyze data in real time even in challenging deployment environments. For example, companies can deploy monitoring software on cellular-enabled edge computing devices to gain immediate insights into equipment status, well logs, borehole logs, and more. This software can help establish more effective maintenance schedules, uncover production inefficiencies, and identify potential safety issues or equipment failures before they cause larger problems. Edge solutions with OOB management also allow IT teams to fix many issues remotely, using alternative cellular interfaces to provide continuous access for troubleshooting and recovery.

AI & machine learning

Artificial intelligence (AI) and machine learning (ML) have broad applications across many industries and use cases, but they’re all powered by data. That data often originates at the network’s edges from IoT devices, equipment sensors, surveillance systems, and customer purchases. Securely transmitting, storing, and preparing edge data for AI/ML ingestion in the cloud or centralized data center is time-consuming, logistically challenging, and expensive. Decentralizing AI/ML’s computational resources and deploying them at the edge can significantly reduce these hurdles and unlock real-time capabilities.

For example, instead of deploying AI on a whole rack of GPUs (graphics processing units) in a central data center to analyze equipment monitoring data for all locations, a manufacturing company could use small edge computing devices to provide AI-powered analysis for each individual site. This would reduce bandwidth costs and network latency, enabling near-instant insights and providing an accelerated return on the investment into artificial intelligence technology.

AIOps can also be improved by edge computing. AIOps solutions analyze monitoring data from IT devices, network infrastructure, and security solutions and provide automated incident management, root-cause analysis, and simple issue remediation. Deploying AIOps on edge computing devices enables real-time issue detection and response. It also ensures continuous operation even if an ISP outage or network failure cuts off access to the cloud or central data center, helping to reduce business disruptions at vital branches and other remote sites.
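
As a toy illustration of this kind of local detection (real AIOps platforms use far richer models; the z-score threshold here is arbitrary):

```python
import statistics

def detect_anomalies(samples, threshold=2.0):
    """Sketch of local, low-latency metric analysis: flag samples more than
    `threshold` standard deviations from the mean. Purely illustrative --
    the threshold and metric are hypothetical."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples) or 1.0  # avoid divide-by-zero on flat data
    return [s for s in samples if abs(s - mean) / stdev > threshold]

# Link latencies collected on-site; the spike is caught without any cloud round trip.
latencies_ms = [20, 22, 19, 21, 20, 23, 250]
detect_anomalies(latencies_ms)
```

Running this at the edge means the spike is flagged even if the WAN link that caused it also cut off access to the central monitoring stack.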

Getting started with edge computing

The edge computing market has focused primarily on single-use-case solutions designed to solve specific business problems, forcing businesses to deploy many individual applications across the network. This piecemeal approach to edge computing increases management complexity and risk while decreasing operational efficiency.

The recommended approach is to use a centralized edge management and orchestration (EMO) platform to monitor and control edge computing operations. The EMO should be vendor-agnostic and interoperate with all the edge computing devices and edge-native applications in use across the organization. The easiest way to ensure interoperability is to use vendor-neutral edge computing platforms to run edge-native apps and AI/ML workflows.

For example, the Nodegrid platform from ZPE Systems provides the perfect vendor-neutral foundation for edge operations. Nodegrid integrated branch services routers, such as the Gate SR with its built-in Nvidia Jetson Nano, use the open, Linux-based Nodegrid OS, which can host Docker containers and edge-native applications for third-party AI, ML, data analytics, and more. These devices use out-of-band management to provide 24/7 remote visibility, management, and troubleshooting access to edge deployments, even in challenging environments like offshore oil rigs. Nodegrid’s cloud-based or on-premises software provides a single pane of glass to orchestrate operations at all edge computing sites.

Streamline your edge computing deployment with Nodegrid

The vendor-neutral Nodegrid platform can simplify all applications of edge computing with easy interoperability, reduced hardware overhead, and centralized edge management and orchestration. Schedule a Nodegrid demo to learn more.
Schedule a Demo

NIS2 Compliance & Requirements

NIS2 Compliance

NIS2 – an update of the EU’s Network and Information Security Directive – seeks to enhance the cybersecurity level and resilience of EU member states. Compared to the original NIS, it significantly increases risk management, corporate accountability, business continuity, and reporting requirements. NIS2 became law in all EU member states on 17 October 2024, so affected organizations must take action to avoid fines and other penalties. This guide describes the 10 minimum cybersecurity requirements mandated by NIS2 and provides tips to simplify NIS2 compliance.

Citation: Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, amending Regulation (EU) No 910/2014 and Directive (EU) 2018/1972, and repealing Directive (EU) 2016/1148 (NIS 2 Directive)

Who does NIS2 apply to, and what are the consequences for noncompliance?

NIS2 applies to organizations providing services deemed “essential” or “important” to the European economy and society. Essential Entities (EE) generally have at least 250 employees, annual turnover of €50 million, or balance sheets of €43 million. Essential sectors include:

  • Energy
  • Transport
  • Banking and financial market infrastructures
  • Health
  • Drinking water and wastewater
  • Digital infrastructure and ICT service management
  • Public administration
  • Space

Important Entities (IE) generally have at least 50 employees, annual turnover of €10 million, or balance sheets of €10 million. Important sectors include:

  • Postal services
  • Waste management
  • Chemicals
  • Research
  • Food
  • Manufacturing (e.g., medical devices and other equipment)
  • Digital providers (e.g., social networks, online marketplaces)

The NIS2 Directive outlines three types of penalties for noncompliance: non-monetary remedies, administrative fines, and criminal sanctions. Non-monetary remedies include things like compliance orders, binding instructions, security audit orders, and customer threat notification orders. Financial penalties for Essential Entities max out at €10 million or 2% of the global annual revenue, whichever is higher; for Important Entities, the maximum is €7 million or 1.4% of the global annual revenue, whichever is higher. NIS2 also directs member states to hold top management personally responsible for gross negligence in a cybersecurity incident, which could involve:

  • Ordering organizations to notify the public of compliance violations
  • Publicly identifying the people and/or entities responsible for the violation
  • Temporarily banning an individual from holding management positions (EEs only)

Even the nonfinancial penalties of NIS2 noncompliance can affect revenue by causing reputational damage and potential lost business, so it’s crucial for IEs and EEs to be prepared now that the directive is in effect.

10 Minimum requirements for NIS2 compliance

The NIS2 directive requires essential and important entities to take “appropriate and proportional” measures to manage security and resilience risks and minimize the impact of incidents. It mandates an “all-hazards approach,” which means creating a comprehensive business continuity framework that accounts for any potential disruptions, whether they be natural disasters, ransomware attacks, or anything in between. Organizations must implement “at least” the following requirements as a baseline for NIS2 compliance:

10 NIS2 Compliance Requirements

  1. Maintain comprehensive risk analysis and information system security policies
     Tip: Keep policies in a centralized repository with version control to track changes and prevent unauthorized modifications.

  2. Implement robust security incident handling measures
     Tip: Use AIOps to accelerate incident creation, triage, and root-cause analysis (RCA).

  3. Establish business continuity and crisis management strategies
     Tip: Use out-of-band (OOB) management and isolated recovery environments (IREs) to minimize downtime and improve resilience.

  4. Mitigate supply chain security risks
     Tip: Implement User and Entity Behavior Analytics (UEBA) to monitor third parties on the network.

  5. Ensure network and IT system security throughout acquisition, development, and maintenance
     Tip: Use automated provisioning, vulnerability scanning, and patch management to reduce risks.

  6. Perform regular cybersecurity and risk-management assessments
     Tip: Use artificial intelligence technology like large language models (LLMs) to streamline assessments.

  7. Enforce cybersecurity training requirements for all personnel
     Tip: Simulate phishing emails and other social engineering attacks to prepare users for the real thing.

  8. Implement cryptography and, when necessary, encryption
     Tip: Ensure all physical systems are protected by strong hardware roots of trust like TPM 2.0.

  9. Establish secure user access control and asset management practices
     Tip: Use zero-trust policies and controls to restrict privileges and limit lateral movement.

  10. Use multi-factor authentication (MFA) and encrypted communications
     Tip: Extend MFA to management interfaces and recovery systems to prevent compromise.

1. Risk analysis and information system security policies

Organizations must create and update comprehensive policies covering cybersecurity risk analysis and overall IT system security practices. These policies should cover all the topics listed below and include specific consequences and/or corrective measures for failing to follow the outlined processes.

Tip: Keeping all company policies in a centralized, version-controlled repository will help track updates over time and prevent anyone from making unauthorized changes.
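The tamper-detection half of this tip can be sketched with a simple hash manifest. This is an illustrative sketch (the file names and contents are made up), not a substitute for a real version-control system:

```python
import hashlib

def sha256(text: str) -> str:
    """Return the SHA-256 hex digest of a policy document's contents."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def detect_tampering(manifest: dict[str, str], current: dict[str, str]) -> list[str]:
    """Compare stored hashes against current contents; return modified files."""
    return [name for name, text in current.items()
            if manifest.get(name) != sha256(text)]

# Hypothetical approved snapshot of the policy repository
approved = {"risk-policy.md": "Assess risks quarterly."}
manifest = {name: sha256(text) for name, text in approved.items()}

# An unauthorized edit changes the hash and is flagged
tampered = {"risk-policy.md": "Assess risks yearly."}
print(detect_tampering(manifest, tampered))  # ['risk-policy.md']
```

A version-control system like Git performs this hashing automatically for every commit; the point of the sketch is that any change to an approved document is detectable by comparing digests.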

2. Security incident handling

Entities must implement incident-handling tools and practices to help accelerate resolution and minimize the impact on end users and other essential or important services. This includes mechanisms for identifying problems, triaging according to severity, remediating issues, and notifying relevant parties. NIS2 outlines a specific timeline for reporting significant security incidents to the relevant authorities:

  • Within 24 hours – Entities must provide an early warning indicating whether they suspect an unlawful or malicious attack or whether it could have a cross-border impact.
  • Within 72 hours – Entities must update the relevant authorities with an assessment of the attack, including its severity, impact, and indicators of compromise.
  • Within one month – Organizations must submit a final report including a detailed description of the incident, the most likely root cause or type of threat, the mitigation measures taken, and, if applicable, the cross-border impact. If the incident is still ongoing, entities must submit an additional report within one month of resolution.

Tip: AIOps (artificial intelligence for IT operations) analyzes monitoring logs using machine learning to identify threat indicators and other potential issues that less sophisticated tools might miss. It can also generate, triage, and assign incidents, perform root-cause analysis (RCA) and other automated troubleshooting, and take other actions to streamline security incident handling.
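The reporting timeline above can be expressed as a small deadline calculator. The 24-hour, 72-hour, and one-month windows come from the directive; the helper function itself is illustrative, and the one-month window is approximated here as 30 days:

```python
from datetime import datetime, timedelta

def nis2_report_deadlines(incident_detected: datetime) -> dict[str, datetime]:
    """Compute NIS2 reporting deadlines from the time an incident is detected.
    Note: the directive's 'one month' is approximated as 30 days."""
    return {
        "early_warning": incident_detected + timedelta(hours=24),
        "incident_assessment": incident_detected + timedelta(hours=72),
        "final_report": incident_detected + timedelta(days=30),
    }

deadlines = nis2_report_deadlines(datetime(2025, 3, 1, 9, 0))
print(deadlines["early_warning"])  # 2025-03-02 09:00:00
```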

3. Business continuity and crisis management

Essential and important entities must establish comprehensive business continuity and crisis management strategies to minimize service disruptions. These strategies should include redundancies and backups as part of a resilience system that can keep operations running, if in a degraded state, during major cybersecurity incidents. It’s also crucial to maintain continuous access to management, troubleshooting, and recovery infrastructure during an attack.

Tip: Serial consoles with out-of-band (OOB) management provide an alternative path to systems and infrastructure that doesn’t rely on the production network, ensuring 24/7 management and recovery access during outages and other major incidents. OOB serial consoles can also be used to create an isolated recovery environment (IRE) where teams can safely restore and rebuild critical services without risking ransomware reinfection.

4. Supply chain security

Organizations must implement supply chain security risk management measures to limit the risk of working with third-party suppliers. These include performing regular risk assessments based on the supplier’s security and compliance history, applying zero-trust access control policies to third-party accounts, and keeping third-party software and dependencies up-to-date.

Tip: User and entity behavior analytics (UEBA) software uses machine learning to analyze account activity on the network and detect unusual behavior that could indicate compromise. It establishes baselines for normal behavior based on real user activity, reducing false positives and increasing detection accuracy even with vendors and contractors who operate outside of normal business hours and locations.
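As a toy illustration of the baselining idea (real UEBA products use far richer behavioral models), a login can be flagged when its hour-of-day deviates sharply from a user's history:

```python
from statistics import mean, stdev

def is_anomalous(history_hours: list[int], login_hour: int,
                 threshold: float = 3.0) -> bool:
    """Flag a login hour more than `threshold` standard deviations
    from the user's historical baseline."""
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return login_hour != mu
    return abs(login_hour - mu) / sigma > threshold

# A contractor who normally logs in mid-morning
baseline = [9, 10, 9, 11, 10, 9, 10, 11]
print(is_anomalous(baseline, 10))  # False
print(is_anomalous(baseline, 3))   # True (3 a.m. login is far off baseline)
```

Because the baseline is learned from each account's own activity, a vendor who routinely works nights would not trip this check, which is how UEBA keeps false positives low.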

5. Secure network and IT system acquisition, development, and maintenance

Entities must ensure the security of network and IT systems during acquisition, development, and maintenance. This involves, among other things, inspecting hardware for signs of tampering before deployment, changing default settings and passwords on initial startup, performing code reviews on in-house software to check for vulnerabilities, and applying security patches as soon as vulnerabilities are discovered.

Tip: Automation can streamline many of these practices while reducing the risk of human error. For example, zero-touch provisioning automatically configures devices as soon as they come online, reducing the risk of attackers compromising a system-default admin account. Automated vulnerability scanning tools can help detect security flaws in software and systems; automated patch management ensures third-party updates are applied as soon as possible.
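The vulnerability-matching step an automated patch tool performs might look like the following sketch; the package names, versions, and advisory data are made up for illustration:

```python
def find_unpatched(inventory: dict[str, str], advisories: dict[str, str]) -> list[str]:
    """Return packages whose installed version is below the minimum fixed version.
    Versions are compared as tuples of integers, e.g. '1.2.10' > '1.2.9'."""
    def ver(v: str) -> tuple[int, ...]:
        return tuple(int(part) for part in v.split("."))
    return [pkg for pkg, installed in inventory.items()
            if pkg in advisories and ver(installed) < ver(advisories[pkg])]

# Hypothetical device inventory vs. vendor security advisories
inventory = {"openssl": "3.0.7", "busybox": "1.36.1"}
advisories = {"openssl": "3.0.8"}  # illustrative fixed version for a known CVE
print(find_unpatched(inventory, advisories))  # ['openssl']
```

A real tool would pull the inventory from device discovery and the advisories from vulnerability feeds, then schedule the patch rollout automatically; the comparison logic at the core is the same.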

6. Cybersecurity and risk-management assessments

Organizations must have a way to objectively assess their cybersecurity and risk-management practices and remediate any identified weaknesses. These assessments involve identifying all the physical and logical assets used by the company, scanning for potential threats, determining the severity or potential impact of any identified threats, taking the necessary mitigation steps, and thoroughly documenting everything to streamline any reporting requirements.

Tip: An AI-powered cybersecurity risk assessment tool uses large language models (LLMs) and other machine learning technology to automate assessments with greater accuracy than older solutions. These tools are often better at identifying novel threats than human assessors or signature-based detection methods, and they typically provide automated reporting to aid in NIS2 compliance.

7. Cybersecurity training

Essential and important entities must enforce cybersecurity training and basic security hygiene policies for all staff. This training should include information about the most common social engineering attacks, such as email phishing or vishing (voice phishing), compliant data handling practices, and how to securely create and manage account credentials.

Tip: Some cybersecurity training programs include attack simulations – such as fake phishing emails – to test trainees’ knowledge and give them practice identifying social engineering attempts. These programs help companies identify users who need additional education and periodically reinforce what they have learned.

8. Cryptography and encryption

NIS2 requires organizations to use cryptography to protect systems and data from tampering. This includes encrypting sensitive data and communications when necessary.

Tip: Roots of Trust (RoTs) are hardware security mechanisms providing cryptographic functions, key management, and other important security features. RoTs are inherently trusted, so it’s important to choose up-to-date solutions offering strong cryptographic algorithms, such as Trusted Platform Module (TPM) 2.0.

9. User access control and asset management

Entities must establish policies and procedures for employees accessing sensitive data, including least-privilege access control and secure asset management. This also includes mechanisms for revoking access and locking down physical assets when users violate safe data handling policies or when malicious outsiders compromise privileged credentials.

Tip: Zero trust security uses network micro-segmentation and highly specific security policies to protect sensitive resources. MFA and continuous authentication controls seek to re-establish trust each time a user requests access to a new resource, making it easier to catch malicious actors and preventing lateral movement on the network.
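The least-privilege check at the heart of zero trust can be sketched as an explicit allow-list evaluated on every access request (the roles and resources here are hypothetical):

```python
# Explicit allow-list: anything not granted is denied (default-deny)
POLICY = {
    ("auditor", "financial-reports"): {"read"},
    ("admin", "serial-console"): {"read", "write"},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Zero-trust style check: deny unless the (role, resource) pair
    explicitly grants the requested action."""
    return action in POLICY.get((role, resource), set())

print(is_allowed("auditor", "financial-reports", "read"))  # True
print(is_allowed("auditor", "serial-console", "read"))     # False (limits lateral movement)
```

The design choice worth noting is the default-deny posture: trust is never inherited from being "on the network," so a compromised account can only reach the resources its role was explicitly granted.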

10. Multi-factor authentication (MFA) and encrypted communications

The final minimum requirement for NIS2 compliance is using multi-factor authentication (MFA) and continuous authentication solutions to verify identities, as described above. Additionally, entities must be able to encrypt voice, video, text, and internal emergency communications when needed.

Tip: MFA, continuous authentication, and other zero-trust controls should also extend to management interfaces, resilience systems, and isolated recovery environments to prevent malicious actors from compromising these critical resources. The best practice is to isolate management interfaces and resilience systems using OOB serial consoles to prevent lateral movement from the production network.

How ZPE streamlines NIS2 compliance

EU-based entities classified as essential or important have limited time to implement all the security policies, practices, and tools required for NIS2 compliance. Using vendor-neutral, multi-purpose hardware platforms to deploy new security controls can reduce the hassle and expense, making it easier to meet the October 2024 deadline. For example, a Nodegrid serial console from ZPE Systems combines out-of-band management, routing, switching, cellular failover, SSL VPN and secure tunnel capabilities, and environmental monitoring in a single device. The vendor-neutral Nodegrid OS supports Guest OS and containers for any third-party software, including next-generation firewalls (NGFWs), Secure Access Service Edge (SASE), automation tools like Puppet and Ansible, and UEBA. Nodegrid devices have strong hardware Roots of Trust with TPM 2.0, selectable cryptographic protocols and cipher suite levels, and Configuration Checksum™. Plus, Nodegrid’s Gen 3 OOB creates the perfect foundation for infrastructure isolation, resilience systems, and isolated recovery environments.

Looking to Upgrade to a Nodegrid serial console?

Looking to replace your discontinued, EOL serial console with a Gen 3 out-of-band solution? Nodegrid can expand your capabilities and manage your existing solutions from other vendors. Click here to learn more!

DORA Compliance & Requirements


The European Union’s Digital Operational Resilience Act (DORA) creates a regulatory framework for information and communication technology (ICT) risk management and network resilience. It entered into EU law on 16 January 2023 and took effect on 17 January 2025, applying to any firm operating within the European financial sector. This guide outlines the technical requirements for DORA compliance and provides tips and best practices to streamline implementation.

Citation: Digital Operational Resilience Act (DORA)

Which organizations does DORA affect, and what are the consequences of non-compliance?

DORA applies to financial entities operating in the European Union, including:

  • Financial services
  • Payment institutions
  • Crypto-asset service providers
  • Crowdfunding service providers
  • Investment firms
  • Insurance companies
  • Data analytics and audit services
  • Fintech companies
  • Trading venues
  • Credit institutions
  • Credit rating agencies

Crucially, DORA also applies to third-party digital service providers that work with financial institutions, such as colocation data centers and cloud service providers.

With DORA now in effect, each EU state designates “competent authorities” to enforce compliance. Each state determines its own penalties, but potential consequences for non-compliance include fines, remediation orders, and withdrawal of DORA authorization.

ICT service providers (such as cloud vendors) labeled “critical” by the European Commission face additional oversight and non-compliance penalties, including fines of up to 1% of the provider’s average daily worldwide turnover in the preceding business year. Overseers can levy these fines daily for up to six months until compliance requirements are met. These steep penalties make it essential for service providers to ensure their systems and processes are DORA-compliant.

What are DORA’s technical requirements?

  • ICT risk management – Financial institutions must develop a comprehensive ICT risk management framework containing strategies and tools for business resilience, recovery, and communication. Best practices: control/data plane separation; isolated recovery environments.

  • ICT third-party risk management – Financial organizations in the EU must manage the risk of working with third-party vendors to prevent supply chain attacks. Best practices: automated patch management; AIOps security monitoring.

  • Digital operational resilience testing – Financial entities must establish a resilience testing program to validate their security defenses, backups, redundancies, and recovery systems every year. Best practices: control/data plane separation; alternative networking, compute, and storage; automated provisioning and recovery tools.

  • ICT-related incident management – Financial firms must submit a root cause report within one month of a major incident. Best practices: AIOps anomaly detection, incident management, and root-cause analysis (RCA).

  • Information sharing – DORA encourages financial institutions to share cyber threat information within the community to help raise awareness and mitigate risks. Best practices: using logs and analyses from technology solutions like UEBA and AIOps.

  • Oversight of critical third-party providers – Digital service providers deemed “critical” must follow the same compliance rules as the financial institutions they work with. Best practices: all of the above.

ICT risk management

DORA requires financial institutions to develop a comprehensive ICT risk management framework containing strategies and tools for business resilience, recovery, and communication. In addition to written policies and documented procedures, financial entities must implement technology such as security hardware and software, redundancies and backups, and resilience systems. Best practices for DORA-compliant risk management technologies include control/data plane separation and isolated recovery environments, both described in more detail below.

ICT third-party risk management

Financial organizations in the EU must manage the risk of working with third-party vendors to prevent supply chain attacks such as the MOVEit breach. ICT third-party risk management (TPRM) involves performing vendor due diligence to validate compliance with security standards and ensuring contractual provisions are in place to hold vendors accountable for security failures. On the technical side, financial entities should implement security policies and controls to limit third-party access and use monitoring tools that detect vulnerabilities, apply patches, and identify suspicious account behavior. Best practices for DORA-compliant TPRM technologies include automated patch management and AIOps security monitoring.

Digital operational resilience testing

DORA requires financial entities to establish a resilience testing program to validate their security defenses, backups, redundancies, and recovery systems once per year. Examples of resilience tests include vulnerability scans, network security assessments, open-source software analyses, physical security reviews, penetration testing, and source code reviews. Financial entities deemed “critical,” as well as their critical ICT providers, must also undergo threat-led penetration testing (TLPT) every three years. DORA stipulates that these tests be performed by independent parties, though they can be internal so long as the organization takes steps to eliminate any conflict of interest. Technical best practices include control/data plane separation; alternative networking, compute, and storage; and automated provisioning and recovery tools.

ICT incident reporting

DORA streamlines and consolidates the incident reporting requirements that were previously fragmented across EU states. The key takeaway from this section is a requirement for financial firms to submit a root cause report within one month of a major incident. Technical best practices for meeting this requirement involve using AIOps for anomaly detection, incident management, and root-cause analysis (RCA).

Information sharing

This is less of a requirement than a suggestion, but DORA both allows and encourages financial institutions to share cyber threat information within the community to help raise awareness and mitigate risks. Best practices involve using (anonymized) logs from some of the technologies mentioned above, such as UEBA and AIOps.

Oversight of critical third-party providers

DORA requires “critical” digital service providers to follow the same compliance rules as the financial institutions they work with. Regulators may deem a provider critical if a large number of financial entities rely on it for business continuity or if it is difficult to replace when a failure occurs. Any cloud vendors, colocation data centers, or other digital service providers working in the EU’s financial sector should prepare for DORA by implementing all of the technical best practices described below.

Best practices for DORA compliance

Some of the technologies that can help simplify DORA compliance for financial institutions and critical service providers include:

Control/data plane separation

Separating the data plane (i.e., production network traffic) from the control plane (i.e., management and troubleshooting traffic) simplifies DORA compliance in two key ways:

  1. It isolates the management interfaces used to control ICT systems, making them inaccessible to malicious actors who breach the production network and aiding in resilience.
  2. It prevents resource-intensive automation, security monitoring, and resilience testing workflows from affecting the speed or availability of the production network.

The best practice for control and data plane separation is to use Gen 3 out-of-band (OOB) serial consoles, such as the Nodegrid product line from ZPE Systems. Gen 3 OOB provides a dedicated network for management traffic that doesn’t depend on production network resources, ensuring remote teams always have access, even during outages or ransomware attacks. It’s also vendor-neutral, allowing administrators to deploy third-party monitoring, automation, security, troubleshooting, and testing tools on the isolated control plane. Gen 3 OOB helps financial institutions and ICT service providers meet resilience and testing requirements cost-effectively.

Isolated recovery environments

Ransomware continues to be one of the biggest threats to resilience, with ransomware cases increasing by 73% in 2023 despite heightened awareness and additional cybersecurity spending. Preventing an attack may be nearly impossible, and full recovery often takes weeks due to the high rate of reinfection. The best way to reduce recovery time and meet DORA resilience requirements is with an isolated recovery environment (IRE) that’s fully separated from the production infrastructure.

A diagram showing the components of an isolated recovery environment.

An IRE contains systems dedicated to recovering from ransomware and other breaches, where teams can rebuild and restore applications, data, and other resources before deploying them back to the production network. It uses designated network infrastructure that’s completely separate from the production environment to mitigate the risk of malware reinfection. It also contains technologies like Retention Lock, role-based access control, and out-of-band management so teams can quickly and safely recover critical services and reduce DORA penalties.

Automated patch management

Cybercriminals often breach networks by exploiting known vulnerabilities in outdated software and firmware, as happened with 2023’s Ragnar Locker attacks. For large financial institutions and critical ICT providers, manually tracking and installing patches for all the third-party hardware and software used across the organization is too difficult and time-consuming, leaving potential vulnerabilities exposed for years. The best practice for meeting DORA’s third-party risk management requirement is to use an automated, vendor-agnostic patch management solution.

Automatic patch management tools discover all the software and devices used by the organization, monitor for known exploited vulnerabilities, and notify teams when vendors release updates. They centralize patch management for the entire network to simplify TPRM and aid in DORA compliance.

AIOps

AIOps uses artificial intelligence technology to automate and streamline IT operations. AIOps collects and analyzes all the data generated by IT infrastructure, applications, monitoring tools, and security solutions to help identify significant events and make “intelligent” recommendations. AIOps helps with DORA compliance by providing:

  • Anomaly detection – Artificial intelligence analyzes logs and detects outlier data points that could indicate an in-progress data breach or other problematic event.
  • Incident management – AIOps automatically generates, triages, and assigns service desk tickets to the appropriate team for resolution, significantly accelerating incident response.
  • Root-cause analysis – AIOps combs through all the relevant logs to determine the most likely cause of adverse events, making it easier to meet DORA’s root-cause reporting requirements.
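The anomaly-detection step can be illustrated with a simple rolling-window outlier check on a log-derived metric; the data here is invented, and production AIOps models are far more sophisticated:

```python
from collections import deque

def rolling_outliers(series: list[float], window: int = 5,
                     factor: float = 2.0) -> list[int]:
    """Return indices where a value exceeds `factor` times the mean
    of the preceding `window` values."""
    recent: deque = deque(maxlen=window)
    outliers = []
    for i, value in enumerate(series):
        if len(recent) == window and value > factor * (sum(recent) / window):
            outliers.append(i)
        recent.append(value)
    return outliers

# Failed-login counts per minute; the spike at index 7 suggests a brute-force attempt
counts = [2, 3, 2, 4, 3, 2, 3, 40, 3]
print(rolling_outliers(counts))  # [7]
```

An AIOps pipeline would feed detections like this into automated incident creation and triage, which is what accelerates the one-month root cause reporting DORA requires.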

How ZPE streamlines DORA compliance

The Nodegrid out-of-band management platform from ZPE Systems helps financial institutions and critical service providers meet DORA resilience requirements without increasing network complexity. Vendor-neutral Nodegrid serial consoles and integrated edge services routers deliver control plane isolation, centralized infrastructure patch management, and Guest OS/container hosting for third-party security, recovery, and AIOps tools. The Nodegrid platform provides a secure foundation for an isolated recovery environment that contains all the technology needed to get services back online and stay DORA compliant.

Download our 3 Steps to Ransomware Recovery whitepaper to learn how to improve network resilience with Nodegrid.
Download the Whitepaper

See how Nodegrid helped one of the EU’s largest banks meet modern security and compliance requirements.
Read the case study

 

Looking to replace your discontinued, EOL serial console with a Gen 3 out-of-band solution?

Nodegrid can expand your capabilities and manage your existing solutions from other vendors.

Click here to learn more!

SD-WAN Management Guide

SD-WAN Management Platform

SD-WAN applies software-defined networking (SDN) principles to wide area networks (WANs), which means it decouples networking logic from the underlying WAN hardware. SD-WAN management involves orchestrating and optimizing software-defined WAN workflows across the entire architecture, ideally from a single, centralized platform. This SD-WAN management guide explains how this technology works, the potential benefits of using it, and the best practices to help you get the most out of your SD-WAN deployment.

How does SD-WAN management work?

A typical WAN architecture uses a variety of links, including MPLS, wireless, broadband, and VPNs, to connect branches and other remote locations to enterprise applications and resources. SD-WAN is a virtualized service that overlays this physical architecture, giving teams a unified software interface from which to manage network traffic and workflows across the enterprise. SD-WAN decouples network control functions from the gateways and routers installed at remote sites, so administrators don’t have to manage each device individually. It also reduces reliance on manual CLI rules and commands, which are time-consuming and prone to human error, allowing teams to deploy policies across an entire network at once.

SD-WAN can also use multiple connection types (including 5G and LTE cellular, MPLS, and fiber) interchangeably, switching between them as needed to ensure optimal performance. Plus, SD-WAN management enables organizations to use virtualized and cloud-based security technologies (such as SASE) to secure remote traffic to SaaS, web, and cloud resources. This allows organizations to reduce traffic on expensive MPLS links by routing cloud-destined traffic over less costly cellular and public internet links.
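The link-selection behavior described above can be sketched as a per-traffic-class choice among available links; the link names, costs, and latencies are illustrative:

```python
# Available WAN links with illustrative per-GB cost and latency figures
LINKS = {
    "mpls":     {"cost": 10.0, "latency_ms": 20, "up": True},
    "fiber":    {"cost": 1.0,  "latency_ms": 30, "up": True},
    "cellular": {"cost": 3.0,  "latency_ms": 60, "up": True},
}

def select_link(traffic_class: str) -> str:
    """Steer enterprise traffic to the lowest-latency link
    and everything else (cloud/internet) to the cheapest link."""
    candidates = {name: link for name, link in LINKS.items() if link["up"]}
    key = "latency_ms" if traffic_class == "enterprise" else "cost"
    return min(candidates, key=lambda name: candidates[name][key])

print(select_link("enterprise"))  # mpls
print(select_link("cloud"))       # fiber

# If the fiber link fails, cloud traffic automatically moves to cellular
LINKS["fiber"]["up"] = False
print(select_link("cloud"))       # cellular
```

Real SD-WAN orchestrators evaluate live link telemetry and application-aware policies rather than static tables, but the core idea is the same: routing decisions follow policy and link state, not fixed hardware configuration.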

The benefits of SD-WAN management

  • Branch bandwidth cost reduction – SD-WAN reduces bandwidth costs by redirecting cloud- and internet-destined traffic across less expensive channels, reserving the MPLS link for enterprise traffic alone.

  • Branch performance optimization – SD-WAN management uses technologies like application awareness and guaranteed minimum bandwidth to automatically optimize network performance.

  • Branch automation & orchestration – SD-WAN’s software-based management enables automatic deployments, load balancing, failover, and intelligent routing with a centralized orchestrator.

  • Branch security enhancement – SD-WAN enables the use of cloud-based security solutions like SASE and Zero Trust Edge that extend enterprise security controls to branch network traffic.

Cost reduction

MPLS links provide a secure connection between branches and centralized data center resources, but the bandwidth is far more expensive than fiber or cellular. SD-WAN reduces branch bandwidth costs by using less expensive channels for traffic that’s destined for resources online and in the cloud, reserving MPLS bandwidth for enterprise traffic alone.

Improved performance

To optimize the performance of a traditional WAN, teams must create specific routing, bandwidth utilization, and load-balancing rules for each branch and appliance, and hope these policies adequately predict and resolve any potential issues. SD-WAN management uses technologies like application awareness and guaranteed minimum bandwidth to automatically optimize network performance.

Automation & orchestration

By decoupling network control functions from the underlying WAN hardware, SD-WAN enables automatic device deployments, load balancing, failover, and intelligent routing. Teams can orchestrate automated workflows across the entire network architecture from a centralized software platform, making deployments and configuration changes more efficient.

Enhanced security

Branch networks often suffer from security gaps due to the difficulty in extending enterprise security policies and controls to remote sites. Securing branch traffic usually means backhauling all traffic through the data center’s firewall, eating up expensive MPLS bandwidth and introducing latency for the rest of the enterprise. Some organizations opt to deploy security appliances at each branch site, which is costly and gives network administrators more moving parts to manage. 

SD-WAN enables the use of cloud-based security solutions like SASE and Zero Trust Edge that extend enterprise security defenses to branch network traffic without backhauling or additional hardware. SD-WAN automatically identifies traffic destined for web or cloud resources and routes it through the cloud-based security stack across less-expensive internet links, saving money and reducing management complexity while improving branch security.

How to get the most out of your SD-WAN deployment

There are a variety of SD-WAN deployment models, each of which solves a different WAN problem, so it’s important to assess your organization’s requirements and capabilities to ensure you build an architecture that meets your needs. It’s also critical to consider the scalability, adaptability, security, and resilience of your SD-WAN deployment to prevent headaches down the road. 

For example, using a vendor-neutral platform like Nodegrid to host SD-WAN lets you expand your branch networking capabilities with third-party software for automation, security, monitoring, troubleshooting, and more, all without deploying additional hardware, so you can scale and adapt to changing business requirements. Nodegrid also consolidates branch functions like routing, switching, out-of-band serial console management, SD-WAN management, and SASE network security in a single device for cost-effective branch deployments. Plus, Nodegrid enables isolated management infrastructure that’s resilient to threats and provides a safe environment for recovering from ransomware attacks and network failures.

Ready to get started on your SD-WAN deployment?

Nodegrid unifies control over mixed-vendor hardware and software solutions across the enterprise network architecture for efficient, streamlined SD-WAN management. Request a free demo to learn more.

Request a Demo