Offloading Network Policy Enforcement to Data Processing Units
Keywords: Data Processing Units, Network Function Offloading, Hardware Acceleration, Programmable Data Planes, SmartNIC Architecture

Abstract
General-purpose server CPUs in modern data centers carry a dual burden: executing application workloads while simultaneously enforcing network policies. This split responsibility introduces computational overhead, cache contention, and latency variability that degrade both application throughput and network performance. This article examines the architectural case for offloading network policy functions, including connection tracking, firewall operations, and traffic metering, to Data Processing Units (DPUs): purpose-built accelerators integrated directly into the network data path. By relocating these functions from host CPUs to dedicated silicon, organizations recover substantial compute headroom while achieving deterministic, sub-microsecond network performance. The article analyzes the bottlenecks of CPU-based network processing, the architectural design of modern DPUs, the role of open standards in enabling portable policy management, and the operational benefits across diverse deployment scenarios. Results demonstrate measurable gains in resource utilization, energy efficiency, and latency consistency for latency-sensitive workloads, establishing hardware-accelerated network processing as a foundational shift in data center architecture.
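The enforcement model being offloaded can be illustrated with a minimal sketch of a stateful match-action lookup: the first packet of a flow is classified against policy rules, and subsequent packets hit a per-flow connection-tracking cache. All names here are hypothetical; a real DPU implements this pipeline in hardware tables (e.g., P4-style match-action stages), not in host software.

```python
# Toy sketch of the stateful policy lookup a DPU performs in its data path.
# Illustrative only: real devices realize this as hardware match-action
# tables with a per-flow connection-tracking cache, not as Python objects.
from dataclasses import dataclass


@dataclass(frozen=True)
class FiveTuple:
    """Flow key: the classic 5-tuple used for connection tracking."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str


class PolicyTable:
    """First packet of a flow walks the rule list (slow path);
    later packets of the same flow hit the cached verdict (fast path)."""

    def __init__(self, rules):
        self.rules = rules          # list of (predicate, action) pairs
        self.conntrack = {}         # established-flow cache: 5-tuple -> action

    def process(self, pkt: FiveTuple) -> str:
        if pkt in self.conntrack:               # fast path: cache hit
            return self.conntrack[pkt]
        for predicate, action in self.rules:    # slow path: rule match
            if predicate(pkt):
                self.conntrack[pkt] = action
                return action
        self.conntrack[pkt] = "drop"            # default-deny policy
        return "drop"


# Hypothetical rule: allow TCP traffic to port 443.
rules = [(lambda p: p.proto == "tcp" and p.dst_port == 443, "allow")]
table = PolicyTable(rules)
pkt = FiveTuple("10.0.0.1", "10.0.0.2", 51000, 443, "tcp")
print(table.process(pkt))  # first packet: rule lookup, verdict cached
print(table.process(pkt))  # subsequent packet: served from conntrack cache
```

The CPU-overhead argument in the abstract maps onto this sketch directly: on a host CPU, both the slow-path classification and the per-packet fast-path lookup consume cycles and pollute caches, whereas a DPU holds the conntrack state and rule tables in dedicated pipeline hardware.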
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.