Software Engineer – Hardware Dataflow

NVIDIA

Amsterdam

Description

NVIDIA is well positioned as the AI Computing Company – our accelerators are the brains that power modern deep learning software frameworks, accelerated analytics, modern data centers, and autonomous vehicles. We are looking for a Software Engineer – Hardware Dataflow to deliver end-to-end hardware/software solutions that bridge the gap between the world and our accelerators. You will build and operate real-time, distributed compute frameworks and runtimes for planet-scale inference for LLMs and advanced AI applications at ultra-low latency, optimized for heterogeneous hardware and dynamic global workloads.

In this position you will develop deterministic, low-overhead hardware abstractions for thousands of synchronously coordinated accelerators across a software-scheduled interconnection network. You will prioritize fault tolerance, real-time diagnostics, ultra-low-latency execution, and mission-critical reliability while future-proofing our software stack for next-gen silicon, innovative multi-chip topologies, and heterogeneous co-processors. Your code will run at the edge of physics – every clock cycle saved reduces latency for millions of users and extends NVIDIA's lead in the AI compute race. The ideal candidate is deeply curious about system internals, has expertise in computer architecture and hardware–software interfaces, and excels at profiling and optimizing systems for latency, throughput, and efficiency. We look for engineers who ship high-impact, production-ready code, believe that "untested code is broken code", and write empathetic, maintainable code with strong version control and modular design.

What You'll Be Doing

  • Deliver end-to-end hardware/software solutions bridging the gap between the world and our accelerators.
  • Build and operate real-time, distributed compute frameworks and runtimes to deliver planet-scale inference for LLMs and advanced AI applications at ultra-low latency, optimized for heterogeneous hardware and dynamic global workloads.
  • Develop deterministic, low-overhead hardware abstractions for thousands of synchronously coordinated accelerators across a software-scheduled interconnection network; prioritize fault tolerance, real-time diagnostics, ultra-low-latency execution, and mission-critical reliability.
  • Future-proof our software stack for next-gen silicon, innovative multi-chip topologies, emerging form factors, and heterogeneous co-processors.
  • Foster collaboration across cloud, compiler, infra, data centers, and hardware teams to align engineering efforts, enable seamless integrations, and drive progress towards shared goals.

What We Need To See

  • MSc or higher degree in CS/EE/CE/Mathematics or equivalent experience
  • Deep curiosity about system internals – from kernel-level interactions to hardware dependencies – and the ability to solve problems across abstraction layers down to the hardware details of our chips.
  • Minimum 2 years of relevant experience, preferably in computer architecture, compiler backends, algorithms, and hardware–software interfaces.
  • System-level programming (Haskell, C++, or similar) with emphasis on low-level optimizations and hardware-aware design.
  • Track record of shipping high-impact, production-ready code while collaborating effectively with cross-functional teams.
  • Experience profiling and optimizing systems for latency, throughput, and efficiency, with zero tolerance for wasted cycles or resources.
  • Commitment to automated testing and CI/CD pipelines.
  • Pragmatic technical judgment, balancing short-term velocity with long-term system health.
  • A habit of writing empathetic, maintainable code with strong version control and modular design, prioritizing readability and usability for future teammates.

Ways To Stand Out From The Crowd

  • Experience with FPGA development, VFIO drivers, or HDL languages.
  • Experience shipping complex projects in fast-paced environments while maintaining team alignment and stakeholder support.
  • Hands-on optimization of performance-critical applications using GPUs, FPGAs, or ASICs (e.g., memory management, kernel optimization).
  • Familiarity with ML frameworks (e.g., PyTorch) and compiler tooling (e.g., MLIR) for AI/ML workflow integration.
  • You initiate without derailing, value "code in prod" over "perfect slides", and own outcomes from whiteboard to deployment.

Join our team of world-class engineers and be part of the groundbreaking work we do at NVIDIA. This isn't your typical job – it's a mission to redefine AI compute. If you're the kind of engineer who reads ISCA papers for fun and thinks "I can make that faster", this is your call.

JR2013481