NVIDIA Dynamo is an open-source platform for efficient, scalable inference of large language and reasoning models in distributed GPU environments. By combining advances in serving architecture, GPU resource management, and intelligent request handling, Dynamo delivers high-performance AI inference for demanding applications. Our team is tackling some of the hardest problems in distributed AI infrastructure, and we are looking for engineers excited to build the next generation of scalable AI systems. As a Principal Software Engineer on the Dynamo project, you will take on some of the most sophisticated and high-impact challenges in distributed inference, including:
Dynamo Kubernetes Serving Platform: Build the Kubernetes deployment and workload management stack that powers Dynamo inference deployments at scale. Identify bottlenecks and apply optimization techniques to fully utilize hardware capacity.
Scalability & Reliability: Develop robust, production-grade inference workload management systems that scale from a handful to thousands of GPUs, supporting a variety of LLM frameworks (e.g., TensorRT-LLM, vLLM, SGLang).
Disaggregated Serving: Architect and optimize the separation of prefill (context ingestion) and decode (token generation) phases across distinct GPU clusters to improve throughput and resource utilization. Contribute to embedding disaggregation for multi-modal models (Vision-Language models, Audio Language Models, Video Language Models).
Dynamic GPU Scheduling: Develop and refine Planner algorithms for real-time allocation and rebalancing of GPU resources based on fluctuating workloads and system bottlenecks, ensuring peak performance at scale.
Intelligent Routing: Enhance the smart routing system to efficiently direct inference requests to GPU worker replicas with relevant KV cache data, minimizing re-computation and latency for sophisticated, multi-step reasoning tasks.
Distributed KV Cache Management: Innovate in the management and transfer of large KV caches across heterogeneous memory and storage hierarchies, using the NVIDIA Inference Transfer Library (NIXL) for low-latency, cost-effective data movement.
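The disaggregated-serving split described above can be illustrated with a toy sketch: a prefill stage ingests the whole prompt in one pass and hands a KV cache to a separate decode stage, which then generates tokens one at a time against it. This is illustrative only (the "cache" is just the token context and the next-token rule is a placeholder), not Dynamo's actual implementation:

```python
def prefill(prompt_tokens):
    """Prefill phase: ingest the full prompt in a single pass and
    return a stand-in KV cache (here, simply the running context)."""
    return {"context": list(prompt_tokens)}

def decode(kv_cache, max_new_tokens):
    """Decode phase: generate tokens one at a time against the
    transferred KV cache (toy next-token rule: increment the last id)."""
    out = []
    last = kv_cache["context"][-1]
    for _ in range(max_new_tokens):
        last = last + 1
        kv_cache["context"].append(last)  # each step extends the cache
        out.append(last)
    return out

# Example: prefill on one "worker", decode on another
cache = prefill([1, 2, 3])
tokens = decode(cache, 3)  # [4, 5, 6]
```

In a real deployment the two phases run on distinct GPU pools sized independently, since prefill is compute-bound while decode is memory-bandwidth-bound, which is what makes the separation pay off in throughput and utilization.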
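A Planner of the kind described above can be approximated by simple proportional apportionment: split a fixed GPU budget across worker pools in proportion to observed queue depth, rebalancing as load shifts. The function and pool names below are hypothetical, a minimal sketch rather than Dynamo's algorithm:

```python
def plan_allocation(total_gpus, queue_depths, min_per_pool=1):
    """Split a fixed GPU budget across worker pools in proportion to
    their current queue depth, guaranteeing each pool a minimum floor."""
    pools = list(queue_depths)
    alloc = {p: min_per_pool for p in pools}
    spare = total_gpus - min_per_pool * len(pools)
    total_depth = sum(queue_depths.values()) or 1
    # largest-remainder apportionment of the spare GPUs
    shares = {p: spare * queue_depths[p] / total_depth for p in pools}
    for p in pools:
        alloc[p] += int(shares[p])
    leftover = total_gpus - sum(alloc.values())
    # hand out remaining GPUs to the pools with the largest fractional share
    for p in sorted(pools, key=lambda p: shares[p] - int(shares[p]),
                    reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

# Example: prefill queue is 3x deeper than decode, so it gets more GPUs
plan_allocation(8, {"prefill": 30, "decode": 10})  # {'prefill': 6, 'decode': 2}
```

A production planner would also weigh SLO targets, KV-cache pressure, and migration cost before moving replicas; this sketch only captures the core rebalancing idea.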
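KV-cache-aware routing can be sketched as prefix matching over chained block hashes: the router sends a request to the worker whose cached blocks cover the longest prefix of the prompt, so only the uncovered suffix needs recomputation. All names and the block size here are illustrative assumptions, not Dynamo's router API:

```python
BLOCK_SIZE = 4  # tokens per KV-cache block (illustrative)

def block_hashes(tokens, block_size=BLOCK_SIZE):
    """Hash each full block of the token prefix; hashes are chained so
    a block's hash depends on all tokens that precede it."""
    hashes, h = [], 0
    for i in range(0, len(tokens) - len(tokens) % block_size, block_size):
        h = hash((h, tuple(tokens[i:i + block_size])))
        hashes.append(h)
    return hashes

def pick_worker(request_tokens, worker_caches):
    """Choose the worker whose cached blocks cover the longest prefix
    of the request, minimizing prefill recomputation."""
    req = block_hashes(request_tokens)
    best_worker, best_overlap = None, -1
    for worker, cached in worker_caches.items():
        overlap = 0
        for h in req:
            if h not in cached:
                break
            overlap += 1
        if overlap > best_overlap:
            best_worker, best_overlap = worker, overlap
    return best_worker, best_overlap
```

In practice the router must also weigh worker load against cache overlap; always chasing the warmest cache can overload a single replica, which is part of what makes this problem interesting at scale.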
What you'll be doing:
Collaborate on the design and development of the Dynamo Kubernetes stack.
Introduce new features to the Dynamo Python SDK and Dynamo Rust Runtime Core Library.
Design, implement, and optimize distributed inference components in Rust and Python.
Contribute to the development of disaggregated serving for Dynamo-supported inference engines (vLLM, SGLang, TRT-LLM, llama.cpp, mistral.rs).
Improve intelligent routing and KV-cache management subsystems.
Contribute to open-source repositories, participate in code reviews, and assist with issue triage on GitHub.
Work closely with the community to address issues, capture feedback, and evolve the framework’s APIs and architecture.
Write clear documentation and contribute to user and developer guides.
What we need to see:
BS/MS or higher in computer engineering, computer science, or a related engineering field (or equivalent experience).
15+ years of proven experience in a related field.
Strong proficiency in systems programming (Rust and/or C++), with experience in Python for workflow and API development. Experience with Go for Kubernetes controller and operator development.
Deep understanding of distributed systems, parallel computing, and GPU architectures.
Experience with cloud-native deployment and container orchestration (Kubernetes, Docker).
Experience with large-scale inference serving, LLMs, or similar high-performance AI workloads.
Background with memory management, data transfer optimization, and multi-node orchestration.
Familiarity with open-source development workflows (GitHub, continuous integration and continuous deployment).
Excellent problem-solving and communication skills.
Ways to stand out from the crowd:
Prior contributions to open-source AI inference frameworks (e.g., vLLM, TensorRT-LLM, SGLang).
Experience with GPU resource scheduling, cache management, or high-performance networking.
Understanding of LLM-specific inference challenges, such as context window scaling and multi-model agentic workflows.
With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our engineering teams are growing fast. If you're a creative and autonomous engineer with a genuine passion for technology, we want to hear from you!
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 272,000 USD - 425,500 USD. You will also be eligible for equity and benefits.
NVIDIA is a publicly traded, multinational technology company headquartered in Santa Clara, California. NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, and ignited the era of modern AI.