We are looking for an Inference Engineering Manager to lead our AI Inference team. This is a unique opportunity to build and scale the infrastructure that powers Perplexity's products and APIs, serving millions of users with state-of-the-art AI capabilities.
You will own the technical direction and execution of our inference systems while building and leading a world-class team of inference engineers. Our current stack includes Python, PyTorch, Rust, C++, and Kubernetes. You will help architect and scale the large-scale deployment of machine learning models behind Perplexity's Comet, Sonar, Search, and Deep Research products.
Use cutting-edge technology to build SOTA systems that are the fastest in the industry
High-impact work on a smaller team with significant ownership and autonomy
Opportunity to build 0-to-1 infrastructure from scratch rather than maintaining legacy systems
Work on the full spectrum: reducing cost, scaling traffic, and pushing the boundaries of inference
Direct influence on technical roadmap and team culture at a rapidly growing company
Lead and grow a high-performing team of AI inference engineers
Develop APIs for AI inference used by both internal and external customers
Architect and scale our inference infrastructure for reliability and efficiency
Benchmark and eliminate bottlenecks throughout our inference stack
Drive large sparse/MoE model inference at rack scale, including sharding strategies for massive models
Push the frontier by building inference systems that support sparse attention, disaggregated prefill/decode serving, and more
Improve the reliability and observability of our systems and lead incident response
Own technical decisions around batching, throughput, latency, and GPU utilization
Partner with ML research teams on model optimization and deployment
Recruit, mentor, and develop engineering talent
Establish team processes, engineering standards, and operational excellence
5+ years of engineering experience with 2+ years in a technical leadership or management role
Deep experience with ML systems and inference frameworks (PyTorch, TensorFlow, ONNX, TensorRT, vLLM)
Strong understanding of LLM architecture: Multi-Head Attention, Multi/Grouped-Query Attention, and common layers
Experience with inference optimizations: batching, quantization, kernel fusion, FlashAttention
Familiarity with GPU characteristics, roofline models, and performance analysis
Experience deploying reliable, distributed, real-time systems at scale
Track record of building and leading high-performing engineering teams
Experience with parallelism strategies: tensor parallelism, pipeline parallelism, expert parallelism
Strong technical communication and cross-functional collaboration skills
Experience with CUDA, Triton, or custom kernel development
Background in training infrastructure and RL workloads
Experience with Kubernetes and container orchestration at scale
Published work or contributions to inference optimization research
Perplexity offers an AI chatbot-powered research and conversational search engine that answers queries using natural language predictive text. Since its launch in 2022, the company has raised $165 million in funding, valuing it at over $1 billion.