At Serve Robotics, we’re reimagining how things move in cities. Our personable sidewalk robot is our vision for the future. It’s designed to take deliveries away from congested streets, make deliveries available to more people, and benefit local businesses.
The Serve fleet has been making commercial deliveries in Los Angeles, delighting merchants, customers, and pedestrians along the way. We’re looking for talented individuals who will grow robotic delivery from surprising novelty to efficient ubiquity.
We are tech industry veterans in software, hardware, and design who are pooling our skills to build the future we want to live in. We solve real-world problems by leveraging robotics, machine learning, and computer vision, among other disciplines, with a mindful eye toward the end-to-end user experience. Our team is agile, diverse, and driven. We believe the best way to solve complicated, dynamic problems is collaboratively and respectfully.
We are seeking a highly skilled ML Performance Engineer to join our robotics team. This technical role bridges the gap between ML research and real-time deployment, enabling advanced ML models to run efficiently on edge hardware such as NVIDIA Jetson platforms. You will work closely with ML researchers, embedded systems engineers, and robotics software teams to ensure that state-of-the-art models can be deployed with optimal performance on robotic platforms.
Responsibilities
Own the full lifecycle of ML model deployment on robots—from handoff by the ML team to full system integration.
Convert, optimize, and integrate trained models (e.g., PyTorch/ONNX/TensorRT) for Jetson platforms using NVIDIA tools (see the sketch after this list).
Develop and optimize CUDA kernels and pipelines for low-latency, high-throughput model inference.
Profile and benchmark existing ML workloads using tools like Nsight, nvprof, and TensorRT profiler.
Identify and remove compute and memory bottlenecks for real-time inference.
Design and implement strategies for quantization, pruning, and other model compression techniques suited for edge inference.
Ensure models are robust to the resource constraints of real-time, low-power robotic systems.
Manage memory layout, concurrency, and scheduling for optimized GPU and CPU usage on Jetson devices.
Build benchmarking pipelines for continuous performance evaluation on hardware-in-the-loop systems.
Collaborate with QA and systems teams to validate model behavior in field scenarios.
Work closely with ML researchers to influence model architectures for edge deployability and provide technical guidance on the feasibility of real-time ML models in the robotics stack.
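As a rough illustration of the conversion flow referenced above, a minimal PyTorch → ONNX → TensorRT sketch might look like the following. This is a sketch only, assuming the TensorRT 8.x Python API; the torchvision ResNet-18, input shape, FP16 flag, and file names are placeholders, and on Jetson the same engine is often built with NVIDIA's trtexec tool instead.

import torch
import torchvision
import tensorrt as trt

# 1. Export a trained PyTorch model to ONNX (ResNet-18 stands in for a real model).
model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)

# 2. Parse the ONNX file and build a serialized TensorRT engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # half precision as one example optimization
engine_bytes = builder.build_serialized_network(network, config)
if engine_bytes is None:
    raise RuntimeError("engine build failed")
with open("model.plan", "wb") as f:
    f.write(engine_bytes)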
Qualifications
Bachelor’s degree in Computer Science, Robotics, Electrical Engineering, or equivalent field.
3+ years of experience deploying ML models on embedded or edge platforms (preferably in robotics).
2+ years of experience with CUDA, TensorRT, and other NVIDIA acceleration tools.
Proficient in Python and C++, especially for performance-sensitive systems.
Experience with NVIDIA Jetson (e.g., Xavier, Orin) and edge inference tools.
Familiarity with model conversion workflows (e.g., PyTorch → ONNX → TensorRT).
What Makes You Stand Out
Master’s degree in Computer Science, Robotics, Electrical Engineering, or equivalent field.
Experience with real-time robotics systems (e.g., ROS 2, middleware, safety-critical constraints, and embedded Linux systems).
Knowledge of performance tuning under thermal, power, and memory constraints on embedded devices.
Experience with model quantization (e.g., INT8), sparsity, and latency-aware model design.
Contributions to open-source ML or CUDA projects are a plus.