At Serve Robotics, we’re reimagining how things move in cities. Our personable sidewalk robot is our vision for the future. It’s designed to take deliveries away from congested streets, make deliveries available to more people, and benefit local businesses.
The Serve fleet has been making commercial deliveries in Los Angeles, delighting merchants, customers, and pedestrians along the way. We're looking for talented individuals who will grow robotic deliveries from surprising novelty to efficient ubiquity.
We are tech industry veterans in software, hardware, and design who are pooling our skills to build the future we want to live in. We solve real-world problems by leveraging robotics, machine learning, and computer vision, among other disciplines, with a mindful eye toward the end-to-end user experience. Our team is agile, diverse, and driven. We believe that the best way to solve complicated, dynamic problems is collaboratively and respectfully.
As a Senior Data Engineer on the Machine Learning (ML) Infrastructure team, you will help us build out our petabyte-scale data platform supporting data partnerships as well as ML and autonomy engineers. Your work will directly impact a new revenue stream through the commercialization of our robot data. You will focus on building highly scalable data pipelines and improving data discoverability features. You will collaborate with ML engineers to create diverse, large-scale datasets used to train cutting-edge ML models deployed to our fleet of thousands of robots.
Architect and implement robust, scalable data pipelines to process, synchronize, and package robotics data (e.g., LiDAR, camera, IMU, proprietary maps) for third-party consumption.
Build a data processing and egress platform, ensuring the timely and accurate delivery of datasets according to strict partner SLAs.
Create data lifecycle policies to control cloud data costs. Build and maintain a universal data catalogue of all raw robot data. Create cost monitoring, attribution, and alerting systems.
Build data discoverability platform features, use ML models to generate new attributes, and maintain efficient, highly scalable search indexes.
Set up data access audit trails and strong security controls managed through IaC. Create lineage maps and expose data traceability capabilities to internal consumers.
5+ years of professional experience in software or data engineering.
Strong programming proficiency in Python and SQL
Hands-on experience building and maintaining large-scale data processing pipelines using cloud technologies
Proficiency with data warehousing and ETL/ELT concepts
Solid understanding of system design, along with data privacy and security best practices
Hands-on experience setting up IaC to orchestrate cloud resources and security policies
Experience with GCP and solid understanding of fully managed cloud infrastructure
Familiarity with robotics data such as LiDAR, multi-modal camera, and mapping data
Experience working in a fast-paced startup environment
Experience building and optimizing terabyte-scale data pipelines