At Sift, we’re redefining how modern machines are built, tested, and operated. Our platform gives engineers real-time observability over high-frequency telemetry—eliminating bottlenecks and enabling faster, more reliable development.
Sift was born from our work at SpaceX on Dragon, Falcon, Starlink, and Starship—where scaling telemetry, debugging flight systems, and ensuring mission reliability demanded new infrastructure. Founded by a team from SpaceX, Google, and Palantir, Sift is built for mission-critical systems where precision and scalability are non-negotiable.
As an early engineer focused on our data infrastructure, you won’t just write code—you’ll define core architecture and help scale a real-time telemetry platform from the ground up. You’ll work on complex backend systems designed to ingest, store, and serve millions of high-frequency data points per second, enabling faster iteration cycles for some of the most advanced engineering teams in the world.
Design and build a horizontally scalable platform for ingesting millions of hardware sensor data points per second
Develop durable, efficient database solutions to support real-time reads and large-scale analytics workloads
Pioneer data architecture by integrating recent innovations in streaming and storage, including cloud-native and diskless designs
Help define engineering culture, standards, and processes
Collaborate closely with peers on architecture, design, code reviews, and scaling strategies
Bachelor's degree in Computer Science, Engineering, Physics, or another STEM discipline
7+ years of experience in backend, infrastructure, or data engineering roles
Hands-on experience with event-time-based stream processing or streaming SQL systems using tools like Apache Flink, Kafka Streams, Beam, or similar
Proficiency with relational and time-series databases like PostgreSQL, Druid, Pinot, TimescaleDB, or equivalent
Experience with large-scale distributed systems or low-latency backend services, ideally written in Go, Rust, or Python
Familiarity with DevOps and cloud infrastructure tools such as Kubernetes, Prometheus, ArgoCD, and Terraform
Strong communication skills and a collaborative approach to problem-solving
Familiarity with telemetry data from hardware systems, high-throughput ingest pipelines, or columnar storage formats like Apache Arrow and Parquet
Experience building resilient, performant systems that scale to billions of records
Curiosity about new data paradigms and eagerness to evaluate and integrate emerging tools and techniques
You’ll join a tight-knit team of world-class engineers building foundational systems for next-gen machines. Technologies you'll work with may include:
Frontend & Backend: Go, gRPC, PostgreSQL, Protobuf, React, TypeScript
Data Infrastructure: Apache Arrow, DataFusion, Flink, Kafka, Parquet, Rust
Ops & Tooling: AWS, Kubernetes, Docker, GitHub Actions, Prometheus, Argo CD, Grafana, Linux
Sift’s headquarters is in El Segundo, CA. We collaborate in person twice a week—on Mondays and Thursdays—and come together for a full week every two months. While we prefer team members to be local, we’re open to relocating candidates to LA or considering remote work from the San Francisco area for the right candidate.
Salary range: $200,000–$250,000 per year, plus equity and benefits.
U.S. Person Required: Must be a U.S. citizen, lawful permanent resident, or protected individual (such as an asylee or refugee) in compliance with ITAR (International Traffic in Arms Regulations) and EAR (Export Administration Regulations).