Laurel is on a mission to return time. As the leading AI Time platform for professional services firms, we’re transforming how organizations capture, analyze, and optimize their most valuable resource: time. Our proprietary machine learning technology automates work time capture and connects time data to business outcomes, enabling firms to increase profitability, improve client delivery, and make data-driven strategic decisions. We serve many of the world's largest accounting and law firms, including EY, Grant Thornton, and Latham & Watkins, and process over 1 billion work activities annually, data that had never been collected and aggregated before Laurel’s AI Time platform.
Our team comprises top talent in AI, product development, and engineering—innovative, humble, and forward-thinking professionals committed to redefining productivity in the knowledge economy. We're building solutions that empower workers to deliver twice the value in half the time, giving people more time to be creative and impactful. If you're passionate about transforming how people work and building a lasting company that explores the essence of time itself, we'd love to meet you.
Laurel’s infrastructure and security team is lovingly named the Time Owls, and they are out to make Laurel’s infrastructure more reliable, secure and easy to use. Our team builds and maintains tools that enable our engineers to work with AWS, Kubernetes and other services easily and safely. We contribute to all applications in the company, and act as both best practice advocates and policy enforcers.
We work closely with our AI engineers and data scientists, and this role will focus on making sure their work is fast, efficient, scalable, and secure. If you are more interested in general infrastructure than data work, please check out our Platform Engineering role.
Write Terraform modules and Go libraries other teams can use to manage their infrastructure.
Work on Python API Services (Django) to improve the serving of our AI infrastructure.
Improve Airflow DAGs and optimize MongoDB and Postgres usage.
Develop and manage Kubernetes, Docker, and compute infrastructure for all of engineering.
Improve continuous integration, continuous deployment, and other automation.
Participate in an on-call rotation.
Help teams improve their reliability, security and observability through documentation, pull requests and developing tooling.
Work with vendors, engineers and others to improve the cost efficiency of our services.
Write backend code to help teams deliver functionality when there are deadlines or lack of resources.
Attend quarterly offsites (required travel), team standups, and other company meetings.
Raise the bar for quality and advocate for our engineering best practices.
The following are our non-negotiables for candidates.
6+ years of experience in the following areas:
Strong Python skills, including building and deploying APIs that support our AI team in delivering models to end users.
Experience with Airflow, Kubernetes, AWS.
Experience with machine learning and large language models.
Development experience with Terraform.
Experience working in startup environments.
Significant experience with Linux administration.
Experience taking part in a regular engineering on-call rotation.
Experience with any CI tooling/platforms.
Experience with PostgreSQL and MongoDB.
The following are things that we are looking for in a standout candidate and would help make this role a perfect fit.
Experience with Argo
Experience with cdk8s
Experience with OpenTelemetry and Observe
Experience with Spacelift
Experience with CircleCI and/or GitHub Actions
Experience with TypeScript and Go
Location: This role will be hybrid based out of our company hubs in either New York, San Francisco, or Los Angeles. We may consider exceptionally qualified remote candidates based in the United States or Canada.
Additional Benefits: Comprehensive medical/dental/vision coverage with covered premiums, 401(k), and additional benefits including wellness/commuter/FSA stipends.
Visa Sponsorship: Unfortunately, we are unable to sponsor visas at this time.
To date, we've secured significant funding from renowned venture capitalists (Google Ventures, IVP, Anthos, Upfront Ventures), as well as notable individuals like Marc Benioff, Gokul Rajaram, Kevin Weil, and Alexis Ohanian.
A smart, fun, collaborative, and inclusive team
Great employee benefits, including equity and 401K
Bi-annual, in-person company off-sites in unique locations to grow and share time with the team
An opportunity to perform at your best while growing, making a meaningful impact on the company's trajectory, and embodying our core values: understanding your "why," dancing in the rain, being your whole self, and sanctifying time
We encourage diverse perspectives and rigorous thinkers who aren't afraid to challenge the status quo. Laurel is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. We are not able to support visa sponsorship or relocation assistance.
If you think you'd be a good fit for this role, we encourage you to apply, even if you don’t perfectly match all the bullet points in the job description. At Laurel, we strive to create an inclusive culture that encourages people from all walks of life to bring their unique, diverse perspectives to work. Every day, we aim to build an environment that empowers us all to do the best work of our careers, and we can't wait to show you what we have to offer!