This position is posted by Jobgether on behalf of Curinos. We are currently looking for a Staff Engineer – Data Platform and Lakehouse in the United States.
This role provides a senior engineering opportunity to design and lead the implementation of a cloud-native data platform that powers advanced analytics, AI, and machine learning applications. The Staff Engineer will work closely with cross-functional teams, including product managers, engineers, and data scientists, to enable scalable, secure, and high-performance data solutions. The role requires balancing strategic architectural leadership with hands-on engineering, optimizing distributed data systems, and supporting the development of next-generation AI and SaaS products. Candidates will influence platform adoption, data governance, and automation standards while helping the organization achieve high reliability, scalability, and operational efficiency across its data ecosystem.
Responsibilities:
· Design, implement, and maintain scalable and secure data platforms on Databricks and cloud infrastructure (AWS).
· Provide architectural leadership and ensure consistency, resilience, and performance across distributed data processing systems.
· Develop reusable data pipelines, workflows, and ETL/ELT processes using Databricks Workflows, Airflow, or AWS Glue (a minimal pipeline sketch follows this list).
· Translate business objectives into technical platform capabilities in collaboration with product and cross-functional teams.
· Support AI/ML initiatives, including feature engineering, model deployment, and real-time data processing.
· Drive adoption of data governance standards, including access control, metadata management, lineage, and compliance.
· Establish CI/CD pipelines and DevOps automation for data infrastructure.
· Evaluate and integrate emerging technologies to enhance development, testing, deployment, and monitoring practices.
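To give a concrete sense of the pipeline work described above, here is a minimal PySpark sketch of an ETL job that lands raw data into a Delta table. It assumes a Databricks (or otherwise Delta-enabled) Spark runtime; the bucket path, table names, and columns are hypothetical placeholders, not Curinos specifics.

```python
# Minimal ETL sketch: read raw CSV files, standardize them, and append to a
# Delta table. Paths, table names, and columns are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("deposits_daily_etl").getOrCreate()

# Extract: raw files landed by an upstream process (path is hypothetical).
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/landing/deposits/")
)

# Transform: normalize column names, drop records missing a key, stamp a load date.
cleaned = (
    raw.select([F.col(c).alias(c.strip().lower().replace(" ", "_")) for c in raw.columns])
    .dropna(subset=["account_id"])
    .withColumn("load_date", F.current_date())
)

# Load: append into a Delta table partitioned by load date.
(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("load_date")
    .saveAsTable("analytics.deposits_daily")
)
```

In practice, a job like this would be scheduled as a task in Databricks Workflows or an Airflow DAG rather than run ad hoc.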
Requirements:
· 15+ years of experience in software development, covering the full SDLC from design to deployment and support.
· Proven ability to design and implement cloud-native data architectures on Databricks and AWS (Azure or GCP experience a plus).
· Deep expertise in Apache Spark, including performance tuning and distributed computing best practices (an illustrative tuning sketch follows this list).
· Advanced proficiency in Python and SQL, with solid software engineering foundations.
· Hands-on experience with Databricks Unity Catalog, Feature Store, Delta Live Tables, and data pipeline orchestration tools.
· Strong understanding of ETL/ELT design, data quality validation, observability, and monitoring practices. Experience with Monte Carlo preferred.
· Experience supporting AI/ML workloads and SaaS product integrations.
· Strong communication and collaboration skills for working with engineers, product managers, and data scientists.
· Knowledge of data governance, security, compliance, and metadata management best practices.
· Strategic mindset with the ability to align technical decisions with business goals.
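As an illustration of the kind of Spark tuning referenced above, the sketch below broadcasts a small dimension table to avoid a shuffle join and caps the number of output partitions before writing. Table names and the join key are hypothetical; this is an example of common practice, not a description of Curinos's actual workloads.

```python
# Illustrative Spark tuning: broadcast the small side of a join and control
# the number of output files. All table names and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_example").getOrCreate()

transactions = spark.table("analytics.deposits_daily")  # large fact table (hypothetical)
branches = spark.table("reference.branch_dim")           # small dimension table (hypothetical)

# The broadcast hint ships the small table to every executor, avoiding a shuffle.
enriched = transactions.join(F.broadcast(branches), on="branch_id", how="left")

(
    enriched
    .coalesce(32)                # cap the number of output partitions/files
    .write.format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.deposits_enriched")
)
```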
Benefits:
· Competitive salary range: $170,000–$190,000 plus bonus potential.
· Flexible working options, including fully remote or hybrid schedules in major metropolitan areas (NYC, Boston, Chicago).
· Comprehensive financial, health, and lifestyle benefits.
· Generous annual leave, floating holidays, volunteering days, and a birthday day off.
· Learning and development programs to support career growth.
· Collaborative and inclusive culture with DE&I initiatives and regular social/networking events.
· Employee Assistance Program providing wellbeing, counselling, legal, and financial support.
Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching.
When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job’s core requirements and past success factors to determine your match score.
🎯 Based on this analysis, we automatically shortlist the 3 candidates with the highest match to the role.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.
The process is transparent, skills-based, and free of bias—focusing solely on your fit for the role. Once the shortlist is completed, we share it directly with the company managing the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team.
Thank you for your interest!
#LI-CL1