Are you passionate about making a difference in people's lives? Do you enjoy working in a service-oriented industry? If so, this opportunity may be the right fit for you!
This role requires the candidate to be located in Denver, or be open to relocating there, as this is an in-office position.
This position is responsible for designing, building, and maintaining scalable, cloud-native data pipelines and platforms that power advanced analytics, real-time data applications, and AI/ML workflows. It plays a key role in enabling machine learning in production environments by developing robust data architectures, implementing lakehouse solutions, and collaborating with cross-functional teams to deliver high-quality, governed, and discoverable data products.
This role...
Designs, builds, and maintains scalable batch and real-time data pipelines using AWS-native tools.
Develops and manages data lake/lakehouse architectures leveraging S3, Glue Catalog, Athena, EMR (Iceberg/Delta Lake), and Redshift.
Collaborates with ML engineers and data scientists to operationalize ML models and support MLOps pipelines.
Ensures data quality, observability, lineage tracking, and compliance across all data products.
Designs scalable data models for storage in Amazon DynamoDB and other AWS-based stores.
Builds and maintains online/offline feature stores to support low-latency AI applications.
Contributes to decentralized data ownership and federated governance using tools like Amazon DataZone.
Partners with product managers, engineers, and analysts to align data infrastructure with strategic goals.
Designs systems using event-driven architecture and microservices principles.
May lead projects and perform other duties as assigned.
Occasional business travel may be required.
We are interested in speaking to individuals with the following...
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field required.
AWS certifications (Data Analytics, Machine Learning) or equivalent credentials preferred.
Expertise in Python and SQL for ETL/ELT, data transformation, and automation tasks.
Deep experience with AWS services: S3, Glue, Athena, Redshift, DynamoDB, Kinesis/MSK, Lambda, Step Functions, EMR.
Strong understanding of modern data architectures (data lake, lakehouse, data warehouse).
Hands-on experience with Apache Spark, Kafka, Airflow, and other orchestration tools.
Familiarity with DevOps and Infrastructure-as-Code tools (Pulumi, CloudFormation, CDK).
Knowledge of data governance, security, and compliance (e.g., HIPAA, GDPR).
Proficient in semantic layer design using tools like AWS Glue Catalog and third-party platforms (Alation, Collibra).
Ability to integrate semantic assets within lakehouse ecosystems.
Strong communication skills and ability to work cross-functionally in agile teams.
Problem-solving mindset with attention to detail and commitment to code quality.
Salary: $180,000 - $230,000
This role is bonus eligible.
Modivcare’s positions are posted and open for applications for a minimum of 5 days. Positions may be posted for a maximum of 45 days, depending on the type of role, the number of roles, and the number of applications received. We encourage prospective candidates to submit their applications promptly so they do not miss out on our opportunities. We frequently post new opportunities and encourage prospective candidates to check back often for new postings.
We value our team members and realize the importance of benefits for you and your family.
Modivcare offers a comprehensive benefits package.
Modivcare is an Equal Opportunity Employer.
Modivcare is leading the transformation to better connect people with care, wherever they are. We serve the most underserved by facilitating non-emergency medical transportation, remote patient monitoring, and personal care to enable greater access...