Job Overview
We are seeking a skilled Data Engineer to join our team and drive our data infrastructure forward. In this role, you will primarily focus on maintaining and enhancing our data warehouse and pipelines (80%) while also contributing to data analysis and reporting initiatives (20%). You'll work closely with cross-functional stakeholders to build robust data solutions and create actionable insights through compelling visualizations.
Key Responsibilities
Data Engineering
- Infrastructure Management: Maintain, enhance, and optimize existing data warehouse architecture and ETL pipelines.
- Pipeline Development: Design and implement scalable ETL/ELT processes ensuring data quality, integrity, and timeliness.
- Performance Optimization: Monitor and improve pipeline performance, troubleshoot issues, and implement best practices.
- Documentation: Create and maintain comprehensive documentation for data engineering processes, architecture, and configurations.
Data Analysis & Reporting
- Stakeholder Collaboration: Partner with business teams to gather requirements and translate them into technical solutions.
- Report Development: Build and maintain PowerBI dashboards and reports that drive business decisions.
- Data Modeling: Develop new data models and enhance existing ones to support advanced analytics.
- Insight Communication: Transform complex data findings into clear, actionable insights for various departments.
Required Qualifications
Technical Skills
- Programming & Query Languages: Strong proficiency in Python, SQL, and PySpark.
- Big Data Platforms: Experience with cloud data platforms such as Snowflake, BigQuery, or Databricks; Databricks experience highly preferred.
- Orchestration Tools: Proven experience with workflow orchestration tools (Airflow preferred).
- Cloud Platforms: Experience with AWS (preferred), Azure, or Google Cloud Platform.
- Data Visualization: Proficiency in PowerBI (preferred) or Tableau.
- Database Systems: Familiarity with relational database management systems (RDBMS).
Development Practices
- Version Control: Proficient with Git for code management and collaboration.
- CI/CD: Hands-on experience implementing and maintaining continuous integration/deployment pipelines.
- Documentation: Strong ability to create clear technical documentation.
Experience & Communication
- Professional Experience: 3+ years in data engineering or closely related roles.
- Language Requirements: Fluent English communication skills for effective collaboration with U.S.-based team members.
- Pipeline Expertise: Demonstrated experience building and maintaining production data pipelines.