
Data Engineer

Job Title: Data Engineer
Position Type: Full-Time, Remote
Working Hours: U.S. client business hours (with flexibility for pipeline monitoring and data refresh cycles)

About the Role:
Our client is seeking a Data Engineer to design, build, and maintain reliable data pipelines and infrastructure that deliver clean, accessible, and actionable data. This role requires strong software engineering fundamentals, experience with modern data stacks, and an eye for quality and scalability. The Data Engineer ensures data flows seamlessly from source systems to warehouses and BI tools, powering decision-making across the business.

Responsibilities:
Pipeline Development:

  • Build and maintain ETL/ELT pipelines using Python, SQL, or Scala.
  • Orchestrate workflows with Airflow, Prefect, Dagster, or Luigi.
  • Ingest structured and unstructured data from APIs, SaaS platforms, relational databases, and streaming sources.
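
The pipeline work above can be pictured with a minimal extract/transform/load sketch. This is an illustrative toy, not a production pattern: the raw JSON strings stand in for an API response, and SQLite stands in for the warehouse; all names (`users`, `extract`, `transform`, `load`) are hypothetical.

```python
import json
import sqlite3

def extract(raw_records):
    """Extract: parse raw JSON strings as they might arrive from an API."""
    return [json.loads(r) for r in raw_records]

def transform(records):
    """Transform: normalize the email field and drop records missing an id."""
    return [
        {"id": r["id"], "email": r.get("email", "").strip().lower()}
        for r in records
        if r.get("id") is not None
    ]

def load(records, conn):
    """Load: idempotent upsert into a warehouse-style table (SQLite stands in)."""
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.executemany(
        "INSERT INTO users (id, email) VALUES (:id, :email) "
        "ON CONFLICT(id) DO UPDATE SET email = excluded.email",
        records,
    )
    conn.commit()

if __name__ == "__main__":
    raw = ['{"id": 1, "email": " A@X.COM "}', '{"id": 2, "email": "b@y.com"}',
           '{"email": "no-id@z.com"}']
    conn = sqlite3.connect(":memory:")
    load(transform(extract(raw)), conn)
    print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 2 rows loaded
```

In a real deployment each stage would be a task in an orchestrator (Airflow, Prefect), so failures retry per stage rather than restarting the whole run.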

Data Warehousing:

  • Manage data warehouses (Snowflake, BigQuery, Redshift).
  • Design schemas (star/snowflake) optimized for analytics.
  • Implement partitioning, clustering, and query performance tuning.
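
For readers unfamiliar with star schemas, here is a minimal sketch of the shape: a central fact table joined to dimension tables, queried by slicing on dimension attributes. SQLite stands in for Snowflake/BigQuery/Redshift, and the table and column names are hypothetical.

```python
import sqlite3

# Hypothetical star schema: one fact table keyed to two dimensions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, iso_date TEXT, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales  (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    units       INTEGER,
    revenue     REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20250101, '2025-01-01', '2025-01')")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(20250101, 1, 3, 30.0), (20250101, 1, 2, 20.0)])

# Typical analytics query: aggregate the fact table, slicing by dimension attributes.
row = conn.execute("""
    SELECT d.month, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.month, p.category
""").fetchone()
print(row)  # ('2025-01', 'Hardware', 50.0)
```

In a real warehouse, partitioning the fact table (e.g. by `date_key`) and clustering on common filter columns keeps queries like this from scanning the full table.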

Data Quality & Governance:

  • Implement validation checks, anomaly detection, and logging for data integrity.
  • Enforce naming conventions, lineage tracking, and documentation (dbt, Great Expectations).
  • Maintain compliance with GDPR, HIPAA, or industry-specific regulations.

Streaming & Real-Time Data:

  • Develop and monitor streaming pipelines with Kafka, Kinesis, or Pub/Sub.
  • Ensure low-latency ingestion for time-sensitive use cases.
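
The core pattern behind streaming aggregation (whatever the broker) is bucketing keyed events into time windows as they arrive. Below, a generator stands in for a Kafka/Kinesis consumer and the timestamps are invented; real consumers would use the broker's client library and handle late or out-of-order events.

```python
from collections import defaultdict

def event_stream():
    """Stands in for a Kafka/Kinesis consumer: yields (key, value, timestamp)."""
    base = 1_700_000_000.0
    for i, (key, value) in enumerate([("a", 1), ("b", 2), ("a", 3), ("a", 4), ("b", 5)]):
        yield key, value, base + i * 0.5  # one event every 500 ms

def tumbling_window_sums(stream, window_seconds=1.0):
    """Sum values per key within fixed (tumbling) time windows."""
    windows = defaultdict(float)
    for key, value, ts in stream:
        bucket = int(ts // window_seconds)  # which window this event falls into
        windows[(bucket, key)] += value
    return dict(windows)

print(tumbling_window_sums(event_stream()))
```

Keeping windows small and flushing them as they close is what bounds end-to-end latency for time-sensitive consumers.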

Collaboration:

  • Partner with analysts and data scientists to provide curated, reliable datasets.
  • Support BI teams in building dashboards (Tableau, Looker, Power BI).
  • Document data models and pipelines for knowledge transfer.

Infrastructure & DevOps:

  • Containerize data services with Docker and orchestrate in Kubernetes.
  • Automate deployments via CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI).
  • Manage cloud infrastructure using Terraform or CloudFormation.

What Makes You a Perfect Fit:

  • Passion for clean, reliable, and scalable data.
  • Strong problem-solving skills and a debugging mindset.
  • Balance of software engineering rigor and data intuition.
  • Collaborative communicator who thrives in cross-functional environments.

Required Experience & Skills (Minimum):

  • 3+ years in data engineering or back-end development.
  • Strong Python and SQL skills.
  • Experience with at least one major data warehouse (Snowflake, Redshift, BigQuery).
  • Familiarity with pipeline orchestration tools (Airflow, Prefect).

Ideal Experience & Skills:

  • Experience with dbt for transformations and data modeling.
  • Streaming data experience (Kafka, Kinesis, Pub/Sub).
  • Cloud-native data platforms (AWS Glue, GCP Dataflow, Azure Data Factory).
  • Background in regulated industries (healthcare, finance) with strict compliance.

What Does a Typical Day Look Like?
A Data Engineer’s day revolves around keeping pipelines running, improving reliability, and enabling teams with high-quality data. You will:

  • Check pipeline health in Airflow/Prefect and resolve any failed jobs.
  • Ingest new data sources, writing connectors for APIs or SaaS platforms.
  • Optimize SQL queries and warehouse performance to reduce costs and latency.
  • Collaborate with analysts/data scientists to deliver clean datasets for dashboards and models.
  • Implement validation checks to prevent downstream reporting issues.
  • Document and monitor pipelines so they’re reproducible, scalable, and audit-ready.

In essence: you ensure the business has accurate, timely, and trustworthy data powering every decision.

Key Metrics for Success (KPIs):

  • Pipeline uptime ≥ 99%.
  • Data freshness within agreed SLAs (hourly, daily, weekly).
  • Zero critical data quality errors reaching BI/analytics.
  • Cost-optimized queries and warehouse performance.
  • Positive feedback from data consumers (analysts, scientists, leadership).

Interview Process:

  • Initial Phone Screen
  • Video Interview with Pavago Recruiter
  • Technical Task (e.g., build a small ETL pipeline or optimize a SQL query)
  • Client Interview with Engineering/Data Team
  • Offer & Background Verification
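
The technical task mentions optimizing a SQL query; the sketch below shows the general idea with SQLite standing in for a warehouse (table and index names are invented). The same query goes from a full table scan to a covering-index lookup once an index matching the filter and aggregated columns exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, float(i)) for i in range(10_000)])

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

# Before: the planner has no index to use, so it scans the whole table.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# After: a covering index lets SQLite answer from the index alone.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, total)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan)
```

Warehouse engines expose the same diagnostic loop (e.g. `EXPLAIN` in BigQuery/Snowflake/Redshift), where partitioning and clustering play the role the index plays here.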

Average salary estimate

$130,000 / year (est.)
Minimum: $110,000
Maximum: $150,000

If an employer mentions a salary or salary range on their job, we display it as an "Employer Estimate". If a job has no salary data, Rise displays an estimate if available.

Employment Type: Full-time, Remote
Date Posted: October 14, 2025