The Principal Data Engineer will design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources. The role will develop robust and scalable solutions that transform data into a useful format for analysis, enhance data flow, and enable end users to consume and analyze data more quickly and easily.
Expected Duties:
- Design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources
- Lead the writing of complex SQL queries to support analytics needs
- Develop technical tools and programming that leverage artificial intelligence, machine learning, and big-data techniques to cleanse, organize, and transform data, and to maintain, defend, and update data structures and integrity on an automated basis
- Evaluate and recommend tools and technologies for data infrastructure and processing
- Collaborate with engineers, data scientists, data analysts, product teams, and other stakeholders to translate business requirements into technical specifications and coded data pipelines
- Work with tools, languages, data processing frameworks, and databases such as R, Python, SQL, Databricks, Spark, Delta, and APIs
- Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and data warehouses
Qualifications: Knowledge, Skills, and Abilities
The role will include work on problems of diverse scope where analysis of information requires evaluation of identifiable factors. Work is expected to be performed independently, exercising sound judgment.
- Ability to assess unusual circumstances and use sophisticated analytical and problem-solving techniques to identify the cause
- Ability to enhance relationships and networks with senior internal/external partners who are not familiar with the subject matter, which often requires persuasion
- Architect and scale our modern data platform to support real-time and batch processing for financial forecasting, risk analytics, and customer insights
- Enforce high standards for data governance, quality, lineage, and compliance
- Partner with stakeholders across engineering, finance, sales, and compliance to translate business requirements into reliable data models and workflows
- Evaluate emerging technologies and lead POCs that shape the future of our data stack
- Champion a culture of security, automation, and continuous delivery in all data workflows
Technical Qualifications:
- Deep expertise in Python, SQL, and distributed processing frameworks such as Apache Spark and Databricks.
- Proven experience with cloud-based data platforms (preferably AWS or Azure).
- Hands-on experience with data orchestration and transformation tools (e.g., Airflow, dbt) and cloud data warehouses/lakehouses (e.g., Databricks, Snowflake, Redshift, BigQuery).
- Strong understanding of data security, privacy, and compliance within a financial services context.
- Experience working with structured and semi-structured data (e.g., Delta, JSON, Parquet, Avro) at scale.
- Familiarity with modeling datasets from Salesforce, NetSuite, and Anaplan to solve business use cases is required.
- Previous experience democratizing data at scale for the enterprise is a huge plus.
Educational Qualifications and Work Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6-8 years of experience in data engineering, with a strong focus on financial systems on SaaS platforms.
MeridianLink has a wonderful culture where people value the work they do and appreciate each other for their contributions. We develop our employees so they can grow professionally, and we prefer to promote from within. We have an open-door policy with direct access to executives; we want to hear your ideas and what you think. Our company believes that to be productive in the long term, we must have a genuine work-life balance. We understand that employees have families and full lives outside of the office. To that end, we honor their personal commitments.
MeridianLink is an Equal Opportunity Employer. We do not discriminate based on race, religion, color, sex, age, national origin, disability, or any other characteristic protected by applicable law.
MeridianLink runs a comprehensive background check, credit check, and drug test as part of our offer process.
Salary range: $148,000 - $202,000. [It is not typical for offers to be made at or near the top of the range.] The actual salary will be determined based on experience and other job-related factors permitted by law, including geographic location.
MeridianLink offers:
- Stock options or other equity-based awards
- Insurance coverage (medical, dental, vision, life, and disability)
- Flexible paid time off
- Paid holidays
- 401(k) plan with company match
- Remote work
All compensation and benefits are subject to the terms and conditions of the underlying plans or programs, as applicable and as may be amended, terminated, or superseded from time to time.
#LI-REMOTE