Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.
What makes us different?
Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you’ll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken’s focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.
Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission. We also expect candidates to familiarize themselves with the Kraken app. Learn how to create a Kraken account here.
As a fully remote company, we have Krakenites in 70+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Desktop, Wallet, and Kraken Futures.
Become a Krakenite and build the future of crypto!
Join our Data Engineering Team at Kraken!
Are you passionate about designing and building scalable data systems that power one of the fastest-growing companies in cryptocurrency? We’re seeking a skilled Data Engineer to join our Data Platform team and help us architect the future of Kraken’s data ecosystem.
As a Data Engineer at Kraken, you’ll be responsible for building and maintaining high-performance data pipelines, ensuring the reliability and scalability of our data infrastructure, and enabling teams across the company to access clean, consistent, and timely data. You’ll work with modern technologies and large-scale datasets, playing a key role in making data accessible for analytics, machine learning, and product innovation.
Build scalable and reliable data pipelines that collect, transform, load, and curate data from internal systems
Augment the data platform with pipelines from external systems
Ensure high data quality for the pipelines you build, and make them auditable
Drive data systems to be as near real-time as possible
Support the design and deployment of a distributed data store that will serve as the central source of truth across the organization
Build data connections to the company's internal IT systems
Develop, customize, and configure self-service tools that help our data consumers extract and analyze data from our massive internal data store
Evaluate new technologies and build prototypes for continuous improvement in data engineering
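The extract-transform-load-and-audit cycle described above can be sketched in miniature. This is a hypothetical, dependency-free illustration (names like `trades` and `audit` are invented for the example), not a depiction of Kraken's actual stack:

```python
import sqlite3

# Hypothetical source records; in production these would come from an
# internal system (API, database replica, event log).
SOURCE_ROWS = [
    {"trade_id": 1, "pair": "BTC/USD", "amount": "0.5"},
    {"trade_id": 2, "pair": "ETH/USD", "amount": "2.0"},
    {"trade_id": 2, "pair": "ETH/USD", "amount": "2.0"},  # duplicate row
]

def extract():
    """Pull raw rows from the (hypothetical) source system."""
    return list(SOURCE_ROWS)

def transform(rows):
    """Deduplicate on trade_id and cast amounts to float."""
    seen, out = set(), []
    for r in rows:
        if r["trade_id"] in seen:
            continue
        seen.add(r["trade_id"])
        out.append((r["trade_id"], r["pair"], float(r["amount"])))
    return out

def load(rows, conn):
    """Load curated rows and record an audit entry with in/out row counts."""
    conn.execute("CREATE TABLE trades (trade_id INT, pair TEXT, amount REAL)")
    conn.executemany("INSERT INTO trades VALUES (?, ?, ?)", rows)
    conn.execute("CREATE TABLE audit (rows_in INT, rows_out INT)")
    conn.execute("INSERT INTO audit VALUES (?, ?)", (len(SOURCE_ROWS), len(rows)))
    conn.commit()

def run_pipeline(conn):
    raw = extract()
    curated = transform(raw)
    load(curated, conn)
    return curated
```

Running the pipeline against an in-memory database leaves an `audit` row showing that one duplicate was dropped, which is the "auditable" property the bullet above asks for.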
5+ years of work experience in a relevant field (Data Engineer, DWH Engineer, Software Engineer, etc.)
Experience with data-lake and data-warehousing technologies and relevant data-modeling best practices (Presto, Athena, Glue, etc.)
Proficiency in at least one of our main programming languages: Python or Scala. Expertise in additional languages is a big plus!
Experience building data pipelines/ETL in Airflow, and familiarity with software design principles
Excellent SQL and data-manipulation skills using common frameworks like Spark/PySpark or similar
Expertise in Apache Spark or similar big-data technologies, with a proven record of processing high-volume, high-velocity datasets
Experience with business-requirements gathering for data sourcing
Bonus: Kafka and other streaming technologies such as Apache Flink
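The SQL and Spark skills listed above boil down to grouped aggregation over large datasets. As a toy, dependency-free analogue (in PySpark this would be `df.groupBy("pair").agg(F.sum("amount"), F.count("*"))`; the `trades` data here is invented):

```python
from collections import defaultdict

# Toy trade records: (trading pair, traded amount). In PySpark these
# would be rows of a DataFrame rather than an in-memory list.
trades = [
    ("BTC/USD", 0.5),
    ("ETH/USD", 2.0),
    ("BTC/USD", 1.5),
]

def volume_by_pair(rows):
    """Group trades by pair and return (total volume, trade count) per pair --
    the same shape as a Spark groupBy/agg."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for pair, amount in rows:
        totals[pair] += amount
        counts[pair] += 1
    return {pair: (totals[pair], counts[pair]) for pair in totals}
```

The point of Spark is that this same logical operation is distributed across partitions; the per-group accumulator pattern is unchanged.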
#LI-Remote
This job is accepting ongoing applications and there is no application deadline.
Please note, applicants are permitted to redact or remove information on their resume that identifies age, date of birth, or dates of attendance at or graduation from an educational institution.
We consider qualified applicants with criminal histories for employment on our team, assessing candidates in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.
Kraken is powered by people from around the world, and we celebrate all Krakenites for their diverse talents, backgrounds, contributions, and unique perspectives. We hire strictly based on merit, meaning we seek out candidates with the abilities, knowledge, and skills most suitable for the job. We encourage you to apply for roles where you don't fully meet the listed requirements, especially if you're passionate or knowledgeable about crypto!
As an equal opportunity employer, we don’t tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status, or any other characteristic protected by federal, state, or local laws.