Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.
What makes us different?
Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you’ll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken’s focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.
Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission. We also expect candidates to familiarize themselves with the Kraken app. Learn how to create a Kraken account here.
As a fully remote company, we have Krakenites in 70+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Desktop, Wallet, and Kraken Futures.
Become a Krakenite and build the future of crypto!
Join our Data Infrastructure team and play a pivotal role in upholding the reliability, scalability, and efficiency of our robust Data platform. As a Senior Site Reliability Engineer (SRE) specialized in Data Infrastructure, you will collaborate closely with diverse cross-functional teams to conceive, execute, and oversee the foundational data infrastructure that empowers our array of applications and services.
As a key member of our Data Infrastructure team, you will:
Design the data governance mechanisms that ensure our lakehouse is easy to interact with, secure, and compliant with all applicable regulations.
Implement the infrastructure we use to ingest our data, store it, catalog it with the right metadata and capture its lineage.
Provide a state-of-the-art suite of BI tools for multiple teams within the company.
Guarantee the availability, high performance, scalability and cost efficiency of our data platform.
Your proficiency in cloud technologies, infrastructure as code, automation, monitoring, logging, user and machine authentication and authorization (AuthN/AuthZ), and certificate management will be instrumental in upholding the exceptional operational standards we set for our services.
Implement self-service data infrastructure solutions that support the needs of 10+ business units and over 100 engineers and data analysts
Utilize Infrastructure as Code (IaC) principles to design, provision, and manage both on-premises and cloud (AWS) infrastructure components using tools such as Terraform
Develop and maintain bash/shell automation scripts to streamline operational tasks and deployments
Enhance and manage CI/CD pipelines to facilitate consistent software deployments across the data infrastructure
Implement robust data monitoring and alerting solutions to proactively detect anomalies and performance issues
Manage and implement role-based access control (RBAC) and permissions for a multitude of user groups and machine workflows across different environments
Manage and maintain real-time streaming data architecture using technologies like Kafka and Debezium Change Data Capture (CDC)
Ensure the timely and accurate processing of streaming data, enabling data analysts and engineers to gain insights from up-to-date information
Utilize Kubernetes to manage containerized applications within the data infrastructure, ensuring efficient deployment, scaling, and orchestration
Implement effective incident response procedures and participate in on-call rotations
Collaborate with data analysts, engineers, and cross-functional teams to understand requirements and implement appropriate solutions
Document architecture, processes, and best practices to enable knowledge sharing and support continuous improvement
Support AI/ML teams with their infrastructure requests
Proven experience (5+ years) working as a Site Reliability Engineer, Infrastructure Engineer, Data Infrastructure Engineer, or similar roles, with a focus on data infrastructure and security
Experience maintaining real-time data processing technologies, such as Kafka clusters, Flink clusters, and Debezium instances
Working experience managing hybrid, multi-tenant cloud systems, particularly on AWS
Experience with Infrastructure as Code tools such as Terraform, Terragrunt, and Atlantis
Experience with containerization and orchestration tools, particularly Kubernetes, Nomad, and Docker
Solid understanding of bash/shell scripting and proficiency in at least one programming language (preferably Python or JVM languages)
Experience maintaining data-related technologies: Apache Airflow, Apache Spark, DBs, BI tooling
Experience solving data access management issues in a large-scale data lake
Familiarity with CI/CD deployment pipelines and related tools
Strong problem-solving skills and the ability to troubleshoot complex systems
Experience with data-related technologies (databases, data lakes, Airflow, Spark) is a plus
This job is accepting ongoing applications and there is no application deadline.
Please note, applicants are permitted to redact or remove information on their resume that identifies age, date of birth, or dates of attendance at or graduation from an educational institution.
We consider qualified applicants with criminal histories for employment on our team, assessing candidates in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.
Kraken is powered by people from around the world and we celebrate all Krakenites for their diverse talents, backgrounds, contributions and unique perspectives. We hire strictly based on merit, meaning we seek out the candidates with the right abilities, knowledge, and skills considered the most suitable for the job. We encourage you to apply for roles where you don't fully meet the listed requirements, especially if you're passionate or knowledgeable about crypto!
As an equal opportunity employer, we don’t tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status, or any other protected characteristic as outlined by federal, state, or local laws.