Data Engineer

Company Description

BHFT is a proprietary algorithmic trading firm. Our team manages the full trading cycle, from software development to the design and implementation of trading strategies and algorithms.
Our trading operations cover key exchanges, and the firm trades across a broad range of asset classes, including equities, equity derivatives, options, commodity futures, and rates futures. We employ a diverse and growing array of algorithmic trading strategies, using both High-Frequency Trading (HFT) and Medium-Frequency Trading (MFT) approaches.

Looking ahead, we are expanding into new markets and products. As a dynamic company, we continuously experiment with new markets, tools, and technologies.
Our team numbers 200+ professionals, with a strong emphasis on technology: 70% are technical specialists in development, infrastructure, testing, and analytics. The rest of the team supports business functions such as Risk, Compliance, Legal, and Operations.

With a strong focus on innovation and performance, BHFT is actively expanding its presence in traditional financial markets. We value a results-driven culture, emphasizing collaboration, transparency, and constant improvement, all while offering the flexibility of remote work and a globally distributed team.

 

Job Description

The Data Engineering team is responsible for designing, building, and maintaining the Data Lake infrastructure, including ingestion pipelines, storage systems, and internal tooling for reliable, scalable access to market data.
 

Key Responsibilities

  • Ingestion & Pipelines: Architect batch + stream pipelines (Airflow, Kafka, dbt) for diverse structured and unstructured market data; provide reusable SDKs in Python and Go for internal data producers (see the sketch after this list).

  • Storage & Modeling: Implement and tune S3, column-oriented, and time-series data storage for petabyte-scale analytics; own partitioning, compression, TTL, versioning, and cost optimization.

  • Tooling & Libraries: Develop internal libraries for schema management, data contracts, validation, and lineage; contribute to shared libraries and services used by internal data consumers for research, backtesting, and real-time trading.

  • Reliability & Observability: Embed monitoring, alerting, SLAs, SLOs and CI/CD; champion automated testing, data quality dashboards and incident runbooks.

  • Collaboration: Partner with Data Science, Quant Research, Backend and DevOps to translate requirements into platform capabilities and evangelise best practices.
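To give a concrete feel for the ingestion responsibility above, here is a minimal, illustrative sketch of a daily batch-ingestion DAG in Airflow's TaskFlow style. The dataset name, bucket path, and extract/load steps are hypothetical placeholders, not BHFT's actual pipeline.

```python
# Illustrative only: a minimal daily batch-ingestion DAG (Airflow TaskFlow API).
# The source, bucket path, and load logic are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow.decorators import dag, task


@dag(
    schedule="@daily",
    start_date=datetime(2025, 1, 1),
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
)
def market_data_ingestion():
    @task
    def extract(ds=None) -> str:
        # Stage one day of raw market data and return the staging location.
        # `ds` is the logical date Airflow injects for the run.
        return f"s3://example-raw/market_data/date={ds}/"

    @task
    def load(staging_path: str) -> None:
        # Validate the staged files and register the new partition in the
        # data lake catalog (details depend on the actual storage layer).
        print(f"loading {staging_path}")

    load(extract())


market_data_ingestion()
```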

Qualifications

  • 5+ years of experience building and maintaining production-grade data systems, with proven expertise in architecting and launching data lakes from scratch.
  • Expert-level Python development skills (Go and C++ nice to have).
  • Hands-on experience with modern orchestration tools (Airflow) and streaming platforms (Kafka).
  • Advanced SQL skills including complex aggregations, window functions, query optimization, and indexing.
  • Experience designing high-throughput APIs (REST/gRPC) and data access libraries.
  • Solid fundamentals in Linux, containerization (Docker), and cloud object storage solutions (AWS S3, GCS).
  • Strong experience handling diverse data formats, both structured and unstructured, and optimizing storage strategies such as partitioning, compression, and cost management (a minimal illustration follows this list).
  • Fluency in English for confident communication, documentation, and collaboration within an international team.
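As a rough illustration of the storage-optimization point above, the sketch below writes a date-partitioned, zstd-compressed Parquet dataset with pyarrow. The bucket, columns, and sample rows are hypothetical.

```python
# Illustrative only: date-partitioned, zstd-compressed Parquet written to object
# storage with pyarrow. Bucket, columns, and rows are hypothetical examples.
import pyarrow as pa
import pyarrow.dataset as ds

table = pa.table(
    {
        "trade_date": ["2025-01-02", "2025-01-02", "2025-01-03"],
        "symbol": ["AAA", "BBB", "AAA"],
        "price": [101.5, 20.25, 102.0],
    }
)

ds.write_dataset(
    table,
    base_dir="s3://example-market-data-lake/trades",  # hypothetical bucket
    format="parquet",
    partitioning=["trade_date"],  # one directory per trading day
    file_options=ds.ParquetFileFormat().make_write_options(compression="zstd"),
    existing_data_behavior="overwrite_or_ignore",
)
```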

Additional Information

What we offer:

  • Working in a modern international technology company without bureaucracy, legacy systems, or technical debt.
  • Excellent opportunities for professional growth and self-realization.
  • Fully remote work from anywhere in the world, with a flexible schedule.
  • Compensation for health insurance, sports activities, and professional training.

Average salary estimate

$200,000 / year (est.)
Min: $140,000
Max: $260,000

If an employer mentions a salary or salary range on their job, we display it as an "Employer Estimate". If a job has no salary data, Rise displays an estimate if available.

Employment type: Full-time, remote
Date posted: November 21, 2025