At January, we're fixing what's broken in credit. Our data-driven platform brings humanity to consumer finance, rebuilding trust and delivering better outcomes for consumers and creditors alike while helping millions move toward brighter financial futures.
Our mission is simple: expand access to credit while empowering consumers to achieve lasting stability and control of their financial lives. We began by building the foundation for creditors to engage with and support their borrowers at scale across the entire debt lifecycle. We've mastered outsourced collections by combining best-in-class performance with differentiated consumer satisfaction and superior compliance. And we're just getting started. Together, we're creating a financial system where trust and opportunity spark lasting change in people's lives.
As January's founding Senior Data Engineer, you'll transform how we leverage data to expand access to credit — not by fixing what's broken, but by unlocking what's possible. You'll take full ownership of our modern data stack, evolving it from a capable system maintained part-time by analysts and engineers into a world-class platform that anticipates and enables our most ambitious data initiatives. You'll design the data infrastructure that helps millions achieve financial stability, ensuring every insight flows seamlessly from production to decision-makers. By establishing data engineering as a core discipline at January, you'll free our analysts to focus on insights while you architect the scalable foundation that powers our next phase of growth.
Own and optimize our entire data platform — taking our Snowflake warehouse from analyst-maintained to engineer-optimized while standardizing data models for customer reporting, operational dashboards, and ML features
Build self-healing data pipelines — designing ETL processes that scale automatically with volume, implementing monitoring that catches issues before anyone notices (see the freshness-check sketch after this list), and optimizing costs without sacrificing performance
Democratize data access — creating intuitive models that help PMs, analysts, and ops teams find answers independently while maintaining security and compliance requirements
Bridge engineering and analytics — establishing feedback loops between production systems and analytical needs, ensuring schema changes don't break downstream dependencies, and influencing how new features generate data
Institute modern data practices — implementing testing frameworks, building CI/CD pipelines for infrastructure changes, and creating documentation that enables others to extend your work
Drive strategic infrastructure decisions — identifying where new tools unlock capabilities, balancing quick wins with architectural vision, and building the foundation for an eventual data engineering team
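To make that monitoring idea concrete, here is a minimal sketch of the kind of freshness check we mean, written in Snowflake SQL. The table names, loaded_at column, and staleness thresholds are hypothetical placeholders, not our actual schema.

    -- Hypothetical freshness check: flag any core table whose latest load is
    -- older than its allowed staleness window. Table and column names are
    -- illustrative placeholders only.
    with expectations as (
        select 'analytics.payments'  as table_name, 2  as max_hours_stale union all
        select 'analytics.accounts'  as table_name, 6  as max_hours_stale union all
        select 'analytics.call_logs' as table_name, 24 as max_hours_stale
    ),
    freshness as (
        select 'analytics.payments'  as table_name, max(loaded_at) as last_loaded_at from analytics.payments  union all
        select 'analytics.accounts'  as table_name, max(loaded_at) as last_loaded_at from analytics.accounts  union all
        select 'analytics.call_logs' as table_name, max(loaded_at) as last_loaded_at from analytics.call_logs
    )
    select
        e.table_name,
        f.last_loaded_at,
        datediff('hour', f.last_loaded_at, current_timestamp()) as hours_stale
    from expectations e
    join freshness f using (table_name)
    where datediff('hour', f.last_loaded_at, current_timestamp()) > e.max_hours_stale;
    -- Any rows returned become an alert in the orchestrator.

In practice a check like this would live in the orchestrator or in dbt's built-in source freshness tests; the point is that staleness surfaces as an alert before a stakeholder opens a stale dashboard.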
Deliver immediate impact through key projects including:
Data Model Redesign: Architect unified models that reduce query redundancy for client reporting by 50% while maintaining flexibility
Pipeline Reliability: Strengthen monitoring systems to catch 99% of issues before they impact users
Cost Optimization: Reduce our Snowflake spend by 30-40% through intelligent clustering and lifecycle management (a sketch of the typical levers follows this list)
Analytics Enablement: Create semantic layers that enable technical and non-technical users alike to easily extract value from rich user data
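For a sense of what the cost-optimization project involves, here is a hedged sketch of the usual Snowflake levers. The table and warehouse names are hypothetical, and actual savings depend on our query patterns, so the 30-40% figure is a target rather than a guarantee.

    -- Illustrative Snowflake cost levers; object names are hypothetical.

    -- 1. Cluster a large, frequently filtered table so reporting queries prune
    --    micro-partitions instead of scanning the whole table.
    alter table analytics.client_report_events
        cluster by (client_id, to_date(event_ts));

    -- 2. Tighten time-travel retention on high-churn staging tables we can rebuild.
    alter table staging.raw_events set data_retention_time_in_days = 1;

    -- 3. Suspend idle warehouses quickly instead of letting them burn credits.
    alter warehouse reporting_wh set auto_suspend = 60, auto_resume = true;

Clustering only pays off on large tables with selective filters, so work like this starts by reading the account usage views, not by rewriting DDL.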
5+ years in data engineering or analytics engineering with progressive technical responsibility
Deep expertise with modern data warehouses (Snowflake, BigQuery, or Redshift) including performance tuning and cost optimization
Advanced SQL skills — you can write elegant queries and debug why that 45-minute monster is destroying our compute budget (a query-profiling sketch appears at the end of this posting)
Production experience with dbt or similar transformation tools, including testing and documentation best practices
Proven ability to build and maintain ETL/ELT pipelines at scale using modern orchestration tools
Track record of designing data models that balance analytical flexibility with performance at scale
Experience as a sole or lead data engineer, owning infrastructure end-to-end without a large team
History of partnering with engineering teams to improve data quality at the source
Demonstrated success in reducing infrastructure costs while improving performance
Experience implementing data quality frameworks and proactive monitoring systems
Systems thinker who sees beyond individual pipelines to understand organizational data flow
Ownership mentality — you build your own roadmap and drive initiatives without waiting for permission
Strategic perspective that connects technical decisions to business outcomes
Collaborative approach to working with analysts, engineers, and product managers
Clear communicator who writes documentation people actually read
Bias toward shipping iteratively rather than pursuing perfection
Experience with streaming architectures and real-time analytics
Familiarity with ML infrastructure and feature stores
Knowledge of financial data privacy regulations and compliance
Previous startup or high-growth company experience
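As an example of the SQL debugging called out above, here is a sketch of how we'd expect someone to hunt for that 45-minute monster using Snowflake's standard ACCOUNT_USAGE.QUERY_HISTORY view; the time window and thresholds below are arbitrary examples.

    -- Surface the most expensive queries from the last week: long-running and
    -- scanning far more partitions than they prune.
    select
        query_id,
        user_name,
        warehouse_name,
        total_elapsed_time / 1000 / 60     as elapsed_minutes,
        bytes_scanned / power(1024, 3)     as gb_scanned,
        partitions_scanned,
        partitions_total,
        left(query_text, 120)              as query_preview
    from snowflake.account_usage.query_history
    where start_time >= dateadd('day', -7, current_timestamp())
      and execution_status = 'SUCCESS'
      and total_elapsed_time > 10 * 60 * 1000   -- longer than 10 minutes
    order by total_elapsed_time desc
    limit 20;

From there, the fix is usually a missing filter on the clustering key, an exploding join, or a model that should be materialized incrementally.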