Senior Data Engineer

Today, we live in a world where everything has become convenient. You can get a ride anywhere, buy anything, or answer any question with just a couple of clicks on your phone. Convenience isn’t a luxury; it’s an expectation.

So why isn’t renting? It’s still a chore to set up utilities, buy renters insurance, change air filters, handle pest control, and more.

That’s why we’ve built the world’s first Resident Experience Platform that makes resident onboarding, resident services, and ancillary revenue effortless for property managers.

We’re passionate about turning friction into Triple Win experiences for residents, property managers, and investors. That way, renting can be easy and rewarding for everyone.

And now you can join us. Apply today to join 220+ passionate, creative people who strive to make a difference each day so residents, property managers, and investors all win, creating the ultimate Triple Win! 🔥🔥🔥

About the Role

We are looking for a Senior Data Engineer to be a critical builder on our Data Platform team. This is an Individual Contributor (IC) role focused on designing, building, and scaling the data systems that power Second Nature.

Reporting to the Director of Data Engineering, you will own the development of robust data pipelines that process high-volume, high-throughput data from our most critical business sources (Finance, RevOps, and core platform streams).

You won’t just be building; you’ll be a key technical and consultative partner to the business, translating stakeholder needs into efficient, automated workflows that deliver reliable insights. You'll be a key part of a team that embraces modern tooling, including AI-assisted development, to build smarter and faster.

While this position has no people-management responsibilities, it comes with substantial technical ownership and influence over our architecture, tooling, and processes. Our team is expanding quickly, offering clear paths to technical leadership or principal-level roles for high performers.

What You'll Do

1. Build & Scale Our Core Data Infrastructure

  • Master our GCP Stack: Design, build, and maintain scalable, secure data pipelines within the Google Cloud Platform (GCP), leveraging BigQuery, Dataflow, Cloud Composer, and Pub/Sub.

  • Write Production-Grade Code: Implement and test robust, efficient, and maintainable data workflows using advanced Python and SQL.

  • Unify Business Data: Integrate and model data from diverse, high-priority business systems (including Salesforce, billing platforms, and operational databases) into unified, reliable datasets.

  • Optimize for Performance: Proactively monitor, troubleshoot, and optimize existing infrastructure for speed, cost-efficiency, and reliability (e.g., BigQuery partitioning, clustering, and slot management).
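
For illustration, here is a minimal sketch of the partitioning and clustering levers named in the bullet above, written with the google-cloud-bigquery Python client; the project, dataset, table, and column names are hypothetical, not part of our actual platform.

```python
# Minimal sketch: create a BigQuery table that is day-partitioned on its event
# timestamp and clustered on a common filter column, so queries scan less data.
# Project, dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

schema = [
    bigquery.SchemaField("event_id", "STRING"),
    bigquery.SchemaField("account_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
    bigquery.SchemaField("created_at", "TIMESTAMP"),
]

table = bigquery.Table("my-project.finance.invoice_events", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="created_at",  # enables partition pruning on date filters
)
table.clustering_fields = ["account_id"]  # co-locate rows that are queried together

client.create_table(table, exists_ok=True)
```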

2. Drive Technical Excellence & Solution Ownership

  • Own Projects End-to-End: Lead the full lifecycle of data projects, from technical design and architecture through to deployment, monitoring, and maintenance.

  • Champion Data Quality: Establish and own data quality monitoring and observability (e.g., Datadog) to build and maintain trust in our data platform.

  • Innovate and Modernize: Collaborate on tool selection and architectural decisions, helping to evolve our stack to support advanced analytics, AI integration, and engineering best practices like CI/CD and IaC.

  • Be the Problem-Solver: Proactively identify performance bottlenecks, data anomalies, and reliability issues, and drive them to resolution.

3. Collaborate & Consult Across the Business

  • Be the Data Partner: Act as a key technical partner for Finance, RevOps, and other business units, translating their complex needs into robust, scalable data solutions.

  • Communicate Clearly: Articulate complex data concepts and architectural decisions effectively to both technical and non-technical audiences.

  • Coordinate Across Teams: Work closely with our internal and offshore data engineering teams to coordinate development, testing, and delivery.

Days 0–30: Onboarding, Discovery & First Wins

Objectives:

  • Technical Immersion: Gain a functional understanding of our specific GCP setup, deployment patterns, and security protocols.

  • Business Domain Mapping: Understand why we built what we built. Map the data flow from source (e.g., Salesforce) to value (e.g., Board Deck).

  • Tooling Setup: Fully configure your local environment, including your AI-assisted development stack.

Key Activities:

  • Audit the Architecture: Review existing Cloud Composer DAGs, BigQuery datasets, and Terraform configurations.

    • Example: Diagram the critical path for our "Revenue" dataset to identify potential single points of failure.

  • Stakeholder "Listening Tour": Meet with key data consumers in Finance and RevOps.

    • Example: Ask them, "What is the one dashboard you don't trust right now, and why?"

  • Ship a "First Fix": Deploy code to production within your first two weeks.

    • Example: Fix a known bug, update a deprecated API call in Python, or add a missing field to a schema.

  • Establish AI Workflow: Integrate AI tools into your daily workflow in a compliant way.

    • Example: Set up your IDE (VS Code/PyCharm) with the approved AI extensions and generate your first set of unit tests using the tool.
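
As a sketch of what that first set of AI-drafted unit tests might look like, a small pytest module could resemble the following; the helper function here is hypothetical and only stands in for a real transformation in our codebase.

```python
# test_normalize_email.py: a hypothetical transformation plus pytest cases of
# the kind an AI assistant can draft as a starting point for human review.
import pytest


def normalize_email(raw: str) -> str:
    """Lowercase and strip surrounding whitespace from an email address."""
    return raw.strip().lower()


def test_strips_whitespace_and_lowercases():
    assert normalize_email("  Jane.Doe@Example.COM ") == "jane.doe@example.com"


def test_clean_input_is_unchanged():
    assert normalize_email("jane.doe@example.com") == "jane.doe@example.com"


def test_non_string_input_raises():
    with pytest.raises(AttributeError):
        normalize_email(None)  # type: ignore[arg-type]
```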

Success Indicator: You have successfully deployed to production using our CI/CD pipeline and can explain the high-level architecture of our core data platform to a peer.

Days 31–60: Ownership, Optimization & Efficiency

Objectives:

  • Domain Ownership: Become the primary point of contact for a specific slice of the data stack (e.g., Marketing Data, Billing Pipelines, or Platform Events).

  • Proactive Optimization: Move from "keeping the lights on" to "upgrading the wiring." Focus on cost, speed, or reliability.

  • AI-Driven Refactoring: Use modern tooling to pay down technical debt faster than we created it.

Key Activities:

  • Performance/Cost Tuning: Identify inefficiencies in GCP and fix them.

    • Example: Find the top 5 most expensive BigQuery queries and refactor them (partitioning/clustering) to reduce costs by 15%+.

    • Example: Refactor a sluggish Dataflow job to handle higher throughput.

  • Enhance Observability: Move us from "reactive" to "proactive" monitoring.

    • Example: Implement a Datadog monitor that alerts on "data drift" (unexpected changes in data distribution) rather than just job failures.

  • Deepen Business Logic: Translate a complex business requirement into code without needing hand-holding.

    • Example: Work with Finance to automate a manual Excel reconciliation process into a robust SQL pipeline.
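
To make that last example concrete, here is a minimal sketch of replacing a manual Excel reconciliation with a loaded staging table and a SQL comparison in BigQuery. The bucket, datasets, tables, and columns are all hypothetical, and a real version would run inside an orchestrated pipeline rather than a one-off script.

```python
# Minimal sketch: load Finance's CSV export into a staging table, then
# reconcile it against the billing pipeline in SQL instead of Excel.
# All GCS paths, datasets, tables, and columns are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# 1. Load the exported spreadsheet (saved as CSV in GCS) into a staging table.
load_job = client.load_table_from_uri(
    "gs://my-bucket/finance/ar_export.csv",
    "my-project.finance_staging.ar_export",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    ),
)
load_job.result()  # wait for the load to finish

# 2. Flag invoices where the billed amount disagrees with Finance's export.
reconciliation_sql = """
    SELECT b.invoice_id, b.amount AS billed_amount, a.amount AS exported_amount
    FROM `my-project.billing.invoices` AS b
    LEFT JOIN `my-project.finance_staging.ar_export` AS a USING (invoice_id)
    WHERE a.amount IS NULL OR ABS(b.amount - a.amount) > 0.01
"""
for row in client.query(reconciliation_sql).result():
    print(row.invoice_id, row.billed_amount, row.exported_amount)
```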

Success Indicator: You are the "Go-To" person for at least one major data domain. You have delivered a measurable improvement in system performance, cost efficiency, or data quality.

Days 61–90: Strategy, Architecture & Scale

Objectives:

  • Architectural Leadership: Contribute to design decisions that affect the next 6–12 months of the roadmap.

  • Standardization: Elevate the team's engineering bar.

  • Cross-Team Influence: Act as a bridge between Engineering and the Business, proactively suggesting data solutions.

Key Activities:

  • Future-Proofing the Stack: Lead the design for a major new capability.

    • Example: Design the architecture for a new real-time streaming ingestion pipeline.

    • Example: Create the "data readiness" plan for a future AI/Machine Learning initiative.

  • Elevate Best Practices: Introduce a new tool or process that improves the developer experience (DX).

    • Example: Implement Infrastructure-as-Code (Terraform) for a manually managed resource.

    • Example: Create a "Prompt Library" for the team to help junior engineers use AI tools more effectively.

  • Define Reliability Standards: Formalize trust with the business.

    • Example: Define and publish SLAs (Service Level Agreements) for key tables (e.g., "Executive Dashboards are guaranteed fresh by 8:00 AM EST").
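
As one way to back such an SLA with code, a freshness check can compare a table's last-modified time against an agreed threshold. This is only a sketch; the project, table name, and six-hour threshold are hypothetical.

```python
# Minimal sketch: fail loudly when a key reporting table misses its freshness
# SLA. The project, table, and six-hour threshold are hypothetical.
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

FRESHNESS_SLA = timedelta(hours=6)

client = bigquery.Client(project="my-project")
table = client.get_table("my-project.reporting.executive_dashboard")

age = datetime.now(timezone.utc) - table.modified  # table.modified is UTC-aware
if age > FRESHNESS_SLA:
    raise RuntimeError(
        f"SLA breach: executive_dashboard last updated {age} ago "
        f"(threshold {FRESHNESS_SLA})."
    )
print(f"OK: executive_dashboard refreshed {age} ago.")
```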

Success Indicator: You are effectively operating as a Senior Engineer—identifying problems before they become fires, mentoring others (even informally), and driving technical decisions that align with business goals.

About you

  • Experience: 7+ years of experience in data engineering or a related backend development role.

  • Core Technical Skills: Expert-level skills in Python and SQL for complex data processing and pipeline development.

  • GCP Expertise (Required): Deep, hands-on experience with the Google Cloud Platform (GCP) data stack. Familiarity with AWS or Azure is acceptable only if you have a proven ability to adapt quickly and deeply to GCP.

  • Modern Data Stack Fluency: Proven experience with our core technologies or their direct equivalents: BigQuery, Dataflow, and Airflow (or Cloud Composer). Experience with Databricks is also valuable.

  • The AI-Assisted Engineer: You are comfortable and excited to use AI-assisted development tools (e.g., GitHub Copilot) to improve your own productivity, code quality, and workflow. You see these tools as a multiplier.

  • Infrastructure & DevOps Skills: Hands-on experience with containerization (Docker) and a solid understanding of CI/CD pipelines for data workflows.

  • Security & Cost Mindset: A strong understanding of cloud security best practices (IAM, least-privilege) and a proven ability to analyze and optimize for cost, not just performance.

  • Proven Builder: A track record of building, testing, and deploying scalable, secure data pipelines in a high-volume production environment.

  • Systems Integrator: Demonstrated success integrating complex, disparate business systems (e.g., CRMs like Salesforce, ERPs, billing platforms) into a unified data model.

  • Stakeholder Management: Excellent communication skills. You can just as easily brainstorm with an engineer as you can present a solution to an executive in Finance.

  • Ownership Mindset: A strong sense of accountability. You proactively hunt for problems and solutions, not just wait for tickets.

  • Adaptability: Comfortable and effective in a fast-paced, iterative environment with evolving requirements.

  • Nice to Have: Infrastructure-as-Code (IaC) experience, especially with Terraform, is a major plus.

  • Nice to Have: Experience with Datadog or other modern observability stacks.

  • Nice to Have: Experience building data pipelines that feed ML models (e.g., on Vertex AI) or using BigQuery ML.


We get it. Requirements can sometimes hold people back from applying to a job, but don’t let that be the case here. If you believe you have the skills it takes to elevate this role, team, and company, we encourage you to apply.

Remote Work Statement
This is a remote-first, work-from-home position with required bi-annual in-person company meetings in January and July (business travel covered by Second Nature). You must be available during scheduled working hours, with a distraction-free work environment and reliable high-speed internet.


Our Core Values

  • Pirate ship, not a cruise ship. Bias towards action.

  • Massive growth takes massive growth. We embrace challenges to increase our impact.

  • Grow the pie. We focus on results so our customers & their customers win. Triple Win!

  • Purple heart. We put the team before ourselves.

  • Extreme ownership. See something? Say something; right the ship to get us back on course.

  • Be a moment maker. We aim to shatter the status quo.

AI Innovation

We're thrilled about the transformative potential of AI innovation and its ability to drive progress at Second Nature. As we continue to explore and integrate AI into our workflows, we’re eager to learn how you’ve embraced and implemented AI in your professional journey. In the interview process, we look forward to hearing about your experiences and exploring how we can collectively leverage AI technology to accelerate our growth.


Why Second Nature?
🩺 Health First: Medical, Dental, Vision, & Life Insurance; 401K Plan
📍 Location: Work Remotely from anywhere in the US
📆 Flexibility: Open PTO and sick days
🤩 Culture: Diverse, inclusive, supportive, and growth-focused
🚀 Growth: Be part of a fast-growing team building a new category and making a real difference
💻 The Product: Award-winning, customer-loved platform that truly delivers on our mission

Second Nature is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind. We take action to ensure equal employment opportunities for all candidates and employees and to provide employees with a workplace free of discrimination and harassment. Our hiring decisions are based on business needs, job requirements and individual qualifications, without regard to race, color, religion or belief, family or parental status, or any other status protected by federal and/or state law.

Average salary estimate

$175,000 / year (est.)
Minimum: $150,000
Maximum: $200,000



Second Nature is a manufacturer and retailer of household air filters. The company is headquartered in Raleigh, North Carolina, and was established in 2012.

Employment Type: Full-time, remote

Date Posted: December 3, 2025