Bad software is everywhere, and we’re tired of it. Sentry is on a mission to help developers write better software faster so we can get back to enjoying technology.
With more than $217 million in funding and 100,000+ organizations that believe we’re on to something, we're building performance and error monitoring tools that help companies like Disney, Microsoft, and Atlassian spend less time fixing bugs and more time building products.
Sentry embraces a hybrid work model, with Mondays, Tuesdays, and Thursdays set as in-office anchor days to encourage meaningful collaboration. If you like to selfishly build things that make your digital life better, come help us build the next generation of software monitoring tools.
Data Engineering at Sentry builds and scales the infrastructure that powers our analytics, product insights, and operational decision-making. We design data pipelines, manage large-scale processing systems, and ensure that our teams can access reliable, high-quality data. We work closely with Business Intelligence, Product, and Engineering to provide the foundation for data-driven decisions.
We’re looking for a Data Engineering Intern who enjoys solving complex technical problems, thrives on building systems at scale, and is eager to learn how to deliver data infrastructure that grows with the company. You’ll gain hands-on experience with modern data engineering tools and contribute to projects that have real impact across Sentry. In this role, you will:
Work with GCP services (BigQuery, Pub/Sub, Cloud Storage, etc.) to support scalable and reliable data systems
Develop and optimize DAGs in Airflow to schedule and automate workflows
Write efficient Python and SQL code to process, transform, and analyze large datasets
Partner with Data Engineering and Business Intelligence teams to ensure data quality, consistency, and availability across the company
Support initiatives to improve the scalability, monitoring, and reliability of our data infrastructure
You get excited about building systems that move and process large volumes of data efficiently
You are curious about how raw data becomes insights and want to contribute to the foundation that makes analytics possible
You are a self-starter who enjoys ownership, problem-solving, and learning new technologies
You are energized by working in a dynamic environment where priorities evolve as the company grows
Currently pursuing a Bachelor’s degree in computer science, data engineering, or a related technical discipline, graduating in 2027 or later, with a minimum 3.0 GPA or equivalent
Exposure to Python and SQL for data processing and pipeline development
Familiarity with data engineering concepts such as batch and streaming data processing
Exposure to tools such as Kafka or Airflow, or to GCP services such as Pub/Sub and BigQuery
Understanding of software engineering best practices (version control, testing, CI/CD) is a plus
Ability to communicate clearly and work collaboratively with technical and non-technical teams
The hourly wage that Sentry reasonably expects to pay for this position is $53.13/hr. A successful candidate’s actual base salary (or hourly wage) amount will be determined by a variety of relevant factors including, without limitation, the candidate’s work location, education, work and other relevant experience, skills, and job-related knowledge. A successful candidate will be eligible to participate in Sentry’s employee benefit plans/programs applicable to the candidate’s position (including incentive compensation, equity grants, paid time off, and group health insurance coverage). See Sentry Benefits for more details about the Company’s benefit plans/programs.
Sentry is committed to providing equal employment opportunities to its employees and candidates for employment regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, or other legally-protected characteristic. This commitment includes the provision of reasonable accommodations to employees and candidates for employment with physical or mental disabilities who require such accommodations in order to (a) perform the essential functions of their jobs, or (b) seek employment with Sentry. We strive to build a diverse team, with an inclusive culture where every teammate can thrive. Sentry is an open-source company because we believe that everyone, everywhere, should have the ability and tools to make great software. Software should be accessible. That starts with making our industry accessible.
If you need assistance or an accommodation due to a disability, you may contact us at [email protected].
Want to learn more about how Sentry handles applicant data? Get the details in our Applicant Privacy Policy.