Truelogic is a leading provider of nearshore staff augmentation services headquartered in New York. For over two decades, we have delivered top-tier technology solutions to companies of all sizes, from innovative startups to industry leaders, helping them achieve their digital transformation goals.
Our team of 600+ highly skilled tech professionals, based in Latin America, drives digital disruption by partnering with U.S. companies on their most impactful projects. Whether collaborating with Fortune 500 giants or scaling startups, we deliver results that make a difference.
By applying for this position, you’re taking the first step in joining a dynamic team that values your expertise and aspirations. We aim to align your skills with opportunities that foster exceptional career growth and success while contributing to transformative projects that shape the future.
Our client is a global technology company focused on empowering today’s modern marketers through innovative SaaS solutions in the areas of research, communications, and media. With a strong product mindset and a growing suite of marketing technologies, the company is committed to delivering tools that drive measurable impact, streamline decision-making, and support digital transformation across the marketing ecosystem.
We’re looking for a passionate and experienced Data Engineer to join our team and help us build scalable and efficient ETL pipelines in the Google Cloud Platform (GCP) ecosystem. The ideal candidate will bring strong experience in PySpark, Python, and SQL, with a solid understanding of data pipeline orchestration and performance optimization. You will play a key role in transforming raw data into actionable insights that drive business value.
Design, build, and maintain scalable ETL pipelines using PySpark and other GCP-native tools.
Develop and maintain data workloads on the Databricks platform.
Work with GCP infrastructure and services (e.g., BigQuery, Dataflow, Cloud Composer, Cloud Functions, and Pub/Sub).
Develop and optimize data ingestion, transformation, and loading processes.
Collaborate with data scientists, product managers, and other engineers to understand data needs and deliver high-quality solutions.
Ensure data quality, reliability, and performance across pipelines.
Monitor, troubleshoot, and enhance data workflows and systems.
Apply best practices for orchestration, version control, and CI/CD in data workflows.
Participate in design discussions, peer reviews, and code quality checks.
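To make the day-to-day work concrete, here is a minimal, illustrative sketch of the extract-transform-load pattern the role centers on. It is plain Python for readability; a production pipeline would use PySpark DataFrames and GCP connectors (reading from Pub/Sub or BigQuery, writing to BigQuery), and all names below are hypothetical.

```python
# Illustrative ETL sketch (plain Python). In production this logic
# would live in PySpark transformations orchestrated by Cloud Composer;
# record shapes and function names here are hypothetical.

def extract(rows):
    """Simulate ingestion: drop unreadable records (in practice, read from Pub/Sub or BigQuery)."""
    return [r for r in rows if r is not None]

def transform(rows):
    """Normalize and filter: keep valid events, standardize field types."""
    out = []
    for r in rows:
        if r.get("event") and r.get("value") is not None:
            out.append({"event": r["event"].lower(), "value": float(r["value"])})
    return out

def load(rows, sink):
    """Append transformed rows to a sink (in practice, write to a BigQuery table)."""
    sink.extend(rows)
    return len(rows)

raw = [{"event": "Click", "value": "3"}, None, {"event": None, "value": 1}]
sink = []
loaded = load(transform(extract(raw)), sink)
print(loaded, sink)  # 1 [{'event': 'click', 'value': 3.0}]
```

The same extract/transform/load separation is what makes pipelines testable and tunable independently at each stage, which is the performance and data-quality work described above.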
5-7 years of experience in data engineering, preferably in a cloud-native environment.
Strong programming skills: Python & PySpark.
Advanced SQL & experience working with large datasets.
Proficiency with Databricks.
Hands-on experience with GCP services, specifically: BigQuery, Dataflow, Cloud Composer (Airflow), Pub/Sub.
Experience orchestrating data pipelines & workflow automation.
Proven track record of improving pipeline performance & maintaining data integrity.
Experience with performance tuning of data workflows.
Good communication, analytical thinking, and problem-solving abilities.
Ability to work independently and collaboratively in a cross-functional team.
Bachelor’s or Master’s degree in Computer Science, Engineering, IT, or equivalent experience.
100% Remote Work: Enjoy the freedom to work from the location that helps you thrive. All it takes is a laptop and a reliable internet connection.
Highly Competitive USD Pay: Earn excellent, market-leading compensation in USD that goes beyond typical market offerings.
Paid Time Off: We value your well-being. Our paid time off policies ensure you have the chance to unwind and recharge when needed.
Work with Autonomy: Enjoy the freedom to manage your time as long as the work gets done. Focus on results, not the clock.
Work with Top American Companies: Grow your expertise working on innovative, high-impact projects with Industry-Leading U.S. Companies.
A Culture That Values You: We prioritize well-being and work-life balance, offering engagement activities and fostering dynamic teams to ensure you thrive both personally and professionally.
Diverse, Global Network: Connect with over 600 professionals in 25+ countries, expand your network, and collaborate with a multicultural team from Latin America.
Team Up with Skilled Professionals: Join forces with senior talent. All of our team members are seasoned experts, ensuring you're working with the best in your field.