Data Engineer (NLP-Focused)

We are looking for a Data Engineer (NLP-Focused) to build and optimize the data pipelines that fuel our Ukrainian LLM and Kyivstar’s NLP initiatives. In this role, you will design robust ETL/ELT processes to collect, process, and manage large-scale text and metadata, enabling our data scientists and ML engineers to develop cutting-edge language models. You will work at the intersection of data engineering and machine learning, ensuring that our datasets and infrastructure are reliable, scalable, and tailored to the needs of training and evaluating NLP models in a Ukrainian language context. This is a unique opportunity to shape the data foundation of a pioneering AI project in Ukraine, working alongside NLP experts and leveraging modern big data technologies.


About us 


Kyivstar.Tech is a Ukrainian hybrid IT company and a resident of Diia.City. We are a subsidiary of Kyivstar, one of Ukraine's largest telecom operators. 

 

Our mission is to change lives in Ukraine and around the world by creating technological solutions and products that unleash the potential of businesses and meet users' needs. 

 

More than 600 KS.Tech specialists work daily in various areas: mobile and web solutions, as well as design, development, support, and technical maintenance of high-performance systems and services. 

 

We believe in innovations that bring genuine, qualitative change, and we constantly challenge conventional approaches and solutions. Each of us embraces an entrepreneurial culture that allows us never to stand still: to keep evolving and creating something new. 


Responsibilities:


• Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.

• Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.

• Implement NLP/LLM-specific data processing: clean and normalize text (e.g., filter toxic content, de-duplicate, de-noise), and detect and remove personal data.

• Build targeted SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as teacher.

• Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.

• Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.

• Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.

• Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.

• Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.

• Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
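
The cleaning, de-duplication, and PII-removal work described above can be sketched in a few lines of Python. This is a minimal illustration, not the team's actual pipeline: the regex patterns are simplistic stand-ins (production PII detection would use NER models and locale-specific formats), and exact-hash de-duplication would be complemented by near-duplicate methods such as MinHash.

```python
import hashlib
import re
import unicodedata

# Illustrative patterns only; real PII detection is far more thorough.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{8,}\d")

def clean_text(text: str) -> str:
    """Normalize Unicode, mask obvious PII, and collapse whitespace."""
    text = unicodedata.normalize("NFC", text)
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return re.sub(r"\s+", " ", text).strip()

def dedup(docs):
    """Exact de-duplication by content hash."""
    seen, out = set(), []
    for doc in docs:
        key = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            out.append(doc)
    return out

corpus = [
    "Контакт:   user@example.com,  тел. +380 44 123 4567",
    "Контакт: user@example.com, тел. +380 44 123 4567",
]
cleaned = dedup([clean_text(d) for d in corpus])
```

After cleaning, the two records collapse to one identical document, so the de-duplication step drops the second copy.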


Required Qualifications:


Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor’s or Master’s degree in Computer Science, Engineering, or related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.


NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and datasets, or experience with multilingual data processing, is an advantage given our project’s focus. Understanding of FineWeb2 or similar processing-pipeline approaches.


Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
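
For reference, the ETL pattern this qualification describes can be sketched compactly. The example below is a toy, self-contained version using in-memory SQLite as a stand-in warehouse; the record fields and table name are illustrative, and a real pipeline would be orchestrated (e.g., in Airflow) with retries, incremental loads, and schema management.

```python
import json
import sqlite3

# Toy source records standing in for a raw extract (web crawl, API dump, etc.).
RAW = [
    '{"id": 1, "text": "  Перший документ  ", "source": "crawl"}',
    '{"id": 2, "text": "Другий документ", "source": "api"}',
]

def extract(lines):
    # Parse each raw JSON line into a dict.
    return [json.loads(line) for line in lines]

def transform(records):
    # Trim text and keep only the fields the warehouse schema expects.
    return [(r["id"], r["text"].strip(), r["source"]) for r in records]

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS documents "
        "(id INTEGER PRIMARY KEY, text TEXT, source TEXT)"
    )
    conn.executemany("INSERT OR REPLACE INTO documents VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
count = conn.execute("SELECT COUNT(*) FROM documents").fetchone()[0]
```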


Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fasttext, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.


Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL) including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
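
The embedding-similarity search mentioned above boils down to nearest-neighbor lookup over vectors. A brute-force cosine-similarity sketch in plain Python is shown below; the 3-d "embeddings" and document IDs are invented for illustration, and a real system would use model-generated embeddings with approximate search in a vector store such as FAISS.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Return the k most similar (doc_id, score) pairs, highest first."""
    scored = [(doc_id, cosine(query, vec)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy 3-d embeddings; real ones come from a sentence-embedding model.
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
results = top_k([1.0, 0.05, 0.0], index, k=2)
```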


Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.


Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.


Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.


Preferred Qualifications:


Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.


Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
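
Extracting clean text from raw HTML, as this qualification describes, can be done even with the standard library. The sketch below uses `html.parser` and skips `<script>`/`<style>` content; it is a minimal illustration with an invented snippet, whereas production scraping would use the tools named above plus boilerplate removal and encoding handling.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script>/<style> content."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

html_doc = """<html><head><style>p {color: red}</style></head>
<body><h1>Заголовок</h1><p>Перший абзац.</p>
<script>var x = 1;</script></body></html>"""

parser = TextExtractor()
parser.feed(html_doc)
text = " ".join(parser.chunks)
```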


CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.


Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.


Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimizing existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.


What we offer:


• Office or remote – it’s up to you. You can work from anywhere, and we will arrange your workplace.

• Remote onboarding.

• Performance bonuses for everyone (annual or quarterly — depends on the role).  

• We train employees, with opportunities to learn through the company’s library, internal resources, and partner programs.

• Health and life insurance.  

• Wellbeing program and corporate psychologist.  

• Reimbursement of expenses for Kyivstar mobile communication.  


Average salary estimate

$60,000 / YEARLY (est.)
min: $30,000
max: $90,000


EMPLOYMENT TYPE
Full-time, hybrid
DATE POSTED
August 15, 2025