Sciforium is an AI infrastructure company developing next-generation multimodal AI models and a proprietary, high-efficiency serving platform. Backed by multi-million-dollar funding and direct sponsorship from AMD, with hands-on support from AMD engineers, the team is scaling rapidly to build the full stack powering frontier AI models and real-time applications.
We offer a fast-moving, collaborative environment where engineers have meaningful impact, learn quickly, and tackle deep technical challenges across the AI systems stack.
Role Overview
Sciforium is seeking a highly skilled Distributed Training Engineer to build, optimize, and maintain the critical software stack that powers our large-scale AI training workloads. In this role, you will work across the entire machine learning infrastructure, from low-level CUDA/ROCm runtimes to high-level frameworks like JAX and PyTorch, to ensure our distributed training systems are fast, scalable, stable, and efficient.
This position is ideal for someone who loves deep systems engineering, debugging complex hardware–software interactions, and optimizing performance at every layer of the ML stack. You will play a pivotal role in enabling the training and deployment of next-generation LLMs and generative AI models.
Key Responsibilities
Software Stack Maintenance: Maintain, update, and optimize critical ML libraries and frameworks including JAX, PyTorch, CUDA, and ROCm across multiple environments and hardware configurations.
End-to-End Stack Ownership: Build, maintain, and continuously improve the entire ML software stack from ROCm/CUDA drivers to high-level JAX/PyTorch tooling.
Distributed Training Optimization: Ensure all model implementations are efficiently sharded, partitioned, and configured for large-scale distributed training.
System Integration: Continuously integrate and validate modules for runtime correctness, memory efficiency, and scalability across multi-node GPU/accelerator clusters.
Profiling & Performance Analysis: Conduct detailed profiling of compilation graphs, training workloads, and runtime execution to optimize performance and eliminate bottlenecks.
Debugging & Reliability: Troubleshoot complex hardware–software interaction issues, including vLLM compilation failures on ROCm, CUDA memory leaks, distributed runtime failures, and kernel-level inconsistencies.
Cross-Team Collaboration: Work with research, infrastructure, and kernel engineering teams to improve system throughput, stability, and developer experience.
Must-Haves
5+ years of industry experience in ML systems, distributed training, or related fields.
Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Electrical Engineering, or related technical fields.
Strong programming experience in Python, C++, and familiarity with ML tooling and distributed systems.
Deep understanding of profiling tools (e.g., Nsight, ROCm Profiler, XLA profiler, TPU tools).
Deep expertise in configuring sharding and partitioning in modern ML frameworks such as PyTorch and JAX.
Experience with multi-node distributed training systems and parallelism/sharding frameworks (DTensor, GSPMD, etc.).
Hands-on experience maintaining or building ML training stacks involving CUDA, ROCm, NCCL, XLA, or similar technologies.
Nice-to-Haves
Extensive experience with the XLA/JAX stack, including compilation internals and custom lowering paths.
Familiarity with distributed serving or large-scale inference frameworks (e.g., vLLM, TensorRT, FasterTransformer).
Background in GPU kernel optimization or accelerator-aware model partitioning.
Strong understanding of low-level C++ building blocks used in ML frameworks (e.g., XLA, CUDA kernels, custom ops).
Why Join Sciforium
Opportunity to build frontier-scale AI infrastructure powering next-generation LLMs and multimodal models.
Work with top-tier engineers and researchers across systems, GPUs, and ML frameworks.
Tackle high-impact performance and scalability challenges in training and inference.
Access state-of-the-art GPU clusters, datasets, and tooling.
Opportunity to publish, patent, and push the boundaries of modern AI.
Join a culture of innovation, ownership, and fast execution in a rapidly scaling AI organization.
Benefits
Medical, dental, and vision insurance
401k plan
Daily lunch, snacks, and beverages
Flexible time off
Competitive salary and equity
Sciforium is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.