ML Solutions Engineer (ROCm Portability)
At TensorWave, we’re leading the charge in AI compute, building a versatile cloud platform that’s driving the next generation of AI innovation. We’re focused on creating a foundation that empowers cutting-edge advancements in intelligent computing, pushing the boundaries of what’s possible in the AI landscape.
About the Role:
We are seeking an exceptional ML Solutions Engineer who specializes in GPU portability and performance optimization. This is a senior-level role for someone who has significant experience with CUDA, ROCm, and kernel development, and is passionate about enabling workloads to run efficiently on AMD hardware.
As a technical expert, you will help migrate and optimize CUDA-based workloads to ROCm, working with both internal teams and third-party developers. You will play a critical role in advancing our ROCm enablement strategy and driving adoption across the ecosystem.
Key Responsibilities:
Partner with customers, internal engineering, and third-party developers to migrate CUDA workloads to ROCm.
Profile, debug, and optimize GPU kernels for performance, scalability, and efficiency.
Contribute to ROCm enablement across open source ML frameworks and libraries.
Leverage tools such as Composable Kernel, HIP, PyTorch/XLA, and RCCL to enable and tune distributed training workloads.
Provide technical guidance on best practices for GPU portability, including kernel-level optimizations, mixed precision, and memory hierarchy usage.
Act as a technical liaison, translating customer requirements into actionable engineering work.
Create internal documentation, playbooks, and training material to scale knowledge across teams.
Represent TensorWave in the broader ROCm ecosystem through contributions, collaboration, and customer advocacy.
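Context for candidates: the CUDA-to-ROCm migrations described above typically begin with HIP's source-level portability, since most cuda* runtime calls map one-to-one to hip* equivalents; this is what ROCm's hipify tools automate through text substitution. The following is a minimal, illustrative Python sketch of that renaming step (the mapping table is a tiny subset chosen for illustration, not the full hipify rule set):

```python
# Small illustrative subset of the CUDA -> HIP renames applied by
# ROCm's hipify tools; the real tools cover hundreds of APIs.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cudaFree": "hipFree",
}

def hipify(source: str) -> str:
    """Apply simple textual CUDA->HIP substitutions to a source string."""
    # Longest names first, so cudaMemcpyHostToDevice is not partially
    # rewritten by the shorter cudaMemcpy rule.
    for cuda_name in sorted(CUDA_TO_HIP, key=len, reverse=True):
        source = source.replace(cuda_name, CUDA_TO_HIP[cuda_name])
    return source

cuda_snippet = """\
#include <cuda_runtime.h>
float *d_x;
cudaMalloc(&d_x, n * sizeof(float));
cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
cudaDeviceSynchronize();
cudaFree(d_x);
"""

print(hipify(cuda_snippet))
```

In practice the mechanical rename is only the first step; the performance work the role centers on (tuning kernels for AMD's wavefront size, memory hierarchy, and RCCL-based collectives) happens after the code compiles under HIP.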
Qualifications:
Must-Have:
5+ years of experience in GPU programming, ML infrastructure, or HPC roles.
Strong hands-on experience with CUDA, HIP, and ROCm.
Proficiency in kernel development (e.g., CUDA, HIP, Composable Kernel, Triton).
Deep knowledge of GPU performance profiling tools (Nsight, rocprof, perf, etc.).
Understanding of distributed ML workloads (e.g., PyTorch Distributed, MPI, RCCL).
Proven ability to work in customer-facing technical roles, including solution design and workload migration.
Strong programming skills in Python, C++, and GPU kernel languages.
Nice-to-Have:
Contributions to ROCm-enabled open source ML frameworks (PyTorch, Megatron, vLLM, SGLang, etc.).
Familiarity with compiler technology (LLVM, MLIR, XLA).
Experience with containerized environments and Kubernetes for GPU workloads.
Knowledge of performance modeling for multi-GPU and multi-node workloads.
Familiarity with AI/ML workload benchmarking and tuning at scale.
Foundation in networking, especially as it pertains to RDMA, RoCE, and InfiniBand.
What Success Looks Like:
Customers successfully migrate their CUDA workloads to ROCm and optimize them, with measurable performance gains.
Strong collaboration between internal engineering and external developers leads to faster enablement of ROCm workloads.
Best practices, playbooks, and tooling are well-documented and continuously improved.
Make GPUs go Brrrrrrr
What We Bring:
Stock Options
100% paid Medical, Dental, and Vision insurance
Life and Voluntary Supplemental Insurance
Short Term Disability Insurance
Flexible Spending Account
401(k)
Flexible PTO
Paid Holidays
Parental Leave
Mental Health Benefits through Spring Health