About the Team
OpenAI’s Inference team ensures that our most advanced models run efficiently, reliably, and at scale. We build and optimize the systems that power our production APIs, internal research tools, and experimental model deployments. As model architectures and hardware evolve, we’re expanding support for a broader set of compute platforms - including AMD GPUs - to increase performance, flexibility, and resiliency across our infrastructure.
We are forming a team to generalize our inference stack - including kernels, communication libraries, and serving infrastructure - to alternative hardware architectures.
About the Role
We’re hiring engineers to scale and optimize OpenAI’s inference infrastructure across emerging GPU platforms. You’ll work across the stack - from low-level kernel performance to high-level distributed execution - and collaborate closely with research, infra, and performance teams to ensure our largest models run smoothly on new hardware.
This is a high-impact opportunity to shape OpenAI’s multi-platform inference capabilities from the ground up, with a particular focus on advancing inference performance on AMD accelerators.
In this role, you will:
Own bring-up, correctness, and performance of the OpenAI inference stack on AMD hardware.
Integrate internal model-serving infrastructure (e.g., vLLM, Triton) into a variety of GPU-backed systems.
Debug and optimize distributed inference workloads across memory, network, and compute layers.
Validate correctness, performance, and scalability of model execution on large GPU clusters.
Collaborate with partner teams to design and optimize high-performance GPU kernels for accelerators using HIP, Triton, or other performance-focused frameworks (see the illustrative sketch after this list).
Collaborate with partner teams to build, integrate, and tune collective communication libraries (e.g., RCCL) used to parallelize model execution across many GPUs.
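For illustration only, here is a minimal sketch of the kind of portable kernel work described above: a trivial element-wise kernel written in Triton, which compiles for both NVIDIA (CUDA) and AMD (ROCm) GPUs from the same source. The function names, block size, and launch configuration are hypothetical and are not part of OpenAI's stack.

```python
# Illustrative sketch only: a minimal element-wise Triton kernel.
# Triton targets both CUDA and ROCm backends from the same source.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the tensors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Hypothetical host-side wrapper: launch one program per 1024 elements.
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```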
You can thrive in this role if you:
Have experience writing or porting GPU kernels using HIP, CUDA, or Triton, and care deeply about low-level performance.
Are familiar with communication libraries like NCCL/RCCL and understand their role in high-throughput model serving (see the sketch after this list).
Have worked on distributed inference systems and are comfortable scaling models across fleets of accelerators.
Enjoy solving end-to-end performance challenges across hardware, system libraries, and orchestration layers.
Are excited to be part of a small, fast-moving team building new infrastructure from first principles.
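As a purely illustrative sketch of the communication-library and distributed-inference points above: in PyTorch, a tensor-parallel shard typically combines its partial result with an all-reduce, and the same "nccl" backend name maps to RCCL when running under ROCm on AMD GPUs. The function and variable names here are hypothetical, and process-group setup is assumed to happen elsewhere.

```python
# Illustrative sketch only, not OpenAI's serving code.
import torch
import torch.distributed as dist

def tensor_parallel_linear(x: torch.Tensor, w_shard: torch.Tensor) -> torch.Tensor:
    # Each rank holds one shard of the weight matrix; the partial products
    # are summed across ranks with an all-reduce over NCCL (RCCL on ROCm).
    partial = x @ w_shard
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)
    return partial

# Assumes dist.init_process_group("nccl") has already been called and each
# rank has selected its own GPU device (setup not shown here).
```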
Nice to Have:
Contributions to open-source libraries like RCCL, Triton, or vLLM.
Experience with GPU performance tools (Nsight, rocprof, perf) and memory/comms profiling.
Prior experience deploying inference on other non-NVIDIA GPU environments.
Knowledge of model/tensor parallelism, mixed precision, and serving 10B+ parameter models.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.