📍 San Francisco | Work Directly with CEO & founding team | Report to CEO | OpenAI for Physics | 🏢 5 Days Onsite
Location: Onsite in San Francisco
Compensation: Competitive Salary + Equity
UniversalAGI is building OpenAI for Physics. We are an AI startup based in San Francisco, backed by Elad Gil (#1 Solo VC), Eric Schmidt (former Google CEO), Prith Banerjee (ANSYS CTO), Ion Stoica (Databricks Founder), Jared Kushner (former Senior Advisor to the President), David Patterson (Turing Award Winner), and Luis Videgaray (former Foreign and Finance Minister of Mexico). We're building foundation AI models for physics that enable end-to-end industrial automation, from initial design through optimization, validation, and production.
We're building a high-velocity team of relentless researchers and engineers who will define the next generation of AI for industrial engineering. If you're passionate about AI, physics, or the future of industrial innovation, we want to hear from you.
As a founding ML Infrastructure Engineer, you'll be in the arena from day one, building the backbone that powers AI for physics at scale. This is your chance to build and own the entire ML infrastructure stack, from fine-tuning and training pipelines to low-latency customer deployments that serve foundation models in production.
You'll work directly with the CEO and founding team to build infrastructure that can train on petabytes of simulation data, serve physics models with strict accuracy requirements, and deploy seamlessly into customer environments with enterprise security and compliance needs. You'll define new paradigms for how AI models integrate into industrial engineering workflows.
Build and scale fine-tuning and training infrastructure for foundation models, including distributed training across multi-GPU and multi-node clusters, optimizing for throughput, cost, and iteration speed
Design and implement model serving systems with low latency, high reliability, and the ability to handle complex physics workloads in production
Build fine-tuning pipelines that let customers adapt our foundation models to their specific use cases, data, and workflows without compromising model quality or security
Build deployment and serving infrastructure for on-premises and cloud environments, working through customer security requirements and compliance constraints
Create robust data pipelines that can ingest, validate, and preprocess massive CFD datasets from diverse sources and formats
Instrument everything: Build observability, monitoring, and debugging tools that give our team and customers full visibility into model performance, data quality, and system health
Work directly with customers on deployment, integration, and scaling challenges, turning their infrastructure pain points into product improvements
Move fast and ship: Take infrastructure from prototype to production in weeks, iterating based on real customer needs and research team feedback
This is a role for someone who has built ML systems that actually work in production, who understands both the research side and the operational reality, and who is ready to solve some of humanity's hardest infrastructure problems.
3+ years of hands-on experience building and scaling ML infrastructure for fine-tuning, training, serving, or deployment
Deep experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code (Terraform, Kubernetes, Docker)
Deep expertise in distributed training frameworks (PyTorch Distributed, DeepSpeed, Ray, etc.) and multi-GPU/multi-node orchestration
Strong foundation in ML serving: Experience building low-latency inference systems, model optimization, and production deployment
Expert-level coding skills in Python and infrastructure tools, comfortable diving deep into ML frameworks and optimizing performance
Understanding of ML workflows: Training pipelines, experiment tracking, model versioning, and the full lifecycle from research to production
Strong communicator capable of bridging customers, engineers, and researchers, translating infrastructure constraints into product decisions
Outstanding execution velocity: Ships fast, debugs quickly, and thrives in ambiguity
Exceptional problem-solving ability: Willing to dive deep into unfamiliar systems and figure out what's actually broken
Comfortable in high-intensity startup environments with evolving priorities and tight deadlines
Computer-Aided Engineering (CAE) software experience
Experience deploying ML in enterprise environments with strict security, compliance, and air-gapped requirements
Built fine-tuning infrastructure for foundation models
Experience with model optimization techniques
Deep understanding of GPU programming and performance optimization (CUDA, Triton, etc.)
Experience with large-scale data engineering for ML, ETL pipelines, and data validation systems
Built MLOps platforms or developer tools for ML teams
Experience at high-growth AI startups (Seed to Series C) or leading AI labs
Forward-deployed experience working directly with customers on complex integrations
Open-source contributions to ML infrastructure or training frameworks
Technical Respect: Ability to earn respect through hands-on technical contribution
Intensity: Thrives in our unusually intense culture - willing to grind when needed
Customer Obsession: Passionate about solving real customer problems, not just cool tech
Deep Work: Values long, uninterrupted periods of focused work over meetings
High Availability: Ready to be deeply involved whenever critical issues arise
Communication: Can translate complex technical concepts to customers and team
Growth Mindset: Embraces the compounding returns of intelligence and continuous learning
Startup Mindset: Comfortable with ambiguity, rapid change, and wearing multiple hats
Work Ethic: Willing to put in the extra hours when needed to hit critical milestones
Team Player: Collaborative approach with low ego and high accountability
Opportunity to shape the technical foundation of a rapidly growing foundational AI company.
Work on cutting-edge industrial AI problems with immediate real-world impact.
Direct collaboration with the founder & CEO and the ability to influence company strategy.
Competitive compensation with significant equity upside.
In-person-first culture: 5 days a week in the office with a team that values face-to-face collaboration.
Access to world-class investors and advisors in the AI space.
We provide great benefits, including:
Competitive compensation and equity.
Competitive health, dental, vision benefits paid by the company.
401(k) plan offering.
Flexible vacation.
Team building and fun activities.
Great scope, ownership and impact.
AI tools stipend.
Monthly commute stipend.
Monthly wellness / fitness stipend.
Daily office lunch & dinner covered by the company.
Immigration support.
“The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again... who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly.” - Teddy Roosevelt
At our core, we believe in being “in the arena.” We are builders, problem solvers, and risk-takers who show up every day ready to put in the work: to sweat, to struggle, and to push past our limits. We know that real progress comes with missteps, iteration, and resilience. We embrace that journey fully, knowing that daring greatly is the only way to create something truly meaningful.
If you're ready to join the future of physics simulation, push creative boundaries, and deliver impact, UniversalAGI is the place for you.