ML Ops Engineer — Agentic AI Lab (Founding Team)
Location: San Francisco Bay Area
Type: Full-Time
Compensation: Competitive salary + meaningful equity (founding tier)
Backed by 8VC, we're building a world-class team to tackle one of the industry’s most critical infrastructure problems.
About the Role
Our AI Lab is pioneering the future of intelligent infrastructure through open-source LLMs, agent-native pipelines, retrieval-augmented generation (RAG), and knowledge-graph-grounded models.
We’re hiring an ML Ops Engineer to be the glue between ML research and production systems — responsible for automating the model training, deployment, versioning, and observability pipelines that power our agents and AI data fabric.
You’ll work across compute orchestration, GPU infrastructure, fine-tuned model lifecycle management, model governance, and security enforcement.
Responsibilities
Build and maintain secure, scalable, and automated pipelines for:
LLM fine-tuning: SFT, LoRA, RLHF, and DPO training
RAG embedding pipelines with dynamic updates
Model conversion, quantization, and inference rollout
Manage hybrid compute infrastructure (cloud, on-prem, GPU clusters) for training and inference workloads using Kubernetes, Ray, and Terraform
Containerize models and agents using Docker, with reproducible builds and CI/CD via GitHub Actions or ArgoCD
Implement and enforce model governance: versioning, metadata, lineage, reproducibility, and evaluation capture
Create and manage evaluation and benchmarking frameworks (e.g. OpenLLM-Evals, RAGAS, LangSmith)
Integrate with security and access control layers (OPA, ABAC, Keycloak) to enforce model policies per tenant
Instrument observability for model latency, token usage, performance metrics, error tracing, and drift detection
Support deployment of agentic apps with LangGraph, LangChain, and custom inference backends (e.g. vLLM, TGI, Triton)
Desired Experience
Model Infrastructure:
4+ years in MLOps, ML platform engineering, or infra-focused ML roles
Deep familiarity with model lifecycle management tools: MLflow, Weights & Biases, DVC, HuggingFace Hub
Experience with large model deployments (open-source LLMs preferred): LLaMA, Mistral, Falcon, Mixtral
Comfortable with tuning libraries (HuggingFace Trainer, DeepSpeed, FSDP, QLoRA)
Familiarity with inference serving: vLLM, TGI, Ray Serve, Triton Inference Server
Automation + Infra:
Proficient with Terraform, Helm, K8s, and container orchestration
Experience with CI/CD for ML (e.g. GitHub Actions + model checkpoints)
Managed hybrid workloads across GPU cloud providers (Lambda, Modal, HuggingFace Inference, SageMaker)
Familiar with cost optimization (spot instance scaling, batch prioritization, model sharding)
Agent + Data Pipeline Support:
Familiarity with LangChain, LangGraph, LlamaIndex or similar RAG/agent orchestration tools
Built embedding pipelines for multi-source documents (PDF, JSON, CSV, HTML)
Integrated with vector databases (Weaviate, Qdrant, FAISS, Chroma)
Security & Governance:
Implemented model-level RBAC, usage tracking, audit trails
Integrated with API rate limits, tenant billing, and SLA observability
Experience with policy-as-code systems (OPA, Rego) and access layers
Preferred Stack
LLM Ops: HuggingFace, DeepSpeed, MLflow, Weights & Biases, DVC
Infra: Kubernetes (GKE/EKS), Ray, Terraform, Helm, GitHub Actions, ArgoCD
Serving: vLLM, TGI, Triton, Ray Serve
Pipelines: Prefect, Airflow, Dagster
Monitoring: Prometheus, Grafana, OpenTelemetry, LangSmith
Security: OPA (Rego), Keycloak, Vault
Languages: Python (primary), Bash, optionally Rust or Go for tooling
Mindset & Culture Fit
Builder's mindset with startup autonomy: you automate what slows you down
Obsessive about reproducibility, observability, and traceability
Comfortable with a hybrid team of AI researchers, DevOps, and backend engineers
Interested in aligning ML systems to product delivery, not just papers
Bonus: experience with SOC2, HIPAA, or GovCloud-grade model operations
What We’re Looking For
Experience:
5+ years as a full stack or backend engineer
Experience owning and delivering production systems end-to-end
Prior experience with modern frontend frameworks (React, Next.js)
Familiarity with building APIs, databases, cloud infrastructure, or deployment workflows at scale
Comfortable working in early-stage startups or autonomous roles; prior experience as a founder, founding engineer, or early hire at a 0-to-1 pre-seed startup is a big plus
Mindset:
Comfortable with ambiguity, eager to prototype and iterate quickly
Strong sense of ownership — prefers to build systems rather than wait for tickets
Enjoys thinking about architecture, performance, and tradeoffs at every level
Clear communicator and pragmatic team player
Values equity and impact over prestige or hierarchy
Prior startup or founding team experience
Why This Role Matters
Your work will enable models and agents to be trained, evaluated, deployed, and governed at scale — across many tenants, models, and tasks. This is the backbone of a secure, reliable, and scalable AI-native enterprise system. If you dream about using AI to solve really hard real-world problems, we would love to hear from you.