Mirage is the leading AI short-form video company. We’re building full-stack foundation models and products that redefine video creation, production and editing. Over 20 million creators and businesses use Mirage’s products to reach their full creative and commercial potential.
We are a rapidly growing team of ambitious, experienced, and devoted engineers, researchers, designers, marketers, and operators based in NYC. As an early member of our team, you’ll have an opportunity to have an outsized impact on our products and our company's culture.
Our Investors
We’re very fortunate to have some of the best investors and entrepreneurs backing us, including Index Ventures, Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Lenny Rachitsky, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, and more.
** Please note that all of our roles require you to be in-person at our NYC HQ (located in Union Square).
We do not work with third-party recruiting agencies; please do not contact us. **
About the Role
As an expert in making AI models run fast—really fast—you live at the intersection of CUDA, PyTorch, and generative models, and get excited by the idea of squeezing every last bit of performance out of modern GPUs. You will have the opportunity to turn our cutting-edge video generation research into scalable, production-grade systems. From designing custom CUDA or Triton kernels to profiling distributed inference pipelines, you'll work across the full stack to make sure our models train and serve at peak performance.
Key Responsibilities
Optimize model training and inference pipelines, including data loading, preprocessing, checkpointing, and deployment, for throughput, latency, and memory efficiency on NVIDIA GPUs
Design, implement, and benchmark custom CUDA and Triton kernels for performance-critical operations
Integrate low-level optimizations into PyTorch-based codebases, including custom ops, low-precision formats, and TorchInductor passes
Profile and debug the entire stack—from kernel launches to multi-GPU I/O paths—using Nsight Systems, Nsight Compute, the PyTorch Profiler, and custom tooling
Work closely with colleagues to co-design model architectures and data pipelines that are hardware-friendly and maintain state-of-the-art quality
Stay on the cutting edge of GPU and compiler tech (e.g., Hopper features, CUDA Graphs, Triton, FlashAttention, and more) and evaluate their impact
Collaborate with infrastructure and backend experts to improve cluster orchestration, scaling strategies, and observability for large experiments
Provide clear, data-driven insights and trade-offs between performance, quality, and cost
Contribute to a culture of fast iteration, thoughtful profiling, and performance-centric design
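The "data-driven insights" above usually start with a back-of-the-envelope number like effective memory bandwidth: how many bytes a kernel actually moved per second versus what the hardware can sustain. A minimal sketch in plain Python (the element count and elapsed time below are illustrative, not measured):

```python
def effective_bandwidth_gbs(bytes_read: int, bytes_written: int, elapsed_s: float) -> float:
    """Effective bandwidth in GB/s for one kernel launch."""
    return (bytes_read + bytes_written) / elapsed_s / 1e9

# Hypothetical example: an fp16 elementwise add over 100M elements
# (reads a and b, writes out), timed at 0.3 ms.
n = 100_000_000
elem = 2  # bytes per fp16 value
bw = effective_bandwidth_gbs(bytes_read=2 * n * elem,
                             bytes_written=n * elem,
                             elapsed_s=0.0003)
print(round(bw, 1))  # → 2000.0 GB/s, i.e. HBM-class throughput
```

Comparing this figure against the GPU's peak HBM bandwidth tells you quickly whether a memory-bound op is worth a custom kernel or fusion pass.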
Required Qualifications
Bachelor's degree in Computer Science, Electrical/Computer Engineering, or equivalent practical experience
3+ years of hands-on experience writing and optimizing CUDA kernels for production ML workloads
Deep understanding of GPU architecture: memory hierarchies, warp scheduling, tensor cores, register pressure, and occupancy tuning
Strong Python skills and familiarity with PyTorch internals, TorchScript, and distributed data-parallel training
Proven track record profiling and accelerating large-scale training and inference jobs (e.g., mixed precision, kernel fusion, custom collectives)
Comfort working in Linux environments with modern CI/CD, containerization, and cluster managers such as Kubernetes
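"Register pressure and occupancy tuning" boils down to arithmetic over per-SM resource limits. A simplified estimator, assuming Ampere-class limits (64K 32-bit registers and 100 KB usable shared memory per SM); it ignores allocation granularity and per-SM block-count caps, so treat it as a sketch rather than a replacement for the CUDA occupancy calculator:

```python
def max_warps_per_sm(regs_per_thread: int,
                     smem_per_block: int,
                     threads_per_block: int,
                     reg_file: int = 65536,      # 32-bit registers per SM (assumed)
                     smem_per_sm: int = 102400,  # usable shared memory per SM, bytes (assumed)
                     warp_size: int = 32,
                     hw_max_warps: int = 64) -> int:
    """Resident warps per SM, limited by registers and shared memory."""
    warps_per_block = threads_per_block // warp_size
    # How many whole blocks fit in the register file?
    blocks_by_regs = reg_file // (regs_per_thread * threads_per_block)
    # How many fit in shared memory?
    blocks_by_smem = smem_per_sm // smem_per_block if smem_per_block else float("inf")
    blocks = min(blocks_by_regs, blocks_by_smem)
    return min(blocks * warps_per_block, hw_max_warps)

# 128 regs/thread at 256 threads/block: only 2 blocks fit,
# so just 16 of 64 possible warps are resident (25% occupancy).
print(max_warps_per_sm(regs_per_thread=128, smem_per_block=0, threads_per_block=256))  # → 16
```

Dropping register usage (e.g. via `__launch_bounds__` or spilling trade-offs) raises the resident-warp count, which is often the lever for latency-hiding on memory-bound kernels.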
Preferred Qualifications
Advanced degree (MS/PhD) in Computer Science, Electrical/Computer Engineering, or related field
Experience with multi-modal AI systems, particularly video generation or computer vision models
Familiarity with distributed training frameworks (DeepSpeed, FairScale, Megatron) and model parallelism techniques
Knowledge of compiler optimization techniques and experience with MLIR, XLA, or similar frameworks
Experience with cloud infrastructure (AWS, GCP, Azure) and GPU cluster management
Ability to translate research goals into performant code, balancing numerical fidelity with hardware constraints
Strong communication skills and experience mentoring junior engineers
Benefits
Comprehensive medical, dental, and vision plans
401K with employer match
Commuter Benefits
Catered lunch multiple days per week
Dinner stipend every night if you're working late and want a bite!
Doordash DashPass subscription
Health & Wellness Perks (Talkspace, Kindbody, One Medical subscription, HealthAdvocate, Teladoc)
Multiple team offsites per year with team events every month
Generous PTO policy
Mirage provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Please note that benefits apply to full-time employees only.