Role Summary:
Own the 0→1. You’ll turn vague customer use cases into working proofs-of-concept that showcase what Mem0 can do. This means rapid full-stack prototyping, stitching together AI tools, and aggressively experimenting with memory retrieval approaches until the use case works end-to-end. You’ll partner closely with Research and Backend, communicate trade-offs clearly, and hand off winning prototypes that can be hardened for production.
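To make that 0→1 loop concrete, here is a rough sketch of wiring Mem0 into a single demo turn. It assumes the open-source `mem0` Python package exposes a `Memory.add` / `Memory.search` interface as shown; treat the exact signatures as an assumption and verify against the current SDK docs.

```python
# Rough sketch only: assumes the open-source `mem0` package exposes
# Memory.add / Memory.search as shown; verify against the SDK docs.
from mem0 import Memory

memory = Memory()  # assumed default local configuration

def remember(user_id: str, text: str) -> None:
    # Persist a fact about the user so later turns can recall it.
    memory.add(text, user_id=user_id)

def recall(user_id: str, query: str):
    # Fetch memories relevant to the current turn's query.
    return memory.search(query, user_id=user_id)

if __name__ == "__main__":
    remember("alice", "Prefers concise answers and dark mode")
    print(recall("alice", "how should replies be formatted?"))
```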
What You'll Do:
Build POCs for real use cases: Stand up end-to-end demos (UI + APIs + data) that integrate Mem0 into the customer’s flow.
Experiment with memory retrieval: Try different embeddings, indexing, hybrid search, re-ranking, chunking/windowing, prompts, and caching to hit task-level quality and latency targets (a sketch of one such experiment follows this list).
Prototype with Research: Implement paper ideas and new techniques from scratch, compare them against baselines, and keep what wins.
Create eval harnesses: Define small gold sets and lightweight metrics to judge POC success; instrument demos with basic telemetry.
Integrate AI tooling: Combine LLMs, vector DBs, Mem0 SDKs/APIs, and third-party services into coherent workflows.
Collaborate tightly: Work with Backend on clean contracts and data models; with Research on hypotheses; share learnings and next steps.
Package & handoff: Write concise docs, scripts, and templates so Engineering can productionize quickly.
Minimum Qualifications:
Full-stack fluency: Next.js/React on the front end and Python backends (FastAPI/Django/Flask) or Node where needed.
Strong Python and TypeScript/JavaScript; comfortable building APIs, wiring data models, and deploying quick demos.
Hands-on with the LLM/RAG stack: embeddings, vector databases, retrieval strategies, prompt engineering.
Track record of rapid prototyping: moving from idea → demo in days, not months; clear documentation of results and trade-offs.
Ability to design small, meaningful evaluations for a use case (quality + latency) and iterate based on evidence; a sketch of what that can look like follows this list.
Excellent communication with Research and Backend; crisp specs, readable code, and honest status updates.
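A minimal sketch of such an evaluation: a handful of gold (query, expected memory) pairs scored for hit-rate and latency. The gold set and the `retrieve()` stub are placeholders for whatever pipeline is under test.

```python
import statistics
import time

# Tiny gold set: (query, expected memory id). Illustrative only.
GOLD = [
    ("when does my plan renew", "m3"),
    ("how do I export my chats", "m2"),
    ("what theme does the user like", "m1"),
]

def retrieve(query: str, top_k: int = 3) -> list[str]:
    # Stub: swap in the real retrieval pipeline under test.
    return ["m3", "m2", "m1"][:top_k]

def run_eval(top_k: int = 3) -> None:
    hits, latencies = 0, []
    for query, expected in GOLD:
        start = time.perf_counter()
        results = retrieve(query, top_k=top_k)
        latencies.append((time.perf_counter() - start) * 1000)
        hits += expected in results
    print(f"hit@{top_k}: {hits / len(GOLD):.2f}")
    print(f"p50 latency: {statistics.median(latencies):.2f} ms")

if __name__ == "__main__":
    run_eval()
```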
Nice to Have:
Model serving/fine-tuning experience (vLLM, LoRA/PEFT) and lightweight batch/async pipelines.
Deployments on Vercel/serverless, Docker, basic k8s familiarity; CI for demo apps.
Data visualization and UX polish for compelling demos.
Prior Forward-Deployed/Solutions/Prototyping role turning customer needs into working software.
About Mem0
We're building the memory layer for AI agents. Think long-term memory that enables AI to remember conversations, learn from interactions, and build context over time. We already power millions of AI interactions, are backed by top-tier investors, and are well capitalized.
Our Culture
Office-first collaboration - We're an in-person team in San Francisco. Hallway chats, impromptu whiteboard sessions, and shared meals spark ideas that remote calls can't.
Velocity with craftsmanship - We build for the long term rather than just shipping features. We move fast but never sacrifice reliability or thoughtful design - every system needs to be fast, reliable, and elegant.
Extreme ownership - Everyone at Mem0 is a builder-owner. If you spot a problem or opportunity, you have the agency to fix it. Titles are light; impact is heavy.
High bar, high trust - We hire for talent and potential, then give people room to run. Code is reviewed, ideas are challenged, and wins are celebrated - always with respect and curiosity.
Data-driven, not ego-driven - The best solution wins, whether it comes from a founder or an engineer who joined yesterday. We let results and metrics guide our decisions.