Joining us as a Research Engineer, you'll be at the forefront of tackling one of the most critical challenges in AI today: safety and alignment. Your work will be pivotal in understanding and mitigating the risks of advanced AI, conducting foundational research to make our models safer, and solving the core technical problems of AI alignment—ensuring our models behave in accordance with human values and intentions.
The Safety team is dedicated to pioneering and implementing techniques that make our models more robust, honest, and harmless. As a Research Engineer, you will bridge the gap between theoretical research and practical application, writing high-quality code to test hypotheses and integrating successful safety solutions directly into our products. Your research will not only protect millions of users but also contribute to the broader scientific community's understanding of how to build safe, beneficial AI.
Develop and implement novel evaluation methodologies and metrics to assess the safety and alignment of large language models.
Research and develop cutting-edge techniques for model alignment, value learning, and interpretability.
Conduct adversarial testing to proactively uncover potential vulnerabilities and failure modes in our models.
Analyze and mitigate biases, toxicity, and other harmful behaviors in large language models through techniques like reinforcement learning from human feedback (RLHF) and fine-tuning.
Collaborate with engineering and product teams to translate safety research into practical, scalable solutions and best practices.
Stay abreast of the latest advancements in AI safety research and contribute to the academic community through publications and presentations.
Hold a PhD (or equivalent experience) in Computer Science, Machine Learning, or a related discipline.
Write clear and clean production-facing and training code.
Experience working with GPUs (training, serving, debugging).
Experience with data pipelines and data infrastructure.
Strong understanding of modern machine learning techniques, particularly transformers and reinforcement learning, with a focus on their safety implications.
Are passionate about the responsible development of AI and dedicated to solving complex safety challenges.
Experience with product experimentation and A/B testing.
Experience training large models in a distributed setting.
Familiarity with ML deployment and orchestration (Kubernetes, Docker, cloud).
Experience with explainable AI (XAI) and interpretability techniques.
Have conducted research in AI safety, alignment, ethics, or a related area.
Knowledge of the broader societal and ethical implications of AI, including policy and governance.
Publications in relevant machine learning journals or conferences.
Character.AI empowers people to connect, learn and tell stories through interactive entertainment. Over 20 million people visit Character.AI every month, using our technology to supercharge their creativity and imagination. Our platform lets users engage with tens of millions of characters, enjoy unlimited conversations, and embark on infinite adventures.
In just two years, we achieved unicorn status and were honored as Google Play's AI App of the Year—a testament to our innovative technology and visionary approach.
Join us and be a part of establishing this new entertainment paradigm while shaping the future of Consumer AI!
At Character, we value diversity and welcome applicants from all backgrounds. As an equal opportunity employer, we do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, or disability. Your unique perspectives are vital to our success.