
Manager (Technical Governance Team)

About MIRI

The Machine Intelligence Research Institute (MIRI) is a nonprofit based in Berkeley, California, focused on reducing existential risks from the transition to smarter-than-human AI. We've historically been very focused on technical alignment research. Since summer 2023, we have shifted our focus towards communication and AI governance. See our strategy update post for more details.

About the Technical Governance Team

We are looking to build a dynamic and versatile team that can quickly produce a large range of research outputs for the technical governance space. Please feel free to fill out this form, or contact us at [email protected].

We focus on researching and designing technical aspects of regulations and policy that could lead to safe AI. The team works on:

  • Inputs into regulations and responses to requests for comment from policy bodies (e.g., NIST/US AISI, EU, UN)

  • Technical research to improve international coordination

  • Limitations of current AI safety proposals and policies

  • Communicating with and consulting for policymakers and governance organizations

Our previous publications are available on our website if you would like to read them. We also have a draft research agenda that will inform future projects; it is available upon request.

About the Role

We are primarily hiring researchers, but we are also interested in hiring a manager for the team.

In this role, you would manage a team working on the above areas, and have the opportunity to work on these areas directly. See here for the technical governance researcher job ad.

This role could involve the following, but we are open to candidates who want to focus on a subset of these responsibilities.

  • External stakeholder management, e.g., build and maintain relationships with policymakers and AI company employees (the target audience for much of our work)

  • Internal stakeholder management, e.g., interface with the rest of MIRI and ensure our work is consistent with broader MIRI goals, pre-publication review of the team’s outputs

  • Project management, e.g., track existing projects, motivate good work toward deadlines

  • People management, e.g., run future hiring rounds, fellowships

  • Bonus: Research contributions, e.g., contributing to object-level work

  • In the above work, maintain particular focus on what is needed for solutions to scale to smarter-than-human intelligence and conduct research on which new challenges may emerge at that stage

Most of the day-to-day work of the team is a combination of reading, writing, and meetings. Some example activities could include:

  • Threat modeling, working out how AI systems could cause large-scale harm, and hopefully what actions could be taken to prevent this

  • Responding to a US government agency’s Request for Comment

  • Learning about risk management practices in other industries, and applying these to AI

  • Designing and implementing evaluations of AI models, for example to demonstrate failure modes with current policy

  • Preparing and presenting informative briefings to policymakers, such as explaining the basics and current state of AI evaluations

  • Reading a government or AI developer’s AI policy document, and writing a report on its limitations

  • Designing new AI policies and standards which address the limitations of current approaches

Who We’re Looking For

There are no formal degree requirements to work on the team. However, we are especially excited about applicants who have a strong background in AI safety and previous experience or familiarity working in (or as) one or more of the following:

  • Compute governance. Technical knowledge of AI hardware / chips manufacturing and related governance proposals.

  • Policy (including AI policy). Experience here could involve writing legislation or white papers, engaging with policymakers, or conducting other research in AI policy and governance.

  • Strong AI safety generalist. For example, you have produced good AI safety research and have a good overview-level understanding of empirical, theoretical, and conceptual approaches, or you have otherwise demonstrated an ability to think clearly and carefully about AI safety.

  • Bonus: Research or engineering focused on frontier AI models or the AI tech stack. The role may involve creating or running model evaluations, benchmarking AI hardware, conducting scaling law experiments, and other empirical work.

We are also excited about candidates who are particularly strong in the following areas:

  • Agency – You get things done without someone constantly looking over your shoulder. You notice problems and are motivated to fix them. You focus on solving the problem, not waiting to be told what to do. You know when to defer to another’s decision, and when to ask for guidance. You are an active member of the team, not a mindless cog in the machine.

  • Conscientiousness – You are diligent and hard-working, and complete your work reliably and dependably. You desire to do tasks well and effectively. You pay attention to details and are organized, and able to manage lots of small tasks and projects.

  • Comfort learning on the job – You enjoy and are able to quickly acquire new skills and knowledge as needed. You feel comfortable working on underspecified tasks where part of your job is to further develop the research questions appropriately.

  • Generative thinking – You enjoy coming up with and iterating on new ideas. You can generate original work as well as extend others’ thoughts. You aren’t afraid to suggest things, or point out flaws in your or others’ thoughts.

  • Communication (Internal) – You are a team player who is excited to work with others and willing to attend several weekly meetings. You proactively keep your teammates and manager in the loop about the status of projects you manage, when things are falling behind, and when you need more information. You voice your confusions.

  • Communication (External) – You are able to communicate effectively to external stakeholders who have a range of technical expertise, including policymakers. You can produce concise, clear, and compelling writing, and deliver presentations on the team's research and ideas.

In addition, we are looking for candidates who:

  • Are broadly aligned with MIRI's values and willing to work toward MIRI’s goals (i.e., the world needs to build an Off Switch for AI), and the Technical Governance Team’s research directions (e.g., those described in our research agenda)

  • Are passionate about MIRI’s mission and excited to support our work in reducing existential risks from AI.

Logistics

  • Application deadline – No current deadline; we will evaluate applications as they come in.

  • Location – In-office (in Berkeley, CA)

  • Compensation – $120–230k.

    • The range is due to the wide possibility space of experience and skills that candidates may bring.

    • We strive to ensure that all staff are paid an appropriate and comfortable living wage such that they feel fairly compensated and are able to focus on doing great work.

  • Benefits – MIRI offers a variety of benefits including:

    • Health insurance (the best available plans from Kaiser and Blue Shield) as well as dental and vision coverage. (We cannot always offer comparable benefits to international staff.)

    • “No vacation policy” – staff are encouraged to take vacation when they want/need to in coordination with their manager.

  • Visas – We can potentially sponsor visas for particularly promising candidates.

