AI Safety Watch

Below is an incomplete list of resources I have been following on AI safety and security. Some other curated collections: AISafety.com, AI Safety Resource Compilation (on Zhihu). Feel free to email me if something you consider important is missing here.

Institutes or Groups

Government: UK AI Security Institute, US Center for AI Standards and Innovation (CAISI), China AI Safety & Development Association (CNAISDA)

Industry: Center for AI Safety (CAIS), Apollo Research, Palisade Research, London Initiative for Safe AI (LISA), Centre for the Governance of AI (GovAI), Gray Swan AI, Truthful AI, Safe AI Forum (SAIF), EleutherAI, METR, Redwood Research, RAND, FAR.AI

Academia (I am surely missing many): The Berkeley Center for Responsible, Decentralized Intelligence (RDI)

Forums

AI Alignment Forum, LessWrong

Seminars

The TrustML Young Scientist Seminars (TrustML YSS)

Funding Opportunities

Coefficient Giving (formerly Open Philanthropy), Schmidt Sciences, ARIA, Effective Altruism Funds, Foresight Institute, Frontier Model Forum, Future of Life Institute

Training Programs

MATS, SPAR, Algoverse, Principles of Intelligence, Pivotal