Backed by £15 million in funding, the UK’s AI Security Institute is leading a new project with Anthropic, Amazon, and the Canadian AI Safety Institute to ensure AI alignment. Full story https://t.co/6etf9Snm0h #Tech | #News | #AI | #AISecurityInstitute | #AIAlignment
AI Security Institute launches international coalition to safeguard AI development - https://t.co/UVIk5jg5Iz https://t.co/wsC9vhU5MU
We’re joining the UK AI Security Institute's Alignment Project, contributing compute resources to advance critical research. As AI systems grow more capable, ensuring they behave predictably and in line with human values becomes ever more vital. https://t.co/TyZnjOLGKW
The UK’s AI Security Institute (AISI) has launched the Alignment Project, an international coalition aimed at advancing research on AI alignment and control so that AI systems behave in ways consistent with human values and goals. The project is backed by over £15 million in funding and offers up to £1 million per research project, along with access to computing resources, venture capital investment, and expert support. Key partners in the initiative include Anthropic, Amazon, and the Canadian AI Safety Institute; Anthropic has committed compute resources to support the research. The Alignment Project seeks to coordinate global efforts on urgent challenges in AI safety as AI systems become increasingly capable.