There's been a lot of discourse in the run-up to the November AI Safety Summit in the UK about what safety policies should be in place. ARC Evals & Anthropic are pushing for 'Responsible Scaling', which doesn't put any hard upper limit on the amount of compute that powerful models can use.
Others think we need a global compute cap. Thoughts on enforcing a ceiling on the amount of compute/FLOP that both state & non-state actors can use?