AI presents numerous risks to humanity. Meanwhile, progress in AI capabilities is accelerating at a frantic pace, and humanity is not prepared for the consequences. AI companies are pushing one another in a race to develop superhuman intelligence, in which safety is actively sacrificed for monetary gain. We must force our governments to step in and prevent AI from reaching superhuman levels before we know how to do so safely. This pause needs to happen at an international level, including with current US adversaries.
This protest will take place ahead of the Second AI Safety Summit, held on the 22nd of May in Seoul. Our goal is to convince the few influential people attending (ministers) to be the adults in the room and draft a treaty that prioritizes and enforces AI safety. It’s up to us to make them understand that they may be the only ones with the power to fix this problem.
We plan to meet at 5:00 PM by the Centennial Flame, rain or shine. Hope to see you there!