Welcome to the monthly academic salon on the critical topic of AI safety. This month features talks from speakers from OpenAI and the Center for AI Safety.
This is a forum for AI experts to discuss and address the deepest technical and ethical questions around AI safety in society. It is primarily a social event for discussing current issues, featuring short talks while leaving more time for Q&A and for exploring feasible counterfactuals that could strengthen any hypotheses.
Since “AI safety” has become a catch-all colloquialism, topics such as negative externalities, privacy and autonomy, toxicity, bias, system integrity, and alignment all fall under this meetup's umbrella.
NOTE:
Since some proposed solutions may prove controversial or divisive, we expect participants to respect all points of view. By privileging pluralism over consensus, we hope the illuminating dialogues here can translate into actionable progress toward advancing AI safely.
If you can't make it this month, join us at a future monthly session for invigorating discussions.
6:00pm - Doors / food - time to chat
6:30pm - Lightning Talks (5-10 min):
- Zifan Wang, TinyFish (formerly of the Center for AI Safety), "Mitigating Harms (Generated) from LLMs: Safeguarding and Unlearning"
- Artur Kiulian, PolyAgent, "Framing the Future: How We Can, Should, and Must Regulate AI"
- Shreya Rajpal, Guardrails AI (guardrailsai.com)
7:00pm - More time to chat
8:00pm - Shutting it down