Ever wonder how transformers actually do some of their computation? Wanted to know more about what this “mechanistic interpretability” thing is and how it relates to AI safety? This is the workshop for you!
We’ll be analyzing how transformers perform induction (in the vein of mechanistic interpretability as kicked off by Anthropic’s 2021 Transformer Circuits paper). This workshop is meant for people who have previously built and trained their own transformers. We will assume that you are already very familiar with a standard decoder-only, GPT-2-style transformer.
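For a taste of what "induction" means here: an induction head lets a model complete a repeated pattern, i.e. having seen [A][B] earlier in the context, it predicts [B] the next time [A] appears. A toy stand-in for that computation, in plain Python with made-up tokens (no actual model involved), might look like:

```python
# Induction as pattern completion: after seeing "[A][B] ... [A]",
# predict "[B]" by copying whatever previously followed "[A]".
def induction_predict(tokens):
    """Toy sketch of what an induction head computes: find the most
    recent earlier occurrence of the current token and copy the token
    that followed it."""
    current = tokens[-1]
    # Scan backwards through the prefix for a previous occurrence.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]
    return None  # no earlier occurrence: nothing to induce

# Example: "The cat sat . The cat" -> predicts "sat"
print(induction_predict(["The", "cat", "sat", ".", "The", "cat"]))  # sat
```

In the workshop we'll look at how attention heads inside a real transformer implement (a learned version of) this lookup.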
This workshop will consist of an introductory talk on mechanistic interpretability of LLMs and some guiding principles, followed by hands-on interpretability exercises.
You will need to bring a computer to participate!