Why AI Safety?

I believe there is a high chance we will build superhuman AI in my lifetime. This would be one of the greatest transitions in the history of humanity, and it comes with enormous risks: there is a high chance that humanity destroys itself in the process.

I don't want that. I want to see humanity flourish; I want our consciousness to expand.

Therefore, I have chosen to dedicate myself to making AI go well.

Why am I doing technical instead of policy work?

I believe both can be incredibly impactful, but for me it is a question of personal fit.

I love to read dense academic papers. I love to think about technical concepts until my head hurts. I love to write code.