I occasionally write about AI alignment and related topics on LessWrong/Alignment Forum, under the username “LawrenceC”: https://www.lesswrong.com/users/LawrenceC
I argue that new machine learning researchers often delay contact with reality for too long, explain why this is a mistake, and offer suggestions for avoiding it.