From what I recall this is a bit off -- not a bad mental model, but the math plays out differently.
Linear regression has a closed-form solution, the projection of Y onto the column space of X: \hat{\beta} = (X'X)^{-1} X' Y
This is equivalent to the Maximum Likelihood Estimator (MLE) for linear regression (under Gaussian errors). For logistic regression, however, the MLE for the log-odds coefficients differs from the OLS estimate and has no closed form -- it has to be found iteratively.
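A minimal sketch of the closed form, using made-up data (the design matrix and coefficients here are purely illustrative): the normal-equations solution matches numpy's least-squares solver.

```python
import numpy as np

# Hypothetical data, chosen only to illustrate the closed form.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # intercept + 2 features
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=50)

# Closed-form OLS: beta_hat = (X'X)^{-1} X'y
# (solve the normal equations rather than inverting X'X explicitly)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Same answer from the generic least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(beta_hat, beta_lstsq)
```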
Regressing the indicator {class_inclusion} on X by OLS (i.e. fitting {class_inclusion} = XB) gives the linear probability model, which has limited utility. The required transform is covered by another commenter.