Logistic Regression can be derived from Naive Bayes by modeling the posterior probabilities of the two class labels, \( +1 \) and \( -1 \).
Taking the logarithm of the posterior odds (the log-odds) then transforms the formulation into a linear form.
This leads to the condition:
\( w^{T}x + b > 0 \)
which defines a linear decision boundary in the input space.
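A minimal sketch of this decision rule in Python (the function names `sigmoid` and `predict` are illustrative, not from the original notes): the logistic function maps the linear score \( w^{T}x + b \) to a probability, and the score being positive is equivalent to that probability exceeding 0.5.

```python
import math

def sigmoid(z):
    # Logistic function: maps a real-valued score to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Linear score w^T x + b; a positive score corresponds to
    # sigmoid(score) > 0.5, so we predict label +1, otherwise -1.
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1
```

Note that `predict` never needs to evaluate `sigmoid` at all: the sign of the linear score alone decides the label, which is exactly the condition \( w^{T}x + b > 0 \) above.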
Maximum Likelihood Estimation (MLE):
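A sketch of the MLE objective, assuming i.i.d. training pairs \( (x_i, y_i) \) and the logistic model \( P(y \mid x; w) \):

\[
\hat{w}_{\text{MLE}} = \arg\max_{w} \prod_{i=1}^{n} P(y_i \mid x_i; w)
= \arg\max_{w} \sum_{i=1}^{n} \log P(y_i \mid x_i; w)
\]

The logarithm turns the product over samples into a sum, which is more convenient to optimize.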
Maximum A Posteriori (MAP):
\( P(w \mid D) = \dfrac{P(D \mid w)\, P(w)}{P(D)} \)
where \( D \) is the dataset.
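Taking the logarithm of Bayes' rule above and dropping \( \log P(D) \), which does not depend on \( w \), gives the MAP objective:

\[
\hat{w}_{\text{MAP}} = \arg\max_{w} P(w \mid D)
= \arg\max_{w} \left[ \log P(D \mid w) + \log P(w) \right]
\]

Compared with MLE, MAP adds the log-prior term \( \log P(w) \); for example, a zero-mean Gaussian prior on \( w \) contributes a term proportional to \( -\|w\|^{2} \), which is L2 regularization.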