Introduction to Machine Learning (Part-3)
Probability gives us a way to model uncertain events, and machine learning tries to build a model of a real-world process.
This is the basic machine-learning setup: if some of the data is missing (for example, a label), we learn a function that predicts the missing data from the data we observe.
Example: applying for a bank loan is a real-world process. Approval can depend on age, income, education, marital status, whether the applicant has defaulted before, etc. These variables are described by a joint probability distribution, and from the learned function we predict a probability that approximates the actual probability.
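As a toy illustration of this idea, here is a minimal sketch in Python; the variables and numbers are invented purely for illustration, not real loan data:

```python
# Made-up joint distribution P(income, default) over two variables.
joint = {
    ("low", 0): 0.20, ("low", 1): 0.15,
    ("high", 0): 0.55, ("high", 1): 0.10,
}

# Marginal P(income = "low"): sum the joint over the default variable.
p_low = sum(p for (inc, d), p in joint.items() if inc == "low")

# Conditional P(default = 1 | income = "low") = P(low, 1) / P(low).
p_default_given_low = joint[("low", 1)] / p_low
print(p_default_given_low)  # 0.15 / 0.35 ≈ 0.43
```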
Principal Components: these are the factors on which a process is based.
Logistic Regression: it is used when the input is a continuous variable and the output is a categorical variable. It can be represented as:
P(Y=1|X) = 1 / (1 + e^(−Σᵢ βᵢxᵢ))
Here the sum runs over the n input features xᵢ, and the βᵢ are the parameters. We have to learn the parameters that minimize the error.
P(Y=0|X) = 1 − P(Y=1|X) = 1 − 1 / (1 + e^(−Σᵢ βᵢxᵢ)) = e^(−Σᵢ βᵢxᵢ) / (1 + e^(−Σᵢ βᵢxᵢ))
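These two formulas are easy to check numerically. A minimal sketch, where the weights and inputs are arbitrary illustrative values:

```python
import math

def p_y1(beta, x):
    """P(Y=1|X) = 1 / (1 + e^(-Σ βᵢxᵢ)) for weight vector beta and input x."""
    score = sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-score))

beta = [0.8, -0.5]   # illustrative parameters
x = [1.0, 2.0]       # illustrative input
p1 = p_y1(beta, x)
p0 = 1.0 - p1        # P(Y=0|X) = 1 - P(Y=1|X)
print(p1, p0)        # score = 0.8 - 1.0 = -0.2, so p1 ≈ 0.45
```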
If P(Y=1|X) > P(Y=0|X), we bet on class 1, since it is more probable. If P(Y=1|X) = P(Y=0|X), the point lies exactly on the decision boundary, and working out this equality shows that logistic regression produces a linear decision boundary.
Substituting the expressions above into P(Y=1|X) = P(Y=0|X):

1 / (1 + e^(−Σᵢ βᵢxᵢ)) = e^(−Σᵢ βᵢxᵢ) / (1 + e^(−Σᵢ βᵢxᵢ))

1 = e^(−Σᵢ βᵢxᵢ)

Taking log on both sides:

log 1 = log e^(−Σᵢ βᵢxᵢ)

0 = −Σᵢ βᵢxᵢ

Σᵢ βᵢxᵢ = 0

This is the equation of a plane (a hyperplane in the input space).
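To see the boundary concretely, here is a small sketch (again with made-up parameters): a point with Σᵢ βᵢxᵢ = 0 gets probability exactly 0.5, and the sign of the score decides the class.

```python
import math

beta = [2.0, -1.0]  # illustrative parameters: the boundary is 2*x1 - x2 = 0

def score(x):
    return sum(b * xi for b, xi in zip(beta, x))

def p1(x):
    return 1.0 / (1.0 + math.exp(-score(x)))

print(p1([1.0, 2.0]))        # score = 0, exactly on the plane: probability 0.5
print(p1([2.0, 1.0]) > 0.5)  # True: score = 3 > 0, predict Y=1
print(p1([0.0, 3.0]) > 0.5)  # False: score = -3 < 0, predict Y=0
```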
So in logistic regression, too, we fit a linear decision boundary.
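In practice the parameters are learned from data. A minimal sketch using scikit-learn, assuming it is installed (the toy points below are made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up 2-D points: class 1 roughly above the line x2 = x1, class 0 below.
X = np.array([[0, 1], [1, 2], [2, 3], [1, 0], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 0, 0, 0])

clf = LogisticRegression().fit(X, y)
print(clf.coef_, clf.intercept_)    # learned β defines the linear boundary
print(clf.predict_proba([[0, 3]]))  # [P(Y=0|X), P(Y=1|X)] for a new point
```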
D ↦ P'(Y|X)
Here D is a discriminative model and P'(Y|X) is the (approximate) conditional probability of Y given X. If the model instead learned the joint distribution P'(X,Y), it would be a generative model.
P(Y|X) = P(X,Y) / P(X)
We get P(X) by marginalizing the joint distribution over Y: P(X) = Σ_Y P(X,Y).
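For instance, with the made-up joint table from the loan sketch above: P(X = "low") = 0.20 + 0.15 = 0.35, so P(Y=1 | X = "low") = 0.15 / 0.35 ≈ 0.43.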
In linear regression, both the input and the output are continuous.
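A minimal sketch of this setting with NumPy least squares (the data points are invented so that y ≈ 2x + 1):

```python
import numpy as np

# Invented continuous data: y ≈ 2x + 1 plus a little noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Fit y = β₁x + β₀ by least squares.
A = np.column_stack([x, np.ones_like(x)])
(beta1, beta0), *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta1, beta0)  # close to 2 and 1
```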
Linear and logistic regression are not universal approximators: they cannot represent an arbitrary function f. Models such as mixtures of Gaussians and neural networks, however, can approximate such a function.
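As one illustration of the neural-network side of this claim, a hedged sketch using scikit-learn's MLPRegressor to approximate a nonlinear function (the architecture and hyperparameters are arbitrary choices):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# A nonlinear target that no linear model can fit exactly.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()

net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict([[1.5]]))  # should be close to sin(1.5) ≈ 0.997
```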