Birth of Logistic Loss Function
- In linear regression, the goal was to find weights and bias that minimize the squared error between the actual and predicted values.
- In contrast, the goal of classification problems like logistic regression is to increase the ratio of correctly classified samples itself.
- However, the ratio of correctly classified samples is not a differentiable function.
- In other words, since it cannot be used as a loss function for the gradient descent algorithm, we need another equation that can serve as the loss function.
- This is when the logistic loss function was created.
- By using this function as the loss function, we can achieve essentially the same objective: pushing the predictions toward the correct class. (A minimal code sketch follows the formula below.)
$L = -(y \log(a) + (1-y)\log(1-a))$
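As a quick illustration, here is a minimal NumPy sketch of this loss for a single sample. The `eps` clipping is my own addition to avoid `log(0)`; it is not part of the formula itself.

```python
import numpy as np

def logistic_loss(y, a, eps=1e-12):
    """L = -(y*log(a) + (1-y)*log(1-a)) for a single sample.

    y   : target label, 0 or 1
    a   : activation output, a probability in (0, 1)
    eps : small constant to avoid log(0) (implementation detail, not part of the formula)
    """
    a = np.clip(a, eps, 1 - eps)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

print(logistic_loss(1, 0.9))  # small loss: prediction close to the target
print(logistic_loss(1, 0.1))  # large loss: prediction far from the target
```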
Details
- $a$ is the estimation value after going through the activation function.
- Simply put, the logistic loss function is the binary classification version of the cross-entropy loss function used for multi-class classification (a short sketch after the table below illustrates this).
- Binary classification only has 2 possible answers: yes(1) or no(0).
- In other words, the target value is either 1 or 0.
- Therefore, the logistic loss function is also organized into cases where y is either 1 or 0.
| | $L$ |
|---|---|
| $y = 1$ | $-\log(a)$ |
| $y = 0$ | $-\log(1-a)$ |
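To see the connection to cross-entropy mentioned above, here is a small sketch (my own illustration, not from the post) treating binary classification as a two-class problem with predicted probabilities $(1-a, a)$ and a one-hot target. The general cross-entropy then reduces to the logistic loss.

```python
import numpy as np

def cross_entropy(p_true, p_pred):
    """General cross-entropy: -sum(p_true * log(p_pred))."""
    return -np.sum(p_true * np.log(p_pred))

y, a = 1, 0.8
p_true = np.array([1 - y, y])   # one-hot target over classes (0, 1)
p_pred = np.array([1 - a, a])   # predicted class probabilities

print(cross_entropy(p_true, p_pred))               # 0.2231...
print(-(y * np.log(a) + (1 - y) * np.log(1 - a)))  # same value: the logistic loss
```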
- We can see that whether y is 0 or 1, minimizing the equation for each case will naturally lead $a$ to reach our desired target value.
- For example, when y is 1, to minimize the logistic loss function value, $a$ naturally gets closer to 1.
- Conversely, when y is 0, minimizing the logistic loss function value naturally makes $a$ get closer to 0.
- In other words, minimizing the logistic loss function drives $a$ toward the value we actually want, which is why this function works as a loss function for classification. The small numerical check below makes this concrete.
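A quick numerical check of the two cases, using a few hypothetical activation values of my own choosing:

```python
import numpy as np

# As a approaches 1, the y=1 loss shrinks; as a approaches 0, the y=0 loss shrinks.
for a in [0.1, 0.5, 0.9, 0.99]:
    loss_y1 = -np.log(a)      # case y = 1
    loss_y0 = -np.log(1 - a)  # case y = 0
    print(f"a={a:.2f}  loss(y=1)={loss_y1:.3f}  loss(y=0)={loss_y0:.3f}")
```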
I keep having vivid dreams of success. Then it's time to get up and make them come true.
- Conor McGregor -