

Solution to Kaggle’s Dogs vs. Cats Challenge using Logistic Regression

In the previous post, I discussed a solution to Kaggle's Dogs vs. Cats Challenge using Convolutional Neural Networks. CNNs take time to train, and I tried a number of different network models and various hyperparameter values before achieving 94% accuracy. This was very time consuming: it took around two days to determine the best network model and hyperparameter values. I used grid search with the help of TrainCNN.py [1] to tune the hyperparameters. One grid-search run of TrainCNN.py took a few hours, and since I could not do anything else related to the CNN in the meantime, I decided to try logistic regression on another machine. I used LogisticRegressionCV from scikit-learn, which is the cross-validated version of LogisticRegression. I am not going to discuss the code in this blog post, as it is a straightforward implementation; instead, I encourage you to read LogisticRegression.py in my Exploring Deep Learning repository on GitHub.

Kaggle's Dogs vs. Cats dataset has 25,000 images in two equal classes of dogs and cats. I used 15,000 randomly selected images (7,500 each of dogs and cats) for fitting the model and 5,000 images (2,500 each of dogs and cats) for validation.
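
For illustration, here is a minimal sketch of how the dataset could be prepared for logistic regression, assuming the Kaggle images sit in a flat train/ directory with names like cat.0.jpg and dog.0.jpg. The paths, helper function, and random seed are placeholders, not the exact code from LogisticRegression.py:

```python
import glob

import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

IMAGE_SIZE = 75  # one of the sizes compared below (75, 100, or 125)

def load_images(pattern, n):
    """Load up to n images matching pattern, resize them, and flatten each to a 1-D feature vector."""
    files = sorted(glob.glob(pattern))[:n]
    data = [np.asarray(Image.open(f).resize((IMAGE_SIZE, IMAGE_SIZE))).ravel()
            for f in files]
    return np.array(data, dtype=np.float32)

cats = load_images("train/cat.*.jpg", 10000)
dogs = load_images("train/dog.*.jpg", 10000)

X = np.vstack([cats, dogs])
y = np.array([0] * len(cats) + [1] * len(dogs))  # 0 = cat, 1 = dog

# 15,000 images for fitting the model and 5,000 for validation, as described above.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, train_size=15000, test_size=5000, stratify=y, random_state=42)
```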

There are two parameters for processing the dataset itself: the image size and whether or not to standardize the images. For logistic regression there is also a choice of solver and a hyperparameter called Cs, which controls the strength of the regularization; smaller values of Cs specify stronger regularization. I did a grid search over these parameters, and the results are below:

Row  Solver  Image Size (px)  Rescale  Training Acc (%)  Validation Acc (%)  Time to Fit (s)  Memory (GB)
1    lbfgs   75               True     67.6              61.8                308.7            13.5
2    lbfgs   100              True     70.1              61.3                544.8            23.6
3    lbfgs   125              True     72.3              60.6                857.9            36.5
4    sag     75               True     67.6              61.9                1222.6           13.2
5    sag     100              True     70.1              61.3                2255.5           23.2
6    sag     125              True     72.6              60.5                3572.6           36.1
7    lbfgs   125              False    81.7              57.3                944.4            36.5
8    sag     125              False    84.8              58.4                4072.1           36.1
9    lbfgs   125              True     68.1              62.0                3635.8           36.5

Solver

Scikit-learn recommends liblinear for smaller datasets and sag or saga for larger ones; however, the default solver for logistic regression is lbfgs. Since the Dogs vs. Cats dataset is relatively large for logistic regression, I decided to compare the lbfgs and sag solvers. Comparing rows 1-3 with rows 4-6, we can see that although the training and validation accuracies are the same for both solvers, sag is about four times slower than lbfgs. Thus, scikit-learn's default of lbfgs is a good choice for logistic regression.

Image Size

If we compare image sizes for any one solver (rows 1-3 or 4-6), we can see that as the image size increases, the training accuracy increases from 67.6% to over 72%, while the validation accuracy stays roughly the same at 61-62%. This indicates that the model is overfitting the training samples. In the regularization section, we will see how to reduce this overfitting by adjusting the regularization strength.

Image Normalization

Scikit-learn recommends that features should be approximately on the same scale: "Note that [for] 'sag' and 'saga' fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing" [2]. I used sklearn.preprocessing.StandardScaler to normalize both the training and validation data. StandardScaler transforms the data so that each feature has zero mean and unit standard deviation. Looking at rows 7 and 8, we can see that without normalization both lbfgs and sag massively overfit the training data, with training accuracies of 82% and 85% respectively, but validation accuracies of only 57% and 58%. Both solvers are also slower than when the images were normalized. This clearly highlights the importance of feature normalization.
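
Continuing the data-preparation sketch above, the normalization step might look like the following: the scaler is fit on the training features only, and the same statistics are reused for the validation data. This mirrors the description in this section, not necessarily the exact code in LogisticRegression.py:

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn per-feature mean and std from training data
X_val_scaled = scaler.transform(X_val)          # apply the same statistics to validation data
```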

Regularization

Once I had decided on the solver (lbfgs), the image size (125), and that the images should be normalized, I fine-tuned the regularization strength (Cs). I used L2 regularization since lbfgs supports only L2; to use L1 regularization we would have to use the saga solver, but since sag and saga are so much slower than lbfgs, I decided not to try it. LogisticRegressionCV in scikit-learn supports grid search over this hyperparameter internally, which means we don't have to use model_selection.GridSearchCV or model_selection.RandomizedSearchCV. LogisticRegressionCV has a parameter called Cs, which is a list of values from which the solver finds the best model. I used Cs = [1e-12, 1e-11, …, 1e11, 1e12]. The results of this fine-tuning are presented in the last row (row 9) of the table above: the training accuracy has dropped from 72.3% to 68.1%, while the validation accuracy has increased from 60.6% to 62.0%. Thus, tuning the regularization strength does indeed reduce the degree of overfitting on the training data.
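
A hedged sketch of this regularization search with LogisticRegressionCV, continuing from the earlier snippets, might look as follows. The Cs grid matches the values listed above, while the fold count and max_iter are assumptions rather than values stated in this post:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

clf = LogisticRegressionCV(
    Cs=np.logspace(-12, 12, 25),  # 1e-12, 1e-11, ..., 1e11, 1e12
    penalty="l2",                 # lbfgs supports only L2 regularization
    solver="lbfgs",
    cv=5,                         # assumed number of cross-validation folds
    max_iter=1000,                # assumed; raised so the solver can converge
)
clf.fit(X_train_scaled, y_train)

print("Best C:", clf.C_)
print("Validation accuracy:", clf.score(X_val_scaled, y_val))
```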

Conclusions

In this article, I presented results for image classification on Kaggle's Dogs vs. Cats dataset using logistic regression. The classifier achieved an accuracy of 62% on the validation images. It may be possible to achieve higher accuracy by further tuning the image size, preprocessing the images differently, using grayscale instead of RGB color images, using a different regularization strength, or using both L1 and L2 regularization. I chose not to explore further since the memory requirements of logistic regression in scikit-learn are very large (last column in the table above).

References

  1. https://github.com/saurabhg17/ExploringDeepLearning
  2. https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html