Error Loading NumPy Arrays

If you received the error “Object arrays cannot be loaded when allow_pickle=False” while loading a NumPy array using numpy.load(), it is because NumPy changed the default loading behaviour in version 1.16.3. If you are using a version newer than this, many places on the internet advise simply downgrading NumPy. This is not the correct solution at all. From the NumPy documentation:

allow_pickle : bool, optional
Allow loading pickled object arrays stored in npy files. Reasons for disallowing pickles include security, as loading pickled data can execute arbitrary code. If pickles are disallowed, loading object arrays will fail. Default: False
Changed in version 1.16.3: Made default False in response to CVE-2019-6446.

Thus, the correct solution is to pass allow_pickle=True to the numpy.load function. However, this should be used carefully, and ideally only with files you have previously saved yourself, since unpickling in Python can execute arbitrary code and thereby compromise system security.
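For example, a minimal sketch (assuming data.npy is a file you previously saved yourself; the filename is illustrative):

    import numpy as np

    # Enable pickle loading only for files from a trusted source,
    # since unpickling can execute arbitrary code.
    data = np.load('data.npy', allow_pickle=True)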


Solution to Kaggle’s Dogs vs. Cats Challenge using Logistic Regression

In the previous post, I discussed a solution to Kaggle’s Dogs vs. Cats Challenge using Convolutional Neural Networks. CNNs take time to train, and I tried a number of different network models and hyperparameter values before achieving 94% accuracy. This was very time-consuming: it took around two days to determine the best network model and hyperparameter values. I used grid search with the help of TrainCNN.py [1] to tune the hyperparameters. One run of TrainCNN.py for grid search took a few hours, and since I was unable to do anything else related to CNNs in the meantime, I decided to try logistic regression on another machine. I used LogisticRegressionCV from scikit-learn, which is the cross-validated version of the LogisticRegression class. I am not going to discuss the code in this blog post as it is a straightforward implementation; instead, I encourage you to read LogisticRegression.py in my Exploring Deep Learning repository [1] on GitHub.
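A minimal sketch of the approach (the random arrays below are stand-ins for the flattened image data; the actual implementation is in LogisticRegression.py [1]):

    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV

    # Stand-ins for 100 flattened 75x75 RGB images with binary cat/dog labels.
    rng = np.random.default_rng(0)
    X_train = rng.random((100, 75 * 75 * 3))
    y_train = rng.integers(0, 2, size=100)

    # Cross-validated logistic regression; the grid search over the
    # regularization strengths (Cs) happens internally.
    clf = LogisticRegressionCV(cv=5, solver='lbfgs', max_iter=1000)
    clf.fit(X_train, y_train)
    print(clf.score(X_train, y_train))  # training accuracy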

Kaggle’s dogs vs. cats dataset has 25,000 images in two equal classes of dogs and cats. I used 15,000 randomly selected images (7,500 each of dogs and cats) for fitting the model and 5,000 images (2,500 each) for validation.

There are two parameters for processing the dataset itself: the image size and whether to standardize the images. For logistic regression, there is a choice of solver and a hyperparameter called Cs which describes the strength of the regularization; smaller values of Cs specify stronger regularization. I did a grid search for the optimal values of these parameters, and the results are below:

Row  Solver  ImageSize  Rescale  TrainingAcc (%)  ValidationAcc (%)  TimeToFit (s)  Memory (GB)
  1  lbfgs          75     True             67.6               61.8          308.7         13.5
  2  lbfgs         100     True             70.1               61.3          544.8         23.6
  3  lbfgs         125     True             72.3               60.6          857.9         36.5

  4  sag            75     True             67.6               61.9         1222.6         13.2
  5  sag           100     True             70.1               61.3         2255.5         23.2
  6  sag           125     True             72.6               60.5         3572.6         36.1

  7  lbfgs         125    False             81.7               57.3          944.4         36.5
  8  sag           125    False             84.8               58.4         4072.1         36.1

  9  lbfgs         125     True             68.1               62.0         3635.8         36.5

Solver

Sklearn recommends using liblinear for smaller datasets and sag or saga for larger ones; however, the default solver for logistic regression is lbfgs. Since the dogs vs. cats dataset is relatively large for logistic regression, I decided to compare the lbfgs and sag solvers. Comparing rows 1-3 with rows 4-6, we can see that although the training and validation accuracies are the same for both solvers, sag is about four times slower than lbfgs. Thus, sklearn’s default of lbfgs is a good choice for logistic regression.

Image Size

If we compare image sizes for any one solver (rows 1-3 or 4-6), we can see that as the image size increases, the training accuracy increases from 67.6% to 72.6%. However, the validation accuracy stays roughly the same at 61-62%. This indicates that the model is overfitting the training samples. In the regularization section, we will see how to reduce overfitting by adjusting the regularization strength.

Image Normalization

Sklearn recommends that features be of approximately the same scale. “Note that [for] ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing” [2]. I used sklearn.preprocessing.StandardScaler to normalize both the training and validation data. StandardScaler transforms the data so that each feature has zero mean and unit standard deviation. Looking at rows 7 and 8, we can see that without image normalization both lbfgs and sag massively overfit the training data, with training accuracies of 82% and 85% and validation accuracies of only 57% and 58%, respectively. Both solvers are also about three times slower than when images were normalized. This clearly highlights the importance of feature normalization.
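A minimal sketch of the normalization step (the arrays are stand-ins for flattened image data):

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X_train = rng.random((100, 75 * 75 * 3))  # stand-in for training images
    X_val = rng.random((50, 75 * 75 * 3))     # stand-in for validation images

    # Fit the scaler on the training data only, then apply the same
    # transformation to both training and validation data.
    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    X_val = scaler.transform(X_val)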

Regularization

Once I had decided on the solver (lbfgs), the image size (125), and that images should be normalized, I fine-tuned the regularization strength (Cs). I used L2 regularization since lbfgs supports only L2 regularization. To use L1 regularization we would have to use the saga solver, but since sag and saga are so much slower than lbfgs I decided not to try it. LogisticRegressionCV in sklearn supports grid search over hyperparameters internally, which means we don’t have to use model_selection.GridSearchCV or model_selection.RandomizedSearchCV. LogisticRegressionCV has a parameter called Cs, which is a list of values among which the solver will find the best model. I used Cs = [1e-12, 1e-11, …, 1e11, 1e12]. The results of the fine-tuning are presented in the last row (row 9) of the table above. It can be seen that the training accuracy dropped from 72.3% to 68.1% while the validation accuracy increased from 60.6% to 62.0%. Thus, tuning the regularization strength does indeed decrease the degree of overfitting.
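The Cs grid above is easy to generate with numpy (a sketch; the classifier is constructed and fitted as before):

    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV

    # 25 logarithmically spaced values from 1e-12 to 1e12.
    Cs = np.logspace(-12, 12, num=25)

    # lbfgs supports only L2 regularization.
    clf = LogisticRegressionCV(Cs=Cs, cv=5, solver='lbfgs', penalty='l2', max_iter=1000)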

Conclusions

In this article, I presented results for image classification on Kaggle’s dogs vs. cats dataset using logistic regression. The classifier achieved an accuracy of 62% on the validation images. It may be possible to achieve higher accuracy by further tuning the image size, preprocessing the images, using grayscale images instead of RGB color images, using a different value of the regularization strength, or using both L1 and L2 regularization. I chose not to explore further since the memory requirements for logistic regression in sklearn are very large (see the last column of the table above).

References

  1. https://github.com/saurabhg17/ExploringDeepLearning
  2. https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html

Solution to Kaggle’s Dogs vs. Cats Challenge using Convolutional Neural Networks

Dogs vs. cats challenge [1] from Kaggle ended in January 2014, but it is still extremely popular for getting started in deep learning, for two main reasons: the dataset is small (25,000 images taking up about 600 MB), and it is relatively easy to get a good score.

There are many online articles discussing how to pre-process the data, design a CNN model, and train it. So, in this post I am not going to discuss the implementation details. Instead, I am simply going to report my results using a custom-designed model and transfer learning. I used TensorFlow and tf.keras with Python, and the code is available from my Exploring Deep Learning repository [2] on GitHub.

Learning using a Custom Model

Note that this is my best attempt, not my first. I used four blocks of 2D convolution layers, each followed by max pooling. At the end, I used two dense layers and a softmax output layer. I also used dropout layers and image augmentation. The exact command line I used for training can be found in the Exploring Deep Learning repository [2].

The CNN model is given below:
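(The exact listing is not reproduced here; the tf.keras sketch below reconstructs the architecture described above. The input size, filter counts, dense-layer widths, and dropout rates are illustrative assumptions, and image augmentation is omitted.)

    import tensorflow as tf
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),  # input size is an assumption
        # Four blocks of 2D convolution, each followed by max pooling.
        layers.Conv2D(32, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        # Two dense layers followed by a softmax output layer.
        layers.Dense(512, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(128, activation='relu'),
        layers.Dense(2, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])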

The above model was trained on 15,000 images (7,500 each of dogs and cats) randomly chosen from the Kaggle dataset and validated on a separate 5,000 images (2,500 each). The model achieved 94% accuracy after 24 epochs. Training took about 4 hours on my PC with an NVIDIA GeForce GTX 1050 with 2 GB of RAM.

Cross-entropy loss and classification accuracy for training and validation using the custom CNN model.

Transfer Learning using VGG16 Model

For the second part, I used the VGG16 model with ImageNet weights, without the top layer, and with custom dense layers at the end. As in the previous step, I used dropout layers and image augmentation. The exact command line I used for training can be found in the repository [2].

The CNN model is given below:
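(Again, the exact listing is not reproduced here; the sketch below shows the general shape of such a model. The input size, dense-layer width, and dropout rate are illustrative assumptions, and image augmentation is omitted.)

    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16

    # VGG16 with ImageNet weights and without the top (classifier) layers.
    base = VGG16(weights='imagenet', include_top=False, input_shape=(128, 128, 3))
    base.trainable = False  # keep the pre-trained convolutional weights fixed

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(256, activation='relu'),
        layers.Dense(2, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])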

The above model was trained on the same dataset as the custom model above, and it achieved an accuracy of 98% after 11 epochs. Clearly, this model is both more efficient and more accurate than the custom-designed model.

Cross-entropy loss and classification accuracy for training and validation using transfer learning from the VGG16 model.

References:

  1. https://www.kaggle.com/c/dogs-vs-cats
  2. https://github.com/saurabhg17/ExploringDeepLearning

String Selection Widget for Qt5

Some time back, I developed a data entry application in Qt5. One of the requirements was to let the user select a single string from a predefined list of strings. I developed a custom widget called SStringSelector for this purpose. SStringSelector has two views: display and selection. The display view presents the currently selected string (blank if no string is selected) and a push button. To select a string, the user clicks the button, which presents the selection dialog. The selection dialog consists of a list of strings in a QListWidget, and the user can select one of them by double-clicking it. If the list of strings is long, the user can filter it using a filter QLineEdit above the QListWidget.

SStringSelector is distributed as part of the QtUtils repository hosted on GitHub. The SStringSelector widget is really simple to use: simply add the SStringSelector.h and SStringSelector.cpp files to your project and add an instance of SStringSelector to a layout in your app.

Below are some screenshots of the widget under Windows:

The SStringSelector Widget.
Selection Dialog of the SStringSelector Widget.
Filtering Strings in the Selection Dialog.

Color Picker Widget for Qt5

Qt5 supports standard dialogs such as QFileDialog, QFontDialog, and QColorDialog; however, it does not provide a compact color picker widget that can be embedded in a layout. Recently, I needed a color picker for one of my projects, so I implemented a simple color picker widget.

SColorPicker is available from GitHub as part of the QtUtils repository. To use SColorPicker, add the header and cpp files directly to your project. Then, simply add an instance of SColorPicker to a layout. SColorPicker appears as a 16×16 pixel colored square in the layout; if you need a different size, change it in SColorPicker's constructor. When a user double-clicks the colored square, the system’s color dialog appears, allowing the user to choose a color. The selected color can be obtained from the color() function or by connecting to the colorPicked() signal.

Below are screenshots of SColorPicker_Demo and the system color dialog presented to the user on a Windows 10 computer.


Fill Disk Partition

Recently, I had to give away a computer with a couple of disks in it. I wanted to securely erase the data on these disks as I had stored sensitive personal information on them. Using a program such as DBAN was not an option as I was not allowed to remove the operating system from the computer. My goal was simply to overwrite the free space on all the partitions. I couldn’t find anything I liked, so I ended up writing a simple tool called FillPartition in Python.

FillPartition is hosted on GitHub at https://github.com/saurabhg17/FillPartition. It is really easy to use, with just one mandatory argument (the path of the partition) and one optional argument (--outputDir, -od), the directory in the partition where the files should be written. FillPartition writes 1 GB files filled with zero bytes until the free space is less than 1 GB, and then writes one final file of size equal to the remaining free space.
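The core logic can be sketched in a few lines of Python (a simplified illustration, not the actual tool; the file naming is made up and error handling is omitted):

    import os
    import shutil

    ONE_MB = 2**20
    ONE_GB = 2**30

    def fill_partition(path):
        """Write zero-filled 1 GB files under `path` until the free space
        drops below 1 GB, then one final file covering the remainder."""
        index = 0
        while True:
            free = shutil.disk_usage(path).free
            if free == 0:
                break
            size = min(ONE_GB, free)
            file_name = os.path.join(path, 'fill_{:04d}.bin'.format(index))
            with open(file_name, 'wb') as f:
                # Write in 1 MB blocks to keep memory usage low.
                written = 0
                while written < size:
                    block = min(ONE_MB, size - written)
                    f.write(b'\0' * block)
                    written += block
            index += 1

    # fill_partition('D:\\')  # hypothetical partition path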

Below is a screenshot of a run of FillPartition on Windows:


Tomato Cells under Microscope

To see tomato cells under a microscope, simply squeeze a bit of tomato juice onto a clean glass slide and gently place a cover slip over it.

Micrographs

Below is the micrograph of the tomato cells:

Tomato cells magnified 40 times

Tomato cells float in the juice and hence are not connected to each other. The thick black circles are air bubbles that got trapped between the slide and the cover slip; I was not able to get rid of them after a couple of tries.

Tomato cells are very big compared to onion skin cells; in fact, they are more than 25 times bigger! Below is a micrograph of onion skin cells for comparison:

Onion cells magnified 40 times


Search Box using QLineEdit

This week at work, I had to implement a search box for a piece of software I am working on. The search box filters some data dynamically as the user types a query. I wanted to show a clear (cross) icon on the right side of the search box so that the user can clear the results instead of selecting the current query and deleting it manually. Lastly, for clarity, I wanted to show a search icon on the left side of the search box. The search box looks like this:

Screenshot of the Search box implemented using QLineEdit

After the user enters a query, a clear icon appears on the right. The clear icon is in fact a button, and clicking it clears the current search.
Screenshot of the Search box with keywords implemented using QLineEdit

It is really easy to make this search box using QLineEdit; we need only a few lines of code.
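The original snippet is not reproduced here; below is a sketch of the equivalent calls in PyQt5 (the icon file name and the print slot are illustrative stand-ins):

    from PyQt5.QtGui import QIcon
    from PyQt5.QtWidgets import QApplication, QLineEdit

    app = QApplication([])

    search_box = QLineEdit()
    search_box.setClearButtonEnabled(True)  # clear (cross) icon on the right
    search_box.addAction(QIcon('search.png'), QLineEdit.LeadingPosition)  # search icon on the left
    search_box.setPlaceholderText('Search...')

    # Emitted both when the user types and when the clear button is clicked.
    search_box.textChanged.connect(print)

    search_box.show()
    app.exec_()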

The call to setClearButtonEnabled() enables the clear button, which adds the clear action and cross icon to the right. addAction() adds another action with a search icon to the left of the QLineEdit; we don’t listen to this action as it is merely decorative. Finally, setPlaceholderText() adds placeholder text which is shown in the QLineEdit but is cleared as soon as the user starts typing.

We only connect the textChanged(const QString&) signal, which is emitted both when the user clicks the cross icon and when they enter a search query.




Markdown to PDF Converter

A few weeks ago, I published SLogLib (a cross-platform logging library) on GitHub. I wrote the user manual in a readme.md file, as is the standard practice on GitHub. However, since most users don’t have markdown viewers installed on their machines, they would either need to visit the GitHub repository or convert the manual to a more popular format such as PDF or HTML. For many users, going online to access documentation is becoming standard practice, but I prefer offline manuals. Thus, I wanted to ship a PDF manual along with the code.

I searched high and low for a standalone tool to convert markdown to PDF, but surprisingly there are not a lot of options out there. The first tool I came across was GitPrint. It is conceptually innovative and straightforward to use with GitHub: just add /your_user_name/repository_name to the end of http://gitprint.com and it prints the readme.md in the repository to PDF. The generated PDF is of good quality, but there are no styling options. Also, it failed to include images in the PDF, so I had to keep looking. One frequently recommended tool is Pandoc, a swiss-army knife for converting files from one markup format into another. However, in my experience it doesn’t do a good job of converting markdown to PDF. Another popular option is the markdown-pdf package for Node.js; since I have no prior experience with Node.js, I haven’t tried it yet.

Earlier this year, I bought a MacBook Pro and installed a markdown editor called MacDown. It is a really nice tool with side-by-side rendering of markdown source and HTML preview. It can export markdown as PDF and produces very good quality PDFs. It also supports lots of styling options, as well as CSS to customize the PDF generation. In the end, I used it to generate the PDF for SLogLib.

Even though I now had a PDF for SLogLib, I still wanted to find or build a standalone cross-platform tool to convert markdown to PDF.

The basic idea for converting markdown to PDF is simple: first convert the markdown to HTML, and then print the HTML to PDF. I used hoedown to convert markdown to HTML for several reasons:

  1. First and foremost, it is cross-platform and compiles into a standalone binary on all three main platforms: Windows, Linux, and OSX.
  2. MacDown uses it too and I was quite happy with its rendering.
  3. It supports not only standard markdown but also several non-standard extensions.

To convert HTML to PDF, one of the most popular tools I came across was wkhtmltopdf. It is also cross-platform and compiles into standalone binaries for all popular platforms; in fact, pre-built binaries can be downloaded right from its website. Wkhtmltopdf uses a modified version of the webkit engine shipped with Qt to render the HTML and print it to PDF. However, while testing I found that on Windows 7 machines there is a serious problem with font kerning. It has been reported by a lot of users, but I haven’t found a fix for it. Otherwise, wkhtmltopdf would have been ideal, as I could simply have written a command line and/or GUI tool wrapping the functionality of hoedown and wkhtmltopdf.
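Such a wrapper could be as simple as the sketch below (assuming the hoedown and wkhtmltopdf binaries are on the PATH; the file names are illustrative):

    import subprocess

    # Step 1: markdown -> HTML. hoedown writes the generated HTML to stdout.
    with open('manual.html', 'w') as html_file:
        subprocess.run(['hoedown', 'readme.md'], stdout=html_file, check=True)

    # Step 2: HTML -> PDF.
    subprocess.run(['wkhtmltopdf', 'manual.html', 'manual.pdf'], check=True)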

Screenshot of PDF generated from MacDown.

Screenshot of PDF generated from wkhtmltopdf.

I could not find any other standalone cross-platform tool to convert HTML to PDF, so for now I decided to use dompdf, which is written in PHP. Once I started using PHP, I thought: why not make it a web-based tool? This would also allow me to learn about SEO, which I have been promising myself to do one day :). The tool is hosted at http://markdown2pdf.com. At the moment, it doesn’t appear in the first five pages of Google results for “markdown to pdf” or “markdown 2 pdf”. I am playing with various SEO tools and techniques and hope to get it within the first five pages.

My quest for a standalone tool is not yet complete. I will try to find a solution for the wkhtmltopdf kerning issue, or find another standalone cross-platform tool for converting HTML to PDF, and will post my findings on this blog.




Onion Cells under Microscope

In this post, I will show how to make a wet mount slide for looking at onion cells under a microscope.

Making the slide

  1. Take a clean slide and place a drop of water in the centre.
  2. Take a small piece of onion and carefully peel the translucent membrane from its rough underside. To peel the membrane, you can use either a sharp blade or a pair of tweezers. It is important to do this step carefully so as not to break too many cells, so ideally always hold the peeled membrane by its edges.
  3. Now carefully place the membrane in the drop of water placed earlier on the slide.
  4. You may want to put a small drop of tincture of iodine over the onion membrane; this helps create contrast between the cell nuclei and the other parts of the cells.
  5. Finally, gently lower a cover slip over the membrane.

Micrographs

Below are the micrographs of the onion cells. The nuclei are the small dark circles, and the thick black lines are the cell walls.

Onion cells magnified 40 times.

Onion cells magnified 100 times.