Wednesday, November 8, 2017

Setting up Your Web App with Flask

With Python Flask, you can easily set up an interactive web application. In this tutorial, we will build a simple deep neural network server that classifies a given image into one of 1,000 categories.

Let's create a folder where all your web app files will reside.
$ mkdir ~/web_app
$ cd ~/web_app

Install the Flask, Keras, and OpenCV modules:
$ pip install flask keras opencv-python

Next, create two more folders, one for the HTML templates and one for uploaded images:
$ mkdir templates uploads

Create the main application file and copy the code below:
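A minimal sketch of what the server file could look like (assumed here to be named app.py; the route names, the "image" form field, and the classify helper are illustrative, and only direct file uploads are handled, with URL fetching omitted for brevity):

import os

import cv2
import numpy as np
from flask import Flask, request, render_template
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = 'uploads'

# load the pre-trained ResNet50 model once at startup
model = ResNet50(weights='imagenet')

def classify(image_path):
    # read the image, resize it to the 224x224 input ResNet50 expects,
    # and return the top-5 ImageNet predictions as (name, probability) pairs
    img = cv2.imread(image_path)
    img = cv2.resize(img, (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
    x = preprocess_input(np.expand_dims(img, axis=0))
    preds = model.predict(x)
    return [(name, float(prob)) for (_, name, prob) in decode_predictions(preds, top=5)[0]]

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/predict', methods=['POST'])
def predict():
    # save the uploaded file into the uploads folder, then classify it
    f = request.files['image']
    path = os.path.join(app.config['UPLOAD_FOLDER'], f.filename)
    f.save(path)
    results = classify(path)
    return render_template('predict.html', results=results)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8888)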

Create the templates/index.html file and copy the code below:
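A minimal sketch of what templates/index.html could contain; the "image" field name matches the app.py sketch above:

<!DOCTYPE html>
<html>
  <head><title>Image Classifier</title></head>
  <body>
    <h1>ResNet50 Image Classifier</h1>
    <!-- posts the chosen file to the /predict route defined in the app.py sketch -->
    <form action="/predict" method="post" enctype="multipart/form-data">
      <input type="file" name="image">
      <input type="submit" value="Classify">
    </form>
  </body>
</html>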

Create the templates/predict.html file and copy the code below:
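A minimal sketch of what templates/predict.html could contain; it renders the results list passed in by the app.py sketch above:

<!DOCTYPE html>
<html>
  <head><title>Prediction</title></head>
  <body>
    <h1>Top-5 Predictions</h1>
    <ul>
      {% for name, prob in results %}
        <li>{{ name }}: {{ '%.3f' % prob }}</li>
      {% endfor %}
    </ul>
    <a href="/">Classify another image</a>
  </body>
</html>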

That's it! Your web app will classify a given image, either uploaded directly from the client or fetched from a web URL, using the ResNet50 pre-trained network.

To run the server, run
$ python

While the server is running, you can browse to http://SERVER_IP:8888 to view the web app, where of course SERVER_IP must be replaced with the server's actual IP address.

Wednesday, November 1, 2017

Running Keras Deep Neural Network Inference Web Server using WebDNN

A web app is a great tool for a simple demo that works across platforms. I find it very useful because I can easily show my trained model's output to anyone. The catch is setting up the server, but once it is up and running, it is a great way to showcase your trained model on your resume or to your boss.

Here is a tutorial for setting up a simple neural network inference server using the WebDNN library. Although its GitHub page and official documentation are very clear, I still had to spend some time to get it to work. In addition, the official documentation instructs users to compile and install emscripten from source, which takes quite some time; I will show you how to get around this by installing a pre-compiled version.

First, clone the WebDNN repository from GitHub:
$ git clone && cd webdnn

Note that WebDNN only supports Python 3.6+, so you need to install it unless you already have it on your system. To check your Python 3 version, run
$ python3 --version

Make sure that it is 3.6 or later. There are plenty of resources on how to install Python 3.6+. For instance, on Mac OS X, using Homebrew is probably the easiest way:
$ brew install python3

Once you have Python 3.6+, it is a good idea to create a virtual environment for this work.
$ virtualenv -p `which python3` python3

Now, activate the environment and install the necessary packages
$ source python3/bin/activate
$ pip install tensorflow-gpu keras h5py

The Python environment is now complete. Next, you need to set up the emscripten environment.
$ git clone && cd emsdk
$ ./emsdk install latest
$ ./emsdk activate latest
$ source ./

Now, we need to install the Eigen library.
$ wget
$ tar jxf 3.3.3.tar.bz2
$ export CPLUS_INCLUDE_PATH=$PWD/eigen-eigen-67e894c6cd8f
$ cd ..

Finally, we are ready. Let's first create the pre-trained ResNet50 Keras model that we will use. Start Python and run the following lines
$ python
>>> from keras.applications import resnet50
>>> model = resnet50.ResNet50(include_top=True, weights='imagenet')
>>> model.save('resnet50.h5')

Exit Python and run the following
$ python ./bin/ resnet50.h5 --input_shape '(1,224,224,3)' --out output

After some time, it will generate files in the output directory. To run the server, we first need to modify the example/resnet/script.js file so that the weight path points to the correct directory. For the version I have, I modified line 39 to point to the output directory.
let runner = await WebDNN.load(`/output`, {backendOrder: backend_name});

Finally, we are ready to run the server. To start it, run the following in the webdnn directory
$ python -m http.server

In your web browser, go to localhost:8000/example/resnet/index.html

You should be able to test ResNet50 model!

Tuesday, October 24, 2017

Using RAM Disk for Expediting Neural Net Training

These days I am constantly experimenting with models of different architectures and hyperparameters. Because I am working with lots of images, I realized that loading the training and validation images from disk every epoch takes quite a bit of time. Yes, I am using a solid state drive, but it is still slow compared to RAM. To speed up training, I have been caching the images in RAM in my Python code, which definitely helps.

However, there are two main issues. The first is that the raw training files are usually JPEG images, which do not take up much space on disk. When I cache the image data in Python, however, I store the decoded images as numpy arrays in bitmap format, which takes significantly more space.

The second issue is that I have two GPUs in the server, which share the CPU and RAM. When I run a different model on each GPU but both use the same training data, I end up with two copies of the exact same dataset in memory.

So I was looking for a solution, and I found one that is very easy to implement. On Linux, there is a way to create a RAM disk, which is basically a chunk of RAM that the system treats as if it were a disk. I mount a RAM disk and have my programs read the training data from the mount location.

Here is how to do it. The instructions below are based on this excellent article. First, create a folder where the RAM disk will be mounted.
$ sudo mkdir /mnt/ramdisk

Next, mount the RAM disk with the desired size. For example,
$ sudo mount -t tmpfs -o size=1024m tmpfs /mnt/ramdisk

Finally, copy your training data into this folder and make sure to have your training code point to the new location.
$ cp -r /your/training/data /mnt/ramdisk

Happy training!

** Something to keep in mind **
- This is RAM, so the contents will be gone every time your system reboots
- Make sure you have enough free RAM for your data to fit

Monday, October 16, 2017

Thread-Safe Generators in Python

In this post, we will go over how to create thread-safe generators in Python. This post is heavily based on this excellent article.

Generators make life so much easier when coding in Python, but there is a catch: raw generators are not thread-safe. Consider the example below:
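A minimal sketch of the kind of code that exhibits the problem (illustrative; the generator and thread setup are assumptions):

import threading

def count_up_to(n):
    # plain generator: yields 0, 1, ..., n-1
    i = 0
    while i < n:
        yield i
        i += 1

gen = count_up_to(1000000)
results = []

def consume():
    for value in gen:
        results.append(value)

threads = [threading.Thread(target=consume) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# with several threads sharing the raw generator, this may raise
# "ValueError: generator already executing" or lose values
print(len(results))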

We see that the generator does not behave correctly when multiple threads access it at the same time.

One easy way to make it thread-safe is to create a wrapper class that, using a threading lock, lets only one thread execute the generator's next method at any given time. This is shown below:
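A minimal sketch of such a wrapper (the class name is an assumption):

import threading

class ThreadSafeIterator(object):
    # wraps an iterator/generator so that only one thread at a time
    # can call its next method
    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        with self.lock:
            return next(self.it)

    next = __next__  # Python 2 compatibility

# wrap the generator from the snippet above
gen = ThreadSafeIterator(count_up_to(1000000))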

Note that the generator is now thread-safe, but it does not execute its next method in parallel. You can also use a Python decorator to make it look even cleaner, although it does basically the same thing.
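A sketch of the decorator version, reusing the ThreadSafeIterator class from the sketch above:

def threadsafe_generator(f):
    # decorator that wraps a generator function so its iterator is thread-safe
    def wrapper(*args, **kwargs):
        return ThreadSafeIterator(f(*args, **kwargs))
    return wrapper

@threadsafe_generator
def count_up_to(n):
    i = 0
    while i < n:
        yield i
        i += 1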

Monday, October 2, 2017

Three Minutes Daily Vim Tip: Disable a Key

The letter 'J' in vim is mapped to a shortcut that joins two lines. Unfortunately, the letter 'j' is used a lot for navigation, and I often mistakenly press Shift along with 'j'. This is quite annoying, so I decided to simply disable the shortcut.

To do this, open up your ~/.vimrc file and add the following line:
nnoremap J <nop>

That's it!

Sunday, October 1, 2017

Applying Common Changes to Multiple Branches in Git

Say you have two branches, branchA and branchB.

Assume, for illustration, that branchA contains
file1.txt
file2.txt

whereas branchB contains
file1.txt
file3.txt

Say you want to make a change common to both branchA and branchB; for example, you want to add the same file, file4.txt, to each, so that branchA becomes
file1.txt
file2.txt
file4.txt

and branchB becomes
file1.txt
file3.txt
file4.txt

To do this, you first need to commit to either branch, say branchA.
$ git checkout branchA
$ # write file4.txt
$ git add file4.txt
$ git commit file4.txt -m 'add file4'

Next, you just need to copy that last commit using git cherry-pick
$ git checkout branchB
$ git cherry-pick branchA

If you have more branches, simply repeat the cherry-pick step on branchC, branchD, and so forth.

Saturday, September 30, 2017

Tensorflow Fundamentals - K-means Cluster Part 2

In the previous post, I showed how to compute k-means clusters using Tensorflow. In this post, I will add a slightly more advanced implementation. In particular, I will show you how to implement a conditional statement in Tensorflow.

The difference lies in how the initial set of centroids is guessed. In the previous implementation, I simply chose k random points as the initial centroids. Here, instead, the first centroid is the point farthest from the origin, and the i-th initial centroid for i = {2,...,k} is chosen so that the sum of its distances to the previous i-1 centroids is the largest. This significantly reduces the number of iterations required to reach the final state.
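A minimal sketch of this initialization in TensorFlow 1.x style (illustrative; the function and variable names are assumptions). The tf.cond call picks the "farthest from the origin" rule when no centroids have been chosen yet, and the "largest summed distance to the chosen centroids" rule afterwards:

import numpy as np
import tensorflow as tf

def init_centroids(points, k):
    pts = tf.constant(points, dtype=tf.float32)                         # (N, D)
    chosen = tf.placeholder(tf.float32, shape=[None, points.shape[1]])  # centroids so far

    dist_from_origin = tf.reduce_sum(tf.square(pts), axis=1)            # (N,)
    dist_from_chosen = tf.reduce_sum(
        tf.norm(tf.expand_dims(pts, 1) - tf.expand_dims(chosen, 0), axis=2),
        axis=1)                                                          # (N,)

    # conditional statement: which distance measure to use depends on
    # whether any centroids have been chosen yet
    next_idx = tf.cond(tf.equal(tf.shape(chosen)[0], 0),
                       lambda: tf.argmax(dist_from_origin, axis=0),
                       lambda: tf.argmax(dist_from_chosen, axis=0))
    next_centroid = tf.gather(pts, next_idx)

    centroids = np.zeros((0, points.shape[1]), dtype=np.float32)
    with tf.Session() as sess:
        for _ in range(k):
            c = sess.run(next_centroid, feed_dict={chosen: centroids})
            centroids = np.vstack([centroids, c])
    return centroids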

Thursday, September 28, 2017

Simple Thread Pool Implementation in Python

Here is a very basic implementation of a thread pool class with callback support in Python 2, based on this reference. I added a callback parameter so that, for each task, the callback is invoked with the return value of the job.
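A minimal sketch of such a pool (illustrative; class and method names are assumptions, and the Queue import is guarded so the snippet also runs on Python 3):

import threading
try:
    from Queue import Queue   # Python 2
except ImportError:
    from queue import Queue   # Python 3

class Worker(threading.Thread):
    # consumes tasks from the shared queue
    def __init__(self, tasks):
        threading.Thread.__init__(self)
        self.tasks = tasks
        self.daemon = True
        self.start()

    def run(self):
        while True:
            func, args, kwargs, callback = self.tasks.get()
            try:
                result = func(*args, **kwargs)
                if callback is not None:
                    callback(result)   # invoke the callback with the return value
            except Exception as e:
                print(e)
            finally:
                self.tasks.task_done()

class ThreadPool(object):
    def __init__(self, num_threads):
        self.tasks = Queue(num_threads)
        for _ in range(num_threads):
            Worker(self.tasks)

    def add_task(self, func, args=(), kwargs=None, callback=None):
        self.tasks.put((func, args, kwargs or {}, callback))

    def wait_completion(self):
        self.tasks.join()

# usage: square numbers on 4 threads and collect results via the callback
if __name__ == '__main__':
    results = []
    pool = ThreadPool(4)
    for i in range(10):
        pool.add_task(lambda x: x * x, args=(i,), callback=results.append)
    pool.wait_completion()
    print(sorted(results))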

Saturday, September 23, 2017

Building Deep Learning Machine Under $2000 with Dual GTX 1080 GPUs

With my experimental models getting larger and larger, training takes too long. This is especially true because most of my models are vision-based, so they require a lot of computation and memory. Yes, I could use cloud computing such as AWS or GCP, so I did some calculations.

The cheapest monthly cost I found for an instance with 2x GTX 1080 Ti GPUs is $500 (AWS and GCP cost much more). In just four months of using the service, I would spend $2000 on the cloud.

Instead, I could spend $2000 once to build my own system with two GPUs, pay $50 or less each month for electricity, and train two models simultaneously. I could even sell the rig later when I need to upgrade; I expect a resale value of one third to one half of the system's cost in two years.

The answer is quite obvious at this point: I need to build my own rig. After some research, below is the list of parts, with justification where necessary.

CPU: AMD Ryzen 1700
This is an 8-core, 16-thread processor from AMD. Since most of the computation during training is performed by the GPU, not the CPU, I did not want to spend more than $300 on the CPU. I debated whether to get the even cheaper 1600, which has 6 cores and 12 threads at a higher clock speed; that could be a better option for neural network training, and both are good choices. However, at the time of buying, I could not get the Ryzen 1600 at its retail price because it was in such high demand.

I did not get an Intel CPU because they are overpriced at the moment, with the new-generation Coffee Lake imminent. If I could wait a few more months, Coffee Lake processors might be much better candidates than Kaby Lake.

GPU: 2x GTX 1080 8G
This was a tough call. I could get a 1060 6G, 1070 8G, 1080 8G, or 1080 Ti 11G. The best bang for the buck would be the 1060 6G, but I wanted more than 6GB of VRAM. Next up is the 1070 8G, but it was too expensive at the time due to high demand, costing around $500. Next is the 1080 8G, at around $550 with a more than 15% boost in performance. Next is the 1080 Ti 11G at $750, but that is too expensive compared to the 1080 8G, and the performance gain does not justify it. I therefore went with the GTX 1080 8G. In fact, I got 2x GTX 1080 8G to train two models simultaneously. If you are willing to spend an extra $500, you could go with 2x GTX 1080 Ti 11G.

AMD GPUs were not considered, as most deep learning libraries do not fully support AMD GPUs at the moment. I really hope AMD catches up with GPGPU support for deep learning libraries soon.

Mainboard: ASRock Fatal1ty X370 Gaming K4
This was one of the cheapest mainboards that support AMD Ryzen CPUs and run two PCI Express 3.0 slots at x8 each. Since I was getting two GPUs, I wanted to make sure that both get at least PCI Express 3.0 x8.

Yes, I could have chosen a CPU and mainboard that support dual PCI Express 3.0 x16, but that would sky-rocket the cost of the rig, and I don't think there is much performance difference between PCI Express 3.0 x8 and x16 for GTX 1080 cards (source). If you are getting the GTX 1080 Ti, you may want to opt for a high-end CPU with 32+ PCI Express lanes and a mainboard that gives each GPU a full x16 link, but then you would be spending around $3000 on the system.

RAM: 2x DDR4 2400 8G
I will get more RAM when prices come down a bit. Currently memory is just too expensive.

SSD: Samsung Evo 860 500G
Just get a decent SSD with at least 500GB. Absolutely no HDD, as that would significantly lower performance. Samsung's SSDs are renowned for speed and stability.

Power Supply: 850W Gold-rated
Maybe 850W is more than my configuration needs, but it is always better to choose a power supply with ample headroom; a cheap, low-quality, low-output supply can actually destroy the whole system! I roughly estimated 100W for the CPU, 200W for each GPU, and 100W for the rest. That is 600W in total, and I wanted a 200W margin just to be safe, although 700W+ would have worked just fine. For a dual GTX 1080 Ti configuration, you may want at least 850W.

Case: ATX Mid-Tower
Choose whatever you like as long as it is large enough to fit two GPUs and the motherboard. Most ATX mid-tower cases will do.

Note that you should select a case with plenty of fans and good ventilation. I made the mistake of getting a case with poor airflow, and the GPU temperature went up to 90C or more, so I had to buy and install additional fans to cool the cards down. It is very important to keep them cool enough, probably below 85C at full load. With GTX 1080 Ti GPUs, I imagine cooling will be even more critical.

OK, so the grand total, excluding monitor, mouse, keyboard, etc., is a bit more than $1900 before tax. With this configuration you can train a network that requires up to 16GB of VRAM, since you have 2x GTX 1080 8G, although you will need to distribute the workload between the two GPUs manually in your code.
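As a small illustration (not from the original parts list), TensorFlow lets you pin parts of a computation to specific GPUs with tf.device, which is the kind of manual placement meant here:

import tensorflow as tf

# place one large matrix multiplication on each GPU, combine the results on the CPU
with tf.device('/gpu:0'):
    a = tf.matmul(tf.random_normal([4096, 4096]), tf.random_normal([4096, 4096]))
with tf.device('/gpu:1'):
    b = tf.matmul(tf.random_normal([4096, 4096]), tf.random_normal([4096, 4096]))
with tf.device('/cpu:0'):
    total = a + b

# allow_soft_placement lets the graph still run on machines with fewer GPUs
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(total))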

I installed Ubuntu 16.04 LTS for now, although I may switch to CentOS later. After installing the NVIDIA CUDA toolkit, both GPUs are detected and can be used simultaneously with no problem. I did not connect them with SLI, since it is not needed for my purpose.

Good luck with configuring your system!

Tuesday, September 19, 2017

Tensorflow Fundamentals - K-means Cluster Part 1

Now that we are familiar with Tensorflow, let us actually write some code. In this series of posts, we are going to implement the k-means clustering algorithm with Tensorflow.

The k-means clustering algorithm divides a set of points into k clusters. The simplest algorithm is
1. choose k random points as the initial centroids
2. assign each point to the cluster whose centroid is closest to it
3. update the centroids by computing the geometric centroid of each cluster
4. repeat steps 2 & 3 until satisfied

Below is my bare-minimum implementation in Tensorflow.
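A minimal sketch of such an implementation in TensorFlow 1.x style (illustrative; the function name, the fixed iteration count, and the random initialization details are assumptions, and empty clusters are not handled for simplicity):

import numpy as np
import tensorflow as tf

def kmeans(points, k, num_iters=100):
    pts = tf.constant(points, dtype=tf.float32)                        # (N, D)
    # step 1: choose k random points as the initial centroids
    centroids = tf.Variable(tf.slice(tf.random_shuffle(pts), [0, 0], [k, -1]))

    # step 2: assign every point to its nearest centroid
    expanded_pts = tf.expand_dims(pts, 0)                              # (1, N, D)
    expanded_cents = tf.expand_dims(centroids, 1)                      # (k, 1, D)
    distances = tf.reduce_sum(tf.square(expanded_pts - expanded_cents), 2)
    assignments = tf.argmin(distances, 0)                              # (N,)

    # step 3: move each centroid to the mean of the points assigned to it
    means = tf.concat([
        tf.expand_dims(tf.reduce_mean(
            tf.gather(pts, tf.reshape(tf.where(tf.equal(assignments, c)), [-1])),
            axis=0), 0)
        for c in range(k)], axis=0)
    update = tf.assign(centroids, means)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(num_iters):                                     # step 4: repeat
            sess.run(update)
        return sess.run(centroids), sess.run(assignments)

if __name__ == '__main__':
    data = np.random.randn(300, 2).astype(np.float32)
    centers, labels = kmeans(data, k=3)
    print(centers)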