Monday, October 16, 2017

Thread-Safe Generators in Python

In this post, we will go over how to create thread-safe generators in Python. This post is heavily based on this excellent article.

Generators make life so much easier when coding in Python, but there is a catch: raw generators are not thread-safe. Consider the example below:
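(The embedded snippet does not appear here, so below is a minimal sketch of the problem; the names, such as count and worker, are my own, not from the original post.)

import threading

def count(n):
    i = 0
    while i < n:
        yield i
        i += 1

gen = count(1000000)

def worker():
    try:
        for _ in range(1000):
            next(gen)
    except ValueError as e:
        # Two threads advancing the same generator at once can raise
        # "ValueError: generator already executing".
        print(e)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()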

We see that the generator does not produce correct output when multiple threads access it at the same time.

One easy way to make it thread-safe is to create a wrapper class that lets only one thread execute the generator's next method at any given time, using a threading lock. This is shown below:
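(Again, the embedded snippet does not appear here; below is a minimal sketch of such a wrapper, with ThreadSafeIterator as my own name for the class.)

import threading

class ThreadSafeIterator(object):
    """Wrap an iterator/generator so that only one thread at a time
    can advance it."""

    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        # The lock serializes calls to the underlying generator.
        with self.lock:
            return next(self.it)

    next = __next__  # Python 2 compatibility

# Usage: gen = ThreadSafeIterator(count(1000000))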

Note that the generator is now thread-safe but no longer executes its next method in parallel. You can also use a Python decorator to make it look even cleaner, although it does basically the same thing:
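(A sketch of the decorator approach, reusing the ThreadSafeIterator wrapper from the previous snippet.)

def threadsafe_generator(f):
    """Decorate a generator function so that the generators it
    returns are wrapped in ThreadSafeIterator (defined above)."""
    def wrapped(*args, **kwargs):
        return ThreadSafeIterator(f(*args, **kwargs))
    return wrapped

@threadsafe_generator
def count(n):
    i = 0
    while i < n:
        yield i
        i += 1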

Monday, October 2, 2017

Three Minutes Daily Vim Tip: Disable a Key

The letter 'J' in vim is mapped to a shortcut that joins two lines. Unfortunately, 'j' is used constantly for navigation, and I often press shift along with 'j' by mistake. This is quite annoying, so I decided to simply disable the shortcut.

To do this, open up your ~/.vimrc file and add the following line:
nnoremap J <nop>

That's it!


Sunday, October 1, 2017

Applying Common Changes to Multiple Branches in Git

Say you have two branches, branchA and branchB.

Assume that in branchA you have
file1.txt
file3.txt

whereas in branchB you have
file2.txt
file3.txt

Say you want to make some change that will be common to both branchA and branchB; that is, for example, you want to add the same file to them, so that branchA shall become
file1.txt
file3.txt
file4.txt

and branchB shall become
file2.txt
file3.txt
file4.txt

To do this, first commit the change to one of the branches, say branchA.
$ git checkout branchA
$ # write file4.txt
$ git add file4.txt
$ git commit file4.txt -m 'add file4'

Next, you just need to copy that last commit over using git cherry-pick:
$ git checkout branchB
$ git cherry-pick branchA

If you have more branches, simply repeat the cherry-pick step for branchC, branchD, and so forth.

Saturday, September 30, 2017

Tensorflow Fundamentals - K-means Cluster Part 2

In the previous post, I showed how to compute k-means clusters using Tensorflow. In this post, I will add a slightly more advanced implementation. In particular, I will show you how to implement a conditional statement in Tensorflow.
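(The embedded code does not appear here; as a stand-in, below is a minimal demo of tf.cond, Tensorflow's graph-level conditional. This is my own sketch, not the post's original code.)

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[])
# Evaluates x * 2 when x > 0, and x - 1 otherwise; each branch is
# supplied as a function.
y = tf.cond(x > 0, lambda: x * 2, lambda: x - 1)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 3.0}))   # 6.0
    print(sess.run(y, feed_dict={x: -3.0}))  # -4.0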


The difference lies in how the initial set of centroids is guessed. In the previous implementation, I simply chose k random points as the initial centroids. Here, instead, I select the first centroid to be the point furthest away from the origin. The i-th initial centroid, for i = 2, ..., k, is then chosen such that the sum of its distances from the previous i-1 centroids is the largest. This way, we can significantly reduce the number of iterations required to reach the final state.
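(Since the embedded implementation does not appear here, below is a minimal sketch of this initialization scheme; the names, shapes, and random stand-in data are my own assumptions.)

import tensorflow as tf

N, k = 100, 4
points = tf.random_uniform([N, 2], seed=0)  # stand-in for the real data

# First centroid: the point furthest from the origin.
dist_from_origin = tf.sqrt(tf.reduce_sum(tf.square(points), axis=1))
first = tf.gather(points, tf.argmax(dist_from_origin, axis=0))

def next_centroid(centroids):
    # Pick the point whose summed distance to all centroids chosen
    # so far is the largest.
    diff = tf.expand_dims(points, 1) - tf.expand_dims(centroids, 0)  # (N, i, 2)
    dists = tf.sqrt(tf.reduce_sum(tf.square(diff), axis=2))          # (N, i)
    return tf.gather(points, tf.argmax(tf.reduce_sum(dists, axis=1), axis=0))

centroids = tf.expand_dims(first, 0)
for _ in range(k - 1):
    nxt = tf.expand_dims(next_centroid(centroids), 0)
    centroids = tf.concat([centroids, nxt], axis=0)

with tf.Session() as sess:
    print(sess.run(centroids))  # k initial centroids, spread far apart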

Thursday, September 28, 2017

Simple Thread Pool Implementation in Python

Here is a very basic implementation of a thread pool class with callback support in Python 2, based on this reference. I have added a callback parameter so that, for each task, the callback is invoked with the return value from the job.
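(The embedded code does not appear here, so below is a minimal sketch of such a pool; the class and method names are mine, not necessarily those of the original.)

import threading
try:
    from Queue import Queue   # Python 2
except ImportError:
    from queue import Queue   # Python 3

class ThreadPool(object):
    """A bare-bones thread pool; each task may carry a callback that
    receives the job's return value (invoked on the worker thread)."""

    def __init__(self, num_threads):
        self.tasks = Queue()
        for _ in range(num_threads):
            t = threading.Thread(target=self._worker)
            t.daemon = True
            t.start()

    def _worker(self):
        while True:
            func, args, kwargs, callback = self.tasks.get()
            try:
                result = func(*args, **kwargs)
                if callback is not None:
                    callback(result)
            except Exception as e:
                print(e)  # a real pool would report errors properly
            finally:
                self.tasks.task_done()

    def add_task(self, func, args=(), kwargs=None, callback=None):
        self.tasks.put((func, args, kwargs or {}, callback))

    def wait_completion(self):
        self.tasks.join()

if __name__ == '__main__':
    def on_done(result):
        print(result)

    pool = ThreadPool(4)
    for i in range(10):
        pool.add_task(lambda x: x * x, args=(i,), callback=on_done)
    pool.wait_completion()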


Saturday, September 23, 2017

Building Deep Learning Machine Under $2000 with Dual GTX 1080 GPUs

With my experimental models getting larger and larger, training takes too long. This is especially true as most of my models are vision-based, so they require a lot of computation and memory. Yes, I could use cloud computing, such as AWS or GCP, so I did some calculations.

The cheapest monthly cost I found for an instance with 2x GTX 1080 Ti GPUs is $500 (AWS and GCP cost much more). In just four months of using the service, I would spend $2000 on the cloud.

Instead, I could spend $2000 once to build my own system with 2 GPUs, pay $50 or less each month for electricity, and train two models simultaneously. I could even sell the rig later on when I need to upgrade; I expect a resale value of 1/3 to 1/2 of the system's cost in 2 years.

The answer is quite obvious at this point: I need to build my own rig. After some research, below is the list of parts, with justifications where relevant.

CPU: AMD Ryzen 1700
This is an 8-core, 16-thread processor from AMD. Since most of the computation during training is performed by the GPU, not the CPU, I did not want to spend more than $300 on the CPU. I debated whether to get an even cheaper one, the 1600, which has 6 cores and 12 threads at a higher clock speed; that could be a better option for neural network training, and both are good choices. However, at the time of buying, I could not get the Ryzen 1600 at its retail price because it was in such high demand.

I did not get an Intel CPU because they are overpriced at the moment, with the new-generation Coffee Lake imminent. If I could wait a few more months, Coffee Lake processors might be much better candidates than Kaby Lake.


GPU: NVIDIA GTX 1080
This was a tough call. I could get a 1060 6G, a 1070 8G, a 1080 8G, or a 1080 Ti 11G. The best bang for the buck would be the 1060 6G, but I wanted more than 6GB of VRAM. Next up is the 1070 8G, but it was too expensive at the time due to high demand, costing around $500. Next up is the 1080 8G, at around $550 with a more than 15% boost in performance. Next up is the 1080 Ti 11G at $750, but it is too expensive compared to the 1080 8G, and the performance gain does not justify the price. I therefore went with the GTX 1080 8G. In fact, I got 2x GTX 1080 8G to train two models simultaneously. If you are willing to spend an extra $500, you could go with 2x GTX 1080 Ti 11G.

AMD GPUs were not considered, as most deep learning libraries do not fully support them at the moment. I really hope AMD catches up with GPGPU support for deep learning libraries soon.

Mainboard: ASRock Fatal1ty X370 Gaming K4
This was one of the cheapest mainboards that supports AMD Ryzen CPUs and runs two PCI Express 3.0 slots at 8 lanes each. Since I was getting two GPUs, I wanted to make sure that each GPU gets at least PCI Express 3.0 x8.

Yes, I could have chosen a CPU and mainboard that support dual PCI Express 3.0 x16, but this would skyrocket the cost of the rig, and I don't think there is much performance difference between 8 and 16 PCI Express 3.0 lanes for GTX 1080 cards (source). If you are getting the GTX 1080 Ti series, you may want to opt for a high-end CPU with 32+ PCI Express lanes and a mainboard that gives each GPU the full 16 lanes, but then you would have to spend $3000 or so on the system.

RAM: 2x DDR4 2400 8G
I will get more RAM when the cost goes down a bit. Currently, memory prices are just too high.

SSD: Samsung Evo 860 500G
Just get a decent SSD of 500GB or more. Absolutely no HDD, as it will significantly hurt performance. Samsung's SSDs are renowned for speed and stability.

Power Supply: 850W Gold-rated
Maybe 850W is too much for my configuration, but it is always better to choose a power supply with abundant output; a cheap, low-quality, low-output supply can actually destroy the whole system! I roughly estimated 100W for the CPU, 200W for each GPU, and 100W for the rest. That is 600W in total, and I wanted a 200W margin just to be safe, though 700W+ would have worked just fine. For a dual GTX 1080 Ti configuration, you may want at least 850W.

Case: ATX Mid-Tower
Choose whatever you like as long as it is large enough to fit two GPUs and the motherboard. Most ATX mid-tower cases should do.

Cooling
Note that you should select a case with plenty of fans and good ventilation. I made the mistake of getting a case that wasn't very good at cooling: the GPU temperature went up to 90C or more, and I had to buy and install additional fans to cool the cards down. It is very important to keep them cool enough, ideally below 85C at full load. With GTX 1080 Ti GPUs, I imagine cooling will be even more critical.


OK, so the grand total, excluding monitor/mouse/keyboard, etc., is a bit more than $1900 before tax. With this configuration you can train a network that requires up to 16GB of VRAM, since you have 2x GTX 1080 8G, although you will need to distribute the workload between the two GPUs manually in your code.

I installed Ubuntu 16.04 LTS for now, although I may switch to CentOS later on. After installing the NVIDIA CUDA toolkit, I can successfully detect both GPUs and use them simultaneously with no problems. I did not connect them with SLI, though, since it is not needed for my purposes.

Good luck with configuring your system!

Tuesday, September 19, 2017

Tensorflow Fundamentals - K-means Cluster Part 1

Now that we are familiar with Tensorflow, let us actually write some code. For this series of posts, we are going to implement the k-means clustering algorithm with Tensorflow.

The k-means clustering algorithm divides a set of points into k clusters. The simplest version is:
1. choose k random points as the initial centroids
2. assign every point to the cluster whose centroid is closest
3. update the centroids by computing the geometric centroid of each cluster
4. repeat steps 2 & 3 until satisfied

Below is my bare-minimum implementation in Tensorflow.
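(That embedded snippet does not appear here; the following is a rough reconstruction of the algorithm above in the TF 1.x API, with my own variable names and toy data.)

import numpy as np
import tensorflow as tf

N, k, dim, steps = 200, 3, 2, 10
data = np.random.rand(N, dim).astype(np.float32)
points = tf.constant(data)

# Step 1: choose k random points as the initial centroids.
centroids = tf.Variable(tf.slice(tf.random_shuffle(points), [0, 0], [k, dim]))

# Step 2: assign every point to the nearest centroid.
diff = tf.expand_dims(points, 1) - tf.expand_dims(centroids, 0)   # (N, k, dim)
assignments = tf.argmin(tf.reduce_sum(tf.square(diff), axis=2), axis=1)

# Step 3: move each centroid to the mean of its assigned points
# (assumes no cluster ends up empty, which would yield NaN).
means = tf.concat([
    tf.reduce_mean(tf.boolean_mask(points, tf.equal(assignments, c)),
                   axis=0, keep_dims=True)
    for c in range(k)], axis=0)
update = tf.assign(centroids, means)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(steps):        # step 4: repeat until satisfied
        sess.run(update)
    print(sess.run(centroids))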

Tensorflow Fundamentals - Interactive Session

As I have discussed in my previous posts, Tensorflow's computation graph will not evaluate an expression unless one explicitly asks it to do so. One may find this quite annoying during debugging, so Tensorflow provides what is called an InteractiveSession, which lets you evaluate expressions as you go, with minimal code.

This is really simple; just call
sess = tf.InteractiveSession()

at the beginning, and it will act like the block
with tf.Session() as sess:

See the code below.
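(The embedded code does not appear here; a minimal sketch:)

import tensorflow as tf

sess = tf.InteractiveSession()  # registers itself as the default session

a = tf.constant(3)
b = tf.constant(4)
c = a + b

# With a default session in place, eval() works without an enclosing
# `with tf.Session() as sess:` block.
print(c.eval())  # 7

sess.close()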

Sunday, September 17, 2017

Tensorflow Fundamentals - Computation Graph Part 3


Next up, we want to evaluate an expression from given input values. Let us construct a function (graph)
f(x,y) = x + y

where x, y are input values to be fed into the graph f. To do this, we need to use the tf.placeholder method to define the input nodes and supply the feed_dict parameter to the run method, as shown below:
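(The embedded code does not appear here; below is a sketch of what tf_computation_graph_p3.py plausibly looks like, reconstructed from the output that follows.)

import random
import tensorflow as tf

x = tf.placeholder(tf.int32)  # input nodes of the graph
y = tf.placeholder(tf.int32)
f = tf.add(x, y)              # f(x, y) = x + y

with tf.Session() as sess:
    for _ in range(10):
        a = random.randint(-10, 10)
        b = random.randint(-10, 10)
        # feed_dict supplies concrete values for the placeholders
        print('%d + %d = %d' % (a, b, sess.run(f, feed_dict={x: a, y: b})))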

The code is simple enough to be self-explanatory. The output should look similar to:
$ python tf_computation_graph_p3.py 2>/dev/null
8 + -6 = 2
-6 + 4 = -2
-10 + -1 = -11
-6 + 0 = -6
5 + 9 = 14
7 + 6 = 13
3 + 8 = 11
3 + 6 = 9
5 + -4 = 1
0 + -3 = -3

Tensorflow Fundamentals - Computation Graph Part 2

Let's continue our journey on Tensorflow's computation graph.

We will now make use of Tensorflow's variables. They are different from constants in that
1. they are mutable during execution, i.e., they can change their values
2. they store their state (value), which equals the value from the latest execution

For example, you can define a counter variable that will increment its value on each execution:
counter = counter + 1

Below is the simple demo code for doing this in Tensorflow:
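(The embedded snippet does not appear here; a minimal sketch that reproduces the output shown further down:)

import tensorflow as tf

counter = tf.Variable(0)                         # mutable state
increment = tf.assign(counter, counter + 1)      # counter = counter + 1

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables must be initialized
    print(sess.run(counter))                     # 0
    for _ in range(5):
        print(sess.run(increment))               # 1, 2, 3, 4, 5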

The only catch here is that
1. tf.assign is another operation that assigns a new value to the variable on each execution (run)
2. all variables must be initialized before the graph is run

Running it will output
0
1
2
3
4
5

So far so good!