Sunday, September 17, 2017

Tensorflow Fundamentals - Computation Graph Part 1

This post is intended for anyone, including myself, who is having difficulty grasping one of the most basic concepts of Tensorflow: the computation graph. It is heavily based on Tensorflow's official documentation.

The way Tensorflow and Theano operate is different from most other frameworks in that there are two distinct phases. In the first phase you build the computation graph, which defines the computations to be performed. It is like a function: given some inputs, it will spit out outputs. For example, let us say you define
f(x,y) = x + y

Then f is the computation graph in Tensorflow. This phase is called the construction phase.

In the second phase, one executes the graph by feeding in the inputs. For example,
f(1,2) = 3
f(3,2) = 5

and so on for any x, y pair you feed in. The graph will output numerical values given numerical inputs. This phase is called the execution phase.
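
Here is a minimal sketch of these two phases in code (assuming Tensorflow 1.x, which is what was current at the time of writing; I use placeholder ops here so that different x, y values can be fed in at each run):

import tensorflow as tf

# Construction phase: define the graph for f(x, y) = x + y
x = tf.placeholder(tf.float32, name="x")
y = tf.placeholder(tf.float32, name="y")
f = tf.add(x, y, name="f")

# Execution phase: run the graph with concrete inputs
with tf.Session() as sess:
    print(sess.run(f, feed_dict={x: 1, y: 2}))  # 3.0
    print(sess.run(f, feed_dict={x: 3, y: 2}))  # 5.0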

One difference from an ordinary function, however, is that not only the mathematical operations, such as addition, subtraction, multiplication, and division, but also the variables themselves are treated as operation nodes ("ops") in Tensorflow's computation graphs. Thus, with the example above, we now have three ops:
x
y
+

Here, x and y are constant ops, meaning that their values are constants fed in directly during the execution phase. The output of the graph f can in turn be fed into a more complex graph.
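
If you want to see these three ops for yourself, one way (again assuming Tensorflow 1.x, where every op is registered on the default graph) is to build the graph with constants and list the graph's operations:

import tensorflow as tf

x = tf.constant(1.0, name="x")
y = tf.constant(2.0, name="y")
f = tf.add(x, y, name="f")

# List every node registered on the default graph
for op in tf.get_default_graph().get_operations():
    print(op.name, op.type)
# x Const
# y Const
# f Add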

Let's do a very simple example in code. We will define the computation graph for
f = pi + 1

and compute it for the constant pi = 3.14159...
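
A minimal version of such a script (again a sketch, assuming Tensorflow 1.x) could look like this:

import tensorflow as tf

# Construction phase: build the graph f = pi + 1
pi = tf.constant(3.14159, name="pi")
one = tf.constant(1.0, name="one")
f = tf.add(pi, one, name="f")

# Execution phase: evaluate the graph in a session
with tf.Session() as sess:
    print(sess.run(f))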


The output of the script should be
4.14159

So far so easy. We will progressively construct and execute more complex and useful graphs, so stay with me.
