TensorFlow is, quite literally, the flow of tensors, but we didn't go into much detail. To better justify the architectural decisions, we will elaborate a bit on this.

## Three types of tensors
In TensorFlow, there are three primary types of tensors:
- tf.Variable
- tf.constant
- tf.placeholder
It's worth taking a look at each of these to discuss their differences and when each should be used.

## tf.Variable
The tf.Variable tensor is the most straightforward basic tensor, and it is in many ways analogous to a pure Python variable in that its value is, well, variable.
Variables retain their value for the duration of the session, which makes them useful for defining learnable parameters such as the weights of a neural network, or anything else that will change as the code runs.
You define a variable as follows:
```python
a = tf.Variable([1, 2, 3], name="a")
```
Here, we create a tensor variable with the initial state [1,2,3] and the name a. Note that TensorFlow cannot inherit the Python variable name, so if you want the tensor to have a name on the graph (more on that later), you need to specify one.
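To see the mutability in action, here is a minimal, self-contained sketch (assuming the TF 1.x session API) that re-creates the variable above, updates it with tf.assign, and reads back its graph name:

```python
import tensorflow as tf

a = tf.Variable([1, 2, 3], name="a")
update = tf.assign(a, [4, 5, 6])  # an op that overwrites a's value

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables must be initialized
    print(a.name)        # a:0 -- the name we gave it on the graph
    print(sess.run(a))   # [1 2 3] -- the initial state
    sess.run(update)     # run the assignment op
    print(sess.run(a))   # [4 5 6] -- the variable keeps its new value
```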
There are a few more options, but this is only meant to cover the basics.

## tf.constant
The tf.constant is very similar to tf.Variable, with one major difference: it is immutable, that is, its value is constant (wow, Google really nailed the naming of tensors).
The usage follows that of the tf.Variable tensor:
```python
b = tf.constant([1, 2, 3], name="b")
```
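As a brief sketch (again assuming TF 1.x; the names lr and scaled are just for illustration), a constant participates in operations like any other tensor, but there is no assign op for it, since its value is baked in at graph-construction time:

```python
import tensorflow as tf

lr = tf.constant(0.01, name="learning_rate")  # a fixed hyperparameter
b = tf.constant([1.0, 2.0, 3.0], name="b")
scaled = b * lr  # constants can be combined with other tensors in ops

with tf.Session() as sess:
    # No initializer needed: constants already hold their value.
    print(sess.run(scaled))  # ~[0.01 0.02 0.03]
```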
You use this whenever you have a value that doesn't change throughout the execution of the code, for example to denote some property of the data, or to store the learning rate when using neural networks.

## tf.placeholder
Finally, we have the tf.placeholder tensor. As the name implies, this tensor type is used to define variables, or graph nodes (operations), for which you don't have an initial value. You defer setting a value until you actually run the computation using sess.run. This is useful, for example, as a proxy for your training data when defining the network.
When running the operations, you need to pass actual data for the placeholders. This is done like so:
```python
c = tf.placeholder(tf.int32, shape=[1, 2], name="myPlaceholder")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    res = sess.run(c, feed_dict={c: [[5, 6]]})
    print(res)
```

```
[[5 6]]
```
Notice that we define a placeholder by first passing the non-optional element type (here tf.int32), and then defining the shape using matrix dimension notation. The [1,2] denotes a matrix with one row and two columns. If you haven't studied linear algebra, this may seem confusing at first: why denote the height before the width? And isn't [1,2] itself a 1-by-2 matrix with the values 1 and 2?
These are valid questions, but in-depth answers are beyond the scope of this essay. To give you the gist of it, though: the apparently odd notation has some quite neat mnemonic properties for certain matrix operations, and yes, [1,2] can also be seen as a one-by-two matrix in itself. TensorFlow uses the list-like notation because it supports n-dimensional matrices, which makes it very convenient, as we will see later.
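To make the shape notation concrete, here is a small sketch (TF 1.x; the names cube and batch are illustrative) of higher-dimensional placeholders, including the common pattern of passing None to leave the batch dimension open:

```python
import tensorflow as tf

# A rank-3 tensor: 2 matrices of 3 rows and 4 columns each.
cube = tf.placeholder(tf.float32, shape=[2, 3, 4], name="cube")

# None leaves a dimension unspecified, so any number of rows
# (e.g. a variable batch size) can be fed, each with 3 columns.
batch = tf.placeholder(tf.float32, shape=[None, 3], name="batch")

with tf.Session() as sess:
    print(sess.run(batch, feed_dict={batch: [[1, 2, 3], [4, 5, 6]]}))
    # [[1. 2. 3.]
    #  [4. 5. 6.]]
```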
When we evaluate the value of c with sess.run, we pass in the actual data using a feed_dict. Notice that we use the Python variable, not the name given on the TensorFlow graph, to target the placeholder. The same approach extends to multiple placeholders: the feed_dict simply maps each placeholder tensor to its data.
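For instance, here is a minimal sketch (TF 1.x; x, y, and total are illustrative names) feeding two placeholders in a single sess.run call:

```python
import tensorflow as tf

x = tf.placeholder(tf.int32, shape=[2], name="x")
y = tf.placeholder(tf.int32, shape=[2], name="y")
total = x + y  # an op that depends on both placeholders

with tf.Session() as sess:
    # Each placeholder object (not its graph name) is a key in the feed_dict.
    res = sess.run(total, feed_dict={x: [1, 2], y: [3, 4]})
    print(res)  # [4 6]
```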