TensorFlow is an open source software library for machine learning. It was originally developed by researchers and engineers on the Google Brain team within Google’s Machine Intelligence Research organization for the purposes of conducting machine learning and deep neural network research.

TensorFlow follows a programming paradigm in which numeric computation is expressed as a **computational graph backbone**. A TensorFlow program is a graph whose nodes are **operations** (shortened to *ops* in your code). An operation takes any number of inputs and produces a single output. **The edges between our nodes are the tensors that flow between them**.

The best way of thinking about tensors **in practice is as n-dimensional arrays**. The advantage of using computational graphs as the backbone of your deep learning framework is that **it allows you to build complex models in terms of small and simple operations**. This is going to make your gradient calculation extremely simple when we get to that. You’re going to be very grateful for automatic differentiation when you’re coding large models in your projects in the future.
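To see why a graph of small operations makes gradients simple, here is a minimal sketch (an illustrative toy, not TensorFlow's implementation) of reverse-mode automatic differentiation on the two-op chain `y = ReLU(w*x + b)`: each op contributes a local gradient, and the chain rule multiplies them together.

```python
# Toy sketch of autodiff over a chain of two ops (illustrative assumption:
# scalar values; TensorFlow does this over whole tensor graphs).

def relu(z):
    return max(z, 0.0)

def forward_and_grad(w, x, b):
    """Compute y = relu(w*x + b) and dy/dw via the chain rule."""
    z = w * x + b                    # linear op
    y = relu(z)                      # nonlinear op
    dy_dz = 1.0 if z > 0 else 0.0    # local gradient of relu
    dz_dw = x                        # local gradient of the linear op
    return y, dy_dz * dz_dw          # chain rule: dy/dw = dy/dz * dz/dw

print(forward_and_grad(2.0, 3.0, -1.0))  # (5.0, 3.0)
```

Because each node only needs its own local gradient, the framework can differentiate arbitrarily deep compositions automatically.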

Another way of thinking about a TensorFlow graph is that each **operation is a function** that can be **evaluated at that point**.
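The "each operation is a function" view can be sketched in a few lines of plain Python (a simplified toy model, not TensorFlow internals): a node holds a function and its input edges, and evaluating a node recursively evaluates its inputs first.

```python
# Toy sketch (assumption: simplified model of a computational graph,
# not TensorFlow's actual data structures).
import operator

class Node:
    def __init__(self, op, inputs):
        self.op = op          # the function this node evaluates
        self.inputs = inputs  # edges in: other nodes or constant values

    def evaluate(self):
        # Evaluate input nodes recursively, then apply this node's function.
        vals = [i.evaluate() if isinstance(i, Node) else i for i in self.inputs]
        return self.op(*vals)

mul = Node(operator.mul, [2.0, 3.0])   # node computing 2 * 3
add = Node(operator.add, [mul, 1.0])   # node computing (2 * 3) + 1
print(add.evaluate())  # 7.0
```

Evaluating the final node pulls values through the graph, which is exactly how running a TensorFlow graph feels from the outside.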

Here is what the computational graph of a neural network with one hidden layer might look like in TensorFlow:

**h=ReLU(Wx+b)**

So we have some hidden layer `h` that we are trying to compute as the `ReLU` activation of some parameter matrix `W` times some input `x`, plus a bias term `b`.

`ReLU` is an activation function standing for rectified linear unit: it takes the max of its input and zero. Applying a nonlinear function over the linear input is what gives neural networks their expressive power.
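The hidden-layer math above can be checked numerically with a short NumPy sketch (the shapes and values here are made up for illustration):

```python
import numpy as np

# Sketch of h = ReLU(W x + b) with small made-up shapes (assumption:
# a 2-unit hidden layer and a 2-dimensional input).
W = np.array([[1.0, -2.0],
              [0.5,  1.0]])   # parameter matrix, shape (2, 2)
x = np.array([3.0, 1.0])      # input vector, shape (2,)
b = np.array([-2.0, 0.5])     # bias term, shape (2,)

z = W @ x + b                 # the linear part: Wx + b
h = np.maximum(z, 0.0)        # ReLU: elementwise max with zero
print(h)                      # negative entries of z are clipped to 0
```

Note how any negative component of `Wx + b` comes out of the `ReLU` as exactly zero.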

We have variables **b** and **W**, a placeholder for the input **x**, and nodes for each of the operations in our graph.

Variables are going to be stateful nodes which output their current value. In our case, the variables **b** and **W** retain their current values across multiple executions, and it’s easy to restore saved values to variables.

Variables have a number of other useful features. They can be saved to disk during and after training, which lets people from different companies and groups save, store, and send their model parameters to others. Variables also receive gradient updates by default: the optimizer will apply updates to all of the variables in your graph.

Variables are the things that you want to tune to minimize the loss. It is really important to remember that variables in the graph, like **b** and **W**, are still operations.

All of the nodes in your graph are operations. When you evaluate the operation that is one of these variables at run time, you will get the current value of that variable.

**Placeholders (x)** are nodes whose value is **fed in at execution time**. Placeholders hold the values that we’re going to feed into our computation during training, so they are going to be our inputs.

So for placeholders, we don’t give any initial values; we just assign a data type and a tensor shape, so the graph still knows what to compute even though it doesn’t hold any stored values yet.
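The placeholder idea can be sketched as a toy class (an illustrative assumption, not TensorFlow's API): the node is declared with only a dtype and a shape, and its value arrives through a feed dict when the graph is run.

```python
# Toy sketch of a placeholder (assumption: simplified model, not
# TensorFlow's tf.placeholder implementation).

class Placeholder:
    def __init__(self, dtype, shape):
        self.dtype = dtype    # no initial value: just a declared type...
        self.shape = shape    # ...and a shape, so the graph can be checked

    def evaluate(self, feed_dict):
        value = feed_dict[self]              # value fed in at execution time
        assert len(value) == self.shape[0]   # check against declared shape
        return value

x = Placeholder(float, (2,))
print(x.evaluate({x: [3.0, 1.0]}))  # [3.0, 1.0]
```

This mirrors how, at graph-construction time, TensorFlow only needs the type and shape; the actual data is supplied later, at execution time.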

The third type of node is mathematical operations: your matrix multiplication, your addition, and your **ReLU**. All of these are nodes in your TensorFlow graph. It’s very important that we actually call TensorFlow’s mathematical operations, as opposed to NumPy operations, so that the computation is recorded in the graph.

### Related Post

**Train Image classifier with TensorFlow**

**Train your Object Detection model locally with TensorFlow**

**Android TensorFlow Machine Learning**