
TensorFlow common methods

Updated: Aug 30



Notes:

In this guide, tf refers to the TensorFlow module (imported with import tensorflow as tf), and tensor refers to a tf.Tensor instance. The sections below present commonly used TensorFlow methods.


Extracting Values

  • Indexing: 

tensor[index]: Extracts elements from a tensor using indices.
  • Data type conversion: 

tensor.numpy().tolist(): Converts a tensor into a Python list.
tensor.numpy(): Converts a tensor into a NumPy array.
  • TensorFlow Operations:

tf.gather: Extracts elements from a tensor based on indices.
tf.slice: Extracts a slice of a tensor.
tf.boolean_mask: Extracts elements based on a boolean mask.
tf.reduce_max, tf.reduce_min, tf.reduce_mean: Computes aggregate values over the tensor.
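
A minimal sketch of these extraction methods, using a small example tensor whose shape and values are arbitrary:

import tensorflow as tf

t = tf.constant([[1, 2, 3],
                 [4, 5, 6]])

print(t[0, 1])                                 # indexing: element at row 0, column 1
print(t.numpy())                               # convert to a NumPy array
print(t.numpy().tolist())                      # convert to a nested Python list
print(tf.gather(t, [1], axis=0))               # gather row 1
print(tf.slice(t, begin=[0, 1], size=[2, 2]))  # 2x2 slice starting at column 1
print(tf.boolean_mask(t, t > 3))               # elements greater than 3
print(tf.reduce_max(t), tf.reduce_min(t), tf.reduce_mean(t))  # aggregate values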

Arithmetic Operations

  • Addition: 

tf.add(x, y): Adds two tensors element-wise.
  • Subtraction: 

tf.subtract(x, y): Subtracts one tensor from another element-wise.
  • Multiplication: 

tf.multiply(x, y): Multiplies two tensors element-wise.
  • Division: 

tf.divide(x, y): Divides one tensor by another element-wise.
  • Power: 

tf.pow(x, y): Raises a tensor to a power element-wise.
  • Square Root: 

tf.sqrt(x): Calculates the square root of each element in a tensor.
  • Absolute Value: 

tf.abs(x): Calculates the absolute value of each element in a tensor.
  • Negation: 

tf.negative(x): Negates each element in a tensor.
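
A minimal sketch of these element-wise arithmetic operations, using two small example tensors with arbitrary values:

import tensorflow as tf

x = tf.constant([1.0, 4.0, 9.0])
y = tf.constant([2.0, 2.0, 3.0])

print(tf.add(x, y))        # [3. 6. 12.]
print(tf.subtract(x, y))   # [-1. 2. 6.]
print(tf.multiply(x, y))   # [2. 8. 27.]
print(tf.divide(x, y))     # [0.5 2. 3.]
print(tf.pow(x, y))        # [1. 16. 729.]
print(tf.sqrt(x))          # [1. 2. 3.]
print(tf.abs(tf.constant([-1.0, -2.0])))  # [1. 2.]
print(tf.negative(x))      # [-1. -4. -9.]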


Matrix Operations

  • Matrix Multiplication: 

tf.matmul(x, y): Multiplies two matrices.
  • Transpose: 

tf.transpose(x): Transposes a tensor.
  • Determinant: 

tf.linalg.det(x): Calculates the determinant of a square matrix.
  • Inverse: 

tf.linalg.inv(x): Calculates the inverse of a square matrix.
  • Trace:

tf.linalg.trace(x): Calculates the trace of a matrix.
  • Eigenvalues:

tf.linalg.eigvals(x): Calculates the eigenvalues of a matrix.
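
A minimal sketch of these matrix operations, using a small 2x2 matrix with arbitrary values:

import tensorflow as tf

m = tf.constant([[2.0, 1.0],
                 [0.0, 3.0]])

print(tf.matmul(m, m))       # matrix product
print(tf.transpose(m))       # transpose
print(tf.linalg.det(m))      # determinant: 6.0
print(tf.linalg.inv(m))      # inverse
print(tf.linalg.trace(m))    # trace: 5.0
print(tf.linalg.eigvals(m))  # eigenvalues (returned as complex numbers): 2 and 3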


Activation Functions

  • Sigmoid: 

tf.sigmoid(x): Applies the sigmoid activation function.
  • ReLU: 

tf.nn.relu(x): Applies the rectified linear unit (ReLU) activation function.
  • Tanh: 

tf.tanh(x): Applies the hyperbolic tangent activation function.
  • Softmax: 

tf.nn.softmax(x): Applies the softmax activation function, often used for classification.
  • Leaky ReLU: 

tf.nn.leaky_relu(x): Applies the leaky ReLU activation function.
  • ELU: 

tf.nn.elu(x): Applies the exponential linear unit (ELU) activation function.
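
A minimal sketch applying each activation to the same small input tensor (the values are arbitrary):

import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

print(tf.sigmoid(x))                   # values squashed into (0, 1)
print(tf.nn.relu(x))                   # negatives clipped to 0
print(tf.tanh(x))                      # values squashed into (-1, 1)
print(tf.nn.softmax(x))                # outputs sum to 1
print(tf.nn.leaky_relu(x, alpha=0.2))  # small slope (0.2 here) for negative inputs
print(tf.nn.elu(x))                    # exponential curve for negative inputs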


Loss Functions

  • Mean Squared Error: 

tf.keras.losses.mean_squared_error(y_true, y_pred): Calculates the mean squared error between predicted and true values.
  • Cross-Entropy: 

tf.keras.losses.categorical_crossentropy(y_true, y_pred): Calculates the categorical cross-entropy loss.
  • Binary Cross-Entropy: 

tf.keras.losses.binary_crossentropy(y_true, y_pred): Calculates the binary cross-entropy loss.
  • Hinge Loss: 

tf.keras.losses.hinge(y_true, y_pred): Calculates the hinge loss, often used in support vector machines.
  • Cosine Similarity: 

tf.keras.losses.cosine_similarity(y_true, y_pred): Calculates the cosine similarity loss between two vectors; the result is negated so that minimizing the loss maximizes similarity.
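
A minimal sketch of these loss functions, using made-up targets and predictions for a 3-class problem:

import tensorflow as tf

y_true = tf.constant([[0.0, 1.0, 0.0]])  # one-hot target
y_pred = tf.constant([[0.1, 0.8, 0.1]])  # predicted probabilities

print(tf.keras.losses.mean_squared_error(y_true, y_pred))
print(tf.keras.losses.categorical_crossentropy(y_true, y_pred))
print(tf.keras.losses.cosine_similarity(y_true, y_pred))  # negative of the similarity

# Binary targets and predictions.
print(tf.keras.losses.binary_crossentropy(tf.constant([[1.0, 0.0, 1.0]]),
                                          tf.constant([[0.9, 0.2, 0.7]])))

# Hinge loss expects labels in {-1, 1} (0/1 labels are converted internally).
print(tf.keras.losses.hinge(tf.constant([[-1.0, 1.0, 1.0]]),
                            tf.constant([[-0.8, 0.6, 0.4]])))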


Optimization Algorithms

  • Gradient Descent: 

tf.keras.optimizers.SGD(): Implements the stochastic gradient descent optimization algorithm.
  • Adam: 

tf.keras.optimizers.Adam(): Implements the Adam optimization algorithm.
  • RMSprop: 

tf.keras.optimizers.RMSprop(): Implements the RMSprop optimization algorithm.
  • Adagrad: 

tf.keras.optimizers.Adagrad(): Implements the Adagrad optimization algorithm.
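
A minimal sketch of an optimizer in use, minimizing the toy loss (w - 3)^2 with Adam; the learning rate of 0.1 and the 200 steps are illustrative choices, and SGD, RMSprop, or Adagrad can be swapped in the same way:

import tensorflow as tf

w = tf.Variable(0.0)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

for _ in range(200):
    with tf.GradientTape() as tape:
        loss = tf.square(w - 3.0)           # toy quadratic loss
    grads = tape.gradient(loss, [w])        # compute d(loss)/dw
    optimizer.apply_gradients(zip(grads, [w]))

print(w.numpy())  # approaches 3.0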


Neural Network Layers

  • Dense: 

tf.keras.layers.Dense(): Creates a fully connected layer.
  • Convolutional: 

tf.keras.layers.Conv2D(): Creates a 2D convolutional layer.
  • Recurrent: 

tf.keras.layers.LSTM(): Creates a long short-term memory (LSTM) layer.
  • Pooling: 

tf.keras.layers.MaxPooling2D(): Creates a 2D max pooling layer.
  • Dropout: 

tf.keras.layers.Dropout(): Applies dropout regularization.
  • Batch Normalization: 

tf.keras.layers.BatchNormalization(): Applies batch normalization.
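
A minimal sketch that strings several of these layers into one model; the layer sizes and the 28x28x1 input shape are arbitrary choices, not requirements:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),    # 2D convolution
    tf.keras.layers.MaxPooling2D(),                       # 2D max pooling
    tf.keras.layers.BatchNormalization(),                 # batch normalization
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),         # fully connected layer
    tf.keras.layers.Dropout(0.5),                         # dropout regularization
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()

# An LSTM layer applied to a batch of sequences (batch, timesteps, features).
lstm = tf.keras.layers.LSTM(32)
print(lstm(tf.random.normal([4, 10, 8])).shape)  # (4, 32)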


Data Manipulation

  • Reshaping: 

tf.reshape(x, shape): Reshapes a tensor.
  • Slicing: 

tf.slice(x, begin, size): Extracts a slice from a tensor.
  • Concatenation: 

tf.concat(values, axis): Concatenates tensors along a specified axis.
  • Stacking: 

tf.stack(values, axis): Stacks tensors along a new axis.
  • Tile: 

tf.tile(x, multiples): Tiles a tensor by repeating it along each axis the number of times given by multiples.
  • Padding: 

tf.pad(x, paddings): Pads a tensor along each axis (with zeros by default).
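
A minimal sketch of these data manipulation methods, using a small example tensor with arbitrary values:

import tensorflow as tf

x = tf.constant([[1, 2, 3],
                 [4, 5, 6]])

print(tf.reshape(x, [3, 2]))                   # 2x3 -> 3x2
print(tf.slice(x, begin=[0, 1], size=[2, 2]))  # 2x2 slice starting at column 1
print(tf.concat([x, x], axis=0))               # concatenate along rows -> 4x3
print(tf.stack([x, x], axis=0))                # stack along a new axis -> 2x2x3
print(tf.tile(x, multiples=[1, 2]))            # repeat columns twice -> 2x6
print(tf.pad(x, paddings=[[1, 1], [0, 0]]))    # one row of zeros above and below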


Miscellaneous

  • Reduce Mean: 

tf.reduce_mean(x): Calculates the mean of a tensor.
  • Reduce Sum: 

tf.reduce_sum(x): Calculates the sum of a tensor.
  • Reduce Max: 

tf.reduce_max(x): Calculates the maximum value in a tensor.
  • Reduce Min: 

tf.reduce_min(x): Calculates the minimum value in a tensor.
  • Cast:

tf.cast(x, dtype): Casts a tensor to a different data type.
  • Shape: 

tf.shape(x): Returns the shape of a tensor.
  • Size: 

tf.size(x): Returns the number of elements in a tensor.
  • Random: 

tf.random.uniform(shape): Generates random numbers from a uniform distribution.
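
A minimal sketch of these miscellaneous methods, using a small example tensor with arbitrary values:

import tensorflow as tf

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])

print(tf.reduce_mean(x))          # 2.5
print(tf.reduce_sum(x))           # 10.0
print(tf.reduce_max(x))           # 4.0
print(tf.reduce_min(x))           # 1.0
print(tf.cast(x, tf.int32))       # float32 -> int32
print(tf.shape(x))                # [2 2]
print(tf.size(x))                 # 4
print(tf.random.uniform([2, 2]))  # random values drawn from [0, 1)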
