## Introduction

Interest in Deep Learning has surged in recent years, bringing an explosion of Machine Learning tools such as TensorFlow and PyTorch that aim to provide ease of use and straightforward debugging of code.

Many popular frameworks, such as MXNet, TensorFlow, JAX, PaddlePaddle, Caffe2, MindSpore, and Theano, gained popularity by constructing a static dataflow graph that represents the computation and can be applied to batches of data. This approach provides visibility into the whole computation and, in theory, improved performance and scalability, but it comes at the cost of flexibility, ease of debugging, and ease of use.

This article provides insights into PyTorch, a Machine Learning framework written in Python. Most Deep Learning frameworks focus either on usability or on speed, but PyTorch shows these two goals are compatible: it supports an imperative, Pythonic programming style in which models are ordinary code, which makes debugging easy, while remaining efficient and supporting hardware accelerators such as GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units).

Several Python libraries are built on top of PyTorch, such as torchvision and timm for computer vision, torchtext and Hugging Face Transformers for text, and torchaudio for speech; this ecosystem is a large part of what gives PyTorch its power.

This article was published as a part of the Data Science Blogathon.


## Why Pytorch?

- It is the most popular deep learning framework for research.
- It provides access to many pre-built deep learning models (Torch Hub / torchvision.models).
- It covers the whole stack: preprocess data, build models, and deploy them in your application or cloud.
- It was originally designed and used in-house by Facebook/Meta (it is now open source and used by companies such as Tesla, Microsoft, and OpenAI).
- PyTorch minimizes cognitive overhead while focusing on flexibility and speed.
- Since its release in early 2017, PyTorch has steadily gained popularity.

PyTorch's popularity has trended steadily upward since its launch.

#### Tensor

The tensor is the fundamental building block of PyTorch, and it is essentially the same as a NumPy array. It is mostly used to convert images, audio, and other data into a mathematical form for processing: computers don't understand images directly, only numbers, so it is important to convert images into numerical form.

One of the important features offered by tensors is that they can keep track of all the operations performed on them, which helps compute gradients for optimizing the output; this is done through PyTorch's autograd functionality.
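As a minimal sketch of this tracking, a tensor created with `requires_grad=True` records the operations applied to it, and `backward()` uses that record to compute gradients:

```
import torch

# A tensor with requires_grad=True records the operations applied to it
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x   # y = x^2 + 2x

y.backward()         # autograd walks the recorded operations to compute dy/dx

print(x.grad)        # dy/dx = 2x + 2 = 8 at x = 3 -> tensor(8.)
```

This recorded history is what optimizers rely on during training to update model parameters.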

In simple terms, the scalar – vector – matrix – tensor hierarchy is as follows:

1. A scalar is a 0-dimensional tensor.

2. A vector is a 1-dimensional tensor.

3. A matrix is a 2-dimensional tensor.

4. A tensor generalizes these to N dimensions.
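This hierarchy can be checked directly with a tensor's `ndim` attribute (the shapes below are illustrative):

```
import torch

scalar = torch.tensor(7)                 # 0-D
vector = torch.tensor([7, 7])            # 1-D
matrix = torch.tensor([[1, 7],
                       [2, 7]])          # 2-D
tensor = torch.rand(3, 224, 224)         # 3-D (e.g. an RGB image)

print(scalar.ndim, vector.ndim, matrix.ndim, tensor.ndim)  # 0 1 2 3
```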

#### Tensor Use Cases

PyTorch comes pre-installed in Google Colab. Let's now look at some basic code for working with PyTorch tensors.

1. Importing Pytorch and getting its version.

```
import torch
print(torch.__version__)
# Output
1.13.1+cu116
```

2. Creating a scalar in PyTorch.

```
scalar = torch.tensor(7)
scalar
# Output
tensor(7)
```

3. Creating a vector in PyTorch.

```
vector = torch.tensor([7,7])
vector
# output
tensor([7, 7])
```

4. To get the number of dimensions of a tensor in PyTorch, we can use:

```
vector.ndim
# output
1
```

5. To get the shape of a vector in PyTorch.

```
vector.shape
# Output
torch.Size([2])
```

6. Creating a matrix in PyTorch.

```
Matrix = torch.tensor([[1, 7],
                       [2, 7]])
Matrix
# Output
tensor([[1, 7],
        [2, 7]])
```

7. Creating random numbers in PyTorch.

```
random = torch.rand(7)
random
# output
tensor([0.0324, 0.9962, 0.0709, 0.7007, 0.6523, 0.0256, 0.4912])
```

8. Tensors support built-in operations such as multiplication, addition, and subtraction.
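For example, arithmetic on tensors is element-wise, and `torch.matmul` performs matrix and vector products:

```
import torch

t = torch.tensor([1, 2, 3])

print(t + 10)              # tensor([11, 12, 13])  addition
print(t - 10)              # tensor([-9, -8, -7])  subtraction
print(t * 10)              # tensor([10, 20, 30])  element-wise multiplication
print(torch.matmul(t, t))  # tensor(14)  dot product: 1*1 + 2*2 + 3*3
```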

9. Creating a float tensor.

```
torch.FloatTensor([1.1, 1.2, 1.3])
# Output
tensor([1.1000, 1.2000, 1.3000])
```

10. Creating a range of numbers using Pytorch.

```
torch.arange(0,10)
# Output
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
```

## Why are PyTorch Tensors Important for ML and DL?

In a supervised machine learning problem, we have data arranged in rows and columns along with some target values (a binary class such as True/False or Yes/No, or a numerical quantity). Since machine learning algorithms understand only numbers, the data must be fed to them in mathematical form. A table is naturally similar to a 2-D matrix, in which each row (instance) or column (feature) can be thought of as a 1-D vector. Similarly, a black-and-white image can be treated as a 2-D matrix of 0s and 1s and fed into a neural network for image classification or segmentation.

Sequence data or time-series data is another example of 2-D data, where one dimension (time) is fixed. For example:

1. ECG data in monitoring machines.

2. A stock market price tracking data stream.

These are examples of 2-D tensors used in classical machine learning (Linear Regression, Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, etc.) and deep learning algorithms.
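As a sketch, a small table of hypothetical feature values maps directly onto a 2-D tensor, with each row an instance vector:

```
import torch

# Hypothetical tabular data: 3 instances (rows) x 2 features (columns)
table = torch.tensor([[25.0, 5.0],
                      [32.0, 6.4],
                      [41.0, 5.8]])

print(table.shape)    # torch.Size([3, 2])
print(table[0].ndim)  # 1 -> each row is a 1-D feature vector
```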

A color or grayscale image can be considered a 3-D tensor. A 3-D tensor (a rank-3 tensor) is a cube, or an array of arrays.

In a 3-D image tensor, each pixel is associated with "color channels": a vector of three numbers representing the intensities of red, green, and blue (RGB). Each channel value is commonly stored in a single byte, giving values from 0 to 255, so pure white is [255, 255, 255]. This interpretation is used when the tensor has an integer data type; when the tensor is float32, the values are assumed to lie in the range 0 to 1, so pure white is instead represented as [1, 1, 1].

This implies that a 3-D tensor is required to store an image: each three-value pixel must be stored along with the image's width and height, and you need to decide which layout to use. It is standard procedure in TensorFlow and TensorFlow.js to store the RGB values in the tensor's final dimension, with height, width, and color in that order; PyTorch, by contrast, conventionally stores the channels first (channels, height, width). Addressing rows and then columns is the traditional organizational order for matrices, though it may seem strange for images.
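A minimal sketch of both value conventions, using PyTorch's customary channels-first (channels, height, width) layout and a tiny illustrative 2x2 image:

```
import torch

# A 2x2 pure-white image, integer convention: values 0-255
white_int = torch.full((3, 2, 2), 255, dtype=torch.uint8)  # (channels, height, width)

# The same image in float convention: values 0.0-1.0
white_float = white_int.float() / 255.0

print(white_int.shape)    # torch.Size([3, 2, 2])
print(white_float.max())  # tensor(1.)
```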

Similarly, a video can be thought of as a sequence of color images (frames in time), so a video can be treated as a 4-D tensor.
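For instance, a hypothetical 10-frame RGB clip in a (frames, channels, height, width) layout:

```
import torch

# A hypothetical 10-frame RGB video clip: (frames, channels, height, width)
video = torch.rand(10, 3, 64, 64)

print(video.ndim)   # 4
print(video.shape)  # torch.Size([10, 3, 64, 64])
```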

In other words, multi-dimensional tensors may easily represent various types of data from the physical world, including sensor and instrument data, commercial and financial data, and data from scientific or social experiments, making them suitable for processing by ML/DL algorithms inside a computer.

## Conclusion

PyTorch is a Machine Learning framework written in Python. Several Python libraries, such as torchvision and timm for computer vision, are built on top of PyTorch. It provides access to many pre-built deep learning models, and it can preprocess data, build models, and deploy them in your application or cloud. The tensor is the fundamental building block of PyTorch and is essentially the same as a NumPy array; it is mostly used to convert images and audio into a mathematical form for computer processing. A color or grayscale image can be considered a 3-D tensor, and a video a 4-D tensor.

**The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.**
