Getting Started with PyTorch¶
URL to this notebook: https://cainvas.ai-tech.systems/notebooks/details/?path=rohit/Scholar%20Modules/M4%20Frameworks/PyTorch/PyTorch_Overview.ipynb
Content¶
- PyTorch Basics
- PyTorch Example Problem
import torch
PyTorch Tensors¶
A tensor is a multi-dimensional matrix containing elements of a single data type, such as float or int. Torch defines 10 tensor types with CPU and GPU variants.
The most common type of PyTorch tensor is torch.FloatTensor, declared as:
# Tensors
x = torch.FloatTensor([[1,2,3], [4,5,6]])
print(x.size(), "\n", x)
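Beyond FloatTensor, each of the other numeric types has its own tensor class. A minimal sketch (the values here are arbitrary):
# A few other tensor types (each also has a CUDA variant)
a = torch.tensor([1, 2, 3], dtype=torch.int64)        # torch.LongTensor
b = torch.tensor([1., 2., 3.], dtype=torch.float64)   # torch.DoubleTensor
c = torch.tensor([True, False])                       # torch.BoolTensor
print(a.type(), b.type(), c.type())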
PyTorch Operations¶
PyTorch provides many mathematical operations over tensors, along with utilities for efficiently serializing tensors and other arbitrary types. Here is an example of addition and subtraction of tensors.
# Add tensors
x.add_(torch.ones([2,3])+torch.ones([2,3]))
# Subtract Tensor
x.sub_(torch.ones([2,3])*2)
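Since the prose above mentions serialization, here is a minimal sketch of saving a tensor to disk and loading it back (the file name is arbitrary):
# Serialize a tensor to disk and load it back
torch.save(x, 'tensor_x.pt')
x_loaded = torch.load('tensor_x.pt')
print(torch.equal(x, x_loaded))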
PyTorch and NumPy¶
One can easily go back and forth between PyTorch and NumPy. Here is a simple example of converting an np.matrix to a PyTorch tensor and reshaping the result to a single row:
# Numpy to torch tensors
import numpy as np
y = np.matrix([[2,2],[2,2],[2,2]])
z = np.matrix([[2,2],[2,2],[2,2]], dtype='int16')
# syntactic sugar for matrix multiplication
# same as torch.matmul( x.short(), torch.from_numpy(z) )
x.short() @ torch.from_numpy(z)
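The conversion works in the other direction too; note that on the CPU a tensor and its .numpy() view share memory, as this small sketch shows:
# Torch tensor to NumPy array (shares memory on CPU)
t = torch.ones(2, 3)
n = t.numpy()
t.add_(1)   # in-place change on the tensor...
print(n)    # ...is reflected in the NumPy array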
# PyTorch allows a tensor to be a View of an existing tensor.
# A view tensor shares the same underlying data with its base tensor.
# Supporting views avoids explicit data copies, enabling fast, memory-efficient reshaping, slicing and element-wise operations.
# Modifying the view changes the base tensor as well.
x.view(1,6)
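To make the aliasing concrete, a short sketch showing that a write through the view mutates the base tensor:
# Writing through the view changes the base tensor
v = x.view(1, 6)
v[0, 0] = 100.
print(x[0, 0])  # reflects the write made through the view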
Further Reading:
- Here is a useful GitHub repo outlining PyTorch<->numpy conversions - https://github.com/wkentaro/pytorch-for-numpy-users
# Move tensors (and copies) across compute devices.
x = torch.FloatTensor([[1,2,3], [4,5,6]])
y = torch.from_numpy(np.matrix([[2,2,2],[2,2,2]], dtype='float32'))
if torch.cuda.is_available():
    x = x.cuda()
    y = y.cuda()
z = x + y
print(z)
print(x.cpu())
PyTorch Variables¶
A Variable is a thin wrapper around a Tensor that supports almost all of the APIs a Tensor defines. Variable is part of the autograd package, which provides classes and functions implementing automatic differentiation of arbitrary scalar-valued functions. (Since PyTorch 0.4, Variable has been merged into Tensor, but the old API still works.) Here is a simple example of PyTorch Variable usage:
# Variable (part of the autograd package)
# Variables (graph nodes) are thin wrappers around tensors and carry dependency knowledge
# Variables enable backpropagation of gradients and automatic differentiation
# The 'volatile' flag once marked Variables for inference; it is deprecated in favor of torch.no_grad()
from torch.autograd import Variable
v1 = Variable(torch.tensor([1.,2.,3.]), requires_grad=False)
v2 = Variable(torch.tensor([4.,5.,6.]), requires_grad=True)
v3 = v1 * v2
v3.data.numpy()
# Variables remember what created them.
v3.grad_fn
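To see the dependency knowledge in action, a minimal sketch backpropagating through the little graph built above:
# Backpropagate through v3 = v1 * v2
v3.sum().backward()   # reduce to a scalar, then backprop
print(v2.grad)        # d(sum(v1*v2))/dv2 = v1 -> tensor([1., 2., 3.])
print(v1.grad)        # None, since v1 was created with requires_grad=False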
Back Propagation¶
The backpropagation algorithm computes the gradients of the loss with respect to the input weights and biases; these gradients are used to update the weights in the next optimization iteration, eventually reducing the loss. PyTorch defines a backward method on Variables to perform backpropagation hierarchically through the graph.
Here is a simple example of using backpropagation to compute the derivative of sin(x):
# Backpropagation with example of sin(x)
x = Variable(torch.Tensor(np.array([0., 1., 1.5, 2.])*np.pi), requires_grad=True)
y = torch.sin(x)
x.grad  # None: backward() has not been called yet
y.backward(torch.Tensor([1.,1.,1.,1]))
# Check that the gradient is indeed cos(x)
if (x.grad.data.int().numpy() == torch.cos(x).data.int().numpy()).all():
    print("d(sin(x))/dx = cos(x)")
PyTorch Example Problem¶
SLR: Simple Linear Regression¶
Now we’ve collected all the ammunition we need to get started on a machine learning example: a simple linear regression problem. We’ll do that in five easy steps:
SLR: Step 1¶
In step 1, we synthesize a dataset from the equation y = w·x + b with random noise injected. See the example below:
# Simple Linear Regression
# Fit a line to the data: y = w.x + b
# Deterministic behavior
np.random.seed(0)
torch.manual_seed(0)
# Step 1: Dataset
w = 2; b = 3
x = np.linspace(0, 10, 100)
y = w*x + b + np.random.randn(100)*2
x = x.reshape(-1, 1)
y = y.reshape(-1, 1)
import matplotlib.pyplot as plt
plt.figure(figsize=(4,4))
plt.title('Model and Dataset')
plt.xlabel('X'); plt.ylabel('Y')
plt.grid()
plt.plot(x, y, 'rx', markersize=4, label='Dataset')
plt.legend(); plt.show()
SLR: Step 2¶
In step 2, we define a simple class, LinearRegressionModel, with a forward method and a constructor that uses torch.nn.Linear to apply a linear transformation to the incoming data.
# Step 2: Model
class LinearRegressionModel(torch.nn.Module):
    def __init__(self, in_dimn, out_dimn):
        super(LinearRegressionModel, self).__init__()
        self.model = torch.nn.Linear(in_dimn, out_dimn)

    def forward(self, x):
        y_pred = self.model(x)
        return y_pred

model = LinearRegressionModel(in_dimn=1, out_dimn=1)
SLR: Step 3¶
The next step is training the model, with MSELoss as the cost function and SGD as the optimizer.
# Step 3: Training
cost = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
inputs = Variable(torch.from_numpy(x.astype('float32')))
outputs = Variable(torch.from_numpy(y.astype('float32')))
for epoch in range(100):
    # 3.1 forward pass
    y_pred = model(inputs)
    # 3.2 compute loss
    loss = cost(y_pred, outputs)
    # 3.3 backward pass
    optimizer.zero_grad()  # gradients accumulate by default, so reset them
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 10 == 0:
        print('epoch {}, loss {}'.format(epoch + 1, loss.item()))
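After training, the learned parameters should land close to the true values w=2 and b=3. A quick check, reading them off the nn.Linear layer stored in self.model:
# Inspect the learned weight and bias
print('w = %.2f, b = %.2f' % (model.model.weight.item(), model.model.bias.item()))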
SLR: Step 4¶
Evaluate the trained model on a test set:
model.eval()
test_x = np.linspace(0, 10, 5).reshape(-1, 1)
test_x = Variable(torch.from_numpy(test_x.astype('float32')))
pred_y = model(test_x)
for tx, ty in zip(test_x, pred_y):
    print("ground truth = ", (w*tx + b).item(), "vs model prediction = ", '%.2f' % ty.item())
SLR: Step 5¶
Now that training is done, let’s inspect our model visually and save it.
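For the visual check, a minimal sketch that overlays the fitted line on the dataset (it reuses x, y, and model from the steps above):
# Plot the fitted line over the noisy dataset
with torch.no_grad():
    fitted = model(torch.from_numpy(x.astype('float32'))).numpy()
plt.figure(figsize=(4,4))
plt.title('Model vs Dataset')
plt.xlabel('X'); plt.ylabel('Y')
plt.grid()
plt.plot(x, y, 'rx', markersize=4, label='Dataset')
plt.plot(x, fitted, 'b-', label='Fitted line')
plt.legend(); plt.show()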
Save the Model¶
torch.save(model.state_dict(), "PyTorchModel.pt")
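To restore the model later, a short sketch (the file name matches the save call above):
# Reload the trained weights into a fresh model instance
model2 = LinearRegressionModel(in_dimn=1, out_dimn=1)
model2.load_state_dict(torch.load("PyTorchModel.pt"))
model2.eval()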
Further Reading:¶
- Getting Started with PyTorch https://medium.com/towards-artificial-intelligence/pytorch-in-2-minutes-9e18875990fd
- PyTorch Tutorials https://pytorch.org/tutorials/