BACKPROPAGATION IN NEURAL NETWORKS

Code Sample and Text

Backpropagation In Python

For a simple feedforward neural network, backpropagation can be implemented concisely in Python with NumPy.


First, we will import NumPy, the only library we need.


import numpy as np

Next, we will define a simple neural network with one hidden layer.


def sigmoid(x):
    return 1.0 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # x is assumed to already be a sigmoid activation, so the derivative is x * (1 - x)
    return x * (1.0 - x)

class NeuralNetwork:
    def __init__(self, x, y):
        self.input      = x
        self.weights1   = np.random.rand(self.input.shape[1], 4)  # input -> hidden layer (4 units)
        self.weights2   = np.random.rand(4, 1)                    # hidden -> output layer
        self.y          = y
        self.output     = np.zeros(self.y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

    def backprop(self):
        # application of the chain rule to find the derivative of the
        # loss function with respect to weights2 and weights1
        d_weights2 = np.dot(self.layer1.T,
                            2 * (self.y - self.output) * sigmoid_derivative(self.output))
        d_weights1 = np.dot(self.input.T,
                            np.dot(2 * (self.y - self.output) * sigmoid_derivative(self.output),
                                   self.weights2.T) * sigmoid_derivative(self.layer1))

        # update the weights with the negative gradient of the loss function
        self.weights1 += d_weights1
        self.weights2 += d_weights2
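To make the chain-rule step in backprop explicit, here is a sketch of the gradients the code computes, written in LaTeX notation. Let z_1 = x W_1, a_1 = \sigma(z_1), z_2 = a_1 W_2, \hat{y} = \sigma(z_2), and take the squared-error loss L = \sum (y - \hat{y})^2. Then

\frac{\partial L}{\partial W_2} = a_1^{\top} \left[ -2\,(y - \hat{y}) \odot \sigma'(z_2) \right]

\frac{\partial L}{\partial W_1} = x^{\top} \left[ \left( -2\,(y - \hat{y}) \odot \sigma'(z_2) \right) W_2^{\top} \odot \sigma'(z_1) \right]

where \odot is element-wise multiplication and \sigma'(z) = \sigma(z)\,(1 - \sigma(z)), which is why sigmoid_derivative is applied directly to the stored activations. The quantities d_weights1 and d_weights2 are the negatives of these gradients, so adding them to the weights performs plain gradient descent with an implicit learning rate of 1.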

Finally, we can use the NeuralNetwork class to train our neural network.


# Training set: three binary inputs per example; the target is the XOR of the first two inputs
X = np.array([[0,0,1],
              [0,1,1],
              [1,0,1],
              [1,1,1]])
y = np.array([[0],[1],[1],[0]])

# Create neural network object
nn = NeuralNetwork(X, y)

# Train the network: 1500 iterations of feedforward followed by backpropagation
for i in range(1500):
    nn.feedforward()
    nn.backprop()

# Predictions after training; these should be close to the targets in y
print(nn.output)
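As a usage example, the trained weights can be applied to a new input by repeating the feedforward computation. The predict helper below is not part of the original class; it is a minimal sketch assuming the sigmoid function and the trained nn object defined above.


def predict(nn, x_new):
    # repeat the feedforward pass with the learned weights
    layer1 = sigmoid(np.dot(x_new, nn.weights1))
    return sigmoid(np.dot(layer1, nn.weights2))

# [1, 0, 1] appears in the training set with target 1, so the output should be close to 1
print(predict(nn, np.array([[1, 0, 1]])))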

In the code above, we initialize the neural network with random weights, then train it by feeding the training inputs forward, comparing the network's output with the target output, and backpropagating the error to adjust the weights. Finally, we print the output of the trained network, which should be close to the target values.

Note: this code is for educational purposes and does not implement optimizations such as stochastic gradient descent, mini-batch gradient descent, or the Adam optimizer. The error is measured with the squared error (mean squared error), but other loss functions, such as cross-entropy, can be used depending on the problem.
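As a sketch of one such refinement, the raw gradient step can be scaled by an explicit learning rate, and the mean squared error can be printed during training to watch convergence. The NeuralNetworkLR subclass and the learning-rate value below are illustrative additions, not part of the original code.


class NeuralNetworkLR(NeuralNetwork):
    # variant of the network above with an explicit learning rate (illustrative)
    def __init__(self, x, y, lr=0.5):
        super().__init__(x, y)
        self.lr = lr

    def backprop(self):
        d_weights2 = np.dot(self.layer1.T,
                            2 * (self.y - self.output) * sigmoid_derivative(self.output))
        d_weights1 = np.dot(self.input.T,
                            np.dot(2 * (self.y - self.output) * sigmoid_derivative(self.output),
                                   self.weights2.T) * sigmoid_derivative(self.layer1))
        # scale the update by the learning rate instead of applying the raw gradient
        self.weights1 += self.lr * d_weights1
        self.weights2 += self.lr * d_weights2

nn_lr = NeuralNetworkLR(X, y, lr=0.5)
for i in range(1500):
    nn_lr.feedforward()
    nn_lr.backprop()
    if i % 300 == 0:
        # mean squared error between the targets and the current predictions
        print(f"iteration {i}: MSE = {np.mean((y - nn_lr.output) ** 2):.4f}")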
