An implementation of a neural network from scratch, using only NumPy.

Each layer in the network is a Fully Connected Layer. The Fully Connected layer, the optimizer, the loss, and the activation functions ReLU and Softmax will each be implemented separately, then combined when the network structure is finally assembled; a rough sketch of how the pieces fit together is shown below.
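
As a minimal sketch of that composition (the class name NeuralNetwork, the layers list, and the forward/backward methods here are illustrative assumptions, not the final implementation):

class NeuralNetwork:
    """Illustrative skeleton only; names and structure are assumptions."""
    def __init__(self, optimizer):
        self.optimizer = optimizer   # e.g. an Sgd instance (defined below)
        self.layers = []             # Fully Connected and activation layers
        self.loss_layer = None       # e.g. a cross-entropy loss

    def forward(self, input_tensor):
        # Pass the input through every layer in order.
        for layer in self.layers:
            input_tensor = layer.forward(input_tensor)
        return input_tensor

    def backward(self, error_tensor):
        # Propagate the error back through the layers in reverse order;
        # each trainable layer asks the optimizer for its weight update.
        for layer in reversed(self.layers):
            error_tensor = layer.backward(error_tensor)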

Stochastic gradient descent (SGD) is a simple optimizer that will be used to update the weights of the Fully Connected Layers at every iteration:
w^(k+1) = w^(k) − η ∇L(w^(k))
where ∇L(w^(k)) is the gradient of the loss with respect to the weights, η is the learning rate of the network, and k is the iteration index.

import numpy as np

class Sgd:
    """Basic stochastic gradient descent optimizer."""

    def __init__(self, learning_rate):
        self.learning_rate = learning_rate

    def calculate_update(self, weight_tensor, gradient_tensor):
        # w^(k+1) = w^(k) - eta * gradient: step against the gradient,
        # scaled by the learning rate.
        return weight_tensor - self.learning_rate * gradient_tensor
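
As a quick sanity check (the weight and gradient values below are made-up examples), the optimizer can be exercised on small NumPy arrays:

weights = np.array([1.0, 2.0, 3.0])
gradient = np.array([0.5, -0.5, 1.0])

sgd = Sgd(learning_rate=0.1)
weights = sgd.calculate_update(weights, gradient)
print(weights)  # each weight takes a small step against its gradient: [0.95 2.05 2.9 ]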