## Creating a Basic Neural Network Using NumPy in Python

Introduction: Neural networks are powerful models that can learn and make predictions from data. Implementing a basic neural network from scratch helps us understand the inner workings of this technology. In this blog, we will walk through the process of creating a simple neural network using the NumPy library in Python. NumPy provides efficient numerical operations, making it an ideal choice for implementing neural networks. We will explain each step of the code to provide a comprehensive understanding of the neural network’s structure and functionality.

### Step 1: Import the Required Libraries

First, import the necessary libraries, including NumPy, which is used for numerical computations.

```python
import numpy as np
```

### Step 2: Define the Neural Network Architecture

Next, define the architecture of the neural network by specifying the number of input, hidden, and output nodes. In this example, we will create a neural network with one hidden layer.

```python
class NeuralNetwork:
    def __init__(self, input_nodes, hidden_nodes, output_nodes):
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes
```

### Step 3: Initialize Weights and Biases

Initialize the weights and biases of the network randomly. Weights determine the strength of connections between neurons, while biases introduce flexibility into the network.

```python
        # (continuing __init__ from Step 2)
        self.weights_input_hidden = np.random.rand(self.hidden_nodes, self.input_nodes)
        self.weights_hidden_output = np.random.rand(self.output_nodes, self.hidden_nodes)
        self.biases_hidden = np.random.rand(self.hidden_nodes, 1)
        self.biases_output = np.random.rand(self.output_nodes, 1)
```
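As a quick sanity check on these shapes, the hidden layer's pre-activation for a single sample works out as follows. The 3-4 sizes here are purely illustrative, not from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)
input_nodes, hidden_nodes = 3, 4                # illustrative sizes
w_ih = rng.random((hidden_nodes, input_nodes))  # shape: hidden x input
b_h = rng.random((hidden_nodes, 1))
x = rng.random((input_nodes, 1))                # one sample as a column vector

# (4, 3) @ (3, 1) + (4, 1) -> (4, 1): one pre-activation per hidden node
pre_activation = np.dot(w_ih, x) + b_h
print(pre_activation.shape)  # (4, 1)
```

Because the weight matrix is `hidden_nodes x input_nodes`, inputs must be fed in as column vectors; this convention carries through the rest of the code.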

### Step 4: Implement the Feedforward Function

The feedforward function performs the calculations necessary to propagate inputs through the network and generate an output. Apply the activation function (e.g., sigmoid or ReLU) to each layer to introduce non-linearity.

```python
    def feedforward(self, inputs):
        hidden_inputs = np.dot(self.weights_input_hidden, inputs) + self.biases_hidden
        hidden_outputs = sigmoid(hidden_inputs)

        final_inputs = np.dot(self.weights_hidden_output, hidden_outputs) + self.biases_output
        final_outputs = sigmoid(final_inputs)

        return final_outputs
```
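Since the class is built up piecewise, it can help to see the same two-layer pass as a standalone sketch. The 2-3-1 sizes, seed, and input values below are made up for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(1)
# Hypothetical 2-3-1 network: 2 inputs, 3 hidden nodes, 1 output
w_ih, b_h = rng.random((3, 2)), rng.random((3, 1))
w_ho, b_o = rng.random((1, 3)), rng.random((1, 1))

x = np.array([[0.5], [0.9]])                  # one sample as a column vector
hidden = sigmoid(np.dot(w_ih, x) + b_h)       # (3, 1)
output = sigmoid(np.dot(w_ho, hidden) + b_o)  # (1, 1)
print(output.shape)
```

Each layer is just a matrix multiply, a bias add, and an element-wise activation; stacking more hidden layers repeats the same pattern.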

### Step 5: Define the Activation Function

An activation function introduces non-linearity into the neural network, allowing it to learn complex patterns. Here, we will use the sigmoid function as the activation function, defined as a module-level function so the methods above can call it.

```python
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
```
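A quick check of the sigmoid's key properties: it squashes any real input into the open interval (0, 1), and it returns exactly 0.5 at zero. Because `np.exp` works element-wise, the same function handles scalars and whole layers alike:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

xs = np.array([-5.0, 0.0, 5.0])
ys = sigmoid(xs)
print(ys)  # roughly [0.0067, 0.5, 0.9933]
```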

### Step 6: Train the Neural Network

To train the neural network, we need a training loop that adjusts the weights and biases based on the error between predicted and target outputs. Implement a basic training step using gradient descent.

```python
    def train(self, inputs, targets, learning_rate):
        # Feedforward
        hidden_inputs = np.dot(self.weights_input_hidden, inputs) + self.biases_hidden
        hidden_outputs = sigmoid(hidden_inputs)

        final_inputs = np.dot(self.weights_hidden_output, hidden_outputs) + self.biases_output
        final_outputs = sigmoid(final_inputs)

        # Backpropagation
        output_errors = targets - final_outputs
        hidden_errors = np.dot(self.weights_hidden_output.T, output_errors)

        # Each gradient includes the derivative of the sigmoid, s * (1 - s)
        output_gradient = output_errors * final_outputs * (1 - final_outputs)
        hidden_gradient = hidden_errors * hidden_outputs * (1 - hidden_outputs)

        # Update weights and biases
        self.weights_hidden_output += learning_rate * np.dot(output_gradient, hidden_outputs.T)
        self.weights_input_hidden += learning_rate * np.dot(hidden_gradient, inputs.T)
        self.biases_output += learning_rate * output_gradient
        self.biases_hidden += learning_rate * hidden_gradient
```
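Putting the pieces together, here is one way the class might be exercised end to end. The XOR data, seed, learning rate, and epoch count are illustrative choices, not part of the walkthrough above, and a plain loop like this is not guaranteed to reach zero error, only to reduce it:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class NeuralNetwork:
    def __init__(self, input_nodes, hidden_nodes, output_nodes):
        self.weights_input_hidden = np.random.rand(hidden_nodes, input_nodes)
        self.weights_hidden_output = np.random.rand(output_nodes, hidden_nodes)
        self.biases_hidden = np.random.rand(hidden_nodes, 1)
        self.biases_output = np.random.rand(output_nodes, 1)

    def feedforward(self, inputs):
        hidden = sigmoid(np.dot(self.weights_input_hidden, inputs) + self.biases_hidden)
        return sigmoid(np.dot(self.weights_hidden_output, hidden) + self.biases_output)

    def train(self, inputs, targets, learning_rate):
        hidden_outputs = sigmoid(np.dot(self.weights_input_hidden, inputs) + self.biases_hidden)
        final_outputs = sigmoid(np.dot(self.weights_hidden_output, hidden_outputs) + self.biases_output)
        output_errors = targets - final_outputs
        hidden_errors = np.dot(self.weights_hidden_output.T, output_errors)
        output_gradient = output_errors * final_outputs * (1 - final_outputs)
        hidden_gradient = hidden_errors * hidden_outputs * (1 - hidden_outputs)
        self.weights_hidden_output += learning_rate * np.dot(output_gradient, hidden_outputs.T)
        self.weights_input_hidden += learning_rate * np.dot(hidden_gradient, inputs.T)
        self.biases_output += learning_rate * output_gradient
        self.biases_hidden += learning_rate * hidden_gradient

np.random.seed(42)  # illustrative seed for reproducibility
nn = NeuralNetwork(input_nodes=2, hidden_nodes=4, output_nodes=1)

# XOR truth table, each sample as a column vector
data = [(np.array([[a], [b]], dtype=float), np.array([[a ^ b]], dtype=float))
        for a in (0, 1) for b in (0, 1)]

def mean_squared_error(net):
    return float(np.mean([(t - net.feedforward(x)) ** 2 for x, t in data]))

before = mean_squared_error(nn)
for _ in range(5000):
    for x, t in data:
        nn.train(x, t, learning_rate=0.5)
after = mean_squared_error(nn)
print(before, after)  # the error shrinks as training proceeds
```

Note that `train` makes one gradient step per sample; shuffling the samples each epoch or averaging gradients over a batch are common refinements left out here for simplicity.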