Building a Neural Network from Scratch, without using standard deep learning libraries such as TensorFlow or PyTorch.
This project demonstrates the implementation of a neural network from scratch, without relying on standard deep learning libraries such as TensorFlow or PyTorch. By building a neural network from the ground up, we gain a deeper understanding of its underlying principles and mechanics. Three datasets are used: Blobs, Circles, and Fashion MNIST.
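For reference, the Blobs and Circles toy datasets can be generated with scikit-learn's make_blobs and make_circles helpers. This is just one possible setup; the repository may generate or load its data differently:

```python
# Hypothetical data setup; the repository may load these datasets differently.
from sklearn.datasets import make_blobs, make_circles

# Blobs: three Gaussian clusters, a simple multi-class classification problem.
X_blobs, y_blobs = make_blobs(n_samples=1000, centers=3, random_state=0)

# Circles: two concentric rings, a non-linearly separable binary problem.
X_circles, y_circles = make_circles(n_samples=1000, noise=0.05, factor=0.5, random_state=0)
```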
Custom Implementation: We develop every component of the neural network, including layers, activation functions, loss functions, and optimization algorithms, from scratch using Python.
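As an illustration of what such a component might look like, here is a minimal dense (fully connected) layer. It assumes NumPy is used for array math; the class and attribute names are illustrative, not necessarily those used in the repository:

```python
import numpy as np

class DenseLayer:
    """A fully connected layer: output = inputs @ weights + biases."""
    def __init__(self, n_inputs, n_neurons):
        # Small random weights and zero biases.
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        self.inputs = inputs                      # cached for the backward pass
        self.output = inputs @ self.weights + self.biases

    def backward(self, dvalues):
        # Gradients with respect to parameters and inputs.
        self.dweights = self.inputs.T @ dvalues
        self.dbiases = dvalues.sum(axis=0, keepdims=True)
        self.dinputs = dvalues @ self.weights.T
```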
Modular Design: The codebase is organized into modular components, making it easy to understand, modify, and extend. Each module encapsulates specific functionality, promoting code reusability and maintainability.
Flexible Configuration: Users can experiment with different network architectures, activation functions, and optimization techniques by tweaking parameters in the code. This flexibility allows for a deeper exploration of neural network behavior and performance.
Neural Network Architecture: Define the structure of the neural network, including the number of layers, neurons per layer, and connectivity between layers.
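One simple way to express such an architecture is as an ordered list of layers whose forward passes are chained together. The sketch below reuses the hypothetical DenseLayer above; the actual repository may structure its network differently:

```python
# Hypothetical composition of layers into a network; names are illustrative.
class Network:
    def __init__(self, layers):
        self.layers = layers          # e.g. [DenseLayer(2, 64), ReLU(), DenseLayer(64, 3)]

    def forward(self, X):
        for layer in self.layers:
            layer.forward(X)
            X = layer.output          # each layer's output feeds the next layer
        return X
```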
Activation Functions: Implement various activation functions such as ReLU, Sigmoid, and Tanh, allowing users to choose the activation function best suited for their application.
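For instance, the forward passes of these activations might be written as follows (a NumPy-based sketch; the exact class interfaces in the repository may differ):

```python
import numpy as np

class ReLU:
    def forward(self, inputs):
        # Zero out negative values, pass positives through unchanged.
        self.output = np.maximum(0, inputs)

class Sigmoid:
    def forward(self, inputs):
        # Squash values into the range (0, 1).
        self.output = 1.0 / (1.0 + np.exp(-inputs))

class Tanh:
    def forward(self, inputs):
        # Squash values into the range (-1, 1).
        self.output = np.tanh(inputs)
```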
Loss Functions: Implement popular loss functions such as Mean Squared Error (MSE) and Cross-Entropy Loss, enabling efficient training and evaluation of the neural network.
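These losses can be expressed in a few lines of NumPy; the versions below are a sketch (assuming integer class labels and softmax probabilities for the cross-entropy case) and may not match the repository's exact code:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Mean Squared Error, typically used for regression."""
    return np.mean((y_pred - y_true) ** 2)

def cross_entropy_loss(probs, y_true):
    """Categorical cross-entropy for integer class labels, given softmax probabilities."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)                        # avoid log(0)
    correct_confidences = probs[np.arange(len(probs)), y_true]    # probability of the true class
    return np.mean(-np.log(correct_confidences))
```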
Optimization Algorithms: Implement optimization algorithms like Gradient Descent and its variants (e.g., Stochastic Gradient Descent, Mini-Batch Gradient Descent) for training the neural network and updating model parameters.
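At its core, a gradient-descent update subtracts the learning rate times the gradient from each parameter. A minimal SGD optimizer might look like this (assuming each layer caches dweights and dbiases during its backward pass, as in the DenseLayer sketch above):

```python
class SGD:
    """Vanilla stochastic gradient descent: param -= learning_rate * gradient."""
    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate

    def update(self, layer):
        # Assumes the layer stored dweights/dbiases during its backward pass.
        layer.weights -= self.learning_rate * layer.dweights
        layer.biases -= self.learning_rate * layer.dbiases
```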
Training Pipeline: Develop a training pipeline to iteratively train the neural network on labeled datasets, monitor performance metrics, and adjust model parameters to improve accuracy.
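Putting the pieces above together, a training loop might look like the following. This is only a sketch: it assumes a model object exposing forward, backward, and trainable_layers, which is an assumption about the interface rather than the repository's actual API:

```python
# Illustrative training loop; object and attribute names are assumptions,
# not necessarily those used in the repository.
for epoch in range(1001):
    predictions = model.forward(X_train)               # forward pass
    loss = cross_entropy_loss(predictions, y_train)    # compute loss

    model.backward(predictions, y_train)               # backpropagate gradients
    for layer in model.trainable_layers:
        optimizer.update(layer)                        # parameter update

    if epoch % 100 == 0:
        accuracy = (predictions.argmax(axis=1) == y_train).mean()
        print(f"epoch {epoch}: loss={loss:.4f}, accuracy={accuracy:.3f}")
```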
To get started with this project, follow these steps:
Navigate to the Directory: Move into the project directory using cd neural-network-from-scratch.
Explore the Code: Dive into the codebase to understand how each component of the neural network is implemented. Feel free to modify the code and experiment with different configurations.
Run Examples: Check out the example scripts provided in the examples directory to see how to use the neural network implementation for various tasks such as classification or regression.
Contribute: If you find any issues or have suggestions for improvements, we welcome contributions from the community. Fork the repository, make your changes, and submit a pull request.