FizTorch is a lightweight deep learning framework designed for educational purposes and small-scale projects. It provides a simple and intuitive API for building and training neural networks, inspired by popular frameworks like PyTorch.
- Tensor Operations: Basic tensor operations with support for automatic differentiation (see the sketch below).
- Neural Network Layers: Common neural network layers such as Linear and ReLU.
- Sequential Model: Easy-to-use sequential model for stacking layers.
- Functional API: Functional forms of common neural network operations.
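As a taste of the automatic differentiation feature, here is a minimal sketch. Only `backward()` and the `.data` attribute are confirmed by the training example later in this README; the overloaded arithmetic operators and the `.grad` attribute are assumptions about the Tensor API.

```python
from fiztorch.tensor import Tensor

# Build a small computation: y = x * x + x  (operator overloading assumed)
x = Tensor([2.0])
y = x * x + x

# Backpropagate through the graph; analytically dy/dx = 2x + 1 = 5 at x = 2
y.backward()

print(x.grad)  # .grad is an assumed attribute holding the accumulated gradient
```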
To install FizTorch, follow these steps:
- Clone the Repository:

  ```bash
  git clone https://github.com/ahammadnafiz/FizTorch.git
  cd FizTorch
  ```

- Set Up a Virtual Environment (optional but recommended):

  ```bash
  python -m venv fiztorch-env
  source fiztorch-env/bin/activate  # On Windows, use `fiztorch-env\Scripts\activate`
  ```

- Install Dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Install FizTorch:

  ```bash
  pip install -e .
  ```
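To confirm the install, a quick smoke test (the printed value is whatever the Tensor's string representation produces; the import path and constructor are taken from the examples below):

```python
# Should run without errors after `pip install -e .`
from fiztorch.tensor import Tensor

print(Tensor([1.0, 2.0, 3.0]))
```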
Here is a simple example of how to use FizTorch to build and train a neural network:
```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report, confusion_matrix

from fiztorch.tensor import Tensor
from fiztorch.nn.layers import Linear, ReLU, Sigmoid
from fiztorch.nn.sequential import Sequential
import fiztorch.nn.functional as F
import fiztorch.optim.optimizer as opt


def load_data():
    # Standardize the features and wrap the splits in FizTorch tensors
    X, y = load_breast_cancer(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    return train_test_split(Tensor(X), Tensor(y), test_size=0.2, random_state=42)


def create_model():
    # 30 input features -> two hidden layers -> single sigmoid output
    return Sequential(Linear(30, 64), ReLU(), Linear(64, 32), ReLU(), Linear(32, 1), Sigmoid())


def train_epoch(model, optimizer, X_train, y_train, batch_size=32):
    # Shuffle the training set and step through it in mini-batches
    indices = np.random.permutation(len(X_train.data))
    for i in range(0, len(X_train.data), batch_size):
        batch = indices[i:i + batch_size]
        optimizer.zero_grad()
        loss = F.binary_cross_entropy(model(Tensor(X_train.data[batch])), Tensor(y_train.data[batch]))
        loss.backward()
        optimizer.step()


def evaluate(model, X, y):
    # Threshold the sigmoid outputs at 0.5 to get class predictions
    preds = model(X).data > 0.5
    print(classification_report(y.data, preds))
    print(confusion_matrix(y.data, preds))


def main():
    X_train, X_test, y_train, y_test = load_data()
    model, optimizer = create_model(), opt.Adam(model.parameters(), lr=0.001)
    for _ in range(100):
        train_epoch(model, optimizer, X_train, y_train)
    evaluate(model, X_test, y_test)


if __name__ == "__main__":
    main()
```
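Running the script trains the classifier for 100 epochs on the scikit-learn breast-cancer dataset, then prints a classification report and confusion matrix for the held-out 20% test split.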
The repository also includes two larger demos:

- Neural network training on MNIST digits with the Adam optimizer (configurable learning rate), mini-batch support, and real-time accuracy/loss tracking.
- Neural network training on the California Housing dataset.
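The per-epoch tracking in those demos can be approximated with the pieces already shown. Below is an illustrative sketch, not the demos' actual code: it reuses the imports and binary-classification setup from the training example above and assumes the loss tensor's `.data` is a scalar.

```python
# Illustrative per-epoch loss/accuracy tracking, built only from APIs
# used in the training example above (np, Tensor, F, optimizer).
def train_with_tracking(model, optimizer, X_train, y_train, epochs=10, batch_size=32):
    for epoch in range(epochs):
        indices = np.random.permutation(len(X_train.data))
        epoch_loss = 0.0
        for i in range(0, len(X_train.data), batch_size):
            batch = indices[i:i + batch_size]
            optimizer.zero_grad()
            preds = model(Tensor(X_train.data[batch]))
            loss = F.binary_cross_entropy(preds, Tensor(y_train.data[batch]))
            loss.backward()
            optimizer.step()
            epoch_loss += float(loss.data)  # assumes a scalar loss value
        # Report running metrics after each epoch
        accuracy = np.mean((model(X_train).data > 0.5) == y_train.data)
        print(f"epoch {epoch + 1}: loss={epoch_loss:.4f} accuracy={accuracy:.4f}")
```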
```python
from fiztorch.tensor import Tensor
from fiztorch.nn import Linear

# Create a linear layer mapping 2 input features to 3 outputs
layer = Linear(2, 3)

# Create some input data (a single sample with 2 features)
input = Tensor([[1.0, 2.0]])

# Forward pass produces a 1x3 output tensor
output = layer(input)

# Print the output
print(output)
```
```python
from fiztorch.tensor import Tensor
from fiztorch.nn import ReLU

# Create a ReLU activation
relu = ReLU()

# Create some input data
input = Tensor([-1.0, 0.0, 1.0])

# Forward pass: negative values are clamped to zero -> [0.0, 0.0, 1.0]
output = relu(input)

# Print the output
print(output)
```
```python
from fiztorch.tensor import Tensor
from fiztorch.nn import Linear, ReLU, Sequential

# Define a sequential model: 2 -> 3 -> 1
model = Sequential(
    Linear(2, 3),
    ReLU(),
    Linear(3, 1)
)

# Create some input data
input = Tensor([[1.0, 2.0]])

# Forward pass runs each layer in order
output = model(input)

# Print the output
print(output)
```
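Gradients flow through a Sequential model just as in the training example. Here is a minimal sketch; `backward()` and `model.parameters()` both appear in the training example above, while the per-parameter `.grad` attribute is an assumption about the Tensor API.

```python
from fiztorch.tensor import Tensor
from fiztorch.nn import Linear, ReLU, Sequential

model = Sequential(Linear(2, 3), ReLU(), Linear(3, 1))

# Forward pass on a single sample
output = model(Tensor([[1.0, 2.0]]))

# Backpropagate from the 1x1 output; each parameter should now hold a gradient
output.backward()

for p in model.parameters():
    print(p.grad)  # assumed attribute; optimizer.step() consumes these during training
```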
- Implement basic tensor operations
- Add support for automatic differentiation
- Create fundamental neural network layers
- Build sequential model functionality
- Implement basic optimizers
- Add MNIST digit recognition example
- Add California housing regression example
- Add more activation functions (Leaky ReLU, ELU, SELU)
- Implement convolutional layers
- Add batch normalization
- Support GPU acceleration
- Create comprehensive documentation
- Add unit tests
- Implement data loading utilities
- Add model saving/loading functionality
- Implement dropout layers
- Add learning rate schedulers
- Create visualization utilities
- Support multi-GPU training
- Add model quantization
- Add dataset loading functionality
- Enhance tensor operations with more advanced functionalities (e.g., broadcasting).
- Add support for GPU acceleration (e.g., via CUDA or ROCm).
- Improve the API for ease of use and consistency.
- Add additional layers such as Convolutional, Dropout, and BatchNorm.
- Expand the set of activation functions (e.g., ELU).
- Integrate pre-trained models for common tasks.
- Implement additional optimizers.
- Add learning rate schedulers.
- Enhance support for custom loss functions.
- Provide built-in dataset utilities (e.g., MNIST, CIFAR).
- Create a flexible data loader with augmentation capabilities.
- Add utilities for loss/accuracy visualization.
- Integrate real-time training monitoring (e.g., TensorBoard support).
- Establish guidelines for community-driven feature additions.
- Host challenges to encourage usage and development.
Contributions are welcome! Please follow these steps to contribute:
- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Commit your changes (`git commit -am 'Add new feature'`).
- Push to the branch (`git push origin feature-branch`).
- Create a new Pull Request.
FizTorch is licensed under the MIT License. See the LICENSE file for more information.
For any questions or feedback, please open an issue or contact the maintainers.