An image recognition app built with Swift and Core ML that allows users to upload images and analyzes them to predict the objects they contain. The app uses a Core ML model to provide probability-based predictions on what the object in the image might be.
Key Features • How To Use • Technologies Used • Documentation • License
## Key Features

- Image recognition using Core ML: Users can upload an image, and the app analyzes it with a pre-trained MobileNetV2 Core ML model.
- Probability-based predictions: The app reports a percentage-based likelihood for each object the model recognizes in the image.
- Real-time analysis: Once an image is uploaded, the Core ML model processes it and returns predictions almost instantly.
- Fast and lightweight: The MobileNetV2 model is optimized for mobile devices, enabling efficient on-device classification.
- Supports multiple image formats: Users can upload images in various formats, and the app processes them seamlessly.
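The classification flow described above can be sketched with Apple's Vision and Core ML frameworks. This is a minimal illustration, not the app's actual code: it assumes the MobileNetV2 model file is bundled in the Xcode project, so that Xcode's code generation produces the `MobileNetV2` class used below.

```swift
import CoreML
import Vision
import UIKit

// Hypothetical helper: classify a UIImage with the bundled MobileNetV2 model
// and report the top label with its confidence as a percentage.
func classify(_ image: UIImage, completion: @escaping (String) -> Void) {
    guard let ciImage = CIImage(image: image),
          let model = try? VNCoreMLModel(
              for: MobileNetV2(configuration: MLModelConfiguration()).model) else {
        completion("Could not load the image or the model")
        return
    }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else {
            completion("No results")
            return
        }
        // VNClassificationObservation confidences are in 0...1; scale to percent.
        completion(String(format: "%@ (%.1f%%)", top.identifier, top.confidence * 100))
    }

    // Vision requests can block, so perform them off the main thread.
    let handler = VNImageRequestHandler(ciImage: ciImage)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

Wrapping the Core ML model in a `VNCoreMLRequest` lets Vision handle image resizing and color-format conversion to match the model's expected input.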
## How To Use

To run this project on your local machine, you'll need to have Xcode installed. Then, follow these steps:
```bash
# Clone this repository
$ gh repo clone yavuzsemrem/image-recognizer-core-ml

# Go into the repository
$ cd image-recognizer-core-ml

# Open the project in Xcode
$ open image-recognizer-core-ml.xcodeproj

# Build the app from the command line (or press Cmd+R in Xcode to run it in the simulator)
$ xcodebuild
```
## Technologies Used

- Swift - Main programming language.
- UIKit - For building UI components.
- MobileNetV2 Core ML model - The MobileNetV2 architecture, trained to classify the dominant object in a camera frame or image.
- Core ML - Apple's framework for integrating machine learning models into apps.
- CocoaPods - For managing dependencies.
- Swift Package Manager - For adding and managing Swift libraries.
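If the project manages dependencies through CocoaPods, its Podfile would follow the usual shape sketched below. The target name and commented-out pod are illustrative assumptions, not dependencies the project is known to use.

```ruby
# Podfile sketch (hypothetical target name; adjust to match the Xcode project)
platform :ios, '14.0'
use_frameworks!

target 'image-recognizer-core-ml' do
  # Declare pods here as needed, e.g.:
  # pod 'SDWebImage'
end
```

After editing the Podfile, running `pod install` generates an `.xcworkspace`, which should then be opened instead of the `.xcodeproj`.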
## Documentation

- Core ML - Comprehensive documentation for using Core ML.
- MobileNetV2 docs - Official guide on integrating MobileNetV2 Core ML models into iOS apps.
- Core ML models list - Access the full list of Core ML models that Apple has shared.
## License

This project is licensed under the MIT License - see the LICENSE file for details.