Replies: 1 comment
-
It looks like the weights were not found. Can you confirm you put them in
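One quick way to confirm what is actually on disk before calling detect.py is something like this (a minimal sketch using the paths from the command in the question; the `exp5` run folder may be named differently on your machine, e.g. `exp`, `exp2`, ...):

```python
from pathlib import Path

# Path taken from the detect.py command in the question; adjust if your
# training run wrote to a different exp folder.
weights = Path(r"C:\Users\Helix\Documents\woodham\yolov5repo\runs\train\exp5\weights\best.pt")
print(weights.exists())  # False here would explain the FileNotFoundError

# List every .pt file that training actually produced
runs = Path(r"C:\Users\Helix\Documents\woodham\yolov5repo\runs\train")
for pt in runs.glob("*/weights/*.pt"):
    print(pt)
```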
-
I have been trying out the "train on your own data" tutorial from the YOLOv5 repo (https://colab.research.google.com/github/roboflow-ai/yolov5-custom-training-tutorial/blob/main/yolov5-custom-training.ipynb). It works in Colab with Roboflow, but when I run it locally in Jupyter a `\` gets inserted into the dataset path instead of a `/`, which I think is what causes the error below.
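To show what I mean about the separators, this is roughly the normalisation I expected to need (just a sketch to illustrate the issue; the folder name is the one Roboflow created for my project, reported by `dataset.location` in the script below):

```python
from pathlib import Path

# dataset.location is what the Roboflow download() call in the script below
# reports; on my Windows machine it comes back with backslashes:
dataset_location = r"C:\Users\Helix\Documents\woodham\yolov5repo\projectname-2"

data_yaml = Path(dataset_location) / "data.yaml"
print(data_yaml)             # backslash form that ends up in the --data argument
print(data_yaml.as_posix())  # forward-slash form I expected to pass to train.py
```

The full script I'm running (exported from the Colab notebook) is below, followed by the error log.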
```python
# -*- coding: utf-8 -*-
"""Prawn YOLOv5-Roboflow.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/18xuKkU7DeiyidTe29h7Ujs0PMvID-e-B
Custom Training with YOLOv5
In this tutorial, we assemble a dataset and train a custom YOLOv5 model to recognize the objects in our dataset. To do so we will take the following steps:
Step 1: Install Requirements
"""
# Commented out IPython magic to ensure Python compatibility.
# clone YOLOv5
#from git import Repo
#Repo.clone_from("https://github.com/ultralytics/yolov5", "C:/Users/Helix/Documents/woodham/yolov5reO")
#!git clone https://github.com/ultralytics/yolov5 # clone repo
%cd C:\Users\Helix\Documents\woodham\yolov5repo
%pip install -qr requirements.txt # install dependencies
%pip install -q roboflow
import torch
import os
from IPython.display import Image, clear_output # to display images
import cv2
#print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
"""# Step 2: Assemble Our Dataset
In order to train our custom model, we need to assemble a dataset of representative images with bounding box annotations around the objects that we want to detect. And we need our dataset to be in YOLOv5 format.
In Roboflow, you can choose between two paths:
Annotate
Version
"""
#from roboflow import Roboflow
#rf = Roboflow(model_format="yolov5", notebook="ultralytics")
# Commented out IPython magic to ensure Python compatibility.
# set up environment
os.environ["DATASET_DIRECTORY"] = "C:/Users/Helix/Documents/woodham/yolov5repo"# "/content/sample_data"
#%cd C:/Users/Helix/Documents/woodham/yolov5repo/
# Commented out IPython magic to ensure Python compatibility.
# after following the link above, receive python code with these fields filled in
#!pip install roboflow
from roboflow import Roboflow
rf = Roboflow(api_key="eeeeeeeeeeeeeeee")
project = rf.workspace("myworkspace").project("projectname")
dataset = project.version(2).download("yolov5")
%pycat {dataset.location}/data.yaml
"""# Step 3: Train Our Custom YOLOv5 model
Here, we are able to pass a number of arguments:
dataset.location
"""
!python C:/Users/Helix/Documents/woodham/yolov5repo/train.py --img 416 --batch 16 --epochs 150 --data C:/Users/Helix/Documents/woodham/yolov5repo/projectname-2/data.yaml --weights yolov5s.pt --cache
"""# Evaluate Custom YOLOv5 Detector Performance
Training losses and performance metrics are saved to Tensorboard and also to a logfile.
If you are new to these metrics, the one you want to focus on is
mAP_0.5
- learn more about mean average precision here."""
# Commented out IPython magic to ensure Python compatibility.
# Start tensorboard
# Launch after you have started training
# logs save in the folder "runs"
%load_ext tensorboard
%tensorboard --logdir runs
"""#Run Inference With Trained Weights
Run inference with a pretrained checkpoint on contents of
test/images
folder downloaded from Roboflow."""
!python C:/Users/Helix/Documents/woodham/yolov5repo/detect.py --weights C:/Users/Helix/Documents/woodham/yolov5repo/runs/train/exp5/weights/best.pt --img 416 --conf 0.1 --source C:/Users/Helix/Documents/woodham/yolov5repo/test/images
#display inference on ALL test images
import glob
from IPython.display import Image, display
for imageName in glob.glob('C:/Users/Helix/Documents/woodham/yolov5repo/runs/detect/exp2/*.jpg'):  # assuming JPG
    display(Image(filename=imageName))
    print("\n")
"""# Conclusion and Next Steps
Congratulations! You've trained a custom YOLOv5 model to recognize your custom objects.
To improve your model's performance, we recommend first iterating on your dataset's coverage and quality. See this guide for model performance improvement.
To deploy your model to an application, see this guide on exporting your model to deployment destinations.
Once your model is in production, you will want to continually iterate and improve on your dataset and model via active learning.
"""
#export your model's weights for future use
#from google.colab import files
#files.download('./runs/train/exp/weights/best.pt')
```

Error log:

```
loading Roboflow workspace...loading Roboflow project...
Downloading Dataset Version Zip in C:/Users/Helix/Documents/woodham/yolov5repo/Company-access-2 to yolov5pytorch: 100% [24409923 / 24409923] bytes
Extracting Dataset Version Zip to C:/Users/Helix/Documents/woodham/yolov5repo/Company-access-2 in yolov5pytorch:: 100%|██████████| 940/940 [00:00<00:00, 1023.28it/s]
usage: train.py [-h] [--weights WEIGHTS] [--cfg CFG] [--data DATA] [--hyp HYP]
[--epochs EPOCHS] [--batch-size BATCH_SIZE] [--imgsz IMGSZ]
[--rect] [--resume [RESUME]] [--nosave] [--noval]
[--noautoanchor] [--noplots] [--evolve [EVOLVE]]
[--bucket BUCKET] [--cache [CACHE]] [--image-weights]
[--device DEVICE] [--multi-scale] [--single-cls]
[--optimizer {SGD,Adam,AdamW}] [--sync-bn] [--workers WORKERS]
[--project PROJECT] [--name NAME] [--exist-ok] [--quad]
[--cos-lr] [--label-smoothing LABEL_SMOOTHING]
[--patience PATIENCE] [--freeze FREEZE [FREEZE ...]]
[--save-period SAVE_PERIOD] [--seed SEED]
[--local_rank LOCAL_RANK] [--entity ENTITY]
[--upload_dataset [UPLOAD_DATASET]]
[--bbox_interval BBOX_INTERVAL]
[--artifact_alias ARTIFACT_ALIAS]
train.py: error: unrecognized arguments: C:\Users\Helix\Documents\woodham\yolov5repo\projectname-2/data.yaml
detect: weights=['C:/Users/Helix/Documents/woodham/yolov5repo/runs/train/exp5/weights/best.pt'], source=C:/Users/Helix/Documents/woodham/yolov5repo/test/images, data=yolov5repo\data\coco128.yaml, imgsz=[416, 416], conf_thres=0.1, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=yolov5repo\runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 2022-11-9 Python-3.9.15 torch-1.13.0+cpu CPU
Traceback (most recent call last):
  File "C:\Users\Helix\Documents\woodham\yolov5repo\detect.py", line 258, in <module>
    main(opt)
  File "C:\Users\Helix\Documents\woodham\yolov5repo\detect.py", line 253, in main
    run(**vars(opt))
  File "Z:\anaconda3\envs\AUTOINSTALyolo\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Helix\Documents\woodham\yolov5repo\detect.py", line 95, in run
    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
  File "C:\Users\Helix\Documents\woodham\yolov5repo\models\common.py", line 345, in __init__
    model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
  File "C:\Users\Helix\Documents\woodham\yolov5repo\models\experimental.py", line 79, in attempt_load
    ckpt = torch.load(attempt_download(w), map_location='cpu')  # load
  File "Z:\anaconda3\envs\AUTOINSTALyolo\lib\site-packages\torch\serialization.py", line 771, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "Z:\anaconda3\envs\AUTOINSTALyolo\lib\site-packages\torch\serialization.py", line 270, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "Z:\anaconda3\envs\AUTOINSTALyolo\lib\site-packages\torch\serialization.py", line 251, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Helix\Documents\woodham\yolov5repo\runs\train\exp5\weights\best.pt'
```