This comes from a big project I prepared last semester. It was a lot of fun, so here is a walkthrough of the whole process.

# ultralytics package installation

YOLO has now progressed to v11. The commonly used v8 and v11 are developed by [ultralytics](https://docs.ultralytics.com/), so the most convenient way to deploy a YOLO model is to install the ultralytics package.

## Pre-preparation

Before completing the following steps, please install [anaconda](https://blog.csdn.net/qq_46569815/article/details/120424395)!

## New environment

Why create a new environment? In short: if the yolo deployment fails or breaks and you don't know how to fix it, you can simply delete the environment and create a new one; in other words, the installation is isolated. Run the following commands to create and then activate the environment.

```bash
conda create -n myenv python=3.10  # create an environment (specifying a Python version is recommended)
conda activate myenv               # activate the environment
```
You should then see the name myenv at the front of the command line.
## Install the ultralytics package
(Depending on your network, a proxy or a pip mirror may be needed.) Run the command directly:
```bash
pip install ultralytics
```
This step takes quite a while; be patient and wait until the command prompt returns on its own. Do not interrupt it midway!
## Check whether the installation is successful
Type `python` to start the interpreter, then at the `>>>` prompt enter:
```python
from ultralytics import YOLO
```
If no error is reported, the installation succeeded. If it fails, the simplest fix is to exit the environment (`conda deactivate`), delete it (`conda env remove -n myenv`), and start over.

# Training process
## Download data
A complete place to find training data is [kaggle](https://www.kaggle.com/search); the datasets you find there generally come in the form described below. If you are too lazy to search, you can directly use the following [helmet series](https://www.kaggle.com/code/ammarnassanalhajali/nfl-training-and-inference-yolov5) dataset.
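Kaggle downloads usually arrive as a `.zip` archive. A small stdlib sketch for unpacking one and listing what is inside (the function name and paths are my own, for illustration):

```python
import zipfile
from pathlib import Path

def extract_dataset(archive, dest):
    """Extract a downloaded .zip archive into dest and return its top-level entries."""
    dest_path = Path(dest)
    dest_path.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest_path)
    return sorted(p.name for p in dest_path.iterdir())
```

Listing the top-level entries first makes it easy to spot the `train`/`val`/`test` folders discussed in the next section.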
## Understand the data set
The downloaded data typically contains the following three folders
- train
- val
- test

They are used for training, validation and testing respectively. Some lightweight datasets may only have train data, with val and test empty; don't panic, such data can still be trained on.
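To sanity-check what a downloaded dataset actually contains before training, here is a sketch assuming the common `images`/`labels` subfolder layout (folder names vary between datasets, so adjust as needed):

```python
from pathlib import Path

def summarize_splits(root):
    """Count images and label files in each train/val/test split,
    assuming the layout <root>/<split>/images and <root>/<split>/labels."""
    counts = {}
    for split in ("train", "val", "test"):
        images = list((Path(root) / split / "images").glob("*.*"))
        labels = list((Path(root) / split / "labels").glob("*.txt"))
        counts[split] = {"images": len(images), "labels": len(labels)}
    return counts
```

An empty (or missing) val/test split then simply shows up as zero counts rather than an error.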
Different datasets may have complicated folder layouts, but we only need to find the `images` and `labels` folders: `images` stores the pictures, while `labels` stores txt files with the same file names as the pictures; these record the positions of the objects. Each object takes one line of five numbers (a file may have just one line if there is only one object):
- the first number is the class id (i.e. which kind of object it is),
- the remaining four are the bounding box's center x, center y, width and height, all normalized to the range 0-1 (the standard YOLO label format).
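A minimal sketch of decoding one such label line, assuming the standard YOLO normalized cx/cy/w/h format and a known image size (the function name is my own):

```python
def yolo_label_to_box(line, img_w, img_h):
    """Convert one YOLO label line ("class cx cy w h", all normalized to 0-1)
    into (class_id, x_min, y_min, x_max, y_max) in pixel coordinates."""
    cls, cx, cy, w, h = line.split()
    cx, w = float(cx) * img_w, float(w) * img_w
    cy, h = float(cy) * img_h, float(h) * img_h
    return int(cls), cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

# e.g. a box covering the central quarter of a 100x100 image:
print(yolo_label_to_box("0 0.5 0.5 0.5 0.5", 100, 100))
```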
## Configure the yaml file
```yaml
train: D:\YOLO11test\css-data\train\images\

val: D:\YOLO11test\css-data\valid\images\

test: D:\YOLO11test\css-data\test\images\


nc: 10

names: [Hardhat,Mask,No-Hardhat, No-Mask, No-Safety Vest, Person, Safety Cone, Safety Vest,machinery,vehicle]
```

- This file is essentially a guide for training: train, val and test point to the image folders of the corresponding splits. It is strongly recommended to use absolute paths; with relative paths the data may not be found, because the default working directory (often on the C drive) can differ.
- `nc: 10` indicates that there are ten classes to be recognized.
- `names: [...]` lists the name given to each of the ten classes. Note that the order must match the class ids (the first number of each label line); do not mix them up.
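A quick consistency check between `nc`, `names` and the label ids can save a confusing training run. A sketch (this helper is my own, not part of ultralytics):

```python
def check_classes(nc, names):
    """Verify nc matches the number of names and return the id -> name mapping
    implied by the list order (ids are the first number in each label line)."""
    if nc != len(names):
        raise ValueError(f"nc={nc} but {len(names)} names were given")
    return dict(enumerate(names))

print(check_classes(3, ["Hardhat", "Mask", "No-Hardhat"]))
```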
## How do I select a python environment in vscode
First of all, you need the Python extension installed in vscode (yes, the extension is simply called Python).
Open a `.py` file and proceed as follows:
- Click the version number next to python in the bottom right corner.

![](/images/post/yolo python deployment/bottom-right.png)
- Select an interpreter in the box that pops up at the top; you can browse for one manually if yours is not listed.
![](/images/post/yolo python deployment/pop.png)
- If the configuration did not succeed, don't worry: you can run in the terminal instead. Click the drop-down icon next to the run triangle in the top right corner and a terminal will open; anyone familiar with the command line will feel right at home. Select the `python.exe` of the corresponding environment to run the script.
![](/images/post/yolo python deployment/right top.jpg)![](/images/post/yolo python deployment/command line.png)
For those not familiar with the command line, remember that Ctrl+C force-quits the running command; use it when something goes wrong!
## Training code
```python
from ultralytics import YOLO

# Load a model
model = YOLO("D:/YOLO11test/Models/yolov8n.pt")  # pretrained YOLOv8n model

model.train(data="D:/YOLO11test/safehat.yaml", epochs=20)  # train on our dataset

model.val()  # evaluate on the validation split
```
- The pretrained `yolov8n.pt` weights can be downloaded from the [ultralytics website](https://docs.ultralytics.com/); choose a model according to your needs. Its location here is also given as an absolute path.
- `safehat.yaml` is the yaml file we just configured.
- `epochs` is the number of training rounds. If you just want to play around, make it smaller, or find a smaller dataset up front.

Next comes a very long training process!
# Inference with the training results
After training finishes, the terminal prints a success message and tells you where the trained model was saved (generally a `runs/detect/train/weights` folder under the directory you ran from). Follow that path to find the weight file `best.pt`, copy it to a familiar location, and then write the following code:
```python
from ultralytics import YOLO

# Load a model
model = YOLO("D:/YOLO11test/best.pt")  # our trained weights

model.predict("D:/YOLO11test/1.mp4",save=True,line_width=2)
```
The path is wherever your `best.pt` is, and `1.mp4` is the video (it can also be a picture) you want to run inference on. Now comes the moment to witness the miracle!
Run it: the model will cut your video into frames and run inference frame by frame. When it finishes, you get the video with the predictions drawn on it (its save location is also printed in the terminal at the end).
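ultralytics numbers its output folders (`predict`, `predict2`, ...), so after several runs it helps to locate the newest one programmatically. A stdlib sketch, assuming the default `runs/detect` output directory:

```python
from pathlib import Path

def latest_run(base="runs/detect"):
    """Return the most recently modified run folder under base, or None."""
    base_path = Path(base)
    if not base_path.exists():
        return None
    runs = [p for p in base_path.iterdir() if p.is_dir()]
    return max(runs, key=lambda p: p.stat().st_mtime, default=None)
```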
# Reference tutorial
- https://blog.csdn.net/haojiyaha/article/details/143028516#comments_35126027
- https://www.bilibili.com/video/BV1Sw411v7nR/?spm_id_from=333.337.search-card.all.click&vd_source=eca564ad7b44345c8105eebb08f088c6