Last Updated on 07/11/2025 by Eran Feit
Understanding YOLOv5 in a Practical Way
YOLOv5 is one of the most popular deep learning frameworks for real-time object detection, and for good reason.
It’s fast, lightweight, and flexible, making it a great fit for everything from quick experiments on your laptop to full production pipelines running on GPUs or edge devices. This Yolov5 segmentation tutorial walks you through environment setup and real examples so you can create clean, production-ready masks.
Instead of treating detection as a slow, multi-stage process, YOLOv5 predicts bounding boxes and class probabilities in a single pass, which is exactly why it’s called “You Only Look Once.”
One of the biggest advantages of YOLOv5 is how friendly it is for developers.
You clone the repository, choose a pretrained model (like yolov5s, yolov5m, or yolov5x), and you’re ready to run detection with just a few commands—no complicated setup, no obscure hacks.
The same simple interface also lets you fine-tune models on your own dataset, export to different formats, and integrate with tools like TensorBoard for monitoring and debugging.
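To get a feel for how little code this takes, here is a minimal detection sketch using the official torch.hub entry point. It is my own illustration (the image path is a placeholder, and it assumes the environment you will set up below is ready); later in this tutorial we switch to the dedicated segmentation script.

import torch

# Load the small pretrained YOLOv5 detection model straight from the official repo.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on any local image (replace the placeholder path with your own file).
results = model("images/example.jpg")

# Print a summary of detections and save annotated copies under runs/detect/.
results.print()
results.save()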
For segmentation tasks, YOLOv5 has dedicated *-seg models (such as yolov5x-seg) that extend the same philosophy: fast, clean, and easy to use.
Instead of only drawing boxes, these models add pixel-level masks, giving you a powerful way to separate objects from the background, count instances, or prepare data for more advanced pipelines.
This is exactly where your Yolov5 segmentation tutorial comes in—taking the strengths of YOLOv5 and turning them into a clear, repeatable workflow for real-world image segmentation.
Introduction to our tutorial
In this yolov5 segmentation tutorial, you’ll learn how to go from a clean environment setup to generating high-quality instance masks on your own images using yolov5x-seg.
We’ll walk through creating a dedicated conda environment, installing the correct PyTorch and CUDA versions, cloning the official YOLOv5 repository, and adding all the supporting libraries you need for a smooth workflow.
Then we’ll run segmentation on both single images and entire folders, so you can quickly validate results, build datasets, or plug masks into your existing computer vision pipelines.
By the end of this tutorial, you’ll have a reliable, repeatable yolov5 segmentation setup you can reuse across real-world projects without guesswork.
If you’re just starting with YOLO models, you can also explore my YOLOv5 image classification tutorial to understand the core workflow before moving into instance segmentation.
You can download the code here : https://ko-fi.com/s/3663aa2487
Link for the post in Medium : https://medium.com/@feitgemel/quick-yolov5-segmentation-tutorial-in-minutes-7b83a6a867e4
You can find more tutorials in my blog : https://eranfeit.net/blog/
Want to get started with Computer Vision or take your skills to the next level?
If you’re just beginning, I recommend this step-by-step course designed to introduce you to the foundations of Computer Vision – Complete Computer Vision Bootcamp With PyTorch & TensorFlow
If you’re already experienced and looking for more advanced techniques, check out this deep-dive course – Modern Computer Vision GPT, PyTorch, Keras, OpenCV4
Getting Your YOLOv5 Segmentation Environment Ready
Short description:
In this part, you prepare a dedicated, conflict-free environment for YOLOv5 instance segmentation so everything runs smoothly on CPU or GPU.
A reliable YOLOv5 instance segmentation setup starts with isolation.
Using a dedicated conda environment keeps PyTorch, CUDA, OpenCV, and Ultralytics dependencies from colliding with other projects, which is critical once you start mixing detection, segmentation, and training workflows.
Installing the correct PyTorch + CUDA combo is where many beginners get stuck.
Pinning versions (like PyTorch 2.2.2 with CUDA 11.8) keeps your environment deterministic and aligned with the GPU drivers you actually have.
That stability means you (and anyone you share this setup with) are far less likely to hit mysterious import errors.
Finally, the extra libraries—NumPy, OpenCV, Matplotlib, Pillow, PyYAML, requests, tqdm, psutil, thop, ultralytics, tensorboard—complete a practical toolkit for visualization, profiling, logging, and experimentation.
Even if you start with simple image masks, this stack can grow smoothly into training custom segmentation models, monitoring GPU usage, or pushing results to dashboards.
Installation :
### Create a clean dedicated environment for YOLOv5 instance segmentation so dependencies stay organized.
conda create --name YoloV5 python=3.8

### Activate the environment so all upcoming installs are isolated from your global Python.
conda activate YoloV5

### Choose (or create) a working folder to keep YOLOv5 and your data together.
cd /path/to/Cool-Python

### Clone the official YOLOv5 repository that includes detection, classification, and segmentation tools.
git clone https://github.com/ultralytics/yolov5.git

### Move into the YOLOv5 folder so all commands run against the cloned repo.
cd yolov5

### (Optional) Verify your CUDA toolkit installation to confirm GPU support is available.
nvcc --version

### Install PyTorch with CUDA 11.8 support, matching your GPU drivers for stable acceleration.
conda install pytorch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 pytorch-cuda=11.8 -c pytorch -c nvidia

### Install GitPython to let YOLOv5 handle internal git operations when needed.
pip install "gitpython>=3.1.30"

### Install Matplotlib for plotting results, training curves, and visual diagnostics.
pip install "matplotlib>=3.3"

### Install NumPy for fast numerical operations across the pipeline.
pip install "numpy>=1.22.2"

### Install OpenCV for reading, writing, and displaying images and results.
pip install "opencv-python>=4.1.1"

### Install Pillow as the core image-processing backend for Python.
pip install "Pillow>=10.0.1"

### Install psutil to monitor system and GPU resource usage while running inference.
pip install psutil

### Install PyYAML so YOLOv5 can read configuration and data files.
pip install "PyYAML>=5.3.1"

### Install requests to support model downloads and remote file access.
pip install "requests>=2.23.0"

### Install SciPy for additional scientific utilities used in some YOLOv5 workflows.
pip install "scipy>=1.4.1"

### Install thop to compute FLOPs and model complexity for performance analysis.
pip install "thop>=0.1.1"

### Install tqdm to get clean progress bars during inference and training.
pip install "tqdm>=4.64.0"

### Install Ultralytics tools for extended utilities and compatibility.
pip install "ultralytics>=8.0.147"

### Install TensorBoard support to visualize metrics if you later train segmentation models.
pip install tensorboard

Summary :
By the end of this section, you have a clean, GPU-ready YOLOv5 environment tailored for instance segmentation with yolov5x-seg.
Everything lives in one place, version conflicts are minimized, and you’re ready to move from theory to real masks in the next step.
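Before moving on, it is worth running a quick sanity check inside the activated YoloV5 environment. The snippet below is my own addition (not part of the original command list) and simply confirms that PyTorch and OpenCV import cleanly and that CUDA is visible.

import torch
import cv2

# Confirm the core libraries are installed and report their versions.
print("PyTorch version:", torch.__version__)
print("OpenCV version:", cv2.__version__)

# Check whether PyTorch can see a CUDA-capable GPU; False simply means CPU inference.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))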
Once your environment is stable, you’re also ready to try advanced segmentation foundations like Segment Anything Python — No-Training Image Masks or my Segment Anything auto-masks tutorial for instant masks without labels.
Running YOLOv5x-Seg for Instance Segmentation on Your Images
Short description:
Now you’ll point YOLOv5x-seg at a single image or a full folder and instantly generate masks, labels, and overlaid results.
With YOLOv5 installed, the actual instance segmentation step becomes refreshingly simple.
Instead of hand-writing inference loops, you’ll call segment/predict.py, which wraps pre- and post-processing, drawing masks, and saving outputs for you.
The --weights yolov5x-seg.pt flag loads the most accurate instance segmentation variant from the YOLOv5 family.
It’s heavier than yolov5n-seg or yolov5s-seg, but ideal when you care about crisp masks and can afford the extra compute—perfect for demos, tutorials, and offline analytics.
The --source argument is your entry point: a single file for quick tests, or an entire directory when you want to segment large batches.
Because the script walks all supported inputs, you can swap from one image to a dataset without touching the code.
Finally, the --name and --exist-ok flags help you keep experiments organized.
Each run writes predictions into runs/predict-seg/<name>, so you can compare different models, thresholds, or datasets side by side without overwriting results.
Test Images :


### Make sure you are inside the YOLOv5 project folder before running segmentation.
cd /path/to/Cool-Python/yolov5

### Copy your images directory into the YOLOv5 folder (or ensure the path is accessible).
# Example: place your files under ./images for a clean, simple path.
# (Perform the copy using your file explorer or a standard copy command.)

### Run YOLOv5x-seg on a single image to verify everything works end-to-end.
python segment/predict.py --weights yolov5x-seg.pt --source images/Rahaf.jpg --name output --exist-ok

### Run YOLOv5x-seg on an entire folder to generate instance masks for all images inside.
python segment/predict.py --weights yolov5x-seg.pt --source images --name output --exist-ok

Summary
This section gives you a minimal yet powerful YOLOv5 segmentation tutorial: one command for a single test image, one for full batches.
You now have automatically saved masks, bounding boxes, and visual overlays—exactly what you need for reports, datasets, or feeding downstream pipelines.
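If you prefer to review the results programmatically instead of browsing folders, here is a small sketch that loops over the saved overlays with OpenCV. It assumes you kept --name output as in the commands above and that your inputs were JPEGs; adjust the path and extension to match your run.

from pathlib import Path
import cv2

# Folder where segment/predict.py wrote the annotated images for this run.
output_dir = Path("runs/predict-seg/output")

# Display each saved overlay one by one; press any key to advance.
for image_path in sorted(output_dir.glob("*.jpg")):
    overlay = cv2.imread(str(image_path))
    if overlay is None:
        continue
    cv2.imshow("YOLOv5x-seg result", overlay)
    cv2.waitKey(0)

cv2.destroyAllWindows()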
For a powerful hybrid pipeline, check how I combine YOLOv8 detection with SAM to generate high-quality masks in this Segment Anything + YOLOv8 tutorial.
FAQ :
What does YOLOv5 instance segmentation actually do?
It detects each object and generates a separate pixel-level mask, letting you isolate, count, or measure individual instances in an image.
Is yolov5x-seg too heavy for everyday projects?
Yolov5x-seg is larger but still practical on a modern GPU, making it a strong choice when accuracy and clean masks matter more than raw FPS.
Can I run this tutorial on CPU only?
Yes, add --device cpu to the segment/predict.py command, but expect slower inference; start with small images and batches to keep the experience responsive.
Where are the segmentation results saved?
Results are written inside the runs/predict-seg directory under the name you pass with the --name flag.
How do I change the input image folder?
Update the --source argument to point at any folder path containing supported image files.
Can I use smaller YOLOv5-seg models for faster demos?
Yes, swap yolov5x-seg.pt with yolov5n-seg.pt or yolov5s-seg.pt to gain speed at the cost of some mask accuracy.
Is this setup compatible with custom dataset training?
Yes, the same environment works for segment/train.py once you prepare labeled masks in YOLOv5 format; see the short training sketch after this FAQ.
What if my outputs show boxes but no masks?
Confirm you are using a *-seg checkpoint and that the script is segment/predict.py, not the standard detect.py.
Can I visualize results directly instead of only saving files?
Yes, add the --view-img flag to display results as they are processed, or open the saved outputs with OpenCV, Matplotlib, or your preferred viewer.
Is this workflow production-ready?
It is a solid starting point; for production, you can containerize it, fix versions, and optimize models for your hardware.
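As mentioned in the custom-training question above, the same environment carries over to segment/train.py. The sketch below is only an illustration: it launches a short run from inside the yolov5 folder on the coco128-seg.yaml sample config that ships with the repo, and the model size and epoch count are placeholders you should tune for your own dataset.

import subprocess

# Launch a short segmentation training run; run this from inside the yolov5 folder.
subprocess.run(
    [
        "python", "segment/train.py",
        "--data", "coco128-seg.yaml",    # sample segmentation dataset config bundled with YOLOv5
        "--weights", "yolov5s-seg.pt",   # start from a pretrained segmentation checkpoint
        "--img", "640",                  # training image size
        "--epochs", "10",                # placeholder epoch count for a quick trial
    ],
    check=True,
)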
Conclusion
YOLOv5 instance segmentation looks complex from the outside, but with the right structure it becomes a short, reliable checklist.
You set up a focused environment, clone the official repo, lock in a stable PyTorch + CUDA stack, and install all the supporting tools that make experiments predictable.
From there, a single segment/predict.py command turns static images into rich, pixel-level masks using yolov5x-seg.pt, giving you immediate visual feedback and exportable artifacts.
This mirrors modern research and industry practice, where YOLOv5-seg variants are routinely used for fast and accurate segmentation in domains like manufacturing, medical imaging, agriculture, and infrastructure monitoring.
Because your pipeline is clean and modular, it’s easy to extend: swap weights, add pre-processing, feed masks into tracking, or step up to custom training when you’re ready.
For you as a reader or viewer, this tutorial doesn’t just show “some commands”: it gives you a dependable entry point into real-world instance segmentation with minimal friction.
Connect
☕ Buy me a coffee — https://ko-fi.com/eranfeit
🖥️ Email : feitgemel@gmail.com
🤝 Fiverr : https://www.fiverr.com/s/mB3Pbb
Enjoy,
Eran
