
Car Detection in Videos with OpenCV and Python

OpenCV Background Subtraction for Car Detection in Python

Last Updated on 01/11/2025 by Eran Feit

Why Background Subtraction Works for Car Detection

Car detection in video starts by separating moving objects from a static background. Background subtraction is a simple yet powerful technique for isolating the foreground and extracting moving vehicles. It’s ideal for traffic scenes where the background changes continuously and no “clean” background image is available. By combining background subtraction with morphological operations such as erosion and dilation, we can remove noise and strengthen object regions for robust detection.

This guide uses the createBackgroundSubtractorMOG2 function in OpenCV. You’ll learn how to tune its parameters and build a complete pipeline that detects cars in any video.

Understanding the Parameters of createBackgroundSubtractorMOG2

The cv2.createBackgroundSubtractorMOG2 function builds a statistical model of the background and highlights moving objects, making it effective at isolating vehicles. It accepts three key parameters:

  • history – Number of frames used to build the background model; lower values adapt quickly, higher values provide greater stability. The default history length is 500.
  • varThreshold – Squared Mahalanobis distance threshold between a pixel and the model, used to decide whether the pixel belongs to the background. Increasing this value reduces sensitivity to small variations (default is 16).
  • detectShadows – Boolean flag indicating whether the algorithm should detect and mark shadows in gray. Enabled by default, but it can slow down processing; set it to False if shadow detection isn’t needed.

Adjust these parameters to balance processing speed, noise reduction and detection accuracy.

Morphological Operations Cheat Sheet

Operation | Description                               | Purpose
Erosion   | Removes pixels on object boundaries.      | Eliminates small white noise and separates connected objects.
Dilation  | Adds pixels to the boundaries of objects. | Expands the foreground and closes small holes.
Opening   | Erosion followed by dilation.             | Removes small objects while preserving overall shape.
Closing   | Dilation followed by erosion.             | Fills small holes in objects to produce smoother masks.

Introduction to detecting cars in videos

In this tutorial, we will dive into car detection in videos using OpenCV and Python.
The goal of this project is to build a simple but effective computer vision pipeline that detects moving cars in a video, draws bounding boxes around them, and displays the results side by side for better visualization.

This tutorial shows you, step by step, how to transform raw video into structured object detection output using OpenCV’s background subtraction and contour detection methods.

This guide focuses specifically on Python-based car detection techniques, ensuring a thorough understanding of the necessary tools and methods.

By the end of this post, you will have a working script that detects vehicles in real-world videos, and you’ll understand the core building blocks of video object detection: reading frames, background subtraction, morphological transformations, contour analysis, and object annotation.

We will divide the code into three parts for better understanding:

  • Setting up the environment and reading the video.
  • Applying background subtraction and morphological transformations.
  • Detecting cars, annotating frames, and displaying results.

👉 If you’re interested in more advanced classification projects, check out my tutorial on Alien vs Predator image classification with ResNet50.


You can find more similar tutorials in my blog posts page here : https://eranfeit.net/blog/

Check out our video here: https://youtu.be/YSLVAxgclCo&list=UULFTiWJJhaH6BviSWKLJUM9sg


Customizable Car Detection Script

# Import libraries
import cv2
import numpy as np

# Use webcam by setting 0, or replace with video path
cap = cv2.VideoCapture(0)

# Build background subtractor with custom parameters
history = 100      # number of frames for background model
varThreshold = 25  # sensitivity to pixel changes
detectShadows = False
bg_subtractor = cv2.createBackgroundSubtractorMOG2(
    history=history, varThreshold=varThreshold, detectShadows=detectShadows)

# Structuring elements for morphological operations
kernel_erode = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
kernel_dilate = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

# Minimum contour area to filter out small blobs (noise)
min_area = 15000

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Apply background subtractor and threshold
    mask = bg_subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 20, 255, cv2.THRESH_BINARY)

    # Clean up using erosion and dilation
    mask = cv2.erode(mask, kernel_erode, iterations=1)
    mask = cv2.dilate(mask, kernel_dilate, iterations=6)

    # Find contours
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    annotated = frame.copy()

    for cnt in contours:
        if cv2.contourArea(cnt) > min_area:
            x, y, w, h = cv2.boundingRect(cnt)
            cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 0, 255), 2)
            cv2.putText(annotated, "Car detected", (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

    # Display original, mask, and annotated frames side by side
    combined = np.hstack((frame,
                          cv2.bitwise_and(frame, frame, mask=mask),
                          annotated))
    cv2.imshow("Original | Foreground | Detection",
               cv2.resize(combined, None, fx=0.4, fy=0.4))

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

How to Measure Detection Quality

Although this pipeline doesn’t use deep learning, it’s still important to evaluate how well it works. Consider logging the number of cars detected per frame and computing metrics like detection accuracy (percentage of true cars detected) and false positive rate. You can also track the average processing time per frame to estimate frames per second (FPS). Recording these metrics helps you tune parameters and compare different configurations.
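One minimal way to do this logging is a small stats helper that you update once per frame with the list of detected boxes. The class name, fields, and the sample box values below are illustrative assumptions, not part of the original script:

```python
import time

class DetectionStats:
    """Tracks per-frame detections and throughput for the detection loop."""

    def __init__(self):
        self.frames = 0
        self.detections = 0
        self.start = time.perf_counter()

    def update(self, boxes):
        # Call once per processed frame with that frame's bounding boxes.
        self.frames += 1
        self.detections += len(boxes)

    def summary(self):
        elapsed = time.perf_counter() - self.start
        fps = self.frames / elapsed if elapsed > 0 else 0.0
        per_frame = self.detections / self.frames if self.frames else 0.0
        return {"fps": round(fps, 1), "cars_per_frame": round(per_frame, 2)}

# Simulated output of three frames: 1 box, 0 boxes, 2 boxes.
stats = DetectionStats()
for boxes in ([(0, 0, 50, 40)], [], [(10, 10, 60, 45), (80, 20, 55, 38)]):
    stats.update(boxes)
print(stats.summary())
```

In the main loop you would call stats.update(...) right after the contour-filtering step, and print stats.summary() when the video ends, then compare summaries across parameter configurations.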

Where to Find Sample Videos

To experiment with your pipeline, download traffic videos from public datasets such as UA-DETRAC, which contains varied traffic scenes. Alternatively, use dash‑cam footage or publicly available clips of highways to see how well your detection performs. Swapping in different videos can reveal how sensitive your settings are to lighting, noise and traffic density.



FAQ

What is background subtraction in OpenCV?

Background subtraction isolates moving objects from a static background, making it ideal for car detection.

Why use erosion and dilation in car detection?

Erosion removes small noise, while dilation grows the remaining object regions; together they clean up the mask for better detection.

Which OpenCV function creates the background subtractor?

The function cv2.createBackgroundSubtractorMOG2() builds a model to subtract the background and highlight moving cars.

How do I tune the detection sensitivity?

Adjust the varThreshold parameter of MOG2 for pixel variability and change the minimum contour area to filter out small blobs.

Can I run this script on a webcam feed?

Yes. Pass 0 to cv2.VideoCapture() to use your webcam and process live traffic; pass a file path instead to process recorded footage.

How do I stop the detection program?

Press the q key while the OpenCV window is active to break the loop and release the video capture.

Does this method detect multiple cars at once?

Yes. The contour loop processes every foreground blob and draws a bounding box around each car.

What if my video is noisy?

Increase the number of erosion iterations, choose a larger structuring element, or adjust the threshold values to reduce noise.

Can this script be used for real-time traffic monitoring?

With appropriate hardware and optimized parameters, the background subtraction pipeline can run on live camera feeds for traffic monitoring.

How is this approach different from deep-learning detectors?

Background subtraction is lightweight and relies on frame differencing, whereas deep-learning detectors like YOLO require neural networks and training but handle complex scenes better.


Conclusion

We have successfully built a Python project that detects cars in videos using OpenCV.
The pipeline combined background subtraction, morphological transformations, contour filtering, and bounding box annotation to identify cars in motion.

This project is a strong starting point for real-world applications such as traffic monitoring, parking lot management, and smart city solutions.
You can expand it further by integrating object tracking, deep learning-based detection models, or even live video feeds from surveillance cameras.

You can find the full code here : https://ko-fi.com/s/2f2f851f93


Connect :

☕ Buy me a coffee — https://ko-fi.com/eranfeit

🖥️ Email : feitgemel@gmail.com

🌐 https://eranfeit.net

🤝 Fiverr : https://www.fiverr.com/s/mB3Pbb

Enjoy,

Eran

Eran Feit