Introduction
In this tutorial you will build a fast and lightweight car detection pipeline in Python using OpenCV background subtraction.
The approach uses the MOG2 model to separate moving vehicles from a static scene, making it ideal for traffic videos and fixed cameras.
You will generate a clean foreground mask, refine it with morphology, and detect vehicles by extracting contours and drawing bounding boxes.
This method avoids deep learning and runs in real time on ordinary hardware, which is perfect for demos, prototypes, and edge devices.
The code reads frames from a video, applies OpenCV background subtraction, thresholds the mask, erodes and dilates to reduce noise, and then finds external contours.
Large contours are filtered by area to ignore small artifacts, and each detected car is labeled on a copy of the original frame.
For convenient visualization, the script stacks the original frame, the colored foreground, and the annotated result into a single window.
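The stacking trick is just `np.hstack`, which concatenates along the width axis. A tiny sketch with hypothetical placeholder frames shows the shape requirement:

```python
import numpy as np

# Hypothetical tiny "frames" standing in for the original, foreground,
# and annotated views; all must share height and channel count.
frame = np.zeros((2, 3, 3), dtype=np.uint8)
foreground = np.full((2, 3, 3), 128, dtype=np.uint8)
annotated = np.full((2, 3, 3), 255, dtype=np.uint8)

# np.hstack concatenates along axis 1 (width).
stacked = np.hstack((frame, foreground, annotated))
print(stacked.shape)  # (2, 9, 3): same height, tripled width
```

Because the width triples, the script resizes the stacked image before display so it fits on screen.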
This post naturally targets the keywords opencv background subtraction, opencv car detection, vehicle detection opencv, and motion detection opencv while keeping explanations clear and practical.
Check out our video here: https://youtu.be/YSLVAxgclCo&list=UULFTiWJJhaH6BviSWKLJUM9sg
You can find the full code here: https://ko-fi.com/s/2f2f851f93
You can find more similar tutorials on my blog posts page here: https://eranfeit.net/blog/
Full Code
Full code and step-by-step explanation
This single block loads a video, builds a MOG2 background model, cleans the mask with morphology, finds contours, filters by area, and draws labeled boxes on detected cars.
It also stacks the original frame, foreground, and annotated view for easy comparison in one window.
```python
### Import OpenCV for computer vision operations.
import cv2

### Import NumPy for array operations and stacking frames side by side.
import numpy as np

### Create a VideoCapture object to read frames from a video file.
cap = cv2.VideoCapture("c:/temp/Cars - 1900.mp4")

### Instantiate the MOG2 background subtractor with a short history for quick adaptation.
### MOG2 models the background as a mixture of Gaussians and returns a foreground mask of moving pixels.
backgroundObject = cv2.createBackgroundSubtractorMOG2(history=2)

### Define a 3x3 kernel of ones (uint8) for morphological operations like erosion.
kernel = np.ones((3, 3), np.uint8)

### Use the default structuring element for dilation by passing None.
### This typically behaves like a 3x3 element in OpenCV's Python bindings.
kernel2 = None

### Process frames until the stream ends or the user quits.
while True:
    ### Read the next frame from the video source.
    ret, frame = cap.read()

    ### If no frame is returned, end the loop.
    if not ret:
        break

    ### Apply background subtraction to obtain a foreground mask where moving objects are white.
    fgmask = backgroundObject.apply(frame)

    ### Binarize the mask to keep confident foreground pixels and suppress noise.
    _, fgmask = cv2.threshold(fgmask, 20, 255, cv2.THRESH_BINARY)

    ### Erode once to remove small white speckles and detach thin noise.
    fgmask = cv2.erode(fgmask, kernel, iterations=1)

    ### Dilate multiple times to reconnect vehicle regions and fill small holes.
    fgmask = cv2.dilate(fgmask, kernel2, iterations=6)

    ### Find external contours on the cleaned binary mask.
    contours, _ = cv2.findContours(fgmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    ### Keep a copy of the original frame for drawing boxes and labels.
    frameCopy = frame.copy()

    ### Iterate through all contours to locate sufficiently large moving objects.
    for cnt in contours:
        ### Filter out tiny contours by area to avoid false positives.
        if cv2.contourArea(cnt) > 20000:
            ### Compute the axis-aligned bounding rectangle for the contour.
            x, y, width, height = cv2.boundingRect(cnt)

            ### Draw a red rectangle around the detected vehicle.
            cv2.rectangle(frameCopy, (x, y), (x + width, y + height), (0, 0, 255), 2)

            ### Put a small green label above the bounding box.
            cv2.putText(frameCopy, "Car detected", (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 255, 0), 1, cv2.LINE_AA)

    ### Create a color foreground view by masking the frame with the foreground mask.
    foreground = cv2.bitwise_and(frame, frame, mask=fgmask)

    ### Horizontally stack original, foreground, and annotated frames for a single display.
    stacked = np.hstack((frame, foreground, frameCopy))

    ### Resize for convenience and show the result in a single window.
    cv2.imshow("stacked", cv2.resize(stacked, None, fx=0.4, fy=0.4))

    ### Exit if the user presses the 'q' key.
    if cv2.waitKey(1) == ord('q'):
        break

### Release the video resource.
cap.release()

### Close all OpenCV windows.
cv2.destroyAllWindows()
```
You can find the full code here: https://ko-fi.com/s/2f2f851f93
MOG2 creates a foreground mask from motion, which we clean with erosion and dilation to form solid blobs.
Contours are extracted from the binary mask, filtered by area, and enclosed by rectangles with labels.
This approach is efficient for fixed cameras and traffic videos where cars are the primary moving objects.
For better robustness, tune history, thresholds, kernel sizes, and contour area as needed.
Connect:
☕ Buy me a coffee — https://ko-fi.com/eranfeit
🖥️ Email : feitgemel@gmail.com
🤝 Fiverr : https://www.fiverr.com/s/mB3Pbb
Enjoy,
Eran