Breaking Down the Flame Detection Code
Computer Vision Project Part 4
Now that the model is trained, the next step is putting it to work. We can take that trained YOLOv8 model and load it into a live camera feed to test it in real time.
For this demo, I’m using a laptop webcam as the video source. In a real-world setup, this could easily be swapped for a wide-angle camera connected to an embedded system. The goal here is to show how the model responds to visible flames in a live environment.
Below is the flame detection script that I'll break down section by section.
Environment Fix for PyTorch (Windows Only)
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
This prevents a crash on Windows machines caused by multiple OpenMP runtimes clashing, which can happen when using PyTorch with certain Python packages.
Importing Required Libraries
import cv2
import numpy as np
import threading
import platform
from ultralytics import YOLO
We’re using OpenCV for video input and drawing, NumPy for basic array manipulation, and Ultralytics’ YOLO library to load and run the flame detection model. We also import threading for non-blocking alerts, and platform to check the OS for beep support.
Load the Flame Detection Model
FIRE_MODEL_PATH = "your_path_here"
fire_model = YOLO(FIRE_MODEL_PATH)
This loads the YOLOv8 model that was fine-tuned to detect flames in the earlier parts of this series. Replace the placeholder path with wherever your trained weights file lives.
Optional Fire Warning Image
fire_warning_img = cv2.imread("fire_warning.png", cv2.IMREAD_UNCHANGED)
We load a transparent PNG warning image that will be overlaid when fire is detected. This adds a visual alert to the detection feed.
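One gotcha: cv2.imread returns None silently when the file is missing, and the overlay logic assumes a fourth (alpha) channel. A small validation sketch, checked before the main loop (the helper name here is mine, not from the original script):

```python
import numpy as np

def has_alpha(img):
    """True if the loaded image exists and carries an alpha channel (H, W, 4)."""
    return img is not None and getattr(img, "ndim", 0) == 3 and img.shape[2] == 4
```

If the check fails, you can skip the overlay step rather than crash mid-stream.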
Play an Alert Sound (Windows Only)
import winsound
winsound.Beep(2000, 500)  # 2000 Hz tone for 500 ms
If the system is running on Windows, a beep will sound when fire is detected for the first time. The sound only plays once per detection event to avoid repetition.
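Putting the beep behind a helper keeps the video loop from blocking while the tone plays. A minimal sketch of how a play_beep helper might look, assuming a terminal-bell fallback on non-Windows systems (the fallback is my addition, not part of the original script):

```python
import platform
import threading

def play_beep():
    """Play a short alert tone on a background thread so the loop never stalls."""
    def _beep():
        if platform.system() == "Windows":
            import winsound  # Windows-only standard library module
            winsound.Beep(2000, 500)  # 2000 Hz tone for 500 ms
        else:
            print("\a", end="", flush=True)  # terminal bell on macOS/Linux
    threading.Thread(target=_beep, daemon=True).start()
```

The daemon thread means an in-flight beep never prevents the program from exiting.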
Overlay Transparent Images on the Feed
def overlay_transparent(...)
This helper function allows you to draw transparent PNGs (like warning signs) on top of the video feed without distorting the background.
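The body of the helper isn't shown above, so here is one plausible implementation using NumPy alpha blending; the argument names are my own, and the function mutates the background frame in place:

```python
import numpy as np

def overlay_transparent(background, overlay, x, y):
    """Alpha-blend a BGRA overlay onto a BGR background at (x, y), in place."""
    # Clip the overlay so it never reaches past the frame edges.
    h = min(overlay.shape[0], background.shape[0] - y)
    w = min(overlay.shape[1], background.shape[1] - x)
    if h <= 0 or w <= 0:
        return background  # overlay falls completely outside the frame
    overlay = overlay[:h, :w]
    if overlay.shape[2] == 4:
        alpha = overlay[:, :, 3:4].astype(np.float32) / 255.0  # (h, w, 1)
        rgb = overlay[:, :, :3].astype(np.float32)
    else:
        alpha = np.ones((h, w, 1), dtype=np.float32)  # treat 3-channel as opaque
        rgb = overlay.astype(np.float32)
    roi = background[y:y + h, x:x + w].astype(np.float32)
    # Standard "over" compositing: out = alpha * fg + (1 - alpha) * bg.
    background[y:y + h, x:x + w] = (alpha * rgb + (1.0 - alpha) * roi).astype(np.uint8)
    return background
```

Blending per-pixel with the alpha channel is what keeps the background visible through the transparent regions of the PNG instead of pasting an opaque rectangle.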
Start the Camera
cap = cv2.VideoCapture(0)
We activate the webcam and begin reading frames in a loop. cap.read() gives us a new frame from the camera in each cycle.
Run Inference on Each Frame
results = fire_model.track(source=fire_frame, stream=True, conf=CONFIDENCE_THRESHOLD)
YOLOv8 runs detection with tracking on each frame of the video. We loop through the results and check whether any detected object is labeled "fire" with a confidence above the threshold.
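That per-frame check can be factored into a small pure helper. A sketch, where the function name and threshold value are illustrative rather than taken from the original script:

```python
CONFIDENCE_THRESHOLD = 0.5  # illustrative value

def contains_fire(detections, names, threshold=CONFIDENCE_THRESHOLD):
    """detections: (class_id, confidence) pairs pulled from one frame's boxes.
    names: class_id -> label mapping, as exposed by model.names in Ultralytics."""
    return any(
        names[int(cls_id)] == "fire" and conf >= threshold
        for cls_id, conf in detections
    )
```

Keeping the decision logic separate from the drawing code also makes it easy to unit-test without a camera or a model.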
🚨 Trigger Alerts and Display Warnings
if fire_detected:
play_beep()
overlay_transparent(...)
cv2.putText(...)
When "fire" is detected, the script plays the alert beep, overlays the warning image, and draws a text label on the frame. This combined visual and auditory feedback helps grab attention quickly in real-world usage.
Show the Output
cv2.imshow("Flame Detection", fire_frame)
This displays the live video feed with all detection results and overlays applied. You can exit the loop by pressing the q key.
Shutdown
cap.release()
cv2.destroyAllWindows()
When the loop exits, the webcam is released, and the OpenCV window is closed.
That covers the full flame detection script.
The complete code is on GitHub: https://github.com/dia-agrawal/fire-detection