3 Tips for Better Traffic Monitoring with AI
AI video analysis is proving to be a powerful tool for industries that need large volumes of video data analysed for insights. Traffic monitoring is a perfect example of an industry use case in which video can provide multiple insights into potential optimisation, safety and regulation.
Our streets are full of cameras, providing rich sources of insightful data for traffic monitoring authorities and urban planners. The downside is that analysing all these feeds is labour-intensive, and therefore costly.
The value of visual analysis lies in the multiple use cases that can be monitored through one camera's field of view. Most cameras are currently single-focused: the field of view covers one area and the viewer monitors a single use case, such as the number of vehicles entering an intersection, much like the inductive loops and traffic tubes embedded in and stretched across roads. Computer vision-based AI, however, can be set up to analyse multiple use cases from one field of view.
How do you get the best results from computer vision AI analysis on cameras for traffic monitoring?
Here are three steps that will help you get the most out of street cameras using AI computer vision analysis:
Step 1: Start with the outcome in mind for better traffic monitoring
AI computer vision can extract a lot of data from one camera, but more data is not always helpful; in fact, it can be a distraction. It's critical to identify the intended outcome and which insights are necessary. For example, if an intersection has a high level of congestion, the outcome might be to determine where vehicles spend the most time idle (dwelling). From there, work back to find the influencing factors, such as large numbers of cars entering from one street, or frequent pedestrian crossings too close to the intersection.
Once the outcome is decided, the next step is to capture the video and apply the correct AI analysis tools.
Step 2: Capture the video correctly
AI is a smart tool, but it is limited by what you feed it: rubbish in, rubbish out. Installing the camera correctly and setting the best field of view (FOV) will ensure you get the best results. Here are a few tips for installing a camera for traffic analysis:
Get the height right - computer vision requires objects to be visible at all times within the field of view. If an object, or a portion of one, is hidden from view (called occlusion), it can lead to miscounting or misidentification, which in turn ruins the accuracy of the data. When installing the camera, it's therefore critical to review the video feed to establish what could block traffic from view.
As a rule of thumb, cameras work best installed 6-12 metres above the ground and close to the area of interest.
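As a sanity check on placement, the ground area a tilted camera can see follows from simple trigonometry. The sketch below assumes flat ground; the height, tilt and vertical-FOV values are illustrative examples, not recommendations for a specific camera:

```python
import math

def ground_coverage(height_m, tilt_deg, vfov_deg):
    """Estimate the near and far ground distances visible to a
    downward-tilted camera (flat-ground model, illustrative only)."""
    near_angle = math.radians(tilt_deg + vfov_deg / 2)  # steepest ray
    far_angle = math.radians(tilt_deg - vfov_deg / 2)   # shallowest ray
    near = height_m / math.tan(near_angle)
    # A shallowest ray at or above the horizon never hits the ground
    far = height_m / math.tan(far_angle) if far_angle > 0 else float("inf")
    return near, far

# Example: camera 8 m up, tilted 35 degrees down, 40-degree vertical FOV
near, far = ground_coverage(height_m=8, tilt_deg=35, vfov_deg=40)
print(f"Ground covered from {near:.1f} m to {far:.1f} m from the pole")
```

Running this with different heights quickly shows why very low mounts leave blind spots close to the pole while very high mounts push the usable range out.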
Adjust the field of view - every camera has an optimal field of view. As with the human eye, objects become harder to discern the further away and further off-centre they are. Generally, pedestrians can be identified up to approximately 25 m from the camera and vehicles up to 45 m. It is worth reviewing the camera type and its field of view before purchase to avoid buying an inadequate model.
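Whether an object at a given distance is discernible largely comes down to how many pixels it occupies. A rough pinhole-camera estimate can be sketched as below; the resolution and FOV figures are illustrative assumptions, not a specific camera:

```python
import math

def pixels_on_target(object_height_m, distance_m, image_height_px, vfov_deg):
    """Rough estimate of how many pixels tall an object appears.
    Pinhole model: focal length in pixels derived from the vertical FOV."""
    focal_px = (image_height_px / 2) / math.tan(math.radians(vfov_deg) / 2)
    return focal_px * object_height_m / distance_m

# A 1.7 m pedestrian at 25 m, seen by a 1080p camera with a 50-degree vertical FOV
px = pixels_on_target(1.7, 25, 1080, 50)
print(f"Pedestrian appears roughly {px:.0f} px tall")
```

If the result falls below what your detection model needs (often a few tens of pixels), the object is too far away for that camera and lens.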
Set the correct tools to get the best results - most AI models allow the user to draw graphical lines and boxes to capture specific data. The key analysis tools are:
Counting and direction lines - lines drawn strategically to identify when an object crosses them, and from which direction.
Speed detection - multiple lines drawn a known distance apart; when an object crosses them, its speed can be estimated mathematically.
Region of interest - a box stretched to cover the region of interest, used to analyse how long an object dwells there, or when and where an object enters and leaves.
Object class - dependent on the model's training; many traffic monitoring models can determine vehicle types (classes) such as car, truck, pickup and van.
Desire lines - by adding a pixel tracker, a line can be visualised showing the path an object took through the scene. This is valuable for determining the journey of a vehicle or pedestrian and visually identifying potential collision points.
Most AI providers, like Felicity, allow multiple tools to be used on one camera scene. This is handy: combining lines that detect total vehicle count and direction with the time spent idling in an intersection gives a good understanding of congestion and its peak times.
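As a minimal sketch of combining two tool outputs from one scene, counting-line events and region-of-interest dwell times can be rolled into a single congestion summary. The event shapes below are illustrative, not any specific provider's format:

```python
from collections import Counter

# Counting-line events: one record per vehicle crossing (illustrative shape)
line_events = [
    {"t": "08:01", "direction": "north"},
    {"t": "08:02", "direction": "north"},
    {"t": "08:02", "direction": "south"},
]
# Region-of-interest output: seconds each vehicle dwelt in the intersection
dwell_seconds = [12.0, 48.5, 95.0, 7.5]

# Combine both tools into one picture of the intersection
by_direction = Counter(e["direction"] for e in line_events)
avg_dwell = sum(dwell_seconds) / len(dwell_seconds)
print(dict(by_direction), f"avg dwell {avg_dwell:.1f}s")
```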
Step 3: Integrate with traffic management systems
The last stage is implementing the insights. Once the cameras are set up correctly and the AI algorithms are providing accurate, reliable data, it's time to put the insights into action.
Data is only as good as the practical use it provides the receiver. Feeding the data into actionable endpoints completes the loop and makes the data valuable. Usually, API endpoints can push data or trigger events, leading to the automation of tasks. A good example: when congestion at an intersection reaches a limit, a signal is sent to the traffic lights, or traffic is redirected to reduce the congestion.
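That loop might be sketched as below: push a congestion event to a management system's API when the average dwell time crosses a threshold. The endpoint URL, payload shape and threshold are assumptions for illustration, not any specific vendor's API:

```python
import json
import urllib.request

CONGESTION_THRESHOLD_S = 60  # assumed dwell time that counts as congested
# Hypothetical endpoint: replace with your traffic-management system's API
ENDPOINT = "https://traffic-system.example/api/v1/events"

def report_if_congested(intersection_id, avg_dwell_s):
    """POST a congestion event when the average dwell time crosses
    the threshold; return None (no action) when it does not."""
    if avg_dwell_s < CONGESTION_THRESHOLD_S:
        return None
    payload = json.dumps({
        "intersection": intersection_id,
        "avg_dwell_s": avg_dwell_s,
        "event": "congestion",
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    return urllib.request.urlopen(req, timeout=5)
```

In practice the receiving system decides what the signal does: retiming the lights, raising an operator alert, or triggering a diversion.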
Data becomes richer the longer it is gathered: more insights and trends become visible, and the effects grow clearer over time, leading to better predictive models and, hopefully, fewer incidents and better roads.
The last things worth considering are data security, latency and storage. For more information on choosing between Edge AI, Cloud AI or Hybrid, read this article.