AI and the evolution of surveillance systems

In this Help Net Security interview, Gerwin van der Lugt, CTO at Oddity, discusses the future of surveillance and AI’s influence. He also delves into how organizations can prevent their systems from perpetuating biases or violating individual rights.


What precautions are in place to ensure surveillance in sensitive areas, such as detention centers and prisons, remains ethical and respectful of privacy rights?

Oddity is a relatively young company, so we have been able to build privacy-preserving and ethical software practices in from the beginning. We follow "Privacy by Design" principles. For example, our software does not store video data at all in its default configuration, since existing video management systems already provide that functionality.

Furthermore, we do not use customer data for training purposes. In most cases, our installations are not connected to the internet, and we need physical access for maintenance and troubleshooting. We believe these inconveniences are worth it, especially in sensitive areas such as detention centers, where the people under surveillance have little control over their privacy. In the end, our software is meant to help and protect people, and our goal is to do so with as little impact on privacy as possible.

What challenges do system integrators and enterprises face when ensuring the high quality of their surveillance solutions?

Successful camera surveillance deployments require a delicate interplay between the security cameras, sensors and other hardware, the video management system, and the network itself. The biggest challenge is building out a network infrastructure that can support the ever-increasing bandwidth requirements of modern cameras.

Surveillance installations tend to grow over time, and as the number of cameras increases, the network's bandwidth limit is eventually reached. Replacing a faulty camera is an easy maintenance job, since it affects only a single camera. Upgrading the network infrastructure to double or triple the available bandwidth is much harder, and requires checking and replacing network hardware throughout the installation.

We often see integrators trying to work around this by decreasing the frame rate or bitrate settings of cameras to reduce the bandwidth they need. Though this seems workable at first, it can cause trouble for AI-powered surveillance. Machine learning models are sensitive to minor visual differences that the human eye can hardly detect; if an algorithm was trained on high-quality input, it may struggle with low-quality video streams.
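One way to check how much compression a given model tolerates is to re-encode a reference clip at progressively lower bitrates and compare the detections against the original. The sketch below does this with ffmpeg; `detect_incidents` is a hypothetical placeholder for whatever detector is actually deployed:

```python
# Probe a detector's robustness to compression: re-encode a reference
# clip at decreasing bitrates and compare detections to the original.
import subprocess

def reencode(src: str, dst: str, bitrate_kbps: int) -> None:
    """Re-encode a clip at a target video bitrate using ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-b:v", f"{bitrate_kbps}k", dst],
        check=True,
    )

def detect_incidents(path: str) -> int:
    """Hypothetical placeholder: run the deployed detector and
    return the number of incidents found in the clip."""
    raise NotImplementedError

baseline = detect_incidents("reference.mp4")
for kbps in (8000, 4000, 2000, 1000):
    reencode("reference.mp4", f"clip_{kbps}.mp4", kbps)
    found = detect_incidents(f"clip_{kbps}.mp4")
    print(f"{kbps} kbps: {found}/{baseline} incidents still detected")
```

If detections drop off sharply below a certain bitrate, that bitrate is the practical floor for the camera settings, regardless of what the network would prefer.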

As such, when building a surveillance network infrastructure from scratch, it is a good idea to plan ahead and build in plenty of bandwidth headroom. This saves costs in the long run and ensures that the installation is ready for the AI-powered future!
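To make the headroom recommendation concrete, a back-of-the-envelope capacity estimate takes only a few lines. The figures below (per-camera bitrate, growth factor, protocol overhead) are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope bandwidth planning for a surveillance network.
# All figures are illustrative assumptions, not vendor specifications.

def required_uplink_mbps(cameras: int, bitrate_mbps: float,
                         growth_factor: float = 3.0,
                         overhead: float = 1.2) -> float:
    """Estimate the uplink capacity to provision for.

    growth_factor -- headroom for cameras added over the years
    overhead      -- margin for protocol and retransmission overhead
    """
    return cameras * bitrate_mbps * growth_factor * overhead

# Example: 64 cameras streaming H.264 at ~8 Mbps each.
capacity = required_uplink_mbps(cameras=64, bitrate_mbps=8.0)
print(f"Provision at least {capacity:.0f} Mbps of uplink capacity")
# -> ~1843 Mbps: plan for 10 GbE backbone links rather than 1 GbE
```

Overprovisioning the backbone at build time is almost always cheaper than re-cabling an installation once the limit is hit.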

What advice would you give to a large organization that wants to modernize an outdated surveillance system?

On the hardware side, in line with my recommendation above, reserving enough bandwidth capacity is crucial.

More importantly, the future of surveillance is AI, and organizations should design their surveillance systems with AI in mind. Where a typical camera surveillance center might still have people watching walls of matrix screens with video feeds, operators may soon be warned proactively when an incident occurs. With the same number of staff, they will be much more effective.

The first step is to determine the types of incidents you are most interested in; AI solutions exist for many incident types. It is important to consider AI from the get-go and to involve AI vendors as soon as possible. As of 2023, AI is not yet ready to completely replace humans, so organizations would be wise to set up a hybrid deployment with humans in the loop, monitoring the system and filling the gaps.

The role of AI in surveillance is a hot topic for debate. How can an organization ensure its systems do not perpetuate biases or infringe on individual rights?

AI models are shaped by the datasets used to train them, so it is imperative that AI vendors carefully tune and balance those datasets to prevent biases. Balancing a dataset is a manual process: you must make sure the humans visible in it are a good representation of reality, and that it is not skewed towards certain human traits. In our case, we use diverse groups of actors from all over the world to act out violence for our training datasets, to ensure they are balanced. Furthermore, testing regularly for such biases can go a long way.
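One concrete form that regular bias testing can take is measuring detection recall per demographic or appearance group on a labelled evaluation set and flagging disparities. A minimal sketch, assuming such group annotations exist (the data layout here is hypothetical):

```python
# Minimal fairness check: compare detection recall across groups on a
# labelled evaluation set and warn when the gap exceeds a tolerance.
from collections import defaultdict

def recall_by_group(samples, tolerance: float = 0.05) -> dict:
    """samples: iterable of (group, ground_truth, predicted) tuples,
    where ground_truth/predicted are booleans for 'incident present'."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in samples:
        if truth:  # only positive samples count towards recall
            totals[group] += 1
            hits[group] += int(pred)
    recalls = {g: hits[g] / totals[g] for g in totals}
    gap = max(recalls.values()) - min(recalls.values())
    if gap > tolerance:
        print(f"WARNING: recall gap of {gap:.1%} across groups: {recalls}")
    return recalls
```

Running a check like this on every model release turns "testing regularly for biases" from a principle into a measurable gate.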

A carefully designed system can protect and help people without significantly impacting their privacy. This requires considering privacy at every stage, from designing to implementing AI systems. I believe the future of AI-powered surveillance will see reduced privacy infringement. Currently, large surveillance installations still require humans to watch camera streams all the time. In a trigger-based workflow, where humans act only after an AI has alerted them, far less security camera footage is seen by humans, and the risk of privacy infringement decreases accordingly.
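The privacy benefit of that trigger-based workflow can be made concrete: operators only ever see short clips attached to an alert, never continuous streams. A minimal sketch of such a flow, with an illustrative confidence threshold and placeholder review logic:

```python
# Trigger-based review: humans see only footage the AI has flagged,
# instead of watching live streams continuously.
import queue

alerts: "queue.Queue[dict]" = queue.Queue()

def on_ai_detection(camera_id: str, clip_path: str, confidence: float) -> None:
    """Called by the detector; enqueues a short clip for human review."""
    if confidence >= 0.8:  # illustrative threshold
        alerts.put({"camera": camera_id, "clip": clip_path,
                    "confidence": confidence})

def review_loop() -> None:
    """Operator workflow: review only flagged clips, then discard them."""
    while True:
        alert = alerts.get()  # blocks until an alert arrives
        print(f"Review camera {alert['camera']} "
              f"(confidence {alert['confidence']:.0%})")
        # ... operator confirms or dismisses; the clip is deleted afterwards
```

Footage that never triggers an alert never reaches human eyes, which is exactly where the privacy gain comes from.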

How do you envision the future of AI-enhanced video surveillance in the next 5-10 years?

I am convinced that the classic camera surveillance center as it exists now, with large staffs and video walls, will be phased out. Instead, AI systems will proactively alert relevant security personnel when incidents occur. The detection rate of incidents will rise from today's 5% to 25% to 80% or even higher, and first responders and security staff will be empowered to help more and spend less of their valuable time looking at video footage.
