Google and ZSL set up AI to detect poachers



Google is working with researchers from the Zoological Society of London to help detect poachers and recognise animals through artificial intelligence.

Usually, the millions of images captured by heat- and motion-triggered camera traps would have to be processed manually, with a person sifting through the files and recording the animals observed.

But that role is being handed over to Google's algorithm, which has been specially trained for the task.

Put simply, machine learning is a practical application of artificial intelligence (AI).

Google's algorithm has been taught to recognise one animal from another based on previous examples. Around a million and a half images were used to develop and train this particular model.

Once this dataset has been processed, the algorithm can encounter new images and recognise the animals featured.
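That train-then-classify idea can be sketched in miniature. The toy below is not ZSL and Google's actual model (which is a deep neural network trained on 1.5 million photographs); it is a hypothetical nearest-centroid classifier on made-up two-number "image features", just to show how labelled examples let a model label images it has never seen.

```python
# Toy sketch of the train-then-classify idea (NOT the actual Google/ZSL model):
# each "image" is reduced to a feature vector, the model averages the features
# per species from labelled examples, then labels a new image by the closest
# match. All names and data here are hypothetical.

def train(examples):
    """examples: list of (feature_vector, species) pairs -> per-species centroid."""
    sums, counts = {}, {}
    for features, species in examples:
        counts[species] = counts.get(species, 0) + 1
        acc = sums.setdefault(species, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {s: [v / counts[s] for v in acc] for s, acc in sums.items()}

def classify(model, features):
    """Label a new image by its nearest species centroid (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda species: dist(model[species], features))

# Hypothetical training data: [body_size, heat_signature] per camera-trap image.
model = train([
    ([0.9, 0.8], "elephant"),
    ([0.8, 0.9], "elephant"),
    ([0.2, 0.3], "meerkat"),
    ([0.3, 0.2], "meerkat"),
])
print(classify(model, [0.85, 0.85]))  # a new, unseen image -> "elephant"
```

Real systems learn far richer features from raw pixels, but the workflow is the same: train once on labelled examples, then classify new images automatically.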


"Machine learning has the potential to really speed up our analysis of these images to help species identification," says Sophie Maxwell, Conservation Technology Lead at ZSL.

"It also helps us to detect poachers in the field. We can download the algorithms to sit on the cameras themselves, so that they can detect humans in the images in real time, and raise alerts of those in protected areas so that we can respond to these threats."


But there is a catch to all this. Machine learning faces a challenge that humans do not: subtle variations can trick even the most sophisticated algorithms into mistaking one image for another.

These are known as "adversarial examples." In December a team from MIT fooled Google's algorithm into thinking that a photo of skiers was a dog. Such a mistake wouldn't have been made by a human under the same conditions.
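The mechanism can be shown on a deliberately simple model. The code below is not MIT's actual attack; it is a hypothetical two-"pixel" linear classifier, where nudging each pixel a small step in the direction that favours the wrong class flips the prediction, even though the image barely changes.

```python
# Minimal illustration of an adversarial example on a toy linear classifier
# (NOT MIT's actual attack). Hypothetical two-pixel "images" and weights.

WEIGHTS = {"dog": [1.0, -1.0], "skier": [-1.0, 1.0]}

def predict(pixels):
    """Score each label as a weighted sum of pixels; return the highest."""
    scores = {label: sum(w * p for w, p in zip(ws, pixels))
              for label, ws in WEIGHTS.items()}
    return max(scores, key=scores.get)

image = [0.45, 0.55]
print(predict(image))        # "skier" - classified correctly

# Nudge each pixel a small step (0.1) in the direction of the "dog" weights.
epsilon = 0.1
adversarial = [p + epsilon * (1 if w > 0 else -1)
               for p, w in zip(image, WEIGHTS["dog"])]
print(predict(adversarial))  # "dog" - tiny perturbation, wrong label
```

Deep networks are far more complex, but the principle is the same: small, carefully chosen pixel changes can push an input across the model's decision boundary while looking almost identical to a human.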

But the accuracy level looks set to improve over time, according to Matt McNeil, Head of Google Cloud Customer Engineering.


"As you start creating more extensive models which are trained on much larger datasets they start becoming much more resilient to changes in pixels. Being able to be more accurate really."

"I think there is an aspect which is simply related to the quantity and the depth of training."

And both the quantity and depth of such training will need to be much greater if the public are to trust AI in other areas such as driverless cars: it's no good if your car misinterprets a stop sign as a lollipop.

A significant body of work is being done to guard against "adversarial examples".

Teaming up with conservationists and teaching algorithms to identify species is just one of the ways that machine learning is being toughened up to take on new challenges in the future.