A group of researchers from five universities developed a way to catalog and track animals using camera traps and a deep learning algorithm, with participants from Auburn University, Harvard, Oxford, the University of Minnesota, and the University of Wyoming. The paper describing the accomplishment is titled “Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning” and was published in the Proceedings of the National Academy of Sciences earlier this week. Digging into that research, it turns out that the new algorithm can identify the wildlife featured in the study with an accuracy of around 94.9 to 99.1-percent. It could also count the animals in a given snapshot with an accuracy between 63 and 84.7-percent. While those tasks might seem simple, the researchers say the uses for the technology could be groundbreaking. Not least of all, that’s because the algorithm’s accuracy can match or exceed that of human volunteers, who came in at 96.6-percent for identification.
The implications of the research are that camera traps may be able to replace human observers for some of the more labor-intensive tasks associated with animal studies. That includes counting animals to estimate population sizes for conservation, identifying the member species of an ecosystem, and other routine observational work in field biology. Many scientists already use camera traps in their research, but the traps often capture so many images that it takes months to sort through them all. On that front, the new algorithm could be tied into other software to help automate image sorting. The same technology could also underpin high-tech ranching, and there are undoubtedly plenty of other commercial applications too. Best of all, in most cases the use of cameras and A.I. would be largely unobtrusive, requiring minimal human interaction with the animals and their environment. Furthermore, the research may prove useful as a proof of concept for other areas of study that currently depend on a high level of human observation.
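To make the image-sorting idea concrete, here is a rough sketch in Python of how a trained classifier could be bolted onto a sorting workflow. This is an assumption about how such a pipeline might be wired together rather than anything published with the paper; the function names, folder names, and the 0.9 confidence threshold are all hypothetical placeholders.

```python
# Illustrative sketch only: the paper does not ship a sorting tool, and every
# name below (predict_species, the folders, the 0.9 threshold) is a placeholder
# for whatever trained model and conventions a research team actually uses.
from pathlib import Path
import shutil


def predict_species(image_path: Path) -> tuple[str, float]:
    """Stand-in for the trained deep-learning classifier.

    A real implementation would load the network once and run inference here,
    returning a predicted species label and a confidence score in [0, 1].
    Until a model is wired in, every image falls back to human review.
    """
    return "unknown", 0.0


def sort_camera_trap_images(inbox: Path, outbox: Path, min_confidence: float = 0.9) -> None:
    """File each photo into a per-species folder, or into 'needs_review'
    when the classifier is not confident enough for automatic sorting."""
    for image_path in sorted(inbox.glob("*.jpg")):
        species, confidence = predict_species(image_path)
        folder = species if confidence >= min_confidence else "needs_review"
        destination = outbox / folder
        destination.mkdir(parents=True, exist_ok=True)
        shutil.move(str(image_path), str(destination / image_path.name))


if __name__ == "__main__":
    sort_camera_trap_images(Path("camera_trap_inbox"), Path("sorted_images"))
```

The thresholding step reflects a natural division of labor: confident predictions would be filed automatically, while uncertain photos are set aside for a person to check, which is where most of the promised time savings would come from.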
With that said, counting is the area where the algorithm came up short: human volunteers were accurate in their counts approximately 90-percent of the time. The study drew on the efforts of more than 50,000 people and 225 camera traps, with the algorithm itself learning from more than 3.2 million photos. Those photos ranged from close-up shots to images obscured by grass, and they covered a variety of environmental conditions, including both daytime and nighttime scenes. However, compared to something like the research behind Alphabet’s cancer initiatives, the number of photos used here is still relatively small. With more research and further A.I. training, the computer model could get much better over time.