A new feature in an upcoming version of Apple’s iOS will enable iPhones to look at an image and identify which parts of it contain cats or dogs.
The software, called VNAnimalDetector, draws a digital rectangle around any part of a photograph containing an animal and labels it as either a cat or a dog.
The software shows Apple’s continued effort to take complicated machine learning techniques and package them as part of the iPhone operating system.
It also underscores Apple’s heavy investment in computer vision, a subfield of computer science focused on enabling cameras and other sensors to “see” and understand the real world. Computer vision is also a core underlying technology for self-driving cars and augmented reality glasses.
The detector also suggests that Apple is very aware of how many pictures of their fuzzy friends iPhone users take. Apple’s Photos app has been able to identify cats and dogs from user photos since 2016.
The software is part of Apple’s Vision framework, which gives developers tools for image recognition; it isn’t intended for end users. Instead, the Vision framework is designed to be built into apps. Apple also built a detector for humans, which takes an image as input and draws rectangles around any people in the picture.
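For developers, that human detector is a request object handed to an image handler. A minimal sketch, assuming the shipping iOS 13 Vision API (`VNDetectHumanRectanglesRequest`) and an image URL; this requires an Apple platform to run:

```swift
import Vision

// Detect people in an image and print a bounding box for each one.
// Requires iOS 13+ / macOS 10.15+.
func detectHumans(in url: URL) throws {
    let request = VNDetectHumanRectanglesRequest()
    let handler = VNImageRequestHandler(url: url, options: [:])
    try handler.perform([request])

    // Each observation carries a bounding box in normalized (0–1) coordinates.
    let people = request.results as? [VNDetectedObjectObservation] ?? []
    for person in people {
        print("Person at \(person.boundingBox)")
    }
}
```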
It was previously possible to identify cats or people using machine learning; plenty of computer scientists and big companies have built pet detectors before. But VNAnimalDetector makes the capability easier to build into apps, in as little as four lines of code, according to a presentation at Apple’s annual developers conference earlier this month.
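Those four lines weren’t spelled out in the presentation, but with the public iOS 13 API (`VNRecognizeAnimalsRequest` is the request class behind the detector), the core really does come down to about four lines. A sketch, assuming an image URL and an Apple platform:

```swift
import Vision

// The core of the pet detector: roughly four lines of Vision code.
// Requires iOS 13+ / macOS 10.15+.
func detectPets(in url: URL) throws {
    let request = VNRecognizeAnimalsRequest()                    // 1. create the request
    let handler = VNImageRequestHandler(url: url, options: [:])  // 2. wrap the image
    try handler.perform([request])                               // 3. run the detector
    let animals = request.results as? [VNRecognizedObjectObservation] ?? []

    // 4. Each observation has a bounding box plus labels such as cat or dog.
    for animal in animals {
        let label = animal.labels.first?.identifier ?? "unknown"
        print("\(label) at \(animal.boundingBox)")
    }
}
```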
Other new additions to the Vision framework include the ability to tell whether two images are similar, improved facial recognition, and image classification that identifies the objects in a photo even if they’re not pets.
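The similarity check works by computing a compact “feature print” for each image and measuring the distance between the two. A sketch using the iOS 13 API (`VNGenerateImageFeaturePrintRequest`), assuming two image URLs; smaller distances mean more similar images:

```swift
import Vision

// Compute a feature print (a compact numeric summary) for one image.
func featurePrint(for url: URL) throws -> VNFeaturePrintObservation? {
    let request = VNGenerateImageFeaturePrintRequest()
    try VNImageRequestHandler(url: url, options: [:]).perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

// Compare two images; a distance of 0 means identical feature prints.
func similarity(_ a: URL, _ b: URL) throws -> Float? {
    guard let fpA = try featurePrint(for: a),
          let fpB = try featurePrint(for: b) else { return nil }
    var distance = Float(0)
    try fpA.computeDistance(&distance, to: fpB)
    return distance
}
```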
Additional technical information about the Vision API is available on Apple’s website. Additional cat photos can be found all over the internet.