Microsoft updates Cognitive Services vision and search in Azure
Microsoft has updated the artificial intelligence-driven vision and search services in its Azure platform to help data scientists and developers save time by escaping the cycle of coding and testing new AI algorithms before deployment.
"We have been conducting research in AI for more than two decades and infusing it into our products and services. Now we are bringing it to everyone through simple, yet powerful tools. One of those tools is Microsoft Cognitive Services, a collection of cloud-hosted APIs that let developers easily add AI capabilities for vision, speech, language, knowledge and search into applications, across devices and platforms such as iOS, Android and Windows," Joseph Sirosh, corporate vice president of Microsoft's artificial intelligence and research division, wrote in a blog post.
Microsoft also offers other Azure tools that help developers and data scientists code their own AI algorithms.
The Custom Vision Service is built on the idea of helping systems identify images more accurately, Sirosh said. "The service makes it possible for developers to easily train a classifier with their own data, export the models and embed these custom classifiers directly in their applications, and run it offline in real time on iOS, Android and many other edge devices," he said.
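Calling an exported-or-hosted classifier like this typically means posting raw image bytes to a prediction endpoint. The sketch below shows the general shape of such a call; the URL path, project ID and header names are illustrative assumptions, not details confirmed by Microsoft's announcement.

```python
import urllib.request

def build_prediction_request(endpoint, project_id, prediction_key, image_bytes):
    """Assemble an HTTP request that asks a Custom Vision project to
    classify an image.

    NOTE: the URL path and header names here are illustrative
    assumptions about a typical Cognitive Services REST call, not a
    documented contract.
    """
    url = f"{endpoint}/customvision/v1.0/Prediction/{project_id}/image"
    return urllib.request.Request(
        url,
        data=image_bytes,  # raw image bytes in the request body
        headers={
            "Prediction-Key": prediction_key,           # per-project API key
            "Content-Type": "application/octet-stream",  # binary payload
        },
        method="POST",
    )

# Building the request is separate from sending it, so the call shape
# can be inspected (or unit-tested) without touching the network:
req = build_prediction_request(
    "https://example.cognitiveservices.azure.com",  # placeholder endpoint
    "proj-123",                                      # placeholder project ID
    "my-prediction-key",                             # placeholder key
    b"\x89PNG...",                                   # placeholder image bytes
)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would return the service's JSON tag predictions, which a retailer could use to auto-classify catalogue images as in the scenarios Sirosh describes.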
"Custom Vision Service can be used for a multiplicity of scenarios: retailers can easily create models that can auto-classify images from their catalogues, social sites can more effectively filter and classify images of specific products and national parks can detect whether images from cameras include wild animals or not," Sirosh explained.
Microsoft’s Face API, which helps developers provide face and emotion recognition, has also been updated to recognise more faces in different scenarios. "It detects the location and attributes of human faces and emotions in an image, which can be used to personalise user experiences. With Face API, developers can help determine if two faces belong to the same person, identify previously tagged people, find similar-looking faces in a collection and find or group photos of the same person from a collection," Sirosh said.
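The "do two faces belong to the same person" check described above is usually a two-step flow: detect each face to obtain an ID, then submit the two IDs for verification. The sketch below shows only the second step; the path, header name and JSON field names are illustrative assumptions rather than details taken from the announcement.

```python
import json
import urllib.request

def build_verify_request(endpoint, subscription_key, face_id1, face_id2):
    """Assemble a request asking whether two previously detected faces
    belong to the same person.

    NOTE: the URL path, header name and body fields are illustrative
    assumptions about a typical Cognitive Services REST call.
    """
    body = json.dumps({"faceId1": face_id1, "faceId2": face_id2}).encode("utf-8")
    return urllib.request.Request(
        f"{endpoint}/face/v1.0/verify",
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,  # placeholder key header
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder face IDs, as would be returned by an earlier detection call:
req = build_verify_request(
    "https://example.cognitiveservices.azure.com",
    "my-subscription-key",
    "face-id-aaa",
    "face-id-bbb",
)
```

A real response would carry a match decision and a confidence score, which an app could use to group photos of the same person, as in the quote above.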
In addition, Bing entity search has been updated so that developers can identify the most relevant entity based on searched terms and provide primary details about those entities, according to Sirosh. Entities span multiple international markets and types, including famous people, places, movies, TV shows, video games and books.
"Many scenarios can be covered with Bing entity search: for instance, a messaging app could provide an entity snapshot of a restaurant, making it easier for a group to plan an evening. A social media app could augment users’ photos with information about the locations of each photo," Sirosh explained.
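The restaurant-snapshot scenario above boils down to a keyword lookup scoped to a market, since entities differ by region. A minimal sketch of building such a query follows; the path and parameter names (`q`, `mkt`) are illustrative assumptions, not documented values from the announcement.

```python
import urllib.parse
import urllib.request

def build_entity_search_request(endpoint, subscription_key, query, market="en-GB"):
    """Assemble a GET request that looks up an entity (e.g. a restaurant
    name) in a given market.

    NOTE: the URL path, query-parameter names and header are
    illustrative assumptions about a typical Cognitive Services call.
    """
    params = urllib.parse.urlencode({"q": query, "mkt": market})
    return urllib.request.Request(
        f"{endpoint}/bing/entities/search?{params}",
        headers={"Ocp-Apim-Subscription-Key": subscription_key},  # placeholder key header
    )

# A messaging app planning an evening out might look up a venue by name:
req = build_entity_search_request(
    "https://example.cognitiveservices.azure.com",  # placeholder endpoint
    "my-subscription-key",
    "The Ivy restaurant London",
)
```

The market parameter matters because, as the article notes, entities span multiple international markets: the same query can resolve to different people, places or works depending on the region.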