Microsoft Removes Facial Recognition Tool That Claims to Detect Emotions

Microsoft’s AI-powered emotion recognition has been criticized as unscientific, and the company is now removing public access to many of its artificial intelligence services.



Microsoft Corp. is retiring many of its AI-driven facial recognition products, including one that uses videos and photographs to infer the emotions people are expressing. The company has released a 27-page “Responsible AI Standard” outlining its objectives for fair and reliable AI, and it plans to gradually remove public access to many of its facial recognition services.

When will these services be discontinued?

Many features will no longer be available to new users, and current users will have to stop using them by the end of the year. In addition to removing public access to its emotion recognition tool, Microsoft is retiring Azure Face’s ability to identify attributes such as gender, age, smile, facial hair, hair, and makeup; existing customers will have one year before losing access to these tools. Some services with less harmful potential, such as automatically blurring faces in images and videos, will remain open-access.

To start, Microsoft will limit access to some features of its facial recognition service (known as Azure Face) and remove others entirely. Users will have to apply to use Azure Face for facial identification, telling Microsoft exactly how and where they will deploy its systems.

Why is emotion recognition AI criticized?

Many experts have pointed out that emotion-recognizing AI is unreliable. Research has shown that the technology is far from accurate, frequently misidentifying women and people with darker skin tones.

When this AI was used to identify criminal suspects and in other surveillance scenarios, it exhibited significant flaws. Some of these services have also raised privacy and civil rights concerns.

“These efforts raised important questions about privacy, the lack of consensus on a definition of ’emotions,’ and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics,” Sarah Bird, principal group product manager at Microsoft’s Azure AI unit, said in a blog post.



Pratham is the founder of Infostation and is experienced in digital marketing and content writing. He wants his words to inspire his readers to make a change in this world.
