Microsoft recently announced the phasing out of public access to AI-powered facial analysis features in several Azure services.
The decision is part of a broader review of Microsoft’s AI ethics policies. The company’s updated responsible AI standards, first announced in 2019, emphasize accountability in determining who uses its services, as well as increased human oversight of where those tools are used.
New customers will now need to request access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer. Meanwhile, existing customers have one year to apply for and receive approval for continued access to facial recognition services, based on the use cases they provide.
Some face detection features such as blur, exposure, glasses, head pose, landmarks, noise, occlusion and face bounding box detection will remain generally available and do not require an application. However, the company will remove Azure Face’s ability to identify attributes such as gender, age, smile, facial hair, hair, and makeup.
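The split between retired and still-available attributes matters in practice for callers of the Face API `/detect` operation, which accepts a `returnFaceAttributes` query parameter. The sketch below, a hypothetical illustration rather than Microsoft's own guidance, shows how a client might filter a requested attribute list down to those that remain generally available before building the request parameters (the endpoint placeholder and the `build_detect_params` helper are assumptions for this example):

```python
# Hypothetical sketch of preparing query parameters for the Azure Face
# /detect REST operation, keeping only attributes that remain generally
# available per Microsoft's announcement. Endpoint is a placeholder.
DETECT_URL = "https://<your-resource>.cognitiveservices.azure.com/face/v1.0/detect"

# Attributes being retired from general-purpose access:
RETIRED = {"gender", "age", "smile", "facialHair", "hair", "makeup"}

def build_detect_params(requested):
    """Drop retired attributes and return /detect query parameters."""
    allowed = [a for a in requested if a not in RETIRED]
    return {
        "returnFaceAttributes": ",".join(allowed),
        "returnFaceLandmarks": "true",  # landmarks stay generally available
    }

params = build_detect_params(["age", "blur", "headPose", "smile"])
print(params["returnFaceAttributes"])  # → blur,headPose
```

A request built this way would still return face bounding boxes and landmarks, which the announcement leaves generally available without an application.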
Sarah Bird, Lead Group Product Manager for Azure AI, explains in an Azure blog post that some features will remain available in another service:
Although API access to these attributes is no longer available to general-purpose customers, Microsoft recognizes that these features can be useful when used for a variety of controlled accessibility scenarios. Microsoft remains committed to supporting technology for people with disabilities and will continue to use these features in support of this goal by integrating them into applications such as Seeing AI.
Microsoft competitor AWS also offers a set of AI-powered facial analysis features through its Rekognition service, which, for example, was used extensively by the US Internal Revenue Service (IRS) through the ID.me system before being discontinued over accuracy and privacy concerns about the technology. It’s unclear whether AWS will follow Microsoft in limiting access to facial recognition operations.
Microsoft will also place similar restrictions on its Custom Neural Voice feature, which lets customers create AI voices from recordings of real people, sometimes referred to as deepfake audio. Natasha Crampton, Chief AI Officer at Microsoft, wrote in a blog post:
This technology has exciting potential in education, accessibility, and entertainment, yet it’s also easy to imagine how it could be used to impersonate speakers and mislead listeners inappropriately.
Finally, more details on the limitations are available on the documentation page.