Microsoft Edge is now capable of generating automatic image descriptions for users browsing the web with screen readers. The accessibility feature uses Azure Cognitive Services to generate alt text for web images that do not include it. "When a screen reader finds an image without a label, that image can be automatically processed by machine learning algorithms to describe the image in words and capture any text it contains. The algorithms are not perfect, and the quality of the descriptions will vary, but for users of screen readers, having some description for an image is often better than no context at all," explained Travis Leithead, Senior Program Manager, Microsoft Edge. According to Microsoft's data, more than half of the images processed by screen readers are missing alt text, a major obstacle to making the Internet more accessible.
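To make the problem concrete, here is a minimal sketch of the detection step such a pipeline would perform before sending an image off to a captioning service. Edge's actual implementation is not public; this is only an illustration, using Python's standard-library HTML parser, of how images lacking alt text can be identified in a page.

```python
from html.parser import HTMLParser

class MissingAltFinder(HTMLParser):
    """Collects the src of every <img> that has no usable alt text.

    This mirrors only the detection step; a real pipeline would then
    ask a captioning service (such as Azure Cognitive Services) to
    describe each flagged image.
    """
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attr_map = dict(attrs)
        alt = attr_map.get("alt")
        # A missing or empty alt attribute gives the screen reader nothing to announce.
        if not alt or not alt.strip():
            self.missing.append(attr_map.get("src", ""))

page = """
<p><img src="cat.jpg" alt="A sleeping cat"></p>
<p><img src="chart.png"></p>
<p><img src="logo.svg" alt=""></p>
"""
finder = MissingAltFinder()
finder.feed(page)
print(finder.missing)  # the images that would need a generated description
```

Note that an intentionally empty `alt=""` normally marks a decorative image that should be skipped; a production system would have to weigh that convention against the many pages where empty alt text is simply an authoring mistake.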