Microsoft improves Bing search results with MEB
Microsoft recently introduced “Make Every Feature Binary” (MEB) to improve its Bing search engine. MEB is a large-scale sparse model that goes beyond pure semantic analysis to capture more nuanced relationships between search queries and documents. To make search more precise and dynamic, MEB harnesses the power of big data, accepting an input feature space with over 200 billion binary features.
DNN and transformers for Bing
The Bing search stack relies on natural language models to improve the core search algorithm’s understanding of users’ search intent and of the associated web pages. Deep learning computer vision techniques improve the discoverability of billions of images, even when they are not accompanied by text descriptions or summary metadata. Machine learning models retrieve captions from a page’s larger body text to answer specific questions.
The introduction of Transformers was a game-changer for natural language understanding. Unlike DNN architectures, which processed words individually and sequentially, Transformers can model the context and the relationship between each word and all of the other words surrounding it in a sentence. Since April 2019, Bing has incorporated large Transformer models to deliver high-quality upgrades.
How MEB Improves Search Performance
Transformer-based deep learning models have been preferred for their advanced understanding of semantic relationships. But while these models show great promise, they can still fail to capture a nuanced understanding of individual facts. Enter MEB.
The MEB model has 135 billion parameters that map individual facts to features, giving it a more nuanced understanding. It was trained on over 500 billion query/document pairs drawn from three years of Bing search data. This lets MEB memorize the facts represented by its binary features while continuously and reliably learning from large amounts of data.
For each Bing search impression, the Microsoft team used heuristics to determine whether users were satisfied with the results. Documents that satisfied users were labeled as positive samples; other documents from the same impression were labeled as negative samples. For each query-document pair, binary features were extracted from the query text and the document’s URL, title, and body text. These binary features were then fed to the sparse neural network model, which is trained to minimize the cross-entropy loss between its predicted click probability and the actual click label.
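The training objective described above is standard binary cross-entropy between the predicted click probability and the 0/1 click label. As a minimal illustration (not Microsoft’s code), the per-sample loss looks like this:

```python
import math

def binary_cross_entropy(p_click: float, clicked: int) -> float:
    """Cross-entropy between a predicted click probability and a 0/1 click label."""
    eps = 1e-12  # clamp to avoid log(0)
    p = min(max(p_click, eps), 1.0 - eps)
    return -(clicked * math.log(p) + (1 - clicked) * math.log(1.0 - p))

confident_right = binary_cross_entropy(0.9, 1)  # correct prediction -> small loss
confident_wrong = binary_cross_entropy(0.9, 0)  # wrong prediction -> large loss
```

A confident correct prediction incurs a small loss, while a confident wrong one is penalized heavily, which is what pushes the model’s click estimates toward the observed labels.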
Large-scale feature design and training are key to the MEB model. Traditional numerical features only capture matching counts between the query and the document. MEB’s features, by contrast, are very specific and are defined at the level of N-gram relationships between the query and the document. All features are designed as binary features, which also makes it easy to cover manually crafted numerical features. Because these features are taken directly from raw text, MEB can perform end-to-end optimization in a single pass. The current production model uses three main feature types:
- Query and document N-gram pair features
- One-hot encoding of bucketized numerical features
- One-hot encoding of categorical features
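To make the first feature type concrete, here is a toy sketch of N-gram pair features: every (query N-gram, document N-gram) pair becomes one binary feature that either fires or not. The tokenization, the use of `crc32` hashing, and the feature-space size are illustrative assumptions, not details of Bing’s pipeline:

```python
import zlib

def ngrams(text: str, n: int) -> list[str]:
    """Lowercased word N-grams of a text."""
    toks = text.lower().split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def query_doc_pair_features(query: str, doc_title: str, n: int = 1,
                            dim: int = 2 ** 20) -> set[int]:
    """Each (query n-gram, title n-gram) pair is one binary feature,
    hashed into a fixed-size sparse index space."""
    active = set()
    for q in ngrams(query, n):
        for t in ngrams(doc_title, n):
            key = f"q:{q}\x1ft:{t}".encode()
            active.add(zlib.crc32(key) % dim)
    return active  # indices of features that fire (value 1); all others are 0

feats = query_doc_pair_features("deep learning", "deep learning models")
```

Only the indices of active features need to be stored, which is what makes a 200-billion-feature input space tractable: each query-document pair activates a tiny sparse subset of it.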
Benefits
MEB is currently in production for all Bing searches in all regions and languages, making it Microsoft’s largest universal model. Unlike Transformer-based deep learning models such as GPT-3, MEB can learn hidden intents between a query and a document. It can also identify negative relationships between words or phrases, revealing what users might not want to see for a query.
With the introduction of MEB in Bing, Microsoft has the following advantages:
- A 2% increase in click-through rate (CTR) on the top search results
- A 1% reduction in manual query reformulation
- A more than 1.5% reduction in pagination clicks (users needing to click through to the next page of results)
The MEB model consists of a binary feature input layer, a feature embedding layer, a pooling layer, and two dense layers. The input layer contains 9 billion features generated from 49 feature groups. Each binary feature is encoded as a 15-dimensional embedding vector. After per-group sum pooling and concatenation, the resulting vector passes through the dense layers to produce a click probability estimate.
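The forward pass described above can be sketched in a few lines of NumPy. The dimensions here are tiny stand-ins for MEB’s real ones (9 billion features, 49 groups, 15-dimensional embeddings), and the random weights, ReLU activation, and layer sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions standing in for MEB's real ones.
N_FEATURES, N_GROUPS, EMB_DIM, HIDDEN = 1000, 4, 15, 32

emb = rng.normal(scale=0.1, size=(N_FEATURES, EMB_DIM))        # embedding table
W1 = rng.normal(scale=0.1, size=(N_GROUPS * EMB_DIM, HIDDEN))  # dense layer 1
W2 = rng.normal(scale=0.1, size=(HIDDEN, 1))                   # dense layer 2

def predict_click(active_by_group: list[list[int]]) -> float:
    """active_by_group: for each feature group, the indices of active (value-1)
    binary features. Returns an estimated click probability."""
    pooled = [emb[idx].sum(axis=0) if idx else np.zeros(EMB_DIM)
              for idx in active_by_group]         # sum-pool each group's embeddings
    x = np.concatenate(pooled)                    # concatenate the group vectors
    h = np.maximum(W1.T @ x, 0.0)                 # dense layer + ReLU
    return float(1.0 / (1.0 + np.exp(-(W2.T @ h)[0])))  # sigmoid -> probability

p = predict_click([[3, 17], [256], [], [999]])
```

Because the input is binary, the embedding lookup plus sum-pooling is all the “multiplication” the input layer needs: active features contribute their embedding vectors, inactive ones contribute nothing.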
“If you are using DNNs to power your business, we recommend that you experiment with large sparse neural networks to complement those models. This is especially true if you have a large historical stream of user interactions and can easily construct simple binary features,” the team said in a blog post.