Bixby’s Powerful Personalization Drives New Assistant Experiences


The Next Gen Assistant

Imagine the following scenario… Your voice assistant wakes you up with “Good morning Jennifer, it’s 6am on this sunny Thursday morning. Your skim milk latte is ready, just as you like it. You have an action-packed day today, starting with a customer meeting at 9am in the city. Would you like me to schedule an Uber for an 8:45 am arrival time?”

You respond with “Yes, that would be perfect.” Then your assistant says, “Great, it’s booked. By the way, I would recommend either your red Hugo Boss dress or your gray Chanel suit for your customer meeting.”

While getting ready for work, your assistant reads aloud top technology news stories based on your personalized feeds along with important emails from work, family, and friends. As you’re heading to the door, your assistant reminds you of tonight’s dinner party and suggests three favorite restaurant options nearby with available reservations for 8pm. You select the Italian one and dash out to catch your Uber.

The scenario described above will be commonplace in the not-too-distant future as voice assistants continue to evolve, becoming more personalized and more aware of our preferences, daily routines, and habits. Personalization unlocks the true promise of voice assistants as they become increasingly ubiquitous and intertwined with our daily lives.

Bixby’s Powerful Personalization

Bixby, Samsung’s intelligence platform, was designed from the ground up with personalization capabilities that streamline user interactions and deliver a tailored experience for each user. Personalization is achieved through machine learning, which helps users complete tasks more quickly and efficiently by highlighting the most relevant results based on previous selections and learned preferences. Bixby can learn at both the individual-user level and the community level.

Bixby offers users and developers complete transparency and control over what the platform learns. Bixby will not learn anything about users unless they grant Bixby permission to do so. In addition, users can view what Bixby has learned and change or delete stored preferences at any time.

Bixby offers two types of personalization features: Preference Learning and Selection Learning.



Preference Learning
Preference Learning is about learning which items an individual user prefers and surfacing them from a long list of results. The goal is to help users find what they’re looking for faster. Preference Learning draws on a broad set of an individual’s interactions over time, such as requests, observations, and selections. It is based solely on individual learning; there is no community-level component.

Bixby’s Preference Learning capability allows developers and companies of all sizes to take advantage of Bixby’s sophisticated, built-in machine learning algorithms to learn about user preferences and serve up a more personalized experience on the Bixby platform.

With a few lines of code, you can implement Preference Learning by telling the system which properties to learn. Let’s take a hotel as an example. Within Bixby, a hotel is modeled with a number of different properties, such as star rating, hotel brand, address, and phone number. Some of these properties are useful for Preference Learning (star rating, hotel brand); others are not (phone number, address).
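To make this concrete, here is a minimal sketch of how such a hotel model might declare its learnable properties, following the same pattern as the FlowerProduct example further below. The Hotel structure and the HotelBrand, StarRating, and PhoneNumber types are hypothetical stand-ins, and the brand and rating would need to be modeled as enum or name primitives for Bixby to learn them.

structure (Hotel) {

  property (hotelBrand) {
    type (HotelBrand)
    min (Optional) max (One)
  }

  property (starRating) {
    type (StarRating)
    min (Optional) max (One)
  }

  property (phoneNumber) {
    type (PhoneNumber)
    min (Optional) max (One)
  }

  // Learn only the properties that express a preference;
  // phoneNumber (and address) are not meaningful preferences
  features {
    preferable {
      preference (hotelBrand)
      preference (starRating)
    }
  }

}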

As mentioned above, Bixby allows transparency and control for both users and developers. Bixby will not learn anything unless the user explicitly confirms that this preference should be learned. See below for an example of Bixby prompting the user on whether to learn that Thai is a favorite cuisine type.

Personalization Reference Image 01



Below is another example of Preference Learning. In this example, the user has made several restaurant queries for steakhouses in the past. Bixby has learned that this user prefers steakhouses and presents a top-rated steakhouse with a “Based on your preferences” highlight, offering the user’s favorite cuisine type as the first option to streamline the decision process. Developers add these highlights, and any service can use them to surface a likely top pick for the user based on insights from Preference Learning.

Personalization Reference Image 02



To enable Preference Learning, you add a preferable block and reference valid properties. In the example below, name (a primitive name type) and productTypes (a primitive enum) are possible user preferences. Keep in mind that Bixby can only learn preferences for the following types: enum, boolean, name, and qualified primitives. You cannot declare decimal, integer, or text types as preferences.

structure (FlowerProduct) {

  property (name) {
    type (ProductName)
    min (Required) max (One)
  }

  property (features) {
    type (FlowerFeature)
    min (Optional) max (Many)
  }

  property (productTypes) {
    type (ProductType)
    min (Optional) max (Many)
  }

  // preferences
  features {
    preferable {
      preference (name)
      preference (productTypes)
    }
  }

}



Selection Learning
Assistants need to interact with users, and part of that interaction involves asking questions. Bixby’s Selection Learning can automatically select the best option for the user based on prior selections and context, such as current time, location, and actions. The main benefit is to minimize the number of questions users have to answer during their conversation with the assistant.

Bixby models can have minimum [min (Required)] and maximum [max (One)] requirements. In the minimum case, Bixby requires at least one input value; if it does not have one, it automatically responds with, “I need one of these to continue.” In the maximum case, if more than one item is returned, Bixby generates a prompt asking the user, “Which one?” This prompt is called a Selection Prompt. Rather than asking the user the same question over and over, Bixby can learn from prior selections made by both the individual and the community. Developers define selection strategies to tell Bixby which selections to learn from.
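As a rough illustration of these cardinality constraints, here is a sketch of an action input before any learning is enabled. It reuses the weather.FindWeather action and geo.NamedPoint input from the full example later in this article; the max (One) constraint is added here purely for illustration, and the exact prompt wording depends on your dialog and selection strategies.

action (weather.FindWeather) {
  type (Search)

  input (where) {
    type (geo.NamedPoint)
    // min (Required): with no matching location, Bixby responds
    // "I need one of these to continue."
    // max (One): with more than one match, Bixby raises a Selection
    // Prompt ("Which one?") unless Selection Learning resolves it.
    min (Required) max (One)
  }
  ...
}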

For example, suppose a user says, “What’s the weather in Boston?” There are actually thirteen cities named Boston in the U.S. Bixby might learn that more populous cities are requested more often than smaller ones, or it might weight capital cities more heavily. One of the benefits of Selection Learning at the community level is that, out of the box, Bixby can use insights gained from the community to make selections on behalf of the user. So if you say, “What’s the weather in Boston?”, Bixby may automatically select Boston, Massachusetts based on selection strategies like those described above. However, I may live near Boston, Virginia. Initially, Bixby may choose the wrong Boston for me, but as a user I can correct this via the Understanding Page: I just swipe down on the mobile device after issuing my voice command and change Boston, MA to Boston, VA. Bixby will learn that I am different from the general community and will respond accordingly the next time I ask for the weather in Boston.

Personalization Reference Image 03



On the flip side, if Bixby does not have sufficient confidence to make a selection on the user’s behalf and the user asks, “Hi Bixby, what’s the weather in Dublin?”, Bixby identifies four cities named Dublin and prompts the user to choose. Bixby learns this selection so that the next time the user asks for the weather in Dublin, the desired Dublin surfaces.

Personalization Reference Image 04



You must enable Selection Learning for each action input you want Bixby to learn. In this example, including a with-learning block within a default-select block in an action input declaration instructs Bixby to learn the geo.NamedPoint that each user individually prefers during weather.FindWeather actions. Based on the actions and selection strategies that you define, Bixby dynamically learns the best geo.NamedPoint for each user and their context.

action (weather.FindWeather) {
  type (Search)

  input (where) {
    type (geo.NamedPoint)
    min (Required)

    default-select {
      // Enable Selection Learning without any additional selection strategies
      with-learning { }
    }
  }
  ...
}


Conclusion

Developers can build powerful, personalized experiences by incorporating Preference Learning and Selection Learning into their Bixby capsules. We hope this quick reference gives you a better understanding of how to implement these features in your own capsules. For additional information and sample capsules on Personalization and Learning, visit the Bixby Developer Center today.
