Training phrases are example utterances that users might say to match an intent. You don't have to define every possible variation of what a user might say, because Dialogflow's built-in machine learning naturally expands your training phrases to cover similar user utterances. However, you should still add multiple training phrases to each intent (20 or more examples) so your agent can recognize a greater variety of user input.
For example, a training phrase like "I want pizza" trains your agent to recognize similar input, like "Get a pizza" or "Order pizza". As you create intents and training phrases for your agent, Dialogflow constructs a dynamic model behind the scenes to make decisions about how to handle user input. This algorithm is unique to each agent and is based on your intent specifications.
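As a concrete sketch, the Dialogflow ES REST API represents training phrases as a list on the intent resource. The snippet below mirrors that JSON shape with plain Python dicts; the "order.pizza" intent name and the specific phrases are hypothetical, chosen to match the pizza example above.

```python
# Sketch of an intent's training phrases in the JSON shape Dialogflow's
# REST API uses (Intent.trainingPhrases). Intent name is hypothetical.
order_pizza_intent = {
    "displayName": "order.pizza",
    "trainingPhrases": [
        {"type": "EXAMPLE", "parts": [{"text": "I want pizza"}]},
        {"type": "EXAMPLE", "parts": [{"text": "Get a pizza"}]},
        {"type": "EXAMPLE", "parts": [{"text": "Order pizza"}]},
        # ...in practice you would list 20 or more variations.
    ],
}

print(len(order_pizza_intent["trainingPhrases"]))
```

Each phrase is a list of parts; for unannotated phrases like these, each phrase is a single part containing only text.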
Entities and annotation
Training phrases allow your agent to successfully match user input to an intent. To further help your agent with this matching process, you can annotate training phrases with entities. Entities represent categories of things (for example, colors, cities, and numbers), and annotation refers to the linking of words or values within training phrases to their corresponding entities. You can manually annotate your training phrases, but Dialogflow can also automatically annotate for you. Once a word or phrase is annotated, it becomes highlighted in your training phrases.
Dialogflow defines system entities, which are pre-built entities that correspond to commonly used categories like color, time, and city names. You can also create developer-defined entities when you want Dialogflow to recognize a certain category of things that isn't represented by a system entity. For more information about entity types, see Entities.
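To make the developer-defined case concrete, the sketch below shows an entity type in the JSON shape Dialogflow's REST API uses (EntityType), where a map-kind entity gathers synonyms under a canonical value. The "pizza-topping" entity and its entries are hypothetical, and the lookup helper is our own illustration, not Dialogflow behavior.

```python
# Sketch of a developer-defined entity type in Dialogflow's JSON shape
# (EntityType). The "pizza-topping" entity and its entries are hypothetical.
pizza_topping = {
    "displayName": "pizza-topping",
    "kind": "KIND_MAP",  # each entry maps synonyms to one canonical value
    "entities": [
        {"value": "mushroom", "synonyms": ["mushroom", "mushrooms", "shrooms"]},
        {"value": "pepperoni", "synonyms": ["pepperoni", "salami"]},
    ],
}

def canonical_value(entity_type, word):
    """Return the canonical value whose synonyms include `word`, else None."""
    for entry in entity_type["entities"]:
        if word.lower() in (s.lower() for s in entry["synonyms"]):
            return entry["value"]
    return None

print(canonical_value(pizza_topping, "Shrooms"))  # mushroom
```

Synonym-to-canonical-value mapping like this is what lets an annotated entity match many surface forms of the same concept.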
Dialogflow's ability to use entities to categorize specific parts of your training phrases is one of its most powerful features. Because of this, your agent can recognize user input that matches an annotated entity even when that exact wording never appears in a training phrase. Entities are very important in the intent matching process, as they give Dialogflow more information when trying to match utterances that don't exactly correspond to training phrases.
For example, imagine that you defined a training phrase like "What is the weather like on Tuesday at 3 PM?" You can annotate "Tuesday" and "3 PM" with a date and time entity. This annotation tells Dialogflow to match more variations, like "What is the weather like on Wednesday at noon?", "How's the weather tomorrow at 10 PM?", or any other variation that contains a date and time. If you didn't annotate the phrase, it would only match user input containing the literal values "Tuesday" and "3 PM", but not any other date or time. In the next section, we'll discuss further how entities extract relevant information (parameters) from user utterances.
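The weather example above can be sketched in the part-list representation Dialogflow's REST API uses, where annotated parts carry an entityType and an alias (the parameter name) alongside their text. The system entity names @sys.date and @sys.time are real; the extraction helper is our own illustration of how annotations mark which words map to which parameters.

```python
# Sketch: an annotated training phrase as Dialogflow's REST API represents it.
# Annotated parts carry entityType and alias; plain parts carry only text.
phrase = {
    "type": "EXAMPLE",
    "parts": [
        {"text": "What is the weather like on "},
        {"text": "Tuesday", "entityType": "@sys.date", "alias": "date"},
        {"text": " at "},
        {"text": "3 PM", "entityType": "@sys.time", "alias": "time"},
        {"text": "?"},
    ],
}

def extract_parameters(training_phrase):
    """Collect alias -> annotated text for every annotated part."""
    return {
        part["alias"]: part["text"]
        for part in training_phrase["parts"]
        if "entityType" in part
    }

print(extract_parameters(phrase))  # {'date': 'Tuesday', 'time': '3 PM'}
```

Reading the annotated parts this way is what allows "Tuesday" and "3 PM" to generalize: at runtime, any value that the date or time entity recognizes can fill the same parameter slots.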
Example and template modes
Each training phrase can be in one of two modes:
Example mode: Indicated by the quotation mark (") icon. Training phrases in example mode are written in natural language and annotated so that parameter values can be extracted. For example, "What is the weather going to be tomorrow in San Francisco" is written in example mode format. It’s easier to provide training phrases as examples than as templates, and Dialogflow's machine learning also trains faster on examples than on templates.
Template mode: Indicated by the @ icon. Training phrases in template mode contain direct references to entities prefixed with the @ sign instead of annotations. An example of a training phrase written in template mode is "What is the weather going to be on @date in @city?".
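To relate the two modes, the sketch below splits a template-mode string into the part list that example mode uses, turning each @entity reference into an annotated part. The parsing logic is our own illustration, not Dialogflow behavior, and the @date and @city references are taken from the example above.

```python
import re

def template_to_parts(template):
    """Illustration: convert a template-mode phrase into example-mode parts.

    Each @entity reference becomes an annotated part; everything else
    becomes a plain text part. This is a sketch, not Dialogflow's parser.
    """
    parts = []
    # The capturing group keeps the @entity tokens in the split output.
    for token in re.split(r"(@[\w.-]+)", template):
        if not token:
            continue
        if token.startswith("@"):
            parts.append({"entityType": token, "alias": token.lstrip("@")})
        else:
            parts.append({"text": token})
    return parts

print(template_to_parts("What is the weather going to be on @date in @city?"))
```

The output interleaves plain text parts with annotated parts for @date and @city, which is essentially what annotation in example mode produces by hand.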