
VoiceObjects 7

Tutorial
VoiceObjects 7.1
To ensure that you are using the documentation that corresponds to the software you are licensed to use,
compare this version number with the software version shown in About VoiceObjects in the Help menu of the
software you are using.

Copyright
Copyright © 2001-2007 VoiceObjects and its licensors. All rights reserved.
VoiceObjects GmbH, Friedrich-Ebert-Strasse, 51429 Bergisch Gladbach, Germany
Published in Germany – Legal information January 2007
Information in this document is subject to change without notice and does not represent a commitment on the part
of VoiceObjects or any of its subsidiaries. The software described in this document is furnished under a license
agreement or nondisclosure agreement. The software may be used or copied only in accordance with the terms
of the agreement. You may not copy, use, modify, or distribute the software except as specifically allowed in the
license or nondisclosure agreement. No part of this document may be reproduced or transmitted in any form or by
any means, electronic or mechanical, including photocopying and recording, for any purpose, without the express
written permission of VoiceObjects or a subsidiary thereof.
Protected by German patent 101 47 341. Patents pending.
Companies, names, and dates used in examples herein are fictitious unless otherwise noted. If such names affect the
copyrights or trademarks of others, please notify VoiceObjects by e-mail at vo-documentation@voiceobjects.com.

Trademarks
VoiceObjects is a registered trademark of VoiceObjects GmbH in Germany or other countries. VoiceObjects 7,
VoiceObjects Server, VoiceObjects Analyzer, VoiceObjects Desktop, and VoiceObjects Studio are trademarks of
VoiceObjects GmbH in Germany or other countries. Any other trademarks, trade names or service marks
mentioned in this document belong to their respective owners.
The material presented herein is based upon information that we consider reliable, but we do not represent that it
is error-free and complete. VoiceObjects is not making any representation or granting any warranty with respect
to such material, and the distribution of such material shall not subject VoiceObjects to any liability.

Explicit Copyright Notice


This product includes software developed by the Apache Software Foundation (www.apache.org).
Copyright © 1999-2007 – The Apache Software Foundation. All rights reserved.
Java and all Java-related trademarks and logos are trademarks or registered trademarks of Sun Microsystems,
Inc. in the U.S., other countries, or both.
Specific versions of this product contain copyright material licensed from AdventNet, Inc (www.adventnet.com).
All rights to such copyright material rest with AdventNet.
Specific versions of this product contain copyright material authorized by the Eclipse Foundation
(www.eclipse.org), their contributors and others. All rights reserved.
Specific versions of this product contain copyright material authorized to copy from Bocaloco Software LLC
(www.bocaloco.com). All rights to such copyright material rest with Bocaloco Software.
Specific versions of this product work with Microsoft Excel or make use of copyright material from Microsoft
Corporation (www.microsoft.com). All rights to such copyright material rest with Microsoft. Microsoft and Excel are
registered trademarks of Microsoft Corporation.

Document Number: E-002-20070330-VO7

Table of Contents
WELCOME TO VOICEOBJECTS TUTORIAL
    Content
    Organization
    Tutorial Application
        Functionality
        Dialog flow
    Using the Online Tutorial
    Typographical Conventions
    Feedback and Questions
BEFORE YOU BEGIN
LESSON 1 – GETTING STARTED
    Objectives
    Getting Started With VoiceObjects Desktop
        Open VoiceObjects Desktop
        Log into VoiceObjects Desktop
    Adding a User
    What Comes Next
LESSON 2 – CREATING A PROJECT
    Objectives
    Creating a Project
    The Main Desktop Window
    What Comes Next
LESSON 3 – CREATING OUTPUT
    Objectives
    Objects
    Object Editors
    Output Objects
    Creating an Output Object
    What Comes Next
LESSON 4 – CREATING AND DEPLOYING A SERVICE
    Objectives
    Servers and Services
    Configuring a New Service
    Configuring a Server Object
        Create a new Server object
        Use an existing Server object
    Starting the Tutorial Service
    Calling the Application
    Dialog Flow 1
    What Comes Next
LESSON 5 – STRUCTURING THE DIALOG FLOW
    Objectives
    Structuring Dialog Flows
    Module Objects
    Creating a Module Object
    Displaying Dialogs in the Dialog Designer
    Changing the Start Object
    Redeploying an Application
    Dialog Flow 2
    What Comes Next
LESSON 6 – INTERACTING WITH THE CALLER
    Objectives
    Interacting with the Caller
    The Input Object
    Creating an Input Object
        Specify the input request
        Specify the recognition grammar
        Specify the result handling through a Variable object
    Adding the Input Object to the Application
    Dialog Flow 3
    Creating a Second Input Object
    Dialog Flow 4
    Checking the Results by an Output Object
        Different methods to create autonomous objects
        Create the Output object repeating the results
    Dialog Flow 5
    What Comes Next
LESSON 7 – HANDLING EVENTS
    Objectives
    Event Handling Strategies
    Defining a Global No Input Event
    Defining a Global No Match Event
    Defining Context-Sensitive Event Handling
    Dialog Flow 6
    What Comes Next
LESSON 8 – CONFIRMING INPUT
    Objectives
    Confirmation
    Creating a Confirmation Object
        Specify the confirmation request
        Specify the confirmation grammar
        Specify the correction request
        Specify the correction mapping
        Modify the input request
    Manipulating Dialog Flows in the Dialog Designer
    Dialog Flow 7
    What Comes Next
LESSON 9 – USING LAYERS
    Objectives
    The Concept of Layers
    Using a Layer
    Creating a Layer Object
        Specify the state indicator
        Specify the layer states
    Adding the Layer to the Application
    Dialog Flow 8
    What Comes Next
LESSON 10 – HANDLING DYNAMIC DATA
    Objectives
    About Dynamic Data
    Using the Script Object
    Creating a Script Object
        Specify the script code
        Define the parameter set
    Playing the Result to the Caller
    Dialog Flow 9
    What Comes Next
LESSON 11 – PROCESSING CONDITIONS
    Objectives
    Conditional Processing
    Asking For Agent Transfer
    Specifying an Exit Object
    Branching the Dialog Flow
    Dialog Flow 10
    What Comes Next
LESSON 12 – DESIGNING USER-FRIENDLY APPLICATIONS
    Objectives
    Voice User Interface Design
    Standard and Custom Navigation
    Specifying Custom Navigation
    Dialog Flow 11
    Random Prompting
        Specify random prompts
    What Comes Next
LESSON 13 – LOGGING THE RESULTS
    Objectives
    Logging
    Creating a Log Object
    Adding the Log Object to the Application
    Accessing the Log File
CONCLUSION
APPENDIX A – PROJECT DOCUMENTATION
    Introducing Project Documentation
    Creating a Project Documentation for the Life Insurance Application
    Examples
        Cover page
        Dialog flow
        Object overview
        Object statistics
APPENDIX B – OVERVIEW OBJECT TYPES
    Components Category
    Resources Category
    Logic Category
    Actions Category
    Layers Category
    OSDMs Category
APPENDIX C – HOW TO LEARN MORE
    VoiceObjects Help
        Open VoiceObjects Help
        Using VoiceObjects Help
    Using Dialog Help
    Prime Insurance Sample Application
    Printed Documentation
    Overview VoiceObjects Documentation
CONTACTING VOICEOBJECTS
    About VoiceObjects
    VoiceObjects Offices
        Corporate Headquarters
        EMEA Headquarters
        United Kingdom Office
    Technical Support
        Technical Support Americas
        Technical Support EMEA
    Training
        VoiceObjects University Training Center Americas
        VoiceObjects University Training Center EMEA
    Documentation Feedback



Welcome to VoiceObjects Tutorial


VoiceObjects provides enterprises with an open and flexible infrastructure to efficiently create, deploy,
manage, and analyze self-service phone portals.
Within the VoiceObjects product family, VoiceObjects Desktop is the easy-to-use Web interface for
creating, testing, deploying and monitoring applications.
This Tutorial intends to give you a first impression of how to easily develop applications with
VoiceObjects using VoiceObjects Desktop.

Content
The Tutorial explains in a step-by-step approach how to get started with VoiceObjects Desktop
and how to build and run simple applications – introducing the basic objects provided by
VoiceObjects for application development and deployment. While completing these lessons,
you will also learn about some advanced features provided by VoiceObjects, e.g. to give your
dialogs a more natural hear and feel.
The Tutorial gives you a foundation for using the basic features of VoiceObjects Desktop and
building your first phone applications with it. If you want to learn more about additional features
– either while working through the lessons or after completing this Tutorial – refer to
VoiceObjects Help (or its corresponding PDF documents) which describes all features and
components of the VoiceObjects products in detail (see Appendix C – How To Learn More for
an overview on all available additional resources).
In particular, refer to
• the Design Guide to learn more about designing applications within VoiceObjects.
• the Desktop Guide for details on each element and command within VoiceObjects Desktop.
• the Object Reference for a detailed description of the configuration of each available dialog
object.

Note: The Tutorial explains how to build a small voice application and therefore only deals with
the voice channel. Other channels (video, text, or Web) are not covered here.

Organization
The Tutorial consists of the following lessons:
• Lesson 1 – Getting Started – describes how to open and log into VoiceObjects Desktop,
and how to create a new user – briefly introducing the concept of user management that is
provided by VoiceObjects.
• Lesson 2 – Creating a Project – describes how to create a project – introducing the
concept of projects within VoiceObjects – and gives you an initial tour of the main work
areas of VoiceObjects Desktop.
• Lesson 3 – Creating Output – introduces the general concept of objects, which is the
basic underlying concept of VoiceObjects. It explains how to create output by using an
Output object.
• Lesson 4 – Creating and Deploying a Service – introduces the configuration objects
Server and Service, explains how to configure both, and how to start an application in order
to be able to call into it.
• Lesson 5 – Structuring the Dialog Flow – introduces the structuring of dialog flows by
using Module objects.


• Lesson 6 – Interacting with the Caller – explains how to create an Input object to collect
input from the caller.
• Lesson 7 – Handling Events – introduces simple event handling strategies and explains
how to add both global and context-sensitive event handling to your dialog.
• Lesson 8 – Confirming Input – introduces a more complex object – the Confirmation
object – which is used to confirm input collected from the caller and, if required, to correct it.
• Lesson 9 – Using Layers – introduces the concept of layers and explains how to add a
custom layer to your application that changes the dialog behavior depending on the time of
the day the call comes in.
• Lesson 10 – Handling Dynamic Data – describes how to compute dynamic data in
VoiceObjects and explains how to use the Script object.
• Lesson 11 – Processing Conditions – explains how to make use of conditions to
influence the dialog behavior.
• Lesson 12 – Designing User-Friendly Applications – introduces the functionality of
navigation and random prompting, two additional dialog design features provided in
VoiceObjects.
• Lesson 13 – Logging the Results – explains how to store the call results into a log file in
order to evaluate them.
• Appendix A – Project Documentation – gives an example of how you can document your
project directly from VoiceObjects Desktop.
• Appendix B – Overview Object Types – presents a list of all available objects, sorted by
categories, with their associated icons.
• Appendix C – How to Learn More – gives an overview of related resources provided by
VoiceObjects that help you get started with VoiceObjects and provide ongoing support, and
describes how to access them.

Tutorial Application
During the course of the Tutorial you will create a simple but complete application.

Functionality
The Tutorial application represents a life insurance service which is similar to that included in
the VoiceObjects sample application Prime Insurance.
The Prime Life Insurance service simply asks the caller for his/her gender and age and
calculates the life insurance fee based on the caller input.
During the course of the Tutorial several additional features will be added to the Life Insurance
application, such as a confirmation, a custom layer, and event handling, which makes it a complete
and realistic application.
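To make the functionality concrete, here is a small, purely illustrative sketch in Python (not part of VoiceObjects) of the kind of calculation the finished application performs. The Tutorial computes the actual fee later on via a Script object, and the rate values below are invented only so that the sketch reproduces the example dialog shown in the next section.

    # Purely illustrative: the kind of premium calculation the Tutorial application
    # will perform later on (Lesson 10 uses a Script object for this).
    # The rate values are invented so that the sketch matches the example dialog
    # (female, 28 years old -> 27 dollars); the Tutorial's actual formula may differ.
    def monthly_fee(gender, age):
        base = 25 if gender.lower() == "female" else 28  # hypothetical base rates
        return base + age // 10                          # hypothetical age component

    print(monthly_fee("female", 28))  # prints 27, matching the example dialog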


Dialog flow
The following dialog flow shows an example dialog for the Prime Life Insurance service:

Application: Welcome to the Prime Life Insurance service!
             To help me calculate what your monthly premium would be
             for life insurance, please say if you are male or female.
Caller:      Female.
Application: Now please say your age.
Caller:      Twenty-eight.
Application: Ok, I have recognized your gender as female and your age
             as twenty-eight. Is that correct?
Caller:      Yes.
Application: The monthly fee for a twenty-eight year old female is 27
             dollars.
             Would you like to speak to an agent now?
Caller:      No.
Application: Thanks for calling. Goodbye!

Using the Online Tutorial


To open VoiceObjects Tutorial, do one of the following:

From within VoiceObjects Desktop:


• On the Project Home Page in the main VoiceObjects Desktop window, click the Tutorial
button.
• On the Help menu of the main Desktop window, click VoiceObjects Tutorial.

From outside VoiceObjects Desktop:


Using Windows
• In the Start menu of Windows, point to Programs and then to VoiceObjects 7. Click
Tutorial to open VoiceObjects Tutorial.
• In your VoiceObjects installation folder open the folder \platform\desktop\tutorial\en, and
click the file index.htm.


Using UNIX/Linux
• In your VoiceObjects installation directory open the folder /platform/desktop/tutorial/en, and
click the file index.htm.
In all these cases VoiceObjects Tutorial will open up in a separate window.

Within VoiceObjects Tutorial the following features are provided:


Contents
The Contents button displays all available books and pages of the VoiceObjects Tutorial in the
left-hand pane. When you click a closed book, it opens to display its sub-books and pages.
When you click an open book, it closes. When you click pages, the respective topics will be
displayed in the right-hand pane.

Search
The Search button enables you to search for words in the VoiceObjects Tutorial and locate
topics containing these words. Full-text searching looks through every word in the VoiceObjects
Tutorial to find matches. When the search is completed, a list of topics is displayed in the left-
hand pane so you can select a specific topic to be displayed in the right-hand pane. Within this
topic, all words that match the search word will be highlighted in orange.

Glossary
The Glossary button provides a list of VoiceObjects-specific terms in the upper left-hand pane.


If you click a term, its corresponding definition is displayed in the lower left-hand pane.

Back
If you click Back you return to the previously viewed topic.

Print Topic
The Print Topic button opens up a print dialog in order to print the respective topic displayed in
the right-hand pane.

In addition to these buttons, you can also navigate through the VoiceObjects Tutorial by Text
Links: Text within a topic that is blue and underlined is a hyperlink that jumps to another related
topic or Web page.

Typographical Conventions
This document uses the following typographical conventions:

Italic Font – Used to indicate names of applications, projects, objects, variables,
files, output text, and book titles.

Bold Font – Used to indicate any screen terminology like names of windows,
worksheets, editors, menus, boxes, tabs, folders, and fields.

Courier New – Used for grammar code.

All path specifications in this document use slashes (/) to apply to both Linux and Windows. If
you work on Windows you may also use backslashes (\).

Feedback and Questions


If you have any comments on this document please send your feedback to
vo-documentation@voiceobjects.com.
If you have technical difficulties, please contact your local VoiceObjects administrator. If you
have a valid software support and maintenance contract in place, send an e-mail describing your
problem to support@voiceobjects.com.


Before You Begin


Before you begin to work through the lessons of the Tutorial, make sure that the following
requirements are met.
The VoiceObjects platform must be installed and configured, following the steps described in
the Installation Guide.
On your Web server, the VoiceObjects Desktop process and the VoiceObjects Server process
must be running. For more information about starting these processes see the Installation Guide
or contact your local VoiceObjects administrator.
In order to call the sample applications a connection to a media platform is required. The media
platform must support ASR (Automated Speech Recognition) and TTS (text-to-speech).
In addition, you will need the following information, which has been specified during installation:

• IP address of your local VoiceObjects installation (e.g. http://server:port/VoiceObjects)
• User ID and password for your local VoiceObjects installation (by default voadmin and manager)
• Appropriate media platform driver
• Name of the local server (<servername>), default is VOServer
• Telephone number by which your application can be reached

You may want to note your own value for each of these items.

If you do not have this information available contact your local VoiceObjects administrator or
see the Installation Guide for instructions on where to find this information.


Lesson 1 – Getting Started

Objectives
Lesson 1 describes how to open and log into VoiceObjects Desktop, and how to create a new
user – briefly introducing the concept of user management that is provided by VoiceObjects.
After completing this lesson you will know
• how to open and log into VoiceObjects Desktop,
• how to add a user to a VoiceObjects installation.

Getting Started With VoiceObjects Desktop


In the first step you will learn how to open and enter VoiceObjects Desktop.

Open VoiceObjects Desktop


Start Internet Explorer, enter the address of your VoiceObjects installation in the address
bar, and press ENTER.
Your VoiceObjects installation address (URI) could look like

http://server:port/VoiceObjects/Desktop

where

server – denotes the network name or IP address of the physical server running the
VoiceObjects Desktop process, and

port – denotes the Connector Port for the VoiceObjects Desktop process. The port is set
to 80 by default, but can be modified during installation and configuration. See the
Installation Guide or the Administration Guide for details. If the default value of 80 is
used, it is not necessary to specify the port (in which case the colon must be omitted
as well).

Example: http://192.168.1.20:80/VoiceObjects/Desktop
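As a small illustration of how this address is composed, the following sketch (plain Python, not part of VoiceObjects; the host is the example value above) builds the URI from server and port, omitting the port and colon when the default of 80 is used:

    # Sketch: compose the Desktop address as described above.
    # Host and port values are example values; adjust them to your installation.
    def desktop_url(server, port=80):
        host = server if port == 80 else f"{server}:{port}"
        return f"http://{host}/VoiceObjects/Desktop"

    print(desktop_url("192.168.1.20"))        # http://192.168.1.20/VoiceObjects/Desktop
    print(desktop_url("192.168.1.20", 8080))  # http://192.168.1.20:8080/VoiceObjects/Desktop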
The VoiceObjects Start screen will be displayed. The Login screen will be loaded automatically
after a few seconds.

Log into VoiceObjects Desktop


On the Login screen, you need to provide a user ID and password in order to log into
VoiceObjects Desktop.
1. Enter voadmin as user ID and manager as password (if no other user ID and password have
been given to you by your administrator).


2. Click Login to enter VoiceObjects Desktop.

Note: You can only log into VoiceObjects Desktop if an appropriate Desktop seat license is
available. If a message appears telling you that no more seats are available, contact your
administrator. For more information on license management within the VoiceObjects platform
see Chapter 4 – Managing Licenses in the Administration Guide.
The Open Project window will open up.

Adding a User
VoiceObjects provides the capability to create and maintain individual user accounts for all team
members, and to assign roles to them that match their responsibilities in the application
lifecycle. Access can be restricted to only those functions and views that team members need to
do their job.
If you have logged into VoiceObjects Desktop as voadmin, we strongly recommend that you first
create a second user and use this second user to proceed with the Tutorial.

Caution: The voadmin account should only be used in emergency situations since any failure
may result in an unmanageable VoiceObjects installation.
To create a second user, do the following:
1. In the Open Project window, click Control Center to go straight to the main Desktop
window without creating a project. We will create a project in the next step with the new
user.
The main Desktop window opens up, which will be explained in more detail later on.


2. From the Tools menu, select Configure New User. This opens up a User editor (editors
will also be explained in more detail later on).

3. In the Name field of the upper left corner of the User editor, enter a name which should
correspond to your full name (e.g. John Smith).
4. In the User ID field in the User Credentials section, enter a User ID which will serve as
your login name (e.g. jsmith), and enter a password in the Password field.
Note: User IDs must not contain blanks or special characters other than
underscores and must be unique within the repository.
5. In the User role field, select Server Administrator from the drop-down list. Server
Administrators have full access to all aspects of a VoiceObjects installation, which you will
need when working through the Tutorial.
The User editor looks like this now:


6. On the toolbar of the User editor, click Save and Close.
For detailed information on user management within VoiceObjects and on user roles refer to
the Administration Guide.
7. From the File menu, select Exit.

Note: You need to explicitly log out this way (instead of just closing the browser window) to return
the Desktop seat license. For more information on Desktop seat licenses see Chapter 4 –
Managing Licenses in the Administration Guide.
8. Open VoiceObjects Desktop again (see Open VoiceObjects Desktop) and log in as the user
you have just created, entering the respective user ID and password.
You will be back in the Open Project window, this time working in your own user account.

What Comes Next


Next, you will learn how to create a project within VoiceObjects Desktop.
Lesson 2 – Creating a Project


Lesson 2 – Creating a Project

Objectives
Lesson 2 describes how to create a project – introducing the concept of projects within
VoiceObjects – and gives you an initial tour of the main work areas of VoiceObjects Desktop.
After completing this lesson you will know
• how to create a project within VoiceObjects Desktop,
• what the major elements of the main Desktop window are.

Creating a Project
After having successfully logged into VoiceObjects Desktop with the appropriate user role (see
Lesson 1), you will find yourself in the Open Project window.
Next, you will learn how to create your own project.
Within the VoiceObjects platform, all applications developed with VoiceObjects Desktop are
handled as projects. Each project provides a collection of objects for creating applications.
The Open Project window contains a project selection list presenting all available projects and
their available project versions (VoiceObjects supports the use of different versions of a project,
e.g. for development or test purposes). When you log in for the first time the project selection list
will be empty.


Within the project selection list you can do one of the following: Open, copy, or delete a project
or project version, or modify its properties. For details refer to Chapter 2 – Working With
Projects in the Desktop Guide.

1. In the Open Project window, click Create New Project. The New Project window will
open up.

2. In the New Project window, enter a project name (mandatory, e.g. Tutorial) and a project
description (optional) for the new project.
3. By default, Version 1.0 is provided as project version name for the first project version. You
may modify the project version name, if desired, and again add an optional description.
4. Click Create and Open. Your new project will be created and the main Desktop window
opens up. The name of the new project is added to the project selection list.

Note: If you have exited VoiceObjects Desktop and log in again later on, you open your Tutorial
project in the Open Project window by clicking it once in the project selection list, which will
display the available project versions below the project, and then selecting a project version.
For more information on creating new projects see Chapter 2 – Working with Projects in the
Desktop Guide.
Next, you will be introduced to the main Desktop window.


The Main Desktop Window


Take a while to familiarize yourself with the main work environment of VoiceObjects Desktop,
also called the main Desktop window throughout the documentation.

In the left-hand pane you will find the Object Browser.


The Object Browser contains folders representing object categories and types – filled with the
objects available for the current project (for an overview on all available objects see Appendix B
– Overview Object Types). Moreover it shows the Configuration folder, which contains project-
independent configuration objects (see also Lesson 4 – Creating and Deploying a Service), a
Libraries folder and a Clipboard folder.
When creating a new project, as you have just done, the Object Browser initially displays
• the Logic folder containing the Expression subfolder filled with some system expressions.
• the Configuration folder containing at least the User subfolder with one user. Depending on
the options you have chosen during installation you may also see the Server subfolder.
• an empty Libraries folder and an empty Clipboard folder.
For details on all folders in the Object Browser refer to Object Browser in Chapter 3 –
VoiceObjects Desktop Elements in the Desktop Guide.
In the right-hand pane, which is the main work area of VoiceObjects Desktop, the Project
Home Page is displayed initially. Later on you will see that you can also open up a Dialog
Designer, the Control Center and other kinds of worksheets here.
The Project Home Page displays information on the currently selected project and project
version.


At the top of the main Desktop window, you can see the menu bar, which allows access to the
menus File, New, Tools, View, and Help.
Refer to Chapter 3 – VoiceObjects Desktop Elements in the Desktop Guide for a detailed
description of all elements of the main Desktop window.

What Comes Next


Next, you will learn about objects and how to create output by using an Output object.
Lesson 3 – Creating Output


Lesson 3 – Creating Output

Objectives
Lesson 3 introduces the general concept of objects, which is the basic underlying concept of
VoiceObjects. It explains how to create output by using an Output object.
After completing this lesson you will know
• what is meant by objects in the VoiceObjects context,
• what object editors look like,
• how to create an Output object.

Objects
Objects are the core components from which applications are built within VoiceObjects. Based
on an object-oriented approach, the VoiceObjects platform contains different object types, each
of which provides specific functionality and properties.
Each individual object has a particular configuration and can be used wherever this specific kind
of behavior is needed. The object is created just once, but can be used multiple times within
different contexts. This reusability applies to single objects as well as to complex object
sequences. This reduces development and maintenance efforts significantly (see also
Reusability of Objects in Chapter 1 – Application Design – The Modular Approach in the Design
Guide).
VoiceObjects distinguishes between so-called dialog objects and configuration objects.
Dialog objects are used to build applications. VoiceObjects provides many different types –
grouped into six categories. Appendix B – Overview Object Types contains an overview of all
dialog object types currently available. For details on how to configure these objects see the Object
Reference.
Configuration objects are used to configure the VoiceObjects platform. User objects, which you
already got to know in Lesson 1 (see Adding a User), and projects, which you created
in Lesson 2 (see Creating a Project), belong to the configuration objects. The other two are the
Server object and the Service object, which you will get to know in Lesson 4 – Creating and
Deploying a Service.

Object Editors
Objects are created and modified in editor windows that are called object editors. Object editors
have different features which are particular to each object type. Depending on the object type
you want to create or edit, the corresponding object editor opens up.
As an example, an Input editor – the object editor for an Input object – is shown below:


Each object editor contains, at the top, a title bar displaying the name of the object (after saving
it) and the type of the object editor and, below the title bar, a toolbar containing buttons for the
most frequently used commands when creating or editing objects.
Moreover, each object editor contains a Name field in the upper left corner to enter a unique
name for the object. Entering a name is mandatory, and an object cannot be saved without a
name.
Depending on the object type, two or more tabs are available in the corresponding object editor,
which contain all definitions of the object. Within each tab, one or more sections, represented by
boxes, contain all attributes and parameters defining the object.

Output Objects
Any output presented to a caller is represented within VoiceObjects by an Output object. Thus,
Output objects are one of the most basic objects.
The Output object belongs to the object category Components. Objects grouped into the
Components category are higher-level objects, which provide fairly complex dialog processing
capabilities.
Output objects can either be autonomous or embedded as part of other objects such as Input,
Menu, or Confirmation objects.


In the voice and video channels, outputs may contain text that is read out through a text-to-
speech engine, or prerecorded audio or video files, possibly containing various dynamic content
as well as Silence objects.
For a detailed description of the complete functionalities of the Output object see Output in the
Object Reference.
In the following we will create a simple autonomous Output object that just plays back a short
welcome message to the caller.

Creating an Output Object


To create an Output object select Output from the New menu or click the Output icon on
the toolbar. An empty Output editor will open up.

In the Output editor, enter Welcome as the name for the new Output object in the Name field at
the top of the editor window.
On the Definition tab, which is the main area for configuring objects in all object editors, you
see one section: the Output section. The Output section contains an empty Output item.
1. Type a welcome message (e.g. Welcome to the Prime Life Insurance service!) in the text
field of the Output item. This text will be read to the caller by a speech synthesis system
when running the application.
2. Leave all other values as default for now.
The Output section looks like this now:


3. On the Output Editor toolbar, click Save and Close.


An Output object may contain any number of individual Output items. To add an item, click the
Add button on the toolbar of the Output section.

As you will see during the course of this Tutorial there are several objects that may contain
multiple items (e.g. Grammar, Hyperlink, Input, Layer).
For an overview on the function of all fields within the Output section refer to Output in the
Object Reference.
In the Object Browser, you will now see a Components folder containing an Output folder,
which in turn contains your Welcome Output object.


What Comes Next


Next, you will learn how to deploy your application – comprised so far of a single Output object
– so that you can call it.
Lesson 4 – Creating and Deploying a Service


Lesson 4 – Creating and Deploying a Service

Objectives
Lesson 4 introduces the configuration objects Server and Service, explains how to configure
both, and how to start an application in order to be able to call it.
After completing this lesson you will know
• what is meant by a server and a service,
• how to configure a Service object,
• how to configure a Server object,
• what the Control Center is and how to start an application in the Control Center.

Servers and Services


While VoiceObjects Desktop is the web-based graphical working environment for developing
and maintaining applications, VoiceObjects Server is the active component that interacts with
media platforms and processes the applications.
A VoiceObjects Server is a logical entity that is represented by a Server object in the
VoiceObjects Metadata Repository. It can either run on a single physical machine or be
distributed over a server farm consisting of any number of physical machines.
The media platform connects to the server via HTTP requests.
Applications developed with VoiceObjects Desktop are deployed to a server as a service
represented by a Service object. Only services can be called from a phone. Services provide
attributes to the server such as the language being used, associated media platform drivers,
and the start object of the application (see below).
For detailed information on servers and services see the Deployment Guide.

Configuring a New Service


To configure a new service within VoiceObjects Desktop do the following:
1. From the Tools menu, select Configure New Service. An empty Service editor will open
up.


2. In the Name field at the top of the editor window enter a name for the new Service object
(e.g. Tutorial).
3. In the Application Parameters section, specify the start object of the application
associated with this service. Typically, this is a Module object. Since we have only created
an Output object so far, which we want to listen to, select your Output object here.

To select the Welcome Output object as the start object, click the Context Menu button
to the right of the Start object field. From the context menu, select Browse and then
Output. In the Search Result window, click the Welcome Output object to link it into the
Start object field.
4. In the Language field, select the appropriate language for your application to run in (e.g.
English US). The language selected here is the language that is used if you leave Default
as the language parameter in any output or grammar definition.

Tip: The Language field, as well as a lot of other fields across all object editors that allow
selecting input from a drop-down list, provides a so-called auto-complete function. This means
that if you enter for example just a single letter the selection in the drop-down list will be
reduced to all entries starting with this letter and the first entry that applies to the selection will
be entered into the field.
Example: if you enter “F” in the Language field the selection in the drop-down list will be
reduced to all available entries starting with an “F” (Finnish, French, French (CA), French (FR))
with Finnish being entered into the Language field.
For further details see Object editor fields in Chapter 6 – Object Editors in the Desktop Guide.


5. In the Communication Parameters section, enter the unique name of the service in the
VSN (Reference ID) field (e.g. Tutorial). The VSN (VoiceObjects Service Name) is used to
identify the service when initiating it via an HTTP request from the media platform.
6. In the Driver field, select the appropriate driver for your media platform from the drop-down
menu.
7. For all other properties leave the default selections.
The Service editor will look like this now:

8. On the toolbar of the Service editor, click Save and Close.


For a detailed description of all parameters and settings within the Service editor refer to
Chapter 2 – Configuring Servers and Services in the Deployment Guide.

Configuring a Server Object


In order to run your application you need to specify the server that processes your service. This
is done through a Server object.
In the Object Browser, open the Configuration folder and then the Server folder (if available)
and check whether a Server object called VOServer already exists.
If it doesn't exist (or if even the Server folder doesn't exist), you need to create it. For more
information see Create a new Server object below.


If it already exists you can use this Server object. For more information see Use an existing
Server object.

Note: If the folder already contains a Server object with a different name, or more than one
Server object, ask your VoiceObjects administrator how to proceed.

Create a new Server object


1. From the Tools menu, select Configure New Server. An empty Server editor opens up.

2. In the Server editor, enter a name for the new Server object in the Name field at the top of
the editor window. It is recommended to use the server's reference ID as the name here (see below).
3. In the Configuration Parameters section, enter the reference ID for the server in the
Reference ID field. The default value is VOServer.
Note: In order for the installation to work properly, the reference ID specified here
needs to be identical to the <servername> specified in the VOServer_Configuration.xml
file (see Before You Begin).
4. Leave all other values as default.
5. In the Hosted Services section, add the service which you want to be processed by the
server (e.g. Tutorial). To add a service, click the Context Menu button to the right of
the Service field. From the context menu, select Browse and in the Search Result window
click the Tutorial Service object to link it into the Service field.
The Server editor will look like this now:


6. On the toolbar of the Server editor, click Save and Close.


After creating a new Server object you need to restart the VoiceObjects Server process on your
Web server to activate it. Further information on how to start this process can be found in the
Installation Guide; alternatively, contact your VoiceObjects administrator.

Use an existing Server object


1. In the Object Browser, open the Configuration folder and then the Server folder, and
double-click VOServer. The Server editor will open up.

2. If a service already exists in the Hosted Services section, click the Add button to
add a new empty row to the list.

3. Click the Context Menu button to the right of the Service field. From the context menu,
select Browse and in the Search Result window click the Tutorial Service object to link it
into the Service field.
4. On the toolbar of the Server editor, click Save and Close.

Starting the Tutorial Service


In order to activate the Welcome application you need to start the Tutorial Service in the
Control Center.
The Control Center is the graphical interface component within VoiceObjects Desktop that
enables the deployment and monitoring of servers and services.
For a detailed description of the Control Center see Chapter 3 – Managing Servers and
Services in the Deployment Guide.
To open the Control Center, do the following:
1. In the Object Browser, open the Configuration folder and then the Server folder.
2. Point to the Server object that hosts your service (e.g. VOServer) and right-click it.

3. On the context menu, click Control Center.

The Control Center is loaded into a new tab in the right-hand pane of VoiceObjects Desktop. If
this tab already exists, the view switches to it.

Take some time to familiarize yourself with the Control Center.


The Server Management tab displays detailed information about the server configuration in the
Server Details section, as well as information about the list of hosted services in the Service
Manager section.
To start your Tutorial service, do the following:

In the Server Details section, click the Context Menu button to the right of the server name
and select Reload Service List. The list of services hosted on this server will be reloaded and
the service you have just created (Tutorial) will be added to the list and displayed in the Service
Manager section.

To start your service, click the Context Menu button to the right of the service name
(Tutorial) in the Service Manager section, and from the context menu select Start. After a while,
the light to the left of the service name turns green, indicating that the service has been
started.

Note: The Start command is only available in the context menu if a service is stopped (which is
also the status of a new service). If a service has already been started, you need to select
Redeploy from the context menu to make any changes effective (see also the following
lessons).

Calling the Application


Before you can try out your Welcome application you need to specify the URL that connects the
calls from the underlying media platform to the server.
The initial HTTP request from the media platform to the server has the following format:
http://server:port/VoiceObjects/DialogMapping?VSN=ReferenceID
where

server denotes the network name or IP address of the physical server running
the VoiceObjects Server process.

port denotes the connector port for the VoiceObjects Server process. The
port is set to 8099 by default, but can be modified during installation and
configuration. See the Installation Guide for details.

ReferenceID denotes the unique name that identifies the service when it is initiated
via an HTTP request from the media platform.

To check whether the URL is valid, append &ping=server to it and enter it into your Web browser.
If the URL is correct, you will see the status of the service (e.g. Started).
Example:
http://192.168.1.20:8099/VoiceObjects/DialogMapping?VSN=tutorial&ping=server
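If you prefer to check the service status from a script rather than a Web browser, the following
minimal sketch performs the same ping request. It is not part of VoiceObjects; it assumes Node.js
is available and uses the example host, port, and reference ID shown above.

// Minimal sketch: ping the Tutorial service and print its status.
// Host, port, and reference ID are the example values used above; adjust them to your setup.
var http = require("http");

var url = "http://192.168.1.20:8099/VoiceObjects/DialogMapping?VSN=tutorial&ping=server";

http.get(url, function (response) {
    var body = "";
    response.on("data", function (chunk) { body += chunk; });
    response.on("end", function () {
        // If the URL is correct, this prints the service status, e.g. "Started"
        console.log("Service status: " + body.trim());
    });
}).on("error", function (error) {
    console.error("Could not reach the VoiceObjects Server: " + error.message);
});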
Now you can call the telephone number by which the new service can be reached.

Dialog Flow 1
The dialog flow for the current application just consists of one object so far:

Object – Caller Dialog Flow

  Object:  Welcome to the Prime Life Insurance service!

When you call your application, the output “Welcome to the Prime Life Insurance service!” will be
played back to you over the phone.

What Comes Next


Next, you will learn how to structure the dialog flow by using Module objects.
Lesson 5 – Structuring the Dialog Flow

Lesson 5 – Structuring the Dialog Flow

Objectives
Lesson 5 introduces the structuring of dialog flows and the use of Module objects.
After completing this lesson you will know
• why to structure a dialog flow,
• how to create a Module object,
• what the Dialog Designer is,
• how to redeploy an application.

Structuring Dialog Flows


So far you have created an application that just consists of one single object playing an output
to the caller.
During the course of this Tutorial you will enhance your application and add more objects to it –
though the overall number will still stay quite small. More complex applications actually consist
of a large number of single objects. Have a look at the complete Prime Insurance sample
application provided with the VoiceObjects platform to get an impression on that.
In order to structure complex dialog flows that consist of large numbers of objects, designers
split applications into logical units. Single objects are grouped into sub-dialogs that can be
combined later on to form larger applications.
Structuring dialogs into units not only helps designers keep a clear dialog structure but – due to
the reusability of complete modules – also helps to rapidly develop new applications based on
existing components.
Several object types are provided by VoiceObjects to facilitate the building of logical units. The
most common one is the Module object, explained below; others are the Sequence and Menu
objects (for details, see the descriptions of the respective objects in the Object Reference).
For further information on reusability see Chapter 1 – The Modular Concept in the Design
Guide.

Module Objects
The Module object typically represents the start node in a dialog flow and is the basic
component when building applications.
The Module object belongs to the object category Components.
Module objects are intended to represent both stand-alone dialogs and sub-dialogs. The Prime
Insurance application, for example, includes the sub-applications Health Insurance, Life
Insurance, Car Insurance, Claim Check, and Claim Report, which are developed as separate
Module objects. The Prime Insurance Portal – again a Module object – combines these
applications into a single one.
As you will see later on, the Module object also allows you to specify global settings for an
application, such as event handling and navigation settings, which can be inherited by
subordinate objects in the dialog flow.
For further information on inheritance see also Chapter 1 – The Modular Concept in the Design
Guide.

Creating a Module Object


The main interaction of a root-level Module object is to play a welcome message to the caller, to
process an embedded sequence of objects, and to play a goodbye message.
To create the Module object for the Tutorial application, do the following:

1. From the New menu, select Module or click the Module button on the toolbar. An
empty Module editor will open up.

2. In the Module editor, enter Life Insurance as the name for the new Module object in the
Name field at the top of the editor window.

3. Expand the Welcome Message section by clicking the Maximize button in the right
corner of the toolbar or by clicking the left field of the toolbar.

Tip: In VoiceObjects Desktop, all sections of an object editor that are represented by boxes can
be expanded either by clicking the Maximize button in the right corner of the toolbar or by
clicking the left field of the toolbar. To collapse a box again, click the Minimize button in place of
the previous Maximize button, or click the left field of the toolbar again.
Since we already created a welcome message for the Prime Life Insurance service when
creating the Welcome Output object in Lesson 3, we do not have to type the message into the
text field of the Welcome Message section; instead, we can link the Welcome Output object into it.
To link the Welcome Output object into the text field in the Welcome Message section, do the
following:

1. Click the Context Menu button to the right of the text field.
2. From the context menu, select Browse and then Output.
3. In the Search Result window, click the Output object Welcome to link it into the text field.

Your Welcome Message section looks like this now:

1. In the Goodbye Message section, enter an appropriate goodbye message like Thanks for
calling. Goodbye!.
2. Leave all other values as default.
3. On the Module Editor toolbar, click Save and Close.
For an overview on all parameters that can be specified in the Module editor refer to Module in
the Object Reference.
In the Object Browser, you will now see a Module folder in the Components folder, containing
your Life Insurance Module object.

Displaying Dialogs in the Dialog Designer


While the Object Browser simply lists all available autonomous objects, the Dialog Designer
displays objects within the tree structures of the complete dialog flows in which they are used,
and allows you to modify them within this graphical environment. This helps you keep a clear
idea of your dialog even if it becomes very complex.
The Dialog Designer is the main work area of VoiceObjects Desktop for designing and
developing applications.

For more information on the Dialog Designer see Chapter 5 – Dialog Designer in the Desktop
Guide.
To open the Dialog Designer for the Life Insurance Module object, do the following:
1. In the Object Browser, open the Components folder, and then the Module folder.
2. Select the Life Insurance Module object and right-click it.
3. From the context menu, select Display Dialog. The dialog flow will be displayed in a
Dialog Designer in the right-hand pane of VoiceObjects Desktop.
Since the Life Insurance Module object doesn't contain any other objects, the dialog flow is very
simple. In the course of the Tutorial you will see the display develop into more complex dialog
flows.

Each item displayed in the dialog flow represents an object. The top left object of a dialog flow
represents the object that you selected to be displayed (in this case the Life Insurance Module
object).
If you double-click an object in the Dialog Designer of VoiceObjects Desktop, the corresponding
object editor will open up.
A right-click on an object opens a context menu with a number of commands that are available
for that object.
Objects preceded by a plus box [+] include subtrees of one or more objects. Clicking the plus
box [+] will expand the corresponding object and display its subtree. Clicking a minus box [-] in
front of an object will collapse it.
Object names written in blue correspond to autonomous objects. Autonomous objects are
visible and accessible within the Object Browser and can be added to other dialogs.
Object names written in black correspond to embedded objects. Embedded objects are
automatically defined inside another object. They are only accessible within their parent object
and are not visible from within the Object Browser.
For detailed information on all display features of the Dialog Designer see Chapter 5 – Dialog
Designer in the Desktop Guide.
Later on you will see how you can manipulate objects within the Dialog Designer to create
applications quickly and easily.

Changing the Start Object


In order to test whether your application still works properly, you first need to change the start
object of your service to the new Life Insurance Module object.
1. In the Object Browser, open the Service folder in the Configuration folder. Select your
Tutorial Service object and double-click it. The corresponding Service editor opens up.
2. In the Application Parameters section, select the Life Insurance Module object as the start
object by clicking the Context Menu button to the right of the Start object field. From
the context menu, select Browse and then Module. In the Search Result window, click the
Life Insurance Module object and it will be linked into the Start object field.

3. On the toolbar of the Service editor, click Save and Close.

Redeploying an Application
In order to test whether the structurally modified application is still working properly, you need to
redeploy it.
1. In the Object Browser, right-click the Server object that hosts your service (VOServer).
2. From the context menu, select Control Center. The Control Center opens up.
3. In the Service Manager section, select the Tutorial service.

4. Click the Context Menu button to the right of the Tutorial service and click Redeploy.

You can now call your service again with the telephone number by which it can be reached.

Dialog Flow 2
The dialog flow of your application has changed slightly, now consisting of two prompts:

Object – Caller Dialog Flow

  Object:  Welcome to the Prime Life Insurance service!
           (embedded object sequence – currently empty)
  Object:  Thanks for calling. Goodbye!

Again, the output “Welcome to the Prime Life Insurance service!” will be played back to you
over the phone, this time activated by a Module object, followed by a goodbye message.

What Comes Next


Next, you will add an Input object to the Life Insurance application, requesting input from the
caller.
Lesson 6 – Interacting with the Caller

Lesson 6 – Interacting with the Caller

Objectives
Lesson 6 explains how to create an Input object to request input from the caller and store the
result in a variable.
After completing this lesson you will know
• how to create an Input object,
• how to specify a TTG grammar,
• how to handle the result through a Variable object.

Interacting with the Caller


Phone applications typically interact with the caller, requesting information such as a name or a
phone number, or presenting a choice and asking the caller to select an item.
Several object types are provided by VoiceObjects to create interactive dialogs. The most
common one is the Input object, explained below; others are the Confirmation, List, Menu, and
Recording objects.
For details see Chapter 3 – How to Interact with the Caller in the Design Guide and the
descriptions of the respective objects in the Object Reference.

The Input Object


The Input object is one of the most frequently used objects. It is used to request one or more
pieces of information from the caller.
The Input object plays an Output object to ask for the data, and provides a grammar defining
the admissible utterances (which can, in case of the voice channel, either be DTMF or voice
input). The recognized values are returned in slots and assigned to variables.
The Input object belongs to the object category Components.
For detailed information on the Input object, including how to deal with channels other than
voice, refer to Input in the Object Reference.

Creating an Input Object


The main interaction of an Input object is to play an output to the caller that requests information
and to assign the recognized input from the caller to a variable. In the case of the Life Insurance
application you will use a first Input object to ask the caller for his or her gender and to store the
recognized answer.
To create this Input object do the following:

1. From the New menu, select Input or click the Input button on the toolbar. An empty
Input editor will open up.

2. In the Input editor, enter Ask for gender as the name for the new Input object in the Name
field at the top of the editor window.

Specify the input request


1. In the Input Request section, enter a request like To help me calculate what your monthly
premium would be for life insurance, please say if you are male or female. in the text area
of the embedded output.
2. Leave all other values as default.
Your Input Request section will now look like this:

Specify the recognition grammar


The Grammar section defines the grammar specifying the utterances that can be recognized as
input.
In case of the voice channel, the grammar can define a set of phrases that a caller might say or
DTMF keys a caller might press on the phone’s keypad during a dialog in response to a
particular request prompt.
For a detailed description of using grammars see the Grammar object in the Object Reference.
To specify the grammar for the Ask for gender Input object, do the following:

1. Expand the Grammar section by clicking the Maximize button in the right corner of the
toolbar or by clicking the left field of the toolbar.
On the Voice/Text tab in the Grammar section, you can, when using the voice channel,
define a grammar for voice input (as opposed to DTMF input).
By default, the TTG check box is selected, indicating that text-to-grammar is used. This enables
you to define grammars by means of comma-separated lists of utterances without having to
deal with grammar formats specific to the underlying media platform.
When building voice applications in a rapid prototyping approach, it is often desirable to start the
definition of a grammar by just providing a couple of typical utterances a caller might say.
In the case of the Life Insurance application, a first TTG grammar will be used containing the
two possible genders.
2. Enter female and male, separated by a comma, into the TTG field in the Grammar section.
3. Leave all other values as default.
Your Grammar section will look like this now:

Specify the result handling through a Variable object


When using TTG, the result (the recognized input from the caller) is returned in a slot called
sltTTG. This slot name must be used in the Result Handling section of the Input editor to
assign it to a Variable object.
1. In the Result Handling section, enter sltTTG in the Slot name field.
2. To assign the result to a variable you need to create a Variable object. To do so, click the
Context Menu button to the right of the Variable field. From the context menu, select
New and then Variable. An empty Variable editor will open up.
3. In the Variable editor, enter Caller gender as the name for the new Variable object in the
Name field at the top of the editor window.

4. Leave all other values as default and on the toolbar of the editor click Save and Close.
The Caller gender Variable object will be linked into the Ask for gender Input object.

Your Result Handling section looks like this now:

Finally, click Save and Close on the Input Editor toolbar.


For an overview on all parameters that can be specified within the Input editor refer to Input in
the Object Reference.
In the Object Browser, you will now see an Input folder in the Components folder, containing
your Ask for gender Input object.
Note that the Caller gender Variable object that you created in the previous step to store the
caller input has also been saved as an autonomous object and can be found in the respective
folder in the Object Browser (the Variable folder in the Logic folder).
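To make the relation between the TTG list, the sltTTG slot, and the Caller gender variable more
concrete, here is a purely illustrative JavaScript sketch. This is not VoiceObjects code; the actual
recognition is performed by the media platform.

// Illustrative only: the TTG list "female, male" defines the admissible utterances.
var ttgUtterances = "female, male".split(",").map(function (s) { return s.trim(); });

// The recognized utterance is returned in the slot sltTTG ...
function recognize(utterance) {
    var value = utterance.toLowerCase();
    return ttgUtterances.indexOf(value) >= 0 ? value : null;
}

// ... and assigned to the Caller gender variable via the Result Handling section.
var callerGender = recognize("Female");
console.log(callerGender);       // "female"
console.log(recognize("maybe")); // null - input outside the grammar is not recognized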

Adding the Input Object to the Application


To add the Ask for gender Input object to the Module object of the Tutorial application do the
following:
1. In the Object Browser, double-click the Life Insurance Module object to open it. The
corresponding Module editor will open up.

2. In the Object Sequence section, click the Context Menu button to the right of the
Object field. From the context menu, select Browse and then Input. In the Search Result
window, click the Ask for gender Input object to link it into the Object field.
3. On the Module Editor toolbar, click Save and Close.
Look at the Life Insurance Module object in the Dialog Designer.
To display the Dialog Designer for the Life Insurance Module object, right-click it in the Object
Browser and from the context menu, select Display Dialog.
As you can see the Ask for gender Input object has been added to the embedded sequence of
the Life Insurance Module object.

Dialog Flow 3
After redeploying your service in the Control Center (see Redeploying an Application in Lesson
5), call your application again and test it.
The following dialog flow shows a possible dialog for the current application:

Object – Caller Dialog Flow

  Object:  Welcome to the Prime Life Insurance service!
  Object:  To help me calculate what your monthly premium would be for life insurance,
           please say if you are male or female.
  Caller:  Female.
  Object:  Thanks for calling. Goodbye!

Creating a Second Input Object


In the Life Insurance application a second Input object is used to ask the caller for his or her age
and to store the recognized answer. Since, to simplify matters, we will not use any external
grammars during the Tutorial, we will use a so-called built-in grammar for the recognition of the
age (see below). Built-in grammars are a special type of grammar provided by the media
platforms for standard inputs such as yes/no, a date, a phone number etc. For further
information on built-in grammars see Grammar in the Object Reference.
To create the Input object asking for the caller’s age, do the following:

1. From the New menu, select Input or click the Input button on the toolbar to open up an
empty Input editor.
2. In the Input editor, enter Ask for age as the name for the new Input object in the Name field
at the top of the editor window.
3. In the Input Request section, enter a request like Now please say your age. in the text
area.
4. Expand the Grammar section and, in the Type field, select Built-in from the drop-down list.
In the TTG field, enter number to refer to your media platform's built-in grammar for
numbers.
Your Grammar section will look like this now:

5. In the Result Handling section, click the Context Menu button to the right of the
Variable field to create and link a new Variable object.

6. In the Variable editor, enter Caller age in the Name field at the top of the editor window.
Leave all other values as default and on the toolbar of the editor click Save and Close. The
Caller age Variable object will be linked into the Variable field in the Result Handling
section of the Ask for age Input object.
In the case of built-in grammars, the Slot field is left empty.
7. Leave all other values in the Input editor as default.
8. Finally, click Save and Close on the Input Editor toolbar.
Again, you can find the Ask for age Input object in the Input folder of the Object Browser and
the Caller age Variable object in the Variable folder.
Add the Ask for age Input object to the Life Insurance Module object as you have done with the
Ask for gender Input object previously:
1. In the Object Browser, double-click the Life Insurance Module object to open the
corresponding Module editor.
2. In the Object Sequence section, add a new row to the object list by clicking the Add
button on the toolbar.

3. Click the Context Menu button to the right of the new Object field and link the Ask for
age Input object by browsing.
4. On the Module Editor toolbar, click Save and Close.
Again, display the Dialog Designer for the Life Insurance Module object via its context menu in
the Object Browser. You will find the Ask for age Input object added to the embedded sequence
of the Life Insurance Module object, below the Ask for gender Input object.

Dialog Flow 4
After redeploying your service in the Control Center (see Redeploying an Application in Lesson
5), call your application again and test it.
The following dialog flow shows a possible dialog for the current application:

Object – Caller Dialog Flow

  Object:  Welcome to the Prime Life Insurance service!
  Object:  To help me calculate what your monthly premium would be for life insurance,
           please say if you are male or female.
  Caller:  Female.
  Object:  Now please say your age.
  Caller:  28.
  Object:  Thanks for calling. Goodbye!

Checking the Results by an Output Object


Next, check whether your inputs have actually been recognized properly. This is done by adding
an Output object to the Life Insurance Module object that plays back the content of the Caller
gender Variable object and the Caller age Variable object.

Different methods to create autonomous objects


Up to now, you have created new objects by selecting an object type from the New menu,
defining and saving the object, and then adding it to the Life Insurance Module object in a
separate step.
A quicker and more convenient way to add objects to the dialog flow is to create an object
directly within the editor of the object you want to add it to (as you already did with the
Caller gender Variable object).

In this case, click the Context Menu button to the right of the field into which you want to
insert the new object and select New and then the object type you want to add. After you
create and save it, the new object is automatically linked into the object you started from.
In both cases autonomous objects will be created that will be displayed in the Object Browser.

Create the Output object repeating the results


This time, create the new object from inside the Life Insurance Module object.
1. Open the Life Insurance Module object.

2. In the Object Sequence section, add a second row by clicking the Add button on its
toolbar.

3. Click the Context Menu button to the right of the empty Object field. From the context
menu, select New and then Output. An empty Output editor will open up.
4. In the Output editor, enter Repeat caller gender and age as the name for the new Output
object in the Name field at the top of the editor window.
5. In the Output section, enter Ok, I have recognized your gender as in the text field.
6. At the end of this text, link the Caller gender Variable object by browsing for it via the
context menu.
7. After the variable, enter and your age as, followed by the Caller age Variable object.
8. Leave all other values as default and on the toolbar of the editor click Save and Close.
The Repeat caller gender and age Output object will be linked into the object sequence of the
Life Insurance Module object.
The Dialog Designer for the Life Insurance Module object now looks like this:

Dialog Flow 5
After redeploying your service in the Control Center again (see Redeploying an Application in
Lesson 5), test how your application has changed.
The following dialog flow shows a possible dialog for the current application:

Object – Caller Dialog Flow

  Object:  Welcome to the Prime Life Insurance service!
  Object:  To help me calculate what your monthly premium would be for life insurance,
           please say if you are male or female.
  Caller:  Female.
  Object:  Now please say your age.
  Caller:  Twenty-eight.
  Object:  Ok, I have recognized your gender as female and your age as twenty-eight.
  Object:  Thanks for calling. Goodbye!

After you provide a gender and an age, the most recently created Output object plays the results
back to you. This way, you can check whether your input has been recognized and stored correctly.

Note: Sometimes it is desirable to test the application functionality without having access to a
media platform. This can be accomplished by using the Debug mode. A Debug mode session
can be started directly from within the Control Center via the entry Debug Viewer in a service's
context menu. This entry is only available for started services. For detailed information on the
Debug mode refer to Chapter 4 – Service Deployment in the Deployment Guide.

What Comes Next


Next, you will learn how to handle exceptions, i.e. cases in which no input is recognized from the
caller or unexpected input is received.
Lesson 7 – Handling Events

Lesson 7 – Handling Events

Objectives
Lesson 7 introduces basic event handling strategies and explains how to add global and
context-sensitive event handling to your dialog.
After completing this lesson you will know
• why event handling items should be defined,
• how to define global event handling,
• how to define context-sensitive event handling.

Event Handling Strategies


Event handling specifies how to proceed within a dialog when exceptions occur. Various kinds of
exceptions may occur during a call. Standard events are, for example:
• No Input – the caller has been asked for input and none is detected within the timeout interval,
• No Match – some input is detected, but it does not match the underlying grammar.
Event handling can be specified on different levels of your application. At the Module object
level, global event handling can be specified. At the subordinate object level, object-specific
behavior for events can be set.
In the case of the Tutorial application you will next add event handling definitions for No Input
and No Match.
The event handling for the first occurrence of both events will be defined globally in the Module
object with generic responses.
The event handling for a second occurrence of a No Input or No Match event will be defined
locally within an Input object, providing further, context-sensitive information to the caller.
For further information on advanced event handling see Chapter 8 – Advanced Event Handling
in the Design Guide.

Defining a Global No Input Event


To specify a global No Input event for the Life Insurance application, do the following:
1. In the Dialog Designer or in the Object Browser double-click the Life Insurance Module
object. The corresponding Module editor will open up.
2. Click the Event Handling tab to bring it to the front.

3. Click the Context Menu button to the right of the Object field and select New and then
Output. An empty Output editor will open up.
4. In the Output editor, enter a name for the Output object (Global No Input) in the Name
field.
5. In the text area, enter the following output:
Sorry?

6. On the toolbar of the Output editor, click Save and Close.


7. Select ASR No Input as event type from the drop-down menu in the Event Type field.
8. Select If >= 1 as occurrence level from the drop-down menu in the Occurrence field. This
means that, unless overridden by a higher occurrence level, this output message will be
played on the first and all subsequent occurrences of no input.

9. On the toolbar of the Module editor, click the Save button to save your current object
without leaving its object editor.

For further details on event handling refer to the Object Reference.

Defining a Global No Match Event


To specify a global No Match event for the Life Insurance application, do the following:

1. Add a second event handling item by clicking the Add button on the Event Handling
toolbar.

2. Click the Context Menu button to the right of the Object field and select New and then
Output. An empty Output editor will open up.
3. In the Output editor, enter a name for the Output object (Global No Match) in the Name
field.
4. In the text area, enter the following output:
Say again, please?

On the toolbar of the Output editor, click Save and Close.


5. Select ASR No Match as event type from the drop-down menu in the Event Type field.
6. Select If >= 1 as occurrence level from the drop-down menu in the Occurrence field. This
means that, unless overridden by a higher occurrence level, this output message will be
played on the first and all subsequent occurrences of input not matching the grammar.

Finally, click Save and Close on the toolbar of the Module editor.

Defining Context-Sensitive Event Handling


Next, you will define event handling definitions for the case that a No Input or No Match event
occurs a second time. In such a case, the caller might need some further instruction on what to
do.
Context-sensitive event handling for the Prime Life Insurance service will be specified locally
within an Input object requesting information from the caller.
To do so, double-click the Ask for age Input object in the Dialog Designer or in the Object
Browser. The corresponding Input editor will open up.
Click the Event Handling tab to bring it to the front.
Specify an event handling item for a No Input event with the following parameters:

Field          Parameter
Object         Create a new Output object called Ask for age NI with the output
               Sorry, I still couldn't hear you. Please say your age, for example thirty-five.
Event type     ASR No Input
Occurrence     If >=2

Specify an event handling item for a No Match event with the following parameters:

Field          Parameter
Object         Create a new Output object called Ask for age NM with the output
               Sorry, I couldn't understand you. Just say your age, for example thirty-five.
Event type     ASR No Match
Occurrence     If >=2

The Event Handling section of the Ask for age Input object looks like this now:

Finally, click Save and Close on the toolbar of the Input editor.
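Conceptually, the combination of the global handlers (If >= 1) and the local handlers (If >= 2)
selects prompts as in the following JavaScript sketch. It is not VoiceObjects code; it only mirrors
the behavior described above for the Ask for age Input object.

// occurrence = how many times the same event has occurred for this input during the dialog.
function noInputPrompt(occurrence) {
    if (occurrence >= 2) {
        // Local, context-sensitive handler defined on the Ask for age Input object (If >= 2)
        return "Sorry, I still couldn't hear you. Please say your age, for example thirty-five.";
    }
    // Global handler inherited from the Life Insurance Module object (If >= 1)
    return "Sorry?";
}

function noMatchPrompt(occurrence) {
    if (occurrence >= 2) {
        return "Sorry, I couldn't understand you. Just say your age, for example thirty-five.";
    }
    return "Say again, please?";
}

console.log(noInputPrompt(1)); // "Sorry?"
console.log(noInputPrompt(2)); // context-sensitive prompt
console.log(noMatchPrompt(1)); // "Say again, please?"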

Dialog Flow 6
After redeploying your service in the Control Center again, call it and test how your application
behaves if you say nothing or say something not recognized by the grammar.
The following dialog flow shows an example dialog for the Prime Life Insurance service you
have created so far – this time including event handling:

Object – Caller Dialog Flow

  Object:  Welcome to the Prime Life Insurance service!
  Object:  To help me calculate what your monthly premium would be for life insurance,
           please say if you are male or female.
  Caller:  Female.
  Object:  Now please say your age.
  Caller:  (No Input 1)
  Object:  Sorry?
  Caller:  (No Input 2)
  Object:  Sorry, I still couldn't hear you. Please say your age, for example thirty-five.
  Caller:  Twenty-eight years. (No Match 1)
  Object:  Say again, please?
  Caller:  Twenty-eight years. (No Match 2)
  Object:  Sorry, I couldn't understand you. Just say your age, for example thirty-five.
  Caller:  Twenty-eight.
  Object:  Ok, I have recognized your gender as female and your age as twenty-eight.
  Object:  Thanks for calling. Goodbye!

What Comes Next


Next, you will add a Confirmation object to the Life Insurance application, confirming the
answers of the caller and, if necessary, correcting them.
Lesson 8 – Confirming Input

Lesson 8 – Confirming Input

Objectives
Lesson 8 introduces the Confirmation object, which is used to confirm input collected from the
caller and, if required, to correct it.
After completing this lesson you will know
• how to configure a Confirmation object with a single correction item,
• how to manipulate dialog flows in the Dialog Designer.

Confirmation
So far, the Prime Life Insurance service just repeats back the recognized caller input through a
simple Output object. In this lesson you will learn how to specify a Confirmation object in order
to allow correction.
A Confirmation object is used to confirm and, if necessary, correct the information collected in
the preceding dialog. It belongs to the object category Components.
The Confirmation object is one of the more advanced objects. It can be used to confirm
individual caller inputs immediately after each is collected, or to confirm a summarized block of
items at the end of an input series. If the summary is rejected, the caller can specify which item
needs to be corrected. Thus, the Confirmation object can be configured to confirm several caller
inputs in one dialog step.
In the Life Insurance application, a Confirmation object is used to confirm two inputs in a row.
The caller is asked to confirm the recognized gender and age. If the caller denies, he or she is
asked again for the gender, age, or both, depending on what has been rejected.

Creating a Confirmation Object


To create the Confirmation object for the Life Insurance application, do the following:

1. From the New menu, select Confirmation or click the Confirmation button on the
toolbar. An empty Confirmation editor will open up.

2. In the Confirmation editor, enter Confirm gender and age as the name for the new
Confirmation object.

Tip: When configuring a more advanced object like the Confirmation object, it is recommended to
save your work from time to time by clicking the Save button on the toolbar of the editor.

Specify the confirmation request


Expand the Confirmation Request section and link the Repeat caller gender and age Output
object – created in Lesson 6 – into the text field by browsing for it via the Context Menu
button. This Output object repeats the collected input using two variables: Ok, I have recognized
your gender as <Caller gender Variable object> and your age as <Caller age Variable object>.
Add Is that correct? after the Output object.
Your Confirmation Request section looks like this now:

Specify the confirmation grammar


Next, specify two separate recognition grammars for confirming and denying:
1. Expand the Confirmation Grammar section.
2. On the Confirm tab, enter some affirming utterances, such as yes, okay, or correct, in the
TTG field.

3. Switch to the Deny tab and enter some rejecting utterances, such as no or not correct, in the
TTG field.

Specify the correction request


Expand the Correction Request section and enter the initial prompt of the correction request
(e.g. Please tell me what was wrong. Gender, age, or both?) in the text field.
If the caller denies the confirmation request, this prompt is played before the object
that performs the correction is processed.

Specify the correction mapping


The Correction Mapping section consists of one or more correction items. Each item contains
an object reference as a destination, and an embedded Grammar object definition.
The grammar defines the utterances that identify this particular item as the one that needs to be
corrected. These utterances are in reply to the correction request.
The object reference specified in the Object field of a correction item points to the object that
processes the correction, typically an Input or a Sequence object.
1. Expand the Correction Mapping section.
2. Define a first correction item, to be processed when the caller answers “gender”, by linking
the Ask for gender Input object as a destination object into the Object field in the
Correction Item section.
3. In the Grammar section of the correction item, enter gender into the TTG field.

4. Add a second row by clicking the Add button in the toolbar of the Correction Mapping
section.
5. Define a second correction item, to be processed when the caller answers “age”, by linking
the Ask for age Input object into the Object field of the second correction item and entering
age into the TTG field of its Grammar section.
6. Add a third row to define a third correction item, to be processed when the caller answers
“both”.
7. From the context menu to the right of the Object field, select New and then Sequence to create
a new Sequence object that will ask for gender and age sequentially.
8. In the Sequence editor, enter Ask for gender and age as the name for the new Sequence
object.
9. In the Object Sequence section, link the Ask for gender Input object into the Object field.
Add a new row by using the Add button on the toolbar and link the Ask for age Input object
into the second Object field. The Sequence editor looks like this now:

10. Click Save and Close on the toolbar of the Sequence editor to link the Sequence object as
a destination object into the third correction item.
Finally, click Save and Close on the toolbar of the Confirmation editor.

Modify the input request


Since the input request of the Ask for gender Input object is quite long and best serves as an
initial prompt (To help me calculate what your monthly premium would be for life insurance,
please say if you are male or female.), you should add a second, shorter output for any
repetitions of that request (e.g. when the caller corrects the gender after the confirmation).
1. Open the Ask for gender Input object and expand the Input Request section.
2. In the Occurrence field of the first output, select Only once.
3. Add a second row by clicking the Add button in the toolbar of the Input Request section.
4. Enter a request like Please say if you are male or female. in the text area of the second
output.
5. In the Occurrence field of the second output, select If >=2.
6. Leave all other values as default.
Your Input Request section looks like this now:

Click Save and Close on the toolbar of the Input editor.

Manipulating Dialog Flows in the Dialog Designer


You can add the Confirm gender and age Confirmation object to the Life Insurance Module
object either in the corresponding Module editor (by adding it to the object sequence, see
Lesson 6 – Interacting with the Caller) or directly in the Dialog Designer.
To add it to the dialog flow in the Dialog Designer, open the Dialog Designer for the Life
Insurance Module object (if not already open).
Since the Confirm gender and age Confirmation object already includes the Repeat caller gender
and age Output object in its confirmation request, the Output object is no longer needed in the
dialog flow and can be replaced by the Confirmation object.
1. In the Dialog Designer, select the Repeat caller gender and age Output object.
2. Right-click it and from the context menu select Replace, then Browse, and then
Confirmation.
3. In the Search Result window, click the Confirm gender and age Confirmation object.
The Confirm gender and age Confirmation object will be inserted into the dialog flow in place of
the Repeat caller gender and age Output object.

For a detailed description of all commands for manipulating objects in dialog flows provided on
the context menus of the Object Browser and the Dialog Designer refer to Chapter 4 – Object
Browser, Chapter 5 – Dialog Designer, and Chapter 7 – Basic Commands in the Desktop
Guide.

Dialog Flow 7
You have now added a confirmation to the Life Insurance application, which will do the
following:
The caller is asked to confirm their gender and age, which have each been stored in a variable.
When the caller confirms, the dialog proceeds with the next object in the dialog flow. When the
caller rejects the recognized input, the Ask for gender Input object, the Ask for age Input object,
or both are processed again, depending on what the caller rejected. Afterwards, the server
returns to the Confirmation object, starting over with the confirmation request, until the caller
finally confirms.
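The following JavaScript sketch is purely conceptual (not VoiceObjects code) and only illustrates
this confirm-and-correct loop; the ask* stubs and the scripted caller answers stand in for the real
Input objects and speech recognition.

// Stubs standing in for the Ask for gender and Ask for age Input objects.
function askForGender() { return "female"; }
function askForAge()    { return "twenty-eight"; }

function confirmGenderAndAge(gender, age, callerAnswers) {
    // callerAnswers: scripted confirm/deny and correction choices, e.g. ["no", "age", "yes"]
    while (true) {
        console.log("Ok, I have recognized your gender as " + gender +
                    " and your age as " + age + ". Is that correct?");
        if (callerAnswers.shift() === "yes") {
            return { gender: gender, age: age };  // caller confirmed, dialog proceeds
        }
        console.log("Please tell me what was wrong. Gender, age, or both?");
        var choice = callerAnswers.shift();
        if (choice === "gender" || choice === "both") { gender = askForGender(); }
        if (choice === "age"    || choice === "both") { age = askForAge(); }
    }
}

// One correction round, as in the example dialog below:
console.log(confirmGenderAndAge("female", "thirty-eight", ["no", "age", "yes"]));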
After redeploying your service in the Control Center, call it and test the confirmation behavior.

The following dialog flow shows an example dialog for the Prime Life Insurance service you
have created so far:

Object – Caller Dialog Flow

  Object:  Welcome to the Prime Life Insurance service!
  Object:  To help me calculate what your monthly premium would be for life insurance,
           please say if you are male or female.
  Caller:  Female.
  Object:  Now please say your age.
  Caller:  Twenty-eight.
  Object:  Ok, I have recognized your gender as female and your age as thirty-eight.
           Is that correct?
  Caller:  No.
  Object:  Please tell me what was wrong. Gender, age, or both?
  Caller:  Age.
  Object:  Now please say your age.
  Caller:  Twenty-eight.
  Object:  Ok, I have recognized your gender as female and your age as twenty-eight.
           Is that correct?
  Caller:  Yes.
  Object:  Thanks for calling. Goodbye!

What Comes Next


Next, you will learn about layers and how to add a custom layer to your application that
changes the dialog behavior depending on the time of the call.
Lesson 9 – Using Layers

Lesson 9 – Using Layers

Objectives
Lesson 9 introduces the concept of layers, a basic feature of VoiceObjects, and explains how to
create a simple custom layer that changes the dialog behavior depending on the hour of day at
which the call comes in.
After completing this lesson you will know
• what is meant by layers,
• how to create a Layer object with different states,
• how to integrate a layer into an application.

The Concept of Layers


Layers are an integral part of the VoiceObjects platform. They allow designers to easily
separate the core dialog logic from the way an application is presented. Layers can influence
aspects such as language, channel and persona, but they can also change the application
behavior itself. For instance, a menu can have additional choices for gold customers depending
on specific profile data available to the application, or certain application features can be
disabled outside normal business hours.
VoiceObjects provides standard layers, the so-called system layers, such as language, input
mode, or channel, which are frequently required in applications and need not be implemented
manually. They are present in many objects and can readily be used within the corresponding
object editors. All other layers, which are application-specific (such as persona, caller
preferences, or service level), are each defined within a single Layer object.
For a detailed description of using layers see Chapter 7 - How to Use Layers in the Design
Guide.
For a detailed description of the Layer object see Chapter 5 – Layers in the Object Reference.

Using a Layer
In the case of the Life Insurance application, a time-dependent custom layer will be
implemented to simply welcome the caller according to the time of day (e.g. good morning,
good evening). A more sophisticated application would probably use such a layer to also modify
the application behavior according to the time of day, e.g. to transfer the caller to an agent
during business hours only.
The layer is implemented through a Layer object. In the Layer object, you define a custom layer
for an application by listing all states the layer can be in.
In the case of the Life Insurance application, there will be three possible states: one for the
morning, one for the afternoon, and one for the evening.

Creating a Layer Object


To create the Layer object for the Life Insurance application, do the following:
1. From the New menu, select Layer. An empty Layer editor will open up.

2. In the Layer editor, enter Time of day as the name for the new Layer object.

Specify the state indicator


In the State Indicator section, you specify the indicator that determines which state of the layer
is used. This is typically done with a Variable or Expression object.
In the case of the Life Insurance application, the layer state depends on the hour of day when
the call comes in. This is retrieved through an Expression object using the function NOW, which
returns the current time.
1. In the State Indicator section, enter an Expression object into the Object field by selecting
New and then Expression from the context menu. An empty Expression editor opens up.
2. Enter Hours of day into the Name field of the Expression editor.
3. From the drop-down list in the Function field, select NOW (belonging to the Date and Time
functions) and in the Arg. (1) field enter HH to specify the format as hours. For detailed
information on expression functions see Expression in the Object Reference.

4. On the toolbar, click Save and Close. The Hours of day Expression object will be linked
into the Object field of the State Indicator section.

For detailed information on Expression objects see Expression in the Object Reference.
Next, specify the states.

Specify the layer states


The States section lists all states the layer can be in. Specify one state for the morning, one for
the afternoon and one for the evening.
1. In the State ID field of the first State section, enter morning.
The state ID is obligatory and identifies a state within a layer. It must be unique across the
project, i.e. there cannot be two states – regardless of which Layer object they belong to – with
the same state ID in your project. State IDs must not contain blanks or other special characters
except hyphens or underscores.
The label is used to display layer states outside the Layer editor. In all editors that allow you to
associate layer conditions, the label is shown instead of the state ID. If you do not specify a
label, the state ID is copied into this field when the Layer object is saved.
When defining a layer by means of a state indicator, you enter a value or a comma-separated
list of values into the Indicator values field; the content of the state indicator object is compared
against these values. If the content of this object equals one of the indicator values, then this
state is an active state of the layer.
2. Enter 01,02,03,04,05,06,07,08,09,10,11 into the Indicator values field as an indicator for
the layer state morning (activated for all calls between 01:00h and 11:59h).

3. Specify a second state with afternoon as State ID and 12,13,14,15,16,17 as indicator value
(for all calls between 12:00h and 17:59h).
4. Specify a third state with evening as State ID and 18,19,20,21,22,23,24 as indicator value
(for all calls between 18:00h and 00:59h).

Your States section will look like this now (all State boxes minimized):

5. Finally, click Save and Close on the toolbar of the Layer editor.
In the Object Browser you will now find a Layers folder, containing your Time of day Layer
object with three states.
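For orientation, the three states defined above implement the following hour-to-state mapping.
This JavaScript sketch is illustrative only; inside VoiceObjects the comparison against the
indicator values is done by the Layer object itself.

// Maps an hour value (as returned by the NOW expression with format HH) to a layer state.
function timeOfDayState(hour) {
    if (hour >= 1 && hour <= 11)  { return "morning"; }   // indicator values 01-11
    if (hour >= 12 && hour <= 17) { return "afternoon"; } // indicator values 12-17
    return "evening";                                     // indicator values 18-24
}

console.log(timeOfDayState(9));  // "morning"
console.log(timeOfDayState(14)); // "afternoon"
console.log(timeOfDayState(22)); // "evening"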

Adding the Layer to the Application


You can now use the Time of day Layer object to change the behavior of your dialog depending
on the time the call comes in. During this Tutorial, we will just adapt the welcome message to
the time of day, so we need to modify the Welcome Output object, which is linked as the
welcome message into the Life Insurance Module object.

1. Open the Welcome Output object and click the Context Menu button to the right of the
Layer field of the Output item. Select the Time of day Layer object listed at the end of the
context menu and from the submenu select morning as the layer state for this Output item.

2. In the text field, modify the welcome message to Good morning and welcome to the Prime
Life Insurance service!.

3. Add a second Output item to the Output section by clicking the Add button . Again, link
the Time of day Layer object into the Layer field, this time selecting the layer state
afternoon as its value.
4. Enter Good afternoon and welcome to the Prime Life Insurance service! into the text field of
the second Output item.
5. Add a third Output item to the Output section and link the Time of day Layer object into the
Layer field, this time selecting the layer state evening as its value.
6. Enter Good evening and welcome to the Prime Life Insurance service! into the text field of
the third Output item.
The Welcome Output object looks like this now (one Output item minimized for display
purposes):

Finally, click Save and Close on the toolbar of the Output editor.
You have now specified three different welcome messages, which will be selected during the
call depending on the time of day.

Dialog Flow 8
You have now enhanced the Life Insurance application, providing slightly different dialog
behavior for the morning, afternoon, and evening by creating and adding a custom layer.
Depending on the hour of day when the call comes in, the caller will hear a different welcome
message.
After redeploying your service in the Control Center, call it and test this behavior. You will most
probably hear either the welcome message for the morning state or for the afternoon state,
depending on when you are calling.
If you want to test the other states, use a Variable object containing a fixed value as the state
indicator instead of the expression function NOW.
The following call flow shows an example dialog for the Prime Life Insurance service for calls
between 12:00h and 17:59h – i.e. for callers calling in the afternoon:

Object – Caller Dialog Flow

  Object:  Good afternoon and welcome to the Prime Life Insurance service!
  Object:  To help me calculate what your monthly premium would be for life insurance,
           please say if you are male or female.

What Comes Next


Next, we will add a Script object to the Life Insurance application in order to calculate the life
insurance fee.
Lesson 10 – Handling Dynamic Data

Lesson 10 – Handling Dynamic Data

Objectives
Lesson 10 briefly explains how to handle dynamic data within VoiceObjects and how to use the
Script object in order to return a value on the basis of input collected from the caller.
After completing this lesson you will know
• which objects to use to handle dynamic data
• how to specify embedded script code in a Script object
• how to pass parameters into the Script object and return a value.

About Dynamic Data


Typically, applications need to manipulate data, either coming from the caller or from a back-
end system. VoiceObjects provides several objects to store different types of data and to
perform operations on these data.
The most common object to be used to handle information during a call is the Variable object
that has already been introduced during the course of this Tutorial. The Variable object is a
single non-typed value, which may contain names, numbers, Boolean values, etc. It can be
used within almost any other object to control certain object properties or processing conditions
dynamically at call time. For detailed information on the Variable object see Variable in the
Object Reference.
To store a table of data, the Collection object can be used (see Collection in the Object
Reference).
Data can be manipulated either within VoiceObjects or externally, e.g. by connecting to back-
end systems such as databases or CRM (Customer Relationship Management) systems.
Internal processing is done via Expression or Script objects, while external processing uses
Connector objects.
Expression objects provide functions to access variable or collection values, and to
manipulate, set or transform them. Examples are standard numerical operations, or string
operations to get the length of a string or to replace substrings. For a complete functional
description refer to the Expression object in the Object Reference.
More complex operations can be realized using the Script object, as will be done in the case of
the Life Insurance application (see Using the Script Object below).
The Connector object provides the framework for back-end connectivity. It enables the
execution of external programs, which can either be Java code or arbitrary server-side scripts
executed by means of HTTP requests. For detailed information refer to the Connector object in
the Object Reference.
For further information on handling data refer to Chapter 4 – How to Handle Data in the Design
Guide.
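For readers who think in code, the following JavaScript lines sketch the kind of operations that Expression objects provide. In VoiceObjects itself these operations are configured in the Expression editor rather than written as code, so the snippet is purely a conceptual equivalent with invented example values.

// Conceptual equivalents of typical Expression object functions.
var callerName = "Jane Miller";

var nameLength = callerName.length;                  // string length -> 11
var shortName  = callerName.replace("Miller", "M."); // substring replacement -> "Jane M."
var premium    = 27 + (44 - 30);                     // standard numerical operation -> 41

premium;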

Using the Script Object


The Script object allows the specification of server-side scripting language code to control any
kind of business logic within the dialog flow. The script code can either be embedded within the
object definition itself, or it can be referenced as an external file source.
Any number of parameters from the dialog context can be passed into the Script object. These
parameters can be a mix of constants, Variable, Expression, Collection, Layer, or other Script
objects.


A Script object can provide a return value, and thus act as a replacement for an Expression
object. The return value of the Script object is defined by the value of the last statement that is
evaluated within the JavaScript code.
The Script object belongs to the category Resources.
For further details on the usage and functionality of the Script object refer to Script in the Object
Reference.
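As a minimal illustration of the last-statement convention described above, consider the following embedded snippet. The variable hour stands in for a parameter passed into the Script object and is defined locally here only so the example runs on its own.

// "hour" stands in for an aliased parameter passed into the Script object.
var hour = 9;
var greeting;
if (hour < 12)
{ greeting = "Good morning"; }
else
{ greeting = "Good afternoon"; }
greeting;   // last evaluated statement - its value becomes the return value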
In the case of the Life Insurance application you will use the Script object to calculate the
insurance fee for the caller based on the parameters gender and age that have previously been
collected and stored in variables.

Creating a Script Object


To add a Script object to the Life Insurance application, do one of the following:
Open the Life Insurance Module object by double-clicking it in the Object Browser. In the
Module editor that opens up add a new row to the Object Sequence section, and from the
context menu select New and then Script.
OR
Display the dialog of the Life Insurance Module object in the Dialog Designer, right-click the
embedded sequence and from the context menu that comes up select Append, then New and
then Script.
An empty Script editor will open up.


In the Script editor, enter Compute life insurance fee as the name for the new Script object.

Specify the script code


In the Script Resource section, define the code to be processed within the Script object. In the case of the Life Insurance application, provide the following code as an embedded definition:

// Base fee: 35 dollars for male callers, 27 dollars otherwise.
if ((gender == "male") || (gender == "1"))
{ fee = 35; }
else
{ fee = 27; }
// Add one dollar for each year of age above 30.
if (age > 30)
{ fee = fee + (age - 30); }

This code computes the life insurance fee based on gender and age; the result is passed back to the dialog through the fee parameter defined in the next step.
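If you want to sanity-check this logic outside of VoiceObjects, you can wrap it in an ordinary JavaScript function whose arguments correspond to the parameter aliases defined in the next step. The function wrapper and the test values below are illustrative only and are not part of the embedded script.

// Illustrative test harness only - the embedded script itself has no function
// wrapper; gender, age and fee are exchanged via the Parameter Set instead.
function computeLifeInsuranceFee(gender, age) {
    var fee;
    if ((gender == "male") || (gender == "1"))
    { fee = 35; }
    else
    { fee = 27; }
    if (age > 30)
    { fee = fee + (age - 30); }
    return fee;
}

computeLifeInsuranceFee("female", 28);   // 27 - matches the example dialog later in this lesson
computeLifeInsuranceFee("male", 44);     // 49 = 35 + (44 - 30)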


Define the parameter set


In the Parameter Set section, provide the parameters that are required to exchange processing
information with the script code.

1. Click the Context Menu button and browse for the Caller gender Variable object to add
it to the Parameter field. Enter gender in the Alias field to define the name under which the
parameter is accessible within the script code.

2. Add a second parameter to the Parameter Set section by clicking the Add button on its
toolbar. Browse for the Caller age Variable object to add it into the Parameter field, and
enter age into the Alias field.
3. Add a third parameter to the Parameter Set section to specify the variable that will return
the computed fee. Use the Context Menu button to create a new Variable object called
Life insurance fee. In the Alias field, enter fee.
Your Parameter Set section looks like this now:

Leave all other values as default and click Save and Close on the toolbar of the Script editor.
In the Object Browser you will now find a Resources folder, containing a Script folder, which
contains your Compute life insurance fee Script object.


Playing the Result to the Caller


Next, add a new Output object to the embedded sequence of the Life Insurance Module object to play back the result of the Compute life insurance fee Script object to the caller.
1. In the Output editor, enter Play life insurance fee as the name for the new Output object.
2. Enter the following prompt into the text field and add the relevant Variable object where
appropriate:
The monthly premium for a [V:Caller age] year old [V:Caller gender] is [V:Life insurance fee]
dollars.
The Play life insurance fee Output object looks like this now:

The dialog flow for the Life Insurance Module object now looks like this:


Dialog Flow 9
You have now added two more objects to the dialog flow, which do the following:
The Compute life insurance fee Script object calculates the monthly fee for the caller based on
the gender and age provided and the Play life insurance fee Output object plays back the result
to the caller.
After redeploying your service in the Control Center, call it and test the application behavior.
The following dialog flow shows an example dialog for the Prime Life Insurance service you
have created so far:

Object – Caller Dialog Flow

Good morning and welcome to the Prime Life Insurance service!
To help me calculate what your monthly premium would be for life insurance, please say if you are male or female.

Caller: Female.

Now please say your age.

Caller: Twenty-eight.

Ok, I have recognized your gender as female and your age as twenty-eight. Is that correct?

Caller: Yes.

The monthly fee for a twenty-eight year old female is 27 dollars.
Thanks for calling. Goodbye!

What Comes Next


Next, you will learn how to influence the dialog behavior by the use of conditions.
Lesson 11 – Processing Conditions


Lesson 11 – Processing Conditions

Objectives
Lesson 11 introduces the use of conditions in order to influence the dialog behavior. Moreover it
simulates a transfer to an agent by the use of an Exit object. After completing this lesson you
will know
• what is basically meant by conditions in VoiceObjects,
• how to create an Exit object,
• how to create a Silence object,
• how to branch the dialog flow with an If object.

Conditional Processing
Depending on parameters such as user input, user profile, external data, date, time, etc., parts of a dialog flow may or may not be processed.
If objects and Case objects can be used to make parts of a dialog flow dependent on certain conditions. An If object is used to split a dialog flow into two subtrees depending on a condition evaluated at call time. A Case object is used to split the dialog flow into multiple subtrees depending on one or more conditions evaluated at call time. For further details refer to the If and Case objects in the Object Reference.
In addition, preconditions can be used to express certain dialog conditions and influence the
dialog behavior. A precondition is a condition that is evaluated at call time before processing an
object. The object is only processed if the condition evaluates to true. For more information
about preconditions see Pre- and Postprocessing in the Object Reference.

In the Dialog Designer, objects with a precondition are indicated with a Precondition icon.
In the case of the Life Insurance application, an If object will be used to influence the dialog flow depending on whether the caller wishes to be transferred to an agent or not.
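In plain JavaScript terms, the three mechanisms behave roughly as sketched below. This is a conceptual sketch only; in VoiceObjects these constructs are configured as If objects, Case objects and precondition settings rather than written as code, and all variable names here are invented for the example.

// Conceptual sketch only - not VoiceObjects syntax.
var wantsAgent = "yes";          // e.g. a Yes/No answer collected from the caller
var timeOfDay  = "afternoon";    // e.g. the state of a custom layer
var registered = false;          // e.g. a flag retrieved from a back-end system

// If object: splits the flow into two subtrees based on one condition.
var nextStep = (wantsAgent == "yes") ? "agent transfer" : "continue dialog";

// Case object: splits the flow into multiple subtrees.
var menu;
switch (timeOfDay) {
    case "morning":   menu = "morning menu";   break;
    case "afternoon": menu = "afternoon menu"; break;
    default:          menu = "evening menu";
}

// Precondition: the object is processed only if the condition evaluates to true.
if (registered) {
    menu = "personalized " + menu;
}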

Asking For Agent Transfer


After calculating the life insurance fee through the Script object in the previous lesson, the next dialog step in the Life Insurance application is to offer the caller the option to speak to an agent. This is realized by another Input object, which is possibly followed by an agent transfer and is therefore embedded in a Sequence object.
1. Add a Sequence object to the embedded sequence of the Life Insurance Module object in the respective Module editor or in the Dialog Designer (see the previous lesson for how to do this).
2. In the Sequence editor, enter Offer option to connect to an agent as the name for the new Sequence object.

3. In the Object Sequence section of the Sequence editor, click the Context Menu button to the right of the Object field and add a new Input object.
4. In the Input editor, enter Speak to an agent now? as the name for the new Input object.
5. Enter Would you like to speak to an agent now? in the text area of the embedded output in the Input Request section.


6. In the Grammar section, enter yes, no in the TTG field to specify a classical Yes/No grammar.

7. In the Result Handling section, add a new Variable object called Yes/No answer to the Variable field and enter sltTTG in the Slot field.

8. Leave all other values as default.

9. On the toolbar of the Input editor click Save and Close and find the Input object added to the object sequence of the Offer option to connect to an agent Sequence object.


Specifying an Exit Object


Next, you will create an Exit object in order to simulate the transfer to an agent; it will be added to the dialog flow later on. Transferring a call is actually done by a Transfer object, but due to its technical complexity the transfer is simulated by an Exit object here for demonstration purposes. For further information on real transfers refer to the Transfer object in the Object Reference.
To create the Exit object for the Life Insurance application, do the following:
1. From the New menu, select Exit. An empty Exit editor will open up.

2. In the Exit editor, enter Simulate agent transfer as the name for the new Exit object.
3. In the Goodbye Message section, enter something like Hi! My name is Ray MacArthur. I
understand you're interested in life insurance. How can I help you? in the text field.

4. Click the Context Menu button to the right of the text field and create a new Silence
object via the context menu.
5. In the Silence editor, enter Very long pause as the name for the new Silence object.
6. In the Silence Duration section, select 60 Seconds from the drop-down list in the Duration
field.


7. On the toolbar of the Silence editor click Save and Close to add the Silence object.

The Goodbye Message section of the Exit object looks like this now:

8. In the Processing section of the Exit editor select Disconnect from the drop-down list in
the Dialog exit type field in order to disconnect the call after processing the Exit object.

On the toolbar of the Exit editor click Save and Close. The Exit object will be added to the
dialog flow in the next step.

Branching the Dialog Flow


Next, you will add an If object to the Sequence object in order to split the dialog flow depending
on the caller input given in the Speak to an agent now? Input object.
1. Add a new row to the Object Sequence section of the Offer option to connect to an agent
Sequence object by clicking the Add button on its toolbar.
2. Click the Context Menu button to the right of the Object field and create a new If object via
the context menu.


3. In the If editor, enter Check if caller wants agent transfer as the name for the new If object.
In the IF section of the If editor you specify the condition that needs to be met in order to process the THEN section (i.e. the caller said yes when asked for an agent transfer).
4. In the IF section, select the equal function “=” from the drop-down list in the Function field to check if argument [1] equals argument [2].
5. In the Arg. [1] field, add the Yes/No answer Variable object by browsing.
6. In the Arg. [2] field, enter yes.
The IF section looks like this now:


7. In the THEN section, add the Simulate agent transfer Exit object created previously to the Object field by browsing for it via the Context Menu button.

8. On the toolbar of the If editor click Save and Close to add the If object to the object sequence of the Offer option to connect to an agent Sequence object.

Finally, on the toolbar of the Sequence editor click Save and Close.

The dialog flow for the Life Insurance Module object looks like this now:


Dialog Flow 10
You have now added a Sequence object to the Life Insurance application, which contains two
objects. The first one is an Input object that asks the caller if he/she wants to be transferred to
an agent. The second object is an If object that, depending on the answer of the caller to the
previous question, either processes an Exit object that simulates an agent transfer or returns to
the Module object.
After redeploying your service in the Control Center, call it and test the application behavior.
The following dialog flow shows an example dialog for the Prime Life Insurance service you
have created so far:

Object – Caller Dialog Flow

Good morning and welcome to the Prime Life Insurance service!
To help me calculate what your monthly premium would be for life insurance, please say if you are male or female.

Caller: Female.

Now please say your age.

Caller: Twenty-eight.

Ok, I have recognized your gender as female and your age as twenty-eight. Is that correct?

Caller: Yes.

The monthly fee for a twenty-eight year old female is 27 dollars.
Would you like to speak to an agent now?

Caller: Yes.

<…Check caller input…> [ yes ]

Hi! My name is Ray MacArthur. I understand you're interested in life insurance. How can I help you?
[ Silence of 60 seconds ]
< Disconnect the call >

What Comes Next


Next, you will add some additional features to the Life Insurance application that improve the
design of the application with respect to its usability and its hear and feel.
Lesson 12 – Designing User-Friendly Applications


Lesson 12 – Designing User-Friendly Applications

Objectives
Lesson 12 briefly explains some additional features available in VoiceObjects for creating
successful phone applications with respect to what is called a good Voice User Interface (VUI)
design. These features improve the design of the application with respect to its usability and its
hear and feel.
After completing this lesson you will know
• roughly what is meant by Voice User Interface design,
• what is meant by standard and custom navigation,
• how to specify custom navigation,
• what random prompting is and how it is specified.

Voice User Interface Design


While there are many important elements in a phone application, the Voice User Interface (the
portion of the system that callers hear and speak to) has potentially the greatest impact on its
ultimate success or failure. Callers will only use phone applications that both meet their needs
and are easy to use. Voice User Interface design covers areas like the designing of the dialog
flow logic, prompt design (i.e. the composing of the verbiage), audio design (i.e. the non-speech
audio), and the persona presenting the prompts.
VoiceObjects provides several features, which help you to easily create well-designed
applications. One has already been introduced in Lesson 7 with the event handling strategies,
which specify how to proceed within a dialog in cases of exceptions. Another one is standard
and custom navigation, which enables the caller to control the dialog flow.
Random Prompting, the playing of variations of a given prompt, is yet another technique
provided for a good Voice User Interface design. It helps to make the dialog more lively.
For further information on Voice User Interface design refer to Appendix A – Voice User
Interface Design in the Design Guide.

Standard and Custom Navigation


Standard and custom navigation commands enable the caller to control the dialog flow.
Standard navigation provides basic dialog flow navigation, like going back or forward in the
dialog. The different types of navigation commands can only be defined in Module objects.
With custom navigation commands the Web concept of hyperlinks is available in voice
applications. By using an application specific voice or DTMF command, the caller can, for
example, jump back to the main menu, pause the application, or be transferred to a call center
agent. These hyperlinks can be accessed throughout the application (through inheritance) or
within defined sections of it. Custom navigation commands may or may not undergo a
confirmation step, i.e. the Hyperlink object can be configured such that the caller either needs to
confirm the activation of the hyperlink (e.g. Do you really want to leave the application?), or the
target of the hyperlink is processed without confirmation.
For further details on standard and custom navigation refer to Navigation in the Object
Reference.
In the case of the Life Insurance application custom navigation will be specified in order to skip
several dialog steps and jump to an agent immediately.
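Conceptually, a custom navigation command works like a check that runs before regular input handling, as the following JavaScript sketch illustrates. It is not VoiceObjects code; the command word and the continuation behavior simply anticipate the agent hyperlink configured in the next section.

// Conceptual sketch only - hyperlinks are configured in the Navigation tab, not coded.
var hyperlinkCommand = "agent";      // activation grammar of the custom command
var returnAfterwards = false;        // continuation set to "Do not return"

function handleUtterance(utterance) {
    if (utterance == hyperlinkCommand) {
        // Process the hyperlink target, e.g. an Exit or Transfer object.
        return returnAfterwards ? "process target, then resume the dialog"
                                : "process target and do not return";
    }
    return "handle as regular input for the current object";
}

handleUtterance("agent");    // "process target and do not return"
handleUtterance("female");   // "handle as regular input for the current object"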


Specifying Custom Navigation


Custom navigation can be defined in Module objects as well as in other objects expecting input
from the caller. Hyperlinks defined in any Module object are inherited by the objects in the
sequence of that Module object. So in order to enable the caller to jump to an agent from any
point in the dialog flow the respective hyperlink needs to be defined in the Module object.
Open the Life Insurance Module object, switch to the Navigation tab and expand the Custom
Navigation section.

Each custom navigation command consists of an embedded hyperlink definition, which is structurally simpler than the autonomous Hyperlink object.
To specify a hyperlink for the Life Insurance application, do the following:
1. In the Hyperlink section, add the Simulate agent transfer Exit object to the Object field by browsing for it via the context menu. With the Mode set to Process object (the default), the Object field specifies the object to be processed when the hyperlink command is activated by the caller.
2. In the Continuation field, set the continuation to Do not return, since the caller is not going to return to the dialog after being transferred to an agent.
3. In the Activation Grammar section enter agent in the TTG field as the command that
activates the hyperlink.
The Custom Navigation section of the Life Insurance Module object looks like this now:


4. Switch back to the Definition tab of the Life Insurance Module object.
5. Expand the Welcome Message section and enter something like If you want to talk to an agent just say agent. in the text field (behind the linked Welcome Output object) to point the caller to this option.
6. On the toolbar of the Module editor, click Save and Close.

Dialog Flow 11
You have now added a hyperlink to the Life Insurance application, which allows the caller to
interrupt the dialog flow and directly talk to an agent.
After redeploying your service in the Control Center, call it and test this behavior.
The following dialog flow shows an example dialog for the Prime Life Insurance service you
have created so far:


Object – Caller Dialog Flow

Welcome to the Prime Life Insurance service! If you want to talk to an agent just say agent.
To help me calculate …

Caller: Agent.

Hi! My name is Ray MacArthur. I understand you're interested in life insurance. How can I help you?
[ Silence of 60 seconds ]
< Disconnect the call >

Random Prompting
Another valuable technique for good Voice User Interface design is called random prompting.
Random prompting means playing variations of a given prompt in a randomized way: when it comes time for the application to play a particular prompt, there are multiple versions of that prompt for the system to select from. Each time the application needs to play that prompt, a different variant is selected until all have been exhausted. This minimizes the typical robotic character of a phone application.
Prompts can vary either in their wording or simply in the way they're spoken.
Random prompts can be achieved by simply providing multiple Output items with the same
language layer, occurrence level, channel, custom layer (defined in the Layer field), and input
mode setting.
For further details on random prompting refer to the Output object in the Object Reference and
to Appendix A – Voice User Interface Design in the Design Guide.
In the case of the Life Insurance application random prompts will be specified for the Welcome
Output object embedded in the Life Insurance Module object.

Specify random prompts


The Welcome Output object already contains three Output items differing in their time-dependent custom layer condition specified by the Time of day Layer object. To achieve random prompting, we will next add two more Output items for the layer state morning with exactly the same layer settings as the one already defined.
1. Open the Welcome Output object.

2. Click the Add button on the toolbar of the Output section to add another Output item.
3. In the text field add a welcome message that differs from the one in the first Output item
with layer state morning, e.g. Good morning, this is Prime Life Insurance.


4. Add a third Output item with layer state morning again with a different welcoming prompt.


On the toolbar of the Output editor, click Save and Close.


Since all three Output items with the layer state morning have the same settings, the system will now pick one of these welcome messages at random when the Life Insurance application is called in the morning. True randomizing ensures that all available prompts are played before a repetition occurs.
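The selection strategy can be pictured as a shuffle bag, as in the JavaScript sketch below. This is only an illustration of the behavior described above, not the VoiceObjects implementation; the third prompt text is invented, since the tutorial leaves the wording of the additional variants up to you.

// Illustrative sketch of "play every variant before repeating".
var morningPrompts = [
    "Good morning and welcome to the Prime Life Insurance service!",
    "Good morning, this is Prime Life Insurance.",
    "A very good morning to you from Prime Life Insurance."   // invented third variant
];

var remaining = [];

function nextPrompt() {
    if (remaining.length == 0) {
        remaining = morningPrompts.slice();        // refill the pool once it is exhausted
    }
    var index = Math.floor(Math.random() * remaining.length);
    return remaining.splice(index, 1)[0];          // remove and return one variant
}

nextPrompt();   // each call returns a variant; none repeats until all have been played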
As an exercise, you may also add random prompts for the layer states afternoon and evening.
After redeploying your service in the Control Center, call it and test this behavior.


What Comes Next


Next, and finally, you will add a Log object to the Life Insurance application to store dialog data
into a file for analytic purposes.
Lesson 13 – Logging the Results


Lesson 13 – Logging the Results

Objectives
Lesson 13 briefly explains how to store the results received from a caller request into a log file
in order to be able to evaluate them later on. This is done by a Log object.
After completing this lesson you will know
• how to create a Log object to store results,
• how to access the log file from within the Control Center.

Logging
Within VoiceObjects, application-level logging information may be written to a log file, to the media platform, to a custom database, or to the system database. This is done by using the Log object.
The default channel for logging is writing to a service-specific file that can be accessed via the
Control Center (see Accessing the Log File).
Logging information may contain arbitrary dynamic content such as caller input, data retrieved
from back-ends, etc. It may be written out at any point during a call, or after the call has ended.
In the case of the Life Insurance application you will log the gender and age of the caller for
further dialog analysis purposes.
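Conceptually, the message written by such a Log object is just static text with variable values spliced in, roughly like the following JavaScript expression; this is purely illustrative, since in VoiceObjects you assemble the message in the Log editor instead.

// Illustrative only - mirrors the "Gender: <value> Age: <value>" message
// that will be assembled in the Log editor below.
var gender = "male";
var age = 44;

var logMessage = "Gender: " + gender + " Age: " + age;
logMessage;   // "Gender: male Age: 44"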

Creating a Log Object


To create the Log object for the Life Insurance application, do the following:
1. From the New menu, select Log. An empty Log editor will open up.


2. In the Log editor, enter Log caller data for dialog analysis as the name for the new Log object.
3. In the Destination field, select Log File from the drop-down list, which indicates that the Log item writes to a log file. A separate log file is created for each service.
4. Enter Gender: into the text field and browse for the Caller gender Variable object to add it behind the label. This will output the gender of the caller.
5. Next, enter Age: and add the Caller age Variable object by browsing. This will output the age of the caller.
The Log caller data for dialog analysis Log object looks like this now:


Click Save and Close on the toolbar of the Log editor.


In the Object Browser you will now find a Log folder in the Resources folder, which contains your Log caller data for dialog analysis Log object.

Adding the Log Object to the Application


To add the Log caller data for dialog analysis Log object to the Life Insurance application, do the following:
1. Open the Dialog Designer for the Life Insurance Module object (if not already open) and right-click the Compute life insurance fee Script object.
2. From the context menu, select Insert and browse for the Log caller data for dialog analysis Log object. It will be inserted behind the Confirmation object, which is the point in the dialog flow at which all information required for this Log object (i.e. gender and age) is available.
The dialog flow for the Life Insurance Module object looks like this now:


The actual behavior of the Life Insurance application has not changed, but results are now stored in a log file.
After redeploying your service in the Control Center, call it several times to store some results.

Accessing the Log File


To access your log file, open the Control Center by right-clicking the Server object VOServer in
the Object Browser.
The Control Center provides access to detailed log information both on the level of the server
as well as on the level of each individual service.
Switch to the Service Logging tab to bring it to the front. The Service Logging tab provides
access to the log files for all Services.


Each service has two associated log files named VSN_service.log and VSN_error.log (where
VSN stands for the VoiceObjects Service Name of the respective service). The VSN_service.log
file contains messages written by the Log object with destination Log File. The VSN_error.log
file contains all error messages related to this service.
Log files can be viewed (up to a maximum size of 1 MB), and downloaded. Both viewing and
downloading are only possible if the log file is not empty.
To view a log file in a new window, click on the name of the log file. To download it, click the
Download button to the right of the log file name.
To view the log file for the Tutorial service, click Tutorial_Service.log. The log file will be
displayed in a separate window (for better readability the layout of the example log file below
has been slightly modified):

[Category: INFO]
[Log Name: Log caller data for dialog analysis]
[Label: ]
[Ref ID: 0.135.149:OVAPac1617bc0000000000003a03000001114bbf400d_BVO_Log]
[VSN: Tutorial]
[Dialog ID: OVAPac1617bc000000000000258d000001114bbb7c64]
[TS: 2007-03-13 16:25:04,174]
[DNIS: ]
[ANI: VoiceObjects Administrator]
Gender: male Age: 44

[Category: INFO]
[Log Name: Log caller data for dialog analysis]
[Label: ]
[Ref ID: 0.135.149:OVAPac1617bc0000000000003a03000001114bbf400d_BVO_Log]
[VSN: Tutorial]
[Dialog ID: OVAPac1617bc00000000000027a9000001114bbb7c64]
[TS: 2007-03-13 16:37:28,000]
[DNIS: ]
[ANI: VoiceObjects Administrator]
Gender: male Age: 54
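If you later want to evaluate the logged caller data, the text lines written by the Log object can be processed with a few lines of script. The sketch below is illustrative only; it assumes each entry ends with a single "Gender: ... Age: ..." line as in the examples above, and the sample data is made up along the lines of the tutorial dialog.

// Illustrative sketch for offline analysis of the service log shown above.
var logText =
    "Gender: male Age: 44\n" +
    "Gender: male Age: 54\n" +
    "Gender: female Age: 28\n";   // sample data only

var pattern = /Gender:\s*(\w+)\s*Age:\s*(\d+)/g;
var entries = [];
var match;
while ((match = pattern.exec(logText)) !== null) {
    entries.push({ gender: match[1], age: parseInt(match[2], 10) });
}

entries.length;   // 3 - one record per logged caller, ready for further analysis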


Conclusion
We have reached the end of the Tutorial here.
You have been introduced to the basic functionalities of VoiceObjects Desktop and to the basic
objects available within VoiceObjects by building a complete voice application including event
handling, navigation, a custom layer and logging functionality.
By now, you will be able to build your own applications using VoiceObjects.
However, the features and objects that have been introduced during the course of the Tutorial
have not all been described in detail. Other features and a lot of additional objects available in
the VoiceObjects platform have not been introduced at all.
In order to get a detailed description of the entire functionality of VoiceObjects Desktop you
should read the Desktop Guide next.
In order to learn more about designing voice applications with the VoiceObjects platform read
the Design Guide.
If you want to take a closer look at all objects and their configurations, refer to the Object Reference.
See Appendix C – How To Learn More for an overview of all additional resources available and how to access them.


Appendix A - Project Documentation


The project documentation functionality within VoiceObjects Desktop is used to document the overall structure as well as specific details of your project, e.g. to present it to customers, for internal meetings, and so on.

Introducing Project Documentation


Using the project documentation functionality you can create a PDF file that shows the dialog flow similar to its display in the Dialog Designer, a table overview of all or a selected subset of objects, or both.
The PDF document can be created for the entire project version or starting from a particular object. When creating the project documentation for the entire project version, the system loops through all existing Module objects and creates the corresponding documentation. When starting from a particular object, the project documentation only shows objects that are referenced within the corresponding subtree of the application.

Creating a Project Documentation for the Life Insurance Application

In the documentation folder of your VoiceObjects installation you can find an example documentation for the Life Insurance application (LifeInsuranceProjectDocumentation.pdf).
To create this documentation, which includes the dialog flow and the object overview for the Life Insurance application, do the following:
1. Open the Tutorial project.
2. On the menu bar, select Tools and then Project Documentation.
The Project Documentation window opens up.


3. In the Project Documentation window, select Dialog flow and object overview and click
Next.


4. In the Dialog Flow and Object Overview window, specify the object details to be included
in your dialog flow and your object overview. For details see Create Project Documentation
in Chapter 7 – Basic Commands in the Desktop Guide.
In the example documentation LifeInsuranceProjectDocumentation.pdf the settings as
shown in the screen above have been selected.
5. Click Create to start creating the documentation.
A PDF document will be generated containing a cover page (if selected), the dialog flow and the
object overview according to your selection, and a statistics page (see examples below).
See the file LifeInsuranceProjectDocumentation.pdf in the documentation folder of your
VoiceObjects installation for the complete example documentation.
For a detailed description of the project documentation functionality see Create Project
Documentation in Chapter 7 – Basic Commands in the Desktop Guide.


Examples
Cover page
The cover page lists general information about the name, the short description, the creation
date and the owner for both project and project version.


Dialog flow
The dialog flow shows the objects of the Life Insurance application in the same order as they
appear in the Dialog Designer. Within the dialog flow you will find hyperlinks to jump from an
object to its object definition in the object overview, if created. Available hyperlinks are indicated
by blue object names.


Object overview
The object overview lists all object definitions that are referenced within the Life Insurance
Module object. Within the object overview you will find hyperlinks to jump from an object to its
child objects. Available hyperlinks are indicated by blue object names.


Object statistics
The statistics page is automatically generated when an object overview is created. It lists all
object types and displays the total number of objects per type. The statistics page also serves
as a table of contents. By clicking an object type you arrive at the corresponding section in the
PDF file. Note that a hyperlink for an object type is only available if at least one object of this
type exists in your application.


Appendix B – Overview Object Types


This chapter provides an overview of all object types available in VoiceObjects for creating
dialogs – grouped into categories. For a detailed description of all dialog objects refer to the
Object Reference.

Components Category
Objects grouped into the Components category are higher-level objects, which provide dialog functionality frequently used within applications, such as menus, reading out lists and tables, summarizing and confirming multiple caller inputs, etc. This category also offers the most basic objects necessary to interact with the caller, like the Output and Input objects.
The objects of the Components category have various integrated system layers that provide, for example, sophisticated event handling and dialog navigation capabilities.

Object Name – Short Description

Module – Represents the start node in a dialog flow and provides global settings, which are inherited by objects contained in the module. Typical global settings made here affect e.g. event handling, standard navigation, and ASR tuning.

Input – Plays an Output to request information from the caller and assigns the caller’s input to one or more Variable or Layer objects. The input can be either DTMF or recognized voice input in the voice and video channel, or text in the text and Web channel.

Output – Presents any type of output to the caller. In the voice and video channel, outputs may contain text that is read out through a text-to-speech engine, or prerecorded audio or video files, possibly containing various dynamic content as well as Silence objects. In the text and Web channel, the output defines what is displayed on the screen of the mobile device.

Sequence – Represents a dialog component to group multiple objects into a processing sequence.

Menu – Presents a list of choices and lets the caller select one. When the caller has selected an item, the dialog continues with the processing of the object referenced in that item.

Confirmation – Represents a configurable dialog component to summarize information collected in one or more previous dialog steps, and to let the caller confirm the summary or correct individual items (e.g. by using Input objects).

List – Generates a dynamic and interactive dialog that presents the specified List items and provides list navigation and selection options to the caller. Supports both one-dimensional lists and two-dimensional tables, offering vertical and horizontal navigation, if required.


Resources Category
Objects grouped into the Resources category provide access to external resources such as
grammar, prerecorded audio, or existing VoiceXML files. It also covers objects for back-end
access and custom logging.

Object Name – Short Description

Audio – Provides access to prerecorded audio files. Audio files can cover spoken prompts, music, or sound effects. Not supported in the text and Web channel.

Video – Provides access to video files. Used in multimedia services for 3G mobile devices. Not supported in the text and Web channel.

Format – Provides the concept of output formatting for dynamic content contained in Variable, Expression, Script, Collection, and Layer objects, which are presented to the caller in an Output object.

Silence – Allows applications to play silence for a specified duration. Not supported in the text and Web channel.

Grammar – Defines a recognition grammar, which can be used in all objects that accept caller input (such as Input, Menu, or Confirmation objects). A recognition grammar defines the input a caller can respond with to a particular request prompt (voice, DTMF, or text). Allows the definition of embedded grammars, references to external grammar files, or grammars that are dynamically generated at call time.

Connector – Allows access to external data sources and back-end systems to connect the dialog to dynamic content by providing an open Java, Web services and XML API for bi-directional data exchange. It can also be used to establish a connection to various other communication channels such as SMS, MMS, e-mail, fax or Web sites.

Script – Allows the specification of server-side scripting language code (JavaScript) to control any kind of complex business logic within the dialog flow.

Plug-In – Provides access to an existing VoiceXML resource. Alternatively it can be used to plug in custom VoiceXML code to take advantage of proprietary VoiceXML extensions of the underlying media platform. Not supported in the text and Web channel.

Log – Provides the capability to write log messages to a file, to a custom database, to the media platform, or to the system database.

Resource Locator – Represents a general access path to any kind of external resource in the same way that a URL (Uniform Resource Locator) does in the Web environment.


Logic Category
Objects grouped into the Logic category provide low-level elements based on standard
programming constructs to control the logic of a dialog flow and the corresponding dialog
context.

Object Name – Short Description

Variable – Provides the concept of a variable as in any standard programming or scripting language to control the dialog context.

Collection – Represents a collection of data values that can be accessed element by element with an index key, similar to a database table or the construct of a hash table in any standard programming language.

Expression – Provides a comprehensive set of expression functions, which can be used to perform, for example, arithmetical calculations, string operations, comparison operations, or any kind of Boolean operation.

If – Allows you to specify a standard IF-THEN-ELSE construct as in any standard programming or scripting language.

Case – Represents a standard CASE or SWITCH construct as in any standard programming or scripting language.

Loop – Allows you to implement standard loop constructs of the type WHILE (condition) {…} and DO {…} WHILE (condition).

Goto – Allows you to incorporate an explicit GOTO or GOSUB instruction within a dialog flow.


Actions Category
Objects grouped into the Actions category are dialog elements that offer interactive dialog
control to the caller, e.g. pausing an application, being transferred to a call center agent, or
branching to a different part of the application.

Object Name – Short Description

Hyperlink – Allows the caller to use a predefined command to branch out to a certain dialog component or subdialog at any point in time.

Pause – Allows the caller to pause a call at any point in time with a predefined hyperlink command. It can also be used to pause the application at certain points in the dialog flow by design, e.g. to give the caller time to perform a task in a troubleshooting application before moving on to the next step. Not supported in the text and Web channel.

Recording – Provides a mechanism to record a message from the caller, which can also be played back to the caller for review. Not supported in the text and Web channel.

Transfer – Connects the caller to another entity (e.g. telephone line or another application). Not supported in the text and Web channel.

Exit – Provides the termination of the connection between the phone of the caller and the application hosted on VoiceObjects Server.

Layers Category
The Layers category contains objects that provide custom layer definition capabilities.

Object Name – Short Description

Layer – Lets the developer define typical custom layers like persona, customer/caller state, or time of day. These layers can be used throughout the application to allow for best practices of Voice User Interface design, like time-dependent greetings, persona-based prompts, menus depending on caller status, etc.


OSDMs Category
The OSDMs category contains OpenSpeech™ DialogModules™ (OSDMs). OSDMs are
building blocks for voice applications, which accelerate the development process by packaging
application functionality for applications based on VoiceXML. They provide all prompts,
grammars, dialog flow, and other parameters required to address frequent speech application
tasks, like obtaining alphanumeric strings, dates, phone numbers, etc.
The following table lists all thirteen OSDM types available within the VoiceObjects platform
together with short descriptions and alias names by which the types are referred to in the
OSDM handbook.

Type (Alias) – Description

Alphanumeric (Alphanum) – Collects a string of connected numbers and letters (i.e. the caller is not required to pause after each digit or letter). Since distinguishing individual letters is especially difficult, it is usually necessary to constrain the task by specifying a list of valid alphanumeric strings (e.g. 1000 possible account IDs).

Credit Card Expiration (CCExpDate) – Collects the expiration date for a credit card, usually a month and a year. The OSDM recognizes the last day of a given month as well.

Credit Card Number (CreditCard) – Collects a credit card number. Can understand card numbers for Visa, MasterCard, American Express, Discover, Diners Club, and many private label cards provided by retailers.

Currency (Currency) – Collects currency amounts in dollars and cents. You can constrain the range of currency amounts, specify the granularity, and specify a disambiguation mode.

Custom Context (CustomContext) – Allows you to create your own vocabulary and grammar to implement new functions not provided by the other OSDMs.

Date (Date) – Collects dates in several formats (e.g. “June fourth” or “6/4/70”). You can specify an entry format and constrain the range of dates accepted by the DialogModule.

Digits (Digits) – Fundamentally identical to the Alphanum DialogModule, except that it collects a string of connected digits only. To constrain the recognition task, you can specify a minimum and maximum number of digits to listen for (e.g. at least seven digits, but not more than ten). In most cases, this DialogModule is more accurate than Alphanum for digits-only collections.

Number (NaturalNumbers) – Collects any numerical amount. Callers can say “fifty four hundred” or “fifty point twenty five” instead of digits only.

Phone Number (Phone) – Collects a phone number in the North American Numbering Plan consisting of ten digits (long distance) with optional “1,” seven digits (local) or three digits (411, 911, etc.) as configured. Phone numbers must be spoken digit-by-digit; one-eight-hundred is supported as a special case. Certain illegal strings for area codes and exchanges are not allowed (e.g., 000-000-xxxx).

Postal Code (ZipCode) – Recognizes a five-digit or nine-digit US ZIP Code.

Social Security Number (SocialSecurity) – Collects a 9-digit US social security number. Illegal numbers, such as those beginning with three zeroes, are screened out.

Time (Time) – Collects a time of day. Callers can use 12-hour or 24-hour times, as well as prefix words such as “about” and “around”. To constrain the recognition task, you can specify a range of valid times, granularity, and a disambiguation mode for determining whether to listen for times in the past or in the future.

Yes/No (YesNo) – Collects either an affirmative or negative response from the caller.


Appendix C - How to Learn More


If you are new to working with VoiceObjects or to developing phone applications in general, here are some resources that help you get started and provide ongoing support.

VoiceObjects Help
VoiceObjects provides you with an Online Help that serves as a reference for using
VoiceObjects.

Open VoiceObjects Help


To open VoiceObjects Help do one of the following:

From within VoiceObjects Desktop:


• On the Project Home Page in the main VoiceObjects Desktop window, click the Help
button.
• On the Help menu in the main VoiceObjects Desktop window, click VoiceObjects Help.

From outside VoiceObjects Desktop:


Using Windows
• In the Start menu of Windows, point to Programs and then to VoiceObjects 7. Click Help
to open VoiceObjects Help.
• In your VoiceObjects installation folder open the folder \platform\desktop\help\en, and click
the file index.htm.
Using UNIX/Linux
• In your VoiceObjects installation directory open the folder /platform/desktop/help/en, and
click the file index.htm.
In all these cases VoiceObjects Help will open up in a separate window.


Using VoiceObjects Help


The following buttons are available at the top of VoiceObjects Help:

Contents
The Contents button displays all available books and pages of VoiceObjects Help in the left-
hand pane. When you click a closed book, it opens to display its sub-books and pages. When
you click an open book, it closes. When you click pages, the respective topics will be displayed
in the right-hand pane.

Search
The Search button enables you to search for words in VoiceObjects Help and locate topics
containing these words. Full-text searching looks through every word in VoiceObjects Help to
find matches. When the search is completed, a list of topics is displayed in the left-hand pane
so you can select a specific topic to be displayed in the right-hand pane. Within this topic, all
words that match the search word will be highlighted in orange.

Glossary
The Glossary button provides a list of VoiceObjects-specific terms in the upper left-hand pane.
If you click a term, its corresponding definition is displayed in the lower left-hand pane.


Back
If you click Back you return to the previously viewed topic.

Print Topic
The Print Topic button opens up a print dialog in order to print the respective topic displayed in
the right-hand pane.

In addition to these buttons, you can also navigate through VoiceObjects Help by Text Links:
Text within a topic that is blue and underlined is a hyperlink that jumps to another related topic
or Web page.

Using Dialog Help


As you work with VoiceObjects Desktop, you can obtain information about windows, editors, or
boxes by using the dialog help available in the application.

You can access this help by clicking the Help button in the toolbars of windows, editors or
boxes. A separate window will open up displaying the appropriate help topic.

Prime Insurance Sample Application


The VoiceObjects platform comes with a sample application called Prime Insurance – a sample
voice portal for an insurance company.
Prime Insurance serves as a full-featured out-of-the-box sample application that you can call
within minutes after having installed VoiceObjects. But going far beyond this step, it presents
sample implementations for many important techniques often used in dynamic voice


applications. It can therefore serve as a reference that you may come back to from time to time
during ongoing voice application development work.
The Prime Insurance package consists of the application itself, which is provided as an XML
export file, and a set of audio and grammar files. Both parts are automatically installed by the
VoiceObjects Installer (unless you perform a minimal installation, or explicitly exclude them
during a custom installation).
For details on how to be able to call the application see the Prime Insurance Primer.

Printed Documentation
In addition to VoiceObjects Help there are also PDF documents available for printing. See
Overview VoiceObjects Documentation for a list of all documents currently available.

Note: The content of the PDF documents completely corresponds to the online VoiceObjects Help, except for the Installation Guide, which is only available as a PDF document. The Tutorial PDF corresponds to the online VoiceObjects Tutorial.
You can find the following PDF documents directly on the VoiceObjects CD:
VO-Installation.pdf (Installation Guide)
VO-FAQ.pdf (Frequently Asked Questions)
VO-Troubleshooting.pdf (Troubleshooting Instructions)

All other PDF documents are available as follows:

From within VoiceObjects Desktop:


On the Help menu of the main VoiceObjects Desktop window, click VoiceObjects
Documentation.

From outside VoiceObjects Desktop:


Using Windows
In your VoiceObjects installation folder you can find the complete PDF documents in the folder
\docs.
Using UNIX/Linux
In your VoiceObjects installation directory you can find the complete PDF documents in the
folder /docs.

Overview VoiceObjects Documentation


The following documents are available in VoiceObjects in PDF format.

Note: The content of the PDF documents completely corresponds to the online VoiceObjects Help, except for the Installation Guide, which is only available as a PDF document. The Tutorial PDF corresponds to the online VoiceObjects Tutorial.


Name of the Guide (Document name, Document Number): Content

Administration Guide (VO-Administration.pdf, E-006-20070330-VO7):
Description of how to administrate your VoiceObjects installation, including user management.

Analyzer Guide (VO-Analyzer.pdf, E-005-20070330-VO7):
Description of VoiceObjects Analyzer, the Web-based service analysis environment for standard and customized real-time analysis.

Contacting VoiceObjects (VO-Contact.pdf, E-017-20070330-VO7):
Information on how to contact specific areas of the VoiceObjects company, including VoiceObjects Headquarters and Offices, Technical Support and Training.

Deployment Guide (VO-Deployment.pdf, E-009-20070330-VO7):
Comprehensive description of all elements of deployment and monitoring in the VoiceObjects platform, using the graphical Control Center, the Web Services Interface, the Command Line Interface, or SNMP.

Design Guide (VO-Design.pdf, E-007-20070330-VO7):
Introduction to the design and development of high quality applications using VoiceObjects.

Desktop Guide (VO-Desktop.pdf, E-008-20070330-VO7):
Comprehensive description of all elements of VoiceObjects Desktop, the Web-based graphical design and monitoring environment of VoiceObjects.

Frequently Asked Questions (VO-FAQ.pdf, E-025-20070330-VO7):
Provides solutions for the most common issues with VoiceObjects products.

Glossary (VO-Glossary.pdf, E-010-20070330-VO7):
Explanation of VoiceObjects specific terms as well as general technical terms related to conversational technology.

How To Learn More (VO-Help.pdf, E-018-20070330-VO7):
Overview on all resources available with VoiceObjects that help you to get started and for ongoing support.

Infostore Guide (VO-Infostore.pdf, E-016-20070330-VO7):
Detailed description of Infostore, the logging component of VoiceObjects Server.

Installation Guide (VO-Installation.pdf, E-003-20070330-VO7):
Step-by-step guide through the installation of the VoiceObjects platform.

Object Reference (VO-ObjectReference.pdf, E-004-20070330-VO7):
Detailed description of how to configure each dialog object with VoiceObjects Desktop as well as using VoiceObjectsXML.

Prime Insurance Primer (VO-PrimeInsurancePrimer.pdf, E-011-20070330-VO7):
Overview of the Prime Insurance sample application provided with VoiceObjects.

Storyboard Manager (VO-StoryboardManager.pdf, E-020-20070330-VO7):
Explanation of the functionality of the Storyboard Manager – a tool to view, modify, and print the storyboard, which provides a detailed list of all prompts used in a voice application.

Studio Guide (VO-Studio.pdf, E-019-20070330-VO7):
Detailed feature description of VoiceObjects Studio – the client-side application development environment through any Eclipse-based IDE.

Troubleshooting Instructions (VO-Troubleshooting.pdf, E-023-20070330-VO7):
General guidelines for troubleshooting when working with VoiceObjects products.

Tutorial (VO-Tutorial.pdf, E-002-20070330-VO7):
Step-by-step instruction describing how to get started with VoiceObjects Desktop and how to build and run simple voice applications.

Web Services Guide (VO-WebServices.pdf, E-021-20070330-VO7):
Detailed description of the functionality of the Web Service Interface (WSI), and explanation of how to develop and deploy applications through it.

XDK Guide (VO-XDK.pdf, E-013-20070330-VO7):
Description of XDK and the XML-based application markup language VoiceObjectsXML.


Contacting VoiceObjects

About VoiceObjects
VoiceObjects is redefining over-the-phone customer service for global enterprises and carriers.
By delivering adaptive, cost-effective self-service phone portals, VoiceObjects enables
organizations to personalize each caller’s experience, to integrate phone self-service into
comprehensive customer experience strategies, and to manage the complexity of the world’s
most sophisticated phone applications. VoiceObjects’ award-winning phone application server
software is used by leading companies including Adobe, T-Mobile and Volkswagen Financial
Services and provides personalized customer service experiences to more than 500 million
callers each year.
VoiceObjects Product Family
VoiceObjects provides enterprises with an open and flexible infrastructure to efficiently create,
deploy, manage, and analyze self-service phone portals. The VoiceObjects product family
consists of the following products:
• VoiceObjects Server
VoiceObjects Server, a phone application server, is the central component within the
VoiceObjects product family for service execution and management. It enables highly
scalable, carrier-grade deployment and management of personalized, over-the-phone self-
service applications. VoiceObjects Server generates dialogs dynamically at call-time,
provides an application execution environment for online updates and rollbacks, supports
applications over voice, video, text and Web channels, and enables enterprise integration
through Service Oriented Architecture (SOA). VoiceObjects Server supports all major voice
and text browsers, VXML-based IVRs for voice and USSD-based browsers for text-based
applications.
• VoiceObjects Analyzer
VoiceObjects Analyzer is the Web-based service analysis environment within the
VoiceObjects product family. VoiceObjects Analyzer interacts with data written to Infostore
in real time, providing an up-to-the-minute graphical view of the status of the system. Pre-
configured reports allow instant analysis of the most prevalent questions.
• VoiceObjects Desktop
VoiceObjects Desktop is the easy-to-use Web interface for creating, testing, deploying and
monitoring applications.
• VoiceObjects Studio
VoiceObjects Studio is the client-side application development environment through any
Eclipse-based IDE (Integrated Development Environment).
VoiceObjects Server supports all major operating systems, relational database systems, Web
application servers, and media platforms.
VoiceObjects is committed to open standards, such as VoiceXML, and is a member of the W3C
and the VoiceXML Forum.
VoiceObjects is headquartered in San Mateo, Calif. and has subsidiaries in Germany and the
United Kingdom.
For more information visit www.VoiceObjects.com.

VoiceObjects Offices
Corporate Headquarters:

Address: VoiceObjects Inc.


1875 South Grant Street, Suite 720
San Mateo CA 94402

Phone: +1 (650) 288 0299

Fax: +1 (650) 525 9414

Internet: www.voiceobjects.com

EMEA Headquarters:

Address: VoiceObjects GmbH


Friedrich-Ebert-Strasse
D-51429 Bergisch Gladbach
Germany

Phone: +49 (2204) 845 100

Fax: +49 (2204) 845 101

Internet: www.voiceobjects.com

United Kingdom Office:

Address: VoiceObjects Ltd


1 Northumberland Avenue
Trafalgar Square
London WC2N 5BW

Phone: +44 (0)870 351 6748

Fax: +44 (0)870 351 6749

Internet: www.voiceobjects.com

Technical Support
The VoiceObjects Technical Support team is available to assist customers and partners with
software installation, sizing recommendations, troubleshooting, enhancement requests and
other forms of assistance in using VoiceObjects products.
You can contact VoiceObjects Technical Support by visiting the support Web site, contacting the support team by e-mail, or calling us with your questions.

Note: Refer to the terms of your software license agreement or support contract to determine
the level of support available to you.

Technical Support Americas:

Hours of Operation: Monday to Friday


9:00 A.M. to 6:00 P.M. PST
(excluding national holidays)

Phone: +1 (650) 288 0310

Fax: +1 (650) 525 9414

E-mail: support@voiceobjects.com

Internet: www.voiceobjects.com/support

Technical Support EMEA:

Hours of Operation: Monday to Friday


9:00 A.M. to 5:00 P.M. CET
(excluding public holidays)

Phone: +49 (2204) 845 190

Fax: +49 (2204) 845 191

E-mail: support@voiceobjects.com

Internet: www.voiceobjects.com/support

Training
VoiceObjects training courses range from an overview of VoiceObjects Server to specialized classes covering nearly all topics and levels, with a technical, design, or sales focus. Additionally, VoiceObjects Partners can become “VoiceObjects Certified”.
Training courses are held at a VoiceObjects University Training Center (see below) or at your premises. Our standard training program is continually extended to include new offerings. In addition, we are happy to quote customized courses to meet your individual requirements, whether at a partner or customer site or over the Internet.

Technical track:
• VoiceObjects Phone Application Server
(2 days)
• Development of Voice Applications for VoiceObjects Server
(3 days)
• Development of Text and Mobile Web Applications for VoiceObjects Server
(1 day)
• VoiceObjects Best Practices
(3 days)
• Infostore and VoiceObjects Analyzer
(2 days)
• VoiceObjects XDK
(1 day – upon request)
• Installation and Maintenance of VoiceObjects
(2 days)
• Operation of VoiceObjects
(1 day – upon request)
• VoiceObjects Release Update
(1 day – upon request)

Design track:
• User Interface Design for Phone Channels
(1 day – upon request)
• Phone User Interface Design for Text and Mobile Web Applications
(1 day)
• Phone User Interface Design for Voice Applications
(4 or 5 days)

Partner Sales track:
• VoiceObjects Sales
(1 day – upon request)
• VoiceObjects Presales
(2 days – upon request)

Certification workshops:
• VoiceObjects Certified Application Developer (Voice)
(1 day)
• VoiceObjects Certified System Administrator
(1 day)
• VoiceObjects Certified Sales Engineer
(1 day)

Note: For a current training schedule or registration, please refer to our Web site or contact us via e-mail.

VoiceObjects University Training Center Americas

Address: VoiceObjects Inc.


1875 South Grant Street, Suite 720
San Mateo CA 94402

Phone: +1 (650) 288 0299

Fax: +1 (650) 525 9414

E-mail: training-us@voiceobjects.com

Internet: www.voiceobjects.com/training

VoiceObjects University Training Center EMEA

Address: VoiceObjects GmbH


Friedrich-Ebert-Strasse
D-51429 Bergisch Gladbach
Germany

Phone: +49 (2204) 845 100

Fax: +49 (2204) 845 101

E-mail: training-emea@voiceobjects.com

Internet: www.voiceobjects.com/training

Documentation Feedback
If you have any comments on the VoiceObjects documentation, you are welcome to send your feedback to vo-documentation@voiceobjects.com.
Please include the following information with your feedback:
• Version number of your VoiceObjects software
• Topic title (for VoiceObjects Help) or book title, chapter title, and page number (for printed documentation)
• Your suggestion on how to improve the documentation

Note: This e-mail address is only for documentation feedback; you will not receive a reply.
