General Information and Planning - IVR
WebSphere Voice Response for AIX with DirectTalk
Technology
General Information and Planning
Version 6.1
GC34-7084-04
Note
Before using this information and the product it supports, read the general information under “Notices” on
page 203.
This edition applies to Version 6, Release 1 of IBM WebSphere Voice Response for AIX with DirectTalk Technology
(program number 5724-I07), and to all subsequent releases and modifications until otherwise indicated in new
editions. Make sure you are using the correct edition for the level of the product.
© Copyright IBM Corporation 1991, 2011.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Figures
Tables

About this information
   Typographic conventions
   Accessibility
   Notes on terminology
   Where to find more information
      Useful Web sites
   Making comments on this book

What's changed in WebSphere Voice Response for AIX Version 6.1?
   AIX support upgraded to Version 6.1
   Improved cache control
   Enhanced VoiceXML and CCXML application support for call information
   Enhanced CCXML support
   Support for DB2 Version 9.5
   Enhanced Genesys CTI support
   Java support upgraded to Version 6.0
   User-configurable logging for monitoring of VoiceXML and CCXML applications
   Enhanced application support for SIP headers
   Additional VRBE problem determination utility
   Improved speech technology
   Using multiple recognition contexts
   SIP register support
   Enhanced trombone support
   Product packaging

Part 1. Introducing WebSphere Voice Response

Chapter 1. The benefits of voice applications
   Where does WebSphere Voice Response add value?
      As the voice access channel for application server on demand solutions
      As a solution for contact and call centers
      As a platform for a service provider of on demand voice services
   Voice applications in the real world
      Example 1: Handling increasing numbers of customer requests
      Example 2: Excellent customer service with low cost
   How voice response technology can help your business
      Supply chain management
      Financial institutions
      Transportation industry
      Service industries
      Information providers
      Government agencies
      Educational institutions
      Mobile workforce and telecommuting
      Telephone operating companies
      Enterprise Voice Portals and the Internet
   WebSphere Voice Response services
      Automated attendant
      Telephone access to multiple systems and applications
      Voice response
      Fax response
      Voice mail
      Transaction-related voice messaging
      Coordinated voice and data transfer
      Access to paging systems
      Automated outbound calling
      Intelligent peripheral
   What voice response applications do
      Inbound calls
      Outbound calls
      Transferring calls
      Voice messaging
      Information access
   Summarizing WebSphere Voice Response voice application capabilities

Chapter 2. How WebSphere Voice Response applications work
   Developing applications for WebSphere Voice Response
      CCXML overview
      VoiceXML overview
      Java overview
      State tables overview
      Integrating different programming models
      Application development tools for CCXML, VoiceXML and Java
   Using CCXML applications
      How is an incoming call handled by CCXML?
      Sequence of events in a CCXML application
      How does the caller interact with the CCXML application?
      How does the CCXML browser access CCXML documents?
      The benefits of CCXML
   Using VoiceXML applications
      How is an incoming call handled by VoiceXML?
      What controls the sequence of events in a VoiceXML application?
      How does the caller interact with the VoiceXML application?
      How do you specify what the VoiceXML application says?
      How is the spoken output for VoiceXML applications stored?
      How do VoiceXML applications access information?
      Integration and interoperability of VoiceXML applications
      The benefits of VoiceXML
   Using Java applications
      How is an incoming call handled by Java?
      What controls the sequence of events in a Java application?
      How does the caller interact with the Java application?
      How do you specify what the Java application says?
      How is the spoken output for Java applications stored?
      How do Java applications access information?
      Integration and interoperability of Java applications
      The benefits of Java
   State table applications
      How is an incoming call handled by state tables?
      What controls the sequence of events in a state table application?
      System variables
      How do you specify what the state table application says?
      How state table voice applications handle voice messages
      Integration and interoperability of state tables
      Application development tools for state tables
      The benefits of state tables and custom servers
   How voice applications access other resources
      Speech recognition
      Text-to-speech
      How does WebSphere Voice Response send fax output?
      How does WebSphere Voice Response interact with a TDD?
      How does WebSphere Voice Response play background music?
      How WebSphere Voice Response performs call tromboning
      Analog Display Services Interface (ADSI) support
   Planning and designing voice applications
   Creating the voice output for applications
      National language support
      Importing prerecorded voice data for state table applications
      Recording voice segments
      Text-to-speech
   Key facts about components of voice applications
      General
      CCXML
      VoiceXML
      Java
      State tables
      Accessing other resources

Chapter 3. Using WebSphere Voice Response
   The graphical user interface
      Access
      Configuration
      Operations
      Applications (state tables only)
      Help
   Other tools for system and application management
      System management
      Application management
   Key facts about using WebSphere Voice Response

Part 2. Planning to install WebSphere Voice Response

Chapter 4. Telephone network
   Planning the telephony environment
      Connection to the telephone network
      Channel associated signaling
      Coexistence of signaling protocols
      Channel bank
      Channel service unit
      Address signaling support
      Exchange data link
      Common channel signaling
      Voice over IP
      Supporting other signaling protocols
      Integrating WebSphere Voice Response with Genesys Framework
      Integrating WebSphere Voice Response with Cisco ICM software
      Fax connection requirements
      Using ADSI telephones
   Choosing the application to answer incoming calls
      Dialed number information (DID or DNIS)
      Common channel signaling
      CallPath Server
      Exchange data link
      Channel identification
   Estimating telephony traffic
      People you need
      Telephony traffic information
      Calculating telephony traffic
      Determining a blockage rate
      Estimating the number of channels needed
      Additional considerations
   Planning the switch configuration
      When the switch has no queuing
      When the switch has queuing
      Other switch feature planning issues
      Switch configuration questions

Chapter 5. Workstation and voice processing
   Minimum requirements
   Recommended requirements
   Prerequisite and associated software products
      WebSphere Voice Response software
      DB2 support
      Associated products
      Channel increments
      Migration from previous releases
   Licensing WebSphere Voice Response software
      The WebSphere Voice Response licensing model
      The network licensing environment
      How many licenses do I need?
   Hardware requirements
      BladeCenter computer
      System p5 and pSeries computer
      Telephony hardware
      Optional hardware
      Displays
      Keyboard and mouse
      Machine-readable media
      Printer
   Location planning
      Physical dimensions
      Environment
   Memory and storage planning
      How much memory?
      How much disk space?
   Requirements for CCXML, VoiceXML and Java applications
      Size of processor
      Amount of memory
      Number of channels
      Java garbage collection

Chapter 6. Scalability with WebSphere Voice Response
   Scalable CCXML and VoiceXML configurations
   Scalable Java configurations
   What is a single system image (SSI)?
   Planning a single system image
   Migrating from a stand-alone system to a single system image
   Custom servers in a single system image

Chapter 7. Data communications network
   Network requirements
   Network planning for remote information access
      Attaching the pSeries computer to a remote host system

Chapter 8. Summary
   Let's talk
      Publications
      WebSphere Voice Response support
   Planning checklist
      Voice applications
      Telephony connectivity
      Data communications
   Summary of planning tasks
   Summary of requirements

Part 3. Appendixes

Appendix. WebSphere Voice Response language support

Notices
   Trademarks

Glossary

List of WebSphere Voice Response and associated documentation
   WebSphere Voice Response software
   IBM hardware for use with WebSphere Voice Response
   WebSphere Voice Response related products
      WebSphere Voice Server
      Unified Messaging for WebSphere Voice Response
      AIX and the IBM pSeries computer
      HACMP
      SS7
      Integrated Services Digital Network
      Bellcore Specifications for ADSI Telephones

Index
Figures

1. The Software Company's customer contact system
2. Useful Utilities' contact center
3. Integrating different programming models from CCXML
4. Integrating different programming models from VoiceXML
5. Integrating different programming models from Java
6. Editing VoiceXML using the Communication Flow Builder
7. Editing CCXML using the Voice Toolkit editor
8. E-business application model
9. Adding voice to e-business
10. The components of a voice application using a state table environment
11. A speech recognition environment
12. Two people communicating using Telecommunications Devices for the Deaf
13. How WebSphere Voice Response interacts with a Telecommunications Device for the Deaf
14. Using background music
15. Example flowchart for a voice application that accepts key input
16. WebSphere Voice Response Welcome window
17. Pack Configuration window
18. The System Monitor window
19. The Object Index, the Applications window, and an Application window
20. The icon view of a state table
21. The channel bank converts analog to digital signals
22. The exchange data link connection
23. ISDN as an access protocol
24. ISDN B-Channels and D-Channels
25. Attaching WebSphere Voice Response as an intelligent peripheral in North America
26. Using E1 ISDN trunks
27. Using T1 ISDN trunks without NFAS
28. Using T1 ISDN trunks with NFAS
29. Using T1 ISDN trunks with NFAS and D-channel backup
30. Attaching WebSphere Voice Response in a telephone network
31. Attaching multiple WebSphere Voice Response systems in a telephone network
32. An example of a VoIP network
33. The role of a signaling process in exchange data link signaling
34. The role of a signaling process in ISDN signaling
35. WebSphere Voice Response connection to the Genesys Framework using state tables
36. WebSphere Voice Response connection to the Genesys Framework using VoiceXML and CCXML
37. WebSphere Voice Response and a private switch with queuing
38. Another way to integrate WebSphere Voice Response into a switch queue
39. VoiceXML applications using WebSphere Voice Response to access speech recognition servers
40. Accessing speech technologies using VoiceXML browsers on the same machine as WebSphere Voice Response
41. Accessing speech technologies using VoiceXML browsers on separate machines to WebSphere Voice Response
42. Java applications running on separate systems
43. Integrating WebSphere Voice Response with WebSphere application server
44. A stand-alone WebSphere Voice Response system
45. A small single system image
46. A large single system image
47. Data communications network attachment (example A)
48. Data communications network attachment (example B)
49. Data communications network attachment (example C)
50. Data communications network attachment (example D)
Tables

1. Functions provided by T1 CAS protocols
2. Functions provided by E1 CAS protocols
3. Functions provided by ISDN protocols
4. How WebSphere Voice Response supports ISDN protocols
5. Switch types and line signaling protocols that can be used with D-channel backup
6. Number of channels required according to traffic and blockage rate
7. Minimum configuration for WebSphere Voice Response
8. Recommended configuration for WebSphere Voice Response
9. Licensed program products required
10. Licensing of WebSphere Voice Response components
11. Digital trunk adapters supported by System p5, pSeries and RS/6000 models used with WebSphere Voice Response
12. Adapter and pack requirements
13. Storage required for voice messages
14. Storage required for compressed voice segments
15. Storage required for uncompressed voice segments
16. WebSphere Voice Response custom servers in a single system image
17. Summary of planning tasks
18. Summary of software requirements
19. Summary of hardware requirements
20. Language support in WebSphere Voice Response for AIX
About this information
This information provides an overview of the IBM® WebSphere® Voice
Response for AIX® with DirectTalk® Technology voice processing system. It
describes the functions that are available in the product, and can help you
decide whether it satisfies your voice processing needs. When you have
ordered WebSphere Voice Response, the book will be useful background
reading for anyone who has to install or configure the product, provide
system administration support, or design and write voice applications.
If you are a newcomer to WebSphere Voice Response, start with Chapter 1,
“The benefits of voice applications,” on page 3, which tells you about:
v Voice processing with WebSphere Voice Response
v What WebSphere Voice Response applications do
v How WebSphere Voice Response applications work
v Using WebSphere Voice Response
If you are already using a previous level of either WebSphere Voice
Response or DirectTalk for AIX, start with “What's changed in WebSphere
Voice Response for AIX Version 6.1?” on page xv, which tells you about the
new functions and other enhancements that are available in this release.
If you are preparing for the installation of WebSphere Voice Response, read
Part 2, “Planning to install WebSphere Voice Response,” on page 81, which
includes information about:
v Connecting WebSphere Voice Response to the telephone network
v Setting up your System p5®, pSeries® or BladeCenter® computer, together
with the associated voice processing hardware and software
v Configuring WebSphere Voice Response either to use VoiceXML
applications with a Web server, or as a single system image (SSI) running
state table applications
v Configuring your WebSphere Voice Response systems
v Connecting WebSphere Voice Response to an SNA network
At the end of the book, there is a summary in the form of a planning
checklist, together with a list of planning tasks and a summary of hardware
and software prerequisites.
Typographic conventions
This book uses the following typographic conventions:
boldface
Identifies an item that is in a WebSphere Voice Response window. The
item might be a keyword, an action, a field label, or a pushbutton.
Whenever one of the steps in a procedure includes a word in
boldface, look in the window for an item that is labeled with that
word.
boldface italics
Are used for emphasis. Take extra care wherever you see bold italics.
italics
Identify one of the following:
v New terms that describe WebSphere Voice Response components or
concepts. A term that is printed in italics is usually followed by its
definition.
v Parameters for which you supply the actual names or values.
v References to other books.
monospace
Identifies one of the following:
v Text that you type in an AIX window. Because AIX is case sensitive,
ensure that you type the uppercase and lowercase characters exactly
as shown.
v Names of files and directories (path names).
Accessibility
WebSphere Voice Response for AIX is a voice application enabler. The
applications that are developed to run on WebSphere Voice Response provide
telephone access to business data and services. In this way, WebSphere Voice
Response provides accessibility for people who cannot access the data and
services by using regular Web pages or traditional graphic interfaces. These
telephone user interfaces are fully accessible to people who are blind or have
low vision and, if speech recognition is used, to people with mobility
impairments or limited hand use. Speech recognition capability can be
provided by products such as IBM WebSphere Voice Server. In addition,
support for users of Telecommunications Devices for the Deaf (TDD) is
provided as part of the WebSphere Voice Response product.
With WebSphere Voice Response you can perform many application
development and system administration tasks with a text editor or line
commands—these are accessible if you use a screen reader product to
interface with them. Also, the default settings of the WebSphere Voice
Response graphical user interface can be changed to produce large fonts and
high contrast colors. Details of how to use these accessibility features can be
found in the WebSphere Voice Response for AIX: User Interface Guide book.
Alternatively, application development can be done with Java or VoiceXML
development tools that are supplied by IBM and third parties.
You can also use a screen-reader product to access the WebSphere Voice
Response publications in HTML format (for details of their availability see
“List of WebSphere Voice Response and associated documentation” on page
241).
Notes on terminology
v A glossary of commonly used terms is at the end of this book.
v The full product name of WebSphere Voice Response for AIX with DirectTalk
Technology is generally abbreviated in this book to WebSphere Voice Response.
v The term pSeries is used generically in this book to refer both to PCI-based
RS/6000® computers and to appropriate models of the System p5 and
pSeries ranges. (Consult your IBM representative for details of models that
are supported for use with WebSphere Voice Response.) RS/6000 computers
with an MCA bus are not supported.
v The IBM Quad Digital Trunk Telephony PCI Adapter is generally referred to in
this book by its abbreviation DTTA. This adapter is a replacement for the
IBM ARTIC960RxD Quad Digital Trunk PCI Adapter, which is generally
referred to by the abbreviation DTXA. The DTXA is not supported with
WebSphere Voice Response Version 6.1.
v References made to the VoiceXML 2.1 specification are intended to include
VoiceXML 2.0 unless otherwise specified.
Where to find more information
The information provided in the WebSphere Voice Response library will help
you complete WebSphere Voice Response tasks more quickly. A complete list
of the available publications and where you can obtain them is shown in “List
of WebSphere Voice Response and associated documentation” on page 241.
Useful Web sites
The following Web sites are useful sources of information about WebSphere
Voice Response and related products:
WebSphere Voice Response
http://www.ibm.com/software/pervasive/voice_response_aix/
IBM WebSphere developerWorks resources (including WebSphere Voice
products)
http://www.ibm.com/developerworks/websphere/zones/voice/
VoiceXML Version 2.0 and 2.1 specifications
http://www.w3.org/TR/voicexml21/
http://www.w3.org/TR/voicexml20/
CCXML Version 1.0 specification
http://www.w3.org/TR/2011/PR-ccxml-20110510/
Genesys
For more information on Genesys products go to the Genesys Web
site at http://www.genesyslab.com
Making comments on this book
If you especially like or dislike anything about this book, feel free to send us
your comments.
You can comment on what you regard as specific errors or omissions, and on
the accuracy, organization, subject matter, or completeness of this book. Please
limit your comments to the information that is in this book and to the way in
which the information is presented. Speak to your IBM representative if you
have suggestions about the product itself.
When you send us comments, you grant to IBM a nonexclusive right to use or
distribute the information in any way it believes appropriate without
incurring any obligation to you.
You can get your comments to us quickly by sending an e-mail to
[email protected]. Alternatively, you can mail your comments to:
User Technologies,
IBM United Kingdom Laboratories,
Mail Point 095, Hursley Park,
Winchester, Hampshire,
SO21 2JN, United Kingdom
Please ensure that you include the book title, order number, and edition date.
What's changed in WebSphere Voice Response for AIX
Version 6.1?
There are significant enhancements to WebSphere Voice Response for AIX in
this release, including:
v AIX support upgraded to Version 6.1
– WebSphere Voice Response for AIX Version 6.1 supports the use of AIX
Version 6.1 as its operating system.
– All WebSphere Voice Response AIX device drivers converted to 64-bit
operation.
See “AIX support upgraded to Version 6.1” on page xvii for more
information.
v Improved cache control
– You can display or expire the contents of the WebSphere Voice Response
CCXML, VoiceXML, and voice segment caches.
– You can also select one or all documents or voice segments in a cache to
expire.
See “Improved cache control” on page xvii for more information.
v Enhanced VoiceXML and CCXML application support for call information
– Support for individual call data session variables for the SIP, ISDN and
SS7 protocols.
See “Enhanced VoiceXML and CCXML application support for call
information” on page xvii for more information.
v Enhanced CCXML support
– Call Control XML (CCXML) runtime support for most features described
in the latest version of the CCXML specification.
See “Enhanced CCXML support” on page xviii for more information.
v ECMAScript support
You can now configure WebSphere Voice Response to use either version 1.3
or version 1.7 of ECMAScript. To maintain backwards compatibility with
previous WebSphere Voice Response applications, version 1.3 remains the
default. Refer to “Using ECMAScript” in WebSphere Voice Response for AIX:
Using the CCXML Browser for details.
v Support for DB2 Version 9.5
See “Support for DB2 Version 9.5” on page xviii for more information.
v Enhanced Genesys CTI support
– Support for the Genesys I-Server interface now includes the Route
Request/Response API.
– Call allocation compatibility with the Genesys supplied D2IS custom
server.
See “Enhanced Genesys CTI support” on page xviii for more information.
v Support for IPV6 IP addresses
v Java support upgraded to Version 6.0
– WebSphere Voice Response for AIX Version 6.1 supports the use of Java
Version 6.0 for CCXML, VoiceXML, or Java applications.
See “Java support upgraded to Version 6.0” on page xix for more
information.
v User-configurable logging for monitoring of VoiceXML and CCXML
applications
– Ability to configure VoiceXML and CCXML application logging to allow
real-time monitoring of such applications.
See “User-configurable logging for monitoring of VoiceXML and CCXML
applications” on page xix for more information.
v Enhanced application support for SIP headers
– Call-ID and Call-Info SIP headers are now supported by VoiceXML and
CCXML application call data session variables.
See “Enhanced application support for SIP headers” on page xix for more
information.
v Improved VRBE tracing
See “Additional VRBE problem determination utility” on page xx for more
information.
v Improved speech technology
– Support for MRCP-V1.0-compliant speech technologies such as Nuance
Speech Server, Loquendo Speech Server, and WebSphere Voice Server.
– State table applications can now access MRCP text-to-speech servers.
– Additional ‘proprietary’ DTMF barge-in detection method.
– Remote DTMF grammar detection and compilation support.
– Support for some vendor-specific VoiceXML properties.
– MRCP V1.0 support for recording end-pointed audio with Nuance
Recognizer 9.
See “Improved speech technology” on page xx for more information.
v Support for multiple recognition contexts
WebSphere Voice Response now supports the use of multiple recognition
contexts for speech recognition in VoiceXML applications. See “Using
multiple recognition contexts” on page xxi for more information.
v SIP Register support
See “SIP register support” on page xxi for more information.
v Enhanced trombone support
See “Enhanced trombone support” on page xxi for more information.
v Enhanced fax card support
WebSphere Voice Response now supports Brooktrout TR1034 digital PCI
fax cards providing 4, 16, or 30 fax channels. Previously, only the
30-channel card was supported. Refer to “Introducing Brooktrout Fax” in
WebSphere Voice Response for AIX: Fax using Brooktrout for more information.
v Product packaging
See “Product packaging” on page xxii for more information.
v Streamlined installation and migration from previous version
– Ability to install directly over previous version.
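As a brief illustration of the ECMAScript upgrade noted in the list above, a destructuring declaration is accepted by a version 1.7 interpreter but rejected as a syntax error at version 1.3. This sketch is generic JavaScript, not taken from a WebSphere Voice Response application:

```javascript
// Destructuring declaration: legal in ECMAScript/JavaScript 1.7,
// a syntax error under version 1.3.
var header = "From=sip:alice@example.com";
var [key, value] = header.split("=");

console.log(key);    // "From"
console.log(value);  // "sip:alice@example.com"
```

An application that relies on such constructs must be run with the version 1.7 setting; under the default 1.3 setting the document fails to parse.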
AIX support upgraded to Version 6.1
You can run WebSphere Voice Response for AIX Version 6.1 only with AIX
Version 6.1.
All WebSphere Voice Response AIX device drivers now operate in 64-bit
mode.
Improved cache control
WebSphere Voice Response for AIX Version 6.1 now has improved cache
control.
By running the dtjcache command you can:
v List the contents of one or all of the WebSphere Voice Response voice
segment, VoiceXML, or CCXML caches.
v Expire a single VoiceXML document or CCXML document, identified by
URL.
v Expire one voice segment identified by URL, or all voice segments in the
voice segment cache.
Refer to the section “dtjcache script” in the WebSphere Voice Response for AIX:
Deploying and Managing VoiceXML and Java Applications manual for details.
Enhanced VoiceXML and CCXML application support for call information
The specifications for both VoiceXML and CCXML define some
protocol-independent connection-related variables that applications can use
to retrieve data about the current connection. WebSphere Voice Response for
AIX Version 6.1 now supports these session variables for the SIP, ISDN, and
SS7 protocols.
Previously, the WebSphere Voice Response CCXML browser had some access
to protocol-delivered information through the transition event variables
$event.info.FROM_HDR and $event.info.TO_HDR, which provided signalling
information based on the contents of SV542 and SV541. All call header
information was listed within these variables as a series of key-value pairs
(KVPs). The VoiceXML browser had no direct access to these values.
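The protocol-independent connection variables are defined in the VoiceXML 2.0 specification, so a VoiceXML application can read them like any other session variables. A minimal sketch (the variable names are from the W3C specification; the values returned depend on the underlying SIP, ISDN, or SS7 protocol):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>
      <!-- Standard connection variables from the VoiceXML 2.0 specification -->
      <log>Caller URI: <value expr="session.connection.remote.uri"/></log>
      <log>Called URI: <value expr="session.connection.local.uri"/></log>
      <log>Protocol: <value expr="session.connection.protocol.name"/>
           version <value expr="session.connection.protocol.version"/></log>
    </block>
  </form>
</vxml>
```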
Enhanced CCXML support
Call Control XML (CCXML) runtime support now covers most features
described in the latest version of the CCXML specification at
http://www.w3.org/TR/2011/PR-ccxml-20110510/.
For details of CCXML elements and attributes supported refer to “Elements”
in WebSphere Voice Response for AIX: Using the CCXML Browser.
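As a brief illustration of the elements covered by that specification, a minimal CCXML document that accepts an incoming call, starts a dialog, and exits when the caller hangs up might look like the following sketch (element and event names are from the W3C CCXML specification; the dialog URI is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ccxml version="1.0" xmlns="http://www.w3.org/2002/09/ccxml">
  <eventprocessor>
    <!-- An incoming call is alerting: answer it -->
    <transition event="connection.alerting">
      <accept/>
    </transition>
    <!-- The call is connected: start a VoiceXML dialog (URI is illustrative) -->
    <transition event="connection.connected">
      <dialogstart src="'hello.vxml'"/>
    </transition>
    <!-- The caller hung up: end the CCXML session -->
    <transition event="connection.disconnected">
      <exit/>
    </transition>
  </eventprocessor>
</ccxml>
```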
Support for DB2 Version 9.5
A limited-use version of DB2 Workgroup Server Edition, Version 9.5 is
provided with WebSphere Voice Response for AIX Version 6.1. One limitation
enforced in this bundled DB2 is a maximum of 4 GB of instance memory. This
limit is significant only for, for example, very large voice messaging
solutions, for which the purchase of a full DB2 license may be advisable.
Other limitations are described in the DB2 license. For the majority of
WebSphere Voice Response for AIX users, none of these will be significant.
DB2 is provided only for the storage and management of data used by
WebSphere Voice Response for AIX, and if it is to be used by other
applications, a separate license must be purchased.
Enhanced Genesys CTI support
WebSphere Voice Response for AIX Version 6.1 support for the Genesys
I-Server interface now includes the Genesys Route Request/Response API.
An additional Genesys API function, which supports user data being set on
the request, is now available:
RouteRequest
This request is used to make the Genesys Universal Routing Server
(URS) route the call. It is used to signal that the call has finished
being processed by WebSphere Voice Response and should now be
routed.
For further information, refer to the section “Using advanced CTI
features” in the WebSphere Voice Response: VoiceXML Programmer's
Guide for WebSphere Voice Response manual.
This Genesys API function is accessed using the VXML object tag. Some
Genesys properties have to be configured before using this API function.
For better compatibility with the Genesys-supplied D2IS custom server,
WebSphere Voice Response can now be configured, using the configuration
parameter Inbound Call Channel Allocation Method, so that a call is allocated
to a specific “logical” trunk and channel as determined by a SIP switch or
gateway, ensuring the call is delivered and communicated as if it were a
physical endpoint.
Java support upgraded to Version 6.0
WebSphere Voice Response for AIX Version 6.1 supports the use of Java
Version 6.0 for CCXML, VoiceXML, and Java applications.
The 32-bit version of Java 1.6, SR7 or later is required.
Unless an address space greater than 4 GB is required, the 32-bit Java 6
JVM generally outperforms the 64-bit JVM in standard configurations.
User-configurable logging for monitoring of VoiceXML and CCXML applications
WebSphere Voice Response for AIX Version 6.1 can now be configured so that
VoiceXML and CCXML application logging using the <log> element is
redirected away from the binary log.x.log files (which are still used to log
other events). This allows real-time monitoring of VoiceXML and CCXML
applications.
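The <log> element itself is standard VoiceXML; the following sketch shows an application writing monitoring output with it. (The redirection away from the binary log files is configured in WebSphere Voice Response, not in the document; the label value here is illustrative.)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <field name="account" type="digits">
      <prompt>Please enter your account number.</prompt>
      <filled>
        <!-- With redirection configured, this output can be monitored in
             real time instead of going to the binary log.x.log files -->
        <log label="monitor">Account entered: <value expr="account"/></log>
      </filled>
    </field>
  </form>
</vxml>
```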
Enhanced application support for SIP headers
The specification for SIP includes definitions for the Call-ID header which
contains a unique call identifier, and also the Call-Info header which provides
additional information about the caller or callee, depending on whether it
relates to a request or response. WebSphere Voice Response for AIX Version
6.1 now supports these headers for an INVITE request, allowing the header
information to be used in state table, VoiceXML, and CCXML applications.
The SIP INVITE To and From headers are always processed by Version 6.1, but
the processing of the other SIP headers described above is controlled by a
configuration file.
Additional VRBE problem determination utility
A new WebSphere Voice Response utility is available to system administrators
that enables them to collect a dtbeProblem output (or run any other
command) automatically when an error or other message is reported in the
VRBE log (.log) files or WebSphere Voice Response trace (.trc) files.
Refer to the section “dtjlogmon script” in the WebSphere Voice Response:
Deploying and Managing VoiceXML and Java Applications manual for details.
Improved speech technology
WebSphere Voice Response for AIX Version 6.1 supports speech recognition
and text-to-speech for VoiceXML applications using speech servers compliant
with Media Resource Control Protocol (MRCP) V1.0, such as WebSphere Voice
Server 5.1, Nuance Speech Server version 5.1.2 (Recognizer 9.0.13, Vocalizer
5.0.2), and Loquendo Speech Server V7.
The WebSphere Voice Response for AIX MRCP client state table API provides
a means by which WebSphere Voice Response state table applications can
access MRCP text-to-speech synthesizer resources attached to the WebSphere
Voice Response client over an IP network. Refer to “The WebSphere Voice
Response for AIX MRCP state table API” in the WebSphere Voice Response for
AIX: MRCP for State Tables for more information.
In addition to the speech barge-in detection method, WebSphere Voice
Response now supports a new dtmf_only ‘proprietary’ barge-in detection
method which stops audio output only after a user has pressed any DTMF
key. Refer to the section “Comparing barge-in detection methods” in the
WebSphere Voice Response for AIX: VoiceXML Programmer’s Guide for more
information.
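For comparison, barge-in behavior in standard VoiceXML is controlled with the bargein and bargeintype properties, as in the sketch below. The dtmf_only method above is WebSphere Voice Response-specific; the way it is selected is described in the VoiceXML Programmer's Guide, and only the standard property values are shown here.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- Standard VoiceXML barge-in control: allow interruption, detected
       either on speech energy ("speech") or on a grammar match ("hotword") -->
  <property name="bargein" value="true"/>
  <property name="bargeintype" value="speech"/>
  <form>
    <field name="choice" type="boolean">
      <prompt>Say yes or no, or press a key to interrupt this prompt.</prompt>
    </field>
  </form>
</vxml>
```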
For SIP calls, it is now also possible to configure WebSphere Voice Response
for AIX so that DTMF detection and DTMF grammar compilation are
performed remotely by a speech server rather than by WebSphere Voice
Response for AIX. Refer to the section “Remote DTMF grammars” in the
WebSphere Voice Response for AIX: VoiceXML Programmer’s Guide for more
information.
Some proprietary, vendor-specific VoiceXML properties are also supported.
Properties that match a given pattern can be passed through from a VoiceXML
document and then sent to a speech server in an MRCP SET-PARAMS message. Refer
to the section “VoiceXML elements and attributes” in the WebSphere Voice
Response for AIX: VoiceXML Programmer’s Guide for more information.
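Such properties are set with the standard VoiceXML <property> element. In the following sketch the property name and grammar file are purely illustrative, not actual vendor parameters; the naming pattern that WebSphere Voice Response passes through to SET-PARAMS is given in the VoiceXML Programmer's Guide.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- Hypothetical vendor-specific property: the name and value shown
       here are illustrative, not an actual Nuance or Loquendo parameter -->
  <property name="vendor.example.parameter" value="example-value"/>
  <form>
    <field name="city">
      <prompt>Which city?</prompt>
      <!-- Illustrative grammar reference -->
      <grammar src="cities.grxml" type="application/srgs+xml"/>
    </field>
  </form>
</vxml>
```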
Using multiple recognition contexts
Advanced natural language solutions can be developed using speech
technologies (such as Nuance Recognizer 9) that can return a list of likely
speech recognition results when a speech utterance contains multiple
recognition contexts. This allows multiple interpretations of such an
utterance to be handled. Refer to “Using multiple result grammars” in the
WebSphere Voice Response for AIX: VoiceXML Programmer's Guide for details.
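In standard VoiceXML, an application can examine the list of likely results through the application.lastresult$ array, as in this sketch (the variable names are from the VoiceXML 2.0 specification; the grammar file is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <field name="request">
      <prompt>How can I help you?</prompt>
      <!-- Illustrative grammar reference -->
      <grammar src="requests.grxml" type="application/srgs+xml"/>
      <filled>
        <!-- application.lastresult$ holds up to maxnbest results, each
             with utterance, confidence, inputmode, and interpretation -->
        <log>Best: <value expr="application.lastresult$[0].utterance"/>
             (confidence <value expr="application.lastresult$[0].confidence"/>)</log>
        <log>Candidates returned: <value expr="application.lastresult$.length"/></log>
      </filled>
    </field>
  </form>
</vxml>
```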
SIP register support
WebSphere Voice Response for AIX now supports a SIP register service that
registers a list of ‘usernames’ to one or more SIP registrars as defined in
user-configurable files. A SIP registrar is used to process SIP address or URI
information from a user agent. This information is stored in a location service
and can be later interrogated to determine how to route calls to the user. It
can also be used for load balancing and providing system redundancy. The
user-configurable files contain registrar information with an associated list of
user names in one or more files, and can override default values for port,
timeout of registration, user agent, and priority. These registrations are
triggered by enabling one or more VoIP trunks. For further information, refer
to the section “Using a SIP register” in the WebSphere Voice Response: Voice
over IP using Session Initiation Protocol manual.
A username can be specified in several different ways.
v For a 480-channel system, each available channel can be defined as an
individual username, for example, 001@mywvr to 480@mywvr.
v To represent individual applications (equivalent to VRBE NumToApp
application mappings), by registering one or more applications that can be
answered by a particular machine. For example, 12345@myWVR identifies
that the WebSphere Voice Response system can handle calls for user 12345
and has an equivalent NumToApp mapping defined in the default.cff
configuration file.
v A single username registered to represent the machine as available, for
example: [email protected]
Enhanced trombone support
The existing WebSphere Voice Response trombone functionality for call
transfer has been improved to allow an agent to whom a call has been
transferred to return the call to an IVR application after the interaction
with the caller is complete.
This feature is configured using the TromboneTerminationValue trombone
custom server configuration parameter. For further information, refer to the
section “Custom server function definitions” in the WebSphere Voice
Response: Designing and Managing State Table Applications manual.
Product packaging
The WebSphere Voice Response for AIX Version 6.1 package contains the
following items:
v WebSphere Voice Response for AIX Version 6.1 (base product code plus
DB2)
v Quick Start Guide
v Publications for WebSphere Voice Response for AIX Version 6.1 in PDF
format.
Part 1. Introducing WebSphere Voice Response
By bringing together your existing telephone network, business logic servers,
and data communications network, voice processing with WebSphere Voice
Response enables you to provide customers with the information they need
via the telephone.
[Diagram: a caller, connected through the telephone network, reaches
WebSphere Voice Response for voice processing; through the data
communications network, WebSphere Voice Response exchanges logic and
information with a business object server or Web server and a database.]
The basic WebSphere Voice Response voice processing system comprises:
v One or more BladeCenter or pSeries computers
v The AIX operating system
v The IBM WebSphere Voice Response for AIX licensed program product,
supporting signaling protocols to connect to a PABX or central office switch.
v Specialized hardware for the pSeries computer to communicate with the
telephone network. By using the maximum of four DTEA or DTTA cards in
the system unit, you can configure up to 480 channels on a single pSeries
computer (with an E1 connection).
No hardware adapters are required with BladeCenter or pSeries computers
when using the DTNA software support for Voice over IP networks.
The first part of this book describes the benefits of WebSphere Voice Response
voice processing to a business, what the product's main functions are, and
how they work.
Chapter 1. The benefits of voice applications
Voice applications give your customers access to your services using what is
still the most widely-used communications technology: the telephone. It's all
about improving customer service and productivity.
Nowadays, we call it customer relationship management, but the facts are the
same as they have always been:
v It's five times more expensive to gain a customer than to keep a customer.
v 95% of all dissatisfied customers will do business with you again, if you
resolve their complaint quickly.
v On average, unhappy customers tell 25 people; happy customers tell five.
v Customers now expect consistently high levels of service, at times that suit
them not you, for all their business needs.
Voice applications can help you deal with customer relationship management
issues by:
v Offering your services 24 hours a day, seven days a week, instead of 9 to 5,
five days a week.
v Screening customers' needs and routing them, either to an automated voice
response service or to a human agent, depending on why they called.
v Reducing queues by dealing with simple enquiries where an agent may not
be needed. This also frees agents to deal with the more complicated queries
that differentiate the quality of your customer services.
WebSphere Voice Response is proven technology:
v It has been used throughout the industry for over 10 years.
v Over 250,000 ports have been sold worldwide to more than 500 customers.
v It's scalable from four to more than 1000 ports.
v It has been tested and approved for attachment to telephone networks and
PABX switches worldwide.
v It conforms to industry standards.
v It offers you a choice of the Windows or AIX platforms.
WebSphere Voice Response allows you to move your platforms to the latest
industry application programming environments and infrastructures, such as:
v Voice Portal programming, using WebSphere Portal Server and WebSphere
Voice Application Access (WVAA), to generate VoiceXML dialogs for the
platform.
v CCXML and VoiceXML applications hosted on a Web server
v Java applications
Where does WebSphere Voice Response add value?
There are a number of on demand environments in which WebSphere Voice
Response can provide real business advantage:
As the voice access channel for application server on demand solutions
For those using IBM WebSphere Application Server-based solutions,
WebSphere Voice Response uses its industry-standard VoiceXML and CCXML
programming environments to handle the telephony channels and voice call
processing, and can be integrated with WebSphere Portal Server and
WebSphere Voice Application Access (WVAA) as the business portal access
point to deliver voice-enabled on demand solutions. If you want
to speech-enable applications with either speech recognition or text-to-speech,
then WebSphere Voice Server can also be included. If required, WebSphere
Voice Response and WebSphere Voice Server can be used together for
non-IBM application server solutions.
As a solution for contact and call centers
WebSphere Voice Response is the self-service channel in many contact and call
center solutions. It is deployed between the telephone network and the call
center switch, either behind the call center switch, or in front of the PBX or
automatic call distribution (ACD) switch. Most call center deployments deploy
WebSphere Voice Response behind the switch. WebSphere Voice Response is
deployed in front of the switch where a significant percentage of calls will be
completed in the IVR application, or where the WebSphere Voice Response is
being used to provide call steering across several call centers. With VoIP SIP
support, WebSphere Voice Response can also be used in IP contact/call center
solutions. WebSphere Voice Response can be used in conjunction with
products from the main Computer Telephony Integration (CTI) suppliers. By
adding WebSphere Voice Server, the self-service applications can support
conversational speech, thereby making the customer interaction more natural
and friendly.
As a platform for a service provider of on demand voice services
WebSphere Voice Response is a proven platform for delivering service
provider services. It is an open platform running on AIX with CCXML, and
VoiceXML industry-standard programming environments, as well as legacy
voice programming environments for its existing customers. WebSphere Voice
Response is resilient and highly scalable, as a result of its:
v IBM pSeries, PCI telephony adapter and AIX support, which meets NEBS
compliance.
v High scalability, with up to 480 channels per pSeries system, or clusters of
systems for large installations.
v Highly-scalable and redundant SS7 software, when connected to a service
provider's wired or wireless switches.
v VoIP/SIP software, enabling the platform to support next generation service
provider networks with both existing service provider applications or
services and new on demand next generation services.
v High-performance platform that allows some mass calling applications to
be run.
v Scalable, redundant and resilient centralized application management:
– for VoiceXML and CCXML programming environments through
application servers such as WebSphere Application Server
– for state table and custom server applications to over 2000 ports, through
IBM HACMP and WebSphere Voice Response Single System Image (SSI)
clusters. Unified Messaging for WebSphere Voice Response can use this
scalability to provide voicemail support for large systems.
By adding WebSphere Voice Server, the service provider applications can
support conversational speech, making the caller interaction more natural
and friendly. They can also drive new services that are not easily
achievable with DTMF interaction.
Other key on demand features of WebSphere Voice Response include the
optimized variable-cost model, which allows you to manage the channel
licenses you own through the product's License Use Management tools. These
licenses are activated only when channels are enabled. This provides the
flexibility to use licensed channels across whichever system or site needs
them to handle the dynamic application load.
Voice applications in the real world
The telephone is a universal means of communication. All businesses and
most homes have one. We're all becoming familiar with using the telephone to
conduct all kinds of business, such as:
v Ordering goods from catalogs
v Checking airline schedules
v Querying prices
v Reviewing account balances
v Recording and retrieving messages
v Getting assistance from company helpdesks
In each of these examples, a telephone call involves an agent (or service
representative) talking to the caller, getting information, entering that
information into a business application, and reading information from that
application back to the caller. Voice response technology, as provided by
WebSphere Voice Response, lets you automate this process.
In the on demand world, the telephone is an important means of
communication. All the examples of business transactions listed above can be
provided from a Web site. When you support your Web site with voice
processing, you provide your customers with more choice in the way they do
business with you.
Before we start looking at the functions and features that make WebSphere
Voice Response the best choice for voice applications, we need to look in more
detail at how those applications can help your business succeed.
The following two real-world examples discuss some of the demands being
made on customer-facing businesses. For both examples, we show how a
combination of Web-based services, supported by voice applications, and
possibly by computer-telephony integration (CTI), can help you provide the
best in customer service.
The examples are:
v A software retail company that is having to handle a rapidly increasing
number of customer requests.
v A utility company that wants to provide the best possible customer service
in today's deregulated environment.
Example 1: Handling increasing numbers of customer requests
The Software Company is planning to release a long-awaited update to its
best-selling accounting application: The Virtual BookCook. As a result, the
company expects a large increase of interest from customers:
v Some people will want to buy only the new release. They'll want to find
out where the nearest supplier is, download the software from the Web,
and so on.
v Some people, who have already installed the new release, will be looking
for technical advice or help.
v Some people might have bought the previous release just the week before,
and will want a cheap (preferably free) upgrade to the new release.
Preparing for the expected
The Software Company's main challenge is to handle peaks in customer
demand, without setting up a complicated and expensive customer contact
system that is often idle outside those peak periods. It also needs to address
the problem that the peak periods include several different kinds of customer
queries.
The company can deal with the problems piece by piece, by automatically
filtering the demand. Figure 1 shows how everything fits together.
[Figure: customers contact The Software Company in three tiers. Tier 1
(presentation): callers, connected through the telephone network, reach Web
applications, voice applications, and, through computer telephony
integration, helpdesk agents and customer service agents. Tier 2 (logic):
business object server / Web server. Tier 3 (data): data server.]
Figure 1. The Software Company's customer contact system
The Web as first point of contact:
The Software Company knows that most of its customers are happy to use its
Web site as their first point of contact with the company. To deal with the
customer requirements in this environment, The Software Company Web site
offers:
v The ability to order, or download, new releases of software over the Web
v The ability to upgrade existing software over the Web
v A frequently asked questions (FAQ) page to answer common questions
about new releases
Sometimes customers will not or cannot use the Web for their query. Before
moving directly to a human agent at this point, The Software Company uses
voice applications to filter calls.
Voice applications as the second point of contact:
In this example, the use of voice applications to provide automatic customer
services has three main advantages:
v Providing a similar set of services to those provided over the Web filters
out a percentage of the total demand on the company's agents.
v The Software Company can use voice applications to provide a 24–hour,
7–day service. Business is no longer lost, therefore, when people try to
contact the company outside normal business hours.
v When a customer elects from the voice applications to be transferred to an
agent, it gives the company the chance to further filter calls by establishing
the type of call before automatically forwarding it to the agent.
For example, The Software Company can set up a menu that says: “To
order BookCook Version 2, press 1; for technical help with installing
BookCook 2, press 2; to report a problem with any version of BookCook,
press 3....” and so on.
Using the intelligent call transfer functions that are available in
computer-telephony integration (CTI) products (such as those from Genesys
or Cisco), callers who want to order software can be connected to customer
service agents; callers who want help with installation, or who have a
problem, can be connected to helpdesk agents.
When people reach an agent:
The combination of Web and voice applications minimizes the intervention of
human agents, thereby cutting costs and providing an 'out-of-hours' service.
Example 2: Excellent customer service with low cost
Around the world, the increasing deregulation of utility companies has had
two major effects in call centers:
v As utility companies have merged, customers find themselves dealing with
a new breed of multi-utility company; for example, gas companies that sell
electricity, and electricity companies that sell water. When the utilities
merged, the call centers, with their differing systems and processes, also
merged. The new call centers must now handle these different systems and
processes, together with the variety of customer needs and requests.
v As a result of mergers and takeovers, customer satisfaction with the
services of utility companies has generally fallen world-wide—this is often
because of the inefficiency and ineffectiveness of call centers.
In this example, Useful Utilities is a national electricity company that has a
reputation for good customer service, largely built on an efficient call center
with a high level of automation. Recently Useful Utilities bought a small, but
multinational gas company that has a high level of technical expertise in
distributing domestic gas supplies. The gas company's call center, however, is
less automated than that of Useful Utilities, and according to customer
surveys is less well-liked, although it is skilled in dealing with requests in a
variety of national languages.
Useful Utilities wants to merge the two call centers into a dynamic new
contact center that provides:
v Excellent customer service, through offering a wide range of customer
contact points, including e-mail and the Web
v A high level of automation, with more than 50 percent of all customer
enquiries being handled without a human agent’s intervention
v The best people reputation in the business. The company wants to
guarantee the quality of service to its customers by offering:
– A range of national languages
– The fastest response in the business
The best by any standard
Figure 2 on page 10 shows how Useful Utilities can combine the Web, voice
applications, and CTI to convert two physically separate call centers into a
single virtual contact center, as described in Creating a virtual contact center.
[Figure: customers contact Useful Utilities in three tiers. Tier 1
(presentation): callers, connected through the telephone network, reach Web
applications, voice applications, and, through computer telephony
integration and agent applications, agents in Contact Center 1 and Contact
Center 2. Tier 2 (logic): business object server / Web server. Tier 3
(data): data server.]
Figure 2. Useful Utilities’ contact center
Creating a virtual contact center:
Merging the two call centers into one virtual contact center is not a problem
with IBM voice products. Whether the centers have different switches, or
share a switch, the voice response products, combined with CTI products
such as Genesys, CallPath, or Cisco ICM, ensure that Useful Utilities
presents a single face to its customers.
To distribute calls between the two call centers, Useful Utilities can take
advantage of the intelligent load balancing in the CTI software. It evaluates
the relative level of call activity in the two centers. When a call comes in, it
routes the call to the least busy of the two centers.
Widening the range of contacts:
Setting up the wide range of contact points can be achieved as described in
“The Web as first point of contact.” The customer service Web page can offer
the ability to do real business over the Web, including paying bills, setting
up direct debits, and so on.
Voice processing can also help build the company’s reputation by enabling
services that depend on automation, and by presenting an expert appearance
to customers.
Automation at its best:
For a large utility company such as Useful Utilities, automation starts with the
voice applications—a (probably) toll-free number providing a wide range of
self-service facilities to customers. Services that Useful Utilities could provide
using voice response technology include:
v Customized messages for specific occurrences; for example, one problem
most energy companies face is unexpected power outages. Using the calling
number, Useful Utilities could identify those calls that are from customers
in an area with an outage, and direct those calls to a customized outage
announcement.
v Information about the customer’s account or services available
v Self service over the phone; for example, paying bills, ordering statements
or requesting deferred payments.
The experience of similar companies indicates that, with comprehensive,
reliable, scalable Web and voice response services, Useful Utilities’ aim of
handling half of its customer queries without human intervention is easily
achievable.
When customers who have been filtered through the voice applications want
more help, automatic call distribution, at the heart of CTI products, can
provide impressive services such as:
Skill-based call routing
Customers can be directed to the most knowledgeable agent for their
call. For example, queries about Useful Utilities’ new All Energy
discount program for customers who take both gas and electricity, can
be directed only to agents trained in that program.
As a multinational company, Useful Utilities might want to route
incoming calls to make best use of their agents’ national language
skills. When an incoming call is detected as originating from a French
telephone number (via the calling line identifier (CLI)), or a
French-speaking customer selects French from a menu, the voice
application could issue initial prompts in French. Then, if there is a
need to be transferred to an agent, the customer gets straight through
to one who speaks French.
Availability-based call routing
At times of peak demand the routing can pull in agents who are
usually occupied in other activities. This flexibility of response to
demand enables leading companies in customer relationship
management to advertise features such as “we always answer before
the fifth ring” or “all calls answered within 30 seconds.”
The result is an improved service overall. Customers get faster, more
appropriate responses, and the ability to do easy things for
themselves, 24 hours a day, 7 days a week. Meeting this kind of demand
needs a solution that not only meets the technical challenges, but is also
scalable to meet the growth in demand that is the product of excellent service.
How voice response technology can help your business
Having looked at two detailed examples of the role that voice applications can
play in improving customer service, this section shows how voice response
can help in specific business situations.
Supply chain management
If your business depends on providing customers across the country with
up-to-date inventory information, you can create a voice response application
to take orders from customers and retrieve the information they need (such as
prices and quantities available).
When a customer inquires about a stock item, the application can determine
the item's availability, reserve the stock, and schedule its delivery. The
application can verify the customer's charge account balance or credit limit, or
check on the status of an order. The application then updates your inventory
database or files to show any activity that resulted from a call.
A voice response application can also support communication between the
main office and your marketing or sales force. Company representatives can
use the telephone to obtain product information (such as release schedules) or
order product literature anytime, anywhere.
Financial institutions
The financial industry depends on timely, accurate information. Using a
WebSphere Voice Response application, brokerage firms can make current
stock prices, quotations, and portfolio balances available over the telephone.
Clients can perform complex transactions without the intervention of a broker.
When a broker's advice is required, the application can transfer the call.
A bank can let customers access their account balances, obtain information on
interest rates and mortgages, calculate loan payments, or transfer funds, all
using WebSphere Voice Response applications. An application can also call
customers to inquire about transactions such as renewing a Certificate of
Deposit.
Enhanced applications can be created that use technologies such as speech
recognition or text-to-speech, as provided in WebSphere Voice Server.
Transportation industry
WebSphere Voice Response applications greatly simplify the process of
making travel arrangements. Travelers can use WebSphere Voice Response to
check schedules and status, make reservations, and select seat assignments.
Voice applications can call passengers to inform them of scheduling or status
changes.
If your company is a transportation company, you can use a WebSphere Voice
Response application to track the location of your company's vehicles. Your
employees on the road can receive up-to-date weather reports, new schedules
and assignments, and messages from home, all in a single telephone call.
Service industries
Organizations such as hospitals and health care facilities must supply fast,
accurate information to patients and health care providers. A WebSphere Voice
Response application can make the latest patient information accessible over
the telephone. Patients can obtain test results or schedule appointments
automatically, and health care providers can reschedule appointments by
having an application make the call.
Information providers
WebSphere Voice Response facilitates the provision of all kinds of information
over the phone: time, weather, dial-a-joke, horoscopes, lottery results, sports
events, news, exchange rates, and so on. Whether the information you provide
is dynamic or static, short or long, you can use WebSphere Voice Response to
deliver it to subscribers.
Government agencies
National and local governments are constantly required to provide
information of all types to a variety of people. The use of a WebSphere Voice
Response application ensures that such information is always available and in
more than one national language. WebSphere Voice Response can also provide
services such as allowing callers to check on the status of a tax refund or
social security benefit, informing jurors whether to report for duty, and
providing callers with information about current employment opportunities.
Educational institutions
A WebSphere Voice Response application can provide information about class
schedules, availability, and course content. Students can register using the
telephone, and the application that handles the registration process can also
Chapter 1. The benefits of voice applications
13
update the database containing enrollment information. A WebSphere Voice
Response application can call students to inform them of schedule changes or
openings in a class for which enrollment had been closed.
Mobile workforce and telecommuting
Many organizations traditionally have workers whose jobs involve travel.
Increasingly, people are working at home, instead of in a centralized office.
Both groups can benefit from WebSphere Voice Response, as shown in the
following examples:
v You can have a personnel application that uses the voice processing
technology to notify employees of changes in the scheduling of important
meetings or work projects.
v Airline attendants often bid for particular preferred work or shift schedules.
This process can be automated with a voice application, while at the same
time the system updates the schedule database as bids are received.
v Special work groups can leave voice messages containing specific results,
instructions, or special comments.
Telephone operating companies
WebSphere Voice Response provides interactive voice response and automated
attendant applications to improve customer support services.
WebSphere Voice Response can also be installed in a telephone company's
network to provide enhanced network services. Such services can include
public voice mail systems, topping-up prepaid mobile phones, and voice
activated dialing. WebSphere Voice Response can function in the intelligent
network, either as an intelligent peripheral or as a voice resource that is part
of a larger service node.
Enterprise Voice Portals and the Internet
With the growth of the Internet, new breeds of service company have been
created: the Internet service providers (ISPs), application service providers
(ASPs), and Voice Portals. All of these companies are focused on
using a web-centric application model to allow people easy access to services
over the Web. WebSphere Voice Response can add voice processing to these
services.
WebSphere Voice Response includes support for building solutions with
VoiceXML and Java; this facilitates the creation both of Voice Portal solutions
and voice interfaces to existing web-based applications. Existing web
application infrastructure, web servers, application servers, and business logic,
can be reused to deliver VoiceXML documents that define voice interactions,
while the power and flexibility of Java can be used for more complex
integration tasks. The addition of WebSphere Voice Server to the system
provides technologies for speech recognition and text-to-speech.
WebSphere Voice Response services
Having looked at the benefits that WebSphere Voice Response offers to
various types of business, this section now looks at the types of services it
provides to deliver these benefits.
Automated attendant
An automated attendant can direct incoming calls to different customer
service representatives or departments, or to different automated applications.
Callers can make choices by using the telephone keypad or by speaking.
Telephone access to multiple systems and applications
WebSphere Voice Response gives the caller access to a complete range of
automated applications and information in a single call. Even if the
information is stored on several different types of host computer, perhaps
because one company has merged with another, the caller can access it with
the convenience of a single call.
Voice response
As you might expect, voice response technology is at the heart of the services
offered by WebSphere Voice Response. Voice response is the use of
prerecorded speech to provide information in response to input from a
telephone caller, and is typically used in applications such as telephone
banking and order entry. Short pieces of information are retrieved from a
database and spoken to the caller, by a voice application stringing together
words that have been previously recorded. WebSphere Voice Response
applications can also speak long strings of information, either prerecorded, or
stored as text and converted to speech as required.
Fax response
For more detailed or even graphical information, you can use the fax support
that is supplied as an integral part of WebSphere Voice Response, by adding a
Brooktrout fax card to your computer. You can develop a fax application
tailored to your needs. For example, a sales representative who is in a meeting
can call WebSphere Voice Response and ask for an appropriate brochure to be
faxed to him automatically. Your sales force, therefore, no longer need to carry
all of your sales brochures with them.
Voice mail
WebSphere Voice Response provides voice mail capabilities, ranging from
those of a simple answering machine, to full integration with other voice
processing functions. You can use IBM's fully-functional voice mail application
(see “Voice messaging” on page 19) to provide advanced answering machine
and call handling services for all your employees. Alternatively, you can write
voice mail functions into any of your own applications—for example, your
automated attendant can include an option for leaving a voice message.
Transaction-related voice messaging
Transaction-related voice messaging allows callers to leave voice messages
that are related to transactions they have conducted using WebSphere Voice
Response. For example, although the routine aspects of an order are dealt
with by an automated transaction, and stored in a structured database, any
special instructions, such as a change of shipping address, can be sent in a
voice message.
Coordinated voice and data transfer
The automated attendant or other voice processing application can let the
caller talk directly to a service agent. While the call is being transferred, a
computer-telephony integration (CTI) product, such as those from Genesys or
Cisco, can be used to display on the agent's screen any data that WebSphere
Voice Response has already gathered. The agent is thus immediately prepared
to handle the inquiry.
Access to paging systems
If the person to whom the caller wants to speak is traveling, WebSphere Voice
Response can transfer the call to a paging system.
Automated outbound calling
WebSphere Voice Response does not help only to handle inbound calls; it can
also make outbound calls. For example:
v A business can automatically call a customer to notify them that their order
has been shipped and give an approximate arrival date.
v Passengers on a flight could be notified of a change in schedule.
v A doctor's office or clinic can place a call to a patient to confirm an
appointment.
v A survey could be conducted totally automatically.
You can easily automate your outbound applications to improve customer
service, and introduce potentially revenue-generating applications.
Intelligent peripheral
In an advanced intelligent network (AIN), WebSphere Voice Response can be
used as an intelligent peripheral. As such it can provide enhanced services
such as announcements, speech synthesis, speech recognition, and voice
messaging.
What voice response applications do
Having looked at the kinds of services that WebSphere Voice Response
provides, this section now describes the voice applications you can write to
take advantage of those services.
You can design voice applications to answer inbound calls and interact with
callers, or make outbound calls and interact with the called party. Your voice
applications can also:
v perform calculations
v read and retrieve information from databases
v update databases
v communicate information by speaking it back over the telephone
v let callers leave a message
v invoke other applications
v transfer callers to a human operator
Inbound calls
The initial call handling can be done in two ways:
v WebSphere Voice Response automatically answers each inbound call, and
uses information in the call to identify a corresponding application profile;
it can then pass the call to the appropriate voice application and continue
as in steps 1 to 7 on page 18 below. Many different applications can be
running simultaneously, possibly in a combination of the CCXML,
VoiceXML, Java and state table programming environments.
v As an alternative, WebSphere Voice Response can be configured so that
incoming calls are passed directly to a CCXML application (in ALERTING
state) before the call is answered. This has the advantage that the CCXML
application, prior to connecting the call, can:
– Query call information
– Check application availability
– For service providers, play a simple information message (if the switch
supports this facility)
CCXML could reject or redirect the call, in which case the inbound call
would be terminated, and the remaining steps below would not apply.
However, if CCXML connects the call, it can then invoke VoiceXML or Java
applications to handle the dialogs, and continue as below:
1. The application plays a greeting (sometimes called an announcement) to
callers, and then prompts them to indicate the information they want. The
greeting can be prerecorded or can use text-to-speech technology. Callers
can interrupt the prompt if they already know what they want to do.
The application can be designed to play greetings in different languages
depending on information such as the number that callers dialed, or the
callers' own numbers.
2. The application waits for callers to respond for a set period of time.
Callers can respond either by
v speaking (if speech recognition is implemented)
v pressing keys on a DTMF phone, ranging from a single digit (such as 1
for yes or 2 for no) to multiple digits (such as a catalog number or
personal identification number)
3. If the response does not match the criteria you have defined (such as a
specific number of digits), the voice application can prompt callers to enter
the response again.
4. When the waiting period has elapsed, the application can prompt the
caller again, as often as you think necessary.
5. The application takes whatever action is appropriate to the caller's
response; for example, retrieving information from a database, host
system, or file system and speaking it to callers; updating information in a
database; storing or retrieving a voice message; or making another call.
6. After taking action, the application should tell the caller again what to do
next.
7. The caller might indicate, in response to a prompt, that the interaction is
over. The application can respond by saying good-bye, then disconnect, or
the caller might simply hang up. If the caller hangs up, the application can
detect this, and automatically disconnect itself.
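Steps 1 to 7 above can be sketched as a minimal VoiceXML dialog. This fragment is illustrative only: the prompt wording, grammar file names, and the server URI in the submit are invented for the example, not part of the product.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <form id="mainMenu">
    <field name="choice">
      <!-- Step 1: greeting and prompt; bargein lets callers interrupt -->
      <prompt bargein="true">
        Welcome. For account balances, press 1. For opening hours, press 2.
      </prompt>
      <!-- Step 2: accept single-digit DTMF input -->
      <grammar type="application/srgs+xml" mode="dtmf" src="menu-dtmf.grxml"/>
      <!-- Steps 3 and 4: reprompt on invalid or missing input -->
      <nomatch>Sorry, that is not a valid choice. <reprompt/></nomatch>
      <noinput>Please press 1 or 2. <reprompt/></noinput>
      <filled>
        <!-- Step 5: pass the response to server-side logic -->
        <submit next="http://example.com/menu" namelist="choice"/>
      </filled>
    </field>
  </form>
  <!-- Step 7: clean up if the caller hangs up -->
  <catch event="connection.disconnect.hangup">
    <exit/>
  </catch>
</vxml>
```

In a real application the server named in the submit would return further VoiceXML (step 6) telling the caller what to do next.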
Outbound calls
A voice application could be triggered to make an outbound call by an event
such as a seat becoming available on a flight. The voice application then
follows a sequence like this:
v The application makes a telephone call, for example to a passenger who is
on the waiting list for a flight, and whose telephone number is stored in a
database.
v If WebSphere Voice Response makes the call (using state tables, Java or
CCXML), it can determine whether the call is answered or if the line is
busy.
v WebSphere Voice Response plays a recorded or text-to-speech greeting,
which might start by asking the called person to confirm that they are the
person for whom the information is intended. The application can ask for,
and check, a password or personal identification number. The application
then speaks the information to the called party. It can then prompt the
called party to say what to do next.
v After the initial greeting, an outbound call follows much the same sequence
as an inbound call. The called party should have the opportunity to ask for
other actions to be taken. Depending on your business requirements you
can offer a range of actions in your application.
v As with an inbound call, either the called party or the WebSphere Voice
Response application can end the interaction. The called party may either
hang up, or tell the application not to take any further action, in which
event the application must disconnect.
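The outbound sequence above can be driven from a CCXML application. This is a hedged sketch using W3C CCXML 1.0 elements; the destination number and the dialog URI are invented for illustration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ccxml version="1.0" xmlns="http://www.w3.org/2002/09/ccxml">
  <eventprocessor>
    <transition event="ccxml.loaded">
      <!-- Place the outbound call, for example to a wait-listed passenger -->
      <createcall dest="'tel:+15551234567'"/>
    </transition>
    <transition event="connection.connected">
      <!-- Call answered: start the VoiceXML dialog that speaks the greeting -->
      <dialogstart src="'notify-passenger.vxml'"/>
    </transition>
    <transition event="connection.failed">
      <!-- Busy or unanswered: the application could log this and retry later -->
      <exit/>
    </transition>
    <transition event="dialog.exit">
      <!-- Dialog complete: release the call -->
      <disconnect/>
    </transition>
  </eventprocessor>
</ccxml>
```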
Transferring calls
You can arrange for an application to transfer either an inbound or outbound
call to another extension. For example, in the scenario described above, the
person being notified of the new flight availability could be given the option
to transfer directly to a booking agent. The application can be written to
connect the external party to the agent as soon as the agent answers, or to
play an introductory message to the agent. This message, which is not heard
by the other party, is called a whisper, and could be used in this case to alert the
agent to the fact that the passenger has been called from a waiting list. The
application can also decide whether to check that the transfer to the agent is
completely successful, or to stop monitoring the transfer earlier.
In this situation, the application does not expect to interact with the caller
again, unless it detects a failure in the connection to the agent. If the
connection is successful, WebSphere Voice Response drops out of the call and
the call is ended by the hang-up of the agent or the external party.
In an alternative scenario, the booking becomes an automatic part of the
application, but the called party is given the chance to talk to an agent if they
have any queries. In this case, the application expects the external party to
return to the application when the query is resolved. The application should
therefore refer, rather than transfer, the call and the voice system stays
connected even though the agent has answered. The voice system then detects
that the agent has hung up, and continues the interaction with the external
party. The call reverts to a normal outbound call.
WebSphere Voice Response supports both blind and screened transfers. In a
blind transfer, WebSphere Voice Response requests the transfer and hangs up.
In a screened transfer, it requests the transfer but does not hang up until the
called number has answered.
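In VoiceXML 2.1, these two styles map onto the type attribute of the standard &lt;transfer&gt; element. The following sketch shows both for comparison (a real form would normally use one); the extension numbers and prompt text are invented.

```xml
<form id="toAgent">
  <!-- Blind transfer: request the transfer, then drop out of the call -->
  <transfer name="blindXfer" type="blind" dest="tel:2001"/>

  <!-- Screened (bridged) transfer: stay on the call until the agent answers -->
  <transfer name="screenedXfer" type="bridge" dest="tel:2002"
            connecttimeout="20s">
    <prompt>Please wait while I connect you to an agent.</prompt>
    <filled>
      <!-- The form item variable holds the outcome, such as busy or noanswer -->
      <if cond="screenedXfer == 'busy'">
        <prompt>All agents are busy. Please try again later.</prompt>
      </if>
    </filled>
  </transfer>
</form>
```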
Voice messaging
Voice messaging allows people to leave spoken messages for others. The most
obvious use is when a person tries to call someone else who cannot answer
the phone. Instead of just hanging up and trying later, the caller is invited to
“leave a message after the tone”. This is known as voice mail. Voice mail is
stored in a voice mailbox.
Increasingly, users require more than just voice mail; they want all their mail
anywhere, anytime, anyhow. The solution to this requirement is called unified
messaging.
Unified messaging provides a central service that coordinates and provides
access to all communications formats through the interface that is the most
convenient to the user. For example, users can retrieve voice or fax messages
over the Web, or have e-mail spoken to them over the phone using
text-to-speech.
IBM offers a fully functional messaging product: IBM Unified Messaging for
WebSphere Voice Response. IBM Unified Messaging is an application built on
WebSphere Voice Response for AIX. So, if you're a WebSphere Voice Response
for AIX user, or thinking of becoming one, find out more about IBM Unified
Messaging from our Web site or your IBM representative.
Information access
WebSphere Voice Response can take advantage of any computer applications
that your company already uses to retrieve, store, and manipulate
information. For example, to receive an account balance, a bank customer
usually talks to an agent who uses a computer. The agent ensures the
computer is running the program that retrieves information from the database
of account balances, types in the customer's account number and, by pressing
a key, requests the program to display the customer's balance. Only then can
the agent tell the customer how much money is in the account.
A WebSphere Voice Response application can follow the same path. The
application starts the program running on the computer, passes the account
number to the program, asks it to find the account balance, then speaks the
amount to the caller.
Because WebSphere Voice Response communicates with existing programs,
you can access information in an existing database without writing new host
programs for that purpose. The database can be on the same host as
WebSphere Voice Response, or it can be another host or another type of
computer. If the database is accessed via a 3270 or 5250 based application or
transaction server, WebSphere Voice Response can use its built-in 3270 or 5250
terminal emulation support to exchange information. For example, your voice
application can start a CICS® transaction that accesses a DB2® database,
without your needing to alter the CICS transaction.
Your opportunities for information access are limited only by the resources of
the host computer and the connectivity options at your site.
Summarizing WebSphere Voice Response voice application capabilities
Here's a brief reminder of the capabilities of all WebSphere Voice Response
applications:
v WebSphere Voice Response applications handle inbound calls.
v WebSphere Voice Response applications make outbound calls.
v WebSphere Voice Response applications can transfer calls.
v WebSphere Voice Response applications interact with callers using spoken
prompts.
v Callers interact with WebSphere Voice Response applications by using
speech or the telephone keypad.
v Callers can interrupt WebSphere Voice Response by using speech (with
speech recognition) or the telephone keypad.
v WebSphere Voice Response applications speak information to callers.
v Information can be prerecorded or synthesized from text (with
text-to-speech).
v WebSphere Voice Response applications access, store, and manipulate
information on local or host databases, and on multiple databases on
multiple computers.
v WebSphere Voice Response applications can store and play back messages.
v WebSphere Voice Response supports multiple voice applications on a single
host.
v WebSphere Voice Response lets you share voice, applications, and messages
across multiple hosts.
v It's easy to integrate WebSphere Voice Response state table or Java voice
applications with computer-telephony integration (CTI) by using Genesys
or Cisco ICM.
v WebSphere Voice Response gives you a choice of application programming
environments:
– VoiceXML: the industry-standard voice programming language, designed
for developing DTMF and speech-enabled applications, which are then
located on a central web server, in the same way as other web
applications.
– CCXML: the industry-standard call control programming language, that
allows complex call control, or interaction with telephony operations.
– Java: for developing voice applications on multiple WebSphere Voice
Response platforms, or for integrating your voice applications with
multi-tier business applications. For outbound calls, a combination of
CCXML and VoiceXML is the preferred solution for telephony, rather
than Java.
– State tables—for optimizing performance or for using all the WebSphere
Voice Response functions, including ADSI, TDD, Fax and custom servers.
Chapter 2. How WebSphere Voice Response applications
work
A WebSphere Voice Response application consists of programmed interaction
between a voice system and a caller. You can create applications to use any of
the services described in Chapter 1, “The benefits of voice applications,” on
page 3, but the ones that you actually use will depend on the tasks you want
your voice system to perform, and the hardware and software that you have
installed.
This chapter describes the application programming environments that
WebSphere Voice Response supports, and explains how voice applications can
access other resources. It then outlines some things to remember when you
plan and design voice applications. Finally, it describes the choices available to
you when you create the voice output for your applications:
v “Developing applications for WebSphere Voice Response”
v “Using CCXML applications” on page 31
v “Using VoiceXML applications” on page 33
v “Using Java applications” on page 38
v “State table applications” on page 43
v “How voice applications access other resources” on page 49
v “Planning and designing voice applications” on page 58
v “Creating the voice output for applications” on page 59
v “Key facts about components of voice applications” on page 60
Developing applications for WebSphere Voice Response
To exploit the power and functionality that WebSphere Voice Response uses to
provide phone access to your business data and logic, you must develop the
dialogs that control the interaction between the caller and the data. This
section introduces the application programming environments that are
supported by WebSphere Voice Response:
v CCXML (with VoiceXML)
v VoiceXML
v Java
v State tables (including custom servers)
CCXML overview
CCXML applications provide telephony call control support for telephony
systems. Although CCXML has been designed to complement and integrate
with VoiceXML systems, the two languages are separate, and VoiceXML is not
a requirement if you want to run CCXML applications.
The use of CCXML for voice applications is optional, but there are advantages
for call management. Whereas VoiceXML and Java applications are generally
used for dialog applications, CCXML is designed to be used for incoming
call-handling. The call-handling control support provided by CCXML
includes:
v Accepting calls
v Rejecting calls
v Transferring calls
v Running VoiceXML dialog applications
v Running dialog and other applications written in Java
CCXML allows for handling of asynchronous events and advanced telephony
operations, involving substantial amounts of signals, status events, and
message-passing.
The Java and VoiceXML environment can run CCXML applications in a
CCXML Browser. A static CCXML document can be used, or dynamic CCXML
can be generated by an application server. A combination of static and
dynamic CCXML is also possible. The advantage of this architecture is that all
the application logic can be controlled by an application server, allowing it to
manage which dialog applications are used in which circumstances.
The CCXML support provided in WebSphere Voice Response is implemented
to the W3C CCXML Version 1.0 specification, and provides the following
capabilities:
v It can be used with VoiceXML 2.0 and Java applications, to handle basic
inbound and outbound calls
v It provides parsing by the CCXML interpreter of all CCXML tags defined in
the W3C standard. However, some tags (such as those for conference) are
not supported by the base WebSphere Voice Response product.
v It enables routing of incoming calls to specific applications based on ANI or
DNIS, thereby simplifying the WebSphere Voice Response configuration.
v On incoming calls, the WebSphere Voice Response signaling process can
pass to CCXML applications additional protocol data, such as ISDN and
SS7 information elements, and SIP URLs (in tags via ECMAScript).
v It provides the ability to specify a channel group on an outbound call, to
select which protocol is used to make the call.
v It can be used for call handling within an application, with the ability to
invoke CTI products in contact center and call center solutions.
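The call-handling capabilities listed above can be combined in a short CCXML document. This sketch routes an incoming call on its dialed number (DNIS) before answering; the numbers and dialog URI are invented for illustration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ccxml version="1.0" xmlns="http://www.w3.org/2002/09/ccxml">
  <eventprocessor>
    <transition event="connection.alerting">
      <!-- Inspect call information before the call is answered -->
      <if cond="event$.connection.local == '5550100'">
        <accept/>
      <elseif cond="event$.connection.local == '5550199'"/>
        <!-- Out-of-hours line: refuse the call -->
        <reject/>
      <else/>
        <accept/>
      </if>
    </transition>
    <transition event="connection.connected">
      <!-- Hand the answered call to a VoiceXML dialog -->
      <dialogstart src="'main-menu.vxml'"/>
    </transition>
  </eventprocessor>
</ccxml>
```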
For guidance about developing CCXML applications, refer to the WebSphere
Voice Response for AIX: Using the CCXML Browser book. For more information
about how to configure WebSphere Voice Response to work with CCXML
applications, and the use of the CCXML Browser, refer to the WebSphere Voice
Response for AIX: Deploying and Managing VoiceXML and Java Applications book.
VoiceXML overview
You can develop dialogs using the industry-standard markup language,
VoiceXML (Voice eXtensible Markup Language). With VoiceXML you can use
a Web-centric programming model and a markup language that is specially
designed to bring the speed and flexibility of Web-based development and
content delivery to interactive voice response applications. That is, anyone
who can use HTML to develop a graphical application can develop a voice
application for WebSphere Voice Response.
VoiceXML dialogs provide a flexible presentation layer on top of the logic and
data layers of your business applications. Using VoiceXML, you can build
sophisticated dialogs that enable the caller to use either speech or DTMF to
provide input, and to hear responses as either synthesized or prerecorded
speech. The speech input from the caller can be either recognized or recorded.
VoiceXML allows you to build usable and flexible voice interfaces to your
business.
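A single VoiceXML field can accept the caller's input as either speech or DTMF, and can mix prerecorded audio with synthesized speech, as described above. A minimal sketch follows; the audio file and grammar names are invented.

```xml
<form id="getAccount">
  <field name="account">
    <prompt>
      <!-- Prerecorded audio, with synthesized text as a fallback -->
      <audio src="welcome.wav">Welcome to telephone banking.</audio>
      Please say or key in your account number.
    </prompt>
    <!-- Parallel grammars: one for speech, one for DTMF -->
    <grammar mode="voice" type="application/srgs+xml" src="account.grxml"/>
    <grammar mode="dtmf" type="application/srgs+xml" src="account-dtmf.grxml"/>
    <filled>
      <prompt>You entered <value expr="account"/>.</prompt>
    </filled>
  </field>
</form>
```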
The recommended approach for building voice dialogs is to use VoiceXML
that is delivered either statically or dynamically from a web server or an
application server such as WebSphere Application Server. An application built
in this way can use the broad range of capabilities of a J2EE environment,
such as Enterprise Java Beans (EJBs) and servlets, to build voice applications
that are integrated with other business systems. Where necessary, computer
telephony integration (CTI), legacy voice applications, and other technologies
may also be incorporated, using the facilities in WebSphere Voice Response
Java or state tables.
For details of how VoiceXML applications work, see “Using VoiceXML
applications” on page 33.
Java overview
WebSphere Voice Response provides a Java application programming interface
(Java API) that allows you to develop a voice application entirely in Java. The
support for Java applications in earlier releases of WebSphere Voice Response
was based on supplied JavaBeans, and applications developed in this way can
run unchanged on the current release of WebSphere Voice Response. However,
for all new applications, support is provided solely through the Java API.
Chapter 2. How WebSphere Voice Response applications work
25
From a Java application, you can invoke programs written in WebSphere
Voice Response's proprietary state table programming language. You can also
use Java as a wrapper layer to integrate VoiceXML dialogs with existing state
table applications, although the recommended method for doing this is
through CCXML. If your existing state table applications provide standard
dialogs, or invoke custom servers that perform telephony activities such as
tromboning or CTI, you can continue to use these state tables with CCXML,
by making calls through Java programs.
For more detailed information about how Java voice applications work, see
“Using Java applications” on page 38.
State tables overview
The WebSphere Voice Response for AIX proprietary development
environment, state tables, must be used to access the following functionality:
v Fax
v Telecommunications devices for the deaf (TDD)
v Analog display services interface (ADSI)
v System extensions coded as custom servers
For information about how state tables work, see “State table applications” on
page 43.
Extending support with custom servers
If you need to extend WebSphere Voice Response's functionality, you can use
the C or C++ programming languages to develop custom servers. The
openness of this environment allows the WebSphere Voice Response platform
to be extended and adapted to meet almost any needs now or in the future.
Integrating different programming models
You can provide voice access to business applications using VoiceXML alone,
but more advanced (and efficient) system requirements can be met by
integrating CCXML with VoiceXML or Java. Figure 3 on page 27 shows how
CCXML can be used to directly drive telephony operations, while invoking
multiple VoiceXML dialogs or Java applications to manage interaction with a
caller.
Figure 3. Integrating different programming models from CCXML
Figure 4 shows how a VoiceXML dialog, whether invoked from a CCXML
application, or directly from call routing, can make use of the <object> tag to
get access to Java classes, including voice applications created with the Java
API, or state tables.
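As an illustration of this pattern, a VoiceXML dialog can call out to Java through the standard &lt;object&gt; element. The classid URI scheme, class name, and result property shown here are placeholders only; the exact form required is defined in the WebSphere Voice Response VoiceXML programming documentation.

```xml
<form id="lookup">
  <!-- Invoke a Java class from VoiceXML; the classid value is illustrative -->
  <object name="balance"
          classid="method://com.example.AccountLookup/getBalance">
    <param name="accountNumber" expr="account"/>
    <filled>
      <prompt>Your balance is <value expr="balance.amount"/>.</prompt>
    </filled>
  </object>
</form>
```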
Figure 4. Integrating different programming models from VoiceXML
Figure 5 on page 28 shows how a Java application that has been invoked
either from a CCXML application, or directly by call routing, can be used in
an integrated system to make calls to existing state table applications or
custom servers, as well as handling some telephony functions.
Figure 5. Integrating different programming models from Java
Application development tools for CCXML, VoiceXML and Java
You can develop applications completely independently of the WebSphere
Voice Response system by using any industry-standard editor. However, the
IBM WebSphere Voice Toolkit is an integrated graphical development
environment, and is particularly recommended as it provides significant
advantages in areas such as syntax checking, code analysis and document
parsing. The toolkit runs in a Windows environment.
CCXML and VoiceXML tools
The WebSphere Voice Toolkit V6.0.1, which can be downloaded from
http://www.ibm.com/software/pervasive/voice_toolkit, supports the
CCXML 1.0 and the VoiceXML 2.1 specifications, and includes a grammar
editor, pronunciation builder, and an audio recorder. You can configure a
development environment to create, test, and debug custom voice portlets
using VoiceXML 2.0 or 2.1. Other features of the toolkit include:
v The ability to debug your portlets using the local debugging environment
v The ability to create VoiceXML applications using the Communication Flow
Builder (as shown in Figure 6 on page 29)
v An editor that can handle both CCXML and VoiceXML source code (as
shown in Figure 7 on page 30)
v A conversion wizard to assist you in migrating any VoiceXML 1.0
applications to 2.0 or 2.1
v An integrated VoiceXML 2.1 Application Simulator and Debugger
v Integrated concatenative text-to-speech (CTTS) and speech recognition
engines
The Toolkit editor also includes a wizard that allows you to select and
customize Reusable Dialog Components (RDC) that are written to the
VoiceXML 2.0 or 2.1 standard. These RDCs contain pretested VoiceXML code
for commonly-used functions such as credit card type, currency, date
information, and so on.
Figure 6. Editing VoiceXML using the Communication Flow Builder
Figure 7. Editing CCXML using the Voice Toolkit editor
To use the WebSphere Voice Toolkit V6.0.1, you need an IBM Rational
Development environment, such as Rational Application Developer for
WebSphere Software.
Application development tools for Java
The WebSphere Voice Response for AIX Version 6.1 package contains jar files
that you can import into a Rational development environment such as
Rational Application Developer. If your applications will be using WebSphere
Voice Server, you can use the speech recognition and text-to-speech engines
supplied with the WebSphere Voice Toolkit to test these functions.
Alternatively, you can use a text editor to code Java applications, or another
Java integrated development environment of your choice.
What you need to use the Voice Toolkit
To run the WebSphere Voice Toolkit V6.0.1, you will need the following
system configuration:
v Minimum of an Intel Pentium III 800 MHz processor or equivalent (1.0
GHz recommended)
v 1 GB RAM
v A display adapter setting of at least 256 colors and 1024×768 resolution
v A minimum of 800 MB free disk space (in addition to the disk space
requirement for the Rational environment), plus additional space for the
installation options that you select. For installation purposes, an additional
800 MB of temporary space is required on the drive specified in your TMP
environment variable.
v Microsoft Windows XP (Service Pack 1 or newer)
v An existing installation of the Rational Software Development Platform 6.0.
This includes Rational Web Developer (RWD), Rational Application
Developer (RAD), or Rational Software Architect (RSA). The toolkit
installation program does not permit installation onto any other platform
v A sound card with speakers
v A microphone (if you want to record speech)
For testing speech recognition and text-to-speech applications, you can use the
speech engines provided in the Voice Toolkit. These engines support the use
of multiple national languages.
Using CCXML applications
This section describes how CCXML applications work, how you implement
them and when you should consider CCXML rather than other programming
models:
v “How is an incoming call handled by CCXML?”
v “Sequence of events in a CCXML application” on page 32
v “How does the caller interact with the CCXML application?” on page 33
v “How does the CCXML browser access CCXML documents?” on page 33
v “The benefits of CCXML” on page 33
How is an incoming call handled by CCXML?
When an incoming call is detected by WebSphere Voice Response, the called
number is identified and is used to look up an application profile
corresponding to that number. The Incoming_Call state table, after issuing an
AnswerCall action, calls the InvokeStateTable action to invoke the state table
specified in the profile. If the JavaApplication state table is specified, and if
the Java and XML environment is running, the call is routed to the Java and
XML environment, and the called number is used to look up the relevant
application in a configuration file. For a CCXML application, this is the
CCXService with an associated uniform resource identifier (URI), which is
used to locate the appropriate CCXML document and then fetch it using the
hypertext transfer protocol (HTTP).
Caching
To avoid repeated downloading of the same CCXML document, every
document is cached. The CCXML browser checks whether the requested
CCXML document has been modified since it was last used. If the requested
document has not been modified, the system uses the cached version.
You can display or expire the contents of the WebSphere Voice Response
CCXML cache. You can select one or all documents in the cache to expire.
New versions of any expired contents are fetched when they are next
retrieved from the cache, but content that is in the process of being loaded
cannot be expired until it is fully loaded. Refer to the section “dtjcache script” in the WebSphere Voice Response
for AIX: Deploying and Managing VoiceXML and Java Applications manual for
details.
The role of the browser
The CCXML browser works in a similar way to a web browser, in that it is a
rendering device, but it does not present information to, or receive
information from, a user. Instead, the CCXML browser renders call control
operations to the telephony environment. Rather than rendering pages that
are defined in HTML, the CCXML browser deals with pages that are defined
as CCXML documents.
Single or multiple browsers
CCXML services can automatically start as many browsers as required, or a
single browser can handle all the calls simultaneously.
CCXML does not output to the user, so it is language independent. Multiple
browsers are therefore not required to handle calls in multiple languages.
Storing CCXML documents
CCXML documents are stored on a Web server. They are retrieved by the
CCXML browser using an HTTP request.
Sequence of events in a CCXML application
In a CCXML application, the sequence of events is determined by the order in
which the events arrive at the CCXML browser.
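For example, a minimal CCXML document can answer an incoming call, hand it to a VoiceXML dialog, and clear the call when the dialog finishes. This is a sketch based on the CCXML 1.0 specification; the dialog URI is illustrative only:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ccxml version="1.0">
  <eventprocessor>
    <!-- An incoming call is signaled: answer it -->
    <transition event="connection.alerting">
      <accept/>
    </transition>
    <!-- The call is connected: start a VoiceXML dialog -->
    <transition event="connection.connected">
      <dialogstart src="'http://server.example.com/welcome.vxml'"/>
    </transition>
    <!-- The dialog has finished: disconnect the call -->
    <transition event="dialog.exit">
      <disconnect/>
    </transition>
    <!-- The caller has hung up: end the CCXML session -->
    <transition event="connection.disconnected">
      <exit/>
    </transition>
  </eventprocessor>
</ccxml>
```

Each <transition> runs when its named event arrives, so which code executes next is driven entirely by the order in which events reach the browser, not by the order of the transitions in the document.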
How does the caller interact with the CCXML application?
The caller interacts with a CCXML application via dialogs (VoiceXML, Java, or
other). The caller does not interact directly with a CCXML application.
How does the CCXML browser access CCXML documents?
The tags <fetch> and <createccxml> are used to download and execute a
CCXML document from the Web server, via a specified URI.
The benefits of CCXML
The benefits of using CCXML to create call control applications are:
v Industry-standard markup language allows you to use state-of-the-art tools
to build usable, flexible call control applications
v Dynamic call routing capability
v Easy integration with multi-tier, on demand, business applications—a single
application model for both Web and voice solutions
v Reuse of Web infrastructure—common business logic to support multiple
presentation channels
v Common toolset for creating applications for all presentation channels: Web,
voice, WAP
v Platform independence — CCXML applications are stored on a Web or
application server, and can be run on generic CCXML platforms.
v Ease of application deployment and updating, resulting from having
applications located centrally on a web server.
v Use of the <dialog> tag enables CCXML applications to invoke voice
dialogs written using VoiceXML, Java, or state tables.
Using VoiceXML applications
This section describes how VoiceXML applications work, how you implement
them and when you should consider VoiceXML rather than other
programming models:
v “How is an incoming call handled by VoiceXML?” on page 34
v “What controls the sequence of events in a VoiceXML application?” on page
35
v “How does the caller interact with the VoiceXML application?” on page 36
v “How do you specify what the VoiceXML application says?” on page 36
v “How is the spoken output for VoiceXML applications stored?” on page 36
v “How do VoiceXML applications access information?” on page 36
v “Integration and interoperability of VoiceXML applications” on page 37
v “Application development tools for CCXML, VoiceXML and Java” on page
28
v “The benefits of VoiceXML” on page 37
How is an incoming call handled by VoiceXML?
When an incoming call is detected by WebSphere Voice Response, the called
number is identified and is used to look up an application profile
corresponding to that number. The Incoming_Call state table, after issuing an
AnswerCall action, calls the InvokeStateTable action to run the state table
specified in the profile. If the JavaApplication state table is specified, and if
the Java and VoiceXML environment is running, the call is routed to the Java
and VoiceXML environment, and the called number is used to look up the
relevant application in a configuration file. For a VoiceXML application, this is
a special Java application, called the VoiceXML browser. Associated with the
browser is a uniform resource identifier (URI), which is used to locate the
appropriate VoiceXML document and then fetch it using the hypertext transfer
protocol (HTTP).
Caching
To avoid repeated downloading of the same VoiceXML document, every
document is cached. The VoiceXML browser checks whether the requested
VoiceXML document has been modified since it was last used. If the requested
document has not been modified, the system uses the cached version.
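The modification check behaves like a standard HTTP conditional request: the cached copy is reused unless the server reports a newer last-modified time. The following self-contained Java sketch (illustrative only, not WebSphere Voice Response source code) models that decision:

```java
import java.time.Instant;

// Illustrative model of the browser's cache decision: the cached
// document is reused unless the server's copy is newer.
public class CacheCheck {
    static boolean useCachedCopy(Instant cachedLastModified, Instant serverLastModified) {
        // Equivalent to an HTTP If-Modified-Since check answered
        // with 304 Not Modified
        return !serverLastModified.isAfter(cachedLastModified);
    }

    public static void main(String[] args) {
        Instant cached  = Instant.parse("2011-01-01T00:00:00Z");
        Instant same    = Instant.parse("2011-01-01T00:00:00Z");
        Instant updated = Instant.parse("2011-02-01T00:00:00Z");
        System.out.println(useCachedCopy(cached, same));    // true: serve from cache
        System.out.println(useCachedCopy(cached, updated)); // false: fetch new version
    }
}
```
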
You can display or expire the contents of the WebSphere Voice Response
VoiceXML cache. You can select one or all documents or voice prompt files in
the cache to expire. New versions of any expired contents are fetched when
they are next retrieved from the cache, but content that is in the process of
being loaded cannot be expired until it is fully loaded. Refer to the section “dtjcache script” in
the WebSphere Voice Response for AIX: Deploying and Managing VoiceXML and
Java Applications manual for details.
The role of the browser
The VoiceXML browser works in a similar way to a web browser, in that it is
a rendering device, but it uses audio rather than visual means to present and
receive information from a user. The VoiceXML browser, like a web browser,
communicates with web servers using standard internet protocols and when
required it downloads pages, and the resources to which the pages refer
(audio clips rather than images). Rather than rendering pages that are defined
in HTML, the VoiceXML browser deals with pages that are defined as
VoiceXML documents.
The requirement for multiple browsers
Each instance of the VoiceXML browser handles a single call at one time, so
multiple instances of the browser must be available, waiting for calls; as many
as you expect to handle during your peak hour. Each browser waits for calls
for a specific VoiceXML application. If you have more than one application,
you must have a different set of browsers for each, or you can have a
“top-level” VoiceXML application that greets callers, asks them what service
they require, and calls other VoiceXML applications as necessary, using their
URI to locate them.
This technique also works for applications that need to handle calls in
multiple languages. The use of such a top-level document reduces the number
of running browsers required, and makes more efficient use of the available
resources.
Using an Application Server to store VoiceXML documents
Another approach is for the VoiceXML document that is initially accessed to
submit the calling or called number to a routing application located on an
Application Server. The routing application then returns the appropriate
VoiceXML document, based on the submitted number. The benefit
of this approach is that all call routing and application selection is done on
the Application Server, rather than the WebSphere Voice Response client.
What controls the sequence of events in a VoiceXML application?
In a VoiceXML application, the sequence of events is determined by the
VoiceXML dialog, which is written as a series of tags in a flat text file, or
VoiceXML document, as shown in the following example:
<?xml version="1.0"?>
<vxml version="2.0">
<!--This simple menu does not require text to speech or speech
recognition capabilities. It plays an audio file and recognizes
DTMF input.-->
<menu>
<prompt>
<audio src="hello.wav"/>
</prompt>
<choice dtmf="1" next="#trains"/>
<choice dtmf="2" next="#boats"/>
<choice dtmf="3" next="#planes"/>
<choice dtmf="0" next="#end_menu"/>
</menu>
<form id="trains">
<block>
<audio src="trains.wav"/>
</block>
</form>
<form id="boats">
<block>
<audio src="boats.wav"/>
</block>
</form>
<form id="planes">
<block>
<audio src="planes.wav"/>
</block>
</form>
<form id="end_menu">
<block>
<audio src="goodbye.wav"/>
</block>
</form>
</vxml>
How does the caller interact with the VoiceXML application?
In the example, a list of options is played to the caller and the caller is asked
to make a selection by pressing a key on the telephone keypad (a DTMF key).
Keys can also be used to enter identification numbers and so on.
As an alternative to pressing keys, VoiceXML dialogs can allow callers to
respond by speech. The speech can be either recognized or recorded. To
record the caller’s speech and store it as an audio file on the Web server,
the Web server must support HTTP POST, which allows data to be sent to an
originating Web server.
For more information about speech recognition see “Speech Recognition” on
page 49.
How do you specify what the VoiceXML application says?
In the VoiceXML document you can specify either the names of prerecorded
audio files or the text that is to be converted into speech by speech synthesis,
or text to speech. The example shown plays an audio file, hello.wav, to
prompt the caller to make a selection, then it selects another audio file to play,
according to the caller’s choice.
How is the spoken output for VoiceXML applications stored?
Pre-recorded audio files are stored as headerless µ-law, headerless a-law, .wav,
or .au files on the Web or application server. Audio files referenced in a
VoiceXML document are automatically downloaded and cached by the
VoiceXML browser until needed. Cached audio files are automatically deleted
from the cache after an expiry time.
For more information about text-to-speech, see “Text-to-speech” on page 51.
How do VoiceXML applications access information?
The easiest way to understand how a VoiceXML application accesses
information that is stored in back-end databases is to consider how such
transactions are performed in an HTML Web page. In an HTML Web page,
you present a form where the user can type in the information required by
the transaction. The user then clicks on a Submit button. This invokes an
HTTP POST request, which causes the form to be posted to the Web server,
where it is processed by a server-side application.
In a VoiceXML document, the form is presented to the user by prerecorded
audio or text-to-speech. The user provides input and makes choices by using
speech or by pressing keys. A VoiceXML document is simply an auditory
version of a form. When the form is complete, the input is posted to the Web
server by using a <submit> tag in the document. On the Web server, the form
can be processed by the same server-side application that processes the
equivalent HTML form.
In both cases, the server-side application processes the user’s input, accesses
the back-end server database with an SQL command or similar, updates it or
obtains the required information, and returns to the VoiceXML application,
which presents the outcome to the user using prerecorded audio or text to
speech.
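To make this concrete, the following sketch shows a VoiceXML form that collects an account number and posts it to a server-side application. The field name and servlet URL are invented for this illustration:

```xml
<?xml version="1.0"?>
<vxml version="2.0">
  <form id="account">
    <field name="accountnumber" type="digits">
      <prompt>Please key in your account number.</prompt>
    </field>
    <!-- When the field is filled, post the input to the Web server,
         where it can be processed by the same server-side application
         that handles the equivalent HTML form -->
    <filled>
      <submit next="http://server.example.com/servlet/balance"
              method="post" namelist="accountnumber"/>
    </filled>
  </form>
</vxml>
```

The server-side application reads the accountnumber value exactly as it would read a field posted from an HTML form.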
Integration and interoperability of VoiceXML applications
VoiceXML applications can be integrated with other systems by using Java, or
the <object> tag, as described in “Integrating different programming models”
on page 26.
The benefits of VoiceXML
The benefits of using VoiceXML to create voice dialogs are:
v Industry-standard markup language allows you to use state-of-the-art tools
to build usable, flexible dialogs
v Easy integration with multi-tier, on demand, business applications—a single
application model for both Web and voice solutions
v Reuse of Web infrastructure—common business logic to support multiple
presentation channels
v Common toolset for creating applications for all presentation channels: Web,
voice, WAP
v Platform independence—VoiceXML applications are stored on a Web or
application server, and can be run on generic VoiceXML platforms.
v Applications are managed on the application server, so there is no need for
separate application management for the voice channel.
v Ease of application deployment and updating, resulting from having
applications located centrally on a web server.
v Use of the <object> tag enables VoiceXML dialogs to call voice applications
written using Java or state tables.
v Availability of reusable dialog components—tested VoiceXML building
blocks for common application functions
Using Java applications
This section describes the two Java alternatives to using the capabilities of
VoiceXML:
v Implement an additional Java application
v Develop your entire voice application using Java
The Java language is especially suitable for modular programming. You can
design your applications as a set of classes, which you can reuse in other
applications, and which are easy to customize and maintain. Using the
following headings, this section describes how Java applications work, how
you implement them and when you should consider Java rather than other
programming models:
v “How is an incoming call handled by Java?”
v “What controls the sequence of events in a Java application?” on page 39
v “How does the caller interact with the Java application?” on page 39
v “How do you specify what the Java application says?” on page 40
v “How is the spoken output for Java applications stored?” on page 40
v “How do Java applications access information?” on page 41
v “Integration and interoperability of Java applications” on page 42
v “Application development tools for Java” on page 30
v “The benefits of Java” on page 42
How is an incoming call handled by Java?
When a call is routed to the Java environment, the called number is used to
look up the appropriate Java application in a configuration file.
The WVR class, which represents the WebSphere Voice Response system,
handles telephone calls and establishes communication with the WebSphere
Voice Response system. The WVR class can also be used to make outgoing
calls and to return the call at the end of the interaction.
The need for multiple application instances
Each instance of a Java application handles a single call at one time. You must
have multiple instances of the application waiting for calls; as many as you
expect to handle during your peak hour. If you have more than one
application, you need a different set of instances for each, or you can have a
top-level Java application that greets the caller, asks them what service they
require, and calls other applications as needed. The called application can be
written in Java, VoiceXML, or it can be a state table.
What controls the sequence of events in a Java application?
In a Java application, the sequence of events is determined by the ordering of
statements in the program.
The WebSphere Voice Response Java API includes classes and methods
representing the WebSphere Voice Response system, and also the components
of a typical voice response application.
v WVR methods represent the WebSphere Voice Response system and its
interactions.
v Call methods interact with and manipulate telephone calls through actions
such as playing audio (play()), recording the caller (record()), and getting
input (playAndGetInput()), and also by performing telephony functions:
– getCalledNumber() and getANI(), which provide called and calling
number information to the application
– consult(), conference(), transfer(), blindConference(), and
blindTransfer(), which provide call transfer and conference call facilities.
v PlayAttributes specifies the way that greetings, messages, notices, and so
on are to be played.
v MenuAttributes allows a caller to press keys or say command words, to
select an option. It handles all the usual error conditions: time-out, invalid
key entered, and so on. Both the voice description for the item, and the
telephone key or speech recognition word that is associated with selecting it
are included.
v InputAttributes allows the caller to enter data, by pressing keys or
speaking. The InputAttributes includes both the voice description and the
length of time allowed, and, if necessary, the key to be recognized as an
Enter key. The class has a validator to allow checking for various formats:
currency, date, credit card number, and so on.
v DTMFAttributes specifies the maximum number of DTMF keys allowed,
and the valid DTMF termination keys.
v RecoAttributes specifies the context which identifies the grammar for
speech recognition, the maximum number of possible recognition results to
return from the recogniser, and whether or not a tone is played to indicate
when recognition begins.
How does the caller interact with the Java application?
The caller can interact with the application in two ways:
v Free form key or voice input
v Structured menu input using key presses (DTMF), or speech recognition, or
both.
There are two attribute classes that are used in conjunction with the
Call.playAndGetInput() method to facilitate this: MenuAttributes and
InputAttributes.
To record the caller’s speech and store it as a voice segment, you use the
Call.record() method.
For more information about speech recognition see “Speech Recognition” on
page 49.
How do you specify what the Java application says?
You can use prerecorded speech or text-to-speech in a Java voice application
by using any of the following subclasses of MediaType. These media data
classes specify the sounds that the caller hears:
v The VoiceSegment class specifies recorded voice data and is used both for
output and for input recorded by the Call.record() method.
v AudioCurrency specifies a currency amount to be spoken.
v AudioDate specifies a date to be spoken.
v AudioNumber specifies a number to be spoken.
v AudioString specifies a character string to be spoken.
v AudioTime specifies a time to be spoken.
v DTMFSequence specifies dual-tone multifrequency signals to be played.
v MediaSequence is a composite of other media data objects.
v The TextToSpeech class specifies text from which speech is to be
synthesized.
You can also use arrays of the MediaType object to make composite
sequences of other media elements.
All the methods capable of playing a message can accept arrays of
MediaTypes. This allows your applications to play chains of MediaType
objects in one operation.
How is the spoken output for Java applications stored?
Prerecorded audio files need to be imported, using a special utility, into the
WebSphere Voice Response voice segment database, where they are stored as
voice segments in a special Java space. For more information about voice
segments, see “Creating the voice output for applications” on page 59.
For more information about text-to-speech, see “Text-to-speech” on page 51.
How do Java applications access information?
Unlike the proprietary programming languages and interfaces that are
supplied with many interactive voice response systems (including WebSphere
Voice Response state tables), the Java API is an open interface. This means
that you can use the Java API, together with other APIs, and beans from other
sources to create your applications.
Communicating with host applications using Java
There are a number of ways to communicate with host-based systems, but the
recommended method is to use WebSphere Application Server, as shown in
Figure 8 below.
Adding voice capability to existing business applications
Most importantly, you can add voice capability to existing business
applications, taking advantage of what is already available, without the need
to change it.
For example, if you're already an IBM e-business, you might already be using
Enterprise JavaBeans (EJBs) to write the server-side components of your
business applications in Java. Your enterprise application model looks
something like the one shown in Figure 8.
Figure 8. E-business application model (the figure shows a user's Web browser communicating over HTML/HTTP with JSPs on WebSphere Application Server, which use EJBs, commands, and connectors to access tier-3 data)
This model is designed to deliver Web pages. You would probably use
Rational Application Developer to create the business logic and author the
Web pages with the WebSphere Voice Toolkit. The applications would use
connectors to access the data in tier-3.
It's easy to add voice to this model. Let's put it in terms of a real example.
Assume that one of the Java applications offered by the e-business shown in
Figure 8 accesses insurance policy data and returns it to the user's Web
browser. The enterprise can add voice to the solution without changing the
business logic and tier-3 server delivering data to users. Using Rational
Application Developer and the Java API that is supplied with WebSphere
Voice Response, they can implement a voice application that integrates with
the existing Web application at the Enterprise JavaBeans (EJB) level. The final
result provides user access to insurance data through fully integrated voice
and Web applications, as shown in Figure 9.
In this configuration the WebSphere Application Server (WAS) client is
installed on the voice response system to interact with the WAS Server.
Figure 9. Adding voice to e-business (the figure shows a caller on the telephone network reaching Java applications in the Java and VoiceXML environment on the WebSphere Voice Response system; these applications use the same EJBs, commands, and connectors on WebSphere Application Server as the Web browser path, with access to tier-3 data and a host system)
This is a simple example, but the same principles apply to complex e-business
environments. WebSphere Voice Response might link with several IBM
products that enable successful e-business across enterprises, using not only
WebSphere but also products such as SecureWay (for directory access),
MQSeries (for messaging), CICS (for transaction processing), and DB2 (for
database access).
Integration and interoperability of Java applications
In addition to being able to access data, Java applications are also very easy to
integrate with other systems. For example, call control can either be handled
by the WebSphere Voice Response system, or be fully integrated with Genesys
or Cisco ICM.
The benefits of Java
Java provides the following benefits:
v Java applications can be developed easily using Rational Application
Developer, and the tools provided in the WebSphere Voice Toolkit.
v Platform independence: Java applications run either on an application
server or on the WebSphere Voice Response system.
v Applications are managed on the application server, so separate application
management is not needed for the voice channel.
State table applications
A state table is a program that specifies the basic logic of the interaction that
takes place with callers. A state table can invoke other state tables, and uses
prompts and voice segments to communicate with the caller. A voice segment
is an audio recording, usually of spoken words. A prompt is a small program
that specifies the logic of the voice output that callers hear, allowing you to
combine voice segments efficiently, rather than having to record every
utterance separately. This section describes how state table applications work,
how you create them, and when you should consider state tables rather than
other programming models.
A state table application is a collection of state tables, prompts, voice
segments, and servers, which together provide the desired function. The
top-level state table is the one that is specified in an application profile, hence
you will find it convenient to refer to the voice application by the name of
this state table. Figure 10 shows the various components of a voice
application.
Figure 10. The components of a voice application using a state table environment (the figure shows an incoming call from the telephone network reaching WebSphere Voice Response, where the Incoming_Call state table and an application profile invoke a state table that uses prompts, voice segments, custom servers, and a voice processing server; a 3270 server gives access to business logic and information held on a System/370, System/390, AS/400, or any other computer)
This section describes how state table applications work, how you create
them, and when you should consider state tables rather than other
programming models:
v “How is an incoming call handled by state tables?”
v “What controls the sequence of events in a state table application?”
v “System variables” on page 45
v “How do you specify what the state table application says?” on page 45
v “How state table voice applications handle voice messages” on page 46
v “Integration and interoperability of state tables” on page 47
v “Application development tools for state tables” on page 48
v “The benefits of state tables and custom servers” on page 49
How is an incoming call handled by state tables?
When WebSphere Voice Response detects an incoming call, the called number
is identified and is used to look up the application profile corresponding to
that number. The Incoming_Call state table, after issuing an AnswerCall
action, calls the InvokeStateTable action to invoke the state table that is
specified in the profile (if the JavaApplication state table is specified, then
control is passed to the Java and VoiceXML environment). The main role of an
application profile, therefore, is to specify how incoming calls to each number
are to be handled.
In addition, application profiles can:
v Define mailboxes for an application that uses voice messaging
v Initialize the values of a group of system variables
v Specify the language database to be used initially by the application.
A state table needs an application profile only if it handles inbound calls or
lets callers access mailboxes.
What controls the sequence of events in a state table application?
A state table contains a sequence of instructions, called states, that perform
the activities of a voice application. When a voice application is started, the
states in the associated state table are processed. Each state defines an action
to be performed. The result of each action controls which state is run next.
Any state table can invoke other state tables; this technique is called nesting.
A state table can be started in one of the following ways:
v Directly (the state table name is specified explicitly)
v By variable (the state table name is specified as a variable)
v By profile (an application profile name is specified, and it specifies the state
table name)
More than one application can use the same state table concurrently.
System variables
State tables have access to many system variables to which WebSphere Voice
Response assigns information. These variables can then be retrieved by state
tables and prompts. Some of these variables contain information about the call
in progress, such as the call signaling type and the logical channel number.
Others contain general information such as the time and date in various
formats. Many of the supplied variables are used for voice messages,
including, for example, the number of messages in a caller's mailbox, and the
time and date a message was received.
How do you specify what the state table application says?
Prompts are used in state tables to specify what a caller hears. A prompt not
only identifies the words spoken to the caller, but also specifies the logic of
when and how the words are spoken.
A prompt can be simple or complex, depending on whether the prompt
always speaks the same words or whether the prompt logic specifies
conditions for speaking different phrases. For example, you might want an
application to greet callers with the phrase “good morning” if it is before
noon and “good afternoon” if it is after noon. To do so, you can construct a
prompt that tests whether the time is before 12:00 or after 12:00. Depending
on the results of the test, the prompt follows “good” with either the word
“morning” or the word “afternoon”.
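Although state table prompts are built with the WebSphere Voice Response prompt editor rather than written by hand, the same time-of-day logic can be sketched in VoiceXML, which the product also supports; the wording and form name here are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="greeting">
    <block>
      <!-- Test whether the current time is before noon -->
      <if cond="new Date().getHours() &lt; 12">
        <prompt>Good morning</prompt>
      <else/>
        <prompt>Good afternoon</prompt>
      </if>
    </block>
  </form>
</vxml>
```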
System prompts
System prompts are supplied with WebSphere Voice Response. Each of these
prompts accepts a number as input and speaks it in a different way. For
example, if you want 12.34 spoken as a real number, it is spoken as “twelve
and thirty-four hundredths”, but 12.34 spoken as a currency amount in U.S.
English would be “twelve dollars and thirty-four cents”. System prompts can
speak numbers as integers, ordinals, dates, times, and telephone numbers.
The system prompts are available in several national languages. You can
create your own versions of them for other languages or to meet local
requirements.
System prompts use voice segments to produce the individual numbers,
letters and other common sounds. A set of frequently-used voice segments is
provided with WebSphere Voice Response. For more information about using
and creating voice segments see “Creating the voice output for applications”
on page 59.
Chapter 2. How WebSphere Voice Response applications work
45
How state table voice applications handle voice messages
Voice applications can give callers the opportunity to leave and retrieve voice
messages in electronic mailboxes, and to forward or delete them. In addition,
voice mail applications should allow each subscriber to record a greeting for
callers to hear. The greetings are recorded as voice segments.
Each mailbox can also have an audio name recorded for it. This is a voice
segment that can be inserted into a general greeting if the subscriber does not
record a greeting. For example, your voice mail application could have a
general greeting that says:
“You have reached the mailbox of ‘.......’ Please leave a message after the
tone.”
The owner of the mailbox then records their audio name, “John Smith”. This
is inserted into the general greeting, so that what the caller hears is:
“You have reached the mailbox of ‘John Smith’. Please leave a message after
the tone.”
The audio name can also be inserted into the message header that is heard
before a message.
Some of the state table actions are specifically intended to provide voice
messaging functions. State tables and custom servers use the WebSphere Voice
Response voice database to access voice messages, greetings, and audio names.
Mailboxes
A mailbox is space that is reserved on the pSeries computer hard disk, so that
an application can store messages. Without at least one mailbox, an
application cannot use voice messaging.
Mailboxes are defined in an application profile, which includes optional
information such as the password and the referral telephone number of the
subscriber. When someone calls a messaging application, WebSphere Voice
Response assigns each item of messaging information to a system variable.
The voice application can make control decisions based on the value of any
system variable. For example, the owner of a mailbox can record alternative
greetings, and the messaging application can use the value of the
Caller: Mailbox: Owner status system variable to determine which greeting is
spoken. (See “System variables” on page 45 for more information.)
Distribution lists
When a voice application includes the appropriate logic, callers can use the
telephone to create distribution lists and send messages to all the mailboxes
listed. A distribution list is a list of mailboxes to which messages can be sent.
Each distribution list belongs to a subscriber's mailbox. Distribution lists can
be either public, or private to the owner.
Subscriber classes
To avoid using up large amounts of disk space, you can limit the length of
messages. You can also use subscriber classes to control the maximum number
of messages in a single mailbox. Each subscriber class specifies this limit and,
in addition, puts limits on the number of mailboxes, distribution lists, and
greetings that can be recorded. Each application profile can then be assigned
to a subscriber class, and applications can check subscriber class variables to
ensure that the limits are not exceeded.
Integration and interoperability of state tables
State tables can optionally call servers to provide other functions. A server is a
program that acts as a bridge between WebSphere Voice Response and other
software components. WebSphere Voice Response supports two types of
servers:
v A 3270 server lets you access information on remote computers that use the
3270 data stream. If you already have 3270 applications that access
information needed by your WebSphere Voice Response applications, you
can create a 3270 server to obtain this information. To develop the server
and run it, use WebSphere Voice Response 3270 terminal emulation. A 3270
server consists of screen definitions and scripts. See WebSphere Voice
Response for AIX: 3270 Servers for detailed information.
v A custom server is a C language or C++ language program with which
you can provide access to data or applications on any type of computer,
including the pSeries computer on which WebSphere Voice Response is
running. Although your information might be held in a database on a host
that uses the 3270 data stream, you might find it more convenient to write a
custom server to access it rather than create a 3270 server. Other functions
for which custom servers can be used include external fax, speech
recognition, or text-to-speech servers, and integration with CTI products
(see “How voice applications access other resources” on page 49).
Some features, such as support for CallPath Enterprise Server, Cisco
Intelligent Contact Management (ICM) software, ISDN, and WebSphere
Voice Server speech recognition and text-to-speech, are implemented using
custom servers.
See WebSphere Voice Response for AIX: Custom Servers for detailed
information.
You can integrate state tables with VoiceXML and Java applications by using
the StateTable bean in a Java application. You might want to do this if you
have legacy applications, or if you want to use a custom server to extend the
capabilities of the WebSphere Voice Response system.
You can also start Java and VoiceXML applications from a state table by using
the InvokeStateTable action to invoke the JavaApplication state table. If the
Java and VoiceXML environment is running, control can then be passed to a
Java application. See WebSphere Voice Response for AIX: Application Development
using State Tables for more information.
Application development tools for state tables
You can use either of two approaches to develop your state table
applications:
Using the state table editor
WebSphere Voice Response includes an easy-to-use state table editor for
creating and editing voice applications. This editor can represent the logic in a
graphical Icon view or a text-based List view, and is accessed from the
WebSphere Voice Response user interface. It is described in “State tables” on
page 75.
Using an ASCII editor
You can use an ASCII editor instead of the WebSphere Voice Response user
interface to create the code for state tables, prompts, and 3270 server scripts.
ASCII-format code can be stored in an external code repository, imported into
WebSphere Voice Response, and then debugged using the WebSphere Voice
Response windows. You can use a command-line interface instead of the
window menu option to import ASCII state tables and prompts, so you can
schedule jobs to import all the changed files from your code repository
automatically. User-specified version-control information can be retained
after files are imported into WebSphere Voice Response.
You can define objects that are imported from ASCII source files as
read-only, to prevent their modification within WebSphere Voice Response.
However, if you prefer, an imported state table, prompt, or script can remain
modifiable.
A state table, prompt, or script developed or modified using the WebSphere
Voice Response user interface can be exported and then modified using your
favorite editor. However, if you import an ASCII state table, and then export it
again, the ASCII code generated by WebSphere Voice Response does not
necessarily look the same as the ASCII file you originally imported. In other
words, ‘round-tripping’ is not normally possible.
The benefits of state tables and custom servers
While it is simpler to use VoiceXML to create new dialogs, you may need to:
v Write a state table to provide support for fax, ADSI, or TDD devices
v Write a custom server to expand the capabilities of the WebSphere Voice
Response platform
v Call legacy state tables and custom servers that you want to continue to use
How voice applications access other resources
The openness of the WebSphere Voice Response platform allows you to build
on the functionality of the base product, using the APIs supplied. In this way
you can make use of facilities such as:
v Speech recognition servers
v Text-to-speech servers
v Fax
v Telecommunications devices for the deaf
v Call tromboning
v ADSI
This flexibility allows you to choose solutions that are best suited to your
organization's requirements.
IBM Business Partners and other solution providers make use of the flexibility
of the custom server interface to provide complete applications for speech
recognition, fax integration, speech synthesis, switch integration, and internet
access.
Speech Recognition
Speech recognition enables people to use their voices instead of the telephone
keys to drive voice applications. It provides the following advantages over
DTMF tones:
v The call interaction is easier and more natural.
v Callers who are using a cellular (mobile) or hands-free telephone do not
have to move the telephone away from their ear to continue the interaction.
v Applications can be designed specifically to accurately recognize times,
dates, and currencies.
v The overall time taken to complete transactions is reduced.
The simplest speech recognition applications enable callers without
push-button (DTMF) telephones to use equivalent applications. The
technology can also be used to provide more sophisticated applications that
use large vocabulary technologies. For telephone applications, speech
recognition must be speaker-independent. Voice applications can be designed
with or without barge-in (or cut-through) capability.
Speech recognition is particularly useful if a large number of your callers do
not have DTMF telephones, if the application does not require extensive data
input, or if you want to offer callers the choice of using speech or pressing
keys.
Figure 11 shows a typical speech recognition setup. To handle the processing
that speech recognition demands, the example uses multiple LAN-connected
server machines.
Figure 11. A speech recognition environment (callers on the PSTN say, for
example, “I want to fly from Heathrow” in response to the question “Which
airport would you like to fly from?”; the WebSphere Voice Response systems
pass the speech to LAN-connected speech recognition servers)
When speech recognition technology is being used in a telephony
environment, you need to think about the following characteristics:
v The speech being processed is received from telephone lines rather than a
microphone. The speech recognition has, therefore, to cope with the lower
quality speech that is associated with the limited bandwidth and sensitivity
of telephone systems.
v Instead of being trained to a single user, speech recognition technology has
to adapt to the different voice characteristics of many different callers,
operating in a speaker-independent manner.
Speech recognition using WebSphere Voice Server
Speech recognition is supported by WebSphere Voice Server in both the
VoiceXML and Java programming environments. The runtime components of
WebSphere Voice Server, or of another MRCP Version 1-compliant speech
recognition product, work with the speech recognition engine to perform the
recognition. Recognition engines can be distributed across multiple systems
so that resources can be shared and redundancy is provided.
To allow highly-accurate recognition of continuous speech in this
environment, the speech must follow a format that is defined by the
application developers. These formats (or grammars) can allow many possible
ways of speaking requests, and many tens of thousands of words can be
recognized. Although WebSphere Voice Server provides the means for you to
support large-vocabulary speech recognition, the system does not allow
dictation.
The WebSphere Voice Toolkit enables an application developer to create the
grammars that perform the recognition. The grammars can be created in
multiple national languages.
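As a sketch, a grammar for the airport example in Figure 11 might look like
the following in the W3C Speech Recognition Grammar Specification (SRGS)
XML form; the airport names and rule names are purely illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<grammar xmlns="http://www.w3.org/2001/06/grammar" version="1.0"
         xml:lang="en-GB" root="origin">
  <rule id="origin" scope="public">
    <!-- The carrier phrase is optional -->
    <item repeat="0-1">I want to fly from</item>
    <ruleref uri="#airport"/>
  </rule>
  <rule id="airport">
    <one-of>
      <item>Heathrow</item>
      <item>Gatwick</item>
      <item>Stansted</item>
    </one-of>
  </rule>
</grammar>
```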
For detailed information about the speech recognition functions provided in
WebSphere Voice Server, refer to the WebSphere Voice Server infocenter at:
http://publib.boulder.ibm.com/infocenter/pvcvoice/51x/index.jsp
Text-to-speech
Some information, for example, news items, stock prices, or electronic mail is
difficult or impractical to prerecord before it is made available over a
telephony system. Instead, your application can have text converted into
synthesized speech as it is needed.
Text-to-speech provides the necessary flexibility for applications that have a
large number of varying spoken responses. Information can be read to the
caller without the need to prerecord voice segments. Applications can use
synthesized speech, prerecorded speech, or a mixture of both.
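In VoiceXML, for example, prerecorded and synthesized speech can be mixed
in a single prompt: the <audio> element plays a recorded file if it is
available, and its inline text is synthesized as a fallback. The file name and
variable in this sketch are illustrative:

```xml
<prompt>
  <!-- Play the recording if it can be fetched; otherwise synthesize the text -->
  <audio src="welcome.wav">Welcome to the account information service.</audio>
  <!-- This dynamic part is always synthesized -->
  Your balance is <value expr="balance"/> dollars.
</prompt>
```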
WebSphere Voice Response uses the open standard communication protocol
MRCP Version 1 to communicate with speech technologies such as
WebSphere Voice Server.
Text-to-Speech using WebSphere Voice Server
When using WebSphere Voice Server, or a third-party speech technology
vendor's product that is compatible with MRCP V1, the application uses the
WebSphere Voice Response MRCP client interface to send text to a
text-to-speech server. The text-to-speech server returns the synthesized speech
through the WebSphere Voice Response MRCP client interface, and the
application then sends it down the voice channel to the caller.
Speech synthesis technology is very dependent on the language that is being
synthesized. The WebSphere Voice Server Text-To-Speech function is available
in a number of languages.
WebSphere Voice Server's Text-To-Speech function allows your voice
applications to adapt to specific circumstances by synthesizing a prompt from
a text string and playing it dynamically in real time. For example, in response
to a caller's request for which no prerecorded prompt is available, a voice
application can select an appropriate text string and convert it to speech to
play back to the caller. This enables your application developers to develop
voice applications more quickly and cheaply. The Text-To-Speech function also
supports barge-in (or cut through) so that synthesized speech can be
interrupted by a caller in the same way that a prerecorded prompt can.
Text-to-speech servers can be on separate systems that are shared between
multiple WebSphere Voice Response systems, allowing more callers to be
handled at the same time. This type of arrangement is scalable, and also
provides redundancy, so that if the need arises to close down part of the
system, the remaining machines can continue to provide a service.
For detailed information about the text-to-speech functions provided in
WebSphere Voice Server, refer to the WebSphere Voice Server infocenter at:
http://publib.boulder.ibm.com/infocenter/pvcvoice/51x/index.jsp
How does WebSphere Voice Response send fax output?
You can modify your voice applications to make use of the fax capabilities
that are built into WebSphere Voice Response. You need to install a Brooktrout
fax card into your pSeries computer—the necessary device drivers are
supplied with WebSphere Voice Response to ensure that installation and setup
are straightforward. See WebSphere Voice Response for AIX: Fax using Brooktrout
for details.
How does WebSphere Voice Response interact with a TDD?
WebSphere Voice Response has built-in function to communicate with
standard TDD devices. The Telecommunications Device for the Deaf (TDD) is
a telephony device with a QWERTY keyboard and a small display and,
optionally, a printer. Instead of speaking into a mouthpiece, the caller types
messages on the keyboard; instead of hearing a voice from the receiver, the
caller views the messages on the screen, and can print them for later reading.
Usually, both callers use TDDs.
Figure 12. Two people communicating using Telecommunications Devices for the Deaf
Enabling your WebSphere Voice Response voice applications to communicate
with TDDs makes your services and information available to callers who are
unable to use an ordinary telephone. WebSphere Voice Response sends TDD
characters as if it were another TDD.
With the TDD support, WebSphere Voice Response can exchange data with
TDD devices without any additional hardware. Both the US (45.45) and
European (50.00) baud rates can be used.
Figure 13. How WebSphere Voice Response interacts with a
Telecommunications Device for the Deaf (the caller's TDD exchanges TDD
characters with WebSphere Voice Response over the telephone network; on the
pSeries, a state table and a custom server process the caller information, with
access to other systems over the data communications network and local
area network)
A TDD application works in much the same way as a voice application, with
a state table, prompts and TDD-character (rather than voice) segments,
together with a custom server that uses specialized subroutines to recognize
and generate TDD characters dynamically. Writing a TDD application is
similar to writing a DTMF-replacement application that uses discrete-word
speech recognition; the application recognizes TDD characters sent by the
caller, bases its decisions on them, and replies using TDD characters.
Specially tailored prompts and TDD-character segments are supplied in a
form suitable for U.S. English—if you want to support TDD users in other
languages, you can define other TDD languages.
WebSphere Voice Response's support is designed to operate with devices
meeting the EIA Standard Project PN-1663 TDD Draft, 9 June 1986
specification.
How does WebSphere Voice Response play background music?
Some requests take a while to process; playing background music during such
periods can help to indicate to the caller that the application is still working
on the request. Your organization might already play background music to
callers during call transfer, and your decision to play music during voice
applications, and your choice of music, should take this into consideration.
You might decide to play neutral easy-listening music that suits the image of
your organization, or to play something related to your latest advertising
campaign. Your background music does not have to be musical; it can be
spoken information about your product, a dialog or interview about it, or just
keyboard noises, if you want to make it sound as if the request is being
processed by a person.
Note: Before you can use any music, you must obtain permission from the
owner of any rights to it, such as copyright in the music and recording.
However, you do have permission to use the sample tunes supplied with
WebSphere Voice Response.
Figure 14. Using background music (the caller on the telephone network
hears prompts and background music mixed on the telephony and music
channels; under state table control, a custom server loads tunes from a tunes
file into the WebSphere Voice Response database, and the background music
subsystem plays them)
How WebSphere Voice Response performs call tromboning
A trombone operation occurs when a call coming in on one WebSphere Voice
Response channel connects directly with an outgoing call on another
WebSphere Voice Response channel. This is how it works:
1. A call arrives at WebSphere Voice Response, and the application decides
that the user needs to be connected to a third party by using a trombone
operation. The application then uses a custom server function to initiate
this operation.
2. The custom server requests WebSphere Voice Response to make an
outgoing call to the third party on another channel.
3. When the third party answers, they are connected to the original caller
using the TDM H.100 bus that is internal to the WebSphere Voice
Response system.
Note: When using the DTNA software simulation of a DTEA card, call
tromboning between DTTA PSTN channels and VoIP/SIP channels is not
possible, although it is possible to trombone between DTNA channels.
Using call tromboning
You can use call tromboning:
1. To simulate transfer. If the telephony protocol you are using cannot
support transfer, you can use the trombone operation to simulate a
transfer operation. The connection is maintained until either the caller or
the third party ends the call by hanging up. The disadvantage of this
operation over a normal switch-based transfer operation is that two
WebSphere Voice Response channels are occupied for the whole time that
the caller and third party are talking. With a switch-based transfer, the
channel between the caller and WebSphere Voice Response is broken as
soon as the caller is transferred to the third party.
2. To allow reference to a third party, then return to the WebSphere Voice
Response application. The trombone allows the caller to consult a third
party. When the third party hangs up, or the caller ends the consultation,
the caller is connected back to WebSphere Voice Response at the point
where they left the application they were using. This might be useful in a
voice mail application, for example, where a caller can break out to
respond to an urgent voice mail message, then return to processing the
rest of the voice mail.
For details of how to install, configure and use the trombone custom server,
refer to the WebSphere Voice Response for AIX: Designing and Managing State
Table Applications book.
Analog Display Services Interface (ADSI) support
This feature adds the following capabilities to your system:
v Support for interaction with analog display services interface (ADSI)
devices
v A development environment to create and manage ADSI scripts
v Support for the transmission of ADSI functions to ADSI devices with
real-time variable substitution
v Support for the receipt of alphanumeric data from ADSI devices
WebSphere Voice Response provides ADSI function as standard, allowing
applications to use the special functions provided by ADSI telephones. These
telephones have display panels and programmable keys which can make user
interaction with the voice program more user friendly. Some ADSI telephones
also incorporate a Mondex card reader to enable Mondex smart cards to be
updated.
WebSphere Voice Response provides the ability to send and receive ADSI
data, enabling the keys to be programmed to suit the particular circumstances
of the call, and short messages to be displayed on the telephone screen. The
data transmitted to an ADSI telephone conforms to the Bell Communications
Research (Bellcore) standard, and can be of two different types:
Server display control (SDC)
This type of data is sent when a voice application involves a
continuing connection between the ADSI generating device, such as
WebSphere Voice Response, and the ADSI telephone.
Feature download management (FDM)
This allows a number of alternative display and key setups (overlays)
to be downloaded and stored in the ADSI telephone. These overlays
can be selected by SDC actions or by various signals (events)
occurring at the telephone.
WebSphere Voice Response supports both data types.
WebSphere Voice Response contains the following components to support
ADSI devices:
v A script language to enable you to write the control information and
programs. This language also allows for parameter substitution points at
which parameters passed from WebSphere Voice Response can be included
in the downloaded data.
v Online help and the WebSphere Voice Response for AIX: Programming for the
ADSI Feature, which describe how to create ADSI script source files using
the language provided, and how to convert them into ADSI scripts which
can be interpreted by an ADSI telephone.
v A user interface to assist with the following tasks:
– The addition of ADSI script source files to a voice application database.
– Compilation of the script source files into a form that is recognized by
ADSI telephones.
– Specification of the parameter data that is to be included in the ADSI
scripts.
v System actions to enable the ADSI data to be sent to and received from an
ADSI telephone using the programmable keys or a Mondex card reader.
Note: WebSphere Voice Response does not support sending ADSI data to an
on-hook telephone.
Planning and designing voice applications
The creation of a logic flowchart should be part of the first phase of
application design. Putting effort into the initial detailed planning and design
makes application development using the WebSphere Voice Response user
interface easy. It cannot be stressed too strongly, however, that you need both
a good understanding of the business requirements of the application and a
logical and methodical way of working with the voice application
components.
An example flowchart is shown in Figure 15.
The flowchart in this example covers the following steps:
v Welcome the caller and find out if the caller has a pushbutton phone
v Transfer a caller without a pushbutton phone to a human agent, or invoke
a speech recognition application
v Offer a choice of services and get the key pressed by the caller
v If the key is 1, ask the caller for an account number, then retrieve and play
account balance information
v If the key is 2, ask the caller which interest rate they want and play the
appropriate information
v If the key is 3, play “Goodbye and thank you for calling”
v If the key is not 1, 2, or 3, play “Please try again”
Notice that the designer has already considered the possibility that the caller
may press a key that is not assigned to either a service offering or to ending
the call.
Figure 15. Example flowchart for a voice application that accepts key input
To design the most effective voice applications you need an awareness of both
the caller and the business requirements. You also need an aptitude for
programming.
Remember that design of the dialog is equivalent to screen design and
interaction in a traditional online application. For some callers, interacting
with a voice application is a new experience, and the benefits of WebSphere
Voice Response can be gained only if you take particular care with your user
interaction design.
Application design also requires a knowledge of the databases that contain
the information. If you are providing voice messaging, you must be able to
forecast how much space will be needed by the messages.
After you have analyzed your business requirements, you need to design and
create a prototype. You can create a flow chart like the one shown in Figure 15
on page 58. When you have done that, you are ready to design your state
table, one state at a time. Finally, you need to design the prompts before
recording the voice segments that the application needs.
Creating the voice output for applications
You can store the voice output (what the caller hears from applications) either
as prerecorded voice segments or as text, from which speech can be
synthesized dynamically. A voice segment consists of the spoken words (for
example, “hello” or “good morning”) or sounds (for example, music) available
to WebSphere Voice Response voice applications. A segment can be a single
word, a sound, a phrase, one or more sentences, or a combination of words
and sounds. A set of voice segments and voice logic modules is provided as
part of WebSphere Voice Response; it includes all the voice segments
needed by the system prompts to produce numbers, letters, days of the week
and other common sounds. You can either use the voice segments as supplied,
or you can re-record them using a voice of your choosing to match that used
in the applications. In the latter case, you can still use the supplied logic
modules.
National language support
WebSphere Voice Response supports voice segments recorded in multiple
languages for a single application. The voice segments and logic modules
provided in WebSphere Voice Response are available in a number of
languages. The specific languages you can select differ according to whether
you are using the State Table, Java or VoiceXML programming environments.
For details of the languages provided for each environment see “WebSphere
Voice Response language support,” on page 201.
Importing prerecorded voice data for state table applications
If you have prerecorded voice data, you can use the batch voice import (BVI)
utilities to import it. This set of utilities allows you to take voice data from an
input source such as a digital audio tape (DAT) player, and turn it into
segments in the WebSphere Voice Response voice segment database.
The batch voice import utilities automatically break the continuous audio
stream into segments based upon periods of silence, and then move those
segments into the WebSphere Voice Response voice segment database. The basic
assumption is that the complete set of segments to be imported is recorded as
one continuous stream of audio, with the end of one segment and the start of
the next being identified by a period of silence that is longer than that which
might occur within a segment.
The BVI utilities are described in detail in the WebSphere Voice Response for
AIX: Application Development using State Tables.
Recording voice segments
To record voice segments, you have the option of using either a telephone or a
microphone to record voice data directly into the computer:
Recording over the telephone
To use the telephone, you dial the number associated with a supplied voice
application and then record the voice segment by speaking into the telephone.
The voice segment is recorded either in uncompressed or compressed form.
The voice quality is quite satisfactory for many applications, but you may
want to use one of the other methods, which give better-quality sound.
Recording with a microphone
If you are using a microphone, you can also use the Voice Segment Editor,
which enables you to record voice segments directly.
Text-to-speech
For more information about text-to-speech (speech synthesis), see
“Text-to-speech” on page 51.
Key facts about components of voice applications
The components of WebSphere Voice Response provide capabilities such as
outbound calling and other advanced CTI functions, and VoiceXML or Java
voice dialogs. Custom servers can be used to extend WebSphere Voice
Response capabilities.
General
v CCXML is recommended for providing capabilities such as outbound
calling and other advanced CTI functions, although applications that use a
combination of both VoiceXML and Java are also supported.
v VoiceXML is recommended for developing voice dialogs.
v State tables can be used to access fax, ADSI, and TDD devices.
v Custom servers can be used to extend the capabilities of the WebSphere
Voice Response platform.
v When CCXML is not configured for call handling, application profiles route
each incoming call to a specific state table or to the Java and VoiceXML
environment.
v After CCXML has routed calls to the Java and VoiceXML environment, they
can be handled by VoiceXML or Java applications, according to a
configuration file.
CCXML
The CCXML support provided in WebSphere Voice Response is implemented
to the W3C CCXML Version 1.0 specification, and provides the following
capabilities:
v CCXML applications allow sophisticated handling and control of multiple
calls, and can be used with VoiceXML and Java applications to both answer
incoming calls and place outgoing calls. They can also be used to invoke
voice applications.
v CCXML enables routing of incoming calls to specific applications based on
ANI or DNIS, thereby simplifying the WebSphere Voice Response
configuration.
v On outbound calls, CCXML applications can specify a channel group, to
select the protocol that is used to make the call.
v CCXML applications can invoke CTI products for contact or call center
applications.
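The call-handling flow described above can be sketched as a minimal CCXML
document, based on the W3C CCXML Version 1.0 specification that WebSphere
Voice Response implements. The dialog URI is a placeholder, and a production
document would typically also inspect ANI or DNIS event variables before
routing:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ccxml version="1.0" xmlns="http://www.w3.org/2002/09/ccxml">
  <eventprocessor>
    <!-- An incoming call alerts: answer it -->
    <transition event="connection.alerting">
      <accept/>
    </transition>
    <!-- Once connected, start a VoiceXML dialog on the call
         (the src attribute is an ECMAScript expression, hence
         the inner quotes; the URI is a placeholder) -->
    <transition event="connection.connected">
      <dialogstart src="'http://server.example.com/welcome.vxml'"/>
    </transition>
    <!-- When the dialog finishes, disconnect the caller -->
    <transition event="dialog.exit">
      <disconnect/>
    </transition>
    <transition event="connection.disconnected">
      <exit/>
    </transition>
  </eventprocessor>
</ccxml>
```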
VoiceXML
v VoiceXML dialogs are fetched from a Web or application server and cached
to provide rapid access.
v A VoiceXML browser is used to access VoiceXML dialogs.
v Multiple VoiceXML browsers must be running to handle all the calls you
expect.
v The VoiceXML dialog controls the interaction with the caller.
v The caller can enter data by pressing keys, or by speaking; the speech can
be recognized or recorded.
v The application can play prerecorded audio or text-to-speech.
v Prerecorded audio is stored on the Web or application server.
v A <submit> tag is used to send the user's input to the server.
v A server-side application program is used to access back-end data.
v By using the <object> tag, VoiceXML dialogs can call voice applications
written using Java or state tables.
v VoiceXML applications can be developed and tested using the WebSphere
Voice Toolkit.
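As an illustration of several of these points, the following minimal VoiceXML
2.0 form collects digits from the caller using a built-in grammar and uses the
&lt;submit&gt; tag to send the input to a server-side program. The URL and
field name are placeholders, not part of any supplied application:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="account">
    <!-- Built-in "digits" grammar: caller can key or speak the number -->
    <field name="acctnum" type="digits">
      <prompt>Please enter your account number.</prompt>
    </field>
    <!-- Send the caller's input to a server-side program -->
    <filled>
      <submit next="http://server.example.com/balance.jsp"
              namelist="acctnum"/>
    </filled>
  </form>
</vxml>
```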
Chapter 2. How WebSphere Voice Response applications work
61
Java
v Java applications are written using the Java API.
v The WVR class waits for incoming calls, makes outgoing calls and returns
calls to the system.
v Call methods interact with and manipulate telephone calls through actions
such as playing audio, recording the caller, getting input and performing
telephony functions.
v The Call.invokeStateTable() method can be used to access legacy state tables
and custom servers.
v Sufficient instances of a Java application must be running to handle all the
calls you expect.
v The caller can control the application by pressing keys or speaking
commands.
v The caller can enter data by pressing keys, or by speaking; the speech can
be recognized or recorded.
v The application can play prerecorded audio or text-to-speech.
v Prerecorded audio is stored in a special Java space in the WebSphere Voice
Response voice database.
v Java applications can be developed using the IBM WSAD or WSSD and
tested using the WebSphere Voice Response Simulator.
State tables
v Application profiles also define mailboxes and initialize system variables for
use by state tables.
v State tables control the sequence of activities.
v Prompts specify the voice segments to be spoken.
v Voice segments are the sounds the caller hears.
v 3270 servers enable access to information through the 3270 data stream.
v WebSphere Voice Response 3270 supports up to three 3270 terminal
emulation sessions for each call.
v Custom servers allow access to information on all types of computers.
v Custom servers enable you to incorporate other functions into voice
applications.
v Messages are stored in mailboxes.
v Distribution lists enable messages to be sent to multiple mailboxes.
v Subscriber classes limit the storage used by messages.
Accessing other resources
The openness of the WebSphere Voice Response platform allows the system
to access other resources such as speech recognition, text-to-speech, fax,
telecommunications devices for the deaf, background music, call
tromboning, and ADSI phones.
v Speech recognition capability allows users to enter data and make choices
by speaking rather than by pressing keys.
v Text-to-speech capability is particularly useful if information changes too
rapidly or the amount of information is too large to prerecord.
v Speech recognition and text-to-speech can be provided by IBM WebSphere
Voice Server or other speech technology products.
Chapter 3. Using WebSphere Voice Response
To incorporate WebSphere Voice Response successfully into your
telecommunications network and to develop applications that maximize its
potential, you need a variety of skills. WebSphere Voice Response is packaged
and presented as a fully interactive, window-based system to make the tasks
you need to perform as easy as possible. The functions provided can be used
in any of the state table, Java, or VoiceXML programming environments
(except where specified).
This chapter introduces:
v “The graphical user interface”
v “Other tools for system and application management” on page 77
The graphical user interface
Examine the WebSphere Voice Response user interface to get an idea of what
you must do to make WebSphere Voice Response work for you. This section
describes the functions that are available under each of the menu items of the
Welcome window.
Figure 16 shows the WebSphere Voice Response Welcome window, from
which you can perform most tasks.
Figure 16. WebSphere Voice Response Welcome window
Access
The Access menu enables you to log on to and log off from the WebSphere
Voice Response user interface. The interface is password-protected and each
user can have a different password.
When you have logged on, you have access to the functions specified in your
administrator profile (see “Administrator profiles” on page 67).
From the Access menu, you can also Close the Welcome window. Closing the
window does not shut down the run-time system, which can continue to
receive and make calls.
Configuration
The Configuration menu offers the following options:
v Pack configuration
v System configuration
v 3270 session configuration (state tables only)
v Administrator profiles
v Application profiles
v Subscriber classes (state tables only)
v National Languages (state tables only)
v Help editor
Pack configuration
You can use this option to configure the telephony environment, by specifying
your country, switch type, and other basic details. The system responds by
setting various system parameters to appropriate values. You can use the Pack
Configuration window to view the existing settings of many of the telephony
parameters (see Figure 17).
Figure 17. Pack Configuration window
Be aware that any adjustments that you make to the telephony environment
might affect compliance with telecommunications authority regulations.
Because of this, use of Pack Configuration must be limited to authorized
personnel who are familiar with these regulations. Only one user at a time
can have write access. However, all users can view the information in the
Pack Configuration windows.
System configuration
WebSphere Voice Response is installed with a default system configuration
that includes the number of active telephone channels, switch signaling
protocols, and other operating parameters.
You can specify new values for a wide range of system parameters, from the
number of buffers in the buffer pool to how often the system prints reports
and archives statistics.
To help you set multiple parameter values, you can use supplied templates that
include predefined sets of values.
Only one user at a time can have write access. However, all users can view
the information in the System Configuration windows.
3270 session configuration (state tables only)
Communication Server for AIX creates sessions according to the LU2 session
profiles you define. An LU2 session profile specifies a communication session
between the pSeries computer and a remote computer using the LU2 protocol
(3270).
You configure a 3270 session by assigning a 3270 server to use it. You can
reconfigure a session at any time, or delete it if it is no longer required.
If you are using one of the CallPath call processing products, you can also
assign a session to a telephone number. CallPath can transfer the session to an
agent when it transfers the call.
WebSphere Voice Response supports the same number of 3270 sessions as
there are Communication Server LU2 session profiles, up to a maximum of
254. Some of the sessions can communicate with one remote computer and
some with another, and you can use a maximum of three sessions at any one
time on each channel.
Administrator profiles
Several people at the same time can use WebSphere Voice Response to
perform system administration and develop voice applications. You can assign
separate administrator IDs and access privileges to each person using the
system. The access privileges control which functions are available to the
person using a particular profile.
Application profiles
You can use application profiles to link applications to incoming calls, voice
messaging resources, and state tables. Each application profile specifies a main
state table that can be invoked by incoming calls and by other state tables,
and which has access to mailboxes and other voice messaging resources. (See
“How is an incoming call handled by state tables?” on page 44 for more
information about application profiles.)
Subscriber classes (state tables only)
Subscriber classes help you limit the amount of space that is used for voice
messages. Classify your mailbox users using one or more subscriber classes,
and then set limits on number of messages per mailbox, number of
distribution lists per mailbox, and so on.
Languages (state tables only)
WebSphere Voice Response supports the use of multiple national languages:
v A voice application can play prompts in multiple languages.
v Window text can be displayed in different languages.
You can add as many languages as are required to support your use of the
WebSphere Voice Response user interface and your voice applications. Adding
a new language sets up a database that is either voice-only, window-text only,
or both.
The voice portion of a language database contains the voice segment
directories, voice tables, and specific prompts for that language.
Window text includes display text and help text. Display text includes menus,
window titles, field labels, and button labels. You can use the Language
String Editor to translate this text into any language for which a database
exists.
The Telecommunications Device for the Deaf optional feature includes a
voice-only language (one that does not include window text) to support the
TDD. The language name is U.S. English TDD. If you need to support TDDs
in other languages, you can add your own TDD languages to the database.
Help editor
Help text is the online help information that you can access from each
window. You can translate the help text into any language for which a
window text database exists, or you can use the help editor to change the text
to suit your needs.
Operations
The Operations menu offers the following:
v System monitor
v 3270 session manager (state tables only, if the 3270 option is installed)
v Custom server manager
v Statistics
v Immediate shutdown
v Quiesce shutdown
System Monitor
The system monitor is a real-time graphical display that shows:
v Alarm conditions
v Trunk and channel status
v CPU usage
v Buffer pool usage
v Hard disk usage
v Channel usage
Figure 18. The System Monitor window
Figure 18 shows how the System Monitor window reveals the state of the
system at a glance. When resource usage exceeds a particular percentage, the
indicators change color to alert you to potential problems. Using the monitor
functions, you can block some or all of the channels on a trunk, quiesce the
trunk, or disable it immediately.
The System Monitor window also indicates hardware and software alarm
conditions, using color to indicate their severity. Further information about
each alarm message can be viewed directly from the monitor window.
You can have the System Monitor window for a single WebSphere Voice
Response system displayed on different display units, and you can have
System Monitor windows for multiple WebSphere Voice Response systems
displayed on a single display unit.
When you minimize the System Monitor window, the background color of the
icon indicates the highest severity of alarm outstanding. For example, if a red
alarm condition occurs, the icon changes to red.
3270 session manager (state tables only)
You can check the status of all 3270 sessions at a glance. You can also monitor
the activity of an individual session. The information that is sent backward
and forward is shown in real time. You can also see the sequence in which the
server is handling screens and which fields the server is using on each screen.
You can dynamically allocate sessions to 3270 servers, remove stalled sessions
from service, and stop stalled 3270 servers: all without stopping WebSphere
Voice Response.
You can have the 3270 Session Manager window for a single WebSphere Voice
Response system displayed on different display units, and you can have 3270
Session Manager windows for multiple WebSphere Voice Response systems
displayed on a single display unit.
Custom server manager
You can monitor the status of your custom servers. The available information
about a custom server includes:
v The AIX process ID (PID)
v How often the custom server is used
v How many telephone calls are accessing it
v The last function that it executed
You can start and stop servers dynamically.
You can have the Custom Server Manager window for a single WebSphere
Voice Response system displayed on several display units, and you can have
Custom Server Manager windows for multiple WebSphere Voice Response
systems displayed on a single display unit.
Statistics
After developing an application and starting to use it, you probably want to
know how well it is working. Which options are used the most often? At
what point do callers usually feel satisfied and disconnect? How often does a
caller ask to speak with an agent?
WebSphere Voice Response automatically collects information about each call.
The information includes how many calls access each application, where in
the call flow a caller hangs up, and how often a specific event occurs. An
event is anything you want to define as an event, such as the caller pressing a
key.
WebSphere Voice Response automatically logs this information for you. You
can view the information online, or have WebSphere Voice Response generate
and print reports automatically at preset intervals or whenever you want
them.
WebSphere Voice Response also maintains information about line usage, such
as how much activity took place on each channel and what applications are
being called, and statistics about the 3270 sessions and the links to all remote
computers; these statistics include the maximum number of sessions available
at any one time. You can view all this information online, or in printed form,
in the same way as the call information.
In addition to the predetermined report formats that you can view online or
print, you can also export the raw data from the DB2 database in which it is
held, and process it to produce reports of your own.
Immediate shutdown and quiesce shutdown
Immediate shutdown closes each channel immediately and does not wait for
any calls to stop.
With quiesce shutdown, WebSphere Voice Response monitors each channel
and closes them separately as calls to each channel stop. WebSphere Voice
Response then shuts down gracefully.
Applications (state tables only)
The Applications menu offers the following, all of which apply only in the
state tables environment:
v Applications, which opens the Applications window. From this window,
you can develop and manage all the components.
v Voice segments
v Voice tables
v Prompts
v State tables
v 3270 servers
v Custom servers
The Applications window
The Applications window shows you all the applications that are in the
system. From this window, in addition to being able to find and open all the
objects in your applications, you can export complete applications (full export),
partial applications (partial export), or just the objects that have been changed
since the last export or a specified date (delta export). You also import
applications into this window, with the option of previewing the objects you
are importing.
The Object Index:
The Object Index displays all the objects in the WebSphere Voice Response
system that are relevant to voice applications. You can use the Object Index to
locate any object such as a state table, prompt, or application profile. To add
an object to an application, you can drag it from the Object Index to an
Application window.
The Application window:
The Application window shows you all the state tables, prompts, and so on
that are needed for a single voice application, and therefore provides a
permanent definition of the application. From the Application window, you can
open objects for editing.
Figure 19. The Object Index, the Applications window, and an Application window
Voice segments
You can record voice segments either by using the telephone or an Ultimedia
Audio Adapter. You edit voice segments by using a graphical representation
of the sound. By editing, you can remove unwanted gaps in the voice
segment.
Voice tables
Each voice segment is stored in a voice directory. Prompts can use segments
from any voice directory but, to help you organize your voice segments, you
can also group them logically in a voice table. This allows you to refer to voice
segments by using an index value that is relevant to the application. For
example, the Days_of_the_Week voice table allows you to refer to the voice
segments for “Sunday” using the value 1, “Monday” using the value 2, and so
on.
The system voice tables catalog the system voice segments in groups, such as all
the letters of the alphabet, numbers, and days of the week. You can create
additional voice tables as you need them. Alternatively, as you record more
voice segments, you can add them to your existing voice tables.
Prompts
Prompts are used in state tables to define what a caller hears and to define
the logic of when and how the words are played. The system prompts are
kept in the system prompt directory. All the other prompts to be used by a
particular state table are grouped in a user-created prompt directory.
You can use the prompt editor (a menu-driven interface) for developing
prompts, or you can use an ASCII editor to develop the prompts, which you
then have to import. You can also export to ASCII format prompts that you
have created using the prompt editor.
State tables
WebSphere Voice Response includes an easy-to-use graphical editor for
creating and editing voice applications. Its interface provides two views of the
main sequence and logic of the application:
v Icon, in which the logic of the voice application is represented by graphical
objects and text that can be directly manipulated by mouse actions. An
example of the icon view is shown in Figure 20 on page 76.
v List, in which the logic of the voice application is represented by text only,
but which can also be directly manipulated by mouse actions.
To create the logic of a voice application, open a new state table, then drag
actions from an action palette and drop them onto a work canvas. You can
rearrange the actions (or states) on the canvas, and determine their sequence
by connecting each possible result of an action with the next appropriate
action. The connections between actions are shown as lines. Open the actions
to specify parameters and other details.
Figure 20. The icon view of a state table
A state table debugger exists with which you can step through a state table,
go to a particular action, stop the state table, or continue. You can also add or
delete variables or assign values to them, and add or delete break points.
3270 servers
The 3270 servers option enables you to define the server to WebSphere Voice
Response, and use 3270 terminal emulation to capture the 3270 screens and
define the 3270 fields to be used in the server.
You can use a menu-driven interface to develop the script, or you can use an
ASCII editor to develop and then import the script. You can also export to
ASCII format any scripts that you have developed using the menu-driven
interface.
Custom servers
The custom servers option enables you to define the user functions and to
define and generate the main function of a custom server. After you have
written and defined the functions, you can build and install the server.
Help
The Help menu offers the following information:
v Help on the window
v Help about Help
Other tools for system and application management
This section describes how to manage your WebSphere Voice Response system
using alternative methods to those described earlier in this chapter.
System management
ASCII console
Although you can use the Operations menu on the WebSphere Voice
Response user interface to do most system management tasks (as described in
“Operations” on page 69), a user interface that runs on an ASCII display is
also available. This ASCII system management console gives you access to the
same WebSphere Voice Response system resources, but does not require a
display with graphics capability.
For details of the tasks you can perform using the ASCII console, see the
WebSphere Voice Response for AIX: User Interface Guide book.
Command line utilities
The command-line utilities are more convenient than the WebSphere Voice
Response user interface if:
v You are at a different location from the target system and cannot use a
graphical user interface to it
v You need to distribute an application to multiple sites
The command-line utilities are fully compatible with the equivalent functions
in the window interface—for example, if you export an object using the
window interface, you can import it using the command-line utility.
Integrating WebSphere Voice Response with network
management facilities
You can integrate WebSphere Voice Response with your network management
facilities by requesting WebSphere Voice Response to send alarms and current
system information to a central systems management point. This enables a
network of one or more WebSphere Voice Response systems to be managed
from a central point, and also enables a central network operator to help with
troubleshooting.
Simple Network Management Protocol:
If you have the correct software, you can use Simple Network Management
Protocol (SNMP) to obtain up-to-date status information on WebSphere Voice
Response. This means that you can use NetView® or another SNMP-compliant
network management package to obtain data such as the current status of
each pack, the number of buffers available to hold voice segments, or the
SNA status of a specific 3270 session. The status of WebSphere Voice Response
resources can be modified via SNMP, enabling you to manage and control a
WebSphere Voice Response system remotely.
SNMP traps can optionally be sent for some or all WebSphere Voice Response
alarms when they occur.
Application management
Application Monitor
Application Monitor is a standalone Java utility that lets you monitor the
status of Java and VoiceXML applications that are running on multiple
systems. From one central system you can see at a glance whether
applications are handling calls or waiting for calls. You can download the Java
and VoiceXML environment Application Monitor from http://www.ibm.com/
software/voice and install and run it on any system that supports Java
Version 1.3.
ASCII console
As with the system management functions above, you can use the text-based
ASCII console to manage applications. This provides an alternative to the
Applications menu on the WebSphere Voice Response user interface.
Command-line import and export utilities
The import and export functions that are available from the window-based
user interfaces are also available from the AIX command line. Using the
command-line import and export utilities, you can develop applications at a
central location, then distribute them to remote locations, without leaving the
central location. You can also import applications from a remote location, so
that you can examine or debug them at a central location.
The command-line utilities make it easy to package applications in such a
way that importing and exporting require minimum user intervention. You
can write simple shell scripts with which remote users can easily import or
export their own applications. All you need to do then is tell the users to type
a single command on the AIX command line, instead of guiding them through
a series of menus and buttons.
Key facts about using WebSphere Voice Response
WebSphere Voice Response’s features include: graphical user interface with
drag and drop using AIXwindows®, print options, translatable window text,
online help for each window, and simultaneous support for multiple
administrators or developers.
These are some of WebSphere Voice Response’s features:
v Graphical user interface using AIXwindows
v Drag-and-drop interface for application management and state table editing
v Online help for each window
v Support for many administrators or developers at the same time
v Print option for state tables, reports, statistics, and so on
v Capability for translation of window text into other languages
For administration you can:
v Use either a graphical system monitor or an ASCII interface for application
management functions
v Display real-time status of telephone lines and system resource usage
v Collect statistics on channel, application, and 3270 session usage
v Send alerts to NetView, Tivoli, SNMP
v Use SNMP-compliant packages to manage your system
Part 2. Planning to install WebSphere Voice Response
This part of the book discusses the hardware and software required to use
WebSphere Voice Response and the preinstallation planning you should do.
(Figure: a caller connects through the telephone network to the voice
processing component, WebSphere Voice Response; its application logic uses
the data communications network to reach a business object server or Web
server and database information.)
The structure of this part of the book is as follows:
v Chapter 4, “Telephone network,” on page 83 describes how the telephony
environment interfaces with the voice processing system.
v Chapter 5, “Workstation and voice processing,” on page 141 lists the
hardware and software requirements for WebSphere Voice Response, and
discusses planning considerations.
v Chapter 6, “Scalability with WebSphere Voice Response,” on page 163
describes how the scalability of WebSphere Voice Response can be used to
create a single system image (SSI) across a network of systems.
v Chapter 7, “Data communications network,” on page 177 describes how to
connect WebSphere Voice Response to an SNA network to access data.
Chapter 4. Telephone network
Before you can decide what hardware and software you need for WebSphere
Voice Response, you must consider the telephony environment with which the
voice processing system will interface.
This chapter includes information about the following topics:
v “Planning the telephony environment”
v “Choosing the application to answer incoming calls” on page 119
v “Planning the switch configuration” on page 134
Planning the telephony environment
WebSphere Voice Response can work with a variety of different types of
connection to the telephone network. These connections provide different
capabilities, which in turn affect the types of application you can provide
when using WebSphere Voice Response. Similarly, telephony regulations and
public networks vary from country to country. This section provides much of
the information that you need to plan your configuration.
You should also check, however, with your IBM representative, who has more
information about switch-specific and country-specific1 variations of the
connection facilities that WebSphere Voice Response provides.
This section provides information about the following:
v “Connection to the telephone network” on page 84
1. Specific requirements for each country are documented in the appropriate README_homologation.xxxx file (where
xxxx is the country identifier) in the /usr/lpp/dirTalk/homologation directory.
v “Channel associated signaling”
v “Coexistence of signaling protocols” on page 92
v “Channel bank” on page 93
v “Channel service unit” on page 94
v “Address signaling support” on page 94
v “Exchange data link” on page 94
v “Common channel signaling” on page 96
v “Supporting other signaling protocols” on page 113
v “Integrating WebSphere Voice Response with Genesys Framework” on page 115
v “Fax connection requirements” on page 118
v “Integrating WebSphere Voice Response with Cisco ICM software” on page 117
v “Using ADSI telephones” on page 118
Connection to the telephone network
The telephony equipment to which WebSphere Voice Response is connected is
referred to as the switch. WebSphere Voice Response can work with different
types of switch:
v Digital central office switch (exchange)
v Private automated branch exchange (PABX)
v Automatic call distributor (ACD) system
v Host-controlled digital switch
The connection is either a T1 or E1 digital trunk connection. Analog
connections are supported by using an analog channel bank. The WebSphere
Voice Response telephony interface is either a T1 D4-Mode 3 interface for T1,
or an ITU-T G.703 interface for E1. The pSeries computer model and the
number of digital trunk adapters you have affect the number of trunks and
channels you can use (see “System p5 and pSeries computer” on page 149).
There are various ways in which WebSphere Voice Response can be connected
to the telephone network, such as through the local telephone exchange or
central office (using T1 or E1 standard business lines or trunks), or by using a
PABX system on your own premises.
WebSphere Voice Response can use either channel associated signaling (CAS) or
common channel signaling (CCS) protocols. The following sections contain
information about the switch functions that are supported by each protocol.
Channel associated signaling
With channel associated signaling (CAS), signaling information is carried in
the voice channel or in a channel that is permanently tied to the voice
channel. A number of different channel associated signaling protocols are
used. They are generally classed as being either T1 (used in Canada, Japan,
China (Hong Kong S.A.R.), the U.S.A., and other countries or regions) or E1
(used in Europe, Latin America, and other countries).
In general, each protocol has its own set of telephony capabilities. Your choice
is dependent on what subset of protocols your switch or PABX supports, and
which protocols provide the functionality that your applications require. The
lists in “T1 channel associated signaling protocols” and “E1 channel
associated signaling protocols” on page 86 are meant only as a rough guide,
because a given type of switch might support additional functionality, or only
a subset of the functionality of the protocol.
T1 channel associated signaling protocols
WebSphere Voice Response supports the following T1 signaling protocols:
• FXS (loop start and ground start)
• E&M (immediate start, delay start, and wink start)
• DID (immediate start, delay start, and wink start)
• SAS (loop start)
The functions provided by these protocols are summarized in Table 1 on page
86.
FXS:
WebSphere Voice Response is connected to a switch, using the two-way
foreign exchange subscriber (FXS) protocol that is described in ANSI
TIA/EIA-464-B (loop start or ground start operation); outgoing address
signaling that uses DTMF tones, dial pulses, or MFR1 is supported.
FXS is a station (line-side) protocol that supports call transfer. Answer
supervision is not provided and far-end disconnection is indicated only with
ground start.
ESF framing connection that uses 4-bit ABCD (ESF) robbed-bit framing format
is supported on DTTA trunks.
E&M and DID:
WebSphere Voice Response is connected to a switch, using the two-way E&M
tie line (trunk-side) protocol described in ANSI TIA/EIA-464-B. Incoming and
outgoing address signaling using DTMF tones, dial pulses, or MFR1 is
supported for immediate, delay, and wink address register start types.
Chapter 4. Telephone network
85
Customers who attach to the PTT in Taiwan can use a modified form of MFR1
to obtain number information from incoming tones during call setup.
ESF framing connection using 4-bit ABCD (ESF) robbed-bit framing format is
supported on DTTA trunks.
E&M supports both answer supervision and far-end disconnect.
DID is the same as the E&M tie-line protocol, except that it does not support
making outbound calls.
SAS:
WebSphere Voice Response is connected to a switch, using the two-way
special access subscriber (SAS) protocol described in ANSI TIA/EIA-464-B
(loop start operation only); outgoing address signaling using DTMF tones, dial
pulses, or MFR1 is supported.
SAS is a station (line-side) protocol that supports call transfer. Answer
supervision is not provided and far-end disconnection is not indicated.
Table 1. Functions provided by T1 CAS protocols

| Protocol             | Type  | PSTN | PABX | Channel bank | Answer detection | Call transfer | Far-end disconnect | ANI    | DID or DNIS |
|----------------------|-------|------|------|--------------|------------------|---------------|--------------------|--------|-------------|
| E&M (1)              | Trunk | Yes  | Yes  | Yes (2)      | Yes              | Yes (3)       | Yes                | No (5) | Yes         |
| FXS Loop Start (1)   | Line  | Yes  | Yes  | Yes (2)      | No               | Yes           | No (4)             | No (5) | No (5)      |
| FXS Ground Start (1) | Line  | Yes  | Yes  | Yes (2)      | No               | Yes           | Yes                | No (5) | No (5)      |
| SAS Loop Start       | Line  | Yes  | Yes  | No           | No               | Yes           | No (4)             | No (5) | No (5)      |

In the Type column, Trunk means ‘trunk-side protocol’ and Line means ‘line-side protocol’.

Note:
1. For both 2-bit AB (SF) format and 4-bit ABCD (ESF) format, as defined in TIA/EIA 464-B.
2. A channel bank for 4-bit (ESF) format CAS signaling must support extended superframe (ESF).
3. Yes, if the switch offers a release link trunk.
4. Yes, if a disconnect clear signal is provided.
5. Some PABX and ACD systems send number identification by sending DTMF digits before or after the call is answered.
E1 channel associated signaling protocols
WebSphere Voice Response supports the following CEPT (E1) signaling
protocols:
• E&M (immediate start, delay start, and wink start)
• EL7
• FXS Loop Start
• Italy
• R2
• R2MFC
• RE
• SL
• TS003
• U.K. Callstream (2)
• U.K. Exchange
• U.K. Tie/DDI
The functions provided by these protocols are described below and then
summarized in Table 2 on page 90.
E&M:
This is the same as T1 E&M protocol (see E&M and DID).
This protocol is a tie line (trunk-side) protocol that supports incoming and
outgoing address signaling, answer supervision, and indication of far-end
disconnection. Call transfer is not available unless the switch offers a release
link trunk when using the E&M protocol.
EL7:
WebSphere Voice Response is connected to the Ericsson MD110 PABX via a
2048 kbit/s digital link. This proprietary protocol was developed by Ericsson
specifically as an interface to WebSphere Voice Response and is based on the
RE protocol, enhanced to provide answer supervision and far-end disconnect.
This protocol is a station (line-side) protocol that supports call transfer and
outgoing address signaling.
Italy:
WebSphere Voice Response is connected to a digital central office switch in
Italy via the two-way Exchange Line/DID protocol described in Comitato
Elettrotecnico Italiano (CEI) standard 103.7. This protocol supports incoming
and outgoing calls using DTMF or dial pulses for address signaling.
This protocol is a central office trunk (trunk-side) protocol that supports
outgoing address signaling and recall (which can be used for call transfer if
offered by the switch), and receives answer supervision and indication of
far-end disconnection.
(2) With some restrictions; ask your IBM representative for details. The
restrictions are documented in the README_homologation.uk file in the
/usr/lpp/dirTalk/homologation directory.
R2:
WebSphere Voice Response is connected to a switch via the two-way R2
digital line signaling protocol that is described in CCITT Q.421. Incoming and
outgoing address signaling that uses DTMF tones, dial pulses, or MFR1 is
supported.
This protocol is a tie line (trunk-side) protocol that supports incoming and
outgoing address signaling, answer supervision, and indication of far-end
disconnection.
R2MFC:
R2MFC is supported only in countries that use the E1 standard, and although
the fundamentals of the protocol are standardized by the ITU, exact details
vary widely between countries. WebSphere Voice Response currently only
supports the Korean variant of the protocol, and you should contact your IBM
representative if you wish IBM to consider adding additional R2MFC variants.
R2MFC is based on the R2 protocol (described above), but the address
signaling information is transferred using a special set of multi-frequency
tones. There is one set of tones for the incoming information and another for
the outgoing information. When used together these comprise a
“handshaking” protocol for the transmission of information between the two
R2MFC network elements, enabling information to be transferred more
quickly and reliably than by equivalent methods such as dial pulses or DTMF.
On an incoming R2MFC call, the called number (DNIS) is present in SV185
when the incoming call state table is invoked. Outbound calls can be made
using the MakeCall state table action. To specify a called number only, set the
Phone Number parameter of the MakeCall action to the value of the called
number, and then set the Format parameter to a string of # characters that is
equal to the number of digits in the called number. To request R2MFC to
additionally transmit the calling number, append the calling number to the
called number in Phone Number (separated by a C character), and then set
Format to a string of # characters that is equal to the total number of
characters (including the C) in Phone Number. For example:
• To set called number 12345, set Phone Number to 12345 and Format to
#####.
• To set called number 12345 and calling number 67890, set Phone Number to
12345C67890 and Format to ###########.
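The Phone Number and Format convention above can be expressed as a small helper. This sketch is illustrative only: the function name is ours, not part of the product, which takes these values as MakeCall parameters in a state table.

```python
def makecall_params(called, calling=None):
    """Build the MakeCall 'Phone Number' and 'Format' parameter values
    for an outbound R2MFC call: Format is one '#' per character of
    Phone Number, and a calling number, when present, is appended to
    the called number after a 'C' separator (the '#' count includes
    the 'C')."""
    phone_number = called if calling is None else called + "C" + calling
    return phone_number, "#" * len(phone_number)

# makecall_params("12345")          -> ("12345", "#####")
# makecall_params("12345", "67890") -> ("12345C67890", "###########")
```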
RE:
WebSphere Voice Response is connected to a channel bank via the remote
extension (RE) protocol, which is similar to T1 FXS loop start (see FXS).
This protocol is a station (line-side) protocol that supports call transfer and
outgoing address signaling, but does not receive answer supervision or
indication of far-end disconnection.
The signaling bit patterns for RE are the same as those for U.K. Tie/DDI.
However, incoming seizure is determined by recognizing the ringing pattern
as defined by system parameters.
SL:
WebSphere Voice Response is connected to a digital central office switch in
France via the two-way signaling protocol that is described in
ST/PAA/TPA/1064; this protocol is similar to T1 FXS (see FXS) in that it
supports both incoming and outgoing calls but only outgoing address
signaling using DTMF, MFR1, or dial pulses.
This protocol is a station (line-side) protocol that supports call transfer, answer
supervision, and indication of far-end disconnection.
TS003:
WebSphere Voice Response is connected to the Telstra network in Australia
via the TS003 protocol. Customers who are connected directly to the PTT (that
is, with no PABX) can use this protocol.
U.K. Callstream:
WebSphere Voice Response is connected to the British Telecom Callstream
service in the United Kingdom. Incoming address signaling is dial pulses with
immediate start operation. Call transfer is supported.
This protocol is a central office trunk (trunk-side) protocol that supports
outgoing address signaling and recall (which can be used for call transfer if
offered by the switch), and receives answer supervision and indication of
far-end disconnection.
U.K. Exchange:
WebSphere Voice Response is connected to British Telecom or Mercury
switched networks in the United Kingdom via the Exchange Line protocol
described in British Approvals Board for Telecommunications standard
OTR001 and Mercury SS5502.
This protocol is a central office trunk (trunk-side) protocol that supports
outgoing address signaling and recall (which can be used for call transfer if
offered by the switch), and receives answer supervision and indication of
far-end disconnection.
U.K. Tie/DDI:
WebSphere Voice Response is connected to British Telecom or Mercury
switched networks in the United Kingdom via the Tie Line/Direct Dialing
Inward protocol (commonly known as SSDC5) that is described in British
Approvals Board for Telecommunications standard OTR001 and Mercury
SS5502. Tie line operation supports incoming and outgoing calls using delay
start or immediate start address signaling with DTMF or dial pulses. DDI
operation supports incoming calls only, by using the same types of address
signaling.
This protocol can also be used whenever inverted (with respect to ANSI
TIA/EIA-464-B) E&M is required.
This protocol is a tie-line (trunk-side) protocol that supports incoming and
outgoing address signaling, answer supervision, and indication of far-end
disconnection.
Table 2. Functions provided by E1 CAS protocols

| Protocol         | Type  | PSTN | PABX     | Channel bank | Answer detection | Call transfer | Far-end disconnect | ANI    | DID or DNIS |
|------------------|-------|------|----------|--------------|------------------|---------------|--------------------|--------|-------------|
| E&M (1)          | Trunk | Yes  | Yes      | Yes          | Yes              | No (2)        | Yes                | No     | Yes         |
| FXS Loop Start   | Line  | Yes  | Yes      | Yes          | No               | Yes           | No (3)             | No (4) | No (4)      |
| EL7/CAS (5)      | Line  | No   | Yes      | No           | Yes              | Yes           | Yes                | No (6) | No (6)      |
| Italy            | Trunk | Yes  | No       | No           | Yes              | Yes           | Yes                | No     | Yes         |
| R2 (7)           | Trunk | Yes  | Yes      | Yes          | Yes              | No            | Yes                | Yes    | Yes         |
| R2MFC (8, 9)     | Trunk | Yes  | No       | No           | Yes              | No            | Yes                | Yes    | Yes         |
| RE               | Line  | No   | No       | Yes          | No               | Yes           | No                 | No     | No          |
| SL (10)          | Line  | Yes  | No       | No           | Yes              | Yes           | Yes                | No     | No          |
| TS003 (11)       | Trunk | Yes  | No       | No           | Yes              | No (13)       | Yes                | Yes    | Yes         |
| U.K. Callstream  | Trunk | Yes  | No       | No           | Yes              | Yes           | Yes                | No     | Yes         |
| U.K. Exchange    | Trunk | Yes  | No       | No           | Yes              | Yes           | Yes                | No     | No          |
| U.K. Tie/DDI     | Trunk | Yes  | Yes (12) | No           | Yes              | No            | Yes                | No     | Yes         |

In the Type column, Trunk means ‘trunk-side protocol’ and Line means ‘line-side protocol’.

Note:
1. For connection to Siemens Hicom 300 switch.
2. Yes, if the switch offers a release link trunk or an ACL exchange data link is available.
3. Yes, if a disconnect clear signal is provided.
4. Some PABX and ACD systems send number identification by sending DTMF digits before or after the call is answered.
5. Unique protocol for Ericsson MD110.
6. Yes, if a VMS exchange data link is available on the MD110.
7. R2 digital line signaling as specified by ITU-T Q.421.
8. Korean variant only.
9. R2MFC uses R2 for line signaling.
10. Subscriber Loop, used in France.
11. Multifrequency compelled (MFC) is not supported.
12. Yes, to support "inverted" E&M.
13. "Mid-call diversion" is supported.
Disadvantages of channel associated signaling
Although channel associated signaling is widely available in almost every
country, several problems occur when signaling protocols of this type are used
to connect voice response units:
• Because only two bits or four bits of signaling information are available for
each channel (in each direction), the channel-associated signaling bits can be
used only to pass very basic information between the voice response unit
and the switch (such as ring and on-hook). With channel associated
signaling, incoming call information such as ANI, DNIS, or DID, and
outgoing information such as the dialed number, must be sent in-band as a
sequence of tones (MF or DTMF) in the voice channel. These tones might be
heard by the caller, and can cause annoyance.
• DTMF or MF tones cannot be sent faster than about 10 tones per second.
This means that for long numbers, the caller might have to wait while call
information is being transferred between the switch and the voice response
unit.
• No error checking occurs on MF or DTMF tones, so incorrect information
caused by line errors might not be detected.
• There is a wide variation in channel-associated line signaling protocols
across the world.
• Beyond the line signaling protocols, a voice response unit must support a
variety of different tone sequencing schemes (DTMF, R1 MF, R2 MFC, and
so on).
• Some channel associated signaling connections do not provide positive
answer detection (that is, notification that a called subscriber has picked up
the phone) or hang-up detection. This means that the voice response unit
might have to assume that a call has been answered when it detects voice
energy on the line. Similarly, it might have to assume that a caller has hung
up when no voice energy has been detected for some time.
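The roughly 10 tones-per-second ceiling mentioned above translates directly into caller-perceived delay. A back-of-the-envelope sketch (our own illustration, not a product formula):

```python
def address_transfer_seconds(digits, tones_per_second=10.0):
    """Rough lower bound on the time taken to pass an address in-band,
    given the roughly 10 tones-per-second ceiling for DTMF/MF tones."""
    return digits / tones_per_second

# Passing a 10-digit ANI plus a 10-digit DNIS in-band adds about two
# seconds of signaling before the application can start talking.
```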
Coexistence of signaling protocols
A single WebSphere Voice Response system supports either T1 or E1 trunks,
but not both. The channels do not all have to use the same T1 or E1
protocol. For example, on a trunk defined as T1, some of the channels can
be defined to use the E&M trunk protocol and some can be defined to use
FXS. Some of the channels that use FXS can be defined to support loop start
signaling, while others can be defined to support ground start signaling.
Common channel signaling can coexist with channel associated signaling.
That is, some trunks can use common channel signaling while others use a
channel associated signaling protocol. You cannot mix channel associated and
common channel protocols within a single trunk.
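The rules in this section can be checked mechanically when you plan a configuration. The following sketch encodes them; the data layout is our own illustration, not the product's configuration syntax:

```python
CAS_PROTOCOLS = {"E&M", "FXS Loop Start", "FXS Ground Start", "SAS Loop Start"}
CCS_PROTOCOLS = {"ISDN", "SS7"}

def check_trunk_plan(trunks):
    """Check a planned layout against the two rules stated above: a single
    system uses one trunk type (T1 or E1), and CAS and CCS protocols may
    not be mixed within one trunk. Different CAS protocols may share a
    trunk on a per-channel basis."""
    if len({trunk["type"] for trunk in trunks}) > 1:
        raise ValueError("a single system supports either T1 or E1 trunks, not both")
    for trunk in trunks:
        protocols = set(trunk["channels"])
        if protocols & CAS_PROTOCOLS and protocols & CCS_PROTOCOLS:
            raise ValueError("CAS and CCS protocols cannot share a trunk")

# Valid plan: one T1 trunk mixing E&M and FXS channels, one all-ISDN T1 trunk.
check_trunk_plan([
    {"type": "T1", "channels": ["E&M", "FXS Loop Start"]},
    {"type": "T1", "channels": ["ISDN"]},
])
```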
Channel bank
If only analog connections are available from the central office or PABX, you
can use a channel bank (or multiplexer) to convert the analog signals to digital,
to enable connection to WebSphere Voice Response. This configuration is
shown in Figure 21.
[Figure: a caller in the telephone network reaches the switch; the switch
connects over analog lines to a channel bank, which connects over a digital
connection to WebSphere Voice Response on the pSeries computer.]
Figure 21. The channel bank converts analog to digital signals
In addition, some PABXs that support digital trunk protocols do not support
call transfer signaling via the digital trunk. If your switch is one of these, you
might want to connect WebSphere Voice Response to the switch through a
channel bank. You might also need a channel bank if the switch transfers calls
via digital trunk signaling protocols that WebSphere Voice Response does not
support.
WebSphere Voice Response operates with most channel banks. However,
when selecting a channel bank, you should ensure that it supports the
protocols required by the switch connections and by WebSphere Voice
Response.
A representative of your network provider or PABX vendor can help you
determine whether you need a channel bank and what channel bank is
suitable.
Channel service unit
In the United States, when WebSphere Voice Response is connected to a
central office switch via a T1 trunk, the connection must be made through a
channel service unit (CSU). The CSU protects the circuit against power surges
and allows the telephone company to test the trunk.
You might also need a CSU if you connect WebSphere Voice Response to a
PABX or channel bank. When WebSphere Voice Response is located more than
15 meters (49.5 feet) from a PABX, channel bank, or a powered repeater, a
CSU is required to complete the connection.
WebSphere Voice Response works with any FCC-compliant CSU.
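The CSU rules above reduce to a simple planning check; this helper is our own illustration, not part of the product:

```python
def csu_required(us_central_office, distance_m):
    """Apply the planning rules stated above: a CSU is always required
    for a T1 connection to a U.S. central office switch, and otherwise
    whenever the run to the PABX, channel bank, or powered repeater
    exceeds 15 meters."""
    return us_central_office or distance_m > 15.0
```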
Address signaling support
WebSphere Voice Response recognizes inbound multifrequency (MFR1) and
dual-tone multifrequency (DTMF) tones in addition to dial pulse.
WebSphere Voice Response can generate outbound MFR1 and DTMF tones.
Exchange data link
An exchange data link helps you overcome some of the problems associated
with channel associated signaling protocols, by exploiting the host access
control links that many switches provide.
[Figure: the switch in the telephone network connects to WebSphere Voice
Response on the pSeries computer through digital trunk connections, with a
separate exchange data link carried over an RS232 (V.24) cable.]
Figure 22. The exchange data link connection
An exchange data link passes telephony event information and traffic statistics
to a computer, and can also allow the computer some control over the switch.
This is commonly known as computer-telephony integration (CTI). If your
switch supports CTI, you can use a WebSphere Voice Response exchange data
link to provide the signaling functions that are not provided by the channel
associated signaling protocol. For example:
• The telephone number that the caller dialed could be sent from the switch
to WebSphere Voice Response, where it can be used either to choose the
application to answer the call, or to locate the mailbox in which voice
messages are left.
For this reason, an exchange data link (or another method of getting the
called number) is essential when using a WebSphere Voice Response voice
mail application.
• The telephone number from which the call was forwarded could be sent
from the switch to WebSphere Voice Response.
• WebSphere Voice Response could send a message waiting indicator to the
switch, to signal that the message waiting lamp should be turned on or off
(or a stutter tone introduced) on the called party's telephone. The switch
can send a response if it is unable to process the message waiting indicator.
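As a sketch of the message-waiting-indicator idea, the following composes an on/off request from a mailbox's unread count. The record layout is purely illustrative; each link type (SMSI, SMDI, VMS, ACL) defines its own wire format:

```python
def mwi_request(extension, unread_messages):
    """Compose a message waiting indicator request for an exchange data
    link: lamp on while unread messages exist, lamp off otherwise."""
    return {"extension": extension,
            "lamp": "on" if unread_messages > 0 else "off"}

# mwi_request("4321", 2) asks the switch to light the lamp on extension 4321.
```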
The following types of exchange data link are provided with WebSphere Voice
Response and require no additional programming:
• Simplified Message Service Interface (SMSI): for use with the Avaya
Technologies central office switch
• Simplified Message Desk Interface (SMDI): for use with the Northern
Telecom DMS100 central office switch
• Voice Message Service (VMS): for use with the Ericsson MD110 PABX
• Application Connectivity Link (ACL): for use with the Siemens Hicom 300
PABX
To take advantage of the exchange data link, the pSeries computer must be
connected directly to the switch (as shown in Figure 22 on page 94). The
physical link is an EIA RS232C (V.24) serial connection:
• For an SMSI/SMDI/VMS connection, you can use a conventional null
modem cable to connect one of the pSeries computer's serial ports directly
to the appropriate I/O port on the switch. Alternatively, if the length or
routing of the null modem cable is likely to cause problems, you can use a
pair of modems to link the two ports.
• For an ACL connection you require a standard serial cable (not a null
modem cable). This cable must be connected to the serial port of an adapter
that supports bisynchronous communications, such as the IBM
Multiprotocol Communications Controller. Again, you can use a pair of
modems instead of the serial cable.
If your switch does not use SMSI, SMDI, VMS, or ACL, you can use CallPath
Server to get called and calling number information, and to support call
transfer and message waiting. Alternatively, if your switch has a host access
control link, you can write your own signaling process to interface with it, to
provide functions such as call transfer, message waiting, and call monitoring
(see “How voice applications access other resources” on page 49).
Common channel signaling
The other way in which you can overcome the disadvantages of channel
associated signaling protocols (see “Disadvantages of channel associated
signaling” on page 92), is to use a common channel signaling (CCS) protocol.
Common channel signaling avoids all these problems. Using short, fast
messages that are sent (bidirectionally) down a single time slot of a trunk, the
switch/network and the voice response unit can communicate much more
efficiently than they can with a channel associated signaling trunk.
WebSphere Voice Response supports two types of common channel signaling
protocol:
• Integrated Services Digital Network (ISDN)
• Signaling System 7 (SS7)
WebSphere Voice Response also supports the development of specialized
common channel signaling protocols for intelligent network (IN) applications,
through the use of the signaling interface.
Integrated Services Digital Network
In contrast to Signaling System 7, the ISDN primary rate interface (PRI) is
designed as an access protocol, and is therefore also known as primary rate
access (PRA).
[Figure: a caller reaches a switch in the PTT or telephone operating
company's network; the network switches communicate with each other and
with a service control point using SS7, while WebSphere Voice Response
attaches to a switch over ISDN, with voice circuits between the switches.]
Figure 23. ISDN as an access protocol
An ISDN trunk is a T1 or E1 trunk with one of the timeslots dedicated to
signaling; this timeslot is called the D-channel. (The D-channel signaling
protocol is based on the ITU-T Q.921 and Q.931 recommendations.) The
remaining timeslots are used for voice and are called B-channels (see
Figure 24). ISDN trunks are generally used between an external subscriber
and a switch within a network. ISDN will eventually displace channel
associated signaling (that is, T1 and E1 trunks) as the means by which
primary-rate subscribers attach to SS7-controlled networks.
[Figure: a single E1 or T1 trunk between WebSphere Voice Response and the
switch, carrying one D-channel (signaling) and the remaining timeslots as
B-channels (voice).]
Figure 24. ISDN B-Channels and D-Channels. This example shows one digital trunk. An E1 trunk has 30 B-channels;
a T1 trunk has 23 B-channels.
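The B-channel counts in the caption follow from the timeslot arithmetic. In this sketch, the E1 framing timeslot (timeslot 0) is an assumption drawn from the E1 standard rather than from this book:

```python
TIMESLOTS = {"E1": 32, "T1": 24}

def b_channels(trunk_type):
    """B-channels on one ISDN trunk: total timeslots, minus the framing
    timeslot on E1 (timeslot 0), minus the one timeslot used as the
    D-channel."""
    framing = 1 if trunk_type == "E1" else 0
    return TIMESLOTS[trunk_type] - framing - 1

# b_channels("E1") -> 30, b_channels("T1") -> 23
```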
[Figure: a caller reaches a switch in the telephone operating company's
network using an access protocol (CAS or ISDN) or an analog line; the
network switches communicate with a service control point using SS7, and
WebSphere Voice Response attaches to a switch over ISDN, with voice
circuits between the switches.]
Figure 25. Attaching WebSphere Voice Response as an intelligent peripheral in North America
In addition to being used as an access protocol, primary rate ISDN is used in
the United States to attach intelligent peripherals within an operating company's
advanced intelligent network (as shown in Figure 25). WebSphere Voice Response
can either function as an intelligent peripheral, or provide voice processing
function as part of a larger service node.
Advantages of ISDN
The usual reason for using ISDN is that it allows a single digital channel to be
used for several different types of communication services, such as voice,
X.25, and video. However, for voice response units such as WebSphere
Voice Response, ISDN offers a further set of benefits, even if the voice
response unit is used only for voice:
• Reliable call setup and clearing
• Fast, reliable delivery of the called and calling numbers to the voice
response unit
ISDN protocols supported
The optional ISDN features support:
• Primary rate interface (PRI) vendor-specific implementations
• PRI switching and signaling capabilities
• Calling number identification services for PRI
An optional feature of WebSphere Voice Response allows you to connect to
the network over an ISDN primary rate interface that conforms to each of the
following protocols:
• Lucent 5ESS 5E8
• Lucent 5ESS 5E9 National 1
• Lucent TR41449/TR41459
• Nortel DMS100 BCS34/36
• Nortel DMS100 National 2 — NA015
• Nortel DMS250 — IEC05
• Lucent 5ESS 5E11–15 National ISDN2
• INS Net Service 1500
• Euro-ISDN
• E1 ISDN QSIG ECMA 143 and international standard ISO/IEC 11572:2000
The National ISDN effort is being driven by Bellcore and the Corporation for
Open Systems, on behalf of the operating companies and the North American
ISDN Users Forum (NIUF).
The functions provided by each standard are summarized in Table 3 on page
101. Table 4 on page 102 shows how WebSphere Voice Response supports
ISDN protocols.
Lucent 5ESS 5E8
5ESS 5E8 is a custom protocol implemented by Lucent Technologies
on 5ESS central office switches in the U.S. It provides 23B+D channels
over a T1 facility.
Lucent 5ESS 5E9 National 1
5ESS 5E9 is a custom protocol implemented by Lucent Technologies
on 5ESS central office switches in the U.S. It provides 23B+D channels
over a T1 facility.
Lucent TR41449/TR41459
TR41449 and TR41459 are the custom protocols implemented by
Lucent Technologies on the Definity PABX and 4ESS switch for use
with T1 facilities.
Nortel DMS100 BCS34/36
DMS100 BCS34 and BCS36 are custom protocols implemented by
Nortel on DMS100 central office switches in the U.S. They provide
23B+D channels over a T1 facility.
Nortel DMS100 National 2 NA015
DMS100 National 2 NA015 is a custom protocol implemented by
Nortel on DMS100 central office switches in the U.S. It provides
channels over a T1 facility.
Nortel DMS250 IEC05
DMS250 IEC05 is a custom protocol implemented by Nortel on DMS250
central office switches in the U.S. It provides channels over a T1
facility.
Lucent 5ESS 5E11–15 National ISDN2
National ISDN2 is a common ISDN standard developed for use in the
U.S. It provides 23B+D channels over a T1 facility. National ISDN-2 is
supported on the Summa Four PBX switch and on the Lucent 5ESS
switch with 5E12 protocol.
INS Net Service 1500
INS Net Service 1500 is an ISDN standard developed for use in Japan.
It provides 23B+D channels over a T1 facility.
Euro-ISDN
Euro-ISDN is a common European ISDN standard that provides
30B+D channels over an E1 facility. Euro-ISDN services are available
in seventeen countries in Europe (including France, Germany, and the
U.K.). Most of these countries provide Euro-ISDN services for
production use; in a few countries, only pilot services are available. In
some European countries, ISDN is already the generally-accepted
method of connecting telephony equipment to the public network
when multiple lines are required (and is tariffed accordingly).
Note: In some countries, a Euro-ISDN switch may send multiple
channels in the Channel ID Information element of a RESTART
message. If this occurs, WebSphere Voice Response only acknowledges
the RESTART message if all the channels in the relevant trunk have
had their operating status set to inservice. If the switch is configured
in this manner, the WebSphere Voice Response ISDN Signaling system
parameter Send RESTART on Channel Enable must be set to No
(refer to the WebSphere Voice Response for AIX: Configuring the System
book for details).
E1 QSIG
E1 QSIG is specified by ECMA-143 (International Standard
ISO/IEC 11572), which defines the signalling procedures and protocol
at the Q-reference point between Private Integrated Network
Exchanges (PINXs) within a Private Integrated Services Network
(PISN). The physical connection must be an E1 telephony trunk.
WebSphere Voice Response is intended to be used either as the
originating PINX or the terminating PINX, and provides functionality
to become a transit PINX. WebSphere Voice Response QSIG also
supports ECMA-242, which provides control for a Message Waiting
Indicator (MWI) over the signaling D-channel. MWI control can be
invoked using the signaling library SL_STATION_SET_MWI_REQ
primitive and supports the following states: SL_MWI_ON and
SL_MWI_OFF. Refer to the WebSphere Voice Response for AIX:
Programming for the Signaling Interface book for further information.
Table 3. Functions provided by ISDN protocols

| Protocol                           | Type  | PSTN | PABX | Channel bank | Answer detection | Call transfer | Far-end disconnect | ANI | DID or DNIS |
|------------------------------------|-------|------|------|--------------|------------------|---------------|--------------------|-----|-------------|
| Lucent 5ESS 5E8                    | Trunk | Yes  | No   | Yes (1)      | Yes              | No            | Yes                | Yes | Yes         |
| Lucent 5ESS 5E9 National 1         | Trunk | Yes  | No   | Yes (1)      | Yes              | No            | Yes                | Yes | Yes         |
| Lucent TR41449/TR41459             | Trunk | No   | Yes  | Yes (1)      | Yes              | No            | Yes                | Yes | Yes         |
| Nortel DMS100 BCS34/36             | Trunk | Yes  | No   | Yes (1)      | Yes              | No            | Yes                | Yes | Yes         |
| Nortel DMS100 National 2 NA015     | Trunk | Yes  | No   | Yes (1)      | Yes              | Yes (2)       | Yes                | Yes | Yes         |
| Nortel DMS250 IEC05                | Trunk | Yes  | No   | Yes (1)      | Yes              | Yes (3)       | Yes                | Yes | Yes         |
| Lucent 5ESS 5E11–15 National ISDN2 | Trunk | Yes  | No   | Yes (1)      | Yes              | Yes (2)       | Yes                | Yes | Yes         |
| INS Net Service 1500               | Trunk | Yes  | Yes  | Yes (1)      | Yes              | No            | Yes                | Yes | Yes         |
| Euro-ISDN                          | Trunk | Yes  | Yes  | –            | Yes              | Yes (4)       | Yes                | Yes | Yes         |
| E1 QSIG                            | Trunk | No   | Yes  | No           | No               | Yes (4)       | Yes                | Yes | Yes         |

In the Type column, Trunk means ‘trunk-side protocol’ and Line means ‘line-side protocol’.

Note:
1. A channel service unit (CSU) for ISDN must support extended superframe (ESF) line framing.
2. Bellcore 2 B-channel transfer feature.
3. RLT transfer.
4. Single step transfer.
Table 4. How WebSphere Voice Response supports ISDN protocols

Column key: A = Lucent 5ESS 5E8; B = Lucent 5ESS 5E9 National 1;
C = Lucent TR41449/TR41459; D = Nortel DMS100 BCS34/36;
E = Nortel DMS100 National; F = Nortel DMS250 IEC05;
G = Lucent 5ESS 5E11–15 National 2; H = INS 1500; I = Euro-ISDN;
J = E1 QSIG. U = user side only.

| ISDN Feature                                                    | A    | B    | C    | D    | E    | F    | G    | H    | I    | J        |
|-----------------------------------------------------------------|------|------|------|------|------|------|------|------|------|----------|
| Network or User Side                                            | U    | U    | U    | U    | U    | U    | U    | U    | U    | End PINX |
| Called and Calling Party Number (1)                             | Bway | Bway | Bway | Bway | Bway | Bway | Bway | Bway | Bway | Bway     |
| 3.1 kHz Audio Bearer Capability                                 | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes      |
| 64 kbit Speech Bearer Capability                                | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes      |
| User-to-User signaling                                          | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | No       |
| Restart message support                                         | Bway | Bway | Bway | Bway | Bway | Bway | Bway | Bway | Bway | Bway     |
| B-channel Service Message Support                               | –    | –    | Yes  | No   | Yes  | Yes  | Yes  | No   | –    | No       |
| ETSI 300-062 to 064 Direct Dial In (DDI)                        | –    | –    | –    | –    | –    | –    | –    | –    | Yes  | –        |
| ETSI 300-050 to 052 Multiple Subscriber Number (MSN)            | –    | –    | –    | –    | –    | –    | –    | –    | Yes  | –        |
| ETSI 300-367 to 369 Explicit Call Transfer (ECT)                | –    | –    | –    | –    | –    | –    | –    | –    | No   | No       |
| ECMA-300 Single Step Call Transfer Supplementary Service (SSCT) | –    | –    | –    | –    | –    | –    | –    | –    | Yes  | Yes      |
| Bellcore GR-2865 2 B-channel transfer                           | –    | –    | –    | –    | Yes  | Yes  | Yes  | –    | –    | No       |
| Release link trunk (RLT) transfer                               | –    | –    | –    | –    | Yes  | Yes  | –    | –    | –    | No       |
| Non-Facility Associated Signaling (NFAS)                        | –    | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | Yes  | –    | No       |
| D-channel backup                                                | –    | No   | Yes  | No   | Yes  | Yes  | Yes  | No   | –    | No       |
| Bellcore TR-1268 Call by Call service selection                 | No   | No   | No   | No   | No   | No   | No   | No   | –    | –        |

Note:
1. Call information is transferred through the use of system variables SV541 and SV542. For details refer to the
WebSphere Voice Response for AIX: Application Development using State Tables book.
2. "Bway" is an abbreviation for "Bothway".
3. A hyphen indicates that the feature is not applicable for that protocol.
Non-Facility Associated Signaling
WebSphere Voice Response supports Non-Facility Associated Signaling (NFAS)
on T1 systems. In the normal ISDN configuration, each T1 trunk has 23 bearer
channels (B-channels) carrying voice traffic and one signaling channel
(D-channel) carrying signaling traffic (23B+D).
In an NFAS configuration, the ISDN trunks are grouped into NFAS groups.
You can define up to four NFAS groups on each pSeries computer with a
maximum of 10 trunks in each group. The signaling traffic for all the trunks
in a group is carried on one D-channel. The trunk that carries the signaling
information on its 24th channel is called the primary trunk.
For some switches and line signaling protocols, you can also set aside a
channel on another trunk in each group to act as a backup for the signaling
channel. The backup D-channel does not carry voice and will automatically
take over the signaling traffic if the primary D-channel goes out of service.
The trunk that has its 24th channel on standby to carry the signaling
information is called the backup trunk.
Table 5 shows the switch types and line signaling protocols that have NFAS
D-channel backup enabled and the custom server needed for each. These
switches and signaling protocols also support the use of service messages to
take individual channels in and out of service.
Table 5. Switch types and line signaling protocols that can be used with D-channel
backup.

Switch type                        Line signaling     Custom server
Avaya (Lucent) 4ESS                TR41449            sigproc_att_tr41449.imp
Avaya (Lucent) 5ESS – 2000         ISDN National 2    sigproc_t1_nat.imp
Avaya (Lucent) Definity G1/G2/G3   TR41449            sigproc_att_tr41449.imp
NT DMS 100                         ISDN National      sigproc_dms_nat.imp
NT DMS 250                         ISDN_IEC05         sigproc_dms250.imp
Summa Four                         ISDN National 2    sigproc_t1_nat.imp
Summa Four                         TR41449            sigproc_att_tr41449.imp
To illustrate the concept of non-facility associated signaling, Figure 26 on page
105 and Figure 27 on page 105 show how many voice channels you get with
four E1 or T1 trunks. Figure 28 on page 105 shows how many channels you
get with four T1 trunks using NFAS. Figure 29 on page 106 shows how many
channels you get with four T1 trunks using NFAS with D-channel backup.
Figure 26. Using E1 ISDN trunks (four 30B+D E1 trunks between the switch and a DTTA provide 120 voice channels)
Figure 27. Using T1 ISDN trunks without NFAS (four 23B+D T1 trunks between the switch and a DTTA provide 92 voice channels)
Figure 28. Using T1 ISDN trunks with NFAS (one 23B+D trunk and three 24B trunks between the switch and a DTTA or DTXA provide 95 voice channels)
Figure 29. Using T1 ISDN trunks with NFAS and D-channel backup (two 23B+D trunks and two 24B trunks between the switch and a DTTA provide 94 voice channels)
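The channel arithmetic in Figures 26 to 29 can be checked with a short sketch (a simplified model of the configurations described above; the helper name is invented, not part of WebSphere Voice Response):

```python
def t1_voice_channels(trunks: int, nfas: bool = False, backup: bool = False) -> int:
    """Voice channels on T1 trunks: without NFAS every trunk is 23B+D;
    with NFAS one trunk (or two, with D-channel backup) gives up one
    channel to signaling and the remaining trunks carry 24B each."""
    if not nfas:
        return 23 * trunks
    d_channels = 2 if backup else 1
    return 24 * trunks - d_channels

assert 30 * 4 == 120                                   # four E1 trunks (30B+D each)
assert t1_voice_channels(4) == 92                      # Figure 27
assert t1_voice_channels(4, nfas=True) == 95           # Figure 28
assert t1_voice_channels(4, nfas=True, backup=True) == 94  # Figure 29
```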
Key facts about ISDN support
v WebSphere Voice Response supports ISDN connectivity, giving customers
the benefits of common channel signaling, such as very fast call setup and
teardown, reliable network signaling, and the delivery of called and calling
number information.
v WebSphere Voice Response supports voice calls on the B-channels, by using
the speech and 3.1kHz bearer capabilities.
v WebSphere Voice Response protects customer investment in applications by
allowing existing state tables to run unchanged over ISDN.
v WebSphere Voice Response supports a maximum of 480 concurrent voice
channels with ISDN on E1 on PCI-based hardware.
v WebSphere Voice Response supports a maximum of 380 concurrent voice
channels with ISDN on T1 on PCI-based hardware.
v WebSphere Voice Response supports up to four Non-Facility Associated
Signaling (NFAS) groups on T1 systems, with a primary and a backup
D-channel for up to 10 trunks in each group.
v WebSphere Voice Response supports the use of service maintenance
messages to take individual channels out of service on T1 systems.
v WebSphere Voice Response supports the coexistence of ISDN trunks and
channel associated signaling trunks in the same system for ease of
migration.
v WebSphere Voice Response supports a number of key supplementary
services.
v WebSphere Voice Response operates in Terminal Equipment (TE) mode.
Signaling System 7
Note: In previous releases of WebSphere Voice Response, SS7 support was
provided through the separately-orderable SS7 Call Manager feature with the
SS7 PRPQ. These features are replaced in version 4.2 with a single new PRPQ
that supports the same hardware, and provides similar functionality, but uses
106
General Information and Planning
a new software stack. For details of the functionality provided by this
solution, and its availability, please contact your IBM representative.
Signaling System 7 (SS7) is the common channel signaling protocol that is
used internally in telephone operators' digital communication networks. SS7 is
not an access protocol by which terminal equipment attaches to the network,
but it can nevertheless be used to attach voice response units such as
WebSphere Voice Response. WebSphere Voice Response support for SS7 is
implemented to the ISUP specifications. The higher levels of the protocol have
many local variations.
Figure 30. Attaching WebSphere Voice Response in a telephone network (the caller's switch connects to WebSphere Voice Response over voice circuits using an access protocol such as CAS or Euro-ISDN, while SS7 links the switches and the Service Control Point within the PTT network)
Figure 31. Attaching multiple WebSphere Voice Response systems in a telephone network (an SS7 Server fronts several WebSphere Voice Response systems, each connected to the switch over voice circuits, while SS7 links the switches and the Service Control Point within the PTT network)
SS7 links and network components
SS7 uses discrete messages to exchange information. Packet-switched data
links are used to send the messages between switches and the other end
points. There are two types of connections in a telephony network:
v Packet switched links that are used for SS7 messages (but not voice data).
v Circuit switched links used for voice data (but not SS7 messages).
The SS7 protocol stack
SS7 is structured in a multi-layered stack that corresponds closely to the layers
of the standard OSI model, although some SS7 components span several
layers. SS7 supports the ISDN User Part (ISUP), which spans the
presentation, session, and transport layers.
The ISUP handles the setup and tear-down of telephone calls and provides
functions that are available with primary rate ISDN. These include called and
calling number notification (or suppression), the ability to control billing
(charging) rates, advanced telephony functions such as transfer, and control
over whether the voice channel is used for voice, fax, or data.
ISUP messages flow only during the setup and tear-down phases of a call.
Communication between your SS7 switch and SS7 Server is determined by the
existence of point codes. These are addresses that each side uses to send
messages to the other. Each side has a concept of a Source Point Code (SPC)
and a Destination Point Code (DPC). The SPC of your switch is the DPC of
the SS7 Server; the SPC of the SS7 Server is the DPC of your switch.
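The reciprocal relationship between point codes can be pictured with a short sketch (the point code values below are invented for illustration):

```python
# Each side of the SS7 link has a Source Point Code (SPC) and a
# Destination Point Code (DPC); the two sides mirror each other.
switch = {"spc": 2101, "dpc": 2102}       # hypothetical point codes
ss7_server = {"spc": 2102, "dpc": 2101}

# The SPC of the switch is the DPC of the SS7 Server, and vice versa.
assert switch["spc"] == ss7_server["dpc"]
assert ss7_server["spc"] == switch["dpc"]
```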
Voice over IP
Voice over Internet Protocol (VoIP) allows the sending of telephony voice
over IP data connections rather than over existing dedicated voice
networks, switching and transmission equipment. This can have several
advantages, including:
1. Significant savings can be made in cost and space for telephony switches
and other related equipment.
2. Compression techniques can be used to reduce the amount of data being
sent in each packet.
3. Standard computer equipment can be used, instead of the
highly-proprietary telephony hardware.
4. The same data connections can be shared between data and voice, with
the excess bandwidth being used for data when voice call volume is low.
This also makes it much easier to create applications that integrate voice
and data.
WebSphere Voice Response can be connected to VoIP networks using one or
more external VoIP/PSTN Gateways. See Figure 32 on page 111.
How does Voice over IP work?
In any telephony system, two things are carried by the network: voice data
and signaling information. Voice is the sound information detected by the
microphone in the telephone and transmitted to the receiver over a
communication channel. Signaling is the information exchanged between
stations participating in the call when a call is started or ended, or when an
action (for example, call transfer) is requested. Traditionally, both voice and
signaling information have been sent together through dedicated circuit
switched telephony channels (for example, T1, E1, or ISDN). However, with
VoIP, voice and signaling are sent using standard TCP/IP protocols over a
physical link such as an Ethernet network. This exchange of signaling and
voice information takes place in both directions at the same time with each
endpoint sending and receiving information over the IP network.
Sending voice data over an IP network
With VoIP, voice data is digitally encoded using µ-law or A-law Pulse Code
Modulation (PCM). The voice data can then be compressed and sent over the
network in User Datagram Protocol (UDP) packets. Standard
TDM telephony sends voice data at a low constant data rate. With VoIP,
relatively small packets are sent at a constant rate. The total overall rate of
sending data is the same for each kind of telephony.
The advantage of VoIP is that one high-speed network can carry the packets
for many voice channels and possibly share with other types of data at the
same time (for example, FTP, and data sockets). A single high-speed network
is much easier to set up and maintain than a large number of circuit switched
connections (for example, T1 links).
UDP is used to transmit voice data over a VoIP network.
UDP is a "send and forget" protocol with no requirement for the transmitter to
retain sent packets should there be a transmission or reception error. If the
transmitter did retain sent packets, the flow of real-time voice would be
adversely affected by a request for retransmission or by the retransmission
itself, especially if there is a long path between transmitter and receiver.
The main problems with using UDP are that:
v There is no guarantee that a packet will actually be delivered.
v Packets can take different paths through the network and arrive out of
order.
To overcome these problems, the Real-time Transport Protocol (RTP) is used
with VoIP. RTP provides a method of handling disordered and missing packets
and makes the best possible attempt to recreate the original voice data stream
(comfort noise is intelligently substituted for missing packets).
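To illustrate why sequence information matters, the following sketch (not the WebSphere Voice Response implementation; the helper names are invented) tags voice payloads with RTP-style sequence numbers so a receiver can restore the original order:

```python
import struct

def make_packet(seq: int, payload: bytes) -> bytes:
    """Prefix a payload with a 16-bit big-endian sequence number,
    loosely mimicking the RTP sequence-number field."""
    return struct.pack(">H", seq) + payload

def reassemble(packets: list) -> bytes:
    """Sort received packets by sequence number and rebuild the stream."""
    ordered = sorted(packets, key=lambda p: struct.unpack(">H", p[:2])[0])
    return b"".join(p[2:] for p in ordered)

# Packets can arrive out of order over UDP; the receiver reorders them.
sent = [make_packet(i, chunk) for i, chunk in enumerate([b"he", b"ll", b"o!"])]
received = [sent[2], sent[0], sent[1]]
assert reassemble(received) == b"hello!"
```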
Signaling over an IP network
A signaling message is sent by the VoIP phone that initiates a call (the
calling party) to inform the called party that a connection is required. The
called party can then accept the call or reject the call (for example, if the
called party is already busy). Other signaling exchanges will be initiated by
actions like near or far end hang-up, and call transfer.
For VoIP, several signaling protocols are in general use:
v Session Initiation Protocol (SIP) is a modern protocol which is being used
more widely.
v Media Gateway Control Protocol (MGCP) is used internally with telephone
networks.
v H.323 is an older protocol which is no longer widely supported.
WebSphere Voice Response supports only SIP, using a version that fully
conforms to RFC 3261 (the standard definition for SIP). SIP is based on
text-based messages that are exchanged between endpoints whenever any
signaling is required. These message exchanges are mapped by WebSphere
Voice Response SIP support to standard telephony actions within WebSphere
Voice Response itself. Standard telephony actions include:
v Incoming calls
v Outgoing calls
v Near end hang-up
v Far end hang-up
v Transfers (several types are supported including “blind” and “attended”)
SIP messages can use either TCP (a reliable, guaranteed message exchange) or
UDP (a non-guaranteed datagram protocol).
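For illustration only, this is the general shape of the text-based INVITE request that starts a SIP call (the addresses and identifiers below are invented; real messages carry more headers and usually an SDP body describing the media):

```python
# A minimal, illustrative SIP INVITE built as a string and lightly checked.
invite = "\r\n".join([
    "INVITE sip:bob@example.com SIP/2.0",
    "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    "To: <sip:bob@example.com>",
    "From: Alice <sip:alice@example.com>;tag=1928301774",
    "Call-ID: a84b4c76e66710@client.example.com",
    "CSeq: 314159 INVITE",
    "Contact: <sip:alice@client.example.com>",
    "Content-Length: 0",
    "", "",
])

# The request line names the method, the target URI, and the SIP version.
method, request_uri, version = invite.split("\r\n")[0].split()
assert method == "INVITE" and version == "SIP/2.0"
```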
Components of a VoIP network
Figure 32. An example of a VoIP network (a gateway bridges the T1/E1 connection to the PSTN and the IP network, on which WebSphere Voice Response, a SIP proxy, and several IP phones are endpoints)
There are three main components of a VoIP network, as shown in Figure 32 on
page 111:
Endpoints
In a VoIP network, any device that can make or receive telephone
calls is called an endpoint. An endpoint can be one of the following:
v A SIP hard phone
v A SIP soft phone
v WebSphere Voice Response (which simulates a number of phones)
for incoming or outgoing calls.
Gateways
A gateway is a device that acts as a bridge between VoIP and the
PSTN network. A gateway can take an incoming call from a T1
interface, convert the signaling into SIP message exchanges, and
convert the voice from TDM into RTP packets.
Proxy servers
In a SIP system, a proxy server (used with a registrar and a location
server) can provide the following services:
v Call routing, including URI translation.
v Registration.
v Access (authentication) to a SIP network.
A Proxy server provides the intelligence of how calls are routed
within a SIP VoIP network. For example, a gateway might be
configured to send all incoming calls to the SIP proxy server which
will then route the calls to specific endpoints (this could be done to
perform load balancing or skills-based routing).
Using VoIP with WebSphere Voice Response
The Voice over IP (SIP) feature for WebSphere Voice Response allows a
WebSphere Voice Response application to act as an IP telephony endpoint.
With the ability to run up to 480 E1 or 384 T1 channels (16 trunks) at the
same time using four DTEA cards, WebSphere Voice Response can simulate
up to 480 SIP endpoints. (As with standard telephony, each simulated
endpoint behaves like a real hard or soft phone). Using the WebSphere Voice
Response DTNA software implementation of a DTEA, only up to eight trunks
are supported (half the number of SIP endpoints), but no additional hardware
is required.
WebSphere Voice Response applications that interact with the SIP network can
be written using CCXML, VoiceXML, Java or state tables. As long as some
simple numbering plan rules are adhered to within the SIP network, existing
WebSphere Voice Response applications should run unmodified (as they
operate in terms of standard E.164 telephone numbers). If required,
WebSphere Voice Response applications can be written specially for VoIP, to
fully exploit the additional functionality provided by SIP (for example, textual
addresses - URLs or URIs).
Hardware requirements for supporting VoIP with a pSeries computer using
DTEAs:
For a WebSphere Voice Response system to be able to support VoIP, you
require:
v A pSeries computer 615 or 630 system unit
v One or more Digital Trunk Ethernet Adapters (DTEAs) installed in the
system unit
For detailed information about DTEA cards, the DTNA software
implementation, and how to set up and configure a VoIP system, refer to the
WebSphere Voice Response for AIX: Voice over IP using Session Initiation Protocol
book.
Note: DTEA cards are no longer available but are still supported by
WebSphere Voice Response Version 6.1.
Supporting other signaling protocols
Using the subroutines in the WebSphere Voice Response signaling library, you
can use signaling protocols not otherwise supported by WebSphere Voice
Response, including:
v Exchange data link protocols, which are sometimes used to send call
information from the switch to the voice application (for a high-level view,
see Figure 33 on page 114)
v Specialized intelligent network (IN) protocols (for a high-level view, see
Figure 34 on page 115)
For more information about exchange data links and common channel
signaling, see “Planning the telephony environment” on page 83. When you
write a program using subroutines from the signaling library, that program is
known as a signaling process.
Figure 33. The role of a signaling process in exchange data link signaling (signaling from the switch reaches the signaling process through a pSeries multiport interface card, while voice reaches WebSphere Voice Response through a digital trunk adapter)
Figure 34. The role of a signaling process in ISDN signaling (the signaling process on the pSeries computer exchanges signaling with the carrier network over a data communications network, while voice reaches WebSphere Voice Response from the switch through a digital trunk adapter)
Integrating WebSphere Voice Response with Genesys Framework
WebSphere Voice Response applications can use Genesys Framework for the
following functions:
v To get called and calling number information from the switch
v To transfer calls (even if call transfer is not supported directly by the
switch)
v To send and receive Genesys call data
v To make Genesys call routing requests
The interface is switch-independent, and connectivity is based on Genesys
switch support. State Table support is provided by Genesys with the D2IS
custom server supplied as part of the Genesys Framework packages.
VXML/CCXML support is provided by the built-in IVR server support. You
will also need Genesys Framework 7.1 or higher.
The following configurations are possible:
v State Table access to the Genesys Framework using the Genesys-supplied
D2IS custom server (shown in Figure 35). WebSphere Voice Response can be
configured so that a call is allocated to a specific “logical” trunk and
channel as determined by a SIP switch or gateway. This ensures that the
call is delivered and communicated as if it were a physical endpoint. The
allocated trunk and channel are based on the dialed number.
v VXML/CCXML access to the Genesys Framework using the built-in
WebSphere Voice Response support (shown in Figure 36 on page 117). This
functionality runs independently of the Genesys-supplied D2IS custom
server.
WebSphere Voice Response can also be integrated with vendor
computer-telephony integration (CTI) solutions such as Genesys Framework,
to provide more advanced solutions with closer integration with customer
switches and automatic call distributor (ACD) functionality.
Figure 35. WebSphere Voice Response connection to the Genesys Framework using state tables (caller information flows from the state table to the D2IS custom server, which communicates with the Genesys Framework over the local area network)
Figure 36. WebSphere Voice Response connection to the Genesys Framework using VoiceXML and CCXML (caller information flows from the VXML/CCXML application to VRBE with built-in support, which communicates with the Genesys Framework over the local area network)
Related information
v “Telephony functionality” in the WebSphere Voice Response: VoiceXML
Programmer's Guide for WebSphere Voice Response manual.
v “Adding telephony capability” in the WebSphere Voice Response: Deploying
and Managing VoiceXML and Java Applications manual.
v “TelephonyService configuration entry” in the WebSphere Voice Response:
Deploying and Managing VoiceXML and Java Applications manual.
v “Using advanced CTI features” in the WebSphere Voice Response:
VoiceXML Programmer's Guide for WebSphere Voice Response manual.
Integrating WebSphere Voice Response with Cisco ICM software
The Cisco Intelligent Contact Management (ICM) software (formerly Geotel
Intelligent CallRouter (ICR)) allows you to create an enterprise-wide virtual
call center and provide improved call distribution performance. The Cisco
ICM custom server enables a WebSphere Voice Response system to
communicate with Cisco ICM over a TCP/IP link. The ICM gathers
information about the state of the trunks on the WebSphere Voice Response
system, and about the calls that are processed. WebSphere Voice Response for
AIX supports Cisco ICM Version 7.
You can use the data collected by the Cisco ICM to balance your call load by
monitoring the load in real time. You can see how your system is used by
using the reports of inbound calls, outbound calls, abandoned calls, and the
services that have been used. The Cisco ICM also provides a call routing
function.
WebSphere Voice Response can also be integrated with Genesys CTI server.
Fax connection requirements
WebSphere Voice Response provides a custom server and state tables to assist
in the sending and receiving of faxes, through easy-to-use state table
interfaces. To use this fax support, you need to install a Brooktrout fax card
and develop a fax application to suit your needs or, alternatively, modify an
existing application to make use of fax. The fax card uses the TDM bus to
connect to the WebSphere Voice Response telephony channels. Fax is
supported only when DTTA is used to communicate with the telephone
network, and not with DTNA or DTEA. The necessary device drivers for the
fax card are supplied as part of WebSphere Voice Response to simplify
installation and setup. A sample fax application is also supplied to
demonstrate how a typical fax application can work with WebSphere Voice
Response—this is fully documented and can be used as a model for a bespoke
application.
WebSphere Voice Response can transmit data to a combined telephone and fax
machine without breaking the call (one-call mode). It can also provide
two-call fax support when the request for a fax arrives in an inbound call. At
this point WebSphere Voice Response terminates the inbound call and makes
an outbound call to dispatch the fax, either to a combined telephone and fax,
or to a standalone fax machine.
For more details about using WebSphere Voice Response with a Brooktrout fax
card, refer to the WebSphere Voice Response for AIX: Fax using Brooktrout book.
Using ADSI telephones
The Analog Display Services Interface, normally abbreviated to ADSI, is a Bell
Communications Research (Bellcore) standard, which defines a protocol for
data transmission over a voice-grade telephony channel. Devices such as a
central office or interactive voice response (IVR) system that can generate
ADSI data can communicate with ADSI compatible telephones. These
telephones have some processing capability, an area of read/write memory, a
display screen, navigation keys (to scroll information up, down, left, and
right), and softkeys. The softkeys can be programmed to perform different
functions at different times.
Using the ADSI component of WebSphere Voice Response for AIX, you can
create scripts to control ADSI telephones.
Choosing the application to answer incoming calls
WebSphere Voice Response can be configured to respond with an appropriate
voice application for each caller. Normally you associate each application with
a different telephone number that callers dial according to the service they
require. Then WebSphere Voice Response has to recognize the dialed
telephone number. The dialed number can be recognized in different ways,
depending on what information about the call is available from the switch in
your country:
v Dialed number information (DID or DNIS)
v Common channel signaling
v CallPath Server
v Voice over Internet Protocol (VoIP) Gateway
v Exchange data link
v Channel Identification
Note: The dialed number is also referred to as the called number.
Dialed number information (DID or DNIS)
The dialed number can be provided via dialed number identification service
(DNIS) or direct inward dialing (DID). WebSphere Voice Response can receive
and use dialed number information if the information is transmitted by the
switch as DTMF or MFR1 tones in the voiceband or as dial pulses on the
signaling channel. Dialed number information might not be available,
depending on the switch to which WebSphere Voice Response is connected,
and how the trunk interface is configured.
Common channel signaling
With ISDN, the called and calling numbers are always available if provided
by your carrier.
CallPath Server
If you have CallPath Server, the dialed number can be passed to WebSphere
Voice Response by a signaling process supplied with WebSphere Voice
Response.
Exchange data link
If you have an RS232 exchange data link (a direct physical attachment
between the switch and the pSeries computer), the dialed number can be
passed to WebSphere Voice Response either by one of the exchange data link
signaling processes provided with WebSphere Voice Response, or by a
custom-written signaling process.
Channel identification
Alternatively, the application can be chosen on the basis of the specific
channel on which the call is received. Channel identification information is
always available. A system variable also identifies the logical channel that the
application is using.
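The application-selection step described above can be pictured as a lookup from called number to application, with channel identification as the fallback (the numbers and application names below are invented for illustration):

```python
# Hypothetical mapping of dialed (called) numbers to voice applications.
applications = {
    "5550100": "account_balance",
    "5550101": "order_status",
}

def choose_application(dialed_number: str, channel_id: int) -> str:
    """Pick an application by dialed number; fall back to a per-channel
    default when no dialed-number information is available."""
    default_by_channel = {1: "account_balance"}
    return applications.get(dialed_number,
                            default_by_channel.get(channel_id, "operator"))

assert choose_application("5550101", 1) == "order_status"
assert choose_application("", 1) == "account_balance"   # channel fallback
```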
Estimating telephony traffic
Because connecting WebSphere Voice Response to the correct number of
channels is critical to success, you need to estimate how much telephony
traffic you expect to process with WebSphere Voice Response. You must do
this before you can decide what hardware and software to order:
v Gather information about telephony traffic
v Calculate telephony traffic
v Determine a blockage rate
v Estimate the number of channels needed
People you need
The person who estimates your telephony traffic should have a basic
knowledge of telephony traffic engineering. If no one in your company has
this knowledge, find someone to help you who is familiar with the principles
and techniques.
When a large volume of traffic is handled, installing WebSphere Voice
Response can have an impact on an existing telephone system. Involve a
representative from your telephone service provider, who may be a PABX
vendor or a telephone company.
When you have gathered this information, you should involve an automatic
call distributor (ACD) engineer or an IBM Call Center specialist to help define
your needs.
Telephony traffic information
To estimate telephony traffic, you need two numbers. One is the number of
busy hour calls you expect WebSphere Voice Response to handle. The other is
the estimated call hold time you expect after WebSphere Voice Response is
installed. Also, you should think about other factors, especially the impact
that your use of WebSphere Voice Response will have on telephony traffic.
Number of busy hour calls
The number of busy hour calls is the number of calls you receive during the
busiest 60 minute period of an average day in your busiest month. You might
already know how many busy hour calls you are handling—if not, you may
be able to use ACD reports, trunk traffic statistics, or telephone bills to
estimate the number. Use this number as a starting point for calculating the
number of busy hour calls you expect to receive once WebSphere Voice
Response is installed.
Call hold time
Call hold time is the average amount of time, in seconds, that a call takes
from start to finish. This includes the time it takes to set up a call, greet the
caller, perform the requested transaction (including database access and
speaking the results to the caller), and get ready for the next call. The time it
takes to set up a call includes ring time and the time it takes the switch to
recognize a disconnect signal. ACD reports, billing records, and trunk
monitoring information can help you estimate existing call hold time.
Otherwise, you can time a representative sample of actual calls.
If agents are already performing the transactions that you plan to include in
your voice applications, you can use the existing call hold time as a starting
point for calculating what the call hold time should be once WebSphere Voice
Response is installed. If WebSphere Voice Response is to perform new
transactions, use modeling and simulation techniques to estimate both the
busy hour calls and the call hold times. Remember that host system or remote
database access can introduce significant delays, especially at peak times.
Other factors to consider
You might want to adjust the number of busy hour calls and call hold time to
allow for some additional factors. Here are a few to consider:
v Growth: Your business plan might call for an increase in business, which
will probably cause an increased amount of traffic.
v Expanded hours: When WebSphere Voice Response is installed, your
business can operate 24 hours a day. This is likely to smooth out the
telephony traffic.
v Expanded scope: WebSphere Voice Response can expand the scope of
operations by offering additional services, thus increasing traffic.
v Demand stimulation: After WebSphere Voice Response is installed, callers
often call more frequently because they find it more convenient.
In addition, if you also decide to replace channel associated signaling with
ISDN, you need to think about the reduced call-setup and clearing times,
which could result in shorter call hold times for the same talk-time.
Calculating telephony traffic
When you have determined the number of busy hour calls and call hold times
you expect, you can estimate the amount of traffic you expect WebSphere
Voice Response to handle. Telephony traffic is measured in units called
Erlangs.
To calculate your call center traffic in Erlangs, use this formula:
Let c be the number of busy hour calls.
Let t be the call hold time in seconds.
Multiply c by t and divide the product by 3600. The result is the traffic in
Erlangs:
(c x t) / 3600 = traffic in Erlangs.
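As a worked example of the formula above (the call volumes are illustrative), 500 busy hour calls with a 144-second hold time give 20 Erlangs:

```python
def traffic_in_erlangs(busy_hour_calls: int, hold_time_seconds: float) -> float:
    """Traffic in Erlangs = (busy hour calls x hold time in seconds) / 3600."""
    return busy_hour_calls * hold_time_seconds / 3600

assert traffic_in_erlangs(500, 144) == 20.0
```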
Determining a blockage rate
Traffic in Erlangs cannot be translated directly into a number of channels. To
determine how many channels you need, you must also know what blockage
rate is acceptable to your callers.
Blockage rate indicates what percentage of callers are not connected
immediately. For example, a blockage rate of 1% means that if 500 people call
during peak hours, five of them will receive busy tones or have to wait in the
switch queue.
If you are already operating with a high blockage rate, you might be losing
many of the calls you would be getting with a lower blockage rate. So, if you
assume a lower blockage rate when you calculate the number of channels you
need, you should perhaps assume a higher Erlang number.
Estimating the number of channels needed
The blockage rate and traffic in Erlangs allow you to estimate how many
channels you need. The number of channels depends, in part, on whether
WebSphere Voice Response is connected in such a way that it is behind a
switch queue. To determine the number of channels you need, estimate how
much telephony traffic you expect. This estimate can then be converted into a
specific number of telephone channels. Your estimate should then be validated
by an expert.
For a non-queueing situation (when you do not have an ACD in front of
WebSphere Voice Response), Table 6 on page 123 can be used to give a
preliminary estimate of the number of channels you will need. First find your
busy hour traffic in Erlangs in the first column. Then find the blockage rate
that is acceptable to you in the column headings. The cell where the row and
column meet gives you the Number of Channels you need. Before
implementation, your estimate should be validated by an expert.
For example, if your traffic load is 20 Erlangs and you find a blockage rate of
5% acceptable, you need 26 channels. Because an E1 trunk carries 30 channels
and a T1 trunk carries 24, the number of channels you order depends on the
type of system:
v For an E1 system, you need to order 30 channels (one trunk). If you use all
30 channels, you will be able to achieve a better blockage rate (for the same
amount of traffic).
v For a T1 system, you need to order 48 channels (two trunks). If you use all
48 channels, you will be able to achieve a better blockage rate (for the same
amount of traffic).
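Tables like Table 6 are derived from the Erlang B formula. A short sketch (the standard textbook recursion, not part of WebSphere Voice Response) reproduces the example above, in which 20 Erlangs at a 5% blockage rate needs 26 channels:

```python
def erlang_b(channels: int, traffic: float) -> float:
    """Blocking probability for `traffic` Erlangs offered to `channels`
    channels, computed with the standard Erlang B recursion."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)
    return b

def channels_needed(traffic: float, blockage: float) -> int:
    """Smallest number of channels whose blocking rate is acceptable."""
    n = 1
    while erlang_b(n, traffic) > blockage:
        n += 1
    return n

assert channels_needed(20, 0.05) == 26   # matches the worked example
assert channels_needed(5, 0.01) == 11    # matches Table 6
```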
Additional considerations
You should decide whether extra channels are to be made available for
internal use. Voice application developers use a channel to test applications.
When a developer is using a channel, that channel is not available to service
calls.
Also consider whether WebSphere Voice Response will be making outbound
calls. If so, you might want to reserve a number of channels for outbound
calling only. Those channels are also not available to receive calls.
If you are currently using an ACD, the design of the system can determine
how many channels you need.
Note: If you intend to use background music during voice applications, you
must include the number of background music channels (up to eight channels
can be used for background music on each trunk).
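Combining the service estimate with these extra channels is simple arithmetic: add the reserved channels to the Table 6 figure, then round up to whole trunks. The Python sketch below illustrates this; the reserve figures in the example are purely illustrative assumptions, not product requirements.

```python
import math

# Channels per trunk for the two digital trunk types discussed above.
CHANNELS_PER_TRUNK = {"E1": 30, "T1": 24}

def trunks_to_order(service_channels, reserved_channels, trunk_type):
    """Round the total channel requirement up to whole trunks.

    `reserved_channels` covers channels set aside for application
    testing, outbound-only calling, or background music.
    """
    total = service_channels + reserved_channels
    return math.ceil(total / CHANNELS_PER_TRUNK[trunk_type])

# 26 service channels (20 Erlangs at 5% blockage) plus a hypothetical
# 2 test channels and 4 outbound-only channels:
print(trunks_to_order(26, 6, "E1"))  # 2 E1 trunks (60 channels)
print(trunks_to_order(26, 6, "T1"))  # 2 T1 trunks (48 channels)
```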
Table 6. Number of channels required according to traffic and blockage rate

Traffic in Erlangs, then number of channels required at each
acceptable blockage rate:

Erlangs  0.1%  0.5%   1%   2%   5%
  1    6    5    5    4    4
  2    8    7    7    6    5
  3   10    9    8    8    7
  4   12   11   10    9    8
  5   14   12   11   10    9
  6   15   14   13   12   10
  7   17   15   14   13   11
  8   18   16   15   14   13
  9   20   18   17   15   14
 10   21   19   18   17   15
 11   23   20   19   18   16
 12   24   22   20   19   17
 13   26   23   22   20   18
 14   27   24   23   21   19
 15   28   26   24   23   20
 16   30   27   25   24   21
 17   31   28   27   25   22
 18   32   29   28   26   23
 19   34   30   29   27   24
 20   35   32   30   28   26
 21   36   33   31   29   27
 22   37   34   32   31   28
 23   39   35   34   32   29
 24   40   36   35   33   30
 25   41   38   36   34   31
 26   42   39   37   35   32
 27   44   40   38   36   33
 28   45   41   39   37   34
 29   46   42   40   38   35
 30   47   44   42   39   36
 31   49   45   43   41   37
 32   50   46   44   42   38
 33   51   47   45   43   39
 34   52   48   46   44   40
 35   54   49   47   45   41
 36   55   51   48   46   42
 37   56   52   49   47   43
 38   57   53   51   48   44
 39   58   54   52   49   45
 40   60   55   53   50   46
 41   61   56   54   51   47
 42   62   57   55   52   48
 43   63   59   56   53   49
 44   64   60   57   55   50
 45   66   61   58   56   51
 46   67   62   59   57   52
 47   68   63   61   58   53
 48   69   64   62   59   54
 49   70   65   63   60   55
 50   71   66   64   61   56
 51   73   68   65   62   57
 52   74   69   66   63   58
 53   75   70   67   64   59
 54   76   71   68   65   60
 55   77   72   69   66   61
 56   78   73   70   67   62
 57   80   74   71   68   63
 58   81   75   73   69   64
 59   82   76   74   70   65
 60   83   78   75   71   66
 61   84   79   76   72   67
 62   85   80   77   74   68
 63   86   81   78   75   69
 64   88   82   79   76   70
 65   89   83   80   77   71
 66   90   84   81   78   72
 67   91   85   82   79   73
 68   92   86   83   80   74
 69   93   87   84   81   75
 70   95   89   85   82   76
 71   96   90   87   83   77
 72   97   91   88   84   78
 73   98   92   89   85   79
 74   99   93   90   86   80
 75  100   94   91   87   81
 76  101   95   92   88   82
 77  102   96   93   89   83
 78  104   97   94   90   84
 79  105   98   95   91   85
 80  106  100   96   92   86
 81  107  101   97   93   87
 82  108  102   98   94   88
 83  109  103   99   95   89
 84  110  104  100   96   90
 85  112  105  101   97   90
 86  113  106  103   98   91
 87  114  107  104   99   92
 88  115  108  105  101   93
 89  116  109  106  102   94
 90  117  110  107  103   95
 91  118  111  108  104   96
 92  119  113  109  105   97
 93  121  114  110  106   98
 94  122  115  111  107   99
 95  123  116  112  108  100
 96  124  117  113  109  101
 97  125  118  114  110  102
 98  126  119  115  111  103
 99  127  120  116  112  104
100  128  121  117  113  105
101  129  122  118  114  106
102  131  123  119  115  107
103  132  124  121  116  108
104  133  125  122  117  109
105  134  127  123  118  110
106  135  128  124  119  111
107  136  129  125  120  112
108  137  130  126  121  113
109  138  131  127  122  114
110  139  132  128  123  115
111  141  133  129  124  116
112  142  134  130  125  117
113  143  135  131  126  118
114  144  136  132  127  119
115  145  137  133  128  120
116  146  138  134  129  121
117  147  139  135  130  122
118  148  140  136  131  123
119  149  142  137  132  124
120  151  143  138  133  125
121  152  144  139  134  126
122  153  145  140  135  127
123  154  146  142  136  128
124  155  147  143  137  128
125  156  148  144  138  129
126  157  149  145  139  130
127  158  150  146  141  131
128  159  151  147  142  132
129  160  152  148  143  133
130  162  153  149  144  134
131  163  154  150  145  135
132  164  155  151  146  136
133  165  156  152  147  137
134  166  158  153  148  138
135  167  159  154  149  139
136  168  160  155  150  140
137  169  161  156  151  141
138  170  162  157  152  142
139  171  163  158  153  143
140  173  164  159  154  144
141  174  165  160  155  145
142  175  166  161  156  146
143  176  167  162  157  147
144  177  168  163  158  148
145  178  169  164  159  149
146  179  170  166  160  150
147  180  171  167  161  151
148  181  172  168  162  152
149  182  173  169  163  153
150  183  174  170  164  154
151  185  176  171  165  155
152  186  177  172  166  156
153  187  178  173  167  157
154  188  179  174  168  158
155  189  180  175  169  159
156  190  181  176  170  159
157  191  182  177  171  160
158  192  183  178  172  161
159  193  184  179  173  162
160  194  185  180  174  163
161  195  186  181  175  164
162  197  187  182  176  165
163  198  188  183  177  166
164  199  189  184  178  167
165  200  190  185  179  168
166  201  191  186  180  169
167  202  192  187  181  170
168  203  194  188  182  171
169  204  195  189  183  172
170  205  196  190  184  173
171  206  197  191  185  174
172  207  198  192  186  175
173  209  199  194  187  176
174  210  200  195  188  177
175  211  201  196  189  178
176  212  202  197  190  179
177  213  203  198  191  180
178  214  204  199  192  181
179  215  205  200  193  182
180  216  206  201  194  183
181  217  207  202  195  184
182  218  208  203  196  185
183  219  209  204  197  186
184  220  210  205  198  187
185  221  211  206  199  187
186  223  212  207  200  188
187  224  213  208  201  189
188  225  215  209  202  190
189  226  216  210  203  191
190  227  217  211  204  192
191  228  218  212  205  193
192  229  219  213  206  194
193  230  220  214  207  195
194  231  221  215  208  196
195  232  222  216  209  197
196  233  223  217  210  198
197  234  224  218  211  199
198  236  225  219  212  200
199  237  226  220  213  201
200  238  227  221  214  202
201  239  228  222  215  203
202  240  229  223  216  204
203  241  230  224  217  205
204  242  231  226  218  206
205  243  232  227  219  207
206  244  233  228  221  208
Planning the switch configuration
You might want to think about adjusting the configuration of the switch to
which you are connecting WebSphere Voice Response. The switch is
configured to work in your current operating environment. That environment
changes when WebSphere Voice Response is installed. This section suggests
some switch configuration issues that you should consider. Some of these
issues arise from the presence or absence of queuing. The previous section
suggested other common switch features that might require attention. Your
switch vendor or telephone service provider can help you determine whether
your switch requires some operational accommodations or reconfiguration.
When the switch has no queuing
When someone makes a telephone call, they do not want to hear a busy
signal or endless ringing. If your switch does not include queuing, ensure that
the absence of queues does not result in dissatisfied callers.
For example, if the switch offers call forwarding, you might want to forward
all the calls that cannot be answered to an agent or to a voice mailbox.
When the switch has queuing
Adding WebSphere Voice Response to your telephone system is like adding
more agents to a queue. When you connect WebSphere Voice Response to a
switch with queuing, WebSphere Voice Response can act as the first or the
second agent. Figure 37 on page 136 and Figure 38 on page 137 show two
ways in which to integrate WebSphere Voice Response into a queue
configuration.
Call transfer with switch queuing
Figure 37 on page 136 illustrates how WebSphere Voice Response can be set
up as the first agent. In this case, call transfer allows callers to speak to an
agent as well as to WebSphere Voice Response.
Figure 37. WebSphere Voice Response and a private switch with queuing
Figure 38 on page 137 illustrates how WebSphere Voice Response can be set
up as the second agent. In this case, voice applications handle the overflow
calls. Call transfer allows callers who find themselves in the overflow queue
to speak to an agent as well as to WebSphere Voice Response.
Figure 38. Another way to integrate WebSphere Voice Response into a switch queue
WebSphere Voice Response supports both blind transfers and screened transfers.
A blind transfer is one in which WebSphere Voice Response requests a transfer
and then hangs up. A screened transfer is one in which WebSphere Voice
Response requests a transfer and waits until the extension is answered before
it hangs up.
Here are some aspects to consider if you are going to create voice applications
that transfer calls:
v WebSphere Voice Response can send a hook flash or feature code to request
a switch feature such as call transfer. Therefore the switch must be able to
read the signal as a feature request.
v You can specify how long WebSphere Voice Response waits between
sending a hook flash signal and dialing the target extension. This helps you
account for switches that send a dial tone when transfer is requested and
those that allow the requestor to start dialing immediately. The wait can be
specified globally, using a system parameter, or for a specific voice
application.
v A state table can also control how soon WebSphere Voice Response hangs
up after it has dialed the target extension. Some switches interpret a very
fast disconnect as a request to cancel the transfer, and return the call to the
transferring agent. So, WebSphere Voice Response must be configured not
to hang up too soon. The definition of ‘too soon’ depends on whether the
call is being transferred to a target number on the same switch or on a
different switch.
Queue priorities
Switch queue configuration is particularly important when WebSphere Voice
Response is the second agent, as in Figure 38 on page 137, and answers all
overflow calls. If you plan to integrate WebSphere Voice Response into a call
center as the second agent, the switch should be able to support multi-tiered
queues.
Multi-tiered queuing allows callers to receive more efficient service. A caller
who has been put into the overflow queue and then chooses to transfer to an
agent is placed into a high-priority queue.
Here are some switch queue considerations:
v If the switch is configured with multi-tiered queues, will the configuration
be suitable when WebSphere Voice Response is included?
v If you plan to make WebSphere Voice Response the second agent, can the
switch be configured with multi-tiered queues?
Other switch feature planning issues
Here are some other issues to consider when planning for WebSphere Voice
Response installation:
v Not all switches use the same timing. You can adjust the default WebSphere
Voice Response telephony configuration to work with the timing that your
switch uses.
v Your switch might be programmed to include features that make sense only
when a person is answering the telephone (for example, voice messaging).
But you might want to turn on call forwarding, so that the switch forwards
unanswered calls to an agent.
Switch configuration questions
The “Planning checklist” on page 184 includes a starter list of configuration
questions to think about before you connect WebSphere Voice Response to a
switch. If necessary, your telephone service provider and your IBM
representative can help you decide how to make WebSphere Voice Response
work better with your switch.
Chapter 5. Workstation and voice processing
This chapter lists the minimum hardware and software requirements for
WebSphere Voice Response. It also provides more detail about these
requirements, and about the optional items that you might need for using
WebSphere Voice Response in specific ways. The chapter also discusses
location planning for the hardware, and the factors you should take into
consideration when you are planning how much memory and storage you
need to order. To install the hardware and software that WebSphere Voice
Response needs, follow the instructions provided in WebSphere Voice Response
for AIX: Installation. This chapter also introduces the factors you should think
about when you plan to connect some WebSphere Voice Response systems
together in a single system image (SSI).
Note: This chapter does not provide detailed requirements for systems that
use DTEA cards or the DTNA software simulation of a DTEA card. If you
plan to use a voice over IP configuration, refer to the WebSphere Voice Response
for AIX: Voice over IP using Session Initiation Protocol book.
Minimum requirements
To run a WebSphere Voice Response voice application in production, you need
at least the minimum configuration that is shown in Table 7 below.
Table 7. Minimum configuration for WebSphere Voice Response

Item                 Minimum configuration
Platform             pSeries computer p615 model 7029-6E3/6C3 (see note 1)
RAM                  1 GB
Storage              10 GB hard disk space
Installation Media   CD-ROM drive
Display              1024 x 1280 color display
Keyboard             101-key keyboard
Mouse                pSeries computer mouse (3-button mouse)
Voice Processing     Optional DTTA (feature number 6312 or 6313)
Software             Refer to WebSphere Voice Response Installation.
1. DTNA Voice over IP is supported on all Power 4, Power 5 and Power 6 systems. At
least two processors are strongly recommended, unless the number of channels in
use is low (for example, under 30).
The configuration shown in Table 7 on page 141 allows you to run an
application with the following characteristics:
v A simple state table application
v Little or no voice messaging
v Local database
v Small voice segments
v Compressed voice segments only
v Voice segments recorded over the telephone
v No speech recognition
v Between 1000 and 1500 calls per day, 25% in the busy hour, with 1%
blocking
v Average call hold time of 60 seconds
v 12 telephony channels
If you add extra channel support (see “Channel increments” on page 146)
you can run more than 12 channels.
If you add Communications Server for AIX to the software configuration,
callers can access a database on a host computer by using the 3270 data
stream; this can be configured over both SNA and TCP/IP networks. In the
minimum configuration, one 3270 emulation session per call is supported. For
3270 emulation, a faster CPU and more memory are required.
With WebSphere Voice Response, you can start with a small configuration and
add extra capacity as your voice processing needs grow. To add more capacity
for voice messaging or voice segments, provide speech recognition capability,
or increase the number of lines you can support, you must add to the basic
configuration shown in Table 7 on page 141. Use the information in the
following topics to select a configuration that meets your business needs.
Recommended requirements
To run a WebSphere Voice Response voice application in production, you need
at least the recommended configuration that is shown in Table 8 below.
Table 8. Recommended configuration for WebSphere Voice Response

Item                 Recommended configuration
Platform             Power 6 model 520 with 2 x 4.7 GHz processors (see note 1)
RAM                  4 GB
Storage              10 GB hard disk space
Installation Media   CD-ROM drive
Display              1024 x 1280 color display
Keyboard             101-key keyboard
Mouse                pSeries computer mouse (3-button mouse)
Voice Processing     DTNA
Software             Refer to WebSphere Voice Response Installation.
1. DTNA Voice over IP is supported on all Power 6 systems. At least two processors
or processor cores are strongly recommended, unless the number of channels in use
is low (for example, under 30).
The configuration shown in Table 8 allows you to run an application with the
following characteristics:
v A simple VoiceXML application
v Little or no voice messaging
v Local database
v Small voice segments
v Compressed voice segments only
v Voice segments recorded over the telephone
v No speech recognition
v Between 1000 and 1500 calls per day, 25% in the busy hour, with 1%
blocking
v Average call hold time of 60 seconds
v 12 telephony channels
If you add extra channel support (see “Channel increments” on page 146)
you can run more than 12 channels.
If you add Communications Server for AIX to the software configuration,
callers can access a database on a host computer by using the 3270 data
stream; this can be configured over both SNA and TCP/IP networks. In the
minimum configuration, one 3270 emulation session per call is supported. For
3270 emulation, a faster CPU and more memory are required.
With WebSphere Voice Response, you can start with a small configuration and
add extra capacity as your voice processing needs grow. To add more capacity
for voice messaging or voice segments, provide speech recognition capability,
or increase the number of lines you can support, you must add to the basic
configuration shown in Table 8 on page 143. Use the information in the
following topics to select a configuration that meets your business needs.
Prerequisite and associated software products
The other products, and the versions of those products, that are required by
certain WebSphere Voice Response functions are listed here.

Table 9. Licensed program products required

Function                        Product                                  Version
All                             AIX for the pSeries computer             6.1
License Use Management GUI      License Use Runtime 4.6.8.1              6.1.0.22
                                (supplied free with AIX)
Developing custom servers       IBM XL C/C++ Enterprise Edition          9.0.0.2
using C/C++ language            for AIX compiler
Using 3270 communications       Communications Server for AIX            6.3.1.0
Using CCXML, VoiceXML or        IBM Developer Kit and Runtime for        6.0, Service
Java                            AIX, Java Technology Edition (32-bit)    Refresh 3 (SR3)
                                (also known as IBM 32-bit SDK for        (see note 1)
                                AIX, Java Technology Edition)

1. Currently only Java 1.6, SR3 is supported. For details of later versions of Java
supported, refer to the System Requirements web page at http://www.ibm.com/
software/pervasive/voice_response_aix/system_requirements
Unless an address space greater than 4 GB is required, the 32-bit Java 6 JVM
generally outperforms the 64-bit JVM in standard configurations.
WebSphere Voice Response software
The base IBM WebSphere Voice Response for AIX licensed program product
includes all the function needed to support the capabilities described in the
previous sections of this book, such as handling calls and interacting with
callers; accessing, storing, and manipulating information; and storing and
retrieving messages.

Note: For more detailed information about the WebSphere Voice Response
packages and filesets required, see WebSphere Voice Response for AIX:
Installation. For information about any PTFs required, see the Readme file on
the WebSphere Voice Response CD; any PTFs are provided on a separate CD
as part of the WebSphere Voice Response package.
DB2 support
DB2 Version 9.5 at fix pack 4 is provided with WebSphere Voice Response for
AIX Version 6.1, and is installed as part of the installation process. Later DB2
9.5 fix packs (when available) can be installed by following the procedure
documented in the WebSphere Voice Response for AIX: Installation book.
To work with WebSphere Voice Response for AIX, DB2 must not be updated
to a version later than 9.5. DB2 is provided only for the storage and
management of data used by WebSphere Voice Response for AIX, and if it is
to be used by other applications, a separate license must be purchased.
Associated products
The following associated products can be used with WebSphere Voice
Response for AIX:
v WebSphere Voice Server
– Speech recognition and text-to-speech capabilities
– Grammar development tools
v WebSphere Voice Application Access (WVAA)
v WebSphere Portal Server
v WebSphere Application Server
v Genesys Callpath CTI server
v Cisco ICM CTI server
Channel increments
The basic system includes processing for 12 telephony channels. Additional
channels are available in increments of 30 (E1) or 24 (T1). The number of
channels you want to support will affect which model of pSeries computer
you choose (see “System p5 and pSeries computer” on page 149).
Migration from previous releases
If you want to upgrade to WebSphere Voice Response Version 6.1, you must
be already using either DirectTalk for AIX Version 2.3 or WebSphere Voice
Response Version 3.1. If you are using any level prior to these, contact your
IBM representative for assistance.
The procedures for migrating between releases are described in the WebSphere
Voice Response for AIX: Installation book.
Licensing WebSphere Voice Response software
This section provides an overview of how licensing is implemented in
WebSphere Voice Response for AIX, and provides guidance on the number of
licenses you'll need. For details of how to configure your systems, and how to
enroll and distribute licenses see the WebSphere Voice Response for AIX:
Installation book.
In WebSphere Voice Response, your license requirements are based on the
number of telephony channels you intend to use. Licenses must be registered
with License Use Runtime, which WebSphere Voice Response uses to monitor
usage, and to check for infringements. Ensure that License Use Runtime
Version 5.2 or above is installed and correctly configured on your system
before you run WebSphere Voice Response.
For detailed information on planning a network license environment see Using
License Use Management Runtime. You can obtain a PDF version of this
document online at http://publibfp.boulder.ibm.com/epubs/pdf/c2396700.pdf
The WebSphere Voice Response licensing model
WebSphere Voice Response uses concurrent licenses. These are held by a
network license server and temporarily granted to a WebSphere Voice
Response system to allow it to run. While WebSphere Voice Response is still
running on that system, the license remains unavailable to other networked
machines. When WebSphere Voice Response stops running, the license is
returned to the network license server, where it becomes available to other
license client machines.
146
General Information and Planning
If your WebSphere Voice Response machines are not networked or you have a
single standalone system, each machine can be configured to act as its own
license server.
The WebSphere Voice Response licensing policy
WebSphere Voice Response licenses are customer-managed, which means it is
your responsibility to monitor the use of existing licenses and to purchase
new licenses when necessary. License Use Runtime will provide you with all
the information necessary to monitor license use on your system and the
facility to enroll and distribute the licenses you purchase.
With customer-managed use you can choose a soft stop or hard stop policy.
If a license is requested under the soft stop policy and all licenses are already
allocated, a temporary license will be allocated and WebSphere Voice
Response will start without error. Where this occurs, you are expected and
trusted to order more licenses from IBM without delay. License Use
Management keeps a record of the maximum number of temporary licenses
issued at any one time as a high-water mark. You can use the License Use
Management Basic License Tool to monitor the high-water mark and determine
whether you need to purchase more licenses.
Technically, if all the licenses are already in use and a further license is
allocated, this is a license violation. However, choosing this option allows you
time to purchase and distribute extra licenses without disruption to your
business activities. If you have already purchased additional licenses but not
yet enrolled and distributed them, you should do so at this time.
Hard stop policy still allows WebSphere Voice Response to start but
WebSphere Voice Response will issue a red alarm and log an error.
WebSphere Voice Response uses soft stop as the default option. You can use
the Basic License Tool to change the policy to hard stop and back again at any
time.
The network licensing environment
There are three major components to the network licensing environment for
WebSphere Voice Response:
v Network license clients.
v Network license servers.
v Central registry license server.
These components can be set up on separate physical machines or with some
or all components on the same machine. Each machine must have License Use
Runtime Version 5.2 or above installed and configured. If the machines in
Chapter 5. Workstation and voice processing
147
your network satisfy this requirement but have different versions of License
Use Runtime installed, you should check the compatibility notes in Using
License Use Management Runtime.
Network license clients
In License Use Management, a network license client is any node configured
to make use of licenses by requesting them from a network license server. In
this case, a network license client will be any machine that has WebSphere
Voice Response installed. A network license client is configured for License
Use Management in the same way regardless of whether it is a WebSphere
Voice Response server or client.
Each network license client must be connected to a network license server.
Network license servers
A network license server is a node in the network on which network licenses
are stored for use by multiple network license clients. You must have at least
one network license server in your network and it is possible to have more.
When the user at a client starts a licensed program, License Use Runtime at
the network license server determines whether a license is available.
Central registry license server
The central registry is a database that contains information about the
enrollment and distribution of licenses. It also holds information related to
those licenses, such as hard or soft stop policy. The central registry license
server is used to track soft stop licenses and record the high-water mark. All
customer-managed use controlled products, including WebSphere Voice
Response, require you to identify one central registry for each network.
How many licenses do I need?
The number of licenses you need depends solely on the number of channels
you are using, as shown in Table 10 below.
Table 10. Licensing of WebSphere Voice Response components

WebSphere Voice Response component    Number of licenses required
Channels                              1 license per channel in use
If you are using HACMP/ES to direct traffic to an alternative WebSphere
Voice Response system in the event of a failure in your normal production
system, you do not require licenses for the redundant system provided it is
not used in other circumstances.
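Because licensing is purely per channel, the number of licenses to purchase is a simple sum over the systems that actually carry traffic. A minimal sketch (the system names and channel counts below are hypothetical, for illustration only):

```python
def licenses_required(channels_by_system, standby_systems=frozenset()):
    """One concurrent license per telephony channel in use.

    A redundant HACMP standby system needs no licenses, provided it
    is not used in other circumstances.
    """
    return sum(channels
               for system, channels in channels_by_system.items()
               if system not in standby_systems)

# Hypothetical site: two production systems and one HACMP standby.
systems = {"prod_a": 60, "prod_b": 30, "standby": 60}
print(licenses_required(systems, standby_systems={"standby"}))  # 90
```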
Hardware requirements
This section discusses the hardware you might need:
v “BladeCenter computer”
v “System p5 and pSeries computer”
v “Telephony hardware” on page 151
v “Displays” on page 155
v “Keyboard and mouse” on page 155
v “Machine-readable media” on page 155
v “Printer” on page 155
BladeCenter computer
For Voice Over Internet (VoIP) Telephony capability using Session Initiation
Protocol (SIP), WebSphere Voice Response for AIX runs on different models of
NEBS compliant BladeCenter computers without the need for a Digital Trunk
Ethernet Adapter (DTEA) card. All Power BladeCenter systems are supported
for VoIP/SIP, including: BladeCenter JS20, JS21, JS23, and JS43.
The VoIP/SIP support is implemented by a software simulation of a DTEA
card named Digital Trunk No Adapter (DTNA), which allows most
WebSphere Voice Response for AIX applications to run without the need for a
DTTA, or DTEA adapter.
For information about Voice over IP and how it is implemented in WebSphere
Voice Response Version 6.1, see “Voice over IP” on page 109.
System p5 and pSeries computer
WebSphere Voice Response for AIX runs on different models of pSeries
computer. The list below shows the available models that you can use (based
on their support for AIX Version 6.1), but you should ask your IBM
representative for up-to-date details of supported models.
v pSeries 615 Model (7029-6C3 and 7029-6E3)
v pSeries 630 Model (7028-6C4 and 6E4)
v pSeries 650 Model (7038-6M2)
v System p5-505 (9115-520)
v System p5-510 (9110-510) or 51A (9110-51A)
v System p5-520 (9111-520) or 52A (9131-52A)
v System p5-550 (9113-550) or 55A (9133-55A)
v System p5-570 (9115-570)
v System p6 520 Model 8203-E4A
v System p6 550 Model 8204-E8A
v System p6 570 Model 9117-MMA
v pSeries 7311-D20 I/O Drawer (available on pSeries 650-6M2, 630-6C4, and
System p5 520, 550 and 570, and System p6 520, 550 and 570)
v pSeries 7311-D10 I/O Drawer (available on pSeries 650-6M2)
pSeries computers are typically used as servers for many different
applications and users. WebSphere Voice Response can be one of these
applications, but you must be very careful when balancing the application
load. When running many voice channels, WebSphere Voice Response places
heavy demands on the CPU; this could affect the other applications. In
general, it is not a good idea to run other AIX applications on a system that is
servicing callers with WebSphere Voice Response applications.
The pSeries computer model you need also depends on:
v How many channels you want to be able to support.
The number of channels you can support depends on the number of
expansion slots available for digital trunk adapters, and the complexity of
your voice application.
v The type of voice processing your applications offer.
You must take into consideration the complexity of the voice applications
and whether they access remote information. You must also consider
whether you require speech recognition or text-to-speech, or both, and
whether this additional function is to be provided by remote servers or the
local system. If using speech recognition, consider whether you want to
allow callers to use barge-in (simultaneously playing prompts and passing
voice input to a speech recognizer increases the load). WebSphere Voice
Response's capacity to handle calls and give a voice response depends on
these factors, and you will need a faster model of pSeries computer to
handle more complex applications and provide an acceptable response time.
Your IBM representative can help you decide what physical configuration is
best.
v Whether you intend to use a VoiceXML, Java or state table programming
environment.
v Whether you want a floor-mounted system or a desktop system.
v Whether you need to consider NEBS compliance.
High-quality audio recording
For recording and playing back higher-quality audio than can be recorded
over the telephone, use a separate PC with an industry audio card (for
example, Creative Labs Soundblaster), and import standard audio files (such
as .wav) using the Batch Voice Import facility of WebSphere Voice Response.
Telephony hardware
Using DTEA or DTNA: This section addresses only DTTAs. If you plan to
use a DTEA card or the DTNA software simulation of a DTEA card for a
voice over IP configuration, refer to the WebSphere Voice Response for AIX: Voice
over IP using Session Initiation Protocol book for details about how to set up
your system.
To process voice data supplied by a telephony connection on a pSeries
computer using WebSphere Voice Response for AIX, you need three functions:
v The digital trunk adapter, which connects the bus of the pSeries computer
to the voice-processing function and to the telephony trunk
v The voice processor, which contains digital signal processors (DSPs) that are
programmed to perform functions such as voice compression and tone
detection
v The trunk interface, which connects the voice processor to the telephony
trunk
All three of the above functions are provided by a DTTA card installed in the
pSeries computer.
Digital trunk adapters
The DTTA adapters manage the digital signals coming from the switch to the
pSeries computer on up to four trunk lines.
The number of DTTA cards you can install depends on the specific pSeries
computer you are using, as shown in Table 11:
Table 11. Digital trunk adapters supported by System p5, pSeries and RS/6000 models
used with WebSphere Voice Response

System p5, pSeries, or RS/6000 model                            Maximum number of DTTAs
pSeries 615 Models 7029-6C3 and 6E3                             Three
pSeries 620 Models 7025-6F0 and 6F1                             Not supported
pSeries 630 Models 7028-6C4 and 6E4                             Four
pSeries 650 Model 7038-6M2                                      Four
System p5-520 Model 9111-520                                    Three
System p5-550 Model 9113-550                                    Four
System p5-570 Model 9115-570                                    Not supported, but see
                                                                pSeries 7311-D20 I/O
                                                                Drawer below
System p6-520 Model 8203-E4A                                    Two
System p6-550 Model 8204-E8A                                    Two
System p6-570 Model 9117-MMA                                    Not supported, but see
                                                                next row
pSeries 7311-D20 I/O Drawer on System p6-520, p6-550, p6-570    Four per I/O drawer
pSeries 7311-D20 I/O Drawer on System p5-520, p5-550 or p5-570  Four per I/O drawer
pSeries 7311-D20 I/O Drawer on pSeries 650-6M2                  Four per I/O drawer
pSeries 7311-D20 I/O Drawer on pSeries 630-6C4                  Four per I/O drawer
pSeries 7311-D10 I/O Drawer on pSeries 650-6M2                  Three per I/O drawer
Intellistation 9114 Model 275                                   One
The number of adapters you require
Basic WebSphere Voice Response for AIX includes 12 ports on one E1 or T1
trunk, but you can order additional channels (see “Channel increments ” on
page 146 for details).
Table 12 below identifies the number of DTTA adapters you require for
specified numbers of channels. It also identifies the number of DTTA licenses
needed.
Table 12. Adapter and pack requirements

Channels (T1)   Channels (E1)   DTTAs required
 24              30             1
 48              60             1
 72              90             1
 96             120             1
120             150             2
144             180             2
168             210             2
192             240             2
216             270             3
240             300             3
264             330             3
288             360             3
312             390             4
336             420             4
360             450             4
384             480             4
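The adapter counts in Table 12 follow directly from the capacities stated earlier: each DTTA drives up to four trunks, a T1 trunk carries 24 channels, and an E1 trunk carries 30. As a rough planning sketch (the helper function and its name are illustrative, not part of the product):

```python
import math

# Capacity figures from the text and Table 12:
# one DTTA drives up to 4 trunks; a T1 trunk carries 24 channels,
# an E1 trunk carries 30 channels.
CHANNELS_PER_TRUNK = {"T1": 24, "E1": 30}
TRUNKS_PER_DTTA = 4

def dttas_required(channels: int, trunk_type: str) -> int:
    """Return the number of DTTA adapters needed for the given channel count."""
    per_adapter = TRUNKS_PER_DTTA * CHANNELS_PER_TRUNK[trunk_type]
    return math.ceil(channels / per_adapter)

# Spot-check against Table 12: 96 T1 channels fit on one adapter,
# 120 T1 channels need two, 480 E1 channels need four.
print(dttas_required(96, "T1"), dttas_required(120, "T1"), dttas_required(480, "E1"))
```

The same arithmetic also tells you when a configuration exceeds what one system unit can hold; compare the result with the per-model maximums in Table 11.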
Configuring DTTA cards
If you install more than one DTTA adapter in your pSeries computer system
unit, you need an H.100 bus cable to connect them together to facilitate the
transfer of digital data and clock signals between the cards.
It is important that DTTA adapters are installed in PCI slots in such a way as
to minimize any bus traffic contention between them and any other PCI
adapters, or other devices that use the buses. If the adapters have not been
preinstalled, please contact your IBM representative for advice. Up-to-date
guidance about the placement and configuration of adapters in specific
models of pSeries computer can also be found on the support pages of the
WebSphere Voice Response website at
http://www.ibm.com/support/docview.wss?rs=761&uid=swg21253839
H.100 connections
DTTAs are H.100 adapters. All H.100 adapters in a pSeries computer must
be connected together using an H.100 top-connector cable. This is to
synchronize the adapters and to support functions such as voice recognition
and channel-to-channel connection (tromboning). H.100 is the industry
standard for PCI systems.
WebSphere Voice Response supports the H.100 top-connector buses for DTTA.
To make connection easier, H.100 adapters should be installed in adjacent slots
in the pSeries computer system unit.
A 4-way H.100 cable is available as WebSphere Voice Response feature code
2877 (FC2877) with the DTTA. Normally a system configuration which
includes more than one DTTA adapter will automatically include one FC2877
H.100 cable. The 4-way cable 2877 allows up to four H.100 (DTTA) adapters to
be connected together.
Telecommunications cables for DTTAs
To connect each DTTA to the telephone network, you need to use RJ45 (for
E1) or RJ48 (for T1) connectors, which plug into the rear connector of the
adapter. Each DTTA can then be connected to up to four telephony trunks.
Note: RJ45 or RJ48 cables are not supplied with the DTTA. You must order
the cables you require separately.
Synchronization of trunk clocks
WebSphere Voice Response DTTA trunk interfaces are designed to operate in a
clock-slave mode where WebSphere Voice Response attempts to recover the
clock from the received signal and synchronizes its transmit clock to this
recovered received clock. If the network or PBX is not configured as a clock
master, incorrect operation and clock instability might occur.
WebSphere Voice Response can handle unsynchronized trunk clocks, but it
adjusts the internal clock to normalize all clocks to the internal TDM bus. This
action is probably not noticeable on voice, but it might cause excessive data
errors if trunks are being used for other types of data, such as fax. For this
reason, it is better that all trunk clocks be synchronized (this normally occurs
automatically with direct network connections, but channel banks may require
some special attention).
Optional hardware
To extend the functionality of WebSphere Voice Response for AIX, you can
add the following optional hardware to a pSeries computer:
Brooktrout fax card
Enables you to develop WebSphere Voice Response applications that
can send and receive faxes on up to 30 telephony channels.
WebSphere Voice Response for AIX supports a single Brooktrout
TR1034 fax card installed in a PCI-based pSeries computer with a
DTTA adapter also installed. Refer to the WebSphere Voice Response for
AIX: Fax using Brooktrout book for more information.
SS8 Networks Inc SS7 hardware adapter
Enables your applications to take advantage of the features of
telephony networks using the SS7 Common Channel Signaling (CCS)
protocol. The software required for SS7 support is provided by a
separately orderable IBM PRPQ (7J0465), which contains the
Distributed7 software for both SS7 Server and WebSphere Voice
Response client machines. See “Signaling System 7 ” on page 106.
Refer also to the SS7 Support for WebSphere Voice Response: SS7 User's
Guide book for more information.
Displays
WebSphere Voice Response has a graphical window interface for both system
management and application development. To support the interface, the
pSeries computer must be equipped with a color graphics monitor and a
graphics display adapter, or a color graphics workstation that supports X11R6
or above. For the whole of each window to be viewable at the same time, the
display has to be set to a resolution of 1024 x 1280, and so must be capable of
supporting that resolution.
An ASCII interface provides access to the system management functions via a
character-based terminal. The ASCII interface can be used via a modem or
dial-up line from a remote location.
Keyboard and mouse
To use the WebSphere Voice Response user interface, you need either a mouse
(an IBM 3–button mouse is preferable) or keyboard. The system operates with
a 101-key keyboard, a 102-key keyboard, or a 106-key keyboard. The 102-key
keyboard is available with keysets for a number of different national
languages.
Machine-readable media
Unless you order your system preinstalled, or plan to download WebSphere
Voice Response from the IBM website, the software is packaged and delivered
on CD-ROM, so you need a CD-ROM drive to install it.
You might need extra machine-readable media for backing up files.
Printer
Any printer that is supported by the pSeries computer and AIX can be used
to print WebSphere Voice Response information such as the graphical view of
a state table, a custom server build report, a list of the components of a voice
application, or a statistical report.
Location planning
This section describes how best to position the hardware required for
WebSphere Voice Response.
For T1 telephony, the DTTA adapter (in a pSeries computer) must be within
15.25 meters (50 feet) of the switch, channel bank, or CSU to which it is
connected.
For E1 120-ohm balanced connections, the DTTA adapter (in a pSeries computer)
must be within 7.5 meters (24.6 feet) of the switch to which it is connected.
Physical dimensions
Wherever you plan to locate WebSphere Voice Response, you need enough
space for the pSeries computer. For details, see the RS/6000 and pSeries: Site
and Hardware and Planning Information guide.
Environment
Be careful to keep the pSeries computer out of dusty or polluted places. In
addition, try to place it where it cannot be knocked or shaken. Ensure
that the location does not become too hot, too cold, or too humid.
The acceptable temperature and humidity ranges in which the digital trunk
adapters can operate are as follows:
v Temperature: 10° to 40° C (50° to 104° F)
v Wet bulb temperature: 27° C (80.6° F)
v Relative humidity: 8 to 80 percent, non-condensing.
Memory and storage planning
The pSeries computer system that you choose must not only be able to handle
the number of telephone channels you need, it must also answer calls and
provide information quickly. To satisfy these requirements, you need adequate
random access memory (RAM) and disk storage space.
How much memory?
The minimum amount of RAM (specified in “Minimum requirements” on
page 141) allows WebSphere Voice Response to support up to 12 channels
(when using state tables), with no 3270 terminal emulation, voice messaging,
or speech recognition. If this is insufficient for your applications, you need to
take the following factors into consideration:
v The number of channels you need to process (see “Estimating telephony
traffic” on page 120).
v The number of active host sessions (remote host, custom server, or local
database) you need to run at the same time. For example, if you need to
support 30 channels and 30 sessions of 3270 terminal emulation, you need 2
GB of RAM.
v The type of applications you intend to run. For example, if you need to
support 120 channels with voice messaging you need a minimum of 2 GB
of RAM.
v Whether you use ISDN.
v Whether you use DTNA for Voice over IP/SIP.
v Whether you use Java, VoiceXML or state table applications.
Your IBM representative can give you more advice about the system
configuration you need to run WebSphere Voice Response.
How much disk space?
The amount of disk space you need depends on a number of factors,
including:
v The average size of a voice application
v The number of applications that WebSphere Voice Response handles
v The number of languages in which WebSphere Voice Response runs
applications
v How often you archive the WebSphere Voice Response logs
v How much RAM you have on the system
v How much other information you plan to store on the pSeries computer
The minimum pSeries computer configuration with which WebSphere Voice
Response operates is described in “Minimum requirements” on page 141. The
WebSphere Voice Response software, including AIX and AIXwindows,
occupies about 4 GB of internal disk storage.
Generally, the amount of paging space you need on the hard disk is double
the amount of system RAM.
WebSphere Voice Response requires space to store maintenance and
administration files, error files, and statistics. At midnight and each time the
system is restarted, the old files are saved and new files are created. One set
of files can use up to 1.5 MB of disk space.
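As a rough sizing aid, the figures above can be combined into a simple estimate. The function below is an illustrative sketch of our own: it assumes one file set at midnight plus one per restart, each at the stated 1.5 MB maximum.

```python
# Rough sizing for maintenance, administration, error, and statistics files.
# The text states that a new set of files is created at midnight and at
# every restart, and that one set can use up to 1.5 MB. The function name
# and the restarts_per_day parameter are illustrative assumptions.
SET_SIZE_MB = 1.5

def maintenance_space_mb(days_before_archive: int, restarts_per_day: int = 0) -> float:
    sets_per_day = 1 + restarts_per_day  # midnight set plus one per restart
    return days_before_archive * sets_per_day * SET_SIZE_MB

# Example: archiving monthly on a system that is restarted once a day.
print(maintenance_space_mb(30, restarts_per_day=1))  # 90.0 MB worst case
```

Archiving the logs more frequently reduces this requirement proportionally.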
You also need disk space to store voice application files (state tables, voice
segments, and messages). Allow extra disk space so that you can import and
export application files. Importing and exporting allow you to back up
individual applications and move them from one pSeries computer to another.
Additional national languages require additional disk space.
In addition to application files, the pSeries computer can also store the
business information that is available to callers through voice applications. If
you plan to store business information on the pSeries computer, ensure that
you order enough disk space.
Voice storage
Stored voice includes voice segments that are used in prompts and voice
messages. The capacity requirements are determined by whether voice
compression is used. Voice segments can be recorded compressed or
uncompressed. (Voice messages are always compressed.) Refer to Table 13,
Table 14, and Table 15 to determine the storage needed.
Table 13. Storage required for voice messages (MB of storage)

Number of       Average message length in seconds
messages        15      30      60      120     180     240
20000           492     984     1968    3936    5904    7872
15000           369     738     1476    2952    4428    5904
10000           246     492     984     1968    2952    3936
5000            123     246     492     984     1476    1968
1000            25      49      98      197     295     394
500             12      25      49      98      148     197
Table 14. Storage required for compressed voice segments (MB of storage)

Number of compressed    Average segment length in seconds
segments                5       10      15      20      25      30
2000                    16.0    32.0    48.0    64.0    80.0    96.0
1500                    12.0    24.0    36.0    48.0    60.0    72.0
1000                    8.0     16.0    24.0    32.0    40.0    48.0
900                     7.2     14.4    21.6    28.8    36.0    43.2
800                     6.4     12.8    19.2    25.6    32.0    38.4
700                     5.6     11.2    16.8    22.4    28.0    33.6
600                     4.8     9.6     14.4    19.2    24.0    28.8
500                     4.0     8.0     12.0    16.0    20.0    24.0
400                     3.2     6.4     9.6     12.8    16.0    19.2
300                     2.4     4.8     7.2     9.6     12.0    14.4
200                     1.6     3.2     4.8     6.4     8.0     9.6
100                     0.8     1.6     2.4     3.2     4.0     4.8
Table 15. Storage required for uncompressed voice segments (MB of storage)

Number of uncompressed  Average segment length in seconds
segments                5       10      15      20      25      30
2000                    80.0    160.0   240.0   320.0   400.0   480.0
1500                    60.0    120.0   180.0   240.0   300.0   360.0
1000                    40.0    80.0    120.0   160.0   200.0   240.0
900                     36.0    72.0    108.0   144.0   180.0   216.0
800                     32.0    64.0    96.0    128.0   160.0   192.0
700                     28.0    56.0    84.0    112.0   140.0   168.0
600                     24.0    48.0    72.0    96.0    120.0   144.0
500                     20.0    40.0    60.0    80.0    100.0   120.0
400                     16.0    32.0    48.0    64.0    80.0    96.0
300                     12.0    24.0    36.0    48.0    60.0    72.0
200                     8.0     16.0    24.0    32.0    40.0    48.0
100                     4.0     8.0     12.0    16.0    20.0    24.0
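The table values are consistent with fixed byte rates of about 8000 bytes per second for uncompressed voice (64 kbit/s telephony audio) and about 1600 bytes per second for compressed segments, taking one MB as 10^6 bytes. These rates are inferred from the tables rather than quoted by the product, so treat the following sketch as a planning approximation only:

```python
# Byte rates inferred from Tables 14 and 15 (1 MB taken as 10**6 bytes):
# uncompressed segments work out to 8000 bytes/s (64 kbit/s telephony audio),
# compressed segments to roughly 1600 bytes/s. These constants are our own
# inference from the table values, not figures quoted by the product.
UNCOMPRESSED_BPS = 8000
COMPRESSED_BPS = 1600

def segment_storage_mb(count: int, avg_seconds: float, compressed: bool) -> float:
    """Estimate storage in MB for a set of recorded voice segments."""
    rate = COMPRESSED_BPS if compressed else UNCOMPRESSED_BPS
    return count * avg_seconds * rate / 1e6

# Spot-checks against the tables:
print(segment_storage_mb(1000, 5, compressed=True))    # 8.0  (Table 14)
print(segment_storage_mb(1000, 5, compressed=False))   # 40.0 (Table 15)
```

The same formula applied with about 1640 bytes per second reproduces the voice message figures in Table 13, which uses slightly different rounding.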
Requirements for CCXML, VoiceXML and Java applications
This section compares the performance of voice applications written in Java
with the same applications written as state tables. These comparisons come
from performance tests that used a number of different types of computer,
each running the supplied Menu application as a sample.
The sample application is very simple. In particular it does not connect to any
external systems as a real voice application would. For more realistic Java
applications that use external systems such as JDBC or MQSeries, the
total application load depends very much on these systems and might be
more or less than indicated here.
The sections that follow compare performance in the most important areas.
However, there is one overriding piece of advice: be generous in the choice of
computer for a particular number of telephony channels.
Size of processor
For the simple case where the Java applications are run in the voice response
node, the Java version of the application uses more of the processing
capability than the highly-optimized state table version. However, the
increasing performance of pSeries computer means that the specific processor
should not be a limiting factor. The memory used in the system increases only
by the amount required for the Java process.
If you run the applications in a distributed model (that is, in a separate node
from the voice response node), you need more processing power. However, in
most cases, the added flexibility and redundancy you achieve from the
distributed model is more important than the slight performance advantage
you can get from running your applications in the voice response node.
CCXML
Running CCXML applications requires a relative increase of 25 to 30% in
processing power compared with that required to run VoiceXML 2.1
applications, depending on the complexity of the applications. For example, if
50% of the available processing power is used by a VoiceXML 2.1 application,
62.5% is used when a CCXML application is also run on the same machine.
Amount of memory
The Java process requires more memory than that normally needed for a
WebSphere Voice Response environment running only state table applications.
When running a Java application, for a system of 120 lines, the Java virtual
machine typically needs about 12 to 32 MB, although the exact amount
depends on the specific application. For a VoiceXML application using 30
lines, 256 MB of memory would typically be needed, but the following factors
can have a significant impact on this requirement, and on performance in
general:
v The amount of memory that is allocated to caching VoiceXML pages (cache
can be used to avoid having to continuously download pages from a web
server). The impact of caching is reduced when running VoiceXML 2.1
applications.
v The efficiency of VoiceXML documents. With VoiceXML documents, there is
some impact from tag statements being parsed each time, regardless of
whether they are executed. Even though parsing is optimized through the
use of cache, it is more efficient to put the tags that are not normally
required (for example, those used in error handling) into a separate
document that can be called only when needed.
v The number of prerecorded audio files used. A VoiceXML document checks
the cache for expiry of audio files, so the more files there are, the longer
this takes; for this reason, a document should load into cache only those
audio files that it uses.
v The use of ECMA scripts as an extension to VoiceXML. Although this can
have a significant impact, the effect can be reduced by using external
scripts; these are optimized by the browser, reducing both memory and
processor requirements.
v The size of grammar files. These should be kept as small as possible.
CCXML
Adding CCXML in multicall mode (using one browser for all calls) requires
only about 10 MB extra memory. In comparison, single call mode (using one
browser per call) requires more memory—typically an extra 10 MB for the
first channel and approximately 1 to 5 MB extra for each additional channel,
depending on the complexity of the application.
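As a rough planning aid, the memory figures above can be expressed as a simple linear model. The function below is our own simplification (the 1 to 5 MB per-channel range is represented by a configurable mid-range default):

```python
# Rough CCXML browser memory model built from the figures above.
# Multicall mode (one browser for all calls): a flat ~10 MB of extra memory.
# Single call mode (one browser per call): ~10 MB for the first channel plus
# 1 to 5 MB for each additional channel, depending on application complexity.
# The linear model and the function name are our own simplification.
def ccxml_extra_memory_mb(channels: int, single_call: bool,
                          per_channel_mb: float = 3.0) -> float:
    if channels == 0:
        return 0.0
    if not single_call:          # multicall: one browser serves every call
        return 10.0
    return 10.0 + (channels - 1) * per_channel_mb

print(ccxml_extra_memory_mb(30, single_call=False))  # 10.0
print(ccxml_extra_memory_mb(30, single_call=True))   # 97.0 at the 3 MB mid-range
```

For sizing, run the estimate with both 1 MB and 5 MB per channel to bound the single call mode requirement.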
Number of channels
The Java and VoiceXML environment supports a maximum of 480 channels.
Java applications, and particularly VoiceXML applications, typically use at
least twice as much processing power as their state table equivalents, but this
can be because the applications contain more extensive function, or because the
comparison has been made against small state table examples.
Java garbage collection
One of the benefits of Java is that it handles all memory management for you,
avoiding most of the pitfalls of memory leaks and faulty use of pointers.
However, this means that the Java VM has to clean up the memory from time
to time, using a process called garbage collection, so that it can continue
allocating new objects as required.
Within the Java VM, the garbage collector marks and then sweeps up objects
that are no longer referred to, and compacts the objects into the largest chunks
possible to coalesce free memory. Although there is a system overhead
involved in garbage collection, it does not affect playing or recording voice or
user input, because the base WebSphere Voice Response system handles all
the real-time operations.
Chapter 6. Scalability with WebSphere Voice Response
WebSphere Voice Response provides a highly scalable system, whichever
programming environment you use. You can increase scalability by adding
extra client nodes as installation traffic grows.
WebSphere Voice Response applications can be stored centrally in all
programming environments and shared across multiple systems:
v The basic system includes processing for 12 telephony channels. Additional
channels are available in increments of 30 (E1) or 24 (T1), with up to 480
lines on one system or thousands of lines on multiple systems.
v In CCXML and VoiceXML environments with WebSphere Application
Server, SSI is not supported. However, as all applications are hosted on the
Web server, this is the area that must be scalable. IBM WebSphere
Application Server is very scalable for Web applications, and can be used
for as many standalone WebSphere Voice Response systems running
VoiceXML as are required.
v Java environments support SSI for the voice segments, although not for the
runtime applications.
v Single system image (SSI) with redundant database server allows better
scalability with state table applications.
v Multiple SSI nodes can be deployed.
v Multiple IBM web servers can be used with load balancing using
WebSphere Network Dispatcher.
Scalable CCXML and VoiceXML configurations
All CCXML and VoiceXML applications are stored centrally on a Web or
application server. WebSphere Voice Response does not hold any of these
applications and therefore does not require any of its own shared application
features. The two key technology features that are frequently required are
speech recognition and text-to-speech. To allow these to be run from
VoiceXML, the Java and VoiceXML environment is also required. This
environment allows CCXML or VoiceXML applications to be run on the same
system as WebSphere Voice Response or on other systems that support a Java
Virtual Machine (JVM).
© Copyright IBM Corp. 1991, 2011
163
Figure 39. VoiceXML applications using WebSphere Voice Response to access speech recognition servers
Figure 39 summarizes how this system is put together. If speech recognition
and text-to-speech technologies are used, they are normally configured in a
pool of separate dedicated servers, such that multiple WebSphere Voice
Response systems can use them to optimize system flexibility.
Figure 40. Accessing speech technologies using VoiceXML browsers on the same machine as WebSphere Voice Response
Figure 40 shows the more detailed components. WebSphere Voice Response
acts as the "Connection Layer" to interface to the telephone system. The
VoiceXML browser layer runs on top of the Java and VoiceXML environment's
runtime layer, and multiple browsers can be run at the same time in the same
JVM. Multiple speech recognition and text-to-speech engines can be run on
the same or different systems, and are connected to WebSphere Voice
Response via a high-speed LAN such as 100 Mbps Ethernet. The Java runtime
layer that the VoiceXML browser uses can be run on systems other than the
one that is running WebSphere Voice Response, allowing a more flexible
system design if required. You can also run some VoiceXML Browsers on the
WebSphere Voice Response machine and some VoiceXML Browsers on other
systems to balance the application load—such a configuration is shown in
Figure 41 on page 166.
Figure 41. Accessing speech technologies using VoiceXML browsers on separate machines to WebSphere Voice Response
Scalable Java configurations
All Java applications are stored centrally on a separate Java-enabled system,
and are not stored in WebSphere Voice Response. The Java and VoiceXML
environment layer runs in a Java Virtual Machine and has native code on
different operating systems to integrate with the underlying WebSphere Voice
Response products. Voice response beans run on top of this environment
layer. Java also makes use of many of the features of the underlying
WebSphere Voice Response system, including speech technologies.
Figure 42. Java applications running on separate systems
Figure 42 shows applications that are managed outside the WebSphere Voice
Response environment and running on multiple systems (which are running
JVMs). If one type of application is running on the top application node and a
different type of application is running on the second application node, the
system can be configured such that multiple WebSphere Voice Response
systems can access the applications as if they were running on the same node.
Figure 43. Integrating WebSphere Voice Response with WebSphere application server
Figure 43 shows that because WebSphere application server runs applications
in a JVM, it is easy to define the application nodes on the application server
itself, and have these centrally accessed by multiple WebSphere Voice
Response systems. This capability is one of the key benefits of having Java
applications available to the WebSphere Voice Response system, but managed
elsewhere. It also allows the applications to access any other software
connectors or systems that are running on the application server. You can also
run some of your Java applications on the WebSphere Voice Response
machine and some on other systems to balance the application load.
What is a single system image (SSI)?
You can use a local area network to connect a cluster of WebSphere Voice
Response systems. All the systems can then share all the application data in
the cluster (such as state tables and custom servers) and all the voice data
(such as voice segments and voice messages). When the systems are
connected in this way, you can install a voice application on one system in the
cluster, and make it available to all the systems. This means that as your
business grows, you can add more systems with little additional work. The
cluster of WebSphere Voice Response systems is known as a single system
image (SSI). This section tells you what you should consider when you plan to
create a single system image.
Each system in the cluster is known as a node. Each node is configured either
as a client or as a server:
Client node
A client node handles the interactions with callers. It runs WebSphere
Voice Response (configured as a client), and it must have a connection
to your telephony environment. A client node contains no application
data; it gets this from the server, to which it is connected by a local
area network.
Database server node
The database server node contains the application object database.
This is a DB2 database that contains all the state tables and prompts
that all the WebSphere Voice Response systems in the single system
image can use. It also contains information about the custom servers
that are installed. The server node has WebSphere Voice Response
installed, configured as a server. If you want the server node to
handle interactions with some callers, you can add a connection to
your telephony environment.
Voice server node
A voice server node contains the voice data for all the voice
applications that run on the single system image. It also contains the
program files for the custom servers that are installed on the single
system image. The node stores its information in an AIX file system.
This node need not have WebSphere Voice Response installed, unless
you want it to handle interactions with some callers; in this event, the
node must also have a connection to your telephony environment.
The database server and the voice server are usually in the same pSeries
computer, but you can install them onto two separate systems if you are
creating a large single system image and you want to distribute the processing
load. Note that under the terms of the license for DB2 when supplied with
WebSphere Voice Response, DB2 must be installed on the same physical
machine as the WebSphere Voice Response software, and must not be used
with any other applications.
The nodes of a single system image must be connected using a local area
network. The type of network you use depends on the size of the voice
solution you are implementing. For example, a small cluster running a simple
voice application (such as an information-announcement application that
plays recorded weather information) might require only a token ring network.
However a larger cluster, used for many voice applications or for a voice
messaging service, may require a network that can provide a higher capacity
and performance, such as an asynchronous transfer mode (ATM) network.
In comparison, a stand-alone WebSphere Voice Response system (that is, one
not configured as an SSI node) must have WebSphere Voice Response, the
telephony connection, the application data, and the voice data all installed on
the same pSeries computer. If you want to create an additional system, you
must install all these items on a new stand-alone system. Figure 44 shows a
stand-alone WebSphere Voice Response system, which is not connected to any
other WebSphere Voice Response systems. Both the application and voice data
it uses are also stored on the same pSeries computer.
Figure 44. A stand-alone WebSphere Voice Response system (a single pSeries
computer, connected to the switch and to the data communications network,
holding both the application database and the voice data)
Figure 45 on page 171 shows a small single system image. Each WebSphere
Voice Response client has four trunks of telephony, and the server has two
trunks installed. However, you do not have to install telephony components
onto the server. The single system image shown in the figure is suitable for
running an IVR application.
170
General Information and Planning
Figure 45. A small single system image (two WebSphere Voice Response clients
and a server connected by a LAN; the server holds the application database
and voice data)
Figure 46 on page 172 shows a larger single system image. This configuration
has more clients installed and there are no telephony components on the
server. This configuration is suitable for a large voice messaging system, and
it is likely that the server will perform no functions other than to serve the
WebSphere Voice Response single system image.
Figure 46. A large single system image (several WebSphere Voice Response
clients with telephony trunks, connected by a LAN to a server that holds the
application database, the voice data, and a connection to a host database)
Planning a single system image
When you plan your single system image, think about the following: the AIX
login account, the telephony network, configuring the nodes, 3270 sessions,
and AIX locale definitions.
AIX login account
The WebSphere Voice Response AIX login user ID (usually named
dtuser) must be defined identically on all nodes in a single system
image. The user ID must have the same name on every system, and
the same numeric identifiers.
When you install WebSphere Voice Response on a stand-alone system,
you can let WebSphere Voice Response create the dtuser account
automatically. However, when you are setting up a single system
image, before you install WebSphere Voice Response you must create
the account yourself on each system that will make up the single
system image. You must ensure that the account is created exactly the
same on every system. For information on how to create the account
yourself, refer to the WebSphere Voice Response for AIX: Installation
book.
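The "identical account" requirement can be checked mechanically before installation. The sketch below is an illustration only (not part of the product): the node names and /etc/passwd entries are hypothetical, and in practice you would collect each node's entry yourself (for example with grep '^dtuser:' /etc/passwd on each system).

```python
# Sketch: verify that the WebSphere Voice Response login account (usually
# dtuser) has the same name, UID, and GID on every node of a single system
# image. The passwd lines below are made up for illustration.

def account_fields(passwd_line):
    """Extract (name, uid, gid) from a /etc/passwd entry."""
    fields = passwd_line.strip().split(":")
    return fields[0], int(fields[2]), int(fields[3])

def check_accounts(entries_by_node):
    """Return the nodes whose account differs from the first node's."""
    nodes = list(entries_by_node)
    reference = account_fields(entries_by_node[nodes[0]])
    return [n for n in nodes[1:]
            if account_fields(entries_by_node[n]) != reference]

if __name__ == "__main__":
    entries = {
        "node1": "dtuser:!:300:300:WVR user:/home/dtuser:/usr/bin/ksh",
        "node2": "dtuser:!:300:300:WVR user:/home/dtuser:/usr/bin/ksh",
        "node3": "dtuser:!:301:300:WVR user:/home/dtuser:/usr/bin/ksh",
    }
    print(check_accounts(entries))  # node3 has a different UID
```

Any node reported by such a check would need its account corrected (or recreated) before WebSphere Voice Response is installed on it.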
Telephony network
All the client nodes in a single system image must connect to an E1
network, or they must all connect to a T1 network. This is because
uncompressed voice data recorded on an E1 client does not play
correctly on a T1 client (similarly, voice data recorded on a T1 client
does not play correctly on an E1 client).
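This constraint lends itself to a simple configuration check. The sketch below is illustrative only; the node names and the way trunk types are recorded are assumptions, not a product interface:

```python
# Sketch: reject a single system image plan that mixes E1 and T1 client
# nodes, since voice data recorded on one trunk type does not play
# correctly on the other. Node names and the mapping are hypothetical.

def validate_trunk_types(trunk_type_by_client):
    """Raise ValueError unless all client nodes share one trunk type."""
    types = set(trunk_type_by_client.values())
    if len(types) > 1:
        raise ValueError("mixed trunk types in SSI: %s" % sorted(types))

validate_trunk_types({"client1": "E1", "client2": "E1"})  # accepted
try:
    validate_trunk_types({"client1": "E1", "client2": "T1"})
except ValueError as err:
    print(err)  # mixed trunk types in SSI: ['E1', 'T1']
```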
Configuring the nodes
When you have installed WebSphere Voice Response on each system
that makes up your single system image, you are ready to configure
each node. None of the configuration data is shared between the
nodes: you must configure each node separately.
For information on how to configure your WebSphere Voice Response
systems to create a single system image, refer to the WebSphere Voice
Response for AIX: Configuring the System book.
3270 sessions
If you plan to use 3270 sessions in your single system image, you
must configure the 3270 sessions on all client nodes that have 3270
installed.
AIX locale definitions
All nodes in a single system image must be configured with the same
AIX locale definitions.
Migrating from a stand-alone system to a single system image
If you want to migrate an existing WebSphere Voice Response system to a
single system image, note that you can restore the data from only one
WebSphere Voice Response system to a single system image. This is because
most of the database is shared, so it would get overwritten if you attempted
to restore on more than one node of the single system image.
If you have multiple stand-alone systems that you want to configure as a
single system image, you must plan your migration very carefully. First you
must export all the data to one stand-alone system, then use the saveDT
command on that system to store a copy of the data. Then use the restoreDT
command to restore the data to the system you have configured as the SSI
server node.
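The ordering of that migration matters: consolidate first, back up once, restore once. The sketch below only illustrates the sequence; the host names, and the idea of an "export" step between systems, are placeholders (saveDT and restoreDT are the commands named above, but their options are not shown):

```python
# Sketch: the ordered steps for migrating stand-alone WebSphere Voice
# Response systems into a single system image. Host names are made up;
# this builds a plan rather than running any commands.

def migration_plan(standalone_hosts, consolidation_host, ssi_server):
    steps = []
    # 1. Export data from every other stand-alone system to one system.
    for host in standalone_hosts:
        if host != consolidation_host:
            steps.append("export data from %s to %s"
                         % (host, consolidation_host))
    # 2. Store a copy of the consolidated data with saveDT.
    steps.append("run saveDT on %s" % consolidation_host)
    # 3. Restore onto the SSI server node only, and only once: most of
    #    the database is shared, so restoring on a second node would
    #    overwrite it.
    steps.append("run restoreDT on %s" % ssi_server)
    return steps

for step in migration_plan(["wvr1", "wvr2", "wvr3"], "wvr1", "ssi-server"):
    print(step)
```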
For more information on migrating to a single system image, refer to the
WebSphere Voice Response for AIX: Installation book.
Custom servers in a single system image
This section lists the custom servers that are supplied with WebSphere Voice
Response, and shows to what extent these custom servers observe the
guidelines outlined in the WebSphere Voice Response for AIX: Custom Servers
book.
For guidelines on how you should design custom servers to ensure that they
operate correctly when they are run in a single system image, refer to the
WebSphere Voice Response for AIX: Custom Servers book.
Table 16 lists the custom servers that are supplied with WebSphere Voice
Response, and shows to what extent these custom servers observe the
guidelines. In this table:
• Compliant means that the custom server does not contravene any of the
guidelines.
• Tolerant means that the custom server can run in a single system image,
but the notes describe some restrictions.
Table 16. WebSphere Voice Response custom servers in a single system image

ISDN Features: Tolerant
Telecommunications Device for the Deaf (TDD) Feature: Compliant
CallPath Signaling Process: Tolerant. All instances in the single system
   image must connect to the same CallPath server or CallPath Server and
   switch.
Cisco Feature: Compliant
ADSI Feature: Tolerant
Signaling System Number 7 Feature: Compliant
BVI Sample: Tolerant. The BVI custom server must run on only one node of
   the single system image. This is because a configuration file is stored
   in the custom server directory, and work files are created in that
   directory.
Juke Box Sample: Compliant. This custom server could reduce network
   performance if it is configured to play voice data over NFS, rather
   than locally.
TDM Sample: Compliant
RecordAudioName Sample: Compliant
Voice Messaging Sample: Compliant
Brooktrout Fax: Compliant
Custom Server Sample: Compliant
Chapter 7. Data communications network
This chapter discusses the requirements for connecting WebSphere Voice
Response to an SNA network, and also gives examples of how to attach the
pSeries computer to a host computer so that information can be accessed
using 3270 terminal emulation. Connection is supported over both SNA and
TCP/IP–based networks.
(Figure: a caller on the telephone network reaches the caller interaction
logic in WebSphere Voice Response; over the data communications network, the
application accesses database information held on a business object server
or Web server.)
Network requirements
This section describes the network hardware required for WebSphere Voice
Response.
To access information on a remote computer, the pSeries computer must be
equipped with one of the following, depending on the type of network:
• IBM Token-Ring High-Performance Network Adapter
• Ethernet High-Performance LAN Adapter
• IBM 4-Port Multiprotocol Communication Controller
• X.25 communications adapter and associated cables
To complete the physical link when you are using the multiprotocol controller,
you need a 4-port multiprotocol interface cable and one of the following:
• A V.35 attachment cable
• An EIA RS232C attachment cable
For a list of the software you need, see “WebSphere Voice Response software”
on page 145.
© Copyright IBM Corp. 1991, 2011
177
Network planning for remote information access
WebSphere Voice Response can access information on a remote S/370, S/390,
or AS/400 by attaching it to a data communications network. The best way to
attach WebSphere Voice Response depends on a number of factors, such as
how far WebSphere Voice Response is from the remote computer and what
information exchange requirements the network must support.
Work with your system programmer, network or system administrator, IBM
representative, or all three to decide on the best network configuration. The
pSeries computer can be part of many different network configurations.
Attaching the pSeries computer to a remote host system
This section shows four ways in which the pSeries computer can be attached
to a remote host system, which might be a System/370, System/390, or
AS/400 system. These attachments enable WebSphere Voice Response to use
SNA LU2 protocols to emulate a 3270 terminal and get access to the
information on the host computer.
The four examples show how you can attach the pSeries computer:
• Using a token-ring LAN and an IBM 3174 network gateway (Figure 47 on
page 179)
• Using a token-ring LAN and a network controller (Figure 48 on page 180)
• Using an SDLC attachment, modems, and a network controller (Figure 49
on page 181)
• Using a token-ring LAN, an IBM 3174 network gateway that is attached via
SDLC modems, and a network controller (Figure 50 on page 182).
These are only examples. Depending on the distances involved, some of the
items shown in Figure 47 on page 179 through Figure 50 on page 182 might be
unnecessary.
Example A
Attach the pSeries computer to a token-ring local area network (LAN) that is
attached to the host through an IBM 3174 (used as a network gateway). This
attachment requires the following items:
• Token-ring adapter in the pSeries computer
• Cable to attach the pSeries computer to the LAN
• Token-ring adapter in the 3174
Figure 47. Data communications network attachment (example A)
Example B
Attach the pSeries computer to a token-ring LAN that is attached to the host
via a network controller. This attachment requires the following items:
• Token-ring adapter in the pSeries computer
• Cable to attach the pSeries computer to the LAN
• Token-ring adapter in the network controller
Figure 48. Data communications network attachment (example B)
Example C
Attach the pSeries computer to the host using a single-drop SDLC attachment
and modems. This attachment requires the following items:
• Multiprotocol adapter in the pSeries computer
• Modems that operate at a rate compatible with the line speed
Figure 49. Data communications network attachment (example C)
Example D
Attach the pSeries computer to a token-ring LAN that is connected to the host
through a network controller. The network controller is attached to an SDLC
connection that is accessed via an IBM 3174 (used as a network gateway). This
attachment requires the following items:
• Token-ring adapter in the pSeries computer
• Cable to attach the pSeries computer to the LAN
• Token-ring adapter in the 3174
Figure 50. Data communications network attachment (example D)
Chapter 8. Summary
This section summarizes background and planning information about
WebSphere Voice Response and includes a planning checklist.
This book has given you an introduction to voice processing, and an overview
of how you can realize the benefits of this technology with WebSphere Voice
Response. We have also looked at what WebSphere Voice Response
applications do and how they work.
Part 2, “Planning to install WebSphere Voice Response,” on page 81 discussed
the requirements for installing WebSphere Voice Response, and the planning
you need to do. These requirements and planning activities are summarized
in “Summary of planning tasks” on page 194.
Chapter 3, “Using WebSphere Voice Response,” on page 65 introduced the
window-based user interface and discussed other tools and tasks.
Let's talk
WebSphere Voice Response:
• Comes with a comprehensive hardware warranty and software service.
• Grows with your business: start with only 12 lines and grow to 480 lines on
one system, or thousands of lines on multiple systems.
• Is based on advanced pSeries computer and AIX operating system
technology.
• Can provide total voice and call processing solutions when used with other
CTI products.
• Can be integrated with multiple speech technologies.
• Supports multiple application development environments including
CCXML, VoiceXML and Java.
• Comes from a worldwide company that can provide comprehensive
support and service, with proven expertise in making database information
available to those who need it.
Publications
When you buy WebSphere Voice Response, you receive a CD-ROM of the
documentation that tells you how to set up and use the system, how to
develop your applications, and how to solve any problems you might
encounter. All the available book titles are listed in “List of WebSphere Voice
Response and associated documentation” on page 241, and you can order
printed copies of all of them from IBM.
WebSphere Voice Response support
WebSphere Voice Response is more than just the hardware and software
components—when you buy WebSphere Voice Response, you also get system
support. Part of this support is the online help and the documentation that
comes with the system. In addition, WebSphere Voice Response includes a
network of support resources, available up to 24 hours a day, 7 days a week
(depending on your service agreement with IBM).
Your IBM software support center can help if you encounter an
undocumented error message, for example, or if you have followed the
instructions in the documentation and they have produced an unexpected
result.
IBM support personnel have access to additional product documentation and
information based on the experiences other companies have had with
WebSphere Voice Response. Beyond the support center are IBM technical
experts, who are ready to provide you with assistance.
Planning checklist
Considering the questions and checking the boxes or filling in the required
information in this checklist will help you plan for WebSphere Voice
Response. References are given where more information is available elsewhere
in this book.
Voice applications
The number and type of applications you offer (and the number of calls
these applications handle) will affect the resources you need.
   How many applications will you offer to callers? ____
   How many different national languages will the applications use? ____
   How many channels are needed to handle the expected call volume?
      (See "Estimating telephony traffic" on page 120.)
   How much memory and storage will each application need?
      (See "Memory and storage planning" on page 156.)
Answering incoming calls
If you have more than one application to answer incoming calls, you
need to have some way of selecting the right application. You might use
different telephone numbers, in which event the dialed number needs to
be passed from the switch to WebSphere Voice Response.
   [ ] Will you have applications that answer incoming calls?
       (See "Inbound calls" on page 17.)
   [ ] If you have VoiceXML applications, will call routing be done using
       CCXML? (See "Inbound calls" on page 17.)
   [ ] Do you need dialed number identification (DNIS or DID)?
       (See "Choosing the application to answer incoming calls" on
       page 119.)
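Selecting the answering application by dialed number amounts to a dispatch table. The sketch below is purely illustrative: the numbers and application names are made up, and in the product this routing is done through configuration (or through CCXML for VoiceXML applications), not through code like this.

```python
# Sketch: choose the application for an incoming call from the dialed
# number (DNIS) passed by the switch. Numbers and names are hypothetical.

APPLICATION_BY_DNIS = {
    "5550100": "account-balance",
    "5550101": "voice-messaging",
}

def answering_application(dnis, default="operator-transfer"):
    """Return the application for the dialed number, or a fallback."""
    return APPLICATION_BY_DNIS.get(dnis, default)

print(answering_application("5550100"))  # account-balance
print(answering_application("5559999"))  # operator-transfer
```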
Transferring callers to an agent
With some applications, you should provide callers with the opportunity
to speak to a human agent if they want to.
   [ ] Do you need call transfer capability?
   [ ] Do you need to coordinate the transfer of voice and data? (A CTI
       product such as Genesys CallPath can help you transfer voice and
       data at the same time.)
Making outgoing calls
For some scenarios, your application will need to make outgoing calls.
   [ ] Will you have applications that make outgoing calls?
       (See "Outbound calls" on page 18.)
Caller interaction
How will callers interact with the application?
   [ ] DTMF tones? (See "State table applications" on page 43.)
   [ ] Speech recognition? (See "Speech Recognition" on page 49.)
   [ ] TDD input? (See "How does WebSphere Voice Response interact with
       a TDD?" on page 52.)
How will information be delivered to callers?
   [ ] Pre-recorded speech? (See "Creating the voice output for
       applications" on page 59.)
   [ ] Synthesized speech? (See "Text-to-speech" on page 51.)
   [ ] One-call fax? (See "How does WebSphere Voice Response send fax
       output?" on page 52.)
   [ ] Two-call fax? (See "How does WebSphere Voice Response send fax
       output?" on page 52.)
   [ ] TDD output? (See "How does WebSphere Voice Response interact
       with a TDD?" on page 52.)
If speech recognition is to be used, what type of input must it be
capable of handling?
   [ ] Discrete words? (See "Speech Recognition" on page 49.)
   [ ] Connected digits?
   [ ] Continuous speech?
For speech recognition, how large a vocabulary is required?
   [ ] Under 5000 words? (See "Speech Recognition" on page 49.)
   [ ] 5000 words or over?
For speech recognition, to enable callers to interrupt prompts, you need
to implement barge-in.
   [ ] Is barge-in (cut-through) required?
Application programming environment
Based on functional capability, and whether WebSphere Voice Response
is to be a standalone IVR, part of a CTI call center, or used as a voice
portal, which application programming environment is most suitable?
   [ ] Portal using WVAA to generate VoiceXML?
   [ ] VoiceXML?
   [ ] CCXML + VoiceXML?
   [ ] Java?
   [ ] State table?
Databases
Where will the files containing business information be stored?
   [ ] Locally, on storage media attached to the pSeries computer?
   [ ] Remotely, on storage media attached to other computers?
       (See "Information access" on page 20 and Chapter 7, "Data
       communications network," on page 177.)
You can use terminal emulation to provide telephone access to the same
data. You need one terminal emulation session for each database (or file)
accessed during a call.
   [ ] Are databases currently accessed by screen-based applications using
       the 3270 data stream? (See "Integration and interoperability of
       state tables" on page 47.)
   How many different 3270 terminal emulation sessions do you need per
   call? ____
You might want to be able to locate a caller's database records quickly
without asking their name. You can do this if they always call from the
same number, which needs to be passed from the switch to WebSphere
Voice Response.
   [ ] Do you need calling number identification (ANI or CLID)?
Voice messaging
Voice messaging can be used stand-alone, as voice mail, or integrated
with other voice applications. (See "How state table voice applications
handle voice messages" on page 46.)
   [ ] Do you need message waiting indicator (MWI) control?
   [ ] Will callers who already use fax or e-mail want it integrated with
       voice mail?
Telephony connectivity
This section relates to the basic telephony interface between your switch
(telephone exchange) and WebSphere Voice Response:
Switch type
What type of switch is WebSphere Voice Response going to be
connected to?
   Manufacturer: ____
   Model: ____
   Software level: ____
What type of signals does the switch receive?
   [ ] Analog? (See "Analog interface" on page 191.)
   [ ] T1 digital? (Canada, Japan, China (Hong Kong S.A.R.), U.S.A.)
   [ ] E1 digital? (Europe, Latin America.)
   [ ] Voice over Internet (VoIP)?
If the signals are digital, what type of signaling protocol is used?
   [ ] Channel associated signaling? (See "Channel associated signaling"
       on page 189.)
   [ ] Common channel signaling? (See "Common channel signaling" on
       page 191.)
   [ ] Both? (See "Coexistence of signaling protocols" on page 92.)
Channel associated signaling
What channel associated signaling protocols are provided by the
switch? ____
What capabilities are available with these channel associated signaling
protocols?
   [ ] Call transfer?
   [ ] Answer detect signal?
   [ ] Far-end hangup signal?
What call information does the switch provide?
   [ ] Calling number?
   [ ] Called number or DNIS?
What types of address signaling are supported by the switch?
   [ ] Decadic dialing (dial pulse)?
   [ ] DTMF?
   [ ] MFR1?
How does the switch control address signaling?
   [ ] Ground start or wink start?
   [ ] Delay start?
   [ ] Immediate start?
   [ ] Dial tone?
How does the switch send the called and calling numbers?
   [ ] In-band in the information channel?
   [ ] Out-of-band, in the signaling channel?
   [ ] Not at all?
Exchange data link
If your switch has a host computer interface, you can use an exchange
data link between WebSphere Voice Response and the switch. What
capabilities does the interface provide?
   [ ] Calling number?
   [ ] Called number?
   [ ] Reason for forwarding?
   [ ] Call transfer?
   [ ] Far-end disconnect signal?
   [ ] Message waiting indicator control?
Which protocol is used?
   [ ] Application Connectivity Link (ACL)? (See "Exchange data link" on
       page 119.)
   [ ] Simplified Message Desk Interface (SMDI)?
   [ ] Simplified Message Service Interface (SMSI)?
   [ ] Voice Message Service (VMS)?
   [ ] Other? (See "How voice applications access other resources" on
       page 49.)
What type of physical link is used?
   [ ] X.25?
   [ ] RS232?
   [ ] Other?
Common channel signaling
Which common channel signaling protocol?
   [ ] Signaling System Number 7 (SS7)? (See "Signaling System 7" on
       page 106.)
   [ ] Euro-ISDN? (See "Integrated Services Digital Network" on page 96.)
   [ ] 5ESS 5E8/9/12 ISDN?
   [ ] DMS100 BCS34/36 ISDN?
   [ ] National 2 ISDN?
   [ ] TR41459?
   [ ] INS 1500 ISDN?
   [ ] E1 QSIG?
Do you require call transfer facilities with ISDN?
   [ ] DMS250 IEC05?
   [ ] DMS100 NA007/008?
   [ ] E1 QSIG?
Analog interface
If the incoming signals are analog, you need a channel bank to convert
them to digital. What protocols are to be used?
   To connect the switch to the channel bank? ____
   To connect WebSphere Voice Response to the channel bank? ____
CTI integration
You need to consider if and how to integrate the voice applications into
a wider network.
   [ ] Is integration with CallPath required?
   [ ] Is integration with Cisco Intelligent Contact Management software
       required? (See "Integrating WebSphere Voice Response with Cisco
       ICM software" on page 117.)
   [ ] Is integration with Genesys CTI required? (See "Integrating
       WebSphere Voice Response with Genesys Framework" on page 115.)
Availability and Redundancy
Are any of the redundancy features to be used in your configuration?
XML
   [ ] Central Web server with CCXML/VoiceXML applications?
   [ ] CCXML/VoiceXML applications running in multiple JVMs?
   [ ] Dual CCXML nodes for call handling?
Java
   [ ] Central Java application server?
   [ ] Dual LAN to Java application nodes?
State table
   [ ] Multisystem single system image?
   [ ] Dual SSI Database Server with HACMP switchover?
   [ ] Dual LAN on SSI?
SS7
   [ ] Dedicated SS7 Server with WebSphere Voice Response SS7 Client?
   [ ] Redundant dedicated server?
Switch queuing
You need to think about how to integrate the voice applications into
your current switch configuration. For more information, see "Planning
the switch configuration" on page 134.
   [ ] Does the switch offer queuing?
   [ ] Is the switch configured with multi-tiered queues?
   [ ] Will the existing queue configuration work when WebSphere Voice
       Response is included as an agent?
Power supply
What kind of power supply do you need?
   [ ] AC?
   [ ] DC (-48 V)?
Data communications
Remote access
How do you intend to connect the pSeries computer to other computers,
or to Xstations, in the network?
   [ ] TCP/IP over a token-ring LAN?
   [ ] TCP/IP over an Ethernet LAN?
   [ ] SNA over a token-ring LAN?
   [ ] SNA over an SDLC link?
   [ ] 3270 terminal emulation?
System management
How do you intend to provide system management?
   [ ] Locally, using the pSeries computer console?
   [ ] Locally or remotely, using an ASCII terminal?
   [ ] Remotely, using SNMP?
   [ ] Remotely, using NetView: either NetView for AIX or NetView/390?
Summary of planning tasks
Table 17. Summary of planning tasks
   Estimating telephony traffic: see "Estimating telephony traffic" on
      page 120
   Calculating telephony traffic: see "Calculating telephony traffic" on
      page 122
   Determining blockage rate: see "Determining a blockage rate" on
      page 122
   Estimating number of channels required: see "Estimating the number of
      channels needed" on page 122
   Planning for switch configuration: see "Planning the switch
      configuration" on page 134
   Planning for software installation: see "WebSphere Voice Response
      software" on page 145
   Planning for hardware installation: see "Hardware requirements" on
      page 149
   Location planning (physical, environmental, and electrical): see
      "Location planning" on page 155
   Estimating memory and storage requirements: see "Memory and storage
      planning" on page 156
   Planning to use speech recognition: see "Speech Recognition" on
      page 49
   Planning the network for access to remote information: see "Network
      planning for remote information access" on page 178
Summary of requirements
Table 18. Summary of software requirements
   IBM WebSphere Voice Response for AIX Version 6.1
      Required for: All
   IBM AIX Version 6.1
      Required for: All
   IBM VisualAge C++ Professional for AIX Version 6 (compiler)
      Required for: Developing custom servers using the C++ language
      See "How voice applications access other resources" on page 49
   IBM Communications Server for AIX, Version 6.0.1
      Required for:
      • Access to remote information
      • Remote administration or development
      • NetView operation
      See "Network planning for remote information access" on page 178
   IBM License Use Runtime for AIX, Version 6.1
      Required for: All. Used for license management
      See "Licensing WebSphere Voice Response software" on page 146
   The following Java filesets:
      • Java6.sdk 6.0.0
      • Java6.license 6.0.0
      Required for: Java and VoiceXML applications
      See Table 9 on page 144
   X11.vfb 6.1.0.0
      Required for: Java and VoiceXML applications
Table 19. Summary of hardware requirements
   pSeries computer
      Required for: All
      See "Hardware requirements" on page 149
   Display
      Required for: All
      See "Displays" on page 155
   Graphics display adapter
      Required for: All
      See "Displays" on page 155
   Keyboard
      Required for: All
      See "Keyboard and mouse" on page 155
   Mouse
      Required for: Optional
      See "Keyboard and mouse" on page 155
   Tape drive
      Required for: Optional
      See "Machine-readable media" on page 155
   CD drive
      Required for: All
      See "Machine-readable media" on page 155
   Diskette drive
      Required for: Optional
      See "Machine-readable media" on page 155
   Printer
      Required for: Optional
      See "Printer" on page 155
   Digital Trunk Telephony Adapter (DTTA)
      Required for: pSeries computers
      See "Configuring DTTA cards" on page 153
   Exchange data link
      Required for: Out-of-band messaging
      See "Coexistence of signaling protocols" on page 92
   Channel service unit
      Required for: CO switch in the US; private switch or channel bank
      See "Channel bank" on page 93
   Token-ring adapter
      Required for: Token-ring LAN
      See "Attaching the pSeries computer to a remote host system" on
      page 178
   Ethernet adapter
      Required for: Ethernet LAN
      See "Attaching the pSeries computer to a remote host system" on
      page 178
Part 3. Appendixes
Appendix. WebSphere Voice Response language support
This section identifies which languages are supported by WebSphere Voice
Response in each of the programming environments.
Multiple language options can be installed and supported on one WebSphere
Voice Response for AIX system (for details see the WebSphere Voice Response for
AIX: Configuring the System). The table below shows which languages are
supported in each of the programming environments.
Table 20. Language support in WebSphere Voice Response for AIX

Language and country/region     State table   Java   VoiceXML
Castilian – Spain               Yes           -      Yes (1)
Catalan – Spain                 Yes           -      Yes (1)
Cantonese – China (Hong Kong)   -             -      Yes (1)
Dutch – Belgium                 Yes           -      -
English – Australia             Yes           Yes    Yes (2)
English – UK                    Yes           Yes    Yes
English – US                    Yes           Yes    Yes
French – Canada                 Yes           Yes    -
French – France                 Yes           Yes    Yes (1)
German – Germany                Yes           Yes    Yes
Italian – Italy                 Yes           Yes    Yes (1)
Japanese – Japan                Yes           -      -
Korean – Korea                  -             -      Yes (1)
Portuguese – Brazil             Yes           Yes    -
Simplified Chinese – China      -             Yes    Yes (1)
Spanish – Latin America         Yes           -      Yes (2)
Spanish – Mexico                Yes           -      Yes (1)

Note:
1. Supported only with WebSphere Voice Server Version 4.2
2. Supported only with WebSphere Voice Server Version 5.1
For state tables and Java applications, voice segments and the corresponding
logic modules are included on the product CD-ROM. For VoiceXML,
prerecorded segments are not provided, as separate text-to-speech technology
is normally used instead in this environment.
For detailed information about using other national languages with Java
applications, refer to the WebSphere Voice Response for AIX: Developing Java
applications book. Details of using other national languages with VoiceXML can
be found in the WebSphere Voice Response: VoiceXML Programmer's Guide book.
Notices
This information was developed for products and services offered in the
U.S.A.
IBM may not offer the products, services, or features discussed in this
document in other countries. Consult your local IBM representative for
information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or
imply that only that IBM product, program, or service may be used. Any
functionally equivalent product, program, or service that does not infringe
any IBM intellectual property right may be used instead. However, it is the
user’s responsibility to evaluate and verify the operation of any non-IBM
product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give
you any license to these patents. You can send license inquiries, in writing, to:
The IBM Director of Licensing, IBM Corporation,
North Castle Drive,
Armonk,
NY 10504-1785,
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the
IBM Intellectual Property Department in your country or send inquiries, in
writing, to:
IBM World Trade Asia Corporation Licensing,
2-31 Roppongi 3-chome Minato-ku,
Tokyo 106,
Japan.
The following paragraph does not apply to the United Kingdom or any
other country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY
OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow
disclaimer of express or implied warranties in certain transactions, therefore,
this statement may not apply to you.
This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will
be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s)
described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for
this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the
purpose of enabling: (i) the exchange of information between independently
created programs and other programs (including this one) and (ii) the mutual
use of the information which has been exchanged, should contact: IBM UK
Limited, Department 88013, 4NW, 76/78 Upper Ground, London, SE1 9PZ,
England. Such information may be available, subject to appropriate terms and
conditions, including in some cases, payment of a fee.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer
Agreement, IBM International Programming License Agreement, or any
equivalent agreement between us.
Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available
sources. IBM has not tested those products and cannot confirm the accuracy
of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be
addressed to the suppliers of those products.
COPYRIGHT LICENSE: This information contains sample application
programs in source language, which illustrate programming techniques on
various operating platforms. You may copy, modify, and distribute these
sample programs in any form without payment to IBM, for the purposes of
developing, using, marketing or distributing application programs conforming
to the application programming interface for the operating platform for which
the sample programs are written. These examples have not been thoroughly
tested under all conditions. IBM, therefore, cannot guarantee or imply
reliability, serviceability, or function of these programs.
For country-specific notes on the use of WebSphere Voice Response, refer to
the README file located in the directory /usr/lpp/dirTalk/homologation.
The file name is in the format README_homologation.xxxx, where xxxx is
the country/region identifier.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corporation in the United States, other
countries, or both. If these and other IBM trademarked terms are marked on
their first occurrence in this information with a trademark symbol (® or ™),
these symbols indicate U.S. registered or common law trademarks owned by
IBM at the time this information was published. Such trademarks may also be
registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at Copyright and trademark information
(http://www.ibm.com/legal/copytrade.shtml).
Adobe is a registered trademark of Adobe Systems Incorporated in the
United States, and/or other countries.
Java and all Java-based trademarks and logos are trademarks of Oracle
and/or its affiliates.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
Other company, product or service names might be trademarks or service
marks of others.
Glossary
The following terms and abbreviations are defined as they are used in the context of
WebSphere Voice Response. If you do not find the term or abbreviation you are looking for,
see IBM Dictionary of Computing, McGraw-Hill, 1994 or the AIX: Topic Index and Glossary,
SC23–2513.
Special Characters
µ-law The companding algorithm that is
used primarily in North America
and Japan when converting from
analog to digital speech data.
(Compand is a contraction of
compress and expand.) Contrast
with A-law.
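The µ-law and A-law entries describe the two ITU-T G.711 companding laws. As an illustrative sketch only (the function names, and the use of the continuous-form curves rather than the segmented tables that real codecs use, are assumptions, not part of WebSphere Voice Response):

```python
import math

MU = 255.0  # µ-law parameter (North America and Japan)
A = 87.6    # A-law parameter (Europe and elsewhere)

def mu_law_compress(x):
    """Continuous-form µ-law companding of a sample x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def a_law_compress(x):
    """Continuous-form A-law companding of a sample x in [-1, 1]."""
    ax = abs(x)
    if ax < 1.0 / A:
        y = (A * ax) / (1.0 + math.log(A))
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))
    return math.copysign(y, x)

# Both laws boost small amplitudes before quantization, which is the
# point of companding: more resolution where hearing is most sensitive.
```
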
Numerics
2 B-channel transfer feature
See Integrated Services Digital
Network (ISDN) two B-channel
transfer.
3270 host application
An application on the IBM
System/370™, System/390®, or
AS/400® that interacts with
terminals that support the 3270 data
stream.
3270 script language
See script language.
3270 server
A function of WebSphere Voice
Response that provides a software
interface between WebSphere Voice
Response and IBM System/370,
System/390, or AS/400 architecture
business applications that interact
with terminals that support the 3270
data stream. Contrast with custom
server.
5ESS (1) A Lucent Technologies switch.
(2) The ISDN protocol that is used
on the 5ESS switch. It provides 23
B-channels and a D-channel over a
T1 trunk.
6312 Digital Trunk Telephony Adapter
(DTTA)
See Digital Trunk Telephony Adapter.
6313 Digital Trunk Telephony Adapter
(DTTA) with Blind Swap Cassette (BSC)
See Digital Trunk Telephony
Adapter with Blind Swap Cassette.
A
A-law The companding algorithm that is
used in Europe, Latin America, and
other countries when converting
from analog to digital speech data.
(Compand is a contraction of
compress and expand.) Contrast
with µ-law.
access protocol
A protocol that is used between an
external subscriber and a switch in a
telephone network.
ACD
See automatic call distributor.
ACL
See application connectivity link.
action See state table action.
Action Palette
An area that contains folders and
icons that can be selected to create
state table actions.
Address Resolution Protocol (ARP)
In HACMP, the Internet
communication protocol that
dynamically maps Internet
addresses to physical (hardware)
addresses on local area networks.
Limited to networks that support
hardware broadcast.
The /usr/sbin/cluster/etc/clinfo.rc
script, which is invoked by the
clinfo daemon whenever a network
or node event occurs, updates the
system ARP cache. This ensures that
the IP addresses of all cluster nodes
are updated after an IP address
takeover. The script can be further
customized to handle site-specific
needs.
administrator profile
Data that describes a WebSphere
Voice Response user. Information
that is in an administrator profile
includes ID, password, language
preference, and access privileges.
ADSI See analog display services interface.
ADSI telephone
A “smart” telephone that can
interpret and return ADSI data.
advanced intelligent network (AIN)
A telephone network that expands
the idea of the intelligent network
(IN) to provide special services more
efficiently; for example, by giving
users the ability to program many
of the services themselves.
AIN
See advanced intelligent network.
alarm Any condition that WebSphere Voice
Response thinks worthy of
documenting with an error message.
Strictly, the term alarm should
include only red (immediate
attention) and yellow (problem
condition), but it is also used to
refer to green (a red or yellow
message has been cleared) and
white (information) conditions.
Contrast with alert.
alert
A message that is sent to a central
monitoring station, as the result of
an alarm. Contrast with alarm.
alternate mark inversion (AMI)
A T1 line coding scheme in which
binary 1 bits are represented by
alternate positive and negative
pulses and binary 0 bits by spaces
(no pulse). The purpose is to make
the average dc level on the line
equal to zero.
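The alternating-pulse rule described above can be sketched in a few lines; this encoder is illustrative only (the function name and list-of-pulses representation are assumptions):

```python
def ami_encode(bits):
    """Alternate mark inversion: binary 0 -> no pulse (0), binary 1 ->
    pulses that alternate between +1 and -1 so that the average dc
    level on the line stays at zero."""
    out, polarity = [], 1
    for b in bits:
        if b:
            out.append(polarity)
            polarity = -polarity  # the next mark flips sign
        else:
            out.append(0)
    return out
```
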
AMI
See alternate mark inversion.
analog
Data in the form of continuously
variable signals, such as voice or
light signals.
analog display services interface (ADSI)
A Bellcore signaling protocol that is
used with existing voice networks.
ADSI supports analog transmission
of voice and text-based information
between a host or switch, voice mail
system, service bureau, or similar,
and a subscriber's ADSI-compatible
screen telephone. A single
voice-grade telephony channel is
shared between voice and data,
using a technique by which the
channel is taken over for the
transmission of modem-encoded
data.
ANI
See automatic number identification.
annotation
In speech recognition, an
alphanumeric string that is used to
mark a grammar when it is defined.
When the grammar is used in an
application, both the word and the
alphanumeric string are returned to
the application.
announcement-only greeting
In voice mail, a greeting that does
not give the caller a chance to leave
a voice message.
application
A (usually) customer-written
program or set of programs that
might consist of one or more state
tables or custom servers that are
running on WebSphere Voice
Response, with associated voice
segments. See voice application.
application connectivity link (ACL)
A service that transmits out-of-band
information between WebSphere
Voice Response and the Siemens
Hicom 300 switch.
application profile
Data that describes initial actions
that are to be performed when the
telephone is answered. Information
in an application profile indicates to
the channel process which state
table to load.
application server interface (ASI)
The principal software component
of WebSphere Voice Response that
manages the real-time channel
processing.
application server platform (ASP)
A platform that is used for Web and
voice applications for e-business.
ASI
See application server interface.
ASP
See application server platform.
audio name
The audible name that relates to a
specific application profile ID and
mailbox.
auto-attendant
Automated attendant. A voice
application that answers incoming
calls and asks callers which number
or other service they would like.
automatic call distributor (ACD)
A telephone system feature that
automatically queues and processes
inbound calls according to
predefined rules. For example, a call
might be routed to the agent whose
line has been idle longest.
automatic number identification (ANI)
A service available in the U.S. that
provides the telephone number of
the calling party. It is generated by
the caller's originating central office
switch, sent to a telephone network
carrier if required, then sent directly
either to a switch or to a voice
processing system.
autostubbing
A state table icon view utility that
automatically converts lines into
stubs when they cross a specified
number of columns.
B
B8ZS
Bipolar with 8-zero substitution. A
T1 line code that is required for
64 Kb channels such as ISDN.
B-channel
See bearer channel. See also Integrated
Services Digital Network (ISDN).
background music
Any audio data that is to be played
on a music channel.
barge-in
The capability that allows a prompt
to be interrupted by an utterance
that is then passed to a speech
recognizer. See also cut-through
channel.
baseforms
The set of phonetic pronunciations
that are associated with a grammar.
In WebSphere Voice Server, the IBM
dictionary of pronunciations is used.
basic rate interface (BRI)
The means of ISDN access that is
normally used by private
subscribers. It provides two
B-channels of 64 Kb per second and
one D-channel of 16 Kb per second
for signaling. This is often known as
2B+D. Contrast with primary rate
interface (PRI).
beans Java beans with which you can
build voice applications to use the
services of WebSphere Voice
Response on any platform.
bearer channel
In an ISDN interface, a duplex
channel for transmitting data or
digital voice between the terminal
and the network. The B-channel
operates at 64 Kb per second.
bearer service
The type of service that defines how
an ISDN connection will be used.
Typical bearer services are speech
telephony, 64 Kb per second data,
and high-quality speech.
blind transfer
A type of call transfer in which the
call is routed to another extension
and the original call is ended. No
check is made to determine whether
the transferred call is answered or if
the number is busy. Contrast with
screened transfer.
bnf
Abbreviation for Backus-Naur Form,
which is used to describe the syntax
of a given language and its notation.
In speech recognition, a special
adaptation of grammar
representation that is specified by
Speech Recognition Control Language
(SRCL) (pronounced “circle”).
bos
Base Operating System.
bps
bits per second.
BRI
See basic rate interface.
bridge See DVT bridge.
British Approvals Board for
Telecommunications
The British standards organization
that is responsible for approval of
equipment that is to be attached to
the PSTN.
C
cadence
The modulated and rhythmic
recurrence of an audio signal. For
example, a series of beeps or a
series of rings.
call
Telephone call. Often used to mean
a single run-time instance of a voice
application.
call center
A central point at which all inbound
calls are handled by a group of
individuals in a controlled
sequential way. Call centers are
usually a front end to a business
such as airline ticketing or mail
order.
Call Control eXtensible Markup Language
(CCXML)
Language designed to provide
telephony call control support for
VoiceXML or other dialog systems.
Refer to the CCXML forum web site
at http://www.w3.org/TR/ccxml.
call forwarding
The process of sending incoming
calls to a different number.
called party
Any person, device, or system that
receives a telephone call. Contrast
with caller.
caller
(1) Any person, device, or system
that makes a telephone call. (2)
Often used to refer to any user of a
voice application, although
WebSphere Voice Response might
have made an outbound call and
the user is really the called party. (3)
In voice mail, any person who
makes a telephone call to a
subscriber. Contrast with user.
calling line identification presentation
(CLIP) An ISDN supplementary service
that advises the called party of the
caller's number; for example, by
displaying it on a telephone display
panel.
CallPath
Software that provides basic
computer-telephony integration
(CTI) enablement and
comprehensive CTI functionality.
This includes access to, and
management of, inbound and
outbound telecommunications.
call session
The sequence of events that occurs
from the time a call is started to the
time all activities related to
answering and processing the call
are completed.
call transfer
A series of actions that directs a call
to another telephone number. See
also dual-line call transfer.
CAS
See channel associated signaling.
cascading resources
Resources that can be taken over by
more than one node. A takeover
priority is assigned to each
configured cluster resource group in
a per-node way. In the event of a
takeover, the node with the highest
priority gets the resource group. If
that node is unavailable, the node
with the next-highest priority gets
the resource group, and so on.
CAS tone
Customer Premise Equipment
Alerting Signal tone. In ADSI, this
tone is sent to the ADSI telephone
to switch the phone to data mode.
CBX
See computerized branch exchange.
CCH
See Comité de Coordination de
l'Harmonisation.
CCITT
See Comité Consultatif International
Télégraphique et Téléphonique.
CCS
See common channel signaling (CCS).
central office (CO)
A telephone switching system that
resides in the telephone service
provider's network. Different types
of central office switches exist,
depending upon the role of the
switch in the telephone network.
Commonly, a central office switch
connects customer lines to other
customer lines or trunks, and is the
point at which local subscriber lines
end for switching to other lines or
trunks.
central registry
A component of the License Use
Management network topology. A
server's database that logs requests
for licenses, upgrades for licenses,
and journals all license activity in a
tamper-proof auditable file.
CEPT See Conférence Européenne des
Administrations des Postes et
Télécommunications.
CGI
See Common Gateway Interface.
channel
One of the 24 channels that are on a
T1 trunk, or one of the 30 channels
that are on an E1 trunk. See also
speech recognition session, music
channel.
channel-associated signaling (CAS)
A method of communicating
telephony supervisory or line
signaling (on-hook and off-hook)
and address signaling on T1 and E1
digital links. The signaling
information for each traffic (voice)
channel is transmitted in a signaling
channel that is permanently
associated with the traffic channel.
On T1 links, supervisory signaling
is sent in the traffic channel by
using robbed-bit signaling (RBS). On
E1 links, a separate channel is used
to send signaling. Address signaling
can be transmitted either in the
signaling channel (out-of-band) or
in the traffic channel (in-band).
Contrast with common channel
signaling (CCS).
channel bank
A device that converts an analog
line signal to a digital trunk signal.
channel number
The identifying number that is
assigned to a licensed channel on
the T1 or E1 trunk that connects
WebSphere Voice Response to the
switch, channel bank, or channel
service unit.
channel process (CHP)
The AIX process that runs the logic
of the state table; each active caller
session has one active channel
process.
channel service unit (CSU)
A device that is used to connect a
digital phone line to a multiplexer, a
channel bank, or directly to another
device that generates a digital
signal. A CSU performs specific
line-conditioning and equalization
functions, and responds to loopback
commands that are sent from the
CO.
CHP
See channel process.
CIC
See circuit identification code.
CICS
See customer information control
system.
circuit identification code (CIC)
A 12-bit number that identifies a
trunk and channel on which a call is
carried.
clear message
A message that is displayed by
WebSphere Voice Response to tell
the operator that a red or yellow
error message has been cleared.
client node
In a single system image (SSI), a
WebSphere Voice Response system
that handles interactions with
callers. A client node must have a
telephony connection. It does not
store application or voice data; it
gets data from the server node of
the SSI.
CLIP
See calling line identification
presentation.
cluster
Loosely-coupled collection of
independent systems (nodes) that
are organized into a network to
share resources and to communicate
with each other. HACMP defines
relationships among cooperating
systems where peer cluster nodes
provide the services that a cluster
node offers if that node cannot do
so.
cluster configuration
User definition of all cluster
components. Component
information is stored in the Object
Data Manager. Components include
cluster name and ID, and
information about member nodes,
adapters, and network modules.
CO
See central office.
codec
Refers to adapters that compress
and decompress video files. The
letters "codec" represent
"compression/decompression"; in
the past, they represented
"coder/decoder."
Comité de Coordination de
l'Harmonisation
The CEPT committee responsible for
standards.
Comitato Elettrotecnico Italiano
The Italian standards organization
responsible for signaling protocols.
Comité Consultatif International
Télégraphique et Téléphonique (CCITT)
This organization has been renamed
and is now known as the
International Telecommunication
Union - Telecommunication
Standardization Sector (ITU-T).
common channel signaling (CCS)
A method of communicating
telephony information and line
signaling events (for example, call
setup and call clearing) on a
dedicated signaling channel. The
signaling channel is either a
predefined channel on an E1 or T1
digital link, or a completely separate
link between the switch and
WebSphere Voice Response. For data
integrity and reliability, the
information is usually
communicated using a data link
protocol. The telephone information
and line signaling events are sent as
data packets. SS7 and ISDN are
common-channel signaling
protocols. Contrast with channel
associated signaling.
Common Gateway Interface (CGI)
An interface to programs that
provide services on the World Wide
Web.
compiled grammar file
A grammar in binary format that
was built by the WebSphere Voice
Server grammar development tools.
compound license
In License Use Management, a type
of license that allows a system
administrator to generate license
passwords for a given number of
licenses. A compound license can
generate either nodelocked or
non-nodelocked licenses, but not
both.
computer-telephony integration (CTI)
The use of a general-purpose
computer to issue commands to a
telephone switch to transfer calls
and provide other services.
Typically, CTI is used in call centers.
computerized branch exchange (CBX)
A computer-driven, digital
communications controller that
provides telephone communication
between internal stations and
external networks.
Conférence Européenne des
Administrations des Postes et
Télécommunications (CEPT)
European Conference of Postal and
Telecommunications
Administrations.
configuration file
See parameter file.
configuration parameter
A variable that controls the behavior
of the system or the behavior of all
applications that are running on the
system. See parameter file, system
parameter.
container window
A window that lists the names of all
existing objects of the same type.
context
A set of one or more grammars that
is enabled and used during a
recognition action. The grammars
are specified by a FILELIST file.
Parameters that influence the
recognition, such as the maximum
initial silence period and the ending
silence period, are also defined by
the context. More than one context
can be enabled for a recognition.
context name
The name given to a context in a
context profile that is used for
WebSphere Voice Server.
context profile
Describes to the WebSphere Voice
Server process which contexts
should be loaded into an engine. A
WebSphere Voice Response for
Windows application specifies
which context profiles to load into
the engine it has reserved.
context type
Indicates to the recognition engine
how to interpret the grammar file.
Possible types are: VOCAB_FILE,
GRAMMAR_FILE, TEXT,
MNR_FILE, MNR,
PERSONAL_FILE,
PERSONAL_WDS,
BASEFORM_FILE.
continuous speech recognition
Recognition of words that are
spoken in a continuous stream.
Unlike isolated or discrete word
recognition, users do not have to
pause between words.
conversation
See speech recognition session.
CPE
See customer premises equipment.
CSU
See channel service unit.
CTI
See computer-telephony integration.
customer information control system
(CICS)
A licensed program that enables
transactions that are entered at
remote workstations to be processed
concurrently by user-written
application programs. It includes
facilities for building, using, and
maintaining databases.
custom server
A C language or C++ language
program that provides data
manipulation and local or remote
data stream, database, or other
services that are additional to those
that the state table interface
provides. Custom servers provide
an interface between WebSphere
Voice Response and business
applications, functions, or other
processes to give callers access to
business information and voice
processing functions such as speech
recognition.
customer premises equipment (CPE)
Telephony equipment that is on the
premises of a business or domestic
customer of the telephone company.
An example is a private branch
exchange (PBX).
cut-through channel
A channel of voice data that has
been passed through
echo-cancellation algorithms. The
channel provides echo-canceled
voice data that can then be used by
the engine in a recognition attempt.
This is similar to barge-in.
D
daemon
In the AIX operating system, a
program that runs unattended to
perform a standard service.
database server node
In a single system image (SSI), a
WebSphere Voice Response system
that contains the WebSphere Voice
Response DB2 database. This is
usually the same node as the voice
server node.
DBIM The internal database manager of
WebSphere Voice Response.
DBS
The database server of WebSphere
Voice Response.
DCBU See D-channel backup.
D-channel
See delta channel.
D-channel backup (DCBU)
An ISDN NFAS configuration where
two of the T1 facilities have a
D-channel, one of which is used for
signaling, and the other as a backup
if the other fails. See also non-facility
associated signaling.
DDI
See direct inward dialing.
DDS
See production system.
delay start
A procedure that is used with some
channel-associated signaling
protocols to indicate when a switch
or PABX is ready to accept address
signaling. After seizure, the switch
sends off-hook until it is ready to
accept address signaling, at which
time it sends on-hook. Contrast with
immediate start and wink start.
delta channel
In an ISDN interface, the D-channel
or delta channel carries the
signaling between the terminal and
the network. In a basic rate
interface, the D-channel operates at
16 Kb per second. In a primary rate
interface, the D-channel operates at
64 Kb per second.
destination point code (DPC)
A code that identifies the signaling
point to which an MTP signal unit
is to be sent. Unique in a particular
network.
development system
A WebSphere Voice Response
system that is not used to respond
to, or make, “live” calls; it is used
only to develop and test
applications. Contrast with
production system.
dial
To start a telephone call. In
telecommunication, this action is
performed to make a connection
between a terminal and a
telecommunication device over a
switched line.
dial by name
To press the keys that are related to
subscribers' names instead of to
their telephone numbers or
extensions.
dialed number identification service
(DNIS)
A number that is supplied by the
public telephone network to identify
a logical called party. For example,
two toll-free numbers might both be
translated to a single real number.
The DNIS information distinguishes
which of the two toll-free numbers
was dialed.
dialog box
A secondary window that presents
information or requests data for a
selected action.
dial tone
An audible signal (call progress
tone) that indicates that a device
such as a PABX or central office
switch is ready to accept address
information (DTMF or dial pulses).
DID
See direct inward dialing.
digital signal processing (DSP)
A set of algorithms and procedures
that processes electronic signals
after their conversion to digital
format. Because of the specific
mathematical models that are
required to perform this processing,
specialized processors are generally
used.
Digital Subscriber signaling System
Number 1 (DSS1)
A signaling protocol that is used
between ISDN subscriber equipment
and the network. It is carried on the
ISDN D-channel. ITU-T
recommendations Q.920 to Q.940
describe this protocol.
Digital Trunk Ethernet Adapter (DTEA)
A RadiSys adapter card that
provides the audio streaming (RTP)
interface between the WebSphere
Voice Response internal H.100 bus
and Ethernet for a maximum of 120
channels using uncompressed
(G.711) voice, and compressed
G.723.1 and G.729A voice.
Digital Trunk No Adapter (DTNA)
A device driver that supports
uncompressed (G.711) voice RTP
streaming.
Digital Trunk Telephony Adapter (DTTA)
The IBM Quad Digital Trunk
Telephony PCI Adapter. In
WebSphere Voice Response, this
adapter is known as a DTTA. It
allows you to connect directly to the
telephony network from a pSeries
computer without the need for an
external pack.
Digital Trunk Telephony Adapter (DTTA)
with Blind Swap Cassette (BSC)
The IBM Quad Digital Trunk
Telephony PCI Adapter. In
WebSphere Voice Response, this
adapter is known as a DTTA. It
allows you to connect directly to the
telephony network from a pSeries
computer without the need for an
external pack. This DTTA includes a
short Blind Swap Cassette (BSC)
which is required for installing the
DTTA in machines that use the BSC
(for example, the pSeries 650–6M2).
diphone
A transitional phase from one sound
to the next that is used as a building
block for speech synthesis. Typically,
between one thousand and two
thousand diphones exist in any
national language.
direct dial in (DDI)
See direct inward dialing.
direct inward dialing (DID)
A service that allows outside parties
to call directly to an extension of a
PABX. Known in Europe as direct
dial in (DDI).
direct speech recognition
Identification of words from spoken
input that are read directly from the
telephony channel. Contrast with
indirect speech recognition.
DirectTalk bean
One of the beans that is provided
with WebSphere Voice Response. It
provides access from a voice
application to simple call control
functions: waiting for a call, making
an outgoing call, handing a call over
to another application, and
returning a call when finished.
discrete word recognition
Identification of spoken words that
are separated by periods of silence,
or input one at a time. Contrast
with continuous speech recognition.
disconnect
To hang up or terminate a call.
Distributed Voice Technologies (DVT)
A component of WebSphere Voice
Response that provides an interface
to allow you to integrate your own
voice technology (such as a speech
recognizer) with your WebSphere
Voice Response system.
distribution list
In voice mail, a list of subscribers to
whom the same message can be
sent.
DMS100
(1) A Northern Telecom switch. (2)
The custom ISDN protocol that is
run on the DMS100 switch,
providing 23 B-channels and a
D-channel over a T1 trunk.
DNIS See dialed number identification
service.
double-trunking
See trombone.
down The condition in which a device is
unusable as a result of an internal
fault or of an external condition,
such as loss of power.
downstream physical unit (DSPU)
Any remote physical unit (data link,
storage, or input/output device)
that is attached to a single network
host system.
DPC
See destination point code.
drop-in grammar
A set of precompiled grammar rules
that can be used by an
application-specific grammar to
improve the recognition
performance.
DSP
See digital signal processing.
DSPU See downstream physical unit.
DSS1
See Digital Subscriber signaling
System Number 1.
DTMF
See dual-tone multifrequency.
DTEA See Digital Trunk Ethernet Adapter.
DTNA
See Digital Trunk No Adapter.
DTTA See Digital Trunk Telephony Adapter.
dtuser The name of the AIX account that is
set up during the installation
process for the use of all users of
WebSphere Voice Response.
dual-line call transfer
A call transfer method in which the
primary and secondary lines remain
bridged until a call is completed.
(Also known as tromboning: see
trombone).
dual-tone multifrequency (DTMF)
The signals that are sent when one of the
telephone keys is pressed. Each
signal is composed of two different
tones.
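Each DTMF signal pairs one low-group (row) tone with one high-group (column) tone at frequencies standardized in ITU-T Q.23. A hedged sketch of a tone generator (the function name, sample rate, and float-sample format are illustrative assumptions, not product behavior):

```python
import math

ROWS = [697, 770, 852, 941]      # low-group (row) frequencies, Hz
COLS = [1209, 1336, 1477, 1633]  # high-group (column) frequencies, Hz
KEYS = ["123A", "456B", "789C", "*0#D"]

def dtmf_tone(key, duration=0.1, rate=8000):
    """Return float samples of the two-tone signal for one keypad key."""
    for r, row in enumerate(KEYS):
        if key in row:
            f_lo, f_hi = ROWS[r], COLS[row.index(key)]
            break
    else:
        raise ValueError("not a DTMF key: %r" % key)
    return [0.5 * math.sin(2 * math.pi * f_lo * i / rate) +
            0.5 * math.sin(2 * math.pi * f_hi * i / rate)
            for i in range(int(duration * rate))]
```

In practice a receiver detects these tone pairs (for example, with the Goertzel algorithm) rather than generating them; generation is shown here only because it is the shorter half of the round trip.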
DVT
See Distributed Voice Technologies.
DVT bridge
The interface between a voice
technology component (such as a
speech recognizer) and the DVT
server. A bridge must exist for each
technology that you want to
integrate with DVT.
DVT_Client2
A WebSphere Voice Response
custom server that passes
commands and data to DVT_Server.
DVT interface
A WebSphere Voice Response
programming interface that is used
by a DVT bridge. It enables
integration of voice applications
with Distributed Voice Technologies to
provide functions such as speech
recognition.
DVT_Server
A component of DVT that allocates
and manages system resources in
response to requests from
DVT_Client2.
DVT service
The combination of a voice
application, a DVT bridge, and a
voice technology that allows a caller
to interact with your business.
dynamic vocabulary
A vocabulary that is defined while
an application is running.
E
E&M
A channel-associated signaling
protocol in which signaling is done
using two leads: an M-lead that
transmits battery or ground and an
E-lead that receives open or ground.
E1
A digital trunking facility standard
that is used in Europe and
elsewhere. It can transmit and
receive 30 digitized voice or data
channels. Two additional channels
are used for synchronization,
framing, and signaling. The
transmission rate is 2048 Kb per
second. Contrast with T1.
echo cancelation
A filter algorithm that compares a
copy of the voice data that is being
sent to a caller, with the voice data
being that is received from the
caller. Any echo of the sent data is
removed before the received data is
sent on, for example, to a speech
recognizer.
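Such filters are typically adaptive. The following least-mean-squares (LMS) sketch is purely illustrative, with toy parameters; it is not the algorithm that WebSphere Voice Response itself uses:

```python
# Illustrative LMS adaptive filter, a common basis of echo cancelation:
# it learns the echo path from the sent signal and subtracts the
# estimated echo from the received signal.
def lms_echo_cancel(sent, received, taps=4, mu=0.05):
    w = [0.0] * taps                # adaptive estimate of the echo path
    out = []
    for n in range(len(received)):
        x = [sent[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        echo_est = sum(wk * xk for wk, xk in zip(w, x))
        e = received[n] - echo_est  # residual after echo removal
        out.append(e)
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]
    return out
```

Fed a received signal that is a pure echo of the sent signal, the residual shrinks toward zero as the filter converges.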
edge
See result.
EDL
See exchange data link.
emulation
The imitation of all or part of one
computer system by another, so that
the imitating system accepts the
same data, runs the same programs,
and gets the same results as the
imitated computer system does.
endpoint
In Voice over Internet Protocol, a place
where calls are originated and
ended.
engine
A speech recognition process that
accepts voice data as input and
returns the text of what was said as
output. It is the process that
performs the recognition.
engine type
Each engine must be configured
with a specific type. The type is a
textual tag that is associated with a
specific engine and does not change
the operation or functionality of the
engine.
error message
Any message that is displayed by
WebSphere Voice Response in the
System Monitor as an alarm and
optionally written to the WebSphere
Voice Response error log, or to the
AIX error log (as an alert). Strictly,
the term error message should
include only red (immediate
attention) and yellow (problem
situation) messages, but it is also
used to refer to green (a red or
yellow message has been cleared)
and white (informational) messages.
Ethernet
A 10/100 network connection
between the VoIP gateway and the
Speech Server that supports VoIP.
ETS
European Telecommunications
Standard or European
Telecommunication Specification.
ETSI
European Telecommunications
Standards Institute.
Euro-ISDN
The common European ISDN
standard, agreed in 1993, that
provides a basic range of services
and supplementary services using
30 B-channels plus a D-channel over
an E1 trunk.
exchange data link
A serial connection that carries
messaging information between
WebSphere Voice Response and the
Lucent Technologies 1AESS,
Northern Telecom DMS100, Ericsson
MD110 switch, or Siemens Hicom
300.
exit
A point in a supplied application
from which control can be passed to
another custom-written application.
On completion, the custom-written
application passes control back to
the supplied application.
F
fade in
To gradually increase the volume of
sounds, such as background music.
fade out
To gradually decrease the volume of
sounds, such as background music.
failover
A transparent operation that, in the
event of a system failure, switches
responsibility for managing
resources to a redundant or standby
system. Also known as fallover.
FDM
See Feature Download Management.
Feature Download Management (FDM)
An ADSI protocol that enables
several alternative key and screen
overlays to be stored in an ADSI
telephone, and to be selected by
predetermined events at the
telephone.
Glossary
219
Federal Communications Commission
(FCC) The regulatory body in the United
States that is responsible for
communication.
field
An identifiable area in a window
that is used to enter or display data.
FILELIST
A WebSphere Voice Server
Telephony runtime file that defines
which files to load into a WebSphere
Voice Server engine. It contains a
list in the form:
context type grammar filename
...
...
Recursion is not permitted; that is,
no contexts of type FILELIST can be
specified in a FILELIST. When a
FILELIST is loaded, all the
grammars that are specified in it are
loaded into the engine. From then
on, the grammars that are loaded
when the FILELIST is specified are
regarded as a single context.
Foreign Exchange Subscriber (FXS)
A signaling protocol that links a
user's location to a remote exchange
that would not normally be serving
that user, to provide, for example,
calls to outside the local area at the
local rate.
frame A group of data bits that is
surrounded by a beginning
sequence and an ending sequence.
fsg
Abbreviation for finite state
grammar. In WebSphere Voice
Server, the extension of a file that
contains grammar specifications in
compiled, binary form. It is
generated from a .bnf file and is
called a .fsg file.
function
In ADSI, an ADSI instruction or
group of instructions.
FXS
See Foreign Exchange Subscriber.
G
gatekeeper
A component of a Voice over Internet
Protocol that provides services such
as admission to the network and
address translation.
gateway
A component of Voice over Internet
Protocol that provides a bridge
between VoIP and circuit-switched
environments.
G.711
Specification for uncompressed
voice for PSTN and Voice over
Internet Protocol access.
G.723.1
A compressed audio codec that is
used on Voice over Internet Protocol
connections for voice.
G.729A
A compressed audio codec that is
used on Voice over Internet Protocol
connections for voice.
glare
A condition that occurs when both
ends of a telephone line or trunk are
seized at the same time.
grammar
A structured collection of words and
phrases that are bound together by
rules. A grammar defines the set of
all words, phrases, and sentences
that might be spoken by a caller
and are recognized by the engine. A
grammar differs from a vocabulary in
that it provides rules that govern
the sequence in which words and
phrases can be joined together.
greeting
In voice mail, the recording that is
heard by a caller on reaching a
subscriber's mailbox. See also
announcement-only greeting. Contrast
with voice message.
greeting header
In voice mail, a recording that is
made by a subscriber and played to
callers either before or instead of a
personal greeting.
Groupe Special Mobile (GSM)
A CEPT/CCH standard for mobile
telephony.
H
HACMP (High-Availability Cluster
Multi-Processing) for AIX
Licensed Program Product (LPP)
that provides custom software that
recognizes changes in a cluster and
coordinates the use of AIX features
to create a highly-available
environment for critical data and
applications.
HACMP/ES
Licensed Program Product (LPP)
that provides Enhanced Scalability
to the HACMP for AIX LPP. An
HACMP/ES cluster can include up
to 32 nodes.
hang up
To end a call. See also disconnect.
HDB3 High-density bipolar of order 3. An
E1 line coding method in which
each block of four successive zeros
is replaced by 000V or B00V, so that
the number of B pulses between
consecutive V pulses is odd.
Therefore, successive V pulses are of
alternate polarity so that no dc
component is introduced. Note: B
represents an inserted pulse that
observes the alternate mark inversion
(AMI) rule and V represents an AMI
violation. HDB3 is similar to B8ZS
that is used with T1.
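The substitution rule can be sketched as a toy encoder (illustrative only, not the hardware implementation): ones alternate polarity under the AMI rule, and the choice between 000V and B00V depends on whether an odd or even number of pulses has been sent since the last violation.

```python
def hdb3_encode(bits):
    """Toy HDB3 encoder: returns a list of +1/-1/0 line symbols.
    Ones alternate polarity (AMI); each run of four zeros becomes
    000V (if an odd number of pulses since the last violation) or
    B00V (if even), so successive violations alternate polarity."""
    out = []
    last = -1           # polarity of the most recent pulse
    pulses_since_v = 0  # nonzero pulses sent since the last violation
    i = 0
    while i < len(bits):
        if bits[i:i + 4] == [0, 0, 0, 0]:
            if pulses_since_v % 2 == 0:
                b = -last               # B observes the AMI rule
                out += [b, 0, 0, b]     # V repeats B: a violation
                last = b
            else:
                out += [0, 0, 0, last]  # V alone violates AMI
            pulses_since_v = 0
            i += 4
        else:
            if bits[i] == 1:
                last = -last
                out.append(last)
                pulses_since_v += 1
            else:
                out.append(0)
            i += 1
    return out
```

Note that two consecutive four-zero blocks produce violations of opposite polarity, which is what keeps the line free of a dc component.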
HDLC See high-level data link control.
high-level data link control
An X.25 protocol.
homologation
The process of getting a telephony
product approved and certified by a
country's telecommunications
authority.
hook flash
A signal that is sent to a switch to
request a switch feature (such as call
transfer).
host application
An application residing on the host
computer.
hunt group
A set of telephone lines from which
a non-busy line is found to handle,
for example, an incoming call.
I
immediate start
A procedure that is used with some
channel-associated signaling
protocols, when the address
signaling is sent within 65
milliseconds of going off-hook.
Contrast with delay start and wink
start.
IN
See intelligent network.
in-band
In the telephony voice channel,
signals are said to be carried
in-band. Contrast with out-of-band.
indirect speech recognition
Identification of words from spoken
input that are read from a file.
Contrast with direct speech
recognition.
initialize
To prepare a system, device, or
program for operation; for example,
to initialize a diskette.
input parameter
Data that is received by a program
such as a prompt, 3270 script,
custom server, or state table from
the program that called it. Contrast
with local variable and system
variable.
integrated messaging
A messaging system in which more
than one copy of a single message is
stored, the copies being kept
synchronized by the applications
that are used to access them.
Contrast with unified messaging.
Integrated Services Digital Network
(ISDN)
A digital end-to-end
telecommunication network that
supports multiple services
including, but not limited to, voice
and data.
Integrated Services Digital Network
(ISDN) call transfer
In WebSphere Voice Response, an
application that allows you to
transfer calls on Nortel DMS-100
switches using Integrated Services
Digital Network (ISDN) two B-channel
transfer, and on Nortel DMS-100 and
DMS-250 switches using Nortel's
proprietary Release Link Trunk
(RLT) call transfer protocol.
Integrated Services Digital Network
(ISDN) two B-channel transfer
A call transfer feature that is
defined by Bellcore GR-2865-CORE
specification, and used on Nortel
and Lucent switches.
Integrated Services Digital Network user
part (ISUP)
Part of the SS7 protocol that
supports telephony signaling
applications. The ISDN user part is
defined to carry signaling
information that relates to digital
telephones, terminals, and PABXs in
customer premises.
intelligent network (IN)
A telephone network that includes
programmable software that is not
resident on the switch. It allows the
service provider to provide special
services, such as special
call-handling, that are not
dependent on the capabilities of the
switch. See also advanced intelligent
network.
intelligent peripheral (IP)
A voice processing system (such as
WebSphere Voice Response) that
provides enhanced services such as
voice response, speech recognition,
text-to-speech, voice messaging, and
database access in an advanced
intelligent network.
interactive voice response (IVR)
A computer application that
communicates information and
interacts with the caller via the
telephone voice channel.
International Telecommunications Union –
Telecommunication Standardization Sector
(ITU-T)
The name of the organization that
was previously known as the
CCITT.
IP
See intelligent peripheral.
ISDN See Integrated Services Digital
Network (ISDN).
ISDN two B-channel transfer
See Integrated Services Digital
Network (ISDN) two B-channel
transfer.
ISDN-UP
See Integrated Services Digital
Network user part.
ISUP
See Integrated Services Digital
Network user part.
ITU-T See International Telecommunications
Union – Telecommunication
Standardization Sector.
IVR
See interactive voice response.
J
Java Bean
A reusable Java component. See
beans.
jump out
See call transfer.
K
key
(1) One of the pushbuttons on the
telephone handset; sometimes
referred to as a DTMF key. (2) A
component of the keyboard that is
attached to the computer system.
key pad
The part of the telephone that
contains the pushbutton keys.
key pad mapping
The process of assigning special
alphanumeric characters to the keys
that are on a telephone key pad, so
that the telephone can be used as a
computer-terminal keyboard.
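With the standard letter layout, mapping a word to key presses can be sketched as follows (an illustrative example, not product code):

```python
# Standard letter layout on a telephone key pad (no letters on 1 and 0).
KEY_LETTERS = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
LETTER_TO_KEY = {ch: key
                 for key, letters in KEY_LETTERS.items()
                 for ch in letters}

def spell_on_keypad(word):
    """Translate a word into the digits a caller would press."""
    return "".join(LETTER_TO_KEY[ch] for ch in word.upper())
```

For example, spelling "IBM" presses the keys 4, 2, 6.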
L
LAN
See local area network.
language model
For speech recognition, a set of
acoustic shapes (in binary format)
for a given set of words, in which
word-to-word differences are
maximized, but speaker-to-speaker
differences are minimized. See also
vocabulary.
LAPD See link access protocol for the
D-channel.
licensed program product (LPP)
A separately-priced program and its
associated materials that bear an
IBM copyright and are offered
under the terms and conditions of a
licensing agreement.
license server
A machine on a network that holds
licenses and distributes them on
request to other machines on the
network.
line error
An error on the telephone line that
causes the signal to be impaired.
link access protocol for the D-channel
An HDLC protocol used in ISDN
that ensures a reliable connection
between the network and the user.
Often used as another name for
Q.921.
local area network (LAN)
A network in which computers are
connected to one another in a
limited geographical area.
WebSphere Voice Response
communication with WebSphere
Voice Server speech recognition,
text-to-speech, and single system
image (SSI) requires a LAN that is
dedicated to that purpose (unless
both are installed on the same
system). A token-ring network is a
type of LAN.
local variable
A user-defined temporary variable
that can be accessed only by the
program (state table, prompt, or
3270 script) for which it is defined.
Contrast with input parameter, system
variable.
M
macro See system prompt.
MAP
See mobile application part.
MB
See megabyte.
megabyte
(1) For processor storage and real
and virtual memory, 1 048 576
bytes. (2) For disk storage capacity
and transmission rates, 1 000 000
bytes.
Message Center
See Unified Messaging.
message delivery preference
The subscriber's choice of whether
voice mail is stored as voice mail
only, as e-mail only, or as both voice
mail and e-mail.
message delivery type
The format in which a voice
message is delivered.
message signal unit (MSU)
An MTP packet that contains data.
message transfer part (MTP)
Part of the SS7 protocol that is
normally used to provide a
connectionless service that is
roughly similar to levels one
through three of the OSI reference
model.
message waiting indicator (MWI)
A visible or audible indication (such
as a light or a stutter tone) that a
voice message is waiting to be
retrieved.
MFR1 An in-band address signaling
system that uses six tone
frequencies, two at a time. MFR1 is
used principally in North America
and is described in ITU-T
recommendations Q.310 through
Q.332.
MIME See multipurpose Internet mail
extensions.
mobile application part (MAP)
Optional layer 7 application for SS7
that runs on top of TCAP for use
with mobile network applications.
MP
See multiprocessor.
MSU
See message signal unit.
MTP
See message transfer part.
mu(µ)-law
The companding algorithm that is
used primarily in North America
and Japan when converting from
analog to digital speech data.
(Compand is a contraction of
compress and expand.) Contrast
with A-law.
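The continuous form of the µ-law curve (G.711 implements a segmented 8-bit approximation of it) can be sketched as:

```python
import math

MU = 255  # the mu value used in North America and Japan

def mu_law_compress(x):
    """Continuous mu-law companding curve for -1 <= x <= 1:
    small amplitudes get proportionally more coding range."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of the companding curve."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Compressing before quantization and expanding afterwards keeps quantization noise roughly proportional to signal level, which is why low-level speech survives 8-bit coding.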
multiprocessor (MP)
A computer that includes two or
more processing units that can
access a common main storage.
multipurpose Internet mail extensions
(MIME)
A protocol that is used on Internet
for extending e-mail capability and
merging it with other forms of
communication, such as voice mail
and fax.
mumble
Non-speech noise that a user
interjects while speaking.
music channel
A channel on which sounds can be
broadcast to one or more telephony
(voice) channels.
music title
The name by which WebSphere
Voice Response knows a tune.
MWI
See message waiting indicator.
N
National ISDN
A common ISDN standard that was
developed for use in the U.S.
NAU
See network addressable unit.
N-Best
The ability to return more than one
speech recognition result. Typically,
an array of results is available in the
application in sequence of
descending probability.
NCP
See network control program.
NET
Norme Européenne de
Télécommunication.
Net 5
The test specification for
conformance to the Euro-ISDN
standard for primary rate access to
ISDN.
network addressable unit (NAU)
Any network component that can be
addressed separately by other
members of the network.
network control program (NCP)
Used for requests and responses
that are exchanged between physical
units in a network for data flow
control.
Network File System (NFS)
A protocol, developed by Sun
Microsystems, Incorporated, that
allows any host in a network to
gain access to another host or
netgroup and their file directories.
In a single system image (SSI), NFS
is used to attach the WebSphere
Voice Response DB2 database.
network termination
See NT mode.
NFAS See non-facility associated signaling.
NFS
See Network File System.
node
In a single system image (SSI), one
of the WebSphere Voice Response
systems that are in the cluster.
non-facility associated signaling (NFAS)
An ISDN configuration where
several T1 facilities can be
controlled by a single D-channel,
instead of the normal T1
configuration where each T1 facility
has 23 B-channels and a D-channel
(23B+D). With NFAS, all 24
timeslots of the non-signaling trunks
are available for voice, whereas only
23 channels can be used on the
trunk that carries signaling traffic
(23B+D+n24B).
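An N-Best result list, as defined above, can be sketched as follows (the hypotheses and scores are invented for illustration):

```python
# Illustrative N-Best list: candidate transcriptions with confidence
# scores, returned to the application best-first.
def n_best(hypotheses, n):
    """Sort (text, score) pairs by descending score and keep n."""
    return sorted(hypotheses, key=lambda h: h[1], reverse=True)[:n]

results = n_best([("flight to Austin", 0.61),
                  ("flight to Boston", 0.87),
                  ("light in Boston", 0.12)], n=2)
```

The application can then confirm the top hypothesis with the caller and fall back to the next one if it is rejected.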
NT mode
Attachment to the ISDN network is
asymmetric. The network side of the
connection operates in network
termination, or NT, mode. User
equipment operates in terminal
equipment, or TE, mode.
O
ODM See Object Data Manager.
Object Data Manager (ODM)
A data manager intended for the
storage of system data. The ODM is
used for many system management
functions. Information that is used
in many commands and SMIT
functions is stored and maintained
in the ODM as objects with
associated characteristics.
off-hook
A telephone line state, usually
induced by lifting a receiver, in
which the line is ready to make a
call.
offline
Not attached or known to the
existing system configuration, and
therefore not in active operation.
on-hook
A telephone line state, usually
induced by hanging up a receiver,
in which the line is ready to receive
a call.
online In active operation.
OPC
See originating point code.
Open Systems Interconnection (OSI)
(1.) The interconnection of open
systems as specified in particular
ISO standards. (2.) The use of
standardized procedures to enable
the interconnection of data
processing systems.
Open Systems Interconnection (OSI)
architecture
Network architecture that observes
the particular set of ISO standards
that relate to Open Systems
Interconnection.
Open Systems Interconnection (OSI)
Reference Model
A conceptual model composed of
seven layers, each specifying
particular network functions.
Developed by the International
Organization for Standardization
(ISO) in 1984, it is considered to be
the primary architectural model for
intercomputer communications.
originating point code (OPC)
A code that identifies the signaling
point that originated an MTP signal
unit. Unique in a particular
network.
OSI
See Open Systems Interconnection.
outgoing mail
In voice mail, messages that are sent
by a subscriber to another
subscriber on the same system, and
have not yet been listened to by the
addressee.
out-of-band
In the telephony signaling channel,
as opposed to the voice channel.
Signals are said to be carried
out-of-band. Contrast with in-band.
P
PABX See private automatic branch exchange.
pack
Each DTTA contains the equivalent
of four packs. The pack is a digital
trunk processor built into the digital
trunk adapter, so there is no need
for external hardware. See also
TPACK.
parameter file
An ASCII file that sets configuration
parameters.
password
A unique string of characters that is
known to a computer system and to
a user. The user must specify the
character string to gain access to the
system and to the information that
is stored in it.
PBX
See private branch exchange.
PCI
See peripheral component interconnect.
PCM
See Pulse Code Modulation.
PCM fault condition
A fault, such as power supply
failure, or loss of incoming signal, in
T1 or E1 equipment. (ITU-T G.732
and G.733.)
peripheral component interconnect (PCI)
A computer busing architecture that
defines electrical and physical
standards for electronic
interconnection.
personal greeting
In voice mail, a greeting that is
recorded by a subscriber. Contrast
with system greeting.
phone recognition
Communicating with a computer
using voice via a telephone, over a
telephone line. The computer
application recognizes what was
said and takes suitable action.
port
In time-slot management, one end
of a 64 Kbps unidirectional stream
that can be attached to the TDM
bus.
port set
In time-slot management, a
collection of ports that can be
connected using a single
CA_TDM_Connect() API call to a
complementary collection of ports.
PRA
Primary rate access (PRA). Used as
another name for primary rate
interface (PRI).
PRI
See primary rate interface.
primary rate access (PRA)
See primary rate interface.
primary rate interface (PRI)
The means of ISDN access that is
normally used by large sites. It
provides 30 (E1) or 23 (T1)
B-channels of 64 Kb per second and
one D-channel for signaling. This is
often known as 30B+D or 23B+D.
Contrast with basic rate interface.
primary rate ISDN (PRI)
See primary rate interface.
primitive
A message that is sent from one
process to another.
private automatic branch exchange (PABX)
An automatic private switching
system that services an organization
and is usually located on a
customer's premises. Often used as
another name for private branch
exchange (PBX).
private branch exchange (PBX)
A switch inside a private business
that concentrates the number of
inside lines into a smaller number
of outside lines (trunks). Many PBXs
also provide advanced voice and
data communication features. Often
used as another name for private
automatic branch exchange.
process a call
To answer the telephone and
perform the correct tasks.
Process Manager
In WebSphere Voice Server, the
process that manages the interaction
of all telephony system processes;
for example, starting and stopping
text-to-speech or speech recognition
sessions.
production system
A WebSphere Voice Response
system that responds to or makes
“live” calls. A production system
can also be used to develop new
applications. Contrast with
development system.
program temporary fix (PTF)
An update to IBM software.
program data
Application-specific data that can be
associated with a call transfer from
CallPath to WebSphere Voice
Response, or in the opposite
direction. This is equivalent to
CallPath program data, but
WebSphere Voice Response imposes
the restriction that the data must be
a printable ASCII character string,
with a maximum length of 512
bytes.
prompt
(1) A message that requests input or
provides information. Prompts are
seen on the computer display screen
and heard over the telephone. (2) In
WebSphere Voice Response, a
program that uses logic to
determine dynamically the voice
segments that are to be played as a
voice prompt.
prompt directory
A list of all the prompts that are
used in a particular voice
application. Used by the state table
to play the requested voice prompts.
pronunciation
The possible phonetic
representations of a word. A word
can have multiple pronunciations;
for example, “the” has at least two
pronunciations, “thee” and “thuh”.
pronunciation dictionary
A file that contains the phonetic
representation of all of the words,
phrases, and sentences for an
application grammar.
pronunciation pool
A WebSphere Voice Server resource
that contains the set of all
pronunciations.
protocol
A set of semantic and syntactic rules
that determines the behavior of
functional units when they
communicate. Examples of
WebSphere Voice Response
protocols are FXS, RE, and R2.
PSTN An ITU-T abbreviation for public
switched telephone network.
PTF
See program temporary fix.
Pulse Code Modulation (PCM)
Variation of a digital signal to
represent information.
pushbutton
(1) A key that is on a telephone key
pad. (2) A component in a window
that allows the user to start a
specific action.
pushbutton telephone
A type of telephone that has
pushbuttons. It might or might not
send tone signals. If it does, each
number and symbol on the key pad
has its own specific tone.
Q
Q.921
The ITU-T (formerly CCITT)
recommendation that defines the
link layer of the DSS1 protocol.
Q.921 defines an HDLC protocol
that ensures a reliable connection
between the network and the user.
Often used as another name for
LAPD.
Q.931
The ITU-T recommendation that
defines the network layer of the
DSS1 protocol. This layer carries the
ISDN messages that control the
making and clearing of calls.
quiesce
To shut down a channel, a trunk
line, or the whole system after
allowing normal completion of any
active operations. The shutdown is
performed channel-by-channel.
Channels that are in an idle state
are shut down immediately.
Channels that are processing calls
are shut down at call completion.
R
RAI
See remote alarm indication.
RBS
See robbed-bit signaling.
RE
See remote extension.
Recognition Engine server
In WebSphere Voice Server, the
software that performs the speech
recognition and sends the results to
the client. This consists of one 'Tsm
router' and at least one 'tsmp' and
one 'engine'.
reduced instruction set computer (RISC)
A computer that uses a small,
simplified set of frequently-used
instructions to improve processing
speed.
referral number
The phone number to which calls
are routed, when call forwarding is
active.
rejection
The identification of an utterance as
one that is not allowed by a
grammar.
release link trunk (RLT)
A custom specification from Nortel
for ISDN call transfer.
remote alarm indication (RAI)
A remote alarm (also referred to as
a yellow alarm) indicates that the
far end of a T1 connection has lost
frame synchronization. The Send
RAI system parameter can be set to
prevent WebSphere Voice Response
from sending RAI.
remote extension (RE)
An E1 signaling protocol that is
similar to FXS loop start.
resource element
A component of an Intelligent
Network. The resource element
contains specialized resources such
as speech recognizers or
text-to-speech converters.
response
In speech recognition, the character
string that is returned by the
recognizer, through DVT_Client, to
the state table. The string represents
the result of a recognition attempt.
This is the word or words that the
recognizer considers to be the best
match with the speech input.
result An indicator of the success or
failure of a state table action. It is
returned by WebSphere Voice
Response to the state table. Also
known as an edge.
result state
The state that follows each of the
possible results of an action.
return code
A code that indicates the status of
an application action when it
completes.
RISC
See reduced instruction set computer.
RLT
See release link trunk.
robbed-bit signaling (RBS)
The T1 channel-associated signaling
scheme that uses the least
significant bit (bit 8) of each
information channel byte for
signaling every sixth frame. This is
known as 7-5/6-bit coding rather
than 8-bit coding. The signaling bit
in each channel is associated only
with the channel in which it is
contained.
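The bit-robbing rule can be sketched as follows (a toy illustration; real T1 framing hardware does this per superframe with dedicated A/B signaling bits):

```python
def apply_rbs(frames, sig_bit):
    """Toy robbed-bit signaling: in every sixth frame, the least
    significant bit (bit 8) of each channel byte is replaced by
    the signaling bit; other frames keep all 8 data bits."""
    out = []
    for i, frame in enumerate(frames, start=1):
        if i % 6 == 0:
            frame = bytes((b & 0xFE) | sig_bit for b in frame)
        out.append(frame)
    return out
```

Because only one bit in six frames is disturbed per channel, voice quality is essentially unaffected, but each channel effectively carries 7-5/6 bits rather than 8.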
S
SAP
See service access point.
SAS
A T1 signaling protocol that is
similar to FXS.
SCbus See Signal Computing bus.
SCCP See signaling connection control part.
SCP
See service control point.
screened transfer
A type of call transfer in which the
transfer of the held party to the
third party is completed only if the
third party answers the call.
Contrast with blind transfer.
script
The logical flow of actions for a
3270 server program.
script language
A high-level, application-specific
scripting language, which consists
of statements that are used to
develop 3270 scripts. These scripts
are part of the interface between a
state table and a 3270-based host
business application.
SCSA See Signal Computing System
Architecture.
SDC
See Server Display Control.
SDLC See Synchronous Data Link Control.
segment ID number
One or more numbers that are used
to identify a voice or prompt
segment.
Server Display Control (SDC)
An ADSI control mode in which the
ADSI telephone is controlled
through a dialog with a voice
response system.
server node
In a single system image (SSI), a
WebSphere Voice Response system
that contains either the WebSphere
Voice Response DB2 database, or the
voice data, or both.
service access point (SAP)
An OSI term for the port through
which a service user (layer N+1)
accesses the services of a service
provider (layer N).
service control point (SCP)
A component of the intelligent
network that provides transactional
services, such as translation of
toll-free numbers to subscriber
numbers.
service information octet (SIO)
A field that is in an MTP message
signal unit. It identifies a higher
layer user of MTP, and whether the
message relates to a national or
international network.
service node
An element of an Intelligent
Network. The service node contains
the service logic that controls an
intelligent network application and
resources.
service provider
Any company that provides services
for a fee to its customers, such as
telecommunication companies,
application service providers,
enterprise IT, and Internet service
providers.
service provider equipment (SPE)
The switching equipment that is
owned by the telephone company.
session
See speech recognition session.
Session Initiation Protocol
A signaling protocol used for
internet conferencing, telephony,
presence, events notification and
instant messaging.
short message service center (SMSC)
A component of the mobile
telephony network, specified by the
GSM group of standards, that
provides for exchange of
alphanumeric messages of less than
160 bytes. Messages can be
exchanged between different types
of system such as mobile telephone,
alphanumeric pager, terminal,
e-mail, telex, or DTMF telephone.
SIF
See signaling information field.
Signal Computing bus (SCbus)
A time division multiplexed (TDM)
hardware bus that was originated
by Dialogic to interconnect different
vendors' computer telephony
adapters. Specified as part of Signal
Computing System Architecture
(SCSA).
Signal Computing System Architecture
(SCSA)
An architecture that was defined by
Dialogic to support interoperability
of software and hardware
components that are developed by
different vendors in the computer
telephony industry.
signaling
The exchange of control information
between functional parts of the
system in a telecommunications
network.
signaling connection control part (SCCP)
A layer 3 protocol that observes
OSI.
signaling information field (SIF)
The user data portion of an MTP
message signal unit.
signaling link code (SLC)
A code that identifies a particular
signaling link that connects the
destination and originating
signaling points. This is used in
MTP signaling network
management messages to indicate
the signaling link to which the
message relates.
signaling link selection (SLS)
A field that is used to distribute
MTP signal units across multiple
signaling links.
signaling mode
The type of signaling protocol,
either channel-associated signaling,
or common-channel signaling.
signaling point
A node in a signaling network that
either originates and receives
signaling messages, or transfers
signaling messages from one
signaling link to another, or both.
signaling process
A WebSphere Voice Response
component that controls signaling
for an exchange data link or
common-channel signaling protocol.
Some signaling processes are
supplied with WebSphere Voice
Response, and others can be
custom-written.
signaling System Number 7 (SS7)
The international high-speed
signaling backbone used for the
public-switched telephone network.
silence
A short pause between utterances.
simple mail transfer protocol (SMTP)
An Ethernet protocol that is related
to TCP/IP.
simple network management protocol
(SNMP)
In the Internet suite of protocols, a
network management protocol that
is used to monitor routers and
attached networks. SNMP is an
application layer protocol.
Information on devices managed is
defined and stored in the
application's Management
Information Base (MIB). SNMP
provides a means of monitoring
WebSphere Voice Response
resources remotely.
Simplified Message Desk Interface
(SMDI)
A Northern Telecom service that
transmits out-of-band information
between WebSphere Voice Response
and particular switches.
Simplified Message Service Interface
(SMSI)
A Lucent Technologies service that
transmits out-of-band information
between WebSphere Voice Response
and particular switches.
single system image (SSI)
A cluster of WebSphere Voice
Response systems that are
connected together using a local area
network. Each system (known as a
node) in the cluster is configured as
either a client or a server. A single
system image typically consists of
one server node and multiple client
nodes. The client nodes retrieve
applications and voice data from the
server. A second server can be
configured for redundancy.
sink
A port that takes voice data from
the TDM bus. Contrast with source.
SIO
See service information octet.
SIP
See Session Initiation Protocol.
SLC
See signaling link code.
SLS
See signaling link selection.
SMDI See Simplified Message Desk Interface.
SMIT See System Management Interface Tool.
SMP
See symmetric multiprocessor.
SMSC See short message service center.
SMSI See Simplified Message Service
Interface.
SMTP See simple mail transfer protocol.
SNA
Systems Network Architecture.
SNMP
See simple network management protocol.
source A port that puts voice data on to the
TDM bus. Contrast with sink.
SPACK
A logical component that consists of
a base card, which connects to the
digital trunk adapter in the pSeries
computer, and a trunk interface card
(TIC), which manages the trunk
connection to the switch. Contrast
with VPACK and TPACK.
SPE
See service provider equipment.
speaker-dependent speech recognition
Identification of spoken words that
is related to knowledge of the
speech characteristics of one
speaker. Contrast with
speaker-independent speech recognition.
speaker-independent speech recognition
Identification of spoken words that
is related to collected knowledge of
the speech characteristics of a
population of speakers. Contrast
with speaker-dependent speech
recognition.
special character
A character that is not alphabetic,
numeric, or blank. For example, a
comma (,) or an asterisk (*).
speech recognition
The process of identifying spoken
words. See discrete word recognition,
continuous speech recognition,
speaker-dependent speech recognition,
speaker-independent speech recognition.
Speech Recognition Control Language
(SRCL)
In WebSphere Voice Server, a
structured syntax and notation that
defines speech grammars,
annotations, repetitions, words,
phrases, and associated rules.
speech recognition session
In WebSphere Voice Server, a
sequence of recognition commands
that allocate a recognition engine,
and return a unique identifier to
identify the engine.
speech synthesis
The creation of an approximation to
human speech by a computer that
concatenates basic speech parts
together. See also text-to-speech.
SRCL See Speech Recognition Control
Language (SRCL).
SS7
See signaling System Number 7.
SSI
See single system image.
SSI-compliant custom server
A custom server that runs correctly
in a single system image. The
custom server observes all the
guidelines for the operation of
custom servers in an SSI
environment.
SSI-tolerant custom server
A custom server that runs in a
single system image, but with only
some restrictions.
standalone system
A WebSphere Voice Response
system that is not part of a single
system image (SSI). A standalone
system is not connected to other
WebSphere Voice Response systems,
so it contains its own application
and voice data.
state
One step in the logical sequence of
actions that makes a WebSphere
Voice Response voice application.
state table
A list of all the actions that are used
in a particular voice application. A
component of WebSphere Voice
Response.
state table action
One instruction in a set of
instructions that is in a WebSphere
Voice Response state table that
controls how WebSphere Voice
Response processes various
operations such as playing voice
prompts or recording voice
messages. See also state.
stub
A line in a state table that is only
partially displayed.
subscriber
In voice mail, any person who owns
a mailbox.
subscriber class
A named set of variables that
defines a specific level of service
available to telephone subscribers,
such as maximum number of
messages per mailbox and
maximum number of members per
mailbox distribution list.
subvocabulary
A vocabulary that is called by
another vocabulary.
supplementary service
In Euro-ISDN, a service outside the
minimum service offering that each
signatory is obliged to provide. For
example, calling line identification
presentation (CLIP) and call session.
switch A generic term that describes a
telecommunications system that
provides connections between
telephone lines and trunks.
symmetric multiprocessor (SMP)
A system in which
functionally-identical multiple
processors are used in parallel,
providing simple and efficient
load-balancing.
Synchronous Data Link Control (SDLC)
A discipline for managing
synchronous, code-transparent,
serial-by-bit information transfer
over a link connection. Transmission
exchanges can be duplex or
half-duplex over switched or
nonswitched links.
system administrator
The person who controls and manages the WebSphere Voice Response system by adding users, assigning account numbers, and changing authorizations.
system greeting
In voice mail, a default greeting that
is heard by callers to the mailboxes
of subscribers who have not
recorded a personal greeting or who
have selected the system greeting.
Contrast with personal greeting.
System Management Interface Tool
(SMIT)
A set of utilities that can be used for
various purposes, such as loading
WebSphere Voice Response
software, installing the exchange
data link, and configuring SNA.
Systems Network Architecture (SNA)
An architecture that describes the
logical structure, formats, protocols,
and operational sequences for
transmitting information units
through the networks and also the
operational sequences for
controlling the configuration and
operation of networks.
system parameter
A variable that controls some of the
behavior of WebSphere Voice
Response or applications that are
running under WebSphere Voice
Response. System parameters are set
through System Configuration or
Pack Configuration options on the
Configuration menu. Some system
parameter values are assigned to
system variables when an application
is initialized. Contrast with input
parameter, local variable, system
variable.
system prompt
The symbol that appears at the
command line of an operating
system, indicating that the operating
system is ready for the user to enter
a command.
system variable
A permanent global variable that is
defined by WebSphere Voice
Response for use by state tables.
Many system variables are loaded
with values when the state table is
initialized. Some values are taken
from system parameters. Contrast
with input parameter, local variable,
system parameter.
T
T1
A digital trunking facility standard
that is used in the United States and
elsewhere. It can transmit and
receive 24 digitized voice or data
channels. Signaling can be
imbedded in the voice channel
transmission when robbed-bit
signaling is used. The transmission
rate is 1544 kilobits per second.
Contrast with E1.
T1/D3 A framing format that is used in T1
transmission.
T1/D4 A framing format that is used in T1
transmission.
tag
A text string that is attached to any
instance of a word in a grammar. A
tag can be used (1) to distinguish
two occurrences of the same word
in a grammar or (2) to identify more
than one word in a grammar as
having the same meaning.
Tag Image File Format-Fax (TIFF-F)
A graphic file format that is used to
store and exchange scanned fax
images.
TCAP See transaction capabilities application
part.
TCP/IP
See Transmission Control
Protocol/Internet Protocol.
TDD
See Telecommunications Device for the
Deaf.
TDM
See time-division multiplex bus.
technology
A program, external to WebSphere
Voice Response, that provides
processing for functions such as
text-to-speech or speech recognition.
Telecommunications Device for the Deaf
(TDD) A telephony device that has a
QWERTY keyboard and a small
display and, optionally, a printer.
telephone input field
A field type that contains
information that is entered by a
caller who is using pushbutton
signals. See also field.
terminal
(1) A point in a system or
communication network at which
data can enter or leave. (2) In data
communication, a device, usually
equipped with a keyboard and
display device, that can send and
receive information.
termination character
A character that defines the end of a
telephone data entry.
text-to-speech (TTS)
The process by which ASCII text
data is converted into synthesized
speech. See also speech synthesis.
TIC
See trunk interface card.
time-division multiplex bus (TDM)
A method of transmitting many channels of data over a smaller number of physical connections by multiplexing the data into timeslots, and demultiplexing at the receiving end. In this document, one such channel can be considered to be a half-duplex unidirectional stream of 64 Kb per second.
TIFF-F
See Tag Image File Format-Fax
timeslot
The smallest switchable data unit on
a data bus. It consists of eight
consecutive bits of data. One
timeslot is similar to a data path
with a bandwidth of 64 Kb per
second.
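The 64 Kb per second figure quoted for a timeslot, and the 1544 kilobits per second rate quoted in the T1 entry, follow from simple arithmetic. A quick sketch (the 8000 frames-per-second PCM sampling rate and the single T1 framing bit are standard values assumed here, not stated in the glossary):

```python
# Each timeslot carries 8 consecutive bits per frame, and digital trunks
# run at 8000 frames per second (the standard PCM sampling rate).
BITS_PER_TIMESLOT = 8
FRAMES_PER_SECOND = 8000

# Bandwidth of one timeslot: 8 * 8000 = 64 000 bits per second.
timeslot_bps = BITS_PER_TIMESLOT * FRAMES_PER_SECOND

# A T1 frame carries 24 timeslots plus one framing bit, which gives the
# 1544 kilobits per second quoted in the T1 entry.
t1_bps = (24 * BITS_PER_TIMESLOT + 1) * FRAMES_PER_SECOND

print(timeslot_bps, t1_bps)  # 64000 1544000
```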
token A particular message or bit pattern
that indicates permission or
temporary control to transmit.
token-ring network
A local area network that connects
devices in a ring topology and
allows unidirectional data
transmission between devices by a
token-passing procedure. A device
must receive a token before it can
transmit data.
tone
An audible signal that is sent across a telephone network. Single (one-frequency) tones, tritones (three sequential tones at different frequencies), dual tones (two simultaneous tones at different frequencies), and dual sequential tones exist. Each has a different meaning.
TPACK
A digital trunk processor that is implemented using DSP technology on the digital trunk adapter without the need for external hardware. One DTTA digital trunk adapter provides up to four TPACKs on a PCI card.
transaction
A specific, related set of tasks in an
application that retrieve information
from a file or database. For
example, a request for the account
balance or the available credit limit.
transaction capabilities application part
(TCAP)
Part of the SS7 protocol that
provides transactions in the
signaling network. A typical use of
TCAP is to verify a card number, for
the credit card calling service.
transaction messaging
The ability to associate an item of
data, such as a transaction identifier,
with a voice message. The voice
message can later be retrieved by
referencing the data value.
transfer
See call transfer.
Transmission Control Protocol/Internet
Protocol (TCP/IP)
A communication subsystem that is
used to create local area and wide
area networks.
trombone
A connected voice path that enters
an IVR from a switch on one circuit,
then returns to the same switch on a
parallel circuit. Two IVR ports and
two circuits are consumed, but in
some circumstances this might be
the only way to make a connection
between two callers if the attached
switch does not support a Call
Transfer function. Also known as
double-trunking.
trunk
A telephone connection between
two central offices or switching
devices. In WebSphere Voice
Response, a trunk refers to 24 or 30
channels that are carried on the
same T1 or E1 digital interface.
trunk interface card (TIC)
The component of the pack that
manages the trunk connection to the
switch.
Tsm Router
In WebSphere Voice Server, a
process that controls which engine
processes are in use at any time.
Requests for an engine by a
WebSphere Voice Server Client are
accepted or rejected depending on
whether an engine that meets the
Tsm Client's requirements is
available.
tsmp
In WebSphere Voice Server, a
process that is running on the
Recognition engine server machine
that passes messages between an
engine and a Tsm Client. One tsmp
exists for every engine.
TTS
See text-to-speech.
tune
A piece of music or other audio
data that is intended to be played as
background music.
U
underrun
To run out of audio data to play,
causing voice or music to be
audibly broken up or cut off.
unified messaging
A messaging system in which a
single copy of a message is stored
and accessed by multiple
applications (for example, voice
mail and e-mail). Contrast with
integrated messaging.
Unified Messaging
An IBM product that uses
WebSphere Voice Response's voice
processing capabilities to provide a
wide range of voice mail, fax, and
e-mail functions. Previously known
as Message Center.
user
Someone who uses WebSphere
Voice Response as a system
administrator, application developer,
or similar. Contrast with caller.
utterance
A spoken word, phrase, or sentence
that can be preceded and followed
by silence.
V
variable
A system or user-defined element
that contains data values that are
used by WebSphere Voice Response
voice applications. See input
parameter, local variable, system
parameter, system variable.
VMS
See Voice Message Service.
vocabulary
A list of words with which
WebSphere Voice Response matches
input that is spoken by a caller. See
also language model.
voice application
A WebSphere Voice Response
application that answers or makes
calls, plays recorded voice segments
to callers, and responds to the
caller's input.
voice directory
A list of voice segments that is
identified by a group ID. Voice
directories can be referenced by
prompts and state tables. Contrast
with voice table.
voice mail
The capability to record, play back,
distribute, and route voice
messages.
voice mailbox
The notional hard disk space where
the incoming messages for a voice
mail subscriber are stored.
voice message
In voice mail, a recording that is
made by a caller for later retrieval
by a subscriber.
Voice Message Service (VMS)
An Ericsson service that transmits
information between WebSphere
Voice Response and particular
switches.
voice messaging
The capability to record, play back,
distribute, route, and manage voice
recordings of telephone calls
through the use of a processor,
without the intervention of agents
other than the callers and those who
receive messages.
voice model
A file that contains parameters that
describe the sounds of the language
that are to be recognized on behalf
of an application. In WebSphere
Voice Server, this is a bnf file. See
also grammar.
Voice over Internet Protocol (VoIP)
The sending of telephony voice over
Internet Protocol (IP) data
connections instead of over existing
dedicated voice networks, switching
and transmission equipment. See
also gatekeeper and gateway.
voice port library
A library that manages a socket connection from the client to the voice technology. The library uses entry points that are provided by DVT.
Voice Protocol for Internet Messaging
(VPIM)
The standard for digital exchange of
voice messages between different
voice mail systems, as defined in
Internet Request For Comments
(RFC) 1911.
voice response unit (VRU)
A telephony device that uses
prerecorded voice responses to
provide information in response to
DTMF or voice input from a
telephone caller.
voice segment
The spoken words or sounds that make up recorded voice prompts. Each segment in an application is identified by a group ID and a segment ID and usually includes text.
voice server node
In a single system image (SSI), a
server node that contains the voice
data. This is usually the same node
as the database server node.
voice table
A grouping of voice segments that is
used for organizational purposes.
Voice tables can be referenced by
prompts, but not by state tables.
Contrast with voice directory.
voice technology
See technology.
VoiceXML
Voice Extensible Markup Language. An XML-based markup language for creating distributed voice applications. Refer to the VoiceXML Forum Web site at www.voicexml.org.
VoIP
See Voice over Internet Protocol.
VPACK
A component consisting of a base card, which connects to the digital trunk adapter in the pSeries computer, and a trunk interface card (TIC), which manages the trunk connection to the switch. The single digital trunk processor contains one VPACK, and the multiple digital trunk processor contains slots for up to five VPACKs. Contrast with SPACK and TPACK.
VPIM
See Voice Protocol for Internet Messaging.
VRU
See voice response unit.
W
WebSphere Voice Response
A voice processing system that combines telephone and data communications networks to use, directly from a telephone, information that is stored in databases.
wink start
A procedure that is used with some channel-associated signaling protocols to indicate when a switch or PABX is ready to accept address signaling. After seizure, the switch sends a short off-hook signal (wink) when it is ready to accept address information. Contrast with delay start and immediate start.
word spotting
In speech recognition, the ability to recognize a single word in a stream of words.
World Wide Web Consortium (W3C)
An organization that develops interoperable technologies (specifications, guidelines, software, and tools) to lead the Web to its full potential. W3C is a forum for information, commerce, communication, and collective understanding. Refer to the Web site at http://www.w3.org.
wrap
In ADSI, the concatenation of two columns of display data to form a single column.
Y
yellow alarm
See remote alarm indication.
Z
zero code suppression (ZCS)
A coding method that is used with alternate mark inversion to prevent sending eight successive zeros. If eight successive zeros occur, the second-least-significant bit (bit 7, with the bits labeled 1 through 8 from the most significant to the least significant) is changed from a 0 to a 1. AMI with ZCS does not support clear channel operation.
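The substitution rule described in the zero code suppression (ZCS) entry can be sketched as follows. This is an illustration of the bit manipulation only; real ZCS is applied within the AMI line coding of each transmitted octet, not to application data in software:

```python
def zcs_encode(octet: int) -> int:
    """Apply the zero code suppression rule from the glossary entry:
    if all eight bits are zero, change bit 7 (the second-least-significant
    bit, value 0b00000010) from 0 to 1, so that eight successive zeros
    are never placed on the line."""
    return octet | 0b00000010 if octet == 0 else octet

print(f"{zcs_encode(0b00000000):08b}")  # 00000010
print(f"{zcs_encode(0b01000001):08b}")  # 01000001 (nonzero octets pass through unchanged)
```

Because an all-zeros octet is altered in transit, AMI with ZCS cannot carry arbitrary user data transparently, which is why the entry notes that it does not support clear channel operation.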
List of WebSphere Voice Response and associated
documentation
Here is a list of the documentation for WebSphere Voice Response for AIX and
associated products. PDF and HTML versions of the documentation are
available from the IBM Publications Center at http://www.ibm.com/shop/
publications/order. Hardcopy books, where available, can be ordered through
your IBM representative or at this Web site.
WebSphere Voice Response for AIX documentation can also be found by going
to the IBM Pervasive software Web site at http://www.ibm.com/software/
pervasive, selecting the WebSphere Voice products link, and then selecting
the library link from the WebSphere Voice Response page.
PDF and HTML versions of the WebSphere Voice Response for AIX
publications are available on the CD-ROM supplied with the product. In
addition, WebSphere Voice Response for AIX, WebSphere Voice Response for
Windows, Unified Messaging, and other WebSphere Voice publications are
available together in PDF and HTML formats on a separately-orderable
CD-ROM (order number SK2T-1787).
Note: To read PDF versions of books you need to have the Adobe Acrobat Reader (it can also be installed as a plug-in to a Web browser). It is available from Adobe Systems at http://www.adobe.com.
WebSphere Voice Response software
v WebSphere Voice Response for AIX: General Information and Planning,
GC34-7084
v WebSphere Voice Response for AIX: Installation, GC34-7095
v WebSphere Voice Response for AIX: User Interface Guide, SC34-7091
v WebSphere Voice Response for AIX: Configuring the System, SC34-7078
v WebSphere Voice Response for AIX: Managing and Monitoring the System, SC34-7085
v WebSphere Voice Response for AIX: Designing and Managing State Table Applications, SC34-7081
v WebSphere Voice Response for AIX: Application Development using State Tables, SC34-7076
v WebSphere Voice Response for AIX: Developing Java applications, GC34-7082
v WebSphere Voice Response for AIX: Deploying and Managing VoiceXML and Java Applications, GC34-7080
v WebSphere Voice Response for AIX: Custom Servers, SC34-7079
v WebSphere Voice Response for AIX: 3270 Servers, SC34-7075
v WebSphere Voice Response for AIX: Problem Determination, GC34-7087
v WebSphere Voice Response for AIX: Fax using Brooktrout, GC34-7083
v WebSphere Voice Response for AIX: Cisco ICM Interface User's Guide, SC34-7077
v WebSphere Voice Response for AIX: MRCP for State Tables, SC34-7086
v WebSphere Voice Response for AIX: Programming for the ADSI Feature, SC34-7088
v WebSphere Voice Response for AIX: Programming for the Signaling Interface, SC34-7089
v WebSphere Voice Response for AIX: Voice over IP using Session Initiation Protocol, GC34-7093
v WebSphere Voice Response for AIX: Using the CCXML Browser, SC34-7092
v WebSphere Voice Response for AIX: VoiceXML Programmer's Guide, SC34-7117
IBM hardware for use with WebSphere Voice Response
v IBM Quad Digital Trunk Telephony PCI Adapter (DTTA): Installation and User's
Guide, part number 00P3119 (DTTA card)
WebSphere Voice Response related products
WebSphere Voice Server
The documentation for Version 5.1 of WebSphere Voice Server is provided in
the form of an HTML-based information center, and can be found at:
http://publib.boulder.ibm.com/pvc/wvs/51/en/infocenter/index.html
Unified Messaging for WebSphere Voice Response
v Unified Messaging: General Information and Planning, GC34-6398
v Unified Messaging: Subscriber's Guide (Types 0, 1, 2, 3, 4 and 9), SC34-6403
v Unified Messaging: Subscriber's Guide (Types 5, 6, 7 and 8), SC34-6400
v Unified Messaging: Administrator's Guide, SC34-6399
v Unified Messaging: Voice Interface, GC34-6401
v Unified Messaging: Web Services Voicemail API, SC34-6975
Unified Messaging publications can be found by going to the IBM Pervasive
software Web site at http://www.ibm.com/software/pervasive, selecting the
products link, and then selecting the library link from the Unified Messaging
page.
AIX and the IBM pSeries computer
For information on AIX Version 6.1, refer to the AIX V6.1 infocenter.
For information on System p5 and BladeCenter computers, refer to the IBM Power hardware infocenter.
HACMP
v HACMP for AIX: HACMP 5.4 Concepts and Facilities, SC23-4864-09
v HACMP for AIX: HACMP 5.4 Planning Guide, SC23-4861-09
v HACMP for AIX: HACMP 5.4 Installation Guide, SC23-5209-00
v HACMP for AIX: HACMP 5.4 Administration Guide, SC23-4862-09
v HACMP for AIX: HACMP 5.4 Smart Assist for DB2, SC23-5179-03
v HACMP for AIX: HACMP 5.4 Troubleshooting, SC23-5177-03
v HACMP for AIX: Enhanced Scalability Installation and Administration Guide, Volume 1, SC23-4284
v HACMP for AIX: Enhanced Scalability Installation and Administration Guide, Volume 2, SC23-4306
For more information on HACMP, refer to the HACMP Library and the AIX
V6.1 infocenter.
SS7
v SS7 Support for WebSphere Voice Response: SS7 User's Guide, GC34-7090
IBM SS7 Support for WebSphere Voice Response observes the applicable parts
of the following specifications for ISUP:
v CCITT Blue book (1988) Q.701 - Q.707
v ITU-T (formerly CCITT) Recommendations Q.700 - Q.716, Volume VI Fascicle
VI.7
v CCITT Blue book (1988) Q.711 - Q.714
v ITU-T White book (1993) Q.711 - Q.714
v CCITT Blue book (1988) Q.721 - Q.724
v ITU-T (formerly CCITT) Recommendations Q.721 - Q.725, Volume VI Fascicle
VI.8
v ITU-T White book (1992) Q.730 group
v CCITT Blue book (1988) Q.761 - Q.764
v ITU-T White book (1992) Q.761 - Q.764
v CCITT Blue book (1988) Q.771 - Q.775
v ITU-T (formerly CCITT) Recommendations Q.771 - Q.775, Q.791, Volume VI
Fascicle VI.9
ADC
v ADC NewNet AccessMANAGER™: Installation and Maintenance Manual
v ADC NewNet AccessMANAGER™: User Manual
Integrated Services Digital Network
WebSphere Voice Response ISDN support observes the applicable parts of the
following standards for User Side protocol:
Custom ISDN Standards:
v Northern Telecom DMS/250 Primary Rate Interface NIS A211-4 Release
8, July 1995. (IEC05 level)
v Northern Telecom DMS/100 Primary Rate Interface NIS A211-1 Release
7.05, May 1998. (NA007 & RLT)
v AT&T 5ESS Switch. ISDN Primary Rate Interface Specification. 5E7 and
5E8 Software Release AT&T 235-900-332. Issue 2.00 December 1991
v AT&T 5ESS Switch. ISDN Primary Rate Interface Specification. 5E9
Software Release AT&T 235-900-342. Issue 1.00 November 1993
(National ISDN only)
v Lucent 5ESS-2000 Switch ISDN Primary Rate Interface, Interface
Specification, 5E9(2) and Later Software Releases, 235-900-342. Issue
5.00 January 1997 (National ISDN only)
v AT&T ISDN Primary Rate Specification TR41449 July 1989
v AT&T ISDN Primary Rate Specification TR41459 August 1996
Euro-ISDN
The following documents refer to the specifications required for
observing ISDN:
v TBR4-ISDN; Attachment Requirements For Terminal Equipment To
Connect To An ISDN Using ISDN Primary Rate Access, Edition 1, Nov.
95, English
v CTR 4 - European Communities Commission Decision 94/796/EC
published in the Official Journal of the European Communities L
329, 20 December 94 (ISDN PRA)
National ISDN
National ISDN is described in the following publications:
v National ISDN, SR-NWT-002006, Issue 1, August 1991, published by
Bellcore
v National ISDN-1, SR-NWT-001937, Issue 1, February 1991, published
by Bellcore
v National ISDN-2, SR-NWT-002120, Issue 1, May 1992, published by
Bellcore
INS Net Service 1500
INS Net Service is described in the following publications:
v Interface for the INS Net Service Volume 1 (Outline), 7th Edition,
published by Nippon Telegraph and Telephone Corporation
v Interface for the INS Net Service Volume 2 (Layer 1 & 2 Specifications),
4th Edition, published by Nippon Telegraph and Telephone
Corporation
v Interface for the INS Net Service Volume 3 (Layer 3 Circuit Switching),
5th Edition, published by Nippon Telegraph and Telephone
Corporation
Bellcore Specifications for ADSI Telephones
The following Bellcore specification documents contain technical details of the
requirements for ADSI telephones, and the interface to voice response systems
such as WebSphere Voice Response:
v SR-INS-002461: Customer Premises Equipment Compatibility Considerations for the Analog Display Services Interface
v TR-NWT-001273: Generic Requirements for an SPCS to Customer Premises
Equipment Data Interface for Analog Display Services
Index
Numerics
3270 server
purpose of 47
3270 Servers option 76
3270 terminal emulation 195
configuring sessions for 67
monitoring sessions 71
4-Port Multiprotocol Communication Controller 177
5ESS 5E8 protocol 99
5ESS 5E9 protocol 99
A
Access menu 65
access to information 20
access to paging systems 16
accessibility xii
address signaling 94
administrator profile
use of 67
ADSI telephone 118
ADSI telephones
feature download management (FDM) 57
server display control (SDC) 57
AIX login account for SSI 172
analog connectivity 93
Analog Display Services Interface (ADSI) 118
Application Connectivity Link (ACL) 94
application profile 119
introduction 44, 68
application service providers, using WebSphere Voice Response 14
applications
CCXML 31
Java 38
telephone access to multiple 15
voice, examples of 5
applications, benefits of voice 3
AS/400 178
ASCII console 155
introduction 77
ASCII editor
developing state tables with 48
audio name
introduction 46
automated attendant 15
automated outbound calling 16
availability-based call routing 12
B
background music
overview 54
backing up 155
batch voice import (BVI) 59
blockage rate 122
busy hour calls 120
BVI
See batch voice import (BVI)
C
cable 153, 154
call hold time 120, 121
call information 72
call routing
availability-based 12
skill-based 11
call statistics
collecting 72
call transfer 135
call tromboning 55
called number 119
calling number information 115
CallPath Server 115
calls, inbound 17
calls, number of busy hour calls 120
calls, outbound 18
calls, transferring 19
CAS 84
CCS 96
CCXML
applications 31
CD-ROM drive (required for VCS Antares) 142, 143
channel 122
channel associated signaling (CAS) 84, 92
channel bank 93, 94, 155
channel identification 119, 120
channel identifying 119
channel increments 146
channel service unit (CSU) 94, 155
channels
estimating number required 122
number supported for Java and VoiceXML 161
Cisco Intelligent Contact Management (ICM) 117
client node 169
command-line import and export utilities 78
common channel signaling (CCS) 92, 96, 113
communications server
required level 144
concurrent network license model 146
Configuration menu 66
configuration planning 120, 122, 138, 141, 143, 156,
158
connection 83, 177
contact center, virtual 10
coordinated voice and data transfer 16
creating voice output for applications 59
CSU 94
custom server 174
monitoring 71
purpose 47
TTS_Client 52
Custom servers option 76
customer managed licensing 147
D
data communications network 177, 178
data transfer, coordinated 16
database server node 169
DDI (direct dial in) 119
designing voice applications 58
Destination Point Code (DPC) 109
Development menu 72
dialed number identification service (DNIS) 119
DID (direct inward dialing) 119
digital connectivity 84
Digital Trunk Telephony Adapter (DTTA) 151
direct dial in (DDI) 119
direct inward dialing (DID) 119
disk space 141, 143, 157
diskette drive 141, 143
distribution list 47
DMS100 BCS34, Northern Telecom 100
DMS250 IEC05, Northern Telecom 100
DNIS 119
DPC, Destination Point Code 109
DTTA 151
E
E1
CAS protocols supported 86
educational institutions, WebSphere Voice Response
used by 13
environment requirements 156
Erlangs 123
errors
system monitoring 70
estimating
disk space requirements 158
memory requirements 156
number of channels 122
telephony traffic 120
Euro-ISDN standard 100
exchange data link 94, 113, 119
expansion slot 150
F
fax
connection requirements 118
response 15
fax output
overview 52
FDM (feature download management) 57
feature download management (FDM) 57
financial institutions, WebSphere Voice Response used
in 12
G
garbage collection, Java 161
government agencies, WebSphere Voice Response used
by 13
Graphics Display Adapter 141, 143
greeting
voice mail 46
H
H.100 connections 153
hardware 94, 141, 143, 149, 155, 177, 196
optional 154
help
online editor 69
high-quality audio 150
I
IBM Support 184
identifying channels 119
inbound calls 17
incoming call 119
information access 20
information providers, WebSphere Voice Response used
by 13
Integrated Services Digital Network 96
Integrated Services User Part (ISUP) 108
intelligent peripheral 16, 98
Internet service providers, using WebSphere Voice
Response 14
ISDN 92, 96, 99, 106, 119, 121
ISUP, Integrated Services User Part 108
J
Java
applications 38
garbage collection 161
K
keyboard requirements 155
L
language
multiple, in applications 68
language support, WebSphere Voice Response 201
languages provided in WebSphere Voice Response 59
license 146
licensed program products 144
licensing WebSphere Voice Response 146
local area network (LAN) 177
location requirements 155
LPPs 144
Lucent 99
M
machine-readable media 155
mailbox, voice
introduction 46
memory, amount of 160
messaging, voice 19
migrating to a single system image 173
minimum configuration 141
mobile workforce, WebSphere Voice Response used
by 14
monitor, display 141, 143, 155
monitoring
custom servers 71
memory 70
system errors 70
telephone lines 70
terminal emulation sessions 71
mouse requirements 155
multiple systems and applications, telephone access
to 15
N
National ISDN 100
national language support 59
network licensing environment 147
node of SSI 169
Non-Facility Associated Signaling (NFAS) 104
Nortel 100
Northern Telecom DMS100 BCS34 100
number of busy hour calls 120
O
Operations menu 69
optional hardware 154
OSI model 108
outbound calling, automated 16
outbound calls 18
P
pack configuration option 66
paging systems 16
performance
channel guidelines 161
guidance 159
Java garbage collection 161
memory requirements 160
performance (continued)
size of processor 159
peripheral, intelligent 16
planning and designing voice applications 58
PRA 96
prerequisite software 144
PRI 96
primary rate access (PRA) 96
primary rate interface (PRI) 96
printer 155
processor, size of 159
prompt
editing 75
purpose 45
supplied with WebSphere Voice Response 45
prompt directory 75
pSeries computer 141, 143, 149, 156, 158, 177
Q
queuing 135, 138
R
random access memory (RAM) 141, 143, 156
monitoring 70
recommended configuration 143
recording 150
voice segments 60
voice segments using the telephone 60
remote computer connections 178
requirements
Voice Toolkit 30
retail voice application 6
S
SDC (server display control) 57
server
See 3270 server 47
See custom server 47
See fax server 47
See speech recognition server 47
See text-to-speech server 47
server display control (SDC) 57
service industries, WebSphere Voice Response used
in 13
shutting down WebSphere Voice Response
immediate shutdown 72
quiesce shutdown 72
signaling interface 113
signaling process 119
signaling protocols 85, 86, 92, 94, 96
Signaling System 7 (SS7) 106
See also SS7
Simple Network Management Protocol (SNMP)
introduction 78
Simplified Message Desk Interface (SMDI) 94
Simplified Message Service Interface (SMSI) 94
Index
249
single system image 163, 169, 170, 172, 173, 174
size of processor guidelines 159
skill-based call routing 11
soft/hard stop licensing policy 147
Source Point Code (SPC) 109
SPC, Source Point Code 109
SR-INS-002461 Bellcore specification 245
SS7 92
Integrated Services User Part (ISUP) 108
ISUP, Integrated Services User Part 108
links 108
messages 108
network components 108
protocol stack 108
support for ISUP 108
stand-alone system 170
state table
actions, introduction 44
components 44
creating with an ASCII editor 48
editing 75
statistics
call management 72
terminal emulation sessions 71
voice application 72
subscriber class
menu option 68
purpose 47
Summa Four PBX switch 100
summary of WebSphere Voice Response
capabilities 20
supply chain management, WebSphere Voice Response
used in 12
support 184
switch 84, 135, 138, 155
System Configuration option 67
system management
introduction 69
system monitoring
using graphical display 69
system prompts 45
system variables 45
system voice segments 59
System/370 178
System/390 178
systems, telephone access to multiple 15
T
T1 signaling protocols supported 85
tape drive 141, 143, 155
Telecommunications Device for the Deaf (TDD)
language supplied for 68
overview 52
telecommuting, WebSphere Voice Response used in 14
telephone access to multiple systems and
applications 15
telephone network 83
telephone operating companies, WebSphere Voice
Response used by 14
telephony
run time components 51
telephony traffic
estimating 120
Telstra network 89
text-to-speech
overview 51
TR-NWT-001273 Bellcore specification 245
TR41449 protocol 99
traffic
telephony, estimating 120
transaction-related voice messaging 16
transfer, coordinated voice and data 16
transferring calls 19
transportation industry, WebSphere Voice Response
used in 13
trombone, call 55
TS003 signaling 91
TTS_Client custom server
introduction 52
U
Ultimedia Audio Adapter 74, 150
unified messaging 19
using WebSphere Voice Response 65
utilities voice application 8
V
virtual contact center 10
voice application 119
designing 58
installing 68
second point of contact in retail example 8
statistics 72
voice applications
benefits of 3
capabilities, summary 20
inbound calls 17
information access 20
introduction 3
outbound calls 18
real world examples 5
retail example 6
transferring calls 19
utilities example 8
voice messaging 19
voice data 157
voice database
purpose 46
voice directory 74
voice mail 15
Voice Message Service (VMS) 94
voice messaging 19
how it works 46
transaction-related 16
voice portals 14
voice response
introduction 15
voice segment
available languages 59
editor 60
importing in batch 59
recording 60
recording and editing 74
storing 74
supplied with WebSphere Voice Response 59
voice server node 169
voice table 74
Voice Toolkit
requirements 30
voice transfer, coordinated 16
VoiceXML
applications 33
W
Web
first point of contact in retail example 7
WebSphere Voice Response
capabilities, summary 20
language support 201
WebSphere Voice Response services
automated attendant 15
automated outbound calling 16
coordinated voice and data transfer 16
fax response 15
intelligent peripheral 16
paging systems, access to 16
telephone access to multiple systems and
applications 15
voice mail 15
voice messaging 16
voice response 15
Welcome window
introduction 65
wide area network (WAN) 177
Product Number: 5724-I07
GC34-7084-04