
Data management

Pega® Platform includes powerful and flexible facilities for defining data structures and managing data.
You can define individual fields, arrays and repeating groups, multilevel structures, and a variety of
other data structures. To support the exchange of data with other systems, your application can convert
a Pega Platform data structure into a fixed record layout, an XML document, a message, or a relational
database row.

Pega Platform data management facilities


The basic element in a Pega Platform data structure is called a property. To create a property, you
define its characteristics, including name and type. For example, a property named DateofBirth typically
has a date type. Other types include numbers, currency amounts, text fields, and Boolean (true or false)
values.

In addition to a name such as DateofBirth, most property rules belong to a work type, which is a
container that defines the scope of the name. The work type is part of the full property name. For
example, the full name LoanApplication.DateofBirth identifies a property DateofBirth, where
LoanApplication is the work type and the period is a separator.

To reduce development effort, Pega Platform includes thousands of property rules, known as standard
properties, which you can use in your application. You need to create additional property rules only for
those properties that are unique to your business situation and environment.

Most properties identify a single value such as a date, time, number, or text string. Other properties
define arrays of scalar values, or data structures that contain a set of property values, known as pages.
Pages can contain other properties, including other pages, or arrays of other pages.

For example, a data structure that describes a family insurance policy can include an embedded page
for each member of the family, with individual properties for the person's first name, date of birth, and
so on. The "dot notation" that is used in many software development environments allows you to
specify the full details of any property, even one that is embedded deeply within pages that are
contained in other pages that are also contained in other pages. If the entire structure is named Policy,
and the information about each family member is an array of pages named InsuredPerson, then the
date of birth property for the third family member has this full name:

Policy.InsuredPerson(3).DateofBirth
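
For readers who think in code, the following self-contained Java sketch models the same idea. This is
an illustration only, not Pega engine code; the Policy and InsuredPerson names come from the example
above, and everything else is hypothetical. Each page is modeled as a map, and a page list as a list of
maps (subscripts are 1-based in Pega, 0-based in Java).

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PolicyExample {
    public static void main(String[] args) {
        // A "page" is modeled here as a map of property names to values.
        Map<String, Object> person3 = new HashMap<>();
        person3.put("FirstName", "Ann");
        person3.put("DateofBirth", "1980-05-14");

        // "InsuredPerson" is modeled as a page list of embedded pages.
        List<Map<String, Object>> insuredPerson = new ArrayList<>();
        insuredPerson.add(new HashMap<>()); // InsuredPerson(1)
        insuredPerson.add(new HashMap<>()); // InsuredPerson(2)
        insuredPerson.add(person3);         // InsuredPerson(3)

        // "Policy" is the top-level page that embeds the page list.
        Map<String, Object> policy = new HashMap<>();
        policy.put("InsuredPerson", insuredPerson);

        // The equivalent of Policy.InsuredPerson(3).DateofBirth:
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> list =
            (List<Map<String, Object>>) policy.get("InsuredPerson");
        System.out.println(list.get(2).get("DateofBirth")); // prints 1980-05-14
    }
}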

Key Features and Attributes


Properties have the following key features and attributes.

Property mode and type


The simplest properties identify a single, specific value. For example, in an order entry system,
properties such as Subject, Department Name, Department Number, Bill Customer, Order Date, and
Note identify simple, scalar values.

The mode of these properties is Single Value, also called scalar. Properties that hold multiple values can
be arrays or other structures. For simple arrays, the property modes are Value List and Value Group.

For properties of mode Single Value, Value List, and Value Group, a type is also defined. The following
property types are the most popular:

Integer – A positive or negative whole number, such as 33, 1066, or -123,456,908.


Date – A single day, represented internally in the format YYYY-MM-DD, such as 2017-10-31. The
date value can appear in forms and be displayed in other formats such as 10/31/17 or October 31,
2017.
DateTime – A time stamp that identifies a moment in time in a specific time zone. The day,
hour, minute, second, and millisecond are included.
Decimal – A positive or negative number plus a decimal fraction with an explicit number of decimal
places, such as 43.7 or -15.
Text – A text value that is one or more characters in length.
TrueFalse – A value of true or false, also called Boolean.

Internal property value representation and external formats


All integer properties are saved in a single, universal format internally. However, in some displays,
integers may be formatted with commas such as 1,234,567. In other situations, integers can be
displayed with a leading sign (+ or -), or negative integers may be shown in parentheses or red text.

Similarly, dates, times, and decimal values each have one internal format but many possible
presentations on forms and reports. When you enter a value, it is converted automatically to the
internal format for efficiency in storage and processing. When a value is presented as output, the
internal form is converted to the appropriate output format.
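
As an illustration of this one-internal-format, many-external-formats idea, the following
self-contained Java sketch (plain Java using java.time, not anything Pega-specific; the sample value
is hypothetical) renders one stored date in two of the display formats mentioned above:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class DateFormatExample {
    public static void main(String[] args) {
        // One internal representation (YYYY-MM-DD, as described above)...
        LocalDate internal = LocalDate.parse("2017-10-31");

        // ...converted to different external formats for forms and reports.
        System.out.println(internal.format(
            DateTimeFormatter.ofPattern("M/d/yy", Locale.US)));       // 10/31/17
        System.out.println(internal.format(
            DateTimeFormatter.ofPattern("MMMM d, yyyy", Locale.US))); // October 31, 2017
    }
}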

Clipboard pages (data structures)


As you interact with the system, Pega Platform builds and maintains a collection of pages of data known
as the clipboard. Each user's clipboard is separate and dedicated to that user. The clipboard occupies
memory on the server that supports your Pega Platform system.

When you log in, your clipboard is initially empty and is assembled with initial data from several
sources. When you log out, your clipboard is erased and the memory is released so that the memory
becomes available to others.

Most interactions update the clipboard, but it is not ordinarily visible. The Clipboard tool is useful when
you build and test rules. The Clipboard tool shows the structure of pages in the left panel, and the
detailed contents (for Single Value properties) in the right panel.

Clipboard

In the preceding example, the left panel presents the user's entire clipboard, as a tree structure of
pages, including pages within pages. The right panel shows the property names and current values of
the page that is highlighted in the left panel.

Property form
The value of a property can change from moment to moment, and from user to user, but you determine
the other permanent characteristics of a property in a rule form. For example, this Property rule form
identifies the property type (Decimal) of the property Newb-Newbies-Work.LineItemTotal, and indicates
that values of the property typically appear as a currency amount, such as $1,234.56.
Property form

As development of a business application progresses, the development team might discover that new
properties are needed beyond those already available. To create a property, you define it on the
Property rule form.

Value definitions for one or more properties


The Property rule form does not include an initial or default value for a property. The most appropriate
initial value depends on the situation or application. You can use a data transform to set values for one
or more properties.

The following data transform sets the value of three properties to 0, and sets a text property to "C-".

Data transform form
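
As a sketch of what such a data transform might contain (the property names here are hypothetical;
the actual rule is configured on the Data Transform form rather than written as code):

Action   Target           Source
Set      .ItemCount       0
Set      .TotalWeight     0
Set      .LineItemTotal   0
Set      .OrderPrefix     "C-"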

Terminology
Aggregate property – A general term for any property that has multiple values, such as a simple
array (property modes Value List and Value Group), a structure (property mode Page) or a
repeating structure (property modes Page List and Page Group).
Class – A named "container" that defines which properties can be present on a page. Classes are
organized into a hierarchy or tree structure.
Clipboard – An internal memory structure for a user, holding the pages and property values in
current use. The clipboard is established when you log in, updated as you work, and disappears
when you log out.
Data transform – A rule that assigns values to one or more properties, often used only for initial
values that may change later. The value can be a constant (such as 7 or "Hello") or an expression
that involves other property values.
Embedded page – A page that is not a top-level page; a page that defines a substructure of a
higher-level page.
Group – A repeating structure with elements identified by a text name, such as a state code. For
example, the Value Group property StateCapitol may contain the element StateCapitol(MA) for
Boston, Massachusetts. There is no defined order for members of a group.
List – A repeating structure with elements identified by numeric subscripts starting at 1. Value Lists
correspond to simple arrays. Page Lists are an array of pages.
Page – A collection of property names and values where all the properties belong to a common
class (or to an ancestor of that class).
Property – A rule that defines the name, class, property mode, and other characteristics of a
property. Informally, the name and current value of a property: a name-value pair.
Property mode – An attribute of each property that identifies whether it holds one value (Single
Value mode) or an array of values (Value List and Value Group modes) or a structure (Page, Page
List, and Page Group modes).
Property type – For Single Value properties and simple arrays (property mode Value List or Value
Group), the type identifies how to validate and interpret the value. The most popular types are
Text, Date, DateTime, Integer, and Decimal.
Single Value – The property mode for scalar properties, corresponding to a single text, date,
numeric, or other value. The opposite of aggregate properties.
Top-level page – A page on the clipboard that is not part of any other page.

Record editor and data import


You can add, edit, and delete data type records by using the record editor. You can also import data
from a .csv file and export data to a .csv file from the record editor. To access the record editor, from
the Data Explorer, click a data type, and then click the Records tab.

Modifying the data import process


You can add, update, or delete data records for your data types in Data Designer by uploading .csv files
that contain a large number of records. You can import data from a variety of sources, even if the fields
do not match. The bulk import process gives you more control over the data in your application. You
can select actions that are specific to each field to ensure that your data is updated correctly and is not
corrupted during the import process.

The import process is extensible, and you can do the following tasks:

Manage data import purposes
Customize the data import wizard
Process data before and after import
Customize import options during data import

Managing data import purposes


You can manage data import purposes (the reasons for importing data from .csv files) that are available
to users when they run the data import wizard. You can also disable or remove data import purposes.

Adding data import purposes


You can add data import purposes that can be selected when a .csv file is imported. For example, you
can add an import purpose for the Employee data type. At run time, you select the import purpose and
upload a .csv file in the Upload file step. In the Map fields step, when you map the Employee ID field in
the Employee data type with the Staff ID field in the .csv file and click Next, the system passes the new
import purpose and list of staff IDs in the .csv file to the activity.

Override the pyLoadCustomImportPurposes data transform. For example, you can add an Update
locations purpose as shown in the following figure:
Adding a data import purpose

The new purpose is displayed in the Purpose list of the Upload file step in the data import process:

New data import purpose at run time

In the Map fields step, you can map the field in your data type that acts as a unique identifier with a
field in the imported .csv file. The import process passes the new purpose and list of values for the
uniquely mapped field in the .csv file to the activity that defines the logic for data import
(pyCustomPostProcessing).

Removing data import purposes


Beginning with Pega 7.2.2, you can remove data import purposes.

Override the pyLoadCustomImportPurposes data transform.

Removing a data import purpose

The Delete purpose is not displayed in the Purpose list of the Upload file step in the data import
process:
Data import purpose hidden at run time

Setting a default data import purpose


Beginning with Pega 7.2.2, you can mark a data import purpose as a default choice.

1. Override the pyPreProcessDataImport activity.


2. Set the value for the pyDefaultImportPurpose property to the import purpose that you want as the
default selection.

Setting the default data import purpose

For example, if you set Update locations as the default import purpose, it is displayed as the default
selection in the Purpose list of the Upload file step in the data import process. You can save time during
data import if you update locations for your data type on a regular basis.

Default selection of data import purpose at run time


You must set the value for the pyDefaultImportPurpose property to an existing import purpose. If the
import purpose does not exist, the system displays Add or update as the default purpose.

Disabling the selection of data import purposes


Beginning with Pega 7.2.2, you can prevent the selection of import purposes in the data import process.

1. Override the pyPreProcessDataImport activity.


2. Set the value for the pyDisableImportPurpose property to true.

Disabling the selection of the data import purpose

When you set the value for the property to true, the system does not allow you to select an import
purpose from the Purpose list of the Upload file step in the data import process. The default value in this
list is Add or update or the value that you set for the pyDefaultImportPurpose property. For example, to
allow only location updates for your data type, you can set the value for the pyDefaultImportPurpose
property to UpdateLocations and the value for the pyDisableImportPurpose property to true.
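
As a sketch, the overridden pyPreProcessDataImport activity for that example could contain two
Property-Set steps (assuming, as an illustration, that both properties are set on the activity's
primary page):

Step 1   Property-Set   .pyDefaultImportPurpose = "UpdateLocations"
Step 2   Property-Set   .pyDisableImportPurpose = true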

Selection of data import purpose disabled at run time

Customizing the data import wizard by adding fields and logic

You can customize the data import wizard to add new fields and logic during data import. You can also
define a default unique field mapping for data import, skip key validation, and customize data import
for a data type.

Skipping key validation


When you import a .csv file, you need to validate the key field in the imported .csv file by mapping it to
the field in your data type that acts as a unique identifier. This mapping is not possible in all cases.
Beginning with Pega 7.2.2, you can skip key validation during the data import process. This feature is
useful when you import records without keys and expect to generate keys during the import process by
using custom APIs in the preprocess extension point.

1. Override the pyPreProcessDataImport activity.


2. Set the pySkipKeyValidation property to true.

Disabling key validation

When you set the value for the property to true, the key field is not validated during the data import
process.

Adding fields for customizing data import


You can add fields to customize data import for a new import purpose. For example, you can add a New
location field as shown in the following figure:

New field for data import

Override the pyCustomConfiguration section to add new fields.

The New location field is displayed in the Import records step of the data import process. You can enter
a value for this field (for example, New York) at run time:

New field at run time

The activity that defines the logic for data import can access the work object that contains this field. If
you want to collect new information, add fields to the work object class, and not to some other top-level
page.

Adding logic for data import


You can define the logic for the data import that corresponds to new and existing purposes and fields.

You must define the logic for data import if you have added a new purpose.
The import wizard does not call the pyCustomPostProcessing activity for the Delete import
purpose.
You cannot use custom post-processing for an add-only import of a class that does not have a
BLOB and that uses autogenerated keys.

1. Override the pyCustomPostProcessing activity.


2. Provide the parameters on the Parameters tab of the activity:
Purpose – The new data import purpose.
UniqueFieldsList – A list of HashStringMap objects. The list contains the unique field and value
pairs in the imported .csv file. You can create and use a page for each pair in this activity.
ListSize – The number of values for the uniquely mapped field in the imported .csv file.

You can use these variables as input parameters for the activity.

For example, you can do the following actions:

Use a loop to iterate through the list of values for the uniquely mapped field in the imported .csv
file, as shown in the following figure:

Use the function @Utilities.pxGetKeyPageFromList(param.UniqueFieldsList, param.pyForEachCount)
to get the page from the list at the current index. This function returns a clipboard page that you
can use in the activity.

The import wizard calls this activity when you click Import in the Import records step of the data import
process.
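
Putting these pieces together, an overridden pyCustomPostProcessing activity might be sketched as
follows. This is an outline only; the step that processes each page depends entirely on your import
purpose.

Step 1     Loop through the list (param.ListSize iterations)
Step 1.1   Call @Utilities.pxGetKeyPageFromList(param.UniqueFieldsList, param.pyForEachCount)
           to get the page for the current unique field and value pair
Step 1.2   Use the returned clipboard page to locate and update the matching record
           (for example, with Obj-Open and Obj-Save)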

For example, if you override this activity to update all the locations of your Employee data type to a
common location such as New York, the data type is updated as shown in the following figure:

Locations updated after calling the activity

Defining a default unique field mapping for data import


Beginning with Pega 7.2.2, you can define a default mapping between the fields in the .csv file and the
fields in your data type that act as a unique identifier (key). This default mapping saves time during the
import process. For example, if you import data for your Employee data type on a regular basis, you can
define a default mapping between the Staff ID field in the .csv file and Employee ID field in your data
type.

Override the pyCustomUniqueSourceFieldMapping activity.


Defining a default field mapping for data import

In the Map fields step of the data import process, the system sets the default value of the Match
existing records by field to the value that you used in the activity (Staff ID) and marks the mapped field
in your data type (Employee ID) as a record identifier:

Default field mapping at run time

Processing records and data before and after import


Beginning with Pega 7.2.2, you can perform setup actions before the data import process, such as
defining a new table or data record, and cleanup actions after the data import process, such as closing
an open stream or transaction. For example, you can create a new segment to add the records that you
want to import for your Employee data type before the system imports the data. Additionally, you can
perform actions on the data records such as copying them before they are imported. After the data is
imported, you can close the segment.

Processing data before and after import


To perform setup actions before the system imports data, override the
pyPreImportBatchProcessing activity.
To perform cleanup actions after the system imports data, override the
pyPostImportBatchProcessing activity.

The Pega 7 Platform calls these activities from the pzImportRecords activity, which processes the data
records that you upload, based on the import purpose.
Calling the pyPreImportBatchProcessing and pyPostImportBatchProcessing activities from the pzImportRecords activity

Record processing before import


You can perform actions on a data record before you import it. For example, if you are importing data
for your Employee data type, you can validate each record and, if the record is valid, copy the record to
another data type before the record import.

To process records before import, override the pyPreProcessRecordOnImport activity.

Customizing import options during data import


You can configure the data import wizard to import data in the background or outside the Data
Designer.

Importing data in the background


Beginning with Pega 7.2.2, you can skip the screen that shows the data import progress after you click
Import in the Import records step of the data import process. You can return to the Data Designer while
the data import process runs in the background. This action also skips the addition of the data import
item to your worklist, which prevents your worklist from getting crowded.

1. Override the pyPreProcessDataImport activity.


2. Set the value of the pySkipImportStatusUiAndAssignment property in the activity to true.
Skipping the import progress screen

You can see the data import progress on the D_pxDataImportProgress data page.

Importing data outside the Data Designer


Beginning with Pega 7.2.2, you can import data from anywhere in your application without navigating to
the Data Designer.

Run the pxImportRecordsAPI activity to import data outside the Data Designer with the following
parameters:
dataImportPage – Page that contains all the details required to upload data records for
import. This page has the following information:
pyImportPurpose – Data import purpose
pyDataImportClass – Class for which records are imported
pyClassName – Class for which records are imported (same as pyDataImportClass)
pyDataImportFilePath – Location of the .csv file from which records are imported
pyFieldMappingsForAPI – Page List property of Embed-FieldMapping class that holds the
mapping between the .csv file and the class property
pyListSeparator – List separator that is used to split the records in the .csv file
pyLocale – The locale that is used to show the messages
pyID – Unique ID of the data import process
isAsynchronous – Boolean value that identifies whether the data import process is
asynchronous or synchronous
processID – Unique ID that allows you to check the data import progress details. If you want to
pass this ID, its value should appear in the dataImportPage as pyID; otherwise, the
pxImportRecordsAPI activity creates a new ID.
errorFile – Name of the .csv file containing the erroneous records that were encountered while
processing data for import
The pxImportRecordsAPI activity for data import outside the Data Designer

To see the progress of the data import, you can call the D_pxDataImportProgress data page by passing
the value of processID for your data import.
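
For example, a caller might assemble the page and run the activity as follows. This is a sketch that
uses only the parameter and property names listed above; the page name, class, and literal values are
hypothetical.

1. Create a page, for example MyImportPage, and set:
   .pyImportPurpose = "Add or update"
   .pyDataImportClass = "MyOrg-Data-Employee"
   .pyClassName = "MyOrg-Data-Employee"
   .pyDataImportFilePath = the location of the .csv file
   .pyListSeparator = ","
2. Call pxImportRecordsAPI with dataImportPage set to MyImportPage and isAsynchronous set to true.
3. Pass the generated pyID as processID to the D_pxDataImportProgress data page to check progress.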

You can also use the pxImportRecordsAPI activity when you import data in the Data Designer. For more
information, see Processing records and data before and after import.

Configuring a data import link


Beginning with Pega 7.2.2, you can include the link for data import anywhere in your application and
call it in the background from a service or agent.

1. On the properties panel of a cell, click Actions.


2. Add an action set, and specify the Flow in Modal Dialog action with the pxDataRecordsImport flow.
Configuring the data import link

Adding a Record Editor gadget to an application


You can add a Record Editor gadget (pxRecordsEditor) to your application so that users can view and
edit data records anywhere in the application, not just in the Data Designer. You can also decide which
fields to display, referred to as a view, based on what the user is doing. For example, at times users
might only need to see the fields for contacting customers, and at other times they might need to
search and filter by department. You can quickly and easily create these views by using the Record
Editor gadget.

The high-level steps for adding a Record Editor to an application are:

1. Create a section that includes the Record Editor gadget that can be added to your application.
2. Create a report definition to specify which columns are displayed in the Record Editor.
3. Configure the Record Editor gadget to specify the functionality to include, for example, full-text
search and the ability to add, delete, import, and export data.

Limitations
The Record Editor gadget has the following limitations:

The Record Editor gadget cannot be used in the New, Review, and Perform harnesses.
A report definition that uses summary functions is not supported.
When you add or update calculated properties, the properties that use functions in the report and
the joined class properties are shown as read-only.
Reports with parameters are not supported.

Creating a section that includes the Record Editor gadget


To use the Record Editor in your application, add the Record Editor gadget to an embedded section,
which can then be included in your application.

1. Click +Create > User Interface > Section.


2. In the Label field, enter a short description.
3. In the Apply to field, select the class that represents the data that you want to display.
4. In the Add to ruleset field, select the ruleset to which this section applies.
5. Click Create and open.
6. Click Layout.
7. Click Embedded section and drag it to where you want it located in the section.
8. In the Section Include dialog box, in the Section field, select pxRecordsEditor to add the Record
Editor gadget to the section.

Section Include dialog box

9. Click Submit.
10. Click Save.

For more information about sections, see About sections.

Creating a report definition


The report definition determines which columns are displayed in the Record Editor. It can include joined
class properties. After you create the report definition, you configure the Record Editor gadget to use it.

1. Click +Create > Reports > Report Definition.


2. In the Label field, enter a short description.
3. In the Apply to field, select the class that represents the data that you want to display. The class
must be the same class that you selected when you created the section.
4. Click Create and Open.
5. Select the columns that you want to display in the Record Editor gadget.
6. Click Save.

For more information about creating reports, see Creating reports in Designer Studio.

For more information about report definitions, see Report Definition rule form.

Configuring the Record Editor gadget


You can configure the Record Editor features that will be available in your application.

1. Open the section that contains the pxRecordsEditor section.


2. Click the View properties icon in the pxRecordsEditor section.
3. Click the Parameters tab.
Layout Properties dialog box

4. In the Data Source Class Name field, select the class name (pyClassName) of the data type that
has the records that you want to display.
5. In the Report definition Name field, select the report definition that you want to use to display the
data.
6. Optional for screens that contain only one view: In the Report Page Name field, enter the name of
the view that you want to use.
You can create a report page and populate the data on the page before rendering the gadget
during run time, or you can populate the data on the page at run time.
7. Optional: Select the options that you want to make available to users in the application.
Show import and export – When selected, displays the import and export buttons. When a
user clicks Export, the current view is downloaded as a .csv file. When a user clicks Import,
the user can choose a .csv file to import and map the fields in the .csv file to the fields in the
data type; only the mapped fields are imported. For more information about importing, see
Importing data for a data type. For more information about exporting, see Exporting data
from your application.
Show search – When selected, displays the Search field. The Search field filters the results to
display only those records that contain the search text in any field in the report definition.
Use full text search – When selected, allows full-text search. Full-text searches are performed
against the global search index instead of against the database. For more information, see
Full-text search and Enabling and disabling classes for search indexing.
Hide the add option – When selected, the add option is not displayed. If the add option is
enabled, only non-calculated property values for the current class can be added or edited.
Calculated property columns or other class property values will be in read-only mode. This
option is always hidden for Work- records, regardless of how this parameter is set.
Hide the delete option – When selected, the delete option is not displayed. When a user clicks
Delete, the current class record is deleted from the database. If the Report Definition contains
joined classes, the class record is not deleted. This option is always hidden for Work- records,
regardless of how this parameter is set.
8. Click Submit.

CRM data types in Pega Platform


Customer Relationship Management (CRM) core data types are now included with the Pega® Platform,
making it easier to bring your data into your Pega application. You can either import your data into
the CRM data types or use integration to communicate with your own system of record. These tables are
useful, for example, when you want to use Pega Platform as both an application and a system of record
in your cloud environment.

The tables are part of the CustomerData database schema.

The CRM data types are: Contact, Address, Organization, Role, ContactOrgRel, OrgOrgRel, and
ContactContactRel. These data types and related rules are part of the Pega-SharedData ruleset. The
Pega-SharedData ruleset is included in PegaRules.

CustomerData is set up during deployment and cannot be changed.

All tables in the CustomerData schema are non-Pega formatted; that is, they do not contain any Pega
internal columns such as a BLOB column, pzinskey, or pyObjClass. Any new tables that you add to
CustomerData are added as non-Pega formatted tables. Pega formatted tables cannot be added to
CustomerData.

The tables and their relationships to one another are shown in the following graphic:
CRM data model
Date time formats supported by the Data Import wizard for data types

When data for a data type is imported, the date time formats in the import file must match a supported
format or the import fails. The Data Import wizard supports several categories of date time formats
against which it attempts to match the incoming date time format. When a matching date time format is
found, the Data Import wizard stops looking for a match.

If a custom date time format is entered in the Data Import wizard, the Data Import wizard uses it and
does not look for a match in the other date time categories.

The supported date time formats, in the order in which the Data Import wizard attempts to find a
match, are as follows; sample values appear after the list:

1. Extension point – Data page or data transform that uses the Pega-supplied
D_pyCustomDateFormats data page as its source. The extension point supports Pega-supplied
formats and simple date formats. The record editor class is Data-Metadata-CustomDateFormat.
2. ISO-8601 universal formats – These formats are not locale-specific.
3. Other supported format – yyyy-MM-dd HH:mm:ss
4. Locale-specific – The default Microsoft Excel date time formats for the user's locale for locales
supported by Pega® Platform. For formats not supported by Pega Platform, the default Microsoft
Excel format for the United States locale is used. For a list of supported locales and their formats,
see Locale settings - date time formats. For information about Microsoft Excel date time formats,
refer to the Microsoft Excel documentation.
5. Pega ISO format (Pega default format) – yyyyMMdd'T'HHmmss.SSS 'GMT'
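
Sample values for three of these categories (the format strings for categories 3 and 5 are quoted
from the list above; the sample values themselves are illustrative):

yyyy-MM-dd HH:mm:ss            2017-10-31 16:22:23
ISO-8601                       2017-10-31T16:22:23Z
yyyyMMdd'T'HHmmss.SSS 'GMT'    20171031T162223.000 GMT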

Classes and properties


A class groups a collection of rules or other objects. Each class defines capabilities (rules that include
properties, activities, and HTML forms) that are available to other subordinate classes or to instances of
the class. Classes are organized into a hierarchy, where the system searches the class hierarchy from
the current class upward when looking for a rule to apply. A class is an instance of the Rule-Obj-Class
rule type.

A property defines the format and visual presentation of data in your application. Each property can be
associated with a value and has a mode and type. The property mode, such as Single Value or Page
List, defines the structure that is used to store the property value. The property type, such as Text or
Date, defines the format and allowable characters in the property value.

Page List and Page Group properties


A Page List property is an ordered list of embedded pages that are referenced by a numeric subscript,
starting with 1. Page List properties are useful for holding lists of data. A Page Group property is an
unordered set of pages that is referenced by a string subscript. Page Group properties are useful when
you need to look up specific pages by a unique identifier and the order of the pages does not matter.

The values of Page List and Page Group properties are stored in the BLOB column; that is, these values
are not columns in the database. You can use a Declare Index rule to expose Page List and Page Group
properties so that you can report on them. For more information, see About Declare Index rules.

Page List or Page Group creation


When you create the property, set the Property Mode to Page List or Page Group. Set the PageClass
to the class that represents the structure for each embedded page. For example, the standard Page
Group property Work-.pyWorkParty has pages of class Data-Party.

Ad hoc Page Lists are also created and filled when you do a database search by using the Obj-List
method. For example, consider the work objects entered by an operator. These work objects are not
explicitly listed in the Operator table, but you can write a query by using the Obj-List method to
generate a list of those records from the database.
List item modification


To edit, delete, or add Page Group or Page List elements on a work page, create a custom activity. See
the standard activities Work-.PartyCreate and Work-.PartyRemove for examples. You can also modify list
items by using a data transform with the Remove and Update Page methods.
List item iteration
To iterate through the elements of a Page Group on a single step of an activity, use the For each
embedded page option on the Loop button as shown in the following example. To iterate through
elements of a Page List, use a data transform.

For each embedded page option

The following table shows how to access the current page and the iteration counter in method
parameters and in Java steps.

Access                  Current page                    Iteration counter
In method parameters    (it is the default step page)   param.pyForEachCount
In a Java step          myStepPage                      tools.getParamValue("pyForEachCount")
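
For example, a Java step inside such a loop might read as follows (a fragment rather than a complete
program; .pyFirstName is a hypothetical property on each embedded page):

// myStepPage is the embedded page for the current iteration.
String subscript = tools.getParamValue("pyForEachCount"); // iteration counter
String firstName = myStepPage.getString(".pyFirstName");  // hypothetical property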

Large property value storage


Pega® Platform does not restrict the length of property values. However, you can set the size limit for a
property value by setting the Max length field on the Advanced tab of the Property form. Long
properties, especially if you have thousands of them in memory, can affect performance, including
memory and database traffic. The JVM memory limit for the system must be large enough to handle the
number of long properties that are processed.

In addition to performance considerations, you might not be able to expose properties with long values
as database columns, depending on your database software's aggregate size limit for exposed columns.
If a property exceeds your database's aggregate size limit, making the property too long to expose as a
database column, consider dividing the property and storing it as a number of shorter properties.

If your property is exposed, set the property value size in the Max length field to equal the maximum
size of the database column to ensure that your data fits into the column.

Data Designer
You can review, manage, and update data types in your application by using the Data Designer. When
you select a data type in the Data Explorer, the data type opens in the Data Designer.

Data Designer shows the data model for your data type, usage throughout your application, the physical
systems of record that store data of this type, the records editor for browsing and editing records of this
type, and the data visualizer that helps you to understand how this data type is related to the rest of the
case and data types in your application.

How to change the Type of a property rule in a higher RuleSet Version

Summary
You can create a property rule that overrides an existing property rule (one with the same name and
the same or subclass Applies To key part) that is in a lower version of the RuleSet or in a RuleSet and
version that is lower on your RuleSet list. However, Process Commander restricts the Type values for
the new property.

If you update or override a property rule of mode Single Value, Value List or Value Group, you can change the
property rule Type to one that is more narrowly defined. This does not cause any runtime conversions of
property values.

Allowed property type changes

This table shows which property type changes are allowed.

                TO
FROM            Date Time   Date   Time of Day   Integer   Decimal   Double   True or False   Text
Date Time       X X
Date            X
Time of Day     X
Integer         X
Decimal         X
Double          X X X
True or False   X
Text            X X X X X X X

For example, if the original property has a type of DateTime, you cannot override it with a new property
that has a type of Double. If you try, an error message similar to the following appears:

"Definition required to conform to Cahaba-Work-CPM-Claims.Prop1 instance created


20050831T162223.149 GMT."

In this case, you can override the DateTime property in a higher RuleSet version with a type of Date or
TimeOfDay. Similarly, you can “specialize” a property of type Text.

It is best practice to make such overrides when no work objects or other saved instances have a value
for the property.

Note: Overriding the property does not cause Process Commander to convert any existing values of the
property. Conversion can cause errors, as described in the following section.

Suggested Approach
Two methods can be used to change the property type. It is recommended that you only override
properties that are not yet saved in work objects, to avoid validation issues and other processing
errors.

Method 1: Delete and recreate the property using the new property type

In some cases you can simply delete the property rule in every system (such as application, testing, and
production) and then recreate the property using the correct type.

For example, assume that you created a property named Prop1 with the type Integer in the 01-01-01
RuleSet Version, locked the version, and continued development in 01-01-02. You now realize you need
to change the type of Prop1 from Integer to Text. If you change the property and attempt to resave it
into 01-01-02 with the new type, a general error occurs:

Definition required to conform to Cahaba-Work-CPM-Claims.Prop1 instance created 20050831T162223.149 GMT.

In this case, you can delete Prop1 and recreate it using Text as its property type.

Note: This method may not work in some cases. For instance, assume you change a property called
Prop2 (Single Value mode) from Text to Integer. Also assume that a work object contains alphabetic
characters in Prop2 (saved while its type was Text). Because the Prop2 type is now defined as Integer,
an error will likely occur during data dictionary validation, which occurs when:

User input on an HTML form is placed on the clipboard


The Page-Validate method is applied to a page containing the property, or to the property if it has
mode Page
An Edit Input rule and the Property-Validate method are applied to the property

Method 2: Create a new property and block the old one

You can create a new property with a different name and use it going forward. Copy the old property
with an availability setting of Blocked into a new RuleSet version to prevent any further use of it.

Note: Use this method only in development systems; it is not advisable to make such changes in
production environments.

Troubleshooting validation errors when changing the type of an existing field

When you modify the type of an existing field on the Data model tab of a case type or a data type, the
validation check performed by the Pega 7 Platform might fail. The following table describes the possible
messages that might be displayed for validation errors and how to troubleshoot each situation.

Validation check: Production level check
Error message: You cannot change the type of {FieldLabel (FieldName)} field because current production level is {CurrentProductionLevel} and changing the type of field is not allowed beyond production level {MaxAllowedProductionLevel}. You can check setting 'fieldtypechange/productionlevel'.
How to troubleshoot: Contact your system administrator and request a change to the specified Dynamic System Setting, and then try again to change the type.

Validation check: Multiple property definitions
Error message: You cannot change the type of {FieldLabel (FieldName)} field because it has multiple versions.
How to troubleshoot: Define a new property with the correct type, and phase out the use of the old property.

Validation check: Used as a class key
Error message: You cannot change the type of {FieldLabel (FieldName)} field because it is used as a key in {ClassLabel (ClassName)} class.
How to troubleshoot: Define a new property with the correct type, and phase out use of the old property. Also update all dependent rules on the existing property.

Validation check: Optimized in an external class
Error message: You cannot change the type of {FieldLabel (FieldName)} field because it is optimized in {ClassLabel (ClassName)} class which is externally mapped.
How to troubleshoot: Update the external mapping for the type, and then try again to change the type. This update should be performed by a lead system architect because of the advanced nature of the change process.

Validation check: Optimized in class mapped to DB2 database
Error message: You cannot change the type of {FieldLabel (FieldName)} field because it is optimized in {ClassLabel (ClassName)} class which is stored in DB2 database.
How to troubleshoot: Delete the column containing the specified type from the table in the DB2 database, change the type, and then create a column with the new type in the DB2 database table. This update should be performed by a lead system architect because of the advanced nature of the change process.

Validation check: Optimized in a class mapped to view
Error message: You cannot change the type of {FieldLabel (FieldName)} field because it is optimized in {ClassLabel (ClassName)} class which is mapped to a view.
How to troubleshoot: Remove the property from the view, change the type, and then add the property back to the view.

Validation check: Optimized in a descendant class
Error message: You cannot change the type of {FieldLabel (FieldName)} field defined in {ClassLabel (ClassName)} class because it is optimized in descendant class {DescendantClassLabel (DescendantClassName)}.
How to troubleshoot: Unoptimize the property in the descendant class where it is optimized, change the type, and then reoptimize the property.

Validation check: Optimized in another class
Error message: You cannot change the type of {FieldLabel (FieldName)} field defined in {ClassLabel (ClassName)} class because it is optimized in another class as {OtherClassLabel (OtherClassName)}.
How to troubleshoot: Unoptimize the property in the class where it is optimized, change the type, and then reoptimize the property.

Validation check: Locked ruleset
Error message: You cannot change the type of {FieldLabel (FieldName)} field because it is in a locked ruleset {RulesetName}:{RulesetVersion}. To fix this you can unlock the ruleset.
How to troubleshoot: Unlock the ruleset, and then try again to change the property type.

Validation check: Records exist
Error message: You cannot change the type of {FieldLabel (FieldName)} field because it is used in class {3} ({4}) which has records.
How to troubleshoot: Access the database, delete the record instances from the specified class, and then try again to change the type.

Data Explorer
Designer Studio provides explorers that you can use to quickly view and access important components
and information about your application. From the Data Explorer, you can view the data types for your
current application and create and work with data types and their corresponding data pages. To access
the Data Explorer, in the Explorer panel, click Data.

Manage your application's data object types and data pages with the Data Explorer

The Data Explorer is an always-available tool to simplify how you work with the data your application
needs.

You can manage the data object types available to your application, adding new data object types or
importing existing ones from other applications; and manage the properties and data pages associated
with each data object type.

You can review where your data pages are in use in your applications, to help you evaluate the impact
of changes you might be considering to a data page.

A data object type is a class that simplifies organizing the properties, data pages, transforms, and
other elements your application needs to get the right data at the right time.

A data page is the hub of data management for your application. Data pages dynamically provide the
data the application needs in a given situation, and make sure the data is up to date.

The Data Explorer is one of the explorers in the left panel in the Designer Studio. Click Data to display
and use the Data Explorer at any time.

The list in the Data Explorer


Data object types, and their associated data pages, appear in the Data Explorer if the data object type
has been associated with the current application. This can happen in three ways:

1. The data object type appears in the Data Types list on the Cases and Data tab of the application
rule form. (You can also associate a data object type with the application in the explorer.)
2. A user adds the data object type by the process described under General controls, below.
3. The system automatically associates the data object type with the application. This happens when:
1. A class is created which derives from Data-.
2. A page or page list property in the application whose page class derives from Data- is saved.
3. A property whose page class derives from Data- is referred to in another rule in the
application.

Associating a data object type with an application adds a record to a link table. It does not noticeably
increase the size of the application, and no data object types or data pages are copied into the
application. However, associating data object types with the application provides easy access to the
data definitions, and their data pages most likely to be used by, and useful to, the application, without
making the Data Explorer list unmanageably long.

General controls
Click the down-arrow at the top of the Data list to display a menu of options.

From the menu, you can:

Add/remove data objects

You can add data object types to the list or remove existing ones. Select this option to display the
Add/Remove Data Object Types form.

Click Application to display a list of available applications; select the checkbox for each application
whose data object types you want to include in the list. Click Search to find a data object type by
name, and enter a text string in the field that appears.

The list shows the "exposed" data object types by default. "Exposed" data objects are those associated
with one or more applications in your application stack.

Check the Show unexposed types checkbox to also display "unexposed" data object types, ones that
are in your application’s RuleSet stack, but are not explicitly associated with an application in your
stack.

Clear the checkboxes for the data object types that you do not want to include in the list.

To create a new data object type, click +CREATE NEW. In the wizard that appears, you can quickly
name the data object type. Click the triangle to display fields where you can specify its ID, its directed
and pattern inheritance, and in which RuleSet and version to save it.
The second screen of the wizard lets you quickly specify properties for the data object type.

Create new data page

For details, see Create a new data page.

Show case types

Click this option to include the case types defined for your application in the list of data object
types.

Refresh data object types

Click this option to refresh the display, to make sure you are seeing the most current information.

+Create

Click this option to bring up a menu of rule types to create. By default, the new item will be in the
selected data object type.

Click in the Search box, enter part of the name of the data object type or data page you want to find,
and press Enter. The display shows the results of your search.

Working with data object types


To the left of each entry in the list of data object types is a triangle. Click the triangle to display a list of
existing data pages for that data object type.

Hover over the display name of the data object type to see its rule name.

An entry under the data object type's name shows how many data pages it has, as well as the number
of undefined pages. The second number shows you the number of pages that should be converted into
data pages to improve application performance and code reuse. Click the second number to display the
undefined-pages landing page, which lists the rules that reference the undefined pages.

In the expanded list, a number to the right of each data page shows how many places in the application
use that data page. Hover over the data page title to see its rule name.

Click the data object type, or one of its data pages, to display your selection in a tab to the right of the
Data Explorer.

Click the menu icon to the right of a data object entry to see a menu of common actions:

The menu lets you quickly:

Open the data object type in a tab to the right of the explorer.
Rename the data object type, if it is not in a locked RuleSet.
Create a new data object type.
Create a new data page for the selected data object type.
View a list of the data object type's properties. You can navigate from the list to view or work on a
given property, or use the + icon to add a property.
Quickly define multiple properties for the selected data object type. A popup form allows you to
create multiple properties in one process.
Remove the data object type from the Data Explorer. The data object type is not deleted; however,
it is not displayed in the explorer's list.

The Data Explorer is always available to help you define data sources for your application and ensure
they are available for use.

Viewing external data entities


The Pega 7 Platform and Pega-supplied applications include a wide range of data types that you can use
to quickly design customer-centric applications. The number of data types grows as design and
development activity increases, so a clear understanding of the data layer is crucial for modifying
and extending it. The External Data Entities landing page provides a way for you to explore a single
consolidated view of this data layer. The landing page contains information that is important for Pega-
supplied application users, such as where the application data comes from and how you can bring in
new application data.

Viewing data virtualization


Pega Live Data allows you to create a virtualized application data layer to simplify, standardize, and
enable the reuse of global data definitions while managing the complexity of accessing data sources
automatically. Applications can get the data that they need, when they need it, from where it is stored.
The External Data Entities landing page allows you to view the data virtualization in an application by
presenting a clear and accurate picture of the data types and the source systems that they use.

1. Click Designer Studio > Data Model > View external data entities to access the landing page, as
shown in the following figure:

The view is dynamically generated, so it always contains the latest information. A data entity
consists of a data type (logical or virtualized layer) and one or more source systems (physical
layer).
2. Expand each data entity to reveal the physical sources that are used to obtain data, as shown in
the following figure:

3. The SOURCED BY section shows the technical details of how the application connects to an
interface that is provided by the source system. The details include the protocol, the inputs, and
how to authenticate. The type of physical source is denoted by the second column of icons in this
section. The DATA PAGE section shows the data pages that are used to retrieve data from each
physical source. These data pages are the core component of the Pega Live Data layer. The data
pages make up the logical data layer and allow an application to access data from anywhere. The
icon next to each data page indicates its structure. For more information about the icons in these
two sections, see External Data Entities landing page.
4. Click the names of the records in the SOURCED BY and DATA PAGE sections to view the information
about how each record is configured.

Connecting to a new source system


You can use the External Data Entities landing page to add a new source of data for the application. The
green circle and yellow triangle icons in the SOURCED BY section denote the status of a source system
or data source. The green circle indicates that the source system or data source is active and
production-ready. The yellow triangle indicates that the data source is being simulated and needs a
replacement before going to production. Existing production applications display green for all source
systems. When you build a new application or extend a Pega-supplied application, most of the status
icons on the landing page display yellow in the beginning. You can replace the sample systems and data
sources with production-ready systems to change the status icons from yellow to green.

You can use a data type with a simulated source (a source system that is marked with a yellow triangle)
during development because data types in the Pega 7 Platform are virtualized data views. You can
continue developing the rest of the application before the integration layer is ready. You can build the
integration layer simultaneously and use it when it becomes available.

To connect a data type to a new source system, rename the sample source system that the data type is
connected to and then replace the simulated data sources with production sources. You can repeat the
tasks for each entity on the landing page with a yellow icon and change it to green. After all the source
systems are green, the data layer of your application is ready for production.

Renaming a sample source system


Do the following actions to rename the sample source system to reflect the actual name of the target
source system:

1. On the External Data entities landing page, expand a data type that is connected to a sample
source system that needs to be replaced, as shown in the following figure:

In this example, multiple simulated data sources connect to the sample source system.
2. Open a data page and find the simulated source in the list, as shown in the following figure:
3. Update the System name field with the name of the source system to use in production, as shown
in the following figure:

4. Repeat steps 2 and 3 for each data source that is connected to the sample source system in the
list.
5. Refresh the External Data Entities landing page. The data type now displays the name of your
actual physical source system, as shown in the following figure. The status is still yellow because it
is still being simulated for now.

Replacing a sample source system


When the integration layer is ready and the data sources that are required to connect to the source
system are built, you can replace the simulated data sources with production sources to change the
status icon to green.

Do the following actions to replace a source system with a new source of data:

1. Open a data page and find the simulated source in the list.
2. Clear the Simulate data source check box.
3. Select the source type based on the interface that is used to connect to the new source system.

4. Enter the name of the data source. This rule is specific to the source type and captures the
configuration details required to interact with your source system. If this record does not exist yet,
create one.

5. If needed, define additional source and mapping information such as request mapping or response
mapping.
The mapping rules are a critical component for Pega Live Data. These rules facilitate the exchange
of data between the logical (virtualized) data layer and the physical source systems.
6. Repeat steps 1 to 5 for each simulated data source that you want to connect to the new source
system.
7. Refresh the External Data Entities landing page. The source system should now have the green
icon next to it.
Data pages
Data pages (known as "declare pages" and "declarative pages" in versions before Pega® 7) provide
on-demand access to data from your business processes while insulating the business processes from
the actual integration details that are required to connect to the physical sources.

Data pages store data that the system needs to populate work item properties for calculations or for
other processes. When the system references a data page, the data page either creates an instance of
itself on the clipboard and loads the required data in it for the system to use, or responds to the
reference with an existing instance of itself.

Data pages obtain the data from external sources through connectors, from report definitions that
generate queries of the Pega® Platform database, or from other sources, and might use data transforms
to make the data fully available where it is needed.

The name of a data page starts with the prefix D_ or Declare_. On the clipboard, the contents of data
page instances are visible, but are read-only.

The Data Explorer in Designer Studio lists all the data pages that are available to your application. By
using the Data Explorer, you can quickly add data pages and data object types (classes).

Concepts
Page scope types for data and declare pages

Data pages - Understanding the access group field

Refresh strategy for data pages

Comparing data pages with other clipboard pages

Data virtualization in PRPC

Enjoy improved PRPC performance with more intelligent data sourcing

Creating and using data pages


Manage your application's data object types and data pages with the Data Explorer

How to create a data page (declare page)

Choosing a name for a data page

Use parameters when referencing a data page to get the right data

Manage multiple data sources for a data page

After loading data, automatically call an activity to process it

Comparing data pages with other clipboard pages


Data pages (known as declare pages in versions prior to PRPC 7.1) have many similarities with other
clipboard pages. They are accessed the same way, so it is not necessary to write a Java step to access
the data on a data page. Data pages also contain the same kinds of data as regular pages: properties,
embedded pages, and so on.

However, there are important differences between data pages and other clipboard pages.

Clipboard location
Read-only data pages (all scopes) appear in the data pages (version 7.1) or the declare pages
(versions 5.1-6.3 SP2) area of the clipboard and not under user pages or system pages.
Editable data pages (thread and requestor scope) appear in user pages.
Edit operation – Data pages can be read-only or editable.
Read-only mode: You cannot add or remove data after the data pages are created.
Editable mode: You can modify the data after the data pages are created.
Naming convention – The names of data pages must begin with the string Declare_ (for versions
5.1-6.3 SP1) or either D_ or Declare_ (for version 7.1). Other types of pages cannot begin with
these strings.
Creation – A data page is automatically created whenever any properties on the page are
accessed, if the page does not already exist. You do not have to explicitly create these pages by
using the Page-New method or other methods. Data pages with parameters are loaded only when
mandatory parameters are provided.
Update procedure – Data pages can have an automatic refresh strategy, which ensures that
their contents are up-to-date.
Database persistence
Unlike other pages (such as work item pages), read-only data pages cannot be saved.
Editable data pages can be saved.
Passivation – When a requestor is passivated, all of that user’s information is serialized and
temporarily saved to persistent storage. If this user's clipboard contains any read-only data pages,
those pages are not saved. Instead, the system deletes these pages when it passivates the
requestor, and then re-creates them whenever they are next referenced by that requestor (after
the requestor is reactivated). Editable data pages are saved like normal pages.

Load data into a page property


In Pega 7, data pages can provide automatic data access, populating page and page list properties with
data relevant to the circumstances. A case, another data page, or some other object can use a property
to automatically access the data it needs from a data page.

Overview
Load data into a page property from a page-structure data page
Data transforms

Overview
The diagram below shows, at a high level, what happens when the system references a data page:
1. Properties can automatically reference a data page, providing parameters that the data page can
use to get the data that the object needs from the data source.
2. The data page verifies whether an instance of itself, created by an earlier call using the same
parameters, exists on the clipboard.
If the data page's refresh strategy permits, the data page responds to the request with the
data on the existing clipboard page.
Otherwise, the data page makes a call to a data source, using a request data transform to
structure the request so the data source can respond to it, if necessary.
3. The data source uses the information that the data page sends to locate and provide the data that
the application requires.
4. If necessary, a response data transform maps data to the properties that require it.
5. The data page creates an instance of itself on the clipboard to hold the mapped data, and provides
it as a response to the request.

The object can reference or copy data from a page-structure data page using the new "Data Access"
section on the General tab on the property form:

Here, you specify how the property gets its data:


Manual - the user or system procedurally provides the data.
Refer - the system references a data page.
Copy - the system copies data from a data page instance into the property.

When the system references a data page, the data page checks whether an instance of itself that
satisfies the reference already exists on the clipboard.

If it does, and if the data page's refresh strategy allows it, the data page uses the existing instance
to respond to the reference.
Otherwise, it creates a fresh instance of itself on the clipboard, populates it with data relevant to
the reference, and uses it to respond to the reference.

Each reference may require parameter values. In the example above, the parameter CustomerID has an
* beside its name to indicate that it is required. Auto-populated properties do not try to load a data
page unless all required parameters have values.

To learn about automatic data access for page list properties, see Load data into a page list property.

Return to top

Load data into a page property from a page-structure data page
In this example, you are building an order management application.

You have a Customer property in your Order case that you want to retrieve from an external
service.
You want the Customer data to be stored with each order so you have a permanent record of
customer information at the time the order was placed.
The data source you are using requires the customer's ID and a "level of detail" (full or minimal)
value to retrieve the data.

This scenario requires:

Properties:

.Customer (a page property of class Data-Customer)


On the General tab, select "Copy data from a data page" or "Refer to a data page" (to
establish a reference to the data page without copying any data into your case), and specify
D_Customer as the data page.
Set as the parameters:
CustomerID = .Customer.CustomerID (this could also be a sibling property to .Customer,
or a constant value)
LevelOfDetail = .LevelOfDetail
When either parameter changes and a non-parameter value is referenced, the data page
creates a fresh instance of itself on the clipboard.
Specify the CopyCustomerSubset data transform to execute after the data page returns
data to copy a subset of the data from the data page to the property.
.Customer.CustomerID: a single value integer property defined at the page class of .Customer that
at run time holds the customer ID.
.LevelOfDetail: a single value integer property defined at the page class of .Customer that contains
the LevelOfDetail setting for this customer. The data page passes this value to the integrator to get
the correct amount of customer detail.
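
For orientation, an equivalent direct reference to this data page, using the parameter syntax covered
later in Use parameters when referencing a data page to get the right data, would look like this (the
property names are the ones from this example):

D_Customer[CustomerID:.Customer.CustomerID,LevelOfDetail:.LevelOfDetail]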

Data page:
The data page D_Customer has these settings:

Structure = Page
Class = Data-Customer
Scope = Thread
Edit Mode = Read Only
Parameters = CustomerID and LevelOfDetail

The data source configuration involves:

A connector data source of class Int-GetCustomerInfo.


A request data transform of class Int-GetCustomerInfo to form the request for the service. The data
page passes the values of the parameters CustomerID and LevelOfDetail to the request data
transform, which in turn passes them to the correct properties for the connector.
A response data transform of class Data-Customer (the class of the data page) to map from the
integration to the data class properties.

What happens
1. The user or the system sets the values for .LevelOfDetail and .Customer.CustomerID.
2. The user or the system references an embedded property on .Customer. This triggers
auto-population of the customer data.
3. The system references the data page, passing the parameter values.
If an instance of the data page that corresponds to the parameter values exists on the
clipboard, and the data page's refresh strategy permits, the data page responds to the
reference with the existing instance.
Otherwise, it passes the parameters to the appropriate data source.
4. If the data source executes, it passes data to the response data transform, which maps data into
the instance of the data page.
5. The CopyCustomerSubset data transform specified on the .Customer property copies data from the
data page instance into the property. If no data transform is specified, all data from the data page
instance is copied into the property.

Return to top

Data transforms
A data transform defines how to take source data values — data that is in one format — and transform
them into data of another format (the "destination" or "target"). Using a data transform speeds
development and is easier to maintain than setting property values with an activity, because the data
transform form and actions are easier to understand than the activity form and methods, especially
when the activity includes custom Java code.

There are three main data transforms involved in data management using data pages:

1. Optional data transform on the property form.


2. Request data transform that lets you map PRPC data to the fields a connector requires to
communicate with the data service.
3. Response data transforms normalize data provided by the data sources into the common
application data model. For more on this topic, see Data virtualization in PRPC.

For more about working with data transforms, see Introduction to data transforms.

Return to top

Load data into a page list property


In Pega 7, data pages can provide automatic data access, populating page and page list properties with
data relevant to the circumstances. A case, another data page, or some other object can use a property
to automatically access the data it needs from a data page.

Overview
Load data into a page list property from different instances of a page-structure data page
Data transforms

Overview
The diagram below shows, at a high level, what happens when the system references a data page:
1. The object references a data page, providing parameters that the data page can use to get the
data that the object needs from the data source.
2. The data page verifies whether an instance of itself, created by an earlier call using the same
parameters, exists on the clipboard.
If the data page's refresh strategy permits, the data page responds to the request with the
data on the existing clipboard page.
Otherwise, the data page makes a call to a data source, using a request data transform to
structure the request so the data source can respond to it, if necessary.
3. The data source uses the information the data page sends to locate and provide the data the
application requires.
4. If necessary, a response data transform maps the data to the properties that require it.
5. The data page creates an instance of itself on the clipboard to hold the mapped data, and provides
it as a response to the request.

Properties can automatically reference a list-structure data page using the new "Data Access" section
on the General tab on the property form:

Here you specify how the property gets its data:

Manual - the user or system procedurally provides the data.
Refer - the system references a data page.
Copy - the system copies data from a data page instance into the property.

When the system references a data page, the data page checks whether an instance of itself that
satisfies the reference already exists on the clipboard.

If it does, and if the data page's refresh strategy allows it, the data page uses the existing instance
to respond to the reference.
Otherwise, it creates a fresh instance of itself on the clipboard, populates it with data relevant to
the reference, and uses it to respond to the reference.

Each reference may require parameter values. In the example above, the parameter ProductType is
optional; a required parameter would have an * beside its name. Auto-populated properties do not try
to load a data page unless all required parameters have values.

To learn about automatic data access for page properties, see Load data into a page property.

Return to top

Load data into a page list property from different instances of a page-structure data page
In this scenario, you are building an order management application that allows users to order products.
A service can retrieve the highly detailed product information that is required for order processing once
the customer checks out.

Initially, the customer browses a product catalog that contains only the information necessary for them
to learn about the product. The product list that the customer is browsing lives outside of the case, but
as a customer adds products of interest to their shopping cart, the items are moved into the case, and
the system retrieves the more detailed product information necessary for completing the order. Ordered
products must live with the case so the case can maintain an exact record of the product as it was
ordered.

There are two services available to this application, both of which return information for a single
product.

The first retrieves the brief customer-facing product information.
The second retrieves the more detailed product information that is necessary for completing the
order.

This scenario requires:

Properties:

The property hierarchy is:

.ShoppingCart (page property)
    .LevelOfDetail (single value property)
    .Products (auto-populated page list property)
        .ID (single value property)

The configuration of .Products is as follows:

Select the "Copy data from a data page" option (a data transform is required) and specify
D_Products as the data page.
Set as parameters:
DetailLevel = .LevelOfDetail
ID = .Products().ID
Select the "Retrieve each page separately" option. The data page creates a new instance of itself
for each unique .Products(n) based on .Products(n).ID.

Data page:

The data page D_Product has these settings:

Structure = Page
Class = Data-Product
Scope = Requestor
Edit Mode = Read Only
Parameters = DetailLevel and ID

The data source configuration involves:

Two connector data sources, one of class Int-GetFullProductDetail, and the other of class Int-
GetMinProductDetail.
Each data source has a When condition that checks the value of param.DetailLevel.
This allows the data page to create instances of itself that hold full or minimal product detail,
depending on the value of param.DetailLevel received with each reference.
One request data transform per data source to form the request so the service can use it.
The data page passes the values of the parameters DetailLevel and ID to the request data
transform, which passes the values to the service so they can be used.
One response data transform per data source, of class Data-Product, to map from the integration
to the data class properties.

What happens
1. If the user is shopping, the system sets .LevelOfDetail to return minimal product information. If the
user is ready to check out, the system sets .LevelOfDetail to return full product data.
2. The user or the system references an embedded property on .Products, which triggers
auto-population of the product list.
3. The system passes parameters to the data page.
If instances of the data page that correspond to the parameter values exist on the
clipboard, and the data page's refresh strategy permits, the data page returns data from
the existing instances.
Otherwise, the data page creates separate instances of itself for each page in .Products. Each
data page instance uses the same .LevelOfDetail value.
4. Because the data page has two data sources, the system evaluates the When condition associated
with the first source in the list, based on param.DetailLevel. If that condition is false, the
system uses the second data source.
5. If the data source executes and returns data, the response data transform maps the data into the
data page.
6. For each .Products(n), the system passes the .ID of the product. The data page creates a new
instance of itself for each product and copies the data from the correct instance to the
corresponding page in the page list.

Return to top

Data transforms
A data transform defines how to take source data values — data that is in one format — and transform
them into data of another format (the "destination" or "target"). Using a data transform speeds
development and is easier to maintain than setting property values with an activity, because the data
transform form and actions are easier to understand than the activity form and methods, especially
when the activity includes custom Java code.

There are three main data transforms involved in data management using data pages:

1. Optional data transform on the property form.


2. Request data transform that lets you map PRPC data to the fields a connector requires to
communicate with the data service.
3. Response data transforms normalize data provided by the data sources into the common
application data model. For more on this topic, see Data virtualization in PRPC.
For more about working with data transforms, see Introduction to data transforms.

Return to top

Use parameters when referencing a data page to get the right data
Summary
Data pages provide quick, accurate access to the data your application needs, when it needs it. Calling
data pages with parameter values lets the data page provide exactly the data required for a situation,
from the most appropriate data source, on demand. The system waits until a user action or some other
trigger causes a data request, and then loads the data automatically.

Data pages transform the raw data received from a data source into data the application needs and can
use.

An application that uses data pages, and that passes parameter values to them to get the right data to
the right place, can deliver a responsive, rich experience without creating a lot of data pages, activities,
and other code. Because PRPC supports multiple instances of the same data page, you can use the
same design-time definition for multiple contexts simultaneously, within the same or different threads,
without affecting other instances that have been loaded into memory. For frequently changing data
page references, it is possible to reuse an instance of a data page that is already in memory. There is no
need to hard-code and maintain references to data sources.

Data Page parameters


Data pages have a Parameters tab where you can specify the parameters the system can use when
referencing that data page.

Make sure the parameter names are descriptive, so that you and other developers can easily see what
sort of values they expect.

For each parameter, you can set its type, whether it is required, and other settings. Setting a parameter
to required, for example, changes the way the data page provides data to an auto-populated property
(see below). An auto-populated property attempts to load the data page that is supposed to provide its
data only when all the required parameters for that data page have values. On the other hand, if there
are no required parameters for the data page, an auto-populated property references the data page
immediately, as soon as one of the listed (optional) parameters is set.
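
For example, given the D_Customer data page described elsewhere in this document, where CustomerID
is a required parameter, a property configured to source its data from:

D_Customer[CustomerID:.CustID]

attempts to load the data page only after .CustID has a value. (The reference syntax itself is covered
below, in How to pass parameters directly to a data page.)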

The data page uses the parameters on the Definition tab in two main ways:

Embedded auto-populate properties. For hierarchical data relationships and contextual
referencing, embedded auto-populate properties are the easiest and preferred way to automatically
access and source data from multiple hierarchically related data pages.
Parameterized data pages. When you don't need to maintain hierarchical relationships or case
context, you can directly refer to data pages with parameters. See the next section.

Passing parameters to data pages using a property


Properties can reference data pages and send parameter values using a section of the property's
General tab:

The data page returns to the property (in this case, a single page) information related to the customer
whose CustomerID the property referenced in the "Parameters" section. The asterisk beside the field
label indicates that a value for this parameter is required when referencing this data page.

Passing parameters to data pages


Data pages can provide parameterized data to many other PRPC elements besides properties. Any of
the following can reference a data page with parameters and get back data appropriate to its situation
and requirements:

Activity
Ant script
Batch
Case Match
Collection
Constraint
Data Transform
Decision Map
Decision Table
Decision Tree
Declare Expression
Function Alias
Infer
Interaction
Property Alias
Scorecard
Strategy
When

How to pass parameters directly to a data page


Each instance of a data page on the clipboard has a fully-qualified name, such as
D_Customer_pa6671911977865993pz. Do not use the fully-qualified name when referencing the data
page. Instead, use the name of the data page itself and add the value for any parameters. The data
page then determines (see below) whether to load a new instance of itself onto the clipboard to respond
to the reference, or to refer to the correct instance already on the clipboard.

The syntax is: <data page name>[<comma delimited list of parameter name:value pairs>]

You can refer to a data page with one of several valid forms of this syntax. For D_Customer, you could
load or refer to the data page using any of these:

D_Customer[CustomerID:"ANTON",CompanyName:"BigCo"]
D_Customer[CustomerID:.CustID]
D_Customer[CustomerID:param.CustID]

When the data page only has one parameter, you don't have to specify the parameter name. You only
need to specify the value:

D_Customer["ANTON"]
D_Customer[.CustID]
D_Customer[param.CustID]

See How to create a data page (declare page) and Tailor data pages to the context in which you use
them.

How a data page responds to a reference with parameters


When PRPC references a data page with parameters, the data page checks whether it can use an
existing instance of itself on the clipboard, or needs to load a new instance:

If there is no instance of the data page on the clipboard, or if the refresh strategy defined on the data
page requires loading a fresh instance, or if the parameters passed do not match the parameters used
to create an existing instance on the clipboard, then the data page loads a new instance of itself onto
the clipboard with the parameterized data that the current call requires.

Each time the data page finds that it can respond with an existing instance of itself already on the
clipboard, PRPC saves time and effort by not having to go back to the database for data.

Tailor data pages to the context in which you use them


A single data page can create many instances of itself, with each instance tailored to the context in
which the application uses it. This simplifies creation and maintenance of your code, since one data
page can serve multiple references at the same time, even from multiple applications (if the data page
is in a framework layer, every application in an implementation layer derived from that framework can
make use of it).

Sample approach
As a simple example, imagine that you are going to display a list of products on a page so a customer
can select from them and put the selections in a shopping cart.
You have a preferred provider, Northwind. If the customer does not want to select from their products,
the customer can use a more general Google search to get a list of products.

You can deliver both (or multiple) provider product lists by using a single data page with multiple
possible data sources. Depending on the parameters (including which provider to use), the data page
creates a tailored instance of itself on the clipboard. The instance has the data about the products that
the specific provider offers, and no other data.

The Definition tab of the data page lists the possible data sources for instances of the data page:

If the reference specifies the Northwind product list (so the When condition NorthwindSearchProviderSelected
evaluates to true), the data page requests data from the NorthwindProductList data source (a Report
Definition) and creates an instance of itself on the clipboard with the data returned from that source (or,
depending on the data page's refresh strategy, it may use an already-existing instance of itself on the
clipboard that was created using the same parameters and conditions).

On the clipboard, you can see the instance of the data page created to respond to the reference:
The page pxDPParameters in the image above holds the properties and values sent with the data page
reference, which permit tailoring the data page instance:

Let's say the customer does a second search, using the Google option. The reference goes to the same
data page; this time, the parameters sent cause the When condition NorthwindSearchProviderSelected to
evaluate to false. The data page then uses the other data source option to create an instance of itself on the
clipboard (or locate an existing instance that matches the parameter values of the reference and the
refresh conditions of the data page), and in that instance, provides data that matches the parameters
sent with the reference.

Management of data page instances


By default, PRPC can maintain 600 read-only unique instances of a data page, although you can change
that limit in the prconfig file to as high as 1000. When the number of page instances exceeds the set
limit, the system deletes all data page instances that were last accessed more than ten minutes
previously.

If, after that step, the number of instances of the data page still exceeds the set limit, the system
tolerates an overload of up to 1250 instances. If the number exceeds that limit, the system deletes
instances (irrespective of when they were last accessed) until the number of entries in the cache is
below the set limit.

The data page creates new instances as needed to respond to references and to replace the deleted
instances.

Additional Information
How to create a data page (declare page)

Use parameters when referencing a data page to get the right data

Manage multiple data sources for a data page


Data pages can have multiple data sources, and you can set rules that determine which data source to
use in a given situation so the application gets the right data every time.

At a high level, when the application invokes the data page, it can send values for one or more data
page parameters. The data page can use the parameter values to select which of its data sources to
use to respond to the current call, and which data from that data source to return.

Calling data pages with parameter values simplifies design and development and promotes code reuse,
since a single data page can serve as a hub, quickly assembling and delivering the right set of data for
a wide range of calls. Using parameters eliminates the labor and maintenance cost of creating and
maintaining hard-coded calls for data.

To make use of multiple data sources, you need to do the following on the data page:

Specify more than one data source
Specify parameters to help determine which data source to use and which data to return
Create a When condition to select among the data sources based on values sent with the call to
the data page

Specify more than one data source


On the Definition tab of a data page, you can specify one or many data sources.

If you specify multiple sources, a field appears where you identify the When condition (see below) that
evaluates whether the current reference to the data page requires using the source identified in each
row. The condition for the final data source is set as "Otherwise": that data source is used if all the
preceding When conditions evaluate to false.

Click "Add New Source" to define an additional data source.

If more than one source is listed, you can delete all but one: click the "X" at the right of the
information about the source that you want to delete.

You can drag data sources higher or lower in the list to set the order in which the system checks
whether they should be used.

See How to create a data page (declare page) and the help documentation for details about what
information to provide in each field.

Specify parameters
When the system references a data page, it can pass one or more parameters that the data page can
use to select exactly the data the system requires. Set those parameters on the Parameters tab.
See Use parameters when referencing a data page to get the right data.

Create a When condition


The data page uses the parameter values the referencing page submits, and a When condition, to
determine what data to send back. In the example below, the When condition checks whether the value
of the submitted parameter searchProvider is "Northwind".

If it is, the data page provides data drawn from the Northwind data source (appropriately transformed
so the application can use it).

With these steps, the data page is prepared to respond to references sent with parameter values by
returning data designed for the needs of each reference.

The data page can create a separate instance of itself on the clipboard for each time it is referenced
with unique parameter values, or restrict the number of data page instances to one, so that each new
reference overwrites the data page instance on the clipboard.
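
As a sketch of how such references look in practice (D_ProductList is a placeholder name for a data
page like this; searchProvider is the parameter from the example above):

D_ProductList[searchProvider:"Northwind"]
D_ProductList[searchProvider:"Google"]

Each reference causes the data page to evaluate its When condition against param.searchProvider and
respond with an instance sourced from the matching data source.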

Simulate data sources that are not yet available during development
During development, you can avoid being blocked because the data source you need is not yet
available. You can create a simulated data source to use while you build and test your application, and
then switch easily to the real data source when it is available.

When the team is working to develop an application, it is not unusual for one developer to get ahead of
others and need access to resources that will come from work other developers have not yet
completed. For example, if your team is building a weather widget, the team members building the UI
may want to see how their design works with data in place before the team members working on the
connector to the data provider have finished their work.

This does not have to be a blocker to UI development. Data pages allow you to specify sources of
sample or simulated data to use until the connectors you really want to use are ready to provide actual
data.

To use sample data, follow these steps:

1. Create a source for sample data


Create some sample data for your application. You can provide the sample data from a lookup, a data
transform, or a load activity. The sample data should include values to populate all the properties that
the page, section, or other component referencing the data page requires.

2. Create or identify the data page the application will reference


The Data Explorer in the Designer Studio gives you always-available access to the data object types and
their related data pages in your application.

In the Data Explorer, locate the data page you want to work with; or create a new data page in the
appropriate data object type.

3. Instruct the data page to use simulated data


On the data page's Definition tab, locate the Data Sources section. If you have already created a
data source using the connector that is not yet ready, locate it.

Check the Simulate Data Source checkbox. The system saves the actual data source for future use
(to the right in the image below) and lets you select the type and the source for the simulated data that
you have prepared.
When you are ready to move from simulated to real data, you only have to clear the check box and
save the data page; the system then restores the actual data source specification.

If you have not already identified a data source for the data page, you can still create one that uses
simulated data:

Continue as described above to specify simulated data. When the correct data source information is
available, you can update the data source entry and either switch to using real data or continue using
the simulated data while you continue development. The property or section referencing the data page
does so in the same way whether or not the data page is providing simulated data.

For more information about referencing a data page from a property, see Use parameters when
referencing a data page to get the right data and Tailor data pages to the context in which you use
them.

Define the contents of a data page using a Report Definition


Data pages, known before Pega 7 as "declare pages", organize and make available to your
application data that can come from a range of sources. In earlier versions, the sources were primarily
external, accessed with a connector and a load activity; but beginning with version 6.2, you can use a
Report Definition as the data source for a data page.

Report Definitions define reports. These reports, while typically used to display summarized information
to a user, can also be used to select values for various internal and display features, including data
pages.

Suggested Approach
1. Prerequisites

In this example, the Report Definition report uses data from a simple data table. However, Report
Definitions can report on any classes from the PegaRULES database, or on flat tables in an external
SQL database. For details on using data tables see How to use data tables to reference external
and internal systems.
In this example, you populate a data page using a Report Definition that reports on StateCodes, a
pre-existing data table consisting of two properties: two-letter state codes, and an operator ID
assigned to process work items that involve that state. Both properties are exposed as database
columns. (This report output is static, but reports may in practice produce different, real-time
results each time they run.)

2. Create the source report

Create a new Report Definition. In the Report Definition: New dialog, enter the name of the data
class containing the data table in the Applies To field. Use the Smart Prompt in this field to find
your data class more easily.

You will use the name of this report when creating the Declare Page.
3. In the Report Definition form, add columns as necessary to populate the data page. The data table
in this example contains only two columns, .StateCode and .AssignedOp.
Complete the rest of the Report Definition form, ignoring fields dealing with the visual display of
the report or sorting, and save the report.
For more information on completing the Report Definition form, see Report Definition rules -
Beyond the basics.

Continue with creating the data page for Pega 7, or creating the declare page for PRPC 6.2-6.3 SP1.

4a. Create the data page (Pega 7)


1. In the Designer Studio, select the Data Explorer in the explorers area to the left.

2. You can then click either menu icon (the general one for the data explorer, or the one beside a
data object type that you have selected, marked with red rectangles in the image below) and use
the option to create a new data page.
3. Make sure the data page structure is List.
In the Data Sources section, select Report Definition for the Source, then select the Report
Definition you created from the options in the Name field.

4. Save the data page. The data page and contained properties are now ready to be referenced by
your application.

Return to step 3

4b. Create the declare page (PRPC 6.2 - 6.3 SP1)


1. Create a new Declare Page. In the Page Class field, use the SmartPrompt to enter Code-Pega-
List. The contents of the Data Source section change depending on which option (Load Activity or
Report Definition) is selected.
2. Select the Report Definition option.
3. In the Report Definition Class field, use the Smart Prompt to enter the name of the data class your
Report Definition referenced in the Applies To field. In this example, this is the StateCodes data
class.
4. In the Report Definition field, use the Smart Prompt to enter the name of the Report Definition
created previously.
5. Save the Declare Page form. The Declare Page and contained properties are now ready to be
referenced by your application.

Return to step 3

Accessing the Data Page


While the data page now exists, instances of the page are not actually created on a requestor's
clipboard until something in the application accesses a property on the data page.

For example, create a new decision, using a property from the data page you created as one of the
decision criteria. Save and unit test the decision shape. An instance of the new data page is now visible
on the clipboard tool in the Data Pages node (prior to 7.1, in the Declared Pages node).

For more information on creating and referencing data pages in your application, see Understanding
data pages.

Data virtualization
You might need to connect a data page of one object type to a data source of another, incompatible
type. Pega® 7 lets you do this quickly and relatively simply with a data transform, enabling data
virtualization.

In data virtualization (http://en.wikipedia.org/wiki/Data_virtualization), the logical application data model
is decoupled from the physical integration data models (the data sources). The data page serves as the
singular reference point throughout the application, joining application data models and integration data
models.

Changing, adding, or removing an integration point that a data page is sourced from means that you
only have to modify, add, or remove a mapping data transform.

In this example, the data page is of one type and the data source is of another, incompatible type.
You can identify a response data transform of the same type as the data page to map the data from the
data source to the data page, making it usable for the application:

The data transform acts automatically on each reference to the data page using that data source,
mapping the data in the manner you specify:

A data page can have multiple data sources that it uses in different circumstances, depending on the
parameters it receives with each reference. See Manage multiple data sources for a data page.

Regardless of which data source the situation requires, the data page maps the data it receives to the
one common application data model.

How to configure a non-blocking UI using Asynchronous Declare Pages
This article is correct for PRPC 6.3 SP1. Changes and new functionality in Pega 7 mean that some of the
terms and actions in this article are no longer valid for Pega 7. Specifically:
"Declare pages" are known starting in Pega 7 as "data pages".

While data pages load synchronously by default, you can more easily set them to load
asynchronously so users can take action on a work item while other content is still loading.

There are significant changes to the data page rule form. See Data management: what's new in
Pega 7.

In PRPC 6.3 SP1, you can configure non-blocking user interfaces using Asynchronous Declare Pages.
This is useful for pulling in external data from systems of record, web services, and other PRPC systems.
This supporting information, such as account history, purchasing history, business analytics, and local
weather, can display alongside a work item.

In previous PRPC releases, you could configure such data to display in defer loaded sections — the work
item displayed first and defer loaded sections displayed as they became available. However, until all
defer loaded sections were visible, users could not perform an action that required interaction with the
server, for example, Submit.

Using Asynchronous Declare Pages, you can enable a user to take action on a work item while other
content is still being loaded. Defer loaded Asynchronous Declare Pages use a different browser
connection than the main requestor servicing the work item.

Asynchronous Declare pages cannot run declarative expressions, triggers, and other rules that belong
to a declarative network. For example, you can enable executing declarative expressions in a
background requestor; but if the declarative expression refers to properties defined in external named
pages which are not present in the background requestor, then the declarative expression may not
execute.

This document contains:

guidelines for configuring defer loaded Asynchronous Declare Pages — see Developers:
Configuring Non-blocking User Interfaces
information on tuning requestor pooling to ensure an optimal user experience — see System
Administrators: Tuning the Requestor Pool
information on configuring WebSphere — see System Administrators: Configuring WebSphere

The following user interface shows a work item with asynchronously defer loaded sections.

The user can interact immediately with the work item, typing in the text box. Since the other sections
are asynchronously loaded, the user can even click Submit to process the action before the defer loaded
sections display.

The sections using defer loaded Asynchronous Declare Pages contain information of interest to the user,
but not critical in processing the work.
This article describes how developers can use Asynchronous Declare Pages to configure non-blocking
user interfaces. It also illustrates how system administrators can monitor and modify requestor pool
settings as necessary.

Developers: Configuring Non-blocking User Interfaces


To configure defer loading of a section using an Asynchronous Declare Page:

1. Configure the Asynchronous Declare Page.


2. Configure a defer loaded section that uses the Asynchronous Declare Page as its source.
3. Evaluate the Asynchronous Declare Page using the Tracer.

Configure the Asynchronous Declare Page


To configure a non-blocking UI, create an asynchronously loaded declarative page, then create a defer
loaded section that uses the Asynchronous Declare Page as its source.

1. In the Application Explorer, create a new Declare Page.


For information, see How to define the contents of a Declare pages rule using a Report Definition
rule and How to create a declare pages rule.
2. On the Definition tab, select the Load this Page Asynchronously checkbox.

This checkbox is no longer available, and is not required, in PRPC 7.1. See the note at the top of the
page.

If you selected Load Activity as the Data Source for the Asynchronous Declare Page (ADP), include all
pages used by the Load Activity in the Pages & Classes tab of the Declare Page form. This is required
because when an Asynchronous Declare Page is invoked, before invoking the Load Activity, the pages
specified in the Pages & Classes tab are copied from the requesting thread specified in the ADP request
to the temporary background ADP thread. The Load Activity is then invoked and the populated Declare
Page is copied to the requesting thread. Section Defer Load is invoked on the requesting thread. The
pages copied to the temporary thread are not copied back to the requesting thread.

3. Click Save.

Configure the section using the Asynchronous Declare Page as its source
1. Create a section and include the section that uses the Asynchronous Declare Page as its source, in
this case, CardHolderInfo.

2. On the General tab, select the Defer Load checkbox.

3. On the Advanced tab, specify the name of the Asynchronous Declare Page in the Using Page field.
In this case, the name of the Asynchronous Declare Page is Declare_CardHolderInfo.

4. Save the section.

All UI references to a Declare Page should be contained within the deferred UI. References to Declare
Page data outside it, such as Visible When conditions, parameters to actions, or property displays
(read-only or editable), initiate a synchronous Declare Page load, and UI display is delayed until the
Declare Page is loaded.

Evaluate the Asynchronous Declare Page using the Tracer


You can use the Tracer to view the trace lines for the background thread (requestor) that loads a
declare page asynchronously, as described below. For information on using the Tracer to debug your
application, see Introduction to the Tracer tool and the Developer Help.

1. Run the process.


2. Click the Tracer icon in the Quick Launch bar to trace your current session and thread.
3. In the Tracer, click Settings in the tool bar to display the Trace Options.

4. In the Event Types to Trace area, select the Interaction and ADP Load checkboxes.
5. Review the trace. The first ADP trace of a background thread (requestor) that loads a declare page
asynchronously displays a link to a pop-up window showing the corresponding trace lines for that
requestor session. In the following example, clicking the Async DP Load link in line 2, the Declare_Binaries
Step Page, displays a one-time view of the Load Activity.

When you close this pop-up, the following warning appears:


System Administrators: Tuning the Requestor Pool
Requestor pooling often improves the performance of a service because requestors are reused, their
allocated resources are shared, and the service need not wait for a requestor to be created.

You can use the System Management Application to help you determine optimal pooling settings for the
Async Service Package:

1. Select > System > Tools > System Management Application.


2. Select the node, then select Administration > Requestor Pools.

3. Run the process. Refresh the display as needed.

4. Review the ADP Related Data:


Max ADP load request wait time — indicates the maximum amount of time that any ADP request waited
for a requestor. A non-zero number indicates more requests than requestors.
Max pending ADP request count — indicates the maximum number of ADP requests waiting in the
queue for a requestor, up to this point in time. This does not apply to application servers that
manage their own queue.

Click the Reset button to reset these ADP counters. Clicking Clear Requestor Pool disrupts the
current activity and is not recommended.
5. In addition, you can use the following information to determine if you need to adjust the requestor
pool settings:

Idle — indicates how many more concurrent ADPs can be accommodated
Active — indicates how many ADPs are currently executing. The combined value of the Idle and
Active fields equals the value of the Maximum Active Requestors on the AsyncDeclarativePool Service
Package Pooling tab.
Most Idle — indicates the minimum load on the system
Most Active — indicates the maximum load on the system; the most ADPs concurrently executing
Max Idle — the value of Maximum Idle Requestors on the AsyncDeclarativePool Service Package Pooling tab
Max Active — the value of Maximum Active Requestors on the AsyncDeclarativePool Service Package Pooling
tab

The value of the following fields should be zero (0). For web containers that support APIs for thread
management, PRPC automatically sets the maximum threads to the value of the Maximum Active Requestors
specified on the AsyncDeclarativePool Service Package Pooling tab.
However, for application servers in which you manually set the value of the maximum number of
threads for the PRPC node, a configuration error, in which the maximum number of threads is less
than the Maximum Active Requestors, is possible. A non-zero value in these fields indicates a configuration
issue.

Max Wait
Longest Wait
Timeouts

Tip: You can also use the alert log file to help you determine optimal pooling settings. To open the alert
log, select > System > Tools > Logs > Log Files > Alert.

Click an alert to display additional information. The following alert indicates that you may want to
increase the Maximum Active Requestors to decrease the wait time.
See Understanding the PEGA0043 alert: Queue waiting time is more than x for x times for details.

To adjust the alert thresholds, modify the values of the following in prconfig.xml:

alerts/ADP/queuewait/thresholdtime — indicates the wait time in the queue for a specified number of
requestors
alerts/ADP/queuewait/thresholdcount — indicates the number of times that the wait time
(ADP/queuewait/thresholdtime) can be exceeded before an alert is raised
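
For example, prconfig.xml entries for these settings look like the following; the threshold values shown
here are placeholders, so choose values that suit your environment:

<env name="alerts/ADP/queuewait/thresholdtime" value="500" />
<env name="alerts/ADP/queuewait/thresholdcount" value="10" />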

For more information about alerts, see Understanding alert log message data and Performance alerts,
security alerts, and AES.

For instructions on editing prconfig.xml, see How to change prconfig.xml file settings or How to set
prconfig values in a Dynamic System Setting value.

Modify requestor pool settings

1. Select > Integration > Resources, and then click the Service Packages button.

2. Select the AsyncDeclarativePool Service Package.

3. On the Pooling tab, modify the requestor information:


Maximum Idle Requestors — defines the maximum number of idle requestors that can be in the pool
for services from this package.
Often this value is similar to the value of Maximum Active Requestors. However, in cases in which you
want high scalability, but the average load is relatively low, set this value to a number less than
the value of Maximum Active Requestors. For example, in this case, you might set the Maximum Active Requestors
to 100 and set the Maximum Idle Requestors to 30. This means that a maximum of 100 requestors is
available, but as soon as the requestors are free, all but 30 are discarded. This saves memory
by caching/pooling only 30 requestors.
Maximum Active Requestors — defines how many concurrent requestors can be created and in use for
the services in this package. If a service request arrives when the number of active requestors
is at this limit, PRPC waits for an idle requestor. The maximum number of Java threads at any
point is equal to this value because each requestor uses one Java thread.
Note: You must manually configure the maximum threads for application servers that do not
have APIs that support thread management, such as WebSphere®. As a best practice,
configure the Maximum number of threads for the PRPC node in WebSphere equal to the value of
Maximum Active Requestors specified here. See Configuring the maximum number of threads in
WebSphere.
Maximum Wait (in seconds) — should be configured as 0 and is ignored by ADP processing.

4. Restart the server.

System Administrators: Configuring WebSphere


Configure the maximum number of threads for the PRPCWorkManager in WebSphere
1. Start the WebSphere server.
2. Log in to the administrator console: https://<hostName>:<portNumber>/ibm/console/login.do.
3. Select Resources>Asynchronous beans>Work managers.

4. In the Work Managers area, click the PRPC node deployed on the current server, in this case,
PRPCWorkManager.

5. Specify the Maximum number of threads for the selected PRPC node, then click Apply. Note the
Maximum number of threads; you will need this value to determine the maximum number of database
connections.

6. Restart the server.

Determine the maximum number of threads for the web container

To determine the maximum number of threads for the web container:

1. Start the WebSphere server.


2. Log in to the administrator console: https://<hostName>:<portNumber>/ibm/console/login.do.
3. Select Servers>WebSphere application servers then select the server, in this case, server1.

4. On the server page, select Thread Pools in the Additional Properties section.
5. On the Thread pools page, select WebContainer.

6. Specify the Maximum Size, then click Apply. Note the maximum number of threads. You will need this
value to determine the maximum number of database connections.

7. Restart the server.

Configure the maximum number of database connections


Set the maximum number of database connections to at least 30% of the maximum number of
simultaneous requests on the PRPC server, where each thread takes two to three database
connections.

To determine the maximum number of simultaneous requests, add the value of the Maximum Size (threads)
for the web container to the value of the Maximum number of threads for the PRPCWorkManager.

For example, if the maximum number of threads for the web container were set to 100 and the Maximum
number of threads for the PRPCWorkManager were set to 50, then the maximum parallel load on the PRPC
server would be 150 requestors. In this case, you would set the value of the maximum number of
database connections to at least 45, 30% of 150.
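
Expressed as a calculation, using the numbers from this example:

maximum database connections = 30% x (100 web container threads + 50 PRPCWorkManager threads) = 45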

To determine these values, refer to the previous sections. For information about the:

maximum number of threads for the web container, see Determine the maximum number of
threads for the web container.
Maximum number of threads for the PRPCWorkManager, see Configure the maximum number of threads for
the PRPCWorkManager in WebSphere.

To specify the maximum number of database connections:

1. Select Resources>Data sources>PegaRULES.


2. On the PegaRULES Data sources page, select Connection pool properties.

3. Specify the Maximum connections. This value should be at least 30% of the maximum number of
simultaneous requests on the PRPC server, where each thread takes two to three database
connections.

4. Click Apply, then restart the server.

Disable asynchronous load of declare pages


In prconfig.xml, set the value of initialization/asyncdeclarepages to false:
<env name="initialization/asyncdeclarepages" value="false" />

If you set the value to false, asynchronous load of declare pages is disabled. However, the Load this Page
Asynchronously? checkbox still displays on the Declare Page Definition tab. If the value in prconfig.xml is set
to false and a user selects the Load this Page Asynchronously? checkbox, the page is loaded synchronously.

For instructions on editing prconfig.xml, see How to change prconfig.xml file settings or How to set
prconfig values in a Dynamic System Setting value.
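
If you manage this setting through Dynamic System Settings instead of prconfig.xml, the equivalent
setting name would typically follow the usual prconfig/<setting>/default naming pattern; this is an
assumption based on that general pattern, so verify it for your version:

prconfig/initialization/asyncdeclarepages/default = false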

Additional information
How to define the contents of a Declare pages rule using a Report Definition rule

How to create a declare pages rule

Configuring a JSON data transform as a request data transform in data pages
You can use a JSON data transform as a request data transform in data pages for REST integration to
make serialization and deserialization faster and more efficient. The REST Integration wizard creates a
REST connector, creates a class, generates the data model, and creates the request and response data
transforms that turn the integration layer into the data layer. For JSON data models, the data transforms
that the REST Integration wizard creates work, but they are slower and less efficient than JSON data
transforms. You can replace the request and response data transforms created by the REST Integration
wizard with JSON request and response data transforms.

To use a JSON data transform, create a JSON data transform and configure the request or response data
transform created by the REST Integration wizard to use it. For instructions on configuring a JSON
request data transform, see the following instructions. For instructions on configuring a JSON response
data transform, see Configuring a JSON data transform as a response data transform in data pages.

Configuring JSON request data transforms has the following high-level steps:

1. Create a REST connector by using the REST Integration wizard


2. Configure the data page
3. Create the JSON request data transform
4. Configure the request data transform

Creating a REST connector by using the REST Integration wizard


Run the REST Integration wizard to create a REST connector by clicking Designer Studio > Integration >
Connectors > Create REST Integration. For details about the REST Integration wizard, see Creating REST
integration.

Configuring the data page


Requests require a page parameter instead of a list of individual parameters. In addition, caching is not
usually required for requests. Configure the page parameter and disable caching by causing the data
page to reload every time it is accessed. If your POST REST service is idempotent and requires caching,
see Enabling caching.

1. Open the data page created by the REST Integration wizard by clicking the link in the Data Page
Created field in the Generation Summary page.
2. Click the Parameters tab.
3. In the Name field, enter the name of the page parameter, for example, pageName.
4. In the Data type field, click String.
Data page Parameters tab

5. Click the Load Management tab.


6. Select Reload once per interaction.
7. Click Save.

Creating the JSON request data transform


Create the JSON request data transform to use in the request data transform created by the REST
Integration wizard. The JSON data transform must be created in the same class as the request data
transform. You can determine the class by opening the data page from the Generation Summary page
generated by the REST Integration wizard, and clicking the Open icon next to the request data
transform.

JSON Request data transform Definition tab

1. In the Application Explorer, locate the class that the request data transform is in, for example,
Code-Pega-List.
2. Right-click the class, and click Create > Data Model > Data Transform.
3. In the Label field, enter a short description.
4. In the Data model format field, click JSON.
5. In the Add to ruleset field, select the ruleset. It is recommended that you use the same ruleset as
the data page.

JSON Data Transform

6. Click Create and open.


7. Create the mapping between the clipboard and the JSON data by adding the fields that are in the
request data transform to the JSON data transform. The following example shows the fields in the
request data transform.
JSON fields might not match the fields in the request data transform if the JSON fields contain
characters that cannot be used in property names. In this case, the REST Integration wizard
adjusts the names so that they do not contain unsupported characters. To find the property names
for the JSON side of the transform, open the property rule and look at the pzExternalName property
qualifier.

1. In the Clipboard field, enter the first property.


2. In the JSON field, enter the corresponding JSON property.
3. Click Add element to repeat these steps for every field that you need to add. The following
example shows the JSON data transform with the fields from the request data transform.
8. Click Save.
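For example, a finished mapping might pair clipboard properties with JSON properties as follows; all names here are hypothetical, and your actual JSON names depend on the pzExternalName values that the wizard generated:

Clipboard        JSON
.CustomerID      customerId
.FirstName       firstName
.LastName        lastName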

Configuring the request data transform


Update the request data transform generated by the REST Integration wizard to use the new JSON
request data transform.

1. Open the request data transform generated by the REST Integration wizard by clicking the Open
icon next to the Request Data Transform field on the data page.
2. Click the Parameters tab.
3. In the Name field, enter the data page name, for example, pageName .
4. In the Data type field, click Page Name.
5. Click the Pages & Classes tab.
6. In the Page name field, enter the data page name, for example, pageName .
7. In the Class field, select the same class as the request data transform.
8. Click the Definition tab.
9. Delete row 2 by clicking the Trash can icon next to row 2.
10. Create a new row 2 by clicking the Add a row icon.
11. In the Action field, click Update Page.
12. In the Target field, enter the data page name, for example, pageName .
13. In step 2.1 in the Action field, click Apply Data Transform.
14. In the Target field, enter the JSON request data transform that you just created.

Request data transform Definition tab

15. Click the Gear icon in step 2.1.


16. Click Pass Current Parameter Page to configure parameters to be passed by reference. The
executionMode field becomes blank and the execution mode defaults to serialize. Pages that are
passed into the data transform are turned into JSON using the mapping configuration, and the
resulting JSON is returned in a parameter. The next steps map the incoming pages to the
parameter that will contain the results.
17. Click Submit.
18. Map the incoming pages to the parameter that will contain the results.
1. Add row 3 by clicking Add a row.
2. In the Action field, click Set.
3. In the Target field, enter .pyJsonData.
4. In the Source field, enter Param.jsonData.

Updated request data transform

19. Configure the REST connector to use .pyJsonData.


1. Open the REST connector by clicking the Open icon next to the Name field on the data page.

Data page

2. Click the Methods tab.


3. In the Message data section for the method that you are configuring, in the Map from
field, click Clipboard.
4. In the Map from key field, enter .pyJsonData.

Message data

20. Click Save.
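In outline, the updated request data transform now resembles the following sketch. The first step is whatever the wizard originally generated, and the names pageName and jsonData follow the examples above:

1. (Original wizard-generated step, unchanged)
2. Update Page: pageName
   2.1 Apply Data Transform: <your JSON request data transform>, with Pass Current Parameter Page selected
3. Set: .pyJsonData equal to Param.jsonData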


Enabling caching
If your POST REST service is idempotent (consistently provides the same response), you can enable
caching. When caching is enabled, the response from the remote service for the request JSON is
cached, ensuring that subsequent calls to the data page with exactly the same request JSON return a
cached response instead of making a new call to the remote service.

To enable caching, you pass the JSON data to the data page and call the data transform outside of the
data page.

1. From the Records Explorer, click Data Model > Data Page.
2. Click the data page used by the request data transform to open it.
3. Click the Parameters tab.
4. Delete the pageName parameter by clicking the Trash can icon.
5. In the Name field, enter jsonData.
6. In the Data type field, click String.

Data page configured for caching

7. Click the Load Management tab.


8. Clear Reload once per interaction.
9. Click Save.
10. Open the request data transform generated by the REST Integration wizard by clicking the Open
icon next to the Request Data Transform field on the data page.
11. Click the request data transform that is associated with this data page to open the rule form.
12. Delete step 2 by clicking the Trash can icon.

Request data transform configured for caching

13. Click Save.

Configuring a JSON data transform as a response data transform in data pages
You can use a JSON data transform as a response data transform in data pages for REST integration to
make serialization and deserialization faster and more efficient. The REST Integration wizard creates a
REST connector, creates a class, generates the data model, and creates the request and response data
transforms that turn the integration layer into the data layer. For JSON data models, the data transforms
that the REST Integration wizard creates work, but they are slower and less efficient than using a JSON
data transform. You can replace the request and response data transforms created by the REST
Integration wizard with JSON request and response data transforms.

To use a JSON data transform, create a JSON data transform and configure the response or request data
transform that was created by the REST Integration wizard to use it. For instructions on configuring a
response JSON data transform, see the following instructions. For instructions on configuring a request
JSON data transform, see Configuring a JSON data transform as a request data transform in data pages.

Configuring a JSON response data transform has the following high-level steps.

1. Create a REST connector by using the REST Integration wizard.


2. Create the JSON response data transform.
3. Configure the response data transform.
4. Configure the REST connector.

Creating a REST connector by using the REST Integration wizard


Run the REST Integration wizard to create a REST connector by clicking Designer Studio > Integration >
Connectors > Create REST Integration. For details about the REST Integration wizard, see Creating REST
integration.

Creating the JSON response data transform


Create the JSON response data transform to use in the response data transform created by the REST
Integration wizard. The JSON data transform must be created in the same class as the response data
transform. You can determine the class by opening the data page from the Generation Summary page
generated by the REST Integration wizard, and clicking the Open icon next to the response data
transform.

1. In the Application Explorer, locate the class that the response data transform is in, for example,
Code-Pega-List.
2. Right-click the class and click Create > Data Model > Data Transform.
3. In the Label field, enter a short description.
4. In the Data model format field, click JSON.
5. In the Add to ruleset field, select the ruleset. It is recommended that you use the same ruleset as
the data page so that all data assets are packaged in the same ruleset. You can use a different
ruleset; however, if you want to package and move the data assets to another application, having
them in the same ruleset ensures that all data assets are moved together.

Create Data Transform dialog box

6. Click Create and open.


7. Click the Definition tab.
8. If you are using a single-record data page, skip this step; otherwise, complete the following steps
for a list data page.
1. In the Top element structure, click Array.
2. In the Pagelist Property field, select .pxResults.
3. Click the Pages & Classes tab.
4. Add the pages and classes from the response data transform to the JSON data transform. You
do not need to add the data source. The following example shows the pages and classes in
the response data transform.
Response data transform Pages & Classes tab

JSON data transform Pages & Classes tab

5. In the Page name field, enter the page name.


6. In the Class field, enter the class. It must be the same as the class in the response data
transform.
9. Click the Definition tab.
10. Create the mapping between the clipboard and the JSON data by adding the fields that are in the
response data transform to the JSON data transform. The following example shows the fields in the
response data transform.
The following examples use a list data page. If you are using a single-record data page, .pxResults
is not used.

Response data transform Definition tab

1. In the Action field, click Set.


2. In the Clipboard field, enter the first property.
3. In the JSON field, enter the corresponding JSON property.
4. Click Add element to repeat these steps for every field that you need to add. The following
example shows the JSON data transform with the fields from the response data transform.
JSON fields might not match the fields in the response data transform if the JSON fields
contained characters that cannot be used in property names. In this case, the REST
Integration wizard adjusts the names so that they do not contain unsupported characters. To
find the property names for the JSON side of the transform, open the property rule and look at
the pzExternalName property qualifier as shown in the following example.
Property rule form pzExternalName property

JSON data transform

11. Click Save.

Configuring the response data transform


Update the response data transform generated by the REST Integration wizard to use the new JSON
response data transform.

It is not recommended that you delete this data transform. Modifying it to use the new JSON response
data transform is the fastest way to access the data page. In addition, step 3 is required for error
handling.

The following example shows a sample response data transform.


Data transform generated by the REST Integration wizard

1. In the response data transform, delete step 2, Append and Map to, by clicking the Trash can icon to
the right of the step.
2. Click Submit when asked to confirm the deletion.
3. Click in row 1.
4. Click the Add a row icon to create a new step 2.
5. In the Action field, click Apply Data Transform.
6. In the Target field, select the JSON response transform that you just created.

Response data transform updated to use new JSON response data transform

7. Click the Gear icon next to the JSON data transform.


8. In the jsonData field, enter DataSource.pyJsonData. This field contains the JSON string that is
deserialized and passed to the clipboard.
9. In the executionMode field, enter DESERIALIZATION.
Apply Data Transform Parameters dialog box

10. Click Submit.

Configuring the REST connector


Update the REST connector that was generated by the REST Integration wizard to use the
DataSource.pyJsonData parameter.

1. In the REST connector, click the Methods tab.


2. Click the Response tab.
3. In the Map To field, select Clipboard.
4. In the Map to key field, enter .pyJsonData. The .pyJsonData property is part of @baseclass and is the
property to use for mapping.
5. Click Save.

Edit Connect REST method

Data page error handling


Data pages store data that Pega 7 requires to populate work item properties for calculations or other
processes. Developers determine where that data comes from and how it gets there by referencing
data sources on the data page record.

Error handling in data pages is a complex problem. There are several ways to handle errors, and various
ways are relevant at different parts in the process. Pega 7 provides developers with the tools that they
need to create custom error-handling responses to specific data page errors without using activities.

Error occurrences
Data page errors occur for a variety of reasons, but they all prevent data from being loaded as
expected. Examples of causes of data page errors include system errors, invalid input, using keys that
are not on a keyed page list, connection problems, and security or authorization issues.
Error types
There are two types of data page errors: invocation errors and data source errors. The following articles
provide more information about handling these errors.

Data source error handling in data pages

Invocation error handling in data pages

Data source error handling in data pages


Data source errors are errors that occur during the execution of a data source and cause it to return
incorrect, flawed, or missing data.

Point at which errors can occur during execution of a data source

Examples of data source errors include:

A source system or database is down and the connection times out


A request passed to the external system using the connector is invalid
Credentials are invalid or are not authorized to get the data requested
An internal system error occurred either during load or on the external system

When a data source error occurs in any data source other than an activity or data transform, the page
property pyErrorPage is added to the data page. This property contains all error details so that users do
not have to track them down across the various error properties and page messages used by connector,
report definition, and lookup data sources. It contains the following information:

.pyStatus: The status when the data source finishes processing. If an error occurs, its value is Fail.

.pyStatusMessage: A message about the error that was encountered during data source processing that
forced it to stop. This message is informative and can be made into a page message shown to users as
an invocation error. This is also the value that is put into the default page message added on error.

.pyStatus and .pyStatusMessage are always populated on error and can be used with any data source.
When the source type is a connector of type SOAP, SAP, or dotNet, and the error was caused by a SOAP
fault, the following properties are also populated:

.pyFaultCode: Contains the code from the SOAP fault. See the W3C page on SOAP Fault Codes for
more information.

.pyFaultReason: Contains an explanation of the issue causing the SOAP fault and is intended to be
shown to users or put in a message. See the W3C page on SOAP Reason Elements for more information.

.pyFaultDetail: Contains details from the SOAP fault, if provided, including application-specific error
information. See the W3C page on SOAP Detail Elements for more information.

Two additional properties are available for more advanced use cases, such as custom connector logic or
load activities. These properties can provide additional detail in page messages:

.pxMessageSummary.pxErrors: When the data page has messages, this page list contains the message
details, one page per message. The property .pyDescription contains the message. If the message is
attached to a property, the property .pyLabel contains the property reference.

.pyInvocationException: If an error occurred because of an exception thrown during data source
processing, this property contains the exception Java object. This property is only populated if the
connector source type is SOAP, SAP, dotNet, or Java.
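For example, after a failed connector call, the error details on the clipboard might look like the following sketch; the data page name and message text are illustrative:

D_CustomerData
   pyErrorPage
      pyStatus: Fail
      pyStatusMessage: Connection to the remote service timed out
      pyFaultCode: env:Receiver           (SOAP fault sources only)
      pyFaultReason: Service unavailable  (SOAP fault sources only)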

Handling data source errors


Error handling should be done in the response data transform of a data source. The response data
transform is executed after the data has been obtained from the data source, and maps part or all of
the information returned from the data source to the data page.

When you create a new response data transform, several error-handling actions are added automatically,
as displayed in the following example:

Example of a new response transform with automatically generated error-handling actions

A reference to a when rule, pxDataPageHasErrors, is already included, and a place to put error handling
logic is nested beneath it. This when rule is the primary tool for conditionally executing data source
error handling logic in the same way that you use hasMessages for handling invocation errors.

To use the default error-handling setup, right-click the when step and click Enable. Then, add your error-
handling logic in the nested steps below. You can also add it in a separate data transform that can be
used across multiple data sources and call it from the nested steps.
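In outline, a response data transform that follows this pattern looks something like the following sketch; the nested actions are placeholders for your own logic:

1. When: pxDataPageHasErrors
   1.1 Apply Data Transform: <your reusable error-handling transform>
   1.2 (additional error-handling steps as needed)
2. (normal response mapping steps)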

When the class of the data source matches the class of the data page, the response data transform is
optional, because the data source can be run against the data page directly.

When a response data transform is specified and the data source class matches the data page class,
enable Run on data page. This executes the response data transform directly on the data page rather
than on the Data Source page, saving the memory and processing cost of an unnecessary page and
mappings.
Example of a response data transform with Run on data page enabled

Examples of handling data source errors


Pega 7 provides several tools for handling data source errors. A good place to start is the data transform
pxErrorHandlingTemplate. This data transform has sample actions that you can use for common data
source error-handling tasks, including:

Read page messages on the data page


Clear page messages from the data page so it does not stop work processing
Throw an invocation error by adding page messages so case processes pick it up (remember to
catch and handle it in your case layer)
Output messages to the system log
Send an email to a system administrator or other stakeholder

You can either copy the logic from the template into your response data transform or save the template
to your class and ruleset to create your own error handling transform that can be reused across
sources. All rows start in a disabled state, so remember to right-click and enable actions that you want
to use and customize.

You can also pull in data from a different data page instead of using the normal DataSource page when
it has errors. Examples of this include:

Referencing another data page to pull data from an alternative data source
Referencing another data page with the same data source to use for retry

Do not attempt to reference the currently loading data page from within the response data transform.
This method does not work and can cause errors when you try to load the data page.

Advanced data source error handling


It is possible to create a reusable error handling data transform that works across all data sources or
uses the post-load activity to handle errors in a way that isn’t possible from a data transform. To do
this, you might need to know which data source was used to conditionalize actions or messages.

In addition to pyErrorPage, all data pages have an embedded page property called pySourcePage. This
property contains up to five properties that provide detailed information about the data source that is
used to load the data page:

.pySourceType: Specifies one of the following source types: Connector, Report Definition, Lookup,
Data Transform, or Activity.

.pyConnectorType: If the source type is connector, this rule specifies the type of connector.

.pySourceName: Specifies the identifier of the connector, report definition, data transform, or activity
rule used as the data source. This is empty for lookup data sources, because they do not use a rule.

.pySourceClass: Specifies the Applies To class of the connector, report definition, data transform, or
activity rule used as the data source. For lookup data sources, this is the identifier of the class used
for the lookup.

.pySourceNumber: Specifies the index of the data source on the data page form at the time of load.
The topmost source is 1, and the indexes increase going down the form.

Unlike pyErrorPage, pySourcePage is always present on loaded data page instances. You can always
reference it from your data transforms or activities or view it in the clipboard to see which source was
executed.

Activity data sources


If you use an activity as your data source, your error-handling options are fewer. However, several
options are available in Pega 7.1.8 that make error-handling in activities easier:

pyErrorPage is created and added to the step page on error whenever a Connect-* method is used
to call a connector. It has the same information described in Data Source Errors. Therefore, you
can use the when rule pxDataPageHasErrors against the step page of the Connect-* step to check
if there is an error.

Error information is captured only for the connector types that are supported as data sources on the
data page form. For other, more advanced connector types, you might need to look at the form to see
what other error information is available and what properties contain the information.

Calls to report definitions that use Rule-Obj-Report-Definition.pxRetrieveReportData do not create a
pyErrorPage. You need to continue to look for page messages to figure out if an error occurred and
what went wrong. You can still use the when rule pxDataPageHasErrors against the step page that
was used to call pxRetrieveReportData to determine if an error occurred, or you can use the when
rule hasMessages.
Lookups performed by using obj-open steps do not create a pyErrorPage. No error messages are
displayed, and you cannot use the when rule pxDataPageHasErrors here to check for errors.
Instead, use the when rule StepStatusFail. You also need to use the function @Utilities.getWorstMessage()
to determine what went wrong and use the method Activity-Clear-Status to clear the fail status, if
necessary (for example, if you intend to handle the error by performing a lookup on a different
class instead).
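As a rough sketch, an activity data source that uses these techniques might be structured as follows; the step pages and jump targets are illustrative:

Step 1: Connect-REST on step page ReqPage, to call the connector.
Step 2: Transition: when pxDataPageHasErrors is true for ReqPage, jump to the error-handling step.
Step 3: Obj-Open on step page LookupPage, to perform a lookup.
Step 4: Transition: when StepStatusFail is true, capture the error with @Utilities.getWorstMessage() and, if you intend to continue, clear the fail status with Activity-Clear-Status.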

Experimental high-speed data page support in Pega 7.3


An experimental implementation of high-speed read-only data pages is available. This implementation
is useful when data pages are used to hold static data that is used repeatedly. You can enable this
feature for a specific read-only data page from the Load Management tab of the data page rule form by
clicking Enable high-performance read-only data page.

This feature does not support full clipboard page functionality; use with caution.

Supported functionality
Basic page and property access (read and write properties) for all normal data types
Hierarchical data page structure (pages within pages)
Dictionary validation mode
Read-only data pages

Unsupported functionality
Declarative rules
Page messages
Complex property references
Saving pages to a database
API access to the data page

Handling data page errors by using a data transform


The Pega 7 Platform provides various ways to handle data page errors. Because data pages are
inherently declarative in nature, error handling needs to be contained within the data page load
process. A best practice is to use a response data transform that can detect the type of data source and
handle the errors appropriately.

Identifying the data source type


The error handler must be able to identify the type of connected data source so that it can customize
the solution. To identify the data source, use a response data transform or activity step for every data
source, as shown in the following figure:

Connector: Use a response data transform to detect and handle Connector data source errors.

Report Definition: Similar to Connector, use a response data transform to detect and handle Report
Definition data source errors.
Lookup: Use a response data transform with the Run response data transform on error check box
selected to detect and handle Lookup data source errors.

Data transform: Use the hasMessages when condition to detect and handle data transform data
source errors.
Activity: Use appropriate transition conditions such as StepStatusFail in activity steps to detect and
handle activity data source errors.

Creating the error handler


Create your custom error handler by using the default data transform.

1. Create a data transform (for example, MyCoErrorHandlerMaster) by saving the default data
transform pxErrorHandlingTemplate to your top-level class and ruleset. Also, change the status of
the rule from Final to Available.
2. From the data page, pass a parameter (for example, Connector-GetCustomerData) to the response
data transform to uniquely identify the data source that is used to load the data page.
3. On the Definition tab of each response data transform, use the when condition
pxDataPageHasErrors to identify any errors in the data page and apply the
MyCoErrorHandlerMaster data transform.
4. In the MyCoErrorHandlerMaster data transform, create and call a decision table that determines
the appropriate error handling based on the data source.

5. Based on the decision table, perform the error handling action for the data source in the
MyCoErrorHandlerMaster data transform.
6. In the MyCoErrorHandlerMaster data transform, create and call a decision table to map user-
friendly error messages instead of the default error messages, as shown in the following figure:
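For example, the decision table called from MyCoErrorHandlerMaster might branch on the data source parameter along these lines; the source names and actions are illustrative:

Data source                  Action
Connector-GetCustomerData    Clear messages and retry with an alternative data page
ReportDef-GetOrderHistory    Log the error and notify an administrator
Otherwise                    Add a user-friendly page message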

This data transform can now be used across multiple data sources and data pages in the application.
For example, with an activity data source, the same data transform can be used to handle data page
errors.

In use cases that require a manual retry during data page error, consider using the hasNoMessages
when condition with Do not reload when in the data page rule form so that the data page is reloaded on
retry whenever there is a data page error.
Handle other invocation errors procedurally in flows, post-flow action processing, or activities, as
appropriate for the specific requirement.

Debugging tips
To trace Data Page and Data Transform rules, open the Tracer tool directly from the rule form.

To trace data pages that are loaded asynchronously, open the Tracer tool from the Data Page rule.

High-speed data page support


In Pega® 7.3.1, high-speed read-only data pages are available and enabled by default. This
implementation is useful when data pages are used to hold static data that is used repeatedly. If you
experience performance issues during high throughput, you can use a restricted feature set for a
specific read-only data page by selecting Restricted feature set for high throughput on the Load
management tab of the data page rule. When you use the restricted feature set, the following
functionality is not supported:

Declarative rules
Page messages
Complex property references
Saving pages to a database
API access to the data page

Invocation error handling in data pages


Error handling in data pages is a complex problem. Data page errors occur for a variety of reasons, but
they all prevent data from being loaded as expected. Invocation errors, one type of data page error,
prevent requested data pages from being loaded correctly.
Point at which invocation errors occur and prevent data pages from being loaded correctly

Some examples of invocation errors and their causes:

Required parameters missing: A data transform tries to modify properties in a case that is based on
values from a data page, but the data is not returned when the transform is run because a required
parameter is missing.

Data source error cannot be handled: A flow has a property with automatic data access enabled to copy
the data that it needs to proceed in the work item. When the user attempts to proceed to the next step,
a data source error that could not be handled at the data layer prevents the user from proceeding.

Keys not found: A case shows inputs for properties that are used as keys in a property with automatic
data access enabled to pull the rest of the data into the case. When the user enters the properties, the
data is not returned and an error is shown because the keys were not found in the list.

When an invocation error occurs, the requested data page instance is marked with page messages that
explain the error. In situations where automatic data access for properties is used, errors can occur on a
property that refers to data on a data page or a property that copies data from a data page. (For more
information, see Load data into a page list property.)

In these situations, the errors might be on the associated data page instance. For example, in the case
above where the user uses a key that is not on the data page, the page messages are on the property
instead of on the data page itself because the data page was loaded without issues.

Handling invocation errors


You can handle invocation errors by building conditional logic in the case layer. Generally, this logic
should be built into whatever rules reference the data page or auto-populated property, such as:

Flow action post-processing transform or activity


Data transform or activity referencing the data
Decision shape followed by a utility step in a flow

You can also defer load activity on a section with a page context of the data page or auto-populated
property.

Use these tools to handle the errors in the locations mentioned above:

Apply the hasMessages when rule to the page that you are trying to use. Any error-handling logic
that you include after the when rule is run only in the case of an invocation error and can be used
to handle the error.
When data is pulled into the case from a property with automatic data access enabled, use the
function @(Pega-RULES:Default).getMessagesAll() in the context of that page property to get all invocation
errors that occurred as text separated by new line characters. You can then write error-handling
logic based on what occurred.
If you use the data page directly, or you want to iterate through all errors on the case, including
invocation errors from auto-populated properties, you can iterate through the property
.pxMessageSummary.pxErrors in a data transform or activity.
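For example, a flow action post-processing data transform might capture invocation errors from an auto-populated page property with a sketch like the following; .Customer and .ErrorText are hypothetical properties:

1. When: hasMessages (evaluated against the .Customer page)
   1.1 Set: .ErrorText equal to @(Pega-RULES:Default).getMessagesAll()
   1.2 (conditional error-handling logic based on .ErrorText)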
Examples of handling invocation errors
Handling invocation errors is typically use case-specific and each application has its own requirements.
The following table lists some common error-handling tasks:

Task: Preventing case processing from continuing when the data page has an invocation error
Procedure: Case processing already has built-in validation that prevents users from continuing when the
case has errors. To make use of this, use an auto-populated property embedded within the case instead
of referencing the data page directly.

Task: Retrying a data page
Procedure: After a data page instance has been loaded, regardless of whether it has errors, it is not
cleared or refreshed unless its refresh strategy dictates or it is procedurally removed. To retry the data
page, either manually remove the original instance or specify the hasNoMessages when rule in the
Do not refresh when field. Re-referencing the data page then results in a retry. The only exception to
this is using a property with automatic data access by copy. In this case, remove both the property and
the data page from which it is auto-populating to force a retry.

Task: Calling an alternate data page or data source
Procedure: To call an alternative data page or data source, add conditional processing that uses a
different data page or different parameters if an invocation error occurs. You can do this for items such
as data transforms, activities, and sections by using the hasMessages when rule. If the hasMessages
when rule returns true for the data page against which it is executed, then an invocation error has
occurred. You can conditionally change the parameters so that a different data source is called and
retry, or use another data page entirely for your logic in that case. An alternative way to conditionalize
your logic is to use the hasNoMessages when rule to determine when an invocation error has not
occurred.

Task: Notifying an administrator that an invocation error has occurred
Procedure: The simplest way to do this is to create a data page of class Data-Corr-Email that accepts
the error information that it needs to create the email as parameters and fills in the administrator’s
information and any other basic details. You can then pass that data page to the function
@Default.SendEmailMessage(Page) to send an email from the data transform, activity, or other
processing rule in case of an invocation error. You can even have the data page itself call this function
as a part of its load to give yourself the ability to email the administrator no matter where the error
occurred, even if you are working with a section or UI rule.

Task: Moving the case to an administrator’s workbasket
Procedure: This is best handled by using conditional logic in the flow, or in an activity that assigns the
case to an administrator.

Task: Moving the work item into the ConnectorProblem flow and taking additional actions
Procedure: Data pages prevent the ConnectorProblem flow from being called automatically, but it can
still be called manually. This is again best handled by using conditional logic in the flow or in an
activity, rather than in the data page.

A number of other error-handling tasks can be configured, but the same basic pattern applies:
Use the hasMessages when rule to conditionalize processing based on whether an invocation error
has occurred.
Use additional data pages to handle invocation errors, because data page references can occur
anywhere and you want to be able to handle errors wherever they are.
For error handling that can only be done in the flow or an activity (such as the last two above),
make sure that you first reference the data page and conditionalize your processing in a place
where you can handle errors.

Pega 7 management of data pages


Data pages store data that the system needs to populate work item properties for calculations or for
other processes. Developers do not have to create, populate, or otherwise manage data page instances.

When the system references a data page, the data page either creates an instance of itself on the
clipboard and loads the required data in it for the system to use, or responds to the reference with an
existing instance of itself. For a general introduction to data pages, see Understanding data pages.

The system manages data pages based on a combination of settings and circumstances as outlined
below. The system automatically deletes, or prunes, data pages that are no longer needed, or when the
maximum number of data pages is reached.

New data pages


To create a data page, see How to create a data page.

The system creates a data page instance when the data page is referenced, or uses an existing
instance, depending on the settings in the Refresh Strategy section of the Load Management tab on the
rule form. See Refresh strategy for data pages.

Pruning data pages


The system removes, or prunes, data pages in several circumstances.

Single use
Clearing unused pages
The number of data page instances for a container reaches a set limit

Set single use


You can instruct the system to delete any existing data page instances and to create a new instance
every time the data page is referenced. On the Load Management tab, select the Limit to a single data
page check box:

When the check box is selected, each time the system references the data page, it removes any
existing instance and uses submitted parameters to create a data page instance.

If the check box is not selected, the system creates an instance of the data page for each reference with
unique parameter values, which can cause the number of stored instances of the data page to increase
rapidly.

This option is useful for parameterized data pages. See Use parameters when referencing a data page
to get the right data.

Specify clearing unused pages


You can instruct the system to remove unused data page instances by selecting the Clear pages after
non-use check box on the data page rule Load Management tab. This option is selected by default:
This setting has different effects depending on the scope of the data page:

Thread scope — No effect.


Requestor scope — Applies to both editable and read-only data page instances. The system
creates a requestor-scoped instance when any thread refers to the data page. Other threads can
also reference the same data page instance. If the check box is selected, the data page instance is
removed when no more threads refer to it.
Node scope — Applies to read-only data pages. If the check box is selected, the system checks
the Reload if older than fields on the Load Management tab of the data page rule:

The system uses any setting in these fields. If the fields are blank, the system uses the value of the
dynamic system setting DeclarePages/DefaultIdleTimeSeconds, which is set by default to 86400
seconds, or one day. You can adjust the dynamic system setting value.

The number of data page instances for a container reaches the set limit
By default, the Pega 7 Platform can maintain 1000 read-only unique instances of a data page per
thread. You can change this value by editing the dynamic system setting datapages/mrucapacity.

There are different data page instance containers for the thread, requestor, and node level. Each user
can have both requestor-level and thread-level data pages up to the limit established. Additionally, each
node can have any number of requestors, and each requestor can have many threads. See Contrasting
PRThread objects and Java threads.

For each container:

If the number of instances of a data page reaches 60% of the established limit for thread-level or
requestor-level containers, or 80% of the established limit for node-level containers, the system
begins deleting older instances.
If the number reaches the established limit, the system deletes all data page instances that were
last accessed more than 10 minutes ago for that container.
If, after that step, the number of instances of the data page still exceeds the set limit, the system
tolerates an overload up to 125% of the established limit for thread-level or requestor-level
containers, or 120% of the established limit for node-level containers.
If the number exceeds that overload number, the system deletes instances (regardless of when
they were last accessed) until the number of entries in the cache is below the set limit.
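As a worked example, with the default limit of 1000 instances in a thread-level container: pruning of older instances begins once the container holds 600 instances (60 percent of 1000); at 1000 instances, all instances last accessed more than 10 minutes ago are deleted; an overload of up to 1250 instances (125 percent) is tolerated; beyond 1250, instances are deleted regardless of age until the count falls below 1000.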

The data page creates instances as needed to respond to references and to replace the deleted
instances.

As the count of data page instances approaches the limit, the system displays the PEGA0016 alert. See
PEGA0016 alert: Cache reduced to target size.

This pruning behavior is always active. You can opt to have either or both of the first two methods
active for any data page.

Forcing removal of data pages


You can also force removal of non-parameterized data page instances, without regard to the settings
described above. To force removal, use one of these options:

In Designer Studio, open the rule form.



Click Clear Data Page on the Load Management tab to clear all read-only instances of the data page
from the clipboard according to their scope:

Thread-scoped pages — the system removes all instances of the data page from all threads of
the current requestor.
Requestor-scoped pages — the system removes all instances of the data page from the current
requestor.
Node-scoped pages — the system removes all instances of the data page from all nodes in the
cluster.

In an activity, use the Page-Remove step with the data page as the step page. This method deletes
read-only and editable data page instances regardless of the scope, as long as the data page is
accessible by the thread that runs the activity.
Use the ExpireDeclarativePage rule utility function that takes the data page name as a parameter
to delete read-only, non-parameterized data page instances:
For Thread-scoped data pages, the system removes data page instances from the current
thread of the requestor.
For Requestor-scoped data pages, the system removes data page instances from the
current requestor.
For Node-scoped data pages, the system removes data page instances from all nodes in the
cluster.
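For example, a minimal activity that forces removal might contain a single step like the following sketch, where D_ProductList is a hypothetical data page:

Step 1: Method: Page-Remove, Step page: D_ProductList

No parameters are needed; the instance is deleted as long as it is accessible to the thread that runs the activity.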

As a best practice, do not use the ExpireDeclarativePage rule utility function to remove a data page.
This function is soon to be deprecated.

Passivation
Passivation allows the state of a Java object — such as an entire Pega 7 Platform PRThread context,
including the clipboard state — to be saved to a file. A later operation, known as activation, restores the
object.

The Pega 7 Platform uses standard passivation in general operation, but you can also configure
passivation to shared storage in highly available environments. When all or part of a requestor
clipboard is idle for an extended period and available JVM memory is limited, the Pega 7 Platform
automatically saves clipboard pages in a disk file on the server. This action frees JVM memory for use by
active requestors. (Typically, such passivation occurs only on systems that support 50 or more
simultaneous requestors.)

The system passivates editable data page instances, but discards read-only data page instances.

For more information about passivation, see Creating a custom passivation method.

Setup and cleanup of data page unit test cases


Before you run a data page unit test case, you can set up the clipboard with initial values by using one
or more data transforms, data pages, or activities. You can also use activities to create any required test
data, such as work or data objects, and can use one or more data transforms or activities to remove
information from the clipboard after you run the test.

Set up or clean up the clipboard if you are running a test for which the output or execution depends on
other data pages or information. Click the Setup & Cleanup tab on the data page unit test case page to
configure setup and cleanup information.
Setup & Cleanup tab

For example, when you run a test, you can use a data transform to set the values of the pyWorkPage
clipboard page with the AvailableDate, ProductID, and ProductName properties. Before the test runs,
these values are retrieved from pyWorkPage and placed on the data page that you are testing.

pyWorkPage clipboard page with setup data transform applied

After you run a data page unit test case, data pages that were used to set up the test environment are
automatically removed. You can also apply additional data transforms or activities to remove other
pages or properties from pages on the clipboard before you run more tests. Cleaning up the clipboard
ensures that data pages or properties on the clipboard do not interfere with subsequent tests.

For example, you can use a data transform to clear the AvailableDate, ProductID, and ProductName
properties from the pyWorkPage clipboard. Clear these values to ensure that the test uses the
appropriate information if the setup data changes for subsequent test runs. If you change the value of
the AvailableDate to May 2016, the data page uses that value, not the older value (December 2016).

pyWorkPage with cleanup data transform applied

Using a robotic automation as a source for a data page


Robotic process automations (RPAs) are automations that run in the back office. You can use an RPA as
a source for a data page by using Pega® 7.4 with Pega Robot Manager 5. For example, you can use an
RPA to obtain a credit score from a legacy system that you can then display on a Pega Customer
Service application account overview dashboard.

Robot Manager 5 is required for using an RPA to source a data page. Download Robot Manager 5 from
Pega Exchange.

At run time, the data page invokes a REST endpoint that is hosted in Robot Manager. Using the data
input from the data page, Robot Manager queues a request to be executed by an RPA robot. The RPA
robot returns the result to Robot Manager, which in turn passes the result to the data page as a REST
response. The run-time architecture is shown in the following diagram.

Run-time architecture

Automation status and error handling


Robot Manager enforces two levels of validation and error handling to ensure that the RPA robot returns
valid data. When an RPA robot completes an automation, it returns a status message to Robot Manager
that summarizes the success or failure of the recently executed automation. The status message
contains one of the following values: Completed, Completed with errors, or Did not complete.

When Robot Manager receives an automation status of "Completed with errors" or "Did not complete",
the assignment is treated as a failed automation. The original assignment is completed, and a new
assignment is opened and routed to the Failed Robot Automations workbasket. The Failed Robot
Automations workbasket can be changed to suit your business needs. Because the original assignment
is considered complete, any data returned by the robot is also returned to the data page. For this
reason, map the pyAutomationStatus property to the data page so that you are aware of the
automation status.

If the robot returns data to Robot Manager that violates the validation criteria specified on the original
assignment, the original assignment remains open, and Robot Manager returns an HTTP response code
of 500 that indicates that the data cannot be populated on the data page.

Configuring an RPA as a data page source


The high-level tasks for using an RPA as a data page source are as follows:

1. In Pega Express or Designer Studio, build a simple case type to model the overall robotic
automation. Include the input fields that you want to pass to the RPA robot in the case data model,
for example, customer ID, mailing address, or account number. In addition, include the output data
fields that you want the robot to pass back to your Pega application, for example, credit score,
account balance, or claim number.
2. In Pega Express or Designer Studio, build a simple case life cycle. The case life cycle must include
the Route to robot smart shape. The smart shape queues the automation request to the RPA robot.
This is the case that will be referenced on the data page.
3. In Pega Robotic Automation Studio, build the automation logic for the robot to execute. Once your
automation is completed, reference the robot activity name in the Queue for robot smart shape.
4. Configure the location of the Robot Manager host server and the authentication profile to use when
connecting to it.
5. Configure the data page.

Configuring the Robot Manager host server and authentication profile


Configure the location of the Robot Manager host server and the authentication profile to use when
connecting to it in the application that uses the data page.

1. From the Records explorer, click SysAdmin > Dynamic System Settings.
2. Search for the pegarobotics/RoboticAutomationRequestorProfile setting.
3. In the Value field, enter the name of the authentication profile that the REST connector will use to
connect to the REST service.
4. Click Save.
5. Search for the pegarobotics/RobotManagerHostDomain setting.
6. In the Value field, enter the domain details and the HTTP scheme of the Robot Manager host
server, for example, https://localhost:8443.
7. Click Save.

Configuring the data page


When your automation is built and your case type is configured, you can configure the data page.

1. From the Data Type Explorer, expand the data type, and click the data page that you want to
source with an RPA.
2. In the Source field, click Robotic automation.
3. In the Robotic automation field, enter the name of the case type that you created.
4. In the Timeout(s) field, enter the length of time that the data page will wait for data to be returned
before timing out.
5. In the Request Data Transform field, enter the request data transform to use to provide input data
to the robot.
6. In the Response Data Transform field, enter the response data transform to use to convert the
results returned from the robot to the logical model used by the data page.

If you want to know the status of the automation to determine what action to take, configure your
response data transform to read the pyAutomationStatus property. The status can be Completed,
Completed with errors, or Did not complete.

Example
The following example shows the case type data model, data page, request data transform, and
response data transform for a data page that is sourced by an RPA that gets a customer's credit score.
The data model for the Get credit score case type has two fields, Account id and Credit score. This
information is used to configure the data transforms.

Case type data model

The following screen shows the data page configuration.


Data page configuration

The request data transform passes in the customer ID and puts it into the Account id field that is
defined in the case type.

Request data transform

The response data transform takes the credit score from the physical data model and puts it into the
CustomerCreditScore field in the logical data model.
Response data transform

Using a robotic desktop automation as a source for a data page
A robotic desktop automation (RDA) runs on client desktops and automates tasks and workflows. You
can use an RDA as a source for a data page when the data page is requested from a browser on a
Microsoft Windows client that is running Pega Robotics™ Runtime. The RDA robot can read data from a
system or write data to a system, even if the system does not have an API. For example, you can use an
RDA to obtain a credit score from a legacy system, and then you can display that credit score on a Pega
Customer Service™ application account overview dashboard and update a client's address on the
system of record.

At run time, the browser requests the data page to be loaded. The data page notifies the client to run
the requested RDA automation, and then the browser notifies the desktop robot to run the automation.
The data that is returned from the desktop robot is pushed back to the server; the server runs the
response data transform, finalizing the data page, and the browser is notified to continue loading the
user interface. The run-time architecture is shown in the following diagram.

Robotic desktop automation run-time architecture

Configuring an RDA as a data page source


The high-level tasks for using an RDA as a data page source are as follows:

1. In either App Studio or Dev Studio, create a data type. For more information, see Creating a new
data type.
2. Ingest the fields into Pega Robotic Automation Studio and build the automation logic for the robot
to execute. Make note of the robotic automation ID because you need it to configure the data
page. For more information, see Configuration of Robotic Desktop Automation (RDA) with your
Pega 7.2.1 application.
3. Configure the data page.

Configuring the data page


After you configure your data type and your automation has been built, you can configure the data
page.

The automation is invoked only when the data page is requested from a browser; that is, the data page
must be the source for a control in the user interface. The automation is not invoked if it is referred to
by an activity or other rule running on the server.

1. In Dev Studio, from the Data Type Explorer, expand the data type that you created in step 1
above, and click the data page that you want to source with an RDA.
2. In the Source field, click Robotic desktop automation.
3. In the Robotic Automation ID field, enter the Robotic Automation ID of the automation that was
created in Pega Robotic Automation Studio.
4. In the Timeout(s) field, enter the length of time that the data page should wait for automation to
complete before timing out.
5. In the Request Data Transform field, enter the request data transform to use to provide input data
to the robot.
6. In the Response Data Transform field, enter the response data transform to use to convert the
results returned from the robot to the logical model used by the data page.
7. Optional. To write data back to the application with which the robot is interacting, configure a save
plan:
1. In the Data page definition section, in the Edit mode field, select Savable.
2. In the Data save options section, in the Save type field, select Robotic desktop automation.
3. In the Robotic Automation ID field, enter the ID of the automation that will write the data back
to the application.
4. In the Timeout field, enter the length of time to wait before timing out.
5. In the Data Transform field, select the request data transform to use to provide input data to
the robot.
8. Click Save.

After the data page is configured, you can use it in your user interface. The most common way to do
this is by using a section include on a Section rule. You invoke the automation by selecting Use data page
in the Page Context field and selecting the data page. Select Defer load contents to allow extra time for
the robot to fetch the data before rendering the user interface. For more information, see Harness and
section forms - adding a section.
Section rule cell properties

If you configure a save plan, configure the flow action that triggers the post processing that writes data
back to the application. For more information, see Saving data in a data page as part of a flow action.

Data pages - Understanding the access group field


This article describes behavior for Data Pages (PRPC 7.1 and newer) and Declare Pages (PRPC 5.1-
6.3SP1).

The Access Group field (located on the Load Management tab of a data page in Pega 7 and on the
Definition tab of a declare page in earlier versions of PRPC) identifies the access group that PRPC uses
at runtime to locate the load activity that populates the page.

This access group must contain the appropriate RuleSets to provide access to the correct version of the
Load Activity, which populates the clipboard with instances of the data page or declare page.

Suggested Approach
The Access Group field is visible only when the Page Scope field on the Definition tab is set to Node.

The field is available on the Load Management tab of a data page in Pega 7:
It is available on the Definition tab in PRPC 5.1-6.3 SP1:

This approach avoids the following design issue: many users may see one data page (declare page)
rule definition (for shared clipboard pages at the Node level). Due to rule resolution and different RuleSet
Lists, these users would run different versions of the Load Activity to create that page, or have different
rules called by that activity. Thus, the first user who called the data page (declare page) instance would
set it up using their access group, with their Load Activity and their data.

The next user who accessed that instance might not have the same access group; if not, they would
have to reload the page with their Load Activity and data.

To avoid continually reloading the page instance based on each user’s access group (which negates the
concept of “shared”), the access group to use is set in the data (declare) page instance. When the first
user calls this instance, the system switches to the specified access group and uses that RuleSet list to
run the Load Activity and create the Declare pages. Once the page is created, the system switches the
user back to their own access group.

Important: This means that whenever a user’s processing references a data (declare) page instance,
that instance will contain data which has been loaded with the RuleSet list determined by the data
(declare) page's access group – not necessarily what is available in the user’s RuleSet list. Changing
your own access group to get different data on the page instance has no effect.

Select an access group that provides the RuleSets and versions which have all the appropriate rules to
run the Load Activity.

Instantly access embedded pages in a list-structure data page using keyed page access
For data held in read-only, list-structure data pages, you can allow access to embedded pages in the
data page by enabling keyed access and using properties to provide values for the key(s). This permits
significantly faster response when the system needs to retrieve a single item or subset of items from a
large list.

Overview
Enabling keyed access
Load data into a page property from a list-structure data page
Load data into a page list property from a list-structure data page
Load data into a page list property from different instances of a list-structure data page

Overview
In general, when the system references a data page:

The system provides parameters that the data page can use to get precisely the data that the
object needs from the data source.
The data page verifies whether an instance of itself, created by an earlier call using the same
parameters, exists on the clipboard.
If the data page's refresh strategy permits, the data page responds to the request with the
data on the existing clipboard page.
Otherwise, the data page loads or reloads its data from the correct data source, using a
request data transform if necessary to structure the request so the data source can respond
to it.
The data source uses the information that the data page sends to locate and provide the data that
the application requires.
A response data transform maps and normalizes the data to the properties that require it.
The data page creates an instance of itself on the clipboard to hold the mapped data, and provides
it to the auto-populated property or direct reference.

To provide instant access to a particular page in a list-structure data page (such as a list of products,
with each embedded page holding information about a particular product), enable keyed access:

You establish one or more keys on the data page.
When the system references the data page, it sends property references (or literal values) to
provide values for the page's key or keys.
The data page uses the values provided for the keys to determine from which embedded page or
pages to provide the data that the system wants.
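
To make this behavior concrete, the following minimal Java sketch models a list-structure data page
with parameter-based instances and keyed page access. It is purely illustrative: ListDataPageSketch,
its map-based "pages", and the source function are assumptions made for this article, not Pega engine
APIs, and the refresh strategy is reduced to a simple cache check.

import java.util.*;
import java.util.function.Function;

// Illustrative model only, not a Pega API. One cached "instance" per unique
// parameter set; keyed access filters the embedded pages of an instance.
class ListDataPageSketch {
    private final Map<Map<String, String>, List<Map<String, Object>>> instances = new HashMap<>();
    private final Function<Map<String, String>, List<Map<String, Object>>> source;
    private final String keyProperty; // for example, "SupplierID"

    ListDataPageSketch(String keyProperty,
            Function<Map<String, String>, List<Map<String, Object>>> source) {
        this.keyProperty = keyProperty;
        this.source = source; // stands in for the data source plus request/response transforms
    }

    // Referencing the data page: reuse the instance cached for these parameters
    // if one exists (a real data page would also consult its refresh strategy);
    // otherwise load from the source and cache the result.
    List<Map<String, Object>> reference(Map<String, String> parameters) {
        return instances.computeIfAbsent(parameters, source);
    }

    // Keyed page access: return only the embedded pages whose key property
    // matches the supplied value ("Allow multiple pages per key" behavior).
    List<Map<String, Object>> referenceByKey(Map<String, String> parameters, Object keyValue) {
        List<Map<String, Object>> matches = new ArrayList<>();
        for (Map<String, Object> page : reference(parameters)) {
            if (Objects.equals(page.get(keyProperty), keyValue)) {
                matches.add(page);
            }
        }
        return matches;
    }
}

The real mechanism also involves request and response data transforms, which the sketch collapses
into the source function.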

Return to top

Enabling keyed access


1. In the Data Explorer, locate the data page that retrieves the data for all suppliers from a data
source and loads it into an instance of itself on the clipboard:

2. Click the data page name to open it:

3. At the right of the Definition tab for list-structure data pages is the "Keyed Page Access" section.
Check the Access pages with user defined keys checkbox; then, in the "Page List Keys" area,
define one or more keys for this data page:

4. You can select from the properties in the class of the data page, or click the magnifying glass icon
to create a new one.
If you want to use multiple keys (.SupplierID and .Industry, perhaps), click the + icon to add
additional key fields.
5. Either all or no keys must be passed to the data page at run-time. This allows one data page to
serve a dual purpose:
It can display all the items it contains (no keys passed).
It can return only items that match the keys.
6. The "Allow multiple pages per key" option lets the data page return multiple embedded pages from
a single instance to an auto-populated page list property. You might use this option when
preparing to display a list of all the products offered by a particular provider.

You can only use this option with an auto-populated page list property. If the option is not selected,
you can populate a page property.

Reference the data page from a property using a key


The property that requires the data about the specific supplier gets it by referencing the data page and
sending a property reference or a literal value to populate the data page key.

In this example, a page property holds the information about the selected supplier:

Select "Refer to a data page" or "Copy data from a data page", then select the data page by name. In
the KEYS section, provide a property reference or a literal value for each data page key.

When the property is referenced, the system automatically loads the data page using the value that the
property sends as the key.

You can use parameters and keys together.


The parameters determine the instance of a data page that is loaded and selected.
The keys determine which embedded page(s) within a given instance to return.
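
Continuing the illustrative sketch above (the sample data and names below are hypothetical), the
point is only that parameters select the cached instance while the key filters within it:

// Hypothetical sample data stands in for the configured data source.
ListDataPageSketch suppliers = new ListDataPageSketch("SupplierID",
        params -> List.of(
                Map.of("SupplierID", "SUP-001", "Name", "Acme"),
                Map.of("SupplierID", "SUP-002", "Name", "Globex")));

Map<String, String> params = Map.of("Region", "EMEA");
List<Map<String, Object>> wholeList = suppliers.reference(params);                // parameters select (or load) the instance
List<Map<String, Object>> onePage = suppliers.referenceByKey(params, "SUP-001");  // the key selects pages within it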

Return to top

Load data into a page property from a list-structure data page
You have a service that returns information for all customers who have been identified as most likely to
call back on a given day, due to open issues they are experiencing.

The system holds the information about these customers in a node-level data page that it updates each
day, giving all customer service representatives (CSRs) access to the latest information. The list has
data on thousands of customers, and each CSR needs to get the right information quickly to deal with
customer interactions.

The solution is to automatically load the correct customer's information from the list-structure data
page on the clipboard, based on the values of keys passed in.

This scenario requires:

Properties:

.Customer (a page property of class Data-Customer)


On the General tab, select "Copy data from a data page" (to allow using a data transform) or
"Refer to a data page" (without a data transform), and specify D_CustomerList as the data
page.
Set CustomerID = .Customer.CustomerID as the key to the data page. The keyed page list
feature allows indexing into a single instance of a list-structure data page.
Set LevelOfDetail = .LevelOfDetail as the parameter to send to the data page. When this
changes, the data page generates a new instance of itself on the clipboard.
You can optionally specify the CopyCustomerSubset data transform to execute after the data
page returns data to copy a subset of the data from the data page to the property.
.Customer.CustomerID: a single-value integer property, defined on the page class of .Customer, that
holds the customer ID at run time.
.LevelOfDetail: a single-value integer property, defined on the page class of .Customer, that contains
the LevelOfDetail setting for customers. The data page requires this value to get the correct
amount of customer detail.

Data page:

The data page D_CustomerList has these settings:

Structure = List
Class = Data-Customer
Scope = Node
Edit Mode = Read Only
Parameter = LevelOfDetail
Select the "Access pages with user defined keys" option and specify .CustomerID as the value for
the data page key

The data source configuration involves:

A connector data source of class Int-GetTargetCustomerInfo.


A request data transform of class Int-GetTargetCustomerInfo to form the request for the service.
The data page passes the values of the parameters CustomerID and LevelOfDetail to the request
data transform, which in turn passes them to the correct properties for the connector.
A response data transform of class Data-Customer (the class of the data page) to map from the
integration to the data class properties

What happens
1. The user or the system sets the values for .LevelOfDetail and .Customer.CustomerID.
2. The user or the system references an embedded property on .Customer. This triggers auto-
population of the customer data.
3. The system references the data page, passing the parameter values.
If an instance of the data page that corresponds to the parameter values exists on the
clipboard, and the data page's refresh strategy permits, the data page responds to the
reference with the existing instance.
Otherwise, it passes the parameters to the appropriate data source.
4. If the data source executes, it passes data to the response data transform, which maps data into
the instance of the data page.
5. The data page uses the key passed in to locate the correct page in the list.
6. The CopyCustomerSubset data transform, specified on the property, copies the required data from
the correct page in the list to the .Customer property.

If no data transform is specified, all data from the data page instance is copied into the property.
If the "Refer to a page property" option is selected, no data is copied into the property. Instead, the
system establishes a reference between the property and the correct embedded page in the data
page.
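
In the terms of the earlier illustrative sketch (the customerList setup and sample values below are
hypothetical), the difference between the two options looks like this:

// Hypothetical instance holding one customer page, keyed by CustomerID.
ListDataPageSketch customerList = new ListDataPageSketch("CustomerID",
        params -> List.of(Map.of("CustomerID", "C-42", "Name", "Dana")));

List<Map<String, Object>> match =
        customerList.referenceByKey(Map.of("LevelOfDetail", "2"), "C-42");
Map<String, Object> embedded = match.get(0);          // the page located in step 5

Map<String, Object> copied = new HashMap<>(embedded); // "Copy data from a data page": the data is copied
Map<String, Object> referred = embedded;              // "Refer to a data page": a reference, nothing is copied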

Return to top

Load data into a page list property from a list-structure data page
In this scenario, you are creating an order management application that lets users browse products
within categories by showcase types: "most ordered", "most viewed", and "most wished-for".

A service returns products of a given showcase type, but does not group them by category. The
requirement is to let the user select both the showcase type and the product category.

This scenario requires:

Properties

.Products (a page-list property of class Data-Product)


On the General tab, select "Refer to a data page" (a data transform is not required) and
specify D_ProductList as the data page.
Set CategoryID = .SelectedProductCategoryID as the key to the data page (the keyed
page list feature, noted above, allows indexing into a single instance of a list-structure
data page).
Set the parameter ShowcaseType = .SelectedProductShowcaseType. When this
parameter changes, the data page makes a fresh call to the data source and loads to the
clipboard a new instance of itself with the relevant data.
.SelectedProductCategoryID (sibling to .Products)
.SelectedProductShowcaseType (sibling to .Products)

Data page

The data page D_ProductList has these settings:


Structure = List
Class = Data-Product
Scope = Node
Edit Mode = Read Only
Parameters = ShowcaseType
Select the "Access pages with user defined keys" option and specify .CategoryID as the page list
key. Select "Allow multiple pages per key".

The data source configuration involves:

A connector data source of class Int-GetShowcaseProductList.


A request data transform of class Int-GetShowcaseProductList to form the request so the service
can use it.
A response data transform of class Data-Product to map the data from the integration to the
data class properties

What happens
1. The user or the system sets the values for .SelectedProductShowcaseType and
.SelectedProductCategoryID.
2. The user or the system references an embedded property on .Products. This triggers auto-
population of the product data.
3. The system passes parameters to the data page.
If there exists on the clipboard an instance of the data page that corresponds to the
parameter values passed in, and if the data page's refresh strategy permits it, the data page
returns data from the existing instance (jump to step 6).
Otherwise, it requests data from the data source so it can load a new instance of itself.
4. If there is a request to the data source, the data source uses the parameters passed to get and
provide the relevant data.
5. If the data source has provided data, the response data transform maps the data into the data
page.
6. The key identified on the data page is used to index into the correct pages in the list of data.
Because "Allow multiple pages per key" is selected, the data page returns all pages whose
.CategoryID matches the value passed in.
7. The system creates a reference between the .Products page list property and the products
identified in step 6 in D_ProductList.pxResults().
The requested list of products displays for the user.
The system does not copy anything to the case, since in this scenario you do not want to store
the list of products with the case.
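
In the terms of the earlier illustrative sketch (the sample data below is hypothetical), the parameter
selects the instance for the chosen showcase type, and the key returns every matching category page
as a reference rather than a copy:

// Hypothetical sample data: one instance per showcase type, keyed by CategoryID.
ListDataPageSketch productList = new ListDataPageSketch("CategoryID",
        params -> List.of(
                Map.of("CategoryID", "Toys", "ProductID", "P-1"),
                Map.of("CategoryID", "Toys", "ProductID", "P-2"),
                Map.of("CategoryID", "Books", "ProductID", "P-3")));

// Step 3: the ShowcaseType parameter selects (or loads) the instance.
// Step 6: the CategoryID key returns every matching page, as references.
List<Map<String, Object>> visible = productList.referenceByKey(
        Map.of("ShowcaseType", "mostOrdered"), "Toys"); // two pages returned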

Return to top

Load data into a page list property from different instances of a list-structure data page
You are building an order management application that lets users order products from different
suppliers. The system has a data service for each supplier, from which the application can request data
about that supplier's products.

When a user adds a product to the shopping cart, the system needs to retrieve, from the correct
database, more detailed product information than is displayed to the user, and to store that
information with the order.

This scenario differs from the previous one in having a separate service for each supplier that returns a
list of all products for that supplier. The system uses parameters to load lists of products specific to a
supplier, and keyed data access to get specific product information from the correct supplier's product
list.
This scenario requires:

Properties

The properties are in this hierarchy:

.ShoppingCart (page property)


.Products (auto-populated page list property of class Data-Product)
.SupplierID (single value property)
.ProductID (single value property)

The configuration of .Products is as follows:

Select the "Copy data from a data page" option (a data transform is required) and specify
D_ProductsList as the data page.
Set as parameters:
DetailLevel = .LevelOfDetail
SupplierID = .Products.SupplierID
Set .ProductID = .Products.ProductID as the key to the data page.
Select the "Retrieve each page separately" option. The data page either creates new instances of
itself for each unique .Products(n) based on .Products(n).ID, or copies different embedded pages
from the same data page instance.

Data page

The data page D_ProductList has these settings:

Structure = List
Class = Data-Product
Scope = Node
Edit Mode = Read Only
Parameters = DetailLevel and SupplierID
Select "Access pages with user defined keys" and select .ProductID as the page list key

The data source configuration requires a connector data source for each supplier.

What happens
1. When a user adds a product to their shopping cart, the system sets .LevelOfDetail to return full
product data for that item.
2. The user or the system references an embedded property on .Products, which triggers auto-
population of the product list.
3. The system passes parameters for each supplier to the data page.
If there exist on the clipboard instances of the data page that correspond to the parameter
values passed in, and if the data page's refresh strategy permits it, the data page returns
data from the existing instances.
Otherwise, the data page creates an instance of itself for each supplier based on the value of
the supplier parameter passed in.
Each page contains the appropriate level of detail about that supplier's products.
The data page uses the SupplierID parameter value with the When conditions associated
with each data source to locate the data source that matches the supplier in question.
4. If a data source executes and returns data, the response data transform maps the data into the
instance of the data page.
5. For each page in the .Products page list property, the system uses the .LevelOfDetail and
.SupplierID parameters to locate or load the correct data page instance in memory, then uses
.ProductID to key into that instance and return the embedded page containing the correct product
information.
6. As the user adds products to their cart, rendering the shopping cart section causes steps 1
through 5 to repeat for each product added.
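
A sketch of this flow in the terms of the earlier illustration; supplierProducts, shoppingCart, and
the sample values are hypothetical:

// Hypothetical per-supplier loader: in a real configuration, When conditions
// on the data sources would route by SupplierID.
ListDataPageSketch supplierProducts = new ListDataPageSketch("ProductID",
        params -> List.of(Map.of(
                "ProductID", "P-9",
                "SupplierID", params.get("SupplierID"),
                "Detail", "full")));

for (Map<String, Object> cartItem : shoppingCart) { // shoppingCart is hypothetical
    Map<String, String> params = Map.of(
            "DetailLevel", "full",
            "SupplierID", (String) cartItem.get("SupplierID"));
    // Steps 3 through 5: locate or load the instance for this supplier,
    // then key into it by ProductID.
    List<Map<String, Object>> match =
            supplierProducts.referenceByKey(params, cartItem.get("ProductID"));
    if (!match.isEmpty()) {
        cartItem.putAll(match.get(0)); // "Copy data from a data page" semantics
    }
}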

Return to top

Data transforms (models)


A data transform defines how to convert data that is in one format and class into data of another format
and class. Using a data transform instead of an activity to set property values speeds up development
and makes application maintenance easier.

A data transform is a structured sequence of actions. When the system invokes the data transform, it
invokes each action in turn, following the sequence that is defined in the data transform's record form.
You can use a data transform to:

Normalize data for use with a data page.


Define, copy, or map data with activities.
Copy a clipboard page to make a new page.
Map properties (and their values) on one page to another, existing page.
Map properties (and their values) on one page to a new page.
On a specific clipboard page, define one or more initial properties on that page and set their
values. A data transform can set many property values on a page in one processing step.
Append pages from one Page List property to another.
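
As a rough analogy only, the following plain-Java fragment mirrors several of these actions, with
maps and lists standing in for clipboard pages and page lists; it is not Pega's clipboard API:

Map<String, Object> sourcePage = new HashMap<>(Map.of("FirstName", "Ada", "LastName", "Lovelace"));

// Copy a clipboard page to make a new page.
Map<String, Object> newPage = new HashMap<>(sourcePage);

// Define initial properties on a page and set several values in one step.
newPage.put("Status", "New");
newPage.put("Priority", 1);

// Map properties (and their values) from one page to another, existing page.
Map<String, Object> targetPage = new HashMap<>();
targetPage.put("GivenName", sourcePage.get("FirstName"));

// Append pages from one Page List property to another.
List<Map<String, Object>> pageListA = new ArrayList<>(List.of(newPage));
List<Map<String, Object>> pageListB = new ArrayList<>();
pageListB.addAll(pageListA);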

Prior to PRPC 6.2, data transforms were known as model rules, and were used only to set property
values. Data transforms provide more capabilities than model rules.

For more information about data transforms, see Data Transforms.

Data transforms and data pages


The following data transforms that are used in data management work with data pages.

Data transform to copy a subset of data from the data page to the property
On the Edit Property form, the Optional Data Mapping field is displayed when you select Copy data from
a data page. Use this data transform to copy a subset of the data from the data page to the property. If
you do not specify a data transform, the system copies all the data from the data page to the property.
Edit property form

Request data transform


When a data source for a data page is a connector (an integration with an external data service), the
request data transform lets you map Pega Platform data to the fields that the connector needs to
communicate with the data service. Select the data transform to use in the Request Data Transform
field on the Data Page rule form on the Definition tab, in the Data sources section. For more
information, see Data page rules - Using the Definition tab.

Request Data Transform field on the Data Page rule form

Response data transform


Response data transforms normalize data provided by the data sources into the common application
data model. A response data transform is required when the data source class is incompatible with the
data page class (the recommended pattern to achieve true data virtualization). Select the data
transform to use in the Response Data Transform field on the Data Page rule form on the Definition tab,
in the Data sources section. For more information, see Data virtualization in PRPC.
Response Data Transform field on the Data Page rule form

Return to top

Changing the value of a TimeOfDay property


When you use an expression on TimeOfDay property type values, the expression converts the value into
a BigDecimal in which the whole-number part is the number of days and the fractional part is the time
of day expressed as a fraction of a day; for example, 12 hours is 0.5 (half of a day). This format is
useful to know when you want to change the value of a property that is a Date, DateTime, or TimeOfDay
property type.

For example, assume that you have a property of type TimeOfDay that is formatted as a standard time
with hour, minute, and second without punctuation. To add 12 hours to the property:

1. Use a Set action to assign the property value to a second property that is of type TimeOfDay.
2. On the right side of the expression, enter the property reference plus the fraction; in this
example, add .5 to add 12 hours.
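
A worked example of the day-fraction arithmetic, sketched in Java for illustration (this is not Pega
expression syntax):

// A TimeOfDay value of 08:15:00 ("081500") is 8.25 hours into the day:
double fractionOfDay = (8 + 15 / 60.0) / 24.0;      // 0.34375

// Adding 12 hours means adding 0.5 (half of a day):
double result = fractionOfDay + 0.5;                // 0.84375

// Converting back: 0.84375 * 24 = 20.25 hours, that is, 20:15:00.
int totalSeconds = (int) Math.round(result * 24 * 3600);
String hhmmss = String.format("%02d%02d%02d",
        totalSeconds / 3600, (totalSeconds % 3600) / 60, totalSeconds % 60); // "201500"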

Troubleshooting
The articles in this section contain information to help you troubleshoot issues related to data
management.

Troubleshooting: Cross-platform portability considerations


If your application is 100% guardrail compliant, you can successfully port your application from one
database platform to another without any additional action. You can skip the rest of the information in
this article.

However, if you made changes to your application or database outside of Designer Studio, or if you use
REST, HTTP, or SOAP integration, you might need to reconfigure some aspects of your application before
you port to a new database. For example, if you used the database vendor tools to create or modify the
database, some constructs that are specific to a particular database platform (for example, Oracle)
might not automatically translate into a similar construct on another platform (for example, Microsoft
SQL Server). If your application includes a database-specific construct or does not meet the
recommendations in this article, you might not be able to automatically deploy your application to other
database platforms.

Supported property-to-database type mappings


Use the information in the following table to determine the portable format for common database types.

Pega 7 Platform property type | IBM DB2 for Linux, UNIX, and Windows, and IBM DB2 for z/OS | Microsoft SQL Server | Oracle | PostgreSQL
Date | VARCHAR(8) bytes | VARCHAR(8) characters | VARCHAR2(8) characters | VARCHAR(8) characters
DateTime | TIMESTAMP | DATETIME | TIMESTAMP | TIMESTAMP
Decimal | DECIMAL | DECIMAL | DECIMAL | DECIMAL
Double | DECIMAL | DECIMAL | DECIMAL | DECIMAL
Identifier | VARCHAR(n) bytes | VARCHAR(n) characters | VARCHAR2(n) characters | VARCHAR(n) characters
Integer | DECIMAL | DECIMAL | DECIMAL | DECIMAL
Password | VARCHAR(70) bytes | VARCHAR(70) characters | VARCHAR2(70) characters | VARCHAR(70) characters
Text | VARCHAR bytes | VARCHAR characters | VARCHAR2 characters | VARCHAR characters
TextEncrypted | VARCHAR bytes | VARCHAR characters | VARCHAR2 characters | VARCHAR characters
TimeOfDay | VARCHAR(6) bytes | VARCHAR(6) characters | VARCHAR2(6) characters | VARCHAR(6) characters
TrueFalse | VARCHAR(5) bytes | VARCHAR(5) characters | VARCHAR2(5) characters | VARCHAR(5) characters

Variable-length types
When using variable-length types, follow the recommendations in the following table. These value
ranges are for all supported databases and ensure that you can port the data to other databases.

Type | Recommended range
Decimal | precision: 0-31; scale: 0 to precision
Integer | precision: 0-31; scale: 0
VARCHAR/VARCHAR2 or NVARCHAR/NVARCHAR2 | up to 4000 bytes (UTF-8 or UTF-16), or up to 4000 characters (ASCII)

Decimals: Decimal columns created or modified to have precision or scale outside the listed range
are not portable by using the Pega 7 Platform tools. For example, Microsoft SQL Server supports a
precision of up to 38 (38 total digits), but IBM DB2 for Linux, UNIX, and Windows supports only 31.
Attempting to migrate data from this Microsoft SQL Server column to IBM DB2 for Linux, UNIX, and
Windows would result in the loss of data and might fail.
Integers: As a best practice, use the decimal type with a scale of 0 in place of integer types.
VARCHAR: IBM DB2 databases interpret VARCHAR(n) as n bytes; other databases interpret it as n
characters. Some platforms allow you to specify the type of interpretation. For example, on Oracle
databases you can specify VARCHAR(n byte) or VARCHAR(n char). This distinction is particularly
important when dealing with Unicode data. Although ASCII uses one byte per character, UTF-8
uses up to 3 bytes per character, and UTF-16 uses 2 bytes per character.
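
The following short Java fragment illustrates the byte-versus-character difference; the strings are
arbitrary examples:

import java.nio.charset.StandardCharsets;

String ascii = "abc";    // 3 characters
String accented = "abé"; // 3 characters, but é needs more than one byte

int utf8Ascii = ascii.getBytes(StandardCharsets.UTF_8).length;           // 3 bytes
int utf8Accented = accented.getBytes(StandardCharsets.UTF_8).length;     // 4 bytes
int utf16Accented = accented.getBytes(StandardCharsets.UTF_16LE).length; // 6 bytes

// On a database that interprets VARCHAR(3) as 3 bytes, "abé" stored as UTF-8
// does not fit; on one that interprets VARCHAR(3) as 3 characters, it does.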

Views
ANSI SQL views are fully supported. Materialized views are database-specific and therefore not portable.

Index limitations
The maximum index size across all vendors is 900 bytes, which means that the sum of the maximum
lengths of all columns in the index must be 900 bytes or less.

Functional indexes are not portable.

Integration resource limitations


REST, HTTP, and SOAP are currently supported on Pega Cloud, but might require additional
configuration if you port your application to a new database. For more information about integrating
with Pega Cloud, see Integration in your Pega Cloud environment.

Any integration functionality not listed in this article or in Integration in your Pega Cloud environment
might cause portability problems.

Data Type wizard artifacts, error messages, and other processing
The Data Type wizard in the Integration Designer automatically generates artifacts, displays error
messages, and performs additional processing that is useful to understand when you are
developing and troubleshooting your application.

Created artifacts
Conditions when a data source cannot be updated in App Studio
Data and integration layers
Relationship between the response structure and the data page
Error messages
Locked ruleset processing
Supported authentication types

Created artifacts
When you create a data type that uses Pega as the system of record, the following artifacts are created:

Data type class


Data type is added to the application
Object, list, and savable data pages
Report definition to get the list of data objects
New table for the data type

When you create a data type that uses REST to connect to the system of record, the following artifacts
are created:

Data layer:
Data class for the new data type
Data type properties
Data type is added to the application
Integration layer:
Integration class
Connect REST rule
Authentication profile and authentication profile settings
Resource path settings
Response data transform
Response JSON data transform
Data page

When you replace a data source, the following artifacts are created or versioned:

Integration class, if it does not exist


Connect REST rule
Authentication profile and authentication profile settings
Resource path settings
Response data transform
Response JSON data transform
The existing data page is versioned with the new source information

When you update a data source:

A new authentication profile is created if authentication is configured.


The REST connector and JSON data transform are versioned.
The response data transform (non-JSON) and integration class are not modified.
The data page is versioned if you add a new parameter to it on the Request tab of the visual
mapper; otherwise, the data page is not modified.

The Data Type wizard creates all artifacts when you finish the wizard, except when you add a property
in the Data Mapping page. The new property is created immediately. If you cancel or close the wizard,
the property persists; it is not deleted.

When you replace or update a data type, previously generated artifacts are not deleted. New artifacts
are created.

All artifacts can be viewed in Dev Studio.

Conditions when a data source cannot be updated in App Studio
In general, only data sources created in App Studio can be updated in App Studio; however, there are
conditions when a data source created in App Studio cannot be updated. In these cases, only the
replace option is available. The update option is not available for data sources created in App Studio
when:

The JSON data transform or connector is opened and saved in Dev Studio.
The normal response data transform is modified in Dev Studio so that it no longer references only
one updatable JSON data transform.
The data page is modified in Dev Studio in any of the following ways:
The data page no longer references a normal data transform as the response data transform.
The data page references a request data transform.
The connector parameters for the data source are obtained from the current parameter page
(Pass current parameter page is set to true).

Data and integration layers


You can select the data layer in the Reuse assets from field on the first page of the Data Type wizard.

Data layer selection

The integration layer is set when you generate your application. You can modify the integration layer on
the Cases & Data tab of the application rule form.

Integration layer selection

Relationship between the response structure and the data page
When you create a data type, if the JSON response has a top-level list structure, then the data page is a
list data page. If the JSON response is not a top-level list structure, then the data page is a single-object
data page.

When you replace or update a data source, you can replace any list or single object with any response,
but you always have to match cardinality when mapping fields.
Error messages
Error messages are displayed for invalid user inputs, invalid system-generated inputs, and environment
issues. The following conditions might cause an error to be displayed:

The class name or field name is longer than 56 characters.


No rulesets are in the application.
All rulesets are locked.
Authentication fails.
There is a connection problem; for example, the host is not reachable.
The response format is not JSON.
The response JSON structure is invalid.

To help with troubleshooting, you can view the actual JSON request and response by clicking the
Information icon on the Data Mapping page.

Call information

Locked ruleset processing


You cannot add a data type to a locked ruleset because the Data Type wizard cannot generate the
rules. The Data Type wizard searches for the first unlocked ruleset. If the first ruleset that is selected is
locked, the Data Type wizard searches for the next best ruleset. If all rulesets are locked, the Data Type
wizard displays an error message. Rulesets are selected in the following order:

1. Branch of the same ruleset as the record


2. Current ruleset
3. User's record management preferences (application, branch, ruleset)
4. Ruleset that defines the applies to class, if the object has an applies to class
5. Ruleset in the same application as the class definition's ruleset, if the object has an applies to class
6. Ruleset in the user's profile

Supported authentication types


The following authentication formats are supported for REST connections to the system of record:

Basic
NTLM
