2018-06-07
3 Development .... 990
3.1 Applications in the Cloud Foundry Environment .... 990
4 Extensions .... 1610
4.1 Basic Concepts .... 1612
Extension Application Front End .... 1612
Extension Application Back End .... 1615
4.2 Extending SAP Hybris Cloud for Customer .... 1620
Create an Integration Token for SAP Hybris Cloud for Customer .... 1622
4.3 Extending SAP SuccessFactors .... 1624
Create an Integration Token for SAP SuccessFactors .... 1627
Installing and Configuring Extension Applications .... 1628
5 Administration .... 1658
5.1 Account Operations .... 1658
Change Global Account Display Name .... 1658
Managing Subaccounts .... 1659
6 Security .... 2031
6.1 Authorization and Trust Management .... 2031
Authorization and Trust Management in the Cloud Foundry Environment .... 2032
Authorization and Trust Management in the Neo Environment .... 2116
6.2 Platform Identity Provider .... 2200
Overview .... 2200
1. Create Trust with the Identity Authentication Tenant .... 2201
2. Add Global Account Members from the Identity Authentication Tenant User Base .... 2202
3. Add Subaccount Members from the Identity Authentication Tenant User Base .... 2203
4. (Optional) Configure the Identity Authentication Tenant for the Required Scenarios .... 2204
SAP Cloud Platform is an enterprise platform-as-a-service (enterprise PaaS) that provides comprehensive
application development services and capabilities, letting you build, extend, and integrate business
applications in the cloud.
SAP Cloud Platform is SAP's innovative cloud development and deployment platform. It is supported by multiple
cloud infrastructure providers and enables innovative technologies such as the Internet of Things, machine
learning, artificial intelligence, and big data, helping you achieve business agility and accelerate digital
transformation across your business. SAP Cloud Platform offers different development environments, including
the Cloud Foundry and Neo environments, and provides a broad choice of programming languages.
Scenarios
Environments
Environments constitute the actual platform-as-a-service offering of SAP Cloud Platform and allow for the
development and administration of business applications. Each environment provides at least one application
runtime and comes with its own domain model, user and role management logic, and tools (for example, a
command line utility). SAP Cloud Platform provides different environments: Cloud Foundry and Neo. For a detailed
overview of the features and capabilities of each environment, see Environments [page 16].
Regions
You can deploy applications in different regions. Each region represents the location of a data center, the physical
location (for example, Europe, US East) where applications, data, or services are hosted. These data centers are
operated either by SAP or by third-party data center providers such as Amazon Web Services (AWS) or Microsoft
Azure. You can optimize application performance (response time, latency) by selecting a region close to your
users. For more information, see Regions [page 21] and Global Accounts: Enterprise versus Trial [page 11].
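The region choice described above boils down to comparing expected latency per region. As a minimal sketch, the following Python snippet picks the region with the lowest measured round-trip time; the region names and latency figures are made-up illustrations, not official SAP values:

```python
# Illustrative only: region labels and latency figures are invented for the
# example and are not official SAP Cloud Platform data.
MEASURED_LATENCY_MS = {
    "Europe (Frankfurt)": 28,
    "US East (Ashburn)": 110,
    "Asia-Pacific (Sydney)": 190,
}

def closest_region(latencies):
    """Return the region whose measured round-trip latency is lowest."""
    return min(latencies, key=latencies.get)

print(closest_region(MEASURED_LATENCY_MS))  # → Europe (Frankfurt)
```

In practice you would measure latency from your users' locations (not your own) before deciding where to operate a subaccount.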
According to your preferred development environment and your use cases, you may want to consume a set of
services that are provided by SAP Cloud Platform. For more information, see Capabilities [page 24] and
Availability of SAP Cloud Platform Services.
SAP Cloud Platform facilitates secure integration with on-premise systems that are running software from SAP
and other vendors. Using the platform services, such as the connectivity service, applications can establish secure
connections to on-premise solutions, enabling integration scenarios with your cloud-based applications. For more
information about the connectivity service, see Connectivity [page 32].
Secure Data
The comprehensive, multilevel security measures that are built into SAP Cloud Platform are engineered to protect
your mission-critical business data and assets, and to provide the necessary industry-standard compliance
certifications.
Quality Certificates
Third-party certification bodies provide independent confirmation that SAP meets the requirements of
international standards. You can find all certificates at https://www.sap.com/corporate/en/company/quality.html.
Free Trial
Get a free SAP Cloud Platform trial license that also gives you access to our community and all the technical
resources, tutorials, blogs, and support you need. Visit the SAP Cloud Platform Developer Center at
https://cloudplatform.sap.com/developers.html or the SAP Cloud Platform cockpit at
https://account.hanatrial.ondemand.com/.
1.1 Accounts
Learn more about the different types of accounts on SAP Cloud Platform and how they relate to each other.
User accounts enable users to log on to SAP Cloud Platform, access subaccounts, and use services according
to the permissions given to them.
There are two types of users on SAP Cloud Platform: platform and business. Platform users are usually
developers, administrators, or operators who deploy, administer, and troubleshoot applications and services.
Business users are those who use the applications that are deployed to SAP Cloud Platform.
A user account corresponds to a particular user in the SAP ID service and consists, for example, of a user ID and
password. You can also integrate your own identity management systems to manage business users in both the
Cloud Foundry environment and the Neo environment. However, managing platform users using your own SAP
Cloud Platform Identity Authentication Service tenant is possible only in the Neo environment. For more
information, see Platform Identity Provider [page 2200].
A user account can be assigned to one or more global accounts, subaccounts, and Cloud Foundry spaces. As a
user, you can view a list of all global accounts, subaccounts, and Cloud Foundry spaces that are available to you,
and access them using the cockpit. A user with administrative permissions can create subaccounts and Cloud
Foundry spaces, add users to subaccounts and Cloud Foundry spaces, and assign roles to users for the
subaccount or Cloud Foundry space in question.
Global accounts are hosted environments that represent the scope of the functionality and the level of support
based on a customer or partner’s entitlement to platform resources and services.
The global account is the realization of the commercial contract with SAP. You can choose an enterprise global
account or a trial global account. The type you choose determines pricing, conditions of use, resources, and
services available. For more information, see Global Accounts: Enterprise versus Trial [page 11].
A global account can contain one or more subaccounts in which you deploy applications, use services, and
manage your subscriptions.
SAP Cloud Platform provides two types of global accounts: enterprise and trial. The type you choose
determines pricing, conditions of use, resources, available services, and hosts.
An enterprise account is usually associated with one SAP customer or partner and contains their purchased
entitlements to platform resources and services. It groups together different subaccounts that an administrator
makes available to users for deploying applications. Administrators can assign the available quotas to the different
subaccounts and move them between subaccounts that belong to the same enterprise account.
A trial account lets you try out SAP Cloud Platform for free. Access is open to everyone. Trial accounts are intended
for personal exploration, and not for production use or team development. They allow restricted use of the
platform resources and services. The trial period varies depending on the environment.
A trial account in the Cloud Foundry environment can contain multiple subaccounts. In the Neo environment, you
can manage only one trial subaccount.
It depends on your use case whether you choose a free trial account or a paid enterprise account. You may want to
start out with an SAP Cloud Platform trial account that also gives you access to our community, including free
technical resources such as tutorials and blogs. If you plan to use your global account in productive mode, you
must purchase a paid enterprise account. It is important that you are aware of these differences when you are
planning and setting up your account model.
The main features of each global account type are described in the following tables:
Enterprise Accounts

Customer Account
● Use case: A customer account is a global account that enables you to host productive, business-critical
applications with 24/7 support.
● Benefits: Support for productive applications.
● Limitations: Resources according to your contract.
● Registration: For more information, see https://hcp.sap.com/pricing.html.
● Available regions: See Regions [page 21].

Partner Account
● Use case: A partner account is a global account that enables you to build applications and to sell them to
your customers.
● Benefits: Includes SAP Application Development licenses that enable you to get started with scenarios
across cloud and on-premise applications. Offers the opportunity to certify applications and receive the SAP
partner logo package with usage policies. Advertise and sell applications via the SAP Store.
● Limitations: Predefined resources according to your partner package. You can purchase additional resources
if necessary.
● Registration: To join the partner program, sign up at the SAP Application Development Partner Center.
● Available regions: See Regions [page 21].
Trial Accounts

Cloud Foundry Environment
● Use case: A trial account enables you to explore the basic functionality of the Cloud Foundry environment
for 90 days. Access is open to everyone.
● Services available: Productive and beta services.
● Limitations:
One trial account for a trial user;
Subaccount creation is possible;
Trial account allows for member management;
1 GB of memory for applications;
2 GB of instance memory;
20 total routes;
20 total services;
Two configured on-premise systems with the Cloud connector;
No service level agreement with regard to the availability of the platform;
Usage of HDI containers in a shared SAP HANA database.
● Registration: Get a Free Trial Account in the Cloud Foundry Environment [page 910]
● Available regions: See Regions [page 21].

Neo Environment
● Use case: A trial account enables you to explore the basic functionality of the Neo environment for a
noncommittal and unlimited period. Access is open to everyone.
● Services available: Productive and beta services.
● Limitations:
One trial account for a trial user;
No subaccount creation allowed;
Does not allow for member management; only one user per trial account;
1 GB of memory for applications;
1 GB of database storage;
1 GB of document storage;
One SAP HANA MDC tenant database;
100 MB of memory for all Git repositories;
Two configured on-premise systems with the Cloud connector;
Cloud connector supported only for Java and HTML5 applications;
No service level agreement with regard to the availability of the platform.
● Registration: Get a Free Trial Account in the Neo Environment [page 919]
● Available regions: See Regions [page 21].
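The Cloud Foundry trial limits lend themselves to a quick sanity check before you plan a deployment. The following Python sketch compares a planned usage against those quotas; the limit names and the check function are illustrative conveniences, not an SAP API:

```python
# Sketch: validate a planned trial deployment against the Cloud Foundry
# trial quotas listed in the documentation (1 GB app memory, 2 GB instance
# memory, 20 routes, 20 services). The function itself is hypothetical.
CF_TRIAL_LIMITS = {
    "app_memory_mb": 1024,
    "instance_memory_mb": 2048,
    "routes": 20,
    "services": 20,
}

def exceeded_limits(planned, limits=CF_TRIAL_LIMITS):
    """Return the names of all quotas the planned usage would exceed."""
    return sorted(k for k, v in planned.items() if v > limits.get(k, float("inf")))

plan = {"app_memory_mb": 1536, "routes": 5, "services": 25}
print(exceeded_limits(plan))  # → ['app_memory_mb', 'services']
```

Anything flagged here is a signal to either trim the plan or move to an enterprise account.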
The hierarchical structure of global accounts and subaccounts lets you define an account model that accurately
fits your business and development needs. For example, if you want to set up different environments for
development, testing, and productive usage, you can create a subaccount for each of these scenarios in your
global account. You can also create subaccounts for different development teams or departments in your
organizations. You can make additional elements available in this hierarchy, depending on the environment you
work in. For example, this can be the organization that is associated with your subaccount and that can contain
one or more spaces in the Cloud Foundry environment. Each subaccount comprises exactly one organization.
Subaccounts in a global account are independent from each other. This is important to consider with respect to
security, member management, data management and migration, integration, and so on, when you plan your
landscape and overall architecture.
Each subaccount is associated with a particular region, which is the physical location where applications, data, or
services are hosted. Since the Cloud Foundry and the Neo environments run in different regions, each subaccount
comprises exactly one of these development environments. The specific region associated with a subaccount is
relevant when you deploy applications (region host) and access the SAP Cloud Platform cockpit (cockpit URL).
The region assigned to your subaccount doesn't have to be directly related to your location. You could be located in
the United States, for example, but operate your subaccount in Europe. For more information, see Regions [page
21].
Caution
You should not use SAP Cloud Platform beta features in subaccounts that belong to productive enterprise
accounts. Any use of beta functionality is at the customer's own risk, and SAP shall not be liable for errors or
damages caused by the use of beta features.
For more information, see Using Beta Features in Subaccounts [page 16].
For enterprise global accounts, you can create multiple subaccounts in either the Cloud Foundry or the Neo environment.
Every trial user must have a subaccount in the Neo environment. There is no global account associated with this
trial account. For a Cloud Foundry trial, you get a trial global account in addition to your Neo trial. Within your trial
global account, you can have multiple subaccounts in the Cloud Foundry environment.
When you create a subaccount in a trial account in the Cloud Foundry environment, the system creates a Cloud
Foundry org automatically.
Note
The subaccount and the org have a 1:1 relationship. They have the same name and therefore also the same
navigation level in the cockpit.
Within that Cloud Foundry org, you can create spaces. Spaces enable you to further break down your account
model and use services and functions in the Cloud Foundry environment.
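The hierarchy described above (global account → subaccounts, with each Cloud Foundry subaccount mapped 1:1 to an org that contains spaces) can be sketched as a small data model. The class names here are illustrative, not an SAP API:

```python
# Sketch of the account hierarchy: a global account contains subaccounts;
# each Cloud Foundry subaccount has exactly one org (created automatically,
# with the same name), and the org contains spaces.
from dataclasses import dataclass, field

@dataclass
class Org:
    name: str
    spaces: list = field(default_factory=list)

@dataclass
class Subaccount:
    name: str
    org: Org = None  # exactly one org per Cloud Foundry subaccount

    def __post_init__(self):
        # The org is created automatically and shares the subaccount's name.
        if self.org is None:
            self.org = Org(name=self.name)

@dataclass
class GlobalAccount:
    name: str
    subaccounts: list = field(default_factory=list)

ga = GlobalAccount("my-trial")
dev = Subaccount("dev")
dev.org.spaces += ["dev-space", "test-space"]
ga.subaccounts.append(dev)
print(dev.org.name)  # → dev (same name as the subaccount)
```

The 1:1 subaccount-to-org relationship is enforced here by creating the org in `__post_init__`, mirroring the automatic org creation the cockpit performs.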
SAP may offer, and a customer may choose to accept, access to functionality, such as a service or application,
that is not generally available and has not been validated and quality-assured in accordance with SAP standard
processes. Such functionality is defined as a beta feature.
Beta features let customers, developers, and partners test new features on SAP Cloud Platform. The beta features
have the following characteristics:
● SAP may require that customers accept additional terms to use beta features.
● Beta features are released for enterprise accounts, trial accounts, or both.
● To allow the use of beta features in the subaccounts available to you in the SAP Cloud Platform cockpit, you
need to set the Enable beta features option in the subaccount's details.
● No personal data may be processed by beta functionality in the context of contractual data processing without
additional written agreement.
1.2 Environments
SAP Cloud Platform provides different development environments, for example, the Cloud Foundry environment
and the Neo environment.
The environments are open source and based on open standards. The availability of different environments
provides choices for technologies, runtimes, and services when using SAP Cloud Platform, allowing for great
flexibility in your development process. You can enhance SAP products, integrate business applications, as well as
develop entirely new enterprise applications based on services and business APIs that are hosted on SAP Cloud
Platform.
The Cloud Foundry environment contains the Cloud Foundry Application Runtime, which is based on the open-
source application platform managed by the Cloud Foundry Foundation.
You can deploy your Cloud Foundry applications in different regions, each of which represents the location of a
data center. For more information on regional availability of the Cloud Foundry environment, see Regions and API
Endpoints Available for the Cloud Foundry Environment [page 22].
You can leverage a multitude of buildpacks, including community innovations and self-developed buildpacks. It
also integrates with SAP HANA extended application services, advanced model (SAP HANA XSA). This runtime
platform enables you to develop and deploy web applications, supporting multiple runtimes, programming
languages, libraries, and services.
The following table shows which Cloud Foundry features are supported by the Cloud Foundry environment on SAP
Cloud Platform and which aren't.
For more information about Cloud Foundry, see the official Cloud Foundry documentation at https://
www.cloudfoundry.org/ .
The Neo environment lets you develop HTML5, Java, and SAP HANA extended application services (SAP HANA
XS) applications. You can also use the UI Development Toolkit for HTML5 (SAPUI5) to develop rich user interfaces
for modern web-based business applications.
The Neo environment also allows you to deploy solutions on SAP Cloud Platform. In the context of SAP Cloud
Platform, a solution is made up of various application types and configurations created with different technologies,
designed to implement a certain scenario or task flow. You can deploy solutions by using the Change and Transport
System (CTS+) tool, the console client, or the SAP Cloud Platform cockpit, which also lets you monitor your
solutions. The SAP multitarget application (MTA) model encompasses and describes application modules,
dependencies, and interfaces in an approach that facilitates validation, orchestration, maintenance, and
automation of the application throughout its life cycle.
The Neo environment lets you use virtual machines, allowing you to install and maintain your own applications in
scenarios that aren't covered by the platform. A virtual machine is the virtualized hardware resource (CPU, RAM,
disk space, installed OS) that blends the line between Platform-as-a-Service and Infrastructure-as-a-Service.
You can deploy applications developed in the Neo environment to various SAP data centers around the world. For
more information about regional availability of the Neo environment, see Regions and Hosts Available for the Neo
Environment [page 23].
Choose the development environment that is most suitable for your business needs.
Application developers can use the Cloud Foundry environment to enhance SAP products and to integrate
business applications, as well as to develop entirely new enterprise applications based on business APIs that are
hosted on SAP Cloud Platform. The Cloud Foundry environment allows you to use multiple programming
languages such as Java, Node.js, and community/bring-your-own language options. We recommend that you use
the Cloud Foundry environment for 12-factor and/or micro-services-based applications, for Internet of Things and
machine learning scenarios, and for developing applications using SAP HANA extended application services,
advanced model (SAP HANA XSA).
Neo is a feature-rich and easy-to-use development environment, allowing you to develop Java, SAP HANA XS, and
HTML5 applications. We recommend that you use the Neo environment to develop HTML5 and complex Java
applications and for complex integration and extension scenarios.
The following table provides an overview of the features, capabilities, and restrictions of each environment:

Cloud Foundry environment
● Best used for: 12-factor- and/or microservice-based applications and services, IoT and machine learning
scenarios, and XSA applications. Allows you to use multiple programming languages such as Java, Node.js,
and community / bring-your-own language options.
● Available regions: See Regions [page 21].
● Note: Support for additional buildpacks is limited. SAP regularly upgrades to new versions of the Cloud
Foundry environment. Any fixes provided in updated versions of the buildpacks are available after the
relevant upgrade. You can report issues with these buildpacks to SAP. Issues detected in the buildpacks are
addressed to the relevant community; however, SAP only fixes issues that are related to SAP Cloud Platform
itself.
● SAP HANA programming model: SAP HANA extended application services, advanced model (SAP HANA XSA).
● Docker support: Docker with Diego. For more information, see the Cloud Foundry environment documentation
at http://docs.cloudfoundry.org/adminguide/docker.html.
● Extension development: Only selected scenarios.

Neo environment
● Best used for: HTML5-based, SAP HANA XS, and complex Java applications.
● Available regions: See Regions [page 21].
● SAP HANA programming model: SAP HANA extended application services, classic model (SAP HANA XS).
● Docker support: Not available.
● Extension development: Available for SAP SuccessFactors and SAP S/4HANA.
1.3 Regions
Depending on the type of your global account and the environment you're using, you can deploy applications in
different regions.
Each region represents the location of a data center, the physical location (for example, Europe, US East) where
applications, data, or services are hosted. Application performance (response time, latency) can be optimized by
selecting a region close to the users. When deploying applications, consider that a subaccount is associated with a
particular region and that this is independent of your own location. You may be located in the United States, for
example, but operate your subaccount in a region in Europe.
To deploy an application in more than one region, execute the deployment separately for each host.
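Because each region exposes its own API endpoint, a multi-region rollout is the same deployment repeated once per host. The sketch below builds one Cloud Foundry CLI invocation per region; the endpoint pattern follows the documented `api.cf.<region>.hana.ondemand.com` convention, and the region IDs are examples, not a complete list:

```python
# Sketch: generate one deployment command per regional API endpoint.
# Region IDs ("eu10", "us10") are examples only; check the Regions table
# for the hosts actually available to your account.
REGIONS = ["eu10", "us10"]

def deploy_commands(app, regions):
    """Build one 'cf push' invocation per regional API endpoint."""
    return [
        f"cf api https://api.cf.{r}.hana.ondemand.com && cf push {app}"
        for r in regions
    ]

for cmd in deploy_commands("myapp", REGIONS):
    print(cmd)
```

Running each generated command against the corresponding endpoint deploys the application region by region, which is exactly the "separately for each host" procedure described above.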
The different development environments – Cloud Foundry environment and Neo environment – are both available
in different regions. All regions that are available for the Neo environment are exclusively provided by SAP, whereas
regions that are available for the Cloud Foundry environment might also be provided by third-party data center
providers such as Amazon or Microsoft. These third-party data center providers operate the infrastructure layer of
regions. By contrast, SAP operates the platform layer and Cloud Foundry.
Regions and API Endpoints Available for the Cloud Foundry Environment
Caution
Some customer contracts include EU access, which means that we only use European subprocessors to access
personal data in cloud services, such as when we provide support. We currently cannot guarantee EU access in
the Cloud Foundry environment. If your contract includes EU access, we cannot move services to the Cloud
Foundry environment without changing your contract.
Tip
To log on to the cockpit and navigate to a space in the Cloud Foundry environment, choose any of the cockpit
URLs provided in the following table for the Neo environment, then choose a Cloud Foundry region.
eu1.hana.ondemand.com
1.4 Capabilities
SAP Cloud Platform provides a rich set of capabilities that group together different technical components, such as
services, tools, and runtimes.
Find an overview of available capabilities in the figure below. For detailed information about the regional availability
of services, see Availability of SAP Cloud Platform Services.
1.4.1 Analytics
Embeds advanced analytics into application solutions, empowering you to identify, combine, and manage multiple
sources of data and build advanced analytics models within business applications for personalized, contextual
insights.
● https://help.sap.com/viewer/p/PREDICTIVE_SERVICE
● https://help.sap.com/viewer/product/SAP_BusinessObjects_Cloud/release/en-US
● https://help.sap.com/viewer/352c8328eab24b80be4bf876355d340c/Cloud/en-US
● https://help.sap.com/viewer/p/Streaming_Analytics
SAP Business Services allows for the fast development of business applications and services for the cloud, and
powers an open marketplace for new business apps, which includes SAP, hybris, and other third-party applications.
This includes prepackaged applications for customer service and e-commerce, micro business services, and a
marketplace of services for quickly creating business-ready applications. This capability includes the following
services: SAP Data Quality Management, microservices for location data, SAP RealSpend, and SAP Localization
Hub, tax service.
● https://help.sap.com/viewer/d95546360fea44988eb614718ff7e959/Cloud/en-US
● https://help.sap.com/viewer/p/SAP_RealSpend
● https://help.sap.com/viewer/SLH_tax_service
Brings people together with secure access to shared business content, information, applications, and processes to
drive results and increase team productivity. This capability includes the following services: Gamification, SAP Live
Link 365, SAP Document Center, and SAP Jam.
● https://help.sap.com/viewer/850b6386f85d49699cfa908a5bc99d99/Cloud/en-US
● https://livelink.sapmobileservices.com/documentation/
● https://help.sap.com/viewer/p/SAP_Document_Center
● https://help.sap.com/viewer/product/SAP_JAM_COLLABORATION/en-US
By eliminating the division between transactions and analytics, SAP HANA powers any business question
anywhere in real time. With SAP HANA, spatial processing, and data virtualization on the same architecture,
innovating with big data is simplified and accelerated. With SAP Adaptive Server Enterprise (SAP ASE),
customers can drive faster, more reliable transaction processing for less. SAP ASE is an affordable relational
database management system designed for high-performance, transaction-based applications involving massive
volumes of data and thousands of concurrent users. This capability includes the following services: Document
service, SAP HANA, SAP ASE, MongoDB, Object Store, PostgreSQL, and Redis.
1.4.5 DevOps
Development and IT Operations allow you to develop and manage applications including complete life cycle
management. This capability includes the following services: Application Autoscaler, Corporate Git Link for SAP
Web IDE, Debugging service, Feature Flags service, Git service, Java Apps Lifecycle Management, Job Scheduler
(Beta), Logging services, Monitoring service, Profiling service, SAP Translation Hub, SAP Web IDE, SAP Web IDE
Full-Stack, and Solutions Lifecycle Management.
● https://help.sap.com/viewer/825270ffffe74d9f988a0f0066ad59f0/Cloud/en-US/b8427ec16ae64347b97d2d46fb28f7cd.html
● https://help.sap.com/viewer/product/DEBUGGING_SERVICE/Cloud/en-US
1.4.6 Integration
Improves business agility while preventing data and application silos by seamlessly and securely integrating cloud
applications into business landscapes. Securely collaborate with customers and partners at scale to improve
efficiencies as well as gain real-time insights from sensors, devices, and social sentiment. This capability includes
the following services: API Management, Business Rules, Integration, Connectivity, Destination service, Enterprise
Messaging, OData Provisioning, RabbitMQ, and Workflow.
Provides the ability to quickly develop, deploy, and manage real-time IoT, machine-to-machine, and remote
data sync applications. Onboard and manage connected remote devices, get real-time predictive analysis to
improve intelligence and decision-making at the edge of the network, and optimize business processes at the core
of any business. This capability includes the following services: Internet of Things, SAP IoT Application
Enablement, and Remote Data Sync.
● https://help.sap.com/viewer/product/SAP_CP_IOT_CF/Cloud/en-US
● https://help.sap.com/viewer/product/SAP_CP_IOT_NEO/Cloud/en-US
● https://help.sap.com/viewer/p/SAP_IOT_APPLICATION_SERVICES
● https://help.sap.com/viewer/ee5a2592b2884ea795b7cb1ed96299c7/Cloud/en-US
SAP Leonardo Machine Learning Foundation, built on SAP Cloud Platform, provides advanced machine learning
capabilities that help applications recognize patterns and correlations in data. It offers instantly consumable
services that let you learn from data and extract knowledge that was previously inaccessible for computers.
● https://help.sap.com/viewer/p/SAP_LEONARDO_MACHINE_LEARNING_FOUNDATION
Deliver enterprise-grade native and hybrid mobile apps. The mobile portfolio delivers key capabilities such as
multiple authentication methods, secure access to on-premise and cloud-based systems, offline synchronization,
remote logging control and retrieval, automatic application updates for hybrid applications, and one-to-one and
one-to-many push notifications. This capability includes the following services: Agentry, App & Device Management,
Development & Operations, SAP Fiori Mobile, and SAP Live Link 365.
● https://help.sap.com/viewer/38dbd9fbb49240f3b4d954e92335e670/Cloud/en-US/642d3a98d510496a99ab6bbb48910762.html
● https://help.sap.com/viewer/p/MOBILE_SERVICE_FOR_APP_AND_DEVICE_MANAGEMENT
● https://help.sap.com/viewer/p/SAP_CLOUD_PLATFORM_MOBILE_SERVICE_FOR_DEVELOPMENT_AND_OPERATIONS
SAP Cloud Platform supports different programming languages and models and offers standards-based
development. SAP Cloud Platform Virtual Machine gives you full control over virtualized hardware resources,
enabling you to install whatever additional software you need to complement your cloud applications. The HTML5
Application Repository service (Beta) enables you to centrally store and provision HTML5 applications in the
Cloud Foundry environment.
1.4.11 Security
SAP Cloud Platform's closely integrated security services include authentication, single sign-on, on-premises
integration and self-services such as registration and password reset for employees, customers, partners, and
consumers. This capability includes the following services: Authorization & Trust Management, Identity
Authentication, Identity Provisioning, Keystore Service, and OAuth 2.0 Service.
● https://help.sap.com/viewer/product/CP_AUTHORIZ_TRUST_MNG/Cloud/en-US
● https://help.sap.com/viewer/product/IDENTITY_AUTHENTICATION/Cloud/en-US [https://help.sap.com/
viewer/product/IDENTITY_AUTHENTICATION/Cloud/en-US]
Empowers organizations to build and scale simple, personalized, and responsive user experiences on any device,
anywhere, for every user. This capability includes the following services: SAP Build, Feedback service (Beta), Forms
by Adobe, Portal, and UI theme designer.
● http://sap.github.io/BUILD_User_Assistance/build/HCPCOCKPIT.html [http://sap.github.io/
BUILD_User_Assistance/build/HCPCOCKPIT.html]
● https://help.sap.com/viewer/407fa80165404cd1a90f515b906e39e4/Cloud/en-US [https://help.sap.com/
viewer/407fa80165404cd1a90f515b906e39e4/Cloud/en-US]
● https://help.sap.com/viewer/product/CP_FORMS_BY_ADOBE/Cloud/en-US [https://help.sap.com/viewer/
product/CP_FORMS_BY_ADOBE/Cloud/en-US]
● https://help.sap.com/viewer/3ca6847da92847d79b27753d690ac5d5/Cloud/en-US [https://help.sap.com/
viewer/3ca6847da92847d79b27753d690ac5d5/Cloud/en-US]
● https://help.sap.com/viewer/p/UI_THEME_DESIGNER [https://help.sap.com/viewer/p/
UI_THEME_DESIGNER]
1.5 Connectivity
Overview
The connectivity service allows SAP Cloud Platform applications to securely access remote services that run on
the Internet or on premise.
A company that uses SAP Cloud Platform is granted a global account to which only authorized users of the
company have access. The company can subscribe applications to its subaccount(s) or deploy its own
applications, and those applications can then be used by this subaccount. The administrator of the Cloud
Connector can set up a secure tunnel from the customer network to the company's subaccount. The platform
ensures that the tunnel can only be used by the subaccount's applications. Applications assigned to other
accounts cannot access the tunnel, which is encrypted using Transport Layer Security (TLS) to guarantee
connection privacy.
Features
The connectivity service supports the following protocols relevant for both Java and SAP HANA development:
● HTTP(S) - exchange data between your on-demand application and on-premise systems or Internet services.
For this purpose, you can create and configure HTTP destinations to make the needed Web connections. For on-
premise connectivity, you can reach back-end systems through the Cloud Connector via HTTP.
● Mail Protocols - the SMTP protocol allows you to send electronic mail messages from your Web applications
using e-mail providers that are accessible on the Internet, such as Google Mail (Gmail). The IMAP and POP3
protocols allow you to retrieve e-mails from the mailbox of your e-mail account. Applications use the standard
javax.mail API. The e-mail provider and e-mail account are configured using mail destinations.
● RFC - enables you to invoke ABAP function modules. You can create and configure RFC destinations as well as
make connections to back-end systems using the Cloud Connector via RFC.
You can create XS destinations for connecting your HANA XS applications to Internet and on-premise services. For
more information, see Consuming the Connectivity Service (HANA XS) [page 240].
Java Development
● Consume a service from the Internet. More information: Consume Internet Services (Java Web or Java EE 6
Web Profile) [page 156]
● Make connections between Web applications and on-premise backend services via HTTP protocol. More
information: Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 171]
● Make connections between Web applications and on-premise backend services via RFC protocol. More
information: Tutorial: Invoke ABAP Function Modules in On-Premise ABAP Systems [page 208]
● Establish connections from on-premise systems to SAP Cloud Platform, using the Cloud Connector. More
information: Cloud Connector [page 253]
● Send and fetch e-mails. More information: Sending and Fetching E-Mail [page 227].
Restrictions
● For the on-demand to on-premise connectivity scenario, the following protocols are currently supported:
○ Neo environment:
○ HTTP
Note
HTTPS is not needed, since the tunnel used by the Cloud Connector is TLS-encrypted.
○ LDAP
○ RFC
○ TCP
Note
You can use TCP-based communication for any client that supports SOCKS5 proxies.
● Service Channels are currently supported only for the Neo environment.
● For Internet connections, you are allowed to use any port > 1024. For on-demand to on-premise solutions
there are no port limitations.
● You can use destination configuration files with the extensions .props, .properties, .jks, and .txt, as well as
files with no extension.
● If a destination configuration contains a keystore or truststore, it must be stored in a JKS file with the
standard .jks extension.
● To develop a Java Connector (JCo) application, your SDK local runtime needs to be hosted by a 64-bit JVM, on
an x86_64 operating system (Microsoft Windows OS, Linux OS, or Mac OS X).
On Windows platforms, you need to install Microsoft Visual C++ 2010 Redistributable Package (x64). To
download this package, go to http://www.microsoft.com/en-us/download/details.aspx?id=14632 .
● To check all software and hardware prerequisites for working with Cloud Connector 2.x, see Prerequisites
[page 258].
● You cannot communicate with an e-mail provider via an unencrypted SMTP protocol on port 25.
● Fetched e-mail is not scanned for viruses.
● Sending e-mail with attachments using javax.activation.DataHandler works with SAP Cloud Platform
SDK for Java EE 6 Web Profile.
● Mail destinations can be configured only on application level. That is, configuration on a subscription or
customer subaccount level is not supported.
● For SAP Cloud Platform SDK for Java Web and SAP Cloud Platform SDK for Java EE 6 Web Profile:
Applications must use the javax.mail version that is provisioned by the SAP Cloud Platform runtime (see
Connectivity and Destination APIs [page 76]). Applications must not include the javax.mail library as part
of the web archive.
Related Information
Find detailed information on how to consume SAP Cloud Platform Connectivity in the Cloud Foundry environment.
Related Information
Prerequisites
● You have installed and configured a Cloud Connector in your on-premise landscape for the scenario you
want to use. See Installation [page 257] and Configuration [page 284].
● You have deployed an application in a landscape of the Cloud Foundry environment that complies with the
Business Application Pattern.
● Your application is secured as described in Configure Authentication for Java API Using Spring Security [page
2046].
● The connectivity service is a regular service in the Cloud Foundry environment. Therefore, to consume the
connectivity service from an application, you must create a service instance and bind it to the application. See
Create and Bind a Connectivity Service Instance [page 40].
● To get the required authorization for making on-premise calls through the connected Cloud Connector, the
application must be bound to an instance of the xsuaa service using the service plan 'application'. Make sure
you set the xsappname property to the name of the application when creating the instance. Find a guide for
this procedure in section Creation of the SAP Cloud Platform XSUAA instance of this SCN blog: How to use SAP
Cloud Platform Connectivity and Cloud Connector in the Cloud Foundry environment .
Note
Currently, the only supported protocol for connecting the Cloud Foundry environment to an on-premise system
is HTTP. HTTPS is not needed, since the tunnel used by the Cloud Connector is TLS-encrypted.
Required Information
● The endpoint in the Cloud Connector (virtual host and virtual port) and accessible URL paths on it
(destinations). See Configure Access Control (HTTP) [page 151].
● The required authentication type for the on-premise system. See HTTP Destinations [page 130].
● Depending on the authentication type, you might need a username and password for accessing the on-
premise system. For more details, see Client Authentication Types for HTTP Destinations [page 143].
● (Optional) You can use a location ID. For more details, see section Optional: Specifying the Cloud Connector
Location ID below.
We recommend that you use the destination service (see Consuming the Destination Service (Cloud Foundry
Environment) [page 43]) to procure this information. However, using the destination service is optional. You can
also procure (look up) this information in another appropriate way.
Consuming the connectivity service requires credentials from the xsuaa and connectivity service instances which
are bound to the application. By binding the application to service instances of the xsuaa and connectivity service
as described in the prerequisites, these credentials become part of the environment variables of the application.
You can access them as follows:
Sample Code
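A minimal sketch of this step (Python is used for illustration; a Java application would do the same with a JSON library). The function name and the instance name used in the usage comments are illustrative, not part of the platform API:

```python
import json
import os

def service_credentials(vcap_services, label, instance_name=None):
    """Return the 'credentials' object of a bound service from VCAP_SERVICES.

    label         -- service label, e.g. "connectivity" or "xsuaa"
    instance_name -- optional; when several instances of the same service are
                     bound, selects the array element with a matching "name"
    """
    for instance in json.loads(vcap_services).get(label, []):
        if instance_name is None or instance.get("name") == instance_name:
            return instance["credentials"]
    raise KeyError("no bound service instance found for label %r" % label)

# In a deployed application, the variable comes from the environment:
# connectivity_credentials = service_credentials(os.environ["VCAP_SERVICES"], "connectivity")
# xsuaa_credentials = service_credentials(os.environ["VCAP_SERVICES"], "xsuaa")
```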
Note
If you have multiple instances of the same service bound to the application, you must perform additional
filtering to extract the correct credentials: go through the elements of the JSON array for that service label and
find the one matching the correct instance name.
This code stores a JSON object in the credentials variable. Additional parsing is required to extract the value for a
specific key.
Note
We refer to the result of the above code block as connectivityCredentials, when called for connectivity,
and xsuaaCredentials for xsuaa.
Proxy Setup
The connectivity service provides a standard HTTP proxy for on-premise connectivity that is accessible by any
application. Proxy host and port are available as the environment variables <onpremise_proxy_host> and
<onpremise_proxy_port>. You can set up the on-premise HTTP proxy like this:
Sample Code
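A minimal sketch of this setup (Python for illustration; the function name is not part of the platform API). It builds the proxy address from the connectivity credentials shown in the VCAP_SERVICES sample later in this section:

```python
import urllib.request

def onpremise_proxy_url(connectivity_credentials):
    """Build the HTTP proxy address from the connectivity credentials
    (the onpremise_proxy_host and onpremise_proxy_port entries)."""
    return "http://%s:%s" % (connectivity_credentials["onpremise_proxy_host"],
                             connectivity_credentials["onpremise_proxy_port"])

# An HTTP client can then route calls to virtual on-premise hosts
# through the connectivity proxy, for example:
# opener = urllib.request.build_opener(
#     urllib.request.ProxyHandler({"http": onpremise_proxy_url(credentials)}))
```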
Authorization
To make calls to on-premise services configured in the Cloud Connector through the HTTP proxy, you must
authorize at the HTTP proxy. For this, the OAuth client credentials flow is used: applications must create an OAuth
access token using the parameters clientid and clientsecret that are provided by the connectivity
service in the environment, as shown in the example code below. When the application has retrieved the access
token, it must pass the token to the connectivity proxy using the Proxy-Authorization header.
Sample Code
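A sketch of the token retrieval and of the proxy authorization header (Python for illustration; the token endpoint `<xsuaa-url>/oauth/token` and the request body match the curl sample shown later in this guide, and the function names are illustrative):

```python
import base64
import json
import urllib.parse
import urllib.request

def token_request(xsuaa_url, clientid, clientsecret):
    """Prepare the OAuth client-credentials call against the XSUAA token
    endpoint; clientid/clientsecret come from the connectivity credentials."""
    basic = base64.b64encode(("%s:%s" % (clientid, clientsecret)).encode()).decode()
    body = urllib.parse.urlencode({"client_id": clientid,
                                   "grant_type": "client_credentials"}).encode()
    return urllib.request.Request(
        xsuaa_url + "/oauth/token", data=body,
        headers={"Authorization": "Basic " + basic,
                 "Content-Type": "application/x-www-form-urlencoded"})

def proxy_authorization_header(access_token):
    """Header that authorizes the application at the connectivity proxy."""
    return {"Proxy-Authorization": "Bearer " + access_token}

# token = json.load(urllib.request.urlopen(token_request(url, cid, secret)))["access_token"]
```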
Depending on the required authentication type for the desired on-premise resource, you may have to set an
additional header in your request. That header provides the required information for the authentication process
against the on-premise resource. See Authentication to the On-Premise System [page 42].
Note
This is an advanced option when using more than one Cloud Connector for a subaccount. For more information
how to set the location ID in the Cloud Connector, see Managing Subaccounts [page 291], step 4 in section
Subaccount Dashboard.
As of Cloud Connector 2.10.0, you can connect multiple Cloud Connectors to a subaccount if their location ID
is different. Using the header SAP-Connectivity-SCC-Location_ID you can specify the Cloud Connector over
which the connection should be opened. If this header is not specified, the connection is opened to the Cloud
Connector that is connected without any location ID, which is also the case for all Cloud Connector versions
prior to 2.10.0. For example:
Sample Code
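A sketch of setting this header (Python for illustration; the header name is taken from the text above, while the function name and the location ID value in the test are illustrative):

```python
def with_location_id(headers, location_id):
    """Return a copy of the request headers that selects the Cloud Connector
    with the given location ID. Without this header, the connection goes to
    the Cloud Connector that is connected without a location ID."""
    new_headers = dict(headers)
    new_headers["SAP-Connectivity-SCC-Location_ID"] = location_id
    return new_headers
```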
Related Information
To use the connectivity service in your application, you need an instance of the service.
You have two options for creating a service instance – using the CLI or using the SAP Cloud Platform cockpit.
Use the following CLI commands to create a service instance and bind it to an application:
1. cf marketplace
2. cf create-service connectivity <service-plan> <service-name>
3. cf bind-service <app-name> <service-name>
Example
To bind an instance of the connectivity service "lite" plan to application "myapp", use following commands on
the Cloud Foundry command line:
Assuming that you have already deployed your application to the platform, follow these steps to create a service
instance and bind it to an application:
Result
When the binding is created, the application gets the corresponding connectivity credentials in its environment
variables:
Sample Code
"VCAP_SERVICES": {
"connectivity": [
{
"credentials": {
"onpremise_proxy_host": "10.0.85.1",
"onpremise_proxy_port": "20003",
"clientid": "sb-connectivity-app",
"clientsecret": "KXqObiN6d9gLA4cS2rOVAahPCX0=",
},
"label": "connectivity",
"name": "conn-lite",
"plan": "default",
"provider": null,
"syslog_drain_url": null,
"tags": [
"connectivity",
"conn",
"connsvc"
],
"volume_mounts": []
For each authentication type, you must provide specific information in the request to the virtual host.
Note
Currently, the SAP-Connectivity-Authentication header is required for all authentication types.
Applications must propagate the user's JWT token using the SAP-Connectivity-Authentication header.
This is required for the connectivity service to open a tunnel to the subaccount for which a configuration is made in
the Cloud Connector. The following example shows how to do this using the Spring framework:
Sample Code
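The essential point of the Spring-based sample is which headers are set on the outgoing request; this can be summarized in the following sketch (Python for illustration; the function name is not part of the platform API):

```python
def connectivity_headers(user_jwt, proxy_access_token):
    """Headers for a request sent through the connectivity proxy: the user's
    JWT is forwarded via SAP-Connectivity-Authentication, and the OAuth access
    token for the proxy goes into Proxy-Authorization."""
    return {
        "SAP-Connectivity-Authentication": "Bearer " + user_jwt,
        "Proxy-Authorization": "Bearer " + proxy_access_token,
    }
```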
Authentication Types
No Authentication
Principal Propagation
When you open the application router to access your cloud application, you are prompted to log in. Doing so
means that the cloud application now knows your identity. Principal propagation forwards this identity via the
Cloud Connector to the on-premise system. This information is then used to grant access without additional input
from the user. To achieve this, you do not need to send any additional information from your application, but you
must set up the Cloud Connector for principal propagation. See Configuring Principal Propagation [page 298].
Basic Authentication
If the on-premise system requires a username and password to grant access, the cloud application must provide
these credentials using the Authorization header. The following example shows how to do this:
Sample Code
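A sketch of building this header (Python for illustration; the function name is illustrative). The expected token in the test matches the Basic token shown for the user test / password pass12345 in the response examples later in this section:

```python
import base64

def basic_auth_header(username, password):
    """Authorization header for an on-premise system that requires
    basic authentication (credentials are Base64-encoded)."""
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    return {"Authorization": "Basic " + token}
```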
Retrieve externalized technical information about destinations that are required to consume the target remote
service from an application.
Prerequisites
● To consume the destination service from an application, you must create a service instance and bind it to the
application. See Create and Bind a Destination Service Instance [page 50].
● To generate the required JWT token, you must bind the application to an instance of the xsuaa service using
the service plan 'application'. Make sure you set the xsappname property when creating the instance. Find a
guide for this procedure in section Creation of the SAP Cloud Platform XSUAA instance of this SCN blog: How
to use SAP Cloud Platform Connectivity and Cloud Connector in the Cloud Foundry environment .
● You need at least one configured destination, otherwise there will be nothing to retrieve via the service.
Currently, the only way to manage destinations is through the SAP Cloud Platform cockpit. The process is the
same as for the destinations under Neo. See Create HTTP Destinations [page 111].
The destination service stores its credentials in the environment variables. To consume the service, you require the
following information:
● The values of clientid, clientsecret, and uri from the destination service credentials.
● The value of url from the xsuaa credentials.
● From the CLI, the following command lists the environment variables of <app-name>:
cf env <app-name>
● From within the application, the service credential can be accessed as described in Consuming the
Connectivity Service (Cloud Foundry Environment) [page 36].
Note
Below, we refer to the JSONObjects, containing the instance credentials as destinationCredentials (for
the destination service) and xsuaaCredentials (for xsuaa).
Applications must create an OAuth client using the attributes clientid and clientsecret, provided by the
destination service instance, then retrieve a new JWT token from UAA and pass it in the Authorization HTTP
header.
Sample Code
curl -X POST \
<xsuaa-url>/oauth/token \
-H 'authorization: Basic <<clientid>:<clientsecret> encoded with Base64>' \
-H 'content-type: application/x-www-form-urlencoded' \
-d 'client_id=<clientid>&grant_type=client_credentials'
When calling the destination service, use the uri attribute, provided in VCAP_SERVICES, to build the request
URLs.
Note
Currently, from the SAP Cloud Platform cockpit, the creation of destinations is only possible for subaccount
destinations (destinations associated with a subaccount). If you want to create destinations associated with a
service instance, you can do so using the Destination Service REST API [page 52].
This lets you simply provide the name of a destination, and the service searches for it. First, the service searches
the destinations that are associated with the service instance. If none of those destinations match the requested
name, the service searches the destinations that are associated with the subaccount.
● Path: /destination-configuration/v1/destinations/<destination-name>
● Example of a call (cURL):
Sample Code
curl "<uri>/destination-configuration/v1/destinations/<destination-name>" \
-X GET \
-H "Authorization: Bearer <jwtToken>"
● Example of a response (this is a destination found when going through the subaccount destinations):
Sample Code
{
"owner":
{
"SubaccountId":<id>,
"InstanceId":null
},
"destinationConfiguration":
{
"Name": "demo-internet-destination",
Note
The response from this type of call contains not only the configuration of the requested destination, but also
some additional data. See "Find Destination" Response Structure [page 48].
This lets you retrieve the configurations of a destination that is defined within a subaccount, by providing the name
of the destination.
● Path: /destination-configuration/v1/subaccountDestinations/<destination-name>
Sample Code
curl "<uri>/destination-configuration/v1/subaccountDestinations/<destination
name>" \
-X GET \
-H "Authorization: Bearer <jwtToken>"
● Example of a response:
Sample Code
{
"Name": "demo-internet-destination",
"URL": "http://www.google.com",
"ProxyType": "Internet",
"Type": "HTTP",
"Authentication": "NoAuthentication"
}
This lets you retrieve the configurations of all destinations that are defined within a subaccount.
● Path: /destination-configuration/v1/subaccountDestinations
Sample Code
curl "<uri>/destination-configuration/v1/subaccountDestinations" \
-X GET \
-H "Authorization: Bearer <jwtToken>"
Sample Code
[
{
"Name": "demo-onpremise-destination1",
"URL": "http:/virtualhost:1234",
"ProxyType": "OnPremise",
"Type": "HTTP",
"Authentication": "NoAuthentication"
},
{
"Name": "demo-onpremise-destination2",
"URL": "http:/virtualhost:4321",
"ProxyType": "OnPremise",
"Type": "HTTP",
"Authentication": "BasicAuthentication",
"User": "myname123",
"Password": "123456"
}
]
Response codes
When calling the destination service, you may get the following response codes:
The JSON object that serves as the response of a successful request (value of the destinationConfiguration
property for "Find destination") can have different attributes, depending on the authentication type and proxy type
of the corresponding destination. See HTTP Destinations [page 130].
Related Information
When you use the "Find destination" call, the structure of the response includes four parts:
Each of these parts is represented in the JSON object as a key-value pair and their values are JSON objects.
Destination Owner
● Key: owner
The JSON object that represents the value of this property contains two properties itself: SubaccountId and
InstanceId. Depending on where the destination was found (as a service instance destination or a
subaccount destination), one of these properties has the value null, and the other one shows the ID of the
subaccount or service instance to which the destination belongs.
● Example:
Sample Code
"owner": {
"SubaccountId": "9acf4877-5a3d-43d2-b67d-7516efe15b11",
"InstanceId": null
}
Destination Configuration
● Key: destinationConfiguration
The JSON object that represents the value of this property contains the actual properties of the destination. To
learn more about the available properties, see HTTP Destinations [page 130].
● Example:
Sample Code
"destinationConfiguration": {
"Name": "TestBasic",
"Type": "HTTP",
"URL": "http://sap.com",
"Authentication": "BasicAuthentication",
"ProxyType": "OnPremise",
"User": "test",
"Password": "pass12345"
Authentication Tokens
Note
This property is only applicable to destinations that use the following authentication types: BasicAuthentication,
OAuth2SAMLBearerAssertion.
● Key: authTokens
The JSON array that represents the value of this property contains tokens that are required for authentication.
These tokens are represented by JSON objects with three properties:
○ type: the type of the token.
○ value: the actual token.
○ error (optional): if the retrieval of the token fails, the value of both type and value is an empty string
and this property shows an error message, explaining the problem.
● Example:
Sample Code
"authTokens": [
{
"type": "Basic",
"value": "dGVzdDpwYXNzMTIzNDU="
}
]
Certificates
Note
This property is only applicable to destinations that use the following authentication types:
ClientCertificateAuthentication, OAuth2SAMLBearerAssertion (when default JDK trust store is not used).
● Key: certificates
The JSON array that represents the value of this property contains the certificates, specified in the destination
configuration. These certificates are represented by JSON objects with three properties:
○ type: the type of the object.
○ content: the encoded content of the certificate.
○ name: the name of the certificate file.
Sample Code
"certificates": [
{
"Name": "keystore.jks",
"Content": "<value>"
"Type": "CERTIFICATE"
},
{
"Name": "truststore.jks",
"Content": "<value>"
"Type": "CERTIFICATE"
}
]
Example
Sample Code
{
"owner": {
"SubaccountId": "9acf4877-5a3d-43d2-b67d-7516efe15b11",
"InstanceId": null
},
"destinationConfiguration": {
"Name": "TestBasic",
"Type": "HTTP",
"URL": "http://sap.com",
"Authentication": "BasicAuthentication",
"ProxyType": "OnPremise",
"User": "test",
"Password": "pass12345"
},
"authTokens": [
{
"type": "Basic",
"value": "dGVzdDpwYXNzMTIzNDU="
}
]
}
To use the destination service in your application, you need an instance of the service.
Use the following CLI commands to create a service instance and bind it to an application:
1. cf marketplace
2. cf create-service destination <service-plan> <service-name>
3. cf bind-service <app-name> <service-name>
Assuming that you have already deployed your application to the platform, follow these steps to create a service
instance and bind it to an application:
Result
When the binding is created, the application gets the corresponding destination credentials in its environment
variables:
Sample Code
"VCAP_SERVICES": {
"destination": [
{
"credentials": {
...
"uri": "https://destination-configuration.cfapps.<region host>",
...
},
"syslog_drain_url": null,
"volume_mounts": [],
"label": "destination",
"provider": null,
"plan": "lite",
"name": "dest-lite",
"tags": [
"destination",
"document"
]
}
]
}
Destination service REST API specification for the SAP Cloud Foundry environment.
Find the methods and models of the destination service REST API in this section.
● Version: 1.0.0
● Base URL: <service-name>.<cf-domain>.com/destination-configuration/v1
● http://apache.org/licenses/LICENSE-2.0.htm
Content
Methods
get /subaccountCertificates/{certificate_name}
get /subaccountCertificates
post /subaccountCertificates
get /instanceCertificates/{certificate_name}
get /instanceCertificates
post /instanceCertificates
get /subaccountDestinations/{destination_name}
get /subaccountDestinations
post /subaccountDestinations
put /subaccountDestinations
get /instanceDestinations/{destination_name}
get /instanceDestinations
post /instanceDestinations
put /instanceDestinations
Models
Certificate
Destination
DestinationLookUpResult
Error
Healthcheck
Owner
Update
1.5.1.2.3.1 Methods
1.5.1.2.3.1.1 CertificatesSubaccountLevel
Path parameters
Update
Example data
Content-Type: application/json
Sample Code
{
"count" : 0
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
200 OK Update
Path parameters
Return type
Certificate
Example data
Sample Code
{
"Type" : "aeiou",
"Content" : "aeiou",
"Name" : "aeiou"
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
get /subaccountCertificates
Returns a JSON array of certificates by subaccount ID. The array may be empty.
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
post /subaccountCertificates
Request body
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
1.5.1.2.3.1.2 CertificatesSubscriptionLevel
Path parameters
Return type
Update
Example data
Content-Type: application/json
Sample Code
{
"count" : 0
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
200 OK Update
Path parameters
Return type
Certificate
Example data
Content-Type: application/json
Sample Code
{
"Type" : "aeiou",
"Content" : "aeiou",
"Name" : "aeiou"
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
get /instanceCertificates
Gets all certificates on subscription level (by instance ID and subaccount ID). (instanceCertificatesGet)
Returns a JSON array of certificates by instance ID. The array may be empty.
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
post /instanceCertificates
Request body
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
Related Information
1.5.1.2.3.1.3 DestinationsSubaccountLevel
Path parameters
Return type
Update
Example data
Content-Type: application/json
Sample Code
{
"count" : 0
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
200 OK Update
Path parameters
Return type
Example data
Content-Type: application/json
Sample Code
{
"Type" : "aeiou",
"Content" : "HTTP",
"Name" : "aeiou"
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
get /subaccountDestinations
Returns a JSON array of destinations by subaccount ID. The array may be empty.
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
post /subaccountDestinations
Request body
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
Path parameters
Return type
Update
Example data
Content-Type: application/json
Sample Code
{
"count" : 0
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
Related Information
Path parameters
Return type
Update
Example data
Content-Type: application/json
Sample Code
{
"count" : 0
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
200 OK Update
Path parameters
Return type
Destination
Example data
Content-Type: application/json
Sample Code
{
"Type" : "aeiou",
"Content" : "HTTP",
"Name" : "aeiou"
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
get /instanceDestinations
Gets all destinations on subscription level (by instance ID and subaccount ID). (instanceDestinationsGet)
Returns a JSON array of destinations by instance ID. The array may be empty.
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
post /instanceDestinations
Request body
Produces
● application/json
Responses
put /instanceDestinations
Path parameters
Return type
Update
Example data
Content-Type: application/json
Sample Code
{
"count" : 0
}
Produces
● application/json
Responses
Related Information
1.5.1.2.3.1.5 Find
get /destinations/{name}
Finds a destination by name on all levels and returns the first match. (destinationsNameGet)
Search priority: destinations on instance level are searched first; as a fallback, the shared destinations on
subaccount level are searched.
Path parameters
● name (required)
Path Parameter — Destination name.
Return type
DestinationLookUpResult
Example data
Content-Type: application/json
{
"owner" : {
"InstanceId" : "aeiou",
"SubaccountId" : "aeiou"
},
"authTokens" : [ {
"type" : "aeiou",
"value" : "aeiou"
} ],
"certificates" : [ {
"Type" : "aeiou",
"Content" : "aeiou",
"Name" : "aeiou"
} ],
"destinationConfiguration" : {
"PropertyName" : "aeiou",
"Type" : "HTTP",
"Name" : "aeiou"
}
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
Related Information
get /healthcheck
Return type
Healthcheck
Example data
Content-Type: application/json
Sample Code
{
"Message" : "PONG"
}
Produces
This API call produces the following media types according to the Accept request header. The media type will be
conveyed by the Content-Type response header.
● application/json
Responses
Related Information
AuthToken -
Authorization Token.
Property Description
Certificate -
Certificate/Keystore.
Property Description
Type (optional) String Type of the object. May be null if not available.
Destination -
Destination object.
Property Description
● Enum:
HTTP
RFC
MAIL
LDAP
Contains the owner, found destination, certificates (if available) and authorization token (if available).
Property Description
Error -
Property Description
Healthcheck -
Property Description
Owner -
SubaccountId and InstanceId may each have a null value, but not both at the same time.
Property Description
Property Description
Find detailed information in this section on how to consume SAP Cloud Platform Connectivity in the Neo
environment.
Related Information
In this section, you will learn how to use SAP Cloud Platform Connectivity to connect Web applications to the
Internet, make on-demand to on-premise connections to SAP or non-SAP on-premise systems, and configure
destinations to send and fetch e-mail. For all these tasks, you must first create and configure destinations
according to the relevant protocol type. For more information, see: Destinations [page 86]
User Roles
The following user groups are involved in the end-to-end use of the connectivity service:
● Application developers - create a connectivity-enabled SAP Cloud Platform application by using the
connectivity service API.
Scenarios
● Making Internet connections between Web applications and external servers via HTTP protocol: Consume
Internet Services (Java Web or Java EE 6 Web Profile) [page 156]
● Making connections between Web applications and on-premise backend services via HTTP protocol: Consume
Back-End Systems (Java Web or Java EE 6 Web Profile) [page 171]
● Making connections between Web applications and on-premise backend services via RFC protocol: Tutorial:
Invoke ABAP Function Modules in On-Premise ABAP Systems [page 208]
● Using LDAP-based user authentication for your cloud application: Using LDAP [page 217]
● Accessing on-premise systems via TCP-based protocols using a SOCKS5 proxy: Using TCP for Cloud
Applications [page 223]
● Sending and fetching e-mail via mail protocols: Sending and Fetching E-Mail [page 227]
Related Information
Destinations
Destinations are part of SAP Cloud Platform Connectivity and are used for the outbound communication from a
cloud application to a remote system. They contain the connection details for the remote communication of an
application.
Destinations should be used by application developers when they aim to provide applications that:
● Integrate with remote services or back-end systems that need to be configured by customers
● Integrate with remote services or back-end systems that are located in a fenced environment (that is, behind
firewalls and not publicly accessible)
Tip
HTTP clients created by destination APIs allow parallel usage of HTTP client instances (via class
ThreadSafeClientConnManager).
Connectivity APIs
Package Description
org.apache.http http://hc.apache.org
org.apache.http.client http://hc.apache.org/httpcomponents-client-ga/httpclient/
apidocs/org/apache/http/client/package-summary.html
org.apache.http.util http://hc.apache.org/httpcomponents-core-ga/httpcore/
apidocs/org/apache/http/util/package-summary.html
javax.mail https://javamail.java.net/nonav/docs/api/
The SAP Cloud Platform SDK for Java Web uses version 1.4.1
of javax.mail, the SDK for Java EE 6 Web Profile uses
version 1.4.5 of javax.mail, and the SDK for Java Web
Tomcat 7 uses version 1.4.7 of javax.mail.
Destination APIs
All connectivity API packages are visible by default from all Web applications. Applications can consume the
destinations via a JNDI lookup.
Procedure
Prerequisites
You have set up your Java development environment. See also: Setting Up the Development Environment [page
1126]
To consume destinations using HttpDestination API, you need to define your destination as a resource in the
web.xml file.
1. An example of a destination resource named myBackend, which is described in the web.xml file, is as follows:
<resource-ref>
<res-ref-name>myBackend</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
2. In your servlet code, you can look up the destination (an HTTP destination in this example) from the JNDI
registry as follows:
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.http.HttpDestination;
...
Note
If you want the lookup name to differ from the destination name, you can specify the lookup name in
<res-ref-name> and the destination name in <mapped-name>, as shown in the following example:
<resource-ref>
<res-ref-name>myLookupName</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
<mapped-name>myBackend</mapped-name>
</resource-ref>
3. With the retrieved HTTP destination, you can then, for example, send a simple GET request to the configured
remote system by using the following code:
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.HttpResponse;
...
Note
If you want to use a <res-ref-name> that contains "/", the name after the last "/" must be the same as
the destination name. For example, you can use <res-ref-name>connectivity/myBackend</res-ref-name>.
In this case, you should use java:comp/env/connectivity/myBackend as the lookup string.
If you want to get the URL of your configured destination, use the getURI() method. It returns the URL
defined in the destination configuration, converted to a URI.
As an alternative approach to retrieving an HTTP destination, you can use the DestinationFactory. We recommend
this approach if the destinations are unknown at implementation time and should be loaded dynamically at
runtime.
1. Define the DestinationFactory as a resource in the web.xml file:
<resource-ref>
<res-ref-name>connectivity/DestinationFactory</res-ref-name>
<res-type>com.sap.core.connectivity.api.DestinationFactory</res-type>
</resource-ref>
2. In your Java code, you can then look it up and use it in the following way:
Note
If you have two destinations with the same name, one configured on subaccount level and the other on
application level, the getConfiguration() method will return the destination on subaccount level.
The preference order is: subscription level -> subaccount level -> application level.
Related Information
If you need to also add Maven dependencies, take a look at this blog:
See also:
All connectivity API packages are visible by default from all Web applications. Applications can consume the
connectivity configuration via a JNDI lookup.
Context
Besides making destination configurations, you can also allow your applications to use their own HTTP clients. The
ConnectivityConfiguration API gives you direct access to the destination configurations of your
applications. This API also:
● Can be used independently of the existing destination API, so that applications can bring and use their own
HTTP client
● Consists of both a public REST API and a Java client API.
The ConnectivityConfiguration API is supported by all runtimes, including Java Web Tomcat 7. For more
information about runtimes, see Application Runtime Container [page 1153].
1. To consume connectivity configuration using JNDI, you need to define ConnectivityConfiguration API as
a resource in the web.xml file. An example of a ConnectivityConfiguration resource named
connectivityConfiguration, which is described in the web.xml file, is as follows:
<resource-ref>
<res-ref-name>connectivityConfiguration</res-ref-name>
<res-type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</res-type>
</resource-ref>
2. In your servlet code, you can look up the ConnectivityConfiguration API from the JNDI registry as
follows:
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
...
3. With the retrieved ConnectivityConfiguration API, you can read all properties of any destination defined
on subscription, application or subaccount level.
Note
If you have two destinations with the same name, one configured on subaccount level and the other on
application level, the getConfiguration() method will return the destination on subaccount level. The
preference order is: subscription level -> subaccount level -> application level.
4. If truststore and keystore are defined in the corresponding destination, they can be accessed by using
methods getKeyStore and getTrustStore.
// create sslcontext
TrustManagerFactory tmf =
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
KeyManagerFactory keyManagerFactory =
KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
String keyStorePassword = "myPassword";
keyManagerFactory.init(keyStore, keyStorePassword.toCharArray());
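The snippet above stops before the SSLContext is actually constructed. A possible completion is sketched below using only standard JSSE classes; the class name SslContextSketch is hypothetical, and the empty in-memory key store in main stands in for the object that the destination's getKeyStore method would return (a null trust store makes the JVM fall back to its default CA store).

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

class SslContextSketch {

    // Builds an SSLContext from a destination's key store and trust store.
    // Passing a null trust store uses the JVM's default CA trust store.
    static SSLContext build(KeyStore keyStore, char[] keyStorePassword,
                            KeyStore trustStore) throws Exception {
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        KeyManagerFactory keyManagerFactory =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        keyManagerFactory.init(keyStore, keyStorePassword);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(keyManagerFactory.getKeyManagers(), tmf.getTrustManagers(), null);
        return sslContext;
    }

    public static void main(String[] args) throws Exception {
        // Empty in-memory key store standing in for the destination-provided one.
        KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        keyStore.load(null, null);

        SSLContext ctx = build(keyStore, "myPassword".toCharArray(), null);
        System.out.println(ctx.getProtocol()); // prints TLS
    }
}
```

The resulting SSLContext can then be handed to the application's own HTTP client, for example through an SSL socket factory.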
All connectivity API packages are visible by default from all Web applications. Applications can consume the
authentication header provider via a JNDI lookup.
Context
The AuthenticationHeaderProvider API allows your Web applications to use their own HTTP clients, as it also
provides them with authentication token generation (application-to-application SSO, on-premise SSO). This API
also:
● Provides additional helper methods that facilitate initializing an HTTP client (for example, a method that helps
you set headers for application-to-application SSO).
● Consists of both a public REST API and a Java client API.
The AuthenticationHeaderProvider API is supported by all runtimes, including Java Web Tomcat 7. For
more information about runtimes, see Application Runtime Container [page 1153].
Procedure
1. To consume the authentication header provider API using JNDI, you need to define the
AuthenticationHeaderProvider API as a resource in the web.xml file. An example of an
AuthenticationHeaderProvider resource named myAuthHeaderProvider, which is described in the
web.xml file, is as follows:
<resource-ref>
<res-ref-name>myAuthHeaderProvider</res-ref-name>
<res-type>com.sap.core.connectivity.api.authentication.AuthenticationHeaderProvider</res-type>
</resource-ref>
2. In your servlet code, you can look up the AuthenticationHeaderProvider API from the JNDI registry as
follows:
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.authentication.AuthenticationHeaderProvider;
...
Tip
We recommend that you pack the HTTP client (Apache or other) inside the lib folder of your Web application
archive.
Restrictions:
● Principal Propagation must be enabled for the subaccount. For more information, see Application Identity
Provider [page 2161] → section "Specifying Custom Local Provider Settings"
● Both applications must run on behalf of the same subaccount.
● The receiving application must use SAML2 authentication.
Note
If you work with the Java Web Tomcat 7 runtime, bear in mind that the following code snippet works properly
only with Apache HTTP client version 4.1.3. If you use another (higher) version of the Apache HTTP client, you
should adapt your code.
To learn how to generate on-premise SSO authentication, see Principal Propagation Using HTTP Proxy [page 147].
SAP Cloud Platform provides support for applications to use the SAML Bearer assertion flow for consuming
OAuth-protected resources. In this way, applications do not need to deal with some of the complexities of OAuth
and can reuse existing identity providers for user data. Users are authenticated by using SAML against the
configured trusted identity providers. The SAML assertion is then used to request an access token from an OAuth
authorization server. This access token should be injected in all HTTP requests to the OAuth-protected resources.
Tip
The access tokens are cached by AuthenticationHeaderProvider and automatically renewed: when a token is
about to expire, a new one is created shortly before the old one expires.
The AuthenticationHeaderProvider API provides the following method for generating such headers:
List<AuthenticationHeader>
getOAuth2SAMLBearerAssertionHeaders(DestinationConfiguration
destinationConfiguration);
SAP Java Connector (SAP JCo) is a middleware component that enables you to develop ABAP-compliant
components and applications in Java. SAP JCo supports communication with Application Server ABAP (AS ABAP)
in both directions: inbound calls (Java calls ABAP) and outbound calls (ABAP calls Java).
SAP JCo can be used in both desktop and Web server applications.
Note
You can find generic information regarding authorizations required for the use of SAP JCo in SAP Note 460089.
To learn in detail about the SAP JCo API, see the Connectors page on SAP Service Marketplace (SAP JCo 3.0
documentation: SAP Java Connector Tools & Services).
Note
This documentation contains sections not applicable to SAP Cloud Platform. In particular:
● SAP JCo Architecture: CPIC is only used in the last mile from your Cloud Connector to the backend. From
the cloud to the Cloud Connector, SSL protected communication is used.
● SAP JCo Installation: SAP Cloud Platform already includes all the necessary artifacts.
● SAP JCo Customizing and Integration: In SAP Cloud Platform, the integration is already done by the
runtime. You can concentrate on your business application logic.
● Server Programming: The programming model of JCo in SAP Cloud Platform does not include server-side
RFC communication.
● IDoc Support for External Java Applications: For the time being, there is no IDocLibrary for JCo available in
SAP Cloud Platform.
Related Information
Overview
Connectivity destinations are part of SAP Cloud Platform Connectivity and are used for the outbound
communication of a cloud application to a remote system. They contain the connection details for the remote
communication of an application. Connectivity destinations are represented by symbolic names that are used by
on-demand applications to refer to remote connections. The connectivity service resolves the destination at
runtime based on the symbolic name provided. The result is an object that contains customer-specific
configuration details: the URL of the remote system or service, the authentication type, and the relevant
credentials.
You can use destination files with extension .props, .properties, .jks, and .txt, as well as files with no
extension.
The currently supported destination types are HTTP, Mail, and RFC.
● HTTP destination [page 130] - provides data communication via HTTP protocol and is used for both Internet
and on-premise connections.
● Mail destination [page 229] - specifies an e-mail provider for sending and retrieving e-mail via SMTP, IMAP,
and POP3 protocols.
● RFC destination [page 194] - makes connections to ABAP on-premise systems via RFC protocol using JCo as
API.
Destinations can be simultaneously configured on three levels: application, consumer subaccount, and
subscription. This means it is possible to have one and the same destination on more than one configuration level.
● Application level - The destination is related to an application and its relevant provider subaccount. It is,
however, independent of the consumer subaccount in which the application is running.
● Consumer subaccount level - The destination is related to a particular subaccount.
● Subscription level - The destination is related to the triad <Application, Provider Subaccount,
Consumer Subaccount>.
The runtime tries to resolve a destination in the following order: Subscription level → Consumer subaccount level →
Provider application level.
For more information about the usage of consumer subaccount, provider subaccount, and provider application,
see Configure Destinations from the Console Client [page 87].
To use the connectivity service 2.x and the Cloud Connector 2.x version, the following properties need to be
specified, according to the destination type:
● Destination configuration files and Java keystore (JKS) files are cached at runtime. The cache expiration time
is set to a small time interval (currently around 4 minutes). This means that once you update an existing
destination configuration or a JKS file, the application needs about 4 minutes until the new destination
configuration is applied. To avoid this waiting time, the application can be restarted on the cloud; following the
restart, the new destination configuration takes effect immediately.
● When you configure a destination for the first time, it takes effect immediately.
● If you change a mail destination, the application needs to be restarted before the new configuration becomes
effective.
To configure and then use a destination to remotely connect your Java EE or on-demand application, you can use
either of the following methods:
Related Information
You can see examples in the SDK package that you previously downloaded from http://tools.hana.ondemand.com.
Open the SDK location and go to /tools/samples/connectivity. This folder contains a standard
template.properties file, weather destination, and weather.destinations.properties file, which provides all the
necessary properties for uploading the weather destination.
As an application operator, you can configure your application using SAP Cloud Platform console client. You can
configure HTTP, Mail, or RFC destinations using a standard properties file.
To use an application from another subaccount, you must be subscribed to this application through your
subaccount.
Note
Destination files must be encoded in ISO 8859-1 character encoding.
Prerequisites
● You have downloaded and set up the console client. For more information, see Set Up the Console Client [page
1135].
● For specific information about all connectivity restrictions, see Connectivity [page 32] → section "Restrictions".
The number of mandatory property keys varies depending on the authentication type you choose. For more
information about HTTP destination properties files, see HTTP Destinations [page 130].
Key stores and trust stores must be stored in JKS files with a standard .jks extension.
If mandatory fields are missing or data is specified incorrectly, you will be prompted accordingly by the console
client.
For more information about mail destination properties files, see Mail Destinations [page 229].
If mandatory fields are missing or data is specified incorrectly, you will be prompted accordingly by the console
client.
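For example, a mail destination for sending e-mail over SMTP might look like the following sketch (the mail.* keys are standard javax.mail session properties; host, port, and credentials are placeholders):

```
Name=Session
Type=MAIL
mail.transport.protocol=smtp
mail.smtp.host=mail.example.com
mail.smtp.port=587
mail.smtp.auth=true
mail.user=myUser
mail.password=myPassword
```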
All properties except Name and Type must start with "jco.client." or "jco.destination.". For more
information about RFC destination properties files, see RFC Destinations [page 194].
If mandatory fields are missing or data is specified incorrectly, you will be prompted accordingly by the console
client.
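Analogously, a minimal RFC destination file might look like the following sketch (all values are placeholders; for on-premise access, the virtual host and system number must match your Cloud Connector configuration):

```
Name=myRfcBackend
Type=RFC
jco.client.ashost=abapserver.example.com
jco.client.sysnr=00
jco.client.client=100
jco.client.lang=EN
jco.client.user=myUser
jco.client.passwd=myPassword
```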
Tasks
Related Information
Context
The procedure below explains how you can upload destination configuration properties files and certificate files.
You can upload them on subaccount, application or subscribed application level.
Note
Bear in mind that, by default, your destinations are configured on SAP Cloud Platform, that is the
hana.ondemand.com landscape. If you need to specify a particular region host, you need to add the --host
parameter, as shown in the examples. Otherwise, you can skip this parameter.
Procedure
Note
When uploading a destination configuration file that contains a password field, the password value remains
available in the file. However, if you later download this file using the get-destination command, the
password value will no longer be visible. Instead, after Password =..., you will only see an empty space.
Note
The configuration parameters used by SAP Cloud Platform console client can be defined in a properties file as
well. This may be done instead of specifying them directly in the command (with the exception of the -
password parameter, which must be specified when the command is executed). When you use a properties file,
enter the path to it as the last command line parameter.
Example:
Related Information
Context
The procedure below explains how you can download (read) destination configuration properties files and
certificate files. You can download them on subaccount, application or subscribed application level.
You can read destination files with extension .props, .properties, .jks, and .txt, as well as files with no
extension. Destination files must be encoded in ISO 8859-1 character encoding.
Note
Bear in mind that, by default, your destinations are configured on SAP Cloud Platform, that is the
hana.ondemand.com landscape. If you need to specify a particular region host, you need to add the --host
parameter, as shown in the examples. Otherwise, you can skip this parameter.
Tips
Note
If you download a destination configuration file that contains a password field, the password value will not be
visible. Instead, after Password =..., you will only see an empty space. You will need to obtain the password in
another way.
Note
The configuration parameters used by SAP Cloud Platform console client can be defined in a properties file as
well. This may be done instead of specifying them directly in the command (with the exception of the -
password parameter, which must be specified when the command is executed). When you use a properties file,
enter the path to it as the last command line parameter. A sample weather properties file can be found in
directory <SDK_location>\tools\samples\connectivity.
Example:
Context
The procedure below explains how you can delete destination configuration properties files and certificate files.
You can delete them on subaccount, application or subscribed application level.
Note
Bear in mind that, by default, your destinations are configured on SAP Cloud Platform, that is the
hana.ondemand.com landscape. If you need to specify a particular region host, you need to add the --host
parameter, as shown in the examples. Otherwise, you can skip this parameter.
Procedure
Note
The configuration parameters used by SAP Cloud Platform console client can be defined in a properties file as
well. This may be done instead of specifying them directly in the command (with the exception of the -
password parameter, which must be specified when the command is executed). When you use a properties file,
enter the path to it as the last command line parameter.
Example:
Related Information
You can use the Connectivity editor in the Eclipse IDE to configure HTTP, Mail, RFC and LDAP destinations in order
to:
● Connect your cloud application to the Internet or make it consume an on-premise back-end system via
HTTP(S);
● Send an e-mail from a simple Web application using an e-mail provider that is accessible on the Internet;
● Make your cloud application invoke a function module in an on-premise ABAP system via RFC.
● Use LDAP-based user authentication for your cloud application.
You can create, delete and modify destinations to use them for direct connections or export them for further
usage. You can also import destinations from existing files.
Note
Destination files must be encoded in ISO 8859-1 character encoding.
Prerequisites
● You have downloaded and set up your Eclipse IDE. For more information, see Setting Up the Development
Environment [page 1126] or Updating Java Tools for Eclipse and SAP Cloud Platform SDK for Neo Environment
[page 1136].
Tasks
Related Information
Context
The procedure below demonstrates how you can create and configure connectivity destinations (HTTP, Mail, or
RFC) on a local SAP Cloud Platform server.
Also, a Servers folder is created and appears in the navigation tree of the Eclipse IDE. It contains configurable
folders and files you can use, for example, to change your HTTP or JMX port.
5. On the Servers view, double-click the added server to open its editor.
6. Go to the Connectivity tab view.
a. In the All Destinations section, choose the button to create a new destination.
b. From the dialog window, enter a name for your destination, select its type and then choose OK.
c. In the URL field, enter the URL of the target service to which the destination should refer.
d. In the Authentication dropdown box, choose the authentication type required by the target service to
authenticate the calls.
○ If the target service does not require authentication, choose NoAuthentication.
○ If the target service requires basic authentication, choose BasicAuthentication. You need to enter a
user name and a password.
○ If the target service requires a client certificate authentication, choose ClientCertificateAuthentication.
See Use Destination Certificates (IDE) [page 102].
e. Optional: In the Properties or Additional Properties section, choose the button to specify additional
destination properties.
f. Save the editor.
7. When a new destination is created, the changes take effect immediately.
Related Information
When using a local server, the destination configuration is stored in the file system as plain text by default. The
plain text storage includes password fields, which can be a security issue.
Perform the following procedure to encrypt those fields for a particular destination configuration file.
Generate a Key
To encrypt and decrypt the password fields, you need a key for an AES-128-CBC algorithm (Advanced Encryption
Standard). The following steps show you how to generate this key using OpenSSL. Alternatively, you can use any
other appropriate procedure.
Note
If a stronger AES algorithm is required (for example, AES with 256-bit keys), you must install the JCE Unlimited
Strength Jurisdiction Policy Files in the JDK/JRE.
Prerequisites
OpenSSL is provided by default on Linux and Mac. For Windows, you must install it from
http://gnuwin32.sourceforge.net/packages/openssl.htm .
Note
For Windows, the installer does not add the path of the openssl.exe file to the PATH environment variable. You
should do this manually or navigate to the file before executing the OpenSSL commands in the terminal.
Procedure
1. Open a terminal.
2. (Optional) Navigate to the OpenSSL executable.
3. Adjust and execute the following command:
Sample Code
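The key-generation command might look like the following sketch (the passphrase and file name are placeholders, and the digest is an assumption; adjust them as needed):

```shell
# Derive an AES-128-CBC key from a passphrase and store the salt, key, and IV
# in key.txt; -P prints the derived values instead of encrypting any data.
openssl enc -aes-128-cbc -k mySecretPassword -P -md sha256 > key.txt
cat key.txt
```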
4. This procedure generates a key and stores it in the specified file (and creates the file if necessary). The key file
has the following format:
salt=3F190F676A469E24
iv =AD5EE334AE9694BE96E1754B6E736C7D
Note
Only the <key> and <iv> fields are needed. If you use a different method to create the key file, you only
need to include those two fields.
Configure Encryption
To store the password fields of a destination in an encrypted format, you must set the encryption-key-
location property. The value of this property is the absolute path of the key file, containing an encryption key in
the format described above.
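For instance, a destination whose password field should be encrypted could set the property as in the following sketch (the path is a placeholder for the absolute location of your key file):

```
Name=myBackend
Type=HTTP
URL=https://mybackend.example.com/service
Authentication=BasicAuthentication
User=myUser
Password=myPassword
# Absolute path of the key file used to encrypt the password fields on save
encryption-key-location=/media/usb/key.txt
```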
Note
You should store the key file on a removable storage device. Otherwise, the decryption key can always be
accessed.
When you save the destination, the destination file in the file system includes encrypted password fields. The key
location, which is specified by the encryption-key-location property, is used when a destination is retrieved
by an application to get the key and encrypt the password fields. This is done automatically by the service.
Encryption/Decryption Failure
● Encryption
Encryption is performed when the destination is saved to the file system. If an error occurs, the Save
operation fails and a message shows the cause.
Note
The Save operation fails until a valid key (which can decrypt the loaded destination) is provided. We
strongly recommend that you provide the new location of the key immediately and save the destination.
Then you can continue working with the destination as usual.
● Decryption
If:
○ a key file is corrupted, the editor treats it as if the key was not found. You can specify a new location and, if
the key is valid, continue working with the destination.
○ a particular field (or multiple fields) cannot be decrypted, the editor loads the destination and changes the
value of the failed properties to blank. In this case, you must modify (specify new values) or remove each
of these fields to fix the corrupted data.
○ the initialization of the decrypting library fails, all password fields are changed to blank.
SDK
If decryption fails, the retrieval of an encrypted destination always causes an exception, no matter the cause of
the failure. This exception is either an IllegalStateException (if the failure is caused by a Java problem) or an
IllegalArgumentException (if the failure is caused by a problem in the destination or key file).
Note
The SDK does not perform any destination encryption.
Context
The procedure below demonstrates how you can create and configure connectivity destinations (HTTP, Mail, or
RFC) on SAP Cloud Platform.
Procedure
a. In the All Destinations section, choose the button to create a new destination.
b. From the dialog window, enter a name for your destination, select its type, and then choose OK.
c. In the URL field, enter the URL of the target service to which the destination should refer.
d. In the Authentication dropdown box, choose the authentication type required by the target service to
authenticate the calls.
○ If the target service does not require authentication, choose NoAuthentication.
○ If the target service requires basic authentication, choose BasicAuthentication. You need to enter a
user name and a password.
○ If the target service requires a client certificate authentication, choose ClientCertificateAuthentication.
See Use Destination Certificates (IDE) [page 102].
○ If the target service requires your cloud user authentication, choose PrincipalPropagation. You also
need to select Proxy Type: OnPremise and should enter the additional property
CloudConnectorVersion with value 2.
e. In the Proxy Type dropdown box, choose the required type of proxy connection.
Note
This dropdown box allows you to choose the type of your proxy and is only available when deploying on
SAP Cloud Platform. The default value is Internet. In this case, the destination uses the HTTP proxy for
the outbound communication with the Internet. For consumption of an on-premise target service,
choose the OnPremise option so that the proxy to the SSL tunnel is chosen and the tunnel is
established to the connected Cloud Connector.
f. Optional: In the Properties or Additional Properties section, choose the button to specify additional
destination properties.
g. Save the editor. This saves the specified destination configuration in SAP Cloud Platform.
6. When new destinations are created, the changes take effect immediately.
Note
Bear in mind that changes are currently cached, with a cache expiration of up to 4 minutes. If you modify a
destination configuration, the changes might therefore not take effect immediately. However, if the relevant
Web application is restarted on the cloud, the destination changes take effect immediately.
Related Information
Prerequisites
Context
You can maintain keystore certificates in the Connectivity editor. You can upload, add and delete certificates for
your connectivity destinations. Bear in mind that:
● You can use JKS, PFX and P12 files for destination keystore, and JKS, CRT, CER, DER files for destination
truststore.
● You add certificates in a keystore file and then you upload, add, or delete this keystore.
● You can add certificates only for HTTPS destinations. Keystore is available only for
ClientCertificateAuthentication.
Procedure
Uploading Certificates
1. Press the Upload/Delete keystore button. You can find it in the All Destinations section in the Connectivity
editor.
2. Choose Upload Keystore and select the certificate you want to upload. Choose Open or double-click the
certificate.
Note
You can upload a certificate during creation or editing of a destination, by choosing Manage Keystore or by
pressing the Upload/Delete keystore button.
Deleting Certificates
Related Information
Prerequisites
Note
The Connectivity editor allows importing destination files with extension .props, .properties, and .txt, as
well as files with no extension. Destination files must be encoded in ISO 8859-1 character encoding.
Procedure
○ If the destination does not contain client certificate authentication, it is saved as a single configuration file.
○ If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a Keystore file.
5. The destination file is imported within the Connectivity editor.
Note
If the properties file contains incorrect properties or values, for example wrong destination type, the editor
only displays the valid ones in the Properties table.
Related Information
Prerequisites
You have imported or created a new destination (HTTP, Mail, or RFC) in the Eclipse IDE.
Procedure
○ If the destination does not contain client certificate authentication, it is saved as a single configuration file.
○ If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a Keystore file.
Tip
You can keep the default name of the destination, or rename it to avoid overwriting previously exported files with the same name.
After exporting the destination, you can open it to check its content. Bear in mind that all password fields are commented out (with # symbols) and their values are removed.
Example:
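For illustration, an exported HTTP destination file might look like this (all names and values here are hypothetical; note the commented-out password field with its value removed):

```properties
#Exported destination configuration (hypothetical values)
Name=myBackend
Type=HTTP
URL=https://example.com/service
Authentication=BasicAuthentication
User=myUser
#Password=
ProxyType=Internet
```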
Related Information
Use the Destinations editor in SAP Cloud Platform cockpit to configure HTTP, Mail, RFC, and LDAP destinations in
order to:
● Connect your cloud application to the Internet or make it consume an on-premise back-end system via
HTTP(S).
● Send an e-mail from a simple Web application using an e-mail provider that is accessible on the Internet.
● Make your cloud application invoke a function module in an on-premise ABAP system via RFC.
● Use LDAP-based user authentication for your cloud application.
You can create, delete, clone, modify, import and export destinations.
Use this editor to work with destinations on subscription, subaccount, and application level.
Note
Destination files must be encoded in ISO 8859-1 character encoding.
1. You have logged into the cockpit from the SAP Cloud Platform landing page, depending on your subaccount
type. For more information, see Regions [page 21].
2. Depending on the level you need to make destination configurations from the Destinations editor, make sure
the following is fulfilled:
○ Subscription level – you need to have at least one application subscribed to your subaccount.
○ Application level – you need to have at least one application deployed on your subaccount.
○ Subaccount level – no prerequisites.
For more information, see Access the Destinations Editor [page 110].
Tasks
Related Information
Prerequisites
You have logged into the cockpit from the SAP Cloud Platform landing page, depending on your global account
type. For more information, see Regions [page 21].
Procedure
1. In the cockpit, select your subaccount name from the Subaccount menu in the breadcrumbs.
2. From the left-side navigation, choose Applications Subscriptions to open the page with your currently
subscribed Java applications (if any).
3. Select the application for which you need to create a destination.
4. From the left-side panel, choose Destinations.
1. In the cockpit, select your subaccount name from the Subaccount menu in the breadcrumbs.
2. From the left-side navigation, choose Connectivity Destinations .
3. The Destinations editor is opened.
1. In the cockpit, select your subaccount name from the Subaccount menu in the breadcrumbs.
2. From the left-side navigation, choose Applications Java Applications to open the page with your
currently deployed Java Web applications (if any).
3. Select the application for which you need to create a destination.
4. From the left-side panel, choose Configuration Destinations .
5. The Destinations editor is opened.
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Context
To learn how to create HTTP, RFC, and Mail destinations, follow the steps on the relevant pages:
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Note
For more information, see also: HTTP Destinations [page 130].
Note
If you set an HTTPS destination, you need to also add truststore. For more information, see Use Destination
Certificates (Cockpit) [page 118].
Note
For a detailed description of WebIDE-specific properties, see Connecting Remote Systems.
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Note
For a detailed description of RFC-specific properties (JCo properties), see RFC Destinations [page 194].
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Context
You can use the Check Connection button in the Destinations editor of the cockpit to verify if the URL configured
for an HTTP Destination is reachable and if the connection to the specified system is possible.
Note
This check is available with Cloud Connector version 2.7.1 or higher.
For each destination, the check button is available in the destination detail view and in the destination overview list
(icon Check availability of destination connection in section Actions).
Note
The check does not guarantee that a backend is operational. It only verifies if a connection to the backend is
possible.
This check is supported only for destinations with Proxy Type Internet and OnPremise:
Backend status could not be determined.
Possible causes:
● The Cloud Connector version is less than 2.7.1.
● The Cloud Connector is not connected to the subaccount.
● The backend returns an HTTP status code above or equal to 500 (server error).
● The Cloud Connector is not configured properly.
Possible solutions:
● Upgrade the Cloud Connector to version 2.7.1 or higher.
● Connect the Cloud Connector to the corresponding subaccount.
● Check the server status (availability) of the back-end system.
● Check the basic Cloud Connector configuration steps:
Initial Configuration [page 285]
Configuring the Cloud Connector for HTTP [page 148]
Configuring the Cloud Connector for RFC [page 200]
Backend is not available in the list of defined system mappings in Cloud Connector.
Possible cause: The Cloud Connector is not configured properly.
Possible solution: Check the basic Cloud Connector configuration steps: Initial Configuration [page 285]
Resource is not accessible in Cloud Connector or backend is not reachable.
Possible cause: The Cloud Connector is not configured properly.
Possible solution: Check the basic Cloud Connector configuration steps: Initial Configuration [page 285]
Backend is not reachable from Cloud Connector.
Possible cause: The Cloud Connector configuration is OK, but the back-end system is not reachable.
Possible solution: Check the backend (server) availability.
Prerequisites
You have previously created or imported a connectivity destination (HTTP, Mail, or RFC ) in the Destinations editor
of the cockpit.
1. In the Destinations editor, go to the existing destination which you want to clone.
Related Information
Prerequisites
You have previously created or imported a connectivity destination (HTTP, Mail, or RFC) in the Destinations editor
of the cockpit.
Procedure
● Edit a destination:
Tip
For complete consistency, we recommend that you first stop your application, then apply your
destination changes, and then start again the application. Also, bear in mind that these steps will cause
application downtime.
● Delete a destination:
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor. For more information, see Access the
Destinations Editor [page 110].
Context
This page explains how you can maintain truststore and keystore certificates in the Destinations editor. You can
upload, add and delete certificates for your connectivity destinations. Bear in mind that:
● You can only use JKS, PFX, and P12 files for the destination keystore, and JKS, CRT, CER, and DER files for the destination truststore.
● You can add certificates only for HTTPS destinations. Truststore can be used for all authentication types.
Keystore is available only for ClientCertificateAuthentication.
● An uploaded certificate file should contain the entire certificate chain.
Procedure
Uploading Certificates
Deleting Certificates
1. Choose the Certificates button or click the Upload and Delete Certificates link.
2. Select the certificate you want to remove and choose Delete Selected.
3. Upload another certificate, or close the Certificates window.
Related Information
Prerequisites
Note
The Destinations editor allows importing destination files with the extensions .props, .properties, .jks, and .txt, as well as files with no extension. Destination files must be encoded in the ISO 8859-1 character encoding.
○ If the configuration file contains valid data, it is displayed in the Destinations editor with no errors. The
Save button is enabled so that you can successfully save the imported destination.
○ If the configuration file contains invalid properties or values, error messages are displayed in red under the relevant fields in the Destinations editor, prompting you to correct them accordingly.
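As an aside, destination files are plain Java properties files in ISO 8859-1 encoding, so a file can be sanity-checked locally before importing it. The content below is a hypothetical minimal destination, not taken from this guide:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Properties;

public class DestinationFileCheck {
    public static void main(String[] args) throws IOException {
        // A hypothetical minimal destination file, encoded as ISO 8859-1
        // as required by the Destinations editor.
        String content = "Name=weather\nType=HTTP\nAuthentication=NoAuthentication\n";
        Properties props = new Properties();
        props.load(new ByteArrayInputStream(content.getBytes("ISO-8859-1")));
        // Inspect a property before importing the file in the cockpit.
        System.out.println(props.getProperty("Type"));
    }
}
```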
Related Information
Prerequisites
You have created a connectivity destination (HTTP, Mail, or RFC) in the Destinations editor.
Procedure
○ If the destination does not contain client certificate authentication, it is saved as a single configuration file.
○ If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a JKS file.
Related Information
User → jco.client.user
Password → jco.client.passwd
Note
For security reasons, do not use these additional properties but use the corresponding main properties' fields.
Related Information
Overview
The connectivity service provides a secure way of forwarding the identity of an on-demand user to the Cloud
Connector, and from there to the back end of the relevant on-premise system. This process is called principal
propagation. It uses SAML tokens as the exchange format for the user information. User mapping takes place in
the back end and, in this way, either the token is forwarded directly to the back end or an X.509 certificate is
generated, which is then used in the back end.
Restriction
This authentication is only applicable if you want to connect to your on-premise system via the Cloud
Connector.
How It Works
1. The user authenticates at the Web application front end via the IDP (Identity Provider) using a standard SAML Web SSO profile. When the back-end connection is established by the Web application, the destination service (re)uses the received SAML assertion to create the connection to the on-premise system (BE1-BEm).
2. The Cloud Connector validates the received SAML assertion for a second time, extracts the attributes, and uses its STS (Security Token Service) component to issue a new token (an X.509 certificate) with the same or similar attributes to assert the identity to the back end.
3. The Cloud Connector and the Web application(s) share the same SP identity, that is, the trust is only set up once in the IDP.
You can create and configure connectivity destinations making use of the PrincipalPropagation property in the
Eclipse IDE and in the cockpit. Bear in mind that this property is only available for destination configurations
created in the cloud.
Tasks
Related Information
● Call an Internet service using a simple application that queries some information from a public service:
Consume Internet Services (Java Web or Java EE 6 Web Profile) [page 156]
Consume Internet Services (Java Web Tomcat 7) [page 163]
● Call a service from a fenced customer network using a simple application that consumes an on-premise ping
service:
Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 171]
Consume Back-End Systems (Java Web Tomcat 7) [page 182]
You can consume on-premise back-end services in two ways – via HTTP destinations and via the HTTP Proxy. For
more information, see:
To create a loopback connection, you can use the dedicated HTTP port bound to localhost. The port number can
be obtained from the cloud environment variable HC_LOCAL_HTTP_PORT.
For more information, see Using Cloud Environment Variables [page 1172] → section "List of Environment
Variables".
Note
When deploying locally from the Eclipse IDE or the console client, the HTTP port may differ.
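A minimal sketch of reading this variable, assuming only the HC_LOCAL_HTTP_PORT name from the text; the fallback port 8080 is an assumption for local runs, where the actual port may differ:

```java
public class LoopbackUrl {
    // Sketch only: build a loopback URL from the HC_LOCAL_HTTP_PORT
    // environment variable provided by the cloud environment.
    static String loopbackUrl(String path) {
        String port = System.getenv("HC_LOCAL_HTTP_PORT");
        if (port == null || port.isEmpty()) {
            port = "8080"; // hypothetical local default; may differ locally
        }
        return "http://localhost:" + port + path;
    }

    public static void main(String[] args) {
        System.out.println(loopbackUrl("/ping"));
    }
}
```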
Related Information
Using the Keystore Service for Client Side HTTPS Connections [page 2240]
Overview
By default, all connectivity API packages are visible from all Web applications. In this classical case, applications
can consume the destinations via a JNDI lookup. For more information, see Connectivity and Destination APIs
[page 76].
Caution
● If you use the SDK for Java Web, creating a destination before deploying the application is recommended, but not mandatory.
● If you use SDK for Java EE 6 Web Profile, you must create a destination before deploying the application.
● If you use SDK for Java Web Tomcat 7, the DestinationFactory API is not supported. Instead, you can
use ConnectivityConfiguration API [page 80].
Tip
When you know in advance the names of all the destinations you need, it is better to reference those destinations directly. Otherwise, we recommend using the DestinationFactory.
Procedure
<resource-ref>
<res-ref-name>connectivity/DestinationFactory</res-ref-name>
<res-type>com.sap.core.connectivity.api.DestinationFactory</res-type>
</resource-ref>
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.DestinationFactory;
import com.sap.core.connectivity.api.http.HttpDestination;
...
// look up the DestinationFactory via JNDI and retrieve the destination by name
Context ctx = new InitialContext();
DestinationFactory destinationFactory =
    (DestinationFactory) ctx.lookup(DestinationFactory.JNDI_NAME);
HttpDestination destination = (HttpDestination)
    destinationFactory.getDestination("myBackend");
3. With the retrieved HTTP destination, you can then, for example, send a simple GET request to the configured
remote system by using the following code:
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.HttpResponse;
...
// call service "myService" on the system configured in the given destination
HttpClient createHttpClient = destination.createHttpClient();
HttpGet get = new HttpGet("myService");
HttpResponse resp = createHttpClient.execute(get);
Overview
The HTTP destinations provide data communication via the HTTP protocol and are used for both Internet and on-premise connections.
The runtime tries to resolve a destination in the following order: subscription level → subaccount level → application level.
By using the optional DestinationProvider property, a destination can be limited to the application level only; that is, the runtime then tries to resolve the destination on application level only.
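The resolution order above can be sketched as a simple first-match lookup. This is an illustrative sketch only, not the platform's actual implementation; all names here are hypothetical:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DestinationResolution {
    // Return the first configuration found while walking the levels
    // in the documented order.
    static String resolve(String name, List<Map<String, String>> levels) {
        for (Map<String, String> level : levels) {
            if (level.containsKey(name)) {
                return level.get(name);
            }
        }
        return null; // not configured on any level
    }

    public static void main(String[] args) {
        Map<String, String> subscription = new HashMap<>();
        Map<String, String> subaccount = new HashMap<>();
        Map<String, String> application = new HashMap<>();
        subaccount.put("weather", "subaccount-level configuration");
        application.put("weather", "application-level configuration");
        // Subscription level -> subaccount level -> application level:
        // the subaccount-level entry wins here.
        System.out.println(resolve("weather",
                Arrays.asList(subscription, subaccount, application)));
    }
}
```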
Property Description
Note
If you use Java Web Tomcat 7 runtime container, the DestinationProvider property is not supported.
Instead, you can use AuthenticationHeaderProvider API [page 82].
Example
Name=weather
Type=HTTP
Authentication=NoAuthentication
DestinationProvider=Application
The proxy type used for a destination must be specified by the destination property ProxyType. The property's
default value (if not configured explicitly) is Internet.
If you work in your local development environment behind a proxy server and want to use a service from the
Internet, you need to configure your proxy settings on JVM level. To do this, proceed as follows:
1. On the Servers view, double-click the added server and choose Overview to open the editor.
2. Click the Open Launch Configuration link.
3. Choose the (x)=Arguments tab page.
4. In the VM Arguments box, add the following row:
-Dhttp.proxyHost=yourProxyHost -Dhttp.proxyPort=yourProxyPort -Dhttps.proxyHost=yourProxyHost -Dhttps.proxyPort=yourProxyPort
5. Choose OK.
6. Start or restart your SAP HANA Cloud local runtime.
For more information and example, see Consume Internet Services (Java Web or Java EE 6 Web Profile) [page
156].
● When using the Internet proxy type, you do not need to perform any additional configuration steps.
● When using the OnPremise proxy type, you configure the setting the standard way through the Connectivity
editor in the Eclipse IDE.
For more information and example, see Consume Back-End Systems (Java Web or Java EE 6 Web Profile)
[page 171].
Configuring Authentication
When creating an HTTP destination, you can use different authentication types for access control:
Context
The server certificate authentication is applicable for all client authentication types, described below.
Note
TLS 1.2 is now the default TLS version for HTTP destinations. If an HTTP destination is consumed by a Java application, the change takes effect after a restart. All HTTP destinations that use the HTTPS protocol and have ProxyType=Internet can be affected. A previous TLS version can be used by configuring an additional property, TLSVersion=TLSv1.0 or TLSVersion=TLSv1.1.
Properties
Property Description
TLSVersion Optional property. Specifies the preferred TLS version to be used by the current destination. Since TLS 1.2 is not enabled by default on older Java versions, this property can be used to configure TLS 1.2 in case it is required by the server configured in this destination. It is usable only in HTTP destinations. Example: TLSVersion=TLSv1.2.
TrustStoreLocation Path to the JKS file which contains trusted certificates (Certificate Authorities) for authentication against a remote client.
1. When used in the local environment: the relative path to the JKS file. The root path is the server's location on the file system.
2. When used in the cloud environment: the name of the JKS file.
Note
The default JDK truststore is appended to the truststore defined in the destination configuration. As a result, the destination simultaneously uses both truststores. If the TrustStoreLocation property is not specified, the JDK truststore is used as the default truststore for the destination.
TrustStorePassword Password for the JKS truststore file. This property is mandatory if TrustStoreLocation is used.
TrustAll If this property is set to TRUE in the destination, the server certificate is not checked for SSL connections. It is intended for test scenarios only, and should not be used in production (since the SSL server certificate is not checked, the server is not authenticated). The possible values are TRUE or FALSE; the default value is FALSE (that is, if the property is not present at all).
HostnameVerifier Optional property. It has two values: Strict and BrowserCompatible. This property specifies how the server hostname matches the names stored inside the server's X.509 certificate. This verification is only applied if TLS or SSL protocols are used, and is not applied if the TrustAll property is specified. The default value (used if no value is explicitly specified) is Strict.
Note
You can upload TrustStore JKS files using the same command for uploading destination configuration property
file. You only need to specify the JKS file instead of the destination configuration file.
Note
Connections to remote services which require Java Cryptography Extension (JCE) unlimited strength
jurisdiction policy are not supported.
Related Information
Context
By default, all SAP systems accept SAP assertion tickets for user propagation.
Note
The SAP assertion ticket is a special type of logon ticket. For more information, see SAP Logon Tickets and
Logon Using Tickets.
The aim of the SAPAssertionSSO destination is to generate such an assertion ticket in order to propagate the
currently logged-on SAP Cloud Platform user to an SAP back-end system. You can only use this authentication
type if the user IDs on both sides are the same. The following diagram shows the elements of the configuration
process on the SAP Cloud Platform and in the corresponding back-end system:
1. Configure the back-end system so that it can accept SAP assertion tickets signed by a trusted X.509 key pair. For more information, see Configuring a Trust Relationship for SAP Assertion Tickets.
2. Create and configure a SAPAssertionSSO destination by using the properties listed below, and deploy it on
SAP Cloud Platform.
○ Configure Destinations from the Cockpit [page 108]
○ Configure Destinations from the Console Client [page 87]
Note
Configuring SAPAssertionSSO destinations from the Eclipse IDE is not yet supported.
Property Description
ProxyType You can use both proxy types Internet and OnPremise.
Example
Name=weather
Type=HTTP
Authentication=SAPAssertionSSO
IssuerSID=JAV
IssuerClient=000
RecipientSID=SAP
RecipientClient=100
Certificate=MIICiDCCAkegAwI...rvHTQ\=\=
SigningKey=MIIBSwIB...RuqNKGA\=
Context
The aim of the PrincipalPropagation destination is to forward the identity of an on-demand user to the Cloud Connector, and from there to the back end of the relevant on-premise system. In this way, the on-demand user no longer needs to provide their identity every time they connect to an on-premise system via the same Cloud Connector.
Configuration Steps
You can create and configure a PrincipalPropagation destination by using the properties listed below, and deploy it
on SAP Cloud Platform. For more information, see:
Note
This property is only available for destination configurations created on the cloud.
Properties
Property Description
Example
Name=OnPremiseDestination
Type=HTTP
URL= http://virtualhost:80
Authentication=PrincipalPropagation
ProxyType=OnPremise
Related Information
Context
SAP Cloud Platform provides support for applications to use the SAML Bearer assertion flow for consuming
OAuth-protected resources. In this way, applications do not need to deal with some of the complexities of OAuth
and can reuse existing identity providers for user data. Users are authenticated by using SAML against the
configured trusted identity providers. The SAML assertion is then used to request an access token from an OAuth
authorization server. This access token is automatically injected in all HTTP requests to the OAuth-protected
resources.
Tip
The access tokens are renewed automatically: when a token is about to expire, a new token is created shortly before the expiration of the old one.
You can create and configure an OAuth2SAMLBearerAssertion destination by using the properties listed below, and
deploy it on SAP Cloud Platform. For more information, see:
Note
Configuring OAuth2SAMLBearerAssertion destinations from the Eclipse IDE is not yet supported.
If you use proxy type OnPremise, both OAuth server and the protected resource have to be located on premise
and exposed via the Cloud Connector. Make sure to set URL to the virtual address of the protected resource and
tokenServiceURL to the virtual address of the OAuth server (see section Properties below).
Note
The combination of an on-premise OAuth server with a protected resource on the Internet is not supported, nor is an OAuth server on the Internet with an on-premise protected resource.
Properties
The table below lists the destination properties needed for OAuth2SAMLBearerAssertion authentication type. The
values for these properties should be found in the documentation of the particular provider of OAuth-protected
services. Usually, only a subset of the optional properties is required by a particular service provider.
Property Description
Required
Type Destination type. Use HTTP as a value for all HTTP(S) destinations.
ProxyType You can use both proxy types Internet and OnPremise.
Additional
SystemUser User to be used when requesting an access token from the OAuth authorization server. If this property is not specified, the currently logged-in user is used.
nameQualifier Security domain of the user for which the access token is requested.
SkipSSOTokenGenerationWhenNoUser If this parameter is set and there is no user logged in, token
generation is skipped, thus allowing anonymous access to
public resources. If set, it may have any value.
Note
When the OAuth authorization server is called, it accepts the trust settings of the destination. For more
information, see Server Certificate Authentication [page 132].
Example
The connectivity destination below provides HTTP access to the OData API of SuccessFactors Jam.
URL=https://demo.sapjam.com/OData/OData.svc
Name=sap_jam_odata
TrustAll=true
ProxyType=Internet
Type=HTTP
Authentication=OAuth2SAMLBearerAssertion
tokenServiceURL=https://demo.sapjam.com/api/v1/auth/token
clientKey=Aa1Bb2Cc3DdEe4F5GHIJ
audience=cubetree.com
nameQualifier=www.successfactors.com
Context
The AppToAppSSO destinations are used in scenarios of application-to-application communication where the caller needs to propagate its logged-in user. Both applications are deployed on SAP Cloud Platform.
Configuration Steps
1. Configure your subaccount to allow principal propagation. For more information, see Application Identity
Provider [page 2161] → section "Specifying Custom Local Provider Settings".
Note
This setting is done per subaccount, which means that once set to Enabled all applications within the
subaccount will accept user propagation.
2. Create and configure an AppToAppSSO destination by using the properties listed below, and deploy it on SAP
Cloud Platform. For more information, see:
○ Configure Destinations from the Cockpit [page 108]
○ Configure Destinations from the Console Client [page 87]
Note
Configuring AppToAppSSO destinations from the Eclipse IDE is not yet supported.
Property Description
Type Destination type. Use HTTP as a value for all HTTP(S) destinations.
SessionCookieNames Optional.
Note
If a session cookie name has a variable part, you can specify it as a regular expression.
Example:
JSESSIONID, JTENANTSESSIONID_.*,
CookieName, Cookie*Name, CookieName.*
Note
The spaces after comma are optional.
Note
Recommended value for the target Java app on SAP Cloud
Platform is: JTENANTSESSIONID_.*, and for the HANA
XS app is: xsId.*.
Note
If not specified, both applications must be consumed in the
same subaccount.
SkipSSOTokenGenerationWhenNoUser Optional.
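The regular-expression behavior of SessionCookieNames noted above can be illustrated with plain Java string matching (the cookie names below are only examples):

```java
public class SessionCookiePattern {
    public static void main(String[] args) {
        // SessionCookieNames entries are regular expressions, so a value
        // like JTENANTSESSIONID_.* matches tenant-specific cookie names.
        String pattern = "JTENANTSESSIONID_.*";
        System.out.println("JTENANTSESSIONID_abc123".matches(pattern)); // true
        System.out.println("JSESSIONID".matches(pattern));              // false
    }
}
```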
Example
#
#Wed Jan 13 12:25:47 UTC 2016
Name=apptoapp
URL=https://someurl.com
ProxyType=Internet
Type=HTTP
SessionCookieNames=JTENANTSESSIONID_.*
Authentication=AppToAppSSO
Related Information
Context
This section lists the supported client authentication types and the relevant supported properties.
This is used for destinations that refer to a service on the Internet or an on-premise system that does not require
authentication. The relevant property value is:
Authentication=NoAuthentication
Note
When a destination is using HTTPS protocol to connect to a Web resource, the JDK truststore is used as
truststore for the destination.
Basic Authentication
This is used for destinations that refer to a service on the Internet or an on-premise system that requires basic
authentication. The relevant property value is:
Authentication=BasicAuthentication
Property Description
Password Password
Preemptive If this property is not set or is set to TRUE (that is, the default behavior is to use preemptive sending), the authentication token is sent preemptively. Otherwise, it relies on the challenge from the server (401 HTTP code). The default value (used if no value is explicitly specified) is TRUE. For more information about preemptiveness, see http://tools.ietf.org/html/rfc2617#section-3.3 .
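To make preemptive sending concrete: a preemptive client attaches the Basic Authorization header to its very first request instead of waiting for a 401 challenge. A minimal sketch of constructing that header value, with hypothetical credentials:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PreemptiveBasicAuth {
    // Builds the Authorization header value that a preemptive client sends
    // with its first request, rather than after a 401 challenge.
    static String basicAuthHeader(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.ISO_8859_1));
    }

    public static void main(String[] args) {
        // Hypothetical credentials, for illustration only.
        System.out.println(basicAuthHeader("user", "password"));
        // → Basic dXNlcjpwYXNzd29yZA==
    }
}
```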
Note
When a destination is using the HTTPS protocol to connect to a Web resource, the JDK truststore is used as
truststore for the destination.
Note
Basic Authentication and No Authentication can be used in combination with ProxyType=OnPremise. In this case, the CloudConnectorLocationId property can also be specified. Starting with SAP HANA Cloud Connector 2.9.0, it is possible to connect multiple Cloud Connectors to a subaccount as long as their location IDs are different. The value defines the location ID identifying the Cloud Connector over which the connection is opened. The default value is the empty string, which identifies a Cloud Connector that is connected without any location ID. This is also the case for all Cloud Connector versions prior to 2.9.0.
This is used for destinations that refer to a service on the Internet. The relevant property value is:
Authentication=ClientCertificateAuthentication
Property Description
KeyStoreLocation Path to the JKS file that contains the client certificate(s) for authentication against a remote server.
1. When used in the local environment: the relative path to the JKS file. The root path is the server's location on the file system.
2. When used in the cloud environment: the name of the JKS file.
KeyStorePassword The password for the keystore. This property is mandatory if KeyStoreLocation is used.
Note
You can upload KeyStore JKS files using the same command for uploading destination configuration property
file. You only need to specify the JKS file instead of the destination configuration file.
Configuration
Related Information
The connectivity service provides a standard HTTP Proxy for on-premise connectivity to be accessible by any
application. Proxy host and port are available as the environment variables HC_OP_HTTP_PROXY_HOST and
HC_OP_HTTP_PROXY_PORT.
Note
● The HTTP Proxy provides a more flexible way to use on-premise connectivity via standard HTTP clients. It is not suitable for other protocols, such as RFC or Mail, and HTTPS requests do not work either.
● The previous alternative, that is, using on-premise connectivity via existing HTTP Destination API, is still
supported. For more information, see DestinationFactory API [page 128].
Multitenancy Support
By default, all applications are started in multitenant mode. Such applications are responsible for propagating consumer subaccounts to the HTTP Proxy, using the header SAP-Connectivity-ConsumerAccount. This header is mandatory during the first request of each HTTP connection. HTTP connections are associated with one consumer subaccount and cannot be used with another subaccount. If the SAP-Connectivity-ConsumerAccount header is sent after the first request and its value differs from the value in the first request, the Proxy returns HTTP response code 400.
Starting with SAP HANA Cloud Connector 2.9.0, it is possible to connect multiple cloud connectors to a
subaccount as long as their location ID is different. Using the header SAP-Connectivity-SCC-Location_ID it
is possible to specify the Cloud Connector over which the connection shall be opened. If this header is not
specified, the connection will be opened to the Cloud Connector that is connected without any location ID. This is
also the case for all Cloud Connector versions prior to 2.9.0.
If an application VM is started for one consumer subaccount, this subaccount is known to the HTTP Proxy and the application does not need to send the SAP-Connectivity-ConsumerAccount header.
On multitenant VMs, applications are responsible for propagating the consumer subaccount via the SAP-Connectivity-ConsumerAccount header. The following example shows how this can be done.
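A minimal sketch of the multitenant case, using the standard java.net API purely for illustration; the proxy host, port, and subaccount name below are hypothetical (in practice they come from HC_OP_HTTP_PROXY_HOST, HC_OP_HTTP_PROXY_PORT, and the tenant context):

```java
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class MultitenantProxyRequest {
    public static void main(String[] args) throws Exception {
        // Hypothetical proxy values; normally read from the environment
        // variables HC_OP_HTTP_PROXY_HOST and HC_OP_HTTP_PROXY_PORT.
        Proxy onPremiseProxy = new Proxy(Proxy.Type.HTTP,
                InetSocketAddress.createUnresolved("proxy-host", 20003));
        URL url = new URL("http://virtualhost:1234");
        HttpURLConnection request =
                (HttpURLConnection) url.openConnection(onPremiseProxy);
        // Mandatory on the first request of each connection in multitenant
        // mode: propagate the consumer subaccount (hypothetical name).
        request.setRequestProperty("SAP-Connectivity-ConsumerAccount", "myconsumer");
        System.out.println(
                request.getRequestProperty("SAP-Connectivity-ConsumerAccount"));
        // No request is actually sent here; connecting requires a live proxy.
    }
}
```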
On single-tenant VMs, the consumer subaccount is known and subaccount propagation via header is not needed.
The following example demonstrates this case.
// create HTTP client and insert the necessary headers in the request
HttpClient httpClient = new DefaultHttpClient();
httpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, new
HttpHost(proxyHost, proxyPort));
HttpGet request = new HttpGet("http://virtualhost:1234");
Related Information
Context
The HTTP Proxy can forward the identity of an on-demand user to the Cloud Connector, and from there to the back end of the relevant on-premise system. In this way, on-demand users no longer need to provide their identity every time they make connections to on-premise systems via one and the same Cloud Connector. To propagate the logged-in user, an application must use the AuthenticationHeaderProvider API to generate a header, which it then embeds in the HTTP request to the on-premise system.
● IDPs used by applications protected by SAML2 have to be denoted as trustworthy for the Cloud Connector.
● Non-SAML2 protected applications have to be denoted themselves as trustworthy for the Cloud Connector.
Example
Note
You can also apply dependency injection by using the @Resource annotation.
Related Information
This section helps you to configure your Cloud Connector [page 253] when you are working via the HTTP protocol.
Related Information
To set up mutual authentication between the Cloud Connector and any back-end system it connects to,
you can import an X.509 client certificate into the Cloud Connector. The Cloud Connector then uses this so-called
"system certificate" for all HTTPS requests to back ends that request or require a client certificate. This
means that the CA which signed the Cloud Connector's client certificate needs to be trusted by all back-end
systems to which the Cloud Connector is supposed to connect.
The system certificate must be provided as a PKCS#12 file containing the client certificate, the corresponding
private key, and the CA root certificate that signed the client certificate (plus the certificates of any
intermediate CAs, if the certificate chain is longer than 2). You can choose the PKCS#12 file from the file system
via the file upload dialog. Its password must also be supplied for the import process.
Note
As of version 2.6.0, there is a second option: starting a certificate signing request (CSR) procedure, similar
to the one for the UI certificate described in Exchange UI Certificates in the Administration UI [page 283].
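As an illustration of what the import expects, the following sketch (not part of the Cloud Connector itself) uses the JDK KeyStore API to inspect a PKCS#12 file and report the length of the certificate chain stored with the private key; the class name, file path, and password handling are assumptions for illustration:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.util.Collections;

public class Pkcs12Check {
    // Returns the length of the certificate chain stored with the first
    // private-key entry in the PKCS#12 file, or 0 if no key entry exists.
    // For a valid system certificate the chain should contain the client
    // certificate plus the signing CA(s).
    public static int chainLength(String path, char[] password) throws Exception {
        KeyStore store = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(path)) {
            store.load(in, password);
        }
        for (String alias : Collections.list(store.aliases())) {
            if (store.isKeyEntry(alias)) {
                Certificate[] chain = store.getCertificateChain(alias);
                return (chain == null) ? 0 : chain.length;
            }
        }
        return 0;
    }
}
```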
If a system certificate has been imported successfully, its distinguished name, the name of the issuer, and the
validity dates are displayed:
If a system certificate is no longer required, you can delete it: use the respective button and confirm the
deletion. If you need the public key to establish trust with a server, you can simply export the full chain via the
Export button.
Related Information
To allow your on-demand applications to access a certain back-end system on the intranet, you need to insert an
extra line into the Cloud Connector access control management.
4. Protocol: This field lets you decide whether the Cloud Connector should use HTTP or HTTPS for the
connection to the back-end system. Note that this is completely independent of the setting on the cloud side.
Thus, even if the HTTP destination on the cloud side specifies "http://" in its URL, you can select HTTPS. This
ensures that the entire connection from the on-demand application to the actual back-end
system (provided through the SSL tunnel) is SSL-encrypted. The only prerequisite is that the back-end system
supports HTTPS on that port. For more information, see Initial Configuration (HTTP) [page 149].
○ If you specify HTTPS and there is a "system certificate" imported in the Cloud Connector, the latter
attempts to use that certificate for performing a client-certificate-based logon to the back-end system.
○ If there is no system certificate imported, the Cloud Connector opens an HTTPS connection without client
certificate.
6. Virtual Host specifies the host name exactly as it is specified as the URL property in the HTTP destination
configuration in SAP Cloud Platform. The virtual host can be a fake name and does not need to exist. The
Virtual Port allows you to distinguish between different entry points of your back-end system, for example,
HTTP/80 and HTTPS/443, and have different sets of access control settings for them. For example, some
noncritical resources may be accessed by HTTP, while some other critical resources are to be called using
HTTPS only. The fields are prepopulated with the values of the Internal Host and Internal Port. If you do not
modify them, you must provide your internal host and port also in the cloud-side destination
configuration, or in the URL used by your HTTP client.
7. Principal Type defines what kind of principal is used when configuring a destination on the cloud side that uses this
system mapping with authentication type Principal Propagation. Regardless of what you choose, make
sure the general configuration for the principal type has been completed so that it works
correctly. For destinations using different authentication types, this setting is ignored. If you choose None as
the principal type, principal propagation to this system is not possible.
Note
There are two variants of a principal type X.509 certificate: X.509 certificate (general usage) and X.509
certificate (strict usage). The latter was introduced with Cloud Connector 2.11. If the cloud side sends a
principal, these variants behave identically. If no principal is sent, the injected HTTP headers indicate that
the system certificate used for trust is not used for authentication.
9. The summary shows information about the system to be stored. When saving the host mapping, you can
trigger a ping from the Cloud Connector to the internal host by selecting the Check availability of internal host
checkbox. This lets you verify that the Cloud Connector can indeed access the internal system, and helps you
catch basic problems, such as spelling mistakes or firewall issues between the Cloud Connector
and the internal host. If the ping to the internal host is successful, the Cloud Connector saves the mapping
without any remark. If it fails, a warning pops up stating that the host is not reachable; details on the reason are
available in the log files. You can also execute such a check for all selected systems in the Access Control overview.
In addition to allowing access to a particular host and port, you also need to specify which URL paths (Resources)
are allowed to be invoked on that host. The Cloud Connector uses very strict white-lists for its access control, so
only those URLs for which you explicitly granted access are allowed. All other HTTP(S) requests are denied by the
Cloud Connector.
To define the permitted URLs (Resources) for a particular back-end system, choose the line corresponding to that
back-end system and choose Add in section Resources Accessible On... below. A dialog appears prompting you to
enter the specific URL path that you want to allow to be invoked.
The Enabled checkbox lets you specify whether the resource is initially enabled or disabled (see the
following section for an explanation of enabled and disabled resources).
For testing purposes, it is sometimes useful to temporarily disable certain resources without having to delete
them from the configuration. This lets you easily restore access to these resources later
without having to type in everything once again.
● To enable the resource again, select it and choose the Enable button.
● It is also possible to mark multiple lines and then to disable or enable all of them in one go by clicking the
Enable/Disable icons in the top row.
Examples:
● /production/accounting and Path only (sub-paths are excluded) are selected. Only requests of the form GET /
production/accounting or GET /production/accounting?name1=value1&name2=value2... are
allowed. (GET can also be replaced by POST, PUT, DELETE, and so on.)
● /production/accounting and Path and all sub-paths are selected. All requests of the form GET /production/
accounting-plus-some-more-stuff-here?name1=value1... are allowed.
● / and Path and all sub-paths are selected. All requests to this server are allowed.
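The matching behavior in these examples can be sketched as follows. This is an illustrative model of the two access-policy options, not the Cloud Connector's actual implementation:

```java
public class ResourceCheck {
    // "Path only" requires an exact path match (query strings are ignored);
    // "Path and all sub-paths" allows any request path starting with the resource.
    public static boolean isAllowed(String requestPath, String resource, boolean includeSubPaths) {
        int q = requestPath.indexOf('?');
        String path = (q >= 0) ? requestPath.substring(0, q) : requestPath;
        return includeSubPaths ? path.startsWith(resource) : path.equals(resource);
    }
}
```

For instance, with resource `/production/accounting` and "Path only", a request to `/production/accounting?name1=value1` is allowed but `/production/accounting/sub` is not; with "Path and all sub-paths", `/production/accounting-plus-some-more-stuff-here` is also allowed.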
Related Information
1.5.2.1.4.5 Tutorials
The connectivity service provides secure, reliable, and easy-to-consume access to remote services running either
on the Internet or in an on-premise network.
Use Cases
The tutorials in this section show how you can make connections to Internet services and on-premise networks:
Consume Internet Services (Java Web or Java EE 6 Web Profile) [page 156]
Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 171]
Context
This step-by-step tutorial demonstrates the consumption of Internet services using the Apache HTTP Client. The
tutorial also shows how a connectivity-enabled Web application can be deployed on a local server and on the cloud.
The servlet code, the web.xml content, and the destination file (outbound-internet-destination) used in
this tutorial are mapped to the connectivity sample project located in <SDK_location>/samples/
connectivity. You can directly import this sample in your Eclipse IDE. For more information, see Import
Samples as Eclipse Projects [page 1145].
Prerequisites
You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 1126].
Note
You need to install SDK for Java Web or SDK for Java EE 6 Web Profile.
5. Choose Finish so that the ConnectivityServlet.java servlet is created and opened in the Java editor.
6. Go to ConnectivityHelloWorld WebContent WEB-INF and open the web.xml file.
7. Choose the Source tab page.
8. Add the following code block to the <web-app> element:
<resource-ref>
<res-ref-name>outbound-internet-destination</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
Note
The value of the <res-ref-name> element in the web.xml file should match the name of the destination
that you want to be retrieved at runtime. In this case, the destination name is outbound-internet-
destination.
9. Replace the entire servlet class with the following one to make use of the destination API. The destination API
is visible by default for cloud applications and does not need to be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import static java.net.HttpURLConnection.HTTP_OK;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.DestinationFactory;
import com.sap.core.connectivity.api.http.HttpDestination;
/**
* Servlet class making HTTP calls to specified HTTP destinations.
* Destinations are used in the following exemplary connectivity scenarios:<br>
Note
The given servlet can run with different destination scenarios; the user specifies the destination
name as a request parameter in the calling URL. In this case, call the servlet as
<applicationURL>/?destname=outbound-internet-destination. Nevertheless, your servlet can
still run even without specifying the destination name for this outbound scenario.
10. Save the Java editor and make sure the project compiles without errors.
Caution
● If you use the SDK for Java Web, we recommend, but do not require, that you create a destination before deploying the
application.
-Dhttp.proxyHost=<your_proxy_host> -Dhttp.proxyPort=<your_proxy_port> -Dhttps.proxyHost=<your_proxy_host> -Dhttps.proxyPort=<your_proxy_port>
○ Choose OK.
5. Go to the Connectivity tab page of your local server. Create a destination with the name outbound-
internet-destination, and configure it so it can be consumed by the application at runtime. For more
information, see Configure Destinations from the Eclipse IDE [page 95].
For the sample destination to work properly, the following properties need to be configured:
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
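The properties above follow the plain key=value format of Java properties files. As an illustration only (this is not how the platform itself consumes destinations), such a snippet can be parsed with java.util.Properties:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class DestinationProps {
    // Parse a destination definition given as key=value lines.
    public static Properties parse(String text) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(text));
        return props;
    }
}
```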
6. From the ConnectivityServlet.java editor's context menu, choose Run As Run on Server .
7. Make sure that the Choose an existing server option is selected and choose Java Web Server.
8. Choose Finish.
The server is now started, displayed as Java Web Server [Started, Synchronized] in the Servers
view.
Result:
The internal Web browser opens with the expected output of the connectivity-enabled Web application.
7. Choose Finish.
8. A new server <application>.<subaccount> [Stopped]> appears in the Servers view.
9. Go to the Connectivity tab page of the server, create a destination with the name outbound-internet-
destination, and configure it using the following properties:
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
ProxyType=Internet
Result:
The internal Web browser opens with the URL pointing to SAP Cloud Platform and displaying the expected output
of the connectivity-enabled Web application.
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
Context
This step-by-step tutorial demonstrates consumption of Internet services using HttpURLConnection. The
tutorial also shows how a connectivity-enabled Web application can be deployed on a local server and on the cloud.
The servlet code, the web.xml content, and the destination file (outbound-internet-destination) used in
this tutorial are mapped to the connectivity sample project located in <SDK_location>/samples/
connectivity. You can directly import this sample in your Eclipse IDE. For more information, see Import
Samples as Eclipse Projects [page 1145].
Prerequisites
You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 1126].
<resource-ref>
<res-ref-name>connectivityConfiguration</res-ref-name>
<res-type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</res-type>
</resource-ref>
9. Replace the entire servlet class with the following one to make use of the destination API. The destination API
is visible by default for cloud applications and does not need to be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import javax.annotation.Resource;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.cloud.account.TenantContext;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
import com.sap.core.connectivity.api.configuration.DestinationConfiguration;
/**
* Servlet class making http calls to specified http destinations.
* Destinations are used in the following example connectivity scenarios:<br>
* - Connecting to an outbound Internet resource using HTTP destinations<br>
* - Connecting to an on-premise backend using on premise HTTP destinations,<br>
* where the destinations have no authentication.<br>
*/
public class ConnectivityServlet extends HttpServlet {
@Resource
private TenantContext tenantContext;
/** {@inheritDoc} */
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
HttpURLConnection urlConnection = null;
String destinationName = request.getParameter("destname");
try {
// Look up the connectivity configuration API
Context ctx = new InitialContext();
ConnectivityConfiguration configuration = (ConnectivityConfiguration) ctx.lookup("java:comp/env/connectivityConfiguration");
if (ON_PREMISE_PROXY.equals(proxyType)) {
// Get proxy for on-premise destinations
proxyHost = System.getenv("HC_OP_HTTP_PROXY_HOST");
proxyPort = Integer.parseInt(System.getenv("HC_OP_HTTP_PROXY_PORT"));
} else {
// Get proxy for internet destinations
proxyHost = System.getProperty("http.proxyHost");
proxyPort = Integer.parseInt(System.getProperty("http.proxyPort"));
}
return new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
}
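The fragment above omits its enclosing method signature. A self-contained sketch of the proxy selection could look like the following; the environment variables are taken from the fragment, while the class name and the assumption that ON_PREMISE_PROXY holds the OnPremise proxy type string are illustrative:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

public class ProxySelection {
    // Assumed value of the on-premise proxy type (matches ProxyType=OnPremise).
    private static final String ON_PREMISE_PROXY = "OnPremise";

    public static Proxy getProxy(String proxyType) {
        String proxyHost;
        int proxyPort;
        if (ON_PREMISE_PROXY.equals(proxyType)) {
            // On-premise destinations go through the HTTP proxy injected
            // into the VM environment by the connectivity service.
            proxyHost = System.getenv("HC_OP_HTTP_PROXY_HOST");
            proxyPort = Integer.parseInt(System.getenv("HC_OP_HTTP_PROXY_PORT"));
        } else {
            // Internet destinations use the JVM-wide proxy settings.
            proxyHost = System.getProperty("http.proxyHost");
            proxyPort = Integer.parseInt(System.getProperty("http.proxyPort"));
        }
        return new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
    }
}
```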
Note
The given servlet can run with different destination scenarios; the user specifies the destination
name as a request parameter in the calling URL. In this case, call the servlet as
<applicationURL>/?destname=outbound-internet-destination. Nevertheless, your servlet can
still run even without specifying the destination name for this outbound scenario.
10. Save the Java editor and make sure the project compiles without errors.
Note
We recommend, but do not require, that you create a destination before deploying the application.
-Dhttp.proxyHost=<your_proxy_host> -Dhttp.proxyPort=<your_proxy_port> -Dhttps.proxyHost=<your_proxy_host> -Dhttps.proxyPort=<your_proxy_port>
○ Choose OK.
5. Go to the Connectivity tab page of your local server, create a destination with the name outbound-
internet-destination, and configure it so it can be consumed by the application at runtime. For more
information, see Configure Destinations from the Eclipse IDE [page 95].
For the sample destination to work properly, the following properties need to be configured:
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
6. From the ConnectivityServlet.java editor's context menu, choose Run As Run on Server .
7. Make sure that the Choose an existing server option is selected and choose Java Web Tomcat 7 Server.
8. Choose Finish.
The server is now started, displayed as Java Web Tomcat 7 Server [Started, Synchronized] in the
Servers view.
Result:
The internal Web browser opens with the expected output of the connectivity-enabled Web application.
Note
The application name should be unique enough to allow your deployed application to be easily identified in
SAP Cloud Platform cockpit.
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
ProxyType=Internet
10. From the ConnectivityServlet.java editor's context menu, choose Run As Run on Server .
11. Make sure that the Choose an existing server option is selected and choose <Server_host_name>
<Server_name> .
12. Choose Finish.
The internal Web browser opens with the URL pointing to SAP Cloud Platform and displaying the expected output
of the connectivity-enabled Web application.
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
Context
This step-by-step tutorial demonstrates how a sample Web application consumes a back-end system via HTTP(S)
by using the connectivity service. For simplicity, instead of using a real back-end system, we use a second sample
Web application containing BackendServlet. It mimics the back-end system and can be called via HTTP(S).
The servlet code, the web.xml content, and the destination files (backend-no-auth-destination and
backend-basic-auth-destination) used in this tutorial are mapped to the connectivity sample project
located in <SDK_location>/samples/connectivity. You can directly import this sample in your Eclipse IDE.
For more information, see Import Samples as Eclipse Projects [page 1145].
In the on-demand to on-premise connectivity end-to-end scenario, different user roles are involved. The particular
steps for the relevant roles are described below:
Prerequisites
● You have downloaded and configured the Cloud Connector. For more information, see Cloud Connector [page
253].
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 1126].
Note
You need to install SDK for Java Web or SDK for Java EE 6 Web Profile.
This tutorial uses a Web application that responds to a request with a ping as a sample back-end system. The
connectivity service supports HTTP and HTTPS as protocols and provides an easy way to consume REST-based
Web services.
To set up the sample application as a back-end system, see Set Up an Application as a Sample Back-End System
[page 191].
Tip
Instead of the sample back-end system provided in this tutorial, you can use other systems to be consumed
through REST-based Web services.
Once the back-end application is running on your local Tomcat, you need to configure the ping service, provided by
the application, in your installed Cloud Connector. This is required since the Cloud Connector only allows access to
white-listed back-end services. To do this, follow the steps below:
1. Open the Cloud Connector and, in the navigation on the left, choose Access Control.
Note
This step shows the procedure and screenshot for Cloud Connector versions prior to 2.9. For Cloud
Connector versions as of 2.9.0, follow the steps in Configure Access Control (HTTP) [page 151] and enter
the values shown in the screenshot above.
Note
For Cloud Connector versions as of 2.9.0, follow the steps in Configure Access Control (HTTP) [page 151],
section Limiting the Accessible Services for HTTP(S), and enter the values as shown in the next step.
<resource-ref>
<res-ref-name>outbound-internet-destination</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>connectivity/DestinationFactory</res-ref-name>
<res-type>com.sap.core.connectivity.api.DestinationFactory</res-type>
</resource-ref>
Note
○ The destinations backend-no-auth-destination and backend-basic-auth-destination will be
looked up via a DestinationFactory JNDI lookup. For more information, see DestinationFactory API [page
128].
○ If you use destinations as resource references, the value of the <res-ref-name> element in the
web.xml file should match the name of the destination that you want to retrieve at runtime. In this
case, the destination name is outbound-internet-destination.
8. Replace the entire servlet class to make use of the destination API. The destination API is visible by default for
cloud applications and does not need to be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.http.HttpDestination;
import com.sap.core.connectivity.api.DestinationFactory;
/** {@inheritDoc} */
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
HttpClient httpClient = null;
String destinationName = request.getParameter("destname");
try {
// Get HTTP destination
Context ctx = new InitialContext();
HttpDestination destination = null;
if (destinationName != null) {
DestinationFactory destinationFactory = (DestinationFactory) ctx.lookup(DestinationFactory.JNDI_NAME);
destination = (HttpDestination) destinationFactory.getDestination(destinationName);
} else {
// The default request to the Servlet will use outbound-internet-destination
destinationName = "outbound-internet-destination";
destination = (HttpDestination) ctx.lookup("java:comp/env/" + destinationName);
}
Note
The given servlet can be run with different destination scenarios; the user specifies the
destination name as a request parameter in the calling URL. In the case of an on-premise connection to a
back-end system, the destination name should be either backend-basic-auth-destination or
backend-no-auth-destination, depending on the chosen authentication scenario. For example:
<application_URL>/?destname=backend-no-auth-destination
9. Save the Java editor and make sure the project compiles without errors.
Caution
● If you use the SDK for Java Web, we recommend, but do not require, that you create a destination before starting the
application.
● If you use SDK for Java EE 6 Web Profile, you must create a destination before starting the application.
1. To deploy your Web application locally or on the cloud, follow the steps described in the respective pages:
Deploy Locally from Eclipse IDE [page 1189]
Deploy on the Cloud from Eclipse IDE [page 1191]
2. Once the application is deployed on a local server or on the cloud, it issues an
exception stating that the destination backend-basic-auth-destination or backend-no-auth-destination has not been specified yet:
HTTP Status 500 - Connectivity operation failed with reason: Destination with
name backend-no-auth-destination cannot be found. Make sure it is created and
configured.. See logs for details.
2014 01 10 08:11:01#+00#ERROR#com.sap.cloud.sample.connectivity.ConnectivityServlet##anonymous#http-bio-8041-exec-1##conngold#testsample#web#null#null#Connectivity operation failed
com.sap.core.connectivity.api.DestinationNotFoundException: Destination with name backend-no-auth-destination cannot be found. Make sure it is created and configured.
at com.sap.core.connectivity.destinations.DestinationFactory.getDestination(DestinationFactory.java:20)
at com.sap.core.connectivity.cloud.destinations.CloudDestinationFactory.getDestination(CloudDestinationFactory.java:28)
at com.sap.cloud.sample.connectivity.ConnectivityServlet.doGet(ConnectivityServlet.java:50)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at com.sap.core.communication.server.CertValidatorFilter.doFilter(CertValidatorFilter.java:321)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
...
To configure the destination in SAP Cloud Platform, you need to use the virtual host name
(virtualpingbackend) and port (1234) specified in one of the previous steps on the Cloud Connector's Access
Control tab page.
Note
On-premise destinations support HTTP connections only. Thus, when defining a destination in the SAP Cloud
Platform cockpit, always enter the URL as http://virtual.host:virtual.port, even if the backend requires an HTTPS
connection.
The connection from an SAP Cloud Platform application to the Cloud Connector (through the tunnel) is
encrypted with TLS anyway, so there is no need to "double-encrypt" the data. For the leg from the Cloud
Connector to the backend, you can choose between HTTP and HTTPS. If you choose HTTPS, the Cloud Connector
establishes an SSL/TLS connection to the backend.
1. In the Eclipse IDE, open the Servers view and double-click on <application>.<subaccount> to open the
SAP Cloud Platform editor.
2. Open the Connectivity tab page.
3. In the All Destinations section, choose to create a new destination with the name backend-no-auth-
destination or backend-basic-auth-destination.
○ To connect with no authentication, use the following configuration:
Name=backend-no-auth-destination
Type=HTTP
URL=http://virtualpingbackend:1234/BackendAppHttpNoAuth/noauth
Authentication=NoAuthentication
ProxyType=OnPremise
CloudConnectorVersion=2
Name=backend-basic-auth-destination
Type=HTTP
URL=http://virtualpingbackend:1234/BackendAppHttpBasicAuth/basic
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
Context
This step-by-step tutorial demonstrates how a sample Web application consumes a back-end system via HTTP(S)
by using the connectivity service. For simplicity, instead of using a real back-end system, we use a second sample
Web application containing BackendServlet. It mimics the back-end system and can be called via HTTP(S).
The servlet code, the web.xml content, and the destination file (backend-no-auth-destination) used in this
tutorial are mapped to the connectivity sample project located in <SDK_location>/samples/connectivity.
You can directly import this sample in your Eclipse IDE. For more information, see Import Samples as Eclipse
Projects [page 1145].
In the on-demand to on-premise connectivity end-to-end scenario, different user roles are involved. The particular
steps for the relevant roles are described below:
Prerequisites
● You have downloaded and configured the Cloud Connector. For more information, see Cloud Connector [page
253].
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 1126].
Note
You need to install SDK for Java Web Tomcat 7.
This tutorial uses a Web application that responds to a request with a ping as a sample back-end system. The
connectivity service supports HTTP and HTTPS as protocols and provides an easy way to consume REST-based
Web services.
To set up the sample application as a back-end system, see Set Up an Application as a Sample Back-End System
[page 191].
Tip
Instead of the sample back-end system provided in this tutorial, you can use other systems to be consumed
through REST-based Web services.
Once the back-end application is running on your local Tomcat, you need to configure the ping service, provided by
the application, in your installed Cloud Connector. This is required since the Cloud Connector only allows access to
white-listed back-end services. To do this, follow the steps below:
1. Open the Cloud Connector and, in the navigation on the left, choose Access Control.
2. Under Mapping Virtual To Internal System, choose the Add button and define an entry as shown on the
following screenshot. The Internal Host must be the physical host name of the machine on which the Tomcat
of the back-end application is running.
Note
For Cloud Connector versions as of 2.9.0, follow the steps in Configure Access Control (HTTP) [page 151],
section Limiting the Accessible Services for HTTP(S), and enter the values as shown in the next step.
5. Choose Finish so that the ConnectivityServlet.java servlet is created and opened in the Java editor.
6. Go to ConnectivityHelloWorld WebContent WEB-INF and open the web.xml file.
7. To consume connectivity configuration using JNDI, you need to define the ConnectivityConfiguration
API as a resource in the web.xml file. Below is an example of a ConnectivityConfiguration resource,
named connectivityConfiguration.
<resource-ref>
<res-ref-name>connectivityConfiguration</res-ref-name>
<res-type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</res-type>
</resource-ref>
Note
The destination backend-no-auth-destination will be looked up via a ConnectivityConfiguration JNDI
lookup. For more information, see ConnectivityConfiguration API [page 80].
8. Replace the entire servlet class to make use of the configuration API. The configuration API is visible by default
for cloud applications and does not need to be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import javax.annotation.Resource;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/** {@inheritDoc} */
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
HttpURLConnection urlConnection = null;
String destinationName = request.getParameter("destname");
try {
// Look up the connectivity configuration API
Context ctx = new InitialContext();
ConnectivityConfiguration configuration = (ConnectivityConfiguration) ctx.lookup("java:comp/env/connectivityConfiguration");
Note
The given servlet can be run with different destination scenarios; the user specifies the
destination name as a request parameter in the calling URL. In the case of an on-premise connection to a
back-end system, the destination name should be backend-no-auth-destination. That is, the servlet is
accessed at: <application_URL>/?destname=backend-no-auth-destination
9. Save the Java editor and make sure the project compiles without errors.
Note
We recommend, but do not require, that you create the destination before starting the application.
1. To deploy your Web application locally or on the cloud, follow the steps described in the respective pages:
Deploy Locally from Eclipse IDE [page 1189]
Deploy on the Cloud from Eclipse IDE [page 1191]
2. Once the application is successfully deployed locally or on the cloud, the application issues an exception
saying that the backend-no-auth-destination destination has not been specified yet:
To configure the destination in SAP Cloud Platform, you need to use the virtual host name
(virtualpingbackend) and port (1234) specified in one of the previous steps on the Cloud Connector's Access
Control tab page.
Note
● On-premise destinations support HTTP connections only.
● The connection from an application to the Cloud Connector (through the tunnel) is encrypted at the TLS level.
Also, you can choose between using HTTP or HTTPS to hop from the Cloud Connector to the back end.
1. In the Eclipse IDE, open the Servers view and double-click <application>.<subaccount> to open the cloud
server editor.
2. Open the Connectivity tab page.
3. In the All Destinations section, choose to create a new destination with the name backend-no-auth-destination.
4. Use the following configuration:
Name=backend-no-auth-destination
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
Related Information
JavaDoc ConnectivityConfiguration
JavaDoc DestinationConfiguration
JavaDoc AuthenticationHeaderProvider
Overview
This section describes how you set up a simple ping Web application that is used as a back-end system.
Prerequisites
You have downloaded SAP Cloud Platform SDK on your local file system.
Procedure
<role rolename="pingrole"/>
<user name="pinguser" password="pingpassword" roles="pingrole" />
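The two entries above belong in the user database of the local server hosting the ping application, typically conf/tomcat-users.xml on a Tomcat-based local runtime (the file location is an assumption for a standard Tomcat setup; the role, user, and password are the sample values used in this tutorial). A minimal file could look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<tomcat-users>
  <role rolename="pingrole"/>
  <user name="pinguser" password="pingpassword" roles="pingrole"/>
</tomcat-users>
```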
Note
If you use an SDK version equal to or lower than 1.44.0.1 (Java Web) or 2.24.13 (Java EE 6 Web Profile),
you can find the WAR files in the directory <SDK_location>/tools/samples/connectivity/onpremise,
under the names PingAppHttpNoAuth.war and PingAppHttpBasicAuth.war.
Also, you should access the applications at the relevant URLs:
● http://localhost:8080/PingAppHttpNoAuth/pingnoauth
● http://localhost:8080/PingAppHttpBasicAuth/pingbasic
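If you only need a stand-in back end for local experiments, a comparable ping endpoint can be sketched with the JDK's built-in HTTP server. This is not one of the sample WARs shipped with the SDK: the class name and the "pong" reply are illustrative; only the /pingnoauth path mirrors the sample URL above.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal stand-alone ping back end (sketch, not the SDK sample WAR).
public class PingBackend {
    // Starts an HTTP server answering GET /pingnoauth; port 0 picks a free port.
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/pingnoauth", (HttpExchange exchange) -> {
            byte[] body = "pong".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
            exchange.close();
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8080);
        System.out.println("Ping backend listening on http://localhost:8080/pingnoauth");
    }
}
```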
Related Information
Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 171]
Installation Prerequisites
● To provide a connectivity tunnel via RFC destinations, your Cloud Connector version needs to be at least 1.3.0.
● To develop a JCo application, your SDK version needs to be at least 1.29.18 (SDK for Java Web) or 2.11.6 (SDK for
Java EE 6 Web Profile). Also, your SDK local runtime needs to be hosted by a 64-bit JVM. The SDKs for the
Tomcat 7, Tomcat 8, and TomEE 7 runtimes have supported JCo from their first versions.
On Windows platforms, you need to install Microsoft Visual C++ 2010 Redistributable Package (x64). To
download this package, go to http://www.microsoft.com/en-us/download/details.aspx?id=14632 .
To learn in detail about the SAP Java Connector API, see Connectors .
Note
This documentation contains sections not applicable to SAP Cloud Platform. In particular:
SAP JCo Architecture: CPIC is only used in the last mile from your Cloud Connector to the back end. From SAP
Cloud Platform to the Cloud Connector, TLS-protected communication is used.
SAP JCo Installation: SAP Cloud Platform runtimes already include all required artifacts.
SAP JCo Customizing and Integration: On SAP Cloud Platform, the integration is already done by the runtime.
You can concentrate on your business application logic.
Server Programming: The programming model of JCo on SAP Cloud Platform does not include server-side RFC
communication.
IDoc Support for External Java Applications: Currently, there is no IDocLibrary for JCo available on SAP Cloud
Platform.
You can call a service from a fenced customer network using a simple application that consumes an on-premise
remote-enabled function module.
The invocation of function modules via RFC is offered via the JCo API like the one available in SAP NetWeaver
Application Server Java since version 7.10, and in JCo standalone 3.0. If you are an experienced JCo developer, you
can easily develop a Web application using JCo: you simply consume the APIs as you do in other Java
environments. Restrictions that apply in the cloud environment are mentioned in the Restrictions section below.
Restrictions
Related Information
RFC destinations provide the configuration needed for communicating with an on-premise ABAP system via RFC.
The RFC destination data is used by the JCo version that is offered within SAP Cloud Platform to establish and
manage the connection.
The RFC destination specific configuration in SAP Cloud Platform consists of properties arranged in groups, as
described below. The supported set of properties is a subset of the standard JCo properties in arbitrary
environments. The configuration data is divided into the following groups:
The minimal configuration contains user logon properties and information identifying the target host. This means
you must provide at least a set of properties containing this information.
Example
Name=SalesSystem
Type=RFC
jco.client.client=000
jco.client.lang=EN
jco.client.user=consultant
jco.client.passwd=<password>
jco.client.ashost=sales-system.cloud
jco.client.sysnr=42
jco.destination.pool_capacity=5
jco.destination.peak_limit=10
This group of JCo properties covers different types of user credentials, as well as the ABAP system client and the
logon language. The currently supported logon mechanism uses user and password as credentials.
Property Description
jco.client.client Represents the client to be used in the ABAP system. The valid format is a three-digit number.
jco.client.user Represents the user to be used for logging on to the ABAP system.
Note
When working with the Destinations editor in the cockpit,
enter the value in the User field. Do not enter it as an additional property.
jco.client.passwd Represents the password of the user that shall be used. Note
that passwords in systems of SAP NetWeaver releases lower
than 7.0 are case-insensitive and can be only eight characters
long. For releases 7.0 and higher, passwords are case-sensitive
with a maximum length of 40.
Note
When working with the Destinations editor in the cockpit,
enter this password in the Password field. Do not enter it as an additional property.
Note
If you use the value PrincipalPropagation, it is advisable to configure the
jco.destination.repository.user and jco.destination.repository.passwd properties,
since special permissions are needed (for metadata lookup in the back end) that not all business
application users might have.
This group of JCo properties covers different settings for the behavior of the destination's connection pool. All
properties are optional.
Property Description
Note
Turning on this check has a performance impact for stateless communication. This is due to an additional
low-level ping to the server, which takes a certain amount of time for noncorrupted connections,
depending on latency.
Pooling Details
● Each destination is associated with a connection factory and, if the pooling feature is used, with a connection
pool.
● Initially, the destination's connection pool is empty, and the JCo runtime does not preallocate any connection.
The first connection is created when the first function module invocation is performed. The peak_limit
property describes how many connections can be created simultaneously if applications allocate connections in parallel.
This JCo properties group allows you to influence the behavior of the repository that dynamically retrieves
function module metadata. All properties below are optional. Alternatively, applications can create their
metadata in code, using the metadata factory methods of the JCo class, to avoid additional round-trips to the
on-premise system.
Property Description
Note
When working with the Destinations editor in the cockpit,
enter the value in the <Repository User> field. Do not enter it as an additional property.
jco.destination.repository.passwd Represents the password for a repository user. If you use such
a user, this property is mandatory.
Note
When working with the Destinations editor in the cockpit,
enter this password in the <Repository Password> field. Do not enter it as an additional property.
Overview
Depending on the configuration used, different properties are considered mandatory or optional.
Property Description
jco.client.sysnr Represents the so-called "system number" and has two digits.
It identifies the logical port on which the application server is
listening for incoming requests. In the case of configurations in
SAP Cloud Platform, this property needs to match a virtual
port entry in the Cloud Connector Access Control configuration.
Note
The virtual port in the above access control entry needs to
be named sapgw<##>, where <##> is the value of sysnr.
Note
The virtual port in the above access control entry needs to
be named sapms<###>, where <###> is the value of
r3name.
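For comparison, a load-balancing RFC destination addresses the message server (jco.client.mshost) and system ID (jco.client.r3name) instead of a concrete application server. The following sketch uses illustrative values only (the system ID SLS and logon group PUBLIC are assumptions, not taken from this guide):

Name=SalesSystemLB
Type=RFC
jco.client.mshost=sales-system.cloud
jco.client.r3name=SLS
jco.client.group=PUBLIC
jco.client.client=000
jco.client.user=consultant
jco.client.passwd=<password>

With this configuration, the matching Cloud Connector access control entry would expose a virtual port named sapmsSLS, following the sapms<###> convention described in the note above.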
This group of JCo properties allows you to influence the connection to an ABAP system. All properties are optional.
Property Description
jco.client.codepage Declares the 4-digit SAP codepage that shall be used when initiating
the connection to the backend. The default value is
1100 (comparable to iso-8859-1). It is important to provide
this property if the password that is used contains characters
that cannot be represented in 1100.
Note
When working with the Destinations editor in the cockpit,
enter the Cloud Connector location ID in the <Location
ID> field. Do not enter it as additional property.
Overview
This section helps you to configure your Cloud Connector when you are working via the RFC protocol.
Related Information
To set up mutual authentication between the Cloud Connector and an ABAP back-end system (connected via RFC),
you can configure SNC for the Cloud Connector. It will then use the associated PSE for all RFC SNC requests. This
means that the SNC identity, represented by this PSE, needs to:
● Be trusted by all back-end systems to which the Cloud Connector is supposed to connect;
● Play the role of a trusted external system, which is done by adding the SNC name of the Cloud Connector to the
SNCSYSACL table. You can find more details in the SNC configuration documentation for the release of your ABAP system.
Prerequisites
You have configured your ABAP system(s) for SNC. For detailed information on configuring SNC for an ABAP
system, see also Configuring SNC on AS ABAP. In order to establish trust for Principal Propagation, follow the
steps described in Configure Principal Propagation to an ABAP System for RFC [page 310].
Configuration Steps
○ Library Name: Provides the location of the SNC library you are using for the Cloud Connector.
Note
Bear in mind that you must use one and the same security product on both sides of the
communication.
○ My Name: The SNC name that identifies the Cloud Connector. It represents a valid scheme for the SNC
implementation that is used.
Note
When using CommonCryptoLibrary as SNC implementation, note 1525059 will help you to configure the
PSE to be associated with the user running the Cloud Connector process.
Related Information
To allow your on-demand applications to access a certain back-end system on the intranet, you need to insert an
extra line within the Cloud Connector Access Control management.
1. Choose Cloud To On-Premise from your Subaccount menu and go to tab Access Control.
2. Choose Add.
3. Back-end Type: Select the description that best matches the addressed back-end system. For RFC, only
ABAP System and SAP Gateway are suitable values, which means usage of RFC is free of charge.
4. Choose Next.
5. Protocol: You need to choose whether the Cloud Connector should use RFC or RFC with SNC for connecting to
the back-end system. This is completely independent of the settings on the cloud side. This ensures that the
entire connection from the on-demand application to the actual back-end system (provided through the SSL
tunnel) is secured, partly with SSL and partly with SNC. For more information, see Initial
Configuration (RFC) [page 200].
6. Choose Next.
7. Choose whether you want to configure a load balancing logon or whether to connect to a concrete application
server.
8. Specify the parameters of the back-end system. It needs to be an existing network address that can be
resolved on the intranet and has network visibility for the Cloud Connector. If this is only possible using a valid
SAProuter, specify the router in the respective field. The Cloud Connector will try to establish a connection to
this system, so the address has to be real.
○ When using a load-balancing configuration, the Message Server specifies the message server of the ABAP
system. The system ID is a three-character identifier that is also found in the SAP Logon configuration.
Alternatively, it's possible to directly specify the message server port in the System ID field.
9. Optional: You can virtualize the system information if you want to hide your internal host names from the
cloud. The virtual information can be a fictional name which does not need to exist. The fields are
pre-populated with the values of the configuration provided in Message Server and System ID, or Application
Server and Instance Number.
○ Virtual Message Server - specifies the host name exactly as specified as the jco.client.mshost
property in the RFC destination configuration in the cloud. The Virtual System ID allows you to distinguish
between different entry points of your back-end system that have different sets of access control settings.
The value needs to be the same as for the jco.client.r3name property in the RFC destination
configuration in the cloud.
Note
If you use an RFC connection, you cannot choose between different principal types. Only the X.509
certificate is supported. You need an SNC-enabled back-end connection to use it. For RFC, the two X.509
certificate variants X.509 certificate (general usage) and X.509 certificate (strict usage) do not differ in
behavior.
11. SNC Partner Name: This step will only come up if you have chosen RFC SNC. The SNC partner name needs to
contain the correct SNC identification of the target system.
12. You can enter an optional description at this stage. The respective description will be shown as a rich tooltip
when the mouse hovers over the entries of the virtual host column (table Mapping Virtual to Internal System).
14. Optional: You can later edit a system mapping (choose Edit) to make the Cloud Connector route the requests
for sales-system.cloud:sapgw42 to a different back-end system. This can be useful if the system is
currently down and there is a back-up system that can serve these requests in the meantime. However, you
cannot edit the virtual name of this system mapping. If you want to use a different fictional host name in your
on-demand application, you need to delete the mapping and create a new one. Here, you can also change the
Principal Type to None in case you don't want to allow principal propagation to a certain system.
Note
This function applies for RFC/RFC SNC only.
In addition to allowing access to a particular host and port, you also need to specify which function modules
(Resources) are allowed to be invoked on that host. The Cloud Connector uses very strict white lists for its access control.
1. To define the permitted function modules (Resources) for a particular back-end system, choose the row
corresponding to that back-end system and press Add in section Resources Accessible On... below. A dialog
appears, prompting you to enter the specific function module name whose invocation you want to allow.
2. The Cloud Connector checks that the function module name of an incoming request is exactly as specified in
the configuration. If it is not, the request is denied.
3. If you select the Prefix option, the Cloud Connector allows all incoming requests, for which the function
module name begins with the specified string.
4. The Enabled checkbox allows you to specify whether that resource should be initially enabled or disabled.
Related Information
Tutorial: Invoke ABAP Function Modules in On-Premise ABAP Systems [page 208]
Context
This step-by-step tutorial shows how a sample Web application invokes a function module in an on-premise ABAP
system via RFC by using the connectivity service.
Different user roles are involved in the on-demand to on-premise connectivity end-to-end scenario. The particular
steps for the relevant roles are described below:
IT Administrator
This role sets up and configures the Cloud Connector. Scenario steps:
Application Developer
1. Installs the Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
2. Develops a Java EE application using the destination API.
3. Configures connectivity destinations as resources in the web.xml file.
4. Configures connectivity destinations via the SAP Cloud Platform server adapter in Eclipse IDE.
5. Deploys the Java EE application locally and on the cloud.
Subaccount Operator
This role deploys Web applications, configures their destinations, and conducts tests. Scenario steps:
● You have downloaded and set up your Eclipse IDE and SAP Cloud Platform Tools for Java.
● You have downloaded the SDK. Its version needs to be at least 1.29.18 (SDK for Java Web), 2.11.6 (SDK for
Java EE 6 Web Profile), or 2.9.1 (SDK for Java Web Tomcat 7), respectively.
● Your local runtime needs to be hosted by a 64-bit JVM. On Windows platforms, you need to install Microsoft
Visual C++ 2010 Redistributable Package (x64).
● You have downloaded and configured your Cloud Connector. Its version needs to be at least 1.3.0.
To read the installation documentation, go to Setting Up the Development Environment [page 1126] and Installation
[page 257].
Procedure
2. From the Eclipse main menu, choose New Dynamic Web Project .
3. In the Project name field, enter jco_demo .
4. In the Target Runtime pane, select the runtime you want to use to deploy the HelloWorld application. In this
tutorial, we choose Java Web.
5. In the Configuration pane, leave the default configuration.
6. Choose Finish to complete the creation of your project.
Procedure
package com.sap.demo.jco;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.conn.jco.AbapException;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.JCoRepository;
/**
 * Sample application that uses the connectivity service. In particular,
* it makes use of the capability to invoke a function module in an ABAP system
* via RFC
*
* Note: The JCo APIs are available under <code>com.sap.conn.jco</code>.
*/
public class ConnectivityRFCExample extends HttpServlet
{
private static final long serialVersionUID = 1L;
public ConnectivityRFCExample()
{
}
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
    {
        PrintWriter responseWriter = response.getWriter();
        try
        {
            // access the RFC destination "JCoDemoSystem"
            JCoDestination destination =
                JCoDestinationManager.getDestination("JCoDemoSystem");
            // make an invocation of STFC_CONNECTION in the back end
            JCoRepository repo = destination.getRepository();
            JCoFunction stfcConnection = repo.getFunction("STFC_CONNECTION");
            JCoParameterList imports = stfcConnection.getImportParameterList();
            imports.setValue("REQUTEXT", "SAP HANA Cloud connectivity runs with JCo");
            stfcConnection.execute(destination);
            JCoParameterList exports = stfcConnection.getExportParameterList();
            String echotext = exports.getString("ECHOTEXT");
            String resptext = exports.getString("RESPTEXT");
            response.addHeader("Content-type", "text/html");
            responseWriter.println("<html><body>");
            responseWriter.println("<h1>Executed STFC_CONNECTION in system JCoDemoSystem</h1>");
            responseWriter.println("<p>Export parameter ECHOTEXT of STFC_CONNECTION:<br>");
            responseWriter.println(echotext);
            responseWriter.println("<p>Export parameter RESPTEXT of STFC_CONNECTION:<br>");
            responseWriter.println(resptext);
            responseWriter.println("</body></html>");
        }
        catch (AbapException ae)
        {
            // minimal handler; the original listing is truncated at this point
            responseWriter.println("AbapException occurred: " + ae.getMessage());
        }
        catch (JCoException e)
        {
            responseWriter.println("JCoException occurred: " + e.getMessage());
        }
    }
5. Save the Java editor and make sure that the project compiles without errors.
Procedure
1. To deploy your Web application locally or on the cloud, see the following two procedures, respectively:
To configure the destination on SAP Cloud Platform, you need to use a virtual application server host name
(abapserver.hana.cloud) and a virtual system number (42) that you will expose later in the Cloud Connector.
Alternatively, you could use a load balancing configuration with a message server host and a system ID.
Procedure
Name=JCoDemoSystem
Type=RFC
jco.client.ashost=abapserver.hana.cloud
jco.client.cloud_connector_version=2
jco.client.sysnr=42
jco.client.user=DEMOUSER
jco.client.passwd=<password>
jco.client.client=000
jco.client.lang=EN
jco.destination.pool_capacity=5
2. Upload this file to your Web application in SAP Cloud Platform. For more information, see Configure
Destinations from the Console Client [page 87].
3. Call the URL that references the cloud application again in the Web browser. The application should now return
a different exception:
4. This means the Cloud Connector denied opening a connection to this system. As a next step, you need to
configure the system in your installed Cloud Connector.
This is required since the Cloud Connector only allows access to white-listed back-end systems. To do this, follow
the steps below:
Procedure
1. Optional: In the Cloud Connector administration UI, you can check under Monitor Audit whether access
has been denied:
2. In the Cloud Connector administration UI, choose Cloud To On-Premise from your Subaccount menu, tab
Access Control.
3. In section Mapping Virtual To Internal System choose Add to define a new system.
1. For Back-end Type, select ABAP System and choose Next.
2. For Protocol, select RFC and choose Next.
3. Choose option Without load balancing.
4. Enter application server and instance number. The Application Server entry must be the physical host
name of the machine on which the ABAP application server is running. Choose Next.
Example:
4. Call the URL that references the cloud application again in the Web browser. The application should now throw
a different exception:
5. This means the Cloud Connector denied invoking STFC_CONNECTION in this system. As a final step, you need
to provide access to this function module in your installed Cloud Connector.
This is required since the Cloud Connector only allows access to white-listed resources (which are defined on the
basis of function module names with RFC). To do this, follow the steps below:
Procedure
1. Optional: In the Cloud Connector administration UI, you can check under Monitor Audit whether access
has been denied:
2. In the Cloud Connector administration UI, go to the Access Control tab page.
5. Call the URL that references the cloud application again in the Web browser. The application should now return
with a message showing the export parameters of the function module after a successful invocation.
Related Information
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
For your cloud applications, you can use LDAP-based user management if you are operating an LDAP server within
your network. To enable LDAP user management, learn more about the required configuration steps in this
section.
Consume the LDAP tunnel in a Java application: LDAP Destinations [page 218]
To do this, you need to install the Cloud Connector in your on-premise network and configure it for LDAP:
LDAP destinations carry connectivity details for accessing systems over Lightweight Directory Access Protocol
(LDAP) as specified in RFC 4511 . In combination with the Cloud Connector they enable SAP Cloud Platform
applications to access LDAP servers in an on-premise corporate network. LDAP destinations are intended to be
used with the Java JNDI/LDAP Service Provider.
For more information on how to use the Java JNDI/LDAP Service Provider see: http://docs.oracle.com/javase/7/
docs/technotes/guides/jndi/jndi-ldap.html .
Proxy Type (ldap.proxyType): Possible values are Internet or OnPremise. If the proxy type is OnPremise,
the resulting property is java.naming.ldap.factory.socket with value
com.sap.core.connectivity.api.ldap.LdapOnPremiseSocketFactory.
Example: ldap://ldapserver.examplecompany.com:389
Example: serviceuser@examplecompany.com
As additional properties in an LDAP destination you can specify the properties defined by the Java JNDI/LDAP
Service Provider. For more details regarding these properties, see Environment Properties at http://
docs.oracle.com/javase/7/docs/technotes/guides/jndi/jndi-ldap.html .
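As a sketch of how such environment properties come together, the following example assembles a plain JNDI environment of the kind an application might derive from an LDAP destination before creating an InitialDirContext. The helper name buildEnvironment and the use of simple authentication are assumptions for illustration, not part of any SAP API:

```java
import java.util.Hashtable;
import javax.naming.Context;

// Sketch: builds a JNDI environment for an LDAP directory context.
public class LdapEnvironment {
    static Hashtable<String, String> buildEnvironment(String url, String user, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        // standard JNDI/LDAP service provider and connection settings
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, user);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    public static void main(String[] args) {
        // values taken from the examples above; the password is a placeholder
        System.out.println(buildEnvironment(
            "ldap://ldapserver.examplecompany.com:389",
            "serviceuser@examplecompany.com", "secret"));
    }
}
```

The resulting Hashtable would typically be passed to new InitialDirContext(env) to open the connection.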
Sample Code
package com.sap.cloud.example.ldap;
import java.io.IOException;
import java.util.Properties;
import javax.annotation.Resource;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
import com.sap.core.connectivity.api.configuration.DestinationConfiguration;
/**
 * Servlet that obtains the LDAP destination, connects to the specified LDAP server, and searches for users.
*/
@WebServlet("/*")
public class LdapExample extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final String DESTINATION_NAME = "example-ldap-destination";
Configure the Cloud Connector to support LDAP in different scenarios (cloud applications using LDAP or Cloud
Connector authentication).
Prerequisites
You have installed the Cloud Connector and done the basic configuration:
When using LDAP-based user management, you have to configure the Cloud Connector to support this feature.
Depending on the scenario, you need to perform the following steps:
Scenario 1: Cloud applications using LDAP for authentication. Configure the destination of the LDAP server in the
Cloud Connector: Configure Access Control (LDAP) [page 221].
Scenario 2: Internal Cloud Connector user management. Activate LDAP user management in the Cloud
Connector: Use LDAP for Authentication [page 358].
Add a specified system mapping to the Cloud Connector if you want to use an on-premise LDAP server or user
authentication in your cloud application.
To allow your cloud applications to access an on-premise LDAP server, you need to insert an extra line into the
Cloud Connector access control management.
4. Protocol: Select LDAP or LDAPS for the connection to the back-end system. When you are done, choose Next.
5. Internal Host and Internal Port: specify the host and port under which the target system can be reached within
the intranet. It needs to be an existing network address that can be resolved on the intranet and has network
visibility for the Cloud Connector. The Cloud Connector will try to forward the request to the network address
specified by the internal host and port, so this address needs to be real.
7. You can enter an optional description at this stage. The respective description will be shown as a tooltip when
you press the button Show Details in column Actions of the Mapping Virtual To Internal System overview.
8. The summary shows information about the system to be stored. When saving the host mapping, you can
trigger a ping from the Cloud Connector to the internal host, using the Check Internal Host check box. This
allows you to make sure the Cloud Connector can indeed access the internal system. Also, you can catch basic
things, such as spelling mistakes or firewall problems between the Cloud Connector and the internal host.
If the ping to the internal host is successful, the Cloud Connector saves the mapping without any remark. If it
fails, a warning is displayed in column Check Result, that the host is not reachable. Details for the reason are
available in the log files. You can execute such a check at any time later for all selected systems in the Mapping
Virtual To Internal System overview by pressing Check Availability of Internal Host in column Actions.
9. Optional: You can later edit the system mapping (by choosing Edit) to make the Cloud Connector route the
requests to a different LDAP server. This can be useful if the system is currently down and there is a back-up
LDAP server that can serve these requests in the meantime. However, you cannot edit the virtual name of this
system mapping. If you want to use a different fictional host name in your cloud application, you have to delete
the mapping and create a new one.
How to access on-premise systems via TCP-based protocols using a SOCKS5 proxy.
SAP Cloud Platform Connectivity offers a SOCKS5 proxy you can use to access on-premise systems via TCP-
based protocols.
Note
You can use the provided SOCKS5 proxy server only to connect to on-premise systems. You cannot use it as
general-purpose SOCKS5 proxy.
How to use it
The proxy server is started by default on all application machines. So you can access it on localhost and port
20004.
The use of SOCKS5 basic proxy authentication is mandatory, as it is required to provide routing information to the
proxy. This information is used to find the correct Cloud Connector to which the data will be routed. The corresponding
authentication scheme is 1.<subaccount>.<locationId>, where subaccount is a mandatory parameter,
whereas locationId is optional.
Note
Both values should be base64-encoded.
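To make the encoding concrete, the following sketch builds the authentication user name for the 1.&lt;subaccount&gt;.&lt;locationId&gt; scheme with the JDK's Base64 encoder. The method name buildSocksUsername is illustrative, not part of the platform API:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: assembles the SOCKS5 authentication user name
// "1.<base64(subaccount)>.<base64(locationId)>".
public class SocksAuthScheme {
    static String buildSocksUsername(String subaccount, String locationId) {
        Base64.Encoder enc = Base64.getEncoder();
        String encodedSubaccount =
            enc.encodeToString(subaccount.getBytes(StandardCharsets.UTF_8));
        // locationId is optional; an empty string stands for "not set"
        String encodedLocationId =
            enc.encodeToString(locationId.getBytes(StandardCharsets.UTF_8));
        return "1." + encodedSubaccount + "." + encodedLocationId;
    }

    public static void main(String[] args) {
        System.out.println(buildSocksUsername("mysubaccount", "loc1"));
    }
}
```

The result would then be passed as the user name in the Authenticator shown below, with an empty password.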
The following code snippet shows how to provide the proxy authentication values:
Sample Code
import java.net.Authenticator;
import org.apache.commons.codec.binary.Base64; // Or any other Base64 encoder

Authenticator.setDefault(new Authenticator() {
    @Override
    protected java.net.PasswordAuthentication getPasswordAuthentication() {
        return new java.net.PasswordAuthentication(
            "1." + encodedSubaccount + "." + encodedLocationId, new char[]{});
    }
});
In this code snippet you can see how to set up the SOCKS proxy and how to use it to create an HTTP connection:
Sample Code
import java.net.SocketAddress;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.net.HttpURLConnection;

SocketAddress addr = new InetSocketAddress("localhost", 20004);
Proxy proxy = new Proxy(Proxy.Type.SOCKS, addr);
// subaccount is the current subaccount; locationId is the Location ID of the
// Cloud Connector (or an empty string if no location ID is set)
setSOCKS5ProxyAuthentication(subaccount, locationId);
URL url = new URL("http://virtualhost:1234/");
HttpURLConnection connection = (HttpURLConnection) url.openConnection(proxy);
Restrictions
Interfaces
You can access a subaccount associated with the current execution thread using the TenantContext API.
● Interface TenantContext
● Interface TenantContext: getTenant()
● Interface Tenant: getAccount()
● Interface Account: getId()
Related Information
This section helps you to configure your Cloud Connector if working with the TCP protocol.
Related Information
To allow your Cloud applications to access a certain back-end system on the intranet via TCP, you need to insert an
extra line into the Cloud Connector access control management.
5. Internal Host and Internal Port: specify the host and port under which the target system can be reached within
the intranet. It needs to be an existing network address that can be resolved on the intranet and has network
visibility for the Cloud Connector. The Cloud Connector will try to forward the request to the network address
specified by the internal host and port. That is why this address needs to be real.
6. Enter a Virtual Host and Virtual Port. The virtual host can be a fake name and does not need to exist. The fields
are prepopulated with the values of the Internal Host and Internal Port.
7. You can enter an optional description at this stage. The respective description will be shown as a tooltip when
you press the button Show Details in column Actions of the Mapping Virtual To Internal System overview.
8. The summary shows information about the system to be stored. When saving the host mapping, you can
trigger a ping from the Cloud Connector to the internal host, using the Check Internal Host checkbox. This
allows you to make sure the Cloud Connector can indeed access the internal system. Also, you can catch basic
things, such as spelling mistakes or firewall problems between the Cloud Connector and the internal host.
If the ping to the internal host is successful, the Cloud Connector saves the mapping without any remark. If it fails, a warning that the host is not reachable is displayed in the Check Result column; details on the reason are available in the log files. You can repeat this check at any time for all selected systems in the Mapping Virtual To Internal System overview by pressing Check Availability of Internal Host in the Actions column.
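The forwarding behavior configured in steps 5 and 6 can be pictured as a simple virtual-to-internal lookup. A sketch with purely hypothetical host names:

```java
import java.util.Map;

public class AccessControlSketch {
    // Hypothetical mapping: requests addressed to the virtual host/port are
    // forwarded to the mapped internal host/port; anything unmapped is denied.
    static final Map<String, String> MAPPING =
            Map.of("virtualhost:1234", "intranet-host.example.corp:8080");

    static String forward(String virtualHostAndPort) {
        String internal = MAPPING.get(virtualHostAndPort);
        if (internal == null) {
            throw new IllegalArgumentException(
                    "Access denied: no mapping for " + virtualHostAndPort);
        }
        return internal;
    }
}
```

This is why the virtual host may be a fake name while the internal host must be a real, resolvable network address.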
The e-mail connectivity functionality allows you to send electronic mail messages from your Web applications
using e-mail providers that are accessible on the Internet, such as Google Mail (Gmail). It also allows you to
retrieve e-mails from the mailbox of your e-mail account.
Note
SAP does not act as an e-mail provider. To use this service, work with an external e-mail provider of your choice.
● Obtain a mail session resource using resource injection or, alternatively, using a JNDI lookup.
● Configure the mail session resource by specifying the protocol settings of your mail server as a mail
destination configuration. SMTP is supported for sending e-mail, and POP3 and IMAP for retrieving messages
from a mailbox account.
● In your Web application, use the JavaMail API (javax.mail) to create and send a MimeMessage object or
retrieve e-mails from a message store.
In your Web application, you use the JavaMail API (javax.mail) to create and send a MimeMessage object or
retrieve e-mails from a message store.
Mail Session
You can obtain a mail session resource using resource injection or a JNDI lookup. The properties of the mail
session are specified by a mail destination configuration. So that the resource is linked to this configuration, the
names of the destination configuration and mail session resource must be the same.
● Resource injection
You can directly inject the mail session resource using annotations as shown in the example below. You do not
need to declare the JNDI resource reference in the web.xml deployment descriptor.
@Resource(name = "mail/Session")
private javax.mail.Session mailSession;
● JNDI lookup
To obtain a resource of type javax.mail.Session, you declare a JNDI resource reference in the web.xml
deployment descriptor in the WebContent/WEB-INF directory as shown below. Note that the recommended
resource reference name is Session and the recommended subcontext is mail (mail/Session):
<resource-ref>
<res-ref-name>mail/Session</res-ref-name>
<res-type>javax.mail.Session</res-type>
</resource-ref>
An initial JNDI context can be obtained by creating a javax.naming.InitialContext object. You can then
consume the resource by looking up the naming environment through the InitialContext, as follows:
Note that according to the Java EE Specification, the prefix java:comp/env should be added to the JNDI
resource name (as specified in the web.xml) to form the lookup name.
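A sketch of the lookup, with the java:comp/env prefix prepended to the resource name declared in web.xml (the helper names are illustrative):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class MailSessionLookup {
    // Per the Java EE specification, the lookup name is the java:comp/env
    // prefix plus the res-ref-name declared in web.xml.
    static String lookupName(String resRefName) {
        return "java:comp/env/" + resRefName;
    }

    // Inside the container, the resource can then be obtained like this
    // (cast the result to javax.mail.Session in a real application):
    static Object lookupMailSession() throws NamingException {
        InitialContext ctx = new InitialContext();
        return ctx.lookup(lookupName("mail/Session"));
    }
}
```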
With the javax.mail.Session object you have retrieved, you can use the JavaMail API to create a MimeMessage
object with its constituent parts (instances of MimeMultipart and MimeBodyPart). The message can then be
sent using the send method from the Transport class:
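A minimal sketch of composing and sending a message, assuming an injected or looked-up mailSession; the addresses and text are placeholders:

```java
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.Message.RecipientType;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

public class SendMailSketch {
    // mailSession is the javax.mail.Session obtained via injection or JNDI;
    // sender, recipient, subject, and body text are placeholder values.
    static void send(Session mailSession) throws Exception {
        MimeMessage message = new MimeMessage(mailSession);
        message.setFrom(new InternetAddress("sender@example.com"));
        message.addRecipient(RecipientType.TO,
                new InternetAddress("recipient@example.com"));
        message.setSubject("Test mail");

        MimeMultipart multipart = new MimeMultipart();
        MimeBodyPart bodyPart = new MimeBodyPart();
        bodyPart.setText("Hello from the cloud!");
        multipart.addBodyPart(bodyPart);
        message.setContent(multipart);

        // Transport.send uses the protocol settings of the mail destination
        Transport.send(message);
    }
}
```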
Fetching E-Mail
You can retrieve the e-mails from the inbox folder of your e-mail account using the getFolder method from the
Store class as follows:
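A minimal sketch of fetching messages from the inbox, assuming the session's store protocol (for example, imaps) is set in the mail destination:

```java
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;

public class FetchMailSketch {
    static void fetchInbox(Session mailSession) throws Exception {
        // getStore() uses the mail.store.protocol property of the destination
        Store store = mailSession.getStore();
        store.connect();
        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);
        Message[] messages = inbox.getMessages();
        for (Message message : messages) {
            System.out.println("Subject: " + message.getSubject());
        }
        inbox.close(false);
        store.close();
    }
}
```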
Fetched e-mail is not scanned for viruses. This means that e-mail retrieved from an e-mail provider using IMAP or
POP3 could contain a virus that could potentially be distributed (for example, if e-mail is stored in the database or
forwarded). Basic mitigation steps you could take include the following:
Related Information
A mail destination is used to specify the mail server settings for sending or fetching e-mail, such as the e-mail
provider, e-mail account, and protocol configuration.
The name of the mail destination must match the name used for the mail session resource. You can configure a mail destination directly in a destination editor or in a mail destination properties file. The mail destination then provides the protocol settings that take effect when the application is next started.
Note
SAP does not act as an e-mail provider. To use this service, work with an external e-mail provider of your choice.
Name (required): The name of the destination. The mail session that is configured by this mail destination is available by injecting the mail session resource mail/<Name>. The name of the mail session resource must match the destination name.
Type (required): The type of destination. It must be MAIL for mail destinations.
mail.* (required depending on the mail protocol used): javax.mail properties for configuring the mail session. To send e-mails, you must specify at least mail.transport.protocol and mail.smtp.host.
mail.password (required if authentication is used, that is, mail.smtp.auth=true, and generally for fetching e-mail): Password that is used for authentication. The user name for authentication is specified by mail.user (a standard javax.mail property).
● mail.smtp.port: The SMTP standard ports 465 (SMTPS) and 587 (SMTP+STARTTLS) are open for
outgoing connections on SAP Cloud Platform.
● mail.pop3.port: The POP3 standard ports 995 (POP3S) and 110 (POP3+STARTTLS) are open for outgoing
connections (used to fetch e-mail).
● mail.imap.port: The IMAP standard ports 993 (IMAPS) and 143 (IMAP+STARTTLS) are open for outgoing
connections (used to fetch e-mail).
● mail.<protocol>.host: The mail server of an e-mail provider accessible on the Internet, such as Google
Mail (for example, smtp.gmail.com, imap.gmail.com, and so on).
The destination below has been configured to use Gmail as the e-mail provider, SMTP with STARTTLS (port 587)
for sending e-mail, and IMAP (SSL) for receiving e-mail:
Name=Session
Type=MAIL
mail.user=<gmail account name>
mail.password=<gmail account password>
mail.transport.protocol=smtp
mail.smtp.host=smtp.gmail.com
mail.smtp.auth=true
mail.smtp.starttls.enable=true
mail.smtp.port=587
mail.store.protocol=imaps
mail.imaps.host=imap.gmail.com
SMTPS Example
The destination below uses Gmail and SMTPS (port 465) for sending e-mail:
Name=Session
Type=MAIL
mail.user=<gmail account name>
mail.password=<gmail account password>
mail.transport.protocol=smtps
mail.smtps.host=smtp.gmail.com
mail.smtps.auth=true
mail.smtps.port=465
Related Information
In order to troubleshoot e-mail delivery and retrieval issues, it is useful to have debug information about the mail
session established between your SAP Cloud Platform application and your e-mail provider.
Context
To include debug information in the standard trace log files written at runtime, you can use the JavaMail debugging feature and the System.out logger. The System.out logger is preconfigured with log level INFO; the log level must be INFO or a more detailed level for the debug output to appear.
Procedure
1. To enable the JavaMail debugging feature, add the mail.debug property to the mail destination configuration
as shown below:
mail.debug=true
2. To check the log level for your application, log on to the cockpit.
Note
You can check the log level of the System.out logger in a similar manner from the Eclipse IDE.
Related Information
This step-by-step tutorial shows how you can send an e-mail from a simple Web application using an e-mail
provider that is accessible on the Internet. As an example, it uses Gmail.
Note
SAP does not act as an e-mail provider. To use this service, work with an external e-mail provider of your choice.
Prerequisites [page 233]
1. Create a Dynamic Web Project and Servlet [page 233]
The application is also available as a sample in the SAP Cloud Platform SDK:
Prerequisites
You have installed the SAP Cloud Platform Tools and created an SAP HANA Cloud server runtime environment as
described in Setting Up the Development Environment [page 1126].
To develop applications for the SAP Cloud Platform, you require a dynamic Web project and servlet.
1. From the Eclipse main menu, choose File > New > Dynamic Web Project.
2. In the Project name field, enter mail.
3. In the Target Runtime pane, select the runtime you want to use to deploy the application. In this tutorial, you
use Java Web.
4. In the Configuration area, leave the default configuration and choose Finish.
5. To add a servlet to the project you have just created, select the mail node in the Project Explorer view.
6. From the Eclipse main menu, choose File > New > Servlet.
7. Enter the Java package com.sap.cloud.sample.mail and the class name MailServlet.
8. Choose Finish to generate the servlet.
You add code to create a simple Web UI for composing and sending an e-mail message. The code includes the
following methods:
package com.sap.cloud.sample.mail;
import java.io.IOException;
import java.io.PrintWriter;
import javax.annotation.Resource;
import javax.mail.Message.RecipientType;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * Servlet implementing a mail example which shows how to use the connectivity
 * service APIs to send e-mail. The example provides a simple UI to compose an
 * e-mail message and send it. The post method uses the connectivity service
 * and the javax.mail API to send the e-mail.
 */
public class MailServlet extends HttpServlet {
@Resource(name = "mail/Session")
private Session mailSession;
private static final long serialVersionUID = 1L;
private static final Logger LOGGER =
LoggerFactory.getLogger(MailServlet.class);
/** {@inheritDoc} */
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // Show input form to user
    response.setHeader("Content-Type", "text/html");
    PrintWriter writer = response.getWriter();
    writer.write("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" "
            + "\"http://www.w3.org/TR/html4/loose.dtd\">");
    writer.write("<html><head><title>Mail Test</title></head><body>");
    writer.write("<form action='' method='post'>");
    writer.write("<table style='width: 100%'>");
    writer.write("<tr>");
    writer.write("<td width='100px'><label>From:</label></td>");
    writer.write("<td><input type='text' size='50' value='' name='fromaddress'></td>");
Test your code using the local file system before configuring your mail destination and testing the application in the
cloud.
1. To test your application on the local server, select the servlet and choose Run > Run As > Run on Server.
2. Make sure that the Manually define a new server radio button is selected and select SAP > Java Web Server.
3. Choose Finish. A sender screen appears, allowing you to compose and send an e-mail. The sent e-mail is
stored in the work/mailservice directory contained in the root of your SAP Cloud Platform local runtime
server.
Note
To send the e-mail through a real e-mail server, you can configure a destination as described in the next section,
but using the local server runtime. Remember that once you have configured a destination for local testing,
messages are no longer sent to the local file system.
Create a mail destination that contains the SMTP settings of your e-mail provider. The name of the mail
destination must match the name used in the resource reference in the web.xml descriptor.
1. In the Eclipse main menu, choose File > New > Other > Server > Server.
2. Select the server type SAP Cloud Platform and choose Next.
3. In the SAP Cloud Platform Application dialog box, enter the name of your application, subaccount, user, and
password and choose Finish. The new server is listed in the Servers view.
4. Double-click the server and switch to the Connectivity tab.
Property Value
mail.transport.protocol smtp
mail.smtp.host smtp.gmail.com
mail.smtp.auth true
mail.smtp.starttls.enable true
mail.smtp.port 587
Property Value
mail.transport.protocol smtps
mail.smtps.host smtp.gmail.com
mail.smtps.auth true
mail.smtps.port 465
8. Save the destination to upload it to the cloud. The settings take effect when the application is next started.
9. In the Project Explorer view, select MailServlet.java and choose Run > Run As > Run on Server.
10. Make sure that the Choose an existing server radio button is selected and select the server you have just
defined.
11. Choose Finish to deploy to the cloud. You should now see the sender screen, where you can compose and send an e-mail.
Internet Connectivity
Applications that require a connection to a remote service can use the connectivity service to configure HTTP or RFC endpoints. In a provider-managed application, such an endpoint can be defined either once by the application provider or individually by each application consumer. If the application needs to use the same endpoint regardless of the current application consumer, the destination that contains the endpoint configuration is uploaded by the application provider. If the endpoint should be different for each application consumer, the destination must be uploaded by each particular application consumer.
You can configure destinations simultaneously on three levels: application, consumer subaccount and
subscription. This means that it is possible to have one and the same destination on more than one configuration
level. For more information, see Destinations [page 86].
When the application accesses the destination at runtime, the connectivity service first tries to look up the requested destination in the consumer subaccount at subscription level. If no destination is available there, it checks whether the destination is available at the subaccount level of the consumer subaccount. If still no destination is found, the connectivity service searches at application level of the provider subaccount.
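The three-level lookup order can be sketched as follows; the map-based representation is purely illustrative, not the actual API:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class DestinationLookup {
    // Illustrative precedence: subscription level first, then the consumer
    // subaccount level, then the application level of the provider subaccount.
    static Optional<String> resolve(String name,
                                    Map<String, String> subscriptionLevel,
                                    Map<String, String> subaccountLevel,
                                    Map<String, String> applicationLevel) {
        for (Map<String, String> level :
                List.of(subscriptionLevel, subaccountLevel, applicationLevel)) {
            String value = level.get(name);
            if (value != null) {
                return Optional.of(value);
            }
        }
        return Optional.empty();
    }
}
```

A destination defined at subscription level therefore shadows one with the same name at subaccount or application level.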
Consumer-Specific Destination
Provider-Specific Destination
This connectivity type is fully applicable when working with connectivity service 2.x.
Related Information
You can create connectivity destinations for HANA XS applications, configure their security, add roles and test
them on a relevant enterprise or trial landscape.
Related Information
Overview
This section describes the usage of the connectivity service in a productive SAP HANA instance. Listed below are the scenarios, depending on the connectivity and authentication types you use for your development work.
Connectivity Types
Internet Connectivity
In this case, you can develop an XS application in a productive SAP HANA instance at SAP Cloud Platform. This
enables the application to connect to external Internet services or resources.
The corresponding XS parameters for all enterprise region hosts are the same (see also Regions [page 21]):
XS parameter Value
useProxy true
proxyHost proxy
proxyPort 8080
For more information, see Use XS Destinations for Internet Connectivity [page 242].
In this case, you can develop an XS application in a productive SAP HANA instance at SAP Cloud Platform. That
way the application connects, via a Cloud Connector tunnel, to on-premise services and resources.
The corresponding XS parameters for all enterprise regions hosts are the same (see also Regions [page 21]):
XS parameter Value
useProxy true
proxyHost localhost
proxyPort 20003
useSSL false
Note
When XS applications consume the connectivity service to connect to on-premise systems, the useSSL
property must always be set to false.
The communication between the XS application and the proxy listening on localhost is always via HTTP.
Whether the connection to the on-premise back end should be HTTP or HTTPS is a matter of access control
configuration in the Cloud Connector. For more information, see Configure Access Control (HTTP) [page 151].
For more information, see Use XS Destinations for On-Demand to On-Premise Connectivity [page 246].
Authentication Types
No Authentication
Basic Authentication
You need credentials to access an Internet or on-premise service. To meet this requirement, proceed as follows:
1. Open a Web browser and start the SAP HANA XS Administration Tool (https://<schema><account>.<host>/sap/hana/xs/admin/).
2. On the XS Applications page, expand the nodes in the application tree to locate your application.
3. Select the .xshttpdest file to display details of the HTTP destination and then choose Edit.
4. In the AUTHENTICATION section, choose the Basic radio button.
5. Enter the credentials for the on-premise service.
6. Save your entries.
Context
This tutorial explains how to create a simple SAP HANA XS application, which is written in server-side JavaScript and makes use of the connectivity service for making Internet connections.
In the HTTP example, the package is named connectivity and the XS application is mapinfo. The output displays information from Google Maps showing the distance between Frankfurt and Cologne, together with the travel time by car; all this information is provided in American English.
Note
You can check another outbound connectivity example (financial services that display the latest stock values) in
Developer Guide for SAP HANA Studio → section "8.4.1 Tutorial: Using the XSJS Outbound API ". For more
information, see the SAP HANA Developer Guides listed in the Related Links section below. Refer to the SAP
Cloud Platform Release Notes to find out which HANA SPS is supported by SAP Cloud Platform.
● You have a productive SAP HANA instance. For more information, see Using an SAP HANA XS Database
System [page 1240].
● You have installed the SAP HANA tools. For more information, see Install SAP HANA Tools for Eclipse [page
1224].
1. Initial Steps
To create and assign an XS destination, you need to have a developed HANA XS application.
● If you have already created one and have opened a database tunnel, go straight to procedure 2. Create an XS
Destination File on this page.
● If you need to create an XS application from scratch, go to page Creating an SAP HANA XS Hello World
Application Using SAP HANA Studio [page 1229] and execute procedures 1 to 4. Then execute the procedures
from this page (2 to 5).
Note
The subpackage in which you will later create your XS destination and XSJS files has to be named connectivity.
1. In the Project Explorer view, select the connectivity folder and choose File > New > File.
2. Enter the file name google.xshttpdest and choose Finish.
3. Copy and paste the following destination configuration settings:
host = "maps.googleapis.com";
port = 80;
pathPrefix = "/maps/api/distancematrix/json";
authType = none;
useSSL = false;
timeout = 30000;
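Conceptually, the fields of this destination combine into the request URL at runtime. A sketch (the query parameters are an assumption, modeled on the tutorial's Frankfurt-to-Cologne output):

```java
public class XsDestinationUrl {
    // Combines the .xshttpdest fields (host, port, pathPrefix) with a query
    // string supplied by the XSJS code into the effective request URL.
    static String url(String host, int port, String pathPrefix, String query) {
        return "http://" + host + ":" + port + pathPrefix + query;
    }

    public static void main(String[] args) {
        System.out.println(url("maps.googleapis.com", 80,
                "/maps/api/distancematrix/json",
                "?origins=Frankfurt&destinations=Cologne&language=en-US"));
    }
}
```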
1. In the Project Explorer view, select the connectivity folder and choose File > New > File.
2. Enter the file name google_test.xsjs and choose Finish.
3. Copy and paste the following JavaScript code into the file:
try {
    var dest = $.net.http.readDestination("connectivity", "google");
    var client = new $.net.http.Client();
    var req = new $.web.WebRequest($.net.http.GET, "");
    // Query parameters for the distance matrix request (assumed values;
    // adjust origins, destinations, and language as needed)
    req.parameters.set("origins", "Frankfurt");
    req.parameters.set("destinations", "Cologne");
    req.parameters.set("language", "en-US");
    client.request(req, dest);
    var response = client.getResponse();
    $.response.contentType = "application/json";
    $.response.setBody(response.body.asString());
    $.response.status = $.net.http.OK;
} catch (e) {
    $.response.contentType = "text/plain";
    $.response.setBody(e.message);
}
Note
To consume an Internet service via HTTPS, you need to export your HTTPS service certificate into X.509
format, to import it into a trust store and to assign it to your activated destination. You need to do this in the
SAP HANA XS Administration Tool (https://<schema><account>.<host>/sap/hana/xs/admin/). For more
information, see Developer Guide for SAP HANA Studio → section "3.6.2.1 SAP HANA XS Application
Authentication". For more information, see the SAP HANA Developer Guides listed in the Related Links section
below. Refer to the SAP Cloud Platform Release Notes to find out which HANA SPS is supported by SAP
Cloud Platform.
1. In the Systems view, expand Security > Users and then double-click your user ID.
2. On the Granted Roles tab, choose the + (Add) button.
3. Select the model_access role in the list and choose OK. The role is now listed on the Granted Roles tab.
4. Choose Deploy in the upper right corner of screen. A message confirms that your user has been modified.
Open the cockpit and proceed as described in Launch SAP HANA XS Applications [page 1239].
You will be authenticated by SAML and should then see the following response:
{
  "destination_addresses" : [ "Cologne, Germany" ],
  "origin_addresses" : [ "Frankfurt, Germany" ],
  "rows" : [
    {
      "elements" : [
        {
          "distance" : {
            "text" : "190 km",
            "value" : 190173
          },
          "duration" : {
            "text" : "1 hour 58 mins",
            "value" : 7103
          },
          "status" : "OK"
        }
      ]
    }
  ],
  "status" : "OK"
}
Additional Example
You can also see an example for enabling server-side JavaScript applications to use the outbound connectivity API.
For more information, see Developer Guide for SAP HANA Studio → section "8.4.1 Tutorial: Using the XSJS
Outbound API".
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to the SAP Cloud Platform
Release Notes to find out which HANA SPS is supported by SAP Cloud Platform.
See Also
Related Information
Context
This tutorial explains how to create a simple SAP HANA XS application that consumes a sample back-end system
exposed via the Cloud Connector.
In this example, the XS application consumes an on-premise system with basic authentication on landscape
hana.ondemand.com.
Prerequisites
● You have a productive SAP HANA instance. For more information, see Using an SAP HANA XS Database
System [page 1240].
● You have installed the SAP HANA tools. For more information, see Install SAP HANA Tools for Eclipse [page
1224]. You need them to open a Database Tunnel.
● You have Cloud Connector 2.x installed on an on-premise system. For more information, see Installation [page
257].
● A sample back-end system with basic authentication is available on an on-premise host. For more information,
see Set Up an Application as a Sample Back-End System [page 191].
● You have created a tunnel between your subaccount and a Cloud Connector. For more information, see Initial
Configuration [page 285] → section "Establishing Connections to SAP Cloud Platform".
● The back-end system is exposed for the SAP HANA XS application via Cloud Connector configuration using as
settings: virtual_host = virtualpingbackend and virtual_port = 1234. For more information, see
Consume Back-End Systems (Java Web or Java EE 6 Web Profile) [page 171].
Note
The last two prerequisites can also be met by exposing any other available HTTP service in your on-premise network. In this case, you must adjust the pathPrefix value accordingly, as mentioned below in procedure "2. Create an XS Destination File".
1. Initial Steps
To create and assign an XS destination, you need to have a developed HANA XS application.
● If you have already created one and have opened a database tunnel, go straight to procedure 2. Create an XS
Destination File on this page.
Note
The subpackage in which you will later create your XS destination and XSJS files has to be named connectivity.
1. In the Project Explorer view, select the connectivity folder and choose File > New > File.
2. Enter the file name odop.xshttpdest and choose Finish.
3. Copy and paste the following destination configuration settings:
host = "virtualpingbackend";
port = 1234;
useSSL = false;
pathPrefix = "/BackendAppHttpBasicAuth/basic";
useProxy = true;
proxyHost = "localhost";
proxyPort = 20003;
timeout = 3000;
Note
If you use an SDK version equal to or lower than 1.44.0.1 (Java Web) or 2.24.13 (Java EE 6 Web Profile), you will find the on-premise WAR files in the directory <SDK_location>/tools/samples/connectivity/onpremise. In that case, the pathPrefix should be /PingAppHttpBasicAuth/pingbasic.
1. In the Project Explorer view, select the connectivity folder and choose File > New > File.
2. Enter the file name ODOPTest.xsjs and choose Finish.
3. Copy and paste the following JavaScript code into the file:
$.response.contentType = "text/html";
var dest = $.net.http.readDestination("connectivity","odop");
var client = new $.net.http.Client();
var req = new $.web.WebRequest($.net.http.GET, "");
client.request(req, dest);
var response = client.getResponse().body.asString();
$.response.setBody(response);
Note
You also need to enter your on-premise credentials. You should not enter them in the destination file since they
must not be exposed as plain text.
1. Open a Web browser and start the SAP HANA XS Administration Tool (https://<schema><account>.<host>/sap/hana/xs/admin/).
2. On the XS Applications page, expand the nodes in the application tree to locate your application.
3. Select the odop.xshttpdest file to display the HTTP destination details and then choose Edit.
4. In section AUTHENTICATION, choose the Basic radio button.
5. Enter your on-premise credentials (user and password).
6. Save your entries.
Note
If you later need to make another configuration change to your XS destination, you need to enter your password
again since it is no longer remembered by the editor.
1. In the Systems view, expand Security > Users and then double-click your user ID.
2. On the Granted Roles tab, choose the + (Add) button.
3. Select the model_access role in the list and choose OK. The role is now listed on the Granted Roles tab.
4. Choose Deploy in the upper right corner of screen. A message confirms that your user has been modified.
Open the cockpit and proceed as described in Launch SAP HANA XS Applications [page 1239].
The principal propagation scenario is available for HANA XS applications. It is used for propagating the currently logged-in user to an on-premise back-end system using the Cloud Connector and the connectivity service. To configure the scenario, make sure to:
2. Open the Cloud Connector and mark your HANA instance as trusted in the Principal Propagation tab. The HANA instance name is displayed in the cockpit under SAP HANA/SAP ASE > Databases & Schemas. For more information, see Set Up Trust [page 298].
Related Information
port: Enables you to specify the port number to use for connections to the HTTP destination hosting the service or data you want your SAP HANA XS application to access.
● For Internet connection: 80, 443
● For on-demand to on-premise connection: 1080
● For service-to-service connection: 8443
Note
See also: Connectivity for SAP HANA XS (Enterprise Version) [page 240]
Related Information
This section describes the usage of the connectivity service when you develop and deploy SAP HANA XS applications in a trial environment. Currently, you can create XS destinations for consuming HTTP Internet services only.
The tutorial explains how to create a simple SAP HANA XS application which is written in server-side JavaScript and makes use of the connectivity service for making Internet connections. In the HTTP example, the package is named connectivity and the XS application is mapinfo. The output displays information from Google Maps showing the distance between Frankfurt and Cologne, together with the travel time by car; all this information is provided in American English.
Features
In this case, you can develop an XS application in a trial environment at SAP Cloud Platform so that the application
connects to external Internet services or resources.
XS parameter Value (hanatrial.ondemand.com)
useProxy true
proxyHost proxy-trial
proxyPort 8080
Note
The useSSL property can be set to true or false depending on the XS application's needs.
1. Initial Steps
To create and assign an XS destination, you need to have a developed HANA XS application.
● If you have already created one and have opened a database tunnel, go straight to procedure 2. Create an XS
Destination File on this page.
● If you need to create an XS application from scratch, go to page Creating an SAP HANA XS Hello World
Application Using SAP HANA Studio [page 1229] and execute procedures 1 to 4. Then execute the procedures
from this page (2 to 5).
Note
The subpackage in which you will later create your XS destination and XSJS files has to be named connectivity.
1. In the Project Explorer view, select the connectivity folder and choose File > New > File.
2. Enter the file name google.xshttpdest and choose Finish.
3. Copy and paste the following destination configuration settings:
host = "maps.googleapis.com";
port = 80;
pathPrefix = "/maps/api/distancematrix/json";
useProxy = true;
proxyHost = "proxy-trial";
proxyPort = 8080;
authType = none;
useSSL = false;
timeout = 30000;
1. In the Project Explorer view, select the connectivity folder and choose File > New > File.
2. Enter the file name google_test.xsjs and choose Finish.
3. Copy and paste the following JavaScript code into the file:
try {
    var dest = $.net.http.readDestination("connectivity", "google");
    var client = new $.net.http.Client();
    var req = new $.web.WebRequest($.net.http.GET, "");
    // Query parameters for the distance matrix request (assumed values;
    // adjust origins, destinations, and language as needed)
    req.parameters.set("origins", "Frankfurt");
    req.parameters.set("destinations", "Cologne");
    req.parameters.set("language", "en-US");
    client.request(req, dest);
    var response = client.getResponse();
    $.response.contentType = "application/json";
    $.response.setBody(response.body.asString());
    $.response.status = $.net.http.OK;
} catch (e) {
    $.response.contentType = "text/plain";
    $.response.setBody(e.message);
}
1. In the Systems view, select your system and from the context menu choose SQL Console.
2. Enter the following statement:
call "HCP"."HCP_GRANT_ROLE_TO_USER"('p1234567890trial.myhanaxs.hello::model_access', '<SAP HANA Cloud user>')
3. Execute the statement. You should see a confirmation that the statement was successfully executed.
Open the cockpit and proceed as described in Launch SAP HANA XS Applications [page 1239].
You will be authenticated by SAML and should then see the following response:
{
"destination_addresses" : [ "Cologne, Germany" ],
"origin_addresses" : [ "Frankfurt, Germany" ],
"rows" : [
{
"elements" : [
{
"distance" : {
"text" : "190 km",
"value" : 190173
},
"duration" : {
"text" : "1 hour 58 mins",
"value" : 7103
},
"status" : "OK"
}
]
}
],
"status" : "OK"
}
Related Information
Creating an SAP HANA XS Hello World Application Using SAP HANA Web-based Development Workbench [page
1225]
Content
Section Description
Advantages [page 255] How the Cloud Connector helps you to connect your on-premise systems to SAP Cloud Platform.
Scenarios [page 255] Learn more about the different connection setups you can choose.
Basic Tasks [page 256] Primary steps you need to perform to connect the Cloud Connector to your SAP Cloud Platform subaccount.
What's New? [page 256] Stay up to date with the new Cloud Connector features.
Context
The Cloud Connector serves as a link between SAP Cloud Platform applications and on-premise systems. It
combines an easy setup with a clear configuration of the systems that are exposed to the SAP Cloud Platform. You
can also control the resources available for the cloud applications in those systems. Thus, you can benefit from
your existing assets without exposing the whole internal landscape.
The Cloud Connector runs as an on-premise agent in a secured network and acts as a reverse invoke proxy between the on-premise network and SAP Cloud Platform. Due to its reverse invoke support, you don't need to configure the on-premise firewall to allow external access from the cloud to internal systems. The Cloud Connector provides fine-grained control over:
You can use the Cloud Connector in business-critical enterprise scenarios. The Cloud Connector automatically
reestablishes broken connections, provides audit logging of the inbound traffic and configuration changes, and can
be run in a high-availability setup.
In the Scenarios section below, follow the steps according to the protocol you need to use (HTTP or RFC).
Caution
The Cloud Connector must not be used with products other than SAP Cloud Platform or S/4HANA Cloud.
Compared to the approach of opening ports in the firewall and using reverse proxies in the DMZ to establish access
to on-premise systems, the Cloud Connector has the following advantages:
● The firewall of the on-premise network does not have to open an inbound port to establish connectivity from
SAP Cloud Platform to an on-premise system. In the case of allowed outbound connections, no modifications
are required.
● The Cloud Connector supports additional protocols, apart from HTTP. For example, the RFC protocol supports
native access to ABAP systems by invoking function modules.
● You can use the Cloud Connector to connect on-premise databases or BI tools to SAP HANA databases in the
cloud. That means, it also supports the opposite connection direction (from the on-premise system to the
cloud).
● The Cloud Connector allows propagating the identity of cloud users to on-premise systems in a secure way.
● The Cloud Connector is easy to install and configure; it comes with a low TCO and fits well with cloud
scenarios. SAP provides standard support for it.
Scenarios
Note
Depending on the type of installation setup, you can also install the Cloud Connector in an environment
managed by SAP or a 3rd party provider. In this case, special procedures may apply for configuration. If so, they
are mentioned in the corresponding configuration steps.
Basic Tasks
The following steps are required to connect the Cloud Connector to your SAP Cloud Platform subaccount:
What's New?
You can follow the SAP Cloud Platform Release Notes to stay informed about updates of the Cloud Connector.
Related Information
Choose one of the procedures listed below to install Cloud Connector 2.x on your operating system.
On Microsoft Windows and Linux, two installation modes are available: a portable version and an installer
version. On Mac OS X, only the portable version is available.
● Portable version - can be easily installed by simply extracting a compressed archive into an empty directory.
It does not require administrator or root privileges for the installation. Restrictions:
○ You cannot run it in the background as a Windows Service or Linux daemon (with automatic start
capabilities at boot time).
○ It does not support an automatic upgrade procedure. So, if you want to update a portable installation,
you must delete the current installation, extract the new version, and then re-do the configuration.
○ It is meant for non-productive scenarios only.
● Installer version - requires administrator or root permissions for the installation and can be set up to run as
a Windows Service or Linux daemon in the background. You can also upgrade it easily, retaining all the
configuration and customizing. We recommend that you use this variant for productive setups.
Prerequisites
There are some prerequisites you must fulfill to successfully install the Cloud Connector 2.x. See Prerequisites
[page 258].
Tasks
Related Information
Connectivity Restrictions
For general information about SAP Cloud Platform restrictions, see Product Prerequisites and Restrictions [page
906].
For specific information about all connectivity restrictions, see Connectivity [page 32] → section "Restrictions".
Hardware
Software
● You have downloaded the Cloud Connector installation archive from SAP Development Tools for Eclipse.
● A JDK 7 or 8 must be installed. Due to problems with expired root CA certificates contained in older patch
levels of JDK 7, we recommend that you install the most recent patch level. You can download an up-to-date
SAP JVM from SAP Development Tools for Eclipse as well.
Caution
Do not use Apache Portable Runtime (APR) on the system on which you use the Cloud Connector. If you
cannot avoid this restriction and want to use APR at your own risk, you must manually adapt the default-server.xml configuration file in directory <scc_installation_folder>/config_master/
org.eclipse.gemini.web.tomcat. To do so, follow the steps in HTTPS port configuration for APR.
Supported JDKs
Network
You must have an Internet connection at least to the following hosts (depending on the region) to which you can
connect your Cloud Connector:
Neo Environment
connectivitytunnel.hana.ondemand.com 155.56.210.84
connectivitytunnel.eu3.hana.ondemand.com 157.133.141.141
connectivitytunnel.us1.hana.ondemand.com 65.221.12.41
connectivitytunnel.us2.hana.ondemand.com 64.95.110.214
connectivitytunnel.us3.hana.ondemand.com 169.145.118.141
connectivitytunnel.cn1.hana.ondemand.com 157.133.192.141
connectivitytunnel.jp1.hana.ondemand.com 157.133.150.141
connectivitytunnel.ca1.hana.ondemand.com 157.133.54.141
connectivitytunnel.ru1.hana.ondemand.com 157.133.2.141
connectivitytunnel.br1.hana.ondemand.com 157.133.246.141
connectivitytunnel.ae1.hana.ondemand.com 157.133.85.141
connectivitytunnel.sa1.hana.ondemand.com 157.133.93.141
connectivitytunnel.cf.eu10.hana.ondemand.com 52.58.143.196, 35.157.143.217
connectivitytunnel.cf.us10.hana.ondemand.com 52.58.143.196, 35.157.143.217
connectivitytunnel.cf.br10.hana.ondemand.com 54.232.240.220, 54.232.204.156
connectivitycertsigning.hanatrial.ondemand.com 155.56.219.22
connectivitytunnel.hanatrial.ondemand.com 155.56.219.27
Note
If you install the Cloud Connector in a network segment that is isolated from the back-end systems, make sure
the exposed hosts and ports are still reachable and open them in the firewall that protects them:
● for HTTP, the ports you chose for the HTTP/S server.
● for LDAP, the port of the LDAP server.
● for RFC it depends on whether you use a SAProuter or not and whether load balancing is used:
○ if you use a SAProuter, it is typically configured to be visible in the network of the Cloud Connector, and
the corresponding route permission table (saprouttab) exposes all the systems that should be used.
○ without SAProuter, you must open the application server hosts and the corresponding gateway ports
(33##, 48##). When using load balancing for the connection, the message server host and port must
also be opened.
For more information about the used ABAP server ports, see: Ports of SAP NetWeaver Application Server ABAP.
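In the port patterns above, ## stands for the two-digit ABAP instance number: for instance 00, this yields gateway port 3300 and secured gateway port 4800. As a quick sketch:

```shell
# The ## placeholder in 33## / 48## is the two-digit ABAP instance number.
instance=00
echo "gateway port (33##):         33${instance}"
echo "secured gateway port (48##): 48${instance}"
```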
Enterprise Linux 6
Windows 8.1, Windows Server 2012, Windows Server 2012 R2 x86_64 2.5.1 and higher
SUSE Linux Enterprise Server 12, Redhat Enterprise Linux 7 x86_64 2.5.1 and higher
When installing a Cloud Connector, the first thing you need to decide is the sizing of the installation.
This section gives some basic guidance on what to consider for this decision. The provided information includes the
shadow instance, which should always be added in productive setups. See also Install a Failover Instance for High
Availability [page 361].
Note
The following recommendations are based on current experiences. However, they are only a rule of thumb since
the actual performance strongly depends on the specific environment. The overall performance of a Cloud
Connector is impacted by many factors (number of hosted subaccounts, bandwidth, latency to the attached
regions, network routers in the corporate network, used JVM, and others).
Restrictions
Note
Currently, you cannot perform horizontal scaling directly. However, you can distribute the load statically by
operating multiple Cloud Connector installations with different location IDs for all involved subaccounts. In this
scenario, you can use multiple destinations with virtually the same configuration, except for the location ID. See
also Managing Subaccounts [page 291], step 4. Alternatively, each of the Cloud Connector instances can host
its own list of subaccounts without any overlap in the respective lists. Thus, you can handle more load if a single
installation risks being overloaded.
Related Information
How to choose the right sizing for your Cloud Connector installation.
Regarding the hardware, we recommend that you use different setups for master and shadow. One dedicated
machine should be used for the master, another one for the shadow. Usually, a shadow instance takes over the
master role only temporarily. During most of its lifetime, in the shadow state, it needs fewer resources than the
master.
If the master instance is available again after a downtime, we recommend that you switch back to the actual
master.
Note
The sizing recommendations refer to the overall load across all subaccounts that are connected via the Cloud
Connector. This means that you need to accumulate the expected load of all subaccounts, rather than calculating
separately per subaccount (for example, taking the one with the highest load as the basis).
Related Information
Learn more about the basic criteria for the sizing of your Cloud Connector master instance.
For the master setup, keep in mind the expected load for communication between the SAP Cloud Platform and on-
premise systems. The setups listed below differ in a mostly qualitative manner, without hard limits for each of
them.
Note
The mentioned sizes are considered a minimal configuration; larger ones are always fine. In general, the more
applications, application instances, and subaccounts are connected, the more competition there will be for the
limited resources on the machine.
Particularly the heap size is critical. If you size it too low for the load passing the Cloud Connector, at some point
the Java Virtual Machine will execute full GCs (garbage collections) more frequently, blocking the processing of the
Cloud Connector completely for multiple seconds, which massively slows down overall performance. If you
experience such situations regularly, you should increase the heap size in the Cloud Connector UI (choose
Configuration > Advanced > JVM). See also Configure the Java VM [page 356].
Note
You should use the same value for both <initial heap size> and <maximum heap size>.
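If you prefer to reason about this setting in terms of the underlying JVM flags, equal initial and maximum heap sizes correspond to identical -Xms and -Xmx values. The 2 GB value below is an example only:

```shell
# Example only: set initial (-Xms) and maximum (-Xmx) heap to the same size,
# here 2 GB, so the JVM never grows or shrinks the heap at runtime.
JVM_HEAP_OPTS="-Xms2048m -Xmx2048m"
java $JVM_HEAP_OPTS -version
```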
Learn more about the basic criteria for the sizing of your Cloud Connector shadow instance (high availability
mode).
The shadow installation is typically not used in standard situations and hence does not need the same sizing,
assuming that the time span in which it takes over the master role is limited.
While being in the shadow state, the resource consumption is very low, especially in productive environments,
where typically only few configuration changes are required. Therefore, the machine sizing can usually be smaller
than the one for the master. However, if you want to mitigate the risk of a longer outage of the master machine, you
should increase the sizing of the shadow up to the master size:
Master Shadow
Choose the right connection configuration options to improve the performance of the Cloud Connector.
This section provides detailed information on how you can adjust the configuration to improve overall performance.
This is typically relevant for an M or L installation (see Hardware Setup [page 264]). For S installations, the default
configuration will probably be sufficient to handle the traffic.
● As of Cloud Connector 2.11, you can configure the number of physical connections through the Cloud
Connector UI. See also Configure Tunnel Connections [page 354].
● In versions prior to 2.11, you have to modify the configuration files with an editor and restart the Cloud
Connector to activate the changes.
In general, the Cloud Connector tunnel is multiplexing multiple virtual connections over a single physical
connection. Thus, a single connection can handle a considerable amount of traffic. However, increasing the
maximum number of physical connections allows you to make use of the full available bandwidth and to minimize
latency effects.
If the bandwidth limit of your network is reached, adding additional connections doesn't increase the throughput,
but only consumes more resources.
Note
Different network access parameters may impact and limit your configuration options: if the access to an
external network is a 1 MB line with an added latency of 50 ms, you will not be able to achieve the same data
transfer rates as within a fast local network.
Optimal configuration strongly depends on your actual scenarios. A good approach is to try out different settings,
if the current performance does not meet your expectations.
Related Information
Adjusting the number of physical connections for this direction is possible both globally in the Cloud Connector UI
(Configuration > Advanced) and more specifically for individual communication partners on the cloud side (On-Demand To On-Premise Applications). If the calling application/instance is hosted in the SAP Cloud Platform
Neo environment, you can define it even per Java application or HANA instance.
Connections are established per communication partner and the current number of opened connections is visible
in the Cloud Connector UI via <Subaccount> Cloud Connections . For Neo applications with multiple
processes, the configured connections are established per process, which lets you use lower overall values for
such a connection.
The global default (and thus the individual setting) is 1 physical connection per communication partner. This value
is used across all subaccounts hosted by the Cloud Connector instance and will be used for all communication
partners for which there is no specific value set in On-Demand To On-Premise Applications. In general, the
default should be sufficient for applications with low traffic and can stay at 1. If you expect medium traffic for
most applications, it may be useful to set the value to 2 so that you do not need to set individual values per
application.
The following simple rule should help you to decide, whether an individual setting for a concrete application is
required:
● Per 20 threads in one process executing requests to on-premise systems, provide 1 physical connection.
● If the request or response net size is larger than 250k, make sure to add an additional connection for every 2
such clients.
Example
For an application in the SAP Cloud Platform Neo environment, requests to on-premise systems are executed in
each application thread. The expected usage is 100 concurrent users. On average, about 3 of those users are
triggering a remote call to an on-premise system that returns about 400k. That is, for the number of threads,
you should use 5 physical connections; for the 3 clients sending larger amounts, add an additional 2, which sums
up to 7 connections.
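The rule of thumb above can be turned into a small calculation. The helper below is purely illustrative (not part of the Cloud Connector) and reproduces the worked example:

```shell
# Hypothetical helper: estimate physical connections from the rule of thumb.
#   $1 = concurrent threads executing on-premise requests in one process
#   $2 = number of clients whose request/response net size exceeds 250k
estimate_connections() {
  threads=$1; large_clients=$2
  base=$(( (threads + 19) / 20 ))       # 1 connection per 20 threads, rounded up
  extra=$(( (large_clients + 1) / 2 ))  # 1 extra connection per 2 large clients
  echo $(( base + extra ))
}
estimate_connections 100 3   # prints 7, as in the example above
```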
In addition to the number of connections, you can configure the number of <Tunnel Worker Threads>. This
value should be at least the maximum of all configured individual application tunnel connection settings in all
subaccounts so that there is at least 1 thread available for each connection that can process incoming requests
and outgoing responses.
The value for <Protocol Processor Worker Threads> is mainly relevant when RFC is used as the protocol. As
the communication model to the ABAP system is a blocking one, each thread can only take care of one invocation
at a time and cannot be shared. Hence, you should provide 1 thread per 5 concurrent RFC requests.
Note
Due to the blocking nature of this low-level protocol, the longer the execution time in the back end is, the more
threads you will need. The threads can only be reused after returning the response to SAP Cloud Platform.
Configure the number of physical connections for a Cloud Connector service channel.
Service channels let you configure the number of physical connections to the communication partner on cloud
side, see Using Service Channels [page 345]. The default is 1. This value is used as well in versions prior to Cloud
Connector 2.11, which did not offer a configuration option for each service channel. You should define the number
of connections depending on the expected number of clients and, with lower priority, depending on the size of the
exchanged messages.
If there is only a single RFC client for an S/4HANA Cloud channel or only a single HANA client for a HANA DB on
SAP Cloud Platform side, increasing the number doesn't help, as each virtual connection is assigned to one
physical connection. The following simple rule per service channel should allow you to define the number of
connections that meet your requirements:
Example
For a HANA system in the SAP Cloud Platform, data is replicated using 18 concurrent clients in the on-premise
network. On average, about 5 of those clients are regularly sending 600k. For the number of clients, you should
use 2 physical connections; for the 5 clients sending larger amounts, add an additional 3, which sums up to 5
connections.
Prerequisites
● You have either of the following 64-bit operating systems: Windows 7, Windows 8.1, Windows 10, Windows
Server 2008 R2, Windows Server 2012, Windows Server 2012 R2 or Windows Server 2016.
● You have downloaded either the portable variant as ZIP archive for Windows, or the MSI installer from
the SAP Development Tools for Eclipse page.
● You must install Microsoft Visual Studio C++ 2013 runtime libraries (vcredist_x64.exe). For more information,
see Visual C++ Redistributable Packages for Visual Studio 2013 .
Note
Even if you have a more recent version of the Microsoft Visual C++ runtime libraries, you still must install
the Microsoft Visual Studio C++ 2013 libraries.
● Java 7 or Java 8 must be installed. In case you want to use SAP JVM, you can download it from the SAP
Development Tools for Eclipse page.
● When using the portable variant, the environment variable <JAVA_HOME> must be set to the Java
installation directory, so that the bin subdirectory can be found. Alternatively, you can add the relevant bin
subdirectory to the <PATH> variable.
Context
You can choose between a simple portable variant of the Cloud Connector and the MSI-based installer. The
installer is the generally recommended version that you can use for both developer and productive scenarios.
It lets you, for example, register the Cloud Connector as a Windows service, so that it starts automatically after a
machine reboot.
Tip
If you are a developer, you might want to use the portable variant as you can run the Cloud Connector after a
simple unzip (archive extraction). You might want to use it also if you cannot perform a full installation due to
lack of permissions, or if you want to use multiple versions of the Cloud Connector simultaneously on the same
machine.
Portable Scenario
1. Extract the <sapcc-<version>-windows-x64.zip> ZIP file to an arbitrary directory on your local file
system.
2. Set the environment variable JAVA_HOME to the installation directory of the JDK that you want to use to run
the Cloud Connector. Alternatively, you can add the bin subdirectory of the JDK installation directory to the
PATH environment variable.
3. Go to the Cloud Connector installation directory and start it using the go.bat batch file.
4. Continue with the Next Steps section.
Note
Cloud Connector 2.x is not started as a service when using the portable variant, and hence will not
automatically start after a reboot of your system. Also, the portable version does not support the automatic
upgrade procedure.
Installer Scenario
Note
Cloud Connector 2.x is started as a Windows Service in the productive use case. Therefore, installation requires
administration permissions. After installation, manage this service under Control Panel > Administrative
Tools > Services. The service name is Cloud Connector 2.0. Make sure the service is executed with a user
that has limited privileges. Typically, privileges allowed for service users are defined by your company policy.
Adjust the folder and file permissions so that they are manageable by only this user and system administrators.
1. Open a browser and enter: https://<hostname>:8443. <hostname> is the host name of the machine on
which you have installed the Cloud Connector. If you access the Cloud Connector locally from the same
machine, you can simply enter localhost.
2. Continue with the initial configuration of the Cloud Connector, see Initial Configuration [page 285].
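To check from a terminal that the administration UI is reachable, a simple probe could look like this (sketch only; -k is required because the default UI certificate is self-signed):

```shell
# Probe the Cloud Connector administration UI and print the HTTP status code.
# -k skips certificate validation (the default UI certificate is self-signed).
curl -k -s -o /dev/null -w "%{http_code}\n" https://localhost:8443
```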
Related Information
Prerequisites
● You have either of the following 64-bit operating systems: SUSE Linux Enterprise Server 11 or 12, or Redhat
Enterprise Linux 6 or 7
● You have downloaded either the portable variant as tar.gz archive for Linux or the RPM installer
contained in the ZIP for Linux, from SAP Development Tools for Eclipse.
● Java 7 or Java 8 must be installed. If you want to use SAP JVM, you can download an up-to-date version from
SAP Development Tools for Eclipse as well. Use the following command to install it:
rpm -i sapjvm-<version>-linux-x64.rpm
If you want to check the JVM version installed on your system, use the following command:
java -version
When you install SAP JVM using the RPM package, the Cloud Connector detects it and uses it for its runtime.
● When using the tar.gz archive, the environment variable <JAVA_HOME> must be set to the Java installation
directory, so that the bin subdirectory can be found. Alternatively, you can add the Java installation's bin
subdirectory to the <PATH> variable.
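For the portable variant, setting the variable could look like this (the JDK path below is an example only; adjust it to your installation):

```shell
# Example only: point JAVA_HOME to the JDK installation directory so that
# its bin subdirectory can be found.
export JAVA_HOME=/opt/sapjvm_8
# Alternatively (or in addition), put the JDK's bin directory on the PATH:
export PATH="$JAVA_HOME/bin:$PATH"
```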
Context
You can choose between a simple portable variant of the Cloud Connector and the RPM-based installer. The
installer is the generally recommended version that you can use for both developer and productive scenarios.
Tip
If you are a developer, you might want to use the portable variant as you can run the Cloud Connector after a
simple "tar -xzof" execution. You also might want to use it if you cannot perform a full installation due to
missing permissions for the operating system, or if you want to use multiple versions of the Cloud Connector
simultaneously on the same machine.
Portable Scenario
1. Extract the tar.gz file to an arbitrary directory on your local file system using the following command:
tar -xzof sapcc-<version>-linux-x64.tar.gz
Note
If you use the parameter "o", the extracted files are assigned to the user ID and the group ID of the user who
has unpacked the archive. This is the default behavior for users other than the root user.
2. Go to this directory and start the Cloud Connector using the go.sh script.
3. Continue with the Next Steps section.
Note
In this case, Cloud Connector is not started as a daemon, and therefore will not automatically start after a
reboot of your system. Also, the portable version does not support the automatic upgrade procedure.
Installer Scenario
1. Extract the sapcc-<version>-linux-x64.zip archive to an arbitrary directory by using the following the
command:
unzip sapcc-<version>-linux-x64.zip
2. Go to this directory and install the extracted RPM using the following command. You can perform this step
only as a root user.
rpm -i com.sap.scc-ui-<version>.x86_64.rpm
In the productive case, Cloud Connector 2.x is started as daemon. If you need to manage the daemon process,
execute:
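On a standard RPM installation, the daemon is registered as scc_daemon and can be controlled via the service mechanism. The commands below are a sketch; verify the service name on your system, as it may differ:

```shell
# Manage the Cloud Connector daemon on a standard RPM installation
# (requires root; the service name scc_daemon may differ on your system):
service scc_daemon status
service scc_daemon stop
service scc_daemon start
service scc_daemon restart
```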
Example: After a file system restore, the system files represent Cloud Connector 2.3.0 but the RPM package
management "believes" that version 2.4.3 is installed. In this case, commands like rpm -U and rpm -e do not
work as expected. Furthermore, avoid using the --force parameter as it may lead to an unpredictable state
with two versions being installed concurrently, which is not supported.
Next Steps
1. Open a browser and enter: https://<hostname>:8443. <hostname> is the host name of the machine on
which you installed the Cloud Connector.
If you access the Cloud Connector locally from the same machine, you can simply enter localhost.
2. Continue with the initial configuration of the Cloud Connector, see Initial Configuration [page 285].
Related Information
Prerequisites
Note
Mac OS X is not supported for productive scenarios. The developer version described below must not be used
as a productive version.
● You have either of the following 64-bit operating systems: Mac OS X 10.7 (Lion), Mac OS X 10.8 (Mountain
Lion), Mac OS X 10.9 (Mavericks), Mac OS X 10.10 (Yosemite), or Mac OS X 10.11 (El Capitan).
● You have downloaded the tar.gz archive for the developer use case on Mac OS X from SAP Development
Tools for Eclipse.
Procedure
1. Extract the tar.gz file to an arbitrary directory on your local file system using the following command:
tar -xzof sapcc-<version>-macosx-x64.tar.gz
2. Go to this directory and start Cloud Connector using the go.sh script.
3. Continue with the Next Steps section.
Note
The Cloud connector is not started as a daemon, and therefore will not automatically start after a reboot of
your system. Also, the Mac OS X version of Cloud Connector does not support the automatic upgrade
procedure.
Next Steps
1. Open a browser and enter: https://<hostname>:8443. <hostname> is the host name of the machine on
which you installed the Cloud Connector.
If you access the Cloud Connector locally from the same machine, you can simply enter localhost.
2. Continue with the initial configuration of the Cloud Connector, see Initial Configuration [page 285].
Related Information
For the connectivity service and the Cloud Connector, you should apply the following guidelines to guarantee the
highest level of security for these components.
From the Connector menu, choose Security Status to access an overview showing potential security risks and the
recommended actions.
The General Security Status addresses security topics that are subaccount-independent.
● Choose any of the Actions icons in the corresponding line to navigate to the UI area that deals with that
particular topic and view or edit details.
Note
Navigation is not possible for the last item in the list (Service User).
● The service user is specific to the Windows operating system (see Installation on Microsoft Windows OS [page
270] for details) and is only visible when running the Cloud Connector on Windows. It cannot be accessed or
edited through the UI. If the service user was set up properly, choose Edit and check the corresponding
checkbox.
The Subaccount-Specific Security Status lists security-related information for each and every subaccount.
Note
The security status only serves as a reminder to address security issues and shows if your installation complies
with all recommended security settings.
Upon installation, the Cloud Connector provides an initial user name and password, and forces the user
(Administrator) to change the password. You should change the password immediately after installation.
The connector itself does not check the strength of the password. You should select a strong password that cannot
be guessed easily.
Note
To enforce your company's password policy, we recommend that you configure the Administration UI to use an
LDAP server for authorizing access to the UI.
The Cloud Connector is a security-critical component that handles the external access to systems of an isolated
network, comparable to a reverse proxy. We therefore recommend that you restrict the access to the operating
system on which the Cloud Connector is installed to the minimal set of users who administer the Cloud
Connector. This minimizes the risk of unauthorized users getting access to credentials, such as certificates stored
in the secure storage of the Cloud Connector.
We also recommend that you use the machine to operate only the Cloud Connector and no other systems.
To log on to the Cloud Connector administration UI, the Administrator user of the connector does not need an
operating system (OS) user on the machine on which the connector is running. This allows the OS administrator
to be distinguished from the Cloud Connector administrator. To make an initial connection between the connector
and a particular SAP Cloud Platform subaccount, you need an SAP Cloud Platform user with the required
permissions for the related subaccount. We recommend that you separate these roles/duties (that means, you
have separate users for Cloud Connector administrator and SAP Cloud Platform).
Note
We recommend that only a small number of users are granted access to the machine as root users.
Hard-drive encryption ensures that the Cloud Connector configuration data cannot be read by unauthorized users,
even if they obtain access to the hard drive.
The Cloud Connector administration UI can be accessed remotely via HTTPS. The connector uses a standard
self-signed X.509 certificate as its SSL server certificate. You can exchange this certificate with a specific certificate
that is trusted by your company. See Recommended: Replace the Default SSL Certificate [page 279].
Note
Since browsers usually do not resolve localhost to the host name, whereas the certificate is usually issued for
the host name, you might get a certificate warning. In this case, simply skip the warning message.
Currently, the protocols HTTP and RFC are supported for connections between the SAP Cloud Platform and on-
premise systems when the Cloud Connector and the connectivity service are used. The whole route from the
application virtual machine in the cloud to the Cloud Connector is always SSL-encrypted.
The route from the connector to the back-end system can be SSL-encrypted or SNC-encrypted. See Configure
Access Control (HTTP) [page 151] and Configure Access Control (RFC) [page 202].
We recommend that you turn on the audit log on operating system level to monitor the file operations.
The Cloud Connector audit log must remain switched on during the time it is used with productive systems. Set it
to audit level ALL (the default is SECURITY). The administrators who are responsible for a running Cloud
Connector must ensure that the audit log files are properly archived, to conform to the local regulations. You
should switch on audit logging also in the connected back-end systems.
By default, all available encryption ciphers are supported for HTTPS connections to the administration UI.
However, some of them may not conform to your security standards and therefore should be excluded:
1. From the main menu, choose Configuration and select the tab User Interface, section Cipher Suites. By default,
all available ciphers are marked as selected.
2. Choose the Remove icon to unselect the ciphers that do not meet your security requirements.
3. Choose Save.
Related Information
Overview
By default, the Cloud Connector includes a self-signed UI certificate. It is used to encrypt the communication
between the browser-based user interface and the Cloud Connector itself. For security reasons, however, you
should replace this certificate with your own certificate so that the browser accepts the certificate without security
warnings.
Note
As of version 2.6.0, you can easily replace the default certificate within the Cloud Connector administration UI.
See Exchange UI Certificates in the Administration UI [page 283].
Caution
The Cloud Connector's keystore may contain a certificate used in the high availability setup. This certificate has
the alias "ha". Any change to this certificate, or its removal, disrupts the communication between the shadow
and the master instance and causes the high availability setup to fail. We recommend that you replace the keystore
on both the master and the shadow server before establishing the connection between the two instances.
Procedure
● on Linux OS:
Note
Memorize the keystore password, as you will need it for later operations. See related links.
Make sure you go to directory /opt/sap/scc/config before executing the commands described in the following
procedures.
Note
For a detailed description of the keytool tool, see http://docs.oracle.com/javase/7/docs/technotes/tools/
solaris/keytool.html .
Related Information
Generate a self-signed certificate for special purposes like, for example, a demo setup.
Context
Note
As of Cloud Connector 2.10 you can generate self-signed certificates also from the administration UI. See
Configure a CA Certificate for Principal Propagation [page 303] and Initial Configuration (HTTP) [page 149]. In
this case, the steps below are not required.
If you want to use a simple, self-signed certificate, follow the procedure below.
Note
The parameter values in the following section are examples.
The server configuration delivered by SAP uses the same password for the key store (option -storepass) and the key
(option -keypass) under alias tomcat.
Procedure
2. Generate a certificate:
3. Self-sign it - you will be prompted for the keypass password defined in step 2:
Use a certificate signed by a trusted certificate authority (CA) to increase the security level when running the
Cloud Connector.
Procedure
If you have a signed certificate produced by a trusted certificate authority (CA), go directly to step 3.
You now have a file called <csr-file-name> that you can submit to the certificate authority. In return, you
get a certificate.
3. Import the certificate chain that you obtained from your trusted CA:
The password is created at installation time and stored in the secure storage. Thus, only applications with access
to the secure storage can read the password. You can read the password using Java:
Note
We recommend that you do not modify the configuration unless you have expertise in this area.
By default, the Cloud Connector includes a self-signed UI certificate. It is used to encrypt the communication
between the browser-based user interface and the Cloud Connector itself. For security reasons, however, you
should replace this certificate with your own so that the browser accepts the certificate without security
warnings.
Procedure
Master Instance
1. From the main menu, choose Configuration and go to the User Interface tab.
2. In the UI Certificate section, start a certificate signing request procedure by choosing the icon Generate a
Certificate Signing Request.
5. You are prompted to save the signing request in a file. The content of the file is the signing request in PEM
format.
The signing request must be provided to a Certificate Authority (CA) - either one within your company or
another one you trust. The CA signs the request and the returned response should be stored in a file.
6. To import the signing response, choose the Upload icon. Select Browse to locate the file and then choose the
Import button.
7. Review the certificate details that are displayed.
8. Restart the Cloud Connector to activate the new certificate.
Shadow Instance
In a High Availability setup, perform the same operation on the shadow instance.
1.5.3.2 Configuration
Configure the Cloud Connector to make it operational for connections between your SAP Cloud Platform
subaccount and on-premise systems.
In this section:
● Initial Configuration [page 285]: After installing the Cloud Connector and starting the Cloud Connector daemon, you can log on and perform the required configuration to make your Cloud Connector operational.
● Managing Subaccounts [page 291]: How to connect SAP Cloud Platform subaccounts to your Cloud Connector.
● Configuring Principal Propagation [page 298]: Principal Propagation [page 126] lets you forward the logged-on identity in the cloud to the internal (on-premise) system without providing the password.
● Configure Access Control [page 318]: Configure access control or copy the complete access control settings from another subaccount on the same Cloud Connector.
● Configuration REST APIs [page 320]: Configure a newly installed Cloud Connector (initial configuration, subaccounts, access control) using the configuration REST API.
● Configure the User Store [page 343]: Configure applications running on SAP Cloud Platform to use your corporate LDAP server as a user store.
● Using Service Channels [page 345]: Service channels provide secure and reliable access from an external network to certain services on SAP Cloud Platform, which are not exposed to direct access from the Internet.
● Connect DB Tools to SAP HANA via Service Channels [page 347]: How to connect database, BI, or replication tools running in the on-premise network to a HANA database on SAP Cloud Platform using the service channels of the Cloud Connector.
● Configure Domain Mappings for Cookies [page 352]: Map virtual and internal domains to ensure correct handling of cookies in client/server communication.
● Configure Solution Management Integration [page 353]: Activate Solution Management reporting in the Cloud Connector.
● Configure Tunnel Connections [page 354]: Adapt connectivity settings that control the throughput by choosing the appropriate limits (maximum values).
● Configure the Java VM [page 356]: Adapt the JVM settings that control memory management.
● Configuration Backup [page 356]: Back up and restore your Cloud Connector configuration.
Context
After installing the Cloud Connector and starting the Cloud Connector daemon, you can log on and perform the
required configuration to make your Cloud Connector operational.
We strongly recommend that you read and follow the steps described in Recommendations for Secure Setup
[page 275]. For operating the Cloud Connector securely, see also Guidelines for Secure Operation of Cloud
Connector [page 401].
To administer the Cloud Connector, you need a Web browser. To check the list of supported browsers, see Product
Prerequisites and Restrictions [page 906] → section Browser Support.
1. When you first log in, you must change the password before you continue, regardless of the installation type
you have chosen.
2. Choose between master and shadow installation. Use Master if you are installing a single Cloud Connector
instance or a main instance from a pair of Cloud Connector instances. See Install a Failover Instance for High
Availability [page 361].
3. You can edit the password for the Administrator user from Configuration in the main menu, tab User
Interface, section Authentication:
If your internal landscape is protected by a firewall that blocks any outgoing TCP traffic, you must specify an
HTTPS proxy that the Cloud Connector can use to connect to SAP Cloud Platform. Normally, you can use the
same proxy settings as those used by your standard Web browser. The Cloud Connector needs this proxy for
two operations:
● Download the correct connection configuration corresponding to your subaccount ID in SAP Cloud Platform.
● Establish the SSL tunnel connection from the Cloud Connector user to your SAP Cloud Platform subaccount.
Note
If you want to skip the initial configuration, you can click the icon in the upper right corner. You might need
this in case of connectivity issues shown in your logs. You can add subaccounts later as described in Managing
Subaccounts [page 291].
When you first log on, the Cloud Connector collects the following required information:
1. For <Region Host>, specify the SAP Cloud Platform host that should be used. You can choose it from the
drop-down list. See Regions [page 21].
2. For <Subaccount>, <Subaccount User> and <Password>, enter the values you obtained when you
registered your subaccount on SAP Cloud Platform. Alternatively, add a new subaccount user [page 965] with the
role Cloud Connector Admin from the Members tab in the SAP Cloud Platform cockpit (for Cloud Foundry, see
Add Organization Members Using the Cockpit [page 956]) and use the new user and password.
Note
If your subaccount is on Cloud Foundry, you must enter the subaccount ID as <Subaccount>, rather than
its actual name. For information on getting the subaccount ID, see Find Your Subaccount ID (Cloud Foundry
Environment) [page 296]. As <Subaccount User> you must provide your Login E-mail instead of a
user ID.
3. (Optional) You can define a <Display Name> that lets you easily recognize a specific subaccount in the UI
compared to the technical subaccount name.
4. (Optional) You can define a <Location ID> identifying the location of this Cloud Connector for a specific
subaccount. As of Cloud Connector release 2.9.0, the location ID is used as routing information and therefore
you can connect multiple Cloud Connectors to a single subaccount. If you don't specify any value for
<Location ID>, the default is used, which represents the behavior of previous Cloud Connector versions.
The location ID must be unique per subaccount and should be an identifier that can be used in a URI. To route
requests to a Cloud Connector with a location ID, the location ID must be configured in the respective
destinations.
Note
Location IDs provided in older versions of the Cloud Connector are discarded during upgrade to ensure
compatibility for existing scenarios.
5. Enter a suitable proxy host from your network and the port that is specified for this proxy. If your network
requires authentication for the proxy, enter a corresponding proxy user and password. You must specify a
proxy server that supports SSL communication (a standard HTTP proxy does not suffice).
Note
These settings strongly depend on your specific network setup. If you need more detailed information,
please contact your local system administrator.
6. (Optional) You can provide a <Description> (free-text) of the subaccount that is shown when choosing the
Details icon in the Actions column of the Subaccount Dashboard. It lets you identify the particular Cloud
Connector you use.
7. Choose Save.
Note
The internal network must allow access to the port. The specific configuration for opening the respective port(s)
depends on the firewall software used. The default ports are 80 for HTTP and 443 for HTTPS. For RFC
communication, you must open a gateway port (default: 33+<instance number>) and an arbitrary message
server port. For a connection to a HANA database (on SAP Cloud Platform) via JDBC, you must open an
arbitrary outbound port in your network. Mail (SMTP) communication is not supported.
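The gateway port convention above (33 followed by the two-digit instance number) can be illustrated with a one-line sketch; the instance number is an arbitrary example of this sketch.

```shell
instance=02                 # example two-digit SAP instance number
gw_port="33${instance}"     # gateway port convention: 33<instance number>
echo "$gw_port"             # prints 3302
```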
● If you later want to change your proxy settings (for example, because the company firewall rules have
changed), choose Configuration from the main menu and go to the Cloud tab, section HTTPS Proxy. Some
proxy servers require credentials for authentication. In this case, you must provide the relevant user/password
information.
As soon as the initial setup is complete, the tunnel to the cloud endpoint is open, but no requests are allowed to
pass until you have performed the Access Control setup, see Configure Access Control [page 318].
To manually close (and reopen) the connection to SAP Cloud Platform, choose your subaccount from the main
menu and select the Disconnect button (or the Connect button to reconnect to SAP Cloud Platform).
The green icons next to Region Host and HTTPS Proxy indicate that they are both valid and operational. In case of a
timeout or a connectivity issue, these icons are yellow (warning) or red (error), and a tooltip shows the cause of the
problem. The Subaccount User is the user that has originally established the tunnel. During normal operations, this
user is no longer needed. Instead, certificates are used to open the connection to a subaccount.
Add and connect your SAP Cloud Platform subaccounts to the Cloud Connector.
Context
As of version 2.2, you can connect to several subaccounts within a single Cloud Connector installation. Those
subaccounts can use the Cloud Connector concurrently with different configurations. By selecting a subaccount
from the drop-down box, all tab entries show the configuration, audit, and state, specific to this subaccount. In
case of audit and traces, cross-subaccount info is merged with the subaccount-specific parts of the UI.
Note
We recommend that you group only subaccounts with the same qualities in a single installation:
● Productive subaccounts should reside on a Cloud Connector that is used for productive subaccounts only.
● Test and development subaccounts can be merged, depending on the group of users that are supposed to
deal with those subaccounts. However, the preferred logical setup is to have separate development and test
installations.
Subaccount Dashboard
In the subaccount dashboard (choose your Subaccount from the main menu), you can check the state of all
subaccount connections managed by this Cloud Connector at a glance.
The dashboard also lets you disconnect or connect the subaccounts by choosing the respective button in the
Actions column.
If you want to connect an additional subaccount to your on-premise landscape, choose the Add Subaccount
button. A dialog appears, which is similar to the Initial Configuration operation when establishing the first
connection.
Procedure
1. The <Region> field specifies the SAP Cloud Platform region that should be used, for example, Europe
(Rot). Choose the one you need from the drop-down list. See SAP Cloud Platform Cockpit [page 900] →
section "Logon".
2. For <Subaccount> and <Subaccount User> (user/password), enter the values you obtained when you
registered your subaccount on SAP Cloud Platform, or add a new member with the role Cloud Connector Admin
(see Add Members to Subaccounts [page 965]) and use that user and password.
Note
If your subaccount is located in the Cloud Foundry environment, you must enter the subaccount ID as
<Subaccount>, rather than its actual name. For information on getting the subaccount ID, see Find Your
Subaccount ID (Cloud Foundry Environment) [page 296]. As <Subaccount User> you must provide your
Login E-mail instead of a user ID.
Note
If the Cloud Connector is installed in an environment that is operated by SAP, SAP provides a user that you
should add as member in your SAP Cloud Platform subaccount. In this case, assign the Cloud Connector
Admin role (see Managing Member Authorizations [page 1671]) to the user provided by SAP. When the
Cloud Connector connection is established, this user is not needed any more since it serves only for initial
connection setup. You may then revoke the corresponding role assignment and remove the user from the
Members list.
3. (Optional) You can define a <Display Name> that allows you to easily recognize a specific subaccount in the
UI compared to the technical subaccount name.
4. (Optional) You can define a <Location ID> that identifies the location of this Cloud Connector for a specific
subaccount. As of Cloud Connector release 2.9.0, the location ID is used as routing information and therefore
you can connect multiple Cloud Connectors to a single subaccount. If you don't specify any value for
<Location ID>, the default is used, which represents the behavior of previous Cloud Connector versions.
The location ID must be unique per subaccount and should be an identifier that can be used in a URI. To route
requests to a Cloud Connector with a location ID, the location ID must be configured in the respective
destinations.
5. (Optional) You can provide a <Description> of the subaccount that is shown when clicking on the Details
icon in the Actions column.
6. Choose Save.
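Because the <Location ID> must be unique per subaccount and usable in a URI, it is safest to stick to URI-unreserved characters. The exact character set the Cloud Connector accepts is not spelled out here, so the check below is a conservative assumption, not the product's actual validation.

```shell
# Sketch: accept only letters, digits, '.', '_' and '-' (an assumed safe
# subset of URI-unreserved characters); reject empty values.
is_valid_location_id() {
  case "$1" in
    "") return 1 ;;                   # empty: the default location is used
    *[!A-Za-z0-9._-]*) return 1 ;;    # contains a character outside the safe set
    *) return 0 ;;
  esac
}

is_valid_location_id "LOC-01" && echo "LOC-01 accepted"
is_valid_location_id "my loc" || echo "'my loc' rejected (contains a space)"
```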
Next Steps
● To modify an existing subaccount, choose the Edit icon and change the <Display Name>, <Location ID>
and/or <Description>.
The certificates that are used by the Cloud Connector are issued with a limited validity period. To prevent a
downtime while refreshing the certificate, you can update it for your subaccount directly from the administration
UI.
4. If you have configured a disaster recovery subaccount, go to section Disaster Recovery Subaccount below and
choose Refresh Disaster Recovery Certificate.
5. Enter <User Name> and <Password> as in step 3 and choose OK.
Note
This feature is not generally available.
Each subaccount (except trial accounts) can have a disaster recovery subaccount. The disaster recovery
subaccount is intended to take over if the region host of its associated original subaccount faces severe issues.
A disaster recovery subaccount inherits the configuration from its original subaccount except for the region host. The
user can, but does not have to, be the same.
Procedure
Note
The selected region host must be different from the region host of the original subaccount.
Note
The technical subaccount name is the same as for the original subaccount, and set automatically.
Note
You cannot choose another original subaccount or a trial subaccount to become a disaster recovery
subaccount.
Note
If you want to change a disaster recovery subaccount, you must delete it first and then configure it again.
To switch from the original subaccount to the disaster recovery subaccount, choose Employ disaster recovery
subaccount.
The disaster recovery subaccount then becomes active, and the original subaccount is deactivated.
You can switch back to the original subaccount as soon as it is available again.
Get your subaccount ID to configure the Cloud Connector in the Cloud Foundry environment.
To set up your subaccount in the Cloud Connector, you must know the subaccount ID. Follow these steps
to find it:
4. Choose the Show More icon in the lower right corner of the subaccount tile to display the subaccount
ID:
If you want to use a custom region for your subaccount, you can configure regions in the Cloud Connector, which
are not listed in the selection of standard regions.
1. From the Cloud Connector main menu, choose Configuration → Cloud and go to the Custom Regions
section.
2. To add a region to the list, choose the Add icon.
Use principal propagation to simplify the access of SAP Cloud Platform users to on-premise systems.
In this section:
● Set Up Trust [page 298]: Configure a trusted relationship in the Cloud Connector to support principal propagation. Principal propagation lets you forward the logged-on identity in the cloud to the internal system without providing the password.
● Configure Kerberos [page 317]: The Cloud Connector lets you propagate users authenticated in SAP Cloud Platform via Kerberos against back-end systems. It uses the Service For User and Constrained Delegation protocol extension of Kerberos.
Context
You perform trust configuration to support principal propagation. By default, your Cloud Connector does not trust
any entity that issues tokens for principal propagation. Therefore, the list of trusted identity providers is empty by
default. If you decide to use the principal propagation feature, you must establish trust to at least one identity
provider. Currently, SAML2 identity providers are supported. You can configure trust to one or more SAML2 IdPs
per subaccount. After you've configured trust in the cockpit for your subaccount, for example, to your own
company's identity provider(s), you can synchronize this list with your Cloud Connector.
As of Cloud Connector 2.4, you can also trust HANA instances and Java applications to act as identity providers.
From your subaccount menu, choose Cloud to On-Premise and go to the Principal Propagation tab. Choose the
Synchronize button to store the list of existing identity providers locally in your Cloud Connector.
You can decide for each entry, whether to trust it for the principal propagation use case by choosing Edit and
(de)selecting the Trusted checkbox.
Note
Whenever you update the SAML IdP configuration for a subaccount on the cloud side, you must synchronize the
trusted entities in the Cloud Connector. Otherwise, the validation of the forwarded SAML assertion will fail with
an exception message similar to this: Caused by:
com.sap.engine.lib.xml.signature.SignatureException: Unable to validate signature ->
Set up principal propagation from SAP Cloud Platform to your internal system that is used in a hybrid scenario.
Note
As a prerequisite for principal propagation for RFC, the following cloud application runtime versions are
required:
1. Set up trust to an entity that is issuing an assertion for the logged-on user (see section above).
2. Set up the system identity for the Cloud Connector.
○ For HTTPS, you must import a system certificate into your Cloud Connector.
○ For RFC, you must import an SNC PSE into your Cloud Connector.
3. Configure the target system to trust the Cloud Connector. There are two levels of trust:
1. First, you must allow the Cloud Connector to identify itself with its system certificate (for HTTPS), or with
the SNC PSE (for RFC).
2. Then, you must allow this identity to propagate the user accordingly:
○ For HTTPS, the Cloud Connector forwards the true identity in a short-lived X.509 certificate in an
HTTP header named SSL_CLIENT_CERT. The system must use this certificate for logging on the real
user. The SSL handshake, however, is performed through the system certificate.
○ For RFC, the Cloud Connector forwards the true identity as part of the RFC protocol.
4. Configure the user mapping in the target system. The X.509 certificate contains information about the cloud
user in its subject. Use this information to map the identity to the appropriate user in this system. This step
applies for both HTTPS and RFC.
Note
If you have the following scenario: Application1 -> AppToAppSSO -> Application2 -> Principal Propagation -> On-
premise Backend System, you must mark Application2 as trusted by the Cloud Connector in the Trust
Configurations tab.
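The short-lived certificate that arrives in the SSL_CLIENT_CERT header is a regular X.509 certificate, so its subject can be inspected with standard tools. The sketch below generates a throwaway certificate with an example subject (nothing here is SAP-specific) and reads it back with openssl, as a back end would when mapping the user.

```shell
# Generate a throwaway certificate standing in for the short-lived one
# (the subject CN is an invented example user):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=P1234567890" -keyout key.pem -out cert.pem 2>/dev/null

# Inspect the subject, as a back end would when mapping the identity:
openssl x509 -in cert.pem -noout -subject
```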
By default, all applications within a subaccount are allowed to use the Cloud Connector associated with the
subaccount they run in. However, this behavior might not be desirable in every scenario: using the Cloud
Connector may be acceptable for some applications because they must interact with on-premise resources,
but not for others.
As long as there is no entry in this whitelist, all applications are allowed to use the Cloud Connector. If one or
more entries appear in the whitelist, then only these applications are allowed to connect to the exposed systems
in the Cloud Connector.
1. From your subaccount menu, choose Cloud to On-Premise and go to the Applications tab.
2. To add an application, choose the Add icon in section Trusted Applications.
3. Enter the <Application Name> in the Add Tunnel Application dialog.
Note
To add all applications that are listed in section Tunnel Connection Limits on the same screen, you can also
use the Upload button next to the Add button. The list Tunnel Connection Limits shows all applications for
which a specific maximum number of tunnel connections was specified. See also: Configure Tunnel
Connections [page 354].
4. (Optional) Enter the maximum number of <Tunnel Connections> only if you want to override the default
value.
5. Choose Save.
Note
To allow a subscribed application, you must add it to the whitelist in the format
<providerSubaccount>:<applicationName>. In particular, when using HTML5 applications, an
implicit subscription to services:dispatcher is required.
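A whitelist entry of the form <providerSubaccount>:<applicationName> splits at the first colon, since the application name itself may contain colons (as in services:dispatcher). A shell sketch of that split, with an invented entry value:

```shell
entry="mysubaccount:services:dispatcher"   # hypothetical whitelist entry

provider="${entry%%:*}"   # everything before the first ':'
app="${entry#*:}"         # everything after the first ':'

echo "provider=$provider"   # provider=mysubaccount
echo "app=$app"             # app=services:dispatcher
```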
By default, the Cloud Connector trusts every on-premise system when connecting to it via HTTPS. As this may be
an undesirable behavior from a security perspective, you can configure a trust store that acts as a whitelist of
trusted on-premise systems, represented by their respective public keys. You can configure the trust store as
follows:
An empty trust store does not impose any restrictions on the trusted on-premise systems. It becomes a whitelist
as soon as you add the first public key.
Note
You must provide the public keys in .der or .cer format.
Tasks
To learn more about the different options for configuring and supporting principal propagation for a particular AS
ABAP, see:
Supported CA Mechanisms
You can enable support for principal propagation with X.509 certificates by performing either of the following
procedures:
Note
Prior to version 2.7.0, this was the only option and the system certificate was acting both as client certificate
and CA certificate in the context of principal propagation.
The Cloud Connector uses the configured CA approach to issue short-lived certificates for logging on the same
identity in the back end that is logged on in the cloud. For establishing trust with the back end, the respective
configuration steps are independent of the approach that you choose for the CA.
To issue short-lived certificates that are used for principal propagation to a back-end system, you can import an
X.509 client certificate into the Cloud Connector. This CA certificate must be provided as a PKCS#12 file containing
the (intermediate) certificate, the corresponding private key, and the CA root certificate that signed the
intermediate certificate (plus the certificates of any other intermediate CAs, if the certificate chain includes more
than those two certificates).
● Option 1: Choose the PKCS#12 file from the file system, using the file upload dialog. For the import process,
you must also provide the file password.
● Option 2: Start a Certificate Signing Request (CSR) procedure, as for the UI certificate; see Exchange UI
Certificates in the Administration UI [page 283].
● Option 3: (As of version 2.10) Generate a self-signed certificate, which might be useful in a demo setup or if
you need a dedicated CA. In particular for this option, it is useful to export the public key of the CA via the
button Download certificate in DER format.
Note
The CA certificate should have the KeyUsage attribute keyCertSign. Many systems verify that the issuer of a
certificate includes this attribute and deny a client certificate without this attribute. When using the CSR
procedure, the attribute is requested for the CA certificate. Also, when generating a self-signed certificate, this
attribute is added automatically.
Choose Create and import a self-signed certificate if you want to use option 3:
After successful import of the CA certificate, its distinguished name, the name of the issuer, and the validity dates
are shown:
If a CA certificate is no longer required, you can delete it. Use the respective Delete button and confirm the
deletion.
If you want to delegate the CA functionality to a Secure Login Server, choose the CA using Secure Login Server
option and configure it as follows, after having set up the Secure Login Server as described in Configure a
Secure Login Server [page 313].
● <Host Name>: The host, on which your Secure Login Server (SLS) is installed.
● <Profiles Port>: The profiles port must be provided only if your Secure Login Server is configured not
to allow fetching profiles via the privileged authentication port. In this case, provide the port that
is configured for that functionality.
● <Authentication Port>: The port over which the Cloud Connector requests the short-lived
certificates from SLS. Choose Next.
Note
For this privileged port, a client certificate authentication is required, for which the Cloud Connector system
certificate is used.
● <Profile>: The Secure Login Server profile that allows issuing certificates as needed for principal
propagation with the Cloud Connector.
This example provides step-by-step instructions on how to configure principal propagation to an ABAP server
for HTTPS.
Example Data
● System certificate was issued by: CN=MyCompany CA, O=Trust Community, C=DE.
● It has the subject: CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE.
● The short-lived certificate has the subject CN=P1234567890, where P1234567890 is the platform user.
Prerequisites
To perform the following steps, you must have the corresponding authorizations in the ABAP system for the
transactions mentioned below (administrator role according to your specific authorization management) as well
as an administrator user for the Cloud Connector.
Configure the ABAP system to trust the Cloud Connector's system certificate:
Note
If you have applied SAP Note 2052899 to your system, you can alternatively provide an additional
parameter for icm/trusted_reverse_proxy_<x>, for example: icm/trusted_reverse_proxy_2 =
SUBJECT="CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE", ISSUER="CN=MyCompany CA,
O=Trust Community, C=DE".
Note
If you have a Web dispatcher installed in front of the ABAP system, trust must be added in its configuration files
with the same parameters as for the ICM. Also, you must add the system certificate of the Cloud Connector to
the trust list of the Web dispatcher Server PSE.
You can do this manually in the system as described below or use an identity management solution for a more
comfortable approach. For example, for large numbers of users the rule-based certificate mapping is a good way
to save time and effort. For more information, see Rule-based Mapping of Certificates [page 308].
(Optional procedure) Execute these steps if your scenario requires basic authentication support for some of the
ICF services.
Note
If dynamic parameters are disabled, enter the value using transaction RZ10 and restart the whole ABAP
system.
Note
To access transaction CERTRULE, you need the corresponding authorizations (see: Assign
Authorization Objects for Rule-based Mapping [page 309]).
Note
When you save the changes and return to transaction CERTRULE, the sample certificate which you
imported in Step 2b will not be saved. This is just a sample editor view to see the sample certificates
and mappings.
This example provides step-by-step instructions on how to configure principal propagation to an ABAP server
for RFC.
Example Data
● A system PSE has been generated and installed on the host where the Cloud Connector is running. See the
SNC User's Guide: https://service.sap.com/security → section "Infrastructure Security".
● The system's SNC name is: p:CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE
● The ABAP system's PSE name is: p:CN=SID, O=Trust Community, C=DE
● The ABAP system's PSE and the Cloud Connector's system PSE must be signed by the same CA for mutual
authentication.
● The short-lived certificate has the subject CN=P1234567, where P1234567 is the platform user.
1. Configure the ABAP System to Trust the Cloud Connector's System PSE
1. Open the SNC Access Control List for Systems (transaction SNC0).
2. Enter a System ID for your Cloud Connector together with its SNC name: p:CN=SCC, OU=HCP
Scenarios, O=Trust Community, C=DE.
3. Save the entry and choose the Details button.
4. In the next screen, activate the checkboxes for Entry for RFC activated and Entry for certificate activated.
5. Save your settings.
You can do this manually in the system as described below or use an identity management solution for a more
comfortable approach. For example, for large numbers of users the rule-based certificate mapping is a good way
to save time and effort. See Rule-Based Certificate Mapping.
Prerequisites
● The required security product for the SNC flavor that is used by your ABAP back-end systems, is installed on
the Cloud Connector host.
● The Cloud Connector's system PSE is opened for the operating system user under which the Cloud Connector
process is running.
1. In the Cloud Connector UI, choose Configuration from the main menu, select the On Premise tab, and go to
the SNC section.
2. Provide the fully qualified name of the SNC library (the security product's shared library implementing the
GSS API), the SNC name of the above system PSE, and the desired quality of protection by choosing the Edit
icon.
For more information, see Initial Configuration (RFC) [page 200].
Note
The example in Initial Configuration (RFC) [page 200] shows the library location if you use the SAP Secure
Login Client as your SNC security product. In this case (as well as for some other security products), SNC
My Name is optional, because the security product automatically uses the PSE associated with the current
operating system user under which the process is running, so you can leave that field empty. (Otherwise, in
this example it should be filled with p:CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE.)
We recommend that you enter Maximum Protection for <Quality of Protection>, if your security
solution supports it, as it provides the best protection.
Create an RFC hostname mapping corresponding to the RFC destination that uses principal propagation on
cloud side
1. In the Access Control section of the Cloud Connector, create a hostname mapping corresponding to the cloud-
side RFC destination. See Configure Access Control (RFC) [page 202].
2. Make sure you choose RFC SNC as Protocol and ABAP System as Back-end Type. In the SNC Partner Name
field, enter the ABAP system's SNC name, for example, p:CN=SID, O=Trust Community, C=DE.
3. Save your mapping.
Principal propagation offers a secure way to forward an on-demand identity to the Cloud Connector. From there, it
forwards the identity to the back-end system.
For this purpose, you can define a pattern identifying the user for the subject of the generated short-lived X.509
certificate, as well as its validity period.
To configure such a pattern, choose Configuration On Premise and press the Edit icon in section Principal
Propagation:
Use either of the following to define the subject's distinguished name (DN), for which the certificate will be issued:
● Add or edit the subject pattern fields directly with free text.
● Use the selection menu of the corresponding field.
● ${name}
● ${mail}
● ${display_name}
● ${login_name} (as of Cloud Connector version 2.8.1.1)
The values for these variables are provided by the Certificate Authority, which also provides the values for
the subject's DN.
Sample Certificate
By choosing Generate Sample Certificate you can create a sample certificate that looks like one of the short-lived
certificates created at runtime. You can use this certificate to, for example, generate user mapping rules in the
target system, via transaction CERTRULE in an ABAP system. If your subject pattern contains variable fields, a
wizard lets you provide meaningful values for each of them and eventually you can save the sample certificate in
DER format.
Related Information
The Cloud Connector can use on-the-fly generated X.509 user certificates to log in to on-premise systems if the
external user session is authenticated (for example by means of SAML). If you do not want to use the built-in
certification authority (CA) functionality of the Cloud Connector (for example because of security considerations),
you can connect SAP SSO 2.0 Secure Login Server (SLS).
SLS is a Java application running on AS JAVA 7.20 or higher, which provides interfaces for certificate enrollment.
● HTTPS
● REST
Note
Any enrollment requires a successful user or client authentication, which can be single, multiple, or even
multi-factor authentication.
● LDAP/ADS
● RADIUS
● SAP SSO OTP
● ABAP RFC
● Kerberos/SPNego and
● X.509 TLS Client Authentication
SLS lets you define arbitrary enrollment profiles, each with a unique profile UID in its URL, and with a configurable
authentication and certificate generation.
Requirements
For user certification, SLS must provide a profile that adheres to the following:
With SAP SSO 2.0 SP06, SLS provides the following required features:
Implementation
INSTALLATION
Follow the standard installation procedures for SLS. This includes the initial setup of a PKI (public key
infrastructure).
Note
SLS allows you to set up one or more own PKIs with Root CA, User CA, and so on. You can also import CAs as
PKCS#12 file or use a hardware security module (HSM) as "External User CA".
CONFIGURATION
SSL Ports
1. Open the NetWeaver Administrator, choose Configuration SSL and define a new port with Client
Authentication Mode = REQUIRED.
Note
You may also define another port with Client Authentication Mode = Do not request if you have not
done so yet.
2. Import the root CA of the PKI that issued your Cloud Connector service certificate.
3. Save the configuration and restart the Internet Communication Manager (ICM).
Authentication Policy
Root CA Certificate
Cloud Connector
Follow the standard installation procedure of the Cloud Connector and configure SLS support:
1. Enter the policy URL that points to the SLS user profile group.
2. Select the profile, for example, Cloud Connector User Certificates.
3. Import the Root CA certificate of SLS into the Cloud Connector's truststore.
Follow the standard configuration procedure for Cloud Connector support in the corresponding target system and
configure SLS support.
To do so, import the Root CA certificate of SLS into the system's truststore:
● AS ABAP: choose transaction STRUST and follow the steps in Maintaining the SSL Server PSE's Certificate
List.
● AS Java: open the NetWeaver Administrator and follow the steps described in Configuring the SSL Key Pair
and Trusted X.509 Certificates.
Context
The Cloud Connector allows you to propagate users authenticated in SAP Cloud Platform via Kerberos against
back-end systems. It uses the Service For User and Constrained Delegation protocol extension of Kerberos.
The Key Distribution Center (KDC) is used for exchanging messages in order to retrieve Kerberos tokens for a
certain user and back-end system.
For more information, see Kerberos Protocol Extensions: Service for User and Constrained Delegation Protocol.
1. An SAP Cloud Platform application calls a back-end system via the Cloud
Connector.
2. The Cloud Connector calls the KDC to obtain a Kerberos token for the user
propagated from the cloud.
3. The obtained Kerberos token is sent as a credential to the back-end system.
Procedure
3. In the <KDC Hosts> field (press Add to display the field), enter the host name of your KDC using the format
<host>:<port>. The port is optional; if you leave it empty, the default, 88, is used.
Example
You have a back-end system protected with SPNego authentication in your corporate network. You want to call
it from a cloud application while preserving the identity of a cloud-authenticated user.
Result:
When you now call a back-end system, the Cloud Connector obtains an SPNego token from your KDC for the
cloud-authenticated user. This token is sent along with the request to the back end, which can then authenticate
the user so that the identity is preserved.
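The <host>:<port> format for KDC hosts described in the procedure, with its default port 88, can be parsed as in this sketch (the helper name is illustrative, not part of the Cloud Connector):

```python
def parse_kdc_host(entry: str, default_port: int = 88) -> tuple:
    """Split a KDC entry in <host>:<port> form.
    The port is optional; if omitted, the Kerberos default 88 is assumed."""
    host, sep, port = entry.partition(":")
    return (host, int(port) if sep else default_port)
```

For example, parse_kdc_host("kdc.mycompany.corp") falls back to port 88, while an explicit port such as "kdc.mycompany.corp:750" is taken as given.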
Related Information
Kerberos Configuration
Set Up Trust [page 298]
When you add subaccounts, you can copy the complete access control settings from another subaccount on the
same Cloud Connector. You can also do it any time later by using the import/export mechanism provided by the
Cloud Connector.
1. From your subaccount menu, choose Cloud To On-Premise and select the tab Access Control.
2. To store the current settings in a ZIP file, choose the Download icon in the upper-right corner.
There are two locations from which you can import access control settings:
There are also two options that influence the behavior of the import:
● Overwrite: All previously existing system mappings are removed and replaced by the imported ones. If you
don't select this option, the imported mappings are merged into the list of existing ones, which is the default
behavior. In either case, if the same virtual host-port combination already exists, it is overwritten by the
imported one.
● Include Resources: The resources that belong to the imported systems are also imported. If you don't select
this option, only the list of system mappings is imported, without any exposed resources.
Related Information
You can use a set of APIs to perform the basic setup of the Cloud Connector.
As of version 2.11, the Cloud Connector provides several REST APIs that let you configure a newly installed Cloud
Connector. The configuration options correspond to the following steps:
Note
All configuration APIs start with the following string: /api/v1/configuration.
Prerequisites
● After installing the Cloud Connector, you have changed the initial password.
● You have specified the high availability role of the Cloud Connector (master or shadow).
● You have configured the proxy on the master instance if required for your network.
Requests and responses are encoded in JSON format. In case of errors with return code 400, the status line contains
the error structure in JSON format:
Security
The Cloud Connector supports basic authentication and form-based authentication. Upon first request
under /api/v1, a CSRF token is generated and sent back in the response header. The client application must keep
this token and send it in all subsequent requests as header X-CSRF-Token.
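The token handling can be sketched as a small helper. The function name is illustrative, and the response headers are represented as a plain dict rather than a live HTTP response:

```python
def csrf_headers(response_headers: dict) -> dict:
    """Build headers for a follow-up request by echoing the X-CSRF-Token
    that the Cloud Connector returned on the first /api/v1 request."""
    headers = {"Content-Type": "application/json"}
    token = response_headers.get("X-CSRF-Token")
    if token:
        headers["X-CSRF-Token"] = token
    return headers

# Simulated first response carrying the generated token:
first_response_headers = {"X-CSRF-Token": "abc123"}
follow_up_headers = csrf_headers(first_response_headers)
```

A real client would capture the header from the initial GET and reuse it for every subsequent state-changing request in the session.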
Return Codes
Successful requests return the code 200, or, if there is no content, 204. POST actions that create new entities
return 201, with the location link in the header.
403 – the current Cloud Connector instance does not allow changes. For example, the instance has been assigned
to the shadow role and therefore does not allow configuration changes.
409 – current state of the Cloud Connector does not allow changes. For example, the password has to be changed
first.
Note
The error texts depend on the request locale.
Entities returned by the APIs contain links as suggested by the current draft of the JSON Hypertext Application
Language (see https://tools.ietf.org/html/draft-kelly-json-hal-08).
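As an illustration (not a verbatim server response), a HAL-style entity with a self link might look like this, using the haRole entity from the API tables below:

```json
{
  "haRole": "master",
  "_links": {
    "self": { "href": "/api/v1/configuration/connector/haRole" }
  }
}
```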
Available APIs
System Mapping Resources [page 336] ● Get list of system mapping resources
● Create system mapping resources
● Delete system mapping resources
● Edit system mapping
● Read system mapping
Method GET
Request
Errors
URI /api/v1/configuration/connector
Method PUT
Request {description=<value>}
Response
Errors
URI /api/v1/configuration/connector/haRole
Method GET
Request
Errors
URI /api/v1/configuration/connector/haRole
Method POST
Request {haRole=[master|shadow]}
Response
URI /api/v1/configuration/connector/proxy
Method GET
Request
Errors
URI /api/v1/configuration/connector/proxy
Method PUT
Response
URI /api/v1/configuration/connector/
authentication
Method GET
Request
URI /api/v1/configuration/connector/
authentication/basic
Method PUT
Request 1. {oldPassword, newPassword}
        2. {password, user}
Response {}
URI /api/v1/configuration/connector/
authentication/ldap
Method PUT
where configuration is
host is
Response
URI /api/v1/configuration/connector/
solutionManagement
Method GET
Request
Errors
This API turns on the integration with the Solution Manager. The prerequisite is an available Host Agent. You can
specify a path to the Host Agent executable, if you don't use the default path.
Method POST
Request {hostAgentPath}
Response
Errors
URI /api/v1/configuration/connector/
solutionManagement
Method DELETE
Request
Response
Errors
1.5.3.2.5.6 Backup
URI /api/v1/configuration/backup
Method POST
Request {password}
Errors
URI /api/v1/configuration/backup
Method PUT
Errors 400
Note
Since this API uses a multipart request, it requires a multipart request header.
1.5.3.2.5.7 Subaccount
List Subaccounts
URI /api/v1/configuration/subaccounts
Method GET
Request
Create Subaccount
URI /api/v1/configuration/subaccounts
Method POST
Request {cloudUser, cloudPassword, displayName, description, regionHost, subaccount, locationID}
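A request body for Create Subaccount could look like this; all values are illustrative placeholders, not defaults:

```json
{
  "cloudUser": "p1234567",
  "cloudPassword": "********",
  "displayName": "Production",
  "description": "Productive subaccount",
  "regionHost": "hana.ondemand.com",
  "subaccount": "mysubaccount",
  "locationID": ""
}
```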
Delete Subaccount
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>
Method DELETE
Request
Response 204
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>
Method PUT
Response
Errors 400
Connect/Disconnect Subaccount
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/state
Method PUT
Response
Errors 400
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/validity
Method POST
Response
Errors 400
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/recovery
Method PUT
Response
Errors 400
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/recovery
Method DELETE
Request
Response
Errors 400
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/recovery/
validity
Method POST
Response
Errors 400
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>
Method GET
Request
Response {"tunnel": {"state", "connections": int, "applicationConnections": [], "serviceChannels": []},
"_links", "regionHost", "subaccount", "locationID"}
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings
Method GET
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings
Method POST
Method DELETE
Request
Response {}
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings
Method DELETE
Request
Response {}
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings/
virtualHost:virtualPort
Method PUT
Response
Errors 400
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings/
virtualHost:virtualPort
Method GET
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings/
virtualHost:virtualPort/resources
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings/
virtualHost:virtualPort/resources
Method POST
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings/
virtualHost:virtualPort/resources/
<encodedResourceId>
Method DELETE
Request
Response {}
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings/
virtualHost:virtualPort/resources/
<encodedResourceId>
Method PUT
Response
Errors 400
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/systemMappings/
virtualHost:virtualPort/resources/
<encodedResourceId>
Method GET
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/domainMappings
Method GET
Request
Errors
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/domainMappings
Method POST
Method DELETE
Request
Response {}
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/domainMappings/
<internalDomain>
Method PUT
Response
Errors 400
Method GET
Request
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels
Method POST
Request {typeKey, details, serviceNumber, connectionCount}
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/<id>
Method DELETE
Response {}
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/<id>
Method PUT
Request {typeKey, details, serviceNumber, connectionCount}
Response 204
URI /api/v1/configuration/subaccounts/
<regionHost>/<subaccount>/channels/<id>/
state
Method PUT
Request {enabled:boolean}
Response
500
{type:"RUNTIME_FAILURE","message":"Service channel could not be opened"}
Prerequisites
● Configure your cloud application to use an on-premise user provider and to consume users from LDAP via the
Cloud Connector. To do this, execute the following command:
● Create a connectivity destination using the following parameters, to configure the on-premise user provider:
Name=onpremiseumconnector
Type=HTTP
URL= http://scc.scim:80/scim/v1
Authentication=NoAuthentication
CloudConnectorVersion=2
ProxyType=OnPremise
Context
You can configure SAP Cloud Platform applications to use your corporate LDAP server as a user store. This means
that the platform doesn't need to keep the entire user database but requests the necessary information from the
LDAP server. Java applications that are running on the SAP Cloud Platform can use the on-premise system to
check credentials, search for users, and retrieve details. In addition to the user information, the cloud application
may request information about the groups a user belongs to.
One way a Java cloud application can define user authorizations is by checking a user's membership in specific
groups in the on-premise user store. The application uses the roles for the groups defined in SAP Cloud Platform.
For more information, see Managing Roles [page 2151].
Procedure
Note
Multiple hosts are currently not supported.
Note
The user name must be fully qualified, including the AD domain suffix, for example,
john.smith@mycompany.com.
6. In User Path, specify the LDAP subtree that contains the users.
7. In Group Path, specify the LDAP subtree that contains the groups.
8. Choose Save.
Context
With service channels, the Cloud Connector allows secure and reliable access from an external network to certain
services on SAP Cloud Platform. These services are not exposed to direct access from the Internet. The Cloud
Connector ensures that the connection is always available and communication is secured:
● The service channel for the HANA Database allows accessing HANA databases that run in the cloud with
database clients (for example, clients using ODBC/JDBC drivers). You can use the service channel to connect
database, analytical, BI, or replication tools to your HANA database in your SAP Cloud Platform subaccount.
● You can use the virtual machine (VM) service channel to access an SAP Cloud Platform VM using an SSH client,
and adjust it to your needs.
● The service channel for RFC supports calls from on-premise systems to S/4HANA Cloud using RFC,
establishing a connection to an S/4HANA Cloud tenant host.
Next Steps
You can establish a connection to an SAP HANA database in the SAP Cloud Platform that is not directly exposed to
external access.
Context
The service channel for HANA Database allows accessing SAP HANA databases running in the cloud via ODBC/
JDBC. You can use the service channel to connect database, analytical, BI, or replication tools to a HANA database
in your SAP Cloud Platform subaccount.
Note
The following procedure requires a productive HANA instance to be available in the respective subaccount.
Procedure
3. In the Add Service Channel dialog, leave the default value HANA Database in the <Type> field.
4. Choose Next.
5. Choose the HANA instance name. If you cannot select from the list, enter the instance name, which must
match one of the names (IDs) shown in the cockpit under SAP HANA/SAP ASE Databases & Schemas ,
in the <DB/Schema ID> column.
Note
The HANA instance name is case-sensitive.
6. Specify the local instance number. This is a double-digit number which computes the local port used to access
the HANA instance in the cloud. The local port is derived from the local instance number as 3<instance
number>15. For example, if the instance number is 22, then the local port is 32215.
7. Leave Enabled selected to establish the channel immediately after clicking Finish, or unselect it if you don't
want to establish the channel immediately.
8. Choose Finish.
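The local-port computation in step 6 can be expressed as a short sketch (the helper name is illustrative):

```python
def local_hana_port(instance_number: int) -> int:
    """Derive the local port 3<nn>15 from a two-digit local instance number,
    as described for the HANA Database service channel."""
    return int("3%02d15" % instance_number)

# Instance number 22 yields local port 32215, matching the example above.
local_hana_port(22)
```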
Next Steps
Once you have established a HANA Database service channel, you can connect on-premise database or BI tools to
the selected HANA database in the cloud. This may be done by using
<cloud_connector_host>:<local_HANA_port> in the JDBC/ODBC connect strings.
See Connect DB Tools to SAP HANA via Service Channels [page 347].
Context
You can connect database, BI, or replication tools running in an on-premise network to a HANA database on SAP
Cloud Platform using service channels of the Cloud Connector. You can also use the high availability support of the
Cloud Connector on a database connection. The picture below shows the landscape in such a scenario.
● For more information on using SAP HANA instances, see Using an SAP HANA XS Database System [page 1240]
● For the connection string via ODBC you need a corresponding database user and password (see step 4 below).
See also: Creating Database Users [page 1244]
● Find detailed information on failover support in the SAP HANA Administration Guide: Configuring Clients for
Failover.
Note
This link points to the latest release of SAP HANA Administration Guide. Refer to the SAP Cloud Platform
Release Notes to find out which HANA SPS is supported by SAP Cloud Platform. Find the list of guides
for earlier releases in the Related Links section below.
Procedure
1. To establish a highly available connection to one or multiple SAP HANA instances in the cloud, we recommend
that you make use of the failover support of the Cloud Connector. Set up a master and a shadow instance. See
Install a Failover Instance for High Availability [page 361].
Example:
jdbc:sap://<cloud-connector-master-host>:30015;<cloud-connector-shadow-host>:
30015[/?<options>]
The SAP HANA JDBC driver supports failover out of the box. All you need is to configure the shadow instance
of the Cloud Connector as a failover server in the JDBC connection string. The different options supported in
the JDBC connection string are described in: Connect to SAP HANA via JDBC
4. You can also connect on-premise DB tools via ODBC to the SAP HANA database. Use the following connection
string:
"DRIVER=HDBODBC32;UID=<user>;PWD=<password>;SERVERNODE=<cloud-connector-master-
host>:30015;<cloud-connector-shadow-host>:30015;"
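Assembling such failover connect strings can be sketched as below. Host names, user, and password are placeholders, and real drivers accept further options beyond this minimal form:

```python
def jdbc_url(master: str, shadow: str, port: int = 30015) -> str:
    """JDBC URL with the Cloud Connector shadow instance as failover server."""
    return "jdbc:sap://%s:%d;%s:%d" % (master, port, shadow, port)

def odbc_string(user: str, pwd: str, master: str, shadow: str,
                port: int = 30015) -> str:
    """ODBC connect string in the same master;shadow failover form."""
    return ("DRIVER=HDBODBC32;UID=%s;PWD=%s;SERVERNODE=%s:%d;%s:%d;"
            % (user, pwd, master, port, shadow, port))
```

For instance, jdbc_url("scc-master.corp", "scc-shadow.corp") produces the two-node server list that the SAP HANA JDBC driver uses for its built-in failover.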
Related Information
Context
You can establish a connection to a virtual machine (VM) in the SAP Cloud Platform that is not directly exposed to
external access. Use On-Premise to Cloud Service Channels in the Cloud Connector. The service channel for
Virtual Machine lets you access a VM running in the cloud via SSH. You can use the service
channel to manage the VM and adjust it to your needs.
Note
The following procedure requires that you have created a VM in your subaccount.
Procedure
3. In the Add Service Channel dialog, select Virtual Machine from the list of supported channel types.
4. Choose Next. The Virtual Machine dialog opens.
5. Choose the Virtual Machine <Name> from the list of available Virtual Machines. It matches the corresponding
name shown under Virtual Machines in the cockpit.
Note
The Virtual Machine name is case-sensitive.
6. Choose the <Local Port>. You can use any port that is not used yet.
7. Leave <Enabled> selected to establish the channel immediately after clicking Save. Unselect it if you don't
want to establish the channel immediately.
8. Choose Finish.
Next Steps
Once you have established a service channel for the Virtual Machine, you can connect it with your SSH client.
This may be done by accessing <Cloud_connector_host>:<local_VM_port> and the key file that was
generated when creating the virtual machine.
Related Information
For scenarios that need to call from on-premise systems to S/4HANA Cloud using RFC, you can establish a
connection to an S/4HANA Cloud tenant host. To do this, select On-Premise to Cloud Service Channels in
the Cloud Connector.
Procedure
3. In the Add Service Channel dialog, select S/4HANA Cloud from the drop-down list of supported channel
types.
4. Choose Next. The S/4HANA Cloud dialog opens.
5. Enter the <S/4HANA Cloud Tenant> host name that you want to connect to.
Note
The S/4HANA Cloud tenant host name is case-sensitive. Also make sure that you specify the API address of
your tenant host. For example, if the tenant host of your instance is my1234567.s4hana.ondemand.com,
the API tenant host to be specified is my1234567-api.s4hana.ondemand.com.
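The derivation of the API tenant host from the regular tenant host can be sketched as follows (the helper name is illustrative):

```python
def api_host(tenant_host: str) -> str:
    """Derive the S/4HANA Cloud API tenant host by appending '-api'
    to the first label of the tenant host name."""
    first, dot, rest = tenant_host.partition(".")
    return first + "-api" + dot + rest

api_host("my1234567.s4hana.ondemand.com")
```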
Context
Some HTTP servers return cookies that contain a domain attribute. For subsequent requests, HTTP clients should
send these cookies to machines whose host names lie in the specified domain. If, for example, a server in the
domain mycompany.corp sets a cookie with the attribute domain=mycompany.corp, the client returns the cookie
in follow-up requests to all hosts like ecc60.mycompany.corp, crm40.mycompany.corp, and so on, provided that
other attributes, such as path, allow it.
However, in a Cloud Connector setup between a client and a Web server, this may lead to problems. For example,
assume that you have defined a virtual host sales-system.cloud and mapped it to the internal host name
ecc60.mycompany.corp. The client "thinks" it is sending an HTTP request to the host name sales-system.cloud,
while the Web server, unaware of the above host name mapping, sets a cookie for the domain mycompany.corp.
The client does not know this domain name and thus, for the next request to that Web server, doesn't attach the
cookie, which it should do. The procedure below prevents this problem.
Procedure
1. From your subaccount menu, choose Cloud To On-Premise, and go to the Cookie Domains tab.
2. Choose Add.
3. Enter cloud as the virtual domain, and your company name as the internal domain.
4. Choose Save.
The Cloud Connector checks the Web server's response for Set-Cookie headers. If it finds one with an
attribute domain=intranet.corp, it replaces it with domain=sales.cloud before returning the HTTP
response to the client. Then, the client recognizes the domain name, and for the next request against
www1.sales.cloud it attaches the cookie, which then successfully arrives at the server on
machine1.intranet.corp.
Note
The value of the domain attribute may be a simple host name, in which case no extra domain mapping is
necessary on the Cloud Connector. If the server sets a cookie with domain=machine1.intranet.corp,
the Cloud Connector automatically reverses the mapping machine1.intranet.corp to
www1.sales.cloud and replaces the cookie domain accordingly.
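The Set-Cookie rewriting described above can be mimicked with a small sketch. The function name and the mapping dict are illustrative; the real Cloud Connector derives the mapping from its cookie-domain configuration:

```python
def rewrite_cookie_domain(set_cookie: str, mappings: dict) -> str:
    """Replace an internal cookie domain with its virtual counterpart,
    leaving all other cookie attributes untouched."""
    parts = []
    for attr in set_cookie.split("; "):
        if attr.lower().startswith("domain="):
            internal = attr[len("domain="):]
            attr = "domain=" + mappings.get(internal, internal)
        parts.append(attr)
    return "; ".join(parts)

rewrite_cookie_domain("JSESSIONID=xyz; domain=intranet.corp; path=/",
                      {"intranet.corp": "sales.cloud"})
```

A domain that has no entry in the mapping is passed through unchanged, which corresponds to the simple-host-name case in the Note above.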
Related Information
If you want to monitor the Cloud Connector with the SAP Solution Manager, you can install a host agent on the
machine of the Cloud Connector and register the Cloud Connector on your system.
Prerequisites
● You have installed the SAP Diagnostics Agent and SAP Host Agent on the Cloud Connector host and
connected them to the SAP Solution Manager.
● The SAP Solution Manager must be of release 7.2 SP06 or higher.
For more details about the host agent and diagnostics agent, see SAP Host Agent and the SCN Wiki SAP Solution
Manager Setup/Managed_System_Checklist . For consulting, contact your local SAP partner.
Note
If the host agent is not available on the Cloud Connector host, the Cloud Connector (and, if attached, a shadow
instance) generate a registration file named lmdbModel.xml in the current working directory. You can download
and copy this file to the Solution Manager host and upload it into the Solution Manager to establish the
integration of the Cloud Connector with the Solution Manager manually. However, automatic registration is the
recommended standard procedure unless installation or connection of the host or diagnostics agent should fail
permanently. See also SAP notes 2607632 (SAP Solution Manager 7.2 - Managed System Configuration for
SAP Cloud Connector) and 1018839 (Registering in the System Landscape Directory using sldreg).
1. From the Cloud Connector main menu, choose Configuration Reporting . In section Solution
Management of the Reporting tab, select Edit.
Note
To download the registration file lmdbConfig.xml, choose the icon Download registration file from the Reporting
tab.
Related Information
Adapt connectivity settings that control the throughput by choosing the appropriate limits (maximal values).
If required, you can adjust the following parameters for the communication tunnel by changing their default values:
For detailed information on connection configuration requirements, see Configuration Setup [page 267].
1. From the Cloud Connector main menu, choose Configuration Advanced . In section Connectivity, select
Edit.
2. In the Edit Connectivity Settings dialog, change the parameter values as required.
3. Choose Save.
Additionally, you can specify the number of allowed tunnel connections for each application that you have
specified as a trusted application [page 300].
Note
If you don't change the value for a trusted application, it keeps the default setting specified above. If you change
the value, it may be higher or lower than the default and must be higher than 0.
1. From your subaccount menu, choose Cloud To On-Premise Applications . In section Tunnel Connection
Limits, choose Add.
2. In the Edit Tunnel Connections Limit dialog, enter the <Application Name> and change the number of
<Tunnel Connections> as required.
3. Choose Save.
If required, you can adjust the following parameters for the Java VM by changing their default values:
Note
A restart is required when changing JVM settings.
We recommend that you set the initial heap size equal to the maximum heap size, to avoid memory
fragmentation.
1. From the Cloud Connector main menu, choose Configuration Advanced . In section JVM, select Edit.
2. In the Edit JVM Settings dialog, change the parameter values as required.
3. Choose Save.
2. To backup or restore your configuration, choose the respective icon in the upper right corner of the screen.
Note
An archive containing a snapshot of the current Cloud Connector configuration is created and
downloaded by your browser. You can use the archive to restore the current state on this or any other
Cloud Connector installation. For security reasons, some files are protected by a password.
2. To restore your configuration, enter the required password in the Restore from Archive dialog and choose
Restore.
Note
The restore action overwrites the current configuration of the Cloud Connector. It will be permanently
lost unless you have created another backup before restoring. Upon successfully restoring the
configuration, the Cloud Connector restarts automatically. All sessions are then terminated.
1.5.3.3 Operations
Learn more about operating the Cloud Connector, using its administration tools and optimizing its functions.
Topic Description
Use LDAP for Authentication [page 358] If you operate an LDAP server in your system landscape, you
can configure the Cloud Connector to use the users who are
available on the LDAP server.
Install a Failover Instance for High Availability [page 361] The Cloud Connector lets you install a redundant (shadow) in
stance, which monitors the main (master) instance.
Change the UI Port [page 366] Use the changeport tool (Cloud Connector version 2.6.0+) to
change the port for the Cloud Connector administration UI.
Secure the Activation of Traffic Traces [page 367] Tracing of network traffic data may contain business critical in
formation or security sensitive data. You can implement a
"four-eyes" (double check) principle to protect your traces
(Cloud Connector version 1.3.2+).
Monitoring [page 368] Use various views to monitor the activities and state of the
Cloud Connector.
Alerting [page 381] Configure the Cloud Connector to send email alerts whenever
critical situations occur that may prevent it from operating.
Audit Logging [page 383] Use the auditor tool to view and manage audit log information
(Cloud Connector version 2.2+).
Troubleshooting [page 386] Information about monitoring the state of open tunnel con
nections in the Cloud Connector. Display different types of
logs and traces that can help you troubleshoot connection
problems.
Operator's Guide [page 390] Detailed information and procedures for operating the Cloud
Connector.
You can use LDAP (Lightweight Directory Access Protocol) to configure Cloud Connector authentication.
After installation, the Cloud Connector uses file-based user management by default. As an alternative to this file-
based user management, the Cloud Connector also supports LDAP-based user management. If you have an LDAP
server in your landscape, you can configure the Cloud Connector to use the users who are available on that LDAP
server. All users that are in a group named admin or sccadmin will have the necessary authorization for
administrating the Cloud Connector.
1. From the main menu, choose Configuration and go to the User Interface tab.
2. From the Authentication section, choose Switch to LDAP.
3. Optional: to save an intermediate state of the LDAP configuration, choose Save Draft.
4. Usually, the LDAP server lists users in an LDAP node and user groups in another node. In this case, you can
use the following template for LDAP configuration. Copy the template into the configuration text area:
userPattern="uid={0},ou=people,dc=mycompany,dc=com"
roleBase="ou=groups,dc=mycompany,dc=com"
roleName="cn"
5. Change the ou and dc fields in userPattern and roleBase, according to the configuration on your LDAP
server, or use some other LDAP query.
6. Provide the LDAP server's host and port (port 389 is used by default) in the Host field. To use the secure
protocol variant LDAPS based on TLS, select Secure.
7. Provide a failover LDAP server's host and port (port 389 is used by default) in the Alternate Host field. To use
the secure protocol variant LDAPS based on TLS, select Secure.
8. (Optional) You can provide a service user and its password.
9. (Optional) You can override the role to check for permissions in User Role. The default role is sccadmin.
10. Optionally, you can override the role to check for permissions in Monitoring Role. If not provided, Cloud
Connector will check permissions for the default role sccmonitoring for the monitoring APIs.
11. After finishing the configuration, choose Activate. Activating the LDAP configuration restarts the local
server, which invalidates the current browser session. Refresh the browser and log on to the Cloud Connector
again, using the credentials configured at the LDAP server.
12. To switch back to file-based user management, choose the Switch icon again.
For more information about how to set up LDAP authentication, see tomcat.apache.org/tomcat-7.0-doc/realm-howto.html.
Note
If you are using LDAP together with a high availability setup with master and shadow, you cannot use the
configuration option userPattern. Instead, use a combination of userSearch, userSubtree and userBase.
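In a high availability setup, the userPattern line of the template above can be replaced by a search-based configuration, for example as follows. This is only a sketch: the attribute names (userSearch, userBase, userSubtree) follow the Tomcat realm configuration referenced above, and the ou/dc values are placeholders that must be adapted to your LDAP server, just like in the template:

```
userSearch="(uid={0})"
userBase="ou=people,dc=mycompany,dc=com"
userSubtree="true"
roleBase="ou=groups,dc=mycompany,dc=com"
roleName="cn"
```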
Note
If you have set up an LDAP configuration incorrectly, you may no longer be able to log on to the Cloud Connector.
In this case, adjust the Cloud Connector configuration to use the file-based user store again, without using the
administration UI. For more information, see the next section.
You can also configure LDAP authentication on the shadow instance in a high availability setup. From the main
menu of the shadow instance, select Shadow, then go to the Authentication tab:
If your LDAP settings do not work as expected, you can use the useFileUserStore tool, provided with Cloud
Connector version 2.8.0 and higher, to revert back to the file-based user store:
1. Change to the installation directory of the Cloud Connector and enter the following command:
○ Microsoft Windows: useFileUserStore
○ Linux, Mac OS: ./useFileUserStore.sh
2. Restart the Cloud Connector to activate the file-based user store.
For versions older than 2.8.0, you must manually edit the configuration files.
<Realm className="org.apache.catalina.realm.LockOutRealm">
  <Realm className="org.apache.catalina.realm.CombinedRealm">
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
      X509UsernameRetrieverClassName="com.sap.scc.tomcat.utils.SccX509SubjectDnRetriever"
      digest="SHA-256" resourceName="UserDatabase"/>
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
      X509UsernameRetrieverClassName="com.sap.scc.tomcat.utils.SccX509SubjectDnRetriever"
      digest="SHA-1" resourceName="UserDatabase"/>
  </Realm>
</Realm>
The Cloud Connector allows you to install a redundant instance, which monitors the main instance.
Context
In a failover setup, if the main instance goes down for some reason, the redundant instance can take over its role.
The main instance of the Cloud Connector is called the master, and the redundant instance is called the shadow. The
shadow has to be installed and connected to its master. During the setup of high availability, the master pushes the
entire configuration to the shadow. Later on, during normal operation, the master also pushes configuration
updates to the shadow. Thus, the shadow instance is kept synchronized with the master instance. The shadow
pings the master regularly. If the master is not reachable for a while, the shadow tries to take over the master role
and to establish the tunnel to SAP Cloud Platform.
Note
For detailed information about sizing of the master and the shadow instance, see also Sizing Recommendations
[page 263].
If this flag is not activated, no shadow instance can connect to this Cloud Connector. Additionally, by
providing a concrete Shadow Host, you can ensure that only a shadow instance from this host can
connect.
Install the shadow instance in the same network segment as the master instance. Communication between
master and shadow via proxy is not supported. The same distribution package is used for master and shadow
instance.
Note
If you plan to use LDAP for the user authentication on both master and shadow, make sure you configure it
before you establish the connection from shadow to master.
1. On first start-up of a Cloud Connector instance, a UI wizard asks you whether the current instance should be
master or shadow. Choose Shadow and press Save:
If you want to attach the shadow instance to a different master, choose the Reset button.
Note
The Reset button sets all high availability settings to their initial state. High availability is disabled and the
shadow host is cleared. Resetting works only if no shadow is connected.
4. The UI on the master instance shows information about the connected shadow instance. From the main menu,
choose High Availability:
5. As of version 2.6.0, the High Availability view includes an Alert Messages panel. It displays alerts if
configuration changes have not been pushed successfully. This might happen, for example, if a temporary
network failure occurs at the same time a configuration change is made. This panel lets an administrator know
if there is an inconsistency in the configuration data between master and shadow that could cause trouble if
the shadow needs to take over. Typically, the master recognizes this situation and tries to push the
configuration change at a later time automatically. If this is successful, all failure alerts are removed and
replaced by a warning alert showing that there had been trouble before. As of version 2.8.0.1, these alerts have
been integrated in the general Alerting section; there is no longer a separate Alert Messages panel.
If the master doesn't recover automatically, disconnect, then reconnect the shadow, which triggers a complete
configuration transfer.
There are several administration activities you can perform on the shadow instance. All configuration of tunnel
connections, host mappings, access rules, and so on, must be maintained on the master instance; however, you
can replicate them to the shadow instance for display purposes. You may want to modify the check interval (time
between checks of whether the master is still alive) and the takeover delay (time the shadow waits to see whether
the master would come back online, before taking over the master role itself).
You can use the Reset button to drop all the configuration information on the shadow that is related to the master,
but only if the shadow is not connected to the master.
Failover Process
The shadow instance regularly checks whether the master instance is still alive. If a check fails, the shadow
instance first attempts to reestablish the connection to the master instance for the time period specified by the
takeover delay parameter.
● If no connection becomes possible during the takeover delay time period, the shadow tries to take over the
master role. At this point, it is still possible for the master to be alive and the trouble to be caused by a network
issue between the shadow and master. The shadow instance next attempts to establish a tunnel to the given
SAP Cloud Platform subaccount. If the original master is still alive (that is, its tunnel to the cloud subaccount is
still active), this attempt is denied and the shadow instance remains in "shadow status", periodically pinging
the master and trying to connect to the cloud, while the master is not yet reachable.
● If the takeover delay period has fully elapsed, and the shadow instance does make a connection, the cloud side
opens a tunnel and the shadow instance takes over the role of the master. From this point, the shadow
instance shows the UI of the master instance and allows the usual operations of a master instance, for
example, starting/stopping tunnels, modifying the configuration, and so on.
Note
Only one shadow instance is supported. Any further shadow instances that attempt to connect are declined by
the master instance.
The master considers a shadow as lost if no check/ping is received from that shadow instance during a time
interval that is equal to three times the check period. Only after this much time has elapsed can another shadow
system register itself.
Note
On the master, you can manually trigger failover by selecting the Switch Roles button. If the shadow is available,
the switch is made as expected. Even if the shadow instance cannot be reached, the role switch of the master
may still be enforced. Select Switch Roles only if you are absolutely certain it is the correct action to take for
your current circumstances.
Related Information
Context
By default, the Cloud Connector uses port 8443 for its administration UI. If this port is blocked by another process,
or if you want to change it after the installation, you can use the changeport tool, provided with Cloud Connector
version 2.6.0 and higher.
Note
On Windows, you can also choose a different port during installation.
1. Change to the installation directory of the Cloud Connector. To adjust the port, execute one of the following
commands:
○ Microsoft Windows OS: changeport <desired_port>
○ Linux, Mac OS: ./changeport.sh <desired_port>
2. When you see a message stating that the port has been successfully modified, restart the Cloud Connector to
activate the new port.
Context
For support purposes, all network traffic (HTTP/RFC requests and responses) through a Cloud Connector can be
traced. This traffic data may include business-critical information or security-sensitive data, such as user names,
passwords, address data, credit card numbers, and so on. Thus, by activating the corresponding trace level, a
Cloud Connector administrator might see data that he or she is not meant to see. To prevent this, implement
the four-eyes principle, which is supported by Cloud Connector release 1.3.2 and higher.
Once the four-eyes principle is applied, activating a trace level that dumps traffic data will require two separate
users:
● An operating system user on the machine where the Cloud Connector is installed;
● An Administrator user of the Cloud Connector user interface.
By assigning these roles to two different people, you can ensure that both persons are needed to activate a traffic
dump.
1. Create a file named writeHexDump in <scc_install_dir>\scc_config. The owner of this file must be a
user other than the operating system user who runs the cloud connector process.
Note
Usually, this file owner is the user which is specified in the Log On tab in the properties of the cloud
connector service (in the Windows Services console). We recommend that you use a dedicated OS user
for the cloud connector service.
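The file creation can be sketched as follows, shown for Linux. The installation path is a placeholder, and in production the ownership change (commented out here) must be performed by an administrator so that the file owner differs from the OS user running the Cloud Connector process:

```shell
# Placeholder installation directory; substitute your real <scc_install_dir>.
SCC_INSTALL_DIR="${SCC_INSTALL_DIR:-/tmp/scc-demo}"
mkdir -p "${SCC_INSTALL_DIR}/scc_config"

# Create the marker file that enables traffic (hex dump) traces.
touch "${SCC_INSTALL_DIR}/scc_config/writeHexDump"

# In production, assign the file to a different OS user (requires privileges):
# chown <other_os_user> "${SCC_INSTALL_DIR}/scc_config/writeHexDump"
```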
1.5.3.3.5 Monitoring
The cockpit includes a Connectivity view, where users can check the status of the Cloud Connector attached to the
current subaccount, if any, as well as information about the Cloud Connector ID, version, used Java runtime, high
availability setup, and so on. Access to this view is, by default, granted to administrators, developers, and
support users.
The Cloud Connector offers various views for monitoring its activities and state.
Performance
All requests that travel through the Cloud Connector to a back end as specified through access control take a
certain amount of time. You can check the duration of requests in a bar chart. The requests are not shown
individually, but are assigned to buckets, each of which represents a time range.
For example, the first bucket contains all requests that took 10ms or less, the second one the requests that took
longer than 10ms, but not longer than 20ms. The last bucket contains all requests that took longer than 5000ms.
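The bucket assignment can be sketched as follows. The sample durations are made-up values, and only the boundary buckets named in the text (up to 10ms, 10-20ms, over 5000ms) are spelled out, with all other ranges collapsed into a single placeholder:

```shell
# Map a request duration (in ms) to its duration bucket.
bucket() {
  if   [ "$1" -le 10 ];   then echo "0-10ms"
  elif [ "$1" -le 20 ];   then echo "10-20ms"
  elif [ "$1" -gt 5000 ]; then echo ">5000ms"
  else                         echo "intermediate bucket"
  fi
}

# Sample (made-up) request durations:
for d in 4 12 9 37 5210 18; do
  echo "${d}ms -> $(bucket "$d")"
done
```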
The collection of duration statistics starts as soon as the Cloud Connector is operational. You can delete all of
these statistical records by selecting the button Delete All. After that, the collection of duration statistics starts
over.
Note
Delete All deletes not only the list of most recent requests, but it also clears the top time consumers.
A horizontal stacked bar chart breaks down the duration of the request into several parts: external (back end),
open connection, internal (SCC), SSO handling, and latency effects. The numbers in each part represent
milliseconds.
Note
Parts with a duration of less than 1ms are not included.
In the above example, the selected request took 25ms, to which the Cloud Connector contributed 1ms. Opening a
connection took 5ms, and back-end processing consumed 7ms. Latency effects accounted for the remaining 12ms;
no SSO handling was necessary, so it took no time at all.
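The parts of such a stacked bar always add up to the total request duration; for the sample numbers in this example:

```shell
# Per-part durations (ms) from the example above (SSO handling took no time).
internal=1; open_connection=5; backend=7; latency=12; sso=0
total=$((internal + open_connection + backend + latency + sso))
echo "total: ${total}ms"
```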
To further restrict the selection of the listed 50 most recent requests, you can edit the resource filter settings for
each virtual host:
Note
If you specify sub-paths for a resource, the request URL must match exactly one of these entries to be
recorded. Without specified sub-paths (and the value Path and all sub-paths set for a resource), all sub-
paths of a specified resource are recorded.
This option is similar to Most Recent Requests; however, requests are not shown in order of appearance, but rather
sorted by their duration (in descending order). Furthermore, you can delete top time consumers, which has no
effect on most recent requests or the performance overview.
Back-End Connections
The maximum idle time appears on the rightmost side of the horizontal axis. For any point t on that axis
(representing a time value ranging between 0ms and the maximum idle time), the ordinate is the number of
connections that have been idle for no longer than t. You can click inside the graph area to view the respective
abscissa t and ordinate.
Hardware Metrics
You can check the current state of critical system resources using pie charts. The history of CPU and memory
usage (recorded in intervals of 15 seconds) is also shown graphically.
The data in its entirety is always visible in the smaller bottom area right below the main graph.
If you have zoomed in, an excerpt window in the bottom area shows you where you are in the main area with
respect to all of the data. You can:
● View usage at a certain point in time by clicking inside the main graph area, and
● Zoom in on a certain excerpt of the historic data.
Related Information
Use the Cloud Connector monitoring APIs to include monitoring information in your own monitoring tool.
You might want to integrate some monitoring information in the monitoring tool you use.
For this purpose, the Cloud Connector includes a collection of APIs that allow you to read the following sets of
monitoring data:
Prerequisites
You must use Basic Authentication or form field authentication to read the monitoring data via API.
Note
The Health Check API does not require a specified user.
Note
Separate users are available through LDAP only.
URL Parameters
https://<scchost>:<sccport>/xxx
Available APIs
Using the health check API, it is possible to recognize that the Cloud Connector is up and running. The purpose of
this health check is only to verify that the Cloud Connector is not down. It does not check any internal state or
tunnel connection states. Thus, it is a quick check that you can execute frequently:
URL https://<scc_host>:<scc_port>/exposed?action=ping
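The endpoints can be called with curl, for example. Host, port, and user below are placeholders; the -k flag skips certificate verification and is only appropriate while the administration UI still uses its initial self-signed certificate:

```shell
SCC_HOST="${SCC_HOST:-scchost}"   # placeholder host name
SCC_PORT="${SCC_PORT:-8443}"      # default administration port
BASE_URL="https://${SCC_HOST}:${SCC_PORT}"

# Health check: no user required; an HTTP 200 response means the
# Cloud Connector process is up (tunnel state is not checked).
# curl -k "${BASE_URL}/exposed?action=ping"

# The other monitoring APIs require Basic Authentication (or form field
# authentication):
# curl -k -u <admin_user> "${BASE_URL}/api/monitoring/subaccounts"

echo "${BASE_URL}/exposed?action=ping"
```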
URL https://<scchost>:<sccport>/api/monitoring/
subaccounts
Input None
Example:
The list of connections lets you view all back-end systems connected to the Cloud Connector and get detail
information for each connection:
URL https://<scchost>:<sccport>/api/monitoring/
connections/backends
Input None
Example:
Using this API, you can read the data provided by the Cloud Connector performance monitor:
URL https://<scchost>:<sccport>/api/monitoring/
performance/backends
Input None
Example:
Using this API, you can read the data of top time consumers provided by the Cloud Connector performance
monitor:
URL https://<scchost>:<sccport>/api/monitoring/
performance/toptimeconsumers
Input None
Example:
You can configure the Cloud Connector to send e-mail messages when situations occur that may prevent it from
operating correctly. Choose Alerting from the top-left navigation area to set up and tailor e-mail messaging.
E-mail Configuration
1. Select E-mail Configuration to specify the list of e-mail addresses to which alerts should be sent (Send To).
Note
The addresses you enter here can use either of the following formats: john.doe@company.com or John Doe
<j.doe@company.com>.
Observation Configuration
Once you've entered the e-mail addresses to receive alerts, the next step is to identify the resources and
components of the Cloud Connector to observe. E-mail messages are sent when any of the chosen components or
resources malfunction or are in a critical state.
Note
The Cloud Connector does not dispatch the same alert repeatedly. As soon as an issue has been resolved, an
informational alert is generated, sent, and listed in section Alert Messages (see section Alert Messages below).
Alert Messages
As well as sending alert e-mail messages, the Cloud Connector also shows messages on screen in section Alert
Messages.
You can remove alerts using Delete or Delete All. If you attempt to delete active alerts (that is, those that haven't
yet been resolved), they simply reappear in the list once the next health check interval has elapsed.
Context
The auditor tool, available as of version 2.2, lets you verify the integrity of the available audit log files.
Choose Audit from your subaccount menu and go to Settings to specify the type of audit events the Cloud
Connector should log at runtime. You can currently select between the following Audit Levels (for either
<subaccount> or <cross-subaccount> scope):
● Security: Default value. The Cloud Connector writes an audit entry (Access Denied) for each request that
was blocked. It also writes audit entries, whenever an administrator changes one of the critical configuration
settings, such as exposed back-end systems, allowed resources, and so on.
● All: The Cloud Connector writes one audit entry for each received request, regardless of whether it was
allowed to pass or not (Access Allowed and Access Denied). It also writes audit entries that are relevant
to the Security mode.
● Off: No audit entries are written.
Note
We recommend that you don't log all events unless legal requirements or company policies oblige you to do so.
Generally, logging security events only is sufficient.
To enable automatic cleanup of audit log files, choose a period (14 to 365 days) from the list in the field
<Automatic Cleanup>.
Audit entries for configuration changes are written for the following different categories:
In the Audit Viewer section, you can first define filter criteria, then display the selected audit entries.
● In the Audit Type field, you can select whether to view the audit entries for the following:
○ Any entries
These filter criteria are combined with a logical AND so that all audit entries that match these criteria are shown. If
you have modified one of the criteria, select Refresh to display the updated selection of audit events that match
the new criteria.
Note
To prevent a single person from being able to both change the audit log level and delete audit logs, we
recommend that the operating system administrator and the SAP Cloud Platform administrator be different
persons. We also suggest that you turn on the audit log at the operating system level for file operations.
The Check button checks all files that are filtered by the specified date range.
To check the integrity of the audit logs, go to <scc_installation>/auditor. This directory contains an
executable go script file (go.cmd on Microsoft Windows, go.sh on other operating systems).
If you start the go file without specifying parameters from <scc_installation>/auditor, all available audit
logs for the current Cloud Connector installation are verified.
The auditor tool is a Java application, and therefore requires a Java runtime, specified in JAVA_HOME, to execute:
Alternatively, to execute Java, you can include the Java bin directory in the PATH variable.
In the following example, the Audit Viewer displays Any audit entries, at Security level, for the time frame
between May 28, 01:00:00 and May 28, 23:59:59. Automatic cleanup of audit logs has been set to 365 days in
the Settings section:
1.5.3.3.8 Troubleshooting
To troubleshoot connection problems, monitor the state of your open tunnel connections in the Cloud Connector,
and view different types of logs and traces.
For information about a specific problem or an error you have encountered, see Connectivity Support [page 428].
Monitoring
To view a list of all currently connected applications, choose your Subaccount from the left menu and go to section
Cloud Connections:
● Application name: The name of the application, as also shown in the cockpit, for your subaccount
● Connections: The number of currently existing connections to the application
● Connected Since: The earliest start time of a connection to this application
● Peer Labels: The name of the application processes, as also shown for this application in the cockpit, for your
subaccount
Logs
The Logs tab page includes some files for troubleshooting that are intended primarily for SAP Support. These files
include information about both internal Cloud Connector operations and details about the communication
between the local and the remote (SAP Cloud Platform) tunnel endpoint.
● Cloud Connector Loggers adjusts the levels for Java loggers directly related to Cloud Connector functionality.
● Other Loggers adjusts the log level for all other Java loggers available at the runtime. Change this level, which
produces a large volume of trace entries, only when requested to do so by SAP Support.
● CPIC Trace Level allows you to set the level between 0 and 3 and provides traces for the CPIC-based RFC
communication with ABAP systems.
● When the Payload Trace is activated for a subaccount, all HTTP and RFC traffic crossing the tunnel for that
subaccount through this Cloud Connector is traced in files named
traffic_trace_<subaccount id>_on_<regionhost>.trc.
Note
Use payload and CPIC tracing at Level 3 carefully and only when requested to do so for support reasons.
The trace may write sensitive information (such as payload data of HTTP/RFC requests and responses) to
the trace files, and thus, present a potential security risk. As of version 2.2, the Cloud Connector supports
an implementation of a "four-eyes principle" for activating the trace levels that dump the network traffic into
a trace file. This principle requires two users to activate a trace level that records traffic data. See Secure
the Activation of Traffic Traces [page 367].
● To enable automatic cleanup of log and trace files, choose a period (2 to 365 days) from the list in the field
<Automatic Cleanup>.
View all existing trace files and delete the ones that are no longer needed.
Use the Download/Download All icons to create a ZIP archive containing one trace file or all trace files. Download it
to your local file system for convenient analysis.
When running the Cloud Connector with SAP JVM, you can trigger the creation of a thread dump by pressing the
Thread Dump button. The dump is written to the JVM trace file vm_$PID_trace.log. SAP Support may ask you to
create one if it is expected to help during incident analysis.
Note
From the UI, you can't delete trace files that are currently in use. You can delete them from the Linux OS
command line; however, we recommend that you do not use this option to avoid inconsistencies in the internal
trace management of the Cloud Connector.
Once a problem has been identified, turn off the trace again by adjusting the trace and log settings accordingly,
so that the files are not flooded with unnecessary entries.
Use the Refresh button to update the displayed information, for example, when more trace files might have been
written since you last updated the display.
Related Information
The Operator's Guide explains how to set up, configure, securely operate, and protect the Cloud Connector,
version 2.x, in productive scenarios. Its audience is system, IT, and cloud account administrators.
The Cloud Connector is an on-premise agent that runs in the customer network and takes care of securely
connecting cloud applications, running on SAP Cloud Platform, with services and systems of the customer
network. You can use it to implement hybrid scenarios, in which cloud applications require point-to-point
integration with existing services or applications in the customer network.
Additional Information
This document focuses on the operational aspects of the Cloud Connector. It does not provide a general overview
of SAP Cloud Platform and its connectivity service, nor does it address development-related questions, such
as how to implement connectivity-enabled applications.
For additional information on specific topics, see the following online resources:
Related Information
Hardware and software requirements for installing and running the Cloud Connector.
Hardware Requirements
CPU: Single core 3 GHz, x86-64 architecture compatible (minimum); Dual core 2 GHz, x86-64 architecture
compatible (recommended)
Memory (RAM): 1 GB (minimum); 4 GB (recommended)
Software Requirements
Note
An up-to-date list with detailed Cloud Connector version information is available from the Prerequisites [page
258] section.
Supported Browsers
The browsers you can use for the Cloud Connector Administration UI are the same as those currently supported
by SAPUI5. See: Browser and Platform Support.
The minimum free disk space required to download and install a new Cloud Connector server is as follows:
● Size of downloaded Cloud Connector installation file (ZIP, TAR, MSI files): 50 MB
● Newly installed Cloud Connector server: 70 MB
● Total: 120 MB as a minimum
The Cloud Connector writes configuration files, audit log files and trace files at runtime. We recommend that you
reserve between 1 and 20 GB of disk space for those files.
Trace and log files are written to <scc_dir>/log/ within the Cloud Connector root directory. The
ljs_trace.log file contains traces in general, communication payload traces are stored in
traffic_trace_*.trc. These files may be used by SAP Support to analyze potential issues. The default trace
level is Information, where the amount of written data is generally only a few KB per day. You can turn off these
traces to save disk space. However, we recommend that you don't turn off this trace completely, but that you leave
it at the default settings, to allow root cause analysis if an issue occurs. If you set the trace level to All, the amount
of data can easily reach the range of several GB per day. Use trace level All only to analyze a specific issue.
Payload trace, however, should normally be turned off, and used only for analysis by SAP Support.
Note
Regularly back up or delete written trace files to clean up the used disk space.
To be compliant with the regulatory requirements of your organization and the regional laws, the audit log files
must be persisted for a certain period of time for traceability purposes. Therefore, we recommend that you back
up the audit log files regularly from the Cloud Connector file system and keep the backup for the length of time
required.
A customer network is usually divided into multiple network zones or subnetworks according to the security level
of the contained components. For example, the DMZ contains and exposes the external-facing services of an
organization to an untrusted network, usually the Internet, while one or more other network zones contain the
components and services provided in the company’s intranet.
You can generally choose the network zone in which to set up the Cloud Connector. The Cloud Connector requires:
● Internet access to the SAP Cloud Platform region host, either directly or via HTTPS proxy.
● Direct access to the internal systems it provides access to, which means there must be transparent
connectivity between the Cloud Connector and the internal system.
The Cloud Connector can be set up either in the DMZ and operated centrally by the IT department, or set up in the
intranet and operated by the appropriate line of business.
Note
The internal network must allow access to the required ports; the specific configuration depends on the firewall
software used.
Installation
Note
Use the Windows MSI installer for productive scenarios, as only then is the Cloud Connector registered
as a Microsoft Windows service (SAP HANA Cloud Connector 2.0). Your company policy defines the privileges to
be allowed for service users. Adjust the folder and file permissions to be managed only by a limited-privileged
user and system administrators.
Upgrade
After installation, the Cloud Connector is registered as a Windows service that is configured to start
automatically after a system reboot. You can start and stop the service via shortcuts on the desktop ("Start Cloud
Connector 2.0" and "Stop Cloud Connector 2.0"), or by using the Windows Services manager and looking for the
service SAP HANA Cloud Connector 2.0.
Access the Cloud Connector administration UI at https://localhost:<port>, where the default port is 8443 (but this
port might have been modified during the installation).
Uninstallation
Installation
Note
Use the Linux RPM installer for productive scenarios, as only then is the Cloud Connector registered as a
daemon process.
Upgrade
After installation via RPM manager, the Cloud Connector process is started automatically and registered as a
daemon process, which ensures the automatic restart of the Cloud Connector after a system reboot.
To start, stop, or restart the process explicitly, open a command shell and use the following commands, which
require root permissions:
Uninstallation
You can operate the Cloud Connector in a high availability mode, in which a master and a shadow instance are
installed.
● To learn how to install a failover (shadow) instance, see: Install a Failover Instance for High Availability [page
361]
● To learn how to administer master and shadow instances, see: Master and Shadow Administration [page 365]
1.5.3.3.9.6 Administration
As the Cloud Connector is a security-critical component enabling external access to systems of an isolated
network (similar to a reverse proxy in a DMZ), we recommend that you restrict access to the operating system on
which the Cloud Connector is installed to the minimal set of users who administer the system. This
minimizes the risk of unauthorized users accessing the Cloud Connector system and trying to modify or damage a
running Cloud Connector instance.
We also recommend that you use hard-drive encryption for the Cloud Connector system, ensuring that the Cloud
Connector configuration data cannot be read by unauthorized users, even if they obtain access to the hard drive.
To learn all tips and tricks for secure setup, see Recommendations for Secure Setup [page 275]
After a new installation, the Cloud Connector provides a self-signed X.509 certificate. It is used for the SSL
communication between the Cloud Connector Administration UI running in a Web browser and the Cloud
Connector process itself. For productive scenarios, replace this certificate with a certificate trusted by your
organization.
Basic Configuration
The basic configuration steps for the Cloud Connector consist of:
You must change the initial password immediately after installation. The Cloud Connector itself does not check the
strength of the password, so make sure you choose a strong password that cannot be guessed easily.
The major principle for the connectivity established by the Cloud Connector is that the Cloud Connector
administrator should have full control over the connection to the cloud, that is, deciding if and when the Cloud
Connector should be connected to the cloud, the accounts to which it should be connected, and which on-premise
systems and resources should be accessible to applications of the connected subaccount.
Using the administration UI, the Cloud Connector administrator can connect and disconnect the Cloud Connector
to and from the configured cloud subaccount. Once disconnected, no communication is possible, either between
the cloud subaccount and the Cloud Connector, or to the internal systems. The connection state can be verified
and changed by the Cloud Connector administrator on the Subaccount Dashboard tab of the UI.
Note
Once the Cloud Connector is freshly installed and connected to a cloud subaccount, none of the systems in the
customer network are yet accessible to the applications of the related cloud subaccount. Accessible systems
and resources must be configured explicitly in the Cloud Connector, one by one. See Configure Trust [page
399].
Beginning with Cloud Connector version 2.2.0, a single Cloud Connector instance can be connected to multiple
subaccounts in the cloud. This is useful especially if you need multiple subaccounts to structure your development
or to stage your cloud landscape into development, test, and production. In this case, you can use a single Cloud
Connector instance for multiple subaccounts. However, we recommend that you do not attach subaccounts used in
productive scenarios and subaccounts used for development or test purposes to the same Cloud Connector
instance. You can add or delete a cloud subaccount to or from a Cloud Connector using the Add and Delete
buttons on the Subaccount Dashboard (see screenshot above).
After installing a new Cloud Connector in a network, no systems or resources of the network have been exposed to
the cloud yet. You must still configure each system and resource that is to be used by applications of the
connected cloud subaccount. To do this, choose Cloud To On Premise from your subaccount menu and go to the
Access Control tab:
Any type of system that can be called via one of the supported protocols is supported, that is, both SAP and
non-SAP systems. For example, a convenient way to access an ABAP system from a cloud application is via SAP
NetWeaver Gateway, as it allows consumption of ABAP content via HTTP and open standards.
We recommend that you limit access to only those back-end services and resources that are explicitly needed by the
cloud applications. Instead of configuring, for example, a system and granting access to all its resources, grant
access only to the concrete resources that are needed by the cloud application. For example, define access to an
HTTP service by specifying the service URL root path and allowing access to all its subpaths.
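The "root path plus subpaths" rule amounts to a simple prefix check. The following sketch illustrates that semantics only; the service path is a made-up example, not an actual Cloud Connector configuration:

```shell
# Hypothetical granted URL root path and an incoming request path.
granted_root="/sap/opu/odata/DEMO_SRV"
requested="/sap/opu/odata/DEMO_SRV/Products"

# A request is allowed if it targets the root path itself or any subpath.
case "$requested" in
  "$granted_root"|"$granted_root"/*) decision=allowed ;;
  *)                                 decision=denied ;;
esac
echo "$decision"
```

A request to a sibling path such as /sap/opu/odata/OTHER_SRV would fall into the default branch and be denied.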
When configuring an on-premise system, you can define a virtual host and port for the specified system. The
virtual host name and port represent the fully qualified domain name of the related system in the cloud. We
recommend that you use the virtual host name/port mapping to prevent leaking information about the physical
machine name and port of an on-premise system to the cloud.
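Conceptually, the access control acts as a lookup table from virtual to physical endpoints, so the cloud side only ever sees the virtual names. The host names and ports below are invented for illustration:

```shell
# Sketch of a virtual-to-internal endpoint mapping (all names are placeholders).
lookup() {
  case "$1" in
    virtual-erp:44300) echo "erp-prod-01.internal.corp:44300" ;;
    virtual-gw:443)    echo "gateway.internal.corp:8443" ;;
    *)                 echo "" ;;  # unknown virtual hosts resolve to nothing
  esac
}
lookup virtual-erp:44300
```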
For secure communication between the Cloud Connector and on-premise systems, use encrypted protocols, like
HTTPS and RFC over SNC, and set up a trusted relationship between the Cloud Connector and the on-premise
systems by exchanging certificates.
When using HTTPS as protocol, you can set up a trusted relationship by configuring the system certificate in the
Cloud Connector. A system certificate is an X.509 certificate that represents the identity of the Cloud Connector
instance. It is used as a client certificate in the HTTPS communication between the Cloud Connector and the on-
premise system. To ensure that only calls from trusted Cloud Connectors are accepted, configure the on-premise
system to validate the system certificate of the Cloud Connector.
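Establishing such a trust relationship requires an X.509 key pair on the Cloud Connector side. As a rough illustration of the certificate handling involved (not the product's actual, UI-driven procedure), a test certificate can be created and inspected with OpenSSL; the subject name is a placeholder:

```shell
# Create a key pair and a self-signed certificate for test purposes only.
# For production, have the system certificate signed by a CA that your
# back-end systems trust.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=scc.example.corp" \
  -keyout scc.key -out scc.crt

# Inspect the subject of the generated certificate.
openssl x509 -in scc.crt -noout -subject
```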
Analogously, you can configure SNC for secure RFC communication to an ABAP back end, see: Initial
Configuration (RFC) [page 200].
We recommend that you configure LDAP-based user management for the Cloud Connector so that only named
administrator users can log on to the administration UI. This guarantees traceability of the Cloud Connector
configuration changes via the Cloud Connector audit log. If you use the default and built-in Administrator user,
you can't identify the actual person or persons who perform configuration changes.
If you have an LDAP server in your landscape, you can configure the Cloud Connector to authenticate Cloud
Connector administrator users against the LDAP server. Valid administrator users must belong to the user group
named admin or sccadmin. See: Use LDAP for Authentication [page 358]
Once you've configured an LDAP server for authentication of the Cloud Connector, the default Administrator
user becomes inactive and can no longer be used to log on to the Cloud Connector.
Audit logging is a critical element of an organization’s risk management strategy. The audit log data can alert
Cloud Connector administrators to unusual or suspicious network and system behavior. Additionally, the audit log
data can provide auditors with information required to validate security policy enforcement and proper
segregation of duties. IT staff can use the audit log data for root-cause analysis following a security incident.
The Cloud Connector includes an auditor tool for viewing and managing audit log information about access
between the cloud and the Cloud Connector, as well as for tracking of configuration changes done in the Cloud
Connector. The written audit log files are digitally signed by the Cloud Connector so that their integrity can be
checked. See: Audit Logging [page 383]
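The exact signing format of the Cloud Connector's audit logs is internal to the product, but the underlying idea of a verifiable log can be sketched with a keyed digest; the file names and key are placeholders:

```shell
# A sample audit log line (illustration only).
echo "2018-06-07,Administrator,CONFIG_CHANGE" > audit.csv

# "Sign": compute a keyed digest over the file and store it alongside.
openssl dgst -sha256 -hmac "demo-key" -r audit.csv | cut -d' ' -f1 > audit.sig

# Verify: recompute and compare; any tampering with audit.csv changes the digest.
recomputed=$(openssl dgst -sha256 -hmac "demo-key" -r audit.csv | cut -d' ' -f1)
test "$recomputed" = "$(cat audit.sig)" && echo "integrity OK"
```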
Note
We recommend that you permanently switch on Cloud Connector audit logging in productive scenarios.
● Under normal circumstances, set the logging level to Security (the default configuration value).
● If legal requirements or company policies dictate it, set the logging level to All. This lets you use the log
files to, for example, detect attacks of a malicious cloud application that tries to access on-premise services
without permission, or in a forensic analysis of a security incident.
We also recommend that you regularly copy the audit log files of the Cloud Connector to an external persistent
storage according to your local regulations. The audit log files can be found in the Cloud Connector root
directory /log/audit/<subaccount-name>/audit-log_<timestamp>.csv.
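Archiving the audit logs can then be as simple as a recursive copy of the audit directory to the external storage. The sketch below first recreates the documented layout with a sample file so that it is self-contained; the subaccount name and target directory are placeholders:

```shell
# Recreate the documented layout (in a real installation, these files
# already exist under the Cloud Connector root directory).
mkdir -p scc_root/log/audit/dev-subaccount
echo "2018-06-07,Administrator,LOGIN" \
  > scc_root/log/audit/dev-subaccount/audit-log_2018-06-07.csv

# Copy all audit logs to an external archive, preserving the layout.
mkdir -p audit_archive
cp -r scc_root/log/audit/. audit_archive/
```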
Currently, the Cloud Connector supports basic authentication and principal propagation as user authentication
types towards internal systems. The destination configuration of the used cloud application defines which of these
types is used for the actual communication to an on-premise system through the Cloud Connector. See:
Destinations [page 86]
To use principal propagation, you must explicitly configure trust to those cloud entities from which user tokens are
accepted as valid. You can do this in the Trust view of the Cloud Connector. See: Set Up Trust [page 298]
Guidelines and recommendations for a secure setup and operation of the Cloud Connector in a productive
scenario.
1. Restrict OS-level access to the Cloud Connector: Restrict access to the Cloud Connector operating system to
the users who administer the system.
2. Use hard drive encryption for the Cloud Connector operating system: Use hard drive encryption to avoid
unauthorized access to the Cloud Connector configuration data and credentials.
4. Authenticate with named users to the Cloud Connector administration UI: Configure an LDAP system in the
Cloud Connector and work with named administrator users to ensure better traceability.
6. Use HTTPS and a system certificate, or RFC via SNC, for communication from the Cloud Connector to the back
end: Use HTTPS and a system certificate, or RFC over SNC, for communication and authentication between the
Cloud Connector and back-end systems.
10. Copy and persist Cloud Connector audit log files regularly: Regularly copy the Cloud Connector audit log files
to an external persistent storage and keep them for the length of time dictated by the regulatory requirements.
Related Information
1.5.3.3.9.8 Monitoring
The simplest way to verify whether a Cloud Connector is running is to try to access its administration UI. If you can
open the UI in a Web browser, the Cloud Connector process is running.
● On Microsoft Windows operating systems, the Cloud Connector process is registered as a Windows service,
which is configured to start automatically after a new Cloud Connector installation. If the Cloud Connector
server is rebooted, the Cloud Connector process also auto-restarts immediately. You can check the state of
the service in the Windows services administration tool or with the sc query command.
To verify whether a Cloud Connector is connected to a certain cloud subaccount, log on to the Cloud Connector
administration UI and go to the Subaccount Dashboard, where the connection state of the connected subaccounts
is visible, as described in section Connect and Disconnect a Cloud Subaccount [page 397].
1.5.3.3.9.9 Supportability
For issues with the Cloud Connector, SAP customers and partners can create OSS tickets under the component
BC-MID-SCC.
The general SAP SLAs with regard to OSS processing time also apply to SAP Cloud Platform and the Cloud
Connector. To avoid unnecessary answer/response cycles in the support case, we recommend that you download
the logs of the corresponding Cloud Connector, using the Download button on the Logs view, and attach any log
files to the OSS ticket directly when creating it.
If the issue is easily reproducible, reexecute the steps leading up to it using log level All.
Related Information
New releases of the Cloud Connector are available on the Cloud Tools page. While there could be new releases
every other week (in conjunction with SAP Cloud Platform bi-weekly releases), actual releases are less frequent,
occurring only when new features or important bug fixes are delivered.
Cloud Connector versions follow the <major>.<minor>.<micro> versioning schema. The Cloud Connector stays
fully compatible within a major version. Within a minor version, the Cloud Connector keeps the same feature set.
Higher minor versions usually support additional features compared to lower minor versions. Micro versions
generally consist of patches to a <major>.<minor> version to deliver bug fixes.
For each supported major version of the Cloud Connector, only one <major>.<minor>.<micro> version will be
provided and supported on the Cloud Tools page. This means that users must upgrade their existing Cloud
Connectors to get a patch for a bug or to make use of new features.
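Because only the newest version is published, checking whether an installed Cloud Connector lags behind boils down to a component-wise version comparison. The version numbers below are made up for illustration:

```shell
installed="2.11.0"
published="2.12.1"

# sort -V (GNU coreutils) orders version strings component-wise; if the
# installed version sorts first and differs from the published one, an
# upgrade is available.
oldest=$(printf '%s\n%s\n' "$installed" "$published" | sort -V | head -n 1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$published" ]; then
  echo "upgrade available"
fi
```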
New versions of the Cloud Connector are announced in the Release Notes of SAP Cloud Platform. We
recommend that Cloud Connector administrators regularly check the release notes for Cloud Connector updates.
Note
We recommend that you first apply upgrades in a test landscape to validate that the running applications are
working.
There are no manual user actions required in the Cloud Connector when the SAP Cloud Platform is updated.
A hybrid scenario is one in which applications running on SAP Cloud Platform require access to on-premise
systems.
To gain an overview of the cloud and on-premise landscape that is relevant for your hybrid scenario, we
recommend that you diagrammatically document your cloud subaccounts, their connected Cloud Connectors and
any on-premise back-end systems. Include the subaccount names, the purpose of the subaccounts (dev, test,
prod), information about the Cloud Connector machines (host, domains), the URLs of the Cloud Connectors in the
landscape overview document, and any other details you might find useful to include.
Document the users who have administrator access to the cloud subaccounts, to the Cloud Connector operating
system, and to the Cloud Connector administration UI.
Such administrator role documentation could look like a sample table with one row per subaccount (for example,
CA Dev2, CA Test, CA Prod) and one column per administrative role, marking the roles each administrator holds.
Create and document separate email distribution lists for both the cloud subaccount administrators and the Cloud
Connector administrators.
Define and document mandatory project and development guidelines for your SAP Cloud Platform projects. An
example of such a guideline could be similar to the following.
Define and document how to set a cloud application live and how to configure needed connectivity for such an
application.
For example, the following processes could be seen as relevant and should be defined and documented in more
detail:
1. Transferring application to production: Steps for transferring an application to the productive status on the
SAP Cloud Platform.
2. Application connectivity: The steps for adding a connectivity destination to a deployed application for
connections to other resources in the test or productive landscape.
3. Cloud Connector Connectivity: Steps for adding an on-premise resource to the Cloud Connector in the test or
productive landscapes to make it available for the connected cloud subaccounts.
4. On-premise system connectivity: The steps for setting up a trusted relationship between an on-premise
system and the Cloud Connector, and to configure user authentication and authorization in the on-premise
system in the test or productive landscapes.
5. Application authorization: The steps for requesting and assigning an authorization that is available inside the
SAP Cloud Platform application to a user in the test or productive landscapes.
6. Administrator permissions: Steps for requesting and assigning the administrator permissions in a cloud
subaccount to a user in the test or productive landscape.
1.5.3.4 Security
Learn how to set up and run the Cloud Connector ensuring the highest security standards.
Security is a crucial concern for any cloud-based solution and has a major impact on enterprises' decisions about
whether to adopt such solutions. SAP Cloud Platform is a platform-as-a-service offering designed to run
business-critical applications and processes for enterprises, with security considered on all levels of the
on-demand platform:
Application Layer
● Frontend security
● Secure application development
Service Layer
SAP Cloud Platform provides the Cloud Connector to allow integration of on-demand applications with services
and systems running in secured customer networks, as well as to support secure database connections from the
customer network to SAP HANA databases running on SAP Cloud Platform. As these are highly security-sensitive
topics, this section gives an overview of how the Cloud Connector ensures the highest security standards for the
mentioned scenarios.
Related Information
At the application level, the main tasks to ensure secure Cloud Connector operations are to provide appropriate
frontend security (for example, validation of entries) and secure application development.
Basically, you should follow the rules given in the product security standard, for example, protection against cross-
site scripting (XSS) and cross-site request forgery (XSRF).
The scope and design of security measures on application level strongly depend on the specific needs of your
application.
You can use SAP Cloud Platform Connectivity to securely integrate on-demand applications with systems running
in isolated customer networks. For this, you need to install the Cloud Connector as an integration agent in the
on-premise network. You can then use it to establish a persistent TLS tunnel to SAP Cloud Platform subaccounts.
To establish this tunnel, the Cloud Connector administrator must authenticate against the related SAP Cloud
Platform subaccount, of which he or she must be a member. Once the tunnel is established, it can be used by
applications of the connected subaccount to remotely call systems in the customer network.
Architecture
The figure below shows a system landscape in which the Cloud Connector is used for secure connectivity between
SAP Cloud Platform applications and on-premise systems. You can connect a single Cloud Connector instance to
multiple SAP Cloud Platform subaccounts, with each connection requiring separate authentication and defining its
own set of configuration. You can connect an arbitrary number of SAP and non-SAP systems to a single Cloud
Connector instance. The on-premise systems do not need to be modified to work with the Cloud Connector,
unless you need to configure trust between the Cloud Connector and the on-premise system (this is required for
principal propagation, for instance). You can also operate the Cloud Connector in a high availability mode; in this
case, you install a second (redundant), so-called shadow Cloud Connector, which takes over from the master
instance if it becomes unavailable.
The Cloud Connector also supports the communication direction from the on-premise network to the SAP Cloud
Platform subaccount with the so-called database tunnel. This allows you to connect common ODBC/JDBC
database tools to SAP HANA databases and other provided databases in SAP Cloud Platform.
A company network is usually divided into multiple network zones according to the security level of the contained
systems. The DMZ network zone usually contains and exposes the external-facing services of an organization to an
untrusted network, usually the Internet. Besides this, there are usually one or multiple other network zones which
contain the components and services provided in the company’s intranet.
You can set up the Cloud Connector either in the DMZ or in an inner network zone. Technical prerequisites for the
Cloud Connector to work properly are:
● The Cloud Connector must have access to the SAP Cloud Platform landscape host, either directly or via
HTTPS proxy (see also: Prerequisites [page 258]).
● The Cloud Connector must have direct access to the internal systems it is meant to provide access to; that is,
there must be transparent connectivity between the Cloud Connector and the internal system.
Related Information
For inbound connections into the on-premise network, the Cloud Connector acts as a reverse invoke proxy
between SAP Cloud Platform and the internal systems. Once installed, none of the internal systems are accessible
by default through the Cloud Connector: you must explicitly configure in the Cloud Connector each system, and
each service and resource on every system, that is to be exposed to SAP Cloud Platform. The Cloud Connector
administrator can also specify a virtual host name and port for a configured on-premise system, which is then
used in the cloud. This prevents information about physical hosts from being exposed to the cloud.
TLS Tunnel
The TLS (Transport Layer Security) tunnel is established from the Cloud Connector to SAP Cloud Platform via a
so-called reverse invoke approach. This puts full control of the tunnel into the hands of the Cloud Connector
administrator, which means the tunnel can't be established from the cloud or from anywhere else outside the
company network. The Cloud Connector administrator decides when the tunnel is established or closed.
The tunnel itself uses TLS with strong encryption of the communication and mutual authentication of both sides
of the communication, the client side (Cloud Connector) and the server side (SAP Cloud Platform). The X.509
certificates that are used to authenticate the Cloud Connector and the SAP Cloud Platform subaccount are issued
and controlled by SAP Cloud Platform. They are kept in secure storages in the Cloud Connector and the cloud.
Because the tunnel is encrypted and authenticated, the confidentiality and authenticity of the communication
between the SAP Cloud Platform applications and the Cloud Connector are guaranteed.
As an additional level of control, the Cloud Connector optionally lets you restrict the list of SAP Cloud Platform
applications that are allowed to use the tunnel. This is useful when multiple applications are deployed in a single
SAP Cloud Platform subaccount but only particular applications require connectivity to on-premise systems.
SAP Cloud Platform guarantees strict isolation at the subaccount level through its infrastructure and platform
layers. An application of one subaccount cannot access or use the resources of another subaccount;
consequently, an SAP Cloud Platform application of one subaccount cannot access the tunnel of another
subaccount.
Supported Protocols
The Cloud Connector supports inbound connectivity for HTTP and RFC; no other protocols are supported. The
payload sent via these protocols is encrypted on TLS/tunnel-level. For the route from the Cloud Connector to the
on-premise systems, Cloud Connector administrators have the choice for each configured on-premise system
whether to use HTTP, HTTPS, RFC or RFC over SNC. For HTTPS, you can configure a so-called system certificate in
the Cloud Connector which is used for the trust relationship between the Cloud Connector and the connected on-
premise systems. For RFC over SNC, you can configure an SNC PSE in the Cloud Connector respectively.
Principal Propagation
The Cloud Connector also supports principal propagation of the cloud user identity to connected on-premise
systems. For this, the system certificate (for HTTPS) or the SNC PSE (for RFC) must be configured, and trust with
the respective on-premise system must be established. Trust configuration, in particular for principal propagation,
is the only reason to configure and touch the used on-premise systems when using the Cloud Connector.
Related Information
The Cloud Connector also supports the communication direction from the on-premise network to SAP Cloud
Platform via the so-called database tunnel. It is used to connect local database tools via JDBC or ODBC to the SAP
HANA DB or other databases on SAP Cloud Platform, for instance SAP BusinessObjects tools like Lumira, BOE, or
Data Services.
The database tunnel only allows JDBC and ODBC connections from the Cloud Connector into the cloud; reuse for
other protocols is not possible. The tunnel uses the same security mechanisms as the inbound connectivity: TLS
encryption and mutual authentication, as well as audit logging of when and by whom a database tunnel was
established or closed.
Related Information
As audit logging is a critical element of an organization’s risk management strategy, the Cloud Connector provides
audit logging for the complete record of access between cloud and Cloud Connector as well as of configuration
changes done in the Cloud Connector. The written audit log files are digitally signed by the Cloud Connector so
that they can be checked for integrity (see also: Audit Logging [page 383]).
The audit log data of the Cloud Connector can be used to alert Cloud Connector administrators regarding unusual
or suspicious network and system behavior. Additionally, the audit log data can provide auditors with information
required to validate security policy enforcement and proper segregation of duties. IT staff can use the audit log
data for root-cause analysis following a security incident.
Related Information
SAP Cloud Platform’s infrastructure and network facilities ensure security on network layer by granting access to
the physical infrastructure of the platform and its applications only to authorized persons and only for a specific
business purpose. The SAP Cloud Platform landscape runs in an isolated network, which is protected from the
outside by firewalls, DMZ, and communication proxies for all inbound and outbound communications to and from
the network.
The SAP Cloud Platform infrastructure layer also ensures that platform services, like the SAP Cloud Platform
Connectivity, and applications are running in isolation in sandboxed environments. An interaction between them is
only possible over a secure remote communication channel.
SAP Cloud Platform runs in SAP-hosted data centers which are compliant with regulatory requirements as
described in The SAP Data Center and Certification for Security's Sake. The security measures include, for
example:
● strict physical access control mechanisms using biometrics, video surveillance, and sensors
● high availability and disaster recoverability with redundant power supply and own power generation
The following table lists the SAP security guidelines for a secure usage of the Cloud Connector:
● Network zone: Depending on the needs of the project, the Cloud Connector can either be set up in the DMZ and
operated centrally by the IT department, or set up in the intranet and operated by the line of business.
Recommendation: To access highly secure on-premise systems, operate the Cloud Connector centrally through
the IT department and install it in the DMZ of the company network. Set up trust between the on-premise system
and the Cloud Connector, and only accept requests from trusted Cloud Connectors in the system.
● OS-level protection of the Cloud Connector machine: The Cloud Connector is a security-critical component
that handles the inbound access from SAP Cloud Platform applications to systems of an on-premise network.
Recommendation: Restrict access to the operating system on which the Cloud Connector is installed to the
minimal set of users who administrate the Cloud Connector.
● Protection of the Cloud Connector administration UI: After installation, the Cloud Connector provides an initial
user name and password and forces the user (Administrator) to change the password upon initial logon. You can
access the Cloud Connector administration UI remotely via HTTPS; after installation, it uses a self-signed X.509
certificate as SSL server certificate, which is not trusted by default by Web browsers. Recommendation: Change
the password of the Administrator user immediately after installation and choose a strong password (see also
Recommendations for Secure Setup [page 275]). Replace the self-signed X.509 certificate of the Cloud Connector
administration UI with a certificate that is trusted by your company and the company's approved Web browser
settings (see Recommended: Replace the Default SSL Certificate [page 279]).
● Audit logging configuration in the Cloud Connector: For end-to-end traceability of configuration changes in the
Cloud Connector, as well as of communication delivered by the Cloud Connector, switch on audit logging for
productive scenarios. Recommendation: Switch on audit logging in the Cloud Connector and set the audit level
to "All" (see Recommendations for Secure Setup [page 275] and Audit Logging [page 383]).
● Availability: To guarantee high availability of the connectivity for cloud integration scenarios, run productive
instances of the Cloud Connector in high availability mode, that is, with a second (redundant) shadow Cloud
Connector in place. Recommendation: Use the high availability feature of the Cloud Connector for productive
scenarios (see Install a Failover Instance for High Availability [page 361]).
● Supported protocols: HTTP, HTTPS, RFC, and RFC over SNC are currently supported as protocols for the
communication direction from the cloud to on-premise. The route from the application VM in the cloud to the
Cloud Connector is always encrypted. Recommendation: The route from the Cloud Connector to the on-premise
system should be encrypted using TLS (for HTTPS) or SNC (for RFC), and trust between the Cloud Connector
and the on-premise system should be set up by exchanging certificates.
● Configuration of on-premise systems in the Cloud Connector: When configuring the access to an internal
system in the Cloud Connector, map physical host names to virtual host names to prevent exposure of
information on physical systems to the cloud. Recommendation: Use hostname mapping of exposed on-premise
systems in the access control of the Cloud Connector (see Configure Access Control (HTTP) [page 151] and
Configure Access Control (RFC) [page 202]). To allow access to on-premise systems only for trusted applications
of your SAP Cloud Platform subaccount, configure the list of trusted applications in the Cloud Connector.
Recommendation: Narrow the list of cloud applications that are allowed to use the on-premise tunnel to the ones
that need on-premise connectivity (see Set Up Trust [page 298]).
● Usage of Cloud Connector instances for productive and nonproductive scenarios: You can connect a single
Cloud Connector instance to multiple SAP Cloud Platform subaccounts. Recommendation: Use different Cloud
Connector instances to separate productive and nonproductive scenarios.
Related Information
1.5.3.5 Upgrade
The steps for upgrading your Cloud Connector differ according to your operating system. Previous settings and
configurations are automatically preserved.
Note
Upgrade is supported only for installer versions, not for portable versions. See Installation [page 257].
If you have a single-machine Cloud Connector installation, a short downtime is unavoidable during the upgrade
process. However, if you have set up a master and a shadow instance, you can perform the upgrade without
downtime by executing the following procedure:
Result: Both instances have now been upgraded without connectivity downtime and without configuration loss.
For more information, see Install a Failover Instance for High Availability [page 361].
Microsoft Windows OS
1. Uninstall the Cloud Connector as described in Uninstallation [page 418] and make sure to retain the existing
configuration.
2. Reinstall the Cloud Connector within the same directory. For more information, see Installation on Microsoft
Windows OS [page 270].
3. Before accessing the administration UI, clear your browser cache to avoid any unpredictable behavior due to
the upgraded UI.
Linux OS
1. Upgrade the Cloud Connector RPM package:
rpm -U com.sap.scc-ui-<version>.rpm
2. Before accessing the administration UI, clear your browser cache to avoid any unpredictable behavior due to
the upgraded UI.
Sometimes you must update the Java VM used by the Cloud Connector, for example, because of expired SSL
certificates contained in the JVM, bug fixes, and so on. If you make a replacement in the same directory, shut down
the Cloud Connector, upgrade the JVM, and restart the Cloud Connector when you are done.
If you change the installation directory of the JVM, follow the steps below. Make sure the JVM has been installed
successfully.
On Windows
Note
The bin subdirectory must not be part of the JavaHome value.
If the JavaHome value does not yet exist, create it here with a "String Value" (REG_SZ) and specify the full path
of the Java installation directory, for example, C:\sapjvm_7.
● Close the registry editor and restart the Cloud Connector.
On Linux
After executing the above steps, the Cloud Connector should be running again and should have picked up the new
Java version during startup. You can verify this by logging in to the Cloud Connector in your browser, opening the
About dialog, and checking that the <Java Details> field shows the version number and build date of the new
Java VM. After you have verified that the new JVM is indeed used by the Cloud Connector, delete or uninstall the
old JVM.
1.5.3.7 Uninstallation
Context
If you have installed an installer variant of the Cloud Connector, follow the steps for your operating system to
uninstall the Cloud Connector.
Microsoft Windows OS
1. In the Windows software administration tool, search for SAP HANA cloud connector 2.x.
2. Select the entry and follow the appropriate steps to uninstall it.
3. When you are uninstalling in the context of an upgrade, make sure to retain the configuration files.
Linux OS
rpm -e com.sap.scc-ui
Caution
This command also removes the configuration files.
Mac OS X
Portable Variants
(Microsoft Windows OS, Linux OS, Mac OS X) If you have installed a portable version (zip or tgz archive) of the
Cloud Connector, simply remove the directory in which you have extracted the Cloud Connector archive.
Related Information
Technical Issues
Does the Cloud Connector send data from on-premise systems to SAP Cloud Platform or the other
way around?
The connection is opened from the on-premise system to the cloud, but is then used in the other direction.
An on-premise system, in contrast to a cloud system, is normally located behind a restrictive firewall, and its
services aren't accessible through the Internet. This concept follows a widely used pattern often referred to as
reverse invoke proxy.
Is the connection between the SAP Cloud Platform and the Cloud Connector encrypted?
Yes, by default, TLS encryption is used for the tunnel between SAP Cloud Platform and the Cloud Connector.
If used properly, TLS is a highly secure protocol. It is the industry standard for encrypted communication, used,
for example, as the secure channel in HTTPS.
Keep your Cloud Connector installation updated and we will make sure that no weak or deprecated ciphers are
used for TLS.
Can I use a TLS-terminating firewall between Cloud Connector and SAP Cloud Platform?
This is not possible. Such a setup is effectively a deliberate man-in-the-middle attack, which prevents the Cloud
Connector from establishing mutual trust with the SAP Cloud Platform side.
What is the oldest version of SAP Business Suite that's compatible with the Cloud Connector?
The Cloud Connector can connect an SAP Business Suite system version 4.6C and newer.
JRE versions 6, 7, and 8 are supported.
We recommend that you always use the latest supported JRE version.
Note
Version 2.8 and later of the Cloud Connector may have problems with ciphers in Google Chrome if you use
JVM 7. For more information, read this SCN article.
Which configuration in the SAP Cloud Platform destinations do I need to handle the user
management access to the Cloud User Store of the Cloud Connector?
Is the Cloud Connector sufficient to connect the SAP Cloud Platform to an SAP ABAP back end or
is SAP Cloud Platform Integration needed?
It depends on the scenario: For pure point-to-point connectivity to call on-premise functionality like BAPIs, RFCs,
OData services, and so on, that are exposed via on-premise systems, the Cloud Connector might suffice.
However, if you require advanced functionality, for example, n-to-n connectivity as an integration hub, SAP Cloud
Platform Integration – Process Integration is a more suitable solution. SAP Cloud Platform Integration can use the
Cloud Connector as a communication channel.
The amount of bandwidth depends greatly on the application that is using the Cloud Connector tunnel. If the
tunnel isn’t currently used, but still connected, a few bytes per minute are used simply to keep the connection alive.
What happens to a response if there's a connection failure while a request is being processed?
The response is lost. The Cloud Connector only provides tunneling, it does not store and forward data when there
are network issues.
For productive instances, we recommend installing the Cloud Connector on a single purpose machine. This is
relevant for security. Find more information in our Security Whitepaper .
We recommend that you use at least three servers, with the following purposes:
Note
Do not run the production master and the production shadow as VMs inside the same physical machine. Doing
so removes the redundancy, which is needed to guarantee high availability. A QA (Quality Assurance) instance is
a useful extension. For disaster recovery, you will also need two additional instances; another master instance,
and another shadow instance.
Can I send push messages from an on-premise system to the SAP Cloud Platform through the
Cloud Connector?
We currently support 64-bit operating systems running only on an x86-64 processor (also known as x64, x86_64
or AMD64).
Yes, you should be able to connect almost any system that supports the HTTP protocol to the SAP Cloud
Platform, for example, Apache HTTP Server, Apache Tomcat, Microsoft IIS, or Nginx.
No, currently there is only one role that allows complete administration of the Cloud Connector.
Yes, to enable this, you must configure an LDAP server. See: Use LDAP for Authentication [page 358].
How can I reset the Cloud Connector's administrator password when not using LDAP for
authentication?
Visit https://tools.hana.ondemand.com/#cloud to download the portable version of the Cloud Connector. Extract
the users.xml file in the config directory to the config directory of your Cloud Connector installation, then restart
the Cloud Connector.
This resets the password and user name to their default values.
You can manually edit the file; however, we strongly recommend that you use the users.xml file.
Package the following three folders, located in your Cloud Connector installation directory, into an archive file:
● config
● config_master
● scc_config
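As a sketch, the three folders can be packaged with tar; the installation directory /opt/sap/scc is an assumption (the default on Linux), so adjust the path to your installation:

```shell
# Archive the Cloud Connector configuration folders into one backup file.
# SCC_HOME is an assumed default -- replace it with your installation directory.
SCC_HOME="${SCC_HOME:-/opt/sap/scc}"
if [ -d "$SCC_HOME/config" ]; then
    tar -czf scc-config-backup.tar.gz -C "$SCC_HOME" config config_master scc_config
    echo "Backup written to scc-config-backup.tar.gz"
else
    echo "No Cloud Connector configuration found at $SCC_HOME" >&2
fi
```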
As the layout of the configuration files may change between versions, we recommend that you don't restore a
configuration backup of a Cloud Connector 2.x installation into a 2.y installation.
Yes, you can create an archive file of the installation directory to create a full backup. Before you restore from a
backup, note the following:
● If you restore the backup on a different host, the UI certificate will be invalidated.
● Before you restore the backup, you should perform a “normal” installation and then replace the files. This
registers the Cloud Connector with your operating system's package manager.
This user opens the tunnel and generates the certificates that are used for mutual trust later on.
The user is not part of the certificate that identifies the Cloud Connector.
In both the Cloud Connector UI and in the SAP Cloud Platform cockpit, this user ID appears as the one who
performed the initial configuration (even though the user may have left the company).
What happens to a Cloud Connector connection if the user who created the tunnel leaves the
company?
This does not affect the tunnel, even if you restart the Cloud Connector.
For how long does SAP continue to support older Cloud Connector versions?
Each Cloud Connector version is supported for 12 months, which means the cloud side infrastructure is
guaranteed to stay compatible with those versions.
After that time frame, compatibility is no longer guaranteed and interoperability could be dropped. Furthermore,
after an additional 3 months, the next feature release published after that period will no longer support an upgrade
from the deprecated version as a starting release.
SAP Cloud Platform customers can purchase subaccounts and deploy applications into these subaccounts.
Additionally, there are users who have a password and can log in to the cockpit and manage all subaccounts for
which they have permission.
● A single subaccount can be managed by multiple users, for example, your company may have several
administrators.
● A single user can manage multiple subaccounts, for example, if you have multiple applications and want them
(for isolation reasons) to be split over multiple subaccounts.
For trial users, the account name is typically your user name, followed by the suffix “trial”:
Does the Cloud Connector work with the SAP Cloud Platform Cloud Foundry environment?
As of version 2.10, the Cloud Connector is able to establish a connection to regions with the SAP Cloud Platform
Cloud Foundry environment.
Does the Cloud Connector work with the SAP S/4HANA Cloud?
Starting with version 2.10, the Cloud Connector offers a Service Channel to S/4HANA Cloud instances, given that
they are associated with the respective SAP Cloud Platform subaccount. Also, S/4HANA Cloud communication
scenarios invoking remote enabled function modules (RFMs) in on-premise ABAP systems are supported as of
version 2.10. See also Using Service Channels [page 345].
How do I bind multiple Cloud Connectors to one SAP Cloud Platform subaccount?
As of version 2.9, you can connect multiple Cloud Connectors to a single subaccount. This lets you assign multiple
separate corporate network segments.
Those Cloud Connectors are distinguishable based on the location ID, which you must provide to the destination
configuration on the cloud side.
As of version 2.10, this is possible using the TCP channel of the Cloud Connector, if the client supports a SOCKS5
proxy to establish the connection. However, only the HTTP and RFC protocols currently provide an additional level
of access control by checking invoked resources.
You can also use the Cloud Connector as a JDBC or ODBC proxy to access the HANA DB instance of your SAP
Cloud Platform subaccount (service channel). This is sometimes called the “HANA Protocol”.
No, the audit log monitors access only from SAP Cloud Platform to on-premise systems.
Troubleshooting
How do I fix the “Could not open Service Manager” error message?
You are probably seeing this error message due to missing administrator privileges. Right-click the Cloud
Connector shortcut and select Run as administrator.
If you don’t have administrator privileges on your machine you can use the portable variant of the Cloud
Connector.
Note
The portable variants of the Cloud Connector are meant for nonproductive scenarios only.
For the portable versions, JAVA_HOME must point to the installation directory of your JRE, while PATH must
contain the bin folder inside the installation directory of your JRE.
The installer versions automatically detect JVMs in these locations, as well as in other places.
When I try to open the Cloud Connector UI, Google Chrome opens a Save as dialog, Firefox
displays some cryptic signs, and Internet Explorer shows a blank page, how do I fix this?
This happens when you try to access the Cloud Connector over HTTP instead of HTTPS. HTTP is the default
protocol for most browsers.
Adding “https://” to the beginning of your URL should fix the problem. For localhost, you can use
https://localhost:8443/.
An alternative approach compared to the SSL VPN solution that is provided by the Cloud Connector is to expose
on-premise services and applications via a reverse proxy to the Internet. This method typically uses a reverse
proxy setup in a customer's "demilitarized zone" (DMZ) subnetwork. The reverse proxy setup does the following:
● Acts as a mediator between SAP Cloud Platform and the on-premise services
● Provides the services of an Application Delivery Controller (ADC) to, for example, encrypt, filter, route, or
check inbound traffic
The figure below shows the minimal overall network topology of this approach. For more information, see the
Technical Connectivity Guide.
On-premise services that are accessible via a reverse proxy are callable from SAP Cloud Platform like other HTTP
services available on the Internet. When you use destinations to call those services, make sure the configuration of
the ProxyType parameter is set to Internet.
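For example, a destination pointing to a service exposed through a reverse proxy might look like this; the destination name, host, and path are placeholders, and the authentication setting depends on your scenario:

```
Name=backend-via-reverse-proxy
Type=HTTP
URL=https://reverse-proxy.example.com/services/myservice
ProxyType=Internet
Authentication=NoAuthentication
```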
Depending on your scenario, you can benefit from the reverse proxy. An example is the required network
infrastructure (such as a reverse proxy and ADC services): since it already exists in your network landscape, you
can reuse it to connect to SAP Cloud Platform. There's no need to set up and operate new components on your
(customer) side.
Disadvantages
● The reverse proxy approach leaves exposed services generally accessible via the Internet. This makes them
vulnerable to attacks from anywhere in the world. In particular, Denial-of-Service attacks are possible and
difficult to protect against. To prevent attacks of this type and others, you must implement the highest security
in the DMZ and reverse proxy. For the productive deployment of a hybrid cloud/on-premise application, this
approach usually requires intense involvement of the customer's IT department and a longer period of
implementation.
● If the reverse proxy allows filtering, or restricts accepted source IP addresses, you can set only one IP address
to be used for all SAP Cloud Platform outbound communications.
A reverse proxy does not exclusively restrict the access to cloud applications belonging to a customer,
although it does filter any callers that are not running on the cloud. Basically, any application running on the
cloud would pass this filter.
● The SAP-proprietary RFC protocol is not supported, so a cloud application cannot directly call an on-premise
ABAP system without having application proxies on top of ABAP.
Note
Using the Cloud Connector mitigates all of these issues. As it establishes the SSL VPN tunnel to SAP Cloud
Platform using a reverse invoke approach, there is no need to configure the DMZ or external firewall of a
customer network for inbound traffic. Attacks from the Internet are not possible. With its simple setup and fine-
grained access control of exposed systems and resources, the Cloud Connector allows a high level of security
and fast productive implementation of hybrid applications. It also supports multiple application protocols, such
as HTTP and RFC.
Troubleshooting
This section provides troubleshooting information related to SAP Cloud Platform Connectivity and the Cloud
Connector, including solutions to general connectivity issues as well as to specific on-demand to on-premise
cases.
● Getting Support [page 2280] (SAP Support Portal, SAP Cloud Platform community)
If you cannot find a solution to your issue, collect and provide the following specific, issue-relevant information to
SAP Support:
You can submit this information by creating a customer ticket in the SAP CSS system using the following
components:
If you experience a more serious issue that cannot be resolved using only traces and logs, SAP Support may
request access to the Cloud Connector. Follow the instructions in the following SAP notes:
New
The destination service (Beta) is available in the Cloud Foundry environment. See Consuming the Destination Service
(Cloud Foundry Environment) [page 43].
Enhancement
Cloud Connector
● The URLs of HTTP requests can now be longer than 4096 bytes.
● SAP Solution Manager can be integrated with one click of a button if the host agent is installed on a Cloud Connector
machine. See the Solution Management section in Monitoring [page 368].
● The limitation that only 100 subaccounts could be managed with the administration UI has been removed. See Managing
Subaccounts [page 291].
Fix
Cloud Connector
● The regression of 2.10.0 has been fixed, as principal propagation now works for RFC.
● The cloud user store works with group names that contain a backslash (\) or a slash (/).
● Proxy challenges for NT LAN Manager (NTLM) authentication are ignored in favor of Basic authentication.
● The back-end connection monitor works when using JVM 7 as the runtime of the Cloud Connector.
Enhancement
Cloud Connector
Fix
Cloud Connector
● A bottleneck has been removed that could lengthen the processing times of requests to exposed back-end systems after
many hours under high load when using principal propagation, connection pooling, and many concurrent sessions.
● Session management no longer terminates active sessions early in principal propagation scenarios.
● On Windows 10, hardware metering in virtualized environments now shows hard disk and CPU data.
New
In case the remote server supports only TLS 1.2, use this property to ensure that your scenario will work. As TLS 1.2 is more
secure than TLS 1.1, the default version used by HTTP destinations, consider switching to TLS 1.2.
Enhancement
The release of SAP Cloud Platform Cloud Connector 2.9.1 includes the following improvements:
● UI renovations based on collected customer feedback. The changes include visual rounding-off, fixes of wrong or odd
behaviors, and adjustments of controls. For example, in some places tables were replaced by sap.ui.table.Table for a
better experience with many entries.
● You can trigger the creation of a thread dump from the Log and Trace Files view.
● The connection monitor graphic for idle connections was made easier to understand.
Fix
● When configuring authentication for LDAP, the alternate host settings are no longer ignored.
● The email configuration for alerts now correctly processes the user name and password for access to the email server.
● Some servers used to fail to process HTTP requests when using the HTTP proxy approach (HTTP Proxy for On-Premise
Connectivity [page 145]) on the SAP Cloud Platform side.
● A bottleneck was removed that could lengthen the processing times of requests to exposed back-end systems under
high load when using principal propagation.
● The Cloud Connector accepts passwords that contain the '§' character when using authentication-mode password.
Enhancement
Update of JCo runtime for SAP Cloud Platform. See Connectivity [page 32].
Fix
Overview
Applications access it using the OASIS standard protocol Content Management Interoperability Services (CMIS).
Java applications running on SAP Cloud Platform can easily consume the document service using the provided
client library. Since the document service is exposed using a standard protocol, it can also be consumed by any
other technology that supports the CMIS protocol.
Features
The document service is an implementation of the CMIS standard and is the primary interface to a reliable and
safe store for content on SAP Cloud Platform.
● The storage and retrieval of files, which the file system often handles on traditional platforms
● The organization of files in a hierarchical folder structure
● The association of metadata with the content and the ability to read and write metadata
● A query interface based on this metadata using a query language similar to SQL
● Managing access control (access control lists)
● Versioning of content
● A powerful Java API (Apache Chemistry OpenCMIS)
● Streaming support to also handle large files efficiently
● Files are always encrypted (AES-128) before they are stored in the document service.
● A virus scanner can be activated to scan files for viruses during file uploads (write accesses). For performance
reasons, read-only file accesses are not scanned.
● Access from applications running internally on SAP Cloud Platform or externally
● A domain model and service bindings that can be used by applications to work with a content management
repository
● An abstraction layer for controlling diverse document management systems and repositories using Web
protocols
CMIS provides a common data model covering typed files and folders with generic properties that can be set or
read. There is a set of services for adding and retrieving documents (called objects). CMIS defines an access
control system, a checkout and version control facility, and the ability to define generic relations. CMIS defines the
following protocol bindings, which use WSDL with Simple Object Access Protocol (SOAP) or Representational
State Transfer (REST):
The consumption of CMIS-enabled document repositories is easy using the Apache Chemistry libraries. Apache
Chemistry provides libraries for several platforms to consume CMIS using Java, PHP, .Net, or Python. The
subproject OpenCMIS, which includes the CMIS Java implementation, also includes tools around CMIS, like the
CMIS Workbench, which is a desktop client for CMIS repositories for developers.
Since the SAP Cloud Platform Document service API includes the OpenCMIS Java library, applications can be built
on SAP Cloud Platform that are independent of a specific content repository.
Restrictions
The following features, which are defined in the OASIS CMIS standard, are supported with restrictions:
● Multifiling
● Policies
● Relationships
● Change logs
● For searchable properties, a maximum of 100 values with a maximum of 5,000 characters is allowed.
● For non-searchable properties, a maximum of 1,000 values with a maximum of 50,000 characters is allowed.
● The maximum allowed length of a single property is 4,500 characters.
Related Information
Use the SAP Cloud Platform Document service to store unstructured or semi-structured data in the context of
your SAP Cloud Platform application.
Introduction
Many applications need to store and retrieve unstructured content. Traditionally, a file system is used for this
purpose. In a cloud environment, however, the usage of file systems is restricted. File systems are tied to individual
virtual machines, but a Web application often runs distributed across several instances in a cluster. File systems
also have limited capacity.
The document service offers persistent storage for content and provides additional functionality. It also provides a
standardized interface for content using the OASIS CMIS standard.
Related Information
The following sections describe the basic concepts of the SAP Cloud Platform Document service.
In the code and the code samples, ecm is used to refer to the document service. For example, the document
service API is therefore called ecm.api.
The SAP Cloud Platform Document service is exposed using the OASIS standard protocol Content Management
Interoperability Service (CMIS).
The CMIS standard defines the protocol level (SOAP, AtomPub, and JSON based protocols). The SAP Cloud
Platform provides a document service client API on top of this protocol for easier consumption. This API is the
Open Source library OpenCMIS provided by the Apache Chemistry Project.
Related Information
To manage documents in the SAP Cloud Platform Document service, you need to connect an application to a
repository of the document service.
A repository is the document store for your application. It has a unique name with which it can later be accessed,
and it is secured using a key provided by the application. Only applications that provide this key are allowed to
connect to this repository.
Note
Due to the tenant isolation in SAP Cloud Platform, the document service cockpit cannot access or view
repositories you create in SAP Document Center or vice versa.
You can manage a repository programmatically from your application. In this way, you can create, edit, delete, and
connect to the repository.
Related Information
You can create a repository with the createRepository(repositoryOptions) method of the EcmService
(document service).
Procedure
Use the createRepository(repositoryOptions) method and define the properties of the repository.
The following code snippet shows how to create a repository where uploaded files are scanned for viruses:
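As a sketch of such a snippet, based on the com.sap.ecm.api classes named in this guide: the option setter names (setUniqueName, setRepositoryKey, setVirusScannerEnabled) and all values shown are assumptions and may differ in your SDK version:

```java
// Define the repository properties -- names and key are illustrative.
RepositoryOptions options = new RepositoryOptions();
options.setUniqueName("com.foo.MyRepository");   // unique name, package-name semantics
options.setRepositoryKey("0123456789abcdef");    // key used to secure the repository
options.setVirusScannerEnabled(true);            // assumed setter: scan uploads for viruses

// ecmService is the EcmService instance obtained via JNDI lookup
ecmService.createRepository(options);
```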
Context
There are many ways to connect to a repository. For more information, see the API Documentation [page 1288] and
Reuse OpenCmis Session Objects in Performance Tips (Java) [page 479].
Procedure
Once you are connected to the repository, you get an OpenCMIS session object to manage documents and folders
in the connected repository.
Probably the most common use case is to create documents and folders in a repository. Every repository in CMIS
has a root folder. Once you have received a Session, you can retrieve the root folder using the following syntax:
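A minimal sketch, assuming openCmisSession is the OpenCMIS Session obtained from the connect step:

```java
// Retrieve the root folder of the connected repository
Folder root = openCmisSession.getRootFolder();
```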
Once you have a root folder, you can create other folders or documents. In the CMIS domain model, all CMIS
objects are typed. Therefore, you have to provide type information for each object you create. The types carry the
metadata for an object. The metadata is passed in a property map. Some properties are mandatory, others are
optional. You have to provide at least an object type and a name. For properties defined in the standard, OpenCMIS
has predefined constants in the PropertyIds class.
To create a document with content, provide a map of properties. In addition, create a ContentStream object
carrying a Java InputStream plus some additional information for the content, like Content-Type and file name.
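The steps above can be sketched as follows using the OpenCMIS client API; the file name, content, and variable names are illustrative:

```java
// Mandatory properties: object type and name
Map<String, Object> properties = new HashMap<String, Object>();
properties.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
properties.put(PropertyIds.NAME, "HelloWorld.txt");

// Wrap the content in a ContentStream, including length, MIME type, and file name
byte[] content = "Hello, World!".getBytes();
InputStream stream = new ByteArrayInputStream(content);
ContentStream contentStream = openCmisSession.getObjectFactory()
        .createContentStream("HelloWorld.txt", content.length, "text/plain", stream);

// Create the document in the root folder
Document myDocument = root.createDocument(properties, contentStream, VersioningState.NONE);
```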
String id = myDocument.getId();
Getting Children
To get the children of a folder, you can use the following code:
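For example, with the OpenCMIS client API (folder is any Folder object, such as the root folder):

```java
// Iterate over the direct children of a folder
ItemIterable<CmisObject> children = folder.getChildren();
for (CmisObject object : children) {
    System.out.println(object.getName());
}
```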
Retrieving a Document
You can also retrieve a document using its path with the getObjectByPath() method.
Tip
We recommend that you retrieve objects by ID and not by path. IDs are kept stable even if the object is moved.
Retrieving objects by IDs is also faster than retrieving objects by paths.
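Both retrieval styles can be sketched as follows; the path is illustrative:

```java
// Preferred: retrieve by ID (stable even if the object is moved, and faster)
CmisObject byId = openCmisSession.getObject(id);

// Alternative: retrieve by path
CmisObject byPath = openCmisSession.getObjectByPath("/MyFolder/HelloWorld.txt");
```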
Before your application can use the document service, the application must be able to access and consume the
service.
There are several ways in which your application can access the document service:
● Any application deployed on SAP Cloud Platform as a Java Web application can consume the document
service.
● During the development phase, you can also use the document service in the SAP Cloud Platform local
runtime.
As a prerequisite for local development, you need an installation of MongoDB on your machine. See Create
Sample Applications (Java) [page 442].
● You can also use the document service from an application running outside SAP Cloud Platform.
This requires a special application running on SAP Cloud Platform acting as a bridge between the external
application and the document service. This application is called a "proxy bridge". For more information, see
Build a Proxy Bridge [page 448].
Related Information
http://chemistry.apache.org/
User Management
The service treats user names as opaque strings that are defined by the application. All actions in the document
service are executed in the context of this named user or the currently logged-on user.
Repositories are identified either by their unique name or by their ID. The unique name is a human-readable name
that should be constructed with Java package-name semantics, for example, com.foo.MySpecialRepository,
to avoid naming conflicts. Repositories in the document service are secured by a key provided by the application.
When a repository is created, a key must be supplied. Any further attempts to connect to this repository only
succeed if the key provided by the connecting application matches the key that was used to create the repository.
Therefore, this key must be stored in a secure manner, for example, using the Java KeyStore. It is, however, up to
the application to decide whether to share this key with other applications from the same subaccount to
implement data-sharing scenarios.
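The recommendation to keep the key in a Java KeyStore can be sketched with the standard JDK API; the keystore type, alias, and file name are illustrative choices, not mandated by the document service:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.security.KeyStore;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

// Sketch: keep the repository key in a JCEKS keystore instead of hard-coding it.
class RepositoryKeyStore {

    static void storeKey(Path file, char[] storePassword, String repositoryKey) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, storePassword); // initialize an empty keystore
        SecretKey entry = new SecretKeySpec(repositoryKey.getBytes(StandardCharsets.UTF_8), "AES");
        ks.setEntry("ecm-repository-key", new KeyStore.SecretKeyEntry(entry),
                new KeyStore.PasswordProtection(storePassword));
        try (FileOutputStream out = new FileOutputStream(file.toFile())) {
            ks.store(out, storePassword);
        }
    }

    static String loadKey(Path file, char[] storePassword) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        try (FileInputStream in = new FileInputStream(file.toFile())) {
            ks.load(in, storePassword);
        }
        KeyStore.SecretKeyEntry entry = (KeyStore.SecretKeyEntry) ks.getEntry(
                "ecm-repository-key", new KeyStore.PasswordProtection(storePassword));
        return new String(entry.getSecretKey().getEncoded(), StandardCharsets.UTF_8);
    }
}
```

The application then reads the key from the keystore at startup and passes it to the connect call, instead of keeping it in source code or plain configuration files.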
Multiple applications can access the same repository. However, applications can only connect to the same
repository using the unique name assigned to this repository if they are deployed within the same subaccount as
the application that created the repository. In contrast, applications that are deployed in a different subaccount
cannot access this repository. A consequence of having repositories isolated within a subaccount is that data
cannot be shared across different subaccounts.
Repository ABC is created when Application1 is deployed in Subaccount1. Application2 is located in the same
Subaccount1 as Application1; therefore, Application2 can also access the same repository using its unique name
ABC. Application3 is deployed in Subaccount2. Application3 calls a repository that has the same unique name ABC
as the other repository that belongs to Subaccount1. However, Application3 cannot access the ABC repository that
belongs to Subaccount1 using the identical unique name, because the repositories are isolated within the
subaccount. Therefore, Application3 in Subaccount2 connects to another ABC repository that belongs to
Subaccount2. In summary, a repository can only be accessed by applications that are deployed in the same
subaccount as the application that created the repository.
Multitenancy
The document service supports multitenancy and isolates data between tenants. Each application consuming the
document service creates a repository and provides a unique name and a secret key. The document service
creates the repository internally in the context of the tenant using the application. While the repository name
uniquely identifies the repository, an internal ID is created for the application for each tenant. This ID identifies the
storage area containing all the data for the tenant in this repository. An application that uses the document service
in this way has multitenancy support. No additional logic is required at the application level.
We recommend that you create one document service session per tenant and cache these sessions for future
reuse. Make sure that you do not mix up the tenants on your side.
If you expect a high load for a specific tenant, we recommend that you create a pool of sessions for that tenant.
A single session is always bound to a particular server of the document service, which limits scaling. If you use a
session pool, the different sessions are bound to different document service servers, giving you much better
performance and scaling.
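The per-tenant caching described above can be sketched generically; here S stands for the OpenCMIS session the document service returns, and the factory function represents whatever code connects to the repository for a given tenant:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a per-tenant session cache.
class TenantSessionCache<S> {

    private final Map<String, S> sessions = new ConcurrentHashMap<>();
    private final Function<String, S> factory;

    TenantSessionCache(Function<String, S> factory) {
        this.factory = factory;
    }

    // Creates the session once per tenant and reuses it on later calls,
    // so tenants never share a session and no session is created twice.
    S get(String tenantId) {
        return sessions.computeIfAbsent(tenantId, factory);
    }
}
```

For a high-load tenant, the single cached session could be replaced by a small pool of sessions per tenant, following the same pattern.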
Related Information
Changes to the data are visible to other ECM sessions only with some delay.
If the data of a repository is changed, for example, by creating, modifying, or deleting documents or folders, an
ECM session fetched from the class EcmFactory is used. Only subsequent read operations of the same session
(session-based read your own writes) see such changes immediately. All other sessions see such changes only
after some time (eventual consistency), usually within a few seconds but in case of heavy load scenarios also after
a longer delay.
Prerequisites
● You have downloaded and configured the SAP Eclipse platform. For more information, see Setting Up the
Development Environment [page 1126].
● You have created a HelloWorld Web application as described in Creating a Hello World Application [page 1139].
● You have downloaded the SDK used for local development.
● You have installed MongoDB as described in Setup Local Development [page 446].
This tutorial describes how you extend the HelloWorld Web application so that it uses the SAP Cloud Platform
Document service for managing unstructured content in your application. You test and run the Web application on
your local server and the SAP Cloud Platform.
Note
For historical reasons, ecm is used to refer to the document service in the code and the code samples.
Procedure
package hello;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.chemistry.opencmis.client.api.CmisObject;
import org.apache.chemistry.opencmis.client.api.Document;
import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.ItemIterable;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.data.ContentStream;
import org.apache.chemistry.opencmis.commons.enums.VersioningState;
import org.apache.chemistry.opencmis.commons.exceptions.CmisNameConstraintViolationException;
For more information about using the OpenCMIS API, see the Apache Chemistry documentation.
During execution, this servlet executes the following steps:
1. It connects to a repository. If the repository does not yet exist, the servlet creates the repository.
2. It creates a subfolder.
3. It creates a document.
4. It displays the children of the root folder.
4. Add the resource reference description to the web.xml file.
Note
The document service is consumed by defining a resource in your web.xml file and by using a JNDI lookup to
retrieve an instance of the com.sap.ecm.api.EcmService class. Once you have established a
connection to the document service, you can use one of the connect(…) methods to get a CMIS session
(org.apache.chemistry.opencmis.client.api.Session).
<resource-ref>
<res-ref-name>EcmService</res-ref-name>
<res-type>com.sap.ecm.api.EcmService</res-type>
</resource-ref>
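With the resource defined, the lookup and connect step can be sketched as follows; the repository name and key are illustrative, and the exact signature of the connect(…) method may differ in your SDK version:

```java
// JNDI lookup of the document service, using the resource name from web.xml
InitialContext ctx = new InitialContext();
EcmService ecmService = (EcmService) ctx.lookup("java:comp/env/EcmService");

// Connect to the repository with its unique name and key
Session openCmisSession = ecmService.connect("com.foo.MyRepository", "0123456789abcdef");
```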
5. Test the Web application locally or in the SAP Cloud Platform. For testing, proceed as described in Deploy
Locally from Eclipse IDE [page 1189] or Deploy on the Cloud from Eclipse IDE [page 1191] linked below.
Related Information
To use the document service in a Web application, download the SDK and install the MongoDB database.
Context
Caution
The local document service emulation is deprecated as of 5 March 2018. Support will be discontinued after 5
July 2018. This does not affect the availability of the document service running on SAP Cloud Platform, but only
its local emulation that is part of the SDK.
We recommend that you either deploy applications consuming the document service to SAP Cloud Platform, or
consume a cloud-located repository locally as described in Access from External Applications [page 447]. That
section explains how to access a document service repository located on SAP Cloud Platform from local
applications.
Procedure
If your setup is correct, you see a text message starting with "You are trying to access MongoDB on
the native driver port. …"
Related Information
Overview
The services on SAP Cloud Platform can be consumed by applications that are deployed on SAP Cloud Platform,
but not by external applications. There are cases, however, where applications need to access content in the
cloud but cannot be deployed in the cloud.
The figure below describes a mechanism with which this scenario can be supported and is followed by an
explanation:
This can be addressed by deploying an application on SAP Cloud Platform that accepts incoming requests from
the Internet and forwards them to the document service. We refer to this type of application as a proxy bridge. The
proxy bridge is deployed on SAP Cloud Platform and runs in a subaccount using the common SAP Cloud Platform
patterns. The proxy bridge is responsible for user authentication. The resources consumed in the document
service are billed to the SAP Cloud Platform subaccount that deployed this application.
Context
All the standard mechanisms of the document service apply. The SAP Cloud Platform SDK provides a base class
(a Java servlet) that provides the proxy functionality out-of-the-box. This can easily be extended to customize its
behavior. The proxy bridge performs a 1:1 mapping from source CMIS calls to target CMIS calls. CMIS bindings can
be enabled or disabled. Further modifications of the incoming requests, such as allowing only certain operations or
modifying parameters, are not supported. The Apache OpenCMIS project contains a bridge module that supports
advanced scenarios of this type.
To experience the best performance and to benefit from the consistency model described in Consistency Model
(Java) [page 442], ensure that cookies are enabled for client applications that connect to the proxy bridge. This is
the default setting for HTML5 apps. Only if cookies are enabled are your subsequent requests dispatched to
the same processing node, which is a prerequisite for the consistency model mentioned earlier.
The proxy bridge allows you to use standard CMIS clients to connect to the document service of SAP Cloud
Platform. An example is the Apache Chemistry Workbench, which can be useful for development and testing.
Caution
Note that the proxy bridge opens your repository to the public Internet and should always be secured
appropriately.
Note
For historic reasons, ecm is used to refer to the document service in the coding and the coding samples.
Procedure
1. Create an SAP Cloud Platform application as described in Using Java EE Web Profile Runtimes [page 1166].
2. Create a web.xml file and a servlet class.
3. Derive your servlet from the class com.sap.ecm.api.AbstractCmisProxyServlet.
4. Add a servlet mapping to your web.xml file using a URL pattern that contains a wildcard. See the following
example.
<servlet>
<servlet-name>cmisproxy</servlet-name>
<servlet-class>my.app.CMISProxyServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>cmisproxy</servlet-name>
<url-pattern>/cmis/*</url-pattern>
</servlet-mapping>
You can use prefixes other than /cmis and you can add more servlets in accordance with your needs. The URL
pattern for your servlet derived from the class AbstractCmisProxyServlet must contain a /* suffix.
5. Override the two abstract methods provided by the AbstractCmisProxyServlet class:
getRepositoryUniqueName() and getRepositoryKey().
These methods return a string containing the unique name and the secret key of the repository to be
accessed. You can also override a third method, getDestinationName(), which likewise returns a string. If
overridden, it must return the name of a destination deployed for this application that is used to connect to the
service. This is useful, for example, when a service user is used. Ensure that a valid custom destination exists.
6. If you override the getServletConfig() method or one of the following servlet methods, ensure that you call
the corresponding superclass method in your implementation:
○ service()
○ doGet()
○ doPost()
○ and so on
7. Optionally, restrict the bindings that the proxy bridge exposes by overriding one or more of the following
methods:
○ supportAtomPubBinding()
○ supportBrowserBinding()
8. Secure the proxy bridge, for example by adding a security constraint to your web.xml:
<security-constraint>
<web-resource-collection>
<web-resource-name>Proxy</web-resource-name>
<url-pattern>/cmis/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>EcmDeveloper</role-name>
</auth-constraint>
</security-constraint>
In some cases it might be useful to grant public access for reading content but not for modifying, creating or
deleting it. For example, a Web content management application might embed pictures into a public Web site
but store them in the document service. For a scenario of this type, override the method readOnlyMode() so
that it returns true. This means that only read requests are forwarded to the repository and all other requests
are rejected. The read-only mode only works with the JSON binding. The other bindings are disabled in this
case.
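The read-only restriction described above can be combined with the repository methods from the earlier steps; the following is a sketch only (the package and class names are illustrative, and the repository name and key are placeholder values):

```java
package my.app;

import com.sap.ecm.api.AbstractCmisProxyServlet;

// Sketch: a proxy bridge that forwards only read requests.
// Values below are illustrative; store real keys in a secure location.
public class ReadOnlyProxyServlet extends AbstractCmisProxyServlet {

    @Override
    protected String getRepositoryUniqueName() {
        return "MySampleRepository"; // illustrative repository name
    }

    @Override
    protected String getRepositoryKey() {
        return "abcdef0123456789"; // illustrative key
    }

    @Override
    protected boolean readOnlyMode() {
        // Only read requests are forwarded; modifying requests are rejected,
        // and only the JSON (browser) binding remains enabled.
        return true;
    }
}
```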
9. Optionally, you can override two more methods to customize timeout values for reading and connecting:
getConnectTimeout() and getReadTimeout().
Use these methods only if frequent timeout errors occur.
package my.app;
import com.sap.ecm.api.AbstractCmisProxyServlet;
public class CMISProxyServlet extends AbstractCmisProxyServlet {
@Override
protected String getRepositoryUniqueName() {
return "MySampleRepository";
}
@Override
// For applications in production, use a secure location to store the secret key.
protected String getRepositoryKey() {
return "abcdef0123456789";
}
}
10. To access the proxy bridge from an external application you need the correct URL.
Example
Your proxy bridge application is deployed as cmisproxy.war. The cockpit shows the following URL for your
app: https://cmisproxysap.hana.ondemand.com/cmisproxy and the web.xml is as shown above.
The URLs are then as follows:
○ CMIS 1.1:
AtomPub: https://cmisproxysap.hana.ondemand.com/cmisproxy/cmis/1.1/atom
Browser: https://cmisproxysap.hana.ondemand.com/cmisproxy/cmis/json
○ CMIS 1.0:
AtomPub: https://cmisproxysap.hana.ondemand.com/cmisproxy/cmis/atom
Browser: (not available)
These URLs can be passed to the CMIS Workbench from Apache Chemistry, for example.
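The URL pattern above derives each binding URL from the application URL shown in the cockpit plus the servlet prefix. As a sketch (the helper class is illustrative, not part of any SAP or OpenCMIS API):

```java
// Sketch: composes proxy-bridge binding URLs from the application URL
// shown in the cockpit and the servlet URL prefix (for example, /cmis).
public final class BindingUrls {

    public static String browserBindingUrl(String appUrl, String prefix) {
        return appUrl + prefix + "/json"; // CMIS 1.1 browser binding (JSON)
    }

    public static String atomPub11Url(String appUrl, String prefix) {
        return appUrl + prefix + "/1.1/atom"; // CMIS 1.1 AtomPub binding
    }

    public static String atomPub10Url(String appUrl, String prefix) {
        return appUrl + prefix + "/atom"; // CMIS 1.0 AtomPub binding
    }
}
```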
The workbench requires basic authentication. Add the following code to your web.xml:
Sample Code
<login-config>
<auth-method>BASIC</auth-method>
</login-config>
Example
A full example that can be deployed consists of two files: a web.xml and a servlet class. This example only
exposes the CMIS browser binding (JSON) using the prefix /cmis in the URL.
Sample Code
web.xml
Sample Code
Servlet
package my.app;
import com.sap.ecm.api.AbstractCmisProxyServlet;
public class CMISProxyServlet extends AbstractCmisProxyServlet {
private static final long serialVersionUID = 1L;
@Override
protected boolean supportAtomPubBinding() {
return false;
}
@Override
protected boolean supportBrowserBinding() {
return true;
}
public CMISProxyServlet() {
super();
}
@Override
Procedure
Your repository should never be publicly accessible. In the example, basic authentication and the role
EcmDeveloper are required (see the security documentation). Assign this role, in the subaccount area of the
cockpit, to the users or groups who should have access.
For more information, see Create Sample Applications (Java) [page 442] and step 8 of Build a Proxy Bridge
[page 448].
3. (Optional) Use the CMIS workbench to test the proxy bridge.
Field Value
Type HTTP
Name documentservice
CloudConnectorVersion 2
ProxyType Internet
URL https://cmisproxy<subaccount_ID>.hana.ondemand.com/cmisproxy/
cmis/json
5. Create an HTML5 application accessing the document service and open it in the Web IDE. Then create an
index.html file with the following contents:
Example
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Use CMIS from HTML5 Application</title>
<script type="text/javascript">
function setFilename() {
var thefile = document.getElementById('filename').value.split('\\').pop();
document.getElementById("cmisname").value = thefile;
}
function getChildren() {
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
var children = JSON.parse(this.responseText);
var str = "<ul>";
var repoUrl = "/cmis/<repo-id>/root/";
for (var i = 0; i < children.objects.length; i++) {
if (children.objects[i].object.properties["cmis:baseTypeId"].value == 'cmis:folder') {
str += '<li>' + children.objects[i].object.properties["cmis:name"].value + ' (folder)</li>';
} else {
var name = children.objects[i].object.properties["cmis:name"].value;
str += '<li><a href="' + repoUrl + name + '">' + name + '</a></li>';
}
}
str += "</ul>";
document.getElementById("listchildren").innerHTML = str;
}
};
xhttp.open("GET", "/cmis/<repo-id>/root?cmisselector=children", true);
xhttp.send();
}
</script>
</head>
<body>
<h1>Document Service from HTML App</h1>
<p>
For more information, see Create an HTML5 Application [page 1267], Create a Project [page 1263], and Edit the
HTML5 Application [page 1264].
a. Open the URL of the proxy bridge from the previous step in a browser, copy the repository ID, for example,
8d1c2718db5a2fc0d7242585, from the response.
Example: https://cmisproxyd058463sapdev.int.sap.hana.ondemand.com/cmisproxy/cmis/
json
Example
{
"8d1c2718db5a2fc0d7242585": {
"repositoryId": "8d1c2718db5a2fc0d7242585",
"repositoryName": "Sample Repository",
"repositoryDescription": "Sample repository for external access",
"vendorName": "SAP AG",
"productName": "SAP Cloud Platform, document service",
"productVersion": "1.0",
"rootFolderId": "8d1c2718db5a2fc0d7242585",
"capabilities": {
…
b. In your index.html, replace all occurrences of <repo-id> with the extracted repository ID and all
occurrences of <your-proxy-url> with the URL of the proxy bridge application.
c. Create a neo-app.json file in the root of your project directory with the following contents:
{
"welcomeFile": "/index.html",
"routes": [
{
"path": "/cmis",
"target": {
"type": "destination",
"name": "documentservice"
},
"description": "CMIS Connection Document Service"
}
],
"sendWelcomeFileRedirect": true
}
This routes all URLs starting with /cmis to the path specified in the destination named
“documentservice”.
d. Commit your files in Git, create a new version, and activate the version.
For more information, see Create a Version [page 1268] and Activate a Version [page 1269].
The following sections describe the advanced concepts of the SAP Cloud Platform Document service.
One benefit of Content Management Interoperability Services (CMIS) as compared to a file system is the extended
handling of metadata.
You can use metadata to structure content and make it easier to find documents in a repository, even if it contains
millions of documents. In the CMIS domain model, metadata is structured using types. A type contains the set of
allowed or required properties, for example, an Invoice type that has the InvoiceNo and CustomerNo
properties.
A type is described in a type definition and contains a list of property definitions. CMIS has a set of predefined
types and predefined properties. Custom-specific types and additional custom properties can extend the
predefined types. When a type is created, it is derived from a parent type and extends the set of the parent
properties. In this way, a hierarchy of types is built. The base types do not have parents. Base types are defined in
the CMIS specification. The most important base types are cmis:document and cmis:folder.
Each property has a data format (String, Integer, Date, Decimal, ID, and so on) and can define additional
constraints, such as:
Each object stored in a CMIS repository has a type and a set of properties. Types and properties provide the
mechanism used to find objects with CMIS queries.
Related Information
http://chemistry.apache.org/
http://chemistry.apache.org/java/developing/guide.html
http://chemistry.apache.org/java/0.9.0/maven/apidocs/
http://chemistry.apache.org/java/examples/index.html
http://docs.oasis-open.org/cmis/CMIS/v1.1
http://docs.oracle.com/javase/6/docs/api/java/security/KeyStore.html
The document store on SAP Cloud Platform supports the cmis:document and cmis:folder types. It also has a
built-in subtype for versioned documents. The types can be investigated using the Apache CMIS workbench.
In addition to the standard CMIS properties, the document service of SAP Cloud Platform supports additional SAP
properties. The most important ones are:
Related Information
http://chemistry.apache.org/java/download.html
Context
The CMIS client API uses a map to pass properties. The key of the map is the property ID and the value is the
actual value to be passed. The cmis:name and cmis:objectTypeId properties are mandatory.
Procedure
1. Use a name that is unique within the folder and a type ID that is a valid type from the repository.
2. Run the sample code.
// properties
Map<String, Object> properties = new HashMap<String, Object>();
properties.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
properties.put(PropertyIds.NAME, "Document-1");
// content
byte[] content = "Hello World!".getBytes();
InputStream stream = new ByteArrayInputStream(content);
ContentStream contentStream = new ContentStreamImpl("Document-1",
BigInteger.valueOf(content.length), "text/plain", stream);
// create a document
Folder root = session.getRootFolder();
Document newDoc = root.createDocument(properties, contentStream,
VersioningState.NONE);
Results
You can inspect the document in the CMIS workbench. You can see that various other properties have been set by
the system, such as the ID, the creation date, and the creating user.
Context
This procedure focuses on the use of the sap:tags property to mark the document. This is a multi-value
attribute, so you can assign more than one tag to it.
Procedure
1. To assign the Hello and Tutorial tags to the document, use the following code:
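The code sample referenced in step 1 is not shown in this extract. As a minimal sketch using the OpenCMIS client API, assuming newDoc is the document created in the previous procedure:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.Document;

// Sketch: sap:tags is multi-valued, so a list of values is passed.
Map<String, Object> updateProperties = new HashMap<String, Object>();
updateProperties.put("sap:tags", Arrays.asList("Hello", "Tutorial"));
Document updatedDoc = (Document) newDoc.updateProperties(updateProperties);
```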
This section gives a very brief introduction to querying. The OpenCMIS Client API is a Java client-side library with
many capabilities, for example, paging results. For more information, consult the OpenCMIS Javadoc and the
examples on the Apache Chemistry Web site.
Context
The following procedure focuses on a use case where you have created a second folder and some more
documents. The repository then looks like this:
The Hello Document and Hi Document documents have the tags Hello and Tutorial; the Lorem Ipsum
document has no tags.
Procedure
1. Use the CMIS query to search documents in the system based on their properties.
Note
In this case, the workbench displays only the first value of multi-valued properties.
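The query in step 1 can be sketched with the OpenCMIS query API; the statement below uses the CMIS query language and assumes the sap:tags property and an existing session (a sketch, not a definitive implementation):

```java
import org.apache.chemistry.opencmis.client.api.QueryResult;

// Sketch: find all documents tagged with "Tutorial".
String statement = "SELECT * FROM cmis:document WHERE ANY sap:tags IN ('Tutorial')";
for (QueryResult result : session.query(statement, false)) {
    // Print the name of each matching document.
    System.out.println(result.getPropertyValueById("cmis:name"));
}
```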
Related Information
http://chemistry.apache.org/java/0.13.0/maven/apidocs/
http://chemistry.apache.org/java/examples/index.html
For the SAP Cloud Platform Document service, you can create new object types and remove them again in
accordance with the CMIS standard.
Context
In CMIS, every object, for example a document or a folder, has an object type. The object type defines the basic
settings of an object of that type. For example, the cmis:document object type defines that objects of that type
are searchable.
Furthermore, the object type defines the properties that can be set for an object of that type, for example, an
object of type cmis:document has a mandatory cmis:name property that must be a string. Therefore, every
object of type cmis:document needs a name. Otherwise, the object is not valid and the repository rejects it.
In CMIS, types are organized hierarchically. The most important (predefined) base types are:
CMIS allows you to define additional types provided that each type is a descendant of one of the predefined base
types. In this type hierarchy, a type inherits all property definitions of its parent type. CMIS 1.1 allows type hierarchy
modifications (see the OASIS page) by providing methods for the creation, the modification, and the removal of
object types. Currently, the document service only supports the creation and removal of types. This allows a
developer to define new types as subtypes of existing types. The new types might possess other properties in
addition to all of the automatically inherited property definitions of the parent type. Creating objects of that type
allows you to assign values for these new properties to the object. Remember to also set the values for the
inherited properties as appropriate.
The following example shows how to create a new document type that possesses one additional property for
storing the summary of a document. The developer must implement the MyDocumentTypeDefinition and
MyStringPropertyDefinition classes. Example implementations for these classes as well as for the interfaces
(FolderTypeDefinition, SecondaryTypeDefinition, PropertyBooleanDefinition,
PropertyDecimalDefinition, and so on) are described in the following topics.
import java.util.HashMap;
import java.util.Map;
import org.apache.chemistry.opencmis.client.api.ObjectType;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.ContentStreamAllowed;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
import org.apache.chemistry.opencmis.commons.exceptions.CmisObjectNotFoundException;
import org.apache.chemistry.opencmis.commons.exceptions.CmisRuntimeException;
// specify type attributes
String idAndQueryName = "test:docWithSummary";
String description = "Doc with Summary";
String displayName = "Document with Summary";
String localName = "some local name";
String localNamespace = "some local name space";
String parentTypeId = BaseTypeId.CMIS_DOCUMENT.value();
Boolean isCreatable = true;
Boolean includedInSupertypeQuery = true;
Boolean queryable = true;
ContentStreamAllowed contentStreamAllowed = ContentStreamAllowed.ALLOWED;
Boolean versionable = false;
// specify property definitions
Map<String, PropertyDefinition<?>> propertyDefinitions
= new HashMap<String, PropertyDefinition<?>>();
MyStringPropertyDefinition summaryPropertyDefinitions
= createSummaryPropertyDefinitions();
propertyDefinitions.put(summaryPropertyDefinitions.getId(),
summaryPropertyDefinitions);
// build object type
MyDocumentTypeDefinition docTypeDefinition
= new MyDocumentTypeDefinition(idAndQueryName, description, displayName,
localName, localNamespace, parentTypeId, isCreatable,
includedInSupertypeQuery, queryable, contentStreamAllowed,
versionable, propertyDefinitions);
// add type to repository
ecmSession.createType(docTypeDefinition);
// create document of new type
ecmSession.clear();
Map<String, String> newDocProps = new HashMap<String, String>();
newDocProps.put(PropertyIds.OBJECT_TYPE_ID, docTypeDefinition.getId());
newDocProps.put(PropertyIds.NAME, "testDocWithNewType");
newDocProps.put("test:summary", "This is a document with a summary property");
● The ID and the query name must be identical and meet the following rules:
○ They must match the Java regular expression "[a-zA-Z][a-zA-Z0-9_:]*".
○ They must not start with cmis:, sap, or s: in any combination of uppercase and lowercase letters;
for example, cMis: is also not allowed.
● If the base type of the new object type is cmis:secondary, no other type definition may already contain a
property definition with the same ID or query name.
● If the base type of the new object type is not cmis:secondary and another type definition already contains a
property definition with the same ID or query name, this property definition must be identical to the one of the
new type.
● You cannot specify default values or choices.
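The naming rules above can be captured in a small validity check. This is a sketch; the helper class and method are illustrative and not part of any SAP or OpenCMIS API:

```java
import java.util.regex.Pattern;

// Sketch: validates a custom type ID / query name against the rules above.
public final class TypeIdValidator {

    private static final Pattern VALID = Pattern.compile("[a-zA-Z][a-zA-Z0-9_:]*");

    public static boolean isValidTypeId(String id) {
        if (id == null || !VALID.matcher(id).matches()) {
            return false;
        }
        // Reserved prefixes are forbidden in any combination of upper/lower case.
        String lower = id.toLowerCase();
        return !(lower.startsWith("cmis:") || lower.startsWith("sap") || lower.startsWith("s:"));
    }
}
```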
To delete a new object type, you can use the following code snippet: ecmSession.deleteType(typeId);
You can only delete an object type if it is no longer used by any documents or folders in the repository.
Related Information
Example
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.chemistry.opencmis.commons.data.CmisExtensionElement;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.definitions.TypeDefinition;
import org.apache.chemistry.opencmis.commons.definitions.TypeMutability;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
public abstract class MyTypeDefinition implements TypeDefinition {
private String description = null;
private String displayName = null;
private String idAndQueryName = null;
private String localName = null;
private String localNamespace = null;
private String parentTypeId = null;
private Boolean isCreatable = null;
private Boolean includedInSupertypeQuery = null;
private Boolean queryable = null;
private Map<String, PropertyDefinition<?>> propertyDefinitions
= new HashMap<String, PropertyDefinition<?>>();
public MyTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
this.description = description;
this.displayName = displayName;
this.idAndQueryName = idAndQueryName;
this.localName = localName;
this.localNamespace = localNamespace;
this.parentTypeId = parentTypeId;
this.isCreatable = isCreatable;
this.includedInSupertypeQuery = includedInSupertypeQuery;
this.queryable = queryable;
if (propertyDefinitions != null) {
this.propertyDefinitions = propertyDefinitions;
}
}
@Override
abstract public BaseTypeId getBaseTypeId();
@Override
public String getDescription() {
return description;
}
@Override
public String getDisplayName() {
return displayName;
}
@Override
public String getId() {
return idAndQueryName;
}
@Override
public String getLocalName() {
import java.util.List;
import org.apache.chemistry.opencmis.commons.data.CmisExtensionElement;
import org.apache.chemistry.opencmis.commons.definitions.TypeMutability;
public class MyTypeMutability implements TypeMutability {
@Override
public List<CmisExtensionElement> getExtensions() {
return null;
}
@Override
public void setExtensions(List<CmisExtensionElement> arg0) {
}
@Override
public Boolean canCreate() {
return true;
}
@Override
public Boolean canDelete() {
return true;
}
@Override
public Boolean canUpdate() {
return false;
}
}
import java.util.Map;
import org.apache.chemistry.opencmis.commons.definitions.DocumentTypeDefinition;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
import org.apache.chemistry.opencmis.commons.enums.ContentStreamAllowed;
public class MyDocumentTypeDefinition extends MyTypeDefinition implements
DocumentTypeDefinition {
private ContentStreamAllowed contentStreamAllowed = null;
private Boolean versionable = null;
public MyDocumentTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
ContentStreamAllowed contentStreamAllowed, Boolean versionable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
super(idAndQueryName, description, displayName, localName, localNamespace,
parentTypeId, isCreatable, includedInSupertypeQuery, queryable,
propertyDefinitions);
this.contentStreamAllowed = contentStreamAllowed;
this.versionable = versionable;
}
@Override
public BaseTypeId getBaseTypeId() {
return BaseTypeId.CMIS_DOCUMENT;
}
@Override
public ContentStreamAllowed getContentStreamAllowed() {
return contentStreamAllowed;
}
@Override
import java.util.Map;
import org.apache.chemistry.opencmis.commons.definitions.FolderTypeDefinition;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
public class MyFolderTypeDefinition extends MyTypeDefinition implements
FolderTypeDefinition {
public MyFolderTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
super(idAndQueryName, description, displayName, localName, localNamespace,
parentTypeId, isCreatable, includedInSupertypeQuery, queryable,
propertyDefinitions);
}
@Override
public BaseTypeId getBaseTypeId() {
return BaseTypeId.CMIS_FOLDER;
}
}
import java.util.Map;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.definitions.SecondaryTypeDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
public class MySecondaryTypeDefinition extends MyTypeDefinition implements
SecondaryTypeDefinition {
public MySecondaryTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
super(idAndQueryName, description, displayName, localName, localNamespace,
parentTypeId, isCreatable, includedInSupertypeQuery, queryable,
propertyDefinitions);
}
@Override
public BaseTypeId getBaseTypeId() {
return BaseTypeId.CMIS_SECONDARY;
}
}
import java.util.List;
import org.apache.chemistry.opencmis.commons.data.CmisExtensionElement;
import org.apache.chemistry.opencmis.commons.definitions.Choice;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
abstract public class MyPropertyDefinition<T> implements PropertyDefinition<T> {
private String idAndQueryName = null;
private Cardinality cardinality = null;
private String description = null;
private String displayName = null;
private String localName = null;
private String localNameSpace = null;
private Updatability updatability = null;
private Boolean orderable = null;
private Boolean queryable = null;
public MyPropertyDefinition(String idAndQueryName, Cardinality cardinality,
String description, String displayName, String localName,
String localNameSpace, Updatability updatability,
Boolean orderable, Boolean queryable) {
super();
this.idAndQueryName = idAndQueryName;
this.cardinality = cardinality;
this.description = description;
this.displayName = displayName;
this.localName = localName;
this.localNameSpace = localNameSpace;
this.updatability = updatability;
this.orderable = orderable;
this.queryable = queryable;
}
@Override
public String getId() {
return idAndQueryName;
}
@Override
public Cardinality getCardinality() {
return cardinality;
}
@Override
public String getDescription() {
return description;
}
@Override
public String getDisplayName() {
return displayName;
}
@Override
public String getLocalName() {
return localName;
}
@Override
public String getLocalNamespace() {
return localNameSpace;
}
@Override
abstract public PropertyType getPropertyType();
@Override
public String getQueryName() {
return idAndQueryName;
}
@Override
public Updatability getUpdatability() {
import org.apache.chemistry.opencmis.commons.definitions.PropertyBooleanDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyBooleanPropertyDefinition extends MyPropertyDefinition<Boolean>
implements PropertyBooleanDefinition {
public MyBooleanPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
import java.util.GregorianCalendar;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDateTimeDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.DateTimeResolution;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyDateTimePropertyDefinition extends
MyPropertyDefinition<GregorianCalendar> implements PropertyDateTimeDefinition {
public MyDateTimePropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.DATETIME;
}
@Override
public DateTimeResolution getDateTimeResolution() {
return DateTimeResolution.TIME;
}
}
import java.math.BigDecimal;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDecimalDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.DecimalPrecision;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyDecimalPropertyDefinition extends MyPropertyDefinition<BigDecimal>
implements
PropertyDecimalDefinition {
public MyDecimalPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.DECIMAL;
}
@Override
public BigDecimal getMaxValue() {
return null;
}
@Override
public BigDecimal getMinValue() {
return null;
}
@Override
public DecimalPrecision getPrecision() {
import org.apache.chemistry.opencmis.commons.definitions.PropertyHtmlDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyHtmlPropertyDefinition extends MyPropertyDefinition<String>
implements PropertyHtmlDefinition {
public MyHtmlPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.HTML;
}
}
import org.apache.chemistry.opencmis.commons.definitions.PropertyIdDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyIdPropertyDefinition extends MyPropertyDefinition<String> implements
PropertyIdDefinition {
public MyIdPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.ID;
}
}
import java.math.BigInteger;
import org.apache.chemistry.opencmis.commons.definitions.PropertyIntegerDefinition;
import java.math.BigInteger;
import org.apache.chemistry.opencmis.commons.definitions.PropertyStringDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyStringPropertyDefinition extends MyPropertyDefinition<String>
implements PropertyStringDefinition {
public MyStringPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.STRING;
}
@Override
public BigInteger getMaxLength() {
return null;
}
}
import org.apache.chemistry.opencmis.commons.definitions.PropertyUriDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
● cmis:read
○ Allows fetching an object (folder or document).
○ Allows reading the ACL, properties and the content of an object.
● sap:file
○ Includes all privileges of cmis:read.
○ Allows creating objects in a folder and moving an object.
● cmis:write
○ Includes all privileges of sap:file.
○ Allows modifying the properties and the content of an object.
○ Allows checking out a versionable document.
● sap:delete
○ Includes all privileges of cmis:write.
○ Allows deleting an object.
○ Allows checking in and canceling the check-out of a private working copy.
● cmis:all
○ Includes all privileges of sap:delete.
○ Allows modifying the ACL of an object.
For a repository the initial settings for the root folder are:
● The ACL contains one ACE for the {sap:builtin}everyone principal with the cmis:all permission. With
these settings, all principals have full control over the root folder.
● The owner property is set to {sap:builtin}admin (ownership is described below).
Initially, without specific ACL settings, all documents and folders possess an ACL with one ACE for the built-in
principal {sap:builtin}everyone with the cmis:all permission that grants all users unrestricted access.
ACLs or ACEs are not inherited but explicitly stored at the particular objects. An empty ACL means that no
principal has permission, except the owner of the object. The owner concept is described below in more detail.
The CMIS client library provides methods for modifying ACLs (Access Control Lists).
To modify the ACL of the current object only, set the propagation parameter to OBJECTONLY. To modify the ACL
of the current object as well as of the ACLs of all of the object's descendants, set the propagation parameter to
PROPAGATE. You can apply PROPAGATE only to folders. It works as follows: The ACEs that are added and removed
at the root folder of the operation are computed and then applyAcl is called with these ACE sets for each
descendant.
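The PROPAGATE computation described above (compute the added and removed ACEs at the root folder of the operation, then apply them to each descendant) can be sketched as plain set arithmetic. This is an illustrative model only; the class and method names are not part of the document service API, and ACLs are simplified to sets of principal names:

```java
import java.util.*;

public class AclPropagationSketch {
    // Computes which principals were added and which were removed between
    // two ACL states, modeled here simply as sets of principal names.
    static Map<String, Set<String>> diff(Set<String> before, Set<String> after) {
        Set<String> added = new TreeSet<>(after);
        added.removeAll(before);          // in the new ACL but not the old one
        Set<String> removed = new TreeSet<>(before);
        removed.removeAll(after);         // in the old ACL but not the new one
        Map<String, Set<String>> result = new LinkedHashMap<>();
        result.put("added", added);
        result.put("removed", removed);
        return result;
    }

    public static void main(String[] args) {
        // The resulting "added"/"removed" sets would then be applied to each descendant.
        System.out.println(diff(Set.of("alice", "bob"), Set.of("bob", "carol")));
    }
}
```

With PROPAGATE, these two delta sets are what the document service applies, via applyAcl, to every descendant of the folder.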
Removing a permission for a principal from an object results in no ACE entry for the principal in that ACL. This is
independent of the current settings in the ACL with respect to this principal.
In methods with parameters for adding and removing ACEs, first the specified ACEs are removed and then the new
ones are added.
Every folder and document has the sap:owner property. When an object is created the currently connected user
automatically becomes the owner of the object. The owner of an object always has full access even without any
specific ACEs granting him or her permission.
The owner property can be changed using the updateProperties method with the following restrictions:
● The new value of the owner property must be identical to the currently connected user.
● The currently connected user must have the cmis:all privilege.
● The application can use a connect method without explicitly providing a parameter containing a user. Then the
current user is forwarded to the document service. The user's right to access particular documents and
folders is determined using the user ID and the attached ACLs.
● The application can provide a user ID explicitly using a parameter of the connect method. Then this ID is used
for checking the access rights.
Note
Note that the document service is not connected to any Identity Provider or Identity Management System and
considers the provided ID as an opaque string. This is also true for the user or principal strings provided in the
ACEs when setting ACLs at objects.
The application is responsible for providing the correct user ID but it can also submit a technical user ID that
does not belong to any physical user, for example, to implement some kind of service user concept.
Besides providing a user, some connect methods have an additional parameter to provide the IDs of additional
principals to the document service.
If additional principals are provided, the user not only has his or her own permissions to access objects but in
addition gets the access rights of these principals. If, for example, the user him or herself has no right to access a
specific document but one of the additionally provided principals is allowed to read the content, then the user can
also access the content in the context of this connection.
With this concept an application could also use roles (or even groups) in the ACLs by setting ACEs indicating these
roles or groups. Then the roles of the current user can be evaluated during the connection calls, and the user is
granted access rights according to his or her role (or group) membership.
It is very important to keep in mind that the additional principals are also opaque strings for the document service.
This leaves it up to the application to decide what kind of information it sends as additional principals, including
identifiers only known by the application itself. On the other hand, the application must ensure that there is no user
with an ID identical to any of the additional principals that the application uses in its ACLs, because such a user
might unintentionally get too many access rights.
Example
This example shows how to assign write and read permissions for two kinds of users: authors and readers.
Authors should have write access to documents and readers should only have read access to the documents.
The application defines two roles, one for authors called author-role and one for readers called reader-role.
For more information about securing applications and using roles, see Securing Applications.
To set up permissions for authors and readers as described in our example, set the appropriate ACEs at the
documents. The following code snippet shows how to set these permissions for a single document:
import com.sap.security.um.service.UserManagementAccessor;
import com.sap.security.um.user.User;
import com.sap.security.um.user.UserProvider;
…
String authorRole = "author-role";
String readerRole = "reader-role";
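A minimal sketch of how these two roles might be mapped to CMIS permissions follows. The buildAces helper and the AclSketch class are illustrative only, not part of the documented API; actually applying the ACEs requires a connected OpenCMIS session, which is therefore indicated only in comments:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AclSketch {
    // Builds a principal -> permissions map for the author/reader roles.
    // cmis:write includes all privileges of cmis:read, so the author role
    // needs only a single entry.
    static Map<String, List<String>> buildAces(String authorRole, String readerRole) {
        Map<String, List<String>> aces = new LinkedHashMap<>();
        aces.put(authorRole, Arrays.asList("cmis:write"));
        aces.put(readerRole, Arrays.asList("cmis:read"));
        return aces;
    }

    public static void main(String[] args) {
        Map<String, List<String>> aces = buildAces("author-role", "reader-role");
        // With a connected OpenCMIS session, each entry would be turned into an Ace via
        // session.getObjectFactory().createAce(principal, permissions) and applied with
        // document.applyAcl(addAces, null, AclPropagation.OBJECTONLY);
        System.out.println(aces);
    }
}
```

Because the document service treats principal strings as opaque, the role names used here only have meaning to the application that evaluates them.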
As long as the user's session is active, his or her permission to access the documents is determined by the
user's role assignment. That is, authors can change documents and readers are only allowed to read them.
Related Information
● The {sap:builtin}admin user who always has full access to all objects no matter which ACLs are set.
Note
Note that the document service considers user IDs only as opaque strings. Therefore, the application must
prevent that a normal user connects to the document service using this administration user ID.
● The {sap:builtin}everyone user applies to all users. Therefore, granting a permission to this user using
an ACE grants this permission to all users.
There are some document service specific rules with respect to ACLs.
Object Creation
When creating an object the connected user becomes the owner of the new object. The ACL of the parent folder is
copied to the new object and modified according to the addAcl and removeAcl parameter settings of the create
method.
Access by Path
A user is allowed to fetch an object using the path if the user has at least the cmis:read permission for the object.
In this case, the ACLs of the ancestor folders of the object are not relevant.
Versioning
● All documents of a version series, except the private working copy (PWC), share the same ACL and owner.
● The ACL can be modified only on the last version of a version series, and only if it is not checked out.
● Principals are allowed to check out a document if they have the cmis:write permission for it. They become
the owner of the PWC and the ACL of the PWC initially contains only one ACE with their principal name and the
cmis:all permission.
● The ACL and the owner of a PWC can be changed independently of the other objects of the version series the
PWC belongs to. Only the owner of the PWC and users with the sap:delete permission are allowed to check
in or to cancel a checkout.
● Only principals having the cmis:all permission for the version series are allowed to add or remove ACEs
when checking in a PWC.
● getChildren
Returns all children the principal is allowed to see. If the principal has no read permission for the current folder,
a NodeNotFoundException is thrown.
● getDescendants
Returns only those descendants of a folder F, which the principal is allowed to see. Only those descendants are
returned for which all folders on the path from F to the descendant are accessible to the principal. If the
principal has no read permission for the current folder F, a NodeNotFoundException is thrown.
● getFolderTree
To help you improve the performance of your application that uses the document service, we provide the following
tips.
In many ways the document service behaves like a relational database, where each document and folder is one
entry. Therefore, most of the performance tips for databases also apply to the document service.
Note
These are only recommendations, and may not be suitable in every case. There may be situations where you
cannot and should not apply them.
Documents and folders are stored in the document service in different repositories. Creating a large number of
repositories entails significant CPU usage and requires a considerable amount of storage, even if no documents
are stored.
Recommendation
We recommend that you keep the total number of repositories to a minimum. Avoid, for example, creating a
separate repository for each user, especially if the users do not have large amounts of data to store. In such a
situation, create just one repository instead and store the user data in several separate folders.
If folders contain many children, performance might be impaired when you navigate to one of these folders using a
getChildren call. If you navigate to a folder to analyze its data, for example, using the CMIS Workbench, this
analysis becomes complicated. In contrast, fetching a child in a folder with many children by using its object ID or
its path is not a problem.
It is difficult to define what qualifies as a "large" folder. If you send only one getChildren call per hour, then a
thousand or more children would be totally acceptable, but if you send many calls per second, then even 100
children might impair performance. In any case, the load caused by calling this method increases linearly with the
number of children.
Instead of having one folder with many children, you might consider subdividing the children into different
subfolders or even a subfolder hierarchy. Another alternative to using the getChildren call option is to use the
query method with the IN_FOLDER predicate together with additional restrictions to limit the number of matching
results.
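As an illustration of the subdivision idea, the following sketch derives a stable subfolder path from a document name, spreading children over at most 100 subfolders. The class, the bucket count, and the path layout are made-up examples, not part of the document service API:

```java
public class BucketPath {
    // Maps a document name to one of 100 stable subfolder buckets ("00" .. "99"),
    // so a flat folder with many children becomes a small two-level hierarchy.
    static String bucketFor(String name, String rootPath) {
        int bucket = Math.abs(name.hashCode() % 100); // deterministic for a given name
        return rootPath + "/" + String.format("%02d", bucket) + "/" + name;
    }

    public static void main(String[] args) {
        // The same name always resolves to the same subfolder, so lookups by
        // path remain possible without listing any large folder.
        System.out.println(bucketFor("invoice-4711.pdf", "/invoices"));
    }
}
```

Because the bucket is computed from the name alone, the application can fetch a child directly by its path and never needs a getChildren call on a large folder.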
Several CMIS methods have a skip count parameter, for example, the getChildren or the query method. Using
large skip counts produces a significant load because a huge number of matching result objects is found and
skipped before the final result set can be collected. To prevent the need for large skip counts, try to reduce the
number of matching results by subdividing the children into different subfolders or by using a more selective
query.
Only use a sort criterion if you really need it, because it might reduce performance significantly (see also Paging
with maxItems and skipCount (for example, for getChildren, query) in the Frequently Asked Questions).
In the operational context (see the OperationalContext.java class), you can define the properties that are to
be returned together with the selected objects. Do not query all properties because this might be time consuming
and it increases the amount of data transferred over the network. In particular, requesting the cmis:path
property can be inefficient because it has to be computed for each call. The general rule is to reduce the amount of data that is requested and transferred.
It is much faster to access an object using its ID than using its path.
Using the getFolderTree or getDescendants method on large hierarchies is very inefficient. The same is true
for the folder predicate IN_TREE that you can use in the statement of the query method. All these methods are
slow for large hierarchies even if the final result set is small.
The reason for the performance problems with these methods is that all the descendant folders of the start folder
have to be loaded from the database into the server where the document service is running. This results in many
calls to the database and many objects are transferred over the network. Finally, a very complex query with all the
IDs of the folders in the hierarchy has to be created and sent to the database to get the final result.
For the query method, the size of the searchable folder hierarchy is already restricted to a maximum of 1000. For
larger hierarchies an exception is thrown. Be aware that even a hierarchy of 1000 folders is quite large and results
in a heavy load on the system as well as bad performance for the request.
When applications use the document service they fetch a session object using one of the connect methods.
Creating a session is quite an expensive operation, so sessions should be reused and shared if possible. A session
object is thread-safe and allows parallel method calls.
Usually, a session is bound to a user. To reduce the number of sessions that are created, fetch the session only for
the first request of the user and store it in the user's HTTP session. Then the session can be reused in subsequent
requests of this user.
If an application uses a service user to connect the session to the document service, we recommend that you store
this session in a central place and reuse it for all subsequent requests.
● A session object has an internal cache, for example, for already fetched objects. To make sure that you fetch
the latest version of specific objects, clear the cache from time to time.
● If a session is used for a very long time, problems might occur that result in exceptions (for example, network
connection problems). A possible solution is to replace the failing session with a new one. However, do not
replace a session if an ObjectNotFound exception is thrown because you tried to fetch a non-existent
document or folder. This also applies to similar situations where the exception is part of the normal method
behavior.
Multitenancy
One document service session is always bound to one tenant and to one user. If you create the session only once,
then store it statically, and finally reuse it for all subsequent requests, you end up in the tenant where you first
created the document service session. That is, you do not use multitenancy.
We recommend that you create one document service session per tenant and cache these sessions for future
reuse. Make sure that you do not mix up the tenants on your side.
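Such a per-tenant cache can be sketched as follows. The session type and the connect call are abstracted behind placeholders here, since the real connect method is not shown in this section; only the caching pattern itself is the point:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class TenantSessionCache<S> {
    private final ConcurrentMap<String, S> sessions = new ConcurrentHashMap<>();
    private final Function<String, S> connect; // placeholder for the real connect call

    public TenantSessionCache(Function<String, S> connect) {
        this.connect = connect;
    }

    // Returns the cached session for the tenant, creating it at most once.
    // computeIfAbsent guarantees this even under concurrent first requests.
    public S sessionFor(String tenantId) {
        return sessions.computeIfAbsent(tenantId, connect);
    }
}
```

Keying the cache by tenant ID ensures that requests never end up in the tenant where the first session happened to be created.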
If you expect a high load for a specific tenant, we recommend that you create a pool of sessions for that tenant. A
session is always bound to a particular server of the document service and this will not scale. If you use a session
pool, the different sessions are bound to different document service servers and you will get a much better
performance and scaling.
Search Hints
You can indicate hints for queries. The general syntax is:
hint:<hintname>[,<hintname>]*:<cmis query>
● ignoreOwner: Usually, documents are returned for which the current user is the owner OR is present in an
ACE. The ignoreOwner setting returns only documents for which the current user has an ACE; ownership is
ignored in this case. This improves the speed of the query because the owner check is omitted. This is useful if
the owner is present in an ACE anyway.
● noPath: Does not return the path property even if it is requested. This improves the speed of queries on
folders, because paths do not have to be computed internally.
Sample Code
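Following the hint syntax above, a hinted query statement can be assembled like this. The HintedQuery class is an illustrative helper, and the folder ID in the example is made up:

```java
public class HintedQuery {
    // Prefixes a CMIS query statement with the given search hints, following
    // the hint:<hintname>[,<hintname>]*:<cmis query> syntax described above.
    static String withHints(String cmisQuery, String... hints) {
        return "hint:" + String.join(",", hints) + ":" + cmisQuery;
    }

    public static void main(String[] args) {
        String stmt = withHints(
            "SELECT cmis:objectId FROM cmis:document WHERE IN_FOLDER('f-123')",
            "ignoreOwner", "noPath");
        // The resulting statement would be passed unchanged to the query method.
        System.out.println(stmt);
    }
}
```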
Related Information
The document service executes several backups a day to prevent file loss due to disasters. Backups are kept for 14
days and then deleted. Backups are not needed for simple hard disk crashes, since all storage hardware is based
on redundant hard disks.
If you implement paging using maxItems and skipCount, be aware that the different calls might be sent to
different database servers each returning the result objects in a possibly different order. To get a consistent result
for these calls, add a unique sort criterion so that each server returns the objects using the same order. Be aware
that using a sort criterion might reduce the processing speed significantly. Therefore, only use a sort criterion if
really needed.
You can connect to the document service as an external service; the document service then treats your HTML5
application as an external app that requests access.
Procedure
To enable external access to your document service repositories, deploy a small proxy application that is available
out-of-the-box. For more information about its usage and deployment, see Access the Document Service from an
HTML5 Application [page 452].
Related Information
In the cockpit, you can create, edit, and delete a document service repository for your subaccounts. In addition,
you can monitor the number and size of the tenant repositories of your document service repository.
Note
Due to the tenant isolation in SAP Cloud Platform, the document service cockpit cannot access or view
repositories you create in SAP Document Center, or vice versa.
Related Information
In the cockpit, you can create document service repositories for your subaccounts.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
● Name: Mandatory. Enter a unique name consisting of digits, letters, or special characters. The name is
restricted to 100 characters.
● Display Name: Optional. Enter a display name that is shown instead of the name in the repository list of the
subaccount. The display name is restricted to 200 characters. You cannot change this name later on.
● Description: Optional. Enter a descriptive text for the repository. The description is restricted to 500
characters. You cannot change the description later on.
When you create a repository, you can activate a virus scanner for write accesses. The virus scanner scans files
during uploads. If it finds a virus, write access is denied and an error message is displayed. Note that the time for
uploading a file is prolonged by the time needed to scan the file for viruses.
● Repository Key: Enter a repository key consisting of at least 10 characters but without special characters. This
key is used to access the repository metadata. You cannot recover this key. Therefore, you must be sure to
remember it. You can, however, create a new key using the console client command reset-ecm-key [page 1951].
4. Choose Save.
Related Information
In the cockpit, you can change the name, key, or virus scan settings of the repository. You cannot change the
display name or the description.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. In Repositories > Document Repositories in the navigation area, select the repository for which you want
to change the name or the virus scan setting.
3. Choose Edit, and change the repository name or the virus scan setting.
4. Enter the repository key.
5. To change the repository key itself, choose the Change Repository Key button and fill in the key fields that
appear.
6. Choose Save.
In the cockpit, you can delete a repository including the data of any tenants in the repository.
Context
Caution
Be very careful when using this command. Deleting a repository permanently deletes all data. This data cannot
be recovered.
If you simply forgot the repository key, you can request a new repository key and avoid deleting the repository. For
more information, see reset-ecm-key [page 1951].
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. In Repositories > Document Repositories in the navigation area, select the repository that you want to
delete.
3. Choose Delete.
4. On the dialog that appears, enter the repository key.
5. Choose Delete.
In the cockpit, you can monitor the number and size of the tenant repositories of your document service
repository.
Context
If an application runs in several different tenant contexts, a tenant repository is created for each tenant context.
The tenant repository is created automatically when the application connects to the document service and the
respective tenant repository did not exist before.
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. In Repositories > Document Repositories in the navigation area, click the name of your repository.
3. Choose Tenant Repositories in the navigation area.
Related Information
You can create and manage repositories for the document service with client commands.
The following set of console client commands for managing repositories is available:
Related Information
Procedure
Make sure that you set up the permissions correctly. For more information about building a proxy bridge, see
Build a Proxy Bridge [page 448].
Results
To set up automated batch operations, you can use the Console in the CMIS Workbench. You can create scripts that
perform queries to filter your content, or you can download selected folders only. As a starting point, have a look at
the sample scripts that are available in the Console menu.
With the proxy bridge you get a standard CMIS endpoint, so you are not restricted to the CMIS Workbench as the
only export tool; you can use any CMIS-compliant tool.
The SAP Cloud Platform Feedback Service provides developers, customers, and partners with the option to collect
end-user feedback for their applications. It also provides predefined analytics on the collected feedback data. This
includes rating distribution and detailed text analysis of user sentiment (positive, negative, or neutral).
Note
The Feedback Service is currently a beta offering that is available only on the SAP Cloud Platform trial
landscape for trial accounts.
To use the Feedback Service, you must enable it from the SAP Cloud Platform cockpit for your subaccount.
To use the service's UIs, the following roles must be assigned to your user:
● FeedbackAdministrator
● FeedbackAnalyst
If you are a subaccount owner, these roles are automatically assigned to your user when you enable the Feedback
Service. To enable other SAP ID users to access the Analysis and Administration UIs, you need to assign the roles
manually. For more information, see Consuming the Feedback Service [page 489].
In the Administration UI, the administrator adds the applications for which feedback is to be collected. Then the
developer can use the client API to consume the Feedback Service.
Once the Feedback Service is consumed by the application and feedback data is collected, the feedback analyst
can explore feedback text in the Analysis UI. As a result, a developer can use end-user feedback to improve the
performance and appearance of the specific application.
The Feedback Service leverages the in-memory technology of the SAP HANA DB.
Related Information
Your application can consume the Feedback Service either via a browser or via a Web application back end.
Note
For the role assignments to take effect, either open a new browser session or log out from the cockpit
and log on to it again.
4. In the Administration UI, add the application for which feedback is to be collected.
5. Modify your application code to use the Feedback Service client API to collect the feedback of your application
users.
Your application can consume the Feedback Service either via a browser or via a web application back end.
Related Information
The SAP Cloud Platform Feedback Service is exposed through a client API that you can use to enable users to send
feedback for your application. You do this by adding code to the application that uses the Feedback Service client
API.
Request
An application can consume the Feedback Service using the service's REST API. The messages exchanged
between the client (your application) and the Feedback Service are JSON-encoded. Call the Feedback Service by
issuing an HTTP POST request to the unique application feedback URL that contains your application ID:
https://feedback-<account_name>.hanatrial.ondemand.com/api/v2/apps/<application_id>/posts
The application feedback URL is automatically generated after you register your application in the Administration
UI of the Feedback Service.
To collect feedback data, you must provide values for at least one rating or one free-text attribute. You can
additionally pass values for:
● Up to 5 rating attributes
● Up to 5 free text attributes
● Up to 8 context attributes
Caution
According to the data privacy terms defined in the Terms of Use for SAP HANA Cloud Developer Edition, you
must not collect, process, store, or transmit any personal data using your trial account. Therefore, do not use
the context attributes of the Feedback Service client API to collect personal data such as user ID and user
name.
Response
When the request is successful, the Feedback Service returns an HTTP response with code 200 OK and an empty
body.
In case of errors, the Feedback Service returns an HTTP response with an appropriate error code. Any additional
information that describes the error is contained in the response body as an Error object. For example:
{
"error": {
"code": 30,
"message": "quota exceeded"
}
}
The value of error.code identifies the cause, and the value of error.message describes the cause. The string in
error.message is not meant to be shown to your application users and is therefore not translated. The purpose of
the string is to assist in the development of your application.
The table below lists the most common errors that the service can return. In addition to this list, a call to the
Feedback Service may also result in a response with another HTTP response code. In this case, the HTTP response
code itself should be enough to describe the issue.
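A lightweight way to extract the two fields from such an error body is sketched below. A real application would normally use a JSON library; this regex-based FeedbackError class is only an illustration and not part of the Feedback Service client API:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FeedbackError {
    private static final Pattern CODE = Pattern.compile("\"code\"\\s*:\\s*(\\d+)");
    private static final Pattern MESSAGE = Pattern.compile("\"message\"\\s*:\\s*\"([^\"]*)\"");

    // Extracts error.code from an error response body, or -1 if absent.
    static int errorCode(String body) {
        Matcher m = CODE.matcher(body);
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }

    // Extracts error.message, or null if absent. The message is meant for
    // developers, not for display to application users.
    static String errorMessage(String body) {
        Matcher m = MESSAGE.matcher(body);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String body = "{ \"error\": { \"code\": 30, \"message\": \"quota exceeded\" } }";
        System.out.println(errorCode(body) + ": " + errorMessage(body));
    }
}
```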
Error Codes
Error Cause HTTP Status Code Content Type error.code error.message
Examples:
● URL: https://feedback-<account_name>.hanatrial.ondemand.com/api/v2/apps/
<application_id>/posts
● HTTP method: POST
● Content-Type: application/json
● Request body:
{
"texts":{
"t1": "Very helpful",
"t2": "Well done",
"t3": "Not usable at all",
"t4": "I don't like it",
"t5": "OK"
},
"ratings":{
"r1": {"value":5},
"r2": {"value":2},
"r3": {"value":5},
"r4": {"value":3},
"r5": {"value":1}
},
"context":{
"page": "/b2b/orders",
"view": "payment",
"lang": "en",
"attr1": "1.3.15",
"attr4": "mobile"
}
}
Related Information
Developers can consume the SAP Cloud Platform Feedback Service using a web browser.
Prerequisites
Procedure
a. From the Eclipse main menu, navigate to File > New > Dynamic Web Project.
b. In the Project name field, enter feedback-app. Make sure that SAP HANA Cloud is selected as the target
runtime.
c. Leave the default values for the other project settings and choose Finish.
2. Add an HTML file to the web project:
a. In the Project Explorer view, select the feedback-app node.
b. From the Eclipse main menu, navigate to File > New > HTML File.
c. Enter index.html as the file name.
d. To generate the file, choose Finish.
e. Replace the source code with the following content:
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
<title>Feedback Application</title>
<script src="https://sapui5.hana.ondemand.com/resources/sap-ui-core.js"
id="sap-ui-bootstrap"
data-sap-ui-libs="sap.m, sap.ui.commons"
data-sap-ui-theme="sap_bluecrystal">
</script>
<script>
var app = new sap.m.App({initialPage:"page1"});
var t1 = new sap.m.Text({text: "Please share your feedback"});
var t2 = new sap.m.Text({text: "Do you like it"});
var ind1 = new sap.m.RatingIndicator({maxValue : 5, value : 4});
var t3 = new sap.m.Text({text: "Some free comments:"});
var textArea = new sap.m.TextArea({rows : 2, cols: 40});
var sendBtn = new sap.m.Button({
text : "Send",
press : function() {
var data = {
"texts": {t1: textArea.getValue()},
"ratings": {r1: {value: ind1.getValue()}},
"context": {page: "page1"}
};
$.ajax({
url: "https://feedback-<subaccount_name>.hanatrial.ondemand.com/api/v2/apps/<your_application_id>/posts",
type: "POST",
contentType: "application/json",
data: JSON.stringify(data)
}).done(function() {
jQuery.sap.require("sap.m.MessageToast");
sap.m.MessageToast.show("Thank you. Your feedback was accepted.");
}).fail(function() {
jQuery.sap.require("sap.m.MessageToast");
sap.m.MessageToast.show("Something went wrong, please try again later.");
});
}
});
var vbox = new sap.m.VBox({
fitContainer: true,
displayInline: false,
items: [t1, t2, ind1, t3, textArea, sendBtn]
});
var page1 = new sap.m.Page("page1", {
title: "Feedback Application",
content : vbox
});
app.addPage(page1);
app.placeAt("content");
</script>
</head>
<body class="sapUiBody">
<div id="content"></div>
</body>
</html>
Note
<subaccount_name> is the unique identifier that is automatically generated when the subaccount is
created.
3. Adjust the service URL in the source code to point to the application feedback URL generated for your
application.
4. Test the application on SAP Cloud Platform local runtime:
a. Deploy the application on your SAP Cloud Platform local runtime.
b. Open the application in your web browser: http://<host>:<port>/feedback-app/. Send sample
feedback.
5. Test the application on the SAP Cloud Platform:
a. Deploy the application on the SAP Cloud Platform.
b. Start the application and open it in your web browser.
Related Information
Developers can use the SAP Cloud Platform Feedback Service from the Java code in a simple Java EE Web
application.
Prerequisites
Procedure
a. From the Eclipse main menu, navigate to File > New > Dynamic Web Project.
b. In the Project name field, enter feedback-app. Make sure that SAP HANA Cloud is selected as the target
runtime.
c. Leave the default values for the other project settings and choose Finish.
2. Add a servlet to the web project:
a. In the Project Explorer view, select the feedback-app node.
b. From the Eclipse main menu, navigate to File > New > Servlet.
c. Enter the Java package hello and the class name FeedbackServlet.
d. To generate the servlet, choose Finish.
e. Replace the source code with the following content:
FeedbackServlet.java
package hello;
import java.io.IOException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.conn.ClientConnectionManager;
import org.apache.http.entity.StringEntity;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.DestinationException;
import com.sap.core.connectivity.api.http.HttpDestination;
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
"Something went wrong please try again later.");
} else {
response.getWriter().print("Your feedback was accepted. Thank You!");
}
}
} catch (NamingException e) {
LOGGER.error("Cannot lookup the feedback service destination", e);
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
"Cannot lookup the feedback service destination");
} catch (DestinationException e) {
LOGGER.error("Cannot create HttpClient", e);
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
"Something went wrong please try again later.");
} finally {
if (httpClient != null) {
ClientConnectionManager connectionManager =
httpClient.getConnectionManager();
if (connectionManager != null) {
connectionManager.shutdown();
}
}
}
}
}
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
<title>Feedback Application</title>
<script src="https://sapui5.hana.ondemand.com/resources/sap-ui-core.js"
id="sap-ui-bootstrap"
data-sap-ui-libs="sap.m, sap.ui.commons"
data-sap-ui-theme="sap_bluecrystal">
</script>
<script>
var app = new sap.m.App({initialPage:"page1"});
var t1 = new sap.m.Text({text: "Please share your feedback"});
var t2 = new sap.m.Text({text: "Do you like it?"});
var ind1 = new sap.m.RatingIndicator({maxValue : 5, value : 4});
var t3 = new sap.m.Text({text: "Some free comments:"});
var textArea = new sap.m.TextArea({rows : 2, cols: 40});
var sendBtn = new sap.m.Button({
text : "Send",
press : function() {
var data = {
"text": textArea.getValue(),
"rating": ind1.getValue(),
"page": "page1"
};
$.ajax({
url: "FeedbackServlet",
type: "POST",
data: data
}).done(function() {
jQuery.sap.require("sap.m.MessageToast");
sap.m.MessageToast.show("Thank you. Your feedback was accepted.");
}).fail(function() {
jQuery.sap.require("sap.m.MessageToast");
sap.m.MessageToast.show("Something went wrong please try again later.");
});
}
});
var vbox = new sap.m.VBox({
fitContainer: true,
displayInline: false,
items: [t1, t2, ind1, t3, textArea, sendBtn]
});
var page1 = new sap.m.Page("page1", {
title: "Feedback Application",
content : vbox
});
app.addPage(page1);
app.placeAt("content");
</script>
</head>
<body class="sapUiBody">
<div id="content"></div>
</body>
</html>
web.xml
The deployment descriptor maps the servlet and declares the FeedbackService destination as a resource:
<servlet>
<servlet-name>FeedbackServlet</servlet-name>
<servlet-class>hello.FeedbackServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>FeedbackServlet</servlet-name>
<url-pattern>/FeedbackServlet</url-pattern>
</servlet-mapping>
<resource-ref>
<res-ref-name>FeedbackService</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
c. On your local server, create a destination with the name FeedbackService:
Name=FeedbackService
Type=HTTP
URL=https://feedback-<subaccount_name>.hanatrial.ondemand.com/api/v2/apps/<your_application_id>/posts
Authentication=NoAuthentication
The application feedback URL, which contains the application ID, is automatically generated after you
register the application in the Administration UI of the Feedback Service.
d. Open the application in your web browser: http://<host>:<port>/feedback-app/. Send sample
feedback.
6. Testing the application on SAP Cloud Platform:
a. Deploy the application on the SAP Cloud Platform.
b. Open the SAP Cloud Platform Cockpit in your web browser. Create a destination with the name
FeedbackService and configure it to be consumed by the application at runtime.
Name=FeedbackService
Type=HTTP
URL=https://feedback-<subaccount_name>.hanatrial.ondemand.com/api/v2/apps/<your_application_id>/posts
Authentication=NoAuthentication
The application feedback URL, which contains the application ID, is automatically generated after you
register your application in the Administration UI of the Feedback Service.
c. Start the application and open it in your web browser.
After you deploy your applications on the SAP Cloud Platform, you need to add the applications for which you want
to collect feedback to the Administration UI of the feedback service.
Adding an application generates a dedicated application feedback URL. The developer uses this URL in the client
API to consume the feedback service. Once the feedback service is consumed by the application and feedback
data is collected, the feedback analyst can explore ratings and text analysis in the Analysis UI. Developers can then
use the feedback to improve the application performance and appearance.
To use the Administration and Analysis UIs, you must be assigned the following roles:
● FeedbackAdministrator
● FeedbackAnalyst
If you are the subaccount owner, the roles are automatically assigned to your user after you enable the feedback service. To allow other SAP ID users to access the Analysis and Administration UIs, you need to assign the roles manually.
You can also provide your feedback about the feedback service and its UI. Choose the Feedback button and share
your ideas and suggestions for improvement. Information about your landscape host as well as about the specific
place (page, view, or tab) from which you have called the feedback form is collected by SAP for analysis purposes.
1.7.2.1 Administration
● Add applications for which feedback is to be collected in the Administration UI of the feedback service
● Customize descriptions of feedback questions
● Customize descriptions of context attributes
● Free up feedback quota space
Once you add an application to your list, you enable it to use the feedback service. As a result, a URL that is
specific to both the subaccount and the application is generated. To start collecting feedback, the developer
integrates the URL into the application UI, to enable end users to post feedback (for example, in a feedback form).
The URL is called through a POST request by the application that wants to send feedback. That is, once an end
user submits the feedback form, the application calls the feedback service using the URL and the service stores
the user feedback.
https://feedback-<subaccount_name>.hanatrial.ondemand.com/api/v2/apps/<application_id>/posts
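As a small illustration (the class name and sample values below are hypothetical; only the URL pattern comes from this guide), the application-specific feedback URL can be assembled like this:

```java
import java.net.URI;

public class FeedbackUrlBuilder {

    // Assembles the feedback POST URL from the subaccount name and the
    // application ID generated by the Administration UI.
    static URI feedbackPostUri(String subaccountName, String applicationId) {
        return URI.create("https://feedback-" + subaccountName
                + ".hanatrial.ondemand.com/api/v2/apps/" + applicationId + "/posts");
    }

    public static void main(String[] args) {
        // Placeholder values; use your own subaccount name and application ID
        System.out.println(feedbackPostUri("mysubaccount", "100"));
    }
}
```

The application then sends its POST requests to this URI, for example with the HTTP client created from the FeedbackService destination.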
To use the Administration UI of the feedback service, you need to be assigned the FeedbackAdministrator role.
To access the Administration UI, open the following URL in your browser:
https://feedback-<subaccount_name>.hanatrial.ondemand.com/admin/mobile
Each subaccount has a feedback quota assigned, that is, a specific amount of feedback data that can be stored in
the SAP HANA DB. The quota is 250 feedback forms filled in by end users. When you reach 70% of the feedback
quota, you see a warning message. Once you reach the limit, the feedback service stops processing feedback
requests and storing feedback data, until you free up quota space. Do this by deleting the feedback records for a
specific time period.
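The quota arithmetic above is simple: with a quota of 250 forms, the warning appears at 70%, that is, at 175 stored forms. A minimal sketch (class and method names are hypothetical, and the exact rounding used by the service is an assumption):

```java
public class FeedbackQuota {

    static final int QUOTA = 250;          // feedback forms per subaccount
    static final double WARN_RATIO = 0.70; // warning threshold from the text

    // Number of stored forms at which the warning appears: 70% of 250 = 175
    static int warningThreshold() {
        return (int) Math.ceil(QUOTA * WARN_RATIO);
    }

    // Once the quota is reached, the service stops storing feedback
    static boolean isBlocked(int storedForms) {
        return storedForms >= QUOTA;
    }

    public static void main(String[] args) {
        System.out.println(warningThreshold()); // 175
    }
}
```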
● Rating questions
● Free text questions
● Context attributes
If you have the FeedbackAnalyst role assigned (in addition to the FeedbackAdministrator role), you can analyze
feedback results and export raw feedback data.
As a feedback administrator, you can add applications and administer application feedback.
Procedure
1. Open the Administration UI, where you can perform the following tasks:
a. Add an application by choosing the +Add button and entering a name for the application for which feedback is to be collected.
b. To customize the description of a rating, a free text question, or a context attribute, click the pencil icon in
the respective attribute row.
As a feedback analyst, in the Analysis UI of the Feedback Service you can explore the feedback collected from end
users by viewing detailed ratings or text analysis, or exporting the feedback text as raw data.
The rating analysis presents information about rating questions and how feedback rating is distributed according
to time and distribution criteria.
You can choose a specific time period for which to view analyzed feedback data and to export raw data. The default
time period is the last 7 days.
You can export raw feedback data, so that you can perform specific analysis tailored to your needs. You download
raw feedback data in a .CSV format encoded in UTF-8.
Note
If there are characters that do not appear correctly when you open the exported file, reopen it as UTF-8
encoded.
As a feedback analyst, you can explore the feedback collected from end users by viewing the detailed text analysis.
Text analysis classifies user feedback by:
For further information about text analysis, read the Text Analysis section in the SAP HANA Developer Guide.
The Overview screen displays a summary of all free text feedback questions. Each question tile provides the
following information:
The sentiment summary provides a useful overview of negative, positive, and neutral sentiments of user feedback. Feedback from a single user can contribute a small or large amount to the overall sentiment count of the specific question. In other words, sentiment is calculated not per user feedback but by the sentiment elements (words) in the feedback text.
Select a question tile to see detailed information about the question, including the following:
For example, you can filter your responses for a specific question to show only feedback of type Problem that has
Negative and Neutral sentiment. The returned list is ordered by date (most recent is on top).
Note
No matter what filter is applied, the list always includes responses (if any) that are not classified by type or
sentiment.
You can drill down to see details about a specific feedback response and examine the actual feedback text analysis. You can view the entire response with all detected text analysis "hits".
As a feedback analyst, you can examine the feedback collected from users by viewing a detailed rating analysis.
Users can reply to each rating question by choosing a number on a scale of 1 to 5, where 1 is the lowest rating and 5 is the highest.
The Overview screen shows a summary of all rating questions. Each question tile provides the following
information:
Select a question tile to see detailed information about the question during the time period you specified, including
the following:
Depending on the time period, the graph and table views show the following data:
● Feedback distribution by rating: A graph or table showing the percentage of the overall feedback responses
that receive a specific rating number. That is, how feedback is distributed in terms of a specific rating.
● Feedback distribution by time period: A graph or table of feedback distribution across various time granularities, for example, a day or a year. The data shown is the average rating for the specified time granularity and applies only to the time period initially selected.
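The two views boil down to straightforward aggregations. A hedged sketch (class and method names are hypothetical; the service's actual aggregation logic is not documented here):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class RatingDistribution {

    // Percentage of responses per rating value (1..5): "distribution by rating"
    static Map<Integer, Double> byRating(List<Integer> ratings) {
        Map<Integer, Double> dist = new TreeMap<>();
        for (int r = 1; r <= 5; r++) dist.put(r, 0.0);
        for (int r : ratings) dist.merge(r, 1.0, Double::sum);
        for (int r = 1; r <= 5; r++) dist.put(r, 100.0 * dist.get(r) / ratings.size());
        return dist;
    }

    // Average rating within one time bucket: "distribution by time period"
    static double average(List<Integer> ratings) {
        return ratings.stream().mapToInt(Integer::intValue).average().orElse(0.0);
    }
}
```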
1.8 Gamification
Overview
The SAP Cloud Platform Gamification allows the rapid introduction of gamification concepts into applications. The
service includes an online development and operations environment (gamification workbench) for implementing
and analyzing gamification concepts. The underlying gamification rule management provides support for
sophisticated gamification concepts, covering time constraints, complex nested missions, and collaborative
games. The built-in analytics module allows you to perform advanced analysis of the player's behavior to facilitate
continuous improvement of game concepts.
Product Features
● Web-based IDE (gamification workbench) for modeling game mechanics and rules
● Gamification engine for real-time processing of sophisticated gamification concepts involving time constraints
and cooperation
● Built-in runtime game analytics for continuous improvement of game designs
● Web API for integration
● Simple SAPUI5 integration based on widgets
● Single-sign-on (SSO) support based on Identity Authentication
● Enterprise-level performance and scalability
Learn how to enable the gamification service in your subaccount, and how to configure and use the sample
application HelpDesk.
When enabling the service, configuration steps 2, 3, and 4 are executed automatically, as follows:
● All gamification roles are assigned to the user who enabled the service.
● The required destinations are created at the subaccount level. The destination gsdest requires credentials (user/password). In the trial version you can use an SCN user, but it is safer to create a dedicated technical user.
Note
If you use your SCN user to configure gsdest, make sure you change the destination configuration after you've
changed the SCN user password in SAP ID Service. Otherwise, your user will be locked when using the
HelpDesk app.
Prerequisites
● Access to a SAP Cloud Platform account for personal development, or to a Trial account.
● A subaccount for which you are assigned the role Administrator.
● An SCN user.
Procedure
Prerequisites
Log in to the SAP Cloud Platform cockpit using your SCN user and password.
Procedure
Prerequisites
Log in to the SAP Cloud Platform cockpit using your SCN user and password.
Context
You must configure a destination to allow the communication between your application (in this case, a sample
app) and your subscription to the gamification service. For the sample application, two destinations are necessary:
Note
Create these destinations at the subaccount level of your personal user account.
Procedure
1. In the cockpit, choose the Destinations subtab from the Connectivity tab.
2. Enter the name: gsdest.
3. Select the type: HTTP.
4. (Optional) Enter a description.
5. Enter the application URL of your service instance: https://<application_URL>/gamification/api/tech/JsonRPC
You can find the application URL of your service instance by navigating to Subaccount > Services > Gamification Service > Go to Service.
6. Select proxy type: Internet.
7. Select authentication: AppToAppSSO.
8. Choose Save.
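Taken together, steps 1 to 8 correspond to the following destination properties (replace <application_URL> with the URL of your own service instance):
Name=gsdest
Type=HTTP
URL=https://<application_URL>/gamification/api/tech/JsonRPC
ProxyType=Internet
Authentication=AppToAppSSO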
Prerequisites
● Log in to the SAP Cloud Platform cockpit using your SCN user and password.
● A subaccount for which you are assigned the role Administrator.
Context
To support application-to-application SSO as part of destination gswidgetdest, you must configure your
subaccount to allow principal propagation.
Procedure
1. Open the cockpit and choose the Trust subtab from the Security tab.
2. Choose the Local Service Provider subtab.
3. Choose Edit.
4. Change the Principal Propagation value to Enabled.
Prerequisites
● Log in to the SAP Cloud Platform cockpit with your SCN user and password.
● You are assigned the role TenantOperator.
Procedure
Prerequisites
The gamification development cycle describes how to introduce gamification into existing or new applications.
Creating gamification concepts is a purely conceptual task that is typically executed by gamification designers. The task is executed during the design phase and covers the specification of a meaningful game or gamification design.
Implementing the concept means mapping it to the mechanics offered by the gamification service. This task is also normally performed by gamification designers or IT experts.
Integration is a development task that includes the technical integration of the target application with the APIs of the gamification service. This is normally performed by application developers, since it requires technical knowledge of the application (such as implementing listeners for events or creating visual representations of achievements).
A gamification concept, normally developed by gamification designers and domain experts, describes the mechanics that will encourage users (players) to perform certain tasks. For example, an award system comprising points and badges might encourage call center employees to process tickets efficiently or to select more complex tickets over more straightforward ones.
Note
Creating gamification concepts is not a service that is covered or supported by the gamification service.
A simple gamification concept includes elements such as points and badges. For example, users are awarded
experience points for certain actions, and badges as a visual representation. The gamification concept describes
how these elements motivate users. It therefore includes descriptions of the actions (within the application) that
allow users to attain the various achievements.
Additional examples include missions that foster collaboration or activities with time constraints that encourage
users to work faster.
Implementation means mapping a gamification concept to the elements used in the gamification service. You can
use the gamification workbench to maintain the gamification elements, such as points, badges, levels, or rules. You
can modify the gamification concept at runtime.
Gamification is about full transparency to users, and is intended to encourage them. We therefore advise against
modifying a concept significantly without informing users, since doing so might catch them by surprise and could
possibly demotivate them.
Integration refers to the technical integration of the target application with the APIs of the gamification service.
Integration is required to send events that are of interest to the gamification service, for example, when a user in a
call center has successfully processed a ticket. Integration is also necessary to notify the users about their
achievements, to send notifications to users for earned points, or to display user profiles.
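The tech endpoint shown earlier in the gsdest destination (https://<application_URL>/gamification/api/tech/JsonRPC) accepts JSON-RPC style requests. The sketch below only assembles such a request body; the method name "handleEvent" and the parameter layout are assumptions for illustration, so consult the API documentation of your service version:

```java
public class GamificationEventPayload {

    // Builds a minimal JSON-RPC request body for the gamification API endpoint.
    // The method name and parameter layout passed in are ASSUMPTIONS for
    // illustration; consult the API documentation of your service version.
    static String jsonRpc(String method, String paramsJson) {
        return "{\"method\":\"" + method + "\",\"params\":[" + paramsJson + "]}";
    }

    public static void main(String[] args) {
        // Hypothetical event: a call-center user solved a ticket
        System.out.println(jsonRpc("handleEvent", "{\"type\":\"solvedProblem\"}"));
    }
}
```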
The gamification service mainly supports the integration of cloud applications running on SAP Cloud Platform. Integration of other applications is technically possible, but restricted for security reasons.
Gamification is a continuous process. It is crucial that you monitor the influence of a gamification concept and
react to the users' behavior. For example, you want to know if your gamification concept motivates the target
group or if users lose interest.
The gamification service offers basic analytics: for example, the assignment of points or badges to users over time.
Therefore, you can analyze peaks and troughs of user achievements.
The introduction of gamification often requires the acquisition of sensitive information. For example, you might need to track user behavior within an application to allow the gamification of onboarding scenarios.
The gamification service lets you anonymize user data. It also offers secure communication via the various APIs. However, it is ultimately the responsibility of the host application to ensure data privacy, and application developers must ensure that only the necessary data is sent to the gamification service.
The gamification workbench is the central point for managing all gamification content associated with your subaccount and for accessing key information about your gamification usage.
Summary Dashboard
The figure below shows an example of the Summary dashboard in the workbench and is followed by an
explanation:
The entry page Summary of the gamification workbench provides an overview of the gamification concept for the
selected app, the overall player base and overall landscape.
Logon
You can log on with your subaccount user via SSO (single sign-on).
The gamification workbench can be accessed using the Subscription tab in the SAP Cloud Platform cockpit. The following link is used: https://<SUBSCRIPTION_URL>/gamification
Navigation
● Summary
● Game Design
Note
You must have specific roles in order to access the gamification workbench, see Roles [page 520].
Level — Description
● Game Design: Allows you to read and configure game mechanics (managing points, badges, levels, missions, and rules, for example) for multiple applications
● Terminal: Allows you to test the gamification concept using the API
1.8.3.1 Roles
Different roles can be assigned to users, to enable them to explicitly access the gamification workbench.
Prerequisites
Procedure
Context
The gamification service offers the gamification workbench, an API for integration, and a demo app. Access to the user interfaces and the API is protected using SAP Cloud Platform roles.
Note
Roles must be explicitly assigned to a SAP Cloud Platform user.
Note
The API can be used for the integration of host applications. For productive use, a technical user (an SAP Cloud Platform user) should be created for communication between the host application and the gamification service. (The use of a personal subaccount or user is only recommended for testing or demo purposes.)
1.8.4.1 Roles
The following roles can be assigned to access the gamification service gamification workbench, API or demo app
and must be explicitly assigned to a SAP Cloud Platform user:
Role — Type — Access — Permitted Actions

AppStandard (Technical): API (methods are annotated with required role), Terminal (send events for testing purposes)
● Write only - using rules; reading achievements is possible, but should be avoided
● Send player-related events
● Read player achievements and available achievements

AppAdmin (Technical): API (methods are annotated with required role)
● Read and delete a player record for a single app or for the whole tenant
● Create and delete a user or a team

Player, automatically assigned (Technical, implicit role): API (methods are annotated with required role)
● Send player-related events (only works for the user that is authenticated using the identity provider which is configured for your subaccount)
Note
This role is not a standard SAP Cloud Platform role. It is automatically assigned to a user (player) that is created using the gamification service and cannot be explicitly assigned to a SAP Cloud Platform user.
The SAP Cloud Platform Gamification meets the security and data privacy standards of the SAP Cloud Platform. In general, the gamification service is not responsible for any content, such as game mechanics or player achievements. It is the responsibility of the host application to meet any local data privacy standards. Therefore, you need to make sure that the personal information of players is protected according to the local regulations. In some cases where gamification is applied to employee scenarios, works council approval for the gamified host application might be necessary.
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench, and have opened the Apps tab in
the Operations section.
The gamification service introduces the concept of apps. An app represents a self-contained, isolated context for
defining and executing game mechanics such as points, levels, and rules.
All data and metadata associated with an app are stored in an isolated way. In addition, an isolated rule engine instance is created and started for each app.
Note
Players are stored independently from apps and can therefore take part in multiple apps.
Prerequisites
You have the roles TenantOperator and GamificationDesigner, are logged into the gamification workbench,
and have opened the Apps tab in the Operations section.
Context
An app represents a self-contained, isolated context for defining and executing game mechanics.
Create Apps
Procedure
Update Apps
Procedure
Delete Apps
Procedure
Prerequisites
You have the role GamificationDesigner or TenantOperator or both and are logged into the gamification
workbench.
Context
By switching the app, the gamification workbench only shows game mechanics and player achievements
associated with the selected app.
Procedure
1. Select an app in the app selection combo box located in the upper right corner of the gamification workbench.
2. Optional: Review whether the app has been changed successfully, for example by comparing the summary
page (tab Summary).
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench and have opened the Operations
tab, and navigated to the Data Management section.
The gamification service allows exporting all available apps including their content. You can choose between a full
tenant export including all player data and an export of game mechanics only. The latter can be imported again.
Procedure
1. Select the Export mode in the combo box labeled Export in the form area Import / Export.
○ Full Export: export all game mechanics and player data.
○ Game Mechanics: export game mechanics only.
2. Press Download to start the export. Your browser should show the file storing dialog.
3. Store the provided ZIP file on your disk.
Prerequisites
● You have the role TenantOperator, are logged into the gamification workbench and have opened the
Operations tab, and navigated to the Data Management section.
● You have a gamification service export file.
Note
See section Exporting Apps [page 527] for details.
Context
The gamification service allows importing game mechanics based on existing gamification service export files (ZIP
format). Section Exporting Apps explains how to do the export.
Procedure
1. Press Browse in the form area Import / Export to select the import file.
2. Press Upload to start the import based on the selected file.
Note
If an app with the same name already exists, the import skips this app and does not overwrite its content.
Note
See section Configuring Rules [page 543] for details.
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench, and have opened the Operations
tab, and navigated to the Data Management section.
Context
The gamification service is shipped with selected demo content comprising game mechanics as well as demo
players. The demo content is created within the context of a new app.
Procedure
Note
Appropriate content (points, levels, badges, and rules) is created for the app automatically.
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench and have opened the Operations
tab, and navigated to the Data Management section.
The gamification service is shipped with selected demo content comprising game mechanics as well as demo players. The demo content is created within the context of a new app. The app can be deleted manually, but this will not delete the generated demo players. To delete the full demo content, an explicit action must be triggered.
Procedure
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench, and have opened the Game Design tab.
Context
The gamification concept describes the metrics, achievements and rules that are applied to an application. The
following checklist describes the tasks required to implement your gamification concept in your subscription of the
gamification service.
1. Configuring Achievements:
○ Configuring Points (Point Categories) [page 531]
○ Configuring Levels [page 533]
○ Configuring Badges [page 535]
○ Configuring Missions [page 537]
2. Configuring and Managing Rules [page 543]
General Procedure
For each game mechanics entity there is a tab with a master and details view.
● Master View
○ Shows the list of available entities.
○ Add button for adding a new entity.
Each entity has at least the attributes name and a display name. The name serves as the unique identifier and is
immutable.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Points tab.
Context
Points are the fundamental element of a gamification design. For example, points can indicate the progress in
various dimensions. Points can be flagged as "Hidden from Player" for security or privacy reasons. Points that are
flagged as hidden are not visible to players. Instead, they can be utilized in rules. Furthermore, points can have various subtypes. The table lists the available point types.
Type — Description
● ADVANCING: Advancing points are points that can never decrease. They are used to reflect progress.
Points can be configured in the Points subtab of the Game Design tab.
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Points tab.
Procedure
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Points tab.
Procedure
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Points tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Levels tab.
Caution
Only levels that are based on the default point category are exposed to the default user profile.
A level describes the status of a user once a specific goal is reached. The gamification service allows you to define
levels based on a defined point category. The threshold defines the value of the selected point type to reach the
level.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Levels tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Levels tab.
Context
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Levels tab.
Procedure
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have opened
the Badges tab.
Context
A badge is a graphical representation of an achievement. Hidden badges are not visible to the user before the
assignment and can be used as surprise achievements.
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have opened
the Badges tab.
Procedure
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have opened
the Badges tab.
Procedure
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have opened
the Badges tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Missions tab.
Context
A mission defines what has to be achieved to gain a measurable outcome. Besides basic standalone missions, the gamification service allows modelling complex mission structures using mission conditions and consequences.
Note
Mission conditions and consequences are of a descriptive nature only. Actual condition checking and the execution of consequences have to be done by corresponding rules. These rules are not yet generated automatically.
● Point Conditions: A number of points, each with a respective threshold. Each point can be considered as a
progress indicator: As soon as the threshold is reached, the condition is met.
● A list of missions that have to be completed. Within the API such missions are referred to as sub missions.
The consequences part is limited to a list of follow-up missions, which should be assigned or unlocked after the
current mission has been completed. Within the API such follow-up missions are referred to as nextMissions.
Example for a rule that checks a point condition in its WHEN part and assigns a follow-up mission in its THEN part:
● WHEN
$p : Player($playerid : id)
eval(queryAPIv1.hasPlayerMission($playerid, 'Troubleshooting', false) == true)
eval(queryAPIv1.getScoreForPlayer($playerid, 'Critical Tickets', null, null).getAmount() >= 5)
● THEN
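Put together as one rule in Drools Rule Language, the fragments above take the following shape (the rule name is illustrative, and the concrete THEN-side API call is omitted here because it depends on the update API of your service version):
rule "Assign follow-up mission"
when
$p : Player($playerid : id)
eval(queryAPIv1.hasPlayerMission($playerid, 'Troubleshooting', false) == true)
eval(queryAPIv1.getScoreForPlayer($playerid, 'Critical Tickets', null, null).getAmount() >= 5)
then
// assign the follow-up mission to $playerid via the corresponding API method
end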
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Missions tab.
Procedure
Results
Note
Adding a sub mission or follow-up mission only creates relations in the database. The corresponding rules for checking conditions, assigning follow-up missions, or both, are not generated yet and have to be created manually. However, without storing these relationships and making them available through the achievement query API, it would not be possible to create such rules at all.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Missions tab.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Missions tab.
Procedure
● System Missions: the mission life cycle is fully controlled by the service using API calls within rules.
● User-accepted Missions: the player actively decides whether to accept or reject missions, while the remaining mission life cycle (unlocking or completing a mission) is controlled by the service.
In both cases, the API calls have to be executed within rules to ensure data consistency between the engine and the backend.
All state transitions are triggered by calling the respective API methods within rules, while the list of missions in a
certain state can be retrieved either by calling the API directly or within a rule.
Sample rule for assigning a system mission as part of the user init rule:
● WHEN
● THEN
● WHEN
$p : Player($playerid : id)
eval(queryAPIv1.hasPlayerMission($playerid, 'Troubleshooting', false) == true)
eval(queryAPIv1.getScoreForPlayer($playerid, 'Critical Tickets', null, null).getAmount() >= 5)
● THEN
Note
Invoking the manual mission methods via the user endpoint currently does not trigger any rules. If a rule has to be triggered when a mission becomes active for a player, a separate event is required to trigger that rule.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab.
Context
The rules are a fundamental element of the game mechanics. They describe the consequences of actions, the
corresponding constraints and the goals that can be achieved. The rules allow you to define complex conditions
and consequences based on common complex event processing (CEP) operators.
Related Information
Rules are the core elements of the gamification design. Generally they follow the event condition action (ECA)
structure used for active rules in event-driven architectures. Each rule is structured in two parts:
● Left-hand side (LHS): rule conditions or trigger (event conditions and/or player conditions)
● Right-hand side (RHS): rule consequences (player updates and/or event generation)
The rule conditions (LHS) are maintained in the Trigger (“when”) area. Examples are:
The rule consequences (RHS) are maintained in the Consequences (“then”) area. Examples are:
● Create new events - new event with the type “solvedProblemDelayed” that is triggered with a delay of 1 minute:
Note
The gamification service follows the “rule-first” approach. This means that any achievements of a player are
always updated using the rule engine. A modification of player achievements cannot be done using an API
(without any rule execution).
SAP Cloud Platform Gamification allows you to write rules directly, which gives you maximum flexibility for the
targeted game concept. Alternatively, you can create rules in one of the graphical (form-based) editors in the
gamification workbench.
The declaration of the trigger (“when”) part is based on the Drools Rules Language (DRL).
The trigger part defines the constraints that must be fulfilled in order to execute the consequences ("then" part).
Variables can be defined and used both in the "when" and in the "then" part. This is generally recommended in
case you want to use the same object more than once. Multiple constraints can be described in one trigger part.
The constraints are typically described using the logical operators (within eval statements) and evaluation of the
event object. The event object must be defined with a type and can include multiple parameters. Additionally, DRL
allows you to define temporal constraints using common complex event processing (CEP) operators.
Related Information
http://docs.jboss.org/drools/release/5.6.0.Final/drools-expert-docs/html/ch05.html
The gamification service rule engine allows the use of two event streams:
You must explicitly declare in the trigger part which event stream will be used. Furthermore, you must explicitly
declare in the consequences part which event stream is used in case you create new events. Using the managed
stream is strongly recommended. Only use the unmanaged stream if the auto-retraction does not work with your
rule design.
Context
Variables can be defined in the trigger part and can afterwards be used in both the trigger and the consequences
part. Variables are recommended in case one object is used more than once. For example, a player object needs to
be updated multiple times.
Procedure
A variable is declared by any string with a leading $ sign, for example $player or $var.
Declaration of a variable:
$<VARIABLE> : <EXPRESSION>
Context
An event type must be set for each incoming event, and the event type needs to be checked within the trigger
part. The player's ID is sent with each event; it should be stored in a variable for further use.
Additionally, multiple parameters can be passed with an event and evaluated. A parameter can be a string or any
numeric value. Parameters can be evaluated with logical operators such as equal (==), greater than (>), and
less than (<).
Procedure
Declaration of an event object with a given event type and declaration of a variable with a given player ID:
Note
It is recommended to always assign the player ID (playerid) within the event object to a variable, since the
player ID is necessary to get the corresponding player object for updating achievements in the consequences part.
Declaration of an event with a given event type, declaration of a variable with a given player ID and evaluation of a
property:
EventObject(type=='<EVENT_TYPE>', data['<PROPERTY>']<OPERATOR><VALUE>,
$playerid:playerid) from entry-point eventstream
Note
It is recommended to always evaluate event parameters within the event object instead of defining additional
parameters and using additional eval statements.
EventObject(type=='solvedProblem', data['relevance']=='critical',
$playerid:playerid) from entry-point eventstream
● Declaration of an event with the given type “buttonPressed” and a property with the name “color” and the value
“red”.
EventObject(type=='buttonPressed', data['color']=='red', $playerid:playerid) from
entry-point eventstream
● Declaration of event with the given type “temperatureIncreased” and an integer property with the name
“temperatureValue” where the numeric value is larger than 30.
EventObject(type=='temperatureIncreased',
Integer.parseInt(data['temperatureValue'])>30, $playerid:playerid) from entry-
point eventstream
● Declaration of two events of type “ticketEventA” and “ticketEventB”. Both events must occur and they have to
belong to different players.
EventObject(type=='ticketEventA', $playerid:playerid)
EventObject(type=='ticketEventB', playerid!=$playerid)
● Declaration of two events of type “ticketEventA” and “ticketEventB” using the explicit “and” operator. Both
events must occur and they have to belong to different players.
(EventObject(type=='ticketEventA', $playerid:playerid) and
EventObject(type=='ticketEventB', playerid!=$playerid))
● Declaration of two events of type “ticketEventA” and “ticketEventB” using the “or” operator that describes that
“eventA” or “eventB” must occur and the "player IDs" must not be the same.
(EventObject(type=='ticketEventA', $playerid:playerid) ||
EventObject(type=='ticketEventB', playerid!=$playerid))
● Declaration of two events of type “ticketEvent” where the player IDs are different and the ticket ID is the
same, and another event of the type “connectedEvent” that must not occur.
Context
Eval statements are used to define constraints with data that is not available in the working memory, such as
status of player achievements. Multiple constraints can be defined in one rule with the combination of multiple
logical operators.
Note
It is recommended to avoid eval statements where possible, since they are expensive operations. If you need
one, place it as late as possible within your trigger part.
Procedure
eval(<EXPRESSION><OPERATOR><VALUE>)
● Expression: It is recommended to only use methods of the Query API in eval conditions. The use of the Query
API allows you to evaluate available player details and achievements using Java statements.
● Operator: All logical operators supported by Java are supported.
● Declaration of an eval statement where the mission “Troubleshooting” is assigned to the player.
● Declaration of an eval statement where the “Experience Points” of the player are larger or equal to 10.
● Declaration of an eval statement where the player does not have the badge “Sporting Ace” assigned.
Note
The use of an invalid expression may lead to an error during rule execution. Make sure that referenced point
categories or missions exist and that the spelling is correct.
Creating generic facts (a Map object with an optional key) and storing them in the working memory is supported.
This allows you to store temporary results and create complex constraints (for example, counting the number of
events of a specific type). Generic facts can be evaluated in all rules if they exist.
The data structure of a generic fact is Map<String, Object> data. Additionally, you can set a key for the generic
fact to identify it. A generic fact must be initialized in the consequences part.
GenericFact(key=='<KEY>')
$<FACT_VARIABLE>: GenericFact(key=='<KEY>')
Examples for querying generic facts and assignment to a variable that can be used for evaluation:
● $loginCounter: GenericFact(key=='LoginCounter')
● $daysOfWeek: GenericFact(key=='DaysOfWeek')
The declaration of the consequences (“then”) part supports writing code with the Drools Rules Language (DRL) in
version 5.6.0 and Java code.
Note
The formatting in the consequences part must be in the Java style. The DRL can be used in combination with
Java code.
The consequences part defines what will be executed once the trigger part is fulfilled. It allows you to update the
player achievements or to create new events. Multiple consequences can be defined within one consequences
part.
Related Information
http://docs.jboss.org/drools/release/5.6.0.Final/drools-expert-docs/html/ch05.html
The Update API can be used to update any player achievements. Multiple updates can be executed within one
consequences part.
updateAPIv1.<UPDATE_API_METHOD>(<PLAYER_ID>, <PARAMS>);
update(engine.getPlayerById(<PLAYER_ID>));
updateAPIv1.addMissionToPlayer($playerid, 'Troubleshooting');
update(engine.getPlayerById($playerid));
updateAPIv1.completeMission($playerid, 'Troubleshooting');
update(engine.getPlayerById($playerid));
● Increasing the “Experience Points” of the player by one, completing the mission “Troubleshooting”, and adding
the badge “Champion Badge”.
New events can be created in the consequences part. They can be used for more complex game mechanics
(cascading rules), changing the state of facts or even for temporal triggers.
Generic facts can be used as global variables and are stored in the working memory. The creation of a generic fact
instance has to be done in the consequences part. In the trigger part you can query for certain generic fact
instances and (if required) bind them to local variables. This works just like querying the EventObject.
● Declaration of a generic fact with the key “factB” with a property “relevance” and according value “critical”.
$<FACT_VARIABLE>.getData();
$<FACT_VARIABLE>.setData(<VALUE>);
update($<FACT_VARIABLE>);
$loginCounter.setData("59");
update($loginCounter);
● Assigning the value of the variable “lCounter” to the generic fact “loginCounter”.
$loginCounter.setData(lCounter);
update($loginCounter);
retract($<FACT_VARIABLE>);
retract($loginCounter);
Java code is allowed in the consequences part, so very complex rules can be created. You can use all Java control
flow statements and a selected set of Java objects (for example, collections), create generic facts, or update the
player's achievements.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab.
Procedure
Caution
A newly created rule is not automatically deployed. The deployment is initiated once you apply the changes.
The rule must be activated to be deployed.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab. A rule already exists and is not enabled.
Procedure
1. Check the Activate on Engine Update checkbox of the rule you want to enable.
2. Open the Rule Engine Manager by pressing Rule Engine.
3. Commit your changes by pressing the Apply Changes button in the Rule Engine Manager. The rule will be
deployed immediately after successful validation. A blue flag next to the rule indicates that the rule has been
changed.
Note
A rule that contains errors will not be deployed. Errors can be viewed by pressing the Show Issues button in
the Rule Engine Manager.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab. A rule already exists and is enabled.
Procedure
1. Uncheck the Activate on Engine Update checkbox of the rule you want to disable.
2. Open the Rule Engine Manager by pressing Rule Engine.
3. Commit your changes by pressing the Apply Changes button in the Rule Engine Manager. The rule will be
deployed immediately once validation is successful.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab.
Procedure
1. Click on the name of the rule in the entity list to open the rule editor.
2. Change the rule code.
3. Press Save.
4. Optional: Create or modify additional rules.
5. Close the rule editor and apply changes to deploy the rules.
Caution
A modified rule is not automatically deployed. The deployment is initiated once you have pressed Apply
Changes in the rules overview. The rule must be enabled to be deployed.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab.
Procedure
Caution
Only rules that are disabled can be deleted.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rule Engine tab.
The gamification workbench supports detecting issues with rules both at design time and at runtime. Any
detected issues are displayed in the Rule Engine tab. Syntax errors are checked at design time, once the user
applies the changes.
Procedure
1. Reported rule warnings are displayed in a table, sorted by the rule which caused them.
2. Optional: Press the refresh button attached to the rule warnings table to refresh and check for new warnings.
Prerequisites
You have logged on to the gamification workbench with the role TenantOperator or AppAdmin, and you have
opened the Rule Engine tab of the related app.
Context
The gamification service creates a rule engine instance for each app. Over time, the state of each rule engine
instance changes based on its usage. A recovery mechanism for different rule engine states has been introduced
to allow a clean recovery in case of errors, rule set changes, or system migrations. This mechanism allows you to
create and restore snapshots of the current rule engine instance session and its deployed rule set. Snapshots are
stored in the database.
Generation of snapshots
Using “apply changes” (see Update Rules [page 554] for details), the current rule set stored in the database is
deployed on the currently running rule engine instance. Technically, the current session, which includes all facts
and events, is upgraded to a new rule set. To assure compatibility of new rules with the existing session, rules are
being evaluated one by one. Compatible pairs of session and rule set are stored as snapshots.
Additionally, when receiving events via the “handleEvent” method, the session changes as well and requires the
same recovery mechanism. The gamification service will generate snapshots during event execution at dynamic
intervals.
The gamification service manages rules and corresponding snapshots in the following way:
● After each successful rule deployment (Apply Changes) the corresponding rule set as well as the session are
both tagged with a new version. The service stores at most the 10 latest versions.
● For the latest (currently active) version as well as the previous version the gamification service stores the 10
latest snapshots in slots numbered 1 through 10.
Procedure
1. The Rule Engine section lists a table with all available rule engine snapshots and their details.
2. Choose a rule engine snapshot to recover and press its Recover button.
3. Read and confirm the modal dialog.
4. The gamification service is now recovering the snapshot. This may take a few seconds.
Note
Rule engine snapshots are constantly being created while events are being sent. Older snapshots are
removed by the system during the process. It is recommended to stop any applications from sending
events to the rule engine while restoring snapshots.
Related Information
Notifications are messages that inform users about certain state changes, for example earned achievements, new
missions, new teams. They are considered "see and forget" information and won't stay long in the system.
Context
On one hand, notifications are created automatically when calling certain API methods. On the other hand, you can
also create and assign custom notifications by using the methods addCustomNotificationToPlayer and
addCustomNotificationToTeamMembers.
Notifications are delivered to players or teams by implementing a polling-based approach using the API methods
getNotificationsForPlayer and getAllNotifications.
The gamification service automatically creates notifications for users when certain API methods are called. The
table below lists all methods that implicitly generate notifications and explains the corresponding notification
parameters.
API Method Player Type Category Subject Details Message Date Created
Custom messages can usually be specified using an optional parameter <notificationMessage> of the
corresponding API method.
Examples:
Besides the automatically generated notifications it is possible to add custom notifications to players or teams
using the methods addCustomNotificationToPlayer and addCustomNotificationToTeamMembers from
within rules.
The table explains how the notification parameters are used when creating custom notifications.
API Method Player Type Category Subject Detail Message Date Created
Context
Notifications are strictly defined as "see and forget". The gamification service stores only the last X
notifications for each player (currently X defaults to 25). To show notifications to players, a polling-based
approach has to be implemented using the following API methods:
● getNotificationsForPlayer(playerId, timestamp)&app=APPNAME
Returns the latest notifications for a player starting from the timestamp. This mechanism allows other
applications to better track which notifications have been requested or displayed already. This is the current
approach for "user2service" communication. It works well with the user endpoint using JavaScript.
● getAllNotifications(timestamp)&app=APPNAME
Returns all generated notifications for all players within one app starting from the provided timestamp. This is
the current approach for "application2service" communication. An application can query all notifications for
the app using the tech endpoint and forward the information to the user using custom events or
communication channels. This avoids having all clients in parallel polling for notifications.
Procedure
See the Notification Widget in the Helpdesk Scenario (sap_gs_notifications.js) for more information on how
the polling of notifications can be implemented on the client side. The notification polling is handled as follows:
1. Retrieve the gamification service server time on initialization, using the method getServerTime.
2. Use this server time to initially poll for notifications.
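The two steps above can be sketched in a minimal client-side helper. The notification field name `dateCreated` and the exact JSON-RPC wire format are assumptions here, not confirmed by this guide — check the API documentation of your subscription. The actual transport (HTTP POST to the user endpoint) is omitted:

```python
def build_rpc_call(method, params):
    """Build one JSON-RPC call in the method/params shape used by the API."""
    return {"method": method, "params": params}

class NotificationPoller:
    """Polls notifications for one player, remembering the last seen timestamp."""

    def __init__(self, player_id, server_time):
        # Initialize with the server time (method getServerTime) so the first
        # poll neither misses nor duplicates notifications.
        self.player_id = player_id
        self.since = server_time

    def next_call(self):
        """JSON-RPC body for the next getNotificationsForPlayer poll."""
        return build_rpc_call("getNotificationsForPlayer",
                              [self.player_id, self.since])

    def handle_response(self, notifications):
        """Advance the timestamp past the newest notification received.
        The 'dateCreated' field name is an assumption for illustration."""
        for n in notifications:
            self.since = max(self.since, n["dateCreated"] + 1)
        return notifications
```

Each subsequent poll then only returns notifications created since the previous one, which matches the "see and forget" delivery model described above.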
Prerequisites
You have logged into the gamification workbench and opened the Terminal tab.
Context
The Terminal within the game mechanics area allows you to quickly execute one or more API calls. Make sure that
you have the appropriate access rights for executing the call.
A comprehensive documentation of the API can be found in your SAP Cloud Platform Gamification subscription
under Help API Documentation .
Procedure
1. Enter the list of JSON RPC calls as a JSON array: [JSON_RPC_CALL1, JSON_RPC_CALL2,…]
Example:
2. Press Execute to execute the calls. Check the Force synchronous execution checkbox to enforce sequential
execution of the calls in the JSON array.
3. Review the server response. You can view the detailed JSON response by clicking on the symbol on the right.
Note
The calls are executed in the context of the currently selected app (see dropdown box in the upper right
corner of the gamification workbench).
The defined JSON RPC calls are stored in the browser cache. For restoring the initial sample calls press
Restore Example.
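Assembling such a call list programmatically can look as follows. This is a sketch: the method names are taken from the examples in this guide, and the transport of the resulting JSON array to the Terminal or an endpoint is omitted:

```python
import json

def build_call(method, *params):
    """One JSON-RPC call in the method/params shape used by the Terminal."""
    return {"method": method, "params": list(params)}

def build_batch(*calls):
    """Serialize calls into the JSON array the Terminal accepts:
    [JSON_RPC_CALL1, JSON_RPC_CALL2, ...]"""
    return json.dumps(list(calls))

# Two calls from the examples in this guide, combined into one batch.
batch = build_batch(
    build_call("getPlayerRecord", "demo-user@mail.com"),
    build_call("handleEvent", {"type": "myEvent",
                               "playerid": "demo-user@mail.com",
                               "data": {}}),
)
```

The resulting string can be pasted directly into the Terminal input field.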
Related Information
Prerequisites
Navigate to the Terminal in the Game Design tab. Your user has the role AppAdmin.
Context
The Terminal allows you to send events that are typically sent to the host application.
Note
The Terminal should only be used to send events for testing purposes. If you send events for a user in a
productive environment, their real achievements will be modified!
Procedure
1. Enter the list of JSON RPC calls with the method handleEvent.
[ {"method":"handleEvent", "params":[{"type":"myEvent","playerid":"demo-
user@mail.com","data":{}}]} ]
2. Press Execute to execute the calls. Check the Force synchronous execution checkbox to enforce sequential
execution of the calls in the JSON array.
3. Review the server response. You can view the detailed JSON response by clicking on the symbol on the right.
Once the event is sent successfully, the response is true.
4. All rules that listen on the according event type (when clause) will be executed.
Prerequisites
Context
The Terminal allows you to execute all methods for retrieving the user achievements data.
Procedure
1. Enter list of JSON RPC calls with the method with the desired achievement query methods.
Example getPlayerRecord:
[ {"method":"getPlayerRecord", "params":["demo-user@mail.com"]} ]
2. Press Execute to execute the calls. Check the Force synchronous execution checkbox to enforce sequential
execution of the calls in the JSON array.
3. Review the server response. You can view the detailed JSON response by clicking on the symbol on the right.
Once the call is executed successfully, you will see the result.
Prerequisites
You are logged into the gamification workbench and have opened the Logging tab.
Context
The logging view allows you to search the event log for the selected app. The event log includes all API calls related
to “Event Submission” as well as the corresponding API calls executed from within the rules, which were triggered
by the corresponding events.
Note
The maximum retention time for the event log is 7 days, but not exceeding 500,000 log entries.
Rules with an EventObject fact and one or more other facts (Player or GenericFact)
in the WHEN part can cause endless loops.
Understanding why such rule sets result in loops requires a deeper understanding of the gamification service itself:
● Rules with fact-based conditions are triggered on changes of the respective fact or facts. For example, insert,
update or retract fact.
● handleEvent inserts a fact of type EventObject and fires all rules. For example the THEN parts of all rules
that satisfy a fact-based condition involving EventObject will be executed.
● THEN execution may involve the modification of facts (insert, update, delete), which in turn may trigger
further rules. For example, insert a new GenericFact or update an existing fact (Player or GenericFact).
Rule execution runs until there are no more rules to fire.
● Endless loops occur if there are cycles in the rule execution graph, for example, one rule triggering another and
vice versa. The gamification service loop detection will detect such loops at runtime and stop the engine until
the problems are resolved.
● The EventObject inserted by handleEvent is by default retracted automatically after all rules have fired.
Thus, if the WHEN part includes EventObject conditions and further fact conditions, for example, Player(),
the rule will trigger again if one of the respective facts changes while the overall condition is still true.
● This can cause an endless loop. For example: the WHEN part of Rule 1 includes an EventObject and queries
for the corresponding player (Player(playerid==$playerid)). The WHEN part of Rule 2 expects a Player
change only (Player()). If both Rule 1 and Rule 2 include an update($player) in the THEN part, this will
result in an endless loop.
Mitigation strategy
● Use update(fact) with care. Consider whether it is needed and check for rules that could trigger accidentally.
● Minimize the number of update calls in the THEN part. Example: Only call update($player) if player
achievement data has changed and you want other rules to retrigger, e.g. rules checking for mission
completion. This will also significantly improve performance since unnecessary rule executions are avoided.
Both key and value are interpreted as Strings. Thus, an explicit type conversion is required if you want to compare
them with numbers. This type conversion is done using the standard Java approach for the different numeric
types, for example, Integer.parseInt(value) or Double.parseDouble(value).
Example:
[
{"method":"handleEvent", "params":
[{"type":"solvedProblem","playerid":"D053659","data":
{"relevance":"critical","processTime":15}}]}
]
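The explicit conversion a rule has to perform can be illustrated outside the rule engine as well. The following is a plain-Python analogy of the Integer.parseInt approach, not gamification API code:

```python
def compare_numeric(data, key, threshold):
    """Event/fact data values arrive as strings, so an explicit conversion
    (the Python analogue of Java's Integer.parseInt) is required before a
    numeric comparison can be made."""
    return int(data[key]) > threshold

# Even though processTime was sent as the number 15, it is held as a string.
event_data = {"relevance": "critical", "processTime": "15"}
assert compare_numeric(event_data, "processTime", 10)
```

Without the conversion, a string comparison such as "15" > "9" would give the wrong result, which is exactly why the rule must convert explicitly.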
Related Information
Context
The integration of a (gamified) cloud application must consider the following aspects:
The following sections describe how you can deal with these aspects using the Web APIs provided. The sample
code shown is based on the demo application "Help Desk". The demo application's source code is also available in
GitHub .
Note
The sample code used to demonstrate the integration is not ready for production.
The Application Programming Interface (API) of the gamification service is the central integration point of your
application.
● Technical endpoint for integrating gamification events and user management in your backend.
● User endpoint for integrating user achievements in the application frontend.
It is recommended to use the technical endpoint only for executing methods of the gamification service that must
not be executed by the users themselves, such as sending events to the gamification service that trigger certain
achievements or performing user management tasks, creating players for example. Authentication and
authorization in this case is based on a technical user that is created for the application itself.
The user endpoint should be used for accessing user-related information, for example earned achievements,
available achievements/missions, and notifications. A great advantage of this approach is that the gamification
service manages access control based on the user roles, for instance to make sure that a user cannot access
other users' data. For this, the authenticated user must be passed to the user endpoint.
Note
The whole integration can be done using only the technical endpoint. However, in this case you must manage
access control yourself.
The documentation for the API can be found in your gamification service under Help API Documentation or
at https://gamification.hana.ondemand.com/gamification/documentation/documentation.html.
In an SAP Cloud Platform setting, we assume that the gamified app and the gamification service subscription are
located in the same subaccount. Furthermore, we assume that the application back end is written in Java, while
the application front end is based on HTML5 or SAPUI5.
The technical endpoint is used to send gamification-relevant events and perform user management tasks from the
application back end. Communication is based on a BASIC AUTH destination that uses the user name and
password of a technical user.
Note
For productive settings the client-side event sending should support resending events in case of failures,
planned or unplanned service downtimes. For instance, short planned downtimes (less than 5 minutes
according to Cloud Platform maintenance schedules) are required to apply regular gamification service
updates.
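A resend mechanism of the kind recommended in the note can be sketched as follows. The queue, retry counts, backoff values, and the injected send callback are illustrative assumptions; the actual posting to the technical endpoint (for example via the BASIC AUTH destination) is left out:

```python
import time
from collections import deque

class EventSender:
    """Queues gamification events and retries delivery, so that short
    (planned or unplanned) service downtimes do not lose events."""

    def __init__(self, send, retries=3, backoff=1.0, sleep=time.sleep):
        self.send = send          # callable posting one event to the tech endpoint
        self.retries = retries    # attempts per event before giving up for now
        self.backoff = backoff    # base delay between attempts, in seconds
        self.sleep = sleep        # injectable for testing
        self.pending = deque()

    def submit(self, event):
        """Queue an event and try to deliver everything that is pending."""
        self.pending.append(event)
        self.flush()

    def flush(self):
        """Deliver queued events in order; keep undeliverable events queued."""
        while self.pending:
            event = self.pending[0]
            for attempt in range(self.retries):
                try:
                    self.send(event)
                    break
                except IOError:
                    # Linear backoff before the next attempt.
                    self.sleep(self.backoff * (attempt + 1))
            else:
                # All attempts failed: keep the event for a later flush.
                return
            self.pending.popleft()
```

Calling `flush()` again later (for example from a timer) retries events that could not be delivered during a downtime.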
The easiest way to show player achievements is to integrate a default user profile that comes with the gamification
service subscription as an iFrame in the application's web front end.
To implement a user profile or single widgets (for example a progress bar tailored to the application's front end),
we recommend you use the user endpoint in combination with a local proxy servlet and an app-to-app SSO
destination. The proxy servlet prevents running into cross-site scripting issues and the app-to-app SSO
destination automatically forwards the credentials of the authenticated user to the gamification service. This
allows reuse of the access control mechanisms offered by the gamification service.
Since the user endpoint is used from a browser it is protected against cross-site request forgery. Accordingly, an
XSRF token has to be acquired by the client first.
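The token handshake can be sketched as two small helpers. The header name `X-CSRF-Token` and the `Fetch` convention are common SAP conventions and an assumption here — verify them against the API documentation of your subscription:

```python
def token_fetch_headers():
    """Headers for the initial GET that asks the server to issue a token
    (assumed convention: send the value 'Fetch' to request one)."""
    return {"X-CSRF-Token": "Fetch"}

def protected_post_headers(token):
    """Headers for subsequent JSON-RPC POSTs through the proxy servlet;
    the previously acquired token must accompany each modifying request."""
    return {"X-CSRF-Token": token,
            "Content-Type": "application/json"}
```

The client first issues a GET with the fetch headers, reads the token from the response header, and then reuses it for all POSTs in the session.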
Context
If the user performs actions in the application that are relevant to gamification, the gamification service has to be
informed by invoking the corresponding API method. To prevent cheating this should be done in the application
back end using the technical endpoint offered by the API.
Note
For productive settings the client-side event sending should support resending events in case of failures,
planned or unplanned service downtimes. For instance, short planned downtimes (less than 5 minutes
according to Cloud Platform maintenance schedules) are required to apply regular gamification service
updates.
Procedure
Note
See also:
○ Demo application source code: https://github.com/SAP/gamification-demo-app
○ API Documentation: SAP Cloud Platform Gamification subscription, under Help API
Documentation .
Context
The gamification service subscription includes a default user profile, which you can include in your application as
an <iFrame/>.
1. Construct the URL of the default user profile:
https://<Subscription URL>/gamification/userprofile.html?name=<userid>&app=<appid>
2. Include the default user profile in your HTML5 code as an iFrame:
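Constructing the profile URL with proper escaping of the user ID can be sketched like this; the subscription URL below is a placeholder, and the path pattern is taken from the template above:

```python
from urllib.parse import urlencode

def userprofile_url(subscription_url, userid, appid):
    """URL of the default user profile page, suitable for an iFrame src.
    Pattern: https://<Subscription URL>/gamification/userprofile.html
             ?name=<userid>&app=<appid>"""
    query = urlencode({"name": userid, "app": appid})
    return f"{subscription_url}/gamification/userprofile.html?{query}"
```

Using urlencode ensures characters such as '@' in e-mail-based user IDs are escaped correctly.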
Prerequisites
Configure your subaccount to allow principal propagation. For more information, see HTTP Destinations [page
130]
Context
The integration of custom gamification elements tailored to your application's user interface requires the
development of custom JavaScript/HTML5 widgets. To avoid cross-site-scripting issues, you should introduce a
proxy servlet in the application. This servlet forwards JSON-RPC requests to the user endpoint using an App-to-
App SSO destination. This way, the gamification service has access to the user principal and the built-in access
control is active.
Procedure
API Documentation: SAP Cloud Platform Gamification subscription under Help API Documentation .
Context
The players (users) must be explicitly created before they can be used to assign achievements. A player context is
always valid for one tenant and therefore can be used across multiple apps (managed in one tenant).
Procedure
1. Register (create) a player (user) for a tenant subscription using the API method createPlayer.
Note
This is done automatically on the first event if the flag Auto-Create Players is set to true for the given app.
2. (Optional) Initialize a player (user) by creating a rule listening for an event of type initPlayerForApp.
a. Precondition: The player is registered.
b. On event: if a player has not yet been initialized for the given app, an event of type initPlayerForApp is
automatically inserted into the engine. The THEN part of this rule should include the user-defined init
actions, for example assigning initial missions.
c. (Optional) If you want players to be created with a display name you can add the optional parameter
playerName to the event. During the automated player creation this parameter is used for setting the
player name. Example:
{"method":"handleEvent","params":
[{"type":"linkProvided","playerid":"maria.rossi@sap.com", "playerName":
"Maria Rossi", "data":{}}]}
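Such a handleEvent call, including the optional playerName parameter, can be assembled programmatically. A sketch mirroring the example above:

```python
def handle_event_call(event_type, playerid, data=None, player_name=None):
    """Build a handleEvent JSON-RPC call. The optional playerName is used
    to set the display name when the player is auto-created."""
    event = {"type": event_type, "playerid": playerid, "data": data or {}}
    if player_name is not None:
        event["playerName"] = player_name
    return {"method": "handleEvent", "params": [event]}
```

Omitting `player_name` produces the plain event shape used elsewhere in this guide.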
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Game Design tab.
Context
Introducing gamification is a continuous process, since the game mechanics can be modified at any point in time.
For example, the number of points a player can earn might be adjusted in order to influence the behavior of the
user.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Analytics tab.
Context
You can view the statistics of achievements such as points and badges. The metrics that can be viewed are all
point categories and badges that are maintained for your application.
The following aggregations can be selected (the values for badges cannot be aggregated):
Note
The analytics are currently limited to point categories and badges. Analytics on player level are not available due
to privacy reasons.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Analytics tab. You have selected the statistics you are interested in. A time range must be selected.
Context
You can view the statistics of achievements such as points and badges. The selected values can be compared to an
earlier time range in order to identify changes in the assignment of achievements.
Note
A time range for the statistics must be selected.
View a lag chart for a comparison of the selected data to an earlier time range.
1. Select the Enable lag chart checkbox.
2. Select the lag amount for comparison.
The lag chart displays the difference between the aggregated values and the values from before the lag
amount. For example, when you select the sum of a point category for the current month and a lag amount of
one month, the lag chart shows the difference compared to the previous month.
In this case study, a demo application will be gamified in order to demonstrate the implementation and
configuration of a gamification concept step by step.
The demo host application is a “Help Desk” software, which is typically used by call center employees. Customers
can create tickets (for an issue with software or hardware, for example) and call center employees can process
these tickets.
The image below shows the welcome screen of the Help Desk application. The welcome screen appears once the
user has successfully authenticated with the identity provider. The user must have the role helpdesk. The
assignment of roles is described in Roles [page 520].
Context
The demo application (Help Desk) is automatically subscribed for each subaccount that is subscribed to the
gamification service.
The gamification service is already integrated into the demo application. Events such as the processing of
tickets are sent to the gamification service of the subaccount subscription, and the achievements are
retrieved through the corresponding interfaces.
Since the gamification service and the demo applications are subscriptions, a destination has to be enabled in
order to allow communication between the services. A technical user is also required in order to allow secure
communication.
Procedure
The Help Desk app can be accessed via the menu Help > Open Help Desk. The following link is used:
https://<SUBSCRIPTION_URL>/helpdesk. The role helpdesk must be granted to the user.
Context
The user requires the role helpdesk in order to access the help desk application.
Procedure
Related Information
The destination requires a technical user for secure communication between your application and the gamification
service subscription.
Context
Note
You can request user IDs at the SAP Service Marketplace: http://service.sap.com/request-user . SAP Service
Marketplace users are automatically registered with the SAP ID service, which controls user access to SAP
Cloud Platform.
1. Request a technical user via SMP. (You can use your subaccount user as well, but this is not recommended for
security reasons.)
2. In the SAP Cloud Platform cockpit, choose the Services tab.
3. Click the Gamification Service tile.
4. Click on the Configure Gamification Service link.
Related Information
Prerequisites
For more information about how to install the SAP Cloud Platform tools, see Eclipse Tools [page 903].
Context
The demo application's (Help Desk) source code is also available in GitHub .
Procedure
3. Open Eclipse with SAP Cloud Platform tools and choose File Import .
For more information, see Deploying on the Cloud from Eclipse IDE [page 1191].
7. Configure destinations and roles for the deployed application. Use the same configuration as described in
section Configure Available Subscription [page 577].
Without gamification, the host application gives the user (call center employee) no feedback on his or her
daily work, and the user does not know how he or she performs compared to colleagues.
The requirement for gamification in the demo application is to intrinsically motivate users with instant
feedback (achievements). Collaborative feedback is introduced, and the progress of each individual user is
made visible, as well as his or her performance compared to others.
Point Categories
Levels
Based on the number of experience points a user gains, he or she can reach different levels. Three levels are
introduced:
“Novice” - the initial level for new players
“Competent” - this level can be reached once the user has gained 10 “Experience Points”
“Expert” - this level can be reached once the user has gained 50 “Experience Points”
Badges
Based on the successful completion of a mission, the user will gain a badge. The following badges are introduced:
“Troubleshooting Champion”
Missions
Missions will be introduced to motivate continuous efforts. The following missions will be introduced:
“Troubleshooting”
Rules
For each processed ticket, the user will gain 1 “Experience point”.
For each processed ticket categorized as “critical”, the user will gain 2 additional “Experience Points” to motivate
him or her to solve critical tickets with higher priority.
For each processed ticket categorized as “critical”, the user will gain 1 “Critical Tickets” point.
Once the mission troubleshooting is completed, the user will gain the “Troubleshooting Champion” badge.
The gamification concept introduced above can be generated automatically within the gamification workbench.
The generated concept is designed for the demo application only and serves as an example of a gamification
concept.
The demo content for the Help Desk application can be generated in the OPERATIONS tab. You need the
TenantOperator role. Go to Demo Content Creation (shown in the picture below) and choose the Create
HelpDesk Demo button. After a short while, the notification Gamification concept successfully created.
confirms that content generation was successful. The demo content is generated into a new app: HelpDesk.
The generated gamification concept contains more gamification elements than described in Switch Apps [page
527] to provide additional examples.
The following sections describe how the gamification design is realized in the gamification workbench.
The gamification workbench makes it possible to manage gamification concepts for multiple apps. An app must be
created before the gamification concept can be implemented.
Procedure
1. Go to the OPERATIONS tab. The user must have the TenantOperator role.
2. Go to Apps.
Next Steps
Once the app has been created, it must be selected in the top right corner so that the gamification concept can be
implemented for it.
Procedure
Results
You should now see both point categories (“Experience Points” and “Critical Tickets”) in the list for Points.
Procedure
7. Press Add.
8. Enter Name: “Competent”.
9. Select Points: “Experience Points”.
10. Enter Threshold: “10”.
11. Press Add.
Results
You should now see all three levels (“Novice”, “Competent”, and “Expert”) in the list for Levels.
Procedure
You should now see all badges (“Troubleshooting Champion”) in the list for Badges.
Procedure
You should now see all missions (“Troubleshooting”) in the list for Missions.
Context
Procedure
Procedure
1. Press Add.
2. Enter Name: “GiveXPCritical”
3. Enter Description: “Give additional Experience Points for critical ticket.”
4. Enter the following text for the trigger:
Procedure
1. Press Add.
2. Enter Name: “GiveCT”
3. Enter Description: “Give Critical Ticket Points for processed ticket.”
4. Enter the following text for the trigger:
Procedure
1. Press Add.
2. Enter Name: “AssignMissionTS”
3. Enter Description: “Assign Troubleshooting mission.”
4. Enter the following text for the trigger:
$p : Player($playerid : uid)
$event : EventObject(type=='initPlayerForApp', $playerid==playerid) from entry-point eventstream
updateAPI.addMissionToPlayer($playerid, 'Troubleshooting');
update($p);
Procedure
1. Press Add.
$p : Player($playerid : uid);
eval(queryAPI.hasPlayerMission($playerid, 'Troubleshooting') == true)
eval(queryAPI.getPointsForPlayer($playerid, 'Critical Tickets').getAmount() >= 5)
updateAPI.completeMission($playerid, 'Troubleshooting');
updateAPI.addBadgeToPlayer($playerid, 'Troubleshooting Champion', 'You solved 5 critical tickets!');
update($p);
Results
You should now see the created rules in the list for Rules.
Results
Use the SAP Cloud Platform Git service to store and maintain a version of the source code of applications, for
example, for HTML5 and Java applications, in Git repositories.
Git is a widely used open source system for revision management of source code that facilitates distributed and
concurrent large-scale development workflows.
You can use any standard compliant Git client to connect to the Git service. Many modern integrated development
environments, including but not limited to Eclipse and the SAP Web IDE, provide tools for working with Git. There
are also native clients available for many operating systems and platforms.
Features
● Highly distributed. Every clone of a repository contains the complete version history.
● Cost-effective and simple creation and merging of branches supporting a multitude of development styles.
● Almost all operations are performed on a local clone of a repository and therefore are very fast.
● No need to be permanently online, only required when synchronizing with the Git service.
● Only differences between versions are recorded, allowing for very compact storage and efficient transport.
● Widely used and supported by many tools.
Restrictions
While Git can manage and compare text files very efficiently, it was not designed for processing large files or files
with binary content, such as libraries, build artifacts, multimedia files (images or movies), or database backups.
Consider using the document service or some other suitable storage service for storing such content.
To ensure best possible performance and health of the service, the following restrictions apply:
● The size of an individual file cannot exceed 20 MB. Pushes of changes that contain a file larger than 20 MB are
rejected.
● The overall size of the bare repository stored in the Git service cannot exceed 500 MB.
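As an illustration, you can inspect the size of a local repository with standard Git tooling. The following is a minimal sketch against a scratch repository under /tmp (all paths and identities are placeholders); the size that counts against the 500 MB limit is that of the bare repository on the Git service, but the same command works locally:

```shell
set -e
rm -rf /tmp/size-repo
git init -q /tmp/size-repo
cd /tmp/size-repo
git config user.email "demo@example.com"   # placeholder identity
git config user.name "Demo"
echo "source code" > main.txt
git add main.txt
git commit -q -m "one file"
# Report object counts and the on-disk size of the repository:
git count-objects -vH
```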
Third-Party Notice
The SAP Cloud Platform Git service makes use of the Git-Icon-1788C image made available by Git (https://git-
scm.com/downloads/logos ) under the Creative Commons Attribution 3.0 Unported License (CC BY 3.0) http://
creativecommons.org/licenses/by/3.0 .
Related Information
In the SAP Cloud Platform cockpit, you can create and delete Git repositories, as well as lock and unlock
repositories for write operations. In addition, you can monitor the current disk consumption of your repositories
and perform garbage collections to clean up and compact repository content.
Related Information
In the SAP Cloud Platform cockpit, you can create Git repositories for your subaccounts.
Prerequisites
Context
Note
To create a repository for the static content of an HTML5 application, see Create an HTML5 Application [page
1267].
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the subaccount.
Field Entry
Name (Mandatory) A unique name starting with a lowercase letter, followed by digits and lowercase letters. The name is restricted to 30 characters.
Description (Optional) A descriptive text for the repository. You can change this description later on.
Create empty commit An initial empty commit in the history of the repository. This might be useful if you want to import the content of another repository.
4. Choose OK.
5. To navigate to the details page of the repository, click its name.
The URL of the Git repository appears under Source Location on the detail page of the repository. You can use this
URL to access the repository with a standard-compliant Git client. You cannot use this URL in a browser to access
the Git repository.
Related Information
Permissions for Git repositories are granted based on the subaccount member roles that are assigned to users. To
grant a subaccount member access to a Git repository, assign one of these roles: Administrator, Developer, or
Support User.
Prerequisites
Context
For details about the permissions associated with the individual roles, see Security [page 605].
Procedure
Make sure that you assign at least one of these roles: Administrator, Developer, or Support User.
In the SAP Cloud Platform cockpit, you can change the state of a Git repository temporarily to READ ONLY to block
all write operations.
Prerequisites
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the required subaccount.
2. In the list of Git repositories, locate the repository you want to work with and follow the link on the repository's
name.
3. On the details page of the repository, choose Set Read Only.
Results
The state flag of the repository changes from ACTIVE to READ ONLY and all further write operations on this
repository are prohibited.
Note
To unlock the repository again and allow write access, choose Set Active on the details page of the repository.
In the SAP Cloud Platform cockpit, you can delete a Git repository unless it is associated with an HTML5
application. In that case, delete the HTML5 application instead.
Prerequisites
Context
Caution
Be very careful when using this command. Deleting a Git repository also permanently deletes all data and the
complete history. Clone the repository to some other storage before deleting it from the Git service in case you
need to restore its content later on.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the appropriate subaccount.
In the SAP Cloud Platform cockpit, you can trigger a garbage collection for a repository to clean up unnecessary
objects and compact the repository content aggressively.
Prerequisites
Perform this operation from time to time to ensure the best possible performance for all Git operations. The Git
service also automatically runs normal garbage collections periodically.
Note
This operation might take a considerable amount of time and may also impact the performance of some Git
operations while it is running.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the subaccount.
Results
The garbage collection runs in the background. You can use the Git repository without restrictions while the
process is running.
We assume that you are familiar with Git concepts, and that you have access to a suitable Git client, for example,
SAP Web IDE for performing Git operations.
If you are new to Git, we strongly recommend that you read a text book about it, and consult the Best Practices
guide before using the service.
Related Information
The URL of the Git repository is shown under Source Location on the details page of the repository. Use this URL to
access the repository using a Git client.
Prerequisites
In the subaccount where the repository resides, you must be a subaccount member who is assigned the role
Administrator, Developer, or Support User.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the required subaccount.
You need to clone the Git repository of your application to your development environment.
Procedure
1. In the cockpit, copy the link to the Git repository of your application.
a. Log on with a user who is a subaccount member to the SAP Cloud Platform cockpit.
b. From the navigation area, choose Applications HTML5 Applications .
c. Click your newly created application.
d. Switch to the Versioning tab.
e. Under Source Location, copy the link that points to the Git repository of your application.
2. You can either use Eclipse or the Git command line tool to execute this step.
○ To use Eclipse:
1. Start the Eclipse IDE.
2. In the JavaScript perspective, open the Git Repositories view.
3. Choose the Clone a Git repository icon.
4. Paste the link that points to the Git repository of your application.
5. If prompted, enter your SCN user and password.
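The command-line alternative from step 2 works analogously. The sketch below uses a local bare repository as a stand-in for the SAP-hosted one; with the real service you would paste the https:// Source Location URL instead and enter your SCN credentials when prompted:

```shell
set -e
rm -rf /tmp/demo-remote.git /tmp/demo-clone
# Local bare repository standing in for the SAP-hosted repository URL:
git init -q --bare /tmp/demo-remote.git
# Clone it into your workspace, exactly as with the copied URL:
git clone -q /tmp/demo-remote.git /tmp/demo-clone
ls -d /tmp/demo-clone/.git
```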
Related Information
EGit/User Guide
Web IDE: Cloning a Repository
The Git fetch operation transfers changes from the remote repository to your local repository.
Prerequisites
● You must be a subaccount member who is assigned the role Administrator, Developer, or Support User.
● You have cloned the repository to your workspace, see Clone Repositories [page 602].
Context
Refer to the SAP Web IDE documentation if you want to fetch changes to SAP Web IDE. Otherwise, see the
documentation of your Git client to learn how to fetch changes from a remote Git repository.
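With a plain Git client, a fetch can be sketched as follows; a scratch remote and clone under /tmp stand in for the SAP-hosted repository, and all paths are placeholders:

```shell
set -e
rm -rf /tmp/fetch-remote.git /tmp/fetch-clone
# Scratch remote and clone standing in for the SAP-hosted repository:
git init -q --bare /tmp/fetch-remote.git
git clone -q /tmp/fetch-remote.git /tmp/fetch-clone
cd /tmp/fetch-clone
# Transfer changes (commits, branches, tags) from the remote repository
# into the local repository without merging them into the working tree:
git fetch origin
git fetch --tags origin
echo "fetch complete"
```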
Procedure
Related Information
The Git push operation transfers changes from your local repository to a remote repository.
Prerequisites
● You must be a subaccount member who is assigned the role Administrator or Developer.
● You have already committed the changes you want to push in your local repository.
● You have ensured that the e-mail address in the push commit matches the e-mail address you registered with
the SAP ID service.
Context
Refer to the SAP Web IDE documentation if you want to push changes from SAP Web IDE. Otherwise, see the
documentation of your Git client to learn how to push changes to a remote Git repository.
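With a plain Git client, the push can be sketched as follows. The repository paths and the e-mail address are placeholders; in practice the commit e-mail must be the address registered with the SAP ID service, or the push is rejected:

```shell
set -e
rm -rf /tmp/push-remote.git /tmp/push-clone
git init -q --bare /tmp/push-remote.git
git clone -q /tmp/push-remote.git /tmp/push-clone
cd /tmp/push-clone
# The commit e-mail must match the address registered with the SAP ID service:
git config user.email "maria.rossi@example.com"   # placeholder address
git config user.name "Maria Rossi"
echo "hello" > README.md
git add README.md
git commit -q -m "Initial commit"
# Transfer the committed change to the remote repository:
git push -q origin HEAD
echo "push complete"
```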
Procedure
Related Information
The Git service offers a web-based repository browser that allows you to inspect the content of a repository.
Prerequisites
In the subaccount where the repository resides, you must be a subaccount member who is assigned the role
Administrator, Developer, or Support User.
The repository browser gives read-only access to the full history of a Git repository, including its branches and tags
as well as the content of the files. Moreover, it allows you to download specific versions as ZIP files.
The repository browser automatically renders *.md Markdown files into HTML to make it easier to create
documentation.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the required subaccount.
You can find the commits of a given user containing the user's name and e-mail address.
Procedure
1. Clone the Git repositories of the account to which the user had write access.
For more information, see Determine the Repository URL [page 602] and Clone Repositories [page 602].
2. On each of the Git repositories, execute the following commands:
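The exact commands are not reproduced above; a typical way to list a given user's commits across all branches is sketched below against a scratch repository (the identity is a placeholder):

```shell
set -e
rm -rf /tmp/audit-repo
git init -q /tmp/audit-repo
cd /tmp/audit-repo
git config user.email "maria.rossi@example.com"   # placeholder user
git config user.name "Maria Rossi"
git commit -q --allow-empty -m "Sample commit"
# List every commit authored with the user's e-mail address,
# across all branches of the repository:
git log --all --author="maria.rossi@example.com" --format="%H %an <%ae>"
```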
1.9.3 Security
Access to the Git service is protected by SAP Cloud Platform roles and granted only to subaccount members.
Restrictions
You cannot host public repositories or repositories with anonymous access on the Git service.
Access to a Git repository is granted only to users who are authenticated by the SAP ID service. When sending
requests, users must authenticate using their SAP ID service credentials.
Permissions
The permitted operations depend on the subaccount member role of the user.
Read access is granted to all users who are assigned the Administrator, Developer, or Support User role. These
users are allowed to do the following:
● Clone a repository
● Fetch commits and tags
Write access is granted to all users who are assigned the Administrator or Developer role. These users are allowed
to do the following:
● Create repositories.
● Push commits
● Push tags
Note
If the repository is associated with an HTML5 application, pushing a tag defines a new version for the
HTML5 application. The version name is the same as the tag name.
Only users who are assigned the Administrator role are allowed to do the following:
● Delete repositories
● Run garbage collection on repositories
● Lock and unlock repositories
● Delete remote branches
● Delete tags
● Push commits committed by other users (forge committer identity)
● Forcefully push commits, for example to rewrite the history of a Git repository
● Forcefully push tags, for example to move the version of an HTML5 application to a different commit
Related Information
Governments place legal requirements on industry to protect data and privacy. We provide features and functions
to help you meet these requirements.
Note
SAP does not provide legal advice in any form. SAP software supports data protection compliance by providing
security features and data protection-relevant functions, such as blocking and deletion of personal data. In
many cases, compliance with applicable data protection and privacy laws is not covered by a product feature.
Furthermore, this information should not be taken as advice or a recommendation regarding additional features
that would be required in specific IT environments. Decisions related to data protection must be made on a
case-by-case basis, taking into consideration the given system landscape and the applicable legal requirements.
Definitions and other terms used in this documentation are not taken from a specific legal source.
Handle personal data with care. You as the data controller are legally responsible when processing personal data.
If you need to know which repositories contain Git commits of a given user that contain the user's name and e-mail
address, see Find Commits of a Given User [page 605].
If you need help with this, open a ticket on BC-NEO-GIT as described in 1888290 . Indicate the user's e-mail
address and the account containing the Git repositories to which this user had write access.
If you need to anonymize a user's e-mail address and name in a given Git repository, you must rewrite the
history of the Git repository. This changes the IDs of all affected commits and their successor commits.
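One way to rewrite authorship locally is git filter-branch, sketched here against a scratch repository with placeholder identities (other tools, such as git filter-repo, achieve the same result):

```shell
set -e
rm -rf /tmp/anon-repo
git init -q /tmp/anon-repo
cd /tmp/anon-repo
git config user.email "maria.rossi@example.com"   # placeholder identity
git config user.name "Maria Rossi"
git commit -q --allow-empty -m "commit to anonymize"
# Rewrite every commit, replacing the user's name and address; this
# changes the IDs of all affected commits and their successors.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --env-filter '
  if [ "$GIT_AUTHOR_EMAIL" = "maria.rossi@example.com" ]; then
    export GIT_AUTHOR_NAME="anonymous" GIT_AUTHOR_EMAIL="anonymous@invalid"
    export GIT_COMMITTER_NAME="anonymous" GIT_COMMITTER_EMAIL="anonymous@invalid"
  fi' -- --all
git log --format="%an <%ae>"
```

Afterwards, the rewritten history must be force-pushed, which on the Git service requires the Administrator role.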
If you intend to delete a subaccount or terminate your contract you can export the Git repositories by cloning
them. For more information, see Determine the Repository URL [page 602] and Clone Repositories [page 602].
Related Information
Following best practices can help you get started with Git and avoid common pitfalls.
If you are new to Git, we strongly recommend that you read a text book about Git, search the Internet for
documentation and guides, or get in touch with the large worldwide community of developers working with Git.
Note
The only valid exception to this guideline is if you accidentally pushed a secret, for example, a password, to
the Git service.
● Don't create dependencies on changes that have not yet been pushed.
While Git provides some powerful mechanisms for handling chains of commits, for example interactive
rebasing, these are usually considered to be for experienced users only.
● Do not push binary files.
Git efficiently calculates differences in text files, but not in binary files. Pushing binary files bloats your
repository size and affects performance, for example, in clone operations.
● Store source code, not generated files and build artifacts.
Keep build artifacts in a separate artifact repository because they tend to change frequently and bloat your
commit history. Furthermore, build artifacts are usually stored in some sort of binary or archive format that Git
cannot handle efficiently.
● Periodically run garbage collection.
Trigger a garbage collection in the SAP Cloud Platform cockpit from time to time to compact and clean up your
repository. Also run garbage collection regularly for repositories cloned to your workspace. This will minimize
the disk usage and improve the performance of common Git commands.
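A local garbage collection can be sketched as follows (the scratch repository and identity are placeholders); the garbage collection button in the cockpit does the equivalent for the server-side repository:

```shell
set -e
rm -rf /tmp/gc-repo
git init -q /tmp/gc-repo
cd /tmp/gc-repo
git config user.email "demo@example.com"   # placeholder identity
git config user.name "Demo"
git commit -q --allow-empty -m "sample commit"
# Compact and clean up the local clone:
git gc --aggressive --prune=now --quiet
echo "gc complete"
```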
While working with the Git service, you might occasionally encounter common problems and error messages. The
actual error messages and their presentation depend on the Git client you are using for communication with the
Git service.
General Issues
● All remote operations on a repository fail with Authentication failed for ....
Make sure you've entered the correct SAP ID credentials. Verify that you can log on to the SAP Cloud Platform,
for example, to the cockpit. If that fails as well, your subaccount may have been locked temporarily due to too
many failed logon attempts. If the problem persists, contact SAP Support for help.
● A remote operation on a repository fails with Git access forbidden.
This message means you don't have permission to access the repository at all, or to perform the requested Git
operation. Ensure that you are a member of the subaccount that owns the repository. For read access (clone,
fetch, pull), you must be assigned the role Administrator, Developer, or Support User. For write access
(push, push tags), you must be assigned the Administrator or Developer role. For more information about
required roles for certain Git operations, see Security [page 605].
● Change pushes fail with a message similar to: You are not allowed to perform this operation.
To push into this reference you need 'Push' rights. ... HEAD -> master (prohibited
by Gerrit).
This message means you are not assigned the subaccount member role Developer or Administrator, or that
the repository is currently locked for write operations. Check your roles in the SAP Cloud Platform cockpit or
ask a subaccount administrator to assign the necessary roles. Verify the state of the repository in the cockpit
and unlock it to enable write operations.
Currently, the Git service does not support the Gerrit code review workflow. To use it, you need to run your own
Gerrit server, which you integrate into SAP Cloud Platform using the cloud connector.
● Change pushes fail with You are not committer ....
● Change pushes fail with Pack exceeds the limit of ..., rejecting the pack.
This error message indicates that the maximum size of your Git repository would be exceeded by accepting
this change. The Git service imposes a hard limit of 500 MB as the maximum size of repositories to ensure the
best possible performance and health of the service. You can see this limit in the SAP Cloud Platform cockpit,
as well as your current repository size.
Run a garbage collection in the SAP Cloud Platform cockpit to clean up unnecessary objects and compact the
repository content. If doing this doesn't significantly reduce the size of the repository, it's usually an indication
that in addition to source code, the repository contains build artifacts or some other binary data that cannot
be compressed efficiently. Remove such files from the history of the repository and consider storing them
outside the Git service.
● Change pushes fail with Object too large (... bytes), rejecting the pack. Max object size
limit is ... bytes.
This error message indicates that the commit you are trying to push contains files that are too large to be
stored by the Git service. The Git service imposes a hard limit of 20 MB as the maximum size of individual files
in a repository to ensure the best possible performance and health of the service. Remove the file or files that
are too big from the commit and push again.
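To locate oversized files in a repository's history with standard Git tooling, the following sketch builds a scratch repository containing a 1 MB binary stand-in and lists the largest blobs (all names are placeholders):

```shell
set -e
rm -rf /tmp/large-repo
git init -q /tmp/large-repo
cd /tmp/large-repo
git config user.email "demo@example.com"   # placeholder identity
git config user.name "Demo"
head -c 1048576 /dev/zero > big.bin    # 1 MB stand-in for an oversized file
git add big.bin
git commit -q -m "add binary file"
# List all blobs in the full history with their sizes, largest last,
# to find the files that push a commit over the per-file limit:
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectsize) %(rest)' \
  | awk '$1 == "blob" {print $2, $3}' \
  | sort -n | tail -n 5
```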
Related Information
The traditional point-to-point communication model is a decentralized one where applications and services
directly communicate with each other. The service decouples communication between the sending and receiving
applications and ensures the delivery of messages between them.
The messaging models that the service supports include the following:
● In the point-to-point model, the service enables applications to communicate with each other through
message queues. A sending application sends a message to a specific named queue. There is a one-to-one
correspondence between a receiving application and its queue. The message queue retains messages until
the receiving application consumes them. You can manage these queues using enterprise messaging
administration.
● In the publish-subscribe model, the service enables a sending application to publish messages to a topic.
Topics do not retain messages; to receive them, applications must be subscribed to the topic and be active
when the message is sent.
● In queue subscriptions, the service enables a sending application to publish messages to a topic that directly
sends the message to the queue to which it is bound. For example, messages from an S/4HANA system can
only be sent to a topic. A queue subscription ensures that the message is retained until it is consumed by the
receiving application.
The messaging model that the service supports for eventing is:
Messages from an event source: The service has the built-in capability of receiving events from an event source
(S/4HANA), event lookup, and event discovery. Different event sources can publish events to the service running
on the cloud. An event channel group is a logical grouping of events at the event source with a unique name. At
the event source, related events are grouped by an administrator. The group is then associated with a topic
using the service, which means that receiving applications can subscribe to this topic to access messages from
the event source.
SAP Enterprise Messaging (Beta) service is at the heart of a centralized message-oriented architecture. It is more
scalable and reliable compared to the traditional point-to-point communication model.
The service consists of one or more messaging hosts. You can think of a messaging host as a space or domain in
the service where your message queues and topics reside.
The messaging models that the service supports include the following:
● In the publish-subscribe model, the service enables a sending application to publish messages to a topic. To
receive these messages, applications must be subscribed to that topic. Use this model when each message
can be consumed by any number of receiving applications. These topics must be managed programmatically;
you cannot use SAP Cloud Platform cockpit.
Perform the following tasks to configure and set up enterprise messaging service in your subaccount.
Use SAP Enterprise Messaging (Beta) in SAP Cloud Platform cockpit to manage messaging hosts. A messaging
host is a space or domain in enterprise messaging service where the message queues and topics reside.
Prerequisites
You have an SAP Cloud Platform subaccount with messaging hosts provisioned in it.
Procedure
1. Navigate to your subaccount in the Neo environment. For more information, see Navigate to Global Accounts
and Subaccounts [page 964].
2. In the navigation area, choose Services. You can see the list of services available to you.
For more information about accessing services, see Using Services in the Neo Environment [page 1119].
3. Choose Enterprise Messaging.
4. Choose Messaging Hosts in the navigation area.
○ Edit the description of a messaging host by selecting it and choosing Edit Description.
○ Create and manage queues in your messaging host. For more information, see Manage Queues [page
616].
Use SAP Enterprise Messaging (Beta) in SAP Cloud Platform cockpit to manage queues in your messaging hosts.
Context
Messaging hosts use queues to enable point-to-point communication between two Java applications. You can
configure a queue to deliver messages in different ways when multiple applications are connected to it. This
property of a queue is called access type and it can be one of the following values:
● Exclusive: Only one application can receive messages from an exclusive queue. This is typically the first
application that connects to the queue.
● Non-Exclusive: Multiple applications can connect to a non-exclusive queue. Each application is serviced in a
round-robin fashion to provide load-balancing.
Procedure
1. Navigate to your subaccount in the Neo environment. For more information, see Navigate to Global Accounts
and Subaccounts [page 964].
2. Choose Services in the navigation area. You can see the list of services available to you.
For more information about accessing services, see Using Services in the Neo Environment [page 1119].
3. Choose Enterprise Messaging.
4. Choose Messaging Hosts in the navigation area.
5. Select a messaging host.
6. Choose Queues in the navigation area.
You can also search for a queue by typing its name in the Search field. The list of queues is filtered to match the
pattern you have entered.
7. To create a new queue, perform the following substeps.
a. Choose Create Queue.
b. Enter a queue name.
c. Select an access type.
d. Choose Save.
8. Manage the queues by performing one or more of the following administrative tasks:
Use SAP Enterprise Messaging (Beta) in SAP Cloud Platform cockpit to create and manage application bindings to
messaging hosts.
Context
To complete the messaging setup, create application bindings that connect Java applications to messaging hosts.
After an application binding has been created, applications can send messages to any queue or topic in the
messaging host. They can also receive messages from any queue or topic in the messaging host.
Procedure
1. Navigate to your subaccount in the Neo environment. For more information, see Navigate to Global Accounts
and Subaccounts [page 964].
2. Choose Services in the navigation area.
For more information about accessing services, see Using Services in the Neo Environment [page 1119].
3. Choose Enterprise Messaging.
4. Choose Application Bindings in the navigation area.
5. To create an application binding:
a. Choose Create Application Binding. At least one Java application must be available to create an application
binding. For more information on developing Java applications, see Developing Java Applications [page
1164].
b. Select a Java application.
c. Select the messaging host to which you want to bind the application.
d. Enter a name for the application binding. This name must be unique across all application bindings
associated with the selected Java application.
e. Choose Save. You may need to restart the application, depending on how you developed it.
Use this service to monitor Java applications or database systems running on SAP Cloud Platform.
You can view current and historical metrics, register availability and JMX checks, and set alert recipients.
Furthermore, you can integrate monitoring data with external tools or build new tools such as dashboards, custom notifications, or self-healing apps. You perform these operations by using the SAP Cloud Platform cockpit, the console client, or the monitoring REST API.
Note
The monitoring service is available in the Neo environment.
Retrieve Java application metrics in a JSON format by performing a REST API request defined by the monitoring
API.
Parameter | Value
processes | A list of processes. Each process contains the process ID, the state of the process, and the list of the metrics for that process.
Example
The JSON response for Java application metrics may look like the following example:
[
  {
    "account": "mySubaccount",
    "application": "hello",
    "state": "Ok",
    "processes": [
      {
        "process": "bf061f611cc520f39839f2fa9e44813b2a20cdb7",
        "state": "Ok",
        "metrics": [
          {
            "name": "Used Disc Space",
            "state": "Ok",
            "value": 43,
            "unit": "%",
            "warningThreshold": 90,
            "errorThreshold": 95,
            "timestamp": 1456408611000,
            "output": "DISK OK - free space: / 4177 MB (54% inode=84%); /var 1417 MB (74% inode=98%); /tmp 1845 MB (96% inode=99%);"
          }
        ]
      }
    ]
  }
]
Related Information
Learn how to configure a custom application that retrieves the metrics for Java applications running on SAP Cloud
Platform. The dashboard, as implemented, shows the states of the Java applications and can also show the state
and metrics of the processes running on those applications.
Prerequisites
● To test the entire scenario, you need subaccounts on SAP Cloud Platform in two regions (Europe [Rot/
Germany] and US East).
● To retrieve the metrics from Java applications, you need two deployed and running Java applications.
This tutorial uses a Java project published on GitHub. This project contains a notification application that requests
the metrics of the following Java applications (running on SAP Cloud Platform):
After receiving each JSON response, the dashboard application parses the response and retrieves the name and
state of each application as well as the name, state, value, thresholds, unit, and timestamp of the metrics for each
process. The data is arranged in a list and then shown in the browser as a dashboard. For more information about
the JSON response, see Monitoring Service Response for Java Applications [page 618].
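The parsing step described above can be sketched with plain Java classes. The class and method names here are illustrative assumptions (the GitHub project's actual types may differ); only the field names mirror the JSON keys of the monitoring response:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the monitoring response (class names are
// illustrative; only the field names mirror the JSON keys shown earlier).
public class MetricsModel {

    record Metric(String name, String state, double value, String unit,
                  double warningThreshold, double errorThreshold, long timestamp) {}

    record ProcessInfo(String process, String state, List<Metric> metrics) {}

    record AppMetrics(String account, String application, String state,
                      List<ProcessInfo> processes) {}

    // Flattens the nested response into displayable dashboard rows.
    static List<String> toDashboardRows(AppMetrics app) {
        List<String> rows = new ArrayList<>();
        for (ProcessInfo p : app.processes()) {
            for (Metric m : p.metrics()) {
                rows.add(app.application() + "/" + p.process() + "/" + m.name()
                        + " = " + m.state() + " (" + m.value() + " " + m.unit() + ")");
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        Metric disk = new Metric("Used Disc Space", "Ok", 43, "%", 90, 95, 1456408611000L);
        AppMetrics hello = new AppMetrics("mySubaccount", "hello", "Ok",
                List.of(new ProcessInfo("bf061f61", "Ok", List.of(disk))));
        toDashboardRows(hello).forEach(System.out::println);
    }
}
```

A real dashboard would first fetch and deserialize the JSON (for example with a JSON library), then render rows like these in the browser.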
Procedure
Note
You can also import your project by copying the URL from GitHub and pasting it as a Git repository path or URI after you switch to the Git perspective. Remember to switch back to the Java perspective afterward.
3. In Eclipse, open the Configuration.java class and update the following information: your logon
credentials, your Java applications and their subaccounts and regions (hosts).
...
private final String user = "my_username";
private final String password = "my_password";
private final List<ApplicationConfiguration> appsList = new ArrayList<ApplicationConfiguration>();

public void configure() {
    String landscapeFQDN1 = "api.hana.ondemand.com";
    String account1 = "a1";
    String application1 = "app1";
    ApplicationConfiguration app1Config = new ApplicationConfiguration(application1, account1, landscapeFQDN1);
    this.appsList.add(app1Config);
    ...
}
Note
The example above shows only two applications, but you can create more and add them to the list.
Tip
View the status of your Java applications and start them in the SAP Cloud Platform cockpit.
○ When you select an application, you can view the states of the application’s processes.
○ When you select a process, you can view the process’s metrics.
Related Information
Cockpit
Java: Application Operations
Regions and Hosts
Configure an example notification scenario that includes a custom application that notifies you of critical metrics
via e-mail or SMS. The application also performs actions to fix issues based on these critical metrics.
Prerequisites
● To test the entire scenario, you need subaccounts on SAP Cloud Platform in two regions (Europe [Rot/
Germany] and US East).
Note
If a Java application is not started yet, the notification application automatically triggers the start process.
Context
In this tutorial, you'll implement a notification application that requests the metrics of the following Java
applications (running on SAP Cloud Platform):
Note
Since the requests are sent to only two applications, the Maven project that you import in Eclipse only spawns
two threads. However, you can change this number in the MetricsWatcher class, where the
ScheduledThreadPoolExecutor(2) constructor is called. Furthermore, if you decide to change the list of
applications, you also need to correct the list in the Demo class of the imported project.
When the notification application receives the Java application metrics, it checks for critical metrics. The
application then sends an e-mail or SMS, depending on whether the metrics are received as critical once or three
times. In addition, the notification application restarts the Java application when the metrics are detected as
critical three times.
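The escalation rule described above can be sketched as follows. The class and method names are hypothetical, not taken from the GitHub project; the sketch shows only the once-versus-three-times logic:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (assumed names, not the tutorial project's code):
// an e-mail on the first critical reading, and an SMS plus an application
// restart on the third consecutive critical reading.
public class CriticalMetricWatcher {

    private int criticalCount = 0;
    final List<String> actions = new ArrayList<>(); // records what was triggered

    void onMetricState(String state) {
        if (!"Critical".equals(state)) {
            criticalCount = 0; // a healthy reading resets the escalation
            return;
        }
        criticalCount++;
        if (criticalCount == 1) {
            actions.add("send e-mail");
        } else if (criticalCount == 3) {
            actions.add("send SMS");
            actions.add("restart application");
            criticalCount = 0; // start over after the restart
        }
    }

    public static void main(String[] args) {
        CriticalMetricWatcher w = new CriticalMetricWatcher();
        for (String s : List.of("Ok", "Critical", "Critical", "Critical")) {
            w.onMetricState(s);
        }
        System.out.println(w.actions); // [send e-mail, send SMS, restart application]
    }
}
```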
Procedure
Note
You can also import your project by copying the URL from GitHub and pasting it as a Git repository path or URI after you switch to the Git perspective. Remember to switch back to the Java perspective afterward.
3. Open the Demo.java class and update the following information: your e-mail and SMS addresses, your logon
credentials, your Java applications and their subaccounts and regions.
...
String mail_to = "my_email@email.com";
String mail_to_sms = "my_email@sms-service.com";
private final String auth_user = "my_user";
private final String auth_pass = "my_password";
String landscapeFqdn1 = "api.hana.ondemand.com";
String account1 = "a1";
...
4. Open the Mailsender.java class and update your e-mail account settings.
...
private static final String FROM = "my_email_account@email.com";
final String userName = "my_email_account";
final String password = "my_email_password";
...
public static void sendEmail(String to, String subject, String body) throws
AddressException, MessagingException {
// Set up the mail server
Properties properties = new Properties();
properties.setProperty("mail.transport.protocol", "smtp");
properties.setProperty("mail.smtp.auth", "true");
properties.setProperty("mail.smtp.starttls.enable", "true");
properties.setProperty("mail.smtp.port", "587");
properties.setProperty("mail.smtp.host", "smtp.email.com");
...
To do this, you can create a JMX check with a very low critical threshold for HeapMemoryUsage so that the
check is always received in a critical state.
Example
To use the console commands, you need to set up the console client. For more information, see Set Up
the Console Client.
Related Information
create-jmx-check
Monitoring Service [page 618]
Context
The Remote Data Sync service provides bi-directional synchronization of complex structured data between many
remote databases at the edge and SAP Cloud Platform databases at the center. The service is based on SAP SQL
Anywhere and its MobiLink technology.
● Using Remote Data Sync, you can create occasionally connected applications at the edge. These include applications for which a permanent connection is not practical or economical, and applications that must continue to operate through unexpected network failures.
● Also, you can create applications that use a local database and synchronize with the cloud when a connection
is available.
● Remote Data Sync allows you to create remote applications that store and share large amounts of data
between the application and the cloud. This can significantly reduce latency for data-rich applications and
provide a more responsive user experience for remote applications.
A single cloud database may have hundreds of thousands of data collection and action endpoints that operate in
the real world over sometimes unreliable networks. Remote Data Sync provides a way to connect all of these
remote applications and to synchronize all databases at the edge into a single cloud database.
● SAP HANA database on the cloud via SQL Anywhere MobiLink clients, running on the edge devices;
● SQL Anywhere MobiLink servers, which are provided in the cloud by the Remote Data Sync service.
New insights can later be gained through analytics and data mining on the consolidated data in the cloud.
Sizing
Before you start working with the service, check its sizing requirements and choose the optimal hardware features
for smooth operation of your applications. For more information, see Performance and Scalability of the MobiLink Server
[page 648].
Prerequisites
● You have an account in a productive SAP Cloud Platform landscape (for example, hana.ondemand.com,
us1.hana.ondemand.com, ap1.hana.ondemand.com, eu2.hana.ondemand.com).
● Your SAP Cloud Platform account has an SAP HANA instance associated to it. The Remote Data Sync service
is currently only supported with SAP HANA database as target database in the cloud.
● On the edge side, you need to install the SAP SQL Anywhere Remote Database Client, version 16. You can get a free Developer Edition. See also the existing production packages.
Context
The procedure below makes the Remote Data Sync service available in your SAP Cloud Platform account. Because the service is not enabled by default, first fulfill the prerequisites above, and then follow the procedure below to request the Remote Data Sync service for your account.
Note
Before you start working with the service, check its sizing requirements and choose the optimal hardware
features for smooth operation of your applications. For more information, see Performance and Scalability of the
MobiLink Server [page 648].
To get access to the Remote Data Sync service, you need to extend your standard SAP Cloud Platform license with
an a-la-carte license for Remote Data Sync in one of two flavors:
1. Remote Data Sync, Standard: MobiLink server on 2 Cores / 4 GB RAM (Price list material number: 8003943)
2. Remote Data Sync, Premium: MobiLink server on 4 Cores / 8 GB RAM (Price list material number: 8003944)
Next Steps
Prerequisites
● You have received the needed licenses and have enabled the Remote Data Sync service for your subaccount.
For more information, see Get Access to the Remote Data Sync Service [page 630].
● You have installed and configured the console client. For more information, see Using the Console Client [page
1792].
Context
To use the Remote Data Sync service, a MobiLink server must be started and bound to the SAP HANA database of
your subaccount. This can be done by the following steps (they are described in detail in the procedure below):
1. Deploy the MobiLink server on a compute unit of your subaccount using the console client.
2. Bind the MobiLink server to your SAP HANA database to connect the MobiLink server to the database.
3. Start the MobiLink server within the console client.
Note
To provision a MobiLink server in your subaccount, you need a free compute unit of your quota. The Remote
Data Sync service license includes an additional compute unit for the MobiLink server.
Procedure
1. Deploy the MobiLink server on a compute unit of your subaccount using the deploy command. You can
configure the MobiLink server to be started with customized server options (see MobiLink Server Options ).
You can do this either during deployment using the --ev parameter, or later on using the set-application-
property command. You can also specify the compute unit by using the --size parameter of the deploy
command.
○ Example: configuring the MobiLink options during deployment and starting the MobiLink server on a premium compute unit:
2. Bind the MobiLink server to your SAP HANA database. This is needed to connect the MobiLink server to the
database.
Note
Prerequisite: You have created an SAP HANA database user dedicated to the MobiLink server instance. For
more information, see Creating Database Users [page 1244].
Hint: If your SAP HANA instance is configured to create database users with a temporary password (the user is forced to reset it on first logon), reset the password before creating the binding.
Note
If you see the log message below, the binding step was skipped or failed:
5. You can stop or undeploy your MobiLink server. For more information, see stop [page 1982] or undeploy [page
1993].
Next Steps
Prerequisites
● An SQL Anywhere version 16 installation is available on the client side. For more information, see Get Access to
the Remote Data Sync Service [page 630].
● A MobiLink server is running in your account. For more information, see Provide a MobiLink Server in Your
Subaccount [page 631].
Context
This page provides a simple example that demonstrates how to synchronize data from a remote SQL Anywhere
database into the SAP HANA database, using the Remote Data Sync service and the underlying SQL Anywhere
MobiLink technology. For more information on MobiLink synchronizations, see Quick start to MobiLink
(Synchronization) .
Tip
The SQL Anywhere database running on the client side is called remote database. The central SAP HANA
database running on SAP Cloud Platform is called consolidated database.
Procedure
Sample Code
4. Create a publication
4. Choose the Back button in the toolbar menu to get back to the root task level.
9. Run a synchronization
Next Steps
Related Information
Context
You can access the MobiLink server logs both in the cockpit and the console client.
Procedure
4. In the Most Recent Logging section, click the icon to view the logs, or the icon to download them.
Related Information
This page helps you to achieve end-to-end traceability of all synchronizations done via the Remote Data Sync
service of SAP Cloud Platform. This way, you can track who made what changes during work on the SAP HANA
target database in the cloud.
To monitor and record which users performed selected actions on SAP HANA database, you can use the SAP
HANA Audit Activity with Database Table as trail target. To use this feature, it must first be activated for your SAP
HANA database. This can be done via SAP HANA Studio by a database user with role HCP_SYSTEM.
● Using an SAP HANA database table as the trail target makes it possible to query and analyze auditing information quickly. It also provides a secure and tamper-proof storage location.
● Audit entries are only accessible through the public system view AUDIT_LOG. Only SELECT operations can be
performed on this view by users with the system privilege AUDIT OPERATOR or AUDIT ADMIN.
For more information about how to configure audit policy, see SAP HANA Administration Guide and SAP HANA
Security Guide.
Note
These links point to the latest release of SAP HANA Administration Guide and SAP HANA Security Guide. Refer
to the SAP Cloud Platform Release Notes to find out which HANA SPS is supported by SAP Cloud Platform.
Find the list of guides for earlier releases in the Related Links section below.
In addition to the SAP HANA audit logs, you might want to use the MobiLink server logs to achieve end-to-end traceability.
● We recommend that you set the log level of the MobiLink server to a value that produces logs in granularity
useful for end-to-end traceability of the performed synchronization operations, for example, the log level -vtRU. For more information about this log level configuration, see the -v parameter documentation.
● To configure the log level, use the deploy command in the console client. For more information, see Provide a
MobiLink Server in Your Subaccount [page 631].
Remember
SAP Cloud Platform retains the MobiLink server log files for only a week. To fulfill the legal requirements
regarding retention of audit log files, make sure you download the log files regularly (at least once a week), and
keep them for a longer period of time according to your local laws.
Related Information
Context
This section provides information about security-related operations and configurations you can perform in a
Remote Data Sync scenario.
Currently, as part of SAP Cloud Platform, the MobiLink servers support only basic authentication. For more
information, see User Authentication Architecture .
Tasks
There are different options for configuring the HTTPS connection, depending on the SQL Anywhere synchronization tool that is used to trigger synchronizations:
○ When using the SQL Anywhere dbmlsync command line tool to trigger client-initiated synchronizations, trusted certificates can be specified using the trusted_certificates parameter as described here.
○ When using the Sybase Central UI to trigger client-initiated synchronizations, you can specify Trusted certificates as described here.
Related Information
MobiLink Users
MobiLink Security
Prerequisites
● An SQL Anywhere version 16 installation is available on the client side. For more information, see Get Access to
the Remote Data Sync Service [page 630].
● A MobiLink server is running in your account. For more information, see Provide a MobiLink Server in Your
Subaccount [page 631].
Context
This page describes how the existing SQL Anywhere tools (SQL Anywhere Monitor and MobiLink Profiler) can be connected to and used with the Remote Data Sync service running on SAP Cloud Platform.
Related Information
MobiLink Profiler
Context
SQL Anywhere Monitor comes as part of the standard SQL Anywhere installation. You can find it under
Administrative Tools of SQL Anywhere 16. The tool provides basic information about the health and availability of a
SQL Anywhere and MobiLink landscape. It also gives basic performance information and overall synchronization
statistics of the MobiLink server.
Procedure
1. To start the SQL Anywhere Monitor tool, open the SQL Anywhere 16 installation and go to Administrative Tools.
2. Open the SQL Anywhere Monitor dashboard via URL: http://<host_name>:4950, where <host_name> is
the host of the computer where SQL Anywhere Monitor is running.
3. Log in with the default credentials: user admin, password admin.
○ MobiLink server:
○ As Host, specify the fully qualified domain name of the MobiLink server running in your SAP Cloud
Platform account.
○ As Port, specify 8443.
○ As Connection Type, specify HTTPS. Leave the rest unchanged.
○ MobiLink user: provide the credentials of a valid MobiLink user.
○ Collection interval: time interval after which SQL Anywhere Monitor contacts the MobiLink server again to
fetch data
Next Steps
SQL Anywhere Monitor also allows you to configure e-mail alerts for synchronization problems. For more
information, see Alerts .
Related Information
Context
MobiLink Profiler comes as part of the standard SQL Anywhere installation. You can find it under Administrative
Tools of SQL Anywhere 16. The tool collects statistical data about all synchronizations during a profiling session,
and provides performance details of the single synchronizations, down to the detailed level of a MobiLink event. It
also provides access to the synchronization logs of the MobiLink server. Therefore, the tool is mostly used to
troubleshoot failed synchronizations or performance issues, and during the development phase to further analyze
synchronizations, errors, or warnings.
Procedure
1. Start the MobiLink Profiler under Administrative Tools of SQL Anywhere 16. The tool is a desktop client and
does not run in a Web browser.
2. Open File Begin Profiling Session to connect to the MobiLink server of your cloud account.
3. In the Connect to MobiLink Server window, provide the appropriate connection details, such as:
○ Host: specify the fully qualified domain name of the MobiLink server running in your SAP Cloud Platform
account.
○ Port: 8443
○ User/Password: the credentials of a valid MobiLink user.
○ Protocol: HTTPS
Next Steps
To learn more about the UI of the MobiLink Profiler, see MobiLink Profiler Interface .
Prerequisites
● An SQL Anywhere version 16 installation is available on the client side. For more information, see Get Access to
the Remote Data Sync Service [page 630].
● A MobiLink server is running in your subaccount. For more information, see Provide a MobiLink Server in Your
Subaccount [page 631].
Context
This page describes how you can configure an availability check for your MobiLink server and subscribe recipients
to receive alert e-mail notifications when your server is down or responds slowly. Furthermore, recommended
actions are listed in case of issues.
Procedure
Example:
3. To subscribe recipients to notification alerts, execute the following command (example data).
Tip
To add multiple e-mail addresses, separate them with commas. We recommend that you use distribution
lists rather than personal e-mail addresses. Keep in mind that you remain responsible for handling personal e-mail addresses in accordance with applicable data privacy regulations.
Next Steps
● Check the logs. In case of synchronization errors, use the MobiLink Profiler tool to drill down into the
problem for root cause analysis.
● If the server startup parameters are incorrect, reset the MobiLink server.
● If your MobiLink server hangs, restart it.
This page provides sizing information for applications using the Remote Data Sync service.
Although the only realistic answers to optimal resource planning are “It depends” and “Testing will show what you
need”, this section aims to help you choose the right hardware parameters.
Synchronization Phases
The figure below shows the major phases of a synchronization session. Though not complete, it covers many
common use cases.
1. Synchronization is initiated by a remote database client. It uploads any changes made at the remote database
to the server.
2. MobiLink applies the changes to the database.
3. MobiLink queries the database and prepares the changes to be sent to the remote database client.
4. MobiLink sends the changes to the remote database client.
Database Capacity
When the Remote Data Sync server applies changes to the consolidated database and prepares changes to be
sent to the remote database client, it typically does so by executing SQL statements or stored procedures that are
invoked by MobiLink events. For example, to apply an upload MobiLink may execute insert, update, and delete
statements for each table being synchronized; to prepare a download MobiLink may execute a query for each table
being synchronized.
Database tuning is outside the scope of this document, but the load on the database can be substantial. Think of
MobiLink as a concentrator of database load. All the operations that are carried out against the remote database
while disconnected, in addition to the requests for updates to be downloaded to the remote database, are
executed in two transactions (1 upload, 1 download) against the consolidated database. This can place a heavy
load on the database.
You should know the number of concurrent synchronizations as a starting point, and from there, work back to the required resources. Typically, this number is limited by RAM requirements. To estimate it, you need a typical upload and download data volume.
A machine with N MB of RAM can have C clients each with about V MB of upload or download data volume, where C
= N/V.
Following this formula, for large synchronizations (up to 20 MB), you can have:
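For example, applying the formula C = N/V to the Standard license option described earlier in this section (a MobiLink server with 4 GB, roughly 4096 MB, of RAM) and a 20 MB per-client volume gives a rough upper bound. The machine figures come from this document; the code itself is only an illustration:

```java
// Quick check of the sizing formula above, C = N / V, assuming the
// Standard machine option (2 cores / 4 GB RAM) mentioned in this section.
public class CapacityEstimate {

    static long maxClients(long ramMb, long volumeMbPerClient) {
        return ramMb / volumeMbPerClient; // C = N / V
    }

    public static void main(String[] args) {
        // 4096 MB of RAM, 20 MB upload/download volume per client:
        System.out.println(maxClients(4096, 20) + " concurrent synchronizations");
    }
}
```

This yields roughly 200 concurrent synchronizations; as the section stresses, only testing shows what you actually need.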
Remote Data Sync servers are not typically CPU intensive, and typically require less than half the processing that is
required by the consolidated database. When selecting the appropriate compute units for MobiLink, memory is
more likely to limit the maximum sustainable throughput for a Remote Data Sync server than CPU.
Example:
1. Let's assume the database can process the target load of L synchronizations per second (and that is a matter
for testing).
2. At this throughput, one database thread will come open every 1/L seconds. To keep throughput high, a
synchronization request should be ready, with data uploaded and available to pass to the database thread.
3. To keep the database busy, if a synchronization request takes t seconds to upload (which will depend on
network speed and data volume, and which should be determined by testing), then the Remote Data Sync
server must be able to hold (L x t) client data uploads in memory.
4. The Remote Data Sync server must also be able to download the data to the client to prevent the database threads from having to wait for a network connection during the download. Assuming this download volume is similar to the upload volume, we end up with the following: MobiLink should be able to support (2 x L x t) simultaneous synchronizations to maintain a throughput of L synchronizations per second.
Note
For example, to support a peak sustained throughput of 50 synchronizations per second, with a client that takes
0.5 seconds to upload and download data, then the Remote Data Sync server should be able to support 50
simultaneous synchronizations in RAM to sustain this rate as a peak throughput. Assuming data transfer
volumes per client are less than 80 MB (which is a very high number for data synchronization), a Standard
machine would be a good choice to start with.
This guide describes how to administer SAP HANA databases in the SAP Cloud Platform Neo environment and
Cloud Foundry environment for existing SAP HANA database systems on AWS. Keep in mind that it helps you with
● https://help.sap.com/viewer/d4790b2de2f4429db6f3dff54e4d7b3a/Cloud/en-US/
d17a78913c87434b8b2a059d003220bb.html
● https://help.sap.com/viewer/cc53ad464a57404b8d453bbadbc81ceb/Cloud/en-US
● https://help.sap.com/viewer/d4790b2de2f4429db6f3dff54e4d7b3a/Cloud/en-US/
f6567e3b7334403b9b275426fbe4fb04.html
If you're looking for information on how to develop SAP HANA on SAP Cloud Platform, see the following sections of
the SAP Cloud Platform developer documentation to find help: For the Cloud Foundry environment, Developing
SAP HANA in the Cloud Foundry Environment [page 991] as well as The SAP HANA XS Advanced Java Run Time
[page 998], and for the Neo environment, SAP HANA: Development [page 1222].
The Cloud Foundry environment allows you to use SAP HANA tenant database systems.
Note
The SAP HANA service has recently been updated in the Cloud Foundry environment.
Note
The latest SAP HANA revisions supported by SAP Cloud Platform in the Cloud Foundry environment are
2.00.012.05 and 2.00.024.02. For more information on SAP HANA revisions, see SAP Note 2378962 .
In the Cloud Foundry environment, space managers can create databases on SAP HANA tenant database systems
(MDC) in their spaces. Developers can then bind databases to applications running on SAP Cloud Platform.
An SAP HANA tenant database system is associated with a particular space and is available to applications in this
space. You can create and delete tenant databases using the cockpit, and bind them to applications using the
cockpit or the console client. You can bind the same tenant database to multiple applications, and the same
application to multiple tenant databases.
Feature | Description
Work with Databases and Database Systems | SAP HANA supports multiple isolated databases in a single SAP HANA system. These are referred to as tenant databases. All tenant databases in the same SAP HANA system share the same system resources (memory and CPU cores), but each tenant database is fully isolated with its own database users, catalog, repository, persistence (data files and log files), and services. In your enterprise account, you have full control of user management and can use a range of database tools. For an overview on how to administer your database system and databases, see Database Administration [page 671].
Ensure Backup & Recovery | Backup and recovery of data stored in your database and database system are performed by SAP. If your databases are not working properly, you can resolve the issues by restarting the corresponding system or tenant database. You can also request a restore in the SAP Cloud Platform cockpit.
Monitor Your Databases | Monitor the health of your SAP HANA databases in the SAP HANA cockpit. See Access SAP HANA Cockpit [page 696]. You can also view the memory usage for your database systems in the SAP Cloud Platform cockpit. See View Memory Usage for an SAP HANA Database System [page 678].
Try it out | You can try out working with SAP HANA tenant databases in the Cloud Foundry environment and create a service binding to a shared SAP HANA tenant database. For restrictions, see the restrictions listed below.
Tools
You can use the following tools in combination with the SAP HANA service:
Restriction
General Restrictions
No automatic life cycle management for database objects | The SAP HANA service does not provide automatic life cycle management for database objects, such as tables, indices, sequences, and so on. An application must create the necessary database objects, either by using JDBC to send the corresponding data definition statements to the database, or by using the schema creation capabilities of EclipseLink. Due to limitations of the EclipseLink schema creation feature, changes to the schema, such as altering a table definition, must be done by the application. Alternatively, open-source tools for database schema management (such as Liquibase) can be used for life cycle management of database objects, but they must be bundled with the application.
Restriction
Restrictions Applying to SAP HANA Tenant Databases
Backup | When you stop a tenant database for several days, you may not be able to recover the database. It is important to keep databases running without longer downtimes.
Monitoring | The availability of SAP HANA databases enabled for multitenant database container support is not monitored, and no alerts are sent when a database is not available.
Memory Management |
● The sum of the specified allocation limits must not exceed the memory available for tenant databases.
● If the specified memory limit for a certain tenant database is exceeded, connections to the tenant database may no longer be possible until the tenant database is restarted or the limit is increased by SAP Cloud Platform operators.
● Be aware that setting tight memory limits for tenant databases may lead to failing backups, and that a recovery may not always be possible.
Restriction
● It is currently not possible to use a dedicated SAP HANA tenant database system in a trial account. You can, however, create a service binding to a shared SAP HANA tenant database in your trial account. For more information, see First Steps in Trial Accounts [page 666].
There are some other restrictions as to which SAP HANA features can be used in the trial scenario and which
cannot.
Learn how to get started with the SAP HANA service in the Cloud Foundry environment.
Depending on your account type, different steps are necessary to get you started.
Learn how to bind an application to a new SAP HANA tenant database in a Cloud Foundry space in an enterprise
account by creating a service binding.
Scenario Tutorial
You want to create a service binding using the SAP Cloud Platform cockpit. Creating a Service Binding Using the Cloud Cockpit [page 655]
You want to create a service binding using the console client. Creating a Service Binding Using the Console Client [page 661]
Related Information
Learn how to bind an application to a new SAP HANA tenant database in a Cloud Foundry space in an enterprise account by creating a service binding in the SAP Cloud Platform cockpit.
Note
For more information on creating service bindings in trial accounts, see Creating a Service Binding in a Trial
Account Using the Cloud Cockpit [page 667].
● An SAP HANA tenant database system is deployed in a Cloud Foundry space in your enterprise account.
● An application is deployed in the same Cloud Foundry space.
Context
To bind an application to a tenant database in the cloud cockpit, perform the following steps:
Note
If you've already created a tenant database to which you want to bind your application, you can skip this
step.
To learn more about concepts behind integrating service instances with applications in the Cloud Foundry
environment, see the official documentation for the Cloud Foundry environment at https://docs.cloudfoundry.org/
devguide/services/ .
Create a tenant database on an SAP HANA tenant database management system that is deployed in your Cloud
Foundry space in an enterprise account. If you've already created a tenant database that you want to use for the
binding, you can skip this task.
Prerequisites
You must be Space Manager in the space in which you want to create a tenant database. For more information on
roles and permissions, please see the official Cloud Foundry documentation at https://docs.cloudfoundry.org/
concepts/roles.html .
Procedure
1. In the cloud cockpit, navigate to the Cloud Foundry space in which you want to create a new tenant database.
For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page
953].
Tip
To view additional database details, for example, its state and the number of existing bindings, select a
database from the list.
Note
The password must be at least 15 characters long and must contain at least one uppercase and one
lowercase letter ('a' – 'z', 'A' – 'Z') and one digit ('0' – '9'). It can also contain special characters (except ", '
and \).
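The stated rules can be expressed as a small check, useful for validating a password before submitting the form. This is a sketch derived from the note above (15+ characters, at least one uppercase letter, one lowercase letter, and one digit; no ", ' or \), not an official SAP validator:

```shell
# Returns 0 when the candidate password meets the documented policy.
check_password() {
  p=$1
  [ "${#p}" -ge 15 ] || return 1              # minimum length
  case $p in *[A-Z]*) ;; *) return 1 ;; esac  # uppercase letter
  case $p in *[a-z]*) ;; *) return 1 ;; esac  # lowercase letter
  case $p in *[0-9]*) ;; *) return 1 ;; esac  # digit
  case $p in *[\"\'\\]*) return 1 ;; esac     # forbidden characters
  return 0
}
check_password 'Abcdefghij12345' && echo "valid"
```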
You can define limits for memory consumption by individual processes of the new database. For more
information, see the SAP HANA Administration Guide. For more information on viewing limits of an existing
tenant database, see View Memory Usage for an SAP HANA Database System [page 678].
○ XS Engine:
By default, the XS engine of your new database runs in embedded mode. You can create a standalone XS engine by selecting Standalone and setting a value in the XS Engine Limit (MB) field.
Caution
You cannot change this parameter after you have created the database, which means you cannot
switch from an embedded to a standalone mode and the other way around. If you run the XS engine in a
standalone mode, you can change the memory limit after you have created the database, but you won't
be able to do so if you run the XS engine in an embedded mode.
Caution
It's important you decide here whether you want to enable or disable either of the servers. You won't be
able to turn either of them on or off after you've created the database. However, you can, at a later time,
change the memory limit of a server you enabled here.
We strongly recommend that you enable the DI server and the hana service broker for your tenant database. If you don't enable them, you won't be able to use the service broker or to create a service binding.
8. Choose Create.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you reach
the maximum number of databases. The default limits are shown in the table below, but depending on your
database system configuration, the number of tenant databases you can create might differ from these
limits.
Your particular use case and the amount of data and workloads handled in your tenant databases should
determine how many tenant databases you create. The more tenant databases you create, the less memory
is available for you in each individual tenant database. Therefore, we recommend that you create no more
than half of the maximum number of databases shown in the table below.
SAP HANA Memory Size Maximum Number of Tenant Databases You Can Create
61GB 4
122GB 10
244GB 24
488GB 50
>488GB 200
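The recommendation above (use at most half of the documented maximum) can be sketched numerically; the GB:max pairs below simply mirror the table:

```shell
# Half of the documented maximum, rounded down.
recommended_for() {
  echo $(( $1 / 2 ))
}
for pair in 61:4 122:10 244:24 488:50; do
  echo "${pair%%:*}GB: create at most $(recommended_for "${pair##*:}") tenant databases"
done
```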
To bind the new SAP HANA tenant database to an application in your Cloud Foundry space in an enterprise
account, you need to create a service instance using a particular plan of the hana service, and bind it to an
application.
Procedure
1. In the navigation area, choose Applications, then select the relevant application.
2. In the navigation area, choose Service Bindings.
The overview lists all services to which the selected application is currently bound.
3. Choose Bind Service.
4. On the Choose Service Type tab, select the Service from the catalog radio button and choose Next.
5. On the Choose Service tab, select the hana tile and choose Next.
Note
The hana tile is only displayed if you've turned on the DI Server + Service Broker switch for a tenant
database in your Cloud Foundry org. For more information, see Create an SAP HANA Tenant Database
[page 656].
6. On the Choose Service Plan tab, select the corresponding radio button to create a new instance or reuse an
existing instance. Select a service plan and choose Next.
7. Depending on the number of tenant databases you have created in your space, choose one of the following
options:
Option Description
There is only one tenant database in your Cloud Foundry space. Skip the Specify Parameters tab by choosing Next.
There is more than one tenant database in your Cloud Foundry space. Specify the parameters in JSON format, replacing the placeholder with your values:
{"database_id":"<tenant_db_name>"}
Enter the database ID that you defined when creating an SAP HANA tenant database. Choose Next.
Note
To use a tenant database that is owned by another space, see Sharing a Tenant Database with
Other Spaces [page 691].
8. On the Confirm tab, enter a name in the Instance Name field and choose Finish.
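If your space holds more than one tenant database, the JSON parameter string from step 7 can be assembled programmatically, which avoids hand-escaping mistakes. The database ID below is a placeholder for the one you defined at creation time:

```shell
TENANT_DB="mytenantdb"   # placeholder: your tenant database ID
PARAMS=$(printf '{"database_id":"%s"}' "$TENANT_DB")
echo "$PARAMS"
```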
Once you've created the binding, you must restart your application.
Procedure
Navigate to the Cloud Foundry space and choose Applications. Select the Restart icon for your application.
Note
An application’s state influences when a newly bound SAP HANA tenant database becomes effective. If an
application is already running (Started state), it does not have access to the newly bound HANA tenant
database until it has been restarted.
Results
You have created a service binding for an SAP HANA tenant database in your Cloud Foundry space.
To unbind a database from an application, choose the Delete icon in the Actions column. The application maintains
access to the database until it is restarted.
To change database parameters (for example, to assign a higher memory limit to one of its processes), choose the
Configure button on the Overview page.
Related Information
Bind an application to a new SAP HANA tenant database in a Cloud Foundry space in an enterprise account using
the Cloud Foundry command line interface (cf CLI).
Note
For more information on creating service bindings in trial accounts, see Creating a Service Binding in a Trial
Account Using the Console Client [page 669].
Prerequisites
● An SAP HANA tenant database system is deployed in a Cloud Foundry space in your enterprise account.
● An application is deployed in the same space.
● Download and install the cf CLI. For more information, see Download and Install the Cloud Foundry Command
Line Interface [page 948].
● Using cf CLI, log on to the Cloud Foundry space in which the SAP HANA system and your application are
deployed. For more information, see Log On to the Cloud Foundry Environment Using the Cloud Foundry
Command Line Interface [page 948].
Context
To bind an application to a database using the cf CLI, perform the following steps:
Note
You can create tenant databases only in the SAP Cloud Platform cockpit, that is, you cannot create them
using the console client. If you've already created a tenant database to which you'd like to bind your
application, you can skip this step.
To learn more about concepts behind integrating service instances with applications in the Cloud Foundry
environment, please see the documentation for the Cloud Foundry environment at https://docs.cloudfoundry.org/
devguide/services/ .
Create a tenant database on an SAP HANA tenant database management system that is deployed in your Cloud
Foundry space in an enterprise account. If you've already created a tenant database that you want to use for the
binding, you can skip this task.
Prerequisites
You must be Space Manager in the space in which you want to create a tenant database. For more information on
roles and permissions, please see the official Cloud Foundry documentation at https://docs.cloudfoundry.org/
concepts/roles.html .
Procedure
1. In the cloud cockpit, navigate to the Cloud Foundry space in which you want to create a new tenant database.
For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page
953].
Tip
To view additional database details, for example, its state and the number of existing bindings, select a
database from the list.
Note
The password must be at least 15 characters long and must contain at least one uppercase and one
lowercase letter ('a' – 'z', 'A' – 'Z') and one digit ('0' – '9'). It can also contain special characters (except ", '
and \).
You can define limits for memory consumption by individual processes of the new database. For more
information, see the SAP HANA Administration Guide. For more information on viewing limits of an existing
tenant database, see View Memory Usage for an SAP HANA Database System [page 678].
○ XS Engine:
By default, the XS engine of your new database runs in embedded mode. You can create a standalone XS engine by selecting Standalone and setting a value in the XS Engine Limit (MB) field.
Caution
You cannot change this parameter after you have created the database, which means you cannot
switch from an embedded to a standalone mode and the other way around. If you run the XS engine in a
standalone mode, you can change the memory limit after you have created the database, but you won't
be able to do so if you run the XS engine in an embedded mode.
Caution
It's important you decide here whether you want to enable or disable either of the servers. You won't be
able to turn either of them on or off after you've created the database. However, you can, at a later time,
change the memory limit of a server you enabled here.
We strongly recommend that you enable the DI server and the hana service broker for your tenant database. If you don't enable them, you won't be able to use the service broker or to create a service binding.
8. Choose Create.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you reach
the maximum number of databases. The default limits are shown in the table below, but depending on your
database system configuration, the number of tenant databases you can create might differ from these
limits.
SAP HANA Memory Size Maximum Number of Tenant Databases You Can Create
61GB 4
122GB 10
244GB 24
488GB 50
>488GB 200
Results
To bind the new SAP HANA tenant database to an application in your Cloud Foundry space in an enterprise
account, you need to create a service instance first.
Procedure
Caution
You can only create service instances on tenant databases for which the DI Server + Service Broker was enabled
during database creation. For more information, see Create an SAP HANA Tenant Database [page 656].
Open a command line and choose one of the following options, depending on the number of tenant databases you
have created in your space:
There is only one tenant database in your Cloud Foundry space. Enter the following string, providing the appropriate parameters:
cf create-service hana <service_plan> <service_instance_name>
For more information and examples, see Managing Service Instances with the cf CLI .
There is more than one tenant database in your Cloud Foundry space. Enter the following string and specify the parameters:
○ macOS and Linux:
cf create-service hana <service_plan> <service_instance_name> -c '{"database_id":"<tenant_db_name>"}'
○ Windows Command Line:
cf create-service hana <service_plan> <service_instance_name> -c "{\"database_id\":\"<tenant_db_name>\"}"
○ Windows PowerShell:
cf create-service hana <service_plan> <service_instance_name> -c '{\"database_id\":\"<tenant_db_name>\"}'
Note
To use a tenant database that is owned by another space, see Sharing a Tenant Database with Other
Spaces [page 691].
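To sidestep the per-shell quoting of the inline JSON entirely, the cf CLI also accepts a path to a JSON file as the -c argument. The file name and database ID below are illustrative, and the cf command itself is shown only as a comment:

```shell
TENANT_DB="mytenantdb"   # placeholder: your tenant database ID
printf '{"database_id":"%s"}\n' "$TENANT_DB" > params.json
cat params.json
# then, identically on every OS:
# cf create-service hana <service_plan> <service_instance_name> -c params.json
```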
Once you've created a service instance, you can bind it to the application.
Procedure
Bind the service instance to your application:
cf bind-service <application_name> <service_instance_name>
Procedure
Once you've created the binding, restart your application:
cf restart <application_name>
Note
An application’s state influences when a newly bound SAP HANA tenant database becomes effective. If an
application is already running (Started state), it does not have access to the newly bound HANA tenant
database until it has been restarted.
Results
You have created a service binding for an SAP HANA tenant database in your Cloud Foundry space.
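After the restart, you can verify that the binding is visible to the application: Cloud Foundry injects the bound service's credentials into the VCAP_SERVICES environment variable (inspectable with cf env <application_name>). The payload below is an illustrative stand-in, not real output, and the minimal sed extraction assumes the simple flat shape shown:

```shell
# Illustrative VCAP_SERVICES fragment; real entries carry more fields.
VCAP_SERVICES='{"hana":[{"name":"my-hana","credentials":{"host":"db.example","port":"30015"}}]}'
host=$(printf '%s' "$VCAP_SERVICES" | sed -n 's/.*"host":"\([^"]*\)".*/\1/p')
echo "$host"
```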
Related Information
Learn how to bind an application to a shared SAP HANA tenant database in a Cloud Foundry space in a trial
account by creating a service binding.
Scenario Tutorial
You want to create a service binding using the SAP Cloud Platform cockpit. Creating a Service Binding in a Trial Account Using the Cloud Cockpit [page 667]
You want to create a service binding using the console client. Creating a Service Binding in a Trial Account Using the Console Client [page 669]
Related Information
Bind an application to a shared SAP HANA tenant database in a Cloud Foundry space that belongs to a trial
account by creating a service binding in the SAP Cloud Platform cockpit.
Tip
You cannot create SAP HANA tenant databases in trial accounts. You directly bind a shared SAP HANA tenant
database to an application in your Cloud Foundry space in a trial account.
Note
For more information on creating service bindings in enterprise accounts, see Creating a Service Binding Using
the Cloud Cockpit [page 655].
Prerequisites
Context
To learn more about the concepts behind integrating service instances with applications in the Cloud Foundry
environment, please see the official Cloud Foundry documentation at https://docs.cloudfoundry.org/devguide/
services/ .
To bind a shared SAP HANA tenant database to an application in your Cloud Foundry space in a trial account, you
need to create a service instance using a particular plan of the hanatrial service, and bind it to an application.
Procedure
1. In the navigation area, choose Applications, then select the relevant application.
2. In the navigation area, choose Service Bindings.
Once you've created the binding, you must restart your application.
Procedure
Navigate to the Cloud Foundry space and choose Applications. Select the Restart icon for your application.
Note
An application’s state influences when a newly bound SAP HANA tenant database becomes effective. If an
application is already running (Started state), it does not have access to the newly bound HANA tenant
database until it has been restarted.
You have created a service binding for an SAP HANA tenant database in your Cloud Foundry space.
To unbind a database from an application, choose the Delete icon in the Actions column. The application maintains
access to the database until it is restarted.
To change database parameters (for example, to assign a higher memory limit to one of its processes), choose the
Configure button on the Overview page.
Related Information
Bind an application to a shared SAP HANA tenant database in a Cloud Foundry space that belongs to a trial
account using the Cloud Foundry command line interface (cf CLI).
Note
For more information on creating service bindings in enterprise accounts, see Creating a Service Binding Using
the Console Client [page 661].
Prerequisites
Context
To bind a shared SAP HANA tenant database to an application in your Cloud Foundry space in a trial account, you
must first create a service instance.
Procedure
Open a command line and enter the following string, providing the appropriate parameters:
cf create-service hanatrial <service_plan> <service_instance_name>
For more information and examples, see Managing Service Instances with the cf CLI .
Once you've created a service instance, you can bind it to the application.
Procedure
Bind the service instance to your application:
cf bind-service <application_name> <service_instance_name>
Procedure
Once you've created the binding, restart your application:
cf restart <application_name>
Note
An application’s state influences when a newly bound SAP HANA tenant database becomes effective. If an
application is already running (Started state), it does not have access to the newly bound HANA tenant
database until it has been restarted.
Results
You have created a service binding for an SAP HANA tenant database in your Cloud Foundry space.
Related Information
Use the SAP Cloud Platform cockpit to administer your database systems and databases in the Cloud Foundry
environment.
An overview of the different tasks you can perform to administer database systems in the Cloud Foundry
environment.
View Memory Usage for an SAP HANA Database System [page 678]
View the memory usage for an SAP HANA tenant database system in the Cloud Foundry environment
using the SAP Cloud Platform cockpit.
Update software components or apply a new Support Package to your SAP HANA tenant database system in the
Cloud Foundry environment.
Prerequisites
Context
To update your SAP HANA tenant database system, you have the following options:
● Update the software components installed on your SAP HANA tenant database system to a later version.
● Apply a single Support Package on top of an existing SAP HANA tenant database system.
Remember
Make sure that you read the SAP Notes listed in the UI before applying any updates. Complete all the steps
required before or after the update.
Please expect a temporary downtime for the SAP HANA tenant database system when you update SAP HANA.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database system
you're updating. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the
Cockpit [page 953].
2. Choose SAP HANA in the navigation area.
All database systems available in the space are listed with their details, including the database type, version,
memory size, state, and the number of associated databases.
3. Select the entry for the relevant database system.
Note
You can select only SAP HANA revisions that have been approved for use in SAP Cloud Platform. To update
to another revision, please contact SAP Support.
Updating an SAP HANA tenant database system to a maintenance revision may result in upgrade path
limitations. See SAP Note 1948334 for details.
6. (Optional) Specify whether you want a prompt for confirmation before the update of the SAP HANA tenant
database system is applied and the system downtime is started.
By default, this option is selected. If you unselect it, the update is performed without any user interaction.
7. Choose Continue/Update.
The update process takes some time and is executed asynchronously. The update dialog box remains on the
screen while the update is in progress. You can close the dialog box and reopen it later.
8. (Optional) If you chose to be prompted for confirmation after preparation of the update, the process stops and
prompts for your confirmation to start the update.
During preparation, the SAP HANA tenant database system is not modified, so you can cancel the process
here if necessary.
9. Choose Update.
The update starts and takes about 20 minutes.
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to the SAP Cloud Platform
Release Notes to find out which SAP HANA SPS is currently supported by SAP Cloud Platform.
SAP HANA Developer Guide for SAP HANA Studio → section "Set up Application Authentication"
SAP HANA Developer Guide for SAP HANA Web Workbench → section "Set up Application Authentication"
SAP Note 1948334
Restart SAP HANA Database Systems [page 674]
Install SAP HANA Components [page 677]
Access SAP HANA Cockpit [page 696]
Try to solve issues by restarting the entire corresponding SAP HANA tenant database system in the Cloud Foundry
environment.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database system you
want to restart. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the
Cockpit [page 953].
Note
If security OS patches are pending for the database system you have restarted, the host of the database
system is also restarted.
Results
You can monitor the database system status during the restart using the HANA tools. Connected applications and
database users cannot access the system until it is restarted. The restart for the SAP HANA database system is
complete when HANA tools such as the cockpit are available again.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA database system you want to
restore. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit
[page 953].
Caution
You will lose all data stored between the time you specify in the New Service Request screen and the time at which you create the service request. For example, if you create a restore request at 3 pm to restore your database system to 9 am on the same day, all data stored between 9 am and 3 pm will be lost.
d. Choose Save.
A template for opening an incident in the SAP Support Portal is displayed.
e. Select the text in the template between the two dashed lines and copy it to the clipboard.
Tip
Navigate to SAP HANA Service Requests , then choose the Display icon to find the template for
opening a ticket at any time.
Note
You need the authorization to create an incident. Contact a user administrator in your company to
request this authorization.
Note
You can find detailed step-by-step instructions for creating an incident in the Report an Incident - Help .
6. Once you have reached the Enter Incident view, enter the following data:
a. In the Classification panel, enter the component for persistency.
Note
For a complete list of SAP Cloud Platform components, see SAP Note 1888290 .
b. In the Problem Details panel, enter the title Database System Restore Request in the Short Text field.
c. Paste the template text you copied to your clipboard into the Long Text field.
d. Choose Send Incident.
Results
You have created a request for restoring a database system and sent the request to SAP Support for processing.
As soon as your database system is restored, the state of your request will be set to Finished in the cockpit and the
incident you created will be set to Completed. You can see the state of your request in the cockpit by navigating to
SAP HANA Service Requests . The state is displayed next to your service request. In the meantime, SAP
Support might contact you in case they need further clarification. You will be notified by e-mail if you need to take
any further action.
Note
Your database system is available for use for all users immediately after the restore has been successful.
Note
To cancel your restore request, go to SAP HANA Service Requests , choose your restore request and select the Delete icon. Note that your request can only be canceled if it has the state New.
Use the SAP Cloud Platform cockpit to install new SAP HANA components in the Cloud Foundry environment.
Prerequisites
Context
● SAP HANA platform components, which are installed on the SAP HANA tenant database system at the
operating system level.
Recommendation
We recommend that you always use the latest available version.
Please expect a temporary downtime for the SAP HANA database when installing SAP HANA components.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database system for
which you'd like to install SAP HANA components. For more information, see Navigate to Global Accounts,
Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
2. Choose SAP HANA in the navigation area.
3. Select the entry for the relevant database system in the list.
4. To install an SAP HANA component for the selected database system, choose Install components.
Results
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to SAP HANA Service in the
Cloud Foundry Environment (Before Update) [page 651] to find out which HANA revision is supported.
Related Information
Developer Guide for SAP HANA Studio → section "Set up Application Authentication"
Developer Guide for SAP HANA Web Workbench → section "Set up Application Authentication"
Update SAP HANA Database Systems [page 672]
Restart SAP HANA Database Systems [page 674]
Access SAP HANA Cockpit [page 696]
View the memory usage for an SAP HANA tenant database system in the Cloud Foundry environment using the
SAP Cloud Platform cockpit.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database system
you'd like to view memory limits for. For more information, see Navigate to Global Accounts, Subaccounts,
Orgs, and Spaces in the Cockpit [page 953].
You see a table that lists the memory limits and usage for each tenant database and the system database.
You can view the following values:
○ On database system level:
○ Global allocation limit: The amount of memory that is available for the SAP HANA system.
○ Global shared memory allocation: The amount of allocated memory of the SAP HANA system that
cannot be associated with a concrete process.
○ Global shared memory usage: The amount of used memory of the SAP HANA system that cannot be
associated with a concrete process.
○ On tenant and system database level:
○ Configured allocation limit: The limit that is currently set for a particular process.
○ Allocated memory: The memory that is currently allocated to a particular process.
○ Used memory: The memory that is currently used by a particular process.
For more information about memory usage, see the SAP HANA Administration Guide.
Note
If you haven't set a limit for a particular process or if you've allocated a percentage, the corresponding entry
is empty and the total of configured allocation limits cannot be calculated.
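The note above can be sketched as a small summation: with a limit configured for every process the total is a plain sum, while a single empty entry leaves the total undefined. Process counts and values are illustrative:

```shell
# Sum configured allocation limits (MB); print n/a when any process has
# no explicit limit, mirroring the behavior described in the note.
total_limits() {
  total=0
  for l in "$@"; do
    [ -n "$l" ] || { echo "n/a"; return 0; }
    total=$((total + l))
  done
  echo "$total"
}
total_limits 4096 2048 1024   # every process has a limit
total_limits 4096 "" 1024     # one process has no explicit limit
```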
Related Information
Create a tenant database on an SAP HANA tenant database management system that is deployed in your Cloud
Foundry space in an enterprise account.
Prerequisites
You must be Space Manager in the space in which you want to create a tenant database. For more information on
roles and permissions, please see the official Cloud Foundry documentation at https://docs.cloudfoundry.org/
concepts/roles.html .
Context
Tip
You cannot create SAP HANA tenant databases in trial accounts. You directly bind a shared SAP HANA tenant
database to an application in your Cloud Foundry space in a trial account. For more information, see Create
Service Bindings in Trial Accounts [page 684] or First Steps in Trial Accounts [page 666].
1. In the cloud cockpit, navigate to the Cloud Foundry space in which you want to create a new tenant database.
For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page
953].
Tip
To view additional database details, for example, its state and the number of existing bindings, select a
database from the list.
Note
The password must be at least 15 characters long and must contain at least one uppercase and one
lowercase letter ('a' – 'z', 'A' – 'Z') and one digit ('0' – '9'). It can also contain special characters (except ", '
and \).
You can define limits for memory consumption by individual processes of the new database. For more
information, see the SAP HANA Administration Guide. For more information on viewing limits of an existing
tenant database, see View Memory Usage for an SAP HANA Database System [page 678].
○ XS Engine:
By default, the XS engine of your new database runs in embedded mode. You can create a standalone XS engine by selecting Standalone and setting a value in the XS Engine Limit (MB) field.
Caution
You cannot change this parameter after you have created the database, which means you cannot
switch from an embedded to a standalone mode and the other way around. If you run the XS engine in a
standalone mode, you can change the memory limit after you have created the database, but you won't
be able to do so if you run the XS engine in an embedded mode.
Caution
It's important you decide here whether you want to enable or disable either of the servers. You won't be
able to turn either of them on or off after you've created the database. However, you can, at a later time,
change the memory limit of a server you enabled here.
We strongly recommend that you enable the DI server and the hana service broker for your tenant database. If you don't enable them, you won't be able to use the service broker or to create a service binding.
8. Choose Create.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you reach
the maximum number of databases. The default limits are shown in the table below, but depending on your
database system configuration, the number of tenant databases you can create might differ from these
limits.
Your particular use case and the amount of data and workloads handled in your tenant databases should
determine how many tenant databases you create. The more tenant databases you create, the less memory
is available for you in each individual tenant database. Therefore, we recommend that you create no more
than half of the maximum number of databases shown in the table below.
SAP HANA Memory Size Maximum Number of Tenant Databases You Can Create
61GB 4
122GB 10
244GB 24
488GB 50
>488GB 200
Results
Related Information
Create a service instance using a particular plan of the hana service and bind it to an application in your Cloud
Foundry space in an enterprise account.
Procedure
1. In the navigation area, choose Applications, then select the relevant application.
2. In the navigation area, choose Service Bindings.
The overview lists all services to which the selected application is currently bound.
3. Choose Bind Service.
4. On the Choose Service Type tab, select the Service from the catalog radio button and choose Next.
5. On the Choose Service tab, select the hana tile and choose Next.
Note
The hana tile is only displayed if you've turned on the DI Server + Service Broker switch for a tenant
database in your Cloud Foundry org. For more information, see Create SAP HANA Tenant Databases [page
680].
6. On the Choose Service Plan tab, select the corresponding radio button to create a new instance or reuse an
existing instance. Select a service plan and choose Next.
7. Depending on the number of tenant databases you have created in your space, choose one of the following
options:
There is only one tenant database in your Cloud Foundry space. Skip the Specify Parameters tab by choosing Next.
There is more than one tenant database in your Cloud Foundry space. Specify the parameters in JSON format, replacing the placeholder with your values:
{"database_id":"<tenant_db_name>"}
Enter the database ID that you defined when creating an SAP HANA tenant database. Choose Next.
Note
To use a tenant database that is owned by another space, see Sharing a Tenant Database with
Other Spaces [page 691].
8. On the Confirm tab, enter a name in the Instance Name field and choose Finish.
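When scripting this binding instead of using the cockpit, the parameter string from step 7 can be generated with a JSON library rather than assembled by hand. A minimal sketch; the database name shown is a placeholder:

```python
import json

def binding_parameters(database_id):
    # Produces the parameter string expected on the Specify Parameters tab,
    # for example {"database_id": "mydb"} for a tenant database named "mydb".
    return json.dumps({"database_id": database_id})
```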
Next Steps
Once you've created the binding, you must restart your application:
Create a service instance using a particular plan of the hanatrial service and bind it to an application in your Cloud
Foundry space in a trial account.
Procedure
1. In the navigation area, choose Applications, then select the relevant application.
2. In the navigation area, choose Service Bindings.
Use the SYSTEM user to create a database administration user in the Cloud Foundry environment using the SAP
HANA cockpit.
Prerequisites
You have enabled the SAP HANA Cockpit Access switch for your tenant database. For more information, see
Access SAP HANA Cockpit [page 696].
Context
You specify a password for the SYSTEM user when you create your SAP HANA tenant database. You use the
SYSTEM user to log on to SAP HANA Cockpit and create your own database administration user.
The SYSTEM user is a preconfigured database superuser with irrevocable system privileges, such as the ability to
create other database users, access system tables, and so on. A database-specific SYSTEM user exists in every
database of a tenant database system. To ensure that the administration tool SAP HANA cockpit can be used
immediately after database creation, the SYSTEM user is automatically given several roles the first time the SAP
HANA cockpit is opened with this user.
Caution
You should not use the SYSTEM user for day-to-day activities. Instead, use this user to create dedicated
database users for administrative tasks and to assign privileges to these users.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space. For more information, see Navigate to Global
Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
All databases available in the selected subaccount are listed with their ID, type, version, and related database
system.
A message is displayed to inform you that at that point, you lack the roles that you need to open the SAP
HANA cockpit.
6. To confirm the message, choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
7. Choose Continue.
Note
The user name always appears in upper case letters.
12. In the Authentication section, make sure the Password checkbox is selected and enter a password.
Note
According to the SAP HANA default password policy, the password must start with a letter and only contain
uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), and numbers ('0' - '9'). If you've changed the default
policy, other password requirements might apply.
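A minimal check for the default policy as described in this note can be sketched as follows. Treat it as illustrative only: real deployments may enforce additional rules (minimum length, required character classes) not stated here.

```python
import re

def matches_default_policy(password):
    # Default SAP HANA policy as stated above: the password starts with a
    # letter and contains only letters and digits. If the default policy has
    # been changed, other requirements might apply.
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9]*", password) is not None
```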
The new database user is displayed as a new node under the Users node.
14. Assign your user the necessary administrator roles and privileges by going to the Granted Roles section and choosing the + (Add Role) button. For example, to allow your administration user to create new users and assign roles and privileges to them, add the USER_ADMIN privilege. For more information, see System Privileges in the SAP HANA Security Guide.
15. Choose OK.
16. Save your changes.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to work
with SAP HANA Web-based Development Workbench by logging out from SAP HANA Cockpit first.
Recommendation
We recommend that you create more than one database administration user. If one database administration user is locked or if the password needs to be reset, only another administration user can unlock this user and reset the password.
Next Steps
You can use the newly created database administration user to create database users for the members of your
subaccount and assign them the required developer roles.
Try to solve issues by stopping and restarting the corresponding tenant database in the Cloud Foundry
environment.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database system you
want to restart. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the
Cockpit [page 953].
Redefine the limits for memory consumption by individual processes of the new database in the Cloud Foundry
environment by using the SAP Cloud Platform cockpit.
Prerequisites
You must have the Space Manager role for the space that owns the SAP HANA tenant database you want to
reconfigure.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database you want to
reconfigure. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the
Cockpit [page 953].
You can redefine the limits for memory consumption by individual processes of the new database. For more
information, see the SAP HANA Administration Guide. For more information about viewing the memory usage
of an existing tenant database, see View Memory Usage for an SAP HANA Database System [page 678].
If you don't enter any values, no limits are set, and the respective value appears as unlimited.
○ XS Engine:
If you run the XS engine in a standalone mode, you can change the memory limit after you have created
the database.
Note
If you run the XS engine in an embedded mode, the XS engine and the index server are part of the same
process, and therefore share the same allocation limit.
Restore your tenant database in the Cloud Foundry environment from a specific point of time by creating a service
request in the SAP Cloud Platform cockpit.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database you want to
restore. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit
[page 953].
Caution
You will lose all data stored between the time you specify in the New Service Request screen and the time at which you create the service request. For example, if you create a restore request at 3pm to restore your tenant database to 9am on the same day, all data stored between 9am and 3pm will be lost.
d. Choose Save.
A template for opening an incident in the SAP Support Portal is displayed.
e. Select the text in the template between the two dashed lines and copy it to the clipboard.
Tip
Navigate to SAP HANA Service Requests , then choose the Display icon to find the template for
opening a ticket at any time.
Note
You need the authorization to create an incident. Contact a user administrator in your company to
request this authorization.
Note
You can find detailed step-by-step instructions for creating an incident in the Report an Incident - Help .
6. Once you have reached the Enter Incident view, enter the following data:
a. In the Classification panel, enter the component for persistency.
Note
For a complete list of SAP Cloud Platform components, see 1888290 .
b. In the Problem Details panel, enter the title Database Restore Request in the Short Text field.
c. Paste the template text you copied to your clipboard into the Long Text field.
d. Choose Send Incident.
Results
You have created a request for restoring a tenant database and sent the request to SAP Support for processing. As
soon as your tenant database is restored, the state of your request will be set to Finished in the cockpit and the
incident you created will be set to Completed. You can see the state of your request in the cockpit by navigating to
SAP HANA Service Requests . The state is displayed next to your service request. In the meantime, SAP
Support might contact you in case they need further clarification. You will be notified by e-mail if you need to take
any further action.
Note
Your tenant database is available for use for all users immediately after the restore has been successful.
Note
To cancel your restore request, go to SAP HANA Service Request , choose your restore request and select
the Delete icon. Note that your request can only be canceled if it has the state New.
Share your tenant database that has been created in one space in the Cloud Foundry environment with other
spaces that belong to the same organization.
When a tenant database is created in a space, only the space it is located in can access it and use it for service
bindings, for example. However, a user who is assigned the Space Manager role can change this, and grant other
spaces within the same organization controlled access to the tenant database in the space he or she manages.
Depending on the permission type, a member of the space who receives permission can access the database,
including using it to create a service instance and bind it to applications.
Prerequisites
● The space in which the tenant database was created, and the space that receives permission to use this
database must be part of the same organization.
● To give another space permission to use a tenant database in your space, you must be assigned the Space
Manager role.
Context
Share a database with other spaces: This gives another space permission to use a tenant database. Performed by the Space Manager of the space in which the tenant database is located.
Use a database shared by another space: Depending on the permission given, a space can access and use a tenant database that is located in another space. Performed by a member of the space receiving the permission to use the tenant database.
If Space Managers want to share a tenant database in their space with another space in the same org, they can assign different permission types to the other space:
● To allow applications in another space to access a tenant database, Space Managers can provide the other space with the permission type APPLICATION_ACCESS, which assigns a security group to that other space.
● To enable members of another space to create service instances on the tenant database and bind these service instances to applications, the Space Manager needs to assign both the permission types APPLICATION_ACCESS and HANA_SERVICE to the other space.
As a Space Manager, you can give another space in the same organization permission to access and use a tenant
database that has been created in your space.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the space in which the tenant database you would like to share
with another space has been created. For more information, see Navigate to Global Accounts, Subaccounts,
Orgs, and Spaces in the Cockpit [page 953].
2. Choose SAP HANA Tenant Databases and select the tenant database that you'd like to share.
3. In the navigation area, choose Permissions.
4. Choose New Permissions.
5. In the dialog box, do the following:
a. Select the space that should receive permission to use the tenant database.
b. Choose the permission type(s).
Results
You have given another space permission to use a tenant database in your space. In your space (the space that
owns the tenant database), the (Edit Permissions) icon appears in the Tenant Databases list in the cockpit next
to all tenant databases that can be used by other spaces.
Next Steps
To edit the permission type of an existing access permission, select the required tenant database, then choose
Permissions in the navigation menu. Select the (Edit Permissions) icon next to the space in question.
To revoke an existing access permission, first delete all service instances. Then select the required tenant
database, navigate to Permissions and choose the (Delete) icon next to the space. This revokes all permissions
for this space.
As a member of a space that has received permission to use a tenant database owned by another space, you can access the database, for example by opening a tunnel to it, or use it to create a service instance and bind it to an application.
As a member of a space that has received permission to use a tenant database owned by another space, you can
open a tunnel to that database.
Prerequisites
● The space in which the application is deployed must have the permission to open a tunnel to the database in
another space (permission type APPLICATION_ACCESS).
● Download and install the cf CLI. For more information, see Download and Install Cloud Foundry Command Line
Interface.
● Log on to the space in which the application for which you want to create a service binding is deployed. For
more information, see Log On to the Cloud Foundry Instance.
● Deploy an application to a space and start the application.
● Enable SSH access either for the application you've started or for your space. For more information, see
https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html .
Procedure
Open a tunnel to the tenant database as described in Open a Database Tunnel [page 698]. When executing the cf
ssh command, specify the host and port of the tenant database that is owned by another space.
Results
You are now connected to a tenant database that is owned by another space. To connect to the tenant database
from your local workstation, follow the steps described in Connect to a Tenant Database Using SAP HANA Studio
[page 700].
As a member of a space that has received permission to use a tenant database owned by another space, you can
use that database to create a service instance and bind it to an application.
Prerequisites
Procedure
Use the cockpit:
1. In the SAP Cloud Platform cockpit, navigate to the space in which the application you would like to bind is deployed. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
2. Follow the steps described in Create a Service Instance and Bind It to the Application [page 659], specifying the following parameters:
{"database_id":"<GUID_owner_space>:<tenant_db_name>"}
Tip
Navigate to Security Groups to identify the GUID of the space that owns the tenant database you'd like to use.
3. Restart the application. For more information, see Restart the Application [page 660].
Use the console client:
1. Open the cf CLI and enter the following command:
○ macOS and Linux:
○ Windows PowerShell:
2. Bind the service instance to the application. For more information, see Bind the Service Instance to the Application [page 665].
3. Restart the application. For more information, see Restart the Application [page 666].
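For cross-space bindings, the database_id combines the GUID of the owning space and the tenant database name. A small helper (the function name is illustrative, not part of any SAP API) can build that parameter string:

```python
import json

def shared_binding_parameters(owner_space_guid, tenant_db_name):
    # For a database owned by another space, the database_id takes the form
    # "<GUID_owner_space>:<tenant_db_name>", as shown in the cockpit option.
    return json.dumps({"database_id": f"{owner_space_guid}:{tenant_db_name}"})
```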
Results
You have created a service instance and bound it to an application using a tenant database owned by a different
space than the one in which the application has been deployed.
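Once bound, a Cloud Foundry application reads the connection details of the service instance from the VCAP_SERVICES environment variable. The exact credential field names for the hana service are assumptions in this sketch; check the actual binding payload in your environment.

```python
import json
import os

def hana_credentials(service_label="hana"):
    # Cloud Foundry injects bound-service credentials into VCAP_SERVICES.
    # The credential keys (e.g. host, port, user, password) depend on the
    # service broker and are assumptions here.
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    bindings = vcap.get(service_label, [])
    return bindings[0]["credentials"] if bindings else None
```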
Delete your SAP HANA tenant database in the Cloud Foundry environment using the SAP Cloud Platform cockpit.
Procedure
To delete your SAP HANA tenant database, perform the following steps:
Remember
To delete an SAP HANA tenant database, you must first delete all service instances bound to your
application.
Delete all service instances of the hana or hana-managed service, which are bound to your applications, using the
SAP Cloud Platform cockpit.
Prerequisites
● The SAP HANA tenant database you want to unbind the service instances from must be in status Started.
● You must have the Space Developer role for the space that owns the SAP HANA tenant database you want to
delete.
● (Optional) Access to the spaces you shared the SAP HANA tenant database with.
Note
Deleting all service instances bound to your application might include service bindings in spaces other than the space that owns the database.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the space that owns the applications to which your SAP HANA tenant database is bound. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces
in the Cockpit [page 953].
Next Steps
You can delete your SAP HANA tenant database using the SAP Cloud Platform cockpit.
Prerequisites
You must have the Space Manager role in the space that owns the SAP HANA tenant database you want to delete.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the space that owns the SAP HANA tenant database you want to
delete. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit
[page 953].
Access the SAP HANA cockpit for a tenant database in the Cloud Foundry environment using the SAP Cloud
Platform cockpit.
Context
The SAP HANA cockpit is a Web-based administration tool for the administration, monitoring, and maintenance of
SAP HANA databases in the Cloud Foundry environment. It provides a single point of access to a range of tools for
your SAP HANA database, and also integrates development capabilities required by administrators.
You can access the SAP HANA cockpit by navigating to a tenant database in SAP Cloud Platform's web-based administration interface: the SAP Cloud Platform cockpit.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the Cloud Foundry space that owns the tenant database you'd
like to access using the SAP HANA cockpit. For more information, see Navigate to Global Accounts,
Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
2. Choose SAP HANA Tenant Databases and select the tenant database.
3. From the Administration Tools section, select SAP HANA Cockpit.
Note
You can open the SAP HANA cockpit only if SAP HANA Cockpit Access is enabled for the tenant database. If
it is disabled, choose Configure and turn on the switch for SAP HANA Cockpit Access.
Results
You are now logged on to the SAP HANA cockpit for a tenant database on SAP Cloud Platform.
Related Information
Connect to an SAP HANA tenant database deployed in a Cloud Foundry space that belongs to an enterprise
account from your local workstation.
Prerequisites
● Deploy an SAP HANA tenant database in a Cloud Foundry space in your enterprise account.
● Deploy an application to the same Cloud Foundry space and start the application.
● Download and install the cf CLI. For more information, see Download and Install the Cloud Foundry Command
Line Interface [page 948].
● Log on to the Cloud Foundry space in which the SAP HANA tenant database you want to connect to is
deployed. For more information, see Download and Install the Cloud Foundry Command Line Interface [page
948] or Log On to Your Global Account [page 936].
● Access to SAP HANA studio. For more information, see the SAP HANA Developer Guide for SAP HANA Studio.
Context
To connect to an SAP HANA tenant database, you need to perform the following steps:
Procedure
1. (Optional) If you're working behind a proxy server, you may have to adjust your proxy settings.
2. (Optional) Find out the host and port of your tenant database. Depending on the tool you'd like to use for
finding out the host and port, choose one of the following options. If you already have this information, skip
this step.
Recommendation
We recommend you use the SAP Cloud Platform cockpit.
SAP Cloud Platform cockpit:
1. Navigate to the space that owns the tenant database to which you'd like to open a tunnel. See Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
2. In the navigation area, choose Security Groups, then select the tenant database in question.
In the Rules section at the bottom of the screen, find the host (Destination column) and the port (Port column) for your tenant database.
Note
If two ports are listed in the Ports column, use the port that starts with "300".
Console client: Enter the following command:
cf security-group <security_group_database>
3. Open a command line and enter the following string and specify the parameters:
Note
You'll need to restart the application after you've created the tenant database to be able to use the
command above.
Next Steps
Now that you have opened the database tunnel, you can connect to the remote tenant database.
For more information on SSH in the Cloud Foundry environment, please see the official documentation for the
Cloud Foundry environment at https://docs.cloudfoundry.org/devguide/deploy-apps/app-ssh-overview.html .
To connect to an SAP HANA tenant database from your local workstation, you have to establish a connection using
SAP HANA studio.
Procedure
1. Open the SAP HANA Administration Console perspective in SAP HANA studio. In the Systems panel, choose
Add System.
2. In the Host Name field, enter localhost.
3. In the Instance Number field, enter 00.
4. Select Single container.
5. Choose Next.
6. Enter the credentials of a valid database user, for example, the SYSTEM user.
7. Choose Finish.
Results
You are now connected to a tenant database in your Cloud Foundry space.
Note
The tunnel to the tenant database needs to remain open for as long as you want to be connected to it in SAP
HANA studio.
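The ports used above follow SAP HANA's numbering convention, where a service port is derived from the two-digit instance number (3<instance><offset>). A sketch of that convention; the offset for a given service, and whether your tenant database uses a different one, are assumptions to verify in the Security Groups view:

```python
def sql_port(instance_number, service_offset=15):
    # SAP HANA ports follow the pattern 3<instance><offset>; offset 15 is the
    # classic indexserver SQL port (e.g. instance 00 -> 30015). Tenant
    # databases may use other offsets, so treat this as an assumption.
    return 30000 + instance_number * 100 + service_offset
```

This is why the note above tells you to use the port starting with "300" for instance 00.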
Answers to some of the most commonly asked questions about the SAP HANA service in the Cloud Foundry
environment.
Where can I view the memory limits for an SAP HANA tenant database system?
See View Memory Usage for an SAP HANA Database System [page 678].
For database systems, see Restart SAP HANA Database Systems [page 674], and for tenant databases, see
Restart SAP HANA Tenant Databases [page 687].
You cannot update a single tenant database. You can only update the complete database system, including all
tenant databases. See Update SAP HANA Database Systems [page 672].
Restore activities are currently handled by SAP Operations. You can request a recovery for a single tenant
database or the entire SAP HANA database system. Depending on your scenario, see Restore SAP HANA Tenant
Databases [page 689] or Restore SAP HANA Database Systems [page 675].
How often does a backup occur? How much data can I lose in the worst case?
For productive databases in the Cloud Foundry environment, a full data backup is done once a day. A log backup is triggered at least every 30 minutes. The corresponding data and log backups are replicated to a secondary location every two hours. Complete data and log backups are kept on the primary location for the last two backups and on the secondary location for the last 14 days; older backups are deleted. Recovery is therefore only possible within a time frame of 14 days. Restoring the system from files on the secondary location might take some time, depending on availability.
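Based on the schedule above, whether a given point in time is still recoverable can be reasoned about as follows. This is a sketch of the stated 14-day window, not an SAP API:

```python
from datetime import datetime, timedelta

# Backups are kept on the secondary location for 14 days, per the text above.
RETENTION = timedelta(days=14)

def is_recoverable(restore_point, now):
    # Recovery is only possible within the retention window and cannot
    # target a point in the future.
    return restore_point <= now and now - restore_point <= RETENTION
```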
The Neo environment allows you to use SAP HANA single-container database systems, and SAP HANA tenant
database systems.
Note
The latest SAP HANA revision supported by SAP Cloud Platform in the Neo environment is 1.00.122.16. For
more information on SAP HANA revisions, see SAP Note 2021789 .
A database is associated with a specific subaccount and is available to applications in this subaccount. You can
create databases, bind them to applications, and delete them using the console client or the cockpit. You can bind
the same database to multiple applications, and the same application to multiple databases.
Feature Description
Work with Databases and Database Systems
● SAP HANA Single-Container Database Systems
The SAP HANA database is reserved for your exclusive use. You have full control of user management and can use a range of tools.
● SAP HANA Tenant Database Systems
SAP HANA supports multiple isolated databases in a single SAP HANA system. These are referred to as tenant databases. The SAP HANA tenant database system is reserved for your exclusive use, hosting multiple SAP HANA databases on a single SAP HANA database system. All tenant databases in the same system share the same system resources (memory and CPU cores), but each tenant database is fully isolated with its own database users, catalog, repository, persistence (data files and log files), and services.
For an overview on how to administer your database system and databases, see Database Administration [page 720].
Ensure Backup & Recovery
Backup and recovery of data stored in your database and database system are performed by SAP. If your databases are not working properly, you can resolve the issues by restarting the corresponding system or database. You can also request a restore in the SAP Cloud Platform cockpit.
See How often does a backup occur [page 890] and Analyze Backup Alerts [page 730].
Monitor Your Databases
Monitor the health of your SAP HANA databases in the SAP Cloud Platform cockpit and in the SAP HANA cockpit. For example, see Monitoring Database Systems [page 731]. You can also view the memory usage for your database systems in the SAP Cloud Platform cockpit. See View Memory Usage for an SAP HANA Database System [page 729].
Try it out
You can try out working with SAP HANA tenant databases in the Neo environment and create your own trial database on a shared SAP HANA tenant database system. For restrictions, see Restrictions in Trial Accounts [page 704].
You can use the following tools in combination with the SAP HANA service:
Restriction
General Restrictions
No automatic life cycle management for database objects
● The SAP HANA service does not provide automatic life cycle management for database objects, such as tables, indices, sequences, and so on. An application must create the necessary database objects, either by using JDBC to send the corresponding data definition statements to the database, or by using the schema creation capabilities of EclipseLink. Due to limitations of the EclipseLink schema creation feature, changes to the schema, like altering a table definition, must be done by the application. Alternatively, open source tools for database schema management (like Liquibase) can be used for life cycle management of database objects, but must be bundled with the application.
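The application-managed pattern described here (sending DDL at application start) looks like the following sketch. sqlite3 stands in for a JDBC/HANA connection purely to keep the example self-contained; the table is hypothetical.

```python
import sqlite3  # stand-in for a JDBC/HANA connection in this sketch

def ensure_schema(conn):
    # Application-managed life cycle: create required objects if they do not
    # exist yet, since the service performs no automatic schema management.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS persons (id INTEGER PRIMARY KEY, name TEXT)"
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
ensure_schema(conn)
ensure_schema(conn)  # idempotent: safe to run on every application start
```

Schema changes such as altering a table definition still have to be issued explicitly by the application, or delegated to a bundled tool like Liquibase.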
Restriction
Restrictions Applying to SAP HANA Tenant Databases
Backup
● When you stop a tenant database for several days, you may not be able to recover the database. It is important to keep databases running without longer downtimes.
Monitoring
● The availability of SAP HANA tenant databases is not monitored and no alerts are sent when a database is not available.
● The registration of availability checks for HANA native applications is not supported.
Recommendation
The support for database schemas on shared SAP HANA databases in trial accounts has ended. We recommend creating an SAP HANA tenant database on a shared SAP HANA tenant database system.
Restriction
● You can create only one trial tenant database in the subaccount.
● The SAP HANA service determines to which database system the tenant is assigned.
● Trial databases are configured using fixed quota for RAM and CPU.
● You can use the trial tenant database for 12 hours. It shuts down automatically after this period to free
resources. You can, however, restart it. For more information, see Restart SAP HANA Tenant Databases
[page 761].
● If you do not use the tenant database for 7 days, it is automatically deleted to free the consumed disk
space.
● Backup is not enabled and no recovery is possible.
● There are some other restrictions as to which SAP HANA features can be used in the trial scenario and
which cannot.
Learn how to create a new SAP HANA tenant database and to bind it to your application.
Scenario Tutorial
● You want to create an SAP HANA tenant database from the SAP Cloud Platform cockpit: Creating an SAP HANA Database from the Cockpit [page 705]
● You want to create an SAP HANA tenant database using the console client: Creating an SAP HANA Database Using the Console Client [page 712]
Create a database on an SAP HANA database system from a selected subaccount in the SAP Cloud Platform
cockpit.
Prerequisites
● Download and set up your Eclipse IDE, SAP HANA Tools for Eclipse, SAP Cloud Platform Tools for Java, and
SAP Cloud Platform SDK for Neo environment for Java Web. For more information, see Install SAP HANA Tools
for Eclipse [page 1224] and https://tools.hana.ondemand.com/#cloud.
● Install an SAP HANA tenant database system. This system must be assigned to a subaccount.
● You need a user who has the administrator role for the subaccount.
● Install Maven.
Context
To create an SAP HANA database from the cockpit, perform the following steps:
Steps Tools
● Create a Database in the Cockpit [page 706]: SAP Cloud Platform cockpit
● Create a Database User with Permissions for Working with Web IDE [page 707]: SAP HANA cockpit
● Start and Work with the SAP HANA Web-based Development Workbench [page 709]: SAP Cloud Platform cockpit, SAP HANA Web-based Development Workbench
● Deploy the Persistence with JDBC Java Application [page 710]: Maven, Browser
● View Table Content in SAP HANA Web-based Development Workbench [page 712]: SAP HANA Web-based Development Workbench
Related Information
In your subaccount in the SAP Cloud Platform cockpit, you create a database on an SAP HANA tenant database
system.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. From the Databases & Schemas page, choose New.
4. Enter the required data:
mdc1 (HANAMDC)
Note
mdc1 corresponds to the database system on which you create the database.
SYSTEM User Password: The password for the SYSTEM user of the database.
5. Choose Save.
6. The Events page shows the progress of the database creation. Wait until the tenant database is in state
Started.
7. Optional: To view the details of the new database, choose Overview in the navigation area and select the
database in the list. Verify that the status STARTED is displayed.
Create a new database user in the SAP HANA cockpit and assign the user the required permissions.
Procedure
1. Go to the cockpit and log on to the SAP HANA cockpit with the SYSTEM user and password.
You see a message that informs you that at that point, you lack the roles that you need to open the SAP HANA
cockpit.
2. To open the SAP HANA cockpit, go to the database overview page in the SAP Cloud Platform cockpit.
3. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas , then select the relevant
database.
4. In the database overview, open the SAP HANA cockpit link under Administration Tools.
You receive a confirmation that the required roles are assigned to you automatically.
7. Choose Continue.
Procedure
The password must start with a letter and only contain uppercase and lowercase letters ('a' – 'z', 'A' – 'Z'), and
numbers ('0' – '9').
6. Choose Save.
Procedure
1. To assign your user the roles with the required permissions for working with SAP HANA Web-based
Development Workbench, go to the Granted Roles section and choose the + (Add Role) button.
2. Type ide in the search field and select all roles in the result list.
Results
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to work with SAP HANA Web-based Development Workbench by logging out from SAP HANA cockpit first.
Start the SAP HANA Web-based Development Workbench from the cockpit and create an SAP HANA XS Hello World program.
Procedure
1. In the SAP Cloud Platform cockpit, choose SAP HANA / SAP ASE Databases & Schemas in the
navigation area.
2. Select the relevant database.
3. In the overview that is shown in the lower part of the screen, click the SAP HANA Web-based Development
Workbench link under Development Tools.
4. Log on to the SAP HANA Web-based Development Workbench with your new database user and password.
5. Select the Editor.
6. The header shows details for your user and database. Hover over the entry for the SID to view the details.
Note
Use the Logout button in the header to log on with a different user.
7. To create a new package, choose New Package from the context menu for the Content folder.
8. Enter a package name.
9. From the context menu for the new package node, choose File Create Application .
10. Select HANA XS Hello World as template and choose Create.
When you click the files under the new package in the hierarchy, they open in the editor.
11. To deploy the program, select the logic.xsjs file from the new package and choose Run.
The program is deployed and appears in the browser: Hello World from User <Your User>.
To work with the application, deploy the Persistence with JDBC sample in the cockpit, create a binding, and start
the application.
Procedure
1. Download and set up your Eclipse IDE, SAP HANA Tools for Eclipse, and SDK.
For more information, see Install SAP HANA Tools for Eclipse [page 1224].
2. Open the command window and navigate to the <SDK>/samples/persistence-with-jdbc folder.
3. To build the war file that you want to deploy with Maven, execute the mvn clean install command.
Procedure
Results
Caution
Do not choose Start. If you choose Start, a default schema and binding are created for the database; you'll do this
in the next task.
Procedure
1. In the cockpit, choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
2. Select the database.
3. Choose New Binding.
4. Leave the default settings for the data source (<DEFAULT>).
5. Select your Java application.
6. Enter your user for the database and your password.
7. Save your entries.
Results
Procedure
You can view the application in the browser and check that the names you entered are available in the database.
Procedure
1. To view the table in the SAP HANA Web-based Development Workbench, you have the following options:
○ If the SAP HANA Web-based Development Workbench is still open, choose Navigation Links > Catalog.
○ If the SAP HANA Web-based Development Workbench is closed, reopen it as described in Start and Work with the SAP HANA Web-based Development Workbench [page 709], and on the entry page, choose Catalog.
Create a database in an SAP HANA tenant database system, using SAP Cloud Platform console client commands
in the Neo environment.
Prerequisites
Note
To be able to use this functionality, you must purchase an SAP HANA tenant database system.
Please contact SAP for details at the SAP Support Portal as described at Getting Support [page 2280].
● Download and set up your SAP Cloud Platform SDK for Neo environment for Java Web and SAP HANA client.
For more information, see https://tools.hana.ondemand.com/#cloud.
● Install an SAP HANA tenant database system. Assign this to a subaccount.
Context
To create an SAP HANA database using the console client, perform the following steps:
Steps (with the tools used for each):
1. Create a Database Using Database System mdc1 [page 714]: console client, SAP Cloud Platform SDK
2. Create a Database User and Assign a Role [page 716]: console client, SAP Cloud Platform SDK, database tunnel
3. Bind Java Application to the Database [page 718]: console client, SAP Cloud Platform SDK
4. Start Java Application and Add Person Data with Servlet [page 718]: console client, SAP Cloud Platform SDK, browser
Related Information
Procedure
Note
Skip this step if you're working in a trial account.
Output Code
3. Create database.
Note
To create a tenant database on a trial landscape, use -trial- instead of the ID of an SAP HANA tenant
database.
Output Code
4. To access the SAP HANA database, provide the SYSTEM user password.
5. Optional: Check that the status of the database is STARTED.
Note
To check the status of the database in a trial landscape, enter hanatrial.ondemand.com instead of
hana.ondemand.com.
Procedure
Output Code
Use the console client to create a database user, assign the content_admin role, and change the password.
Context
You need the tunnel to connect to your database. You can use the connection details you obtain from the tunnel
response to connect to database clients, for example, Eclipse Data Tools Platform (DTP).
Note
The database tunnel must remain open while you work on the remote database instance. Close the tunnel only
when you have completed the session.
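The connection details from the tunnel response can be used by any SQL client. As a minimal sketch, assuming the localhost:30015 values shown elsewhere in this guide and placeholder credentials:

```python
# Sketch only: assemble client connection settings from the details the
# database tunnel prints (host localhost, port 30015 as shown in this guide).
# The user and password values are placeholders, not real credentials.

def jdbc_url(host: str, port: int) -> str:
    """Build the JDBC-style URL a SQL client such as Eclipse DTP would use."""
    return f"jdbc:sap://{host}:{port}"

tunnel = {"host": "localhost", "port": 30015,
          "user": "<database_user>", "password": "<password>"}

print(jdbc_url(tunnel["host"], tunnel["port"]))  # jdbc:sap://localhost:30015
```

Remember that the tunnel must stay open for the whole session; the URL only works while the tunnel process is running.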
Procedure
Note
You can also create a database user using SAP HANA studio in the Eclipse IDE. For more information, see
Creating an SAP HANA Database from the Cockpit [page 705].
Tip
Use this command window only for the tunnel command.
Output Code
Output Code
Output Code
Password:
Connected to localhost:30015
Output Code
0 rows affected (overall time 286,192 msec; server time 11,370 msec)
7. Assign the content_admin role to the database user using the following command:
8. Log on to the database with the new database user and change the password:
Note
If the database has a password policy that requires users to change their password after the initial logon,
you need to provide a new password, otherwise you cannot work with the servlet.
a. Use the quit command to log off from the hdbsql client:
hdbsql NEO_MULTID...=> \q
\hdbclient>hdbsql
Output Code
Password:
You have to change your password.
Enter new Password:
Confirm new Password:
Connected to localhost:30015
Once the database is available, you use another console client command to create a binding between the database
and an existing Java application.
Procedure
Go to the command window you used to create the database and enter the following command:
Output Code
There are additional commands that let you deploy the Java application and run it. You can view the application in
the browser, enter first and last names in the table, and check in SAP HANA Client that the names you entered are
available in the database.
Procedure
Output Code
3. To add person data, copy the URL from the status command into the address field of your browser and append /persistence-with-jdbc/. Start the servlet in the browser and add person data.
Output Code
2 rows selected (overall time 291,603 msec; server time 156 usec)
View Memory Usage for an SAP HANA Database System [page 729]
View the memory usage for an SAP HANA tenant database system in the Neo environment using the SAP
Cloud Platform cockpit.
Install a database system in the Neo environment using the SAP Cloud Platform cockpit.
Prerequisites
Recommendation
We recommend that you always use the latest available database version. For more information about the
availability of new versions for installation, see .
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
Note
You can select the size only if there is available quota for the size you specify in your subaccount.
Note
The name must be unique in your subaccount and can include only lowercase letters and digits.
8. Choose Start.
You can monitor the status of the installation in the Database Systems view.
Update your SAP HANA database systems in the Neo environment using the SAP Cloud Platform cockpit.
Prerequisites
● Use the SAP HANA XS administration tool to enable basic authentication for SAP HANA application lifecycle
management to update SAP HANA XS-based components. Navigate to sap/hana/xs/lm and add Basic in
the Authentication section.
● You are assigned the administrator role for the subaccount.
Context
To update your SAP HANA database systems, you have the following options:
● Update the software components installed on your SAP HANA database system to a higher version.
● Apply a single Support Package on top of an existing SAP HANA database system.
● Before you apply an update, read the SAP Notes listed in the UI, and perform all required steps.
Recommendation
We recommend that you always use the latest available version. For more information about the availability of
new HANA revisions for the update, see SAP HANA Service in the Neo Environment [page 701].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. To select the entry for the relevant database system in the list, click the link on its name.
The overview of the database system shows details, including the database version and state, and the number
of associated databases.
4. To update an SAP HANA database system, choose Check for updates.
5. Select the version to update to.
If you update to a later version, remember to read the corresponding release note.
Note
You can select SAP HANA revisions approved for use in SAP Cloud Platform only. To update to another
revision, please contact SAP Support.
Updating an SAP HANA database system to a maintenance revision can result in upgrade path limitations.
See SAP Note 1948334 for details.
6. (Optional) Specify whether you'd like a prompt for confirmation before the update of the SAP HANA database
system is applied and the system downtime is started.
By default, this option is selected. If you deselect it, the update is performed without any user interaction.
7. Choose Continue/Update.
The system begins preparing to update. The update process takes some time and is executed asynchronously.
The update dialog box remains on the screen while the update is in progress; however, you can close the dialog
box and reopen it later.
8. (Optional) If you chose to be prompted, the process stops and waits for confirmation before starting the
update.
During preparation, the SAP HANA database system is not modified, so you can safely cancel the update
process.
9. Choose Update.
The update starts and takes about 20 minutes.
Results
Related Information
Restart your database systems in the Neo environment using the SAP Cloud Platform cockpit.
Context
If your databases aren't working properly, you can try to solve the issues by restarting the corresponding database
system. A restart is performed for the entire database system.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the database system you want to
restart. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
3. Select the database system you want to restart.
4. On the Overview page of the database system, choose Restart.
5. Choose OK to confirm the restart.
Note
If security OS patches are pending for the database system you have restarted, the host of the database
system is also restarted.
Perform a point-in-time restore in the Neo environment by creating a service request in the SAP Cloud Platform
cockpit.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. In the navigation area, choose SAP HANA / SAP ASE Service Requests .
3. Choose New Service Request and do the following:
a. Choose Database System.
Caution
If you restore a database system, all databases within this system are restored. To restore a single
database only, see Restore Databases [page 761].
Caution
You will lose all data stored in the databases in the database system between the time you specify in the
New Service Request screen and the time at which you create the service request. For example, if you create
a restore request at 3 p.m. to restore your database system to its 9 a.m. state on the same day, all data
stored between 9 a.m. and 3 p.m. is lost.
d. Choose Save.
You see a template for opening an incident in the SAP Support Portal.
e. From the template, select the text between the two dashed lines and copy it to your clipboard.
Tip
Navigate to SAP HANA / SAP ASE Service Requests and choose the Display icon next to your
request to find the template for opening a ticket at any time.
f. Choose Close.
4. Log on to the SAP Support Portal with your S-user ID and password, and create a new incident by choosing
Report an Incident.
Tip
You can find a detailed step-by-step instruction for creating an incident in the Report an Incident - Help .
6. Once you have reached the Enter Incident view, enter the following data:
a. In the Classification panel, enter the component for persistency.
Note
For a complete list of SAP Cloud Platform components, see SAP Note 1888290 .
b. In the Problem Details panel, enter Database System Restore Request in the Short Text field.
c. Paste the template text you copied to your clipboard into the Long Text field.
d. Choose Send Incident.
Results
As soon as your database system is restored, the state of your request is set to Finished in the cockpit and the
incident is set to Completed. You can see the state of your request in the cockpit by navigating to SAP HANA /
SAP ASE Service Requests . The state appears next to your service request. SAP Support might contact you by
e-mail if they need clarification, or if you need to take any further action.
Note
Your database system is available for use for all users immediately after it is restored.
Note
To cancel your restore request, go to SAP HANA / SAP ASE Service Requests , choose the request and
select the Delete icon. You can cancel a request only if it still has the state New.
Related Information
Delete your database system in the Neo environment using the SAP Cloud Platform cockpit.
Prerequisites
Recommendation
Export valuable data to another source in advance, since all data in your database system will be deleted.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. Select the database system you want to delete.
4. On the overview page of the database system, choose Delete System.
You can monitor the status of the deletion in the Database Systems view.
Learn how to install new SAP HANA components in the Neo environment.
Prerequisites
Use the SAP HANA XS administration tool to enable basic authentication for SAP HANA application lifecycle
management to update SAP HANA XS-based components. Navigate to sap/hana/xs/lm and add Basic in the
Authentication section.
You can install the following types of SAP HANA components, as long as they are enabled in your subaccount:
● SAP HANA platform components, which are installed on the SAP HANA database system at the operating
system level
● SAP HANA XS applications, which are deployed on the SAP HANA database system
Restriction
Installation of SAP HANA XS-based components on SAP HANA database systems that are configured to
support SAP HANA tenant databases is currently not supported.
Installation of SAP HANA XS-based components is supported on SAP HANA database systems with version
SPS09 or higher.
Recommendation
We recommend that you always use the latest available version.
You should expect and plan for a temporary downtime for the SAP HANA database or SAP HANA XS Engine when
installing some SAP HANA components. You might not be able to work with SAP HANA studio, SAP HANA Web-
based Development Workbench, and cockpit UIs that depend on SAP HANA XS.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. From the navigation area, choose SAP HANA / SAP ASE Database Systems .
You see all database systems that are available in the subaccount, along with their details, including the
database type, version, memory size, state, and the number of associated databases.
3. To select the entry for the relevant database system in the list, click the link on its name.
The overview of the database system shows details, including the database version and state, and the number
of associated databases.
4. To install an SAP HANA component for the selected productive database system, choose Install components.
5. Select a solution to install.
If you have a license for the solution in your subaccount, all SAP HANA components that are part of the
solution are listed.
6. Select the target version for all listed components.
7. (Optional) Specify whether you'd like a prompt for confirmation before the SAP HANA components are
installed and the system downtime is started.
By default, this option is selected. If you deselect it, the installation is performed without any user interaction.
8. Choose Continue/Install.
Results
SAP HANA components are installed on your SAP HANA database system.
Note
Refer to SAP HANA Service in the Neo Environment [page 701] to find out which HANA revision is supported by
SAP Cloud Platform.
Related Information
View the memory usage for an SAP HANA tenant database system in the Neo environment using the SAP Cloud
Platform cockpit.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a space that owns the SAP HANA tenant database system
you'd like to view memory limits for. For more information, see Navigate to Global Accounts, Subaccounts,
Orgs, and Spaces in the Cockpit [page 953].
2. In the navigation area, choose Database Systems SAP HANA / SAP ASE and select the entry for the
relevant database system.
You see a table that lists the memory limits and usage for each tenant database and the system database.
You can view the following values:
○ On database system level:
○ Global allocation limit: The amount of memory that is available for the SAP HANA system.
○ Global shared memory allocation: The amount of allocated memory of the SAP HANA system that
cannot be associated with a concrete process.
○ Global shared memory usage: The amount of used memory of the SAP HANA system that cannot be
associated with a concrete process.
○ On tenant and system database level:
○ Configured allocation limit: The limit that is currently set for a particular process.
○ Allocated memory: The memory that is currently allocated to a particular process.
○ Used memory: The memory that is currently used by a particular process.
For more information about memory usage, see the SAP HANA Administration Guide.
Note
If you haven't set a limit for a particular process or if you've allocated a percentage, the corresponding entry
is empty and the total of configured allocation limits cannot be calculated.
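The behavior described in the note can be sketched as follows; the helper function, database names, and GB values are made up for illustration:

```python
# Sketch of the note above: the total of configured allocation limits can only
# be computed when every database has an absolute limit set. None stands for
# "no limit configured (or a percentage was allocated)".

def total_allocation_limit(limits):
    """Return the sum of per-database limits, or None if any entry is unset."""
    if any(limit is None for limit in limits.values()):
        return None  # the corresponding cockpit entry stays empty
    return sum(limits.values())

print(total_allocation_limit({"tenant1": 32, "tenant2": 16}))    # 48
print(total_allocation_limit({"tenant1": 32, "tenant2": None}))  # None
```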
Analyze error warnings that are related to data backups of tenant databases or the system database in the Neo
environment.
Context
If the backup ran into problems, backup-related error messages are shown in the Monitoring tab of the SAP Cloud
Platform cockpit. For more information, see View Monitoring Metrics of a Database System [page 1251]. This can
be related to memory issues in the SAP HANA database system.
Procedure
To find out why the backup failed, analyze the alert to determine which tenant database or tenant databases, or
whether the system database is affected.
1. If only one or a few tenant databases are affected, try the following:
1. Check the memory limits and the memory usage of the affected tenant databases using the Memory
Usage tab of the SAP Cloud Platform cockpit. If there are memory limits set on the tenant databases,
consider removing or increasing the limits. For more information, see and View Memory Usage for an SAP
HANA Database System [page 729].
Tip
If you frequently run into memory-related backup problems, try to find out where they come from and why your
databases consume too much memory. These actions might resolve your issues:
● If there are any tenant databases you don't currently need, stop these databases to free resources. Restart
them only when you need them.
● Delete any unneeded tenant databases.
● If possible, remove data from your databases.
● If possible, move data to another system.
● Resize the database system.
Note
Even after you've fixed the memory issue, you may still see the alert in the cockpit until the next
daily backup has been successfully created.
Related Information
Learn how you can monitor your database systems running in the Neo environment.
In the cockpit, you can view the current metrics of a selected database system to get information about its health
state. You can also view the metrics history of a productive database to examine the performance trends of your
database over different intervals of time or investigate the reasons that have led to problems with it. You can view
the metrics for all types of databases.
CPU Load: The percent of the CPU that is used on average over the last one minute.
Disk I/O: The number of bytes per second that are currently being read from or written to the disc.
Network Ping: The percent of packets that are lost to the database host.
OS Memory Usage: The percent of the operating system memory that is currently being used.
Used Disc Space: The percent of the local discs of the operating system that is currently being used.
Prerequisites
The readMonitoringData scope is assigned to the used platform role for the subaccount. For more information, see
Platform Scopes [page 1676].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. Navigate to the Database Systems page either by choosing SAP HANA / SAP ASE Database Systems
from the navigation area or from the Overview page.
All database systems available in the selected subaccount are listed with their details, including the database
version and state, and the number of associated databases.
The Current Metrics panel shows the current state of the metrics for the selected database system. When a
threshold is reached, the metric health status changes to warning or critical.
The Metrics History panel shows the metrics history of your database.
When you open the checks history, you can view graphic representations of the different checks, and zoom in
when you click and drag horizontally or vertically to get further details. If you zoom in a graphic horizontally, all
other graphics also zoom in to the same level of detail. Press Shift and drag to pan a graphic. Zoom out to
the initial size with a double-click.
You can select different time intervals for viewing the checks. Depending on the selected interval, data is
aggregated as follows:
○ Last 12 or 24 hours - data is collected every minute.
○ Last 7 days - data is aggregated from the average values for each 10 minutes.
○ Last 30 days - data is aggregated from the average values for each hour.
You can also select a custom time interval when you are viewing history of checks. If you select an interval
during which the application isn't running, the graphics won't contain any data.
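The interval-dependent aggregation can be sketched as simple bucket averaging; this is illustrative only, not the cockpit's actual implementation:

```python
# Sketch of the aggregation rules above: per-minute samples are averaged into
# 10-minute buckets for the 7-day view and hourly buckets for the 30-day view.

def aggregate(samples, bucket_size):
    """Average consecutive groups of bucket_size per-minute samples."""
    return [sum(samples[i:i + bucket_size]) / len(samples[i:i + bucket_size])
            for i in range(0, len(samples), bucket_size)]

minute_values = [10, 20, 30, 40, 50, 60]  # six per-minute readings
print(aggregate(minute_values, 3))        # [20.0, 50.0]
```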
Related Information
You can use the REST API to get metrics for your database systems that are running on SAP Cloud Platform in the
Neo environment.
Protection
The monitoring REST API is available with the following basic URI: https://api.{host}/monitoring/v2.
This version is protected with OAuth 2.0 client credentials. Create an OAuth client and obtain an access token to
call the API methods. See Using Platform APIs [page 1289]. For more information about the format of the REST
APIs, see Monitoring API .
Note
While you are creating the API client on the Platform API tab, select the Monitoring Service API with the Read
Monitoring Data scope.
Request the states or the metric details of your database systems by using the GET REST API calls.
Example
Use the following request to receive all the metrics for a database system located in the Europe (Rot/Germany)
region (with hana.ondemand.com host):
https://api.hana.ondemand.com/monitoring/v2/accounts/<subaccount_name>/dbsystem/
<database_system>/metrics
Example
Use the following request to receive the state of a database system located in the US East (Ashburn) region
(with us1.hana.ondemand.com host):
https://api.us1.hana.ondemand.com/monitoring/v2/accounts/<subaccount_name>/dbsystem/
<database_system>/state
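Both calls are plain HTTP GET requests carrying an OAuth bearer token. A minimal sketch, assuming a token has already been obtained as described above (the request is built but not sent here):

```python
from urllib.request import Request

# Sketch of the metrics call above. The subaccount and database-system names
# are placeholders, and the OAuth access token must be obtained separately
# (see Using Platform APIs).

def metrics_request(host, subaccount, db_system, access_token):
    """Build an authenticated GET request for all database-system metrics."""
    url = (f"https://api.{host}/monitoring/v2/accounts/"
           f"{subaccount}/dbsystem/{db_system}/metrics")
    return Request(url, headers={"Authorization": f"Bearer {access_token}"})

req = metrics_request("hana.ondemand.com", "<subaccount_name>",
                      "<database_system>", "<access_token>")
print(req.full_url)
# urllib.request.urlopen(req) would then execute the call.
```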
Protects your application data stored in SAP HANA databases, SAP ASE databases, or any other Data & Storage
services in the Neo environment hosted on SAP Cloud Platform.
Overview
The SAP Cloud Platform Neo environment includes a standard disaster recovery option. It is based on data restore
from backups, which are stored in a disaster recovery site. The backups contain all data stored in the Data &
Storage services on SAP Cloud Platform. For more information about the Data & Storage services, see Capabilities
[page 24].
Note
Data not stored in the Data & Storage services, as well as data stored in deployed applications, cannot be
recovered.
For more information about the backup specifics, see Frequently Asked Questions [page 889].
● The recovery point objective (RPO) has no strict SLA. The objective is 24 hours.
Note
The time needed for declaring the disaster is included in the RTO.
The SAP Cloud Platform Enhanced Disaster Recovery service requires an additional fee, but offers better SLAs for
the RPO and RTO. For more information, see .
● Follow the instructions provided in the established SAP Cloud Platform notification channels. For more
information, see Platform Updates and Notifications [page 2283].
● Create an incident for backup restore, assigned to the component BC-NEO-PERS. For more information, see
Getting Support [page 2280].
Specify in the incident the respective names of the affected subaccounts, applications, and database systems
along with the corresponding database version.
You can set up your database system in high availability mode. A high availability setup consists of two database
systems that permanently replicate data from a primary to a secondary database system.
Setting Up High Availability for SAP HANA Database Systems [page 737]
To set up high availability, SAP makes a secondary SAP HANA database system available for you and sets
up the data replication between your database system (the primary database system) and your secondary
database system. Reporting an incident triggers the creation of the high availability setup.
Administering SAP HANA Database Systems in a High Availability Setup [page 739]
You can administer the primary database system as you would any other SAP HANA database system.
Some restrictions apply for the management of the secondary database system.
The high availability (HA) setup consists of two SAP HANA database systems, a primary and a secondary, between
which data is continuously and synchronously replicated. If the primary database system fails, the secondary
database system is promoted to the role of primary database system.
As shown in the figure below, the primary and secondary SAP HANA database systems (also called the "HA pair")
are hosted in the same region. Applications that are running in that region establish a client connection to the
primary database system, but data is continuously replicated from the primary database system to the secondary
database system. Since the data is replicated synchronously, the replication causes only very little delay. Each
database has separate resources, including disks.
The primary database system is constantly monitored. A failure triggers the promotion of the secondary system.
The promotion takes about a minute and isn't dependent on the load of the database. To minimize data loss,
access to the primary database system is stopped before the promotion, and the promotion takes place only after
all pending data has been replicated.
Note
Once the promotion is complete, applications can reconnect to the new primary database system and see all
committed data, including data that was committed in the initial primary database system. This also works for an
SAP HANA tenant database. The application doesn't need to be restarted as it is bound to the HA pair. Once the
initial primary database system is reactivated as the new secondary database system, the new primary database
system begins replicating. You should develop your applications to gracefully handle connection issues during the
promotion.
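The advice to handle connection issues gracefully can be sketched as a retry loop; the connect function and error type are placeholders for whatever database driver the application actually uses:

```python
import time

# Illustrative retry wrapper for the failover window described above.
# connect_fn stands in for the application's real driver call and is assumed
# to raise ConnectionError while the promotion is still in progress.

def connect_with_retry(connect_fn, attempts=5, delay_seconds=2.0):
    """Retry the connection a few times before giving up."""
    for attempt in range(1, attempts + 1):
        try:
            return connect_fn()
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(delay_seconds)
```

Since the promotion takes about a minute, a handful of retries spaced a few seconds apart is normally enough to ride out the failover.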
Setting Up High Availability for SAP HANA Database Systems [page 737]
Administering SAP HANA Database Systems in a High Availability Setup [page 739]
What to Do in Case of an SAP HANA Database System Failure [page 741]
To set up high availability, SAP makes a secondary SAP HANA database system available for you and sets up the
data replication between your database system (the primary database system) and your secondary database
system. Reporting an incident triggers the creation of the high availability setup.
Prerequisites
● Available quota for database systems in your subaccount. You need the same quota for the secondary
database system as for the primary database system. For more information, see Adding Quotas to
Subaccounts [page 966].
● The primary database system must have a minimum size of 64 GB.
● The primary database system must be at least SAP HANA revision 122.13.
Procedure
Log in to the SAP Support Portal and report an incident on component BC-NEO-PERS.
Provide the ID of the database system for which you want to set up high availability. For detailed instructions about
how to report an incident on the SAP Support Portal, see Getting Support [page 2280].
Note
During the high availability setup, file-based certificate stores are migrated to in-database certificate stores. For
more information, see SAP Note 2175664 .
Results
Once high availability has been set up for your database system, you are notified via the SAP Support Portal.
Note
The primary database system might need to be restarted during the setup process. When it's convenient for
you, you can restart the system in the subaccount that owns the database system in the SAP Cloud Platform
cockpit.
Related Information
Administering SAP HANA Database Systems in a High Availability Setup [page 739]
What to Do in Case of an SAP HANA Database System Failure [page 741]
High Availability for SAP HANA Database Systems [page 736]
You can administer the primary database system as you would any other SAP HANA database system. Some
restrictions apply for the management of the secondary database system.
Access
○ Primary SAP HANA database system: Access the primary SAP HANA database system via the cockpit or the console client as you do any other database system.
○ Secondary SAP HANA database system: You cannot access the secondary database system.
Monitoring
○ Primary: Monitor the primary database system as you do any other SAP HANA database system. You can also monitor the status of the replication to the secondary database system.
○ Secondary: You cannot monitor the secondary database system. However, you can monitor the status of the system replication via the primary database system.
Update
○ Primary: Update the primary database system as you do any other SAP HANA database system. For more information, see Update Database Systems [page 722].
Restriction
You can't update additional SAP HANA components using the update with minimized downtime.
○ Secondary: When you trigger an update of the primary system, the secondary system is also updated. The secondary system is updated before the primary system; the primary system remains up and running while the secondary system is being updated. The downtime during the update is the same as for an SAP HANA system that is not in a high availability setup.
Installing additional SAP HANA components
○ Primary: Install SAP HANA components on the primary database system as you do on any other SAP HANA database system. For more information, see Install SAP HANA Components [page 727].
○ Secondary: When you trigger the installation of SAP HANA components on the primary database system, they are also installed on the secondary database system.
Restart
○ Primary: Restart the primary database system as you do any other SAP HANA database system. For more information, see Restart Database Systems [page 724].
○ Secondary: You can request a restart of the secondary database system by reporting an incident in the SAP Support Portal. For more information, see Getting Support [page 2280].
Restore
○ Primary: Restore the primary database system as you do any other SAP HANA database system. For more information, see Restore Database Systems [page 725].
○ Secondary: You don't need to restore the secondary database system. In case of a failure of the secondary database system, the data is replicated from the primary database system once the secondary system is reactivated.
Delete
○ Primary: Delete the primary SAP HANA database system as you would any other database system. For more information, see Delete Database Systems [page 727].
○ Secondary: You can request the removal of the secondary database system by reporting an incident in the SAP Support Portal. For more information, see Getting Support [page 2280].
Note
If you delete the primary database system, both the primary and secondary SAP HANA database systems are deleted.
Related Information
Setting Up High Availability for SAP HANA Database Systems [page 737]
If your primary SAP HANA database system fails in a high availability setup, the secondary database system is
promoted to the role of primary database system.
There is no action required from you if your primary SAP HANA database system fails.
If you have questions about the high availability features, you can report an incident on the SAP Support Portal. For
more information, see Getting Support [page 2280].
Related Information
Administering SAP HANA Database Systems in a High Availability Setup [page 739]
Setting Up High Availability for SAP HANA Database Systems [page 737]
High Availability for SAP HANA Database Systems [page 736]
Set up your SAP HANA database system in the Neo environment in a disaster recovery mode to restore
operations.
You can set up your SAP HANA database system in a disaster recovery mode. A disaster recovery setup consists of
two database systems that are hosted in different regions and that permanently replicate data from a primary to a
secondary database system.
Disaster recovery refers to restoring operations after an outage due to a prolonged region or site failure. Although
similar to a high availability setup, disaster recovery may require backing up data across longer distances, and may
thus be more complex and costly.
Setting Up Disaster Recovery for SAP HANA Database Systems [page 743]
To set up disaster recovery, SAP makes a secondary SAP HANA database system available for you and
sets up the data replication between your database system (the primary database system) and your
secondary database system. Report an incident to create a disaster recovery setup.
The disaster recovery (DR) setup consists of two SAP HANA database systems that are hosted in two different
regions and between which data is continuously and asynchronously replicated: a primary and a secondary
database system. If there is a disaster affecting the primary region, the secondary database system is promoted to
the role of the primary database system.
As shown in the figure below, applications running in the region in which the primary database system is hosted
establish a client connection to that database system. Ideally, applications are also deployed to the DR region and
bound to the secondary database system, although these applications need to be inactive as long as the region in
which the primary database system is hosted is up and running.
The geographic separation of the two regions makes the DR system capable of withstanding the loss of an entire
region. For example, your system may include a primary database system that is located in San Francisco, and a
secondary database system in San Jose. If the primary database system is destroyed, the secondary server is safe
and ready to assume control. Data is continuously and asynchronously replicated between the primary and the
secondary database system. Each database has separate resources, including disks.
As shown in the figure below, the secondary database system in the DR region is promoted to the role of primary
database system if the region in which the primary database system is hosted is down. Once the promotion is
complete, the former secondary database system serves as the primary database system.
Related Information
Setting Up Disaster Recovery for SAP HANA Database Systems [page 743]
Administering SAP HANA Database Systems in a Disaster Recovery Setup [page 745]
What to Do in Case of an SAP HANA Database System Failure [page 747]
To set up disaster recovery, SAP makes a secondary SAP HANA database system available for you and sets up the
data replication between your database system (the primary database system) and your secondary database
system. Report an incident to create a disaster recovery setup.
Prerequisites
Restriction
● You cannot set up SAP HANA tenant database systems (MDC) in disaster recovery mode.
● If you have installed SAP Streaming Analytics on an SAP HANA database system, you cannot set up this
system in disaster recovery mode.
Procedure
Disaster recovery is available in the region in which your database system runs. Provide the ID of the database
system for which you want to set up disaster recovery. For detailed instructions on how to report an incident on the
SAP Support Portal, see Getting Support [page 2280].
Note
During the creation of the disaster recovery setup, file-based certificate stores are migrated to in-database
certificate stores. For more information, see SAP Note 2175664.
Results
Once disaster recovery has been set up for your database system, you are notified via the SAP Support Portal.
Note
The primary database system will be restarted during the creation of the disaster recovery setup. The expected
downtime is 30 minutes.
Related Information
Administering SAP HANA Database Systems in a Disaster Recovery Setup [page 745]
What to Do in Case of an SAP HANA Database System Failure [page 747]
Disaster Recovery for SAP HANA Database Systems [page 742]
Although you administer the primary database system in the same way as any other SAP HANA database system,
some restrictions apply to the management of the secondary database system.
Access
● Primary database system: Access the primary SAP HANA database system via the cockpit or the console client as you do any other database system.
● Secondary database system: You cannot access the secondary database system while system replication is running. To test the disaster recovery setup, you need to stop system replication.

Monitoring
● Primary database system: Monitor the primary database system as you do any other SAP HANA database system. You can also monitor the status of the replication to the secondary database system.
● Secondary database system: You can only monitor OS-related metrics, and you can only do so while system replication is running. You can also monitor the status of system replication via the primary database system.

Update
● Primary database system: Update the primary database system as you do any other SAP HANA database system. For more information, see Update Database Systems [page 722].
● Secondary database system: You can update the secondary database system as any other SAP HANA database system. For more information, see Update Database Systems [page 722].

Installing additional SAP HANA components
● Primary database system: Install SAP HANA components on the primary database system as you do on any other SAP HANA database system. For more information, see Install SAP HANA Components [page 727].
Note
Installed SAP HANA components are replicated only once, during the system replication setup. If you want to install other SAP HANA components after the system replication has been set up, you'll need to install them on both the primary and the secondary database system.
Restriction
You cannot install SAP Streaming Analytics on a database system that is in disaster recovery mode.
● Secondary database system: You can install SAP HANA components on the secondary database system as on any other SAP HANA database system. For more information, see Install SAP HANA Components [page 727]. The primary database system remains available during the installation of SAP HANA components on the secondary database system.
Restriction
You cannot install XS-based applications on the secondary database system; XS-based applications are automatically replicated from the primary system. You also cannot install SAP Streaming Analytics on a database system that is in disaster recovery mode.

Restart
● Primary database system: Restart the primary database system as you do any other SAP HANA database system. For more information, see Restart Database Systems [page 724].
● Secondary database system: You can restart the secondary database system as any other SAP HANA database system. For more information, see Restart Database Systems [page 724].

Restore
● Primary database system: Restore the primary database system as you do any other SAP HANA database system. For more information, see Restore Database Systems [page 725].
● Secondary database system: You don't need to restore the secondary database system. In case of a failure of the secondary database system, the data is replicated from the primary database system once the secondary system is reactivated.

Delete
● Primary database system: Delete the primary database system as any other database system. For more information, see Delete Database Systems [page 727].
Note
To delete the primary database system, you need to delete the secondary database system first.
● Secondary database system: You can delete the secondary database system as any other SAP HANA database system. For more information, see Delete Database Systems [page 727]. System replication stops before the secondary database system is deleted. The primary system remains up and running while the secondary system is deleted.
Setting Up Disaster Recovery for SAP HANA Database Systems [page 743]
What to Do in Case of an SAP HANA Database System Failure [page 747]
Disaster Recovery for SAP HANA Database Systems [page 742]
If there is a disaster affecting the primary region, the secondary database system, which is hosted in a different
region, is promoted to the role of the primary database system.
There is no action required from you if a disaster affects the primary region and causes the failure of the
primary database system.
● SAP constantly monitors your database systems and will promote your secondary database system as soon
as possible.
● If you are in doubt or if you have further questions, you can report an incident on the SAP Support Portal. For
more information, see Getting Support [page 2280].
Related Information
Administering SAP HANA Database Systems in a Disaster Recovery Setup [page 745]
Setting Up Disaster Recovery for SAP HANA Database Systems [page 743]
Disaster Recovery for SAP HANA Database Systems [page 742]
Use the set of console client commands provided for managing database systems in the Neo environment.
Related Information
An overview of the different tasks you can perform to administer databases in the Neo environment.
Use the cockpit to create databases on database management systems in your subaccount in the Neo
environment and set properties, such as the database size.
Context
In the cockpit, you can create databases at the subaccount and the database system level. The procedures listed
below describe how to create a database at the subaccount level. To create a database at the database system
level, choose SAP HANA / SAP ASE Database Systems in the navigation area at the subaccount level. Select
a database system in the list. Choose Databases in the navigation area at the database system level. Then choose
New Database and enter the required details.
There is a limit to the number of databases you can create, and you'll see an error message when you reach that
number.
Related Information
Create a database user and assign it the administrator role to perform administrative tasks with your database
in the Neo environment.
Context
Related Information
As a subaccount administrator, you can use the database user feature provided in the cockpit to create your own
database administration user for your SAP HANA XS database in the Neo environment.
Context
SAP Cloud Platform creates four users that it requires to manage the database: SYSTEM, BKPMON, CERTADM,
and PSADBA. These users are reserved for use by SAP Cloud Platform.
Caution
Do not delete or deactivate these users or change their passwords.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. Choose SAP HANA / SAP ASE Databases and Schemas in the navigation area.
You see all databases that are available in the subaccount, along with their details, including the database type,
version, memory size, state, and the number of associated databases.
3. Select the relevant SAP HANA XS database.
4. In the Development Tools section, click Database User.
A message confirms that you do not yet have a database user.
5. Choose Create User.
Your user and initial password are displayed. Change the initial password when you first log on to an SAP
HANA system, for example the SAP HANA Web-based Development Workbench.
Note
○ Your database user is assigned a set of permissions for administering the SAP HANA database system,
including the HCP_PUBLIC and HCP_SYSTEM roles. The HCP_SYSTEM role contains, for example,
privileges that allow you to create database users and grant additional roles to your own and other
database users.
○ You also require specific roles to use the SAP HANA Web-based tools. For security reasons, only the role
that provides access to the SAP HANA Web-based Development Workbench is assigned as default.
6. To log on to the SAP HANA Web-based Development Workbench and change your initial password now
(recommended), copy your initial password and then close the dialog box.
You do not have to change your initial password immediately. You can open the dialog box again later to display
both your database user and initial password. Since this poses a potential security risk, however, you are
strongly advised to change your password as soon as possible.
Caution
You are responsible for choosing a strong password and keeping it secure. If your user is blocked or if you've
forgotten the password of your user, another database administration user with USER_ADMIN privileges can
unlock your user.
Next Steps
● Tip
There may be some roles that you cannot assign to your own database user. In this case, we recommend
that you create a second database user (for example, ROLE_GRANTOR) and assign it the HCP_SYSTEM
role. Then log onto the SAP HANA system with that user and grant your database user the roles you require.
● In the SAP HANA system, you can now create database users for the members of your subaccount and assign
them the required developer roles.
● To use other SAP HANA tools such as the SAP HANA cockpit or the SAP HANA XS Administration Tool, you
must first assign yourself access to them. See Assign Roles Required for the SAP HANA XS Administration
Tool [page 1247].
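In SQL terms, the approach described in the tip above might look like the following sketch. The user name ROLE_GRANTOR and the password are placeholders, and the exact statements depend on your password policy:

```sql
-- Sketch only: run as a user with USER ADMIN privileges.
-- ROLE_GRANTOR and the password are placeholders.
CREATE USER ROLE_GRANTOR PASSWORD "Initial1Password";
GRANT HCP_SYSTEM TO ROLE_GRANTOR;
-- Then log on as ROLE_GRANTOR and grant your own database user
-- the roles it requires, for example:
-- GRANT <role> TO <your_database_user>;
```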
Related Information
Create a Database Administration User for SAP HANA Tenant Databases [page 751]
You use the SYSTEM user to create a database administration user with SAP HANA cockpit in the Neo
environment.
Prerequisites
You have enabled the Web Access switch for your tenant database.
You specify a password for the SYSTEM user when you create your SAP HANA tenant database. You use the
SYSTEM user to log on to SAP HANA Cockpit and create your own database administration user.
The SYSTEM user is a preconfigured database superuser with irrevocable system privileges, such as the ability to
create other database users, access system tables, and so on. A database-specific SYSTEM user exists in every
database of a tenant database system. To ensure that the administration tool SAP HANA cockpit can be used
immediately after database creation, the SYSTEM user is automatically given several roles the first time the SAP
HANA cockpit is opened with this user.
Caution
You should not use the SYSTEM user for day-to-day activities. Instead, use this user to create dedicated
database users for administrative tasks and to assign privileges to these users.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
All databases available in the selected subaccount are listed with their ID, type, version, and related database
system.
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform further
actions, for example, delete the database.
A message is displayed to inform you that at that point, you lack the roles that you need to open the SAP
HANA cockpit.
6. To confirm the message, choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
7. Choose Continue.
Note
The user name always appears in upper case letters.
12. In the Authentication section, make sure the Password checkbox is selected and enter a password.
Note
According to the SAP HANA default password policy, the password must start with a letter and only contain
uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), and numbers ('0' - '9'). If you've changed the default
policy, other password requirements might apply.
The new database user is displayed as a new node under the Users node.
14. Assign your user the necessary administrator roles and privileges by going to the Granted Roles section and
choosing the + (Add Role) button. To allow your administration user to create new users and assign roles and
privileges to them, for example, add the USER_ADMIN privilege. For more information, see System Privileges in
the SAP HANA Security Guide.
15. Choose OK.
16. Save your changes.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to work
with SAP HANA Web-based Development Workbench by logging out from SAP HANA Cockpit first.
Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench with the
SYSTEM user instead of your new database user. Therefore, choose the Logout button before you continue
to work with the SAP HANA Web-based Development Workbench, where you need to log on again with the
new database user.
Recommendation
We recommend that you create more than one database administration user. If one database
administration user is locked or if the password needs to be reset, only another administration user can
unlock this user and reset the password.
Next Steps
You can use the newly created database administration user to create database users for the members of your
subaccount and assign them the required developer roles.
Establish a data source binding between your applications and the SAP HANA database in the Neo environment
using the console client or the SAP Cloud Platform cockpit.
You can also bind a database to applications that are owned by subaccounts other than the one in which the
database is deployed, but these subaccounts first need permission to use the database. For more information, see
Sharing Databases with Subaccounts [page 763].
When you create a binding between an application and an SAP HANA XS or SAP HANA tenant database, you
specify a database user, password, and a database ID. The database user you specify determines which database
schema is assigned to the application as its default. The default name of the database schema is the same as the
name of the database user, who is also referred to as the schema owner. By default, only the schema owner has
permission to access data stored in a schema.
As shown below, you can use different database users to bind applications to different schemas.
To bind several applications to the same schema, specify the same database user each time you create the
binding.
The application uses the database user's default schema, but since a database user may have access to more than
one schema, it could potentially use any of these schemas. For more information on using non-default schemas for
bindings, see Use Non-Default SAP HANA Database Schemas for Application Bindings [page 759].
Recommendation
We recommend that you use a database user’s default schema. Using non-default SAP HANA database
schemas for application bindings requires expert SAP HANA database knowledge.
Before you create the binding, create a technical user as described in Disable the Password Lifetime Handling for a
New Technical SAP HANA Database User [page 755].
When you create a data source binding in the Neo environment, you specify a technical database user that
determines to which SAP HANA XS or SAP HANA tenant database schema the application is bound. You need to
prevent the system from asking you to change the initial password of that user. Otherwise, the application may not
start correctly.
Prerequisites
● Deploy a productive SAP HANA XS or SAP HANA tenant database in your account.
● Create a database administrator user for that database. For more information, see Creating a Database
Administration User [page 749].
● Assign the following roles to your database administration user: sap.hana.ide.roles::CatalogDeveloper,
sap.hana.ide.roles::Developer, and sap.hana.ide.roles::EditorDeveloper.
Note
The procedure below creates a new technical database user. If you've already created a user that you want to
specify for the binding, follow the instructions in step 9.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. In the navigation area, choose SAP HANA / SAP ASE Database & Schemas .
3. Select the SAP HANA database for which you would like to create a binding.
4. In the Development Tools section, choose the SAP HANA Web-based Development Workbench link.
5. Log in with the credentials of your existing database administrator user.
6. Choose Catalog.
7. Choose the SQL button.
8. Enter the following command, providing a username and password for your new user:
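A statement of the following kind creates a technical user whose password never expires. This is a sketch only: the user name MYTECHUSER and the password are placeholders, and the syntax follows standard SAP HANA SQL.

```sql
-- Sketch: create a technical user and disable password expiration
-- (MYTECHUSER and the password are placeholders).
CREATE USER MYTECHUSER PASSWORD "Abcd1234" NO FORCE_FIRST_PASSWORD_CHANGE;
ALTER USER MYTECHUSER DISABLE PASSWORD LIFETIME;
```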
Note
To use this user only for the application binding, you don't need to assign any roles.
Results
You have disabled the password lifetime check of a new SAP HANA database user so that this user's password
never expires.
Note
If you'd like to change the password of that user after you've created the binding, do the following:
4. Delete the existing data source binding and create a new one specifying the new credentials of your
database user. See Bind Databases Using the Cockpit [page 757] or Bind Databases Using the Console
Client [page 758].
5. Restart your application. See Start and Stop Applications [page 1706].
Next Steps
You can now use the database user to create the data source binding:
Related Information
Use Non-Default SAP HANA Database Schemas for Application Bindings [page 759]
SAP HANA Security Guide
You can bind a database to your Java application in the SAP Cloud Platform cockpit in the Neo environment.
Prerequisites
● Deploy a Java application in your subaccount. See Deploying and Updating Applications [page 1175].
● Deploy a database in a subaccount that belongs to an enterprise account.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. Choose one of the following options:
By database:
1. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas, then select the relevant database.
2. Choose Data Source Bindings. The overview lists all Java applications that the specified database is currently bound to, as well as the database user used in each case.
3. Choose New Binding.
4. Enter a data source name and the name of the application you want to bind the database to.
Note
Data source names you enter here must match the JNDI data source names used in the corresponding applications, as defined in the web.xml or persistence.xml file. To create a binding to the default data source, enter the data source name <DEFAULT>. An application that is bound to the default data source (shown as <DEFAULT>) cannot be bound to any other databases. To use other databases, first rebind the application using a named data source.

By Java application:
1. Choose Applications > Java Applications in the navigation area and select the relevant application.
Note
Data source names are freely definable but need to match the JNDI data source names used in the corresponding applications, as defined in the web.xml or persistence.xml file. To create a binding to the default data source, enter the data source name <DEFAULT>. An application that is bound to the default data source (shown as <DEFAULT>) cannot be bound to any other databases. To use other databases, first rebind the application using a named data source.
5. In the Database ID field, enter the database to which the application should be bound.
6. Provide the credentials of the database user.
7. Select the checkbox Verify credentials to verify the validity of the credentials.
8. Save your entries.
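The data source name used in the binding is the one the application declares in its deployment descriptor. In a web.xml, such a declaration might look like the following sketch, where the name jdbc/mydb is illustrative:

```xml
<resource-ref>
    <res-ref-name>jdbc/mydb</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
</resource-ref>
```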
Next Steps
Once the binding is created, you can start your application. To do so, navigate to Applications Java
Applications and select the Start icon for your application.
To unbind a database from an application, choose the Delete icon next to the binding. The application maintains
access to the database until it is restarted.
Related Information
You can bind a database to your Java application using the bind-db command in the Neo environment.
Prerequisites
● Deploy a Java application in your subaccount. See Deploying and Updating Applications [page 1175].
Procedure
1. Open the command window in the <SDK>/tools folder and execute the bind-db [page 1805] command,
replacing the values as appropriate:
Example:
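A call might look like the following sketch. The parameter names shown here are assumptions; consult the bind-db [page 1805] reference for the exact syntax:

```shell
# Sketch only: host, subaccount, application, and database names are placeholders.
neo bind-db --host hana.ondemand.com --account mysubaccount \
    --application myapp --user myplatformuser \
    --id mydbid --db-user mydbuser
```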
In this example, a data source name has not been specified and the application therefore uses the default data
source.
Note
To bind an application to a database in another account, specify an owner account or an access token. For
more information, see bind-db [page 1805] or Sharing Databases with Subaccounts [page 763].
Note
You can unbind your database from the application using the unbind-db [page 1988] command.
Related Information
The database user you specify while creating a binding determines which SAP HANA database schema an
application can access. Typically, the application uses the database user’s default schema, but since a database
user may have access to more than one schema, it could potentially use any of these schemas.
Recommendation
We recommend that you work with a database user’s default schema. The name of the default schema is the
same as the database user, and is created automatically when you create the user. If you require multiple
schemas, simply create separate appropriately named database users and then bind each of their default
schemas to the application using named data sources.
Caution
Using non-default schemas is error prone and requires greater care with the application code.
The following example shows one scenario for which you might want to use non-default schemas.
An application can access a non-default schema in its program code by adding the schema name as a prefix to the
table name as follows: <schema name>.<table name>
When programming with JPA, add the schema prefix to the table annotation in the JPA entity class.
Example
Table T_PERSON in the schema COMPANYDATA:
@Entity
@Table(name = "COMPANYDATA.T_PERSON")
For JDBC, all occurrences of the table names in SQL statements require the schema prefix.
Example
Table T_PERSONS in the schema COMPANYDATA:
INSERT "INSERT INTO COMPANYDATA.T_PERSONS (ID, FIRSTNAME, LASTNAME) VALUES (?, ?, ?)"
CREATE "CREATE TABLE COMPANYDATA.T_PERSONS (ID VARCHAR(255) PRIMARY KEY NOT NULL, FIRSTNAME VARCHAR(255), LASTNAME VARCHAR(255))"
Note
When you retrieve database metadata to check whether a table already exists, you might also need to specify
the schema parameter; in particular, if you have multiple schemas containing tables with identical names:
Example
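A JDBC sketch of such a check might look as follows. This is a fragment, not a complete program; it assumes an open java.sql.Connection named conn, and reuses the schema and table names from the example above:

```java
// Sketch: check for table T_PERSONS in schema COMPANYDATA specifically,
// so identically named tables in other schemas are not matched.
DatabaseMetaData metaData = conn.getMetaData();
try (ResultSet tables = metaData.getTables(null, "COMPANYDATA", "T_PERSONS", null)) {
    boolean exists = tables.next();
}
```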
Related Information
Disable the Password Lifetime Handling for a New Technical SAP HANA Database User [page 755]
Bind Databases Using the Cockpit [page 757]
Bind Databases Using the Console Client [page 758]
Programming with JPA [page 821]
Programming with Plain JDBC [page 868]
Try to solve issues by restarting the corresponding SAP HANA tenant database in the Neo environment.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the SAP HANA tenant database you
want to restart. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. Select the tenant database to restart.
4. On the Overview page of the tenant database, choose Stop and confirm the dialog.
5. Once the database is stopped, choose Start and confirm the dialog.
Restore your database in the Neo environment from a specific point of time by creating a service request in the
SAP Cloud Platform cockpit.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. In the navigation area, choose SAP HANA / SAP ASE Service Requests .
3. Choose New Service Request, then do the following:
a. Choose Database.
b. From the dropdown box, select the Database you want to restore.
c. Use the Restore To field to specify a specific point in time to which you want to restore the database.
Caution
You will lose all data stored between the time you specify in the New Service Request screen and the
time at which you create the service request. For example, if you create a restore request at 3pm to
restore your database to 9am on the same day, all data stored between 9am and 3pm will be lost.
d. Choose Save.
A template for opening an incident in the SAP Support Portal is displayed.
e. Select the text in the template between the two dashed lines and copy it to the clipboard.
Tip
Navigate to SAP HANA / SAP ASE Service Requests , then choose the Display icon to find the
template for opening a ticket at any time.
f. Choose Close.
4. Log on to the SAP Support Portal with your S-user ID and password and create a new incident by choosing
Report an Incident.
Note
You need the authorization to create an incident. Contact a user administrator in your company to
request this authorization.
Note
You can find detailed step-by-step instructions for creating an incident in the Report an Incident Help.
6. Once you have reached the Enter Incident view, enter the following data:
a. In the Classification panel, enter the component for persistency.
Note
For a complete list of SAP Cloud Platform components, see SAP Note 1888290.
Results
You have created a request for restoring a database and sent the request to SAP Support for processing. As soon
as your database is restored, the state of your request will be set to Finished in the cockpit and the incident you
created will be set to Completed. You can see the state of your request in the cockpit by navigating to SAP
HANA / SAP ASE Service Requests . The state is displayed next to your service request. In the meantime, SAP
Support might contact you in case they need further clarification. You will be notified by e-mail if you need to take
any further action.
Note
Your database is available for use for all users immediately after the restore has been successful.
Note
To cancel your restore request, go to SAP HANA / SAP ASE Service Requests, choose your restore request
and select the Delete icon. Note that your request can only be canceled if it has the state New.
Related Information
Share productive databases that are provisioned in a subaccount with other subaccounts in the Neo environment.
When you provision a database in an SAP Cloud Platform subaccount, only the current subaccount has access to
it. You can change this by giving other subaccounts controlled access to productive databases that are owned by a
different subaccount. You can also allow other subaccounts to bind their Java applications to a database in a
different subaccount.
Sharing Databases in the Same Enterprise Account [page 764]: This method allows you to give a subaccount permission to use a database that is owned by a different subaccount. You can add and revoke this permission using the cockpit or the console client. See Managing Access Permissions [page 766].
Restriction
The subaccount providing the permission and the subaccount receiving the permission must be part of the same global account. For more information on global accounts, see Accounts [page 10].
The subaccount receiving the permission can bind its applications to the database in the different subaccount, open a tunnel to it, or both.
Sharing Databases with Other Subaccounts [page 771]: This method allows you to give any subaccount permission to use a database that is owned by a different subaccount. You can add and revoke this permission using the console client. See Managing Access to Databases for Other Subaccounts [page 773].
The subaccount receiving the permission uses an access token to bind a Java application or to open a tunnel to a database in the other subaccount.
You can share productive databases that have been provisioned in a subaccount with other subaccounts of your
enterprise account in the Neo environment.
Note
The following explanations apply only to subaccounts that belong to the same enterprise account. To share a
database with a subaccount that is not part of your enterprise account, see Sharing Databases with Other
Subaccounts [page 771].
You can give subaccounts controlled access to a database owned by another subaccount by adding a permission
for the subaccounts requesting access. Depending on the type of permission you provide, the owners of the
subaccounts receiving the permission can bind their applications to the database or open a tunnel to the database
[page 810] that is owned by another subaccount.
To give access permissions to other subaccounts in your enterprise account, log in to the subaccount in which the
database you want to share is provisioned. Then use the SAP Cloud Platform cockpit or the console client to give
permissions to other subaccounts. Subsequently, owners of the subaccounts receiving the permission can see the
database listed in the cockpit and in the console client, and use it in accordance with the permissions given.
The table below lists the tasks and the person responsible for sharing databases with other subaccounts in the
same enterprise account:
● Add New Access Permissions [page 766] (grant-db-access [page 1882]): performed by an administrator in the subaccount that owns the database.
● Revoke Access Permissions [page 769] (revoke-db-access [page 1956]): performed by an administrator in the subaccount that owns the database.
● Bind Applications to Databases in the Same Enterprise Account [page 770] (bind-db [page 1805]): performed by a member of the subaccount that has requested permission to use a database owned by another subaccount.
● Open Database Tunnels [page 810] (open-db-tunnel [page 1942]): performed by a member of the subaccount that has requested permission to use a database owned by another subaccount.
Subaccounts A, B, and C are all part of the same enterprise account. An SAP HANA or SAP ASE database is
provisioned in all three subaccounts. Three Java applications have been deployed in subaccount C. Java
application 3 is bound to the database in subaccount C. To bind Java application 1 to the database in subaccount
A, a member of subaccount A provides subaccount C with a permission for data source bindings. In addition, a
member of subaccount B gives subaccount C the permission to open a tunnel to the database in subaccount B.
After the permissions have been given, members of subaccount C can see the databases owned by subaccount A
and B in the console client and in the cockpit. As shown in the picture below, subaccount C binds two of its Java
applications to the database in subaccount A. The permission for data source bindings provided to subaccount C
by subaccount A is not restricted to a single application. All members of subaccount C can bind multiple Java
applications to the database in subaccount A. Due to the permission for opening database tunnels provided to
subaccount C by subaccount B, all members of subaccount C can also open a tunnel to the database in
subaccount B.
Related Information
As a subaccount member with the administrator role, you can add, change, and revoke access permissions for
subaccounts in your enterprise account by using the cockpit or the console client in the Neo environment.
Caution
To share a database with a subaccount that is not part of your enterprise account, follow the steps in Sharing
Databases with Other Subaccounts [page 771].
Related Information
You use the cockpit or the console client in the Neo environment to create a new access permission, allowing a
subaccount to use a database that is owned by another subaccount.
Prerequisites
● Provision the database you want to share in a subaccount that belongs to an enterprise account. See Creating
Databases [page 749].
● You are assigned the administrator role in that subaccount.
● (For the console command only) Set up the console client. See Set Up the Console Client [page 1135] and
Using the Console Client [page 1792].
As a subaccount member with the Administrator role, you use the cockpit or the console client to give
subaccounts permission to use a productive SAP HANA or SAP ASE database that is owned by another
subaccount.
Restriction
The subaccount providing the permission to use the database and the subaccount receiving the permission
must be part of the same enterprise account.
Procedure
Using the Cockpit
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the database you would like to share. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
3. Choose the required database.
4. In the navigation area, choose Permissions.
5. Choose New Permission, then do the following:
1. Select the subaccount to receive permission to use the database.
2. Choose the permission type by selecting TUNNEL, BINDING, or both.
3. Choose Save.
Using the Console Client
1. Open the command window in the <SDK>/tools folder and enter the following command:
Note
For an example, see grant-db-access [page 1882].
2. (Optional) Check that permission has been given successfully by entering the following
command:
Note
For an example, see list-db-access-permissions [page 1914].
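Put together, the two console steps above might look like the following sketch, run from the <SDK>/tools folder. The parameter names shown here are assumptions for illustration only; the authoritative syntax is given in grant-db-access [page 1882] and list-db-access-permissions [page 1914].

```shell
# Sketch only -- parameter names are assumptions; check the command references.
# Give subaccount <consumer_subaccount> BINDING and TUNNEL permissions for
# database <db_id> owned by the current subaccount:
neo grant-db-access --host <landscape_host> --account <provider_subaccount> \
    --user <user> --id <db_id> --permissions BINDING,TUNNEL \
    --to-account <consumer_subaccount>

# (Optional) Verify that the permission has been created:
neo list-db-access-permissions --host <landscape_host> \
    --account <provider_subaccount> --user <user> --id <db_id>
```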
Results
You have given a subaccount permission to use a database that is owned by another subaccount. In the
subaccount that owns the database, the Shared icon appears in the Databases & Schemas list in the cockpit next
to all databases that can be used by other subaccounts.
Related Information
Use the cockpit to change the type of an existing access permission in the Neo environment.
Prerequisites
● You are assigned the administrator role in the subaccount that owns the database.
● Give a subaccount permission to use a database that is owned by another subaccount. The subaccount
providing the permission and the subaccount receiving the permission must be part of the same enterprise
account. See Add New Access Permissions [page 766].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the database for which you would
like to change permissions. For more information, see Navigate to Global Accounts and Subaccounts [page
964].
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
Related Information
You use the cockpit or the console client to revoke an access permission for another subaccount in the Neo
environment.
Prerequisites
● You are assigned the administrator role in the subaccount that owns the database.
● Give a subaccount permission to use a database that is owned by another subaccount. The subaccount
providing the permission and the subaccount receiving the permission must be part of the same enterprise
account. See Add New Access Permissions [page 766].
● (For the console command only) Set up the console client. See Set Up the Console Client [page 1135] and
Using the Console Client [page 1792].
Procedure
Using the Cockpit
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the database for which you would like to revoke permissions. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
3. Choose the required database.
4. In the navigation area, choose Permissions.
5. Choose the Delete icon next to the subaccount that you want to revoke the permission for.
Caution
Choosing the Delete icon revokes all access permissions for this subaccount. To
change the type of permission for a subaccount, from Tunnel to Binding for example,
see Change Access Permission Types [page 768].
6. Choose OK.
Using the Console Client
Open the command window in the <SDK>/tools folder and enter the following command:
Note
For an example, see revoke-db-access [page 1956].
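As a sketch, the revoke command might look like this; the parameter names are assumptions, and the exact syntax is given in revoke-db-access [page 1956].

```shell
# Sketch only -- parameter names are assumptions; check the command reference.
neo revoke-db-access --host <landscape_host> --account <provider_subaccount> \
    --user <user> --id <db_id> --to-account <consumer_subaccount>
```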
Related Information
You use the cockpit or the console client in the Neo environment to bind a Java application that you deployed in
one subaccount to a productive database that is owned by another subaccount.
Prerequisites
● Deploy a Java application to SAP Cloud Platform. See Deploying and Updating Applications [page 1175].
● (For the console commands only) Set up the console client. See Set Up the Console Client [page 1135] and
Using the Console Client [page 1792].
● The subaccount that owns the database and the subaccount in which the Java application has been deployed
must be part of the same enterprise account. The subaccount that owns the database has given the
subaccount in which the Java application has been deployed permission to bind the application to the
database. See Managing Access Permissions [page 766].
Using the cockpit
In the SAP Cloud Platform cockpit, navigate to the subaccount in which the application you would like to bind has been deployed. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
Follow the steps described in Binding SAP HANA Databases to Java Applications [page 754] or Bind Applications to Databases in the Same Enterprise Account [page 770]. When prompted to select the database that you want to bind the application to, select the database that is owned by another subaccount.
Note
To unbind the database from an application, simply delete the binding. The application maintains access to the database until restarted.
Using the console client
Open the command window in the <SDK>/tools folder and enter the command for binding an application to a database in another subaccount (same enterprise account) described in bind-db [page 1805].
Note
To unbind the database from an application, open the command window in the
<SDK>/tools folder and enter the following command:
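The unbind command referred to above might look like the following sketch. The parameter names are assumptions for illustration; check the unbind-db entry in the console client reference for the exact syntax.

```shell
# Sketch only -- parameter names are assumptions; check the unbind-db reference.
neo unbind-db --host <landscape_host> --account <consumer_subaccount> \
    --user <user> --application <application_name> --id <db_id>
```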
You can share a productive database that is owned by a subaccount with other subaccounts in the Neo environment.
Note
We recommend that you use this method to share your database with subaccounts that do not belong to your
global account. To share your database in the same global account, see Sharing Databases in the Same
Enterprise Account [page 764].
You can allow a subaccount to access a database that is owned by another subaccount by generating an access
token with the console client. A member of the subaccount requesting access to the database can use the access
token to bind a Java application [page 780] and/or to open a tunnel [page 781] to the database in question.
The access token uniquely identifies the access permission based on the following:
● It always applies to one database (and one application, if the permission allows for a data source binding) and is not transferable.
● It has an unlimited validity period.
● (For application bindings only) It can be used for as long as application bindings exist or until the permission is revoked. You can revoke permissions at any time, whether or not the target application has already been bound to the database.
The table below lists the tasks and the person responsible for sharing databases with other subaccounts:
● Give Applications in Other Subaccounts Permission to Access a Database [page 774] (grant-schema-access [page 1884]): performed by an administrator in the subaccount that owns the database.
● Revoke Database Access Permissions for Applications in Other Subaccounts [page 775] (revoke-schema-access [page 1958]): performed by an administrator in the subaccount that owns the database.
● Give Other Subaccounts Permission to Open a Database Tunnel [page 777] (grant-db-tunnel-access [page 1883]): performed by an administrator in the subaccount that owns the database.
● Revoke Tunnel Access to Databases for Other Subaccounts [page 778] (revoke-db-tunnel-access [page 1957]): performed by an administrator in the subaccount that owns the database.
● Bind Applications to Databases in Other Subaccounts [page 780] (bind-db [page 1805] for SAP HANA tenant databases and SAP ASE databases; bind-hana-dbms [page 1808] for productive SAP HANA database systems): performed by a member of the subaccount that has requested permission to use a database owned by another subaccount.
● Open Tunnels to Databases in Other Subaccounts [page 781] (open-db-tunnel [page 1942]): performed by a member of the subaccount that has requested permission to use a database owned by another subaccount.
Subaccounts A, B, and C are not part of the same global account. An SAP HANA or SAP ASE database is
provisioned in all three subaccounts. Three Java applications have been deployed in subaccount C. Java
application 3 is bound to the database in subaccount C. To bind Java application 1 to the database in subaccount
A, a member of subaccount C requests access permission to the database in subaccount A for Java application 1.
As shown in the picture below, the access token provided by subaccount A is used by a member of subaccount C
to bind Java application 1 to the database in subaccount A. The token only applies to Java application 1, so it would
not be possible to bind other Java applications in subaccount C to the database in subaccount A. The access token
provided by subaccount B is used by a member of subaccount C to open a tunnel to the database in subaccount B.
All members of subaccount C can open tunnels to the database in subaccount B if they are in possession of the
access token.
Related Information
As a subaccount member with the administrator role, you can manage access to databases for other subaccounts
in the Neo environment.
Caution
To share a database with a subaccount that is part of your global account, we recommend you follow the steps
in Managing Access Permissions [page 766].
Related Information
You can give a Java application in another subaccount permission to access a productive database in your subaccount in the
Neo environment.
Prerequisites
● The database you would like to share has been provisioned in a subaccount. See Creating Databases [page
749].
● You have the administrator role in that subaccount.
● You have set up the console client. See Set Up the Console Client [page 1135] and Using the Console Client
[page 1792].
Context
To give a Java application permission to access a database in your subaccount, you generate an access token
using the grant-schema-access command. A member of the subaccount in which the application has been
deployed uses the token to create a data source binding.
The access token has the following properties:
● It always applies to one database and one application, and is not transferable.
● It has an unlimited validity period.
● You can revoke permissions at any time, whether or not the target application has already been bound to the database.
● It can be used for as long as application bindings exist or until the permission is revoked.
Procedure
Open the command window in the <SDK>/tools folder and enter the following command:
A successfully generated access token (an alphanumeric string) appears on the screen.
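The grant-schema-access call might look like the following sketch. The parameter names are assumptions for illustration only; the exact syntax is in grant-schema-access [page 1884].

```shell
# Sketch only -- parameter names are assumptions; check the command reference.
# Allow application <application_name> in subaccount <consumer_subaccount>
# to bind to database <db_id> in the current subaccount:
neo grant-schema-access --host <landscape_host> --account <provider_subaccount> \
    --user <user> --id <db_id> \
    --application <consumer_subaccount>:<application_name>
# On success, an access token (an alphanumeric string) is printed.
```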
Next Steps
To give a Java application in another subaccount access to your database, create a database user and a password
and provide it, together with the access token, to a member of the subaccount receiving the permission.
Related Information
You can revoke the permission to access a productive database in your subaccount for applications in other subaccounts in
the Neo environment.
Prerequisites
● Give an application in another subaccount permission to use a database in your subaccount. See Give
Applications in Other Subaccounts Permission to Access a Database [page 774].
● You are assigned the administrator role in that subaccount.
● Set up the console client. See Set Up the Console Client [page 1135] and Using the Console Client [page 1792].
Context
Note
You can revoke the permission to use a database in your subaccount for applications in other subaccounts at
any time, whether or not the applications have already been bound to the database.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all
permissions for the specified database:
Example output:
2. To revoke the permission, enter the following command, using the access token obtained in the previous step:
Caution
We strongly recommend that you delete the database user and password you provided to the other
subaccount requesting the access to your database.
If the access token has already been used to bind the database, revoking the access permission also unbinds
the database. If the application is running, it continues to use the database until it is restarted.
3. Optional: Check that the access token has been revoked by listing all permissions again as described in step 1
or using the display-schema-info command.
Related Information
You can allow other subaccounts to open a tunnel to a productive database in your subaccount in the Neo
environment.
Prerequisites
● Provision the database you want to share in a subaccount. See Creating Databases [page 749].
● You are assigned the administrator role in that subaccount.
● Set up the console client. See Set Up the Console Client [page 1135] and Using the Console Client [page 1792].
Context
To give another subaccount permission to open a tunnel to your database, create a database user for that
subaccount and provide that user's credentials, together with an access token, to a member of the subaccount
that requested permission to open a database tunnel. This allows this subaccount member to open a database
tunnel to the database in your subaccount. All members of the subaccount receiving the permission can access
the database in your subaccount.
Provide the following information to a member of the subaccount that requested permission to open a database
tunnel:
● To check if the database access has been given successfully, you can view a list of all currently active database access permissions to other subaccounts, which exist for a specified subaccount, by using the list-db-tunnel-access-grants command.
● The token is simply a random string, for example, 31t0dpim6rtxa00wx5483vqe7in8i3c1phv759w9oqrutf638l, which remains valid until the provider subaccount revokes it. You can revoke the database access permission at any point in time using the revoke-db-tunnel-access command. See Revoke Tunnel Access to Databases for Other Subaccounts [page 778].
Note
Only the provider subaccount can revoke the access permission. When you revoke the access permission,
we highly recommend that you disable the database user and password created for the access permission
on the database itself and that you close any open sessions on the SAP HANA database.
If a subaccount member has already used the access token and there are open database tunnels, they remain
open until they are closed, even though the user has been disabled.
We highly recommend that you create a dedicated database user on the database for each access permission.
If the permission has been given successfully, you see the access token. As a database administrator, create a
database user with the needed permissions. Provide the database user and password together with the access
token to a member of the subaccount that has requested permission to open a tunnel to your database.
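A sketch of the grant command, with assumed parameter names; the exact syntax is in grant-db-tunnel-access [page 1883].

```shell
# Sketch only -- parameter names are assumptions; check the command reference.
# Allow subaccount <consumer_subaccount> to open a tunnel to database <db_id>:
neo grant-db-tunnel-access --host <landscape_host> \
    --account <provider_subaccount> --user <user> --id <db_id> \
    --to-account <consumer_subaccount>
# On success, the access token is printed; pass it on together with the
# database user and password you created for the consumer subaccount.
```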
Related Information
You can revoke the permission to open database tunnels to a productive SAP HANA database in your subaccount
for other subaccounts in the Neo environment.
Prerequisites
● Give another subaccount permission to use a database in your subaccount. See Give Other Subaccounts
Permission to Open a Database Tunnel [page 777].
● You are assigned the administrator role in that subaccount.
● Set up the console client. See Set Up the Console Client [page 1135] and Using the Console Client [page 1792].
Note
You can revoke the permission to use a database in your subaccount for other subaccounts at any time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all
permissions for the specified database:
Example output:
2. To revoke the permission, enter the following command and copy across the access token obtained in the
previous step:
Note
Only the provider subaccount can revoke the access permission. When you revoke the access permission,
we highly recommend that you disable the database user and password created for the access permission
on the database itself and that you close any open sessions on the SAP HANA database.
You have revoked the permission to open tunnels to a database in your subaccount for other subaccounts.
3. Optional: Check that the access token has been revoked by listing all permissions again as described in step 1.
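The sequence above might look like the following sketch. The parameter names are assumptions; see revoke-db-tunnel-access [page 1957] for the exact syntax.

```shell
# Sketch only -- parameter names are assumptions; check the command references.
# 1. List tunnel access permissions for the database to obtain the access token:
neo list-db-tunnel-access-grants --host <landscape_host> \
    --account <provider_subaccount> --user <user> --id <db_id>

# 2. Revoke the permission identified by that access token:
neo revoke-db-tunnel-access --host <landscape_host> \
    --account <provider_subaccount> --user <user> --access-token <access_token>
```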
Related Information
To bind applications to productive databases in other subaccounts, you use a remote access token that indicates that access
to the database has been permitted.
Prerequisites
● You have set up the console client. For more information, see Set Up the Console Client [page 1135].
● You have received an access token from the database owner.
Context
When you bind Java applications to the specified database in other subaccounts, you provide a database user and
password and an access token that you have received from the database owner. You can use this token for as long
as application bindings exist, or until the permission is revoked.
Note
The token is not transferable to other applications in your subaccount. The owner subaccount can revoke
access to the database at any time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command:
You have bound your application to the database in the other subaccount.
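The bind command entered above might look like the following sketch. Parameter names are assumptions; the exact syntax, including how the database credentials are supplied, is in bind-db [page 1805].

```shell
# Sketch only -- parameter names are assumptions; check the bind-db reference.
# Bind your application to the remote database using the access token you
# received from the database owner (instead of a database ID):
neo bind-db --host <landscape_host> --account <consumer_subaccount> \
    --user <user> --application <application_name> \
    --access-token <access_token> --db-user <database_user>
# Depending on the SDK version, the database user's password may be
# prompted for interactively.
```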
Related Information
To open a tunnel to a database that is owned by another subaccount, you request permission from that
subaccount. If your request is approved, the subaccount that owns the database in question provides you with an
access token, a database user, and password. This allows you to open a tunnel from your subaccount to the
database in the other subaccount.
Prerequisites
● Set up the console client. For more information, see Set Up the Console Client [page 1135].
● The subaccount that owns the database has given you an access token and a database user and password.
See Give Other Subaccounts Permission to Open a Database Tunnel [page 812].
Context
Once you have received the token and the database credentials, you can open the database tunnel. Use the access
token parameter for the open-db-tunnel command, not the database ID parameter. Then you can use a
database tool of your choice to connect to the database in another subaccount. Log on to the database with the
user and password that you received from the provider. You can then work on the remote database instance.
Note
All members of the consumer subaccount have permission to access the database in the provider subaccount.
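The tunnel command might look like the following sketch. As noted above, the access token is passed instead of the database ID; the parameter names are assumptions, and the exact syntax is in open-db-tunnel [page 1942].

```shell
# Sketch only -- parameter names are assumptions; check the command reference.
neo open-db-tunnel --host <landscape_host> --account <consumer_subaccount> \
    --user <user> --access-token <access_token>
# The command prints a local host and port; connect your database tool to it
# and log on with the database user and password received from the provider.
```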
Next Steps
Once you have opened the tunnel, you can connect to the database. See:
Related Information
Delete your SAP HANA tenant database in the Neo environment using the SAP Cloud Platform cockpit.
Procedure
To delete your SAP HANA tenant database, perform the following steps:
Remember
To delete an SAP HANA tenant database, first delete all existing bindings to your application.
Delete all bindings to your SAP HANA tenant database using the SAP Cloud Platform cockpit.
Prerequisites
● The SAP HANA tenant database you want to unbind from must be in status Started.
● (Optional) Access to the subaccounts you shared the SAP HANA tenant database with.
Note
Deleting all bindings might include bindings in other subaccounts than the subaccount that owns the
database.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the applications you bound your
tenant database to. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. Choose the application you want to unbind the SAP HANA tenant database from.
3. On the overview page of the application, choose Configuration Data Source Bindings .
4. Choose (Delete) for the binding you want to delete and confirm the deletion.
Next Steps
You can delete your SAP HANA tenant database using the SAP Cloud Platform cockpit.
Prerequisites
You must have the Administrator role in the subaccount that owns the SAP HANA tenant database you want to
delete.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the subaccount that owns the SAP HANA tenant database you
want to delete. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. Select the tenant database to delete.
4. On the Overview page of the tenant database, choose Delete and confirm the deletion.
Define the number of open database connections to an application in the Neo environment.
Prerequisites
Context
A connection pool maintains a specific number of open database connections to an application in the main
memory. The default size of the database connection pool is eight, but you can change this number while
deploying or updating an application in the SAP Cloud Platform cockpit.
Deploying your application
1. Follow the steps as described in Deploy on the Cloud with the Cockpit [page 1199], but do not choose the Deploy button yet.
2. In the dialog box, in the JVM Arguments field, enter the following, depending on your data
source name:
○ If you plan to use the default data source name for your data source binding, enter
the following by replacing the <value> parameter with the connection pool size you
want to set:
-Dcom.sap.persistence.jdbc.connection.pool.max_active=<value>
○ If you do not plan to use the default data source name for your data source binding,
enter the following by replacing the <data_source_name> parameter with the
data source name you plan to use, and the <value> parameter with the connection
pool size you want to set:
-Dcom.sap.persistence.jdbc.connection.pool.<data_source_name>.max_active=<value>
Example
If the name of your data source is jdbc/myds and you want to set the connection
pool size to 20, enter the following:
-Dcom.sap.persistence.jdbc.connection.pool.jdbc.myds.max_active=20
Caution
Replace forward slashes with periods when entering the data source name in the JVM
Arguments field, as shown in the example above.
Tip
If you plan to create multiple data source bindings using non-default data source
names, you can configure the connection pool size for each of them at the same time.
Simply insert one entry per data source in the JVM Arguments field as described
above, and separate the entries with a space.
3. Select Deploy.
Updating your application
1. Follow the steps as described in the Results section of Deploy on the Cloud with the Cockpit [page 1199].
2. In the dialog box, in the JVM Arguments field, enter the following, depending on your data
source name:
○ If you're using the default data source name for your data source binding, enter the
following by replacing the <value> parameter with the connection pool size you
want to set:
-Dcom.sap.persistence.jdbc.connection.pool.max_active=<value>
○ If you're not using the default data source name for your data source binding, enter
the following by replacing the <data_source_name> parameter with the data
source name you plan to use and the <value> parameter with the connection pool
size you want to set:
-Dcom.sap.persistence.jdbc.connection.pool.<data_source_name>.max_active=<value>
Example
If the name of your data source is jdbc/myds and you want to set the connection
pool size to 20, enter the following:
-Dcom.sap.persistence.jdbc.connection.pool.jdbc.myds.max_active=20
Caution
Replace forward slashes with periods when entering the data source name in the JVM
Arguments field, as shown in the example above.
Tip
If you've created multiple data source bindings using non-default data source names,
you can configure the connection pool size for each of them at the same time. Simply
insert one entry per data source in the JVM Arguments field as described above and
separate the entries with a space.
Note
You might need to delete the data source binding and create a new one.
Next Steps
For more information on monitoring JMX attributes, see JMX Attributes for the Database Connection Pool [page
787].
JMX attributes let you monitor the database connection pool for a started Java application in the Neo
environment.
Note
To monitor the JMX attributes for the database connection pool using the SAP Cloud Platform cockpit, follow
the procedure in . Select the com.sap.cloud.jmx Persistence-ConnectionPools MBean, then select the
data source.
Timeouts05h: Number of timeouts received during the last 5 hours when trying to request a connection from the connection pool.
Recommendation
To determine if your current database connection pool size is suitable for your scenario, monitor the
AverageConnectionWaitTimeMillis, the MinConnectionWaitTimeMillis, and the
MaxConnectionWaitTimeMillis attributes, all of which provide connection pool performance statistics. If the
values of these attributes don't conform to the accepted values for application performance, you might need to
increase the size of the database connection pool. See Configure the Database Connection Pool Size [page
784].
Tip
You can configure JMX checks to monitor the performance of your database connection pool and to send you
incident alerts. For example, if you configure a JMX check to monitor the AverageConnectionWaitTimeMillis
attribute, the system sends an alert if the value of the attribute exceeds its limit. See .
Access remote database instances through a database tunnel in the Neo environment, which provides a secure
connection from your local machine and bypasses any firewalls.
To learn how to access your database remotely, see Accessing Databases Remotely [page 810].
Related Information
Analyze error warnings that are related to data backups of tenant databases or the system database in the Neo
environment.
Context
If the backup ran into problems, backup-related error messages are shown in the Monitoring tab of the SAP Cloud
Platform cockpit. For more information, see View Monitoring Metrics of a Database System [page 1251]. This can
be related to memory issues in the SAP HANA database system.
Procedure
To find out why the backup failed, analyze the alert to determine which tenant database or tenant databases, or
whether the system database is affected.
1. If only one or a few tenant databases are affected, try the following:
1. Check the memory limits and the memory usage of the affected tenant databases using the Memory
Usage tab of the SAP Cloud Platform cockpit. If there are memory limits set on the tenant databases,
consider removing or increasing the limits. For more information, see View Memory Usage for an SAP
HANA Database System [page 729].
2. Connect to the tenant database and check the memory consumption of the tenant databases using SAP
HANA studio or SAP HANA cockpit. For more information, see SAP Note 1999997 .
3. If you cannot connect to the tenant database, restart it, which frees memory and may therefore resolve
memory issues. For more information, see Restart SAP HANA Tenant Databases [page 761].
2. If almost all tenant databases or the system database are affected, try the following:
1. If you cannot connect to any of the tenant databases, restart the database system. For more information,
see Restart Database Systems [page 724].
Tip
If you frequently run into memory-related backup problems, try to find out where they come from and why your
databases consume too much memory. These actions might resolve your issues:
● If there are any tenant databases you don't currently need, stop these databases to free resources. Restart
them only when you need them.
● Delete any unneeded tenant databases.
● If possible, remove data from your databases.
● If possible, move data to another system.
● Resize the database system.
Note
Even after you've fixed the memory issue, you may still see the alert in the cockpit until the next
daily backup has been successfully created.
Related Information
Use console client commands to manage your databases in the Neo environment.
Related Information
An overview of the different tasks you can perform to administer database schemas in the Neo environment.
Restriction
You cannot create database schemas on SAP HANA single-container or tenant database systems. You can only
create schemas on shared SAP HANA database systems, which are displayed as HANA (<shared>) in the
SAP Cloud Platform cockpit. The usage of shared databases is restricted to partners of SAP that purchased the
innovation pack for SAP Cloud Platform. For more information, see Innovation Pack for SAP Cloud Platform .
Learn about database schemas in the Neo environment on SAP Cloud Platform.
Each application deployed on SAP Cloud Platform can be assigned one or more database schemas. A schema is
associated with a particular subaccount and is available solely to applications within this subaccount. A schema
can be bound to multiple applications.
Creating Schemas
You can create schemas explicitly, assigning to them any name and certain properties, such as a specific database
type. The schema is independent of any application and has to be explicitly bound.
A schema ID is unique within a subaccount. When a schema is created automatically, an ID is also created based
on a combination of the subaccount and application names and the suffix web.
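For reference, creating a schema explicitly from the console client might look like the following sketch; the flag names are assumptions based on the client's usual conventions, so check the create-schema command reference for the exact syntax:

```shell
# Sketch: create a schema named myhana of type hana in subaccount mysubaccount
neo create-schema --host hana.ondemand.com --account mysubaccount --user test \
    --id myhana --dbtype hana
```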
Binding Schemas
Schemas can be bound to applications based on an explicitly named data source or using the default data source.
You can share a schema between applications by binding the same schema to more than one application. The
following items apply when binding schemas to applications:
● An application’s bindings are based on either named data sources or the default data source. An application
cannot use a combination of the two types of bindings.
● When named data sources are used, binding names must be unique per application.
Applications can also use schemas belonging to other subaccounts if they are explicitly granted access
permission.
Unbinding Schemas
Unbind a schema from an application if the application no longer needs it. The schema can still be used by other
applications to which it is still bound. Schemas can be deleted only once they have been unbound from all
applications.
If an application is undeployed but was not yet unbound from its schema, the schema is still listed as bound to the
application and remains bound if the application is redeployed.
Deleting Schemas
You should drop a schema when it is no longer required or to redeploy an application from scratch.
Before deleting a schema, explicitly remove any bindings that still exist between the schema and an application.
You can also remove all bindings by enforcing the deletion of the schema.
When using explicitly named data sources to create bindings between schemas and applications, make sure that
the data source names are the same as the JNDI names used in the applications.
Data sources are defined as resources in the web.xml file, or as JTA or non-JTA data sources in the
persistence.xml file in the normal manner. Data sources can be referenced in the application code using a
context.lookup or annotations (@Resource, @PersistenceUnit, @PersistenceContext).
When using explicitly named data sources in the Java EE 6 Web Profile runtime environment, you must create two
additional bindings:
● A binding between the application and schema using a data source named jdbc/
defaultManagedDataSource
● A binding between the application and schema using a data source named jdbc/
defaultUnmanagedDataSource
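The two additional bindings above might be created along these lines (a sketch; parameter names may differ from the actual bind-schema syntax):

```shell
# Sketch: bind the Java EE 6 Web Profile default data sources to the schema
neo bind-schema --host hana.ondemand.com --account mysubaccount --user test \
    --application myapp --id myhana --data-source jdbc/defaultManagedDataSource
neo bind-schema --host hana.ondemand.com --account mysubaccount --user test \
    --application myapp --id myhana --data-source jdbc/defaultUnmanagedDataSource
```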
Related Information
Context
Schemas have properties, such as a database type and database version, and are identified by an ID that is unique
within the subaccount. The schema is independent of any application.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
All schemas available in the selected subaccount are listed with their ID, type, version, and related database
system.
Note
To display a schema’s details, for example, its state and the number of existing bindings, select the relevant
schema in the list and click its name.
3. To create a new schema, choose New on the Databases & Schemas page.
4. Enter the following schema details:
○ Schema ID: A schema ID is freely definable but must start with a letter and contain only uppercase and
lowercase letters ('a' - 'z', 'A' - 'Z'), numbers ('0' - '9'), and the special characters '.' and '-'. The actual
schema ID assigned in the database isn't the same as the ID you enter here.
○ Database System: HANA (<shared>)
To create schemas on your productive HANA instances, you have to use the HANA-specific tools.
5. Save your entries.
The schema overview shows details about its state, quota used, and the number of existing bindings. You can
perform further actions for the newly created schema, for example, delete it.
Note
To delete a schema, first delete all existing bindings to the schema. The Delete button is only enabled if a
schema has no bindings.
Related Information
Context
Bindings are identified by a data source name, which must be unique per application. You can bind the same
schema to multiple applications, and the same application to multiple schemas.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. Choose one of the following options:
By schema:
1. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
2. Select the schema for which you want to create a new binding.
The overview shows the schema details, for example, its state and the number of existing bindings, and provides access to further actions.
3. In the navigation area, choose Data Source Bindings.
4. Choose New Binding.
5. In the New Binding dialog box, enter a data source name and select the name of the application to which the schema should be bound. The application must be deployed in the selected subaccount.
6. Save your entries.
By application:
1. In the navigation area, choose Applications Java Applications and select the relevant application.
Next Steps
An application’s state influences when a newly bound schema becomes effective. If an application is already
running (Started state), it continues to use the old schema until it is restarted. A restart is also required if
additional schemas have been bound to the application.
Note
To unbind a schema from an application, simply delete the binding. The application retains access to the
schema until it is restarted.
Related Information
Change the database property, which determines the database in the Neo environment on which an application
runs.
Context
The default database system is used when schemas are created automatically. This occurs if an application is
started but has not yet been assigned a schema.
● A new application that has not been explicitly assigned a schema uses the default database system in effect
when automatic schema creation is triggered, that is, when the application is started for the first time.
● When you deploy an application from the Eclipse IDE, the application is deployed and started in one step.
● An application that is already using a default database system is not affected by any changes. Its schema
remains associated with the default database system selected when the application was created.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the list of subaccounts available to you. For more information, see
Navigate to Global Accounts and Subaccounts [page 964].
2. Choose the (edit) icon on the tile for the subaccount you want to change.
3. Select the new default database system, and save your changes.
Related Information
Perform the most typical use case scenarios in the Neo environment either in the cockpit or by using the console
client.
The schema management scenarios outline the steps involved for the most typical use cases in the Neo
environment. The scenarios use the console client together with the schema commands provided by the SAP
HANA service. Alternatively, you can perform the scenarios from the cockpit.
For the sake of simplicity, the example scenarios use JDBC and web.xml to illustrate the definition of data
sources. Depending on your application and runtime environment, you can use other options, such as the
persistence.xml file and annotations.
Related Information
Prerequisites
Set up the console client. For more information, see Set Up the Console Client [page 1135].
Context
In this scenario, an application has been deployed with the default database type assigned to the subaccount. Use
the unbind-schema command to remove the schema already assigned to the application, then create a schema
with the database type you want to use (create-schema) and bind it to the application (bind-schema). The
following example data is used:
● The application myapp runs on the SAP MaxDB database and is bound to a schema that was created
automatically. The application has been stopped.
● Runtime environment: Java Web
● Data source name: jdbc/dshana
● Schema: myhana
● User: test
● Subaccount: mysubaccount
● Host: hana.ondemand.com (replace as necessary, for example, with hanatrial.ondemand.com for trial
accounts)
Procedure
1. In the application's web.xml file, update the resource definition by replacing the default data source
<res-ref-name>jdbc/DefaultDB</res-ref-name>, or similar, with the named data source
<res-ref-name>jdbc/dshana</res-ref-name>:
<resource-ref>
<res-ref-name>jdbc/dshana</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
2. Adjust the JNDI lookup in the application to use the data source you just defined in the web.xml file. You will
later bind the application to the myhana schema using this data source:
// JNDI lookup
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/dshana");
Example output:
Schema ID DB Type
myhana hana
5. Unbind the current schema from the application. Since the application has a default binding, you do not need
to specify a data source name:
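The unbind call at this step might look like the following sketch (flag names are assumptions; see the unbind-schema command reference):

```shell
# Sketch: remove the default binding of application myapp
neo unbind-schema --host hana.ondemand.com --account mysubaccount --user test \
    --application myapp
```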
You see a message that the schema has been successfully unbound.
6. Since you have made code changes, redeploy the application.
7. Bind the schema to the application using the data source you defined in the application. Make sure that the
name is identical to that in the web.xml file and in the JNDI lookup (jdbc/dshana):
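A sketch of the bind call, using the example data from this scenario (exact parameter names per the bind-schema reference):

```shell
# Sketch: bind schema myhana to myapp via the named data source jdbc/dshana
neo bind-schema --host hana.ondemand.com --account mysubaccount --user test \
    --application myapp --id myhana --data-source jdbc/dshana
```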
You see a message that the schema has been successfully bound.
8. (Optional) Verify the results using the following command:
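The verification step likely uses the list-application-datasources command, roughly as follows (a sketch):

```shell
# Sketch: list the data sources currently bound to myapp
neo list-application-datasources --host hana.ondemand.com --account mysubaccount \
    --user test --application myapp
```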
Example output:
Related Information
Multiple schemas allow you to use multiple databases in parallel. You might, for example, want to use SAP ASE for
normal transaction processing and the SAP HANA database for analytics.
Prerequisites
You have set up the console client. For more information, see Set Up the Console Client [page 1135].
Context
In this scenario, you use the create-schema command to create two schemas, one associated with SAP ASE and
the other with the SAP HANA database. You then use the bind-schema command to bind both schemas to the
application. The following example data is used:
● The application is named myapp and has not yet been deployed.
● Runtime environment: Java Web
● Schemas: myhana (SAP HANA database) and myase (SAP ASE database)
● Data source names: jdbc/dshana and jdbc/dsase
● User: test
● Subaccount: mysubaccount
● Host: hana.ondemand.com (replace as necessary, for example, with hanatrial.ondemand.com for trial
accounts)
Procedure
1. In the application's web.xml file, add resource definitions for the two data sources:
<resource-ref>
<res-ref-name>jdbc/dshana</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/dsase</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
2. Add JNDI lookups in the application code using the two data sources. This allows the application to access
both the myhana and myase schemas:
// JNDI lookup
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/dshana");
...
Example output:
Schema ID DB Type
myhana hana
myase ase
7. Bind the schemas to the application using the data source names jdbc/dshana and jdbc/dsase:
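The two bind calls might look like this sketch, matching the data source names defined in web.xml (flag names are assumptions):

```shell
# Sketch: bind both schemas to myapp under their respective data source names
neo bind-schema --host hana.ondemand.com --account mysubaccount --user test \
    --application myapp --id myhana --data-source jdbc/dshana
neo bind-schema --host hana.ondemand.com --account mysubaccount --user test \
    --application myapp --id myase --data-source jdbc/dsase
```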
In both cases, you see a message that the schema has been successfully bound.
8. (Optional) Verify the results with the following command:
Example output:
Related Information
You can migrate from an auto-created schema by unbinding the schema currently assigned to your application
and rebinding it to the required one. This step is necessary, for example, to use more than one database in parallel.
Prerequisites
You have set up the console client. For more information, see Set Up the Console Client [page 1135].
Context
In this scenario, you migrate from the auto-bound schema by unbinding and then rebinding the same schema. This
allows you to retain the schema and all its artifacts. The following example data is used:
Procedure
1. Open the command window in the <SDK>/tools folder and use the list-application-datasources
command to obtain the name of the schema currently assigned to the application (you need the schema ID in
step 3):
Example output:
2. Unbind the current schema from the application. Since the application has a default binding, you do not need
to specify a data source name:
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
Note
If you prefer, you can change this name, but then you will also need to change the JNDI lookup in the
application code and redeploy the application.
4. Rebind the application to the same schema using the data source name from the previous step, for example,
jdbc/DefaultDB:
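A sketch of the rebind call; the schema ID placeholder stands for the ID obtained in step 1, and the flag names follow the client's usual conventions:

```shell
# Sketch: rebind the auto-created schema under the default data source name
neo bind-schema --host hana.ondemand.com --account mysubaccount --user test \
    --application myapp --id <schema ID from step 1> --data-source jdbc/DefaultDB
```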
Example output:
6. The application continues to use the old schema and default data source until it is restarted. Restart the
application so that it uses the new binding to the schema.
Related Information
Allow applications that belong to other subaccounts controlled access to the schemas of your subaccount in the
Neo environment.
Schemas can normally be used only by applications within the same subaccount in the Neo environment. You can,
however, allow applications belonging to other subaccounts controlled access to your subaccount’s schemas. The
other subaccount might be one of your own subaccounts or a third-party subaccount.
The access token is used by the consumer subaccount to bind the schema to the application. It can be used only
once. An unbind operation does not require an access token.
An access token:
● Always applies to one schema and one application and is not transferable
● Has an unlimited validity period
● Can be revoked at any time, regardless of whether the schema has already been bound to the target
application
Related Information
As a subaccount member who is assigned the Administrator or Developer role, you can grant applications in other
subaccounts access to any of your subaccount’s schemas in the Neo environment.
Prerequisites
You have set up the console client. For more information, see Set Up the Console Client [page 1135].
Context
To allow access, generate a one-time access token that permits the requesting application to access your schema
from its subaccount.
Procedure
Open the command window in the <SDK>/tools folder and enter the following command:
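The command in question is presumably grant-schema-access; a call might look like the following sketch, where otheraccount:otherapp identifies the consuming application (both the flag names and this identifier format are assumptions):

```shell
# Sketch: grant the application otherapp in subaccount otheraccount
# access to the schema myhana
neo grant-schema-access --host hana.ondemand.com --account mysubaccount --user test \
    --id myhana --application otheraccount:otherapp
```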
A successfully generated access token (an alphanumeric string) appears on the screen.
Next Steps
The generated access token can now be used by the consumer subaccount to bind the schema to the application.
● When the target application binds the schema to which it has been granted access, a new technical database
user is created automatically (name: DEV_<guid>). This user has access permission only for the specified
schema (technical name: NEO_<guid>).
● To allow the application to access other schemas or packages on the productive SAP HANA instance, you can
grant the technical database user additional privileges ( Security Users DEV_<guid> ).
● The technical database user is not the same as a normal database user and is provided purely as a mechanism
for enabling schema access.
Related Information
To bind a schema contained in another subaccount to your application, use a remote access token that indicates
that access to this specific schema has been permitted in the Neo environment.
Prerequisites
You have set up the console client. For more information, see Set Up the Console Client [page 1135].
Context
To prevent misuse, the remote access token can be used only once and cannot be transferred to other applications
in your subaccount. The owner subaccount can revoke access to the schema at any time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command:
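The bind call probably passes the remote access token instead of a schema ID, roughly as in this sketch (the --access-token flag name is an assumption):

```shell
# Sketch: bind a schema from another subaccount using the access token
neo bind-schema --host hana.ondemand.com --account consumeraccount --user test \
    --application myapp --access-token <token>
```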
Since the schema does not belong to your subaccount, the schema ID is prefixed with the owner subaccount’s
name (subaccount:schemaID), as shown in the example output below:
A permission grant applies to a specific schema and specific application and is identified by an access token. It is
valid until it is revoked by a member of the owner subaccount in the Neo environment.
Context
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all grants for
the specified schema:
Example output:
2. To revoke the grant, enter the following command, using the access token obtained in the previous step:
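The revoke step might look as follows (a sketch; presumably the revoke-schema-access command, with flag names per the client's conventions):

```shell
# Sketch: revoke the grant identified by the access token
neo revoke-schema-access --host hana.ondemand.com --account mysubaccount --user test \
    --access-token <token>
```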
If the access token has already been used to bind the schema, revoking the access permission also unbinds
the schema. If the application is running, it continues to use the schema until it is restarted.
3. Optionally check that the access token has been revoked by listing all grants again as described in step 1 or
using the display-schema-info command.
Use the set of console client commands for managing schemas in the Neo environment that is provided by the
SAP HANA service.
Related Information
Related Information
Access remote database instances through a database tunnel in the Neo environment, which provides a secure
connection from your local machine and bypasses any firewalls.
A database tunnel allows you to use database tools, such as the Eclipse Data Tools Platform, to connect to the
remote database instance. It provides you with direct access to a schema and allows you to manipulate it at the
database level.
Related Information
A database tunnel allows you to connect to a remote database instance through a secure connection. To open a
tunnel, use the open-db-tunnel command. When you open the tunnel, you'll get the connection details required
for the remote database instance, including a user and password.
Prerequisites
Set up the console client. For more information, see Set Up the Console Client [page 1135].
Note
For more information on required parameters, see open-db-tunnel [page 1942].
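A typical invocation might look like this sketch, using the example data from the earlier scenarios (check open-db-tunnel [page 1942] for the exact parameters):

```shell
# Sketch: open a tunnel to the database mydb; the connection details
# (host, port, user, password) are printed when the tunnel opens
neo open-db-tunnel --host hana.ondemand.com --account mysubaccount --user test \
    --id mydb
```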
Next Steps
Now that you have opened the database tunnel, you can connect to the remote database instance using the
connection details you have just obtained.
Note
The database tunnel must remain open while you work on the remote database instance. Close it only when you
have completed the session.
Related Information
To access data from a productive database in another subaccount, you need the required permissions. From the
subaccount providing the permission, you can obtain an access token and a database user, which you use to open a
tunnel to the database that is owned by that subaccount.
The table below lists the tasks and the person responsible for providing access to the database in another
subaccount:
● Give Other Subaccounts Permission to Open a Database Tunnel [page 812]: performed by an administrator in
the subaccount that owns the database, using grant-db-tunnel-access [page 1883].
● Open Tunnels to Databases in Other Subaccounts [page 814]: performed by a member of the subaccount that
has requested permission to open a tunnel to a database owned by another subaccount, using open-db-tunnel
[page 1942].
● Revoke Tunnel Access to Databases for Other Subaccounts [page 815]: performed by an administrator in the
subaccount that owns the database, using revoke-db-tunnel-access [page 1957].
Related Information
You can allow other subaccounts to open a tunnel to a productive database in your subaccount in the Neo
environment.
Prerequisites
● Provision the database you want to share in a subaccount. See Creating Databases [page 749].
● You are assigned the administrator role in that subaccount.
● Set up the console client. See Set Up the Console Client [page 1135] and Using the Console Client [page 1792].
To give another subaccount permission to open a tunnel to your database, create a database user for that
subaccount and provide that user's credentials, together with an access token, to a member of the subaccount
that requested permission to open a database tunnel. This allows this subaccount member to open a database
tunnel to the database in your subaccount. All members of the subaccount receiving the permission can access
the database in your subaccount.
Provide the following information to a member of the subaccount that requested permission to open a database
tunnel:
● To check whether database access has been given successfully, you can view a list of all currently active
database access permissions to other subaccounts that exist for a specified subaccount by using the
list-db-tunnel-access-grants command.
● The token is simply a random string, for example,
31t0dpim6rtxa00wx5483vqe7in8i3c1phv759w9oqrutf638l, which remains valid until the provider
subaccount revokes it. You can revoke the database access permission at any point in time using the
revoke-db-tunnel-access command. See Revoke Tunnel Access to Databases for Other
Subaccounts [page 778].
Note
Only the provider subaccount can revoke the access permission. When you revoke the access permission,
we highly recommend that you disable the database user and password created for the access permission
on the database itself and that you close any open sessions on the SAP HANA database.
If a subaccount member has already used the access token and there are open database tunnels, they remain
open until they are closed, even though the user has been disabled.
We highly recommend that you create a dedicated database user on the database for each access permission.
Procedure
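The grant step might be sketched as follows; the flag for naming the consuming subaccount is an assumption here, so consult grant-db-tunnel-access [page 1883] for the actual syntax:

```shell
# Sketch: allow subaccount otheraccount to open tunnels to database mydb
neo grant-db-tunnel-access --host hana.ondemand.com --account mysubaccount --user test \
    --id mydb --permitted-account otheraccount
```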
If the permission has been given successfully, you see the access token. As a database administrator, create a
database user with the needed permissions. Provide the database user and password together with the access
token to a member of the subaccount that has requested permission to open a tunnel to your database.
To open a tunnel to a database that is owned by another subaccount, you request permission from that
subaccount. If your request is approved, the subaccount that owns the database in question provides you with an
access token, a database user, and password. This allows you to open a tunnel from your subaccount to the
database in the other subaccount.
Prerequisites
● Set up the console client. For more information, see Set Up the Console Client [page 1135].
● The subaccount that owns the database has given you an access token and a database user and password.
See Give Other Subaccounts Permission to Open a Database Tunnel [page 812].
Context
Once you have received the token and the database credentials, you can open the database tunnel. Use the access
token parameter for the open-db-tunnel command, not the database ID parameter. Then you can use a
database tool of your choice to connect to the database in another subaccount. Log on to the database with the
user and password that you received from the provider. You can then work on the remote database instance.
Note
All members of the consumer subaccount have permission to access the database in the provider subaccount.
Next Steps
Once you have opened the tunnel, you can connect to the database. See:
Related Information
You can revoke the permission to open database tunnels to a productive SAP HANA database in your subaccount
for other subaccounts in the Neo environment.
Prerequisites
● Give another subaccount permission to use a database in your subaccount. See Give Other Subaccounts
Permission to Open a Database Tunnel [page 777].
● You are assigned the administrator role in that subaccount.
Context
Note
You can revoke the permission to use a database in your subaccount for other subaccounts at any time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all
permissions for the specified database:
Example output:
2. To revoke the permission, enter the following command and copy across the access token obtained in the
previous step:
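A sketch of the revoke call (flag names per the client's usual conventions; see revoke-db-tunnel-access [page 1957]):

```shell
# Sketch: revoke the tunnel access permission identified by the access token
neo revoke-db-tunnel-access --host hana.ondemand.com --account mysubaccount --user test \
    --access-token <token>
```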
Note
Only the provider subaccount can revoke the access permission. When you revoke the access permission,
we highly recommend that you disable the database user and password created for the access permission
on the database itself and that you close any open sessions on the SAP HANA database.
You have revoked the permission to open tunnels to a database in your subaccount for other subaccounts.
3. Optional: Check that the access token has been revoked by listing all permissions again as described in step 1.
For continuous delivery and automated tests, the open-db-tunnel command supports a background mode,
which allows a database tunnel to be opened by automated scripts or as part of a Maven build.
Prerequisites
Set up the console client on the CI server. For more information, see Set Up the Console Client [page 1135].
Procedure
To open or close the database tunnel in a Maven build, use the following goals of the SAP Cloud Platform Maven
plug-in:
○ open-db-tunnel
○ close-db-tunnel
Tip
Take a look at the following samples delivered with the SAP Cloud Platform SDK:
○ persistence-with-ejb
○ persistence-with-jpa
Each sample includes a test that opens a database tunnel in background mode within the Maven build and
executes some SQL statements.
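Outside a Maven build, the console client itself might be scripted in background mode along these lines (a sketch; the --background flag and a matching close-db-tunnel command are assumptions to verify against open-db-tunnel [page 1942]):

```shell
# Sketch: open the tunnel non-interactively, run SQL, then close the tunnel
neo open-db-tunnel --host hana.ondemand.com --account mysubaccount --user test \
    --id mydb --background
# ... run automated tests or SQL scripts against the tunnel endpoint ...
neo close-db-tunnel --host hana.ondemand.com --account mysubaccount --user test \
    --id mydb
```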
Related Information
Prerequisites
You have installed and set up all the necessary tools. For more information, see Install SAP HANA Tools for Eclipse
[page 1224].
Note
Neon is the last Eclipse version that supports this feature.
Procedure
Note
Make sure that you specify the host correctly.
b. Specify the subaccount name, e-mail or SCN user name, and your SCN password.
c. Choose Next.
5. Select a database and provide your credentials:
a. Select the Databases radio button.
b. From the dropdown menu, select the database you want to work with.
c. Enter your database user and password.
For more information, see Create a Database Administrator User [page 1245].
Note
Make sure that you specify the database user and password correctly.
d. Choose Finish.
Results
Related Information
Establish a direct connection to a shared SAP HANA schema via the Eclipse IDE, using SAP HANA Tools.
Prerequisites
You have installed and set up all the necessary tools. For more information, see Install SAP HANA Tools for Eclipse
[page 1224].
Note
Neon is the last Eclipse version that supports this feature.
Procedure
If you select Save password, the password for a given user name will be kept in the secure store.
You must have created a schema previously to be able to select it in this step.
9. Choose Finish.
Program with JPA in the Neo environment. Container-managed persistence and application-managed
persistence differ in terms of the management and life cycle of the entity manager.
The main features of each scenario are shown in the table below. We recommend that you use container-managed
persistence (Java EE 6 Web Profile runtime), which is the model most commonly used by Web applications.
EclipseLink
Download the latest version of EclipseLink. EclipseLink versions 2.5 and later contain the SAP HANA database
platform.
For details about importing the files into your Web application project and specifying the JPA implementation
library EclipseLink, see the tutorial Tutorial: Adding Application-Managed Persistence with JPA (SDK for Java Web)
[page 836].
Related Information
Special Settings for EclipseLink Versions Earlier than 2.5 [page 850]
Persistence Units [page 851]
Using Container-Managed Persistence [page 852]
Using Application-Managed Persistence [page 855]
Entity Classes [page 861]
Use JPA together with EJB to apply container-managed persistence in a simple Java EE web application that
manages a list of persons.
Prerequisites
● Download and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and the SDK for Java EE 6 Web
Profile. For more information, see Setting Up the Development Environment [page 1126].
● Set up your runtime environment in the Eclipse IDE. For more information, see Set Up the Runtime
Environment [page 1131].
● Develop or import a Java Web application in Eclipse IDE. For more information, see Developing Java
Applications [page 1164] or Import Samples as Eclipse Projects [page 1145].
The application is also available as a sample in the SAP Cloud Platform SDK for Neo environment for Java EE 6
Web Profile:
○ Sample name: persistence-with-ejb
○ Location: <sdk>/samples folder
For more information, see Using Samples [page 1143].
Context
Create a dynamic web project using the JPA project facet and add a servlet.
Procedure
1. From the Eclipse main menu, choose File New Dynamic Web Project .
2. Enter the Project name persistence-with-ejb.
3. In the Target Runtime pane, select Java EE 6 Web Profile as the runtime you want to use to deploy the
application.
4. In the Dynamic web module version section, select 3.0.
5. In the Configuration section, choose Modify and select JPA in the Project Facets screen.
6. Choose OK and return to the Dynamic Web Project screen.
7. Choose Next.
14. To add a servlet to your project, choose File New Servlet from the Eclipse main menu.
15. Enter the Java package com.sap.cloud.sample.persistence and the class name
PersistenceEJBServlet.
16. To generate the servlet, choose Finish.
Procedure
2. From the Eclipse main menu, choose File New Other Class and choose Next.
3. Make sure that the Java package is com.sap.cloud.sample.persistence.
4. Enter the class name Person and choose Finish. Replace the entire class with the following content:
package com.sap.cloud.sample.persistence;
import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.Table;
/**
* Class holding information on a person.
*/
@Entity
@Table(name = "T_PERSON")
@NamedQuery(name = "AllPersons", query = "select p from Person p")
public class Person {
@Id
@GeneratedValue
private Long id;
@Basic
private String firstName;
@Basic
private String lastName;
public long getId() {
return id;
}
public void setId(long newId) {
this.id = newId;
}
public String getFirstName() {
return this.firstName;
}
public void setFirstName(String newFirstName) {
this.firstName = newFirstName;
}
public String getLastName() {
return this.lastName;
}
public void setLastName(String newLastName) {
this.lastName = newLastName;
}
}
Procedure
1. Select persistence.xml, and from the context menu choose Open With Persistence XML Editor .
2. On the General tab, make sure that org.eclipse.persistence.jpa.PersistenceProvider is entered in
the Persistence provider field.
3. On the Options tab, make sure that the DDL generation type Create Tables is selected.
4. On the Connection tab, select the transaction type JTA.
5. Save the file.
Procedure
2. From the Eclipse main menu, choose File New Other EJB Session Bean (EJB 3.x) and choose
Next.
3. Make sure that the Java package is com.sap.cloud.sample.persistence.
4. Enter the class name PersonBean and choose Next.
5. Leave the default setting Stateless and choose Finish.
6. Replace the entire class with the following content:
package com.sap.cloud.sample.persistence;
import java.util.List;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
/**
* Session Bean implementation class PersonBean
*/
@Stateless
@LocalBean
public class PersonBean {
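The class body of the listing above is not shown here. The servlet later in this tutorial calls personBean.getAllPersons(), so a plausible sketch of the bean is given below; note that the addPerson method is an assumption, and only getAllPersons is confirmed by the servlet code:

```java
package com.sap.cloud.sample.persistence;
import java.util.List;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
/**
 * Session Bean implementation class PersonBean (sketch)
 */
@Stateless
@LocalBean
public class PersonBean {
    // Container-managed, transaction-scoped entity manager
    @PersistenceContext
    private EntityManager em;

    public List<Person> getAllPersons() {
        // Runs the named query declared on the Person entity
        return em.createNamedQuery("AllPersons", Person.class).getResultList();
    }

    public void addPerson(Person person) {
        // Each business method runs in its own container-managed JTA transaction
        em.persist(person);
    }
}
```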
Procedure
2. From the context menu, choose Import General File System and choose Next.
3. Browse to the local directory where you downloaded and unpacked the SDK for Java EE 6 Web Profile, select
the repository/plugins directory, and choose OK.
4. Select com.sap.security.core.server.csi_1.x.y.jar and choose Finish.
Extend the servlet to use the Person entity and EJB session bean.
Context
The servlet adds Person entity objects to the database, retrieves their details, and shows them on the screen.
Procedure
2. Select PersistenceEJBServlet.java, and from the context menu choose Open With Java Editor .
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.SQLException;
import java.util.List;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.security.core.server.csi.IXSSEncoder;
import com.sap.security.core.server.csi.XSSEncoder;
/**
* Servlet implementation class PersistenceEJBServlet
*/
@WebServlet("/")
public class PersistenceEJBServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final Logger LOGGER = LoggerFactory
.getLogger(PersistenceEJBServlet.class);
@EJB
PersonBean personBean;
/** {@inheritDoc} */
@Override
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
response.getWriter().println("<p>Persistence with JPA!</p>");
try {
appendPersonTable(response);
appendAddForm(response);
} catch (Exception e) {
response.getWriter().println(
"Persistence operation failed with reason: "
+ e.getMessage());
LOGGER.error("Persistence operation failed", e);
}
}
/** {@inheritDoc} */
@Override
protected void doPost(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
try {
doAdd(request);
doGet(request, response);
} catch (Exception e) {
response.getWriter().println(
"Persistence operation failed with reason: "
+ e.getMessage());
LOGGER.error("Persistence operation failed", e);
}
}
private void appendPersonTable(HttpServletResponse response)
throws SQLException, IOException {
// Append table that lists all persons
List<Person> resultList = personBean.getAllPersons();
response.getWriter().println(
"<p><table border=\"1\"><tr><th colspan=\"3\">"
+ (resultList.isEmpty() ? "" : resultList.size()
+ " ")
+ "Entries in the Database</th></tr>");
if (resultList.isEmpty()) {
response.getWriter().println(
"<tr><td colspan=\"3\">Database is empty</td></tr>");
Results
Procedure
1. To test your web application on the local server, follow the steps for deploying a web application locally as
described in Deploy Locally from Eclipse IDE [page 1189].
2. Enter a first name (for example, John) and a last name (for example, Smith) and choose Add Person.
Note
If you add more names to the database, they are also listed in the table. This confirms that you have
successfully enabled persistence using the Person entity.
Procedure
If you leave the default Automatic option, the server loads the target runtime of your application.
7. Enter your subaccount name, e-mail or user name, and password, then choose Next.
○ If you have previously entered a subaccount and user name for your host, you can select these names
from lists.
○ Previously entered hosts also appear in a dropdown list.
○ Select Save password to remember and store the password for a given user.
Caution
Do not select your application on the Add and Remove screen. Adding an application automatically starts it
with the effect that it will fail because no data source binding exists. You will add an application in a later
step.
9. In the Servers view, open the context menu for the server you just created and choose Show In Cockpit .
Procedure
1. In the cockpit, select a subaccount, then choose SAP HANA / SAP ASE Databases & Schemas in the
navigation area.
2. Select the database you want to create a binding for.
3. Choose Data Source Bindings.
4. Define a binding (<DEFAULT>) for the application and select a database ID. Choose Save.
Add the application to the new server and start it to deploy the application on the cloud.
Context
Note
You cannot deploy multiple applications on the same application process. Deployment of a second application
on the same application process overwrites any previous deployments. To deploy several applications, deploy
each of them on a separate application process.
Procedure
1. In the Servers view in Eclipse, open the context menu for the server and choose Add and Remove....
2. To add the application to the server, add the application to the panel on the right side.
3. Choose Finish.
4. Start the server.
This deploys the application and starts it on the SAP Cloud Platform.
5. Access the application by clicking the application URL on the application overview page in the cockpit.
You should see the same output as when the application was tested on the local server.
Use JPA to apply application-managed persistence in a simple Java EE web application that manages a list of
persons.
Prerequisites
● Download and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and the SDK for Java Web
For more information, see Setting Up the Development Environment [page 1126].
● Download the JPA provider, EclipseLink:
1. Download the latest 2.5.x version of EclipseLink from: http://www.eclipse.org/eclipselink/downloads .
Select the EclipseLink 2.5.x Installer Zip (intended for use in Java EE environments).
Note
The tutorial and sample use EclipseLink version 2.5.x.
Context
1. Create a Dynamic Web Project and Servlet with JPA [page 837]
2. Create the JPA Persistence Entity [page 840]
3. Maintain Metadata for the Person Entity [page 841]
4. Prepare the Web Application Project for JPA [page 842]
5. Extend the Servlet to Use Persistence [page 843]
6. Test the Web Application on the Local Server [page 846]
Create a dynamic web project using the JPA project facet and a servlet.
Procedure
1. From the Eclipse main menu, choose File New Dynamic Web Project .
2. Enter the Project name persistence-with-jpa.
3. In the Target Runtime pane, select Java Web as the runtime to deploy the application.
4. In the Dynamic web module version section, select 2.5.
5. In the Configuration section, choose Modify, then select JPA in the Project Facets screen.
6. Choose OK and return to the Dynamic Web Project screen.
14. To add a servlet to the project you have just created, choose File New Servlet from the Eclipse main
menu.
15. Enter the Java package com.sap.cloud.sample.persistence and the class name
PersistenceWithJPAServlet.
16. To generate the servlet, choose Finish.
Context
Create a JPA persistence entity class named Person. Add an auto-incremented ID to the database table as the
primary key and person attributes. You must also define a query method that retrieves a Person object from the
database table. Each person stored in the database is represented by a Person entity object.
Procedure
2. From the Eclipse main menu, choose File New Other Class and choose Next.
3. Make sure that the Java package is com.sap.cloud.sample.persistence.
4. Enter the class name Person and choose Finish.
5. In the editor, replace the entire class with the following content:
package com.sap.cloud.sample.persistence;
import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.Table;
/**
* Class holding information on a person.
*/
@Entity
@Table(name = "T_PERSON")
@NamedQuery(name = "AllPersons", query = "select p from Person p")
public class Person {
@Id
@GeneratedValue
private Long id;
@Basic
private String firstName;
@Basic
private String lastName;
public long getId() {
return id;
}
public void setId(long newId) {
this.id = newId;
}
public String getFirstName() {
return this.firstName;
}
public void setFirstName(String newFirstName) {
this.firstName = newFirstName;
}
To maintain metadata for your entity class, define additional settings in the persistence.xml file.
Context
Procedure
1. Select persistence.xml and from the context menu choose Open With Persistence XML Editor .
2. Choose the General tab.
3. Make sure that org.eclipse.persistence.jpa.PersistenceProvider is entered in the Persistence
provider field.
4. In the Managed Class section, choose Add..., enter Person, then choose OK.
5. On the Connection tab, make sure that the transaction type Resource Local is selected.
6. On the Schema Generation tab, make sure the DDL generation type Create Tables in the EclipseLink
Schema Generation section is selected.
7. Save the file.
Prepare the web application project by adding EclipseLink executables and the XSS Protection Library, adapting
the Java build path order, and adding the resource reference description to the web.xml file.
Procedure
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<servlet-mapping>
<servlet-name>PersistenceWithJPAServlet</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
Note
An application's URL path contains the context root followed by the optional URL pattern ("/<URL
pattern>"). The servlet URL pattern that is automatically generated by Eclipse uses the servlet’s class name
as part of the pattern. Since the cockpit shows only the context root, this means that you cannot directly
open the application in the cockpit without adding the servlet name. To call the application by only the
context root, use "/" as the URL mapping, then you will no longer have to correct the URL in the browser.
Context
The servlet adds Person entity objects to the database, retrieves their details, and displays them on the screen.
Procedure
2. Select PersistenceWithJPAServlet.java and from the context menu choose Open With Java
Editor .
3. In the opened editor, replace the entire servlet class with the following content:
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;
import org.eclipse.persistence.config.PersistenceUnitProperties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
4. Save the servlet. The project should compile without any errors.
Procedure
1. To test your web application on the local server, follow the steps for deploying a web application locally as
described in Deploy Locally from Eclipse IDE [page 1189].
2. Enter a first name (for example, John) and a last name (for example, Smith) and choose Add Person.
Note
If you add more names to the database, they are also listed in the table. This confirms that you have
successfully enabled persistence using the Person entity.
Procedure
If you leave the default Automatic option, the server loads the target runtime of your application.
7. Enter your subaccount name, e-mail or user name, and password, then choose Next.
○ If you have previously entered a subaccount and user name for your host, you can select these names
from lists.
○ Previously entered hosts also appear in a dropdown list.
○ Select Save password to remember and store the password for a given user.
Caution
Do not select your application on the Add and Remove screen. Adding an application automatically starts it
with the effect that it will fail because no data source binding exists. You will add an application in a later
step.
9. In the Servers view, open the context menu for the server you just created and choose Show In Cockpit .
Procedure
1. In the cockpit, select a subaccount, then choose SAP HANA / SAP ASE Databases & Schemas in the
navigation area.
2. Select the database you want to create a binding for.
3. Choose Data Source Bindings.
4. Define a binding (<DEFAULT>) for the application and select a database ID. Choose Save.
Add the application to the new server and start it to deploy the application on the cloud.
Context
Note
You cannot deploy multiple applications on the same application process. Deployment of a second application
on the same application process overwrites any previous deployments. To deploy several applications, deploy
each of them on a separate application process.
Procedure
1. In the Servers view in Eclipse, open the context menu for the server and choose Add and Remove....
2. To add the application to the server, add the application to the panel on the right side.
3. Choose Finish.
4. Start the server.
This deploys the application and starts it on the SAP Cloud Platform.
5. Access the application by clicking the application URL on the application overview page in the cockpit.
You should see the same output as when the application was tested on the local server.
Container-Managed Persistence
<properties>
<property name="eclipselink.target-database"
value="com.sap.persistence.platform.database.HDBPlatform"/>
</properties>
Application-Managed Persistence
Specify the target database as shown above or directly in the servlet code, as shown in the example below:
ds = (DataSource) ctx.lookup("java:comp/env/jdbc/DefaultDB");
connection = ds.getConnection();
Map<String, Object> properties = new HashMap<String, Object>();
properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);
properties.put("eclipselink.target-database",
"com.sap.persistence.platform.database.HDBPlatform");
General Points
Set the target database property before you deploy the application on the SAP HANA database. If you don't, you'll get an error; if this happens, you need to re-create the table with the correct definitions, setting the DDL generation type first.
When testing the application locally, remove the DDL generation type altogether.
A JPA model contains a persistence configuration file, persistence.xml, which describes the defined
persistence units. A persistence unit in turn defines all entity classes managed by the entity managers in your
application and includes the metadata for mapping the entity classes to the database entities.
JPA Provider
The persistence.xml file is located in the META-INF folder within the persistence unit's src folder. The JPA persistence provider used is org.eclipse.persistence.jpa.PersistenceProvider.
Example
In the persistence.xml file in the tutorial Adding Container-Managed Persistence with JPA (SDK for Java EE 6
Web Profile), the persistence unit is named persistence-with-ejb, the transaction type is JTA (default
setting), and the DDL generation type has been set to Create Tables, as shown below:
The DDL generation type uses the EclipseLink capabilities to generate database tables. The following values are valid for generating the DDL for the entity specified in the persistence.xml file:
Transaction Type
JTA transactions are used for container-managed persistence, and resource-local transactions for application-
managed persistence. The SDK for Java Web supports resource-local transactions only.
Container-managed entity managers are the model most commonly used by Web applications. Container-
managed entity managers require JTA transactions and are generally used with stateless session beans and
transaction-scoped persistence contexts, which are threadsafe.
Context
The scenario described in this section is based on the Java EE 6 Web Profile runtime. You use a stateless EJB
session bean into which the entity manager is injected using the @PersistenceContext annotation.
Procedure
1. Configure the persistence units in the persistence.xml file to use JTA data sources and JTA transactions.
2. Inject the entity manager into an EJB session bean using the @PersistenceContext annotation.
Related Information
To use container-managed entity managers, configure JTA data sources in the persistence.xml file. JTA data
sources are managed data sources and are associated with JTA transactions.
Context
To configure JTA data sources, set the transaction type attribute (transaction-type) to JTA and specify the
names of the JTA data sources (jta-data-source), unless the application is using the default data source.
Procedure
The example below shows the persistence units defined for two data sources, where each data source is
associated with a different database:
<persistence>
<persistence-unit name="hanadb" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/hanaDB</jta-data-source>
<class>com.sap.cloud.sample.persistence.Person</class>
<properties>
<property name="eclipselink.ddl-generation" value="create-tables" />
</properties>
</persistence-unit>
<persistence-unit name="maxdb" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/maxDB</jta-data-source>
<class>com.sap.cloud.sample.persistence.Person</class>
<properties>
<property name="eclipselink.ddl-generation" value="create-tables" />
</properties>
</persistence-unit>
</persistence>
EJB session beans, which typically perform the database operations, can use the @PersistenceContext
annotation to directly inject the entity manager. The corresponding entity manager factory is created
transparently by the container.
Procedure
1. In the EJB session bean, inject the entity manager as follows. A persistence context type has not been
explicitly specified in the example below and is therefore, by default, transaction-scoped:
@PersistenceContext
private EntityManager em;
To use an extended persistence context, set the value of the persistence context type to EXTENDED
(@PersistenceContext(type=PersistenceContextType.EXTENDED)), and declare the session bean as stateful.
An extended persistence context allows a session bean to maintain its state across multiple JTA transactions.
An extended persistence context is not threadsafe.
2. If you have more than one persistence unit, inject the required number of entity managers by specifying the
persistence unit name as defined in the persistence.xml file:
@PersistenceContext(unitName="hanadb")
private EntityManager em1;
...
@PersistenceContext(unitName="maxdb")
private EntityManager em2;
3. Inject an instance of the EJB session bean class into, for example, the servlet of the web application with an
annotation in the following form, where PersonBean is an example session bean class:
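Based on the servlet listing earlier in this chapter, the injection takes the following form:

```java
@EJB
PersonBean personBean;
```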
The persistence context made available is based on JTA and provides automatic transaction management.
Each EJB business method automatically has a managed transaction, unless specified otherwise. The entity
manager life cycle, such as instantiation and closing, is controlled by the container. Therefore, do not use
methods designed for resource-local transactions, such as em.getTransaction().begin(),
em.getTransaction().commit(), and em.close().
Related Information
Application-managed entity managers are created manually using the EntityManagerFactory interface.
Application-managed entity managers require resource-local transactions and non-JTA data sources, which you
must declare as JNDI resource references.
Context
The scenario described in this section is based on the Java Web runtime, which supports only manual creation of
the entity manager factory.
Procedure
Related Information
An application can use one or more data sources. A data source can be a default data source or an explicitly
named data source. Before a data source can be used, you must declare it as a JNDI resource reference in the
web.xml deployment descriptor.
Procedure
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/datasource1</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/datasource2</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
○ The data source name is the JNDI name used for the lookup.
4. Save the file.
Related Information
To use application-managed entity managers, configure resource-local transactions in the persistence.xml file.
Resource-local transactions are associated with non-JTA data sources (that is, unmanaged data sources) and are
explicitly controlled by the application through the EntityTransaction interface of the entity manager.
Context
To use resource-local transactions, the transaction type attribute has to be set to RESOURCE_LOCAL, indicating
that the entity manager factory should provide resource-local entity managers. When you work with a non-JTA
data source, the non-JTA data source element also has to be set in the persistence unit properties in the
application code.
Procedure
In the application code, obtain an initial JNDI context by creating a javax.naming.InitialContext object,
then retrieve the data source by looking up the naming environment through the InitialContext. Alternatively,
you can directly inject the data source.
Procedure
1. To create an initial JNDI context and look up the data source, add the following code to your application and
make sure that the JNDI name matches the one specified in the web.xml file:
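A minimal sketch of this lookup, assuming the default data source jdbc/DefaultDB declared in the web.xml example shown later in this section:

```java
// Obtain the initial JNDI context and look up the data source.
// The java:comp/env prefix is required by the Java EE specification.
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/DefaultDB");
```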
According to the Java EE Specification, you should add the prefix java:comp/env to the JNDI resource name
(as specified in the web.xml) to form the lookup name. For more information about defining and referencing
resources according to the Java EE standard, see the Java EE Specification.
2. Alternatively, to directly inject the data source, use the @Resource annotation:
@Resource
private javax.sql.DataSource ds;
@Resource(name="jdbc/datasource1")
private javax.sql.DataSource ds1;
@Resource(name="jdbc/datasource2")
private javax.sql.DataSource ds2;
Related Information
Java EE Specification
Use the EntityManagerFactory interface to manually create and manage entity managers in your Web
application.
Procedure
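A sketch of this step, creating the entity manager factory with a non-JTA data source; the persistence unit name persistence-with-jpa is an assumption based on this tutorial's project name:

```java
// Look up the unmanaged data source and hand it to EclipseLink
// as a resource-local (non-JTA) data source.
DataSource ds = (DataSource) new InitialContext()
        .lookup("java:comp/env/jdbc/DefaultDB");
Map<String, Object> properties = new HashMap<String, Object>();
properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);
EntityManagerFactory emf = Persistence.createEntityManagerFactory(
        "persistence-with-jpa", properties);
```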
In the code above, the non-JTA data source element has been set in the persistence unit properties, and the
persistence unit name is the name of the persistence unit as it is declared in the persistence.xml file.
Note
Include the above code in the servlet init() method, as illustrated in the tutorial Adding Application-
Managed Persistence with JPA (SDK for Java Web), since this method is called only once during initialization
when the servlet instance is loaded.
2. Use the entity manager factory obtained above to create an entity manager as follows:
EntityManager em = emf.createEntityManager();
Application-managed entity managers are always extended and therefore retain the entities beyond the scope of a
transaction. You should therefore close an entity manager when it is no longer needed by calling
EntityManager.close(), or alternatively EntityManager.clear() wherever appropriate, such as at the end
of a transaction. An entity manager cannot be used concurrently by multiple threads, so design your application so that each thread works with its own entity manager instance.
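A common way to structure this close-when-done handling is a try/finally block around each unit of work:

```java
EntityManager em = emf.createEntityManager();
try {
    // ... perform persistence operations here ...
} finally {
    em.close(); // release the entities held by the extended context
}
```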
Related Information
When working with a resource-local entity manager, use the EntityTransaction API to manually set the transaction
boundaries in your application code. You can obtain the entity transaction attached to the entity manager by
calling EntityManager.getTransaction().
To create and update data in the database, you need an active transaction. The EntityTransaction API provides the
begin() method for starting a transaction, and the commit() and rollback() methods for ending a
transaction. When a commit is executed, all changes are synchronized with the database.
Example
The tutorial code (Adding Application-Managed Persistence with JPA (SDK for Java Web)) shows how to create
and persist an entity:
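A hedged sketch of that pattern (the entity values are illustrative):

```java
EntityManager em = emf.createEntityManager();
try {
    em.getTransaction().begin();
    Person person = new Person();
    person.setFirstName("John");
    person.setLastName("Smith");
    em.persist(person);           // associate the new entity with the manager
    em.getTransaction().commit(); // synchronize the changes with the database
} finally {
    em.close();
}
```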
The EntityManager.persist() method makes an entity persistent by associating it with an entity manager.
It is inserted into the database when the commit() method is called. The persist() method can be called
only on new entities.
Related Information
Tutorial: Adding Application-Managed Persistence with JPA (SDK for Java Web) [page 836]
The data source is determined dynamically at runtime and does not need to be defined in the web.xml or
persistence.xml file. This allows you to bind additional schemas to an application and obtain the corresponding
data source, without having to modify the application code or redeploy the application.
Context
A dynamic JNDI lookup is applied as follows, depending on whether you are using an unmanaged or a managed
data source:
● Unmanaged – supported in the Java Web, Java EE 6 Web Profile, and Java Web Tomcat 7 runtimes.
● Managed
Note
For the Java Web and Java EE 6 Web Profile runtimes only, but not for the Java Web Tomcat 7, you can continue
to use the earlier variants of the JNDI lookup:
● Unmanaged
● Managed
The steps described below are based on JPA application-managed persistence using the Java Web runtime.
Procedure
1. Create the persistence unit to be used for the dynamic data source lookup:
a. In the Project Explorer view, select <project>/Java Resources/src/META-INF/persistence.xml,
and from the context menu choose Open With Persistence XML Editor .
b. Switch to the Source tab of the persistence.xml file and create a persistence unit, as shown in the
example below. The corresponding data source is not defined in either the persistence.xml or
web.xml file:
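A sketch of such a persistence unit, using the name mypersistenceunit from the later steps; note that it contains neither a jta-data-source nor a non-jta-data-source element:

```xml
<persistence-unit name="mypersistenceunit" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>com.sap.cloud.sample.persistence.Person</class>
    <properties>
        <property name="eclipselink.ddl-generation" value="create-tables" />
    </properties>
</persistence-unit>
```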
2. In the servlet code, implement a JNDI data source lookup. In the example below, the data source name is
"mydatasource":
ds = (DataSource) context.lookup("unmanageddatasource:mydatasource");
3. Create an entity manager factory in the normal manner. In the example below, the persistence unit is named
"mypersistenceunit", as defined in the persistence.xml file:
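A sketch combining the data source ds obtained in step 2 with that persistence unit name:

```java
Map<String, Object> properties = new HashMap<String, Object>();
// Pass the dynamically looked-up data source as the non-JTA data source.
properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);
EntityManagerFactory emf = Persistence.createEntityManagerFactory(
        "mypersistenceunit", properties);
```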
4. Use the console client to create a schema binding with the same data source name. To do this, open the
command window in the <SDK>/tools folder and enter the bind-schema [page 1810] command, using the
data source name you defined in step 2:
Note
Note that you need to use the same data source name you have defined in step 2.
To declare a class as an entity and define how that entity maps to the relevant database table, you can either
decorate the Java object with metadata using Java annotations or denote it as an entity in the XML descriptor.
The Dali Java Persistence Tools, which are provided as part of the Eclipse IDE for Java EE Developers, allow you to
use a JPA diagram editor to create, edit, and display entities and their relationships (your application’s data model)
in a graphical environment.
Example
The tutorial Adding Application-Managed Persistence with JPA (SDK for Java Web) defines the entity class
Person, as shown in the following:
package com.sap.cloud.sample.persistence;
import javax.persistence.*;
@Entity
@Table(name = "T_PERSON")
@NamedQuery(name = "AllPersons", query = "select p from Person p")
public class Person {
@Id
@GeneratedValue
private Long id;
@Basic
private String firstName;
@Basic
private String lastName;
Related Information
Tutorial: Adding Application-Managed Persistence with JPA (SDK for Java Web) [page 836]
Dali Java Persistence Tools User Guide
The SAP HANA database lets you create tables with row-based storage or column-based storage. By default,
tables are created with row-based storage, but you can change the type of table storage you have applied, if
necessary.
The example below shows the SQL syntax used by the SAP HANA database to create different table types. The
first two SQL statements both create row-store tables, the third a column-store table, and the fourth changes the
table type from row-store to column-store:
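A hedged reconstruction of those four statements (the table and column names are illustrative):

```sql
-- 1. Row-store, the default table type:
CREATE TABLE T_PERSON (ID BIGINT PRIMARY KEY, FIRSTNAME VARCHAR(255));
-- 2. Row-store, stated explicitly:
CREATE ROW TABLE T_PERSON2 (ID BIGINT PRIMARY KEY, FIRSTNAME VARCHAR(255));
-- 3. Column-store:
CREATE COLUMN TABLE T_PERSON3 (ID BIGINT PRIMARY KEY, FIRSTNAME VARCHAR(255));
-- 4. Convert an existing row-store table to column-store:
ALTER TABLE T_PERSON COLUMN;
```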
EclipseLink JPA
When using EclipseLink JPA for data persistence, the table type applied by default in the SAP HANA database is
row-store. To create a column-store table or alter an existing row-store table, you can manually modify your
database using SQL DDL statements, or you can use open source tools, such as Liquibase (with plain SQL
statements), to handle automated database migrations.
Due to the limitations of the EclipseLink schema generation feature, you'll need to use one of the above options to
handle the life cycle management of your database objects.
You can use the ALTER TABLE statement to change a row-store table in the SAP HANA database to a column-store
table. The example is based on the Adding Application-Managed Persistence with JPA (SDK for Java Web) tutorial
and has been designed specifically for this tutorial and use case.
The example allows you to take advantage of the automatic table generation feature provided by JPA EclipseLink.
You merely alter the existing table at an appropriate point, when the schema containing the relevant table has just
been created. The applicable code snippet is added to the init() method of the servlet
(PersistenceWithJPAServlet). The main changes to the servlet code are outlined below:
1. Since the table must already exist when the ALTER statement is called, a small workaround has been
introduced in the init() method. An entity manager is created at an earlier stage than in the original version
of the tutorial to trigger the generation of the schema:
2. The SAP HANA database table SYS.M_TABLES contains information about all row and column tables in the
current schema. A new method, which uses this table to check that T_PERSON is not already a column-store
table, has been added to the servlet.
3. Another new method alters the table using the SQL statement ALTER TABLE <table name> COLUMN.
To apply the solution, replace the entire servlet class PersistenceWithJPAServlet with the following content:
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;
import org.eclipse.persistence.config.PersistenceUnitProperties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.security.core.server.csi.IXSSEncoder;
import com.sap.security.core.server.csi.XSSEncoder;
/**
* Servlet implementing a simple JPA based persistence sample application for SAP
Cloud Platform.
*/
public class PersistenceWithJPAServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final Logger LOGGER =
LoggerFactory.getLogger(PersistenceWithJPAServlet.class);
private static final String SQL_GET_TABLE_TYPE = "SELECT TABLE_NAME, TABLE_TYPE
FROM SYS.M_TABLES WHERE TABLE_NAME = ?";
private static final String PERSON_TABLE_NAME = "T_PERSON";
private DataSource ds;
private EntityManagerFactory emf;
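The listing above is truncated and ends before the new methods described in steps 2 and 3. A sketch of the check-and-alter logic, using the SQL_GET_TABLE_TYPE and PERSON_TABLE_NAME constants declared in the class (the method names are illustrative, not necessarily those used in the sample):

```java
// Returns true if T_PERSON is currently a row-store table, based on the
// TABLE_TYPE column of the SYS.M_TABLES system view.
private boolean isRowStoreTable(Connection conn) throws SQLException {
    PreparedStatement pstmt = conn.prepareStatement(SQL_GET_TABLE_TYPE);
    pstmt.setString(1, PERSON_TABLE_NAME);
    ResultSet rs = pstmt.executeQuery();
    return rs.next() && "ROW".equals(rs.getString("TABLE_TYPE"));
}

// Converts the table to a column-store table.
private void alterTableToColumnStore(Connection conn) throws SQLException {
    PreparedStatement pstmt = conn.prepareStatement(
            "ALTER TABLE " + PERSON_TABLE_NAME + " COLUMN");
    pstmt.execute();
}
```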
Related Information
Tutorial: Adding Application-Managed Persistence with JPA (SDK for Java Web) [page 836]
EclipseLink provides weaving as a means of enhancing JPA entities and classes for performance optimization. At
present, SAP Cloud Platform supports static weaving only. Static weaving occurs at compile time and is available
in both the Java Web and Java EE 6 Web Profile environments.
Prerequisites
● For static weaving to work, the entity classes must be listed in the persistence.xml file.
● EclipseLink Library:
To use the EclipseLink weaving options in your web applications, add the EclipseLink library to the classpath:
○ SDK for Java Web
The EclipseLink library has already been added to the WebContent/WEB-INF/lib folder, since it is
required for the JPA persistence scenario.
○ SDK for Java EE 6 Web Profile
The EclipseLink library is already part of the SDK for Java EE 6 Web Profile, allowing you to run JPA
scenarios without any additional steps. To use the weaving options, however, you add the EclipseLink
library to the classpath, as described below.
SDK for Java EE 6 Web Profile: Adding the EclipseLink Library to the Classpath
1. In the Eclipse IDE in the Project Explorer view, select the web application and from the context menu choose
Properties.
2. In the tree, select JPA.
3. In the Platform section, select the correct EclipseLink version, which should match the version available in the
SDK.
4. In the JPA implementation section, select the type User Library.
5. To the right of the user library list box, choose Download library.
6. Select the correct version of the EclipseLink library (currently EclipseLink 2.5.2) and choose Next.
7. Accept the EclipseLink license and choose Finish.
8. The new user library now appears; make sure it is selected.
9. Unselect Include libraries with this application and choose OK.
1. In the Eclipse IDE in the Project Explorer view, select the web application and from the context menu choose
Properties.
2. In the tree, select JPA > EclipseLink.
Note
If you change the target class settings, make sure you deploy these classes.
Your web application project is rebuilt so that the JPA entity class files contain weaving information. This also
occurs on each (incremental) project build. The woven entity classes are used whenever you publish the web
application to the cloud.
More Information
For information about using an ant task or the command line to perform static weaving, see the EclipseLink User
Guide .
Use JDBC in the Neo environment in cases where its low-level control is more appropriate than JPA.
Working with JDBC entails manually writing SQL statements to read and write objects from and to the database.
An application can use one or more data sources. A data source can be the default data source or an explicitly
named one. Either way, before a data source can be used, you must declare it as a JNDI resource reference.
Declare a JNDI resource reference to a JDBC data source in the web.xml deployment descriptor located in the
WebContent/WEB-INF directory as shown below. The resource reference name is only an example:
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
● <res-ref-name>: The JNDI name of the resource. The Java EE Specification recommends that you declare
the data source reference in the jdbc subcontext (jdbc/NAME).
● <res-type>: The type of resource that is returned during the lookup.
Add the <resource-ref> elements after the <servlet-mapping> elements in the deployment descriptor.
If the application uses multiple data sources, add a resource reference for each data source:
<resource-ref>
<res-ref-name>jdbc/datasource1</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/datasource2</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
You can obtain an initial JNDI context from Tomcat by creating a javax.naming.InitialContext object. Then
consume the data source by looking up the naming environment through the InitialContext, as follows:
According to the Java EE Specification, you should add the prefix java:comp/env to the JNDI resource name (as
specified in web.xml) to form the lookup name.
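For the jdbc/DefaultDB reference declared above, the lookup could therefore look like this sketch:

```java
InitialContext ctx = new InitialContext();
// java:comp/env prefix plus the resource reference name from web.xml
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/DefaultDB");
```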
If the application uses multiple data sources, construct the lookup in a similar manner to the following:
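Assuming resource references named jdbc/datasource1 and jdbc/datasource2, as used in the injection example later in this section, the lookups might be:

```java
InitialContext ctx = new InitialContext();
// One lookup per declared resource reference
DataSource ds1 = (DataSource) ctx.lookup("java:comp/env/jdbc/datasource1");
DataSource ds2 = (DataSource) ctx.lookup("java:comp/env/jdbc/datasource2");
```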
You can directly inject the data source using annotations as shown below.
@Resource
private javax.sql.DataSource ds;
● If the application uses explicitly named data sources, you must first declare them in the web.xml file. Inject
them as shown in the following example:
@Resource(name="jdbc/datasource1")
private javax.sql.DataSource ds1;
@Resource(name="jdbc/datasource2")
private javax.sql.DataSource ds2;
The data source lets you create a JDBC connection to the database. You can use the resulting Connection object to
instantiate a Statement object and execute SQL statements, as shown in the following example:
private static final String STMT_SELECT_ALL = "SELECT ID, FIRSTNAME, LASTNAME FROM
" + TABLE_NAME;
Connection conn = dataSource.getConnection();
try {
PreparedStatement pstmt = conn.prepareStatement(STMT_SELECT_ALL);
ResultSet rs = pstmt.executeQuery();
...
Database Tables
Use plain SQL statements to create the tables you require. Since there is currently no tool support available, you
have to manually maintain the table life cycles. The exact syntax you'll use may differ, depending on the underlying
database. The Connection object provides metadata about the underlying database and its tables and fields,
which can be accessed as shown in the code below:
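The metadata snippet itself is not shown at this point. A sketch of how the metadata might be accessed through the standard java.sql.DatabaseMetaData API:

```java
Connection conn = dataSource.getConnection();
try {
    DatabaseMetaData meta = conn.getMetaData();
    // Product name and version of the underlying database
    String product = meta.getDatabaseProductName();
    String version = meta.getDatabaseProductVersion();
    // Check whether the T_PERSON table already exists
    ResultSet rs = meta.getTables(null, null, "T_PERSON", null);
    boolean tableExists = rs.next();
} finally {
    conn.close();
}
```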
To create a table in the Apache Derby database, you could use the following SQL statement executed with a
PreparedStatement object:
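The statement itself is omitted here. Based on the fields of the Person entity and the SELECT statement shown earlier, a sketch for Apache Derby might look as follows (the column lengths are illustrative):

```java
// Create the T_PERSON table with the columns used by the sample
PreparedStatement pstmt = conn.prepareStatement(
        "CREATE TABLE T_PERSON "
        + "(ID VARCHAR(255) NOT NULL PRIMARY KEY, "
        + "FIRSTNAME VARCHAR(255), "
        + "LASTNAME VARCHAR(255))");
pstmt.executeUpdate();
```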
See the tutorial Adding Persistence Using JDBC for information about executing SQL statements and applying the
Data Access Object (DAO) design pattern in your Web application.
Related Information
Tutorial: Adding Persistence with JDBC (SDK for Java Web) [page 871]
Java EE Specification
Use JDBC to persist data in a simple Java EE web application that manages a list of persons.
Prerequisites
● Download and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK for Java Web. For more
information, see Setting Up the Development Environment [page 1126].
● Create a database. For more information, see Creating Databases [page 749].
● Set up your runtime environment in the Eclipse IDE. For more information, see Set Up the Runtime
Environment [page 1131].
● Develop or import a Java Web application in Eclipse IDE. For more information, see Developing Java
Applications [page 1164] or Import Samples as Eclipse Projects [page 1145].
The application is also available as a sample in the SAP Cloud Platform SDK for Neo environment for Java
Web:
○ Sample name: persistence-with-jdbc
○ Location: <sdk>/samples folder
For more information, see Using Samples [page 1143].
Context
1. Create a Dynamic Web Project and Servlet with JDBC [page 872]
2. Create the Person Entity [page 872]
3. Create the Person DAO [page 873]
4. Prepare the Web Application Project for JDBC [page 875]
5. Extend the Servlet to Use Persistence [page 876]
6. Test the Web Application on the Local Server [page 879]
7. Deploy Applications Using Persistence on the Cloud from Eclipse [page 880]
8. Configure Applications Using the Cockpit [page 882]
9. Start Applications Using Eclipse [page 882]
Create a dynamic web project and add a servlet, which you'll extend later in the tutorial.
Procedure
1. From the Eclipse main menu, choose File > New > Dynamic Web Project.
2. Enter the Project name persistence-with-jdbc.
3. In the Target Runtime pane, select Java Web as the runtime to use to deploy the application.
4. Leave the default values for the other project settings and choose Next.
5. On the Java screen, leave the default settings and choose Next.
6. In the Web Module configuration settings, select Generate web.xml deployment descriptor and choose Finish.
7. To add a servlet to the project you have just created, choose File > New > Web > Servlet from the Eclipse
main menu.
8. Enter the Java package com.sap.cloud.sample.persistence and the class name
PersistenceWithJDBCServlet.
9. Choose Finish to generate the servlet.
Procedure
2. From the context menu, choose New > Class, verify that the package entered is
com.sap.cloud.sample.persistence, enter the class name Person, and choose Finish.
3. Open the file in the text editor and insert the following content:
package com.sap.cloud.sample.persistence;
/**
* Class holding information on a person.
*/
public class Person {
private String id;
private String firstName;
private String lastName;
public String getId() {
return id;
}
// ... (the remaining getters and setters for id, firstName, and lastName follow)
}
Create a DAO class, PersonDAO, in which you encapsulate the access to the persistence layer.
Procedure
2. From the context menu, choose New > Class, verify that the package entered is
com.sap.cloud.sample.persistence, enter the class name PersonDAO, and choose Finish.
3. Open the file in the text editor and insert the following content:
package com.sap.cloud.sample.persistence;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import javax.sql.DataSource;
/**
* Data access object encapsulating all JDBC operations for a person.
*/
public class PersonDAO {
private DataSource dataSource;
/**
* Create new data access object with data source.
*/
public PersonDAO(DataSource newDataSource) throws SQLException {
setDataSource(newDataSource);
}
/**
* Get data source which is used for the database operations.
*/
public DataSource getDataSource() {
return dataSource;
}
// ... (the setter and the JDBC operations for persons follow)
}
Prepare the web application project by adding the XSS Protection Library, adapting the Java build path order, and
adding the resource reference description to the web.xml file.
Procedure
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<servlet-mapping>
<servlet-name>PersistenceWithJDBCServlet</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
Note
If your servlet version is 3.0 or higher, simply change the WebServlet annotation in the
PersistenceWithJDBCServlet.java class to @WebServlet("/").
Note
An application's URL path contains the context root followed by the optional URL pattern ("/<URL
pattern>"). The servlet URL pattern that is automatically generated by Eclipse uses the servlet’s class name
as part of the pattern. Since the cockpit shows only the context root, this means that you cannot directly
open the application in the cockpit without adding the servlet name. To call the application by only the
context root, use "/" as the URL mapping, then you will no longer have to correct the URL in the browser.
The extended servlet adds Person entity objects to the database, retrieves their details, and displays them on the
screen.
Procedure
2. Select PersistenceWithJDBCServlet.java, and from the context menu choose Open With > Java Editor.
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.SQLException;
import java.util.List;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.security.core.server.csi.IXSSEncoder;
import com.sap.security.core.server.csi.XSSEncoder;
/**
* Servlet implementing a simple JDBC based persistence sample application for
* SAP Cloud Platform.
*/
public class PersistenceWithJDBCServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final Logger LOGGER = LoggerFactory
.getLogger(PersistenceWithJDBCServlet.class);
private PersonDAO personDAO;
/** {@inheritDoc} */
@Override
public void init() throws ServletException {
try {
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx
.lookup("java:comp/env/jdbc/DefaultDB");
personDAO = new PersonDAO(ds);
} catch (SQLException e) {
throw new ServletException(e);
} catch (NamingException e) {
throw new ServletException(e);
}
}
/** {@inheritDoc} */
@Override
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
response.getWriter().println("<p>Persistence with JDBC!</p>");
try {
appendPersonTable(response);
appendAddForm(response);
} catch (Exception e) {
response.getWriter().println(
"Persistence operation failed with reason: "
+ e.getMessage());
LOGGER.error("Persistence operation failed", e);
}
}
/** {@inheritDoc} */
@Override
protected void doPost(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
try {
doAdd(request);
doGet(request, response);
} catch (Exception e) {
response.getWriter().println(
"Persistence operation failed with reason: "
+ e.getMessage());
LOGGER.error("Persistence operation failed", e);
4. Save the servlet. The project should compile without any errors.
Procedure
1. To test your web application on the local server, follow the steps for deploying a web application locally as
described in Deploy Locally from Eclipse IDE [page 1189].
2. Enter a first name (for example, John) and a last name (for example, Smith) and choose Add Person.
Note
If you add more names to the database, they are also listed in the table. This confirms that you have
successfully enabled persistence using the Person entity.
Procedure
If you leave the default Automatic option, the server loads the target runtime of your application.
7. Enter your subaccount name, e-mail or user name, and password, then choose Next.
○ If you have previously entered a subaccount and user name for your host, you can select these names
from lists.
○ Previously entered hosts also appear in a dropdown list.
○ Select Save password to remember and store the password for a given user.
Caution
Do not select your application on the Add and Remove screen. Adding an application automatically starts it
with the effect that it will fail because no data source binding exists. You will add an application in a later
step.
9. In the Servers view, open the context menu for the server you just created and choose Show In > Cockpit.
Procedure
1. In the cockpit, select a subaccount, then choose SAP HANA / SAP ASE Databases & Schemas in the
navigation area.
2. Select the database you want to create a binding for.
3. Choose Data Source Bindings.
4. Define a binding (<DEFAULT>) for the application and select a database ID. Choose Save.
Add the application to the new server and start it to deploy the application on the cloud.
Context
Note
You cannot deploy multiple applications on the same application process. Deployment of a second application
on the same application process overwrites any previous deployments. To deploy several applications, deploy
each of them on a separate application process.
Procedure
1. In the Servers view in Eclipse, open the context menu for the server and choose Add and Remove....
2. To add the application to the server, move it to the panel on the right side.
3. Choose Finish.
4. Start the server.
This deploys the application and starts it on the SAP Cloud Platform.
5. Access the application by clicking the application URL on the application overview page in the cockpit.
You should see the same output as when the application was tested on the local server.
Test an application in the Neo environment that uses the default data source and runs locally on Apache Derby on
the local runtime.
If an application uses the default data source and runs locally on Apache Derby, provided as standard for local
development, you can test it on the local runtime without any further configuration. To use explicitly named data
sources or a different database, you'll need to configure the connection.properties file appropriately.
To test an application on the local server, define any data sources the application uses as connection properties for
the local database. You don't need to do this if the application uses the default data source.
Prerequisites
Start the local server at least once (with or without the application) to create the relevant folder.
Procedure
1. In the Project Explorer view, open the folder Servers/SAP Cloud Platform local runtime/
config_master/connection_data and select connection.properties.
2. From the context menu, choose Open With > Properties File Editor.
3. Add the connection parameter com.sap.cloud.persistence.dsname to the block of connection
parameters for the local database you are using, as shown in the example below:
com.sap.cloud.persistence.dsname=jdbc/datasource1
javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
javax.persistence.jdbc.user=demo
javax.persistence.jdbc.password=demo
eclipselink.target-database=Derby
If the application has been bound to the data source based on an explicitly named data source instead of using
the default data source, ensure the following:
○ Provide a data source name in the connection properties that matches the name used in the data source
binding definition.
○ Add prefixes before each property in a property group for each data source binding you define. If an
application is bound only to the default data source, this configuration is considered the default no matter
which name you specified in the connection properties. The application can address the data source by
any name.
4. Repeat this step for all data sources that the application uses.
com.sap.cloud.persistence.dsname=jdbc/defaultManagedDataSource
com.sap.cloud.persistence.dsname=jdbc/defaultUnmanagedDataSource
6. To indicate that a block of parameters belongs together, add a prefix to the parameters, as shown in the
example below. The prefix is freely definable; the dot isn't required:
1.com.sap.cloud.persistence.dsname=jdbc/datasource1
1.javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
1.javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
1.javax.persistence.jdbc.user=demo
1.javax.persistence.jdbc.password=demo
1.eclipselink.target-database=Derby
2.com.sap.cloud.persistence.dsname=jdbc/defaultManagedDataSource
2.javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
2.javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
2.javax.persistence.jdbc.user=demo
2.javax.persistence.jdbc.password=demo
2.eclipselink.target-database=Derby
3.com.sap.cloud.persistence.dsname=jdbc/defaultUnmanagedDataSource
3.javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
3.javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
3.javax.persistence.jdbc.user=demo
3.javax.persistence.jdbc.password=demo
3.eclipselink.target-database=Derby
Identify inefficient SQL statements in your applications in the Neo environment and investigate performance
issues.
Context
The SQL trace provides a log of selected SQL statements with details about when a statement was executed and
its duration, allowing you to identify inefficient SQL statements in your applications and investigate performance
issues. SQL trace records are integrated in the standard trace log files written at runtime.
By default, the SQL trace is disabled. Generally, you enable it when you require SQL trace information for a
particular application and disable it again once you have completed your investigation. It is not intended for
general performance monitoring.
You can use the cockpit to enable the SQL trace by setting the log level of the logger
com.sap.core.persistence.sql.trace to the log level DEBUG in the application’s log configuration. Once
you've changed this setting, you can view trace information in the log files.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
Note
You can set log levels only when an application is running. Loggers are not listed if the relevant application
code has not been executed.
The new log setting takes effect immediately. Log settings are saved permanently and do not revert to their
initial values when an application is restarted.
Procedure
1. See the application's trace logs, which contain the SQL trace records, either in the Most Recent Logging panel
on the application dashboard or on the Logging page, which you reach by choosing Monitoring > Logging in the
navigation area.
2. To display the contents of a particular log file, choose (Show). You can also download the file by choosing
(Download).
Example
The SQL-specific information from the default trace is shown below in plain text format:
Related Information
In addition to using the cockpit, you can also enable the SQL trace from the Eclipse IDE, and using the console
client. Whichever tool you use, you need to set the log level of the logger
com.sap.core.persistence.sql.trace to the log level DEBUG.
Eclipse
You can set the log level for applications deployed locally or in the cloud.
Console Client
You can use the console client to set the log level as a logging property for one or more loggers. To do so, use the
command neo set-log-level with the log parameters logger <logger_name> and level <log_level>.
Related Information
Answers to some of the most commonly asked questions about persistence in the Neo environment.
Where can I view the memory limits for an SAP HANA tenant database system?
See View Memory Usage for an SAP HANA Database System [page 729].
Restore activities are currently handled by SAP Operations. For more information on how to request a recovery,
see Restore Database Systems [page 725] and Restore Databases [page 761].
For productive databases, a full data backup is done once a day. Log backup is triggered at least every 30 minutes.
The corresponding data or log backups are replicated to a secondary location every two hours. Backups are kept
(complete data and log) on a primary location for the last two backups and on a secondary location for the last 14
days. Backups are deleted afterwards. Recovery is therefore only possible within a time frame of 14 days. Restoring
the system from files on a secondary location might take some time depending on the availability. For more
information, see Restore Database Systems [page 725] and Restore Databases [page 761].
SAP backs up and recovers shared and dedicated database systems only as a whole.
I am using JPA with EclipseLink and have denoted a property of type String
with @Lob. Why is a VARCHAR column of limited length created in the
database?
Due to EclipseLink bug 317597, the @Lob annotation is ignored when the corresponding table column is
created in the database. To enforce the creation of a CLOB column, you must additionally specify
@Column(length=4001) for the property concerned.
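Applied to an entity property, the workaround might look like this sketch (the field name is illustrative):

```java
@Lob
// Without the explicit length, EclipseLink creates a limited VARCHAR
// column despite @Lob (bug 317597); length=4001 enforces a CLOB column.
@Column(length = 4001)
private String description;
```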
Governments place legal requirements on industry to protect data and privacy. We provide features and functions
to help you meet these requirements.
Note
SAP does not provide legal advice in any form. SAP software supports data protection compliance by providing
security features and data protection-relevant functions, such as blocking and deletion of personal data. In
many cases, compliance with applicable data protection and privacy laws is not covered by a product feature.
Furthermore, this information should not be taken as advice or a recommendation regarding additional features
that would be required in specific IT environments. Decisions related to data protection must be made on a
case-by-case basis, taking into consideration the given system landscape and the applicable legal requirements.
Definitions and other terms used in this documentation are not taken from a specific legal source.
The following sections provide information about data protection and privacy features. For the central data protection and privacy statement for
SAP Cloud Platform, see Data Protection and Privacy [page 2269].
1.12.12.1 Erasure
When handling personal data, consider the legislation in the different countries where your organization operates.
After the data has passed the end of purpose, regulations may require you to delete the data.
Read-access logging (RAL) is used to monitor and log read access to sensitive data. Data may be categorized as
sensitive by law, by external company policy, or by internal company policy. Read-access logging enables you to
answer questions about who accessed certain data within a specified time frame.
1.12.12.3 Glossary
The following document provides information about what was released in 2017.
Change
The Cloud Foundry environment supports SAP HANA revision 2.00.012.03. See SAP Note 2378962 - SAP HANA 2.0 Revision
and Maintenance Strategy .
Change
Neo Environment
The Neo environment now supports SAP HANA revision 1.00.122.14. See SAP Note 2021789 - SAP HANA Revision and Maintenance Strategy.
Neo Environment
You can now install SAP HANA and SAP ASE database systems using a self-service in the SAP Cloud Platform cockpit. See
Install Database Systems [page 721].
New
Neo Environment
You can now delete your SAP HANA or SAP ASE database system using a self-service in the SAP Cloud Platform cockpit.
See Delete Database Systems [page 727].
Change
The Cloud Foundry environment supports SAP HANA revision 2.00.022.00. See SAP Note 2378962 - SAP HANA 2.0 Revision and Maintenance Strategy.
Change
The default values for the limitation of the memory consumption in tenant databases are removed. The new default behavior
is that no memory limits are set. You can, however, still provide your own values. See Create an SAP HANA Tenant Database
[page 656].
Change
Neo Environment
The default values for the limitation of the memory consumption in tenant databases are removed. The new default behavior
is that no memory limits are set. You can, however, still provide your own values.
Fix
In earlier versions, if an SAP HANA tenant database was created in the SAP Cloud Platform cockpit using Internet Explorer
11, the user interface froze before creating the database.
Fix
In earlier versions, if an error occurred while an SAP HANA tenant database was being created or deleted, the database status was suspended in “Creating” or “Deleting”.
New
The Cloud Foundry environment supports SAP HANA revisions 2.00.012.02 and 2.00.021.00. See SAP Note 2378962 - SAP
HANA 2.0 Revision and Maintenance Strategy .
New
You can now view the memory limits, including the configured and live allocation limit for an SAP HANA MDC database system, in the SAP Cloud Platform cockpit. For more information, see View Memory Usage for an SAP HANA Database System [page 678].
New
Neo Environment
You can now view the memory limits, including the configured and live allocation limit for an SAP HANA MDC database system, in the SAP Cloud Platform cockpit. For more information, see View Memory Usage for an SAP HANA Database System [page 729].
Enhancement
The Cloud Foundry environment supports SAP HANA revision 2.00.012.01. See SAP Note 2378962 - SAP HANA 2.0 Revision
and Maintenance Strategy .
Neo Environment
The Neo environment supports SAP HANA revision 1.00.122.12. See SAP Note 2021789 - SAP HANA Revision and Maintenance Strategy.
Enhancement
Neo Environment
The Neo environment supports SAP HANA revision 1.00.122.11. See SAP Note 2021789 - SAP HANA Revision and Maintenance Strategy.
Enhancement
The Cloud Foundry environment supports SAP HANA revisions 2.00.002.02 and 2.00.012.00. See SAP Note 2378962 - SAP
HANA 2.0 Revision and Maintenance Strategy .
Enhancement
Neo Environment
Users with the Space Auditor role can now see SAP HANA database systems provisioned to a space in the SAP Cloud Platform cockpit. In earlier versions, only users with the Admin or Space Developer role could see this information. See Roles and Permissions.
New
You can use the SAP HANA cockpit to monitor and administer your SAP HANA database in the Cloud Foundry environment.
See SAP HANA cockpit.
New
The SAP HANA database explorer is now available for use with SAP HANA cockpit and SAP Web IDE for SAP HANA. The
database explorer allows you to execute SQL statements and database procedures, query information about the database,
and view information about database catalog objects. See About the SAP HANA Database Explorer and the SQL Analyzer.
New
You can share SAP HANA tenant databases within the spaces in an organization. This allows any app in an organization to
use any database in the same organization. See Sharing a Tenant Database with Other Spaces [page 691].
New
SAP HANA databases, enabled for multitenant database container support, are generally available in the Neo environment.
See SAP HANA Service in the Neo Environment [page 701].
Enhancement
The Persistence Service has been renamed to SAP HANA / SAP ASE service in the SAP Cloud Platform cockpit and in the documentation. For more information, see About This Guide [page 650].
Enhancement
SAP HANA revision 2.00.002.01 is supported in the Cloud Foundry environment.
New
SAP HANA
You can use the SAP HANA service in the Cloud Foundry environment. See First Steps in Enterprise Accounts [page 655].
Enhancement
SAP ASE
You can update your SAP ASE database system using a self-service in the SAP Cloud Platform cockpit.
Enhancement
You can configure the connection pool size of dedicated SAP HANA and SAP ASE database systems. See Configure the Database Connection Pool Size [page 784].
Enhancement
Limits for the memory consumption of the individual processes of an SAP HANA MDC tenant database can be defined in the cockpit.
Enhancement
Limits have been introduced on the number of SAP HANA MDC tenant databases and SAP ASE user databases that a database system can have. For more information, see Creating Databases [page 749].
The limits apply to new and existing database systems. Systems that currently contain a number of databases beyond the new limits will not be affected; however, you will not be able to create new databases within such database systems.
New
You can enable the SAP HANA Interactive Education (SHINE) demo application for SAP HANA MDC databases in trial accounts and access the app directly from the cockpit. For more information, see Enable SAP HANA Interactive Education (SHINE) [page 1237].
● 2016
● 2015
● 2014
● 2013
1.13 Tools
Tool Description
SAP Cloud Platform Cockpit [page 900]: This is the central point for managing all activities associated with your subaccount and for accessing key information about your applications.
SAP Web IDE [page 904]: This is a cloud-based meeting space where multiple application developers can work together from a common Web interface, connecting to the same shared repository with virtually no setup required. SAP Web IDE allows you to prototype, develop, package, deploy, and extend SAPUI5 applications.
Maven Plugin [page 905]: It supports you in using Maven to develop Java applications for SAP Cloud Platform. It allows you to conveniently call the console client and its commands from the Maven environment.
Cloud Connector [page 253]: It serves as the link between on-demand applications in SAP Cloud Platform and existing on-premise systems. You can control the resources available for the cloud applications in those systems.
SAP Cloud Platform SDK for Neo Environment [page 898]: It contains everything you need to work with SAP Cloud Platform, including a local server runtime and a set of command line tools.
SAP Cloud Platform SDK for iOS [page 900]: It is based on the Apple Swift programming language for developing apps in the Xcode IDE and includes well-defined layers (SDK frameworks, components, and platform services) that simplify development of enterprise-ready mobile native iOS apps. The SDK is tightly integrated with SAP Cloud Platform Mobile Service for Development and Operations.
Eclipse Tools [page 903]: This is a Java-based toolkit for Eclipse IDE. It enables you to develop and deploy applications as well as perform operations such as logging, managing user roles, creating connectivity destinations, and so on.
Console Client for the Neo Environment [page 905]: It enables development, deployment and configuration of an application outside the Eclipse IDE as well as continuous integration and automation tasks.
Cloud Foundry Command Line Interface [page 948]: It enables you to work with the Cloud Foundry environment to deploy and manage your applications.
The SAP Cloud Platform SDK for Neo environment contains everything you need to work with SAP Cloud Platform,
including a local server runtime and a set of command line tools.
Prerequisites
You have the SDK installed. See Install the SAP Cloud Platform SDK for Neo Environment [page 1127].
The location of the SDK is the folder you have chosen when you downloaded and unzipped it.
An overview of the structure and content of the SDK is shown in the table below. The folders and files are located
directly below the common root directory in the order given:
api: The platform API containing the SAP and third-party API JARs required to compile Web applications for SAP Cloud Platform (for more information about the platform API, see the "Supported APIs" section further below).
javadoc: Javadoc for the SAP platform APIs (also available as online documentation via the API Documentation link in the title bar of the SAP Cloud Platform Documentation Center). Javadoc for the third-party APIs is cross-referenced from the online documentation.
server: Initially not present, but created once you install a local server runtime.
tools: Command line tools required for interacting with the cloud runtime (for example, to deploy and start applications) and the local server runtime (for example, to install and start the local server).
readme.txt: Brief introduction to the SAP Cloud Platform SDK for Neo environment, its content, and how to set it up.
The cloud server runtime consists of the application server, the platform API, and the cloud implementations of
the provided services (connectivity, SAP HANA and SAP ASE, document, and identity). The SDK, on the other
hand, contains a local server runtime that consists of the same application server, the same platform API, but local
implementations of the provided services. These are designed to emulate the cloud server runtime as closely as
possible to support the local development and test process.
Supported APIs
The SAP Cloud Platform SDK for Neo environment contains the API for SAP Cloud Platform. All web applications that are to be deployed in the cloud must be compiled against this platform API. The platform API is used by the SAP Cloud Platform Tools for Java to set the compile-time classpath.
All JARs contained in the platform API are considered part of the provided scope and must therefore be used for compilation only. This means that they must not be packaged with the application, since they are provided and wired at runtime by SAP Cloud Platform, irrespective of whether you run your application locally for development and test purposes or centrally in the cloud.
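As a sketch of this rule in a Maven build, such a JAR would be declared with the provided scope, so it is on the compile-time classpath but excluded from the packaged WAR. The coordinates below are placeholders, not the actual platform API artifacts:

```xml
<!-- Placeholder coordinates for illustration only; use the platform API
     JARs from the SDK's api folder or their documented Maven coordinates. -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>platform-api</artifactId>
  <version>1.0.0</version>
  <scope>provided</scope> <!-- compiled against, but not packaged -->
</dependency>
```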
When you develop applications to run on the SAP Cloud Platform, you should be aware of which APIs are
supported and provisioned by the runtime environment of the platform:
● Third-party APIs: These include Java EE standard APIs (standards based and backwards compatible as
defined in the Java EE Specification) and other APIs released by third parties.
● SAP APIs: The platform APIs provided by the SAP Cloud Platform services.
SAP Cloud Platform SDK for iOS enables developers to quickly develop enterprise-ready native iOS apps, built with
Swift, the modern programming language by Apple.
The SDK is tightly integrated with SAP Cloud Platform mobile service for development and operations.
The SDK provides a set of UI controls that are often used in the enterprise space. These controls are implemented
using the SAP Fiori design language, and are in addition to the existing native controls on the iOS platform.
Related Information
A web-based administration interface provides access to a number of functions for configuring and managing
applications, services, and subaccounts.
Use the cockpit to manage resources, services, and security; monitor application metrics; and perform actions on cloud applications.
The cockpit provides an overview of the applications available in the different technologies supported by SAP
Cloud Platform, and shows other key information about the subaccount. The tiles contain links for direct
navigation to the relevant information.
The first thing you see on SAP Cloud Platform is the home page. You can find key information about the cloud
platform and its service offering. You can log on to the cloud cockpit from the home page, or register if you are a
new user.
When you log on to the cockpit, you see one or more global accounts. Choose a global account to work in (based
on your contract).
Trial users see a home page with information relevant to them after registering on SAP Cloud Platform. On the
home page, they can choose between a Cloud Foundry and a Neo environment to try out SAP Cloud Platform.
Logon
Log on to the cockpit using the relevant URL. The URL depends on the following:
Note
We recommend that you log on with your e-mail address.
Accessibility
SAP Cloud Platform provides High Contrast Black (HCB) theme support. Switch between the default theme and
the high contrast theme by choosing Your Name > User Settings in the header toolbar and selecting a theme.
Once you have saved your changes, the cockpit starts with the theme of your choice.
Language
To set the language in which the cockpit should be displayed, choose one of the following options from Your Name > User Settings in the header toolbar:
● English
Browser Support
For more information, see the Feature Scope Description for SAP Cloud Platform.
Notifications
Use Notifications to stay informed about different operations and events in the cockpit, for example, to monitor the
progress of copying a subaccount.
Get Support
To ask a question or give us feedback, choose one of the following options from Your Name > User Settings in the header toolbar:
Favorites
Create favorites to navigate easily to your most used pages in the cockpit and group them in folders.
Related Information
Accounts
Environments
Regions and Hosts
Getting Support [page 2280]
Use Notifications to stay informed about different operations and events in the cockpit, for example, to monitor the
progress of copying a subaccount.
The Notification icon in the header toolbar provides quick access to the list of notifications and shows the number of available notifications. The icon is visible only if there are currently notifications.
Each notification includes a short statement, a date and time, and the relevant subaccount. A notification informs you about the status of an operation or asks for an action. For example, if copying a subaccount failed, an administrator of the subaccount can assign the corresponding notification to themselves and provide a fix. The other members of this subaccount can see that the notification is already assigned to someone else.
● Dismiss a notification.
● Assign a notification to yourself. You can also unassign yourself from a notification without processing it further.
● Once you have completed the related action, you can set the status to complete. This dismisses the
corresponding notification for everyone else.
You can access the full list of notifications (also the ones you have dismissed earlier) by choosing Notifications in
the navigation area at the region level.
Related Information
The Eclipse plugin for the Cloud Foundry environment enables you to deploy and manage Java and Spring applications in the Cloud Foundry environment from Eclipse or Spring Tool Suite. For more information about how to install and use the plugin, see Eclipse Plugin for the Cloud Foundry Environment.
SAP Cloud Platform Tools for the Neo environment is a Java-based toolkit for Eclipse IDE. It enables you to
perform the following operations in SAP Cloud Platform:
Features of the SAP Cloud Platform Tools for the Neo Environment
You can download SAP Cloud Platform Tools from the SAP Development Tools for Eclipse page. The toolkit
package contains:
Support of the SAP Cloud Platform Tools for the Neo Environment
SAP Cloud Platform Tools come with a wizard for gathering support information in case you need help with a
feature or operation (during deploying/debugging applications, logging, configurations, and so on). For more
information, see Gather Support Information [page 2282].
Related Information
SAP Web IDE is a fully extensible and customizable experience that accelerates the development life cycle with
interactive code editors, integrated developer assistance, and end-to-end application development life cycle
support. SAP Web IDE was developed by developers for developers.
SAP Web IDE is a next-generation cloud-based meeting space where multiple application developers can work together from a common Web interface, connecting to the same shared repository, with virtually no setup required.
Note
SAP Web IDE is only available in the Neo environment, but it supports developing applications for both the Neo
environment and the Cloud Foundry environment.
Related Information
SAP offers a Maven plugin that supports you in using Maven to develop Java applications for SAP Cloud Platform.
It allows you to conveniently call the SAP Cloud Platform console client and its commands from the Maven
environment.
Most commands that are supported by the console client are available as goals in the plugin. To use the plugin, you require an SAP Cloud Platform SDK for Neo environment, which can be downloaded automatically with the plugin. Each version of the SDK has a matching Maven plugin version.
Note
The Maven plugin applies only to the Neo environment.
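As a sketch, such a plugin is typically declared in the project's pom.xml. The artifactId and version property shown here are assumptions; check the plugin documentation that matches your SDK release for the exact coordinates:

```xml
<!-- Assumed coordinates; verify against the Maven plugin documentation
     for your SDK for Neo environment version. -->
<plugin>
  <groupId>com.sap.cloud</groupId>
  <artifactId>neo-java-web-maven-plugin</artifactId>
  <version>${neo.sdk.version}</version>
</plugin>
```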
For a list of goals and parameters, usage guide, FAQ, and examples, see:
Related Information
SAP Cloud Platform console client for the Neo environment enables development, deployment and configuration
of an application outside the Eclipse IDE as well as continuous integration and automation tasks. The tool is part of
the SAP Cloud Platform SDK for Neo environment. You can find it in the tools folder of your SDK location.
Downloading and setting up the console client: Set Up the Console Client [page 1135]
Opening the tool and working with the commands and parameters: Using the Console Client [page 1792]
Console Client Video Tutorial
Verbose mode of output: Verbose Mode of the Console Commands Output [page 1795]
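As an illustration of a typical console client call, the sketch below composes a deploy command from the SDK's tools folder. The host, account, application, and user values are placeholders; the command line is printed for review rather than executed against a real landscape:

```shell
#!/bin/sh
# Sketch: invoking the Neo console client from the SDK's tools folder.
# All values below are placeholders; adapt them to your landscape.
SDK_TOOLS=./tools                   # the tools folder of your SDK location
HOST=hanatrial.ondemand.com         # example landscape host (trial)
ACCOUNT=mysubaccount                # placeholder subaccount technical name
APP=helloworld                      # placeholder application name

# Compose the command and print it so it can be reviewed before running.
cmd="$SDK_TOOLS/neo.sh deploy --host $HOST --account $ACCOUNT --application $APP --source example.war --user myuser"
echo "$cmd"
```

On Windows, the script is neo.bat instead of neo.sh; the parameters are the same.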
Use the Cloud Foundry command line interface (CF CLI) to deploy and manage your applications in the Cloud
Foundry environment.
Downloading and installing the console client for the Cloud Foundry environment: Download and Install the Cloud Foundry Command Line Interface [page 948]
Cloud Foundry command line interface plug-ins: CF CLI: Plug-ins [page 2006]
Find a list of the product prerequisites and restrictions for SAP Cloud Platform.
General Constraints
● Upload limit: an application deployed in the Neo environment can be up to 1.5 GB. If the application is
packaged as a WAR file, the size of the unzipped content is taken into account.
● For more information on constraints and default settings to consider when you deploy an application in the Cloud Foundry environment, see http://docs.cloudfoundry.org/devguide/deploy-apps/large-app-deploy.html#limits_table.
● SAP Cloud Platform Tools for Java and the SDK have been tested with Java 7 and Java 8.
● SAP Cloud Platform Tools for Java and the SDK run in many operating environments with Java 7 and Java 8 that are supported by Eclipse. However, SAP does not systematically test all platforms.
● SAP Cloud Platform Tools for Java must be installed on Eclipse IDE for Java EE developers.
For the platform development tools, SDK, Cloud Connector, and SAP JVM, see https://tools.hana.ondemand.com/#cloud.
Browser Support
The SAP Cloud Platform cockpit supports the following desktop browsers on Microsoft Windows, and where
mentioned below, on macOS:
Browser Versions
For a list of supported browsers for developing SAPUI5 applications, see Browser and Platform Matrixes.
For security reasons, SAP Cloud Platform does not support TLS 1.1 and older, SSL 3.0 and older, and RC4-based cipher suites. Make sure your browser supports at least TLS 1.2 and modern ciphers (for example, AES).
Services
You can find the restrictions related to each SAP Cloud Platform service in the respective service documentation.
For more information, see Capabilities [page 24].
Links to additional information about the Cloud Foundry environment that is useful to know but not necessarily
directly connected to the SAP Cloud Platform Cloud Foundry environment.
BOSH: http://bosh.cloudfoundry.org
Buildpacks: http://docs.cloudfoundry.org/buildpacks
User-provided service instances: http://docs.cloudfoundry.org/devguide/services/user-provided.html
Get onboarded at SAP Cloud Platform, explore and familiarize yourself with what's available, configure your
environment, and subscribe to business applications:
● Getting Started with a Trial Account in the Cloud Foundry Environment [page 909]
● Getting Started with a Trial Account in the Neo Environment [page 918]
● Getting Started with a Customer Account: Workflow in the Cloud Foundry Environment [page 927]
● Getting Started with a Customer Account: Workflow in the Neo Environment [page 931]
● Getting a Global Account [page 935]
● Setting Up a Global Account [page 936]
● Getting Started with Business Application Subscriptions [page 967]
● Tutorials [page 980]
● Glossary [page 983]
Related Information
Try It Out: 3 Easy Steps to Get You Started With the Cloud Foundry Environment [page 909]
About Trial Accounts in the Cloud Foundry Environment [page 913]
Get Started with a Trial Account: Workflow in the Cloud Foundry Environment [page 914]
2.1.1 Try It Out: 3 Easy Steps to Get You Started With the Cloud
Foundry Environment
Get a free trial account in the Cloud Foundry environment and quickly get started by deploying a sample
application and binding it to a service instance:
3. Create a PostgreSQL Service Instance and Bind It to the Application [page 912]
First, sign up for a free trial account in the Cloud Foundry environment. If you already have a trial account, open the
cockpit, navigate to your subaccount and to the space in which you'd like to deploy the application and continue
with step 2.
Procedure
You’ll receive a confirmation e-mail with instructions for activating your trial account.
c. Click the link in the e-mail to confirm your address and activate your trial account.
d. Choose Continue and log on with your credentials.
3. Choose Cloud Foundry Trial.
4. Select the region closest to you and choose OK.
A global account, subaccount, org, and space are automatically created for you.
5. Choose Go to Space.
Results
This process also creates a trial account for you in the Neo environment.
Download a sample Node.js application and deploy it to your trial space using the SAP Cloud Platform cockpit.
Procedure
1. Open https://github.com/SAP/cf-sample-app-nodejs .
2. Choose Clone or download, then choose Download ZIP.
3. Open the zip file and extract its content to a folder on your local computer.
4. Navigate to the folder to which you've just extracted the content of the zip file and rename it to hello-nodejs.
6. Open the manifest.yml file located in the hello-nodejs folder with an editor of your choice.
7. Add the following line to the manifest.yml file and provide a unique host name for the myhost parameter:
host: <myhost>
Note
Make sure your subaccount and your trial space have enough resources available. For more information
on changing quotas, see Adding Quotas and Space Quota Plans [page 960].
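A minimal manifest.yml for such a sample might then look as follows. Apart from the host line described above, all attribute values are illustrative and may differ from the sample's actual manifest:

```yaml
# Illustrative manifest; only the "host" entry is the addition described
# above. Replace my-unique-host-1234 with your own unique host name.
applications:
- name: hello-nodejs
  host: my-unique-host-1234
  memory: 128M
```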
Use the SAP Cloud Platform cockpit to create a PostgreSQL service instance and bind it to the Node.js application
you've just created.
Procedure
Results
You've just deployed a sample Node.js application in your trial account and bound it to a PostgreSQL service
instance.
You want to learn more? See Get Started with a Trial Account: Workflow in the Cloud Foundry Environment [page
914].
A trial account lets you try out a limited set of features in the Cloud Foundry environment for free. Access is open
to everyone.
Trial accounts are intended for personal exploration, and not for production use or team development. The
features included in a trial account are limited, compared to an enterprise account. Consider the following before
using a trial account:
For more information about the regions that are available for trial accounts, see Regions and API Endpoints
Available for the Cloud Foundry Environment [page 22].
Before you begin, sign up for a free trial account. See Get a Free Trial Account [page 910]. For more information
about trial accounts, see About Trial Accounts in the Cloud Foundry Environment [page 913].
● Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953]
● Create Subaccounts Using the Cockpit [page 938]
● Download and Install the Cloud Foundry Command Line Interface [page 948]
● Log On to the Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 948]
● Global Accounts and Subaccounts [page 10]
1. When you register for a trial account, a subaccount and a space are created for you. You can create additional
subaccounts and spaces, thereby further breaking down your account model and structuring it according to
your development scenario, but first it's important you understand how to navigate to your accounts and
spaces using the cockpit. See Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit
[page 953].
2. If you like, create further subaccounts. See Create Subaccounts Using the Cockpit [page 938].
3. If you haven't done so already, now is a good time to download and install the Cloud Foundry Command Line Interface (cf CLI). This tool allows you to administer and configure your environment, enable services, and deploy applications. See Download and Install the Cloud Foundry Command Line Interface [page 948]. But don't worry, you can also perform all the necessary tasks using the SAP Cloud Platform cockpit, which you don't need to install.
1. Now that you've set up your account model, it's time to think about member management. You can add
members at different levels. For example, you can add members at the org level. See Add Organization
Members Using the Cockpit [page 956]. For more information about the roles that are available on the
different levels, see About Roles in the Cloud Foundry Environment [page 955].
1. Develop your application. Check out the Developer Guide for tutorials and more information. See Applications
in the Cloud Foundry Environment [page 990].
2. Deploy your application. See Deploy Applications [page 1118].
3. Integrate your application with a service. To do so, first create a service instance. See Creating Service
Instances [page 1067].
4. Bind the service instance to your application. See Binding Service Instances to Applications [page 1069].
5. Alternatively, you can also create and use service keys. See Creating Service Keys [page 1071]. For more
information on using services and creating service keys, see About Services [page 1066].
6. You can also create instances of user-provided services. See Creating User-Provided Service Instances [page
1073].
Related Information
FAQ for Cloud Foundry environment within SAP Cloud Platform on the SAP Cloud Platform Public Wiki
Related Information
Try It Out: 3 Easy Steps to Get You Started With the Neo Environment [page 918]
About Trial Accounts in the Neo Environment [page 924]
Get Started with a Trial Account: Workflow in the Neo Environment [page 925]
2.2.1 Try It Out: 3 Easy Steps to Get You Started With the Neo
Environment
Get a free trial account in the Neo environment and quickly get started by deploying a sample application:
First, you need to sign up for a free trial account in the Neo environment.
Download and install all tools necessary to deploy an application in the Neo environment.
2.1 Download and Install the SAP Cloud Platform SDK for Neo Environment
1. Open https://tools.hana.ondemand.com/#cloud
2. From the SAP Cloud Platform Neo Environment SDK section, download the ZIP file for Java Web Tomcat 8 and
save it to your local file system.
3. Extract the ZIP file to a folder on your computer or network.
1. For Neon, download Eclipse IDE for Java EE Developers from https://www.eclipse.org/downloads/
packages/release/Neon/3 and for Oxygen, download it from https://www.eclipse.org/downloads/
packages/release/Oxygen/R .
2. Find the ZIP file you have downloaded on your local file system and unpack the archive.
3. Go to the eclipse folder and run the eclipse executable file.
4. Specify a Workspace directory.
5. To open the Eclipse workbench, choose Workbench in the upper right corner.
6. In the main menu, choose Window > Preferences.
Note
For some operating systems, the path is Eclipse > Preferences.
7. Configure your proxy settings (in case you work behind a proxy or a firewall):
1. Go to General > Network Connections.
2. In the Active Provider dropdown menu, choose Manual.
5. Choose Next.
6. Java Web Tomcat 8 is set as default name. You can change it if needed.
7. Choose Browse to locate the folder to which you extracted the SDK.
8. Choose Finish.
9. In the Preferences window, choose OK.
12. Install the Maven Integration for Eclipse WTP:
Import a sample Java Hello World application into Eclipse and deploy it to SAP Cloud Platform. Use the SAP Cloud
Platform cockpit to view and monitor your application.
1. From the Eclipse main menu, choose File > Import… > Maven > Existing Maven Projects and then choose Next.
2. Browse to locate and select the directory containing the sample application: <sdk>/samples/hello-world,
and choose OK.
3. Choose Finish to start the import.
1. Open the servlet in the Java editor and from the context menu, choose Run As > Run on Server.
2. The Run On Server dialog box appears. Make sure that the Manually define a new server option is selected.
3. As server type, select SAP > SAP Cloud Platform.
4. For Server name, enter https://hanatrial.ondemand.com.
5. Choose Next.
6. On the New Server wizard page, specify a unique application name (only lowercase Latin letters and digits are
allowed).
7. Leave the Automatic option for the runtime.
8. Enter your subaccount name, e-mail or user name, and password.
9. Choose Finish. This triggers the publishing of the application on SAP Cloud Platform.
10. After publishing has completed, the Internal Web Browser opens and shows the application.
3.3 View and Monitor the Application in the SAP Cloud Platform Cockpit
1. Open the SAP Cloud Platform Home page and select Neo Trial.
2. In the navigation area, choose Applications Java Applications .
3. Select the sample application you just deployed.
4. Use the Monitoring tab in the navigation area to monitor the application.
You've just deployed a sample Java Hello World application to SAP Cloud Platform.
You want to learn more? See Get Started with a Trial Account: Workflow in the Neo Environment [page 925].
A trial account allows you to try out a limited set of features of the Neo environment for free. Access is open to
everyone.
Trial accounts are intended for personal exploration, and not for productive use or team development. The amount
of functionality a trial account offers is limited, compared to an enterprise account. Consider the following before
you decide to use a trial account:
Before you begin, sign up for a free trial account. See Get a Free Trial Account [page 919]. For more information
about trial accounts, see About Trial Accounts in the Neo Environment [page 924].
1. Develop and deploy your application. Check out the Developer Guide for tutorials and more information. See
Applications in the Neo Environment [page 1119].
2. Enable a service so that you can integrate it with an application. See Enable Services in the Neo Environment
[page 1121].
● Global Accounts: Enterprise versus Trial [page 11]
Before you begin, purchase a customer account or join the partner program. See Purchase a Customer Account [page 935] or Join the Partner Program [page 935]. For more information about account types, see Global Accounts: Enterprise versus Trial [page 11].
● Download and Install the Cloud Foundry Command Line Interface [page 948]
● Log On to the Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 948]
● Create Cloud Foundry Spaces Using the Cockpit [page 946]
● Global Accounts and Subaccounts [page 10]
● Create Subaccounts Using the Cockpit [page 938]
1. After you've received your logon data by email, create subaccounts in your global account. This allows you to
further break down your account model and structure it according to your business needs. See Create
Subaccounts Using the Cockpit [page 938].
2. If you haven't done so already, now is a good time to download and install the Cloud Foundry Command Line Interface (cf CLI). This tool allows you to administer and configure your environment, enable services, and deploy applications. See Download and Install the Cloud Foundry Command Line Interface [page 948]. But don't worry, you can also perform all the necessary tasks using the SAP Cloud Platform cockpit, which you don't need to install.
3. If you'd like to use the cf CLI, log on to your environment. See Log On to the Cloud Foundry Environment Using
the Cloud Foundry Command Line Interface [page 948].
4. Create spaces. See Create Cloud Foundry Spaces Using the Cockpit [page 946]. If you want to learn more
about subaccounts, orgs, and spaces, and how they relate to each other, see . You'll also find some
recommendations for setting up your account model so that it meets your business needs.
1. You can either use the cockpit or the cf CLI to configure your environment. If you'd like to use the cockpit, it's
important you understand how you can navigate to your accounts and spaces. See Navigate to Global
Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
1. Develop your application. Check out the Developer Guide for tutorials and more information. See Applications
in the Cloud Foundry Environment [page 990].
2. Deploy your application. See Deploy Applications [page 1118].
3. Integrate your application with a service. To do so, first create a service instance. See Creating Service
Instances [page 1067]
4. Bind the service instance to your application. See Binding Service Instances to Applications [page 1069].
5. Alternatively, you can also create and use service keys. See Creating Service Keys [page 1071]. For more
information on using services and creating service keys, see About Services [page 1066].
6. You can also create instances of user-provided services. See Creating User-Provided Service Instances [page
1073].
Related Information
FAQ for Cloud Foundry environment within SAP Cloud Platform on the SAP Cloud Platform Public Wiki
Before you begin, purchase a customer account or join the partner program. See Purchase a Customer Account
[page 935] or Join the Partner Program [page 935]. For more information about account types, see Global
Accounts: Enterprise versus Trial [page 11].
After you've received your logon data by email, create subaccounts in your global account. This allows you to
further break down your account model and structure it according to your business needs. See Create
Subaccounts Using the Cockpit [page 938].
1. Since you need to use the SAP Cloud Platform cockpit to configure your environment, it's important you
understand how you can navigate to your global account and subaccounts. See Navigate to Global Accounts
and Subaccounts [page 964].
2. It's time to think about member management. You can add members to subaccounts and assign different
roles to those members. For more information, see Add Members to Subaccounts [page 965]. For more
information about roles, see Managing Member Authorizations [page 1671].
3. Before you can start using resources such as application runtimes, you need to manage your entitlements and
add quotas to your subaccounts. See Adding Quotas to Subaccounts [page 966]. To learn more about
entitlements and quotas, see About Entitlements and Quotas [page 945].
1. Develop and deploy your application. Check out the Developer Guide for tutorials and more information. See
Applications in the Neo Environment [page 1119].
2. Enable a service so that you can integrate it with an application. See Enable Services in the Neo Environment
[page 1121].
To use a paid enterprise account, you can either purchase a customer account or join the partner program to
purchase a partner account.
A customer account is an enterprise account that allows you to host productive, business-critical applications with
24x7 support.
When you want to purchase a customer account, you can select from a set of predefined packages. For more
information, see https://cloudplatform.sap.com/pricing.html . Contact us on SAP Cloud Platform or via an
SAP sales representative.
In addition, you can upgrade and refine your resources later on. You can also contact your SAP sales representative
and opt for a configuration, tailored to your needs.
After you have purchased your customer account, you will receive an e-mail with a link to the Home page of SAP
Cloud Platform.
Related Information
A partner account is an enterprise account that enables you to build applications and to sell them to your
customers.
To become a partner, you need to fill in an application form and then sign your partner contract. You will be
assigned to a partner account with the respective resources. To apply for the partner program, visit https://www.sapappsdevelopmentpartnercenter.com/en/signup/new/. You will receive a welcome e-mail with further information afterwards.
Your SAP Cloud Platform global account is the entry point for managing the resources, landscape, and
entitlements for your departments and projects in a self-service manner.
Set up your account model by creating subaccounts in your enterprise account. You can create any number of
subaccounts in any environment (Neo and Cloud Foundry) and region.
The global account is where you can start working in SAP Cloud Platform.
Prerequisites
You have received a Welcome e-mail from SAP for your global account.
Context
When your organization signs a Cloud Platform Enterprise Agreement, an e-mail is sent to the e-mail address
specified in the contract. The e-mail message contains the link to log on to the system and the SAP Cloud Identity
credentials (user ID) for the specified user. These credentials can also be used for sites such as the SAP Store or the SAP Community.
Procedure
Alternatively, choose, for example, https://account.eu1.hana.ondemand.com. To avoid latency, make sure you choose a logon URL in the region closest to you.
For more information, see Regions and Hosts Available for the Neo Environment [page 23].
Tip
In the Global Accounts page, you can filter the display of global accounts:
○ Filter by role to display global accounts according to your role in the global account, administrator or
member.
○ Filter by region to display global accounts that contain subaccounts in the selected region.
Add users as global account members using the SAP Cloud Platform cockpit.
Prerequisites
Note
The Administrator role is automatically assigned to the user who has started a trial account or who has
purchased resources for an enterprise account.
Context
All global account members have global account administrator permissions for their global account, and can do
the following:
● View all the subaccounts in the global account, meaning all the subaccount tiles in the global account's
Subaccounts page.
● Edit general properties of the subaccounts in the global account from the Edit icon in the subaccount tile.
● Create a new subaccount in the global account.
● View, add, and remove global account members.
● Manage entitlements for the global account.
Restriction
Adding members to global accounts is only possible in enterprise accounts, not in trial accounts.
Procedure
The users you add as members at global account level are automatically assigned the Administrator role.
Next Steps
To delete members at global account level, choose (Delete) next to the user's ID.
Related Information
Create subaccounts in your global account using the SAP Cloud Platform cockpit.
Prerequisites
Context
You create subaccounts in your global account. The tile for the new subaccount is then available in the global
account view. You can change details for your subaccount and you can delete your subaccount.
In a subaccount in the Cloud Foundry environment, the subdomain that you specify becomes part of the URL for
accessing subscribed applications. The format of this URL follows the pattern
<subdomain>.<application>.cfapps.<host>. The subdomain can contain only lowercase letters and digits
and must be unique within your global account.
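The character constraint on the subdomain can be checked mechanically. A minimal shell sketch of such a check; the subdomain value used here is an example, not a reserved name:

```shell
# Check that a subdomain contains only lowercase letters and digits,
# as required for subaccounts in the Cloud Foundry environment.
# "mycompany01" is an example value.
subdomain="mycompany01"
if printf '%s' "$subdomain" | grep -Eq '^[a-z0-9]+$'; then
  echo "valid subdomain"
else
  echo "invalid subdomain"
fi
```

Note that uniqueness within the global account cannot be checked locally; the cockpit validates it when you create the subaccount.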
When you create a new subaccount in the Neo environment, you can choose to copy settings from an existing Neo
subaccount in the same region.
Subaccount creation happens in the background. Some details, including the subaccount name and description,
are available right away; settings that you copy are applied in the background with some delay.
You can enable subaccounts to use beta features, including services and applications, which SAP occasionally
makes available for SAP Cloud Platform. This option is not selected by default and is available only to
administrators of your enterprise account.
Caution
You should not use SAP Cloud Platform beta features in subaccounts that belong to productive enterprise
accounts. Any use of beta functionality is at the customer's own risk, and SAP shall not be liable for errors or
damages caused by the use of beta features.
Procedure
Additional fields are displayed according to the selected environment. Select settings as required.
5. (Optional) To enable the use of beta features in the subaccount, select the Enable beta features checkbox.
6. Save your changes.
Results
A new tile appears in the Global Account page with the subaccount details.
Tip
You can filter the display of subaccounts by the region of the subaccount.
Related Information
You can create subaccounts using the console client in the Neo environment.
Prerequisites
Restriction
Creating subaccounts is not possible in trial accounts in the Neo environment.
● You must be a member of the global account that contains the subaccount.
● You work in the Neo environment.
● You set up the console client. See Set Up the Console Client [page 1135].
Procedure
Note
For more information on creating new subaccounts and cloning existing subaccounts using the console client,
see create-account [page 1817].
A global account can group together different subaccounts that an administrator makes available to users.
Administrators can assign the available quotas of a global account to its different subaccounts and move them
between subaccounts that belong to the same global account.
For enterprise global accounts, you can create multiple subaccounts in either environment, Cloud Foundry or Neo.
Every trial user must have a subaccount in the Neo environment. There is no global account associated with this
trial account. For a Cloud Foundry trial, you get a trial global account in addition to your Neo trial. Within your trial
global account, you can have multiple subaccounts in the Cloud Foundry environment.
When you create a subaccount in a trial account in the Cloud Foundry environment, the system creates a Cloud
Foundry org automatically.
Note
The subaccount and the org have a 1:1 relationship. They have the same name and therefore also the same
navigation level in the cockpit.
Within that Cloud Foundry org, you can create spaces. Spaces enable you to further break down your account
model and use services and functions in the Cloud Foundry environment.
The hierarchical structure between global accounts and subaccounts lets you define an account model that
accurately fits your business and development needs.
● Dev - use it for development purposes and for testing the increments in the cloud. You can grant permissions
to all application developers.
● Test - use it for testing the developed applications and their critical configurations to ensure quality delivery
(integration testing and testing in a productive-like environment prior to making the application publicly available).
● Prod - use it to run productive applications. Give permissions only to operators.
You can transfer an application from one subaccount to another by redeploying it in the respective subaccount.
Related Information
The hierarchical structure between global accounts, subaccounts, orgs, and spaces lets you define an account
model that accurately fits your business and development needs.
To decide whether to create separate subaccounts or separate spaces within the same subaccount for different
units, teams, stages, or scenarios, consider the different configuration possibilities available for subaccounts and
spaces:
Cross-consume SAP HANA tenant databases: not possible across subaccounts; within a subaccount, it is possible to share SAP HANA tenant databases with other spaces.
Recommendation
We recommend that you use subaccounts to create a staged development environment, meaning that you
create one subaccount each for the development, testing, and production stages. This allows for flexible and
refined member management, as it lets you configure your own identity zone at the subaccount level.
Related Information
Assign the quotas that are available in your global account to your subaccounts using the SAP Cloud Platform
cockpit.
Prerequisites
● Cloud Foundry environment: You have the Administrator role in the global account.
● Neo environment:
○ You have an enterprise account.
Restriction
In the Neo environment, adding quotas to subaccounts is not possible in trial accounts.
○ You have the Administrator role in at least two subaccounts. You can only distribute quotas between
subaccounts for which you have the Administrator role.
Context
● The category Other Subaccounts shows the total quota of all subaccounts in which you are not a member, but
without any details, since you cannot access those subaccounts.
Procedure
1. Choose the global account that contains the subaccounts to which you'd like to add quotas. For more
information, see Navigate to Global Accounts and Subaccounts [page 964].
2. In the navigation area, choose Entitlements.
You see a list of all the resources you are entitled to use.
Note
The limited set of resources available in Cloud Foundry trial accounts is added automatically to the
application runtime and services, but you can change that allocation.
3. Choose Edit.
4. Use the + and – buttons to adjust the quotas in the specified limits as needed and save your changes.
Note
Once you have distributed the full amount of your purchased quotas, you cannot increase the allocation further.
Related Information
An entitlement equals your right to provision and consume a resource. A quota represents the numeric quantity
that defines the maximum allowed consumption of that resource. You must distribute quotas to your subaccounts
before you can start using application runtimes and services in subaccounts.
Note
The limited set of quotas available in Cloud Foundry trial accounts is added automatically to the application
runtime and services, but you can change that allocation. For more information, see Add Quotas to
Subaccounts Using the Cockpit [page 944].
In the Cloud Foundry environment, you can further distribute the quotas that are allocated to a subaccount across
the spaces in that subaccount by creating space quota plans and assigning them to the spaces. For more
information on space quota plans in the Cloud Foundry environment, see https://docs.cloudfoundry.org/adminguide/quota-plans.html.
Related Information
Create spaces in your Cloud Foundry organization using the SAP Cloud Platform cockpit.
Prerequisites
You have the Org Manager role in the organization in which you'd like to create a space.
Procedure
1. Choose the subaccount that contains the Cloud Foundry organization in which you'd like to create a space.
Note
Subaccounts and orgs have a 1:1 relationship. They have the same name and therefore also the same
navigation level in the cockpit.
Note
If you haven't created a Cloud Foundry organization yet, create one first by choosing Enable Cloud Foundry.
3. Enter a space name and choose the permissions you'd like to assign to your ID, then choose Ok.
To assign quota to spaces, see Change Space Quota Plans Using the Cockpit [page 1666].
You can change the name of a space by choosing (Edit) on its tile.
Related Information
Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953]
Creating Spaces Using the Cloud Foundry Command Line Interface [page 947]
You can use the Cloud Foundry Command Line Interface (cf CLI) to log on to the Cloud Foundry environment and
create spaces, but you need to download and install the cf CLI first.
Prerequisites
● (Enterprise accounts only) You have created at least one subaccount and enabled the Cloud Foundry
environment in this subaccount.
For more information, see Create Subaccounts Using the Cockpit [page 938].
Note
In a Cloud Foundry trial account, the Cloud Foundry environment has been activated for you automatically
and a first space "dev" has been created for you.
● You must be assigned the Org Manager role in the organization in which you'd like to create a space. For more
information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
Context
1. Download and Install the Cloud Foundry Command Line Interface [page 948] (You can skip this step if you've
already downloaded and installed the cf CLI.)
2. Log On to the Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 948] (You
can skip this step if you're already logged on to Cloud Foundry.)
Download and set up the Cloud Foundry Command Line Interface (cf CLI) to start working with the Cloud Foundry
environment.
Procedure
1. Download the latest version of cf CLI from GitHub at the following URL: https://github.com/cloudfoundry/cli#downloads
Use the Cloud Foundry Command Line Interface (cf CLI) to log on to the Cloud Foundry space.
Prerequisites
● (Enterprise accounts only) You have created at least one subaccount and enabled the Cloud Foundry
environment in this subaccount. For more information, see Create Subaccounts Using the Cockpit [page 938].
● You have downloaded and installed the cf CLI. For more information, see Download and Install the Cloud Foundry
Command Line Interface [page 948].
Procedure
Note
There is no specific endpoint for trial accounts. Both enterprise and trial accounts use the same API
endpoints.
cf login
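The bare command above prompts for the API endpoint, e-mail address, and password. You can also pass the endpoint directly; a sketch, assuming the eu10 region as an example (substitute the API endpoint shown on your subaccount's Overview page in the cockpit):

```shell
# Log on to the Cloud Foundry environment; -a sets the API endpoint.
# The endpoint below is an example for the eu10 region; use the one
# displayed in the cockpit for your subaccount.
cf login -a https://api.cf.eu10.hana.ondemand.com
```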
Use the cf create-space command to create spaces in your Cloud Foundry organization using the Cloud
Foundry Command Line Interface (cf CLI).
Prerequisites
● (Enterprise accounts only) Create at least one subaccount and enable the Cloud Foundry environment in this
subaccount. For more information, see Create Subaccounts Using the Cockpit [page 938].
● You must be assigned the Org Manager role in the organization in which you'll create a space. For more
information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and Install
the Cloud Foundry Command Line Interface [page 948] and Log On to the Cloud Foundry Environment Using
the Cloud Foundry Command Line Interface [page 948].
Procedure
1. Enter the following string, specifying a name for the new space, the name of the organization, and the quota
you'd like to assign to it:
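A sketch of the command, assuming a space named dev in an org named my-org with an existing space quota plan my-quota (all three names are placeholders):

```shell
# Create a space in an org; -o names the org, and the optional -q flag
# assigns an existing space quota plan to the new space.
cf create-space dev -o my-org -q my-quota
```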
Note
If you are assigned to only one Cloud Foundry organization and space, the system automatically targets you
to the relevant Cloud Foundry organization and space when you log on.
Learn how to navigate in the cockpit and how to add members and quotas using the cockpit or the Cloud Foundry
Command Line Interface.
1. Now that you've set up your account model, it's time to think about member management. You can add
members at different levels. For example, you can add members at the org level. See Add Organization
Members Using the Cockpit [page 956]. For more information about the roles that are available on the
different levels, see About Roles in the Cloud Foundry Environment [page 955].
2. You can also add members at the space level. See Add Space Members Using the Cockpit [page 958].
1. You can either use the cockpit or the cf CLI to configure your environment. If you'd like to use the cockpit, see
Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
2. It's time to think about member management. You can add members at different levels. For example, you can
add members at org level. See Add Organization Members Using the Cockpit [page 956]. For more
information about roles, see About Roles in the Cloud Foundry Environment [page 955].
3. You can also add members at a space level. See Add Space Members Using the Cockpit [page 958].
4. Before you can start using resources such as services or application runtimes, you need to manage your
entitlements and add quotas to your subaccounts. See Add Quotas to Subaccounts Using the Cockpit [page
944]. To learn more about entitlements and quotas, see About Entitlements and Quotas [page 945].
5. You can also assign quotas to different spaces in a subaccount. See Adding Quotas and Space Quota Plans
[page 960].
To administer your Cloud Foundry environment, navigate to your global account and to your subaccounts, orgs,
and spaces in the SAP Cloud Platform cockpit.
Prerequisites
● Sign up for an enterprise or a trial account and receive your logon data.
For more information, see Get a Free Trial [page 910] or .
● Create the subaccount, org, or space to which you want to navigate.
For more information, see Create Subaccounts Using the Cockpit [page 938] and Create Cloud Foundry
Spaces Using the Cockpit [page 946].
Procedure
Result:
Home / <global_account>
Subaccount 1. Select the global account that contains the subaccount you'd like to navigate to
by following the steps described above.
2. Select the subaccount.
For more information about creating new subaccounts, see Create Subaccounts Using the Cockpit [page 938].
Result:
Org Navigate to the subaccount that contains the Cloud Foundry org by following the
steps described above. If you've already enabled the Cloud Foundry environment in
your subaccount, you see the name of your organization, the number of its spaces
and members, and its API endpoint on the Overview page of your subaccount. If you
haven't enabled Cloud Foundry yet, choose Enable Cloud Foundry to create a Cloud
Foundry org.
Result:
Note
Your subaccount and your org have a 1:1 relationship. They have the same name and therefore also the same navigation level in the cockpit.
Space 1. Navigate to the subaccount that contains the space you'd like to navigate to.
2. In the navigation menu, choose Spaces.
Note
If you don't see a navigation entry for Spaces, enable the Cloud Foundry environment first by choosing Enable Cloud Foundry.
3. Select the space. For more information about creating spaces, see Create Cloud
Foundry Spaces Using the Cockpit [page 946].
Result:
You can add members to your global account, orgs, and spaces and assign different roles to them:
Related Information
Roles determine which features in the cockpit users can view and access, and which actions they can initiate.
SAP Cloud Platform includes predefined roles that are specific to the navigation level in the cloud cockpit; for
example, the roles at the level of the organization differ from the ones for the space. A user can be assigned one or
more roles, where each role comes with a set of permissions. Roles apply to all operations that are associated with
the global account, the organization, or the space, irrespective of the tool used (Eclipse-based tools, cockpit, and
cf CLI).
The following roles can be assigned to users in the Cloud Foundry environment:
Global account - Global Account Administrator: A Global Account Administrator has permission to add members to the global account.
Note
The Administrator role is automatically assigned to the user who
has started a trial account or who has purchased resources for an
enterprise account.
Restriction
You can add members to global accounts only in enterprise accounts, that is, not in trial accounts.
Space Developer
Space Auditor
Related Information
You can add organization members and assign roles to them at the subaccount level in the cockpit.
Prerequisites
You have the Org Manager role for the org in question.
Note
You automatically have the Org Manager role in a subaccount that you created.
Procedure
1. Choose the subaccount that contains the org to which you'd like to add members.
2. In the navigation area, choose Members.
All members currently assigned to the organization are shown in a list.
3. Choose Add Members.
4. Enter one or more e-mail addresses.
Next Steps
● To select or unselect roles for a member, choose (Edit). The changes you make to the roles of a member
take effect immediately.
● To remove all the roles of a member, choose (Delete). This removes the member from the organization.
Note
You can only remove members from the organization that are no longer assigned to any space in the
organization.
Related Information
You can use the Cloud Foundry Command Line Interface (cf CLI) to add organization members and assign roles to
them.
Prerequisites
Note
You automatically have the Org Manager role in a subaccount that you created.
Enter the following string, specifying the user name, the name of the organization, and the role:
Next Steps
To remove an org role from a user, enter the following string, specifying the user name, the name of the
organization, and the role:
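Both operations can be sketched as follows, with USERNAME and ORG as placeholders and OrgManager as an example role (the cf CLI also accepts BillingManager and OrgAuditor):

```shell
# Grant the OrgManager role to a user in an org:
cf set-org-role USERNAME ORG OrgManager

# Remove the same role again:
cf unset-org-role USERNAME ORG OrgManager
```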
You can add space members and assign roles to them at the space level in the cockpit.
Prerequisites
Note
You must add e-mail addresses of registered members who have an S-user or a trial account.
Administrators can request S-user IDs on the SAP ONE Support Launchpad User Management application:
1271482 .
If users do not have a registered S-user ID, they can register for a trial account instead to get a valid user.
This lets them use the cloud cockpit and the cf CLI to connect to the space to which they were added. For more
information on getting a trial account, see Get a Free Trial Account [page 910].
Procedure
1. Navigate to the space to which you'd like to add members. For more information, see Navigate to Global
Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
2. In the navigation area, choose Members.
All members currently assigned to the space are shown in a list.
Next Steps
● To select or unselect roles for a member, choose (Edit). The changes you make to the roles of a member
take effect immediately.
● To remove all the roles of a member, choose (Delete). This removes the member from the space.
You can use the Cloud Foundry Command Line Interface (cf CLI) to add space members and assign roles to them.
Prerequisites
Procedure
Enter the following string, specifying the user name, the name of the organization, the name of the space, and the
role:
To remove a space role from a user, enter the following string, specifying the user name, the name of the
organization, the name of the space, and the role:
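The two commands can be sketched as follows, with USERNAME, ORG, and SPACE as placeholders and SpaceDeveloper as an example role (the cf CLI also accepts SpaceManager and SpaceAuditor):

```shell
# Grant the SpaceDeveloper role to a user in a space:
cf set-space-role USERNAME ORG SPACE SpaceDeveloper

# Remove the same role again:
cf unset-space-role USERNAME ORG SPACE SpaceDeveloper
```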
You need to distribute the resources that you are entitled to in your enterprise account to subaccounts before you
can start using services and application runtimes. You can further distribute these resources across spaces in your
subaccounts.
You can ensure that resources do not exceed allocated space limits by assigning quotas to spaces. To do so, create
space quota plans and assign them to spaces using the SAP Cloud Platform cockpit or the Cloud Foundry
Command Line Interface.
For more information on creating and modifying quota plans for spaces, see https://docs.cloudfoundry.org/adminguide/quota-plans.html#space.
Related Information
Prerequisites
The Org Manager role for the org in which you want to create a space quota plan.
Procedure
1. Navigate to the subaccount that contains the spaces to which you want to add quotas. For more information,
see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
Note
The org quota limit is applicable for a resource if you do not enter a space quota limit. If the space quota
limit for a resource exceeds the org quota limit, the org quota limit applies.
You can use the Cloud Foundry Command Line Interface to create space quota plans.
Prerequisites
● The Org Manager role for the org that contains the spaces to which you want to assign quotas.
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and Install
the Cloud Foundry Command Line Interface [page 948] and Log On to the Cloud Foundry Environment Using
the Cloud Foundry Command Line Interface [page 948].
Procedure
Open a command line and enter the following string, replacing QUOTA with the name for your space quota plan:
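A sketch of the command; the limit flags are optional and the values shown are examples only:

```shell
# Create a space quota plan named QUOTA with example limits:
#   -m : total memory for all apps in spaces that use the plan
#   -r : maximum number of routes
#   -s : maximum number of service instances
cf create-space-quota QUOTA -m 2G -r 10 -s 20
```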
You can use the SAP Cloud Platform cockpit to assign quota plans to spaces.
Prerequisites
● The Org Manager role for the org in which you want to create a space quota plan.
● A space quota plan. For more information, see Create Space Quota Plans Using the Cockpit [page 960].
Procedure
1. Navigate to the subaccount that contains the spaces to which you want to add quotas. For more information,
see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
2. In the navigation menu, choose Quota Plans.
3. In the Plan Assignment section, select a quota plan for your spaces.
You use the Cloud Foundry Command Line Interface to assign the quotas available in your global account to your
subaccounts.
Prerequisites
● The Org Manager role for the org that contains the spaces to which you want to assign quotas.
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and Install
the Cloud Foundry Command Line Interface [page 948] and Log On to the Cloud Foundry Environment Using
the Cloud Foundry Command Line Interface [page 948].
● A space quota plan. For more information, see Create Space Quota Plans Using the Cloud Foundry Command
Line Interface [page 961].
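The assignment itself is a single command; a sketch, assuming a space named dev and an existing space quota plan named my-quota in the targeted org (both names are placeholders):

```shell
# Assign an existing space quota plan to a space in the targeted org:
cf set-space-quota dev my-quota
```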
Learn how to navigate in the cockpit and how to add members and quotas to subaccounts of your customer
account:
Restriction
Adding members and quotas to subaccounts is not possible in trial accounts in the Neo environment.
1. Since you need to use the cockpit to configure your environment, it's important you understand how you can
navigate to your global account and subaccounts. See Navigate to Global Accounts and Subaccounts [page
964].
2. It's time to think about member management. You can add members to subaccounts and assign different
roles to those members. For more information, see Add Members to Subaccounts [page 965]. For more
information about roles, see Managing Member Authorizations [page 1671].
3. Before you can start using resources such as application runtimes, you need to manage your entitlements and
add quotas to your subaccounts. See Adding Quotas to Subaccounts [page 966]. To learn more about
entitlements and quotas, see About Entitlements and Quotas [page 945].
To administer your Neo environment, navigate to your global account and to your subaccounts in the SAP Cloud
Platform cockpit.
Prerequisites
● Sign up for an enterprise or a trial account and receive your logon data. For more information, see Get a Free
Trial [page 919] or .
● Create the subaccount to which you want to navigate. For more information, see Create Subaccounts Using
the Cockpit [page 938].
Procedure
Result:
Home / <global_account>
Subaccount 1. Select the global account that contains the subaccount you'd like to navigate to
by following the steps described above.
2. Select the subaccount. For more information about creating subaccounts, see
Create Subaccounts Using the Cockpit [page 938].
Result:
Add users as members to a subaccount and assign roles to them using the SAP Cloud Platform cockpit.
Prerequisites
Restriction
In the Neo environment, adding members to subaccounts is only possible in enterprise accounts, not in trial
accounts.
Tip
Administrators can request S-user IDs on the SAP ONE Support Launchpad User Management application:
1271482 .
Context
In the Neo environment, you can assign predefined roles to subaccount members, but you can also create custom
platform roles. For more information, see Managing Member Authorizations [page 1671].
Procedure
Note
The name of a member is shown only after he or she visits the subaccount for the first time.
Next Steps
● To select or unselect roles for a member, choose (Edit). The changes you make to the member's roles
take effect immediately.
● You can enter a comment when editing user roles. This lets you track the reasons for subaccount membership
and other important data. The comments are visible to all members.
● You can send an e-mail to a member. This option appears only after the recipient visits the subaccount for the
first time.
● To remove all the roles of a member, choose Delete (trashcan). This also removes the member from the
subaccount.
● Choose the History button to view a list of changes to members (for example, added or removed members, or
changed role assignments).
● Use the filter to show only the members with the role you've selected.
Related Information
Learn more about entitlements and quotas in the Neo environment and how to add quotas to subaccounts using
the SAP Cloud Platform cockpit or the console client.
In this section:
You can use the SAP Cloud Platform cockpit to add quotas to subaccounts.
Prerequisites
● An enterprise account.
Restriction
In the Neo environment, adding quotas to subaccounts is not possible in trial accounts.
● You must be a member of the global account that contains the subaccount.
● Set up the console client. See Set Up the Console Client [page 1135].
Procedure
Example:
Note
For more information on adding quotas to subaccounts using the console client, see set-quota [page 1972].
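As a rough illustration only, a set-quota call might look like the following. The exact parameter list and quota names are documented under set-quota [page 1972]; every value below (host, subaccount, user, quota type, and amount) is a placeholder:

```shell
# Hypothetical sketch of a console client set-quota call; consult the
# set-quota reference for the actual parameters. All values are placeholders.
neo set-quota --host hana.ondemand.com --account mysubaccount \
  --user myuser --amount compute_unit_pro:1
```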
Learn how to subscribe to business applications in the Cloud Foundry environment and get started with providing
and subscribing to business applications in the Neo environment:
Learn how to subscribe to a multitenant business application provided by SAP in the Cloud Foundry environment.
Context
To consume a business application provided by SAP, you need to create a subaccount in your global account.
After subscribing to the application, you can configure application roles and assign those roles to your users.
Procedure
To subscribe to a business application in the Cloud Foundry environment, follow the steps below:
Create a subaccount in your global account to subscribe to a business application. You can use an existing
subaccount if you already have one.
Prerequisites
● You have an enterprise account. For more information, see Global Accounts and Subaccounts [page 10].
● You are a member of the global account in which you want to create the subaccount.
Procedure
1. Access the cockpit by opening the URL in your onboarding e-mail or any cockpit URL.
2. Select the region in which your global account has been provisioned.
3. Select your global account.
4. Choose New Subaccount.
5. Specify a display name and the subdomain.
Note
The subdomain becomes part of the URL for accessing subscribed applications. The format of this URL
follows the pattern [subdomain].[application].cfapps.[host]. The subdomain can contain only
lowercase letters and digits and must be unique within your global account.
The newly created subaccount now appears on the overview page of available subaccounts.
Note
In the Cloud Foundry environment, you do not have to create an org to subscribe to a business application.
Subscribe to the business application by navigating to the Subscriptions tab in the SAP Cloud Platform cockpit.
Prerequisites
● You have purchased SaaS licenses for the applications you want to consume. For more information, see
https://cloudplatform.sap.com/pricing.html . You can also contact us on SAP Cloud Platform or via an
SAP sales representative.
● You have created a subaccount in your global account. See 1. Create a Subaccount [page 969].
Procedure
For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page
953].
2. In the navigation area, choose Subscriptions.
The following information is displayed for the business applications to which your global account is entitled in
the Cloud Foundry environment:
○ The name and short description of the application.
○ Subscribed / Not subscribed: The status of the application, indicating whether the subscription is active in
your subaccount in the current region.
3. Click the application name to open its Overview page.
4. Choose Subscribe.
The Go to Application link becomes available once the subscription is activated. Choose the link to launch the
application and obtain its URL.
Remember
To remove a subscribed application, go back to the application's Overview page and then choose
Unsubscribe.
View, create, and modify application roles and assign users to these roles using the SAP Cloud Platform cockpit.
Prerequisites
Subscribe to a business application provided by SAP in the Cloud Foundry environment. See 2. Subscribe to the
Application [page 970].
Procedure
1. Navigate to your subaccount. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and
Spaces in the Cockpit [page 953].
2. In the navigation area, choose Subscriptions.
3. Click the application name to open its Overview page.
4. Choose Manage Roles to view, create, and modify the application roles.
Procedure
1. Navigate to your subaccount. For more information, see Navigate to Global Accounts, Subaccounts, Orgs, and
Spaces in the Cockpit [page 953].
2. Choose Security Roles Collections in the navigation area and include the roles into the roles collection.
By using SAP Cloud Platform, a provider can build and run an application for consumption by multiple consumers.
A provider is, for example, an SAP partner that wants to sell business applications to its customers, or an SAP
customer that wants to make its business applications available to different organizational units.
Overview
The platform provides a multitenant functionality, which allows providers to own, deploy, and operate an
application for multiple consumers with reduced costs. For example, the provider can upgrade the application for
all consumers instead of performing each update individually, or can share resources across many consumers.
Application consumers can configure certain application features and launch them using consumer-specific URLs.
Furthermore, they can protect the application by isolating their tenants.
Consumers do not deploy applications in their subaccounts, but simply subscribe to the provider application. As a
result, a subscription is created in the consumer subaccount. This subscription represents the contract or relation
between a subaccount (tenant) and a provider application.
Note
SAP Partners that wish to offer SAP Cloud Platform multitenant business applications in the Cloud Foundry
environment should contact SAP.
In the Neo environment, SAP Cloud Platform supports Java and HTML5 subscriptions. You configure HTML5 subscriptions, used for HTML5 provider applications, through the cockpit only, while for Java applications you can use both the cockpit and the console client.
Multitenancy Roles
● Application Provider - an organizational unit that uses SAP Cloud Platform to build, run, and sell
applications to customers, that is, the application consumers.
For more information about providing applications, see Providing Multitenant Applications to Consumers in
the Neo Environment [page 974].
● Application Consumer - an organizational unit, typically a customer or a department inside a customer's organization, which uses an SAP Cloud Platform application for a certain purpose. The application itself is used by end users, who might be employees of the organization (for instance, in the case of an HR application) or arbitrary internal or external users (for instance, in the case of a collaborative supplier application).
For more information about consuming applications, see Subscribe to Java Multitenant Applications in the
Neo Environment [page 977] or Subscribe to HTML5 Multitenant Applications in the Neo Environment [page
979].
To use SAP Cloud Platform, both the application provider and the application consumer must have a subaccount.
The subaccount is the central organizational unit in SAP Cloud Platform. It is the central entry point to SAP Cloud
Platform for both application providers and consumers. It may consist of a set of applications, a set of subaccount
members and a subaccount-specific configuration.
Subaccount members are users who are registered via the SAP ID service. Subaccount members may have
different privileges regarding the operations that are possible for a subaccount (for example, subaccount
administration, deploy, start, and stop applications). Note that the subaccount belongs to an organization and not
to an individual. Nevertheless, the interaction with the subaccount is performed by individuals, the members of the
subaccount. The subaccount-specific configuration allows application providers and application consumers to
adapt their subaccount to their specific environment and needs.
An application resides in exactly one subaccount, the hosting subaccount. It is uniquely identified by the
subaccount name and the application name. Applications consume SAP Cloud Platform resources, for instance,
compute units, structured and unstructured storage and outgoing bandwidth. Costs for consumed resources are
billed to the owner of the hosting subaccount, who can be an application provider, an application consumer, or
both.
Related Information
In the Neo environment of SAP Cloud Platform, you can develop and run multitenant (tenant-aware) applications
that you can make available to multiple consumers.
For detailed instructions on developing multitenant applications, see Developing Multitenant Applications in the
Neo Environment [page 1204].
Prerequisites
● An enterprise account. For more information, see Global Accounts and Subaccounts [page 10].
● Develop and deploy an application in the Neo environment for multiple consumers. For more information, see
Developing Multitenant Applications in the Neo Environment [page 1204].
● Provider and consumer subaccounts that belong to the same region. For more information, see Regions [page
21].
● Set up the console client. For more information, see Set Up the Console Client [page 1135].
To list all subaccounts subscribed to a Java application, use the list-subscribed-accounts command.
Example
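Since the command syntax is not shown in this extract, here is a hedged sketch. The subaccount, application, host, and user values are placeholders, and the script only composes and prints the command line to run from the SDK tools folder; verify the exact parameter names in the console client reference.

```shell
ACCOUNT="provsub"          # hypothetical provider subaccount hosting the application
APP="demoapp"              # hypothetical application name
HOST="hana.ondemand.com"   # hypothetical landscape host
USER="p1234567"            # your platform user

# Compose the console client call (run the printed command yourself):
CMD="neo list-subscribed-accounts --account $ACCOUNT --application $APP --host $HOST --user $USER"
echo "$CMD"
```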
Using the console client, you can create subaccounts and subscribe them to a provider application to test how
applications can be provided to multiple consumers.
Prerequisites
● Set up the console client. For more information, see Set Up the Console Client [page 1135].
● Develop and deploy an application that is used by multiple consumers. For more information, see Developing
Multitenant Applications in the Neo Environment [page 1204].
● You have an enterprise account. For more information, see Global Accounts and Subaccounts [page 10].
● You are a member of both subaccounts: the one where the multitenant application is deployed and the one that subscribes to the application.
Context
Note
You can subscribe a subaccount to an application that is running in another subaccount only if both
subaccounts (provider and consumer subaccounts) belong to the same region.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create subaccounts for several consumers.
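The two steps above can be sketched as follows. All names are hypothetical, and the command names and parameters follow the usual console client pattern (create-account, subscribe) but should be verified against the console client reference; the script only prints the commands rather than executing them.

```shell
HOST="hana.ondemand.com"        # hypothetical landscape host
USER="p1234567"                 # your platform user
PROVIDER_APP="provsub:demoapp"  # <provider subaccount>:<application>, hypothetical

# Create a subaccount per consumer, then subscribe it to the provider application.
for CONSUMER in consumer1 consumer2; do
  echo "neo create-account --account $CONSUMER --display-name $CONSUMER --host $HOST --user $USER"
  echo "neo subscribe --account $CONSUMER --application $PROVIDER_APP --host $HOST --user $USER"
done
```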
Access the application through the different tenants and verify that the multitenant application works as
configured for the respective subaccount (tenant).
Procedure
1. Access the application using the dedicated URL for each consumer subaccount in the format https://
<application name><provider subaccount>-<consumer subaccount>.<host>.
You see the list of subscriptions and the corresponding application URLs to access them in the Subscriptions
pane in the cockpit.
2. Change the configuration of the multitenant application for each consumer subaccount (tenant).
3. Verify that the configuration of the provider application differs for each consumer subaccount (tenant).
4. (Optional) You can also check the list of your test subaccounts and subscriptions as follows:
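The consumer-specific URL from step 1 can be composed as in this sketch; the application, subaccount, and host names are placeholders for illustration only.

```shell
APP="demoapp"             # application name (hypothetical)
PROVIDER="provsub"        # provider subaccount (hypothetical)
CONSUMER="conssub"        # consumer subaccount (hypothetical)
HOST="hana.ondemand.com"  # landscape host (hypothetical)

# Pattern: https://<application name><provider subaccount>-<consumer subaccount>.<host>
URL="https://${APP}${PROVIDER}-${CONSUMER}.${HOST}"
echo "$URL"   # prints https://demoappprovsub-conssub.hana.ondemand.com
```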
Procedure
Create, list, and remove subscriptions for a Java application using the console client, and view all your subscriptions in the SAP Cloud Platform cockpit.
Prerequisites
● An enterprise account. For more information, see Global Accounts and Subaccounts [page 10].
● Develop and deploy an application in the Neo environment for multiple consumers. For more information, see
Developing Multitenant Applications in the Neo Environment [page 1204].
● Provider and consumer subaccounts that belong to the same region. For more information, see Regions [page
21].
● If applicable, purchase SaaS licenses for the applications you want to consume.
● Set up the console client. For more information, see Set Up the Console Client [page 1135].
Example
● To list all subaccounts subscribed to a Java application, use the list-subscribed-accounts command.
Example
Procedure
1. Navigate to your subaccount. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. In the navigation area, choose Applications Subscriptions .
You see a list of subscriptions to Java applications, with the provider subaccount from which the subscription was obtained and the subscribed application.
3. To navigate to the subscription overview, choose the application name. You have the following options:
○ To launch an application, choose the link in the Application URLs panel.
○ To create connectivity destinations, choose Destinations in the navigation area.
○ To create or assign roles, choose Roles in the navigation area.
Note
Some subscriptions automatically created by the platform cannot be removed.
Related Information
Manage subscriptions to HTML5 applications by viewing, creating, or removing subscriptions in the SAP Cloud
Platform cockpit.
Procedure
1. Navigate to your subaccount. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
Note
The subscription name must be unique across all subscription names and all HTML5 application names in
the current subaccount.
Procedure
1. In the navigation area, choose Applications Subscriptions . The subscriptions to HTML5 applications are
listed with the following information:
○ The subaccount name of the application provider from which the subscription was obtained
○ The name of the subscribed application
2. To navigate to the subscription overview, click the application name:
○ To launch an application, click the URL link in the Active Version panel.
○ To create or assign roles, choose Roles in the navigation area.
Procedure
Related Information
2.8 Tutorials
Follow the tutorials below to get familiar with the services offered by SAP Cloud Platform.
● SAP HANA service scenarios: Creating a Service Binding Using the Cloud Cockpit [page 655]
● How to send and receive messages using the RabbitMQ service: Tutorial
● How to create a "HelloWorld" Web application: Creating a Hello World Application [page 1139]
● How to create a "HelloWorld" Web application using Java EE 6 Web Profile: Using Java EE Web Profile Runtimes [page 1166]
● How to create a "Hello World" Multi-Target Application: Create a Hello World Multi-Target Application [page 1341]
● Connectivity service scenarios: Consume Internet Services (Java Web or Java EE 6 Web Profile) [page 156]
● SAP HANA and SAP ASE service scenarios: Tutorial: Adding Application-Managed Persistence with JPA (SDK for Java Web) [page 836]
● Java applications lifecycle management scenarios: Lifecycle Management API Tutorial [page 1177]
● How to secure your HTTPS connections: Using the Keystore Service for Client Side HTTPS Connections [page 2240]
● How to create an SAP HANA XS application: Creating an SAP HANA XS Hello World Application Using SAP HANA Studio [page 1229]; Creating an SAP HANA XS Hello World Application Using SAP HANA Web-based Development Workbench [page 1225]
● Continuous Integration scenarios: Continuous Integration (CI) Best Practices with SAP: Introduction and Navigator
Video Tutorials
Tutorial Navigator
2.9 Glossary
A-G
API: An application programming interface and its respective code, which allows other software products to communicate with or call on the software. API packages are sets of APIs with a common denominator that belong either to a service or a business application.
Application process: Each application is started on a dedicated SAP Cloud Platform Runtime. This is called an application process. You can start one or many application processes of your application at any given time, according to the compute unit quota that you have. Each application process has a unique process ID that you can use to manage it.
Application router: A Cloud Foundry reverse proxy per application. The application router is the main entry point for each business application; it is capable of serving static content, routes between the different components a business application consists of, and handles authentication and session management.
Application runtime container [page 1153]: Java applications developed on SAP Cloud Platform run on a modular and lightweight runtime container, which allows them to consume standard Java EE APIs and platform services.
Application service: Higher-level services that expose a REST API. The Cloud Foundry environment at SAP Cloud Platform offers different services, such as RabbitMQ, Redis, MongoDB, and PostgreSQL.
Blob store: The blob store holds application code, buildpacks, and droplets.
BOSH: A project for the Cloud Foundry environment for release engineering, deployment, and lifecycle management of large-scale cloud software. BOSH can provision and deploy software over hundreds of VMs and performs monitoring, failure recovery, and software updates with zero-to-minimal downtime.
Buildpack: In the Cloud Foundry environment, buildpacks provide framework and runtime support for apps.
Business services: Platform services that enable, facilitate, or accelerate the development of business process components and elements of a business application.
Compute units [page 1159]: The virtualized hardware resources used by an SAP Cloud Platform application.
Cockpit [page 900]: SAP Cloud Platform cockpit is the central point of entry to key information about your accounts and applications, and for managing all activities associated with your account.
Connectivity service [page 32]: Provides secure, reliable, and easy-to-consume access to business systems, running either on-premise or in the cloud.
Console client [page 905]: SAP Cloud Platform console client enables development, deployment, and configuration of a Web application outside the Eclipse IDE, as well as continuous integration and automation tasks. The tool is part of the SAP Cloud Platform SDK.
CLI: The Command Line Interface (CLI) is used to deploy and manage your applications in the Cloud Foundry environment.
Cloud connector [page 253]: Cloud connector serves as the link between on-demand applications in SAP Cloud Platform and existing on-premise systems. It combines an easy setup with a clear configuration of the systems that are exposed to SAP Cloud Platform.
Cloud controller API: Manages the lifecycle of applications. When a developer pushes an application to the Cloud Foundry environment, it targets the Cloud Controller. The Cloud Controller then stores the raw application bits, creates a record to track the application metadata, and directs a DEA node to stage and run the application. The Cloud Controller also maintains records of orgs, spaces, services, service instances, user roles, and more.
Containerization: Ensures that application instances run in isolation, get their fair share of resources, and are protected from neighbors.
Database: An organized collection of data that can be backed up and restored separately. The database is the technical unit that contains the data, whereas a DBMS is a service that enables users to define, create, query, update, and administer the data. SAP Cloud Platform account administrators can create databases on database management systems in their account.
Database management system (DBMS): A computer system that enables administrators, developers, and applications to interact with one or more databases and provides access to the data contained in the database. It runs on a hardware host (or several hosts for distributed database systems) and has a version. Examples of DBMSs are SAP HANA and SAP ASE.
Database type: A specific database product, such as the SAP HANA database.
Developer Center: SAP HANA Cloud Developer Center is the place on the SAP Community Network where you can find information, news, discussions, blogs, and more about SAP Cloud Platform.
Docker: A technology that allows you to containerize applications used in SAP Cloud Platform, for example, to run application components on the PaaS that are not available out of the box, like an R server for statistical calculations.
Droplet: An archive within the Cloud Foundry environment that contains the application ready to run on a DEA. A droplet is the result of the application staging process.
Document service [page 433]: Provides an on-demand repository for applications to manage unstructured content for an application-specific context using the CMIS protocol.
Enabling or disabling services: An administrative activity that allows or disallows the provisioning and use of a service by a set of users/developers, for example, via the configuration of a subaccount.
Enterprise account [page 11]: An enterprise account is usually associated with one SAP customer or partner and is typically subject to charges. It groups together different subaccounts that an administrator makes available to users for deploying applications.
Environment: Constitutes SAP Cloud Platform's actual Platform-as-a-Service offering that allows for the development and administration of business applications. Each environment provides at least one application runtime and comes with its own domain model, user and role management logic, and tools (for example, a command line utility). Environments are self-sufficient and integrated into the platform at the subaccount level, which means that each environment can be used on its own without the need to use another environment.
Global account: Contains the purchased entitlements of a customer. A global account exists independently of its usage in a concrete environment and across all regions. A global account can be either a trial account or an enterprise account and allows for member and quota management. See Accounts [page 10].
H-R
HDI: The HANA Deployment Infrastructure (HDI) provides a service layer on top of the SAP HANA database to deploy database artifacts into it.
HDI container: A set of schemas and users that together enable an isolated deployment of SAP HANA database artifacts. The next-generation SAP HANA Extended Application Services (XS) provide a service to create HDI containers on a shared database, and a mechanism to deploy database artifacts together with the application code. See "service broker".
Health manager: Supervises applications, monitors their state, and starts them.
Infrastructure service: Services that are consumable by applications running on the PaaS layer, for example, SAP HANA or connectivity services. We differentiate services that serve multiple customers from a single instance from the ones that provide dedicated instances, where the infrastructure can provision and isolate separate instances.
Infrastructure as a Service (IaaS): A provisioning model in which an organization outsources the equipment used to support operations, including storage, hardware, servers, and networking components.
Managed service: Services that integrate with the Cloud Foundry environment using a service broker that implements the Service Broker API. Managed services enable end users to provision reserved resources and credentials on demand.
Multitenant database container: A self-contained database container in a multiple-container system. A tenant database container has its own isolated set of database users and its own database catalog. No data is shared between the tenant databases in a system. Clients can connect to tenant databases individually.
OAuth [page 2208]: A widely adopted security protocol for the protection of resources over the Internet. It is used by many social network providers and by corporate networks. It allows an application to request authentication on behalf of users with third-party user accounts, without the user having to grant their credentials to the application.
Organization (Org): Can be owned and used by an individual or multiple collaborators. All collaborators access an org with user accounts. Collaborators in an org share a resource quota plan, applications, services availability, and custom domains. In the Cloud Foundry environment within SAP Cloud Platform, an org is associated with one subaccount.
Partner account in the Neo environment [page 935]: Allows partners to build business applications and sell them to their customers. A partner account is available through a partner program, which provides a package of predefined resources and the opportunity to certify, advertise, and ultimately sell products.
Platform as a Service: An environment to develop, deploy, run, and manage your business applications in the cloud. The underlying software and hardware infrastructure is provided on demand (as a service).
Programming language: A computer language that specifies a set of instructions, vocabulary, grammatical rules, and a unique syntax for developing business applications on SAP Cloud Platform.
Programming model: A set of concepts used to create business applications on SAP Cloud Platform. For example, a programming model can include programming languages, runtimes, and APIs.
Quota [page 1659]: A numeric quantity that defines the maximum allowed consumption of a specific technical asset/resource.
Region: Exactly one data center. Data centers can be owned by SAP or by third-party data center providers like Amazon Web Services or Microsoft Azure.
Router: Routes incoming traffic to the appropriate component, usually the Cloud Controller or a running application on a DEA node.
Runtime: An engine or context for executing programs, such as the Java Web Tomcat 8 or Node.js runtime.
Runtime container: Manages runtimes and allows for application isolation, resource management, and shared service bindings.
Runtime for Java [page 1150]: The components that create the environment for deploying and running Java applications on SAP Cloud Platform: the Java Virtual Machine, the application runtime container, and compute units.
S-Z
SAP Community Network (SCN): SAP's professional social network for SAP customers, partners, employees, and experts, which offers insight and content about SAP solutions and services in a collaborative environment: http://scn.sap.com. To use SAP Cloud Platform, you have to be registered on SCN.
SAP ASE Service [page 650]: A service to set up and manage SAP Adaptive Server Enterprise (ASE) databases and to bind them to cloud applications. SAP ASE is a high-performance transactional database. It is a cost-effective database solution that can handle large numbers of transactions and concurrent users with superior performance, reliability, and efficiency. In business scenarios where the analytical in-memory capabilities of the SAP HANA service are not necessary, the SAP ASE service can serve as an appropriate alternative with a lower total cost of ownership (TCO).
SAP Cloud Platform: SAP Cloud Platform is an in-memory cloud platform that enables customers and partners to build, deploy, and manage cloud-based enterprise applications that complement and extend SAP or non-SAP solutions, either on-premise or on-demand.
SAP HANA Service [page 650]: A service to set up and manage SAP HANA databases and to bind them to cloud applications. The SAP HANA service provides in-memory and relational data persistence to applications running on SAP Cloud Platform. It processes transactions and analytics in-memory on a single data copy to deliver real-time insights from live data. Moreover, the service offers advanced data processing for business, text, spatial, graph, and series data to gain unprecedented insight.
SAP ID Service [page 2116]: The default identity provider for SAP Cloud Platform applications. It manages the user base for the SAP Community Network and other SAP Web sites. SAP ID service is also used for authentication in the cockpit and for operations such as deploying, updating, and so on.
SAP Cloud Platform Identity Authentication Service: SAP Cloud Platform Identity Authentication service is a cloud solution for identity lifecycle management for SAP Cloud Platform applications, and optionally for on-premise applications. You can use Identity Authentication as an identity provider for SAP Cloud Platform applications.
UI development toolkit for HTML5 (SAPUI5): A framework providing UI controls for developing Web applications.
Security Assertion Markup Language (SAML): A markup language that provides a widespread protocol for secure authentication and SSO. SAML is implemented by the SAP ID service.
(Platform) services: Software that enables, facilitates, or accelerates the development of business applications and other platform services on SAP Cloud Platform. Platform services are integrated with business applications and other cloud resources by developers. End users only interact with platform services via business applications, never directly. All platform services provide an interface such as an API or a set of APIs. There are two types of platform services: business and technical services.
Service broker: When a developer provisions and binds a service to an application, the service broker for that service is responsible for providing the service instance and for binding services to applications. For example, the HANA service broker allows any application running on Cloud Foundry to connect to an SAP HANA database.
Service plan: A variant of a service; for instance, a database may be configured with various "t-shirt sizes", each of which is a different service plan.
Service provider: The application interested in getting authentication and authorization information. Instead of providing this information itself, it contacts the identity provider.
Service rate: A fixed monetary amount charged for the consumption of a service plan offered on SAP Cloud Platform. For example, each t-shirt size of a database has a specific service rate attached to it.
Service rate plan: A formula for translating usage data of a service into a monetary amount. A service rate plan can comprise a one-time setup fee, a recurring monthly rate, or block rates that describe a metric. A metric can be any type of quantity that can be measured.
Single Sign-On: A property of access control of multiple related, but independent software systems, which enables a user to log in once and have access to all systems.
Software as a Service: A software distribution model in which applications are hosted by a vendor or service provider and made available to customers over the Internet.
SAP Java Virtual Machine [page 1127]: SAP's own implementation of a Java Virtual Machine, on which the SAP Cloud Platform infrastructure runs.
Staging: The process in the Cloud Foundry environment by which the raw bits of an application are transformed into a droplet that is ready to execute.
Subaccount: Lets you structure a global account according to the requirements of customer organizations and projects with regard to members, authorizations, and quotas. In the Neo environment, an enterprise account can have one or many subaccounts, while a trial account can have only one. Neo apps and services run only in the Neo environment; however, HTTPS-based services might be cross-consumed from the outside. In the Cloud Foundry environment, a global account can have as many subaccounts as required; each subaccount can have an associated Cloud Foundry org.
Subscribing or unsubscribing to business applications: An administrative activity that allows or disallows the provisioning and use of a business application by a set of users/developers, for example, via the configuration of a subaccount.
Subscription: In the context of SAP Cloud Platform, "subscription" means either of the following: 1. The current SAP Cloud Platform business model, which is a pre-paid monthly recurring charge for the right to use a bundle of technical assets up to specified limits.
Technical services: Platform services that enable, facilitate, or accelerate the generic development of a business application, independent of the application's business process or task.
Tenant ID [page 1209]: Identifier of the consumer account for the current application context. The tenant ID can be used to distinguish data of different application consumer accounts.
Tool: Software that is used by developers and administrators to develop, extend, administer, debug, or otherwise support business applications or other software.
Trial account [page 11]: Enables you to explore the basic functionality of SAP Cloud Platform without incurring a fee.
User account: The user's identity as managed by the identity management system. This account is used to log in to the SAP Cloud Platform cockpit and determines which business applications the user can access.
User-provided service: Services that provide credentials to applications for service instances that have been pre-provisioned outside of the Cloud Foundry environment.
WTP Server Adapter: A tool for deploying and testing Java EE assets on SAP Cloud Platform or for local testing.
Develop and run business applications for SAP Cloud Platform within the respective environment, Cloud Foundry
environment or Neo environment. Use our application programming model, APIs, services, tools and capabilities.
Related Information
Find here selected information for SAP HANA database development and references to more detailed sources.
This section gives you information about database development and its surrounding tasks, especially if you want to explore the SAP Cloud Platform Cloud Foundry environment. For more detail, it references other guides and the SAP HANA extended application services, advanced model documentation. The context is multi-target application (MTA) development, where SAP HANA serves as the database module and you develop all artifacts in that module.
See a typical flow to get started quickly with SAP HANA development. Select a tile to find further information about each step and references to other sources that give detailed, task-level instructions. The links guide you to the SAP HANA documentation and a comprehensive SAP Web IDE Full-Stack guide. Feel free to explore these guides if you feel comfortable using them and need more in-depth knowledge.
Development Environment
1. Register for an SAP Cloud Platform trial account at https://account.hanatrial.ondemand.com/ and log on
afterwards.
2. Open SAP Web IDE Full-Stack
3. Setting Up Application Projects - Create a Project from Scratch & Select a Cloud Foundry Space
4. Create a Database Module
Database Artifacts
The SAP Web IDE Full-Stack provides dedicated editors for specific artifacts, but you can also create all relevant artifacts and edit them in a text editor. For more information, see Develop Database Artifacts.
To create database artifacts, open the context menu on the <your_db_module>/src folder, select New, and choose the artifact you want to create.
More Information:
● For an overview about defining the data model, see Defining the Data Model in XS Advanced
● To learn how to create the data persistence artifacts, see Creating the Data Persistence Artifacts in XS
Advanced
● To learn how to create procedures and functions, see Creating Procedures and Functions in XS Advanced
● To learn how to enable cross-container access to external objects, see Using Synonyms to Access External
Schemas and Objects in XS Advanced
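As a minimal sketch of such a design-time artifact, the snippet below shows a hypothetical CDS entity; the file name, namespace, and entity are invented for illustration, and the script writes the snippet to a temporary file only to keep the example self-contained.

```shell
# Hypothetical design-time artifact as it might appear in
# <your_db_module>/src/data-model.hdbcds.
cat > /tmp/data-model.hdbcds <<'EOF'
namespace demo.db;

context DataModel {
  entity Book {
    key id    : Integer;
        title : String(100);
  };
};
EOF
grep -c 'entity' /tmp/data-model.hdbcds   # prints 1
```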
To build your database module, open the context menu on your database module folder and select Build.
Open the integrated SAP HANA Database Explorer with Tools Database Explorer and add your database. To learn more, see Working with the SAP HANA Database Explorer.
Reference Information
The SAP HANA Developer Information Map gives you access to detailed SAP HANA documentation from different angles: by task, by guide, or by scenario.
Find here selected information for Java development on SAP Cloud Platform Cloud Foundry environment and
references to more detailed sources.
This section gives you information about Java development and its surrounding tasks, especially if you are exploring the Cloud Foundry environment. For more detail, it references other guides.
Quick Start
See a typical flow to get started quickly with Java development. Select a tile to find further information about each step and references to other sources that give detailed, task-level instructions. The links guide you to already available documentation, like a comprehensive SAP Web IDE Full-Stack guide. Feel free to explore these guides if you feel comfortable using them and need more in-depth knowledge.
1. Download the latest version of cf CLI from GitHub at the following URL: https://github.com/cloudfoundry/
cli#downloads
2. Install cf CLI.
3. Log on using cf login.
You need to know the API endpoint you want to connect to and your user credentials. To find your API endpoint, see Regions [page 21].
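The logon flow above can be sketched as follows. The API endpoint shown is a hypothetical example of the endpoint format (look up the one for your region), and the script only prints the commands rather than executing them.

```shell
# Hypothetical region endpoint; see Regions for the one that applies to you.
CF_API="https://api.cf.eu10.hana.ondemand.com"

echo "cf api $CF_API"
echo "cf login"   # then enter your e-mail and password, and pick org and space
```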
Development Environment
Next you are setting up your SAP Web IDE Full-Stack on SAP Cloud Platform.
1. Register for an SAP Cloud Platform trial account at https://account.hanatrial.ondemand.com/ and log on
afterwards.
2. Open SAP Web IDE Full-Stack
3. Setting Up Application Projects - Create a Project from Scratch & Select a Cloud Foundry Space
4. Create a Java Module
The structure of your module depends on the template you use when creating a Java module. In general, it follows the Maven standard directory layout. Use the src/main/java folder to create and implement the required Java files.
Further reading:
To run your application, open the context menu on your Java module and choose Run > Run as Java
Application. The run console shows the name of your Java module. If the application starts correctly,
you find the application's URL there and can access it directly.
The sap_java_buildpack is used in the Web IDE Full-Stack to build and run Java modules.
Further reading:
Find some more advanced and reference topics behind the tiles in this section.
To use the SAP Java buildpack for a single Java application, you need to push your app as follows:
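For example (a sketch: the application name and WAR path are placeholders; only the buildpack name sap_java_buildpack is taken from this section):

```
cf push my-java-app -b sap_java_buildpack -p target/my-java-app.war
```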
For more information, see The SAP HANA XS Advanced Java Run Time [page 998]
You can also use the standard Java buildpack in Cloud Foundry, which supports different containers and runtime
environments. For more information, see https://github.com/cloudfoundry/java-buildpack
Authentication
Authentication for Java applications relies on the OAuth 2.0 protocol, which is based on central
authentication at the UAA. The UAA vouches for the authenticated user's identity using an OAuth 2.0 access
token. The current implementation uses a JSON Web Token (JWT) as the access token: a signed, text-based
token in JSON syntax. The Java application is specified in the related manifest file.
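To illustrate the token format, the following sketch decodes the payload of such a token. It deliberately skips signature verification, which in a real application is done for you by the buildpack or Spring Security; all names and values here are illustrative:

```java
import java.util.Base64;

// Illustration only: peeks at a JWT payload WITHOUT verifying the signature.
// Real applications must validate the token (the buildpack or Spring does this).
public class JwtPeek {

    // A JWT is three base64url-encoded parts separated by dots:
    // header.payload.signature
    static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) {
        // Build a toy, unsigned token just to show the structure.
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String token = enc.encodeToString("{\"alg\":\"none\"}".getBytes()) + "."
                + enc.encodeToString("{\"sub\":\"user1\"}".getBytes()) + ".";
        System.out.println(decodePayload(token)); // prints {"sub":"user1"}
    }
}
```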
To learn more about authentication using Spring, see Configure Authentication for Java API Using Spring Security
[page 2046] (open in a new tab or window)
To learn more about authentication using SAP Java Buildpack, see Configure Authentication for SAP Java
Buildpacks [page 2049] (open in a new tab or window)
To learn about the plugin user interface and the tasks, see https://docs.cloudfoundry.org/buildpacks/java/
sts.html#plugin-ui .
Maven Central
We at SAP SE provide artifacts through the Maven Central Repository. You can consume these artifacts as
dependencies during your Maven build.
Debugging
How you debug a Java application on the Cloud Foundry environment depends on the buildpack you use during
deployment. With the SAP Java buildpack your application is running on SAP JVM. If you use the community Java
buildpack your application is based on the standard Java runtime.
Further reading:
Profiling
The SAP JVM Profiler is a tool that helps you analyze the resource consumption of a Java application running on
SAP Java Virtual Machine (JVM). You can use it to profile simple stand-alone Java programs or complex enterprise
applications.
SAP HANA XS advanced provides a Java run time to which you can deploy your Java applications.
The Java run time for SAP HANA XS advanced provides a Tomcat or TomEE run time to deploy your Java code.
During application deployment, the build pack ensures that the correct SAP Java Virtual Machine (JVM) is
provided and that the appropriate data sources for the SAP HANA HDI container are bound to the corresponding
application container.
Note
XS advanced makes no assumptions about which frameworks and libraries to use to implement the Java micro
service.
XS advanced does, however, provide the following components to help build a Java micro service:
To use Tomcat on SAP JVM 8, specify the java-buildpack in your XS advanced application's deployment
manifest (manifest.yml), as illustrated in the following example:
Sample Code
applications:
- name: my-java-service
buildpack: git://github.acme.com/xs2-java/java-buildpack.git
services:
- my-hdi-container
- my-uaa-instance
A selection of tutorials that show you how to set up an application to run in the Java run-time environment.
Prerequisites
To perform the tasks described in the following tutorials, the following tools and components must be available:
● The Cloud Foundry command line interface (CLI) must be installed on your development machine.
● You must have a user in the Cloud Foundry environment and know your logon credentials.
Tutorials
How to change application settings for the XS advanced Java run time: Changing Application Settings for the XS Advanced Java Run Time [page 1000]
How to configure a Java application for logs and traces: Configure a Java Application for Logs and Traces [page 1006]
How to configure authentication and authorization: Configuring Authentication and Authorization [page 1029]
Related Information
Download and Install the Cloud Foundry Command Line Interface [page 948]
Cloud Foundry CLI Reference Guide
You can change the run-time settings in your XS advanced Java application's manifest file.
Context
The default Java run-time environment in XS advanced is “Tomcat”. However, you can change your desired Java
run-time environment in XS advanced to “TomEE” by setting the <TARGET_RUNTIME> environment variable in the
Java application's manifest.yml, as illustrated in the following example:
Code Syntax
env:
TARGET_RUNTIME: tomee
Context
The HEAP, METASPACE and STACK memory types are configured with a corresponding default size. However, the
default values can also be customized, as illustrated in the following example.
Procedure
If this is the first staging of the application (the application has not yet been pushed to SAP HANA XS advanced),
provide the <JBP_CONFIG_SAPJVM> environment variable in the manifest.yml file of the application and then stage
the application.
Example
Sample manifest.yml
---
applications:
- name: <app-name>
  memory: 512M
  env:
    JBP_CONFIG_SAPJVM: "[memory_calculator: {memory_heuristics: {heap: 85, stack: 10}}]"
For more information about memory types for the XS advanced Java run-time environment, see Memory Size
Options in Related Information.
● Configure the memory sizes during the application run time.
This configuration can be changed multiple times after the application is staged and does not require re-
staging.
a. Open the command prompt.
b. Set custom weights, sizes, and initials.
Context
You can add custom JVM properties; this configuration is applied once and can be changed only when the
application is re-staged.
For more information about the Java run-time options in XS advanced, see Memory Size Options [page 1002].
Procedure
If this is the first staging of the application (the application has not yet been “pushed” to SAP HANA XS
advanced), provide <JBP_CONFIG_JAVA_OPTS> environment variable in the manifest.yml file of the
application and then stage the application.
Sample Code
Excerpt from a Java manifest.yml file
---
applications:
- name: <app-name>
memory: 512M
...
env:
JBP_CONFIG_JAVA_OPTS: '[from_environment: false, java_opts: ''-DtestJBPConfig=^%PATH^% -DtestJBPConfig1="test test" -DtestJBPConfig2="%PATH%"'']'
If the application is already available on SAP HANA XS advanced and you want to fine-tune the JVM with an
additional or modified property, a new value for the <JBP_CONFIG_JAVA_OPTS> environment variable can be
specified with the cf set-env command. The new values overwrite the default (or previously set) values in the
corresponding buildpack the next time the application is staged.
Example
Note
When the Java options are specified on the command line with the cf set-env command, the string
defining the new values must be enclosed in double quotes, for example, “<New-config_values>”.
Related Information
Memory Types
There are three memory types that can be sized; for each memory type there is a command-line option, which
is passed to the JVM.
● HEAP
The initial and maximum sizes of the HEAP memory are controlled by the -Xms and -Xmx options respectively.
● METASPACE
The initial and maximum sizes of the METASPACE memory are controlled by the -XX:MetaspaceSize and -XX:MaxMetaspaceSize options.
● STACK
The size of the STACK memory per thread is controlled by the -Xss option.
The memory calculator uses calculation techniques that allocate the available memory to the memory types. The
total available memory is first allocated to each memory type in proportion to its weighting (this is called
‘balancing'). If the resultant size of any memory type lies outside its range, the size is constrained to the range, the
constrained size is excluded from the remaining memory, and no further calculation is required for that memory
type. After that, the remaining memory is balanced against the memory types that are left, and the check is
repeated until no calculated memory sizes lie outside their ranges. These iterations terminate when none of the
sizes of the remaining memory types are constrained by their corresponding ranges.
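The iteration described above can be sketched as follows. This is an illustration of the described calculation, not the buildpack's actual code; the type names, weights, and ranges are made up:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Sketch: memory is split across types in proportion to their weights; any type
// whose share falls outside its range is clamped, removed from the pool, and
// the remainder is re-balanced until no calculated size is constrained.
public class MemoryBalancer {

    static class MemType {
        final String name; final double weight; final long min, max;
        MemType(String name, double weight, long min, long max) {
            this.name = name; this.weight = weight; this.min = min; this.max = max;
        }
    }

    static Map<String, Long> balance(long total, List<MemType> types) {
        Map<String, Long> sizes = new HashMap<>();
        List<MemType> remaining = new ArrayList<>(types);
        long budget = total;
        boolean constrained = true;
        while (constrained && !remaining.isEmpty()) {
            constrained = false;
            double weightSum = remaining.stream().mapToDouble(t -> t.weight).sum();
            for (Iterator<MemType> it = remaining.iterator(); it.hasNext(); ) {
                MemType t = it.next();
                long share = Math.round(budget * t.weight / weightSum);
                if (share < t.min || share > t.max) {
                    long clamped = Math.max(t.min, Math.min(t.max, share));
                    sizes.put(t.name, clamped);   // size fixed by its range
                    budget -= clamped;            // excluded from remaining memory
                    it.remove();
                    constrained = true;
                    break;                        // re-balance what is left
                }
            }
        }
        double weightSum = remaining.stream().mapToDouble(t -> t.weight).sum();
        for (MemType t : remaining) {
            sizes.put(t.name, Math.round(budget * t.weight / weightSum));
        }
        return sizes;
    }
}
```

With a total of 1000 and weights 85/10/5, a STACK type capped at 50 gets clamped first, and HEAP and METASPACE then share the remaining 950 in an 85:5 ratio.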
You can set custom values in the manifest file before the application is staged by using the following parameters:
● memory_sizes
The data for this parameter specifies the absolute values for the memory properties of the SAP JVM: -Xms, -Xmx, -Xss, -XX:MaxMetaspaceSize, or -XX:MetaspaceSize.
● memory_heuristics
The data for this parameter specifies the relative weights for the different memory types.
● memory_initials
This parameter is used to adjust the initial values for HEAP and METASPACE types.
Sample Code
Setting HEAP memory to 8.5 times larger than the STACK memory.
env:
JBP_CONFIG_SAPJVM: "[memory_calculator: {memory_heuristics: {heap: 85, stack: 10}}]"
Furthermore, you can change the sizes during the application run time with the cf set-env command, as
follows:
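For example (a sketch; the application name and sizes are placeholders, and note that the value must be enclosed in double quotes on the command line):

```
cf set-env <app-name> JBP_CONFIG_SAPJVM "[memory_calculator: {memory_sizes: {heap: 400m, stack: 2m}}]"
```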
Note
If you specify values for weight and size at the same time, the size values take precedence over the weight
values.
Java Options
You can set the Java options by using the following parameters:
● from_environment
This parameter expects a Boolean value which specifies if the value of the <JAVA_OPTS> environment variable
must be taken into account. By default, it is set to “false”.
● java_opts
This section is used for additional Java options such as -Xloggc:$PWD/beacon_gc.log -verbose:gc -
XX:NewRatio=2. By default, no additional options are provided.
Related Information
The SAP Java buildpack prints a histogram of the heap to the logs when the JVM encounters a terminal failure. In
addition, if the application is bound to a volume service with the name or tag heap-dump, a heap dump file is
also generated and stored in the mounted volume.
For more information about the jvmkill agent, see the Cloud Foundry jvmkill documentation.
A Java agent that generates heap dumps based on preconfigured conditions in the memory usage of the
application. See Java Memory Assistant .
When enabled in the buildpack, the agent will generate two files – *.hprof (heap dump) and *.addons, when the
configured memory limits are met. The *.addons file contains:
Configure the collection of log and trace messages generated by a Java application in XS advanced.
Prerequisites
Note
The log JARs for the SLF4J and logback are already included in the Java run-time environment; importing
them in the application could cause problems during class loading.
Context
The recommended framework for logging is Simple Logging Facade for Java (SLF4J). To use this framework, you
can create an instance of the org.slf4j.Logger class. To configure your XS advanced Java application to
generate logs and traces, and if appropriate set the logging or tracing level, perform the following steps:
Procedure
1. Instruct Maven that the application should not package the SLF4J dependency; this dependency is already
provided by the run time.
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.12</version>
<scope>provided</scope>
</dependency>
Example
Ensure that debug and info log messages from the Java application appear in the log files.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public void logging() {
    final Logger logger = LoggerFactory.getLogger(Log4JServlet.class);
    logger.debug("this is record logged through SLF4J param1={}, param2={}.", "value1", 1);
    logger.info("This is slf4j info");
}
Related Information
Logging Levels
● ALL: All logging levels are displayed in the logs and traces.
● DEBUG: Used for debugging purposes. When this level is set, the DEBUG, INFO, WARN, ERROR, and FATAL levels are displayed in the application logs.
● INFO: Used for information purposes. When this level is set, the INFO, WARN, ERROR, and FATAL levels are displayed in the application logs.
● WARN: Used for warning purposes. When this level is set, the WARN, ERROR, and FATAL levels are displayed in the application logs.
● ERROR: Used to display errors. When this level is set, the ERROR and FATAL levels are displayed in the application logs.
● FATAL: Used to display critical errors. When this level is set, only the FATAL level is displayed in the application logs.
Note
You can change your logger location and messages by editing the Log4JServlet class in the Log4J
application.
You can configure your application to use a database connection so that the application can persist its data. This
configuration is applicable for Tomcat or TomEE run times.
Related Information
Configure a Database Connection for the Tomcat Run Time [page 1008]
Configure a Database Connection for the TomEE Run Time [page 1009]
Database Connection Configuration Details [page 1025]
SAP HANA HDI Data Source [page 1028]
Procedure
The context.xml file has to be inside the META-INF/ directory of the application's WAR file and has to
contain information about the data source to be used.
Example
Sample context.xml
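A minimal sketch of such a context.xml is shown below; apart from the factory class com.sap.xs.jdbc.datasource.TomcatDataSourceFactory and the placeholder key service_name_for_DefaultDB, which appear elsewhere in this guide, the attribute names are assumptions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <Resource name="jdbc/DefaultDB"
              auth="Container"
              type="javax.sql.DataSource"
              factory="com.sap.xs.jdbc.datasource.TomcatDataSourceFactory"
              service="${service_name_for_DefaultDB}"/>
</Context>
```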
Example
Defining a default service in resource_configuration.yml
---
tomcat/webapps/ROOT/META-INF/context.xml:
service_name_for_DefaultDB: di-core-hdi
Example
Defining a new service for the look-up of the data source
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomcat/webapps/ROOT/META-INF/context.xml': {'service_name_for_DefaultDB' : 'my-local-special-di-core-hdi'}]"
As a result of this configuration, when the application starts, the factory named
com.sap.xs.jdbc.datasource.TomcatDataSourceFactory takes the parameters bound for service
my-local-special-di-core-hdi from the environment, creates a data source, and binds it under jdbc/DefaultDB.
The application then uses the Java Naming and Directory Interface (JNDI) to look up how to connect with the
database.
Procedure
○ If the data source is to be used from a Web application, you have to create the file inside the WEB-INF/
directory.
○ If the data source is to be used from Enterprise JavaBeans (EJBs), you have to create the file inside the
META-INF/ directory.
Example
Sample resources.xml
○ Example
Defining a default service in resource_configuration.yml for a Web application
---
tomee/webapps/ROOT/WEB-INF/resources.xml:
service_name_for_DefaultDB: di-core-hdi
○ Example
Defining a default service in resource_configuration.yml for an EJB
---
tomee/webapps/ROOT/META-INF/resources.xml:
service_name_for_DefaultDB: di-core-hdi
○ Example
Defining a new service for the look-up of the data source in a Web application
env:
TARGET_RUNTIME: tomee
JBP_CONFIG_RESOURCE_CONFIGURATION: "[
'tomee/webapps/ROOT/WEB-INF/resources.xml':
{'service_name_for_DefaultDB' : 'my-local-special-di-core-hdi'}
]"
○ Example
Defining a new service for the look-up of the data source in an EJB
env:
TARGET_RUNTIME: tomee
As a result of this configuration, when the application starts, the XS factory takes the parameters bound for
service my-local-special-di-core-hdi from the environment, creates a data source, and binds it under
jdbc/DefaultDB. The application then uses the Java Naming and Directory Interface (JNDI) to look up how to
connect with the database.
The application description defined in the xs-app.json file contains the configuration information used by the
application router.
The following example of an xs-app.json application descriptor shows the JSON-compliant syntax required and
the properties that either must be set or can be specified as an additional option.
Code Syntax
{
"welcomeFile": "index.html",
"authenticationMethod": "route",
"sessionTimeout": 10,
"pluginMetadataEndpoint": "/metadata",
"routes": [
{
"source": "^/sap/ui5/1(.*)$",
"target": "$1",
"destination": "ui5",
"csrfProtection": false
},
{
"source": "/employeeData/(.*)",
"target": "/services/employeeService/$1",
"destination": "employeeServices",
"authenticationType": "xsuaa",
"scope": ["$XSAPPNAME.viewer", "$XSAPPNAME.writer"],
"csrfProtection": true
},
{
"source": "^/(.*)$",
"target": "/web/$1",
"localDir": "static-content",
"replace": {
"pathSuffixes": ["/abc/index.html"],
"vars": ["NAME"]
}
}
],
"login": {
"callbackEndpoint": "/custom/login/callback"
},
"logout": {
"logoutEndpoint": "/my/logout",
"logoutPage": "/logout-page.html"
},
"destinations": {
"employeeServices": {
...
}
}
}
welcomeFile
The Web page served by default if the HTTP request does not include a specific path, for example, index.html.
Code Syntax
"welcomeFile": "index.html"
authenticationMethod
The method used to authenticate user requests, for example: “route” or “none” (no authentication).
Code Syntax
"authenticationMethod" : "route"
Caution
If authenticationMethod is set to “none”, logon with User Account and Authentication (UAA) is disabled.
routes
Defines all route objects, for example: source, target, and, destination.
● source (RegEx, mandatory): Describes a regular expression that matches the incoming request URL.
Note
Be aware that the RegExp is applied to the full URL, including query parameters.
● httpMethods (array of uppercase HTTP methods, optional): HTTP methods that will be served by this route; the supported methods are: DELETE, GET, HEAD, OPTIONS, POST, PUT, TRACE, and PATCH.
Tip
If this option is not specified, the route will serve any HTTP method.
● csrfProtection (Boolean, optional): Toggles whether this route needs CSRF token protection. The default value is "true". The application router enforces CSRF protection for any HTTP request that changes state on the server side, for example: PUT, POST, or DELETE.
Note
This value is relevant only for static resources.
Code Syntax
"routes": [
{
"source": "^/sap/ui5/1(.*)$",
"target": "$1",
"destination": "ui5",
"scope": "$XSAPPNAME.viewer",
"authenticationType": "xsuaa",
"csrfProtection": true
}
]
The properties destination and localDir cannot be used together in the same route.
If there is no route defined for serving static content via localDir, a default route is created for the "resources"
directory as follows:
Sample Code
{
"routes": [
{
"source": "^/(.*)$",
"localDir": "resources"
}
]
}
Note
If there is at least one route using localDir, the default route is not added.
The httpMethods option allows you to split the same path across different targets depending on the HTTP
method. For example:
Sample Code
"routes": [
{
"source": "^/app1/(.*)$",
"target": "/before/$1/after",
"httpMethods": ["GET", "POST"]
}
]
This route will be able to serve only GET and POST requests. Any other method (including extension ones) will get
a 405 Method Not Allowed response. The same endpoint can be split across multiple destinations depending
on the HTTP method of the requests:
Sample Code
"routes": [
{
"source": "^/app1/(.*)$",
"destination" : "dest-1",
"httpMethods": ["GET"]
},
{
"source": "^/app1/(.*)$",
"destination" : "dest-2",
"httpMethods": ["DELETE", "POST", "PUT"]
}
The sample code above will route GET requests to the target dest-1, DELETE, POST and PUT to dest-2, and any
other method receives a 405 Method Not Allowed response. It is also possible to specify catchAll routes,
namely those that do not specify httpMethods restrictions:
Sample Code
"routes": [
{
"source": "^/app1/(.*)$",
"destination" : "dest-1",
"httpMethods": ["GET"]
},
{
"source": "^/app1/(.*)$",
"destination" : "dest-2"
}
]
In the sample code above, GET requests will be routed to dest-1, and all the rest to dest-2.
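The method-based routing described above can be sketched as follows. This is an illustration of the documented semantics, not the application router's implementation; the "404"/"405" return values stand in for real HTTP responses:

```java
import java.util.List;
import java.util.Set;
import java.util.regex.Pattern;

// Sketch: the first route whose source pattern matches and whose method set
// allows the request method wins; an empty method set acts as a catch-all.
public class RouteMatcher {

    static class Route {
        final Pattern source; final String destination; final Set<String> httpMethods;
        Route(String source, String destination, Set<String> httpMethods) {
            this.source = Pattern.compile(source);
            this.destination = destination;
            this.httpMethods = httpMethods;
        }
    }

    /** Returns a destination, "405" if the path matched but no route serves the method, else "404". */
    static String match(List<Route> routes, String method, String path) {
        boolean pathMatched = false;
        for (Route r : routes) {
            if (r.source.matcher(path).find()) {
                pathMatched = true;
                if (r.httpMethods.isEmpty() || r.httpMethods.contains(method)) {
                    return r.destination;
                }
            }
        }
        return pathMatched ? "405" : "404";
    }
}
```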
replace
The replace object configures the placeholder replacement in static text resources.
Sample Code
{
"replace": {
"pathSuffixes": ["index.html"],
"vars": ["escaped_text", "NOT_ESCAPED"]
}
}
● pathSuffixes (array): An array defining the path suffixes that are relative to localDir. Only files with a path ending with any of these suffixes will be processed.
● vars (array): A white list with the environment variables that will be replaced in the files matching the suffixes specified in pathSuffixes.
The supported tags for replacing environment variables are {{ENV_VAR}} and {{{ENV_VAR}}}. If such an
environment variable is defined, it will be replaced; otherwise it is replaced with an empty string.
Note
Any variable that is replaced using the two-bracket syntax {{ENV_VAR}} will be HTML-escaped; the triple-bracket
syntax {{{ENV_VAR}}} is used when the replaced values do not need to be escaped, and the values are inserted as-is.
If your application descriptor xs-app.json contains a route like the one illustrated in the following example,
{
"source": "^/get/home(.*)",
"target": "$1",
"localDir": "resources",
"replace": {
"pathSuffixes": ["index.html"],
"vars": ["escaped_text", "NOT_ESCAPED"]
}
}
Sample Code
<html>
<head>
<title>{{escaped_text}}</title>
<script src="{{{NOT_ESCAPED}}}/index.js"/>
</head>
</html>
Then, in index.html, {{escaped_text}} and {{{NOT_ESCAPED}}} will be replaced with the value defined in
the environment variables <escaped_text> and <NOT_ESCAPED>.
Note
All index.html files are processed; if you want to apply the replacement only to specific files, you must set the
path relative to localDir. In addition, all files should comply with the UTF-8 encoding rules.
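The replacement semantics described above can be sketched as follows (an illustration, not the application router's implementation):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: {{VAR}} inserts the HTML-escaped value, {{{VAR}}} the raw value,
// and undefined variables become empty strings.
public class Replacer {

    private static final Pattern TAG =
            Pattern.compile("\\{\\{\\{(\\w+)\\}\\}\\}|\\{\\{(\\w+)\\}\\}");

    static String escapeHtml(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;");
    }

    static String replace(String text, Map<String, String> env) {
        Matcher m = TAG.matcher(text);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = (m.group(1) != null)
                    ? env.getOrDefault(m.group(1), "")              // {{{raw}}}
                    : escapeHtml(env.getOrDefault(m.group(2), "")); // {{escaped}}
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```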
The content type returned by a request is based on the file extension specified in the route. The application router
supports the following file types:
● .json (application/json)
● .txt (text/plain)
● .html (text/html), the default
● .js (application/javascript)
● .css (text/css)
● { "pathSuffixes": [".html"] }: All files with the extension .html under localDir and its subfolders will be processed.
● { "pathSuffixes": ["/abc/main.html", "some.html"] }: For the suffix /abc/main.html, all files named main.html inside a folder named abc will be processed. For the suffix some.html, all files with a name that ends with "some.html" will be processed, for example: some.html, awesome.html.
● { "pathSuffixes": ["/some.html"] }: All files with the name "some.html" will be processed, for example: some.html, /abc/some.html.
sessionTimeout
Define the amount of time (in minutes) for which a session can remain inactive before it closes automatically
(times out); the default time out is 15 minutes.
Note
The sessionTimeout property is no longer available; to set the session time out value, use the environment
variable <SESSION_TIMEOUT>.
Sample Code
{
"sessionTimeout": 40
}
With the configuration in the example above, a session timeout will be triggered after 40 minutes and involves
central log out.
login
A redirect to the application router at a specific endpoint takes place during OAuth2 authentication with the User
Account and Authentication service (UAA). This endpoint can be configured in order to avoid possible collisions, as
illustrated in the following example:
Sample Code
Application Router “login” Property
"login": {
"callbackEndpoint": "/custom/login/callback"
}
logout
Define any options that apply if you want your application to have a central log out end point. In this object you can
define an application's central log out end point by using the logoutEndpoint property, as illustrated in the
following example:
Sample Code
"logout": {
"logoutEndpoint": "/my/logout"
}
Making a GET request to “/my/logout” triggers a client-initiated central log out. Central log out can be initiated by
a client or triggered due to a session timeout, with the following consequences:
Tip
It is not possible to redirect back to your application after logging out from UAA.
You can use the logoutPage property to specify the Web page in one of the following ways:
● URL path
The UAA service redirects the user back to the application router, and the path is interpreted according to the
configured routes.
Note
The resource that matches the URL path specified in the property logoutPage should not require
authentication; for this route, the property authenticationType must be set to “none”.
In the following example, my-static-resources is a folder in the working directory of the application router;
the folder contains the file logout-page.html along with other static resources.
{
"authenticationMethod": "route",
"logout": {
"logoutEndpoint": "/my/logout",
"logoutPage": "/logout-page.html"
},
"routes": [
{
"source": "^/logout-page.html$",
"localDir": "my-static-resources",
"authenticationType": "none"
}
]
}
Sample Code
"logout": {
"logoutEndpoint": "/my/logout",
"logoutPage": "http://acme.com/employees.portal"
}
destinations
Specify any additional options for your destinations. The destinations section in xs-app.json extends the
destination configuration in the deployment manifest (manifest.yml), for example, with some static properties
such as a logout path.
Sample Code
{
"destinations": {
"node-backend": {
"logoutPath": "/ui5logout",
"logoutMethod": "GET"
}
}
}
● logoutPath (String, optional): The log out end point for your destination. The logoutPath is called when central log out is triggered or a session is deleted due to a time out. The request to logoutPath contains additional headers, including the JWT token.
compression
The compression keyword enables you to define if the application router compresses text resources before
sending them. By default, resources larger than 1KB are compressed. If you need to change the compression size
threshold, for example, to “2048 bytes”, you can add the optional property “minSize”: <size_in_KB>, as
illustrated in the following example.
Sample Code
{
"compression": {
"minSize": 2048
}
}
Compression can be disabled in the following ways:
● Global
Within the compression section, add "enabled": false.
● Front end
The client sends an "Accept-Encoding" header which does not include "gzip".
● Back end
The application sends a "Cache-Control" header with the "no-transform" directive.
● minSize (Number, optional): Text resources larger than this size will be compressed.
Note
If the <COMPRESSION> environment variable is set it will overwrite any existing values.
pluginMetadataEndpoint
Adds an endpoint that serves a JSON string representing all configured plugins.
Sample Code
{
"pluginMetadataEndpoint": "/metadata"
}
Note
If you request the relative path /metadata of your application, a JSON string is returned with the configured
plug-ins.
whitelistService
Enable the white-list service to help protect against click-jacking attacks. Enabling the white-list service opens an
endpoint accepting GET requests at the relative path configured in the endpoint property, as illustrated in the
following example:
Sample Code
{
"whitelistService": {
"endpoint": "/whitelist/service"
}
}
If the white-list service is enabled in the application router, each time an HTML page needs to be rendered in a
frame, the white-list service is used to check whether the parent frame is allowed to render the content in a frame.
Sample Code
[
{
"protocol": "http",
"host": "*.acme.com",
"port": 12345
},
{
"host": "hostname.acme.com"
}
]
Note
Matching is done against the properties provided. For example, if only the host name is provided, the white-list
service returns "framing": true for all ports and protocols on that host.
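The matching rule described in the note can be sketched as follows. This is an illustration of the documented behavior, not the actual white-list service; entries are maps of the optional protocol, host, and port properties:

```java
import java.util.List;
import java.util.Map;

// Sketch: an origin is allowed if some white-list entry matches every property
// that the entry specifies; a host starting with "*." matches any subdomain.
public class FramingWhitelist {

    static boolean allowed(List<Map<String, String>> whitelist,
                           String protocol, String host, String port) {
        for (Map<String, String> entry : whitelist) {
            boolean ok = true;
            if (entry.containsKey("protocol")) ok &= entry.get("protocol").equals(protocol);
            if (entry.containsKey("port"))     ok &= entry.get("port").equals(port);
            String h = entry.get("host");
            ok &= h.startsWith("*.") ? host.endsWith(h.substring(1)) : host.equals(h);
            if (ok) return true; // corresponds to "framing": true
        }
        return false;
    }
}
```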
Return Value
The white-list service accepts only GET requests, and returns a JSON object as the response. The white-list service
call uses the parent origin as URI parameter (URL encoded) as follows:
Sample Code
GET url/to/whitelist/service?parentOrigin=https://parent.domain.acme.com
The response is a JSON object with the following properties; property “active” has the value false only if
<CJ_PROTECT_WHITELIST> is not provided:
Sample Code
{
"version" : "1.0",
"active" : true | false,
"origin" : "<same as passed to service>",
"framing" : true | false
}
The “active” property enables framing control; the “framing” property specifies if framing should be allowed.
By default, the application router (approuter.js) sends the X-Frame-Options header with the value
SAMEORIGIN.
Tip
If the white-list service is enabled, the header value probably needs to be changed, see the X-Frame-Options
header section for details about how to change it.
websockets
The application router can forward web-socket communication. Web-socket communication must be enabled in
the application router configuration, as illustrated in the following example. If the back-end service requires
authentication, the upgrade request should contain a valid session cookie. The application router supports the
destination schemata "ws", "wss", "http", and "https".
Sample Code
{
"websockets": {
"enabled": true
}
}
Restriction
A web-socket ping is not forwarded to the back-end service.
errorPage
Errors originating in the application router show the HTTP status code of the error. It is possible to display a
custom error page using the errorPage property.
The property is an array of objects, each object can have the following properties:
In the following code example, errors with status codes "400", "401", and "402" will show the content of
./custom-err-40x.html; errors with the status code "501" will display the content of
./http_resources/custom-err-501.html.
Sample Code
{ "errorPage" : [
{"status": [400,401,402], "file": "./custom-err-40x.html"},
{"status": 501, "file": "./http_resources/custom-err-501.html"}
]
}
Note
The contents of the errorPage configuration section have no effect on errors that are not generated by the
application router.
Related Information
Define details of the database connection used by your Java application in XS advanced.
Configuration Files
To configure your XS advanced application to establish a connection to the SAP HANA database, you must specify
details of the connection in a configuration file. For example, you must define the data source that the application
will use to discover and look up data. The application then uses the Java Naming and Directory Interface (JNDI) to
look up the specified data in the file.
The easiest way to define the required data source is to declare the keys for the data source in a resource file.
For the TomEE run time, you can create a resources.xml file in the WEB-INF/ directory with the following
content:
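A minimal sketch of such a resources.xml is shown below; the exact resource attributes are assumptions, and only the key service_name_for_DefaultDB is taken from the examples in this section:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<resources>
    <Resource id="jdbc/DefaultDB" type="javax.sql.DataSource">
        service_name_for_DefaultDB = di-core-hdi
    </Resource>
</resources>
```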
The problem with this simple approach is that your WAR file cannot be signed, and any modifications can only be
made in the WAR file. For this reason, it is recommended that you do not use the method in a production
environment; use modification settings in resource_configuration.yml and manifest.yml instead, as
illustrated in the following examples:
Example
Defining a default service in resource_configuration.yml
---
tomcat/webapps/ROOT/META-INF/context.xml:
service_name_for_DefaultDB: di-core-hdi
Specifying a default name for a service is useful (for example, for automation purposes) only if you are sure there
can be no conflict with other names. For this reason, it is recommended that you include a helpful and descriptive
error message instead of a default value. That way the error message will be part of an exception triggered when
the data source is initialized, which helps troubleshooting.
Example
Sample resource_configuration.yml
---
tomcat/webapps/ROOT/META-INF/context.xml:
service_name_for_DefaultDB: Specify the service name for Default DB in manifest.yml via "JBP_CONFIG_RESOURCE_CONFIGURATION".
The generic mechanism JBP_CONFIG_RESOURCE_CONFIGURATION basically replaces the key values in the
specified files. For this reason, if you use placeholders in the configuration files, it is important to ensure that you
use unique names for the placeholders.
Example
Sample context.xml
The names of the defined placeholders are also used in the other resource files.
Example
Sample resource_configuration.yml
---
tomcat/webapps/ROOT/META-INF/context.xml:
service_name_for_DefaultDB: di-core-hdi
max_Active_Connections_For_DefaultDB: 100
service_name_for_DefaultXADB: di-core-hdi-xa
max_Active_Connections_For_DefaultXADB: 100
Sample manifest.yml
---
env:
JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomcat/webapps/ROOT/META-INF/context.xml': {'service_name_for_DefaultDB' : 'my-local-special-di-core-hdi', 'max_Active_Connections_For_DefaultDB' : '30', 'service_name_for_DefaultXADB' : 'my-local-special-xa-di-core-hdi', 'max_Active_Connections_For_DefaultXADB' : '20'}]"
To use an SAP HANA HDI container from your Java run time, you must create a service instance for the HDI
container and bind the service instance to the Java application. To create a service instance, use the
cf create-service command, as illustrated in the following example:
Sample Code
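A sketch of the command, assuming the hana service with the hdi-shared plan and the instance name my-hdi-container used below:

```shell
cf create-service hana hdi-shared my-hdi-container
```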
To bind the service instance to a Java application, specify the service instance in the Java application's deployment
manifest file (manifest.yml).
Sample Code
services:
- my-hdi-container
Next, add the resource reference to the web.xml file, which must have the name of the service instance:
Sample Code
<resource-ref>
<res-ref-name>jdbc/my-hdi-container</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
Look up the data source in your code; you can find the reference to the data source in the following ways:
● Annotations
@Resource(name = "jdbc/my-hdi-container")
private DataSource ds;
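● JNDI lookup

Alternatively, a programmatic JNDI lookup can be used. This fragment is a sketch using the standard Java EE API (javax.naming.InitialContext), not taken from the original:

```java
// The JNDI name mirrors the <res-ref-name> declared in web.xml
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/my-hdi-container");
```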
Related Information
You configure authentication and authorization by using the integrated container authentication provided with the
SAP Java buildpack or the Spring security library.
Prerequisites
Note
A role collection must have already been assigned to the user in SAP HANA. This is done through SAP
HANA studio. For more information, see Role Collections for XS Advanced Administrators in the SAP HANA
Administrators Guide.
Context
XS advanced enables offline validation of the JSON Web Token (JWT) used for authentication purposes; it does not
require an additional call to the User Account and Authorization (UAA) service. The trust for this offline validation
is created by binding the xsuaa service instance to your application.
Context
The SAP Java buildpack includes an authentication method called XSUAA. This makes an offline validation of the
received JWT token possible. The signature is validated using the verification key received from the service binding
to the xsuaa service.
Procedure
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.*;
import javax.servlet.http.*;
/**
 * Servlet implementation class HomeServlet
 */
@WebServlet("/*")
@ServletSecurity(@HttpConstraint(rolesAllowed = { "Display" }))
public class HomeServlet extends HttpServlet {
    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.getWriter().println("principal: " + request.getUserPrincipal());
    }
}
Context
Applications using the Spring libraries can use the corresponding Spring security libraries. SAP HANA XS advanced model provides a module for offline validation of the received JSON Web Token (JWT). The signature is validated using the verification key received from the service binding to the xsuaa service.
Procedure
Sample pom.xml
<dependency>
<groupId>com.sap.xs2.security</groupId>
<artifactId>java-container-security</artifactId>
<version>0.14.5</version>
</dependency>
Sample web.xml
<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-
class>
</listener>
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>/WEB-INF/spring-security.xml</param-value>
</context-param>
<filter>
<filter-name>springSecurityFilterChain</filter-name>
<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-
class>
</filter>
<filter-mapping>
<filter-name>springSecurityFilterChain</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
Sample spring-security.xml
In this example, the most important line in the service context is highlighted in bold. The contents of this file specify which parts of the microservice are to be made secure. For example, an HTTP POST request to /rest/addressbook/deletedata requires the Delete scope.
4. Configure the security properties of the service instance in application.properties.
# parameters of hello-world-java
xs.appname=java-hello-world
The following code snippet shows that the requests are not only authenticated, but also that the user principal is available.
Debugging an application helps you detect and diagnose errors in your code. You can control the execution of your
program by setting breakpoints, suspending threads, stepping through the code, and examining the contents of
the variables.
You can debug an application running on a Cloud Foundry container that is using SAP JVM. By using SAP JVM, you
can enable debugging on-demand without having to restart the application or the JVM.
Prerequisites
Download and install the Cloud Foundry Command Line Interface (cf CLI) and log on to Cloud Foundry. For more
information, see Download and Install the Cloud Foundry Command Line Interface [page 948] and Log On to the
Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 948].
Deploy your application using the SAP Java Buildpack. From the cf CLI, execute cf push <app name> -p <my war file> -b sap_java_buildpack.
Note
SAP JVM is included in the SAP Java Buildpack. With SAP JVM, you can enable debugging on-demand. You do
not need to set any debugging parameters.
Context
After enabling the debugging port, you need to open an SSH tunnel, which connects to that port.
1. To enable debugging or to check the debugging state of your JVM, run jvmmon in your Cloud Foundry container by executing cf ssh <app name> -c "app/META-INF/.sap_java_buildpack/sapjvm/bin/jvmmon".
2. From the jvmmon command line window, execute start debugging.
3. (Optional) To confirm that debugging is enabled and see which port is open, execute print debugging
information.
Note
The default port is 8000.
You can debug an application running on a Cloud Foundry container that is using the standard Java community
build pack.
Prerequisites
Download and install the Cloud Foundry Command Line Interface (cf CLI) and log on to Cloud Foundry. For more
information, see Download and Install the Cloud Foundry Command Line Interface [page 948] and Log On to the
Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 948].
Context
To debug an application you need to open a debugging port on your Cloud Foundry container and open an SSH
tunnel that will connect to that port.
1. To open the debugging port, you need to configure the JAVA_OPTS parameter in your JVM. From the cf CLI, execute cf set-env <app name> JAVA_OPTS '-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000'.
2. To enable SSH tunneling for your application, execute cf enable-ssh <app name>.
3. To activate the previous changes, restart the application by executing cf restart <app name>.
4. To open the SSH tunnel, execute cf ssh <app name> -N -T -L 8000:127.0.0.1:8000.
Your local port 8000 is connected to the debugging port 8000 of the JVM running in the Cloud Foundry
container.
Note
The default port is 8000.
Caution
The connection is active until you close the SSH tunnel. After you have finished debugging, close the SSH
tunnel by pressing Ctrl + C .
5. Connect a Java debugger to your application. For example, use the standard Java debugger provided by
Eclipse IDE and connect to localhost:8000.
The SAP JVM Profiler is a tool that helps you analyze the resource consumption of a Java application running on
SAP Java Virtual Machine (JVM). You can use it to profile simple stand-alone Java programs or complex enterprise
applications.
Prerequisites
Install the SAP JVM Tools for Eclipse. For more information, see Install the SAP JVM Tools in Eclipse [page 1036].
Open a debugging connection using an SSH tunnel. For more information, see Debug an Application Running on SAP JVM [page 1033].
Context
To measure resource consumption, the SAP JVM Profiler enables you to run different profiling analyses. Each
profiling analysis creates profiling events that focus on different aspects of resource consumption. For more
information about the available analyses, see the SAP JVM Profiler documentation in Eclipse. Go to Help > Help Contents > SAP JVM Profiler.
Note
Your port number is your local SSH tunnel endpoint.
4. Choose the analysis you want to run and specify your profiling parameters.
Note
To use the thread annotation filters, complete the fields under the Analysis Options section. By default, all
filters are set to *, which means that all annotations pass the filter.
Procedure
1. In Eclipse, choose Help > Install New Software.
2. In the Work with combo box, enter https://tools.hana.ondemand.com/oxygen and choose SAP Cloud Platform Tools > SAP JVM Tools.
3. Choose Next and follow the installation wizard.
You can add your SAP JVM to the VM Explorer view of the SAP JVM Profiler.
Prerequisites
Install the SAP JVM Tools for Eclipse. For more information, see Install the SAP JVM Tools in Eclipse [page 1036].
Download and install the Cloud Foundry Command Line Interface (cf CLI) and log on to Cloud Foundry. For more
information, see Download and Install the Cloud Foundry Command Line Interface [page 948] and Log On to the
Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 948].
From the VM Explorer, you can debug, monitor, or profile your SAP JVM. For more information about the VM Explorer, see the SAP JVM Profiler documentation in Eclipse. Go to Help > Help Contents > SAP JVM Profiler.
Note
You need to use the jvmmond tool, which is included in the SAP Java Buildpack.
Procedure
This starts the jvmmond service on your Cloud Foundry container. It listens on port 50003. The command also specifies a port range of 50004-50005 in case additional ports need to be opened.
2. To enable an SSH tunnel for these ports, execute cf ssh <app name> -N -T -L
50003:127.0.0.1:50003 -L 50004:127.0.0.1:50004 -L 50005:127.0.0.1:50005.
3. In Eclipse, open the Profiling perspective and from the VM Explorer view, choose Manage Hosts.
4. Choose Add and enter the following host name and port number: localhost:50003.
This section about Node.js development provides information about the buildpack supported by SAP, about the available Node.js packages, and about how to consume them in your application.
There is also a small tutorial with an introduction to securing your application, and some tips and tricks for
developing and running Node.js applications on the Cloud Foundry environment.
Node.js Buildpack
SAP Cloud Platform uses the standard Node.js buildpack provided by Cloud Foundry to deploy Node.js
applications.
To get familiar with the buildpack and how to deploy applications with it, take a look at the Cloud Foundry Node.js
Buildpack documentation .
You can download and consume SAP-developed Node.js packages via the SAP NPM Registry. The SAP HANA Developer Guide for XS Advanced Model provides an overview of the Node.js packages developed by SAP, what they are meant for, and where they are included. Because the Node.js packages used for development in the Cloud Foundry environment are similar or identical to those in SAP HANA XS advanced, the links guide you to the SAP HANA Developer Guide for XS Advanced Model.
Additional information:
Node.js Tutorial
The tutorial will guide you through creating a Node.js application and setting up authentication and authorization checks. This is by no means a setup for productive use, but it introduces the basics and provides links to further reading.
For selected tips and tricks for your Node.js development, see Tips and Tricks for Node.js Applications [page
1048].
This tutorial will guide you through creating and setting up a sample Node.js application by using the Cloud
Foundry command line interface (cf CLI).
Prerequisites
● A registered user and trial space in the Cloud Foundry environment, see Get a Free Trial Account [page 910].
● cf CLI is installed locally, see Download and Install the Cloud Foundry Command Line Interface [page 948].
● Node.js is installed and configured locally, see npm documentation .
Context
You will start by building and deploying a simple web application that returns some sample data.
Procedure
---
applications:
- name: myapp
host: <host>
path: myapp
memory: 128M
Replace <host> with a unique name so that it does not clash with other deployed applications.
This configuration describes the applications and how they will be deployed.
Tip
For information about the manifest.yml file, see Deploying with Application Manifests .
3. Create a new directory inside node-tutorial named myapp and change the current directory to myapp with
cd myapp.
4. Execute npm init.
This will walk you through creating a package.json file in the myapp directory.
5. Run npm install express --save.
This will add the express package as a dependency in the package.json file. After the installation is complete
the content of the package.json should look similar to this:
{
"name": "myapp",
"version": "1.0.0",
"description": "My App",
"main": "index.js",
"scripts": {
"test": "<test>"
},
"author": "",
"license": "ISC",
"dependencies": {
"express": "^4.15.3"
}
}
6. Add engines and update scripts sections to the package.json so it looks similar to this:
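The resulting package.json might look like this; the Node.js version in the engines section is an assumption and should match a version supported by the buildpack:

```json
{
  "name": "myapp",
  "version": "1.0.0",
  "description": "My App",
  "main": "index.js",
  "scripts": {
    "start": "node start.js"
  },
  "engines": {
    "node": "8.x.x"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.15.3"
  }
}
```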
7. Inside the myapp directory create another file called start.js with the following content:
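A minimal start.js sketch using Express, consistent with the description that follows (the exact original content may differ):

```javascript
const express = require('express');
const app = express();

// Cloud Foundry injects the port via the PORT environment variable
const port = process.env.PORT || 3000;

app.get('/', function (req, res) {
  res.send('Hello World');
});

app.listen(port, function () {
  console.log('myapp listening on port ' + port);
});
```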
This creates a very simple web application that returns "Hello World" as a response. Express is one of the most widely used Node.js modules (for serving web content), and it is the web server part of this application. After these steps are complete, note that the express package has been installed in the node_modules directory.
8. Deploy the application on Cloud Foundry. Execute the following command in the node-tutorial directory:
cf push
Note
cf push is always executed in the same directory where the manifest.yml is located.
When the staging and deployment steps are complete you can check the state and URL of your application by
using the cf app command.
Prerequisites
You have gone over the Create a Node.js Application [page 1038] tutorial and have the sample application
deployed on the Cloud Foundry environment.
Context
Authentication in the Cloud Foundry environment is provided by the UAA service. In this example, OAuth 2.0 is
used as the authentication mechanism. The simplest way to add authentication is to use the @sap/approuter
package, which is a component used to provide a central entry point for business applications. More details on the
security in SAP Cloud Platform can be found in Web Access in the Cloud Foundry Environment [page 2040]
documentation.
Procedure
1. Create the file xs-security.json (see the related link) in the node-tutorial directory with the following content:
{
"xsappname" : "myapp",
"tenant-mode" : "dedicated"
}
2. Create a UAA service instance named myuaa via the following command:
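A sketch of the command; the service name xsuaa and the plan application are assumptions based on the standard SAP Cloud Platform offering:

```shell
cf create-service xsuaa application myuaa -c xs-security.json
```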
---
applications:
- name: myapp
host: <host>
path: myapp
memory: 128M
services:
- myuaa
In this case, the myuaa service instance will be bound to the myapp application during deployment.
4. Create a directory named web in the node-tutorial directory.
5. Inside the web directory, create a subdirectory named resources; this directory will be used to provide the business application's static resources.
6. Inside resources, create an index.html file with the following content:
<html>
<head>
<title>JavaScript Tutorial</title>
</head>
<body>
<h1>JavaScript Tutorial</h1>
<a href="/myapp/">myapp</a>
</body>
</html>
"scripts": {
"start": "node node_modules/@sap/approuter/approuter.js"
}
10. Add the following content at the end of the manifest.yml file in the node-tutorial directory:
- name: web
host: <host>
path: web
memory: 128M
env:
destinations: >
[
{
"name":"myapp",
"url":"<myapp url>",
"forwardAuthToken": true
}
]
services:
- myuaa
○ Replace <host> with a unique name so that it does not clash with other deployed applications.
○ The destinations environment variable defines the destinations to the microservices to which the application router forwards requests. Set the url property to the URL of the myapp application as displayed by the cf apps command, and add the network protocol before the URL.
○ In the services section, specify the UAA service name that will be bound to the application.
11. Create the xs-app.json file in the web directory with the following content:
{
"routes": [
{
"source": "^/myapp/(.*)$",
"target": "$1",
"destination": "myapp"
}
]
}
Note
With this configuration, the incoming request path is connected with the destination where the request
should be forwarded to. By default, every route requires OAuth authentication, so the requests to this path
will require an authenticated user.
12. Execute the following commands in the myapp directory to download the @sap/xssec and passport
packages:
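A sketch of the commands, assuming the SAP NPM registry is configured for the @sap scope:

```shell
npm config set @sap:registry https://npm.sap.com
npm install @sap/xssec passport --save
```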
13. Verify that the request is authenticated by checking the JWT token in the request, using the JWTStrategy provided by the @sap/xssec package. To do that, replace the content of the myapp/start.js file with the following:
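A sketch of such a start.js; the JWTStrategy wiring follows the documented @sap/xssec pattern, but details such as reading the UAA credentials via @sap/xsenv are assumptions:

```javascript
const express = require('express');
const passport = require('passport');
const xsenv = require('@sap/xsenv');
const JWTStrategy = require('@sap/xssec').JWTStrategy;

const app = express();
const port = process.env.PORT || 3000;

// Authenticate every request with the JWT forwarded by the application router
passport.use(new JWTStrategy(xsenv.getServices({ uaa: { tag: 'xsuaa' } }).uaa));
app.use(passport.initialize());
app.use(passport.authenticate('JWT', { session: false }));

app.get('/', function (req, res) {
  // The authenticated user principal is available on the request
  res.send('Hello ' + req.user.id);
});

app.listen(port);
```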
14. Go to the node-tutorial directory and execute cf push.
This command updates the myapp application and deploys the web application.
Note
From this point in the tutorial the URL of the web application will be requested instead of the myapp URL. It
will then forward the requests to the myapp application.
15. Find the URL of the web application via the cf apps command and open it in a Web browser.
16. Enter the credentials of a valid user.
17. Click the myapp link.
Related Information
Prerequisites
You have gone over the Authentication Checks in Node.js Applications [page 1041] tutorial and have the sample
application deployed on the Cloud Foundry environment.
Context
Authorization in the Cloud Foundry environment is provided by the UAA service. In the previous example, the @sap/approuter package was added to provide a central entry point for the business application and enable authentication. Now the sample will be extended with authorization. The authorization concept includes elements such as roles, scopes, and attributes provided in the security descriptor file xs-security.json, explained in detail in the Authorization and Trust Management Overview [page 2032] section.
In this tutorial, the sample will be extended by implementing the users REST service. Different authorization
checks will be introduced for the GET and CREATE operations to demonstrate how authorization works.
Note
Authorization checks can also be configured in the application router; see the route's scope property in Application Router Configuration Syntax [page 1011]. This tutorial focuses on authorization checks in the microservices using the container security API for Node.js.
Procedure
1. To introduce application roles, open the xs-security.json in the node-tutorial directory, and add
scopes and role templates as follows:
Two roles are introduced: Viewer and Manager. Each role is a collection of OAuth 2.0 scopes, or actions. The scopes will be used later in the microservice code for authorization checks.
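The described scopes and role templates might be expressed in xs-security.json like this; a hedged sketch following the standard security descriptor schema:

```json
{
  "xsappname": "myapp",
  "tenant-mode": "dedicated",
  "scopes": [
    { "name": "$XSAPPNAME.Display", "description": "Display users" },
    { "name": "$XSAPPNAME.Update", "description": "Update users" }
  ],
  "role-templates": [
    { "name": "Viewer", "description": "View users",
      "scope-references": [ "$XSAPPNAME.Display" ] },
    { "name": "Manager", "description": "Maintain users",
      "scope-references": [ "$XSAPPNAME.Display", "$XSAPPNAME.Update" ] }
  ]
}
```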
2. Update the UAA service.
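A sketch of the update command, assuming the myuaa instance created earlier:

```shell
cf update-service myuaa -c xs-security.json
```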
3. Add a new file called users.json to the myapp directory with the following content:
[{
"id": 0,
"name": "John"
},
{
"id": 1,
"name": "Paul"
}]
This will be the initial list of users for the REST service.
4. Add a dependency to body-parser, which will be used for JSON parsing in the add-new-user operation. In the myapp directory, execute:
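The command is presumably the standard npm install for the package mentioned above:

```shell
npm install body-parser --save
```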
5. Change the myapp/start.js by adding GET and POST operations for the users REST endpoint:
Enforcing authorization checks is done via the @sap/xssec package. Using passport and
xssec.JWTSecurity, a security context is attached as an authInfo object to the request object. This object
is initialized with the incoming JWT token. For a full list of methods and properties of the security context, see
Authentication for Node.js Applications [page 2050].
For HTTP GET requests, the service checks whether the user has the Display scope. To create new users, that is, for HTTP POST requests, the service requires the user to have the Update scope.
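These checks might look like the following fragment in start.js; a sketch, where checkLocalScope is part of the @sap/xssec security context API and users is the list loaded from users.json:

```javascript
// Sketch: route handlers with scope checks via the security context (req.authInfo)
app.get('/users', function (req, res) {
  if (!req.authInfo.checkLocalScope('Display')) {
    return res.status(403).end('Forbidden');
  }
  res.json(users);
});

app.post('/users', function (req, res) {
  if (!req.authInfo.checkLocalScope('Update')) {
    return res.status(403).end('Forbidden');
  }
  users.push(req.body);
  res.status(201).end();
});
```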
6. Update the UI to be able to send POST requests. Change the content of web/resources/index.html with
the following code:
<html>
<head>
<title>JavaScript Tutorial</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/
jquery.min.js"></script>
<script>
function fetchCsrfToken(callback) {
jQuery.ajax({
url: '/myapp/users',
type: 'HEAD',
headers: { 'x-csrf-token': 'fetch' }
})
.done(function(message, text, jqXHR) {
callback(jqXHR.getResponseHeader('x-csrf-token'))
})
.fail(function(jqXHR, textStatus, errorThrown) {
alert('Error fetching CSRF token: ' + jqXHR.status + ' ' +
errorThrown);
});
}
function addNewUser(token) {
var name = jQuery('#name').val() || '--';
jQuery.ajax({
url: '/myapp/users',
type: 'POST',
contentType: 'application/json',
data: JSON.stringify({ name: name }),
headers: { 'x-csrf-token': token }
})
.fail(function(jqXHR, textStatus, errorThrown) {
alert('Error adding user: ' + jqXHR.status + ' ' + errorThrown);
});
}
</script>
</head>
<body>
<h1>JavaScript Tutorial</h1>
<a href="/myapp/users">Show users</a>
<input id="name" type="text">
<button onclick="fetchCsrfToken(addNewUser)">Add User</button>
</body>
</html>
In the UI there is a link to get all users, an input box to enter a user name, and a button to send the create-new-user request.
In the sample, jQuery is used to create a POST request and send a new user in JSON format to the REST API. The code is more involved than you might expect because you need to fetch a CSRF token before sending the POST request. This token is required by the application router for state-changing requests, so the call must provide it. When the Add User button is clicked, the code fetches a token and, on success, sends a POST request with the user's data as a JSON body.
7. Go to the node-tutorial directory and push both applications. Execute the following command:
cf push
8. Find the URL of the application router web application via the cf apps command and open it in a Web
browser.
9. Enter the credentials of a valid user.
10. Click the Show users link. This should result in a 403 Forbidden response, due to missing privileges.
11. Configure the roles collections and user groups assignment in the SAP Cloud Platform cockpit.
Note
The configuration is not part of this tutorial. See Set Up Security Artifacts [page 2105].
Initially, add only the Viewer role to the user. Open the application URL in a new browser window and make sure you are logged on interactively. Click the Show users link in the UI; you should see a JSON response from the REST service. Clicking Add User leads to a message box with the error text 403 Forbidden.
Get to know the Node.js buildpack for the Cloud Foundry environment
Check the Tips and Tricks for Node.js applications in the Cloud Foundry Node.js Buildpack documentation.
Vendoring Node.js application dependencies is discussed in the documentation for the Cloud Foundry
environment. Even though the SAP Cloud Platform is a connected environment, for productive applications the
recommendation is to vendor application dependencies.
There are various reasons for this. For example, security scans are usually run for productive applications; at the same time, npm does not provide a reliable dependency fetch, so you might end up with different dependencies if they are installed during deployment. Additionally, npm downloads any missing dependencies from its registry. If this registry is not accessible for some reason, the deployment may fail.
Note
Be aware that when you use dependencies containing native code, you need to prebuild them in the same environment as the Cloud Foundry container, or the package must have built-in support for it.
To ensure that prepackaged dependencies are pushed to the Cloud Foundry environment and on-premise runtime, make sure that the node_modules directory is not listed in the .cfignore file. It is also preferable that development dependencies are not deployed in productive deployments. To ensure that, run this command:
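The command is likely one of the standard npm production installs, for example:

```shell
npm install --production
```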
For performance reasons, Node.js (V8) uses lazy garbage collection. Even if there are no memory leaks in your application, this might lead to occasional restarts, as explained in the Tips for Node.js Applications section in the Node.js Buildpack documentation for the Cloud Foundry environment.
Force the garbage collector to run before the available memory is consumed by limiting the V8 heap size of your application to about 75% of the available memory.
Sample Code
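For example, for an application with 128 MB of memory, the start script might cap the V8 old-space heap at roughly 75% (96 MB); the exact figure and script name are assumptions:

```json
"scripts": {
  "start": "node --max_old_space_size=96 start.js"
}
```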
You can optimize V8 behavior using additional options. You can list them using the command:
node --v8-options
When you deploy an application in the Cloud Foundry environment without specifying the application memory requirements, the Cloud Foundry controller assigns the default (currently 1 GB of RAM) to your application. Many Node.js applications require less memory, and assigning the default wastes resources. To save memory from your quota, specify the memory size in the deployment descriptor of the Cloud Foundry environment, manifest.yml. For details, see the Deploying with Application Manifests topic in the documentation for the Cloud Foundry environment.
This section about Python development provides information about the buildpack supported by SAP, about the available Python packages, and about how to consume them in your application.
There is also a small tutorial with an introduction to securing your application, and some tips and tricks for
developing and running Python applications on the Cloud Foundry environment.
SAP Cloud Platform uses the standard Python buildpack provided by the Cloud Foundry environment to deploy
Python applications.
To get familiar with the buildpack and how to deploy applications with it, take a look at the Cloud Foundry Python
Buildpack documentation .
SAP includes a selection of Python packages that are available for download and use from the SAP Service Marketplace (SMP) for customers and partners who have the appropriate access authorization.
Python packages
Package Description
hdbcli The SAP HANA database client; provides database connectivity.
Python Tutorial
The tutorial will guide you through creating a Python application, consuming Cloud Foundry services, and setting
up authentication and authorization checks. This is by no means a setup for productive use, but you get to know
the basics and links to some further reading.
Selected tips and tricks for your Python development. See Tips and Tricks for Python Applications [page 1060].
Prerequisites
Before you start you will need to fulfill the following requirements:
● You have a registered Cloud Foundry@SAP user and trial space. See Get Started with a Trial Account: Workflow
in the Cloud Foundry Environment [page 914].
● The Cloud Foundry command line interface is installed locally. See Download and Install the Cloud Foundry
Command Line Interface [page 948].
● Python version 3.5 or higher is installed locally. See the installation guides for OS X, Windows, and Linux.
● virtualenv is installed locally. See https://github.com/kennethreitz/python-guide/blob/master/docs/dev/virtualenvs.rst .
Context
This tutorial will guide you through creating and setting up a simple Python application by using the Cloud Foundry
command line interface (cf CLI). The tutorial will showcase some basic SAP provided Python libraries aimed to
ease your application development. You will start by building and deploying the web application that returns some
sample data.
Procedure
1. Log on to Cloud Foundry. See Log On to the Cloud Foundry Environment Using the Cloud Foundry Command
Line Interface [page 948].
2. Create a new directory named python-tutorial.
3. Create a manifest.yml file in the python-tutorial directory with the following content:
Source Code
manifest.yml
---
applications:
- name: myapp
host: <host>
path: .
memory: 128M
command: python server.py
Replace <host> with a unique name so that it does not clash with other deployed applications. This file is the configuration that describes your application and how it should be deployed to Cloud Foundry. See Deploying with Application Manifests .
4. Specify the Python runtime version for the buildpack by creating a runtime.txt file in the python-tutorial directory with the following content:
Source Code
runtime.txt
python-3.6.x
Note
The buildpack only supports the stable Python versions, which are listed in the Python buildpack release
notes .
5. The application will be a web server utilizing the Flask web framework. Specify Flask as an application
dependency, by creating a requirements.txt file with the following content:
Source Code
requirements.txt
Flask==0.12.2
6. Create a server.py file, which will contain the following application logic:
Source Code
server.py
import os
from flask import Flask
app = Flask(__name__)
port = int(os.environ.get('PORT', 3000))
@app.route('/')
def hello():
return "Hello World"
if __name__ == '__main__':
app.run(host='0.0.0.0', port=port)
This is a simple server that returns "Hello World" when requested. Flask is one of the most widely used Python web frameworks (for serving web content), and it is the web server part of this application.
7. Deploy the application on Cloud Foundry. Execute the cf push command in the python-tutorial
directory.
Note
cf push is always executed in the same directory where the manifest.yml is located.
When the staging and deployment steps are complete you can check the state and URL of your application by
using the cf app command.
8. Open a browser window and enter the URL of the application.
Prerequisites
You have gone over and completed the Create a Python Application [page 1051] part of the tutorial.
Context
In this part of the tutorial you will connect and consume a Cloud Foundry service in your application. For the
purpose of the tutorial the SAP HANA Service will be used.
You can view what services and plans are available for your application to consume, by executing cf
marketplace.
Procedure
1. Create an instance of the SAP HANA service with the following command:
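Based on the description that follows, the command is presumably:

```shell
cf create-service hana hdi-shared myhana
```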
This will create a service instance called myhana, from the service hana, with plan hdi-shared.
2. Bind this service instance to the application.
a. Modify the manifest.yml file:
Source Code
manifest.yml
---
applications:
- name: myapp
host: <host>
path: .
memory: 128M
command: python server.py
services:
- myhana
b. To consume the service inside the application, you need to read the service settings and credentials from the environment. To do that, use the Python module cfenv. Add the following line to the requirements.txt file:
Source Code
requirements.txt
Flask==0.12.2
cfenv
c. Modify the server.py file to include the following lines of code, which are used to read the service
information from the environment:
Source Code
server.py
import os
from flask import Flask
from cfenv import AppEnv
app = Flask(__name__)
env = AppEnv()
port = int(os.environ.get('PORT', 3000))
hana = env.get_service(label='hana')
@app.route('/')
def hello():
return "Hello World"
if __name__ == '__main__':
app.run(host='0.0.0.0', port=port)
When you restage the application, the SAP HANA service instance is bound to the application, and the application can connect to it.
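Under the hood, cfenv reads the VCAP_SERVICES environment variable that Cloud Foundry injects into the application. The following self-contained sketch shows the idea using only the standard library; the service entry and credentials are made up for illustration:

```python
import json
import os

# Simulate what Cloud Foundry would inject for a bound service instance
# (hypothetical, abbreviated credentials)
os.environ['VCAP_SERVICES'] = json.dumps({
    "hana": [{
        "name": "myhana",
        "label": "hana",
        "plan": "hdi-shared",
        "credentials": {"host": "hana.example.local", "port": "30015",
                        "user": "MYUSER", "password": "secret"}
    }]
})

def get_service(label):
    """Return the first bound service instance with the given label, or None."""
    services = json.loads(os.environ.get('VCAP_SERVICES', '{}'))
    instances = services.get(label, [])
    return instances[0] if instances else None

hana = get_service('hana')
print(hana['name'], hana['credentials']['host'])
```

This mirrors what env.get_service(label='hana') returns in the tutorial code above.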
3. Connect to SAP HANA using the SAP HANA database client or hdbcli module provided by SAP.
The overall recommendation for Cloud Foundry applications deployed at SAP is for them to be self-contained: they need to carry all of their dependencies so that the staging process does not require any network calls. The Python buildpack provides a mechanism for that: applications can vendor their dependencies by creating a vendor folder in their root directory and executing the following command to download the dependencies into it:
Note
Always make sure you are vendoring dependencies for the correct platform; if you are developing on anything other than Ubuntu, use the --platform flag. See pip download .
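The vendoring command referred to above is presumably the standard pip download into the vendor folder:

```shell
pip download -d vendor -r requirements.txt
```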
4. Modify the server.py file to execute a query with the hdbcli driver:
Source Code
server.py
import os
from flask import Flask
from cfenv import AppEnv
from hdbcli import dbapi
app = Flask(__name__)
env = AppEnv()
port = int(os.environ.get('PORT', 3000))
hana = env.get_service(label='hana')
@app.route('/')
def hello():
    # Connect with the credentials of the bound SAP HANA service instance
    conn = dbapi.connect(address=hana.credentials['host'],
                         port=int(hana.credentials['port']),
                         user=hana.credentials['user'],
                         password=hana.credentials['password'])
    cursor = conn.cursor()
    cursor.execute("select CURRENT_UTCTIMESTAMP from DUMMY", {})
    ro = cursor.fetchone()
    cursor.close()
    conn.close()
    return "Current UTC timestamp: " + str(ro[0])
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=port)
Prerequisites
● You have gone over and completed, the Create a Python Application [page 1051] and Consume Cloud Foundry
Services [page 1053] parts of the tutorial.
● You have npm installed locally.
Context
Authentication in the Cloud Foundry environment is provided by the UAA service. In this example, OAuth 2.0 is
used as the authentication mechanism. The simplest way to add authentication is to use the @sap/approuter
Node.js package, which is a component used to provide a central entry point for business applications. More
details on the security in SAP Cloud Platform can be found in Web Access in the Cloud Foundry Environment
documentation. To use @sap/approuter, we create a separate Node.js microservice that acts as the entry point for the application.
Procedure
1. Create an xs-security.json file for your application with the following content:
{
"xsappname" : "myapp",
"tenant-mode" : "dedicated"
}
2. Create a UAA service instance named myuaa via the following command:
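As in the Node.js tutorial, the command is presumably the standard service creation with the security descriptor; the xsuaa service name and application plan are assumptions:

```shell
cf create-service xsuaa application myuaa -c xs-security.json
```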
Source Code
manifest.yml
---
applications:
- name: myapp
host: <host>
path: .
memory: 128M
command: python server.py
services:
- myhana
- myuaa
The myuaa service instance will be bound to the myapp application during deployment.
4. To create a second microservice, which will be the application router, create a directory called web in the
python-tutorial directory.
5. Inside the web directory, create a sub-directory named resources - this directory will be used to provide the
business application's static resources.
6. Inside resources, create an index.html file with the following content:
Source Code
index.html
<html>
<head>
<title>Python Tutorial</title>
</head>
<body>
<h1>Python Tutorial</h1>
<a href="/myapp/">myapp</a>
</body>
</html>
"scripts": {
"start": "node node_modules/@sap/approuter/approuter.js"
}
10. Modify the manifest.yml file in the python-tutorial directory by adding the following content at the end of it:
Source Code
---
applications:
- name: myapp
host: <host>
path: .
memory: 128M
command: python server.py
services:
- myhana
- myuaa
- name: web
host: <host>
path: web
memory: 128M
env:
destinations: >
[
{
"name":"myapp",
"url":"<myapp url>",
"forwardAuthToken": true
}
]
services:
- myuaa
○ Replace <host> with a unique name so that it does not clash with other deployed applications.
○ The destinations environment variable defines the destinations of the microservices to which the application
router forwards requests.
○ Set the url property to the URL of the myapp application as displayed by the cf apps command, and add
the network protocol before the URL.
○ In the services section, specify the name of the UAA service instance that will be bound to the application.
11. Create the xs-app.json file in the web directory with the following content:
Source Code
xs-app.json
{
"routes": [
{
"source": "^/myapp/(.*)$",
"target": "$1",
"destination": "myapp"
}
]
}
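The route above can be read as a rewrite rule: the regular expression in source captures everything after /myapp/, and $1 substitutes that capture as the forwarded path. The following Python sketch illustrates that matching behavior only; it is not the application router's actual implementation:

```python
import re

# Illustration only: how the route's "source" pattern rewrites an
# incoming path before it is forwarded to the "myapp" destination.
def rewrite(path, source=r"^/myapp/(.*)$", target=r"\1"):
    match = re.match(source, path)
    if match is None:
        return None  # no matching route; the request is not forwarded
    return "/" + match.expand(target)

print(rewrite("/myapp/"))          # -> /
print(rewrite("/myapp/current"))   # -> /current
```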
12. Navigate to the python-tutorial directory and execute cf push. This command will update the myapp
application and will deploy the web application.
Note
From this point on in the tutorial, request the URL of the web application instead of the myapp URL. The web
application then forwards the requests to the myapp application.
13. Check the URL of the web application via the cf apps command and open it in a browser window.
You should be prompted to log in and then you should see the current HANA time displayed by the Python
application.
Prerequisites
You have completed Authentication Checks in Python Applications [page 1055] and have the sample application
deployed on the Cloud Foundry environment.
Context
Authorization in the Cloud Foundry environment is provided by the UAA service. In the previous example, the
@sap/approuter package was added to provide a central entry point for the business application and enable
authentication. Now to extend the sample, authorization will be added. The authorization concept includes
elements such as roles, scopes, and attributes provided in the security descriptor file xs-security.json,
explained in details in the Authorization and Trust Management Overview [page 2032] section.
Procedure
1. Add the xssec security library to the requirements.txt file, to place restrictions on the content you serve.
Source Code
requirements.txt
Flask==0.12.2
cfenv==0.5.3
xssec
2. Then vendor it inside the vendor folder by executing pip download -d vendor -r requirements.txt
--find-links sap_dependencies from the root of the application.
3. Modify server.py to use the security library:
Source Code
server.py
import os
from flask import Flask
from cfenv import AppEnv
from flask import request
from flask import abort
import xssec
from hdbcli import dbapi
app = Flask(__name__)
env = AppEnv()
port = int(os.environ.get('PORT', 3000))
hana = env.get_service(label='hana')
uaa_service = env.get_service(name='myuaa').credentials
@app.route('/')
def hello():
    if 'authorization' not in request.headers:
        abort(403)
    access_token = request.headers.get('authorization')[7:]
    security_context = xssec.create_security_context(access_token, uaa_service)
    isAuthorized = security_context.check_scope('openid')
    if not isAuthorized:
        abort(403)
    conn = dbapi.connect(address=hana.credentials['host'],
        port=int(hana.credentials['port']), user=hana.credentials['user'],
        password=hana.credentials['password'])
    cursor = conn.cursor()
    cursor.execute("select CURRENT_UTCTIMESTAMP from DUMMY", {})
    ro = cursor.fetchone()
    cursor.close()
    conn.close()
    return "Current time is: " + str(ro["CURRENT_UTCTIMESTAMP"])
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=port)
4. Try to access the application directly; you should see an HTTP 403 error. If, however, you access the application
through the application router, you should see the current HANA time, provided the scope openid is
assigned to your user.
Since the OAuth 2.0 client is used, the scope openid is assigned to your user by default and you are able to
access the application as usual.
The functional authorization scopes for applications are declared in the xs-security.json, see Specify the
Security Descriptor Containing the Functional Authorization Scopes for Your Application [page 2107].
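As a side note, the access_token = request.headers.get('authorization')[7:] line in server.py assumes the header always has the exact form "Bearer <token>". A slightly more defensive extraction could look like the following hypothetical helper, which is not part of the tutorial sources:

```python
# Hypothetical helper: extract the token from an Authorization header,
# tolerating case differences in the "Bearer" scheme and rejecting
# other authentication schemes.
def extract_bearer_token(authorization_header):
    parts = authorization_header.split(None, 1)
    if len(parts) != 2 or parts[0].lower() != "bearer":
        return None  # not a bearer token; reject the request
    return parts[1]

assert extract_bearer_token("Bearer abc.def.ghi") == "abc.def.ghi"
assert extract_bearer_token("Basic dXNlcjpwdw==") is None
```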
● The overall recommendation for Cloud Foundry applications deployed on SAP Cloud Platform is for them to be
self-contained – they need to carry all of their dependencies so that the staging process does not require any
network calls. See https://docs.cloudfoundry.org/buildpacks/python/index.html#vendoring
● The cfenv package provides access to Cloud Foundry application environment settings by parsing all the
relevant environment variables. The settings are returned as a class instance. See
https://github.com/jmcarp/py-cfenv .
● While developing Python applications (whether in the cloud or not), it’s a very good idea to use virtual
environments. The best-known Python package providing such functionality is virtualenv. See
https://virtualenv.pypa.io/en/stable/
● The PEP 8 style guide for Python applications (https://www.python.org/dev/peps/pep-0008/) will help you
improve your applications.
Get to know certain aspects of SAPUI5 development, to get up and running quickly.
If you are about to decide which UI technology to use, read everything you need to know about supported library
combinations, supported browsers and platforms, and so on, in the Read Me First section, which contains the
following topics and more:
Select the tiles to discover SAPUI5 Development. The references guide you to the documentation of the SAPUI5
Demo Kit. Besides the entry points we provide here on each tile, start exploring the demo kit on your own
whenever you feel comfortable enough.
1. Register for an SAP Cloud Platform trial account at https://account.hanatrial.ondemand.com/ and log on
afterwards.
2. Open SAP Web IDE Full-Stack
3. Setting Up Application Projects - Create a Project from Scratch & Select a Cloud Foundry Space
Tutorials
In an HTML5 module in SAP Web IDE Full-Stack, more files are created than described in these tutorials, but you
can run through them, and at the end you have a running application in the Cloud Foundry environment.
For more information have a look at the Get Started: Setup and Tutorials section that contains the following topics
and more:
● “Hello World!”
● Data Binding
● Navigation and Routing
● Mock Server
● Worklist App
● Ice Cream Machine
Essentials
This is reference information that describes the development concepts of SAPUI5 , such as the Model View
Controller, data binding, and components.
Developing Apps
Create apps with rich user interfaces for modern Web business applications, responsive across browsers and
devices, based on HTML5.
The following topics are excerpts from the Developing Apps section:
Application Patterns
In the Cloud Foundry environment, SAP promotes a pattern for building applications as shown in the diagram
below. This is a logical view that abstracts from many details, such as the CF router and the CF controller, and
represents the architecture of a business application. We use the term business application to distinguish it from a
single microservice. Microservices, service instances, bindings, services, and routes are entities known to the
platform. Microservices are created by "pushing" code and binaries to the platform, resulting in a number of
application instances, each running in a separate container.
Services are exposed to apps by injecting access credentials into the environment of the applications via service
binding. Applications are bound to service instances where a service instance represents the required
configuration and credentials to consume a service. Services instances are managed by a service broker which has
to be provided for each service (or for a collection of services).
Routes are mapped to applications and provide the actual application access points/URLs.
A business application is a collection of microservices, service instances, bindings and routes which together
represent a usable web application from an end-user point of view. These microservices, services instances,
bindings and routes are created by communicating with the CF/XSA Controller (e.g. using a command line
interface).
SAP provides a set of libraries, services, and component communication principles which are used to implement
(multi-tenant) business applications according to this pattern.
Business applications embracing a microservice architecture are decomposed into multiple services that can be
managed independently. Still, this approach brings some challenges for application developers, such as handling
security in a consistent way and dealing with the same-origin policy.
Application router is a separate component that addresses some of these challenges. It provides three major
functions:
● Reverse proxy - provides a single entry point to a business application and forwards user requests to
respective backend services
● Serves static content from the file system
● Security – provides security related functionality like login, logout, user authentication, authorization and
CSRF protection in a central place
The application router exposes the endpoint accessed by a browser to access the application.
UAA Service
The User Account and Authentication (UAA) service is a multi-tenant identity management service, used in the
SAP Cloud Platform Cloud Foundry environment. Its primary role is as an OAuth2 provider, issuing tokens for client
applications to use when they act on behalf of the users of the Cloud Foundry environment. It can also
authenticate users with their credentials for the Cloud Foundry environment, and can act as an SSO service using
those credentials (or others). It has endpoints for managing user accounts and for registering OAuth2 clients, as
well as various other management functions.
The platform provides a number of backing services like HANA, MongoDB, PostgreSQL, Redis, RabbitMQ, Audit
Log, Application Log, etc. Also, the platform provides various business services, for instance to retrieve currency
exchange rates. In addition, applications can use user-provided services which are not managed by the platform.
In all these cases applications can bind and consume required services in a similar way. See Services Overview
documentation for general information about services and their consumption.
Application Deployment
There are two options for a deployer (human or software agent) to deploy and update a business application:
● Native deployment: Based on the native controller API of the Cloud Foundry environment, the deployer will
deploy individual applications, create service instances, bindings and routes. The deployer is responsible for
performing all these tasks in an orchestrated way to manage the lifecycle of the entire business application.
● Multi-Target Applications [page 1292] (MTA): Based on a declarative model the deployer creates an MTA
archive and hands over the model description together with all application artifacts to the SAP Deploy Service.
SAP HANA extended application services, advanced model running in the Cloud Foundry environment comprise
various means for effective development on SAP HANA.
The advanced model of SAP HANA extended application services enhances the Cloud Foundry environment with a
number of tweaks and extensions provided by SAP. These SAP enhancements include amongst other things: an
integration with the SAP HANA database, OData support, compatibility with XS classic model, and some additional
features designed to improve application security. XS advanced allows you to develop and deploy SAP HANA-
based web applications on the cloud platform, supporting multiple runtimes, programming languages, libraries,
and services.
Concepts
Some of the central concepts for the advanced model of SAP HANA extended application services include the
following:
Related Information
Learn more about using services in the Cloud Foundry environment, how to create (user-provided) service
instances and bind them to applications, and how to create service keys.
In the Cloud Foundry environment, you usually enable services by creating a service instance using either the SAP
Cloud Platform cockpit or the Cloud Foundry command line interface (cf CLI), and binding that instance to your
application.
In a PaaS environment, all external dependencies, such as databases, messaging systems, file systems, and so
on, are services. In the Cloud Foundry environment, services are offered in a marketplace, from which users can
create service instances on demand. A service instance is a single instantiation of a service running on SAP Cloud
Platform. Service instances are created using a specific service plan. A service plan is a configuration variant of a
service. For example, a database may be configured with various "t-shirt sizes", each of which is a different service
plan.
To integrate services with applications, the service credentials must be delivered to the application. To do so, you
can bind service instances to your application to automatically deliver these credentials to your application. Or you
can use service keys to generate credentials to communicate directly with a service instance. As shown in the
figure below, you can deploy an application first and then bind it to a service instance:
Alternatively, you can also bind the service instance to your application as part of the application push via the
application manifest. For more information, see https://docs.cloudfoundry.org/devguide/deploy-apps/
manifest.html#services-block .
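For example, a minimal services block in the application manifest might look like the following sketch, where myapp and myhana are placeholder names:

```yaml
applications:
- name: myapp
  path: .
  services:
  - myhana   # bound automatically during cf push
```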
The Cloud Foundry environment also allows you to work with user-provided service instances. User-provided
service instances enable developers to use services that are not available in the marketplace with their
applications running in the Cloud Foundry environment. Once created, user-provided service instances behave in
the same manner as service instances created through the marketplace. For more information, see Creating User-
Provided Service Instances [page 1073].
For more conceptual information about using services in the Cloud Foundry environment, see
https://docs.cloudfoundry.org/devguide/services/ . For more information about the availability of services in
the Cloud Foundry environment, see Capabilities [page 24] and Availability of SAP Cloud Platform Services.
Related Information
Use the SAP Cloud Platform cockpit or the Cloud Foundry Command Line Interface to create service instances:
Prerequisites
If you are working in an enterprise account, you need to add quotas to the services you purchased in your
subaccount before they appear in the service marketplace. Otherwise, only default free-of-charge services are
listed. Quotas are automatically assigned to the resources available in trial accounts.
For more information, see Add Quotas to Subaccounts Using the Cockpit [page 944].
For more information, see Getting Started with Business Application Subscriptions [page 967]
You can use the Cloud Foundry Command Line Interface (cf CLI) to create service instances.
Prerequisites
Procedure
1. (Optional) Open a command line and enter the following string to list all services and service plans that are
available in your org:
cf marketplace
Related Information
Use the SAP Cloud Platform cockpit or the Cloud Foundry Command Line Interface to bind service instances to
applications:
You can bind service instances to applications both at the application view, and at the service-instance view in the
cockpit.
Prerequisites
● Deploy an application in the same space in which you plan to create the service instance. For more
information, see Deploy Applications [page 1118].
● Create a service instance. For more information, see Create Service Instances Using the Cockpit [page 1067].
Procedure
1. Navigate to the space in which you deployed your application and created the service instance. For more
information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
Application 1. In the navigation area, choose Applications, then select the relevant application.
2. In the navigation area, choose Service Bindings.
3. Choose Bind Service.
4. Choose a service type, then choose Next.
5. Choose a service, then choose Next.
6. To create a new instance of the service, choose Create new instance and follow
the steps required for creating a new instance. To reuse an existing instance,
choose Re-use existing instance. Then choose Next.
7. Choose Finish to save your changes.
You can bind service instances to applications using the Cloud Foundry Command Line Interface (cf CLI).
Prerequisites
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and Install
the Cloud Foundry Command Line Interface [page 948] and Log On to the Cloud Foundry Environment Using
the Cloud Foundry Command Line Interface [page 948].
● Deploy an application in the same space in which you plan to create the service instance. For more
information, see Deploy Applications [page 1118].
● Create a service instance. For more information, see Create Service Instances Using the Cloud Foundry
Command Line Interface [page 1068].
Procedure
Related Information
You can use service keys to generate credentials to communicate directly with a service instance. Once you
configure them for your service, local clients, apps in other spaces, or entities outside your deployment can access
your service with these keys.
You can use the SAP Cloud Platform cockpit or the Cloud Foundry Command Line Interface to create service keys:
Prerequisites
Create a service instance. For more information, see Create Service Instances Using the Cockpit [page 1067].
Procedure
1. Navigate to the space in which you've created a service instance for which you want to create a service key. For
more information, see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
Results
Local clients, apps in other spaces, or entities outside your deployment can now access your service instance with
this key.
Use the Cloud Foundry Command Line Interface to create a service key.
Prerequisites
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and Install
the Cloud Foundry Command Line Interface [page 948] and Log On to the Cloud Foundry Environment Using
the Cloud Foundry Command Line Interface [page 948].
● Create a service instance. For more information, see Create Service Instances Using the Cloud Foundry
Command Line Interface [page 1068].
Procedure
Local clients, apps in other spaces, or entities outside your deployment can now access your service instance with
this key.
Related Information
User-provided service instances enable you to use services that are not available in the marketplace with your
applications running in the Cloud Foundry environment.
You can create user-provided service instances using the SAP Cloud Platform cockpit or the Cloud Foundry
Command Line Interface:
Use the cockpit to create user-provided service instances and bind them to applications in the Cloud Foundry
environment.
Prerequisites
● Obtain the information the application requires to access, via the network, the service that is not available in the
marketplace, for example, a URL and a port.
● Obtain the credentials required to authenticate the application with the service, for example, a user
name and a password.
1. Navigate to the space in which you want to create a user-provided service instance. For more information, see
Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953].
Next Steps
To bind your application to the user-provided service instance, follow the steps described in Bind Service Instances
to Applications Using the Cockpit [page 1069].
Use the Cloud Foundry Command Line Interface to make a user-provided service instance available to Cloud
Foundry applications.
Prerequisites
● Download and install the cf CLI and log on to Cloud Foundry. For more information, see Download and Install
the Cloud Foundry Command Line Interface [page 948] and Log On to the Cloud Foundry Environment Using
the Cloud Foundry Command Line Interface [page 948].
● Obtain a URL, port, user name, and password for communicating with a service that is not available in the
marketplace.
Context
Open a command line and enter the following string to create a user-provided service instance:
cf create-user-provided-service <service-instance-name> -p '{"url": "<url>", "user": "<user>", "password": "<password>"}'
Next Steps
To bind your application to the user-provided service instance, follow the steps described in Bind Service Instances
to Applications Using the Cloud Foundry Command Line Interface [page 1070].
Related Information
The application router is the single point-of-entry for an application running in the Cloud Foundry environment at
SAP Cloud Platform; according to the described business application pattern, the application router
configurations are described in the xs-app.json file. The package.json file contains the start command for the
application router and a list of package dependencies.
Prerequisites
● cf CLI is installed locally, see Download and Install the Cloud Foundry Command Line Interface [page 948].
● Node.js is installed and configured locally, see npm documentation .
● The SAP npm registry, which contains the Node.js package for the application router, is configured:
npm config set @sap:registry https://npm.sap.com
The application router is used to serve static content, authenticate users, rewrite URLs, and forward or proxy
requests to other micro services while propagating user information. For more information, see the Readme.md file
of the application router.
Procedure
The subfolder for the Web-resources module must be located in the application root folder, for example,
/path/<myAppName>/web.
Tip
The Web-resource module uses @sap/approuter as a dependency; the Web-resources module also
contains the configuration and static resources for the application.
Static resources can, for example, include the following files and components: index.html, style sheets (.css
files), images, and icons. Typically, static resources for a Web application are placed in a subfolder of the Web
module, for example, /path/<myAppName>/web/resources.
4. Create the application-router configuration file.
The application router configuration file xs-app.json must be located in the application's Web-resources
folder, for example, /path/<myAppName>/web.
Sample Code
<myAppName>
|- web/ # Application descriptors
| |- xs-app.json # Application routes configuration
| |- package.json # Application router details/dependencies
| \- resources/
The contents of the xs-app.json file must use the required JSON syntax. For more information, see
Application Router Configuration Syntax [page 1011].
a. Create the required destinations configuration.
{
"welcomeFile": "index.html",
"routes": [
{
"source": "/sap/ui5/1(.*)",
"target": "$1",
"localDir": "sapui5"
},
{
"source": "/rest/addressbook/testdataDestructor",
"destination": "backend",
"scope": "node-hello-world.Delete"
},
{
"source": "/rest/.*",
"destination": "backend"
},
{
"source": "^/(.*)",
"localDir": "resources"
}
]
}
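Note that the application router evaluates these routes in order and forwards the request according to the first route whose source pattern matches. The following Python sketch illustrates that first-match semantics; it is an illustration only, not the router's implementation:

```python
import re

# Routes mirroring the configuration above, in declaration order.
routes = [
    {"source": "/rest/addressbook/testdataDestructor", "destination": "backend"},
    {"source": "/rest/.*", "destination": "backend"},
    {"source": "^/(.*)", "localDir": "resources"},
]

def match_route(path):
    # First match wins, as in the application router.
    for route in routes:
        if re.match(route["source"], path):
            return route
    return None

assert match_route("/rest/addressbook/items")["destination"] == "backend"
assert match_route("/index.html")["localDir"] == "resources"
```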
b. Add the routes (destinations) for the specific application (for example, node-hello-world) to the env
section of the application’s deployment manifest (manifest.yml).
Every route configuration that forwards requests to a microservice has the property destination. The
destination is a name that refers to an entry with the same name in the destinations configuration. The
destinations configuration is specified in an environment variable passed to the approuter application.
Sample Code
<myAppName>/manifest.yml
- name: node-hello-world
host: myHost-node-hello-world
domain: xsapps.acme.ondemand.com
memory: 100M
path: web
env:
destinations: >
[
{
"name":"backend",
"url":"http://myHost-node-hello-world-
backend.xsapps.acme.ondemand.com",
"forwardAuthToken": true
}
]
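At runtime, the approuter reads this destinations value from its environment as a JSON array. The following hypothetical Python sketch shows that parsing step for illustration; the real approuter is a Node.js component:

```python
import json
import os

# Hypothetical sketch: turn the "destinations" environment variable
# into a name -> configuration lookup table.
def load_destinations():
    raw = os.environ.get("destinations", "[]")
    return {d["name"]: d for d in json.loads(raw)}
```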
6. Add a package descriptor (package.json) for the application router to the root folder of your application's
Web resources module (web/) and execute npm install to download the approuter npm module from the
SAP npm registry.
The package descriptor describes the prerequisites and dependencies that apply to the application router and
starts the application router, too.
<myAppName>
|- web/ # Application descriptors
| |- xs-app.json # Application routes configuration
| |- package.json # Application router details/dependencies
| \- resources/
The basic package.json file for your Web-resources module (web/) should look similar to the following
example:
Sample Code
{
"name": "node-hello-world-approuter",
"dependencies": {
"@sap/approuter": "5.1.0"
},
"scripts": {
"start": "node node_modules/@sap/approuter/approuter.js"
}
}
Tip
The start script (for example, approuter.js) is mandatory; the start script is executed after application
deployment.
Related Information
The application router is used to serve static content, authenticate users, rewrite URLs, and forward or proxy
requests to other micro services while propagating user information. The following table lists the resource files
used to define routes for multi-target applications:
package.json The package descriptor is used by the Node.js package manager (npm) to start the application
router; the @sap/approuter dependency is declared in the “dependencies”: {} section. (Mandatory: Yes)
Tip
If you have a static resources folder name in the xs-
app.json file, we recommend that you use localDir as
default.
default-services.json Defines the configuration for one or more special User Account
and Authentication (UAA) services for local development. (Mandatory: -)
Sample Code
xs-app.json
{
"source":"^/web-pages/(.*)$",
"localDir":"my-static-resources"
}
File that contains the configuration information used by the application router.
When a business application consists of several different apps (microservices), the application router is used to
provide a single entry point to that business application. The application router is responsible for the following
tasks:
The application descriptor is a file that contains the configuration information used by the application router. The
file is named xs-app.json and its content is formatted according to JavaScript Object Notation (JSON) rules.
Tip
The different applications (microservices) are the destinations to which the incoming requests are forwarded.
The rules that determine which request should be forwarded to which destination are called routes. For every
destination there can be more than one route.
User authentication is performed by the User Account and Authentication (UAA) server. In the run-time
environment (on-premise and in the Cloud Foundry environment), a service is created for the UAA configuration;
by using the standard service-binding mechanism, the content of this configuration is available in the
<VCAP_SERVICES> environment variable, which the application router can access to read the configuration
details.
Note
The UAA service should have xsuaa in its tags or the environment variable <UAA_SERVICE_NAME> should be
defined, for example, to specify the exact name of the UAA service to use.
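The following hypothetical sketch shows how an application could locate that UAA configuration in VCAP_SERVICES by tag; the approuter itself does this in Node.js, and the names here are illustrative:

```python
import json
import os

# Find the credentials of the service instance tagged "xsuaa" in the
# VCAP_SERVICES environment variable.
def find_uaa_credentials():
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instances in vcap.values():
        for instance in instances:
            if "xsuaa" in instance.get("tags", []):
                return instance["credentials"]
    return None  # no UAA service bound
```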
A calling component accesses a target service by means of the application router only if there is no JSON Web
Token (JWT) available, for example, if a user invokes the application from a Web browser. If a JWT is already
available, for example, because the user has already been authenticated, or because the calling component uses a
JWT for its own OAuth client, the calling component calls the target service directly; it does not need to use the
application router.
Note
The application router does not “hide” the back-end microservices in any way; they remain directly accessible
when bypassing the application router. So the back-end microservices must protect all their end points by
validating the JWT token and implementing proper authorization scope checks.
Headers
If back end nodes respond to client requests with URLs, these URLs need to be accessible for the client. For this
reason, the application router passes the following x-forwarding-* headers to the client:
● x-forwarded-host
Contains the host header that was sent by the client to the application router
● x-forwarded-proto
Contains the protocol that was used by the client to connect to the application router
● x-forwarded-for
Contains the address of the client which connects to the application router
● x-forwarded-path
Contains the original path which was requested by the client from the approuter
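Conceptually, a reverse proxy populates these headers from properties of the incoming client connection before forwarding the request. A hypothetical Python sketch of that step follows; the application router itself does this in Node.js:

```python
# Hypothetical sketch: build the x-forwarded-* headers from properties
# of the original client request, as described above.
def forwarded_headers(client_addr, scheme, host_header, original_path):
    return {
        "x-forwarded-for": client_addr,      # address of the client
        "x-forwarded-proto": scheme,         # protocol used by the client
        "x-forwarded-host": host_header,     # host header sent by the client
        "x-forwarded-path": original_path,   # path originally requested
    }

headers = forwarded_headers("203.0.113.7", "https", "web.example.com", "/myapp/")
```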
Caution
If the application router forwards a request to a destination, it blocks the header host.
“Hop-by-hop” headers are meaningful only for a single transport-level connection; these headers are not
forwarded by the application router. The following headers are classified as “Hop-By-Hop” headers:
● Connection
● Keep-Alive
● Public
● Proxy-Authenticate
● Transfer-Encoding
● Upgrade
You can configure the application router to send additional HTTP headers, for example, either by setting it in the
httpHeaders environment variable or in a local-http.headers.json file.
Sample Code
local-http.headers.json
[
{
"X-Frame-Options": "ALLOW-FROM http://localhost"
},
{
"Test-Additional-Header": "1"
}
]
The application router establishes a session with the client (browser) using a session cookie. The application
router intercepts all session cookies sent by back-end services and stores them in its own session. To prevent
collisions between the various session cookies, back-end session cookies are not sent to the client. On request, the
application router sends the cookies back to the respective back-end services so the services can establish their
own sessions.
Note
Non-session cookies from back-end services are forwarded to the client, which might cause collisions between
cookies. Applications should be able to handle cookie collisions.
Session Contents
A session established by the application router typically contains the following elements:
● Redirect location
The location to redirect to after logon; if the request is redirected to a UAA logon form, the original request
URL is stored in the session so that, after successful authentication, the user is redirected back to it.
● CSRF token
The CSRF token value if it was requested by the clients. For more information about protection against Cross
Site Request Forgery see CSRF Protection [page 1083] below.
● OAuth token
The JSON Web Token (JWT) fetched from the User Account and Authentication service (UAA) and forwarded
to back-end services in the Authorization header. The client never receives this token. The application
router refreshes the JWT automatically before it expires (if the session is still valid). By default, this routine is
triggered 5 minutes before the expiration of the JWT, but it can also be configured with the <JWT_REFRESH>
environment variable (the value is set in minutes). If <JWT_REFRESH> is set to 0, the refresh action is disabled.
● OAuth scopes
The scopes owned by the current user, which is used to check if the user has the authorizations required for
each request.
● Back-end session cookies
All session cookies sent by back-end services.
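The JWT refresh timing described above can be sketched as simple arithmetic. This is an illustration only, assuming the token expiry is given as a Unix timestamp:

```python
# Illustration: the router refreshes the JWT a configurable number of
# minutes (default 5, via JWT_REFRESH) before the token expires;
# a value of 0 disables the refresh.
def next_refresh_time(token_exp_epoch, jwt_refresh_minutes=5):
    if jwt_refresh_minutes == 0:
        return None  # refresh disabled
    return token_exp_epoch - jwt_refresh_minutes * 60

assert next_refresh_time(10_000) == 9_700
assert next_refresh_time(10_000, 0) is None
```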
Scaling
The application router keeps all established sessions in local memory, and if multiple instances of the application
router are running, there is no synchronization between the sessions. To scale the application router for multiple
instances, session stickiness is used so that each HTTP session is handled by the same application router
instance.
The application-router process should run with at least 256MB memory. The amount of memory actually required
depends on the application the router is serving. The following aspects have an influence on the application's
memory usage:
● Concurrent connections
● Active sessions
● Size of the JSON Web Token (JWT)
● Back-end session cookies
You can use the start-up parameter max-old-space-size to restrict the amount of memory used by the
JavaScript heap. The default value for max-old-space-size is less than 2GB. To enable the application to use all
available resources, the value of max-old-space-size should be set to a number equal to the memory limit for
the whole application. For example, if the application memory is limited to 2GB, set the heap limit as follows, in the
application's package.json file:
Sample Code
"scripts": {
"start": "node --max-old-space-size=2048 node_modules/@sap/approuter/
approuter.js"
}
If the application router is running in an environment with limited memory, set the heap limit to about 75% of
available memory. For example, if the application router memory is limited to 256MB, add the following command
to your package.json:
Sample Code
"scripts": {
"start": "node --max-old-space-size=192 node_modules/@sap/approuter/
approuter.js"
}
Note
For detailed information about memory consumption in different scenarios, see the Sizing Guide for the
Application Router located in approuter/approuter.js/doc/sizingGuide.md.
CSRF Protection
The application router enables CSRF protection for any HTTP method other than GET or HEAD, provided the route is not public. A path is considered public if it does not require authentication. This is the case for routes with authenticationType: none, or if authentication is disabled completely via the top-level property authenticationMethod: none.
If a CSRF-protected route is requested with any of the above-mentioned methods, an x-csrf-token: <token> header must be present in the request, carrying a previously obtained token. This request must use the same session as the token fetch request. If the x-csrf-token header is not present or is invalid, the application router returns status code “403 Forbidden”.
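A sketch of the token flow with curl, assuming a CSRF-protected route on a hypothetical host: the token is first obtained with a GET request carrying the header x-csrf-token: fetch, and the returned token is then sent with the state-changing request in the same session (the cookie jar preserves the session between the two calls):

```shell
# Obtain a CSRF token; -c stores the session cookie, -D - prints response headers
curl -c cookies.txt -D - -H "x-csrf-token: fetch" https://myapp.example.com/employeeData/

# Reuse the session (-b) and send the obtained token with the modifying request
curl -b cookies.txt -X POST \
  -H "x-csrf-token: <token-from-previous-response>" \
  -H "Content-Type: application/json" \
  -d '{"name": "value"}' \
  https://myapp.example.com/employeeData/
```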
Cloud Connectivity
The application router supports integration with SAP Cloud Platform connectivity service. The connectivity service
enables you to manage proxy access to SAP Cloud Platform Cloud Connector, which you can use to create tunnels
for connections to systems located in private networks, for example, on-premise. To use the connectivity feature,
you must create an instance of the connectivity service and bind it to the Approuter application.
In addition, the relevant destination configurations in the <destinations> environment variable must have the
proxy type "onPremise", for example, "proxyType": "onPremise". You must also ensure that you obtain a
valid XSUAA logon token for the XS advanced User Account and Authentication service.
Troubleshooting
The application router uses the @sap/logging package, which means that all of the typical logging features are
available to control application logging. For example, to set all logging and tracing to the most detailed level, set the
<XS_APP_LOG_LEVEL> environment variable to “debug”.
Note
Enabling debug log level could lead to a very large amount of data being written to the application logs and trace
files. The asterisk wild card (*) enables options that trace sensitive data that is then written to the logs and
traces.
Tip
Logging levels are application-specific and case-sensitive; they can be defined with lower-case characters (for
example, “debug”) or upper-case characters (for example, “DEBUG”). An error occurs if you set a logging level
incorrectly, for example, using lower-case characters “debug” where the application defines the logging level as
“DEBUG”.
You can enable additional traces of the incoming and outgoing requests by setting the environment variable
<REQUEST_TRACE> to true. When enabled, basic information is logged for every incoming and outgoing request of the application router.
The @sap/logging package sets the header x-request-id in the application router's responses. This is useful if
you want to search the application router's logs and traces for entries that belong to a particular request
execution. Note that the application router does not change the headers received from the back end.
Related Information
The application description defined in the xs-app.json file contains the configuration information used by the
application router.
The following example of an xs-app.json application descriptor shows the JSON-compliant syntax required and
the properties that either must be set or can be specified as an additional option.
Code Syntax
{
"welcomeFile": "index.html",
"authenticationMethod": "route",
"sessionTimeout": 10,
"pluginMetadataEndpoint": "/metadata",
"routes": [
{
"source": "^/sap/ui5/1(.*)$",
"target": "$1",
"destination": "ui5",
"csrfProtection": false
},
{
"source": "/employeeData/(.*)",
"target": "/services/employeeService/$1",
"destination": "employeeServices",
"authenticationType": "xsuaa",
"scope": ["$XSAPPNAME.viewer", "$XSAPPNAME.writer"],
"csrfProtection": true
},
{
"source": "^/(.*)$",
"target": "/web/$1",
"localDir": "static-content",
"replace": {
"pathSuffixes": ["/abc/index.html"],
"vars": ["NAME"]
}
}
],
"login": {
"callbackEndpoint": "/custom/login/callback"
},
"logout": {
"logoutEndpoint": "/my/logout"
}
}
welcomeFile
The Web page served by default if the HTTP request does not include a specific path, for example, index.html.
Code Syntax
"welcomeFile": "index.html"
authenticationMethod
The method used to authenticate user requests, for example: “route” or “none” (no authentication).
Code Syntax
"authenticationMethod" : "route"
Caution
If authenticationMethod is set to “none”, logon with User Account and Authentication (UAA) is disabled.
routes
Defines all route objects, for example: source, target, and destination.
● source (RegExp, mandatory): A regular expression that matches the incoming request URL.
Note
Be aware that the RegExp is applied to the full URL, including query parameters.
● httpMethods (array of upper-case HTTP methods, optional): The HTTP methods that will be served by this route; the supported methods are: DELETE, GET, HEAD, OPTIONS, POST, PUT, TRACE, and PATCH.
Tip
If this option is not specified, the route will serve any HTTP method.
● csrfProtection (Boolean, optional): Toggles whether this route needs CSRF token protection. The default value is “true”. The application router enforces CSRF protection for any HTTP request that changes state on the server side, for example: PUT, POST, or DELETE.
Note
This value is relevant only for static resources.
Code Syntax
"routes": [
{
"source": "^/sap/ui5/1(.*)$",
"target": "$1",
"destination": "ui5",
"scope": "$XSAPPNAME.viewer",
"authenticationType": "xsuaa",
"csrfProtection": true
}
]
The properties destination and localDir cannot be used together in the same route.
If there is no route defined for serving static content via localDir, a default route is created for the “resources” directory as follows:
Sample Code
{
"routes": [
{
"source": "^/(.*)$",
"localDir": "resources"
}
]
}
Note
If there is at least one route using localDir, the default route is not added.
The httpMethods option allows you to split the same path across different targets depending on the HTTP
method. For example:
Sample Code
"routes": [
{
"source": "^/app1/(.*)$",
"target": "/before/$1/after",
"httpMethods": ["GET", "POST"]
}
]
This route will be able to serve only GET and POST requests. Any other method (including extension ones) will get
a 405 Method Not Allowed response. The same endpoint can be split across multiple destinations depending
on the HTTP method of the requests:
Sample Code
"routes": [
{
"source": "^/app1/(.*)$",
"destination" : "dest-1",
"httpMethods": ["GET"]
},
{
"source": "^/app1/(.*)$",
"destination" : "dest-2",
"httpMethods": ["DELETE", "POST", "PUT"]
}
]
The sample code above will route GET requests to the target dest-1, DELETE, POST and PUT to dest-2, and any
other method receives a 405 Method Not Allowed response. It is also possible to specify catchAll routes,
namely those that do not specify httpMethods restrictions:
Sample Code
"routes": [
{
"source": "^/app1/(.*)$",
"destination" : "dest-1",
"httpMethods": ["GET"]
},
{
"source": "^/app1/(.*)$",
"destination" : "dest-2"
}
]
In the sample code above, GET requests will be routed to dest-1, and all the rest to dest-2.
replace
The replace object configures the placeholder replacement in static text resources.
Sample Code
{
"replace": {
"pathSuffixes": ["index.html"],
"vars": ["escaped_text", "NOT_ESCAPED"]
}
}
● pathSuffixes (array): An array defining the path suffixes that are relative to localDir. Only files with a path ending with any of these suffixes will be processed.
● vars (array): A white list of the environment variables that will be replaced in the files matching the suffixes specified in pathSuffixes.
The supported tags for replacing environment variables are {{ENV_VAR}} and {{{ENV_VAR}}}. If such an environment variable is defined, the tag is replaced with its value; otherwise, it is replaced with an empty string.
Note
Any variable that is replaced using the two-brackets syntax {{ENV_VAR}} will be HTML-escaped; the triple-brackets syntax {{{ENV_VAR}}} is used when the replaced values do not need to be escaped, in which case the value is inserted as-is.
If your application descriptor xs-app.json contains a route like the one illustrated in the following example,
{
"source": "^/get/home(.*)",
"target": "$1",
"localDir": "resources",
"replace": {
"pathSuffixes": ["index.html"],
"vars": ["escaped_text", "NOT_ESCAPED"]
}
}
Sample Code
<html>
<head>
<title>{{escaped_text}}</title>
<script src="{{{NOT_ESCAPED}}}/index.js"/>
</head>
</html>
Then, in index.html, {{escaped_text}} and {{{NOT_ESCAPED}}} will be replaced with the value defined in
the environment variables <escaped_text> and <NOT_ESCAPED>.
Note
All index.html files are processed; if you want to apply the replacement only to specific files, you must set the
path relative to localDir. In addition, all files should comply with the UTF-8 encoding rules.
The content type returned by a request is based on the file extension specified in the route. The application router supports the following file types:
● .json (application/json)
● .txt (text/plain)
● .html (text/html); the default
● .js (application/javascript)
● .css (text/css)
Examples and results:
● { "pathSuffixes": [".html"] }
All files with the extension .html under localDir and its subfolders will be processed.
● { "pathSuffixes": ["/abc/main.html", "some.html"] }
For the suffix /abc/main.html, all files named main.html which are inside a folder named abc will be processed. For the suffix some.html, all files with a name that ends with “some.html” will be processed, for example: some.html, awesome.html.
● { "pathSuffixes": ["/some.html"] }
All files with the exact name “some.html” will be processed, for example: some.html, /abc/some.html.
sessionTimeout
Define the amount of time (in minutes) for which a session can remain inactive before it closes automatically
(times out); the default time out is 15 minutes.
Note
The sessionTimeout property is no longer available; to set the session time out value, use the environment
variable <SESSION_TIMEOUT>.
Sample Code
{
"sessionTimeout": 40
}
With the configuration in the example above, a session timeout is triggered after 40 minutes of inactivity; the timeout also triggers a central log out.
login
A redirect to the application router at a specific endpoint takes place during OAuth2 authentication with the User
Account and Authentication service (UAA). This endpoint can be configured in order to avoid possible collisions, as
illustrated in the following example:
Sample Code
Application Router “login” Property
"login": {
"callbackEndpoint": "/custom/login/callback"
}
logout
Define any options that apply if you want your application to have a central log-out endpoint. In this object, you can define the application's central log-out endpoint by using the logoutEndpoint property, as illustrated in the following example:
Sample Code
"logout": {
"logoutEndpoint": "/my/logout"
}
Making a GET request to “/my/logout” triggers a client-initiated central log out. Central log out can be initiated by a client or triggered due to a session timeout.
Tip
It is not possible to redirect back to your application after logging out from UAA.
You can use the logoutPage property to specify the Web page in one of the following ways:
● URL path
The UAA service redirects the user back to the application router, and the path is interpreted according to the
configured routes.
Note
The resource that matches the URL path specified in the property logoutPage should not require
authentication; for this route, the property authenticationType must be set to “none”.
In the following example, my-static-resources is a folder in the working directory of the application router;
the folder contains the file logout-page.html along with other static resources.
{
"authenticationMethod": "route",
"logout": {
"logoutEndpoint": "/my/logout",
"logoutPage": "/logout-page.html"
},
"routes": [
{
"source": "^/logout-page.html$",
"localDir": "my-static-resources",
"authenticationType": "none"
}
]
}
Sample Code
"logout": {
"logoutEndpoint": "/my/logout",
"logoutPage": "http://acme.com/employees.portal"
}
destinations
Specify any additional options for your destinations. The destinations section in xs-app.json extends the
destination configuration in the deployment manifest (manifest.yml), for example, with some static properties
such as a logout path.
Sample Code
{
"destinations": {
"node-backend": {
"logoutPath": "/ui5logout",
"logoutMethod": "GET"
}
}
}
● logoutPath (String, optional): The log-out endpoint for your destination. The logoutPath will be called when central log out is triggered or a session is deleted due to a timeout. The request to logoutPath contains additional headers, including the JWT token.
compression
The compression keyword enables you to define whether the application router compresses text resources before sending them. By default, resources larger than 1KB are compressed. If you need to change the compression size threshold, for example, to “2048 bytes”, you can add the optional property “minSize”: <size_in_bytes>, as illustrated in the following example.
Sample Code
{
"compression": {
"minSize": 2048
}
}
Compression can be switched off at the following levels:
● Global
Within the compression section, add "enabled": false.
● Front end
The client sends an “Accept-Encoding” header which does not include “gzip”.
● Back end
The application sends a “Cache-Control” header with the “no-transform” directive.
The compression section supports the following property:
● minSize (Number, optional): Text resources larger than this size (in bytes) will be compressed.
Note
If the <COMPRESSION> environment variable is set it will overwrite any existing values.
pluginMetadataEndpoint
Adds an endpoint that serves a JSON string representing all configured plugins.
Sample Code
{
"pluginMetadataEndpoint": "/metadata"
}
Note
If you request the relative path /metadata of your application, a JSON string is returned with the configured
plug-ins.
whitelistService
Enable the white-list service to help protect against click-jacking attacks. Enabling the white-list service opens an
endpoint accepting GET requests at the relative path configured in the endpoint property, as illustrated in the
following example:
Sample Code
{
"whitelistService": {
"endpoint": "/whitelist/service"
}
}
If the white-list service is enabled in the application router, each time an HTML page needs to be rendered in a frame, the white-list service is used to check whether the parent frame is allowed to render the content in a frame.
Sample Code
[
{
"protocol": "http",
"host": "*.acme.com",
"port": 12345
},
{
"host": "hostname.acme.com"
}
]
Note
Matching is done against the properties provided. For example, if only the host name is provided, matching is performed for all schemata and ports, and the white-list service returns “framing: true” for all of them.
Return Value
The white-list service accepts only GET requests, and returns a JSON object as the response. The white-list service
call uses the parent origin as URI parameter (URL encoded) as follows:
Sample Code
GET url/to/whitelist/service?parentOrigin=https://parent.domain.acme.com
The response is a JSON object with the following properties; property “active” has the value false only if
<CJ_PROTECT_WHITELIST> is not provided:
Sample Code
{
"version" : "1.0",
"active" : true | false,
"origin" : "<same as passed to service>",
"framing" : true | false
}
The “active” property enables framing control; the “framing” property specifies if framing should be allowed.
By default, the Application Router (approuter.js) sends the X-Frame-Options header with the value SAMEORIGIN.
Tip
If the white-list service is enabled, the header value probably needs to be changed, see the X-Frame-Options
header section for details about how to change it.
websockets
The application router can forward web-socket communication. Web-socket communication must be enabled in
the application router configuration, as illustrated in the following example. If the back-end service requires
authentication, the upgrade request should contain a valid session cookie. The application router supports the
destination schemata "ws", "wss", "http", and "https".
Sample Code
{
"websockets": {
"enabled": true
}
}
Restriction
A web-socket ping is not forwarded to the back-end service.
errorPage
Errors originating in the application router show the HTTP status code of the error. It is possible to display a
custom error page using the errorPage property.
The property is an array of objects; each object can have the properties status (a single HTTP status code, or an array of status codes, to which the custom page applies) and file (the path to the file containing the error page).
In the following code example, errors with status code “400”, “401”, and “402” will show the content of ./custom-err-40x.html; errors with the status code “501” will display the content of ./http_resources/custom-err-501.html.
Sample Code
{ "errorPage" : [
{"status": [400,401,402], "file": "./custom-err-40x.html"},
{"status": 501, "file": "./http_resources/custom-err-501.html"}
]
}
Note
The contents of the errorPage configuration section have no effect on errors that are not generated by the
application router.
Related Information
The application router serves static content, propagates user information, and acts as a proxy that forwards requests to other microservices. The routing configuration for an application is defined in one or more destinations.
Destinations
A destination defines the back-end connectivity. In its simplest form, a destination is a URL to which requests are forwarded. There has to be a destination for every single app (microservice) that is part of the business application.
The destinations configuration can be provided by the destinations environment variable or by the destination
service.
Sample Code
Destinations Defined in the Application Manifest (manifest.yml)
---
applications:
- name: node-hello-world
port: <approuter-port> #the port used for the approuter
memory: 100M
path: web
env:
destinations: >
[
{
"name":"backend",
"url":"http://<hostname>:<node-port>",
"forwardAuthToken": true
}
]
services:
- node-uaa
Note
The "name" and "url" properties are mandatory.
The value of the destination "name" property ("name": "backend" in the example above) must match the value of
the destinations property configured for a route in the corresponding application-router configuration file (xs-
app.json). It is also possible to define a logout path and method for the destinations property in the xs-
app.json file. For more information, see the section describing the application-router configuration syntax in
Related Information.
For the destination service, there are the following AppRouter-specific limitations to the destination properties configuration:
● Authentication type; one of:
none
basic authentication (if set, User and Password are mandatory)
principal propagation (if set, the ProxyType has to be on-premise)
● Proxy type; one of:
on-premise (if set, binding to the SAP Cloud Platform connectivity service is required)
internet
Note
If the ProxyType is set to on-premise, the ForwardAuthToken property should not be set.
Note
The timeout value specified also applies to the destination's logout path (if defined), which belongs to the destination property.
● If a destination with the same name is defined both in the destinations environment variable and in the destination service, the destination configuration loads the settings from the environment.
● If the configuration of a destination is updated at runtime, the changes are not applied automatically to the AppRouter. To apply the changes, the AppRouter has to be restarted.
● The destination service is only available in the Cloud Foundry environment.
Related Information
A list of environment variables that can be used to configure the application router.
The following table lists the environment variables that you can use to configure the application router. The table
also provides a short description of each variable and, where appropriate, an example of the configuration data.
● httpHeaders: Configure the application router to return additional HTTP headers in its responses to client requests.
● SESSION_TIMEOUT: Set the time to trigger an automatic central log out from the User Account and Authentication (UAA) server.
● CJ_PROTECT_WHITELIST: A list of allowed server or domain origins to use when checking for click-jacking attacks.
● WS_ALLOWED_ORIGINS: A list of the allowed server (or domain) origins that the application router uses to verify requests.
● JWT_REFRESH: Configures the automatic refresh of the JSON Web Token (JWT) provided by the User Account and Authentication (UAA) service to prevent expiry (default is 5 minutes).
● UAA_SERVICE_NAME: Specify the exact name of the UAA service to bind to an application.
● INCOMING_CONNECTION_TIMEOUT: Specify the maximum time (in milliseconds) for a client connection. If the specified time is exceeded, the connection is closed.
● TENANT_HOST_PATTERN: Define a regular expression to use when resolving tenant host names in the request's host name.
● SECURE_SESSION_COOKIE: Configure the enforcement of the Secure flag of the application router's session cookie.
● CORS: Provide support for cross-origin requests, for example, by allowing the modification of the request header.
httpHeaders
If configured, the application router sends additional HTTP headers in its responses to a client request. You can set
the additional HTTP headers in the <httpHeaders> environment variable. The following example configuration
shows how to configure the application router to send two additional headers in the responses to the client
requests from the application <myApp>:
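A minimal sketch of such a configuration, assuming the Cloud Foundry CLI and a hypothetical application named myApp, reusing the header values shown at the start of this section:

```shell
# Set two additional response headers as a JSON array of header objects
cf set-env myApp httpHeaders '[{"X-Frame-Options": "ALLOW-FROM http://localhost"}, {"Test-Additional-Header": "1"}]'
# Environment changes take effect after a restart
cf restart myApp
```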
Tip
To ensure better security of your application, set the Content-Security-Policy header. This is a response header which informs browsers (capable of interpreting it) about the trusted sources from which an application expects to load resources. This mechanism allows the client to detect and block malicious scripts injected into an application.
Usage of the Content-Security-Policy header is considered a second line of defense. An application should always provide proper input validation and output encoding.
destinations
The destinations configuration is an array of objects that is defined in the destinations environment variable. A
destination is required for every application (microservice) that is part of the business application. The following
table lists the properties that can be used to describe the destination:
● url (String, mandatory): The Uniform Resource Locator for the application (microservice).
● proxyHost (String, optional): The host of the proxy server, used in case the request should go through a proxy to reach the destination.
Tip
Mandatory if proxyPort is defined.
● proxyPort (String, optional): The port of the proxy server, used in case the request should go through a proxy to reach the destination.
Tip
Mandatory if proxyHost is defined.
● forwardAuthToken (Boolean, optional): If true, the OAuth token will be sent to the destination. The default value is “false”. This token contains the user identity, scopes, and some other attributes. The token is signed by the User Account and Authentication (UAA) service so that the token can be used for user-authentication and authorization purposes by back-end services.
Caution
For testing purposes only. Do not use this property in production environments!
Tip
The timeout specified here also applies to the logout path, logoutPath, if the logout path is defined, for example, in the application's descriptor file xs-app.json.
Tip
In Cloud environments, if you set the application's destination proxyType to onPremise, a binding to the SAP Cloud Platform connectivity service is required, and the forwardAuthToken property must not be set.
The following example shows a simple configuration for the <destinations> environment variable:
Sample Code
[
{
"name" : "ui5",
"url" : "https://sapui5.acme.com",
"proxyHost" : "proxy",
"proxyPort" : "8080",
"forwardAuthToken" : false,
"timeout" : 1200
}
]
It is also possible to include the destinations in the manifest.yml file, as illustrated in the following example:
Sample Code
- name: node-hello-world
memory: 100M
path: web
env:
destinations: >
[
{"name":"ui5", "url":"https://sapui5.acme.com"}
]
SESSION_TIMEOUT
You can configure the triggering of an automatic central log-out from the User Account and Authentication (UAA)
service if an application session is inactive for a specified time. A session is considered to be inactive if no requests
are sent to the application router. The following command shows how to set the environment variable
<SESSION_TIMEOUT> to 40 (forty) minutes for the application <myApp1>:
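The command itself can be sketched as follows, assuming the Cloud Foundry CLI:

```shell
cf set-env myApp1 SESSION_TIMEOUT 40
# Environment changes take effect after a restart
cf restart myApp1
```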
Note
You can also set the session timeout value in the application's manifest.yml file, as illustrated in the following
example:
Sample Code
- name: myApp1
memory: 100M
path: web
env:
SESSION_TIMEOUT: 40
Tip
If the authentication type for a route is set to "xsuaa" (for example, "authenticationType": "xsuaa"),
the application router depends on the UAA server for user authentication, and the UAA server might have its
own session timeout defined. To avoid problems caused by unexpected timeouts, it is recommended that the
session timeout values configured for the application router and the UAA are identical.
SEND_XFRAMEOPTIONS
By default, the application router sends the X-Frame-Options header with the value “SAMEORIGIN”. You can change this behavior either by disabling the sending of the header (for example, by setting <SEND_XFRAMEOPTIONS> to false) or by configuring a new header value with the <httpHeaders> environment variable.
The following example shows how to disable the sending of the X-Frame-Options for a specific application,
myApp1:
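A sketch of the command, assuming the Cloud Foundry CLI:

```shell
cf set-env myApp1 SEND_XFRAMEOPTIONS false
cf restart myApp1
```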
CJ_PROTECT_WHITELIST
The <CJ_PROTECT_WHITELIST> variable specifies a list of origins (for example, host or domain names) that do not need to be protected against click-jacking attacks. This list of allowed host names and domains is used by the application router's white-list service to protect XS advanced applications against click-jacking attacks. When an HTML page needs to be rendered in a frame, a check is done by calling the white-list service to validate whether the parent frame is allowed to render the requested content in a frame.
The following example shows how to add a host name to the click-jacking protection white list for the application,
myApp1:
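Reusing the origin object shown in the white-list service section above, a sketch assuming the Cloud Foundry CLI:

```shell
cf set-env myApp1 CJ_PROTECT_WHITELIST '[{"protocol": "http", "host": "*.acme.com", "port": 12345}]'
cf restart myApp1
```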
The content of the variable is a JSON list of objects, each with the properties protocol, host, and port, as shown in the white-list service section above.
Note
Matching is done against the properties provided. For example, if only the host name is provided, the white-list
service matches all schemata and protocols.
WS_ALLOWED_ORIGINS
When the application router receives an upgrade request, it verifies that the origin header includes the URL of
the application router. If this is not the case, then an HTTP response with status 403 is returned to the client. This
origin verification can be further configured with the environment variable <WS_ALLOWED_ORIGINS>, which
defines a list of the allowed origins the application router uses in the verification process.
JWT_REFRESH
The JWT_REFRESH environment variable is used to configure the application router to refresh a JSON Web Token
(JWT) for an application, by default, 5 minutes before the JWT expires, if the session is active.
If the JWT is close to expiration and the session is still active, a JWT refresh will be triggered <JWT_REFRESH>
minutes before expiration. The default value is 5 minutes. To disable the automatic refresh, set the value of
<JWT_REFRESH> to 0 (zero).
UAA_SERVICE_NAME
The UAA_SERVICE_NAME environment variable enables you to configure an instance of the User Account and
Authorization service for a specific application, as illustrated in the following example:
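A sketch assuming the Cloud Foundry CLI; the service instance name node-uaa is taken from the manifest example above:

```shell
cf set-env myApp1 UAA_SERVICE_NAME node-uaa
cf restart myApp1
```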
Note
The details of the service configuration are defined in the <VCAP_SERVICES> environment variable, which is not
configured by the user.
INCOMING_CONNECTION_TIMEOUT
The INCOMING_CONNECTION_TIMEOUT environment variable enables you to set the maximum time (in
milliseconds) allowed for a client connection, as illustrated in the following example:
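A sketch assuming the Cloud Foundry CLI; the value 300000 (five minutes) is illustrative:

```shell
cf set-env myApp1 INCOMING_CONNECTION_TIMEOUT 300000
cf restart myApp1
```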
If the specified time is exceeded, the connection is closed. If INCOMING_CONNECTION_TIMEOUT is set to zero (0),
the connection-timeout feature is disabled. The default value for INCOMING_CONNECTION_TIMEOUT is 120,000
ms (2 min).
TENANT_HOST_PATTERN
The TENANT_HOST_PATTERN environment variable enables you to specify a string containing a regular expression with a capturing group. The requested host is matched against this regular expression. The value of the first capturing group is used as the tenant ID, as illustrated in the following example:
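As a sketch of how the capturing group works (the pattern and host names here are hypothetical), the variable could be set with cf set-env myApp1 TENANT_HOST_PATTERN '^(.*)-approuter\.cfapps\.acme\.com$'; the match the application router performs on the request host can be simulated with sed:

```shell
# Hypothetical tenant hosts of the form <tenant>-approuter.cfapps.acme.com;
# the first capturing group yields the tenant ID.
echo "tenant1-approuter.cfapps.acme.com" \
  | sed -E 's/^(.*)-approuter\.cfapps\.acme\.com$/\1/'
# prints: tenant1
```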
COMPRESSION
The COMPRESSION environment variable enables you to configure the compression of resources before a response
to the client, as illustrated in the following example:
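A sketch assuming the Cloud Foundry CLI, reusing the minSize threshold from the compression section above:

```shell
cf set-env myApp1 COMPRESSION '{"enabled": true, "minSize": 2048}'
cf restart myApp1
```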
SECURE_SESSION_COOKIE
The SECURE_SESSION_COOKIE can be set to true or false. By default, the Secure flag of the session cookie is set
depending on the environment the application router runs in. For example, when the application router is running
behind a router that is configured to serve HTTPS traffic, then this flag will be present. During local development
the flag is not set.
Note
If the Secure flag is enforced, the application router will reject requests sent over unencrypted connections.
The following example illustrates how to set the SECURE_SESSION_COOKIE environment variable:
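Assuming the Cloud Foundry CLI:

```shell
cf set-env myApp1 SECURE_SESSION_COOKIE true
cf restart myApp1
```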
REQUEST_TRACE
You can enable additional traces of the incoming and outgoing requests by setting the environment variable
<REQUEST_TRACE> to “true”. If enabled, basic information is logged for every incoming and outgoing request to the
application router.
The following example illustrates how to set the REQUEST_TRACE environment variable:
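Assuming the Cloud Foundry CLI:

```shell
cf set-env myApp1 REQUEST_TRACE true
cf restart myApp1
```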
Tip
This is in addition to the information generated by the Node.js package @sap/logging that is used by the XS
advanced application router.
CORS
The CORS environment variable enables you to provide support for cross-origin requests, for example, by allowing
the modification of the request header. Cross-origin resource sharing (CORS) permits Web pages from other
domains to make HTTP requests to your application domain, where normally such requests would automatically
be refused by the Web browser's security policy.
Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources on a Web page to be
requested from another domain (protocol and port) outside the domain (protocol and port) from which the first
resource was served. The CORS configuration enables you to define details to control access to your application
resource from other Web browsers. For example, you can specify where requests can originate from or what is
allowed in the request and response headers. The following example illustrates a basic CORS configuration:
[
  {
    "uriPattern": "^\\route1$",
    "allowedMethods": [
      "GET"
    ],
    "allowedOrigin": [
      {
        "host": "host.acme.com",
        "protocol": "https",
        "port": 345
      }
    ],
    "maxAge": 3600,
    "allowedHeaders": [
      "Authorization",
      "Content-Type"
    ],
    "exposeHeaders": [
      "customHeader1",
      "customHeader2"
    ],
    "allowedCredentials": true
  }
]
The CORS configuration includes an array of objects with the following properties, some of which are mandatory:
uriPattern (String, mandatory)
A regular expression (RegExp) representing the source routes to which the CORS configuration applies. To ensure
that the RegExp matches the complete path, surround it with ^ and $, for example, "uriPattern":
"^\\route1$". Default: none.
Note
Matching is case-sensitive. In addition, if no port or protocol is specified, the default is “*”.
Tip
The specified methods must be upper-case, for example, GET. Matching of the method type is case-sensitive.
maxAge (String, optional)
A single value specifying the length of time (in seconds) a preflight request should be cached. A negative value
prevents the CORS filter from adding this response header to the preflight response. If maxAge is defined but no
value is specified, the default time of “1800” seconds applies.
allowedCredentials (Boolean, optional)
A Boolean flag that indicates whether the specified resource supports user credentials. The default setting is
“true”.
Sample Code
CORS Configuration in the manifest.yml File
- name: node-hello-world
  memory: 100M
  path: web
  env:
    CORS: >
      [
        {
          "allowedOrigin":[
            {
              "host":"my_host",
              "protocol":"https"
            }
          ],
          "uriPattern":"^/route1$"
        }
      ]
Related Information
Each multi-tenant application has to deploy its own application router, and the application router handles the
requests of all tenants to the application. The application router determines the tenant identifier from the URL
and then forwards the authentication request to the tenant User Account and Authentication (UAA) service and
the related identity zone.
To use a multi-tenant application router, you need to have a shared UAA service and the version of the application
router has to be greater than 2.3.1.
The application router needs to determine the tenant-specific subdomain for the UAA that in turn determines the
identity zone, used for authentication. This is done by using a regular expression defined in the environment
variable TENANT_HOST_PATTERN.
TENANT_HOST_PATTERN is a string containing a regular expression with a capturing group. The request host is
matched against this regular expression. The value of the first capturing group is used as the tenant subdomain.
With this configuration, the application router will extract the tenant subdomain, which will be used for
authentication against a multi-tenant UAA.
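A sketch of such a configuration in a manifest.yml (the application name and host name are placeholders for your actual application route):

```yaml
- name: node-hello-world
  memory: 100M
  path: web
  env:
    # The first capturing group, (.*), yields the tenant subdomain
    TENANT_HOST_PATTERN: '^(.*)-myapp.cfapps.example.com$'
```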
Instead of starting the application router directly, you can configure your XS advanced application to use its own
start script. You can also use the application router as a regular Node.js package.
Sample Code
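A sketch of the package.json of such an application (the package name, version, and file name are illustrative):

```json
{
  "name": "my-approuter-app",
  "dependencies": {
    "@sap/approuter": "^6.0.0"
  },
  "scripts": {
    "start": "node my-start.js"
  }
}
```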
The application router uses the connect framework; for more information, see the Connect framework entry in
Related Information below. You can reuse all injected “connect” middleware within the application router, for
example, directly in the application start script:
Sample Code
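A sketch of such a start script, assuming @sap/approuter is declared as a dependency of the application (the path and response text are illustrative):

```js
var approuter = require('@sap/approuter');

var ar = approuter();

// Name custom middleware to facilitate troubleshooting
ar.beforeRequestHandler.use('/my-ext', function myMiddleware(req, res, next) {
  res.end('Request handled by my extension!');
});

ar.start();
```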
Tip
To facilitate troubleshooting, always provide a name for your custom middleware.
The path argument is optional. You can also chain use calls.
Sample Code
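For example, chained use calls on a middleware slot might look like this (a sketch; the path and handler names are illustrative):

```js
ar.beforeRequestHandler
  .use(function logExtension(req, res, next) {
    console.log('Extension middleware invoked for %s', req.url);
    next(); // hand control back to the application router
  })
  .use('/my-ext', function myMiddleware(req, res, next) {
    res.end('Request handled by my extension!');
  });
```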
The application router defines the following slots where you can insert custom middleware:
● first - right after the connect application is created, and before any application router middleware. At this
point security checks are not performed yet.
Tip
This is good place for infrastructure logic, for example, logging and monitoring.
● beforeRequestHandler - before standard application router request handling, that is static resource
serving or forwarding to destinations.
Tip
This is a good place to handle custom REST API requests.
● beforeErrorHandler - before the standard application router error handling.
Tip
This is a good place to capture or customize error handling.
If your middleware does not complete the request processing, call next to return control to the application router
middleware, as illustrated in the following example:
Sample Code
module.exports = {
  insertMiddleware: {
    first: [
      function logRequest(req, res, next) {
        console.log('Got request %s %s', req.method, req.url);
        next();
      }
    ],
    beforeRequestHandler: [
      {
        path: '/my-ext',
        handler: function myMiddleware(req, res, next) {
          res.end('Request handled by my extension!');
        }
      }
    ]
  }
};
The extension configuration can be referenced in the corresponding application's start script, as illustrated in the
following example:
Sample Code
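A sketch of such a start script, assuming the extension configuration above is saved as my-extension.js next to the script (the file name is illustrative):

```js
var approuter = require('@sap/approuter');

var ar = approuter();

ar.start({
  extensions: [
    require('./my-extension.js')
  ]
});
```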
By default, the application router handles its command-line parameters, but that can be customized as well.
An <approuter> instance provides the property cmdParser, which is a commander instance configured with the
standard application router command-line options. You can use it to register custom options. To disable
command-line parsing entirely, set cmdParser to false:
Sample Code
ar.cmdParser = false;
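For example, a custom option can be registered on the commander instance before the router is started (a sketch; the option name is illustrative):

```js
var approuter = require('@sap/approuter');

var ar = approuter();

// Register an additional command-line option via the commander API
ar.cmdParser.option('-x, --extra <value>', 'An extra custom option');

ar.start();
```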
Related Information
A detailed list of the features and functions provided by the application router extension API.
The application router extension API enables you to create new instances of the application router, manage the
approuter instance, and insert middleware using the Node.js “connect” framework. This section contains detailed
information about the following areas:
The application router uses the “Connect” framework for the insertion of middleware components. You can reuse
all connect middleware within the application router directly.
The extension API provides the following middleware slots:
● first
Tip
This is a good place to insert infrastructure logic, for example, logging and monitoring.
● beforeRequestHandler
Tip
This is a good place to handle custom REST API requests.
● beforeErrorHandler
Tip
This is a good place to capture or customize error handling.
It also provides the following function:
● start(options, callback) - Starts the application router with the given options.
Related Information
When an application for the Cloud Foundry environment resides in a folder on your local machine, you can deploy it
and start it by executing the command line interface (CLI) command push.
Context
● For more information about developing and deploying applications in the Cloud Foundry environment, see
http://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html .
● You can deploy your first applications following the tutorials that are available for the supported services:
Tutorials [page 980].
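A typical invocation from the application folder looks like this (a sketch; the application name and folder are placeholders):

```shell
cd my-app       # the local folder containing the application
cf push my-app  # deploy and start; reads manifest.yml if present
```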
Related Information
Download and Install the Cloud Foundry Command Line Interface [page 948]
Log On to the Cloud Foundry Environment Using the Cloud Foundry Command Line Interface [page 948]
Develop applications for the Neo environment using Java, HTML5 and SAP HANA technologies, and consuming
cloud services.
Note
If your application uses a platform service and that service becomes temporarily unavailable due to a restart or
a temporary problem, make sure that you develop your application in such a way that it can resume its normal
running state automatically when the service becomes available again.
You can do this by wrapping the calls to the service so that an erroneous state is expected and the calls can be
retried later. Applications should not fall into an unrecoverable state, as this would require an application restart.
In addition, applications can mitigate the temporary absence of a specific functionality by displaying data in their
user interface only partially.
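The retry approach described in this note can be sketched as follows (a hypothetical helper, not a platform API; the attempt count and wait time are illustrative):

```java
import java.util.concurrent.Callable;

/** Hypothetical helper: retries a service call a fixed number of times so a
 *  temporarily unavailable platform service does not leave the application
 *  in an unrecoverable state. Assumes maxAttempts >= 1. */
public class RetryingServiceCall {

    public static <T> T callWithRetry(Callable<T> call, int maxAttempts, long waitMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();           // service is available again
            } catch (Exception e) {
                last = e;                     // erroneous state is expected
                if (attempt < maxAttempts) {
                    Thread.sleep(waitMillis); // back off before retrying
                }
            }
        }
        throw last;                           // caller can degrade gracefully
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky service: fails twice, then succeeds.
        int[] calls = {0};
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) {
                throw new IllegalStateException("service unavailable");
            }
            return "OK";
        }, 5, 10);
        System.out.println(result); // prints "OK"
    }
}
```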
In this section:
Related Information
In the Neo environment, you enable services in the SAP Cloud Platform cockpit.
The cockpit lists all services grouped by service category. Some of the services are basic services, which are
provided with SAP Cloud Platform and are ready-to-use. In addition, extended services are available. A label on the
tile for a service indicates if this service is enabled.
An administrator must first enable the service and apply the service-specific configuration (for example, configure
the corresponding roles and destinations) before any subaccount members can use it.
Note
Some services are exposed only for trial accounts. That means the services are not, or not yet, released for use
with a customer or partner account.
Some services are exposed only if your organization has purchased a license.
Remember
You can access most of the links only after the service has been enabled.
The configuration options for a service may look like the following example for the Portal service:
● To configure connection parameters to other systems (by creating connectivity destinations), choose
Configure <Portal Service> Destinations .
This option is available only if the service is enabled.
● To create custom roles and assign custom or predefined roles to individual users and groups, choose
Configure <Portal Service> Roles .
This option is available only if the service is enabled.
In the Neo environment, you might need to enable services before subaccount members can integrate them with
applications. Note that free services are always enabled.
Prerequisites
Procedure
1. Navigate to the subaccount in which you'd like to enable a service. For more information, see Navigate to
Global Accounts and Subaccounts [page 964].
2. In the navigation area, choose Services.
3. Select the service and choose Enable.
In the Neo environment, you might need to disable services so that they are not available to subaccount members.
Prerequisites
Procedure
1. Navigate to the subaccount in which you'd like to disable a service. For more information, see Navigate to
Global Accounts and Subaccounts [page 964].
2. In the navigation area, choose Services.
3. Select the service and choose Disable.
Note
○ If other services use the service, they may be negatively impacted when you disable it. Your service
documentation may provide information about services that are dependent on your service.
Note
For information about platform services, go to Capabilities [page 24].
SAP Cloud Platform allows you to achieve isolation between the different application life cycle stages
(development, testing, productive) by using multiple subaccounts.
Prerequisites
● You have developed an application. For more information, see Developing Java Applications [page 1164].
● You have a subaccount in an enterprise account. For more information, see Global Accounts and Subaccounts
[page 10].
Context
Using multiple subaccounts ensures better stability. Also, you can achieve better security for productive
applications because permissions are given per subaccount.
For example, you can create three different subaccounts for one application and assign the necessary amount of
compute unit quota to them:
● dev - use for development purposes and for testing the increments in the cloud, you can grant permissions to
all application developers
● test - use for testing the developed application and its critical configurations to ensure quality delivery
(integration testing and testing in productive-like environment prior to making it publicly available)
● prod - use to run productive applications, give permissions only to operators.
Procedure
Next, you can deploy your application in the newly created subaccount using the Eclipse IDE or the console
client. Then, you can test your application and make it ready for productive use.
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create a new subaccount.
Execute:
Next, you can deploy your application in the newly created subaccount by executing neo deploy -a
<subaccount> -h <host> -b <application name> -s <file location> -u <user name or
email>. Then, you can test your application and make it ready for productive use.
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
SAP Cloud Platform enables you to develop, deploy and use Java applications in a cloud environment. Applications
run on a runtime container where they can use the platform services APIs and Java EE APIs according to standard
patterns.
The SAP Cloud Platform Runtime for Java enables provisioning and running applications on the platform. The
runtime is represented by the Java Virtual Machine, the application runtime container, and compute units. Cloud
applications interact at runtime with the containers and services via the platform APIs.
Compute Unit
The Java development process is enabled by the SAP Cloud Platform Tools, which comprise the Eclipse IDE and
the SAP Cloud Platform SDK.
During and after development, you can configure and operate an application using the cockpit and the console
client.
Appropriate for
Related Information
Set up your Java development environment and deploy your first application in the cloud.
Samples
A set of sample applications allows you to explore the core functionality of SAP Cloud Platform and shows how this
functionality can be used to develop complex Web applications. See: Using Samples [page 1143]
Before you can start developing your application, you need to download and set up the necessary tools, which
include Eclipse IDE for Java EE Developers, SAP Cloud Platform Tools, and SDK.
SAP Cloud Platform Tools, SAP Cloud Platform SDK for Neo environment, SAP JVM, and SAP HANA Cloud
Connector, can be downloaded from the SAP Development Tools for Eclipse page.
For more information on each step of the set up procedure, open the relevant page from the structure.
Procedure
1. For Java applications, choose between three types of SAP Cloud Platform SDK for Neo environment.
For more information, see Installing the SDK.
2. SAP JVM is the Java runtime used in SAP Cloud Platform. It can be set as a default JRE for your local runtime.
For instructions on how to install it, see (Optional) Installing SAP JVM.
3. Download and set up Eclipse IDE for Java EE Developers.
See Installing Eclipse IDE.
4. Download and set up SAP Development Tools for Eclipse.
See Installing SAP Development Tools for Eclipse.
5. Configure the landscape host and SDK location on which you will be deploying your application.
See Set Up the Runtime Environment [page 1131].
6. Add Java Web, Java Web Tomcat 7, Java Web Tomcat 8, or Java EE 6 Web Profile, according to the SDK you
use. See Setting Up the Runtime Environment.
For more information on the different SDK versions and their corresponding runtime environments, see
Application Runtime Container.
7. To set up SAP JVM as a default JRE for your local environment, see Setting Up SAP JVM in Eclipse IDE.
8. If you prefer working with the Console Client, see Setting Up the Console Client.
9. If you need to establish a connection between on-demand applications in SAP Cloud Platform and existing on-
premise systems, you can use SAP HANA Cloud Connector.
For more information, see SAP HANA Cloud Connector.
Context
For more information, see section Application Runtime Container [page 1153].
Procedure
1. Open https://tools.hana.ondemand.com/#cloud
2. From the SAP Cloud Platform Neo Environment SDK section, download the relevant ZIP file and save it to your
local file system.
3. Extract the ZIP file to a folder on your computer or network.
Your SDK is ready for use. To use the SAP Cloud Platform SDK for Neo environment with Eclipse, see Set Up the
Runtime Environment [page 1131]. To use the console client, see Using the Console Client [page 1792].
Related Information
Context
SAP Cloud Platform infrastructure runs on SAP's own implementation of a Java Virtual Machine - SAP Java Virtual
Machine (JVM).
SAP JVM is a certified Java Virtual Machine and Java Development Kit (JDK), compliant to Java Standard Edition
(SE) 8. Technology-wise it is based on the OpenJDK and has been enhanced with a strong focus on supportability
and reliability. One example of these enhancements is the SAP JVM Profiler. The SAP JVM Profiler is a tool that
helps you analyze the resource consumption of a Java application running on the SAP Cloud Platform local
runtime. You can use it to profile simple stand-alone Java programs or complex enterprise applications.
Customer support is provided directly by SAP for the full maintenance period of SAP applications that use the SAP
JVM. For more information, see Java Virtual Machine [page 1151]
Procedure
1. Open https://tools.hana.ondemand.com/#cloud
2. From the SAP JVM section, download the SAP JVM archive file compatible to your operating system and save
it to your local file system.
3. Extract the archive file.
Note
If you use Windows as your operating system, you need to install the Visual C++ 2010 Runtime prior to using
SAP JVM. The installation package for the Visual C++ 2010 Runtime can be obtained from Microsoft. Download
and install vcredist_x64.exe from the following site: https://www.microsoft.com/en-us/download/details.aspx?
id=26999 . Even if you already have a different version of Visual C++ Runtime, for example Visual C++ 2015,
you still need to install Visual C++ 2010 Runtime prior to using SAP JVM. See SAP Note 1837221 .
Related Information
Prerequisites
If you are not using SAP JVM, you need to have JDK installed in order to be able to run Eclipse.
Procedure
2. Find the ZIP file you have downloaded on your local file system and unpack the archive.
3. Go to the eclipse folder and run the eclipse executable file.
4. Specify a Workspace directory.
5. To open the Eclipse workbench, choose Workbench in the upper right corner.
Note
If the version of your previous Eclipse IDE is 32-bit based and your currently installed Eclipse IDE is 64-bit based
(or the other way round), you need to delete the Eclipse Secure Storage, where Eclipse stores, for example,
credentials for source code repositories and other login information. For more information, see Eclipse Help:
Secure Storage .
To use SAP Cloud Platform features, you first need to install the relevant toolkit. Follow the procedure below.
Prerequisites
You have installed an Eclipse IDE. For more information, see Install Eclipse IDE [page 1128].
Caution
The support for Mars has entered end of maintenance. We recommend that you use Oxygen or Neon releases.
Procedure
Note
For some operating systems, the path is Eclipse Preferences .
4. Configure your proxy settings (in case you work behind a proxy or a firewall):
Note
If you want to have your SAP Cloud Platform Tools updated regularly and automatically, open the Preferences
window again and choose Install/Update Automatic Updates . Select Automatically find new updates and
notify me and choose Apply.
Prerequisites
You have installed the SAP Development Tools for Eclipse. See Install SAP Development Tools for Eclipse [page
1129]
Procedure
7. Choose the Validate button to check whether the data on this preference page is valid.
8. Choose OK.
In your Eclipse IDE, set up the runtime environment for Java applications. Use the same runtime environment as
the one you will be using to run the applications on the cloud.
Context
There are different runtime environments for Java applications available. For a complete list, see Application
Runtime Container [page 1153].
Prerequisites
You have downloaded an SDK archive and installed it in your Eclipse IDE. For more information, see Install the SAP
Cloud Platform SDK for Neo Environment [page 1127].
Procedure
Java Web
Note
When deploying your application on SAP Cloud Platform, you can change your server runtime even during
deployment. If you manually set a server runtime different than the currently loaded, you will need to republish
the application. For more information, see Deploy on the Cloud from Eclipse IDE [page 1191].
Related Information
Context
Once you have installed your SAP JVM, you can set it as a default JRE for your local runtime. Follow the steps
below.
Prerequisites
You have downloaded and installed SAP JVM, version 7.1.054 or higher.
You can set SAP JVM as default or assign it to a specific SAP Cloud Platform runtime.
● To use SAP JVM as default for your Eclipse IDE, follow the steps:
1. Open again the Preferences window.
2. Select sapjvm<n> as default.
3. Choose OK.
● To use SAP JVM for launching local servers only, follow the steps:
1. Double-click on the local server you have created (Java Web Server, Java Web Tomcat 7 Server or
Java Web Tomcat 8 Server).
2. Open the Overview tab and choose Open launch configuration.
3. Select the JRE tab.
4. Choose the Alternative JRE option.
5. From the dropdown menu, select the SAP JVM version you have just added.
6. Choose OK.
Related Information
Prerequisites
You have downloaded and extracted the SAP Cloud Platform SDK for Neo environment. For more information, see
Install the SAP Cloud Platform SDK for Neo Environment [page 1127].
Context
SAP Cloud Platform console client is part of the SAP Cloud Platform SDK for Neo environment. You can find it in
the tools folder of your SDK installation. Before using the tool, you need to configure it to work with the platform.
Procedure
cd C:\HCP\SDK
cd tools
3. In case you use a proxy server, specify the proxy settings by using environment variables. You can find sample
proxy settings in the readme.txt file in the \tools folder of your SDK location.
○ Microsoft Windows
Note
○ For the new variables to be effective every time you open the console, define them using
Advanced System Settings Environment Variables and restart the console.
○ For the new variables to be valid only for the currently open console, define them in the console
itself.
For example, if your proxy host is proxy and proxy port is 8080, specify the following environment
variables:
set HTTP_PROXY_HOST=proxy
set HTTP_PROXY_PORT=8080
set HTTPS_PROXY_HOST=proxy
set HTTPS_PROXY_PORT=8080
set HTTP_NON_PROXY_HOSTS="localhost"
If you need basic proxy authentication, enter your user name and password:
set HTTP_PROXY_USER=<user_name>
set HTTP_PROXY_PASSWORD=<password>
set HTTPS_PROXY_USER=<user_name>
set HTTPS_PROXY_PASSWORD=<password>
export http_proxy=http://proxy:8080
export https_proxy=https://proxy:8080
export no_proxy="localhost"
If you need basic proxy authentication, enter your user name and password:
export http_proxy=http://user:password@proxy:8080
export https_proxy=https://user:password@proxy:8080
Related Information
If you have already installed and used the SAP Cloud Platform Tools, SAP Cloud Platform SDK for Neo
environment and SAP JVM, you only need to keep them up to date.
Context
If you have already installed an SAP Cloud Platform SDK for Neo environment package, you only need to update it
regularly. To update your SDK, follow the steps below.
Procedure
1. Download the new SAP Cloud Platform SDK for Neo environment version from https://
tools.hana.ondemand.com/#cloud
Note
Again, if the SAP Cloud Platform SDK for Neo environment version is higher and not supported by the
version of your SAP Cloud Platform Tools for Java, a message appears prompting you to update your
SAP Cloud Platform Tools for Java. You can check for updates (recommended) or ignore the message.
4. Choose Finish.
6. After editing all local runtimes, choose OK.
Related Information
Install the SAP Cloud Platform SDK for Neo Environment [page 1127]
Application Runtime Container [page 1153]
sdk-upgrade [page 1963]
Context
If you have already installed an SAP Java Virtual Machine, you only need to update it. To update your JVM, follow
the steps below.
Note
Do not install the new SAP JVM version to a directory that already contains SAP JVM.
3. In the Eclipse IDE main menu, choose Window Preferences Java Installed JREs and select the JRE
configuration entry of the old SAP JVM version.
4. Choose the Edit... button.
5. Use the Directory... button to select the directory of the new SAP JVM version.
6. Choose Finish.
7. In the Preferences window, choose OK.
Related Information
Context
If you have already installed SAP Cloud Platform Tools, you only need to update them. To do so, follow the steps
below.
Procedure
1. Ensure that the SAP Cloud Platform Tools software site is checked for updates:
1. Find out whether you are using an Oxygen or Neon release of Eclipse. The name of the release is shown on
the welcome screen when the Eclipse IDE is started.
Caution
The support for Mars has entered end of maintenance. We recommend that you use Oxygen or Neon
releases.
2. In the main menu, choose Window Preferences Install/Update Available Software Sites .
3. Make sure there is an entry https://tools.hana.ondemand.com/oxygen or https://
tools.hana.ondemand.com/neon and that this entry is selected.
Note
If you want to have your SAP Cloud Platform Tools updated regularly and automatically, open the Preferences
window again and choose Install/Update Automatic Updates . Select Automatically find new updates and
notify me and choose Apply.
Related Information
This document describes how to create a simple Hello World Web application, which you can use for testing on
SAP Cloud Platform.
First, you create a dynamic Web project and then you add a simple Hello World servlet to it.
After you have created the Web application, you can test it on the local runtime and then deploy it on the cloud.
Prerequisites
You have installed the SAP Cloud Platform Tools. For more information, see Setting Up the Development
Environment [page 1126].
Make sure you have downloaded the JRE that matches the SDK.
If you work in a proxy environment, set the proxy host and port correctly.
1. Open your Eclipse IDE for Java EE Developers and switch to the Workbench screen.
2. From the Eclipse IDE main menu, choose File New Dynamic Web Project .
3. In the Project name field, enter HelloWorld.
4. In the Target Runtime pane, select the runtime you want to use to deploy the Hello World application. In this
tutorial, we use Java Web.
Note
The application will be provisioned with a JRE version matching the Web project Java facet. If the JRE version
is not supported by SAP Cloud Platform, the default JRE for the selected SDK will be used (SDK for Java
Web and for Java EE 6 Web Profile – JRE 7).
6. Optional: If you want your context root to be different from "HelloWorld", proceed as follows:
1. Choose the Next button until you reach the Web Module wizard page.
7. Choose Finish.
1. On the HelloWorld project node, open the context menu and choose New Servlet . Window Create
Servlet opens.
2. Enter hello as Java package and HelloWorldServlet as class name.
6. Choose Finish to generate the servlet. The Java Editor with the HelloWorldServlet opens.
7. Replace the body content of the doGet(…) method with the following line:
response.getWriter().println("Hello World!");
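The resulting method then looks roughly like this (a sketch of the wizard-generated servlet skeleton; annotations and parameter names may differ):

```java
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    response.getWriter().println("Hello World!");
}
```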
Test your Hello World application locally and deploy it to SAP Cloud Platform. For more information, see Deploying
and Updating Applications [page 1175].
The sample applications allow you to explore the core functionality of SAP Cloud Platform and show how this
functionality can be used to develop more complex Web applications. The samples are included in the SAP Cloud
Platform SDK for Neo environment or presented as blogs in the SAP Community.
SDK Samples
The samples provided as part of the SAP Cloud Platform SDK for Neo environment introduce important concepts
and application features of the SAP Cloud Platform and show how common development tasks can be automated
using build and test tools.
hello-world A simple HelloWorld Web application Creating a HelloWorld Application [page 1139]
connectivity Consumption of Internet services Consume Internet Services (Java Web or Java EE 6 Web
Profile) [page 156]
persistence-with-ejb Container-managed persistence with JPA Tutorial: Adding Container-Managed Persistence with
JPA (SDK for Java EE 6 Web Profile) [page 823]
persistence-with-jdbc Relational persistence with JDBC Tutorial: Adding Persistence with JDBC (SDK for Java
Web) [page 871]
document-store Document storage in repository Using the Document Service in a Web Application [page
442]
SAP_Jam_OData_HCP Accessing data in SAP Jam via OData Source code for using the SAP Jam API
All samples can be imported as Eclipse or Maven projects. While the focus has been placed on the Eclipse and
Apache Maven tools due to their wide adoption, the principles apply equally to other IDEs and build systems.
For more information about using the samples, see Import Samples as Eclipse Projects [page 1145], Import
Samples as Maven Projects [page 1147], and Building Samples with Maven [page 1148].
The Web application "Paul the Octopus" is part of a community blog and shows how the SAP Cloud Platform
services and capabilities can be combined to build more complex Web applications, which can be deployed on the
SAP Cloud Platform.
● It is intended for anyone who would like to gain hands-on experience with the SAP Cloud Platform.
● It involves the following platform services: identity, connectivity, SAP HANA and SAP ASE, and document.
● Its user interface is developed via SAPUI5 and is based on the Model-View-Controller concept. SAPUI5 is
based on HTML5 and can be used for building applications with sophisticated UI. Other technologies that you
can see in action in "Paul the Octopus" are REST services and job scheduling.
For more information, see the SAP Community blog: Get Ready for Your Paul Position .
The Web application "SAP Library" is presented in a community blog as another example of demonstrating the
usage of several SAP Cloud Platform services in one integrated scenario, closely following the product
documentation. You can import it as a Maven project, play around with your own library, and have a look at how it
is implemented. It allows you to reserve and return books, edit details of existing ones, add new titles, maintain
library users' profiles and so on.
● The library users authenticate using the identity service. It supports Single Sign-On (SSO).
● The books’ status and features are persisted using the SAP HANA and SAP ASE service.
● Book’s details are retrieved using a public Internet Web service, demonstrating usage of the connectivity
service.
● The e-mails you will receive when reserving and returning books to the library, are implemented using a Mail
destination.
● When you upload your profile image, it is persisted using the document service.
For more information, see the SAP Community blog: Welcome to the Library!
Related Information
To get a sample application up and running, import it as an Eclipse project into your Eclipse IDE and then deploy it
on the local runtime and SAP Cloud Platform.
Prerequisites
You have installed the SAP Cloud Platform Tools and created a SAP Cloud Platform server runtime environment as
described in Setting Up the Development Environment [page 1126].
1. From the main menu of the Eclipse IDE, choose File Import… General Existing Projects into
Workspace and then choose Next.
2. Browse to locate and select the directory containing the project you want to import, for example, <sdk>/
samples/hello-world, and choose OK.
3. Under Projects select the project (or projects) you want to import.
4. Choose Finish to start the import.
The project is imported into your workspace and appears in the Project Explorer view.
Tip
Close the welcome page if it is still shown.
Note
If you have not yet set up a server runtime environment, the following error will be reported: "Faceted
Project Problem: Target runtime SAP Cloud Platform is not defined". To set up the runtime environment,
complete the steps as described in Set Up Default Region Host in Eclipse [page 1130] and Set Up the
Runtime Environment [page 1131].
Next Steps
Run the sample application locally and then in the cloud. For more information, see Deploy Locally from Eclipse IDE
[page 1189] and Deploy on the Cloud from Eclipse IDE [page 1191].
Note
Some samples are ready to run while others have certain prerequisites, which are described in the respective
readme.txt.
Note
When you import samples as Eclipse projects, the tests provided with the samples are not imported. To be able
to run automated tests, you need to import the samples as Maven projects.
To import the tests provided with the SDK samples, import the samples as Maven projects.
Prerequisites
You have installed the SAP Cloud Platform Tools and created an SAP Cloud Platform server runtime environment as described in Setting Up the Development Environment [page 1126].
Procedure
Note
To configure the Maven settings.xml file, choose Window > Preferences > Maven > User Settings. This configuration is required if you need to provide your proxy settings. For more information, see http://maven.apache.org/settings.html.
1. From the Eclipse main menu, choose File > Import… > Maven > Existing Maven Projects and then choose Next.
2. Browse to locate and select the directory containing the project you want to import, for example, <sdk>/
samples/hello-world, and choose OK.
3. Under Projects select the project (or projects) you want to import.
4. Choose Finish to start the import.
The project is imported into your workspace and appears in the Project Explorer view.
Tip
Close the welcome page if it is still shown.
Next Steps
Run the sample application locally and then in the cloud. For more information, see Deploy Locally from Eclipse IDE
[page 1189] and Deploy on the Cloud from Eclipse IDE [page 1191].
Note
Some samples are ready to run while others have certain prerequisites, which are described in the respective
readme.txt.
All samples provided can be built with Apache Maven. The Maven build shows how a headless build and test can be
completely automated.
Context
● Builds a Java Web application based on the SAP Cloud Platform API
● Demonstrates how to run rudimentary unit tests (not available in all samples)
● Installs, starts, waits for, and stops the local server runtime
● Deploys the application to the local server runtime and runs the integration test
● Starts, waits for, and stops the cloud server runtime
● Deploys the application to the cloud server runtime and runs the integration test
Related Information
You can use the Apache Maven command line tool to run local and cloud integration tests for any of the SDK
samples.
Prerequisites
● You have downloaded the Apache Maven command line tool. For more information, see the detailed Maven documentation at http://maven.apache.org.
● You are familiar with the Maven build lifecycle. For more information, see http://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html.
Procedure
1. Open the folder of the relevant project, for example, <sdk>/samples/hello-world, and then open the
command prompt.
2. Enter the verify command with the following profile in order to activate the local integration test:
If you are using a proxy, you need to define additional Maven properties as described below in step 4 (see
proxy details).
3. Press ENTER to start the build process.
All phases of the default lifecycle are executed up to and including the verify phase, with the resulting build
status shown on completion.
4. To activate the cloud integration test, which involves deploying the built Web application on a landscape in the
cloud, enter the following profile with the additional Maven properties given below:
○ Landscape host
The landscape host (default: hana.ondemand.com) is predefined in the parent pom.xml file (<sdk>/
samples/pom.xml) and can be overwritten, as necessary. If you have a developer account, for example,
and are therefore using the trial landscape, enter the following:
○ Account details
Provide your account, user name, and password:
○ Proxy details
Tip
If your proxy requires authentication, you might want to use the Authenticator class to pass the proxy user name and password. For more information, see Authenticator. Note that for the sake of simplicity this feature has not been included in the samples.
Tip
To avoid having to repeatedly enter the Maven properties as described above, you can add them directly to
the pom.xml file, as shown in the example below:
<sap.cloud.username>p0123456789</sap.cloud.username>
You might also want to use environment variables to set the property values dynamically, in particular when
handling sensitive information such as passwords, which should not be stored as plain text:
<sap.cloud.password>${env.SAP_CLOUD_PASSWORD}</sap.cloud.password>
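Putting steps 2 to 4 together, the command line calls look roughly as follows. The profile names (local-integration-tests, cloud-integration-tests) and the sap.cloud.* property names are assumptions based on the property snippets above and typical SDK sample conventions; check the samples' pom.xml for the exact names used in your SDK version:

```
# Steps 2-3: run the local integration test
mvn clean verify -P local-integration-tests

# Step 4: run the cloud integration test on the trial landscape,
# passing account details and proxy settings as Maven properties
mvn clean verify -P cloud-integration-tests \
    -Dsap.cloud.host=hanatrial.ondemand.com \
    -Dsap.cloud.account=mysubaccount \
    -Dsap.cloud.username=p0123456789 \
    -Dsap.cloud.password=$SAP_CLOUD_PASSWORD \
    -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080
```

Reading the password from an environment variable, as in the second call, keeps it out of your shell history and build files.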
Related Information
The SAP Cloud Platform Runtime for Java comprises the components that create the environment for provisioning and running applications on SAP Cloud Platform. The runtime consists of the Java Virtual Machine, the application runtime container, and compute units. Cloud applications can interact at runtime with the containers and services via the platform APIs.
Components
Related Information
SAP Cloud Platform infrastructure runs on SAP's own implementation of a Java Virtual Machine - SAP Java Virtual
Machine (JVM).
SAP JVM is a certified Java Virtual Machine and Java Development Kit (JDK), compliant with Java Standard Edition (SE) 8. Technologically, it is based on OpenJDK and has been enhanced with a strong focus on supportability and reliability. One example of these enhancements is the SAP JVM Profiler, a tool that helps you analyze the resource consumption of a Java application running on the SAP Cloud Platform local runtime. You can use it to profile simple stand-alone Java programs or complex enterprise applications.
SAP JVM
The SAP JVM is a standard compliant certified JDK, supplemented by additional supportability and developer
features and extensive monitoring and tracing information. All these features are designed as interactive, on-
demand facilities of the JVM with minimal performance impact. They can be switched on and off without having to
restart the JVM (or the application server that uses the JVM).
Debugging on Demand
With SAP JVM debugging on demand, Java developers can activate and deactivate Java debugging directly – there is no need to start the SAP JVM (or the application server on top of it) in a special mode. Java debugging in the SAP JVM can be switched on and off while the VM is running.
Profiling
To address the root cause of all performance and memory problems, the SAP JVM comes with the SAP JVM
Profiler, a powerful tool that supports the developer in identifying runtime bottlenecks and reducing the memory
footprint. Profiling can be enabled on-demand without VM configuration changes and works reliably even for very
large Java applications.
The user interface – the SAP JVM Profiler – can be easily integrated into any Eclipse-based environment by using
the established plug-in installation system of the Eclipse platform. It allows you to connect to a running SAP JVM
and analyze collected profiling data in a graphical manner. The profiler plug-in provides a new perspective similar
to the debug and Java perspective.
A number of profiling traces can be enabled or disabled at any point in time, resulting in snapshots of profiling
information for the exact points of interest. The SAP JVM Profiler helps with the analysis of this information and
provides views of the collected data with comprehensive filtering and navigation facilities.
● Memory Allocation Analysis – investigates the memory consumption of your Java application and finds
allocation hotspots
● Performance Analysis – investigates the runtime performance of your application and finds expensive Java
methods
● Network Trace - analyzes the network traffic
● File I/O Trace - provides information about file operations
● Synchronization Trace - detects synchronization issues within your application
● Method Parameter Trace – yields detailed information about individual method calls including parameter
values and invocation counts
● Profiling Lifecycle Information – a lightweight monitoring trace for memory consumption, CPU load, and GC
events.
The SAP JVM provides comprehensive statistics about threads, memory consumption, garbage collection, and I/O
activities. For solving issues with SAP JVM, a number of traces may be enabled on demand. They provide
additional information and insight into integral VM parts such as the class loading system, the garbage collection
algorithms, and I/O. The traces in the SAP JVM can be switched on and off using the jvmmon tool, which is part of
the SAP JVM delivery.
Related Information
SAP Cloud Platform applications run on a modular and lightweight application runtime container where they can
use the platform services APIs and Java EE APIs according to standard patterns.
Depending on the runtime type and corresponding SDK you are using, SAP Cloud Platform provides the following
profiles of the application runtime container:
● Java Web Tomcat 8 [page 1156] - Some of the standard Java EE 7 APIs (Servlet, JSP, EL, Websocket); Java versions: 8 (default), 7. Use it if you need a simplified Java Web application runtime container based on Apache Tomcat 8.
● Java Web Tomcat 7 [page 1155] - Some of the standard Java EE 6 APIs (Servlet, JSP, EL, Websocket); Java versions: 7 (default), 8. Use it if you need a simplified Java Web application runtime container based on Apache Tomcat 7.
● Java EE 7 Web Profile TomEE 7 [page 1158] - Java EE 7 Web Profile APIs; Java versions: 8 (default), 7. Use it if you need an application runtime container together with all containers defined by the Java EE 7 Web Profile specification.
● Java EE 6 Web Profile [page 1157] - Java EE 6 Web Profile APIs; Java version: 7 (default). Use it if you need an application runtime container together with all containers defined by the Java EE 6 Web Profile specification.
For the complete list of supported APIs, see Supported Java APIs [page 1161].
Restriction
Support of Java 6 in the Neo environment is discontinued. You cannot deploy or start applications on Java 6.
● If you redeploy your application or deploy a new one, you will not be able to use Java 6 but you will have to
use Java 7 instead.
● If you have a running application with Java 6, you have to redeploy it with Java 7.
Tip
You can still run applications compiled with Java 6 on Java 7, because Java 7 is backward compatible.
Java Web is a minimalistic application runtime container in SAP Cloud Platform that offers a subset of Java EE
standard APIs typical for a standalone Java Web Container.
Restriction
This runtime is deprecated. Support will be discontinued after December 31, 2017. We recommend migrating to Java Web Tomcat 8. For more information, see below.
In the general case, applications running on Java Web runtime are compatible with the Java Web Tomcat 8
runtime, and can be ported there without change. An exception are applications using the HTTP Destination API
[page 78] (com.sap.core.connectivity.api.http.HttpDestination). This API is not available in Java
Web Tomcat 8 runtime. If you use that API in your applications, you need to migrate to the
ConnectivityConfiguration API [page 80]
(com.sap.core.connectivity.api.configuration.ConnectivityConfiguration).
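A minimal sketch of the migration, assuming a destination named myDestination is configured for the application; the JNDI name connectivityConfiguration must be declared as a resource reference in the web.xml, as described in the ConnectivityConfiguration API [page 80]:

```
package example;

import javax.naming.InitialContext;

import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
import com.sap.core.connectivity.api.configuration.DestinationConfiguration;

public class DestinationLookup {

    // Reads the URL property of a destination via the ConnectivityConfiguration
    // API, replacing the deprecated HttpDestination lookup.
    public String destinationUrl() throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectivityConfiguration configuration = (ConnectivityConfiguration)
                ctx.lookup("java:comp/env/connectivityConfiguration");
        // "myDestination" is a placeholder for your destination name
        DestinationConfiguration destination =
                configuration.getConfiguration("myDestination");
        return destination.getProperty("URL");
    }
}
```

Unlike HttpDestination, this API hands you the raw destination properties, so the HTTP client and its configuration are now under your application's control.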
Use (Deprecated)
This runtime container is suitable for SAP Cloud Platform applications that need a small container with low memory consumption. The default supported Java version for Java Web is 7.
The current version 1 of the Java Web application runtime container (neo-java-web 1.x) provides implementation
for the following set of Java Specification Requests (JSRs):
Java Web enables you to easily create applications for SAP Cloud Platform using standard APIs suitable for a Web container, in addition to the SAP Cloud Platform services APIs.
For more information, see SAP Cloud Platform SDK Java API Documentation.
Related Information
This container leverages Apache Tomcat 7 without modifications and adds a subset of SAP Cloud Platform
services client APIs. Applications running in the Apache Tomcat 7 container are portable on Java Web Tomcat 7.
The default supported Java version for Java Web Tomcat 7 is 7; you can also use Java version 8.
The current version of the Java Web Tomcat 7 application runtime container (neo-java-web 2.x) provides an implementation of the specifications defined by the following Java Specification Requests (JSRs):
The following subset of APIs of SAP Cloud Platform services are available within Java Web Tomcat 7: document
service APIs, mail service APIs, connectivity service APIs (destination configuration and authentication header
provider), SAP HANA service and SAP ASE service JDBC APIs, and security APIs.
Java Web Apache Tomcat 8 (Java Web Tomcat 8) is the next edition of the Java Web application runtime container
that has all characteristics and features of its predecessor Java Web Tomcat 7.
This container leverages Apache Tomcat 8.5 Web container without modifications and also adds the already
established set of SAP Cloud Platform services client APIs. Applications running in the Apache Tomcat 8.5 Web
container are portable to Java Web Tomcat 8. Existing applications running in the Java Web and Java Web Tomcat 7 application runtime containers can run unmodified in Java Web Tomcat 8, provided they use the same set of enabled APIs.
Restriction
The HTTP/2 protocol is not supported on SAP Cloud Platform.
The default supported Java version for Java Web Tomcat 8 is 8; you can also use Java version 7.
The current version of the Java Web Tomcat 8 application runtime container (neo-java-web 3.x) provides an implementation of the specifications defined by the following Java Specification Requests (JSRs):
The following subset of APIs of SAP Cloud Platform services are available within Java Web Tomcat 8: document
service APIs, mail service APIs, connectivity service APIs (destination configuration and authentication header
provider), SAP HANA service and SAP ASE service JDBC APIs, and security APIs.
The Java EE 6 Web Profile application runtime container of SAP Cloud Platform is Java EE 6 Web Profile certified.
The lightweight Web Profile of Java EE 6 is targeted at next-generation Web applications. Developers benefit from
productivity improvements with more annotations and less XML configuration, more Plain Old Java Objects
(POJOs), and simplified packaging.
The current version 2 of Java EE 6 Web Profile application runtime container (neo-javaee6-wp 2.x) provides
implementation for the following Java Specification Requests (JSRs):
Note
EJB Timer Service is also supported (although it is not part of the EJB Lite specification).
Contexts and Dependency Injection for Java EE platform 1.0 JSR - 299
For more information about the differences between EJB 3.1 and EJB 3.1 Lite, see the Java EE 6 specification, JSR
318: Enterprise JavaBeans, section 21.1.
Development Process
The Java EE 6 Web Profile enables you to easily create your applications for SAP Cloud Platform.
For more information, see Using Java EE Web Profile Runtimes [page 1166].
Related Information
Java EE at a Glance
The Java EE 7 Web Profile TomEE 7 provides implementation of the Java EE 7 Web Profile specification.
The default supported Java version for Java EE 7 Web Profile TomEE is 8; you can also use Java version 7.
The current version of Java EE 7 Web Profile TomEE application runtime container (neo-javaee7-wp 1.x) provides
implementation for the following Java Specification Requests (JSRs):
Java API for RESTful Web Services (JAX-RS) 2.0 JSR - 339
Note
EJB Timer Service is also supported (although it is not part of the EJB Lite specification).
Contexts and Dependency Injection for Java EE platform 1.1 JSR - 346
For more information about the differences between EJB 3.2 and EJB 3.2 Lite, see the Java EE 7 specification, JSR
345: Enterprise JavaBeans, section 21.1.
Development Process
The Java EE 7 Web Profile TomEE 7 enables you to easily create your applications for SAP Cloud Platform.
For more information, see Using Java EE Web Profile Runtimes [page 1166].
Related Information
Java EE at a Glance
A compute unit is the set of virtualized hardware resources used by an SAP Cloud Platform application.
After being deployed to the cloud, the application is hosted on a compute unit with certain central processing unit (CPU) capacity, main memory, disk space, and an installed operating system (OS).
SAP Cloud Platform offers four standard sizes of compute units according to the provided resources.
Depending on their needs, customers can choose from the following compute unit configurations:
The third column in the table shows what value of the -z or --size parameter you need to use for a console
command.
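For example, a console client deployment that selects a compute unit size could look like this. All parameter values are placeholders, and the size value lite only illustrates the --size parameter; use one of the sizes offered for your account:

```
neo deploy --host hana.ondemand.com --account mysubaccount \
    --application myapp --source myapp.war --user p0123456789 \
    --size lite
```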
Note
In a trial account you can run only one application at a time.
For customer accounts, all sizes of compute units are available. During deployment, customers can specify the
compute unit on which they want their application to run.
Related Information
The basic tools of the SAP Cloud Platform development environment, the SAP Cloud Platform Tools, comprise the
SAP Cloud Platform Tools for Java and the SAP Cloud Platform SDK for Neo environment.
The focus of the SAP Cloud Platform Tools for Java is on the development process and enabling the use of the
Eclipse IDE for all necessary tasks: creating development projects, deploying applications locally and in the cloud,
and local debugging. It makes development for the platform convenient and straightforward and allows short
development turn-around times.
The SAP Cloud Platform SDK for Neo environment, on the other hand, contains everything you need to work with the platform, including a local server runtime and a set of command line tools. The command line capabilities enable development outside of the Eclipse IDE and allow modern build tools, such as Apache Maven, to be used to automate builds, tests, and deployment.
Related Information
When you develop applications that run on SAP Cloud Platform, you can rely on certain Java EE standard APIs.
These APIs are provided with the runtime of the platform. They are based on standards and are backward
compatible as defined in the Java EE specifications. Currently, you can make use of the APIs listed below:
● javax.activation
● javax.annotation
● javax.el
● javax.mail
● javax.persistence
● javax.servlet
● javax.servlet.jsp
● javax.servlet.jsp.jstl
● javax.websocket
● org.slf4j.Logger
● org.slf4j.LoggerFactory
If you are using the SAP Cloud Platform SDK for Java EE 6 Web Profile, you also have access to the following Java EE APIs:
● javax.faces
● javax.validation
● javax.inject
● javax.ejb
● javax.interceptor
● javax.transaction
● javax.enterprise
● javax.decorator
The table below summarizes the Java Specification Requests (JSRs) supported in the two SAP Cloud Platform SDKs for Java.
The table below summarizes the Java Specification Requests (JSRs) supported in the SAP Cloud Platform SDK for Java Web Tomcat 8.
Supported Java EE 7 Specification | SDK for Java Web Tomcat 8 | SDK for Java EE 7 Web Profile TomEE 7
In addition to the standard APIs, SAP Cloud Platform offers platform-specific services that define their own APIs, which can be used from the SAP Cloud Platform SDK. The APIs of the platform-specific services are listed in the table below.
The SAP Cloud Platform SDK contains a platform API folder for compiling your Web applications. It contains all standard and third-party API JARs (for legal reasons provided "as is", meaning they may also contain non-API content on which you should not rely) as well as the platform APIs of the SAP Cloud Platform services.
You can add additional (pure Java) application programming frameworks or libraries and use them in your applications. For example, you can include the Spring Framework in the application archive and use it there. In such cases, the application must handle all dependencies on such additional frameworks or libraries, and you are responsible for assembling them inside the application itself.
SAP Cloud Platform also provides numerous other capabilities and APIs that might be accessible for applications.
However, you should rely only on the APIs listed above.
Related Information
You can develop applications for SAP Cloud Platform just as for any application server. SAP Cloud Platform applications can be based on the Java EE Web application model. You can use programming logic that is well known to you and benefit from the advantages of Java EE, which defines the application front end. Inside the application, you can embed the usage of the services provided by the platform.
Development Environment
SAP Cloud Platform development environment is designed and built to optimize the process of development and
deployment.
It includes the SAP Cloud Platform Tools for Java, which integrate the standard capabilities of the Eclipse IDE with extended features that allow you to deploy to the cloud. For Java applications, you can choose between four types of SAP Cloud Platform SDK for the Neo environment:
● SDK for Java Web - provides support for some of the standard Java EE 6 APIs (Servlet, JSP, EL, Websocket)
● SDK for Java Web Tomcat 7 - provides support for some of the standard Java EE 6 APIs (Servlet, JSP, EL,
Websocket)
● SDK for Java EE 6 Web Profile - certified to support Java EE 6 Web Profile APIs
● SDK for Java Web Tomcat 8 - provides support for some of the standard Java EE 7 APIs (Servlet, JSP, EL,
Websocket)
In the Eclipse IDE, create a simple HelloWorld application with basic functional logic wrapped in a Dynamic Web Project and a servlet. You can do this with any of the SDKs.
For more information, see Creating a Hello World Application [page 1139] or watch the Creating a HelloWorld
application video tutorial.
To learn how to enhance the HelloWorld application with role management, see the Managing Roles in SAP Cloud
Platform video tutorial.
SAP Cloud Platform is Java EE 6 Web Profile certified so you can extend the basic functionality of your application
with Java EE 6 Web Profile technologies. If you are working with the SDK for Java EE 6 Web Profile, you can equip
the basic application with additional Java EE features, such as EJB, CDI, JTA.
For more information, see Using Java EE Web Profile Runtimes [page 1166].
Create a fully-fledged application benefiting from the capabilities and services provided by SAP Cloud Platform. In
your application, you can choose to use:
● Authentication [page 2120] - by default, SAP Cloud Platform is configured to use SAP ID service as identity
provider (IdP), as specified in SAML 2.0. You can configure trust to your custom IdP, to provide access to the
cloud using your own user database.
● UI development toolkit for HTML5 (SAPUI5) - use the platform's official UI framework.
● Persistence Service [page 705] - provide relational persistence with JPA and JDBC via our persistence service.
● Connectivity Service [page 75] - use it to connect Web applications to the Internet, make on-demand to on-premise connections to Java and ABAP on-premise systems, and configure destinations to send and fetch e-mail.
● Document Service [page 435] - use the service to store unstructured or semistructured data in your application.
● Logging - implement a logging API if you want to have logs produced at runtime.
● Cloud Environment Variables [page 1172] - use system environment variables that identify the runtime environment of the application.
Deploy
First, deploy and test the ready application on the local runtime and then make it available on SAP Cloud Platform.
For more information, see Deploying and Updating Applications [page 1175].
You can speed up your development by applying and activating new changes on the already running application.
Use the hot-update command.
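As a sketch, applying changes with hot-update from the console client might look as follows. The parameter values are placeholders, and the strategy value replace-binaries is only one of the options; see the hot-update command reference [page 1905] for the strategies your SDK version supports:

```
neo hot-update --host hana.ondemand.com --account mysubaccount \
    --application myapp --source myapp.war --user p0123456789 \
    --strategy replace-binaries
```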
Manage
Manage all applications deployed in your account from a single dedicated user interface - SAP Cloud Platform
cockpit.
For more information, see SAP Cloud Platform Cockpit [page 900].
This tutorial demonstrates creating a simple Hello World Java application with a Java bean using the Java EE 6
Web Profile or Java EE 7 Web Profile TomEE 7.
Prerequisites
● You have installed SAP Cloud Platform tools. Make sure you also download the SDK for Java EE 6 Web Profile
or SDK for Java EE 7 Web Profile TomEE 7. For more information, see Setting Up the Tools and SDK [page
1126].
● If you have a previously installed version of SAP Cloud Platform Tools, make sure you update them to the latest
version. For more information, see Updating the Tools and SDK [page 1136].
● The SDK provides all required libraries. If you get an error when importing a library, make sure you have set up the SAP Cloud Platform Tools and the Web project correctly.
Procedure
Configuration | Default configuration for Java EE 6 Web Profile | Default configuration for Java EE 7 Web Profile TomEE 7
5. Choose Finish.
For more information, see Creating a Hello World Application [page 1139] .
1. On the HelloWorld project node, open the context menu and choose New > Servlet. The Create Servlet window opens.
2. Enter hello as the Java package and HelloWorldServlet as the class name. Choose Next.
3. In the URL mappings field, select /HelloWorldServlet and choose Edit.
4. In the Pattern field, replace the current value with just "/" and choose OK. In this way, the servlet will be
mapped as a welcome page for the application.
5. Choose Finish to generate the servlet. The Java Editor with the HelloWorldServlet opens.
6. Change the doGet(…) method so that it contains:
response.getWriter().println("Hello World!");
For more information, see Creating a Hello World Application [page 1139].
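After step 6, the complete servlet should look roughly like the sketch below. The @WebServlet annotation mirrors the "/" URL pattern chosen in step 4, although the wizard may generate a web.xml mapping instead:

```
package hello;

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Servlet mapped as the welcome page of the application.
 */
@WebServlet("/")
public class HelloWorldServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.getWriter().println("Hello World!");
    }
}
```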
Create a JSP
1. On the HelloWorld project node, open the context menu and choose New > JSP File. The New JSP File window opens.
2. Enter the name of your JSP file and choose Finish.
1. On the HelloWorld project node, choose File > New > Other > EJB > Session Bean and choose Next.
2. In the Create EJB Session Bean wizard, enter test as the Java package and HelloWorldBean as the name of your new class. Choose Finish.
3. Implement a simple public method sayHello that returns a greeting string. Save the project.
package test;

import javax.ejb.LocalBean;
import javax.ejb.Stateless;

/**
 * Session Bean implementation class HelloWorldBean
 */
@Stateless
@LocalBean
public class HelloWorldBean {
    public String sayHello() {
        return "Hello World!";
    }
}

To use the bean in a servlet, you can inject it with the @EJB annotation:

@EJB
private HelloWorldBean helloWorldBean;
<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
    pageEncoding="ISO-8859-1"%>
<%@ page import="javax.naming.InitialContext" %>
<%@ page import="test.HelloWorldBean" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
</head>
<body>
<%
    try {
        InitialContext ic = new InitialContext();
        HelloWorldBean h = (HelloWorldBean) ic.lookup(
            "java:comp/env/hello.HelloWorldServlet/helloWorldBean");
        out.println(h.sayHello());
    } catch (Exception e) {
        out.println(e.getMessage());
    }
%>
</body>
</html>
You can test the application on the local runtime and then deploy it on SAP Cloud Platform.
For more information, see Deploying an Application on SAP HANA Cloud [page 1175].
You can now use JPA together with EJB to persist data in your application. For more information, see Tutorial: Adding Container-Managed Persistence with JPA (SDK for Java EE 6 Web Profile) [page 823].
Overview
The SAP Cloud Platform runtime sets several system environment variables that identify the runtime environment of the application. Using them, an application can get information about its application name, subaccount, and URL, as well as about the region host on which it is deployed and region-specific parameters. All SAP Cloud Platform-specific environment variable names start with the common prefix HC_.
The following SAP Cloud Platform environment variables are set to the runtime environment of the application:
HC_LANDSCAPE | production / trial | Type of the region host where the application is deployed
SAP Cloud Platform environment variables are accessed as standard system environment variables of the Java
process - for example via System.getenv("...").
Note
Environment variables are not set when deploying locally with the console client or Eclipse IDE.
Example
<html>
<head>
<title>Display SAP Cloud Platform Environment Platform variables</title>
</head>
<body>
<p>Application Name: <%= System.getenv("HC_APPLICATION") %></p>
</body>
</html>
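The same variables can also be read from plain Java code. The sketch below (class and method names are illustrative) falls back to a default when a variable is not set, which is the case when the application runs locally:

```java
public class EnvInfo {

    // Returns the value of an SAP Cloud Platform environment variable,
    // or a fallback when the variable is not set (for example, when the
    // application is deployed locally).
    static String envOrDefault(String name, String fallback) {
        String value = System.getenv(name);
        return value != null ? value : fallback;
    }

    public static void main(String[] args) {
        System.out.println("Application: " + envOrDefault("HC_APPLICATION", "local"));
        System.out.println("Landscape: " + envOrDefault("HC_LANDSCAPE", "local"));
    }
}
```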
Related Information
Prerequisites
In the Eclipse IDE you have developed or imported a Java application that is running on a cloud server.
Context
In the Server editor of your local Eclipse IDE, you can use the Advanced tab and the Environment Variables table to
add, edit, select and remove environment variables for the cloud virtual machine.
Note
The Advanced tab is only available for cloud servers.
Procedure
1. In the Eclipse IDE go to the Servers view and select the cloud server you want to configure.
2. Double-click it to open the server editor.
3. Open the Advanced tab.
4. (Optional) Add an environment variable.
Note
The changes made by someone else will be loaded once you reopen the editor.
Content
Deploying Applications
After you have created your Java application, you need to deploy and run it on SAP Cloud Platform. We
recommend that you first deploy and test your application on the local runtime before deploying it on the cloud.
Use the tool that best fits your scenario:
● Eclipse IDE - Deploy Locally from Eclipse IDE [page 1189]: You have developed your application using SAP Cloud Platform Tools in the Eclipse IDE.
● Console Client - Deploy Locally with the Console Client [page 1195]: You want to deploy an application in the form of one or more WAR files.
● Lifecycle Management API - Deploy an Application [page 1178]: You want to deploy an application in the form of one or more WAR files.
Application properties are configured during deployment with a set of parameters. To update these properties, use
one of the following approaches:
● Console Client - deploy [page 1856]: Deploy the application with new WAR file(s) and make changes to the configuration parameters.
● Console Client - set-application-property [page 1964]: Change some of the application properties you defined during deployment without redeploying the application binaries.
● Cockpit - Deploy on the Cloud with the Cockpit [page 1199]: Update the application with a new WAR file or make changes to the configuration parameters.
If you want to quickly see your changes while developing an application, use the following approaches:
● Eclipse IDE - Deploy on the Cloud from Eclipse IDE [page 1191]: Republish the application. The cloud server is not restarted, and only the application binaries are updated.
● Console Client - hot-update [page 1905]: Apply and activate changes. Use the command to speed up development, not to update productive applications.
● Console Client - Use Delta Deployment [page 1198]: Apply changes in a deployed application without uploading the entire set of files. Use the deploy [page 1856] or hot-update [page 1905] command with the --delta parameter.
If you are an application operator and need to deploy a new version of a productive application or perform
maintenance, you can choose among several approaches:
● Zero Downtime - Update Applications with Zero Downtime [page 1714]: Use when the new application version is backward compatible with the old version. Deploy a new version of the application and disable and enable processes in a rolling manner, or do it in one go with the rolling-update command [page 1960].
● Planned Downtime (Maintenance Mode) - Enable Maintenance Mode for Planned Downtimes [page 1716]: Use when the new application version is backward incompatible. Enable maintenance mode for the duration of the planned downtime.
● Soft Shutdown - Perform Soft Shutdown [page 1719]: Supports zero downtime and planned downtime scenarios. Disable the application or individual processes in order to shut them down gracefully.
Related Information
The lifecycle REST API provides functionality for Java application lifecycle management.
This tutorial provides information about the most common use cases for Java applications and the operations that
are included in each one:
Prerequisites
● You have assigned the manageJavaApplications scope to the platform role used in the subaccount. For more
information, see Platform Scopes [page 1676].
● You have installed a REST client.
Context
For the purposes of this tutorial, we will deploy three .war files: app.war, example.war, and demo.war.
Procedure
1. Get a CSRF token.
Send a GET CSRF Protection request.
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/csrf
Request Headers:
X-CSRF-Token: Fetch
Authorization: Basic UDE5NDE3OTM5NDg6RnJhZ28jNjQ3Ng==
Server Response:
Response Status: 200
Response Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Note
The CSRF token expires after a while. If you use an invalid CSRF token, you receive an error message similar to this one: HTTP Status 403 - CSRF token validation failed! If this happens, request a new token.
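Step 1 can be scripted with any HTTP client. The sketch below (Python, standard library only; the user name and password are placeholders, not values from this guide) builds the request headers for the CSRF fetch, including the Basic authorization value:

```python
import base64

def csrf_fetch_headers(user, password):
    """Headers for GET /lifecycle/v1/csrf: 'Fetch' asks the server to issue a token."""
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "X-CSRF-Token": "Fetch",                  # request a fresh token
        "Authorization": f"Basic {credentials}",  # basic authentication
    }

# The token comes back in the X-CSRF-Token response header and must be sent
# with every modifying request (POST, PUT, DELETE) until it expires.
headers = csrf_fetch_headers("myuser", "mypassword")
```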
2. Create an application.
Send a POST Applications request:
Client Request:
POST: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps
Request Body:
{
"applicationName": "myapp",
"runtimeName": "neo-java-web",
"runtimeVersion": "1",
"minProcesses": 1,
"maxProcesses": 1
}
Tip
You can add other properties to the body of the request. The properties in this example are the minimum
requirements that let you execute the request successfully.
3. Describe the binaries.
Send a POST Binaries request:
Client Request:
POST: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/binaries
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/json
Request Body:
{
"files": [{
"path": "app.war"
}, {
"path": "demo.war"
}, {
"path": "example.war"
}]
}
Server Response:
Response Status: 201
Response Body:
{
"metadata": {
"url": "/lifecycle/v1/accounts/test/apps/myapp/
binaries"
},
"entity": {
"totalSize": 0,
"status": "UPLOADING",
"files": [{
"path": "app.war",
"pathGuid": "YXBwLndhcg\u003d
\u003d",
"status": "UNAVAILABLE",
"entries": []
}, {
"path": "example.war",
"pathGuid": "ZXhhbXBsZS53YXI\u003d",
"status": "UNAVAILABLE",
"entries": []
}, {
"path": "demo.war",
"pathGuid": "ZGVtby53YXI\u003d",
"status": "UNAVAILABLE",
"entries": []
}]
}
}
This request describes the metadata of the binaries and prepares them for their upload.
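In the responses above, each pathGuid value matches the Base64 encoding of the file path ("YXBwLndhcg==" decodes to "app.war"; the "\u003d" sequences are JSON-escaped "=" characters). Under that assumption, the GUIDs used in the upload URLs can be computed locally:

```python
import base64

def path_guid(path: str) -> str:
    """pathGuid as it appears in the binaries response (Base64 of the path)."""
    return base64.b64encode(path.encode()).decode()

def guid_to_path(guid: str) -> str:
    """Recover the file path from a pathGuid taken from a response or URL."""
    return base64.b64decode(guid).decode()

guid = path_guid("app.war")                 # used in the PUT Binary URL for app.war
original = guid_to_path("ZGVtby53YXI=")     # decodes back to "demo.war"
```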
4. Upload the binaries.
Note
You must start uploading the binaries within 2 minutes. Otherwise, the operation is canceled, you have to deploy the application again, and you receive the following response:
Server Response:
Response Status: 404
Response Body:
{
"code": "98a59939-0e9a-430c-9ec3-c094a4d8d78d",
"description": "Application operation is not found"
}
Send PUT Binary requests for each one of the binaries. Use the corresponding pathGuid values for each .war
file from the previous POST Binaries response and add it to the URL:
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/
binaries/YXBwLndhcg==
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/octet-stream
Request Body:
Choose to add a file and select app.war.
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/
binaries/ZXhhbXBsZS53YXI=
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/octet-stream
Request Body:
Choose to add a file and select example.war.
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/
binaries/ZGVtby53YXI=
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/octet-stream
Request Body:
Choose to add a file and select demo.war.
If the operation is successful, the response for all three requests should return 200 without a body.
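The three PUT Binary requests differ only in the pathGuid segment of the URL and the attached file. A sketch of the request construction (Python urllib; the request object is built but not sent here, and the archive bytes are illustrative):

```python
import base64
import urllib.request

BASE = "https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp"

def binary_upload_request(path, war_bytes, csrf_token):
    """Build the PUT Binary request for one .war file."""
    guid = base64.b64encode(path.encode()).decode()  # pathGuid URL segment
    return urllib.request.Request(
        f"{BASE}/binaries/{guid}",
        data=war_bytes,
        method="PUT",
        headers={
            "X-CSRF-Token": csrf_token,
            "Content-Type": "application/octet-stream",
        },
    )

req = binary_upload_request("app.war", b"<war file bytes>",
                            "3B95B7A8B0E8E6B923C67E6C0BFD234D")
# urllib.request.urlopen(req) would perform the actual upload (omitted here);
# a 200 response without a body means the upload succeeded.
```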
5. List the binaries.
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/binaries
Repeat the process and observe the general status until it is FAILED or DEPLOYED. The DEPLOYED status shows that the deployment operation has been successful and you can now start your application:
Server Response:
Response Status: 200
Response Body:
{
"metadata": {},
"entity": {
"totalSize": 57857,
"status": "DEPLOYED",
"warnings": "Warning: No compute unit size was
specified for the application so size was set automatically to \u0027lite
\u0027.",
"files": [{
"path": "app.war",
"pathGuid": "YXBwLndhcg\u003d
\u003d",
"size": 17194,
"status": "AVAILABLE",
"hash":
"6c8b99a72d5b42db31cc576273260f9c2f316c1ac7dcc4a8c845412e51d420f0dcf53f4035745e30
3cdd43bf73974fada19839920d845010013bf422ae5bc4dd",
"entries": [{...}]
}, {
"path": "example.war",
"pathGuid": "ZXhhbXBsZS53YXI\u003d",
"size": 37615,
"status": "AVAILABLE",
"hash":
"7b2a80771f79d0740f629bdaaf019c550b10df55eec8789447ec02fa93e7fdb1f6f47f4864769f4a
4f027a4bca8bfa1ea45a83c5fb38ae539b397abe9fe66be1",
"entries": [{...}]
}, {
"path": "demo.war",
"pathGuid": "ZGVtby53YXI\u003d",
"size": 3048,
"status": "AVAILABLE",
"hash":
"8c4b39bfe3a034d64e8592e7cf638ac4b5985c5f9a4f691270d040b8f15dc8edbb6284bd5431f1a2
40abaad3b2288411563b784b691c35ca677ae5e9ced565a9",
"entries": [{...}]
}]
}
}
The binaries are now in DEPLOYED status. You can also see that each binary has the status AVAILABLE.
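"Repeat the process" in step 5 is a polling loop: keep listing the binaries until the overall status reaches a terminal value. A sketch with the HTTP call abstracted behind a fetch_status callable (the intermediate state names in the stub are illustrative, not an exhaustive list from the API):

```python
import time

def wait_for_terminal_status(fetch_status, poll_interval=2.0, max_attempts=30):
    """Poll the binaries status until it is DEPLOYED or FAILED."""
    for _ in range(max_attempts):
        status = fetch_status()  # e.g. GET .../apps/myapp/binaries -> entity["status"]
        if status in ("DEPLOYED", "FAILED"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("no terminal status within the polling budget")

# Stubbed status sequence in place of real HTTP calls:
states = iter(["UPLOADING", "DEPLOYING", "DEPLOYED"])
result = wait_for_terminal_status(lambda: next(states), poll_interval=0)
```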
Prerequisites
● You have assigned the manageJavaApplications scope to the platform roles used in the target and source
subaccounts. For more information, see Platform Scopes [page 1676].
● You have an existing source application deployed on SAP Cloud Platform.
● You have installed a REST client.
Context
In this tutorial, you will deploy an application from an existing application by specifying the source account and
application as query parameters.
Procedure
1. Get a CSRF token.
Send a GET CSRF Protection request.
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/csrf
Request Headers:
X-CSRF-Token: Fetch
Authorization: Basic UDE5NDE3OTM5NDg6RnJhZ28jNjQ3Ng==
Server Response:
Response Status: 200
Response Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Note
The CSRF token expires after a while. If you use an invalid CSRF token, you receive an error message similar to this one: HTTP Status 403 - CSRF token validation failed! If this happens, request a new token.
2. Copy the application.
Send a POST Applications request with the copy operation query parameters:
Client Request:
POST: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps?operation=copy&sourceAccount=sourcesubaccount&sourceApplication=sourceapp
Server Response:
Response Status: 201
Response Body:
{
"metadata": {
"url": "/lifecycle/v1/accounts/test/apps/myapp"
},
"entity": {
"accountName": "test",
"applicationName": "myapp",
"runtimeName": "neo-java-web",
"runtimeVersion": "1",
"minProcesses": 1,
"maxProcesses": 1
}
}
Tip
The body is optional for this request. If you do not specify a body, the REST API takes the parameters from the source application.
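The copy request URL carries the source subaccount and application as query parameters; it can be assembled like this (the host and names are the placeholder values from the example above):

```python
from urllib.parse import urlencode

def copy_application_url(host, target_subaccount, source_subaccount, source_application):
    """URL for the POST Applications request with the copy operation."""
    query = urlencode({
        "operation": "copy",
        "sourceAccount": source_subaccount,
        "sourceApplication": source_application,
    })
    return f"https://{host}/lifecycle/v1/accounts/{target_subaccount}/apps?{query}"

url = copy_application_url("api.hana.ondemand.com", "test",
                           "sourcesubaccount", "sourceapp")
```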
3. List the binaries.
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/binaries
Repeat the process and observe the general status until it is FAILED or DEPLOYED. The DEPLOYED status shows that the copy operation has been successful and you can now start your application:
Server Response:
Response Status: 200
Response Body:
{
"metadata": {},
"entity": {
"totalSize": 57857,
"status": "DEPLOYED",
"warnings": "Warning: No compute unit size was
specified for the application so size was set automatically to \u0027lite
\u0027.",
"files": [{
"path": "app.war",
"pathGuid": "YXBwLndhcg\u003d
\u003d",
"size": 17194,
"status": "AVAILABLE",
"hash":
"6c8b99a72d5b42db31cc576273260f9c2f316c1ac7dcc4a8c845412e51d420f0dcf53f4035745e30
3cdd43bf73974fada19839920d845010013bf422ae5bc4dd",
"entries": [{...}]
}, {...}, {...}]
}
}
The binaries are now in DEPLOYED status. You can also see that each binary has the status AVAILABLE.
Prerequisites
Context
You can validate the content of an application by verifying the hash values in a binaries response. For example, you
verify changes to an application by comparing hash values of deployed binaries with the hash values of modified
binaries. You can use this verification to be sure that you have the correct binaries for deploy or update in a copy
operation.
Procedure
1. Get a CSRF token. If you validate the binaries long after the deployment, the token has most probably expired.
Send a GET CSRF Protection request.
2. List the binaries.
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/binaries
Repeat the process and observe the general status until it is FAILED or DEPLOYED. The DEPLOYED status shows that the deployment operation has been successful and you can now start your application:
Server Response:
Response Status: 200
Response Body:
{
"metadata": {},
"entity": {
"totalSize": 57857,
"status": "DEPLOYED",
"warnings": "Warning: No compute unit size was
specified for the application so size was set automatically to \u0027lite
\u0027.",
"files": [{
"path": "app.war",
"pathGuid": "YXBwLndhcg\u003d
\u003d",
"size": 17194,
"status": "AVAILABLE",
"hash":
"6c8b99a72d5b42db31cc576273260f9c2f316c1ac7dcc4a8c845412e51d420f0dcf53f4035745e30
3cdd43bf73974fada19839920d845010013bf422ae5bc4dd",
"entries": [{...}]
}, {
"path": "example.war",
"pathGuid": "ZXhhbXBsZS53YXI\u003d",
"size": 37615,
"status": "AVAILABLE",
"hash":
"7b2a80771f79d0740f629bdaaf019c550b10df55eec8789447ec02fa93e7fdb1f6f47f4864769f4a
4f027a4bca8bfa1ea45a83c5fb38ae539b397abe9fe66be1",
"entries": [{...}]
}, {
"path": "demo.war",
"pathGuid": "ZGVtby53YXI\u003d",
"size": 3048,
"status": "AVAILABLE",
"hash":
"8c4b39bfe3a034d64e8592e7cf638ac4b5985c5f9a4f691270d040b8f15dc8edbb6284bd5431f1a2
40abaad3b2288411563b784b691c35ca677ae5e9ced565a9",
"entries": [{...}]
}]
}
}
The binaries are now in DEPLOYED status. You can also see that each binary has the status AVAILABLE.
3. Compare the hash values of these binaries with those of previous binaries before you start another operation.
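The 128-character hex values in the hash field have the length of a SHA-512 digest; assuming that algorithm, a local check can compare a candidate .war file with the deployed content (the archive bytes below are illustrative):

```python
import hashlib

def sha512_hex(data: bytes) -> str:
    return hashlib.sha512(data).hexdigest()  # 128 hex characters

def matches_deployed(local_bytes: bytes, deployed_hash: str) -> bool:
    """Compare a local file's digest with the 'hash' field from a binaries response."""
    return sha512_hex(local_bytes) == deployed_hash.lower()

war_bytes = b"illustrative archive contents"  # in practice: open("app.war", "rb").read()
digest = sha512_hex(war_bytes)
```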
Procedure
1. Get a CSRF token. If you try to start your application long after its deployment, the token has most probably
expired.
Send a GET CSRF Protection request.
Note
If your session is still actively running, you do not have to request a new CSRF token. In this case, we will use
the CSRF token generated during the deployment scenario.
2. Start the application.
Send a PUT State request:
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/state
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/json
Request Body:
{
"applicationState": "STARTED"
}
Server Response:
Response Status: 200
Response Body:
{
"metadata": {
"message": "Triggered start of application
process.",
"url": "/lifecycle/v1/accounts/test/apps/myapp",
"createdAt": 1501825923105,
"updatedAt": 1501827428000
},
"entity": {
"applicationState": "STARTING",
"processes": [{
"processId":
"dc1460001710d282b42b7331f1831ec5ad9c1924",
"status": "PENDING",
"lastStatusChange": 0,
"availabilityZone": "",
"computeUnitSize": "LITE"
}],
"warningMessage": "Triggered start of application
process."
}
}
The applicationState value will change from STARTING (or PENDING) to STARTED.
3. Make sure the application is working properly.
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/state
Server Response:
Response Body:
{
"metadata": {
"domain": "hana.ondemand.com",
"aliases": "[\"/DemoApp\",\"example\",\"/\"]",
"accessPoints": ["https://
myapptest.int.hana.ondemand.com", "https://myapptest.hana.ondemand.com"],
"runtime": {
"id": "neo-java-web",
"state": "recommended",
"expDate": "1541203200000",
"displayName": "Java Web",
"relDate": "1501718400000",
"version": "1.133.3"
},
"url": "/lifecycle/v1/accounts/test/apps/myapp",
"createdAt": 1501825923105,
"updatedAt": 1501827428000
},
"entity": {
"applicationState": "STARTED",
"loadBalancerState": "ENABLED",
"urls": ["https://
myapptest.int.hana.ondemand.com", "https://myapptest.hana.ondemand.com"],
"processes": [{
"processId":
"dc1460001710d282b42b7331f1831ec5ad9c1924",
"status": "STARTED",
"lbStatus": "ENABLED",
"lastStatusChange": 1501827728209,
"runtime": {
"id": "neo-java-web",
"state":
"recommended",
"expDate":
"1541203200000",
"displayName": "Java
Web",
"relDate":
"1501718400000",
"version":
"1.133.3.2"
},
"availabilityZone": "PRSAG",
"computeUnitSize": "LITE"
}]
}
}
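Starting and stopping an application use the same PUT State request; only the applicationState value in the body differs. A small helper that prepares the body (STARTED and STOPPED are the two values shown in these procedures; whether other values exist is not covered here):

```python
import json

VALID_TARGET_STATES = {"STARTED", "STOPPED"}  # the values used in this guide

def state_change_body(target_state: str) -> bytes:
    """JSON body for PUT .../apps/<application>/state."""
    if target_state not in VALID_TARGET_STATES:
        raise ValueError(f"unsupported target state: {target_state}")
    return json.dumps({"applicationState": target_state}).encode()

start_body = state_change_body("STARTED")
```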
Procedure
1. Get a CSRF token. If you try to stop your application long after its deployment, the token has most probably expired.
Send a GET CSRF Protection request.
Note
If your session is still actively running, you do not have to request a new CSRF token. In this case, we will use the CSRF token generated during the deployment scenario.
2. Stop the application.
Send a PUT State request:
Client Request:
PUT: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/state
Request Headers:
X-CSRF-Token: 3B95B7A8B0E8E6B923C67E6C0BFD234D
Content-Type: application/json
Request Body:
{
"applicationState": "STOPPED"
}
Server Response:
Response Status: 200
Response Body:
{
"metadata": {
"message": "Triggered stop of application
process.",
"url": "/lifecycle/v1/accounts/test/apps/myapp",
"createdAt": 1501825923105,
"updatedAt": 1501827428000
},
"entity": {
"applicationState": "STOPPING",
"processes": [{
"processId":
"dc1460001710d282b42b7331f1831ec5ad9c1924",
"status": "PENDING",
"lastStatusChange": 0,
"availabilityZone": "",
"computeUnitSize": "LITE"
}],
"warningMessage": "Triggered stop of application
process."
}
}
3. Make sure the application has stopped.
Client Request:
GET: https://api.hana.ondemand.com/lifecycle/v1/accounts/test/apps/myapp/state
Server Response:
Response Body:
{
"metadata": {
"aliases": "[]",
"runtime": {
"id": "neo-java-web",
"state": "recommended",
"expDate": "1541203200000",
"displayName": "Java Web",
"relDate": "1501718400000",
"version": "1.133.3"
},
"url": "/lifecycle/v1/accounts/test/apps/myapp",
"createdAt": 1502274734263,
"updatedAt": 1502274835000
},
"entity": {
"applicationState": "STOPPED",
"processes": []
}
}
Related Information
Follow the steps below to deploy your application on a local SAP Cloud Platform server.
Prerequisites
● You have set up your runtime environment in Eclipse IDE. For more information, see Set Up the Runtime
Environment [page 1131].
● You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing Java Applications [page 1164] or Import Samples as Eclipse Projects [page 1145].
Procedure
1. Open the servlet in the Java editor and, from the context menu, choose Run As > Run on Server.
2. The Run On Server window opens. Make sure that the Manually define a new server option is selected.
Note
If this is the first server you run in your IDE workspace, a folder Servers is created and appears in the Project
Explorer navigation tree. It contains configurable folders and files you can use, for example, to change your
HTTP or JMX port.
6. The Internal Web Browser opens in the editor area and shows the application output.
7. Optional: If you try to delete a server with an application running on it, a dialog appears allowing you to choose
whether to only undeploy the application, or to completely delete it together with its configuration.
Next Steps
After you have deployed your application, you can additionally check your server information. In the Servers view,
double-click on the local server and open the Overview tab. Depending on your local runtime, the following data is
available:
● If you have run your application in Java Web or Java EE 6 Web Profile runtime, you see the standard
server data (General Info, Publishing, Timeouts, Ports).
● If you have run your application in Java Web Tomcat 7 or Java Web Tomcat 8 runtime, you see some
additional Tomcat sections, default Tomcat ports, and an extra Modules page, which shows a list of all
applications deployed by you.
Related Information
Prerequisites
● You have set up your runtime environment in the Eclipse IDE. For more information, see Set Up the Runtime
Environment [page 1131].
● You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing Java Applications [page 1164] or Import Samples as Eclipse Projects [page 1145].
● You have an active subaccount. For more information, see Get a Free Trial Account [page 919].
Procedure
1. Open the servlet in the Java editor and, from the context menu, choose Run As > Run on Server.
2. The Run On Server dialog box appears. Make sure that the Manually define a new server option is selected.
Note
○ If you have previously entered a subaccount and user name for your region host, these names are offered in dropdown lists.
○ Previously entered region hosts are also offered in a dropdown list.
○ If you select the Save password box, the password entered for a given user name is remembered and kept in the secure store.
9. Choose Finish. This triggers the publishing of the application on SAP Cloud Platform.
Note
You cannot deploy multiple applications on the same application process. Deployment of a second
application on the same application process overwrites any previous deployments. If you want to deploy
several applications, deploy each of them on a separate application process.
Next Steps
● If you need to redeploy your application during development, choose Run on Server or Publish; the cloud server is not restarted, and only the application binaries are updated.
You can see all applications deployed in your subaccount within the Eclipse Tools, or change the current runtime.
For more information, see Configuring Advanced Configurations [page 1193].
Related Information
SAP Cloud Platform Tools provide options for advanced server and application configurations from the Eclipse IDE,
as well as direct reference to the cockpit UI.
Prerequisites
You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing Java Applications [page 1164] or Import Samples as Eclipse Projects [page 1145].
Alternatives
There are alternative ways to open the cockpit (1) and the application URLs (2).
1. In the Servers view, open the context menu and choose Show In > Cockpit.
2. In the Servers view, expand the cloud server node and, from the context menu of the relevant application, choose Application URL > Open. The URL opens in a new browser tab.
Tip
● If the application is published on the cloud server, besides the Open option you can also choose Copy to
Clipboard, which only copies the application URL.
● If the application has not been published but only added to the server, Copy to Clipboard will be disabled.
The Open option though will display a dialog which allows you to publish and then open the application in a
browser.
● If the cloud server is not in Started status, both Application URL options will be disabled.
After you have deployed your application, you can check and also change the server runtime. Proceed as follows:
Note
When you change the Runtime value so that it differs from the one in Runtime in use, after saving your
change, a link appears prompting you to republish the server.
From the server editor, you can configure additional application parameters, such as compute unit size, JVM
arguments, and others.
Note
If you make your configurations on a started server, the changes will take effect after server restart. You can
use the link Restart to apply changes.
Related Information
The console client allows you to install a server runtime in a local folder and use it to deploy your application.
Procedure
2. To install the local server runtime, enter the following command and press ENTER :
neo install-local
3. To start the local server, enter the following command and press ENTER :
neo start-local
This starts a local server instance in the default local server directory <SDK installation folder>/
server. Again, use the following optional command argument to specify another directory:
4. To deploy your application, enter the following command as shown in the example below and press ENTER :
This deploys the WAR file on the local server instance. If necessary, specify another directory as in step 3.
5. To check that your application is running, open a browser and enter its URL, for example:
http://localhost:8080/hello-world
Note
The HTTP port is normally 8080. However, the exact port configurations used for your local server,
including the HTTP port, are displayed on the console screen when you install and start the local server.
6. To stop the local server instance, enter the following command from the <SDK installation folder>/
tools folder and press ENTER :
neo stop-local
Related Information
Deploying an application publishes it to SAP Cloud Platform. During deploy, you can define various specifics of the
deployed application using the deploy command optional parameters.
Prerequisites
● You have downloaded and configured the SAP Cloud Platform console client. For more information, see Set Up the Console Client [page 1135].
● Depending on your subaccount type, deploy the application on the respective region host. For more information, see Regions [page 21].
Procedure
1. In the opened command line console, execute the neo deploy command with the appropriate parameters.
You can define the parameters of commands directly in the command line as in the example below, or in the
properties file. For more information, see Using the Console Client [page 1792].
2. Enter your password if requested.
3. Press ENTER to start the deployment of your application. If deployment fails, check whether you have defined the parameters correctly.
Note
The size of an application deployed on SAP Cloud Platform can be up to 1.5 GB. If the application is
packaged as a WAR file, the size of the unzipped content is taken into account.
Example
Next Steps
To make your deployed application available for requests, you need to start it by executing the neo start
command.
Then, you can manage the application lifecycle (check the status; stop; restart; undeploy) using dedicated console
client commands.
By using the delta deployment option, you can apply changes to a deployed application faster, without uploading the entire set of files to SAP Cloud Platform.
Context
The delta parameter allows you to deploy only the changes between the provided source and the previously deployed content: new content is added, missing content is deleted, and existing content is updated if there are changes. The delta parameter is available in two commands: deploy and hot-update.
Note
Use the delta parameter only during development to save time. For updating productive applications, deploy the whole application.
Procedure
To upload only the changed files from the application WARs, use one of the two approaches:
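Conceptually, a delta deployment compares the provided source with the previously deployed content and uploads only the differences. That classification can be sketched as follows (the platform's actual --delta implementation is internal; per-file digests are just one way to detect changes):

```python
import hashlib

def classify_delta(local_files, deployed_hashes):
    """Split local files into added / changed / unchanged relative to the
    deployed content; files present only remotely are the ones a delta
    deployment would delete."""
    added, changed, unchanged = [], [], []
    for path, data in local_files.items():
        digest = hashlib.sha512(data).hexdigest()
        if path not in deployed_hashes:
            added.append(path)
        elif deployed_hashes[path] != digest:
            changed.append(path)
        else:
            unchanged.append(path)
    deleted = [p for p in deployed_hashes if p not in local_files]
    return sorted(added), sorted(changed), sorted(unchanged), sorted(deleted)

# Illustrative state: app.war changed, example.war is new, demo.war was removed.
old = {"app.war": hashlib.sha512(b"v1").hexdigest(),
       "demo.war": hashlib.sha512(b"demo").hexdigest()}
new_files = {"app.war": b"v2", "example.war": b"new"}
added, changed, unchanged, deleted = classify_delta(new_files, old)
```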
Related Information
The cockpit allows you to deploy Java applications as WAR files and supports a number of deployment options for
configuring the application.
Procedure
○ Start: Start the application to activate its URL and make the application available to your end users.
○ Close: Simply close the dialog box if you do not want to start the application immediately.
You can update or redeploy the application whenever required. To do this, choose Update application; the same dialog box opens in update mode. You can update the application with a new WAR file or change the configuration parameters.
To change the name of a deployed application, deploy a new application under the desired name, and delete the
application whose name you want to change.
Related Information
After you have created a Web application and tested it locally, you may want to inspect its runtime behavior and state by debugging the application in SAP Cloud Platform. The local and the cloud scenarios are analogous.
Context
The debugger enables you to detect and diagnose errors in your application. It allows you to control the execution
of your program by setting breakpoints, suspending threads, stepping through the code, and examining the
contents of the variables. You can debug a servlet or a JSP file on an SAP Cloud Platform server without losing the state of your application.
Note
Currently, it is only possible to debug Web applications in SAP Cloud Platform that have exactly one application
process (node).
Tasks
Related Information
In this section, you can learn how to debug a Web application on SAP Cloud Platform local runtime in the Eclipse
IDE.
Prerequisites
You have developed a Web application using the Eclipse IDE. For more information, see Developing Java
Applications [page 1164].
Procedure
Related Information
In this section, you can learn how to debug a Web application on SAP Cloud Platform depending on whether you
have deployed it in the Eclipse IDE or in the console client.
Prerequisites
● You have developed a Web application using the Eclipse IDE. For more information, see Developing Java
Applications [page 1164].
● You have deployed your Web application either using the Eclipse IDE or via the console client. For more
information, see Deploying and Updating Applications [page 1175].
Note
Debugging can be enabled if there is only one VM started for the requested account or application.
Procedure
Note
Since cloud servers are running on SAP JVM, switching modes does not require restart and happens in real
time.
1. Deploy your Web application in the console client and start it.
2. Go to the Eclipse IDE, open the Servers view, and choose New > Server.
3. Choose SAP > SAP Cloud Platform.
4. Enter the correct region host, according to your location. (For more information, see Regions [page 21].)
5. Edit the server name, if necessary, and choose Next.
Note
● If you have deployed an application on a running server, we recommend that you do not use Debug on Server or Run on Server, because this will republish (redeploy) your application.
● Also, bear in mind that if you have deployed two or more WAR files, only the debugged one will remain afterward.
Related Information
In the Neo environment of SAP Cloud Platform, you can develop and run multitenant (tenant-aware) applications.
These applications run on a shared compute unit that can be used by multiple consumers (tenants). Each
consumer accesses the application through a dedicated URL.
You can read about the specifics of each platform service with regard to multitenancy in the respective sections below. Multitenancy enables you to:
● Isolate data
● Save resources by sharing them among tenants
● Perform updates efficiently, that is, in one step
Currently, you can trigger the subscription via the console client for testing purposes. For more information, see
Providing Java Multitenant Applications to Tenants for Testing [page 975].
When an application is accessed via a consumer specific URL, the application environment is able to identify the
current consumer. The application developer can use the tenant context API to retrieve and distinguish the tenant
ID, which is the unique ID of the consumer. When developing tenant-aware applications, data isolation for different
consumers is essential. It can be achieved by distinguishing the requests based on the tenant ID. There are also
some specifics in the usage of different services when you develop your multitenant application.
● Shared in-memory data such as Java static fields will be available to all tenants.
● Avoid any possibility that an application user can execute custom code in the application JVM, as this may give
them access to other tenants' data.
● Avoid any possibility that an application user can access a file system, as this may give them access to other
tenants' data.
For more information, see Multitenancy in the Connectivity Service [page 238].
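The isolation guidelines above boil down to one rule: key every read and write by the current tenant ID. A minimal illustration of that rule (the actual TenantContext API is Java; the tenant IDs below are made up, and a real application would apply the same keying to database queries, caches, and file paths):

```python
class TenantIsolatedStore:
    """In-memory store partitioned by tenant ID: every read and write is
    keyed by the current consumer, so data never leaks across tenants."""

    def __init__(self):
        self._data = {}

    def put(self, tenant_id, key, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key, default=None):
        return self._data.get(tenant_id, {}).get(key, default)

store = TenantIsolatedStore()
store.put("tenant-a", "greeting", "hello")  # written under tenant-a only
other = store.get("tenant-b", "greeting")   # tenant-b cannot see tenant-a's data
```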
Multitenant applications on SAP Cloud Platform have two approaches available to separate the data of the
different consumers:
Document Service
The document service automatically separates the documents according to the current consumer of the
application. When an application connects to a document repository, the document service client automatically
propagates the current consumer of the application to the document service. The document service uses this
information to separate the documents within the repository. If an application wants to connect to the data of a
dedicated consumer instead of the current consumer (for example in a background process), the application can
specify the tenant ID of the corresponding consumer when connecting to the document repository.
The Keystore Service provides a repository for cryptographic keys and certificates to tenant-aware applications
hosted on SAP Cloud Platform. Because the tenant defines a specific configuration of an application, you can
configure an application to use different keys and certificates for different tenants.
For more information about the Keystore Service, see Keys and Certificates [page 2231].
Access rights for tenant-aware application are usually maintained by the application consumer, not by the
application provider. An application provider may predefine roles in the web.xml when developing the application.
By default, predefined roles are shared with all application consumers, but could also be made visible only to the
provider subaccount. Once a consumer is subscribed to this application, shared predefined roles become visible in
the cockpit of the application consumer. Then, the application consumer can assign users to these roles to give
them access to the provider application. In addition, application consumer subaccounts can add their own custom
roles to the subscribed application. Custom roles are visible only within the application consumer subaccount
where they are created.
For more information about managing application roles, see Managing Roles [page 2151].
Trust configuration regarding authentication with SAML2.0 protocol is maintained by the application consumer.
For more information about configuring trust, see Application Identity Provider [page 2161].
Related Information
Context
● Application Provider - an organizational unit that uses SAP Cloud Platform to build, run and sell
applications to customers, that is, the application consumers.
● Application Consumer - an organizational unit, typically a customer or a department inside a customer's organization, which uses an SAP Cloud Platform application for a certain purpose. The application is provided by an application provider.
To use SAP Cloud Platform, both the application provider and the application consumer need to have a
subaccount. The subaccount is the central organizational unit in SAP Cloud Platform. It is the central entry
point to SAP Cloud Platform for both application providers and consumers. It may consist of a set of applications,
a set of subaccount members and a subaccount-specific configuration.
Subaccount members are users who must be registered via the SAP ID service. Subaccount members may have
different privileges regarding the operations which are possible for a subaccount (for example, subaccount
administration, deploy/start/stop applications). Note that the subaccount belongs to an organization and not to an
individual. Nevertheless, the interaction with the subaccount is performed by individuals, the members of the
subaccount. The subaccount-specific configuration allows application providers and application consumers to
adapt their subaccount to their specific environment and needs.
An application resides in exactly one subaccount, the hosting subaccount. It is uniquely identified by the
subaccount name and the application name. Applications consume SAP Cloud Platform resources, for instance,
compute units, structured and unstructured storage and outgoing bandwidth. Costs for consumed resources are
billed to the owner of the hosting subaccount, who can be an application provider, an application consumer, or
both.
Related Information
Overview
In a provider-managed application scenario, each application consumer gets its own access URL for the provider
application. To be able to use an application with a consumer-specific URL, the consumer must be subscribed to
the provider application. When an application is launched via a consumer-specific URL, the tenant runtime is able
to identify the current consumer of the application. The tenant runtime provides an API to retrieve the current
application consumer. Each application consumer is identified by a unique ID which is called tenantId.
Since the information about the current consumer is extracted from the request URL, the tenant runtime can only
provide a tenant ID if the current thread has been started via an HTTP request. In case the current thread was not
started via an HTTP request (for example, a background process), the tenant context API only returns a tenant if
the current application instance has been started for a dedicated consumer. If the current application instance is
shared between multiple consumers and the thread was not started via an HTTP request, the tenant runtime
throws an exception.
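The resolution rules above can be summarized in a small sketch. This is plain Java that models the described behavior; TenantResolver and its parameters are illustrative names only, not part of the com.sap.cloud.account API:

```java
// Simplified model of the tenant-resolution rules described above.
// Names (TenantResolver, parameters) are illustrative, not the SAP API.
public class TenantResolver {

    /**
     * @param requestTenantId   tenant ID extracted from the request URL,
     *                          or null if the thread was not started via HTTP
     * @param dedicatedTenantId tenant the application instance was started for,
     *                          or null if the instance is shared
     */
    public static String resolveTenant(String requestTenantId, String dedicatedTenantId) {
        if (requestTenantId != null) {
            return requestTenantId;    // HTTP request: tenant comes from the URL
        }
        if (dedicatedTenantId != null) {
            return dedicatedTenantId;  // background thread, dedicated instance
        }
        // Background thread on a shared instance: no tenant can be determined
        throw new IllegalStateException("No tenant context available");
    }
}
```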
Note
The tenant context API is of interest to application providers only.
The TenantContext API is provided through the resource type com.sap.cloud.account.TenantContext. To use it, declare the following resource reference in the web.xml file:
<resource-ref>
<res-ref-name>TenantContext</res-ref-name>
<res-type>com.sap.cloud.account.TenantContext</res-type>
</resource-ref>
To get an instance of the TenantContext API, use resource injection the following way:
@Resource
private TenantContext tenantContext;
Note
When you use WebSockets, the TenantId and AccountName parameters provided by the TenantContext API are correct only while the WebSocket handshake request is being processed. This is because the handshake is the only HTTP request involved; the WebSocket communication that follows it is not HTTP-based, so no tenant information can be derived from it.
Account API
The Account API provides methods to get subaccount ID, subaccount display name, and attributes. For more
information, see the Javadoc.
Related Information
The following tutorials describe end-to-end scenarios with multitenant demo applications:
● Create a general demo application (servlet): Create an Exemplary Provider Application (Servlet) [page 1212]
● Create a general demo application (JSP file): Create an Exemplary Provider Application (JSP) [page 1215]
● Create a connectivity demo application: Create a Multitenant Connectivity Application [page 1217]
● Consume a connectivity demo application: Consume a Multitenant Connectivity Application [page 1221]
This tutorial explains how to create a sample application that makes use of the multitenancy concept. That is, you enable your application to be consumed by users who are members of a tenant that is subscribed to this application.
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and the SAP Cloud Platform SDK for the Neo environment. For more information, see Setting Up the Development Environment [page 1126].
● You are an application provider. For more information, see Multitenancy Roles [page 1207].
Procedure
5. Choose Finish so that the TenantContext.java servlet is created and opened in the Java editor.
6. Go to /TenantContextApp/WebContent/WEB-INF and open the web.xml file.
7. Choose the Source tab page.
8. Add the following code block to the <web-app> element:
<resource-ref>
<res-ref-name>TenantContext</res-ref-name>
<res-type>com.sap.cloud.account.TenantContext</res-type>
</resource-ref>
9. Replace the entire servlet class with the following sample code:
package tenantcontext.demo;

import java.io.IOException;
import java.io.PrintWriter;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.sap.cloud.account.TenantContext;

/**
 * Servlet implementation class TenantContextServlet
 */
public class TenantContextServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    /**
     * @see HttpServlet#HttpServlet()
     */
    public TenantContextServlet() {
        super();
    }

    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        try {
            InitialContext ctx = new InitialContext();
            Context envCtx = (Context) ctx.lookup("java:comp/env");
            // Look up the TenantContext resource declared in web.xml
            TenantContext tenantContext = (TenantContext) envCtx.lookup("TenantContext");
            String currentTenantId = tenantContext.getTenant().getId();
            PrintWriter writer = response.getWriter();
            writer.println("The application was accessed on behalf of a tenant with an ID: "
                    + currentTenantId);
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }
}
10. Save the file. The project compiles without errors.
You have successfully created a Web application containing a sample servlet that uses the tenant context functionality.
To learn how to deploy your application, see Deploy on the Cloud from Eclipse IDE [page 1191].
Result
You have created a sample application that can be requested in a browser. Its output depends on the tenant
context.
● To test the access to your multitenant application, go to a browser and request it on behalf of your subaccount.
Use the following URL pattern: https://<application_name><provider_subaccount>.<host>/
<application_path>
● If you want to test the access to your multitenant application on behalf of a consumer subaccount, follow the steps in Consume a Multitenant Connectivity Application [page 1221].
Related Information
This tutorial explains how to create a sample application that makes use of the multitenancy concept. That is, you enable your application to be consumed by users who are members of a tenant that is subscribed to this application.
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and the SAP Cloud Platform SDK for the Neo environment. For more information, see Setting Up the Development Environment [page 1126].
● You are an application provider. For more information, see Multitenancy Roles [page 1207].
Procedure
Add the following code block to the <web-app> element of the web.xml file:
<resource-ref>
<res-ref-name>TenantContext</res-ref-name>
<res-type>com.sap.cloud.account.TenantContext</res-type>
</resource-ref>
1. Under the TenantContextApp project node, choose New JSP File in the context menu.
2. Enter index.jsp as the File name and choose Finish.
3. Open the index.jsp file using the text editor.
4. Replace the entire JSP file content with the following sample code:
<%@page import="javax.naming.InitialContext,javax.naming.Context,com.sap.cloud.account.TenantContext" %>
<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>SAP Cloud Platform - Tenant Context Demo Application</title>
</head>
<body>
<h2> Welcome to the SAP Cloud Platform Tenant Context demo application</h2>
<br></br>
<%
try {
InitialContext ctx = new InitialContext();
Context envCtx = (Context) ctx.lookup("java:comp/env");
        TenantContext tenantContext = (TenantContext) envCtx.lookup("TenantContext");
        String currentTenantId = tenantContext.getTenant().getId();
        out.println("<p><font size=\"5\"> The application was accessed on behalf of a tenant with an ID: <b>"
                + currentTenantId + "</b></font></p>");
} catch (Exception e) {
out.println("error at client");
}
%>
</body>
</html>
To learn how to deploy your application, see Deploy on the Cloud from Eclipse IDE [page 1191].
Result
You have successfully created a Web application containing a JSP file and tenant context functionality.
Next Steps
● To test the access to your multitenant application, go to a browser and request it on behalf of your subaccount.
Use the following URL pattern: https://<application_name><provider_subaccount>.<host>/
<application_path>
● If you want to test the access to your multitenant application on behalf of a consumer subaccount, follow the steps in Consume a Multitenant Connectivity Application [page 1221].
Related Information
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java and SAP Cloud Platform
SDK for Neo environment. For more information, see Setting Up the Development Environment [page 1126].
● You are an application provider. For more information, see Multitenancy Roles [page 1207].
Context
This tutorial explains how to create a sample application that is based on the multitenancy concept, makes use of the connectivity service, and can later be consumed by other users. That is, you enable your application to be consumed by users who are members of a tenant that is subscribed to this application. The output of the application you are about to create displays a welcome page showing the URI of the tenant-specific destination configuration.
The application code is the same as for a standard HelloWorld application consuming the connectivity service, because the connectivity service handles multitenancy without any additional actions on your part. Users of a consumer subaccount that is subscribed to this application can access it using a tenant-specific URL, which leads the application to use a tenant-specific destination configuration. For more information, see Multitenancy in the Connectivity Service [page 238].
Note
As a provider, you can set destination configurations on application and subaccount level. These serve as default destination configurations in case a consumer has not configured a tenant-specific destination configuration (on subscription level).
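The fallback order described in the note can be sketched as a lookup chain. This is an illustrative model only; the connectivity service performs this resolution internally, and the class below is not a platform API:

```java
import java.util.Map;

// Illustrative sketch of the destination-resolution order described above:
// a tenant-specific configuration on subscription level wins; otherwise the
// provider defaults on subaccount and application level are used.
public class DestinationResolution {

    public static String resolve(String name,
                                 Map<String, String> subscriptionLevel,
                                 Map<String, String> subaccountLevel,
                                 Map<String, String> applicationLevel) {
        if (subscriptionLevel.containsKey(name)) return subscriptionLevel.get(name);
        if (subaccountLevel.containsKey(name))   return subaccountLevel.get(name);
        if (applicationLevel.containsKey(name))  return applicationLevel.get(name);
        throw new IllegalArgumentException("Destination not configured: " + name);
    }
}
```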
Procedure
Add the following code block to the <web-app> element of the web.xml file:
<resource-ref>
<res-ref-name>search_engine_destination</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
1. Under the MultitenantConnectivity project node, choose New JSP File in the context menu.
2. Enter index.jsp as the File name and choose Finish.
3. Open the index.jsp file using the text editor.
4. Replace the entire JSP file content with the following sample code:
<%@page import="javax.naming.InitialContext,javax.naming.Context,com.sap.core.connectivity.api.http.HttpDestination,java.util.Arrays"%>
<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>SAP Cloud Platform - Multitenant Connectivity Demo Application</title>
</head>
<body>
<h2>Welcome to SAP Cloud Platform - multitenant connectivity demo application</h2>
<br></br>
<%
try {
        Context context = (Context) new InitialContext().lookup("java:comp/env");
        // In this case you don't need to explicitly use the TenantContext API,
        // because the connectivity service handles the tenancy by itself.
        // The retrieved HttpDestination object will be tenant-specific.
        String destinationName = "search_engine_destination";
        HttpDestination destination = (HttpDestination) context.lookup(destinationName);
        out.println("<p><font size=\"5\"> Retrieved destination with name <i>"
                + destination.getName() + "</i> and URI <b>"
                + destination.getURI() + "</b></font></p>");
    } catch (Exception e) {
        out.println("<b>An exception has been thrown: <i>" + e.getMessage() + "</i></b>");
        out.println(Arrays.toString(e.getStackTrace()));
    }
%>
</body>
</html>
You have successfully created a Web application containing a sample JSP file that consumes the connectivity service by looking up a destination configuration.
To learn how to deploy your application, see Deploy on the Cloud from Eclipse IDE [page 1191].
You, as application provider, can configure a default destination, which is then used at runtime when the
application is requested in the context of the provider subaccount. In this case, the URL used to access the
application is not tenant-specific.
Example:
Name=search_engine_destination
URL=https://www.google.com
Type=HTTP
ProxyType=Internet
Authentication=NoAuthentication
TrustAll=true
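The configuration above uses a plain key=value format. For illustration only, such a snippet can be read with java.util.Properties; this sketch demonstrates the file format and is not how the platform itself consumes destinations:

```java
import java.io.StringReader;
import java.util.Properties;

public class DestinationFileDemo {

    // Parses a destination configuration in the key=value format shown above.
    public static Properties parse(String text) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(text));
        return props;
    }

    public static void main(String[] args) throws Exception {
        String config =
                "Name=search_engine_destination\n" +
                "URL=https://www.google.com\n" +
                "Type=HTTP\n" +
                "ProxyType=Internet\n" +
                "Authentication=NoAuthentication\n" +
                "TrustAll=true\n";
        Properties p = parse(config);
        System.out.println(p.getProperty("Name") + " -> " + p.getProperty("URL"));
    }
}
```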
For more information on how to define a destination for the provider subaccount, see the related information.
Result
You have created a sample application which can be requested in a browser. Its output depends on the tenant
name.
Next Steps
● To test the access to your multitenant application, go to a browser and request it on behalf of your subaccount.
Use the following URL pattern: https://<application_name><provider_subaccount>.<host>/
<application_path>
● If you want to test the access to your multitenant application on behalf of a consumer subaccount, follow the steps in Consume a Multitenant Connectivity Application [page 1221].
Related Information
Prerequisites
Note
This tutorial assumes that your subaccount is subscribed to the following exemplary application (deployed in a
provider subaccount): Create a Multitenant Connectivity Application [page 1217]
Context
This tutorial explains how you can consume a sample connectivity application based on the multitenancy concept.
That is, you are a member of a subaccount that is subscribed to applications provided by other subaccounts. The output of the application you are about to consume displays a welcome page showing the URI of the tenant-specific destination configuration. This means that the administrator of your consumer subaccount may have previously set a tenant-specific configuration for this application. If such a configuration has not been set, the application uses a default one, set by the administrator of the provider subaccount.
Users of a consumer subaccount, which is subscribed to an application, can access the application using a tenant-
specific URL. This would lead the application to use a tenant-specific destination configuration. For more
information, see Multitenancy in the Connectivity Service [page 238].
Note
As a consumer, you can set a tenant-specific destination configuration on subscription level.
Procedure
You can consume a provider application if your subaccount is subscribed to it. In this case, administrators of your
consumer subaccount can configure a tenant-specific destination configuration, which can later be used by the
provider application.
To illustrate the tenant-specific consumption, the URL used in this example is different from the one in the exemplary provider application tutorial.
Name=search_engine_destination
URL=http://www.yahoo.com
Type=HTTP
ProxyType=Internet
Authentication=NoAuthentication
TrustAll=true
Tip
The destination name depends on the provider application.
For more information on how to configure a destination for the provider subaccount, see the related information.
Go to a browser and request the application on behalf of your subaccount. Use the following URL pattern:
https://<application_name><provider_subaccount>-<consumer_subaccount>.<host>/
<application_path>
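The two URL patterns used in these tutorials (provider access without a consumer part, consumer access with a `-<consumer_subaccount>` suffix on the subdomain) can be sketched as simple string assembly. The helper below is illustrative only, not a platform API:

```java
public class TenantUrlDemo {

    // Provider access:
    // https://<application_name><provider_subaccount>.<host>/<application_path>
    public static String providerUrl(String app, String provider, String host, String path) {
        return "https://" + app + provider + "." + host + path;
    }

    // Consumer access:
    // https://<application_name><provider_subaccount>-<consumer_subaccount>.<host>/<application_path>
    public static String consumerUrl(String app, String provider, String consumer,
                                     String host, String path) {
        return "https://" + app + provider + "-" + consumer + "." + host + path;
    }
}
```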
Result
The application is requested in a browser. Its output is relevant to your tenant-specific destination configuration.
Related Information
With SAP Cloud Platform, you can use the SAP HANA development tools to create comprehensive analytical
models and build applications with SAP HANA's programmatic interfaces and integrated development
environment.
Appropriate for
Related Information
Set up your SAP HANA development environment and run your first application in the cloud.
Note
To determine the most suitable tool for your development scenario, see SAP HANA Developer Information
by Scenario.
Add Features
Use calculation views and visualize the data with SAPUI5. See: 8 Easy Steps to Develop an XS application on the
SAP Cloud Platform
Enable SHINE
Enable the demo application SAP HANA Interactive Education (SHINE) [page 1237] and learn how to build native
SAP HANA applications.
Before developing your SAP HANA XS application, you need to download and set up the necessary tools.
Prerequisites
● You have downloaded and installed a 32-bit or 64-bit version of Eclipse IDE, version Neon. For more
information, see Install Eclipse IDE [page 1128].
Caution
Support for the Eclipse Mars release has reached end of maintenance.
● You have configured your proxy settings (in case you work behind a proxy or a firewall). For more information,
see Install SAP Development Tools for Eclipse [page 1129] → step 3.
Procedure
5. Choose Next.
6. On the next wizard page, you get an overview of the features to be installed. Choose Next.
7. Confirm the license agreements.
8. Choose Finish to start the installation.
9. After the successful installation, you will be prompted to restart your Eclipse IDE.
Next Steps
Creating an SAP HANA XS Hello World Application Using SAP HANA Web-based Development Workbench [page
1225]
Creating an SAP HANA XS Hello World Application Using SAP HANA Studio [page 1229]
Create and test a simple SAP HANA XS application that displays the "Hello World" message.
Prerequisites
Make sure the database you want to use is deployed in your subaccount before you begin with this tutorial. You can
create SAP HANA XS applications using one of the following databases:
Note
For more information on purchasing a larger SAP HANA database for development or productive purposes, see SAP Cloud Platform Pricing and Packaging.
Context
You will perform all subsequent activities with this new user.
Procedure
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
All databases available in the selected account are listed with their ID, type, version, and related database
system.
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform further
actions, for example, delete the database.
3. Depending on the database you are using, choose one of the following options:
A productive SAP HANA XS database: Follow the steps described in Create a Database Administrator User [page 1245].
A productive or trial SAP HANA tenant database:
1. Select the relevant SAP HANA tenant database in the list.
2. In the overview that is shown in the lower part of the screen, open the SAP HANA cockpit link under Administration Tools.
3. In the Enter Username field, enter SYSTEM, then enter the password you determined for the SYSTEM user in the Enter Password field.
A message is displayed to inform you that at that point, you lack the roles that you need to open the SAP HANA cockpit.
4. To confirm the message, choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
5. Choose Continue.
You are now logged on to the SAP HANA cockpit.
6. Choose Manage Roles and Users.
7. To create database users and assign them the required roles, expand the Security node.
8. Open the context menu for the Users node and choose New User.
9. On the User tab, provide a name for the new user.
The user name always appears in upper case letters.
10. In the Authentication section, make sure the Password checkbox is selected and enter a password.
Note
The password must start with a letter and only contain uppercase and lowercase letters ('a' -
'z', 'A' - 'Z'), and numbers ('0' - '9').
Note
For more information on the CONTENT_ADMIN role, see Predefined Database Roles.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to work with the SAP HANA Web-based Development Workbench by logging out from the SAP HANA cockpit first. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench with the SYSTEM user instead of your new database user. Therefore, choose the Logout button before you continue to work with the SAP HANA Web-based Development Workbench, where you need to log on again with the new database user.
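The password rule stated in the note above (the password must start with a letter and contain only letters and digits) corresponds to a single regular expression. The following check is illustrative only and is not part of SAP HANA or the platform:

```java
public class PasswordRuleDemo {

    // The note above states: must start with a letter and contain only
    // uppercase/lowercase letters and digits. (Illustrative check only.)
    public static boolean matchesRule(String password) {
        return password.matches("[a-zA-Z][a-zA-Z0-9]*");
    }
}
```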
Procedure
1. Open the SAP Cloud Platform cockpit and choose SAP HANA / SAP ASE Databases & Schemas in the
navigation area.
2. Select the relevant database from the list and choose SAP HANA Web-based Development Workbench under
Development Tools.
Note
If you log on to the SAP HANA Web-based Development Workbench for the first time, you are prompted to
change your initial password.
The editor is displayed. The header shows the details for your user and database. Hover over the entry for the
SID to view the details.
5. Create a new package by choosing New Package from the context menu for the Content folder.
6. Enter a package name.
Open the files under the new package hierarchy to view them in the editor.
9. Only if you are using an SAP HANA tenant database: From the context menu for the new package node,
choose Activate All.
Procedure
In the Editor of the SAP HANA Web-based Development Workbench, select the logic.xsjs file from the newly created
package and choose Run.
The program is deployed and displayed in the browser: Hello World from User <Your User>.
Note
If you have used an SAP HANA XS database for creating your SAP HANA XS application, you can also launch
your application from the SAP Cloud Platform cockpit by choosing the application URL after navigating to
Applications HANA XS Applications . For more information, see Launch SAP HANA XS Applications [page
1239].
Create and test a simple SAP HANA XS application that displays the "Hello World" message.
Prerequisites
Make sure the database you want to use is deployed in your account before you begin with this tutorial. You can
create SAP HANA XS applications using one of the following databases:
Note
For more information on purchasing a larger SAP HANA database for development or productive purposes, see SAP Cloud Platform Pricing and Packaging.
You also need to install the tools as described in Install SAP HANA Tools for Eclipse [page 1224] to follow the steps
described in this tutorial.
Context
Context
You will perform all subsequent activities with this new user.
2. Choose SAP HANA / SAP ASE Databases & Schemas in the navigation area.
All databases available in the selected account are listed with their ID, type, version, and related database
system.
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform further
actions, for example, delete the database.
3. Depending on the database you are using, choose one of the following options:
A productive SAP HANA XS database: Follow the steps described in Create a Database Administrator User [page 1245].
A productive or trial SAP HANA tenant database:
1. Select the relevant SAP HANA tenant database in the list.
2. In the overview that is shown in the lower part of the screen, open the SAP HANA cockpit link under Administration Tools.
3. In the Enter Username field, enter SYSTEM, then enter the password you determined for the SYSTEM user in the Enter Password field.
A message is displayed to inform you that at that point, you lack the roles that you need to open the SAP HANA cockpit.
4. To confirm the message, choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
5. Choose Continue.
You are now logged on to the SAP HANA cockpit.
6. Choose Manage Roles and Users.
7. To create database users and assign them the required roles, expand the Security node.
8. Open the context menu for the Users node and choose New User.
9. On the User tab, provide a name for the new user.
The user name always appears in upper case letters.
10. In the Authentication section, make sure the Password checkbox is selected and enter a password.
Note
The password must start with a letter and only contain uppercase and lowercase letters ('a' -
'z', 'A' - 'Z'), and numbers ('0' - '9').
Note
For more information on the CONTENT_ADMIN role, see Predefined Database Roles.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to work with the SAP HANA Web-based Development Workbench by logging out from the SAP HANA cockpit first. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench with the SYSTEM user instead of your new database user. Therefore, choose the Logout button before you continue to work with the SAP HANA Web-based Development Workbench, where you need to log on again with the new database user.
Context
Connect to a dedicated SAP HANA database using SAP HANA Tools via the Eclipse IDE.
Procedure
Note
Make sure that you specify the host correctly.
b. Specify the subaccount name, e-mail or SCN user name, and your SCN password.
c. Choose Next.
5. Select a database and provide your credentials:
Note
Make sure that you specify the database user and password correctly.
If you select Save password, the password for a given user name is kept in the secure store.
d. Choose Finish.
Results
Context
After you add the SAP HANA system hosting the repository that stores your application-development files, you
must specify a repository workspace, which is the location in your file system where you save and work on the
development files.
Procedure
Results
In the Repositories view, you see your workspace, which enables you to browse the repository of the system tied to
this workspace. The repository packages are displayed as folders.
At the same time, a folder will be added to your file system to hold all your development files.
Context
After you set up a development environment for the chosen SAP HANA system, you can add a project to contain all
the development objects you want to create as part of the application-development process. There are a variety of
project types for different types of development objects. Generally, a project type ensures that only the necessary
libraries are imported to enable you to work with development objects that are specific to a project type. In this
tutorial, you create an XS Project.
1. In the SAP HANA Development perspective in the Eclipse IDE, choose File New XS Project .
2. Make sure the Share project in SAP repository option is selected and enter a project name.
3. Choose Next.
4. Select the repository workspace you created in the previous step and choose Next.
5. Choose Finish without doing any further changes.
Results
The Project Explorer view in the SAP HANA Development perspective in Eclipse displays the new project. The
system information in brackets to the right of the project node name in the Project Explorer view indicates that the
project has been shared; shared projects are regularly synchronized with the Repository hosted on the SAP HANA
system you are connected to.
Context
SAP HANA Extended Application Services (SAP HANA XS) supports server-side application programming in JavaScript. In this step, you add some simple JavaScript code that generates a page displaying the words Hello, World!
Procedure
1. In the Project Explorer view in the SAP HANA Development perspective in Eclipse, right-click your XS project,
and choose New Other in the context-sensitive popup menu.
2. In the Select a wizard dialog, choose SAP HANA Application Development XS JavaScript File .
3. In the New XS JavaScript File dialog, enter MyFirstSourceFile.xsjs in the File name text box and choose
Next.
4. Choose Finish.
5. In the MyFirstSourceFile.xsjs file, enter the following code and save the file:
$.response.contentType = "text/html";
$.response.setBody( "Hello, World !");
Note
By default, saving the file automatically commits the saved version of the file to the repository.
The application descriptors are mandatory and describe the framework in which an SAP HANA XS application
runs. The .xsapp file indicates the root point in the package hierarchy where content is to be served to client
requests; the .xsaccess file defines who has access to the exposed content and how.
Note
By default, the project-creation Wizard creates the application descriptors automatically. If they are not
present, you will see a 404 error message in the Web Browser when you call the XS JavaScript service. In
this case, you will need to create the application descriptors manually. See the SAP HANA Developer Guide
for SAP HANA Studio for instructions.
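For illustration, a minimal .xsaccess file that exposes the package content could look as follows (the .xsapp file can simply contain an empty JSON object, {}):

```json
{
  "exposed": true
}
```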
7. Open the context menu for the new files (or the folder/package containing the files) and select Team
Activate All . The activate operation publishes your work and creates the corresponding catalog objects; you
can now test it.
Context
Check if your application is working and if the Hello, World! message is displayed.
Procedure
In the SAP HANA Development perspective in the Eclipse IDE, open the context menu of the
MyFirstSourceFile.xsjs file and choose Run As 1 XS Service .
Note
You might need to enter the credentials of the database user you created in this tutorial again.
Note
If you have used an SAP HANA XS database for creating your SAP HANA XS application, you can also launch
your application from the SAP Cloud Platform cockpit by choosing the application URL after navigating to
Applications HANA XS Applications . For more information, see Launch SAP HANA XS Applications [page
1239].
Hello, World !
Context
To extract data from the database, you use your JavaScript code to open a connection to the database and then
prepare and run an SQL statement. The results are added to the Hello, World! response. You use the following
SQL statement to extract data from the database:

select * from DUMMY

The SQL statement returns one row with one field called DUMMY, whose value is X.
Procedure
1. In the Project Explorer view in the SAP HANA Development perspective in Eclipse, open the
MyFirstSourceFile.xsjs file in the embedded JavaScript editor.
2. In the MyFirstSourceFile.xsjs file, replace your existing code with the following code:
$.response.contentType = "text/html";
var output = "Hello, World !";
var conn = $.db.getConnection();
var pstmt = conn.prepareStatement( "select * from DUMMY" );
var rs = pstmt.executeQuery();
if (!rs.next()) {
$.response.setBody( "Failed to retrieve data" );
$.response.status = $.net.http.INTERNAL_SERVER_ERROR;
} else {
output = output + "This is the response from my SQL: " + rs.getString(1);
}
rs.close();
pstmt.close();
conn.close();
$.response.setBody(output);
4. Open the context menu of the MyFirstSourceFile.xsjs file and choose Team Activate All .
Context
Check if your application is retrieving data from your SAP HANA database.
Procedure
In the SAP HANA Development perspective in the Eclipse IDE, open the context menu of the
MyFirstSourceFile.xsjs file and choose Run as XS Service .
Note
If you have used an SAP HANA XS database for creating your SAP HANA XS application, you can also launch
your application from the SAP Cloud Platform cockpit by choosing the application URL after navigating to
Applications HANA XS Applications . For more information, see Launch SAP HANA XS Applications [page
1239].
Results
You can enable the SAP HANA Interactive Education (SHINE) demo application for a new or existing SAP HANA
tenant database in your trial account.
Context
SAP HANA Interactive Education (SHINE) demonstrates how to build native SAP HANA applications. The demo
application comes with sample data and design-time developer objects for the application's database tables, data
views, stored procedures, OData, and user interface. For more information, see the SAP HANA Interactive
Education (SHINE) documentation.
By default, SHINE is available for all SAP HANA tenant databases in trial accounts in the Neo environment.
1. Log in to the SAP Cloud Platform cockpit and navigate to a subaccount. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
Restriction
You can enable SHINE only in your trial account.
2. In the navigation area, choose SAP HANA / SAP ASE Databases & Schemas .
3. To enable SHINE for an SAP HANA tenant database, you must first create a SHINE user. If you are enabling
SHINE for a new SAP HANA tenant database, a SHINE user can be automatically created during the database
creation. If you are enabling SHINE for an existing SAP HANA tenant database, you must manually create the
SHINE user.
Enable SHINE for an existing SAP HANA tenant database:
1. From the list of all databases and schemas, choose the SAP HANA tenant database for which you want to enable SHINE.
2. In the overview in the lower part of the screen, open the SAP HANA cockpit link under Administration Tools.
3. In the Enter Username field, enter SYSTEM, then enter the password you determined for the SYSTEM user.
The first time you log in to the SAP HANA cockpit, you are informed that you don't have the roles that you need to open it.
4. Choose OK. The required roles are assigned to you automatically.
5. Choose Continue.
You are now logged in to the SAP HANA cockpit.
6. Choose Manage Roles and Users.
7. To create database users and assign them the required roles, expand the Security node.
8. Open the context menu for the Users node and choose New User.
9. On the User tab, provide a name for the new SHINE user.
Note
The user name can contain only uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), numbers
('0' - '9'), and underscores ('_').
Note
The password must contain at least one uppercase and one lowercase letter ('a' - 'z', 'A' -
'Z') and one number ('0' - '9'). It can also contain special characters (except ", ' and \).
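The SHINE password rule in the note above (at least one lowercase letter, one uppercase letter, and one number; special characters allowed except ", ' and \) can also be expressed as a single regular expression. This check is illustrative only and is not part of SHINE or the platform:

```java
public class ShinePasswordRuleDemo {

    // Checks the rule stated above: at least one lowercase letter, one
    // uppercase letter, and one digit; any other characters are allowed
    // except for ", ' and \. (Illustrative only.)
    public static boolean matchesRule(String password) {
        return password.matches("(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])[^\"'\\\\]+");
    }
}
```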
A login screen for the SHINE demo application is shown in a new browser window.
4. Enter the credentials of the SHINE user you created and choose Login.
Note
The first time you log in to the SHINE demo application, you are prompted to change your initial password.
Results
You see the SHINE demo application for your SAP HANA tenant database. Consult the SAP HANA Interactive
Education (SHINE) documentation for detailed information about using the application.
You can open your SAP HANA XS applications in a Web browser directly from the cockpit.
Context
Note
This feature is only available for SAP HANA XS applications in single container SAP HANA systems. For SAP
HANA XS applications in SAP HANA tenant database systems, use SAP Cloud Platform Web IDE or SAP HANA
cockpit to manage your applications.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
Note
If an HTTP status 404 (not found) error is shown, bear in mind that the cockpit displays only the root of an
application’s URL path. This means that you might have to do one of the following:
○ Add the application name to the URL address in the browser, for example, hello.xsjs.
○ Use an index.html file, which is the default setting for the file displayed when the package is accessed
without specifying a file name in the URL.
○ Override the above default setting by specifying the default_file keyword in the .xsaccess file, for
example:
{
"exposed" : true,
"default_file": "hello.xsjs"
}
Related Information
Use SAP HANA single-container database systems designed for developing with SAP HANA in a productive
environment.
Prerequisites
An SAP HANA XS database system is deployed in a subaccount in your enterprise account. For more information,
see Install Database Systems [page 721].
Note
To find out the latest SAP HANA revision supported by SAP Cloud Platform in the Neo environment, see What's
New for SAP HANA Service.
Before going live with an application for which a significant number of users and/or significant load is expected,
you should do a performance load test. This is best practice in the industry and we strongly recommend it for
HANA XS applications.
SAP Cloud Platform creates four users that it requires to manage the database: SYSTEM, BKPMON, CERTADM,
and PSADBA. These users are reserved for use by SAP Cloud Platform. For more information, see Create a
Database Administrator User [page 1245].
Caution
Do not delete or deactivate these users or change their passwords.
Each SAP HANA XS database system has a technical database user NEO_<guid>, which is created automatically
when the database system is assigned to a subaccount. A technical database user is not the same as a normal
database user and is provided purely as a mechanism for enabling schema access.
Caution
Do not delete or change the technical database user in any way (password, roles, permissions, and so on).
Features
Feature | Description
Connectivity destinations | See: Connectivity for SAP HANA XS (Enterprise Version) [page 240]; Maintaining HTTP Destinations
Monitoring | See: Configure Availability Checks for SAP HANA XS Applications from the Cockpit [page 1248]
Launch SAP HANA XS applications | See: Launch SAP HANA XS Applications [page 1239]
Related Information
Note
The support for database schemas on shared SAP HANA databases in trial accounts has ended. We
recommend that you create an SAP HANA tenant database on a shared SAP HANA tenant database system.
SAP Cloud Platform supports the following Web-based tools: SAP HANA Web-based Development Workbench,
SAP HANA Cockpit, and SAP HANA XS Administration Tool.
Prerequisites
● You have a database user. See Creating Database Users [page 1244].
● Your database user is assigned the roles required for the relevant tool. See Roles Required for Web-based Tools
[page 1247].
Context
You can access the SAP HANA Web-based tools using the Cockpit or the tool URLs. The following table
summarizes what each supported tool does, and how to access it.
SAP HANA Web-based Development Workbench
Description: Includes an all-purpose editor tool that enables you to maintain and run design-time objects in the SAP HANA repository. It does not support modeling activities.
Access: Development Tools section: SAP HANA Web-based Development Workbench
URL: https://<database instance><subaccount>.<host>/sap/hana/xs/ide/

SAP HANA Cockpit
Description: Provides you with a single point of access to a range of Web-based applications for the online administration of SAP HANA. For more information, see the SAP HANA Administration Guide.
Note: It is not possible to use the SAP HANA database lifecycle manager (HDBLCM) with the cockpit.
Access: Administration Tools section: SAP HANA Cockpit
URL: https://<database instance><subaccount>.<host>/sap/hana/xs/admin/cockpit

SAP HANA XS Administration Tool
Description: Allows you, for example, to configure security options and HTTP destinations. For more information, see the SAP HANA Administration Guide.
Access: Administration Tools section: SAP HANA XS Administration
URL: https://<database instance><subaccount>.<host>/sap/hana/xs/admin/
Remember
When using the tools, log on with your database user (not your SAP Cloud Platform user). If this is your initial
logon, you will be prompted to change your password. You are responsible for choosing a strong password and
keeping it secure.
Related Information
Use the database user feature in the SAP Cloud Platform cockpit to create a database administration user for SAP
HANA XS databases, and set up database users in SAP HANA for the members of your development team.
To create database users for SAP HANA XS databases, perform the following steps:
Related Information
As a subaccount administrator, you can use the database user feature provided in the cockpit to create your own
database user for your SAP HANA database.
Context
SAP Cloud Platform creates four users that it requires to manage the database: SYSTEM, BKPMON, CERTADM,
and PSADBA. These users are reserved for use by SAP Cloud Platform.
Caution
Do not delete or deactivate these users or change their passwords.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
2. Choose SAP HANA / SAP ASE Databases and Schemas in the navigation area.
You see all databases that are available in the subaccount, along with their details, including the database type,
version, memory size, state, and the number of associated databases.
3. Select the relevant SAP HANA XS database.
4. In the Development Tools section, click Database User.
A message confirms that you do not yet have a database user.
5. Choose Create User.
Your user and initial password are displayed. Change the initial password when you first log on to an SAP
HANA system, for example the SAP HANA Web-based Development Workbench.
6. To log on to the SAP HANA Web-based Development Workbench and change your initial password now
(recommended), copy your initial password and then close the dialog box.
You do not have to change your initial password immediately. You can open the dialog box again later to display
both your database user and initial password. Since this poses a potential security risk, however, you are
strongly advised to change your password as soon as possible.
7. In the Development Tools section, click SAP HANA Web-based Development Workbench.
8. On the SAP HANA logon screen, enter your database user and initial password.
9. Change your password when prompted.
Caution
You are responsible for choosing a strong password and keeping it secure. If your user is blocked or if you've
forgotten the password of your user, another database administration user with USER_ADMIN privileges can
unlock your user.
Next Steps
● Tip
There may be some roles that you cannot assign to your own database user. In this case, we recommend
that you create a second database user (for example, ROLE_GRANTOR) and assign it the HCP_SYSTEM
role. Then log onto the SAP HANA system with that user and grant your database user the roles you require.
● In the SAP HANA system, you can now create database users for the members of your subaccount and assign
them the required developer roles.
● To use other SAP HANA tools such as the SAP HANA Cockpit or the SAP HANA XS Administration Tool, you
must first assign yourself the required roles. See Assign Roles Required for the SAP HANA XS
Administration Tool [page 1247].
Related Information
To work with the SAP HANA XS Administration Tool, assign the required roles to your database user.
Context
The initial set of roles of your database user also contains the sap.hana.xs.ide.roles::Developer role, allowing you to
work with the SAP HANA Web-based Development Workbench, but not the SAP HANA XS Administration tool.
Procedure
1. Open the SAP HANA database using one of the following options:
○ Use the Eclipse IDE and connect via the SAP HANA studio. For more information, see Connect to SAP
HANA Databases via the Eclipse IDE [page 818].
○ Use the SAP HANA Web-based Development Workbench. For more information, see Supported SAP
HANA Web-based Tools [page 1243].
2. In the Security section, assign the required set of roles to yourself. For more information, see Roles Required
for Web-based Tools [page 1247].
To use the SAP HANA Web-based tools, you require specific roles.
Role | Description
sap.hana.xs.ide.roles::EditorDeveloper (or its parent role sap.hana.xs.ide.roles::Developer) | Use the Editor component of the SAP HANA Web-based Development Workbench.
sap.hana.xs.admin.roles::TrustStoreViewer | Read-only access to the trust store, which contains the server's root certificate or the certificate of the certification authority that signed the server's certificate.
sap.hana.xs.admin.roles::TrustStoreAdministrator | Full access to the SAP HANA XS application trust store to manage the certificates required to start SAP HANA XS applications.
Related Information
In the cockpit, you can configure availability checks for the SAP HANA XS applications running on your productive
SAP HANA database system.
Prerequisites
● The manageMonitoringConfiguration scope is assigned to the used platform role for the subaccount. For more
information, see Platform Scopes [page 1676].
● You have deployed and started an SAP HANA XS application in your subaccount.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. In the cockpit, choose Applications > HANA XS Applications in the navigation area of the subaccount.
3. Select an application from the list and in the Application Details panel choose the Create Check button.
4. In the dialog that appears, select the URL you want to monitor from the dropdown list and fill in values for
warning and critical thresholds if you want them to be different from the default ones. Choose Save.
Your availability check is created. You can view your application's latest HTTP response code and response
time, as well as a status icon showing whether your application is up or down. If you want to receive alerts when
your application is down, you need to configure alert recipients from the console client. For more information,
see the Subscribe recipients to notification alerts step in Configure Availability Checks for SAP HANA XS
Applications from the Console Client [page 1249].
In the console client, you can configure an availability check for your SAP HANA XS application and subscribe
recipients to receive alert e-mail notifications when it is down or responds slowly.
Prerequisites
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create the availability check.
Execute:
○ Replace "mysubaccount", "myhana:myhanaxsapp" and "myuser" with the names of your subaccount,
productive SAP HANA database name and application, and user respectively.
○ The availability URL (/heartbeat.xsjs in this case) is not provided by default by the platform. Replace it
with a suitable URL that is already exposed by your SAP HANA XS application, or create one. Keep in mind
the limitations for availability URLs.
{
"exposed": true,
"authentication": null,
"authorization": null
}
○ The check will trigger warnings "-W 4" if the response time is above 4 seconds and critical alerts "-C 6" if
the response time is above 6 seconds or the application is not available.
○ Use the respective host for your subaccount type.
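For illustration, such an availability URL can be backed by a trivial handler. The following is a hypothetical sketch of a heartbeat.xsjs file (the handler body is an assumption, not taken from this guide; the global $ object is normally provided by the SAP HANA XS runtime and is stubbed here only so the sketch can run standalone):

```javascript
// Hypothetical heartbeat.xsjs sketch. In the XS runtime the global `$` object
// is provided by the platform; the stub below exists only so this sketch can
// run outside SAP HANA.
var $ = (typeof $ !== "undefined") ? $ : {
  response: {
    status: 0,
    contentType: "",
    body: "",
    setBody: function (b) { this.body = b; }
  }
};

// The availability check only needs a cheap, fast 2xx response, so the
// handler returns a small plain-text body without touching the database.
$.response.contentType = "text/plain";
$.response.setBody("OK");
$.response.status = 200;
```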
3. Subscribe recipients to notification alerts.
Execute:
○ Replace "mysubaccount", "myhana" and "myuser" with the names of your subaccount, productive SAP
HANA database name, and user respectively.
○ Replace "alert-recipients@example.com" with the e-mail addresses that you want to receive alerts.
Separate e-mail addresses with commas. We recommend that you use distribution lists rather than
personal e-mail addresses. Keep in mind that you remain responsible for handling personal e-mail
addresses in accordance with applicable data privacy regulations.
○ Use the respective host for your subaccount type.
Note
Setting an alert recipient for an application triggers the sending of all alerts for this application to the
configured e-mail address(es). Once the recipients are subscribed, you do not need to subscribe them again after
every new check you configure. You can also set the recipients at subaccount level by omitting the -b
parameter, so that they receive alerts for all applications and for all the metrics you are monitoring.
Related Information
Configure Availability Checks for SAP HANA XS Applications from the Cockpit [page 1248]
Regions [page 21]
Availability Checks Commands
list-availability-check [page 1908]
In the cockpit, you can view the current metrics of a selected database system to get information about its health
state. You can also view the metrics history of a productive database to examine the performance trends of your
database over different intervals of time or investigate the reasons that have led to problems with it. You can view
the metrics for all types of databases.
Metric | Value
CPU Load | The percent of the CPU that is used on average over the last minute.
Disk I/O | The number of bytes per second that are currently being read from or written to the disk.
Network Ping | The percent of packets that are lost on the way to the database host.
OS Memory Usage | The percent of the operating system memory that is currently being used.
Used Disc Space | The percent of the local discs of the operating system that is currently being used.
Prerequisites
The readMonitoringData scope is assigned to the used platform role for the subaccount. For more information, see
Platform Scopes [page 1676].
Procedure
Navigate to a subaccount in the cockpit. For more information, see Navigate to Global Accounts and Subaccounts [page 964].
The Current Metrics panel shows the current state of the metrics for the selected database system. When a
threshold is reached, the metric health status changes to warning or critical.
The Metrics History panel shows the metrics history of your database.
When you open the checks history, you can view graphic representations of the different checks, and zoom in
when you click and drag horizontally or vertically to get further details. If you zoom in a graphic horizontally, all
other graphics also zoom in to the same level of detail. Press Shift and drag to pan a graphic. Zoom out to
the initial size with a double-click.
You can select different time intervals for viewing the checks. Depending on the selected interval, data is
aggregated as follows:
○ Last 12 or 24 hours: data is collected every minute.
○ Last 7 days: data is aggregated from the average values for each 10 minutes.
○ Last 30 days: data is aggregated from the average values for each hour.
You can also select a custom time interval when viewing the history of checks. If you select an interval
during which the application isn't running, the graphics won't contain any data.
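The aggregation described above can be pictured with a small sketch (illustrative only, not platform code): per-minute samples are rolled up into buckets of a fixed size, and each bucket is replaced by its average.

```javascript
// Illustrative only: rolls per-minute samples up into fixed-size buckets,
// replacing each bucket with its average value, as described above.
function aggregate(samples, bucketSize) {
  const buckets = [];
  for (let i = 0; i < samples.length; i += bucketSize) {
    const slice = samples.slice(i, i + bucketSize);
    buckets.push(slice.reduce((sum, v) => sum + v, 0) / slice.length);
  }
  return buckets;
}

// Six per-minute samples aggregated into 2-minute averages:
// aggregate([1, 2, 3, 4, 5, 6], 2) → [1.5, 3.5, 5.5]
```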
Related Information
You can debug SAP HANA server-side JavaScript with the SAP HANA Tools plugin for Eclipse only as of release 7.4.
If you are working with lower plugin versions, use the SAP HANA Web-based Development Workbench to perform
your debugging tasks.
Prerequisites
Procedure
1. In the SAP Cloud Platform cockpit, navigate to a subaccount. For more information, see Navigate to Global
Accounts and Subaccounts [page 964].
Note
We recommend that you use the Google Chrome browser.
3. In the HANA XS Applications table, select the application to display its details.
4. In the Application Details section, click Open in Web-based Development Workbench. Note that the SAP HANA
Web-based Development Workbench can also be opened directly at the following URL: https://<database
instance><subaccount>.<host>/sap/hana/xs/ide/
5. Depending on whether you want to debug a .xsjs file or a more complex scenario (set a breakpoint in
a .xsjs file and run another file), do the following:
○ .xsjs file:
1. Set the breakpoints and then choose the Run on server (F8) button.
○ Complex scenario:
1. Set the breakpoint in the .xsjs file you want to debug.
2. Open a new tab in the browser and then open the other file on this tab by entering its URL (https://
<database instance><subaccount>.<host>/<package>/<file>).
Note
If you synchronously call the .xsjs file in which you have set a breakpoint and then open the other file
in the SAP HANA Web-based Development Workbench and execute it by choosing the Run on server
(F8) button, you will block your debugging session. You will then need to terminate the session by
closing the SAP HANA Web-based Development Workbench tab.
Note
If you leave your debugging session idle for some time once you have started debugging, your session will
time out. An error in the WebSocket connection to the backend will be reported and your WebSocket
connection for debugging will be closed. If this occurs, reopen the SAP HANA Web-based Development
Workbench and start another debugging session.
Valid for SAP HANA instances running SP8 or lower only. Use this procedure to configure your HANA XS
applications to use Security Assertion Markup Language (SAML) 2.0 authentication. This is necessary if you want
to implement identity federation with your corporate identity providers.
Prerequisites
● You have the SAP HANA Tools installed in your Eclipse IDE. See Install SAP HANA Tools for Eclipse [page
1224].
● You have a user on the productive landscape of SAP Cloud Platform. See Purchase a Customer Account [page
935].
● You have a SAP HANA database user on the productive landscape of SAP Cloud Platform. See Create a
Database Administrator User [page 1245].
● You have a corporate identity provider (IdP) configured with its own trust settings (key pair and certificates).
See the identity provider vendor’s documentation for more information.
Note
To establish successful trust with SAP HANA XS Engine on SAP Cloud Platform, the identity provider must
have the following features:
○ Supports unsigned SAML requests
○ Sends its signing certificate when sending a SAML response
● You have a SAP HANA XS engine configured with its key pair and certificates. See the SAP HANA
Administration Guide.
Context
Restriction
This procedure is valid for productive HANA instances running SAP HANA SP8 or lower. For SAP HANA SP9
instances, see the Configure SSO with SAML Authentication for SAP HANA XS Applications section in the SAP
HANA Administration Guide.
Use this procedure to configure your HANA XS applications to use Security Assertion Markup Language (SAML)
2.0 authentication. This is necessary if you want to implement identity federation with your corporate identity
providers. See Authorization and Trust Management in the Neo Environment [page 2116].
Procedure
1. Download the identity provider metadata. See the identity provider vendor’s documentation for more
information.
2. Store the IdP signing certificate in a valid PEM or DER file, enclosing the certificate content in -----BEGIN
CERTIFICATE----- and -----END CERTIFICATE-----.
3. Upload the PEM or DER file to SAP Cloud Platform using the upload-hanaxs-certificates command.
Tip
If you get an error message while uploading the certificates, try to fix the problem using the reconcile-hanaxs-certificates command. See reconcile-hanaxs-certificates [page 1946].
4. Restart the SAP HANA XS service so the upload takes effect. This is done using the restart-hana console
command.
Procedure
○ sap.hana.xs.admin.roles::HTTPDestAdministrator
○ sap.hana.xs.admin.roles::HTTPDestViewer
○ sap.hana.xs.admin.roles::RuntimeConfAdministrator
○ sap.hana.xs.admin.roles::RuntimeConfViewer
CREATE SAML PROVIDER <idp name> WITH SUBJECT '<certificate subject>' ISSUER
'<certificate issuer>' ENABLE USER CREATION;
Tip
Get the certificate subject and issuer from the IdP certificate. If you don’t have direct access to the
certificate, use a proper file viewer tool to view the certificate contents from the PEM or DER file.
Note
With this statement, you also enable the automatic user creation of a corresponding SAP HANA
database user at first login. Otherwise, you have to create the user manually if one does not exist. See the
SAP HANA Administration Guide.
b. To create a destination, replace <uppercase idp name> with a short name for this IdP in uppercase.
Note
You need to configure all four endpoints, executing all four statements.
5. Open the SAP HANA XS Administration tool (see SAP HANA Administration Guide). For the required
applications, configure SAML authentication to be using this identity provider:
a. Select the application.
b. Go to the SAML section.
c. Choose Identity Provider and set this identity provider as value.
Procedure
1. Download the SAP HANA service provider metadata from the following URL:
https://<SAP HANA url>/sap/hana/xs/saml/info.xscfunc
Tip
You can get the SAP HANA URL from the HANA XS Applications section in the cockpit.
2. Import the SAP HANA service provider metadata in the identity provider. See the identity provider vendor’s
documentation for more information.
4. Test the configuration.
Open the required application and check if SAML authentication with the required identity provider works. You
should be redirected to the identity provider and prompted to log in. After successful login, you are shown the
application.
To be able to call SAP Cloud Platform services from SAP HANA XS applications, you need to assign a predefined
trust store to the HTTP destination that defines the connection details for a specific service. The trust store
contains the certificate required to authenticate the calling application.
Prerequisites
In the SAP HANA repository, you have created the HTTP destination (.xshttpdest file) to the service to be
called. The file must have the .xshttpdest extension and be located in the same package as the application that
uses it or in one of the application's subpackages.
Procedure
Related Information
SAP Cloud Platform enables you to easily develop and run lightweight HTML5 applications in a cloud environment.
HTML5 applications on SAP Cloud Platform consist of static resources and can connect to any existing on-premise
or on-demand REST services. Compared to a Java application, there is no need to start a dedicated process for an
HTML5 application. Instead, the static resources and REST calls are served using a shared dispatcher service
provided by SAP Cloud Platform.
The static content of the HTML5 applications is stored and versioned in Git repositories. Each HTML5 application
has its own Git repository assigned. For offline editing, developers can interact directly with the Git service
using a Git client of their choice, such as EGit or a native Git implementation. A Git repository is created
automatically when a new HTML5 application is created.
Lifecycle operations, for example, creating new HTML5 applications, creating new versions, activating, starting and
stopping or testing applications, can be performed using the SAP Cloud Platform cockpit. As the static resources
are stored in a versioned Git repository, not only the latest version of an application can be tested, but the
complete version history of the application is always available for testing. The version that is delivered to the end
users of that application is called the "active version". Each application can have only one active version.
Related Information
For more information about building applications in SAP Web IDE, see the SAP Web IDE documentation. There, you
will also find information on building your project first and then pushing your app to the cockpit.
Related Information
This tutorial illustrates how to build a simple HTML5 application using SAP Web IDE.
Prerequisites
Context
For each new application a new Git repository is created automatically. To view detailed information on the Git
repository, including the repository URL and the latest commits, choose Applications > HTML5 Applications in
the navigation area and then Versioning.
Note
To create the HTML5 application in more than one region, create the application in each region separately and
copy the content to the new Git repository.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
If you have already created applications using this subaccount, the list of HTML5 applications is displayed.
3. To create a new HTML5 application, choose New Application and enter an application name.
Note
Adhere to the naming convention for application names:
○ The name must contain no more than 30 characters.
○ The name must contain only lowercase alphanumeric characters.
○ The name must start with a letter.
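The naming convention above can be summarized in a single pattern. The helper below is illustrative only and is not a cockpit API:

```javascript
// Illustrative only, not a cockpit API: encodes the naming convention above.
// ^[a-z]          – must start with a lowercase letter
// [a-z0-9]{0,29}$ – lowercase alphanumeric characters only, 30 characters at most
function isValidAppName(name) {
  return /^[a-z][a-z0-9]{0,29}$/.test(name);
}
```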
4. Choose Save.
5. Clone the repository to your development environment.
a. To start SAP Web IDE and automatically clone the repository of your app, choose Edit Online at the
end of the table row of your application.
b. On the Clone Repository screen, if prompted enter your user and password (SCN user and SCN
password), and choose Clone.
Results
Related Information
A project is needed to create files and to make them available in the cockpit.
Procedure
1. In SAP Web IDE, choose Development (</>), and then select the project of the application you created in the
cockpit.
2. To create a project and to clone your app to the development environment, right-click the project, and choose
New > Project from Template.
3. Choose the SAPUI5 Application button, and choose Next.
4. In the Project Name field, enter a name for your project, and choose Next.
Note
The project name has to be unique within the workspace.
Field Entry
6. Choose Finish.
Task overview: Creating a Hello World Application Using SAP Web IDE [page 1261]
SAP Web IDE already created an HTML page for your project. You now adapt this page.
Procedure
1. In SAP Web IDE, expand the project node in the navigation tree and open the HelloWorld.view.js using a
double-click.
2. In the HelloWorld.view.js view, replace Title in the title: "{i18n>title}" line with the title of your
application Hello World.
4. To test your Hello World application, select the index.html file and choose Run.
Task overview: Creating a Hello World Application Using SAP Web IDE [page 1261]
Next task: Deploy Your App to SAP Cloud Platform [page 1265]
With this step you create a new active version of your app that is started on SAP Cloud Platform.
Procedure
1. In SAP Web IDE, select the project node in the navigation tree.
2. To deploy the project, right-click it and choose Deploy > Deploy to SAP Cloud Platform.
3. On the Login to SAP Cloud Platform screen, enter your password and choose Login.
4. On the Deploy Application to SAP Cloud Platform screen, increment the version number and choose Deploy.
Note
If you leave the Activate option checked, the new version is activated directly.
Task overview: Creating a Hello World Application Using SAP Web IDE [page 1261]
The developer’s guide introduces the development environment for HTML5 applications, describes how to
create applications, and supplies details on the descriptor file that specifies how dedicated application URLs are
handled by the platform.
Related Information
The development workflow is initiated from the SAP Cloud Platform cockpit.
The cockpit provides access to all lifecycle operations for HTML5 applications, for example, creating new
applications, creating new versions, activating a version, and starting or stopping an application.
The SAP Cloud Platform Git service stores the sources of an HTML5 application in a Git repository.
For each HTML5 application there is one Git repository. You can use any Git client to connect to the Git service. On
your development machine you may, for example, use Native Git or Eclipse/EGit. The SAP Web IDE has a built-in
Git client.
Git URL
With this URL, you can access the Git repository using any Git client.
The URL of the Git repository is displayed under Source Location on the detail page of the repository. You can also
view this URL together with other detailed information on the Git repository, including the repository URL and the
latest commits, by choosing HTML5 Applications in the navigation area and then Versioning.
Authentication
Access to the Git service is only granted to authenticated users. Any user who is a member of the subaccount that
contains the HTML5 application and who has the Administrator, Developer, or Support User role has access to the
Git repository. When sending requests, users have to authenticate themselves with the user name and
password of their identity provider.
Permissions
The permitted actions depend on the subaccount member role of the user:
Any authenticated user with the Administrator, Developer, or Support User role can read the Git repository. They
have permission to:
Write access is granted to users with the Administrator or Developer role. They have permission to:
Related Information
Context
For each new application a new Git repository is created automatically. To view detailed information on the Git
repository, including the repository URL and the latest commits, choose Applications > HTML5 Applications in
the navigation area and then Versioning.
Note
To create the HTML5 application in more than one region, create the application in each region separately and
copy the content to the new Git repository.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
If you have already created applications using this subaccount, the list of HTML5 applications is displayed.
3. To create a new HTML5 application, choose New Application and enter an application name.
4. Choose Save.
5. Clone the repository to your development environment.
a. To start SAP Web IDE and automatically clone the repository of your app, choose Edit Online at the
end of the table row of your application.
b. On the Clone Repository screen, if prompted enter your user and password (SCN user and SCN
password), and choose Clone.
Results
Related Information
Context
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
Results
You can now activate this version to make the application available to the end users.
Related Information
As end users can only access the active version of an application, you must create and activate a version of your
application.
Context
The developer can activate a single version of an application to make it available to end users.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
Results
You can now distribute the URL of your application to the end users.
Using the application descriptor file you can configure the behavior of your HTML5 application.
This descriptor file is named neo-app.json. The file must be created in the root folder of the HTML5 application
repository and must have a valid JSON format.
With the descriptor file you can set the options listed under Related Links.
The application descriptor file must follow the following format. All settings are optional.
{
"authenticationMethod": "saml"|"none",
"welcomeFile": "<path to welcome file>",
"logoutPage": "<path to logout page>",
"sendWelcomeFileRedirect": true|false,
"routes": [
{
"path": "<application path to be mapped>",
"target": {
"type": "destination | service | application",
"name": "<name of the destination> | <name of the service> | <name
of the application or subscription>",
"entryPath": "<path prepended to the request path>",
"version": "<version to be referenced. Default is active version.>"
},
"description": "<description>"
}
],
"securityConstraints": [
{
"permission": "<permission name>",
"description": "<permission description>",
"protectedPaths": [
"<path to be secured>",
...
],
"excludedPaths": [
"<path to be excluded>",
...
]
}
],
"cacheControl": [
{
"path": "<optional path of resources to be cached>",
"directive": "none | public | private",
"maxAge": <lifetime in seconds>
}
],
"headerWhiteList": [
"<header1>",
"<header2>",
...
]
}
Related Information
3.2.6.2.5.1 Authentication
Authentication is the process of establishing and verifying the identity of a user as a prerequisite for accessing an
application.
By default an HTML5 application is protected with SAML2 authentication, which authenticates the user against the
configured IdP. For more information, see Application Identity Provider [page 2161]. For public applications,
authentication can be switched off using the following syntax:
Example
An example configuration that switches off authentication looks like this:
"authenticationMethod": "none"
Note
Even if authentication is disabled, authentication is still required for accessing inactive application versions.
To protect only parts of your application, set the authenticationMethod to "none" and define a security
constraint for the paths you want to protect. If you want to enforce only authentication, but no additional
authorization, define a security constraint without a permission (see Authorization [page 1272]).
After 20 minutes of inactivity user sessions are invalidated. If the user tries to access an invalidated session, SAP
Cloud Platform returns a logon page, where the user must log on again. If you are using SAML as a logon method,
you cannot rely on the response code to find out whether the session has expired because it is either 200 or 302.
To check whether the response requires a new logon, get the com.sap.cloud.security.login HTTP header
and reload the page. For example:
jQuery(document).ajaxComplete(function(e, jqXHR) {
  // The header is only present when a new logon is required
  if (jqXHR.getResponseHeader("com.sap.cloud.security.login")) {
    alert("Session is expired, the page will be reloaded.");
    window.location.reload();
  }
});
3.2.6.2.5.2 Authorization
To enforce authorization for an HTML5 application, permissions can be added to application paths.
In the cockpit, you can create custom roles and assign them to the defined permissions. If a user accesses an
application path that starts with a path defined for a permission, the system checks if the current user is a
member of the assigned role. If no role is assigned to a defined permission only subaccount members with
developer permission or administrator permission have access to the protected resource.
Permissions are only effective for the active application version. To protect non-active application versions, the
default permission NonActiveApplicationPermission is defined by the system for every HTML5 application.
This default permission must not be defined in the neo-app.json file but is available automatically for each
HTML5 application.
If only authentication is required for a path, but no authorization, a security constraint can be added without a
permission.
A security constraint applies to the directory and its sub-directories defined in the protectedPaths field, except
for paths that are explicitly excluded in the excludedPaths field. The excludedPaths field supports pattern
matching. If a specified path ends with a slash character (/), all resources in the given directory and its
sub-directories are excluded. You can also specify the path to be excluded using wildcards; for example, the path
**.html excludes all resources ending with .html from the security constraint.
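As an illustrative sketch, the matching rules above could be modeled like this. This is not the platform's actual implementation; matchesPattern and isProtected are hypothetical helpers used only to make the rules concrete:

```javascript
// Illustrative sketch of the security-constraint matching rules described
// above -- not the platform's actual implementation.
function matchesPattern(pattern, path) {
  if (pattern.startsWith("**")) {
    // wildcard pattern, e.g. "**.html" matches every path ending in ".html"
    return path.endsWith(pattern.slice(2));
  }
  if (pattern.endsWith("/**")) {
    // e.g. "/logout/**" matches everything below /logout/
    return path.startsWith(pattern.slice(0, -2));
  }
  if (pattern.endsWith("/")) {
    // a trailing slash covers the directory and its sub-directories
    return path.startsWith(pattern);
  }
  return path === pattern || path.startsWith(pattern + "/");
}

function isProtected(constraint, path) {
  const hitsProtected = constraint.protectedPaths.some(p => matchesPattern(p, path));
  const hitsExcluded = (constraint.excludedPaths || []).some(p => matchesPattern(p, path));
  return hitsProtected && !hitsExcluded;
}
```

Under this model, the accessUserData example below protects every path except those under /logout/.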
To define a security constraint, use the following format in the neo-app.json file:
...
"securityConstraints": [
{
"permission": "<permission name>",
"description": "<permission description>",
"protectedPaths": [
"<path to be secured>"
],
"excludedPaths": [
"<path to be excluded>",
...
]
}
]
Example
An example configuration that restricts a complete application to the accessUserData permission, with the
exception of all paths starting with "/logout", looks like this:
...
"securityConstraints": [
{
"permission": "accessUserData",
"description": "Access User Data",
"protectedPaths": [
"/"
],
"excludedPaths": [
"/logout/**"
]
}
]
...
Related Information
By default end users can access the application descriptor file of an HTML5 application.
To do so, they enter the URL of the application followed by the filename of the application descriptor in the
browser.
Tip
For security reasons we recommend that you use a permission to protect the application descriptor from being
accessed by end users.
You can define a permission for the application descriptor by adding the following security constraint to the
application descriptor:
...
"securityConstraints": [
{
"permission": "AccessApplicationDescriptor",
"description": "Access application descriptor",
"protectedPaths": [
"/neo-app.json"
]
}
]
...
To access SAPUI5 resources in your HTML5 application, configure the SAPUI5 service routing in the application
descriptor file.
To configure the SAPUI5 service routing for your application, map a URL path that your application uses to access
SAPUI5 resources to the SAPUI5 service:
...
"routes": [
{
"path": "<application path to be mapped>",
"target": {
"type": "service",
"name": "sapui5",
"version": "<version>",
"entryPath": "/resources"
},
"description": "<description>"
}
]
...
Example
This configuration example maps all paths starting with /resources to the /resources path of the SAPUI5
library.
...
"routes": [
{
"path": "/resources",
"target": {
"type": "service",
"name": "sapui5",
"entryPath": "/resources"
},
"description": "SAPUI5"
}
]
...
For more information about using SAPUI5 for your application, see SAPUI5: UI Development Toolkit for HTML5.
Example
This configuration example shows how to reference the SAPUI5 version 1.26.6 using the neo-app.json file.
...
"routes": [
{
"path": "/resources",
"target": {
"type": "service",
"name": "sapui5",
"version": "1.26.6",
"entryPath": "/resources"
},
"description": "SAPUI5"
}
]
...
Related Information
To connect your application to a REST service, configure routing to an HTTP destination in the application
descriptor file.
A route defines which requests to the application are forwarded to the destination. Routes are matched with the
path from a request. All requests with paths that start with the path from the route are forwarded to the
destination.
If you define multiple routes in the application descriptor file, the route for the first matching path is selected.
...
"routes": [
{
"path": "<application path to be forwarded>",
"target": {
"type": "destination",
"name": "<name of the destination>"
},
"description": "<description>"
}
]
...
Example
With this configuration, all requests with paths starting with /gateway are forwarded to the gateway
destination.
...
"routes": [
{
"path": "/gateway",
"target": {
"type": "destination",
"name": "gateway"
},
"description": "Gateway System"
}
]
...
The browser sends a request to your HTML5 application to the path /gateway/resource (1). This request is
forwarded by the HTML5 application to the service behind the destination gateway (2). The path is shortened
to /resource. The response returned by the service is then routed back through the HTML5 application so that
the browser receives the response (3).
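The first-match rule and the path shortening described above can be sketched as follows. This is an illustrative model only, and resolveRoute is a hypothetical helper, not platform code:

```javascript
// Illustrative sketch: select the first route whose path prefixes the
// request, strip that prefix, and prepend the optional entryPath.
function resolveRoute(routes, requestPath) {
  const route = routes.find(r => requestPath.startsWith(r.path));
  if (!route) {
    return null; // no route matches; the request is served by the app itself
  }
  const entryPath = route.target.entryPath || "";
  return {
    target: route.target.name,
    forwardedPath: entryPath + requestPath.slice(route.path.length)
  };
}
```

For the gateway example above, resolving /gateway/resource against the route with path /gateway yields the destination gateway and the shortened path /resource.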
Destination Properties
In addition to the application-specific setup in the application descriptor, you can configure the behavior of routes
at the destination level. For information on how to set destination properties, see You can enter additional
properties (step 9) [page 111].
Timeout Handling
A request to a REST service can time out when the network or backend is overloaded or unreachable. Different
timeouts apply for initially establishing the TCP connection (HTML5.ConnectionTimeoutInSeconds) and
reading a response to an HTTP request from the socket (HTML5.SocketReadTimeoutInSeconds). When a
timeout occurs, the HTML5 application returns a gateway timeout response (HTTP status code 504) to the
client.
While some long-running requests may require a higher socket timeout, we do not recommend that you
change the default values. Overly high timeouts may impact the overall performance of the application by blocking
other requests in the browser or blocking back-end resources.
Redirect Handling
By default all HTML5 applications follow HTTP redirects of REST services internally. This means whenever your
REST service responds with a 301, 302, 303, or 307 HTTP status code, a new request is issued to the redirect
target. Only the response to this second request reaches the browser of the end user. To change this behavior, set
the HTML5.HandleRedirects destination property to false. As a consequence, the 30X responses given above
are directly sent back without following the redirect.
We recommend that you set this property to false. This helps improve the performance of your HTML5
application because the browser stores redirects and thus avoids round trips. If you use relative links, the
automatic handling of redirects might break your HTML5 application on the browser side. However, certain service
types may not run with a value of false.
● Your application descriptor contains a route that forwards requests starting with the path /gateway, to the
destination named gateway as in the example above.
● The service redirects requests from /resource to the path ./servicePath/resource.
When the browser requests the path /gateway/resource (1), the HTML5 application forwards it to the path /
resource of the service (2). As the service responds with a redirect (3), the HTML5 application sends another
request to the new path /servicePath/resource (4). This second response contains the required resource
and is forwarded back to the browser (5).
For the same request to the path /gateway/resource (1), the HTML5 application again forwards the
request to the path /resource of the service (2). Now the redirect is directly forwarded back to the browser
(3). In this case it is the browser that sends another request to the path /gateway/servicePath/resource
(4), which the HTML5 application forwards to the service path /servicePath/resource (5). The requested
resource is then forwarded back to the browser (6).
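As an illustration, the destination properties discussed in this section could be set as key-value pairs on the destination. The values shown here are placeholders for the sketch, not recommended settings:

```
HTML5.ConnectionTimeoutInSeconds=10
HTML5.SocketReadTimeoutInSeconds=60
HTML5.HandleRedirects=false
```

Keep in mind the trade-offs described above when raising the timeouts or changing the redirect handling.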
Deprecated Properties
The following destination properties have been deprecated and replaced by new properties. If the new and the old
properties are both set, the new property overrules the old one.
Security Considerations
When accessing a REST service from an HTML5 application, a new connection is initiated by the HTML5
application to the URL that is defined in the HTTP destination.
To prevent security-relevant headers or cookies from being returned from the REST service to the client, only
whitelisted headers are returned. While some headers are whitelisted by default, additional headers can be
whitelisted in the application descriptor file. For more information about how to whitelist additional headers, see
Header Whitelisting [page 1285].
Cookies that are retrieved from a REST service response are stored by the HTML5 application in an HTTP session
that is bound to the client request. The cookies are not returned to the client. If a subsequent request is initiated to
the same REST service, the cookies are added to the request by the application. Only those cookies are added that
are valid for the request in the sense of correct domain and expiration date. When the client session is terminated,
all associated cookies are removed from the HTML5 application.
Related Information
To access resources from another HTML5 application or a subscription to an HTML5 application, you can map an
application path to the corresponding application or subscription.
If the given path matches a request path, the resource is loaded from the mapped application or subscription. This
feature may be used to separate re-usable resources in a dedicated application.
...
"routes": [
{
"path": "<application path to be mapped>",
"target": {
"type": "application",
"name": "<name of the application or subscription>"
"version": "<version to be referenced. Default is active version>",
},
"description": "<description>"
}
]
...
Example
This configuration example maps all paths starting with /icons to the active version of the application named
iconlibrary.
...
"routes": [
{
"path": "/icons",
"target": {
"type": "application",
"name": "iconlibrary"
},
"description": "Icon Library"
}
]
...
Related Information
The user API service provides an API to query the details of the user that is currently logged on to the HTML5
application.
If you use a corporate identity provider (IdP), some features of the API do not work as described here. The
corporate IdP requires you to configure a mapping from your IdP’s assertion attributes to the principal attributes
usable in SAP Cloud Platform. See Configure User Attribute Mappings [page 2168].
...
"routes": [
{
"path": "<application path to be forwarded>",
"target": {
"type": "service",
"name": "userapi"
}
}
]
...
The route defines which requests to the application are forwarded to the API. The route is matched with the path
from a request. All requests with paths that start with the path from the route are forwarded to the API.
Example
With the following configuration, all requests with paths starting with /services/userapi are forwarded to
the user API.
...
"routes": [
{
"path": "/services/userapi",
"target": {
"type": "service",
"name": "userapi"
}
}
]
...
The user API provides the following endpoints:
● /currentUser
● /attributes
The user API requires authentication. The user is logged on automatically even if the authenticationMethod
property is set to none in the neo-app.json file.
Calling the /currentUser endpoint returns a JSON object that provides the user ID and additional information of
the logged-on user. The table below describes the properties contained in the JSON object and specifies the
principal attribute used to compute this information.
The /currentUser endpoint maps a default set of attributes. To retrieve all attributes, use the /attributes
endpoint as described in User Attributes.
Example
A sample URL for the route defined above would look like this: /services/userapi/currentUser.
{
"name": "p12345678",
"firstName": "John",
"lastName": "Doe",
"email": "john@doeenterprise.com",
"displayName": "John Doe (p12345678)"
}
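A minimal client-side sketch that consumes this JSON object could look like this. The greet helper is hypothetical; in the browser, the object would first be fetched from the /services/userapi/currentUser route defined above:

```javascript
// Hypothetical helper: format a greeting from the /currentUser response.
function greet(user) {
  return "Hello, " + user.firstName + " " + user.lastName +
         " <" + user.email + ">";
}

// In a browser, the response would typically be retrieved first, e.g.:
// fetch("/services/userapi/currentUser")
//   .then(response => response.json())
//   .then(user => console.log(greet(user)));
```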
User Attributes
The /attributes endpoint returns the principal attributes of the current user as a JSON object. These attributes
are received as SAML assertion attributes when the user logs on. To make them visible, define a mapping within
the trust settings of the SAP Cloud Platform cockpit, see Configure User Attribute Mappings [page 2168].
Example
A sample URL for the route defined above would look like this: /services/userapi/attributes.
If the principal attributes firstname, lastname, companyname, and organization are present, an example
response may return the following user data:
{
"firstname": "John",
"lastname": "Doe",
"companyname": "Doe Enterprise",
"organization": "Customer sales and marketing"
}
For some endpoints, you can use query parameters to influence the output behavior of the endpoint. The following
table shows which parameters exist for the /attributes endpoint and how they impact the outputs.
multiValuesAsArray (Boolean, default: false): If set to true, multivalued attributes are formatted as JSON
arrays. If set to false, only the first value of the entire value range of the specific attribute is returned and
formatted as a simple string.
Note
If set to true for an attribute that is not multivalued, then the value of the attribute is formatted as a simple
string and not as a JSON array.
You can either display the default Welcome file or specify a different file as Welcome file.
If the application is accessed only with the domain name in the URL, that is without any additional path
information, then the index.html file that is located in the root folder of your repository is delivered by default. If
you want to deliver a different file, configure this file in the neo-app.json file using the welcomeFile parameter.
With the additional sendWelcomeFileRedirect parameter you specify whether a redirect is sent to the Welcome
file or whether the Welcome file is delivered without a redirect. If this option is set, then instead of serving the
Welcome file directly under /, the HTML5 application sends a redirect to the welcomeFile location. With that,
relative links in a Welcome file that is not located in the root directory work correctly.
To configure the Welcome file, add a JSON string with the following format to the neo-app.json file:
Example
An example configuration, which forwards requests without any path information to an index.html file in the /
resources folder would look like this:
"welcomeFile": "/resources/index.html",
"sendWelcomeFileRedirect": true
To trigger a logout of the logged-in user, you can configure a logout page in the application descriptor.
When executing a request to the configured logout page, the server triggers a logout. This results in a response
containing a logout request that is sent to the identity provider (IdP) to invalidate the user's session on the IdP.
After the user is logged out from the IdP, the configured logout page is called again. Now, the content of the logout
page is served. The logout page is always unprotected, independent of the authentication method of the
application and independent of additional security constraints. In case additional resources, for example, SAPUI5,
are referenced from the logout page, those resources have to be unprotected as well.
For information on how to configure certain paths as unprotected, see Authentication [page 1271] and
Authorization [page 1272].
Because non-active application versions always require authentication, a logout is only triggered for the active
application version. For non-active application versions the logout page is served without triggering a logout.
To configure a logout page for your application, use the following format in the neo-app.json file:
...
"logoutPage": "<path to logout page>"
...
Example
An example configuration of a logout page looks like this:
...
"logoutPage": "/logout.html"
...
To improve the performance of your application you can control the Cache-Control headers, which are returned
together with the static resource of your application.
You can configure caching for the complete application, for dedicated paths, or resources of the application. If the
path you specify ends with a slash character (/) all resources in the given directory and its sub-directories are
matched. You can also specify the path using wildcards, for example, the path **.html matches all resources
ending with .html. Only the first caching directive that matches an incoming request is applied. For example, the
path **.css, if listed first, hides more specific paths such as /resources/custom.css.
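The first-match behavior can be sketched like this. This is illustrative only; findDirective is a hypothetical helper, not platform code:

```javascript
// Illustrative first-match selection of a caching directive. A broad
// pattern listed first hides more specific entries below it.
function findDirective(cacheControl, path) {
  return cacheControl.find(entry =>
    !entry.path ||                          // no path: the entry matches everything
    (entry.path.startsWith("**")
      ? path.endsWith(entry.path.slice(2))  // wildcard, e.g. "**.css"
      : path.startsWith(entry.path))        // prefix match
  ) || null;
}
```

With entries [{path: "**.css", maxAge: 60}, {path: "/resources/custom.css", maxAge: 0}], a request for /resources/custom.css is served by the first entry, hiding the more specific one.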
With the directive property, you specify whether public proxies can cache the resources. The possible values for
the directive property are:
● public
The resource can be cached regardless of your response headers.
● private
Your resource is stored by end-user caches, for example, the browser's internal cache only.
● none
This is the default value; no additional directive is sent.
...
"cacheControl": [
{
"path": "<optional path of resources to be cached>",
"directive": "none | public | private",
"maxAge": <lifetime in seconds>
}
]
...
Example
An example configuration that caches all static resources for 24 hours looks like this:
...
"cacheControl": [
{
"maxAge": 86400
}
]
...
For security reasons not all HTTP headers are forwarded from the application to a backend or from the backend to
the application.
The following HTTP headers are forwarded automatically without any additional configuration because they are
part of the HTTP standard:
● Accept
● Accept-Charset
● Accept-Language
● Accept-Range
● Age
● Allow
● Authorization
● Cache-Control
● Content-Language
● Content-Location
● Content-Range
● Content-Type
● Date
Additionally the following HTTP headers are transferred automatically because they are frequently used by Web
applications and (SAP) servers:
● Content-Disposition
● Content-MD5
● DataServiceVersion
● DNT
● MaxDataServiceVersion
● Origin
● RequestID
● Sap-ContextId
● Sap-Message
● Sap-Metadata-Last-Modified
● SAP-PASSPORT
● Slug: For more information, see Atom Publishing Protocol .
● X-CorrelationID
● X-CSRF-TOKEN
● X-Forwarded-For
● X-HTTP-Method
● X-Requested-With
If you need additional HTTP headers to be forwarded to or from a backend request or backend response, add the
header names to the headerWhiteList section of the neo-app.json file:
Example
An example configuration that forwards the additional headers X-Custom1 and X-Custom2 looks like this:
...
"headerWhiteList": [
"X-Custom1",
"X-Custom2"
]
...
Excluded Headers
The following headers are excluded from forwarding:
● Cookie
● Cookie2
● Content-Length
● Accept-Encoding
Cookies are used for user session identification and therefore should not be shared. The system stores cookies
sent by a backend in the session and removes them from the response before forwarding to the user. With the next
request to the backend the stored cookies are added again.
The Content-Length header cannot be whitelisted as the value is recalculated on demand matching the content
of the given request or response.
Build your first application on the platform based on your preference for development technology and language.
You might want to try several of the tutorials in these tables.
Note
The Import option for some technologies means that sample applications are available, which you can import in
your Eclipse IDE.
Java
HTML5
SAP Web IDE Creating a Hello World Application Using SAP Web IDE [page 1261]
SAPUI5
This document contains references to API documentation to be used for development with SAP Cloud Platform.
The Java API documentation for the Neo environment is provided as part of the downloadable SDK archives. To get
to it, do the following:
1. Install the SDK for the Neo environment of your choice. See Install the SAP Cloud Platform SDK for Neo
Environment [page 1127].
2. On your local machine, navigate to the folder of the archive you downloaded and extracted.
3. Navigate to the javadoc folder, and open index.html in your Web browser.
REST APIs
Monitoring API
Related Information
Platform APIs of SAP Cloud Platform are protected with OAuth 2.0 client credentials. Create an OAuth client and
obtain an access token to call the platform API methods.
Context
For a description of the OAuth 2.0 client credentials grant, see the OAuth 2.0 client credentials grant specification.
Tip
Do not get a new OAuth access token for each and every platform API call. Instead, reuse the same access
token throughout its validity period, until you get a response indicating that the access token needs to be
reissued.
Context
The OAuth client is identified by a client ID and protected with a client secret. In a later step, those are used to
obtain the OAuth API access token from the OAuth access token endpoint.
Procedure
1. In your Web browser, open the Cockpit. See SAP Cloud Platform Cockpit [page 900].
Caution
Make sure you save the generated client credentials. Once you close the confirmation dialog, you cannot
retrieve the generated client credentials from SAP Cloud Platform.
Context
Once you have the client credentials, you need to send an HTTP POST request to the OAuth access token endpoint
and use the client ID and client secret as user and password for HTTP Basic Authentication. You will receive the
access token as a response. By default, the access token received in this way is valid for 1500 seconds (25 minutes).
You cannot configure its validity length.
1. Send a POST request to the OAuth access token endpoint. The URL is landscape-specific and has the following
form: https://api.<SAP Cloud Platform host>/oauth2/apitoken/v1?grant_type=client_credentials
The parameter grant_type=client_credentials notifies the endpoint that the Client Credentials flow is used.
2. Get and save the access token from the received response from the endpoint.
The response is a JSON object, whose access_token parameter is the access token. It is valid for the time (in
seconds) specified in the expires_in parameter (default: 1500 seconds).
Example
Retrieving an access token on the trial landscape looks like this:
POST https://api.hanatrial.ondemand.com/oauth2/apitoken/v1?
grant_type=client_credentials
Headers:
Authorization: Basic eW91ckNsaWVudElEOnlvdXJDbGllbnRTZWNyZXQ
{
"access_token": "51ddd94b15ec85b4d54315b5546abf93",
"token_type": "Bearer",
"expires_in": 1500,
"scope": "hcp.manageAuthorizationSettings hcp.readAuthorizationSettings"
}
1. At the required (application, subaccount or global account) level, create an HTTP destination with the
following information (the name can be different):
○ Name=<yourdestination name>
○ URL=https://api.<SAP Cloud Platform host>/oauth2/apitoken/v1?grant_type=client_credentials
○ ProxyType=Internet
○ Type=HTTP
○ CloudConnectorVersion=2
○ Authentication=BasicAuthentication
○ User=<clientID>
○ Password=<clientSecret>
See Create HTTP Destinations [page 111].
2. In your application, obtain an HttpURLConnection object that uses the destination.
See ConnectivityConfiguration API [page 80].
3. With the object retrieved from the previous step, execute a POST call.
urlConnection.setRequestMethod("POST");
urlConnection.setRequestProperty("Authorization", "Basic <Base64 encoded representation of {clientId}:{clientSecret}>");
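The Base64-encoded value for this header could be computed, for example, with the following Node.js sketch. The credential values are placeholders:

```javascript
// Build the HTTP Basic Authorization header from OAuth client credentials.
function basicAuthHeader(clientId, clientSecret) {
  const encoded = Buffer.from(clientId + ":" + clientSecret).toString("base64");
  return "Basic " + encoded;
}

// e.g. basicAuthHeader("yourClientID", "yourClientSecret") produces the kind
// of value shown in the token request example above.
```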
Procedure
In the requests to the required platform API, include the access token as a header with name Authorization and
value Bearer <token value>.
Example
We will call the Authorization Management API.
GET https://api.hanatrial.ondemand.com/authorization/v1/accounts/p1234567trial/
users/roles/?userId=myUser
Headers:
Authorization: Bearer 51ddd94b15ec85b4d54315b5546abf93
Related Information
A Multi-Target Application (MTA) is a package composed of multiple application and resource modules, which have
been created using different technologies and deployed to different runtimes, but have a common lifecycle. You
bundle the modules together, describe them along with their interdependencies to other modules, services, and
interfaces, and package them in an MTA.
Complex business applications are composed of multiple parts developed with focus on micro-service design
principles, API-management, usage of the OData protocol, increased usage of application modules developed with
different languages, IDEs, and build methodologies. Thus, development, deployment, and configuration of
separate elements introduce a variety of lifecycle and orchestration challenges. To address these challenges, SAP
introduces the Multi-Target Application (MTA) concept. It addresses the complexity of continuous deployment by
employing a formal target-independent application model.
An MTA comprises multiple modules created with different technologies, deployed to different target runtimes,
but having a common lifecycle. Initially, developers describe the modules of the application, the interdependencies
to other modules and services, and required and exposed interfaces. Afterward, the SAP Cloud Platform validates,
orchestrates, and automates the deployment of the MTA.
For more information about the Multi-Target Application model, see the official Multi-Target Application Model
specification.
Related Information
You can create and deploy a Multi-Target Application for the Cloud Foundry environment as described below:
● Using the SAP Web IDE for Full-Stack Development as described in Developing Multi-Target Applications - both
the development descriptor mta.yaml and the deployment descriptor mtad.yaml are created automatically.
The mta.yaml is generated when you create the application project, and the mtad.yaml file is created when
you build the project.
Note
You may still need to edit the development descriptor.
Development descriptors are used to generate MTA deployment descriptors, which define the required
deployment-time data. That is, the MTA development descriptor data specifies what you want to build and how to
build it, while the deployment descriptor data specifies what to deploy and how to deploy it.
● Using the Multi-Target Application Archive Builder tool - as described in Multi-Target Application Archive
Builder. Afterward you deploy the MTA using the Cloud Foundry Command Line Interface.
Note
An MTA development descriptor mta.yaml is required. You have to create it manually.
How to deploy the Multi-Target Application Cloud Foundry Command Line Interface [page 906]
● Manually - create the required files manually and deploy them using the Cloud Foundry Command Line
Interface
Note
An MTA development descriptor mta.yaml is not required.
Multi-Target Application deployment descriptor Defining MTA Deployment Descriptors for Cloud Foundry
[page 1296]
Multi-Target Application extension descriptor Defining MTA Extension Descriptors [page 1322]
Multi-Target Application module types and parameters MTA Module Types, Resource Types, and Parameters for Ap
plications in the Cloud Foundry Environment [page 1324]
How to deploy the Multi-Target Application Cloud Foundry Command Line Interface [page 906]
You have to consider the following limits for the MTA artifacts, which can be handled by the Cloud Foundry deploy
service:
Using the Multi-Target Application model is particularly useful in the following situations:
● You are developing a business application composed of several different parts - apps, services, content,
and others - which you want to manage as one unit
In such cases you declaratively design the application structure along with the dependencies between the
various components. This enables you to utilize SAP Cloud Platform to deploy, update without downtime by
employing the Zero Downtime Maintenance mode, or undeploy your end-to-end solution - each with a single
step. This ensures the completeness and consistency of all components.
● Your business application has dependencies to external resources, such as backing services (database,
messaging, among others), APIs, and configurations from other applications
When you describe these resources using one declarative model, you can utilize SAP Cloud Platform to check
for the existence of the required backing services, and afterwards create, configure, and bind the missing ones.
Required APIs or configuration, provided by other applications, would be resolved and wired appropriately as
well.
● Your business application has a certain default configuration, for example about memory, disk, number of
individual app instances, environment variables, service plans, and others
In such cases you can make all configuration elements explicit during your development and release
procedures by including them in one declarative model. Furthermore, you can define the expected
configuration parts that need to be provided for different deployments targets - for example development,
testing, or production environments. This way SAP Cloud Platform can automate your application lifecycle
based on the default configurations, ensuring that all required custom configurations are in place. You could
also employ a wide set of predefined system configuration variables such as default-host, default-domain,
app-name, org, space, among others, by using the placeholder notation ${<parameter>}. This
approach reduces the hard-coded configuration values in your apps.
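As a sketch, a module entry using this placeholder notation in an MTA descriptor might look like this. The module name and type are hypothetical, and the placeholder is resolved by the platform during deployment:

```
modules:
  - name: my-app              # hypothetical module name
    type: javascript.nodejs   # illustrative module type
    parameters:
      host: ${default-host}   # predefined system configuration variable
```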
Multi-Target Applications are defined in a development descriptor required for design-time purposes.
The development descriptor (mta.yaml) defines the elements and dependencies of a Multi-Target Application
(MTA) compliant with the Cloud Foundry environment.
Note
The MTA development descriptor (mta.yaml) is used to generate the deployment descriptor (mtad.yaml),
which is required for deploying an MTA to the target runtime. If you use command-line tools to deploy an MTA,
you do not need an mta.yaml file. However, in these cases you have to manually create the mtad.yaml file.
For more information about the MTA development descriptor, see Inside an MTA Descriptor.
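A minimal development descriptor could look as follows (a sketch; the ID, module name, and path are illustrative):

```yaml
# mta.yaml (development descriptor)
_schema-version: "3.1"
ID: com.example.sample
version: 0.0.1
modules:
  - name: sample-app
    type: javascript.nodejs
    path: web/
```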
The Multi-Target Application (MTA) deployment descriptor is a YAML file that defines the contract between you, as
the provider of a deployable artifact, and SAP Cloud Platform as the deployer tool.
Using the YAML data serialization language you describe the MTA in an MTA deployment descriptor (mtad.yaml)
file containing the following elements:
● Global elements - an identifier and version that uniquely identify the MTA, including additional optional
information such as a description, the providing organization, and a copyright notice for the author.
● Modules - they contain the properties of module types, which represent Cloud Foundry applications or content
that form the MTA and are deployed. For more information, see Modules [page 1300].
● Resources - they contain properties of resource types, which are entities not part of an MTA, but required by
the modules at runtime or at deployment time. For more information, see Resources [page 1305].
● Dependencies between modules and resources.
● Properties - these result in the application environment variables that have to be available to the respective
module at run time. For more information, see Properties [page 1305].
● Technical configuration parameters, such as URLs, and application configuration parameters, such as
environment variables. For more information, see Parameters and Placeholders [page 1307].
● Metadata - provide additional information about the declared parameters and properties. For more
information, see Metadata for Properties and Parameters [page 1308].
Example
_schema-version: "3.1"
ID: com.sap.xs2.samples.javahelloworld
version: 0.1.0

modules:
  - name: java-hello-world
    type: javascript.nodejs
    path: web/
    requires:
      - name: java-uaa
      - name: java
        group: destinations
        properties:
          name: java
          url: ~{url}
          forwardAuthToken: true
  - name: java-hello-world-backend
    type: java.tomee
    path: java/target/java-hello-world.war
    provides:
      - name: java
        properties:
          url: ${default-url}
    properties:
      JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee/webapps/ROOT/WEB-INF/resources.xml': {'service_name_for_DefaultDB' : 'java-hdi-container'}]"
    requires:
      - name: java-uaa
      - name: java-hdi-container
      - name: java-hello-world-db
  - name: java-hello-world-db
    type: com.sap.xs.hdi
    path: db/
    requires:
      - name: java-hdi-container

resources:
  - name: java-hdi-container
    type: com.sap.xs.hdi-container
  - name: java-uaa
    type: com.sap.xs.uaa-space
    parameters:
      config-path: xs-security.json
Note
● The format and available options in the MTA deployment descriptor can change with newer versions
of the MTA specification. Always specify the schema version when defining an MTA deployment descriptor,
so that SAP Cloud Platform knows which MTA specification version you are deploying against.
● The example above is incomplete. To deploy a solution, you have to create an MTA extension descriptor with
the user and password added there. You also have to create the MTA archive.
Note
As there are technical similarities between SAP HANA XS Advanced and Cloud Foundry, you can adapt
application parameters for operation in either platform. Note, however, that each environment supports its own set of
parameters.
Example 1
Sample Code
_schema-version: "3.1"
ID: com.sap.xs2.samples.nodehelloworld
version: 0.1.0

modules:
  - name: node-hello-world
    type: javascript.nodejs
    path: web/
    requires:
      - name: nodejs-uaa
      - name: nodejs
        group: destinations
        properties:
          name: backend
          url: ~{url}
          forwardAuthToken: true
        properties-metadata:
          name:
            optional: false
            overwritable: false
          url:
            overwritable: false
    parameters:
      host: !sensitive ${user}-node-hello-world
      memory: 128MB
    parameters-metadata:
      memory:
        optional: true
        overwritable: true
  - name: node-hello-world-backend
    type: javascript.nodejs
    path: js/
    provides:
      - name: nodejs
        properties:
          url: "${default-url}"
    requires:
      - name: nodejs-uaa
      - name: nodejs-hdi-container
      - name: node-hello-world-db
    parameters:
      host: ${user}-node-hello-world-backend
  - name: node-hello-world-db
    type: com.sap.xs.hdi

resources:
  - name: nodejs-uaa
    type: com.sap.xs.uaa
    parameters:
      config-path: xs-security.json
  - name: log
    type: application-logs
    optional: true
Example 2
The following sample code is a more complex example, which shows an MTA deployment description with the
following modules:
● A database model
● An SAP UI5 application (hello world)
● An application written in node.js
The UI5 application “hello-world” uses the environment variable <ui5_library> as a logical reference to a
version of UI5 on a public website.
Sample Code
ID: com.acme.xs2.samples.javahelloworld
version: 0.1.0

modules:
  - name: hello-world
    type: javascript.nodejs
    requires:
      - name: uaa
      - name: java_details
        properties:
          backend_url: ~{url3}/
    properties:
      ui5_library: "https://sapui5.hana.acme.com/"
  - name: java-hello-world-backend
    type: java.tomee
    requires:
      - name: uaa
      - name: java-hello-world-db # deploy ordering
      - name: java-hdi-container
    provides:
      - name: java_details

resources:
  - name: java-uaa
    type: com.sap.xs.uaa
    parameters:
      name: java-uaa # the name of the actual service
3.3.1.3.2 Modules
The modules section of the deployment descriptor lists the names of the application modules contained in the
MTA deployment archive.
Optional module attributes include: path, type, description, properties, and parameters, plus requires
and provides lists.
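A module entry using these optional attributes could look as follows (a sketch with illustrative names and values):

```yaml
modules:
  - name: sample-backend
    type: java.tomee
    path: java/target/sample.war
    description: Example back-end module
    requires:
      - name: sample-uaa
    provides:
      - name: backend-api
        properties:
          url: ${default-url}
    parameters:
      memory: 256M
    properties:
      LOG_LEVEL: info
```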
Order of Deployment
Modules, and therefore applications, are deployed in an order that is based on the dependencies they have on each
other. This is done to ensure that an application that depends on other applications is deployed only after the
applications it depends on are already up and running. To determine the order of application deployment and start
up, the modules defined in the application development (and deployment) descriptor are sorted in such a way that
the first module has the least number of (and preferably no) dependencies on other modules in the MTA, while the
last one has the most dependencies. For example, the modules in the following deployment descriptor will be
deployed in the order m3, m2, and finally m1:
Note
The number of dependencies between the modules and resources has no influence on the deployment order;
during deployment, the creation of services always takes place before any of the modules are deployed.
Sample Code
MTA Deployment Descriptor (mtad.yaml)
ID: com.sap.xs2.sample
version: 0.1.0

modules:
  - name: m1
    requires:
      - name: m2
      - name: r1
  - name: m2
    requires:
      - name: m3
  - name: m3
    requires:
      - name: r2

resources:
  - name: r1
    type: com.sap.xs.uaa
  - name: r2
    type: com.sap.xs.hdi-container
Circular Dependencies
If two modules have a circular dependency (for example, m1 depends on m2, but m2 also depends on m1), the
parameter “dependency-type” can be specified to control the order in which the modules are deployed. Modules
with the parameter dependency-type: hard (m1, in the following example) are always deployed first.
Note
The default setting is dependency-type: soft for all modules. The obvious
consequence of this default dependency setting is that the deployment order of modules with circular
dependencies is arbitrary and cannot be relied upon.
Sample Code
MTA Deployment Descriptor (mtad.yaml)
ID: com.sap.xs2.sample
version: 0.1.0

modules:
  - name: m1
    type: java.tomcat
    requires:
      - name: m2
    parameters:
      dependency-type: hard
  - name: m2
    type: java.tomcat
    requires:
      - name: m1
The MTA deployment is incremental. This means that the state of the artifacts in the Cloud Foundry environment
is compared to the desired state as described in the mtad.yaml, and the set of operations required to change the
current state of the MTA to the desired state is computed. The following optimizations are possible during the
MTA redeployment:
○ The application is bound to service configuration parameters that have been changed. This requires an
update of the service instance and a rebind, restage, and restart of the application.
○ The service binding parameters are changed. This requires an update of the service binding and a restage of
the application.
○ The MTA version is changed, which requires a change of special application environment variables
managed by the deploy service.
Note
Currently, a Cloud Foundry environment application is restaged if it is bound to any service with
parameters, because it is not possible to retrieve the current Cloud Foundry service parameters and
compare them with the desired ones.
It is possible for multiple MTA modules to reference a single deployable application, for example, a code bundle in
the MTA archive. This means that during deployment the same code fragment is executed separately in multiple
applications or application instances, but with different parameters and properties. This results in multiple
deployed runtime modules based on the same source module; these runtime modules are usually started with
different configuration parameters. A development project can have one source folder, which is referenced by
multiple module entries in the MTA deployment descriptor mtad.yaml, as illustrated in the following example:
Code Syntax
Multiple MTA Module Entries in the Deployment Descriptor (mtad.yaml)
...
modules:
  - name: fileloader-master
    type: nodejs
    path: js
    properties:
      FL_ROLE: master
  - name: fileloader-worker
    type: nodejs
    path: js
    parameters:
      instances: 3
    properties:
      FL_ROLE: worker
If deployment is based on an MTA archive, it is not necessary to duplicate the code to have two different deployable
modules; the specification for the MTA-module entry in MANIFEST.MF is extended, instead. The following
(incomplete) example of a MANIFEST.MF shows how to use a comma-separated list of module names to associate
one set of deployment artifacts with all listed modules:
Code Syntax
Multiple MTA Modules Listed in the MANIFEST.MF Deployment Manifest
Manifest-Version: 1.0
...
Name: js/
MTA-Module: fileloader-master, fileloader-worker
...
You package the MTA deployment descriptor and module binaries in an MTA archive. You can do so manually, as
described below, or alternatively use the MTA Archive Builder tool.
For more information about the MTA Archive Builder tool, see Multi-Target Application Archive Builder.
Note
There could be more than one module of the same type in an MTA archive.
The Multi-Target Application (MTA) archive is created in a way that is compatible with the JAR file specification. This
allows you to use common tools for creating, modifying, and signing such archives.
Note
● The MTA extension descriptor is not part of the MTA archive. During deployment you provide it as a
separate file, or as parameters you enter manually when the SAP Cloud Platform requests them.
● Using a resources directory, as in some examples, is not mandatory. You can store the necessary resource
files at the root level of the MTA archive, or in another directory with a name of your choice.
The following example shows the basic structure of an MTA archive. It contains a Java application .war file and a
META-INF directory, which contains an MTA deployment descriptor with a module and a MANIFEST.MF file.
Example
/example.war
/META-INF/mtad.yaml
/META-INF/MANIFEST.MF
The MANIFEST.MF file has to contain a name section for each MTA module in the archive that has file
content. In the name section, the following information has to be added:
● Name - the path within the MTA archive, where the corresponding module is located. If it leads to a directory,
add a forward slash (/) at the end.
● Content-Type - the type of the file that is used to deploy the corresponding module
● MTA-module - the name of the module as it has been defined in the deployment descriptor
Note
● You can store one application in two or more war files contained in the MTA archive.
● According to the JAR specification, there must be an empty line at the end of the file.
Example
Manifest-Version: 1.0
Created-By: example.com
Name: example.war
Content-Type: application/zip
MTA-Module: example-java-app
Based on these entries, the deploy service will:
● Look for the example.war file within the root of the MTA archive when working with module example-java-
app
● Interpret the content of the example.war file as application/zip
Note
The example above is incomplete. To deploy a solution, you have to create an MTA deployment descriptor and
an MTA extension descriptor with the user and password added there. Then you have to create the MTA archive.
Tip
As an alternative to the procedure described above, you can also use the MTA Archive Builder tool. See its
official documentation at Multi-Target Application Archive Builder.
3.3.1.3.4 Resources
The application modules defined in the “modules” section of the deployment descriptor may depend on
resources.
Each resource has one required attribute:
● name
Must be unique within the MTA; it identifies the resource.
Optional resource attributes include: type, description, properties, and parameters. The resource type is
one of a reserved list of resource types supported by the MTA-aware deployment tools, for example:
com.sap.xs.uaa, com.sap.xs.hdi-container, or com.sap.xs.job-scheduler. The type indicates to the
deployer how to discover, allocate, or provision the resource, for example, a managed service such as a database,
or a user-provided service.
Restriction
System-specific parameters for the deployment descriptor must be included in a so-called MTA deployment
extension descriptor.
3.3.1.3.5 Properties
The MTA deployment descriptor can contain two types of properties, which are very similar, and are intended for
use in the modules or resources configuration, respectively.
Properties can be declared in the deployment description either in the modules configuration (for example, to
define provides or requires dependencies), or in the resources configuration to specify requires
dependencies. Both kinds of properties (modules and requires) are injected into the module’s environment. In
the requires configuration, properties can reference other properties that are declared in the corresponding
provides configuration, for example, using the ~{} syntax.
The values of properties can be specified at design time, in the deployment description (mtad.yaml). More often,
however, a property value is determined during deployment, where the value is either explicitly set by the
administrator, for example, in a deployment-extension descriptor file (myDeployExtension.mtaext), or
inferred by the MTA deployer from a target-platform configuration. When set, the deployer injects the property
values into the module's environment. The deployment operation reports an error if it is not possible to determine
a value for a property.
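For example, a deployment-extension descriptor that supplies such a value at deployment time could look as follows (a sketch; the IDs, module name, and property are illustrative, and extends refers to the ID of the descriptor being extended):

```yaml
# myDeployExtension.mtaext
_schema-version: "3.1"
ID: com.example.sample.config
extends: com.example.sample
modules:
  - name: sample-backend
    properties:
      LOG_LEVEL: debug   # value supplied by the administrator at deployment time
```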
Cross-References to Properties
To enable resource properties to resolve values from a property in another resource or module, a resource must
declare a dependency. However, these “requires” declarations do not affect the order of the application
deployment.
Restriction
It is not possible to reference list configuration entries either from resources or “subscription” functionalities
(deployment features that are available to subscriber applications).
Code Syntax
Cross-References between Properties in the MTA Deployment Descriptor
modules:
  - name: java
    ...
    provides:
      - name: backend
        properties:
          url: ${default-url}/foo
resources:
  - name: example
    type: example-type
    properties:
      example-prop: my-example-prop
  - name: uaa
    type: uaa-type
    requires:
      - name: example
      - name: backend
        properties:
          prop: ~{url}
        parameters:
          param: ~{url}
    properties:
      pro1: ~{example/example-prop}
    parameters:
      config:
        app-router-url: ~{backend/url}
        example-prop: ~{example/example-prop}
Parameters are reserved variables that affect the behavior of the MTA-aware tools, such as the deployer.
Parameters can be “system”, “write-only”, or “read-write” (the default value can be overwritten). Each tool publishes a
list of system parameters and their (default) values for its supported target environments. All parameter values
can be referenced as part of other property or parameter value strings. To reference a parameter value, use the
placeholder notation ${<parameter>}. The value of a system parameter cannot be changed in descriptors; it can
only be referenced using the placeholder notation.
Examples of common read-only parameters are user, app-name, default-host, default-uri. The value of a
writable parameter can be specified within a descriptor. For example, a module might need to specify a non-default
value for a target-specific parameter that configures the amount of memory for the module’s runtime.
Tip
It is also possible to declare metadata for parameters and properties defined in the MTA deployment
description; the mapping is based on the parameter or property keys. For example, you can specify if a
parameter is required (optional: false) or can be modified (overwritable: true).
Sample Code
Parameters and Placeholders
modules:
  - name: node-hello-world
    type: javascript.nodejs
    path: web/
    requires:
      - name: nodejs-uaa
      - name: nodejs
        group: destinations
        properties:
          name: backend
          url: ~{url}
          forwardAuthToken: true
    parameters:
      host: ${user}-node-hello-world
Descriptors can contain so-called placeholders (also known as substitution variables), which can be used as
substrings within property and parameter values. Placeholder names are enclosed by the dollar sign ($) and curly
brackets ({}). For example: ${host} and ${domain}. For each parameter “P”, there is a corresponding
placeholder ${P}. The value of <P> can be defined either by a descriptor used for deployment, or by the deploy
service itself.
Placeholders can be used to read the value of parameters. For example, the following placeholder can be used in a
descriptor to get the CF / XS API URL: ${xs-api-url}. Placeholders can also be used in map and list values in
properties and parameters sections of modules and resources, as illustrated in the following example:
Sample Code
resources:
  - name: uaa
    type: com.sap.xs.uaa
    parameters:
      config:
        users: [ "${generated-user}", "XSMASTER" ]
It is possible to declare metadata for parameters and properties defined in the MTA deployment description, for
example, using the “parameters-metadata:” or “properties-metadata:” keys, respectively; the mapping is
based on the keys defined for a parameter or property.
You can specify if a property is required (optional: false) or can be modified (overwritable: true), as
illustrated in the following (incomplete) example:
The overwritable and optional keywords are intended for use in extension scenarios, for example, where a
value for a parameter or property is supplied at deployment time and declared in a deployment-extension
descriptor file (myMTADeploymentExtension.mtaext).
You can declare metadata for the parameters and properties that are already defined in the MTA deployment
description. However, any parameters or properties defined in the mtad.yaml deployment descriptor with the
metadata value overwritable: false cannot be overwritten by a value supplied from the extension descriptor.
In this case, an error would occur in the deployment.
Code Syntax
Metadata for MTA Deployment Parameters and Properties
modules:
  - name: frontend
    type: javascript.nodejs
    parameters:
      memory: 128M
      domain: ${default-domain}
    parameters-metadata:
      memory:
        optional: true
Note
Parameters or properties can be declared as sensitive. Information about properties or parameters flagged as
“sensitive” is not written as plain text in log files; it is masked, for example, using a string of asterisks
(********).
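For example, building on the !sensitive notation shown in an earlier sample, a parameter can be flagged as follows (a sketch; the module name is illustrative):

```yaml
modules:
  - name: sample-app
    type: javascript.nodejs
    parameters:
      # the resolved value is masked (********) in deployment logs
      host: !sensitive ${user}-sample-app
```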
In addition to having dependencies on modules in the same MTA, modules can also have dependencies on modules
from other MTAs. For these so-called cross-MTA dependencies to work, the MTA that provides the
dependencies must declare them as “public” (this is true by default). You have to use the configuration method
to declare that one module has a dependency on a module in a different MTA:
Note
The declaration method requires the addition of a resource in the deployment descriptor; the additional
resource defines the provided dependency from the other MTA.
This method can be used to access any entry that is present in the configuration registry. The parameters used in
this cross-MTA declaration method are provider-nid, provider-id, version, and target. The parameters
are all optional and are used to filter the entries in the configuration registry based on their respective fields. If any
of these parameters is not present, the entries will not be filtered based on their value for that field. The version
parameter can accept any valid Semver range.
When used for cross-MTA dependency resolution, the provider-nid is always “mta”, the provider-id follows
the format <mta-id>:<mta-provides-dependency-name> and the version parameter is the version of the
provider MTA. In addition, as illustrated in the following example, the target parameter is structured and contains
the name of the organization and space in which the provider MTA is deployed. In the following example, the
placeholders ${org} and ${space} are used, which are resolved to the name of the organization and space of the
consumer MTA. In this way, the provider MTA is deployed in the same space as the consumer MTA.
The following example shows the dependency declaration in the deployment descriptor of the “consumer” MTA:
Sample Code
Consumer MTA Deployment Descriptor (mtad.yaml)
_schema-version: "3.1"
ID: com.sap.sample.mta.consumer
version: 0.1.0

modules:
  - name: consumer
    type: java.tomee
    requires:
      - name: message-provider
        properties:
          message: ~{message}

resources:
  - name: message-provider
    type: configuration
    parameters:
      provider-nid: mta
      provider-id: com.sap.sample.mta.provider:message-provider
      version: ">=1.0.0"
      target:
        org: ${org}     # Specifies the org of the provider MTA
        space: ${space} # Specifies the space of the provider MTA (a wildcard * searches all spaces)
Tip
If no target organization or space is specified by the consumer, the current organization and space are
used to search for the provider MTA. If you specify a wildcard value (*) for the organization or space of the provider
MTA, the provider is searched for in all organizations or spaces for which the wildcard value is provided. If no
match is found for the provider MTA in the specified target organization or space, then org: <current-org>
and space: SAP is searched.
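For example, a consumer resource that searches all organizations and spaces for the provider could be declared as follows (a sketch reusing the parameters described above):

```yaml
resources:
  - name: message-provider
    type: configuration
    parameters:
      provider-nid: mta
      provider-id: com.sap.sample.mta.provider:message-provider
      version: ">=1.0.0"
      target:
        org: '*'    # wildcard: search all organizations
        space: '*'  # wildcard: search all spaces
```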
The following example shows the dependency declaration in the deployment descriptor of the “provider” MTA:
Sample Code
Provider MTA Deployment Descriptor (mtad.yaml)
_schema-version: "3.1"
ID: com.sap.sample.mta.provider
version: 2.3.0

modules:
  - name: provider
    type: javascript.nodejs
    provides:
      - name: message-provider
        public: true
        properties:
          message: "Hello! This is a message provided by application \"${app-name}\", deployed in org \"${org}\" and space \"${space}\"!"
A “consumer” module must explicitly declare the organizations and spaces in which a “provider” is expected to be
deployed, unless the provider is in the same space as the consumer. The “provider” can define a white list that specifies the
organizations and spaces from which the consumption of configuration data is permitted. The provider is not
required to white-list the spaces in its own organization.
Note
Previously, registry entries were visible from all organizations by default. Now, the default visibility setting is
“visible within the current organization and all the organization's spaces”.
White lists can be defined on various levels. For example, a visibility white list could be used to ensure that a
provider's configuration data is visible in the local space only, in all organizations and spaces, in a defined list of
organizations, or in a specific list of organization and space combinations.
The options for white lists and the visibility of configuration data are similar to the options available for managed
services. However, for visibility white lists, space developers are authorized to extend the visibility of configuration
data beyond the space in which they work, without the involvement of an administrator. An administrator is
required to release service plans to certain organizations.
Visibility is declared on the provider side by setting the parameter visibility: (of type 'sequence'), containing
entries for the specified organization (org:) and space (space:). If no visibility: parameter is set, the default
visibility value org: ${org}, space: '*' is used, which restricts visibility to consumers deployed into all
spaces of the provider's organization. Alternatively, the value org: '*' can be set, which allows bindings from
all organizations and spaces. The white list can contain entries at the organization level only. This, however,
releases configuration data for consumption from all spaces within the specified organizations, as illustrated in the
following (annotated) example.
Tip
Since applications deployed in the same space are always considered “friends”, visibility of configuration data in
the local space is always preserved, no matter which visibility conditions are set.
Sample Code
provides:
  - name: backend
    public: true
    parameters:
      visibility:          # a list of possible settings:
        - org: ${org}      # for local org
          space: ${space}  # and local space
        - org: org1        # for all spaces in org1
        - org: org2        # for the specified combination (org2,space2)
          space: space2
        - org: ${org}      # default: all spaces in local org
        - org: '*'         # all orgs and spaces
        - org: '*'
          space: space3    # every space3 in every org
Only users in the white-listed spaces can read or consume the provided configuration data.
3.3.1.3.8.1 Plugins
The deployment service supports a method that allows an MTA to consume multiple configuration entries per
requires dependency.
The following is an example for multiple requires dependencies in the MTA Deployment Descriptor
(mtad.yaml):
Sample Code
_schema-version: "2.1"
ID: com.acme.framework
version: "1.0"

modules:
  - name: framework
    type: javascript.nodejs
    requires:
      - name: plugins
        list: plugin_configs
        properties:
          plugin_name: ~{name}
          plugin_url: ~{url}/sources
        parameters:
          managed: true # true | false. Default is false

resources:
  - name: plugins
    type: configuration
    parameters:
      target:
        org: ${org}
        space: ${space}
      filter:
        type: com.acme.plugin
The MTA deployment descriptor shown in the example above contains a module that specifies a requires
dependency to a configuration resource. Since the requires dependency has a list property, the deploy service
will attempt to find multiple configuration entries that match the criteria specified in the configuration resource.
Tip
It is possible to create a subscription for a single configuration entry, for example, where no “list:” element is
defined in the requires dependency.
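A requires dependency that consumes a single configuration entry, without a list: element, could look as follows (a sketch derived from the example above; the names are illustrative):

```yaml
modules:
  - name: framework
    type: javascript.nodejs
    requires:
      - name: single-plugin
        properties:
          plugin_url: ~{url}   # resolved from the single matched entry
resources:
  - name: single-plugin
    type: configuration
    parameters:
      filter:
        type: com.acme.plugin
```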
Note
The filter parameter can be used in combination with other configuration resource specific parameters, for
example: provider-nid, provider-id, target, and version.
The resource itself contains a filter parameter that is used to filter entries from the configuration registry based
on their content. In the example shown above, the filter only matches entries that are provided by an MTA
deployed in the current space, which have a type property in their content with a value of com.acme.plugin.
The XML document in the following example shows some sample configuration entries, which would be matched
by the filter if they were present in the registry.
Sample Code
MTA Configuration Entries Matched in the Registry
<configuration-entry>
  <id>8</id>
  <provider-nid>mta</provider-nid>
  <provider-id>com.sap.sample.mta.plugin-1:plugin-1</provider-id>
  <provider-version>0.1.0</provider-version>
  <target-space>2172121c-1d32-441b-b7e2-53ae30947ad5</target-space>
  <content>{"name":"plugin-1","type":"com.acme.plugin","url":"https://xxx.mo.sap.corp:51008"}</content>
</configuration-entry>
<configuration-entry>
  <id>10</id>
  <provider-nid>mta</provider-nid>
  <provider-id>com.sap.sample.mta.plugin-2:plugin-2</provider-id>
  <provider-version>0.1.0</provider-version>
  <target-space>2172121c-1d32-441b-b7e2-53ae30947ad5</target-space>
  <content>{"name":"plugin-2","type":"com.acme.plugin"}</content>
</configuration-entry>
The JSON document in the following example shows the environment variable that will be created from the
requires dependency defined in the example deployment descriptor above, assuming that the two configuration
entries shown in the XML document were matched by the filter specified in the configuration resource.
Note
References to non-existing configuration entry content properties are resolved to “null”. In the example above,
the configuration entry published for plugin-2 does not contain a url property in its content. As a result, the
environment variable created from that entry is set to “null” for plugin_url.
Sample Code
Application Environment Variable
plugin_configs: [
{
"plugin_name": "plugin-1",
"plugin_url": "https://xxx.mo.sap.corp:51008/sources"
},
{
"plugin_name": "plugin-2",
"plugin_url": null
}
]
Tip
When starting the deployment of an MTA (with the xs deploy command), you can use the special option
--no-restart-subscribed-apps to specify that, if the publishing of configuration entries created for that MTA
results in the update of a subscribing application's environment, that application should not be restarted.
Some services support additional configuration parameters in the create-service request. These parameters
are passed as a valid JSON object containing the service-specific configuration parameters.
The deploy service supports the following methods for the specification of service creation parameters:
Note
If service-creation information is supplied both in the deployment (or extension) descriptor and in a
supporting JSON file, the parameters specified directly in the deployment (or extension) descriptor
override the parameters specified in the JSON file.
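For instance, if the descriptor and a JSON file both define xsappname, the descriptor value takes precedence (an illustrative sketch; the value names are invented):

```yaml
# mtad.yaml excerpt - this value overrides the one defined in xs-security.json
resources:
  - name: java-uaa
    type: com.sap.xs.uaa
    parameters:
      config:
        xsappname: name-from-descriptor
```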
The following example shows how to define the service-configuration parameters in the MTA deployment
descriptor (mtad.yaml). If you use this method, all parameters under the special config parameter are used for
the service-creation request.
Sample Code
Service-Configuration Parameters in the MTA Deployment Descriptor
resources:
  - name: java-uaa
    type: com.sap.xs.uaa
    parameters:
      config:
        xsappname: java-hello-world
The following example shows how to define the service-configuration parameters for a service-creation request in
a JSON file; with this method, there are dependencies on further configuration entries in other configuration files.
For example, if you use this JSON method, an additional entry must be included in the MANIFEST.MF file, which
defines the path to the JSON file containing the parameters as well as the name of the resource for which the
parameters should be used.
Sample Code
Service-Configuration Parameters in a JSON File
resources:
  - name: java-uaa
    type: com.sap.xs.uaa
    parameters:
      config:
        xsappname: java-hello-world
Sample Code
xs-security.json File
{
"xsappname": "java-hello-world"
}
Sample Code
Service-Configuration Parameters in the MANIFEST.MF
Name: xs-security.json
MTA-Resource: java-uaa
Content-Type: application/json
The following example of an MTA deployment descriptor shows how to combine both methods to achieve the
desired application-service creation on deployment:
Sample Code
Combined Service-Configuration Parameters in the MTA Deployment Descriptor
resources:
  - name: java-uaa
    type: com.sap.xs.uaa
    parameters:
      config:
        xsappname: java-hello-world
The deployment service supports the following methods for the specification of service-binding parameters:
Note
If service-binding information is supplied both in the MTA's deployment (or extension) descriptor and in a
supporting JSON file, the parameters specified directly in the deployment (or extension) descriptor override the
parameters specified in the JSON file.
In the MTA deployment descriptor, the requires dependency between a module and a resource represents the binding between the corresponding application and the service created from the resource (if the resource has a type). For this reason, the config parameter is nested in the requires dependency parameters, and a distinction must be made between the config parameter in the modules section and the config parameter used in the resources section (for example, when used for service-creation parameters).
The following example shows how to define the service-binding parameters in the MTA deployment descriptor
(mtad.yaml). If you use this method, all parameters under the special config parameter are used for the service-
bind request.
Sample Code
Service-Binding Parameters in the MTA Deployment Descriptor
modules:
- name: node-hello-world-backend
  type: javascript.nodejs
  requires:
  - name: node-hdi-container
    parameters:
      config:
        permissions: debugging
The following example shows how to define the service-binding parameters for a service-bind request in a JSON
file; with this method, there are dependencies on entries in other configuration files. For example, if you use this
JSON method, an additional entry must be included in the MANIFEST.MF file which defines the path to the JSON
file containing the parameters as well as the name of the resource for which the parameters should be used.
Sample Code
Service-Binding Parameters in a JSON File
modules:
- name: node-hello-world-backend
  type: javascript.nodejs
  requires:
  - name: node-hdi-container
Sample Code
xs-hdi.json File
{
"permissions": "debugging"
}
Note
To avoid ambiguities, the name of the module is added as a prefix to the name of the requires dependency; the name of the manifest attribute uses the following format: <module-name>#<requires-dependency-name>.
Sample Code
Service-Binding Parameters in the MANIFEST.MF
Name: xs-hdi.json
MTA-Requires: node-hello-world-backend#node-hdi-container
Content-Type: application/json
The following example of an MTA deployment descriptor shows how to combine both methods to achieve the
desired application-service binding on deployment:
Sample Code
Combined Service-Binding Parameters in the MTA Deployment Descriptor
modules:
- name: node-hello-world-backend
  type: javascript.nodejs
  requires:
  - name: node-hdi-container
    parameters:
      config:
        permissions: debugging
Some services provide a list of tags that are later added to the <VCAP_SERVICES> environment variable. These
tags provide a more generic way for applications to parse <VCAP_SERVICES> for credentials.
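As an illustration of how an application might use those tags, the following sketch looks up the credentials of a bound service by tag in <VCAP_SERVICES>; the environment content shown is invented for the example:

```python
import json
import os

def find_credentials_by_tag(tag):
    """Return credentials of the first bound service carrying the given tag."""
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instances in services.values():
        for instance in instances:
            if tag in instance.get("tags", []):
                return instance.get("credentials", {})
    return None

# Illustrative environment content, not a real service binding:
os.environ["VCAP_SERVICES"] = json.dumps({
    "xsuaa": [{
        "name": "nodejs-uaa",
        "tags": ["xsuaa", "custom-tag-A"],
        "credentials": {"clientid": "sb-java-hello-world"},
    }]
})
print(find_credentials_by_tag("custom-tag-A"))  # the xsuaa instance's credentials
```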
You can also provide custom tags when creating a service instance. To inform the deployment service about custom tags, use the special service-tags parameter, which must be located in the resource definitions that represent the managed services, as illustrated in the following example:
Sample Code
Defining Service Tags in the MTA Deployment Descriptor
resources:
- name: nodejs-uaa
  type: com.sap.xs.uaa
  parameters:
    service-tags: ["custom-tag-A", "custom-tag-B"]
Note
Some service tags are inserted by default, for example, xsuaa, for the XS User Account and Authentication
(UAA) service.
Service brokers are applications that advertise a catalog of service offerings and service plans, and interpret calls for service creation, binding, unbinding, and deletion. The deploy service supports the automatic creation (and update) of service brokers as part of an application deployment process.
An application can declare that a service broker should be created as part of its deployment process by using the following parameters in its corresponding module in the MTA deployment (or extension) descriptor:
Tip
You can use placeholders ${} in the service-URL declaration.
Sample Code
- name: jobscheduler-broker
  properties:
    user: ${generated-user}
    password: ${generated-password}
  parameters:
    create-service-broker: true
    service-broker-name: jobscheduler
    service-broker-user: ${generated-user}
    service-broker-password: ${generated-password}
    service-broker-url: ${default-url}
The create-service-broker parameter should be set to true if a service broker must be created for the specified application module. You can specify the name of the service broker with the service-broker-name parameter; the default value is ${app-name}. The service-broker-user and service-broker-password parameters are the credentials that the controller uses to authenticate itself to the service broker in order to make requests for the creation, binding, unbinding, and deletion of services. The service-broker-url parameter specifies the URL to which the controller sends these requests.
Note
During the creation of the service broker, the XS advanced controller makes a call to the service-broker API to
inquire about the services and plans the service broker provides. For this reason, an application that declares
itself as a service broker must implement the service-broker application-programming interface (API). Failure to
do so might cause the deployment process to fail.
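For orientation, the catalog answer that the controller expects follows the service-broker API. The following sketch shows the shape of a minimal /v2/catalog payload per the Open Service Broker API; the service name, description, and IDs are invented for the example:

```python
import json

def catalog():
    """Minimal /v2/catalog payload; IDs and names are placeholders."""
    return {
        "services": [{
            "id": "a1b2c3d4-0000-0000-0000-000000000001",
            "name": "jobscheduler",
            "description": "Example job scheduling service",
            "bindable": True,
            "plans": [{
                "id": "a1b2c3d4-0000-0000-0000-000000000002",
                "name": "default",
                "description": "Default plan",
            }],
        }]
    }

print(json.dumps(catalog(), indent=2))
```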
Note
Normally, the registration of a space-scoped broker succeeds, because it requires only SpaceDeveloper privileges. However, the global registration of a service broker requires global admin privileges, which the platform developer usually does not have; in such cases, the MTA deployment would fail. To solve this, use the --do-not-fail-on-missing-permissions option, which results in the step being skipped with a warning.
The consumption of existing service keys from applications is an alternative to service bindings. The application can use the service key credentials to consume the service. Existing service keys are modeled as a resource of type org.cloudfoundry.existing-service-key. MTA modules can be set to depend on these resources by using a configuration in the requires section, which results in the injection of the service key credentials into the application environment.
Sample Code
modules:
- name: app123
  type: javascript.nodejs
  requires:
  - name: service-key-1
    parameters:
      env-var-name: keycredentials
...
resources:
- name: service-key-1
  type: org.cloudfoundry.existing-service-key
  parameters:
    service-name: service-a
    service-key-name: test-key-1
Note
Only the parameter service-name is mandatory. It defines which service is used by the application.
● service-key-name - resource parameter that defines which service key of the defined service is used. The default value is the name of the resource.
● env-var-name - requires dependency parameter that defines the name of the new environment variable of the application. The default value is the service key name.
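With the model above, the application can simply read and parse the injected variable. A sketch, with illustrative credential content (at runtime the deploy service injects the real service key credentials):

```python
import json
import os

def service_key_credentials(env_var_name="keycredentials"):
    """Parse the service-key credentials injected into the environment."""
    raw = os.environ.get(env_var_name)
    return json.loads(raw) if raw else None

# Illustrative value; at runtime the deploy service sets this variable
os.environ["keycredentials"] = json.dumps(
    {"url": "https://service-a.example", "user": "tester"})
print(service_key_credentials()["url"])
```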
The Cloud Foundry environment allows you to expose an application as a user-provided service that other
applications can bind to.
The following example of an application MTA deployment or extension descriptor shows the required syntax:
Sample Code
modules:
- name: provider
  properties:
    userID: ${generated-user}
    password: ${generated-password}
  parameters:
    create-user-provided-service: true
    user-provided-service-name: technical-user-provider
    user-provided-service-config:
      userID: ${generated-user}
      password: ${generated-password}
      url: ${default-url}
The create-user-provided-service parameter has to be set to true if you want a user-provided service to
be created for the specified module. In addition, you can specify custom parameters for the application (for
example, userID, password, or url); these custom parameters are used for the creation of the user-provided
service. The custom parameters are exposed to any other applications that bind to the user-provided service that
is created during the application-deployment process.
The deploy service supports the extension of the standard syntax for references in module properties; this
extension enables you to specify the name of the requires section inside the property reference.
You can use this syntax extension to declare implicit groups, as illustrated in the following example:
Sample Code
Syntax Extension: Alternative Grouping of MTA Properties
modules:
- name: pricing-ui
  type: javascript.nodejs
  properties:
    API: # equivalent to a group, but defined in the module properties
    - key: internal1
      protocol: ~{price_opt/protocol} # reference to value of 'protocol' provided by 'price_opt' of module 'pricing-backend'
    - key: external
      url: ~{competitor_data/url} # reference to string value of property 'url' in required resource 'competitor_data'
      api_keys: ~{competitor_data/creds} # reference to list value of property 'creds' in 'competitor_data'
  requires:
  - name: competitor_data
  - name: price_opt
- name: pricing-backend
  type: java.tomcat
  provides:
  - name: price_opt
    properties:
      protocol: http
...
resources:
- name: competitor_data
  properties:
    url: "https://marketwatch.acme.com/"
    creds:
      app_key: 25892e17-80f6
      secret_key: cd171f7c-560d
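The reference semantics can be pictured as a small lookup: ~{<name>/<key>} resolves <key> among the properties provided under the requires name <name>. The following simplified resolver, which handles only string substitution, illustrates the idea and is not the deploy service's actual implementation:

```python
import re

def resolve(value, provided):
    """Replace ~{name/key} references with values from `provided`.

    `provided` maps a requires-dependency name to its properties;
    this simplified version handles only string substitution."""
    def lookup(match):
        name, key = match.group(1), match.group(2)
        return str(provided[name][key])
    return re.sub(r"~\{([^/}]+)/([^}]+)\}", lookup, value)

provided = {
    "price_opt": {"protocol": "http"},
    "competitor_data": {"url": "https://marketwatch.acme.com/"},
}
print(resolve("~{competitor_data/url}", provided))  # https://marketwatch.acme.com/
```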
Use this modeling for short-running tasks or scripts, so that they are treated as applications upon deployment. With simulated application execution, they are started, executed, and then stopped. This is an alternative to using one-off administration tasks.
Sample Code
_schema-version: "3.1"
ID: foo
version: 3.0.0
modules:
- name: foo
  type: java.tomcat
  parameters:
This section contains information about the supported MTA modules, their default parameters, properties, and
supported resource types available in the Cloud Foundry environment.
Modify the following MTA module types by providing specific properties or parameters in the MTA deployment
descriptor (mtad.yaml).
Module type: com.sap.html5.application-content
Module properties: none
Result: Deploys the HTML5 application content.
Module parameters (default value of the parameter in parentheses):
● no-route (true). Defines if a route should be assigned to the application.
● memory (256M). Defines the memory allocated to the application.
● execute-app (true). Defines whether the application is executed. If yes, the application performs a certain amount of work and upon completion sets a success-marker or failure-marker by means of a log message.
● success-marker (STDOUT:The deployment of html5 application content done.*)
● failure-marker (STDERR:The deployment of html5 application content failed.*)
● stop-app (true). Defines if the application should be stopped after execution.
● check-deploy-id (true). Defines if the deployment (process) ID should also be checked when checking the application execution status.
● dependency-type (hard). Defines if this module should be deployed first when it takes part in circular module dependency cycles; hard means that this module is deployed first.
● health-check-type (none). Defines if the module should be monitored for availability.

Module type: business-logging
Module properties: none
Result: Deploys the Business Logging content for configuring text resources.
Module parameters (default value of the parameter in parentheses):
● memory (256M). Defines the memory allocated to the application.
● execute-app (true). Defines whether the application is executed. If yes, the application performs a certain amount of work and upon completion sets a success-marker or failure-marker by means of a log message.
● stop-app (true). Defines if the application should be stopped after execution.
● no-route (true). Defines if a route should be assigned to the application.
● success-marker (STDOUT:Deployment of content deployer done.)
● failure-marker (STDERR:Deployment of content deployer failed.)
Sample Code
modules:
- name: my-binary-app
  type: custom
  parameters:
    buildpack: binary_buildpack
Note
When using the free trial subaccount, modify the default service:
Sample Code
resources:
- name: my-hdi-service
  type: com.sap.xs.hdi-container
  parameters:
    service: hanatrial
Restriction
Use only with the SAP Node.js module @sap/site-content-deployer
Restriction
Only for use with the SAP Node.js module @sap/site-entry
Enables customers to access foundation services by provisioning.
Restriction
This resource type is now deprecated. Use rabbitmq instead.
● org.cloudfoundry.managed-service
Sample Code
resources:
- name: spark-instance
  type: org.cloudfoundry.managed-service
  parameters:
    service: spark
    service-plan: shared
Note
To choose a different service plan for a predefined MTA resource type, for example, to change the service plan for the PostgreSQL service, you define it as follows:
Sample Code
resources:
- name: my-postgre-service
  type: org.postgresql
  parameters:
    service-plan: v9.4-dedicated-xsmall
● org.cloudfoundry.existing-service
To assume that the named service exists, and to not try to manage its lifecycle, define it by using the org.cloudfoundry.existing-service resource type with the following parameters:
○ service-name
Optional. Service instance name. Default value: the resource name.
● org.cloudfoundry.existing-service-key
Existing service keys can be modeled as a resource of type org.cloudfoundry.existing-service-key,
which checks and uses their credentials. For more information, see Consumption of Service Keys [page 1319].
● org.cloudfoundry.user-provided-service
Create or update a user-provided service configured with the following resource parameters:
○ service-name
Optional. Name of the service to create. Default value: the resource name.
○ config
Required. Map value, containing the service creation configuration, for example, url and user credentials
(user and password)
Example
resources:
- name: my-destination-service
  type: org.cloudfoundry.user-provided-service
  parameters:
    config:
      <credential1>: <value1>
      <credential2>: <value2>
● configuration
For more information, see Cross-MTA Dependencies [page 1309].
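The list above shows no sample for org.cloudfoundry.existing-service; a minimal sketch, with illustrative resource and service names, might look like this:

```yaml
resources:
- name: my-existing-hana
  type: org.cloudfoundry.existing-service
  parameters:
    service-name: hana-instance-already-created
```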
Parameters
Module, resource, and dependency parameters have platform-specific semantics. To reference a parameter value,
use the placeholder notation ${<parameter>}, for example, ${default-host}.
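As an illustration of the notation (not the deploy service's implementation), resolving ${...} placeholders amounts to substitution from a parameter map:

```python
import re

def substitute(text, params):
    """Replace ${name} placeholders with values from `params`."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: str(params[m.group(1)]), text)

params = {"default-host": "trial-a007007-node-hello-world", "protocol": "https"}
print(substitute("${protocol}://${default-host}", params))
# https://trial-a007007-node-hello-world
```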
Tip
It is also possible to declare metadata for parameters and properties defined in the MTA deployment descriptor; the mapping is based on the parameter or property keys. For example, you can specify whether a parameter is required (optional: false) or can be modified (overwritable: true).
default-container-name (resources, read-only)
Default value for the container-name parameter that is used during HDI container creation. It is based on the organization, space, and service name, which are combined in a way that conforms to the container-name restrictions for length and legal characters.
Default: n/a. Example: INITIAL_INITIAL_SERVICE_NAME

default-host (modules, read-only)
The default host name, composed from the target platform name and the module name, which ensures uniqueness. Used with host-based routing to compose the default URI (see below).
Default: n/a. Example: trial-a007007-node-hello-world

health-check-type (modules)
Defines if the module should be monitored for availability.
Examples: health-check-type: http, health-check-type: process

memory (modules)
The memory limit for all instances of an application. This parameter requires a unit of measurement: M, MB, G, or GB, in upper or lower case.
Default: 256M, or as specified in the module type. Example: memory: 128M

org (all, read-only)
Name of the target organization.
Default: the current name of the target organization. Examples: initial, trial

protocol (all, read-only)
The protocol used by the Cloud Foundry environment, for example "http" or "https".
Default: n/a. Examples: http, https

service (resources)
The type of the created service.
Default: empty, or as specified in the resource type. Example: service: hana

service-alternatives (resources)
List of alternatives to a default service offering, defined in the deploy service configuration. If a default service offering does not exist for the current org/space, or creating a service for it fails with a specific error, the service alternatives are used. The order of the service alternatives is considered.
Default: empty, or as specified in the deploy service configuration (resource type). Example: for the Cloud Foundry trial, "hanatrial" is available instead of "hana": service-alternatives: ["hanatrial"]

service-name (resources, read-only)
The name of the service in the Cloud Foundry environment to be created for this resource, based on the resource name with or without a name-space prefix.
Default: the resource name, with or without a name-space prefix. Examples: nodejs-hdi-container, com.sap.xs2.samples.xsjshelloworld.nodejs-hdi-container

service-plan (resources)
The plan of the created service.
Default: empty, or as specified in the resource type. Example: service-plan: hdi-shared

space (all, read-only)
Name of the target organizational space.
Default: n/a. Examples: initial, a007007

controller-url (all, read-only)
The URL of the cloud controller.
Default: n/a. Examples: https://api.cf.sap.hana.ondemand.com, https://localhost:30030

xs-type (all, read-only)
The XS type, Cloud Foundry or XS advanced.
Default: n/a. Examples: CF, XSA

env-var-name (requires dependency, write)
Used when consuming an existing service key. Specifies the name of the environment variable that will contain the service key's credentials. For more information, see Consumption of Service Keys [page 1319].
Default: the name of the service key. Example: env-var-name: SERVICE_KEY_CREDENTIALS

service-key-name (resources, write)
Used when consuming an existing service key. Specifies the name of the service key. For more information, see Consumption of Service Keys [page 1319].
Default: the name of the resource. Example: service-key-name: my-service-key
Related Information
Defining MTA Deployment Descriptors for the Neo Environment [page 1345] (Multi-Target Application deployment descriptor)
Defining MTA Development Descriptors [page 1344] (MTA development descriptor)
Defining MTA Extension Descriptors [page 1322] (Multi-Target Application extension descriptor)
MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1351] (Multi-Target Application module types and parameters)
Integration with Transport Management Tools [page 1380] (how to use transport management tools for moving MTA archives among subaccounts)
● A Multi-Target Application (MTA) archive that bundles all the deployable modules and configurations together
with the accompanying MTA deployment descriptor, which describes the content of the MTA archive, the
module interdependencies, and required and exposed interfaces
● An optional MTA extension descriptor that contains data complementary to the MTA deployment descriptor
Prerequisites
Procedure
Note
Strictly adhere to the correct indentations when working with YAML files, and do not use the tabulator
character.
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.demo.basic
version: 0.1.0
○ Validate the Multi-Target Application against the MTA specification version 3.1
○ Use deploy features specific to the SAP Cloud Platform marked as version 1.0
○ Deploy the Multi-Target Application as a Solution with ID com.example.demo.basic
○ Consider the Multi-Target Application version to be 0.1.0
c. Create the module that describes the Java application. In the mtad.yaml, add the following data:
Example
modules:
- name: example-java-app
  type: com.sap.java
  requires:
  - name: db-binding
  parameters:
    name: example
    jvm-arguments: -server
    java-version: JRE 7
    runtime: neo-java-web
    runtime-version: 1
○ Deploy a Java application with a specific runtime, Java version, and runtime arguments
○ Require an MTA resource called db-binding, where you describe your binding data
d. Describe the database binding id and the database credentials the Java application has to use by adding
the following to the mtad.yaml:
Example
resources:
- name: db-binding
  type: com.sap.hcp.persistence
  parameters:
    id:
The example above instructs the SAP Cloud Platform to create a database binding during the deployment
process.
At this point of the procedure, no database ID or credentials for your database binding have been added. The reason for this is that all the content of the mtad.yaml so far is target-platform independent, meaning that the same mtad.yaml could be deployed to multiple SAP Cloud Platform subaccounts. The information about your database ID and credentials is, however, subaccount-specific. To keep the mtad.yaml target-platform independent, you provide this information in an MTA extension descriptor.
Note
Security-sensitive data, for example database credentials, should always be deployed using an MTA extension descriptor, so that this data is encrypted.
Example
_schema-version: '3.1'
ID: com.example.demo.basic.config
extends: com.example.demo.basic
parameters:
  title: Basic Solution
  description: This is a sample of a basic Solution.
resources:
- name: db-binding
  parameters:
    id: dbalias
    user-id: myuser
    password: mypassword
Example
Manifest-Version: 1.0
Created-By: example.com
Name: example.war
Content-Type: application/zip
MTA-Module: example-java-app
Caution
Make sure that the MANIFEST.MF is compliant with the JAR file specification.
Note
The MTA extension descriptor file is deployed separately from the MTA archive.
Example
/example.war
/META-INF
/META-INF/mtad.yaml
/META-INF/MANIFEST.MF
d. Archive the content of the root directory in an .mtar format using an archiving tool capable of producing a
JAR archive.
Results
After you have created your Multi-Target Application archive, you are ready to deploy it into the SAP Cloud
Platform as a solution. To deploy the archive, proceed as described in Deploy a Standard Solution [page 1416].
Multi-Target Applications are defined in a development descriptor required for design-time and build purposes.
The development descriptor (mta.yaml) defines the elements and dependencies of a Multi-Target Application
(MTA) compliant with the Neo environment.
An MTA development descriptor contains the following main elements, in addition to the deployment descriptor
elements:
● path
● build-parameters
Restriction
The Web IDE currently does not support creating MTA development descriptors for the Neo environment. You have to create the descriptor manually, using a text editor of your choice that supports the YAML serialization language.
The Multi-Target Application (MTA) deployment descriptor is a YAML file that defines the relations between you, as a provider of a deployable artifact, and the SAP Cloud Platform as a deployer tool.
Using the YAML data serialization language, you describe the MTA in an MTA deployment descriptor (mtad.yaml)
file containing the following:
● Modules and module types that represent Neo environment applications and content, which form the MTA
and are deployed on the platform
● Resources and resource types that are not part of an MTA, but are required by the modules at runtime or at
deployment time
● Dependencies between modules and resources
● Technical configuration parameters, such as URLs, and application configuration parameters, such as
environment variables.
See the following examples of a basic MTA deployment descriptor that is defined in an mtad.yaml file:
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.descriptor
version: 0.1.0
modules:
- name: example-java-app
  type: com.sap.java
  requires:
  - name: db-binding
  parameters:
Note
● The format and available options in the MTA deployment descriptor can change with newer versions of the MTA specification. Always specify the schema version when defining an MTA deployment descriptor, so that the SAP Cloud Platform knows which MTA specification version you are deploying against.
● The example above is incomplete. To deploy it, you also have to create an MTA extension descriptor containing the database user and password.
● As of _schema version 3.1, you have the option to input missing values that are required by the Multi-Target Application; the values you enter act as the latest provided MTA extension descriptor. During deployment using the cockpit, the SAP Cloud Platform detects the missing values and opens a dialog where you can enter them. This option can be useful when you need to extend already provided MTAs with new data.
For example, you can choose to provide credentials manually instead of storing and providing them in an
MTA extension descriptor file. Also, you can manually input subaccount-relevant parameter values specific
to the provider or consumer subaccount in the provider-consumer scenario. For more information, see the
Supported Metadata Options subsection of MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1351].
Since the Neo environment supports a different set of module types, resource types, and configuration parameters, the deployment of an MTA archive can be further configured by using MTA extension descriptors. This allows administrators to adapt a deployment to target- or use-case-specific requirements, such as setting URLs, memory allocation parameters, and so on. For more information, see the official Multi-Target Application Model specification.
Related Information
You package the MTA deployment descriptor and module binaries in an MTA archive. You can manually do so as
described below, or alternatively use the MTA Archive Builder tool.
For more information about the MTA Archive Builder tool, see Multi-Target Application Archive Builder.
Note
There could be more than one module of the same type in an MTA archive.
The Multi-Target Application (MTA) archive is created in a way compatible with the JAR file specification. This allows you to use common tools for creating, modifying, and signing such types of archives.
Note
● The MTA extension descriptor is not part of the MTA archive. During deployment you provide it as a
separate file, or as parameters you enter manually when the SAP Cloud Platform requests them.
● Using a resources directory, as in some examples, is not mandatory. You can store the necessary resource files at the root level of the MTA archive, or in another directory with a name of your choice.
The following example shows the basic structure of an MTA archive. It contains a Java application .war file and a
META-INF directory, which contains an MTA deployment descriptor with a module and a MANIFEST.MF file.
Example
/example.war
/META-INF
/META-INF/mtad.yaml
/META-INF/MANIFEST.MF
The MANIFEST.MF file has to contain a name section for each MTA module that is part of the archive and has file content. In the name section, the following information has to be added:
● Name - the path within the MTA archive where the corresponding module is located. If it leads to a directory, add a forward slash (/) at the end.
● Content-Type - the type of the file that is used to deploy the corresponding module
● MTA-Module - the name of the module as it has been defined in the deployment descriptor
Note
● You can store one application in two or more war files contained in the MTA archive.
● According to the JAR specification, there must be an empty line at the end of the file.
Example
Manifest-Version: 1.0
Created-By: example.com
Name: example.war
Content-Type: application/zip
MTA-Module: example-java-app
● Look for the example.war file within the root of the MTA archive when working with module example-java-
app
● Interpret the content of the example.war file as an application/zip
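The rules above can be sketched as a small validity check for a single name section; this is a simplified illustration, not the full JAR manifest grammar (for example, it ignores line continuations):

```python
REQUIRED = ("Name", "Content-Type", "MTA-Module")

def check_name_section(section_text):
    """Return a list of problems found in a single manifest name section."""
    problems = []
    attrs = {}
    for line in section_text.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            attrs[key] = value
    for required in REQUIRED:
        if required not in attrs:
            problems.append("missing attribute: " + required)
    if not section_text.endswith("\n\n"):
        problems.append("manifest must end with an empty line")
    return problems

section = ("Name: example.war\n"
           "Content-Type: application/zip\n"
           "MTA-Module: example-java-app\n\n")
print(check_name_section(section))  # []
```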
Note
The example above is incomplete. To deploy a solution, you have to create an MTA deployment descriptor and
an MTA extension descriptor with the user and password added there. Then you have to create the MTA archive.
Tip
As an alternative to the procedure described above, you can also use the MTA Archive Builder tool. See its
official documentation at Multi-Target Application Archive Builder.
Related Information
The Multi-Target Application (MTA) extension descriptor is a YAML file that contains data complementary to the deployment descriptor. This data can be lifecycle-independent, specially encoded, or security-sensitive, such as credentials and passwords. The MTA extension descriptor has the same structure as the deployment descriptor and can add or overwrite existing data if necessary.
Several extension descriptors can additionally be used after the initial deployment.
Note
The format and available options within the extension descriptor may change with newer versions of the MTA specification. You must always specify the schema version option when defining an extension descriptor to inform the SAP Cloud Platform which MTA specification version should be used. Furthermore, the schema version used within the extension descriptor and the deployment descriptor should always be the same.
In the examples below, we have a deployment descriptor, which has already been defined, and several extension
descriptors.
Note
Each extension descriptor is defined in a separate file with an extension .mtaext.
Deployment descriptor:
Example
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.0'
ID: com.example.extension
version: 0.1.0
resources:
- name: data-storage
  properties:
    existing-data: value
The following example is a basic extension descriptor that extends the deployment descriptor above:
Example
_schema-version: '3.1'
ID: com.example.extension.config
extends: com.example.extension
● Validate the extension descriptor against the MTA specification version 3.1
● Extend the com.example.extension deployment descriptor
The following is a modified version of the extension descriptor that adds new data and overwrites existing data:
Note
This example does not add or overwrite any data in the deployment descriptor.
Example
_schema-version: '3.1'
ID: com.example.extension.first
extends: com.example.extension
resources:
- name: data-storage
  properties:
    existing-data: new-value
    non-existing-data: value
The following is an example of another extension descriptor that extends the extension descriptor from the
previous example:
Example
_schema-version: '3.1'
ID: com.example.extension.second
extends: com.example.extension.first
resources:
- name: data-storage
  properties:
    second-non-existing-data: value
● The examples above are incomplete. To deploy a solution, you have to create a deployment descriptor and an
MTA archive.
● As of _schema-version 3.1, you have the option to input missing values that are required by the Multi-Target Application; the values you enter act as the latest provided MTA extension descriptor. During deployment using the cockpit, the SAP Cloud Platform detects the missing values and opens a dialog where you can enter them. This option can be useful when you need to extend already provided MTAs with new data.
For example, you can choose to provide credentials manually instead of storing and providing them in an MTA
extension descriptor file. Also, you can manually input subaccount-relevant parameter values, specific to the
provider or consumer subaccount in the provider-consumer scenario. For more information, see the
Supported Metadata Options subsection of MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1351] .
Related Information
Defining MTA Deployment Descriptors for the Neo Environment [page 1345]
Defining Multi-Target Application Archives [page 1303]
MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1351]
The Multi-Target Application Model
Tip
This section contains collapsible subsections. By clicking on the arrow-shaped icon next to a subsection, you
can expand it to see additional information.
This section contains the parameters and options that can be used to compose the structure of an MTA
deployment descriptor or an MTA extension descriptor.
Note
As both descriptor types use the YAML file format, strictly adhere to the following syntax practices:
The supported target platform options describe general behavior and information about the deployed Multi-Target Application. The corresponding options are placed within the primary part of the MTA deployment descriptor or MTA extension descriptor; that is, they are not placed within any modules or resources.
Note
● Any sensitive data should be placed within the MTA extension descriptor.
● To ensure that numeric values, such as product version and IDs, are not automatically interpreted as
numbers, always wrap them in single quotes.
_schema-version - Version of the MTA specification to which the MTA deployment descriptor complies. The versions currently supported by the SAP Cloud Platform are:
● 2.1
● 3.1
Type: Enclosed String, use single quotes. Default: n/a. Required: yes.
ID - The identifier of the deployed artifact. The ID should follow the convention for a reverse-URL dot-notation, and it has to be unique within a particular subaccount. Type: String. Default: n/a. Required: yes.
extends - Used in an MTA extension descriptor to denote which MTA deployment descriptor should be extended. Applicable only in extension descriptors. Type: String. Default: the ID of the deployment descriptor that is to be extended. Required: yes.
version - Version of the current Multi-Target Application. The format of the version is a numeric string of <major>.<minor>.<micro>. Type: Enclosed String, use single quotes. Default: n/a. Required: yes.
Note
The value must not exceed 64 symbols.
parameters: hcp-deployer-version - Version of the deploy service of the SAP Cloud Platform. This version differs from the schema version. The currently supported versions are:
● 1.0.0
● 1.1.0
● 1.2.0
Type: Enclosed String, use single quotes. Default: n/a. Required: yes.
Note
● Deployer version 1.0.0 is going to be deprecated. Use version 1.1.0 or higher.
● During a solution update, a different technical approach is employed. For more information, see General Information About Solution Updates [page 1425].
logo - Base64-encoded logotype image. Optimize your image for display. The supported image formats are:
● png
● jpeg
● gif
Type: String. Default: n/a. Required: no.
The following syntax is for a .png logotype that has been encoded in Base64:
Example
logo: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFoAAABaCAMAAAAPdrEwAAAAnFBMVEX///..."
This section contains the modules that are supported by the SAP Cloud Platform and their parameters and
properties.
Note
● The relation between a module and the entities created in the SAP Cloud Platform is not one-to-one, that is,
it is possible for one module to contain several SAP Cloud Platform entities and vice versa.
● Any security-sensitive data, such as user credentials and passwords, has to be placed in the MTA extension
descriptor.
Tip
Expand the following subsections by clicking on the arrow-shaped element to see the available parameters and
values.
name - HTML5 application name, which has to be unique within the current subaccount. Type: String. Default: n/a. Required: yes.
Note
The display-name and name parameters belong to an application level that is different from the one of the application versions. If another application version is defined in the MTA deployment descriptor, then its display name has to be identical to the display names of other already defined versions of the application, or has to be omitted.
version - Application version to be used in the HTML5 runtime. HTML5 modules with the same version can be deployed only once. In the version parameter, the usage of a <timestamp> read-only variable is supported; thus, a new version string is generated with every deploy. For example, version: '0.1.0-${timestamp}'. Type: String. Default: n/a. Required: yes.
active - This flag indicates whether the related version of the application should be activated or not. Type: Boolean. Default: true. Required: no.
subscribe - When a provided solution is consumed, a subscription and designated entities might be created in the consumer subaccount, unless the parameter is set to false. Type: Boolean. Default: true. Required: no.
sfsf-access-point - If true, the application is activated for the SAP SuccessFactors system. Type: Boolean. Default: false. Required: no.
sfsf-idp-access - If true, the extension application is registered as an authorized assertion consumer service for the SAP SuccessFactors system to enable the application to use the SAP SuccessFactors identity provider (IdP) for authentication. Type: Boolean. Default: false. Required: no.
sfsf-home-page-tiles - Registers SAP SuccessFactors Employee Central (EC) home page tiles in the SAP SuccessFactors company instance. Type: Binary. Default: n/a. Required: no.
For more information, see Home Page Tiles JSON File [page 1642]. Ensure that each tile name is unique within the current subaccount.
com.sap.java and java.tomcat - used for deploying Java applications, either with the
proprietary SAP Java Web or the Java Web Tomcat runtime containers.
For more information about runtime containers, see Application Runtime Container [page 1153].
Note
You can deploy these application types using two or more war files contained in the MTA archive.
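As a sketch of how such a Java module might be modeled in the deployment descriptor; the module name, application name, and parameter values are illustrative, not taken from this guide:

```yaml
modules:
- name: example-java-module       # illustrative module name
  type: com.sap.java
  parameters:
    name: exampleapp              # Java application name, unique within the subaccount
    runtime: neo-java-web         # one of the supported runtime containers
    runtime-version: '1'          # enclosed string, wrapped in single quotes
```

The individual parameters are described in the table that follows.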
name - Java application name, which has to be unique within the current subaccount. The name value length has to be between 1 and 255 symbols. Type: String. Default: n/a. Required: yes.
runtime - Depending on the module and its used runtime, use one of the following:
● For com.sap.java:
○ neo-java-web
○ neo-javaee6-wp
○ neo-javaee7-wp
● For java.tomcat - do not define this parameter
Type: String. Default: neo-java-web. Required: yes.
runtime-version - If defining a specific runtime version is required, use one of the following:
● For com.sap.java - for example, 1 or 2
● For java.tomcat - for example, 2 or 3. The major supported runtime versions are 2 (with Tomcat 7) and 3 (with Tomcat 8).
Type: Enclosed String, use single quotes. Defaults: for neo-java-web, 1; for neo-javaee6-wp, 2; for java.tomcat, 2. Required: no.
java-version - The JVM major version, for example JRE 7 or JRE 8. Type: String. Default: JRE 7. Required: no.
compute-unit-size - The virtual machine computing unit size. The available sizes are LITE, PRO, PREMIUM, PREMIUM_PLUS. For more information, see Compute Units [page 1159]. Type: String. Default: LITE. Required: no.
minimum-processes - Minimum number of process instances. The allowed range is from 1 to 99. Type: Integer. Default: 1. Required: no.
Note
You either have to use both the minimum-processes and maximum-processes parameters, or neither.
maximum-processes - Maximum number of process instances. The allowed range is from 1 to 99. Type: Integer. Default: 1. Required: no.
Note
● You either have to use both the minimum-processes and maximum-processes parameters, or neither.
● The maximum-processes value should be equal to or higher than the minimum-processes value.
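To illustrate, both parameters are declared together under a module's parameters; the values here are illustrative:

```yaml
parameters:
  minimum-processes: 2
  maximum-processes: 4   # must be equal to or higher than minimum-processes
```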
rolling-update - Performs an update of an application without downtime in one go. Type: Boolean. Default: false. Required: no.
Note
At least hcp-deployer-version 1.2.0 is required.
rolling-update-timeout - Defines how long the old process will be disabled before it is stopped. Type: Integer. Default: 60. Required: no.
Note
At least hcp-deployer-version 1.2.0 is required.
running-processes - Specifies how many processes will run in the final state of the Java application. If not specified, the minimum number is used. Type: Integer. Default: n/a. Required: no.
jvm-arguments - The relevant JVM arguments employed by the customer application. Type: String. Default: n/a. Required: no.
connection-timeout - The maximum timeout period for the connection, in milliseconds. Type: Integer. Default: 20000. Required: no.
encoding - The used Uniform Resource Identifier (URI) encoding standard. Type: String. Default: ISO-8859-1. Required: no.
compression - The use of gzip compression for optimizing HTTP response time between the Web server and its clients. The available values are "on", "off", and "forced". Type: String. Default: "off". Required: no.
Note
● Always wrap the "on" or "off" values in quotation marks.
● Explicitly specify the compression-mime-types and compression-min-size parameters only when you use the value "on".
compression-mime-types - The used compression MIME types, for example text/json, text/xml, text/html. Type: String. Default: n/a. Required: no.
compression-min-size - The threshold size above which an HTTP response package is compressed to reduce traffic. Type: Integer. Default: n/a. Required: no.
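A sketch of how these three parameters might be combined; the MIME types and threshold value are illustrative, and the unit of compression-min-size is not stated in this guide:

```yaml
parameters:
  compression: "on"                                 # wrapped in quotation marks, as required
  compression-mime-types: text/json, text/xml, text/html
  compression-min-size: 2048                        # illustrative threshold value
```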
role-provider - Defines the application that provides the role for the Java application. Use one of the following:
● sfsf
● hcp
Type: String. Default: n/a. Required: no.
roles - Maps predefined Java application roles to the groups they have to be assigned to. It has to specify the following parameters: Type: List. Default: n/a. Required: no.
subscribe - When a provided solution is consumed, a subscription and designated entities might be created in the consumer subaccount, unless the parameter is set to false. Type: Boolean. Default: true. Required: no.
sfsf-access-point - If true, the application is activated for the SAP SuccessFactors system. Type: Boolean. Default: false. Required: no.
sfsf-idp-access - If true, the extension application is registered as an authorized assertion consumer service for the SAP SuccessFactors system to enable the application to use the SAP SuccessFactors identity provider (IdP) for authentication. Type: Boolean. Default: false. Required: no.
sfsf-connections - Configures the connectivity of a Java extension application to an SAP SuccessFactors system. It creates the required HTTP destination and registers an OAuth client for the Java application in SAP SuccessFactors. An SFSF connection can only be created after the corresponding Java application has been deployed and started, so a module of this type depends on a com.sap.java module. Type: List. Default: n/a. Required: no.
One of the following values should be used to define the connection type:
● technical-user - you can define one or several. You also have to define a technical-user-id for each technical-user, and they should differ. The ID should start with a letter, be in small letters, and not be longer than 30 symbols.
● default - if you choose this connection type, you cannot define a technical-user-id.
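For illustration, a sketch of an sfsf-connections list; the type key is an assumption about how the connection type is declared, and the ID value is hypothetical:

```yaml
parameters:
  sfsf-connections:
  - type: technical-user
    technical-user-id: techuserone   # hypothetical ID: starts with a letter, lowercase, up to 30 symbols
  - type: default                    # no technical-user-id may be defined for this type
```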
sfsf-home-page-tiles - Registers SAP SuccessFactors Employee Central (EC) home page tiles in the SAP SuccessFactors company instance. Type: Binary. Default: n/a. Required: no.
This parameter is a YAML dictionary with one element with key resource and value <path to resource>. The resource is a descriptor file that defines the SAP SuccessFactors tiles. The resource has to be in JSON format.
For more information, see Home Page Tiles JSON File [page 1642]. Ensure that each tile name is unique within the current subaccount.
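As described, the parameter is a one-element YAML dictionary; a minimal sketch with a hypothetical resource path:

```yaml
parameters:
  sfsf-home-page-tiles:
    resource: tiles/home-page-tiles.json   # hypothetical path to the JSON tile descriptor
```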
destinations - This parameter is a YAML list comprised of one or more connectivity destinations. To see the available parameters and values, see the table "Destination Parameters" below. Type: List. Default: n/a. Required: no.
Note
● If you have sensitive data, all destination parameters have to be moved to the MTA extension descriptor.
● When you redeploy a destination, any parameter changes performed after deployment of the destination are removed. Your custom changes have to be performed again.
owner - Indicates in which subaccount the content should be imported. The possible values are provider or consumer. Type: String. Default: provider. Required: no.
Note
● To reduce the risk of being out of sync, we recommend that you use YAML anchors.
● The value must not exceed 64 symbols.
target-site-id - Specifies the target site in which the content will be deployed. Type: String. Default: n/a. Required: no.
minimum-sapui5-version - Version of the minimum required SAPUI5 Runtime. The format of the version is a numeric string of <major>.<minor> or <major>.<minor>.<micro>. Type: Enclosed String, use single quotes. Default: n/a. Required: no.
Note
You have to ensure that the back-end-*-id parameter values are numeric strings of exactly 20 digits.
html5-app-name - SAP Fiori application name, which has to be unique within the current subaccount. Type: String. Default: n/a. Required: yes.
html5-app-active - This flag indicates whether the related version of the application should be activated or not. Type: Boolean. Default: true. Required: no.
name - SAP Fiori custom role name, which has to be unique within the current subaccount. The name value length has to be between 1 and 255 symbols. Type: String. Default: n/a. Required: yes.
groups - List of group names to which the role has to be assigned. Type: List. Default: n/a. Required: no.
For more information, see Role Assignment of Fiori Roles to Security Groups [page 1412].
name - HTML5 application custom role name, which has to be unique within the current subaccount. The name value length has to be between 1 and 255 symbols. Type: String. Default: n/a. Required: yes.
groups - List of group names to which the role has to be assigned. Type: List. Default: n/a. Required: no.
Remember
The use of this module type with parameters valid for hcp-deployer-version: '1.0.0' will soon be de-
supported. We recommend that you use the parameters valid for hcp-deployer-version: '1.1.0', or
adapt your module type accordingly.
Remember
This deployer version will soon be de-supported. We recommend you use 1.1.0.
metadata-validation-setting - Enable or disable metadata validation, for example true. Type: Boolean. Default: n/a. Required: yes.
metadata-cache-setting - Enable or disable metadata cache, for example false. Type: Boolean. Default: n/a. Required: yes.
services - List of OData services. The parameters required for an OData service are: Type: List. Default: n/a. Required: yes.
Note
If a service with the same name/namespace/version com
bination already exists but has different description, the import
will fail.
Note
If a service with the same name/namespace/version com
bination already exists but has different model-id, the import
will fail.
Note
If a service with the same name/namespace/version com
bination already exists but has different default destination, the
import will fail.
com.sap.hcp.sfsf-roles - used for uploading and importing SAP SuccessFactors HCM Suite
roles from the SAP Cloud Platform system repository into the SAP SuccessFactors customer
instance.
The role definitions must be described in a JSON file. For more information about creating the roles.json file, see Create the Resource File with Role Definitions [page 1631].
com.sap.hcp.group - used for modeling the SAP Cloud Platform authorization groups.
name - Group name, which has to be unique within the current subaccount. The name value length has to be between 1 and 255 symbols. Type: String. Default: n/a. Required: yes.
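A minimal sketch of a group module; the module and group names are illustrative:

```yaml
modules:
- name: example-group-module
  type: com.sap.hcp.group
  parameters:
    name: Example-Group   # group name, unique within the subaccount, 1-255 symbols
```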
To see the available parameters and values, see the table “Destination Parameters” below.
com.sap.integration - used for modeling the content for the SAP Cloud Platform Integration
service.
technical-name - Technical name of the com.sap.integration module type. Type: String. Default: n/a. Required: yes.
Note
To use the com.sap.integration module type, first you have to:
● Enable the Solutions Lifecycle Management service that you will use, in a subaccount that supports SAP
Cloud Platform Integration. For more information, see Content Transport in the SAP Cloud Platform
Integration documentation.
● Create a destination named CloudIntegration with the following properties:
○ Type - HTTP
○ URL - URL pointing to the /itspaces of the TMN node for the SAP Cloud Platform Integration tenant
in the current subaccount
○ Proxy Type - Internet
○ Authentication - BasicAuthentication
○ User and password - credentials of a user that has the AuthGroup.IntegrationDeveloper role for
the above-mentioned TMN node
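Assuming the prerequisites above are met, a com.sap.integration module might be modeled as follows; the module and technical names are hypothetical:

```yaml
modules:
- name: integration-content
  type: com.sap.integration
  parameters:
    technical-name: ExampleIntegrationContent   # hypothetical technical name
```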
This section contains the resource types and their parameters that are supported by the SAP Cloud Platform.
Note
● The relation between a module and the entities created in the SAP Cloud Platform is not one-to-one, that is,
it is possible for one module to contain several SAP Cloud Platform entities.
● Any security-sensitive data, such as user credentials and passwords, has to be placed in the MTA extension
descriptor.
<untyped> Used for adding any properties that you might require and
which you define. It does not have a lifecycle.
Note
The untyped resource is unclassified, that is, it does not
have a type.
com.sap.hcp.persistence
id - Identifier of the database that will be bound to a deployed Java application. Type: String. Default: n/a. Required: yes.
Note
If you want to use a <DEFAULT> database binding, the standard data source jdbc/DefaultDB has to be set up at the stage of the Java application development.
Note
We recommend that you place this parameter in the MTA extension descriptor, if you are using one.
Note
The provider subaccount must meet the following criteria:
You can model a named data source by using the parameter binding-name that is added to the database binding
resource required in the requires section of the com.sap.java and java.tomcat module types.
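A sketch of such a named data source, assuming a com.sap.hcp.persistence resource bound to a Java module; the resource name, database ID, and binding name are hypothetical:

```yaml
resources:
- name: example-db
  type: com.sap.hcp.persistence
  parameters:
    id: exampledb                    # database identifier; consider placing it in the extension descriptor
modules:
- name: java-module
  type: com.sap.java
  requires:
  - name: example-db
    parameters:
      binding-name: MyDataSource     # named data source binding, as described above
```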
The MTA specification _schema-version 3.1 introduces the notion of metadata, which can be added to a certain property or parameter.
consumer-optional - Used when you want to provide your Multi-Target Application for consumption by other subaccounts. You can add the consumer-optional metadata to a property to indicate that it should be populated with an MTA extension descriptor when your subscribers consume the Multi-Target Application. If you do not provide the consumer-optional metadata, the deployment of the MTA deployment descriptor within your subaccount will fail due to missing data. Type: Boolean. Default: true. Required: no.
Example
resources:
- name: example-resource
properties:
user:
password:
properties-metadata:
user:
optional: true
consumer-optional: false
password:
optional: true
consumer-optional: false
...
Note
● The optional parameter has to be explicitly defined and set
to true if you want to use the option consumer-
optional. See the MTA specification for additional informa
tion.
● This option is available for Multi-Target Application schema 3.1.0 and higher.
Example
resources:
- name: example-resource
properties:
user:
properties-metadata:
user:
description: Example resource user name
...
sensitive - If it is used as metadata about a parameter without a value, it prompts the cockpit to use a password input field, that is, with hidden content. Type: Boolean. Default: false. Required: no.
resources:
- name: example-resource
properties:
password:
properties-metadata:
password:
sensitive: true
...
complex - If it is used as metadata about a parameter without a value, it prompts the cockpit to use an input field for free text, for example, a description of a solution in several lines of text. Type: Boolean. Default: false. Required: no.
Example
resources:
- name: example-resource
properties:
description:
properties-metadata:
description:
complex: true
...
Note
This parameter is not taken into account if you use it in conjunction
with the sensitive parameter. The Password input field is used
instead.
Example
resources:
- name: example-resource
properties:
user:
properties-metadata:
user:
default-value: John Doe
...
Depending on the type of the destination that you wish to create (subaccount-level, application-level, subscription
destination, and so on), the destination can be modeled as a module com.sap.hcp.destination, or as a
parameter of the modules com.sap.java or java.tomcat. However, the options available when you create a
destination are the same for all of the destination types.
description - Type: String.
url - Type: URL. Required: yes when the parameter type has the HTTP or LDAP values; mandatory only for these types.
user - Type: String. Required:
● if BasicAuthentication is the Authentication type. Mandatory only for this type.
● if the parameter type has the values MAIL or RFC.
password - Type: String. Required:
● if BasicAuthentication is the Authentication type. Mandatory only for this type.
● if MAIL or RFC is the destination type.
client - Type: String. Required: yes. 3 digits, in single quotes. Use with the RFC parameter type; mandatory only for this type.
client-ashost - Type: String. 00-99, in single quotes. Use with the RFC parameter type. Either this or client-mshost must be specified.
client-r3name - Type: String. 3 letters or digits. Use with the RFC parameter type, if client-mshost is specified.
Example
Additional destination properties can use the prefixes ldap., mail., jco.client., and jco.destination.
When modeling destinations, the SAP Cloud Platform offers several keyword properties that you can use to refine your declaration when deploying a destination. The following destination keywords are available:
application-url - This keyword can be placed only within the properties category of the provides section of the com.sap.java and java.tomcat module types. It is used when you want to extract the URL of the Java application and link it to a destination that you have modeled.
The following example contains a Java application that has a destination that leads to itself. Note that this
example uses the MTA placeholder concept. For more information, see “Destination with Specific Target
Platform Data Options” below.
Example
modules:
- name: java-module
type: com.sap.java
provides:
- name: java-module
properties:
application-url: ${default-url}
requires:
- name: java-module
parameters:
name: exampleapp
destinations:
- name: ExampleWebsite
type: HTTP
url: ${java-module/application-url}
...
When modeling destinations, there might be cases in which some of the destination data is not known to you prior to deploying the MTA archive. Such data might be, for example, the URL of a Java application that you want your destination to point to. To address these cases, SAP Cloud Platform provides several placeholders that you can use when you model your MTA. Placeholders are part of the Multi-Target Application model.
Currently all types of destinations support the following placeholders, which are automatically resolved with their valid values during deployment.
${default-url} - Instructs SAP Cloud Platform to resolve the placeholder value to the default Java application URL when deploying the destination. This placeholder can be part only of the property named application-url, which serves as a provided dependency of the com.sap.java and java.tomcat module types.
This example shows the usage of the ${default-url} placeholder. The modeled java-module provides the application-url dependency:
Example
modules:
- name: java-module
type: com.sap.java
provides:
- name: java-module
properties:
application-url: ${default-url}
parameters:
name: exampleapp
...
Note
● This placeholder can be used only with destination types that have a URL within their properties,
that is, destination types such as an HTTP destination.
● This URL can be automatically resolved only if the Java Application has only one URL.
${account-name} - Instructs SAP Cloud Platform to resolve the placeholder value to your subaccount name when deploying the destination. This placeholder can be used only in the url parameter for a destination, the token-service-url parameter, and in the application-url property, which serves as a provided dependency of the com.sap.java and java.tomcat module types.
Example
modules:
- name: java-module
type: com.sap.java
provides:
- name: java-module
properties:
application-url: ${default-url}/accounts/${account-name}/example
parameters:
name: exampleapp
...
- name: abc-java
type: com.sap.java
parameters:
destinations:
- name: ExampleWebsite
type: HTTP
url: http://abc.example.com/accounts/${account-name}
....
${provider-account-name} - Instructs SAP Cloud Platform to resolve the placeholder value to the subaccount name of the provider when the destination is being deployed. This placeholder can be used only in the url parameter for a destination and the token-service-url parameter. You can use it if you want to employ a model where a destination is created within your subscriber's subaccount and you want it to point to a URL in your provider subaccount.
Example
modules:
- name: abc-java
type: com.sap.java
parameters:
destinations:
- name: ExampleWebsite
type: HTTP
url: http://abc.example.com/accounts/${provider-account-name}
owner: consumer
....
Note
● In this example, the subscriber subaccount is consuming a solution that is provided by you.
● The owner: consumer parameter of the destination indicates that this destination is going to be deployed into the subaccount of the consumer.
${landscape-url} - Instructs the SAP Cloud Platform to resolve the placeholder value to the current landscape URL when deploying the destination. This placeholder can be used only in the url property for a destination, the token-service-url parameter, and in the application-url property that serves as a provided dependency of the com.sap.java and java.tomcat module types.
Example
modules:
- name: java-module
type: com.sap.java
provides:
- name: java-module
properties:
application-url: myjava.${landscape-url}/
parameters:
name: exampleapp
...
- name: abc-java
type: com.sap.java
parameters:
destinations:
- name: ExampleWebsite
type: HTTP
url: abc.${landscape-url}/
....
To transport an application to other subaccounts, you can set up a transport route using the Change and Transport
System (CTS+) or the Transport Management Service.
Change Management with CTS+ [page 1381]
Setting Up a Direct Upload of MTA Archives to a CTS+ Transport Request [page 1381]
Setting Up Direct Uploads of MTA Archives Using the Transport Management Service (BETA) [page 1383]
Configuring the Access to the Solutions Lifecycle Management Service [page 1385]
You can enable transport of SAP Cloud Platform applications using the enhanced Change and Transport System
(CTS+) tool.
Prerequisites
To be able to transport an application, package it in a Multi-Target Application (MTA) archive as described in Multi-
Target Applications [page 1292].
Context
Use CTS+ to transport and promote your applications, for example, from development to a test or production
environment. You can also deploy one or several MTA archives to your subaccount in one run.
Procedure
Trigger the import of an SAP Cloud Platform application as described in How To... Configure SAP Cloud Platform
for CTS.
Caution
SAP Cloud Platform applications cannot be exported to CTS+. You need to manually add them to a transport
request.
Use the CTS+ Export Web Service to perform a transport of an MTA from one subaccount to another.
Prerequisites
● You have activated and configured the CTS+ Export Web Service as described in Activating and Configuring
CTS Export Web Service.
Note the Alternative Access URL and Calculated Access URL of the web service, which can be found in the
transport settings.
You can define the Alternative Access URL using the following pattern: /<ABAP Client ID>/
export_cts_ws. The Calculated Access URL follows the pattern /sap/bc/srt/rfc/sap/export_cts_ws/
<ABAP Client ID>/export_cts_ws/export_cts_ws.
● You have to define a user that is going to call the CTS+ Export Web Service. This user needs to have the
following user roles:
○ SAP_BC_WEBSERVICE_CONSUMER
○ SAP_CTS_PLUS
● You have installed and configured the Cloud Connector, which is used to connect on-premise systems with the
SAP Cloud Platform. For more information, see Cloud Connector.
● You have exposed the CTS+ Export Web Service URLs as described in Configure Access Control (HTTP).
Note
If you maintain a trusted applications list and a principal propagation trust configuration, you have to authorize the application.
● Define the transport systems and route corresponding to your SAP Cloud Platform subaccounts. For more
information, see How To... Configure HCP for CTS
Context
Procedure
1. Define the destinations leading to the on-premise systems. In the SAP Cloud Platform cockpit, navigate to Services > Solution Lifecycle Management > Configure Destinations > New Destination.
2. For the new destination configuration, enter the required parameters:
○ Name: TransportSystemCTS
Note
This destination name is mandatory.
○ Type: HTTP
○ URL: <Exposed URL of the system, taken from the Cloud Connector section,
following the convention: https://<virtual host name>:<virtual port, such as
443>/<Alternative or Calculated Access URL>><System ID of the source system in
the transport route, which is defined above>
Results
For this feature to be consumed by the Cloud Platform Integration, see Content Transport.
Related Information
Create the required configurations for using the Transport Management Service (BETA), in order to transport MTA
archives from one subaccount to another.
Prerequisites
● You are subscribed and have access to the Transport Management Service, and have set up the environment
to transport MTA archives directly in an application.
● You have a service key, which contains parameters you need to refer in the required destinations.
Note
The Transport Management Service is a beta feature available in the SAP Cloud Platform for global accounts
that have specifically registered for this service with SAP. For general information about beta features, see Using
Beta Features in Subaccounts [page 16].
To perform transports of MTA archives using the Transport Management Service, you have to create and set up
destinations defining the source transport node for transporting MTA archives. Proceed as follows:
Procedure
1. In the SAP Cloud Platform cockpit, navigate to Services > Solution Lifecycle Management > Configure Destinations.
2. Choose New Destination to create the destination directed at the Transport Management Service URL and
defining the source transport node. Enter the following parameters:
○ Name: TransportManagementService
Note
This destination name is mandatory.
○ Type: HTTP
○ URL: <"uri" parameter specified in the Service Key>
○ Authentication: NoAuthentication
○ Proxy Type: Internet
○ Choose New Property, and from the drop-down list select sourceSystemId. As a value, enter the ID of
the source node of the transport route, for example, DEV_NODE.
3. Save the destination.
4. Choose New Destination to create the OAuth authentication client destination. Enter the following parameters:
○ Name: TransportManagementServiceOAuth
Note
This destination name is mandatory.
○ Type: HTTP
○ URL: <"url" specified in the Service Key>
○ User: <"clientid" specified in the Service Key>
○ Password: <"clientsecret" specified in the Service Key>
○ Authentication: BasicAuthentication
5. Save the destination.
You can use the Transport Management Service (BETA) to transport MTA archives.
Related Information
Get Access
Set Up the Environment to Transport MTAs Directly in an Application
Creating Service Keys [page 1071]
Content Transport
To deploy Multi-Target Applications from other tools, such as CTS+ or the Transport Management Service, you
have to connect to the Solution Lifecycle Management service by using its dedicated service endpoint https://
slservice.<landscape-host>/slservice/. Two authentication methods are available: basic
authentication and an OAuth platform API client. The complete URL you have to use follows one of these
patterns, respectively:
● https://slservice.<landscape-host>/slservice/slp/basic/<account-id>/slp/ -
authentication using username and password
● https://slservice.<landscape-host>/slservice/slp/oauth/<account-id>/slp/ -
authentication using an OAuth token created using the OAuth client
● Basic authentication:
1. Ensure the user has an assigned platform role that contains the following scopes:
○ Manage Multi-Target Applications
○ Read Multi-Target Applications
For more information, see Managing Member Authorizations [page 1671].
● Authentication using an OAuth Client:
1. Create a new OAuth client as described in Using Platform APIs [page 1289].
2. During the process, assign the following scopes from the Solution Lifecycle Management API:
○ Manage Multi-Target Applications
○ Read Multi-Target Applications
In the context of SAP Cloud Platform, a solution comprises various application types and configurations,
designed to serve a certain scenario or task flow. Typically, the parts of a solution are interconnected and share
a common lifecycle: they are explicitly deployed, updated, deleted, configured, and monitored together.
A solution allows you to easily manage complex deployable artifacts. You can compose a solution yourself, or
you can acquire one from a third-party vendor. Furthermore, you can use solutions to deploy artifacts that
comprise entities external to SAP Cloud Platform, such as SAP SuccessFactors entities. This gives you common
management and a common lifecycle for artifacts spread across various SAP platforms and systems.
● A Multi-Target Application (MTA) archive, which contains all required application types and configurations as
well as a deployment descriptor file. It is intended to be used as a generic artifact that can be deployed and
managed on several SAP Cloud Platform subaccounts. For example, you can reuse one MTA archive on your
development and productive subaccounts.
● (Optional) An MTA extension descriptor file that contains deployment-specific data. It is intended to be
used as a specific data source for a given SAP Cloud Platform subaccount. For example, you can have different
extension descriptors for your development and productive subaccounts. Alternatively, you can also provide
this data manually during the solution deployment.
You model the supported entities according to the MTA specification so that they can be deployed as a solution.
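As an illustrative sketch only (the IDs, version, and file names are assumptions, not taken from the examples that follow), the pairing of a deployment descriptor packaged in the MTA archive with a subaccount-specific extension descriptor kept outside it could look like this:

```yaml
# mtad.yaml: deployment descriptor packaged inside the MTA archive
# (reusable across subaccounts)
_schema-version: '3.1'
parameters:
  hcp-deployer-version: '1.1.0'
ID: com.example.solution
version: 0.1.0
---
# extension descriptor supplied separately, one per target subaccount
_schema-version: '3.1'
ID: com.example.solution.dev
extends: com.example.solution
```

The extension descriptor references the archive's descriptor through the extends element, which is the mechanism used throughout the examples below.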
Related Information
SAP Cloud Platform allows you to deploy Java applications that run either on the proprietary SAP Java Web or
on the Java Web Tomcat runtime container. The corresponding MTA module types are com.sap.java and java.tomcat.
When you model a Java application in the MTA deployment descriptor, you can specify a set of properties related
to this application. For a complete list of the supported properties, see MTA Module Types, Resource Types, and
Parameters for Applications in the Neo Environment [page 1351].
If a Java application is a part of your solution, the following rules apply during deployment:
● The Java application is deployed and started at the end of the deployment
Note
Consider the following:
Java applications are modeled as Multi-Target Application (MTA) specification modules.
For the specification of the Java application module, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1351].
For the examples below, we assume that you have the following:
Example
MTA Deployment Descriptor for com.sap.java
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.javaapp
version: 0.1.0
modules:
- name: example-java
type: com.sap.java
parameters:
name: exampleapp
jvm-arguments: -server
java-version: JRE 7
runtime-version: 1
requires:
- name: dbbinding
resources:
- name: dbbinding
type: com.sap.hcp.persistence
parameters:
id: tst
Example
MTA Deployment Descriptor for java.tomcat
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.javaapp
version: 0.1.0
modules:
- name: example-java
type: java.tomcat
parameters:
name: exampleapp
jvm-arguments: -server
java-version: JRE 8
runtime-version: 3
requires:
- name: dbbinding
resources:
- name: dbbinding
type: com.sap.hcp.persistence
parameters:
id: tst
The examples above show the required application module properties, such as the Java version, runtime
version, and JVM arguments.
You also have to create an MTA extension descriptor that will hold sensitive data, such as credentials.
Note
Always enter the security-sensitive data of your solution in an MTA extension descriptor.
Example
MTA Extension Descriptor
_schema-version: '3.1'
ID: com.example.basic.javaapp.config
extends: com.example.basic.javaapp
parameters:
title: Java Application Example
description: This is an example of the Java Application module
resources:
- name: dbbinding
parameters:
user-id: myuser
password: mypassword
In the example above, the extension descriptor adds the user-id and password parameters to the resource,
which is modeled in the deployment descriptor.
After you deploy your solution, you can open its tile in the cockpit and check if the Java application is deployed.
When you model an application in the MTA deployment descriptor, you have to specify a set of properties related
to the application. For a complete list of the supported properties, see MTA Module Types, Resource Types, and
Parameters for Applications in the Neo Environment [page 1351].
The following rules apply when you deploy a solution that contains an HTML5 application:
● If an application with an identical name but a different version already exists in your subaccount, the new
version is added in parallel to the existing application. Depending on the value of the active parameter, the
new version is activated.
● If an application with an identical name and an identical version already exists in your subaccount, the
application in the solution to be deployed is skipped.
● If no version is specified in the MTA deployment descriptor, the application is deployed with its current
timestamp as the version.
● When you delete a solution containing an HTML5 application, the application itself and all of its versions are
going to be deleted.
Note
The HTML5 application has to be packed in a ZIP archive.
Example
Sample Code
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.sap.example.html5
version: 0.1.0
modules:
- name: examplehtml5
type: com.sap.hcp.html5
parameters:
name: example
version: '0.1.0'
active: true
display-name: Example HTML5
To always create a new version of the HTML5 application, you can also use ${timestamp} as a suffix to your
version:
- name: examplehtml5
type: com.sap.hcp.html5
parameters:
name: example1
version: '0.1.0-${timestamp}'
Related Information
By using a database binding, a Java application connects to a database set up in your current subaccount or
provided by another subaccount that is part of the same global account. This connection is modeled within your
solution and set up during the deployment operation.
Note
Ensure the following prerequisites:
● You have a database that is set up in your subaccount or there is a database provided to you by another
subaccount.
● You have valid credentials for that database. If you do not have valid credentials for the database, default
credentials are generated for you.
Note
You cannot have a database binding to the <DEFAULT> data source together with a database binding to a
named data source, but you can have more than one database binding to named data sources.
Each database binding is modeled as a Multi-Target Application (MTA) resource, which is required by a Java
application module.
For specification of the database binding resource, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1351].
First, model the deployment descriptor that contains the Java application module and the database binding
resource; then create an extension descriptor that holds sensitive data, such as credentials.
Note
Make sure that you always use an extension descriptor when you have sensitive data within your solution.
Example
MTA Deployment Descriptor
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.dbbinding
version: 0.1.0
modules:
- name: example-java
type: com.sap.java
parameters:
name: exampleapp
jvm-arguments: -server
java-version: JRE 7
runtime: neo-java-web
runtime-version: 1
requires:
Example
MTA Extension Descriptor
_schema-version: '3.1'
ID: com.example.basic.dbbinding.config
extends: com.example.basic.dbbinding
parameters:
title: Database binding example
description: This is an example of the database binding resource
resources:
- name: dbbinding
parameters:
user-id: myuser
password: mypassword
In the example above, the extension descriptor adds the user-id and password parameters to the resource
dbbinding, which is modeled in the deployment descriptor.
Example
MTA Deployment Descriptor
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.dbbinding
version: 0.1.0
modules:
- name: example-java
type: com.sap.java
parameters:
name: exampleapp
jvm-arguments: -server
java-version: JRE 7
runtime: neo-java-web
runtime-version: 1
requires:
- name: firstdbbinding
parameters:
binding-name: tstbinding
- name: seconddbbinding
Example
MTA Extension Descriptor
_schema-version: '3.1'
ID: com.example.basic.dbbinding.config
extends: com.example.basic.dbbinding
parameters:
title: Named Database bindings example
description: This is an example of the database binding resources
resources:
- name: firstdbbinding
parameters:
user-id: myuser
password: mypassword
- name: seconddbbinding
parameters:
user-id: myuser
password: mypassword
In the example above, the extension descriptor adds the user-id and password parameters to each resource,
which is modeled in the deployment descriptor.
Note
The provider subaccount must belong to the same global account to which your subaccount belongs.
Example
MTA Deployment Descriptor
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.dbbinding
version: 0.1.0
modules:
- name: example-java
type: com.sap.java
parameters:
name: exampleapp
jvm-arguments: -server
java-version: JRE 7
runtime: neo-java-web
runtime-version: 1
requires:
- name: dbbinding
resources:
- name: dbbinding
type: com.sap.hcp.persistence
parameters:
id: tst
account: abcd
Example
MTA Extension Descriptor
_schema-version: '3.1'
ID: com.example.basic.dbbinding.config
extends: com.example.basic.dbbinding
parameters:
title: Database binding example
description: This is an example of the database binding resource
resources:
- name: dbbinding
parameters:
user-id: myuser
password: mypassword
In the example above, the MTA extension descriptor adds the user-id and password parameters to the resource
dbbinding, which is modeled in the MTA deployment descriptor. After you deploy your solution, you can open its
tile in the cockpit and check if the database binding is created.
● Database aliases tst and abc, which are provided by another subaccount
Example
MTA Deployment Descriptor
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.dbbinding
version: 0.1.0
modules:
- name: example-java
type: com.sap.java
parameters:
name: exampleapp
jvm-arguments: -server
java-version: JRE 7
runtime: neo-java-web
runtime-version: 1
requires:
- name: firstdbbinding
parameters:
binding-name: tstbinding
- name: seconddbbinding
parameters:
binding-name: abcbinding
resources:
- name: firstdbbinding
type: com.sap.hcp.persistence
parameters:
id: tst
account: abcd
- name: seconddbbinding
type: com.sap.hcp.persistence
parameters:
id: abc
account: test
Example
MTA Extension Descriptor
_schema-version: '3.1'
ID: com.example.basic.dbbinding.config
extends: com.example.basic.dbbinding
parameters:
title: Named database bindings example
description: This is an example of the database binding resources
In the example above, the MTA extension descriptor adds the user-id and password parameters to each
resource, which is modeled in the MTA deployment descriptor. After you deploy your solution, you can open its tile
in the cockpit and check if the database bindings are created.
Note
When you delete a database binding, the credentials that you used are not removed from the database. Delete
them manually, if you want to do so.
Related Information
You can connect your applications to another source by describing the source connection properties in a
destination. Later on, you can access that destination from your application.
Depending on whether the destination source is located within the SAP Cloud Platform or not, the destinations are
classified as internal or external. You can also provide a solution for consumption by another SAP Cloud Platform
subaccount and define a destination as deployable to all subscriber subaccounts.
The supported destination levels you can model within a solution are subaccount level and application level.
Subaccount-level destinations are not linked to a particular application, but instead can be used by all applications.
For example, the subaccount-level destination can be used by an HTML5 application to connect to a source Java
application.
Note
If you modify a subaccount-level destination, you will affect all applications that use it. The subaccount-level
destination has a lifecycle that is independent from the applications that use it.
Destinations to external resources lead to services or applications that are not part of the current Multi-Target
Application (MTA) archive and you do not have direct access to them. For example, it might be an application
running in another subaccount or outside SAP Cloud Platform.
When you want to describe subaccount-level destinations to external resources, you model them as a module of
type com.sap.hcp.destination. In this type of destination relation, you first declare that a module requires
the dependency using a requires element, and then you provide the dependency details as module type
parameters. The subaccount-level destination has a lifecycle that is independent from the applications that use it.
Note that if you need your Java application to have more than one destination, you have to model each
subaccount-level destination in a separate module.
For a list of the available destination parameters, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1351] and The Multi-Target Application Model design document.
Remember
Note the following important cases when it comes to deploying a destination:
● If you need more than one destination, you have to model each subaccount-level destination in a separate
module.
● Use the property force-overwrite to choose a redeployment approach for an existing destination. Place
the property at destination level, and use the value true to have the destination forcefully overwritten, or
false to leave it unchanged, which is also the default behavior.
● If you want a destination to be created before the application that requires it, use a required dependency
to instruct SAP Cloud Platform to define the deploy order. For example, if you have a Java application with a
destination defined in its web.xml file, you have to use the required dependency so that the destination is
resolved correctly when the Java application starts.
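As an illustrative sketch of the force-overwrite property described above, assuming it is placed among the destination parameters of a com.sap.hcp.destination module (the module name and URL are invented for the example):

```yaml
modules:
- name: examplewebsite-connection
  type: com.sap.hcp.destination
  parameters:
    name: ExampleWebsite
    type: HTTP
    url: http://www.examplewebsite.com
    proxy-type: Internet
    # true: forcefully overwrite an existing destination on redeployment;
    # false (the default): leave an existing destination unchanged
    force-overwrite: true
```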
Example
modules:
- name: nwl
type: com.sap.java
requires:
- name: examplewebsite-connection
parameters:
name: networkinglunch
...
- name: examplewebsite-connection
type: com.sap.hcp.destination
parameters:
name: ExampleWebsite
type: HTTP
description: Connection to ExampleWebsite
url: http://www.examplewebsite.com
proxy-type: Internet
authentication: BasicAuthentication
user: myuser
password: mypassword
...
In the example above, the module type com.sap.hcp.destination is used to define the subaccount-level
destination, and the Java module nwl requires it so that the destination is created prior to starting the Java
application. The requires section ensures the proper ordering.
The example above results in a subaccount-level destination created within your subaccount, with
credentials still located in the MTA deployment descriptor. If you are providing your solution for
consumption by another subaccount, you might want to create that destination in the subscriber subaccount. To
do this, use the owner option.
Example
MTA Deployment Descriptor
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.destination.subaccount
version: 0.1.0
modules:
- name: nwl
type: com.sap.java
requires:
- name: examplewebsite-connection
parameters:
name: networkinglunch
- name: examplewebsite-connection
type: com.sap.hcp.destination
requires:
- name: data-storage
parameters:
name: ExampleWebsite
type: HTTP
description: Connection to ExampleWebsite
url: http://www.examplewebsite.com
proxy-type: Internet
authentication: BasicAuthentication
user: ~{data-storage/user}
password: ~{data-storage/password}
owner: consumer
Example
MTA Extension Descriptor
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.destination.subaccount.config
extends: com.example.basic.destination.subaccount
version: 0.1.0
modules:
resources:
- name: data-storage
properties:
user: myuser
password: mypassword
In the example above, the examplewebsite-connection destination is deployed to the subscriber subaccount.
The untyped resource data-storage contains the sensitive parameters, which are deployed using the MTA
extension descriptor.
Note
● The reference syntax ~{source/value} is used to link the destination's user and password options.
● The data-storage resource is untyped.
For more information, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo
Environment [page 1351].
In the example above, you create the destination within the subscriber subaccount, but the credentials for that
destination are still provided by you. If the consumer of your solution has to provide the credentials for the
destination, you have to use the consumer-optional metadata element.
Note
Metadata is available in MTA archives with schema version 3.1 and higher.
Example
MTA Deployment Descriptor
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.destination.subaccount
version: 0.1.0
modules:
- name: nwl
type: com.sap.java
requires:
- name: examplewebsite-connection
parameters:
Example
MTA Extension Descriptor (provider)
_schema-version: '3.1'
ID: com.example.basic.destination.subaccount.config
extends: com.example.basic.destination.subaccount
parameters:
title: Subaccount Destination Example
description: This is an example of the sample Subaccount Destination
Example
MTA Extension Descriptor (subscriber)
_schema-version: '3.1'
ID: com.example.basic.destination.subaccount.config.subscriber
extends: com.example.basic.destination.subaccount.config
resources:
- name: data-storage
properties:
user: subscriberuser
password: subscriberpassword
In the example above, the consumer-optional metadata is used to require the subscriber to provide the
credentials. The credentials are provided by the consumer's extension descriptor, not by the provider's.
Note
● The consumer-optional metadata is used together with the optional option.
● If you do not use the consumer-optional metadata when you deploy the solution to your subaccount, the
operation will fail due to missing data.
The subaccount-level destination to an internal application is a destination of type HTTP that points to a Java
application, which in turn is part of the current MTA and will be deployed with the same solution. It is modeled as a
com.sap.hcp.destination module type.
Note
● If you need more than one destination, you have to model each subaccount-level destination in a separate
module.
● To overwrite an already existing destination, you have to use the force-overwrite option. For more
information, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo
Environment [page 1351].
In the following example, an HTML5 application, which uses a subaccount-level destination to an internal resource,
connects to a Java application as a backend. The destination uses the URL of the Java application as its URL
target:
Example
- name: abc
type: com.sap.java
provides:
- name: abc
properties:
application-url: ${default-url}
parameters:
name: networkinglunch
...
- name: abc-destination
type: com.sap.hcp.destination
requires:
- name: abc
parameters:
name: NetworkingLunchBackend
type: HTTP
url: ~{abc/application-url}
proxy-type: Internet
authentication: AppToAppSSO
- name: abc-ui
type: com.sap.hcp.html5
requires:
- name: abc-destination
parameters:
name: networkingui
● The Java module abc provides the application-url property, whose value is a placeholder. For more
information about the ${default-url} placeholder, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1351].
● The destination abc-destination is a module of type com.sap.hcp.destination and requires the Java
application module.
Note
Ensure that your applications do not have circular dependencies, that is, one Java application must not refer
to the application-url property of another Java application module that in turn refers back to it.
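To illustrate the restriction above, a sketch of the circular dependency to avoid (module names are invented for the example): each Java module consumes the application-url provided by the other, so neither can be deployed first.

```yaml
modules:
- name: app-a
  type: com.sap.java
  provides:
  - name: app-a
    properties:
      application-url: ${default-url}
  requires:
  - name: app-b    # app-a reads app-b's URL ...
- name: app-b
  type: com.sap.java
  provides:
  - name: app-b
    properties:
      application-url: ${default-url}
  requires:
  - name: app-a    # ... while app-b reads app-a's URL: circular, not allowed
```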
Application-level destinations apply only within a given application, in contrast to subaccount-level destinations,
which apply to the whole subaccount. You can use them to connect your application to resources outside SAP
Cloud Platform, to applications that are not part of your subaccount, to applications from your subaccount, and
even to your own application.
Destinations to external resources lead to services or applications external to and not accessible by the current
Multi-Target Application (MTA) archive. For example, it can be an application running in another subaccount.
Application-level destinations to external resources are modeled as items within the destinations parameter of
the com.sap.java module type. This means that the lifecycle of such a destination is bound to the lifecycle of the
corresponding application.
For a list of the available destination parameters, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1351] and The Multi-Target Application Model design document.
Remember
Note the following special cases when it comes to deploying a destination:
● If you need more than one destination, you have to model each subaccount-level destination in a separate
module.
● Use the property force-overwrite to choose a redeployment approach for an existing destination. Place
the property at destination level, and use the value true to have the destination forcefully overwritten, or
false to leave it unchanged, which is also the default behavior.
● You cannot define a destination in the web.xml file of the Java application, if that specific destination points
to the application itself, and has a URL automatically resolved by the SAP Cloud Platform. You have to
manually resolve the destination in the application code.
Example
modules:
- name: abc
type: com.sap.java
parameters:
name: networking
...
destinations:
- name: ExampleHttpDestination
type: HTTP
url: http://www.examplewebsite.com
proxy-type: Internet
authentication: BasicAuthentication
user: myuser
password: mypassword
● The com.sap.java module defines the Java application that has to be deployed.
● The destinations parameter defines the destinations that have to be created.
As a result, the example above creates an application-level destination within your subaccount with credentials,
which are still located in the MTA deployment descriptor.
If you want to provide your solution for consumption by another subaccount, you can create that destination in
the subscriber subaccount. To do this, use the owner option.
Example
Deployment Descriptor
modules:
- name: abc
type: com.sap.java
parameters:
name: networking
requires:
- name: data-storage
...
destinations:
- name: ExampleDestination
type: HTTP
url: http://www.examplewebsite.com
proxy-type: Internet
authentication: BasicAuthentication
user: ~{data-storage/user}
password: ~{data-storage/password}
owner: consumer
...
resources:
- name: data-storage
properties:
user:
password:
...
Extension Descriptor
...
Note
The owner option indicates that the destination has to be deployed to the subscriber subaccount.
● The untyped resource data-storage contains the sensitive parameters, which are deployed using the MTA
extension descriptor.
Note
Be aware of the following:
○ The reference syntax ~{source/value} is used to link the destinations user and password options.
○ The data-storage resource is untyped.
For more information, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo
Environment [page 1351]
The example above creates the destination within the subscriber's subaccount, but the credentials for that
destination are still provided by you. If the consumer of your solution has to provide the credentials for
the destination, you have to use the consumer-optional metadata.
Note
Metadata is available in MTA archives with schema version 3.1 and higher.
Example
Deployment Descriptor
modules:
- name: abc
type: com.sap.java
parameters:
name: networking
requires:
- name: data-storage
...
destinations:
- name: ExampleDestination
type: HTTP
url: http://www.examplewebsite.com
proxy-type: Internet
authentication: BasicAuthentication
user: ~{data-storage/user}
password: ~{data-storage/password}
owner: consumer
...
...
# no credentials provided
...
resources:
- name: data-storage
properties:
user: subscriberuser
password: subscriberpassword
...
In the example above, the consumer-optional metadata is used to require the consumer of your solution to
provide the required credentials. In this case, the consumer's MTA extension descriptor provides the
credentials instead of the provider's MTA extension descriptor.
Note
The consumer-optional metadata is used together with the optional option.
If you do not use the consumer-optional metadata when you deploy the solution to your subaccount, the
operation will fail due to missing data.
The application-level destination to an internal application is an HTTP type destination, which can point to the
same or different Java application deployed with the same solution. It is modeled as a com.sap.java or
java.tomcat module.
Note
● If you need more than one destination, you have to model each subaccount-level destination in a separate
module.
● To overwrite an already existing destination, you have to use the force-overwrite option. For more
information, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo
Environment [page 1351].
For additional options of what can be resolved automatically by the SAP Cloud Platform during deployment see
MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment [page 1351].
Example
- name: abc
type: com.sap.java
provides:
- name: abc
properties:
application-url: ${default-url}
parameters:
name: javaapp1
...
- name: abc-ui
type: com.sap.java
requires:
- name: abc
parameters:
name: javaapp2
...
destinations:
- name: JavaAppBackend
type: HTTP
url: ~{abc/application-url}
proxy-type: Internet
authentication: AppToAppSSO
Note
The naming of the application-url property is mandatory. For more details, see MTA Module Types,
Resource Types, and Parameters for Applications in the Neo Environment [page 1351].
● The value of the application-url property is a placeholder. For more details about the ${default-url}
placeholder, see MTA Module Types, Resource Types, and Parameters for Applications in the Neo Environment
[page 1351]
● The destination JavaAppBackend is an entry of the destinations parameter of the com.sap.java module.
● The module abc-ui requires the module abc. By requiring the abc module, the abc-ui module gains access
to all provided properties of that module, namely the application-url property. Later on, the destination
module can use a reference to read the provided properties. For more details, see MTA Module Types,
Resource Types, and Parameters for Applications in the Neo Environment [page 1351].
● The destination is using the reference to read the value of the application-url property.
Note
Ensure that your applications do not have circular dependencies, that is, one Java application must not refer
to the application-url property of another Java application module that in turn refers back to it.
You can connect your SAP SuccessFactors system to your SAP Cloud Platform subaccount. After you do so, you
can define a solution that extends it. In more complex scenarios, you can even provide a solution that can be
consumed by another SAP Cloud Platform subaccount and extend the subscriber's SAP SuccessFactors system.
Note
Make sure that:
● You have onboarded an SAP SuccessFactors company in your SAP Cloud Platform subaccount. If you are
providing a solution that is consumed by another subaccount in the SAP Cloud Platform, the subscriber
subaccount is responsible for onboarding the SAP SuccessFactors company.
● You have a database and valid credentials.
In the example below, you create a standard SAP SuccessFactors extension, using the "Benefits" sample Java
application provided by SAP, which is located at https://github.com/SAP/cloud-sfsf-benefits-ext.
Note
If you intend to provide this solution for consumption by other subaccounts:
● The sample “Benefits” Java Application will be deployed to your subaccount, but the SAP SuccessFactors
artifacts will be deployed to the subscriber subaccounts and their SAP SuccessFactors systems.
● You can define an additional MTA extension descriptor for your subscribers, so that they can add their own
specific data.
Note
For the example below, we assume that you have the following:
You have to model the sample “Benefits” Java application as a module into the MTA deployment descriptor. You
also have to define an SAP SuccessFactors Role module and a database binding resource.
Example
_schema-version: '3.1'
parameters:
hcp-deployer-version: '1.1.0'
ID: com.example.basic.sfsf
version: 0.1.0
modules:
● The sample “Benefits” Java application module requires both database binding resource and the SAP
SuccessFactors Roles module.
● The SAP SuccessFactors tile is defined as a parameter of the sample “Benefits” Java application and points to
a JSON file within the Multi-Target Application archive
● SAP SuccessFactors provider is defined as a parameter of the sample “Benefits” Java application
● The sample “Benefits” Java application is going to use the SAP SuccessFactors IDP for authentication
● The sample “Benefits” Java application is going to use the default connectivity options when accessing the
SAP SuccessFactors system
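Taken together, the points above could be modeled in the modules section roughly as in the following sketch. Note that the module and resource type names and parameter names used here are illustrative assumptions, not values taken from this guide:

```yaml
modules:
  - name: benefits-app
    type: com.sap.java                  # assumed type name for a Neo Java module
    parameters:
      name: benefits
    requires:
      - name: dbbinding                 # the database binding resource
      - name: benefits-roles            # the SAP SuccessFactors Roles module
  - name: benefits-roles
    type: com.sap.hcp.sfsf-roles        # assumed type name for an SFSF roles module
    parameters:
      config: resources/benefits-roles.json   # assumed parameter pointing to the roles JSON
resources:
  - name: dbbinding
    type: com.sap.hcp.persistence       # assumed type name for a database binding
```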
Both the SAP SuccessFactors roles and tiles require additional files to be added to the Multi-Target Application
archive. The deployment descriptor contains only the modeling of those entities, but their actual content is
external to the MTA deployment descriptor, in the same way as the sample “Benefits” Java application .war
archive.
You also have to create a JSON file benefits-tiles.json that contains the SAP SuccessFactors tiles.
Example
[
  {
    "name" : "SAP Corporate Benefits",
    "path" : "com.sap.hana.cloud.samples.benefits",
    "size" : 3,
    "padding" : false,
    "roles" : ["Corporate Benefits Admin"],
    "metadata" : [
      {
        "title" : "SAP Corporate Benefits",
        "description" : "SAP Corporate Benefits home page tile",
        "locale" : "en_US"
      }
    ]
  }
]
The example above shows an SAP SuccessFactors tile for the sample “Benefits” Java application.
Next you have to create a JSON file benefits-roles.json that contains the SAP SuccessFactors roles.
Example
[
  {
    "roleDesc": "SAP Corporate Benefits Administrator",
    "roleName": "Corporate Benefits Admin",
    "permissions": []
  }
]
The example above shows an SAP SuccessFactors role for the sample “Benefits” Java application.
Afterward, you have to create your MANIFEST.MF file and define the Java application, roles, and tiles.
Example
Manifest-Version: 1.0
Created-By: SAP SE

Name: resources/benefits-roles.json
Content-Type: application/json
MTA-Module: benefits-roles

Name: com.sap.hana.cloud.samples.benefits.war
Content-Type: application/zip
MTA-Module: benefits-app
The manifest above contains the following entries:
● An entry that links your SAP SuccessFactors roles with the MTA deployment descriptor
● An entry that links your “Benefits” sample Java application with the MTA deployment descriptor
Now you can create your Multi-Target Application archive by following the JAR file specification. The archive
structure has to be as follows:
Example
/com.sap.hana.cloud.samples.benefits.war
/META-INF
/META-INF/mtad.yaml
/META-INF/MANIFEST.MF
/resources/benefits-roles.json
/resources/benefits-tiles.json
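Because the archive follows the JAR (ZIP) file specification, it can be assembled with any ZIP tool. The following is a minimal sketch in Python; the entry names come from the layout above, while the file contents here are placeholders:

```python
import zipfile

# Placeholder contents; in a real build these come from your project files.
entries = {
    "META-INF/MANIFEST.MF": "Manifest-Version: 1.0\nCreated-By: SAP SE\n",
    "META-INF/mtad.yaml": "_schema-version: '3.1'\n",
    "resources/benefits-roles.json": "[]",
    "resources/benefits-tiles.json": "[]",
    "com.sap.hana.cloud.samples.benefits.war": "",
}

# An MTA archive follows the JAR (ZIP) specification, so a plain ZIP writer suffices.
with zipfile.ZipFile("com.example.basic.sfsf.mtar", "w") as mtar:
    for name, content in entries.items():
        mtar.writestr(name, content)
```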
Start by creating an MTA extension descriptor that holds the security-sensitive data, such as credentials.
Note
Make sure that you always use an extension descriptor when you have sensitive data within your solution.
_schema-version: '3.1'
ID: com.example.basic.sfsf.config
extends: com.example.basic.sfsf
parameters:
  title: SuccessFactors example
  description: This is an example of the sample Benefits Java Application for SuccessFactors
resources:
  - name: dbbinding
    parameters:
      user-id: myuser
      password: mypassword
In the example above, the extension descriptor adds the user-id and password parameters to the resource
dbbinding, which is modeled in the deployment descriptor.
After you deploy your solution, you can open its tile in the cockpit and check if the SuccessFactors extension
solution is deployed.
Related Information
To organize application security roles and to manage user access, you create authorization groups in SAP Cloud
Platform.
You model security groups in the MTA deployment descriptor using the module type com.sap.hcp.group. You
can also assign any roles defined in a Java application to these authorization groups.
The following rules apply when you deploy a solution containing authorization groups:
● If the group already exists, it is updated with the new roles assignment defined in the MTA deployment
descriptor.
● If you delete a solution, a group is not deleted, as it might be used by other applications.
Example
We assume that you have defined a set of security roles in the web.xml of your Java application, as follows.
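The web.xml listing itself is not preserved here; a minimal sketch using the standard Servlet deployment descriptor syntax, with a role name matching the descriptor example below, could be:

```xml
<!-- Illustrative fragment of web.xml; the role name matches the MTA example below -->
<security-role>
    <role-name>administrator</role-name>
</security-role>
```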
For a complete list of the supported properties, see MTA Module Types, Resource Types, and Parameters for
Applications in the Neo Environment [page 1351].
The security roles can be assigned to a group modeled in the MTA deployment descriptor.
Example
ID: com.sap.mta.demo
_schema-version: '2.1'
parameters:
  hcp-deployer-version: '1.1.0'
modules:
  - name: administratorGroup
    type: com.sap.hcp.group
    parameters:
      name: &adminGroup AdministratorGroup
  - name: demowebapp
    parameters:
      name: demowebapp
      title: Demo MTA Application
      runtime-version: '3'
      java-version: JRE 8
      roles:
        - name: administrator
          groups:
            - *adminGroup
    requires:
      - name: administratorGroup
When you deploy the above example, a new authorization group named “AdministratorGroup” is created, and the
“administrator” application security role from the “demowebapp” is assigned to this group. If the group already
exists, only the application security role is assigned to the existing group.
Related Information
You can assign security roles on subscription level for use with SAP Fiori applications.
These roles are assigned to authorization groups when designed as modules in a descriptor file, as shown in the
following example:
Sample Code
ID: com.sap.mta.demo
_schema-version: '3.1'
modules:
  - name: administratorGroup
    type: com.sap.hcp.group
    parameters:
      name: &adminGroup AdministratorGroup
  - name: fiori-role
    type: com.sap.fiori.role
    parameters:
      name: HRManager
      groups:
        - *adminGroup
You can assign security roles on subscription level for use with HTML5 applications.
These roles are assigned to authorization groups when designed as modules in a descriptor file, as shown in the
following example:
Sample Code
ID: com.sap.mta.demo
_schema-version: '3.1'
modules:
  - name: administratorGroup
    type: com.sap.hcp.group
    parameters:
      name: &adminGroup AdministratorGroup
  - name: html5-role
    type: com.sap.hcp.html5.role
    parameters:
      name: HRManager
      groups:
        - *adminGroup
    requires:
      - name: administratorGroup
To operate a solution, you require at least one of the following roles in your subaccount:
Note
Currently you can operate SAP SuccessFactors extensions using only the Administrator or Developer roles.
Deploying Solutions
Depending on the type of the solution, you can operate it using the cockpit, CTS+, and the SAP Cloud Platform
console client for the Neo environment:
Standard Solution
This solution is deployed and can be used only in the current SAP Cloud Platform subaccount; subscription to it
is not possible. All entities that are part of the solution will be deployed and managed within this subaccount.
Note
The CTS+ cannot be used for providing a solution for subscription, or for subscribing to a solution that is
provided by another subaccount.
When providing a solution for subscription, you can define which parts of it will be deployed to your subaccount,
and which parts will be deployed to the subscriber's subaccount. Note that the parts deployed to your subaccount
will consume resources from your quotas. All parts deployed to the subaccount of the subscriber will consume
resources from its own quotas.
Available Solutions
This is a solution that is available for subscription. It has been provided by another SAP Cloud Platform
subaccount, and you have been granted entitlements to subscribe to it. After subscribing to the solution, you can use it.
You can list the solutions that are available for subscription using the:
Subscribed Solution
This is a solution that has been provided by another SAP Cloud Platform subaccount. You have subscribed to it,
and thus have a limited set of management operations.
When providing a solution for subscription, the provider defines which parts of it are deployed to your subaccount,
and which parts are deployed to the provider subaccount. Note that the parts deployed to your subaccount
consume resources from your quotas. All parts deployed to the provider subaccount consume resources from its
own quotas.
You can list the solutions that are available for subscription using the:
Updating Solutions
Monitoring Solutions
Deleting Solutions
Related Information
By using the cockpit you can provision a solution using one of the following ways:
● Deploy a Standard Solution [page 1416]- The solution is deployed in the current subaccount and subscription
to it is not possible.
● Deploy a Provided Solution [page 1418]- The solution is deployed in the current subaccount, but is provided for
subscription to another subaccount.
● Subscribe to a Solution Available for Subscription [page 1422]- The current subaccount is entitled to subscribe
to a solution that has been provided by another subaccount.
You can deploy a solution that can be consumed only within your subaccount.
Prerequisites
● Ensure that the MTA archive containing your solution is created as described in Multi-Target Applications
[page 1292].
● Optionally, you have created an extension description as described in Defining MTA Extension Descriptors
[page 1322].
● You have a valid role for your subaccount as described in Operating Solutions [page 1413].
● You have sufficient resources available in your subaccount to deploy the content of the Multi-Target
Application.
Procedure
Alternatively, as of _schema version 3.1, if you do not provide it and your solution has missing data
required for productive use, you can enter that data manually in the dialog subsection that appears. Keep in
mind that you have to input complex parameters such as lists and maps in JSON format. For example, an
account-level destination parameter additional-properties should be a map that has a value similar to
{"additional.property.1": "1", "additional.property.2": "2"}.
Note
Make sure that you do not select the Provider deploy checkbox. If you select it, you will provide your solution
for a subscription. For more information, see Deploy a Provided Solution [page 1418].
6. Choose Deploy to deploy the MTA archive and the optional MTA extension descriptor to the SAP Cloud
Platform.
The Deploy a Solution from an MTA Archive dialog remains on the screen while the deployment is in progress.
When the deployment is completed, a confirmation appears that the solution has been successfully deployed.
If you close the dialog during deployment, you can open it again by choosing Check Progress of the
corresponding operation, located on the Ongoing Operations table in the solution overview page. You can open
the page by choosing the tile of the solution that is being deployed.
Note
If you experience issues during the deployment process, see Troubleshooting [page 1424].
7. (Optional) When deploying against _schema version 3.1, if you have manually entered parameters during
deployment, at the end of the process you can use the option to download an extension descriptor containing
only those parameters.
Note
Parameters marked as security sensitive, either by default or as set in the mtad.yaml, are not saved to this
extension descriptor.
Your newly deployed solution appears in the Standard Solutions category in the Solutions page in the cockpit.
Each solution component originates from a certain MTA module or resource, which in turn can result in several
solution components. That is, one MTA module or resource can correspond to several solution components.
Related Information
Using the Solutions view of the cockpit, you can deploy a solution locally to your subaccount and provide it for a
subscription to another subaccount or you can subscribe to a solution that has been provided for subscription by
another subaccount in the cockpit.
Related Information
You can deploy a solution locally to your subaccount and provide it for a subscription to another subaccount.
Prerequisites
● Ensure that the MTA archive containing your solution is created as described in Multi-Target Applications
[page 1292].
● Optionally, you have created an extension description as described in Defining MTA Extension Descriptors
[page 1322].
● You have a valid role for your subaccount as described in Operating Solutions [page 1413].
● You have sufficient resources available in your subaccount to deploy the content of the Multi-Target
Application.
Note
○ If you are performing a re-deploy, the already deployed parts of the Multi-Target Application are deleted
first, so you are not required to have additional resources available in your subaccount.
○ If parts of your solution have to be deployed to the subscribers' subaccounts, note that those parts
consume the resources of those subaccounts.
Procedure
Alternatively, as of _schema version 3.1, if you do not provide it and your solution has missing data
required for productive use, you can enter that data manually in the dialog subsection that appears. Keep in
mind that you have to input complex parameters such as lists and maps in JSON format. For example, an
account-level destination parameter additional-properties should be a map that has a value similar to
{"additional.property.1": "1", "additional.property.2": "2"}.
6. Select the Provider deploy checkbox.
7. Choose Deploy to deploy the MTA archive and the optional MTA extension descriptor to the cloud platform.
The Deploy a Solution from an MTA Archive dialog remains on the screen while the deployment is in progress.
When the deployment is completed, a confirmation appears that the solution has been successfully deployed.
If you close the dialog during deployment, you can open it again by choosing Check Progress of the
corresponding operation, located on the Ongoing Operations table in the Solution overview page. You can open
the page by choosing the tile of the solution that is being deployed.
Note
If you experience issues during the deployment process, see Troubleshooting [page 1424].
8. (Optional) When deploying against _schema version 3.1, if you have manually entered parameters during
deployment, at the end of the process you can use the option to download an extension descriptor containing
only those parameters.
Results
Your newly deployed solution appears in the Solutions Provided for Subscription category in the Solutions page in
the cockpit. Each solution component originates from a certain MTA module or resource that in turn can result in
several solution components. That is, one MTA module or resource can correspond to several solution components.
Note
If you want to create an MTA extension descriptor, for the extension ID you have to use the value of the
Extension ID parameter, which you can find in the page of the solution you have just deployed.
Related Information
After the deployment of a solution that is going to be provided for subscription, create the entitlements that are
going to be granted to the subscribers' subaccounts.
Prerequisites
Procedure
Note
The default value is the current date.
○ Granted Entitlements - the number of subaccounts in the global account that are going to be able
to subscribe to the provided solution
Note
Currently, it is not possible to decrease the number of granted entitlements per particular global
account.
Results
Prerequisites
Procedure
Results
You have edited the number of granted entitlements for a particular global account.
Related Information
You can subscribe to a solution that has been provided for subscription by another subaccount in the cockpit.
Prerequisites
● You have a valid role for your subaccount as described in Operating Solutions [page 1413].
● There is a solution that is available for subscription in your subaccount. That is, you have been granted an
entitlement by the provider of the solution.
● You have sufficient resources available in your subaccount to deploy the content of the Multi-Target
Application.
Note
Typically, parts of a solution provided for subscription are deployed to the provider's subaccount and parts
of it to your subaccount. The parts of the solution that are deployed to your subaccount consume
resources of your subaccount. The parts of the solution that are deployed to the provider's subaccount
consume resources of the provider's subaccount.
Alternatively, as of _schema version 3.1, if you do not provide it and your solution has missing data
required for productive use, you can enter that data manually in the dialog subsection that appears. Keep in
mind that you have to input complex parameters such as lists and maps in JSON format. For example, an
account-level destination parameter additional-properties should be a map that has a value similar to
{"additional.property.1": "1", "additional.property.2": "2"}.
Note
Ensure that your extension descriptor file correctly extends the solution you are subscribing to. To do so,
check the Extension ID of the solution in the Additional Details field of the solution overview page in the
cockpit, and enter it in the extends section of your extension descriptor.
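For example, if the solution's Extension ID were com.example.basic.sfsf (a placeholder value here), your extension descriptor would begin with:

```yaml
_schema-version: '3.1'
ID: com.example.subscriber.config   # illustrative ID of your own extension descriptor
extends: com.example.basic.sfsf     # value of the solution's Extension ID parameter
```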
5. Choose Subscribe to subscribe to the provided solution, and deploy the optional MTA extension descriptor to
the SAP Cloud Platform.
The Subscribe to a Solution dialog remains on the screen while the deployment is in progress. When the
deployment is completed, a confirmation appears that the solution has been successfully deployed. If you
close the dialog during deployment, you can open it again by choosing Check Progress of the corresponding
operation, located on the Ongoing Operations table in the solution overview page. You can open the page by
choosing the tile of the solution that is being deployed.
6. (Optional) When deploying against _schema version 3.1, if you have manually entered parameters during
deployment, at the end of the process you can use the option to download an extension descriptor containing
only those parameters.
Note
Parameters marked as security sensitive, either by default or as set in the mtad.yaml, are not saved to this
extension descriptor.
Results
The solution to which you are now subscribed appears in the Subscribed Solutions category in the Solutions page
in the cockpit. Each solution component originates from a certain MTA module or resource, which in turn can
result in several solution components. That is, one MTA module or resource corresponds to specific solution
components.
3.3.2.9.2 Troubleshooting
While transporting SAP Cloud Platform applications using the CTS+ tool, or while deploying solutions using the
cockpit, you might encounter one of the following issues. This section provides troubleshooting information about
correcting them.
Troubleshooting
Error message:
Technical error [Invalid MTA archive [<mtar archive>]. MTA deployment descriptor (META-INF/mtad.yaml) could not be parsed. Check the troubleshooting guide for guidelines on how to resolve descriptor errors. Technical details: <…>]

Description:
This error could occur if the MTA archive is not consistent. There are several different reasons for this:
● The MTA deployment descriptor META-INF/mtad.yaml cannot be parsed, because it is syntactically incorrect according to the YAML specification. For more information, see the publicly available YAML specification. Make sure that the descriptor is compliant with the specification. Validate the descriptor syntax, for example, by using an online YAML parser.
Note
Ensure that you do not submit any confidential information to the online YAML parser.
● The MTA deployment descriptor might contain data that is not compatible with SAP Cloud Platform. Make sure the MTA deployment descriptor complies with the specification at Multi-Target Applications [page 1292].
● The archive might not be suitable for deployment to the SAP Cloud Platform. This might happen if, for example, you attempt to deploy an archive built for XSA to the SAP Cloud Platform. The technical details might contain information similar to the following:
"Unsupported module type "<module type>" for platform type "HCP-CLASSIC""

Error message:
Technical error [Invalid MTA archive [<MTA name>]: Missing MTA manifest entry for module [<module name>]]

Description:
The archive is inconsistent, for example, when a module referenced in the META-INF/mtad.yaml is not present in the MTA archive or is not referenced correctly. Make sure that the archive is compliant with the MTA specification available at The Multi-Target Application Model.

Error message:
Technical error [MTA extension descriptor(s) could not be parsed. Check the troubleshooting guide for guidelines on how to resolve descriptor errors. Technical details: <…>]

Description:
This error could occur if one or more extension descriptors are not consistent. There are several different reasons for this:
● One or more extension descriptors might not be syntactically compliant with the YAML specification. Validate the descriptor syntax, for example, by using an online YAML parser.
Note
Ensure that you do not submit any confidential information to the online YAML parser.
● One or more extension descriptors might contain data that is not compatible with SAP Cloud Platform. Make sure all extension descriptors comply with the specification at Multi-Target Applications [page 1292].

Error message:
Technical error [MTA deployment descriptor (META-INF/mtad.yaml) from archive [<mtar archive>] and some of extension descriptors [<extension descriptor>] could not be processed. Check the troubleshooting guide for guidelines on how to resolve descriptor errors. Technical details: <…>]

Description:
This error could occur if the MTA archive, or one or more extension descriptors, are not consistent. There are several different reasons for this:
● The MTA deployment descriptor or an extension descriptor might contain data that is not compatible with the SAP Cloud Platform. Make sure the MTA deployment descriptor and all extension descriptors comply with the specification at Multi-Target Applications [page 1292].
● The archive may not be suitable for deployment to SAP Cloud Platform. This might happen if, for example, you attempt to deploy an archive built for XSA to the SAP Cloud Platform. The technical details might contain information similar to the following:
"Unsupported module type "<module type>" for platform type "HCP-CLASSIC""

Error message:
Process [<process-name>] has failed with [Your user is not authorized to perform the requested operation. Forbidden (403)]. Contact SAP Support.

Description:
Ensure that you have the necessary permissions or roles required to list or manage Multi-Target Applications. For more information, see Operating Solutions [page 1413].
To enhance your solution with new capabilities or technical improvements, you can update it using the cockpit.
Depending on the deployer version (hcp-deployer-version) described in the MTA deployment descriptor, SAP
Cloud Platform uses one of the following technical approaches, where several distinctions apply.
When you update your solution against deployer version 1.0 or 1.1.0, the update is treated as a redeployment,
which means:
● Any new components that have now been described in the MTA deployment descriptor are deployed as usual
● Any already existing components are redeployed or updated, depending on their current runtime state in the
SAP Cloud Platform.
● Only the relations to components that are no longer present in the MTA deployment descriptor of the new
solution version are removed. The component artifacts are not removed.
When you update your solution against deployer version 1.2 or 1.2.0, the update is treated as an update with full
semantics, which means:
● Any new components that have now been described in the MTA deployment descriptor are deployed as usual
● Any already existing components are redeployed or updated, depending on their current runtime state in the
SAP Cloud Platform.
● Components that are no longer present in the MTA deployment descriptor are removed.
Note
The version of the MTA has to follow the “semver” Semantic Versioning specification, for example 1.1.2.
Related Information
Context
1. Log on to the cockpit and select the subaccount containing the solution you want to update.
2. Choose Solutions in the navigation area.
3. Choose the tile of the solution you want to update.
4. On the solution overview page that appears, choose Update.
5. Only for standard and provided solutions: provide the location of the MTA archive you want to use.
Note
When you update a solution as a solution provider, ensure that the solution ID of the newly deployed archive
matches the ID of the existing solution.
6. (Optional) You can provide the location of an MTA extension descriptor file.
Alternatively, as of _schema version 3.1, if you do not provide an MTA extension descriptor and your
solution has missing data required for productive use, you can enter that data manually in the dialog
subsection that appears. Keep in mind that you have to input complex parameters such as lists and maps in
JSON format. For example, an account-level destination parameter additional-properties should be a
map that has a value similar to {"additional.property.1": "1", "additional.property.2":
"2"}.
7. Choose Update to start the process.
Note
○ As an alternative to the Update option, you can also use the Deploy option to perform the update operation.
○ As an alternative to the cockpit procedure, you can update a solution using the following command line
command:
Sample Code
Results
Related Information
Note
For the examples below we assume that you have an already deployed MTA with a deployment descriptor
containing data similar to Version 1, and you want to update it to Version 2.
Version 1 Version 2
parameters:
  hcp-deployer-version: '1.2.0'
description: The application demonstrates some of the main MTA features on SAP CP NEO.
title: Demo MTA Application
version: 0.1.4
Related Information
When deployed to your SAP Cloud Platform subaccount, a solution consists of various solution components. Each
solution component originates from a certain MTA module that in turn can result in several solution components.
That is, one MTA module can correspond to several solution components.
Prerequisites
You have a valid role for your subaccount as described in Operating Solutions [page 1413]
Procedure
To see a status overview of an individual solution or solution components in your subaccount, proceed as follows:
1. Log on to the SAP Cloud Platform and select a subaccount.
2. In the cockpit, choose Solutions in the navigation area.
You can monitor the overall status of the deployed and available for subscription solutions.
Note
The overall status of a solution is a combination of the statuses of all its internal parts and the statuses of
any ongoing operations for that particular solution.
3. In the solution list, select the tile of the solution for which you want to see details.
Note
If you have selected a solution that is available for subscription but not yet subscribed to, you can monitor
only a limited set of its properties.
○ Overview - it displays the solution name and status. For more information about the solution states, see
Solutions page help in the cockpit.
For more information about the possible states of a solution component and what they mean, see your
Solution page help in the cockpit.
4. If you have provided a solution that is available for subscription to another subaccount, you can monitor the
licenses and subscribers of a provided solution as follows:
a. In the solution list under the Solutions Provided for Subscription category, select the tile of the solution for
which you want to see details.
b. Choose Entitlement in the navigation area of the cockpit.
You can monitor the granted entitlements for that solution as well as the parts that were deployed to the
subscribers' subaccounts.
Note
Monitoring granted licenses is only available for you if you have the subaccount administrator role.
Results
Solution Components
MTA Modules Results in One or More of the Following Solution Components
Related Information
Delete a solution from your subaccount following the steps for the corresponding solution types.
Prerequisites
You have a valid role for your subaccount as described in Operating Solutions [page 1413]
Context
Note
● Some parts of the solution might be shared and used by other entities within the SAP Cloud Platform. Such
parts of the solution have to be removed manually.
Procedure
Note
If the Delete data source checkbox is selected, any deployed database binding will be deleted. Note that
your database credentials will not be removed from your database and can be used again.
Note
If set, any errors during deletion that are external to SAP Cloud Platform (for example, in a
SuccessFactors system) are ignored.
A typical use case is deleting a solution that is linked to a now nonexistent external system. In that
case, if the Clean-up on error checkbox is not selected, the deletion process will fail with an error. When
Clean-up on error is selected, the deletion process will ignore the error and continue.
Note
If the Clean-up on error checkbox is selected and an error that originates from an external to SAP Cloud
Platform instance occurs, it will be ignored. As a result all the data stored in the SAP Cloud Platform for
that solution will be deleted. However, external systems might still contain some data that is not
deleted.
The solution deletion dialog remains on the screen during the process. A confirmation appears when the
deletion is completed.
If you close the dialog while the process is running, you can open it again by choosing Check Progress of the
corresponding operation, located in the Ongoing Operations table in the solution overview page.
Related Information
The application programming model for SAP Cloud Platform helps you implement data models, services, and UIs
to develop your own stand-alone business applications or extend other cloud solutions, like SAP S/4HANA or SAP
SuccessFactors. The programming model includes languages, libraries, and APIs and focuses on back-end
development. SAP Fiori helps you develop your front-end components.
We provide seamless integration with the Cloud Foundry environment on SAP Cloud Platform. This makes it easier
for you to deploy your application and consume platform services.
Recommendation
The programming model is compatible with any development environment, but we recommend using SAP Web
IDE Full-Stack. For more information, see SAP Web IDE Full-Stack.
Key Features
The following table describes the key features of the application programming model:
● Database support - Create persistence models in SAP HANA and standard SQL databases. Generate JPA
models and benefit from JPA’s support for different databases.
● SAP Fiori markup - Build on SAPUI5, SAP Fiori elements, and OData.
● Custom handler APIs - Use the included APIs to implement custom handlers in different events and phases
during the lifecycle of a request. For example, before, instead of, or after generic handlers.
Note
Applications can use alternative frameworks, like Spring or Servlets. These frameworks can also be
combined with the service provider libraries included in the application programming model.
● Data access - To read and write data from custom code, we provide tailored, minimal-footprint libraries and
language bindings, which leverage knowledge of CDS models and advanced Query Language features. For
example, in Java we leverage JPA by generating annotated POJOs out of CDS models.
Note
You do not need to use a specific run-time library to read and write data. You can use a low-level database
driver or a third-party framework.
● Service integration - Combined with the SAP S/4HANA Cloud SDK, import and consume services provided
by other applications, business services, or S/4HANA.
Develop a sample business application using SAP Web IDE Full-Stack following the steps listed below.
Related Information
Create a project in SAP Web IDE Full-Stack using the template for the application programming model.
Prerequisites
Log on to SAP Web IDE Full-Stack. For more information, see Open SAP Web IDE.
Note
You need an account in the SAP Cloud Platform Neo environment. For more information, see Getting Started
with a Trial Account in the Neo Environment [page 918].
Configure a Cloud Foundry space to run your SAP Web IDE projects on SAP Cloud Platform. For more information,
see Select a Cloud Foundry Space.
Procedure
Results
A new project with the recommended project layout is created in your workspace.
Related Information
Define a data model in your SAP Web IDE Full-Stack project using our sample code.
Procedure
1. Open db/data-model.cds and replace the template with the following CDS definitions:
Sample Code
namespace my.bookshop;
entity Books {
key ID : Integer;
title : String;
author : Association to Authors;
stock : Integer;
}
entity Authors {
key ID : Integer;
name : String;
books : Association to many Books on books.author = $self;
}
entity Orders {
key ID : UUID;
book : Association to Books;
buyer : String;
date : DateTime;
amount : Integer;
}
2. Choose (Save)
Results
Related Information
Define a service to expose entities from the data model to application UIs or external consumers.
Procedure
Sample Code
4. Choose (Save)
5. To test-run your service, right-click the srv module and choose Run > Run As Java Application.
6. Choose (Run Console) and click on the URL.
Next Steps
Re-compile the OData EDMX metadata files for your service. See Compile OData Models [page 1445].
Context
Procedure
Results
The EDMX artifacts are generated and stored in CatalogService.xml. This file is located in the srv module
under src/main/resources/edmx.
Procedure
Field Input
Title Books
4. Choose Current Project from the list of sources and then CatalogService.
If you do not see CatalogService, EDMX artifacts might be missing from your project. To resolve this, see
Compile OData Models [page 1445].
5. Go to the Template Customization tab and choose Books in the OData Collection drop-down menu.
a. Right-click the app module and choose Run > Run as Web Application.
b. Choose flpSandbox.html.
c. In the Destination Creation dialog box, complete the following fields:
ID and Password
Field Description
Global Subaccount ID The ID of your global subaccount that contains your Neo
environment
Global Subaccount Password The password of your global subaccount that contains
your Neo environment
d. Choose Create.
It shows a table without columns, because UI annotations have not been defined.
f. Choose (Settings) to add the columns you want to see and choose OK.
g. Choose Go.
Context
Note
If you do not want to use the Annotation Modeler, you can add UI annotations using a CDS model. For an
example, see CDS Model Example for UI Annotations [page 1448].
Procedure
Note
You configured Run flpSandbox.html to run with mock data in Add a User Interface [page 1445].
b. Choose Save.
c. Refresh your browser.
2. Go to app/webapp, right-click the localService folder and choose New > Annotation File.
3. Double-click the file you have just created to open it in the Annotation Modeler.
4. Expand the Books entity, choose Add subnodes and then LineItem.
Repeat this step for each column you want to have in the table.
5. Choose Save and Run.
Results
Related Information
Annotation Modeler
Supported OData Vocabularies
Use the following code example to add UI annotations using a CDS model.
Procedure
Sample Code
annotate CatalogService.Books with @(
UI.LineItem: [
{$Type: 'UI.DataField', Value: ID},
{$Type: 'UI.DataField', Value: title},
{$Type: 'UI.DataField', Value: stock},
{$Type: 'UI.DataField', Value: "author/name"},
],
UI.HeaderInfo: {
Title: { Value: title },
TypeName:'Book',
TypeNamePlural:'Books'
},
UI.Identification:
[
{$Type: 'UI.DataField', Value: ID},
{$Type: 'UI.DataField', Value: title},
{$Type: 'UI.DataField', Value: "author/name"}
],
UI.Facets:
[
{
$Type:'UI.CollectionFacet',
Facets: [
{ $Type:'UI.ReferenceFacet', Label: 'General Info', Target: '@UI.Identification' }
],
Label:'Book Details',
},
{$Type:'UI.ReferenceFacet', Label: 'Orders', Target: 'orders/@UI.LineItem'},
]
);
annotate CatalogService.Orders with {
ID
@Common.Label: 'Order'
@Common.FieldControl: #ReadOnly;
book
@Common.Label: 'Book'
@Common.FieldControl: #ReadOnly;
buyer
@Common.Label: 'Buyer'
@Common.FieldControl: #ReadOnly;
date
@Common.Label: 'Date'
@Common.FieldControl: #ReadOnly;
amount
@Common.Label: 'Amount'
@Common.FieldControl: #ReadOnly;
};
annotate CatalogService.Orders with @(
UI.LineItem: [
{$Type: 'UI.DataField', Value: ID},
{$Type: 'UI.DataField', Value: book},
{$Type: 'UI.DataField', Value: buyer},
{$Type: 'UI.DataField', Value: date},
{$Type: 'UI.DataField', Value: amount}
],
UI.HeaderInfo: {
Title: { Value: ID },
TypeName: 'Order',
TypeNamePlural: 'Orders'
}
);
4. Choose (Save)
Procedure
Wait for the notification that says the build was successful.
2. To view the generated deployment artifacts, SAP HANA Database Explorer must be enabled in SAP Web IDE.
Note
If you have already enabled the SAP HANA Database Explorer, go to step 3.
Once connected you can view the different database artifacts. For example, the books table that you created.
4. To fill in the initial data:
a. Download this zip file.
b. Unzip the downloaded file and save the db.zip file to your desired location.
The db module now includes an src folder that contains the imported .csv file with the initial data.
5. Re-build the db module and run the application.
Results
Add custom handlers for specific situations that are not covered by the generic service provider.
Procedure
Sample Code
package my.bookshop;
import java.util.ArrayList;
import java.util.List;
import com.sap.cloud.sdk.service.prov.api.*;
import com.sap.cloud.sdk.service.prov.api.annotations.*;
import com.sap.cloud.sdk.service.prov.api.exits.*;
import com.sap.cloud.sdk.service.prov.api.request.*;
import com.sap.cloud.sdk.service.prov.api.response.*;
import org.slf4j.*;
public class OrdersService {
Note
In this sample code, we set a value of 1000 for the property amount of each returned entity in reading
operations.
Tip
○ You can add other custom handlers, for example, to override the generic data modification operations
in this service. For more information, see Adding Custom Logic [page 1486].
○ As an alternative, you can also add custom handlers that use JPA to persist data. See Extending the
Getting Started Tutorial: Using JPA [page 1578] to learn how to override the creation of the Order entity
using JPA.
Add your project to a GitHub repository and switch to your preferred local development environment.
Procedure
1. Right-click the bookshop folder and choose Git > Initialize Local Repository.
3. Go back to SAP Web IDE, right-click bookshop and choose Git > Set Remote.
4. Add the URL for your GitHub repository.
This takes all the committed changes from one branch and incorporates them into a different branch.
Related Information
To help you create concise and comprehensible models with CDS, we have put together a list of best practices.
Related Information
Develop your model using the following naming conventions to ensure reuse:
In the following example, you can see how we have applied these naming conventions:
namespace my.bookshop;
entity Books {
key ID : UUID;
title : String;
genre : Genre;
author : Association to Authors;
}
entity Authors {
key ID : UUID;
name : String;
books : Association to many Books;
}
type Genre : enum {
Mystery;
Fiction;
Drama;
}
service CatalogService {
entity Books as projection on bookshop.Books;
entity Authors as projection on bookshop.Authors;
}
Related Information
Creating custom-defined types is helpful when you have a decent reuse ratio. If not, custom-defined types are
counter-productive, because to fully understand the model, you need to look up the type definition.
entity Order {
price: Decimal;
currency : Currency;
}
Related Information
A primary key is a special relational database table column (or combination of columns) that is unique for each
record. A foreign key is defined in a second table, but it refers to the primary key.
Using a universally unique identifier (UUID) as a primary key is beneficial because it can be created offline, for example, in clients. In contrast, database sequences or other sequential ID generators need a central service, for example, a single database instance and schema.
entity Books {
key ID : UUID;
title : String;
...
}
entity Authors {
key ID : UUID;
name : String;
}
The key rule is that UUIDs are sufficiently unique values that are compared by equality and used for look-ups. By default, CDS maps UUIDs to nvarchar(36) in SQL databases. This length accommodates hyphenated representations as well as other formats.
Explicit foreign keys are a technical concern of relational databases and are not required for NoSQL databases. When possible, avoid modeling foreign keys yourself.
Instead, we recommend using managed associations supported by CDS. Associations capture your intent, as
shown in the following example:
entity Books {
key ID : UUID;
title : String;
author : Association to Authors;
}
entity Authors {
key ID : UUID;
name : String;
books : Association to many Books on books.author = $self;
}
Aspects in CDS help separate concerns by decomposing models into individual definitions with different life
cycles. An effective definition contains a combination of partial definitions. This means different concerns of a
definition can be contributed by different people at different times.
The separation of concerns principle can be useful in a distributed development scenario. For example, where one
developer provides the core structure definition, another developer adds annotations for consumption in UIs, and
another developer adds annotations for analytics.
Annotations
Add annotations to the original definition or an extension of the definition in the same or different files. The
following examples result in the same effective model:
Recommendation
We recommend using multiple annotation sources to keep your models concise and comprehensible. This also
helps you reflect different ownerships and life cycles of different concerns.
Cross-Cutting Concerns
For example, provide these definitions in a foundation package to capture change information or manage temporal
data for arbitrary entities:
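For example, a foundation package could define such a reusable aspect as follows (a sketch; the entity and element names are illustrative):

```cds
namespace foundation;

// Cross-cutting aspect capturing change information, to be
// inherited by arbitrary entities via entity Foo : foundation.Managed {}
abstract entity Managed {
  createdAt : DateTime @odata.on.insert: #now;
  changedAt : DateTime @odata.on.update: #now;
}
```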
Clients of the application can extend the cross-cutting aspects and apply the extensions to all derived entities:
Adaptable Models
Allow your consumers to choose between a streamlined minimal model and combinations of several predefined
extensions.
For example:
namespace foundation;
abstract entity Contact { /* ... */ }
context AddressBook {
  entity Contact : foundation.Person, foundation.DetailedContacts {}
}
context MoreSimple {
  entity Contact : foundation.Contact {}
}
extend foundation.Contact with {
  region : Association to Regions;
}
Related Information
In the CDS data model that you define, you can specify that the property of an entity contains administrative data.
Administrative data refers to information such as database username and timestamp with which you can track who
is inserting or updating entity data in the SAP HANA database and when the operation is performed. You can
specify that a property contains administrative data using the following annotations:
@odata.on.insert: #now A property with this annotation captures when the entity was created in the SAP HANA database. This information is obtained from the function CURRENT_TIMESTAMP.
@odata.on.update: #now A property with this annotation captures when the entity was updated in the SAP HANA database. This information is obtained from the function CURRENT_TIMESTAMP.
When creating or updating an entity in the database, any data that you provide for such properties (containing administrative data) in the payload is overwritten with values provided by the database.
The following sample code shows how you can define administrative data in the data model:
Sample Code
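For example, an entity can capture administrative data as follows (a sketch; the entity and element names are illustrative):

```cds
entity Books {
  key ID : UUID;
  title : String;
  // Set by the database on creation, from CURRENT_TIMESTAMP
  createdAt : DateTime @odata.on.insert: #now;
  // Set by the database on every update, from CURRENT_TIMESTAMP
  changedAt : DateTime @odata.on.update: #now;
}
```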
Related Information
You can specify that a property of an entity is computed in your CDS service model.
In other words, you do not have to provide the data for the property in the payload. All you have to do is apply the @core.computed annotation to the property. This annotation ensures that the calculated value of the property is not persisted in the SAP HANA database.
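A minimal sketch (entity and element names are illustrative):

```cds
entity Orders {
  key ID : UUID;
  amount : Integer;
  // Calculated at runtime, for example by a custom handler;
  // the value is never persisted in the database
  @core.computed netValue : Decimal(10,2);
}
```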
Related Information
You can provide the capability for anyone to search your service, using plain text, to get a list of matching entity
instances.
Implementation
The following sample code shows how you can apply these annotations in your CDS model:
Sample Code
using SalesOrder;
service MySalesOrderService {
@Capabilities.SearchRestrictions.Searchable: true
entity Customer as projection on SalesOrder.Customer {
* ,
SalesOrders: redirected to SalesOrderHeader,
Note: redirected to Notes
}
excluding {
SalesOrders, Note
};
annotate Customer with {
@Search.defaultSearchElement: true
CustomerName;
@Search.defaultSearchElement: true
Type;
};
}
Performing a Search
You can trigger a search by using the Search custom query option on a single entity only and not on expanded entities. The URL format for performing such a search looks like this: <ServiceRoot>/<EntitySetName>?Search="Some Text". The following are sample URLs:
https://host/odata/v2/MySalesOrderService/Customers?Search="Harry"
https://host/odata/v2/MySalesOrderService/Customer(CustomerID=1,Type='Enterprise')/SalesOrders?Search="Notes"
Note
Search using wildcard characters (such as * and ?) is not supported.
On performing a search, you can encounter a 501 Not Implemented error in the following cases:
● The entity on which the search is performed does not have the annotation
@Capabilities.SearchRestrictions.Searchable: true in the service model.
● None of the properties in the entity being searched have the annotation @Search.defaultSearchElement:
true in the service model.
Related Information
3.4.2.8 ETag
An ETag represents a specific version of a resource found at a URL. You can provide optimistic concurrency when updating, deleting, or invoking the action bound to an entity by using the ETag value in an If-Match or If-None-Match header.
To enable ETag for the property of an entity, apply the @odata.etag annotation to it in your CDS data model. The
following sample code shows how you can apply the annotation:
Sample Code
entity allDataTypeModel {
  key ID : UUID;
  @odata.etag
  lastModifiedAt : DateTime; // element name is illustrative
}
Related Information
Review the provided examples for CDS concepts and features that you can use to develop your business
application.
Related Information
Entities are structured types representing sets of (persisted) data that can be read and manipulated using CRUD
operations. Views are entities defined by projection on underlying entities or views, similar to views in SQL.
Entities
Entities usually contain primary key elements. This is a very simple example:
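A minimal sketch of an entity with a primary key element (names are illustrative):

```cds
entity Employees {
  key ID : Integer;
  name : String;
  jobTitle : String;
}
```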
Customizing Entities
abstract: Adding the prefix abstract to an entity definition indicates that the entity should not have instances; it is just an entity type declaration without an entity set. When activated to a database, persistence artifacts (tables and views in SQL) are not created.
abstract entity Foo {...}
abstract entity Foo as SELECT from Bar {...};
Views
Views inherit all properties and annotations from their primary underlying base entity. Their element signatures are inferred from the projection on base elements. Each element inherits all properties from the respective base element.
Entity and view can be used interchangeably. This is shown in the following example:
In the following examples, you can see a view and its inferred signature:
entity SomeView {
ID: Integer; name: String; jobTitle: String;
};
Parent topic: Core Data and Services (CDS) Language Reference [page 1465]
Related Information
Use predefined or custom-defined types to develop your applications. Customize elements by adding
specifications or constraints.
Predefined Types
The following table shows the built-in types provided with CDS and the equivalent ANSI SQL and Edm. types.
Custom-Defined Types
Define custom types for reuse purposes. For example, for elements in entity definitions.
This is an example:
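A sketch of a simple and a structured custom-defined type, reused in an entity (type names are illustrative):

```cds
type User : String(111);
type Amount {
  value : Decimal(10,3);
  currency : Currency;
}

entity Order {
  buyer : User;
  price : Amount;
}
```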
Enumeration Values
Specify enumeration values for a type as a semicolon-delimited list of symbols. For type String, the declaration
of actual values is optional. If omitted, the actual values are the string counterparts of the symbols.
This is an example:
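For instance, reusing the Genre type from the naming-conventions example, plus a String enumeration with explicit values (the Status type and its values are illustrative):

```cds
type Genre : enum { Mystery; Fiction; Drama; }

// For type String, actual values are optional; here they are declared:
type Status : String enum {
  open = 'O';
  closed = 'C';
}
```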
This is an example:
entity Order {
buyer : String(111);
price {
value : Decimal(10,3);
currency : Currency;
};
}
Virtual Elements
Adding the prefix virtual to an element definition indicates that the element will not be added to persistent
artifacts (tables or views in SQL databases). Declaring virtual elements allows you to add metadata.
This is an example:
entity Employees {
...
virtual something : String(11);
}
Element Constraints
Element definitions can be augmented with constraints unique and not null.
This is an example:
entity Employees {
name : String(111) unique not null;
}
Note
Unique is not yet available.
Parent topic: Core Data and Services (CDS) Language Reference [page 1465]
Related Information
3.4.3.3 Associations
Associations capture relationships between entities. They are similar to forward-declared joins added to a table
definition in SQL.
Types of Associations
entity Addresses {
key ID : Integer;
}
Managed To-One: For to-one associations, CDS can automatically resolve and add requisite foreign key elements from the target’s primary keys and implicitly add respective join conditions.
entity Employees {
  address : Association to Addresses;
}
Parent topic: Core Data and Services (CDS) Language Reference [page 1465]
Related Information
3.4.3.4 Annotations
Syntax
● Add @ as a prefix before the definition, after the defined name or at the end of a simple definition.
● Multiple annotations can be placed in each spot, separated by white space, or enclosed in @(...) and separated by commas. For an @inner annotation, only the @(...) syntax is available.
entity Foo @(
my.annotation: foo,
another.one: 4711
) { /* elements */ }
@my.annotation:foo
@another.one: 4711
entity Foo { /* elements */ }
Targets
Add annotations to any named thing in a CDS model. The following table provides examples:
Annotation Examples
Definitions and elements with simple types:
@before [define] type Foo @inner : String @after;
@before [key] anElement @inner : String @after;
Entities, facets and other struct types and their elements:
@before [define] (entity|type|facet|annotation) Foo @inner {
  @before simple @inner : String @after;
  @before struct @inner { ...elements... };
}
Actions and functions including their parameters and result elements:
@before action doSomething @inner (
  @before param @inner : String @after
) returns {
  @before result @inner : String @after;
};
Values
Values can be literals or references. If no value is given, the default value is true. This is shown in the following
example for @aFlag:
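The following annotations produce the Core Schema Notation shown below (the entity name and the array contents are illustrative):

```cds
@aFlag
@aBoolean: false
@aString: 'foo'
@anInteger: 11
@aDecimal: 11.1
@aSymbol: #foo
@aReference: foo.bar
@anArray: [ 1, 'two', #three ]
entity Foo { /* ... */ }
```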
As described in the Core Schema Notation topic, the annotations shown in the previous example compile to Core Schema Notation as follows:
{
"@aFlag": true,
"@aBoolean": false,
"@aString": "foo",
"@anInteger": 11,
"@aDecimal": 11.1,
"@aSymbol": {"#":"foo"},
"@aReference": {"=":"foo.bar"},
"@anArray": [ ... ]
}
Syntax Shortcuts
Annotations in CDS are flat lists of key-value pairs assigned to a target. The record syntax, {key:<value>, ...},
is a shortcut notation that applies a common prefix to nested annotations. This means, the following examples are
equivalent to each other:
@Common.foo.bar
@Common.foo.car: 'wheels'
@Common.foo: { bar }
@Common.foo.car: 'wheels'
They would show up as follows in a parsed model (CSN):
{
"@Common.foo.bar": true,
"@Common.foo.car": "wheels",
}
Parent topic: Core Data and Services (CDS) Language Reference [page 1465]
Related Information
Aspect Types
Parent topic: Core Data and Services (CDS) Language Reference [page 1465]
Related Information
3.4.3.6 Services
Service Blocks
CDS allows you to define service interfaces as collections of exposed entities enclosed in a service block. A
service block is similar to a context. This is an example:
service SomeService {
entity SomeExposedEntity ...;
entity AnotherExposedEntity ...;
}
Exposed Entities
The entities exposed by a service are usually projections on entities from underlying data models. Use simple
derived entity definitions or standard view definitions, by using as SELECT from or as projection on.
service CatalogService {
entity Product as projection on data.Products {
*, created.at as since
} excluding { created };
}
service MyOrders {
view Order as select from data.Orders { * } where buyer=$user.id; //> $user not yet implemented!
entity Product as projection on CatalogService.Product;
}
Service definitions may specify actions and functions with a comma-separated list of named and typed
inbound parameters and an optional response type. The response type can be a reference to a declared type or
(not yet implemented) the inline definition of a new (struct) type. This is an example:
service MyOrders {
entity Order ...;
// unbound actions / functions
type cancelOrderRet {
acknowledge: String enum { succeeded; failed; };
message: String;
}
action cancelOrder ( orderID:Integer, reason:String ) returns cancelOrderRet;
function countOrders() returns Integer;
}
Note
Actions and functions in CDS and OData are similar. Actions and functions on service-level are unbound.
Actions and functions can also be bound to individual entities of a service, enclosed in an additional actions
block as the last clause in an entity or view definition. This is an example:
service CatalogService {
entity Products as projection on data.Products { ... }
actions {
// bound actions/functions
action addRating (stars: Integer);
function getViewsCount() returns Integer;
}
}
Service Extensions
You can extend services with additional entities and actions. This concept is similar to adding new entities to a
context.
You can extend entities with additional actions. This concept is similar to adding new elements.
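As a sketch (the entity, element, and function names are illustrative, and the extend syntax is shown as an assumption):

```cds
extend service CatalogService with {
  entity Reviews {
    key ID : UUID;
    product : Association to Products;
    stars : Integer;
  }
}

extend entity CatalogService.Products with actions {
  function averageRating() returns Decimal;
}
```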
Parent topic: Core Data and Services (CDS) Language Reference [page 1465]
3.4.3.7 Queries
Path Expressions
Path expressions are similar to column expressions with table aliases in standard SQL. The difference is that path
expressions have an arbitrary length.
● To navigate along associations and struct elements in any SQL clause. For example:
Sample Code
● In from clauses to fetch only those entries from a target entity, which are associated to a parent entity. For
example:
Sample Code
Sample Code
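Both usages can be sketched on the bookshop model from this guide (the author name is illustrative):

```cds
// Navigate along the author association in the select clause:
SELECT ID, author.name from Books;

// In the from clause, fetch only Books associated to a given Author:
SELECT from Authors[name='Some Author'].books { ID, title };
```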
Postfix Projections
Add projections after the FROM clause enclosed in curly-braces. This syntax is shown in the second example, but
both are valid.
Sample Code
Sample Code
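Both of the following queries are equivalent; the second uses a postfix projection (a sketch based on the bookshop model):

```cds
SELECT ID, title from Books;
SELECT from Books { ID, title };
```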
Excluding Elements
The excluding clause allows you to express projections with a reduced number of elements while staying open to
subsequent extension fields. Use the excluding clause to read all elements except for the ones listed in the
exclude list.
Sample Code
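A sketch based on the bookshop model (the projection name is illustrative): all elements of Books are read except stock, and elements added later to Books are picked up automatically.

```cds
entity Catalog as projection on Books { * } excluding { stock };
```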
Query-Local Mixins
Use the mixin...into clause to logically add elements to the source of the query. You can use and propagate the
added elements in the query’s projection.
Sample Code
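A sketch based on the bookshop model (the mixed-in association and its target entity are illustrative assumptions):

```cds
SELECT from Books mixin {
  // Logically added to Books for this query only:
  remoteStock : Association to RemoteStocks on remoteStock.bookID = ID;
} into {
  ID, title, remoteStock.quantity as stock
};
```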
CDL-Style Casts
Instead of using SQL-style type casts, you can use CDL-style casts. The following examples show both options:
Sample Code
Sample Code
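Both options, sketched on the bookshop model (the alias is illustrative):

```cds
// SQL-style cast:
SELECT from Books { cast(stock as String) as stockText };

// CDL-style cast:
SELECT from Books { stock as stockText : String };
```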
Parent topic: Core Data and Services (CDS) Language Reference [page 1465]
Related Information
3.4.3.8 Namespaces
Add a namespace directive at the beginning of a model to prefix the names of all subsequent definitions. This is
similar to other languages, like Java.
For example:
Sample Code
namespace foo.bar;
entity Foo {} //> foo.bar.Foo
entity Bar : Foo {} //> foo.bar.Bar
Contexts
Use contexts for nested namespace sections.
Sample Code
namespace foo.bar;
entity Foo {} //> foo.bar.Foo
context scoped {
entity Bar : Foo {} //> foo.bar.scoped.Bar
context nested {
entity Zoo {} //> foo.bar.scoped.nested.Zoo
}
}
Fully-Qualified Names
A model is a collection of definitions with unique, fully-qualified names. The following is an example of the model
shown above compiled to Core Schema Notation:
Sample Code
{"definitions":{
"foo.bar.Foo": { "kind": "entity" },
"foo.bar.scoped": { "kind": "context" },
"foo.bar.scoped.Bar": { "kind": "entity",
"includes": [ "foo.bar.Foo" ]
},
"foo.bar.scoped.nested": { "kind": "context" },
"foo.bar.scoped.nested.Zoo": { "kind": "entity" }
}}
Parent topic: Core Data and Services (CDS) Language Reference [page 1465]
Using Directives
Using directives let you declare shortcut aliases to fully-qualified names or namespaces of definitions in other
files. They work like the schema namespace aliases in XML. If no alias is specified, the last part of the fully-qualified
name is implicitly used as the default alias.
Sample Code
using top.level.scoped.Bar;
using top.level.scoped.nested;
using top.level.scoped.nested as specified;
entity Car : Bar {} //> : top.level.scoped.Bar
entity Moo : nested.Zoo {} //> : top.level.scoped.nested.Zoo
entity Zoo : specified.Zoo {} //> : top.level.scoped.nested.Zoo
From Clauses
Add a from clause to using directives to specify the file in which the respective definitions are found. You can
import single definitions or several definitions with a common namespace prefix. Choosing a local alias is optional.
Sample Code
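For example, importing from this guide's data model file: a single definition, a common namespace prefix, and a local alias (a sketch):

```cds
using my.bookshop.Books from './data-model';
using my.bookshop from './data-model';
using my.bookshop.Books as Book from './data-model';
```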
Model Resolution
Imports in CDS are similar to require in Node.js and imports in ES6. We reuse the module loading mechanism
used in Node.js. This means, the same rules apply:
● Names starting with ./ or ../ are resolved relative to the current model
● All other names are absolute references, retrieved from the node_modules folders
● The .json or .cds suffixes are appended in order, or an .../index.<json|cds> file is resolved for folders
Tip
To load modules from pre-compiled .json files, do not add .cds suffixes in import statements.
Parent topic: Core Data and Services (CDS) Language Reference [page 1465]
Related Information
Use npm install to make models from other packages available in your local project.
Procedure
Sample Code
This downloads the exported models in <some-other-package> and stores them to a local node_modules
folder.
Sample Code
{
"name": "test", "version": "1.1.0",
"dependencies": {
"some-other-package": ">1.0.4",
"yet-another-import": ">2.0.1"
}
}
CDS finds the imported content in node_modules when processing imports with absolute target references.
For example:
Sample Code
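For example, an absolute target reference resolved from node_modules (the namespace and entity names are illustrative):

```cds
using some.namespace.SomeEntity from 'some-other-package';
```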
4. Get the latest versions of all imported models using npm outdated, npm update, and npm install.
CDS does not have an integrated publishing mechanism. To share your model, use an index.cds file.
Procedure
namespace caps.foundation;
using from './codes';
using from './common';
using from './contacts';
using from './measures';
Sample Code
You can add custom logic to the generic service that you have created.
Let's do a quick recap. As you saw in the Getting Started Tutorial [page 1441] topic, you create your service by first defining your CDS data and service models. A service provider is then instantiated based on these models and serves most standard operations (that is, Create, Read, Update, Delete, and Query) out of the box through built-in generic handlers. This service thus exposes the data from the underlying SAP HANA database in an easy-to-consume manner.
● Plug in custom handlers to different hooks in a transaction (consisting of one or more operations in your
service). These hooks come in handy especially when you're looking to add application-specific validations.
For more information, see Register Custom Handlers to Hooks [page 1488].
● Override the operations provided by the generic service. For more information, see Override Generic
Operations [page 1499].
● Expose entities that exist in remote data sources, which could, in turn, be services or even databases. For more
information, see Implement Custom Handlers [page 1519].
Related Information
You can register (or plug in) custom handlers to different hooks in a transaction (consisting of one or more
operations in your service).
In other words, you can write your own custom code that implements functionality such as validation checks,
authorizations checks, and so on, at certain predefined stages of a transaction. Let's look at these stages in more
detail.
A transaction containing a single operation, for example Create, takes three steps:
Now, let's look at where the hooks get triggered in this flow as depicted in the following diagram. Click on the hooks
to see more information.
● Generic Operation — This operation (Create, Read, Update, Delete, or Query) is provided out-of-the-box by the
service provider, and is instantiated based on the CDS models you define.
● Overridden Operation — Once you write a custom handler annotated using @Create, @Update, and so on, it
overrides the corresponding generic operation. If there is no custom handler for the operation, the generic
operation executes by default.
● Custom Operation — You can write your own custom operations such as functions or actions. For these
custom operations, there are no Before<Operation> or After<Operation> hooks. However, you can use the
InitTransaction, EndTransaction, and CleanupTransaction hooks to perform your validations.
Caution
If the transaction completes without any exceptions, but there is an exception during the serialization phase, the transaction is not rolled back.
For example, if your custom handler annotated with @Function returns a complex type when the function is supposed to return an entity, the function itself completes without exceptions, but the end-to-end call fails in the serialization phase. In this case, the transaction is not rolled back. To avoid this, ensure that your custom handler is implemented correctly.
InitTransaction Hook
To implement the InitTransaction hook, attach the @InitTransaction annotation to the intended public method.
This annotated method (InitTransaction hook) is invoked just after the transaction starts and before any operation
executes.
Parameter Description
The method to which the annotation is attached must use the following input parameters:
The following sample code shows how you can apply the @InitTransaction annotation to a method:
Sample Code
@InitTransaction
public void initTrans(List<Request> requests, ExtensionHelper eh) {
logger.info("Starting transaction...");
}
EndTransaction Hook
To implement the EndTransaction hook, attach the @EndTransaction annotation to the intended public method.
This annotated method (EndTransaction hook) is invoked after all the operations in the transaction complete and
before the transaction commits.
Parameter Description
The method to which the annotation is attached must use the following input parameters:
The following sample code shows how you can apply the @EndTransaction annotation to a method:
Sample Code
@EndTransaction
public void endTrans(List<Request> requests, ExtensionHelper eh) {
// Check for any issues. If an exception is thrown from this exit, the transaction is rolled back.
}
CleanupTransaction Hook
To implement the CleanupTransaction hook, attach the @CleanupTransaction annotation to the intended public
method. This annotated method (CleanupTransaction hook) is invoked after the transaction completes
(committed or rolled back). Use this method to perform any post transaction checks or validations.
Parameter Description
The method to which the annotation is attached must use the following input parameters:
The following sample code shows how you can apply the @CleanupTransaction annotation to a method:
Sample Code
@CleanupTransaction
public void cleanup(boolean isCommitted, List<Request> requests, ExtensionHelper eh) {
logger.info("cleaning up..");
}
Related Information
You can implement a hook that runs just before the execution of a Create, Read, Update, Delete, or Query
operation.
A typical use case for these hooks is to perform validations (for example, authorization checks) before an
operation commences. You can also use them to modify the payload, especially in the case of the Create and
Update operations. To implement hooks that trigger before the operations execute, you can annotate the
corresponding public methods with the following:
Parameter Description
entity Name of the entity for which the hook is being implemented
serviceName (Optional) Name of the service that the entity belongs to. This
parameter is only needed in case you have multiple services
with the same entity name. In such a case, passing the
serviceName resolves the ambiguity.
To get a better understanding of where these hooks are placed, see the transaction flow [page 1488]. The following
sections cover each annotation in more detail.
@BeforeCreate
The @BeforeCreate annotation attached to a public method specifies that the method executes before the
Create operation on an entity in a service. You can use this hook to perform validation checks or to modify the
payload before the actual Create operation executes.
The method to which the annotation is attached must use the following input and output parameters:
The following sample code shows how you can apply the @BeforeCreate annotation to a method:
Sample Code
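The following is a minimal sketch of a @BeforeCreate hook, modeled on the other samples in this topic. The method name, the void return type, and the assumption that the hook receives the same CreateRequest and ExtensionHelper objects as the operation handlers are illustrative and may differ in your SDK version:

```java
@BeforeCreate(entity = "Customer")
public void beforeCreateCustomer(CreateRequest createRequest, ExtensionHelper extensionHelper) {
    logger.info("before create");
    // Perform validation checks (for example, authorization checks) here,
    // or modify the payload carried by the CreateRequest before the actual
    // Create operation executes.
}
```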
@BeforeRead
The @BeforeRead annotation attached to a public method specifies that the method (or hook) executes before
the Read operation on an entity in a service. You can use this hook to perform any validations (for example,
authorization checks) before the actual Read operation executes.
The method to which the annotation is attached must use the following input and output parameters:
The following sample code shows how you can apply the @BeforeRead annotation to a method:
Sample Code
@BeforeUpdate
The @BeforeUpdate annotation attached to a public method specifies that the method (or hook) executes before
the Update operation on an entity in a service. You can use this hook to perform validation checks or to modify the
payload before the actual Update operation executes.
The method to which the annotation is attached must use the following input and output parameters:
The following sample code shows how you can apply the @BeforeUpdate annotation to a method:
Sample Code
The @BeforeDelete annotation attached to a public method specifies that the method (or hook) executes before
the Delete operation on an entity in a service. You can use this hook to perform any validations (for example,
authorization checks) before the actual Delete operation executes.
The method to which the annotation is attached must use the following input and output parameters:
The following sample code shows how you can apply the @BeforeDelete annotation to a method:
Sample Code
The @BeforeQuery annotation attached to a public method specifies that the method (or hook) executes before
the Query operation on an entity in a service. You can use this hook to perform any validations (for example,
authorization checks) before the actual Query operation executes.
The method to which the annotation is attached must use the following input and output parameters:
The following sample code shows how you can apply the @BeforeQuery annotation to a method:
Sample Code
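The following is a minimal sketch of a @BeforeQuery hook, modeled on the other samples in this topic. The method name, the void return type, and the QueryRequest input are illustrative assumptions and may differ in your SDK version:

```java
@BeforeQuery(entity = "Customer")
public void beforeQueryCustomer(QueryRequest queryRequest, ExtensionHelper extensionHelper) {
    logger.info("before query");
    // Perform validation checks (for example, authorization checks) here
    // before the actual Query operation executes.
}
```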
To override the Create, Read, Update, and Delete operations of an entity, write your own custom methods that are
annotated by the following:
Parameter Description
entity Name of the entity for which the generated operation is being overridden
serviceName (Optional) Name of the service that the entity belongs to. This parameter is needed only if you have multiple services with the same entity name; in that case, passing the serviceName resolves the ambiguity.
Note
The generic Query operation cannot be overridden. If you want to customize the response from the generic
Query operation, consider adding methods annotated with @BeforeQuery or @AfterQuery. For more
information, see Register Custom Handlers to Hooks [page 1488].
@Create
The @Create annotation attached to a public method specifies that the method implements the Create operation
of an entity in a service. This method overrides the generic Create operation of the service.
The method to which the annotation is attached must use the following input and output parameters:
The following sample code shows how you can apply the @Create annotation to a method:
Sample Code
@Create(entity = "Customer")
public CreateResponse addCustomer(CreateRequest createRequest, ExtensionHelper extensionHelper) {
    logger.info("Creating Customer");
    DataSourceHandler handler = extensionHelper.getHandler();
    /*
     * We first call setSuccess to get the CreateResponseBuilder, then we set
     * the createdEntity using the setData method.
     * Next, we call the response method that returns the CreateResponse
     * object, which we can then return from our annotated method.
     */
    CreateResponse createResponse = CreateResponse.setSuccess().setData(createdEntity).response();
    return createResponse;
}
@Read
The @Read annotation attached to a public method specifies that the method implements the Read operation of
an entity in a service. This method overrides the generic Read operation of the service.
The method to which the annotation is attached must use the following input and output parameters:
The following sample code shows how you can apply the @Read annotation to a method:
Sample Code
@Read(entity = "Customer")
public ReadResponse readCustomer(ReadRequest readRequest, ExtensionHelper extensionHelper) {
    logger.info("Reading customer");
    DataSourceHandler handler = extensionHelper.getHandler();
    /*
     * Get the EntityMetadata object from the request object. This is required
     * because the executeRead method in the DataSourceHandler expects the list
     * of elements to read, and complex/structured elements are expected to be
     * flattened and given as single elements. EntityMetadata has the method
     * getFlattenedElementNames, which does exactly this.
     * The entity name comes from the entityMetadata.
     */
    EntityMetadata entityMetadata = readRequest.getEntityMetadata();
    EntityData entityData = handler.executeRead(entityMetadata.getName(), readRequest.getKeys(),
            entityMetadata.getFlattenedElementNames());
    /*
     * Form the response object by first creating the ReadResponseBuilder by
     * calling ReadResponse.setSuccess();
     * Then set the entityData into the builder and get the ReadResponse by
     * calling the response() method.
     */
    ReadResponseBuilder builder = ReadResponse.setSuccess();
    ReadResponse rr = builder.setData(entityData).response();
    return rr;
}
@Update
The @Update annotation attached to a public method specifies that the method implements the Update operation
of an entity in a service. This method overrides the generic Update operation of the service.
The method to which the annotation is attached must use the following input and output parameters:
The following sample code shows how you can apply the @Update annotation to a method:
Sample Code
@Update(entity = "Customer")
public UpdateResponse modifyCustomer(UpdateRequest updateRequest, ExtensionHelper extensionHelper) {
    logger.info("Updating Customer");
    EntityData entityData = updateRequest.getData();
    DataSourceHandler handler = extensionHelper.getHandler();
    UpdateResponse updateResponse = UpdateResponse.setSuccess().response();
    return updateResponse;
}
@Delete
The @Delete annotation attached to a public method specifies that the method implements the Delete operation
of an entity in a service. This method overrides the generic Delete operation of the service.
The method to which the annotation is attached must use the following input and output parameters:
The following sample code shows how you can apply the @Delete annotation to a method:
Sample Code
@Delete(entity = "Customer")
public DeleteResponse deleteCustomer(DeleteRequest deleteRequest, ExtensionHelper extensionHelper) {
    logger.info("Deleting Customer");
    DataSourceHandler handler = extensionHelper.getHandler();
    handler.executeDelete(deleteRequest.getEntityMetadata().getName(), deleteRequest.getKeys());
    return DeleteResponse.setSuccess().response();
}
You can implement a hook that runs just after the execution of a Create, Read, Update, Delete, or Query operation.
A typical use case for these hooks is to validate and modify the response obtained from the corresponding
operation before passing it on. To implement hooks that get triggered after these operations execute, you can
annotate the corresponding public methods with the following:
Parameter Description
entity Name of the entity for which the hook is being implemented
serviceName (Optional) Name of the service that the entity belongs to. This parameter is needed only if you have multiple services with the same entity name; in that case, passing the serviceName resolves the ambiguity.
To get a better understanding of where these hooks are placed, see the transaction flow [page 1488]. The following
sections cover each annotation in more detail.
@AfterCreate
The @AfterCreate annotation attached to a public method specifies that the method executes after the Create
operation on an entity in a service. You can use this hook to perform validation checks and to modify the response
from the Create operation, if needed.
The method to which the annotation is attached must use the following input and output parameters:
● EntityData object
● Map object containing the properties of the entity to be created as key-value pairs
● POJO object containing entity data
CreateResponseAccessor Input Provides methods with which you can access the response obtained after the execution of the Create operation. You can modify this response and return it, or return it without any changes, using the CreateResponse object.
The following sample code shows how you can apply the @AfterCreate annotation to a method:
Sample Code
@AfterCreate(entity = "Product")
public CreateResponse afterCreateProduct2(CreateRequest cr, CreateResponseAccessor productResponse, ExtensionHelper helper) {
    EntityData ed = productResponse.getEntityData();
    // Add one more property if required, for example, transient fields.
    EntityData edNew = EntityData.getBuilder(ed).addElement("SupplierId", 1).buildEntityData("Product");

    // Use this API if no change is required and the original response can be
    // returned as output. You can validate the response and, if the validation
    // passes, send the original response back.
    //return productResponse.getOriginalResponse();

    // Use this API if the response is modified, for example, if a property is
    // added to or removed from the response.
    return CreateResponse.setSuccess().setData(edNew).response();

    // Use this API if an error has to be returned. You can validate the
    // response and, if the validation fails, return an error.
    //return CreateResponse.setError(ErrorResponse.getBuilder().setMessage("Post create error!!").response());
}
@AfterRead
The @AfterRead annotation attached to a public method specifies that the method executes after the Read
operation on an entity in a service. You can use this hook to perform validation checks and to modify the response
from the Read operation, if needed.
The method to which the annotation is attached must use the following input and output parameters:
ReadResponseAccessor Input Provides methods with which you can access the response obtained after the execution of the Read operation. You can modify this response and return it, or return it without any changes, using the ReadResponse object.
The following sample code shows how you can apply the @AfterRead annotation to a method:
Sample Code
@AfterRead(entity = "Product")
public ReadResponse afterReadProduct(ReadRequest rr, ReadResponseAccessor productResponse, ExtensionHelper helper) {
    // The getEntityData accessor is assumed here, analogous to the
    // CreateResponseAccessor shown in the @AfterCreate sample.
    EntityData ed = productResponse.getEntityData();
    //Add your validation checks here
    return ReadResponse.setSuccess().setData(ed).response();
}
@AfterUpdate
The @AfterUpdate annotation attached to a public method specifies that the method executes after the Update
operation on an entity in a service. You can use this hook to perform validation checks.
The method to which the annotation is attached must use the following input and output parameters:
● EntityData object
● Map object containing the properties of the entity to be updated as key-value pairs
● POJO object containing entity data
UpdateResponseAccessor Input Provides methods with which you can access the response obtained after the execution of the Update operation.
The following sample code shows how you can apply the @AfterUpdate annotation to a method:
Sample Code
@AfterUpdate(entity = "Product")
public UpdateResponse afterupdateProduct(UpdateRequest cr, UpdateResponseAccessor uresp, ExtensionHelper helper) {
    logger.info("after update");
    //Add your validation checks here
    return UpdateResponse.setSuccess().response();
}
@AfterDelete
The @AfterDelete annotation attached to a public method specifies that the method executes after the Delete
operation on an entity in a service. You can use this hook to perform validation checks.
The method to which the annotation is attached must use the following input and output parameters:
DeleteResponseAccessor Input Provides methods with which you can access the response obtained after the execution of the Delete operation.
The following sample code shows how you can apply the @AfterDelete annotation to a method:
Sample Code
@AfterDelete(entity = "Product")
public DeleteResponse afterDeleteProduct(DeleteRequest cr, DeleteResponseAccessor dresp, ExtensionHelper helper) {
    logger.info("after delete");
    //Add your validation checks here
    return DeleteResponse.setError(ErrorResponse.getBuilder().setMessage("You cannot delete the entity.").response());
}
@AfterQuery
The @AfterQuery annotation attached to a public method specifies that the method executes after the Query
operation on an entity in a service. You can use this hook to perform validation checks and to modify the response
from the Query operation, if needed.
The method to which the annotation is attached must use the following input and output parameters:
QueryResponseAccessor Input Provides methods with which you can access the response obtained after the execution of the Query operation. You can modify this response and return it, or return it without any changes, using the QueryResponse object.
The following sample code shows how you can apply the @AfterQuery annotation to a method:
Sample Code
@AfterQuery(entity = "Product")
public QueryResponse afterReadProduct2(QueryRequest rr, QueryResponseAccessor productResponse, ExtensionHelper helper) {
    List<EntityData> edList = productResponse.getEntityDataList();
    List<EntityData> outputList = new ArrayList<EntityData>();
    for (EntityData ed : edList) {
        outputList.add(EntityData.getBuilder(ed).addElement("SupplierId", 1).buildEntityData("Product"));
    }

    // Use this API if no change is required and the original response can be
    // returned as output. You can validate the response and, if the validation
    // passes, send the original response back.
    //return productResponse.getOriginalResponse();

    // Use this API if the response is modified, for example, if a property is
    // added to or removed from the response.
    return QueryResponse.setSuccess().setData(outputList).response();

    // Use this API if an error has to be returned. You can validate the
    // response and, if the validation fails, return an error.
    //return QueryResponse.setError(ErrorResponse.getBuilder().setMessage("Post query error!!").response());
}
You can implement functions in your service with the help of the @Function annotation and various APIs.
Functions represent custom operations that are not easily defined in the CRUDQ operations on entities and do not
have any side effects. They are called using the HTTP GET method. Functions can return any of the following:
Note
If you are provisioning an OData V2 service, use the @Action [page 1516] annotation to implement a
function with the HTTP POST method.
Function Annotation
To implement the function in your service, you must annotate the method that provides the implementation logic
of the function with a @Function annotation. The annotation specifies that the method is invoked whenever there
is a call on the function in the specified service. The @Function annotation takes the following parameters:
A method to which this annotation is attached supports two signatures depending on the following scenarios:
The following sample code shows how you can apply the @Function annotation to a method (working on the local
HANA database):
Sample Code
@Function(Name="getCustomerAddress", serviceName="SampleService2")
public OperationResponse getCustomerAddress(OperationRequest functionRequest, ExtensionHelper extensionHelper)
{
    Map<String, Object> parameters = functionRequest.getParameters();
    DataSourceHandler handler = extensionHelper.getHandler();
    Map<String, Object> keys = new HashMap<String, Object>();
    keys.put("CustomerID", String.valueOf(parameters.get("CustomerID")));
    try {
        EntityData entityData = handler.executeRead("Customer", keys,
                functionRequest.getEntityMetadata().getFlattenedElementNames());
        // ... form and return the OperationResponse from entityData
    } catch (Exception e) {
        // ... handle the error and return an error response
    }
}
The following sample code shows how you can apply the @Function annotation to a method (working on a remote
data source):
Sample Code
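The following is a sketch of a function working on a remote data source, using the ODataQueryBuilder API described later in this guide. The service path, entity set, destination name, the withEntity builder call, and the exact method signatures are assumptions for illustration:

```java
@Function(Name="getRemoteProducts", serviceName="SampleService2")
public OperationResponse getRemoteProducts(OperationRequest functionRequest, ExtensionHelper extensionHelper) {
    // Build a query against the remote OData V2 service instead of the local HANA database.
    ODataQuery query = ODataQueryBuilder
            .withEntity("/sap/opu/odata/iwbep/GWSAMPLE_BASIC", "ProductSet")
            .select("ProductID", "Name")
            .build();
    // Execute the query with the destination setting specified in manifest.yml.
    ODataQueryResult result = query.execute("<destination-name>");
    // ... convert the result to POJOs or maps and form the OperationResponse
}
```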
You can implement actions in your service with the help of the @Action annotation and various APIs.
Actions represent custom operations that are not easily defined in the CRUDQ operations on entities and can have
side effects. They are called using the HTTP POST method. Actions can return any of the following:
Action Annotation
To implement an action in your service, annotate the method that provides the implementation logic of the action
with an @Action annotation. The annotation specifies that the method is invoked whenever there is a call on the
action in the specified service. If you are provisioning an OData V2 service, use this annotation to
implement a function with the HTTP POST method. The @Action annotation takes the following parameters:
A method to which this annotation is attached supports two signatures depending on the following scenarios:
The following sample code shows how you can apply the @Action annotation to a method (working on the local
HANA database):
Sample Code
@Action(Name="applyProductDiscount", serviceName="SampleService2")
public OperationResponse applyProductDiscount(OperationRequest actionRequest, ExtensionHelper extensionHelper)
{
    Map<String, Object> parameters = actionRequest.getParameters();
    DataSourceHandler handler = extensionHelper.getHandler();
    Map<String, Object> keys = new HashMap<String, Object>();
    keys.put("ProductID", String.valueOf(parameters.get("ProductID")));
    // Fetch the product details for the ID and fetch the amount.
    try {
        // ... read the product, apply the discount, and form the OperationResponse
    } catch (Exception e) {
        // ... handle the error and return an error response
    }
}
Sample Code
You can write custom handlers in your service to expose entities from remote data sources.
To keep things simple, the following topics describe how you can develop these custom operations without going
into the specifics of consuming the supported data sources. For more information on consuming specific data
sources, see Related Links.
You can implement the Query operation for your service with the help of the Query annotation and various APIs
discussed in this topic.
Query Annotation
To implement the Query operation in your service, annotate the method that handles the operation with a @Query
annotation. The annotation specifies that the method is invoked whenever there is a Query operation on the
specified entity set of a service. The @Query annotation takes the following parameters:
Parameter Description
serviceName Name of the service that the target entity set belongs to
sourceEntity (Optional) Name of the source entity set to which the target entity set is related. This parameter is needed only when you implement Query with navigation.
The method to which the annotation is attached must use the following input and output parameters:
Example
The following sample code shows how you can apply the @Query annotation to a method:
Sample Code
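The following is a sketch of a custom Query handler. The QueryRequest and QueryResponse types follow the @AfterQuery sample earlier in this guide; fetchProducts is a hypothetical helper standing in for the data-source-specific call:

```java
@Query(entity = "Products", serviceName = "EPMSampleService")
public QueryResponse getProducts(QueryRequest queryRequest, ExtensionHelper extensionHelper) {
    // Fetch the product list from the remote data source.
    // fetchProducts is a hypothetical helper; replace it with the call
    // appropriate for your data source.
    List<EntityData> products = fetchProducts(queryRequest);
    return QueryResponse.setSuccess().setData(products).response();
}
```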
With this code implementation, the GET request URL for querying products might look like the following:
https://host/odata/v4/EPMSampleService/Products
Sample Code
With this code implementation, the GET request URL for query with navigation might look like the following:
https://host/odata/v4/EPMSampleService/SalesOrders('SO2')/SalesOrderLineItems
You can also use $expand to query a collection of sales order line items associated with a sales order using
navigation property with one-to-many cardinality. The corresponding GET request URL might look like the
following:
https://host/odata/v4/EPMSampleService/SalesOrders('SO1')?$expand=SalesOrderLineItems
You can implement the Create operation for your service with the help of the Create annotation and various APIs
discussed in this topic.
Create Annotation
To implement the Create operation in your service, annotate the method that handles the operation with a
@Create annotation. The annotation specifies that the method is invoked whenever there is a Create operation on
the specified entity set of a service. The @Create annotation takes the following parameters:
Parameter Description
serviceName Name of the service that the target entity set belongs to
sourceEntity (Optional) Name of the source entity set to which the target entity set is related. This parameter is needed only when you implement Create with navigation.
The method to which the annotation is attached must have the following input and output parameters:
Example
The following sample code shows how you can apply the @Create annotation to a method:
Sample Code
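The following is a sketch of a custom Create handler, mirroring the CreateRequest and CreateResponse usage shown earlier in this guide. createInRemoteSource is a hypothetical helper standing in for the data-source-specific call:

```java
@Create(entity = "Products", serviceName = "EPMSampleService")
public CreateResponse createProduct(CreateRequest createRequest, ExtensionHelper extensionHelper) {
    EntityData data = createRequest.getData();
    // createInRemoteSource is a hypothetical helper; replace it with the
    // create call appropriate for your data source.
    EntityData created = createInRemoteSource(data);
    return CreateResponse.setSuccess().setData(created).response();
}
```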
With this code implementation, the POST request URL for creating a product might look like the following:
https://host/odata/v4/EPMSampleService/Products
The Content-Type header for this request must be set to application/json. The following is a sample
payload in JSON format:
{
    "ProductID": "KK-8888",
    "Name": "Notebook Long 25"
}
Example
The following sample code shows how you can implement Create with navigation using the @Create
annotation:
Sample Code
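The following sketch illustrates Create with navigation by setting the sourceEntity parameter described above. How the key of the parent sales order reaches the handler is an assumption, and createLineItemInRemoteSource is a hypothetical helper:

```java
@Create(entity = "SalesOrderLineItems", sourceEntity = "SalesOrders", serviceName = "EPMSampleService")
public CreateResponse createLineItem(CreateRequest createRequest, ExtensionHelper extensionHelper) {
    // The payload of the new line item; the key of the parent sales order
    // (for example, SO1) is assumed to arrive as part of the request.
    EntityData data = createRequest.getData();
    // createLineItemInRemoteSource is a hypothetical helper.
    EntityData created = createLineItemInRemoteSource(data);
    return CreateResponse.setSuccess().setData(created).response();
}
```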
With this code implementation, the POST request URL for creating a sales order line item for a specific sales
order (SO1) might look like the following:
https://host/odata/v4/EPMSampleService/SalesOrders('SO1')/SalesOrderLineItems
The Content-Type header for this request must be set to application/json. The following is a sample
payload in JSON format:
{
"SOLineItemID": "SOLI1",
"SalesOrderID": "SO1",
"ItemPosition": 1,
"ProductID": "HT-1000",
"Quantity": 25,
"GrossAmount": 100
}
You can implement the Read operation for your service with the help of the Read annotation and various APIs
discussed in this topic.
Read Annotation
To implement the Read operation in your service, annotate the method that handles the operation with a @Read
annotation. The annotation specifies that the method is invoked whenever there is a Read operation on the
specified entity set of a service. The @Read annotation takes the following parameters:
Parameter Description
serviceName Name of the service that the target entity set belongs to
sourceEntity (Optional) Name of the source entity set to which the target entity set is related. This parameter is needed only when you implement Read with navigation.
The method to which the annotation is attached must have the following input and output parameters:
Example
The following sample code shows how you can apply the @Read annotation to a method:
Sample Code
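The following is a sketch of a custom Read handler, mirroring the ReadRequest and ReadResponse usage shown earlier in this guide. readFromRemoteSource is a hypothetical helper standing in for the data-source-specific call:

```java
@Read(entity = "Products", serviceName = "EPMSampleService")
public ReadResponse readProduct(ReadRequest readRequest, ExtensionHelper extensionHelper) {
    // readFromRemoteSource is a hypothetical helper that reads the entity
    // identified by the keys of the Read request.
    EntityData data = readFromRemoteSource(readRequest.getKeys());
    return ReadResponse.setSuccess().setData(data).response();
}
```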
With this code implementation, the GET request URL for reading a specific product might look like the following:
https://host/odata/v4/EPMSampleService/Products('HT-1020')
Sample Code
With this code implementation, the GET request URL for read with navigation might look like the following:
https://host/odata/v4/EPMSampleService/SalesOrders('SO2')/SalesOrderLineItems('SOLI1')
Similarly, you can write a code implementation with SalesOrderLineItems as the source entity and Product
as the entity being read using $expand. The corresponding GET request URL might look like the following:
https://host/odata/v4/EPMSampleService/SalesOrderLineItems?$expand=Product
You can implement the Update operation for your service with the help of the Update annotation and various APIs
discussed in this topic.
Update Annotation
To implement the Update operation in your service, annotate the method that handles the operation with an
@Update annotation. The annotation specifies that the method is invoked whenever there is an Update operation
on the specified entity set of a service. The @Update annotation takes the following parameters:
Parameter Description
serviceName Name of the service that the target entity set belongs to
The method to which the annotation is attached must have the following input and output parameters:
Example
The following sample code shows how you can apply the @Update annotation to a method:
Sample Code
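The following is a sketch of a custom Update handler, mirroring the UpdateRequest and UpdateResponse usage shown earlier in this guide. The getKeys accessor on UpdateRequest is assumed by analogy with DeleteRequest, and updateInRemoteSource is a hypothetical helper:

```java
@Update(entity = "Products", serviceName = "EPMSampleService")
public UpdateResponse updateProduct(UpdateRequest updateRequest, ExtensionHelper extensionHelper) {
    EntityData data = updateRequest.getData();
    // updateInRemoteSource is a hypothetical helper; getKeys is assumed
    // by analogy with DeleteRequest.
    updateInRemoteSource(updateRequest.getKeys(), data);
    return UpdateResponse.setSuccess().response();
}
```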
With this code implementation, the PUT request URL for updating a product might look like the following:
https://host/odata/v4/EPMSampleService/Products('KK-8888')
The Content-Type header for this request must be set to application/json. The following is a sample
payload in JSON format:
{
    "ProductID": "KK-8888",
    "Name": "Notebook Basic 15",
    "Description": "Notebook Basic 15 with 2,80 GHz quad core, 15\" LCD, 4 GB DDR3 RAM, 500 GB Hard Disc",
    "Category": "Notebooks"
}
You can implement the Delete operation for your service with the help of the Delete annotation and various APIs
discussed in this topic.
Delete Annotation
To implement the Delete operation in your service, annotate the method that handles the operation with a
@Delete annotation. The annotation specifies that the method is invoked whenever there is a Delete operation on
the specified entity set of a service. The @Delete annotation takes the following parameters:
Parameter Description
serviceName Name of the service that the target entity set belongs to
The method to which the annotation is attached must have the following input and output parameters:
Example
The following sample code shows how you can apply the @Delete annotation to a method:
Sample Code
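The following is a sketch of a custom Delete handler, mirroring the DeleteRequest and DeleteResponse usage shown earlier in this guide. deleteInRemoteSource is a hypothetical helper standing in for the data-source-specific call:

```java
@Delete(entity = "Products", serviceName = "EPMSampleService")
public DeleteResponse deleteProduct(DeleteRequest deleteRequest, ExtensionHelper extensionHelper) {
    // deleteInRemoteSource is a hypothetical helper.
    deleteInRemoteSource(deleteRequest.getKeys());
    return DeleteResponse.setSuccess().response();
}
```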
With this code implementation, the DELETE request URL for deleting a product might look like the following:
https://host/odata/v4/EPMSampleService/Products('KK-8888')
To start off, you must define the entities in your service model depending on the data source:
Data Persistence
For the entities that come from remote data sources, you must ensure that they do not get persisted in your SAP
HANA database. To specify this, you must annotate the corresponding entities with @cds.persistence.skip.
Let's take a look at where this annotation is specified:
● Remote OData V2 service — the entities from the OData V2 service defined in the service model must be
annotated with @cds.persistence.skip.
● Remote databases (Postgres, HANA, and so on) — the entities from other remote databases defined in the
data model must be annotated with @cds.persistence.skip.
The following is a sample service model to show how this annotation is applied:
using MyOrgContext;
using MyOrgWarehouseService as boms;
service cpref {
    entity product as projection on MyOrgContext.product;
    entity prodcategories as projection on MyOrgContext.prodcategories;
    entity salesorder as projection on MyOrgContext.salesorder;
    @cds.persistence.skip
    entity MyOrgWarehouse as projection on boms.MyOrgWarehouse;
    @cds.persistence.skip
    entity MyOrgWarehouseAssignment as projection on boms.MyOrgWarehouseAssignment;
    @cds.persistence.skip
    entity MyOrgType as projection on boms.MyOrgType;
}
Custom Handlers
Next, you write your custom handlers that expose the remote data sources. For more information, see Implement
Custom Handlers [page 1519].
For more information on APIs specific to each data source that you can use in your custom handler, see Related
Links.
Let's take a look at APIs that implement the Create, Read, Update, Delete, and Query operations on an OData data
source.
Note
Currently, you can only consume an OData V2 data source.
Query Operation
To implement the Query operation on an OData V2 data source, use the ODataQueryBuilder and ODataQuery
classes. The ODataQueryBuilder class provides methods with which you can build the query and return it in an
ODataQuery object. Use the ODataQuery object to actually execute the query. The following is sample code that
shows how you can implement it:
Sample Code
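The following sketch matches the numbered steps below. The withEntity and inlineCount method names and the exact execute signature are assumptions for illustration; the service path, entity set, and destination name are examples:

```java
ODataQuery query = ODataQueryBuilder
        .withEntity("/sap/opu/odata/iwbep/GWSAMPLE_BASIC", "ProductSet") // step 1
        .select("ProductID", "Name", "Category")                         // step 2
        .skip(1)                                                         // step 3
        .top(5)                                                          // step 4
        .inlineCount()                                                   // step 5
        .build();                                                        // step 6
ODataQueryResult result = query.execute("<destination-name>");           // step 7
```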
1. Create an ODataQueryBuilder object with the entity set ProductSet belonging to the GWSAMPLE_BASIC
service.
2. Specify the properties of the entity set to be returned using the select method.
3. Skip the first record from the result set.
4. Get the top 5 records after skipping the first record.
5. Get the total number of records in the result set. This count is returned as part of the response body.
6. Build the query and return it in an ODataQuery object.
7. Use the ODataQuery object to execute the query with the destination setting (containing the URL and user
credentials to the OData V2 data source) specified in manifest.yml.
The results of the Query operation are then captured in the ODataQueryResult object. You can use this object to
return the result of the Query operation as a list of POJOs or HashMaps. For more information, see Conversion of
OData V2 Response to POJO and HashMap [page 1539].
Read Operation
To implement the Read operation on an OData V2 data source, use the ODataQueryBuilder and ODataQuery
classes. The ODataQueryBuilder class provides methods with which you can build the Read query and return it
in an ODataQuery object. Use the ODataQuery object to actually execute the query. The following is sample code
that shows how you can implement it:
Sample Code
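The following sketch matches the numbered steps below. The withEntity method name and the exact signatures of keys and execute are assumptions for illustration; readRequest is the handler's Read request object:

```java
ODataQuery query = ODataQueryBuilder
        .withEntity("/sap/opu/odata/iwbep/GWSAMPLE_BASIC", "ProductSet") // step 1
        .select("ProductID", "Name")                                     // step 2
        .keys(readRequest.getKeys())                                     // step 3
        .build();                                                        // step 4
ODataQueryResult result = query.execute("<destination-name>");           // step 5
```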
1. Create an ODataQueryBuilder object with the entity set ProductSet belonging to the GWSAMPLE_BASIC
service.
2. Specify the properties of the entity set to be returned using the select method.
3. Specify the product to be queried by passing the corresponding key from the Read request through the keys
method.
4. Build the query for the specific product and return it in an ODataQuery object.
5. Use the ODataQuery object to execute the query with the destination setting (containing the URL and user
credentials to the OData V2 data source) specified in manifest.yml.
The result of the Read operation is then captured in the ODataQueryResult object. You can use this object to
return the result of the Read operation as a POJO or HashMap. For more information, see Conversion of OData V2
Response to POJO and HashMap [page 1539].
To implement the Create operation on an OData V2 data source, use the ODataCreateRequestBuilder and
ODataCreateRequest classes. The ODataCreateRequestBuilder class provides methods with which you can
build the Create request and return it in an ODataCreateRequest object. Use the ODataCreateRequest object
to actually execute the Create operation. The following is sample code that shows how you can implement it:
Sample Code
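The following sketch matches the numbered steps below, using the withBodyAsMap variant mentioned in the note. The withEntity method name and the exact execute signature are assumptions for illustration; productAsMap is a hypothetical HashMap holding the payload:

```java
ODataCreateRequest createRequest = ODataCreateRequestBuilder
        .withEntity("/sap/opu/odata/iwbep/GWSAMPLE_BASIC", "ProductSet") // step 1
        .withBodyAsMap(productAsMap)                                     // step 2
        .build();                                                        // step 3
ODataCreateResult result = createRequest.execute("<destination-name>");  // step 4
```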
1. Create an ODataCreateRequestBuilder object with the entity set ProductSet belonging to the
GWSAMPLE_BASIC service.
2. Add the payload contained in a POJO object to the Create request.
Note
You can also add the payload contained in a HashMap to the Create request using the withBodyAsMap
method.
3. Build and return the Create request and body in an ODataCreateRequest object.
4. Use the ODataCreateRequest object to execute the Create operation with the destination setting
(containing the URL and user credentials to the OData V2 data source) specified in manifest.yml.
The result of the Create operation is then captured in the ODataCreateResult object. You can use this object to
return the result of the Create operation as a POJO or HashMap.
Update Operation
To implement the Update operation on an OData V2 data source, use the ODataUpdateRequestBuilder and
ODataUpdateRequest classes. The ODataUpdateRequestBuilder class provides methods with which you can
build the Update request and return it in an ODataUpdateRequest object. Use the ODataUpdateRequest object
to actually execute the Update operation. The following is sample code that shows how you can implement it:
Sample Code
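The following sketch matches the numbered steps below. The withEntity and withHeader method names and the exact signatures are assumptions for illustration; updateRequest is the handler's Update request object and changedProperties is a hypothetical HashMap holding the payload:

```java
ODataUpdateRequest odataUpdateRequest = ODataUpdateRequestBuilder
        .withEntity("/sap/opu/odata/iwbep/GWSAMPLE_BASIC", "ProductSet",
                updateRequest.getKeys())                                 // step 1
        .withHeader("If-Match", "*")                                     // step 2
        .withBodyAsMap(changedProperties)                                // step 3
        .build();                                                        // step 4
ODataUpdateResult result = odataUpdateRequest.execute("<destination-name>"); // step 5
```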
1. Create an ODataUpdateRequestBuilder object with the entity set ProductSet belonging to the
GWSAMPLE_BASIC service. Pass the keys of the entity to be updated using the getKeys method of the
UpdateRequest object.
2. Pass an If-Match header with the value "*" to indicate that the Update request must be processed
irrespective of the ETag value associated with the entity.
3. Add the payload contained in a HashMap object to the Update request.
4. Build and return the Update request and body in an ODataUpdateRequest object.
5. Use the ODataUpdateRequest object to execute the Update operation with the destination setting
(containing the URL and user credentials to the OData V2 data source) specified in manifest.yml. You can
specify that the Update operation is executed using one of the following methods:
○ PUT – This method lets you update all the properties of an existing entity. Pass all properties of the entity
as part of the payload.
○ PATCH – This method lets you update a subset of the properties of an existing entity. Pass only these
specific properties of the entity as part of the payload.
The result of the Update operation is then captured in the ODataUpdateResult object.
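Put together, the steps above might look as follows. This is a sketch only: the service path, entity keys, payload, and destination name are illustrative, and the exact builder signatures may differ between SDK versions.

```java
// Sketch of an Update handler (illustrative names and values)
Map<String, Object> keys = updateRequest.getKeys();

Map<String, Object> payload = new HashMap<>();
payload.put("Name", "Notebook Basic 15");

ODataUpdateRequest request = ODataUpdateRequestBuilder
    .withEntity("/sap/opu/odata/IWBEP/GWSAMPLE_BASIC", "ProductSet", keys)
    .withHeader("If-Match", "*")   // process the update irrespective of the ETag
    .withBodyAsMap(payload)        // PATCH-style: pass only the changed properties
    .build();

// "MyDestination" stands for the destination configured in manifest.yml
ODataUpdateResult result = request.execute("MyDestination");
```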
Delete Operation
To implement the Delete operation on an OData V2 data source, use the ODataDeleteRequestBuilder and
ODataDeleteRequest classes. The ODataDeleteRequestBuilder class provides methods with which you can
build the Delete request and return it in an ODataDeleteRequest object. Use the ODataDeleteRequest object
to actually execute the Delete operation. The following is sample code that shows how you can implement it:
Sample Code
1. Create an ODataDeleteRequestBuilder object with the entity set ProductSet belonging to the
GWSAMPLE_BASIC service. Pass the keys of the entity to be deleted using the getKeys method of the
DeleteRequest object.
2. Pass an If-Match header with the value "*" to indicate that the Delete request must be processed
irrespective of the ETag value associated with the entity.
3. Build and return the Delete request in an ODataDeleteRequest object.
4. Use the ODataDeleteRequest object to execute the Delete operation with the destination setting (containing
the URL and user credentials to the OData V2 data source) specified in manifest.yml.
The result of the Delete operation is then captured in the ODataDeleteResult object.
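The Delete steps can be sketched in the same way (again, the names are illustrative and signatures may vary by SDK version):

```java
ODataDeleteRequest request = ODataDeleteRequestBuilder
    .withEntity("/sap/opu/odata/IWBEP/GWSAMPLE_BASIC", "ProductSet",
                deleteRequest.getKeys())
    .withHeader("If-Match", "*")   // process the delete irrespective of the ETag
    .build();

// "MyDestination" stands for the destination configured in manifest.yml
ODataDeleteResult result = request.execute("MyDestination");
```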
You can use the Destination service in the Cloud Foundry environment of SAP Cloud Platform to connect to and
consume an OData service.
Context
To configure the remote OData V2 service as a destination that you can consume using your custom handlers:
Procedure
1. Create a destination service instance on the Cloud Foundry environment of SAP Cloud Platform and bind it to
your application. For more information, see Create and Bind a Destination Service Instance.
2. In the destination service instance in the Cockpit, select Destinations from the navigation menu and create a
new destination instance with a specific name. For more information on creating the destination, see Create
HTTP Destinations.
Note
You can use this destination name when you write your custom handlers. For sample code that uses this
destination name, see Implementing the Query Operation [page 1534].
Note
To connect to OData services that are on on-premise systems, you must use the SAP Cloud Platform
Connectivity service and Cloud Connector. For more information on configuring the Cloud Connector, see
Cloud Connector. For more information on consuming the Connectivity service, see Consuming the
Connectivity Service.
You can define entities in your service model based on those from a remote OData V2 data source.
Context
To define entities from the remote data source in your service model:
Procedure
1. In your SAP Web IDE project, right-click the srv folder and select New Data Model from External
Service .
2. Select an OData service from one of the sources and choose Next.
3. Choose Finish. The OData model (EDMX) file and its CSN equivalent from the OData V2 data source are placed
in the srv/external folder.
Note
If you had previously selected the same service for this project, you can choose to either overwrite the
existing OData model and CSN files or create them with new names.
4. Define the required entities from the imported CSN file in your service model at srv/<your-service>.cds.
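For example, assuming the imported service is called GWSAMPLE_BASIC and exposes a Product entity, the service model could reference it like this (all names are illustrative):

```cds
using { GWSAMPLE_BASIC as external } from './external/GWSAMPLE_BASIC';

service MyService {
  entity Products as projection on external.Product;
}
```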
You can convert data retrieved from the OData V2 data source to POJO or HashMap objects for further processing.
POJO
You can use the ODataQueryResult class to convert OData V2 response data to POJO. It provides you with the
following methods that help in the conversion:
The following example shows how you can use the as() API to convert OData V2 response, after a Read operation,
to a POJO and return it:
The following example shows how you can use the asList() API to convert OData V2 response, after a Query
operation, to a list of POJOs and return it:
You can also use the ODataQueryResult class to convert OData V2 response data to HashMap. It provides you
with the following methods that help in the conversion:
The following example shows how you can use the asMap() API to convert OData V2 response, after a Read
operation, to a HashMap and return it:
The following example shows how you can use the asListOfMaps() API to convert OData V2 response, after a
Query operation, to a list of HashMaps and return it:
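Minimal sketches of the four conversion calls described above (Product is an assumed POJO class; readResult and queryResult stand for result objects of previously executed Read and Query operations):

```java
// Read result to a single typed POJO
Product product = readResult.as(Product.class);

// Query result to a list of typed POJOs
List<Product> products = queryResult.asList(Product.class);

// Read result to a generic key-value map
Map<String, Object> productMap = readResult.asMap();

// Query result to a list of key-value maps
List<Map<String, Object>> productMaps = queryResult.asListOfMaps();
```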
Related Information
First, use an ODataReadMediaBuilder object for building an OData request in order to query media resources
like images, videos, and others. The following sample code shows the APIs that you can use to create the request:
1. Specified the following information to identify the media resource using the withEntity method:
1. Service name including the service path
2. Name of the entity set containing the media resource
3. Key-value pairs to identify the media resource (in this case, ID=1)
2. Specified the type of media resource using the withHeader method (in this case, JPEG image).
3. Enabled metadata caching for this media request using the enableMetadataCache method.
4. Created the ODataReadMediaRequest object using the build method.
Now that we have created the ODataReadMediaRequest object, simply execute the query and store the media
resource in the ODataMediaResult object as shown in the following sample:
The destination name parameter represents the OData V2 data source connection.
Finally, to read the media resource, you can retrieve it into an InputStream object using the getMedia method.
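A sketch of the whole flow, using the methods named above (the service path, entity set, key, and header value are illustrative, and the parameter types are assumptions):

```java
ODataReadMediaRequest mediaRequest = ODataReadMediaBuilder
    .withEntity("/sap/opu/odata/demo/MEDIA_SRV",    // service name including path
                "ImageCollection",                  // entity set with the media resource
                Collections.singletonMap("ID", 1))  // key-value pair identifying it
    .withHeader("Accept", "image/jpeg")             // type of the media resource
    .enableMetadataCache()
    .build();

ODataMediaResult mediaResult = mediaRequest.execute(destinationName);
InputStream image = mediaResult.getMedia();
```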
Related Information
When you consume OData V2 services, you can apply complex filter queries on the OData V2 data source APIs by
combining simple filters using binary operators and and or.
The $filter query option is supported in ODataQueryBuilder through the filter method. The filter method takes an
object of FilterExpression as argument. The FilterExpression represents the value of the filter condition
that you want to apply to your OData query.
FilterExpressions can represent simple filter conditions of the form, OData property operator value, like
SalesOrderID eq 'ertfgh' and it can also represent complex filters resulting from the combination of such
simple filters using and and or.
The following examples show how you can construct FilterExpressions. Each example states the OData $filter
query option and how the equivalent is achieved using FilterExpressions.
Example
Sample Code
The value INR must be converted to an ODataType; this is done for type safety.
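The following sketch shows both a simple and a combined FilterExpression (the property names are illustrative, and the exact constructor and combinator signatures may vary by SDK version):

```java
// $filter=CurrencyCode eq 'INR' - the value is wrapped in an ODataType for type safety
FilterExpression inr = new FilterExpression("CurrencyCode", "eq", ODataType.of("INR"));

// $filter=CurrencyCode eq 'INR' or CurrencyCode eq 'USD'
FilterExpression inrOrUsd =
    inr.or(new FilterExpression("CurrencyCode", "eq", ODataType.of("USD")));

// $filter=Type eq 'Enterprise' and (CurrencyCode eq 'INR' or CurrencyCode eq 'USD')
FilterExpression combined =
    new FilterExpression("Type", "eq", ODataType.of("Enterprise")).and(inrOrUsd);
```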
Example
Sample Code
Sample Code
Alternate syntax
Example
The following sample code shows how you can combine filter expressions with or:
Sample Code
Sample Code
Example
The following sample code shows how you can combine filter expressions with and:
Sample Code
Example
The following sample code shows how you can combine filter expressions to determine all the companies that
have passed the diversity hire mandate:
Sample Code
Note
In OData, the individual filter conditions in a $filter expression follow a default operator precedence. That
default precedence does not apply here: the order in which you combine FilterExpressions determines the grouping.
Let's take a look at APIs that implement the Create, Read, Update, Delete, and Query operations on a CDS data
source.
CDSDataSourceHandler
To execute any OData operation on a CDS data source, first create a CDSDataSourceHandler object passing a
Connection object (that contains the CDS data source connection) and the namespace of the requested entity.
The following code sample shows how you can create a CDSDataSourceHandler object:
CDSDataSourceHandler dsHandler =
DataSourceHandlerFactory.getInstance().getCDSHandler(getConnection(),
queryRequest.getEntityMetadata().getNamespace());
The following code sample shows how you can create a Connection object that contains the CDS data source
connection:
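One common way to obtain such a Connection is a JNDI lookup of the data source bound to the application. The JNDI name below is illustrative and depends on your project configuration:

```java
// Look up the bound SAP HANA data source and open a connection
DataSource dataSource =
    (DataSource) new InitialContext().lookup("java:comp/env/jdbc/DefaultDB");
Connection connection = dataSource.getConnection();
```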
Query Operation
To implement the Query operation on a CDS data source, use the CDSSelectQueryBuilder and CDSQuery
classes. The CDSSelectQueryBuilder class provides methods with which you can build the query and return it
in a CDSQuery object. Use the CDSDataSourceHandler object to actually execute the query. The following is
sample code that shows how you can implement it:
Sample Code
In this example, you have seen the use of a simple where condition. However, in real-world scenarios you might
have to write composite conditions where two or more conditions are grouped together with a certain order of
precedence. The following is sample code that shows how you can implement it:
Sample Code
The query, in this example, contains a composite condition of the form (Type='Enterprise' AND
CurrencyCode='INR') OR CurrencyCode='USD'. Thus, you can group your conditions with higher
precedence, and use the grouped conditions to form the composite where condition.
Read Operation
To execute the Read operation on a CDS data source, use the executeRead method of the
CDSDataSourceHandler object passing the following parameters:
● Name of the entity to be read.
● Key-value pairs identifying the entity instance.
● Names of the elements to be returned.
The following is sample code that shows how you can implement it:
Sample Code
EntityData ed = dsHandler.executeRead(readRequest.getEntityMetadata().getName(),
readRequest.getKeys(), readRequest.getEntityMetadata().getElementNames());
Create Operation
To execute the Create operation on a CDS data source, use the executeInsert method of the
CDSDataSourceHandler object passing the following parameters:
● EntityData object containing the entity instance to be inserted in the database. This entity instance can be
obtained from the CreateRequest object.
● Boolean value indicating whether the created entity is returned or not.
The following is sample code that shows how you can implement it:
Sample Code
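A sketch of the call, following the same pattern as the executeUpdate example (the boolean flag requests that the created entity is returned):

```java
EntityData created = dsHandler.executeInsert(createRequest.getData(), true);
```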
The executeInsert method, in this example, returns an instance of the EntityData object.
Update Operation
To execute the Update operation on a CDS data source, use the executeUpdate method of the
CDSDataSourceHandler object passing the following parameters:
● EntityData object containing the entity instance to be updated in the database. This entity instance can be
obtained from the UpdateRequest object.
● Key-value pair identifying the entity instance.
● Boolean value indicating whether the updated entity is returned or not.
The following is sample code that shows how you can implement it:
Sample Code
EntityData ed = dsHandler.executeUpdate(updateRequest.getData(),
updateRequest.getKeys(), true);
The executeUpdate method, in this example, returns an instance of the EntityData object.
Delete Operation
To execute the Delete operation on a CDS data source, use the executeDelete method of the
CDSDataSourceHandler object passing the following parameters:
● Name of the entity to be deleted.
● Key-value pairs identifying the entity instance.
These parameter values can be obtained from the DeleteRequest object. The following is sample code that
shows how you can implement it:
Sample Code
dsHandler.executeDelete(deleteRequest.getEntityMetadata().getName(),
deleteRequest.getKeys());
You can write simple code to handle exceptions when working with the CDS data source. A CDSException can be thrown by any of the following CDSDataSourceHandler methods:
● executeInsert
● executeUpdate
● executeRead
● executeDelete
● executeQuery
You can use the DatasourceExceptionType enum to categorize your exception handling code based on the
reason for the exception.
Example
The following sample code shows how you catch a CDSException and handle it:
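A sketch of such a handler (the accessor and enum value used for branching are illustrative names, not confirmed API):

```java
try {
    dsHandler.executeInsert(createRequest.getData(), true);
} catch (CDSException e) {
    // Branch on the reason for the failure; getType() and SQL_EXCEPTION
    // are illustrative names
    if (DatasourceExceptionType.SQL_EXCEPTION.equals(e.getType())) {
        // for example, log the error and return a user-friendly message
    }
}
```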
You can return error, warning, and information messages depending on the outcome of a request.
Message Container
While processing a request, you may have multiple events that must be communicated to the end user as
messages. To temporarily store these messages during request processing, use the message container. You can
use various APIs (discussed in the following sections) to add your messages to the message container. Once the
processing of the OData request is complete, you can return these messages from the message container through
the sap-message custom response header or the error response, depending on the success or failure of the
request.
Successful Request
When a server processes a request, it may be processed successfully. Even then, there may have been
unexpected events or potential side effects that are worth reporting. For a successful request, these messages are
returned in the sap-message custom response header, for example:
sap-message : {
"code": "UPDATE_PROD",
"message": "Updated the product record.",
"target": "",
"details": []
}
● code – a code or key that uniquely identifies the message for the service.
● message – a text message describing the outcome or an event that occurred while processing the request.
● target – target or focus of the overall message.
● details – additional error, warning or information details with code (key), message, and target information.
This is an array of JSON objects and is optional.
The following code snippet shows how you can add error, warning, and information messages to the sap-message
header:
The setLeadingMessage API adds the main message describing the outcome of the request to the
sap-message header. This message is also defined in the i18n.properties file. Only after this API is called are
the messages from the message container added to the details section of the sap-message header.
Failed Request
A request may also fail to be processed at all. This outcome must be communicated to your end user as an error
message that specifies what went wrong and why. The error response body looks like this:
{
"error": {
"code": "UF0",
"message": "Unsupported functionality",
"details": [
{
"code": "UF1",
"message": "$search query option is not supported",
"target": "$search"
}
]
}
}
The following code snippet shows how you can add an error message with additional error details:
Here the main (leading) message is added to the ErrorResponse object using the setMessage API. The
parameter keyError represents the unique key for the message you entered in the i18n.properties file. For
more information on this file, see Internationalization [page 1553]. While processing the query request, you may
have added information or error messages (that are not fatal to the request) to the message container. These
messages may aid the end user to troubleshoot the error that was fatal to the request. For that purpose, you can
use the addContainerMessages API to add them to the ErrorResponse object. By specifying the severity, you
are adding only those messages that fall into that category (in this example, error and information) from the
container. These messages from the container are added to the details section of the error response.
Internationalization
You can ensure that your messages are translatable to different languages. To achieve this, add your messages to
the i18n.properties file. Each message must be entered as a key-value pair separated by =.
This is the default properties file. You can also add properties files for different languages in the same folder, with a
language suffix, and optionally a country suffix. For example, i18n_de, i18n_ja_JP, and so on.
After creating the properties file, you can use an API (for example, addInfoMessage) that takes a key in this
properties file, as an argument, corresponding to any message. The framework, in turn, processes the accept-
language header, gets the properties file for the requested language, and gets the message corresponding to the
key from that properties file.
Parameterized messages
Parameterized messages enable you to reuse the same message with different parameter values. Using
parameterized messages you can handle different instances of a scenario with a single message. For example, let's
consider two user entry fields ProductID and Currency that are mandatory fields. If the user does not enter a
value for either of these fields, you can throw the message {0} cannot be null passing ProductID or
Currency as parameters in the message APIs.
Here 0 is the parameter index. There can be multiple parameters in a single message.
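The {0} placeholder syntax matches java.text.MessageFormat conventions. Whether the framework uses MessageFormat internally is an assumption, but the template resolution can be illustrated with the standard library alone:

```java
import java.text.MessageFormat;

public class ParameterizedMessageDemo {
    public static void main(String[] args) {
        String template = "{0} cannot be null";
        // The same template serves both mandatory fields
        System.out.println(MessageFormat.format(template, "ProductID"));
        System.out.println(MessageFormat.format(template, "Currency"));
    }
}
```

This prints "ProductID cannot be null" and "Currency cannot be null".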
The following code snippet shows how the APIs can take a variable length array of objects:
@Query(Entity....)
public QueryResponse getAllProducts(QueryRequest queryRequest) {
    ...
    queryRequest.getMessageContainer().addInfoMessage(key, target, "ProductID");
    ...
    queryRequest.getMessageContainer().addErrorMessage(key, target, 8, 50);
    ...
    queryRequest.getMessageContainer().setLeadingMessage(key, target, "SalesOrders", "Success", 12345);
    ...
}
Related Information
3.4.5 Localization
Localization helps you provide language-specific versions of your models, for example, the EDMX models to serve
UIs. In localized models, static texts are replaced by their translated counterparts; other i18n aspects, such as
number formats or date/time formats, are not affected.
To understand how localization is handled with the application programming model, we will use a base project with
the following CDS content as an example:
Sample Code
The application contains the following static text for UI labels in OData annotations:
Sample Code
service Bookshop {
entity Books @(
UI.HeaderInfo: {
Title.Label: 'Book',
TypeName: 'Book',
TypeNamePlural: 'Books'
}
){}
}
To internationalize your application, you need to externalize the texts from the CDS models to a .properties file
in an i18n, _i18n or assets/i18n folder next to the model files or in a parent folder. The following example
shows possible locations of property bundles for srv/my-service.cds in your project structure:
Sample Code
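A possible layout, assuming the bundle sits in a _i18n folder next to the model file:

```
srv/
  my-service.cds
  _i18n/
    i18n.properties
    i18n_de.properties
```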
Sample Code
Book = Book
Books = Books
foo = Foo
To refer to these keys in your model you would add code similar to:
Sample Code
service Bookshop {
entity Books @(
UI.HeaderInfo: {
Title.Label: '{i18n>Book}',
TypeName: '{i18n>Book}',
TypeNamePlural: '{i18n>Books}',
},
){}
}
Note
You can choose the keys of your property entries. To configure the folder names where CDS will search for
property bundles, add cds.i18n.folders in your project’s package.json. The default is:
Sample Code
"cds":{"i18n":{
"folders": [ "_i18n", "i18n", "assets/i18n" ]
}}
The translation process uses your i18n.properties file as input and returns a number of copies with
translations. Each copy has a language or locale code added to its name. For example:
Sample Code
_i18n/
i18n.properties # dev master -> 'default fallback'
i18n_en.properties # English -> 'default language'
i18n_de.properties # German
i18n_zh_CN.properties # Chinese
i18n_zh_TW.properties # Traditional Chinese
...
Sample Code
Book = Buch
Books = Bücher
bar = Bar
Sample Code
key;en;de;zh_CN;...
Book;Book;Buch;...
Books;Books;Bücher;...
...
Localized Models
When building your project, cds.compile.to.edm(x) automatically looks up properties files in the _i18n folder
and merges the contained translations into corresponding localized EDMX output versions. Assuming the bundles shown above, the build produces, for example:
Sample Code
srv/src/main/resources/edmx/...
MyService.xml # default fallback
MyService_en.xml # English
MyService_de.xml # German
MyService_zh_CN.xml # Chinese
MyService_zh_TW.xml # Traditional Chinese
...
Merging Algorithm
The complete stack of overlayed models for the given example would be:
Source Content
Note
The default language is usually en. To change the default language, configure the
cds.i18n.default_language in your project’s package.json.
If your application is importing models from a reuse package, that package might come with its own language
bundles for localization. These language bundles are applied upon import and added to the complete stack. For
example, if your data model imports from a foundation package, then the overall stack of overlays would be:
Source Content
To reduce the number of translated text bundles and simplify bundle lookups at runtime, ISO-639-1 language
codes are used as the language ID. This means the region or script designators are removed from incoming
locales. For example, en_US, en_CA, en_AU are changed to en. However, there are exceptions to this rule. The
following table lists the white-listed language and region descriptors which are not removed from language IDs.
Exceptions
Locale Language
Related Information
3.4.6 References
Related Information
The CDS command line interface is just a thin wrapper around these APIs. All APIs are accessible through one
single facade object. For example:
Sample Code
3.4.6.1.1 cds.load
Loads and parses one or more input models and returns a Promise that resolves into a single merged, extended
and inferred model.
Use this method to create effective models. An options object can be passed as last argument.
Related Information
Entry point to a collection of different back-end processors. Each back-end processor expects a parsed model in
Core Schema Notation (CSN) format as input and generates a different output.
cds.compile APIs
cds.compile.to.edm (csn, options)
Compiles and returns an OData v4 EDM model object for the passed in model, which is expected to contain at
least one service definition. If there is more than one service definition, use the {service:...} option
parameter to:
● Choose a single service definition. For example: {service:'Catalog'}.
● Choose to return EDM objects for all service definitions. For example: {service:'all'}. A generator is
returned that yields [ edm, {name} ] for each service.
For one service:
Sample Code
let edm = cds.compile.to.edm (csn, {service:'Catalog'})
console.log (edm)
For all services:
Sample Code
let all = cds.compile.to.edm (csn, {service:'all'})
for (let [edm, {name}] of all)
console.log (name, edm)
Sample Code
Sample Code
cds.compile.to.yml(csn)
Related Information
Learn how to add OData annotations to CDS models written in CDL (CDS Language Reference). We have also
included the common rules for mapping special OData concepts.
OData defines a strict two-fold key structure composed of @<Vocabulary>.<Term>. Annotations are always
specified as a Term with either a primitive, a record or collection value. The properties themselves may also be
primitives, records or collections.
This is an example:
Sample Code
@Common.Label: 'Customer'
@Common.ValueList: {
Label: 'Customers',
CollectionPath: 'Customers'
}
entity Customers;
Sample Code
{"definitions": {
"Customers": {
"kind": "entity",
"@Common.Label": "Customer",
"@Common.ValueList.Label": "Customers",
"@Common.ValueList.CollectionPath": "Customers"
}
}}
Sample Code
<Annotations Target="MyService.Customers">
<Annotation Term="Common.Label" String="Customer"/>
<Annotation Term="Common.ValueList">
<Record Type="Common.ValueListType">
<PropertyValue Property="Label" String="Customers"/>
<PropertyValue Property="CollectionPath" String="Customers"/>
</Record>
</Annotation>
</Annotations>
Related Information
You can add a qualifier after a # sign to an annotation key. The qualifier must be the last
component of the key in the syntax.
This is an example:
Sample Code
@Common.Label: 'Customer'
@Common.Label#Legal: 'Client'
@Common.Label#Healthcare: 'Patient'
@Common.ValueList: {
Label: 'Customers',
CollectionPath:'Customers'
}
@Common.ValueList#Legal: {
Label: 'Clients',
CollectionPath:'Clients'
}
Sample Code
{
"@Common.Label": "Customer",
"@Common.Label#Legal": "Client",
"@Common.Label#Healthcare": "Patient",
"@Common.ValueList.Label": "Customers",
"@Common.ValueList.CollectionPath": "Customers",
"@Common.ValueList#Legal.Label": "Clients",
"@Common.ValueList#Legal.CollectionPath": "Clients"
}
Note
Interpretation and special handling of these qualifiers is not supported. You must write and apply them as
specified by your chosen OData vocabularies.
Related Information
3.4.6.2.3 Primitives
The following primitive values are supported in annotations:
● Strings
● Numbers
● true
● false
● null
These values are mapped to the corresponding OData annotation terms as follows:
Sample Code
@Some.Null: null
@Some.Boolean: true
@Some.Integer: 1
@Some.Number: 3.14
@Some.String: 'foo'
<Annotation Term="Some.Null"><Null/></Annotation>
<Annotation Term="Some.Boolean" Bool="true"/>
<Annotation Term="Some.Integer" Int="1"/>
<Annotation Term="Some.Number" Decimal="3.14"/>
<Annotation Term="Some.String" String="foo"/>
Related Information
3.4.6.2.4 Collections
If primitives show up as direct elements of the array, these elements are wrapped into individual primitive child
nodes of the resulting collection. The rules for records and collections are applied recursively, as shown in the
following example:
Sample Code
@Some.Collection: [
true, 1, 3.14, 'foo',
{ $Type:'UI.DataField', Label:'Whatever', Hidden }
]
Sample Code
<Annotation Term="Some.Collection">
<Collection>
<Bool>true</Bool>
<Int>1</Int>
<Decimal>3.14</Decimal>
<String>foo</String>
<Record Type="UI.DataField">
<PropertyValue Property="Label" String="Whatever"/>
<PropertyValue Property="Hidden" Bool="true"/>
</Record>
</Collection>
</Annotation>
Related Information
3.4.6.2.5 Records
Note
Primitive types are translated analogously as explained in Primitives [page 1566].
This is an example:
Sample Code
@Some.Record: {
Null: null,
Boolean: true,
Integer: 1,
Number: 3.14,
String: 'foo'
}
Sample Code
<Annotation Term="Some.Record">
<Record>
<PropertyValue Property="Null"><Null/></PropertyValue>
<PropertyValue Property="Boolean" Bool="true"/>
<PropertyValue Property="Integer" Int="1"/>
<PropertyValue Property="Number" Decimal="3.14"/>
<PropertyValue Property="String" String="foo"/>
</Record>
</Annotation>
To specify an explicit type for records in OData, add a property called $Type as the first element of the record. For
example:
Sample Code
@UI.Identification: [
{$Type:'UI.DataField', Value: deliveryId }
]
Sample Code
<Annotation Term="UI.Identification">
<Collection>
<Record Type="UI.DataField">
<PropertyValue Property="Value" Path="deliveryId"/>
</Record>
</Collection>
</Annotation>
Related Information
References in CDS annotations are either mapped to .Path properties or nested <Path> elements.
This is an example:
Sample Code
@Some.Term: Some.Reference
@Some.Record: {
Value: Some.Reference
},
@Some.Collection: [
Some.Reference
],
Sample Code
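Assuming Some.Reference resolves to an element path, the EDMX output would use Path accordingly. This is a sketch following the mapping rule stated above:

```xml
<Annotation Term="Some.Term" Path="Some/Reference"/>
<Annotation Term="Some.Record">
  <Record>
    <PropertyValue Property="Value" Path="Some/Reference"/>
  </Record>
</Annotation>
<Annotation Term="Some.Collection">
  <Collection>
    <Path>Some/Reference</Path>
  </Collection>
</Annotation>
```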
Related Information
This is an example:
Sample Code
@Common.FieldControl: #Hidden
@Common.FilterExpressionRestrictions: [{
Property: deliveryDate,
AllowedExpressions: #SingleInterval
}]
Sample Code
Exceptions
In some cases, OData uses enums combined with a special technique for annotating annotations. This is generally
not supported in CDS annotations, except for:
● UI.Importance
● TextArrangement
Sample Code
@Common:{
Text:Text, TextArrangement:#TextOnly
}
@UI.LineItem: [
{ ..., Importance:#High}
],
Sample Code
Related Information
Shortcuts are CDS keywords that are mapped automatically to corresponding OData Annotations.
The following list shows these standard shortcuts and, where recoverable, the OData annotations they map to:
● virtual
● masked
● @title maps to @Common.Label
● @description maps to @Core.Description
● @readonly maps to @Core.Permissions: #Read as well as @Capabilities.InsertRestrictions.Insertable: false, @Capabilities.UpdateRestrictions.Updatable: false, and @Capabilities.DeleteRestrictions.Deletable: false
Related Information
CDS automatically translates OData v4 annotations to OData v2 SAP Extensions when they are invoked with the
odata:v2 option. However, you can add sap: attribute-style annotations if needed.
This is an example:
Sample Code
@sap.applicable.path: 'to_eventStatus/EditEnabled'
action EditEvent ...
Sample Code
Related Information
● SAP Common
● SAP UI
● SAP Analytics
● SAP Communication
Related Information
When you use the SAP Cloud Platform Application template in SAP Web IDE, the created project includes the
following folders and files:
You can rename files or add new files as your project grows.
You can persist data in the SAP HANA database using Java Persistence APIs (JPA) in your custom handlers.
Overview
You have already seen how you can add custom logic to the generic service that is created based on the CDS
models that you define. If you haven't, we recommend that you read Adding Custom Logic [page 1486].
When implementing custom handler code you need to be able to access and manipulate entities in the persistence
layer. For this purpose, you can use the DataSourceHandler API that can be accessed using an
ExtensionHelper object that is available in each custom handler. For more information, see Override Generic
Operations [page 1499]. As an alternative to using the DataSourceHandler API, you can use JPA in your custom
handler. This allows you to leverage the full power of JPA with features like:
● Change tracking
● Lazy and eager loading
● JPQL queries
● Entity lifecycle listeners
The following sections describe how you can use JPA in your custom handlers.
Prerequisites
JPA Entities
Using JPA in a custom handler requires JPA entities, that is, Java classes that define an entity and how it should be
mapped to the persistence. These classes can either be handwritten or, in case you have a CDS (data) model, be
generated automatically from this CDS model using the tool CSN2JPA. This tool provides a CLI so that it can be
used from the console (or called from within scripts). It also provides a Maven plugin, allowing easy integration into
the build of any Maven project, and triggering the JPA class creation automatically at build time.
Transactional Integration
If JPA is used in custom handlers there are two persistence stacks that are used in parallel. The service provider
generated from the CDS models uses its own persistence stack (which can be accessed in custom handlers using
DataSourceHandler provided by ExtensionHelper). In contrast to this, when using JPA in custom handlers,
the JPA requests are handled by a JPA persistence provider (also known as a JPA implementation), like EclipseLink
or Hibernate. Both stacks need to interoperate consistently. This is achieved by having both stacks - JPA's as
well as the generic service provider's persistence stack - jointly use JTA transactions, so that all operations that
affect persistence take place in the same (shared) transaction.
Java EE Server
Enabling transaction management using JTA currently requires the usage of a Java EE server as the execution
environment. In Cloud Foundry, TomEE can be used as Java EE server by setting the target runtime to TomEE. In
the MTA deployment descriptor this is done by using the deployment type java.tomee.
Support for JTA needs to be enabled in the generic service provider. This can be done by adding the following
additional Maven dependency to your Maven-based Java project's pom.xml file:
<dependency>
<groupId>com.sap.cloud.servicesdk.prov</groupId>
<artifactId>cf-xsa-util</artifactId>
<version>${sap.gateway.version}</version>
</dependency>
This library detects the presence of a JTA transaction manager and configures usage of JTA transactions if such a
JTA transaction manager is found.
Next, the JPA persistence stack needs to be configured to use JTA transactions. This is done in the persistence-unit
tag of the persistence.xml file by adding the attribute transaction-type="JTA":
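For example (the provider element shown here assumes EclipseLink; adapt it to your JPA implementation):

```xml
<persistence-unit name="default" transaction-type="JTA">
  <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
</persistence-unit>
```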
Note
When using CSN2JPA to automatically generate the JPA classes, a persistence.xml file is automatically
created by default in the subfolder resources/META-INF of your main source folder.
For executing persistence-related operations with JPA entities, the EntityManager API is used. Any Java EE
server (like the TomEE used here) can manage the lifecycle of entity managers and hence frees the
application from the burden of programmatically creating entity managers and releasing the allocated resources
after usage. It is therefore recommended to delegate these tasks to the Java EE container by using
container-managed entity managers. To achieve this, a persistence context reference must be declared. This can
be done in the web.xml file (by default located in the subfolder webapp/WEB-INF of your main source folder):
<persistence-context-ref>
<persistence-context-ref-name>jpa/default/pc</persistence-context-ref-name>
<persistence-unit-name>default</persistence-unit-name>
</persistence-context-ref>
Using JPA
After you have fulfilled the prerequisites, you can start using JPA in custom handlers just by looking up the entity
manager in JNDI through the persistence context reference:
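A sketch of the lookup, using the persistence context reference name declared in web.xml (java:comp/env is the standard JNDI environment namespace):

```java
EntityManager em = (EntityManager) new InitialContext()
        .lookup("java:comp/env/jpa/default/pc");
```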
After getting an EntityManager instance, the JPA classes generated for the entities in the CDS (data) model can
be used with this instance to perform all standard JPA operations, for example persisting the JPA entity myEntity
using em.persist(myEntity).
Deferred Flush
Entity manager operations like persist or remove are not executed immediately on the database by the JPA
persistence provider. The persistence provider only performs the changes in memory on the persistence context.
The actual synchronization with the database (flush) is deferred until just before the commit. This might have
surprising or undesired effects for changes performed using JPA in custom handlers. For example:
● Exceptions caused by the operations (for example, a constraint violation) actually occur during the flush and hence after the execution of the custom handler.
● Until the commit, all changes done using JPA are visible only to JPA and not to the native persistence layer of
the generic service provider.
To circumvent such issues, it is possible to trigger the flush explicitly by the EntityManager.flush method.
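For example, a sketch inside a custom handler:

```java
em.persist(myEntity);
// Trigger the synchronization with the database now instead of waiting for the
// commit, so that, for example, constraint violations surface inside the handler.
em.flush();
```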
Constraints
The following are the known constraints of using JPA in custom handlers:
Let's look at some sample code that shows how you can use JPA in custom handlers.
The Getting Started Tutorial [page 1441] describes how you can generate a service based on the CDS data and
service models that you define. It also explains how you can Add Custom Logic [page 1451] to the generated
service.
In the custom logic section of the tutorial, you can see that the @AfterRead and @AfterQuery custom handlers
are implemented for the entity Orders. These custom handlers set the order amount to 1000 in each returned
order that is read or queried. To illustrate the usage of JPA, let's add a new custom handler that uses JPA to
override the creation of the order entity, and sets an order amount of 1000 units for each order at creation time.
Let's also set the ID and the date of the order automatically in our custom code.
First, let's add the following JPA-based handler to the custom logic [page 1451] code described in the Getting
Started Tutorial [page 1441] (all imports that are additionally needed are listed):
package com.sap.demo.bookshop;
[...]
import java.util.UUID;
import java.util.Date;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.persistence.EntityManager;
import com.sap.demo.bookshop.jpa.my.bookshop.Books;
import com.sap.demo.bookshop.jpa.my.bookshop.Orders;
import org.apache.olingo.odata2.api.exception.ODataApplicationException;
[...]
public class OrdersService {
[...]
@ExtendCreate (entity = "Orders", serviceName = "CatalogService")
public CreateResponse createOrder(CreateRequest createRequest, ExtensionHelper extensionHelper) throws NamingException, ODataApplicationException {
    EntityManager em = (EntityManager) (new InitialContext()).lookup("java:comp/env/jpa/default/pc");
    Orders order = createRequest.getData().as(Orders.class);
    order.setID(UUID.randomUUID().toString());
    order.setDate(new Date());
    order.setAmount(1000);
    order.setBook(em.find(Books.class, createRequest.getData().getElementValue("book_ID")));
    em.persist(order);
    EntityData createdEntity = EntityData.getBuilder(createRequest.getData())
        .addElement("ID", order.getID())
        .addElement("book", order.getBook())
        .addElement("buyer", order.getBuyer())
        .addElement("date", order.getDate())
        .addElement("amount", order.getAmount())
        .buildEntityData("Orders");
    return CreateResponse.setSuccess().setData(createdEntity).response();
}
[...]
}
Using the @ExtendCreate annotation we are overriding the standard Create operation of the Orders entity. We
fetch an EntityManager from JNDI context to perform our JPA operations with. Then, we create an Orders
object from the data sent with the Create request. We set an ID for the order, its creation time, an order amount of
1000 units, and set the object reference to the ordered book. All other attributes of the Orders entity remain
unchanged (for example, the buyer). Using the persist method of the EntityManager, we finally add the newly created Orders object to the JPA persistence context, so that once the current transaction is committed, this new order is
added to the database. To return the newly created entity in the response, we create an EntityData
representation of the order, and return it in the body of the returned success response.
In the custom handler code seen earlier, we used the JPA class Orders that corresponds to the Orders entity
defined in the CDS data model [page 1443]. In order to have JPA classes generated automatically for all the
entities defined in the data model, let's add the CSN2JPA mapper to the build of the application in the tutorial.
The build script compiles the CDS model to its CSN representation, which serves as the input file for CSN2JPA:
Sample Code
"scripts": { "build": "cds build --clean && cds compile db/data-model.cds -o ./" }
In the srv/pom.xml file, that specifies how Maven builds the Java backend, let's add CSN2JPA as a build plugin as
follows:
<project ...>
[...]
<build>
[...]
<plugins>
[...]
<!-- Use csn2jpa to generate JPA classes from CSN. -->
<plugin>
<groupId>com.sap.cloud.servicesdk.csn2jpa</groupId>
<artifactId>csn2jpa-maven-plugin</artifactId>
<version>1.0.3</version>
<executions>
<execution>
<phase>generate-resources</phase>
<goals>
<goal>csn2jpa</goal>
</goals>
</execution>
</executions>
<configuration>
<csnFile>${project.basedir}/db/data-model.json</csnFile>
<outputDirectory>${project.basedir}/src/main</outputDirectory>
<persistenceProvider>org.eclipse.persistence.jpa.PersistenceProvider</persistenceProvider>
<basePackage>com.sap.demo.bookshop.jpa</basePackage>
<parserMode>tolerant</parserMode>
<generatorMode>tolerant</generatorMode>
</configuration>
</plugin>
[...]
As the JPA implementation (JPA provider) that actually executes the JPA operations, let's add EclipseLink to the dependencies section:
<project ...>
[...]
<dependencies>
[...]
<!-- EclipseLink -->
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>eclipselink</artifactId>
<version>2.7.1</version>
</dependency>
[...]
Next, right-click on the srv folder in the workspace of WebIDE and select Build. The CSN2JPA mapper generates
all the JPA classes for the entities defined in the data model. The location where the JPA classes are generated
depends on the outputDirectory and basePackage configuration options set for the CSN2JPA Maven plugin,
as well as the namespace used in the CDS model. In our example, the JPA classes are generated in the folder
srv/src/main/java/com/sap/demo/bookshop/jpa/my/bookshop.
In addition to the JPA classes, a persistence.xml file is generated that contains some setup information for JPA. In this file, let's enable JTA support so that JPA operations participate in the container-managed (JTA) transactions:
<persistence ...>
<persistence-unit name="default" transaction-type="JTA">
[...]
Finally, let's make sure that the DataSource that is encapsulating the connection to the database is managed by
the container (that is, application server), and that the persistence context and persistence unit are also known to
the container. For this, open and change the file srv/src/main/webapp/WEB-INF/web.xml as follows:
<web-app ...>
<resource-ref>
<res-ref-name>jdbc/java-hdi-container</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Container</res-auth>
</resource-ref>
<persistence-context-ref>
<persistence-context-ref-name>jpa/default/pc</persistence-context-ref-name>
<persistence-unit-name>default</persistence-unit-name>
</persistence-context-ref>
[...]
We are delegating the management of some resources to the container, such as the management of transactions (JTA) and the management of the DataSource (encapsulating the connection to the database). These kinds of management features are supported by Java EE containers, like TomEE. As the default container of the application in the tutorial is Tomcat, let's now describe the necessary changes to switch from Tomcat to TomEE.
If you deploy just the Java backend (and not the whole MTA project), then the manifest.yml in the source folder
of the Java backend defines how this deployment is done. To set TomEE as a runtime here, simply add
TARGET_RUNTIME: tomee to the env section at the end of the file srv/manifest.yml:
---
applications:
- name: srv
  [...]
  env:
    TARGET_RUNTIME: tomee
If the whole application is deployed as an MTA, then the overall deployment is defined by the file mta.yaml in the
root folder of the project. We also define TomEE as the runtime for the Java backend there. Note that in addition to defining TARGET_RUNTIME: tomee, we also need to change the file referenced in the JBP_CONFIG_RESOURCE_CONFIGURATION property, because TomEE uses a different file for this than Tomcat does. Change mta.yaml as follows:
[...]
modules:
[...]
- name: srv
  type: java
  path: srv
  parameters:
    memory: 1024M
  provides:
  - name: srv_api
    properties:
      url: ${default-url}
  requires:
  - name: hdi_db
    properties:
      JBP_CONFIG_RESOURCE_CONFIGURATION: '[tomee/webapps/ROOT/WEB-INF/resources.xml: {"service_name_for_DefaultDB" : "~{hdi-container-name}"}]'
  properties:
    TARGET_RUNTIME: tomee
  buildpack: sap_java_buildpack
[...]
Since we changed to TomEE as a runtime, we do not need the file that Tomcat uses for resource configuration
anymore. To remove any ambiguity, we can delete the file srv/src/main/webapp/META-INF/context.xml as
the last step.
Testing
To apply our changes to the running demo application, we can now rebuild and redeploy the whole MTA (that is, all
the components of the demo application), or just the Java backend (right-click on the srv folder in the workspace of WebIDE and then choose Run > Java Application).
Let's use the tool curl to send a request that creates an order for a specific book for a specific buyer. Assuming
that the URL the Java backend is listening to is <Java-Backend-URL>, the ID of the ordered book is 310, and the
buyer is called JPA Buyer 1, then the command to execute is the following:
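A sketch of such a request could look as follows (the payload element names book_ID and buyer follow the Orders entity of the tutorial; adjust them to your own model):

```shell
curl -X POST "https://<Java-Backend-URL>/odata/v2/CatalogService/Orders" \
  -H "Content-Type: application/json" \
  -d '{"book_ID": 310, "buyer": "JPA Buyer 1"}'
```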
The response to such a request includes all the details about the newly created entity, including ID, creation time,
fixed amount of 1000 units, as coded in our custom handler:
{
  "d": {
    "ID": "476a425e-1d54-4332-b252-961f6bdef456",
    "__metadata": {
      "id": "https://<Java-Backend-URL>/odata/v2/CatalogService/Orders(guid'476a425e-1d54-4332-b252-961f6bdef456')",
      "type": "CatalogService.Orders",
      "uri": "https://<Java-Backend-URL>/odata/v2/CatalogService/Orders(guid'476a425e-1d54-4332-b252-961f6bdef456')"
    },
    "amount": 1000,
    "book": {
      "__deferred": {
        "uri": "https://<Java-Backend-URL>/odata/v2/CatalogService/Orders(guid'476a425e-1d54-4332-b252-961f6bdef456')/book"
      }
    },
    "book_ID": 310,
    "buyer": "JPA Buyer 1",
    "date": "/Date(1524513352941)/"
  }
}
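The date element in the response uses the OData V2 JSON notation /Date(<millis>)/, which wraps the timestamp as milliseconds since the Unix epoch. As a small illustrative sketch (not part of the tutorial code), such a value can be decoded in Java like this:

```java
import java.time.Instant;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ODataDate {
    private static final Pattern DATE = Pattern.compile("/Date\\((-?\\d+)\\)/");

    // Extracts the epoch-millisecond value from an OData V2 /Date(...)/ literal.
    public static Instant parse(String literal) {
        Matcher m = DATE.matcher(literal);
        if (!m.matches()) {
            throw new IllegalArgumentException("Not an OData V2 date: " + literal);
        }
        return Instant.ofEpochMilli(Long.parseLong(m.group(1)));
    }

    public static void main(String[] args) {
        // The value from the sample response above.
        System.out.println(parse("/Date(1524513352941)/"));
    }
}
```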
To request all the current orders (which include the newly created order), use the following command:
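Based on the service URL seen in the response above, such a request could look like this:

```shell
curl "https://<Java-Backend-URL>/odata/v2/CatalogService/Orders"
```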
To fetch just the newly created order (using the ID of the created order from the response seen earlier), use the
following command:
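Using the ID from the sample response above, the request could look like this:

```shell
curl "https://<Java-Backend-URL>/odata/v2/CatalogService/Orders(guid'476a425e-1d54-4332-b252-961f6bdef456')"
```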
To finally verify that the order amount of 1000 units is not just returned by the @AfterRead and @AfterQuery custom handlers, but is really written to the database by our JPA-based custom handler, check the contents of the database. You could use the Database Explorer of WebIDE (menu Tools > Database Explorer).
CDS allows the definition of multiple models and aspects of an application in a single source. At its core, CDS allows you to specify an entity-relationship model (the data model) and how the entities defined there are exposed as service interfaces (the service model). It also provides additional information as annotations to the model definition that can be consumed by different layers of an application (for example, a UI layer could read from annotations how an attribute of an entity must be displayed). In the following sections, we focus on the data model aspect of CDS and how its concepts relate to the concepts used to define data models in another standard, the Java Persistence API (JPA).
The Java Persistence API (JPA) is a Java standard that specifies an API for the management of relational data for
Java. JPA operates on an object-relational model, the persistence unit, which defines an entity-relationship model
and a mapping of this model to a relational database.
There are significant similarities between the entity-relationship models used by CDS and JPA. Just like JPA, CDS defines entities that are mapped to database tables. In CDS, elements are mapped to columns, just like attributes in JPA. The managed associations in CDS are roughly equivalent to relationships in JPA.
Other concepts differ significantly between CDS and JPA. CDS, for example, allows you to define views on entities and projections, which have no counterpart in JPA. JPA, on the other hand, has an inheritance concept that does not match anything in CDS.
CSN2JPA is a tool that allows you to map the data model part of a CDS model to a JPA model and to generate Java
classes for a JPA persistence unit. It leverages the similarities of CSN and JPA as much as possible. But the tool
does not attempt to bridge the conceptual differences between CDS and JPA models. Therefore limitations apply
and not all CDS (data) models can be completely mapped to JPA.
Restriction
CSN2JPA only maps the entities of the data layer of the application to JPA. The service interfaces are not
mapped.
For more information on using CSN2JPA, see CSN2JPA Maven Plugin [page 1585].
The following topics describe in detail how the mapping from CDS to JPA is done using the tool CSN2JPA.
The CSN2JPA Maven plugin can be used to generate JPA entities from CSN files.
Setup
CSN2JPA is a typical Maven plugin and can simply be declared in the <plugins> section of your pom.xml file. The
following is a sample representation of the CSN2JPA plugin in use:
<build>
<plugins>
<plugin>
<groupId>com.sap.cloud.servicesdk.csn2jpa</groupId>
<artifactId>csn2jpa-maven-plugin</artifactId>
<version>${csn2jpa.version}</version>
<executions>
<execution>
<goals>
<goal>csn2jpa</goal>
</goals>
</execution>
</executions>
<configuration>
<csnFile>${project.basedir}/src/main/resources/cds-model.csn</csnFile>
<outputDirectory>${project.basedir}/src/main</outputDirectory>
<persistenceUnitName>jpa</persistenceUnitName>
<persistenceProvider>org.eclipse.persistence.jpa.PersistenceProvider</persistenceProvider>
<basePackage>com.sap</basePackage>
<namingScheme>quoted</namingScheme>
<generateEnums>true</generateEnums>
<parserMode>tolerant</parserMode>
<generatorMode>tolerant</generatorMode>
</configuration>
</plugin>
</plugins>
</build>
Configuration
The following is a list of configuration options that the CSN2JPA Maven plugin accepts and their description:
● csnFile — path to the CSN file has to be explicitly specified (required option).
● outputDirectory — output directory for generated JPA entities (required option).
● persistenceXml — path to persistence.xml. If not specified, it defaults to the META-INF/persistence.xml file within the outputDirectory/resources folder. Alternatively, an absolute path within the project can be specified to generate the persistence.xml file in that directory: <persistenceXml>${project.basedir}/src/META-INF/persistence.xml</persistenceXml>.
● persistenceProvider — the JPA persistence provider to use. By default, org.eclipse.persistence.jpa.PersistenceProvider (EclipseLink) is chosen as the persistence provider.
The plugin can also be used without a Maven project, that is, without a pom.xml file.
mvn com.sap.cloud.servicesdk.csn2jpa:csn2jpa-maven-plugin:0.0.1:csn2jpa -DcsnFile=csn.cds -DoutputDirectory=out
CDS entities are mapped to JPA entities with the corresponding name.
In order to adhere to Java naming conventions, it is recommended that a CDS entity name start with an uppercase letter. The singular form is preferred.
Keys
In order to map a CDS entity to JPA, it is mandatory that it has a key, which is mapped to an ID attribute in JPA.
Scalar Keys
If a CDS entity has a single key element, the ID class of the JPA entity is the same type as the ID attribute (or the corresponding boxed variant).
entity Building {
key building : Integer;
location : String;
};
@Entity
public class Building {
@Id
int building;
String location;
}
Compound Keys
An entity can also have a compound key corresponding to multiple primary key columns on the database. The
following CDS entity has a compound but non-structured key:
entity MobilePhone {
key countryCode : String(3);
key areaCode : String(5);
key number : String(15);
...
};
For a compound ID, JPA requires the JPA entity to have an explicit ID class. This is generated automatically by
CSN2JPA as a static inner class ID:
@Entity
@IdClass(MobilePhone.ID.class)
public class MobilePhone {
    @Id
    String countryCode;
    @Id
    String areaCode;
    @Id
    String number;
    ...
}
The generated ID class can be used, for example, to look up an entity by its compound key:
EntityManager em;
MobilePhone.ID id = new MobilePhone.ID("1", "610", "661-1000");
MobilePhone phone = em.find(MobilePhone.class, id);
Structured Keys
The entity in the following example has a structured key:
type Name {
firstName : String;
lastName : String;
}
entity Person {
key name : Name;
}
Restriction
Entities with structured keys are not supported by CSN2JPA.
Entity Name
The entity name, which is the name used in JPQL queries, defaults to the unqualified class name of the JPA entity. In CDS, the entity name can be specified with the @jpa.entity.name annotation. In the following example, the entity Identification is given the entity name IdCard:
@jpa.entity.name : "IdCard"
entity Identification {
...
}
@Entity(name="IdCard")
public class Identification {
...
}
The entity name can then be used in JPQL queries, for example:
EntityManager em;
List<Identification> cards = em.createQuery("SELECT c FROM IdCard c", Identification.class).getResultList();
Abstract Entities
In CDS an entity definition can be prefixed with the keyword abstract to indicate that it has no corresponding
database table.
Restriction
Abstract entities are ignored by CSN2JPA and not mapped to Java.
Views
Restriction
Views are not supported by CSN2JPA.
Elements may have a scalar type, a structured type, or an association type. All CDS elements with scalar types are
mapped to JPA as Basic attributes unless they are CDS key elements. The attributes have the same name as the
CDS element. Names that are reserved in Java must not be used.
Scalar Types
Predefined Types
The following table describes the mapping between primitive CDS types and Java or JPA types.
In the following example, the element empId has the derived type EmployeeID, which is based on the type UUID:
type EmployeeID : UUID;
entity Employee {
    key empId : EmployeeID;
    ...
}
The CDS type UUID is mapped to String in Java:
@Entity
public class Employee {
    @Id
    String empId;
    ...
}
Enums
In CDS, you can define enum types that specify enumeration values. They can either be named (defined by an
explicit type definition) or anonymous (declared inline).
Named Enums
If an enum type is explicitly defined, CSN2JPA can generate a corresponding enum in Java for it.
In the following example, the enum type AcademicTitle is defined and used as the type of the element
academicTitle:
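A definition of this kind could look as follows (the enum member names are illustrative, as the original sample values are not shown in this document):

```
type AcademicTitle : String enum {
    Dr;
    Prof;
};

entity Employee {
    ...
    academicTitle : AcademicTitle;
}
```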
CSN2JPA can generate the corresponding enum in Java that is used in the JPA entity:
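A sketch of what the generated Java could look like (the exact generation details depend on CSN2JPA and are assumptions here, matching the illustrative member names above):

```java
public enum AcademicTitle {
    Dr,
    Prof
}

@Entity
public class Employee {
    ...
    AcademicTitle academicTitle;
}
```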
Note
The generation of enumerated types is disabled by default and can be activated using a configuration
parameter. If disabled, CSN2JPA generates basic attributes (String or Integer) instead.
Restriction
The following restrictions apply to enums:
● In the following example, the element status of the entity Order has an inline defined enumerated type:
entity Order {
status : Integer enum {
submitted;
fulfilled;
shipped;
canceled;
};
}
Inline defined enumerated types are not supported by CSN2JPA. For such an element, the mapper generates a basic attribute with the base type of the enum (Integer in this example).
● For enums defined in CDS with base type Integer, no Java enums are generated; the base type is used instead.
Structured Types
Scalar types can be composed into structured types. They can either be named (defined by an explicit type definition) or anonymous (declared inline).
type Name : {
firstName : String(42);
lastName : String(42);
};
Explicitly defined structured types are supported by JPA. How they are mapped depends on their usage.
entity Person {
key id : Integer;
name : {
firstName : String;
lastName : String;
}
}
Restriction
Anonymous inline defined structured types are not supported by CSN2JPA.
Structured types can be extended to an entity. Let's look at the following example:
type DateRange {
validFrom : Timestamp;
validTo : Timestamp;
};
entity Identification : DateRange {
key id : UUID;
...
};
@Entity
public class Identification {
@Id
String id;
Timestamp validFrom;
Timestamp validTo;
...
}
If a structured type is extended to an entity but not used as the type of an element, CSN2JPA will not generate a
Java class that corresponds to the structured type. A structured type that is extended to an entity may contain
associations or key elements.
A structured type can also be used as the type of a CDS element in an entity:
entity Employee {
key empId : EmployeeID;
name : Name not null;
...
}
In this case, the entity Employee is not derived from the type Name but instead the element name embeds the type
Name.
If a structured type is used as shown in this example, CSN2JPA generates an embeddable JPA class corresponding
to the type. For the CDS element with a structured type, an embedded attribute is generated in the JPA entity:
@Embeddable
public class Name {
@Basic
private String firstName;
@Basic
private String lastName;
}
@Entity
public class Employee {
@Id
String empId;
@Embedded
Name name;
}
Restriction
If the CDS element of an entity has a structured type, the following restrictions apply:
If a CDS element is prefixed with the modifier virtual, it is not mapped to a database column. Let's look at an example:
type Employee {
virtual fullName : String;
...
};
@Entity
public class Employee {
@Transient
String fullName;
...
}
3.4.6.5.2.3 Associations
CDS associations can be considered as forward-declared joins between the tables to which the source and target entities are mapped. They capture the foreign key (FK) relationship on the database in the model and allow navigating the association without the need to explicitly specify the join in a query.
For managed associations, the FK relationship between the source and the target entity is not specified explicitly in the model. Instead, CDS automatically resolves the requisite FKs from the target's primary keys and implicitly adds the respective join conditions. The FK elements are not exposed by the CDS entities.
In contrast, for the unmanaged case, the join conditions relating the source to the target entity are explicitly captured in the model, and the FK elements are exposed by the CDS entities.
entity Order {
    key id : Integer;
    ...
};
In JPA, a single database column must not have a writable mapping to multiple attributes. Therefore, either the FK attribute or the relationship needs to be read-only. In CDS, this can be expressed using the @jpa.readOnly annotation. The placement of the @jpa.readOnly annotation has implications on the usage of the relationship at runtime.
Read-Only Associations
If the relationship needs to be established using the values of the FK attribute and the relationship attribute is used
for read purposes only, it is recommended that you declare the association read-only. For example:
orderId : Integer;
@jpa.readOnly
order : Association to Order on order.id = orderId;
The value of FK attribute orderId is populated only after the entity is read from the database by a find operation
or a query, or after its content has been refreshed by the refresh operation.
If the relationship needs to be established using the relationship attribute but the FK attribute is used for read
purposes only, it is recommended that you declare the FK element read-only.
@jpa.readOnly
orderId : Integer;
order : Association to Order on order.id = orderId;
The value of the relationship attribute order is populated only after the entity is read from the database by a find
operation or a query, or after its content has been refreshed by the refresh operation.
Association Types
An association is unidirectional if there is only an association from the source entity to the target entity but no
association referring back from the target to the same instance of the source entity.
Unidirectional One-to-One
To define unidirectional one-to-one associations, the source cardinality needs to be 1 and the target cardinality
needs to be 1 or 0..1. In CDS, the association can be defined as follows: Association [1, 1] to <Target>.
The following examples show a case where a department has only one manager:
Managed Example
CDS
entity Department {
...
manager : Association [1, 1] to Employee;
};
entity Employee {
...
}
Resulting Java
@Entity
public class Department {
...
@OneToOne
private Employee manager;
...
}
@Entity
public class Employee {
...
}
The same can be achieved by explicitly specifying a foreign-key (FK) relationship between source and target. This
is achieved by adding an on condition like Association [1, 1] to <Target> on <Target>.<ID> = <FK>.
Unmanaged Example
CDS
entity Department {
...
@jpa.readOnly
managerId : Integer;
manager : Association [1, 1] to Employee on manager.empId = managerId;
};
entity Employee {
key empId : EmployeeID;
...
}
Resulting Java
@Entity
public class Department {
    ...
    @Column( name = "managerId", insertable = false, updatable = false )
    @Basic
    private Integer managerId;
    @JoinColumn( name = "managerId", referencedColumnName = "empId" )
    @OneToOne
    private Employee manager;
    ...
}
Unidirectional Many-to-One
If the target cardinality of a unidirectional association is 1 and the source cardinality is unspecified or *, CSN2JPA
generates a many-to-one association. The following are some examples:
● Association to <Target>
● Association to one <Target>
● Association [0..1] to <Target>
● Association [*, 1] to <Target>
By adding a foreign-key relationship between source and target, we convert the association to unmanaged
Association to one <Target> on <Target>.<ID> = <FK>.
The following examples show a case where many employees work in one room:
Managed Example
CDS
entity Employee {
...
room : Association to one Room;
}
entity Room {
...
};
Resulting Java
@Entity
public class Employee {
...
@ManyToOne
private Room room;
}
@Entity
public class Room { ... }
Unmanaged Example
CDS
entity Employee {
    ...
    room : Association to one Room on room.buildingId = roomBuildingId
        and room.floor = roomFloor
        and room.number = roomNumber;
    roomBuildingId : Integer;
    roomFloor : Integer;
    roomNumber : Integer;
}
entity Room {
    key buildingId : Integer;
    key floor : Integer;
    key number : Integer;
    ...
};
Resulting Java
@Entity
public class Employee {
...
@JoinColumns({
    @JoinColumn(name = "roomBuildingId", referencedColumnName = "buildingId"),
    @JoinColumn(name = "roomFloor", referencedColumnName = "floor"),
    @JoinColumn(name = "roomNumber", referencedColumnName = "number")
})
@ManyToOne
private Room room;
@Basic
private Integer roomBuildingId;
@Basic
private Integer roomFloor;
@Basic
private Integer roomNumber;
}
@Entity
public class Room {
@Id
private Integer buildingId;
@Id
private Integer floor;
@Id
private Integer number;
...
}
Unidirectional One-to-Many
If the association is unidirectional and target cardinality is many, CSN2JPA generates a one-to-many association.
The following examples show a case where one department has many different rooms:
Managed Example
CDS
entity Department {
...
rooms : Association to many Room;
};
entity Room {
...
};
Resulting Java
@Entity
public class Department {
    ...
    @OneToMany
    private Collection<Room> rooms;
}
@Entity
public class Room { ... }
Unmanaged Example
CDS
entity Department {
key depId : Integer;
...
rooms : Association to many Room on rooms.departmentId = depId;
};
entity Room {
...
departmentId : Integer;
};
Resulting Java
@Entity
public class Department {
@Id
private Integer depId;
...
@JoinColumn( name = "departmentId", referencedColumnName = "depId" )
@OneToMany
private Collection<Room> rooms;
}
@Entity
public class Room {
...
@Column( name = "departmentId" )
@Basic
private Integer departmentId;
}
Bidirectional Associations
In CDS, an association is bidirectional if there is a (to-one) association from a source entity to a target entity (the
backlink) and another association from the target back to the original source entity instance (the reverse
association). On the target side, the special join condition notation <reverse association>.<backlink> =
$self indicates that the back-link must be used to establish the association. This is illustrated by the following
examples. Compare these two examples to see how the $self can be used in the managed case.
Bidirectional One-to-One
If the association is bidirectional and the target cardinalities on both sides are 1, CSN2JPA creates a bidirectional
one-to-one association.
The following examples show a case where one employee has exactly one mobile phone. And each mobile phone
has exactly one owner.
Managed Example
CDS
entity Employee {
    ...
    mobile : Association to one MobilePhone on mobile.owner = $self;
}
entity MobilePhone {
    ...
    owner : Association to one Employee;
};
Resulting Java
@Entity
public class Employee {
...
@OneToOne(mappedBy = "owner")
private MobilePhone mobile;
...
}
@Entity
public class MobilePhone {
...
@OneToOne
private Employee owner;
...
}
Unmanaged Example
CDS
entity Employee {
key empId : EmployeeID;
...
mobile: Association to one MobilePhone on mobile.ownerId = empId;
}
entity MobilePhone {
...
@jpa.readOnly
ownerId : EmployeeID;
owner : Association to one Employee on owner.empId = ownerId;
};
Resulting Java
@Entity
public class Employee {
@Id
private String empId;
...
@OneToOne(mappedBy = "owner")
private MobilePhone mobile;
...
}
@Entity
public class MobilePhone {
...
@Column( name = "ownerId", insertable = false, updatable = false )
@Basic
private String ownerId;
@JoinColumn( name = "ownerId", referencedColumnName = "empId" )
@OneToOne
private Employee owner;
...
}
Bidirectional Many-to-One
If the target cardinality of the backlink is one but the target cardinality of the reverse association is many, CSN2JPA generates a many-to-one association:
The following examples show a case where one employee can work in only one department. At the same time,
many employees can work in one department.
Managed Example
CDS
entity Employee {
...
department : Association to one Department;
}
entity Department {
...
employees : Association to many Employee on employees.department = $self;
};
Resulting Java
@Entity
public class Employee {
...
@ManyToOne
private Department department;
}
@Entity
public class Department {
@Id
private Integer depId;
...
@OneToMany(mappedBy = "department")
private Collection<Employee> employees;
}
Unmanaged Example
CDS
entity Employee {
...
@jpa.readOnly
departmentId : Integer;
department : Association to one Department on department.depId = departmentId;
}
entity Department {
key depId : Integer;
...
employees : Association to many Employee on employees.departmentId = depId;
};
Resulting Java
@Entity
public class Employee {
...
@Column( name = "departmentId", insertable = false, updatable = false )
@Basic
private Integer departmentId;
@JoinColumn(name = "departmentId", referencedColumnName = "depId" )
@ManyToOne
Note
The bidirectional one-to-many is just the same as the bidirectional many-to-one association.
An association may also use a key element of an entity as a foreign key. Because the key values of an entity cannot
be changed, the association to the target entity is immutable, in this case.
In the following example (managed case), the attribute Room.building is not only an id of the entity Room but
also the relationship field to the entity Building:
Managed Example
CDS
entity Room {
key building : Association to one Building;
key floor : Integer;
key number : Integer;
...
};
entity Building {
...
};
Resulting Java
@Entity
public class Room {
@Id
@ManyToOne
private Building building;
@Id
Integer floor;
@Id
Integer number;
...
}
@Entity
public class Building { ... }
In the following example (unmanaged case), the association must be annotated with @jpa.readOnly. The
primary-key column buildingId is also used as the foreign-key column referencing Building.
Unmanaged Example
entity Room {
key buildingId : Integer;
key floor : Integer;
key number : Integer;
@jpa.readOnly
building : Association to one Building on building.buildingId = buildingId;
};
entity Building {
key buildingId : Integer;
...
};
Resulting Java
@Entity
public class Room {
@Id
Integer buildingId;
@Id
Integer floor;
@Id
Integer number;
@JoinColumn( name = "buildingId", insertable=false, updatable=false )
@ManyToOne
private Building building;
...
}
@Entity
public class Building {
    @Id
    Integer buildingId;
    ...
}
The on conditions of an association specification may contain additional conditions that do not join a column of the source table with a column of the target table (additional filter conditions). In the following example, the association activeId is established using the backlink owner and has the additional filter conditions activeId.validFrom <= now() and activeId.validTo > now(). Such associations cannot be directly mapped to JPA. Associations with additional filter conditions are currently only supported for EclipseLink and use the @On annotation provided by the CSN2JPA Runtime Library [page 1607]. Currently, only simple conditions and conjunctions (and) are supported as filter conditions.
Managed Example
CDS
entity Employee {
...
activeId : Association to one Identification on activeId.owner = $self
and activeId.validFrom <= now()
and activeId.validTo > now();
}
entity Identification {
...
owner : Association to one Employee;
};
Resulting Java
@Entity
public class Employee {
@On("validFrom <= now() and validTo > now()")
@OneToOne(mappedBy="owner")
Identification activeId;
}
@Entity
public class Identification {
@ManyToOne
Employee owner;
}
Unmanaged Example
CDS
entity Employee {
key empId : EmployeeID;
...
activeId : Association to one Identification on activeId.ownerId = empId
and activeId.validFrom <= now()
and activeId.validTo > now();
}
entity Identification {
...
ownerId : EmployeeID;
@jpa.readOnly
owner : Association to one Employee on owner.empId = ownerId;
};
Resulting Java
@Entity
public class Employee {
@On("validFrom <= now() and validTo > now()")
@OneToOne(mappedBy="owner")
Identification activeId;
}
@Entity
public class Identification {
String ownerId;
@ManyToOne
@JoinColumn( name = "ownerId", insertable=false, updatable=false )
Employee owner;
}
Related Information
Let's see how the names in the CDS models are mapped into JPA.
Namespaces
In CDS, you may place a namespace directive at the top of a model to prefix the qualified names of all subsequent
entity and type definitions, as shown in the following sample code:
namespace managed;
entity Employee { }
Contexts
Nested namespace sections can be created with context definitions, which also prefix the qualified names, as
shown in the following sample code:
context model {
entity Employee { }
}
The qualified names of CDS types and entities can be further prefixed with the basePackage option of CSN2JPA,
which can be specified, for example in the Maven Plugin [page 1585].
For example, the following model is part of the base package my.app:
namespace managed;
context model {
entity Employee { }
}
Resulting Java
package my.app.managed.model;
@Entity
public class Employee {
...
}
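The prefixing described above amounts to simple name concatenation: the basePackage option is prepended to the qualified CDS name (namespace, contexts, entity). The helper below is an illustration of this rule, not part of the CSN2JPA API:

```java
// Illustration of the name-prefixing rule: the basePackage option is
// prepended to the qualified CDS name (namespace + contexts + entity)
// to form the fully qualified Java class name. The helper is
// hypothetical, not part of the CSN2JPA API.
public class NameMapping {
    static String toQualifiedJavaName(String basePackage, String qualifiedCdsName) {
        return basePackage == null || basePackage.isEmpty()
                ? qualifiedCdsName
                : basePackage + "." + qualifiedCdsName;
    }
}
```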
How JPA entities and their attributes are mapped to database tables and columns is determined by the naming
scheme, which is used by CSN2JPA.
The naming scheme can be specified explicitly using the namingScheme option of the Maven Plugin [page 1585].
If this option is not provided, CSN2JPA attempts to extract the naming scheme from the CSN file. Otherwise, a
default applies.
Plain
The naming scheme plain effectively converts the case of entity and attribute names to upper case in the
corresponding artifacts of the database catalog: table and column names are sent undelimited (not 'quoted') to
the database and hence are converted to upper case by the database as this is standard in SQL. Column names
are specified only in join column specifications. In table names, the path separator is an underscore (_).
For example:
context model {
entity Employee {
name : String;
}
}
@Entity
@Table("model_Employee")
public class Employee {
String name;
}
Caution
Entity and attribute names must not be reserved keywords of the underlying database.
The naming scheme plain is the default for CSN versions 0.1.0 and above.
Quoted
The naming scheme quoted preserves the case of entity and attribute names in the corresponding artifacts of the
database catalog. All table and column names are explicitly specified in the JPA entities and are delimited
('quoted'). In table names, the path separator is a dot (.):
For example:
context model {
entity Employee {
name : String;
}
}
@Entity
@Table(name = "\"model.Employee\"")
public class Employee {
String name;
}
The naming scheme quoted is the default for CSN versions lower than 0.1.0.
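The difference between the two naming schemes can be sketched as follows; this is an illustration of the rules described above, not the actual CSN2JPA implementation:

```java
// Sketch of the two table-naming rules described above; illustrative
// only, not the actual CSN2JPA implementation.
public class TableNames {
    // plain: underscore as path separator; the name is sent undelimited,
    // so the database folds it to upper case (standard SQL behavior).
    static String plain(String qualifiedEntityName) {
        return qualifiedEntityName.replace('.', '_').toUpperCase(java.util.Locale.ROOT);
    }

    // quoted: dot as path separator; the name is delimited ("quoted"),
    // so the original case is preserved in the database catalog.
    static String quoted(String qualifiedEntityName) {
        return "\"" + qualifiedEntityName + "\"";
    }
}
```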
Related Information
3.4.6.5.2.5 Annotations
@jpa.entity.name (String): The JPA entity generated for the annotated CDS entity gets the given name, which then needs to be used in JPQL queries. See Entity Name [page 1588].
The CSN2JPA runtime library enables CDS features that are not supported by standard JPA mappings.
entity Department {
key id : UUID;
seniorManagers : Association to many Employee on seniorManagers.department = $self
and seniorManagers.position = 'MANAGER'
and seniorManagers.level > 2;
};
The runtime library provides an additional @On("<oncond>") annotation, which can be added to JPA relationship
fields in order to support such additional filter conditions. The additional filter condition provided in the @On
annotation is added as plain SQL string to the on condition that joins the source and target tables of the
corresponding relationship, which enables the use of database-specific features. The additional filter conditions
are automatically used in all usages of the relationship as in navigation or queries.
@Entity
public class Department {
@Id
private String id;
@OneToMany(mappedBy = "department")
@On("position = 'MANAGER' and level > 2")
private List<Employee> seniorManagers = new ArrayList<>();
}
@Entity
public class Employee {
@Id
private String id;
private String position;
private int level;
@ManyToOne
@JoinColumn(name = "DEPT_ID", referencedColumnName = "id")
private Department department;
@OneToOne(mappedBy = "parent")
@On("validFrom <= NOW()")
private Address currentAddress;
}
@Entity
public class Address {
@Id
private String id;
private Timestamp validFrom;
@ManyToOne
@JoinColumn(name = "EMPLOYEE_ID", referencedColumnName = "id")
private Employee parent;
}
A word of caution when using additional on conditions in combination with JPA's caching mechanisms: because
the additional filter is evaluated against database state, cached relationship fields can become stale. For
instance, when changing the level of a manager from 2 to 3, the cached department entity has to be refreshed
explicitly so that its seniorManagers list reflects the change:
EntityManager em;
Employee manager = em.find(Employee.class, "1");
manager.setLevel(3);
em.persist(manager);
em.refresh(department);
Usage
The feature is currently only supported for EclipseLink and can be enabled by adding a dependency to pom.xml:
<dependencies>
<dependency>
<groupId>com.sap.cloud.servicesdk.csn2jpa</groupId>
<artifactId>runtime</artifactId>
<version>${csn2jpa.version}</version>
</dependency>
</dependencies>
SAP Cloud Platform is the extension platform from SAP. It enables developers to develop loosely coupled
extension applications securely, thus implementing additional workflows or modules on top of the existing SAP
cloud solution they already have.
SAP Cloud Platform provides a secure application container which decouples the extension applications from the
extended SAP solution via a public API layer. This container ensures that extension applications have no impact on
the stability of the extended solutions. It also ensures that data access is governed through the same roles and
permission checks as those of any other SAP interface. SAP Cloud Platform simplifies many of the system
integration challenges, handling aspects such as identity propagation, subaccount onboarding, dynamic theming
and branding, and installation automation and provisioning.
Technical aspects
● Extensions and extended SAP cloud solutions co-located in the same region, where possible
In most cases, the extensions that are being developed are co-located in the same region as the SAP
product that is being extended. The co-location ensures that the complete solution is using one infrastructure
and is operated by one and the same team on this infrastructure. It also improves the response time for API
calls.
● Integration with SAP Cloud product toolset
This integration allows SAP solution administrators to have a consistent experience in managing extensions as
an integral part of the product they are responsible for, including but not limited to software lifecycle
management, administration of roles, permissions and visibility groups.
● Dynamic UI branding and theming
The tight integration between the SAP product and SAP Cloud Platform allows extension users to get the
same seamless user experience as the native product modules. It also allows the delivery of SAP solution-
specific artifacts, such as navigation exit points, tiles, widgets or external business objects.
● Security integration
The integration between the SAP product and SAP Cloud Platform also allows you to manage the extension
you are building by using all the authentication and authorization capabilities of the SAP product you want to
extend.
Extension options
● Custom development
As a customer of an SAP cloud solution, you can create your own extension applications using SAP Cloud
Platform. SAP provides access to all the required integration and implementation materials describing how
SAP Cloud Platform is connected to the corresponding SAP cloud solution. Furthermore, for some of the SAP
cloud solutions (for example SAP SuccessFactors), SAP Cloud Platform offers out-of-the box and pre-
integrated extension subaccounts. You can leverage all the SAP Cloud Platform tools for the implementation of
those extension applications.
Extension concept
SAP Cloud Platform serves as a dedicated and isolated secure application container (hosting Java or HTML5
applications, or both). On one hand, it provides the API-level access to the extended SAP solution. On the other
hand, it takes care of the lifecycle management and the initial configuration of the extension applications. There
are several levels of extension integration:
● Application customization
Usually, every SAP cloud solution comes with certain customization capabilities. Depending on the technology
stack, this might vary from a fully fledged customization for existing business objects, through creating
custom business objects, and up to generating native user interfaces based on the customized objects. Some
of the SAP technology stacks allow implementation teams to even do some simple coding, which is then
executed natively as part of the customized product. Regardless of how feature-rich the extended solution is,
SAP Cloud Platform adds much more to the extension capabilities and enables you to build a large number of
extension scenarios and interact with on-premise and cloud systems.
● Loosely coupled applications
As a minimum, extension applications need a configured Single Sign-On (SSO) with the extended SAP solution.
All the SAP cloud solutions provide the means for such configuration: you can either leverage the solution's
local integrated SAML 2.0-compliant identity provider or use the SAP Cloud Platform Identity
Authentication service as a central trust point in the landscape. As a rule of thumb, if you want to integrate a
number of different SAML 2.0-compliant solutions in your landscape, a central trust management point such
as Identity Authentication will significantly simplify the management of additional trusts. Furthermore, SAP
Cloud Platform comes pre-integrated with Identity Authentication.
Another aspect of the loosely coupled applications is that you have to ensure the end-to-end user identity
propagation going across all the extension application layers. This means that if, for example, a user has
logged on to an HTML5 application, it has to be the same user on behalf of which all the underlying backend
calls are performed. To achieve this, you leverage the SAML 2.0 bearer assertion authentication flow, which
is the default way for accessing any SAP cloud solution API from SAP Cloud Platform. You use the same
approach for Java applications.
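For background, the SAML 2.0 bearer assertion flow (RFC 7522) exchanges a base64url-encoded SAML assertion for an OAuth access token at the solution's token endpoint. The sketch below only builds the form-encoded token request body to illustrate the flow; SAP Cloud Platform normally performs this exchange transparently when a destination is configured accordingly, and the class and parameter names here are illustrative assumptions:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of the token request used by the SAML 2.0 bearer assertion
// flow (RFC 7522). SAP Cloud Platform performs this exchange
// transparently for configured destinations; the method and parameter
// names here are illustrative assumptions.
public class SamlBearerRequest {
    static String tokenRequestBody(String samlAssertionXml, String clientId) {
        // RFC 7522 requires the assertion to be base64url-encoded.
        String assertion = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(samlAssertionXml.getBytes(StandardCharsets.UTF_8));
        return "grant_type=" + encode("urn:ietf:params:oauth:grant-type:saml2-bearer")
                + "&assertion=" + encode(assertion)
                + "&client_id=" + encode(clientId);
    }

    private static String encode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}
```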
Related Information
Extension subaccount
An extension subaccount is part of a customer or partner SAP Cloud Platform global account in the Neo
environment which is configured to interact with a particular SAP solution through standardized destinations,
usually with identity propagation turned on. See Accounts [page 10].
Tip
For extension subaccounts, we recommend that you change the default SAP Cloud Platform role provider to the
one of the extended SAP solution. Thus you channel all role assignment calls to the underlying extended SAP
system. For more information about changing the default role provider, see Role Provider (Extending SAP
SuccessFactors) [page 1649].
An extension application consists of several layers. It usually has a front-end UI layer decoupled from the back end
by OData, or REST services.
To achieve smooth retheming and rebranding, you can use SAPUI5 for implementing the UI layer. However, you can
also use any HTML5 or JavaScript UI framework.
The extension application back end includes existing SAP solution services, or can expose custom services
delivered with the extension application on SAP Cloud Platform.
Related Information
An extension application usually consists of several layers. There is a front-end UI layer decoupled from the back
end by OData, or REST services.
To achieve smooth retheming and rebranding, you implement the front-end UI layer using SAPUI5. You can also
use any HTML5 or JavaScript UI framework.
The following artifacts are part of the UI package and delivered with the extension:
The following figures provide an overview of the building blocks of the extension application front end:
Extensions usually aggregate data from multiple different business systems by combining multiple application
widgets on one or multiple pages. If you have to combine data and need to apply additional security checks, then
you usually define a higher-level back-end service in Java or XS, aggregating the required data and exposing it
with a new REST or OData API to the UI tier.
The extension application UI can be based on the solution's native UI technology (by leveraging solution-native
generated UIs) or on HTML5. The latter can either be served out of the Java or XS layer or, most commonly, can
leverage the SAP Cloud Platform HTML5 application infrastructure, thus ensuring clear decoupling between UI
and back-end services.
Native customization
There are different native customization options available with the SAP solutions. Most commonly, you can adjust
the user interface by changing the initial product configuration, by adjusting object metadata, by manipulating
field and operation visibility or by defining custom business objects. These customization options do not require
any coding on the front-end tier since the resulting UI is generated natively in the extended solution.
To achieve smooth retheming and rebranding, you leverage SAPUI5 for the extension UI. SAPUI5 allows smooth
subsequent embedding of the custom UIs in the extended SAP solutions. The built-in extension and customization
mechanisms of SAPUI5 make it easy to replace standard views, to customize i18n resource texts, to add new
navigation paths or customize existing ones, and even to override existing code. Using SAPUI5 is a good practice but you
can also use other popular UI frameworks.
To achieve dynamic branding and retheming of extension UIs, we recommend that you use Portal service sites
configured with a corresponding template to mimic the look and feel of the extended SAP solution. Furthermore,
the Portal service allows dynamic redesign of pages leveraging the Portal authoring environment.
If you decide to go beyond pure configuration and customize the UI using SAPUI5, a natural choice would be SAP
Web IDE. SAP Web IDE helps you develop, test, and deploy SAPUI5 applications in your SAP Cloud Platform
subaccount, and expose your applications as widgets. It offers various extension templates such as SAPUI5
templates which you can use to start with. Based on the OData services of the extended solution and on their
metadata, you can start creating and adjusting the new user interface. SAP Web IDE comes with a source code
editor that helps you fine-tune the generated HTML code on your own, leveraging code completion.
Related Information
The extension application back end includes existing SAP solution services, or it can expose custom services
delivered with the extension application on SAP Cloud Platform. Usually, the back end is decoupled from the front
end by OData, or REST services.
● Active business logic, including both the content and the security checks
Business logic
The clearly decoupled business logic makes it easier to develop, test and operate extension applications on SAP
Cloud Platform. It also enables the implementation of concepts such as zero-downtime updates, A/B testing for
the UI, and others. It ensures that all security checks are performed at the right level, leaving no room for the error
of putting business logic in the UI tier. Extension applications can leverage any available SAP Cloud Platform runtime.
However, the level of integration of the different runtimes may vary. The list of features whose support may vary
depending on the runtime includes but is not limited to automatic application provisioning, roles and identity
propagation, auto-discovery of different application-bundled artifacts.
Extension applications benefit from the security model provided by both SAP Cloud Platform and the extended
SAP solution. The security framework comprises automatic import of roles and permissions, usage of SAP solution-native
admin tools, transparency of role and permission assignments, and a consistent administration experience.
By leveraging all the available platform services, extension applications will benefit from the subaccount level
Single Sign-On with the extended solution. For some of the SAP solutions (for example, SAP SuccessFactors), it is
possible to turn on native management of permissions and roles using the solution-native administration tools.
This is implemented by changing the default SAP Cloud Platform role provider. Essentially, extension applications
use the available runtime-specific standard mechanisms to check for role assignment and SAP Cloud Platform
transparently performs the assignment check in the underlying extended SAP solution.
In the scenario where the extended solution does not come with an embedded identity provider (IdP), we use the
SAP Cloud Platform Identity Authentication service as a central point for managing trust and user authentication.
By using the IdP-proxy feature of Identity Authentication, you can define your own identity provider.
The following figures provide an overview of the business logic of the extension application back end:
Persistency
The persistency layer is an essential aspect that needs to be considered when developing an extension application.
There are several options for storing data offered by SAP Cloud Platform, including both relational (for example,
SAP HANA and Sybase ASE as offered by persistence service) and unstructured (document service) data storage
options. Thus, the various storage needs of the extension applications can be covered.
It is also possible to store data in the extended SAP solution in the form of custom fields or custom business
objects. This option varies for the different extended solutions. Custom business objects, however, are usually
limited both in volume and in number.
The following figures provide an overview of the persistency options for the extension applications:
Connectivity
One of the most critical layers for the SAP Cloud Platform extension concept is the connectivity layer. It connects
an extension application to the extended SAP solution and to other required backend systems. The connectivity is
accomplished through a set of standardized destinations. All back-end calls are performed on behalf of the user
who is logged on to the extension front-end layer. To implement that, SAP Cloud Platform leverages SAML 2.0
bearer assertion authentication flow. The standardized destination names allow the portability of partner
applications - partner extension applications can expect to be installed in an environment where the required
destinations are in place and can be used. For more information about the standardized destinations, see the
solution-specific sections.
It is also possible to have destinations configured to use basic authentication or other authentication means.
However, we do not recommend the use of service users or a hard-coded user for back-end calls because the
back-end systems will not be able to perform user-based authorization checks. Furthermore, using service users
makes the end-to-end traceability very hard to achieve.
Caution
Extension applications work with your critical business data. Therefore, you must use only applications that
come from a trusted application provider. Always make sure that the extension application complies with the
common security best practices and fulfills data confidentiality and data protection requirements defined for
your organization. Do not deploy or allow access of untrusted applications to your mission-critical back-end
systems.
Related Information
You can extend the scope of SAP Hybris Cloud for Customer using SAP Cloud Platform extension applications.
Overview
Extending SAP Hybris Cloud for Customer on SAP Cloud Platform allows you to implement additional workflows or
modules on top of SAP Hybris Cloud for Customer, benefiting from out-of-the-box security, inherited data
access governance, user interface embedding, and others.
Note
Currently, you can extend your SAP Hybris Cloud for Customer application using SAP Cloud Platform Neo
environment only.
In the SAP Hybris Cloud for Customer extension scenarios, these are the important aspects:
● The Extension Applications for SAP Hybris Cloud for Customer are hosted or subscribed in a dedicated SAP
Cloud Platform subaccount to ensure the consistency in the integration configuration between the two
solutions. The purpose of the subaccount is to hold the common integration configurations for all extension
applications.
● The single sign-on configuration between the SAP Hybris Cloud for Customer and the SAP Cloud Platform
ensures the secure and consistent data access for the extension application.
● OAuth connectivity configuration for enabling the use of SAP Hybris Cloud for Customer OData APIs.
● Configuration of the HTML mashups in SAP Hybris Cloud for Customer helps with embedding the extension
application UI directly in the SAP Hybris Cloud for Customer screens and offers the same look and feel to the
end users.
Prerequisites
● Your company has an enterprise account in the cloud platform. For more information about account types and
purchasing an enterprise account, see:
○ Global Accounts and Subaccounts
○ Purchasing an Enterprise Account
● You have a user account on the platform with administrative permissions for the enterprise account of your
company and you can use the Cloud Cockpit.
● If you need to configure single sign-on using the SAP Cloud Platform Identity Authentication service, you have
to make sure that your company has a license and a tenant for this SAP service. For more details, see the
Getting Started section of the documentation for this service.
● You have administrative permissions for the SAP Hybris Cloud for Customer system necessary for configuring
the connectivity with external systems, in this case SAP Cloud Platform.
Configure the integration between SAP Cloud Platform and SAP Hybris Cloud for Customer to enable the use of
extension applications running on top of the cloud platform.
Restriction
The integration token functionality for SAP Hybris Cloud for Customer is not available for the regions in
Europe. See Regions and Hosts Available for the Neo Environment.
If you are using regions in Europe and you want to extend SAP Hybris Cloud for Customer on SAP Cloud
Platform, proceed with the manual integration starting with Configuring Single Sign-On from the Extending
SAP Hybris Cloud for Customer.
1. Create an integration token in the SAP Cloud Platform cockpit. See Create an Integration Token for SAP Hybris
Cloud for Customer [page 1622].
2. Trigger the automatic integration process from the SAP Hybris Cloud for Customer system. See Set Up the
Integration Process from the SAP Hybris Cloud for Customer System.
3. (Optional) Configure a single sign-on between SAP Hybris Cloud for Customer and SAP Cloud Platform if the
automated integration (step 2) has not been completed. See Configuring Single Sign-On.
4. Configure the connectivity between SAP Cloud Platform subaccount and SAP Hybris Cloud for Customer
system. This includes:
1. In the SAP Hybris Cloud for Customer system: Configure the OAuth Client for OData Access
2. In the SAP Cloud Platform cockpit: Create and Configure the HTTP Destination
5. To offer seamless end-user experience, embed the user interface of the new solution in the SAP Hybris Cloud
for Customer screens. See Embedding User Interface of an Extension Application in SAP Hybris Cloud for
Customer
Related Information
You create an integration token required for the automated configuration for extending SAP Hybris Cloud for
Customer on SAP Cloud Platform.
Prerequisites
To integrate your SAP Hybris Cloud for Customer system with an SAP Cloud Platform subaccount, you need the
following platform scopes to be specified for your custom platform role:
● readAccount
● readSubscriptions
● readExtensionIntegration
● manageExtensionIntegration
● readTrustSettings
● manageTrustSettings
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Context
To initiate the automated configuration for extending SAP Hybris Cloud for Customer on SAP Cloud Platform, the
SAP Hybris Cloud for Customer administrators need an integration token. The token determines the SAP Cloud
Platform region and the subaccount from which the respective resources will be consumed. This is the subaccount
that will be integrated with your SAP Hybris Cloud for Customer system.
As an SAP Cloud Platform user with permissions for the respective subaccount, you create an integration token
using the SAP Cloud Platform cockpit, and then pass it over to the SAP Hybris Cloud for Customer administrator.
Restriction
The integration token functionality for SAP Hybris Cloud for Customer is not available for the regions in Europe.
See Regions and Hosts Available for the Neo Environment.
If you are using regions in Europe and you want to extend SAP Hybris Cloud for Customer on SAP Cloud
Platform, proceed with the manual integration starting with Configuring Single Sign-On from the Extending SAP
Hybris Cloud for Customer.
Procedure
1. In your Web browser, open the SAP Cloud Platform cockpit using the relevant URL for the region with which
your customer subaccount is associated. For more information about the regions, see Regions.
2. Select the relevant subaccount, and then choose Integration Tokens in the navigation area.
3. In the Integration Tokens panel, choose New Token > New SAP Hybris Cloud for Customer Token.
The New SAP Hybris Cloud for Customer Token dialog box opens.
4. Select a trusted identity provider for single sign-on with SAP Hybris Cloud for Customer:
○ If you already have a configured trusted identity provider in your extension subaccount that is marked as
default one, you can keep it. To do that, select Use existing.
○ If you have an SAP Cloud Platform Identity Authentication service license for your global account, you can
use it. To do that, select Use SAP Cloud Platform Identity Authentication service and then select one of the
available tenants from the drop-down menu.
Note
If you select the Use SAP Cloud Platform Identity Authentication service option, a new trusted identity
provider will be configured after a successful integration.
Note
If there is no identity provider that can be used for establishing single sign-on between SAP Cloud Platform
and SAP Hybris Cloud for Customer system, both options are disabled. In this case, you need to manually
configure a trusted identity provider.
Your newly created token appears in the list of integration tokens and its status is ACTIVE. In the Integration
Tokens panel, you can view details such as the user who has created the token, the creation date and the
expiration date. The token is valid for 7 days after it has been created.
Note
The integration token can be used only once. Once the integration token is used, it is no longer valid.
○ To view the identity provider used for the applications in the subaccount where the token has been
created, choose Token details in the Actions column on the row of the respective token.
○ To delete an integration token, choose Delete token in the Actions column on the row of the respective
token.
Results
You have created an integration token which you can use to initiate the automated configuration for extending SAP
Hybris Cloud for Customer on SAP Cloud Platform.
Note
Make sure to use the integration token before its expiration date.
You can now pass over the value of the token to the SAP Hybris Cloud for Customer administrator who will be
triggering the automated configuration for extending SAP Hybris Cloud for Customer on SAP Cloud Platform. For
more information, see Set Up the Integration Process from the SAP Hybris Cloud for Customer System.
When the integration has been triggered, the integration token state will change from ACTIVE to CONSUMED. After
a successful integration, the token state is changed from CONSUMED to SERVED.
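The token lifecycle described above (ACTIVE, then CONSUMED when the integration is triggered, then SERVED after a successful integration) can be sketched as a small state machine; the enum below is purely illustrative and not a platform API:

```java
// Illustrative sketch of the integration-token lifecycle described
// above (ACTIVE -> CONSUMED -> SERVED); not a platform API.
public class IntegrationTokenState {
    enum State { ACTIVE, CONSUMED, SERVED }

    static State next(State current) {
        switch (current) {
            case ACTIVE:   return State.CONSUMED; // integration triggered
            case CONSUMED: return State.SERVED;   // integration succeeded
            default:       return State.SERVED;   // SERVED is terminal
        }
    }
}
```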
Related Information
Accounts
You can extend the scope of SAP SuccessFactors HCM Suite using SAP Cloud Platform extension applications.
Overview
Extending SAP SuccessFactors on SAP Cloud Platform allows you to broaden the SAP SuccessFactors scope with
applications running on the platform. This makes it quick and easy for companies to adapt and integrate SAP
SuccessFactors cloud applications to their existing business processes, thus helping them maintain competitive
advantage, engage their workforce and improve their bottom line.
Extending SAP SuccessFactors on SAP Cloud Platform delivers the in-memory computing speed of SAP Cloud
Platform and includes capabilities from the SAP SuccessFactors metadata framework (MDF) and SAP Cloud
Platform for extension development. This combination of technologies makes it easier for SAP SuccessFactors
customers, partners, and developers to extend cloud or on-premises applications, build entirely new cloud
applications, and enable new processes that meet unique business needs. Therefore, you can extend SAP
SuccessFactors on SAP Cloud Platform for both internal custom development based on the provided SAP
SuccessFactors APIs and for running certified extension applications provided by SAP partner companies.
Extensibility layers
Using MDF, you can develop custom objects, automatically expose them to SAP Cloud Platform, and enable them
for social media and mobile apps. This allows you to quickly define the data layer inside the SAP SuccessFactors
HCM suite. You can then access that data layer and build on top of it by defining complex application logic and
creating a feature-rich user interface in SAP Cloud Platform.
With MDF you can create the precise functionality needed for your company's unique business requirements. You
can easily maintain and update the functionality as needed throughout the application lifecycle. You can also
integrate changes into your existing business processes, since every MDF object comes ready with an OData API
that can both read and write data.
Developers can leverage the following HTTP connectivity destinations pointing to SAP SuccessFactors:
Destinations
Note
You create the destination manually. You use the ConnectivityConfiguration API for accessing the destination
configuration. For more information, see ConnectivityConfiguration API [page 80].
Supported APIs
You can find a list and implementation details of the APIs supported by SAP SuccessFactors HCM Suite on SAP
Help Portal, at http://help.sap.com/hr_api/.
SAP Cloud Platform provides the following options for deploying and configuring SAP SuccessFactors extension
applications. The preferred option depends on your scenario.
● Deploying and configuring an extension application using the SAP Cloud Platform cockpit (preferable for
productive scenarios).
For more information, see .
● Deploying and configuring an extension application using console client commands (preferable for
development scenarios).
For more information, see Installing and Configuring Extension Applications [page 1628].
Related Information
You create an integration token required for the configuration of the extension integration between SAP Cloud
Platform and SAP SuccessFactors.
Prerequisites
To create an integration token, your user needs to be assigned either the predefined Administrator role or a
custom platform role with the following scopes:
● readExtensionIntegration
● manageExtensionIntegration
● manageDestinations
● manageApplicationRoleProvider
● manageTrustSettings
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Context
To initiate the automated configuration for extending SAP SuccessFactors on SAP Cloud Platform, the SAP
SuccessFactors administrators with Provisioning access need an integration token. It determines the SAP Cloud
Platform subaccount that will be integrated with your SAP SuccessFactors company.
As an SAP Cloud Platform user with permissions for the respective subaccount, you create an integration token
using the SAP Cloud Platform cockpit, and then pass it over to the SAP SuccessFactors administrator.
Procedure
1. In your Web browser, open the SAP Cloud Platform cockpit using the relevant URL for the region with which
your customer subaccount is associated. For more information about the regions, see Regions.
2. Select the relevant subaccount, and then choose Integration Tokens in the navigation area.
3. In the Integration Tokens panel, choose New Token > New SAP SuccessFactors Token.
Your newly created token appears in the list of integration tokens and its status is ACTIVE. In the Integration
Tokens panel, you can view details such as the user who has created the token, the creation date and the
expiration date. The token is valid for 7 days after it has been created.
Note
The integration token can be used only once; after it has been used, it is no longer valid.
○ To view the identity provider used for the applications in the subaccount where the token has been
created, choose Token details in the Actions column on the row of the respective token.
○ To delete an integration token, choose Delete token in the Actions column on the row of the respective
token.
Results
You have created an integration token which you can use to initiate the automated configuration for extending SAP
SuccessFactors on SAP Cloud Platform.
Note
Make sure to use the integration token before its expiration date.
Next Steps
You can now pass the value of the token to the SAP SuccessFactors administrator who will trigger the
automated configuration for extending SAP SuccessFactors on SAP Cloud Platform. For more information, see
Configure the Extension Integration Between SAP Cloud Platform and SAP SuccessFactors.
Related Information
Accounts
As an implementation partner, you install and configure the extension applications that you want to make available
for customers.
You deploy your extension application, configure its connectivity to the SAP SuccessFactors system and map the
roles defined in your extension application to the roles in the corresponding SAP SuccessFactors system.
Prerequisites
● You have an SAP Cloud Platform extension subaccount and the corresponding SAP SuccessFactors customer
instance connected to it.
Process Flow
You deploy your extension application in your SAP Cloud Platform extension subaccount and create the resource
file with role definitions. You also need to configure the application connectivity to SAP SuccessFactors and to
enable the use of the HCM Suite OData API. To ensure that only approved applications are using the SAP
SuccessFactors identity provider for authentication, you need to register the extension application as an
authorized assertion consumer service in SAP SuccessFactors. Then you register the extension application home
page tiles and import the extension application roles in the SAP SuccessFactors customer instance connected to
the extension subaccount.
To finalize the configuration on SAP Cloud Platform side, you change the default role provider to the SAP
SuccessFactors one. To finalize the configuration on SAP SuccessFactors side, you assign user groups to the
permission roles defined for your extension application.
Task: Description
1. Deploy the Extension Application on the Cloud [page 1630]: Deploy the extension application in your extension subaccount on SAP Cloud Platform.
2. Create the Resource File with Role Definitions [page 1631]: Create the resource file containing the SAP SuccessFactors HCM role definitions.
3. Register the Extension Application as an Authorized Assertion Consumer Service [page 1634]: Register the extension application as an authorized assertion consumer service.
4. Configure the Extension Application's Connectivity to SAP SuccessFactors [page 1636]: Configure the connectivity between your Java extension application and the SAP SuccessFactors system associated with your SAP Cloud Platform extension subaccount.
Note
This task is relevant for Java extension applications only.
5. Register a Home Page Tile for the Extension Application [page 1639]: Register a home page tile for the extension application in the extended SAP SuccessFactors system.
6. Import the Extension Application Roles in the SAP SuccessFactors System [page 1645]: Import the application-specific roles from the SAP Cloud Platform system repository into the SAP SuccessFactors customer instance connected to your extension subaccount.
7. Assign the Extension Application Roles to Users [page 1646]: Assign the extension application roles you have imported in the SAP SuccessFactors system to the users to whom you want to grant access to your application.
8. Enable SAP SuccessFactors Role Provider [page 1647]: Change the default SAP Cloud Platform role provider of your Java application to the SAP SuccessFactors role provider.
Note
This task is relevant for Java extension applications only.
9. Test the Role Assignments [page 1651]: Access the application with users with different levels of granted access to test the role assignments.
You deploy the extension application in your extension subaccount on SAP Cloud Platform so that you can run it
and integrate it in SAP SuccessFactors.
Prerequisites
● You have the WAR file of the extension application you want to deploy.
● The WAR file contains the ZIP archive of the application site, as well as the <application_name>.spec.xml
file describing the corresponding widgets. For an example of a site ZIP archive and structure, see the Get the
Source Code section in https://github.com/SAP/cloud-sfsf-benefits-ext .
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Setting Up
the Console Client.
Context
You deploy the extension applications using the SAP Cloud Platform console client. The applications are deployed
in the customer subaccount on the same production landscape where the SAP Cloud Platform Portal service is
deployed. The production landscape is available on a regional basis, where each region represents the location of a
data center. When deploying applications, bear in mind that a customer is associated with a particular region and
that this region is independent of your own location. You could be located in the United States, for example, while
the customer subaccount you deploy to is associated with a data center in another region.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. To deploy the extension application, execute the following command:
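For example, a deploy call might look like the following sketch. The host, subaccount, application name, user, and WAR file path are placeholder values, and the parameter names assume the standard neo console client syntax:

```shell
# Placeholder values; parameter names follow the standard neo deploy syntax.
neo deploy --host us1.hana.ondemand.com --account mysubaccount \
  --application myextension --source myextension.war --user myuser
```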
Results
You have deployed the extension application in your extension subaccount on the SAP Cloud Platform.
Related Information
You create the resource file containing the SAP SuccessFactors HCM role definitions.
Prerequisites
● The corresponding SAP SuccessFactors HCM Suite roles exist in the SAP SuccessFactors system.
● You have admin access to the SAP SuccessFactors OData API and have a valid subaccount with user name and
password. For more information, see https://help.sap.com/viewer/09c960bc7676452f9232eebb520066cd/
LATEST/en-US.
Context
To create the resource file with the role definitions required for your application, you use the SAP SuccessFactors
OData API to query the permissions defined for this role, and create a roles.json file containing the role
definitions. You use HTTP Basic Authentication for the OData API call.
Procedure
1. Call the OData API to query the permissions defined for the required role using the following URL:
https://<SAP_SuccessFactors_host_name>/odata/v2/RBPRole?$filter=roleName eq
'<role_name>'&$expand=permissions&$format=json
Where:
○ <host_name> is the fully qualified domain name of the OData API host depending on the region hosting
your SAP SuccessFactors instance. For more information about the OData API endpoints, see https://
help.sap.com/viewer/d599f15995d348a1b45ba5603e2aba9b/LATEST/en-US/
03e1fc3791684367a6a76a614a2916de.html.
○ <role_name> is the name of the role as defined in the SAP SuccessFactors system.
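As a sketch, this query can be issued with curl using HTTP Basic Authentication. The credentials below are placeholders (the SAP SuccessFactors user name typically has the form user@companyId), and the role name is taken from the example response that follows:

```shell
# Placeholder credentials, host, and role name; -G with --data-urlencode
# takes care of URL-encoding the OData query options.
curl --user 'apiuser@mycompany:secret' -G \
  'https://<SAP_SuccessFactors_host_name>/odata/v2/RBPRole' \
  --data-urlencode "\$filter=roleName eq 'Test Role Permissions'" \
  --data-urlencode '$expand=permissions' \
  --data-urlencode '$format=json'
```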
The response is a JSON object containing the following properties for each of the permissions defined for the
specified role:
Example response
{
"d": {
"__metadata": {
"uri": "https://localhost:443/odata/v2/RBPRole(82L)",
"type": "SFOData.RBPRole"
},
"roleId": "82",
"roleDesc": "Testing role permissions",
"lastModifiedBy": "admin",
"lastModifiedDate": "\/Date(1404299328000)\/",
"roleName": "Test Role Permissions",
"userType": "null",
"permissions": {
"results": [{
"__metadata": {
"uri": "https://localhost:443/odata/v2/
RBPBasicPermission(60L)",
"type": "SFOData.RBPBasicPermission"
},
"permissionId": "60"
},
{
"__metadata": {
"uri": "https://localhost:443/odata/v2/
RBPBasicPermission(4L)",
"type": "SFOData.RBPBasicPermission"
},
"permissionId": "4",
"permissionStringValue": "detail_report",
"permissionLongValue": "-1",
"permissionType": "report"
}]
}
}
}
2. Create a roles.json file using the following properties. To list all the available permissions in your SAP
SuccessFactors system, use this OData API call: https://<SAP_SuccessFactors_host_name>/
odata/v2/RBPBasicPermission?$format=json. There you can find the properties that you need to
create the roles.json file.
roleName: Name of the role as defined in the response to the OData API call
roleDesc: Role description as defined in the response to the OData API call
[{
"roleDesc": "My role description",
"roleName": "My Application Role Name",
"permissions": [{
"permissionStringValue": "change_info_user_admin",
"permissionLongValue": "-1",
"permissionType": "user_admin"
},
{
"permissionStringValue": "detail_report",
"permissionLongValue": "-1",
"permissionType": "report"
}]
}]
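As a sketch, the mapping from the step 1 response to this roles.json format can be automated. The abridged sample response below is taken from the example above; everything else (file names, the use of Python 3 from the shell) is illustrative:

```shell
# Sketch only: derive a roles.json skeleton from an RBPRole OData response.
# In practice you would load the saved API response instead of the inline sample.
python3 - <<'EOF'
import json

# Abridged sample of the step 1 response.
response = {"d": {
    "roleName": "Test Role Permissions",
    "roleDesc": "Testing role permissions",
    "permissions": {"results": [
        {"permissionStringValue": "detail_report",
         "permissionLongValue": "-1",
         "permissionType": "report"},
    ]},
}}

d = response["d"]
roles = [{
    "roleName": d["roleName"],
    "roleDesc": d["roleDesc"],
    # Keep only the three permission properties used in roles.json.
    "permissions": [
        {k: p[k] for k in
         ("permissionStringValue", "permissionLongValue", "permissionType")}
        for p in d["permissions"]["results"]
    ],
}]

with open("roles.json", "w") as f:
    json.dump(roles, f, indent=2)
print(json.dumps(roles))
EOF
```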
Next Steps
Import the role definition resource file in the SAP SuccessFactors system connected to your extension
subaccount. For more information, see Import the Extension Application Roles in the SAP SuccessFactors System
[page 1645].
Register the extension application as an authorized assertion consumer service to configure its access to the SAP
SuccessFactors system through the SAP SuccessFactors identity provider (IdP).
Prerequisites
● You have made yourself familiar with the SAP Cloud Platform console client. For more information, see
Console Client.
● The extension application is started. For more information about starting an application deployed in an SAP
Cloud Platform subaccount, see start.
● The SAP Cloud Platform subaccount in which you configure the connectivity to the SAP SuccessFactors
system is an extension subaccount. For more information about extension subaccounts, see Basic Concepts.
Context
Extension applications deployed in an SAP Cloud Platform extension subaccount are authenticated against the
SAP SuccessFactors (IdP). To ensure that only approved applications are using the SAP SuccessFactors IdP for
authentication, you need to register the extension application as an authorized assertion consumer service,
configure the application URL, the service provider audience URL and the service provider logout URL of the
extension application in SAP SuccessFactors Provisioning. To do so, you use the hcmcloud-enable-application-access console client command.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. Register the extension application as an authorized assertion consumer service. In the console client
command line, execute: hcmcloud-enable-application-access, as follows:
○ For an application deployed in your subaccount, specify the name of your extension application for the
application parameter.
For example, to register a Java extension application running in your subaccount in the US East region,
execute:
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of your extension application for the application parameter in the following format:
<application_provider_subaccount>:<my_application>.
For example, to register a Java extension application to which your subaccount in the US East region is
subscribed, execute:
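The two command variants above might look like the following sketches. The subaccount, application, user, and host values are placeholders, and the parameter names assume the standard console client syntax:

```shell
# Application deployed in your subaccount (assumed parameter names):
neo hcmcloud-enable-application-access --account mysubaccount \
  --application myextension --host us1.hana.ondemand.com --user myuser

# Application to which your subaccount is subscribed:
neo hcmcloud-enable-application-access --account mysubaccount \
  --application appprovider:myextension --host us1.hana.ondemand.com --user myuser
```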
3. (Optional) Display the status of an application entry in the list of authorized assertion consumer services for
the SAP SuccessFactors system associated with an extension subaccount. In the console client command line,
execute hcmcloud-display-application-access, as follows:
○ For an application deployed in your subaccount, specify the name of your extension application for the
application parameter.
For example, to display the status of the authorized assertion consumer service entry for an application
deployed in your subaccount in the US East region, execute:
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of your extension application for the application parameter in the following format:
<application_provider_subaccount>:<my_application>.
For example, to display the status of the authorized assertion consumer service entry for an application to
which your subaccount in the US East region is subscribed, execute:
4. (Optional) If your scenario requires it, remove the entry of the extension application from the list of authorized
assertion consumer services for the SAP SuccessFactors system associated with the extension subaccount. In
the console client command line, execute hcmcloud-disable-application-access, as follows:
○ For an application deployed in your subaccount, specify the name of your extension application for the
application parameter.
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of your extension application for the application parameter in the following format:
<application_provider_subaccount>:<my_application>.
For example, to remove the authorized assertion consumer service entry for a Java application to which
your subaccount in the US East region is subscribed, execute:
Related Information
Use this procedure to configure the connectivity between your Java extension application and the SAP
SuccessFactors system associated with your SAP Cloud Platform extension subaccount.
Prerequisites
● If you configure access to the HCM Suite OData API, you must have the OData API enabled for your SAP
SuccessFactors company instance in Provisioning. For more information, see the OData API Developer Guide,
available on SAP Help Portal at https://help.sap.com/viewer/d599f15995d348a1b45ba5603e2aba9b/
LATEST/en-US/03e1fc3791684367a6a76a614a2916de.html.
● You have made yourself familiar with the SAP Cloud Platform console client. For more information, see
Console Client
● You have the role-based permissions enabled for the SAP SuccessFactors company instance.
● The SAP Cloud Platform subaccount in which you configure the connectivity to the SAP SuccessFactors
system is an extension subaccount. For more information about extension subaccounts, see Basic Concepts
● Your application runtime supports destinations. For more information about the application runtimes
supported by SAP Cloud Platform, see Application Runtime Container
Note
This procedure is relevant only for Java extension applications.
The extension applications interact with the extended SAP SuccessFactors system using the HCM Suite OData
API. The HCM Suite OData API is a RESTful API based on the OData protocol intended to enable access to data in
the SAP SuccessFactors system. You have the following API access scenarios:
To enable the API access and configure the connectivity between the Java extension applications and the SAP
SuccessFactors system associated with your extension subaccount, you use the hcmcloud-create-connection
console client command. Using the command, you specify the connection details for the remote
communication of the extension application and create the HTTP destinations required for configuring the API
access. The command also creates and configures the corresponding OAuth clients in the SAP SuccessFactors
company instance.
The command uses the following predefined destination names for the different connection types:
OData: sap_hcmcloud_core_odata
You can consume this destination in your application using one of these APIs:
If your scenario requires it, you can have two connections for an extension application as long as the type of the
connections differs.
Depending on whether the extension application is deployed in your subaccount or your subaccount is subscribed
to the extension application, you configure the connectivity on an application level in the subaccount where the
application is deployed, or on a subscription level in the subaccount subscribed to the application.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. Configure the connectivity. In the console client command line, execute hcmcloud-create-connection, as
follows:
○ For an application deployed in your subaccount, specify the name of your extension application for the
application parameter.
For example, to create a connection of the OData type for an application deployed in your subaccount in
the US East region, execute:
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of your extension application for the application parameter in the following format:
<application_provider_subaccount>:<my_application>.
For example, to configure a connection of the OData type for an application to which your subaccount in
the US East region is subscribed, execute:
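The OData connection examples referenced above might look like the following sketches. All values are placeholders, and the parameter names, including the connection type parameter, are assumptions based on the standard console client syntax:

```shell
# Application deployed in your subaccount (assumed parameter names):
neo hcmcloud-create-connection --account mysubaccount --application myextension \
  --type odata --host us1.hana.ondemand.com --user myuser

# Application to which your subaccount is subscribed:
neo hcmcloud-create-connection --account mysubaccount \
  --application appprovider:myextension --type odata \
  --host us1.hana.ondemand.com --user myuser
```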
3. (Optional) List the connections created for the extension application. In the console client command line,
execute hcmcloud-list-connections, as follows:
○ For an application deployed in your subaccount, specify the name of your extension application for the
application parameter.
For example, to list the connections for an application deployed in your subaccount in the US East region,
execute:
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of your extension application for the application parameter in the following format:
<app_provider_subaccount>:<my_app>.
For example, to list the connections for an application to which your subaccount in the US East region is
subscribed, execute:
4. (Optional) If your scenario requires it, remove the connectivity configured between your extension application
and the SAP SuccessFactors systems associated with the extension subaccount. In the console client
command line, execute hcmcloud-delete-connection, as follows:
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of your extension application for the application parameter in the following format:
<app_provider_subaccount>:<my_app>.
For example, to remove a connection of type OData with technical user for an application to which your
subaccount in the US East region is subscribed, execute:
Related Information
You register a Home Page tile for the extension application in the extended SAP SuccessFactors system so that
you can access the application directly from the SAP SuccessFactors Employee Central (EC) Home Page.
Prerequisites
Note
This is a beta feature available for SAP Cloud Platform extension subaccounts. For more information about the
beta features, see Using Beta Features in Subaccounts [page 16].
● You have deployed and started the extension application for which you are registering the Home Page tile.
● You have registered the extension application as an authorized assertion consumer service. For more
information, see Register the Extension Application as an Authorized Assertion Consumer Service.
● You have the Home Page tile provided as part of the application interface.
You develop the content of the tile as a dedicated HTML page inside the application and size it according to the
desired tile size. You describe the tiles in a tiles.json descriptor and package them in a ZIP archive.
Context
The SAP SuccessFactors EC Home Page provides a framework that allows different modules to provide access to
their functionality using tiles. For the extension applications hosted in the SAP Cloud Platform extension
subaccount, SAP Cloud Platform allows you to register Home Page tiles in the extended SAP SuccessFactors
system. To do so, you use the hcmcloud-register-home-page-tiles console client command. Both Java and
HTML5 extension applications are supported.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. Register the SAP SuccessFactors EC Home Page tiles in the SAP SuccessFactors company instance linked to
the specified SAP Cloud Platform subaccount. In the console client command line, execute: hcmcloud-
register-home-page-tiles, as follows:
○ For an application deployed in your subaccount, specify the name of your extension application for the
application parameter.
For example, to register a Home Page tile for a Java extension application running in your subaccount in
the US East region, execute:
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of the extension application for the application parameter in the following format:
<application_provider_subaccount>:<my_application>.
For example, to register a Home Page tile for a Java extension application to which your subaccount in the
US East region is subscribed, execute:
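The registration examples referenced above might look like the following sketches, with placeholder subaccount, application, user, and host values; the parameter names assume the standard console client syntax:

```shell
# Application deployed in your subaccount (assumed parameter names):
neo hcmcloud-register-home-page-tiles --account mysubaccount \
  --application myextension --host us1.hana.ondemand.com --user myuser

# Application to which your subaccount is subscribed:
neo hcmcloud-register-home-page-tiles --account mysubaccount \
  --application appprovider:myextension --host us1.hana.ondemand.com --user myuser
```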
Note
The size of the tile descriptor file must not exceed 100 KB.
3. (Optional) List the extension application Home Page tiles registered in the SAP SuccessFactors company
instance associated with the extension subaccount. In the console client command line, execute hcmcloud-
get-registered-home-page-tiles, as follows:
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of the extension application for the application parameter in the following format:
<application_provider_subaccount>:<my_application>.
For example, to list the tiles registered for a Java extension application to which your subaccount in the US
East region is subscribed, execute:
Note
If you do not specify the application parameter, the command returns all the tiles registered in the SAP
SuccessFactors EC Home Page of the SAP SuccessFactors company instance linked to the extension
subaccount.
There is no lifecycle dependency between the tiles and the application, so the application may not be
started or may not be deployed anymore.
4. (Optional) If your scenario requires it, unregister the SAP SuccessFactors EC Home Page tiles registered for
the extension application. In the console client command line, execute hcmcloud-unregister-home-page-
tiles, as follows:
○ For an application deployed in your subaccount, specify the name of your extension application for the
application parameter.
For example, to unregister the SAP SuccessFactors EC Home Page tiles for a Java application deployed in
your subaccount in the US East region, execute:
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of your extension application for the application parameter in the following format:
<application_provider_subaccount>:<my_application>.
For example, to unregister the SAP SuccessFactors EC Home Page tiles for a Java application to which
your subaccount in the US East region is subscribed, execute:
The Home Page tiles JSON descriptor, for example, tiles.json, contains the definition of the Home Page tiles for
the extension application. Depending on the version of the Home Page tiles (Home Page v12 tiles or New Home
Page tiles), a different set of properties is required in the tiles.json file. You can configure both versions in one
JSON file.
Properties
name The name of the tile used to identify it. This name is used in the Home Page administration
tools and it is not visible to end-users, but only to HR administrators.
This is the title of the custom tile as it appears to end-users on the Home Page.
The value en_US is mandatory. Otherwise, the value for other locales is not provided, and
end-users will see a blank tile.
This is a subtitle that appears on custom tiles under the tile title. Subtitles are optional. If
you do not want to use a subtitle, you can leave the field blank.
Using this property, you can localize the title and the subtitle.
type Determines how the tile appears to end-users. Currently, only static type is supported.
Contains the ID of the icon that you want to use for the custom tile. You can take the ID from the SAP
SuccessFactors system: go to Admin Center > Manage Home Page > Add Custom Tile, and then follow the
wizard until you choose the icon in the Tile tab. Then take the ID and place it in the tiles.json file. For
example, "sap-icon://add-product".
navigation The tile navigation determines how the tile responds when a user clicks or selects it. You
can choose from the following options:
● type: html-popover
configuration: Contains two properties, "contentSize" and "content".
"contentSize" defines the width of the popover window and has one of the following
values: "responsive", "small", "medium", and "large".
"content" defines the HTML content of the popover window and has a String value.
You cannot localize the content of this type (unlike the SAP SuccessFactors custom
popover tiles).
● type: iframe-popover
configuration: Contains two properties, "contentSize" and "URL".
"contentSize" defines the size of the popover window and has one of the following val
ues: "responsive", "small", "medium", and "large".
"URL" defines the iframe source URL, relative to the application root, and has a String value.
● type: url
configuration: Contains the "url" property.
"url" defines the URL link that will be opened and has a String value. For example,
"index.html".
The "url" is relative to the application root.
● type: no_target
For more details about the New Home Page tiles properties, see Custom Tile Configuration Settings.
name The name of the tile used to identify it. This name is used in the Home Page administration
tools and it is not visible to end-users, but only to HR administrators.
Default: 1
Accepted values:
● 1 - medium
● 2 - large
● 3 - extra large
padding Defines whether to add padding around the tile and the application tile content.
Default: false
roles Defines a list of roles. If you specify a role in this parameter, the tile is visible only to
the users assigned to this role. All the roles must already exist in the SAP SuccessFactors
system before you add them to the JSON file.
metadata Defines the localized tile title and description. If you do not define this parameter, the
framework displays the value of the name parameter to the users.
Optional
Note
The tiles.json descriptor file must use UTF-8 encoding and its size must not exceed 100 KB.
Example
This example contains both Home Page v12 and New Home Page tiles.
[{
"tileVersion": "NEW_HOME_PAGE",
"tiles": [{
"name": "New Test Application",
"section": "news",
"metadata": [{
"title": "My new home page tile",
"subTitle": "This is new home page tile",
"locale": "en_US"
}],
"type": "static",
"configuration": {
"icon": "sap-icon://add-product"
},
"navigation": {
"type": "url",
"configuration": {
"openInNewWindow": false,
"url": "index.html"
}
}
}]
}, {
"tileVersion": "V12",
To complete the authorization configuration of your extension application, you import the application-specific
roles into the SAP SuccessFactors company instance connected to your extension subaccount.
Prerequisites
● You have created the resource file with the required role definitions. For more information, see Create the
Resource File with Role Definitions [page 1631].
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Setting Up
the Console Client.
Context
Using the hcmcloud-import-roles console client command, you import the required role definitions in the SAP
SuccessFactors company instance connected to your subaccount.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. Execute the following command:
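The import call might look like the following sketch. All values are placeholders, and the parameter names, including the role definition file parameter, are assumptions based on the standard console client syntax:

```shell
# Assumed parameter names; roles.json is the resource file created earlier.
neo hcmcloud-import-roles --account mysubaccount --application myextension \
  --location roles.json --host us1.hana.ondemand.com --user myuser
```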
Note
The size of the file containing the role definitions must not exceed 500 KB.
Results
You have imported the application-specific roles in the SAP SuccessFactors company instance connected to your
subaccount. Now you need to assign users to these roles.
Related Information
To complete the authorization configuration for your extension application, you assign the extension application
roles you have imported in the SAP SuccessFactors systems to the user to whom you want to grant access to your
application.
Prerequisites
● You have a role-based permission environment for your SAP SuccessFactors company instance.
● You have either a Super Administrator or a Security Admin user for SAP SuccessFactors and have access to
the functionality on the SAP SuccessFactors Admin page.
● You have deployed the extension application.
Procedure
1. In your Web browser, log on to the SAP SuccessFactors system using the following URL:
https://<SAP_SuccessFactors_landscape>/login
Where <SAP_SuccessFactors_landscape> is the fully qualified domain name of the host on which the SAP
SuccessFactors company is running.
2. Navigate to Manage Permission Roles, as follows:
○ For Version 12 UI Framework (Revolution) not enabled, navigate to Admin Center > Manage Security >
Manage Permission Roles.
○ For Version 12 UI Framework (Revolution) enabled, navigate to Admin Center > Manage Employees >
Set User Permissions > Manage Permission Roles.
3. Locate the role you want to manage, and from the Take Action dropdown box next to the role, select Edit.
4. On the Permission Role Detail page, scroll down to the Grant this role to... section, and then choose Add. The
system opens the Grant this role to... page.
5. On the Grant this role to... page, define whom you want to grant this role to, and specify the target population
accordingly.
6. To navigate back to the Permission Role Detail page, choose Finished.
7. Save your entries.
If you have SAP Cloud Platform extension package for SAP SuccessFactors configured for your subaccount, you
can change the default SAP Cloud Platform role provider of your Java application to the SAP SuccessFactors role
provider.
Prerequisites
● You have an SAP Cloud Platform extension subaccount. For more information about extension subaccounts,
see Basic Concepts.
● You are an administrator of your SAP Cloud Platform subaccount
● You have configured the Java extension application's connectivity to the SAP SuccessFactors system
associated with the extension subaccount. For more information, see Configure the Extension Application's
Connectivity to SAP SuccessFactors [page 1636].
Context
A role provider is the component that retrieves the roles for a particular user. By default, the role provider used for
SAP Cloud Platform applications and services is the SAP Cloud Platform role provider. For Java extension
applications, however, you have to change the default role provider to the provider of the corresponding system.
For Java extension applications for SAP SuccessFactors you change the default role provider to the SAP
SuccessFactors role provider. To change the role provider for a Java extension application for SAP SuccessFactors
automatically, use the hcmcloud-enable-role-provider console client command.
Note
Currently, the automated change of the role provider is available only for Java extension applications for SAP
SuccessFactors.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation folder>/tools).
2. Enable the SAP SuccessFactors role provider for your Java extension application. Execute hcmcloud-enable-role-provider, as follows:
○ For an application deployed in your subaccount, specify the name of your extension application for the
application parameter.
For example, to enable the SAP SuccessFactors role provider for a Java extension application running in
your subaccount in the US East region, execute:
○ For an application to which your subaccount is subscribed, specify the application provider subaccount
and the name of your extension application for the application parameter in the following format:
<application_provider_subaccount>:<my_application>.
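For illustration, the command elided in the first case above might look like the following sketch. All values (subaccount, application, user, and the US East host name) are hypothetical placeholders, not values from this guide:

```shell
# Enable the SAP SuccessFactors role provider for an application deployed
# in your own subaccount. All parameter values below are placeholders.
neo hcmcloud-enable-role-provider \
  --account mysubaccount \
  --application myextensionapp \
  --host us1.hana.ondemand.com \
  --user p1234567

# For the subscribed case, the application parameter takes the
# <application_provider_subaccount>:<my_application> format instead:
#   --application providersubaccount:myextensionapp
```

The colon-separated format in the second case identifies the provider subaccount that deployed the application your subaccount is subscribed to.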
Related Information
If you have SAP Cloud Platform extension package for SAP SuccessFactors configured for your subaccount, you
can change the default SAP Cloud Platform role provider to another one.
Prerequisites
● You have an SAP Cloud Platform partner or customer account. For more information about account types, see
Accounts [page 10].
● You have an SAP Cloud Platform extension package for SAP SuccessFactors and the extension package is
configured for your SAP Cloud Platform subaccount.
● You have an administrator or developer role in your SAP Cloud Platform subaccount.
● Your application runtime supports destinations. For more information about the application runtimes
supported by SAP Cloud Platform, see Application Runtime Container.
● You have configured the HTTP destination required to ensure your application's connectivity to SAP
SuccessFactors. For more information, see Configuring Destinations for Extension Applications.
● In the SAP SuccessFactors system, you have roles with the required permissions, and these roles have the
same names as those defined in the web.xml file of the extension application. For more information about
creating permission roles in SAP SuccessFactors, see the How do you create a permission role? section in the
Role-Based Permissions Administration Guide.
● In the SAP SuccessFactors system, you have assigned the required roles to the corresponding users and
groups. For more information, see the How can you grant permission roles? section in the Role-Based
Permissions Administration Guide.
● When creating the extension application, you have defined the required roles in the web.xml file of the
application and these roles are the same as the ones you have for the application in the SAP SuccessFactors
system. For more information about how to define roles in the web.xml file of the application, see
Authentication [page 2122]
A role provider is the component that retrieves the roles for a particular user. By default, the role provider used for
SAP Cloud Platform applications and services is the SAP Cloud Platform role provider. For extension applications,
however, you can change the default role provider to another one, for example, an SAP SuccessFactors role provider.
Depending on whether the application is running in your subaccount or your subaccount is subscribed to the
extension application, you configure the role provider from either the Roles section for your application, or the
Subscription section for your subaccount. In addition, you can view the role provider for each enabled SAP Cloud
Platform service in the Services section of the SAP Cloud Platform cockpit.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the required SAP Cloud Platform subaccount. See Navigate to
Global Accounts and Subaccounts [page 964].
2. Navigate to the application for which you want to change the role provider. To do so, proceed as follows:
○ For a Java application running in your subaccount, choose Applications > Java Applications, and then
choose the link of the application.
○ For a Java application to which your subaccount is subscribed, choose Applications > Subscriptions,
and then choose the link of the application.
5. (Optional) To view the role provider for an SAP Cloud Platform service, in the cockpit navigate to Services >
<service_name>, and then choose Configure Roles.
The system displays the role provider in the Role Provider panel in a read-only mode.
Note
For a subaccount with SAP Cloud Platform extension package for SAP SuccessFactors, the role provider for
SAP Cloud Platform Portal is SAP SuccessFactors.
Results
The changes take effect after 5 minutes. If you want the changes to take effect immediately, restart the
application (applicable only to applications running in your subaccount).
To test the role assignments, first start the deployed extension application to make it available for requests,
and then try to access it with users that have different levels of access granted to the application.
Prerequisites
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Setting Up
the Console Client.
● You have made yourself familiar with the SAP Cloud Platform cockpit concepts. For more information, see
Cockpit
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Start the deployed application using the following command:
3. Access the application using users with different roles assigned to them.
To access the application, use the application URL. To get the login URL of an application deployed in your
extension subaccount, open the SAP Cloud Platform cockpit and navigate to Account > <subaccount_name> >
Java Applications > <name_of_your_extension_application> > Application URLs.
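The start command elided in step 2 above might look like the following sketch. The subaccount, application, user, and host values are hypothetical placeholders:

```shell
# Start the deployed extension application so that it accepts requests.
# All parameter values below are placeholders; replace them with your own.
neo start \
  --account mysubaccount \
  --application myextensionapp \
  --host us1.hana.ondemand.com \
  --user p1234567 \
  --synchronous   # return only after the application is up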
Prerequisites
● You need to have specific role-based permissions to use the Intelligent Services Center. See
Prerequisites for Intelligent Services Center.
● You need to have specific role-based permissions to use the Integration Center. See Role-based Permissions for
Integration Center.
You can subscribe the extension application to an event defined in the SAP SuccessFactors system.
For example, if you want your extension application to be notified when an employee's job title has been
changed, use the Change in Job Title event.
Follow these steps to subscribe an SAP Cloud Platform extension application to receive notifications from an SAP
SuccessFactors event of your choice:
Procedure
Prerequisites
You need to have permissions for the Integration Center in the SAP SuccessFactors system.
Context
You have to generate the X.509 certificate in the SAP SuccessFactors system. Later on, you will add this certificate
as a trusted identity provider in SAP Cloud Platform.
Procedure
Note
By default, the name of the certificate is <configuration_name>_certificate.pem.
Context
You have to create an OAuth client in SAP Cloud Platform with an ID and a secret. Later on, you will enter this
client ID and secret when creating the outbound OAuth configuration in the SAP SuccessFactors system.
Procedure
1. Go to the SAP Cloud Platform cockpit and open the extension subaccount.
Prerequisites
You have created an OAuth X509 key and have saved the X509 certificate on your local file system. See Generate
OAuth X509 Key in SAP SuccessFactors [page 1652].
Context
In the SAP Cloud Platform cockpit, register the certificate that you downloaded when generating the OAuth X509 key.
Procedure
Note
You don't need to change anything in the rest of the fields.
8. Choose Save.
Prerequisites
● You have an SAP Cloud Platform OAuth client created. See Create SAP Cloud Platform OAuth Client [page
1653].
● You have created an OAuth X509 key and have saved the X509 certificate on your local file system. See
Generate OAuth X509 Key in SAP SuccessFactors [page 1652].
● You have registered the X509 certificate when creating a trusted identity provider in the SAP Cloud Platform
cockpit. See Create Trusted Identity Provider in SAP Cloud Platform Cockpit [page 1654].
Context
Based on the X509 certificate and OAuth client, you have to create an outbound OAuth configuration in the SAP
SuccessFactors system.
Procedure
8. In the Token URL field, paste the value of the Token URL from the Token Endpoint field in the SAP Cloud
Platform cockpit, under Security > OAuth > Branding > OAuth URLs.
9. In the Token Method field, select POST.
10. In the Audience field, paste the local service provider name for your SAP Cloud Platform account from the SAP
Cloud Platform cockpit.
Note
Open the SAP Cloud Platform cockpit and go to Security > Trust > Local Service Provider > Local Provider Name. See Principal Propagation to OAuth-Protected Applications.
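Taken together, the token URL, the POST method, and the audience feed into a standard SAML 2.0 bearer token request. The call that SAP SuccessFactors makes at runtime is roughly equivalent to the following sketch; the token URL, client credentials, and assertion value are all hypothetical placeholders:

```shell
# Sketch of the token request sent by SAP SuccessFactors at runtime.
# TOKEN_URL comes from Security > OAuth > Branding > OAuth URLs in the
# cockpit; the client ID and secret are the OAuth client created earlier;
# the assertion is the base64-encoded SAML assertion signed with the
# X509 key. All values below are placeholders.
TOKEN_URL="https://oauthasservices-mysubaccount.hana.ondemand.com/oauth2/api/v1/token"
curl -s -X POST "$TOKEN_URL" \
  -u "myclientid:myclientsecret" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer" \
  --data-urlencode "assertion=PHNhbWw6QXNzZXJ0aW9uPi4uLg=="
```

The grant type shown is the one defined by the SAML 2.0 bearer assertion profile for OAuth; the audience value you paste in step 10 must match the audience restriction inside the signed assertion.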
Prerequisites
● You have created an OAuth X509 key and have saved the X509 certificate on your local file system. See
Generate OAuth X509 Key in SAP SuccessFactors [page 1652].
● You have created an SAP Cloud Platform OAuth client. See Create SAP Cloud Platform OAuth Client [page
1653].
● You have created a trusted identity provider in SAP Cloud Platform. See Create Trusted Identity Provider in
SAP Cloud Platform Cockpit [page 1654].
● You have created an outbound OAuth Configuration in SAP SuccessFactors. See Create Outbound OAuth
Configuration in SAP SuccessFactors [page 1655].
Context
If you want your extension application to receive notifications from your SAP SuccessFactors system, you need to
subscribe this application to a dedicated event. Using the Intelligent Services Center, you create and configure an
integration for this specific event.
Procedure
1. Log on to the SAP SuccessFactors system, and go to the Intelligent Services Center.
2. Search for an event and choose it from the list. The event opens in a dedicated page.
3. Select an already existing flow or create a new one from the menu on the left.
4. In the Activities section on the right, choose Integration. A new dialog opens.
5. Choose Create New Integration.
○ For the Destination Type, choose REST. This means that the extension application that will receive
notifications from this event is a REST service.
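Because the destination type is REST, the integration pushes event payloads to your application over HTTP. A local smoke test of such an endpoint might look like the following sketch; the application URL, endpoint path, and payload shape are hypothetical, since the actual payload is defined by the integration you build in the Integration Center:

```shell
# Hypothetical test call against the extension application's event endpoint.
# URL, path, and JSON fields are placeholders, not the real event schema.
APP_URL="https://myextensionapp.hana.ondemand.com/events/job-title-changed"
curl -s -X POST "$APP_URL" \
  -H "Content-Type: application/json" \
  -d '{"userId": "emp001", "event": "Change in Job Title", "newTitle": "Senior Developer"}'
```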
Manage accounts (global accounts and subaccounts), applications and virtual machines.
Related Information
Learn about frequent administrative tasks you can perform using the SAP Cloud Platform cockpit, including
managing subaccounts, orgs and spaces, entitlements, subscriptions, and members.
Related Information
Change the display name for the global account using the SAP Cloud Platform cockpit.
Prerequisites
● Cloud Foundry environment: You are a member of the global account that you'd like to edit.
● Neo environment: You have the Administrator role for the global account you'd like to edit.
The overview of global accounts available to you is your starting point for viewing and changing global account
details in the cockpit. To view or change the display name for a global account, trigger the intended action directly
from the tile for the relevant global account.
Display name: Specify a human-readable name for your global account and change it later, if
necessary. This way you can distinguish between your global accounts if you have
more than one.
Procedure
1. Choose the global account for which you'd like to change the display name and choose (Edit) on its tile.
A dialog opens with the mandatory Display Name field.
2. Enter the new human-readable name for the global account.
3. Save your changes.
Related Information
Create and delete subaccounts, define subaccount details, and enable or disable the subaccount to use beta
features using the SAP Cloud Platform cockpit. In the Neo environment, you can also create and delete
subaccounts using the respective console client commands.
In this section:
Prerequisites
● Cloud Foundry environment: You are a member of the global account that contains the subaccount you'd like
to edit.
● Neo environment: You have the Administrator role for the subaccount you'd like to edit.
Context
You edit a subaccount by choosing the relevant action on its tile. It's available in the global account view, which
shows all its subaccounts.
The subaccount name is a unique identifier of the subaccount on SAP Cloud Platform that you specify when the
subaccount is created. In the Neo environment, you use this subaccount name as a parameter for the console
client commands.
Display Name: Specify a human-readable name for your subaccount and change it later, if
necessary. This way you can distinguish between multiple subaccounts.
Enable beta features: Enable the subaccount to use features, including services and applications, that
SAP occasionally makes available for beta usage on SAP Cloud Platform. This
option is available to administrators only and is deselected by default.
Caution
You should not use SAP Cloud Platform beta features in subaccounts that belong
to productive enterprise accounts. Any use of beta functionality is at the
customer's own risk, and SAP shall not be liable for errors or damages caused by the use
of beta features.
Note
Once you have enabled this setting in a subaccount in the Cloud Foundry environment, you cannot disable it.
Procedure
1. Choose the subaccount for which you'd like to make changes and choose (Edit) on its tile.
You can view more details about the subaccount such as its description and additional attributes by clicking
Show More.
2. Make your changes and save them.
Related Information
Delete subaccounts using the SAP Cloud Platform cockpit. In the Neo environment, you can also delete
subaccounts using the respective console client commands.
Prerequisites
● You have the Administrator role for the global account that contains the subaccount you want to delete.
Neo environment:
● You have the Administrator role for the subaccount you want to delete.
● You have created the subaccount in the global account yourself.
● You have removed any active entities like subscriptions, non-shared database systems, database schemas,
deployed applications, HTML5 applications, or document service repositories from the subaccount. If any of
these items exist, you must delete them before you can delete the subaccount. Make sure also that there are
no running virtual machines in the subaccount.
Context
You cannot delete the last remaining subaccount from the global account.
Procedure
Related Information
Administrators of a global account who are members of a subaccount in the Cloud Foundry environment can
delete an organization assigned to this subaccount using the cockpit. Once the organization is deleted, you can
create a new organization.
Prerequisites
You have the Administrator role for a global account, and at the same time you are a member of the subaccount to
which the organization is assigned.
Procedure
Results
The organization is deleted. All data in the organization including spaces, applications, service instances, and
member information is lost. You can now choose Enable Cloud Foundry to create a new organization.
Related Information
Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953]
Create and delete spaces in an organization in the Cloud Foundry environment using the SAP Cloud Platform
cockpit or the Cloud Foundry command line interface.
In this section:
Related Information
Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953]
Delete spaces in your Cloud Foundry organization using the SAP Cloud Platform cockpit.
Prerequisites
● You have the Org Manager role in the organization from which you want to delete a space. For more
information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● You have ensured that the data in the space you are going to delete is no longer needed.
Caution
Please be aware that you won’t be able to access your SAP HANA database system if you delete a space in
an organization to which an SAP HANA database system is assigned.
Procedure
1. Choose the space you'd like to delete and choose (Delete) on its tile.
2. Confirm your change.
Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953]
Delete Spaces Using the Command Line Interface [page 1665]
Create Cloud Foundry Spaces Using the Cockpit [page 946]
Use the cf delete-space command to delete spaces using the Cloud Foundry command line interface (cf CLI).
Prerequisites
● Download and install the cf CLI and log on to your Cloud Foundry environment. For more information, see
Download and Install Cloud Foundry Command Line Interface and Log On to the Cloud Foundry Environment
Using the Cloud Foundry Command Line Interface [page 948].
● You must be assigned the Org Manager role in the organization in which you'd like to delete a space. For more
information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● You have ensured that the data in the space you are going to delete is no longer needed.
Caution
Please be aware that you won’t be able to access your SAP HANA database system if you delete a space in
an organization to which an SAP HANA database system is assigned.
Procedure
Enter the following string, specifying a name for the space you want to delete and the name of the organization:
Note
The parameter [-f] is optional and forces the space deletion without confirmation.
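A sketch of the command referenced above, with placeholder names for the space and organization; in cf CLI v6 you specify the organization by targeting it before the deletion:

```shell
# Target the organization that contains the space (name is a placeholder).
cf target -o myorg

# Delete the space; the optional -f flag skips the confirmation prompt.
cf delete-space myspace -f
```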
Related Information
Assign the quotas available for a global account to its subaccounts and across all regions associated with them.
You can also assign quotas to different spaces in an organization in the Cloud Foundry environment using either
the SAP Cloud Platform cockpit or the command line interface.
For more information, see Add Quotas to Subaccounts Using the Cockpit [page 944].
In this section:
Neo environment:
Manage space quota plans in the Cloud Foundry environment using the SAP Cloud Platform cockpit.
Prerequisites
You have the Org Manager role for the organization in which you want to change space quota plans. For more
information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
Procedure
1. Choose the subaccount that contains the spaces for which you'd like to change the quota.
2. In the navigation menu, choose Quota Plans.
Related Information
Change Space Quota Plans Using the Command Line Interface [page 1667]
Adding Quotas and Space Quota Plans [page 960]
Change space quota plans in the Cloud Foundry environment using the Cloud Foundry command line interface (cf
CLI).
Prerequisites
● You have the Org Manager role in the organization in which you'd like to change space quota plans. For more
information about roles and permissions, see https://docs.cloudfoundry.org/concepts/roles.html .
● Download and install the cf CLI and log on to your Cloud Foundry instance. For more information, see
Download and Install Cloud Foundry Command Line Interface and Log On to the Cloud Foundry Environment
Using the Cloud Foundry Command Line Interface [page 948].
Procedure
1. (Optional) Enter the following string to identify the names of all space quota plans available in your org:
cf space-quotas
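Step 1 lists the plans available in the targeted org; assigning or detaching a plan might then look like the following sketch. The space and plan names are placeholders; cf set-space-quota and cf unset-space-quota are the standard cf CLI commands for this:

```shell
# List the space quota plans defined in the targeted org.
cf space-quotas

# Assign a quota plan to a space (both names are placeholders).
cf set-space-quota myspace mediumplan

# Detach the quota plan from the space again.
cf unset-space-quota myspace mediumplan
```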
Related Information
Administrators can add users and assign roles to them, giving the users access to functionality based on
those roles.
You manage users at different hierarchical levels of your account model. There are differences depending on the
environment you choose.
For example, in both the Cloud Foundry environment and in the Neo environment, you manage users at global
account level. See Add Global Account Members [page 937].
At the levels below the global account, you manage users differently.
In this section:
Neo environment:
● You manage members in subaccounts. See Add Members to Subaccounts [page 965].
● You have additional options:
○ Enable Application Providers to Access Your Subaccount [page 1670]
○ Managing Member Authorizations [page 1671]
Related Information
Roles determine which functions in the cockpit users can view and access, and which actions they can initiate.
Roles support typical tasks performed by users when interacting with the cloud platform, for example, adding and
removing users. A user can be assigned one or more roles, where each role comes with a set of permissions. The
set of assigned roles defines what functionality is available to the user and what activities the user can perform.
SAP Cloud Platform offers predefined roles. These are specific to the navigation level in the cloud cockpit, for
example, the roles at the level of the organization differ from the ones for the space. In addition, users can create
their own roles based on their needs in the Neo environment.
The Administrator role is automatically assigned to the user who has started a trial account or who has purchased
resources for an enterprise account. A global account administrator has permissions to manage all roles in the
relevant global account and can assign the Administrator role to other users in the same global account. The
administrator can also add members to the global account, who are then automatically assigned the Administrator role.
In the Cloud Foundry environment, an Org Manager can manage members and roles at the level of the relevant
organization, while a Space Manager can do so only for the related space. Managing members and roles for an
organization takes place at the navigation level of the subaccount to which the organization is assigned.
Roles apply to all operations associated with the global account, the organization, or the space, irrespective of the
tool used (Eclipse-based tools, cockpit, and console client). The role assignment is specific to a global account, an
organization, or a space. Users who are assigned the roles at the level of the organization can view and navigate
into all spaces in that organization.
● Users can be assigned to one or more subaccounts and to one or more roles in the relevant subaccount.
● If the user is assigned to more than one subaccount, an administrator must assign the roles to the user for
each subaccount.
● Roles apply to all operations associated with the subaccount, irrespective of the tool used (Eclipse-based
tools, cockpit, and console client).
● As an administrator in the Neo environment, you cannot remove your own administrator role. You can remove
any member except yourself.
● Users can belong to an organization without having an org role assigned. This is required, for example, when a
role should be assigned to a user in a space in the relevant organization only. By default, the cockpit initially
hides all users that do not have a role in the organization. To view all users, set the filter to All Space Members.
● To assign roles to users at the org and space level, use the cockpit or the console client. At the global account
level, only the cockpit supports assigning the Administrator role to other users.
Related Information
If your scenario requires it, you can add application providers as members to your SAP Cloud Platform subaccount
in the Neo environment and assign them the administrator role so that they can deploy and administer the
applications you have purchased.
Prerequisites
Tip
You can request user IDs at the SAP Service Marketplace: http://service.sap.com/request-user
SAP Service Marketplace users are automatically registered with the SAP ID service, which controls user
access to SAP Cloud Platform.
Context
As an administrator of a subaccount, you can add members to it and make them administrators of the subaccount
using the SAP Cloud Platform cockpit. For example, if you have purchased an application from an SAP
implementation partner, you may need to enable the partner to deploy and administer the application.
Procedure
User IDs are case-insensitive and can contain alphanumeric characters only. Currently, there is no user
validation.
5. Select the Administrator checkbox.
Note
You cannot remove your own administrator role.
7. Notify your application provider that they now have the necessary permissions to access the subaccount.
Related Information
Related Information
SAP Cloud Platform includes predefined platform roles that support the typical tasks performed by users when
interacting with the platform. In addition, subaccount administrators can combine various scopes into a custom
platform role that addresses their individual requirements.
A platform role is a set of permissions, or scopes, managed by the platform. Scopes are the building blocks for
platform roles. They represent a set of permissions that define what members can do and what platform resources
they can access (for example, configuration settings such as destinations or quotas). Most scopes follow a
“Manage” and “Read” pattern. For example, manageXYZ comprises the actions create, update, and delete on
platform resource XYZ. However, some areas use a different pattern, for example, Application Lifecycle
Management.
Predefined platform roles cannot be changed. However, global account administrators can copy from predefined
roles, and then modify the copies.
Role Description
You can also manage subscriptions, trust, authorizations, and OAuth settings, and restart
SAP HANA services on HANA databases.
Furthermore, you can view heap dumps and download a heap dump file.
In addition, you have all permissions granted by the developer role, except the debug permission.
Note
This role also grants permissions to view the Connectivity tab in the SAP Cloud
Platform cockpit.
Cloud Connector Admin Open secure tunnels via Cloud Connector from on-premise networks to your subaccounts.
Note
This role also grants permissions to view the Connectivity tab in the SAP Cloud
Platform cockpit.
Developer Supports typical development tasks, such as deploying, starting, stopping, and debugging
applications. You can also change loggers and perform monitoring tasks, such as creating
availability checks for your applications and executing MBean operations.
Note
By default, this role is assigned to a newly created user.
Support User Designed for technical support engineers, this role enables you to read almost all data
related to a subaccount, including its metadata, configuration settings, and log files. For you
to read database content, a database administrator must assign the appropriate database
permissions to you.
Application User Admin Assigned by the subaccount administrator to a subaccount member. Manage user
permissions at the application level to access Java applications, HTML5 applications, and subscriptions. You
can control permissions directly by assigning users to specific application roles or
indirectly by assigning users to groups, which you then assign to application roles. You can
also unassign users from the roles or groups.
Note
This role does not let you manage subaccount roles or perform actions at the subaccount level (for example, stopping or deleting applications).
The following graphic illustrates the predefined Administrator, Developer, and Support User roles and the
scopes they comprise:
The Administrator role includes all platform scopes available on SAP Cloud Platform. The Developer and Support
User roles are subsets of the Administrator role.
Administrators of a subaccount can define custom platform roles based on their needs by assembling the different
scopes they want their custom platform role to include. Custom platform roles are managed at subaccount level
and can be changed at any time.
Subaccount administrators can combine various scopes into a custom platform role that addresses their
individual requirements. Scopes are the building blocks for platform roles. They represent a set of permissions
that define what members can do and what platform resources they can access (for example, configuration
settings such as destinations or quotas).
The following example illustrates what custom platform roles in SAP Cloud Platform typically look like in terms
of the scopes they include:
Related Information
Subaccount administrators can define custom platform roles and assign them to the members of their subaccounts.
Prerequisites
Procedure
1. Choose Platform Roles in the navigation area for the subaccount for which you'd like to manage custom
platform roles.
All custom and predefined platform roles available for the subaccount are shown in a list.
2. You have the following options:
Note
You cannot change or delete a predefined platform role, but you can copy from it and make changes to
the copy.
Java Application Lifecycle Management
● readJavaApplications (List Java Applications): Enables you to list Java applications, get their status, and list Java application processes.
Note
Assigning this scope to a role requires assigning the readMonitoringData scope as well.
● manageJavaProcesses (Manage Java Processes): Enables you to start or stop Java application processes.
Note
Assigning this scope to a role requires assigning the readMonitoringData scope as well.
HTML5 Application Management
● readHTML5Applications (List HTML5 Applications): Enables you to list HTML5 applications and review their status.
Note
Assigning this scope to a role requires assigning the readAccount scope as well.
Multi-Target Application Management
● readMultiTargetApplication (Browse Solutions Inventory): Enables you to list solutions, get their status, and list solution operations.
Authorization Management
● readApplicationRoles (View Application Roles): Enables you to list all user roles available for a Java application.
● manageApplicationRoles (Manage Application Roles): Enables you to assign user roles for a Java application and create new roles.
● readAuthorizationSettings (View Authorization Settings): Enables you to view all kinds of role, group, and user mappings on account and subscription level.
Note
Assigning this scope to a role requires assigning the readSubscriptions scope as well.
● manageAuthorizationSettings (Manage Authorization Settings): Enables you to manage all kinds of role, group, and user mappings on account and subscription level.
Note
Assigning this scope to a role requires assigning the readSubscriptions scope as well.
Logging Service
● readLogs (View Application Logs): Enables you to view all logs available for a Java application.
● manageLogs (Manage Application Logs): Enables you to change the log level for Java application logs.
Note
Assigning this scope to a role requires assigning the readLogs scope as well.
Document Service
● listECMRepositories (List Document Service Repositories): Enables you to list the document service repositories.
● manageECM (Manage Document Service Repositories): Enables you to create and delete document service repositories, including the management of repository keys.
Git Service
● accessGit (Access Git Repositories): Enables you to create repositories, push commits, push tags, create new remote branches, and push commits authored by other users (forge author identity).
Note
Assigning this scope to a role requires assigning the readHTML5Applications scope as well.
Metering Service
● readMeteringData (Read Metering Data): Enables you to access data related to your application's resource consumption, for example, network data volume or database size.
Extension Integration Management
● readExtensionIntegration (Read Extension Integration): Enables you to read integration tokens.
● manageExtensionIntegration (Manage Extension Integration): Enables you to create and delete integration tokens.
Account Management
● readAccount (View Accounts): Enables you to view a list of all subaccounts available to you and access them.
● readCustomPlatformRoles (View Custom Platform Roles): Enables you to list self-defined platform roles.
● manageCustomPlatformRoles (Manage Custom Platform Roles): Enables you to define your own platform roles.
Member Management
● readAccountMembers (View Account Members): Enables you to view a list of members for an individual subaccount.
Note
Assigning this scope to a role requires assigning the readCustomPlatformRoles scope as well.
● manageAccountMembers (Manage Account Members): Enables you to add and remove members for an individual subaccount and to assign user roles to them.
Note
Assigning this scope to a role requires assigning the manageHTML5Applications scope as well. It is planned to remove this requirement in a future release.
Note
Assigning this scope to a role requires assigning the readHTML5Applications scope as well. It is planned to remove this requirement in a future release.
SAP HANA / SAP ASE Service
● readDatabaseInformation (View Database Information): Enables you to view lists of SAP HANA and SAP ASE database systems, databases, and database-related service requests. You can also view information such as the assigned database type, the database version, and data source bindings.
Connectivity Service
● readDestinations (View Destinations): Enables you to view destinations required for communication outside SAP Cloud Platform.
● readSCCTunnels (View SCC Tunnels): Enables you to view the data transmission tunnels used by the Cloud Connector to communicate with back-end systems.
● manageSCCTunnels (Manage SCC Tunnels): Enables you to operate the data transmission tunnels used by the Cloud Connector.
Trust Management
● readTrustSettings (View Trust Settings): Enables you to read trust configurations.
Note
Assigning this scope to a role requires assigning the readAccount and readSubscriptions scopes as well.
OAuth Client Management
● readOAuthSettings (View OAuth Settings): Enables you to view OAuth Application Client settings.
Password Storage
● managePasswords (Manage Passwords): Enables you to set and delete passwords for a given application in the password storage (using the console client).
Keystore Service
● manageKeystores (Manage Keystores): Enables you to manage (create, delete,
list) key stores (using the console cli
ent).
Monitoring Service readMonitoringConfi- Read Monitoring Configura- Enables you to list JMX checks, availa
guration tion bility checks, and alert recipients.
manageMonitoring Manage Monitoring Configu- Enables you to set and update JMX
Configuration ration checks, availability checks, and alert
recipients.
Enterprise Messaging readMessagingService Read Messaging Service Enables you to view details of messag
ing hosts, queues, and applications
bindings to messaging hosts.
manageMessaging Manage Messaging Hosts Enables you to create, edit, and delete
Hosts messaging hosts.
Service Management listServices List Services Enables you to browse through the list
of services and review their status in
your subaccount.
Note
For applying any service-specific
configuration, additional scopes
(e.g. manageDestinations) may be
required.
In the cockpit, you can view information about the usage of the services and applications available to your
subaccount.
To view the resource consumption for a subaccount, open the subaccount in the cockpit and choose Resource
Consumption in the navigation area.
The Resource Consumption view displays information about billed usage in tabular and graph formats.
● The table presents usage values for the selected month for the subaccount. The usage values are broken down
by resource.
● The graph presents usage values for 12 months up to the selected month.
By default, the table displays resource consumption for the current month. You can select an earlier month from
the dropdown box.
Each resource is listed with its associated metrics and service plans, if relevant:
The billed usage of the service is displayed. If the service has associated service plans, a breakdown for each
service plan is provided.
In the Cloud Foundry environment, usage values are further broken down by space.
You can display historical usage information for one or more rows in the table. The selected rows must belong to the same service. For example, you can compare usage between different service plans for a service.
In the Cloud Foundry environment, you can compare usage between spaces.
Example
Cloud Foundry Environment
Your subaccount uses three services. The MongoDB service has two assigned service plans, X-Small and Small.
The X-Small service plan is used in one space in the subaccount.
Example
Neo Environment
Your subaccount has entitlements to three different services. None of these services have associated service
plans. Usage of each service is measured according to a different metric.
Related Information
The End-to-End (E2E) trace analysis is designed to identify and trace user requests that have excessive execution
time within a complex system landscape. It consists of features for performing analyses throughout the complete
solution landscape, so that you can isolate problematic components and identify root causes.
Use
You analyze a trace to check the distribution of the response time over the client, network, and server. The
response time of each component involved in executing the request, and the request path through the
components, are then provided to you for detailed analysis.
For additional information, see Root Cause Analysis and Exception Management.
Prerequisites
Configuring Connections for Collecting Statistical Data for SAP Cloud Platform
You need to configure the connection to the SAP Cloud Platform for retrieving the statistical data. Proceed as
follows, depending on the tool you use:
● for SAP Solution Manager 7.2 SP06 and higher - define the API Service access endpoint as described in Cloud Services Configuration for Hybrid Scenarios.
● for Focused Run for SAP Solution Manager (FRUN) - define the API Service access endpoint as described in Cloud Services Configuration for Hybrid Scenarios.
● for SAP Solution Manager 7.2 SP06 and higher - see Managing Technical System Information and Executing the Configuration Scenarios.
● for Focused Run for SAP Solution Manager (FRUN) - see Managed Systems Preparation & Maintenance Guides and Preparing Managed Systems - SAP NetWeaver Application Server ABAP.
Java Applications
The E2E Tracing for Java applications in SAP Cloud Platform is supported by default. For outgoing connections to
other systems, for example other Java applications in SAP Cloud Platform or on-premise systems, use the
Connectivity Service to ensure the correct forwarding of the SAP-PASSPORT for all outgoing connections
depending on the runtime environment. For more information, see the following tutorials:
Runtime - Tutorial
● Java EE 6 Web Profile: Consuming the Connectivity Service (Java); see also Preparing Managed Systems - SAP NetWeaver.
HTML5 Applications
The E2E tracing and collection of statistics is supported by default for HTML5 applications.
For HTML5 applications started from the SAP Fiori Launchpad, you have to manually activate the collection of
performance statistics for each site. Proceed as follows:
Java Applications
The E2E tracing and collection of data is disabled by default for Java applications and has to be activated on demand. As prerequisites, you need a subaccount with a deployed and started Java application, you must be a member of the subaccount, and you must have the Developer role.
You then receive an activation confirmation with the value true to notify you that the procedure was successful.
To perform the analysis of the E2E tracing, see the dedicated tool documentation at:
● for SAP Solution Manager 7.2 SP06 and higher - Trace Analysis.
● for Focused Run for SAP Solution Manager(FRUN) - Trace Analysis.
When the contract of an SAP Cloud Platform customer ends, SAP is legally obligated to delete all the data in the
customer’s accounts. The data is physically and irreversibly deleted, so that it cannot be restored or recovered by
reuse of resources.
The termination process is triggered when a customer contract expires or a customer notifies SAP that they wish
to terminate their contract.
1. SAP sends a notification email about the expiring contract, stating the termination date on which the account will be closed.
Note
If the contract is renewed before the termination date, the contract termination process is ended.
2. As stated in the notification email that the customer receives, the customer can export their data, or they can
open an incident to download their data before the termination date.
3. On the termination date, the account is closed and a grace period of 30 days begins.
4. During the grace period:
○ Access is blocked to the account, and to deployed and subscribed applications.
○ No data is deleted, and backups are ongoing.
○ The global account tile is displayed in the Global Accounts page of the cockpit with the label Expired.
Clicking the tile displays the number of days left in the grace period.
Related Information
Administrators can define legal links per enterprise global account in the SAP Cloud Platform cockpit.
Prerequisites
You have the Administrator role for the global account for which you'd like to define legal links.
Context
You can define the legal information relevant for a global account so the members of this global account can view
this information.
Note
This feature is available only for enterprise global accounts, not for trial global accounts.
Procedure
1. Choose the global account for which you'd like to make changes.
The links you configured are available in the Legal Information menu.
● How to configure and operate your deployed Java applications: Java: Application Operations [page 1693]
● How to monitor your SAP HANA applications: SAP HANA: Application Operations [page 1731]
● How to monitor the current status of the HTML5 applications in your subaccount: HTML5: Application Operations [page 1732]
● How to securely operate and monitor your cloud applications connected to on-premise systems: Operator's Guide [page 390]
● How to change the default SAP Cloud Platform application URL by configuring custom or platform domains: Configuring Application URLs [page 1743]
● How to enable transport of SAP Cloud Platform applications via CTS+: Change Management with CTS+ [page 1381]
SAP Cloud Platform allows you to achieve isolation between the different application life cycle stages
(development, testing, productive) by using multiple subaccounts.
Prerequisites
● You have developed an application. For more information, see Developing Java Applications [page 1164].
● You have a subaccount in an enterprise account. For more information, see Global Accounts and Subaccounts
[page 10].
Using multiple subaccounts ensures better stability. Also, you can achieve better security for productive
applications because permissions are given per subaccount.
For example, you can create three different subaccounts for one application and assign the necessary amount of
compute unit quota to them:
● dev - use for development purposes and for testing increments in the cloud; you can grant permissions to all application developers
● test - use for testing the developed application and its critical configurations to ensure quality delivery (integration testing and testing in a production-like environment prior to making it publicly available)
● prod - use to run productive applications; give permissions only to operators
You can create multiple subaccounts and assign quota to them either using the console client or the cockpit.
Procedure
Next, you can deploy your application in the newly created subaccount using the Eclipse IDE or the console
client. Then, you can test your application and make it ready for productive use.
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create a new subaccount.
Execute:
Next, you can deploy your application in the newly created subaccount by executing neo deploy -a
<subaccount> -h <host> -b <application name> -s <file location> -u <user name or
email>. Then, you can test your application and make it ready for productive use.
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
Related Information
After you have developed and deployed your Java application on SAP Cloud Platform, you can configure and
operate it using the cockpit, the console client, or the Eclipse IDE.
Content
● Console Client - Update Application Properties [page 1698]: Specify various configurations using commands.
● Eclipse IDE - Configuring Advanced Configurations [page 1193]: Use the options for advanced server and application configurations as well as direct reference to the cockpit UI.
● Cockpit - Define Application Details (Java Apps) [page 1706]: Start, stop, and undeploy applications, as well as start, stop, and disable individual application processes.
● Console Client - start [page 1977]; stop [page 1982]; restart [page 1953]: Manage the lifecycle of a deployed application or individual application processes by executing the respective command.
● Eclipse IDE - Deploy Locally from Eclipse IDE [page 1189]: Start, stop, republish, and perform delta deploy of applications.
● Lifecycle Management API - Start an Application [page 1186]; Stop an Application [page 1188]: Start and stop applications using the Lifecycle Management API.
● Console Client: Register availability checks and JMX checks to receive notifications if the application goes down or responds slowly.
● Monitoring API: Use a REST API to get the state or the metric details of an application and its processes.
Profiling
● Eclipse IDE - Profiling Applications [page 1722]: Analyze resource-related problems in your application.
Logging
● Cockpit: View the logs and change the log settings of any applications deployed in your subaccount.
● Eclipse IDE: View the logs and change the log settings of the applications deployed in your subaccount or on your local server.
● Cockpit - Enable Maintenance Mode for Planned Downtimes [page 1716]; Perform Soft Shutdown [page 1719]: Supports zero downtime and planned downtime scenarios. Disable the application or individual processes in order to shut down the application or processes gracefully.
As an operator, you can configure an SAP Cloud Platform application according to your scenario.
When you are deploying the application using SAP Cloud Platform console client, you can specify various
configurations using the deploy command parameters:
You can scale an application to ensure its ability to handle more requests.
Using the cockpit, you can perform the following identity and access management configuration tasks:
Using the cockpit and the console client, you can configure HTTP, Mail and RFC destinations to make use of them
in your applications:
Using the cockpit and the console client, you can view and download log files of any applications deployed in your
subaccount:
Related Information
You can update a property of an application running on SAP Cloud Platform without redeploying it.
Context
Application properties are configured during deployment with a set of deploy parameters in the SAP Cloud
Platform console client. If you want to change any of these properties (Java version, runtime version, compression,
VM arguments, compute unit size, URI encoding, minimum and maximum application processes) without the need
to redeploy the application binaries, use the set-application-property command. Execute the command
separately for each property that you want to set.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation folder>/
tools).
2. Execute set-application-property specifying the new value of one property that you want to change. For
example, to change the compute unit size to premium, execute:
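A hedged sketch of such a call; the subaccount, application, host, and user values are illustrative placeholders, and the flag names assume the property parameters mirror the deploy parameters (verify against `neo help set-application-property` in your SDK version):

```shell
neo set-application-property --account mysubaccount --application myapp \
  --host hana.ondemand.com --user p1234567 --size premium
```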
3. For the change to take effect, restart your application using the restart command.
Related Information
Applications deployed on SAP Cloud Platform are always started on the latest version of the application runtime
container. This version contains all released fixes, critical patches and enhancements and is respectively the
recommended option for applications. In some special cases, you can choose the version of the runtime container
your application uses by specifying it with the parameter <--runtime-version> when deploying your
application. To change this version, you need to redeploy the application without specifying this parameter.
You have downloaded and configured SAP Cloud Platform console client. For more information, see Set Up the
Console Client [page 1135].
Context
If you want to choose the version of the application runtime container, follow the procedure.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation folder>/
tools).
2. In the console client command line, execute the <list-runtime-versions> command to display all
recommended versions. We recommend that you choose the latest available version.
3. Redeploy your application with parameter <--runtime-version> set to the selected version number.
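The steps above might look like this in practice; all values are illustrative placeholders, and the runtime version number shown is an assumption, not a recommendation:

```shell
# List the recommended runtime versions, then redeploy pinned to one of them
neo list-runtime-versions --host hana.ondemand.com --user p1234567
neo deploy --account mysubaccount --application myapp --host hana.ondemand.com \
  --user p1234567 --source myapp.war --runtime-version 3
```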
Caution
By selecting an older version of the application runtime, you do not get the latest released fixes, critical patches, and enhancements, which may affect the smooth operation and supportability of your application. Consider updating the selected version periodically. Plan updates to the latest version of the application runtime and apply them in your test environment first. Older application runtime versions will be deprecated and expire. Refer to the <list-runtime-versions> command for information.
Related Information
You can choose the Java Runtime Environment (JRE) version used for an application.
Prerequisites
You have downloaded and configured SAP Cloud Platform console client.
For more information, see Set Up the Console Client [page 1135]
Context
The JRE version depends on the type of the SAP Cloud Platform SDK for Neo environment you are using. By
default the version is:
If you want to change this default version, you need to specify the --java-version parameter when deploying the
application using the SAP Cloud Platform console client. Only the version number of the JVM can be specified.
You can use JRE 8 with the Java Web Tomcat 7 runtime (neo-java-web version 2.25 or higher) in productive
accounts.
For applications developed using the SAP Cloud Platform SDK for Neo environment for Java Web Tomcat 7 (2.x),
the default JRE is 7. If you are developing a JSP application using JRE 8, you need to add a configuration in the
web.xml that sets the compiler target VM and compiler source VM versions to 1.8.
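A hedged sketch of such a web.xml fragment; the servlet name, class, and the compilerSourceVM/compilerTargetVM parameter names follow Apache Tomcat's standard Jasper JSP servlet configuration, which this document does not spell out:

```xml
<servlet>
    <servlet-name>jsp</servlet-name>
    <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
    <!-- Compile JSPs as Java 8 source targeting a Java 8 VM -->
    <init-param>
        <param-name>compilerSourceVM</param-name>
        <param-value>1.8</param-value>
    </init-param>
    <init-param>
        <param-name>compilerTargetVM</param-name>
        <param-value>1.8</param-value>
    </init-param>
    <load-on-startup>3</load-on-startup>
</servlet>
```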
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Deploy the application specifying --java-version. For example, to use JRE 7, execute the following command:
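A hedged sketch of such a deploy call; the account, application, host, user, and file values are illustrative placeholders:

```shell
neo deploy --account mysubaccount --application myapp --host hana.ondemand.com \
  --user p1234567 --source myapp.war --java-version 7
```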
For Java Web Tomcat 8, Java version 8 is supported by default, but you can also use Java version 7.
Usage of gzip response compression can optimize the response time and improve interaction with an application
as it reduces the traffic between the Web server and browsers. Enabling compression configures the server to
return zipped content for the specified MIME type and size of the response.
Prerequisites
You have downloaded and configured SAP Cloud Platform console client.
For more information, see Set Up the Console Client [page 1135]
Context
You can enable and configure gzip using some optional parameters of the deploy command in the console client.
When deploying the application, specify the following parameters:
Procedure
If you enable compression but do not specify values for --compressible-mime-type or --compression-min-size,
then the defaults are used: text/html, text/xml, text/plain and 2048 bytes, respectively.
If you specify values for --compressible-mime-type or --compression-min-size but do not enable compression,
then the operation passes, compression is not enabled and you get a warning message.
If you want to enable compression for all responses independently from MIME type and size, use only --
compression force.
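Putting the parameters above together, a hedged sketch of a deploy call enabling compression; the account, host, user, and file values are placeholders, and `--compression on` is assumed from the off/force values mentioned above:

```shell
# Compress JSON responses larger than 1 KiB
neo deploy --account mysubaccount --application myapp --host hana.ondemand.com \
  --user p1234567 --source myapp.war --compression on \
  --compressible-mime-type application/json --compression-min-size 1024
```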
Example
Next Steps
Once enabled, you can disable the compression by redeploying the application without the compression options or
with parameter --compression off.
Related Information
Using SAP Cloud Platform console client, you can configure the JRE by specifying custom VM arguments.
Prerequisites
For more information, see Set Up the Console Client [page 1135]
Context
You can specify two types of VM arguments:
● System properties - used when starting the application process, for example -D<key>=<value>
● Memory arguments - use them to define custom memory settings of your compute units. The supported
memory settings are:
-Xms<size> - set initial Java heap size
-Xmx<size> - set maximum Java heap size
-XX:PermSize - set initial Java Permanent Generation size
-XX:MaxPermSize - set maximum Java Permanent Generation size
Note
We recommend that you use the default memory settings. Change them only if necessary and note that this
may impact the application performance or its ability to start.
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Deploy the application, specifying your desired configurations. For example, if you want to specify a currency
and maximum heap size 1 GiB, then execute the deploy with the following parameters:
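A hedged sketch of such a deploy call, using the -Dcurrency=EUR and -Xmx1024m values from this section; the account, host, user, and file values are illustrative placeholders:

```shell
neo deploy --account mysubaccount --application myapp --host hana.ondemand.com \
  --user p1234567 --source myapp.war --vm-arguments "-Dcurrency=EUR -Xmx1024m"
```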
Note
If you are deploying using the properties file, note that you have to use double quotation marks twice: vm-
arguments=""-Dcurrency=EUR -Xmx1024m"".
This will set the system properties -Dcurrency=EUR and the memory argument -Xmx1024m.
To specify a value that contains spaces (for example, -Dname=John Doe), note that you have to use single
quotation marks for this parameter when deploying.
Related Information
Each application is started on a dedicated SAP Cloud Platform Runtime. One application can be started on one or
many application processes, according to the compute unit quota that you have.
Prerequisites
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Set Up the
Console Client [page 1135].
● Your application can run on more than one application process
Scaling an application ensures its ability to handle more requests, if necessary. Scalability also provides failover
capabilities - if one application process crashes, the application will continue to work. First, when deploying the
application, you need to define the minimum and maximum number of application processes. Then, you can scale
the application up and down by starting and stopping additional application processes. In addition, you can also
choose the compute unit size, which provides a certain central processing unit (CPU), main memory and disk
space.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Deploy the application, specifying --minimum-processes and --maximum-processes. The --minimum-
processes parameter defines the number of processes on which the application is started initially. Make sure it
is at least 2.
4. You can now scale the application up by executing the start command again. Each new execution starts another application process. You can repeat this until you reach the maximum number of application processes you defined, within the quota you have purchased.
5. If for some reason you need to scale the application down, you can stop individual application processes by
using soft shutdown. Each application process has a unique process ID that you can use to disable and stop
the process.
a. List all application processes with their attributes (ID, status, last change date) by executing neo status
and identify the application process you want to stop.
b. Execute neo disable for the application process you want to stop.
You can also scale your application vertically by choosing the compute unit size on which it will run after the
deploy. You can choose the compute unit size by specifying the --size parameter when deploying the application.
For example, if you have an enterprise account and have purchased a package with Premium edition compute
units, then you can run your application on a Premium compute unit size, by executing
Related Information
For an overview of the current status of the individual applications in your subaccount, use the cockpit. It provides
key information in a summarized form and allows you to initiate actions, such as starting, stopping, and
undeploying applications.
Related Information
You can view details about your currently selected Java application. By adding a suitable display name and a
description, you can identify the application more easily.
Context
In the overview of a Java application in the cockpit, you can add and edit the display name and description for the
Java application as needed.
● Display name - a human-readable name that you can specify for your Java application and change it later on, if
necessary.
● Description - a short descriptive text about the Java application, typically stating what it does.
Procedure
You can directly start, stop, and undeploy applications, as well as start, stop, and disable individual application
processes.
Context
An application can run on one or more application processes. The use of multiple processes allows you to
distribute application load and provide failover capability. The number of processes that you can start depends on
the compute unit quota available to your global account and how an individual application has been configured. If
you reach the maximum, increase the maximum number of processes first before you can start another process.
Note
While an application name is assigned manually and is unique in a subaccount, an application process ID is
generated automatically whenever a new process is started and is unique across the cloud platform.
Procedure
1. Open the subaccount in the cockpit and choose Applications Java Applications in the navigation area.
2. You have the following options for the applications in the list:
To... Choose...
Data source bindings are not deleted. To delete all data source bindings created for this
application, select the checkbox.
Note
Bound databases and schemas will not be deleted. You can delete database and
schema bindings using the Databases & Schemas panel.
3. To choose an action for an application process, click the relevant application's name in the list to go to the application overview page.
To... Choose...
Related Information
The status of an individual process is based on values that reflect the process run state and its monitoring metrics.
Procedure
This takes you to the overview page for the selected application.
The Processes panel shows the number of running processes and the overall state for the metrics as follows:
State
○ Started
○ Started (Disabled)
○ Starting
○ Stopping
○ Application Error
○ Infrastructure Error
Metric
○ OK
○ Warning (also shown for intermediate states)
○ Critical
○ Pending
3. Choose Monitoring Processes in the navigation area to go to the process overview to view the status
summary and further details:
● Status Summary: Displays the current values of the two status categories and the runtime version. A short text summarizes any problems that have been detected.
● State: Indicates whether the process has been started or is transitioning between the Started and Stopped states. The Error state indicates a fault, such as server unavailability, timeout, or VM failure.
● Runtime: Shows the runtime version on which the application process is running and its current status:
○ OK: Still within the first three months since it was released
○ No longer recommended: Has exceeded the initial three-month period
○ Expired: 15 months since its release date
Context
This page describes the format of the Default Trace file. You can view this file for your Web applications via the
cockpit and the Eclipse IDE.
For more information, see Investigating Performance Issues Using the SQL Trace [page 884] and
● RECORD_SEPARATOR: ASCII symbol for separating the log records. In our case, it is "|" (ASCII code: 124).
● ESC_CHARACTER: ASCII symbol for escape. In our case, it is "\" (ASCII code: 92).
● SEVERITY_MAP: Mapping of log severities to display values: FINEST|Information|FINER|Information|FINE|Information|CONFIG|Information|DEBUG|Information|PATH|Information|INFO|Information|WARNING|Warning|ERROR|Error|SEVERE|Error|FATAL|Error.
● A human-readable string;
● For new accounts, it is shorter than the tenant ID (8-30 characters);
● Unique for the relevant SAP Cloud Platform landscape;
● Equal to the account name (for new accounts); might be equal to the tenant ID (for old accounts).
Example
In this example, the application has been accessed on behalf of two tenants - with identifiers 42e00744-
bf57-40b1-b3b7-04d1ca585ee3 and 5c42eee4-d5ad-494e-9afb-2be7e55d0f9c.
FILE_TYPE:DAAA96DE-B0FB-4c6e-AF7B-A445F5BF9BE2
FILE_ID:1391169413918
ENCODING:[UTF8|NWCJS:ASCII]
RECORD_SEPARATOR:124
COLUMN_SEPARATOR:35
ESC_CHARACTER:92
COLUMNS:Time|TZone|Severity|Logger|ACH|User|Thread|Bundle name|JPSpace|JPAppliance|
JPComponent|Tenant Alias|Text|
SEVERITY_MAP:FINEST|Information|FINER|Information|FINE|Information|CONFIG|
Information|DEBUG|Information|PATH|Information|INFO|Information|WARNING|Warning|
ERROR|Error|SEVERE|Error|FATAL|Error
HEADER_END
2014 01 31 12:07:09#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-bio-8041-
exec-1##myaccount#myapplication#web#null#null#myaccount#The app was accessed on
behalf of tenant with ID: '42e00744-bf57-40b1-b3b7-04d1ca585ee3'|
2014 01 31 12:08:30#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-bio-8041-
exec-3##myaccount#myapplication#web#null#null#subscriberaccount#The app was
accessed on behalf of tenant with ID: '5c42eee4-d5ad-494e-9afb-2be7e55d0f9c'|
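The header fields above are enough to split a record into its columns. A minimal sketch in Python, assuming the separator and escape semantics stated in the header (the column mapping beyond the first few fields is not spelled out in this document):

```python
# Separator values from the trace file header above
RECORD_SEPARATOR = chr(124)  # "|"
COLUMN_SEPARATOR = chr(35)   # "#"
ESC_CHARACTER = chr(92)      # "\"

def split_unescaped(text, sep):
    """Split text on sep, treating ESC_CHARACTER as an escape for the next char."""
    parts, current, escaped = [], [], False
    for ch in text:
        if escaped:
            current.append(ch)
            escaped = False
        elif ch == ESC_CHARACTER:
            escaped = True
        elif ch == sep:
            parts.append("".join(current))
            current = []
        else:
            current.append(ch)
    parts.append("".join(current))
    return parts

# First sample record from the example above (record separator stripped)
record = ("2014 01 31 12:07:09#+00#INFO#"
          "com.sap.demo.tenant.context.TenantContextServlet##anonymous#"
          "http-bio-8041-exec-1##myaccount#myapplication#web#null#null#"
          "myaccount#The app was accessed on behalf of tenant with ID: "
          "'42e00744-bf57-40b1-b3b7-04d1ca585ee3'")
columns = split_unescaped(record, COLUMN_SEPARATOR)
```

Per the COLUMNS header, the first three fields are Time, TZone, and Severity, and the last field is the log text.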
Related Information
View information about the application runtime. SAP Cloud Platform provides a set of runtimes. You can choose
the application runtime during application deployment.
Context
The runtime is assigned either by default or explicitly set when an application is deployed. If a version is not
specified during deployment, the major runtime version is determined automatically based on the SDK that is
used to deploy the application. By default, applications are deployed with the latest minor version of the respective
major version.
You are strongly advised to use the default version, since this contains all released fixes and critical patches,
including security patches. Override this behavior only in exceptional cases by explicitly setting the version, but
note that this is not recommended practice.
Procedure
1. In the cockpit, choose Java Applications in the navigation area and then select the relevant application in the
list.
The Runtime panel provides the following information:
○ The exact runtime version on which the process has been started (major, minor, micro, and nano
versions).
○ The date until when this runtime version is recommended for use, or whether it is no longer recommended
or has expired (also indicated by a runtime version status icon).
Related Information
If you are an application operator and need to deploy a new version of a productive application or perform
maintenance, you can choose among several approaches.
Note
In all cases, first test your update in a non-productive environment. The newly deployed version of the
application overwrites the old one and you cannot revert to it automatically. You have to redeploy the old version
to revert the changes, if necessary.
SAP Cloud Platform provides the following approaches for updating an application:
Zero Downtime
Use: When your new application version is backward compatible with the old version - that is, the new version of
the application can work in parallel with the already running old application version.
Steps: Deploy a new version of the application and disable and enable processes in a rolling manner. For an
automated execution of the same procedure, use the rolling-update command.
See Update Applications with Zero Downtime [page 1714] and rolling-update [page 1960].
Planned Downtime
Description: Shows a custom maintenance page to end users. The application is automatically disabled.
Use: When the new version is backward incompatible - that is, running the old and the new version in parallel may
lead to inconsistent data or erroneous output.
Steps: Enable maintenance mode to redirect new connections to the maintenance application. Deploy and start
the new application version and then disable maintenance mode.
Soft Shutdown
Description: Supports zero downtime and planned downtime scenarios. Disabled applications/processes stop
accepting new connections from users, but continue to serve already running connections.
Use: As part of the zero downtime scenario or to gracefully shut down your application during a planned downtime
(without maintenance mode).
Steps: Disable the application (console client only) or individual processes (console client or cockpit) in order to
shut down the application or processes gracefully.
Related Information
The platform allows you to update an application in a manner in which the application remains operable all the
time and your users do not experience downtime.
Prerequisites
Context
Each application runs on one or more dedicated application processes. You can start one or more application processes at any given time, according to the compute unit quota that you have. Each process has a unique process ID that you can use to stop it. To update an application non-disruptively for users, you handle individual processes rather than the application as a whole. The procedure below describes the manual steps to execute a zero downtime update. Use it if you want more control over the individual steps, for example to allow a different timeout for each application process before stopping it. For an automated execution of the same procedure, use the rolling-update command. For more information, see rolling-update [page 1960].
Note
Not applicable to hanatrial.ondemand.com.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Identify the IDs of the application processes that are running the old version of the application (for example, by checking the application status with the console client or in the cockpit).
3. Deploy the new version of your application on SAP Cloud Platform by executing <neo deploy> with the
appropriate parameters.
Note that to execute the update, you need to start one additional application process with the new version. Therefore, make sure you have configured a high enough maximum number of processes for the application (at least one higher than the number of old processes that are running). If you have already reached the quota for your subaccount, stop one of the running processes before proceeding.
4. Start a new application process which is running the new version of the application by executing <neo
start>.
5. Use soft shutdown for the application process running the old version of the application:
a. Execute <neo disable> using the ID you identified in Step 2. This command stops the creation of new
connections to the application from new end users, but keeps the already running ones alive.
b. Wait for some time so that all working sessions finish. You can monitor user requests and used resources
by configuring JMX checks, or, you can just wait for a given time period that should be enough for most of
the sessions to finish.
c. Stop the application process by executing <neo stop> using the <application-process-id>
parameter.
6. (Optional) Make sure the application process is stopped by checking its status, using the <application-process-id> parameter.
7. If the application is running on more than one application process, repeat steps 4 and 5 until all processes running the old version are stopped and the corresponding number of processes running the new version are started.
Example
For example, if your application runs on two application processes, you need to perform the following steps:
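The two-process sequence can be sketched with console client commands as follows. This is a sketch only: the host, subaccount, application, file, and process ID values are placeholders, and the exact parameters should be checked against the command documentation:

```shell
# 1. Deploy the new version (the running old processes keep serving traffic)
neo deploy -h hana.ondemand.com -a mysubaccount -b myapp -s myapp-v2.war -u myuser

# 2. Start one process with the new version, then soft-shut-down the first
#    old process once its sessions have drained
neo start   -h hana.ondemand.com -a mysubaccount -b myapp -u myuser
neo disable -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
  --application-process-id <old-process-id-1>
neo stop    -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
  --application-process-id <old-process-id-1>

# 3. Repeat for the second old process
neo start   -h hana.ondemand.com -a mysubaccount -b myapp -u myuser
neo disable -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
  --application-process-id <old-process-id-2>
neo stop    -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
  --application-process-id <old-process-id-2>
```

The rolling-update command automates exactly this disable/start/stop cycle.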
Related Information
An operator can start and stop planned application downtime, during which a customized maintenance page for
that application is shown to end users.
Prerequisites
To redirect an application, you need a maintenance application. A maintenance application replaces your
application for a temporary period and can be as simple as a static page or have more complex logic. You need to
provide the maintenance application yourself and ensure that it meets the following conditions:
● It is a Java application.
● It is deployed in the same subaccount as your application.
● It has been started, that is, it is up and running.
● It must not be in maintenance itself.
● Its context path must be the same as the context path of the original application.
Note
Not applicable to hanatrial.ondemand.com.
Cockpit
Context
You can enable the maintenance mode for an application from the overview page for the application. An
application can be put into maintenance mode only if it is not being used as a maintenance application itself and is
running (Started state).
Procedure
1. Log on to the cockpit, select a subaccount and choose Applications Java Applications in the navigation
area.
2. Click the application's name in the list to open the application overview page and in the Application
Maintenance section choose (Start Maintenance).
3. In the dialog box, select the application that will serve as the maintenance application and choose Set Selected
Application. In the application list, the application’s state is now shown as Started (In Maintenance).
From this point on, new connections will be redirected to the maintenance application. All active connections
will still be handled until the application is stopped.
4. Optional: To view the details in the Application Maintenance section, select your application in the list.
The following details confirm that your application is in maintenance mode:
○ In Maintenance
○ A link to the assigned maintenance application: Click the link to open the overview page for this
application.
Results
Note that HTTP requests from already active sessions are still routed to the original application, if possible. This approach makes sure that end users can complete their work without noticing the application downtime. Only new HTTP requests are redirected to the maintenance application.
To disable maintenance mode, choose (Switch maintenance mode off). Before doing so, ensure that your application is up and running to avoid end users experiencing HTTP errors.
Console Client
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Start the planned application downtime by executing <neo start-maintenance> in the command line. This stops traffic to the application and registers a maintenance page application. All active connections will still be handled until the application is stopped.
If you want to have access to an application during maintenance, use the --direct-access-code
parameter. For more information, see start-maintenance [page 1980].
3. Perform the planned maintenance, update, or configuration of your application:
a. Before stopping the application, wait for the working sessions to finish. You can wait for a given time period that should be enough for most of the sessions to finish, or configure JMX checks to monitor user requests and used resources.
b. Stop the application by executing <neo stop> with the appropriate parameters.
4. Stop the planned application downtime by executing <neo stop-maintenance> in the command line. This
resumes traffic to the application and the maintenance page application stops handling incoming requests.
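The console client sequence above might look like the following sketch. The host, subaccount, application, and user values are placeholders, and the parameter for naming the maintenance application (shown here as --maintenance-app) should be verified against start-maintenance [page 1980]:

```shell
# Start planned downtime: new traffic is redirected to the maintenance app
neo start-maintenance -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
  --maintenance-app mymaintenanceapp

# ...wait for sessions to drain, stop the app, perform the update, restart...

# End planned downtime: traffic is routed back to the application
neo stop-maintenance -h hana.ondemand.com -a mysubaccount -b myapp -u myuser
```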
Soft shutdown enables an operator to stop an application or application process in a way that no data is lost. Using
soft shutdown gives sufficient time to finish serving end user requests or background jobs.
Prerequisites
Context
Using soft shutdown, an operator can restart the application (for example, in order to update it) in a way that end
users are not disturbed. First, the application process is disabled. This means that requests by users that already
have open connections to this process will be processed, but new requests will not reach this application process
anymore. After the application process is disabled and remaining sessions processed, it can be stopped by the
operator.
Cockpit
Context
You can disable application processes in the Processes panel on the application dashboard or the State panel on
the process dashboard.
Procedure
1. Log on to the cockpit, select a subaccount and choose Applications Java Applications in the navigation area.
2. Click the application's name in the list to open the application dashboard.
3. In the Processes panel, choose (Disable process) in the relevant row. The process state changes to Started (disabled).
Note
You can also select the process and disable it from the process dashboard.
4. Wait for some time so that all working sessions finish and then stop the process.
Related Information
Console Client
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Disable processing of requests from new users to the application by executing <neo disable> with the
appropriate parameters. If you want to stop requests to a specific application process only and not to the
whole application, add the <--application-process-id> parameter.
If you disable the entire application, or all processes of the application, then new users requesting the
application will not be able to access it and will get an error.
3. Wait for some time so that all working sessions finish.
You can monitor user requests and used resources by configuring JMX checks, or, you can just wait for a given
time period that should be enough for most of the sessions to finish.
4. Stop the application by executing <neo stop> with the appropriate parameters. If you want to terminate a specific application process only and not the whole application, add the <--application-process-id> parameter.
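As a sketch, the soft shutdown of a single process could look as follows; the host, subaccount, application, and process ID values are placeholders:

```shell
# Disable one process (omit --application-process-id to disable the whole app)
neo disable -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
  --application-process-id <process-id>

# ...wait for the running sessions to finish (or monitor via JMX checks)...

# Stop the now-drained process
neo stop -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
  --application-process-id <process-id>
```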
Related Information
In the event of unplanned downtime when there is no application process able to serve HTTP requests, a default
error is shown to users. To prevent this, an operator can configure a custom downtime page using a downtime
application, which takes over the HTTP traffic if an unplanned downtime occurs.
Prerequisites
Note
Not applicable to hanatrial.ondemand.com.
● You have downloaded and configured the console client. We recommend that you use the latest SDK. For more
information, see Set Up the Console Client [page 1135]
● You have deployed and started your own downtime application in the same SAP Cloud Platform subaccount as
the application itself.
● The downtime application must be developed so that it returns HTTP status code 503. This is especially important if availability checks are configured for the original application, so that unplanned downtimes are properly detected.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Configure the downtime application by executing neo set-downtime-app in the command line.
3. (Optional) If the downtime page is no longer needed (for example, if the original application has been undeployed), you can remove it by executing the clear-downtime-app command.
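A sketch of both commands follows; the host, subaccount, application, and user values are placeholders, and the parameter naming the downtime application (shown here as --downtime-app) should be checked against the command documentation:

```shell
# Register the downtime application for the original application
neo set-downtime-app -h hana.ondemand.com -a mysubaccount -b myapp -u myuser \
  --downtime-app mydowntimeapp

# Remove the downtime page when it is no longer needed
neo clear-downtime-app -h hana.ondemand.com -a mysubaccount -b myapp -u myuser
```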
Related Information
The SAP JVM Profiler helps you analyze resource-related problems in your Java application regardless of whether
the JVM is running locally or on the cloud.
Typically, you first profile the application locally. Then you may continue and profile it on the cloud as well.
Features
Allocation Trace: Shows the number, size, and type of the allocated objects and the methods allocating them.
Performance Hotspot Trace: Shows the most time-consuming methods and execution paths.
Garbage Collection Trace: Shows all details about the processed garbage collections.
Synchronization Trace: Shows the most contended locks and the threads waiting for or holding them.
File I/O Trace: Shows the number of bytes transferred from or to files and the methods transferring them.
Class Statistic: Shows the classes, and the number and size of their objects currently residing in the Java heap generations.
Tasks
Related Information
Overview
After you have created a Web application and verified that it is functionally correct, you may want to inspect its runtime behavior by profiling the application.
Prerequisites
● You have developed and deployed a Web application using the Eclipse IDE. For more information, see
Deploying and Updating Applications [page 1175].
● You have installed SAP JVM as the runtime for the local server. For more information, see Set Up SAP JVM in
Eclipse IDE [page 1133]
Procedure
Note
Profiling works only with SAP JVM. If another VM is used, choosing Profile opens a dialog that offers two options: edit the configuration or cancel the operation.
Result
You have successfully started a profiling run of a locally deployed Web application. You can now trigger your workload, create snapshots of the profiling data, and analyze the profiling results.
When you have finished with your profiling session, you can stop it either by disconnecting the profiling session
from the Profile view or by restarting the server.
Related Information
Refer to the SAP JVM Profiler documentation for details about the available analysis options. The documentation
is available as part of the SAP JVM Profiler plugin in the Eclipse IDE and can be found via Help Help Contents
SAP JVM Profiler .
After you have created a Web application and verified that it is functionally correct, you may want to inspect its
runtime behavior by profiling the application on the cloud. It is best if you first profile the Web application locally.
Prerequisites
● You have developed and deployed a Web application using the Eclipse IDE. For more information, see
Deploying and Updating Applications [page 1175]
● Optional: You have profiled your Web application locally. For more information, see Profile Applications Locally
[page 1723]
Note
Currently, it is only possible to profile Web applications on the cloud that have exactly one application process
(node).
Procedure
○ From the server context menu, choose Profile (if the server is stopped) or Restart in Profile (if the server is
running).
○ Go to the application source code and from its context menu, choose Profile As Profile on Server .
3. Open the Profiling perspective.
Results
You have successfully initiated a profiling run of a Web application on the cloud. Now, you can trigger your
workload, create snapshots of the profiling data and analyze the profiling results.
When you have finished with your profiling session, you can stop it either by disconnecting the profiling session
from the Profile view or by restarting the server.
Refer to the SAP JVM Profiler documentation for details about the available analysis options. The documentation
is available as part of the SAP JVM Profiler plugin in the Eclipse IDE and you can find it via Help Help Contents
SAP JVM Profiler .
Context
This page describes the format of the Default Trace file. You can view this file for your Web applications via the
cockpit and the Eclipse IDE.
For more information, see Investigating Performance Issues Using the SQL Trace [page 884].
Parameter Description
RECORD_SEPARATOR ASCII symbol for separating the log records. In our case, it is "|" (ASCII code: 124).
ESC_CHARACTER ASCII symbol for escape. In our case, it is "\" (ASCII code: 92).
SEVERITY_MAP Mapping of log levels to severities: FINEST|Information|FINER|Information|FINE|Information|CONFIG|Information|DEBUG|Information|PATH|Information|INFO|Information|WARNING|Warning|ERROR|Error|SEVERE|Error|FATAL|Error
HEADER_END Marks the end of the header.
Besides the main log information, the Default Trace logs information about the tenant users that have accessed a relevant Web application. This information is provided in the new Tenant Alias column, which is automatically logged by the runtime. The Tenant Alias is:
● A human-readable string;
● For new accounts, it is shorter than the tenant ID (8-30 characters);
● Unique for the relevant SAP Cloud Platform landscape;
● Equal to the account name (for new accounts); might be equal to the tenant ID (for old accounts).
Example
In this example, the application has been accessed on behalf of two tenants, with identifiers 42e00744-bf57-40b1-b3b7-04d1ca585ee3 and 5c42eee4-d5ad-494e-9afb-2be7e55d0f9c.
FILE_TYPE:DAAA96DE-B0FB-4c6e-AF7B-A445F5BF9BE2
FILE_ID:1391169413918
ENCODING:[UTF8|NWCJS:ASCII]
RECORD_SEPARATOR:124
COLUMN_SEPARATOR:35
ESC_CHARACTER:92
COLUMNS:Time|TZone|Severity|Logger|ACH|User|Thread|Bundle name|JPSpace|JPAppliance|
JPComponent|Tenant Alias|Text|
SEVERITY_MAP:FINEST|Information|FINER|Information|FINE|Information|CONFIG|
Information|DEBUG|Information|PATH|Information|INFO|Information|WARNING|Warning|
ERROR|Error|SEVERE|Error|FATAL|Error
HEADER_END
2014 01 31 12:07:09#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-bio-8041-
exec-1##myaccount#myapplication#web#null#null#myaccount#The app was accessed on
behalf of tenant with ID: '42e00744-bf57-40b1-b3b7-04d1ca585ee3'|
2014 01 31 12:08:30#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-bio-8041-
exec-3##myaccount#myapplication#web#null#null#subscriberaccount#The app was
accessed on behalf of tenant with ID: '5c42eee4-d5ad-494e-9afb-2be7e55d0f9c'|
Related Information
SAP Cloud Platform allows you to achieve isolation between the different application life cycle stages
(development, testing, productive) by using multiple subaccounts.
Prerequisites
● You have developed an application. For more information, see Developing Java Applications [page 1164].
● You have a subaccount in an enterprise account. For more information, see Global Accounts and Subaccounts
[page 10].
Context
Using multiple subaccounts ensures better stability. Also, you can achieve better security for productive
applications because permissions are given per subaccount.
For example, you can create three different subaccounts for one application and assign the necessary amount of
compute unit quota to them:
● dev - use for development purposes and for testing increments in the cloud; you can grant permissions to all application developers
● test - use for testing the developed application and its critical configurations to ensure quality delivery (integration testing and testing in a productive-like environment before making it publicly available)
● prod - use to run productive applications; grant permissions only to operators.
You can create multiple subaccounts and assign quota to them either using the console client or the cockpit.
Procedure
Next, you can deploy your application in the newly created subaccount using the Eclipse IDE or the console
client. Then, you can test your application and make it ready for productive use.
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create a new subaccount.
Next, you can deploy your application in the newly created subaccount by executing neo deploy -a
<subaccount> -h <host> -b <application name> -s <file location> -u <user name or
email>. Then, you can test your application and make it ready for productive use.
You can transfer the application from one subaccount to another by redeploying it in the respective
subaccount.
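A sketch of the console client flow follows. The command name and parameters are illustrative (check neo help for the exact syntax), and the host, user, and file values are placeholders:

```shell
# Create a subaccount (for example, a dedicated "dev" stage)
neo create-account -h hana.ondemand.com -u myuser --display-name dev

# Deploy the application into the newly created subaccount
neo deploy -h hana.ondemand.com -a <new-subaccount> -b myapp -s myapp.war -u myuser
```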
Related Information
After you have developed and deployed your SAP HANA XS application, you can then monitor it.
Cockpit Configure Availability Checks for SAP HANA XS Applications from the Cockpit [page 1248]
Console Client Configure Availability Checks for SAP HANA XS Applications from the Console Client [page 1249]
For an overview of the current status of the individual HTML5 applications in your subaccount, use the SAP Cloud
Platform cockpit.
It provides key information in a summarized form and allows you to initiate actions, such as starting or stopping.
Managing Destinations
Related Information
You can export HTML5 applications either with their active version or with an inactive version.
Procedure
1. Choose Applications HTML5 Applications in the navigation area, and then the link to the application you
want to export.
Procedure
1. Choose Applications HTML5 Applications in the navigation area, and then the link to the application you
want to export.
2. Choose Versioning in the navigation area, and then choose Versions under History.
3. In the table row of the version you want to export, choose the export icon ( ).
4. Save the zip file.
You can import HTML5 applications either by creating a new application or by creating a new version for an existing application.
Note
When you import an application or a version, the version is not imported into the master branch of the repository. Therefore, the version is not visible in the history of the master branch; you have to switch to Versions in the navigation area.
Procedure
1. To upload a zip file, choose Applications HTML5 Applications in the navigation area, and then Import
from File ( ).
2. In the Import from File dialog, browse to the zip file you want to upload.
3. Enter an application name and a version name.
4. Choose Import.
The new application you created by importing the zip file is displayed in the HTML5 Applications section.
5. To activate this version, see Activate a Version [page 1269].
Procedure
1. Choose Applications HTML5 Applications in the navigation area, and then the application for which you
want to create a new version.
2. Choose Versioning in the navigation area.
3. To upload a zip file, choose Versions under History and then Import from File ( ).
4. In the Import from File dialog, browse to the zip file you want to upload.
5. Enter a version name.
6. Choose Import.
The new version you created by importing the zip file is displayed in the History table.
7. To activate this version, select the Activate this application version icon ( ) in the table row for this version.
8. Confirm that you want to activate the application.
On the Application Details panel, you can add or change a display name and a description for the selected HTML5 application.
Context
If a display name is maintained, this display name is also shown in the list of HTML5 applications and in the list of
HTML5 subscriptions instead of the application name.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. Choose Applications HTML5 Applications in the navigation area, and select the application for which to
add or change a display name and description.
3. Under Application Details of the Overview section, choose Edit.
4. Enter a display name and a description for the HTML5 application.
Field Comment
Display Name Human-readable name that you can specify for your HTML5 application.
Description Short descriptive text about the HTML5 application, typically stating what it
does.
An HTML5 application can have multiple versions, but only one of these can be active. This active version is then
available to end-users of the application.
However, developers can access all versions of an application using unique URLs for testing purposes.
The Versioning view in the cockpit displays the list of available versions of an HTML5 application. Each version is
marked either as active or inactive. You can activate an inactive version using the activation button.
For every version, the required destinations are displayed in a details table. To assign a destination from your
subaccount global destinations to a required destination, choose Edit in the details table. By default, the
destination with the same name as the name you defined for the route in the application descriptor is assigned. If
this destination does not exist, you can either create the destination or assign another one.
When you activate a version, the destinations that are currently assigned to this version are copied to the active
application version.
If an HTML5 application requires connectivity to one or more back-end systems, destinations must be created or
assigned.
Prerequisites
Context
For the active application version the referenced destinations are displayed in the HTML5 Application section of the
cockpit. For a non-active application version the referenced destinations are displayed in the details table in the
Versioning section. HTML5 applications use HTTP destinations, which can be defined on the level of your
subaccount.
By default, the destination with the same name as the name you defined for the route in the application descriptor
is assigned. If this destination does not exist, you can create the destination with the same name as described in
Configure Destinations from the Cockpit [page 108]. Then you can assign this newly created destination.
Alternatively, you can assign another destination that already exists in your subaccount. To assign a destination,
follow the steps below.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
2. Choose Applications HTML5 Applications in the navigation area, and choose the application for which
you want to assign a different destination (than the default one) from your subaccount global destinations.
3. Choose Edit in the Required Destinations table.
4. In the Mapped Subaccount Destinations column, choose an existing destination from the dropdown list.
End users can only access an application if the application is started. As long as an application is stopped, its end
user URL does not work.
Context
The first start of the application usually occurs when you activate a version of the application. For more
information, see Activating a Version.
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
The end user URL for the application is displayed under Active Version.
Related Information
Resources of an HTML5 application can be protected by permissions. The application developer defines the
permissions in the application descriptor file.
To grant a user the permission to access a protected resource, you can either assign a custom role or one of the
predefined virtual roles to such a permission. The following predefined virtual roles are available:
AccountDeveloper and AccountAdministrator require SAP IdP to be configured as the identity provider. If you want to use the AccountDeveloper or AccountAdministrator role together with a custom IdP, create those roles as custom roles and assign the corresponding users manually.
The role assignments are only effective for the active application version. To protect non-active application versions, the default permission NonActiveApplicationPermission is defined by the system for every HTML5 application.
As long as no other role is assigned to a permission, only subaccount members with developer or administrator permission have access to the protected resource. This is also true for the default permission NonActiveApplicationPermission.
You can create roles in the cockpit using either of these panels:
Note
An HTML5 application’s own permissions also apply when the application is reached from another HTML5
application (see Accessing Application Resources [page 1279]). Previously, only the permissions of the HTML5
application that was accessed first were considered. If you need time to assign the proper roles, you can
temporarily switch back to the previous behavior by unchecking Always Apply Permissions in the cockpit.
Related Information
You can manage roles and permissions for the HTML5 applications or subscriptions using the HTML5 Applications
panel.
You create roles that are assigned to HTML5 applications or HTML5 applications subscriptions. The roles are
available for all HTML5 applications and all subscriptions to HTML5 applications.
Procedure
Prerequisites
● If you want to use groups, you have configured the groups for your identity provider as described in Application
Identity Provider [page 2161].
Context
Since all HTML5 applications and all HTML5 application subscriptions use the same roles, changing a role affects
all applications that use this role.
Procedure
Once you have created the required roles, you can assign the roles to the permissions of your HTML5 application
or of your HTML5 application subscription to an HTML5 application.
Procedure
You can manage roles and permissions for the HTML5 applications or subscriptions using the Subscriptions panel.
You create roles that are assigned to HTML5 applications or HTML5 applications subscriptions. The roles are
available for all HTML5 applications and all subscriptions to HTML5 applications.
Procedure
Prerequisites
● If you want to use groups, you have configured the groups for your identity provider as described in Application
Identity Provider [page 2161].
Context
Since all HTML5 applications and all HTML5 application subscriptions use the same roles, changing a role affects
all applications that use this role.
Procedure
Once you have created the required roles, you can assign the roles to the permissions of your HTML5 application
or of your HTML5 application subscription to an HTML5 application.
Procedure
You can view logs for any HTML5 application running in your subaccount, or for subscriptions to these applications. Currently, only the default trace log file is written. The file contains error messages caused by missing back-end connectivity (for example, a missing destination) or logon errors caused by your subaccount configuration.
Context
One log file is written per day. The logs are kept for 7 days before they are deleted. If the application is deleted, the logs are deleted as well. A log is a virtual file consisting of the aggregated logs of all processes. Currently, the following data is logged:
● The time stamp (date, time in milliseconds, time zone) of when the error occurred
● A unique request ID
● The log level (currently only ERROR is available)
● The actual error message text
Procedure
1. Log on with a user (who is a subaccount member) to the SAP Cloud Platform cockpit.
By default, all applications running on SAP Cloud Platform are accessed on the hana.ondemand.com domain.
According to your needs, you can change the default application URL by configuring application domains different
from the default one: custom or platform domains.
You can configure application domains using SAP Cloud Platform console client.
Note that you can use either platform domains or custom domains.
Custom Domains
Use custom domains if you want to make your applications accessible on your own domain different from
hana.ondemand.com - for example, www.myshop.com. When a custom domain is used, the domain name as well
as the server certificate for this domain are owned by the customer.
Platform Domains
Caution
You can configure different platform domains only for Java applications.
By default, applications accessible on hana.ondemand.com are available on the Internet. Platform domains enable
you to use additional features by using a platform URL different from the default one.
For example, you can use svc.hana.ondemand.com to hide the application from the Internet and access it only from other applications running on SAP Cloud Platform, or cert.hana.ondemand.com if you want an application to use client-certificate authentication with the relevant SSL connection settings. The application URLs will then be https://demomyshop.svc.hana.ondemand.com or https://demomyshop.cert.hana.ondemand.com, respectively.
Related Information
SAP Cloud Platform allows subaccount owners to make their SAP Cloud Platform applications accessible via a
custom domain that is different from the default one (hana.ondemand.com) - for example www.myshop.com.
Prerequisites
To use a custom domain for your application, you must fulfill a number of preliminary steps. For more
information about these steps, see Prerequisites [page 1745].
Scenario
After fulfilling the prerequisites, you can configure the custom domain on your own using SAP Cloud Platform
console client commands.
First, set up secure SSL communication to ensure that your domain is trusted and all application data is protected.
Then, route the traffic to your application:
1. Create an SSL Host [page 1747] - the host holds the mapping between your chosen custom domain and the
application on SAP Cloud Platform as well as the SSL configuration for secure communication through this
custom domain.
2. Upload a Certificate [page 1748] - it will be used as a server certificate on the SSL host.
3. Bind the Certificate to the SSL Host [page 1750].
4. Add the Custom Domain [page 1750] - this maps the custom domain to the application URL.
5. Configure DNS [page 1751] - you can create a CNAME mapping.
6. Configure Single Sign-On [page 1752] - if you have a custom trust configuration in your subaccount, you need
to enable single logout.
The configuration of custom domains has different setups related to the subscriptions of your subaccount. For
more information about custom domains for applications that are part of a subscription, see Custom Domains for
Multitenant Applications [page 1754].
You need to have a quota for domains configured for your global account. One custom domain quota corresponds
to one SSL host that you can use. For more information, see Purchase a Customer Account [page 935].
The following two steps involve external service providers: a domain name registrar and a certificate authority.
Note
The domain name and the server certificate for this domain are issued by external authorities and owned by the
customer.
You need to come up with a list of custom domains and applications that you want to be served through them. For
example, you may decide to have three custom domains: test.myshop.com, preview.myshop.com,
www.myshop.com - for test, preview, and productive versions of your SAP Cloud Platform application.
The domain names are owned by the customer, not by SAP Cloud Platform. Therefore, you will need to buy the
custom domain names that you have chosen from a registrar selling domain names.
To make sure that your domain is trusted and all your application data is protected, you have to get an appropriate
SSL certificate from a Certificate Authority (CA).
You need to decide on the number and type of domains you want to be protected by this certificate. One SSL host
can hold one SSL certificate. One certificate can be valid for a number of domains and subdomains.
There are various types of SSL certificates. Choose the type appropriate for your needs.
Note
Choosing the wildcard subdomain certificate ensures protection of all subdomains in your custom domain
(*.myshop.com), but not the domain itself (myshop.com cannot be used).
4. Generate a CSR
To issue an SSL certificate and sign it with the CA of your choice, you need a certificate signing request (CSR). You
must create the CSR using our generate-csr command. For more information, see generate-csr [page 1879].
Note
We do not support uploading of existing certificates that are not generated using the generate-csr command.
Caution
The CSR is valid only for the host on which it was generated and cannot be moved and downloaded. The host
represents the region: for example, hana.ondemand.com for Europe; us1.hana.ondemand.com for the United
States; ap1.hana.ondemand.com for Asia-Pacific, and so on.
Use the CA of your choice to sign the CSR. The certificate has to be in Privacy-Enhanced Mail (PEM) format
(128 or 256 bits) with a private key (2048-4096 bits).
Related Information
To make sure your domain is trusted and all application data is protected, you need to first set up secure SSL
communication. The next step will then be to make your application accessible via the custom domain and route
traffic to it.
Context
You have to create an SSL host that will serve your custom domain. This host holds the mapping between your
chosen custom domain and the application on SAP Cloud Platform as well as the SSL configuration for secure
communication through this custom domain.
Prerequisites
To use the console commands, install an SDK according to the instructions in Install the SAP Cloud Platform SDK
for Neo Environment [page 1127].
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/neo.sh (<SDK installation
folder>/tools).
2. Create an SSL host. In the console client command line, execute neo create-ssl-host. For example:
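A sketch of such a call (the host, subaccount, and user values are placeholders; check the exact parameter names with neo help create-ssl-host):

```shell
# Create an SSL host named mysslhostname in your subaccount
neo create-ssl-host --name mysslhostname \
    --host hana.ondemand.com \
    --account mysubaccount \
    --user myuser
# The console client prompts for the password if it is not supplied.
```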
Note
In the command output, you get the SSL host. For example, "A new SSL host [mysslhostname] was
created and is now accessible on host [123456.ssl.ondemand.com]". Write down the
123456.ssl.ondemand.com host as you will later need it for the DNS configuration.
You need an SSL certificate to allow secure communication with your application. Once installed, the SSL
certificate is used to identify the site and to authenticate its owner.
Context
The certificate generation process starts with certificate signing request (CSR) generation. A CSR is an encoded
file containing your public key and specific information that identifies your company and domain name.
The next step is to use the CSR to get a server certificate signed by a certificate authority (CA) chosen by you.
Before buying, carefully consider the appropriate type of SSL certificate you need. For more information, see
Prerequisites [page 1745].
Procedure
1. Generate a CSR.
The --name parameter is the unique identifier of the certificate within your subaccount on SAP Cloud
Platform and will be used later. It can contain alphanumeric symbols, '.', '-' and '_'.
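A sketch of the CSR generation (the subject parameter name and values are illustrative; verify with neo help generate-csr):

```shell
# Generate a CSR for www.myshop.com; --name identifies the certificate
# within the subaccount and is reused in the later upload and bind steps
neo generate-csr --name mycert \
    --subject-dn "CN=www.myshop.com, O=My Company, C=DE" \
    --host hana.ondemand.com --account mysubaccount --user myuser
```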
Note
For security reasons, you can only upload certificates that are generated using the generate-csr
command.
Note
When sending the CSR to be signed by a CA, keep the following requirements in mind:
The certificate must be in Privacy-Enhanced Mail (PEM) format (128 or 256 bits) with a private key
(2048-4096 bits).
3. Upload the SSL certificate you received from the CA to SAP Cloud Platform:
Note
Note that some CAs issue chained root certificates that contain an intermediate certificate. In such cases,
put all certificates in the file for upload starting with the signed SSL certificate.
If you did not upload an intermediate certificate for some reason, you can use the --force parameter
option. Put the missing certificate in the file, add the --force parameter, and retry the previously executed
upload-domain-certificate command without changing the values of the remaining parameters.
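A sketch of the upload, including the --force retry described above (the file-location parameter name is illustrative; verify with neo help upload-domain-certificate):

```shell
# Upload the certificate chain stored in mycert.pem
# (signed SSL certificate first, then any intermediate certificates)
neo upload-domain-certificate --name mycert --location mycert.pem \
    --host hana.ondemand.com --account mysubaccount --user myuser

# If an intermediate certificate was missing, add it to the file
# and retry the same command with --force
neo upload-domain-certificate --name mycert --location mycert.pem --force \
    --host hana.ondemand.com --account mysubaccount --user myuser
```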
Caution
Once uploaded, the domain certificate (including the private key) is securely stored on SAP Cloud Platform
and cannot be downloaded for security reasons.
Note that when the certificate expires, you will receive a notification from your CA. You need to take care of the
certificate update. For more information, see Update an Expired Certificate [page 1755]
Tip
The number of certificates you can have is limited and is calculated based on the number of custom
domains you have multiplied by 3. For example, if you have one custom domain, you can have 3 certificates.
To free up some space for new certificates, execute list-domain-certificates to get the names of the
created ones and then delete-domain-certificate for each certificate you do not need.
You need to bind the uploaded certificate to the created SSL host so that it can be used as SSL certificate for
requests to this SSL host.
Procedure
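A minimal sketch of the binding step (parameter names are illustrative; verify with neo help bind-domain-certificate):

```shell
# Bind the uploaded certificate mycert to the SSL host mysslhostname
neo bind-domain-certificate --certificate mycert --ssl-host mysslhostname \
    --host hana.ondemand.com --account mysubaccount --user myuser
```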
To make your application on the SAP Cloud Platform accessible via the custom domain, you need to map the
custom domain to the application URL.
Context
Note
After you configure an application to be accessed over a custom domain, its default URL on hana.ondemand.com
will no longer be accessible. It only remains accessible for applications that are part of a subscription -
https://<application_name><provider_subaccount>-<consumer_subaccount>.<domain>.
1. In the console client command line, execute neo add-custom-domain with the appropriate parameters.
Note that you can only do this for a started application.
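A sketch of the mapping, using the placeholders from earlier in this guide (parameter names are illustrative; verify with neo help add-custom-domain):

```shell
# Map the custom domain www.myshop.com to the URL of a started application
neo add-custom-domain --custom-domain www.myshop.com \
    --application-url myapp.hana.ondemand.com \
    --host hana.ondemand.com --account mysubaccount --user myuser
```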
To route the traffic for your custom domain to your application on SAP Cloud Platform, you also need to configure
it in the Domain Name System (DNS) that you use.
Context
You need to make a CNAME mapping from your custom domain to the created SSL host for each custom domain
you want to use. This mapping is specific for the domain name provider you are using. Usually, you can modify
CNAME records using the administration tools available from your domain name registrar.
Procedure
1. Sign in to the domain name registrar's administrative tool and find the place where you can update the domain
DNS records.
2. Locate and update the CNAME records for your domain to point to the DNS entry you received from us
(*.ssl.ondemand.com) - the one that you got as a result when you created the SSL host using the
create-ssl-host command. For example, 123456.ssl.ondemand.com. You can check the SSL host by executing
the list-ssl-hosts command.
For example, if you have two DNS records: myhost.com and www.myhost.com, you need to configure them
both to point to the SSL host 123456.ssl.ondemand.com.
Note
Some DNS servers might not allow you to point a CNAME record to the root (apex) domain, such as
myhost.com. In this case, find the IP of the DNS entry (*.ssl.ondemand.com) and create an A record
with it.
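The CNAME mapping, and the A-record fallback from the note above, can be sketched as zone-file entries (the hosts are the placeholders used earlier in this guide; the IP address is hypothetical):

```
www.myshop.com.   IN  CNAME  123456.ssl.ondemand.com.
; apex fallback if a CNAME is not allowed at the top level:
myshop.com.       IN  A      203.0.113.10
```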
After you configure the custom domain, make sure that the setup is correct and your application is accessible on
the new domain.
Procedure
1. Log on to the cockpit, select a subaccount, and go to your Application Dashboard. In Application URLs, check
if the new custom URL has replaced the default one.
2. Open the new application URL in a browser. Make sure that your application responds as expected.
3. Check that there are no security warnings in the browser. View the certificate in the browser. Check the
Subject and Subject Alternative Name fields - the domain names there must match the custom domain.
4. Perform a small load test - request the application from different browser sessions making at least 15 different
requests.
Results
After this procedure, your application will be accessible on the custom domain, and you will be able to log on
(single sign-on) successfully. Single logout, however, may not work yet. If you have a custom trust configuration in
your subaccount, you will need to perform an additional configuration to enable single logout.
Next Steps
Configure single logout. For more information, see Configure Single Logout [page 1753].
To enable single logout, you need to configure the Custom Domain URLs, and, optionally, the Central Redirect URL
for the SAML single sign-on flow. Even if single sign-on works successfully with your application at the custom
domain, you will need to follow the current procedure to enable single logout.
Prerequisites
● You are logged on with a user with administrator role. See Managing Member Authorizations [page 1671].
● You are aware of the productive region that hosts your subaccount. See Regions [page 21].
● You are using a custom trust configuration for your subaccount. See Configure SAP Cloud Platform as a Local
Service Provider [page 2162].
● You have configured the required trust settings for your subaccount. See Configure Trust to the SAML Identity
Provider [page 2165].
Context
Central Redirect URL is the central node that facilitates assertion consumer service (ACS) and single logout (SLO)
service. By default, this node is provided by SAP Cloud Platform and has the authn.<productive region
host> URL (for example, authn.hana.ondemand.com). If you want to use your application’s root URL as
the ACS, instead of the central node, you will need to maintain the Central Redirect URL.
For Java applications, you can follow the procedure described in the current document.
Note
For HANA XS applications that use SAP ID Service as authenticating authority, create an incident in component
BC-IAM-IDS. For HANA XS applications that use SAP Cloud Platform Identity Authentication service for
authentication, see Configure a Trusted Service Provider to learn how to update the ACS and SLO endpoints.
Procedure
1. In your Web browser, open the SAP Cloud Platform cockpit and choose Security > Trust in the navigation
area.
2. Choose the Custom Application Domains Settings subtab.
3. Choose Edit. The custom domains properties become editable.
4. Select the Use Custom Application Domains option.
5. In Central Redirect URL, enter the URL of your application process that will serve as the central node.
Note
Make sure you do not stop the application VM specified as the Central Redirect URL. Otherwise, SAML
authentication will fail for all applications in your subaccount.
6. The values in Custom Domain URLs are used for SLO. Enter the required values (all custom domain URLs) in
Custom Domain URLs.
7. Save your changes. The system generates the respective SLO endpoints. Test them in your Web browser and
make sure they are accessible from there.
Tip
The system will accept URL values with or without https://. Either way, the system will generate the
correct ACS and SLO endpoint URLs.
Configuration of custom domains has different setups related to the subscriptions of your subaccount.
Subscriptions represent applications that your subaccount has purchased for use from an application provider.
A subscription means that there is a contract between an application provider and a tenant that authorizes the
tenant to use the provider's application. As the consumer subaccount, you do not own, deploy, or operate these
applications yourself. Subscriptions allow you to configure certain features of the applications and launch them
through consumer-specific URLs.
● The custom domain is owned by the application provider who uses an SSL host from their subaccount quota.
The provider also does the configuration and assignment of the custom domain. The provider can assign a
subdomain of its own custom domain to a particular subscription URL. To do this, the provider needs to have
rights in both the provider and consumer subaccount.
● The customer (consumer) uses an SSL host from the consumer subaccount quota. In this case, the customer
(consumer) owns the custom domain and the SSL host and is therefore able to do the necessary configuration
on their own.
Related Information
When the SSL certificate you configured for the custom domain expires, you have to perform the same procedure
with the new certificate and remove the old one.
Context
If you configured the certificate using the console client commands, follow these steps:
Procedure
1. Generate a new CSR by executing the neo generate-csr command with the appropriate parameters:
2. In the command line output, you get the generated new CSR. To sign your certificate, copy and send the text to
your trusted CA.
3. When you receive a signed SSL certificate from the CA, upload it to SAP Cloud Platform by executing:
4. Assign the new certificate to your existing SSL host by executing neo bind-domain-certificate with the
appropriate parameters.
5. If you want to list your custom domain certificates, execute: neo list-domain-certificates.
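The renewal sequence above can be sketched end to end (names and parameters are illustrative; mycert2024 is the new certificate, mycert2020 the expired one; each command also takes the usual --host/--account/--user parameters):

```shell
neo generate-csr --name mycert2024 --subject-dn "CN=www.myshop.com"
# ...send the CSR to your CA, then upload the signed certificate:
neo upload-domain-certificate --name mycert2024 --location mycert2024.pem
neo bind-domain-certificate --certificate mycert2024 --ssl-host mysslhostname
neo list-domain-certificates                   # optional check
neo delete-domain-certificate --name mycert2020  # remove the old one
```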
Related Information
If you do not want to use the custom domain any longer, you can remove it using the console client commands. As
a result, your application will be accessible only on its default hana.ondemand.com domain.
Procedure
Related Information
Using platform domains, you can configure the application's network availability or authentication policy. You
achieve that by configuring the appropriate platform domain, which changes the URL on which your application
is accessible.
Prerequisites
You have installed and configured SAP Cloud Platform console client. For more information, see Setting Up the
Console Client.
Context
● hana.ondemand.com - any application is accessible on this default domain after being deployed on SAP Cloud
Platform
● cert.hana.ondemand.com - enables client certificate authentication
● svc.hana.ondemand.com - provides access within the same region; for internal communication and not open
on the Internet or other networks
You can configure the platform domains using the application-domains group of console client commands:
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/neo.sh (<SDK installation
folder>/tools).
2. Configure the platform domain you have chosen by executing the add-platform-domain command.
As a result, the specified application will be accessible on cert.hana.ondemand.com and on the default
hana.ondemand.com domain.
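A sketch of the call that produces the result described above (parameter names are illustrative; verify with neo help add-platform-domain):

```shell
# Make the application demomyshop additionally accessible on the
# client-certificate platform domain
neo add-platform-domain --application demomyshop \
    --platform-domain cert.hana.ondemand.com \
    --host hana.ondemand.com --account mysubaccount --user myuser
```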
Procedure
1. To make sure the new platform domain is configured, execute the list-application-domains command:
2. Check if the returned list of domains contains the platform domain you set.
Procedure
1. When you no longer want the application to be accessible on the configured platform domain, remove it by
executing the remove-platform-domain command:
2. Repeat the step for each platform domain you want to remove.
Related Information
Using an on-premise reverse proxy allows you to combine on-premise and cloud-based web applications in the
same browser window.
Scope
● Java applications
Note
A Java application must be deployed and started in advance.
● HTML5 applications
● Both host and port mapping for reverse proxy
● More than one reverse proxy address can be mapped to the same application URL.
Combining on-premise and cloud-based web applications in one browser window is often not allowed because
browsers apply a cross-site information transfer prevention policy. Browsers treat this type of information
transfer as a security threat by default, which makes it impossible to perform cookie exchange and, in
particular, cookie-based authentication.
You can use an on-premise reverse proxy as the sole origin for the browser. This feature allows you to manage on-
premise and cloud applications in the same browser window. The commands listed in this topic allow SAP Cloud
Platform applications to be accessed via such proxies.
Note
Keep in mind that you cannot use these commands for applications configured with custom domains.
There are several options available for managing mappings between the cloud application uniform resource
identifier (URI) and the proxy host. Having a proxy-to-application mapping allows access to the application via the
on-premise reverse proxy.
Open the command prompt and navigate to the folder containing neo.bat/neo.sh (<SDK installation
folder>/tools). Then, you can manage the proxy host mappings by using the reverse-proxy group of console
client commands:
● The map-proxy-host command maps an application host to an on-premise reverse proxy host and port.
Example
● The unmap-proxy-host command deletes the mapping between an application host and an on-premise
reverse proxy host and port.
● The list-proxy-host-mappings command lists the proxy hosts mapped to an application host.
Example
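The three commands can be sketched as follows (parameter names are illustrative and should be verified with neo help; proxy.mycompany.corp is a hypothetical on-premise proxy):

```shell
# Map the application host to the on-premise reverse proxy host and port
neo map-proxy-host --application-host app.hana.ondemand.com \
    --proxy-host proxy.mycompany.corp --proxy-port 443 \
    --host hana.ondemand.com --account mysubaccount --user myuser

# List the proxy hosts mapped to the application host
neo list-proxy-host-mappings --application-host app.hana.ondemand.com \
    --host hana.ondemand.com --account mysubaccount --user myuser

# Delete the mapping again
neo unmap-proxy-host --application-host app.hana.ondemand.com \
    --proxy-host proxy.mycompany.corp --proxy-port 443 \
    --host hana.ondemand.com --account mysubaccount --user myuser
```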
Proxy Configuration
You need to configure the on-premise proxy to set a header with the key x-proxy-host. As a result, when your
HTTP request arrives at the cloud, it is routed properly to the target application. The header value should
contain the application host, for example app.hana.ondemand.com.
Note
If you do not use the x-proxy-host header, you will receive the Service Unavailable error message.
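To illustrate, a request through a hypothetical on-premise proxy would carry the header like this (both hosts are placeholders):

```shell
# The reverse proxy must attach x-proxy-host so the cloud side knows
# which application host to route the request to
curl -H "x-proxy-host: app.hana.ondemand.com" \
     https://proxy.mycompany.corp/index.html
```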
Related Information
You can use the SAP Cloud Platform virtual machines to install and maintain your own applications in scenarios
not covered by the platform.
Note
Virtual machines are currently available in regions Europe (Rot), Australia (Sydney), and US East (Sterling) in
the Neo environment. SAP Cloud Platform systems in other regions can communicate with virtual machines
only via public Internet, so you have to architect your applications accordingly.
Note
Backup services for the virtual machine volumes can be triggered manually; however, they are not part of SAP’s
disaster recovery and contingency strategies. The underlying infrastructure used to store snapshots and
volumes is set up in a cluster running in High Availability mode, but if a disaster occurs (that is, the
whole infrastructure crashes), no backup and recovery will be possible. Consider storing your data by means
of the SAP HANA service offered by SAP Cloud Platform or on other systems.
Building Components
An SAP Cloud Platform virtual machine is the virtualized hardware resources (CPU, RAM, disk space, installed OS)
on which the installed software runs. The virtual machines come with a SUSE Linux Enterprise Server installed. For
more information about the currently supported version, see Patch the OS of Existing Virtual Machines [page
1769].
You create virtual machines on which you install your own software, which can go beyond the programming
models supported by the platform.
Each virtual machine has a volume - the storage behind the file system and all software installed on it.
For each SAP Cloud Platform subaccount, a quota for virtual machines with certain sizes can be configured. You
can distribute quota to virtual machines when you open the SAP Cloud Platform cockpit and navigate to Quota
Management.
Depending on your needs, you can choose from the following sizes with predefined configurations:
Virtual Machine Size    CPU Cores    RAM (GB)    Disk Storage (GB)
xs                      1            2           20
s                       2            4           40
m                       4            8           80
l                       8            16          160
xl                      16           32          320
Capabilities
You can:
Limitations
● An SAP Cloud Platform virtual machine is running in a private network and cannot be accessed from another
customer subaccount.
You can create and start a virtual machine using either the SAP Cloud Platform cockpit or the console client. Then,
you establish a secure communication channel to it over Secure Shell (SSH) protocol. You open an SSH tunnel and
get all the communication details needed for you to log in to the virtual machine and install and maintain your
software.
Prerequisites
● Your subaccount has quota for virtual machines. You can view and assign quota to virtual machines when you
open the SAP Cloud Platform cockpit and navigate to Entitlements.
● You have downloaded the latest SAP Cloud Platform SDK to make sure that the console client contains the
latest changes. For more information, see Install the SAP Cloud Platform SDK for Neo Environment [page
1127].
● You have set up the SAP Cloud Platform console client. For more information, see Set Up the Console Client
[page 1135].
Context
When you use the cockpit to create a virtual machine, you only specify its name and size, and add a public key that
you generate manually. Use the console client instead for more advanced options, such as automatically
generating a key pair with an optional private key passphrase or creating virtual machines from existing volumes
and volume snapshots. For more information, see create-vm [page 1833].
Procedure
1. Using either the cockpit or the console client, create and start the virtual machine:
For more information, see Navigate to Global Accounts and Subaccounts [page 964].
2. In the navigation area, choose Virtual Machines.
3. Choose New Virtual Machine.
4. Enter a unique name for the virtual machine.
The name of the virtual machine can hold up to 30 characters of lowercase letters and numbers, and
must start with a letter.
5. Choose the size of the virtual machine from the drop-down.
6. Manually generate an SSH key pair in the single-line RSA format, and then paste in the public key starting
with ssh-rsa. Store the private key, which you will later use when connecting to the virtual machine using
an SSH client, in a secure location.
7. Choose Save.
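Step 6 above requires a key pair generated outside the cockpit; a minimal sketch using OpenSSH (the file name vm_key is arbitrary):

```shell
# Generate an RSA key pair; vm_key holds the private key, vm_key.pub the
# single-line public key starting with "ssh-rsa" that you paste into the
# cockpit. -N "" creates the key without a passphrase for this sketch.
ssh-keygen -q -t rsa -b 4096 -N "" -f vm_key
cat vm_key.pub
```

Keep the private key file in a secure location; the cockpit only needs the contents of vm_key.pub.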
Console client
1. Open the console client: in the command prompt, navigate to the folder containing neo.bat/neo.sh
(<SDK installation folder>/tools).
2. Execute the command:
Tip
In this step, you can optionally generate the key pair that will be used to log in to the virtual machine.
When generating the key pair, the file name is auto-generated and the file is saved under the following
file path: <directory where the command is executed>/<host>/<subaccount>.
The private key in the generated key pair can be encrypted with a passphrase. You will need the private
key passphrase when you connect to the virtual machine using an SSH client. Provide and confirm the
passphrase with the --private-key-passphrase and --private-key-passphrase-confirmation parameters in
the command line, or when prompted.
Note
If you are using OpenSSH, encryption with a passphrase is supported, but may not work with other
SSH clients.
3. Check if the virtual machine was created by executing list-vms, which shows all the virtual machines
in a subaccount:
The command output lists information about the virtual machine, such as size, status, SSH key, floating
IP (if assigned), volumes.
Tip
You can also view this information when you open the SAP Cloud Platform cockpit, open your subaccount,
and navigate to Virtual Machines. To view details about a specific virtual machine, choose its name in
the list.
You can provide a port on which you'll connect to the virtual machine once the tunnel is opened. If you do
not provide a port, one is assigned automatically. Execute the following command:
After opening an SSH tunnel, the virtual machine is available on localhost on the respective port.
Caution
The open-ssh-tunnel command uses port 2020. If this port is used by another program or if you
have virtual network adapters (for example, when using a VPN client) the command may fail. Instead of
opening an SSH tunnel, you can use the Service Channel option in the Cloud Connector to connect to
the virtual machine. For more information, see Configure a Service Channel for Virtual Machine [page
349]. For productive scenarios, we recommend that you use the Service Channel option.
b. Check if the tunnel was opened by listing the currently opened SSH tunnels:
neo list-ssh-tunnels
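The console-client sequence above - create, verify, open a tunnel, connect - can be sketched as follows (the name, size, and port are illustrative; each neo command also takes the usual --host/--account/--user parameters; verify the options with neo help create-vm):

```shell
neo create-vm --name myvm --size s      # create and start the VM
neo list-vms                            # confirm it was created
neo open-ssh-tunnel --name myvm --port 2222
neo list-ssh-tunnels                    # confirm the tunnel is open

# With the tunnel open, the VM is reachable on localhost; the root user
# is available via the key pair created with the VM
ssh -i my_private_key root@localhost -p 2222
```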
Results
You are now the owner of this virtual machine and can install your software on it. To do that, or to apply an OS
patch to your virtual machine, you need access to the SUSE Linux Enterprise Server (SLES) repositories. SLES
repositories, like any other repositories, are storage locations from which you can retrieve and install software
packages.
For more information about the region hosts, see Regions and Hosts Available for the Neo Environment [page 23].
Using the cockpit and the console client, you can manage the lifecycle of the created virtual machine by checking
its state and also deleting it when it is no longer needed.
You can also create another virtual machine with the same file system by using volumes and volume snapshots.
The console client is needed to perform this task.
Related Information
The SAP Cloud Platform Virtual Machine service provides a blank operating system (OS). There is no preinstalled
software. As stated in the SAP Cloud Platform contracts, SAP does not take responsibility for the software that
customers run on top of that OS.
Virtual machines can be restarted during the maintenance windows planned for the service. These planned
maintenance windows are announced in the SAP Cloud Platform contract.
Caution
The software installed on the virtual machines must be configured properly in order to start automatically after
the system reboot. Any other resources that cannot be preserved, such as network connections and OS
settings, must be restored either by the installed software or via automated scripts executed on system startup.
Properly configured software minimizes downtime for productive applications running on virtual machines and
avoids manual work after the maintenance window.
The file system of a virtual machine can survive a system crash if you create a virtual machine with the --
preserve-volume parameter. This parameter indicates that the volume will be preserved when the running
virtual machine is gone.
Caution
In a productive environment, always create virtual machines using the --preserve-volume parameter. For
more information, see create-vm [page 1833].
Operating System
Currently, only SUSE Linux Enterprise 12 is available. It contains the base operating system (OS) packages as
provided by SUSE. There is no custom software installed on top and there are no restrictions to the virtual
machine’s directories.
● When you create a virtual machine without providing volume or snapshot, the virtual machine is created with
the latest image and OS available in the platform. This OS is part of the newly created virtual machine volume
and the subsequent snapshots created out of that volume.
Note
SAP provides support for only one image with one OS version. The image is updated by the platform with
security patch sets and OS upgrades. Only the owner of the virtual machine can update already created
virtual machines and snapshots. For more information, see Patch the OS of Existing Virtual Machines [page
1769].
● When you create a virtual machine with a specified volume or snapshot, the OS is taken from the
corresponding volume or snapshot.
Note
If you want to be able to create virtual machines with the exact same OS version, you can create a snapshot
from an empty virtual machine. That way, even if the default image is updated, you can use this snapshot
to create virtual machines with the OS version you prefer.
Support
The SAP Cloud Platform operators cannot log on to a customer's virtual machine because a root user is available
only to the creator of the virtual machine. The root user is accessed via the key pair that is provided or
automatically created while creating the virtual machine. For more information, see create-vm [page 1833].
Monitoring
SAP Cloud Platform does not offer any health monitoring capabilities. As a customer, you are responsible for
installing the respective software, for setting up health monitoring of your virtual machine, and for the
software running on it.
Related Information
When you create a virtual machine and thus become its owner, you are responsible for applying patches and
updates to its operating system (OS). Whenever a new OS image with security patches becomes available, the
infrastructure changes the default image used, and new virtual machines are started with the new image.
However, the virtual machines you created previously still use the old image, and you need to update them. You
apply security patches directly from the SUSE Linux Enterprise Server (SLES) repositories.
Prerequisites
You have configured your access to the SLES repositories by executing the following commands:
For more information about the region hosts, see Regions and Hosts Available for the Neo Environment [page 23].
Procedure
1. Refresh the repositories by executing zypper refresh.
2. List the available security patches. If you do not specify the --category security parameter, the
command lists all the available patches.
3. Install the selected patches.
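The patching steps can be sketched with standard zypper calls, run on the virtual machine itself after the repository access from the prerequisites is configured:

```shell
# 1. Refresh the configured SLES repositories
zypper refresh
# 2. List the available security patches
#    (without --category security, all available patches are listed)
zypper list-patches --category security
# 3. Install the security patches
zypper patch --category security
```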
Results
Note
If you have created a snapshot of a virtual machine before the update and start another virtual machine from
that snapshot, you need to install the security patches on that new virtual machine too as described above.
You can allow communication with SAP Cloud Platform virtual machines from other systems by managing security
group rules using console client commands. Communication between virtual machines within the same
subaccount is available by default.
Prerequisites
You have created a virtual machine. For more information, see Manage Virtual Machines [page 1764].
After you create a virtual machine, you can open an SSH connection to it for management purposes using the
open-ssh-tunnel [page 1943] command. There are several scenarios involving network communication to and from
a virtual machine. For more information, see Communication Scenarios Between Virtual Machines and SAP Cloud
Platform Systems [page 1772].
You can define the allowed ports on which another SAP Cloud Platform system can connect to the specific virtual
machine by configuring a security group rule for it.
Procedure
For an SAP HANA system, the --source-id is the SAP HANA database system name. You can find your SAP
HANA database system name in the cockpit, where you navigate to SAP HANA / SAP ASE Database
Systems . For a Java application, it is the application name.
The type of the system is specified in the --source-type parameter. The acceptable --source-type values are
HANA and JAVA.
2. Check the security group rules for the virtual machine:
3. (Optional) When you no longer need the configured communication, delete the security group rule:
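The three steps can be sketched with the console client as follows. The command names create-security-rule and delete-security-rule come from this guide; list-security-rules and all parameter values are assumptions for illustration, so check neo help <command> for the exact parameters of your SDK version. The calls are shown as comments because they require a real subaccount:

```shell
# Hedged sketch of the security group rule life cycle (all values illustrative).
steps=""
# 1. Allow a Java application to reach the virtual machine on port 22:
#      neo create-security-rule -a mysubaccount -u myuser -h hana.ondemand.com \
#          --source-type JAVA --source-id myapp --from-port 22 --to-port 22
steps="$steps create"
# 2. Check the configured security group rules:
#      neo list-security-rules -a mysubaccount -u myuser -h hana.ondemand.com
steps="$steps list"
# 3. Delete the rule when the communication is no longer needed:
#      neo delete-security-rule -a mysubaccount -u myuser -h hana.ondemand.com
steps="$steps delete"
echo "life cycle:$steps"
```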
Related Information
Communication from SAP Cloud Platform virtual machines to the public Internet is possible on every port except
port 53, without a proxy, protocol restrictions, or any additional manual configuration.
The SAP Cloud Platform virtual machines run in isolated private networks, which provide secure communication
between customers as well as with other SAP Cloud Platform systems.
● Communication from a virtual machine to another virtual machine within the same subaccount
The network communication between virtual machines within one subaccount does not have any port or
protocol restrictions.
For more information about this scenario, see Try It: Communication Between Virtual Machines Within One
Subaccount [page 1774].
● Communication from a virtual machine to a HANA Single DB within the same subaccount
● Communication from a virtual machine to a database (HANA Single DB, HANA MDC, ASE) in another
subaccount using the SAP Cloud Connector
For more information, see Configure a Service Channel for Virtual Machine.
Inbound network requests to SAP Cloud Platform virtual machines are restricted by default. The only exception is
the communication scenario from a virtual machine to another virtual machine within the same subaccount.
○ Communication from Java and HANA XS applications to virtual machines can use the floating IP of the
virtual machine and any port (port range) equal to or greater than 22. The port (port range) must be
defined in the security group rules.
Note
You can obtain the floating IP of the virtual machine from the list-vms [page 1937] command output or
from the overview section of the virtual machine in the SAP Cloud Platform cockpit.
○ Communication from Java and HANA XS applications to virtual machines must not use HTTP proxy. You
can use the following code snippet for your Java application:
Example
Note
When registering an access point for a particular security group rule of a virtual machine, a load balancer is
automatically created. For more information, see Enable Internet Access [page 1779].
Manage Network Communication for SAP Cloud Platform Virtual Machines [page 1770]
Enable Internet Access [page 1779]
Try It: Communication Between Virtual Machines Within One Subaccount [page 1774]
Try It: SSH Connection Between Java Applications and SAP Cloud Platform Virtual Machines [page 1775]
This tutorial shows you how to ping a virtual machine from another virtual machine within one SAP Cloud
Platform subaccount.
Procedure
1. Create an SAP Cloud Platform virtual machine <vm1>.
For more information, see Manage Virtual Machines [page 1764] and create-vm [page 1833].
2. Open an SSH tunnel and use it to connect to virtual machine <vm1>.
For more information how to open an SSH tunnel, see open-ssh-tunnel [page 1943].
Use an SSH client of your choice to connect to the virtual machine. For more information, see Setting Up an
SSH Client to Connect to SAP Cloud Platform Virtual Machines .
3. Find the private IP address <ip1> of virtual machine <vm1>.
ifconfig eth0 | grep 'inet addr' | cut -d ':' -f 2 | awk '{ print $1 }'
4. Create a second SAP Cloud Platform virtual machine <vm2> in the same subaccount.
5. Open an SSH tunnel and use it to connect to virtual machine <vm2>.
6. Finally, ping virtual machine <vm1> from virtual machine <vm2>.
Run the following command in the SSH client using the active SSH session of virtual machine <vm2>:
ping -c 1 <ip1>
This tutorial shows you how to open a direct SSH connection between a Java application and an SAP Cloud
Platform virtual machine.
Prerequisites
You have created an SAP Cloud Platform virtual machine. For more information, see create-vm [page 1833].
Context
Note
Only the HTTPS protocol is supported. For more information, see Enable Internet Access [page 1779].
● By establishing a direct network connection using the floating IP of the virtual machine and a security group
rule. For more information, see Manage Network Communication for SAP Cloud Platform Virtual Machines
[page 1770].
Procedure
1. Obtain the SSH host key fingerprint of the virtual machine public key.
The host key is important for securing the SSH connection. To obtain its fingerprint, do the following:
1. Open an SSH tunnel to the virtual machine. For more information, see open-ssh-tunnel [page 1943].
2. Use the SSH tunnel to connect to your virtual machine with an SSH client of your choice. For more
information, see Setting Up an SSH Client to Connect to SAP Cloud Platform Virtual Machines .
3. Once the connection is established, run the cat /etc/ssh/ssh_host_rsa_key.pub command.
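Strictly speaking, the cat command prints the host public key itself; if you prefer the short fingerprint form, ssh-keygen can derive it from that file. A sketch, demonstrated on a throwaway key because the real file exists only on the virtual machine (ssh-keygen availability is an assumption):

```shell
# Derive a fingerprint from a host public key with ssh-keygen.
# On the VM you would point at /etc/ssh/ssh_host_rsa_key.pub instead.
tmpdir=$(mktemp -d)
if command -v ssh-keygen >/dev/null 2>&1; then
    ssh-keygen -q -t rsa -b 2048 -N "" -f "$tmpdir/demo_key"   # throwaway key pair
    fp=$(ssh-keygen -lf "$tmpdir/demo_key.pub")                # "<bits> <hash> <comment> (RSA)"
fi
rm -rf "$tmpdir"
echo "fingerprint: ${fp:-skipped}"
```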
2. Use the following sample code for your Java application.
Its purpose is to execute a simple command on a virtual machine using SSH connection.
Sample Code
package com.sap.example;
import java.io.IOException;
import java.io.InputStream;
import java.util.Base64;
You can always replace it with another Linux command of your choice.
○ The value shown here is the passphrase set during the creation of the virtual machine:
You can find it by going to the overview page of the virtual machine in the SAP Cloud Platform cockpit. Or
you can execute the list-vms command. For more information, see list-vms [page 1937].
○ This is where you type the SSH host key fingerprint of the virtual machine public key:
4. Deploy your application on the cloud using the SAP Cloud Platform cockpit or the Eclipse IDE. For more
information, see Deploy on the Cloud with the Cockpit [page 1199] and Deploy on the Cloud from Eclipse IDE
[page 1191].
5. Create a security rule for your virtual machine by executing the create-security-rule console command.
For more information, see create-security-rule [page 1830] and Manage Network Communication for SAP
Cloud Platform Virtual Machines [page 1770].
6. Open the Java application in a browser of your choice.
Go to the overview page of your Java application in the SAP Cloud Platform cockpit. Once the application
starts, click the application URL:
If you see the output of the lsof command in your browser, then the SSH connection has been established
successfully.
Manage Network Communication for SAP Cloud Platform Virtual Machines [page 1770]
Communication Scenarios Between Virtual Machines and SAP Cloud Platform Systems [page 1772]
Enable Internet Access [page 1779]
You can make your software running on a virtual machine accessible from the Internet if your scenario requires it.
Context
Using console client commands, you can enable an access point for your application through which end users can
access the application over HTTPS. Alternatively, you can do that from the SAP Cloud Platform cockpit.
Note
SAP Cloud Platform supports communication over HTTPS only. Internet traffic will therefore be directed over
HTTPS to a software process running on your virtual machine and listening on port 8041. For such communication,
you need to have a valid self-signed certificate in place.
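A self-signed certificate of the kind the note mentions can be generated with OpenSSL, for example. This is a sketch only: the file names and subject are illustrative, and the process listening on port 8041 still has to be configured to use the generated key and certificate:

```shell
# Generate a self-signed certificate and private key (illustrative names).
certdir=$(mktemp -d)
if command -v openssl >/dev/null 2>&1; then
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout "$certdir/server.key" -out "$certdir/server.crt" \
        -subj "/CN=myvm.example.com" 2>/dev/null
    created="yes"
else
    created="skipped"
fi
echo "certificate created: $created"
```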
You can check the access point with the list-vms command.
Alternatively, you can enable Internet access to a virtual machine from the cockpit. Open Virtual Machines in
the navigation, click the name of the particular virtual machine and choose Expose to Web.
2. (Optional) When you no longer need the access point, remove it:
Alternatively, you can disable Internet access to a virtual machine from the cockpit. Open Virtual Machines in
the navigation, click the name of the particular virtual machine and choose Hide from Web.
Related Information
A volume is the persistent storage that is created automatically when a virtual machine is created.
Context
Each virtual machine has a volume that stores the file system and all software installed on it. You can create a new
virtual machine from the same volume, delete a volume, or create a snapshot of a volume.
The output shows all volumes with their ID, status, and size, as well as the ID of the virtual machine they are
attached to. Choose the volume from which you want to create another virtual machine and note its ID. The
volume must be in status available.
2. Create a new virtual machine from the volume.
3. (Optional) Delete a volume when you no longer need it to free some resources.
You cannot delete a volume that has snapshots or is in use by a virtual machine.
Next Steps
You can create a snapshot of the volume of a virtual machine. This snapshot contains everything that was installed
on the file system, but does not keep any running processes and runtime configurations. See Manage Volume
Snapshots [page 1782].
Related Information
You can take a snapshot of an existing virtual machine volume in your subaccount and use it to create a new virtual
machine with the same file system, thus sparing you any manual installation.
Prerequisites
Note
Virtual machine quotas and volume snapshot quotas are both size-specific. A quota for a virtual machine of a
given size provides you with a quota for three volume snapshots of that same size.
Example
If an SAP Cloud Platform subaccount has a quota for two large size virtual machines, you can create up to six
large size volume snapshots. All six snapshots can be taken from one and the same large size virtual
machine, or distributed between both large size virtual machines. You cannot use this quota to create
snapshots from virtual machines of other sizes.
Context
Each virtual machine has a volume – the storage behind the file system and all software installed on it. Using
console client commands, you can create a snapshot of the volume of a virtual machine. This snapshot contains
everything that was installed on the file system, but does not keep any running processes and runtime
configurations. Then, you create a new virtual machine from this volume snapshot.
1. List the virtual machines in your subaccount to find the volume of which you want to take a snapshot.
The command output includes all virtual machines with their volume IDs. Copy the ID of the volume you need.
2. Create a snapshot of the specified VM volume.
The create-volume-snapshot command records the state of the file system of the virtual machine, but not
the state of the memory. Therefore, make sure that the file system contains all required data. In order to do so,
before creating the snapshot, ensure that all running programs have written their content to the disk.
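On a Linux virtual machine, flushing pending writes before taking the snapshot can be done with sync; stopping write-heavy programs first is still advisable. A sketch (the snapshot command parameters are illustrative):

```shell
# Flush file-system buffers so the snapshot captures a consistent on-disk state.
sync
flushed="yes"
# Then take the snapshot, for example:
#   neo create-volume-snapshot -a mysubaccount -u myuser -h hana.ondemand.com ...
echo "buffers flushed: $flushed"
```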
3. Check the status of volume snapshot creation. You can find the snapshot ID in the output of the create-
volume-snapshot command.
5. (Optional) List all volume snapshots in your subaccount. This will give you more information about each
snapshot, such as ID, name, status, volume ID.
6. (Optional) Delete a snapshot when you no longer need it. This will free some quota to use for new volume
snapshots.
Related Information
You can consume an SAP HANA database from a virtual machine using JDBC.
Prerequisites
● You have created a virtual machine and connected to it over SSH protocol as a root user so that you can install
your software. For more information, see Manage Virtual Machines [page 1764].
● Your subaccount is configured with a dedicated SAP HANA database.
Procedure
1. Install the runtime necessary to run your application on the virtual machine (for example, Java, Node.js).
2. Get a valid JDBC driver for your application.
3. To get the details required to connect to the database, go to the overview of the database in the SAP Cloud
Platform cockpit.
a. In the cockpit, select a subaccount and choose Databases & Schemas.
b. Select a database that you want to connect to. This opens the overview for the selected database.
4. Create a database user to get access to an SAP HANA database. See Binding SAP HANA Databases to
Java Applications [page 754] (section: Create an SAP HANA Database User).
5. To connect to the database, specify the following details:
a. The user name and password that you defined earlier for the database.
b. The host and port, which you take from the JDBC URL (for example, jdbc:sap://localhost:30015)
displayed in the cockpit in the overview for the selected database.
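For example, the host and port can be split out of the JDBC URL with plain shell parameter expansion, a small sketch using the example URL from above:

```shell
# Split host and port out of the JDBC URL shown in the cockpit.
jdbc_url="jdbc:sap://localhost:30015"   # example value from the cockpit
hostport="${jdbc_url#jdbc:sap://}"      # strip the jdbc:sap:// prefix
db_host="${hostport%:*}"                # everything before the last colon
db_port="${hostport##*:}"               # everything after the last colon
echo "host=$db_host port=$db_port"
```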
To learn what the SAP Cloud Platform virtual machines have to offer in practice, check out these thoroughly
described scenarios.
● How to use X forwarding on SAP Cloud Platform virtual machines: see Using X Forwarding on SAP Cloud Platform Virtual Machines
● How to install an R server with SAP Cloud Platform virtual machines: see Installing an R Server with SAP Cloud Platform Virtual Machines
● How to run web applications with Apache on an SAP Cloud Platform virtual machine: see Running Web Applications with Apache on an SAP Cloud Platform Virtual Machine
● How to set up an SSH client to connect to an SAP Cloud Platform virtual machine: see Setting Up an SSH Client to Connect to SAP Cloud Platform Virtual Machines
● How to configure access to your SAP Cloud Platform virtual machine using Tomcat 8: see Configuring Access to Your Virtual Machine Using Tomcat 8
● How to set up an SMTP server on an SAP Cloud Platform virtual machine: see Setting Up an SMTP Server on an SAP Cloud Platform Virtual Machine
● How to access multiple applications or servers installed on a single SAP Cloud Platform virtual machine instance: see Single Endpoint to Access Multiple Services on an SAP Cloud Platform Virtual Machine
Related Information
Learn how to authorize yourself and execute your first requests with the Virtual Machines API for the SAP Cloud
Platform Neo environment.
Context
To find the Virtual Machines API documentation, see SAP Cloud Platform APIs .
Procedure
3. Enter an Endpoint Alias of your choice along with your Username and Password. Then, choose Ok.
Take the newly generated CSRF token from the response header. For example,
DFAC5AE36DAC7949FE7B74F9C71FA728. If the token expires at some point, repeat the GET CSRF Token
operation to generate a new one.
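The token handshake can be sketched with curl. The host and account below are taken from the examples in this section; the actual calls are shown as comments because they require valid credentials:

```shell
# Hedged sketch of the X-CSRF-Token handshake (host and account illustrative).
api="https://api.hana.ondemand.com/virtualMachines/v2/accounts/a12345678"
# 1. Fetch a token: send a GET with the header "X-CSRF-Token: Fetch" and read
#    the X-CSRF-Token response header:
#      token=$(curl -s -u "$USER:$PASS" -D - -o /dev/null \
#                   -H "X-CSRF-Token: Fetch" "$api/virtualMachines" \
#              | tr -d '\r' | awk -F': ' 'tolower($1) == "x-csrf-token" { print $2 }')
# 2. Send that token with every modifying request (POST, PUT, DELETE).
#    If it expires, repeat the GET to obtain a fresh one.
echo "API root: $api"
```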
Next Steps
Now that you have authorized yourself, see the following Virtual Machines API scenarios. These scenarios show
how you can:
Context
You can create a virtual machine by sending the corresponding POST request.
Procedure
Example
{
"name": "testvm",
"size": "x-small",
"publicKey": {
"name": "testkey",
"value": "ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDWILdAVi4KqhD3kqnl2pixxj9dRmFKsXE1MYl5/
YR7rJNviTgbjRF4Owke5S1ZMqsF+OdIpZ06FoWwpvl5x+sof/IUGCdoZXvd1pbaE/2aCmL1FIXB
+FTJKA/gCDnZMdKxrNy6yX9QOyYl3Wm8j1FFJd3UkrDu/EilM
+2fhVNNafsobt63NXSMItlnQ1HFr2q7770rwz1lNAD4zX2FtZTqGGVfyLRrdlmOR2rdMJSQGLJTY5Gv
H6WB1wt+W27ROejsH1nPb9htwYjyY9SvQMbLFgHLyluhBBB0B
+vNDSzcmrdnr8fdy8YT27O2O6owKINU3Cd5YvYaaNK0vYWh34Ur youruser"
},
"volumeId": " ",
"snapshotId": " "
}
Example
location: https://api.hana.ondemand.com/virtualMachines/v2/accounts/a12345678/
processes/0ad07c35-f277-46db-adb9-824eb85b722a
date: Wed, 14 Feb 2018 14:20:23 GMT
content-length: 0
server: SAP
Context
Once you have triggered the creation of the virtual machine and you have copied the process ID, you can check the
status of that process.
Procedure
Results
The response body provides you with the current state of the virtual machine.
Example
{
"id": "0ad07c35-f277-46db-adb9-824eb85b722a",
"type": "VIRTUAL_MACHINE",
"action": "CREATE",
"status": "RUNNING",
"resourceLocations": [
"string"
],
"createdAt": "2018-02-14T14:20:23.952Z",
"lastModifiedAt": "2018-02-14T14:20:23.952Z"
}
Retry the operation until the status property value switches to STARTED.
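The retry can be sketched as a simple polling loop. The real status check is the GET request above; here it is simulated by a counter so the sketch terminates on the third attempt:

```shell
# Poll until the process status switches to STARTED (simulated check).
attempt=0
status="RUNNING"
while [ "$status" != "STARTED" ]; do
    attempt=$((attempt + 1))
    # status=$(GET .../processes/<process ID> | extract .status)   # real call
    if [ "$attempt" -ge 3 ]; then status="STARTED"; fi             # simulation
    sleep 0                                                        # wait between polls
done
echo "STARTED after $attempt checks"
```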
Context
To obtain any detailed information about a specific virtual machine, you need to send the corresponding GET
request.
Procedure
The other GET operation requires you to have the ID of the virtual machine.
3. Enter your subaccount name and the name of the virtual machine.
4. Choose Try it out.
Results
The response body contains all the details about the recently created virtual machine.
Example
{
"id": "113f1447-4dd5-4bfd-9514-810a22b80d1f",
"securityGroupId": "37cfdd4e-02d9-4e49-a43a-e749c21f881f",
"name": "testvm",
"size": "x-small",
"ipAddress": "10.76.7.108",
"imageId": "2ed83124-0af7-48b5-8a40-24c77816303a",
"publicKey": "testkey",
"status": "STARTED",
"createdAt": "2018-02-14 14:21:23",
"lastModifiedAt": "2018-02-14 14:21:48",
"volume": {
"id": "f7fc8cde-69e7-4302-a264-5f05a0272c3f",
"name": "testvm",
"status": "IN_USE",
"createdAt": "2018-02-14 14:22:23",
"size": 20,
"attachedToVm": "113f1447-4dd5-4bfd-9514-810a22b80d1f"
}
}
The SAP Cloud Platform console client for the Neo environment enables development, deployment, and configuration
of applications outside the Eclipse IDE, as well as continuous integration and automation tasks. The tool is part of
the SAP Cloud Platform SDK for the Neo environment. You can find it in the tools folder of your SDK location.
Note
The console client applies only to the Neo environment. For the Cloud Foundry environment, use the Cloud
Foundry command line interface. See Download and Install the Cloud Foundry Command Line Interface [page
948].
● Downloading and setting up the console client: see Set Up the Console Client [page 1135]
● Opening the tool and working with the commands and parameters: see Using the Console Client [page 1792] and the Console Client Video Tutorial
● Verbose mode of output: see Verbose Mode of the Console Commands Output [page 1795]
You execute a console client command by entering neo <command name> with the appropriate parameters. To list
all parameters available for the respective command, execute neo help <command name>.
You can define the parameters of the different commands either directly in the command line or in a properties
file:
The console client is part of the SAP Cloud Platform SDK for Neo environment. You can find it in the tools folder of
your SDK installation.
To start it, open the command prompt and change the current directory to the <SDK_installation_folder>\tools
location, which contains the neo.bat and neo.sh files.
Command Line
You can deploy the same application as in the example above by executing the following command directly in the
command line:
Properties File
The file example_war.properties can be found in the samples/deploy_war folder of the SDK. In
the file, enter your own user and subaccount name:
################################################
# General settings - relevant for all commands #
################################################
# Your subaccount name
account=<your subaccount>
# Application name
application=<your application name>
# User for login to hana.ondemand.com.
user=<email or user name>
# Host of the admin server. Optional. Defaults to hana.ondemand.com.
host=hana.ondemand.com
#################################################################
# Deployment descriptor settings - relevant only for deployment #
#################################################################
# List of file system paths to *.war files and folders containing them
source=samples/deploy_war/example.war
Note that you can have more than one properties file. For example, you can have a different properties file for each
application or user in your subaccount.
For more information about using the properties file, watch the video tutorial.
Argument values specified in the command line override the values specified in the properties file. For example, if
you have specified account=a in the properties file and then enter account=b in the command line, the operation
will take effect in account b.
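The precedence rule can be illustrated generically; the sketch below stands in for the console client's own merging of the two sources:

```shell
# Command-line values override the same key from the properties file.
propfile=$(mktemp)
printf 'account=a\n' > "$propfile"                 # value from the properties file
file_account=$(sed -n 's/^account=//p' "$propfile")
cli_account="b"                                    # value passed on the command line
effective=${cli_account:-$file_account}            # command line wins when present
rm -f "$propfile"
echo "operation takes effect in account: $effective"
```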
Parameter Values
Since the client is executed in a console environment, not all characters can be used in arguments. There are
special characters that should be quoted and escaped.
Consult your console/shell user guide on how to use special characters as command line arguments.
For example, to use an argument with the value abc&()[]{}^=;!'+,`~123 on Windows 7, you should quote the value
and escape the ! character. Therefore, you should use "abc&()[]{}^=;^!'+,`~123".
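On Linux and macOS shells the same idea applies, but single quotes or careful double-quoting do most of the work. A sketch with the value from the Windows example (in a non-interactive POSIX shell, only the backtick still needs escaping inside double quotes):

```shell
# Quote the special-character value from the example above in a POSIX shell.
value="abc&()[]{}^=;!'+,\`~123"   # backtick escaped; other characters are literal here
printf '%s\n' "$value"
```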
User
Password
Do not specify your password in the properties file or as a command line argument. Enter a password only when
prompted by SAP Cloud Platform console client.
instead of
Restriction
Your password cannot start with the "@" character.
Proxy Settings
If you work in a proxy environment, before you execute commands, you need to configure the proxy.
For more information, see Set Up the Console Client [page 1135]
Output Mode
You can configure the console to print detailed output during command execution.
Related Information
● Local code - executed inside a local JVM, which is started when the command is started.
● Remote code - executed in the back end (generally, the REST API called by the local code), which runs in a
separate JVM on the cloud.
Note
The trace level for remote code cannot be changed.
For local code execution, a LOG4J library is used. It is easy to configure, and, by default, there is a configuration
file located inside the commands class path, that is, .../tools/lib/cmd.
For each command execution, two appenders are defined - one for the session and one for the console. Each
defines a different file for all messages logged by the SAP infrastructure and by apache.http. By default,
the console commands output is written to a number of log files. However, you can change the
log4j.properties file and define additional appenders or change the existing ones. If you want, for example,
the full output to be printed in the console (verbose mode), or you want to see details from the execution of
specific libraries (partially verbose mode), you need to adjust the LOG4J configuration file.
To adjust the level of a specific logger, add log4j.logger.<package>=<level> to the
log4j.properties file.
In the file defined for the session, only loggers with level ERROR are logged. If you want, for example, to log debug
information about the apache.http library, you have to change log4j.category.org.apache.http=ERROR,
session to log4j.category.org.apache.http=DEBUG, session.
Example
This example demonstrates how you can change the output of command execution so that it is printed in the
console instead of being collected in log files. To do this, open your SDK folder and go to the /tools/lib/cmd
directory. Then, open the log4j.properties file and replace its content with the code below.
##########
# Log levels
##########
log4j.rootLogger=INFO, console
log4j.additivity.rootLogger=false
##########
# System out console appender
##########
log4j.appender.console.Threshold=ALL
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.Target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p [%t] %C: %m%n
log4j.appender.console.filter.1=org.apache.log4j.varia.StringMatchFilter
log4j.appender.console.filter.1.StringToMatch=>> Authorization: Basic
log4j.appender.console.filter.1.AcceptOnMatch=false
Related Information
Context
The console commands can return structured, machine-readable output. When you use the optional --output
parameter in a command, the command returns values and objects in a format that a machine can easily parse.
The currently supported output format is JSON.
Cases
When --output json is specified, the console client prints a single JSON object containing information
about the command execution and the result, if available.
Here is a full example of a command (neo start) that supports structured output and displays result values:
{
"command": "start",
"argLine": "-a myaccount -b myapplication -h hana.ondemand.com -u myuser -p
******* -y",
"pid": 6523,
"exitCode": 0,
"errorMsg": null,
"commandOutput": "Requesting start for:
application : myapplication
account : myaccount
host : https://hana.ondemand.com
synchronous : true
SDK version : 1.48.99
user : myuser
web: STARTED
URL: https://myapplicationmyaccount.hana.ondemand.com
Access points:
https://myapplicationmyaccount.hana.ondemand.com
Application processes
ID State Last Change Runtime
fc735dc STARTED 25-Feb-2014 18:07:48 1.47.10.2
",
"commandErrorOutput": "",
"result": {
"status": "STARTED",
"url": "https://myapplicationmyaccount.hana.ondemand.com",
"accessPoints": [
"https://myapplicationmyaccount.hana.ondemand.com",
"https://myapplicationmyaccount.hana.ondemand.com/app2"
],
"applicationProcesses": [
{
"id": "fc735dc",
"state": "STARTED",
"lastChange": "2014-02-25T18:07:48Z",
"runtime": "1.47.10.2"
}
]
}
}
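A build script can then react to the parsed fields. A sketch that extracts exitCode and status from a trimmed sample of the output above (a JSON-aware tool such as jq would be more robust; plain sed is used here to stay self-contained):

```shell
# Extract fields from a trimmed sample of the --output json result.
sample=$(mktemp)
printf '%s\n' '{ "command": "start", "exitCode": 0, "result": { "status": "STARTED" } }' > "$sample"
exit_code=$(sed -n 's/.*"exitCode": *\([0-9-]*\).*/\1/p' "$sample")
app_status=$(sed -n 's/.*"status": *"\([A-Z]*\)".*/\1/p' "$sample")
rm -f "$sample"
echo "exitCode=$exit_code status=$app_status"
```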
Related Information
Group Commands
● Local Server: install-local [page 1906]; deploy-local [page 1861]; start-local [page 1979]; stop-local [page 1984]
● Deployment: deploy [page 1856]; start [page 1977]; status [page 1975]
● SAP HANA / SAP ASE: list-application-datasources [page 1907]; list-dbms [page 1915]; list-dbs [page 1916]; list-schemas [page 1931]
● Subaccounts and Entitlements: create-account [page 1817]; delete-account [page 1836]; list-accounts [page 1910]; set-quota [page 1972]
● Virtual Machines: create-vm [page 1833]; delete-vm [page 1854]; list-vms [page 1937]
5.4.4.1 add-ecm-tenant
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Type: string
Type: string
Optional
-v, --virus-scan Can be used to activate the virus scanner and check all incoming documents for viruses.
Default: true
Type: boolean
Recommendation
For repositories that are used by untrusted users or for unknown content, we recommend that you enable
the virus scanner by setting this parameter to true. Enabling the virus scanner could impair the upload
performance.
If a virus is detected, the upload process for the document fails with a virus scanner exception.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
5.4.4.2 add-custom-domain
Use this command to add a custom domain to an application URL. This will route the traffic for the custom domain
to your application on SAP Cloud Platform.
Parameters
To list all parameters available for this command, execute neo help add-custom-domain in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-i, --application-url The access point of the application on SAP Cloud Platform default domains
(hana.ondemand.com, etc.)
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not
specified.
Example
Related Information
5.4.4.3 add-platform-domain
Adds a platform domain (under hana.ondemand.com) on which the application will be accessed.
Parameters
To list all parameters available for this command, execute neo help add-platform-domain in the command
line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
The chosen platform domain will be the parent domain in the absolute application domain.
Acceptable values:
● svc.hana.ondemand.com
● cert.hana.ondemand.com
Example
Related Information
5.4.4.4 bind-db
This command binds an SAP HANA tenant database or SAP ASE user database to a Java application using a data
source.
You can only bind an application to an SAP HANA tenant database or SAP ASE user database if the application is
deployed.
Note
To bind your application to a database that is owned by another subaccount of your global account, you
need permission to use it. See Sharing Databases in the Same Enterprise Account [page 764].
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
--access-token Identifies a database access permission. The access token and database ID parameters
are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Default: <DEFAULT>
Type: string (uppercase and lowercase letters, numbers, and the following special
characters: `/`, `_`, `-`, `@`. Do not use special characters as the first or last character of
the data source name.)
Example
Related Information
5.4.4.5 bind-domain-certificate
To list all parameters available for this command, execute neo help bind-domain-certificate in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--certificate Name of the certificate that you set to the SSL host
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
Related Information
5.4.4.6 bind-hana-dbms
This command binds a Java application to an SAP HANA single-container database system (XS) via a data source.
You can only bind an application to an SAP HANA single-container database system (XS) if the application is
deployed.
Note
To bind your application to a database that is owned by another subaccount of your global account, see
bind-db [page 1805].
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
You cannot use this command in trial accounts.
Type: string
--access-token Identifies a database access permission. The access token and database ID parameters
are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--db-password Password of the database user used to access the SAP HANA single-container database
system (XS)
--db-user Name of the database user used to access the SAP HANA single-container database
system (XS)
Optional
Type: string (uppercase and lowercase letters, numbers, and the following special
characters: `/`, `_`, `-`, `@`. Do not use special characters as the first or last character of
the data source name.)
Example
Related Information
5.4.4.7 bind-schema
This command binds a schema to a Java application via a data source. If a data source name is not specified, the
schema will be automatically bound to the default data source of the application.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
--access-token Identifies a schema access grant. The access token and schema ID parameters are
mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not
explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
The application will be able to access the schema via the specified data source.
Type: string (uppercase and lowercase letters, numbers, and the following special
characters: `/`, `_`, `-`, `@`. Do not use special characters as the first or last character of
the data source name.)
Example
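A minimal bind-schema invocation might look like the following sketch. The subaccount, application, user, host, schema ID, and data source name are all placeholder values; the --data-source flag is assumed here as the name of the optional data source parameter described above.

```shell
# Bind schema "myschema" to application "myapp" via a named data source
neo bind-schema --account mysubaccount --application myapp \
    --user myuser --host hana.ondemand.com \
    --id myschema --data-source jdbc/myds
```

If --data-source is omitted, the schema is bound to the application's default data source, as described above.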
Related Information
5.4.4.8 change-domain-certificate
Parameters
To list all parameters available for this command, execute neo help change-domain-certificate in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--certificate Name of the certificate that you set to the SSL host
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
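A sketch of a typical invocation, using the --certificate and --ssl-host parameters listed above; all values are placeholders:

```shell
# Replace the certificate served by SSL host "mysslhostname"
neo change-domain-certificate --account mysubaccount --user myuser \
    --host hana.ondemand.com \
    --certificate mycert --ssl-host mysslhostname
```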
If your current SAP Cloud Platform SDK version for Neo environment does not support this command, update
your SDK or use the unbind-domain-certificate and bind-domain-certificate commands instead.
Note
The first SAP Cloud Platform SDK versions for Neo environment to support the change-domain-certificate command are:
For more information, see Update the SAP Cloud Platform SDK for Neo Environment [page 1136].
Related Information
5.4.4.9 clear-alert-recipients
Parameter
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
-b, --application Application name for Java applications or productive SAP HANA database system, and
application name in the format <database name>:<application name> for SAP HANA XS
applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Example
5.4.4.10 clear-downtime-app
The command deregisters a previously configured downtime page for an application. After you execute the
command, the default HTTP error will be shown to the user in the event of unplanned downtime.
Parameters
To list all parameters available for this command, execute neo help clear-downtime-app in the command
line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.4.4.11 close-db-tunnel
This command closes one or all database tunnel sessions that have been opened in a background process using
the open-db-tunnel --background command.
A tunnel opened in a background process is automatically closed when the last session using the tunnel is closed.
The background process terminates after the last tunnel has been closed.
Required
--all Closes all tunnel sessions that have been opened in the background
--session-id Tunnel session to be closed. Cannot be used together with the parameter --all.
Example
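The two mutually exclusive forms described above might be used as follows; the session ID is a placeholder for a value reported by open-db-tunnel --background:

```shell
# Close a single background tunnel session by its ID
neo close-db-tunnel --session-id 5

# Or close all background tunnel sessions at once
neo close-db-tunnel --all
```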
Related Information
5.4.4.12 close-ssh-tunnel
Closes the SSH tunnel to the specified virtual machine. If no virtual machine ID is specified, closes all tunnels.
Parameters
Required
Type: string
Optional
-r, --port Port on which you want to close the SSH tunnel
Example
Creates a new subaccount with an automatically generated unique ID as the subaccount name and the specified display name, and assigns the user as a subaccount owner. The user is authorized against the existing subaccount passed with the --account parameter. Optionally, you can clone an existing subaccount configuration to save time and effort.
Note
If you clone an existing extension account [page 1612], the new subaccount will not be an extension subaccount
but a regular one. The new subaccount will not have the trust and destination settings typical for extension
subaccounts.
Parameters
To list all parameters available for this command, execute neo help create-account in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
If you want to create a subaccount whose display name has intervals, use quotes when executing the command. For example: neo ... --display-name "Display Name with Intervals"
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
--clone (Optional) List of settings that will be copied (re-created) from the existing subaccount
into the new subaccount. A comma separated list of values, which are as follows:
● trust
● members
● destinations
● all
Tip
We recommend explicitly listing the required cloning options instead of using --clone all in automated scripts. This ensures backward compatibility in case the available cloning options covered by all change in future releases.
Example
all All settings (trust, members, and destinations) from the existing subaccount will be copied into the new one.
Caution
The list of cloned configurations might be extended in the
future.
trust The following trust settings will be re-created in the new subaccount similarly to the relevant settings in the existing subaccount:
Note
SAP Cloud Platform will generate a new pair of key and certificate on behalf of the new subaccount. Remember to replace them with your proprietary key and certificate when using the subaccount for productive purposes.
Note
If you do not have any trusted Identity Authentication tenants in the existing subaccount, cloning the trust settings will result in trust with SAP ID Service (as default identity provider) in the new subaccount.
members All members with their roles from the existing subaccount will
be copied into the new one.
Example of cloning an existing subaccount to create a new subaccount with the same trust settings and existing
destinations:
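The cloning scenario just described might be invoked as follows; the subaccount, user, and host values are placeholders:

```shell
# Create a new subaccount, cloning trust settings and destinations
# from the existing subaccount "mysubaccount"
neo create-account --account mysubaccount --user myuser \
    --host hana.ondemand.com \
    --display-name "My Cloned Subaccount" \
    --clone trust,destinations
```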
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Optional
-b, --application Application name for Java applications or productive SAP HANA database system, and
application name in the format <database name>:<application name> for SAP HANA XS
applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Default: 50
Type: string
Default: 60
Type: string
-w, --overwrite Should be used only if there is an existing alert that needs to be updated.
Default: false
Type: boolean
5.4.4.15 create-db-ase
This command creates an ASE database with the specified ID and settings on an ASE database system.
Parameters
Required
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
--db-password Password of the database user used to access the ASE database (optional,
queried at the command prompt if omitted)
Note
This parameter sets the maximum database size. The minimum database size is 24 MB. You receive an error if you enter a database size that exceeds the quota for this database system.
The size of the transaction log will be at least 25% of the database size you specify.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you reach the
maximum number of databases. For more information on user database limits, see Creating Databases [page
749].
Example
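A sketch of a possible invocation, combining the parameters listed above. All values are placeholders, and the --db-size flag is an assumption for the size parameter that the note above describes without naming:

```shell
# Create ASE database "myasedb" with a hypothetical size setting
neo create-db-ase --account mysubaccount --user myuser \
    --host hana.ondemand.com --id myasedb \
    --db-user mydbuser --db-size 128MB
```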
Related Information
5.4.4.16 create-db-hana
This command creates an SAP HANA database with the specified ID and settings on an SAP HANA database system enabled for multitenant database containers.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Note
To create a tenant database on trial, use -trial- instead of the ID of a productive HANA
database system.
--db-password Password of the SYSTEM user used to access the HANA database (optional, queried at the
command prompt if omitted)
Optional
--dp-server Enables or disables the data processing server of the HANA database: 'enabled', 'disabled'
(default).
--script-server Enables or disables the script server of the HANA database: 'enabled', 'disabled' (default).
--web-access Enables or disables access to the HANA database from the Internet: 'enabled' (default),
'disabled'
--xsengine-mode Specifies how the XS engine should run: 'embedded' (default), 'standalone'.
Note
There is a limit to the number of databases you can create, and you'll see an error message when you reach the
maximum number of databases. For more information on tenant database limits, see Creating Databases [page
749].
Related Information
5.4.4.17 create-db-user-ase
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--db-password Password of the database user used to access the ASE database (optional, queried at the
command prompt if omitted)
5.4.4.18 create-ecm-repository
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Type: string
Optional
-d, --display-name Can be used to provide a more readable name of the repository. Equals the --name value if left blank. You cannot change the display name later on.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-e, --description Description of the repository. You cannot change the description later on.
Type: string
-v, --virus-scan Can be used to activate the virus scanner and check all incoming documents for viruses.
Default: true
Type: boolean
Recommendation
For repositories that are used by untrusted users or for unknown content, we recommend that you enable the virus scanner by setting this parameter to true. Enabling the virus scanner could impair the upload performance.
If a virus is detected, the upload process for the document fails with a virus scanner exception.
-p, --password To protect your password, enter it only when prompted by the console client and not ex
plicitly as a parameter in the properties file or the command line.
Type: string
Example
5.4.4.19 create-jmx-check
Parameters
Note
The JMX check settings support the JMX specification. For more information, see Java Management Extensions
(JMX) Specification .
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
The name must be up to 99 characters long and must not contain the following symbols: `~!$%^&*|'"<>?,()=
Type: string
-O, --object-name Object name of the MBean that you want to call
Type: string
-A, --attribute Name of the attribute inside the class with the specified object name.
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
If the parameter is not used, the JMX check will be on subaccount level for all running
applications in the subaccount.
It is needed only if the attribute is a composite data structure. This key defines the item in
the composite data structure. For more information about the composite data structure,
see Class CompositeDataSupport .
Type: string
-o, --operation Operation that has to be called on the MBean after checking the attribute value.
It is useful for resetting statistical counters to restart an operation on the same MBean.
Type: string
Type: string
The threshold can be a regular expression in case of string values or compliant with the
official nagios threshold/ranges format. For more information about the format in case it is
a number, see the official nagios documentation .
The threshold can be a regular expression in case of string values or compliant with the
official nagios threshold/ranges format. For more information about the format in case it is
a number, see the official nagios documentation .
Default: false
Type: boolean
Note
When you use this parameter, a new JMX check is not created if the one you specify does not exist.
For a typical example of how to configure a JMX check for your application and subscribe recipients to receive notification alerts, see .
The following example creates a JMX check that returns a warning state of the metric if the value is between 10
and 100 bytes, and returns a critical state if the value is greater than 100 bytes. If the value is less than 10 bytes,
the returned state is OK.
This command creates a HANA database or schema with the specified ID on a shared or dedicated database.
Caution
This command is not supported for productive SAP HANA database systems. For more information about how
to create schemas on productive SAP HANA database systems, see Binding SAP HANA Databases to Java
Applications [page 754].
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-d, --dbtype Creates the HANA database or schema on a shared database system. Syntax: 'type:version'. Version is optional.
Type: string
--dbsystem Creates the schema on a dedicated database system. To see the available dedicated database systems, execute the list-dbms command.
Type: string
Caution
The list-dbms command lists different database types, including productive SAP
HANA database systems. Do not use the create-schema command for productive
SAP HANA database systems. For more information about how to create schemas on
productive SAP HANA database systems, see Binding SAP HANA Databases to Java
Applications [page 754].
It must start with a letter and can contain lowercase letters ('a' - 'z') and numbers ('0' - '9'). For schema IDs, uppercase letters ('A' - 'Z') and the special characters '.' and '-' are also allowed.
Note that the actual ID assigned in the database will differ from the one you specify.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.4.4.21 create-security-rule
This console client command creates a security group rule for a virtual machine.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--from-port The start of the range of allowed ports. The <from_port> value must be less than or equal
to the <to_port> value.
--to-port The end of the range of allowed ports. The <to_port> value must be greater than or equal
to the <from_port> value.
--source-id The name of the system that you want to connect from.
For an SAP HANA system, the --source-id is the SAP HANA database system name.
For a Java application, it is the application name.
Example
Related Information
Manage Network Communication for SAP Cloud Platform Virtual Machines [page 1770]
Creates an SSL host for configuration of custom domains. This SSL host will be serving your custom domain.
Parameters
To list all parameters available for this command, execute neo help create-ssl-host in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-n, --name Unique identifier of the SSL host. If not specified, 'default' value is set.
Example
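A minimal invocation might look like the following sketch; the subaccount, user, host, and SSL host name are placeholders:

```shell
# Create an SSL host named "mysslhostname" for a custom domain
neo create-ssl-host --account mysubaccount --user myuser \
    --host hana.ondemand.com --name mysslhostname
```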
Note
In the command output, you get the SSL host. For example, "A new SSL host [mysslhostname] was
created and is now accessible on host [123456.ssl.ondemand.com]". Write down the
123456.ssl.ondemand.com host as you will later need it for the DNS configuration.
5.4.4.23 create-vm
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Default: off
If you do not provide -pkp as a parameter in the command line, you will be prompted to
enter a passphrase.
If you do not enter a passphrase, the command will be executed but the private key will
not be encrypted.
-l, --ssh-key-location The path to a public key or certificate that will be uploaded and used to log in to the newly created virtual machine.
Type: string
-k, --ssh-key-name The name of an already existing public key to be used to log in to the newly created virtual machine.
Type: string. It can contain only alphanumeric characters (0-9, a-z, A-Z), underscore (_)
and hyphen (-).
-v, --volume-id Unique identifier of the volume from which the virtual machine will be created.
Type: string
Condition: Use when you want to create a new virtual machine from a volume.
Type: string
Condition: Use when you want to create a new virtual machine from a volume snapshot.
Default: off
Example
5.4.4.24 create-volume-snapshot
Takes a snapshot of the file system of the specified virtual machine volume. The operation is asynchronous.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-v, --volume-id Unique identifier of the volume from which the snapshot will be taken
Type: string
Example
5.4.4.25 delete-account
Deletes a particular subaccount. Only the user who has created the subaccount is allowed to delete it.
Note
You cannot delete a subaccount if it still has associated subscriptions, non-shared database systems, database
schemas, deployed applications, HTML5 applications, or document service repositories. You need to delete
these subaccount entities before you proceed with the subaccount deletion. For information about how to delete them, see Related Information. Also make sure that there are no running virtual machines in the subaccount.
Parameters
To list all parameters available for this command, execute neo help delete-account in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Related Information
5.4.4.26 delete-availability-check
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
-b, --application Application name for Java applications or productive SAP HANA database system, and
application name in the format <database name>:<application name> for SAP HANA XS
applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
5.4.4.27 delete-db-ase
This command deletes the ASE database with the specified ID.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--force or -f Forcefully deletes the ASE database, including all application bindings
Example
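A sketch of a possible invocation, using the --force parameter described above; the database ID and connection values are placeholders:

```shell
# Delete ASE database "myasedb", removing any application bindings
neo delete-db-ase --account mysubaccount --user myuser \
    --host hana.ondemand.com --id myasedb --force
```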
5.4.4.28 delete-db-hana
This command deletes the SAP HANA database with the specified ID on an SAP HANA database system enabled for multitenant database container support.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--force or -f Forcefully deletes the HANA database, including all application bindings
Example
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Example
Related Information
This command deletes destination configuration properties files and JDK files. You can delete them on
subaccount, application or subscribed application level.
Parameters
To list all parameters available for this command, execute neo help delete-destination in the command
line.
Required
-a, --account Your subaccount. The subaccount for which you provide username and password.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-b, --application The application for which you delete a destination. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by
the console client and not explicitly as a parameter in the properties file or the command
line.
Type: string
Type: string
Examples
Related Information
5.4.4.31 delete-ecm-repository
This command deletes a repository including the data of any tenants in the repository, unless you restrict the
command to a specific tenant.
Caution
Be very careful when using this command. Deleting a repository permanently deletes all data. This data cannot
be recovered.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Type: string
Optional
Deletes the repository for the given tenant only instead of for all tenants. If no tenant name
is provided, the repositories for all tenants are deleted.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
5.4.4.32 delete-domain-certificate
Deletes a certificate.
Note
Cannot be undone. If the certificate is mapped to an SSL host, the certificate will be removed from the SSL host
too.
Parameters
To list all parameters available for this command, execute neo help delete-domain-certificate in the
command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-n, --name Name of the certificate that you set to the SSL host
Example
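A minimal invocation might look like the following sketch; the certificate name and connection values are placeholders:

```shell
# Delete the domain certificate "mycert" (cannot be undone)
neo delete-domain-certificate --account mysubaccount --user myuser \
    --host hana.ondemand.com --name mycert
```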
Related Information
5.4.4.33 delete-hanaxs-certificates
This command deletes certificates that contain a specified string in the Subject CN.
Restriction
Use this command only for SAP HANA SP9 or earlier versions. For newer SAP HANA versions, use the
respective SAP HANA native tools.
Parameters
To list all parameters available for this command, execute neo help delete-hanaxs-certificates in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-cn-string, --contained-string A part of the certificate CN. All certificates that contain this string will be deleted.
Default: none
Example
To delete all certificates containing John Doe in their Subject DN, execute:
or
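The two equivalent forms (short and long parameter names) might look like the following sketch; the subaccount, user, and host values are placeholders:

```shell
# Delete all certificates whose CN contains "John Doe"
neo delete-hanaxs-certificates --account mysubaccount --user myuser \
    --host hana.ondemand.com -cn-string "John Doe"

# or, using the long parameter name
neo delete-hanaxs-certificates --account mysubaccount --user myuser \
    --host hana.ondemand.com --contained-string "John Doe"
```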
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-n, --name or -A, --all Name of the JMX check to be deleted, or, with -A, all JMX checks configured for the given subaccount and application are deleted.
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
Deletes a solution resource file from the system repository of a specified extension subaccount.
Note
This is a beta feature available for SAP Cloud Platform extension subaccounts. For more information about the
beta features, see Using Beta Features in Subaccounts [page 16].
Parameters
To list all parameters available for this command, execute neo help delete-resource in the command line.
Required
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Example
To delete a solution resource from the system repository for your extension subaccount, execute:
Parameters
To list all parameters available for this command, execute neo help delete-ssl-host in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
Related Information
This command is used to delete a keystore by deleting the keystore file. You can delete keystores on subaccount,
application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help delete-keystore in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
On Subscription Level
On Subaccount Level
Related Information
5.4.4.38 delete-mta
This command deletes Multi-Target Application (MTA) archives that are deployed to your subaccount.
Parameters
To list all parameters available for this command, execute neo help delete-mta in the command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
Optional
-y, --synchronous Instructs the console to wait for the operation to finish. It takes no value.
To delete MTA archives with IDs <MTA_ID1> and <MTA_ID2> that have been deployed to your subaccount,
execute:
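A possible invocation sketch, keeping the placeholder IDs from the text above. The --id flag for passing the MTA IDs is an assumption, as the parameter list above names only -a, -p, and -y:

```shell
# Delete two deployed MTA archives and wait for the operation to finish
neo delete-mta --account mysubaccount --user myuser \
    --host hana.ondemand.com --id <MTA_ID1>,<MTA_ID2> --synchronous
```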
5.4.4.39 delete-schema
This command deletes the specified schema, including all data it contains. A schema cannot be deleted if it is still
bound to an application. To force the deletion, use the force parameter, but bear in mind that this will also delete all bindings that still exist.
Schema backups are kept for 14 days and may be used to restore mistakenly deleted data (available by special
request only).
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-f, --force Forcefully deletes the schema, including all application bindings
Default: off
Default: off
Example
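A minimal invocation might look like the following sketch, using the -f, --force parameter described above; the schema ID and connection values are placeholders:

```shell
# Delete schema "myschema" together with any remaining bindings
neo delete-schema --account mysubaccount --user myuser \
    --host hana.ondemand.com --id myschema --force
```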
Related Information
5.4.4.40 delete-security-rule
This console client command deletes a security rule configured for a virtual machine.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--from-port The start of the range of allowed ports. The <from_port> value must be less than or equal
to the <to_port> value.
--to-port The end of the range of allowed ports. The <to_port> value must be greater than or equal
to the <from_port> value.
--source-id The name of the system that you want to connect from.
For an SAP HANA system, the --source-id is the SAP HANA database system name.
For a Java application, it is the application name.
Example
Related Information
Manage Network Communication for SAP Cloud Platform Virtual Machines [page 1770]
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Optional
Default: off
Example
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-v, --volume-id Unique identifier of the volume that you want to delete
Type: string
Example
5.4.4.43 delete-volume-snapshot
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-s, --snapshot-id Unique identifier of the volume snapshot that you want to delete
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.4.4.44 deploy
Deploying an application publishes it to SAP Cloud Platform. Use the optional parameters to apply specific configuration settings to the deployed application.
To list all parameters available for this command, execute neo help deploy in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-s, --source A comma-separated list of file locations, pointing to WAR files, or folders containing them
Note
The size of an application can be up to 1.5 GB. If the application is packaged as a WAR
file, the size of the unzipped content is taken into account.
If you want to deploy more than one application on one and the same application process, put all WAR files in the same folder and execute the deployment with this source, or specify them as a comma-separated list.
To deploy an application in more than one region, execute the deploy separately for each
region host.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Command-specific parameters
Default: 2
Type: integer
--delta Deploys only the changes between the provided source and the deployed content. New
content will be added; missing content will be deleted. Recommended for development
use to speed up the deployment.
--ev Environment variables for configuring the environment in which the application runs.
Note
For security reasons, do not specify any confidential information in plain text format,
such as usernames and passwords. You can either encrypt such data, or store it in a
secure manner. For more information, see Keystore Service [page 2229].
Sets one environment variable by removing the previously set value; can be used multiple
times in one execution.
If you provide a key without any value (--ev <KEY1>=), the --ev parameter is ignored.
Default: depends on the SAP Cloud Platform SDK for Neo environment
-m, --minimum-processes Minimum number of application processes, on which the application can be started
Default: 1
-M, --maximum-processes Maximum number of application processes, on which the application can be started
Default: 1
System properties (-D<name>=<value>) separated with space that will be used when
starting the application process.
Memory settings of your compute units. You can set the following memory parameters: -Xms, -Xmx, -XX:PermSize, -XX:MaxPermSize.
We recommend that you use the default memory settings. Change them only if necessary
and note that this may impact the application performance or its ability to start.
Use the parameter if you want to choose an application runtime container different from
the one coming with your SDK. To view all available runtime containers, use list-runtimes
[page 1929].
--runtime-version SAP Cloud Platform runtime version on which the application will be started and will run on the same version after a restart. Otherwise, by default, the application is started on the latest minor version (of the same major version), which is backward compatible and includes the latest corrections (including security patches), enhancements, and updates. Note that choosing this option does not affect already started application processes.
You can view the recommended versions by executing the list-runtime-versions command.
Note
If you choose your runtime version, consider its expiration date and plan updating to a
new version regularly.
For more information, see Choose Application Runtime Version [page 1698].
Default: off
Possible values: on (allow compression), off (disable compression), force (forces compression for all responses), or an integer (which enables compression and specifies the compression-min-size value in bytes).
For more information, see Enable and Configure Gzip Response Compression [page 1701]
--compressible-mime-type A comma-separated list of MIME types for which compression will be used
Default: text/html, text/xml, text/plain
--connection-timeout Defines the number of milliseconds to wait for the request URI line to be presented after
accepting a connection.
Default: 20000
--max-threads Specifies the maximum number of simultaneous requests that can be handled
Default: 200
--uri-encoding Specifies the character encoding used to decode the URI bytes on application request
Default: ISO-8859-1
For more information, see the encoding sets supported by Java SE 6 and Java SE 7 .
Example
Here are examples of some additional configurations. If your application is already started, stop it and start it again
for the changes to take effect.
You can deploy an application on a host different from the default one by specifying the host parameter. For
example, to use the region (host) located in the United States, execute:
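A sketch of such a call (the subaccount, application, user, and WAR file names are placeholders; us1.hana.ondemand.com is assumed here as the US East host):

```shell
neo deploy --host us1.hana.ondemand.com --account mysubaccount --application myapp --user p1234567 --source example.war
```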
To specify the compute unit size on which you want the application to run, use the --size parameter with one of
the following values:
Available sizes depend on your subaccount type and what options you have purchased. For trial accounts, only the
Lite edition is available.
For example, if you have an enterprise account and have purchased a package with Premium edition compute
units, then you can run your application on a Premium compute unit size, by executing the following command:
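For illustration only; the value premium for the --size parameter is an assumption, so confirm the accepted tokens with neo help deploy (all other names are placeholders):

```shell
neo deploy --size premium --host hana.ondemand.com --account mysubaccount --application myapp --user p1234567 --source example.war
```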
When deploying an application, name the WAR file with the desired context root.
If you want to deploy it in the "/" context root, rename your WAR file to ROOT.war.
Using the --uri-encoding parameter, you can define the character encoding that will be used to decode the URI bytes on application request. For example, to use the UTF-8 encoding, which can represent every character in the Unicode character set, execute:
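A minimal sketch (names are placeholders):

```shell
neo deploy --uri-encoding UTF-8 --host hana.ondemand.com --account mysubaccount --application myapp --user p1234567 --source example.war
```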
Related Information
5.4.4.45 deploy-local
Parameters
Required
-s, --source Source for deployment (comma separated list of WAR files or folders containing one or
more WAR files)
Optional
Related Information
5.4.4.46 deploy-mta
This command deploys Multi-Target Application (MTA) archives. You can deploy one or more MTA archives to your subaccount in one go.
Parameters
To list all parameters available for this command, execute neo help deploy-mta in the command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
-s, --source A comma-separated list of file locations, pointing to the MTA archive files, or the folders
containing them.
Optional
Command-specific parameters
-y, --synchronous Triggers the deployment and waits until the deployment operation finishes. The command
without the --synchronous parameter triggers deployment and exits immediately
without waiting for the operation to finish. Takes no value.
-e, --extensions Defines one or more extensions to the deployment descriptor. A comma-separated list of
file locations, pointing to the extension descriptor files, or the folders containing them. For
more information, see Defining MTA Extension Descriptors [page 1322].
--mode Defines whether the deployment method is a standard deployment or a provider deployment. The available values are import (default value) or providerImport.
Example
You can deploy an MTA archive on a host different from the default one by specifying the host parameter. For
example, to use the region (host) located in the United States, execute:
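A sketch, with placeholder names and us1.hana.ondemand.com assumed as the US East host:

```shell
neo deploy-mta --host us1.hana.ondemand.com --account mysubaccount --user p1234567 --source example.mtar --synchronous
```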
Related Information
5.4.4.47 disable
This command stops the creation of new connections to an application or application process, but keeps the
already running sessions alive. You can check if an application or application process has been disabled by
executing the status command.
Parameters
To list all parameters available for this command, execute neo help disable in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-i, --application-process-id Unique ID of a single application process. Use it to disable a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the subaccount and application parameters. You can list the application process IDs by using the status command.
Default: none
Example
To disable a single application process, first identify the application process you want to disable by executing neo status:
From the generated list of application process IDs, copy the ID you need and execute neo disable for it:
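The two steps might look as follows (placeholder names; <ID> stands for the copied process ID):

```shell
# List the application processes and their IDs
neo status --host hana.ondemand.com --account mysubaccount --application myapp --user p1234567

# Disable a single process by its unique ID
neo disable --host hana.ondemand.com --user p1234567 --application-process-id <ID>
```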
5.4.4.48 display-application-properties
The command displays the set of properties of a deployed application, such as runtime version, minimum and maximum processes, and Java version.
Parameters
To list all parameters available for this command, execute neo help display-application-properties in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Related Information
5.4.4.49 display-csr
Parameters
To list all parameters available for this command, execute neo help display-csr in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-f, --file name Name of the local file where the CSR is stored
Example
Related Information
5.4.4.50 display-ecm-repository
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
5.4.4.51 display-db-info
This command displays detailed information about the selected database. This includes the assigned database
type, the database version, and a list of bindings with the application and data source names.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
5.4.4.52 display-mta
This command displays a Multi-Target Application (MTA) archive that is deployed to your subaccount.
Parameters
To list all parameters available for this command, execute neo help display-mta in the command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
To display an MTA archive with an ID <MTA_ID> that has been deployed to your subaccount, execute:
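A sketch of the call; exactly how the MTA ID is passed is an assumption here, so check neo help display-mta for the precise syntax:

```shell
neo display-mta --host hana.ondemand.com --account mysubaccount --user p1234567 <MTA_ID>
```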
5.4.4.53 display-schema-info
This command displays detailed information about the selected schema. This includes the assigned database
type, the database version, and a list of bindings with the application and data source names.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
5.4.4.54 display-volume-snapshot
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
5.4.4.55 download-keystore
This command downloads a keystore file. You can download keystores on subaccount, application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help download-keystore in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-l,--location Local directory where the keystore will be saved. If it is not specified, the current directory
is used.
Type: string
-w, --overwrite Overwrites a file with the same name if such already exists. If you do not explicitly include
the --overwrite argument, you will be notified and asked if you want to overwrite the
file.
Example
On Subscription Level
On Application Level
On Subaccount Level
Related Information
5.4.4.56 edit-ecm-repository
Changes the name, key, or virus scan settings of a repository. You cannot change the display name or the
description.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Type: string
Optional
Caution
If not used, the virus scan setting of the whole repository changes.
Type: string
Type: string
Type: string
-v, --virus-scan Can be used to activate the virus scanner and check all incoming documents for viruses.
Default: true
Type: boolean
Recommendation
For repositories that are used by untrusted users or for unknown content, we recommend that you enable the virus scanner by setting this parameter to true. Note that enabling the virus scanner could impair the upload performance.
If a virus is detected, the upload process for the document fails with a virus scanner exception.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.4.4.57 enable
This command enables new connection requests to a disabled application or application process. The enable
command cannot be used for an application that is in maintenance mode.
To list all parameters available for this command, execute neo help enable in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-i, --application-process-id Unique ID of a single application process. Use it to enable a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the subaccount and application parameters. You can list the application process IDs by using the status command.
Default: none
Example
To enable a single application process, first identify the application process you want to enable by executing neo status:
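As with disable, a two-step sketch (placeholder names; <ID> is the copied process ID):

```shell
# Identify the disabled process
neo status --host hana.ondemand.com --account mysubaccount --application myapp --user p1234567

# Enable it by its unique ID
neo enable --host hana.ondemand.com --user p1234567 --application-process-id <ID>
```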
Related Information
5.4.4.58 get-destination
This command downloads (reads) destination configuration properties files and destination certificate files. You
can download them on subaccount, application or subscribed application level.
Parameters
To list all parameters available for this command, execute neo help get-destination in the command line.
Required
-a, --account Your subaccount, for which you provide a username and password.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-b, --application The application for which you download a destination. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--localpath The path on your local file system where a destination or a JKS file will be downloaded. If
not set, no files will be downloaded.
Type: string
--name The name of the destination or JKS file to be downloaded. If not set, the names of all destination or JKS files for the service will be listed.
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by
the console client and not explicitly as a parameter in the properties file or the command
line.
Type: string
Note
If you download a destination configuration file that contains a password field, the
password value will not be visible. Instead, after Password =..., you will only see
an empty space. You will need to learn the password in other ways.
Type: string
Examples
Related Information
5.4.4.59 generate-csr
Parameters
To list all parameters available for this command, execute neo help generate-csr in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string (It can contain alphanumerics, '.', '-' and '_')
Allowed attributes:
Optional
-s, --subject-alternative-name A comma-separated list of all domain names to be protected with this certificate, used as the value for the Subject Alternative Name field of the generated certificate.
Type: string
Example
Related Information
5.4.4.60 get-log
Parameters
To list all parameters available for this command, execute neo help get-log in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-d, --directory Local folder location under which the file will be downloaded. If the directory you have
specified does not exist, it will be created.
Type: string
Type: string
Note
To find out the name of the log file to download, use the list-logs command to see
the available log files of your application. For more information, see list-logs [page
1925].
-p, --password Password for the specified user. To protect your password, enter it only when prompted by
the console client and not explicitly as a parameter in the properties file or the command
line.
Type: string
Type: string
-w, --overwrite Overwrites a file with the same name if such already exists. If you do not explicitly include
the --overwrite argument, you will be notified and asked if you want to overwrite the
file.
Default: true
Type: boolean
Example
Related Information
5.4.4.61 grant-db-access
This command gives another subaccount permission to access a database. The subaccount providing the
permission and the subaccount receiving the permission must be part of the same global account.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Optional
-to-account The subaccount to receive access permission. The subaccount providing the permission and the subaccount receiving the permission must be part of the same global account.
-permissions Comma-separated list of access permissions to the database. Acceptable values: 'TUNNEL', 'BINDING'.
Example
Related Information
5.4.4.62 grant-db-tunnel-access
This command generates a token, which allows the members of another subaccount to access a database using a
database tunnel.
Required
Type: string
The subaccount to be granted database tunnel access, based on the access token
Type: string
Example
Related Information
5.4.4.63 grant-schema-access
This command gives an application in another subaccount access to a schema based on a one-time access token.
The access token is used to bind the schema to the application.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.4.4.64 hcmcloud-create-connection
This command configures the connectivity of an extension application to a SAP SuccessFactors system associated with a specified SAP Cloud Platform subaccount. The command creates the required HTTP destination.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● manageExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-create-connection in the
command line.
Required
-b, --application The name of the extension application for which you are creating the connection. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Optional
-w, --overwrite If a connection with the same name already exists, overwrites it. If you do not explicitly specify the --overwrite parameter and a connection with the same name already exists, the command fails to execute.
Example
To configure a connection of type OData with technical user for an extension application in a subaccount located in
the United States (US East) region, execute:
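One possible shape of the command; the --type flag and its value odata_technical_user are assumptions for illustration, so consult neo help hcmcloud-create-connection for the real parameter names:

```shell
neo hcmcloud-create-connection --host us1.hana.ondemand.com --account mysubaccount --application myapp --user p1234567 --type odata_technical_user
```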
Result
After executing this command, you have one of the following destinations in your subaccount:
● sap_hcmcloud_core_odata
● sap_hcmcloud_core_odata_technical_user
You can consume this destination in your application using one of these APIs:
5.4.4.65 hcmcloud-delete-connection
This command removes the specified connection configured between an extension application and a SAP SuccessFactors system associated with the specified SAP Cloud Platform subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● manageExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-delete-connection in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To delete an OData connection for an extension application running in an extension subaccount in the US East
region, execute:
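A sketch; the --name flag is an assumption, and the destination name matches one of the destinations this command family creates (placeholder names otherwise):

```shell
neo hcmcloud-delete-connection --host us1.hana.ondemand.com --account mysubaccount --application myapp --user p1234567 --name sap_hcmcloud_core_odata
```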
5.4.4.66 hcmcloud-disable-application-access
This command removes an extension application from the list of authorized assertion consumer services for the
SAP SuccessFactors system associated with the specified subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● manageExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-disable-application-access in the command line.
-b, --application The name of the extension application for which you are deleting the connection. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are deleting the connection
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To remove a Java extension application from the list of authorized assertion consumer services for the SAP
SuccessFactors system associated with a subaccount located in the United States (US East), execute:
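A sketch with placeholder names (java is the application-type default noted elsewhere in this reference):

```shell
neo hcmcloud-disable-application-access --host us1.hana.ondemand.com --account mysubaccount --application myapp --application-type java --user p1234567
```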
The command removes the entry for the application from the list of the authorized service provider assertion consumer services for the SuccessFactors system associated with the specified subaccount. If an entry for the extension application does not exist, the command fails.
5.4.4.67 hcmcloud-display-application-access-status
This command displays the status of an extension application entry in the list of assertion consumer services for the SAP SuccessFactors system associated with the specified subaccount. The returned results contain the extension application URL.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● readJavaApplications
● readHTML5Applications
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-display-application-access-status in the command line.
Required
-b, --application The name of the extension application for which you are displaying the status in the list of assertion consumer services. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are creating the connection
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To display the status of an application entry in the list of authorized assertion consumer services for the SAP
SuccessFactors system associated with a subaccount in the region located in the United States (US East),
execute:
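A sketch with placeholder names:

```shell
neo hcmcloud-display-application-access-status --host us1.hana.ondemand.com --account mysubaccount --application myapp --application-type java --user p1234567
```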
5.4.4.68 hcmcloud-enable-application-access
This command registers an extension application as an authorized assertion consumer service for the SAP
SuccessFactors system associated with the specified subaccount to enable the application to use the SAP
SuccessFactors identity provider (IdP) for authentication.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● manageExtensionConfigurations
● readJavaApplications
● readHTML5Applications
Parameters
To list all parameters available for this command, execute neo help hcmcloud-enable-application-access in the command line.
Required
-b, --application The name of the extension application for which you are creating the connection. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are creating the connection
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
To register an extension application as an authorized assertion consumer service for the SAP SuccessFactors
system associated with a subaccount located in the United States (US East) region, execute:
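For example, with placeholder names:

```shell
neo hcmcloud-enable-application-access --host us1.hana.ondemand.com --account mysubaccount --application myapp --application-type java --user p1234567
```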
The command creates an entry for the application in the list of the authorized service provider assertion consumer services for the SAP SuccessFactors system associated with the specified subaccount. The entry contains the main URL of the extension application, the service provider audience URL, and the service provider logout URL. If an entry for the given extension application already exists, this entry is overwritten.
5.4.4.69 hcmcloud-enable-role-provider
This command enables the SAP SuccessFactors role provider for the specified Java application.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● manageExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-enable-role-provider in the
command line.
-b, --application The name of the extension application for which you are creating the connection. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To enable the SAP SuccessFactors role provider for your Java application in an extension subaccount located in
the United States (US East) region, execute:
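A sketch (placeholder names):

```shell
neo hcmcloud-enable-role-provider --host us1.hana.ondemand.com --account mysubaccount --application myapp --user p1234567
```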
5.4.4.70 hcmcloud-get-registered-home-page-tiles
This command lists the SAP SuccessFactors Employee Central (EC) home page tiles registered in the SAP SuccessFactors company instance associated with the extension subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● readHTML5Applications
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Note
This is a beta feature available for SAP Cloud Platform extension subaccounts. For more information about the
beta features, see Using Beta Features in Subaccounts [page 16].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-get-registered-home-page-tiles in the command line.
-b, --application The name of the extension application for which you are listing the home page tiles. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
If you do not specify the application parameter, the command lists all tiles registered in the SuccessFactors company instance associated with the specified extension subaccount.
--application-type The type of the extension application for which you are listing the home page tiles
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To list the home page tiles registered for a Java extension application running in your subaccount in the US East
region, execute:
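A possible invocation (placeholder names):

```shell
neo hcmcloud-get-registered-home-page-tiles --host us1.hana.ondemand.com --account mysubaccount --application myapp --application-type java --user p1234567
```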
There is no lifecycle dependency between the tiles and the application, so the application might not be started or might no longer be deployed.
5.4.4.71 hcmcloud-import-roles
This command imports SAP SuccessFactors HCM suite roles into the SAP SuccessFactors customer instance linked to an extension subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● manageExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-import-roles in the command
line.
Required
Type: string
Note
The file size must not exceed 500 KB.
Type: string
Type: string
Example
To import the role definitions for an extension application from the system repository for your extension
subaccount into the SuccessFactors customer instance connected to this subaccount, execute:
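A sketch; the --location flag for the roles definition file is an assumption, so verify the parameter name with neo help hcmcloud-import-roles (other names are placeholders):

```shell
neo hcmcloud-import-roles --host us1.hana.ondemand.com --account mysubaccount --user p1234567 --location roles.json
```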
If any of the roles that you are importing already exists in the target system, the command fails to execute.
Related Information
5.4.4.72 hcmcloud-list-connections
This command lists the connections configured for the specified extension application.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
To list all parameters available for this command, execute neo help hcmcloud-list-connections in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To list the connections for an extension application running in an extension subaccount in the US East region,
execute:
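A minimal invocation sketch; all values are placeholders, and the assumption is that the extension application is passed with the -b, --application parameter, as for the related hcmcloud commands:

```shell
# Sketch: list the connections configured for an extension application.
# mysubaccount, myuser, the host, and myextensionapp are placeholders.
neo hcmcloud-list-connections --account mysubaccount --user myuser \
    --host us1.hana.ondemand.com --application myextensionapp
```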
This command registers the SAP SuccessFactors Employee Central (EC) home page tiles in the SAP
SuccessFactors company instance associated with the extension subaccount. The home page tiles must be
described in a tile descriptor file for the extension application in JSON format.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● manageExtensionConfigurations
● readJavaApplications
● readHTML5Applications
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Note
This is a beta feature available for SAP Cloud Platform extension subaccounts. For more information about the
beta features, see Using Beta Features in Subaccounts [page 16].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-register-home-page-tiles
in the command line.
Required
Type: string
Note
The file size must not exceed 100 KB.
-b, --application The name of the extension application for which you are registering the home page tiles.
Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are registering the home page tiles
Default: java
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To register a home page tile for a Java extension application running in your subaccount in the US East region, execute:
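A minimal invocation sketch. The subaccount, user, host, and application values are placeholders; the name of the tile-descriptor file parameter (--tiles) is an assumption, since it is not shown in this excerpt:

```shell
# Sketch: register the home page tiles described in tiles.json
# for a Java extension application.
# mysubaccount, myuser, the host, and myextensionapp are placeholders;
# --tiles as the descriptor-file parameter name is an assumption.
neo hcmcloud-register-home-page-tiles --account mysubaccount --user myuser \
    --host us1.hana.ondemand.com --application myextensionapp \
    --application-type java --tiles tiles.json
```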
Related Information
tiles.json
This command removes the SAP SuccessFactors EC home page tiles registered for the extension application in the
SAP SuccessFactors company instance associated with the specified extension subaccount.
Prerequisites
To be able to use this command, you need the following platform scopes to be specified for your custom platform
role:
● readExtensionConfigurations
● manageExtensionConfigurations
For more information, see Platform Scopes and Manage Custom Platform Roles in the Neo Environment.
Note
This is a beta feature available for SAP Cloud Platform extension subaccounts. For more information about the
beta features, see Using Beta Features in Subaccounts [page 16].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-unregister-home-page-tiles in the command line.
-b, --application The name of the extension application for which you are removing the home page tiles.
Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
You must use the same application name that you have specified when registering the
tiles.
--application-type The type of the extension application for which you are removing the home page tiles
Default: java
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To remove the home page tiles registered for a Java extension application running in your subaccount in the US East region, execute:
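A minimal invocation sketch using only the parameters documented above; all values are placeholders, and the application name must match the one used at registration time:

```shell
# Sketch: remove the registered home page tiles for a Java extension app.
# mysubaccount, myuser, the host, and myextensionapp are placeholders.
neo hcmcloud-unregister-home-page-tiles --account mysubaccount --user myuser \
    --host us1.hana.ondemand.com --application myextensionapp \
    --application-type java
```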
There is no lifecycle dependency between the tiles and the application: the tiles remain registered even if the application is stopped or no longer deployed.
The hot-update command enables a developer to redeploy and update the binaries of an application started on
one process faster than the normal deploy and restart. Use it to apply and activate your changes during
development and not for updating productive applications.
There are three options for hot-update specified with the --strategy parameter:
Limitations:
Parameters
To list all parameters available for this command, execute neo help hot-update in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-s, --source A comma-separated list of file locations, pointing to WAR files, or folders containing them.
Acceptable values:
● replace-binaries
● restart-runtime
● reprovision-runtime
Optional
Default: 2
Type: integer
--delta Uploads only the changes between the provided source and the deployed content. New
content will be added; missing content will be deleted. Recommended for development
use to speed up the deployment.
Example
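A minimal invocation sketch combining the parameters documented above (--source, --strategy, --delta); the subaccount, application, user, host, and WAR file are placeholders:

```shell
# Sketch: hot-update a running Java app with a rebuilt WAR,
# replacing binaries only and uploading just the changed content.
# mysubaccount, myapp, myuser, the host, and example.war are placeholders.
neo hot-update --account mysubaccount --application myapp --user myuser \
    --host us1.hana.ondemand.com --source example.war \
    --strategy replace-binaries --delta
```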
5.4.4.76 install-local
This command installs a server runtime in a local folder, by default <SDK installation folder>/server.
neo install-local
Optional
Default: 8009
Default: 8080
Default: 8443
Default: 1717
Related Information
5.4.4.77 list-application-datasources
This command lists all schemas and productive database instances bound to an application.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.4.4.78 list-availability-check
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
-b, --application The application name for Java applications or, for SAP HANA XS applications, the productive SAP HANA database system and application name in the format <database name>:<application name>
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-R, --recursively Lists availability checks recursively starting from the specified level. For example, if only
'account' is passed as an argument, it starts from the subaccount level and then lists all
checks configured on application level.
Default: false
Type: boolean
Example
Example for listing availability checks recursively starting on subaccount level and listing the checks configured
for Java and SAP HANA XS applications:
Sample output:
5.4.4.79 list-accounts
Lists all subaccounts that a customer has. Authorization is performed against the subaccount passed with the --account parameter.
Parameters
To list all parameters available for this command, execute neo help list-accounts in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
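A minimal invocation sketch; the subaccount, user, and host are placeholders:

```shell
# Sketch: list all subaccounts visible to the user.
# Authorization is checked against the subaccount passed as --account.
# mysubaccount, myuser, and the host are placeholders.
neo list-accounts --account mysubaccount --user myuser \
    --host us1.hana.ondemand.com
```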
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
-b, --application The application name for Java applications or, for SAP HANA XS applications, the productive SAP HANA instance database name and application name in the format <instance name>:<application name>
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-R, --recursively Lists alert recipients recursively, starting from the specified level. For example, if only 'subaccount' is passed as an argument, it starts from the subaccount level and then lists all recipients configured on application level.
Default: false
Type: boolean
Example
Sample output:
5.4.4.81 list-application-domains
Parameters
To list all parameters available for this command, execute neo help list-application-domains in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Example
5.4.4.82 list-custom-domain-mappings
Parameters
To list all parameters available for this command, execute neo help list-custom-domain-mappings in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
5.4.4.83 list-db-access-permissions
This command lists the permissions that other subaccounts have for accessing databases in the specified
subaccount.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Optional
-i, --id Specify a database to view the permissions only for that database.
--to-account Specify a subaccount to view the permissions only for that subaccount.
--permissions Filter the result by permission. Acceptable values: a comma-separated list of 'TUNNEL', 'BINDING'.
Example
5.4.4.84 list-dbms
This command lists the dedicated and shared database management systems available for the specified
subaccount with the following details: database system (for dedicated databases), database type, and database
version.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
5.4.4.85 list-dbs
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--verbose Displays additional information about each database: database type and database version
Default: off
Example
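A minimal invocation sketch using the --verbose parameter documented above; the subaccount, user, and host are placeholders:

```shell
# Sketch: list the databases in a subaccount, including
# database type and version for each (--verbose).
# mysubaccount, myuser, and the host are placeholders.
neo list-dbs --account mysubaccount --user myuser \
    --host us1.hana.ondemand.com --verbose
```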
Parameters
To list all parameters available for this command, execute neo help list-domain-certificates in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
This command lists all current database access permissions for databases in other subaccounts.
Note
The list does not include access permissions that have been revoked.
Parameters
Optional
Type: string
Example
The table below shows the currently active database tunnel access permissions:
Related Information
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
ExampleRepository
Display name : Example Repository
Description : This is an example repository with Virus Scan enabled.
ID : cdb158efd4212fc00726b035
Application : Neo CLI
Virus Scan : on
ExampleRepositoryNoVS
Display name : Example Repository without Virus Scan
Description : This is an example repository with Virus Scan disabled.
ID : cdb158efd4212fc00726b035
Application : Neo CLI
Virus Scan : off
Number of Repositories: 2
This command lists identity provider certificates available to productive SAP HANA instances. Optionally, you can include a part of the certificate <Subject CN> as a filter.
Note
Use this command for SAP HANA version SPS09 or lower SPs only.
Parameters
To list all parameters available for this command, execute neo help list-hanaxs-certificates in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-cn-string, --contained-string A part of the certificate CN. If more than one certificate contains this string, all of them are listed.
Default: none
To list all identity provider certificates that contain <John Smith> in their <Subject CN>, execute:
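A minimal invocation sketch using the --contained-string parameter documented above; the subaccount, user, and host are placeholders:

```shell
# Sketch: list identity provider certificates whose Subject CN
# contains "John Smith".
# mysubaccount, myuser, and the host are placeholders.
neo list-hanaxs-certificates --account mysubaccount --user myuser \
    --host us1.hana.ondemand.com --contained-string "John Smith"
```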
5.4.4.90 list-jmx-checks
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
If the parameter is not used, all JMX checks used for this subaccount will be listed.
-R, --recursively Lists JMX checks recursively, starting from the specified level. For example, if only 'subaccount' is passed as an argument, it starts from the subaccount level and then lists all checks configured on application level.
Default: false
Type: boolean
Sample output:
application : demo
check-name : JVM Heap Memory Used
object-name : java.lang:type=Memory
attribute : HeapMemoryUsage
attribute key : used
warning : 600000000
critical : 850000000
unit : B
5.4.4.91 list-keystores
This command lists the available keystores. You can list keystores on subaccount, application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help list-keystores in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
On Subscription Level
On Application Level
On Subaccount Level
Related Information
This command lists all available loggers with their log levels for your application.
Parameters
To list all parameters available for this command, execute neo help list-loggers in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
This command lists all log files of your application sorted by date in a table format, starting with the latest
modified.
Parameters
To list all parameters available for this command, execute neo help list-logs in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
This command lists the Multi-Target Application (MTA) archives that are deployed to your subaccount or provided
by another subaccount.
Parameters
To list all parameters available for this command, execute neo help list-mtas in the command line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
Optional
Command-specific parameters
--available-for-subscription If you use this parameter, the command lists only the MTAs that are available for subscription to the corresponding subaccount. MTAs deployed by the subaccount itself are not listed.
Example
To check the MTAs that are available for subscription to a given subaccount, execute:
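A minimal invocation sketch using the --available-for-subscription parameter documented above; the subaccount, user, and host are placeholders:

```shell
# Sketch: list only the MTAs available for subscription to the subaccount.
# mysubaccount, myuser, and the host are placeholders.
neo list-mtas --account mysubaccount --user myuser \
    --host us1.hana.ondemand.com --available-for-subscription
```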
This command shows the status of the MTA operation with the given ID.
Parameters
To list all parameters available for this command, execute neo help list-mta-operations in the command
line.
Required
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
Note
This parameter is optional. If you do not use this parameter, all operations that have
not been cleaned up within the last 24 hours will be listed.
Example
Related Information
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Optional
Separate proxy hostname and port with a colon (':'). For example: loc.corp:123
Example
5.4.4.97 list-runtimes
Parameters
To list all parameters available for this command, execute neo help list-runtimes in the command line.
Required
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
The command displays the supported application runtime container versions for your SAP Cloud Platform SDK for Neo environment. Only recommended versions are shown by default. You can also list the supported versions for a particular runtime container.
Parameters
To list all parameters available for this command, execute neo help list-runtime-versions in the command
line.
Required
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Optional
--all Lists all supported application runtime container versions. Using a previously released
runtime version is not recommended.
--runtime Lists supported version only for the specified runtime container.
Example
Related Information
5.4.4.99 list-schemas
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--verbose Displays additional information about each schema: database type and database version
Default: off
Example
Related Information
5.4.4.100 list-schema-access-grants
This command lists all current schema access grants for a specified subaccount.
Note that the list does not include grants that have been revoked.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Type: string
Example
5.4.4.101 list-security-rules
This console client command lists the security rules configured for a virtual machine.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
As an output of the list-security-rules command, you may receive the HANA or JAVA source types previously created with the create-security-rule command, or an internally managed security rule of type CIDR.
Related Information
Manage Network Communication for SAP Cloud Platform Virtual Machines [page 1770]
create-security-rule [page 1830]
5.4.4.102 list-ssh-tunnels
list-ssh-tunnels
5.4.4.103 list-ssl-hosts
Parameters
To list all parameters available for this command, execute neo help list-ssl-hosts in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Related Information
5.4.4.104 list-subscribed-accounts
Parameters
To list all parameters available for this command, execute neo help list-subscribed-accounts in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of the provider
subaccount.
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.4.4.105 list-subscribed-applications
Parameters
To list all parameters available for this command, execute neo help list-subscribed-applications in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of the subaccount.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.4.4.106 list-vms
Lists all virtual machines in the specified subaccount. You can get information for a specific virtual machine by name. The command output lists information about the virtual machine, such as size, status, SSH key, floating IP (if assigned), and volume IDs.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Type: string
Example
Related Information
5.4.4.107 list-volumes
Lists all volumes in the specified subaccount. Use display-volume to get information about a specific volume.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Example
Related Information
5.4.4.108 list-volume-snapshots
Lists all volume snapshots in the specified subaccount. Use display-volume-snapshot to get information
about a specific volume snapshot.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-v, --volume-id Unique identifier of a volume. If specified, only volume snapshots created from this volume will be displayed.
Type: string
Example
Related Information
5.4.4.109 map-proxy-host
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Separate proxy hostname and port with a colon (':'). For example: loc.corp:123
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
This command opens a database tunnel to the database system associated with the specified schema or
database.
Note
Make sure that you have installed the required tools correctly. If you have trouble using this command, check that your installation is correct.
For more information, see Set Up the Console Client [page 1135] and Using the Console Client [page 1792].
● Default mode: The tunnel remains open until you explicitly close it by pressing ENTER in the command line. It
is closed automatically after 24 hours or if the command window is closed.
● Background mode: The database tunnel is opened in a separate process. Use the close-db-tunnel
command to close the tunnel once you are done, or it is closed automatically after one hour.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
--access-token Identifies a database access permission. The access token and database ID parameters
are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
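A minimal invocation sketch for the default (foreground) mode; the subaccount, user, host, and schema values are placeholders, and the assumption is that the schema or database is identified with the -i, --id parameter as for the related database commands:

```shell
# Sketch: open a database tunnel to the database system bound to a schema.
# The tunnel stays open until you press ENTER (or after 24 hours).
# mysubaccount, myuser, the host, and myschema are placeholders.
neo open-db-tunnel --account mysubaccount --user myuser \
    --host us1.hana.ondemand.com --id myschema
```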
Related Information
5.4.4.111 open-ssh-tunnel
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-r, --port Port on which you want to open the SSH tunnel
Example
5.4.4.112 put-destination
This command uploads destination configuration properties files and JKS files. You can upload them on
subaccount, application or subscribed application level.
Parameters
To list all parameters available for this command, execute neo help put-destination in the command line.
-a, --account Your subaccount. The subaccount for which you provide username and password.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--host Type: URL. For acceptable values see Regions [page 21].
--localpath The path to a destination or a JKS file on your local file system.
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Note
When uploading a destination configuration file that contains a password field, the password value remains available in the file. However, if you later download this file using the get-destination command, the password value is no longer visible. Instead, after Password =..., you will only see an empty space.
Examples
5.4.4.113 reconcile-hanaxs-certificates
This command re-applies all previously uploaded certificates to all SAP HANA instances. It is useful if you have already uploaded certificates to SAP Cloud Platform but the upload failed for some of the SAP HANA instances.
Restriction
Use this command only for SAP HANA SP9 or earlier versions. For newer SAP HANA versions, use the
respective SAP HANA native tools.
Note
After executing this command, you need to restart the SAP HANA XS services for it to take effect. See restart-hana [page 1954].
Parameters
To list all parameters available for this command, execute neo help reconcile-hanaxs-certificates in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
5.4.4.114 register-access-point
Registers an access point URL for a virtual machine specified by name or ID.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
The register-access-point command creates an internally managed security rule of type CIDR, which allows
communication between the load balancer of the SAP Cloud Platform and the virtual machine.
Related Information
5.4.4.115 remove-custom-domain
Removes a custom domain as an access point of an application. Use this command if you no longer want an
application to be accessible on the configured custom domain.
Parameters
To list all parameters available for this command, execute neo help remove-custom-domain in the command
line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
Related Information
5.4.4.116 remove-platform-domain
To list all parameters available for this command, execute neo help remove-platform-domain in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: URL
Example
Related Information
If you have forgotten the repository key, use this command to request a new repository key.
This command only creates a new key that replaces the old one. You cannot use the old key any longer. The
command does not affect any other repository setting, for example, the virus scan definition. If you just want to
change your current repository key, use the edit-ecm-repository command.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
This example resets the repository key for the com.foo.MyRepository repository and creates a new repository
key, for example fp0TebRs14rwyqq.
5.4.4.118 reset-log-levels
Parameters
To list all parameters available for this command, execute neo help reset-log-levels in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
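The original example command is not preserved in this extract; a minimal invocation sketch using the standard connection parameters listed above, with hypothetical host, subaccount, application, and user values:

```
neo reset-log-levels --host hana.ondemand.com --account mysubaccount --application myapp --user myuser
```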
5.4.4.119 restart
Use this command to restart your application or a single application process. The effect of the restart command is the same as executing the stop command first and, once the application is stopped, starting it with the start command.
Parameters
To list all parameters available for this command, execute the neo help restart command.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-y, --synchronous Triggers the process and waits until the application is restarted. The command without the --synchronous parameter triggers the restarting process and exits immediately without waiting for the application to start.
Default: off
-i, --application-process-id Unique ID of a single application process. Use it to restart a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the subaccount and application parameters. You can list the application process ID by using the status command.
Default: none
Example
To restart the whole application and wait for the operation to finish, execute:
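A sketch of the synchronous restart described above, using the parameters documented in this section and hypothetical host, subaccount, application, and user values:

```
neo restart --host hana.ondemand.com --account mysubaccount --application myapp --user myuser --synchronous
```

To restart a single application process instead, pass its unique ID with -i; the subaccount and application parameters are then not needed.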
Related Information
5.4.4.120 restart-hana
Note
To use this command, log on with a user with administrative rights for the subaccount.
Note
The restart-hana operation is executed asynchronously. Temporary downtime is expected for the SAP HANA database or the SAP HANA XS Engine, including the inability to work with SAP HANA studio, the SAP HANA Web-based Development Workbench, and cockpit UIs that depend on SAP HANA XS.
After you trigger the command, you can monitor its execution in SAP HANA studio under Configuration and Monitoring > Open Administration.
Parameters
To list all parameters available for this command, execute neo help restart-hana in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
You can find the SAP HANA database system ID using the list-dbms [page 1915] command or in the Databases & Schemas section of the cockpit by navigating to SAP HANA / SAP ASE Databases & Schemas.
It must start with a letter and can contain uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), numbers ('0' - '9'), and the special characters '.' and '-'.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--service-name The SAP HANA service to be restarted. You can choose between the following values:
--system If available, the entire SAP HANA database system will be restarted.
Example
To restart the SAP HANA database system with ID myhanaid running on the productive host, execute:
To restart the SAP XS Engine service on SAP HANA database system with ID myhanaid, execute:
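The original example commands are not preserved here. A hedged sketch of the two cases above, with placeholder host, subaccount, and user values; the parameter carrying the database system ID (shown here as --id) and the service name value (xsengine) are assumptions and may differ in your SDK version:

```
# Restart the entire SAP HANA database system (hypothetical --id parameter):
neo restart-hana --host hana.ondemand.com --account mysubaccount --user myuser --id myhanaid --system

# Restart only the SAP HANA XS Engine service (service name value assumed):
neo restart-hana --host hana.ondemand.com --account mysubaccount --user myuser --id myhanaid --service-name xsengine
```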
Related Information
5.4.4.121 revoke-db-access
This command revokes the database access permissions given to another subaccount.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Optional
Example
Related Information
5.4.4.122 revoke-db-tunnel-access
This command revokes database access that has been given to another subaccount.
Required
-- access-token Access token that identifies the permission to access the data
base
Type: string
Type: boolean
Optional
Type: string
Example
Related Information
5.4.4.123 revoke-schema-access
This command revokes the schema access granted to an application in another account.
neo revoke-schema-access --host <SAP HANA Cloud host> --account <subaccount name> --user <e-mail or user name> --access-token <access token>
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--access-token Access token that identifies the grant. Grants can only be revoked by the granting subaccount.
Example
Related Information
5.4.4.124 rolling-update
The rolling-update command updates an application without downtime in a single operation.
Prerequisites
● You have at least one application process that is not in use, see your compute unit quota.
● The command can be used with compatible application changes only.
Parameters
To list all parameters available for this command, execute neo help rolling-update in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-s, --source A comma-separated list of file locations, pointing to WAR files or folders containing them.
If you want to deploy more than one application on the same application process, put all WAR files in the same folder and execute the deployment with this source, or specify them as a comma-separated list.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Default: off
Possible values: on (allow compression), off (disable compression), force (forces compression for all responses), or an integer (which enables compression and specifies the compression-min-size value in bytes).
For more information, see Enable and Configure Gzip Response Compression [page 1701]
--compressible-mime-type A comma-separated list of MIME types for which compression will be used
Default: text/html, text/xml, text/plain
--connections The number of connections used to deploy an application. Use it to speed up deployment
of application archives bigger than 5 MB in slow networks. Choose the optimal number of
connections depending on the overall network speed to the cloud.
Default: 2
Type: integer
--ev Environment variables for configuring the environment in which the application runs.
Sets one environment variable by removing the previously set value; can be used multiple times in one execution.
If you provide a key without any value (--ev <KEY1>=), the --ev parameter is ignored.
Default: depends on the SAP Cloud Platform SDK for Neo environment
--timeout Timeout before stopping the old application processes (in seconds)
Default: 60 seconds
-V, --vm-arguments System properties (-D<name>=<value>), separated by spaces, that will be used when starting the application process.
Memory settings of your compute units. You can set the following memory parameters: -Xms, -Xmx, -XX:PermSize, -XX:MaxPermSize.
We recommend that you use the default memory settings. Change them only if necessary, and note that this may impact the application performance or its ability to start.
Default: lite
--runtime-version SAP Cloud Platform runtime version on which the application will be started and will run on the same version after a restart. Otherwise, by default, the application is started on the latest minor version (of the same major version), which is backward compatible and includes the latest corrections (including security patches), enhancements, and updates. Note that choosing this option does not affect already started application processes.
You can view the recommended versions by executing the list-runtime-versions command.
Note
If you choose your runtime version, consider its expiration date and plan to update to a new version regularly.
For more information, see Choose Application Runtime Version [page 1698]
--uri-encoding Specifies the character encoding used to decode the URI bytes on application request.
Default: ISO-8859-1
For more information, see the encoding sets supported by Java SE 6 and Java SE 7 .
Example
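The original example command is not preserved in this extract; a minimal sketch of a rolling update, using the -s/--source parameter documented above and hypothetical host, subaccount, application, user, and WAR file values:

```
neo rolling-update --host hana.ondemand.com --account mysubaccount --application myapp --user myuser --source samples/deploy_war/example.war
```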
5.4.4.125 sdk-upgrade
Use this command to upgrade the SAP Cloud Platform SDK for Neo environment that you are currently working
with.
neo sdk-upgrade
The command checks for a more recent version of the SDK and then upgrades the SDK. There are two possible
cases:
Note
All files and servers that you add to your SDK will be preserved during upgrade.
Example
neo sdk-upgrade
5.4.4.126 set-alert-recipients
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
We recommend that you use distribution lists rather than personal email addresses. Keep in mind that you remain responsible for handling personal email addresses in compliance with applicable data privacy regulations.
Type: string
Optional
-b, --application Application name for Java applications or a productive SAP HANA database system, or the application name in the format <database name>:<application name> for SAP HANA XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Default: false
Type: boolean
Example
5.4.4.127 set-application-property
Use this command to change the value of a single property of a deployed application without the need to redeploy
it. Execute the command separately for each property that you want to set. For the changes to take effect, restart
the application.
To execute the command successfully, you need to specify the new value of one property from the optional parameters table below.
Parameters
To list all parameters available for this command, execute the neo help set-application-property in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Command-specific parameters
--ev Environment variables for configuring the environment in which the application runs.
Sets the new environment variable without removing the previously set value; can be used
multiple times in one execution.
If you provide a key without any value (--ev <KEY1>=), the environment variable KEY1 will
be deleted.
Default: depends on the SAP Cloud Platform SDK for Neo environment
(beta) You can use JRE 8 with the Java Web Tomcat 7 runtime (neo-java-web version 2.25
or higher) in subaccounts enabled for beta features.
-m, --minimum-processes Minimum number of application processes, on which the application can be started
Default: 1
-M, --maximum-processes Maximum number of application processes, on which the application can be started
Default: 1
-V, --vm-arguments System properties (-D<name>=<value>), separated by spaces, that will be used when starting the application process.
Memory settings of your compute units. You can set the following memory parameters: -Xms, -Xmx, -XX:PermSize, -XX:MaxPermSize.
We recommend that you use the default memory settings. Change them only if necessary, and note that this may impact the application performance or its ability to start.
--runtime-version SAP Cloud Platform runtime version on which the application will be started and will run on the same version after a restart. Otherwise, by default, the application is started on the latest minor version (of the same major version), which is backward compatible and includes the latest corrections (including security patches), enhancements, and updates. Note that choosing this option does not affect already started application processes.
You can view the recommended versions by executing the list-runtime-versions command.
Note
If you choose your runtime version, consider its expiration date and plan to update to a new version regularly.
For more information, see Choose Application Runtime Version [page 1698]
Default: off
Possible values: on (allow compression), off (disable compression), force (forces compression for all responses), or an integer (which enables compression and specifies the compression-min-size value in bytes).
For more information, see Enable and Configure Gzip Response Compression [page 1701]
--compressible-mime-type A comma-separated list of MIME types for which compression will be used
Default: text/html, text/xml, text/plain
--connection-timeout Defines the number of milliseconds to wait for the request URI line to be presented after
accepting a connection.
Default: 20000
--max-threads Specifies the maximum number of simultaneous requests that can be handled.
Default: 200
--uri-encoding Specifies the character encoding used to decode the URI bytes on application request.
Default: ISO-8859-1
For more information, see the encoding sets supported by Java SE 6 and Java SE 7 .
Example
To change the minimum number of server processes on which you want your deployed application to run, execute:
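A sketch of that call, using the -m/--minimum-processes parameter documented above and hypothetical host, subaccount, application, and user values:

```
neo set-application-property --host hana.ondemand.com --account mysubaccount --application myapp --user myuser --minimum-processes 2
```

Remember to restart the application afterwards for the change to take effect.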
Related Information
5.4.4.128 set-db-properties-ase
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Note
This parameter sets the maximum database size. The minimum database size is 24
MB. You receive an error if you enter a database size that exceeds the quota for this
database system.
The size of the transaction log will be at least 25% of the database size you specify.
Example
5.4.4.129 set-db-properties-hana
This command changes the properties for a SAP HANA database enabled for multitenant database container
support.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
--web-access Enables or disables access to the HANA database from the Internet: 'enabled' (default),
'disabled'
Example
5.4.4.130 set-downtime-app
This command configures a custom downtime page (downtime application) for an application. The downtime page
is shown to the user in the event of unplanned downtime of the original application.
Parameters
To list all parameters available for this command, execute neo help set-downtime-app in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
The downtime page application is provided by the customer and hosted in the same sub
account as the application itself.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
Related Information
5.4.4.131 set-log-level
Simple Logging Facade for Java (SLF4J) uses the following log levels:
Level Description
ALL This level has the lowest possible rank and is intended to turn on all logging.
ERROR This level designates error events that might still allow the application to continue running.
OFF This level has the highest possible rank and is intended to turn off logging.
Parameters
To list all parameters available for this command, execute neo help set-log-level in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-l, --level The log level you want to set for the logger(s)
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
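The original example command is not preserved here. A hedged sketch using the -l/--level parameter documented above, with placeholder host, subaccount, application, user, and logger values; the parameter name for selecting loggers (shown as --loggers) is an assumption and may differ in your SDK version:

```
neo set-log-level --host hana.ondemand.com --account mysubaccount --application myapp --user myuser --loggers com.example.MyClass --level DEBUG
```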
Related Information
5.4.4.132 set-quota
Note
The amount you want to set cannot exceed the amount of quota you have purchased. If you try to set a larger amount of quota, you will receive an error message.
Parameters
To list all parameters available for this command, execute neo help set-quota in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-m, --amount Compute unit quota type and amount of the quota to be set, in the format <type>:[amount].
In this composite parameter, the <type> part is mandatory and must have one of the following values: lite, pro, prem, prem-plus. The [amount] part is optional and must be an integer value. If omitted, a default value of 1 is assigned. Do not insert spaces between the two parts and their delimiter ':', and use lower case for the <type> part.
Type: string
Example
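The original example command is not preserved in this extract; a minimal sketch using the -m/--amount parameter format documented above, with hypothetical host, subaccount, and user values (sets two lite compute units):

```
neo set-quota --host hana.ondemand.com --account mysubaccount --user myuser --amount lite:2
```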
5.4.4.133 set-ssl-host
Configures and updates an SSL host. Allows you to replace an SSL certificate with a different one, and enable the
TLS protocols of your choice.
Parameters
To list all parameters available for this command, execute neo help set-ssl-host in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-n, --name Name of the SSL host that will be configured and updated.
Optional
-c, --certificate Name of the certificate that you bind to the SSL host. The certificate must already be uploaded.
Caution
This will replace the previously bound certificate, if there is already one.
Type: string (It can contain alphanumerics, '.', '-', and '_')
-t, --supported-protocols Specify the TLS protocols that you want to enable for the SSL host. The remaining TLS protocols are disabled.
Note
This parameter requires a certificate to be bound to the SSL host.
Examples
Example
If the optional parameters are not used, the set-ssl-host command returns the current properties of the SSL
host.
Related Information
5.4.4.134 status
You can check the current status of an application or application process. The command lists all application
processes with their IDs, state, last change date sorted chronologically, and runtime information.
The command also lists the availability zones where these application processes are running. However, this is only
valid for recently started applications and if you have the latest SAP Cloud Platform SDK for Neo environment
version installed.
The availability zones ensure the high availability of your application processes. If one of the availability zones
experiences infrastructure issues and downtime, only the processes in this zone are affected. The remaining
processes continue to run normally, ensuring that your application is working as expected.
When an application process is running but cannot receive new connection requests, it is marked as disabled in its
status description. Additionally, if an application is in planned downtime and a maintenance page has been
configured for it, the corresponding application is listed in the command output.
Parameters
To list all parameters available for this command, execute neo help status in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-i, --application-process-id Unique ID of a single application process. Use it to show the status of a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the subaccount and application parameters.
Default: none
--show-full-process-id Shows the full length (40 characters) of the unique application process ID. You may need
to get the full ID when you try to execute a certain operation on the application process
and the process cannot be identified uniquely with the short version of the ID. In particular,
usage of the full length is recommended for tools and batch processing. If this parameter
is not used, the status command lists only the first 7 characters by default.
Default: off
Example
You can list all application processes in your application with their IDs:
Then, you can request the status of a particular application process from the list using its ID:
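A sketch of the two calls described above, with hypothetical host, subaccount, application, user, and process ID values:

```
# List all application processes with their IDs:
neo status --host hana.ondemand.com --account mysubaccount --application myapp --user myuser

# Query a single process by its unique ID (subaccount and application not needed):
neo status --host hana.ondemand.com --user myuser --application-process-id a1b2c3d
```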
5.4.4.135 start
Starts a deployed application in order to make it available to customers. If the application is already started, the command starts an additional application process, provided the quota for the maximum allowed number of application processes is not exceeded.
Parameters
To list all parameters available for this command, execute neo help start in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--disabled Starts an application process in a disabled state, so that it is not available for new connections.
Default: off
-y, --synchronous Triggers the starting process and waits until the application is started. The command without the --synchronous parameter triggers the starting process and exits immediately without waiting for the application to start.
Default: off
Example
To start the application and wait for the operation to finish, execute:
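A sketch of the synchronous start, using the parameters documented above with hypothetical host, subaccount, application, and user values:

```
neo start --host hana.ondemand.com --account mysubaccount --application myapp --user myuser --synchronous
```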
Related Information
5.4.4.136 start-db-hana
This command starts the specified SAP HANA database on a SAP HANA database system enabled for multitenant
database container support.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.4.4.137 start-local
neo start-local
Optional
Default: 8003
--wait-url Waits for a 2xx response from the specified URL before exiting
--wait-url-timeout Seconds to wait for a 2xx response from the wait-url before exiting
Default: 180
Related Information
5.4.4.138 start-maintenance
This command starts the planned downtime of an application, during which it no longer receives requests and a
custom maintenance page for that application is shown to the user. All active connections will still be handled until
the application is stopped.
Parameters
To list all parameters available for this command, execute neo help start-maintenance in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Optional
--direct-access-code While setting your application in maintenance mode, you can generate an access code, which you can use later during the maintenance period. While your application is in maintenance mode, you can use this access code in the Direct-Access-Code HTTP header so that you can access your application for testing and administration purposes. In the meantime, users will continue to have access to the maintenance application.
If an application is already in planned downtime, executing the status command for it will show the maintenance application to which the traffic is being redirected.
Example
Related Information
5.4.4.139 stop
Use this command to stop your deployed and started application or application process.
Parameters
To list all parameters available for this command, execute neo help stop in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-y, --synchronous Triggers the stopping process and waits until the application is stopped. The command without the --synchronous parameter triggers the stopping process and exits immediately without waiting for the application to stop.
Default: off
-i, --application-process-id Unique ID of a single application process. Use it to stop a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the subaccount and application parameters. You can list the application process ID by using the status command.
Default: none
Example
To stop the whole application and wait for the operation to finish, execute:
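A sketch of the synchronous stop, with hypothetical host, subaccount, application, and user values:

```
neo stop --host hana.ondemand.com --account mysubaccount --application myapp --user myuser --synchronous
```

To stop a single application process instead, pass its unique ID with -i.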
Related Information
5.4.4.140 stop-db-hana
This command stops the specified SAP HANA database on a SAP HANA database system enabled for multitenant
database container support.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
5.4.4.141 stop-local
neo stop-local
Parameters
Optional
Default: 8003
5.4.4.142 stop-maintenance
This command stops the planned downtime of an application, resumes traffic to it, and deregisters the maintenance application page.
Parameters
To list all parameters available for this command, execute neo help stop-maintenance in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
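The original example command is not preserved in this extract; a minimal sketch using the standard connection parameters, with hypothetical host, subaccount, application, and user values:

```
neo stop-maintenance --host hana.ondemand.com --account mysubaccount --application myapp --user myuser
```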
5.4.4.143 subscribe
Subscribes the subaccount of the consumer to a provider Java application. Once the command is executed
successfully, the subscription is visible in the Subscriptions panel of the cockpit in the consumer subaccount.
Remember
You must have the Administrator role in both the provider and the consumer subaccounts to execute this command.
Note
You can subscribe a subaccount to a Java application that is running in another subaccount only if both
subaccounts (provider and consumer subaccount) belong to the same region.
Parameters
To list all parameters available for this command, execute neo help subscribe in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
This parameter must be specified in the format <provider subaccount>:<provider application>.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of both the
provider and the consumer subaccounts and must possess the Administrator role in those
subaccounts. The command is not available for trial accounts.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
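The original example command is not preserved here. A hedged sketch with placeholder host, subaccount, and user values; the assumption that the provider application is passed via the application parameter in the <provider subaccount>:<provider application> format follows the parameter description above, but the exact flag may differ in your SDK version:

```
neo subscribe --host hana.ondemand.com --account consumersubaccount --application providersubaccount:providerapp --user myuser
```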
Related Information
5.4.4.144 subscribe-mta
This command subscribes the subaccount of the consumer to a Multi-Target Application (MTA), which is available
for subscription.
Parameters
To list all parameters available for this command, execute neo help subscribe-mta in the command line.
-a, --account The name of the subaccount for which you provide a user and a password.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Optional
Command-specific parameters
-y, --synchronous Triggers the deployment and waits until the deployment operation finishes. The command
without the --synchronous parameter triggers deployment and exits immediately
without waiting for the operation to finish. Takes no value.
-e, --extensions Defines one or more extensions to the deployment descriptor. A comma-separated list of
file locations, pointing to the extension descriptor files, or the folders containing them. For
more information, see Defining MTA Extension Descriptors [page 1322].
Example
5.4.4.145 unbind-db
This command unbinds a database from a Java application for a particular data source.
The application retains access to the database until the next application restart. After the restart, the application
will no longer be able to access it.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Default: <DEFAULT>
Example
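The original example command is not preserved in this extract; a minimal sketch with hypothetical host, subaccount, application, and user values (the optional data source parameter is omitted, so the default <DEFAULT> data source is unbound):

```
neo unbind-db --host hana.ondemand.com --account mysubaccount --application myapp --user myuser
```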
Related Information
5.4.4.146 unbind-domain-certificate
Unbinds a certificate from an SSL host. The certificate will not be deleted from SAP Cloud Platform storage.
Parameters
To list all parameters available for this command, execute neo help unbind-domain-certificate in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
Related Information
This command unbinds a productive SAP HANA database system from a Java application for a particular data
source.
The application retains access to the productive SAP HANA database system until the next application restart.
After the restart, the application will no longer be able to access the database system.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Example
Related Information
5.4.4.148 unbind-schema
This command unbinds a schema from an application for a particular data source.
The application retains access to the schema until the next application restart. After the restart, the application will
no longer be able to access the schema.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
Example
5.4.4.149 undeploy
Undeploying an application removes it from SAP Cloud Platform. To undeploy an application, you have to stop it
first.
Parameters
To list all parameters available for this command, execute neo help undeploy in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Related Information
5.4.4.150 unmap-proxy-host
Deletes the mapping between an application host and an on-premise reverse proxy host and port.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Separate proxy hostname and port with a colon (':'). For example: loc.corp:123
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
5.4.4.151 unregister-access-point
Unregisters all access point URLs registered for a virtual machine specified by name or ID.
Parameters
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
Related Information
5.4.4.152 unsubscribe
Remember
You must have the Administrator role in the provider and consumer subaccount to execute this command.
To list all parameters available for this command, execute neo help unsubscribe in the command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of both the
provider and the consumer subaccounts.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
Uploads an SSL certificate to SAP Cloud Platform. The certificate must be signed using the previously generated
CSR via the generate-csr command.
Parameters
To list all parameters available for this command, execute neo help upload-domain-certificate in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-n, --name Name of the certificate previously used in the CSR generation via the generate-csr
command
Note that some CAs issue chained root certificates that contain an intermediate certificate. In such cases, put all certificates in the file for upload, starting with the signed SSL certificate.
Note
The certificate that you upload must not be bound to an SSL host.
The --force option is useful if you had to upload an intermediate certificate but did not, for some reason. Note that the intermediate certificate must be added to the file that contains the certificate data.
Example
Related Information
5.4.4.154 upload-hanaxs-certificates
This command uploads and applies identity provider certificates to productive HANA instances running on SAP Cloud Platform.
Restriction
Use this command only for SAP HANA SP9 or earlier versions. For newer SAP HANA versions, use the
respective SAP HANA native tools.
Note
After executing this command, you need to restart the SAP HANA XS services for it to take effect. See restart-hana [page 1954].
To list all parameters available for this command, execute neo help upload-hanaxs-certificates in the
command line.
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-l, --localpath Path to an X.509 certificate, or a directory containing certificates, on the local file system. If the local path is a directory, all files in it are uploaded. You need to restart the HANA instances to activate the certificates.
Default: none
Type: string
Example
5.4.4.155 upload-keystore
This command is used to upload a keystore by uploading the keystore file. You can upload keystores on
subaccount, application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help upload-keystore in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-l, --location Path to a keystore file to be uploaded from the local file system. The file extension determines the keystore type. The following extensions are supported: .jks, .jceks, .p12, .pem. For more information about the keystore formats, see Features [page 2231].
Type: string
Type: string
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-w, --overwrite Overwrites a file with the same name if one already exists. If you do not explicitly include the --overwrite argument, you will be notified and asked whether you want to overwrite the file.
Example
On Subscription Level
On Application Level
On Subaccount Level
Related Information
5.4.4.156 version
This command shows the SAP Cloud Platform SDK for Neo environment version and the runtime. You can use parameters to list the command versions and the JAR files in the SDK and to check whether the SDK version is up to date.
Parameters
To list all parameters available for this command, execute neo help version in the command line.
Required
-c, --commands Lists all commands available in the SDK and their versions.
-j, --jars Lists all JAR files in the SDK and their versions.
-u, --updates Checks if there are any updates and hot fixes for the SDK and whether the SDK version is
still supported. It also provides the version of the latest available SDK.
Optional
Type: string
To show the SAP Cloud Platform SDK for Neo environment version and the runtime, execute:
neo version
To list all commands available in the SDK and their versions, execute:
neo version -c
To list all JAR files in the SDK and their versions, execute:
neo version -j
To check whether there are any updates and hot fixes for the SDK and whether the SDK version is still supported, execute:
neo version -u
Related Information
Overview
The exit code is a number that indicates the outcome of a command execution. It shows whether the command
completes successfully or defines an error if something goes wrong during the execution.
When commands are executed as part of automated scripts, the exit codes provide feedback to the scripts, which
allows the script to bypass known errors that can be met during execution. A script can also interact with the user
in order to request additional information required for the script to complete.
All exit codes in SAP Cloud Platform are aligned to the Bash-Scripting Guide. For more information, see Exit Codes
With Special Meanings .
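In a script, the exit code of the last command is available in the shell's status variable, and the ranges above can drive the error handling. A minimal sketch, using true and false as stand-ins for real console client invocations (the function name and messages are invented for illustration):

```shell
# Run a command and report its outcome based on the exit code ranges above.
# "true" and "false" below stand in for real console client calls such as
# a neo deploy invocation; substitute the actual command in practice.
report_exit_code() {
  "$@" && rc=0 || rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "OK"
  elif [ "$rc" -ge 1 ] && [ "$rc" -le 9 ]; then
    echo "common error ($rc)"
  elif [ "$rc" -ge 10 ] && [ "$rc" -le 39 ]; then
    echo "missing parameter ($rc)"
  else
    echo "other error ($rc)"
  fi
}

report_exit_code true    # prints: OK
report_exit_code false   # prints: common error (1)
```

A wrapper like this lets an automated script branch on known error ranges instead of aborting on the first nonzero code.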
The set of exit codes is divided into ranges, based on the error type and the reason.
Error Type / Range Start / Range End / Number of Codes
No error / 0 / 0 / 1
Common errors / 1 / 9 / 9
Missing parameters / 10 / 39 / 30
Exit Codes
Exit codes can be defined as general (common for all commands) and command-specific (cover different cases via
different commands).
0 OK
Related Information
Use the Cloud Foundry command line interface (CF CLI) to deploy and manage your applications in the Cloud
Foundry environment.
Downloading and installing the console client for the Cloud Foundry environment: Download and Install the Cloud Foundry Command Line Interface [page 948]
Cloud Foundry command line interface plug-ins CF CLI: Plug-ins [page 2006]
Download and set up the Cloud Foundry Command Line Interface (cf CLI) to start working with the Cloud Foundry
environment.
Procedure
1. Download the latest version of cf CLI from GitHub at the following URL: https://github.com/cloudfoundry/cli#downloads
A list of additional commands that have been implemented as plug-ins to extend the base CF CLI client.
Multi-Target Application Commands for the Cloud Foundry Environment [page 2008]
Use the multi-target application plug-in for the Cloud Foundry command line interface to deploy, remove, and view
MTAs, among other possible operations.
Note
Before using the extended commands in the Cloud Foundry environment, you need to install the MTA plug-in in
the Cloud Foundry environment.
The multi-target application plug-in for the Cloud Foundry command line interface lets you deploy, remove, and
view MTAs, among other possible operations, by extending Cloud Foundry commands.
Prerequisites
You have installed the Cloud Foundry command line interface version 6.20 or higher.
Procedure
1. Download the latest version of the plug-in that is compatible with your operating system. On the Web page
https://tools.hana.ondemand.com/#cloud, you will find the plug-in under the SAP Cloud Platform Cloud
Foundry CLI Plugins section with the name MTA Plugin.
2. Untar or unzip the downloaded archive if required.
3. Open the command line interface or terminal.
4. To install the plug-in, enter the following command:
Note
If you are reinstalling the plug-in, first uninstall the previous version using: cf uninstall-plugin
MtaPlugin
5. Verify that the plug-in has been installed successfully by entering cf plugins.
Related Information
https://tools.hana.ondemand.com/#cloud
Download and Install the Cloud Foundry Command Line Interface [page 948]
A list of additional commands to install archives and deploy Multi-Target Applications (MTA) to the Cloud Foundry
environment.
download-mta-op-logs (alias dmol): Download the log files for one or more operations concerning Multi-Target Applications
purge-mta-config: Purge all configuration entries and subscriptions that are no longer valid
deploy
Usage
Deploy a new Multi-Target Application or synchronize changes to an existing one:
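The usage synopsis did not survive in this copy; judging from the options documented below and the bg-deploy synopsis later in this section, it presumably has roughly the following shape (a sketch, not the authoritative form; consult cf deploy --help):

```
cf deploy <MTA_ARCHIVE>
    [-e <EXT_DESCRIPTOR_1>[,<EXT_DESCRIPTOR_2>]]
    [-t <TIMEOUT>] [-v <VERSION_RULE>]
    [-i <OPERATION_ID>] [-a <ACTION>] [-f]
    [--delete-service-keys] [--no-restart-subscribed-apps]
```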
Arguments
Command Arguments Overview
Argument Description
<MTA_ARCHIVE> The path to (and name of) the archive or the directory containing the Multi-Target Application to deploy; the application archive must have the format (and file extension) mtar, for example, MTApp1.mtar.
Options
Command Options Overview
Option Description
-t <TIMEOUT> Specify the maximum amount of time (in seconds) that the deployment service waits before starting the deployed application
-v <VERSION_RULE> Specify the rule that determines how the application version number is used to trigger an application-update deployment operation, for example: “HIGHER”, “SAME_HIGHER”, or “ALL”
-i <OPERATION_ID> Specify the ID of the deploy operation that you want to perform an action on
-a <ACTION> Specify the action to perform on the deploy operation, for example, “abort”, “retry”, “monitor”, or “resume”
-f Force the deployment without requiring confirmation for aborting any conflicting processes
--delete-service-keys Delete the existing service keys and apply the new ones
--no-restart-subscribed-apps Do not restart subscribed applications that are updated during the deployment
bg-deploy
“Blue-green” deployment is a release technique that helps to reduce application downtime and the resulting risk
by running two identical target deployment environments called “blue” and “green”. Only one of the two target
environments is “live” at any point in time and it is much easier to roll back to a previous version after a failed (or
undesired) deployment.
Usage
cf bg-deploy <MTA_ARCHIVE>
[-e <EXT_DESCRIPTOR_1>[,<EXT_DESCRIPTOR_2>]]
[-u <URL>] [-t <TIMEOUT>] [-v <VERSION_RULE>]
[--no-start] [--use-namespaces] [--no-namespaces-for-services]
[--delete-services] [--delete-service-keys] [--delete-service-brokers]
[--keep-files] [--no-restart-subscribed-apps] [--no-confirm] [--do-not-fail-on-
missing-permissions]
Interact with an active MTA deploy operation, for example, by performing an action:
Arguments
Command Arguments Overview
Argument Description
<MTA_ARCHIVE> The path to (and name of) the archive or the path to the directory containing the Multi-Target Application to deploy. The application archive must have the format (and file extension) mtar, for example, MTApp1.mtar; the directory can be specified as a path (for example, myApp/) or as . (the current directory).
Option Description
-t <TIMEOUT> Specify the maximum amount of time (in seconds) that the deployment service waits before starting the deployed application
-v <VERSION_RULE> Specify the rule that determines how the application version number is used to trigger an application-update deployment operation, for example: “HIGHER”, “SAME_HIGHER”, or “ALL”
-i <OPERATION_ID> Specify the ID of the deploy operation that you want to perform an action on
-a <ACTION> Specify the action to perform on the deploy operation, for example, “abort”, “retry”, “monitor”, or “resume”
-f Force the deployment without requiring confirmation for aborting any conflicting processes
--delete-service-keys Delete the existing service keys and apply the new ones
--no-restart-subscribed-apps Do not restart subscribed applications that are updated during the deployment
--no-confirm Do not require confirmation for the deletion of the previously deployed MTA applications
undeploy
Usage
Undeploy an MTA.
cf undeploy <MTA_ID>
[-u <URL>] [-f]
[--delete-services] [--delete-service-brokers] [--no-restart-subscribed-apps]
[--do-not-fail-on-missing-permissions]
Arguments
Command Arguments Overview
Argument Description
Options
Command Options Overview
Option Description
-u <URL> Specify the URL for the service end-point that is to be used for the undeployment operation
-i <OPERATION_ID> Specify the ID of the undeploy operation that you want to perform an
action on
-a <ACTION> Specify the action to perform on the undeploy operation, for example,
“abort”, “retry”, or “monitor”
--no-restart-subscribed-apps Do not restart subscribed applications that are updated during the deployment
mta
Display information about a Multi-Target Application (MTA). The information displayed includes the requested
state, the number of instances, information about allocated memory and disk space, as well as details regarding
the bound service (and service plan).
Usage
cf mta MTA_ID
[-u <URL>]
Arguments
Argument Description
Options
Option Description
-u <URL> Specify the URL for the deployment-service end-point to use to obtain
details of the selected MTA
Usage
Options
Command Options Overview
Option Description
-u <URL> Specify the URL for the deployment-service end-point to use to obtain
details of the selected MTA
mta-ops
Display information about all active operations for Multi-Target Applications (MTA). The information includes the
ID, type, status, the time the MTA-related operation started, as well as the name of the user that started the
operation.
Usage
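The usage line is missing here; by analogy with the mta command synopsis above, it presumably is (a sketch; consult cf mta-ops --help for the authoritative form):

```
cf mta-ops
    [-u <URL>]
```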
Options
Command Options Overview
Option Description
-u <URL> Specify the URL for the deployment-service end-point to use to obtain
details of the selected MTA operations
download-mta-op-logs
Download the log files for one or more operations concerning Multi-Target Applications.
cf download-mta-op-logs
[-u <URL>]
[-i <OPERATION_ID>] [-d <DIRECTORY>]
Tip
You can use the alias dmol in place of the download-mta-op-logs command.
Options
Option Description
-u <URL> Specify the URL for the deployment-service end-point to use to obtain
details of the selected MTA operations
-i <OPERATION_ID> Specify the identity (ID) of the MTA operation whose logs you want to
download
-d <DIRECTORY> Specify the path to the location where you want to save the downloaded MTA operation logs; by default, the location is ./mta-op-<OPERATION_ID>/
purge-mta-config
Purge all configuration entries and subscriptions, which are no longer valid.
Usage
cf purge-mta-config
[-u <URL>]
Invalid configuration entries are often produced when the application that provides configuration entries is deleted without using the deploy service, for example, with the cf delete command. In this case, the configuration remains in the deploy-service database even though the corresponding application is no longer available. This can lead to failures during subsequent attempts to resolve the configuration entries.
Options
Option Description
By running two identical production environments that are called “blue” and “green”, you can perform a blue-
green deployment, which will eliminate the downtime and risk for your system.
Prerequisites
Context
Restriction
There is no blue-green deployment for bound services. Blue and green applications are bound to the same
service instances.
Procedure
1. Deploy your initial MTA (the blue version) by executing the cf bg-deploy <your-mta-archive-v1>
command.
This will:
○ create new applications
Note
If there are already installed applications, “blue” will be added to the existing application names.
2. Deploy your updated MTA (the green version) by executing the cf bg-deploy <your-mta-archive-v2>
command.
This will:
○ create new applications adding “green” to the existing application names
○ create temporary routes to the green applications
Output Code
This will:
○ map the productive routes to your green versions
When performing a blue-green deployment, you can use the Zero-Downtime Maintenance (ZDM) parameter to
update an application that has database changes between the “blue” and the “green” versions.
Prerequisites
● The applications use HDI containers for persistence - com.sap.xs.hdi-container resource type in the
deployment descriptor
● ZDM is supported only with a blue-green deployment of a Multi-Target Application (MTA)
● The application does not use a hard coded service name for the data source to the HDI service
● The database artifacts are modeled as described in Table 1: Modeling of HDI Artifacts [page 2021]
Context
Overview
Zero-downtime update is achieved by deploying database artifacts in separate schemas - data schema and access
schema.
● Data schema - contains all database objects that have persistence data, for example tables, sequences, and
indexes.
● Access schema - contains interface to database objects such as projection views and synonyms, and database
logic such as calculation views, database procedures, and functions.
Note
Applications are bound only to the access schema. Deployment and lifecycle management tools are bound to
both data and access schemas.
Table 1: Modeling of HDI Artifacts [page 2021] describes the modeling of the supported HDI artifacts and the target schema where they should be deployed. Depending on where and how the HDI artifacts are modeled in the DB module, there are two types of handling:
1. Default handling - when the artifacts are modeled in the default folders (src/, cfg/) of the DB module. In this case, the artifacts are handled by default at deployment time and are deployed in the relevant default schema according to the artifact type.
This deployment handling has the following limitations:
1. Some access schema objects (for example, interfaces and logic) are deployed in the data schema, which
is unnecessary as this brings a performance reduction and violates the ZDM concept for schema
separation. For example, the CDS types should be separated and used from data schema objects like CDS
entities, or from the access schema objects (such as CDS views). If these types are separated in two
different CDS files - one used from CDS entity and the other from CDS view - all artifacts except the CDS
view are deployed to both schemas. It is not necessary to deploy access schema objects like the CDS type
used from the CDS view into the data schema.
2. The hdbtables are deployed into the data schema by default. In certain situations, it is acceptable to
deploy them into the access schema, for example if there are corresponding hdbtabledata files that fill
these tables, for example with language texts that are incompatible between versions. With the default
handling it is not possible to deploy .hdbtables into an access schema, because they are deployed into
the data schema by default.
2. Separated handling - using data/ and access/ folders within the src/ and cfg/ folders of the database module. The data/ folder should contain the data schema related objects, which have persistence data, like tables, sequences, and indexes. The access/ folder should contain access schema related objects, like interface-to-database objects (for example, projection views and synonyms) and the database logic (such as calculation views, database procedures, and functions).
In this case, the artifacts from the data/ folder are deployed to the data schema at deployment time, and the corresponding interface objects are generated in the access schema. Artifacts from the access/ folder are deployed only to the access schema. This type of handling resolves the limitations listed above for the following reasons:
1. Limitation 1 is resolved because the access schema objects like the CDS type used from CDS view are
modeled in the access/ folder and are thus deployed only to access schema.
2. Limitation 2 is resolved, because the hdbtables that should be deployed to access schema, are modeled
in access/ folder and are thus deployed only to the access schema.
This handling brings clarity to the database model, as the location of database objects is better defined, and it
also improves the separation of object types.
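With separated handling, a DB module might therefore be laid out as follows (an illustrative sketch; the file names are invented):

```
db/
  src/
    data/                  # data schema: entities, tables, sequences
      Orders.hdbcds
      OrderCounter.hdbsequence
    access/                # access schema: views, synonyms, procedures
      OrdersView.hdbcds
      CalcRevenue.hdbprocedure
```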
Note
To ensure that your applications support the ZDM update, follow the adoption guidelines stated in Table 1:
Modeling of HDI Artifacts [page 2021] and model the HDI artifacts in data/ and access/ folders accordingly.
ZDM is also supported with the default handling of HDI artifacts, when they are in the default folders, but it has limitations with more complex data models.
ZDM deployment is possible for applications that use HDI containers for persistence. HDI containers are services
that use the hdi-shared service plan.
Note
Applications must not use a hard-coded service name for the data source to the HDI service, as during ZDM
deployment the applications are bound to a new access HDI service with generated name, which could be
different from the hard-coded name in the application code.
Java applications can use dynamic data source configuration in one of the following ways:
1. by using the SAP Java buildpack - Java applications can use a dynamic data source configuration feature that allows bound services described in the manifest.yml to appear as data sources available for JNDI lookup in the application. This is done using the environment variable JBP_CONFIG_RESOURCE_CONFIGURATION, as shown in the example deployment descriptor below.
2. by using the Spring Cloud Spring Service Connector
.hdbcds (ZDM update supported: Yes)
Default target schema:
● data and access (both) - all .hdbcds artifacts that contain an entity, type, or table-type definition
● access (only) - all .hdbcds artifacts that do not contain an entity, type, or table-type definition
Adoption guidelines:
1. Put CDS (temporary) entities into the data/ folder. If put in the access schema, CDS (temporary) entities produce only projection views.
2. Put CDS views only in the access/ folder.
3. CDS types and CDS table types should not be used by both entities and views or procedures. Separate CDS types and CDS table types for the data/ and access/ folders respectively:
   1. data/ folder - Define CDS types and CDS table types which are used only by CDS entities. Do not make backward-incompatible changes on data types in next versions.
   2. access/ folder - Define CDS types and CDS table types which are used by procedures, views, and/or table types, but not by entities.
4. Associations defined in CDS entities can be used only by CDS views, but not by .hdbviews. In the data/ folder, do not model associations to objects from the access/ folder.
5. Put CDS files containing Data Control Language (DCL) objects only in the access/ folder.
Note
Default target schema (data/access) - The target schema in which the artifact is most frequently (by default) deployed. When the artifact is located in the default folders (src/, cfg/) of the db module, not in the data/ or access/ folder, the HDI Deployer applies default handling to the artifact. If the artifact is modeled in the data/ or access/ folder, it is deployed into the corresponding schema.
Procedure
1. To run the deployment in a ZDM mode for the applications and the databases, you have to declare the value
zdm-mode:true as a parameter value of all modules, which are of a module type com.sap.xs.hdi.
Note
You can define it either in the deployment descriptor or in an extension descriptor.
Sample Code
mtad.yaml deployment descriptor using sap_java_buildpack
_schema-version: 2.0.0
ID: com.sap.xs2.samples.javahelloworld
version: 0.1.0
modules:
  ...
  - name: java-hello-world-backend
    type: java.tomee
    path: java/target/java-hello-world.war
    provides:
      - name: java
        properties:
          url: "${default-url}"
    properties:
      JBP_CONFIG_RESOURCE_CONFIGURATION: "['tomee/webapps/ROOT/WEB-INF/resources.xml': {'service_name_for_DefaultDB' : 'java-hdi-container'}]"
    requires:
      - name: java-uaa
      - name: java-hdi-container
      - name: java-hello-world-db
  - name: java-hello-world-db
    type: com.sap.xs.hdi
    path: db/
    parameters:
      zdm-mode: true
    requires:
      - name: java-hdi-container
Sample Code
resources/META-INF/persistence.xml
<persistence version="1.0"
    xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
  <persistence-unit name="java-hello-world" transaction-type="JTA">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <jta-data-source>jdbc/java-hdi-container</jta-data-source>
    <properties>
      <property name="eclipselink.target-database"
          value="org.eclipse.persistence.platform.database.HANAPlatform"/>
    </properties>
  </persistence-unit>
</persistence>
Sample Code
webapp/META-INF/java_xs_buildpack/config/resource_configuration.yml
---
tomee/webapps/ROOT/WEB-INF/resources.xml:
service_name_for_DefaultDB: java-hdi-container
Sample Code
webapp/WEB-INF/resources.xml
Sample Code
Java source code file using @PersistenceContext
@PersistenceContext(name = "java-hello-world")
private EntityManager em;
2. To start the blue-green deployment process, follow the steps described in Blue-Green Deployment of Multi-
Target Applications (MTA) [page 2016].
To prevent name clashes for applications and services contained in different MTAs, but deployed in the same
space, the deployment service provides an option that enables you to add the MTA IDs in front of the names of the
applications and services contained in those MTAs.
Sample Code
MTA Deployment Descriptor
ID: com.sap.xs2.sample
version: 0.1.0
modules:
  - name: app
    type: java.tomcat
resources:
  - name: service
    type: com.sap.xs.uaa
By default, the “use namespaces” feature is disabled. You can enable it by specifying the --use-namespaces option as part of the deploy command. If you want namespaces to be used for applications but not for services, use the --no-namespaces-for-services option in combination with --use-namespaces. For example, if you use the --use-namespaces and --no-namespaces-for-services options when deploying an MTA with the deployment descriptor shown in the sample code above, the deployment operation creates an application named “com.sap.xs2.sample.app” and a service named “service”.
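The prefixing behavior above can be sketched as a simple naming rule (an illustration of the described outcome only; the helper function is hypothetical):

```shell
# With --use-namespaces, the MTA ID from the deployment descriptor is
# prepended to each application name; with --no-namespaces-for-services,
# service names stay unprefixed. Names mirror the sample descriptor above.
MTA_ID="com.sap.xs2.sample"

app_name_with_namespaces() { echo "${MTA_ID}.$1"; }

app_name_with_namespaces app   # prints: com.sap.xs2.sample.app
echo "service"                 # the service keeps its plain name
```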
The deployment service follows the rules specified in the Semantic Versioning Specification (SemVer) when comparing the version of a deployed MTA with the version that is to be deployed. If an MTA submitted for deployment has a version that is lower than or equal to an already deployed version of the same MTA, the deployment might fail if there is a conflict with the version rule specified in the deployment operation. The version rule is a parameter of the deployment process that specifies which relationships between the existing and the new version of an MTA are accepted before proceeding with the deployment operation. The version rules supported by the deploy service are as follows:
● HIGHER - Only MTAs with versions that are higher than the version of the currently deployed MTA are accepted for deployment.
● SAME_HIGHER - Only MTAs with versions that are higher than (or the same as) the version of the currently
deployed MTA are accepted for deployment. This is the default setting for the version rule.
● ALL - All versions are accepted for deployment.
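The three version rules can be sketched with a plain version comparison. This is a simplified illustration using sort -V on dotted version strings, not the deploy service's full SemVer logic, and the function name is invented:

```shell
# version_allowed RULE DEPLOYED NEW -> prints "accepted" or "rejected"
# according to the HIGHER / SAME_HIGHER / ALL rules described above.
version_allowed() {
  rule=$1; deployed=$2; new=$3
  # The lower of the two versions under a natural version sort.
  lower=$(printf '%s\n%s\n' "$deployed" "$new" | sort -V | head -n 1)
  case $rule in
    ALL)
      echo "accepted" ;;
    HIGHER)
      if [ "$new" != "$deployed" ] && [ "$lower" = "$deployed" ]; then
        echo "accepted"
      else
        echo "rejected"
      fi ;;
    SAME_HIGHER)
      if [ "$lower" = "$deployed" ]; then
        echo "accepted"
      else
        echo "rejected"
      fi ;;
  esac
}

version_allowed HIGHER 1.0.0 1.1.0        # accepted
version_allowed HIGHER 1.0.0 1.0.0        # rejected: versions are equal
version_allowed SAME_HIGHER 1.0.0 1.0.0   # accepted
version_allowed ALL 2.0.0 1.0.0           # accepted: any version passes
```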
The Service Fabrik CF CLI plugin is used for performing various backup and restore operations on service instances in Cloud Foundry, such as starting or aborting a backup, listing all backups, removing backups, and starting or aborting a restore.
This CF CLI plugin is only available for the Service Fabrik broker, so it can only be used with CF installations in which this service broker is available. You can list all available commands and their usage with cf backup. You can manage backups of service instances and restore them using the backup and restore functionality.
To use the functionality, use the SAP Cloud Platform cockpit or the extended Cloud Foundry commands in the
command line interface.
The Service Fabrik plugin lets you manage backups of a service instance by extending Cloud Foundry commands.
Prerequisites
You need to have the Cloud Foundry Command Line Interface (CF CLI) installed for the plugin to work, since it is built on CF CLI. For installation instructions, see Download and Install the Cloud Foundry Command Line Interface [page 948]. The minimum version of CF CLI on which the plugin has been tested successfully is v6.20.
Procedure
1. Download the latest version of the plugin that is compatible with your operating system. On the Web page https://tools.hana.ondemand.com/#cloud, you will find the plugin under the SAP Cloud Platform Cloud Foundry CLI Plugins section with the name Service Fabrik based B&R.
2. Untar or unzip the downloaded archive.
3. Open the command line interface or terminal.
4. To install the plugin, enter the following command:
Note
If you are reinstalling the plugin, first uninstall the previous version using: cf uninstall-plugin
ServiceFabrikPlugin
5. Verify that the plugin has been installed successfully by entering cf plugins.
Related Information
https://tools.hana.ondemand.com/#cloud
Download and Install the Cloud Foundry Command Line Interface [page 948]
The Service Fabrik plugin provides commands that support backup and restore operations.
The following commands facilitate these operations:

● cf list-backup
Shows a list of all the service instance backups. These backups are specific to the space you are logged on to. If you have permission to access multiple spaces and need to view backups for a specific space, log on to that space in Cloud Foundry.
Possible error: Unauthorized Access - you do not have permission to access the space containing the service instance or the service instance itself. Verify that you have the required permission.

● cf list-backup <service_instance_name>
Shows a list of backups that are specific to the service instance within a space.
Possible error: Unauthorized Access - you do not have permission to access the space containing the service instance or the service instance itself. Verify that you have the required permission.

● cf start-restore <service_instance_name> <backup_ID>
Restores a service instance from the specified instance name and backup ID. Before issuing a restore command, ensure that the backup is available for the service instance. You can verify the state of the restore process using cf service <service_instance_name>.
Possible errors: Unauthorized Access - you do not have permission to access the space containing the service instance or the service instance itself. Verify that you have the required permission. Another concurrent access - another operation is already in progress for the service instance. You might need to try again after the current operation is completed.
To enable you to seamlessly integrate SAP Cloud Platform applications with existing on-premise identity
management infrastructures, SAP Cloud Platform introduces single sign-on (SSO) and identity federation
features. In SAP Cloud Platform, identity information is provided by identity providers (IdP), and not stored on SAP
Cloud Platform itself. You can have a different IdP for each subaccount you own, and this is configurable using the
Cockpit.
The following graphic illustrates the high-level architecture of identity management in SAP Cloud Platform.
If you don't have a corporate identity management infrastructure, you can use SAP ID Service. It is the default
identity provider for SAP Cloud Platform, and you can use it out of the box, without having to configure SSO and
identity federation.
Application authorizations are managed in the platform, and can be grouped to simplify administration.
In this section:
In SAP Cloud Platform, subaccounts get their users from identity providers. Administrators make sure that users
can only access their dedicated subaccount by making sure that there is a dedicated trust relationship only
between the identity providers and the respective subaccounts. Developers configure and deploy application-
based security artifacts containing authorizations, and administrators assign these authorizations using the
cockpit.
Authorizations
Applications and their users require diverse authorizations so that they can integrate seamlessly into SAP Cloud
Platform. Developers configure these authorizations on the level of the application descriptor files so that security
Security artifacts enable applications to communicate with other applications, for example, making or receiving
calls.
Trust Management
In SAP Cloud Platform, identity providers provide the users. You can have a different identity provider for each
subaccount you own. Using the cockpit, administrators establish the trust relationship between external identity
providers and the subaccounts.
This section describes the authorization and trust management in the Cloud Foundry environment of SAP Cloud
Platform.
The Cloud Foundry environment provides platform security functions such as business user authentication,
authentication of applications, authorization management, trust management, and other security functions. It
enables Cloud Foundry administrators to manage authorizations and trust. Developers design authorization
information and deploy this information in the Cloud Foundry environment.
Related Information
The Cloud Foundry environment provides platform security functions such as business user authentication,
authentication of applications, authorization management, trust management, and other security functions.
For more detailed descriptions of these functions, see the links in the following table:
Function: Access management in the Cloud Foundry environment of SAP Cloud Platform, including the User Account and Authentication service
See: Access Management in the Cloud Foundry Environment [page 2035]
The Cloud Foundry environment of SAP Cloud Platform adopts common industry security standards in order to
provide flexibility for customers through a high degree of interoperability with other vendors.
Identity Federation
Identity federation is the concept of linking and reusing electronic identities of a user across multiple identity
providers. This frees an application from the obligation to obtain and store users' credentials for authentication.
Instead, the application reuses an identity provider that is already storing users' electronic identities for
authentication, provided that the application trusts this identity provider.
This makes it possible to decouple and centralize authentication and authorization functionality. Several major
protocols have been developed to support the concept of identity federation:
● SAML 2.0
● OAuth 2.0
Security Assertion Markup Language (SAML 2.0) is an open standard based on XML for exchanging authentication
and authorization data of a principal (user) between an identity provider (IdP) and a service provider (SP). The
data is exchanged using messages called bearer assertions. A bearer is any party in possession of the assertion.
The integrity of the assertion is protected by XML encryption and an XML signature. SAML addresses the
requirement of web browser single sign-on across the Internet.
Authorizations are implemented through user groups, to which the user is assigned. How these user groups
correlate to specific authorizations depends on the service provider and the respective implementation.
For more information about the SAML specification, see the related link.
The OAuth 2.0 authorization framework is a protocol for delegating authorizations. It is not suitable for
authentication. The OAuth 2.0 specification defines four elements:
● Resource owner (usually the end user): A resource owner is capable of granting access to a protected resource.
● Resource server (for example, a cloud application/microservice): The resources are protected by hosts. You can reach them using REST endpoints. A resource server is capable of accepting and responding to protected resource requests using access tokens.
● OAuth 2.0 client (for example, application router): An application that makes protected resource requests on behalf of the resource owner and with its authorization.
● Authorization server (User Account and Authentication service): The User Account and Authentication service (UAA) issues access tokens for the client. Once the OAuth 2.0 client has been successfully authenticated by a SAML 2.0 compliant identity provider, it obtains the authorizations of the resource owner. An access token represents credentials used to access protected resources (see also the RFC 6749, OAuth 2.0 access token section in the related link).
The JWT token contains header and claims information (for example, issuer, subject, expiration time, consumer-
defined information), and is digitally signed with the private key of the authorization server (UAA service).
The cloud application and/or business application has a trust relationship with the authorization server. The trust
is configured in the environment variable <VCAP_SERVICES> for the application router and each microservice of
the business application. <VCAP_SERVICES> contains a credentials string for the UAA, which is created by the
respective service broker when the application router and/or the microservice is bound to the UAA service
instance. The credentials string contains, among other things, the public key corresponding to the private key of
the UAA. The public key is used to verify the token signature.
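The token structure described above can be illustrated with a short sketch. The token below is hand-built for illustration only (the issuer, subject, and scope values are invented), and the decode deliberately skips signature verification, which a real resource server must never do:

```python
import base64
import json

def b64url_decode(segment):
    """Decode a base64url segment, restoring the padding that JWTs strip."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def b64url_encode(data):
    """Encode a dict as an unpadded base64url JSON segment."""
    return base64.urlsafe_b64encode(json.dumps(data).encode()).decode().rstrip("=")

def peek_jwt(token):
    """Return (header, claims) of a JWT WITHOUT verifying the signature.
    A real application must first verify the signature with the UAA public
    key from VCAP_SERVICES before trusting any claim."""
    header_seg, claims_seg, _signature = token.split(".")
    return json.loads(b64url_decode(header_seg)), json.loads(b64url_decode(claims_seg))

# Hand-built sample token; issuer, subject, and scope are invented values.
sample = ".".join([
    b64url_encode({"alg": "RS256", "typ": "JWT"}),
    b64url_encode({"iss": "https://mytenant.authentication.example.com/oauth/token",
                   "sub": "business-user-42",
                   "exp": 1893456000,
                   "scope": ["myapp.Display"]}),
    "not-a-real-signature",
])

header, claims = peek_jwt(sample)
print(header["alg"], claims["sub"])  # RS256 business-user-42
```

The three dot-separated segments (header, claims, signature) are each base64url-encoded, which is why the decoder has to restore the stripped padding.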
Related Information
http://saml.xml.org/saml-specifications
https://tools.ietf.org/html/rfc6749#section-1.4
An identity provider provides the business users for SAP Cloud Platform. A mutual trust relationship between the
identity provider and SAP Cloud Platform is required.
The default identity provider is SAP ID Service. It is part of SAP Cloud Platform. The trust relationship is already
established.
For more information, see Default Identity Federation with SAP ID Service in the Cloud Foundry Environment [page
2066].
You also have the option to use any other identity provider. SAP Cloud Platform supports SAML 2.0 identity providers.
If you want to use one, you must configure your own custom SAML 2.0 identity provider and establish trust between
your SAP Cloud Platform subaccount and the identity provider.
● Configuring trust in a subaccount: Establish Trust with an SAML 2.0 Identity Provider in a Subaccount [page 2061]
● Configuring trust in an SAML 2.0 identity provider: Register SAP Cloud Platform Subaccount in the SAML 2.0 Identity Provider [page 2062]
The Cloud Foundry environment extends SAP Cloud Platform. It provides platform security functions such as
granting access to applications for business systems or business users, managing authorizations, and other
security functions.
Applications contain content (for example, Web content, micro services), which is deployed to different containers.
Business users use a user interface or a user agent to access the content of the applications, whereas business
systems use APIs to do so. The applications use OAuth 2.0 to authenticate against the User Account and
Authentication service.
The Cloud Foundry environment uses OAuth 2.0 with access tokens as its authentication method. A SAML identity
provider stores the business users. The applications (for example, Java or Node.js) are deployed in the runtime
container. There are multiple ways of accessing the applications in the runtime container:
● Web access
Business users access the runtime container over the Web, using a browser or a browser-based user interface.
● API access
Using APIs, business systems directly access the runtime container of the Cloud Foundry environment.
Web Access
Access to the static Web content requires user authentication and the appropriate authorization.
Applications authenticate using OAuth 2.0. They use the User Account and Authentication (UAA) service as OAuth
2.0 authorization server, the application router as OAuth 2.0 client, and application logic running in Node.js and
Java backend services as OAuth 2.0 resource server.
The authentication process is triggered by the application router component, which is configured in the design-
time artifact xs-app.json, if required. Authorization restricts the access to resources based on defined user
permissions. Resources in the context of applications are services provided by a container (for example, an OData
Web service) or SAP HANA database artifacts.
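A minimal xs-app.json might look like the following sketch. The route patterns, destination name, and scope value are illustrative and depend entirely on your application:

```json
{
  "welcomeFile": "index.html",
  "authenticationMethod": "route",
  "routes": [
    {
      "source": "^/odata/(.*)$",
      "destination": "backend",
      "authenticationType": "xsuaa",
      "scope": "$XSAPPNAME.Display"
    },
    {
      "source": "^/(.*)$",
      "localDir": "resources",
      "authenticationType": "xsuaa"
    }
  ]
}
```

Here the first route forwards authenticated requests to a back-end destination and additionally requires a scope, while the second serves static content to any authenticated user.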
API Access
A business system uses APIs to directly access the resources in the runtime container. After authenticating
at the UAA, which acts as OAuth 2.0 authorization server, the business system gets the appropriate access token.
This token enables the APIs to make calls into the applications of the runtime containers.
You want to integrate an application in the Cloud Foundry environment. This means that the Cloud Foundry
environment needs to know your application. Using the User Account and Authentication (UAA) service, you
initially integrate your application into the platform environment of SAP Cloud Platform. The application needs to
authenticate against the User Account and Authentication service. The authentication concept of the Cloud
Foundry environment is OAuth 2.0.
In this context, the UAA acts as OAuth 2.0 authorization server. The application router itself is the OAuth 2.0 client.
To integrate the application into authentication, you must create a service instance of the xsuaa service and bind
it to the application router and the application containers. From the OAuth 2.0 perspective, the containers (for
example the Node.js container) are OAuth 2.0 resource servers. Their container security API validates the access
tokens against the UAA.
Runtime Containers
Static Web content is deployed into the application router; application logic is deployed, for example, into Node.js
and Java runtime containers.
Related Information
The User Account and Authentication service (UAA) is the central infrastructure component of the Cloud Foundry
environment at SAP Cloud Platform for user authentication and authorization.
UAA Instances
The Cloud Foundry environment at SAP Cloud Platform distinguishes between two user types and manages each
type in a separate UAA instance. This means the two types are completely separated:
● Platform users perform technical development, deployment, and administration tasks. They use tools like the
cloud cockpit and the cf command line client. These users typically have authorizations for certain
organizations and/or spaces, and other technical services in the Cloud Foundry environment. Apart from
authentication to the platform using the cf command line client, usually there is no direct interaction between
users and the platform UAA.
● Business users only use business applications deployed to SAP Cloud Platform. They do not use SAP Cloud
Platform for technical development, deployment, and administration tasks. A business user is always bound to
a specific tenant which holds the information about the user’s authorizations. Tenants, business users, and
their authorizations are managed by another UAA instance using the extended services for UAA (XSUAA). This
component additionally provides a simple programming model for authentication and authorization in
business applications.
This documentation refers to business users and the extended services of the UAA.
The UAA uses OAuth 2.0 for authentication of the application. In the context of the OAuth flow, the UAA provides
scopes, role templates, and attributes to applications deployed in the runtime of the Cloud Foundry environment.
If a business user has a certain scope, an access token and a refresh token with this scope are issued. These
enable the user to run the application, while the scopes are used to define the model that describes who has the
authorization to start an application that runs in the runtime of the Cloud Foundry environment at SAP Cloud
Platform.
The UAA service provides a programming model for developers; it enables developers to define templates that can
be used to build role and authorization models for business users of applications deployed to the runtime of the
Cloud Foundry environment. In the OAuth 2.0 workflow, the UAA acts as an OAuth server.
Note
The Cloud Foundry environment also supports the following token grant types of Cloud Foundry.
Related Information
Cloud Foundry API Reference for User Account and Authentication Server API
The XSUAA programming model supports multiple tenants in SAP Cloud Platform. In the context of OAuth 2.0, the
security APIs for the application runtime container act as resource servers and provide the security context for user
authentication in an application.
OAuth 2.0 uses SAML 2.0 for authentication of the users, who are provided by identity providers.
Multiple Tenants
SAP Cloud Platform supports multitenant applications. These applications are used by multiple tenants. With this
approach, the tenants share the same code base, but they are not allowed to see the data of other tenants. The
application must maintain data separation between tenants.
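The data-separation requirement can be sketched in a few lines: every read and write is keyed by the tenant identified from the token, so one tenant's code path can never return another tenant's rows. The class and field names below are illustrative, not part of any SAP API:

```python
# Sketch of tenant data separation in a multitenant application:
# all access is partitioned by a tenant ID taken from the validated token.
class TenantStore:
    def __init__(self):
        self._rows = {}  # {tenant_id: [records]}

    def add(self, tenant_id, record):
        self._rows.setdefault(tenant_id, []).append(record)

    def list(self, tenant_id):
        # Only the caller's own tenant partition is ever returned.
        return list(self._rows.get(tenant_id, []))

store = TenantStore()
store.add("tenant-a", {"customer": "ACME"})
store.add("tenant-b", {"customer": "Globex"})
print(store.list("tenant-a"))  # [{'customer': 'ACME'}]
```

The shared code base serves both tenants, but the lookup key keeps their data strictly apart.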
In the Cloud Foundry runtime, SAP Cloud Platform provides the application router as an option. It is a point of entry
for web applications running in the Cloud Foundry environment at SAP Cloud Platform; the application
router is part of the application and triggers the user authentication process in the UAA. In the OAuth 2.0 workflow
for web access, the application (including the application router and any bound containers) is the OAuth 2.0 client,
which initiates user authentication and authorization.
In the context of OAuth 2.0, the security APIs for the runtime container act as resource servers. The APIs provide
the security context for user authentication in an application of the Cloud Foundry environment. When the user
authentication process is triggered, the container security APIs receive an Authorization: Bearer HTTP
header that contains an OAuth access token in the JSON Web token (JWT) format from the application router.
OAuth access tokens expire after a certain period of time. After they have expired, they cannot be used for
authentication anymore. The application router supports the token refresh flow according to the OAuth standard.
The access token and the refresh token contain information describing the user and any authorization scopes.
These must be validated by the container security APIs using the security libraries provided by the UAA.
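The expiry handling described above can be sketched as a simple check on the exp claim; an expired token must not be reused, and the OAuth 2.0 client instead obtains a fresh access token via the refresh flow. The leeway value is an assumption for illustration:

```python
import time

def is_token_usable(claims, now=None, leeway=30):
    """Return True while the token's exp claim (a Unix timestamp) lies
    safely in the future. Expired tokens must be replaced by the client
    using its refresh token, per the OAuth 2.0 refresh flow."""
    now = time.time() if now is None else now
    return claims.get("exp", 0) > now + leeway

# exp values are Unix timestamps; these are example values.
print(is_token_usable({"exp": 1577836800}))   # expired in 2020 -> False
print(is_token_usable({"exp": 100}, now=50))  # still valid at t=50 -> True
```

A missing exp claim is treated as already expired, which is the safe default.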
Applications deployed in the Cloud Foundry runtime at SAP Cloud Platform can use the security API to check
whether scope values have been assigned to the user or application. The container security API provides the
security context for applications (for example, scopes, attributes, token information) of the application; the JWT
token initializes the security context of the application. A security API is available for the following application
runtime containers:
● Java API using Spring: Authentication for Java Resource Servers [page 2045]
● Java web application using sap_java_buildpack: Configuring Authentication and Authorization [page 1029]
For more information, see the related links on authentication. They describe how you configure authentication for
Node.js and Java.
Each runtime container of an application must use the container security API to provide the security
context. The security context is initialized with the JWT token and enables you to perform the following functions:
The Cloud Foundry environment extends SAP Cloud Platform. It provides platform security functions such as
business user authentication, authorization management, and other security functions for access to the
applications in the runtime container. To access the runtime container, the business user can use a browser or a
browser-based user interface.
The following diagram shows the architecture with the components that are responsible for business user
authentication, authorization management, and security. It is not mandatory for applications to use the User
Account and Authentication service and the application router. The following identity providers can be used:
● SAP ID Service
● SAP Cloud Platform Identity Authentication service
● Any SAML 2.0 identity provider
Applications use OAuth 2.0. When business users access an application, the application router acts as OAuth
client and redirects their request to the OAuth authorization server for authentication (see the Applications
section). Runtime containers act as resource servers, using the container security API of the relevant container
(for example, Java) to validate the token issued by the OAuth authorization server.
Related Information
In SAP Cloud Platform, the UAA uses OAuth 2.0 for authentication of the applications deployed in the runtime of
the Cloud Foundry environment.
To access an application with OAuth 2.0, an OAuth 2.0 client must authenticate with an access token. The flow of
the authorization code grant type is used to get an initial access token and a refresh token from an OAuth 2.0
authorization server. The OAuth 2.0 client can then use the refresh token to request new access tokens itself
whenever an access token expires.
1. A user of a browser-based application (user agent) sends a request to the OAuth 2.0 client.
2. Since the application has no access token, the client redirects the request to the browser.
3. The application requests an authorization code at the authorization server.
4. The authorization server checks the validity of the request and grants an authorization code to the client.
5. The client receives the authorization code and requests an access token from the authorization server.
6. The authorization server issues an access token and grants it to the client.
7. The client presents the access token to request the resource on the resource server.
8. In the final step, the resource server validates the access token and allows the client to access the resource.
The authorization code grant type conforms to the RFC 6749 standard of the IETF. For more information, see
http://ietf.org .
Tip
To mitigate the impact of cross-site scripting attacks, we recommend that you avoid sending access tokens to
the browser.
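The code-for-token exchange in steps 5 and 6 can be sketched as follows. The request is only assembled, not sent; the client ID and secret are placeholders for the values that the bound xsuaa service instance provides in VCAP_SERVICES, and /oauth/token is the standard UAA token endpoint:

```python
import base64

# Placeholder values -- in a real deployment these come from the credentials
# block of the bound xsuaa service instance in VCAP_SERVICES.
UAA_URL = "https://mytenant.authentication.example.com"
CLIENT_ID = "sb-my-app"
CLIENT_SECRET = "my-secret"

def build_token_request(authorization_code, redirect_uri):
    """Assemble (but do not send) the RFC 6749 token request with which the
    OAuth 2.0 client exchanges the authorization code for an access token."""
    basic = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
    return {
        "url": f"{UAA_URL}/oauth/token",
        "headers": {
            "Authorization": "Basic " + basic,  # client authentication
            "Content-Type": "application/x-www-form-urlencoded",
        },
        "body": {
            "grant_type": "authorization_code",
            "code": authorization_code,
            "redirect_uri": redirect_uri,
        },
    }

request = build_token_request("SplxlOBeZQQYbYS6WxSbIA",
                              "https://my-app.example.com/callback")
print(request["url"])
```

The same request shape, with grant_type set to refresh_token, is used later to renew an expired access token.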
When an application is using the application router, it shares a security session with the browser. The security
session holds the access and refresh token. Requests to back-end systems include the JSON web token (if
configured).
The Cloud Foundry environment extends SAP Cloud Platform. It provides platform security functions such as
business system authentication, authorization management, and other security functions to enable business
systems to access the applications (for example, Java or Node.js) in the runtime container. Business systems use
APIs to access the runtime container.
The following diagram shows the architecture with the components that are responsible for business system
authentication, authorization management, and security.
Business systems use OAuth 2.0 to access applications in the runtime container. The UAA acts as an OAuth
authorization server and issues an access token. It enables the business system to directly access an application
in the runtime container. Runtime containers act as OAuth resource servers, using the container security API of
the relevant container (for example, Java) to validate the token issued by the OAuth authorization server.
Related Information
The SAML 2.0 assertion bearer flow of OAuth 2.0 is the authentication method here. Business users log on to the
Neo environment of SAP Cloud Platform. During logon, they obtain a token. This token enables them to connect to
the Cloud Foundry environment of SAP Cloud Platform and access the desired resources.
Note
The SAML 2.0 bearer assertion can be issued by the identity provider.
The SAML 2.0 assertion bearer flow has the following steps:
1. The business user authenticates at the Neo environment and accesses an application (running in the Neo
environment) that uses a resource server hosted in the Cloud Foundry environment of SAP Cloud Platform.
2. The application of the Neo environment uses the destination API to issue a SAML bearer assertion. The
application sends the SAML bearer assertion to the UAA.
3. The UAA issues the JSON web token with the SAML bearer assertion.
4. The application gets the JSON web token with the received SAML bearer assertion.
5. The application can access the resource server on behalf of the business user.
Related Information
The Cloud Foundry environment at SAP Cloud Platform uses the User Account and Authentication service (UAA)
and the application router to manage user logon and logoff requests. The UAA service centrally manages the
issuing of tokens for propagating the identity to application containers and the SAP HANA database. Applications
contain content, which is deployed to containers. Access to the deployed content requires user authentication.
The following components are required for authentication:
● Application router
The application router is a part of the application instance. It triggers authentication of the user at the UAA.
The application instance (with containers and application router) is the OAuth 2.0 client, which handles
Related Information
The Cloud Foundry environment at SAP Cloud Platform provides Java runtimes to which you can deploy your Java
applications.
Authentication for Java applications relies on the OAuth 2.0 protocol, which is based on central authentication at
the UAA. The UAA vouches for the authenticated user's identity using an OAuth 2.0 access token. The current
implementation uses a JSON Web Token (JWT) as the access token. This is a signed text-based token in JSON
syntax. The Java application is specified in the related manifest file.
The Java buildpacks use different authentication methods. For more information, see the related links.
During application deployment, the buildpack ensures that the correct SAP Java Virtual Machine (JVM) is
provided and that the appropriate data sources are bound to the corresponding application container.
Note
SAP Cloud Platform makes no assumptions about which frameworks and libraries to use to implement the Java
micro service.
Configure Authentication for Java API Using Spring Security [page 2046]
Configure Authentication for SAP Java Buildpacks [page 2049]
Applications using the Spring libraries can use the corresponding Spring security libraries.
Prerequisites
● You have downloaded and consumed the XS JAVA 1 libraries (see the related link).
Context
SAP Cloud Platform provides a module for offline validation of the received JWT token. The signature is validated
using the verification key received from the service binding to the xsuaa service.
To authenticate requests with JSON Web tokens (JWT), you need to maintain the following files:
Procedure
1. To authenticate requests with JSON web tokens (JWT), add the relevant libraries to your build manifest, for
example, pom.xml, with Maven as shown in the following example:
Sample Code
<dependency>
 <groupId>com.sap.xs2.security</groupId>
 <artifactId>java-container-security</artifactId>
</dependency>
2. To specify the required listeners in your web.xml, add the following code:
Sample Code
<listener>
<listener-class>org.springframework.web.context.ContextLoaderListener</
listener-class>
</listener>
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>/WEB-INF/spring-security.xml</param-value>
</context-param>
<filter>
<filter-name>springSecurityFilterChain</filter-name>
<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-
class>
</filter>
<filter-mapping>
<filter-name>springSecurityFilterChain</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
3. Configure the Spring beans by adding the spring-security.xml file. The most important line in this service
context is illustrated in the following example:
Sample Code
This file specifies which parts of the microservice are to be made secure, for example, an HTTP POST request
to "/rest/addressbook/deletedata" requires the Delete scope.
4. Configure the security properties of the service instance in application.properties.
# parameters of hello-world-java
xs.appname=java-hello-world
The following code snippet shows that not only the requests are authenticated but also that the user principal
is available.
Related Information
The SAP Java buildpack includes the XSUAA authentication method. This makes an offline validation of the
received JWT token possible. The signature is validated using the verification key received from the service binding
to the xsuaa service.
Prerequisites
You have created the xsuaa service instance (see the related link).
Context
This section provides you with code samples for configuring authentication. SAP Cloud Platform offers an offline
validation of the JWT token. It does not require an additional call to the User Account and Authentication service
(UAA). The trust for this offline validation has been created by binding the xsuaa service instance to your
application.
Procedure
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.*;
import javax.servlet.http.*;
/**
 * Servlet implementation class HomeServlet
 */
@WebServlet("/*")
@ServletSecurity(@HttpConstraint(rolesAllowed = { "Display" }))
public class HomeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The SAP Java buildpack has already authenticated the user and
        // validated the JWT before the request reaches this servlet.
        response.getWriter().println("Hello, " + request.getRemoteUser());
    }
}
Related Information
A collection of Node.js packages developed by SAP is provided as part of the Cloud Foundry environment at SAP
Cloud Platform.
SAP Cloud Platform includes a selection of standard Node.js packages, which are available for download and use
from the SAP NPM public registry to customers and partners. SAP Cloud Platform only includes the Node.js
packages with long-term support (LTS). For more information, see https://nodejs.org . Enabling access to an
NPM registry requires configuration using the SAP NPM Registry.
● You can use the standard NPM package manager to download the packages from the SAP NPM registry:
https://npm.sap.com
● As an alternative option, you can use the SAP CLIENT LIB 1.0. Search for the software component named
SAP CLIENT LIB 1.0 on the ONE Support Launchpad.
The SAP CLIENT LIB 1.0 package contains the following modules:
Tip
For more details of the package contents, see the README file in the corresponding package.
● @sap/approuter: The application router is the single entry point for the (business) application.
● @sap/xssec: The client security library, including the XS advanced container security API for Node.js.
The application router is the single entry point for the (business) application. It has the responsibility to serve
static content, authenticate users, rewrite URLs, and proxy requests to other micro services while propagating
user information.
For more information, see the applications section of SAP Cloud Platform.
@sap/xssec
The client security library includes the container security API for Node.js.
Authentication for Node.js applications relies on the OAuth 2.0 protocol, which is based on central
authentication at the User Account and Authentication (UAA) server, which then vouches for the authenticated
user's identity by means of a so-called OAuth access token. The implementation uses a JSON Web Token (JWT) as
the access token; this is a signed text-based token formatted according to the JSON syntax.
The trust for the offline validation is created by binding the UAA service instance to your application. The key for
validation of tokens is included in the credentials section in the environment variable <VCAP_SERVICES>. By
default, the offline validation check only accepts tokens intended for the same OAuth 2.0 client in the same UAA
subaccount. This makes sense and covers the majority of use cases. However, if an application needs to consume
tokens that were issued either for different OAuth 2.0 clients or for different subaccounts, you can specify a
dedicated access control list (ACL) entry in an environment variable named <SAP_JWT_TRUST_ACL>. The name of
the OAuth 2.0 client has the prefix sb-. The content is a JSON string. It contains an array of subaccounts and
OAuth 2.0 clients. To establish trust with other OAuth 2.0 clients and/or subaccounts, specify the relevant OAuth
2.0 client IDs and subaccounts.
Caution
For testing purposes, use an asterisk (*). This setting should never be used for productive applications.
Subaccounts are not used for on-premise systems. The value for the subaccount is uaa.
SAP_JWT_TRUST_ACL:
[ {"clientid":"<OAuth_2.0_client_ID>","subaccount":"<subaccount>"},...]
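The ACL check described above can be sketched as follows. The client ID and subaccount values are invented examples, and the matching logic is a simplified illustration of the documented semantics, not the library's actual implementation:

```python
import json

def token_trusted(acl_json, token_client_id, token_subaccount):
    """Return True if the token's OAuth 2.0 client/subaccount pair matches an
    entry of SAP_JWT_TRUST_ACL. An asterisk matches anything and must never
    be used in productive applications."""
    for entry in json.loads(acl_json):
        client_ok = entry["clientid"] in ("*", token_client_id)
        subaccount_ok = entry["subaccount"] in ("*", token_subaccount)
        if client_ok and subaccount_ok:
            return True
    return False

# Example ACL: trust tokens issued for one additional client/subaccount pair.
acl = '[{"clientid": "sb-other-app", "subaccount": "partner-subaccount"}]'
print(token_trusted(acl, "sb-other-app", "partner-subaccount"))  # True
print(token_trusted(acl, "sb-unknown", "partner-subaccount"))    # False
```

Without a matching ACL entry, the offline validation keeps its default behavior and rejects tokens issued for a different OAuth 2.0 client or subaccount.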
In a typical deployment scenario, your node application consists of several parts, which appear as separate
application modules in your manifest file, for example:
● Application logic
This application module (<myAppName>/js/) contains the application logic: code written in Node.js. This
module can make use of the XS Advanced Container Security API for Node.js.
● UI client
This application module (<myAppName>/web/) is responsible for the UI layer; this module can make use of the
application router functionality (defined in the file xs-app.json).
Note
The application logic written in Node.js and the application router should be bound to one and the same UAA
service instance so that these two parts use the same OAuth client credentials.
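In the manifest file, this shared binding can look roughly as follows. This is a sketch with invented application and service names; the key point is that both modules list the same UAA service instance.

```yaml
applications:
- name: myAppName-js        # application logic (Node.js)
  path: myAppName/js
  services:
  - my-uaa                  # the shared UAA service instance
- name: myAppName-web       # UI client with the application router
  path: myAppName/web
  services:
  - my-uaa                  # bound to the same instance
```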
Note
To enable tracing, set the environment variable DEBUG as follows: DEBUG=xssec:*.
Usage
All SAP modules, for example @sap/xssec, are located in the namespace of the SAP NPM registry. For this
reason, you must configure the SAP NPM registry in addition to the default NPM registry.
If you use express and passport, you can easily plug in a ready-made authentication strategy.
Sample Code
We recommend that you disable the session as in the example above. Each request comes with a JWT, so it is
authenticated explicitly and identifies the user. If you still need the session, you can enable it, but then you should
also implement user serialization/deserialization and some form of session persistence.
Tip
For more details of the package contents, see the README file in the corresponding package.
API Description

createSecurityContext
Creates the "security context" by validating the received access token against the credentials put into the application's environment via the UAA service binding. The security context provides, among others, the following methods:
● getLogonName
● getGivenName
● getFamilyName
● getEmail
● getHdbToken
● getAdditionalAuthAttribute
● getExpirationDate
● getGrantType

checkLocalScope
Checks a scope that is published by the current application in the xs-security.json file.

getToken
Returns a token that can be used to connect to the SAP HANA database. If the token that the security context has been instantiated with is a foreign token (meaning that the OAuth client contained in the token and the OAuth client of the current application do not match), "null" is returned instead of a token. The following attributes are available:
● namespace
Tokens can be used in different contexts, for example, to access the SAP HANA database, to access another XS advanced-based service such as the Job Scheduler, or even to access other applications or containers. To differentiate between these use cases, the namespace is used. The supported namespaces (for example, SYSTEM) are defined in lib/constants.js.
● name
The name is used to differentiate between tokens in a given namespace, for example, "HDB" for the SAP HANA database. These names are also defined in the file lib/constants.js.

hasAttributes
Returns "true" if the token contains any XS advanced user attributes; otherwise "false".

getAttribute
Returns the attribute exactly as it is contained in the access token. If no attribute with the given name is contained in the access token, "null" is returned. If the token that the security context has been instantiated with is a foreign token (meaning that the OAuth client contained in the token and the OAuth client of the current application do not match), "null" is returned regardless of whether the requested attribute is contained in the token or not. The following attribute is available:
● name
The name of the attribute that is requested

isInForeignMode
Returns "true" if the token that the security context has been instantiated with is a foreign token that was not originally issued for the current application; otherwise "false".

getIdentityZone
Returns the subaccount that the access token has been issued for.
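As a rough model, the security context can be thought of as a thin wrapper around the validated token's payload. The claim names, sample values, and helper function below are invented for illustration; the real behavior is implemented in @sap/xssec.

```javascript
// Rough, illustrative model of the security context accessors listed above.
// It wraps an already-validated token payload; claim names and values are
// invented for the example (the real implementation is in @sap/xssec).
function createSecurityContext(payload, ownClientId) {
  const foreign = payload.client_id !== ownClientId;
  return {
    getLogonName: () => payload.user_name,
    getEmail: () => payload.email,
    getExpirationDate: () => new Date(payload.exp * 1000),
    // Local scopes are prefixed with the application's xsappname.
    checkLocalScope: (scope) =>
      (payload.scope || []).includes(`${payload.xsappname}.${scope}`),
    hasAttributes: () =>
      !!payload.attributes && Object.keys(payload.attributes).length > 0,
    // For a foreign token, attributes are never returned.
    getAttribute: (name) =>
      foreign ? null : (payload.attributes || {})[name] ?? null,
    isInForeignMode: () => foreign,
    getIdentityZone: () => payload.zid,
  };
}

const ctx = createSecurityContext(
  {
    user_name: 'john.doe',
    email: 'john.doe@example.com',
    exp: 1893456000,
    client_id: 'sb-myapp',
    xsappname: 'myapp',
    scope: ['myapp.Display'],
    attributes: { country: ['DE'] },
    zid: 'my-subaccount',
  },
  'sb-myapp'
);

console.log(ctx.checkLocalScope('Display')); // true
console.log(ctx.getAttribute('country'));    // [ 'DE' ]
console.log(ctx.isInForeignMode());          // false
```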
In the Cloud Foundry environment of SAP Cloud Platform, an application makes API calls to other applications.
Depending on the use case, it may also be necessary for an application from an external system to make API calls
into applications running in the Cloud Foundry environment.
The User Account and Authentication (UAA) service is the central authentication service in the Cloud Foundry
environment of SAP Cloud Platform. The UAA service and the application router manage user logon and logoff
requests. The UAA service centrally manages the issuing of JSON web tokens for propagating the identity to the
applications in the Cloud Foundry environment.
The interaction is browser based; the user's identity is propagated to the applications by means of JSON web tokens.
If applications from an external system must make API calls to applications running in the Cloud Foundry
environment, administrators must make sure that these applications can communicate with the relevant
applications in the external system. In this case, the SAML bearer assertion flow or the client credentials flow identifies the caller.
Authorization restricts access to resources and services based on defined user permissions.
Applications of the Cloud Foundry environment at SAP Cloud Platform contain content that is deployed to backing
services. Access to the content requires not only user authentication but also the appropriate authorization.
The access-control process controlled by the authorization policy can be divided into the following two phases:
● Authorization
Where access is authorized, as defined in the deployment security descriptor (xs-security.json)
● Policy enforcement
The general rules by which requests for access to resources are either approved or denied
Access enforcement is based on user identity and performed in the following distinct application components:
● Application router
After successful authentication at the application router, the application router starts an authorization check
(based on scopes).
● Application containers
For authentication purposes, containers, for example the Node.js and Java containers, receive an HTTP header
Authorization: Bearer <JWT token> from the application router; the token contains details of the user
and scope. This token must be validated using the Container Security API. The validation checks are based on
scopes and attributes defined in the deployment security descriptor (xs-security.json).
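The hand-over described above can be sketched as follows: the container receives the Authorization: Bearer <JWT token> header, extracts the token, and (after signature validation, which is omitted here and performed by the Container Security API) checks scopes from the decoded payload. Header and claim values are invented for the example.

```javascript
// Illustrative sketch of how a container handles the Authorization header
// passed on by the application router: extract the bearer token, decode its
// payload, and check a scope. Signature validation (done by the Container
// Security API) is omitted; all values are invented for the example.
const b64url = (s) => Buffer.from(s).toString('base64url');

// A fake, unsigned token standing in for the JWT from the application router.
const token =
  b64url(JSON.stringify({ alg: 'none' })) +
  '.' +
  b64url(JSON.stringify({ user_name: 'john.doe', scope: ['myapp.Display'] })) +
  '.';

const headers = { authorization: `Bearer ${token}` };

function getBearerToken(hdrs) {
  const value = hdrs.authorization || '';
  return value.startsWith('Bearer ') ? value.slice('Bearer '.length) : null;
}

function decodePayload(jwt) {
  const payloadPart = jwt.split('.')[1];
  return JSON.parse(Buffer.from(payloadPart, 'base64url').toString('utf8'));
}

const payload = decodePayload(getBearerToken(headers));
console.log(payload.user_name);                       // john.doe
console.log(payload.scope.includes('myapp.Display')); // true
```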
This section describes the tasks of administrators in the Cloud Foundry environment of SAP Cloud Platform.
Administrators ensure user authentication and assign authorization information to users or user groups.
First, you determine the security administrators for your subaccount. Since identity providers provide the users
or user groups, you then make sure that there is a trust relationship between your subaccount and the identity
provider; this is a prerequisite for authentication. After that, you can manage the authorizations of the business users.
Note
Only the administrator who created the current subaccount can use the cockpit of SAP Cloud Platform to add
other administrators as members. The added administrators can use all the security administration functions.
This enables the administrators to manage authentication and authorization in this subaccount.
In the Cloud Foundry environment of SAP Cloud Platform, identity providers provide the business users. If you use
external SAML 2.0 identity providers, you must configure the trust relationship using the cockpit. The respective
subaccount must have a trust relationship with the SAML 2.0 identity provider. Using the cockpit, you, as an
administrator of the Cloud Foundry environment, establish this trust relationship.
Authorization
In the Cloud Foundry environment, application developers create and deploy application-based authorization
artifacts for business users. Administrators use this information to assign roles, build role collections, and assign
these collections to business users or user groups. In this way, they control the users' permissions.
All of the following steps are performed by an administrator of the Cloud Foundry environment using the SAP Cloud Platform cockpit.
1. Use an existing role or create a new one using role templates (see Maintain Roles for Applications [page 2070]).
2. Create a role collection and assign roles to it (see Maintain Role Collections [page 2072]).
3. (If you do not use SAP ID Service) Assign the role collections to SAML 2.0 user groups.
4. Assign the role collection to the business users provided by an SAML 2.0 identity provider (see Map Role Collections to User Groups [page 2075]).
Related Information
Trust and Federation with SAML 2.0 Identity Providers [page 2059]
When you create a subaccount, SAP Cloud Platform automatically grants your user the role for the administration
of business users and their authorizations in the subaccount. Having this role, you can also add or remove other
users who will then also be user and role administrators of this subaccount.
After having created a subaccount in the Cloud Foundry environment of SAP Cloud Platform, your user
automatically has the administration role. This means that your user also has access to the Security tab, where you can perform security administration tasks.
You can delegate the security administration to other users. Simply add the users as members to your
subaccount. SAP Cloud Platform grants the User & Role Administrator role to these users.
To see the User & Role Administrator role and all users with this role, go to your subaccount (see Navigate to
Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953]) and choose Security
Administrators .
You can also remove users who are not supposed to have this role.
Note
All users with the User & Role Administrator role can manage this subaccount, including the security
administration tasks.
Related Information
You can add security administrators by adding users to your subaccount and by assigning the corresponding role
to them.
Prerequisites
Context
A popup opens, where you can enter the user ID and assign roles.
5. Enter the user IDs of the users you want to add as members.
Note
Use the e-mail address as user ID.
6. To assign roles, make sure that the User & Role Administrator checkbox is checked.
7. To save your changes, choose OK.
Related Information
You can remove security administrators from your subaccount by deleting members from your subaccount.
Prerequisites
To remove security administrators from your subaccount, take the following steps.
Procedure
You see the list of the security administrators with their respective roles.
4. To remove a security administrator from your subaccount, choose (Delete).
Related Information
SAP Cloud Platform supports SAML 2.0 identity providers. You have the option to use any SAML 2.0 standard-compliant identity provider.
SAP Cloud Platform supports the use of single sign-on (SSO) authentication based on Security Assertion Markup
Language 2.0 (SAML). An SAML identity provider is used by an SAML service provider to authenticate users who
sign in to an application by means of single sign-on. The User Account and Authentication (UAA) component is the
central infrastructure component of the runtime platform for business user authentication and authorization
management. The users are stored in the SAML 2.0 identity provider whereas the user authorizations are stored
inside the UAA component.
To make use of your identity provider's user base you must first establish a mutual trust relationship with your
SAML 2.0 identity provider. This configuration consists of two steps.
● Establish trust with the SAML 2.0 identity provider in your SAP Cloud Platform subaccount
● Register your SAP Cloud Platform subaccount in the SAML 2.0 identity provider
To establish trust with your SAML 2.0 identity provider, perform one of the applicable procedures below.
● Establish Trust and Federation with UAA Using SAP Cloud Platform Identity Authentication Service [page
2060]
Note
How you assign users to their authorizations depends on the type of trust configuration. If you are using the
default trust configuration via SAP ID Service, you can assign users directly to role collections. For more
information, see Default Identity Federation with SAP ID Service in the Cloud Foundry Environment [page
2066].
However, if you are using a custom trust configuration as described in this topic, you have to assign SAML
2.0 groups to role collections. Assigning users to their authorizations is part of application administration
which is described here. For more information, see Map Role Collections to User Groups [page 2075].
The SAML 2.0 identity provider provides the business users, who belong to user groups. It is efficient to use
federation by assigning role collections to one or multiple SAML 2.0 user groups. The role collection contains
all the authorizations that are necessary for this user group. This saves time when you add new business
users. Simply add the users to the respective user group(s), and the new business users automatically get all
the authorizations that are included in the role collection.
Related Information
To establish trust, configure the trust configuration of the SAML 2.0 identity provider in your subaccount using the
cockpit. In this case, the SAML 2.0 identity provider is SAP Cloud Platform Identity Authentication service. Next,
register your subaccount in User Account and Authentication service using the administration console of User
Account and Authentication service. To complete federation, maintain the federation attributes of the SAML 2.0
user groups. This makes sure that you can assign authorizations to user groups.
Context
You want to use an SAML 2.0 identity provider, for example, SAP Cloud Platform Identity Authentication service.
This is where the business users for SAP Cloud Platform are stored.
Prerequisites
Context
You must establish a trust relationship with an SAML 2.0 identity provider in your subaccount in SAP Cloud
Platform. The following procedure describes how you establish trust in the SAP Cloud Platform Identity
Authentication service.
Procedure
1. Go to your subaccount (see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page
953]) and choose Security Trust Configuration in the SAP Cloud Platform cockpit.
2. Choose New Trust Configuration.
3. Enter a name and a description that make it clear that the trust configuration refers to the identity provider.
4. To get the relevant metadata, go to https://<Identity_Authentication_tenant>.accounts.ondemand.com/saml2/metadata.
5. Copy the SAML 2.0 metadata and paste it into the Metadata field.
6. To validate the metadata, choose Parse. This will fill the Subject and Issuer fields with the relevant data from
the SAP Cloud Platform Identity Authentication service - your SAML 2.0 identity provider.
The name of the new trust configuration now shows the value
<Identity_Authentication_tenant>.accounts.ondemand.com. It represents the custom identity
provider SAP Cloud Platform Identity Authentication service.
This also fills the fields for the single sign-on URLs and the single logout URLs.
7. Save your changes.
8. (Optional) If you do not need SAP ID Service, set it to inactive.
You have successfully configured trust in the SAP Cloud Platform Identity Authentication service, which is your
SAML 2.0 identity provider.
An SAML service provider interacts with an SAML 2.0 identity provider to authenticate users signing in by means
of a single sign-on (SSO) mechanism. In this scenario, the User Account and Authentication (UAA) service acts as
a service provider representing a single subaccount. To establish trust between an identity provider and a
subaccount, you must register your subaccount by providing the SAML details for web-based authentication in the
identity provider itself. The identity provider we use here is the SAP Cloud Platform Identity Authentication service
Context
Administrators must configure trust on both sides, in the service provider and in the SAML identity provider. This
description covers the side of the identity provider (SAP Cloud Platform Identity Authentication service). The trust
configuration on the side of the SAP Cloud Platform Identity Authentication service must contain the following
items:
We illustrate the process of configuring trust in the service provider by describing how administrators use the
administration console of SAP Cloud Platform Identity Authentication Service to register the subaccount.
To establish trust from a tenant of SAP Cloud Platform Identity Authentication service to a subaccount, assign a
metadata file and define attribute details. The SAML 2.0 assertion includes these attributes. With the UAA as
SAML 2.0 service provider, they are used for automatic assignment of UAA authorizations based on information
maintained in the identity provider.
Procedure
1. Open the administration console of SAP Cloud Platform Identity Authentication service.
https://<Identity_Authentication_service>/admin
https://<tenant_name>.accounts.ondemand.com/admin
Note
In the default configuration, the URL redirects to a logon screen, which requires the credentials of an
administrator user to complete the logon process.
2. To go to the service provider configuration, choose Applications & Resources Applications in the menu
or the Applications tile.
3. To add a new SAML service provider, create a new application by using the + Add button.
4. Choose a name for the application that clearly identifies it as your new service provider. Save your changes.
Users see this name in the logon screen when the authentication is requested by the UAA service. Seeing the
name, they know which application they currently access after authentication.
5. Choose SAML 2.0 Configuration and import the relevant metadata XML file. Save your changes.
Note
Use the metadata XML file of your subaccount. The subdomain name is identical to the tenant name. You
find the metadata file in the following location:
Example
This is an example for the eu10 landscape. Please note that the domain name can be different.
https://<tenant_name>.authentication.eu10.hana.ondemand.com/saml/metadata
If the contents of the metadata XML file are valid, the parsing process extracts the information required to
populate the remaining fields of the SAML configuration. It provides the name, the URLs of the assertion
consumer service and single logout endpoints, and the signing certificate.
6. Choose Name ID Attribute and select E-Mail as a unique attribute. Save your changes.
7. Choose Assertion Attributes, use +Add to add a multi-value user attribute, and enter Groups (capitalized) as
assertion attribute name for the Groups user attribute. Save your changes.
SAP ID service is the place where you register to get initial access to SAP Cloud Platform. If you are a new user, you
can use the self-service registration option of SAP ID Service.
Trust to SAP ID Service in your subaccount is pre-configured in the Cloud Foundry environment of SAP Cloud
Platform by default, so you can start using it without further configuration. Optionally, you can add additional trust
settings or set the default trust to inactive, for example if you prefer to use another SAML 2.0 identity provider.
Using the SAP Cloud Platform cockpit you can make changes by navigating to your respective subaccount and by
choosing Security Trust Configuration .
Note
If you do not intend to use SAP ID Service, you must establish trust to your custom SAML 2.0 identity provider.
We describe a custom trust configuration using the example of SAP Cloud Platform Identity Authentication
service.
Related Information
Trust and Federation with SAML 2.0 Identity Providers [page 2059]
To establish trust, configure the trust configuration of the SAML 2.0 identity provider in your subaccount using the
cockpit. Next, register your subaccount in User Account and Authentication service using the administration
console of User Account and Authentication service. To complete federation, maintain the federation attributes of
the SAML 2.0 user groups. This makes sure that you can assign authorizations to user groups.
Context
You want to use an SAML 2.0 identity provider. This is where the business users for SAP Cloud Platform are stored.
Prerequisites
Context
You must establish a trust relationship with an SAML 2.0 identity provider in your subaccount in SAP Cloud
Platform. The following procedure guides you through the trust configuration in your SAML 2.0 identity
provider.
1. Go to your subaccount (see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page
953]) and choose Security Trust Configuration in the SAP Cloud Platform cockpit.
2. Choose New Trust Configuration.
3. Enter a name and a description that make it clear that the trust configuration refers to your identity provider.
4. To get the relevant metadata, go to the metadata endpoint of your identity provider.
5. Copy the SAML 2.0 metadata and paste it into the Metadata field.
6. To validate the metadata, choose Parse. Take care that the Subject and Issuer fields are filled with the relevant
data from your SAML 2.0 identity provider.
The name of the new trust configuration now shows the value of your identity provider. It represents your
custom identity provider.
Check whether the fields for the single sign-on URLs and the single logout URLs are filled.
7. Save your changes.
8. (Optional) If you do not need SAP ID Service, set it to inactive.
You have successfully configured trust in your SAML 2.0 identity provider.
Context
Administrators must configure trust on both sides, in the service provider and in the SAML identity provider. This
description guides you through the configuration of your identity provider. The trust configuration on the
side of the SAML 2.0 identity provider must contain the following items:
Your administrators use the administration console of your SAML 2.0 identity provider to register the subaccount.
To establish trust from a tenant of your identity provider to a subaccount, assign a metadata file and define
attribute details. The SAML 2.0 assertion includes these attributes. With the UAA as SAML 2.0 service provider,
Procedure
Users see this name in the logon screen when the authentication is requested by the UAA service. Seeing the
name, they know which application they currently access after authentication.
5. Choose the SAML 2.0 configuration and import the relevant metadata XML file. Save your changes.
Note
Use the metadata XML file of your subaccount. The subdomain name is identical to the tenant name. You
find the metadata file in the following location:
Example
This is an example for the eu10 landscape. Please note that the domain name can be different.
https://<tenant_name>.authentication.eu10.hana.ondemand.com/saml/metadata
Take care that the fields of the SAML configuration are filled. The metadata provides information, such as the
name, the URLs of the assertion consumer service and single logout endpoints, and the signing certificate.
6. Choose or create the name ID attribute and select E-Mail as a unique attribute. Save your changes.
7. Choose or create a user attribute, and enter Groups (capitalized) as assertion attribute name for the Groups
user attribute. Save your changes.
Related Information
SAP ID service is the place where you register to get initial access to SAP Cloud Platform. If you are a new user, you
can use the self-service registration option of SAP ID Service.
Note
If you do not intend to use SAP ID Service, you must establish trust to your custom SAML 2.0 identity provider.
We describe a custom trust configuration using the example of SAP Cloud Platform Identity Authentication
service.
Related Information
Trust and Federation with SAML 2.0 Identity Providers [page 2059]
This table displays the attribute settings of the identity provider and the values administrators use to
establish trust between the SAML 2.0 identity provider and a new subaccount.
Since there are multiple identity providers you can use, we display the parameters and values of SAP Cloud
Platform Identity Authentication service. Use the information in this table as a reference for the configuration of
your identity provider.
Settings of SAP Cloud Platform Identity Authentication Service for SAML 2.0 Trust
● SAML 2.0 Configuration
Configures the trust with a service provider using an uploaded metadata file.
Value: Upload the metadata file from the UAA service.
● Name ID Attribute
Configures the attribute that the identity provider uses to identify the users. The attribute is sent as the name ID for the authenticated user in SAML assertions.
Possible settings: User ID, E-Mail, Display Name, Login Name, Employee Number
Value: E-Mail
Note
We recommend that you use exactly this name ID format.
● Select Assertion Attributes
To define the user authorizations in the UAA service, provide the user groups in the assertion attribute Groups (capitalized). This assertion attribute is required for the assignment of roles in the UAA service. You can change the default names of the assertion attributes that the application uses to recognize the user attributes. You can use multiple assertion attributes for the same user attribute.
Value: Select the Groups user attribute and enter Groups as assertion attribute. You must set this attribute so that the assignment from role collection to user groups has an effect. For more information, see the related link.
Caution
Use exactly this spelling: Groups
Example
In the following, you see what John Doe's SAML 2.0 assertion looks like if Name ID Attribute was set to E-Mail and
Assertion Attribute to Groups.
Sample Code
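The sample assertion itself is not reproduced here; structurally, the relevant parts look roughly like the following abbreviated fragment (namespace prefixes and most elements omitted, all values invented):

```xml
<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <Subject>
    <NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
      john.doe@example.com
    </NameID>
  </Subject>
  <AttributeStatement>
    <Attribute Name="Groups">
      <AttributeValue>Sales</AttributeValue>
      <AttributeValue>Marketing</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>
```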
Related Information
As an administrator of the Cloud Foundry environment of SAP Cloud Platform, you can maintain application roles
and role collections which can be used in user management.
The user roles are derived from role templates that are defined in the security descriptor (xs-security.json)
of applications that have been registered as OAuth 2.0 clients at the User Account and Authentication service
during application deployment. The application security-descriptor file also contains details of the authorization scopes and attributes.
Tip
Using the SAP Cloud Platform cockpit, you can assign the role collections to users logging on with SAML 2.0
assertions or SAP ID Service.
Related Information
Context
A role is an instance of a role template; you can build a role based on a role template and assign the role to a role
collection. Role collections are then assigned to SAML 2.0 groups or users. The role template defines the type of
access permitted for an application, for example: the authorization scope, and any attributes that need to be
applied. Attributes define information that comes with the respective user, for example 'cost center' or 'country'
(see the related link). This information can only be resolved at run time.
Note
The User Account and Authentication service automatically creates default roles for all role templates that do
not include attribute references. They have the same name as the role template, and their description contains
Default Instance.
1. Application Identifier
Since the application security descriptor contains the role template, it is, at first, necessary to choose the
application identifier of the application that provides the role template with the role you want to add.
2. Role Template
3. Role
7. Save your changes.
Related Information
Role
A role is an instance of a role template; you can build a role based on a role template and assign the role to a role
collection. The cockpit helps you to display information about the selected application and any related roles in the
following windows, tabs, and panes:
● Roles
● Scopes
● Attributes
● Role templates
Role Collection
Roles are assigned to role collections which are assigned in turn to SAML 2.0 groups if an SAML 2.0 identity
provider stores the users. Using the cockpit, you can display information about the role collections that have been created.
Role collections group together different roles that can be applied to the application users.
Context
Application developers have defined application-specific role templates in the security descriptor file. The role
templates contain the role definitions. You can assign the role to a role collection.
As an administrator of the Cloud Foundry environment of SAP Cloud Platform, you can group application roles in
role collections. Typically, these role collections provide authorizations for certain types of users, for example,
sales representatives.
Once you have created a role collection, you can pick the roles that apply to the typical job of a sales
representative. Since the roles are application-based, you must select the application to see which roles come with
the role template of this application. You are free to add roles from multiple applications to your role collection.
Finally, you assign the role collection to the users provided by the SAP ID Service or to SAML 2.0 user groups,
for example, sales representatives.
Procedure
If you want to use assertion attributes, set up SAML trust to configure the SAML identity provider (IdP) for
the runtime of the Cloud Foundry environment. You must perform this step if you want your applications to use SAML
assertions as the logon authentication method.
You have arranged roles in role collections, and now want to assign these role collections to business users.
Prerequisites
● You are using one or both of the following trust configurations (see the related links):
○ Default trust configuration (SAP ID Service)
○ Custom trust configuration (SAP Cloud Platform Identity Authentication service or any SAML 2.0 identity
provider)
Context
How you assign users to their authorizations depends on the type of trust configuration and on whether you
prefer to maintain the authorizations of individual users in the identity provider or in SAP Cloud Platform.
The following options are available:
Note
If you are using the default trust configuration with SAP ID Service, you directly assign users to role
collections. For more information, see Default Identity Federation with SAP ID Service in the Cloud Foundry
Environment [page 2066].
However, if you are using a custom trust configuration, for example, with SAP Cloud Platform Identity
Authentication service, you can use both options. For more information on configuring the trust between your
subaccount and a custom SAML 2.0 identity provider, see Establish Trust and Federation with UAA Using Any
SAML Identity Provider [page 2064].
1. Go to your subaccount (see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page
953]).
Default trust configuration (SAP ID Service) Directly Assign Role Collections to Users [page 2074]
Custom trust configuration (SAP Cloud Platform Identity Directly Assign Role Collections to Users [page 2074]
Authentication service or any SAML 2.0 identity provider)
Map Role Collections to User Groups [page 2075]
Related Information
You want to directly assign a role collection to a business user. You can use this option for default and custom trust
configurations.
Prerequisites
● You have created role collections containing authorizations in the form of roles.
Context
If an application developer has defined the role templates accordingly, the roles come with the role templates of
the related applications. A role collection can group roles from a set of applications available to your subaccount.
1. Go to your subaccount (see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page
953]) and choose Security Trust Configuration .
2. Choose the trust configuration for the identity provider of the business user.
3. Choose Role Collection Assignment.
4. Enter the business user's name in the User field, for example john.doe@example.com.
Note
If you are using a custom trust configuration, enter the user name according to the name ID format
configured in the identity provider.
Tip
If the user identifier you entered has never logged on to an application in this subaccount, SAP Cloud
Platform cannot automatically verify the user name. To avoid mistakes, you must check and confirm that
the user name is correct.
You want to assign a role collection to a user group provided by an SAML 2.0 identity provider that has a custom
trust configuration in SAP Cloud Platform. In this case, the assignment is a mapping of a user group to a role
collection. Your identity provider provides the user groups using the SAML assertion attribute called Groups. Each
value of the attribute is mapped to a role collection as described in this procedure.
Prerequisites
● You have configured your custom SAML 2.0 identity provider and established trust in your subaccount, for example https://<Identity_Authentication_tenant>.accounts.ondemand.com.
● You have configured the identity provider so that it conveys the user's group memberships in the Groups
assertion attribute.
● You have created role collections containing authorizations in the form of roles.
Context
The SAML 2.0 identity provider provides the business users, who can belong to user groups. It is efficient to map
user groups to role collections. The role collection as a reusable element contains the authorizations that are
necessary for this user group. This saves time when you want to add a new business user. Simply add the user to
the respective user group or groups, and the business user automatically gets all the authorizations that are
included in the role collections.
For this reason, the assignment is a mapping of user groups to role collections.
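The mapping can be pictured as a simple lookup from group names to role collections; the group and role collection names below are illustrative examples, not actual platform values:

```python
# Illustrative model of mapping SAML "Groups" assertion values to role
# collections. Group and role collection names are hypothetical.
GROUP_TO_ROLE_COLLECTIONS = {
    "HR_Managers": ["LeaveRequestApprover"],
    "Payroll": ["LeaveRequestAuditor"],
    "Employees": ["LeaveRequestCreator"],
}

def role_collections_for(groups):
    """Collect the role collections mapped to any of the user's groups."""
    collections = []
    for group in groups:
        for rc in GROUP_TO_ROLE_COLLECTIONS.get(group, []):
            if rc not in collections:
                collections.append(rc)
    return collections

# A user in two groups gets the union of the mapped role collections.
print(role_collections_for(["Employees", "Payroll"]))
```

Adding a new business user to the Payroll group, for example, automatically grants all authorizations of the mapped role collection without any per-user assignment.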
Procedure
1. Go to your subaccount (see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page
953]) and choose Security Trust Configuration .
2. Choose your custom trust configuration in your subaccount. This identity provider provides the user groups
which you want to assign to role collections.
3. Choose Role Collection Mappings.
4. Choose New Role Collection Mapping.
5. Choose the role collection you want to map and enter the name of the user group in the Value field.
Tip
You must know the name of the user group in the identity provider.
Example
In the SAP Cloud Platform Identity Authentication service, you find the user groups in the administration
console of your SAP Cloud Platform Identity Authentication service tenant under Users &
Authorizations User Groups . Open the administration console, for example of SAP Cloud Platform
Identity Authentication service using https://
<Identity_Authentication_tenant>.accounts.ondemand.com/admin.
Related Information
Trust and Federation with SAML 2.0 Identity Providers [page 2059]
Establish Trust and Federation with UAA Using SAP Cloud Platform Identity Authentication Service [page 2060]
Federation Attribute Settings of Any Identity Provider [page 2067]
Attributes use information that is specific to the user, for example the user's country. If the application developer in the Cloud Foundry environment of SAP Cloud Platform has added a country attribute to a role, this attribute restricts the data a business user can see.
Many applications provide purely functional role templates, which grant access to all data of a certain type within
your subaccount. Roles for such role templates are generated automatically. Some other applications also provide
the possibility for administrators to restrict access not only by functional authorizations, but also by instance-
based authorizations. That means that users can only work with a certain subset of the data in your subaccount.
The restriction can be either based on information within the respective role, or on user-specific information
provided by the identity provider. This makes instance-based authorizations specific for each customer because
the respective roles cannot be generated automatically. Instead, administrators must create them. Typical
restrictions depend on information like the user's country or cost center.
Each restriction is represented by a dedicated attribute which belongs to a role template of the application. For
each attribute, administrators have two options to specify the value which restricts data access:
● Static attributes
They are stored in the role. The user is given the attribute value when the administrator assigns the role
collection with this role to the user.
● Attributes from a custom identity provider
Since an identity provider provides the business users, you can dynamically reference all attributes that come
with the SAML 2.0 assertion. You define the attributes and the attribute values in the identity provider. In the
Cloud Foundry environment of SAP Cloud Platform, administrators can use the assertion attributes to refine
the roles.
Note
If you want to reference attributes from your identity provider, you must know the exact identifier of the
assertion attribute. Go to the SAML 2.0 configuration of your identity provider and use the assertion
attributes as they are defined there.
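As an illustration of such an instance-based restriction, the following sketch filters records by a Country attribute that, in a real setup, would arrive with the user's authorizations; all field names and data are made up:

```python
# Sketch of an instance-based authorization check: only records matching
# the user's Country attribute are visible. Names and data are illustrative.
records = [
    {"id": 1, "country": "USA"},
    {"id": 2, "country": "DE"},
    {"id": 3, "country": "USA"},
]

def visible_records(user_attributes, data):
    """Return only the records the user's attribute values permit."""
    allowed = set(user_attributes.get("Country", []))
    return [r for r in data if r["country"] in allowed]

# A user restricted to Country = USA sees only the matching records.
print(visible_records({"Country": ["USA"]}, records))
```

Whether the Country value comes from a static attribute in the role or from a SAML assertion attribute makes no difference to the check itself; only the source of the value differs.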
As an administrator of the Cloud Foundry environment, you can specify attributes in roles to refine authorizations
of the business users. Depending on these attributes, business users with this role have restricted access to data.
Prerequisites
You have maintained the attributes of the users in your identity provider if you want to use the identity provider as
the source of the attributes.
Note
In SAP Cloud Platform Identity Authentication Service or any SAML identity provider, you find the attributes in
the SAML 2.0 configuration.
Procedure
6. Choose (Edit).
The Edit Role pane displays the attribute name and fields where you can select the source and enter a value.
7. To specify an attribute, choose the source of the attribute. The following sources are available:
Attribute Sources
● Static: Enter a static value, for example USA, to refine the role depending on the country.
● Identity Provider (SAML): Enter an assertion attribute as defined in your identity provider. Check your identity provider for the exact syntax of the assertion attribute identifier.
Example
To use the assertion attribute for cost center, you must enter the value cost_center.
Related Information
As an administrator of the Cloud Foundry environment, you can specify attributes in a new role to refine
authorizations of business users. Depending on these attributes, business users with this role have restricted
access to data.
Prerequisites
You have maintained the attributes of the users in your identity provider if you want to use the identity provider as
the source of the attributes.
Note
In SAP Cloud Platform Identity Authentication Service or any SAML identity provider, you find the attributes in
the SAML 2.0 configuration.
The New Role pane displays the attribute name and fields where you can select the source and enter a value.
7. Enter a name and a description of the new role.
8. Select the role template you want to use.
9. To specify an attribute, choose the source of the attribute. The following sources are available:
Attribute Sources
● Static: Enter a static value, for example USA, to refine the role depending on the country.
● Identity Provider (SAML): Enter an assertion attribute as defined in your identity provider. Check your identity provider for the exact syntax of the assertion attribute identifier.
Example
To use the assertion attribute for cost center, you must enter the value cost_center.
You have created a new role. You can assign this role to a role collection. For more information, see the related
link.
Related Information
According to data privacy regulations, it must be possible to delete personal and private data stored in SAP Cloud
Platform. The User Account and Authentication service of the Cloud Foundry environment stores shadow users,
which contain personal and private data. We want to enable you to comply with the data privacy regulations of
various countries.
Prerequisites
You have an OAuth 2.0 client with the following scopes:
● xs_user.read
● xs_user.write
Context
Note
When handling personal data, consider the legislation in the various countries where your organization
operates. After the data has passed the end of purpose, regulations might require you to delete the data. For
more information on data protection and privacy, see the related link.
Whenever a user authenticates at a SAML 2.0 identity provider, the User Account and Authentication service
stores user-related data records in the form of shadow users. The UAA uses the information of the shadow users
to issue refresh tokens. For more information on shadow users, see the Cloud Foundry documentation.
Security administrators need an OAuth 2.0 client with the corresponding scopes. This OAuth 2.0 client grants
them access to the SCIM REST APIs that they have to use for retrieving and deleting the shadow users.
Procedure
1. To request an OAuth 2.0 client, create an incident on component BC-XS-SEC. Use the SAP Support Portal .
2. To retrieve and delete the shadow users, use the methods of the following REST API for the System for Cross-
domain Identity Management (SCIM).
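As a sketch of what such SCIM calls look like, the following builds the request method and URL for retrieving a shadow user by name and for deleting it by its SCIM id. The base URL is a placeholder; the /Users resource and the filter syntax follow the SCIM standard (RFC 7644):

```python
from urllib.parse import quote

# Placeholder base URL of the UAA SCIM endpoint; replace with the
# actual URL for your subaccount.
BASE = "https://<subdomain>.authentication.<landscape_host>"

def find_user_request(user_name):
    """GET request that retrieves a shadow user by user name."""
    flt = quote(f'userName eq "{user_name}"')
    return ("GET", f"{BASE}/Users?filter={flt}")

def delete_user_request(user_id):
    """DELETE request that removes a shadow user by its SCIM id."""
    return ("DELETE", f"{BASE}/Users/{user_id}")

print(find_user_request("john.doe@example.com"))
print(delete_user_request("1234-abcd"))
```

The GET response contains the SCIM id of the shadow user, which you then pass to the DELETE call. Authentication with the OAuth 2.0 client from the prerequisites is required for both requests.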
6.1.1.2.7 Troubleshooting
This section provides information on troubleshooting-related activities for Authorization and Trust Management in
the Cloud Foundry environment.
In case of problems, create an incident in component BC-XS-SEC. For more information, see the related link.
Tip
We also recommend that you regularly check the SAP Notes and Knowledge Base for component BC-XS-SEC in
the SAP Support Portal . These contain information about program corrections and provide additional
information.
Related Information
After sending a request to a web application in the Cloud Foundry environment of SAP Cloud Platform, you get a
response containing a nested error element with the value access_denied and an error_description
element.
1. Symptom
2. Reason and Prerequisites
3. Solution
4. If This Doesn't Help
Symptom
<oauth>
<error_description>Access is denied</error_description>
<error>Access denied</error>
</oauth>
The system might also respond with the single word Forbidden.
A business user has tried to access the URL endpoint of a web application, but is not authorized to do this. The
authorizations (scopes) required for the URL endpoint must be assigned to this business user. Scopes (functional
authorizations) allow the business user to perform operations provided by the web application. Attributes
(instance-based authorizations) specify the data, for example company code or cost center, which the business
user is authorized to use. The business user can thus perform the operations he or she is authorized for.
Solution
A role collection with the role that contains the required authorizations must be available. Assign this role
collection to the business user.
If no existing role contains the required attribute values for instance-based authorizations, create a role based on the corresponding role template. Maintain the attribute values of the new role to define the required data authorizations.
Note
Role templates are predefined and deployed together with the web application. Role templates specify the
scopes (functional operations) which the web application supports. You cannot change existing role templates
or create new ones.
For more information on roles and role collections, see the related link.
Get Support
If problems occur, create an incident for component BC-XS-SEC. Use the SAP Support Portal .
If there are authentication problems with your application, it is helpful to attach application logs to the incident. For
more information, see Enable and Provide Application Logs [page 2088].
Related Information
After sending a request to a web application in the Cloud Foundry environment of SAP Cloud Platform, you receive
the following error message as a response: Identity provider could not process the
authentication request received
1. Symptom
2. Reason and Prerequisites
3. Solution
4. If This Doesn't Help
Symptom
Error - Identity provider could not process the authentication request received.
Delete your browser cache and stored cookies, and restart your browser. If you
continue to experience issues, send an e-mail to sso@sap.com.
The Cloud Foundry environment of SAP Cloud Platform uses the SAP Cloud Platform Identity Authentication service as SAML 2.0 identity provider. XSUAA forwards unauthenticated requests to the configured identity provider.
Solution
Configure the missing part of the trust relationship in your identity provider.
Download the SAML metadata file from XSUAA, upload it to your SAML 2.0 identity provider, and complete the configuration by registering your subaccount. The related links provide instructions for SAP Cloud Platform Identity Authentication service as a trusted SAML 2.0 identity provider.
Get Support
If problems occur, create an incident for component BC-XS-SEC. Use the SAP Support Portal .
If there are authentication problems with your application, it is helpful to attach application logs to the incident. For
more information, see Enable and Provide Application Logs [page 2088].
Before asking questions or creating an issue, make sure that you have collected the following information. Your
question or issue can only be processed if the information you provide is complete.
● The region.
● The ID and name of the subaccount.
● The subdomain of the subaccount.
● The ID of the business user.
● The URL of the Identity Authentication tenant in which the business user is defined.
● The SAML 2.0 metadata file of your SAML 2.0 identity provider (see the related links).
Related Information
Register SAP Cloud Platform Subaccount in the SAML 2.0 Identity Provider [page 2062]
Register SAP Cloud Platform Subaccount in Any SAML 2.0 Identity Provider [page 2065]
After sending a request to a web application in the Cloud Foundry environment of SAP Cloud Platform, you see a
logon screen with the title SAP HANA XS Advanced.
1. Symptom
2. Reason and Prerequisites
3. Solution
4. If This Doesn't Help
Symptom
The system responds with a logon screen with the title SAP HANA XS Advanced. Below Log On, the following
domain is displayed: xs2security.accounts400.ondemand.com
Note
The expected system behavior is to return a logon screen with the title Welcome <IdP_tenant_name>!.
The request of the business user is not authenticated and therefore needs to be redirected to the identity provider where the business user is defined. The trust to the identity provider is configured and assigned to the tenant to which the business user belongs. In this case, the Cloud Foundry environment of SAP Cloud Platform cannot correctly determine the tenant of the business user. As a result, it cannot find the correct identity provider and falls back to https://authentication.sap.hana.ondemand.com.
Solution
This error indicates a problem with the environment variable TENANT_HOST_PATTERN in the manifest.yml file of
the web application. The tenant host pattern is a regular expression (regex) and is used to determine the tenant ID
(which is identical to the subdomain) with which the request is associated. The system shows the behavior
described earlier if the URL of the request does not match with the defined pattern.
.
.
.
env:
  XSAPPNAME: <application_name>
  TENANT_HOST_PATTERN: "^(.*)-<CloudFoundry_host_name>.<CloudFoundry_domain>"
.
.
.
Note
Separate the regular expression and the Cloud Foundry host name by a minus ( - ) sign. The subdomain of the
tenant is not a subdomain of the web application in a technical sense. The subdomain would be separated from
the Cloud Foundry host name by a dot ( . ). Since there is no automatic certificate creation process each time a
new tenant is onboarded, you must use a workaround.
The current workaround is to concatenate the subdomain of the tenant with the Cloud Foundry host name using
the minus ( - ) sign. All such concatenations share the same Cloud Foundry top level domain, together with the
certificate of this top level domain.
The following example shows the web application bulletinboard-d012345 in the Cloud Foundry domain
cfapps.sap.hana.ondemand.com:
.
.
.
env:
  XSAPPNAME: bulletinboard-d012345
  TENANT_HOST_PATTERN: "^(.*)-approuter.cfapps.sap.hana.ondemand.com"
.
.
.
Note
Recheck the definition of the environment variable TENANT_HOST_PATTERN in the manifest.yml file and
correct it if necessary.
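The tenant host pattern is an ordinary regular expression. The following sketch (pattern taken from the example above; host names are made up) shows how the capture group yields the subdomain of the tenant:

```python
import re

# Pattern from the example manifest.yml above. The capture group (.*)
# matches everything before the minus sign that precedes the
# Cloud Foundry host name.
TENANT_HOST_PATTERN = r"^(.*)-approuter.cfapps.sap.hana.ondemand.com"

def tenant_of(host):
    """Extract the subdomain (tenant) from a request host, if it matches."""
    match = re.match(TENANT_HOST_PATTERN, host)
    return match.group(1) if match else None

print(tenant_of("mytenant-approuter.cfapps.sap.hana.ondemand.com"))
print(tenant_of("unrelated.example.com"))
```

A host that does not match the pattern yields no tenant at all, which reproduces the fallback behavior described above.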
Get Support
In case of problems, create an incident for the component BC-XS-SEC. Use the SAP Support Portal .
If there are authentication problems with your application, it is helpful to attach application logs to the incident. For
more information, see Enable and Provide Application Logs [page 2088].
● The region.
● The ID and name of the subaccount.
● The ID and name of the business user.
● The ID and description of the tenant.
● The ID and description of the web application.
Related Information
If there are authentication problems in your application, enable logging for the container security library in
question, reproduce the problem, and attach the application logs. To obtain more details, set the environment
variables for the application.
Procedure
Note
The Cloud Foundry command line interface prompts you to choose an org. To find the org of your
subaccount, use the cockpit to go to your subaccount. You find the org in the Cloud Foundry tile under
Organization (see Navigate to Global Accounts, Subaccounts, Orgs, and Spaces in the Cockpit [page 953]).
5. To write the traces of the container security library to standard output, set the following environment variable:
Example
cf set-env your-app SAP_EXT_TRC stdout
6. To set the trace level, set the following environment variable:
Example
cf set-env your-app SAP_EXT_TRL 1
7. (For Node.js) To set detailed logs of the Security API for Node.js, use the following command:
Example
cf set-env your-app DEBUG xssec*
8. Restage your application using the following command:
cf restage <application>
Example
cf restage your-app
9. Record the log messages of your application in a local file using the following command:
Example
cf logs your-app > your-app-log.txt
Note
You can stop the recording of the log messages using CTRL+C . You can revert the environment variables
using the following command:
Example
cf unset-env your-app SAP_EXT_TRC
10. Reproduce the problem in your application. The events that occur in your application are logged in your local
log file.
11. Create an incident for component BC-XS-SEC. Use the SAP Support Portal . For more information, see
Troubleshooting [page 2082].
12. Attach the log files to the incident.
SAP_EXT_TRL values (integer):
○ 0 (off)
○ 1 (default)
○ 2 (medium)
○ 3 (high)
Developers create authorization information for business users in their environment and deploy this information in
an application. They make this available to administrators, who complete the authorization setup and assign the
authorizations to business users.
Developers store authorization information as design-time role templates in the security descriptor file xs-
security.json. Using the cockpit, administrators of the environment assign the authorizations to business
users.
Application security is maintained in the xs-security.json file; the contents of the xs-security.json file
cover the following areas:
Example
Sample Code
AppName
|- web/
| |- xs-app.json
| \- resources/
\- xs-security.json # Security deployment artifacts/scopes/auths
\- [manifest.yml]
Related Information
Business users in an application of the Cloud Foundry environment at SAP Cloud Platform should have different
authorizations because they work in different jobs.
For example in the framework of a leave request process, there are employees who want to create and submit
leave requests, managers who approve or reject, and payroll administrators who need to see all approved leave
requests to calculate the leave reserve.
The authorization concept of a leave request application should cover the needs of these three employee groups.
This authorization concept includes elements such as roles, scopes, and attributes.
Authorization in the application router and runtime container is handled by scopes. Scopes are groupings of functional authorizations defined by the application.
From a technical point of view, the resource server (the relevant security container API) provides a set of services
(the resources) for functional authorizations. The functional authorizations are organized in scopes.
Scopes cover business users’ authorizations in a specific application. They are deployed, for example, when you
deploy an update of the application. The security descriptor file xs-security.json contains the application-
specific “local” scopes (and, if applicable, also “foreign” scopes, which are valid for other defined applications).
After the developer has created the role templates and deployed them to the relevant application, it is the Cloud
Foundry administrator’s task to use the role templates to build roles, aggregate roles into role collections, and
assign the role collections to business users in the application.
Tip
For access to SAP HANA objects, the privileges of the technical user (container owner) apply; for instance-based authorizations on business data, CDS access policies are used.
A file that defines the details of the authentication methods and authorization types to use for access to your
application.
The xs-security.json file uses JSON notation to define the security options for an application; the information
in the application-security file is used at application-deployment time, for example, for authentication and
authorization (see the related links). Applications can perform scope checks (functional authorization checks with
Boolean result) and checks on attribute values (instance-based authorization checks). The xs-security.json
file declares the scopes and attributes on which to perform these checks. This enables the User Account and
Authentication (UAA) service to limit the size of an application-specific JSON Web Token (JWT) by adding only
relevant content.
Scope checks can be performed by the application router (declarative) and by the application itself
(programmatic, for example, using the container API). Checks using attribute values can be performed by the
application (using the container API) and on the database level.
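As an illustration of a programmatic scope check, the following sketch tests a scope against the scope list of an already decoded token payload. The payload and scope names are hand-written examples, not output of the actual container API:

```python
# Hand-written illustration of a decoded JWT payload; a real token is
# issued and signed by the UAA and decoded by the security library.
token_payload = {
    "scope": ["node-hello-world.Display", "node-hello-world.Edit"],
}

def has_scope(payload, scope):
    """Functional authorization check with a Boolean result."""
    return scope in payload.get("scope", [])

print(has_scope(token_payload, "node-hello-world.Display"))
print(has_scope(token_payload, "node-hello-world.Delete"))
```

An attribute check works the same way on the attribute values carried in the token, but compares data values instead of testing scope membership.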
The contents of the xs-security.json are used to configure the OAuth 2.0 client; the configuration is shared by
all components of an SAP multi-target application. The contents of the xs-security.json file cover the
following areas:
● Authorization scopes
A list of limitations regarding privileges and permissions and the areas to which they apply
● Attributes
Attributes refine business users' authorizations according to attributes that come with the business users. You can use, for example, fixed attribute values, such as 'country equals USA', or SAML attributes.
● Role templates
A description of one or more roles to apply to a user and any attributes that apply to the roles
● Tenant mode
This option is only relevant if you use tenants.
● OAuth 2.0 configuration
Use this property to set custom parameters for tokens.
Sample Code
Example xs-security.json File
{
"xsappname" : "node-hello-world",
"scopes" : [ {
"name" : "$XSAPPNAME.Display",
"description" : "display" },
{
"name" : "$XSAPPNAME.Edit",
"description" : "edit" },
{
"name" : "$XSAPPNAME.Delete",
"description" : "delete" }
],
"attributes" : [ {
"name" : "Country",
"description" : "Country",
"valueType" : "string" },
{
"name" : "CostCenter",
"description" : "CostCenter",
"valueType" : "int" }
],
"role-templates": [ {
"name" : "Viewer",
"description" : "View all books",
"scope-references" : [
"$XSAPPNAME.Display" ],
"attribute-references": [ "Country" ]
},
{ "name" : "Editor",
"description" : "Edit, delete books",
"scope-references" : [
"$XSAPPNAME.Edit",
"$XSAPPNAME.Delete" ],
"attribute-references" : [
"Country",
"CostCenter"]
}
]
}
Scopes
Scopes are assigned to users by means of security roles, which are mapped to the user group(s) to which the user
belongs. Scopes are used for authorization checks by the application router.
Attributes
You can define attributes so that you can perform checks based on values that are not known at design time. In the example
xs-security.json file included here, the check is based on a cost center, whose value is not known in advance because it
differs according to context.
Role Templates
A role template describes a role and any attributes that apply to the role.
If a role template contains attributes, you must instantiate the role template. This is especially true with regard to
any attributes defined in the role template and their concrete values, which are subject to customization and, as a
result, cannot be provided automatically. Role templates that only contain local scopes can be instantiated without
user interaction. The same is also true for foreign scopes where the scope “owner” has declared his or her consent
in a kind of white list (for example, either for “public” use or for known “friends”).
Note
The resulting application-specific role instance needs to be assigned to one or more user groups.
Related Information
The syntax required to set the properties and values defined in the xs-security.json application-security
description file.
Sample Code
{
"xsappname" : "node-hello-world",
"scopes" : [ {
"name" : "$XSAPPNAME.Display",
"description" : "display" },
{
"name" : "$XSAPPNAME.Edit",
"description" : "edit" },
{
"name" : "$XSAPPNAME.Delete",
"description" : "delete",
"granted-apps" : [ "$XSAPPNAME(application,iot-tenant,business-partner)" ] }
],
"attributes" : [ {
"name" : "Country",
"description" : "Country",
"valueType" : "string" },
{
"name" : "CostCenter",
"description" : "CostCenter",
"valueType" : "string" }
],
"role-templates": [ {
"name" : "Viewer",
"description" : "View all books",
"scope-references" : [
"$XSAPPNAME.Display" ],
"attribute-references": [ "Country" ]
},
{
"name" : "Editor",
"description" : "Edit, delete books",
"scope-references" : [
"$XSAPPNAME.Edit",
"$XSAPPNAME.Delete" ],
"attribute-references" : [
"Country",
"CostCenter"]
}
],
"authorities" : ["$ACCEPT_GRANTED_AUTHORITIES"],
"oauth2-configuration" : {
"token-validity": 900,
"redirect-uris": ["http://<host_name1>", "http://<host_name2>"]
}
}
Use the xsappname property to specify the name of the application that the security description applies to.
Sample Code
"xsappname" : "<app-name>",
Naming Conventions
Bear in mind the following restrictions regarding the length and content of an application name in the xs-security.json file:
● The following characters can be used in an application name of the Cloud Foundry environment at SAP Cloud
Platform: “aA”-“zZ”, “0”-“9”, “-” (dash), “_” (underscore), “/” (forward slash), and “\” (backslash)
● The maximum length of an application name is 128 characters.
Use the custom tenant-mode property to define the way the tenant's OAuth clients get their client secrets.
During the binding of the xsuaa service, the UAA service broker writes the tenant mode into the credential section.
The application router uses the tenant mode information for the implementation of multitenancy with the
application service plan.
Sample Code
{
"xsappname" : "<application_name>",
"tenant-mode" : "shared",
"scopes" : [
{
"name" : "$XSAPPNAME.Display",
"description" : "display"
Syntax
"tenant-mode" : "shared"
Tenant Modes
● dedicated (default): An OAuth client gets a separate client secret for each subaccount.
● shared: An OAuth client always gets the same client secret. It is valid in all subaccounts. The application service plan uses this tenant mode.
Note
If you do not specify tenant-mode in the xs-security.json, the UAA uses the dedicated tenant mode.
scopes
In the application-security file (xs-security.json), the scopes property defines an array of one or more
security scopes that apply for an application. You can define multiple scopes; each scope has a name and a short
description. The list of scopes defined in the xs-security.json file is used to authorize the OAuth client of the
application with the correct set of local and foreign scopes; that is, the permissions the application requires to be
able to respond to all requests.
Sample Code
"scopes": [
{
"name" : "$XSAPPNAME.Display",
"description" : "display" },
{
"name" : "$XSAPPNAME.Edit",
"description" : "edit" },
{
"name" : "$XSAPPNAME.Delete",
"description" : "delete",
"granted-apps" : [ "$XSAPPNAME(application,iot-tenant,business-partner)" ] }
],
Local Scopes
All scopes in the scopes section are “local”; that is, application-specific. Local scopes are checked by the
application's own application router or checked programmatically within the application's run-time container. In
the event that an application needs access to other services of the Cloud Foundry environment on behalf of the
current user, the provided access token must contain the required “foreign” scopes. Foreign scopes are not
provided by the application itself; they are checked by other sources outside the context of the application.
In the xs-security.json file, “local” scopes must be prefixed with the variable $XSAPPNAME. At run time, the variable is replaced with the name of the corresponding local application.
Tip
Variable <$XSAPPNAME> is defined in the application's deployment manifest description (manifest.yml).
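A minimal sketch of the substitution, with the application name taken from the sample descriptor above:

```python
# At run time, the $XSAPPNAME variable in a scope reference is replaced
# with the application name (the xsappname from the sample descriptor).
XSAPPNAME = "node-hello-world"

def expand(scope_reference):
    """Replace the $XSAPPNAME prefix with the concrete application name."""
    return scope_reference.replace("$XSAPPNAME", XSAPPNAME)

print(expand("$XSAPPNAME.Display"))
```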
To reference a “foreign” scope of another application, use the following syntax:
$XSAPPNAME(<service_plan>,<tenant_identifier>,<xsappname>).<local_scope_name>
Example
"$XSAPPNAME(application,iot-tenant,business-partner).Create"
Here is the syntax in the security descriptor of the application that grants the scope. For more information, see
Referencing the Application.
"granted-apps" :
[ "$XSAPPNAME(<service_plan>,<tenant_identifier>,<foreign_xsappname>)"]
Example
"granted-apps" : [ "$XSAPPNAME(application,iot-tenant,business-partner)"]
If you want to grant a scope from this application to another application for a client credentials scenario, use the
grant-as-authorities-to-apps property. In this case, the scopes are granted as authorities. Specify the
other application by name. For more information, see the related link.
Here is the syntax in the security descriptor of the application that grants the scope:
"grant-as-authorities-to-apps" : [ "$XSAPPNAME(<service_plan>,<foreign_xsappname>)"]
Example
"grant-as-authorities-to-apps" : [ "$XSAPPNAME(application,business-partner)"]
Naming Conventions
Bear in mind the following restrictions regarding the length and content of a scope name:
● The following characters can be used in a scope name: “aA”-“zZ”, “0”-“9”, “-” (dash), “_” (underscore), “/”
(forward slash), “\” (backslash), “:” (colon), and the “.” (dot)
● Scope names cannot start with a leading dot “.” (for example, .myScopeName1)
● The maximum length of a scope name, including the fully qualified application name, is 193 characters.
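The rules above can be captured in a small validation sketch; this is an illustration of the conventions, not an official platform check:

```python
import re

# Allowed characters per the naming conventions: letters, digits,
# dash, underscore, forward slash, backslash, colon, and dot.
SCOPE_NAME_RE = re.compile(r"^[A-Za-z0-9\-_/\\:.]+$")

def is_valid_scope_name(name):
    """Check a scope name against the documented naming conventions."""
    return (
        len(name) <= 193                # max length incl. qualified app name
        and not name.startswith(".")    # no leading dot
        and bool(SCOPE_NAME_RE.match(name))
    )

print(is_valid_scope_name("node-hello-world.Display"))
print(is_valid_scope_name(".myScopeName1"))
```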
In the application-security file (xs-security.json), the attributes property enables you to define an array
listing one or more attributes that are applied to the role templates also defined in the xs-security.json file.
You can define multiple attributes.
Sample Code
"attributes" : [
{
"name" : "Country",
"description" : "Country",
"valueType" : "s" },
{
"name" : "CostCenter",
"description" : "CostCenter",
"valueType" : "string" }
],
The attributes element is only relevant for a user scenario. Each element of the attributes array defines an
attribute. These attributes can be referenced by role templates. There are multiple sources of attributes:
● Static attributes
Use the cockpit to assign a fixed value to the attribute.
● Attributes from a SAML identity provider
If a SAML identity provider provides the users, you can reference a SAML assertion attribute. The SAML assertion is issued by the configured identity provider during authentication. You find the SAML attribute value in the SAML configuration of your identity provider. The attributes provided by the SAML identity provider appear as SAML assertion attributes in the JSON Web Token if the user has been assigned the corresponding roles. You can use the assertion attributes to achieve instance-based authorization checks when using an SAP HANA database.
attributes Parameters
valueType: The type of value expected for the defined attribute; possible values are “string” (or “s”), “int” (integer), or “date”. Example: int
Naming Conventions
● The following characters can be used to declare an xs-security attribute name in the Cloud Foundry
environment: “aA”-“zZ”, “0”-“9”, “_” (underscore)
● The maximum length of a security attribute name is 64 characters.
role-templates
In the application-security file (xs-security.json), the role-templates property enables you to define an
array listing one or more roles (with corresponding scopes and any required attributes), which are required to
access a particular application module. You can define multiple role-templates, each with its own scopes and
attributes.
Sample Code
"role-templates": [
  { "name" : "Viewer",
    "description" : "View all books",
    "scope-references" : [
      "$XSAPPNAME.Display"
    ],
    "attribute-references": [
      "Country"
    ]
  }
]
A role template must be instantiated. This is especially true with regard to any attributes defined in the role
template and the specific attribute values, which are subject to customization and, as a result, cannot be provided
automatically. Role templates that only contain “local” scopes can be instantiated without user interaction. The
same is also true for “foreign” scopes where the scope owner has declared consent in a kind of white list (for
example, either for “public” use or for known “friends”).
Note
The resulting (application-specific) role instances need to be assigned to the appropriate user groups.
role-template Parameters
name    The name of the role to build from the role template. Example: Viewer
Naming Conventions
Bear in mind the following restrictions regarding the length and content of role-template names in the xs-
security.json file:
● The following characters can be used to declare an xs-security role-template name in XS advanced: “aA”-“zZ”,
“0”-“9”, “_” (underscore)
● The maximum length of a role-template name is 64 characters.
authorities
To enable one (sending) application to accept and use the authorization scopes granted by another application,
each application must be bound to its own instance of the xsuaa service. This is required to ensure that the
applications are registered as OAuth 2.0 clients at the UAA. You must also add an authorities property to the
security descriptor file of the sending application that is requesting an authorization scope. In the authorities
property of the sending application's security descriptor, you can either specify that all scope authorities
configured as grantable in the receiving application should be accepted by the application requesting the scopes,
or alternatively, only individual, named, scope authorities:
● Request and accept all authorities flagged as "grantable" in the receiving application's security descriptor:
Sample Code
Specifying Scope Authorities in the Sending Application's Security Descriptor
"authorities":["$ACCEPT_GRANTED_AUTHORITIES"]
● Request and accept only the "specified" scope authority that is flagged as grantable in the specified receiving
application's security descriptor. For more information, see Referencing the Application.
Sample Code
Specifying Scope Authorities in the Sending Application's Security Descriptor
"authorities":["<ReceivingApp>.ForeignCall", "<ReceivingApp2>.ForeignCall"]
Since both the sending application and the receiving application are bound to UAA services using the information
specified in the respective application's security descriptor, the sending application can accept and use the
specified authority from the receiving application. The sending application is now authorized to access the
receiving application and perform some tasks.
Note
The granted authority always includes the prefixed name of the application that granted it. This information tells
the sending application which receiving application has granted which set of authorities.
Tip
To flag a scope authority as "grantable", add the grant-as-authority-to-apps property to the respective
scope in the receiving application's security descriptor.
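As a sketch of how the receiving application's security descriptor might flag a scope as grantable (the sending application's name is a placeholder, and the exact value format may vary):

```
"scopes": [
  {
    "name": "$XSAPPNAME.ForeignCall",
    "description": "Scope that other applications may request as an authority",
    "grant-as-authority-to-apps": [ "$XSAPPNAME(application,<SendingApp>)" ]
  }
]
```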
Use the custom oauth2-configuration property to define custom values for the OAuth 2.0 clients, such as the
token validity and redirect URIs.
The xsuaa service broker registers and uses these values for the configuration of the OAuth 2.0 clients.
Sample Code
"oauth2-configuration": {
  "token-validity": 43200,
  "redirect-uris": [
    "http://<host_name1>",
    "http://<host_name2>"
  ]
}
oauth2-configuration Parameters
token-validity    Token validity in seconds. If you do not define a value, the default value is used. Default: 900 (15 minutes)
If you want to grant scopes to an application, for example, you must reference this application. Depending on where
the application is located, you can reference an application in multiple ways:
Application References
Example
$XSAPPNAME(application,business-partner)
Example
$XSAPPNAME(application,iot-tenant,business-partner)
Properties
"granted-apps" : [ "$XSAPPNAME(application,iot-tenant,business-partner)" ]
$XSAPPNAME.Display
Related Information
Developers create authorization information for business users in their environment; the information is deployed in
an application and made available to administrators who complete the authorization setup and assign the
authorizations to business users.
Developers store authorization information as design-time role templates in the security descriptor file xs-
security.json. Using the xsuaa service broker, they deploy the security information to the xsuaa service. The
administrators view the authorization information in role templates, which they use as part of the run-time
configuration. The administrators use the role templates to build roles, which are aggregated in role collections.
The role collections are assigned, in turn, to business users.
The tasks required to set up authorization artifacts are performed by two distinct user roles: the application
developer and the administrator of the Cloud Foundry environment. After the deployment of the authorization
artifacts as role templates, the administrator of the application uses the role templates provided by the developers
for building role collections and assigning them to business users.
Note
To test authorization artifacts after deployment, developers can use the role templates to build role collections
and assign authorization to business users in an authorization tool based on a REST API.
1. Specify the security descriptor file containing the scopes. (Application developer; text editor)
(If applicable) If you want to create an OAuth 2.0 client in an application-related subaccount, you must use a
separate security descriptor file where you specify the subaccount. (Application developer; CF command line tool)
2. Create role templates for the application using the security descriptor file. (Application developer; text editor)
3. Create a service instance from the xsuaa service using the service broker. (Application developer; CF command line tool)
4. Bind the service instance to the application by including it in the manifest file. (Application developer; text editor)
1. Use an existing role or create a new one using role templates. (Administrator of the Cloud Foundry environment; SAP Cloud Platform cockpit)
Maintain Roles for Applications [page 2070]
2. Create a role collection and assign roles to it. (Administrator of the Cloud Foundry environment; SAP Cloud Platform cockpit)
Maintain Role Collections [page 2072]
3. (If you do not use SAP ID Service) Assign the role collections to SAML 2.0 user groups. (Administrator of the Cloud Foundry environment; SAP Cloud Platform cockpit)
4. Assign the role collection to the business users provided by a SAML 2.0 identity provider. (Administrator of the Cloud Foundry environment; SAP Cloud Platform cockpit)
Map Role Collections to User Groups [page 2075]
Related Information
The functional authorization scopes for your application need to be known to the User Account and Authentication
(UAA) service. To do this, you declare the scopes of your application in the security descriptor file named xs-
security.json and then you assign the security descriptor file to the application.
Context
You have created an OAuth 2.0 client using the service broker. The OAuth 2.0 client includes the following
configuration:
Restriction
This is only available if there are PaaS tenants and if the default SAML 2.0 identity provider, which is
SAP ID Service, is used.
When you use the cf create-service command, you create an OAuth 2.0 client with a service instance and you
specify the xs-security.json file that is relevant for your application. Using this security descriptor file, the
OAuth 2.0 client that has been created for your application receives permission to access the scopes of your
application automatically.
Procedure
To create the service instance and the relevant xs-security.json file, use the following command:
Example
cf create-service xsuaa application node-uaa -c xs-security.json
Create a service instance from the xsuaa service using the service broker.
Context
If you want to grant users access to an application, you must create a service instance (acting as OAuth 2.0 client)
for this application.
Procedure
Create a service instance based on the xsuaa service and the service plan using the security descriptors in the
xs-security.json file.
Example
cf create-service xsuaa application authorizationtest-uaa -c xs-security.json
Note
If you want to update an already existing service instance, see the related link.
Related Information
You can update a service instance from the xsuaa service using the service broker.
Context
You are running a service instance that grants user access to an application. It uses the security descriptor file
xs-security.json. If you change properties, you want an existing service instance to reflect the compatible
changes you made in the xs-security.json file.
Procedure
1. Edit the xs-security.json file and make your changes in the security descriptors.
2. Update the service instance. During the update, you use the security descriptors you changed in the xs-
security.json file.
Example
cf update-service authorizationtest-uaa -c xs-security.json
If you update a service instance, you can add, change, and/or delete scopes, attributes, and role templates.
Whenever you change role templates, you adapt roles that are derived from the changed role template.
Scope
● Add
● Delete
● Change
  ○ description
  ○ granted-apps
Attribute
● Add
● Delete
● Change
  ○ description
Note
Do not change the valueType of the attribute.
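For example, to add a new scope in a compatible way, you might extend the scopes array in xs-security.json before running cf update-service (a sketch; the scope names here are illustrative):

```
"scopes" : [
  { "name" : "$XSAPPNAME.Display", "description" : "display" },
  { "name" : "$XSAPPNAME.Edit", "description" : "edit" }
]
```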
You must bind the service instance created by you to your application using the manifest file.
Context
You must bind the service instance that acts as OAuth 2.0 client to your application. You find the application name
under name in the related manifest file. This file is located in the folder of your application, for example hello-world.
Procedure
Example
cf bind-service hello-world xsuaa -c xs-security.json
You can now proceed and deploy your application with the authorization artifacts that have been created earlier.
You need to get the credentials of a service instance for an application that runs outside of the instance of the
Cloud Foundry environment at SAP Cloud Platform.
Context
Note
Applications that run inside the instance of the Cloud Foundry environment at SAP Cloud Platform get their
credentials after the respective service has been bound to the application. However, you cannot use binding for
an application that runs outside of the instance of the Cloud Foundry environment at SAP Cloud Platform.
Instead, you use a service key that is created in the service instance of the remote application. You need to get the
credentials of the service instance for the remote application. The UAA service broker manages all credentials,
including those of the remote applications. The credentials you need are the OAuth 2.0 client ID and the client
secret.
First you generate a service key for the service instance of the remote application to enable access to the UAA
service broker. Then you retrieve the generated service key with the credentials, including the OAuth 2.0 client ID
and the client secret, from the UAA service broker. The remote application stores the service key. The application
can now use this service key with the credentials (OAuth 2.0 client ID and the client secret of the remote
application).
Procedure
1. Create a service key for the service instance of the remote application.
Example
cf create-service-key rem-app-service rem-app-sk
2. You want to retrieve the credentials including the OAuth 2.0 client ID and the client secret for the service
instance of your remote application. Use the following command:
Example
cf service-key rem-app-service rem-app-sk
Output Code
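The actual output depends on your instance; an illustrative (not verbatim) service-key payload typically contains the OAuth 2.0 client ID and client secret, for example:

```
{
  "clientid": "sb-rem-app!t123",
  "clientsecret": "<generated_secret>",
  "url": "https://<subdomain>.authentication.<region>.hana.ondemand.com",
  "xsappname": "rem-app!t123"
}
```

All values shown here are placeholders; retrieve the real values with the cf service-key command above.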
Another application wants to use the scope of an existing application. For this purpose, the existing application
needs to grant access to the scope for the application that wants to use this scope.
Let's assume that your application name is sample-leave-request-app. This application provides a function
for creating leave requests (for all employees), for approving leave requests (only for managers), and scheduling
some jobs. The MobileApprovals application wants to use the scopes of the sample-leave-request-app.
Let's also assume that the exact scope names are not yet defined. In this case, the xs-security.json file of your
application might resemble the following code example.
Note
The xs-security.json file only includes the application's own scopes (starting with $XSAPPNAME.) in the scopes section.
Scopes provided by other applications (like the JobScheduler scope) are only referenced in the role template.
Sample Code
Example xs-security.json file
{
  "xsappname" : "sample-leave-request-app",
  "scopes" : [
    { "name" : "$XSAPPNAME.createLR",
      "description" : "create leave requests" },
    { "name" : "$XSAPPNAME.approveLR",
      "description" : "approve leave requests",
      "granted-apps" : [ "$XSAPPNAME(application,mobile-subaccount-id,MobileApprovals)" ] }
  ],
  "attributes" : [
    { "name" : "costcenter",
      "description" : "costcenter",
      "valueType" : "s" } ],
  "role-templates": [
    { "name" : "employee",
      "description" : "Role for creating leave requests",
      "scope-references" : [ "$XSAPPNAME.createLR", "JobScheduler.scheduleJobs" ],
      "attribute-references": [ "costcenter" ] },
    { "name" : "manager",
The role-template section in the xs-security.json of the MobileApprovals application contains the
reference to the scope granted by the sample-leave-request-app application.
Sample Code
Example xs-security.json file
"scope-references" : [ "$XSAPPNAME(application,subaccount-id,sample-leave-request-app).approveLR" ]
After logging on to an application, you want to be redirected exactly to the page of the application in question. It
should therefore not be possible to use an open redirect, which might take you to the wrong page for example, or
to a malicious page.
At runtime, the User Account and Authentication service checks the redirect URI for correctness and rejects
access attempts to incompatible redirect URIs. This is possible because the security descriptor file xs-
security.json contains the correct redirect URI.
Sample Code
"oauth2-configuration": {
  "redirect-uris": [
    "http://<host_name1>",
    "http://<host_name2>"
  ]
}
The User Account and Authentication service stores the correct redirect URIs in the OAuth client table. To avoid
this kind of arbitrary redirect after a logon request, the UAA checks the redirect URI and thus makes sure that
users access the correct page.
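Conceptually, the check resembles a simple allowlist lookup. The following Java sketch is illustrative only — the real UAA validation is more elaborate (URI normalization, wildcard patterns) — but it shows the basic idea of rejecting any redirect target that was not registered:

```java
import java.util.List;

public class RedirectUriCheck {

    // Illustrative allowlist check: accept only redirect targets that were
    // registered in the OAuth 2.0 client configuration.
    public static boolean isAllowed(String redirectUri, List<String> registered) {
        return registered.contains(redirectUri);
    }

    public static void main(String[] args) {
        List<String> registered = List.of("http://host_name1", "http://host_name2");
        System.out.println(isAllowed("http://host_name1", registered));
        System.out.println(isAllowed("http://attacker.example", registered));
    }
}
```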
The REST services of the User Account and Authentication service (UAA) provide APIs that enable you to manage
shadow users in the Cloud Foundry environment at SAP Cloud Platform.
Use this SCIM REST API of the User Account and Authentication service to retrieve and delete shadow users.
API                                      Description
/sap/rest/user/scim [page 2114]          Retrieve a filtered list of shadow users which are stored in the UAA.
/sap/rest/user/scim/{id} [page 2115]     Delete shadow users which are stored in the UAA.
/sap/rest/user/scim
You can retrieve a filtered list of the shadow users created after a SAML 2.0 authentication against a SAML 2.0
identity provider. The details include the ID of the shadow users. Time stamp, user ID, and origin (identity provider)
of the shadow user are optional.
● Authorization
xs_user.read
● Method
GET
● Response
A JSON body containing the job details, including the ID of the job with the status code if the call was
successful. A location header with the relative path is included in the response.
Sample Code
GET /sap/rest/user/scim?timestamp=<milliseconds_since_1/1/1970>
{
"id": "967d1a0f-4edb-463d-bb29-8e0855e16eec",
"username": "john.doe@example.com",
"email": "john.doe@example.com",
"givenName": "John",
"familyName": "Doe",
"origin": "ldap",
"zoneId": "acme-apg053accntg",
"verified": false,
"legacyVerificationBehavior": false,
"passwordChangeRequired": false,
"version": 0,
"roleCollections": []
}
Response Result
The request for shadow users is defined with the parameters listed in the following table:
Query parameters (URL encoded):
● timestamp in milliseconds since 1/1/1970 0:00:00,000 (optional)
● username (optional)
/sap/rest/user/scim/{id}
You can delete shadow users that are stored in the UAA. This URL is the REST API endpoint of the UAA.
● Authorization
xs_user.write
● Method
DELETE
● Response
A JSON body containing the job details, including the ID of the job with the status code if the call was
successful. A location header with the relative path is included in the response.
Sample Code
DELETE /sap/rest/user/scim/<Id>
Response Result
The Neo environment of SAP Cloud Platform supports identity federation and single sign-on with external identity
providers. The current section provides an overview of the supported scenarios.
Contents
SAP Cloud Platform applications can delegate authentication and identity management to an existing corporate
IdP that can, for example, authenticate your company's employees. It aims at providing a simple and flexible
solution: your employees (or customers, partners, and so on) can single sign-on with their corporate user
credentials, without a separate user store and subaccount in SAP Cloud Platform. All information required by SAP
Cloud Platform about the employee can be passed securely with the logon process, based on a proven and
standardized security protocol. There is no need to manage additional systems that take care of complex user
account synchronization or provisioning between the corporate network and SAP Cloud Platform. Only the
configuration of already existing components on both sides is needed, which simplifies administration and lowers
total cost of ownership significantly. Even existing applications can be "federation-enabled" without changing a
single line of code.
You can use Identity Authentication as an identity provider for your applications. Identity Authentication is a cloud
solution for identity lifecycle management. Using it, you can benefit from features such as user base, user
provisioning, corporate branding or logo, and social IdP integration. See Identity Authentication.
Identity Authentication provides an easy way for your applications to delegate authentication and identity
management and keep developers focused on the business logic. It allows authentication decisions to be removed
from the application and handled in a central service.
SAP Cloud Platform offers solid integration with Identity Authentication. When you request an Identity
Authentication tenant for your SAP Cloud Platform subaccount, you can automatically use it as a trusted IdP.
SAP ID service is the place where you have to register to get initial access to SAP Cloud Platform. If you are a new
user, you can use the self-service registration option at the SAP Web site or SAP ID Service. SAP ID Service
manages the users of official SAP sites, including the SAP developer and partner community. If you already have
such a user, then you are already registered with SAP ID Service.
In addition, you can use SAP ID Service as an identity provider for your identity federation scenario, or if you do not
want to use identity federation. Trust to SAP ID Service is pre-configured on SAP Cloud Platform by default, so you
can start using it without further configuration. Optionally, on SAP Cloud Platform you can configure additional
trust settings, such as service provider registration, role assignments to users and groups, and so on.
● A central user store for all your identities that require access to protected resources of your application(s)
● A standards-based Single Sign-On (SSO) service that enables users to log on only once and get seamless
access to all your applications deployed using SAP Cloud Platform
The following graphic illustrates the identity federation with SAP ID Service scenario.
Roles allow you to control the access to application resources in SAP Cloud Platform, as specified in Java EE. In
SAP Cloud Platform, you can assign groups or individual users to a role. Groups are collections of roles that allow
the definition of business-level functions within your subaccount. They are similar to the actual business roles
existing in an organization.
The following graphic illustrates a sample scenario for role, user and group management in SAP Cloud Platform. It
shows a person, John Doe, with corporate role: sales representative. On SAP Cloud Platform, all sales
representatives belong to group Sales, which has two roles: CRM User and Account Owner. On SAP Cloud
Platform, John Doe inherits all roles of the Sales group, and has an additional role: Administrator.
You can use a user store from an on-premise system for user authentication scenarios. SAP Cloud Platform
supports two types of on-premise user stores:
SAP Cloud Platform uses the Security Assertion Markup Language (SAML) 2.0 protocol for authentication and
single sign-on.
By default, SAP Cloud Platform is configured to use SAP ID service as identity provider (IdP), as specified in SAML
2.0. You can configure trust to your custom IdP, to provide access to the cloud using your own user database.
SAP ID Service provides Identity and Access Management for Java EE Web applications hosted on SAP Cloud
Platform through the mechanisms described in Java EE Servlet specification and through dedicated APIs.
Cross-site Scripting (XSS) is one of the most common types of malicious attacks to Web applications. To help
protect against this type of attack, SAP Cloud Platform provides a common output encoding library to be used
from applications.
Cross-Site Request Forgery (CSRF) is another common type of attack to Web applications. You can protect
applications running on SAP Cloud Platform from CSRF, based on the Tomcat Prevention Filter.
This section describes how you can implement security in your applications.
SAP Cloud Platform provides the following APIs for user management and authentication:
Package Description
com.sap.security.um The user management API can be used to create and delete
users or update user information.
com.sap.security.um.user
com.sap.security.um.service
Authentication API
Related Information
6.1.2.1.1.1 Authentication
In the Neo environment, enable user authentication for access to your applications.
Prerequisites
● You have installed the SAP Cloud Platform Tools for Java. See Setting Up the Development Environment [page
1126].
● You have created a simple HelloWorld application. See Creating a Hello World Application [page 1139].
● If you want to use Java EE 6 Web Profile features in your application, you have downloaded the SAP Cloud
Platform SDK for Java EE 6 Web Profile. See Using Java EE Web Profile Runtimes [page 1166]
Context
Note
User names in SAP Cloud Platform are case insensitive.
Context
The Java EE servlet specification allows the security mechanisms for an application to be declared in the web.xml
deployment descriptor.
FORM
Credentials: Trusted SAML 2.0 identity provider; Application-to-Application SSO
Description: FORM authentication implemented over the Security Assertion Markup Language (SAML) 2.0
protocol. Authentication is delegated to SAP ID service or a custom identity provider. You can specify the custom
identity provider using the trust configuration for your subaccount. See Application Identity Provider [page 2161].
Example use case: You want to delegate authentication to your corporate identity provider.
BASIC
Credentials: User name and password
Description: HTTP BASIC authentication delegated to SAP ID service or an on-premise SAP NetWeaver AS Java
system. Web browsers prompt users to enter a user name and password. By default, SAP ID service is used.
(Optional) If you configure a connection with an on-premise user store, the authentication is delegated to an
on-premise SAP NetWeaver AS Java system. See Using an SAP System as an On-Premise User Store [page 2175].
Example use case: Example 1: You want to delegate authentication to SAP ID service. Users will log in with their
SCN user name and password. Example 2: You have an on-premise SAP NetWeaver AS Java system used as a
user store. You want users to log in using the user name and password stored in AS Java.
Note
If you want to use your Identity Authentication tenant for BASIC authentication (instead of SAP ID service/SAP
NetWeaver), create a customer ticket in component BC-NEO-SEC-IAM. In the ticket, specify the Identity
Authentication tenant you want to use.
Restriction
The trust configuration (cloud cockpit: Security > Trust > Application Identity Provider) you set for your
subaccount does not apply for BASIC authentication.
CERT
Credentials: Client certificate
Description: Used for authentication only with client certificate. See Enabling Client Certificate Authentication
[page 2245].
Example use case: Users log in using their corporate client certificates.
BASICCERT
Credentials: User name and password; Client certificate
Description: Used for authentication either with client certificate or with user name and password. See Enabling
Client Certificate Authentication [page 2245].
Example use case: Within the corporate network, users log in using their client certificates. Outside that network,
users log in using user name and password.
OAUTH
Credentials: OAuth 2.0 token; Application-to-Application SSO
Description: Authentication according to the OAuth 2.0 protocol with an OAuth access token. See OAuth 2.0
Authorization Code Grant [page 2208].
Example use case: You have a mobile application consuming REST APIs using the OAuth 2.0 protocol. Users log
in using an OAuth access token.
If you need to configure the default options of an authentication method, or define new methods, see
Authentication Configuration [page 2180].
Tip
We recommend using the FORM authentication method.
Note
By default, any other methods (DIGEST, CLIENT-CERT, or custom) that you specify in the web.xml are executed
as FORM. You can configure those methods using the Authentication Configuration section at Java application
level in the Cockpit. See Authentication Configuration [page 2180].
Tip
For the SAML and FORM authentication methods, if your application sends multiple simultaneous requests
without an authenticated session, they may fail. We recommend that you first send one request to a protected
resource, establish a session, and then use the session for the multiple simultaneous requests.
Tip
Although BASIC authentication is usually used for technical users to consume REST services (stateless
communication) we recommend that the client leverages the security session instead of sending credentials
with every call. This means the client needs to make sure it preserves and re-sends all HTTP cookies it receives.
Thus, authentication will happen only once and this could improve performance.
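As a sketch of the recommendation above (assuming Java 11's java.net.http client, which is not mandated by SAP Cloud Platform), a REST client can preserve the security session cookie across calls by attaching a cookie handler, so BASIC authentication happens only once:

```java
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.http.HttpClient;

public class SessionAwareClient {

    // Build a client that stores and re-sends all cookies it receives,
    // so the security session is reused instead of sending credentials
    // with every call.
    public static HttpClient create() {
        CookieManager cookieManager = new CookieManager();
        cookieManager.setCookiePolicy(CookiePolicy.ACCEPT_ALL);
        return HttpClient.newBuilder()
                .cookieHandler(cookieManager)
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = create();
        // The cookie handler is in place; reuse this client for all calls.
        System.out.println(client.cookieHandler().isPresent());
    }
}
```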
Results
● When FORM authentication is used, you are redirected to SAP ID service or another identity provider, where
you are authenticated with your user name and password. The servlet content is then displayed.
● When BASIC authentication is used, you see a popup window and are prompted to enter your credentials. The
servlet content is then displayed.
Example
Example 1: Using FORM Authentication
<login-config>
<auth-method>FORM</auth-method>
</login-config>
<security-constraint>
<web-resource-collection>
<web-resource-name>Protected Area</web-resource-name>
<url-pattern>/index.jsp</url-pattern>
<url-pattern>/a2asso.jsp</url-pattern>
</web-resource-collection>
<auth-constraint>
<!-- Role Everyone will not be assignable -->
<role-name>Everyone</role-name>
</auth-constraint>
</security-constraint>
<security-role>
<description>All SAP Cloud Platform users</description>
<role-name>Everyone</role-name>
</security-role>
Note
All authenticated users implicitly have the Everyone role. You cannot remove or edit this role. In the SAP
Cloud Platform Cockpit, the Everyone role is not listed in role mapping (see Managing Roles [page 2151]).
If you want to manage authorizations according to user roles, you should define the corresponding constraints
in the web.xml. The following example defines a resource available for users with role Developer, and another
resource for users with role Manager:
<security-constraint>
<web-resource-collection>
<web-resource-name>Developer Page</web-resource-name>
<url-pattern>/developer.jsp</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>Developer</role-name>
</auth-constraint>
</security-constraint>
<security-constraint>
<web-resource-collection>
<web-resource-name>Manager Page</web-resource-name>
<url-pattern>/manager.jsp</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>Manager</role-name>
</auth-constraint>
</security-constraint>
<login-config>
<auth-method>FORM</auth-method>
</login-config>
Context
With programmatic authentication, you do not need to declare constrained resources in the web.xml file of your
application. Instead, you declare the resources as public, and you decide in the application logic when to trigger
authentication. In this case, you have to invoke the authentication API explicitly before executing any application
code that should be protected. You also need to check whether the user is already authenticated, and should not
trigger authentication if the user is logged on, except for certain scenarios where explicit re-authentication is
required.
If you trigger authentication in an SAP Cloud Platform application protected with FORM, the user is redirected to
SAP ID service or custom identity provider for authentication, and is then returned to the original application that
triggered authentication.
If you trigger authentication in an SAP Cloud Platform application protected with BASIC, the Web browser displays
a popup window to the user, prompting him or her to provide a user name and password.
package hello;

import java.io.IOException;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.security.auth.login.LoginContextFactory;

public class HelloWorldServlet extends HttpServlet {
    ...
    protected void doGet(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        String user = request.getRemoteUser();
        if (user != null) {
            response.getWriter().println("Hello, " + user);
        } else {
            LoginContext loginContext;
            try {
                loginContext = LoginContextFactory.createLoginContext("FORM");
                loginContext.login();
                response.getWriter().println("Hello, " + request.getRemoteUser());
            } catch (LoginException e) {
                e.printStackTrace();
            }
        }
        ...
    }
}
In the example above, you create LoginContext and call its login() method.
Note
All the steps below are described using the FORM authentication method, but they can also be applied to
BASIC.
Procedure
1. Open the source code of your HelloWorldServlet class. Add the code for programmatic authentication to the
doGet() method.
2. Make the doPost() method invoke programmatic authentication. This is necessary because the SAP ID
service always returns the SAML2 response over an HTTP POST binding, and in order to be processed
correctly, the LoginContext login must be called during the doPost() method. The authentication framework
is responsible for restoring the original request using GET after successful authentication. Another alternative
is that your doPost() method simply calls your doGet() method.
3. Test your application on the local server. It does not need to be connected to the SAP ID service, and
authentication is done against local users. For more information, see Testing User Authentication on the Local
Server.
4. Deploy the application to SAP Cloud Platform. If you are using FORM, you are redirected to the SAP ID service or
another identity provider, depending on your trust configuration for this subaccount. If you are using BASIC,
you are redirected to the SAP ID service (not configurable using trust settings). You should then see the content
returned by the hello servlet.
When BASIC authentication is used, you should see a popup window prompting you to provide credentials to
authenticate. Once these are entered successfully, the servlet content is displayed.
You can configure session timeout using the web.xml. Default value: 20 minutes. For example:
<session-config>
<session-timeout>15</session-timeout> <!-- in minutes -->
</session-config>
After the specified timeout, user sessions are invalidated. If the user tries to access an invalidated session, SAP
Cloud Platform returns a login page in its response, so the user can enter credentials again. If you are using
SAML as the login protocol, you cannot rely on the response code to find out that your session has expired, because it will
be 200 or 302. To check whether the response is for triggering a new login, get the
com.sap.cloud.security.login HTTP header, and reload the page. For example:
jQuery(document).ajaxComplete(function(e, jqXHR){
    if (jqXHR.getResponseHeader("com.sap.cloud.security.login")) {
        window.location.reload();
    }
});
Note
For requests made with the X-Requested-With header and value XMLHttpRequest (AJAX requests), you
need to check for session expiration (by checking the marker header com.sap.cloud.security.login). If
the session is expired and you are using SAML2 or FORM authentication method, the system does not trigger
an authentication request.
6.1.2.1.1.1.4 Troubleshooting
When testing in the local scenario, and your application has Web-ContextPath: /, you might experience the
following problem with Microsoft Internet Explorer:
Output Code
HTTP Status 405 - HTTP method POST is not supported by this URL
If you see such issues, you will have to add the following code into your protected resource:
@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, IOException { doGet(req, resp); }
Next Steps
You can now test the application locally. See Test Security Locally [page 2140].
After testing, you can proceed with deploying the application to SAP Cloud Platform. See Deploying and Updating
Applications [page 1175].
After deploying on SAP Cloud Platform, you need to configure the role assignments users and groups will have for
this application. See Managing Roles [page 2151].
Optionally, you can configure the authentication options applied in the authentication method that you defined in
the web.xml or programmatically. See Authentication Configuration [page 2180].
To see the end-to-end scenario of managing roles on SAP Cloud Platform, watch the complete video tutorial
Managing Roles in SAP Cloud Platform .
6.1.2.1.1.2 Authorizations
// Runtime authorization check inside a servlet method:
if (!request.isUserInRole("Developer")) {
    response.sendError(403, "Logged in user does not have role Developer");
    return;
} else {
    out.println("Hello developer");
}
}
Next Steps
You can now test the application locally. For more information, see Test Security Locally [page 2140].
After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information, see
Deploying and Updating Applications [page 1175].
After deploying on SAP Cloud Platform, you need to configure the role assignments users and groups will have for
this application. For more information, see Managing Roles [page 2151].
The Authorization Management API allows you to manage user roles and groups, and their assignments in your
applications.
The Authorization Management API is protected with OAuth 2.0 client credentials. Create an OAuth client and
obtain an access token to call the API methods. See Using Platform APIs [page 1289].
Note
We strongly recommend that you use this API only for administration, not for runtime checks of authorizations.
For the runtime checks, we recommend using HttpServletRequest.isUserInRole(java.lang.String
role). See Authorizations [page 2130].
{
"roles": [
{
"name": "Developer",
"type": "PREDEFINED",
"applicationRole": true,
"shared": true
},
{
"name": "Administrator",
"type": "PREDEFINED",
"applicationRole": true,
"shared": true
}
]
}
Platform APIs of SAP Cloud Platform are protected with OAuth 2.0 client credentials. Create an OAuth client and
obtain an access token to call the platform API methods.
Context
For a description of the OAuth 2.0 client credentials grant, see the OAuth 2.0 client credentials grant specification.
For detailed description of the available methods, see the respective API documentation.
Tip
Do not get a new OAuth access token for each and every platform API call. Instead, reuse the same access
token throughout its validity period, until you get a response indicating that the access token needs to be
reissued.
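The tip above can be sketched as a small token cache that reuses an access token until shortly before its validity window lapses. This is an illustrative sketch, not SAP-provided code; the TokenCache class and the Supplier-based fetcher are assumptions for the example.

```java
import java.util.function.Supplier;

// Illustrative sketch: cache an OAuth access token and reuse it until it
// is about to expire, instead of requesting a new one for every API call.
// TokenCache and the Supplier-based fetcher are hypothetical helpers.
public class TokenCache {
    private String token;
    private long expiresAtMillis;

    public synchronized String get(Supplier<String> fetchNewToken, long ttlSeconds) {
        long now = System.currentTimeMillis();
        if (token == null || now >= expiresAtMillis) {
            token = fetchNewToken.get();
            // Refresh 60 seconds early to avoid sending an almost-expired token.
            expiresAtMillis = now + Math.max(ttlSeconds - 60, 1) * 1000;
        }
        return token;
    }
}
```

With the default validity of 1500 seconds, repeated calls within roughly 24 minutes return the cached token; only the first call invokes the fetcher.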
Context
The OAuth client is identified by a client ID and protected with a client secret. In a later step, those are used to
obtain the OAuth API access token from the OAuth access token endpoint.
Procedure
1. In your Web browser, open the Cockpit. See SAP Cloud Platform Cockpit [page 900].
Caution
Make sure you save the generated client credentials. Once you close the confirmation dialog, you cannot
retrieve the generated client credentials from SAP Cloud Platform.
Context
Once you have the client credentials, you need to send an HTTP POST request to the OAuth access token endpoint
and use the client ID and client secret as user and password for HTTP Basic Authentication. You will receive the
access token as a response. By default, the access token received in this way is valid for 1500 seconds (25 minutes).
You cannot configure its validity length.
Procedure
1. Send a POST request to the OAuth access token endpoint. The URL is landscape specific, and looks like this:
The parameter grant_type=client_credentials notifies the endpoint that the Client Credentials flow is used.
2. Get and save the access token from the response received from the endpoint.
The response is a JSON object whose access_token parameter contains the access token. The token is valid for the
number of seconds specified in the expires_in parameter (default value: 1500 seconds).
Example
Retrieving an access token on the trial landscape will look like this:
POST https://api.hanatrial.ondemand.com/oauth2/apitoken/v1?
grant_type=client_credentials
Headers:
Authorization: Basic eW91ckNsaWVudElEOnlvdXJDbGllbnRTZWNyZXQ
{
"access_token": "51ddd94b15ec85b4d54315b5546abf93",
"token_type": "Bearer",
"expires_in": 1500
}
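The Authorization header in the example is simply the Base64 encoding of clientID:clientSecret. A minimal sketch of building that header, using the placeholder credentials from the example above (note that the example shows the value without the trailing = padding):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Builds the HTTP Basic Authentication header value for the OAuth
    // token request: "Basic " + base64(clientId + ":" + clientSecret).
    public static String build(String clientId, String clientSecret) {
        String credentials = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Placeholder credentials matching the example above.
        System.out.println(build("yourClientID", "yourClientSecret"));
        // prints: Basic eW91ckNsaWVudElEOnlvdXJDbGllbnRTZWNyZXQ=
    }
}
```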
1. At the required (application, subaccount or global account) level, create an HTTP destination with the
following information (the name can be different):
○ Name=<yourdestination name>
○ URL=https://api.<SAP Cloud Platform host>/oauth2/apitoken/v1?grant_type=client_credentials
○ ProxyType=Internet
○ Type=HTTP
○ CloudConnectorVersion=2
○ Authentication=BasicAuthentication
○ User=<clientID>
○ Password=<clientSecret>
See Create HTTP Destinations [page 111].
2. In your application, obtain an HttpURLConnection object that uses the destination.
See ConnectivityConfiguration API [page 80].
3. With the object retrieved from the previous step, execute a POST call.
urlConnection.setRequestMethod("POST");
urlConnection.setRequestProperty("Authorization", "Basic <Base64 encoded
representation of {clientId}:{clientSecret}>");
urlConnection.connect();
Procedure
In the requests to the required platform API, include the access token as a header with name Authorization and
value Bearer <token value>.
Example
We will call the Authorization Management API.
GET https://api.hanatrial.ondemand.com/authorization/v1/accounts/p1234567trial/
users/roles/?userId=myUser
Headers:
Authorization: Bearer 51ddd94b15ec85b4d54315b5546abf93
Related Information
You can access user attributes using the User Management Java API (com.sap.security.um.user). It can be
used to get and create users or to read and update their information.
To get UserProvider, first, declare a resource reference in the web.xml. For example:
<resource-ref>
<res-ref-name>user/Provider</res-ref-name>
<res-type>com.sap.security.um.user.UserProvider</res-type>
</resource-ref>
Then look up UserProvider via JNDI in the source code of your application. For example:
InitialContext ctx = new InitialContext();
UserProvider userProvider = (UserProvider) ctx.lookup("java:comp/env/user/Provider");
Note
If you are using the SDK for Java EE 6 Web Profile, you can look up UserProvider via annotation (instead of
embedding JNDI lookup in the code). For example:
@Resource
private UserProvider userProvider;
try {
// Read the currently logged in user from the user storage
return userProvider.getUser(request.getRemoteUser());
} catch (PersistenceException e) {
throw new ServletException(e);
}
import com.sap.security.um.user.User;
import com.sap.security.um.user.UserProvider;
import com.sap.security.um.service.UserManagementAccessor;
...
// Check for a logged in user
if (request.getUserPrincipal() != null) {
    try {
        // UserProvider provides access to the user storage
        UserProvider users = UserManagementAccessor.getUserProvider();
        // Read the currently logged in user from the user storage
        User user = users.getUser(request.getUserPrincipal().getName());
        // Print the user name and email
        response.getWriter().println("User name: " + user.getAttribute("firstname")
            + " " + user.getAttribute("lastname"));
        response.getWriter().println("Email: " + user.getAttribute("email"));
    } catch (Exception e) {
        // Wrap user storage access failures for the servlet container
        throw new ServletException(e);
    }
}
In the source code above, the user.getAttribute method is used for single-value attributes (the first name and
last name of the user). For attributes that we expect to have more than one value (such as the assigned groups),
we use user.getAttributeValues method.
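The single-value versus multi-value distinction above can be illustrated with a plain map-backed sketch. This is a toy model for illustration only; the real API is com.sap.security.um.user.User, not this class.

```java
import java.util.List;
import java.util.Map;

// Toy model of the attribute access pattern described above. The method
// names mirror user.getAttribute (single value) and user.getAttributeValues
// (all values, e.g. assigned groups); the class itself is hypothetical.
public class AttributeStore {
    private final Map<String, List<String>> attributes;

    public AttributeStore(Map<String, List<String>> attributes) {
        this.attributes = attributes;
    }

    // Single-value access, analogous to user.getAttribute("firstname").
    public String getAttribute(String name) {
        List<String> values = attributes.get(name);
        return (values == null || values.isEmpty()) ? null : values.get(0);
    }

    // Multi-value access, analogous to user.getAttributeValues("groups").
    public List<String> getAttributeValues(String name) {
        return attributes.getOrDefault(name, List.of());
    }
}
```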
Next Steps
You can now test the application locally. For more information, see Test Security Locally [page 2140].
After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information, see
Deploying and Updating Applications [page 1175].
6.1.2.1.1.6 Logout
This topic describes how to enable users to log out from your applications.
Context
You can provide a logout operation for your application by adding a logout button or logout link.
When logout is triggered in a SAP Cloud Platform application, the user is redirected to the identity provider to be
logged out there, and is then returned to the original application URL that triggered the logout request.
The following code provides a sample servlet that handles logout operations. When loginContext.logout() is
used, the system automatically redirects the logout request to the identity provider, and then returns the user to
the logout servlet again.
import java.io.IOException;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.security.auth.login.LoginContextFactory;
...
public class LogoutServlet extends HttpServlet {
    ...
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Call logout if the user is logged in
        if (request.getRemoteUser() != null) {
            try {
                LoginContext loginContext = LoginContextFactory.createLoginContext();
                loginContext.logout();
            } catch (LoginException e) {
                // Servlet container handles the login exception
                // It throws it to the application for its information
                response.getWriter().println("Logout failed. Reason: " + e.getMessage());
            }
        } else {
            // The user is already logged out
            response.getWriter().println("Successfully logged out");
        }
    }
}
We add a logout link to the HelloWorld servlet, which references this logout servlet:
response.getWriter().println("<a href=\"LogoutServlet\">Logout</a>");
CSRF is a common Web hacking attack. For more information, see Cross-Site Request Forgery (CSRF) (non-
SAP link). You might consider protecting the logout operations for your applications from CSRF to prevent your
users from potential CSRF attack related problems (for example, XSRF denial of service on single logout).
Note
Although SAP Cloud Platform provides ready-to-use support for CSRF filtering, with logout operations you
cannot use it. The reason is users are sent to the logout servlet twice: first, when they trigger logout by clicking a
button/link, and second, when the identity provider has logged them out and redirected them back to the
application. You cannot specify the system to apply the CSRF filter first time, and skip it the second time.
Source Code
try {
    HttpSession session = request.getSession(false);
    if (session != null) {
        long tokenValue = 0L;
        if (session.getAttribute("csrf-logout") != null) {
            tokenValue = (Long) session.getAttribute("csrf-logout");
        } else {
            SecureRandom instance = java.security.SecureRandom.getInstance("SHA1PRNG");
            instance.setSeed(instance.generateSeed(5));
            tokenValue = instance.nextLong();
            session.setAttribute("csrf-logout", tokenValue);
        }
        response.getWriter().println("<a href=\"LogoutServlet?csrf-logout=" + tokenValue + "\">Logout</a>");
    }
} catch (NoSuchAlgorithmException e) {
    throw new ServletException(e);
}
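The snippet above covers only the generation side of the check. On the logout request itself, the servlet must also compare the csrf-logout request parameter against the token stored in the session. A minimal validation sketch, assuming the attribute and parameter names from the snippet above (this is illustrative, not SAP-provided code):

```java
// Hypothetical validation counterpart to the token-generation snippet:
// reject a logout request unless its csrf-logout parameter matches the
// token previously stored in the session under "csrf-logout".
public class CsrfLogoutCheck {
    public static boolean isValidLogoutRequest(String csrfParam, Object sessionToken) {
        return sessionToken != null
                && csrfParam != null
                && csrfParam.equals(String.valueOf(sessionToken));
    }
}
```

In the servlet, csrfParam would come from request.getParameter("csrf-logout") and sessionToken from session.getAttribute("csrf-logout").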
For logout to work, the servlet handling logout must not be protected in the web.xml. Otherwise,
requesting logout will result in a login request. The following example illustrates how to successfully unprotect a
logout servlet. The additional <security-constraint>...</security-constraint> section explicitly enables access to
the logout servlet.
<security-constraint>
<web-resource-collection>
<web-resource-name>Start Page</web-resource-name>
<url-pattern>/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>Everyone</role-name>
</auth-constraint>
</security-constraint>
<security-constraint>
<web-resource-collection>
<web-resource-name>Logout</web-resource-name>
<url-pattern>/LogoutServlet</url-pattern>
</web-resource-collection>
</security-constraint>
Avoid mapping a servlet to resources using a wildcard (<url-pattern>/*</url-pattern> in the web.xml). This may lead
to an infinite loop. Instead, map the servlet to particular resources, as in the following example:
<servlet>
    <servlet-name>Logout Servlet</servlet-name>
    <servlet-class>test.LogoutServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Logout Servlet</servlet-name>
    <url-pattern>/LogoutServlet</url-pattern>
</servlet-mapping>
Next Steps
You can now test the application locally. For more information, see Test Security Locally [page 2140].
After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information, see
Deploying and Updating Applications [page 1175].
This section describes the error messages you may encounter when using BASIC authentication with SAP ID
Service as an identity provider.
For more information about using BASIC authentication, see Authentication [page 2122].
Error Messages
Error Message: Your account is temporarily locked. It will be automatically unlocked in 60 minutes.
Description: SAP ID Service has registered five unsuccessful login attempts for this account in a short time. For
security reasons, your account is disabled for 60 minutes.

Error Message: Password authentication is disabled for your account. Log in with a certificate.
Description: The owner of this account has disabled password authentication using their user profile settings in
SAP ID service.

Error Message: Inactive account. Activate it via your account creation confirmation email.
Description: This is a new account and you haven't activated it yet. You will receive an e-mail confirming your
account creation and containing an account activation link.

Error Message: Login failed. Contact your administrator.
Description: You cannot log in for a reason different from all the others listed here.
This section describes how you can test the security you have implemented in your Java applications.
First, you need to test your application on your local runtime. If you use the Eclipse Tools, you can easily test with
local users. This is useful if you are implementing role-based identity management in your application.
Then, if everything goes well on the local runtime, you can deploy your application on SAP Cloud Platform and test
how the application works in the cloud with your local SAML 2.0 identity provider. This is useful if you are
implementing SAML 2.0 identity federation.
When you add user authentication to your application, you can test it first on the local server before uploading it to
SAP Cloud Platform.
Note
On the local server, authentication is handled locally, that is, not by the SAP ID service. When you try to access a
protected resource on the local server, you will see a local login page (not SAP ID service's or another identity
provider's login page). User access is then either granted or denied based on a local JSON (JavaScript Object
Notation) file (<local_server_dir>/config_master/com.sap.security.um.provider.neo.local/neousers.json),
which defines the local set of user accounts, along with their roles and attributes. This is just for testing
purposes. When you deploy to the cloud, user authentication is handled by the SAP ID service.
Using SAP Cloud Platform Tools (Eclipse Tools), you can easily manage local users. You can use the visual editor
for configuring the users, or edit the JSON file directly.
User attributes provide additional information about a user account. Applications can use attributes to distinguish
between users or to customize behavior for each user. To add a new attribute, proceed as follows:
Roles are used by applications to define access rights. By default, each user is assigned the User.Everyone role. It is
read-only, which means you cannot remove it. To add a new role, proceed as follows:
1. From the list of JSON files, select the user you want to export.
Tip
The default name of the exported file is localusers.json. You can rename it to something more
meaningful to you.
If you prefer using the console client instead of the Eclipse IDE, you have to manually find and edit the JSON file
configuring local test users. It is located at <local_server_dir>/config_master/
com.sap.security.um.provider.neo.local/neousers.json.
The following example shows a sample configuration of a JSON file with two users, along with their attributes and
roles:
{
"Users": [
{
"UID": "P000001",
"Password": "{SSHA}OA5IKcTJplwLLaXCjmbcV+d3LQVKey+bEXU\u003d",
"Roles": [
"Employee",
"Manager"
Troubleshooting
When stopping your local server, you might see the following error logs:
#ERROR#org.apache.catalina.core.ContainerBase##anonymous#System Bundle
Shutdown###ContainerBase.removeChild: stop:
org.apache.catalina.LifecycleException: Failed to stop component
[StandardEngine[Catalina].StandardHost[localhost].StandardContext[/idelogin]]
This error causes no harm and you don't need to take any measures.
Next Steps
● After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information,
see Deploying and Updating Applications [page 1175].
● After deploying on the cloud, you may need to perform configuration steps using the cockpit. For more
information, see Security Configuration [page 2151].
You can use a local test identity provider (IdP) to test single sign on (SSO) and identity federation of an SAP Cloud
Platform application end-to-end.
This scenario offers simplified testing in which developers establish trust to an application deployed in the cloud
with an easy-to-use local test identity provider.
For more information about the identity provider concept in SAP Cloud Platform, see Application Identity Provider
[page 2161].
Contents:
Prerequisites
● You have set up and configured the Eclipse IDE for Java EE Developers and SAP Cloud Platform Tools for Java.
For more information, see Setting Up the Tools and SDK [page 1126].
● You have developed and deployed your application on SAP Cloud Platform. For more information, see Creating
an SAP Cloud Platform Application [page 1166].
Procedure
The usage of the local test identity provider involves the following steps:
For more information about the Users editor, see Testing User Authentication on the Local Server [page 2122].
1. In a Web browser, open the cockpit and navigate to Security Trust Local Service Provider .
2. Choose Edit.
3. For Configuration Type, choose Custom.
4. Choose Generate Key Pair to generate a new signing key and self-signed certificate.
5. For the rest of the fields, leave the default values.
6. Choose Save.
7. Choose Get Metadata to download and save the SAML 2.0 metadata identifying your SAP Cloud Platform
account as a service provider. You will have to import this metadata into the local test IdP to configure trust to
SAP Cloud Platform in the procedure that follows.
You need to configure your local IdP name if you want to use more than one local IdP. Default local IdP name:
localidp.
1. In the Eclipse IDE, go to the already set up local server that will be used as local IdP.
2. In the config_master/com.sap.core.jpaas.security.saml2.cfg/ folder, create a file named
local_idp.cfg.
3. In the file, add a property:
localidp_name=<idpname you want to use>
4. Restart the local server.
The trust settings on SAP Cloud Platform for the local test IdP are configured in the same way as with any other
productive IdP.
1. During the configuration, use the local test IdP metadata that can be requested under the following link:
http://<idp_host>:<idp_port>/saml2/localidp/metadata
Assertion-based attributes are used to define a mapping between attributes in the SAML assertion issued by the
local test IdP and user attributes on the Cloud.
This allows you to essentially pass any attribute exposed by the local test IdP to an attribute used in your
application in the cloud.
Define user attributes in the local test IdP by using the Eclipse IDE Users editor for SAP Cloud Platform as is
described in Setting up the local test IdP.
1. Open the cockpit in a Web browser, navigate to Security Trust Application Identity Provider .
2. From the table, choose the entry localidp, open the Attributes tab page, and click on Add Assertion-Based
Attribute.
3. In Assertion Attribute, enter the name of the attribute contained in the SAML 2.0 assertion issued by the local
test IdP. These are the same user attributes you defined in the Eclipse IDE Users editor when setting the local
test IdP.
5. Generate a self-signed key pair and certificate for the local test IdP (optional)
If an error occurs while requesting the IdP metadata and the metadata cannot be generated, you can do the
following:
1. Generate a localidp.jks keyfile manually. The key and certificate are needed for signing the information that the
local test IdP will exchange with SAP Cloud Platform.
2. Go to the <JAVA_HOME>/jre/bin directory, which contains the keytool executable.
3. Open a command line and execute the keytool command,
where <fullpath_dir_name> is the directory path where the .jks file will be saved after creation.
4. Under the Server directory, go to config_master\com.sap.core.jpaas.security.saml2.cfg and
create a directory with name localidp.
5. Copy the localidp.jks file under localidp directory.
1. In the Eclipse IDE, go to the already set up local test IdP Server.
2. Copy the file with the metadata describing SAP Cloud Platform as a service provider under the local server
directory config_master/com.sap.core.jpaas.security.saml2.cfg/localidp. To get this
metadata, in the cockpit, choose Security Trust Local Service Provider Get Metadata .
You can now access your application, deployed on the cloud, and test it against the local test IdP and its defined
users and attributes.
When you have implemented security in your application, you need to perform a few configuration tasks using the
Cockpit to enable the scenario to work successfully on SAP Cloud Platform.
Related Information
In SAP Cloud Platform, you can use Java EE roles to define access to the application resources.
Context
Terms
Term Description
Role Roles allow you to diversify user access to application resources (role-based authorizations).
Note
Role names are case sensitive.
Predefined roles Predefined roles are ones defined in the web.xml of an application.
After you deploy the application to SAP Cloud Platform, the role becomes visible in the Cockpit, and
you can assign groups or individual users to that role. If you undeploy your application, these roles are
removed.
● Shared - they are shared by default. A shared role is visible and accessible within all accounts subscribed
to this application.
● Restricted - an application administrator could restrict a shared role. A restricted role is visible and
accessible only within the subaccount that deployed the application, and not to accounts subscribed to the
application.
Note
If you restrict a shared role, you hide it from visibility for new assignments from subscribed accounts
but all existing assignments will continue to take effect.
Custom roles Custom roles are ones defined using the Cockpit. Custom roles are interpreted in the same way as pre
defined roles at SAP Cloud Platform: they differ only in the way they are created, and in their scope.
You can add custom roles to an application to configure additional access permissions to it without
modifying the application's source code.
Custom roles are visible and accessible only within the subaccount where they are created. That’s why
different accounts subscribed to the same application could have different custom roles.
User Users are principals managed by identity providers (SAP ID service or others).
Note
SAP Cloud Platform does not have a user database of its own. It maps the users authorized by identity
providers to groups, and groups to roles.
Note
When a user logs in, their roles are stored in the user's current browser session. They are not updated
dynamically, and are removed only when the session is terminated or invalidated. This means that if you
change the set of roles for a currently logged-in user, the change takes effect only after logout or session
invalidation.
Group Groups are collections of roles that allow the definition of business-level functions within your subaccount.
They are similar to the actual business roles existing in an organization, such as "manager", "employee",
"external" and so on. They help you to get better alignment between technical Java EE roles and
organizational roles.
Note
Group names are case insensitive.
For each identity provider (IdP) for your subaccount, you define a set of rules specifying the groups a
user for this IdP belongs to.
Context
This can be done in two ways: using predefined roles in the web.xml at development time, or using custom roles in
the UI.
Tip
If you need to do mass role or group assignment to a very large number of users simultaneously, we
recommend using the Authorization Management API instead of the cockpit UI. See Using Platform APIs [page
1289].
● Predefined Roles
a. In the web.xml of the required application, define the roles authorized to access the application resources.
See Authentication [page 2122].
b. Deploy the application to SAP Cloud Platform.
See Deploying and Updating Applications [page 1175].
c. Optionally, if you want to restrict the roles to the current application only, deselect the Share option for
them in the Cockpit.
● Custom roles with applications from the same subaccount
Context
Groups allow you to easily manage the role assignments to collections of users instead of individual users.
Procedure
Context
You can assign individual users to the roles or, more conveniently, assign groups for collective role management.
You can do it in either of the two ways: using the Security Roles section for the application, or using the
Security Authorizations section for the subaccount.
Procedure
Tip
You can use regular expressions to narrow the groups found.
Context
For each different IdP, you then define a set of rules specifying to which groups a user logged by this IdP belongs.
Note
You must have defined groups in advance before you define default or assertion-based groups for this IdP.
Default groups are the groups all users logged by this IdP will have. For example, all users logged by the company
IdP can belong to the group "Internal".
Assertion-based groups are groups determined by values of attributes in the SAML 2.0 assertion. For example, if
the assertion contains the attribute "contract=temporary", you may want all such users to be added to the
group "TEMPORARY".
Procedure
a. In the cockpit, navigate to Security Authorizations Groups , and choose Add Default Group.
b. From the dropdown list that appears, choose the required group.
● Defining Assertion-Based Groups
a. In the cockpit, navigate to Security Authorizations Groups , and choose Add Assertion-Based
Group. A new row appears and a new mapping rule is now being created.
b. Enter the name of the group to which users will be mapped. Then define the rule for this mapping.
c. In the first field of the Mapping Rules section, enter the SAML 2.0 assertion attribute name to be used as
the mapping source. In other words, the value of this attribute will be compared with the value you specify
(in the last field of Mapping Rules).
d. Choose the comparison operator.
Equals - Choose Equals if you want the value of the SAML 2.0 assertion attribute to match exactly the
string you specify. Note that if you want to use more sophisticated relations, such as "starts with" or
"contains", you need to use the Regular expression option.
.*@sap.com$
Example 2: You want all users with a name starting with admin to be added to the group Administrators.
Hence, you choose the mapping rule to be userid, matched using the following regular expression:
^(admin).*
e. In the last field of Mapping Rules, enter the value with which you compare the specified SAML 2.0
assertion attribute.
f. You can specify more than one mapping rule for a specific group. Use the plus button to add as many rules
as required.
Note
Adding a new rule binds it to the rest using a logical OR operator.
Note
Adding a new subrule binds it to the rest of the subrules using a logical AND operator.
In the image below, all users logged in by this IdP are added to the group Government. The users that have an
attribute corresponding to their department name will also be assigned to the respective department
groups.
When you open the Groups tab page of the Authorizations section, you can see the identity provider
mappings for this group.
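The mapping rules described above (Equals versus regular expression matching of a SAML assertion attribute) can be sketched as plain matching logic. This is an illustrative model only, not SAP Cloud Platform's implementation; the rule patterns reuse the .*@sap.com$ and ^(admin).* examples from the procedure.

```java
import java.util.regex.Pattern;

// Illustrative model of an assertion-based group mapping rule: one SAML
// assertion attribute value is compared against an expected string, either
// exactly (Equals) or via a regular expression. The class is hypothetical;
// it only mirrors the cockpit options described above.
public class GroupMappingRule {
    public enum Operator { EQUALS, REGEX }

    private final Operator operator;
    private final String expected;

    public GroupMappingRule(Operator operator, String expected) {
        this.operator = operator;
        this.expected = expected;
    }

    public boolean matches(String attributeValue) {
        if (attributeValue == null) return false;
        return operator == Operator.EQUALS
                ? attributeValue.equals(expected)
                : Pattern.matches(expected, attributeValue);
    }
}
```

Several rules for the same group would be combined with OR, and subrules with AND, as the notes above describe.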
Try to access the required application logging on with users with and without the required roles respectively.
Context
You may use the following steps to configure default role caching settings. This may be required if you have
automated test procedures for role assignments in your applications. Tests may not work properly with the default
subaccount settings.
Tip
You can take one of the following approaches:
● Increase the time in which the requests are counted to more than the default 2 minutes
● Increase the number of requests – instead of the default 20, set 100 or 200, for example.
The table below shows the VM system properties available for configuring role caching:
Set the required values to the required VM system properties as described in Configure VM Arguments [page
1702].
The application identity provider supplies the user base for your applications. For example, you can use your
corporate identity provider for your applications. This is called identity federation. SAP Cloud Platform supports
Security Assertion Markup Language (SAML) 2.0 for identity federation.
Contents
Prerequisites
● You have a key pair and certificate for signing the information you exchange with the IdP on behalf of SAP
Cloud Platform. This ensures the privacy and integrity of the data exchanged. You can use your pre-generated
ones or use the generation option in the cockpit.
● You have provided the IdP with the above certificate. This allows the IdP administrator to configure its trust
settings.
● You have the IdP signing certificate to enable you to configure the cloud trust settings.
● You have negotiated with the IdP administrator which information the SAML 2.0 assertion will contain for each
user. For example, this could be a first name, last name, company, position, or an e-mail.
● You know the authorizations and attributes the users logged by this IdP need to have on SAP Cloud Platform.
Tip
You can configure your SAP Cloud Platform account for identity federation with more than one identity provider. In such a case, make sure all user identities are unique across all identity providers and that no user is available in more than one identity provider. Otherwise, this could lead to incorrect assignment of security roles at SAP Cloud
Platform.
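The uniqueness recommendation above can be illustrated with a small sketch that flags user IDs appearing in more than one identity provider. This is a conceptual model only; the provider and user names are placeholders, not part of any SAP API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class IdpUserCheck {
    // Returns each user ID that exists in more than one identity provider,
    // mapped to the providers that contain it.
    static Map<String, List<String>> duplicates(Map<String, List<String>> usersByIdp) {
        Map<String, List<String>> idpsByUser = new HashMap<>();
        usersByIdp.forEach((idp, users) ->
                users.forEach(u -> idpsByUser
                        .computeIfAbsent(u, k -> new ArrayList<>())
                        .add(idp)));
        // keep only user IDs present in two or more providers
        idpsByUser.values().removeIf(idps -> idps.size() < 2);
        return idpsByUser;
    }

    public static void main(String[] args) {
        Map<String, List<String>> users = Map.of(
                "CorporateIdP", List.of("jane", "john"),
                "PartnerIdP", List.of("john", "maria"));
        System.out.println(duplicates(users)); // only "john" is reported
    }
}
```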
Context
In the SAML 2.0 communication, each SAP Cloud Platform account acts as a service provider. For more
information, see Security Assertion Markup Language (SAML) 2.0 protocol specification.
Tip
Each SAP Cloud Platform account is a separate service provider. If you need each of your applications to be
represented by its own service provider, you must create and use a separate account for each application. See
Create Subaccounts Using the Cockpit [page 938].
Note
In this documentation and SAP Cloud Platform user interface, we use the term local service provider to
describe the SAP Cloud Platform account as a service provider in the SAML 2.0 communication.
You need to configure how the local service provider communicates with the identity provider. This includes, for
example, setting a signing key and certificate to verify the service provider’s identity and encrypt data. You can use
the configuration settings described in the table that follows.
Default: The local provider's own trust settings inherit the SAP Cloud Platform default configuration (trust to SAP ID service). Use this for testing and exploring the scenario.
None: The local provider has no trust settings and does not participate in any identity federation scenario. Use this to disable identity federation for your account.
Custom: The local provider has a specific configuration, different from the default configuration for SAP Cloud Platform. Use this for identity federation with a corporate identity provider or an Identity Authentication tenant.
Force authentication: If you set it to Enabled, you enable force authentication for your application (despite SSO, users have to re-authenticate each time they access it). Otherwise, set this option to Disabled.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See SAP Cloud Platform Cockpit [page
900].
Make sure that you have selected the relevant global account to be able to select the right account.
Note
It is recommended to use a URI as the local provider name.
7. In Signing Key and Signing Certificate, paste the Base64-encoded signing key and certificate. You can use a pair generated with the cockpit (using the Generate Key Pair button) or an externally generated one.
Note
Certificates generated using the cockpit are valid for 10 years. If you want your identifying certificate to have a different validity period, generate the key and certificate pair using an external tool, and copy the contents into the Signing Key and Signing Certificate fields respectively in the cockpit.
Note
For more information about how to use an externally generated key and certificate pair, see (Optional) Using External Key and Certificate [page 2164].
8. Choose the required values of the Principal Propagation and Force authentication options.
9. Save the changes.
10. Choose Get Metadata to download the SAML 2.0 metadata describing SAP Cloud Platform as a service
provider. You will have to import this metadata into the IdP to configure trust to SAP Cloud Platform.
If you want to use a signing key and certificate generated with an external tool (such as OpenSSL) for the local service provider, follow these guidelines:
Example
You want to use OpenSSL as a tool for key pair generation.
In the key pair generation command, replace <SAP Cloud Platform host> and <your account name> accordingly. For more information about the SAP Cloud Platform hosts, see Regions [page 21].
Note
If you need the certificate to be signed by a certificate authority (CA), you need to proceed with a few more
steps:
1. Generate a certificate signing request (CSR) by executing the following command in the folder of your
spkey.pem:
OpenSSL will ask you to enter the fields of the CSR. For the Common Name field, we recommend that you
use the following format:
https://<SAP Cloud Platform host>/<your account name>.
As a result, OpenSSL generates one more file in your current folder: spkey.csr (the CSR for your key/
certificate pair).
2. Send the spkey.csr to your CA to get it signed.
The CA returns the signed certificate. You can use that certificate in the steps below.
Convert the private key file spkey.pem into the unencrypted PKCS#8 format using the following command:
openssl pkcs8 -nocrypt -topk8 -inform PEM -outform PEM -in spkey.pem -out spkey.pk8
Now open the file spkey.pk8 in a text editor and copy all contents except for the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- tags into the Signing Key text field in the cockpit. Then open the file spcert.pem in a text editor and copy all contents except for the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- tags into the Signing Certificate text field in the cockpit.
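The manual copy step (stripping the BEGIN/END tags and joining the Base64 body) can be sketched in Java. The class and method names here are ours for illustration, not part of any SAP or OpenSSL API.

```java
import java.util.Arrays;
import java.util.stream.Collectors;

class PemBody {
    // Keep only the Base64 content of a PEM file, dropping the
    // -----BEGIN ...----- and -----END ...----- tag lines.
    static String extract(String pem) {
        return Arrays.stream(pem.split("\\R"))
                .filter(line -> !line.startsWith("-----"))
                .map(String::trim)
                .collect(Collectors.joining());
    }

    public static void main(String[] args) {
        String pem = "-----BEGIN PRIVATE KEY-----\n"
                + "MIIBVAIBADAN\nBgkqhkiG9w0B\n"
                + "-----END PRIVATE KEY-----\n";
        System.out.println(extract(pem)); // MIIBVAIBADANBgkqhkiG9w0B
    }
}
```

The resulting single-line Base64 string is what goes into the Signing Key or Signing Certificate field.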
After clicking Save, you should get a message that you can proceed with configuring your trusted identity provider settings.
Context
Note
To benefit from fully featured identity federation with SAML identity providers, you need to have chosen the Custom configuration type in the Local Service Provider section.
With the Default configuration type, you have non-editable trust to SAP ID service as the default identity provider. You can add other identity providers, but they can be used for IdP-initiated single sign-on (SSO) only.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the required SAP Cloud Platform subaccount. See Navigate to
Global Accounts and Subaccounts [page 964].
Assertion Consumer Service: The SAP Cloud Platform endpoint type (application root or assertion consumer service). The IdP sends the SAML assertion to that endpoint.
Single Sign-on URL: The IdP's endpoint (URL) to which the SP's authentication request is sent.
Single Sign-on Binding: The SAML-specified HTTP binding used by the SP to send the authentication request.
Single Logout URL: The IdP's endpoint (URL) to which the SP's logout request is sent.
Note
If no single logout (SLO) endpoint is specified, no request is sent to the IdP SLO endpoint, and only the local session is invalidated.
Signing Certificate: The X.509 certificate used by the IdP to digitally sign the SAML protocol messages.
User ID Source: The location in the SAML assertion from which the user's unique name (ID) is taken when logging into the cloud. If you choose subject, the ID is taken from the name identifier in the assertion's subject (<saml:Subject>) element. If you choose attribute, the user's name is taken from a SAML attribute in the assertion.
Source Value: The name of the SAML attribute that defines the user ID on the cloud.
Note
If nothing else is specified, the default IdP is used for authentication. Alternatively, you can use a different IdP using a URL parameter. See Using an IdP Different from the Default [page 2172].
Only for IDP-initiated SSO: If this checkbox is marked, this identity provider can be used only for IdP-initiated single sign-on scenarios. The applications deployed at SAP Cloud Platform cannot use it for user authentication from their login pages, for example. Only users coming from links to the application at the IdP side are able to authenticate.
Note
This checkbox is always marked if you have selected the Default configuration type in the Local Service Provider section.
5. In the Attributes tab, configure the user attribute mappings for this identity provider.
User attributes can contain any other information in addition to the user ID.
Default attributes are user attributes that all users logged by this IdP will have. For example, if we know that
"My IdP" is used to authenticate users from MyCompany, we can set a default user attribute for that IdP
"company=MyCompany".
Assertion-based attributes define a mapping between user attributes sent by the identity provider (in the
SAML assertion) and user attributes consumed by applications on SAP Cloud Platform (principal attributes).
This allows you to easily map the user information sent by the IdP to the format required by your application
without having to change your application code. For example, the IdP sends the first name and last name user
information in attributes named first_name and last_name. You, on the other hand, have a cloud
application that retrieves user attributes named firstName and lastName. You need to define the relevant
mapping in the Assertion-Based Attributes section so the application uses the information from that identity
provider properly.
Note
○ There are no default mappings of assertion attributes to user attributes. You need to define those if you
need them.
○ The attributes are case sensitive.
○ You can specify that all assertion attributes will be mapped to the corresponding principal attributes
without a change, by specifying mapping * to *.
○ SAML assertions larger than 25K are not supported.
In the screenshot above, all users authenticated by this IdP will have an attribute
organization="MOKMunicipality" and type="Government". In addition, several attributes (corresponding to first
name, last name and e-mail) from the SAML assertion will also be added to authenticated users. Note that
those attribute names provided in the assertion by the IdP are different from the principal attributes, which are
the attributes used by the cloud applications.
For more information about using user attributes in your application, see Authentication [page 2122].
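As a conceptual sketch only (not an SAP API; all names are illustrative), the assertion-based attribute mapping described above, including the catch-all * to * rule and the fact that unmapped attributes have no default mapping, could be modeled like this:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

class AttributeMapper {
    // Translate IdP assertion attributes into principal attributes using
    // a mapping table (e.g. first_name -> firstName). A "*" entry passes
    // every attribute through unchanged.
    static Map<String, String> map(Map<String, String> assertionAttrs,
                                   Map<String, String> mapping) {
        Map<String, String> principalAttrs = new LinkedHashMap<>();
        for (Map.Entry<String, String> attr : assertionAttrs.entrySet()) {
            if (mapping.containsKey("*")) {
                principalAttrs.put(attr.getKey(), attr.getValue()); // * to *
            } else if (mapping.containsKey(attr.getKey())) {
                principalAttrs.put(mapping.get(attr.getKey()), attr.getValue());
            } // no default mappings: unmapped attributes are dropped
        }
        return principalAttrs;
    }

    public static void main(String[] args) {
        Map<String, String> mapping = new HashMap<>();
        mapping.put("first_name", "firstName");
        mapping.put("last_name", "lastName");
        Map<String, String> assertion = new HashMap<>();
        assertion.put("first_name", "Jane");
        assertion.put("last_name", "Doe");
        System.out.println(map(assertion, mapping));
    }
}
```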
6. In the Groups tab, configure the groups associated with this IdP's users.
Groups that you define on the cloud are later mapped to Java EE application roles. As specified in Java EE, in
the web.xml, you define the roles authorized to access a protected resource in your application. You therefore
define the groups that exist there and the roles to which each group is mapped via the Groups tab in the
cockpit. For each different IdP, you then define a set of rules specifying to which groups a user logged by this
IdP belongs.
For more information about configuring groups, see Managing Groups and Roles [page 2151].
Note
You must have defined groups in advance before you define default or assertion-based groups for this IdP.
Assertion-based groups are groups determined by values of attributes in the SAML 2.0 assertion. For example,
if the assertion contains the attribute "contract=temporary", you may want all such users to be added to
the group "TEMPORARY".
1. On the GROUPS tab page, choose Add Assertion-Based Group. A new row appears and a new mapping rule
is now being created.
2. Enter the name of the group to which users will be mapped. Then define the rule for this mapping.
3. In the first field of the Mapping Rules section, enter the SAML 2.0 assertion attribute name to be used as
the mapping source. In other words, the value of this attribute will be compared with the value you specify
(in the last field of Mapping Rules).
4. Choose the comparison operator.
○ Choose Equals if you want the value of the SAML 2.0 assertion attribute to match exactly the string
you specify. Note that if you want to use more sophisticated relations, such as "starts with" or
"contains", you need to use the Regular expression option.
○ Choose Regular expression if you want to specify more sophisticated matching rules. You can use all
regular expression rules described in the Java RegEx API .
Example 1: You want to add authenticated SAP employees to group Employees. And SAP employees
are users with e-mail address ending with sap.com. Hence, you choose the mapping rule to be email,
matched using the following regular expression:
.*@sap.com$
Example 2: You want all users with name starting with admin to be added to group Administrators.
Hence, you choose the mapping rule to be userid, matched using the following regular expression:
^(admin).*
5. In the last field of Mapping Rules, enter the value with which you compare the specified SAML 2.0
assertion attribute.
6. You can specify more than one mapping rule for a specific group. Use the plus button to add as many rules as required. In this case, mapping is based on a logical OR operation, that is, if any one of your rules applies, the user is added to the group.
All users from the ITSupport department (of organization MOKMunicipality) and the user with e-mail
admin@mokmunicipality.org are added to group MOKMunicipalityAdmins for this subaccount. The rest of the
employees at MOKMunicipality (having an e-mail address in the mokmunicipality.org domain) are assigned to
group Government.
You can see the group assignments visualized in the graphic below.
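The two regular-expression rules from the examples above can be checked with standard Java regex. This is a sketch of the rule evaluation, not the platform's implementation; the attribute names follow the examples.

```java
import java.util.Map;
import java.util.regex.Pattern;

class GroupRules {
    // A rule matches when the named SAML attribute exists and its value
    // matches the rule's regular expression completely.
    static boolean matches(Map<String, String> attrs,
                           String attrName, String regex) {
        String value = attrs.get(attrName);
        return value != null && Pattern.matches(regex, value);
    }

    public static void main(String[] args) {
        Map<String, String> user = Map.of(
                "email", "jane.doe@sap.com",
                "userid", "adminjane");
        // Example 1: SAP employees -> group Employees
        System.out.println(matches(user, "email", ".*@sap.com$"));  // true
        // Example 2: names starting with admin -> group Administrators
        System.out.println(matches(user, "userid", "^(admin).*"));  // true
    }
}
```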
Context
You can define more than one identity provider for your account. There is always the default IdP. Initially, SAP ID
service is the default IdP but you can change that after you add another IdP.
If you want to use an IdP different from the default one, request your application with the saml2idp request parameter set to the desired IdP name.
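A hypothetical illustration of building such a URL (the host and IdP name below are placeholder values, not taken from this document; only the saml2idp parameter name comes from the text above):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

class IdpUrl {
    // Append the saml2idp parameter, URL-encoding the IdP name.
    static String withIdp(String appUrl, String idpName) {
        return appUrl + "?saml2idp=" + URLEncoder.encode(idpName, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // placeholder application host and IdP name
        System.out.println(withIdp("https://myapp.hana.ondemand.com/", "corporate-idp"));
        // https://myapp.hana.ondemand.com/?saml2idp=corporate-idp
    }
}
```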
You can register a tenant for Identity Authentication service as an identity provider for your subaccount.
Prerequisites
● You have defined service provider settings for the SAP Cloud Platform subaccount. See Configure SAP Cloud
Platform as a Local Service Provider [page 2162].
● You have chosen a custom local provider configuration type for this subaccount (using Cockpit > Trust > Local Service Provider > Configuration Type > Custom).
Context
Identity Authentication service provides identity management for SAP Cloud Platform applications. You can
register a tenant for Identity Authentication service as an identity provider for the applications in your SAP Cloud
Platform subaccount.
Note
If you add a tenant for Identity Authentication service already configured for trust with the same service
provider name, the existing trust configuration on the tenant for Identity Authentication service side will be
updated. If you add a tenant for Identity Authentication configured for trust with SAP Cloud Platform with a
different service provider name, a new trust configuration will be created on the tenant for Identity
Authentication service side.
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the required SAP Cloud Platform subaccount. See Navigate to
Global Accounts and Subaccounts [page 964].
○ You have a tenant for Identity Authentication service registered for your current SAP customer user (s-user). You want to add the tenant as an identity provider.
1. Click Add Identity Authentication Tenant.
2. Choose the required Identity Authentication tenant and save the changes.
In this case, the trust will be established automatically upon registration on both the SAP Cloud Platform
and the tenant for Identity Authentication service side. See Getting Started with Identity Authentication
○ You want to add a tenant for Identity Authentication service not related to your SAP user.
In this case, you need to register the tenant for Identity Authentication service as any other type of identity
provider. This means you need to set up trust settings on both the SAP Cloud Platform and the Identity
Authentication tenant side. See Integration.
The tenant for Identity Authentication appears in the list of SAML identity providers. You can now further administer the Identity Authentication tenant by opening the Identity Authentication Admin Console (hover over the registered tenant for Identity Authentication and click Identity Authentication Admin Console). You can manage the registered tenant for Identity Authentication as any other registered identity provider.
Note
It will take about 2 minutes for the trust configuration with the tenant for Identity Authentication to become
active.
Note
Each SAP Cloud Platform subaccount is a separate service provider in the tenant for Identity Authentication.
Tip
If you need each of your SAP Cloud Platform applications to be represented by its own service provider, you
must create and use a separate subaccount for each application. See Create Subaccounts Using the Cockpit
[page 938].
Related Information
If you already have an existing on-premise system with a populated user store, you can configure SAP Cloud
Platform applications to use that on-premise user store. This approach is similar to implementing identity
federation with a corporate identity provider. In that way, applications do not need to keep the whole user
database, but request the necessary information from the on-premise system.
Context
● SAP Single Sign-On with an SAP NetWeaver Application Server for Java system - the applications on SAP Cloud Platform connect to the SAP on-premise system using the Destination API (and, if necessary, SAP HANA Cloud Connector) and make use of the user store there.
● Microsoft Active Directory - an LDAP server that can serve as an on-premise user store. The applications on SAP Cloud Platform connect to the LDAP server using SAP HANA Cloud Connector and make use of the user store there.
Alternatively to the above scenarios, you can implement identity federation with an Identity Authentication tenant, where the tenant is configured to use an on-premise user store. See:
Related Information
Overview
You can configure applications running on SAP Cloud Platform to use a user store of an SAP NetWeaver (7.2 or higher) Application Server for Java system and an SAP Single Sign-On system. That way, SAP Cloud Platform does not need to keep the whole user database, but requests the necessary information from an on-premise system.
When deploying the application, you have to set system properties of the application VM. For more information,
see Configure VM Arguments [page 1702].
Note
The WAR file that you are using as a source during the deployment has to be protected declaratively or
programmatically. For more information, see Authentication [page 2122].
Example
Note
The VM arguments passed using this command will have effect only until you re-deploy the application.
Context
The on-premise system is an AS Java with a deployed SCA from SAP Single Sign-On (SSO) 2.0. For the
configuration of the on-premise AS Java system, proceed as follows:
Procedure
For more information about the role assignment process, see Assigning Principals to Roles or Groups.
2. If necessary, set the policy configuration to use the appropriate authentication method.
For more information about the policy configuration, see Editing the Authentication Policy of AS Java
Components.
3. If your user does not exist in the on-premise system, create a technical user.
For the proper communication with the on-premise AS Java system, you need to configure the destination of the
Java application on SAP Cloud Platform. For more information, see Configure Destinations from the Cockpit [page
108].
You have to set the following properties for the destination of the cloud application:
URL: The URL to the on-premise AS Java system. Use https://<AS Java Host>:<AS Java HTTPS Port>/scim/v1/ if the system is exposed via reverse proxy, or http://<Virtual host configured in Cloud Connector>:<virtual Port>/scim/v1/ (the virtual URL configured in Cloud Connector) if the on-premise system is exposed via SAP HANA Cloud Connector. In the latter case, the configured protocol should be http, because the connectivity service uses secure tunneling to the on-premise system.
You can use Microsoft Active Directory as an on-premise LDAP server providing a user store for your SAP Cloud
Platform applications.
Prerequisites
Context
When deploying the application, you have to set system properties of the application VM. For more information,
see Configure VM Arguments [page 1702].
Note
The WAR file that you are using as a source during the deployment has to be protected declaratively or
programmatically. For more information, see Authentication [page 2122].
Note
The VM arguments passed using this command will have effect only until you re-deploy the application.
Create the required destination and configure SAP HANA Cloud Connector as described in Configure the User Store [page 343].
This is an optional procedure that you can perform to configure the authentication methods used in a cloud
application. You can configure the behavior of standard Java EE authentication methods, or define custom ones,
based on custom combinations of login options.
Prerequisites
● You have an application with authentication defined in its web.xml or source code. See Authentication [page
2122] .
Context
The following table describes the available login options. In the default authentication configuration, they are pre-
assigned to standard Java EE authentication methods. If you want to change this, you need to create a custom
configuration.
For each authentication method, you can select a custom combination of options. You may need to select more
than one option if you want to enable more than one way for users to authenticate for this application.
If you select more than one option, SAP Cloud Platform will delegate authentication to the relevant login modules
consecutively in a stack. When a login module succeeds to authenticate the user, authentication ends with
success. If no login module succeeds, authentication fails.
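The first-success delegation described above can be sketched as follows. This is a conceptual model of the stack behavior, not the actual login module implementation; the module predicates are illustrative.

```java
import java.util.List;
import java.util.function.Predicate;

class LoginStack {
    // Try each configured login option in order: authentication succeeds
    // as soon as one module accepts the credential, and fails if none does.
    static boolean authenticate(List<Predicate<String>> modules, String credential) {
        for (Predicate<String> module : modules) {
            if (module.test(credential)) {
                return true;  // first successful module ends the stack
            }
        }
        return false;  // no module succeeded
    }

    public static void main(String[] args) {
        // placeholder modules standing in for SAML, BASIC, etc.
        Predicate<String> saml = c -> c.startsWith("SAML:");
        Predicate<String> basic = c -> c.startsWith("Basic:");
        List<Predicate<String>> stack = List.of(saml, basic);
        System.out.println(authenticate(stack, "Basic:alice:secret")); // true
        System.out.println(authenticate(stack, "Bearer:token"));       // false
    }
}
```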
Trusted SAML 2.0 identity provider: Authentication is implemented over the Security Assertion Markup Language (SAML) 2.0 protocol, and delegated to SAP ID service or a custom identity provider (IdP). The credentials users need to present depend on the IdP settings. See Application Identity Provider [page 2161].
User name and password: HTTP BASIC authentication with user name and password. The user name and password are validated either by SAP ID service (default) or by an on-premise SAP NetWeaver AS Java. See Using an SAP System as an On-Premise User Store [page 2175].
Note
If you want to use your Identity Authentication tenant for BASIC authentication (instead of SAP ID service/SAP NetWeaver), create a customer ticket in component BC-NEO-SEC-IAM. In the ticket, specify the Identity Authentication tenant you want to use.
Client certificate: Users authenticate with a client certificate installed in an on-premise SAP NetWeaver Application Server for Java system. See Enabling Client Certificate Authentication [page 2245].
Application-to-Application SSO: Used for AppToAppSSO destinations. See Application-to-Application SSO Authentication [page 141].
Note
When you select Trusted SAML 2.0 identity provider, Application-to-Application SSO becomes enabled automatically.
OAuth 2.0 token: Authentication is implemented over the OAuth 2.0 protocol. Users need to present an OAuth access token as credential. See OAuth 2.0 Authorization Code Grant [page 2208].
Procedure
1. In the SAP Cloud Platform cockpit, navigate to the required SAP Cloud Platform subaccount. See Navigate to
Global Accounts and Subaccounts [page 964].
Example
You have a Web application that users access using a Web browser. You want users to log in using a SAML
identity provider. Hence, you define the FORM authentication method in the web.xml of the application.
Related Information
Use this tutorial to enable an application in your subaccount to access another application without user login (and
user interaction) in the second application. For this scenario to work, the second application needs to propagate its
logged-in user to the first application using an AppToAppSSO destination.
Contents
Prerequisites
● You have an account with Administrator role in both SAP Cloud Platform subaccounts. See Managing Member
Authorizations [page 1671].
● You have deployed both applications on SAP Cloud Platform. See Deploying and Updating Applications [page
1175].
● You have a custom local service provider configuration in both subaccounts (this means in the cloud cockpit, under Security > Trust > Local Service Provider, you have chosen Configuration Type Custom). See Configure SAP Cloud Platform as a Local Service Provider [page 2162].
1. Get the Local Provider Name and the Signing Certificate from the first subaccount.
a. In SAP Cloud Platform cockpit, choose the first subaccount. See Navigate to Global Accounts and
Subaccounts [page 964]
b. Navigate to Security > Trust > Local Service Provider.
c. Save into a file the values of Local Provider Name and Signing Certificate.
d. Make sure the value of Principal Propagation is set to Enabled.
2. Create trust on the second subaccount.
a. In the SAP Cloud Platform cockpit, choose the second subaccount and navigate to the Trust tab.
b. On the Application Identity Provider tab, choose Add Trusted Identity Provider. Provide the following
information:
Field Description
Name The Local Provider Name of the first subaccount, which you copied in step 1.
Signing Certificate The Signing Certificate of the first subaccount, which you copied in step 1.
c. If it is not automatically checked, select the checkbox Only for IDP-Initiated SSO.
d. Save the changes.
Context
Connect the first subaccount to the second subaccount by describing the source connection properties in a destination. For more information, see Modeling Destinations [page 1396].
Procedure
Field Description
Name: Technical name of the destination. It can be used later on to get an instance of that destination. It should be unique for the current application.
Note
The name can contain only alphanumeric characters, underscores, and dashes. The maximum length is 200 characters.
URL: The URL of the protected resource that you want to access (the first application). See Configuring Application URLs [page 1743].
Example: https://myappmysubaccount.hana.ondemand.com/
Authentication: AppToAppSSO
4. Choose the New Property button. In the fields that appear, enter saml2_audience as the property name and the Local Provider Name of the second subaccount as its value.
5. Save the changes.
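The naming constraint stated in the table above (alphanumeric characters, underscores, dashes, at most 200 characters) can be expressed as a simple validation sketch. The helper class is hypothetical, not part of the platform.

```java
class DestinationName {
    // Valid destination names: only alphanumerics, underscores, and
    // dashes, between 1 and 200 characters long.
    static boolean isValid(String name) {
        return name != null && name.matches("[A-Za-z0-9_-]{1,200}");
    }

    public static void main(String[] args) {
        System.out.println(isValid("my-app_destination1")); // true
        System.out.println(isValid("bad name!"));           // false
    }
}
```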
Results
Using application-to-application communication you can now propagate the logged-in user of the second
application.
Related Information
● You have a user account with Administrator role in both SAP Cloud Platform subaccounts. See Managing
Member Authorizations [page 1671].
● You have a custom local service provider configuration (signing keys and certificates, etc.) in your subaccount
in the Neo environment. See Configure SAP Cloud Platform as a Local Service Provider [page 2162].
● Both accounts have a trust configuration to the same Identity Authentication tenant. See:
○ Identity Authentication Tenant as an Application Identity Provider [page 2172] (for the Neo environment)
○ Establish Trust and Federation with UAA Using SAP Cloud Platform Identity Authentication Service [page
2060] (for the Cloud Foundry environment)
● You have developed and deployed both applications, each in the corresponding subaccount.
Note
All configuration steps described in this tutorial are done using the cloud cockpit.
In the source code, the application needs to reference the destination that we will create in a later step. The sample source code below illustrates a complete servlet working with the destination named pptest.
package com.sap.cloud.samples;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.security.KeyStore;
import java.util.List;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.authentication.AuthenticationHeader;
import com.sap.core.connectivity.api.authentication.AuthenticationHeaderProvider;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
import com.sap.core.connectivity.api.configuration.DestinationConfiguration;

@WebServlet("/neotocf")
public class NeoToCF extends HttpServlet {

    private static final long serialVersionUID = 1L;
    private static final Logger LOGGER = LoggerFactory.getLogger(NeoToCF.class);

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        try {
            Context ctx = new InitialContext();

            // look up the configuration of the destination "pptest"
            // (the JNDI names correspond to the resource references declared in web.xml)
            ConnectivityConfiguration configuration = (ConnectivityConfiguration) ctx
                    .lookup("java:comp/env/connectivityConfiguration");
            DestinationConfiguration destConfiguration = configuration.getConfiguration("pptest");

            AuthenticationHeaderProvider authHeaderProvider = (AuthenticationHeaderProvider) ctx
                    .lookup("java:comp/env/myAuthHeaderProvider");

            // retrieve the authorization headers for OAuth SAML Bearer principal propagation
            List<AuthenticationHeader> samlBearerHeader = authHeaderProvider
                    .getOAuth2SAMLBearerAssertionHeaders(destConfiguration);
            LOGGER.debug("JWT token from CF XSUAA: " + samlBearerHeader.get(1).getValue());

            // create an SSL context that uses the JVM's default trust store
            TrustManagerFactory tmf = TrustManagerFactory
                    .getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init((KeyStore) null);
            SSLContext sslContext = SSLContext.getInstance("TLS");
            sslContext.init(null, tmf.getTrustManagers(), null);
            SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory();

            // open a connection to the URL configured in the destination
            // and pass the principal propagation headers
            URL url = new URL(destConfiguration.getProperty("URL"));
            HttpsURLConnection urlConnection = (HttpsURLConnection) url.openConnection();
            urlConnection.setSSLSocketFactory(sslSocketFactory);
            urlConnection.setRequestProperty(samlBearerHeader.get(0).getName(),
                    samlBearerHeader.get(0).getValue());
            urlConnection.setRequestProperty(samlBearerHeader.get(1).getName(),
                    samlBearerHeader.get(1).getValue());
            urlConnection.connect();

            // forward the response body of the Cloud Foundry application
            response.getWriter().println("Received from CF:");
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(urlConnection.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    response.getWriter().println(line);
                }
            }
        } catch (Exception e) {
            LOGGER.error("Error while calling the application in the Cloud Foundry environment", e);
            throw new ServletException(e);
        }
    }
}
In the Cloud Foundry environment, you need an application that follows the XSA security model (protected with SAML, with a UAA service binding using JWT tokens, and roles configured in the xs-security.json).
See:
Note
You can use the XSA Security Sample Application in GitHub (instructions and code) to develop and deploy an
application compliant with the above requirements.
Prerequisites
Before you create the required destination, you need to note down a few properties that will be used as values in
the destination settings.
1. In the cloud cockpit, navigate to the subaccount in the Cloud Foundry environment.
2. Navigate to the application router.
3. Enter the Environment Variables section.
4. Note down somewhere the values of the following properties:
○ clientid
○ clientsecret
○ url
Connect the first subaccount to the second subaccount by describing the source connection properties in a
destination. For more information see Modeling Destinations [page 1396].
Procedure
Field Description
Name: Technical name of the destination. It can be used later on to get an instance of that destination. It must be unique for the global account.
Note
For the purposes of the example listed in this document, use pptest as value.
URL: The URL of the protected resource in the Cloud Foundry environment. See Configuring Application URLs [page 1743].
Example: https://<tenant-specific-route-for-your-business-app>.cfapps.eu10.hana.ondemand.com/
Authentication: OAuth2SAMLBearerAssertion
See Regions and API Endpoints Available for the Cloud Foundry Environment [page 22].
Client Key: Open the application in the Cloud Foundry environment. Navigate to Environment Variables. Copy the value of the clientid property here.
Token Service User: Open the application in the Cloud Foundry environment. Navigate to Environment Variables. Copy the value of the clientid property as Token Service User in the destination.
Token Service Password: Open the application in the Cloud Foundry environment. Navigate to Environment Variables. Copy the value of the clientsecret property here.
Procedure
After this procedure, you can propagate the security context from the application in the Neo environment to the application in the Cloud Foundry environment. The groups assigned in the Neo environment can be used as role collections in the Cloud Foundry environment.
Enable an application in your subaccount in the Cloud Foundry environment to access an OAuth-protected
application in a subaccount in the Neo environment without user login (and user interaction) in the second
application. For this scenario to work, the two subaccounts need to be in mutual trust, and in trust with the same
identity provider. The first application will propagate its logged-in user to the second application using an
OAuth2SAMLBearer destination.
● You have a user account with Administrator role in both SAP Cloud Platform subaccounts. See Managing
Member Authorizations [page 1671].
● You have a custom local service provider configuration (this means in cloud cockpit Security Trust
Local Service Provider you have chosen Configuration Type Custom ) in your subaccount in the Neo
environment. See Configure SAP Cloud Platform as a Local Service Provider [page 2162].
● Both accounts have a trust configuration to the same identity provider. See:
○ Configure Trust to the SAML Identity Provider [page 2165] (for the Neo environment)
○ Establish Trust with Any SAML 2.0 Identity Provider in a Subaccount [page 2064] (for the Cloud Foundry
environment)
● The application in the Neo environment is protected using OAuth 2.0. See OAuth 2.0 Service [page 2206].
● The application in the Cloud Foundry environment is bound to an instance of the following services:
○ Destination Service. See Create and Bind a Destination Service Instance.
○ xsuaa. See Create a Service Instance from the xsuaa Service [page 2108] and Bind the xsuaa Service
Instance to the Application [page 2110].
● You have deployed both applications, each in the corresponding subaccount.
Note
All configuration steps described in this tutorial are done using the cloud cockpit.
Procedure
Tip
You can view the API endpoint host and subaccount ID from cloud cockpit <your global
account> <your subaccount> <your space> Overview .
○ In the Signing Certificate field, enter the X509 certificate of the Cloud Foundry account.
Note
Make sure you remove the BEGIN CERTIFICATE and END CERTIFICATE parts.
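The stripping of the PEM armor described in the note can be sketched as follows; the certificate body below is a made-up placeholder, not a real certificate:

```java
public class PemToBase64 {
    // Removes the BEGIN/END CERTIFICATE armor lines and all whitespace,
    // leaving only the base64 body expected by the Signing Certificate field
    static String stripPemArmor(String pem) {
        return pem.replace("-----BEGIN CERTIFICATE-----", "")
                  .replace("-----END CERTIFICATE-----", "")
                  .replaceAll("\\s", "");
    }

    public static void main(String[] args) {
        String pem = "-----BEGIN CERTIFICATE-----\n"
                   + "MIIBexampleBase64Body\n"
                   + "-----END CERTIFICATE-----\n";
        System.out.println(stripPemArmor(pem));
    }
}
```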
You need an OAuth client to get an access token for the OAuth-protected resources in the application in the Neo
environment.
Procedure
○ Name - the OAuth client name. You will need to provide this name as value of the Token Service User
property of the destination below.
○ Authorization Grant - choose the Authorization Code option
○ Mark the Confidential option, and provide a secret (password)
4. Save the client.
○ ID
○ Secret
Context
Connect the two subaccounts by describing the connection properties in a destination. For more information see
Modeling Destinations [page 1396].
Procedure
1. Choose the subaccount in the Cloud Foundry environment, and navigate to Connectivity Destinations .
2. Choose New Destination.
3. In the new destination, provide the following information:
Field: Description
Name: Technical name of the destination. It can be used later on to get an instance of that destination. It must be unique for the global account.
Type: HTTP
URL: Example: https://myneoapp.hana.ondemand.com/myprotectedresource/
Authentication: OAuth2SAMLBearerAssertion
Audience: The value of the local service provider name in the subaccount in the Neo environment. Copy the value from cloud cockpit <your Neo subaccount> Security Trust Local Service Provider.
Client Key: The ID of the OAuth client for the application in the Neo environment.
Token Service URL: Copy the value of Token Endpoint from the following place: cloud cockpit <your Neo subaccount> <your application> Security OAuth Branding.
Token Service User: The ID of the OAuth client for the application in the Neo environment.
authnContextClassRef: urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession
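As a flat summary, the destination from the table above might be captured like the following sketch; every bracketed value is an illustrative placeholder to be replaced with your own values, not a value confirmed by this guide:

```
Name=pptest
Type=HTTP
URL=https://myneoapp.hana.ondemand.com/myprotectedresource/
Authentication=OAuth2SAMLBearerAssertion
Audience=<local service provider name of the Neo subaccount>
Client Key=<OAuth client ID>
Token Service URL=<token endpoint from Security OAuth Branding>
Token Service User=<OAuth client ID>
Token Service Password=<OAuth client secret>
authnContextClassRef=urn:oasis:names:tc:SAML:2.0:ac:classes:PreviousSession
```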
The security guide provides an overview of the security-relevant information that applies to HTML5 applications.
Related Information
6.1.2.2.1 Authentication
SAP Cloud Platform uses the Security Assertion Markup Language (SAML) 2.0 protocol for authentication and
single sign-on.
By default, the SAP Cloud Platform is configured to use the SAP ID service as identity provider (IdP), as specified
in SAML 2.0. You can configure a trust relationship to your custom IdP to provide access to the cloud using your
own user database. For information, see Application Identity Provider [page 2161].
HTML5 applications are protected with SAML2 authentication by default. For publicly accessible applications, the
authentication can be switched off. For information about how to switch off authentication, see Authentication
[page 1271].
Permissions for an HTML5 application are defined in the application descriptor file. For more information about
how to define permissions for an HTML5 application, see Authorization [page 1272].
Permissions defined in the application descriptor are only effective for the active application version. To protect
non-active application versions, the default permission NonActiveApplicationPermission is defined by the
system for every HTML5 application.
To assign users to a permission of an HTML5 application, a role must be assigned to the corresponding
permission. As a result, all users who are assigned to the role get the corresponding permission. Roles are not
application-specific but can be reused across multiple HTML5 applications. For more information about creating
roles and assigning roles to permissions, see Managing Roles and Permissions [page 1737].
Note
HTML5 application permissions can only protect the access to the REST service through the HTML5
application. If the REST service is otherwise accessible on the Internet or a corporate network, it must
implement its own authentication and authorization concept.
To access a system that is running in an on-premise network, you can set up an SSL tunnel from your on-premise
network to the SAP Cloud Platform using the SAP Cloud Platform Cloud Connector.
For more information about setting up the Cloud connector, see the Cloud Connector Operator's Guide.
Related Information
Cross-site scripting (XSS) is one of the most common types of malicious attacks on web applications.
If an HTML5 application is connected to a REST service, the corresponding REST service must take measures to
protect the application against this type of vulnerability. For REST services implemented on the SAP Cloud
Platform a common output encoding library may be used to protect applications. For more information about XSS
protection on the SAP Cloud Platform, see Protection from Cross-Site Scripting (XSS) [page 2259].
Cross-Site Request Forgery (CSRF) is another common type of attack on web applications.
If an application connects to a REST service, the corresponding REST service must take measures to protect
against CSRF. For REST services implemented on the SAP Cloud Platform a CSRF prevention filter may be used in
the corresponding REST service. For more information about CSRF protection on the SAP Cloud Platform, see
Protection from Cross-Site Request Forgery [page 2262].
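For illustration only, a CSRF prevention filter is wired into web.xml like any other servlet filter. The filter class below is Tomcat's stock CsrfPreventionFilter, used here as a stand-in assumption; the SAP-provided filter mentioned above may have a different class name and configuration:

```xml
<!-- Hypothetical wiring: Tomcat's CsrfPreventionFilter as a stand-in -->
<filter>
    <filter-name>CsrfPreventionFilter</filter-name>
    <filter-class>org.apache.catalina.filters.CsrfPreventionFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>CsrfPreventionFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```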
Related Information
In this section, you can find information relevant for securing SAP HANA applications running on SAP Cloud
Platform.
Security Information
Info Type: See
General security concepts for SAP HANA applications: SAP HANA Security Guide
Specific security concepts for SAP HANA applications running on SAP Cloud Platform: Configure SAML 2.0 Authentication [page 1254]
Setting up SAML authentication for SAP HANA XS applications: How to Set Up SAML Authentication For Your SAP Cloud Platform Trial Instance
The platform identity provider is the user base for access to SAP Cloud Platform account and tools (cockpit,
console client, Eclipse tools, and others). The default user base is provided by SAP ID Service. You can switch to an
Identity Authentication tenant.
Overview
By default, the cockpit and console client are configured to use SAP ID Service as an identity provider for user
authentication. SAP ID Service, however, uses the SAP user base and default tenant settings. If you want to use
your own user base and tenant settings, you can switch to an Identity Authentication tenant.
Note
Changing the platform identity provider settings does not affect the application identity provider settings in
this subaccount.
Prerequisites
● You have a user with Administrator role for your subaccount (provided by the default user base, SAP ID
Service).
● You have enabled the Platform Identity Provider service. See Using Services in the Neo Environment [page
1119].
● You have an Identity Authentication tenant configured. See Identity Authentication documentation.
Procedure
1. Log in to the SAP Cloud Platform cockpit with the Administrator user from the default user base.
2. Navigate to the required SAP Cloud Platform subaccount. See Navigate to Global Accounts and Subaccounts
[page 964].
Prerequisites
You are the global account administrator from the default user base, SAP ID Service.
Context
Now that you have switched the user base, you need to add existing users from the Identity Authentication
tenant as global account members. These global account members will then have the authorization to add more
global account members and configure entitlements for the subaccount.
Procedure
1. Go to the Members tab in the cockpit. You can see all cockpit users, with their IDs, roles, and user base, listed here.
2. Choose Add Members.
3. In the User Base dropdown list, choose the custom user base (Identity Authentication tenant).
If the name does not appear, choose Other and enter the name of the custom user base with .com suffix.
4. Enter the required custom user IDs.
Context
Now that you have switched the user base, you need to add existing users from the Identity Authentication
tenant as subaccount members.
Go to the Members tab in the cockpit. You can see all cockpit users, with their IDs, roles and user base, listed here.
To add a new member, choose Add Members and configure the member users from the respective user base
(Identity Authentication tenant). See also Add Members to Subaccounts [page 965].
Note
The account members for access to this subaccount from the console client must have Administrator role.
Context
You can configure the Identity Authentication tenant for specific authentication scenarios using its Administration
Console UI.
To do so, choose the Administration Console button next to the registered tenant in the Security Trust
Platform Identity Provider section of the cloud cockpit.
In the tenant's Administration Console you will notice it displays the SAP Cloud Platform cockpit as a registered
application. The application has <Identity Authentication tenant ID> as display name, and https://
account.hana.ondemand.com/<account name>/admin as SP name.
Context
If you open the default cockpit URL (see SAP Cloud Platform Cockpit [page 900]), SAP ID Service will be used for
user authentication.
To request the cockpit using the Identity Authentication tenant user base, use the following URL:
For the SAP Cloud Platform host, see Regions [page 21].
Tip
Make sure you use the subaccount name, not the subaccount display name, which could be different. Check
the value of the subaccount name in the subaccount overview section in the cloud cockpit.
Note
● You can see only those subaccounts that are in the region of the tenant cockpit URL.
● If you want to use risk-based authentication, for example, to enable two-factor authentication (TFA), you
have to enable it for all subaccounts in your global account. This means for each subaccount you need to
configure the platform identity provider to be an Identity Authentication tenant configured properly for risk-
based authentication.
For more information about TFA in your Identity Authentication tenant, see Two-Factor Authentication.
Procedure
1. In an incognito browser window, open the tenant cockpit URL. This is required to make sure you are not logged
in with the SAP ID Service user.
2. Log in with a user name and password from the Identity Authentication tenant.
Context
Note
If you want to use the console client with the default user base of SAP ID Service, you need to remove the
custom platform identity provider configuration you created.
After setting up the trust and assigning the members, you must authenticate with a user from your custom
Identity Authentication tenant. For example, if you want to execute the list-schemas command, you can either
provide the login id or email address of your user in the Identity Authentication tenant as follows:
If you have enabled two-factor authentication (TFA) in your Identity Authentication tenant, you can enter the 6-
digit passcode after the user’s password when the console client prompts you for password.
For more information about two-factor authentication in your Identity Authentication tenant, see Two-Factor
Authentication.
Use the OAuth 2.0 Service to protect applications in the Neo environment using the OAuth 2.0 protocol.
OAuth 2.0 is a widely adopted security protocol for protection of resources over the Internet. It is used by many
social network providers and by corporate networks. It allows an application to request authentication on behalf of
users with third-party user accounts, without users having to share their credentials with the application. SAP Cloud
Platform provides an API for developing OAuth-protected applications. You can configure the required scopes and
clients using the Cockpit.
The following graphic illustrates protecting applications with OAuth on SAP Cloud Platform.
● Authorization code grant - there is a human user who authorizes a mobile application to access resources on
his or her behalf. See OAuth 2.0 Authorization Code Grant [page 2208]
● Client credentials grant - there is no human user but a device instead. In such a case, the access token is
granted on the basis of client credentials only. See OAuth 2.0 Client Credentials Grant [page 2214]
Related Information
SAP Cloud Platform supports the OAuth 2.0 protocol as a reliable way to protect application resources. The
current document describes the specifics of implementing an OAuth-protected application (resource server) for
SAP Cloud Platform.
Overview
OAuth 2.0
OAuth has taken off as a standard way and a best practice for applications and websites to handle authorization.
OAuth defines an open protocol for allowing secure API authorization of desktop, mobile and web applications
through a simple and standard method.
In this way, OAuth mitigates some of the common concerns with authorization scenarios.
The following table shows the roles defined by OAuth, and their respective entities in SAP Cloud Platform:
Authorization server SAP Cloud Platform infrastructure The server that manages the
authentication and authorization of the
different entities involved.
If you want to implement a login based on credentials in the form of an OAuth token, you can do that by using
OAuth as a login method in your application's web.xml. For example:
<login-config>
    <auth-method>OAUTH</auth-method>
</login-config>
In your protected application you can acquire the user ID and attributes as described in Working with User Profile
Attributes [page 2135].
There are two additional user attributes you can use to retrieve token specific information:
Handling Sessions
The Java EE specification requires session support on the client side. Sessions are maintained with a cookie which
the client receives during the authentication and then passes it along to the server on every request. The OAuth
specification, however, does not necessarily require the client to support such a session mechanism. That is, the
support of cookies is not mandatory. On every request, the client passes along to the server only the token instead
of passing cookies. Using the OAuth login module described in the Protecting Resources Declaratively section, you
can implement a user login based on an access token. The login, however, occurs on every request, and thus it
implies the risk of creating too many sessions in the Web container.
To use requests that do not hold a Web container session, use a filter with the proper configuration, as described in
the following example:
<filter>
<display-name>OAuth scope definition for viewing a photo album</display-name>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<filter-class>
com.sap.cloud.security.oauth2.OAuthAuthorizationFilter
</filter-class>
<init-param>
<param-name>scope</param-name>
<param-value>view-photos_upload-photos</param-value>
</init-param>
<init-param>
<param-name>no-session</param-name>
<param-value>true</param-value>
</init-param>
</filter>
One of the ways to enforce scope checks for resources is to declare the resource protection in the web.xml. This is
done by specifying the following elements:
Initial parameters With these, you specify the scope, user principal and HTTP
method:
● scope
● http-method
● user-principal - if set to "yes", you will get the user
ID
● no-session - if you set this to "true", the session will
be destroyed when you finish using the filter. This means
that each time the filter is used, a new session will be
created. Default value: false.
The following example shows a sample web.xml for defining and configuring OAuth resource protection for the
application.
<filter>
<display-name>OAuth scope definition for viewing a photo album</display-name>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<filter-class>
com.sap.cloud.security.oauth2.OAuthAuthorizationFilter
</filter-class>
<init-param>
<param-name>scope</param-name>
<param-value>view-photos</param-value>
</init-param>
<init-param>
<param-name>http-method</param-name>
<param-value>get post</param-value>
</init-param>
</filter>
In this code snippet you can observe how the PhotoAlbumServlet is mapped to the previously specified OAuth
scope filter:
<filter-mapping>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<servlet-name>PhotoAlbumServlet</servlet-name>
</filter-mapping>
If you would like to use URL pattern instead, simply specify the pattern that should apply here:
<filter-mapping>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<url-pattern>/photos/*.jpg</url-pattern>
</filter-mapping>
For more information regarding possible mappings, see the filter-mapping element specification.
Alternatively to the declarative approach with the web.xml (described above), you can use the OAUTH login
module programmatically. For more information, see Programmatic Authentication [page 2127].
When a resource protected by OAuth is requested, your application must pass the access token using the HTTP
"Authorization" request header field. The value of this header must be the token type and access token value. The
currently supported token type is "bearer".
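As a minimal sketch of what the paragraph above describes, the header value is the token type followed by the token. The token value (and the commented-out URL) are made-up placeholders:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class BearerTokenExample {
    // Builds the Authorization header value for the "bearer" token type
    static String authorizationHeader(String accessToken) {
        return "Bearer " + accessToken;
    }

    public static void main(String[] args) {
        String accessToken = "0a1b2c3d"; // hypothetical access token value
        // In a real call the header is set on the connection, e.g.:
        // HttpURLConnection conn = (HttpURLConnection)
        //     new URL("https://myapp.hana.ondemand.com/photos").openConnection();
        // conn.setRequestProperty("Authorization", authorizationHeader(accessToken));
        System.out.println(authorizationHeader(accessToken));
    }
}
```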
When the protected resource access check is performed, the filter calls the API, and the API calls the authorization
server to check the validity of the access token and retrieve the token's scopes.
The result handling between the authorization server and the resource server, between the resource server and
the API, and between the resource server and the filter is as follows: if user-principal=true,
request.getUserPrincipal().getName() returns the user_id; on failure, the response carries a reason such as
reason = "access_forbidden" or reason = "missing_access_token".
Next Steps
1. You can now deploy the application on SAP Cloud Platform. For more information, see Deploying and Updating
Applications [page 1175]
2. After you deploy, you need to configure clients and scopes for the application. For more information, see
OAuth 2.0 Configuration [page 2215].
SAP Cloud Platform supports the client credentials grant flow from the OAuth 2.0 specification. This flow enables
grant of an OAuth access token based on the client credentials only, without user interaction. You can use this flow
for enabling system-to-system communication (with a service user), for example, in device communication in an
Internet of things scenario.
Context
The current procedure is for application developers that need their SAP Cloud Platform applications to be enabled
for OAuth 2.0 client credentials grant.
Procedure
1. Register a new OAuth client of type Confidential. See Register an OAuth Client [page 2215].
2. Using that client, you can get an access token using a REST call to the endpoints shown in cockpit
Security OAuth Branding .
○ Protect your application declaratively with the OAuth login method in the web.xml. See OAuth 2.0
Authorization Code Grant [page 2208].
○ Use the getRemoteUser() method of the HTTP request
(javax.servlet.http.HttpServletRequest) to get the client ID.
The getRemoteUser() method returns the client ID prefixed by oauth_client_ as follows:
oauth_client_<client ID>
Tip
You can use the client ID returned as remote user to assign Java EE roles to clients, and use them for
role-based authorizations. See:
Caution
Having multiple clients with the same case-sensitive name will lead to having the same user ID at
runtime. This could lead to incorrect user role assignments and authorizations.
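The prefix convention above can be used to recover the plain client ID from the getRemoteUser() result. A small sketch (the client name below is a made-up example):

```java
public class OAuthClientId {
    private static final String PREFIX = "oauth_client_";

    // Extracts the client ID from the remote user name, or returns null
    // if the request was not authenticated via client credentials
    static String clientId(String remoteUser) {
        if (remoteUser != null && remoteUser.startsWith(PREFIX)) {
            return remoteUser.substring(PREFIX.length());
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(clientId("oauth_client_myDeviceClient"));
    }
}
```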
Register clients, manage access tokens, configure scopes and perform other OAuth configuration tasks.
Prerequisites
● You have an account with administrator role in SAP Cloud Platform. See Managing Member Authorizations
[page 1671].
● You have developed an OAuth-protected application (resource server). See OAuth 2.0 Authorization Code
Grant [page 2208].
● You have deployed the application on SAP Cloud Platform. See Deploying and Updating Applications [page
1175].
Contents:
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See SAP Cloud Platform Cockpit [page
900].
Field Description
Subscription The application for which you are registering this client. To
be able to register for a particular application, this account
must be subscribed to it. For more information, see Getting
Started with Business Application Subscriptions [page
967]
Note
The client ID must be globally unique within the entire
SAP Cloud Platform.
Confidential If you mark this box, the client ID will be protected with a
password. You will need to supply the password here, and
provide it to the client.
Skip Consent Screen If you mark this option, no end user action will be required
for authorizing this client. Otherwise, the end user will have
to confirm granting the requested authorization.
Redirect URI The application URI to which the authorization server will
redirect the client with the authorization code.
Token Lifetime The token lifetime. This value applies to the access token
and authorization code.
Results
Define scopes for your OAuth-protected application to fine-grain the access rights to it.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See SAP Cloud Platform Cockpit [page
900].
By revoking access tokens, you can immediately withdraw access rights you have previously granted. You may wish
to revoke an access token if, for example, you believe the token has been stolen. You can revoke tokens in two ways:
● The Cockpit - an administrator user may use the Cockpit to revoke tokens on behalf of different end users
● The end user UI - end users may access their own tokens (and no other user's) and revoke the required tokens
using that UI
1. In the Cockpit, choose the Security OAuth section, and go to the Branding tab.
2. Click the End User UI link. The end user UI opens in a new browser window. You can see all access
tokens issued for the current user.
3. Choose the Revoke button for the tokens to revoke.
Use a QR code for easier copying of the OAuth authorization code on mobile devices.
Context
When your account is configured for trust with a corporate identity provider (IdP), it is often impossible to connect
to the IdP directly using a personal mobile device. The corporate IdP is often part of a protected corporate
network, which does not allow personal devices to access it. To facilitate OAuth authentication on mobile devices,
you can use the end user UI's QR code generation option. It provides as a scannable QR code the authorization
code sent by the OAuth authorization server.
Procedure
You can customize the look and feel of the authorization page displayed to end users with your corporate branding.
This will make it easier for them to recognize your organization.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See SAP Cloud Platform Cockpit [page
900].
Results
The authorization page that end users see contains the company logo and colors you specify. The following image
shows an example of a customized authorization page.
Propagate users from external applications with SAML identity federation to OAuth-protected applications running
on SAP Cloud Platform. Exchange the user ID and attributes from a SAML assertion for an OAuth access token,
and use the access token to access the OAuth-protected application.
Prerequisites
● You have an application external to SAP Cloud Platform. The application is integrated with a third-party library
or system functioning as a SAML identity provider. That application has a SAML assertion for each
authenticated user.
Note
How the external application and its SAML identity provider work together and communicate is outside the
scope of this documentation. They can be separate applications, or the external application may be using a
library integrated in it.
Note
If you are using a separate third-party identity provider system for this scenario, make sure you have
correctly configured the trust between the external application and the identity provider system. Refer to the
identity provider vendor's documentation for details.
Context
This scenario follows the SAML 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants
specification. The scenario is based on exchanging the SAML (bearer) assertion from the third-party identity
provider for an OAuth access token from the SAP Cloud Platform authorization server. Using the access token,
the external application can access the OAuth-protected application.
The graphic below illustrates the scenario implemented in terms of SAP Cloud Platform.
Procedure
1. Configure SAP Cloud Platform for trust with the SAML identity provider. See Configure Trust to the SAML
Identity Provider [page 2165].
2. Register the external application as an OAuth client in SAP Cloud Platform. See Register an OAuth Client [page
2215].
3. Make sure the SAML (bearer) assertion that the external application presents contains the following
information:
○ A NameID element identifying the user to be propagated, for example:
<saml:NameID
Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">p12356789</saml:NameID>
○ An Issuer element with a value equal to the registered OAuth client (Client ID), for example:
<saml:Issuer
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">myClientID</saml:Issuer>
○ A signature based on the configured signing certificate (Certificate).
○ User attributes, for example an e-mail address and a first name:
<Attribute ...>
<AttributeValue
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:type="xs:string">test@sap.com</AttributeValue>
</Attribute>
<Attribute Name="first_name">
<AttributeValue
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:type="xs:string">Jon</AttributeValue>
</Attribute>
4. In the code of the OAuth-protected application, you can retrieve the user attributes using the relevant SAP
Cloud Platform API. See User Attributes [page 2135].
The Keystore Service provides a repository for cryptographic keys and certificates to the applications in the Neo
environment of SAP Cloud Platform.
If you want to use cryptography with unlimited strength in an SAP Cloud Platform application, you need to enable
it by installing the necessary Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files on
SAP JVM.
Related Information
The Keystore API provides a repository for cryptographic keys and certificates to the applications in the Neo
environment of SAP Cloud Platform. It allows you to manage keystores at subaccount, application or subscription
level.
The Keystore API is protected with OAuth 2.0 client credentials. Create an OAuth client and obtain an access token
to call the API methods. See Using Platform APIs [page 1289].
Using an HTTP destination is a convenient way to establish a connection to the keystore. Once created, you can
reuse the destination for different API calls. To create the required destination, do the following steps:
1. At the required level, create an HTTP destination with the following information:
○ Name=<your destination name>
○ URL=https://api.<SAP Cloud Platform host>/keystore/v1
○ ProxyType=Internet
○ Type=HTTP
○ CloudConnectorVersion=2
○ Authentication=NoAuthentication
See Create HTTP Destinations [page 111].
2. In your application, obtain an HttpURLConnection object that uses the destination.
See ConnectivityConfiguration API [page 80].
Tip
We recommend using the If-None-Match header for subsequent calls to the keystore to check whether the
keystore contents have been modified since your last GET call.
From the response, copy the ETag header value and repeat the request with that value in the If-None-Match header.
You can do this using the same code excerpt as above, with the following line added before the last line:
Expected responses:
If you want to overwrite the keystore, set the overwrite query parameter to true. For example:
Expected response:
Related Information
Overview
The Keystore Service provides a repository for cryptographic keys and certificates to the applications hosted on
SAP Cloud Platform. By using the Keystore Service, applications can easily retrieve keystores and use them
in various cryptographic operations such as signing and verifying of digital signatures, encrypting and decrypting
messages, and performing SSL communication.
Features
The Keystore Service stores and provides keystores encoded in the following formats:
The keystore service works with keystores available on the following levels:
● Subscription level
Keystores available for a certain application provided by another account.
● Application level
Keystores available for a certain application in a particular consumer account.
● Account level
Keystores available for all applications in a particular consumer account.
When searching for a keystore with a certain name, the Keystore Service searches the levels in the following
order: subscription level, then application level, then account level. Once a keystore with the specified name has
been found at one level, the remaining levels are not searched.
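The first-match search order described above can be sketched as follows. This models only the documented lookup semantics and is not the service's real API; the level suppliers and keystore names are made-up examples:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class KeystoreLookupOrder {
    // Searches the levels in order and stops at the first level
    // that yields a keystore with the requested name
    static Optional<String> find(String name, List<Function<String, String>> levels) {
        for (Function<String, String> level : levels) {
            String keystore = level.apply(name);
            if (keystore != null) {
                return Optional.of(keystore); // found: remaining levels are not searched
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Hypothetical setup: the keystore exists at both application and
        // account level; the application-level copy wins
        List<Function<String, String>> levels = Arrays.asList(
            n -> null,                   // subscription level: not found
            n -> "app-level:" + n,       // application level: found
            n -> "account-level:" + n);  // account level: never reached
        System.out.println(find("mykeystore", levels).get());
    }
}
```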
To consume the Keystore Service, you need to add the following reference to your web.xml file:
<resource-ref>
<res-ref-name>KeyStoreService</res-ref-name>
<res-type>com.sap.cloud.crypto.keystore.api.KeyStoreService</res-type>
</resource-ref>
Then, in the code you can look up Keystore Service API via JNDI:
import com.sap.cloud.crypto.keystore.api.KeyStoreService;
...
KeyStoreService keystoreService = (KeyStoreService) new
InitialContext().lookup("java:comp/env/KeyStoreService");
For more information, see Tutorial: Using the Keystore Service for Client Side HTTPS Connections.
The keystore console commands are called from the SAP Cloud Platform console client and allow users to list,
upload, download, and delete keystores. To be able to use them, the user must have administrative rights for that
account. The console supports the following keystore commands: list-keystores, upload-keystore, download-
keystore, and delete-keystore.
Related Information
SAP JVM, used by SAP Cloud Platform, trusts the certificate authorities (CAs) listed below by default. This means
that external HTTPS services using X.509 server certificates issued by those CAs are trusted by default on SAP
Cloud Platform, and no trust needs to be configured manually.
For SSL connections to services that use different certificate issuers, you need to configure trust using the
keystore service of the platform. For more information, see Using the Keystore Service for Client Side HTTPS
Connections [page 2240].
Properties
Related Information
Prerequisites
● You have downloaded and configured the SAP Eclipse platform. For more information, see Setting Up the
Development Environment [page 1126].
● You have created a HelloWorld Web application as described in the Creating a HelloWorld Application tutorial.
For more information, see Creating a Hello World Application [page 1139].
● You have an HTTPS server hosting a resource which you would like to access in your application.
● You have prepared the required key material as .jks files in the local file system.
Note
The file client.jks contains a client identity key pair trusted by the HTTPS server, and cacerts.jks contains all issuer certificates for the HTTPS server. The files are created with keytool from the standard JDK distribution. For more information, see Key and Certificate Management Tool.
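As an alternative to keytool, .jks files can also be created programmatically with the standard JDK KeyStore API. The following is a minimal sketch; the file name and password are illustrative, and real key material would be added with setCertificateEntry or setKeyEntry before storing:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;

public class CreateKeystore {
    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray(); // illustrative password

        // Create an empty JKS keystore in memory; certificates or key pairs
        // would be added via ks.setCertificateEntry(...) / ks.setKeyEntry(...)
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null);

        // Persist it to the local file system as cacerts.jks
        try (FileOutputStream out = new FileOutputStream("cacerts.jks")) {
            ks.store(out, password);
        }

        // Reload to verify that the file is a valid keystore
        KeyStore reloaded = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("cacerts.jks")) {
            reloaded.load(in, password);
        }
        System.out.println("entries=" + reloaded.size());
    }
}
```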
Context
This tutorial describes how to extend the HelloWorld Web application to use SAP Cloud Platform Keystore Service.
It shows how to make an SSL connection to an external HTTPS server by using the JDK and the Apache HTTP Client.
For more information about the HelloWorld Web application, see Creating a Hello World Application [page 1139].
You test and run the application on your local server and on SAP Cloud Platform.
Procedure
To enable the look-up of the Keystore Service through JNDI, you need to add a resource reference entry to the
web.xml descriptor.
<resource-ref>
    <res-ref-name>KeyStoreService</res-ref-name>
    <res-type>com.sap.cloud.crypto.keystore.api.KeyStoreService</res-type>
</resource-ref>
package com.sap.cloud.sample.keystoreservice;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.security.KeyStore;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.cloud.crypto.keystore.api.KeyStoreService;
public class SSLExampleServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // get Keystore Service
        KeyStoreService keystoreService;
        try {
            Context context = new InitialContext();
            keystoreService = (KeyStoreService) context.lookup("java:comp/env/KeyStoreService");
        } catch (NamingException e) {
            response.getWriter().println("Error:<br><pre>");
            e.printStackTrace(response.getWriter());
            response.getWriter().println("</pre>");
            throw new ServletException(e);
        }
        String host = request.getParameter("host");
        if (host == null || (host = host.trim()).isEmpty()) {
            response.getWriter().println("Host is not specified");
            return;
        }
        String port = request.getParameter("port");
        if (port == null || (port = port.trim()).isEmpty()) {
            port = "443";
        }
        String path = request.getParameter("path");
        if (path == null || (path = path.trim()).isEmpty()) {
            path = "/";
        }
        String clientKeystoreName = "client";
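The listing above is truncated here. The remainder typically obtains the client and trust keystores from the Keystore Service and builds an SSLContext from them. That construction step uses only standard JDK classes and can be sketched in isolation; the class and method names below are illustrative, not the platform's code, and an empty in-memory keystore stands in for the real client keystore:

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;

public class SSLContextSketch {

    // Builds a socket factory from a client identity keystore and an optional
    // trust keystore (null falls back to the JVM default trusted CAs)
    public static SSLSocketFactory createSocketFactory(KeyStore clientKeystore,
            char[] keyPassword, KeyStore trustKeystore) throws Exception {
        // Key managers present the client identity (from client.jks)
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientKeystore, keyPassword);

        // Trust managers validate the server chain (from cacerts.jks, or JVM default)
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustKeystore);

        SSLContext context = SSLContext.getInstance("TLS");
        context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return context.getSocketFactory();
    }

    public static void main(String[] args) throws Exception {
        // Empty in-memory keystore stands in for the "client" keystore
        // that the servlet would obtain from the Keystore Service
        KeyStore client = KeyStore.getInstance("JKS");
        client.load(null, null);
        SSLSocketFactory factory =
                createSocketFactory(client, "password".toCharArray(), null);
        System.out.println(factory != null);
    }
}
```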
f. Save the Java editor and make sure that the project compiles without errors.
3. Deploy and Test the Web Application
Procedure
1. Add the required .jar files of the Apache HTTP Client (version 4.2 or higher) to the build path of your project.
2. Add the following imports:
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.scheme.Scheme;
import org.apache.http.conn.scheme.SchemeSocketFactory;
import org.apache.http.conn.ssl.SSLSocketFactory;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;
3. Replace the callHTTPSServer() method with one that uses the Apache HTTP Client.
Related Information
Procedure
1. To deploy your Web application on the local server, follow the steps for deploying a Web application locally as
described in Deploy Locally from Eclipse IDE [page 1189].
2. To upload the required keystores, copy the prepared client.jks and cacerts.jks files into the <local server root>\config_master\com.sap.cloud.crypto.keystore subfolder.
3. To test the functionality, open the following URL in your Web browser: http://localhost:<local server
HTTP port>/HelloWorld/SSLExampleServlet?host=<remote HTTPS server host
name>&port=<remote HTTPS server port number>&path=<remote HTTPS server
resource>&client.keystore.password=<client identity keystore password>.
Related Information
Procedure
1. To deploy your Web application on the cloud, follow the steps for deploying a Web application to SAP Cloud
Platform as described in Deploy on the Cloud with the Console Client [page 1197].
Example
Assuming you have a mySubaccount subaccount, a myApplication application, a myUser user, and the keystore files in the folder C:\Keystores, you need to execute the following commands in your local <SDK root>\tools folder:
For more information about the keystore console commands, see Keystore Console Commands [page
2233].
3. To test the functionality, open the application URL shown by the SAP Cloud Platform cockpit with the following options: <SAP Cloud Platform Application URL>/SSLExampleServlet?host=<remote HTTPS server host name>&port=<remote HTTPS server port number>&path=<remote HTTPS server resource>&client.keystore.password=<client identity keystore password>.
For more information, see Start and Stop Applications [page 1706].
Related Information
You can enable the users of your Web application to authenticate using client certificates. This corresponds to the CERT and BASICCERT authentication methods supported in Java EE.
Overview
Prerequisites
(For the mapping modes requiring certificate authorities) You have a keystore defined. See Keys and Certificates
[page 2231].
Using information in the client certificate, SAP Cloud Platform will map the certificate to a user name using the
mapping mode you specify.
Context
By default, SAP Cloud Platform supports SSL communication for Web applications through a reverse proxy that
does not request a client certificate. To enable client certificate authentication, you need to configure the reverse
proxy to request a client certificate.
Add cert.hana.ondemand.com as a platform domain. See Using Platform Domains [page 1757].
For more information about the trusted certificate authorities (CAs) for SAP Cloud Platform, see Trusted
Certificate Authorities for Client Certificate Authentication [page 2251].
In your Web application, use declarative or programmatic authentication to protect application resources.
Use one of the following two methods for client certificate authentication:
If you use the declarative approach, you need to specify the authentication method in the application web.xml file.
See Declarative Authentication [page 2122].
If you use the programmatic approach, specify the authentication method as a parameter for the login context
creation. For more information, see Programmatic Authentication [page 2127].
The user mapping defines how the user name is derived from the received client certificate. You configure user
mapping using Java system properties.
com.sap.cloud.crypto.clientcert.keystore_name: Defines the name of the keystore used during the user mapping process; it is mandatory for the mapping modes that use the keystore.
Note
Use a keystore that is available in the Keystore Service. See Keys and Certificates [page 2231].
Note
Use the keystore name without the keystore file extension (jks, for example).
Note
Depending on the value of the com.sap.cloud.crypto.clientcert.mapping_mode property, using the com.sap.cloud.crypto.clientcert.keystore_name property may be mandatory.
For more information about how to set the value of the system property, see Configure VM Arguments [page 1702]. For more information about the particular values you need to set, see the table below.
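For example, configuring the CN@Issuer mapping mode described below comes down to two VM arguments of this shape; the keystore name trusted_issuers is illustrative:

```
-Dcom.sap.cloud.crypto.clientcert.mapping_mode=CN@Issuer
-Dcom.sap.cloud.crypto.clientcert.keystore_name=trusted_issuers
```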
CN: The user name equals the common name (CN) of the certificate's subject. To use this mapping mode, set the com.sap.cloud.crypto.clientcert.mapping_mode property with value CN. Example: a client certificate with cn=myuser,ou=security as a subject is mapped to a myuser user name.
Note
The client certificate is not accepted if its issuer is not in the keystore or is not in a chain trusted by this keystore, and then the authentication fails. For more information about the Keystore Service, see Keys and Certificates [page 2231].
CN@Issuer: For this mapping mode, the user name is defined as <CN of the certificate's subject>@<keystore alias of the certificate's issuer>. Use this mapping mode when you have certificates with identical CNs. To use this mapping mode, you have to set the following system properties:
● com.sap.cloud.crypto.clientcert.mapping_mode with value CN@Issuer
● com.sap.cloud.crypto.clientcert.keystore_name with a value the keystore name containing the trusted issuers
The issuer is trusted if it is in the keystore or is part of a trusted certificate chain. A certificate chain is trusted if at least one of its issuers exists in the keystore.
Example: a client certificate with CN=john, C=DE, O=SAP, OU=Development as a subject and CN=SSO CA, O=SAP as an issuer is received. The specified keystore with trusted issuers contains the same issuer, CN=SSO CA, O=SAP, that has an sso_ca alias. Then the user name is defined as john@sso_ca.
Note
The client certificate is not accepted if its issuer is not in the keystore or is not in a chain trusted by this keystore, and then the authentication fails. For more information about setting the Keystore Service, see Keys and Certificates [page 2231].
wholeCert: For this mapping mode, the whole client certificate is compared with each entry in the specified keystore, and then the user name is defined as the alias of the matching entry. To use this mapping mode, you have to set the following system properties:
● com.sap.cloud.crypto.clientcert.mapping_mode with value wholeCert
● com.sap.cloud.crypto.clientcert.keystore_name with a value the keystore name containing the respective user certificates
Example: a client certificate is received with Subject: CN=john.miller, C=DE, O=SAP, OU=Development; Validity Start Date: March 19 09:04:32 2013 GMT; Validity End Date: March 19 09:04:32 2018 GMT; … The specified keystore contains the same certificate with an alias john. Then the user name is defined as john.
Note
The client certificate is not accepted if no exact match is found in the specified keystore, and then the authentication fails. For more information about the Keystore Service, see Keys and Certificates [page 2231].
subjectAndIssuer: For this mapping mode, only the subject and issuer fields of the received client certificate are compared with the ones of each keystore entry, and then the user name is defined as the alias of the matching entry. Use this mapping mode when you want authentication by validating only the certificate's subject and issuer. To use this mapping mode, you have to set the following system properties:
● com.sap.cloud.crypto.clientcert.mapping_mode with value subjectAndIssuer
● com.sap.cloud.crypto.clientcert.keystore_name with a value the keystore name containing the respective user certificates
Example: a certificate with CN=john.miller, C=DE, O=SAP, OU=Development as a subject and CN=SSO CA, O=SAP as an issuer is received. The specified keystore contains a certificate with alias john that has the same subject and issuer fields. Then the user name is defined as john.
Note
The client certificate is not accepted if an entry with the same subject and issuer is missing in the specified keystore, and then the authentication fails. For more information about the Keystore Service, see Keys and Certificates [page 2231].
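The subject parsing behind the CN mapping mode can be illustrated with the standard javax.naming.ldap.LdapName class. This is a sketch of the mapping idea only, not the platform's implementation:

```java
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CnMapping {
    // Extracts the CN from an X.500 subject string, as the CN mapping mode does
    public static String userNameFromSubject(String subjectDn) throws Exception {
        for (Rdn rdn : new LdapName(subjectDn).getRdns()) {
            if ("cn".equalsIgnoreCase(rdn.getType())) {
                return rdn.getValue().toString();
            }
        }
        return null; // no CN present: no user name can be derived
    }

    public static void main(String[] args) throws Exception {
        // Subject from the documentation example
        System.out.println(userNameFromSubject("cn=myuser,ou=security"));
    }
}
```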
To enable client certificate authentication in your application, users need to present client certificates issued by one of the certificate authorities (CAs) listed below.
Trusted CAs
Each of the following root CAs is self-issued, that is, the certificate subject and issuer are identical. The fingerprint of each certificate is given after the distinguished name.
● CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US. Fingerprint: 47:BE:AB:C9:22:EA:E8:0E:78:78:34:62:A7:9F:45:C2:54:FD:E6:8B
● CN=SAP Internet of Things CA, O=SAP IoT Trust Community II, C=DE. Fingerprint: 45:53:D3:F2:22:58:FE:35:59:B1:84:9F:27:3B:8C:69:C2:4C:FA:15
● CN=SAP Passport CA, O=SAP Trust Community, C=DE. Fingerprint: 10:BD:99:32:E8:3A:01:CD:C4:4F:56:10:05:47:30:A8:73:18:16:6D
● CN=thawte Primary Root CA, OU="(c) 2006 thawte, Inc. - For authorized use only", OU=Certification Services Division, O="thawte, Inc.", C=US. Fingerprint: 91:C6:D6:EE:3E:8A:C8:63:84:E5:48:C2:99:29:5C:75:6C:81:7B:81
● CN=VeriSign Class 1 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US. Fingerprint: 20:42:85:DC:F7:EB:76:41:95:57:8E:13:6B:D4:B7:D1:E9:8E:46:A5
● CN=VeriSign Class 2 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US. Fingerprint: 61:EF:43:D7:7F:CA:D4:61:51:BC:98:E0:C3:59:12:AF:9F:EB:63:11
● CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US. Fingerprint: 13:2D:0D:45:53:4B:69:97:CD:B2:D5:C3:39:E2:55:76:60:9B:5C:C6
● CN=VeriSign Class 3 Public Primary Certification Authority - G4, OU="(c) 2007 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US. Fingerprint: 22:D5:D8:DF:8F:02:31:D1:8D:F7:9D:B7:CF:8A:2D:64:C9:3F:6C:3A
● CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US. Fingerprint: 4E:B6:D5:78:49:9B:1C:CF:5F:58:1E:AD:56:BE:3D:9B:67:44:A5:E5
● CN=VeriSign Class 4 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US. Fingerprint: C8:EC:8C:87:92:69:CB:4B:AB:39:E9:8D:7E:57:67:F3:14:95:73:9D
● OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US. Fingerprint: 27:96:BA:E6:3F:18:01:E2:77:26:1B:A0:D7:77:70:02:8F:20:EE:E4
By default, SAP JVM provides Java Cryptography Extension (JCE) with limited cryptography strength. If you want to use cryptography with unlimited strength in an SAP Cloud Platform application, you need to enable it by installing the necessary JCE Unlimited Strength Jurisdiction Policy Files on SAP JVM. To do that, follow the procedure below.
Prerequisites
You have the appropriate Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files enabling
cryptography with unlimited strength.
Procedure
1. Pack the encryption policy files (JCE Unlimited Strength Jurisdiction Policy Files) in the following folder of the
Web application:
Results
The encryption policy files (Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files) will be
installed on the JVM of the application prior to start. As a result, the application can use unlimited strength
encryption.
Example
The WAR file of the application must have the following file entries:
META-INF/ext_security/jre7/local_policy.jar
META-INF/ext_security/jre7/US_export_policy.jar
Related Information
Context
Using the password storage API, you can securely persist passwords and key phrases such as passwords for
keystore files. Once persisted in the password storage, they:
Before transportation and persistence, passwords are encrypted with an encryption key which is specific for the
application that owns the password. They are stored according to subscription, and accessible only when the
owning application is working on behalf of the corresponding subscription.
To use the password storage API, you need to add a resource reference to PasswordStorage in the web.xml file
of your application, which is located in the \WebContent\WEB-INF folder as shown below:
<resource-ref>
<res-ref-name>PasswordStorage</res-ref-name>
<res-type>com.sap.cloud.security.password.PasswordStorage</res-type>
</resource-ref>
An initial JNDI context can be obtained by creating a javax.naming.InitialContext object. You can then
consume the resource by looking up the naming environment through the InitialContext class as follows:
Note that according to the Java EE Specification, the prefix java:comp/env should be added to the JNDI resource
name (as specified in the web.xml file) to form the lookup name.
Below is a code example of how to use the API to set, get or delete passwords. These methods provide the option
of assigning an alias to the password.
import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.sap.cloud.security.password.PasswordStorage;
import com.sap.cloud.security.password.PasswordStorageException;
.......
Note
It is recommended to cache the obtained value, as reading passwords is an expensive operation that involves several internal remote calls to the central storage and audit infrastructure. As passwords differ between tenants, the cache should be tenant aware. The PasswordStorage instance obtained via lookup can be cached and used by multiple threads.
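The tenant-aware caching recommended above can be sketched as follows. The loader function stands in for the PasswordStorage read, since the real service is only available inside the platform; class and method names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TenantPasswordCache {
    // One inner map per tenant, so cached passwords never leak across tenants
    private final Map<String, Map<String, String>> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader; // stands in for the remote read

    public TenantPasswordCache(Function<String, String> loader) {
        this.loader = loader;
    }

    public String getPassword(String tenantId, String alias) {
        return cache
                .computeIfAbsent(tenantId, t -> new ConcurrentHashMap<>())
                .computeIfAbsent(alias, loader); // remote read happens once per tenant/alias
    }
}
```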
Local Testing
When you run applications on the SAP Cloud Platform local runtime, you can use a local implementation of the password storage API, but keep in mind that the passwords are not encrypted and are stored in a local file. Therefore, for local testing, use only test passwords.
Related Information
To protect your applications from different kinds of web attacks, SAP Cloud Platform provides several mechanisms for you to use with your applications.
This document describes how to protect SAP Cloud Platform applications from XSS attacks.
Cross-site Scripting (XSS) is the name of a class of security vulnerabilities that can occur in Web applications. It
summarizes all vulnerabilities that allow an attacker to inject HTML Markup and/or JavaScript into the affected
Web application's front-end.
XSS can occur whenever the application dynamically creates its HTML/JavaScript/CSS content, which is passed
to the user's Web browser, and attacker-controlled values are used in this process. In case these values are
included into the generated HTML/JavaScript/CSS without proper validation and encoding, the attacker is able to
include arbitrary HTML/JavaScript/CSS into the application's frontend, which in turn is rendered by the victim's
Web browser and, thus, interpreted in the victim's current authentication context.
There are several possibilities you can use to protect your application:
● Within the HTML page or custom data transports sent to the browser by the server
● Within the JavaScript Code of the application processing server responses
● Within the HTML renderers of SAPUI5 controls
For more information about the security measures implemented by SAPUI5, see Securing SAPUI5 Applications.
Note
The XSS output encoding library is one option that you can use for your applications. You can also use custom or third-party XSS protection libraries that you have available.
The interface provides methods for retrieving parameters or attributes, and for encoding and decoding data.
It also has various methods for different data types that should be encoded:
To use the XSS output encoding API, you need to add it as a library to the Dynamic Web Project. This is done with the following steps:
In the following example, we demonstrate the use of the XSS Output Encoding API. The example has one HTML
form that retrieves user input, which can contain malicious code:
Even though an attacker might attempt to inject malicious code into both parameters, firstname and lastname, the firstname parameter is protected, since it uses the output encoding library to neutralize all special symbols. However, the attack will succeed for the lastname parameter, since it is printed directly to the output. This is unsafe behavior and should be avoided.
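To make the firstname/lastname example concrete, here is a minimal HTML output encoder in the spirit of such a library. This is a simplified sketch, not the actual XSS output encoding API, which also covers JavaScript, CSS, and URL contexts:

```java
public class HtmlEncoder {
    // Replaces the characters that carry meaning in HTML with entity references
    public static String encodeHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<': out.append("&lt;"); break;
                case '>': out.append("&gt;"); break;
                case '&': out.append("&amp;"); break;
                case '"': out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default: out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A typical injection attempt is neutralized into inert text
        System.out.println(encodeHtml("<script>alert(1)</script>"));
    }
}
```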
Cross-site request forgery (CSRF or XSRF) is also known as a one-click attack or session riding. The key step of the attack is that a malicious user tricks the victim's browser into executing an HTTP request on behalf of the valid user. As a result, a security-sensitive action is performed on the server side. If the victim has already logged in to the attacked site, the browser has valid session cookies and sends them automatically with subsequent requests. The server trusts these requests based on the valid cookies sent by the browser and confirms that the action has been initiated by the victim.
The predictability of the HTTP request is a prerequisite for the attacker to be able to prepare a request in advance and make the browser execute it. Therefore, the common prevention for this attack is to embed a secret, unpredictable token into the request, unique for each session or request.
1. The victim logs in and creates session for the attacked web application.
2. The victim visits a malicious site in another browser window.
3. The malicious site makes a request to the attacked application using the victim's session cookies.
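The token-based defense described above boils down to generating an unpredictable nonce and comparing it against the session-stored value. The following sketch uses only the JDK; the helper class is hypothetical, not Tomcat's code, and MessageDigest.isEqual gives a constant-time comparison:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class CsrfNonce {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generates an unpredictable token to store in the session
    public static String newNonce() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        return new BigInteger(1, bytes).toString(16);
    }

    // Compares the submitted nonce with the session one in constant time
    public static boolean isValid(String submitted, String sessionNonce) {
        if (submitted == null || sessionNonce == null) {
            return false;
        }
        return MessageDigest.isEqual(
                submitted.getBytes(StandardCharsets.UTF_8),
                sessionNonce.getBytes(StandardCharsets.UTF_8));
    }
}
```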
URL encoding approach: Based on the CSRF Prevention Filter provided by Apache Tomcat 7. The prevention mechanism is based on a token (a nonce value) generated on each request and stored in the session. The token is used to encode all URLs on the entry point sites. Upon a request to a protected URL, the existence and value of the token are checked, and the request is allowed to proceed only if the nonce from the token equals the one stored in the session. The prevention mechanism is applied for all URLs mapped to the filter except for specially defined entry points. This is the most common CSRF protection; use it for protecting resources that are supposed to be accessed via some sort of navigation, for example, if there is a reference to them in an entry point page (included in links, post forms, and so on). See Using the Apache Tomcat CSRF Prevention Filter [page 2264].
Custom header approach: Based on a secret token (a nonce value) generated on the server side and stored in the session but, unlike the first approach, the token is transported as a custom header of the HTTP requests. Use it when URL encoding is not suitable, for example, when protecting resources that are requested only as REST APIs (one-time requests that should be served independently from previous requests and are not included in links and HTML forms). The same approach is implemented in other SAP web application servers such as AS ABAP and SAP HANA XS, and is supported by SAPUI5. Common scenarios that can benefit from this approach are those using OData services, REST, AJAX, and so on. See Using Custom Header Protection [page 2266].
Custom CSRF filtering implementation: If you cannot use URL encoding or custom header protection, you can implement your own custom CSRF filtering. Use it when implementing single logout (SLO) for SAP Cloud Platform applications; due to redirects to the SAML 2.0 identity provider, you cannot use the out-of-the-box approaches listed here (custom header protection or URL encoding). See Logout [page 2136].
Note
These approaches cannot be applied together to protect one and the same web resource.
Prerequisites
You have created a working Web application and have enforced authentication for it. See Authentication [page 2122].
For the purposes of this tutorial, an example application consisting of the following URLs will be used:
● /home - displays home page, and has links to /doActionA and /doActionB
● /doActionA - executes a security sensitive action A, and also has a link to /doActionB
● /doActionB - executes a security sensitive action B
Entry points are URLs used as a starting point for the navigation across the application. They are not protected
against CSRF as requests to them will not be tested for the presence of a valid nonce. Entry points should meet the
following criteria:
Considering the example application, /doActionA and /doActionB are not plausible entry points, since they are state-changing URLs; they should be protected against CSRF. Following the rules above, you can easily conclude that /home is best suited to be the entry point.
The CSRF Prevention Filter should be defined in the web.xml configuration file. Important init parameters are entryPoints and nonceCacheSize. The first parameter's value is a comma-separated list of the entry points identified in the previous step, in this case /home. The second parameter, nonceCacheSize, should be used when parallel requests might cause a new nonce to be generated before the validation of an encoded URL. The nonceCacheSize parameter defines the number of previous values stored. The default number is 5.
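The nonce cache behavior can be illustrated with a bounded insertion-order map. This is a sketch of the idea only; Tomcat's actual cache implementation may differ:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NonceCache<T> extends LinkedHashMap<T, Boolean> {
    private final int maxSize;

    public NonceCache(int maxSize) { // nonceCacheSize defaults to 5
        this.maxSize = maxSize;
    }

    public void add(T nonce) {
        put(nonce, Boolean.TRUE);
    }

    public boolean contains(T nonce) {
        return containsKey(nonce);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<T, Boolean> eldest) {
        // Once more than maxSize nonces are stored, drop the oldest one,
        // so URLs encoded with very old nonces stop validating
        return size() > maxSize;
    }
}
```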
The definition below will protect all URLs except for the entry point /home.
<filter>
    <filter-name>CsrfFilter</filter-name>
    <filter-class>org.apache.catalina.filters.CsrfPreventionFilter</filter-class>
    <init-param>
        <param-name>entryPoints</param-name>
        <param-value>/home</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>CsrfFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
In the example application the URLs that should be encoded are /protected/doActionA and /protected/
doActionB in /protected/home, and the /protected/doActionB URL in /protected/doActionA. To
encode the URLs use HttpServletResponse#encodeRedirectURL(String) or
HttpServletResponse#encodeURL(String).
In case a new URL needs to be added to the application later, for example, /newlink, you should evaluate its need for CSRF protection. For example, if it executes a state-changing action, it certainly should be protected.
Depending on the case there are two possibilities:
All CSRF protected links that are used in the new page should be encoded, as described in step 4.
Context
Custom header protection is one of the possible approaches for CSRF protection. It is based on adding a servlet filter that inspects state-modifying requests for the presence of a valid CSRF token. The CSRF token is transferred as a custom header and is valid during the user session. This kind of protection specifically addresses the protection of REST APIs, which are normally not accessed from entry point pages. Note that the CSRF protection is performed only for modifying HTTP requests (methods other than GET, HEAD, or OPTIONS).
In a nutshell, the REST CSRF protection mechanism consists of the following communication steps:
1. The REST CLIENT obtains a valid CSRF token with an initial non-modifying "Fetch" request to the application.
2. The SERVER responds with the valid CSRF token mapped to the current user session.
3. The REST CLIENT includes the valid CSRF token in the subsequent modifying REST requests in the frame of
the same user session.
4. The SERVER rejects all modifying requests to protected resources that do not contain the valid CSRF token.
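Steps 3 and 4 above amount to the following server-side check. This is a simplified, hypothetical sketch; the real RestCsrfPreventionFilter also handles the token fetch handshake and response headers:

```java
public class RestCsrfCheck {
    public static final String HEADER = "X-CSRF-Token";

    // Only methods other than GET, HEAD, and OPTIONS modify state and need validation
    public static boolean isModifying(String method) {
        return !("GET".equals(method) || "HEAD".equals(method) || "OPTIONS".equals(method));
    }

    // Returns true if the request may proceed, false for a 403 Forbidden response
    public static boolean allow(String method, String headerToken, String sessionToken) {
        if (!isModifying(method)) {
            return true; // non-modifying requests are served (and may fetch the token)
        }
        return sessionToken != null && sessionToken.equals(headerToken);
    }
}
```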
The custom header CSRF protection mechanism requires changes in both the client (JavaScript) and the server (REST) parts of the Web application. To better illustrate the mechanism, we'll use an example web application exposing a set of REST APIs; the same example application is used throughout the document.
Prerequisites
You have created a working Web application and have enforced authentication for it, as described in Authentication
[page 2122]. All CSRF protected resources should be protected with an authentication mechanism.
In the application's web.xml, protect all REST APIs using the out-of-the-box CSRF filter available with the SAP
Cloud Platform SDK.
Note
You must have at least one non-modifying REST operation listed.
Identify all web application resources that have to be CSRF protected and map them to
org.apache.catalina.filters.RestCsrfPreventionFilter (this class represents the out-of-the-box
CSRF filter available with the SAP Cloud Platform SDK, so you do not need to instantiate/implement it) in the
web.xml.
Note
If you are using an older version of the SAP Cloud Platform runtime for Java, use the com.sap.core.js.csrf.RestCsrfPreventionFilter class instead. It delivers the same implementation as the other one. Namely, use that class with the following runtime versions:
As a result, all modifying HTTP requests matching the given url-pattern would be CSRF validated, i.e. checked
for the presence of the valid CSRF token.
Applications should expose at least one non-modifying REST operation to enable the CSRF token fetch mechanism. To obtain the valid CSRF token, clients need to make an initial fetch request; that is why the non-modifying REST API is necessary. Requirements for the non-modifying REST API:
○ Any GET/HEAD/OPTIONS requests to the URL shall not cause state modification.
○ The URL should be mapped to the RestCsrfPreventionFilter
○ The URL should be protected with authentication mechanism.
Example
The following example illustrates mapping a set of modifying REST APIs and one non-modifying REST API to the
CSRF protection filter in the application’s web.xml deployment descriptor:
<filter>
    <filter-name>RestCSRF</filter-name>
    <filter-class>org.apache.catalina.filters.RestCsrfPreventionFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>RestCSRF</filter-name>
    <!-- modifying REST APIs -->
    <url-pattern>/services/customers/removeCustomer</url-pattern>
    <url-pattern>/services/customers/addCustomer</url-pattern>
    <url-pattern>/services/customers/initCustomers</url-pattern>
    <!-- non-modifying REST API -->
    <url-pattern>/services/customers/list</url-pattern>
</filter-mapping>
Procedure
1. As a first step, the REST client should obtain the valid CSRF token for the current session. To do this, it makes a non-modifying request and includes the custom header "X-CSRF-Token: Fetch". The returned [sessionid – csrf token] pair should be cached and used in subsequent REST requests by the client. Another option is to send a Fetch request before every REST request and thus use each [sessionid – csrf token] pair only once.
Client Request:
GET /restDemo/services/customers/list HTTP/1.1
X-CSRF-Token: Fetch
Authorization: Basic dG9tY2F0OnRvbWNhdA==
Host: localhost:8080
Server Response:
HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=4BA3D75B73B8C4591F1D915BA9C2B660; Path=/restDemo/;
HttpOnly
X-CSRF-Token: 5A44B387B75E54417F6C64FF3D485141
..
2. Use the cached [sessionid – csrf token] pair for subsequent REST requests.
Subsequent modifying REST requests to the same application should include the valid jsessionid cookie and
the valid X-CSRF-Token header.
Client Request:
POST /restDemo/services/customers/removeCustomer HTTP/1.1
Cookie: JSESSIONID=4BA3D75B73B8C4591F1D915BA9C2B660
X-CSRF-Token: 5A44B387B75E54417F6C64FF3D485141
Authorization: Basic dG9tY2F0OnRvbWNhdA==
Host: localhost:8080
Server Response:
HTTP/1.1 200 OK
..
If a modifying request does not contain the valid CSRF token, the server rejects it:
403 Forbidden
X-CSRF-Token: Required
Context
In small number of use cases the client is not able to insert custom headers in its calls to a REST API. For example
file uploads via POST HTML FORM consuming a REST API. Only for such use-cases there is an additional capability
to configure REST APIs for which the valid CSRF token will be accepted as request parameter (not only header). If
there is a X-CSRF-Token header, it will be taken with preference over any parameter with the same name in the
request.
Tip
For security reasons, we strongly recommend the following:
● Use this approach only when the header approach cannot be applied.
● Use only a hidden POST parameter named X-CSRF-Token, not query parameters.
<filter>
    <filter-name>CSRF</filter-name>
    <filter-class>org.apache.catalina.filters.RestCsrfPreventionFilter</filter-class>
    <init-param>
        <param-name>pathsAcceptingParams</param-name>
        <param-value>/services/customers/acceptedPath1.jsp,/services/customers/acceptedPath2.jsp</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>CSRF</filter-name>
    <url-pattern>/services/customers/*</url-pattern>
</filter-mapping>
Governments place legal requirements on industry to protect data and privacy. We provide features and functions
to help you meet these requirements.
Note
SAP does not provide legal advice in any form. SAP software supports data protection compliance by providing
security features and data protection-relevant functions, such as blocking and deletion of personal data. In
many cases, compliance with applicable data protection and privacy laws is not covered by a product feature.
Furthermore, this information should not be taken as advice or a recommendation regarding additional features
that would be required in specific IT environments. Decisions related to data protection must be made on a
case-by-case basis, taking into consideration the given system landscape and the applicable legal requirements.
Definitions and other terms used in this documentation are not taken from a specific legal source.
This documentation covers personal data relating to SAP Cloud Platform accounts and data stored in databases
by SAP Cloud Platform. SAP Cloud Platform offers a number of capabilities, that is, services, buildpacks,
applications, and so on. Here we cover the core platform. For more information about data protection and privacy
for capabilities you have purchased, see the data protection and privacy documentation for those capabilities.
For your convenience, we have created a list of capabilities with data protection and privacy documentation.
This documentation is written with the data protection officer of a company in mind. The processes described here
may be relevant for a data protection officer, for an administrator of the user accounts of your tenants, or even for
business users of the tenants. In particular, the processes for business users are described here so that you, in your
role as data protection officer or account administrator, can communicate them to your business users if required.
● Global account users are stored in the platform identity provider or a tenant of SAP Cloud Platform Identity
Authentication service.
● Platform users are stored in the platform identity provider, a tenant of SAP Cloud Platform Identity
Authentication service, or your own identity provider.
● Business users are stored in a tenant of SAP Cloud Platform Identity Authentication service or your own
identity provider.
The following terms are general to SAP products. Not all terms may be relevant for SAP Cloud Platform.
Term Definition
Business purpose A legal, contractual, or in other form justified reason for the processing of personal data. The assumption is that any purpose has an end that is usually already defined when the purpose starts.
Consent The action of the data subject confirming that the usage of his or her personal data shall be allowed for a given purpose. A consent functionality allows the storage of a consent record in relation to a specific purpose and shows if a data subject has granted, withdrawn, or denied consent.
End of business Date where the business with a data subject ends, for example, the order is completed, the subscription is canceled, or the last bill is settled.
End of purpose (EoP) End of purpose and start of blocking period. The point in time when the primary processing purpose ends (for example, the contract is fulfilled).
End of purpose (EoP) check A method of identifying the point in time for a data set when the processing of personal data is no longer required for the primary business purpose. After the EoP has been reached, the data is blocked and can only be accessed by users with special authorization (for example, tax auditors).
Purpose The information that specifies the reason and the goal for the processing of a specific set of personal data. As a rule, the purpose references the relevant legal basis for the processing of personal data.
Residence period The period of time between the end of business and the end of purpose (EoP) for a data set, during which the data remains in the database and can be used in case of subsequent processes related to the original purpose. At the end of the longest configured residence period, the data is blocked or deleted. The residence period is part of the overall retention period.
Retention period The period of time between the end of the last business activity involving a specific object (for example, a business partner) and the deletion of the corresponding data, subject to applicable laws. The retention period is a combination of the residence period and the blocking period.
Sensitive personal data A category of personal data that usually includes the following type of information:
Where-used check (WUC) A process designed to ensure data integrity in the case of potential blocking of business partner data. An application's where-used check (WUC) determines if there is any dependent data for a certain business partner in the database. If dependent data exists, this means the data is still required for business activities. Therefore, the blocking of business partners referenced in the data is prevented.
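The relationship between the periods defined above can be illustrated with a short date calculation. All dates and period lengths below are invented for illustration; only the relationships themselves (blocking starts at the end of purpose; retention period = residence period + blocking period) come from the definitions above.

```python
from datetime import date, timedelta

def timeline(end_of_business, residence_days, blocking_days):
    """Derive the key data-protection dates from the glossary definitions."""
    # End of purpose (EoP): residence period has elapsed, blocking starts.
    end_of_purpose = end_of_business + timedelta(days=residence_days)
    # Deletion date: blocking period has also elapsed, retention ends.
    deletion = end_of_purpose + timedelta(days=blocking_days)
    # Retention period is the combination of residence and blocking periods.
    retention_days = residence_days + blocking_days
    return end_of_purpose, deletion, retention_days

# Example: business ends 31 Jan 2018, 90-day residence, 275-day blocking.
eop, deletion, retention = timeline(date(2018, 1, 31), 90, 275)
```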
Audit logs are available on request by opening a ticket with primary support.
For more information, see Request Extraction of Audit Logs [page 2291].
Note
For any applications you develop, you must ensure they include logging functions. SAP Cloud Platform does not
provide audit logging functions for custom developments.
The audit log retrieval API allows you to retrieve the audit logs for your SAP Cloud Platform Neo environment
account. It follows the OData 4.0 standard, providing the audit log results as OData with collection of JSON
entities.
The audit log retrieval API is protected with OAuth 2.0 client credentials. To call the API methods, create an OAuth
client and obtain an access token. See Using Platform APIs [page 1289].
To authenticate, provide the obtained OAuth token in the header, similar to: "Authorization: Bearer
41fce723412c6c18961f7e95d911ad37"
The returned results are split into pages of the default server page size. If the number of results is higher than the
default server page size, the response contains an @odata.nextLink field with the URL from which to retrieve the
next chunk of results.
● audit.security-events
● audit.configuration
● audit.data-access
● audit.data-modification
To get results in pages of 50, first check the total number of results by executing a GET request that returns the count:
To split the results into pages of the desired size, 50 results per page in this example, execute a GET request per page:
Continue the same request pattern until you reach the number of results returned by the count request in the first
step.
Note
If you use client-side pagination and request a client-side page bigger than the default server-side page, the audit
log retrieval API splits the requested page into several chunks. As a result, you receive a response containing an
@odata.nextLink field, from which the next data chunk can be retrieved (for more information, see the Results
section below). Move on to the next client-side page value only after you have iterated over all the chunks the
server breaks the result into, that is, when the response no longer contains an @odata.nextLink field.
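The chunk iteration that this note describes can be sketched as a loop that follows @odata.nextLink until the server stops returning one. This is a sketch under the assumption of a generic OData 4.0 response shape (a "value" array plus an optional "@odata.nextLink"); the get callable stands in for an authenticated HTTP GET that returns the parsed JSON response.

```python
def iterate_all(get, first_url):
    """Collect all records, following @odata.nextLink until the server
    stops returning one (i.e. until the final chunk of the result set)."""
    records, url = [], first_url
    while url is not None:
        page = get(url)                      # one server-side chunk
        records.extend(page["value"])        # audit log records of this chunk
        url = page.get("@odata.nextLink")    # absent on the final chunk
    return records
```

Only after this loop finishes for one client-side page should a client advance to the next client-side page value.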
Results
Executing a GET request against the audit log retrieval API results in a response similar to the one below. The
structure of the AuditLogRecords can be checked in the OData metadata part. In the "value" part, you receive
the audit log messages in the format shown in the response example. The results returned per page are limited to
the server page size. To get the next result page, navigate to the URL provided in @odata.nextLink.
Sample Code
{
"@odata.context": "$metadata#AuditLogRecords",
"value": [
{
"Uuid": "3b8a8b-16247c70836-8",
"Category": "audit.data-access",
"User": "<user>",
"Tenant": "<tenant>",
"Account": "<account>",
The audit log retention API allows you to view your currently active retention period for all the audit log data that is
stored for your account.
Using the API, you can replace the default retention period with a custom retention period that corresponds to
your legal, business, or other requirements.
Note
Setting a custom retention period for the first time triggers a data migration that can last up to 24 hours.
During that time frame, the audit log retention API may return inconsistent results caused by the migration.
All audit logs written during the transition period are stored and are not lost; they become visible after the
initial transition stage is over. This does not apply to subsequent changes of the retention period made for the
same account.
The audit log retention API is protected with OAuth 2.0 client credentials. Create an OAuth client and obtain an
access token to call the API methods. See Using Platform APIs [page 1289].
To authenticate, provide the obtained OAuth token in the header, similar to: "Authorization: Bearer
41fce723412c6c18961f7e95d911ad37"
Note
Usage of the audit log's custom retention period currently does not incur additional charges. This may change in
the future, and a fee based on the stored data volume and retention period may be applied.
Related Information
An information report is a collection of data relating to a data subject. A data privacy specialist may be required to
provide such a report or an application may offer a self-service.
To see the personal data that is used for membership management within SAP Cloud Platform, access the cloud
cockpit.
To see the personal data that is used for application logging within SAP Cloud Platform, access the cloud cockpit.
For more information, see Using Logs in the Cockpit in the Logging Services documentation.
If you do not use your own identity provider for identity federation, you can view the profiles available in SAP Cloud
Platform Identity Authentication service.
For more information, see Information Report in the SAP Cloud Platform Identity Authentication service
documentation.
For all other services, which persist data, such as databases or document services, retrieve the data you stored
with the same APIs, protocols, or languages you used to store the data.
6.6.4 Erasure
When handling personal data, consider the legislation in the different countries where your organization operates.
After the data has passed the end of purpose, regulations may require you to delete the data. However, additional
regulations may require you to keep the data longer. During this period you must block access to the data by
unauthorized persons until the end of the retention period, when the data is finally deleted.
Personal data can also include referenced data. The challenge for deletion and blocking is first to handle
referenced data and then other data, such as business partner data.
When accounts expire, we delete your data, barring legal requirements that oblige SAP to retain it. If your
organization has separate retention requirements, you are responsible for saving this data before we terminate
your account.
● For trial accounts in the Cloud Foundry environment, your account expires after 90 days.
● Productive accounts expire based on the terms of your contract.
To deactivate or delete users, see Erasure in the SAP Cloud Platform Identity Authentication service documentation.
For all other services which persist data, such as databases, document service, and such, you can retrieve the data
you stored with the same APIs, protocols, or languages which you used to store the data.
To view the services used in a global account, choose Entitlements in the navigation area.
We maintain backups of the data for disaster recovery. When your account is deleted, we may have this data in our
backup system for the length of our backup cycle.
Note
If your data is stored outside SAP Cloud Platform, we cannot guarantee that your data does not get reintegrated
if you push such data to our systems. You are responsible for terminating such integrations.
Related Information
We assume that software operators, such as SAP customers, collect and store the consent of data subjects, before
collecting personal data from data subjects. A data privacy specialist can later determine whether data subjects
have granted, withdrawn, or denied consent.
SAP Cloud Platform offers services to help you manage the consent of data subjects.
● SAP Cloud Platform Identity Authentication service provides tools to manage privacy policies and terms of use
agreements.
For more information, see Configuring Privacy Policies in SAP Cloud Platform Identity Authentication Service
and Configuring Terms of Use.
See also Consent in the SAP Cloud Platform Identity Authentication service documentation.
If you have questions or encounter an issue while working with SAP Cloud Platform, you can address them as
described below.
Depending on your global account, you can use the following support media:
To report an incident (issue) in the SAP Support Portal, follow the steps below.
Before reporting an incident, check the availability of the platform at SAP Cloud Platform Status Page .
For more information about selected platform incidents, see Root Cause Analyses.
Note
When you specify the correct product, installation, and system, the correct support SLA is applied to your case.
Note that not choosing the appropriate product, installation, and system may negatively affect the processing
of the incident. For more information on product, installation, and system values, see KBA 2379404 .
1. Select a language, set the priority of the incident, and enter a subject. If you set a high or very high priority,
you must also describe the business impact of the incident.
2. To help the support staff process your issue as fast as possible, please provide the following information in the
Description field:
○ Region and global account name. In the cockpit, open the affected subaccount, and copy the URL.
○ Java application name and URL (only when the problem is related to Java applications). In the cockpit,
open the respective Java application’s Overview page.
○ Database or schema ID (only when the problem is related to a database system or schema). In the
cockpit, see the ID column by navigating to SAP HANA / SAP ASE Databases & Schemas .
3. From the Component dropdown list, select the component name of the area that best fits your issue.
Selecting the right component will direct your issue to the corresponding support team. To check the
complete list of components, see SAP Note 1888290 .
4. Enter the steps to reproduce the issue and if necessary, add some attachments.
5. Optionally, define contact(s) apart from the reporter, who is filled in automatically.
6. When ready, choose Submit to create the incident.
Note
If you have problems creating and sending an incident, or your ticket is not processed as fast as you need,
contact the 24/7 phone hotlines. See SAP Note 560499 .
Additional Resources
https://help.sap.com/viewer/product/SCP_RCA/Latest/en-US
The Eclipse tools come with a wizard for gathering support information in case you need help with a feature or
operation (during deploying/debugging applications, logging, configurations, and so on).
Context
The wizard collects the information in a ZIP file, which you can later send to SAP support. This way, SAP support
developers can get a better understanding of your environment and process the issue faster.
Procedure
Note
If you select Screenshot, your currently open Eclipse windows and views will be snapped as a picture and
added to the ZIP file . Make sure you don't reveal sensitive information.
3. In the File Name field, specify the ZIP file name and location.
4. Choose Finish.
Next Steps
You can create a support ticket, attach the ZIP file to it and send it to the relevant OSS component. For more
information, see Getting Support [page 2280].
SAP Cloud Platform is a dynamic product, which has continuous production releases (updates). To get
notifications for the new features and fixes every release, subscribe at the SAP Community wiki by choosing the
Watch icon.
● Bi-weekly updates (standard) - every other Thursday, aligned with the contractual obligations of SAP Cloud
Platform to customers and partners. Such updates usually do not affect productive applications, because
most SAP Cloud Platform services support zero-downtime maintenance. See Service Level Agreement for
SAP Cloud Services .
● Immediate updates - fixes required for bugs that affect productive application operations, or due to urgent
security fixes. In some cases, this might lead to downtime or application restart, for which the application
groups will receive a notification.
● Major upgrades - happen rarely, in a bigger maintenance window, usually up to four times per year. For the
time frames of the services' major upgrades, see Service Level Agreement for SAP Cloud Services . We'll let
you know about these upgrades one week in advance.
You can follow the availability of the platform and the announcements about upcoming updates and downtimes at
https://sapcp.statuspage.io/ . Subscribe to the status page to get notifications for updates and downtimes.
Related Information
What's New
An operating model clearly defines the separation of tasks between SAP and the customer during all phases of an
integration project.
SAP Cloud Platform and its services have been developed on the assumption that specific processes and tasks will
be the responsibility of the customer. The following table contains all processes and tasks involved in operating the
platform and the services and specifies how the responsibilities are divided between SAP and the customer for the
services in scope.
Changes to the operating model defined for the services in scope are published using the What's New (release
notes) section of the platform. Customers and other interested parties must review the product documentation on
a regular basis. If critical changes are made to the operating model, which require action on the customer side, an
explicit notification is sent by e-mail to the affected customers.
It is not the intent of this document to supplement or modify the contractual agreement between SAP and the
customer for the purchase of any of the services in scope. In the event of a conflict, the contractual agreement
between SAP and the customer as set out in the Order Form, the General Terms and Conditions of SAP Cloud
Services, the supplemental terms and conditions, and any resources referenced by those documents always takes
precedence over this document.
The responsibilities for operating SAP Cloud Platform are listed in the service catalog below.
Service Catalog
Context
Due to the lack of a self-service audit log viewing tool, applications and services that want to view audit logs have
to request their extraction for a given time period. This is a temporary manual process until an audit log viewer is in
place.
Procedure
You can use $ cf a to get the application name and $ cf env <application_name> to get the
application id.
f. (Optional) Tenant ID - The ID of the tenant. If you need audit logs for a specific tenant specify the tenant ID
in the ticket.
g. (Optional) Service instance GUID
Neo Environment
Procedure
Related Information
Change
Change
You can prepare role collections and directly assign them to users provided by the default identity provider (SAP ID service).
These users no longer have to authenticate at least once before the role collection assignment. See
Directly Assign Role Collections to Users [page 2074].
Change
Change
SAP Java Buildpack version 1.6.15 uses SAP JVM 8.1.035 for its runtimes. This version of SAP JVM includes the DigiCert
root CA.
Announcement
The HTML5 Application Repository service enables you to centrally store and provision HTML5 applications on the SAP
Cloud Platform Cloud Foundry environment. The service also provides the runtime environment for HTML5 applications and
runs as a dedicated Web server, which is optimized to serve static content efficiently.
Announcement
Certificates on all regions in the Cloud Foundry environment will be issued by a new root CA issuer, DigiCert.
If you use certificates for API calls (for example, REST, OData, or SOAP), please ensure that the new root CA certificate is
included in all participating trust stores. See Certificate Authority Change .
Announcement
SAP JVM 6 is marked as deprecated. As of 18 January 2018, you will not be able to deploy applications with SAP JVM 6 to the
platform. In case of issues, create a ticket in component BC-NEO-RT-JAVA.
New
A new REST API is available for managing cryptographic keys and certificates in your applications in the Neo environment.
You can use the API alternatively to the available console commands. See Keystore API [page 2229].
Announcement
Certificates on all new regions and the certificates renewals on the existing regions in the Neo environment will be issued by
a new root CA issuer, DigiCert. See Certificate Authority Change .
The --direct-access-code parameter of the start-maintenance console command gives you the option to
specify an access code while configuring your application in maintenance mode. You can use this access code as a value of
the Direct-Access-Code HTTP header to gain access to your application for testing and administration purposes during the
maintenance period. See start-maintenance [page 1980].
New
Extension Integration
Now you can define a custom platform role to create an integration token required for the automated configuration of an
SAP Cloud Platform extension package for SAP SuccessFactors. You have to assign the following platform scopes to this
custom role:
● readExtensionIntegration
● manageExtensionIntegration
● manageAccount
See Create an Integration Token for SAP SuccessFactors [page 1627].
New
The Cloud Foundry environment has been updated from version 278 to 280.
More information:
● v279
● v280
New
More information:
● v1.29.0
● v1.29.1
● v1.29.2
● v1.30.0
Administrators of a global account who are members of a subaccount in the Cloud Foundry environment can delete an
organization in this subaccount. All data in the organization, including spaces, applications, service instances, and member
information, is lost. You can then create a new organization. See Delete Organizations [page 1663].
New
The SAP JVM memory calculator now makes it possible to limit the size of the JIT compiler code cache and of direct
memory. The feature is not active by default.
With the command below, the size of the code cache is limited to 256M and the size of direct memory to 10M:
When you set upper limits for codecache and directmemory, the memory calculator automatically decreases the
metaspace, stack, and heap sizes proportionally. To keep these sizes at their pre-upper-limit values, set the new memory
limit to the sum of the memory limit before activating the feature and the sizes of codecache and directmemory.
For example, an application is started with a 1024M memory limit and default settings of the memory calculator. This means
that the Java process was started with -Xmx768M, -XX:MaxMetaspaceSize=102M, -Xss1M.
If you want to set codecache:100M and directmemory:100M and keep heap, metaspace, and stack as before activating the
feature, push the application with a memory limit of 1024M + 200M = 1224M.
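The sizing rule in this example can be written as a one-line helper. The function name is made up for illustration; the rule itself (add the new codecache and directmemory upper limits on top of the previous memory limit, so that heap, metaspace, and stack keep their previous sizes) is the one stated above.

```python
def memory_limit_with_upper_limits(old_limit_mb, codecache_mb, directmemory_mb):
    """New memory limit that preserves the pre-upper-limit heap, metaspace,
    and stack sizes: previous limit plus the two new upper limits."""
    return old_limit_mb + codecache_mb + directmemory_mb

# 1024M original limit, codecache:100M, directmemory:100M -> push with 1224M
```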
Announcement
The upcoming versions of the Node.js Cloud Foundry system buildpack will:
The changes are anticipated to take effect on SAP Cloud Platform at the beginning of 2018.
Note
SAML for Web SSO using a Web browser is not affected.
● Two of the fields in the JSON Web Token (JWT) generated by XSUAA upon authentication will change:
○ "authorities" - The field is used for tokens generated by the OAuth client credentials flow and duplicates
information present in the scopes field of a JWT token. The field is not required by resource servers that use
the SAP container security library and will be removed by 18 January 2018.
○ "xs.system.attributes" - The field contains information about SAML groups (xs.saml.groups)
and role collections (xs.rolecollections). For XSUAA service instances created after 18 January 2018, this
information will not be added by default. If you run applications that require this information, you need to set
the attribute "add-system-attributes": "true" in the xs-security.json after 18 January 2018.
New
Change
Java Web, Java EE 6 Web Profile, and Java Web Tomcat 7 have been updated to Tomcat version 7.0.82.
Java Web Tomcat 8 and Java EE 7 Web Profile TomEE 7 have been updated to Tomcat version 8.5.23.
The new Tomcat versions contain security fixes. See Tomcat 7.0.82 (violetagg) and Tomcat 8.5.23 (markt) .
New
The Cloud Foundry environment has been updated from version 275 to 278.
More information:
● v276
● v277
● v278
New
More information:
● v1.26.1
● v1.26.2
● v1.27.0
● v1.28.0
Change
Administrators of global accounts in the Cloud Foundry environment can now delete subaccounts. Each subaccount tile has
a Delete button. See Delete Subaccounts [page 1662].
Application logging will be updated to Elastic 5 on 9 November 2017. A short downtime of the service is expected during the
upgrade.
During the downtime, the web front-end Kibana won't be reachable, and no new logs will be shipped to the application
logging stack.
During the update, you can still access your logs by using the cf logs command.
After the update, you'll need to migrate your custom dashboards by adapting the index names and the document ID. The ZZ
namespace restrictions have been removed.
New
Cockpit
To ask a question or send feedback, choose (Contact Us) in the header toolbar. See SAP Cloud Platform Cockpit [page
900].
New
Software Logistics
Multi-target applications (MTAs) now support the import of SAP Cloud Platform Integration content. With the new module
type com.sap.integration, you can integrate packages described in an MTA archive that has been transported
by CTS+ or deployed using the deploy-mta command or the cockpit. See MTA Module Types, Resource Types, and
Parameters for Applications in the Neo Environment [page 1351].
New
The Java EE 7 Web Profile TomEE 7 runtime is generally available. See Java EE 7 Web Profile TomEE 7 [page 1158].
New
A new region is now productive: br1.hana.ondemand.com, located in São Paulo, Brazil. See Regions [page 21].
New
Release of SAPUI5 distribution 1.48.11 for Java and HTML5 applications. See What's New in SAPUI5 and Change Log.
The welcome message on the login page for business applications now includes the display name of the targeted
subaccount. Previously, only the subaccount ID was displayed. For existing subaccounts, change the subaccount display
name to trigger the appearance of this name on the login page.
Announcement
A new Cloud Foundry environment trial will be suspended automatically after 30 days. An existing Cloud Foundry
environment trial gets a grace period of an additional 30 days starting now. The cockpit shows the time left in a free trial.
Once a trial is suspended, you can still log on to it, but you won't be able to use applications or services.
Between 30 and 90 days after the creation of the trial, you can renew the trial by using the Extend Free Trial button in the
cockpit.
Change
Application logging will be updated to Elastic 5 on 9 November 2017. A short downtime of the service is expected during the
upgrade.
During the downtime, the web front-end Kibana won't be reachable, and no new logs will be shipped to the application
logging stack.
During the update, you can still access your logs using the Cloud Foundry cf logs command.
New
Multitenant subscriptions are now available in the Cloud Foundry environment. The new Subscriptions page in the cockpit
lists all business applications to which your subaccount is entitled. On this page, you can also create and remove
subscriptions, and launch subscribed applications.
You can also view the list of application roles defined for a subscribed application; go to the Roles tab of the subscribed
application.
See Subscribing to Business Applications in the Cloud Foundry Environment [page 968].
See https://blogs.sap.com/2017/07/25/cloud-foundry-java-buildpack-version-4.3/
Change
● The Java System Buildpack of the Cloud Foundry environment is updated to include the service instance ID generated by
the service broker in the /sap/rest/authorization/apps endpoint.
● Incompatible changes in the JSON Web Token (JWT) generated by XSUAA upon authentication:
○ Field "authorities" for tokens generated by the OAuth client credentials flow duplicates information present in
the scopes field of a JWT token. This field is not required by resource servers that use the SAP container security
library and will be removed by 18 January 2018.
○ Field "xs.system.attributes" provides information about SAML groups (xs.saml.groups) and role collections
(xs.rolecollections). After 18 January 2018, for XSUAA service instances this information will not be added by default
upon generation. Applications that require this information need to set the attribute "add-system-attributes":
"true" in their xs-security.json.
● Cloud Foundry Java Buildpack version 4.x: The Java System Buildpack of Cloud Foundry is updated to version 4.5 (as
announced in July). This is a major version update and has incompatible changes.
New
The Cloud Foundry environment has been updated from version 271 to 275.
More information:
● v272
● v273
● v274
● v275
More information:
● v1.25.2
● v1.25.3
● v1.26.0
Announcement
Cloud Foundry Java Buildpack version: The Java System Buildpack of the Cloud Foundry environment remains on v3.19.
New
SAP Cloud Platform is available on Google Cloud Platform as an IaaS provider in the trial account as beta. See Regions
[page 21].
Announcement
No email notifications are sent for upcoming updates provided such updates do not affect productive applications.
Enhancement
Cloud Cockpit
Information about processes of Java applications, which used to be shown on the Overview page, is now shown on a new
dedicated page, Processes, located under Monitoring. The Overview page now shows a summary of the number of
processes and the overall metrics. See Check the Process Status [page 1709].
Software Logistics
With the new module type com.sap.integration, you can integrate packages described in an MTA archive that has been
transported by CTS+ or deployed using the deploy-mta command or the cockpit.
Fix
Log metadata is taken into account for the application log quota calculation.
Enhancement
The Platform Roles list now shows predefined platform roles as well. Predefined roles are marked by an i icon after the
display name. You can clone predefined roles to create custom copies that you can then modify to your needs. See Manage
Custom Platform Roles [page 1675].
Enhancement
Cloud Foundry environment has been updated from version 270 to 271.
More information:
● v271
Enhancement
More information:
● v1.24.0
● v1.25.0
● v1.25.1
Enhancement
Release of SAPUI5 distribution 1.48.6 for Java and HTML5 applications. See What's New in SAPUI5 and Change Log.
New
Enhancement
● During application deployment or subscription using the Solutions view in the cockpit, you can manually provide configuration parameters that have been intentionally left unspecified. See Defining MTA Deployment Descriptors for the Neo Environment [page 1345].
● In the deploy-mta command you can use the --extension parameter to specify the location of one or several
MTA extension descriptor files that provide additional data to a Multi-Target Application. See deploy-mta [page 1862].
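The effect of an extension descriptor can be sketched as a recursive overlay that fills in values the deployment descriptor left open. The dictionaries below are simplified illustrations, not the real MTA YAML schema:

```python
# Simplified sketch: an extension descriptor is overlaid onto the
# deployment descriptor, supplying values intentionally left unspecified.
# The structures below are illustrations, not the actual MTA schema.

def merge(base: dict, extension: dict) -> dict:
    """Recursively overlay extension values onto the base descriptor."""
    merged = dict(base)
    for key, value in extension.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge(merged[key], value)
        else:
            merged[key] = value
    return merged

deployment = {"modules": {"web": {"parameters": {"memory": None}}}}
extension = {"modules": {"web": {"parameters": {"memory": "512M"}}}}

resolved = merge(deployment, extension)
print(resolved["modules"]["web"]["parameters"]["memory"])  # 512M
```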
Announcement
The Virtual Machine service is available in the US East (Sterling/VA) region. See Virtual Machines [page 1761].
Enhancement
Release of SAPUI5 distribution 1.48.5 for Java and HTML5 applications. See What's New in SAPUI5 and Change Log.
Enhancement
The node.js container security library supports the user_token grant flow. See The User Account and Authentication (UAA)
Service . Because the new XSUAA service instances support this flow, please update the previous service instances by
using cf update-service <instance> -c <xs-security.json file>.
As an administrator, you can configure in the Cloud Foundry environment a mutual trust relationship between user authentication and authorization (UAA) and the SAML 2.0 identity provider that holds the business users for SAP Cloud Platform. See Identity Federation [page 2034].
New
In the Cloud Foundry environment, developers can create and deploy application-based authorization information for business users. Administrators can then use this information to assign roles to the users and thus control their permissions. See Set Up Security Artifacts [page 2105].
Enhancement
Cloud Foundry environment has been updated from version 268 to 270.
More information:
● v269
● v270
Enhancement
More information:
● v1.23.2
● v1.23.1
New
Cloud Cockpit
The cockpit is available in Korean. See the Language section in SAP Cloud Platform Cockpit [page 900].
Cloud Cockpit
The custom platform roles feature has been improved. You can:
● Use the search function on the Platform Roles page to restrict the entries in the list to the ones you are interested in.
● Create a new custom platform role by copying from an existing one.
Enhancement
SAP Cloud Platform Tools supports Eclipse Oxygen. You can install the new toolkit from https://tools.hana.ondemand.com/#cloud. See Install SAP Development Tools for Eclipse [page 1129].
Enhancement
Log quotas are based on actual log message size and applied per log service instance.
New
The new Java EE 7 Web Profile TomEE 7 runtime (Beta) is available in the Neo environment. It is based on the TomEE server
and supports the Java EE 7 Web Profile specification. See SAP Cloud Platform is Java EE 7 Web Profile Certified and Java
EE 7 Web Profile TomEE 7 [page 1158].
New
You can use custom platform roles to manage member authorizations in the Neo environment. You define custom roles on
global account level. See Managing Member Authorizations [page 1671].
Announcement
The Virtual Machines service is available in the Australia (Sydney 1) region. See Virtual Machines [page 1761].
Announcement
Enhancement
Cloud Foundry environment has been updated from version 265 to 268 because of high-severity issues. See all Cloud Foundry notes.
More information:
● v266
● v267
● v268
Enhancement
More information:
● Diego v1.20.0
● Diego v1.21.0
● Diego v1.22.0
● Diego v1.23.0
New
Based on the OAuth 2.0 standard, the Cloud Foundry environment provides platform security functions such as business-user authentication, authentication of applications, and authorization management. User Account and Authentication (UAA) is the central infrastructure component for authentication and authorization management. See User Account and Authentication Service of the Cloud Foundry Environment [page 2038].
Enhancement
Cloud Foundry environment has been updated from version 263 to 265.
More information:
● v264
● v265
More information:
● Diego v1.18.0
● Diego v1.18.1
● Diego v1.19.0
Enhancement
In the Cloud Foundry environment, you can use the cockpit to install additional components, and to restart or update your
SAP HANA MDC database systems.
Enhancement
Announcement
Security Releases for Node.js Buildpack
See all Cloud Foundry environment notes.
The Node.js project has released new versions across all of its active release lines (4.x, 6.x, 8.x as well as 7.x) to incorporate a
security fix. The Node.js buildpack is updated to v1.6.2 including fixed Node.js versions. All applications need to be re-staged
for the change to take effect. See Security updates for all active release lines, July 2017 .
Announcement
On 1 Aug 2017, support for TLS versions 1.0 and 1.1 will be discontinued. HAProxy will begin to accept only connections that
use TLS 1.2.
As of 7 Jul 2017, the client protocols TLS 1.1 and TLS 1.2 are enabled by default for all Java runtimes.
If you have running Java applications and want to switch them to using TLS version 1.1 or 1.2, restart these applications.
On 20 Jul 2017, TLS 1.2 will become the default TLS version of HTTP destinations.
Recommendation: Test in advance all HTTP destinations that use the HTTPS protocol and have ProxyType=Internet: in the
destination, set an additional property TLSVersion=TLSv1.2 and make sure it works. See Server Certificate Authentication
[page 132].
To align with the industry best practices for security and data integrity, SAP SuccessFactors will disable the support for TLS
1.0. To check which extension applications are affected, see Adapting SAP Cloud Platform extensions after TLS change in
SAP SuccessFactors .
Recommendation: Ensure in advance that your applications can communicate with SAP SuccessFactors SFAPI and OData
APIs using TLS versions 1.1 or 1.2.
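As a hedged illustration of the client side of these TLS changes, Python's ssl module can enforce a TLS 1.2 minimum, so a handshake fails against endpoints that offer only TLS 1.0 or 1.1:

```python
import ssl

# Build a client-side context that refuses anything below TLS 1.2,
# mirroring the server-side change described above. A handshake with
# this context fails against endpoints offering only TLS 1.0/1.1.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# For example (not executed here):
#   import urllib.request
#   urllib.request.urlopen("https://example.com/", context=context)
```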
Enhancement
Release of SAPUI5 distribution 1.46.9 for Java and HTML5 applications. See What's New in SAPUI5 and Change Log.
Enhancement
Three new commands allow you to manage deployed solutions: list-mtas [page 1926], display-mta [page 1869], and delete-mta [page 1850].
The Cloud Foundry API is protected by a rate limit against misuse. The limit is in the range of a few tens of thousands of requests per hour per user.
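A client that calls the API in bulk might guard itself with a simple hourly counter. This is an illustrative sketch; the 10,000-per-hour default is an assumption for the example, not a documented quota:

```python
import time

# Illustrative client-side guard for staying under an hourly API rate
# limit. The 10,000-per-hour default is an assumption, not a quota
# documented by the platform.
class HourlyLimiter:
    def __init__(self, limit_per_hour: int = 10_000):
        self.limit = limit_per_hour
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 3600:  # start a new hourly window
            self.window_start, self.count = now, 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False

limiter = HourlyLimiter(limit_per_hour=2)
print([limiter.allow() for _ in range(3)])  # [True, True, False]
```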
Enhancement
A new endpoint retrieves a single application (including scopes and attributes), identified by a client credentials token classifying an XSApp:
New
Cloud Foundry Route Services have been enabled. Recommendation: Test your clients that request connections through the HAProxy. Users can implement route services that preprocess application requests via the route service models user-provided service and fully-brokered service. Only space-scoped service brokers are supported. See Route Services .
Enhancement
Cloud Foundry environment has been updated to version 263. See v263 .
Enhancement
New
Security
You can use an SAP Cloud Platform Identity Authentication service tenant instead of SAP ID Service as an identity provider
for managing account members and login to the cockpit. Authentication with your own tenant for the console client is now in
beta phase. See Platform Identity Provider [page 2200].
Announcement
The Accept-Encoding HTTP header was removed from the default header whitelist and added to the excluded headers. With
this change, the header is no longer forwarded to backends but is instead managed by the HTML5 application. Adding the
header to the whitelist of the application does not have an effect any more. See Header Whitelisting [page 1285].
New
Authorization assignment UIs for business users are available. Platform users who have created a subaccount can administrate SAML identity providers, role collections, and authorization assignments for business users authenticated through the XSUAA service.
New
The App-AutoScaler provides the capability to automatically scale Cloud Foundry environment applications up or down through a set of rules based on application metrics that applications provide to the service. In the Beta version, the service will support a plan called "lite".
Application developers will be able to create policy documents with a maximum of 2 scaling rules per application.
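A policy document with the maximum of two rules might look like the following sketch. The field names are illustrative, not the service's actual schema; only the two-rule limit comes from the text above:

```python
# Sketch of an App-AutoScaler-style policy document. The field names
# are illustrative; the only constraint taken from the text is the
# limit of two scaling rules per application in the Beta "lite" plan.
MAX_RULES = 2

policy = {
    "instance_min_count": 1,
    "instance_max_count": 4,
    "scaling_rules": [
        {"metric": "memoryused", "threshold": 80, "operator": ">", "adjustment": "+1"},
        {"metric": "memoryused", "threshold": 30, "operator": "<", "adjustment": "-1"},
    ],
}

def validate(doc: dict) -> None:
    """Reject policy documents that exceed the rule limit."""
    if len(doc.get("scaling_rules", [])) > MAX_RULES:
        raise ValueError(f"at most {MAX_RULES} scaling rules are allowed")

validate(policy)  # two rules: accepted
```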
Enhancement
● v262
● v261
● v260
● v259
● v1.16.1
● v1.16.0
● v1.15.3
● v1.15.2
● v1.15.1
● v1.15.0
Enhancement
SAPUI5
Release of SAPUI5 distribution 1.46.7 for Java and HTML5 applications. See What's New in SAPUI5 and Change Log.
Enhancement
The Console Client commands for configuring SAP SuccessFactors extension applications (under the Extensions group) are
globally available, except for the commands related to the Home Page tiles. See Console Client Commands [page 1799].
Enhancement
Due to an incompatibility of the used Erlang version with the underlying stemcell, Erlang had to be updated from version 16 to version 19. The RabbitMQ version remains on 3.6.9.
Please delete all RabbitMQ services with service plan name v3.6.2-container, v3.6.5-container, v3.6.9-1-container, v3.6-container, v3.6.9-1-dev, and recreate these services using the service plan v3.6-dev until the following dates:
● AWS-Canary: 29.06.2017
● AWS-LIVE (EU10, US10): 11.07.2017
● Azure-LIVE: 13.07.2017
With this update, all RabbitMQ Docker containers still running with Erlang 16 will be purged.
Deprecation
New
New
In the launchpad configuration cockpit, the Transport Manager provides export and import functionality. It enables administrators to transport site content between subaccounts, landscapes, and regions. See Transporting Content of a Launchpad Site.
New
Developers can use the cross-app state functionality. The AppState functionality is exposed via the CrossApplicationNavigation service.
Announcement
Deprecation
● TLSv1.0 and TLSv1.1 are deprecated. The HAProxy in cf-release always has TLS enabled and accepts connections using TLSv1.0, TLSv1.1, or TLSv1.2. For security reasons, support in these components for all versions except TLSv1.2 will stop by 01 August 2017.
● The old and now deprecated stack for application logging is still available under: https://logs-deprecated.cf.<domain>. For the new Application Logging Service, it is mandatory that you bind your application as described in the documentation. See .
● The deprecated application logging endpoint https://logs-deprecated.cf.<domain> will be removed with the next update on 30 May 2017.
Software Logistics
You can configure rolling update for the deployment of your Java applications in the MTA deployment descriptor of your MTA
archive. See Multi-Target Applications [page 1292].
Maintenance
Certificates Renewal
The certificate for the SAP Cloud Platform Australia (Sydney) region *cert.ap1.hana.ondemand.com has been renewed. The Certificate Authority is changed from Baltimore/Verizon to VeriSign/Symantec.
Enhancement
SAP Cloud Platform now supports the Cloud Foundry environment as a generally available platform environment, next to the existing environment that is now called the Neo environment. The Cloud Foundry environment runs on multiple new regions and infrastructures which are visible to users in the SAP Cloud Platform Cockpit. This release also includes a changed onboarding and provisioning process: users can now acquire quota for Cloud Foundry environment resources and services, and then use the available quota on the available Cloud Foundry environment regions via a self-service in the cockpit. The Cloud Foundry environment provides additional choices for runtimes (also called buildpacks) and services.
The SAP Cloud Platform service catalog is now accessible for anonymous users and contains many new services supported on the Cloud Foundry environment.
More details on the included changes can be found in following blog posts:
Deprecation
SAP Cloud Platform, Starter Edition for Cloud Foundry Environment Services (Beta)
The SAP Cloud Platform, Starter Edition for Cloud Foundry Environment Service
New
Get Support
There are new options available when creating a support incident [page 2280]. When selecting a product and installation
type, you can choose the following options depending on the contract:
Enhancement
SAPUI5
Release of SAPUI5 distribution 1.44.13 for Java and HTML5 applications. See What's New in SAPUI5 or Change Log.
Enhancement
● v257
● v256
● v255
● v254
Enhancement
● v1.13.0
● v1.12.0
● v1.11.0
● v1.10.0
● v1.9.0
The XS advanced model provides support for business applications consisting of multiple software “modules” that are implemented as separate Cloud Foundry environment applications and “resources” mapped to Cloud Foundry environment backing services.
Deploy Service allows application developers and operators to perform operations on multi-target applications in the Cloud Foundry environment, such as deploying, removing, and viewing. See Multi-Target Applications [page 1292].
Enhancement
Enhancement
A new default Application Logging service replaces and deprecates the old stack (which is still available). To start using the new logging service, bind your application as described in .
Initial revision of the SAP Cloud Platform SAP HANA Service, which sets up and manages SAP HANA databases and binds them to cloud applications. The service provides in-memory relational data persistence to applications running on SAP Cloud Platform. It processes transactions and analytics in-memory on a single data copy, to deliver real-time insights from live data. The service offers advanced data processing for business, text, spatial, graph, and series data to give unprecedented insight.
Enhancement
MongoDB is available with the following plans: v3.0-dev, v3.0-xsmall, v3.0-small, v3.0-medium, v3.0-large.
Enhancement
PostgreSQL is available with the following plans: v9.4-dev, v9.4-dev-large, v9.4-xsmall, v9.4-small, v9.4-medium, v9.4-large.
Enhancement
Redis is available with the following service plans: v3.0-dev, v3.0-dev-large, v3.0-xsmall-single-node, v3.0-small-single-node,
v3.0-medium-single-node, v3.0-large-single-node, v3.0-xxlarge-single-node, v3.0-xsmall, v3.0-small, v3.0-medium, v3.0-
large, v3.0-xxlarge.
Enhancement
RabbitMQ is available with the following service plans: v3.6-dev, v3.6-xsmall, v3.6-small, v3.6-medium, v3.6-large.
ApplicationObjectStore is available with the s3-standard plan. A Cloud Foundry environment application can use the service to create a storage space and perform object-related operations within it on AWS S3.
New
Region
New
Software Logistics
You can:
● Deploy and monitor solutions in multi-tenant mode, and create entitlements for the accounts allowed to subscribe to that solution.
● View the list of accounts subscribed to it.
● Subscribe to a solution for which your account is granted entitlement. See .
Enhancement
Software Logistics
You can model destinations, security groups, and role assignments by using an MTA deployment descriptor. See Multi-Target
Applications [page 1292].
Enhancement
Java Runtime
The SAP Cloud Platform Java Web Tomcat 7 runtime has been updated to Tomcat version 7.0.77. Changelog: http://tomcat.apache.org/tomcat-7.0-doc/changelog.html#Tomcat_7.0.77_(violetagg) .
Enhancement
Java Runtime
The SAP Cloud Platform Java Web Tomcat 8 runtime has been updated to Tomcat version 8.5.13. Changelog: http://tomcat.apache.org/tomcat-8.5-doc/changelog.html#Tomcat_8.5.13_(markt) .
Java Runtime
The Java Web runtime is deprecated. Support will be discontinued after 31 December 2017. We recommend using Java Web
Tomcat 8. See Java Web [page 1154].
Enhancement
Application Management
Generally available for productive use: Access to Java and HTML5 Applications can be configured via an on-premise reverse
proxy. At the same time, the SAML 2.0 authentication can be preserved with browser redirect. See Configuring Application
Access via On-Premise Reverse Proxy [page 1759].
Enhancement
Custom Domains
The change-domain-certificate command lets you change the domain certificate of a custom domain in one step
instead of executing both the unbind-domain-certificate and bind-domain-certificate commands. See
change-domain-certificate [page 1812].
Enhancement
SAPUI5
Release of SAPUI5 distribution 1.44.12 for Java and HTML5 applications. See What's New in SAPUI5 or Change Log.
Announcement
Beginning 20 May 2017, the standard regular maintenance window for SAP Cloud Services will be harmonized and some
SAP Cloud Services will start to offer Zero Downtime Maintenance, which means that these services will remain online while
regular maintenance is being performed. See New Maintenance Windows for the SAP Cloud Platform Services .
Enhancement
Access to Java and HTML5 Applications can be configured via an on-premise reverse proxy. At the same time, the SAML 2.0
authentication can be preserved with browser redirect. See Configuring Application Access via On-Premise Reverse Proxy
[page 1759].
SAP HANA
SAP HANA revision 122.08 is now supported. See SAP Note 2021789 - SAP HANA Revision and Maintenance Strategy .
Enhancement
Eclipse Tools
You can no longer set up a default SDK in Eclipse. You can specify the SDK only when defining a new runtime. See Set Up the
Runtime Environment [page 1131].
This change was introduced because when you set up a default SDK for a specific runtime and then defined a different runtime, the default SDK was automatically assigned. This led to misconfigurations and errors that prevented you from setting up the correct SDK. Now, you specify the SDK only for a specific runtime.
New
Enhancement
Users can manage jobs and schedules from the service dashboard of the Job Scheduler.
Deprecation
The DEA runtime has been switched off and all DEA applications have been migrated to the Diego runtime. See Product
Scope.
Announcement
Fix
Authorization for API calls is based on the User Account and Authentication (UAA) client_id or client_secret.
Announcement
The DEA runtime will be retired on 18 April 2017. All applications still running on DEA will be migrated to Diego.
Diego was introduced as the default container-management system on 21 March 2017. See Product Scope.
Announcement
Documentation
SAP Cloud Platform documentation has officially been moved to help.sap.com and will no longer be available on
help.hana.ondemand.com. All links to help.hana.ondemand.com will redirect to the new SAP Help Portal. See SAP Cloud
Platform.
Announcement
SAP HANA
SAP Cloud Platform, smart data streaming has been renamed to SAP Cloud Platform, streaming analytics.
Enhancement
SAP HANA
Enhancement
Cockpit
You can browse all the services available on the platform without logging on; select Services from the Navigation pane on the
left.
SAPUI5
Release of SAPUI5 distribution 1.44.10 for Java and HTML5 applications. See What's New in SAPUI5 or Change Log.
Maintenance
Certificates Renewal
The certificate for SAP Cloud Platform AP1/Australia landscape *.ap1.hana.ondemand.com has been renewed. The Certifi-
cate Authority is changed from Baltimore/Verizon to VeriSign/Symantec. See Certificate Authority Change .
Bugfix
Software Logistics
Calling neo deploy-mta --synchronous with an MTA archive twice no longer results in a successful deployment but returns an
error for duplicate version. See deploy-mta [page 1862].
Enhancement
Cockpit
You can change the display name of the global account per data center from the Edit button of the global account tile.
Enhancement
SAPUI5
Release of SAPUI5 distribution 1.44.7 for Java and HTML5 applications. See What's New in SAPUI5 or Change Log.
Enhancement
SAP HANA
SAP HANA revision 122.07 is now supported. See SAP Note 2021789 - SAP HANA Revision and Maintenance Strategy . It
also contains security fixes, see SAP Security Patch Day – March 2017 and SAP Security Note 2424173 - Vulnerabilities in
the user self-service tools of SAP HANA .
Enhancement
SAP ASE
SAP ASE 16.0 SP02 PL05 HF1 is supported. See What’s new in SAP Adaptive Server Enterprise 16.0 SP02 .
Enhancement
Java Runtime
● Java Web Tomcat 8 runtime to Tomcat 8.5.11. See Tomcat 8.5.11 (markt) .
● Java Web, JavaEE 6 Web Profile, and Java Web Tomcat 7 runtimes to Tomcat 7.0.75. See Tomcat 7.0.75 (violetagg) .
Enhancement
Java Runtime
The default SAP JVM for the Java Web Profile has been changed from version 6 to version 7. See Application Runtime Container [page 1153].
You can pin your Java version to 6 by using the --java-version parameter of the deploy console command. See deploy
[page 1856].
Enhancement
The user interface for managing OAuth clients to access platform APIs has been enhanced. You can now manage the OAuth
scopes for each individual client, and view or delete existing clients. See Using Platform APIs [page 1289].
New
SAP HANA
Smart data streaming 1.0 SPS 11 revision 112 patch 8 is now available. This new version supports SAP HANA 1.0 SPS 12.
Announcement
Rebranding
SAP HANA Cloud Platform has been renamed to SAP Cloud Platform. See SAP Cloud Platform – New name, new dimensions .
Announcement
Diego is planned as the default container-management system from 21 March 2017. See Product Scope.
Enhancement
HTML5 Applications
The default HTTP header whitelist has been extended with the Slug header of the Atom Publishing Protocol . The Slug
header is used in an HTTP POST request and influences the URL of the created resource. With this enhancement, the header
is always forwarded in the communication with backends, and you do not need to specify the Slug header in the header
whitelist of the application. See Header Whitelisting [page 1285].
Announcement
HTML5 Applications
The X-Forwarded-Proto HTTP header has been removed from the default header whitelist. With this change, the header is no longer forwarded to backends, unless there is an entry in the header whitelist of the application. See Header Whitelisting [page 1285].
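The whitelist behavior described above can be sketched as a simple header filter. The default whitelist below is an illustrative subset, not the platform's actual list:

```python
# Sketch of header-whitelist forwarding: only whitelisted headers reach
# the backend. The default whitelist here is an illustrative subset,
# not the platform's actual list.
DEFAULT_WHITELIST = {"accept", "content-type", "slug"}

def forward_headers(headers: dict, app_whitelist=()) -> dict:
    """Keep only headers allowed by the default or application whitelist."""
    allowed = DEFAULT_WHITELIST | {h.lower() for h in app_whitelist}
    return {k: v for k, v in headers.items() if k.lower() in allowed}

incoming = {"Accept": "application/json", "X-Forwarded-Proto": "https"}
print(forward_headers(incoming))                                       # header dropped
print(forward_headers(incoming, app_whitelist=["X-Forwarded-Proto"]))  # forwarded
```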
Enhancement
SAP HANA
SAP HANA revision 122.06 is now supported. See SAP Note 2021789 - SAP HANA Revision and Maintenance Strategy .
Enhancement
Java Applications
The Java application overview page features charts that show a quick snapshot of the last 24 hours of the app: number of
requests and CPU consumption.
Enhancement
HTML5 Applications
The /attributes endpoint of the user API can be set to return multiple values. Set the multiValuesAsArrays URL query
parameter to true to return a multivalued attribute formatted as a JSON array. Set the parameter to false to return only the
first value of the entire value range of the specific attribute formatted as a simple string. The output of single-valued user
attributes is not affected. See Accessing the User API [page 1280].
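The parameter's effect can be simulated like this. The attribute values are hypothetical, and the real endpoint returns JSON over HTTP rather than Python objects:

```python
# Simulation of the documented behavior: with multiValuesAsArrays=true a
# multivalued attribute is returned as a JSON array; otherwise only its
# first value is returned as a plain string. Values are hypothetical.
def format_attribute(values, multi_values_as_arrays: bool):
    return values if multi_values_as_arrays else values[0]

groups = ["admins", "developers"]
print(format_attribute(groups, True))   # ['admins', 'developers']
print(format_attribute(groups, False))  # admins
```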
HTML5 Applications
When forwarding HTTP requests to backends, an application transfers the body for all supported request types. Previously, bodies of DELETE and HEAD requests were dropped. See Accessing REST Services [page 1275].
Enhancement
SAP ASE
SAP ASE revision 16.0 SP02 PL05 is now supported. See Version Update - 16.0 SP02 PL05.
New (Beta)
Security
You can use an SAP Cloud Identity tenant instead of SAP ID Service as an identity provider for cockpit access to your account. See Platform Identity Provider [page 2200].
Deprecation
As announced in the release notes in November 2016, the old version of the Docker-based service plans of Redis, MongoDB,
RabbitMQ and PostgreSQL will be removed completely on January 27th, 2017. This means the old service plans and related
service instances will then be dropped and the stored data will be lost. You are affected if cf service
<service_instance_name> returns -deprecated in the service name.
Enhancement
SAP HANA
SAP HANA revision 122.05 is now supported. See SAP Note 2021789 - SAP HANA Revision and Maintenance Strategy .
Announcement
Documentation
Hyperlinks
Some links are classified by an icon and/or a mouseover text. These links provide additional information.
About the icons:
● Links with the icon : You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your agreements
with SAP) to this:
● The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
● SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness. SAP shall not be liable for any
damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.
● Links with the icon : You are leaving the documentation for that particular SAP product or service and are entering an SAP-hosted Web site. By using such links, you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this information.
Example Code
Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax and
phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of example
code unless damages have been caused by SAP's gross negligence or willful misconduct.
Gender-Related Language
We try not to use gender-specific word forms and formulations. As appropriate for context and readability, SAP may use masculine word forms to refer to all genders.