We have been asked a couple of times to give some insights about the underlying technical landscape and the development processes used for the new SAP Help Portal platform. So here we go:

General overview

The SAP Help Portal has some differences compared with most other SAP applications. The speed dating profile of the SAP Help Portal looks like this:

[Image: speed dating profile of the SAP Help Portal]

Additionally, we are following the DevOps movement to get a better integration of development and operational aspects.

Infrastructure layer


The SAP Help Portal is hosted on SAP IT’s own private cloud called “Monsoon”, which is NOT an SAP product. It’s built internally by SAP IT to speed up and improve the efficiency of our own project delivery. For the SAP Help Portal, Monsoon provides a lot of services out of the box:

  • Server provisioning
  • System landscape orchestration
  • Configuration management
  • Continuous integration and deployment support
  • Monitoring and alerting capabilities
  • Integrated access to SAP HANA for reporting
  • Backup for database servers

The virtual servers we are using run Red Hat Enterprise Linux as the operating system. The standard operating system template is provided by Monsoon. After the template is applied, it’s enhanced with a lot of additional configuration. These enhancements are created by the SAP Help project itself and are mixed into the existing Monsoon configuration management capabilities.

The SAP Help Portal platform consists of 4 different landscapes to ensure proper quality. QA is done for the application development, the system configuration, and the SAP Help Portal content.

Help Portal Landscapes


As mentioned above, we are using 4 landscapes at the moment, each for a different purpose. The servers used for the SAP Help Portal are technically very similar: all of them are small machines with 2 GB of RAM and 2 CPUs. Disk space is 20 GB for all non-database machines and 50 GB for the database machines.

Build landscape (3 servers)

The build landscape is used for regression testing and for controlling the deployment pipeline. The pipeline enables the continuous delivery approach we are following in the SAP Help Portal project. The build landscape consists of 3 machines: 2 build/test servers plus 1 configuration server. The build servers run the regression tests and drive the execution of the deployment pipeline. The pipeline itself is defined inside an Atlassian Bamboo server, which is part of the Monsoon cloud service offering. The Bamboo server delegates all pipeline execution steps to the build agents, which are installed on the 2 build servers.

Stage landscape (3 servers)

The stage landscape is used by the content editors to verify their content development. It consists of 2 standalone application servers plus the obligatory configuration server. Each of the application servers has the full stack installed, so they contain nginx, MongoDB, rvm, the Ruby programming language, etc.

QA landscape (7 servers)

The QA landscape is a clone of the production landscape, with the same setup. It is used by the content editors and the QA team to review new features and to verify the content rendering on the new SAP Help Portal.

Production landscape (7 servers)

The production landscape consists of 2 application servers that are responsible for the http(s) requests from the end users. An F5 BIG-IP load balancer distributes the load between them and supports zero downtime deployments. A batch instance helps the 2 application servers with CPU-consuming processes like the PDF generation feature; the creation of PDF documents is done asynchronously on that dedicated instance. The MongoDB database is built as a replica set consisting of 3 members to ensure high availability. Together with the obligatory configuration server, we get 7 servers for the production landscape.
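
To give an idea of what such an asynchronous job might look like, here is a minimal sketch using the pdfkit gem (mentioned further below); the job class and its interface are hypothetical, not our actual code:

    require 'pdfkit'

    # Hypothetical background job running on the batch instance:
    # renders an already-prepared HTML string to a PDF file.
    class PdfExportJob
      def self.perform(html, target_path)
        kit = PDFKit.new(html, page_size: 'A4')  # pdfkit shells out to wkhtmltopdf
        kit.to_file(target_path)                 # write the finished PDF to disk
      end
    end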

So in total we have about 20 machines for the SAP Help Portal platform. That seems to be quite a lot, but if we take a closer look we notice that most of the servers support the high availability, the QA processes, and the software development approach. The “real” work, in terms of serving the help portal content and its features to the end users, is done by 3 servers (one application server, the batch server, and one database server). We figured out that this setup alone would have had enough capacity to serve most of the actual user load with response times below one second per click.

Configuration management

You might have wondered about the configuration servers in each of the landscapes. They are used to configure the application and database servers inside the corresponding landscape. Configuration here means that software installations and upgrades, plus the deployment of application code, are done in an automated way with the help of Opscode Chef configuration management. All of the steps needed to create an application server with nginx/passenger and monitoring agents are performed fully automatically by the Opscode Chef toolset. The application deployment and the installation and configuration of the MongoDB servers are also done with Chef. None of the systems was or will ever be changed manually. I’m not kidding! This is one of the major principles we’re following: if a server setup or a piece of installed software has to be changed, you have to change the relevant configuration recipe. The next configuration run will apply that change to all of the relevant machines. The recipes themselves are written in a specific Chef DSL on top of the Ruby programming language.
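
For readers who haven’t worked with Chef: a recipe for one piece of such a setup, like nginx, could look roughly like the following sketch. The template name and the notification are illustrative assumptions, not our actual recipe:

    # Minimal Chef recipe sketch: install nginx and keep it running.
    package 'nginx'                       # install via the OS package manager

    template '/etc/nginx/nginx.conf' do   # render the config from an ERB template
      source 'nginx.conf.erb'             # template name is an assumption
      notifies :reload, 'service[nginx]'  # reload nginx whenever the config changes
    end

    service 'nginx' do
      action [:enable, :start]            # start now and on every reboot
    end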

SAP IT’s Monsoon already provides a lot of generic recipes for the installation of various software components like the Apache and Nginx web servers or the MySQL and MongoDB databases. All of them are hosted in a private Github Enterprise installation. A project like the SAP Help Portal has to make some adjustments to the generic recipes, like overriding parameters. Additionally, we have to create all of the project-specific scripts. In the end, everything that is related to our landscape and server definitions, the Chef configuration recipes, and the application code for the SAP Help Portal platform is pushed into the Github Enterprise source code control system. With that, we get a lot of transparency about our system changes because of the versioning capabilities of Github. In case of an emergency we can roll back to the last good commit and reapply it to the servers.
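
Overriding parameters of such a generic recipe is typically done via Chef attributes, for example in a role definition. A sketch with made-up names and values:

    # Hypothetical Chef role for a Help Portal application server.
    name 'help_app_server'
    description 'Application server for the SAP Help Portal'
    run_list 'recipe[nginx]', 'recipe[help_portal]'  # recipe names are assumptions

    default_attributes(
      'nginx' => {
        'worker_processes'  => 2,   # match the small 2-CPU virtual machines
        'keepalive_timeout' => 65
      }
    )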

That points to another goal on our side: we have to be able to rebuild the complete SAP Help Portal platform based on the code inside Github and the database backup made from the MongoDBs. The application and database servers themselves are not backed up at all, so a classical full system backup doesn’t exist. In case of a corruption, the platform will be rebuilt by running the Chef tool on all of the servers. Afterwards the backup will be restored on the database cluster.
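
Conceptually, such a rebuild boils down to two steps (the hostname and backup path below are made up):

    # 1. Re-converge every server from the recipes in Github:
    chef-client
    # 2. Restore the last MongoDB backup onto the new database cluster:
    mongorestore --host db1.example.corp /backups/helpportal/latest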

Application development

Let’s first discuss the development process and the tools that are used before we come to the Continuous Integration (CI) and Continuous Deployment process. That way we make sure that we have at least something that we can integrate and deploy later on 😉 .

I will now describe some of the tools and software libraries (gems) we are using for the SAP Help Portal project at the moment. If you’re not a Ruby developer, this section might be a little bit confusing.

As mentioned in the beginning, the SAP Help Portal application is developed with Ruby as the main development language. Additionally, we are using Ruby on Rails as our web development framework to make our lives much easier. The development is done locally on Apple MacBooks, and the application has to be developed in a way that it runs either on a developer laptop or on a hosted Linux server.

To start all the needed applications locally, we’re using a tool called foreman. It starts the MongoDB and the Rails server. The nginx/passenger application server is not needed locally for application development.
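
foreman reads the processes to start from a so-called Procfile; ours could look roughly like this (the port and data path are assumptions):

    # Procfile sketch: one line per process that foreman should manage
    web: bundle exec rails server -p 3000
    db:  mongod --dbpath ./data/db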

As mentioned before, the code is versioned locally with Git and can be pushed to the private Github Enterprise installation that is part of SAP IT’s Monsoon. 

We are doing the library management for the Ruby gems with the bundler tool, and most of us have the Ruby Version Manager rvm installed to manage different Ruby versions and project-specific gem sets. All of that is also installed on the Linux machines. Some of the most prominent Ruby gems we’re using are listed here, with a Gemfile sketch after the list:

  • haml: for the templating of the views
  • sass: for CSS development
  • mongo_mapper: object mapper for MongoDB
  • nokogiri: XML/HTML parser
  • pdfkit: create PDFs with the help of wkhtmltopdf
  • rspec, cucumber and capybara: for behavior-driven testing
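
With bundler, these dependencies are declared in a Gemfile. A minimal sketch (version constraints omitted, since the exact ones are project-internal):

    # Gemfile sketch for an application like the Help Portal
    source 'https://rubygems.org'

    gem 'rails'
    gem 'haml'          # view templating
    gem 'sass'          # CSS development
    gem 'mongo_mapper'  # object mapper for MongoDB
    gem 'nokogiri'      # XML/HTML parsing
    gem 'pdfkit'        # PDF creation via wkhtmltopdf

    group :test do
      gem 'rspec-rails'
      gem 'cucumber-rails'
      gem 'capybara'
    end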

The development process is easy to describe:

The developer clones or pulls the source code from the Github remote repository, starts all needed applications like the Rails server and the DB with the foreman tool, writes new tests if needed, and develops or fixes the application code as intended. When done, he tests the changes locally and runs the test suite. After that, the code changes are committed to the local Git repository and pushed to the Github remote. Sometimes changes have to be merged with other parallel developments before the push gets accepted by the Github repository.
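
In terms of commands, one iteration of that cycle might look like this (the commit message is obviously just an example):

    git pull origin master            # fetch the latest code
    foreman start                     # boot MongoDB and the Rails server
    bundle exec rspec                 # run the test suite locally
    git commit -am "Fix topic rendering"
    git push origin master            # may require a merge first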

That’s it for a development cycle. But how does the change come to the QA and production landscape?

Continuous integration and deployment

With continuous integration, developers have the ability to merge their changes continuously into a main code line. At the end of the development process described above, the code was pushed by the developer to the Github repository. The SAP IT Monsoon CI server detects these changes periodically. When a change is detected, it starts the deployment pipeline associated with that repository. The SAP Help Portal deployment pipeline first triggers the build landscape to sync all the code changes and to run the various RSpec and Cucumber tests on the build servers.

If no errors are detected, the pipeline pushes the Github master branch to the QA branch. That is done for both the application and the configuration code. Afterwards, the pipeline triggers the repository sync on the QA configuration server. When this is done, the pipeline starts a so-called “chef client run” on the servers inside the QA landscape. The client run executes all of the relevant Chef recipes on the servers, but only where differences are detected. On each application server the deploy recipe is one of the last to run. This recipe syncs the application code onto the server from the Github QA branch of the application repository. Afterwards the Ruby bundler tool is executed to install the needed gem libraries. At the end, a short restart of some components might be necessary, depending on the nature of the applied changes. The last step on the QA landscape is a test that ensures that the landscape is up and running after all configurations and deployments have been applied.
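
Such a deploy recipe can be sketched with Chef’s deploy resource; the repository URL, user, and restart command below are assumptions:

    # Sketch of a deploy recipe pulling the QA branch onto an app server.
    deploy '/srv/help_portal' do
      repo 'git@github.example.corp:help/portal.git'  # URL is an assumption
      revision 'qa'                                   # the branch promoted by the pipeline
      user 'deploy'                                   # deployment user is an assumption
      restart_command 'touch tmp/restart.txt'         # passenger-style restart
    end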

When that test is successful, the same steps are performed for the production landscape: branching to the production branch, updating the production configuration server, applying all configuration/deployment steps, and finally testing to make sure the landscape is running properly.

To avoid any downtime, the deployment on production is done in two steps, so that one portion of the servers is still up and running to fulfill ongoing requests while the other part is deployed.
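
In pseudocode, with an entirely hypothetical load balancer API (the real steps go through the F5 BIG-IP), the two-step rollout looks like this:

    # Hypothetical sketch of the zero-downtime rollout, one half at a time.
    ['app1', 'app2'].each do |server|
      load_balancer.drain(server)    # hypothetical call: stop routing new requests
      deploy_to(server)              # run the Chef deploy recipe on that server
      smoke_test(server)             # verify the server answers correctly again
      load_balancer.enable(server)   # put it back into rotation
    end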

Monitoring and Alerting

All servers in the production landscape, and all of the software components running on them, are monitored automatically by one of SAP IT’s Hyperic servers. The configuration for that is also done automatically with Chef recipes.

Log files written by the application, database, or web server are collected inside a central Splunk landscape. In case of a Hyperic monitoring alert, the relevant logs can be searched and read inside Splunk. With that, there is no need to log into a server manually to get access to the logs.

Reporting

The reporting is done with, have a guess what, SAP HANA. Thilo Brand has written a blog about the import of Apache logs into SAP HANA. The import of the SAP Help Portal web statistics data is done in a similar way. In addition, the HANA team has built some analytical dashboards for the SAP Help Portal business end users on top of Business Objects Explorer. Those dashboards give a quick overview of the most relevant help statistics, like access by country or the most visited pages/topics, and are accessible in a browser as well as on a mobile device like an iPhone or iPad.

Search


Search is a big topic for the SAP Help because of the massive amount of documentation. The SAP Help Portal contains more than 12.000.000 html pages, and sometimes it’s not so easy to find the right one by navigation. So in some cases you have to search for it, and this is done by SAP IT’s Search platform. It provides an excellent search for most of the internal and external SAP web sites. The SAP Help Portal uses the same search platform as SCN. Some more information on the SCN search can be found inside the following blogs: SCN Search enhancements, How to use SCN Search, and OpenSearch on SCN.

The search crawlers for the SAP Help Portal were created by the project itself. They are scheduled regularly by a Linux cron job and continuously look for content changes. The crawlers maintain their own history log to distinguish already indexed content from newly changed content. A crawler extracts the metadata from a changed content object and sends it together with the URL to the search indexer; we’re using REST-based https calls for that. The indexer later requests the page content and indexes it together with the provided metadata.
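
The handover to the indexer can be pictured like this; the endpoint URL and the metadata fields are assumptions, not the real interface:

    require 'net/http'
    require 'json'
    require 'uri'

    # Sketch: notify the search indexer about one changed help page.
    uri = URI('https://search.example.corp/index')  # endpoint is an assumption
    payload = {
      'url'     => 'https://help.sap.com/some/changed/page.htm',
      'title'   => 'Example topic title',           # metadata fields assumed
      'product' => 'SAP NetWeaver'
    }

    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    request = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
    request.body = payload.to_json
    http.request(request)                           # the REST call over https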

Content store and web content management system


Another challenge for the SAP Help Portal is the content storage and the maintenance processes around it. The Help Portal hosts hundreds of SAP documentation sets created over the past years. In total, the SAP Help Portal provides more than 12.000.000 html pages to the public. All of them are stored on a highly available NetApp filer. The filer is accessible via the nfs and smb protocols, so all help application servers mount the share to serve the content to the public.
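
With Chef, mounting that share on every application server is a single resource; the filer hostname and paths below are made up:

    # Sketch: mount the NetApp content share on an application server.
    mount '/srv/help_content' do             # local mount point is an assumption
      device 'filer.example.corp:/vol/help'  # NFS export, hostname is made up
      fstype 'nfs'
      options 'ro,intr'                      # read-only serving is a guess
      action [:mount, :enable]               # mount now and persist in /etc/fstab
    end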

The content is available “as is”, so the SAP Help Portal uses the same html pages that are delivered with the corresponding SAP product on a DVD/CD. The Help Portal application transforms these html files to render them in a new style. It also does a lot of analysis of the navigation structure to provide the new navigation approach. At the moment only a few SAP product documentations are rendered in the new style together with the navigation approach, but it’s planned to present more and more of the existing documentation in that way. A good example of a “new” documentation is http://help.sap.com/saphelp_nw73ehp1/helpdata/en/ca/6fbd35746dbd2de10000009b38f889/frameset.htm.


In contrast to that, the old approach can be found here, for example:

http://help.sap.com/saphelp_nw04/helpdata/en/e1/8e51341a06084de10000009b38f83b/frameset.htm.
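
The on-the-fly restyling that produces the “new” view can be imagined roughly as follows with nokogiri; the selectors and the wrapper markup are invented for illustration:

    require 'nokogiri'

    # Sketch: extract the body of a legacy help page and re-wrap
    # it in the new portal layout (all selectors are assumptions).
    def restyle(legacy_html)
      doc  = Nokogiri::HTML(legacy_html)
      body = doc.at_css('body')                 # grab the original page content
      body.css('font, center').each do |node|   # strip legacy styling tags
        node.replace(node.inner_html)
      end
      "<div class='help-topic'>#{body.inner_html}</div>"  # new-style wrapper
    end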

For more information about the new SAP Help Portal and its new features, you can have a look at the following blogs: “Help Portal Transformation – Part 1” and “Help Portal Transformation – Part 2”.
