In heterogeneous environments, it is commonly required to analyse huge amounts of logs generated by various systems – and it is convenient to manage these logs centrally in order to avoid the overhead of accessing the local log viewing tools of each affected system. Generally speaking, there are several approaches to populating a centralized log management system with logs produced by backend systems:
- Poll logs: the backend system generates and persists logs locally, and the centralized log management system collects (polls) and processes the generated logs periodically or ad hoc (in real time, on user demand);
- Push logs: the backend system generates logs and sends (pushes) them to the centralized log management system.
In this blog, I would like to focus on the second approach and describe one of its possible implementations suitable for SAP AS Java systems (for example, SAP Process Orchestration or Enterprise Portal) using standard APIs shipped with AS Java – namely, functionality of the SAP Logging API. To make the example concrete, let us consider a scenario where some application of an SAP PO system (in the real world, it can be a mapping, an adapter or adapter module, or some other application deployed on AS Java) generates logs, and our intention is to propagate these logs to a JMS broker (for example, to a specific JMS queue hosted on that broker), which is used by the centralized log management system to parse and process log records later on. One may think of communication techniques other than JMS – using the approach discussed in this blog, the solution can be adapted to particular needs and communication techniques. JMS has been chosen for demonstration purposes as a commonly used technique for building distributed solutions.
Some 3rd party logging frameworks implement an approach that decouples the log producer (the application which utilizes a logger and creates a log record) from the log consumer (the application which processes logs) and have capabilities of propagating generated log records to destinations other than a local console or file. For example, one of the commonly used logging APIs – Apache Log4j – introduces a concept of appenders, which are components delivering the generated log record to a specific destination. A destination may be a console, a file, a JMS queue/topic, a database, a mail recipient, syslog, some arbitrary output stream, etc. It is possible to deploy such a 3rd party logging library to an AS Java system and utilize its functionality, but as stated above, the goal of this blog is to describe a solution where SAP standard functionality is employed, so usage of 3rd party logging frameworks is out of scope of this blog.
Overview of log destinations in SAP Logging API
The architecture and main components of the SAP Logging API are well described in SAP Help: SAP Logging API – Using Central Development Services – SAP Library. The aspect that is important for us in the scope of this blog is the way the logging framework sends log records out. The component responsible for managing this process is the Log Controller. For each log location, it is possible to assign one or several logs, where a Log is a representation of the destination to which the assigned Log Controller will distribute log records generated for the specific location. In the SAP Logging API, there are several classes that implement logs and that may be of interest for us:
- ConsoleLog – used to write log records to System.err;
- FileLog – used to write log records to the specified file;
- StreamLog – used to write log records to an arbitrary output stream.
Log destinations can be configured in various ways:
- Programmatically from application source code (refer to Sample Java Code with Logging – Using Central Development Services – SAP Library);
- Using the SAP Logging API Configuration Tool and preparing a properties file that contains the logging configuration, which is then loaded at the application level. Together with the periodic reloading feature, this approach makes logging configuration very flexible, since parameterization is done in the file and doesn't require changes to the source code of the respective application in case the logging configuration should be modified. Refer to Configuration Tool – Using Central Development Services – SAP Library;
- Using AS Java ConfigTool and configuring the respective log destination. This approach can be used to configure additional destinations for FileLog and ConsoleLog. Refer to Adding, Editing and Removing Log Destinations – Monitoring – SAP Library.
There is a brief description of these log destinations in SAP Help: Log (Destination) – Using Central Development Services – SAP Library.
ConsoleLog is the simplest of them and is the least applicable when thinking of a centralized log management system.
FileLog can be of use when we need to output log records not to the default log files of AS Java, but to some specific file or a set of rotating files (potentially, to a location which is scanned by collectors of the centralized log management system). This may be helpful, for example, if we need to persist log records generated by some application in a specific dedicated file rather than in the common shared application log files. FileLog is described in several materials published on SCN, such as:
- Karsten Geiseler’s blog Netweaver Portal Log Configuration & Viewing (Part 3);
- YiNing Mao’s blog Handle Standard JAVA Output Using Log File;
- Jacek Wozniczak’s blog Logging in Web Dynpro (Java) – a guide and tutorial for beginners [Part 1];
- Iwan Zarembo’s Wiki page How to create own log files on a SAP NetWeaver AS Java 7.0 – CRM – SCN Wiki.
You may also find relevant information regarding usage and configuration of FileLog in SAP Help: Output File – Using Central Development Services – SAP Library.
In this blog, my focus will be on the log destination StreamLog, which helps fulfil the requirement formulated at the beginning.
For the sake of a simplified demonstration, the logging configuration will be implemented in the source code of the application.
In the demo scenario, Apache ActiveMQ is used as the JMS broker. A JMS queue named Logs has been registered there and is intended to be used as the destination for generated logs, so that log records are accumulated and persisted in that queue.
The entire set of operations on the logging system can be logically split into three lifecycle phases:
- Initialization of the log destination and corresponding output stream, followed by initialization of a logger which writes to it;
- Generation of log records and writing them to the log destination;
- Termination and closure of used resources.
As a part of initialization, it is first necessary to establish a connection to the log destination and open the output stream to it:
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(jmsBrokerUrl);
Connection jmsConnection = connectionFactory.createConnection();
jmsConnection.start();
Session jmsSession = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination jmsQueue = jmsSession.createQueue(jmsQueueName);
OutputStream os = ((ActiveMQConnection) jmsConnection).createOutputStream(jmsQueue);
Here, jmsBrokerUrl is a String holding the JMS broker URL (tcp://<host>:<port> for ActiveMQ – for example, tcp://activemqhost:61616) and jmsQueueName is a String holding the JMS queue name (in this example, Logs).
The next step is to initialize the logger:
Location logger = Location.getLocation(logLocationName);
logger.setEffectiveSeverity(Severity.ALL);
Formatter formatTrace = new TraceFormatter();
Log logJms = new StreamLog(os, formatTrace);
logger.addLog(logJms);
Here, logLocationName is a String holding the log location name (this can be an arbitrary meaningful name that, for example, identifies the log location in the application hierarchy).
Note that in this example, we used a simple trace formatter – based on requirements, it is possible to utilize a variety of other formatters in order to apply the required layout to the generated log records. For the sake of demonstration, the severity was explicitly set to ALL – depending on logging needs, this can also be adjusted accordingly.
The important part of this block is the creation of the Log object (which represents the JMS queue to which log records will be written) and adding this Log to the initialized logger. In this way, the logger gets instructed about the destination, or several destinations (if several Log objects are created and added to the logger), to which generated and filtered log records should be written.
After the two initialization blocks have executed successfully, we can create log records – in the simplest way, by calling the <severity>T() method of the Location object corresponding to the desired log record severity:
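For example, assuming the logger initialized above, log records of different severities could be produced as follows (the message texts and the payloadSize variable are illustrative, not part of any API):

```java
// Emit log records via the severity-specific ...T() methods of Location
logger.infoT("Message processing started");
logger.debugT("Payload size: " + payloadSize + " bytes"); // payloadSize: hypothetical variable
logger.errorT("Message processing failed");
```

Each such call produces a log record that passes through the severity filter and is distributed by the Log Controller to all Log objects added to the logger – in our case, the StreamLog backed by the JMS output stream.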
As a result, the created log record is sent to the queue hosted on the ActiveMQ server and can be observed there.
Attention should be paid to termination logic in case the application no longer needs this log destination – this is important in order to ensure there are no resource leaks (unclosed streams, sessions, connections, etc.). To be more precise, it is important to take care of closing the used output stream, the JMS session and the JMS connection:
os.close();
jmsSession.close();
jmsConnection.close();
Respective exception handling should be implemented accordingly, so that the output stream is closed and the JMS resources are released even if errors occur along the way.
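One way to keep this termination logic robust is to close each resource in its own try/catch block, so that a failure to close one resource does not leave the remaining ones open. A generic sketch for Closeable resources follows (QuietClose and closeAll are illustrative names, not part of any SAP or JMS API; note that in JMS 1.1, Session and Connection do not implement Closeable, so the same pattern would be applied to them with JMSException instead):

```java
import java.io.Closeable;
import java.io.IOException;

// A minimal sketch of defensive cleanup: each resource is closed
// in its own try/catch so that a failure to close one resource
// does not prevent the remaining resources from being closed.
class QuietClose {
    static void closeAll(Closeable... resources) {
        for (Closeable resource : resources) {
            if (resource == null) continue;
            try {
                resource.close();
            } catch (IOException e) {
                // Report and continue: cleanup must never abort cleanup
                System.err.println("Failed to close resource: " + e.getMessage());
            }
        }
    }
}
```

In the termination block of our scenario, the output stream would go through such a helper, with analogous try/catch blocks around jmsSession.close() and jmsConnection.close().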
After the log record has been written to the JMS queue and persisted there, the centralized log management system may process it further – for example, aggregate it with other log records based on some rules, retrieve the required information and visualize it in a user-friendly way, generate alerts, etc. That part of log management is out of scope of this blog – our current goal was to have log records of the application running on AS Java delivered to the central destination and storage.
The described solution has several drawbacks which should be taken into account:
- Performance. A log record is created and written to the log destination synchronously. This means that logging is a blocking operation for the application which triggered log record creation: the application has to wait until the log record is written before it can continue executing its logic. As a result, the more time is spent on logging, the more performance overhead logging brings to the application and the larger the negative impact on its overall processing time. Writing log entries to a remote log destination (such as a remote JMS destination or a remote database) is a more "expensive" operation than writing them to the local file system, which is why it should be implemented carefully. A compromise can be found in locating the log destination (for example, the JMS broker instance) as close as possible to the SAP AS Java system.
- Lifecycle of the output stream. Working with the output stream requires prior creation of the stream attached to the specific data destination, and finalization of writing by closing the stream and the respective connections. These operations should normally be executed in the initialization and termination phases, correspondingly, and not for every written log record, in order to avoid additional overhead related to output stream management.
- Possible necessity of deploying 3rd party libraries. In order to utilize log destinations located in components other than SAP AS Java, it may be necessary to deploy 3rd party libraries that provide the APIs the SAP Logging API needs in order to write to those destinations. In its turn, this brings maintenance overhead for such a solution and the need to ensure compatibility of the deployed libraries during upgrades.
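The performance drawback can be mitigated by decoupling log production from log delivery: the application thread only enqueues the record, while a background worker performs the slow remote write. A minimal JDK-only sketch of this idea follows (the class and method names are illustrative, and in a real AS Java application a managed thread facility would be preferable to a raw Thread):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// The application thread only enqueues the record; the background
// worker drains the queue and performs the (slow) remote write.
class AsyncLogForwarder implements AutoCloseable {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000);
    private final Thread worker;
    private volatile boolean running = true;

    AsyncLogForwarder(Consumer<String> sink) {
        worker = new Thread(() -> {
            try {
                // Keep draining until shutdown is requested and the buffer is empty
                while (running || !queue.isEmpty()) {
                    String record = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (record != null) {
                        sink.accept(record); // e.g. write to the StreamLog's output stream
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "log-forwarder");
        worker.setDaemon(true);
        worker.start();
    }

    // Non-blocking for the caller: drops the record (returns false)
    // if the buffer is full instead of stalling the application
    boolean log(String record) {
        return queue.offer(record);
    }

    @Override
    public void close() {
        running = false;
        try {
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

With such a wrapper, a sink that writes to the remote destination can be plugged in, and bursts of log records no longer block message processing; the trade-off is that records may be dropped when the buffer fills, and unflushed records may be lost on a crash.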
Summarizing all of the above, StreamLog is a powerful and flexible feature of the SAP Logging API for building centralized log management systems and facilitating log processing and analysis routines, but it should be evaluated and used thoughtfully.