
The landscapes that are ever moving,
Treading newer paths and roads,
To move ahead in an efficient way,
Is the mandate of the day.



This weblog looks at optimising the performance of managed runtime environments, which form the foundation of most enterprise applications. In today's ever-changing programming landscape, where existing environments are expected to deliver strong and flexible enterprise business applications, the parameters that play an important role in that delivery are discussed here. The weblog examines the hierarchy of operations at the system, application, and machine levels, and tries to map the commonalities that exist between them.

Enterprise Java applications: creating a productive environment for business applications.
In the realm of enterprise application development, the importance of managed runtime environments such as Java cannot be overemphasized. Among the salient features that make this environment a highly productive foundation for business application development are object orientation, largely automated memory management, and programming safety. On top of these features, the platform-agnostic profile of the environment offers substantial investment protection, addressing one of the major pain points of many a CTO. The case is further strengthened by today's dynamic, ever-changing IT scenarios: newer formats and versions of enterprise software keep arriving on the market, creating interoperability challenges that managed runtime environments can address effectively.

Advanced Just-In-Time (JIT) compilation, memory management, and garbage collection technologies have largely dispelled the doubts once raised about the poor performance of Java-based applications. Today's Java Virtual Machines (JVMs) take full advantage of a variety of target platforms and keep pace with the latest hardware and operating system advances as they evolve.

The arrival of the J2EE framework changed the way enterprise application architecture is defined. It also paved the way for a plethora of service architectures: clusters of servers offering differentiated services, the client-server model, and the GUI / application layer / operational layer / database layering that has become the norm today.

As applications move from development to production, performance becomes a critical life-cycle requirement. Applications must not only meet stringent performance requirements upon deployment, but must also scale gracefully with varying usage patterns and increased demand. Performance optimization and management in this environment is a difficult task, as performance is affected by many interrelated elements. An open market survey revealed that, in most cases, close to one third of the deployment cost had to be reinvested to make the application scale to the actual usage patterns.



Performance optimization considerations at the three levels: taking a top-down approach


A. System-Level
In the hierarchy of optimising the entire application, the emphasis given to the system level is the most important facet of maximizing efficiency. It is important to understand the type of the application (batch or interactive) and to identify the system hardware and software components that meet that goal.
A typical system can be conceptualized as an array of networked entities with transactional functions or logical actions passing between them. In a typical Java environment, such an entity could be a hardware interface, a software component, or a combination of the two in which hardware and software talk to each other for some mutual function. This view goes a long way towards understanding how to componentize the entire Java environment.

The ability of a system to process more than one task at a time, called multitasking, is a parameter by which the effectiveness of any program or system can be measured. Two methods of multi-processing are pipelining and parallelism:
  • Pipelining breaks the required work down into a series of stages, so that successive tasks can overlap in different stages.
  • Parallelism throws multiple resources at a task so that the task completes faster.
Both methods are analogous to the serial and parallel processing of queued instructions at the processor level.
Most systems use both multi-processing approaches, and the best mix of the two is what is usually applied in the industry. Theoretically, the only system with no performance bottleneck is one in which every component exhibits the same performance behavior and has identical capacity. In practical terms, every system has a bottleneck somewhere, and it is this bottleneck that makes much software perform poorly.
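To make the parallelism idea concrete at the application level, here is a minimal, self-contained Java sketch (not part of the original discussion) that spreads independent units of work over a fixed pool of worker threads sized to the available processors; the work done in each task is a purely hypothetical placeholder.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelWorkSketch {
    public static void main(String[] args) throws Exception {
        // Size the pool to the available processors so parallel resources are used.
        int poolSize = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);

        // Submit independent units of work; each runs on a worker thread.
        List<Future<Long>> results = new ArrayList<Future<Long>>();
        for (int i = 0; i < 16; i++) {
            final int unit = i;
            results.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    // Hypothetical CPU-bound work standing in for a real task.
                    long sum = 0;
                    for (long n = 0; n < 1000000L; n++) {
                        sum += (n + unit) % 97;
                    }
                    return sum;
                }
            }));
        }

        // Collect the results; the overall job finishes faster than a serial loop would.
        long total = 0;
        for (Future<Long> f : results) {
            total += f.get();
        }
        pool.shutdown();
        System.out.println("Combined result: " + total);
    }
}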

At the system level, the goal is to ensure that the bottleneck lies in the application code over which the developer has direct control, so that changes can be made that directly improve performance. If the bottleneck were elsewhere in the system, even large-scale improvements in the developer's code would have only a slight effect on measured system performance.

A key component of enterprise applications is a back-end relational database, as it provides essential persistence services, data retrieval capabilities for downstream systems, and support for querying and reporting applications. More often than not, the back-end database proves to be the component that slows performance down, given the vast multitudes of data entries in multiple formats, the many columns, and the associations between them that must be maintained. It is therefore extremely important to pay special attention to the physical design and tuning of the database to ensure acceptable levels of performance. Fundamental considerations include:
  • isolating log files on dedicated devices, to reduce conflicts between the sequential nature of log operations and the random access to data tables;
  • adequately sizing the sort area memory to minimize disk sort operations;
  • allocating sufficient database cache memory (while avoiding swapping);
  • carefully defining indices, such as indexing frequently used, highly selective keys, indexing foreign keys frequently used in joins, and using full-text retrieval keys where appropriate; and
  • using disk striping (e.g., RAID 1+0) to spread I/O operations and avoid device contention.
A good amount of research is currently going on in this domain of storage devices that complement the constraints imposed by database limitations, and efforts continue to increase processing speeds through quicker depopulation of log entries, simpler query structures, and so on.

B. Application-Level
A firm foundation will always ensure a strong structure. In line with this idea, a well-conceptualized application-level design will go a long way towards avoiding many a pitfall during the performance testing phase of the software development life cycle.

Many J2EE application development best practices are well documented as design patterns. Design patterns are about communicating problems and solutions. Simply put, patterns enable us to document a known recurring problem and its solution in a particular context, and to communicate this knowledge to others. A key point to understand is that patterns alone will not solve application problems. There is also a series of best practices that, when combined with the proper use of patterns, can produce a robust and highly scalable J2EE application. Thus, a series of recurring patterns gives rise to best practices, which become standards of a sort to be followed by future developers and performance-tuning efforts.


Some of the common characteristics of patterns are:
  • Patterns are observed through experience.
  • Patterns are typically written in a structured format (see “Pattern Template”).
  • Patterns prevent reinventing the wheel.
  • Patterns exist at different levels of abstraction.
  • Patterns undergo continuous improvement.
  • Patterns are reusable artifacts.
  • Patterns communicate designs and best practices.
  • Patterns can be used together to solve a larger problem.
Patterns in the J2EE Pattern Catalog
1. Presentation Tier
a. Intercepting Filter
b. Front Controller
c. View Helper
d. Composite View
e. Service to Worker
f. Dispatcher View

2. Business Tier
a. Business Delegate
b. Value Object
c. Session Facade
d. Composite Entity
e. Value Object Assembler
f. Value List Handler
g. Service Locator

3. Integration Tier
a. Data Access Object
b. Service Activator
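
To give a flavour of the catalog, the following is a minimal sketch of the Value Object (Transfer Object) pattern from the business tier: a serializable, coarse-grained snapshot of entity data that can be carried across tiers in a single call instead of many fine-grained remote calls. The class and field names are illustrative only.

import java.io.Serializable;

// Value Object / Transfer Object: a coarse-grained, serializable snapshot of
// entity data that can be passed across tiers in a single remote call.
public class CustomerValueObject implements Serializable {

    private static final long serialVersionUID = 1L;

    private final String customerId;
    private final String name;
    private final String city;

    public CustomerValueObject(String customerId, String name, String city) {
        this.customerId = customerId;
        this.name = name;
        this.city = city;
    }

    public String getCustomerId() { return customerId; }
    public String getName()       { return name; }
    public String getCity()       { return city; }
}

A session bean or Session Facade would typically assemble such an object from one or more entity beans and hand it to the presentation tier in one network round trip.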

In addition to design patterns, there are a number of programming practices that reduce performance bottlenecks, such as the following:
  • Enterprise JavaBeans (EJB) homes and data sources should be cached to avoid repeated JNDI lookups of EJB objects and data source objects (see the Service Locator sketch after this list).
  • Use of HTTP sessions should be minimized and reserved for state that cannot realistically be kept on the client.
  • JavaServer Pages (JSP) create HTTP sessions by default. This should be overridden (i.e., <%@ page session="false" %>) when a session is not needed, to prevent inefficient use of session resources.
  • Database connections should be released as soon as they are no longer needed, as unreleased connections result in resource leaks.
  • Unused stateful session beans should be removed explicitly, and an appropriate idle timeout should be set to control the stateful bean life cycle and conserve scarce resources.
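
The first practice above is commonly packaged as the Service Locator pattern from the catalog. Below is a minimal sketch that caches JNDI lookups of data sources so that repeated requests avoid the lookup overhead; the JNDI name used in the usage example ("java:comp/env/jdbc/AppDS") is purely illustrative, and the overall shape is an assumption rather than a prescribed implementation.

import java.util.HashMap;
import java.util.Map;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Service Locator: performs JNDI lookups once and caches the results.
public class ServiceLocator {

    private static final ServiceLocator INSTANCE = new ServiceLocator();
    private final Map<String, DataSource> dataSourceCache = new HashMap<String, DataSource>();

    public static ServiceLocator getInstance() {
        return INSTANCE;
    }

    public synchronized DataSource getDataSource(String jndiName) throws NamingException {
        DataSource ds = dataSourceCache.get(jndiName);
        if (ds == null) {
            // The expensive JNDI lookup happens only on the first request.
            InitialContext ctx = new InitialContext();
            ds = (DataSource) ctx.lookup(jndiName);
            dataSourceCache.put(jndiName, ds);
        }
        return ds;
    }
}

A caller would then obtain and promptly release connections roughly as follows:

DataSource ds = ServiceLocator.getInstance().getDataSource("java:comp/env/jdbc/AppDS");
Connection con = ds.getConnection();
try {
    // ... run queries ...
} finally {
    con.close(); // returns the pooled connection as soon as it is no longer needed
}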

C. Machine-Level
For applications developed in statically compiled environments such as C or C++, machine-level performance involves tuning the code for the hardware through recompilation, which complicates enterprise application deployment. With Java there is an additional layer: the virtual machine. The Java Virtual Machine (JVM) is the program that makes it possible for Java applications to run on a platform; it converts bytecode into specific instructions executable by a real processor on a specific computer. The JVM is based on standards that specify the implementation and deployment of Java applications for a hardware platform, and it is significant for two reasons. First, it allows the application to take quick and effective advantage of new processor features, since this involves only the deployment of a new JVM version and not an expensive rebuild of the entire application code.
Second, multiple versions of the application are not required to get the best performance out of platform differences such as available memory or cache size. Such aspects of the hardware are abstracted away by the JVM, and the very same version of the application code can achieve optimal performance on different platforms through JVM configuration tunables.


JVM-Level Performance
Selecting the correct JVM is critical. It is essential to use a JVM that has been optimized for the underlying hardware of choice. The best optimizations for the various processor platforms are known, and a Java application needs to rely on the JVM to harness them. It is also desirable that the JVM provide a rich set of configuration tunables that can be adjusted for peak performance. A JVM that can tune itself for good performance is an asset, since it simplifies deployment on a variety of platforms that use the same architecture.
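As a small deployment-time sanity check, the standard java.vm.* and os.* system properties can be read to confirm which JVM implementation, version, and platform an application is actually running on; a minimal sketch:

// Prints the JVM implementation, version, and platform the application runs on,
// using standard system properties defined by the Java platform.
public class JvmInfo {
    public static void main(String[] args) {
        System.out.println("JVM name:    " + System.getProperty("java.vm.name"));
        System.out.println("JVM vendor:  " + System.getProperty("java.vm.vendor"));
        System.out.println("JVM version: " + System.getProperty("java.vm.version"));
        System.out.println("OS/arch:     " + System.getProperty("os.name")
                + " / " + System.getProperty("os.arch"));
    }
}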
There are three JVM functions of interest to us:
A. Memory management
Memory management includes object allocation, heap management, and garbage collection. Modern JVMs use a variety of algorithms for these; some incorporate several algorithms for each and allow the user to select the desired one. The correct choice of algorithm is important since there are fairly significant performance differences between them. However, different techniques work better for different applications.
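Whichever collectors a particular JVM offers, the standard management API (available since J2SE 5.0) can be used to observe how much time an application actually spends in garbage collection before any algorithm or heap setting is changed. A minimal sketch:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// Reports cumulative garbage collection counts and times for each collector
// the running JVM exposes through the standard management interface.
public class GcReport {
    public static void main(String[] args) {
        List<GarbageCollectorMXBean> gcBeans = ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : gcBeans) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", total time (ms)=" + gc.getCollectionTime());
        }
        // Heap sizing at a glance: used versus maximum memory.
        Runtime rt = Runtime.getRuntime();
        System.out.println("Heap used (MB): "
                + (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024)
                + " of max " + rt.maxMemory() / (1024 * 1024));
    }
}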
B. Code generation
There are two main approaches to code generation: interpreting and compiling (with a Just-in-Time (JIT) compiler). An interpreter translates each new bytecode to machine code just before execution; a compiler translates a whole segment of code (the whole application, a class, a set of classes, a set of methods, even a single method) into machine code before use.
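The practical effect of JIT compilation can be observed with a simple timing sketch: the same method usually becomes faster once it has executed often enough for the JIT compiler to translate it to machine code (the exact thresholds and behaviour are JVM-dependent, and the workload below is purely hypothetical).

// Illustrates the effect of JIT compilation: the same method usually speeds up
// once it has run often enough to be compiled (behaviour is JVM-dependent).
public class JitWarmup {

    // A small, hypothetical workload; any hot method would do.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int batch = 1; batch <= 5; batch++) {
            long start = System.nanoTime();
            long result = 0;
            for (int i = 0; i < 10000; i++) {
                result += work(10000);
            }
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("Batch " + batch + ": " + elapsedMs + " ms (result " + result + ")");
        }
    }
}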
C. Thread management
A JVM either uses the threading package provided by the operating system (native threads), or it can use its own threading package and map several Java threads onto each kernel thread (thin threads). If the application suffers from frequent context switches, their cost can be reduced by using thin threads. Similarly, if there is a pool of threads that operate on the same data, cache performance can be enhanced by tying all of these threads to a single kernel thread: they will then tend to run on the same processor and benefit from the shared data.

Thus, in conclusion, I would like to re-emphasize the fact that businesses today are more dependent than ever on IT systems to make them profitable, while the irony is that IT budgets across the industry keep shrinking year by year. The mandate for any CTO is to extract more from less. In such a domain, where business applications need to perform at their optimum and also be efficient in what they do, the evaluation criteria become very important. Perhaps, in the future, with multiple environments co-existing (and monopolies becoming nearly extinct), the efficiency of an application will become the sole criterion for its purchase and use.
 
