Talking with people about SAP security is a curious thing. Almost everybody has a different perception of it. Some people see it as roles and authorizations. Some as encryption. Some see it as having the most recent patches installed. Some perceive it as protecting systems with firewalls, reverse proxies and similar solutions. Others see it as single sign-on. Or identity management. Or password policies. Or system hardening. Oh yes, and GRC of course.

As a result, whenever I talk with people about SAP security I try to guess their "security type". Are they a "roles guy"? A "GRC gal"?

But curiously, almost nobody I talk to knows anything about application security. When you tell these people that bad code means security vulnerabilities, they get that strange expression on their face. Almost as if you had told them that you just arrived from Mars. Or K'PAX.

After a while you get the feeling that SAP application security is the Bermuda Triangle of security know-how.

This is the reason why I am writing this blog: to explain what application security is all about and why it is really important.

So I will start with a formal description and then turn to a real-life example.

Application security covers security problems related to the design and implementation of (custom) coding. In other words: if you make modifications to the SAP standard or build custom solutions for your company / industry, this code can introduce unexpected security holes.

And if there is a security defect in an application, any user with access to that application might exploit this defect. Firewalls won't help. Neither will encryption. Neither will authorizations. The truth is: a single coding defect in an application may allow an attacker to bypass all defenses a company has painstakingly put in place.

But how is this possible? Companies test their applications before they put them in production. And why would experienced developers build security defects into their own code?

The answer is quite simple. Neither testers nor developers know what application security defects look like. Therefore testers won't find coding defects. And developers won't see them as they write them.

Wait a minute. Someone can actually write code and not know that it contains security defects?


Definitely YES.

Here is a self-test: Do you see the security defect in the following BSP code snippet?

----------------------------------------------------

OnRequest handler of a BSP page

----------------------------------------------------

* this handler is called whenever a request is made for a particular page
* it is used to restore the internal data structures from the request
* declarations (in the original page, some of these may be page attributes)
  DATA: today        TYPE string,
        smonth       TYPE string,
        input_year   TYPE string,
        input_month  TYPE string,
        cl_where     TYPE string,
        itab_zccinfo TYPE TABLE OF zccinfo.
  today = sy-datum.
* read user input
  input_year  = request->get_form_field( 'input_year' ).
  input_month = request->get_form_field( 'input_month' ).
* set default values, if empty
  IF input_year IS INITIAL.
    input_year = today(4).
  ENDIF.
  IF input_month IS INITIAL.
    input_month = today+4(2).
  ENDIF.
* special case: all months in the given year shall be displayed
  IF input_month = '00'.
    smonth = ``.
  ELSE.
    smonth = input_month.
  ENDIF.
* get table content of ZCCINFO, filtered by current user, selected month and year.
  CONCATENATE `uname = '` sy-uname `'` INTO cl_where.
  CONCATENATE cl_where ` AND ta_date LIKE '` input_year smonth `%'` INTO cl_where.
  SELECT * FROM zccinfo INTO CORRESPONDING FIELDS OF TABLE itab_zccinfo
  WHERE (cl_where) ORDER BY ta_date.

----------------------------------------------------

Coffee break - take your time to spot the bug...

----------------------------------------------------

Hey, no peeking!

----------------------------------------------------

Done? Then read on.

OK, let's resolve this. The problem is the generic WHERE clause. If the user input contains SQL instead of the expected year/month, the resulting WHERE clause can be modified.

If users enter "good" input, the resulting WHERE clause could be

uname = 'JANEDOE' AND ta_date LIKE '200810%'

But imagine a user feeds this input to the BSP via the field "input_month": ' OR mandt LIKE '%

The resulting WHERE clause will be

uname = 'JANEDOE' AND ta_date LIKE '2008' OR mandt LIKE '%%'

This produces a WHERE clause that is always true: since AND binds more tightly than OR, the injected condition mandt LIKE '%%' stands on its own and matches every row. Therefore the query would return the contents of the entire table, instead of limiting them to what the user is supposed to see.

This vulnerability is called SQL injection. It is a most unpleasant side effect of generic SQL usage when mixed with user input. Note that this is just one problem out of many.
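How could the developer fix this? Here is a minimal sketch (my own suggestion, not the only possible solution): validate that the input really looks like a year and a month, and replace the generic WHERE clause with a static one, so that user input is only ever treated as data.

----------------------------------------------------

* defensive variant (sketch): validate the input, then use a static
* WHERE clause - user input is treated as a value, never as SQL text
  DATA: pattern TYPE string.
* anything that is not a plausible year/month falls back to today
* (this also covers the empty-input default from above)
  IF input_year CN '0123456789' OR strlen( input_year ) <> 4.
    input_year = today(4).
  ENDIF.
  IF input_month CN '0123456789' OR strlen( input_month ) <> 2.
    input_month = today+4(2).
  ENDIF.
* special case: all months in the given year shall be displayed
  IF input_month = '00'.
    smonth = ``.
  ELSE.
    smonth = input_month.
  ENDIF.
  CONCATENATE input_year smonth `%` INTO pattern.
  SELECT * FROM zccinfo INTO CORRESPONDING FIELDS OF TABLE itab_zccinfo
  WHERE uname = sy-uname AND ta_date LIKE pattern
  ORDER BY ta_date.

----------------------------------------------------

With a static WHERE clause the database interface treats the variables as values only, and the digits-only check rejects anything that is not a plausible date fragment. (Newer releases also ship the class cl_abap_dyn_prg with validation helpers for cases where dynamic SQL cannot be avoided.)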

This example demonstrates one elementary fact. Code may work perfectly fine on the functional level. But that does not mean that this code is also free of unexpected side effects.

In application security, the goal is not to write code that works as designed.

In application security, the goal is to write code that works as designed and does nothing else.

In court one would say: "Do you swear to write code that complies with the specification, the entire specification and nothing but the specification?"

This brings us to one of the problems of security testing: you have to prove the absence of all(!) side effects, as opposed to validating the presence of desired effects, as in functional testing.

This requires an entirely different testing approach. Also, you need to think outside the box in order to get fresh perspectives on what could go wrong, and on unexpected ways to use the application. This is very difficult and requires special expertise and experience. One practical technique is the negative test; see the sketch below.
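To make this concrete, here is a minimal negative test written with ABAP Unit. It is a sketch only: the class and method names are mine, and the digits-only rule from the fix above is inlined for brevity. Instead of confirming that a valid month works, it asserts that the attack string must not pass the check.

----------------------------------------------------

* negative test (sketch): hostile input must NOT pass the month check
CLASS ltc_input_check DEFINITION FOR TESTING
  RISK LEVEL HARMLESS DURATION SHORT.
  PRIVATE SECTION.
    METHODS reject_sql_in_month FOR TESTING.
ENDCLASS.

CLASS ltc_input_check IMPLEMENTATION.
  METHOD reject_sql_in_month.
    DATA: input_month TYPE string.
*   the attack string from the example above
    input_month = `' OR mandt LIKE '%`.
*   a valid month consists of digits only
    IF input_month CO '0123456789'.
      cl_abap_unit_assert=>fail( msg = 'SQL fragment accepted as month' ).
    ENDIF.
  ENDMETHOD.
ENDCLASS.

----------------------------------------------------

(On older releases the assertion class is called cl_aunit_assert.) The point is the mindset: functional tests feed expected input, security tests feed unexpected input.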

As a result, developers who have no specific application security know-how will most likely write code that is insecure. And testers who have no specific application security know-how will most likely fail to find the mistakes the developers made. On top of that, business process experts will also fail to write secure specifications.

The entire topic gets even more problematic if a company outsources development. Do the third-party developers have specific security know-how? How can the company verify this? If there really is a security defect, who is responsible? Who will fix it? And who will pay for the fix?

One other important aspect is that application security defects can also violate regulatory compliance.
Yes, coding defects can actually violate compliance.

Now the good question is "How could a security defect violate compliance?".
The better question is "How can code with unknown side effects be compliant with any given standard?".

Currently, you don't go to jail for coding defects. Because auditors (like developers) have practically no idea what application security is. But this will change. Be prepared.

Understand application security. Identify your risks. And mitigate them.


But finally, there is one good thing.

You have just made the first step towards secure applications:
You are now aware of the problem.


To be continued...
