Lately, I’ve been quite busy working on some demo applications using the Document Management System. What typically happens is that a link to the original document is stored somewhere. I wanted to do things differently: save the streams into the SAP database (temporarily) and publish the final documents to an archive as PDF.
Doing so, you often have to make conversions from string to Xstring and vice versa. At first, this looked cumbersome and unnecessary to me, but as I started musing about the pros and cons, I had a couple of revelations that could be helpful to others.
Because it’s such a tough topic, I decided to lighten things up a bit by using metaphors and illustrations.
If you take a string, then one character in that string represents a byte. (I’m deliberately simplifying by using the ASCII representation; the principles remain the same in Unicode.) Let’s represent that one byte by a cookie. If you’re a very hungry person, then you can munch up the cookie in a single byte.
So, in a string, every character is an entire cookie (byte). However, when we convert this to an Xstring, we move from a byte-based representation to a hexadecimal representation. A byte is 8 bits, but a hexadecimal character is only 4 bits. In other words, you need two hexadecimal characters to represent the same bit sequence as one byte (ASCII). This doesn’t mean that you need twice as many bits. No, the number of bits stays the same; only the textual representation looks like it just doubled up.
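Since ABAP snippets are hard to run outside an SAP system, here’s a small sketch of the same idea in Python: a 6-character ASCII string occupies 6 bytes, and its hexadecimal representation needs 12 characters, even though the underlying data hasn’t grown at all.

```python
# A 6-character ASCII string: 6 cookies (bytes) in the jar.
text = "cookie"
raw = text.encode("ascii")  # the actual bytes: 6 of them

# The hexadecimal (Xstring-like) representation of those same bytes.
hex_repr = raw.hex()

print(len(raw))       # 6  -- six bytes of real data
print(hex_repr)       # "636f6f6b6965"
print(len(hex_repr))  # 12 -- twice as many *characters*, same bits underneath
```

The 12-character hex string is just a textual view of the same 48 bits.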
To turn your cookie’s 8 bits into hexadecimal characters, you simply break the cookie in half. This leaves you with 2 nibbles. (A nibble is indeed the official name for 4 bits.) So rather than representing your single character with one byte, you chop it up into two nibbles, which gives you a hexadecimal pair. Your chopped-up cookie now represents 2 characters. Odd, isn’t it?
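The cookie-breaking can be sketched with two bit operations (again in Python for the sake of a runnable example): a shift takes the top half of the byte, a mask takes the bottom half, and each half maps to one hexadecimal character.

```python
# Break one "cookie" (a byte) into its two nibbles of 4 bits each.
byte = ord("A")  # 65, i.e. binary 0100 0001

high_nibble = byte >> 4    # the top 4 bits:    0100 -> 4
low_nibble = byte & 0x0F   # the bottom 4 bits: 0001 -> 1

# Each nibble becomes one hex character, so "A" becomes the pair "41".
hex_pair = f"{high_nibble:x}{low_nibble:x}"
print(hex_pair)  # "41"
```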
Now you may wonder what this does to your memory consumption. “I used to have a string with 6 characters, and now all of a sudden, I have 12 characters in my Xstring! That’s not efficient!”
Well, actually, it doesn’t impact your memory consumption. Even though the textual representation of your Xstring is twice as long, the actual memory consumption didn’t change. It’s still the same number of bits. You used to have 6 bytes; now you have 12 nibbles. But since a nibble is only half a byte, the total memory consumption is still the same.
Think of it as a cookie-jar. If you break every cookie in the jar in half, and you put all these halves (and crumbs) back into the jar, it still weighs the same as before. It just looks messier.
But why on earth would we use Xstrings, if they look messier, are harder to understand, and don’t really change anything at bit level?
You see, it’s a form of security. Because you encode your strings differently, you avoid the risk of malicious code injection. Suppose someone enters malicious SQL in an input field somewhere.
Something along the lines of "1'; DROP DATABASE; --"
For the ABAP system, this isn’t really a problem. With the Open SQL optimizer, this sort of nonsense is caught and escaped so it doesn’t do any harm to the database. But your SAP application is not the only one in your company. Your SAP system also talks to other applications. What happens if you send that unencoded string of malicious code to another system?
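To illustrate the kind of escaping described here, this is not the Open SQL optimizer itself, but the same principle sketched with Python’s sqlite3 module: when input is bound through a placeholder, the database treats it as plain data, so the injection attempt is stored as a harmless string instead of being executed.

```python
import sqlite3

# A throwaway in-memory database with one table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

malicious = "1'; DROP DATABASE; --"

# The "?" placeholder binds the input as data, not as SQL text,
# so the malicious payload cannot break out of the statement.
conn.execute("INSERT INTO users (id, name) VALUES (?, ?)", (1, malicious))

stored = conn.execute("SELECT name FROM users").fetchone()[0]
print(stored)  # the literal text, with the table still intact
```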
Exactly: Chaos, mayhem, angry sysadmins, angry users and a lot of work for you.
So that’s why you should use an Xstring: your input is automatically encoded, and it can be decoded again by the receiver.
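That encode-and-decode round trip can be sketched in a few lines (Python again, standing in for the ABAP conversion routines): the sender turns the string into its hexadecimal form, and the receiver recovers the original without losing anything.

```python
# Sender side: encode the string into its hexadecimal (Xstring-like) form.
message = "Hello, DMS!"
encoded = message.encode("utf-8").hex()

# Receiver side: decode the hex back into the original string.
decoded = bytes.fromhex(encoded).decode("utf-8")

print(encoded)
print(decoded == message)  # True -- nothing was lost in transit
```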