Application Development Blog Posts
jbungay
Along with many others, I participated in the "SAP Community Coding Challenge Series" that thomas.jung recently put on.

It was a neat challenge: simple in nature, but it invited different approaches and opportunities to dig into some of the newer, more sophisticated ABAP syntax.

When breaking down the challenge itself I decided right away that a "REDUCE" would help with cheating a bit on the "lines of code" part of the challenge.

The parts of the challenge that presented more creative opportunities were:

  • determining the number of words in the sentence?

  • iterating over each word in the sentence?

  • counting the unique characters in each word?


 

Pretty straightforward stuff, but how to do it in an efficient way became the challenge.

 

Determining the number of words in the sentence?


 

Some of the older ABAP mechanisms include the "SPLIT" command, but the sentence provided, "ABАP  is excellent ", has some extra spaces, so a "CONDENSE" would have been needed. A "SPLIT INTO TABLE" with a "CONDENSE" inside of it would have returned a table of words, and a "lines( )" call would have taken care of the number of words:
SPLIT condense( sentence ) AT space INTO TABLE DATA(words).

(I'll show an example solution below that includes this.)
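For readers who want to see the idea outside ABAP, here is a rough Python sketch of the condense-then-split approach (an analogy only, using a plain-ASCII stand-in for the challenge sentence: Python's `str.split()` with no separator already collapses runs of whitespace, much like `CONDENSE` plus `SPLIT AT space`):

```python
# Python analogy for ABAP's condense( ) + SPLIT ... INTO TABLE:
# str.split() with no separator collapses runs of whitespace,
# so the extra spaces in the sentence are handled for free.
sentence = "ABAP  is excellent "

words = sentence.split()         # the "table of words"
number_of_words = len(words)     # the lines( words ) call

print(words)            # ['ABAP', 'is', 'excellent']
print(number_of_words)  # 3
```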

But I was looking for a way to basically scan the sentence instead. So I used a "condense( )" with a regular expression inside a "count( )" call to get the number of words to display and to support my scan of the sentence:
DATA(number_of_words) = count( val = condense( sentence ) regex = `(\b[^\s]+\b)` ).
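The counting trick itself is not ABAP-specific. A quick Python sketch of the same regex (illustrative only, with `re` standing in for ABAP's `count( )` and an ASCII sentence standing in for the challenge string):

```python
import re

sentence = "ABAP  is excellent "

# ABAP's count( val = ... regex = ... ) returns the number of matches;
# re.findall gives us the matches themselves, so len( ) of that is the count.
# \b[^\s]+\b matches each run of non-whitespace between word boundaries.
number_of_words = len(re.findall(r"\b[^\s]+\b", sentence))

print(number_of_words)  # 3
```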

 

Iterating over each word in the sentence?


 

The "SPLIT INTO TABLE" with an embedded "CONDENSE" would have taken care of this, but as I said, I wanted to try and scan the sentence instead.

This is where I ran into the "segment( )" built-in function. Along with the regular expression pattern, I was able to do just that:
word = segment( val = sentence index = index space = ` ` )

(You do have to take care here: an exception is raised if the index is out of bounds.)
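As a cross-language illustration, here is a hypothetical Python helper that mimics how I used `segment( )`, including the out-of-bounds exception (the name and behavior are my analogy, not the ABAP built-in itself):

```python
def segment(val: str, index: int, space: str = " ") -> str:
    """Rough analogue of ABAP's segment( ): return the index-th
    (1-based) chunk of val, splitting on the given separator."""
    # Drop the empty chunks that repeated separators would produce,
    # approximating the condense-like handling of extra spaces.
    words = [w for w in val.split(space) if w]
    if not 1 <= index <= len(words):
        # the ABAP built-in raises an exception in this situation
        raise IndexError(f"segment index {index} out of bounds")
    return words[index - 1]

sentence = "ABAP  is excellent "
print(segment(val=sentence, index=2))  # 'is'
```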

 

Counting the unique characters in each word?


 

In my opinion this was the most complex part of the challenge, and my initial thought was to leverage regular expressions again, as that seemed the most straightforward way to go about it.

Otherwise you could go with some kind of loop, or break the letters of the word into another table and do a SELECT DISTINCT kind of thing.

Here's what I ended up with:
number_of_unique_characters = count( val = word regex = `(\S)(?!.*\1.*)` )
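The lookahead pattern ports to other regex engines too. A small Python check (illustration only) shows why the match count equals the number of distinct characters: each character matches only at its last occurrence.

```python
import re

def unique_chars(word: str) -> int:
    # (\S) captures one character; the negative lookahead (?!.*\1)
    # succeeds only when that character does not occur again later,
    # so each distinct character is counted exactly once,
    # at its final occurrence in the word.
    return len(re.findall(r"(\S)(?!.*\1)", word))

print(unique_chars("excellent"))  # 6  (e, x, c, l, n, t)
print(unique_chars("is"))         # 2
```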

It seemed pretty simple.

 

REDUCE


 

Now that these parts of the process were figured out, it was a matter of putting it all together inside the "REDUCE" call:
    DATA(sentence) = `ABАP  is excellent `.

    DATA(number_of_words) = count( val = condense( sentence ) regex = `(\b[^\s]+\b)` ).

    out->write( REDUCE stringtab(
      INIT output = VALUE #( ( |Number of words: { CONV string( number_of_words ) }| ) )
           word TYPE string
           number_of_unique_characters TYPE i
      FOR index = 1 UNTIL index > number_of_words
      NEXT word = segment( val = sentence index = index space = ` ` )
           number_of_unique_characters = count( val = word regex = `(\S)(?!.*\1.*)` )
           output = VALUE #( BASE output ( |Number of unique characters in the word: { word } - { CONV string( number_of_unique_characters ) }| ) ) ) ).
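For comparison, the shape of the whole pipeline can be sketched with Python's `functools.reduce` (a loose analogy for ABAP's REDUCE; the names mirror the ABAP version and a plain-ASCII sentence stands in for the challenge string):

```python
import re
from functools import reduce

sentence = "ABAP  is excellent "

# the FOR part of the REDUCE: one iteration per word
words = re.findall(r"\b[^\s]+\b", sentence)

def step(output, word):
    # the NEXT part: compute the unique-character count for this word
    # and append one result line to the accumulator
    n = len(re.findall(r"(\S)(?!.*\1)", word))
    return output + [f"Number of unique characters in the word: {word} - {n}"]

# the INIT part: the accumulator starts with the word-count line
output = reduce(step, words, [f"Number of words: {len(words)}"])

for line in output:
    print(line)
```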

I used a stringtab, along with string templates, to make writing out the results more convenient.

This particular implementation ended up being one of my submissions.

The other was the one I was awarded the "Test Class Award" for.

 

Alternative using SPLIT into table


 

Here is how I might have gone about it had I decided to split the words into a table:
    DATA(sentence) = `ABАP  is excellent `.

    SPLIT condense( sentence ) AT space INTO TABLE DATA(words).

    out->write( REDUCE stringtab(
      INIT output = VALUE #( ( |Number of words: { CONV string( lines( words ) ) }| ) )
           number_of_unique_characters TYPE i
      FOR index = 1 UNTIL index > lines( words )
      NEXT number_of_unique_characters = count( val = words[ index ] regex = `(\S)(?!.*\1.*)` )
           output = VALUE #( BASE output ( |Number of unique characters in the word: { words[ index ] } - { CONV string( number_of_unique_characters ) }| ) ) ) ).

Just as good, if not a better, implementation... I should have submitted this one too 🙂

 

Final Thoughts


 

As some people have noted, regular expressions can get pretty rough. Still, they've come in very handy for me in various projects.

I personally use tools such as RegexMagic and RegexBuddy to help with building the expressions and testing them out.

I would like to see more challenges like this, similar to CodeSignal (formerly CodeFights), where you get a set of unit tests to pass for each challenge. Once you complete it successfully, you can see how other people approached the same challenge and even vote for your favorite.

But I'm fine with thomas.jung and rich.heilman parsing through hundreds of submissions too. Lol.

Huge shout-out to them for facilitating this first challenge, and hopefully there are more to come!

Cheers,

James