
Target marketing and segmentation have roots in database technology: over 30 years ago, marketers started creating target groups from (then) large databases of consumer information. They did so by working closely with IT – marketers would specify the business query parameters, and IT would write the (typically long) SQL queries for them. Although this approach originated decades ago, many companies still rely on this model of hand-written SQL to generate target groups. Campaign management tools have automated SQL generation, but the generated SQL is quite rudimentary, so IT often still has to write special queries for marketers' more demanding needs. Additionally, relational databases often need to be highly optimized and constantly tuned to perform well, which is a costly exercise. The result is that marketers either wait for IT to fulfill their queries at higher cost, or simplify their requirements, limiting the sophistication of their campaigns.
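As a sketch of the pattern described above, here is a minimal, hypothetical example of the kind of segmentation query IT would write for a marketer. The table, column names, and business criteria are all illustrative assumptions, not an actual SAP schema; SQLite stands in for the relational database:

```python
import sqlite3

# Hypothetical customer table -- names and criteria are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        age INTEGER,
        region TEXT,
        lifetime_spend REAL,
        opted_in INTEGER
    )
""")
conn.executemany(
    "INSERT INTO customers (age, region, lifetime_spend, opted_in) "
    "VALUES (?, ?, ?, ?)",
    [
        (34, "WEST", 1200.0, 1),
        (52, "EAST", 300.0, 1),
        (29, "WEST", 80.0, 0),
        (41, "WEST", 2500.0, 1),
    ],
)

# The marketer's business criteria, translated by hand into SQL:
# "opted-in customers in the WEST region, aged 30-50, spending over 1000"
rows = conn.execute("""
    SELECT id FROM customers
    WHERE opted_in = 1
      AND region = 'WEST'
      AND age BETWEEN 30 AND 50
      AND lifetime_spend > 1000
""").fetchall()

target_group = [row[0] for row in rows]
print(target_group)  # customer ids matching the segment
```

Real-world versions of such queries join many tables and run against far larger volumes, which is where the cost and tuning pressure described above comes from.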

So, how does a marketer now deal with the exploding amounts of data created through the web that they need to analyze in order to target their customer base? Web 2.0 is piling on rich information, both structured and unstructured, through social channels and the like. As a result, marketers' computational needs for segmentation have been growing exponentially. Relational technology, even with proprietary hardware, is challenged to serve the segmentation and analytical needs of tomorrow's marketers. This is where in-memory technologies like SAP TREX can provide a scalable, high-performance computational solution.

In-memory computing technology offers an exciting solution to the performance limitations of the SQL/relational data model. In-memory computing has been made possible by 64-bit CPUs and made affordable by the rapidly declining cost of memory. Even more importantly, indexing the data to be analyzed and loading only the compressed indexes into memory yields order-of-magnitude scalability. For example, with a 1:100 compression ratio, a machine with 16 GB of RAM can address 1.6 TB of data. By combining in-memory and indexing technology that compresses the amount of data that needs to be loaded into RAM, technologies like SAP TREX deliver high query performance at much lower cost.
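The back-of-the-envelope arithmetic above can be captured in a tiny helper. The function name is ours, and the ratios are just the figures discussed in this post (1:100 here; actual ratios vary with the data):

```python
def addressable_data_gb(ram_gb: float, compression_ratio: float) -> float:
    """Raw data volume (GB) whose compressed indexes fit in the given RAM."""
    return ram_gb * compression_ratio

# The 1:100 example from the text: 16 GB of RAM covers
# 1600 GB, i.e. 1.6 TB, of raw data.
print(addressable_data_gb(16, 100))  # 1600.0

# A more conservative 1:40 ratio (the kind sometimes cited for
# unstructured data) still covers 640 GB from the same 16 GB of RAM.
print(addressable_data_gb(16, 40))   # 640.0
```

The point of the exercise is that memory, not disk, becomes the sizing constraint, and compression is what keeps that constraint affordable.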

SAP CRM marketing has adopted high-performance segmentation that combines the computational prowess of the TREX platform with a highly intuitive graphical user interface. You can find a short video of this engine here. We think that this new paradigm will make marketers much more effective and self-sufficient, and allow them to launch far more sophisticated and targeted campaigns.


2 Comments


    Sunil Dixit (post author):
      Hi Vitaliy,

      The compression ratio really varies from case to case, depending on the kind (attributes, documents, numeric, etc.) and distribution of the data. But here's a link to a TREX article that discusses compression ratios and sizing:

      https://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/00de20e1-6160-2b10-b6ab-dedb9d10b8e8&overridelayout=true

      For example, the document states that 100 million attributes would need about 2 GB RAM. For unstructured data, the compression ratio suggested is 1:40.

      The key point is that the memory requirement is orders of magnitude smaller than the storage space requirement.

      Regards,

      Sunil

