
At the recent SapphireNow conference, there was heated discussion about In-Memory technology for BI analytics (for example, the SAP Business Analytic Engine) and some discussion about the importance of applying such innovations to transactions.

At a SAP Mentor/blogger meeting with Ingo Brenckmann (Senior Director of the SAP Solution Management team) about In-Memory Computing Business Benefits and the Road Ahead at Sapphire, I was intrigued by what has been accomplished in this area – especially regarding Business ByDesign and the role of TREX in this environment. I admit I was envious of the impact of this technology on that product line – however, I didn't see anything similar regarding processes. I kept looking for – perhaps expecting – more description from SAP of the possible uses of this technology in BPM environments, but I didn't find anything that really met my needs, so I decided to explore the various options on my own.

How InMemory Technology Will Impact Process Environments: Indirect vs. Direct

If you look at how InMemory technology might influence processes, the first split is between a direct and an indirect influence. This distinction refers to whether the InMemory technology is integrated tightly with process environments (for example, changes in BPM runtimes) or is used in ways that complement existing BPM environments. Since the rest of this blog examines the direct influence of InMemory technology on processes, I'd like to first examine the indirect influence.

Indirect Impact of InMemory Technology 

If you look at the usual relationship between analytics and process environments, BI data is often used to assist users when they are working on a particular task. Charts and diagrams help illustrate certain patterns and assist in making a decision. Thus, one indirect benefit would be the use of InMemory-enhanced analytics in existing BPM User Interfaces (UIs).

 

Of course, such graphic representations of process-related data are present in existing environments where InMemory technology isn't available, but the advantage of InMemory technology is the amazing speed at which users can access and manipulate this data. Today, such graphs and charts are often static rather than dynamic. Ideally, InMemory technology would enable users to drill down into greater amounts of data in a more interactive, dynamic fashion.

Note: Based on the current status of SAP's InMemory portfolio, this could be implemented today given the availability of certain BI-related technologies (for example, the BI Accelerator and perhaps the BusinessObjects Explorer).

Besides embedded process support, there are other interesting uses of InMemory BI analysis. Another possibility would be applying this enhanced analysis to process-related metrics. A recent blog, Custom KPI measurement solution for BPM, described a NetWeaver BPM dashboard, and a few years back I wrote Guided Procedures Explorations: Process Runtime Dashboard about a similar dashboard based on Guided Procedures. If you are interested in continuous process improvement, such dashboards are critical. InMemory technology could be used to create more detailed process metrics dashboards that would allow managers to discover process-related problems more rapidly.
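To make this more concrete, here is a minimal sketch (in Python, with invented record and field names – nothing here is an actual SAP API) of the kind of aggregation such a dashboard would run continuously against in-memory process instance data:

```python
# Minimal sketch: computing dashboard KPIs over in-memory process instance data.
# All record and field names are illustrative, not an actual SAP API.
from collections import defaultdict
from statistics import mean

# Columnar representation of finished task records (one list per column).
task_records = {
    "process":    ["ComplaintHandling", "ComplaintHandling", "OrderApproval", "OrderApproval"],
    "step":       ["Classify", "Resolve", "CheckCredit", "Approve"],
    "duration_h": [1.5, 26.0, 0.5, 4.0],   # hours from task creation to completion
}

def avg_duration_per_step(records):
    """Aggregate average task duration per (process, step) - the kind of KPI a
    process metrics dashboard would refresh continuously from in-memory data."""
    buckets = defaultdict(list)
    for proc, step, dur in zip(records["process"], records["step"], records["duration_h"]):
        buckets[(proc, step)].append(dur)
    return {key: mean(durations) for key, durations in buckets.items()}

if __name__ == "__main__":
    for (proc, step), avg in sorted(avg_duration_per_step(task_records).items()):
        print(f"{proc} / {step}: {avg:.1f} h average")
```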

Another indirect influence would arise if the business objects that underlie the majority of enterprise processes were based on InMemory technology.

Such a change probably wouldn't have that great an impact on processes, because end users have only indirect access to such business objects via standard APIs, etc. Thus, users might notice improved response times when accessing such BO-based data, but fundamental changes would not be expected.

Direct Impact of InMemory Technology

Now, let’s take a look at some use cases where this revolutionary technology could have a direct influence on process environments.

Before we continue, I have to make a distinction between process runtime and designtime environments. "Runtime" refers to those environments where process instances are created, administered and monitored. "Designtime" refers to those environments in which process design takes place.

The Impact on Runtime Environments 

Abstract process designs / structures and the data of process instances must be stored in some form. Another opportunity would arise if the BPM environment itself were based on InMemory storage – that is, if the underlying data storage of the BPM environment were moved to InMemory.

 

Besides the performance boost that might occur, more intriguing are the new possibilities of interaction between the process-related objects (the structure of the process and the data from the process instances) and the data stored in the business objects. At the current time, this interaction takes place via "standardized" interfaces (usually web services) – the interaction is indirect. What would happen if process instances had direct interaction with business objects at a far deeper level?

The resulting potential is evident in a process metrics dashboard (as mentioned above) based on a common InMemory store for processes and business objects, which would enable analysis of how a particular process directly impacts the underlying business objects. For example, you might explore how process KPIs (for example, the time to complete a particular process task) impact business-object-related KPIs (manufacturing delays, etc.). Another possibility would be to allow conditions in the underlying business objects to dynamically affect the basic structure of the related processes.
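As a purely illustrative sketch (all object names and KPIs are hypothetical), such a correlation could look like a direct in-memory join between process instances and the orders they operate on, instead of a web-service round trip per object:

```python
# Sketch: relating a process KPI (approval time per order) to a business-object
# KPI (manufacturing delay of the same order) when both live in one in-memory
# store. All identifiers are hypothetical.

# Process instances, keyed by the business object (order) they operate on.
process_instances = {
    "order-1001": {"approval_time_h": 2.0},
    "order-1002": {"approval_time_h": 30.0},
    "order-1003": {"approval_time_h": 5.5},
}

# Business objects with their own KPIs.
orders = {
    "order-1001": {"manufacturing_delay_d": 0.0},
    "order-1002": {"manufacturing_delay_d": 3.5},
    "order-1003": {"manufacturing_delay_d": 0.5},
}

def joined_kpis():
    """Join the two KPI sets on the shared order id - a direct, in-memory 'join'
    rather than a web-service call per object."""
    for order_id, proc in process_instances.items():
        bo = orders.get(order_id)
        if bo is not None:
            yield order_id, proc["approval_time_h"], bo["manufacturing_delay_d"]

if __name__ == "__main__":
    for order_id, approval, delay in joined_kpis():
        print(f"{order_id}: approval {approval:.1f} h -> delay {delay:.1f} d")
```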

Imagine if social network-based data associated with a particular brand were stored in an InMemory-enhanced business object. Forrester analyst Clay Richardson points to this possibility regarding the influence of social networks on processes – the direct influence of such buzz on the process instance itself.

Many speculate that social BPM will have the greatest impact at runtime.  I refer to this as “runtime process guidance,” and we are starting to see really good examples of this emerge for customer service/customer experience processes – where processes use social analysis to determine “the next best action.” 

Note: Some might say that such possibilities also exist without InMemory technology. The problem is that without it, the processing speed needed to analyze such masses of data is so slow that the interesting scenarios aren't really acceptable or practical for end users.
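A hypothetical sketch of how such buzz might steer a running instance toward the "next best action" – the thresholds, fields and action names are all invented for illustration:

```python
# Sketch: letting social "buzz" stored with a brand's business object steer a
# running process instance toward a "next best action". Everything here is
# illustrative; no real BPM or social API is used.

NEGATIVE_BUZZ_THRESHOLD = 0.6  # assumed share of negative mentions that triggers escalation

brand_buzz = {
    "mentions": 1840,
    "negative_ratio": 0.72,   # fraction of mentions classified as negative
}

def next_best_action(buzz, default_action="standard_reply"):
    """Pick the next step for a customer-service instance based on live buzz."""
    if buzz["mentions"] > 1000 and buzz["negative_ratio"] > NEGATIVE_BUZZ_THRESHOLD:
        return "escalate_to_retention_team"
    return default_action

if __name__ == "__main__":
    print(next_best_action(brand_buzz))   # -> escalate_to_retention_team
```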

The Importance of Context

Users are becoming increasingly critical of process environments, and the associated expectations regarding such systems are also growing. Users want processes that respond to their individual context.

A recent blog from Greg Chase, Why Business Rules Are Important in Real-World BPM, stresses the importance of adding user-specific intelligence to process environments to improve user adoption.

If you make applications smarter so they pre-fill related data and tailor themselves to the specific context of the user and the process instance, you’ll make it much easier for casual business users to engage with a process. This is instead of creating a process that requires more power users to handle overly complicated data entry tasks.

As seen in The Next Step in SAP Business Process Optimization – Mobility from Kevin Benedict, user expectations regarding mobility are also changing: processes must take a user's location into account.

Decision makers are not stationary. They are decision makers because of their experience and value to the company.  They are mobile. If all of these optimized business processes assume a stationary decision maker, they fail to recognize reality.  All business processes and IT solutions today must assume that the key human players in a business process are mobile. Decisions must be able to be made in mobile environments.

Thus, a user's context (location, the projects he is currently involved in, the customers he supports, etc.) is critical for user acceptance of process environments.

Such personalized processes, however, require a fundamental change in how such runtime environments function. InMemory technology would be ideal to deal with the immense data storage and fast processing speeds necessary to implement such developments.

 

 

This context information could be stored in business objects based on InMemory storage.
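To illustrate the idea (the context fields and UI variants are assumptions, not an actual business-object model), a context-aware runtime might pre-fill a task and pick a UI variant roughly like this:

```python
# Sketch: using a user's context (location, projects, customers) to pre-fill a
# task and pick a suitable UI variant. The context fields are assumptions, not
# an actual business-object model.

user_context = {
    "user": "jdoe",
    "location": "on_site_customer",     # e.g. derived from a mobile device
    "current_customer": "ACME Corp",
    "active_projects": ["rollout-2010"],
}

def prepare_task(task_template, context):
    """Return a task instance tailored to the user's context."""
    task = dict(task_template)
    task["customer"] = context.get("current_customer", "")
    task["project"] = (context.get("active_projects") or [""])[0]
    # A mobile, on-site user gets the reduced form rather than the full desktop UI.
    task["ui_variant"] = "mobile_short_form" if context["location"] == "on_site_customer" else "desktop_full_form"
    return task

if __name__ == "__main__":
    template = {"name": "Record customer complaint", "customer": "", "project": "", "ui_variant": ""}
    print(prepare_task(template, user_context))
```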

I’ll discuss the implications of the inclusion of context on process design later in this blog.

Exceptions

I've always thought exceptions were one of the most intriguing parts of BPM. Their very existence represents a threat to the structure and order that are fundamental to the concept of the process as it exists in the modern enterprise. This diversity reflects the complex context in which a particular process instance exists. In the previous discussion, we examined the importance of personal context. Context, however, can also refer to the business objects involved (a customer, an order, etc.). When the distinct context of a momentary snapshot of a situation wasn't anticipated during process design, an exception occurs in the runtime environment.

Perhaps it is the difficulty of dealing with such complexity that leads most companies to handle such problems manually (as Greg Chase described recently in SapphireNow Day 1: BPM Communities of Pod People Straying Off the Happy Path in an Agile, Sporty Way).

She provided a unique explanation about how BPM is handy for handling exceptions to core processes.  As Suja puts it, “The ‘Happy Path’ is the well tested path.” – such as the core process provided by SAP BusinessSuite. 

Extending on Suja’s comment above, you have to consider how well your company handles cases where a request or task falls out of the “happy path” and into manual exception handling.  Dealing with these kinds of exceptions, sources of inconsistent interaction with customers and suppliers, are very costly in terms of manual labor, and can seriously damage customer relationships.

Exceptions are often viewed with a malice that borders on pure teeth-gritting hatred. The job of the process designer is to exterminate these pests that blemish the purity of the process. However, as Peter Evans-Greenwood comments in a blog from James Taylor, these exceptions often represent the differentiator for an enterprise.

Much more interesting is the exception rich processes which we can’t nail down. We spend our time mapping our decision trees and analysing business processes to try and find a way to stabilise and optimise the business process. We might even apply the bag of tricks we’ve learn’t from Six Sigma and LEAN.

It’s the wrong solution to the right problem. Our highly valued tools for process optimisation work by minimising and managing the variation in business processes. Reducing variation enables us to increase velocity, automate via BPM, and thereby minimise cost. But it is this processes variation, the business exceptions, which can have a disproportionate effect on creating value. In a world where we’re all good, it’s the ability to be original that enables you to stand out.

Rather than fighting against exceptions, the idea is to take advantage of them.

This requirement, however, necessitates a fundamental change in how processes are designed. 

The Impact on Designtime Environments

It is difficult to imagine InMemory-related changes restricted to process runtimes. Without corresponding changes in designtime environments, the full potential of this technology cannot be exploited. Once the underlying data from process instances is stored in a columnar format, how will process design evolve?

As I described above, the particular path a process instance follows will be based on a wide variety of factors and the distinct context of those involved. By itself, this change leads to an amazing and heart-stopping increase in complexity. How can you design a process to reflect all possible paths? If you depicted all these possibilities, you would have a process design so complex that process maintenance would be impossible and performance would in all likelihood be horrible.

Currently, a certain degree of process flexibility is provided via business rules. The idea would be to simplify processes and externalize the context information. The use of InMemory technology would allow for more complex rules and faster application of such rules to process steps.
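A minimal sketch of this "simple process, externalized rules" idea – the process model keeps a single generic approval step, while an invented rule set evaluated against the instance context decides the outcome:

```python
# Sketch of "simplify the process, externalize the context into rules": the
# process step stays generic, and a rule set evaluated against the instance
# context decides the outcome. Rule contents are invented for illustration.

# Each rule: (condition over the instance context, resulting decision).
approval_rules = [
    (lambda ctx: ctx["order_value"] > 50_000,       "manager_approval"),
    (lambda ctx: ctx["customer_rating"] == "poor",  "manager_approval"),
    (lambda ctx: ctx["order_value"] <= 1_000,       "auto_approve"),
]

def decide(rules, context, default="clerk_approval"):
    """Return the decision of the first matching rule, else a default.
    The process model only contains one 'Approve order' step; the variability
    lives entirely in this rule set."""
    for condition, decision in rules:
        if condition(context):
            return decision
    return default

if __name__ == "__main__":
    print(decide(approval_rules, {"order_value": 75_000, "customer_rating": "good"}))  # manager_approval
    print(decide(approval_rules, {"order_value": 400, "customer_rating": "good"}))     # auto_approve
```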

As James Taylor describes in a blog, business rules are especially useful for certain types of processes.

Core processes, however, are much more stable. Everyone knows the paths that work through the process, the activities involved are well defined. Changes to these processes are a big deal, disruptive to the company regardless of how they are implemented. In these processes what changes are the decisions and the business rules behind those decisions – what makes a customer eligible for this level in the loyalty program, what price is this policy for this customer, what’s the best retention offer to make. These decision changes can be mistaken for a process change if the decision has not been broken out but they are not process changes – the activities, their sequence and their purpose all remain the same. The decision-making behavior of a specific activity is what changes.

However, as Peter Evans-Greenwood comments on the same blog, another approach may be necessary to deal with those Edge processes – remember the ones mentioned above with all those pesky exceptions.

An alternative approach is to embrace this variation. Simplify the processes until it is stable, reducing it to its essential core. Treat exceptions as alternate scenarios, compiling the set of commonplaces required to support the vast majority of exceptions. We can then use a backwards chaining rule to bind process instances to the appropriate commonplace in a variation of Jim Sinur‘s “simple processes, complex rules” approach.

This approach reduces the complexity of an ever changing process by transforming change into the evolution of an appropriate suite of commonplaces, and the goal-directed rules used to bind them to process instances.
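To make Peter's suggestion a bit more tangible, here is a tiny, purely illustrative backward-chaining sketch that binds a process instance to a "commonplace" by proving the goal that scenario requires; the rules and facts are invented:

```python
# Sketch of the "simple processes, complex rules" idea: a tiny backward chainer
# that binds a process instance to a "commonplace" (a pre-built exception
# scenario) by proving the goal that scenario requires. Rules and facts are
# invented for illustration.

# Rules: conclusion <- list of alternative premise sets. Facts come from the instance.
rules = {
    "use_expedited_fulfilment": [["customer_is_strategic", "order_is_late"]],
    "customer_is_strategic":    [["annual_revenue_high"], ["named_key_account"]],
}

def prove(goal, facts, rules):
    """Backward chaining: a goal holds if it is a known fact, or if all
    premises of some rule concluding it can be proven."""
    if goal in facts:
        return True
    for premises in rules.get(goal, []):
        if all(prove(p, facts, rules) for p in premises):
            return True
    return False

def bind_commonplace(instance_facts, commonplaces):
    """Return the first commonplace whose goal can be proven for this instance."""
    for name, goal in commonplaces:
        if prove(goal, instance_facts, rules):
            return name
    return "standard_path"

if __name__ == "__main__":
    facts = {"named_key_account", "order_is_late"}
    commonplaces = [("ExpeditedFulfilment", "use_expedited_fulfilment")]
    print(bind_commonplace(facts, commonplaces))   # -> ExpeditedFulfilment
```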

Some of you may now be thinking: "STOP. Dude, this blog started out talking about InMemory technology and now we are talking about business rules and commonplaces. I don't see the connection – you've lost me." For me, InMemory technology represents the ability to analyze huge chunks of data at a speed that was not previously possible. I'm throwing the InMemory stone into the BPM pond and following the waves as they expand and grow: you look at the impact of InMemory technology on one part of the BPM pond and see the impact that a change in one area has on its counterparts. I agree it is impossible to design a process that takes into account every possible location of a user – regardless of whether you use business rules or not – but my intention is to propose methods that would enable designers to start taking advantage of this technology. I found Peter's comment fascinating, and I looked for a technical foundation with which to implement it. InMemory technology isn't a panacea, but it is a foundation on which solutions may be built.

If business rules and other standard tools in existing process environments are inadequate to deal with the potential of InMemory technology, then perhaps even more radical / fundamental changes are necessary. In a recent blog, Dennis Moore describes one such potential shift – towards a focus on events in process environments.

If HassoDB understands that an object is being stored, updated, or accessed, HassoDB could publish an event – and that event could be consumed by new applications that speed up integration between business processes, allow the insertion of new business processes, or that simply generate alerts for users.

SAP even has a design for such a capability: SAP Live Enterprise from SAP’s Imagineering team.

How could this capability be deployed? Well, imagine that a sales person gets an alert every time their customer makes a payment, is late with a payment, submits a complaint or service request, or places an order on-line. Or that a salesperson sets up an “auto-responder” for those events, thanking the customer or asking her for feedback as appropriate. Event-based capabilities would greatly speed up and improve service.

Another example could be in integrating business processes. Rather than hard-coding the “on-boarding” process for a new employee, there could be an event-driven integration. The hiring process could generate an event when an employee’s starting date is set; other processes could subscribe to that event, and do the appropriate processing, including reserving an office, preparing the HR orientation, ordering a company credit card, requesting an entry badge, or assigning and configuring a computer. Whenever the on-boarding process changes, rather than editing the process definition, taking the application down in the process, and restarting it, instead an administrator would just load a new action and subscribe it to the appropriate event. 
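A minimal publish/subscribe sketch of this event-driven on-boarding idea (the bus, event name and handlers are illustrative only – this is not the SAP Live Enterprise design itself):

```python
# Sketch of the event-driven on-boarding idea: instead of hard-coding the
# sequence into one process definition, downstream steps subscribe to an
# "employee_start_date_set" event. The bus and handlers are illustrative only.

from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
bus.subscribe("employee_start_date_set", lambda e: print(f"Reserve office for {e['name']}"))
bus.subscribe("employee_start_date_set", lambda e: print(f"Order badge for {e['name']}"))
bus.subscribe("employee_start_date_set", lambda e: print(f"Configure laptop for {e['name']}"))

if __name__ == "__main__":
    # Changing the on-boarding process later means adding or removing a subscriber,
    # not editing and redeploying the process definition.
    bus.publish("employee_start_date_set", {"name": "Jane Doe", "start": "2010-08-01"})
```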

But there are also other potential design-time-related changes that aren’t as revolutionary: 

  • One immediate opportunity would be an analysis of the patterns existing in process instances that have already finished – which parts of a particular process design are used most frequently, which paths are used infrequently, etc. This information could be used to enhance the design environment so that the presentation of the design elements reflects their actual usage: for example, a particular process path could be drawn in a different color or line thickness based on its degree of utilization, updated in real time (see the sketch after this list).
  • Process simulation based on InMemory technology. In a podcast, Hasso Plattner, Chairman of the Supervisory Board at SAP, describes simulation as one of the new advantages of using InMemory technology for enterprise resource planning. Similar functionality might also be possible in BPM design environments, where designers could simulate various possible process paths before moving to a runtime environment.
  • I liked the ability to discover relevant participants for projects demonstrated in the SAP project Elements, where an analysis of existing social networks, mail and other sources helps users discover others who might be able to provide useful information or who are ideal candidates for collaboration.
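Here is the sketch promised in the first bullet: deriving the stroke width of each process path in the designer from how often that path was actually taken by finished instances (the transition log and scaling are invented):

```python
# Sketch: deriving line thickness for the process designer from how often each
# path was actually taken by finished instances. The transition log and the
# scaling are invented for illustration.

from collections import Counter

# Each entry: the (from_step, to_step) transition taken by a finished instance.
transition_log = [
    ("Classify", "Resolve"), ("Classify", "Resolve"), ("Classify", "Resolve"),
    ("Classify", "Escalate"),
]

def line_widths(log, min_width=1, max_width=8):
    """Map each transition's relative frequency onto a stroke width for the
    design canvas."""
    counts = Counter(log)
    most_common = max(counts.values())
    return {
        edge: min_width + (max_width - min_width) * count / most_common
        for edge, count in counts.items()
    }

if __name__ == "__main__":
    for edge, width in line_widths(transition_log).items():
        print(edge, "-> stroke width", round(width, 1))
```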

 

 

It would be great to have something similar to Elements in a process design / Social BPM environment, where data from process instances as well as other sources (corporate social networks) is stored in InMemory and used to select the individuals who are the best candidates to collaborate on process design.
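As a rough illustration (data and weights invented), such a selection could rank candidates by combining process-instance experience with social-network proximity:

```python
# Sketch: ranking candidate collaborators for a process design by combining
# signals from process instances and a corporate social network, both assumed
# to be held in memory. All data and weights are invented.

process_experience = {          # number of instances of this process each user worked on
    "anna": 42, "ben": 3, "carla": 17,
}
social_ties = {                 # strength of social-network connection to the process owner
    "anna": 0.2, "ben": 0.9, "carla": 0.7,
}

def rank_candidates(experience, ties, w_experience=0.7, w_ties=0.3):
    """Simple weighted score: heavy on hands-on process experience, lighter on
    social proximity. Returns candidates sorted best-first."""
    max_exp = max(experience.values()) or 1
    scores = {
        user: w_experience * (experience.get(user, 0) / max_exp) + w_ties * ties.get(user, 0.0)
        for user in set(experience) | set(ties)
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for user, score in rank_candidates(process_experience, social_ties):
        print(f"{user}: {score:.2f}")
```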

Conclusion 

The usage of InMemory technology in other environments (BI, etc.) is usually focused on speed. In both runtime and designtime process environments, the main benefit of InMemory technology involves increased flexibility as well as the ability to better respond to the particular context in which a process takes place. The result is a fundamental change in the nature of the process: the assumption that its structure is fixed across all process participants and situations no longer holds.


8 Comments


  1. James Geddes
    Thank you. This was a very interesting post. But I must confess: I’m confused. I think you realised your readers might be. (“STOP. Dude…”). And my confusion extends further than just this blog.

    There seems to be a general agreement that the InMemory technology SAP has been talking about is revolutionary. As a technical goal, I see how that’s true. I also see the potentially enormous effect it has on BI. But that’s because the OLAP structures involved in BI are structured at design-time for certain kinds of reporting. They have to be, because BI presents aggregates and trends whose calculations are intensive and time-consuming. InMemory technology will allow those calculations to proceed far more quickly, allowing access to more relevant, up-to-date data. Perhaps more importantly, because calculations will be more rapid, users will be afforded more flexibility to evaluate data in new ways that won’t have to be decided upon and implemented beforehand, as they often are now.

    What I don’t get is all the other applications. As far as I can tell, few of the innovations you provide in your blog will become possible because of InMemory technology — many of them seem possible without it.

    “For example, you might explore how process KPIs (for example, time to complete a particular process task) impact business-object-related KPIs (manufacturing delays, etc).”

    Why can’t you do this now? The data is accessible, and you can analyse the trends. You seem to be saying that because business objects and the BPM environment both reside InMemory, the data will somehow become more accessible. But I don’t understand how this is true. Two objects in memory are no more easily able to interface with each other than two objects in a database — they just do so more rapidly. We will still need to access objects via APIs, programming layers, and services. Just because we no longer have our objects stored in disk-based systems doesn’t mean we can suddenly troll through the memory, picking objects at random and inspecting their properties any more than we can do in a database at the moment. Right?

    “Imagine if social network-based data associated with a particular brand were stored in an InMemory-enhanced business object.”

    If this data is coming from multiple systems anyway (and presumably it would be?), is the database the bottleneck? Just how much would having the data sit in memory speed this up?

    “Such personalized processes, however, require a fundamental change in how such runtime environments function. InMemory technology would be ideal to deal with the immense data storage and fast processing speeds necessary to implement such developments.”

    Would it be? We’re talking about things like current location, and project and customer lists. Surely we could pull this kind of context information without InMemory technology? Don’t we already?

    “If HassoDB understands that an object is being stored, updated, or accessed, HassoDB could publish an event – and that event could be consumed by new applications that speed up integration between business processes, allow the insertion of new business processes, or that simply generate alerts for users.” (I know this isn’t your statement, but you refer to it in your points.)

    How is recognising that an object is being stored, updated or accessed intensive in a way that InMemory would solve? How is raising events and reacting to them intensive in a way that InMemory would solve? (Furthermore — is event-driven design revolutionary in the BPM space?)

    “… for example, the presentation of the design elements could reflect their actual usage. For example, a particular process path could be drawn in a different color or line thickness based on its degree of utilization.  This could occur in real-time, based on actual usage.”

    Is this an intensive process? We’re just counting the number of times particular paths are followed.

    “I liked the ability to discover relevant participants for projects that is demonstrated in the SAP project Elements where an analysis of existing social networks, mail and other sources helps users discover others who might be able to provide useful information or are ideal candidates for collaboration.”

    Again, is this intensive? Is this not the kind of thing that we could do right now (perhaps even faster) using existing indexing technologies?

    You know a great deal more about the subject that you’re discussing (BPM) than I do, and I’m sure you’ll be able to shed some light on the questions I raised. I ask them only because I think that people are trying too hard to look for problems for which InMemory is the solution. I fully agree it means a massive advance for BI, but it’s only useful where we need to access and aggregate a lot of data flexibly and quickly. (It’s not going to increase the speed of our CPUs — so more complex logic, where the CPU is the bottleneck, will derive no benefit from this technology.)

    Are there really use cases everywhere for that?

    1. Richard Hirsch Post author
      Killer comment.

      You aren't the only one who is confused by SAP's InMemory strategy. I just read Ethan Jewett's excellent blog "Why in-memory doesn't matter (and why it does)" (http://bit.ly/4zKzp5) where he also makes the point that many of the advantages ascribed to InMemory (for example in BWA) are largely based on other technological / functional changes. Many have viewed SAP's announcements in this space as marketing fluff and I don't have the expertise or personal experience with InMemory technologies to dispute such fundamental questions.

      My intention in the blog was an attempt to take this technology, which most pundits have admitted is useful in BI, and apply it to another technical area (in my case, BPM). Yes, most of these suggestions could be implemented via traditional DB functionality, but I'm assuming that the huge amounts of data (for example, all tweets associated with a particular brand and the data from all instances of the process dealing with customer complaints) necessary to identify useful patterns would require the analytical power / speed of InMemory technology to provide realtime results / monitoring of a firm's performance.

      As you say in your last paragraph – “…but it’s only useful where we need to access and aggregate a lot of data flexibly and quickly” – for me – that is the main use case of using InMemory technology in process environments. It is exactly this speed and flexibility which is necessary to make my ideas possible.  Processing speed isn’t an end but rather an enabler to take advantage of the patterns hiding in the data.

      1. James Geddes
        Thanks for taking the time to reply. One of the most interesting points in Ethan’s blog is where he mentions his concerns that InMemory might become a crutch. (“The massive BW query on a DSO is slow? Throw the DSO into the BWA index.”)

        He's afraid of what he thinks might become a tendency to use InMemory to enable processes that _would have been possible anyway_ in a way that is less stable and generally less than optimal. I think this is where his commentary speaks to mine: so many of the things we're throwing around are possible today, and don't need a broader application of InMemory to become viable.

        I think we broadly agree, though: this is an exciting new technology, and while it’s easy to see quick wins where they don’t really exist, we need to start looking at where InMemory can be used to improve the solutions we deliver. This blog does precisely that.

        1. Ethan Jewett
          Hey! No fair giving a preview of my next blog on the topic 😉

          I must say that I found both the blog and the comment very insightful. Definitely raising the bar on the in-memory discussion in the SAP ecosystem.

          What you get at, James, around using technological crutches to enable processes that would have been possible anyway (if implemented correctly) is something I often struggle with when writing on this topic. On the one hand, we should be demanding that our processes be implemented as efficiently as possible, and where they are not optimal they should be optimized. On the other hand, it is the case that in the real world it is often easier to throw hardware at a problem when time and expertise are in short supply. I have a tendency to advocate for perfection when we live in an imperfect world, but I do recognize that sometimes a working non-optimal algorithm on monster hardware is the answer that a company is going to choose.

          That said, if a company is hoping to gain competitive advantage from its processes or analytics, then there must be an internal culture in those areas that demands perfection. Yes, the culture of perfection must be moderated by tactical and strategic decisions about where to focus effort, but the culture must be there.

          I find BWA very interesting because it theoretically allows an organization to trade money for a level of performance they could previously only achieve by investing a lot of time and skill. I don’t necessarily see BWA or “in memory” as enabling previously unthinkable business processes or analytics, except in some special cases.

          That said, I’m all for dreaming up new process and analytics scenarios, and that’s where I found this blog very insightful.

          1. Richard Hirsch Post author
            Although I agree that just throwing memory at a problem might not be the ideal solution, it might be appropriate where it is not possible to wait for the optimal one. If InMemory technology can solve the problem in a week instead of 2 months, this difference could be essential and provide a real competitive advantage. The interesting decision is when companies should pick the solution based on that "non-optimal algorithm on monster hardware" rather than biting the bullet and finding the experts necessary for the optimal solution. When the requirements are new and unconventional (as I hope I proposed in the blog), the non-optimal path might be the only one possible.

            @Ethan Don’t want to take the thunder from your next blog but I’d be interested to hear about those special cases where BWA / InMemory would enable ‘unthinkable business processes’. 

            1. James Geddes
              All true. When BWA/InMemory can deliver a much-needed innovation in a quarter of the time, and there _is_ a competitive advantage to be gained in delivering it rapidly, I think we have a very valid use case.

              I suspect that there are instances where this application would be inappropriate, and I think a lot of this has to do with Ethan’s point about volatility. It’s a point that should be glaringly obvious, but few people have devoted many column inches to: this technology is fine for analysis of unchanging data. But it’s not fine (and isn’t going to be any time soon) to use RAM as a backing store for your business applications, because you’re going to end up losing data that you can’t afford to lose.

              We can create inefficient solutions that are only effective when they have up-to-date access to all their data in memory. This might be fine when companies first adopt those solutions. But should parameters change, and we need to begin writing to our data store more frequently, we might find our reliance on InMemory to be a limitation.

              I admit, however, that I haven’t thought this through fully — I’ll need to consider more carefully where the line is between leveraging InMemory effectively to deliver solutions more quickly, and using it in a way that is irresponsible/lazy, and potentially limiting in the future.

            2. Ethan Jewett
              Don’t worry, I didn’t plan on it being too thunderous 🙂

              Before I talk about the types of special technical processes (perhaps backing business processes) that in-memory approaches might enable, I do want to point out that BWA and in-memory computing are two different things. BWA caches its indices in memory (for the most part), but in my mind the important thing about BWA is that it fairly efficiently automates BI/DW data modeling, partially by revamping the data modeling techniques to use a column-based approach and compression, and partially by using in-memory techniques to cover up the areas in which we don’t know how to automate efficiently.

              We can see this “cover-up” approach in BWA’s answer to the “What data and indices should we cache in RAM?” question, to which BWA answers, “All of them!”. This type of cover-up can be a tad expensive, but a lot of companies will find the cost is outweighed by the benefit.

              Ok, now on to the actual question …

              I’ve spent a fair amount of time thinking about this, and what I’ve come up with is that moving all data processing to physical RAM is unlikely to give you huge gains with any given algorithm. The kind of algorithms that it really will help with are ones that require a lot of repeated, random (non-sequential) data access (reads and/or writes), and can put up with a certain amount of volatility.

              Certain types of optimization algorithms fall into this category. I believe the SAP APO system uses several of these types of algorithms, and has been running in-memory by way of LiveCache for maybe a decade now. BI-, analytics-, and reporting-caches fall into this category and have been running in-memory for maybe 2 or 3 decades. Local variable storage is perfect for an in-memory data model and has been on physical RAM for 4 or 5 decades and moved mostly to even faster processor-based caches over the last 2 decades. I’m waiting for BI vendors to start marketing their products as “in L3 cache”. (I kid. I kid. But only sort of.)

              One more recent type of business process that in-memory techniques have made possible is large social networking websites. According to reports (I read it on the ESME mailing list so it must be true 🙂), Facebook caches almost 200% of its data in memory. That number may be out of date, and Facebook may have more efficient approaches now, but I think we can say that Facebook was made possible for at least a small period of time by simply throwing memory at a very difficult scaling problem.

              Note that this is almost the same answer to a performance efficiency problem as BWA’s:

              Us: “How much of our data do we cache in memory?”
              Facebook: “All of it! (Twice, just to be sure, because we’re going to get totally killed if we have more than a few % cache misses from our queries.)”

              Obviously, as James has pointed out, this is not an end-state solution, but rather a temporary bridge to a more permanent solution that will be facilitated by a better understanding of our data. Same goes for BWA. Data is growing faster than we can manufacture RAM, so we’ll need to find a better way, but for now, for some problems, just throwing lots of RAM at the headache makes a lot of sense. For other problems it won’t make a bit of difference.

  2. Peter Evans-Greenwood
    Hiya,

    Interesting discussion. Unfortunately I'm a little late to it, with the last couple of hours lost to a painful SCN registration process.

    As others have pointed out, moving data from disk to memory only has the effect of reducing latency. In the past this reduction in latency has been used to move offline BI functions online, where we can make realtime use of them. From a BPM point of view this doesn't seem particularly interesting, at first glance, as there is no real need to leverage these BI functions in real time. However, what if we reconsider the basis for BPM?

    BPM is considered a programming challenge, and this has driven the evolution of BPM platforms to date. In this context, there is some benefit from reduced latency, but not a lot. After all, a process is effectively a programme, and as with programmes, reducing data latency will not have much of an impact on the overall solution unless we're not hitting our end-to-end service level.

    What, however, if we consider BPM to be a planning challenge, rather than a programming one? This goes back to where Richard quoted my comment on James Taylor's blog post. If we consider BPM a realtime planning challenge — making exceptions unexceptional — then we've created a whole new ball game.

    I’m currently doing some workflow / process work where I’m using a lot of those ideas around “BPM as a planning challenge”. Rather than try and get the business to develop “BPM programmes”, we’re helping them find tasks (i.e. conventional, exception free, business process fragments), business rules and something we’re calling points of variation. A point of variation is simply a point in the process where we want to allow a managed number of business exceptions.

    The idea is to streamline the process, capturing the cow path, but provide the business with a tool to specify where in the process the art lies: where the justified exceptions live. The business is calling this "balancing art vs. science", where science is the compliant, repeatable and streamlined processes, while art is the exceptions which create all the value. They also think that the approach is "awesome" (their word, not mine), so it is hitting the mark.

    The limitation with this approach is technology. Now that we’re capturing and managing business exceptions (i.e. alternative processes and the — somewhat — complex rules which specify that they are applicable), it’s fairly obvious that there is potential to leverage a lot more, and a lot more expensive, business drivers than a traditional linear process is capable of. This is where InMemory ideas start adding value, as they let us bring more drivers to the table for the business to use.

    If you consider BPM as a programming challenge, then it can be very difficult to leverage complex BI measures in realtime without a whole lot of spaghetti BPEL code, or burying the lot in an infrequently used rules base to drive a choice between a small number of options. The end result is that you just don't bother: while the data is useful in theory, in practice it's not worth the bother.

    However, with the planning approach integrating these drivers is easy, and the limitation becomes the cost of accessing the drivers.

    Imagine if we extend the idea of a “session” well beyond the current scope of transactional (e.g. shopping cart) data that it currently contains. We should start building up a realtime in-memory BI picture of the stakeholder (customer, employee, partner) which we use to drive exception rich edge processes. These exceptions might be tuned to context, preferences, goals and business drivers etc. We could do some pretty cool stuff with that: stuff that has a clear business benefit.

    I think HassoDB — and even the whole complex event processing space — is a distraction in this instance. We can use events to trigger processes (and one can argue that all processes are triggered by events) without HassoDB & CEP. The interesting thing is what we decide to do with the event — how we select which business exception to use — and here HassoDB and CEP both fall flat.  HassoDB is an event generator, while CEP relies on simple forward chaining rules.

    For extra credit, if you can slash the cost of a round trip from BPM to rules, then that would be greatly appreciated!

    r.

    PEG

