Event Processing in Action -- rule state management

 
Greenhorn
Posts: 13
Hi all.

This sounds like an interesting book.

We've dabbled with event processing (formally and informally) in the past.

One of the benefits of event processing lies in being aware of how much data can simply be thrown away.

At a basic level, especially with modern systems, with all of the auditing and monitoring traffic, there is an enormous amount of POTENTIAL data being pushed around. But, 99% of the time, that data is ignored yet still stored. It's almost always used for post-mortems after something has gone awry.

Many systems display, or make available, that data in real time, for example on running graphs monitoring activity. Very pretty graphs, but once the data scrolls off, it will likely never be seen again, save for post-mortem review.

A potential advantage of event processing is that the system can be informed as to what data is interesting at a specific time. The system can look at what rules are in place, see which events are actually "interesting", and forgo processing for those that are not.

This concept makes event processing interesting because a rule can make its decision based on the current event plus any retained state.

A simple example: say you're interested in knowing when a customer makes more than 10 orders in a 30-day period.

One way of implementing that rule is, every time an order comes in, to run "SELECT COUNT(*) FROM ORDERS WHERE CUSTOMER_ID = ? AND ORDER_DATE >= NOW() - INTERVAL 30 DAY", or some such thing, and compare the result to your threshold (10 in this case).

But that can get expensive: running a potentially large aggregate query against a large order volume, once per incoming order, adds up quickly.
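The naive approach can be sketched in plain Java over an in-memory list of orders, to make the cost concrete: every new order triggers a full re-scan, which is the in-memory analogue of the SQL count. Class and method names here are illustrative, not from the book.

```java
import java.time.LocalDate;
import java.util.List;

public class NaiveOrderCheck {
    // Re-scan all of the customer's orders on every new order --
    // the in-memory analogue of "SELECT COUNT(*) ... WHERE ORDER_DATE >= ...".
    public static boolean overThreshold(List<LocalDate> orderDates,
                                        LocalDate today, int threshold) {
        LocalDate cutoff = today.minusDays(30);
        long recent = orderDates.stream()
                .filter(d -> !d.isBefore(cutoff))
                .count();
        return recent > threshold;
    }
}
```

The work done per event grows with the total history retained, which is exactly what the stateful alternative below avoids.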

Another technique, however, is that the rule itself knows what it is interested in, and thus keeps track of its own state. The rule can track a customer's recent orders, or simply a date and a count. When a new order comes in, the rule adds 1 to the number of orders for the order date, then sums the last 30 days of orders and compares the total against the target. Even more important, though, is that once 31 days have gone by, the older data is simply removed from the system, as the rule no longer cares about it.

So the rule is tracking a moving window on the data, and it's tracking only a subset of the total data (order counts per date in this example, rather than entire orders).
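A minimal sketch of such a stateful rule: per-date counts in a sorted map, pruned as the window slides. The names are my own illustration, and it assumes events arrive roughly in time order (pruning is driven by the latest event's date).

```java
import java.time.LocalDate;
import java.util.TreeMap;

public class OrderRateRule {
    private final TreeMap<LocalDate, Integer> countsByDate = new TreeMap<>();
    private final int windowDays;
    private final int threshold;

    public OrderRateRule(int windowDays, int threshold) {
        this.windowDays = windowDays;
        this.threshold = threshold;
    }

    // Called for each incoming order event; returns true when the rule
    // fires (more than `threshold` orders inside the window).
    public boolean onOrder(LocalDate orderDate) {
        countsByDate.merge(orderDate, 1, Integer::sum);
        // Discard date buckets that have slid out of the window entirely.
        LocalDate cutoff = orderDate.minusDays(windowDays);
        countsByDate.headMap(cutoff, false).clear();
        int total = countsByDate.values().stream()
                .mapToInt(Integer::intValue).sum();
        return total > threshold;
    }

    // How many date buckets are currently retained.
    public int bucketCount() { return countsByDate.size(); }
}
```

Note that the retained state is bounded by the window size (at most one bucket per day), regardless of total order volume.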

You can see how, in a fast-moving system such as a monitoring application, you can have short temporal windows (requests/minute, say), managed locally in memory, with the rule system creating individual buckets on the fly (for example, when it sees a customer it has not seen before, it can create a holder for the rule state specific to that customer's criteria).

The dark side is that if your rule engine goes down, all of this state is lost. If all of your rules cover short time spans (say, 5 minutes), then it's likely not that important: in 5 minutes you'll be caught up.

But if you're tracking data over larger time spans, say 30 days, then that's much more of a problem.

It can be mitigated by replaying event data, perhaps with a flag to suppress firing of alerts that have, in theory, already fired. But that can certainly add to the startup time of the event application.
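The replay-with-suppression idea can be sketched as follows: state is always rebuilt, but actions are skipped while a replay flag is set. This is my own hypothetical sketch of the pattern, not an API from the book.

```java
import java.util.ArrayList;
import java.util.List;

public class ReplayableCounterRule {
    private int count = 0;
    private final int threshold;
    private final List<String> alerts = new ArrayList<>();

    public ReplayableCounterRule(int threshold) { this.threshold = threshold; }

    // During replay the rule's internal state is rebuilt exactly as
    // in live operation, but firing (the alert side effect) is
    // suppressed so already-fired alerts are not re-emitted.
    public void onEvent(String event, boolean replaying) {
        count++;
        if (count > threshold && !replaying) {
            alerts.add("threshold exceeded at event: " + event);
        }
    }

    public int count() { return count; }
    public List<String> alerts() { return alerts; }
}
```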

So the problem is really one of persisting rule state. From what little I've seen in this field, persisting the rule state doesn't seem to be common.

Does the book address this issue? I would think it would be fairly common, particularly if you want to support failover for the event processor, etc. Perhaps the more commercial systems support this; I have not seen those.

I've skimmed through the sample Chapter 10, which seems perhaps most relevant to this (you mention recoverability, which is probably what I'm primarily discussing here), but it seems pretty high-level.

 
author
Posts: 14
Hi Will. The focus of the book is on building event processing applications, not on how to build an event processing engine; that could be another book.
The question you are raising is about keeping internal state when the temporal context is of long duration. Some of the products in this area use in-memory databases with persistence capabilities, so that state is persisted (in a database, or sometimes in the main memory of a cluster of machines in a grid) and brought back into memory when required. There is, of course, a trade-off between pure memory-based state, pure persistent state, and intermediate solutions. In any event, event processing tools typically also support long-term temporal contexts.

cheers,

Opher
 