Friday, November 30, 2007

Inter Process Orchestration

In this white paper I describe a framework I have, in fact, already started building that enables inter-process orchestration, with due respect to the BPELs of the world. I have named this framework "Process Choreographer".
Note: a process can be any Java object.

Process Choreographer is a lightweight Java framework for building workflows that uses Java beans to orchestrate events. You can think of Process Choreographer as a simple alternative to BPEL where the workflows are all specified and implemented using Java code rather than declarative XML.

When building highly concurrent or distributed applications (like insurance claim processing), it is very common for many events to be happening, often asynchronously and on different threads, and it is very common to need to perform some kind of workflow or orchestration across these events.

Of course, one can certainly use something like BPEL to solve these kinds of problems. However, that is often a bit heavyweight and complex, and you just want a bean-based workflow that uses regular Java code to represent the activities involved. Furthermore, BPEL always comes with a performance cost. I have heard many times from clients and partners, "we want to run at least 10 transactions per second"; the Process Choreographer framework is intended to be an answer to these problems.

One of the main goals of Process Choreographer is to reuse what the Java platform is already good at in the workflow space, and then supplement it with the missing abstractions rather than reinventing the wheel. Hence:
1) Write a plain old Java class (where everything begins)
2) Use normal Java code constructs like if-else, while, etc.
3) Use regular Java fields in your workflow class, then use JDBC/DAO/JPA to deal with persistence

While working with various clients and their concurrent or distributed applications, I have seen many use cases that need orchestration among multiple concurrent events.

1) Starting dependent services, where a parent component depends on its child components starting and the start process may be asynchronous, in different threads; e.g. there may be a recovery process on startup which you need to wait for.
2) Implementing master-slave type protocols, where you need to monitor the state of the master and the slave to decide what to do, together with dealing with transitions from Started to Recovering to Running and then maybe to FailingOver, etc.
3) Implementing message orchestration. You may want to implement some simple orchestrations, e.g. waiting for either a response to arrive or a timeout to fire.

Process Choreographer aims to overcome the problems above by providing the ability to do the following (a rough sketch of the intended API follows the list):

1. Sequence: execute processes one after another

2. Parallel Split: execute processes in parallel

3. Synchronization: synchronize two parallel threads of execution

4. Exclusive Choice: choose one execution path from many alternatives

5. Simple Merge: merge two alternative execution paths
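To make the intent concrete, here is a minimal sketch of how such a bean-based API might look, built directly on java.util.concurrent. Everything here (the Choreographer class and the sequence/parallel/exclusiveChoice method names) is hypothetical; it is not an existing library, just an illustration of the programming model I have in mind.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the Process Choreographer programming model.
// A "process" is just any Java object wrapped in a Runnable or Callable.
public class Choreographer {

    private final ExecutorService pool = Executors.newCachedThreadPool();

    // 1. Sequence: run each process one after another in the calling thread.
    public void sequence(List<Runnable> processes) {
        for (Runnable p : processes) {
            p.run();
        }
    }

    // 2. Parallel split + 3. Synchronization: fork the processes onto worker
    //    threads and wait (join) until all of them have completed.
    public void parallel(List<Callable<Void>> processes) throws Exception {
        List<Future<Void>> results = pool.invokeAll(processes);
        for (Future<Void> f : results) {
            f.get(); // propagates any failure from a branch
        }
    }

    // 4. Exclusive choice + 5. Simple merge: pick exactly one path based on a
    //    plain Java condition; both paths continue into the same code afterwards.
    public void exclusiveChoice(boolean condition, Runnable ifTrue, Runnable ifFalse) {
        (condition ? ifTrue : ifFalse).run();
    }

    public void shutdown() {
        pool.shutdown();
    }
}

A claim-processing workflow then stays ordinary Java: wrap each step in a Runnable or Callable, call parallel(...) for the fork/join part and exclusiveChoice(...) at decision points.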

Soon I will update you with the design of this framework.

Monday, November 12, 2007

Injecting Transactions Into Rules

First of all, some of you may find this slightly strange. With due respect to rules, and to not making them procedural, there are several business use cases where the THEN part needs to call a web service. I was experimenting with this idea on rule engines such as JBoss Rules and QuickRules. Interestingly, none of these rule engines stops me from calling a web service from a consequence, but there is no transaction control over that call (in particular, no distributed transaction). Thanks to the Java open source world and the JBoss Rules guys, I can play with their code.

How can we inject a transaction? Thanks to Spring and declarative transactions, this is very much feasible. So here was my game plan in a nutshell: introduce a new attribute in the JBoss Rules grammar named "Transaction", and at rule execution time start the Spring container and inject the declarative transaction around the consequence.
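As a minimal sketch of the idea, assuming the consequence routes its web service call through Spring's TransactionTemplate (the "Transaction" grammar attribute is my own experiment and not part of stock JBoss Rules, and the ClaimService interface below is a hypothetical web service facade):

import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

// Hypothetical helper that a rule consequence calls instead of invoking the
// web service directly, so the call runs inside a Spring-managed transaction.
public class TransactionalConsequence {

    private final TransactionTemplate txTemplate;
    private final ClaimService claimService; // hypothetical web service client

    public TransactionalConsequence(TransactionTemplate txTemplate, ClaimService claimService) {
        this.txTemplate = txTemplate;
        this.claimService = claimService;
    }

    // Invoked from the THEN part of a rule that carries the experimental
    // "Transaction" attribute.
    public void settleClaim(final String claimId) {
        txTemplate.execute(new TransactionCallbackWithoutResult() {
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                claimService.settle(claimId); // rolled back if this throws
            }
        });
    }
}

interface ClaimService { // hypothetical service facade
    void settle(String claimId);
}

For a true distributed transaction the TransactionTemplate would be backed by Spring's JtaTransactionManager rather than a local transaction manager.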

Friday, November 9, 2007

BPM, BRM in SOA

Introduction

It is a myth, propagated by many consultants and vendors, that a Business Rule Management System (BRMS) and a Business Process Management System (BPM) are two alternatives for creating flexible and agile enterprise solutions. There is no denying the fact that BPM uses rules for decision making. As always, the truth lies somewhere in the middle: BPM and BRMS are like two sides of a coin which cannot be separated, especially when used in conjunction with the principles of service-oriented architecture (SOA).

The underlying idea is to completely separate process logic from decision logic. So what is process logic? Process logic is the specific logic of the business process, such as controlling the sequence of activities, adhering to deadlines and handling exceptions. It is implemented using process engines like jBPM. Decision logic represents process-independent management policies and principles and is implemented using rule engines like JBoss Rules as part of a BRMS.

Principles of SOA

  1. Explicit boundaries. SOA is a design approach both for enterprise solutions and for the underlying information technology software architecture. (=> consistent result responsibility)

  2. Shared contract and schema, not class; that is, services share a common contract. SOA is strictly independent of the technology. (=> unambiguous service levels)

  3. Service orientation is an evolution of component-based architectures; that is, services are reusable. (=> proactive event sharing)

  4. Vendor independent. SOA is strictly independent of the technology.

  5. Policy driven.

  6. Services are discoverable.

  7. Business driven. The granularity of the process modeling determines the granularity of the business services.

  8. Loosely coupled.

  9. Wire format, not programming language APIs.

  10. Document oriented.

Consider a message in a flat string format:

2007-11-0642055

and compare it to a document format, in which the same data (the date 2007-11-06 and the values 420 and 55) is carried as separate, named elements that the receiver can interpret without relying on field positions.

BPM & BRM in SOA

BPM is a closed-loop model consisting of three steps:

Step 1: Analyze, plan, model, test and simulate business processes.

Step 2: Execute business processes via workflows spanning all applications (the process logic), by means of a process engine on an SOA as the infrastructure.

Step 3: Plan, monitor and control processes, their performance and the interplay of all business processes.

BRM is likewise a closed-loop model consisting of three steps:

Step 1: Describe the rules, based on the business vocabulary.

Step 2: Describe the lifecycle of the rules; perform analysis and design, simulation and testing, with execution through a rule engine.

Step 3: Monitor and control the rules, including responsibilities.


SOA transforms existing business computing assets into well-defined services. It can work effortlessly with BPM because of the reliance on services. SOA exposes services while BPM consumes them. When properly implemented, SOA opens a vast inventory of services for BPM to piece together into an all-inclusive flow of services.

BPM, in turn, defines and orchestrates that flow of services.

The drive to use SOA to create a more agile infrastructure also highlights the importance of externalizing highly volatile business logic that is subject to change.

Highly volatile business logic can be defined as business rules. In the traditional application structure these business rules are buried in the application while in a more modern approach they are separated. Just as process flow can be separated from application code into an external BPM engine, the same can be done with business rules. Separating both process flow and business rules empowers a business analyst to make operational changes more quickly, providing maximum flexibility and adaptability.

An important shared characteristic of BPM, BRM and SOA is that they all deliver tactical cost/time benefits while building a base for competitive growth. Each one contributes to the overall agility of a company's IT infrastructure in the long term.

Business application companies such as Oracle/PeopleSoft/Siebel and SAP are contributing to the new opportunity of SOAs by supplying a new class of business applications called service-oriented business applications (SOBAs). SOBAs provide extended functionality for use on Web services standards and should contribute significantly to a company's repository of business services.

So the evolution toward service orientation as an enterprise elevates process thinking, analytics and performance measurement as core competencies to achieve competitive advantage. Accordingly, companies should establish best practices around these disciplines in parallel with implementing the supporting technologies.

Process and Rules in SOA

Many business areas employ rules. Traditional examples include marketing strategies, pricing policies, customer relationship management practices, human resources activities, regulatory constraints, and product and service offerings. As rules evaluation has matured, areas such as recommendation technology have come to revolve around rules. Although the definitions of processes and rules differ, the differences are not as clear-cut as they might appear: typically both rules and orchestration concepts are intertwined within the definition of a business process.

Features of rule engines vs. orchestration engines (BPM)

Execution time: rule engines strive to evaluate business rules as quickly as possible. In contrast, orchestration engines cater to long-running processes, where services can take minutes, hours or even days to complete.

Synchronicity: rules evaluation is synchronous. In contrast, processes are intrinsically asynchronous; typically the orchestration engine invokes services in an asynchronous manner. The mechanisms required to deal with asynchronicity, such as correlation and compensation, are readily available in orchestration environments.

Statefulness: rule engines are stateless; when a rule fires, the engine typically pulls its inputs from the knowledge base, evaluates them, and then updates the knowledge base. In contrast, orchestration engines are specifically designed to hold the state (i.e., the execution context) of each active orchestration.

Determinism: a rule engine fires all rules whose conditions are satisfied, and the order in which these rules actually execute is non-deterministic. In contrast, process implementations strive for close alignment with the business: business processes are deterministic, and people go to great lengths to ensure determinism.

Rules are orchestrated by the process engine in the same way as other services. This is absolutely consistent and shows the elegance and power of SOA. Decision logic is primarily understood as business logic, as it is process-independent. This underscores the principle of reusability of services: rules are modeled and implemented once and are then available for use in different processes.

The right approach is to use a rule engine, by means of a BRMS, for all process-independent rules in an SOA. This brings the following merits:

1. reusability of services and hence higher productivity

2. rule changes go through a controlled change-management mechanism

3. rule changes are versioned and archived

4. processes are independent of the rules

Example showing how rules and process become closely coupled when the flow itself is written as rules:

WHEN status=new
THEN DO
  creditRating := call(creditService, customerData)
  status := ranked
END

WHEN status=ranked AND creditRating>=threshold
THEN DO
  floodCertification := call(floodService, propertyAddress)
END

WHEN status=ranked AND creditRating>=threshold
THEN DO
  appraisedValue := call(appraisalService, propertyAddress)
END

WHEN status=ranked AND haveCertification AND haveAppraisal
THEN DO
  decision := call(decisionService, application, creditRating, floodCertification, appraisedValue)
  status := decisioned
END

WHEN status=decisioned
THEN DO
  documentation := call(letterService, application, decision)
  status := documented
END

WHEN status=documented
THEN DO
  call(mailingService, documentation)
  status := mailed
END


Challenges in BRM

<<todo>>

Thursday, November 8, 2007

JBoss Seam

What is JBoss Seam

JBoss's definition of JBoss Seam is: "a lightweight framework for Java EE 5.0".

After reading this definition, the following questions normally start popping up:

  • Isn't Java EE (Enterprise Edition) 5.0 itself a collection of "frameworks"?
  • Why do you need another one that sits outside the official specification?

So what, then, is JBoss Seam? JBoss Seam can be viewed as the "missing framework" that should have been included in Java EE 5.0.

Key Features

  1. It sits on top of the Java EE 5.0 frameworks to provide a consistent and easy-to-understand programming model for all components in an enterprise web application.
  2. It also makes stateful applications and business-process-driven applications a breeze to develop.

So, JBoss Seam is an agile framework. That is, Seam is all about developer productivity and application scalability.

The "glue" for the Java EE frameworks

The core framework in Java EE 5.0 is composed of:

  • EJB (Enterprise JavaBeans) 3.0. EJB3 is a POJO (Plain Old Java Object) based lightweight framework for business services and data persistence.
  • JSF (JavaServer Faces) 1.2. JSF is an MVC (Model View Controller) component framework for web applications.

To make EJB3 and JSF work together, an artificial facade object (normally called a backing bean) is required to tie business components to web pages, along with boilerplate code to make method calls across framework boundaries. Gluing these technologies together is part of what JBoss Seam does.

Seam collapses the artificial layer between EJB3 and JSF. It provides a consistent, annotation-based approach to integrating EJB3 and JSF. With a few simple annotations, the EJB3 business components in Seam can be used directly to back JSF web forms or handle web UI events. Seam allows developers to use the "same kind of stuff", annotated POJOs, for all application components. In other words, Seam brings out the synergy between EJB3 and JSF.

Designed for stateful applications

Seam is designed for stateful applications. Web applications are inherently multi-user applications, and e-commerce applications are inherently stateful and transactional.

In Seam, all the basic application components are inherently stateful. They are much easier to use than the HTTP session since their states are declaratively managed by Seam. There is no need to write distracting state management code in a Seam application -- just annotate the component with its scope, lifecycle methods, and other stateful properties -- and Seam takes over the rest. Seam stateful components also provide much finer control over user states than the plain HTTP session does. For instance, you can have multiple “conversations”, each consisting of a sequence of web requests and business method calls, in a HTTP session.

Furthermore, database caches and transactions can be automatically tied with the application state in Seam. Seam automatically holds database updates in memory and only commits to the database at the end of a conversation. The in-memory cache greatly reduces database load in complex stateful applications.
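For example, here is a minimal sketch of a conversation-scoped component using Seam's standard annotations (@Name, @Scope, @Begin, @End); the HotelBookingAction component itself and its fields are hypothetical:

import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.Begin;
import org.jboss.seam.annotations.End;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;

// Hypothetical conversation-scoped component: Seam keeps its state across
// the sequence of requests that make up one booking conversation.
@Name("hotelBooking")
@Scope(ScopeType.CONVERSATION)
public class HotelBookingAction {

    private String hotelId; // state held for the whole conversation, not per request

    @Begin // starts a long-running conversation
    public String selectHotel(String hotelId) {
        this.hotelId = hotelId;
        return "book";
    }

    @End // ends the conversation; pending updates are flushed at this point
    public String confirm() {
        // persist the booking here, e.g. via an injected EntityManager
        return "confirmed";
    }
}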

Seam takes state management in web applications a big step further by supporting integration with the Open Source JBoss jBPM business process engine.

POJO Services via Dependency Injection

Seam is a “lightweight framework” because it promotes the use of POJO (plain old Java objects) as service components. There are no framework interfaces or abstract classes to “hook” components into the application.

Seam wires POJO components together using a popular design pattern known as “dependency injection” (DI). Under this pattern, the Seam framework manages the lifecycle of all the components. When a component needs to use another, it declares this dependency to Seam using annotations. Seam determines where to get this dependent component based on the application’s current state and “injects” it into the asking component.

Expanding on the dependency injection concept, a Seam component A can also create another component B and “outjects” the created component B back to Seam for other components, such as C, to use later.
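A sketch of injection and outjection with Seam's @In and @Out annotations; the component and class names (OrderAction, UserSession, Order) are illustrative, not from any real application:

import org.jboss.seam.annotations.In;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Out;

// Hypothetical component A: Seam injects the "userSession" component (B) before
// each call and outjects "currentOrder" for other components (C) to use later.
@Name("orderAction")
public class OrderAction {

    @In // injected by Seam, resolved by the field name "userSession"
    private UserSession userSession;

    @Out(required = false) // outjected back into the Seam context after the call
    private Order currentOrder;

    public void createOrder() {
        currentOrder = new Order(userSession.getUsername());
    }
}

// Minimal supporting classes so the sketch is self-contained.
@Name("userSession")
class UserSession {
    public String getUsername() { return "guest"; }
}

class Order {
    private final String owner;
    Order(String owner) { this.owner = owner; }
}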

Avoid XML Abuse

The annotation-based approach, coupled with Seam's configuration-by-exception approach, makes it possible to remove the hassle of XML configuration.

Designed for testing

Seam is designed from the ground up for easy testing. Since all Seam components are just annotated POJOs, they are very easy to unit test.

RETE

The Rete algorithm uses a rooted acyclic directed graph, the Rete, where the nodes, with the exception of the root, represent patterns, and paths from the root to the leaves represent left-hand sides of rules. At each node is stored information about the facts satisfied by the patterns of the nodes in the paths from the root up to and including this node. This information is a relation representing the possible values of the variables occurring in the patterns in the path.

The Rete algorithm keeps up to date the information associated with the nodes in the graph. When a fact is added or removed from working memory, a token representing that fact and operation is entered at the root of the graph and propagated to its leaves modifying as appropriate the information associated with the nodes.

Example

When a fact is modified, say, the age of Ram is changed from 20 to 21, this is expressed as a deletion of the old fact (the age of Ram is 20) and the addition of a new fact (the age of Ram is 21).

The Rete algorithm is intended to improve the speed of forward-chained rule systems by limiting the effort required to recompute the conflict set after a rule is fired. Its drawback is that it has high memory space requirements. It takes advantage of two empirical observations:

  • Temporal Redundancy: The firing of a rule usually changes only a few facts, and only a few rules are affected by each of those changes.
  • Structural Similarity: The same pattern often appears in the left-hand side of more than one rule.

The Rete consists of:

· a root node

· one-input pattern nodes

· two-input join nodes

The root node has as successors one-input "kind" nodes, one for each possible kind of fact (the kind of a fact is its first component). When a token arrives at the root, a copy of that token is sent to each "kind" node, where a SELECT operation is carried out that selects only the tokens of its kind.

Then, for each rule and each of its patterns, we create a one-input alpha node. Each "kind" node is connected to all the alpha nodes of its kind and delivers to them copies of the tokens it receives. To each alpha node is associated a relation, the alpha memory, whose columns are named by the variables appearing in the node's pattern. For example, if the pattern for the node is (is-a-parent-of ?x ?y), then the relation has columns named X and Y. When a token arrives at the alpha node, a PROJECT operation extracts from the token's tuple the components that match the variables of the pattern. The resulting tuple is added to the alpha memory of the node.

Then, for each rule Ri, if Ai,1, Ai,2, ..., Ai,n are (in this order) the alpha nodes of the rule, we construct two-input nodes, called beta nodes, Bi,2, Bi,3, ..., Bi,n, where:

Bi,2 has its left input from Ai,1 and its right input from Ai,2

Bi,j, for j greater than 2, has its left input from Bi,j-1 and its right input from Ai,j

At each beta node Bi,j we store a relation, the Beta Memory, which is the JOIN of the relations associated to its left and right input, joined on the columns named by variables that occur in both relations. For example if the left input relation and right input relations are:

               X    Y                     X    Z
               =========                 =========
               ann  4                     ann  tom
               sam  22                    ann  sue
                                          tom  jane

then the resulting beta memory relation is

               X    Y    Z
               =================
               ann  4    tom
               ann  4    sue
 
Finally, the last beta node of each rule is connected to a new alpha node where a PROJECT operation takes place to select all and only the variables that occur on the right-hand side of the rule.
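To make the alpha/beta mechanics concrete, here is a toy Java sketch of the beta-node JOIN step on shared variable names. It models tuples as variable-to-value maps and is only an illustration of the idea, not how a production engine such as JBoss Rules lays out its memories:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a beta node joining its left and right memories on the
// variable names they have in common (tuples are variable->value maps).
public class BetaNode {

    public List<Map<String, String>> join(List<Map<String, String>> leftMemory,
                                          List<Map<String, String>> rightMemory) {
        List<Map<String, String>> betaMemory = new ArrayList<Map<String, String>>();
        for (Map<String, String> left : leftMemory) {
            for (Map<String, String> right : rightMemory) {
                if (consistent(left, right)) {
                    Map<String, String> joined = new HashMap<String, String>(left);
                    joined.putAll(right); // merge columns, e.g. X, Y and Z
                    betaMemory.add(joined);
                }
            }
        }
        return betaMemory;
    }

    // Two tuples are join-compatible if every shared variable has the same value.
    private boolean consistent(Map<String, String> left, Map<String, String> right) {
        for (Map.Entry<String, String> e : left.entrySet()) {
            String rightValue = right.get(e.getKey());
            if (rightValue != null && !rightValue.equals(e.getValue())) {
                return false;
            }
        }
        return true;
    }
}

Joining the two example memories above ({X=ann, Y=4}, {X=sam, Y=22} against {X=ann, Z=tom}, {X=ann, Z=sue}, {X=tom, Z=jane}) produces exactly the two tuples shown in the beta memory.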

New Articles/Papers

I am currently working on a white paper on BPM/BRM in SOA.

Key points knocking in my mind
=======================
1. Expose rules as web services (how to auto-generate the code)
2. Where and how can logic (business/presentation, etc.) be exposed as a web service? That is, process logic and decision logic in SOA
3. Processes and rules in an SOA
4. Challenges in a BRMS exposing logic as SOA

Few More Areas of My Research
=======================
1. Rule analytics
2. Rule execution simulation and testing
3. Rule authoring & security: how to track and verify changes to the rule conditions (the IF part); that is, something along the lines of rule fraud detection
4. Role of BRMS in claim management
5. Orchestration framework, both inter- and intra-process
6. Increasing availability (caching solutions), looking into open source solutions like Terracotta vs. paid solutions like Tangosol Coherence
7. Rule-based navigation handling, that is, flexible web page navigation handling using a rules engine
8. Rule-based UI validations
9. Sync and async parallel execution framework using java.util.concurrent
10. Rule processing and BPM design patterns

Model Driven Architecture

Model Driven Architecture (MDA)

Introduction

Model Driven Engineering refers to the systematic use of models as primary engineering artifacts throughout the engineering lifecycle. The set of tools needed to apply such an approach consists of graphical modelers (UML), interoperability layers (XML/XMI imports and exports) and tools that take advantage of these models using "model to text" transformations. MDA is standardized by the OMG.

History of Software Development

The history of software development has been one of steadily increasing the level of abstraction available to the software practitioner.


Essential to this change, of course, was the UML. It provided a single set of common concepts that became widely used across the software industry, which soon ended the lengthy debate over which set of concepts to use when designing software systems. Organizations are well served by creating models of the problem domain and solution domain, and by coordinating these models throughout the life of a software project.

MDA Theory

Following a long history of the use of models to represent key ideas in both problem and solution domains, MDA provides a conceptual framework for using models and applying transformations between them as part of a controlled, efficient software development process. Here are the basic assumptions and parameters governing MDA usage today:

  • Models help people understand and communicate complex ideas.
  • Many different kinds of elements can be modeled, depending on the context. These offer different views of the world that must ultimately be reconciled.
  • There is a commonality at all levels of these models – in both the problems being analyzed and the proposed solutions.
  • Applying the ideas of different kinds of models and transforming them between representations provides a well-defined style of development, enabling the identification and reuse of common approaches.
  • In what it calls "model driven architecture", the OMG has provided a conceptual design representation framework and a set of standards to express models, model relationships, and model-to-model transformations.
  • Tools and technologies can help to realize this approach, and make it practical and efficient to apply.

Four principles of MDA:

· Models expressed in a well-defined notation are a cornerstone to understanding systems for enterprise-scale solutions.

· The building of systems can be organized around a set of models by imposing a series of transformations between models, organized into an architectural framework of layers and transformations.

· A formal underpinning for describing models in a set of metamodels facilitates meaningful integration and transformation among models, and is the basis for automation through tools.

· Acceptance and broad adoption of this model-based approach requires industry standards to provide openness to consumers, and foster competition among vendors

Based on the above principles, models can be classified into four types:

1. Computation Independent Model (CIM)

2. Platform Independent Model (PIM)

3. Platform Specific Model (PSM) described by a Platform Model (PM)

4. An Implementation Specific Model (ISM)

Models, modeling and MDA

Models and model driven software development are at the heart of the MDA approach. In the software engineering world, modeling has a rich tradition, dating back to the earliest days of programming. The most recent innovations have focused on notations and tools that allow users to express system perspectives of value to software architects and developers in ways that are readily mapped into the programming language code that can be compiled for a particular operating system platform. The current state of this practice employs the Unified Modeling Language (UML) as the primary modeling notation. The UML allows development teams to capture a variety of important characteristics of a system in corresponding models. Transformations among these models are primarily manual. UML modeling tools typically support requirements traceability and dependency relationships among modeling elements, with supporting documents and complementary consulting offerings providing best practice guidance on how to maintain synchronized models as part of a large-scale development effort.

MDA tools provide the ability to keep models synchronized with code.

MDA tools provide the ability to automate the initial transformation and also help to keep the design and implementation models in step as they evolve. Typically, the tools generate code stubs from the design models that the user then refines further. Changes to the code must at some point be reconciled with the original model (hence the term "round-trip engineering", or RTE). To achieve this, you need a way to recognize generated versus user-defined code; placing markers in the code is one approach.
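As an illustration of the marker idea, here is a hypothetical generated stub. The @generated / @generated NOT convention shown follows the style used by EMF-like generators; other tools use protected regions or separate source folders, so treat the exact markers as an assumption:

// Hypothetical generated stub illustrating code markers: the generator
// re-creates everything tagged @generated on each model-to-code run and
// preserves regions the developer has claimed by changing the tag.
public class CustomerService {

    /** @generated (overwritten on every transformation) */
    public String findCustomerName(String id) {
        // generated stub, to be refined by the developer
        return null;
    }

    /** @generated NOT (hand-written; preserved by round-trip engineering) */
    public boolean isPreferredCustomer(String id) {
        return findCustomerName(id) != null;
    }
}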

Open Source MDA tools

1. AndroMDA

AndroMDA (pronounced "Andromeda") is an extensible generator framework that adheres to the Model Driven Architecture (MDA) paradigm. Models from UML tools are transformed into deployable components for your favorite platform (J2EE, Spring, .NET). Unlike other MDA toolkits, AndroMDA comes with a host of ready-made cartridges that target today's development toolkits like Axis, jBPM, Struts, JSF, Spring and Hibernate. AndroMDA also contains a toolkit for building your own cartridges or customizing existing ones: the meta cartridge. Using it, an enterprise can build a custom code generator using its favorite UML tool.

Note: the cartridge is the important concept here. There could, for example, be a Becon cartridge or a Tuscany cartridge; you would then model once and generate code compliant with the Becon/Tuscany framework.

2. Acceleo

This is the new kid on the block.

Wednesday, November 7, 2007

Rule Engine Agnostic Business Rule Management System (BRMS)

Rule Engine Agnostic Business Rule Management System (BRMS)

What is BRMS?

A BRMS automates business policies in custom and composite business applications.

Typical Rules & Software Management Cycle




Role of BRMS in Software Development Cycle

Advantages:
  1. Ease of use
  2. Efficient and scalable
  3. Improved productivity and maintainability
  4. Centralized knowledge repository
  5. Customize products and services
  6. Customize/schedule deployment
  7. Create a rule repository and manage rule metadata
  8. Expose rules as web services, EJBs, etc.
  9. Ability to simulate testing, i.e. run scenarios
  10. Rule versioning management
All the stakeholders of rules, mainly
  1. Architect
  2. Business Analyst
  3. Developer
  4. Policy Manager
  5. System Manager
can then all work in a unified manner.

Paradigm shift - BRMS Agnostic of Rule Engine

There is a twist in BRMS products after the arrival of the JSR 94 specification: let's give the end user the ability to swap and change the underlying rule engine with ease. But what the end user wants least is a complete change to the existing production rules. What I am envisioning here is a BRMS product which is agnostic of the rule engine, with the whole BRMS written against the JSR 94 specification so that changing the rule engine becomes quite easy.
Such a development paradigm has emerged, and it has demonstrated the ability to radically improve the efficiencies of creating, modifying, extending, and repurposing solutions for enterprise application integration, process automation, and trading partner interchanges. The Service Oriented Architecture (SOA) paradigm has redefined the concept of an application. No longer an opaque, procedural implementation mechanism, an application is an orchestrated sequence of messaging, routing, processing, and transformation events where both message content and the functional components that operate on the message are exposed using XML technologies. XML-based development and deployment platforms that facilitate the SOA paradigm are highly compelling because they alleviate significant development and life cycle overhead and enable the extension and reuse of components and entire applications to an unprecedented extent.
Applications that rely on sophisticated, constantly evolving business rules stand to benefit substantially from this paradigm. It has long been recognized that isolating business rules entirely from procedural code, or any process implementation mechanism would dramatically improve a business’ ability to manage and adapt their business processes in response to new requirements or business conditions. Consequently, isolating, exposing, and publishing business rule sets as services that can be accessed by any application or process provides one of the most compelling value propositions for the Services Oriented Architecture paradigm.
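As a sketch of what "written against JSR 94" means in practice, here is a minimal stateless rule execution that touches only the javax.rules API. The provider URI, provider implementation class, rule execution set binding URI and the Claim fact class are all assumptions for illustration; registering the concrete provider class is the only vendor-specific line, which is exactly the point:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;

import javax.rules.RuleRuntime;
import javax.rules.RuleServiceProvider;
import javax.rules.RuleServiceProviderManager;
import javax.rules.StatelessRuleSession;

public class Jsr94Client {

    public static void main(String[] args) throws Exception {
        // Vendor-specific bootstrap: the provider URI and implementation class
        // are hypothetical placeholders; each engine documents its own.
        String providerUri = "http://example.com/rule-provider"; // assumption
        RuleServiceProviderManager.registerRuleServiceProvider(
                providerUri, Class.forName("com.example.MyRuleServiceProvider"));

        RuleServiceProvider provider =
                RuleServiceProviderManager.getRuleServiceProvider(providerUri);
        RuleRuntime runtime = provider.getRuleRuntime();

        // The rule execution set must have been registered under this URI
        // beforehand (via the RuleAdministrator API); again an assumption.
        StatelessRuleSession session = (StatelessRuleSession) runtime.createRuleSession(
                "uri://rules/claim-validation", new HashMap(), RuleRuntime.STATELESS_SESSION_TYPE);

        List facts = Arrays.asList(new Object[] { new Claim("C-42", 1200.0) });
        List results = session.executeRules(facts); // engine-agnostic execution
        session.release();

        System.out.println("Facts after rule execution: " + results);
    }
}

class Claim { // hypothetical fact class
    private final String id;
    private final double amount;
    Claim(String id, double amount) { this.id = id; this.amount = amount; }
    public String toString() { return "Claim[" + id + ", " + amount + "]"; }
}

Swapping JBoss Rules for another JSR 94 compliant engine then comes down to changing the registered provider class and the registration of the rule execution set, while this client code stays untouched.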

Rule Processing and Rule flow language

Introduction

This paper provides a foundational description of rule flow processing technologies, along with the architectural patterns and practices involved in applying rule flow processing within a business rule management system. It suggests the need for a rule flow execution language which is agnostic of the underlying rule engine.

What is a Business Rule?

The word ‘rule’ is a surprisingly general term which is difficult to define in a formal, succinct and precise manner.

Tony Morgan Definition

"A compact statement about an aspect of a business [that] can be expressed in terms that can be directly related to the business, using simple, unambiguous language that's accessible to all interested parties: business owner, business analyst, technical architect, and so on".

The formula margin = revenue – direct.costs is not a rule. It is a statement of identity. It could even be regarded as a procedure for computing margin, as can many algebraic equations of this sort.

In the domain of Business Process Management (BPM), rules are critical and of the utmost importance. These are the rules that constrain and govern commercial organizations and the processes they enact.

In general, rules are of the following types:

  1. Integrity constraints
  2. Derivation rules
  3. Transformation rules
  4. Reaction rules
  5. Production rules
  6. Deontic rule

Rule processing for BPM

Business flows are orchestrated by BPM, where rules act as their brain, i.e. the decision logic. An emphasis on rule processing can lead to the mistaken impression that rules represent a species of logic separate from other programmatic approaches. However, rule processing is just another form of computation, one that emphasizes the evaluation of conditions in order to control the execution of actions. We could characterize this as 'rules-oriented' programming.

The most common category of rule processing patterns in BPM is implemented within individual executable process definitions such as orchestrations. Rule processing can be classified as intra-process (that is, a rule set can potentially orchestrate a single process) and inter-process (that is, a rule set can potentially orchestrate different processes and itself act as the controller).

Intra-process orchestration via rules

Intra-process patterns apply rules within an individual executable process definition, such as an orchestration.

Rule processing aids intra-process orchestration as follows:

  1. Rule-based derivation, that is, deriving and inferring new facts from existing facts; used in data and message generation and in data and message transformation.
  2. Rule-based assertion, that is, enforcing validity and integrity constraints within a process; used in data and message validation.
  3. Rule-driven process flow, that is, controlling the sequence of process activities; used at process decision points, in conditional process logic, and in processes with branches and loops.

Inter-process orchestration via rules

Inter-process patterns apply rules to the control of interactions and message exchange between multiple orchestrations or workflows.

Rule processing aids inter-process orchestration as follows:

  1. Rule-driven process routing, that is, helping with process invocation.
  2. Policy-driven process versioning, that is, helping with the dynamic composition of discrete services and processes based on version.
  3. Policy-driven process composition, that is, helping with the dynamic composition of discrete services and processes.

Rule flow language

Rule processing can aid BPM in inter-process or intra-process orchestration, but several questions pop into mind:

  1. What about flow within a rule set?
  2. What about orchestration of different rules within a rule set?
  3. How do we orchestrate the different artifacts that capture first-order logic, like decision tables, decision trees, score cards and of course rule sets?
  4. What about conditionally calling rules? That is, how do we aspectize the execution of rules?
  5. How do we execute rules or their artifacts in parallel?

BPM fails to answer these questions. The answer lies in creating a new language, along the lines of BPEL, which can cater to the needs above and answer these questions.

I have been researching the above points and have come up with a rule flow execution language which is agnostic of the underlying rule engine. The rule flow execution language exploits and uses all the features of the rule engine, but through the JSR 94 specification.

There is some development in this direction in JBoss Rules, refer to blog http://markproctor.blogspot.com/search/label/Rule%20Flow

But this fails to address the questions/concerns mentioned above.

The rule flow execution language has been created using an XSD; that is, the grammar of the language is defined in it. Here are a few salient features of the rule flow execution language I have conceptualized:

· Each node, or lex, is self-executable.

· Creation of dependent resources is done on the fly.

· Execution is optimized by using parallel execution, which makes rule execution faster.

· Construction of the runtime lex components of the rule execution language is based on an AST (Abstract Syntax Tree), with some deviations. That is, since a rule execution flow is nothing but the execution of rules/rule sets/decision tables driven by a flow chart, a linear linked list is created which contains the complete execution plan of the flow, with each node a self-executable component. Each node is also aware of its evaluating condition (a sketch of this node structure follows this list).

· The flow file and all the Drools resources in it (i.e. rules and decision tables) are introspected to create an XML document containing all the facts used in them (which gets passed to the working memory during execution).

· Of course, the rule flow language is XML based, which can traditionally make an application slow. This problem is taken care of by the ability to produce deployable objects rather than parsing the XML again and again for execution. That is, after the first execution the rule flow is expanded into a sequence of self-executable lex objects.
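Here is a minimal sketch of what such a chain of self-executable nodes could look like in Java. The class names (FlowNode, RuleSetNode) and the use of a JSR 94 stateless session inside a node are my own illustration of the concept, not the actual implementation:

import java.util.List;
import javax.rules.StatelessRuleSession;

// Hypothetical "lex" node: each node knows its guard condition, how to
// execute itself (here via a JSR 94 stateless session), and its successor.
abstract class FlowNode {

    private FlowNode next; // linear linked-list execution plan

    void setNext(FlowNode next) { this.next = next; }

    abstract boolean conditionHolds(List facts); // the node's evaluating condition
    abstract void executeSelf(List facts);       // self-executable behaviour

    // Walk the execution plan: run every node whose condition holds.
    final void run(List facts) {
        if (conditionHolds(facts)) {
            executeSelf(facts);
        }
        if (next != null) {
            next.run(facts);
        }
    }
}

// One concrete node type: executes a rule set through the engine-agnostic JSR 94 API.
class RuleSetNode extends FlowNode {

    private final StatelessRuleSession session;

    RuleSetNode(StatelessRuleSession session) { this.session = session; }

    boolean conditionHolds(List facts) { return true; } // always eligible in this sketch

    void executeSelf(List facts) {
        try {
            session.executeRules(facts); // the facts become the working memory
        } catch (Exception e) {
            throw new RuntimeException("rule set execution failed", e);
        }
    }
}

The parser's job is then just to turn the XSD-based flow document into one such linked execution plan the first time it is read, after which the plan can be executed repeatedly without touching the XML again.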