Do Not Let the Perfect Be the Enemy of the Good!

This post's title is cribbed from Celent's latest report, "Web Services and SOA in Insurance 2007". The report is primarily the result of surveys Celent conducted with a number of insurers, ranging from midsize to large carriers (premiums > $1 billion). The report is mildly interesting for someone in the midst of deploying SOA techniques and technologies in the insurance industry (namely me ;) but clearly reiterates three key points:

  • The promise of a fully realized, SOA-enabled infrastructure is alluring, and there is an incredible amount of hype around the potential.
  • Building out a fully realized, SOA-enabled infrastructure is hard and will take time.
  • Even small steps toward SOA can pay dividends, so get started now!

I think the quote below summarizes this best:

Celent - Web Services and SOA in Insurance 2007
While the model may not be very sexy, Celent believes that for the next two years Web services/SOA in insurance will remain essentially a "Plumbing" issue - a set of technologies and practices that make it more efficient to share data and transactional capabilities between systems, both internal and external, in a reusable way that allows the value of systems investments to be leveraged repeatedly in subsequent initiatives. While this may not get the CEO's attention, it is absolutely key to the CIO's mission to do more with less.

So get started... Each company is unique and requires its own approach, but don't get caught up in analysis paralysis. Do not wait for the perfect project or tool. Starting allows you to "do", and doing allows you to learn from the experience. Learning allows you to adapt and "do" again, providing the opportunity to improve continually as you go. "Do not let the perfect be the enemy of the good!"

Mega Services: Getting There?

It is always interesting to hear Steve Ballmer spin Microsoft's latest marketing, product, technical, whatever announcement. As if on cue, eWeek.com published an article titled

"Microsoft CEO Touts Company 'Bet' on Web Services"

The article quotes Ballmer characterizing the move as a "big bold bet". I just had to laugh. How could anyone view this as a "bet" in the current technology environment? About the only part of this decision that could be considered a risk is whether to use the W3C "formal" Web Services specifications or a leaner REST-based architecture. Granted, the formal Web Services specifications are starting to take on that hairball consistency, but there are still valid reasons to go either way.

The article goes on to state...
Microsoft believes Web services will work in tandem with PC-installed software, a vision that differs from that of "software as a service" advocates, such as Salesforce.com and Google, who expect services delivered over the Web to replace traditional software... "We believe this shift is the most important technological transformation during the next decade," Ballmer said.

Aahhh... now this is more like it. I began to have a flashback. [the screen goes fuzzy and "Dreamweaver" begins to play in the background...] In my days at Viant Corp. during the Web 1.0 heyday (I don't remember if it was '98 or '99), I had the privilege of being invited to Seattle for a special Microsoft conference. Before I arrived, I had no information other than that it was a select audience of internet-savvy technology partners.

Microsoft began by discussing what they felt was a challenge for them: they were encountering both resistance to, and a lack of interest in, using their products any deeper in corporate America than the desktop. The biggest issue seemed to be the integration challenge. Unix seemed to be the lingua franca OS for integration projects, and Sun was selling servers by the boatload.

Microsoft proposed XML messaging as the lingua franca for their platform and said they were seriously looking into embedding its use throughout all their products. From applications like Excel and Outlook right down to the core operating system, all would expose access to features through XML messaging. I was not a Microsoft fan at the time, but the idea was exciting. Microsoft even had a cool code name for it... "Mega Services!"

Alas, although Microsoft gave birth to SOAP and .NET, the vision never fully panned out in the way described at that conference. This latest announcement is probably the closest they've come, and it is a step in the right direction, but I still hope for a day when the vision is fulfilled ;)

Business Enabled Integration...

A few years ago I worked at an incubator called 12 Entrepreneuring. It was during the heady but waning days of the internet boom. 12 was in the midst of incubating Grand Central Networks, an integration network company. I was asked to help temporarily with the conceptual design and architecture of the project, which was based in the San Francisco office, until the New York office (my eventual home base) was up and running.

My background was in systems integration at the enterprise level. I felt that the value of the network would be to provide a rich set of pre-integrated business partners and industry-aware components. My point was to align with the business rather than be purely technology focused. Founder Halsey Minor pushed to keep the service simple; his catchphrase was "a better FTP". Alas, Grand Central Networks has gone the way of many a startup.

Alignment with business is where the industry is finally heading... In an effort to differentiate themselves in the inevitable arms-race-like spiral of the integration market, webMethods has sought and received SWIFT certification. Banks looking to upgrade their SWIFT integration technology now have a reason to seriously consider webMethods. iWay Software, as "the adapter company", has been at the forefront of supporting industry protocols such as SWIFT, FIX, and HIPAA, and is pushing into insurance with ACORD. Yes, vendors will continue to add bells and whistles to their products, but I expect them to become more tightly aligned with various industries and their standards. A tool that is already aware of your industry's standard processes and vocabulary is incredibly valuable.

Security Onus Is on Developers

I actually read this eWeek article in its print form on my way into work. (A one-hour train ride into New York City every day can do wonders for your reading backlog...) I liked some of the points so much that I tracked down the online version so I could blog it here.

Essentially, poor code quality can have as much of an impact on security as the hackers themselves.
"In 2004, Internet Explorer had a publicly revealed vulnerability that had not been patched on 98 percent of the days [of that year]. Firefox was vulnerable on 7 percent of the days [ of that year]. That tells you that what the application developers are doing can make a big difference."

David Wagner
I'm a strong proponent of code inspections, especially automation of the more mundane but insanely detailed checking. But it has always amazed me how hard it is to get management's approval to purchase inspection tools. Approval usually comes quickly, though, once one of those small mistakes has caused a big problem. Bill Pugh points out the other side of the coin: everyone makes mistakes, even the smartest of developers.
"A lot of people think that errors and defects and stupid mistakes are things that the "lesser programmers" make. One of the things that I've found is that tools find insanely embarrassing bugs, written in production code, by some of the very best programmers I know."

Bill Pugh
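
To illustrate, here is a contrived Java example of the kind of mundane mistakes automated inspection tools routinely flag; the class and method names are hypothetical, not drawn from any real codebase:

```java
// Contrived examples of the "insanely embarrassing" bugs that automated
// inspection tools routinely flag. Both methods compile cleanly and can
// look plausible in a hurried code review.
public class CustomerValidator {

    // Bug 1: compares object references, not string contents, so equal
    // IDs held in distinct String objects are reported as different.
    public boolean isSameCustomer(String id1, String id2) {
        return id1 == id2; // should be id1.equals(id2)
    }

    // Bug 2: the null check comes after the dereference, so the guard
    // can never prevent the NullPointerException it was written to avoid.
    public int idLength(String id) {
        int length = id.length(); // throws if id is null
        if (id == null) {
            return 0;
        }
        return length;
    }
}
```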

Business Rules: A Report from the Field

I'll preface this entry by stating that I'm not a "Business Rules" expert by any stretch. I write this as one who has experienced an implementation from the trenches and who is about to dive into the deep end once again at my latest account.

I arrived on a project that needed an injection of technical leadership. The project was basically a complex settlement system for a large energy market. There were hundreds of steps required to create a bill for a customer, with numerous regulatory considerations. The complexity of each step could range from a simple atomic calculation to an extremely complex set of "business rules" with 10 to 15 pages of single-spaced specifications. The client determined that, in general, a business rules engine should be used to construct these steps. The goal was to allow the business users to maintain the rules for the new system. I am not going to explain business rules basics; I assume you've read the basic marketing material available. This article contains the implementation experience that I pass on to you.

Business People Should Write Business Rules?

The ability for a business person to directly control the business rules is the Holy Grail of business rules engine implementations. The thinking here is that a visual tool can allow business users to simply "draw" a business flow, and that by bypassing the business analyst, the developer, and the typical software development life cycle (SDLC), rules can be constructed more quickly and managed more easily. It is true that Business Rule Management Systems (BRMS) can accelerate and streamline the implementation of business rules, but buyers should understand some of the realities of adopting the technology and adjust their expectations accordingly.

It has been my experience that even the most advanced rules engine with the slickest interface requires training and, in some respects, structured thinking about how to organize and construct the rules. There are two factors at work here. First, the visual rule environments provided by business rules engines are not Visio: structural constraints are enforced during the definition of a valid visual rule flow. How long before a business user becomes frustrated because they just can't draw the "picture" they can clearly see in their mind's eye?

The second issue with business users and direct manipulation of rules is rule decomposition. Rules engines that implement the Rete algorithm allow large numbers of simple rule statements to be organized automatically into a network of nodes that enables efficient execution of highly interdependent rules. Business users typically do not think in terms of rules; they think in terms of business policies.

Merriam-Webster defines policy as
"a high-level overall plan embracing the general goals and acceptable procedures"


So, in essence, policies are much broader than individual rules. Someone must decompose the various business policies into discrete rules and structure them appropriately in order to represent them in a form the engine can execute. Although rules can be structured so that they can be easily read and understood by business users, rules engines are not natural language processors. They require strict syntax in order to operate. An example is the Yasu QuickRules engine: its user interface allows rules to be entered either visually or as text that adheres to a VB-like syntax. Rules that do not conform to the syntax cannot be saved and therefore can never be executed by the engine. The product gives relatively vague error messages, so good knowledge of the syntax and the operation of the engine is typically required.
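
To make that decomposition concrete, here is a minimal sketch in plain Java (deliberately not any vendor's rule language) of one broad policy broken into discrete, single-condition rules; the policy and all class and field names are hypothetical:

```java
import java.util.List;
import java.util.function.Predicate;

// A minimal sketch of decomposing one broad policy -- "preferred accounts
// in good standing get expedited settlement" -- into discrete rules that
// each test exactly one condition.
public class PolicyDecomposition {

    record Account(String tier, double balance, int daysPastDue) {}

    // Discrete rules: small, independently testable conditions.
    static final Predicate<Account> IS_PREFERRED     = a -> "PREFERRED".equals(a.tier());
    static final Predicate<Account> IN_GOOD_STANDING = a -> a.daysPastDue() == 0;
    static final Predicate<Account> HAS_MIN_BALANCE  = a -> a.balance() >= 10_000;

    // The policy is the conjunction of the discrete rules. A Rete-based
    // engine would organize such conditions into a shared network of nodes
    // automatically, reusing them across every policy that references them.
    static boolean expeditedSettlement(Account a) {
        return List.of(IS_PREFERRED, IN_GOOD_STANDING, HAS_MIN_BALANCE)
                   .stream().allMatch(p -> p.test(a));
    }

    public static void main(String[] args) {
        System.out.println(expeditedSettlement(
                new Account("PREFERRED", 25_000, 0))); // true
    }
}
```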

So if the business person is not the most appropriate person to implement business rules, who is? We might naturally gravitate toward the programmer, but my experience is that a typical programmer does not easily become an effective business rules designer. There is a natural tendency for a programmer to try to use the engine's rule language like another general-purpose programming language. This usually ends in frustration, because general-purpose programming is not the goal of a rules engine, and the rule language can seem a poor substitute for languages targeted at the general-purpose space, like Java or C#.

Business rules in general should be recognized as a separate discipline. Special roles such as rules analyst, designer, and administrator should be created. Talent for these roles can be sourced from disciplines like business analysis, software development, and operations, but there must be an appropriate investment in training, mentoring, and time to allow existing employees to grow into these roles. Think back to the investment required to move functional programmers to a more object-oriented methodology; it took significant investment and effort to make a successful transition. You can expect your results to match your investment.

Business Rules Require Facts (Data)

Business rules operate on facts. Facts are essentially the data provided to the business rules for execution, along with data derived during their operation. Facts are compiled, or loaded, by the calling application and passed into the rules engine for evaluation. The fact model is the entire set of facts that has been exposed for use by the rules engine. This model is typically constructed incrementally over time as facts are added to support new or modified business rules. The complete fact model can be developed in advance of implementing any rules, but in practice this can be very time consuming.
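
As an illustration, here is a minimal sketch of what a small fact model for a settlement step might look like; all of the types and field names are hypothetical:

```java
import java.util.List;

// A minimal sketch of a fact model for a billing step. Input facts are
// plain objects assembled by the calling application; the engine produces
// derived facts (here, SettlementCharge) during rule execution.
public class FactModelExample {

    // Input facts, loaded by the calling application.
    record MeterReading(String accountId, double kilowattHours) {}
    record Tariff(String accountId, double ratePerKwh) {}

    // A derived fact, produced by the rules rather than loaded up front.
    record SettlementCharge(String accountId, double amount) {}

    public static void main(String[] args) {
        // The fact model is the full set of types the engine may see;
        // this list is what would be passed in for evaluation (see the
        // JSR-94 sketch further below).
        List<Object> facts = List.of(
                new MeterReading("ACCT-42", 1_250.0),
                new Tariff("ACCT-42", 0.11));
        System.out.println(facts);
    }
}
```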

There are two issues that may arise in the management and transfer of facts (data) to the business rules engine. The first is version control. New facts required by rule changes result in associated changes to the calling application, which in turn means the modified application must be deployed along with the new business rules, something we would hope to minimize as part of a business rules strategy. So in this case the SDLC overhead cannot be avoided, and the synchronized deployment of business rules and application software must be managed appropriately.

The second issue arises from the overhead of moving data in and out of the rules engine. Depending on the amount of data and the location of the engine (is it remote?), this can be a relatively costly endeavor. A number of engines can execute within the same memory space as the calling application, which reduces the overhead of moving data between the two. But again, depending on the amount of data required, it may be questionable whether data not explicitly required for rule execution should be loaded at all. Some engines have the ability to read and write application databases directly, which can simplify the data movement process but mixes I/O overhead with rule execution. This feature also removes the ability to use optimization techniques like write-through shared-memory caching in applications.

Business Rules Portability?

With industry support for standards such as the Java Rule Engine API, or Java Specification Request (JSR) 94, applications can be developed that plug in different rules engines. JSR-94 standardizes the way rules engines are called, but it does not standardize how rules are written. We can plug a new rules engine into an application quickly, but moving the rules between engines can be problematic. There are initiatives out there, like RuleML, that attempt to facilitate interoperability among vendors, but none has reached critical mass. Do not count on JSR-94 to significantly ease the replacement of a rules engine.
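
As a sketch of what the standardized calling sequence looks like (and where its limits are), here is a hedged JSR-94 example; the provider class name, registration URI, and rule file are hypothetical, and the contents of the rule file remain entirely vendor-specific:

```java
import java.io.InputStream;
import java.util.HashMap;
import java.util.List;
import javax.rules.RuleRuntime;
import javax.rules.RuleServiceProvider;
import javax.rules.RuleServiceProviderManager;
import javax.rules.StatelessRuleSession;
import javax.rules.admin.RuleAdministrator;
import javax.rules.admin.RuleExecutionSet;

// The vendor-neutral JSR-94 calling sequence. Only the javax.rules API is
// standardized; the rule file passed in below is written in the vendor's
// own language, which is why swapping engines still means porting rules.
public class Jsr94Example {

    public static List<?> runRules(InputStream ruleFile, List<?> facts) throws Exception {
        // Loading the vendor's provider class registers it with the manager
        // (the class name here is a hypothetical placeholder).
        Class.forName("com.example.rules.RuleServiceProviderImpl");
        RuleServiceProvider provider =
                RuleServiceProviderManager.getRuleServiceProvider("com.example.rules");

        // Administration API: create and register a rule execution set.
        RuleAdministrator admin = provider.getRuleAdministrator();
        RuleExecutionSet ruleSet = admin
                .getLocalRuleExecutionSetProvider(null)
                .createRuleExecutionSet(ruleFile, null);
        admin.registerRuleExecutionSet("billing/settlement", ruleSet, null);

        // Runtime API: execute the registered rules against a list of facts.
        RuleRuntime runtime = provider.getRuleRuntime();
        StatelessRuleSession session = (StatelessRuleSession) runtime
                .createRuleSession("billing/settlement", new HashMap<>(),
                        RuleRuntime.STATELESS_SESSION_TYPE);
        try {
            return session.executeRules(facts);
        } finally {
            session.release();
        }
    }
}
```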

Testability

When constructing a suite of business rules, it is important to be able to verify that we achieve the desired results. Testing all rules in isolation will not necessarily give a definitive answer as to expected outcomes; dependencies between rules may have unexpected effects. Yes, we can do "what if" scenario testing, but the real work is in defining a comprehensive set of test cases that exercise the dark corners of the rule network. This was the most difficult part of our project: generating the volume of scenarios and associated data needed to ensure coverage across all the business rules.
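
As a sketch of what scenario-level testing can look like, here is a JUnit 5 example that reuses the hypothetical PolicyDecomposition sketch from earlier; each test runs one complete, hand-computed scenario end to end rather than asserting on a single rule in isolation:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Scenario-level tests: each case feeds one coherent set of facts through
// the whole rule set and asserts the end-to-end outcome, so interactions
// between rules are exercised, not just each rule in isolation.
class SettlementRulesScenarioTest {

    @Test
    void preferredAccountInGoodStandingIsExpedited() {
        var account = new PolicyDecomposition.Account("PREFERRED", 25_000, 0);
        assertTrue(PolicyDecomposition.expeditedSettlement(account));
    }

    @Test
    void oneDayPastDueFlipsTheWholePolicy() {
        // A "dark corner": every other condition passes, but the interaction
        // with the good-standing rule flips the overall outcome.
        var account = new PolicyDecomposition.Account("PREFERRED", 25_000, 1);
        assertFalse(PolicyDecomposition.expeditedSettlement(account));
    }
}
```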