Friday, November 05, 2010

LIXI forum 2010

Over the past two days, I have been lucky enough to attend the 2010 LIXI forum.

LIXI is a set of standards relating to how financial institutions can deal with loan applications, valuations, and a few other items.
Typically a message is serialized as an XML version of a paper form and sent over SOAP to various workflow engines.
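
To give a feel for the shape of these messages, here is a minimal sketch of a valuation request; the element names are illustrative rather than taken from the actual LIXI schema:

    <ValuationRequest>
      <Property>
        <Address>
          <Street>1 Example St</Street>
          <Suburb>Sydney</Suburb>
          <State>NSW</State>
          <Postcode>2000</Postcode>
        </Address>
      </Property>
      <Loan>
        <Amount>450000</Amount>
        <Purpose>Purchase</Purpose>
      </Loan>
    </ValuationRequest>

It really is the paper form, restated in angle brackets.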

Previously, my involvement with LIXI had been in the valuations space, with ValEx being a big driver of LIXI usage, but with no one really talking about the standards (or asking for improvements).

So it came as quite a surprise to get to meet so much of the industry over the last two days.

One of the most interesting ideas put forward came from Christopher Joye, who proposed a national clearing house for data relating to credit:

Moving from the general to the specific, I would propose that the Commonwealth establish a central electronic ‘clearinghouse’ of all residential, personal and business credit originated in Australia. For simplicity’s sake, let’s call it the National Electronic Credit Register (NECR).

The main purpose of this data would be to give a much more accurate, real-time, granular picture of how individuals are actually faring, rather than guesses made at an aggregate level.
This is primarily to allow entities like the RBA to act quickly and accurately in managing our economy.

I personally think it is quite a valid idea, as much of my frustration with how statistics and modeling are done stems from the reliance on assumptions or aggregate data.

There are obvious obstacles, not least of which is the potential for scope creep and what that means for individuals' privacy; but as a concept, it quite impressed me.

Other presenters touched on everything from the cost of software failure to a comparison between Australia's railway gauge mess and the current fragmentation of the LIXI standards.

The big move at the moment seems to be to standardize on a mechanism to transmit business rules.
Three vendors all have the same problem of trying to communicate changing credit policy for specific products from a lender to many brokers, and there are a number of solutions on the cusp of realization.

We saw a practical demonstration of real-time transmission of changed requirements: a user in application A hits save, changing the age limit on a product, and application B becomes aware of the change within seconds.
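
Purely as a speculative sketch (every name below is invented), such a rule change might travel as a small notification payload:

    <PolicyUpdate product="ExampleHomeLoan" effective="2010-11-05T14:32:00+11:00">
      <Rule id="maximum-applicant-age">
        <Limit>70</Limit>
      </Rule>
    </PolicyUpdate>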

Whilst people did not get into the technical details behind that particular demo, a technical training session on Schematron was run.
I had initially dismissed the idea of Schematron, but after hearing Rick Jelliffe (the creator of Schematron) talk about many of the concepts that drove its design, I'm more sold on it.

If you haven't used Schematron: in short, it differs from XSD by being much more relaxed. You write specific XPath rules that assert whether targeted segments of a document are valid.
XSD, on the other hand, ends up being much more restrictive. For instance, it forces you to make choices about the sequence of nodes, and it tends to fail validation if a document contains extra content.
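
As a sketch of the difference (the rule and element names here are mine, not LIXI's), a Schematron pattern asserts only what it cares about and says nothing about ordering or neighbouring elements:

    <schema xmlns="http://purl.oclc.org/dsdl/schematron">
      <pattern>
        <rule context="Loan">
          <!-- Each assert is a plain XPath test with a human-readable failure message -->
          <assert test="Amount &gt; 0">A loan must have a positive Amount.</assert>
          <assert test="Purpose">A loan must state its Purpose.</assert>
        </rule>
      </pattern>
    </schema>

A document containing a Loan element passes or fails on those two assertions alone; anything else sitting alongside them is simply ignored.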

The extra-content point is a killer for the LIXI ecosystem, as everyone seems to have a practice of grabbing a copy of the XSD at a given point in time and, if need be, making minor modifications for their own patch of it.
The schema isn't referenced live off the web, because no one wants to expose a core banking system to the internet.
As an example, I introduced a simple additional Time element to ValEx's schema and the LIXI standards. It took months of testing with multiple organisations just to take that tiny step forward, and in the end a partner application still ended up ceasing to communicate with us.
The most painful part of it all? The only bits that "broke" in every single system were the XSD validation components. Every other bit of code simply ignored the additional XML.
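
The change itself was about as small as they come. Roughly speaking (the surrounding structure here is illustrative), it amounted to this:

    <!-- Before -->
    <Inspection>
      <Date>2010-11-05</Date>
    </Inspection>

    <!-- After: one extra element, which code ignored but XSD validation rejected -->
    <Inspection>
      <Date>2010-11-05</Date>
      <Time>14:30:00</Time>
    </Inspection>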

With such a minor change requiring so much effort, you can understand how we've ended up in a fragmented state, and why XSD has been harmful.

I would be very keen to see the entire basis of validation (both at the syntax/expected-data level and the business-rules level) shift to Schematron, or something else entirely; much of the RDF world does not have the XSD problem, because no one there uses XSD to validate!
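
Schematron even has a construct for keeping those two levels separate: phases, which group patterns so structural checks and business-rule checks can be run independently. A minimal sketch, with pattern names and the age rule invented for illustration:

    <schema xmlns="http://purl.oclc.org/dsdl/schematron">
      <phase id="structure">
        <active pattern="basic-shape"/>
      </phase>
      <phase id="policy">
        <active pattern="credit-rules"/>
      </phase>

      <pattern id="basic-shape">
        <rule context="Loan">
          <assert test="Amount">A loan carries an Amount.</assert>
        </rule>
      </pattern>

      <pattern id="credit-rules">
        <rule context="Applicant">
          <assert test="number(Age) &lt;= 70">Applicant age is within the product limit.</assert>
        </rule>
      </pattern>
    </schema>

A validator can then be pointed at one phase or the other, so syntax checks and policy checks need not travel together.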


Overall, I made quite a few contacts in all sorts of places, touching on topics from the use of unique identifiers (à la Linked Data, even if we're not an RDF-based standard) to improving physical document handling with our existing partners.

I look forward to the coming year, and the opportunities it may create.

1 comment:

Damo said...

Interesting, Dan. I found an article by Christopher Joye about the NECR: http://christopherjoye.blogspot.com.au/2009/11/if-i-ran-apra.html