Category Archives: Agile & Architecture

Agile Development & Software Architecture

Review: Next Generation SOA

A Concise Introduction to Service Technology & Service-Orientation, By Thomas Erl and others

Dry, terse text which misses its mark

This book sets out to provide a concise overview of the current state of, and best practices for, Service Oriented Architecture. While it may achieve that for some managerial readers, it is simultaneously too general for those with more background, and may be too terse for those with less technical understanding.

The authors and editors have clearly set themselves the admirable aim of producing a short and concise overview of the field. Unfortunately in the quest for brevity they have ended up with a terse, dry and dense writing style which is very difficult to read. At times it feels almost like a game of "buzzword bingo". I frequently had to re-read sentences several times to understand the authors’ intended relationships between the elements, and I’m a very experienced integration architect.

At the same time, for a book on architecture there are very few explanatory diagrams, wordy descriptions being used instead. To add insult to injury, a few low-value diagrams, such as one depicting the cycle of interaction between business and IT change drivers, are used repeatedly when once would be enough.

The first chapter provides an overview of service orientation and its key principles, characteristics, goals and organisational implications. This is followed by a chapter on service definition and composition. Ironically this part of the book is quite repetitive, but manages to omit some key concepts. There’s no real concrete explanation of what a service is or does – maybe that’s taken as read, but a formal definition and some examples would go a long way. Likewise there’s nothing at this point on basic concepts such as service contracts and self-description, synchronous vs asynchronous operation or security. The second chapter goes into some detail on the idea of service composition but only really deals with the ideal green-field case where functionality can be developed from scratch, aligned exactly to business functions.

The following chapter on the SOA manifesto is better, but again doesn’t recognise the realities of real enterprise portfolios, with legacy systems, package solutions and external elements which must be maintained and exploited, and non-functional priorities which must be met.

Chapter 5 deals with service-related technologies and their potential interactions. This is good, and for me represented the core value of the book, but is crying out for some diagrams to supplement the lengthy text. There are good notes on service definition under Model Driven Service Design, but this key topic should really have been a major section in Chapter 3 in its own right. The statements about technical architecture are rather simplistic, with an overall position of "this is expensive and difficult, or just use the cloud" which is not necessarily right for all organisations.

The next chapter, on business models, is very prescriptive. It is also slightly misleading in some places about the role of IT in transactional services – such services are delivered by a business unit, possibly but not necessarily enabled by and carried through an IT service. It would be perfectly viable in some cases for specific services to have a manual implementation. This is well explained in the case study, but not here or in the Business Process Management section of the previous chapter.

The final chapter of the main text is a "case study" describing the wholesale transformation of a car rental company through adoption of service, agile and cloud approaches. It feels slightly contrived, especially in terms of its timeline, the preponderance of successes, and the surprising lack of resistance to CIO-led business change. However it fills a useful gap by explaining much better than the technologies chapter how the different technologies and approaches fit together and build on one another.

Appendix A is a taster for the other books in the series. Unfortunately the content is presented as small images which cannot be resized and are almost unreadable in the Kindle version. It has also been "summarized", with the result that it appears to add very little meaningful detail to what has already been said.

Appendix B is a useful expansion of the main text regarding organisational preparation, maturity levels and governance for SOA. I would personally have been tempted to merge the first two parts into the main text rather than positioning them as an appendix, where they necessarily repeat some material which has already been read.

Appendix C is another taster for one of the other books in the series, this time with an overview of cloud computing. While this is at a fairly high level, it’s a useful and well-written overview for those unfamiliar with the concepts.

Overall this is a frustrating book. There is some good material, but it misses key "reality checks" and is presented in a terse, text-heavy style which makes it harder to read than it should be.

Posted in Agile & Architecture, Reviews | Leave a comment

Efficient Fuzzy Matching at Word Level

I’ve just solved a tricky problem with what I think is quite an elegant solution, and thought it would be interesting to share it.

I’m building a system in which I have to process fault data. Sometimes this comes with a standard fault code (hallelujah!), but quite often it comes with the manufacturer’s own fault code and a description which may (or may not) be quite close to the description against one of the standard faults. If I can match the description up, I can treat the fault as standard.

The problem is that the description matching is not exact. Variations in punctuation are common, but the wording can also change so that, for example, “Evaporative emission system incorrect purge flow” in one system is “Evaporative emission control system incorrect purge flow” in another. To a human reader these are obviously the same fault, but the variation rules out simplistic exact matching.

I spent some time Googling fuzzy matching, but most of the available literature focuses on character or even bit-level matching and looks both complex and compute-intensive. However finally I found the Jaccard similarity coefficient. This is designed for establishing the “similarity” between two objects with similar lists of attributes, and I had a “lights on” moment and realised I could apply a similar algorithm, but to the set of words used in the pair of descriptions.

The algorithm to calculate the coefficient for a given pair is actually very simple:

  1. Convert Text1 to a list of words/tokens, excluding spaces and punctuation. In VB.NET the string.split() function does this very neatly and you can specify exactly what counts as punctuation or white space. For simplicity it’s a good idea to convert both strings to uppercase to eliminate capitalisation variations.
  2. Convert Text2 to a list of tokens on the same basis.
  3. For each token from Text1, see if it appears in the list of tokens from Text2. If so, increment a counter M
  4. For each token from Text2, see if it appears in the list of tokens from Text1. If so, increment M
  5. Calculate the coefficient as M / (total number of tokens from both lists)

This produces a very intuitive result: 1 if the token sets are an exact match, 0 if they are completely disjoint, and a linearly varying value between. The process does, however, ignore transpositions, so that “Fuel rail pressure low” equates to “Fuel rail low pressure”. In my context this matches what a human assessor would do.
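In VB.NET the whole calculation boils down to a few lines. Here’s a minimal sketch (the separator list is illustrative rather than exhaustive, and the function name is just for this example):

    ' Word-level similarity coefficient for a pair of descriptions (steps 1-5 above)
    Function WordMatchCoefficient(text1 As String, text2 As String) As Double
        ' Characters treated as word separators - adjust to suit your data
        Dim separators() As Char = {" "c, ","c, "."c, ";"c, ":"c, "-"c, "/"c, "("c, ")"c}
        ' Steps 1 and 2: tokenise both strings, upper-cased to ignore capitalisation
        Dim tokens1() As String = text1.ToUpperInvariant().Split(separators, StringSplitOptions.RemoveEmptyEntries)
        Dim tokens2() As String = text2.ToUpperInvariant().Split(separators, StringSplitOptions.RemoveEmptyEntries)
        Dim total As Integer = tokens1.Length + tokens2.Length
        If total = 0 Then Return 0.0
        ' Steps 3 and 4: count tokens from each list which also appear in the other
        Dim m As Integer = 0
        For Each token As String In tokens1
            If Array.IndexOf(tokens2, token) >= 0 Then m += 1
        Next
        For Each token As String In tokens2
            If Array.IndexOf(tokens1, token) >= 0 Then m += 1
        Next
        ' Step 5: normalise by the total number of tokens from both lists
        Return m / total
    End Function

For the example above, “Evaporative emission system incorrect purge flow” against “Evaporative emission control system incorrect purge flow” scores 12/13, or roughly 0.92, comfortably clear of any sensible threshold.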

Now I simply have to repeat steps 2-5 above for each standard error description, and pick the one which produces the highest coefficient. If the best value is below about 80% I treat the string as “unmatched”; otherwise I accept the match, and can quote the coefficient to give a feel for “how good” it is.
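The outer loop is equally simple. As a sketch (again the names are mine, and in practice you would also hand back the score so it can be quoted):

    ' Find the standard description which best matches a manufacturer's fault text.
    ' Returns Nothing if even the best candidate falls under the ~80% threshold.
    Function FindBestStandardFault(description As String, standardDescriptions As IEnumerable(Of String)) As String
        Dim bestMatch As String = Nothing
        Dim bestScore As Double = 0.0
        For Each candidate As String In standardDescriptions
            Dim score As Double = WordMatchCoefficient(description, candidate)
            If score > bestScore Then
                bestScore = score
                bestMatch = candidate
            End If
        Next
        If bestScore < 0.8 Then Return Nothing   ' treat as unmatched
        Return bestMatch
    End Function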

Hopefully that’s useful.

Posted in Agile & Architecture, Code & Development | 1 Comment

Caught by The Law!

Don’t get too excited. Those of you hoping to see me carted off in manacles and an orange jumpsuit will be sadly disappointed…

No, the law to which I refer is Moore’s Law, which states effectively, if you need reminding, that computing power doubles roughly every eighteen months.

Recently I’ve been doing some work to model a system in which two sub-systems collaborate by exchanging a very large number of relatively fine-grained web service calls. (I know, I wouldn’t have designed it that way…) The two partners disagree about how the system will scale, so it fell to me to do some modelling of the behaviour. I decided to back my analysis up with a practical simulation.

Working in my preferred environment (VB.Net) it didn’t take long to knock up a web service simulating the server, and a client which could load it up with either synchronous or asynchronous calls on various threading and bundling models. To make the simulation more realistic I decided that the service should wait, with the processing thread under load, for a given period before returning, to simulate the back-end processing which will occur in reality. The implementation should be simple: note the time when the service starts processing, set up the return structures and data required by my simulation, check the time, and then if necessary sit in a continuous loop until the desired total time has elapsed.

It didn’t work! I couldn’t get the system to recognise the time taken by the internal processing I had done, which threw out the logic for the loop. Effectively the system was telling me this was taking zero time. The problem turned out to be that I had assumed all processing times should be measured in ms. 5ms is our estimate of the average internal processing time. 6ms is our estimate of the round trip time for the web services. It seemed reasonable to allow a few ms for the processing in my simulation. Wrong!

It turns out that VB.Net now measures time in Ticks, which are units of 100ns, or one tenth of a microsecond. So I rewrote the timing logic to use this timing granularity, but still couldn’t quite believe the results. My internal processing was completing in approximately 1 Tick, or roughly 10,000 times faster than I expected.
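The corrected delay logic ends up looking something like this (a sketch only, with illustrative names; TimeSpan.TicksPerMillisecond is 10,000, i.e. one Tick is 100ns):

    ' Simulate back-end processing by keeping the thread busy for a fixed period
    Sub SimulateBackEndProcessing(delayMs As Double)
        Dim startTicks As Long = DateTime.UtcNow.Ticks
        Dim targetTicks As Long = CLng(delayMs * TimeSpan.TicksPerMillisecond)
        ' ... build the return structures and data required by the simulation here ...
        ' Then spin, with the processing thread under load, until the total time has elapsed
        While DateTime.UtcNow.Ticks - startTicks < targetTicks
            ' busy wait - deliberately burning CPU to mimic real server-side work
        End While
    End Sub

For intervals this short System.Diagnostics.Stopwatch would give better resolution than DateTime, but the principle is the same.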

Part of this is down to the fact that my simulation doesn’t require access to external resources, such as a database, which the real system does. But much of the difference is down to Moore’s Law. The last time I did something similar was around 10 years ago, and my current laptop must be at least 100 times faster.

The moral of the story: beware your assumptions – they may need a refresh!

Posted in Agile & Architecture, Code & Development, PCs/Laptops, Thoughts on the World | Leave a comment

Webkit, KitKat and Deadlocks!

I don’t know what provision Dante Alighieri made, but I’m hoping there’s a special corner of Hell reserved for paedophiles, mass murderers and so-called engineers from big software companies who think there might ever be a justification for breaking backwards compatibility. I suspect that over the past 10-15 years I have wasted more time and effort keeping things working after a big company has broken them without providing an adequate replacement than on any other single cause.

The latest centre of incompetence seems to be Google. Hot on the heels of my last moan on the same topic, I’ve just wasted some more effort because of a major Google c**k-up in Android 4.4.X, AKA KitKat. My new app, Stash-It!, includes a web browser based on the “Webkit” component widely used for that purpose across the Android, OSX and Linux worlds. On versions of Android up to 4.3, it works. However when I released it out into the wild I started getting complaints from users running KitKat that the browser had either frozen altogether, or was running unusably slowly.

It took a bit of effort to get a test platform running. In the end I went for a VM on my PC running the very useful Androidx86 distribution (as the Google SDK emulator is almost unusable even when it’s working), and after a bit of fiddling reproduced the problem. Sometimes web pages would load, sometimes they would just stop, with no code-level indication why.

After various fruitless attempts to fix the problem, I discovered (Google.com still has some uses) that this is a common problem. In their “wisdom” Google have replaced the browser component in KitKat with one which is a close relative of the Chrome browser, but seem to have done so without adequate testing or attention to compatibility. There are wide reports of deadlocks when applications attempt any logic during the process of loading a web page, with the application just sticking somewhere inside the web view code. That’s what was happening to me.

The fix eventually turned out to be relatively simple: Stash-It feeds back progress on the loading of a web page to the user. I have simply disabled this feedback when the app is running under KitKat, which is a slight reduction in functionality but a reasonable swap for getting the app working… However it’s cost a lot of time and aggro I could well have done without.

Can anyone arrange a plague of frogs and boils for Google, please?

Posted in Agile & Architecture, Android, Code & Development, Thoughts on the World | Leave a comment

My First Android App: Stash-It!

After a couple of months of busy early morning and late night programming, my first Android app has finally been released. Please meet Stash-It!

Stash-It! responds to an odd side-effect of the difference between the iOS and Android security models. On the iPad, there are a large number of applications which offer an “all in one” approach to managing a group of related content. These are a bit frustrating if you want to share files transparently and seamlessly between applications, but there are times when you want to manage a group of files securely, and then the iOS approach is great.

Android is the other way around. The more open file system and component model encourages the use of specialist applications which do one job well, but it can be a challenge to keep related files of different types together, and to hide them if you don’t want private client files or the like turning up unannounced in your gallery of family photos!

Stash-It! tries to plug this gap, by providing an “all in one” private file manager, tabbed browser and downloader for Android. You can get all these functions independently in other apps, but Stash-It! is the only one which brings them together in one place. It’s the ideal place to keep content you want safe from prying eyes: financial and banking records, health research, client documents. I suspect a few will even use it for a porn stash, but that’s not its only use! 🙂

There are built-in viewers for most common image and movie formats, plus PDF and web files, so you don’t have to move these outside the application to view them. However, when you do need to use an external application Stash-It! has a full suite of import and export functions to move your files or open them with other applications.

It took a while to design the security model. Stash-It! encrypts the names of files so that they can’t be read, and won’t be visible to the tablet’s gallery and similar applications, but the content of your files is untouched, so there’s little risk of losing data. Hopefully this strikes a sensible balance between privacy and risk.

Even if you’re not too worried about privacy, Stash-It! is a great place to collect material related to a particular project, with all your different file types and web research in one place. You can bookmark web links, but also positions in video files or PDF documents. Web pages can be saved intact for reference or offline reading. Again, you can do a lot of these things in separate apps, but I believe Stash-It! is the first one to bring all these functions together where you might want them.

I’ve got a lot of ideas in the pipeline to improve it further, but it’s now time to test the market and see whether I’ve spotted a gap which needed plugging or not.

Take a look and let me know what you think!

 

Here’s the Google Play Page. You can also read the helpfile.

Posted in Agile & Architecture, Android, Apps, Code & Development, My Publications, Thoughts on the World | Leave a comment

What Do I Mean by "Agile Architecture"?

A little while back I was approached by EITA Global, a global provider of on-line training, and we have now agreed that I should present for them a webinar entitled "Agile Architects, and Agile Architecture". The current plan is for this to run on 8th April. I’ll keep you all posted with any changes.

As part of my preparation, I decided to do a literature scan to see how this topic may have moved on since the last time I did some significant work on it, a couple of years ago. I have to say that based on my initial research I’m not that impressed… I don’t know whether to be flattered or slightly perturbed that AgileArchitect.org comes up squarely at the top of a Google search. There are a few decent web articles around, although most are several years old and I’d seen them before. The Google search also turns up several dead links.

Amazon turned up a couple of loosely-related books, and the most obvious candidate appeared to be "Lean Architecture: for Agile Software Development" by James O. Coplien and Gertrud Bjørnvig. I’ve now read a couple of chapters, but my first impression is not very favourable. I may be rushing to judgement, in which case I’ll apologise later, but the book seems to somehow equate "architecture" with "code structure" with "project structure", which isn’t right at all, missing a number of the most important dimensions of any true architecture.

This led me to ask myself a very basic question. "What do I mean by ‘Agile Architecture’?". In Coplien and Bjørnvig’s book they seem to answer "an architecture which facilitates agile development". That may be one definition, but it isn’t mine.

I think the confusion arises from the difference between "agile" applied to a process (e.g. software development), and applied to a product. In the former case, the Agile Manifesto undoubtedly applies. In the latter, I’m not so sure. I think that for a product, and especially its architecture, the primary meaning of "agile" must be "able to respond to change". The larger the change which can be handled quickly and cheaply, the more agile the architecture. An architecture which has been built in a beautifully run agile project but which needs new code the first time a business rule changes is fragile, not agile. The system which can absorb major changes in the business rules without a single line of code is genuinely agile. The integration architecture which allows multi-million pound system A to be upgraded with no impact on adjacent multi-million pound system B, or which allows the company to be restructured just by re-configuring its services, is the most agile of all.

I’m slightly worried that "agile" may have become a "reserved word", and this "architecture in the large" definition may run counter to accepted practice. Is that right, or am I reading too much into a few examples?

Posted in Agile & Architecture, Thoughts on the World | 4 Comments

Break Compatibility, Lose Loyalty

For almost 20 years I have been a fan of, and borderline apologist for, Microsoft. One of the main reasons was their focus on software usability, backed up by a visible intention to preserve backwards compatibility wherever possible. While each new release of Windows, Office, IE and Visual Studio brought new features, these were by and large an extension to rather than a replacement for that which already worked. When a compatibility break was absolutely necessary, such as with the transition to VB.NET, it was well signposted and the option to parallel run the old version well supported.

Sometime around 2007-8, maybe by coincidence just when Bill Gates retired, this all went to hell in a handcart, and since then I’ve been cursing new Microsoft software versions as much as praising them. Each release has brought frustrations, and in many cases they have been sufficiently severe to drive me to adopt a competitor’s product, or at least a third party add-on.

XP SP 2 broke WMA format so it is incompatible with most third party players. My car was new in 2008, but I have to rip CDs using an XP SP1 virtual machine. Vista broke the reliable and flexible ntbackup. It took a bit of effort to get it working again, and it’s still part of my (more complex) backup strategy, but the “heavy lifting” is now done by Acronis rather than Windows.

The disruptive user interface and file format changes of Office 2007 have been widely discussed elsewhere. Suffice to say that I never used Office 2007, and run Office 2010 only with a third party add-on which restores the old menus. The compatibility-breaking changes to follow up flags in Outlook 2010 are extremely annoying, but as yet insufficient to drive me to an alternative product.

The same is not true of the changes to Virtual Machine support in Windows 7. Before that move, I used Microsoft’s own Virtual PC extensively. However, the loss of compatibility, features and reliability were so severe that I now only use and recommend VMWare WorkStation/Player for this purpose. You can read about my experiences here.

The latest problem, and what has prompted this blog, is the appalling state of Internet Explorer 9. I have been a faithful user of IE since V1, and have lived, fairly happily, with its limitations through to IE8. However, since “upgrading” to IE9 I have become completely disillusioned, because it just isn’t reliable enough. Here is a sample of the things which just don’t work properly:

  • Downloading dynamically-generated PDF files, such as bills from BT,
  • MasterCard SecureCard authentication. This one’s a real pain if you’re at the end of a long online purchase, and you find your main credit card won’t work,
  • The combined address / Google search bar. If I type in a valid www…. address, I expect the browser to at least attempt to use it, not do a search!
  • Printing. Some long text pages, especially from typepad blogs, get mashed with the main font/character set replaced by something unreadable,
  • Rendering some web sites readably at all. Some of the worst offenders, ironically, are Microsoft’s own “support” forums.

By direct contrast, Google Chrome seems to do a decent job of all the above. I am hereby announcing my intention to make it my primary browser whenever I have a choice.

I’m now really scared about Windows 8, with its so-far half-hearted changes to the desktop. What will that wreck?

Now in fairness, Microsoft are not the only, or maybe even the worst, offenders in this space. For example, Bibble/Corel have just pushed through a change to their AfterShot Pro software which no-one wanted and which breaks a plugin I’ve written, but at least in that community I suspect I have some influence to say “the new version is broken, don’t use it.”

I really don’t understand Microsoft’s behaviour here. Are all these compatibility wrecks conscious decisions? If so, do the conquest sales related to cool new features really outweigh the loss of loyalty from existing users? If not, have they just got lazy and complacent? Who knows?

Posted in Agile & Architecture, Thoughts on the World, VMWare | 1 Comment

Tyranny of the Colour Blind

Shot at the Botanical Gardens near Chania, Crete. I don't know what this plant is, and judging from the four or five different colours for its fruit, I'm not sure it does either! However, the world is definitely richer for the splashes of colour...
Camera: Canon EOS 7D | Lens: Canon EF-S 17-85mm f/4-5.6 IS USM | Date: 08-10-2010 09:02 | ISO: 200 | Exp. bias: 0 EV | Exp. Time: 1/80s | Aperture: f/9.0 | Focal Length: 59.0mm (~95.6mm)

Or Have Microsoft Lost Their Mojo?

I like colour. I see in colour, dream in colour and have a rich colour vocabulary which drives much of my photographic style (see Seeing in Black and White). It’s also an important part of how I work – colour can be a powerful “dimension” in the visualisation of information. The human eye and brain are remarkably good at processing and using colour signals, whether it’s a highlighted line of text on screen, or a flashing blue light in traffic.

Now I acknowledge that this isn’t universal. As a designer you have to cater for a significant proportion of users (about 8% of males) who have poorer colour vision, and especially in mobile systems there will be times when ambient lighting conditions reduce effective colour saturation to a point where it doesn’t work. The traditional way to deal with this is to combine colour with another signal, such as shape – green tick vs red cross, for example. Then each user can use the signal which works best for them.

Microsoft used to get this. Their software was frequently a model of usability, and exploited colour, shape and shading to both guide the user, and allow the user to better manage their data. Icons could be rapidly located by colour as much as by detail. Data items of a particular status would “leap out” from a forest of those without the status marking. Office 2003 introduced follow-up flags for both OneNote and Outlook, which proved to be a great way to identify and retrieve key items in large lists. These supported both colour and shape or text as “identifying dimensions”.

Then sometime in the late noughties, Microsoft lost their way. Office 2010 has abandoned colour as a navigational tool. Tools, icons and the dividers between sections of the screen are all subtle shades or pale pastels, making them very difficult to visually distinguish, particularly in poor lighting conditions. Icons are no longer clearly distinguishable. However, the worst regression is in respect of Outlook’s follow-up flags, which now actively disable the use of colour via a tyrannically imposed colour scheme consisting of “multiple shades of puce”, rendering them completely useless for their original purpose.

This rant has been brewing for some time as I try to get to grips with Office 2010 and its inexplicable abandonment of many well-established user interface standards, at the cost of enormous frustration for long-standing users. What tipped me over the edge was the announcement last week of Microsoft’s new Windows logo. Gone are the cheerful primary colours, and the careful shading which made later versions pop out of the screen with real depth. In their place is a plain white cross on a muddy blue background. Useless!

Now I suppose there might be people who think that this reduced colour palette is somehow “cool” or “elegant”. They’re probably the same group who think that it’s appropriate to model fashion on anorexic teenagers rather than real women. In both cases they’ve clearly lost track of who their real customers are, who has to get real utility from their work.

I’m not against change, and I accept that high-resolution graphics allows more subtle designs than we were previously used to. However, this rush to abandon colour in user interfaces and branding robs us of an important dimension. We absolutely do have to make sure that designs are also usable for users and in conditions where colour may not work, but we must not throw away or disable powerful tools which have real value to the majority of us. Microsoft should know better.

Posted in Agile & Architecture, Thoughts on the World | Comments Off on Tyranny of the Colour Blind

Ten Ways to Make Your iPad Work Effectively With Windows

If you’re one of those people who uses loads of Apple products, and is thinking of proposing Steve Jobs for canonisation, then you may be happy with how your iPad works, but if you’re trying to make it work effectively in a Windows-based environment you may have found shortcomings with the “out of the box” solutions.

It is perfectly possible to make the iPad play nicely as part of a professional Windows-based environment, but you do have to be prepared to grab the bull by the horns, dump most of the built-in apps (which are almost all pretty useless), and take control of both file management and communications via partner applications on the PC. This article presents some of my hard-won tips and recommendations on how to do this and get productive work out of the iPad’s great hardware.

Read the full article
Posted in Agile & Architecture, iPad, Thoughts on the World | Tagged | 3 Comments

Enterprise Architecture Conference 2011 Day 3

Well the third day of EAC 2011 came and went. My talk went well. Despite the last minute scheduling change I got a decent audience, and once in front of real listeners managed to find my style and pace again. They seemed to appreciate it, but as none of the inveterate tweeters was in attendance I’ll have to wait for the feedback analysis to be sure.

This morning’s keynote was excellent, it’s just a shame that I had to leave early to set up for my own talk. It could have been subtitled “why ‘cloud’ means people trying to sell you stuff”, and was the most balanced discussion I have yet heard on cloud computing. The most interesting observation is that individual component reliability is very much subservient to scalability and “elasticity”, which has major implications for more critical applications.

The rest of the day’s presentations were a mixed bunch. Some were too academic, others very light on real content. The one exception was Mike Rosen talking about SOA case studies, which included both real successes and failures, and should be the yardstick for anyone looking to move to SOA.

One thing I have learned from this conference is a (arguably the) real purpose for Twitter. It’s a great way for a group engaged in a joint activity like this to have a shared background conversation. In many ways it’s the electronic reincarnation of the DeMarco/Lister red and green voting card system, but with wider and longer reach. It’s not without problems: it can be a distraction, some users can dominate with high volume, low value tweets and retweets, and Twitter’s search and the available clients (certainly on the iPad) are not optimised for hashtag-based operation. However, these are minor complaints.

The iPad makes a superb conference tool, and I was amazed by the number of them in use, for making notes, reviewing slides, and tweeting. Interestingly I think this trend will drive a move to standardise on PDF-format material: slides published this way worked very well, but some available only in PowerPoint format weren’t viewable.

My congratulations and thanks to the conference chairs and the IRM team for an excellent event. Time to start thinking about a topic for the next one…

– Posted using BlogPress from my iPad

Location: Falcon Rd, Wandsworth, United Kingdom

Posted in Agile & Architecture, iPad, Thoughts on the World | 1 Comment

No Plan B

I don’t think the reason why the British travel infrastructure copes so badly with problems is actually down to a fundamental lack of capability or investment. The real problem is that the operators lack sufficient planning, and/or imagination, and/or flexibility to shift their services to alternative patterns better matched to changing circumstances. The only “plan B” seems to be “run what’s left of plan A and apologise”.

Take, for example, South West Trains, who run commuter services to the South West of London. There are two main lines out from Waterloo via Guildford and Woking, but also a number of parallel minor lines, like the secondary line to Guildford which runs past my house.

When North Surrey got a foot of snow for the first time in 30 years in February 2009, it was clear that no trains were going to run on any of these lines for a couple of days, but only a relatively short stretch of the lines was blocked. It was still possible, for example, to get from Surbiton (about 10 miles nearer to London than my home) to Waterloo.

I had to attend a course in London, and the roads were becoming passable, so I dug the car out and drove to Surbiton. It rapidly became clear that everyone else had had the same idea. How had SWT reacted? By running the same four commuter services an hour from Surbiton. These were, of course, enormously overcrowded and slow. What about the other trains which would, for example, have usually been running the express services carrying the rest of the traffic? These were nowhere to be seen, presumably sat in a siding near Waterloo. Would it have been beyond the wit of man to press some of these into use as additional shuttle services to carry the excess traffic from those stations which were accessible? Apparently so.

Last night, I got caught again. I got to Waterloo at 10:30 pm to see a blank indicator board. The cause of the trouble was signalling problems, in turn due to cable theft at Woking. Now I don’t blame the rail companies for that, and I hope the perpetrators are found, hung, drawn and transported to South Georgia, but I do think the train companies’ response is inadequate.

True to form, they had reverted to “what’s left of plan A”, running a tiny number of overcrowded and delayed services under manual signalling procedures. Now theoretically my line should not have been affected. Not only should I have been able to get home, but my line is perfectly capable of carrying some additional “relief” traffic, as it does when there is planned engineering work on the main lines. (About once a month the 8 commuter services per hour are joined by about 20 express and freight services, and when planned that seems to work fine.) With a bit of ingenuity you could even alert taxi drivers at the intermediate stops to the sudden need for their services, at profitable late night rates.

Is that what happened? I should coco. Instead not even the regular services to my home station appeared to be running. I ended up on one of the overcrowded trains to Surbiton, and finished my day with a £40 cab ride.

Why is this so difficult for the train companies to get right? In both of these cases there was no fundamental problem with the remaining infrastructure or rolling stock. In both cases they even have a model for the alternative schedule. For last night it’s in a file marked “Saturday service with engineering work at Woking”. Staff flexibility might be the problem, but that must be resolvable, maybe via higher overtime rates?

There’s also an architectural lesson here. I design computer systems and networks. My clients run national power networks. In both cases the customers expect those systems and networks to be resilient, and to cope with growing demand without wholesale replacement. It’s not always possible to justify dedicated “DR” capacity, so you have to get inventive with alternative configurations of the capacity you do have, and then run tests and introduce clever asset monitoring and management practices to make sure those configurations can be used safely.

If we can do it, why can’t the transport operators?

– Posted using BlogPress from my iPad

Location: Cobham, United Kingdom

Posted in Agile & Architecture, Thoughts on the World | Leave a comment

Enterprise Architecture Conference

Halfway through, and this is shaping up to be the best EAC I have attended for a while.

I was umming and aahing about whether to attend yesterday’s seminar sessions, and couldn’t make up my mind which to join. In the end I made up my mind about the morning session while having a cup of coffee on the way, when I recognised one of the speakers, Lawrence Helm, as having given an excellent presentation a couple of years ago on NASA’s knowledge management problems. This time he and his colleague Robert Stauffer were talking about NASA’s adoption of Capability Modelling, and how they have put it to use supporting some very high level decisions about NASA’s future shape.

This was another stimulating session, and really benefitted from the extra room afforded by the half-day format. Lawrence and Robert actually ran out of time, which was probably a testament to the depth of the material and the discussions it engendered.

The principle of relating capabilities to strategic objectives was not new to me, although the NASA examples certainly were. What did surprise me was the level of detail required for capability definitions in that environment. For example, the launch capabilities relate specifically to certain target longitudes and temperature ranges, and could not be moved to a location outside those ranges (for example Kourou or Baikonur) without re-engineering the rocket platforms.

The afternoon session was also a bit random, as I got confused between Mike Rosen’s half-day seminar and his separate one-hour talk for which I had the slides. Not a problem: the half-day session on case study methods was very educational. The example of how Wells Fargo created a federated model to integrate their various systems under a common customer model was interesting, and plays nicely into my EAI talk tomorrow. Like a good sermon, I didn’t learn much new, but I felt thoroughly validated that Wells Fargo did what I would have recommended, and succeeded with it. We had a very robust discussion on the importance of stable service interfaces, so hopefully that will drum up some support for my talk.

You get a very good class of attendee at these sessions. Alec Sharp joined the NASA session, and John Zachman joined the afternoon session, although he didn’t participate much.

Thursday’s highlights have probably been the two keynotes: this morning on how different companies have developed different strategies to come through and out of the recession, and this afternoon on “how to think like a CEO” and get your messages across to senior managers. However, there was also an excellent talk this morning by David Tollow on how EA feeds management and planning of long term outsourcing deals, from the supplier’s viewpoint. Very relevant to many of us in the current day and age.

Just to make things interesting, Sally has asked me to swap slots with someone else tomorrow, so my talk, which was carefully trimmed to the constraints of the last slot on Friday, will now be at 10 am. This may or may not be a good thing.

Wish me luck!

– Posted using BlogPress from my iPad

Location: Portman Towers, Paddington, United Kingdom

Posted in Agile & Architecture, Thoughts on the World | Leave a comment