I’ve just posted my review of Gregor Hohpe and Bobby Woolf’s excellent book on enterprise integration using messaging, “Enterprise Integration Patterns”. It will probably become a “bible” for those involved in the high-level design of integration solutions. To find out more, please read my review.
Interfaces and Document IDs – A Rant
Please forgive me if this sounds like a rant, but I’m very annoyed. Someone who should know better has without warning changed a public interface, with the inevitable effect that dependent systems, in particular my blog, have broken. The offender? The mighty Microsoft.
Regular readers will know that I’ve highlighted several articles published in Microsoft’s Architecture Journal. A week ago I went to post a note on another article, to find that all my previous hyperlinks were broken. Thanks to my regular correspondents Richard Veryard and Arnon Rotem-Gal-Oz I discovered that the cause is an internal reorganisation within Microsoft, and there is a new web location for the journal, although it wouldn’t surprise me if that changes again. (To add insult to injury, the new URLs are very cryptic, and don’t paste easily into my blog!)
Now if you follow Microsoft’s advice when building systems, interfaces should be immutable. Otherwise you just don’t know what will break. The Microsoft advice is to never change an existing interface – if you need a different one, create a new interface, or at least a different version. And maintain the old one as long as dependent systems need it.
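A minimal sketch of that rule in Python, with entirely made-up interface names (`OrderServiceV1` and friends are illustrations, not anything from Microsoft’s guidance): the published v1 contract is never edited; new capability goes into a v2 contract, and an adapter keeps v1 alive for as long as dependent systems need it.

```python
class OrderServiceV1:
    """The published v1 contract. Once clients depend on it, it never changes."""
    def get_order(self, order_id):
        raise NotImplementedError

class OrderServiceV2:
    """New capability goes into a new contract, not into V1."""
    def get_order(self, order_id, include_history=False):
        raise NotImplementedError

class OrderServiceImpl(OrderServiceV2):
    """One implementation behind the current contract."""
    def get_order(self, order_id, include_history=False):
        order = {"id": order_id}
        if include_history:
            order["history"] = []
        return order

class V1Adapter(OrderServiceV1):
    """Keeps the old interface working by delegating to the new one."""
    def __init__(self, v2):
        self._v2 = v2

    def get_order(self, order_id):
        # V1 callers see exactly the behaviour they always saw.
        return self._v2.get_order(order_id)
```

Old clients call `V1Adapter`, new clients call the v2 contract directly, and nothing breaks without warning.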
Microsoft are actually very good (not perfect, but quite good) at following this rule in their software systems. But they don’t seem to understand that the same rule should apply to that big public interface called the website. There are, of course, perfectly good strategies which would avoid this problem.
First, don’t try to reflect internal structural changes in the MSDN website. Doing so is like changing a system’s interface just because the implementation has been updated – the opposite of good practice. The public interface should be independent of implementation details.
Second, if you must create a new interface, keep the old one working. In systems, you can usually wrap the new interface to mimic the old. The same is true for a website: with a set of auto-redirect pages at the old addresses, I would never even have noticed the change.
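For the web case, this is a few lines of server code. Here is a hedged sketch using Python’s standard `http.server` module; the old path in the table is an invented example, not the real retired MSDN address.

```python
from http.server import BaseHTTPRequestHandler

# Hypothetical mapping from retired URLs to their new homes.
# (The old path here is illustrative, not the actual MSDN one.)
LEGACY_REDIRECTS = {
    "/resources/journal/current.aspx":
        "/architecture/default.aspx?pid=journal",
}

def redirect_for(path):
    """Return the replacement URL for a retired path, or None."""
    return LEGACY_REDIRECTS.get(path)

class LegacyRedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = redirect_for(self.path)
        if target:
            # 301 Moved Permanently: browsers, crawlers and bookmarks
            # all learn the new address automatically.
            self.send_response(301)
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()
```

Serve that at the old addresses and every existing hyperlink keeps working, indefinitely, at near-zero cost.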
Unfortunately Microsoft have done neither of these. And they seem to have a corporate blindness to the fact that documents are interfaces too. MS SharePoint is based on a web idiom, in which documents are identified by their position in a hierarchy. Re-arrange the hierarchy, and any external references or cross-references suddenly break.
Professional-quality document management systems don’t do this. They identify and control documents via a unique, immutable key into the underlying repository, and the primary document access via this key is guaranteed. Of course, you also want to show the document in a hierarchical structure, but any such entry is just a pointer to the underlying document. And if you want to change the hierarchy, or expose the same underlying document at multiple places in multiple hierarchies, it’s easy to do. The world of blogging has a similar concept, with “permalinks” which (should) survive a reorganisation.
Memo to Microsoft: links are interfaces too!
Metropolis – a Metaphor for IT Maturity
I’ve just read an excellent paper by Pat Helland of Microsoft, in which he likens the development of cities and manufacturing in the 19th century to the development of systems and business models now. His conclusion – IT at the moment is about at the same stage as America in the 1880s, when they were just starting to turn the Wild West into an industrialised nation!
Three short quotes from Helland’s conclusions bear repeating directly. On heterogeneity he says:
Remember that heterogeneity happens. Unless you have a very simple application portfolio, shared services will not be achieved by trying to put all of your applications on one version of one platform. Even if you could, the next merger would change that! Rather, you have to design for interoperability and integration across platforms. This is the force that is driving the industry-wide work in service-oriented architectures.
He extends the popular “city planning” metaphor to IT investment:
IT investment is a balance of funding the sacred, protecting historic monuments, and allocating spending between infrastructure and business opportunity. Striking this balance is a key facet in effective governance, and in realizing the potential of IT in your organization.
And finally, those who seek to maintain control of their enterprise architecture through heavy governance would be well advised to note:
You have to maintain a light hand. It is counterproductive to try to dictate what happens in every structure in town, what color shirts are made, and how much is charged for soap. You have to embrace the semi-autonomous approach to governance that is characteristic of our cities, and allow the process owners to optimize and achieve efficiencies with as few constraints as possible.
Death of the Microsoft Architecture Journal?
Does anybody know if Microsoft have killed off their Architecture Journal?
I was just about to write a post linking to it, and I find the content has been moved to an archive area and all the links have changed. Please send me a comment if you know!
Update
Thanks to my regular correspondents Richard Veryard and Arnon Rotem-Gal-Oz I’ve managed to track down what’s happened. The journal is now at http://www.microsoft.com/architecture/default.aspx?pid=journal
This “broken interface” breaks so many architecture rules it deserves a separate post, so I’ve written one!
Cirrus Minor – A New Architecture Site
Arnon Rotem-Gal-Oz has set up an interesting new site / blog dedicated to software architecture. Of particular note, he’s trying to put some detail on the architecture “process”, which is often neglected as a single box on the development process picture. His approach goes by the name SPAMMED – catchy, but it might cause the odd problem with email filters 🙂
Domain-Specific Languages
There seems to be quite a lot of activity on the “Domain Specific Language” front at the moment. Martin Fowler published “Language Workbenches: The Killer-App for Domain Specific Languages?”, in which he concludes that the common programming pattern of setting up repeating data structures via either very similar lines of code, or an external configuration file, is actually a DSL. He also republished a paper by Dave Thomas entitled “Design to Accommodate Change” on the related topic of table-driven programming.
However, Martin’s essay goes beyond common programming and data techniques to look at the development of specialist tools which he calls “Language Workbenches”. I’m not completely convinced that we need these in the world of XML and XSD. If you have a defined schema for your XML-based DSL (and aren’t all the many *ML languages just different DSLs?) then any schema-sensitive editor will provide you with good design and editing support. The leading IDEs (e.g. Visual Studio) all have such a tool built into their core capabilities. Surely we now have a sufficiently sophisticated set of XML-based tools and standards that we have an opportunity to exploit synergies rather than re-inventing the wheel?
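To make the point concrete, here is a sketch of the kind of thing I mean. The XML vocabulary below is invented for illustration – a little DSL for field validation rules, exactly the sort of repeating structure Martin identifies – and a few lines of standard-library Python translate it into plain data, no Language Workbench required.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML-based DSL describing validation rules.
# Given an XSD for this vocabulary, any schema-aware editor
# can offer completion and validation while you write it.
RULES_XML = """
<rules>
  <field name="email" required="true" maxLength="80"/>
  <field name="nickname" required="false" maxLength="20"/>
</rules>
"""

def load_rules(xml_text):
    """Translate the XML DSL into plain Python dictionaries."""
    root = ET.fromstring(xml_text)
    return [
        {
            "name": field.get("name"),
            "required": field.get("required") == "true",
            "max_length": int(field.get("maxLength")),
        }
        for field in root.findall("field")
    ]
```

The “language” is the schema; the “workbench” is whatever schema-sensitive editor you already own.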
The Fear Premium
In an interesting echo of my last piece (Why Software Isn’t Like Building Construction), Scott Ambler has analysed bureaucratic processes as a response to management fear about what can go wrong in software development. His conclusion is that these processes only give the illusion of addressing the underlying fear. His article is well worth reading.
Why Software Isn’t Like Building Construction
Many software development and management methods are founded on a basic assumption – that constructing software is rather like building a bridge or a house. Once we’ve “done the design”, actually generating the software ought to be a completely predictable, relatively low-skilled process. However, four decades of failure to achieve this vision might suggest that we should revisit the assumption.
In a paper entitled “The New Methodology” Martin Fowler, the guru of object-oriented development, suggests a couple of reasons why this might be.
My article answers Martin’s, suggesting a couple of other considerations and asking whether we have to abandon the physical construction analogy completely as a result.
Application Development Strategies
I recently attended a day of the Butler Group “Application Development Strategies” Symposium. I’ve just posted a short report on some of the more interesting discussions and presentations.
Almost all of the presentations shared a reminder that we still have a “software crisis” – the vast majority of software projects fail to deliver to their original targets and estimates. The presentations suggested three independent, but not exclusive, approaches to try to resolve the problem:
- Adopting better, more agile processes to address fundamental weaknesses in “waterfall” processes,
- Adopting better tools and techniques to improve development productivity and the integration of the application life-cycle,
- Enforcing a stronger “enterprise architecture” framework for development.
This last one was surprising, with several papers echoing my view that a strong architecture is essential if agile development is to succeed on a large scale or in complex and critical applications.
There was also surprising agreement on things which won’t solve the problem:
- No-one was promising a technical or product “silver bullet”. This includes web services!
- No-one was suggesting that we should just “try harder” with old-fashioned tools and processes.
- There’s no “one size fits all” solution. For example it’s a mistake to force a formal, high-ceremony process onto small business systems developments.
- Excessive technical standardisation is also not the answer. The drawbacks include “lowest common denominator” technical solutions and inflated costs where the standard solution is “overkill”.
Read my report for more details.
Review – My Early Life
I’ve just posted my review of this wonderful book, by one of the world’s greatest leaders. The book is exciting, inspiring and, most of all, fun. I urge you to read it (and my review)!
The Laws of Identity
Microsoft have just published an excellent paper by Kim Cameron discussing the characteristics of an “identity metasystem” which must evolve if we are to have proper trust in the Internet and interactions which take place through it.
The paper is also available from the Identity Blog.
The paper’s thrust is that we need to develop a unifying set of identity-related technologies, but that these must observe certain key “laws”, and must accommodate varying technologies and requirements, much as unifying APIs provide access to a variety of hardware technologies.
I started thinking about the most common form of digital identity at the moment, the email address. It can be used in accordance with many of the laws. I can (usually) control when I release it. I can have different identities in different contexts, and choose which one to disclose. The identity is verifiable (to a limited extent) – someone can send me mail to check my address is valid. A variety of service providers and technologies are supported.
The big problem with email, of course, is that I can’t usually verify that email is from the claimed sender. For example, my spam whitelist admits email apparently from microsoft.com, but some of these emails are offers of dodgy mortgages and promises of increased manhood, obviously not from the claimed source!
As a result, I wonder whether there is a missing “law of identity”. I need to be able to verify a claimed identity by methods I trust. I’d express the law something like “A party must be able to validate any identity claim, particularly its ownership, by reference (directly or indirectly) to resources he or she trusts.” This is implied in the current laws, but might be important enough to promote to a law in its own right.
Growing a Language
I’ve just read a wonderful paper by Guy L. Steele, “Growing a Language”. He argues strongly that programming languages must be “small”, but able to grow. Such a language will have a relatively simple structure, syntactic rules, and a small core vocabulary. However, it must also be able to “grow”, integrating new data types and functions in the form of user code accessed in exactly the same way as core functions.
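Python gives a small taste of this (my example, not Steele’s – his paper is about Java): a user-defined type can plug into the language’s core syntax, so “grown” vocabulary reads exactly like built-in vocabulary.

```python
class Vector:
    """A user-defined type that participates in core language syntax."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # User code invoked by the core "+" operator: callers cannot
        # tell a grown word from a built-in one.
        return Vector(self.x + other.x, self.y + other.y)

    def __repr__(self):
        return f"Vector({self.x}, {self.y})"
```

Once defined, `Vector(1, 2) + Vector(3, 4)` looks no different from `1 + 2` – the language has grown without the core changing.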
Steele’s argument is highlighted by the paper’s unique style – he uses a small but growing language himself. He writes using only words of one syllable, gradually adding other words he has defined in these terms.
The paper works at many levels. As well as the fascinating intellectual exercise in style, it makes a strong case for:
- simple but extensible programming languages,
- improving the extensibility of Java, rather than its core vocabulary,
- an agile community process for developing languages, rather than up-front design of great monoliths,
- the communication power of simple language and words.
Steele exhorts us to think about extensibility mechanisms – if we get these right then the core functionality can always grow. And by example, he encourages us to use simple, direct language and benefit from its discipline. On both accounts I agree wholeheartedly.