They’re All Missing the Point

Since Google’s demo of an AI bot making a phone call a few weeks ago, the reactions I have read seem to be completely polarised. About half the reviewers are blown away, believing it to be unleashing AI wonders/horrors half a step away from SkyNet going live. The other half are underwhelmed, seeing no potential value.

They are all wrong.

Let’s deal with the "this is the advent of true AI" bunch first. Google have demonstrated a realistic sounding voice which can currently deal with a few, very limited scenarios, and I suspect will rapidly fail if the other party goes significantly off track. Sure, it’s a step forward, but just a step. If you want to see a much more convincing demo, catch up with the program "How to Build a Human" from about 18 months ago, in which the makers of the Channel 4 Sci-Fi program "Humans" got a mix of British experts to build a robot Gemma Chan, who (which?) was then interviewed over Skype by a bunch of entertainment journalists. About half the reviewers didn’t realise they weren’t talking to the real Gemma. That’s much closer to a Turing test pass.

At the other end of the scale we’ve got those who don’t see any advance or value to a machine which can help make a phone call. To those, I have a simple question: "how did you get on, the last time you rang your bank / utility / travel company / <insert other large organisation here>?"

I completely agree that it’s a waste, and maybe a bit sinister, to task a robot with making a call to a local restaurant or hairdresser. But when was the last time you rang anything other than a small local business, and got straight through to talk to a human being? We all waste far too much of our time sitting on the phone, trying to navigate endless menus, trying to avoid the dead end where all you can do is hang up and try again, or listening to "Greensleeves" being played on a stylophone with a reminder every 20s that the recipient values your call. Yeah, right.

If I want to deal with a computer, I’ll go onto the website. I’m very happy doing that, and if I can do my business that way I will. The reason I have picked up the phone is one of the following:

  • The website doesn’t support the transaction I want to execute, or provide the information I need. I need to speak to a human being.
  • The website has a problem. I need to speak to a human being.
  • The website has instructed me to phone and speak to a human being.

Spot the common thread?

So I have the ideal use case for Google’s new technology. It makes the phone call. It navigates the endless menus, referring to a machine learning database of how to get to a human being as quickly as possible, and how to avoid dead ends in that organisation’s phone system. It provides simple responses to authentication prompts if it can, or prompts me for just the required information. If the call drops or dead ends it starts again. And it listens to "Greensleeves" or equivalent, silently in the background, until it’s sure it’s speaking to a human being. At that point, it says, like a good secretary would, "please hold, I have Mr Andrew Johnston for you", gets my attention and I pick up the call.

In the meantime, I get on with my life.

In some ways, this is actually easier than what Google have already done, because most of the interaction is computer-to-computer, and actively doesn’t need a human-like voice or understanding. It’s certainly a better use of the technology than pestering the local hairdresser.
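
Purely to make the idea concrete, here is a minimal sketch of the sort of control loop I have in mind. The telephony interface is faked, and every name here is a placeholder rather than anything Google actually offers:

```python
class FakeCall:
    """Stand-in for a real telephony API, purely to exercise the control loop below."""
    def __init__(self, events):
        self._events = iter(events)
    def next_event(self):
        return next(self._events, "disconnected")
    def send_dtmf(self, digit):
        print(f"pressed {digit}")
    def say(self, text):
        print(f"said: {text}")

def robot_secretary(call, menu_route):
    """Work the menus, sit through the hold music, and only involve me once a human answers."""
    for event in iter(call.next_event, "disconnected"):
        if event.startswith("menu:"):
            call.send_dtmf(menu_route.get(event, "0"))   # learned quickest route to a person
        elif event == "hold_music":
            continue                                     # listen silently, don't hang up
        elif event == "human":
            call.say("Please hold, I have Mr Andrew Johnston for you.")
            return True                                  # now alert me and hand over the call
    return False                                         # dropped or dead-ended: redial

# Tiny illustration with a scripted call
call = FakeCall(["menu:main", "hold_music", "hold_music", "human"])
robot_secretary(call, {"menu:main": "2"})
```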

OK Google. Build this, please.

Posted in Thoughts on the World

How Hard Can It Possibly Be?

I really should have known better. In last week’s piece on random music player algorithms, I made the rather blasé statement "I can live with it for a while and I can probably resolve the issue by downloading another music player app". Yeah, sure.

Now we all know that assumptions are dangerous. One boss of mine was inordinately fond of the quote "assumptions are the mother of all f*** ups", and he wasn’t wrong. However I really did expect that music players were a relatively mature and stable component of the Android app space.

So how did I get on with trying to download a better random music player? So far, I have downloaded somewhere between 10 and 20 apps. I have discovered:

  • Apps which just don’t start, or which crash immediately
  • Apps which can’t see the SD card on which my music is stored, and insist on randomly playing 3 ringtones
  • Apps which can’t play a lot of my music. Come on guys, WMA format is not exactly "edge".
  • Apps which don’t have a random function, despite the words "random" or "shuffle" in the description
  • Apps which don’t display properly on my phone’s screen
  • Apps which display nicely and seem to have all the functions I need, but where the random function is to start with one song chosen at random, and then just play all the other songs on my device in alphabetic order of title (at least 3 instances of this!)
  • Apps which display nicely and have a decent random function, but then 60% of the time no sound comes out of the headphones when you press "play"
  • Apps which display OK, and appear to have a decent random function, but most of the other advertised functions don’t work

Worst case, I can probably live with the last – I can always use the Sony app for other purposes – and late last night I spent another 5 minutes and maybe, just maybe, I have found one app which will work, albeit with a slightly odd user interface.

But honestly, how hard can it possibly be?

Posted in Thoughts on the World

Inferring Algorithms: How Random is Your Music Player?

“You’re inferring that I’m stupid.”

“No, I’m implying that you’re stupid. You’re inferring it.”

– Wilt, by Tom Sharpe

My latest contract means spending some time on a bus at each end of the day. The movement of the bus means it’s not comfortable to read, so I treated myself to a nearly new pair of decent Bluetooth headphones, and rediscovered the joys of just listening to music. I set the default music player app to “random” and let it do its stuff.

That’s when the trouble started. I started thinking about the randomisation algorithm used by the music player on the Sony phone. I can’t help it. I’m a software architect – it’s what I do.

One good music randomisation algorithm would look like this:

  1. Assign every song on your device a number from 1 to n
  2. When you want to play a random song, generate a random number between 1 and n, and play the song with that number.

However in my experience no-one ever implements this, as it relies on maintaining an index of all the music on the device, and assigning sequential numbers to it. That’s not actually very difficult, given that every platform indexes the music anyway and a developer can usually access that data, but it’s not the path of least resistance.
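
In code, that first algorithm is only a few lines once you have the index. A minimal Python sketch, with a toy list standing in for the platform’s media index:

```python
import random

def pick_random_track(library):
    """Algorithm 1: number every track from 1 to n and pick one uniformly.
    'library' stands in for the platform's own index of music files."""
    n = len(library)
    chosen = random.randint(1, n)        # every track equally likely
    return library[chosen - 1]

# Toy illustration
library = ["Abba/Waterloo.mp3", "Beatles/Help.mp3", "ZZ Top/Legs.mp3"]
print(pick_random_track(library))
```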

Let’s also say a word about generating random numbers. In reality these are always pseudo-random, and depending on how you seed the generator the values may be predictable. That may be the case with Microsoft’s software for picking desktop backgrounds, which seems to pick the same picture simultaneously on my laptop and desktop more often than I’d expect, but that’s a topic for another blog, so for now let’s assume that we can generate an acceptably random spread of pseudo-random numbers in a given integer range.

Here’s another algorithm:

  1. Start in the top directory for the music files
  2. Pick an item from that directory at random. Depending on the type:
    • If it’s a music file, play it. When finished, start again at step 1
    • If it’s a directory, make it your target and redo step 2
    • If it’s anything else, just repeat step 2

This is easy to implement, runs quickly and plays nicely with independently changing media files. I’ve written something similar for displaying random pictures on a website. It doesn’t require maintaining any sort of index. It generates a good spread of chosen files, but will play albums which are alone under the first level root (usually the artist) much more than those which have multiple siblings.
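
In outline it looks something like this (a Python sketch; the file extensions are just examples):

```python
import os
import random

def pick_random_file(top):
    """Algorithm 2: choose at random at each level of the directory tree.
    Index-free and fast, but biased towards tracks with few siblings."""
    current = top
    while True:
        entries = os.listdir(current)
        if not entries:
            current = top                          # empty directory: start again from the top
            continue
        choice = os.path.join(current, random.choice(entries))
        if os.path.isdir(choice):
            current = choice                       # a directory: descend and choose again
        elif choice.lower().endswith((".mp3", ".wma", ".flac", ".ogg")):
            return choice                          # a music file: play this one
        # anything else (artwork, playlists): just choose again from the same directory
```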

My old VW Eos had a neat but very different system. Like most players it could work through the entire catalogue in order, spidering up and down the directory structure as required. In “random” mode it simply calculated a number from 1 to approximately 30 after each song, and used that as the number of songs to skip forwards in the sequence.

This was actually quite a good algorithm. As well as being easy to implement it had the side-effect of being at least partially predictable, usually playing a couple of songs by the same artist before moving on, and allowing a bit of “what’s next” guesswork which could be entertaining on a long drive.
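
My reconstruction of it is barely more than a line of code (a guess at the behaviour, not VW’s actual implementation):

```python
import random

def next_index(current, playlist_length, max_skip=30):
    """The Eos scheme: keep the ordered playlist, but jump a random number
    of songs forward after each track, wrapping round at the end."""
    return (current + random.randint(1, max_skip)) % playlist_length
```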

So what about the Sony music app on my phone? At first it felt like it was doing the job well, providing a good mix of genres, but after a while I started to become suspicious. As it holds the playlist in a readable form, I could check that suspicion. These are key highlights from the playlist after about 40 songs:

  • 1 from ZZ top
  • 1 from “Zumba”
  • 3 from Yazoo!
  • 1 from Wild Cherry
  • 1 from Wet Wet Wet
  • Several from “Various Artists” with album titles like “The Very Best…”
  • 0 from any artist filed under A-S!

I wasn’t absolutely sure about the last point. What about Acker Bilk and Louis Armstrong? Turns out they are both on an album entitled “The Very Best of Smooth Jazz”…

I can also look ahead at the list, and it doesn’t get much better. Van Morrison, Walter Trout, The Walker Brothers, and more Wet Wet Wet :(

So how does this algorithm work (apart from “badly”)? I have a few hypotheses:

  • It implements a form of the “give every track a number” algorithm, but the index only remembers a fixed number of tracks – a few hundred, maybe ~1000 – and anything it read earlier in the indexing process is discarded.
  • It implements the “give every track a number” algorithm, but the random number generator is heavily biased towards the end of the number range.
  • It’s attempting a “random walk”, skipping a random number of steps forwards or backwards through the list at each play (a bit like the VW algorithm, but bidirectional). If this is correct it’s odd that it has never gone into “positive” territory (artists beginning with A-S), but that could just be down to chance. The problem is that without a definite bias a random walk tends to stay in the same place, so it’s a very poor way of scanning your music collection (the quick simulation below shows this).
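
Here’s that simulation; the starting position and step size are rough guesses based on my library:

```python
import random

def random_walk(n_tracks=11000, start=10800, plays=40, max_step=30):
    """Hypothesis 3: skip a random number of tracks forwards or backwards each play.
    Starting near the end of an alphabetical list (the V-Z artists), see how far
    an unbiased walk actually gets in 40 plays."""
    pos, visited = start, []
    for _ in range(plays):
        step = random.randint(1, max_step) * random.choice((-1, 1))
        pos = max(0, min(n_tracks - 1, pos + step))    # clamp at the ends of the list
        visited.append(pos)
    return visited

positions = random_walk()
print(min(positions), max(positions))   # typically spans only a few hundred tracks
```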

Otherwise I’m at a loss. It’s not like I have a massive number of songs and could have run into an integer size limit or similar (there are only around 11,000 files, including directories and artwork).

Ultimately it doesn’t matter that much. I can live with it for a while and I can probably resolve the issue by downloading another music player app. However you can’t help feeling that a giant of entertainment technology like Sony should probably manage better.

Regardless of that, it’s an interesting exercise in analysis, and also potentially in design. Having identified some poor models, what constitutes a “good” random music player? I’ve seen some good concepts around grouping songs by “mood”, or machine learning from previous playlists, and I’ve got an idea forming in my head about an app being more like a radio DJ, looking for “links” between the songs in terms of their artist names, titles or genres. Maybe that’s the next development concept. Watch this space.

Posted in Code & Development, Thoughts on the World

Why REST Doesn’t Make Life More Rest-full

Really Rest-full (Cuba 2010)
Camera: Canon EOS 7D | Lens: EF-S15-85mm f/3.5-5.6 IS USM | Date: 20-11-2010 15:41 | ISO: 200 | Exp. bias: -1/3 EV | Exp. Time: 1/250s | Aperture: 9.0 | Focal Length: 53.0mm (~85.9mm) | Lens: Canon EF-S 15-85mm f3.5-5.6 IS USM

As I have observed before, IT as a field is highly driven by both fashion and received wisdom, and it can be difficult to challenge the commonly accepted position.

In the current world it is barely more politically acceptable to criticise the currently-dominant model of REST, Javascript and microservices than it is to audibly assess the figure of a female co-worker. I was seriously starting to think that I was in some age-defined Luddite minority of one in not being 100% convinced about the universal goodness of that model, but then I discovered an encouraging article by Pascal Chambon, “REST is the new SOAP”, and realised that it’s not just me. I am not alone.

I don’t want to re-create that excellent article, and I recommend it to you, but it is maybe instructive to provide some additional examples of the failings Chambon calls out. I have certainly fallen foul of the quasi-religious belief that REST is somehow “better because it uses the right HTTP verbs”, and that as a result the “right verbs must be used”. On my last contract there was a lengthy argument because someone became convinced I was using the wrong ones. “You’re using POST to do a DELETE. That’s wrong.”

“No, we’re submitting a request to do a delete, if approved. At some later point, after the request has been reviewed and processed, this may or may not result in a low-level delete action, but the API is about the request submission. And anyway, you can’t submit a proper payload with a DELETE.”

“But you’re using a POST to do a DELETE…”

In the end I mollified him slightly by changing the URL of the API so that the endpoint wasn’t …/host, but …/host/request, but that did feel like the tail wagging the dog.

Generally REST promotes a fairly inflexible CRUD model, by default without the ability to specify exactly which items are retrieved or updated. In a good design we may need a much richer set of operations. In either an RPC approach (as outlined in Chambon’s article), or a “remote object access” approach, such as one based on SOAP, we can flexibly tailor the operations precisely to the needs of the solution.

Here’s a good example. I need to “rename” an object, effectively changing its primary key. In the REST model, I have to choose one of the following:

  • Add extra fields to the PUT payload to carry the “new” and “old” keys, and write both client- and server-side conditional code around their values, or an additional “operation” value
  • Do a DELETE (with the old key) followed by a POST (with the new one), making sure that all the other data required to recreate the record is passed back for the POST, and write a host of additional code to handle cases like the DELETE succeeding but the POST failing, or the POST being treated as a new item, not just an update (because it’s not a PUT).
  • Have a dedicated endpoint (e.g. …/object/rename) which accepts a POST operation with just the required data for the rename. That would probably be my favourite, but I can hear the REST purists screaming in the wind…

In a SOAP model, I can just have an explicit Rename(oldkey, newkey) operation on a service named for the underlying business object. Simples.
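
For what it’s worth, the dedicated-endpoint option needn’t be much code either. Here’s a minimal sketch using Flask, with an invented payload shape and a toy in-memory store standing in for the real persistence layer:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
objects = {"old-key": {"description": "some business object"}}   # toy in-memory store

@app.route("/objects/rename", methods=["POST"])
def rename_object():
    """Dedicated rename operation: a POST carrying just the old and new keys."""
    payload = request.get_json()
    old, new = payload["oldKey"], payload["newKey"]
    if old not in objects:
        return jsonify(error="unknown key"), 404
    if new in objects:
        return jsonify(error="new key already in use"), 409
    objects[new] = objects.pop(old)          # the whole rename, atomically, on the server
    return jsonify(renamed={"from": old, "to": new})
```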

So Is SOAP The Old REST?

I’m comfortable with Chambon’s casting of REST as the supposed handsome hero who turns out to be a useless, treacherous bastard. I’m less comfortable with the casting of SOAP as the pantomime villain (boo hiss).

Now your mileage may vary, and Chambon obviously had some bad experiences, but in my own experience SOAP is a very strong and reliable technology which a lot of the time “just works”. I’ve worked in environments where systems developed in .Net, Oracle, Enterprise Java, a LAMP stack and Python cheerfully exchanged data with each other using SOAP, across multiple physical locations, with relatively few complexities and usually just a couple of lines of code to access a full object model with formal schema and policy support.

In contrast, even if you navigate through the various different ways a REST service may work, inter-platform operation is by no means as simple as claimed. In just the past week I wasted about half a day trying to pass a body parameter between a Python client and a REST API presented by .Net. It should have worked. It didn’t. I converted the service to SOAP, and it worked almost first time. (Almost. It would have been even quicker if I’d remembered to RTFM…)

Notwithstanding the laudable attempts to fill the gap for REST, SOAP is still the only integration technology where every service has full machine and human readable documentation built in, and usually in a standard fashion. Get a copy of the WSDL (Web Services Description Language) either from the service itself, or separately, and you know what it does, with what data, and, where it’s relevant to the client, how.

To extend the theatrical metaphor, in my world SOAP is the elderly retired hero who’s a bit pedantic and old-fashioned, maybe a bit slow on his feet, but actually saves the day.

It’s About the Architecture, Stupid

Ultimately it doesn’t actually matter whether your solution uses REST, SOAP, messages, distributed objects or CSV file transfers. Any can be made to work with sufficient attention to the architecture. All will fail in the presence of common antipatterns such as complex mixed data models, massive functional decomposition to too fine a level, or trying to make high-frequency chatty exchanges over higher-latency links.

Modern technologies attempt to hide a lot of technical complexity behind simple abstraction layers. While that’s an excellent approach overall, it does raise a risk that developers are unaware of how a poor design may cause underlying technical problems which will cause failure. For example while some low-level protocols are more tolerant than others, the naïve expectation that REST will work over any network regardless “because it is based on HTTP” is quite wrong. REST, SOAP and plain old web pages can all make good, efficient use of HTTP. REST, SOAP and plain old web pages will all fail if you insist on a unit of work being composed of vast numbers of separate small exchanges rather than a few larger ones. They will all fail if you insist on transferring large amounts of unfiltered data to the client, when that data should be pre-processed and filtered on the server. They will all fail if you insist on making every low-level exchange a network service when many of these should be direct in-process operations.

Likewise if you have a load of services, whether your own microservices or third party endpoints, and each service defines its own data structure which may be subject to change, and you try and directly consume and produce those proprietary data structures everywhere you need them, you are building yourself a world of pain. A core common data model with adapters for each format will serve you much better in the long run.
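
To sketch what I mean (all the names and fields here are invented for illustration): one adapter per external format, and everything else only ever sees the canonical shape.

```python
from dataclasses import dataclass

@dataclass
class CanonicalCustomer:
    """The one shape the rest of the solution is allowed to depend on."""
    customer_id: str
    full_name: str
    email: str

def from_crm(record: dict) -> CanonicalCustomer:
    # Adapter for a hypothetical CRM service's own structure
    return CanonicalCustomer(record["custRef"],
                             f"{record['firstName']} {record['lastName']}",
                             record["emailAddress"])

def from_billing(record: dict) -> CanonicalCustomer:
    # Adapter for a hypothetical billing service's quite different structure
    return CanonicalCustomer(record["account"]["id"],
                             record["account"]["holder"],
                             record["contact"]["email"])

# If either service changes its format, only its own adapter needs to change.
```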

So Does Technology Choice Matter?

Ultimately no. For example, I have built an architecture with an underlying canonical data and adapter model but using REST for every exchange we controlled and it worked fine. Also in the real world whatever your primary choice you’ll probably have to deal with all the others as well. That shouldn’t scare you, but I have seen REST-obsessed developers run screaming from the room at the thought of having to use SOAP as well…

However, a good base choice will definitely make things easier. It’s instructive to think about a layered model of the things you have to define in a complex integration:

  • Documentation
  • Functionality
  • Data structure and format
  • Data encoding and transport
  • Policies
  • Service location and routing

SOAP is unique among the options in always providing built-in documentation for the service’s functions, data structures and policies. This is a major omission in the REST world, which is progressively being addressed by the Swagger / OpenAPI initiative and variants, but they will always be optional add-ons with variable coverage rather than a fundamental part of the model. For all other options, documentation is necessarily external to the service itself, and it may or may not be up to date and available to whoever needs it.

Functionality is discussed above and in Chambon’s article. Basically REST maps naturally to CRUD operations, and anything else is a bit of a bodge. SOAP and other RPC or distributed object models provide direct, explicit support for whatever functions are required by the business problem.

SOAP provides built-in definition and documentation of data structures and formatting, using XML Schema which means that the definition is machine and human readable, standardised, and uses namespaces and references to manage, for example, items with the same name but different uses and formats. Complexities such as optionality and alternative structures are readily defined. In addition a payload can be easily verified against the defined schema. Swagger optionally adds similar capabilities to the REST model, although without some discipline it’s easy for the implemented service to differ from the documented one, and it’s less easy to confirm that a given payload conforms. Both approaches focus on syntactic definition with semantic guidance optional and mainly through comments and examples.
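
To illustrate that last point about verification, checking a payload against its schema takes only a few lines with a library such as lxml (the file names here are placeholders):

```python
from lxml import etree

schema = etree.XMLSchema(etree.parse("service-schema.xsd"))   # the agreed contract
document = etree.parse("incoming-payload.xml")

if not schema.validate(document):
    for error in schema.error_log:
        print(error.message)    # precise complaints, before any business logic runs
```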

In terms of encoding the data, the fashionable approach is JSON. The major benefits are that it’s simple, payloads are a bit smaller than the equivalent XML, and that it’s easy to parse into and generate from equivalent data structures in languages like Python.

However, I’m not a great follower of fashion. XML may be less trendy, but it offers a host of industrial-strength features which may be important in more complex use cases. It’s easy to unambiguously indicate the schema for each document and validate against it. If you have non-ASCII or binary data then its encoding is unambiguously defined. It’s easy to work separately with fragments of a larger document if you need to. Personally I also find XML easier to read and manually edit if I have to, but I accept that’s a bit subjective. One argument is that JSON is easier to render into an HTML page, but I’ve achieved much the same without any procedural code at all using XML with XSLT.

Of course, there’s no real need to have to choose. The best REST APIs I have worked with have the ability to generate equivalent JSON and XML from the same queries, and you choose which works best in a given context. Sadly this is again a bit too much for the REST purists, but a good solution when it works.

Beyond the functional definition of a service and its data, we also have to consider the non-functional behaviours, what are often referred to as “policies” in this context. How is the service secured? What encryption is applied to payloads and headers? What is the SLA, and what action should you take if it is exceeded? Is asynchronous or callback behaviour defined? How do I confirm I have all the required items in a set of exchanges, and what do I do about missing ones? What happens if a service fails, or raises an error?

In the early 2000s, when web services were a new concept, a lot of effort was invested in trying to establish standard ways to define these policies. The result was a set of extensions to SOAP known as the WS-* specifications: a set of rules to enable direct and potentially automated negotiation of all these aspects based on standardised information in the service WSDL and SOAP headers. The problem was that the standards quickly proliferated, and created the risk of making genuinely simple cases more complex than necessary. REST emerged as a simpler alternative, but with a KISS ethic which means ignoring the genuinely complex.

Chambon’s article touched on this in his discussion of error coding, but there are many other similar aspects. REST is a great solution for simple cases, but should not blind the developer to SOAP’s menu of standard, stronger solutions to more difficult problems.

A similar choice applies at the final level, that of locating and connecting service endpoints at runtime. For many cases we simply rely on network infrastructure and services like DNS and load balancing. However when this doesn’t meet more complex requirements then the alternatives are to construct or adopt a complex proprietary solution, or to embrace the extended standards in the WS-* space.

One technology choice is important. A professional modern Integrated Development Environment such as Visual Studio or IntelliJ IDEA will do much of the “heavy lifting” of development, and does make work much quicker and less error-prone. I completely fail to understand why in 2018 some developers are still trying to do everything with vi and a Unix command line. When I was a schoolboy in the 1970s there was a saying “shouldn’t you have handed that in at the end of the war?”, referring to people still using or hoarding equipment issued in WW2. Anyone who is trying to do software development in the late 2010s with the software equivalent deserves what they get… It is a mistake to drive a solution from the constraints of your toolset.

Conclusions

The old chestnut that “to the man who only has a hammer, every problem looks like a nail” is nowhere more true than in software development. We seem to spend a great deal of effort trying to make every new software technique the complete solution to life, the universe, and everything, rather than accepting that it’s just another tool in the toolbox.

REST is a valid addition to the toolbox. Like its predecessors it has strengths and weaknesses. It’s a great way to solve a whole class of relatively simple web service requirements, but there are definite boundaries to that capability. When you reach those boundaries, be prepared to embrace some older, less-fashionable but ultimately more capable technologies. A religious approach will fail, whereas one based on an architectural viewpoint and an open assessment of all the valid options has a much greater chance of success.

Posted in Agile & Architecture, Code & Development

The Architect’s USP

Standing out in the marketplace (Morocco 2013)
Camera: Panasonic DMC-GX7 | Date: 11-11-2013 17:09 | Resolution: 3064 x 3064 | ISO: 1600 | Exp. bias: -33/100 EV | Exp. Time: 1/500s | Aperture: 8.0 | Focal Length: 300.0mm | Location: Djemaa el Fna | State/Province: Marrakech-Tensift-Al Haouz | See map | Lens: LUMIX G VARIO 100-300/F4.0-5.6

Very early on in any course in marketing or economics you will encounter the concept of the "Unique Selling Proposition", the USP, that factor which differentiates a given product or service from its competitors. It’s "what you have that competitors don’t", a key reason to buy this one rather than an alternative.

With the current trend away from development specialisms such as architect towards relatively homogenous development teams, it is perhaps instructive to ask "What is the architect’s USP?" Why should I employ someone who claims that specialism, and give him or her design responsibility, rather than just expecting my developers to cover it?

I have written elsewhere about why I don’t buy into the ultra-agile concept of "architecture emerging from the code", any more than I would bet money on the script for Hamlet "emerging" from a finite group of randomly typing monkeys. (Of course, if you have an infinite number of monkeys then it’s more achievable, but that’s infinity for you…) However that argument is about process, and I believe that almost irrespective of process a good architect’s skills and perspectives can have a significant beneficial effect on the result. That’s what I want to explore here.

The Architect’s Perspective

One key distinction between the manager, the architect and the developer is that of perspective. As an architect I spend a lot of time understanding and analysing the different forces on a problem. These design forces may be technical, or human: financial, commercial or political. The challenge is to find a solution which best balances all the design forces, which if possible satisfies the requirements of all stakeholders. It is usually wrong and ultimately counter-productive to simply ignore some of the stakeholders or requirements as "less important" – any stakeholder (and by stakeholders I mean all those involved, not just senior managers) can derail a project if not happy.

Where design forces are either aligned or orthogonal, there is usually a "sweet spot" which strikes an acceptable balance. The problem effectively becomes one of performing a multi-dimensional linear analysis, and then articulating the solution.

However, sometimes the forces act in direct opposition. A good example is system security, where requirements for broad, easy access directly conflict with those for high security. In these cases the architect has to invest heavily in diplomacy – spending a lot of time understanding and addressing the different stakeholder positions. One common problem is "requirements" expressed as solutions, which usually hide an underlying concern that can be met in many ways, once understood and articulated.

In cases of diametrically opposed requirements, there are usually three options:

  • Compromise – find an intermediate position acceptable to both. This may work, but it may be unacceptable to both, or it may fatally compromise the architecture.
  • Allow one requirement to dominate. This has to be a senior level business decision, but the architect must be sensitive to whether the outcome is genuinely accepted and viable, or whether suppressing the other requirements will cause the solution to fail.
  • Reformulate the problem to remove or reduce the conflict. In the security example the architect might come up with a cunning partitioning of the system which allows access to different elements under different security rules.

Of course, you can’t resolve all the problems at once – that way lies madness. An architect uses techniques like layered or modular structures, and multiple views of the architecture to "separate concerns". These are powerful tools to manage the problem’s complexity.

The architect must look at the big picture, balance the needs of multiple stakeholders, and bring to bear an understanding of the business, of strategy, of technology and of development project work at the same time. If these responsibilities are split among too many heads and isolated within separate organisational confines then you lose the ability to see how it all fits together, and increase the danger of things "falling through the cracks".

The Architect’s Responsibilities

The architecture, and its resolution of the various design forces (i.e. how it meets various stakeholder needs) have to be communicated to many who are not technical experts. The architect acting as technical leader must take much of this responsibility. The messages may have to be reformulated separately for different audiences: I have had great success with single-topic briefing papers, which describe aspects like security in business terms, and which are short and focused enough to encourage the readers to also consider their concerns separately.

The architect must listen to the voice inside, and carry decisions through with integrity. For an architect, the question is whether the architecture is elegant, and will deliver an adequately efficient, reliable and flexible solution. If the internal answer to this is not an honest "yes", it is important to understand why not, and decide whether all the various stakeholders can live with the compromises.

The architect must protect the integrity of the solution against the slings and arrows of outrageous projects. (Hamlet again?) Monitor in particular those design aspects which reflect compromises between design forces, because they will inevitably come under renewed pressure over time. The architect must not only do the right thing, but ensure it is done right.

While every person on the project should be doing these things, there is a natural tendency for most to allow delivery priorities to take precedence. A developer’s documentation, for example, must be adequate to communicate the solution to other developers and maintainers, but does not have to be comprehensible to other stakeholders. However for the architect integrity, fit and communication of the solution are primary responsibilities, not optional. In addition the architect should have sufficient independence to call out and challenge conflicts of interest when they do occur.

The Architect’s Skills

The architect should be equipped with a distinct set of skills in support of these responsibilities. These will include:

  • Design patterns and knowledge of how to apply them
  • Tools and techniques to formally document both detail designs and wider portfolios
  • Methods to ensure that requirements, especially non-functional ones, are documented unambiguously
  • Methods to review a solution design, model its behaviour and confirm the solution’s ability to meet requirements
  • The ability to clearly communicate solutions, issues and potential resolutions to a wide variety of stakeholders
  • The ability to support the project and programme managers in handling the impact of issues and related decisions

Now it’s perfectly possible (and highly desirable) that others on the project will have many of these skills between them. However their combination in the architect is key to the delivery of the architect’s value, and a solution with a good chance of meeting its various objectives.

The Architect’s Position

A good architect should be able to operate in various organisational positions or roles and still deliver the above. Irrespective of the official organisation chart I often end up working between two or more groups, and I suspect this is a common position for many architects. It may actually be a natural result of adopting the architect’s unique perspectives.

The architect’s role may to some extent overlap with that of developers, analysts or product owners, and in smaller organisations or projects the architect may also take on one of these roles. In that case the architect must be able to "wear the appropriate hat" when focusing on a specific project issue or taking a wider view. The architect must then ensure that his or her ability to look at the wider picture is not compromised by the project relationship.

Conversely, a central architecture group may be accused of sitting in an ivory tower, separate from the realities of the business and the developers at the coal face. An architect in such a position must actively display an interest in and willingness to help with practical project issues.

A good architect will reconcile the need for a broad perspective and the specific responsibilities of a given position, thereby delivering distinct value compared with someone who has a more specific scope. I may on occasion be challenged for taking a wider interpretation of scope than others, but the insights which accrue from that perspective are almost always seen as valuable.

Conclusions

These are generalisations, and in practice there are as many variants on the architect’s role, skills and delivery as there are individuals who take the title. However it is generally true that an architect’s involvement increases the chance that a solution’s behaviour will be predictable, understood, and a good fit to its objectives. That’s the fundamental USP of the architect.

Posted in Agile & Architecture

To BD or Not to BD

Should I buy the Blu-Ray?

So you have a collection of several hundred DVDs, you’ve finally managed to remove almost every VHS tape from the house, and you’ve bought a shiny new TV and disk player. Which, if any, of your existing disks should you replace with new versions, and which versions should you buy?

We have a large video collection, and we’ve already owned several versions of some titles, maybe a couple of different tapes or different DVD releases. Replacing some of our existing disks might make sense, but we really don’t want to do it wholesale when we’ve already got "good" copies of a lot of stuff. Our experience is that there are cases where the cost of replacement is fully justified, and others where it is just a waste of money. I thought it might be useful to try and distil that experience into some guidelines for others in the same predicament.

This does assume that you like "big" films, or the best output of National Geographic and the BBC Wildlife Unit. If fluffy romantic comedies are your thing, or you like budget arthouse movies, then this may not apply. That’s also the case if you don’t like 3D, or your system doesn’t support it (ditto 4K). Please modify this advice accordingly.

Newer Films

The first thing to say is that if you have a "good" DVD of a film or TV series made after about 1995, and it’s not covered by one of the following special cases, then there’s limited benefit to replacing your DVD with the equivalent Blu-Ray. If your disk player does a good job of "upscaling" to HD, or even 4K, then the change will be marginal and you will wonder why you spent that money. If your disk player does not play recent high-quality DVDs well, then your money is better spent on getting a better one.

Crude DVD Transfers

A lot of my DVDs, even for big blockbuster films, are fine based on the previous advice, and aren’t going anywhere. However there are exceptions. These tend to be films from the 1980s and 1990s which were released on VHS and then pushed to DVD using the same digital version, and while the quality was adequate for viewing in the early 2000s, it shows up really badly on newer kit. Grainy/noisy video and inaudible sound are common problems. The dead give-away is when your DVD player produces a half-sized picture in the middle of the screen, suggesting that the video isn’t even full DVD resolution.

This is true of my DVDs of some quite major films, including Robin Hood Prince of Thieves and Tremors. Buy the Blu-Ray, but look for some evidence like the word "remastered" which suggests that they went back to the film and re-processed it (and didn’t just push the same awful video onto a Blu-Ray). For some favourites the improvement will blow you away, but even in more marginal cases you will be at least less frustrated.

There is an obvious consideration about the quality of the source material. If it was recorded on 1980s videotape there’s a limit to what can be achieved. Sadly, the DVD of Edge of Darkness (the TV masterpiece) is about as good as that’s going to get, but I will be very happy and first in the queue if someone can prove me wrong.

Remastered Classics

Where the source material does support it, which is true of a lot of classic films made in the 1960s and 1970s (and some earlier ones), there’s the option of a frame-by-frame restoration to the highest possible modern video and sound standard. The British Film Institute has done this for favourites such as The Italian Job, Zulu and most of David Lean’s films. MGM/Eon has done it for all the Bond films.

The results, on Blu-Ray, can be absolutely stunning. It’s like a 2010s film crew was transported back and filmed the same performances on modern kit.

In Zulu you can see every barb of every feather on the Zulus’ clothing, and you can see that because Chard and Bromhead were from different regiments, there’s a little piece of dark green trim on one tunic which is dark blue on the other. In The Italian Job you can read the badges on the cars and motorbikes. The night-time scenes in From Russia with Love are no longer muddy brown, but sharp blacks in Istanbul, and with a lovely pre-dawn blue glow on the Yugoslavian border. You can admire the couture workmanship on the Bond girls’ dresses. I could go on.

It’s literally like watching a new film. You’ll see so much you didn’t before.

In fairness, it’s the remastering which makes the difference as much as the disk format. Before we bought the Bond Blu-Ray collection we had a DVD of Goldfinger which was based on the remastered version, and that delivered much of the same benefit, but if you haven’t invested in those intermediate versions then the Blu-Ray is even better.

Films Released in 3D

We love 3D, even if sadly the entertainment industry has fallen out of love with it again, and the availability of support in new kit and new film releases is reducing. If you like it, and your system supports it, and there’s a 3D Blu Ray of a film you have on DVD, get the 3D disk. The video and sound quality will be better, and you’ll enjoy the literal extra dimension to the work.

3D Remasters

A small and select but wonderful set of films have been subject to the best of both worlds, remastering the video, but also retrospectively putting them into 3D. The primary examples are Titanic, Jurassic Park, Predator and Terminator 2: Judgement Day, but there are a few others. Like the remastered 60s films, it’s a whole new level of enjoyment. Highly recommended, even if like me, you have probably purchased each of these films in about 4 different previous versions. While industry trends and costs mean there may not be too many more films given this treatment, the fact that the 3D version of T2 was released just before Christmas 2017 does mean that we shouldn’t give up hope.

4K Remasters

As part of the shift away from 3D, the industry is pushing 4K / UltraHD. (This has twice the resolution of normal Blu-Rays and HD TV, at 2160 pixels vertically.) In addition to 4K versions of new blockbusters, there are some "4K remasters" of big films from the last 20 years. However I’m much less convinced about these.

First, if you have normal eyes, ears and equipment, 4K really isn’t the vast improvement over standard HD Blu-Ray that the hype claims. Part of this is just simple diminishing returns as the picture resolution increases beyond what we can easily distinguish. There’s a very good chart on this at http://carltonbale.com/1080p-does-matter/, reproduced below:

What this boils down to is that unless you are viewing 4K on a 60" screen from about 5′ (1.5m), you’re not going to notice much difference from HD, and in practice, that’s far too close to view a screen of that size. We view our 58" screen from about 8′, which is probably still a bit too close, and I can just about see a difference in normal viewing. Obviously if you’re a 20 year old bird spotter things might be different… 4K is great for a cinema, limited value for a telly.

However, there are also a couple of more insidious problems. Some of the conversions are significantly "overdone" – pushing the contrast to extremes which don’t match the material. The Mummy (the 1999 Stephen Sommers film) is a good example, where the 4K version is a riot of shiny highlights and pitch black shadows, while the Blu-Ray retains the beautiful look of the original film. In addition, many 4K remasters end up with a grainy look which the BD version avoids.

While some of this might be down to my eyes, or my kit, I’ve heard similar complaints elsewhere, including from a couple of guys who run a TV/HiFi shop and whose job is to set up high quality demo systems.

Personally I’m probably going to keep 4K for new blockbusters without a 3D version. If a favourite gets an anniversary 4K makeover I may buy the 4K/BD combo, but I could easily end up watching the Blu-Ray.

What About Streaming?

What about it? It’s a great way to get instant access to material you won’t want to view over and over, and where picture quality is not the key requirement: catching up on box sets is a great example. However if you want quality then streaming is currently still inferior to broadcast HD, which is in turn inferior to a disk, even a good DVD (your mileage may vary…). Don’t throw your disks away yet!

Conclusions

For new purchases, buy at least a Blu-Ray version, and consider the 3D or 4K version if there is one. If the old DVD version isn’t great, and there’s a remastered version on Blu-Ray, then it’s worth an upgrade. However if your existing DVD version is a good one, save your money and buy yourself some new films and shows instead.

Posted in Thoughts on the World

An Odd Omission

Let’s start with a common use case…

"I have a television / hi-fi / home cinema system which has several components from different manufacturers. I would like to control all of them with a single remote control. I would like that remote control to be configurable, so that I can decide which functions are prioritised, and so that I can control multiple devices without having to switch "modes". (For example, the primary channel controls should change the TV channel, but at the same time and without changing modes the volume controls should change the amplifier volume.) As not all of my devices are controllable via Wi-Fi, Infrared is the required primary carrier/protocol. The ideal solution would be a remote control with a configurable touch screen, probably about 6" x 3" which would suit one-handed operation."

I can’t believe I’m the first person to articulate such a use case. In fact I know I’m not, for two reasons. When I set up the first iteration of my home cinema system in about 2004, I read a lot of magazines and they said similar things.

And then I managed to buy a dedicated device which actually did this job remarkably well. It was called a Sunwave Universal Remote, and had a programmable LCD touchscreen. It had the ability to choose which device functions appeared where, and to record commands from existing remotes or define macros (sequences of commands). This provided some, limited, "mixed device" capability, although the primary approach was modal (select the target device, and then use controls for that device). A set of batteries lasted about a year.

There were only two problems. First, as successive TVs became smarter than in 2004 it became an increasing challenge to find appropriate buttons for all the functions from within the fixed option list. Then, after 13 or so years of sterling service the LCD started to die. I still own the control, but it’s now effectively unusable.

My first approach was to try and get a direct replacement. However it’s clear that these devices haven’t been manufactured for years. The few similar items on eBay are either later poor copies, with very limited functionality, or high-end solutions based on old PDAs at ridiculous prices.

But hang on. "a configurable touch screen, probably about 6" x 3"". Didn’t I see such a device quite recently? I think someone was using one to make a phone call, or surf the internet, or check Facebook, or play Angry Birds, or some such. In fact we all use smartphones for much of our technology interaction, so why not this use case?

Achtung! Rabbit hole! Dive! Dive! :)

Why not, indeed? Actually I knew it was theoretically possible, because my old Samsung 10" tablet which was about to go on eBay had some software called "Peel Remote" installed as standard, and I’d played with controlling hotel TVs with it. I rescued it from the eBay pile and had an experiment. The first discovery was that while there’s a lot of "universal remote" software on Google Play, most is rubbish, either with very limited functionality, or crippled by stupid amounts of highly-invasive advertising. There are a few honourable exceptions, and after a couple of false starts I settled on AnyMote developed by Color Tiger. This has good "lookup" support to get you started, a nice editing function within the app, and decent ways to backup and share remote definitions between devices. A bit of fiddling got me set up with a screen which controlled our system much better than before, and it got us through all our Christmas watching.

However picking up a 10" tablet and turning it on every time you want to pause a video is a bit clumsy, so back to the idea of using a phone…

And here’s the problem. Most phones have no infrared support. While I haven’t done any sort of scientific analysis, I’d guess that 70-80% (by model) just don’t have what’s known as an "infrared blaster", the element which actually emits the infrared signals. Given that this is very simple technology, not much more than an infrared LED in the phone’s top edge, it’s an odd omission. We build devices stuffed with every sort of wireless and radio interface, but omit this common one used by much of our other technology.

Fortunately it’s not universal, and there are some viable options. A bit of googling suggested that the LG G2 does have an IR blaster, and I tracked down one for about £50 on eBay. It turns up, the software installs…, and it just doesn’t work. That’s when I find the next problem: several of the phone manufacturers who make both TVs and phones (LG and Sony are the most obvious offenders) lock down their IR capabilities, so they are not accessible to third party software. You can use your LG phone to control your LG TV, but that’s it, and f*** all use to me.

Back on Google and eBay. The HTC One M7 and M8 do have IR and do seem to support third-party software. The M8 is a bit bigger, probably better for my use case, and there’s one on eBay in nice condition for a good price. It turns up, the software installs…, and then refuses to run properly. It can’t access the IR blaster. Back on Google and confirm the next problem. Most phones which have been upgraded from Android 5 or earlier to Android 6 have a changed software interface to the infrared which doesn’t work for a lot of third-party software. Thanks a billion, Google. :-(

OK, last roll of the dice. The HTC One M7 still runs Android 5. I find a nice blue one, a bit more money than the M8 ironically, but still within budget. It turns up, the software installs…, and it works! I have to do a few minor adjustments on the settings copied from my tablet, but otherwise straightforward. I had to install some software to make the phone turn on automatically when it’s picked up, and I may still have to do a bit of fiddling to optimise battery life, but for now it’s looking good…

Third time lucky, but it really didn’t have to be that difficult. For reasons which are impossible to fathom, both Google and most phone manufacturers seem to somewhere between ignoring and actively obstructing this valid and common use case. Ironically, given their usual insularity, things are a bit easier in the Apple world, with good support for third party IR blasters which plug into an iPhone’s headphone socket, but that wouldn’t be a good solution given the rest of my tech portfolio. For now I have a solution, but I’m not impressed.

Posted in Android, Thoughts on the World

The Decisive Moment

My old mum has recently moved from her house to a smaller retirement flat, and is still in the process of sorting out some of the accumulated lifetime’s possessions. On this visit, I was presented with a large carrier bag of old cameras.

I have to say, I wasn’t expecting miracles. Mum and Dad never spent a vast amount on photographic equipment, usually buying a mid-range "point and click", using it till it stopped working and then buying another.

First out, an ancient Canon Powershot, for 35mm film. It probably works, but I tried explaining to Mum that there’s no longer any real market for such items.

"No-one really wants the bother of getting films developed. You don’t – you have a digital camera yourself now, you were using it last night."

"But surely there are people who love old cameras."

"Yes there are, but they have to be a bit special. If this was a Leica, with a little red dot on it, it would probably be worth some money, but not an ancient cheap Canon."

To settle it, I opened up my laptop and had a look on eBay. There were a couple, for about £15 and about £12, both with no bids.

Next up, a similar Panasonic. This still had a film in it, which was suspicious as it probably meant that the camera had died mid-holiday and been abandoned. eBay suggested an asking price somewhere in the range £8 to £11.99. Getting worse.

"I could offer it to the charity shop" said Mum, hopefully.

"Well you could, but don’t be surprised if they are underwhelmed." I told her about my recent experience of having a perfectly good 32" flatscreen TV rejected by our local charity shop, which didn’t encourage her.

"But surely if things still work?"

"I keep on saying, Mum, things have to be a bit special. You know, a Leica or something, with a nice red dot."

Next out of the bag was a Konica. This was a slightly different shape and had the rather ominous indicator "110" in the model number. That’s definitely not a good sign, I mean can you actually still get and process 110 film? (That’s assuming that you can see any point in shooting a format which is distinctly inferior to 35mm in the first place.) Amazingly enough there is one on eBay. £2.99, no bids…

"OK", says Mum, deciding that there’s no point in arguing that one. "There’s one box left in the bag."

What? Hoist by my own petard! I mean, what were the chances??

Sadly it’s actually only a slide box, and eBay suggests that it’s going to get £20 at best, but I am now honour-bound to do my best to find it a good home.

Be careful what you wish for…

Posted in Humour, Photography, Thoughts on the World

Testing vs Modelling, Detection vs Prediction, Hope vs Knowledge

The Challenge

I often hear a statement which worries me, especially but not exclusively in agile projects, along the lines of “we’ll make sure it works when we test it later”.

Now you may think this is an odd view coming from a man who has written testing courses, presented conference papers on testing and developed testing tools, but let me explain myself.

First up, there’s the old chestnut that the objective of testing is not to prove something works, but to find errors. All you can actually do by testing is locate problems to be fixed, although obviously if problems are hard to find, that increases confidence in your product. However the much deeper issue is that testing is commonly viewed as an alternative to properly understanding and documenting the expected behaviour of a system, and reviewing in advance whether a proposed design will deliver that behaviour. That can be a recipe for failure.

Obviously in some areas this is an acknowledged and viable trade-off. If we are exploring functional alternatives, or working in a problem space where extracting documented requirements is tricky, then agile development and testing is a powerful solution, and we accept the rework that may result where we get it wrong. Having said that, even in something like UI development it may be better to develop cheap models such as wireframes, and at least attempt to explore solution fit before we commit too much to code.

The problem is that when we come to the more fundamental architectural elements and non-functional behaviour, the dynamics change dramatically. The best way to show this is a variant of the testing “V Model”:

For functional details, the gap between development and testing is small, and they can quickly be reworked and retested. However some of the key architectural and non-functional aspects can only be fully tested late in the delivery process (and frequently only late in the overall programme), if at all. The “testing gap” becomes huge, the impact of any change substantial, and the rework path lengthy.

One challenge is that many non-functional tests require an environment representative of the technology and scale of the production system. If this is provided at all, it is typically late in the project, or testing has to be shoe-horned into a short window on the production system before operations commence. If that uncovers a major issue, it is simply too late.

That’s assuming that the issue is detectable. In an agile development, it may be difficult to understand “what acceptable looks like”, if there is no adequate agreed, documented definition of the expected non-functional behaviour.

The other challenge is that good non-functional testing is hard, and limited in what it can achieve. Simulating a peak load is difficult, especially with the variety of data in a real production peak. You can simulate planned and unplanned equipment failures and restarts, but by definition only predictable events. If a problem only emerges from lengthy running or a “perfect storm” event, then testing is unlikely to uncover it. Basically resilience is testable, performance may be testable, reliability isn’t. Similar considerations also apply to other non-functional aspects like security.

The Solution

The solution is to adopt an analytical and predictive approach: trying to understand, articulate and document the expected behaviour of the solution, before you build it. Importantly this is not just thinking about the solution (although thinking is vital), but thinking with models.

Models in this context take many forms. They can be diagrams, possibly based on UML, but not necessarily: for example reliability block diagrams or fault tree analyses are powerful tools to understand resilience and reliability. They can be spreadsheets, for example profiling expected transaction mixes and their relative resource requirements. They can also be active software, whether simulations of some expected behaviour, or point implementations to quantify some aspect of the solution, but the point is that their purpose is to understand the solution before a major technical commitment, not to deliver functionality. Irrespective of form all models should lend themselves to a quantitative understanding of the solution, not just “what?”, but “how much?” and “how well?”.
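
To make the spreadsheet idea concrete, here is a minimal sketch of a transaction-mix model in Python rather than Excel. The transaction names, rates and per-transaction CPU costs are invented purely for illustration; the point is the shape of the calculation, not the numbers.

    # Rough transaction-mix model. All figures are invented, purely for illustration.
    # Each entry: (expected peak rate per hour, CPU milliseconds per transaction)
    transaction_mix = {
        "browse":   (200_000, 5),
        "search":   (50_000, 40),
        "checkout": (10_000, 120),
    }

    total_cpu_seconds_per_hour = sum(
        rate * cpu_ms / 1000 for rate, cpu_ms in transaction_mix.values()
    )

    # One core provides 3600 CPU-seconds per hour; allow headroom by capping at ~60% utilisation.
    cores_needed = total_cpu_seconds_per_hour / 3600 / 0.6
    print(f"{total_cpu_seconds_per_hour:.0f} CPU-seconds per hour, "
          f"roughly {cores_needed:.1f} cores at 60% peak utilisation")

Ten minutes with a model like this and you already have an answer to “how much?”, not just “what?”.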

For example, here’s a simple redundancy scheme modelled using RelQuest, my own Visio-based fault tree analysis tool, from which we can not only understand the various combinations of failures which lead to loss of service, but the relative probability and impact (e.g. Mean Time to Repair) for each combination.
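
The underlying arithmetic of such a fault tree is straightforward AND/OR probability algebra. Here is a minimal sketch (not RelQuest itself, and the failure probabilities are invented) for a load balancer in front of two redundant servers:

    # Fault-tree-style calculation with invented figures; not RelQuest itself.
    # Service is lost if the load balancer fails OR both servers fail together.
    lb_unavailability     = 0.001   # assumed fraction of time the load balancer is down
    server_unavailability = 0.01    # assumed fraction of time a single server is down

    both_servers_down = server_unavailability ** 2                             # AND gate, independent failures
    loss_of_service = 1 - (1 - lb_unavailability) * (1 - both_servers_down)    # OR gate

    print(f"P(loss of service) = {loss_of_service:.5f}")
    print(f"Share due to the load balancer: {lb_unavailability / loss_of_service:.0%}")

Even a toy example like this immediately shows which single point of failure dominates, and therefore where the engineering effort should go.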

Models and simulations provide you with an early understanding of the system behaviour, so you can understand whether something should work, or not, and if not where to focus your efforts. They can be detailed, like the example fault tree above, or as simple as an early first pass through a platform provider’s sizing tool, but even a more approximate approach may provide value.

Numbers are your friends. I am a great fan of Fermi estimates (see the sidebar) – quick “order of magnitude” approximations to see if you have understood the key elements in a problem, and whether the answer looks viable or not.

You can easily get viable estimates of this type for performance, capacity or reliability. If the answer is “no problem”, like we can easily accommodate millions of transactions per hour on a single server and we expect thousands, then you’re probably fine. If the answer is the other way round, like the developer who proudly presented me with a solution which would take 1s CPU time to do a calculation when we needed to do a thousand a second, then the design needs to change (I got it down to 2ms, which was acceptable). If it’s marginal, then you probably need to do a more accurate model and calculation, or build a greater degree of flexibility into the solution.
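
Using the figures quoted above, the whole check fits in a few lines; a sketch:

    # Fermi check using the figures quoted above.
    required_rate = 1000   # calculations needed per second

    for label, cpu_seconds_per_call in [("original design", 1.0), ("after optimisation", 0.002)]:
        cpu_demand = required_rate * cpu_seconds_per_call   # CPU-seconds needed per elapsed second
        print(f"{label}: ~{cpu_demand:g} cores' worth of CPU required")

    # original design: ~1000 cores -> the design has to change
    # after optimisation: ~2 cores -> comfortably achievable on a modest server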

Simulations or low-volume experiments may be a valid way to understand CPU, storage and memory usage, network bandwidth requirements, threading, virtualisation, and even failover behaviour. Anything which scales linearly can be measured at low volume and extrapolated, but you need to be wary of areas such as network latency or storage throughput where that may not be valid.
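
As a sketch of that kind of extrapolation (all the measurements here are invented), the trick is to separate the fixed overhead from the per-transaction cost before scaling up:

    # Linear extrapolation from a low-volume test; all measurements are invented.
    test_rate_tps  = 50      # transactions per second during the low-volume run
    idle_memory_mb = 600     # memory with the system idle (fixed overhead)
    test_memory_mb = 900     # memory observed during the run
    test_cpu_util  = 0.04    # 4% CPU observed during the run

    target_rate_tps = 2000
    scale = target_rate_tps / test_rate_tps

    projected_cpu    = test_cpu_util * scale
    projected_memory = idle_memory_mb + (test_memory_mb - idle_memory_mb) * scale

    print(f"Projected CPU at {target_rate_tps} tps: {projected_cpu:.0%} (over 100% means more or bigger servers)")
    print(f"Projected memory: ~{projected_memory:.0f} MB")
    # Beware: latency, lock contention and storage throughput rarely scale this cleanly.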

Ultimately anything which builds your understanding and proves that you have thought about the problems in advance is good, even if some detail may only be confirmed at later stages. The key point is that the problems become targets for analytical thinking rather than hope and prayers, and that makes them solvable.

The Conclusion

Testing on its own is absolutely necessary, but very much not sufficient. For tests to be meaningful you have to describe the predicted behaviour in advance, and for the system to have any chance of passing those tests it has to be engineered accordingly. We increasingly seek to drive functional development from written user stories and behaviour specifications. In the same way, professional development must be driven by quantitative models which forecast non-functional behaviour for testing to confirm, not discover by surprise.

 

I love Fermi estimates, named for the great Italian-American physicist Enrico Fermi, who was always doing them. These are calculations which you know have a lot of inaccuracies, but which are simple enough to do quickly and get an answer which is “sort of right” to tell you if you have correctly understood the dimensions of the problem, and if something should work, or not.

Let’s do one. This is not about computing, but is an easy example to understand the process. How much does my house weigh?

Well my house is built mainly of brick, and for the purposes of this calculation can be thought of as a rectangular block roughly 8m x 12m, and about 3m high. (I happened to have these figures, but I could always just pace it out and use 1 pace = 1m). Allow for internal walls, and you could think of my house as four slabs of brick 8m long x 3m high, and four slabs 12m long x 3m. Alternatively that’s 4 slabs 20m long, or one slab 80m long. But remember that all the walls are at least two bricks thick, so it’s like one stack of single brick 160m long and 3m high. Now I know this doesn’t take any account of windows and doors, and the open plan bit at the front, but it’s also ignoring the roof and floor slabs, and I think that will balance out quite well. Google “house brick dimensions” gives us 215mm long and 65mm high, and a typical weight of 3.5kg. Divide 160m by 0.2m (this is a Fermi approximation remember) to get 800 bricks long. At 65mm high 3 bricks on top of each other will also be about 0.2m high, so the height of our stack will be 3x3m/0.2m = 45 bricks high, call it 50. That gives us a grand total of 50×800=40,000 bricks. Now 40,000×3.5kg = 140,000kg, or 140 tons. Fermi approximations are good for at best one significant figure, so round it off to 100 tons. Bingo!
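
For anyone who prefers the arithmetic laid out step by step, here is the same estimate as a few lines of Python, using the same rounded figures as above:

    # The house-weight estimate above, using the same rounded figures.
    wall_run_m = 4 * (8 + 12)               # four 8m slabs plus four 12m slabs = 80m of wall
    single_brick_run_m = wall_run_m * 2     # walls two bricks thick -> 160m of single-brick wall
    bricks_per_course = single_brick_run_m / 0.2   # bricks ~0.2m long -> 800
    courses = 50                            # 3 bricks per 0.2m of height, walls 3m high -> ~45, call it 50

    total_bricks = bricks_per_course * courses     # 40,000
    weight_tonnes = total_bricks * 3.5 / 1000      # 3.5kg per brick -> ~140 tonnes

    print(f"{total_bricks:.0f} bricks, roughly {weight_tonnes:.0f} tonnes")
    # Fermi estimates are good to one significant figure at best, so call it 100 tonnes.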

So a simple model can get you a useful answer quickly, and you may even be able to do the maths in your head. Now obviously there are a lot of guesses and approximations here, like assuming the density of all key materials is similar, and I haven’t so far accounted for the foundations, which might need to be included, and I might want to double-check the typical weight of a brick, which is a key value, but I’d be surprised if the “real” answer wasn’t somewhere between 50 and 300 tons.

You can easily do the same thing to get viable “order of magnitude” figures for performance, capacity or reliability.

Posted in Agile & Architecture | Leave a comment

Does Agile Miss The Point About Engineering?

A bicycle-car

A former colleague, Neil Schiller, recently wrote an excellent article, https://www.linkedin.com/pulse/agile-data-programmes-neil-schiller/, on the challenge of using agile approaches in data-centric programmes. In it, he referenced and reviewed a classic cartoon by Henrik Kniberg which is often used to promote the advantages of agile delivery:

Now it’s wholly possible that I am reading more into a limited analogy than appropriate, but I think this same diagram can also be used to explain some of the fundamental issues with agile approaches.

Think about what the bottom line is claiming: that by a set of small incremental deliveries we can somehow achieve the equivalent of transforming a scooter into a bicycle, into a motorbike and then into a car, each a fully working vehicle meeting the user’s requirements. In the real physical world this is laughable: each has a wholly different architecture with no commonality whatsoever between equivalent subsystems at any of the stages. Key properties arise from the fundamental structure – a simple tubular chassis for the bicycle, a more complex frame including complex stressed moving parts like the engine and transmission for the motorbike, typically a monocoque chassis/exoskeleton for the car. These underlying elements form the basis, and you have to get them right as you can’t modify them later: you can’t “add strength” to a car by adding more tubes after the event (unless you are going banger racing!).

In the real physical world you create a complex engineered artefact by understanding its required properties, creating a layered structure which is designed to meet them, and then building up those layers to progressively deliver the required result. This requires that the most fundamental, least readily changed layers have to be right, and stable, early on, and only then can you add the upper more flexible elements. The first version of the process in the diagram is actually wholly correct, the second a joke.

Is it so very different for software? If we’re talking about major systems with real-world complexity and non-functional demands, I’m not convinced. The “ultra-agile” argument is that it is always possible to “refactor” code to make changes. This is true up to a point, but it can be difficult and costly to change the underlying structure. If that does not meet requirements for security, or reliability, or performance, then no amount of fiddling will fix it, and if changes amount to a fundamental rewrite then it’s difficult to see where any advantage has been gained.

Obviously there are differences. The vehicle designer seeks to both create and use solutions which once right will be re-used many times (from hundreds to millions of instances), but will be difficult to change once in production. Software development is still largely about one-offs. Software requirements are typically less well-defined than for established hardware products. In vehicle manufacture, the roles of engineer/designer and constructor are distinct, whereas in software the designers often have an ongoing role in construction, and may at least subconsciously seek to extend that role (guilty as charged :) ). On the other hand, the car designer knows that an approved design will be built largely as documented, whereas the software designer has no such assurance.

Fundamentally, however, I believe that software development can benefit from engineering disciplines just as much as the design of physical products. For example, it is much better to attempt to understand and predict up front how a given design will respond against non-functional requirements. Testing is a very good way to confirm that your solution basically works and to refine the details. It is a very bad way to uncover fundamental deficiencies, especially if this occurs late in the development process.

This doesn’t mean that I don’t believe in agile development. Far from it, I am a great believer in iterative and incremental development, and structures such as Scrum sprints to manage them. However, I really don’t believe in architecture “emerging from the code”, just the same as I would not expect to see a great car design “emerge” from the work of a group of independent fabricators working on small parts of the problem without any overarching design. Cars “designed” in such a way tend to be more Austin Allegro (or AMC Pacer) than Bugatti Veyron.

Instead, Architecture has to be understood as providing the structure within which the code is developed, with that overall structure developed using engineering disciplines: assess the various forces on the design, articulate how these forces will be resolved (including what compromises are required), then document and model the solution to predict its properties.

If the requirement is for a sports car, design a sports car, don’t try and “refactor” a pushbike…

Creation of such designs, documents and models is a distinct discipline from coding. Some of this may be the domain of specialists, some may be performed by those who also have other development roles, but it is a separate activity requiring appropriate skills and experience. Ironically I think Tom Gilb got it about right in his 1988 book “Principles of Software Engineering Management”, when he defined “Software Engineer” as someone who “can translate cost and quality requirements into a set of solutions to reach the planned levels”, and who has the skill to change any given quality dimension of a system by a factor of ten if required. The latter challenge would catch out a lot of people who call themselves “architects”.

In addition complex designs need some form of centralised, overall ownership and design control – this again requires specialist skills and cannot just be allocated randomly, but will sit with an Architect and/or a Product Owner.

Within such a framework concepts such as continuous integration and testing still make sense. Development, both functional and non-functional, can still be managed via the backlog and sprint plans, epics and stories. However the “minimum viable product” may require completion of much of the underlying architecture as well as major functional capabilities. Major capabilities, both functional and non-functional, have to be analysed and designed up front, not left to stories somewhere in the backlog. The intermediate delivery is a car, albeit incomplete, not a complete bicycle.

Agile development and architecture are not incompatible, but complementary. Successful development of a complex real-world system will inevitably follow the first model in Kniberg’s cartoon, no matter how much the agilists would like it to be the second. At scale, and in the face of more challenging requirements, software development needs to be treated as an engineering discipline, with agile structures in service of that discipline, not avoiding it.

Posted in Agile & Architecture | Leave a comment

The Hut of Alleged Towels

The Hut of Alleged Towels, The Crane, Barbados
Camera: Panasonic DMC-GX8 | Date: 10-11-2017 13:34 | Resolution: 5184 x 2920 | ISO: 200 | Exp. bias: -33/100 EV | Exp. Time: 1/500s | Aperture: 5.6 | Focal Length: 27.0mm | Lens: LUMIX G VARIO 12-35/F2.8

The Crane Hotel, Barbados has a hut whose purpose is to take in used beach towels, and dispense fresh ones. It has no other purpose. It is staffed during daylight hours by a helpful young chap, but on our recent visit he seemed to rarely, if ever, have any towels to dispense. Now if I was the manager and paying that chap’s salary, I would make sure enough laundry was being done to provide a reasonable supply, but then I’m weird…

We took to calling it "The Alleged Towel Hut", but then decided that was unfair. The hut itself satisfies reasonable standards of proof of its existence. The towels do not. Hence we have decided on a better term. This is now officially "The Hut of Alleged Towels". :)

Posted in Barbados, Humour, Thoughts on the World, Travel | Leave a comment

Architecture Lessons from a Watch Collection

Early 1990s Hybrid Watches
Camera: Panasonic DMC-GX8 | Date: 21-10-2017 10:06 | Resolution: 5118 x 3199 | ISO: 1250 | Exp. bias: -66/100 EV | Exp. Time: 1/60s | Aperture: 8.0 | Focal Length: 30.0mm | Lens: LUMIX G VARIO 12-35/F2.8

I recently started a watch collection. To be different, to control costs and to honour a style which I have long liked, all my watches are hybrid analogue/digital models. Within that constraint, they vary widely in age, cost, manufacturer and style.

I wanted to write something about my observations, but not just a puff piece about my collection. At the same time, I am long overdue to write something on software architecture and design. This piece grew out of wondering whether there are real lessons for the software architect in my collection. Hopefully without being too contrived, there really are.

Hybrid Architectures Allow the Right Technology for the Job

There’s a common tendency in both watch and software design to try and solve all requirements the same way. Sometimes this comes out of a semi-religious obsession with a certain technology, at others it’s down to the limitations of the tools and mind-set of the designer. Designs like the hybrid watch show that allowing multiple technologies to play to their strengths may be a better solution, and not necessarily even with a net increase in complexity.

Using two or three rotating hands to indicate the time is an excellent, elegant and proven solution, arguably more effective for a “quick glance” than the digital equivalent. However, for anything beyond that basic function the world of analogue horology has long had a very apt name: “complications”. Mechanical complexity ratchets up rapidly, for even the simplest of additional functions. Conversely, even the cheapest of my watches has a stopwatch, alarm and perpetual calendar, and most support multiple time zones and easy or even automatic travel and clock-change adjustments. Spots of luminous paint make a watch readable in darkness, but illuminating a small digital display is more effective.

The hybrid approach also tackles the aesthetic challenge: while many analogue watches are things of beauty, most digital watches just aren’t. Hybrid watches (just like analogue ones) certainly can be hit with the ugly stick, but I’ve managed to assemble a number of very pretty examples.

The lesson for the software architect is simple: if the compromise of trying to do everything with a single technology is too great, don’t be afraid to embrace a hybrid solution. Hybrid architectures are a powerful tool in the right place, not something to be ruthlessly eliminated by purist “Thought Police”.

A Strong, Layered Architecture Promotes Longevity

Take a look at these three watches: a 1986 Omega Seamaster, a 1999 Rado Diastar, and a recent Breitling Aerospace. Very different, yes?


Sisters Under the Skin, or Brothers from Another Mother?

Visually, they are. But their operation is almost identical, so much so that the user manuals are interchangeable. Clearly some Swiss watchmaker just “got it right” in the late 1980s, and that solution has endured, with a life both within and outside the Swatch Group, the watch equivalent of the shark or crocodile. While the underlying technology has changed only slightly, the strong layering has allowed the creation of several different base models, and then numerous variants in size, shape and external materials.

This is a classic example of long-term value from investing in a strong underlying architecture, but also ensuring that the architecture allows for “pace layering”, with the visible elements changing rapidly, while the underpinnings may be remarkably stable.

It’s worth noting that basic functionality alone does not ensure longevity. None of these watches have survived unchanged, it’s the strength of the underlying design which endures.

Oh, and yes, the Omega is a full-sized man’s watch (as per 1986)! More about fashion later…

Enabling Integration Unlocks New Value

The earliest dual mode watches were little more than a simple digital watch and a quartz analogue watch sharing the same case, but not much else except the battery (and sometimes not even that!). The cheapest are still built on this model, which might most charitably be labelled “Independent” – my Lambretta watch is a good example. There’s actually nothing wrong with this model: improve the capability of the digital part, the quality of the analogue part and the case materials and design and you have, for example, my early 1990s Citizen watches which are among my favourites. However as a watch user you are essentially just running two watches in one case. They may or may not tell the same time.

The three premium Swiss watches represent the next stage of integration. The time is set by the crown moving the hands, but the digital time is set in synchronisation. There’s a simple way to advance and retard both in whole hours to simplify travel and clock-change adjustments. Seconds display is digital-only to simplify matters. Let’s borrow a photography term and call this “Analogue Priority” – still largely manual, but much more streamlined.

“Digital Priority”, as implemented in early 2000s Seikos is another step forwards. You set the digital time accurately for your current location and DST status, and you have one-touch change of both digital and analogue time to any other time zone. The second hand works as a status indicator, or automatically synchronises to the digital time when in time mode.

However the crown has to go to the Tissot T-Touch watches. Here the hands are just three indicators driven entirely by the digital functions: they become the compass needle in compass mode, show the pressure trend in barometer mode, sweep in stopwatch mode, park at 12.00 when the watch is in battery-saving sleep mode. And they tell the time as well! Clearly full integration unlocks a whole set of capabilities not previously accessible.


Extremes of analogue/digital integration

So it is with software. Expose the control and integration points of your modules to one another, or to external access, and new value emerges as the whole rapidly becomes much more than the sum of the separate parts.

Provide for Adjustment Where Needed…

While I love the look of some watch bracelets (especially those with unusual materials, like the high-tech black ceramic of the Rado), adjusting them is a complex process, and inevitably ends up with a compromise: either too loose or too tight. Even if the bracelet offers some form of micro-adjustment and you get it “just right” at one point, it will be wrong as the wrist naturally swells and shrinks over time. Leather straps allow easier adjustment, but usually in quite coarse increments of about 1cm, so you’re back to a compromise again.

The ideal would be a bracelet with either an elastic/sprung element, or easily accessible micro-adjustment, but I don’t have a single example in my collection like that. I hear Apple are thinking about an electrically self-adjusting strap for the next iWatch, but that sounds somewhat OTT.

On the other hand, I have a couple of £10 silicone straps for my Fitbit which offer easy adjustment in 2mm increments. Go figure…

We could all quote countless similar software examples, of either a “one size fits all” setting which doesn’t really suit, or an allegedly controllable or automated setting which misses the useful values. The lesson here is to understand where adjustment is required, and provide some accessible way to achieve it.

… But Avoid Wasting Effort on the Useless

At the other end of the scale, several of my watches have “functions” of dubious value. The most obvious is the rotating bezel. In the Tissot, it can be combined with the compass function to provide heading/azimuth information. That’s genuinely useful. The Citizen Wingman has a functioning circular slide rule. Again valid, but something of a hostage to progress. :)


At least the slide rule does something, if you can remember how!

Do the rotating bezels of my Citizen Yachtsman or the Breitling Aerospace have any function? Not as far as I can see.

Now I’m not against decorative or “fun” features, especially in a product like a watch, which nowadays is as often worn as jewellery as for its primary function. But I do think that they need to be the result of deliberate decisions, and designers need to think carefully about which are worth the effort, and which introduce complexity outweighing their value. That lesson applies equally to software as to hardware.

… And Don’t Over-Design the User Interface

The other issue here is that unless it’s pure jewellery, a watch does need to honour its primary function, and support easily telling the time, ideally for users with varying eyesight and in varying lighting conditions. While I have been the Rado’s proud owner for nearly 18 years, as my 50-something eyesight has changed it has become increasingly annoying as a time-telling device, mainly due to its “low contrast” design. It’s not alone: for example my very pretty Citizen Yachtsman has gold and pale green hands and a gold and pale green face, which almost renders it back to a pure digital watch in some lights!

At the other end of the scale, the Breitling Aerospace is also very elegant, but an exemplar of clarity, with a high-contrast display, and clear markings including actual numbers. It can be done, and the message is that clarity and simplicity trump “design” in the user interface.

This is equally true of software. I am not the only person to have written bemoaning the usability issues which arise from loss of contrast and colour in modern designs. The message is “keep it simple”, and make sure that your content is properly visible, don’t hide it.

Fashion Drives Technology. Fashion Has Nothing To Do With Technical Excellence

All my watches are good timepieces, bar the odd UI foible, and will run accurately and reliably for years with an occasional battery change. However, if you pick up a watch magazine, or browse any of the dedicated blogs, there is almost no mention of such devices, or indeed of quartz/digital watches at all.

Instead, like so much else in the world we are seeing a polarisation around two more “extreme” alternatives: manual wind and “automatic” (i.e. self-winding) mechanical watches, or “charge every day” (and replace every couple of years) smartwatches. The former can be very elegant and impressive pieces of engineering, but will stop and need resetting unless you wind or wear them at least every few days – a challenge for the collector! The latter offer high functionality, but few seem engineered to provide 30 years of hard-wearing service, because we know they will be obsolete in a fraction of that time.

Essentially fashion has driven the market to displace a proven, reliable technology with “challenging” alternatives, which are potentially less good solutions to the core requirements, at least while they are immature.

This is not new, or unique to the watch market. In software, we see a number of equivalent trends which also seem to be driven by fashion rather than technical considerations. A good, if possibly slightly contentious example, might be the displacement of server-centric website technologies, which are very easy to develop, debug and maintain, with more complex and trickier client-centric solutions based on scripting languages. There may be genuine architectural requirements which dictate using such technologies as part of the solution, e.g. “this payload is easy to secure and send as raw data, but difficult and expensive to transmit fully rendered”. Fine. But “it’s what Facebook does” or “it’s the modern solution” are not architecture, just fashion statements.

On a more positive note, another force may tend to correct things. Earlier I likened the Omega/Rado/Breitling design to the evolutionary position of a shark. Well there’s another thing about sharks: evolution keeps using the same design. The shark, swordfish, ichthyosaur, and dolphin are essentially successive re-uses of a successful design with upgraded underlying architecture. Right now, Fossil and others are starting to announce hybrid smartwatches with analogue hands alongside a fully-fledged smartwatch digital display.

In fashion terms, what goes around, comes around. It’s true for many things, watches and software architectures among them.

Conclusions

Trying to understand the familial relationships, similarities and differences in a group of similar artefacts is interesting. It’s also useful for a software architect to try and understand the architectural characteristics behind them, and especially how this can help some designs endure and progressively evolve to deliver long-term value, something we frequently fail to achieve in software. At the same time, it’s also salutary to recognise where non-architectural considerations have a significant architectural impact. Think about the components, relationships and dynamics of other objects in architecture terms, and the architecture of our own software artefacts will benefit.

Posted in Agile & Architecture, Thoughts on the World | Leave a comment

Integration Or Incantation?

I was travelling recently with Virgin Atlantic. I went to check in online, typed in my booking code and selected both our names, clicked "Next", and got an odd error saying that I couldn’t check in. I wondered momentarily if it was yet more pre-Brexit paranoia about Frances’ Irish passport, but there was a "check in individually" option which rapidly revealed that Frances was fine, it was my ticket which was causing the problem.

The web site suggested I ring the reservation number, which I did, listened to 5 minutes of surprisingly loud rock music (you never mistake being on hold for Virgin with anyone else), and got through to a helpful chap. He said "OK, I can see the problem, I will re-issue the ticket." Two minutes of more distinctive music, and he invited me to try again. Same result. He confirmed that we were definitely booked in and had our seat reservations, and suggested that I wait until I get to the airport. "They will help you there." Fine.

Next morning, we were tackled on our way into the Virgin area by a keen young lady who asked if we had had any problems with check in. I said we had, and she led us into what can best be described as a "kraal" of check-in terminals, and logged herself into one. This displayed a smart check-in agent’s application, complete with all the logos, the picture of Branson’s glamorous Mum, and so on. She quickly clicked through a set of very similar steps to the ones I had tried, and then clicked OK. "Oh, that’s odd", she said.

Next, she opens up a green screen application. Well, OK, it’s actually white on a Virgin red background, but I know a green screen application when I see one. She locates my ticket, checks a few things, and types in the command to issue my pass. Now I’m not an expert on Virgin’s IT solutions, but I know the word "ERR" when I see it. "Oh, that’s not right either" says the helpful young lady "I’ll get help".

Two minutes later, the young lady is joined by a somewhat older, rather larger lady. (OK, about the same age as me and she looked a lot better in her uniform than I would, but you get the idea.) "Hello Mr Johnston, let’s see if we can sort this out". She takes one look at the screen and says "We actually have two computer systems, and they don’t always talk to each other or have the same information."

… which could be the best, most succinct summary of the last 25 years of my career I have heard, but I digress…

Back to the story. The new lady looks hard at both applications, and then announces she can see the problem (remember, all this is happening on a screen I can see as well as the two Virgin employees). "Look, they’ve got your name with a ‘T’ here, and no ‘T’ here" (pointing to the "red screen" programme).

Turning to the younger lady, she says "Right, this is how to fix it." "Type DJT, then 01" (The details are wrong, but the flavour is correct…) "Put in his ticket number. Type CHG, then enter. Type in his name, make sure we’ve got the T this time. Now set that value to zero, because this isn’t a chargeable change, and we can do a one letter change without a charge. Put in zero for the luggage, we can change that in a minute. Type DJQ, enter. Type JYZ, enter. OK, that’s better. Now try and print his pass." Back to the sexy new check in app, click a few buttons, and I’m presented with two fresh boarding passes. Job done.

Now didn’t we have a series of books where a bunch of older, experienced wizards taught keen young wizards to tap things with sticks and make incantations? The solution might as well have been to tap the red screen programme with a wand and shout "ticketamus"…

The issues here are common ones. Is it right to be so dependent on what is clearly an elderly and complex legacy system? Are the knowledge transfer processes good enough, or is there a risk that next time the more experienced lady who knows the magic incantations won’t be available? Why is such a fundamental piece of information as the passenger names clearly being copy typed, not part of the automated integrations? As a result, is this a frequent enough problem that there should really be an easier way to fix it? Ultimately the solutions are traditional ones: replace the legacy system, or improve its integrations, but these are never quick or easy.

Now please note I’m not trying to get at Virgin at all. I know for a fact that every company more than a few years old has a similar situation somewhere in the depths of their IT. The Virgin staff were all cheerful, helpful and eventually resolved the problem quickly. However it is maybe a bit of a management error to publicly show the workings "behind the green screen" (to borrow another remarkably apposite magical image, from the Wizard of Oz). We expect to see the swan gliding, not the feet busily paddling. On this occasion it was interesting to get a glimpse, and I was sympathetic, but if the workings cannot be less dependent on "magic", maybe they should be less visible?

Posted in Agile & Architecture, Thoughts on the World | Leave a comment

Singing With Each Other

We went to see The Hollies at G Live in Guildford last night. While the words and melodies were those we loved,  and the instrumental performances were good, the trademark harmonies sounded, frankly, a bit flat, and I wondered if they had finally lost it.

Then, towards the end of the first set, they announced an experiment. They would sing a song around one microphone (“you know, like in the days when we only had one mike”). The three main vocalists moved together and sang Here I Go Again. Suddenly the sound was transformed. The magic was back. It sparkled. It flew. It disappeared, sadly, when the song ended and they moved back to their respective positions on the large stage.

If I had to characterise what happened, and I was being slightly harsh, I would say that for a short time they were singing together, but the rest of the time they were singing at the same time.

Now we know that G Live has an odd, flat, acoustic. We have seen experienced stand-up comedians struggle because they can’t hear the laughter, and other experienced musicians ask for monitor/foldback adjustments mid performance. However we seem to have really found the Achilles Heel of this otherwise good venue – it doesn’t work if you need to hear what other people are singing and wrap your voice around theirs.

Next time, guys, please just ignore the big stage and use one mike. We’ll love it!

Posted in Thoughts on the World | Leave a comment

Collection, or Obsession?

I have decided to start another collection. Actually the real truth is that I’ve got a bit obsessive about something, and now I’m trying to put a bit of shape and control on it.

I don’t generally have an addictive personality but I do get occasional obsessions where I get one thing and then have to have more similar things, or research and build my kit ad infinitum, until the fascination wears off a bit. The trick is to make sure that it’s something I can afford, where ownership of multiple items makes some sense and where it is possible to dispose of the unwanted items without costing too much money.

Most of my collections involve clothing, where it makes reasonable sense to buy another T shirt, or bright jacket, or endangered species tie (of which I may well have the world’s largest collection). They can all be used, don’t take up too much space, and have some natural turnover as favourites wear out. Likewise I have a reasonable collection of malt whiskies, but I do steadily drink them.

Another trick is to make sure that the collection has a strong theme, which makes sure you stay focused, and which ideally limits the rate of acquisition to one compatible with your financial and storage resources. I don’t collect "ties", or even "animal ties", I collect Endangered Species ties, which only came from two companies and haven’t been made for several years. Likewise my jackets must have a single strong colour, and fit me, which narrows things down usefully.

The new collection got started innocently enough. For nearly 18 years my only "good" watch was a Rado Ceramica, a dual display model. About a year ago I started to fancy a change, not least because between changes in my sight, a dimming of the Rado’s digital display, and a lot of nights in a very dark hotel room I realised it was functioning more as jewellery than a reliable way of telling the time. So I wanted a new watch, but I wasn’t inspired as to what.

Then I watched Broken Arrow, and fell in lust with John Travolta’s Breitling Aerospace. The only challenge was that they are quite expensive items, and I wasn’t quite ready to make that purchase. In the meantime we watched Mission Impossible 5, and I was also quite impressed with Simon Pegg’s Tissot T-Touch. That was more readily satisfied, and I got hold of a second-hand one with nice titanium trim and a cheerful orange strap for about £200. This turned out to be an excellent "holiday" watch, tough, colourful and with lots of fun features including a thermometer, an altimeter/barometer, a compass, and a clever dual time zone system. That temporarily kept the lust at bay, but as quite a chunky device it wasn’t the whole solution.

The astute amongst you will have recognised that there are a couple of things going on here which could be the start of a "theme". Firstly I very much like unusual materials: the titanium in all the watches I’ve mentioned, the sapphire faces of the Breitling and the Rado, and that watch’s hi-tech ceramic.

Second, all these watches have a dual digital/analogue display. I’ve always liked that concept, ever since the inexpensive Casio watch which I wore for most of the 90s. Not only is it a style I like, it’s also now a disappearing one, being displaced by cleverer smartphones and smart watches. Of the mainstream manufacturers only Breitling and Tissot still make such watches. That makes older, rarer examples eminently collectable.

To refine the collection, there’s another dimension. I like my stuff to be unusual, ideally unique. Sometimes there’s a functional justification, like the modified keyboards on my MacBooks, but it’s also why my last two cars started off black and ended up being resprayed. Likewise, when I finally decided to take advantage of the cheap jewellery prices in Barbados and bought my Breitling I looked hard at the different colour options and ended up getting the vendor to track down the last Aerospace with a blue face and matching blue strap in the Caribbean.

Of course, if I’m being honest there’s a certain amount of rationalisation after the event going on here. What actually happened is that after buying the Breitling I got a bit obsessed and bought several and sold several cheaper watches before really formulating the rules of my collection. However I can now specify that any new entrant must be (unless I change the rules, which may happen at any time at the collector’s sole option :) ):

  • Dual display. That’s the theme, and I’m happy to stick to it, for now.
  • Functional and in good condition. These watches are going to be worn, and having tried to fix a duff one it’s not worth the effort.
  • Affordable. This is a collection for fun and function, not gain. While there’s a wide range between the cheapest and most expensive, most have cost around £200, and are at least second-hand.
  • The right size. With my relatively small hands and wrists, that means a maximum of about 44mm, but a minimum of about 37mm (below which the eyes may be more challenged). As I’m no fan of "knuckle dusters" most are no more than 11mm thick, although I’m slightly more flexible on that.
  • Beautiful, or really clever, or both. Like most men I wear a watch as my only piece of jewellery, and I want to feel some pride of ownership and pleasure looking at it. Alternatively I’ll give a bit on that (just a bit) for a watch with unusual functionality or materials.
  • Unusual. Rare colour and material combinations preferred, and I’m highly likely to change straps and bracelets as well.

Ironically I’m not so insistent that it has to be a great "time telling" device. There are honourable exceptions (the Breitling), but there does seem to be a rough inverse relationship between a watch’s beauty and its clarity. I’m prepared to accommodate a range here, although it has to be said that most of the acquisitions beat the Rado in a dark room.

So will these conditions control my obsession, or inflame and challenge it? Time will tell, as will telling the time…

Posted in Thoughts on the World | 1 Comment

Back to the Future

I’ve opined before about how Microsoft have made significant retrograde steps with recent versions of Office. However this morning they topped themselves when Office 2016 started complaining about not being activated, and the recommended, automated solution was to do a complete download and "click to run" installation of some weird version of Office 365 over the top of my current installation.

In the meantime, I’ve been working with a main client whose standard desktop is based on Office 2010, and, you know what, it’s just better.

I’ve had enough. Office 2016 and 2013 have been removed from the primary operating systems of all my machines. In the unlikely event that I need Office 2016 (and the only real candidate is Skype for Business), I’ll run it in a VM. Long live Office 2010!

Posted in Thoughts on the World | Leave a comment

Business Models

Here’s a business model:

I’m a drug dealer. I sell you a crack cocaine pipe complete with a packet of wraps for £220. It’s a good pipe (assuming that such things exist) – burns clean and always hits the spot (OK I’m making this bit up, it’s not exactly an area of first-hand knowledge.)

To make my business plan work the packet of wraps is half high quality crack cocaine and half icing sugar. You come back to me and I’m very happy to sell you another packet of wraps. This time the price is £340, again for half high quality crack and half icing sugar.

This business model is illegal, and for a number of very good reasons.

OK here is a completely different business model, nothing at all like the last one:

I am a manufacturer of consumer electronics. To be specific I’m a Korean manufacturer of occasionally explosively good consumer electronics. I sell you a printer complete with a set of toner cartridges for £220. It’s a very good printer – quiet, reliable, lovely output (I’m on safer ground here.)

To make my business plan work I put a little circuit in each toner cartridge so that at 5000 pages it says that it’s empty even if it’s still half full. You come back to me and I’m very happy to sell you another set of cartridges, this time the price is £340. Again each cartridge is wired to show empty even when it’s still half full.

For reasons I fail to understand this model is legal, certainly in the UK.

There is of course an answer but it feels morally wrong. I just put my perfectly good printer in the bin and buy a new one complete with toner cartridges. I have also found a little chap in China who for £40 will sell me a set of chips for the cartridges. Five minutes with a junior hacksaw and some blu-tack and I can double their life.

Maybe the answer is just to throw the printer away every time the cartridges are empty. Surely it is not sustainable for the manufacturer if everyone just does this. But it doesn’t feel right…

Posted in Thoughts on the World | 1 Comment

A "False Colour" Experiment

Infrared trees with false colour
Camera: Panasonic DMC-GX7 | Date: 05-07-2017 09:54 | Resolution: 4390 x 1756 | ISO: 200 | Exp. bias: 0.33 EV | Exp. Time: 1/640s | Aperture: 8.0 | Focal Length: 17.0mm | State/Province: Swinhoe, Northumberland | See map

This is a bit of an experiment, but I think it works. I started with an infrared image in its standard form: yellow skies and blue foliage. I then performed a series of fairly simple colour replacement operations in Photoshop Elements: yellow to red, blue in top half of image to dark green, blue in bottom half of image to pale green, red to blue. The result is a bit like a hand-coloured black and white image. I like it, do you?

Posted in Photography | Leave a comment

Infrared White Balance

Alnwick Castle Reflections in the Infrared
Camera: Panasonic DMC-GX7 | Date: 05-07-2017 14:29 | Resolution: 4653 x 2908 | ISO: 200 | Exp. bias: 0 EV | Exp. Time: 1/800s | Aperture: 6.3 | Focal Length: 12.0mm | State/Province: Alnwick, Northumberland | See map | Lens: LUMIX G VARIO 12-35/F2.8

"I’m shooting infrared. My main output is RAW files, and any JPGs are just aides memoire. Between my raw processor and Photoshop I’m going to do some fancy channel mixing to either add false colour, or take it away entirely and generate a monochrome image. So I’m assuming my white balance doesn’t matter. Is that right?"

Nope, and this article explains why. If you’re struggling with, or puzzled by, the role of white balance in infrared photography, hopefully this will help untangle things.

Posted in Photography | Leave a comment

Liberation from the "Frightful Five"

There’s an interesting NY Times article on our dependency on "Tech’s Frightful Five", which includes a little interactive assessment of whether you could liberate yourself, and if so in which order. I thought it would be interesting to document my own assessment.

  1. FaceBook. No great loss. I’ve only started recently and I’m not a terribly social animal. I also have my own website and LinkedIn. Gone.
  2. Apple. Momentary wrench. My only connection to Apple is my MacBook Pro laptop, which is a great bit of kit. However it runs Windows and I’m sure Dell or Sony could sell me a reasonable replacement, although I would really miss the large 16×10 Retina screen.
  3. Alphabet/Google. Harder work, but straightforward. There are alternatives to Chrome as a browser, Google as a search engine, even Android as a phone/tablet operating system. It helps that Google has a bit of a track record of providing something you get to like, and then without warning disabling or crippling it, rendering it of reduced or no value (think Android KitKat, Google Currents, I could go on). There’s a bit of work here, but it could be done.

And then I’m stuck. Like Farhad Manjoo, I find that Amazon has worked its way into a prime (or should that be "Prime") position in not only our shopping but also our viewing and reading habits. Yes, there are options, but the pain of transition would be substantial, and the loss of content (almost 400 Kindle books, Top Gear, Ripper Street and the Man in the High Castle among others) expensive. Amazon probably gets 4th place, but don’t ask me to do it! Steps 1-3 would leave me with an even heavier dependency than today on Windows and other Microsoft products and subsidiaries for all my day to day technical actions, and unless we’re going back to the Dark Ages I don’t see good alternatives, so Microsoft gets 5th by default, but it’s not really on the list. Well played, Bill.

Who are you most dependent on?

Posted in Thoughts on the World | Leave a comment

What Are Your Waypoints?

Country singer at the Listening Room, Nashville, providing important routeing information!
Camera: Panasonic DMC-GX7 | Date: 24-09-2014 18:14 | Resolution: 3424 x 3424 | ISO: 3200 | Exp. bias: 0 EV | Exp. Time: 1/25s | Aperture: 5.6 | Focal Length: 46.0mm | Location: The District | State/Province: Tennessee | See map | Lens: LUMIX G VARIO 35-100/F2.8

How do you remember the waypoints and landmarks on a journey? What are the key features by which you can replay in your mind, or to someone else, where you went and what you did?

Like any good Englishman, I can navigate substantial sections of our sceptred  isle by drinking establishment. This is, of course, a long tradition and officially recognised mechanism – it’s why British pubs have recognisable iconic signs, so that even if you were illiterate you could get yourself from inn to inn. It’s a bit more difficult today thanks to pub closures and the rise of pub chains with less distinguishable names, but it still works. Ask me to navigate you around Surrey, and there will be a lot of such landmarks in the discussion.

When I look back at other trips, especially to foreign parts, the mechanisms change. I can usually remember where I took favourite photographs, even without the GPS tagging, and I could immediately point to the locations of traumatic events whether in motion ("the Italian motorway with the big steel fences either side") or at rest ("the hotel with the sticky bathroom floor"). I also tend to hold in my head a sort of "moving map" picture of the journey’s flow, which might not be terribly accurate, but could be rendered more so quite quickly by studying a real map.

Frances, despite appearances to the contrary, navigates largely using food. Yesterday we had a typical example: "do you remember that lovely town square where we had breakfast in front of the town hall and we had to ask them whether they had real eggs because the powdered eggs were disagreeing with me? I think it was on the Washington trip." This was a challenge. "Breakfast" was probably right, so that narrowed things down a bit. "The Washington trip" was probably correct, but I have learned to treat such information with an element of caution.

At this point we had therefore to marry up two different reference systems, and try and work out where they overlapped. My first pass was to run the moving map of the Washington trip in my head, and call out the towns where we stayed. That eliminated a couple of stops, where we could both remember the breakfast arrangements (the very good restaurant at the Peaks of Otter Lodge, and a nice diner in Gatlinburg), but we were still missing an obvious match.

Then Frances said "I think we had to drive out of town for a bit because we’d had to change our route". Bingo! This now triggered the "traumatic event" register in my mind, specifically listening to a charming young lady in Nashville singing a song about the journey of a bottle of Jack Daniels, and suddenly realising I had put the wrong bloody Lynchburg on our route! Over dinner I had to do a quick replan and include Lynchburg Tennessee as well as Lynchburg Virginia in our itinerary. That meant an early start from Nashville next morning, heading south rather than directly east, and half-way to Lynchburg (the one with the Jack Daniels distillery) we stopped for breakfast because the offering at the hotel had looked very grim. Got there in the end.

(If you’re wondering, I do actually have a photographic record of this event. The young lady above is the one who sang the song with the critical routeing information.)

We’ve also had "that restaurant where we were the only white faces and the manager kept asking if we were OK" (Memphis, near Gracelands), and "that little store where they did the pulled pork sandwiches and the woman’s daughter lived in Birmingham" (Vesuvius, Virginia). In fairness to my wife, she can also accurately recall details of most of our retail transactions on each trip, including the unsuccessful ones. ("That town where we bought my Kokopeli material, and the old lady had to run across the street although there was no traffic"). Again there’s the challenge of marrying these up with my frame of reference, but the poor old lady in Cortez, Colorado, desperately trying to beat the count down timer on the pedestrian crossing, despite a traffic level of about 1 vehicle a minute, sticks in my mind as well, so that one was easy. Admittedly, I remember Cortez as "that nice town just outside Mesa Verde", but that’s me.

What’s your frame of reference?

Posted in Humour, Thoughts on the World, Travel | Leave a comment

How Strong Is Your Programming Language?

Line-up at the 2013 Europe's Strongest Man competition
Camera: Canon EOS 7D | Date: 29-06-2013 05:31 | Resolution: 5184 x 3456 | ISO: 200 | Exp. bias: -1/3 EV | Exp. Time: 1/160s | Aperture: 13.0 | Focal Length: 70.0mm (~113.4mm)

I write this with slight trepidation as I don’t want to provoke a "religious" discussion. I would appreciate comments focused on the engineering issues I have highlighted.

I’m in the middle of learning some new programming tools and languages, and my observations are coalescing around a metric which I haven’t seen assessed elsewhere. I’m going to call this "strength", as in "steel is strong", defined as the extent to which a programming language and its standard tooling avoid wasted effort and prevent errors. Essentially, "how hard is it to break?". This is not about the "power" or "reach" of a language, or its performance, although typically these correlate quite well with "strength". Neither does it include other considerations such as portability, tool cost or ease of deployment, which might be important in a specific choice. This is about the extent to which avoidable mistakes are actively avoided, thereby promoting developer productivity and low error rates.

I freely acknowledge that most languages have their place, and that it is perfectly possible to write good, solid code with a "weaker" language, as measured by this metric. It’s just harder than it has to be, especially if you are free to choose a stronger one.

I have identified the following factors which contribute to the strength of a language:

1. Explicit variable and type declaration

Together with case sensitivity issues, this is the primary cause of "silly" errors. If I start with a variable called FieldStrength and then accidentally refer to FeildStrength, and this can get through the editing and compile processes and throw a runtime error because I’m trying to use an undefined value, then the programming "language" doesn’t deserve the label. In a strong language, this will be immediately questioned at edit time, because each variable must be explicitly defined, with a meaningful and clear type. Named types are better than those assigned by, for example, using multiple different types of brackets in the declaration.
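
To make that concrete, here is the failure mode in an implicitly-declared, case-sensitive language (Python in this sketch): without a separate linter, the misspelling compiles happily and only blows up when the line actually runs.

    # The misspelling is not caught at edit or compile time, only when this line executes.
    FieldStrength = 0.0

    def add_reading(reading):
        global FieldStrength
        FieldStrength = FeildStrength + reading   # typo: should be FieldStrength

    add_reading(1.5)   # raises NameError: name 'FeildStrength' is not defined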

2. Strong typing and early binding

Each variable’s type should be used by the editor to only allow code which invokes valid operations. To maximise the value of this the language and tooling should promote strong, "early bound" types in favour of weaker generic types: VehicleData not object or var. Generic objects and late binding have their place, in specific cases where code must handle incoming values whose type is not known until runtime, but the editor and language standards should then promote the practice of converting these to a strong type at the earliest practical opportunity.

Alongside this, the majority of type conversions should be explicit in code. Those which are always "safe" (e.g. from an integer to a floating point value, or from a strong type to a generic object) may be implicit, but all others should be spelt out in code with the ability to trap errors if they occur.
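
As a minimal illustration of what I mean by an explicit, trappable conversion (again a Python sketch, with invented values): the conversion is spelt out at the point it happens, and any failure can be handled right there.

    # Explicit conversion of an incoming generic value to a specific type, with the error trapped.
    raw_value = "42.5"                      # e.g. text arriving from an external source
    try:
        field_strength = float(raw_value)   # conversion spelt out in code, not silently coerced
    except ValueError:
        field_strength = 0.0                # failure handled where the conversion happens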

3. Intelligent case insensitivity

As noted above, this is a primary cause of "silly" errors. The worst case is a language which allows unintentional case errors at edit time and through deployment, and then throws runtime errors when things don’t match. Such a language isn’t worth the name. Best case is a language where the developer can choose meaningful capitalisation for clarity when defining methods and data structures, and the tools automatically correct any minor case issues as the developer references them, but if the items are accessed via a mechanism which cannot be corrected (e.g. via a text string passed from external sources), then the matching is case-insensitive. In this best case the editor and compiler will reject any two definitions with overlapping scope which differ only in case, and require a stronger differentiation.

Somewhere between these extremes a language may be case sensitive but require explicit variable and method declaration and flag any mismatches at edit time. That’s weaker, as it becomes possible to have overlapping identifiers and accidentally invoke the wrong one, but it’s better than nothing.

4. Lack of "cruft", and elimination of "ambiguous cruft"

By "cruft", I mean all those language elements which are not strictly necessary for a human reader or an intelligent compiler/interpreter to unambiguously understand the code’s intent, but which the language’s syntax requires. They increase the programmer’s work, and each extra element introduces another opportunity for errors. Semicolons at the ends of statements, brackets everywhere and multiply repeated type names are good (or should that be bad?) examples. If I forget the semicolon but the statement fits on one line and otherwise makes syntactic sense then then code should work without it, or the tooling should insert it automatically.

However, the worse issue is what I have termed "ambiguous cruft", where it’s relatively easy to make an error in this stuff which takes time to track down and correct. My personal bête noire is the chain of multiple closing curly brackets at the end of a complex C-like code block or JSON file, where it’s very easy to mis-count and end up with the wrong nesting.  Contrast this with the explicit End XXX statements of VB.Net or name-matched closing tags of XML. Another example is where an identifier may or may not be followed by a pair of empty parentheses, but the two cases have different meanings: another error waiting to occur.

5. Automated dependency checking

Not a lot to say about this one. The compile/deploy stage should not allow through any code without all its dependencies being identified and appropriately handled. It just beggars belief that in 2017 we still have substantial volumes of work in environments which don’t guarantee this.

6. Edit and continue debugging

Single-stepping code is still one of the most powerful ways to check that it actually does what you intend, or to track down more complex errors. What is annoying is when this process indicates the error, but it requires a lengthy stop/edit/recompile/retest cycle to fix a minor problem, or when even a small exception causes the entire debug session to terminate. Best practice, although rare, is "edit and continue" support which allows code to be changed during a debug session. Worst case is where there’s no effective single-step debug support.

 

Some Assessments

Having defined the metric, here’s an attempt to assess some languages I know using it.

It will come as no surprise to those who know me that I give VB.Net a rating of Very Strong. It scores almost 100% on all the factors above, in particular being one of very few languages to implement the best practice approach to case sensitivity outlined above. Although fans of more "symbolic" languages derived from C may not like the way things are spelled out in words, the number of "tokens" required to achieve things is very low, with minimal "cruft". For example, creating a variable as a new instance of a specific type takes exactly 5 tokens in VB.Net, including explicit scope control if required and with the type name (often the longest token) used once. The same takes at least 6 tokens plus a semicolon in Java or C#, with the type name repeated at least once. As noted above, elements like code block ends are clear and specific, removing a common cause of silly errors.
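
For what it’s worth, the comparison I have in mind looks like this (the type and scope are illustrative):

    Private vehicle As New VehicleData            ' Private, vehicle, As, New, VehicleData: 5 tokens, type name once
    ' C# equivalent: private VehicleData vehicle = new VehicleData();
    '   - the type name appears twice, plus "=", "()" and the trailing semicolon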

Is VB.Net perfect? No. For example, if I had a free hand I would be tempted to make the declaration of variables for collections or similar automatically create a new instance of the appropriate type rather than requiring explicit initialisation, as this is a common source of errors (albeit one well flagged by the editor and easily fixed). It also allows some implicit type conversions which can cause problems, albeit rarely. However it’s pretty "bomb proof". I acknowledge there may be some cause and effect interplay going on here: it’s my language of choice because I’m sensitive to these issues, but I’m sensitive to these issues because the language I know best does them well and I miss that when working in other contexts.
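
A quick sketch of the pitfall I mean:

    Dim names As List(Of String)          ' declares the variable only: names is Nothing until initialised
    Dim towns As New List(Of String)      ' declares and creates the instance in one statement

    names.Add("Yangon")                   ' NullReferenceException at runtime (flagged as a warning by the editor)
    towns.Add("Yangon")                   ' fine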

It’s worth noting that these strengths relate to the language and are not restricted to expensive tools from "Big bad Microsoft". For example the same statements can be made for the excellent VB-based B4X Suite from tiny Israeli software house Anywhere Software, which uses Java as a runtime, executes on almost any platform, and includes remarkable edit and continue features for software which is being developed on PC but running on a mobile device.

I would rate Java and C# slightly lower, as Pretty Strong. As fully compiled, strongly typed languages, they catch many potential error sources at compile time if not earlier. However, case sensitivity and the reliance on additional, arguably redundant "punctuation" are both common sources of errors, as noted above. Tool support is also maybe a notch down: for example, while the VB.Net editor can automatically correct minor errors such as the case of an identifier or missing parentheses, the C# editor either can’t do this, or the feature is turned off and well hidden. On a positive note, both languages enforce slightly more rigour on type conversions. Score 4.5 out of 6?

Strongly typed interpreted languages such as Python get a Moderate rating. The big issue is that the combination of implicit variable declaration and case sensitivity allows through far too many "silly" errors which cause runtime failures. "Cruft" is minimal, but the reliance on punctuation variations to distinguish the declaration and use of different collection types can be tricky. The use of indentation levels to distinguish code blocks is clear and reasonably unambiguous, but can be vulnerable to editors invisibly changing whitespace (e.g. converting tabs to spaces). On a positive note the better editors make good use of the strong typing to help the developer navigate and use the class structure. I also like the strong separation of concerns in the Django/Jinja development model, which echoes that of ASP.Net or Java Server Faces. I haven’t yet found an environment which offers edit and continue debugging, or graceful handling of runtime exceptions, but my investigations continue. Score 2.5 out of 6?

Weakly-typed scripting languages such as JavaScript or PHP are Weak, and in my experience highly error prone, offering almost none of the protections of a strong language as outlined above. While I am fully aware that like King Canute, I am powerless to stop the incoming tide of these languages, I would like to hope that maybe a few of those who promote their use might read this article, and take a minute to consider the possible benefits of a stronger choice.

 

Final Thoughts

There’s a lot of fashion in development, but like massive platforms and enormous flares, not all fashions are sensible ones… We need a return to treating development as an engineering discipline, and part of that may be choosing languages and tools which actively help us to avoid mistakes. I hope this concept of a "strength" metric might help promote such thinking.

Posted in Agile & Architecture, Code & Development | Leave a comment

3D Photos from Myanmar

Small temple at the Swedagon Pagoda, Yangon
Camera: Panasonic DMC-GX8 | Date: 10-02-2017 08:22 | Resolution: 5240 x 3275 | ISO: 200 | Exp. bias: 0 EV | Exp. Time: 1/80s | Aperture: 14.0 | Focal Length: 21.0mm | Location: Shwedagon Pagoda | State/Province: Wingaba, Yangon | See map | Lens: LUMIX G VARIO 12-35/F2.8

I’ve just finished processing my 3D shots from Myanmar. If you have a 3D TV or VR goggles, download a couple of the files from the following link and have a look.

http://www.andrewj.com/public/3D/

Posted in Myanmar Travel Blog, Photography, Travel | Leave a comment

Why I (Still) Do Programming

It’s an oddity that although I sell most of my time as a senior software architect, and can also afford to purchase the software I need, I still spend a lot of time programming, writing code. Twenty-five years ago, people a little older than I was then frequently told me “I stopped writing code a long time ago; you will probably be the same”, but that has turned out to be completely untrue. It’s not even that I only do it as a hobby or for personal projects: I work some hands-on development into the majority of my professional engagements. Why?

At the risk of mis-quoting the Bible, the answer is legion, for they are many…

To get the functionality I want

I have always been a believer in getting computers to automate repetitive actions, something they are supremely good at. At the same time I have a very low patience threshold for undertaking repetitive tasks myself. If I can find an existing software solution, great; but if not I will seriously consider writing one, or at the very least the “scaffolding” to integrate available tools into a smooth process. What often happens is that I find a partial solution first, but as I get tired of working around its limitations I reach the point where I say “to hell with this, I’ll write my own”. This is more commonly a justification for personal projects, but there have been cases where I have filled gaps in client projects on this basis.

Related to this, if I need to quickly get a result in a complex calculation or piece of data processing, I’m happy to jump into a suitable macro language (or just VB) to get it, even for a single execution. Computers are faster than people, as long as it doesn’t take too long to set the process up.

To explore complex problems

While I am a great believer in the value of analysis and modelling, I acknowledge that words and diagrams have their limits in the case of the most complicated problem domains, and may be fundamentally difficult to formulate and communicate for complex and chaotic problem domains (using all these terms in their formal sense, and as they are used in the Cynefin framework, see here).

Even a low-functionality prototype may do more to elicit an understanding of a complex requirement than a lot of words and pictures: that’s one reason why agile methods have become so popular. The challenge is to strike a balance, and make sure that an analytical understanding does genuinely emerge, rather than just being buried in the code and my head. That’s why I am always keen to generate genuine models and documentation off the back of any such prototype.

The other case in which I may jump into code is if the dynamic behaviour of a system or process is difficult to model, and a simulation may be a valid way of exploring it. This may just be the implementation of a mathematical model, for example a Monte Carlo simulation, but I have also found myself building dynamic visual models of complex interactions.
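
Purely by way of illustration (and nothing to do with any particular client problem), the skeleton of such a throwaway model can be a handful of lines; this VB.Net sketch estimates π by sampling random points in a unit square:

    Module MonteCarloSketch
        Sub Main()
            Dim rng As New Random()
            Dim inside As Long = 0
            Const samples As Integer = 1000000

            For i As Integer = 1 To samples
                Dim x As Double = rng.NextDouble()
                Dim y As Double = rng.NextDouble()
                ' Count points falling inside the quarter circle of radius 1
                If x * x + y * y <= 1.0 Then inside += 1
            Next

            ' The inside fraction approximates pi/4
            Console.WriteLine("Estimate of pi: {0}", 4.0 * inside / samples)
        End Sub
    End Module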

To prove my ideas

Part of the value I bring to professional engagements is experience or knowledge of a range of architectural solutions, and the willingness to invoke unusual approaches if I think they are a good fit to a challenge. However it’s not unusual to find that other architects or developers are resistant to less traditional approaches, or those outside their comfort zones. Models and PowerPoint can go only so far in such situations, and a working proof of concept can be a very persuasive tool. Conversely, if I find that it isn’t as easy or as effective as I’d hoped, then “prove” takes on its older meaning of “test” and I may be the one being persuaded. I’m a scientist, so that’s fine too.

To prove or assess a technology

Related to the last, I have found by hard-won experience that vendors consistently overstate the capabilities of their solutions, and a quick proof of concept can be very powerful in confirming or refuting a proposed solution, establishing its limitations or narrowing down options.

A variant on this is where I need to measure myself, or others, for example to calibrate what might or might not be adequate productivity in a given situation.

To prove I can

While I am sceptical of overstated claims, I am equally suspicious if I think something should be achievable, and someone else says “that’s not possible”. Many projects, both professional and personal, have started from the assertion that “X is impossible”, and my disbelief in that. I get a great kick from bending technology to my will. To quote Deep Purple’s famously filthy song Knocking At Your Back Door, itself an exploration into the limits of possibility (with censorship), “It’s not the kill, it’s the thrill of the chase.”

In the modern world of agile development processes, architect and analyst roles are becoming blurred with that of “developer”. I have always straddled that boundary, and proving my development abilities may help my credibility with development teams, allowing me to engage at a lower level of detail when necessary. My ability to program makes me a better architect, at the same time as architecture knowledge makes me a better programmer.

To make money?

Maybe. If a development activity can help to sell my skills, or advance a client’s project, then it’s just part of my professional service offering, and on the same commercial basis as the rest. That’s great, especially if I can charge a rate commensurate with the bundle of skills, not just coding. My output may be part of the overall product or solution, or an enduring utility, but more often any development I do is merely the means to an end which is a design, proof of concept, or measurement.

On the other hand, quite a lot of what I do makes little or no money. The stuff I build for my own purposes costs me little, but has a substantial opportunity cost if I could use the time another way, and I will usually buy a commercial solution if one exists. The total income from all my app and plugin development over the years has been a few hundred pounds, probably less than I’ve paid out for related tools and components. This is a “hobby with benefits”, not an income stream.

Because I enjoy it

This is perhaps the nub of the case: programming is something I enjoy doing. It’s a creative act, and puts my mind into a state I enjoy, solving problems, mastering technologies and creating an artefact of value from (usually) a blank sheet. It’s good mental exercise, and like any skill, if you want to retain it you have to keep in practice. The challenge is to do it in the right cases and at the right times, and remember that sometimes I really should be doing something else!

Posted in Agile & Architecture, Code & Development, Thoughts on the World | Leave a comment

Travel Blogging and Photo Editing

Weaver's hand
Camera: Panasonic DMC-GX8 | Date: 17-02-2017 11:39 | Resolution: 5184 x 3456 | ISO: 1600 | Exp. bias: -66/100 EV | Exp. Time: 1/40s | Aperture: 4.5 | Focal Length: 30.0mm | Location: Weaving village at In Paw Khone | State/Province: Inbawhkon, Shan | See map | Lens: LUMIX G VARIO 12-35/F2.8

I’ve been asked a number of times recently how I manage to write my blog during the often hectic schedule of my trips. It is sometimes a challenge, but it’s something that I want to do, and so I make it a priority for any "down time". I don’t see it as a chore, but as a way of enhancing my enjoyment, re-living the best experiences, working through any frustrations, and building valuable memories. If I’m travelling without Frances then there’s a lot of overlap with my report home, and if we’re travelling together then drafting the blog has become an enjoyable joint activity for coffee stops and dinner times.

That said, there are a few tricks to make the task manageable, and I’m happy to pass on some of those I have developed.

There’s no great magic to the writing. The main ingredient is practice. However I do spend quite a lot of time thinking through what to say about a day, trying to draft suitable paragraphs in my mind. If it was good enough for Gideon it’s good enough for me :). It is useful to capture ideas and even draft words whenever you get an opportunity, even on the go: travel time in buses and coffee stops are ideal. I just start drafting an email to myself on my phone, which can be saved at any time, reopened to add more as the day goes on, and sent before I start writing the blog.

The other important tool is a blogging app on your device which works offline and can save multiple drafts locally. I use the excellent Microsoft Live Writer on my PC, and the WordPress app on my phone and tablet, but any decent text editor would do. I would strongly counsel against trying to do travel blogging directly onto an online service – you will just be too obstructed by connectivity challenges.

Images are the other part of the equation. It’s very easy to be overwhelmed by the sheer volume of images, especially if you shoot prolifically like I tend to do, and if you have a relatively slow processing workflow. The first trick is to shoot RAW+JPG, so you always have something which you can share and post, even if it’s not perfect. As I observed in a previous post, you don’t need perfect in this context, and it would be rare if a day’s shooting didn’t produce at least one image good enough in camera to share.

However, as long as I have at least some time, I do try to perform a basic edit (filter) on my shots, and process at least the one or two I want to publish to my blog. That requires a robust but quick and efficient workflow. Different photographers work different ways, but the following describes mine.

Importantly, I don’t use LightRoom or the image management features in Photoshop. Neither do I use Capture One’s catalogue features. All my image management takes place directly in Windows, supported by the excellent XnView and a few tools of my own making. I find that this is both quicker, and puts me in direct control of the process, rather than at the mercy of a model which might not suit.

The first step is to copy (not move) the images off the memory card. If I have only used one card in a session, I find it perfectly adequate to just connect the camera via USB – this works quite quickly, and avoids fiddling with card readers. As long as I have sufficient cards I don’t re-format them until I’m home (just in case something happens to the PC), nor do I do much in-camera deleting, which is very cumbersome.

In terms of organisation I have a top-level directory on each laptop called "Pictures" under which is a directory called "Incoming". This is synchronised across all my computers, and holds all "work in progress". Under that I have two master directories for each year or major trip, and then subdirectories for each event. So for Myanmar I will have top level directories called "Myanmar 2017" (for output files and fully-processed originals) and "Myanmar 2017 – Incoming" (for work in progress). Under the latter I would typically have a directory for the images from each day’s shooting, e.g. "Lake Inle Day 2". On the "output" side I will typically have a directory for each location, plus one for all the originals (RAW files and Capture One settings), but I could easily also end up with others for video, and particular events or topics such as the group.

Having copied the pictures over to the right working directory, I fire up XnView. The first step is to run a batch rename process which sets each image filename to my standard, which includes the date (in YYYYMMDD format), the camera and the number assigned by the camera, so all shots from a given camera will always sort alphabetically in shot order, and I can immediately see when an image was taken and on which camera. After that I run a script which moves all "multi-shot" images into sub-directories by type (I shoot panoramas, HDR, focus blends and 3D images each using a distinct custom mode on the camera) and takes these out of the main editing workflow.
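
Purely as an illustration of the naming pattern, not of my actual tooling or the XnView rename template, the rule amounts to something like this (a VB.Net sketch with a hypothetical helper function):

    Module RenameSketch
        ' Hypothetical helper, just to show the pattern: date as YYYYMMDD,
        ' then a short camera code, then the frame number assigned by the camera
        Function StandardName(captureDate As DateTime, cameraCode As String,
                              frameNumber As Integer, extension As String) As String
            Return String.Format("{0:yyyyMMdd}_{1}_{2:D6}{3}",
                                 captureDate, cameraCode, frameNumber, extension)
        End Function

        Sub Main()
            ' e.g. 20170210_GX8_001234.jpg
            Console.WriteLine(StandardName(New DateTime(2017, 2, 10), "GX8", 1234, ".jpg"))
        End Sub
    End Module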

The next step is to "edit" the images, by which I mean separating out the bad, the poor and the very good. Because I have JPG files for each shot, I can set XnView to sort by file type, and quickly scan all the JPG files in full screen mode, tagging each (using shortcut keys) according to the following scheme:

  • Two stars means "delete". This is for images which are beyond use: out of focus, blurred, subject not fully in the frame. These will be moved to the wastebasket, and once that’s emptied, they are gone forever.
  • Three stars means "others". This is for images which are technically viable but which I don’t think merit processing. The obvious candidates are things like alternative people shots where the expressions weren’t ideal (but I have a better shot) or where I took a few slightly different compositions and some obviously don’t work. However this is also where I park duplicates or the unwanted frames from high-speed sequences. When I get home the JPGs will be deleted and the RAW files moved to an old external hard drive to free up disk space.
  • Four stars means "OK". This is for technically and compositionally adequate images, albeit which may not be the best, or may need substantial processing work.
  • Five stars means "good". These are the images which leap out at a quick viewing as "yes, that’s going to work".

Having tagged the images in the working folder, I have another script which deletes the two star images, moves the "others", and creates a .XMP file marking the five star images with a colour tag which can be read by Capture One. I can also copy the in-camera JPG versions of the 5 star images as a starting point for my portfolio, although these will be replaced by processed versions later.

The key to the tagging process is to keep going, quickly, but to err on the side of caution (so tag a borderline delete as 3 star, and a borderline other as 4 star). I can usually work at a rate of an image every one or two seconds, so the first filter of an intensive shoot of 500 images takes less than 20 minutes. At this point I have typically reduced the retained images by 40-60%, although that varies by subject matter: the percentage of rejects can be much higher for challenging subjects such as high-speed action, and also for people other than professional models, where a lot get rejected for poor expressions. The reason I’ve chosen the image at the top is that I love trying to capture hands at work, but that’s another subject with a high "miss" rate. I also find that I fairly consistently mark about 4-5% of shots as 5 star.

I don’t just delete the "others", because there is the occasional case where my selected shot of a group turns out to have a major flaw, and it’s worth reviewing the options. More importantly, for family events, weddings and the like there’s the occasional "didn’t anyone take a picture of Aunty Ethel?" I rescued a friend of mine from a serious family bust-up when it emerged that the official photographer at his wedding hadn’t taken a single photo of my friend, the groom’s parents! In that case, I found a shot in "others" which after processing kept everyone happy.

At this point, and only then, I start up Capture One and navigate to the target working directory. It takes a minute or two to perform its first scan, and then I can change the sort order to "colour tag", and there are the best of the day’s images, right at the top of the list ready to select a couple for the blog and process them. 90% of the time I restrict processing changes to the crop and exposure (levels and curves) – I wouldn’t usually select for the blog any image needing more than that. Finish the words, and I’m ready to post my blog.

From plugging in the camera to posting typically takes around an hour. There’s some scope for multi-tasking, so I can work on the words (or get a cup of tea) while the images are downloading from the camera, or while posting the images to my website (which in my case is a separate step from posting the blog). As a by-product, I have performed my first edit on the shoot, and have more or less the best images prioritised for further processing.

And I have an enduring and sharable record of what I did on my holidays!

Posted in Myanmar Travel Blog, Photography, Thoughts on the World, Travel | Leave a comment

The Perfect is the Enemy of the Good

Buddha at Pa-Hto-Thar-Myar Pagoda, camera lying on bag!
Camera: Panasonic DMC-GX8 | Date: 11-02-2017 12:14 | Resolution: 4072 x 5429 | ISO: 400 | Exp. bias: 0.66 EV | Exp. Time: 1.6s | Aperture: 4.5 | Focal Length: 7.0mm | Location: Pa-Hto-Thar-Myar Pagoda | State/Province: Nyaung-U, Mandalay | See map | Lens: LUMIX G VARIO 7-14/F4.0

The Perfect is the Enemy of the Good. I’m not sure who first explained this to me, but I’m pretty sure it was my school metalwork teacher, Mr Bickle. Physically and vocally he was a cross between Nigel Green and Brian Blessed, and the rumour that he had been a Regimental Sergeant Major during the war was perfectly credible, especially when he was controlling a vast playing field full of chatty children without benefit of a bell or megaphone. However behind the forbidding exterior was a kindly man and a good teacher. When my first attempt at enamel work went a bit wrong, and some of the enamel ended up on the rear of the spoon, I was very upset. He kindly pointed out that it was still a good effort, and the flaw "added character". My mother, another teacher, agreed, and the spoon is still on her kitchen windowsill 45 years later.

I learned an important lesson: things don’t need to be perfect to be "good enough", and it’s better to move on and do something else good than to agonise over imperfections.

I also quickly found that this is a good exam strategy: 16/20 in all five questions is potentially top marks, whereas 20/20 in one and insufficient time for the others could mean a failure. The same is true in some (not all) sports: the strongman who is second in every event may go home with the title.

Later, in my training as a physicist and engineer, I learned a related lesson. There’s no such thing as an exact measurement or a perfectly accurate construction. I learned to think in terms of errors, variances and tolerances, and to understand their net effect on an overall result. When in my late 20s I did some formal Quality Management training the same message emerged a different way: in industrial QA you’re most interested in ensuring that all output meets a defined, measurable standard, and the last thing you want is an individual perfectionist obstructing the process.

Seeking perfection can easily lead to a very low (if high quality) output, and missed opportunities. It also risks absolute failure, as perfectionists often have no "Plan B" and limited if any ability to adapt to changing circumstances. "Very good", on the other hand, is an easy bedfellow with high productivity and planning for contingencies and changes.

I adopt this view in pretty much everything I do: professional work, hobbies, DIY, commercial relationships, entertainment. I hold myself and others to high standards, but I have learned to be tolerant of the odd imperfection. This does mean living with the occasional annoying wrinkle, but I judge that to be an acceptable compromise within overall achievement and satisfaction. Practice, criticism (from self and others) and active continuous improvement are still essential, but I expect them to make me better, not perfect.

The trick, of course, is to define and quantify what is "good enough". I then expect important deficiencies against such a target to be rectified promptly, correctly and completely. In my own work, this means allowing some room for change and correction, whether it’s circulating an early draft of a document to key reviewers, or making sure that I can easily reach plumbing pipework. If something must be "set in stone" then it has to be right, and whatever early checks and tests are possible are essential, but it’s much better to understand and allow for change and adjustment.

In the work of others, it means setting or understanding appropriate standards, and then living by them. After I had my car resprayed, I noticed a small run in the paint on the bonnet. Would I prefer this hadn’t happened? Yes. Does it prevent me enjoying my unique car and cheerfully recommending the guys who did the work? No. Professionally I can and will be highly critical of sloppy, incomplete or inaccurate work, but I will be understanding of odd errors in presentation or detail, providing that they don’t affect the overall result or number too many (which is in turn another indicator of poor underlying quality).

So why have I written this now, why have I tagged it as part of my Myanmar photo blog, and why is there a picture of the Buddha at the top? In photography, there are those who seek to create a small number of "perfect" images. They can get very upset if circumstances prevent them from doing so. My aim is instead to accept the conditions, get a good image if I can, and then move on to the next opportunity. At the Pa-Hto-Thar-Myar Pagoda I (stupidly) arrived without my tripod, and had to get the pictures resting my camera on any convenient support using the self timer to avoid shake, in this case flat on its back on my camera bag on the temple floor. Is this the best possible image from that location? Probably not. Am I happy with it? Yes, and if I have correctly understood Buddhist principles, I think the Buddha would approve as well.

It is in humanity’s interest that in some fields of artistic endeavour, there are those who seek perfection. For the rest of us, perfection is the wrong target.

Posted in Agile & Architecture, Myanmar Travel Blog, Thoughts on the World, Travel | Leave a comment

Myanmar Musings (What Worked and What Didn’t)

Scarf seller at Thaung Yoeu
Camera: Panasonic DMC-GX8 | Date: 15-02-2017 17:37 | Resolution: 3888 x 3888 | ISO: 1600 | Exp. bias: 0 EV | Exp. Time: 1/25s | Aperture: 9.0 | Focal Length: 33.0mm | Location: Thaung Yoeu ladies and pagoda ru | State/Province: Indein, Shan | See map | Lens: LUMIX G VARIO 12-35/F2.8

Well, I’m back! Apart from a mad dash the length of Bangkok airport which got us to our plane to the UK with only a couple of minutes to spare, the flights home were uneventful and timely. Here’s my traditional tail-end blog piece, with a combination of “what worked and what didn’t” and more general musings.

This was a truly inspiring photographic trip, with a combination of great locations, events and people to photograph. We had a very capable “leadership team” who got us to great locations in great light, and the Burmese people were only too happy to participate in the process. No praise can be too high for our local guide, Nay Win Oo (Shine), who is not only a great guide and competent logistician, but has a good feel for what makes great photography, and a real talent for directing the local people as models.

If I have a minor complaint, it’s the observation that the trip was largely focused on interiors and people to the occasional exclusion of landscapes and architecture. I had to declare UDI a couple of times to get a bit more of the latter subject matter in front of my lens. Bhutan was perhaps a better match to my own style, but that didn’t stop this trip being a great source of images.

Cameras and Shot Count

The Panasonic GX8 was the workhorse of the trip, and took approximately 3690 exposures. That’s about 20% higher than either Bhutan or Morocco, both of which were slightly longer trips, and reflects the more “interactive” nature of the photography, with a rather higher discard ratio than normal. As usual the total also includes raw material for quite a lot of multi-exposure images, mainly for 3D and panoramas. I expect to end up with 100-200 images worth sharing, which is about the norm.

I took around 84 stills on the Sony RX100, mainly “grab shots” from the bus, but it came into its own for video, and I have a number of great video clips, more than on previous trips. I also took a handful of images using the infrared-converted Panasonic GX7, but whether due to the subject matter or the lighting they weren’t terribly inspiring.

I used my Ricoh Theta 360-degree camera several times, mainly in the markets and at the group mealtimes. I’m treating this as “found photography” – I haven’t had much of a look yet at what was captured, and will look forward to exploring the output over time.

My equipment all behaved faultlessly. I used all the lenses a reasonable amount, with the Panasonic 12-35mm doing the lion’s share as expected, but the 7-14mm, 35-100mm and 100-300mm all getting substantial use. I didn’t use the camera on my new Sony Xperia Ultra phone, but its excellent GPS was a vast improvement over the Galaxy Note’s poor performance in Bhutan.

I also did not use the Panasonic GX7 which I was carrying as a spare, but was able to lend it as a complete solution to another member of the group when her Canon L Series zoom lens started misbehaving. Having been burned previously I always carry a spare everything, and that’s a lot easier with the diminutive Panasonic kit.

Human Factors

While technology was broadly reliable, human systems were more challenged. The combined effects of the intensive schedule and the expected risk of tummy bugs led to a fairly high attrition rate. At least half the group missed a shoot or a meal, and a couple were quite ill for a couple of days. I was lucky that my own “wobble” was brief and started within a quick walk of a five star hotel. I would advise most travellers to think in terms of “when” not “if”, and definitely to avoid all uncooked food.

Hotels and restaurants were clean, and even out and about most washrooms were acceptable. Similarly temple areas were kept clean, with the fact that all shoes are removed at the entrance a clear contributor. The challenge is in the more general areas, especially in the towns and cities, where any surface you touch may also have been touched by many others. Money is a particular challenge. All you can do is to keep sanitising your hands, but also bags, cameras, wallets and other items which you may have to touch with dirty hands.

Our Burmese travel agents certainly did everything they could to reduce stress. Once we arrived in Burma, our responsibility for our large luggage and travel documents began and ended with putting our bags outside the room at the appointed time. Then we just got on the bus, walked through the airport picking up a boarding pass as we passed Shine, and that was about it! I could get used to travelling that way…

With someone else doing the “heavy lifting” (quite literally in the case of my case), you can get around with two phrases and three gestures:

  • Minga-la-ba, which is a polite “good day” exchanged between any two people who make eye contact. The choruses in the school and markets were fascinating! This can be used to cover a multitude of sins, and works very well as “please can I take your photograph?”
  • Che-su-ba, which means “thank you”. ‘Nuff said.
  • The smiley face and thumbs up, which work when you’re not close enough to use Minga-la-ba and che-su-ba.
  • A gesture consisting of the left hand held out at table level, palm up, with the right hand held about a foot above it, palm down. This is universally interpreted as “I would like a large Myanman beer, please” :)

Burmese Bizarre

Myanmar is a bit bizarre in a number of ways. Let’s start with the name. Myanmar (pronounced “mee…” not “my…”) is a relatively recent invention, and is not universally adopted. It doesn’t help that Aung San Suu Kyi (the popular and de facto leader) tends to use “Burma” herself, and there’s no common adjective derived from Myanmar, whereas “Burmese” works, and is officially valid if it relates to the dominant ethnic group and language. It wouldn’t surprise me if “Myanmar” goes the way of “Zaire” and “Tanganyika”, and we’re all back to “Burma” in a few years.

The Burmese really do “drive on the wrong side of the road”. In another anti-colonial diktat a few years ago, one of the madder generals decided to change from the British practice, and instructed the country to drive on the right. On its own, that’s not a problem. It works fairly well for the Americas and most of Europe. However the Burmese are trying to do it with the same almost completely right-hand-drive vehicle supply as the rest of Asia and Australasia. So all of the drivers are unable to see round corners or past larger vehicles in front, and every bus has a “driver’s assistant” whose main job is to stop passengers being mown down by passing traffic as they disembark into the middle of the road!

At a daily level Myanmar is almost entirely cash-based, with effectively three currencies in circulation. Major tourist transactions are conducted in US Dollars. These must be large denominations and absolutely pristine – they may be rejected for a tiny mark or fold. Next down, most day to day transactions by tourists and the more wealthy are conducted in Kyat (pronounced “Chat”), in round units of 1000 Kyat (about 60p). 10,000K and 5,000K notes tend to also be quite tidy. Transactions with and between the poorer people are in tens or hundreds of Kyat and the money is quite different. It’s absolutely disgusting, clearly and literally passing through a lot of hands in its lifetime. It’s all slightly reminiscent of the two-currency system in Cuba, but with one currency used in two distinct ways.

Uniquely among the countries I have visited, Myanmar has no international GSM roaming. However we had good straightforward Wifi connectivity at reasonable speeds and without any obvious restrictions at all the hotels and in several other locations. I suspect this is a transitional state, as the enthusiastic adoption of mobile phones in the local population will inevitably drive a standard solution fairly rapidly.

One thing which did amuse me – one of the primary providers of Internet services is a company called SkyNet. Shine says they’ve all seen the films, so I’m assuming the founder is a Terminator fan…

The usual Asian approach of throwing people at any problem showed mixed results. Bangkok Airport is an enormous hub trying to run on small-site processes which don’t scale just by adding people. The role of “bus driver’s assistant” does find employment for young lads with a helpful attitude but few exams. However we did have one very delayed meal where the problem seemed to be one of short staffing: despite a lot of people milling around the restaurant with nothing to do, most of the order taking, cooking and serving was being done by one or two individuals who were run ragged. It will be interesting to see how the approaches vary as the economy grows.

Guide books describe the food as “a rich fusion of unusual flavours” and “a repertoire of ingredients not found in any other cuisine”. Yeah, right. I’ll admit that I was being a bit cautious and avoided some of the more unusual fish and hot curry dishes, but basically it was Chinese or Thai food with a few local variations (more pineapple), alongside a number of Indian, Italian and Anglo-American favourites. One member of our group survived almost the whole trip on chicken and cashew nuts, and I’ll admit to a couple of pizzas!

To Sum Up

Lovely country, lovely people, great photos, but keep cleaning your hands and stick to the Chinese food (and beer)!

Posted in Myanmar Travel Blog, Photography, Thoughts on the World, Travel | Leave a comment

The World’s Worst Panorama – 2017

The Light and Land Myanmar 2017 Tour Group
Camera: Panasonic DMC-GX8 | Date: 17-02-2017 19:29 | Resolution: 18092 x 2401 | ISO: 3200 | Exp. bias: 0 EV | Exp. Time: 1/10s | Aperture: 4.0 | Focal Length: 12.0mm

As per tradition, I’ve compiled a group photograph from a series of hand-held shots taken by the members of the group in turn, in low light and high alcohol conditions. I’m moderately pleased with this year’s which was taken using the Sony RX100.

Sadly, Christine was missing as she wasn’t feeling too well. Otherwise here’s the Light and Land Myanmar 2017 tour group, from left to right: Julia, Andy, Geoffrey, Linda, Annette, Sara, Yours Truly, Neil, Fiona, Beverley and our leaders, Phil Malpas and Clive Minnit.

Please just don’t try and match up the beer bottles or count the legs too closely!

Posted in Myanmar Travel Blog, Travel | Leave a comment

The Oldest Established Permanent Floating Crap Game in New York

Transport, Lake Inle Floating Market
Camera: Panasonic DMC-GX8 | Date: 18-02-2017 08:45 | Resolution: 5602 x 3501 | ISO: 250 | Exp. bias: -33/100 EV | Exp. Time: 1/60s | Aperture: 9.0 | Focal Length: 12.0mm | Lens: LUMIX G VARIO 12-35/F2.8

I am slightly disappointed to find that the "floating market" is actually on solid ground, and only "floating" in the same sense as the "floating crap game" in Guys and Dolls. However it’s still a bustling, vibrant place, with lots of both photo and retail opportunities! It’s Saturday, and we’re in a part of the world where many people don’t yet have ready access to refrigeration, so most of the locals buy fresh food every day or two. It’s definitely fresh: some of the fish come wriggling out of buckets, and some of the chickens are plucked squawking from baskets just before becoming quarters…

Another observation is that for anyone used to Chinese food, there’s nothing that unusual. Chillies aside, there’s nothing I wouldn’t eat, provided it was prepared cleanly and cooked well.

Despite expectations to the contrary expressed widely within the group, I manage to find a carved elephant plaque which will go nicely alongside the animal-themed masks from Venice and Bhutan. As they say in Apollo 13, "Failure is not an option" :)

Once back at the hotel we say goodbye to the very friendly and helpful staff and start the journey back to Yangon. The initial trip across the lake, lunch, and the climb up to Heho airport are uneventful and much more enjoyable as we arrived in fog and mist, whereas we now have a glorious sunny day and can see much more of what’s going on. The trouble starts when Shine announces an extra stop, to visit a paper manufacturing workshop, and it becomes apparent that our flight is going to be significantly delayed. At 5pm the other flights have all departed and the shops and cafes in the tiny terminal put up their shutters and quit for the night.

I discover there is a hidden step down into the gents. That’s right, I went headlong into a haha at Heho airport. He he.

Well after 6pm we are still waiting for our flight, very much on the "last one out turn the lights off" basis. There’s a loud cheer when the flight finally lands. We get to Yangon a couple of hours behind schedule and we have an early start. Oh well, this trip has been consistent in several ways.

Posted in Myanmar Travel Blog, Travel | Leave a comment

Drifting Along

A proper Burmese Gent!
Camera: Panasonic DMC-GX8 | Date: 17-02-2017 16:01 | Resolution: 4600 x 3067 | ISO: 200 | Exp. bias: 0 EV | Exp. Time: 1/250s | Aperture: 5.0 | Focal Length: 21.0mm | Lens: LUMIX G VARIO 12-35/F2.8

A decent night’s sleep! I am obviously now so knackered that I have just “tuned out” the boats.

After breakfast we go to a different area of the lake, to watch more leg-rowing fishermen, who use a different style of net (and who are quite obviously really trying to catch fish), and then the "island builders". They harvest "lake weed" (mainly a type of water hyacinth) and place it on the "floating gardens", which are essentially just vast vegetation bundles bound together with bamboo, but on which some of the islanders live. These are very productive agricultural resources for growing lots of things, and tomatoes do particularly well. After about 10-15 years a particular area is left to disintegrate and return to the lake, and they start on another one.

This visit is followed by one of the more peaceful moments of the whole trip, drifting without engines down a "side street" of one of the villages. Great reflections, and observations of village life. It’s intriguing to see one crew demolishing one of the stilt houses, and another one building a new one. Their boats are all tied up neatly underneath, not unlike a row of white Transit vans at an equivalent site in the UK.

Then it’s a trip to a weaving centre, where they create beautiful cloth from cotton, silk and fibres of the lotus plant, which grows on the lake. Photographically it’s a bit of a challenge given the high dynamic range of the lighting, but everyone is very friendly and accommodating, and we make appropriate use of the well-stocked shop.

After lunch and a break, we gather in our longyis for the group photograph. I have also supplemented mine with a rather nice cotton top from the weaving shop, and look every inch the Burmese gent, once I’ve been reminded to remove my Italian mountain shoes and socks!

Another hour on the lake at sunset is pleasant, and Shine has persuaded one of the waitresses from the hotel to model for us. Tomorrow morning we visit the floating market, then start the long journey home.

Posted in Myanmar Travel Blog, Travel | 1 Comment