Category Archives: Thoughts on the World

Do We Want Product Development, or Platform Flexibility?

There’s been a bit of noise recently in the photography blogosphere relating to how easy it is to make changes to camera software, and why, as a result, it feels like camera manufacturers are flat out not interested in the feature ideas of their professional and more capable enthusiast users. It probably started with this article by Ming Thein, and this rebuttal by Kirk Tuck, followed by this one and this one by Andrew Molitor.

The problem is that my "colleagues" (I’m not quite sure what the correct collective term is here) are wrong, for different reasons. They are all thinking of the camera as a unitary product, and none of them (even Molitor, who claims to have some experience as a system architect) are thinking of the camera, as they should, as a platform.

OK, one at a time, please…

There are a lot of good ideas in Ming Thein’s article. A lot of his suggestions to improve current mirrorless cameras are good ones with which I agree. The trouble is that he is trying to design "Ming Thein’s perfect camera", and I suspect that it wouldn’t be mine. For a start it would end up far too heavy, too expensive and with too many knobs!

Kirk Tuck gets this, and his article is a sensible exploration of trade-offs and how one photographer’s ideal may be another’s nightmare. However, he paints a picture of flat-lining development which is very concerning, because there are some significant deficiencies in current mainstream cameras which it would be great to address.

Andrew Molitor then picks up this strand, and tries to explain why all camera feature development is difficult and prohibitively expensive, and why Expose to the Right (ETTR) is especially difficult. Setting aside that referring to Michael Reichmann as "a pundit" is unkind and a considerable underestimation of that eminent photographer’s capabilities, there are several fallacies in Molitor’s articles. Firstly, it just would not be as difficult as claimed to implement ETTR metering, or any variant of it. It’s just another metering calculation. If you have a camera with some form of live histogram or overexposure warning, then you can already operate this semi-manually, tweaking down the exposure compensation until the level of clipping is what you want. If you can do it via a predictable process, then that enormously powerful computer you call a digital camera can easily be made to replicate the same process quickly and efficiently. That’s what the metering system does. It’s even quite likely that the engineers have already done something similar, but hidden it. (Hint: if you have a scene mode called something like "candle-lit interior", you’re almost there…)
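
To labour the point, here is roughly what that predictable process looks like as code. This is a sketch in Python purely for illustration: read_histogram and the step size are my assumptions, not any manufacturer’s API, but the logic is nothing more than the manual histogram-and-dial routine, automated.

```python
CLIP_LIMIT = 0.005   # tolerate at most 0.5% of pixels in the top bin
MAX_STEPS = 12       # give up after four stops of 1/3 EV adjustment

def ettr_compensation(read_histogram, base_exposure):
    """Walk the exposure down in 1/3 EV steps until highlight clipping
    falls below the limit, exactly the manual process automated."""
    ev = 0.0
    for _ in range(MAX_STEPS):
        hist = read_histogram(base_exposure + ev)  # normalised histogram
        if hist[-1] <= CLIP_LIMIT:                 # fraction in the top bin
            return ev                              # compensation to apply
        ev -= 1.0 / 3.0                            # a third of a stop darker
    return ev                                      # darkest value tried
```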

I suspect the calculations of grossed-up cost are also fallacious. If that were the case, in a market which manages US sales of only a few tens of thousands of mirrorless cameras per year (for example), we would never get any new features at all. The twin realities are that by combining multiple features into the normal streams of product or major release development, many of the extra costs are amortised, but we also know that the big Japanese electronics companies apply different accounting standards to development of their flagship products. If Molitor’s argument were correct, we would not see features in each new camera such as a scene mode for "baby’s bottom on pink rug" (OK, I made that one up :)) or in-camera HDR, yet things like that don’t seem to be a problem. I simply cannot believe that "baby’s bottom on pink rug" will generate millions of extra dollars in revenue, compared with a "control highlight clipping" advanced metering mode, which would be widely celebrated by almost all equipment reviewers and advanced users.

So assuming that I’m right, and on-going feature development is both feasible and desirable, where does that leave us?

Ming Thein is not alone in expressing disappointment with the provision of improved features focused on the advanced photographer, and I agree with him that the slow progress is really very annoying. In my most recent review, I identified several relatively simple features which would be of significant value to the advanced photographer, and which could easily be implemented in the software of any good mirrorless camera without hardware changes, including:

  1. Expose to the right or other "automatically control highlight clipping" metering
  2. Optimisation for RAW Capture (e.g. histogram from RAW, not JPG)
  3. Proper RAW-based support for HDR, panoramas, focus stacking and other multishot techniques
  4. Focal distance read-out and hyperfocal focus (the arithmetic is sketched just after this list)
  5. Note taking and other content enrichment
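
To show just how modest some of these asks are, number 4 is barely more than a line of arithmetic. A sketch, using the standard hyperfocal formula; the 0.015mm circle of confusion is my choice of a typical Micro Four Thirds value, not a manufacturer’s constant.

```python
def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.015):
    """H = f^2 / (N * c) + f, with every length in millimetres."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# For example, a 14mm lens at f/8: focus at about 1.65m and everything
# from roughly half that distance to infinity is acceptably sharp.
print(hyperfocal_mm(14, 8) / 1000)   # ~1.65 (metres)
```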

All of these have been identified requirements/opportunities since the early era of digital photography. Many of them are successfully implemented in a few, perhaps more unusual, models. For example the Phase One cameras implement a lot of the focus-related features, the Olympus OM-D E-M5 II does a form of image stacking for resolution enhancement, and Panasonic have just introduced a very clever implementation of focus bracketing in the GX8 based on a short 4K burst. However, by and large, the mainstream manufacturers have not made any significant progress towards them. Even if Molitor’s analysis is correct, and this is all much more difficult than I expect (despite my strong software development experience), you would think that over time there would be at least some limited but visible progress, but no. If the concepts were really "on the product backlog" (to use the iterative development term), then some would by now have "made the cut", but instead we get yet more features for registering babies’ faces…

My guess is that some combination of the following is going on:

  • The "advanced photographer" market is relatively small, and quite saturated. Camera manufacturers are therefore trying to make their mid-range products attractive to users who would previously have bought a cheaper device, and who may well consider just using a phone as an option. To do this, the device needs to offer lots of "ease of use" features.
  • Marketing and product management groups are focused on the output of "focus groups", which inevitably generate lowest-common-denominator requirements which look a lot like current capabilities.
  • Manufacturers are fixated on a particular set of use cases and can’t conceive that anyone would use their products in a different way.

The trouble is that this leaves the more experienced photographers very frustrated. The answer is flexibility. By all means offer an in-camera, JPG-only HDR for the novice user, but don’t fob me off with it – offer me flexible RAW-based multishot support as well. Re-assignable buttons are a good step in the right direction, but they are not where flexibility begins and ends. The challenge, of course, is to find a way to provide this within fixed product cycles and limited budgets.

I think the answer lies with software architecture, and in particular how we view the digital camera. It’s time for us all, manufacturers and advanced users alike, to stop thinking of the camera as a "product", and start thinking of it as a "platform", for more open development. In this model the manufacturer still sells the hardware, complete with basic functionality. Others extend the platform, with "add-ins" or "apps", which exploit the hardware by providing new ways to drive and exploit its capabilities.

We’ve been here before. In the early noughties, mobile phone hardware had evolved beyond all recognition (my first mobile phone was a Vodafone prototype which filled one seat and the boot of my Golf GTI, and needed a six-foot whip antenna!). However, you bought your phone from Nokia, for example, and it did what it did. If you didn’t like the contact management functionality, you were stuck with it.

Then Microsoft, followed more visibly by Apple and eventually Google, broke this model, by delivering a platform, a device which made phone calls, sure, but which also supported a development ecosystem so that some people could develop "apps", and others could install and use those which met their needs. Contact management functionality is now limited only by the imagination of the developer community. Despite my criticism of some early attempts, the model is now pretty much universal, and I don’t think I could go back to a model where my phone was a locked-down, single-purpose device.

The digital camera needs to go the same way, and quickly, before it is over-run by the phone coming at the same challenge from the other side. Camera manufacturers need to stop thinking about "what other features should we develop for the next camera", and instead direct themselves to two questions, one familiar and one not. The familiar one is, of course, "how can we make the hardware even better?" The unfamiliar one is "how can we open up this platform so that developers can exploit it, and deliver all that stuff the advanced users keep going on about?"

Ironically, for many manufacturers many of the concepts are already in place, just not joined up. The big manufacturers all offer open lens mounts, so that anyone can develop lenses for their bodies. In the case of Panasonic, Olympus and the other Micro Four Thirds partners it’s even an open multi-party standard. Panasonic certainly now deliver "platform" televisions with the concept of third-party apps. There’s a healthy community of "hackers" developing modified firmware for Canon and Panasonic cameras, albeit at arm’s length from, and with a slightly ambivalent relationship to, the manufacturers. I’m sure many of those would very much prefer to be working as partners, within an open development model.

So what should such a "platform for extensibility" look like? Assuming we have a high-end mirrorless camera (something broadly equivalent to a Panasonic GX8) to work with as base platform, here are some ideas:

  1. A software development kit, API and "app store" or similar for the development and delivery of in-camera "apps". For example, it should be possible to develop an ETTR metering module, which the user can choose as an optional metering mode (instead of standard matrix metering). This would be activated in place of the standard metering routine, take in the current exposure, and return required exposure settings and perhaps some correction metadata (a sketch of what this contract might look like follows this list). Obviously the manufacturer would have to make sure that any such module returned "safe" values, but in a mirrorless camera it should be very easy to check that the exposure settings are "reasonable" and revert to a default if not. Other add-ins could tap into events such as the completion of an exposure, or could activate functions such as setting focal distance. The API should either be development-language-agnostic, or should support a well-known language such as Java, C++ or VB. That would also make it easier to develop an IDE (exploiting Visual Studio or Eclipse as a base), emulators and the like. There’s no reason why the camera needs an "open" operating system.
  2. An SDK for phone apps. This might be an even easier starting point, albeit with limitations. Currently manufacturers such as Panasonic provide some extended functions (e.g. geotagging) via a companion app for the user’s phone, but these apps are "closed", and if they don’t do what you want, that’s an end of it. It would be very easy for these manufacturers to open up this API, by providing libraries which other developers can access. My note taking concept could easily be delivered this way. The beauty of this approach is that it has few or no security issues for the camera, and the application management infrastructure is delivered by Google, Apple and Microsoft.
  3. An open way to share, extend and move metadata. Panasonic support some content enrichment, but in an absolutely nonsensical way, as those features only work for JPEG files. What Panasonic appear to be doing is writing to the JPEG EXIF data, without even copying it to the RAW files. The right solution is support for XMP companion files. These can then accompany the RAW file through the development process, being progressively enhanced by different tools, and relevant data will be permanently written to the output JPEG. This doesn’t have to be restricted to static, human-readable information. If, for example, the ETTR metering module can record the difference between its exposure and the one set by the default matrix method, then this can be used by the RAW processing to automatically "normalise" back to standard exposure during processing. XMP files have the great advantages that they are already an open standard, designed to be extensible and shared between multiple applications, and it’s pretty trivial to write code to manipulate them, so this route would be much better than opening up the proprietary EXIF metadata structures.
  4. A controllable camera. What I mean by this is that the features of the camera which might be within the scope of the new "apps" must be set via buttons, menus and "continuous" controls (e.g. wheels with no specific set positions), so that they can be overridden or adjusted by software. They must not be set by fixed manual switches, which may or may not be set where the software requires. The Nikon Df or the Fuji X-T1 may suit the working style of some photographers – that’s fine – but they are unsuited to the more flexible software environment I’m envisaging. While I prefer the ergonomics of "soft" controls, in this instance they are also a solution which promotes flexibility, which is what we’re seeking to achieve here.
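
To make point 1 concrete, here is the sort of add-in contract I have in mind, sketched in Python for brevity (though I argued above for Java, C++ or VB). Every name below is hypothetical: this is the shape of an API I would like to see, not anything any manufacturer has shipped.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Exposure:
    shutter_s: float   # e.g. 1/250s is 0.004
    aperture: float    # f-number
    iso: int

class MeteringModule(ABC):
    """A third-party metering add-in, selectable like any built-in mode."""

    @abstractmethod
    def meter(self, histogram: list[float], current: Exposure) -> Exposure:
        """Given the live histogram and current settings, return the
        exposure this module wants, recording any correction metadata."""

def sanity_check(proposed: Exposure, default: Exposure) -> Exposure:
    """The firmware's safety net: fall back to the default metering
    result if an add-in returns something unreasonable."""
    plausible = (0.000125 <= proposed.shutter_s <= 60.0
                 and 0.95 <= proposed.aperture <= 22.0
                 and 100 <= proposed.iso <= 25600)
    return proposed if plausible else default
```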

This doesn’t have to be done in one fell swoop, and it might not be achieved (or even appropriate) 100% for every camera. That’s fine. Panasonic, for example, could make a great start by opening up the "Image App" library, which wouldn’t require any immediate changes to the cameras at all.
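
Point 3 is just as approachable. An XMP sidecar is a small, well-specified XML file, and writing one is genuinely trivial, as the sketch below shows. The packet layout is the standard Adobe XMP structure, but the ettr:ExposureOffset property is pure invention on my part, standing in for whatever correction metadata a metering add-in might record.

```python
def write_sidecar(raw_filename, offset_ev):
    """Write a minimal XMP companion file next to a RAW file."""
    sidecar = raw_filename.rsplit(".", 1)[0] + ".xmp"
    with open(sidecar, "w", encoding="utf-8") as f:
        f.write(
            '<x:xmpmeta xmlns:x="adobe:ns:meta/">\n'
            ' <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">\n'
            '  <rdf:Description rdf:about=""\n'
            '      xmlns:ettr="http://example.com/ns/ettr/1.0/"\n'
            f'      ettr:ExposureOffset="{offset_ev:+.2f}"/>\n'
            ' </rdf:RDF>\n'
            '</x:xmpmeta>\n'
        )

write_sidecar("P1040123.RW2", -1.33)   # creates P1040123.xmp
```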

So how about it?

Posted in Agile & Architecture, Code & Development, Photography, Thoughts on the World

From Nobgang to Bumthang…

Yak at the top of the Pelela Pass
Camera: Panasonic DMC-GX8 | Date: 20-11-2015 10:57 | Resolution: 3575 x 3575 | ISO: 400 | Exp. bias: -0.33 EV | Exp. Time: 1/640s | Aperture: 8.0 | Focal Length: 300.0mm | Lens: LUMIX G VARIO 100-300/F4.0-5.6

Via Nobding (with more phalluses) – I couldn’t make this up if I tried!

Today was essentially a very long and somewhat boring drive, to the “alpine” bit of Bhutan. Although start and end are probably only 50km apart as the crow flies, the road takes 200km as it hugs the sides of the very steep valleys, and crosses 3 passes all well over 3000m. On a normal day, the bus trip takes at least 10 hours (an average of about 20kph, including stops).

However, to make things significantly worse the Bhutanese have initiated a completely crazy programme of road improvement, which really isn’t working and slows everything down even further. At least 70% of the route is currently "undergoing widening", but rather than having a few moderate to large teams focusing on specific sections, they seem to have decided to try and do it all at once, with a large number of small teams doing almost the same work concurrently. What this means in practice is that for much of the route they have just finished drilling/dynamiting/digging the bank for the widened route, but you now have a road which is regularly almost blocked by heaps of stone either waiting to be taken away, or being assembled for the next stage, reinforcing the banks. Also the original surface is now either broken up, or covered in rock and mud. There’s a lot of big machinery busy doing the digging and moving the rock and soil around, but very little evidence of anything at any other stage. I estimate the average speed has dropped to 15kph for a bus, or rather less than 10mph, and it’s all very uncomfortable, with a very uneven surface and large amounts of dust throughout the journey.

If it were me, I’d have a much smaller number of larger teams, with each section in a "pipeline" – a group doing digging and basic earthworks, one or more behind them doing reinforcing, bridges etc., and the last one surfacing. The road users might experience a few short stretches with perhaps bigger challenges, but offset by most of the journey being on either old, untouched road (fine, if a bit narrow), or, by this point in time, on some stretches of new, wide and fully surfaced road.

A "big parallel waterfall" method never, ever works in software development. It doesn’t appear to work in roadworks either.

The worst thing is that we have to do it all in reverse on Sunday.

OK. Rant over.

Great lunch, and dinner, both including recognisable and very tasty beef dishes. We’ve obviously moved into an area with cuisine more compatible with my normal diet.

The hotel in Bumthang is wonderful. It has literally just opened, and reminds me of an official park lodge in the US (but brand new). I have a room you could kick a football in, all done in lovely wood. Even the dragons in the foyer (just to remind you that you are still in Bhutan) are carved in the same wood and not painted. Very elegant. We haven’t seen Bumthang yet as we arrived in the dark, but it’s meant to be very pretty, so fingers crossed.

First thing tomorrow we have been invited to attend an assembly at the local school, which should be fascinating.

Posted in Bhutan Travel Blog, Thoughts on the World, Travel

SharePoint: Simply C%@p, or Really Complicated C%@p?

There’s a common requirement for professional users of online document management systems. Sometimes you want to have access to a subset of files offline, with the ability to upload changes when you have finished work and are connected again. Genuine professional document management solutions like Open Text LiveLink have been able to do this for years, frequently with a little desktop add-in which presents part of the document library as a pseudo-drive in Windows Explorer.

Microsoft SharePoint can’t do this. It has never been able to do this, and it still can’t. Microsoft have worked out that it’s a requirement, they just seem completely incapable of implementing a usable solution to achieve it, despite the fact that doing so would instantly bridge a significant gap between their online DM solution and their desktop products.

For the first 10 years, they had no solution at all. Then Office 2010 introduced "Microsoft SharePoint Workspace 2010". This promises, but under-delivers. It can cache all documents in a site into a hidden folder on your PC, and allows access to them through an application which looks a little bit like Windows Explorer, but isn’t. It’s very fiddly, and breaks all the rules about how you expect Office apps to work. It’s also slow and unreliable. Google it, and you find bloggers who usually praise Microsoft products to the skies using words like "execrable". Despite at least three Office releases since 2010, Microsoft don’t appear to have made any attempt to fix it.

There’s now an alternative option, in the form of OneDrive for Business. This has a different balance of behaviours. On the upside, you can control where it syncs files so that they do appear in Explorer in a controlled fashion. On the downside, you can only link to a single SharePoint site (not much use if you have a client with multiple sites for different groups), and it still insists on synching all files in bulk, which is not what you want at all. On top of that I couldn’t get it to authenticate reliably, and was seeing a lot of failed synchronisations leaving my copy in an indeterminate state. There’s supposed to be a major rewrite in progress, bringing it more in line with the personal version of OneDrive, which works quite well, but no sign of anything useful yet…

Having wasted enough time on a Microsoft-only solution, I reverted to a solution which does work fairly well, using the excellent Syncback Pro. You have to log in using Internet Explorer and the "keep me signed in" setting before it will work, but after that it delivers exactly what I want, allowing the selection of an exact subset of files, and the location of the copy on your PC, with intelligent two-way synchronisation. Perfect.

Perfect? Well, sort of. Syncback works very well, but even it can’t work around some fundamental limitations of SharePoint. The biggest problem is that when SharePoint ingests a file, it resets both the file modified date and the file created date to the date and time of ingestion! When you export or check the file, it therefore appears to be a changed, later version than the one you uploaded. Proper professional DM systems just don’t do this, and the Syncback guys haven’t found a solution. Worse, I discovered that the SharePoint process was marking some files as checked in, and therefore visible to other users, and some as still checked out to me, and therefore invisible to others.

The latter is a real problem, since the point of uploading the files is to share them with others. It’s also very fiddly to fix as SharePoint doesn’t seem to provide any list of files checked out, and there’s no mechanism to check files in in bulk – you have to click on each file individually and go through the manual check-in process.

Aha, I thought. Surely Microsoft’s excellent development tools will allow me to quickly knock up a little utility to search through a site, find the files checked out to me, and programmatically check them in. Unfortunately not. The first red flag was the fact that on a PC with full installations of Office and a couple of versions of Visual Studio, there’s no installed object model for SharePoint. After a lot of Googling I found a download called the "Office Developer Tools for VS 2013". I didn’t think I needed this, given what I already had installed, but ran the installer anyway. This took longer to complete than a full installation of Office or Visual Studio would, and in the process silently closed all my open Office apps, losing some work. When it finished I still couldn’t see the SharePoint objects immediately, but adding a couple of references to my project manually finally worked. Right up to the point where I tried to test run the project, at which point the execution failed on the first line. It appears that these objects are designed to support development only where the code executes on a server running SharePoint – there’s no concept of a desktop tool remotely interrogating a library.

OK, I thought. What about web services? I remember in the early days of SharePoint I was able to use SOAP web services to access and interrogate it, and I thought the same should still be true. To cut a long story short, that’s wrong. There’s no simple listing of the API, and attempting to interrogate the services using Visual Studio’s usually excellent tools failed at the first post, with unresolvable authentication errors. In addition they seem to have moved to a REST API, which is fundamentally much more difficult to drive if you don’t have a clear API listing. A lot of developers seem to be complaining about similar issues. I did find a couple of articles with sample code, but it all seems to be very complicated compared with what I remembered of the original SOAP API.
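
For the record, the utility itself should amount to very little code. The sketch below, in Python for brevity, shows what I was aiming for, using the endpoints as documented for the SharePoint 2013 REST API (where CheckOutType 2 means "not checked out", and checkintype=1 is a major check-in). I never got far enough past the authentication errors to verify the details, so treat this as a statement of intent rather than working code; the site URL and library name are placeholders.

```python
import requests

SITE = "https://client.example.com/sites/project"       # placeholder
HEADERS = {"Accept": "application/json;odata=verbose"}

def checked_out_files(session):
    """List files in the library in any checkout state other than 'none'."""
    url = (SITE + "/_api/web/GetFolderByServerRelativeUrl"
                  "('Shared Documents')/Files"
                  "?$filter=CheckOutType ne 2")
    return session.get(url, headers=HEADERS).json()["d"]["results"]

def check_in(session, server_relative_url, request_digest):
    """Check a single file in as a major version."""
    url = (SITE + "/_api/web/GetFileByServerRelativeUrl"
                  "('" + server_relative_url + "')/CheckIn"
                  "(comment='bulk check-in',checkintype=1)")
    return session.post(url, headers={**HEADERS,
                                      "X-RequestDigest": request_digest})
```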

After wasting a couple of hours on "quickly knocking up a little utility" I gave up, at least for now. Back to the manual check-in method…

I’ve never been a fan of SharePoint, but it appears to be getting worse, not better. At least the first versions were simply cr@p. The new versions are very complicated cr@p.

Posted in Agile & Architecture, Code & Development, Thoughts on the World

The Software Utility Cycle

There’s a well-known model called the “Hype Cycle”, which plots how technology evolves to the point of general adoption and usefulness. While there are many variants in the detail, they all boil down to something like the following (courtesy of Wikipedia & Gartner):

Hype Cycle

While this correctly plots the pattern of adoption of a new technology, it hides a nasty truth, that the “plateau of productivity” is not a picture of nice, gentle, continuous, enduring improvement. Eventually all good things must come to an end. Now sometimes what happens is that an older technology is replaced outright by a newer one, and the old one continues in obsolescence for a while, and then withers away. We understand that pattern quite well as well. However, I think we are now beginning to experience another behaviour, especially in the software world.

Welcome to the Software Utility Curve:

Software Utility Curve

We’re all familiar with the first couple of points on this curve. Someone has a great idea for a piece of software (the “outcrop of ideas”). V1 works, just about, and drums up interest, but it’s not unusual for there to be a number of obvious missing features, or for the number of initial bugs and incomplete implementations to almost outweigh the usefulness of the new concept. Hopefully suitably encouraged and funded, the developers get cracking, moving up the “Escarpment of Error Removal”. At the same time the product grows new, major features. V2 is better, and V3 is traditionally stable, useful and widely acclaimed (the “Little Peak of Usefulness”).

I give you, for example, Windows 3.1, or MS Office 4.0.

What happens next is interesting. It seems to be not uncommon that at this point the product is either acquired, or re-aligned by its parent company, or the developers realise that they’ve done a great job, but at the cost of some architectural dead-ends. Whatever the cause, this is the point of the “Great Architectural Rewrite Chasm”. The new version is maybe on a stronger foundation, maybe better integrated with other software, but in the process things have changed or broken. This can, of course, happen more than once…

MS Office 95? Certainly almost every alternative version of Windows (see my musings on the history and future of Microsoft Windows).

The problems can usually be fixed, and the next version is back to the stability and utility of the one at the previous “Little Peak of Usefulness”, maybe better.

Subsequent versions may further enhance the product, but there may be emerging evidence of diminishing returns. The challenge for the providers is that they have to change enough to make people pay for upgrades or subscriptions, rather than just soldiering on with an old version, but if the product is now a pretty much perfect fit to its niche there may be nowhere to go. Somewhere around Version 7 or 8, you get a product which represents a high point: stable, powerful, popular. I call this the “Peak of Productivity”.

Windows 7. Office 2003. Acrobat 9.

Then the rot sets in, as the diminishing returns finally turn negative. The developers get increasingly desperate to find incremental improvements, and start thinking about change for its own sake. Pretty soon they come up with something which may have sounded great in a product strategy meeting, but which breaks compatibility, or the established user experience model, and we’re into negative territory. The problems may be so significant that the product is tipped into another chasm, not just a gentle downhill trundle.

Ladies and Gentlemen, I proudly present to you Microsoft Office 2007. With its ribbon interface which no-one likes, and incompatible file formats. We also proudly announce the Microsoft Chair of Studies into the working of the list indentation feature…

I’m not sure where this story ends, but I feel increasing frustration with many of the core software products we all spend much of the day with. MS Office 2010+ is just not as easy to use as the 2003 version. OK, youngsters who never used anything else may be comfortable with the ribbon, but I’m not convinced. I’m not sure I ever asked for the “improvements” we have received, but it annoys me intensely that we still can’t easily set the indents in a list hierarchy, save the style, and have it stay set. That said, I have to credit Microsoft with a decent multi-platform solution in Office 365, so maybe there’s hope. Acrobat still doesn’t have the ability to cut/paste pages from one document to another, although you can do a (very, very fiddly) drag and drop to achieve the same thing… And this morning I watched an experienced IT architect struggling with settings in Windows 8, and eventually helped him solve the problem by going to Explorer and doing a right click, Manage, which fortunately still works like it did in Windows NT.

There’s an old engineering saying: “If it ain’t broke, don’t fix it”. Sadly the big software companies seem to be incapable of following that sound advice.

Posted in Agile & Architecture, Thoughts on the World

Lies, Damn’ Lies…

The trouble Volkswagen have got themselves into may be symptomatic of a wider malaise, and we may find that their main failing is breaking the 11th Commandment.

Most people, quite naturally, tend to believe the information provided by their gadgets. Between my training as a physicist, my fascination with numbers and my professional leanings, I’m definitely inclined to the view expressed in the famous quote "never believe anything you read in a newspaper except the date, and that only after you have checked it in a calendar". I’m always trying to cross-check the instrumentation of everyday equipment, to understand which are accurate, and which not. This goes especially for all those read-outs in a car, with ideal opportunities on long journeys.

A car’s speedo, for example, can be cross-checked against a GPS with a speed readout. The latter tends to lag slightly behind the actual value, but can be very accurate once you are travelling at a constant speed, such as on the motorway with the cruise control engaged. I reckon a GPS is good to within about 0.5mph under those conditions. Alternatively, there’s always the old "Sherlock Holmes" method, which I used to use before the GPS days: travel at a constant speed and time yourself past 17.5 of those little blue posts. That’s one mile, and as the great detective says in Silver Blaze, "the calculation is a simple one": your speed in mph is just 3,600 divided by the time in seconds for the mile.

Over the years I’ve seen a steady improvement in the accuracy of speedometers. In my early motoring years it wasn’t unusual to find the speed being exaggerated by as much as 5mph at motorway speeds, but my latest car, the Mercedes E-Class, seems to be accurate to about 1mph at speeds as fast as I can safely check on British motorways.

For some reason, that’s not true of fuel efficiency. The most accurate way to measure that is the old one: fill up to the brim (or at least the pump cutout) and zero the trip counter. When the tank is nearly empty fill up again, and divide miles by gallons, or litres, depending on your persuasion. That measurement is probably accurate to about ±3%, maybe better, or less than 1mpg in the 30-40mpg range.

Now on my VW Eos, I found that the average fuel economy readout from the trip meter consistently agreed with my own calculation to within about 1mpg. Good enough that I stopped checking manually. Not true of the Mercedes. The error varies, but it’s always considerably optimistic, sometimes by as much as 3 or 4mpg on a real figure in the range 32-35mpg. That’s an error in excess of 10%. In absolute terms it’s still very impressive for a big heavy car which can do 0-60 in around 6s, but not as good as you are led to believe…

If you think about it, the reasons are obvious. In older cars, accurate speed measurement was a challenge. Both regulation and psychology inclined towards flattering a car’s performance: the regulations state that any error must show a speed above actual, and that was also desirable in sales terms when cars were slower. Nowadays there’s no benefit to exaggerating the real speed, and a distinct benefit to providing an accurate value if possible, so the driver can maximise use of the speed limit.

The opposite is unfortunately true of fuel economy. My own VW experience suggests that it’s perfectly possible to provide a fairly accurate report (although it’s always possible that I may just have been lucky), and I struggle to understand any technical reason why the Mercedes is so inaccurate. I’ve checked the obvious sources of error, such as an inaccurate odometer, and can’t find anything. However when you consider the psychology, the reason is apparent – we all want to think that we’re driving efficient cars, and my Mercedes tells a very good story, if only I weren’t a cussed so-and-so who checks things!

While an inaccurate fuel economy read-out is nothing new, and probably hasn’t broken any laws the way the VW diagnostic software did, it does appear that the general issue may be broader than we think.

Posted in Thoughts on the World

A Laser-Like Focus?

Market Traders, Marrakech
Camera: Panasonic DMC-GX7 | Date: 10-11-2013 17:20 | Resolution: 3067 x 3067 | ISO: 200 | Exp. bias: -0.66 EV | Exp. Time: 1/400s | Aperture: 8.0 | Focal Length: 77.0mm | Location: Djemaa el Fna | State/Province: Marrakech-Tensift-Al Haouz | See map | Lens: LUMIX G VARIO PZ 45-175/F4.0-5.6

I suspect we all have something which can attract our attention, like a missile locking onto a homing beacon, even against significant background noise. With Frances, it’s shoes. With me, it’s bread!

There was a scene in the excellent, but very complicated, Belgian conspiracy thriller Salamander which demonstrated this. It’s set in a Belgian monastery: in the foreground the central character is discussing the case with his brother, formerly a policeman but now a monk. They are trying to work out who has covered up doing what to whom, and how. In Flemish, so we’re getting this through subtitles. Even by the standards of the rest of the series it’s very, very complicated.

A monk wheels a trolley through the background, destined for the refectory. I go, "Ooh, that’s nice bread"! That breaks our chain of thought and we have to go back about a minute…

I can’t remember, but I think the same happened here. This was taken across the big square in the Marrakesh Medina, through a lot of cooking smoke and dust. The original has almost no contrast, and is quite indistinct. However Capture One has worked its magic and I think the image now works. What attracted my eyes in the first place? Guess…

Posted in Morocco Travel Blog, Thoughts on the World, Travel

It’s Not Just What You Do With It, Size IS Important

Sextant statue in front of the Liver Building, Liverpool
Camera: Panasonic DMC-GM5 | Date: 22-07-2015 19:41 | Resolution: 3423 x 4564 | ISO: 200 | Exp. bias: 0 EV | Exp. Time: 1/800s | Aperture: 5.6 | Focal Length: 14.0mm | Lens: LUMIX G VARIO PZ 14-42/F3.5-5.6

On paper, the Panasonic GM5 should be an ideal "carry around" camera for me. The same sensor and processor as the excellent GX7 and GH4 in a neat pocket-sized package. A proper electronic viewfinder. Access to all the Micro Four Thirds lenses. Panasonic’s engineers have even been cunning beyond the normal behaviour of camera manufacturers: although it has a different battery to its larger brethren, it uses exactly the same charger. I’d managed to get a couple of minutes "hands on" in a shop and was reasonably impressed.

Last week, driven to Amazon by their remarkably "rubbish but effective" Prime Day pseudo-sale, I bit the bullet and ordered one, in a cheerful red. The general capability and image quality, as evidenced above, is all I expected. However, after a few days in my hands it’s going to go back. The reason – size. Like all disappointing love stories, it’s complicated…

It’s Too Large…

Although the GM5 body is tiny, not much larger than a Canon PowerShot S series, put a lens, any lens, on the front, and it becomes too large to put in your trouser pocket, and too large to comfortably travel in my computer bag the whole time. In addition, I really need two lenses to cover a decent zoom range. The Panasonic 14-42mm and 45-175mm power zooms are both tiny, but together they make it into a package which demands a camera bag, in reality no different to using a next size up body.

… But It’s Too Small

In use, the camera is remarkably fiddly. I could live with the small buttons, but their legends and markings have also been scaled down, to a point which is almost invisible to me when I’m wearing my glasses. Also the smaller body puts my hands much closer to the lens and viewfinder in use, and I find that with the camera to my eye my hands are fouling my glasses.

Even wearing the smallest lens I own (the 14-42 PZ), there’s a bad case of "lens too big for the camera", and it won’t even sit flat on the desk. More of an issue, there’s no easy way to carry it in the hand, except gripping right round the body or lens, which makes it difficult to raise to the eye for a quick shot without having to use both hands.

For me, however, the killer is the tiny EVF. Impressive in the shop, in real use out and about, wearing my glasses, it’s almost unusable. The effective view size is tiny, and despite several attempts at adjustment I couldn’t get the view sharp with my glasses. You get, at best, a sense of what’s in shot, rather than being able to scan the picture for meaningful details. (Ideally I would have avoided the sextant statue "fouling" the statue of Edward VII on his horse in the above shot, but I just couldn’t see that detail.) If I can’t use the EVF I’d rather have a camera with a size larger rear screen, to give me some chance of being able to use it with glasses on, and in varying ambient light conditions.

So much though I wanted to like this camera, it isn’t for me. Sometimes engineers can shoot for a compromise between two opposing targets and pull off a remarkable double. My delightfully schizophrenic Mercedes Cabrio is a case in point. Sometimes, however, you end up with the worst of both worlds, and that’s what’s happened here.

Just Right?

Ironically, the day I ordered the GM5, Panasonic announced the follow-up model to my much-loved GX7, unsurprisingly named the GX8. The improvements in pixel count, functionality and weather protection are all almost uniformly welcomed, but there’s been some criticism of the fact that the GX8 is a bit bigger than its predecessor, by about 5mm in height and depth, 10mm in width, and 75g in weight.

Now I love my GX7. It’s my favourite camera of the many I’ve owned. But it’s never been out of the house except wearing the bottom half of the "ever ready case" Panasonic supplied with it. This improves its fit to my hand no end. By my estimate, the ERC adds about 5mm to the height and depth, and about 10mm to the width, and weighs somewhere between 25 and 50g. It sounds like the GX8 is spot on!

I wait with bated breath…

Posted in Photography, Thoughts on the World

Crash, Bang, Wallop, What a Picture

Fireworks Through the Liverpool Eye
Camera: Canon PowerShot S120 | Date: 13-07-2015 23:31 | Resolution: 3920 x 2940 | ISO: 80 | Exp. bias: 0 EV | Exp. Time: 10.0s | Aperture: 6.3 | Focal Length: 5.2mm | Caption: Fireworks Through the Liverpool Eye

I was literally just about to get into bed in my hotel in Liverpool last night, when the air was rent with loud explosions. Fortunately nothing sinister – just fireworks giving a cruise ship a good send-off on her voyage. My hotel room was very well positioned to watch the show, with the fireworks and the ship visible through Liverpool’s "Big Wheel".

I did have my little Canon S120 in my bag, and couldn’t resist trying to capture the scene. I had a minor panic as I ran round the hotel room and rummaged through my bag trying to find something on which to rest the camera – good fireworks photos need exposures of 10s or longer. In the end I think this one was taken with the camera propped up on the TV remote control. Not ideal, but a reasonable success given the circumstances…

Posted in Photography, Thoughts on the World

Can No-One Write A Good Book About Oracle SOA?

I’m frustrated. I’ve just read a couple of good, if somewhat repetitive, design pattern books: one on SOA design with a resolutely platform-neutral stance, and another on architecting for the cloud, with a Microsoft Azure bent but which struck an admirable balance between generic advice and Microsoft-specific examples.

So far so good. However, although the Microsoft Azure information may come in handy for my next role, what I really need is some good-quality, easy-to-read guidance on how current generic guidance relates to the Oracle SOA/Fusion Suite. I identified four candidates, but none of them seems worth pursuing:

  • Thomas Erl’s SOA Design Patterns. This is very expensive (more than £40 even in Kindle format), gets a lot of relatively poor reviews, and I didn’t much like the last book I read by the same author.
  • Sergey Popov’s Applied SOA Patterns on the Oracle Platform. This is another expensive book, but at least you can read a decent-length Kindle sample. However doing so has somewhat put me off. There are pages upon pages upon pages of front-matter. Do I really want to read about reviewers thanking their mothers for having them before I get to the first real content? Fortunately even with that issue the sample gets as far as an introductory chapter, but this makes two things apparent. Firstly, the author has quite a wordy and academic style, but more importantly he has re-defined the well-established term "pattern" to mean either "design rule" or "Oracle example", neither of which works for me. However I really parted company when I got to a section which states "… security … is nothing more than pure money, as almost no one these days seeks fun in simple informational vandalism", and then went off into a discussion of development costs. If this "expert" has such a poor understanding of cyber-security it doesn’t bode well…
  • Harish Gaur’s Oracle Fusion Middleware Patterns. Again, this appears to have redefined "pattern" as "Opportunity to show a good Oracle example", but that might be valid in my current position. Unfortunately I can’t tell you much more as the Kindle sample finished in the middle of "about the co-authors", before we get to any substantive content at all. As it’s another relatively expensive book with quite a few poor reviews I’m not sure whether it’s worth proceeding.
  • Kathiravan Udayakumar’s Oracle SOA Patterns. Although only published in 2012, this appears to already be out of print. It has two reviews on Amazon, one at one-star (from someone who did try and read it) and one at three stars (from someone who didn’t!).

In the meantime I’ve started what looks like a much more promising book, David Chappell’s Enterprise Service Bus. This appears to be well-written, well-reviewed and reasonably priced. What really attracts me is that he’s attempted to extend the "Gregorgram" visual design language invented for Enterprise Integration Patterns to service bus architectures, which was in many ways the missing piece from the Service Design Patterns book. The book may be a bit too dated and Java-focused to give me an up-to-date technical briefing, but as it’s fairly short that’s not an issue.

After that it’s back to trying to find a decent book which links all this to the Oracle platform. If anyone would like to recommend one please let me know.

Posted in Agile & Architecture, Reviews, Thoughts on the World

Things Which Really Bug Me About the Kindle

I read a lot using the Kindle applications for Android and PC. While there’s a lot which is good about that process, there are a number of things which really bug me. Some of these look incredibly simple to resolve, from my standpoint as a competent software developer, and I have to question whether Amazon actually care about getting the user experience right…

Changing Font Size

The current behaviour of the font selection option is completely brain-dead, especially when switching between documents. Suppose I open one book which has been composed using a large base font. The text comes up very large and I set my font size to 2. I then open a second book, which has been composed using a smaller base font, and I have to change the font setting to 4 to get back to a size I’m comfortable with. Open the first document and the text is now enormous!

The application should actually work as follows. I would set a preferred font face and size, and that would just be used automatically for all the bulk text in all documents. Anything styled with standard tags like Normal, Body Text or List should just use my selected font and size. Automatically. Paragraphs with heading styles would use progressively larger fonts, and the style might change to an author preference, although I should be able to override that.

If that’s not possible, although I really don’t understand why not, then any change I make to my settings should apply only for a single document, and my settings for each document should be remembered if I switch between them. If I have to set size 2 in one document and size 4 in another to get a consistent reading experience the app should remember that.

Have the developers ever actually used the devices and apps with real eBooks?

Collections and Tagging

When, early on, you have half a dozen books in your Kindle account, the lack of effective library management tools is not too much of an issue. When, like us, that library has grown to several hundred titles, this starts to be a major problem.

Amazon allege that the solution is to use collections. That might help, if it weren’t for another brain-dead implementation. Collections on the physical Kindle are a local data structure, effectively invisible to other devices. In the Android app they are quite a usable feature, and sync with other Android devices, but not other platforms. On the PC you can create local collections, and allegedly import collections from physical Kindles (although I haven’t got that to work) but the collections are then completely independent of all other devices.

Is this really the best that can be achieved by one of the leading cloud services companies? Surely it’s not rocket science to come up with an architecture for collections / lists and tags, which is synchronised with the cloud account from and to all devices on the account? (And I note that there can’t possibly be any real technical issue, because notes and highlights synchronise perfectly across all my devices…)
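
To be clear how simple this could be, here is the sort of cloud-synced record I have in mind. This is a sketch of one obvious design, in Python for brevity, and emphatically not Amazon’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Collection:
    account_id: str          # keyed to the Amazon account, not a device
    name: str
    asins: set[str] = field(default_factory=set)   # device-agnostic book IDs
    updated_at: float = 0.0  # timestamp for last-writer-wins sync

def merge(local: Collection, remote: Collection) -> Collection:
    """Resolve a sync conflict: the newest edit wins, which is evidently
    how notes and highlights already behave across devices."""
    return remote if remote.updated_at > local.updated_at else local
```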

Again, this looks like the developers are either stupid, or lazy, or completely indifferent to the implications of their substandard work.

Book Descriptions

If you are reading a book on the Kindle, you can quickly pop up some key descriptive details. Relatively recently Amazon have supported the same feature in the Android app, although it doesn’t work for books which aren’t open. On the PC it’s not supported at all.

There are three sets of books for which I would like to be able to quickly access descriptive details, ideally on- and off-line:

  • Books I have downloaded to my device, but which I’m not currently reading
  • Books in my archive, to remember which is which
  • Books which are being recommended by Amazon within my mobile reading experience, e.g. the recommendations panel on the home page of the Kindle app.

No, I do NOT want to "view in store", especially if it’s a book I’ve already downloaded and I’m just not 100% sure which is which from the cover image, and I’m offline. And I don’t really want to have to open up a book to see its description. Surely it wouldn’t be rocket science (again) to download the key descriptive details for all the books in the above categories at every sync, and have those details available via a long press from the overview pages, just like they would be from within an open book?

Position References

Some books insist on referring internally by using a page number from the printed edition. If you’re referring to a specific position in a book in the outside world, this is also still a common practice (and probably the only viable one unless the book has quite a fine-grained and well-numbered heading structure). Kindle insists on referring to and navigating locations using an internal "position" reference, which not only has zero relationship to the outside world, but can change from time to time depending on font choice and other settings. Therefore unless you have access to the physical edition as well as the eBook, you’re stuffed. It’s not even easy if you have a relative reference (e.g. page 200 of 300), because you have to get the calculator out to work out that this is equivalent to "position 3595 of 5393".

It would undoubtedly be better if authors creating Kindle versions of technical and reference books made sure all internal references were simply hyperlinks to the right point in the document. However I’m sure Amazon could help as well. How about, for example, holding the page count of the physical edition(s) against the Kindle version, and modifying the "Go To" dialog so that I can specify the target position as a percentage, or as a page number relative to the page count for the physical edition?
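
The arithmetic really is trivial, as the sketch below shows; all Amazon would need to hold is the physical edition’s page count:

```python
def page_to_position(page, page_count, max_position):
    """Map a physical page reference to a Kindle position by simple
    proportion, which is exactly the sum I currently do by hand."""
    return round(page / page_count * max_position)

print(page_to_position(200, 300, 5393))   # 3595
```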

The Back Button

The physical Kindle and all Android devices have a "back" button, which should take you back steadily through your work contexts, like the back button on a browser. On the Kindle, or the PC app, this behaves as you’d expect. If you follow a link within a book, then it takes you to a new page, but the back button takes you back to the page you were previously reading. Only when you get back to your first context does it take you right out to the menu. Not on Android. Click on a link to an external source, and the back button takes you back into Kindle at the right point. So far so good. Click on an internal link, and the back button takes you right out of the book. To make matters worse it has now remembered the location you navigated to as your "current" location, so to get back to where you were previously you have to navigate manually. Completely useless, and presumably about 1 line of code to fix properly.

Conclusions

I don’t think I’m being unreasonable here. Amazon make a vast amount of money out of the Kindle platform, and could make more if it is a sound platform for reference books as well as novels and the like. None of these issues would take a vast amount of effort to fix, just the will to be bothered and do a professional job. Amazon’s persistent indifference on these points reveals an attitude which bugs me even more than the issues themselves.

Posted in Agile & Architecture, Thoughts on the World

A First Day Mistake I’ve Never Seen on LinkedIn

LinkedIn is full of useful little articles about mistakes not to make in the world of work. However here’s one I’ve never seen mentioned. I’ve just had a kick-off meeting with a new client. In order to appear friendly and unthreatening I dressed in a dark green suit, with a brighter green shirt. Unbeknown to me, the brighter green is not only quite similar to one of the company’s logo colours, it’s also the colour they have chosen for many of the walls and much of the furniture at their offices. Take off my jacket, and I was approaching sniper levels of camouflage. There’s a lesson here somewhere…

Posted in Thoughts on the World

Scary Format Reversal

My penultimate purchase of music on vinyl was in 1989. I think, if memory at this distance serves, it was Running in the Family by Level 42. In the intervening 26 years I have felt very little need to use anything other than CD or purely electronic formats.

That all went out of the window last week, when I tried to track down a particularly arcane track by the King’s Singers (their version of Eurovision winner Ding-a-Dong, if you must know). Despite their enduring popularity, their album Lollipops has apparently never been released in a digital format. However a few minutes on eBay and £9 later, I tracked down the LP, which turned up a few days ago nicely packed and in good order. Our record deck with a USB output and EZ Vinyl/Tape Converter made quick work of digitising it, although it did get a bit confused by the track on side 2 with the substantial rests… Makes you wonder why the youth of today are so obsessed with all this downloading business when the alternative is so straightforward :)

Posted in Thoughts on the World