Is Theatre Killing Theatre?

Is the theatre its own worst enemy? Is it the engine of its own destruction?

Let me explain what I mean. We love the cinema. We go most weeks, and most weeks we come away feeling well entertained, even inspired. We have a pretty high hit rate: I keep a note of the films we see and score them out of 10 – this year we have awarded several 9s and a couple of 10s. The last film to score less than 5 was Guy Ritchie’s execrable King Arthur over a year ago. (Admittedly, that was so bad we had to rush home and watch the Antoine Fuqua / Clive Owen version just to remind ourselves what good looks like, but one failure a year, at a cost of about £25, is something I can accept.)

Going to the cinema can even be an "event". In the spring we caught the first showing of Avengers: Infinity War in Barbados. With the assembled "Marvel fans of Barbados" this was not unlike a good panto – applause for the heroes and cameos, boos for the villains, mass cheers and gasps in all the right places. Hilarious. We also went to the Dambusters 75th Anniversary event, with a great introduction broadcast live from the Royal Albert Hall, followed by a beautifully cleaned-up restoration of the film. Again, wonderful.

But surely, it must be even more magical seeing great actors in person on the stage? Maybe, but our practical experience varies. For a start, you don’t always get to see the names you expect. Benedict Cumberbatch, Tom Hollander, John Lithgow and Keeley Hawes are just some of the famous actors we paid to see on stage, and didn’t, thanks to last-minute cast changes. We did get to see F. Murray Abraham in The Mentor. He was fine, but the play was only about an hour long, and a load of introspective b*****s. We came away feeling somewhat short-changed.

Even more disappointing: Robert Powell and Liza Goddard in Sherlock Holmes – The Final Curtain. We saw Robert Powell play Sherlock Holmes once before, in the hilarious Sherlock Holmes: The Musical, so we had a not unreasonable expectation of being entertained again in like style. Sadly not. The new play is a dark, grim, rambling, soul-searching piece with neither action nor humour. The plot, such as it is, centres on Mary Morstan/Watson turning out to be Moriarty’s sister, which raises a question, never well answered, of why she waits 30 years to attempt her revenge. It runs for about 40 minutes each act, which is a relief given the poor writing, but poor value for money in any event. To add injury to insult, this was our first visit to the Rose Theatre in Kingston, which is cramped, dark and poorly ventilated, with a poor view from about 20% of the seats. There’s a reason why round Tudor theatres were replaced by square or horseshoe-shaped ones…

Now we really enjoy the theatre, with the right content. There are some stalwarts: the local pantomimes, and musicals with high production values. (For example, the current West End revival of Chess is absolutely superb, but good seats, travel and a meal beforehand are going to cost around £200 a head.) It’s also perfectly possible for theatre, even on a budget production, to hit all the spots. A few months ago we saw David Haig’s Pressure, a delightful play about both the mechanics and the personal dynamics of the D-Day weather forecasts. It was educational, telling an important true story which deserves exposure; enthralling (we know the final score, but not how close it came); and entertaining – laugh-out-loud funny in the right places.

The trouble is that while we seem to be seeing more we enjoy on both the small and silver screens, it seems to be more and more difficult to find genuine entertainment on stage. The tendency towards grim introspection seems to be catching. For years one of our favourite theatres, The Orange Tree in Richmond, mixed into its programme both unusual subjects (the story of Gerald Bull and the Iraqi supergun) and innovative entertainment (French farces in the round, with sound effects instead of the usual multiple doors). However, for the last couple of seasons the fare has been endless relationship dramas, and nothing has appealed.

It’s generally a challenge, and discouraging when a night at the theatre costs so much. Disappointment might be better managed if theatres were obliged to be more truthful in describing their repertoire: obliged to use words like "grim", "gloomy" and "introspective" where appropriate, and forbidden to use the word "comedy" unless it’s actually funny. However, I suspect a challenge under the Trade Descriptions Act might be tricky…

This leaves us going less and less frequently to the theatre, and seeking other forms of entertainment instead. I know we’re far from alone – very few of our friends go even as often as we do. Oh well, there’s always the flicks.

Posted in Thoughts on the World

That Was Too Easy…

There is an old plot device, which goes back to at least Homer, although the version which popped into my head this evening was Genesis of the Daleks, a 1970s Dr Who story. A group of warriors fight a short but intense battle, and appear to triumph. In Dr Who, the Kaled freedom fighters burst into Davros’s headquarters and think they have dispatched him and his dalek bodyguards. Just as they are starting to celebrate, one of them, typically an old, grizzled soldier who has been round the block a few times, says "Have your instincts abandoned you? That was too easy." True enough, a few seconds later the elaborate trap is sprung, and the tables are turned.

Android 8 is like that. Not that it’s in the service of a malevolent genius, although I’m beginning to wonder, but it lulls you into a false sense of security, and then throws some significant challenges at you.

I got a new phone last week. I have loved my Sony Xperia XA Ultra, which I have used for the last two years, but been constantly frustrated by the miserly 16GB main memory. The Xperia XA1 Ultra is an almost identical device, but with a decent amount of main storage. I had to forgo the cheerfully "bling" lime gold of the XA for a dusky metallic pink XA1, but otherwise the hardware change was straightforward.

So, initially, was the transfer. Android now has a feature to re-install the same applications as on a previous device, and, where it can, transfer the same settings. This takes a number of hours, but seems to work quite well. I had to manually transfer a few things, but a couple of hours in I worked through the list of applications, and most seemed to be in order with their settings. I could even see the same pending playlist in the music player which, after a lot of trial and error, I installed to randomly play music while I’m on the bus.

The new version of the Android alarm/clock app seems to be complete b****cks, and more trouble than it’s worth, but there’s no barrier to installing the old version which seems to work OK. My preferred app to get Tube Status updates is no longer available to download, but I could reload the old version from a backup. So that was most of the problems in the upgrade dealt with.

My instincts had abandoned me. It was too easy…

I had also forgotten Weinberg’s New Law: "Nothing new works."

I got to the gym, and tried to play my music, using the standard Sony music player. Some of it was there, but the playlist I wanted wasn’t. I realised the app could no longer see WMA files (Windows Media format), which make up about 95% of my collection. A bit of googling, and it turned out the recommendation was to install PowerAmp, which I did, and it worked fine.

Then I got on the bus, and tried to play some randomised music. Nothing. The app had the files in its playlist, but couldn’t find them. I rapidly confirmed that the problem again was WMA files, which had suddenly become "invisible" to the app. After yet more trial and error installing, the conclusion is that it’s the Android Media Storage service which is at fault. Apps which build their own index (like PowerAmp) are fine. Apps which are built "the proper way" and use the shared index are screwed, because in the latest version of Android this just completely ignores WMA files.

Someone at Google has taken the decision to actively suppress WMA files from those added to the index. This isn’t a question of a problematic codec or similar – they had perfectly good indexing code which worked, and for some reason it has been removed or disabled. I can only think it’s some political battle between Microsoft and Google, but it’s vastly frustrating that users are caught in the crossfire.

I trust Dante reserved some special corner of Hell for those who break what works, for no good reason. If his spectre wants a bit of support designing it, I’ll be glad to help.

And I’ll resist saying "that wasn’t too bad" when I upgrade my technology…

Posted in Android, Thoughts on the World

Prediction Realised: The AlpinerX

My AlpinerX

In October last year I wrote an article celebrating the hybrid analogue/digital watch and offering some architecture and design observations from my collection of them. I ended up slightly sad about the style’s fall from fashion, but confidently predicting that new models with smartwatch capabilities would be forthcoming. It turned out that I did not have to wait long.

In March Alpina announced the AlpinerX via a Kickstarter campaign. That approach was designed to work around a frequent challenge with new digital watches from smaller brands: guaranteeing sufficient early sales to justify decent batch sizes of the components and materials. Predictably, I was an early backer, and my watch arrived in mid-June.

At first sight, this is simply a classic analogue/digital watch. I have read reviews comparing it with the Breitling Aerospace or Omega Speedmaster X33, but a much closer existing comparator is the Tissot T-Touch Expert Solar. That watch is a similar size, style and price and has a similar sensor set. The two watches share similar deep integration of the hands with the digital functions, so that they become, for example, the needle in compass mode.

However the AlpinerX goes further. It has a couple of extra sensors, including a pedometer and a UV level meter, but this is also a fully-fledged connected smartwatch, just as much as an Apple Watch or equivalent, and it really comes into its own in partnership with your phone.

Size and Styling

Regarding the watch’s design, we should start by addressing the elephant in the room, or more correctly the elephant on the end of your arm. While it’s certainly not the largest gong around, it is a big watch: 45mm in diameter, larger than the Breitling Aerospace, and 14mm thick, much thicker than the Tissot T-Touch Expert Solar (or the latest Breitling Aerospace Evo). This is not going to slide unnoticed under a dress shirt cuff. The size is a result of several factors. The first is fashion – we have all got used to wearing a dinner plate on our wrist like something from The Fifth Element – together with the watch’s outdoor focus. More practically, its composite case (which Alpina call glass fibre) may well have to be a bit thicker than a metal one.

However, although I haven’t confirmed it, my money is on the size of the battery, or batteries. If Alpina’s claims are borne out they have pulled off a remarkable coup: a watch with the rich sensors and connectivity of a smartwatch, but with battery life measured not in hours or days, but around two years, just like its non-connected cousins. I hope that promise comes good: Alpina haven’t provided a “sleep mode” like older T-Touch models have, in which the watch can be put into a battery-saving dormant state when not being used, and I do hope I’m not going to be changing the battery too frequently.

Although it’s quite large, it’s not a heavy watch by any means (a benefit of the composite shell), and it sits comfortably on my fairly average wrist. The Tissot, with its solar power solution, may be slimmer, but the AlpinerX is perfectly wearable, albeit better with casual clothing.

The watch has a simple, clean design, with simple white digits on the face, matching markings on the bezel (which rotates to work with the compass) and clear intermediate markings. The digital display takes up most of the bottom of the dial, dark yellow in background mode, or white on black when lit. Unlike some designs, the digital display has been positioned symmetrically, and all the cardinal points of the analogue display are retained.

Alpina offer buyers the ability to select the colour scheme of almost all elements of the watch, allowing extensive customisation, although in reality the main choices for most components are black or navy – the latter being a sort of dark purple (which I rather like) rather than a completely neutral blue. In the expectation this will be a travel/holiday watch, I have chosen cheerful orange highlights wherever possible: for the hands, the ring, and the stitching on the leather strap. I also have a rubber strap in bright orange, but so far the leather strap has proven adequate for my use, and very attractive with a texture reminiscent of woven carbon fibre.

Operation – General Observations

Operation of the watch is very simple, using the three buttons on the right-hand side. The “pusher” button in the crown lights the display and toggles through the main functions. Within a selected function the bottom button selects sub-functions (e.g. count up or count down) and the top button does start/stop.

The rotating “crown” appears to be simply decorative. As we will see, Alpina have missed a number of opportunities where this could usefully provide setting adjustment, but that’s not the model here. For more complex settings this is a watch controlled not directly, but via the companion app on your phone. That allows the local controls to be simpler, but does sometimes mean that it can take some digging into the app to find out how something is managed.

The free-rotating bezel (with click stops) provides a compass indicator which can be teamed with the compass needle to provide azimuth and heading indication. At least it does something useful!

For those used to more complex smartwatches with high-resolution OLED displays, the simple two-line alphanumeric digital display might look a little crude. However it suffices to provide most of the information you need while actually on the move, and presumably helps deliver the excellent battery life. Again the operating model is for detailed review to be done on the much larger display of the phone. On a positive note, the simple display could be readily combined with the design of any of my Swiss hybrid watches, even the diminutive 1987 Omega Seamaster Polaris, so maybe there’s scope for a smaller, neater variant of the watch at a later date.

When illuminated (which happens by default every time you switch functions or activate the connection to the phone) the digital display is bright and clear. When the backlight is off the digital display is a bit dim, but there’s no issue with the clean, high-contrast analogue indicators (or hands, as they are otherwise known 🙂 ).

The watch has a number of nice touches. For example, one of the challenges with this style of watch (which is also a problem with multiple dial chronograph watches, although it’s rarely mentioned) is that sometimes the hands obscure a key part of the digital display. Alpina has come up with a neat solution to this – simply swing the hands out of the way of the display when the user activates the digital display. (However it has to be said that the neatest solution for smaller watches, adopted by Rado and older Casio and Seiko models, is an oblong case with digital displays above and/or below the dial. Sometimes simple is best.)

Pairing/Connection

Use of the AlpinerX depends heavily on connection to a phone. It is therefore rather annoying that the process of connection can be rather fiddly and unreliable, especially with Android devices. Experiences vary – mine is that the two devices will connect and communicate easily immediately after the phone has been rebooted. However if thereafter the phone’s Bluetooth is turned off and on, or the devices are separated for a long time, then it can be tricky to get the connection working again, and the simplest, but not ideal, solution is to restart the phone.

What seems to happen is that the watch thinks it is connected but the phone does not, and in this mode there’s no reliable way to restart the process. I just hope that Alpina can improve things and deliver a firmware and/or app fix, which at least is an option here.

Timekeeping Functions

Ultimately, setting the extended functions aside, this is a watch, and so needs to provide good basic timekeeping. It therefore comes as a surprise that some capabilities standard in every digital watch since the 1970s are either missing, or delivered in a non-standard and somewhat clumsy fashion.

The biggest omission is the alarm function. Either I am being very stupid, or the AlpinerX doesn’t have one! There is no way to simply set the watch to make a noise at a pre-appointed time of day. You can set the watch to receive a push notification from your phone, and then set your phone to provide the alarm, but Alpina warn that doing so can harm battery life, and if you are going to do so, you might as well just use the alarm on your phone. If your phone suffers from late alarms due to the brain-dead Android “doze” mode, then this watch is not going to help you.

There are no direct controls to set the time on the watch. The idea is that the watch takes its primary time from the phone, which in turn takes the time from the network. This allows an elegant, simple solution to travel adjustments and so forth, but it’s not clear how to make micro adjustments if needed. In my experience “network time” can sometimes be adrift of the time provided by a good watch. If you are in an area where the network does not provide reliable time indication (like during a flight) you will have to adjust your phone manually, and if you don’t have your phone when you need to adjust the time, you’re stuffed.

Operation of the stopwatch is straightforward, but the count-up/count-down timer is really annoying, as you have to set the target value on the phone before it can be used. This is one example where it would be really useful to provide a way (the rotating crown, obviously?) to set the value locally. If I’m going to have to use my phone, I’ll just use the timer app on my phone, or wear a thirty year old watch where this just works.

Fitness Monitoring

On a more positive note, the AlpinerX does provide some very useful fitness monitoring features: principally a pedometer and a “connected GPS” mode for tracking an exercise route and duration. If you’re not doing complex exercise and you don’t need heart rate monitoring, then you don’t need to wear a Fitbit. That could provide a useful simplification to the holiday gadget set.

As pedometers the AlpinerX and Fitbit Charge 2 agree within 0.2%: 12 steps in over 6300 on my first test. However they behave very differently in “connected GPS walk” mode. The AlpinerX can be fiddly to get started with first GPS fix, but then very accurate – you can see where I double back to my car at the start of the walk with the parking ticket. The Fitbit is very crude by comparison, taking only a handful of fixes in an hour. The result is about a 10% difference in distance, with the AlpinerX’s figure of 4.7km rather more believable than the Fitbit’s “straight line” estimate of 4.3km. (The Fitbit is also more painful to sync with your phone if they have been disconnected for some time, although the AlpinerX can get confused if you turn Bluetooth off and on and try to reconnect. You pay your money and take your choice.)

The AlpinerX’s “phone first” model means that it only provides a simple time display during the exercise, and I would like to see this extended to some basic “steps/distance so far” information. Yes, I know I can get my phone out, sync them and read the phone, but I don’t want to do this when walking.

I haven’t tried the sleep monitoring, but I don’t hold out a lot of hope for it. Even with its heart rate monitoring the Fitbit can’t discriminate (for me) between “asleep” and “lying awake but still”, and I don’t expect the AlpinerX to do any better, especially since I would probably have to use the “under the pillow” mode. If you thrash about all the time when you are awake it might work…

UV Sensor

The AlpinerX has something which I haven’t encountered previously in a watch, a UV sensor. The marketing claim was that “AlpinerX can give timely warnings to reapply sunscreen or seek the shadow…” This is a great idea, but unfortunately the initial implementation falls a long way short of expectations.

Based on the claims, I was expecting an intelligent function which would continuously monitor UV exposure throughout the day. Plug in some information about your skin type and the strength of your suncream, and the phone would automatically set an alarm to remind you when to take action. Fat chance.

As far as I can see, the current implementation requires the user to switch the watch to UV monitoring mode and manually initiate each measurement. The phone then displays a very simple set of maximum, minimum and average values for the day. There is no concept of history or cumulative values. There is also no way to get the promised “timely warnings”, because there is no alarm function.

There is a text page in the app which provides some guidance on interpreting the UV measurement, but I’m not convinced of its value. The guidance is almost exactly the same for all UV levels from 3 to 11, effectively just “use SPF 30+ sunscreen and re-apply every 2 hours”. That’s for a range which at one end shouldn’t trouble anyone but a troglodyte albino, and at the other would rapidly scorch an Ethiopian mountain dweller.

Alpina really need to sort this out, or modify their claims. Regular automatic measurements and an exposure history would be a start, and ought to be pretty simple to achieve.

Altimeter and Barometer

Like the Tissot T-Touch watches, the AlpinerX provides altimeter and barometer functions. Like the Tissot watches, it has the same challenge: with a single measurement it may be difficult to disentangle changes of weather from changes of location during the same period. You can come back to your starting point after a day’s travel which included weather changes, and the altitude doesn’t quite return to its initial value. The AlpinerX does, however, appear to do something clever, either with average pressure or in concert with the phone’s GPS, and will correct itself given a bit of time at rest. Advantage AlpinerX.

The app displays a continuous periodic readout of your altitude throughout the day, but like the UV, the barometer reading is displayed as a crude set of current, maximum and minimum values. Given that the rate of change of pressure can be important, it would be great, and presumably relatively simple, to be able to see this as a timeline as well.

Thermometer

The AlpinerX has a built-in thermometer. Like other watch thermometers, this tends to indicate the temperature of the wrist while being worn, but the AlpinerX seems to be better than most, with a smaller error and quicker recovery to ambient temperature when the watch is removed, maybe due to the non-metal case. Ironically temperature is displayed as a timeline in the app, but tends to hover round a fixed value close to human skin temperature through the wearing day.

Guidance and Documentation

While the watch does many things well, getting the best from it is a real challenge given the frankly appalling documentation which is delivered with it. The box includes a thick printed manual … which doesn’t cover this watch at all! There is a three-page “getting started” leaflet, but that doesn’t cover key functions such as time setting. Between the two of these I spent some time trying to pull out the crown, which is how other watches in the Alpina range set the time, and I’m lucky that I haven’t broken anything.

You need to find the relatively well hidden link to download a PDF of the 23-page version of the manual to have a hope of understanding the watch. Why a printed copy of a 23-page manual isn’t included in the box is a complete mystery. The fact that it isn’t downloaded automatically with, and intelligently linked directly from, the app is a travesty.

It doesn’t help that the app is a graphic example of how ease of use and ease of learning are completely separate and sometimes even conflicting objectives. There is little or no help to find your way through its structure and the options. Once you have found how something works it is usually easy to use repeatedly, but I do wonder how many users will abandon some tasks altogether, defeated by the poor guidance.

Conclusions

I do like the AlpinerX. It is a smart, capable watch and has delivered on a majority of its promises, if not all. It has already supplanted my Fitbit for my fitness walks, and I expect it to become my primary travel watch, although given the additional dependency on my phone, I may have to carry a second more traditional hybrid watch on longer trips, just in case.

Coming to this watch from my experience with older hybrid models, that phone dependency is a challenge, although I suspect users of other smartwatches might be less surprised. I would prefer the AlpinerX to be independently capable of all the traditional timekeeping functions, including setting alarms and timers, without recourse to the phone, and I don’t see a good reason why it isn’t.

With my other watches, any limitations are permanent, for the duration of my ownership. By contrast the AlpinerX architecture does allow some of its limitations to be addressed through firmware updates or even simple app changes, and I hope Alpina listen to me, and other users, and work hard to progressively improve the product. At the same time, I would like to see them open up the data, and maybe even the app functions, through a development API or SDK. The independent developer community could deliver significant value to users if this watch is treated as a platform, not a closed product.

If Alpina are thinking of further similar models, then I suggest they do treat the Breitling Aerospace Evo as a reference, not for its functionality, but for its size. It pulls off the trick of being wearable as both a casual watch, and also with formal or business attire. A smaller and thinner AlpinerX model which could do that might make it into my list of regular daily timepieces, and that would be a great result.

This is a good watch, and at least partially realises my prediction about the future of analogue/digital models. It’s not without frustrations, many of which could have been avoided, some of which can still be fixed. It will be interesting to see where Alpina take it, and whether others recognise a good thing.

Posted in Thoughts on the World, Watches

Panasonic G9. Close? Yes. Cigar? No.

Beware, bears! Russian strongman and former commando Mikhail Shivlyakov “psychs up” friend and fellow competitor Konstantina Janashia from Georgia, ready for a successful 480kg deadlift.
Camera: Panasonic DC-G9 | Date: 31-05-2018 15:07 | Resolution: 5017 x 3763 | ISO: 640 | Exp. bias: 0 EV | Exp. Time: 1/250s | Aperture: 5.6 | Focal Length: 300.0mm | Lens: LUMIX G VARIO 100-300/F4.0-5.6

This article was also published as a guest article on "The Online Photographer".

My Panasonic GX8 arrived pretty much on the day of official availability and has been my primary camera for almost three years, including two major photographic trips, and innumerable other opportunities in between. It improved on the already good GX7 with "just right" sizing, a better sensor and higher speeds. Like many other owners and fans I was looking forward to a fairly straight replacement – all Panasonic had to do was fix the awkward exposure compensation control and improve the action autofocus and it would be pretty much perfect. Fat chance.

Instead, and not for the first time, Panasonic have shaken up the Lumix G range, with the GX9 effectively moving down the range, and all the new goodness going into a new "stills flagship" the G9, which sits at the top alongside the video-centric GH5 and its variants.

After a bit of prevarication, I decided that I was due an upgrade, and plumped for the G9. My new camera arrived a few days ago. This review is based on the first few days’ moderately heavy use. It’s not meant to be a comprehensive, or dispassionate blow-by-blow review, but a set of personal impressions from a long-standing Panasonic user and fan.

Body Style and Size

At first the G9 looks like quite a different camera: larger, more expensive, and more of a "DSLR ethos" than the rangefinder-style GX8. I’ll come back to cost, but the size issue is deceptive: put the two cameras side by side and it’s clear that the only real difference is the G9’s DSLR "hump", and a slightly deeper grip, which is academic unless you use a very small pancake lens. Given that similarity it’s surprising that the G9 is a significant 171g (about 6oz) heavier. The camera offers better weatherproofing and a bigger battery, and does feel a bit more rugged, so that’s acceptable. Unlike its predecessor, but like my old Canon 7D, it feels like it might take the odd knock without problems. In practice, you get used to the weight quite quickly.

Like every new flagship camera the G9 is initially priced high, but this gives Panasonic and their dealers some room for manoeuvre with discounts, trade-ins and freebies. Depending on how you look at it my G9 cost me only about 2/3 of the advertised price, or the 5 year lifetime cost of my old GX7 net of trade-in was about £250. I can live with that.

Controls and Ergonomics

Back in early 2016 I wrote an open letter to Panasonic regarding the GX8, acknowledging its good points, but identifying opportunities to improve the ergonomics and usefully extend its stills capability. They clearly ignored the letter for the GX9, but either great minds think alike, or it did influence the G9.

Ergonomically, I am a fan of "electronic" control, by which I mean the ability to set camera functions fluidly through on-camera buttons and wheels (including your choice of programmable controls), the menu system, and stored custom values. By contrast "fixed switches" break this free control model and cannot be included in stored settings for custom shooting modes. In addition, I am short-sighted, and when wearing my "distance" glasses the tiny markings on such controls are effectively invisible.

The GX8’s exposure compensation control is a good (or should that be bad?) example of the latter. Apart from breaking my preferred control model it is also badly placed – I found that to operate it I either have to take my right hand off the camera and reach in from above, or somehow slide my thumb behind the camera, which usually results in both adjusted exposure and smeared glasses! No such problem with the G9 – you can quickly set up the camera so that the rear wheel, under the right thumb, controls the primary exposure value (aperture or shutter speed as appropriate), while the front wheel, easily in reach of the shutter finger, controls compensation. Vice-versa if you prefer. Perfect.

Unfortunately, however, Panasonic have perpetuated, and even aggravated one of the GX8’s other ergonomic failings, and arguably introduced a new one! The perpetual horror is focus mode. The G9, like most of the G series, has four main modes: manual focus (’nuff said), autofocus "single" (half press the shutter button to focus, then full press to expose with that focus), "follow" (another single shot mode, but if the primary subject moves while the shutter button is half pressed, the camera refocuses), and "continuous" (aligned to the high-speed shooting modes, refocuses for each exposure). The ideal solution would be a button which toggles between the modes. That’s good enough for a lot of very good cameras. However the G9 has a switch.

If you must have a switch, then surely it should have four modes? Nope. You select manual, continuous or single/follow on a three position switch, then have to dive into the menus to choose between single and follow, or the several variants of continuous. To add insult to injury, at least in the GX8 you could set the button in the middle of the focus switch to toggle between single and follow. Not on the G9, at least not with its initial firmware – this is set to AE/AF lock (which I personally never, ever use) and not programmable. The obvious fix is to make that button programmable so that when in the single/follow position it toggles between the two, when in the continuous position it toggles between the various variants of that mode, and when in the manual position it does something equivalently useful like turning focus peaking (highlighting) on and off. This could be fixed in a firmware update – I will just have to write to Panasonic and cross my fingers.

The other fixed switch on the G9 is for the drive mode (single, high speed, timer etc.) On the GX8 this is on a button, which is much better as you can include infrequent or situation-specific settings (like high speed mode) in appropriate custom shooting modes, and just leave the main aperture-priority settings or equivalent on single-shot, with a much reduced risk of going to take a shot and being in the wrong mode. The G9 arrangement seems like a retrograde step, but liveable.

Strengths


Krzysztof Radzikowski sets a new world record with a 150kg dumbbell lift

That brings us from some arguable weaknesses of the G9 onto its real strengths. It’s fast – so fast it has three high-speed modes: high (about 5FPS), super-high 1 (about 15FPS) and super-high 2 (about 20FPS). The two super-high modes also have a very useful feature for sports and wildlife photography: hold the shutter half pressed and they will continuously store a few frames (about 0.4s worth) in the buffer, and write these to the card when you press the shutter, so if you are fractionally late clicking, you don’t lose the event. The downside is that you need to use the super-high settings with caution: if you are saving RAW + large JPEG files super-high 2 will chew up your memory cards at roughly 1GByte every 1.5 seconds. Another reason why I’d prefer to lock this to a custom mode!

Autofocus is much improved over the GX8, although I have to admit that my first sporting event with the new camera didn’t give it that much of a workout: in absolute terms, strongmen don’t move fast. It’s impressive to see a 150kg (330lb) man jogging with the same weight in each hand, but it’s not the harshest test of autofocus! However I can report that the G9 seems to adjust focus very quickly in continuous mode and seems to have missed relatively few shots. If there’s any pattern to the misses they tend to be the first shots of longer sequences, when I may have been moving the camera into position on the action. I’ll have to try and find something involving horses or fast cars for a better check.

Sensor readout also appears to have been improved, with a bit less banding on pictures of LED displays, and no obvious rolling shutter effects so far, although a higher-speed subject will really be required to confirm that.

The other area where Panasonic seem to have listened to my prior pleas is in support for bracketed and multi-shot images. In addition to the established support for exposure bracketing (for HDR), the new camera now does focus bracketing/scanning, as well as bracketing for aperture and white balance. Intelligently, even in single-shot drive mode you can choose to have the bracket shot at high speed to minimise the effect of subject or camera movement. The focus bracketing capability is something I have been seeking for a long time, and records full RAW files, a completely separate capability from the camera’s other ability to do in-camera focus stacking or post-shot focus selection from within a 6K movie file. Bracketed photos are clearly marked in their metadata, which makes it quite easy to build a script to sort them out from the rest of a day’s shooting.
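
As an illustration of the kind of sorting script I mean, the following sketch shells out to exiftool and moves bracketed shots into their own directory. Note that the tag name checked here is my assumption – the exact field Panasonic writes is worth verifying against your own files:

    import shutil, subprocess
    from pathlib import Path

    SHOOT = Path("shoot")                     # a day's worth of RW2 files
    DEST = SHOOT / "brackets"
    DEST.mkdir(exist_ok=True)

    def is_bracketed(path):
        # Read the bracket-related maker-note tag; anything but "No Bracket" counts
        out = subprocess.run(["exiftool", "-s3", "-BracketSettings", str(path)],
                             capture_output=True, text=True).stdout.strip()
        return out not in ("", "No Bracket")

    for raw in SHOOT.glob("*.RW2"):
        if is_bracketed(raw):
            shutil.move(str(raw), str(DEST / raw.name))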

Battery life is excellent – at the aforementioned strongman competition the camera was on for most of the five hours of competition and took about 600 shots. It used one battery and was about 30% into the second, much better than the GX8 would manage. I can also confirm that the two card slot arrangement works fine, effectively doubling the memory capacity, so I wasn’t fiddling with cards.

Two other ergonomic points are worth making. The rear display can be manually set to a nice bright setting for outdoors, but its automatic setting is far too dim. The EVF is large, detailed and bright, but as adjusted for my glasses has an odd pincushion distortion, with noticeably curved edges. This is nothing to do with the lens, which the camera corrects as required, but the way the EVF display is presented to the eyepiece. It’s not a major problem, but annoying to an inveterate picture-straightener like myself, especially as I haven’t had that problem with any of the predecessors.

Otherwise it’s pretty much business as usual. Image quality appears to be just the same as the GX8, much as expected given the common sensor, and the camera has a nicely familiar feel even if some of the controls are different and it’s definitely a bit heavier. Stabilisation is at least as good as the predecessor, with no noticeable penalty from the increased weight, but it’s clear that the full multi-second goodness of "dual IS 2" will have to wait until I can afford to start replacing my lenses with the new Mark II versions.

Conclusion

Would I recommend it? If you’re a committed Panasonic user, or have no existing mirrorless camera affiliation, and you want a very high capability, stills-centric camera, then absolutely. However if video is your thing, the GH5 may be better, and if you really don’t need the high speed or new advanced stills features, then a GX-series camera will save you weight and money. This is a very good camera, but not perfect. Panasonic still have room for improvement…

Posted in Micro Four Thirds, Photography, Thoughts on the World

OK Google, Here’s Another One…

Having established that there’s a real, valuable use case for Google’s phone-call-making AI (making outgoing calls which have to be routed via complex menus, lengthy queues, or security gatekeepers) I got thinking.

When I was in my early 20s and worked in a real office with doors and a bit of peace and quiet, I had access to a much-valued colleague whose function has almost entirely disappeared from modern life, unless you are enormously rich and powerful. She was called "a secretary".

One of the secretary’s functions was handling incoming phone calls: blocking the nuisances, re-routing the misdirected, taking understandable messages if I was not available, or putting the caller through with a clear announcement if I was. Where "a secretary" scored enormously over "a telephonist" was in knowing a bit about my business and me personally and being able to make some decisions on her (it was usually a her) own. She could also recognise regular accepted callers by their voice and deal with them much more quickly than strangers.

I’d like a computer which can do that.

Now this is definitely a step harder than just placing outgoing calls, but only a step. We don’t have to create a full-blown JARVIS (Iron Man’s AI butler) to get a lot of value.

Recognising known contacts by their voiceprint and incoming line details should be pretty straightforward, and it should be easy to make the list manageable, adding rules about how to deal with different people at different times. Taking messages can be a hybrid of two technologies. Because the caller is talking to a computer the call audio can be recorded, but the automated secretary could run through a simple script to get a direct call-back number ("now you are sure that’s direct and he’s not going to have to go through some horrible menu to get back to you"), spell out the caller’s name and company if it’s not recognised, and get an identifying account number or similar so I can verify the call’s veracity and quickly get my case recognised on call back. These could all be popped into an email or text to me, so I can see them written down rather than having to listen to them and write them down myself.

Those capabilities alone would get rid of a lot of nuisance callers. Scammers who want to offer to move my money to their own accounts are not going to want to leave verifiable contact details, or will not be able to provide valid authentication. Sales calls are a bit different. Most "spam" callers don’t waste their time with answering machines, so if we make the AI secretary recognisable as such that will get rid of most. Any who are really persistent can then be recognised by "trigger" words, such as "PPI", or "double glazing", or "the security department of Microsoft Windows" (yeah, right), plus non-verbal cues like the double-ring of a connection from Asia, or the chattering background in an Indian call centre, just like I do it. That would be a really powerful application of machine learning technology. I could choose how my secretary deals with identified nuisance callers: just hang up, choose a random insult from a list and hang up, keep them talking until they get bored, or redirect the call to an 0898 number where I’m sure the young ladies will be happy to listen to them all day, for a fee.

While we’re at it, let’s make the voice and personality programmable. I had Joanna Lumley’s voice (Purdey rather than Patsy) on my satnav for a while, and that would tick a lot of boxes for me, as a 50-something male. But I can also see the charm of recreating some famous fictional assistants: JARVIS, or how about Chris Hemsworth’s character from the Ghostbusters reboot, ladies?

OK Google. How about this?

Posted in Thoughts on the World

They’re All Missing the Point

Since Google’s demo of an AI bot making a phone call a few weeks ago, the reaction I have read seems to be completely polarised. About half the reviewers are blown away, believing it to be unleashing AI wonders/horrors which are half a step away from SkyNet going live. The other half are nonplussed, seeing no potential value.

They are all wrong.

Let’s deal with the "this is the advent of true AI" bunch first. Google have demonstrated a realistic-sounding voice which can currently deal with a few, very limited scenarios, and I suspect will rapidly fail if the other party goes significantly off track. Sure, it’s a step forward, but just a step. If you want to see a much more convincing demo, catch up with the programme "How to Build a Human" from about 18 months ago, in which the makers of the Channel 4 sci-fi programme "Humans" got a mix of British experts to build a robot Gemma Chan, who (which?) was then interviewed over Skype by a bunch of entertainment journalists. About half the reviewers didn’t realise they weren’t talking to the real Gemma. That’s much closer to a Turing test pass.

At the other end of the scale we’ve got those who don’t see any advance or value to a machine which can help make a phone call. To those, I have a simple question: "how did you get on, the last time you rang your bank / utility / travel company / <insert other large organisation here>?"

I completely agree that it’s a waste, and maybe a bit sinister, to task a robot with making a call to a local restaurant or hairdresser. But when was the last time you rang anything other than a small local business, and got straight through to talk to a human being? We all waste far too much of our time sitting on the phone, trying to navigate endless menus, trying to avoid the dead end where all you can do is hang up and try again, or listening to "Greensleeves" being played on a stylophone with a reminder every 20s that the recipient values your call. Yeah, right.

If I want to deal with a computer, I’ll go onto the website. I’m very happy doing that, and if I can do my business that way I will. The reason I have picked up the phone is one of the following:

  • The website doesn’t support the transaction I want to execute, or the information I need. I need to speak to a human being.
  • The website has a problem. I need to speak to a human being.
  • The website has instructed me to phone and speak to a human being.

Spot the common thread?

So I have the ideal use case for Google’s new technology. It makes the phone call. It navigates the endless menus, referring to a machine learning database of how to get to a human being as quickly as possible, and how to avoid dead ends in that organisation’s phone system. It provides simple responses to authentication prompts if it can, or prompts me for just the required information. If the call drops or dead ends it starts again. And it listens to "Greensleeves" or equivalent, silently in the background, until it’s sure it’s speaking to a human being. At that point, it says, like a good secretary would, "please hold, I have Mr Andrew Johnston for you", gets my attention and I pick up the call.

In the meantime, I get on with my life.

In some ways, this is actually easier than what Google have already done, because most of the interaction is computer-to-computer, and actively doesn’t need a human-like voice or understanding. It’s certainly a better use of the technology than pestering the local hairdresser.

OK Google. Build this, please.

Posted in Thoughts on the World

How Hard Can It Possibly Be?

I really should have known better. In last week’s piece on random music player algorithms, I made the rather blasé statement "I can live with it for a while and I can probably resolve the issue by downloading another music player app". Yeah, sure.

Now we all know that assumptions are dangerous. One boss of mine was inordinately fond of the quote "assumptions are the mother of all f*** ups", and he wasn’t wrong. However I really did expect that music players were a relatively mature and stable component of the Android app space.

So how did I get on with trying to download a better random music player? So far, I have downloaded somewhere between 10 and 20 apps. I have discovered:

  • Apps which just don’t start, or which crash immediately
  • Apps which can’t see the SD card on which my music is stored, and insist on randomly playing 3 ringtones
  • Apps which can’t play a lot of my music. Come on guys, WMA format is not exactly "edge".
  • Apps which don’t have a random function, despite the words "random" or "shuffle" in the description
  • Apps which don’t display properly on my phone’s screen
  • Apps which display nicely and seem to have all the functions I need, but where the random function is to start with one song chosen at random, and then just play all the other songs on my device in alphabetic order of title (at least 3 instances of this!)
  • Apps which display nicely and have a decent random function, but then 60% of the time no sound comes out of the headphones when you press "play"
  • Apps which display OK, and appear to have a decent random function, but most of the other advertised functions don’t work

Worst case, I can probably live with the last – I can always use the Sony app for other purposes – and late last night I spent another 5 minutes and maybe, just maybe, I have found one app which will work, albeit with a slightly odd user interface.

But honestly, how hard can it possibly be?

 

Addendum – Two Months Later

Back to square one after upgrading to Android 8, which actively suppresses WMA files from the shared media index… (See "That Was Too Easy…") I’ve found some more ways you can make this not work. No Jemima, it doesn’t count as random to play Atomic Kitten, then CCS, then Led Zeppelin, then AC/DC, if the songs are "Whole Again", "Whole Lotta Love", "Whole Lotta Love" and "Whole Lotta Rosie"!

Posted in Thoughts on the World

Inferring Algorithms: How Random is Your Music Player?

“You’re inferring that I’m stupid.”

“No, I’m implying that you’re stupid. You’re inferring it.”

– Wilt, by Tom Sharpe

My latest contract means spending some time on a bus at each end of the day. The movement of the bus means it’s not comfortable to read, so I treated myself to a nearly new pair of decent Bluetooth headphones, and rediscovered the joys of just listening to music. I set the default music player app to “random” and let it do its stuff.

That’s when the trouble started. I started thinking about the randomisation algorithm used by the music player on the Sony phone. I can’t help it. I’m a software architect – it’s what I do.

One good music randomisation algorithm would look like this (there’s a sketch in code after the list):

  1. Assign every song on your device a number from 1 to n
  2. When you want to play a random song, generate a random number between 1 and n, and play the song with that number.
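
A minimal sketch of that in Python – the library location and file extensions are invented for illustration, and a real player would hook into the platform’s media index rather than re-scanning the file system:

    import os, random

    AUDIO = (".mp3", ".wma", ".flac", ".ogg", ".m4a")

    def build_index(root):
        # Step 1: assign every song a number - here, its position in a flat list
        return [os.path.join(d, f)
                for d, _, files in os.walk(root)
                for f in files if f.lower().endswith(AUDIO)]

    def next_song(index):
        # Step 2: one uniform random number picks the next song directly
        return index[random.randrange(len(index))]

    index = build_index("/storage/sdcard/Music")    # hypothetical location
    print(next_song(index))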

However in my experience no-one ever implements this, as it relies on maintaining an index of all the music on the device, and assigning sequential numbers to it. That’s not actually very difficult, given that every platform indexes the music anyway and a developer can usually access that data, but it’s not the path of least resistance.

Let’s also say a word about generating random numbers. In reality these are always pseudo-random, and depending on how you seed the generator the values may be predictable. That may be the case with Microsoft’s software for picking desktop backgrounds, which seems to pick the same picture simultaneously on my laptop and desktop more often than I’d expect, but that’s a topic for another blog, so for now let’s assume that we can generate an acceptably random spread of pseudo-random numbers in a given integer range.
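
To illustrate the seeding point: two generators given the same seed will “randomly” agree forever, which is exactly how two machines could keep choosing the same desktop background:

    import random

    random.seed(20180601)                     # seed with, say, the date...
    a = [random.randrange(100) for _ in range(5)]
    random.seed(20180601)                     # ...and any machine doing the same...
    b = [random.randrange(100) for _ in range(5)]
    assert a == b                             # ...makes exactly the same "random" picks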

Here’s another algorithm (again, sketched below):

  1. Start in the top directory for the music files
  2. Pick an item from that directory at random. Depending on the type:
    • If it’s a music file, play it. When finished, start again at step 1
    • If it’s a directory, make it your target and redo step 2
    • If it’s anything else, just repeat step 2
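
In Python that looks roughly like this (a sketch – a real player would also need a guard against directories containing no music at all, which this happily ignores):

    import os, random

    AUDIO = (".mp3", ".wma", ".flac", ".ogg", ".m4a")

    def play_random(root):
        path = root
        while True:
            entries = os.listdir(path)
            if not entries:
                path = root                  # empty directory: start again at step 1
                continue
            choice = os.path.join(path, random.choice(entries))
            if os.path.isdir(choice):
                path = choice                # a directory: make it the target, redo step 2
            elif choice.lower().endswith(AUDIO):
                return choice                # a music file: play it
            # anything else (artwork, playlists): just repeat step 2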

This is easy to implement, runs quickly and plays nicely with independently changing media files. I’ve written something similar for displaying random pictures on a website. It doesn’t require maintaining any sort of index. It generates a good spread of chosen files, but will play albums which are alone under the first level root (usually the artist) much more than those which have multiple siblings.

My old VW Eos had a neat but very different system. Like most players it could work through the entire catalogue in order, spidering up and down the directory structure as required. In “random” mode it simply calculated a number from 1 to approximately 30 after each song, and used that as the number of songs to skip forwards in the sequence.

This was actually quite a good algorithm. As well as being easy to implement it had the side-effect of being at least partially predictable, usually playing a couple of songs by the same artist before moving on, and allowing a bit of “what’s next” guesswork which could be entertaining on a long drive.
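
As a sketch, assuming a flat play order that wraps around at the end of the catalogue (an assumption – I never drove far enough to find out):

    import random

    def vw_next(current, catalogue_size):
        # After each song, skip a random 1-30 tracks forward in the normal play order
        return (current + random.randint(1, 30)) % catalogue_size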

So what about the Sony music app on my phone? At first it felt like it was doing the job well, providing a good mix of genres, but after a while I started to become suspicious. As it holds the playlist in a readable form, I could check that suspicion. These are key highlights from the playlist after about 40 songs:

  • 1 from ZZ top
  • 1 from “Zumba”
  • 3 from Yazoo!
  • 1 from Wild Cherry
  • 1 from Wet Wet Wet
  • Several from “Various Artists” with album titles like “The Very Best…”
  • 0 from any artist filed under A-S!

I wasn’t absolutely sure about the last point. What about Acker Bilk and Louis Armstrong? Turns out they are both on an album entitled “The Very Best of Smooth Jazz”…

I can also look ahead at the list, and it doesn’t get much better. Van Morrison, Walter Trout, The Walker Brothers, and more Wet Wet Wet 🙁

So how does this algorithm work (apart from “badly”)? I have a few hypotheses (the third is simulated below):

  • It implements a form of the “give every track a number” algorithm, but the index only remembers a fixed number of tracks – a few hundred, maybe ~1000 – and anything it read earlier in the indexing process is discarded.
  • It implements the “give every track a number” algorithm, but the random number generator is heavily biased towards the end of the number range.
  • It’s attempting a “random walk”, skipping a random number of steps forwards or backwards through the list at each play (a bit like the VW algorithm, but bidirectional). If this is correct it’s odd that it has never gone into “positive” territory (artists beginning with A-S), but that could be down to chance and not impossible. The problem is that without a definite bias a random walk tends to stay in the same place, so it’s a very poor way of scanning your music collection.
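
The third hypothesis is easy to simulate. Assuming an index of around 11,000 tracks, a starting point near the “Z” end and a VW-like ±30 step, forty songs typically wander over only a few hundred positions – exactly the “stuck among the Ws” behaviour I observed:

    import random

    N = 11_000                                # roughly the size of my collection
    pos = N - 50                              # suppose the walk starts near the end
    visited = []
    for _ in range(40):                       # about the playlist length I inspected
        pos = max(0, min(N - 1, pos + random.randint(-30, 30)))
        visited.append(pos)

    print(min(visited), max(visited))         # typically spans just a few hundred slots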

Otherwise I’m at a loss. It’s not like I have a massive number of songs and could have run into an integer size limit or similar (there are only around 11,000 files, including directories and artwork).

Ultimately it doesn’t matter that much. I can live with it for a while and I can probably resolve the issue by downloading another music player app. However you can’t help feeling that a giant of entertainment technology like Sony should probably manage better.

Regardless of that, it’s an interesting exercise in analysis, and also potentially in design. Having identified some poor models, what constitutes a “good” random music player? I’ve seen some good concepts around grouping songs by “mood”, or machine learning from previous playlists, and I’ve got an idea forming in my head about an app being more like a radio DJ, looking for “links” between the songs in terms of their artist names, titles or genres. Maybe that’s the next development concept. Watch this space.

Posted in Code & Development, Thoughts on the World

Why REST Doesn’t Make Life More Rest-full

Really Rest-full (Cuba 2010)

As I have observed before, IT as a field is highly driven by both fashion and received wisdom, and it can be difficult to challenge the commonly accepted position.

In the current world it is barely more politically acceptable to criticise the currently-dominant model of REST, Javascript and microservices than it is to audibly assess the figure of a female co-worker. I was seriously starting to think that I was in some age-defined Luddite minority of one in not being 100% convinced about the universal goodness of that model, but then I discovered an encouraging article by Pascal Chambon, “REST is the new SOAP”, and realised that it’s not just me. I am not alone.

I don’t want to re-create that excellent article, and I recommend it to you, but it is maybe instructive to provide some additional examples of the failings Chambon calls out. I have certainly fallen foul of the quasi-religious belief that REST is somehow “better because it uses the right HTTP verbs”, and that as a result the “right verbs must be used”. On my last contract there was a lengthy argument because someone became convinced I was using the wrong ones. “You’re using POST to do a DELETE. That’s wrong.”

“No, we’re submitting a request to do a delete, if approved. At some later point, after the request has been reviewed and processed, this may or may not result in a low-level delete action, but the API is about the request submission. And anyway, you can’t submit a proper payload with a DELETE.”

“But you’re using a POST to do a DELETE…”

In the end I mollified him slightly by changing the URL of the API so that the tip wasn’t …/host, but …/host/request, but that did feel like the tail wagging the dog.

Generally REST promotes a fairly inflexible CRUD model, and by default without the ability to specify exactly which items are retrieved or updated. In a good design we may need a much richer set of operations. In either an RPC approach (as outlined in Chambon’s article), or a “remote object access” approach, such as one based on SOAP, we can flexibly tailor the operations precisely to the needs of the solution.

Here’s a good example. I need to “rename” an object, effectively changing its primary key. In the REST model, I have to choose one of the following:

  • Add extra fields to the PUT payload to carry the “new” and “old” keys, and write both client- and server-side conditional code around their values, or an additional “operation” value
  • Do a DELETE (with the old key) followed by a POST (with the new one), making sure that all the other data required to recreate the record is passed back for the POST, and write a host of additional code to handle cases like the DELETE succeeding but the POST failing, or the POST being treated as a new item, not just an update (because it’s not a PUT).
  • Have a dedicated endpoint (e.g. …/object/rename) which accepts a POST operation with just the required data for the rename. That would probably be my favourite, but I can hear the REST purists screaming in the wind…

In a SOAP model, I can just have an explicit Rename(oldkey, newkey) operation on a service named for the underlying business object. Simples.
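As a rough illustration, assuming a hypothetical service and WSDL location, the client side of that SOAP operation using the Python zeep library is a couple of lines:

    from zeep import Client

    # Hypothetical WSDL location and operation name, for illustration only
    client = Client("http://example.com/ObjectService?wsdl")
    client.service.Rename(oldKey="OLD-123", newKey="NEW-456")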

So Is SOAP The Old REST?

I’m comfortable with Chambon’s casting of REST as the supposed handsome hero who turns out to be a useless, treacherous bastard. I’m less comfortable with the casting of SOAP as the pantomime villain (boo hiss).

Now your mileage may vary, and Chambon obviously had some bad experiences, but in my own experience SOAP is a very strong and reliable technology which a lot of the time “just works”. I’ve worked in environments where systems developed in .Net, Oracle, Enterprise Java, a LAMP stack and Python cheerfully exchanged data with each other using SOAP, across multiple physical locations, with relatively few complexities and usually just a couple of lines of code to access a full object model with formal schema and policy support.

In contrast, even if you navigate through the various different ways a REST service may work, inter-platform operation is by no means as simple as claimed. In just the past week I wasted about half a day trying to pass a body parameter between a Python client and a REST API presented by .Net. It should have worked. It didn’t. I converted the service to SOAP, and it worked almost first time. (Almost. It would have been even quicker if I’d remembered to RTFM…)

Notwithstanding the laudable attempts to fill the gap for REST, SOAP is still the only integration technology where every service has full machine- and human-readable documentation built in, and usually in a standard fashion. Get a copy of the WSDL (Web Service Definition Language) either from the service itself, or separately, and you know what the service does, with what data, and, where it’s relevant to the client, how.
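To make that concrete: with the zeep library (service address invented), something like the following should pull the whole contract down and print the service’s operations and type definitions, assuming the service exposes its WSDL in the usual way:

    from zeep import Client

    # Made-up address; any SOAP service exposing its WSDL will do
    client = Client("http://example.com/ObjectService?wsdl")
    client.wsdl.dump()   # prints the service's operations and type definitions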

To extend the theatrical metaphor, in my world SOAP is the elderly retired hero who’s a bit pedantic and old-fashioned, maybe a bit slow on his feet, but actually saves the day.

It’s About the Architecture, Stupid

Ultimately it doesn’t actually matter whether your solution uses REST, SOAP, messages, distributed objects or CSV file transfers. Any can be made to work with sufficient attention to the architecture. All will fail in the presence of common antipatterns such as complex mixed data models, massive functional decomposition to too fine a level, or trying to make high-frequency chatty exchanges over higher-latency links.

Modern technologies attempt to hide a lot of technical complexity behind simple abstraction layers. While that’s an excellent approach overall, it does raise a risk that developers are unaware of how a poor design may cause underlying technical problems which will cause failure. For example, while some low-level protocols are more tolerant than others, the naïve expectation that REST will work over any network regardless “because it is based on HTTP” is quite wrong.

REST, SOAP and plain old web pages can all make good, efficient use of HTTP. REST, SOAP and plain old web pages will all fail if you insist on a unit of work being composed of vast numbers of separate small exchanges rather than a few larger ones. They will all fail if you insist on transferring large amounts of unfiltered data to the client, when that data should be pre-processed and filtered on the server. They will all fail if you insist on making every low-level exchange a network service when many of these should be direct in-process operations.

Likewise if you have a load of services, whether your own microservices or third party endpoints, and each service defines its own data structure which may be subject to change, and you try and directly consume and produce those proprietary data structures everywhere you need them, you are building yourself a world of pain. A core common data model with adapters for each format will serve you much better in the long run.
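A minimal sketch of what I mean, in Python with invented names: one canonical type, and a thin adapter per proprietary format.

    from dataclasses import dataclass

    @dataclass
    class CanonicalCustomer:
        # The single internal representation used throughout the solution
        customer_id: str
        full_name: str

    def from_crm_payload(payload: dict) -> CanonicalCustomer:
        # Adapter: the CRM service's proprietary structure -> canonical form
        return CanonicalCustomer(payload["id"], payload["displayName"])

    def to_billing_payload(customer: CanonicalCustomer) -> dict:
        # Adapter: canonical form -> the billing service's proprietary structure
        return {"custRef": customer.customer_id, "name": customer.full_name}

When a service changes its data structure, only its adapter changes; the rest of the solution is untouched.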

So Does Technology Choice Matter?

Ultimately no. For example, I have built an architecture with an underlying canonical data and adapter model, but using REST for every exchange we controlled, and it worked fine. Also, in the real world, whatever your primary choice you’ll probably have to deal with all the others as well. That shouldn’t scare you, but I have seen REST-obsessed developers run screaming from the room at the thought of having to use SOAP as well…

However, a good base choice will definitely make things easier. It’s instructive to think about a layered model of the things you have to define in a complex integration:

  • Documentation
  • Functionality
  • Data structure and format
  • Data encoding and transport
  • Policies
  • Service location and routing

SOAP is unique among the options in always providing built-in documentation for the service’s functions, data structures and policies. This is a major omission in the REST world, which is progressively being addressed by the Swagger / OpenAPI initiative and variants, but they will always be optional add-ons with variable coverage rather than a fundamental part of the model. For all other options, documentation is necessarily external to the service itself, and it may or may not be up to date and available to whoever needs it.

Functionality is discussed above and in Chambon’s article. Basically REST maps naturally to CRUD operations, and anything else is a bit of a bodge. SOAP and other RPC or distributed object models provide direct, explicit support for whatever functions are required by the business problem.

SOAP provides built-in definition and documentation of data structures and formatting, using XML Schema, which means that the definition is machine- and human-readable, standardised, and uses namespaces and references to manage, for example, items with the same name but different uses and formats. Complexities such as optionality and alternative structures are readily defined. In addition a payload can be easily verified against the defined schema. Swagger optionally adds similar capabilities to the REST model, although without some discipline it’s easy for the implemented service to differ from the documented one, and it’s less easy to confirm that a given payload conforms. Both approaches focus on syntactic definition, with semantic guidance optional and mainly through comments and examples.
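Verifying a payload against its schema is then trivial. A sketch using Python’s lxml library, with invented file names:

    from lxml import etree

    schema = etree.XMLSchema(etree.parse("object_service.xsd"))
    document = etree.parse("rename_request.xml")
    if not schema.validate(document):
        # The error log pinpoints exactly where the payload breaks the contract
        for error in schema.error_log:
            print(error.message)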

In terms of encoding the data, the fashionable approach is JSON. The major benefits are that it’s simple, payloads are a bit smaller than the equivalent XML, and that it’s easy to parse into and generate from equivalent data structures in languages like Python.

However, I’m not a great follower of fashion. XML may be less trendy, but it offers a host of industrial-strength features which may be important in more complex use cases. It’s easy to unambiguously indicate the schema for each document and validate against it. If you have non-ASCII or binary data then its encoding is unambiguously defined. It’s easy to work separately with fragments of a larger document if you need to. Personally I also find XML easier to read and manually edit if I have to, but I accept that’s a bit subjective. One argument is that JSON is easier to render into an HTML page, but I’ve achieved much the same without any procedural code at all using XML with XSLT.
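For example, a declarative rendering along those lines, again sketched with lxml and invented file names: the stylesheet, not procedural code, carries all the presentation logic.

    from lxml import etree

    # The XSLT stylesheet defines the HTML rendering of the XML payload
    transform = etree.XSLT(etree.parse("render_page.xslt"))
    html = transform(etree.parse("payload.xml"))
    print(str(html))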

Of course, there’s no real need to choose. The best REST APIs I have worked with can generate equivalent JSON and XML from the same queries, and you choose whichever works best in a given context. Sadly this is again a bit too much for the REST purists, but it’s a good solution when it works.
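Where a service supports it, the switch is just a header. A sketch using the Python requests library against a hypothetical endpoint:

    import requests

    url = "http://example.com/api/objects/123"
    # The same resource, in whichever representation suits the context
    as_json = requests.get(url, headers={"Accept": "application/json"}).json()
    as_xml = requests.get(url, headers={"Accept": "application/xml"}).text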

Beyond the functional definition of a service and its data, we also have to consider the non-functional behaviours, what are often referred to as “policies” in this context. How is the service secured? What encryption is applied to payloads and headers? What is the SLA, and what action should you take if it is exceeded? Is asynchronous or callback behaviour defined? How do I confirm I have all the required items in a set of exchanges, and what do I do about missing ones? What happens if a service fails, or raises an error?

In the early 2000s, when web services were a new concept, a lot of effort was invested in trying to establish standard ways to define these policies. The result was a set of extensions to SOAP known as the WS-* specifications: a set of rules to enable direct and potentially automated negotiation of all these aspects based on standardised information in the service WSDL and SOAP headers. The problem was that the standards quickly proliferated, and created the risk of making genuinely simple cases more complex than necessary. REST emerged as a simpler alternative, but with a KISS ethic which means ignoring the genuinely complex.

Chambon’s article touched on this in his discussion of error coding, but there are many other similar aspects. REST is a great solution for simple cases, but should not blind the developer to SOAP’s menu of standard, stronger solutions to more difficult problems.

A similar choice applies at the final level, that of locating and connecting service endpoints at runtime. For many cases we simply rely on network infrastructure and services like DNS and load balancing. However when this doesn’t meet more complex requirements then the alternatives are to construct or adopt a complex proprietary solution, or to embrace the extended standards in the WS-* space.

One technology choice is important. A professional modern Integrated Development Environment such as Visual Studio or IntelliJ IDEA will do much of the “heavy lifting” of development, and makes work much quicker and less error-prone. I completely fail to understand why in 2018 some developers are still trying to do everything with vi and a Unix command line. When I was a schoolboy in the 1970s there was a saying “shouldn’t you have handed that in at the end of the war?”, referring to people still using or hoarding equipment issued in WW2. Anyone who is trying to do software development in the late 2010s with the software equivalent deserves what they get… It is a mistake to drive a solution from the constraints of your toolset.

Conclusions

The old chestnut that “to the man who only has a hammer, every problem looks like a nail” is nowhere more true than in software development. We seem to spend a great deal of effort trying to make every new software technique the complete solution to life, the universe, and everything, rather than accepting that it’s just another tool in the toolbox.

REST is a valid addition to the toolbox. Like its predecessors it has strengths and weaknesses. It’s a great way to solve a whole class of relatively simple web service requirements, but there are definite boundaries to that capability. When you reach those boundaries, be prepared to embrace some older, less fashionable but ultimately more capable technologies. A religious approach will fail, whereas one based on an architectural viewpoint and an open assessment of all the valid options has a much greater chance of success.

View featured image in Album
Posted in Agile & Architecture, Code & Development | Leave a comment

The Architect’s USP

Standing out in the marketplace (Morocco 2013)
Camera: Panasonic DMC-GX7 | Date: 11-11-2013 17:09 | Resolution: 3064 x 3064 | ISO: 1600 | Exp. bias: -33/100 EV | Exp. Time: 1/500s | Aperture: 8.0 | Focal Length: 300.0mm | Location: Djemaa el Fna | State/Province: Marrakech-Tensift-Al Haouz | Lens: LUMIX G VARIO 100-300/F4.0-5.6

Very early on in any course in marketing or economics you will encounter the concept of the "Unique Selling Proposition", the USP, that factor which differentiates a given product or service from its competitors. It’s "what you have that competitors don’t", a key reason to buy this one rather than an alternative.

With the current trend away from development specialisms such as architect towards relatively homogenous development teams, it is perhaps instructive to ask "What is the architect’s USP?" Why should I employ someone who claims that specialism, and give him or her design responsibility, rather than just expecting my developers to cover it?

I have written elsewhere about why I don’t buy into the ultra-agile concept of "architecture emerging from the code", any more than I would bet money on the script for Hamlet "emerging" from a finite group of randomly typing monkeys. (Of course, if you have an infinite number of monkeys then it’s more achievable, but that’s infinity for you…) However that argument is about process, and I believe that almost irrespective of process a good architect’s skills and perspectives can have a significant beneficial effect on the result. That’s what I want to explore here.

The Architect’s Perspective

One key distinction between the manager, the architect and the developer is that of perspective. As an architect I spend a lot of time understanding and analysing the different forces on a problem. These design forces may be technical, or human: financial, commercial or political. The challenge is to find a solution which best balances all the design forces, which if possible satisfies the requirements of all stakeholders. It is usually wrong and ultimately counter-productive to simply ignore some of the stakeholders or requirements as "less important" – any stakeholder (and by stakeholders I mean all those involved, not just senior managers) can derail a project if not happy.

Where design forces are either aligned or orthogonal, there is usually a "sweet spot" which strikes an acceptable balance. The problem effectively becomes one of performing a multi-dimensional linear analysis, and then articulating the solution.

However, sometimes the forces act in direct opposition. A good example is system security, where requirements for broad, easy access directly conflict with those for high security. In these cases the architect has to invest heavily in diplomacy, spending a lot of time understanding and addressing different stakeholder positions. One common problem is “requirements” expressed as solutions, which usually hide an underlying concern that can be met in many ways, once understood and articulated.

In cases of diametrically opposed requirements, there are usually three options:

  • Compromise – find an intermediate position acceptable to both. This may work, but it may be unacceptable to both, or it may fatally compromise the architecture.
  • Allow one requirement to dominate. This has to be a senior level business decision, but the architect must be sensitive to whether the outcome is genuinely accepted and viable, or whether suppressing the other requirements will cause the solution to fail.
  • Reformulate the problem to remove or reduce the conflict. In the security example the architect might come up with a cunning partitioning of the system which allows access to different elements under different security rules.

Of course, you can’t resolve all the problems at once – that way lies madness. An architect uses techniques like layered or modular structures, and multiple views of the architecture to "separate concerns". These are powerful tools to manage the problem’s complexity.

The architect must look at the big picture, balance the needs of multiple stakeholders, and bring to bear an understanding of the business, of strategy, of technology and of development project work at the same time. If these responsibilities are split among too many heads and isolated within separate organisational confines then you lose the ability to see how it all fits together, and increase the danger of things "falling through the cracks".

The Architect’s Responsibilities

The architecture, and its resolution of the various design forces (i.e. how it meets various stakeholder needs), have to be communicated to many who are not technical experts. The architect acting as technical leader must take much of this responsibility. The messages may have to be reformulated separately for different audiences: I have had great success with single-topic briefing papers, which describe aspects like security in business terms, and which are short and focused enough to encourage the readers to also consider their concerns separately.

The architect must listen to the voice inside, and carry decisions through with integrity. For an architect, the question is whether the architecture is elegant, and will deliver an adequately efficient, reliable and flexible solution. If the internal answer to this is not an honest "yes", it is important to understand why not, and decide whether all the various stakeholders can live with the compromises.

The architect must protect the integrity of the solution against the slings and arrows of outrageous projects. (Hamlet again?) Monitor in particular those design aspects which reflect compromises between design forces, because they will inevitably come under renewed pressure over time. The architect must not only do the right thing, but ensure it is done right.

While every person on the project should be doing these things, there is a natural tendency for most to allow delivery priorities to take precedence. A developer’s documentation, for example, must be adequate to communicate the solution to other developers and maintainers, but does not have to be comprehensible to other stakeholders. However for the architect integrity, fit and communication of the solution are primary responsibilities, not optional. In addition the architect should have sufficient independence to call out and challenge conflicts of interest when they do occur.

The Architect’s Skills

The architect should be equipped with a distinct set of skills in support of these responsibilities. These will include:

  • Design patterns and knowledge of how to apply them
  • Tools and techniques to formally document both detail designs and wider portfolios
  • Methods to ensure that requirements, especially non-functional ones, are documented unambiguously
  • Methods to review a solution design, model its behaviour and confirm the solution’s ability to meet requirements
  • The ability to clearly communicate solutions, issues and potential resolutions to a wide variety of stakeholders
  • The ability to support the project and programme managers in handling the impact of issues and related decisions

Now it’s perfectly possible (and highly desirable) that others on the project will have many of these skills between them. However their combination in the architect is key to the delivery of the architect’s value, and a solution with a good chance of meeting its various objectives.

The Architect’s Position

A good architect should be able to operate in various organisational positions or roles and still deliver the above. Irrespective of the official organisation chart I often end up working between two or more groups, and I suspect this is a common position for many architects. It may actually be a natural result of adopting the architect’s unique perspectives.

The architect’s role may to some extent overlap with that of developers, analysts or product owners, and in smaller organisations or projects the architect may also take on one of these roles. In that case the architect must be able to "wear the appropriate hat" when focusing on a specific project issue or taking a wider view. The architect must then ensure that his or her ability to look at the wider picture is not compromised by the project relationship.

Conversely, a central architecture group may become accused of being in an ivory tower, separate from the realities of the business and the developers at the coal face. An architect in such a position must actively display an interest in and willingness to help with practical project issues.

A good architect will reconcile the need for a broad perspective and the specific responsibilities of a given position, thereby delivering distinct value compared with someone who has a more specific scope. I may on occasion be challenged for taking a wider interpretation of scope than others, but the insights which accrue from that perspective are almost always seen as valuable.

Conclusions

These are generalisations, and in practice there are as many variants on the architect’s role, skills and delivery as there are individuals who take the title. However it is generally true that an architect’s involvement increases the chance that a solution’s behaviour will be predictable, understood, and a good fit to its objectives. That’s the fundamental USP of the architect.

View featured image in Album
Posted in Agile & Architecture | Leave a comment

To BD or Not to BD

Should I buy the Blu-Ray?

So you have a collection of several hundred DVDs, you’ve finally managed to remove almost every VHS tape from the house, and you’ve bought a shiny new TV and disk player. Which, if any, of your existing disks should you replace with new versions, and which versions should you buy?

We have a large video collection, and we’ve already owned several versions of some titles, maybe a couple of different tapes or different DVD releases. Replacing some of our existing disks might make sense, but we really don’t want to do it wholesale when we’ve already got "good" copies of a lot of stuff. Our experience is that there are cases where the cost of replacement is fully justified, and others where it is just a waste of money. I thought it might be useful to try and distil that experience into some guidelines for others in the same predicament.

This does assume that you like "big" films, or the best output of National Geographic and the BBC Wildlife Unit. If fluffy romantic comedies are your thing, or you like budget arthouse movies, then this may not apply. That’s also the case if you don’t like 3D, or your system doesn’t support it (ditto 4K). Please modify this advice accordingly.

Newer Films

The first thing to say is that if you have a "good" DVD of a film or TV series made after about 1995, and it’s not covered by one of the following special cases, then there’s limited benefit to replacing your DVD with the equivalent Blu-Ray. If your disk player does a good job of "upscaling" to HD, or even 4K, then the change will be marginal and you will wonder why you spent that money. If your disk player does not play recent high-quality DVDs well, then your money is better spent on getting a better one.

Crude DVD Transfers

A lot of my DVDs, even for big blockbuster films, are fine based on the previous advice, and aren’t going anywhere. However there are exceptions. These tend to be films from the 1980s and 1990s which were released on VHS and then pushed to DVD using the same digital version, and while the quality was adequate for viewing in the early 2000s, it shows up really badly on newer kit. Grainy/noisy video and inaudible sound are common problems. The dead give-away is when your DVD player produces a half-sized picture in the middle of the screen, suggesting that the video isn’t even full DVD resolution.

This is true of my DVDs of some quite major films, including Robin Hood Prince of Thieves and Tremors. Buy the Blu-Ray, but look for some evidence like the word "remastered" which suggests that they went back to the film and re-processed it (and didn’t just push the same awful video onto a Blu-Ray). For some favourites the improvement will blow you away, but even in more marginal cases you will be at least less frustrated.

There is an obvious consideration about the quality of the source material. If it was recorded on 1980s videotape there’s a limit to what can be achieved. Sadly, the DVD of Edge of Darkness (the TV masterpiece) is about as good as that’s going to get, but I will be very happy and first in the queue if someone can prove me wrong.

Remastered Classics

Where the source material does support it, which is true of a lot of classic films made in the 1960s and 1970s (and some earlier ones), there’s the option of a frame-by-frame restoration to the highest possible modern video and sound standard. The British Film Institute has done this for favourites such as The Italian Job, Zulu and most of David Lean’s films. MGM/Eon has done it for all the Bond films.

The results, on Blu-Ray, can be absolutely stunning. It’s like a 2010s film crew was transported back and filmed the same performances on modern kit.

In Zulu you can see every barb of every feather on the Zulus’ clothing, and you can see that because Chard and Bromhead were from different regiments, there’s a little piece of dark green trim on one tunic which is dark blue on the other. In The Italian Job you can read the badges on the cars and motorbikes. The night-time scenes in From Russia with Love are no longer muddy brown, but sharp blacks in Istanbul, and with a lovely pre-dawn blue glow on the Yugoslavian border. You can admire the couture workmanship on the Bond girls’ dresses. I could go on.

It’s literally like watching a new film. You’ll see so much you didn’t before.

In fairness, it’s the remastering which makes the difference as much as the disk format. Before we bought the Bond Blu-Ray collection we had a DVD of Goldfinger which was based on the remastered version, and that delivered much of the same benefit, but if you haven’t invested in those intermediate versions then the Blu-Ray is even better.

Films Released in 3D

We love 3D, even if sadly the entertainment industry has fallen out of love with it again, and the availability of support in new kit and new film releases is reducing. If you like it, and your system supports it, and there’s a 3D Blu-Ray of a film you have on DVD, get the 3D disk. The video and sound quality will be better, and you’ll enjoy the literal extra dimension to the work.

3D Remasters

A small and select but wonderful set of films have been subject to the best of both worlds, remastering the video, but also retrospectively putting them into 3D. The primary examples are Titanic, Jurassic Park, Predator and Terminator 2: Judgement Day, but there are a few others. Like the remastered 60s films, it’s a whole new level of enjoyment. Highly recommended, even if like me, you have probably purchased each of these films in about 4 different previous versions. While industry trends and costs mean there may not be too many more films given this treatment, the fact that the 3D version of T2 was released just before Christmas 2017 does mean that we shouldn’t give up hope.

4K Remasters

As part of the shift away from 3D, the industry is pushing 4K / UltraHD. (This has twice the resolution of normal Blu-Rays and HD TV, at 2160 pixels vertically.) In addition to 4K versions of new blockbusters, there are some "4K remasters" of big films from the last 20 years. However I’m much less convinced about these.

First, if you have normal eyes, ears and equipment, 4K really isn’t the vast improvement over standard HD Blu-Ray that the hype claims. Part of this is just simple diminishing returns as the picture resolution increases beyond what we can easily distinguish. There’s a very good chart on this at http://carltonbale.com/1080p-does-matter/, plotting screen size against the viewing distance at which each resolution starts to pay off.

What this boils down to is that unless you are viewing 4K on a 60" screen from about 5′ (1.5m), you’re not going to notice much difference from HD, and in practice that’s far too close to view a screen of that size. We view our 58" screen from about 8′, which is probably still a bit too close, and I can just about see a difference in normal viewing. Obviously if you’re a 20-year-old bird spotter things might be different… 4K is great for a cinema, but of limited value for a telly.

However, there are also a couple of more insidious problems. Some of the conversions are significantly "overdone" – pushing the contrast to extremes which don’t match the material. The Mummy (the 1999 Stephen Sommers film) is a good example, where the 4K version is a riot of shiny highlights and pitch black shadows, while the Blu-Ray retains the beautiful look of the original film. In addition, many 4K remasters end up with a grainy look which the BD version avoids.

While some of this might be down to my eyes, or my kit, I’ve heard similar complaints elsewhere, including from a couple of guys who run a TV/HiFi shop and whose job is to set up high quality demo systems.

Personally I’m probably going to keep 4K for new blockbusters without a 3D version. If a favourite gets an anniversary 4K makeover I may buy the 4K/BD combo, but I could easily end up watching the Blu-Ray.

What About Streaming?

What about it? It’s a great way to get instant access to material you won’t want to view over and over, and where picture quality is not the key requirement: catching up on box sets is a great example. However if you want quality then streaming is currently still inferior to broadcast HD, which is in turn inferior to a disk, even a good DVD (your mileage may vary…). Don’t throw your disks away yet!

Conclusions

For new purchases, buy at least a Blu-Ray version, and consider the 3D or 4K version if there is one. If the old DVD version isn’t great, and there’s a remastered version on Blu-Ray, then it’s worth an upgrade. However if your existing DVD version is a good one, save your money and buy yourself some new films and shows instead.

Posted in Thoughts on the World | Leave a comment

An Odd Omission

Let’s start with a common use case…

"I have a television / hi-fi / home cinema system which has several components from different manufacturers. I would like to control all of them with a single remote control. I would like that remote control to be configurable, so that I can decide which functions are prioritised, and so that I can control multiple devices without having to switch "modes". (For example, the primary channel controls should change the TV channel, but at the same time and without changing modes the volume controls should change the amplifier volume.) As not all of my devices are controllable via Wi-Fi, Infrared is the required primary carrier/protocol. The ideal solution would be a remote control with a configurable touch screen, probably about 6" x 3" which would suit one-handed operation."

I can’t believe I’m the first person to articulate such a use case. In fact I know I’m not, for two reasons. When I set up the first iteration of my home cinema system in about 2004, I read a lot of magazines and they said similar things.

And then I managed to buy a dedicated device which actually did this job remarkably well. It was called a Sunwave Universal Remote, and had a programmable LCD touchscreen. It had the ability to choose which device functions appeared where, and to record commands from existing remotes or define macros (sequences of commands). This provided some limited "mixed device" capability, although the primary approach was modal (select the target device, and then use controls for that device). A set of batteries lasted about a year.

There were only two problems. First, as successive TVs became smarter than anything available in 2004, it became an increasing challenge to find appropriate buttons for all their functions within the fixed option list. Then, after 13 or so years of sterling service the LCD started to die. I still own the control, but it’s now effectively unusable.

My first approach was to try and get a direct replacement. However it’s clear that these devices haven’t been manufactured for years. The few similar items on eBay are either later poor copies, with very limited functionality, or high-end solutions based on old PDAs at ridiculous prices.

But hang on. "a configurable touch screen, probably about 6" x 3"". Didn’t I see such a device quite recently? I think someone was using one to make a phone call, or surf the internet, or check Facebook, or play Angry Birds, or some such. In fact we all use smartphones for much of our technology interaction, so why not this use case?

Achtung! Rabbit hole! Dive! Dive! 🙂

Why not, indeed? Actually I knew it was theoretically possible, because my old Samsung 10" tablet which was about to go on eBay had some software called "Peel Remote" installed as standard, and I’d played with controlling hotel TVs with it. I rescued it from the eBay pile and had an experiment. The first discovery was that while there’s a lot of "universal remote" software on Google Play, most is rubbish, either with very limited functionality, or crippled by stupid amounts of highly-invasive advertising. There are a few honourable exceptions, and after a couple of false starts I settled on AnyMote developed by Color Tiger. This has good "lookup" support to get you started, a nice editing function within the app, and decent ways to backup and share remote definitions between devices. A bit of fiddling got me set up with a screen which controlled our system much better than before, and it got us through all our Christmas watching.

However picking up a 10" tablet and turning it on every time you want to pause a video is a bit clumsy, so back to the idea of using a phone…

And here’s the problem. Most phones have no infrared support. While I haven’t done any sort of scientific analysis, I’d guess that 70-80% (by model) just don’t have what’s known as an "infrared blaster", the element which actually emits the infrared signals. Given that this is very simple technology, not much more than an infrared LED in the phone’s top edge, it’s an odd omission. We build devices stuffed with every sort of wireless and radio interface, but omit this common one used by much of our other technology.

Fortunately it’s not universal, and there are some viable options. A bit of googling suggested that the LG G2 does have an IR blaster, and I tracked one down for about £50 on eBay. It turns up, the software installs…, and it just doesn’t work. That’s when I find the next problem: several of the phone manufacturers who make both TVs and phones (LG and Sony are the most obvious offenders) lock down their IR capabilities, so they are not accessible to third-party software. You can use your LG phone to control your LG TV, but that’s it, and f*** all use to me.

Back on Google and eBay. The HTC One M7 and M8 do have IR and do seem to support third-party software. The M8 is a bit bigger, probably better for my use case, and there’s one on eBay in nice condition for a good price. It turns up, the software installs…, and then refuses to run properly. It can’t access the IR blaster. Back on Google to confirm the next problem: most phones which have been upgraded from Android 5 or earlier to Android 6 have a changed software interface to the infrared which doesn’t work for a lot of third-party software. Thanks a billion, Google. 🙁

OK, last roll of the dice. The HTC One M7 still runs Android 5. I find a nice blue one, a bit more money than the M8 ironically, but still within budget. It turns up, the software installs…, and it works! I have to do a few minor adjustments on the settings copied from my tablet, but otherwise straightforward. I had to install some software to make the phone turn on automatically when it’s picked up, and I may still have to do a bit of fiddling to optimise battery life, but for now it’s looking good…

Third time lucky, but it really didn’t have to be that difficult. For reasons which are impossible to fathom, both Google and most phone manufacturers seem to be somewhere between ignoring and actively obstructing this valid and common use case. Ironically, given their usual insularity, things are a bit easier in the Apple world, with good support for third-party IR blasters which plug into an iPhone’s headphone socket, but that wouldn’t be a good solution given the rest of my tech portfolio. For now I have a solution, but I’m not impressed.

Posted in Android, Thoughts on the World | Leave a comment