
Agile Development & Software Architecture

The Colour Nazis

Once upon a time, not so long ago, there was a movement obsessed with removing colour, especially people whose skin colour or religious preference differed from its own. This went to great extremes, caused the greatest of all wars, and we are all aware of the terrible atrocities done as a result. It is one of the horrors of our current time that those beliefs, which we thought had been consigned to history, seem to be getting renewed attention and following.

Faced with political extremism, the predominantly liberal groups who control and shape our technology would typically be horrified and opposed. At the same time, however, they are forcing on us fashions and design paradigms which are in their own way just as odious, impoverishing the richness of our experience and limiting rather than improving our ability to interact with technology.

I refer, of course, to the Colour Nazis. The members of this movement probably don’t think of themselves that way, and if forced to adopt a label would choose something much more neutral, but it is becoming apparent that some of their thinking is not that different.

This is not the first time I’ve complained about this. In 2012 I wrote “Tyranny of the Colour Blind, or Have Microsoft Lost Their Mojo?”. The trouble is that things are getting worse, not better. Grappling with Office 2016, I keep running into dramatically stupid decisions which can only be explained by a Nazi zeal to remove colour from our technological interactions.

Here’s a quick test. Find Open, Save and the Thesaurus in Office 2003:

[Screenshot: the Office 2003 toolbar]

Now let’s try Office 2010:

[Screenshot: the Office 2010 equivalent]

Not too bad. The white background actually helps by increasing contrast, and the familiar splashes of colour still draw your eye quickly to the right icons, although the Thesaurus is a bit anonymous. Now let’s try Office 2016:

[Screenshot: the Office 2016 equivalent]

The faded grey-on-grey colour scheme has wiped out most of the contrast, and you’d struggle to make these out if you have ageing sight in a poor working environment. The pale pastel yellow of “Open” is still just recognisable, but the “Save” button has turned a weird pale purple, and the Thesaurus is completely anonymous. I’d have to go hunting by hovering over each icon and reading the tooltip. (Before anyone shouts, I know I’ve used an add-in menu here to get a like-for-like comparison, but all this is equally true for the full-sized ribbon controls.)

Now let’s look at a really stupid example. One of Word’s great strengths is the ability to assemble and review tracked changes from multiple reviewers. In Word 2010 each reviewer is assigned a distinctive colour, and I can very quickly see who’s who:

[Screenshot: tracked changes from multiple reviewers in Word 2010]

OK that works well. Let’s see what they’ve done in Office 2016:

[Screenshot: the same tracked changes in Word 2016]

WTF! One place where colour has a specific role as an information dimension, and they’ve actually taken it away. In the document the markup does use some colour, but in the form of a few pale pastel lines. Instead the screen is cluttered up with the name of the author against every single change, which makes it unreadable if multiple authors have made changes to a single page.

I am always among the first to remind designers not to rely on colour alone, as it doesn’t work well for about 8% of men (colour-blindness is far rarer in women), or in some viewing conditions. But that’s no reason to remove it. Instead you should supplement it (e.g. make icons both distinctive colours and distinctive shapes), or allow the user a choice. If Word 2016 let me choose between colour and explicit names in markup balloons, I wouldn’t be having this rant.

There is apparently a name for this fad, “Complexion Reduction” (see Complexion Reduction: A New Trend In Mobile Design by Michael Horton). The problem is that its advocates seem to have lost sight of some key principles of human-computer interaction. One of these is that for normally-sighted people there’s a clear hierarchy in how we spot or identify things:

  1. Colour. If we can look for a splash of colour, that’s easiest. That’s why fire extinguishers are red, or the little red coat was so poignant in Schindler’s List.
  2. Shape / position. We manage a lot of interactions by recognising shapes. That’s why icons work in the first place. We even do this when the affordance supplies text as well. If you’re a native English speaker and reader you will inevitably have tried to move a door the wrong way, because “PUSH” and “PULL” have such similar shapes, and your brain tries shapes first, text second.
  3. Text. When all else fails, read the instructions. That’s not a joke, it’s a real fact about how people’s brains work. If I have to go hunting in a menu or reading tooltips, then the designer has failed miserably.

Sadly I don’t know if there’s any way to influence this. These decisions are probably being made by ultra-hip youngsters with ironic beards and 20-year-old eyes who don’t really get HCI. I’d just like one of them to read this blog.

Addendum — May 2019

So the hierarchy for interactions is first colour, then shape, then text.

So please could someone explain to me why the latest versions of Android have also decided to force almost all application icons into a uniform shape (circular on my Sony phone, a rounded rectangle on my Samsung tablet) with exactly the same background colour?

On my phone, all the main Google apps now have icons which are white circles with tiny splashes of the same four colours. The Sony apps (including the main phone functions) are white circles with small icons, all in the same pale blue, within them. To add extra spice, the launcher I use occasionally moves the icons around if I add a new front-page app or the labels change.

My poor brain has no chance whatsoever. I open my phone, and then have to READ labels to make sure I’m opening the right app. Hopeless!


Microsoft : Busy Fixing What Ain’t Broke

There’s an interesting, but intensely annoying, behaviour by the big software companies which, as far as I’m aware, has no parallel in other areas of consumer production. We’ve all been used, since the mid-20th century, to the concept of "planned obsolescence" to make us buy new things. While you might argue that this is not great in terms of use of resources, consumers accept it because the new thing is usually better than the old one. There might be the odd annoyance (as captured by Weinberg’s New Law, on which I’ve written before), but by and large if I buy a new camera, or car, or TV there are enough definite improvements to justify the purchase and any transition pain. In addition, I usually only have to make a change either because the old thing has reached the end of its economic life, or because the new thing has a feature I really want.

It’s not that way with core software, and especially Microsoft products (although they are not the only offenders). The big software providers continue to foist endless upgrades on us, but I can’t see any evidence of improvement. Instead I can actually see a lot of what is known in other trades as "de-contenting", taking away useful capabilities which were there before and not replacing them.

Windows 10 continues to reveal the loss of features which worked well under Windows 7, with unsatisfactory or no replacements. I mourn the loss of the beautiful "aero" features of Windows 7 (with its semi-transparent borders and title bars) and a number of other stylistic elements, but there are some serious functional omissions as well. I couldn’t work out why my new laptop kept trying to latch onto my neighbour’s WiFi, rather than use my high-powered but secure internal service, and discovered that there’s now no manual mechanism to sort WiFi networks or set preferences. There is, allegedly, a brilliant new algorithm which just handles it automatically with no bother to the user. Yeah, right. Dear Microsoft, IT DOESN’T ***** WORK. Fortunately, in the way of these things, I’m not the only one to complain, and literally in the last couple of weeks a helpful Belgian developer has released a tiny utility which restores the ability to list and manipulate the WiFi networks known to a Windows 10 machine (https://github.com/Bertware/wlan10). That’s great, and the young man will be receiving a few Euros from me, but it shouldn’t have to be this way. By all means add an automatic sequencer to the new system, but leave the manual mechanism as well.
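
For reference, the underlying plumbing is still there; it is only the UI that has gone. From an elevated command prompt you can still list, re-order and delete the stored WiFi profiles, which is presumably all that utilities like wlan10 are wrapping (the profile and interface names here are illustrative; yours will differ):

    netsh wlan show profiles
    netsh wlan set profileorder name="MySecureNet" interface="Wi-Fi" priority=1
    netsh wlan delete profile name="NextDoorsWifi"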

However, my real object of hate at the moment is Microsoft Office. Since I set up the new MacBook with Windows 10 it’s never been entirely happy with the combination of versions I want to use: Office 2010, plus Skype for Business 2016. (Well actually I’d really prefer to use Office 2003, but I’m over that by now :)) I’ve had the odd problem before, having to install Visio 2016 because Visio 2010 and Skype/Lync 2016 keep breaking each other. I’m not sure how that’s even possible given the "side by side" library architecture which Microsoft introduced with Windows XP, but somehow they managed it, and they clearly don’t care enough about the old versions to fix the issue.

I could live with that, but a couple of weeks ago more serious problems set in. There was an odd "blip", and then OneNote just showed blank notebooks with the ominous statement "There are no sections open in this notebook or section group". That looked like a major disaster, as I rely on OneNote both to organise my work and to-do lists on a daily basis, and as a repository of notes going back well over 10 years. However a quick check online, and on other devices, revealed that my data was fine. I lost a good chunk of a working day to trying to fix the problem, including a partial installation of Office 2016 to upgrade to OneNote 2016. That’s a lot more difficult than it should be, and something Microsoft really doesn’t want you to do. Nothing worked. By the end of the day I was so messed up I did a system restore to the previous day, hoping that would restore my system state and fix the original problem. At first glance this appeared to fix Office, although OneNote was still showing blank notebooks. However I then had a moment of inspiration, went online to OneDrive.com, and clicked the "edit in OneNote" option. This magically re-synced things, and got my notebooks re-opened on the laptop. Success?

Unfortunately not. Things seemed OK for a few days, but then I started getting odd error messages, and things associated with Outlook and the email system started breaking. Apparently even a complete "System Restore" hadn’t completely restored the registry, and my system couldn’t work out which version of Outlook was installed. An Office repair did no good, and eventually I decided to bite the bullet and upgrade to Office 2016. Even that wasn’t trivial, and took a couple of goes, but eventually I got there, and my system is now, fingers crossed, stable again.

And that would be fine if Office 2016 was actually a straightforward upgrade from its predecessors, maintaining operational compatibility under a stable user interface, but that’s where I came in. The look and feel, drained of colour and visual separation, is in my opinion poorer than before but I’ll probably get used to it. I’ve got an add-in (the excellent Ubit Menu) which gives me a version of the ribbon which mimics the Office 2003 menus, and which I also used with Office 2010, so I can quickly find things. But what that can’t do is fix features which Microsoft have just removed.

Take Outlook for example. I really liked the "autopreview" view on my inbox folders. Show me a few lines of unread emails, so I can both quickly identify them and, importantly, scan the content to decide whether they need to be processed urgently and if any can just be deleted, but hide the preview once I’ve read them. Brilliant. Gone. I have multiple accounts under the same Outlook profile, which is how Microsoft tell you it’s meant to work, and in previous versions I could adjust the visual properties of the folder pane at the left so I could see all the key folders at once. Great. Gone. Now I’m stuck with a stupid large font and line separation which would be great if I was working on a tablet with my fingers and a single mail account, but I’m not. Dear Microsoft, some people still use a ****** PC and a mouse…

Or take Word. Previous "upgrade" Office installations carefully preserved the styles in the "Normal" template, so that opening a document in the new version preserved its layout. Not this time. I’ve had to go through several documents with detailed page layouts and check each one.

None of this is a disaster, but it is costing me time and money, and it wouldn’t be necessary if Microsoft either stopped forcing us to upgrade, or made sure to keep backwards compatibility of key features. It’s also not just a Microsoft problem: Adobe and Apple are equally guilty (witness features lost from recent versions of OSX, or the weird user interface of Acrobat XI). The problem seems to be that the big software companies don’t have a business model for just keeping our core software "ticking over", and they confuse change with improvement; now that these systems are functionally mature and already do what people need, the two are not the same thing.

I’m not sure what the answer is, or even if there is an answer. We can’t take these products away from the companies, and we don’t want them to become moribund and abandoned, gradually decaying as changes elsewhere render them unusable. Maybe they need to listen harder to their existing customers, and a bit less to potential "captures", but I’m not convinced that’s going to happen. Let the struggle continue…


Fashion Makes Doing IT Harder

I’m about to start building an expert system. Or maybe I might call it a "knowledge base", or a "rule based system". It’s not an "AI", as at least in its early life it won’t have any self-learning capability, but will just take largely existing guidance from master technicians, and stick some code behind it to deliver the right advice at the right time. Expert system is a good term, or so I thought…

It’s a while since I built a rule engine, and I’ve never truly designed an expert system before, so I thought it might be a good idea to do some reading and understand the state of the art. That’s when the trouble started. My client recommended a book on analysis for knowledge based systems, which I managed to track down for 1p + postage (that should have warned me). I got through most of the introduction, but statements such as "these new-fangled 4GLs might be interesting" and "we don’t hold with this iterative development malarkey" (I paraphrase slightly, but not much) made me realise that the "state of the art" documented was at least a generation old. The book has a few sound ideas about data structure, but pretty much everything it says about technology or process is irrelevant.

Back on Amazon, I tried searching for "expert system", "knowledge base" and "rule based system". That generates a few hits, but nothing of any substance less than about 12 years old, nothing on Kindle, and prices varying dramatically between a few pence and the best part of £100, both indications of "this is an old, rare book" and neither tempting me to take a punt. It doesn’t help that the summaries tend to be lists of technologies I’ve never heard of, and few seem to be focused on re-usable concepts and techniques.

OK, I thought. There’s obviously just a new term and I don’t know it. Wikipedia wasn’t much help, observing that the term "expert system" has largely gone out of use, and offering two opposing views why. Either expert systems became discredited and no-one does them any longer (I don’t believe that), or they just became "business as usual" (quite possible, but a good reason why you might write a book about them, not the opposite). No indication of the "modern" term, and few recent references.

Phone a friend. I emailed a couple of friends, both of whom are knowledgeable across a breadth of IT topics, hoping that one of them might say "Oh yes, we now just call them XXX". Nope. Both suggested AI, and one suggested "cognitive computing", but as I’ve already observed, that’s a fundamentally different topic. Beyond that both were just suggesting the same terms I’d already tried.

Googling a practical question such as "rule based systems in .NET" produces a few hits and suggests that the state of technology support is pretty good. For example, Microsoft put the "Windows Workflow Foundation" into .NET back in 2006, and this includes a powerful rule engine which is perfectly reusable in its own right. So the technology is there, but again there’s not much general information on how to use it.
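
The core concept really isn’t complicated, which makes the lack of literature all the stranger. As a purely illustrative sketch (plain Python, nothing to do with the Workflow Foundation API), a forward-chaining rule engine is essentially a loop over condition/action pairs against a working memory, repeated until nothing changes:

    def run_rules(facts, rules, max_passes=100):
        """facts: a flat dict of working memory; rules: (condition, action) pairs."""
        for _ in range(max_passes):
            before = dict(facts)
            for condition, action in rules:
                if condition(facts):
                    action(facts)          # may assert new facts
            if facts == before:            # stable: no rule changed anything
                return facts
        raise RuntimeError("rule set did not stabilise")

    # An invented master-technician rule, just to show the shape:
    rules = [
        (lambda f: f.get("engine_smokes") and f.get("coolant_loss"),
         lambda f: f.setdefault("advice", "suspect the head gasket")),
    ]
    print(run_rules({"engine_smokes": True, "coolant_loss": True}, rules))

Real engines add conflict resolution and refraction on top, but that loop is the heart of it.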

This appears to be a case where fashion is getting in the way. If something works, but is not "in", then authors don’t want to write about it, and editors don’t actively commission material. If the "thing" is something where the technology has improved, but not in a "sexy" way, then it goes unreflected in deeper or third-party literature. Maybe that explains why Oracle seem driven to rename all their technologies every couple of years: it’s their way of attracting at least a modicum of interest, even if it does confuse the hell out of developers trying to work out what has changed, and what really hasn’t.

So be it. I’m going to build a rule-based expert system knowledge base, and I don’t care if that’s not the modern term. It’s just frustrating that no-one seems to have written about how to do this with 2015 technology…


Does Your Broadband Beat a Carrier Pigeon?

There’s a famous quote, usually attributed to Andrew Tanenbaum: "never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway". Musing on this, I decided to try to estimate the bandwidth of a carrier pigeon, given modern storage technology. According to Wikipedia, a racing pigeon can maintain about 50 miles an hour over moderate distances. So let’s feed our pigeon, strap a 64GB micro SD card to each leg, and send him from Bristol to London, which should take about 2 hours.

128GB in 2 hours is roughly 1GB/minute, or about 140 Mbps (megabits per second). That’s about the effective transfer rate of USB 2, and comfortably beats Fast Ethernet. It’s about 50 times faster than the best I get from BT Broadband, and probably over 100 times faster than the sustained broadband bandwidth over a week, which is about how long 128GB would take to transfer. Plus remember that that’s the download speed; upload is another factor of ten slower…
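
The arithmetic is easy to check:

    payload_bits = 2 * 64e9 * 8          # two 64GB cards, in bits
    flight_seconds = 2 * 60 * 60         # Bristol to London at ~50mph
    print(payload_bits / flight_seconds / 1e6)   # ~142 Mbps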

Now I would be the first to admit that there are some limitations to the "pigeon post" architecture, especially in terms of range. The latency also precludes chatty protocols. But in terms of sheer transfer bandwidth Yankee Doodle Pigeon has "broadband" beaten hands down!


Platform Flexibility – It’s Alive!

The last post, written largely back in November and published just before Christmas, suggested that camera manufacturers should focus on opening up their products as development platforms, much as has happened with mobile phones. While I can’t yet report on this happening for cameras, I now have direct experience of exactly this approach in another consumer electronics area.

I decided to replace a large picture frame in my office with an electronic display, on which I could see a rolling presentation of my own images. This is not a new idea, but decreasing prices and improving specs brought into my budget the option of a 40"+ 4K TV, which, on the experience of our main TV, should be an excellent solution.

New Year’s Eve brought a trip to Richer Sounds in Guildford. As usual the staff were very helpful and we quickly narrowed down the options to equivalent models from Panasonic or Sony. The Panasonic option was essentially just a smaller version of our main TV, but the colours were slightly "off" and we preferred the picture quality of the Sony. The Panasonic’s slideshow application is OK, but limited, while the Sony’s built-in app looked downright crude. It looked like a difficult choice, but then I realised that the Sony operating system is something called "AndroidTV" with Google Play support, which promised a more open platform, maybe even some development of my own. Sold!

In practice, it’s exactly as I expected. The basic hardware is good, but the Sony’s default applications beyond the core TV are a bit crude. However a bit of browsing on Google Play revealed a couple of options, and I eventually settled on Kodi, a good open-source media player, which does about 90% of what I want for the slideshow. Getting it running was a bit fiddly, not least because a key picture-handling setting has to be set by uploading a small XML file rather than via the app’s UI, but after only a bit of juggling it’s now running well and doing most of what I want.

Beyond that, I can either develop an add-on for Kodi, or a native application for AndroidTV. However as the existing developer community has provided a 90% solution, I’m not in a great hurry.

I call that a result for platform vs product…


Do We Want Product Development, or Platform Flexibility?

There’s been a bit of noise recently in the photography blogosphere relating to how easy it is to make changes to camera software, and why, as a result, it feels like camera manufacturers are flat out not interested in the feature ideas of their professional and more capable enthusiast users. It probably started with this article by Ming Thein, and this rebuttal by Kirk Tuck, followed by this one and this one by Andrew Molitor.

The problem is that my "colleagues" (I’m not quite sure what the correct collective term is here) are wrong, for different reasons. They are all thinking of the camera as a unitary product, and none of them (even Molitor, who claims some experience as a system architect) is thinking, as they should, of the camera as a platform.

OK, one at a time, please…

There are a lot of good ideas in Ming Thein’s article. A lot of his suggestions to improve current mirrorless cameras are good ones with which I agree. The trouble is that he is trying to design "Ming Thein’s perfect camera", and I suspect that it wouldn’t be mine. For a start it would end up far too heavy, too expensive and with too many knobs!

Kirk Tuck gets this, and his article is a sensible exploration of trade-offs and how one photographer’s ideal may be another’s nightmare. However he paints a picture of flat-lining development which is very concerning, because there are some significant deficiencies in current mainstream cameras which it would be great to address.

Andrew Molitor then picks up this strand, and tries to explain why all camera feature development is difficult and prohibitively expensive, and why Expose to the Right (ETTR) is especially difficult. Setting aside that referring to Michael Reichmann as "a pundit" is unkind and a considerable underestimation of that eminent photographer’s capabilities, there are several fallacies in Molitor’s articles. Firstly, it just would not be as difficult as claimed to implement ETTR metering, or any variant of it. It’s just another metering calculation. If you have a camera with some form of live histogram or overexposure warning, then you can already operate this semi-manually, tweaking down the exposure compensation until the level of clipping is what you want. If you can do it via a predictable process, then that enormously powerful computer you call a digital camera can easily be made to replicate the same quickly and efficiently. That’s what the metering system does. It’s even quite likely that the engineers have already done something similar, but hidden it. (Hint: if you have a scene mode called something like "candle-lit interior", you’re almost there…)
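
To make the point concrete, here is a deliberately naive sketch of ETTR as "just another metering calculation" (plain Python with invented names, emphatically not anyone’s firmware): push the exposure up, a third of a stop at a time, until just under a chosen fraction of pixels would clip.

    def ettr_compensation(histogram, clip_limit=0.005, step_ev=1/3, max_ev=3.0):
        """histogram: pixel counts per linear sensor level, dimmest first.
        Returns a positive EV adjustment that keeps clipping under clip_limit."""
        total = sum(histogram)
        levels = len(histogram)
        def clipped_fraction(ev):
            # adding ev stops of exposure multiplies linear values by 2**ev,
            # so any level at or above (levels - 1) / 2**ev would saturate
            first_clipped = int((levels - 1) / 2 ** ev)
            return sum(histogram[first_clipped:]) / total
        ev = 0.0
        while ev < max_ev and clipped_fraction(ev + step_ev) <= clip_limit:
            ev += step_ev
        return ev

A real implementation would also back the exposure off when the scene already clips, but the whole thing remains a small, predictable calculation over data the camera already has.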

I suspect the calculations of grossed-up cost are also fallacious. If they were right, in a market which manages US sales of only a few tens of thousands of mirrorless cameras per year (for example), we would never get any new features at all. The twin realities are that by combining multiple features into the normal streams of product or major release development, many of the extra costs are amortised, and we also know that the big Japanese electronics companies apply different accounting standards to development of their flagship products. If Molitor’s argument were correct, we would not see features in each new camera such as a scene mode for "baby’s bottom on pink rug" (OK, I made that one up :)) or in-camera HDR, and things like that don’t seem to be a problem. I simply cannot believe that "baby’s bottom on pink rug" will generate millions of extra dollars of revenue, compared with a "control highlight clipping" advanced metering mode, which would be widely celebrated by almost all equipment reviewers and advanced users.

So assuming that I’m right, and on-going feature development is both feasible and desirable, where does that leave us?

Ming Thein is not alone in expressing disappointment with the slow provision of improved features aimed at the advanced photographer, and I agree with him that the lack of progress is really very annoying. In my most recent review, I identified several relatively simple features which would be of significant value to the advanced photographer, and which could easily be implemented in the software of any good mirrorless camera without hardware changes, including:

  1. Expose to the right or other "automatically control highlight clipping" metering
  2. Optimisation for RAW Capture (e.g. histogram from RAW, not JPG)
  3. Proper RAW-based support for HDR, panoramas, focus stacking and other multishot techniques
  4. Focal distance read-out and hyperfocal focus
  5. Note taking and other content enrichment

All of these have been identified requirements/opportunities since the early era of digital photography. Many of them are successfully implemented in a few, perhaps more unusual, models. For example the Phase One cameras implement a lot of the focus-related features, the Olympus OM-D E-M5 II does a form of image stacking for resolution enhancement, and Panasonic have just introduced a very clever implementation of focus bracketing in the GX8 based on a short 4K burst. However by and large the mainstream manufacturers have not made any significant progress towards them. Even if Molitor’s analysis is correct, and this is all much more difficult than I expect (despite my strong software development experience), you would think that over time there would be at least some limited visible progress, but no. If the concepts were really "on the product backlog" (to use the iterative development term), then some would by now have "made the cut", but instead we get yet more features for registering babies’ faces…

My guess is that some combination of the following is going on:

  • The "advanced photographer" market is relatively small, and quite saturated. Camera manufacturers are therefore trying to make their mid-range products attractive to users who would previously have bought a cheaper device, and who may well consider just using a phone as an option. To do this, the device needs to offer lots of "ease of use" features.
  • Marketing and product management groups are focused on the output of "focus groups", which inevitably generate lowest-common-denominator requirements that look a lot like current capabilities.
  • Manufacturers are fixated on a particular set of use cases and can’t conceive that anyone would use their products in a different way.

The trouble is that this leaves the more experienced photographers very frustrated. The answer is flexibility. By all means offer an in-camera, JPG-only HDR for the novice user, but don’t fob me off with it – offer me flexible RAW-based multishot support as well. Re-assignable buttons are a good step in the right direction, but they are not where flexibility begins and ends. The challenge, of course, is to find a way to provide this within fixed product cycles and limited budgets.

I think the answer lies with software architecture, and in particular how we view the digital camera. It’s time for us all, manufacturers and advanced users alike, to stop thinking of the camera as a "product", and start thinking of it as a "platform", for more open development. In this model the manufacturer still sells the hardware, complete with basic functionality. Others extend the platform, with "add-ins" or "apps", which exploit the hardware by providing new ways to drive and exploit its capabilities.

We’ve been here before. In the early noughties, mobile phone hardware had evolved beyond all recognition (my first mobile phone was a Vodafone prototype which filled one seat and the boot of my Golf GTI, and needed a six-foot whip antenna!) However, you bought your phone from Nokia, for example, and it did what it did. If you didn’t like the contact management functionality, you were stuck with it.

Then Microsoft, followed more visibly by Apple and eventually Google, broke this model, by delivering a platform, a device which made phone calls, sure, but which also supported a development ecosystem so that some people could develop "apps", and others could install and use those which met their needs. Contact management functionality is now limited only by the imagination of the developer community. Despite my criticism of some early attempts, the model is now pretty much universal, and I don’t think I could go back to a model where my phone was a locked-down, single-purpose device.

The digital camera needs to go the same way, and quickly before it is over-run by the phone coming at the same challenge from the other side. Camera manufacturers need to stop thinking about "what other features should we develop for the next camera", and instead direct themselves to two questions, one familiar and one not. The familiar one is, of course, "how can we make the hardware even better"? The unfamiliar one is "how can we open up this platform so that developers can exploit it, and deliver all that stuff the advanced users keep going on about"?

Ironically, for many manufacturers many of the concepts are already in place, just not joined up. The big manufacturers all offer open lens mounts, so that anyone can develop lenses for their bodies. In the case of Panasonic, Olympus and the other Micro Four Thirds partners it’s even an open multi-party standard. Panasonic certainly now deliver "platform" televisions with the concept of third-party apps. There’s a healthy community of "hackers" developing modified firmware for Canon and Panasonic cameras, albeit at arm’s length from, and with a slightly ambivalent relationship to, the manufacturers. I’m sure many of those would much prefer to be working as partners, within an open development model.

So what should such a "platform for extensibility" look like? Assuming we have a high-end mirrorless camera (something broadly equivalent to a Panasonic GX8) to work with as base platform, here are some ideas:

  1. A software development kit, API and "app store" or similar for the development and delivery of in-camera "apps". For example, it should be possible to develop an ETTR metering module, which the user can choose as an optional metering mode (instead of standard matrix metering). This would be activated in place of the standard metering routine, take in current exposure, and return required exposure settings and perhaps some correction metadata. Obviously the manufacturer would have to make sure that any such module returned "safe" values, but in a mirrorless camera it should be very easy to check that the exposure settings are "reasonable" and revert to a default if not. Other add-ins could tap into events such as the completion of an exposure, or could activate functions such as setting focal distance. The API should either be development language-agnostic, or should support a well-known language such as Java, C++ or VB. That would also make it easier to develop an IDE (exploiting Visual Studio or Eclipse as a base), emulators and the like. There’s no reason why the camera needs an "open" operating system.
  2. An SDK for phone apps. This might be an even easier starting point, albeit with limitations. Currently manufacturers such as Panasonic provide some extended functions (e.g. geotagging) via a companion app for the user’s phone, but these apps are "closed", and if they don’t do what you want, that’s an end of it. It would be very easy for these manufacturers to open up this API, by providing libraries which other developers can access. My note taking concept could easily be delivered this way. The beauty of this approach is that it has few or no security issues for the camera, and the application management infrastructure is delivered by Google, Apple and Microsoft.
  3. An open way to share, extend and move metadata. Panasonic support some content enrichment, but in an absolutely nonsensical way, as those features only work for JPEG files. What Panasonic appear to be doing is writing to the JPEG EXIF data, but not even copying it to the RAW files. The right solution is support for XMP companion files. These can then accompany the RAW file through the development process, being progressively enhanced by different tools, with the relevant data permanently written to the output JPEG. This doesn’t have to be restricted to static, human-readable information. If, for example, the ETTR metering module can record the difference between its exposure and the one set by the default matrix method, then this can be used by the RAW processing to automatically "normalise" back to standard exposure during processing (see the sketch after this list). XMP files have the great advantages that they are already an open standard, designed to be extensible and shared between multiple applications, and it’s pretty trivial to write code to manipulate them, so this route would be much better than opening up the proprietary EXIF metadata structures.
  4. A controllable camera. What I mean by this is that the features of the camera which might be within the scope of the new "apps" must be set via buttons, menus and "continuous" controls (e.g. wheels with no specific set positions), so that they can be over-ridden or adjusted by software. They must not be set by fixed manual switches, which may or may not be set where the software requires. The Nikon Df or the Fuji XT1 may suit the working style of some photographers – that’s fine – but they are unsuited to the more flexible software environment I’m envisaging. While I prefer the ergonomics of "soft" controls anyway, in this instance they are also the solution which promotes the flexibility we’re seeking to achieve here.
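
To illustrate point 3, an XMP sidecar really is trivial to generate. This sketch (Python; the namespace and property name are invented purely for illustration) records the ETTR "normalisation offset" described above alongside a RAW file:

    from xml.sax.saxutils import quoteattr

    SIDECAR = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
     <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
      <rdf:Description rdf:about=""
         xmlns:ettr="http://example.com/ns/ettr/1.0/"
         ettr:ExposureOffsetEV={offset}/>
     </rdf:RDF>
    </x:xmpmeta>
    """

    def write_sidecar(raw_path, offset_ev):
        # e.g. P1000123.RW2 -> P1000123.RW2.xmp (sidecar naming conventions vary)
        with open(raw_path + ".xmp", "w", encoding="utf-8") as f:
            f.write(SIDECAR.format(offset=quoteattr("%+.2f" % offset_ev)))

    write_sidecar("P1000123.RW2", -1.33)

Any XMP-aware tool can then read or enrich the same sidecar as the file moves through the development workflow.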

This doesn’t have to be done in one fell swoop, and it might not be achieved (or even appropriate) 100% for every camera. That’s fine. Panasonic, for example, could make a great start by opening up the "Image App" library, which wouldn’t require any immediate changes to the cameras at all.

So how about it?


SharePoint: Simply C%@p, or Really Complicated C%@p?

There’s a common requirement for professional users of online document management systems. Sometimes you want to have access to a subset of files offline, with the ability to upload changes when you have finished work and are connected again. Genuine professional document management solutions like Open Text LiveLink have been able to do this for years, frequently with a little desktop add-in which presents part of the document library as a pseudo-drive in Windows Explorer.

Microsoft SharePoint can’t do this. It has never been able to do this, and it still can’t. Microsoft have worked out that it’s a requirement, they just seem completely incapable of implementing a usable solution to achieve it, despite the fact that doing so would instantly bridge a significant gap between their online DM solution and their desktop products.

For the first 10 years, they had no solution at all. Then Office 2010 introduced "Microsoft SharePoint Workspace 2010". This promises, but under-delivers. It can cache all the documents in a site into a hidden folder on your PC, and allows access to them through an application which looks a little bit like Windows Explorer, but isn’t. It’s very fiddly, and breaks all the rules about how you expect Office apps to work. It’s also slow and unreliable. Google it, and you find bloggers who usually praise Microsoft products to the skies using words like "execrable". Despite at least three Office releases since 2010, Microsoft don’t appear to have made any attempt to fix it.

There’s now an alternative option, in the form of OneDrive for Business. This has a different balance of behaviours. On the upside, you can control where it syncs files so that they do appear in Explorer in a controlled fashion. On the downside, you can only link to a single SharePoint site (not much use if you have a client with multiple sites for different groups), and it still insists on synching all files in bulk, which is not what you want at all. On top of that I couldn’t get it to authenticate reliably, and was seeing a lot of failed synchronisations leaving my copy in an indeterminate state. There’s supposed to be a major rewrite in progress, bringing it more in line with the personal version of OneDrive, which works quite well, but no sign of anything useful yet…

Having wasted enough time on a Microsoft-only solution, I reverted to an approach which does work fairly well, using the excellent Syncback Pro. You have to log in using Internet Explorer and the "keep me signed in" setting before it will work, but after that it delivers exactly what I want, allowing the selection of an exact subset of files, and the location of the copy on your PC, with intelligent two-way synchronisation. Perfect.

Perfect? Well, sort of. Syncback works very well, but even it can’t work around some fundamental limitations of SharePoint. The biggest problem is that when SharePoint ingests a file, it resets both the file modified date and the file created date to the date and time of ingestion! When you export or check the file, it therefore appears to be a changed, later version than the one you uploaded. Proper professional DM systems just don’t do this, and the Syncback guys haven’t found a workaround. Worse, I discovered that the SharePoint process was marking some files as checked in, and therefore visible to other users, and some as still checked out to me, and therefore invisible to others.

The latter is a real problem, since the point of uploading the files is to share them with others. It’s also very fiddly to fix, as SharePoint doesn’t seem to provide any list of files checked out, and there’s no mechanism to check files in, in bulk – you have to click on each file individually and go through the manual check-in process.

Aha, I thought. Surely Microsoft’s excellent development tools will allow me to quickly knock up a little utility to search through a site, find the files checked out to me, and programmatically check them in. Unfortunately not. The first red flag was the fact that on a PC with full installations of Office and a couple of versions of Visual Studio, there’s no installed object model for SharePoint. After a lot of Googling I found a download called the "Office Developer Tools for VS 2013". I didn’t think I needed this, given what I already had installed, but ran the installer anyway. This took longer to complete than a full installation of Office or Visual Studio would, and in the process silently closed all my open Office apps, losing some work. When it finished I still couldn’t see the SharePoint objects immediately, but adding a couple of references to my project manually finally worked. Right up to the point where I tried to test-run the project, at which point execution failed on the first line. It appears that these objects support development anywhere, but the code must execute on a server running SharePoint – there’s no concept of a desktop tool remotely interrogating a library.

OK, I thought. What about web services? I remember in the early days of SharePoint I was able to use SOAP web services to access and interrogate it, and I thought the same should still be true. To cut a long story short, it isn’t. There’s no simple listing of the API, and attempting to interrogate the services using Visual Studio’s usually excellent tools failed at the first post, with unresolvable authentication errors. In addition they seem to have moved to a REST API, which is fundamentally much harder to drive if you don’t have a clear API listing. A lot of developers seem to be complaining about similar issues. I did find a couple of articles with sample code, but it all seems very complicated compared with what I remember of the original SOAP API.
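
For what it’s worth, the REST calls themselves don’t look too bad once you know them; it is the authentication that defeated me. Here is a hedged sketch in Python (assuming a requests session that has somehow acquired working credentials, and endpoint details as I understand them rather than verified against every SharePoint version):

    import requests   # "session" below is a requests.Session with working auth

    def form_digest(session, site):
        # SharePoint wants a request digest on every POST
        r = session.post(site + "/_api/contextinfo",
                         headers={"Accept": "application/json;odata=verbose"})
        return r.json()["d"]["GetContextWebInformation"]["FormDigestValue"]

    def check_in_folder(session, site, folder):
        files = session.get(
            site + "/_api/web/GetFolderByServerRelativeUrl('%s')/Files" % folder,
            headers={"Accept": "application/json;odata=verbose"},
        ).json()["d"]["results"]
        digest = form_digest(session, site)
        for f in files:
            if f["CheckOutType"] != 2:   # 2 = not checked out
                session.post(
                    site + "/_api/web/GetFileByServerRelativeUrl('%s')"
                           "/CheckIn(comment='bulk check-in',checkintype=0)"
                           % f["ServerRelativeUrl"],
                    headers={"X-RequestDigest": digest})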

After wasting a couple of hours on "quickly knocking up a little utility" I gave up, at least for now. Back to the manual check-in method…

I’ve never been a fan of SharePoint, but it appears to be getting worse, not better. At least the first versions were simply cr@p. The new versions are very complicated cr@p.


The Software Utility Cycle

There’s a well-known model called the “Hype Cycle”, which plots how technology evolves to the point of general adoption and usefulness. While there are many variants of the detail, they all boil down to something like the following (courtesy of Wikipedia & Gartner):

[Diagram: the Hype Cycle]


While this correctly plots the pattern of adoption of a new technology, it hides a nasty truth: the “plateau of productivity” is not a picture of nice, gentle, continuous, enduring improvement. Eventually all good things must come to an end. Sometimes an older technology is replaced outright by a newer one, and the old one continues in obsolescence for a while and then withers away; we understand that pattern quite well too. However, I think we are now beginning to experience another behaviour, especially in the software world.

Welcome to the Software Utility Curve:

[Diagram: the Software Utility Curve]


We’re all familiar with the first couple of points on this curve. Someone has a great idea for a piece of software (the “Outcrop of Ideas”). V1 works, just about, and drums up interest, but it’s not unusual for there to be a number of obvious missing features, or for the number of initial bugs and incomplete implementations to almost outweigh the usefulness of the new concept. Hopefully, suitably encouraged and funded, the developers get cracking moving up the “Escarpment of Error Removal”. At the same time the product grows new, major features. V2 is better, and V3 is traditionally stable, useful and widely acclaimed (the “Little Peak of Usefulness”).

I give you, for example, Windows 3.1, or MS Office 4.0.

What happens next is interesting. It is not uncommon at this point for the product to be acquired, or re-aligned by its parent company, or for the developers to realise that they’ve done a great job, but at the cost of some architectural dead-ends. Whatever the cause, this is the point of the “Great Architectural Rewrite Chasm”. The new version is maybe on a stronger foundation, maybe better integrated with other software, but in the process things have changed or broken. This can, of course, happen more than once…

MS Office 95? Certainly almost every alternative version of Windows (see my musings on the history and future of Microsoft Windows).

The problems can usually be fixed, and the next version is back to the stability and utility of the one at the previous “Little Peak of Usefulness”, maybe better.

Subsequent versions may further enhance the product, but there may be emerging evidence of diminishing returns. The challenge for the providers is that they have to change enough to make people pay for upgrades or subscriptions, rather than just soldiering on with an old version, but if the product is now a pretty much perfect fit for its niche there may be nowhere to go. Somewhere around version 7 or 8, you get a product which represents a high point: stable, powerful, popular. I call this the “Peak of Productivity”.

Windows 7. Office 2003. Acrobat 9.

Then the rot sets in, as the diminishing returns finally turn negative. The developers get increasingly desperate to find incremental improvements, and start thinking about change for its own sake. Pretty soon they come up with something which may have sounded great in a product strategy meeting, but which breaks compatibility, or the established user experience model, and we’re into negative territory. The problems may be so significant that the product is tipped into another chasm, not just a gentle downhill trundle.

Ladies and Gentlemen, I proudly present to you Microsoft Office 2007. With its ribbon interface which no-one likes, and incompatible file formats. We also proudly announce the Microsoft Chair of Studies into the working of the list indentation feature…

I’m not sure where this story ends, but I feel increasing frustration with many of the core software products we all spend much of the day with. MS Office 2010+ is just not as easy to use as the 2003 version. OK, youngsters who never used anything else may be comfortable with the ribbon, but I’m not convinced. I’m not sure I ever asked for the “improvements” we have received, and it annoys me intensely that we still can’t easily set the indents in a list hierarchy, save the style, and have it stay set. That said, I have to credit Microsoft with a decent multi-platform solution in Office 365, so maybe there’s hope. Acrobat still doesn’t have the ability to cut/paste pages from one document to another, although you can do a (very, very fiddly) drag and drop to achieve the same thing… And this morning I watched an experienced IT architect struggling with settings in Windows 8, and eventually helped him solve the problem by going to Explorer and doing a right-click, Manage, which fortunately still works like it did in Windows NT.

There’s an old engineering saying: “If it ain’t broke, don’t fix it”. Sadly the big software companies seem to be incapable of following that sound advice.


Can No-One Write A Good Book About Oracle SOA?

I’m frustrated. I’ve just read a couple of good, if somewhat repetitive, design pattern books: one on SOA design with a resolutely platform-neutral stance, and another on architecting for the cloud, with a Microsoft Azure bent but which struck an admirable balance between generic advice and Microsoft specific examples.

So far so good. However although the Microsoft Azure information may come in handy for my next role, what I really need is some good quality, easy to read guidance on how current generic guidance relates to the Oracle SOA/Fusion Suite. I identified four candidates, but none of them seem worth completing:

  • Thomas Erl’s SOA Design Patterns. This is very expensive (more than £40 even in Kindle format), gets a lot of relatively poor reviews, and I didn’t much like the last book I read by the same author.
  • Sergey Popov’s Applied SOA Patterns on the Oracle Platform. This is another expensive book, but at least you can read a decent-length Kindle sample. However doing so has somewhat put me off. There are pages upon pages upon pages of front-matter. Do I really want to read about reviewers thanking their mothers for having them before I get to the first real content? Fortunately even with that issue the sample gets as far as an introductory chapter, but this makes two things apparent. Firstly, the author has quite a wordy and academic style, but more importantly he has re-defined the well-established term "pattern" to mean either "design rule" or "Oracle example", neither of which works for me. However I really parted company when I got to a section which states "… security … is nothing more than pure money, as almost no one these days seeks fun in simple informational vandalism", and then went off into a discussion of development costs. If this "expert" has such a poor understanding of cyber-security it doesn’t bode well…
  • Harish Gaur’s Oracle Fusion Middleware Patterns. Again, this appears to have redefined "pattern" as "Opportunity to show a good Oracle example", but that might be valid in my current position. Unfortunately I can’t tell you much more as the Kindle sample finished in the middle of "about the co-authors", before we get to any substantive content at all. As it’s another relatively expensive book with quite a few poor reviews I’m not sure whether it’s worth proceeding.
  • Kathiravan Udayakumar’s Oracle SOA Patterns. Although only published in 2012, this appears to already be out of print. It has two reviews on Amazon, one at one-star (from someone who did try and read it) and one at three stars (from someone who didn’t!).

In the meantime I’ve started what looks like a much more promising book, David Chappell’s Enterprise Service Bus. This appears to be well-written, well-reviewed and reasonably priced. What really attracts me is that he’s attempted to extend the "Gregorgram" visual design language invented for Enterprise Integration Patterns to service bus architectures, which was in many ways the missing piece from the Service Design Patterns book. Unfortunately the book may be a bit too old and Java-focused to give me an up-to-date technical briefing, but as it’s fairly short that’s not an issue.

After that it’s back to trying to find a decent book which links all this to the Oracle platform. If anyone would like to recommend one please let me know.


Review: Cloud Design Patterns

Prescriptive Architecture Guidance for Cloud Applications , By Alex Homer, John Sharp, Larry Brader, Masashi Narumoto, Trent Swanson

Good book let down by poor high-level structure

This is a very useful introduction to key cloud concepts and how common challenges can be met. It’s also a good overview of how Microsoft technologies may fit into these solutions, but avoids becoming so Microsoft-centric that it becomes useless in other contexts. Unfortunately, however, the overall structure means that this is not a book designed for easy end to end reading. It may work better as a reference work, but that reduces what should have been its primary value.

The book starts with a good introduction and a list of the patterns and supporting "guidance" sections, and is then followed first by the patterns, and then the guidance sections (useful technology primers). This is where things break down a bit, as the patterns are presented in alphabetical order, which means a somewhat random mix of topics, followed by the same again for the guidance sections. I attempted to read the book cover to cover over about a week, and I found the constant jumping about between topics extremely confusing, and the constant repetition of common content very wearing. In addition, by presenting the guidance material at the end it is arguably of less value, as most of the concepts have already been covered in related patterns. Ultimately the differentiation between the two is arbitrary and unhelpful. For example, is "throttling" really a pattern or a core concept? And if "throttling" is a pattern, why is "autoscaling" not described as one?

The book would be about 10 times better if it were re-organised into half a dozen "topics" (for example data management, compute resource management, integration, security…), with the relevant guidance and overviews first in each topic, followed by the related patterns which could then be stripped of a lot of repetitive content, and topped off with common cross-reference and further reading material.

This is not just a book about cloud specifics. A lot of the material reflects general good practice in building and integrating large systems, even for on-premise deployment, and reinforces my view that "Cloud" is just a special case of this established body of practice. As a result there’s quite a lot of overlap with older pattern books, especially Enterprise Integration Patterns, which is also directly referenced. The surprisingly substantial content related to message-based integration confirms my view that this is still the best model for loosely coupled extended portfolios, but I would have appreciated more on the overlap with service technologies.

The overlap with other standard pattern books might have been managed just by referencing them, but this would play against Microsoft’s objective of making this material readily available to all readers at low cost.

The book is spectacularly good value for money, especially as you can download it free from Microsoft if you are prepared to do a bit of juggling with document formats. That it forms part of a series available under similar terms is even better. This perpetuates Microsoft’s tradition of providing cheap, high-quality guidance to developers, and sits in sharp contrast with the high costs of comparable works not only from independent publishers (which may be understandable) but also from other technology vendors.

The book does assume some familiarity with Windows Server concepts, for example worker roles vs machine or application instances, and doesn’t always explain these terms. A glossary or a clear reference to a suitable external source would have been useful.

At a practical level I’m pleased to see that the Kindle version works well, with internal links hyperlinked and clear diagrams, plus access to each pattern directly from the menu in the Android Kindle app. Offset against this are a few cases of poor proofreading related to problems with document format conversions, in particular with characters like apostrophes turned into garbage character strings.

Overall I found this a useful book, and I’m sure it will become a valuable reference work, but I just wish the authors and editors had paid more attention to the high-level structure for those trying to read it like a traditional book.


Things Which Really Bug Me About the Kindle

I read a lot using the Kindle applications for Android and PC. While there’s a lot which is good about that process, there are a number of things which really bug me. Some of these look incredibly simple to resolve, from my standpoint as a competent software developer, and I have to question whether Amazon actually care about getting the user experience right…

Changing Font Size

The current behaviour of the font selection option is completely brain-dead, especially when switching between documents. Suppose I open one book which has been composed using a large base font. The text comes up very large and I set my font size to 2. I then open a second book, which has been composed using a smaller base font, and I have to change the font setting to 4 to get back to a size I’m comfortable with. Open the first document and the text is now enormous!

The application should actually work as follows. I would set a preferred font face and size, and that would just be used automatically for all the bulk text in all documents. Anything styled with tags like Normal, Body Text or List should just use my selected font and size. Automatically. Paragraphs with heading styles would use progressively larger fonts, and the style might change to an author preference, although I should be able to over-ride that.

If that’s not possible, although I really don’t understand why not, then any change I make to my settings should apply only for a single document, and my settings for each document should be remembered if I switch between them. If I have to set size 2 in one document and size 4 in another to get a consistent reading experience, the app should remember that.
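
The fallback behaviour is hardly rocket science either. A sketch of the data structure involved (illustrative only, and obviously not Amazon’s code): one global preference, plus per-book overrides, all synced with the account.

    prefs = {"default_size": 3, "overrides": {}}    # overrides: book_id -> size

    def effective_size(book_id):
        return prefs["overrides"].get(book_id, prefs["default_size"])

    def user_sets_size(book_id, size):
        # remember the choice for this book only; the global default is untouched
        prefs["overrides"][book_id] = size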

Have the developers ever actually used the devices and apps with real eBooks?

Collections and Tagging

When, early on, you have half a dozen books in your Kindle account, the lack of effective library management tools is not too much of an issue. When, like us, that library has grown to several hundred titles, this starts to be a major problem.

Amazon allege that the solution is to use collections. That might help, if it weren’t for another brain-dead implementation. Collections on the physical Kindle are a local data structure, effectively invisible to other devices. In the Android app they are quite a usable feature, and sync with other Android devices, but not other platforms. On the PC you can create local collections, and allegedly import collections from physical Kindles (although I haven’t got that to work) but the collections are then completely independent of all other devices.

Is this really the best that can be achieved by one of the leading cloud services companies? Surely it’s not rocket science to come up with an architecture for collections/lists and tags which is synchronised via the cloud account to and from every device on it? (And I note that there can’t possibly be any real technical issue, because notes and highlights synchronise perfectly across all my devices…)
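
To be concrete, here is a rough sketch of the sort of account-level model I mean. All the names are hypothetical, but the point is that collections live against the account, and every device merges with the cloud copy, exactly as notes and highlights already do:

```python
from dataclasses import dataclass, field

@dataclass
class Collection:
    name: str
    book_ids: set = field(default_factory=set)
    modified: float = 0.0  # server timestamp, used to merge concurrent edits

@dataclass
class Account:
    collections: dict = field(default_factory=dict)  # name -> Collection

    def add_to_collection(self, name, book_id, timestamp):
        coll = self.collections.setdefault(name, Collection(name))
        coll.book_ids.add(book_id)
        coll.modified = timestamp
        # ...then push this collection to the cloud; every other device
        # pulls and applies it on its next sync, newest edit winning
```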

Again, this looks like the developers are either stupid, or lazy, or completely indifferent to the implications of their substandard work.

Book Descriptions

If you are reading a book on the Kindle, you can quickly pop up some key descriptive details. Relatively recently Amazon have added the same feature to the Android app, although it doesn’t work for books which aren’t open. On the PC it’s not supported at all.

There are three sets of books for which I would like to be able to quickly access descriptive details, ideally on- and off-line:

  • Books I have downloaded to my device, but which I’m not currently reading
  • Books in my archive, to remember which is which
  • Books which are being recommended by Amazon within my mobile reading experience, e.g. the recommendations panel on the home page of the Kindle app.

No, I do NOT want to "view in store", especially if it’s a book I’ve already downloaded, I’m offline, and I’m just not 100% sure which is which from the cover image. And I don’t really want to have to open up a book to see its description. Surely it wouldn’t be rocket science (again) to download the key descriptive details for all the books in the above categories at every sync, and have those details available via a long press from the overview pages, just as they would be from within an open book?
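
Sketched in a few lines, and with function and field names entirely of my own invention, it might look like this:

```python
local_cache = {}  # book_id -> descriptive details, persisted between sessions

def sync_metadata(fetch_details, book_ids):
    # At each sync, pull the blurb for every downloaded, archived and
    # recommended book, not just the ones currently open
    for book_id in book_ids:
        local_cache[book_id] = fetch_details(book_id)

def describe(book_id):
    # What a long press on a cover could show, online or offline
    return local_cache.get(book_id, {"note": "details not yet synced"})

# Example with a stubbed store call:
sync_metadata(lambda b: {"title": f"Title of {b}", "author": "A. N. Author"},
              ["book-1", "book-2"])
print(describe("book-1"))
```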

Position References

Some books insist on referring internally to a page number from the printed edition. If you’re referring to a specific position in a book in the outside world, this is also still a common practice (and probably the only viable one unless the book has quite a fine-grained and well-numbered heading structure). Kindle insists on referring to and navigating locations using an internal "position" reference, which not only has zero relationship to the outside world, but can change from time to time depending on font choice and other settings. Therefore unless you have access to the physical edition as well as the eBook, you’re stuffed. It’s not even easy if you have a relative reference (e.g. page 200 of 300), because you have to get the calculator out to work out that this is equivalent to roughly "position 3595 of 5393".

It would undoubtedly be better if authors creating Kindle versions of technical and reference books made sure all internal references were simply hyperlinks to the right point in the document. However I’m sure Amazon could help as well. How about, for example, holding the page count of the physical edition(s) against the Kindle version, and modifying the "Go To" dialog so that I can specify the target position as a percentage, or as a page number relative to the page count for the physical edition?
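
The arithmetic is trivial, assuming positions are spread roughly evenly through the text. A sketch of the mapping such a "Go To" dialog could use (the function name is mine):

```python
def page_to_position(page, page_count, max_position):
    # Map a printed-edition page to an approximate Kindle position by
    # simple linear interpolation
    if not 1 <= page <= page_count:
        raise ValueError("page out of range")
    return round(page / page_count * max_position)

# Page 200 of a 300-page edition, in a book with 5393 positions:
print(page_to_position(200, 300, 5393))  # roughly 3595
```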

The Back Button

The physical Kindle and all Android devices have a "back" button, which should take you back step by step through your working contexts, like the back button on a browser. On the Kindle, or in the PC app, this behaves as you’d expect. If you follow a link within a book it takes you to a new page, but the back button takes you back to the page you were previously reading. Only when you get back to your first context does it take you right out to the menu. Not on Android. Click on a link to an external source, and the back button takes you back into Kindle at the right point. So far so good. Click on an internal link, and the back button takes you right out of the book. To make matters worse, it has by then remembered the location you navigated to as your "current" location, so to get back to where you were previously you have to navigate manually. Completely useless, and presumably about one line of code to fix properly.
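
For the avoidance of doubt, here is essentially the whole data structure the Android app appears to be missing, sketched with naming of my own:

```python
class ReadingSession:
    def __init__(self, start_position):
        self.position = start_position
        self.history = []  # positions we navigated away from, oldest first

    def follow_link(self, target_position):
        self.history.append(self.position)  # remember where we came from
        self.position = target_position

    def back(self):
        # Pop back through prior locations; only leave the book when
        # the stack is empty
        if self.history:
            self.position = self.history.pop()
            return True   # handled in-book
        return False      # caller may now exit to the library

# e.g. following a footnote from position 1200:
session = ReadingSession(1200)
session.follow_link(5100)
session.back()  # returns True, and session.position is 1200 again
```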

Conclusions

I don’t think I’m being unreasonable here. Amazon make a vast amount of money out of the Kindle platform, and could make even more if it were a sound platform for reference books as well as for novels and the like. None of these issues would take a vast amount of effort to fix, just the will to be bothered and do a professional job. Amazon’s persistent indifference on these points reveals an attitude which bugs me even more than the issues themselves.


Review: Service Design Patterns

Fundamental Design Solutions for SOAP/WSDL and RESTful Web Services, by Robert Daigneau

Good book, but some practical annoyances

One of the most influential architecture books of the early 00s was Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. That book not only provided far and away the best set of patterns and supporting explanations for designers of message-based integration, but also introduced the concept of a visual pattern language, allowing an architecture (or other patterns) to be described as assemblies of existing patterns. While this concept had been in existence for some time, I’m not aware of any other patterns book which realises it so well or so consistently. The EIP book became very much my Bible for integration design, but technology has moved on: service-based integration is now the dominant paradigm, and in need of a similar reference work.

Service Design Patterns is in the same series as the EIP book (and the closely related Patterns of Enterprise Application Architecture), and overtly takes the earlier books as a baseline on which to build an additional set of patterns more directly related to service-oriented integration. Where the earlier books’ content is relevant, it is simply referred to. This helps to build a strong library of patterns, but also actively reinforces the important message that designers of newer integration architectures will do well to heed the lessons of previous generations.

The pattern structure is very similar to the one used in the EIP book, which is helpful. The "Headline" context description is occasionally a bit cryptic, but is usually followed by a very comprehensive section which describes the problem in sufficient detail, with an explanation of why and when alternative approaches may or may not work, and the role of other patterns in the solution. The text can be a little repetitive, especially as the author tries to deliver the specifics of each pattern explicitly for each of three key web service styles, but it’s well written and easily readable.

This is not a very graphical book. Each pattern usually has one or two explanatory diagrams, but they vary in style and usefulness. I was rather sad that the book didn’t extend the original EIP concept by showing the more complex patterns as assemblies of icons representing the simpler ones. I think there may be value in exploring this in later work.

One complaint is the difficulty of navigating within the Kindle edition, or of using it in future as a reference work. Internal references to patterns are identified by their page number in the physical book, which is of precisely zero use in the Kindle context. In addition the contents structure which is directly accessible via the Kindle menu only goes to chapter level, not to individual patterns. If you can remember which chapter a pattern is in, you can get there via the contents section or the index, but this is much more difficult than it should be. In other pattern books any internal references in the Kindle edition are hyperlinked, and I don’t understand why this has not been done here.

To add a further annoyance, the only summary listings of the patterns are presented as multiple small bitmapped graphics, so they are not easily searchable or extractable for external reference. An early hyperlinked text listing with summaries would be much more useful. Please could the publishers have a look at the Kindle versions of recent pattern books from Microsoft Press to see how this should be done?

A final moan is that the book is quite expensive! I want to get all three books in the series in Kindle format (as well as having the hardcover versions of the two earlier books, purchased before ebooks were a practical reality), and it will cost over £70. This may put less pecunious readers off, especially as there’s so much front matter that the Kindle sample ends before you get to the first real pattern. That would be a shame, as the industry needs less experienced designers to read and absorb these messages.

These practical niggles aside, this is a very good book, and I can recommend it.
