
Platform standards and loose coupling

Recently people have been talking about organizational standards for web application platforms (like Linux/Apache/Tomcat, for example, or ASP.NET for another).  Personally, I'm a big fan of the "small pieces loosely joined" concept.  Small pieces are exponentially easier to build, test, deploy, and upgrade.  Loose coupling gives flexibility and risk mitigation -- components can fail or be replaced without major impacts to the entire structure.  All of these things help us cope with schedule and product risks.  The technical tradeoff is a performance (latency) hit; for web applications, I think the industry has proven that this is usually a good tradeoff.

I guess I should be clear here that I'm interested in optimizing for effectiveness, not efficiency.  By effectiveness I mean that speed of development, quality of service, time to market, flexibility in the face of changing business conditions, and ability to adapt in general are much more important than the overall number of lines of code produced or even average function points per month.  That is, a function delivered next week is often far more valuable than ten delivered six months from now.

To do this, you need to start with the organization: architecture reflects the organization that produces it (Conway's Law).  So first you need to create an organization of loosely coupled small pieces, with a very few well-chosen defining principles that let the organization work effectively.

Each part of the organization should decide individually on things like which application server platform to use.  They're the ones with the expertise, and if there really is a best answer for a given situation they should be looking for it.  On the other hand, if there's no clear answer and there's a critical mass of experience with one platform, that one will end up being the default option.  Which is just what we want; there's no top-down control needed here.


So the only case where there's an actual need for top-down organizational platform standards is to get a critical mass of people doing the same thing, where the benefit accrues mostly from the critical mass itself, not from individual project benefits.  There's not much benefit to bulk orders of Apache/Tomcat, so if you're avoiding vendor lock-in the main reason to do this is to enable interoperation.  But that can be accomplished by picking open standards and protocols -- pick some basic, simple, straightforward standards, make sure that teams know about them and are applying them where appropriate, and they'll be able to talk to each other.  This is a risk mitigation strategy rather than an optimization strategy; in other words, with the loosely coupled strategy you know you can always get something working with a known amount of effort.  When you're tightly coupled to anything, this is no longer true -- you inherit its risks.  Tightly coupling an entire organization to an application server platform also creates a monoculture, making some things very efficient but also increasing the risk that you'll be less able to adapt to new environments.
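To make the "agree on the wire format, not the platform" idea concrete, here's a minimal sketch (all names are hypothetical, and JSON stands in for whatever open standard the teams pick): two teams interoperate through an agreed message format only, so neither ever sees -- or depends on -- the other's application server stack.

```python
import json

def serialize_order(order_id, items):
    """Team A emits orders in the agreed open format (plain JSON here).
    Team A's internal platform choice is invisible to consumers."""
    return json.dumps({"order_id": order_id, "items": items})

def total_quantity(message):
    """Team B consumes the same format. It parses the standard,
    not Team A's objects, so either side can swap platforms freely."""
    order = json.loads(message)
    return sum(item["qty"] for item in order["items"])

msg = serialize_order(42, [{"sku": "a", "qty": 2}, {"sku": "b", "qty": 3}])
print(total_quantity(msg))  # prints 5
```

The coupling here is only to the simple, open format in the middle; replacing either function's implementation (or the platform it runs on) with something else that speaks the same format changes nothing for the other side, which is exactly the risk-mitigation property described above.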

