Why Do We Keep Building Tightly Coupled Software? (codethinked.com)
21 points by luccastera on July 15, 2009 | 15 comments


Using the abstractions of loose coupling in the early stages of development tends to add more complexity than it saves, violate YAGNI, and give you the wrong abstractions later. Unless you've already built exactly that kind of software before, you'll just make a lot of mistakes.

You have to go through a refactoring process at every stage if you want to achieve a big system with loose coupling.


It is a balance. Follow DRY while you are developing, and the refactoring should occur as you develop.

Need to write something to a file? Fine; just do it. Need to do it again in another class? Time to extract the common behavior.
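For instance, a hypothetical Java sketch of that second step (all class and method names here are invented for illustration):

    import java.io.FileWriter;
    import java.io.IOException;

    // Once a second class needs to append a line to a file, the duplicated
    // code gets pulled into one shared helper instead of copied again.
    final class FileAppender {
        static void appendLine(String path, String line) throws IOException {
            try (FileWriter w = new FileWriter(path, true)) { // true = append mode
                w.write(line + System.lineSeparator());
            }
        }
    }

    class OrderService {
        void record(String order) throws IOException {
            FileAppender.appendLine("orders.log", order);
        }
    }

    class AuditService {
        void record(String event) throws IOException {
            FileAppender.appendLine("audit.log", event);
        }
    }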

You should also have an idea of which components will likely need this kind of abstracted behavior, and preempt the wasted time of writing something twice by just doing it right the first time.

All of this comes with experience, and the best way to gain experience is by doing it. Avoiding loose coupling on the grounds that it adds too much complexity is a great way to introduce tons of code smells and maintainability/extensibility issues.


Furthermore, wouldn't you rather achieve a small system?


Because it's cheaper, more efficient, gives you more control between components, may keep your code base smaller, lets your engineers actually get something done, etc. Pretty much all the important stuff except long-term maintenance. Loose coupling, like anything else, is a trade-off you need to weigh, not some absolutely right solution.


This is an interesting problem and as someone who's never been much good at writing big systems, I've given it some thought.

A system like Erlang helps here, I think, not because it is functional (and I say this as a functional programming advocate), but because it has a different metaphor for partitioning programs at the medium to large scale: processes.

With a process, you have to think about where you draw the "error boundaries". When a process fails, it fails and it is either restarted, or the calling process also fails and propagates the problem, or your code might simply report that the subsystem failed.

In the OO world, the main metaphor for decomposition of a problem is the class.

There are two pitfalls here, as I see it:

* The class metaphor is so uniform that it becomes difficult to decide where to draw the "error boundaries". This leads me (and others too, I suspect) to blur the small-, medium- and large-scale interactions. Think of code that tries to deal with weird exceptions and interactions from a subsystem where, if that subsystem were a failing process (as in Erlang), it would simply have died and your code would have had to restart the process or give up.

* It's easy to share state. I am not yet sold on the idea that all shared state is evil (since I think there are problems that are easier to deal with when using shared memory), but I do think that the sharing of object references between different subsystems, in general, makes it much harder to have decoupled code.

It is possible to write OO code such that some classes are the equivalents of processes and where interactions with instances of those classes are treated as such (i.e. any exception coming from such an instance means that the instance must be terminated).

This need not entail too much extra work, and the benefit is that one doesn't need to limit communication between "processes" to simple data structures. And since the code uses the normal calling conventions, a debugger will work fine.
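A minimal sketch of that idea (hypothetical Java, names invented): a wrapper that treats any exception escaping an instance as fatal to it and replaces the instance, the way a supervisor restarts a crashed Erlang process:

    import java.util.function.Function;
    import java.util.function.Supplier;

    // Treat an object like a process: any exception that escapes it "kills"
    // the current instance, and a fresh one is built for the next caller.
    final class ProcessLike<T> {
        private final Supplier<T> restart; // how to build a replacement
        private T instance;

        ProcessLike(Supplier<T> restart) {
            this.restart = restart;
            this.instance = restart.get();
        }

        <R> R call(Function<T, R> action) {
            try {
                return action.apply(instance);
            } catch (RuntimeException e) {
                instance = restart.get(); // terminate and restart, Erlang-style
                throw e;                  // propagate so the caller can decide
            }
        }
    }

    // Usage: new ProcessLike<>(Parser::new).call(p -> p.parse(input));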

I do agree with everything that's been said here about refactoring, though: for my part, I typically can't see at the start of a project how things should be decomposed.


> It's easy to share state. I am not yet sold on the idea that all shared state is evil (since I think there are problems that are easier to deal with when using shared memory)

There absolutely are problems that are easier to deal with using shared memory. Clojure takes a middle ground here: it recognizes that shared state is sometimes necessary, but that it is often overused, mainly because most languages make mutability easier than immutability.

Just as you mention about Erlang, "With a process, you have to think about where you draw the 'error boundaries'", in Clojure you can paraphrase that as 'With refs, you have to think about where you use shared state'.

To create a local, immutable variable, you would write

    (let [x 42] ...)
If you wanted to make that mutable, it would be

    (let [x (ref 42)] ...)
You have to explicitly say the variable is mutable. This change in defaults alone goes a long way in making Clojure a more correct and loosely coupled language.
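And the mutation itself is just as explicit; sketching from memory, reads go through deref and writes through a transaction:

    (let [x (ref 42)]
      (dosync (alter x inc)) ; writes must happen inside a transaction
      @x)                    ; deref to read => 43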


Indeed. Shared state is absolutely necessary sometimes, it's just not a good default.

Exactly what you said about Clojure applies to OCaml as well, except the syntax in that case is

   let x = 42 in ...
and

   let x = ref 42 in ...
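with reads and writes made explicit in the same way:

   let x = ref 42 in
   x := !x + 1;  (* := writes the ref cell, ! reads it *)
   !x            (* => 43 *)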


Unless I've missed something, this seems to be about loose coupling in Object Land, not loose coupling in software land generally. Some sentences seem more or less taken out of context from a Haskell book (pure vs. IO); others seem taken from the OTP Design Principles.

More annoying is the fact that there is a code example of how not to do it, but nowhere an example of how to do it. I would have liked the article to give some indication of how the author thinks loose coupling should be done.


Yeah, to me this article basically sounds like some guy heard you shouldn't tightly couple your software and wrote a blog post about it to sound smart. That may not be the case, but giving an example of how not to do it and then NOT giving an example of how to do it just seems silly to me.


I agree. To say loosely coupled software is better is just restating the Law of Demeter. The tricky part is figuring out where the interfaces should actually go. Every abstraction comes with overhead, so it only makes sense if it eliminates some emergent complexity from the whole system. Given the choice between the copy-and-paste code monkey and the architecture astronaut, I guess I'd choose the latter, but I'm pretty sure I'd rather work with someone who is smart and gets things done.

I think it's all too easy to look at old code with today's perspective and assume the original programmers were idiots, without considering what the code base looked like at the time or what requirements and time constraints they were given.


Most of his posts are similar in nature, imho.


I have been the sole developer working on an enterprise web project (JBoss/Spring/Hibernate) with extreme decoupling: 400k LOC and 48K of XML tying it together, rampant OOP and patterns du jour. First problem: too much code. Second problem: difficult to debug, because the flow of control often stopped at an interface. If the original developers had decoupled only as needed, the code would have been 50% smaller; more if done in something like Python, maybe 80% less.


> difficult to debug, because the flow of control often stopped at an interface

Not sure I understand how this is a problem - you should always have the concrete class's name in any stack traces you get, either from exceptions or by attaching a debugger. Are you attaching a debugger?

Not only that, but if you have your application's logging based around logger name = class name, you'll always know exactly which classes/objects are in control.
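E.g., the usual logger-per-class idiom (an slf4j-style sketch; OrderService and OrderServiceImpl are made-up names):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    interface OrderService { void place(String order); }

    class OrderServiceImpl implements OrderService {
        // Logger named after the concrete class, so every log line tells
        // you which implementation behind the interface actually ran.
        private static final Logger log =
            LoggerFactory.getLogger(OrderServiceImpl.class);

        public void place(String order) {
            log.debug("placing order {}", order); // slf4j parameterized logging
            // ...
        }
    }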

Also, if you have the "XML tying it together" how can anything hide behind the interface?


I liked one of the comments on the OP, "Tight coupling is often the proof you are not unit testing your application".
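Which rings true. A hypothetical sketch of the pressure a unit test applies (TimeSource is an invented interface standing in for any hard-wired dependency): the moment you try to test the class, the dependency has to become injected.

    // TimeSource stands in for any hard-wired dependency
    // (system clock, file system, database connection, ...).
    interface TimeSource { long now(); }

    class ReceiptService {
        private final TimeSource time; // injected, so a test can fake it

        ReceiptService(TimeSource time) { this.time = time; }

        String stamp(String receipt) {
            return receipt + " @" + time.now();
        }
    }

    // In a test you pass a canned time source instead of the system one:
    //   ReceiptService s = new ReceiptService(() -> 1234L);
    //   assert s.stamp("r1").equals("r1 @1234");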


YAGNI. Seriously... the curse of premature optimization is a deadly one.



