
I found your point hard to agree with, until I mentally replaced "HTML" with "bash", at which point I was enlightened as to how important context is.

Though I was hired to work with languages $X and $Y at $dayjob, many, many bugs slip through because bash is used as glue for languages $X and $Y. This was true of my last job too; the only difference is that in my current job we don't have a resident bash expert to call out issues in code review.

I've always hated bash, but it's "essential" complexity when dealing with a modern Linux system. Technically it's within our domain to change, but for various reasons time and time again bash wins out as the de-facto glue holding infrastructure together.



Exactly. There is nothing you can do about bash unless you are one of the creators of bash. In which case, you are now dealing with whatever essential complexity you are forced to deal with to create your program. The program itself is all accidental complexity under this model (the model initially outlined by the OP, which converges with the original article).

Maybe "incidental complexity" would be a better term. But the model is the same.


>You're missing the point which is context. If you're the accountant, the tax code is essential. HTML is accidental. If you're the programmer, then HTML is essential. The outcome of your page is accidental. Accidental is what you have control over. Essential is what you're forced to work with.

That's not what accidental/essential complexity mean in Computer Science.

Essential Complexity is the business logic, program structure considerations, necessary tradeoffs, and so on.

Accidental Complexity is the BS you have to put up with that's not essential to the program, but that you still need to handle. Things from manual memory management to setting up Webpack are "accidental complexity".

It's not about the language itself. Even if you're programming in bash, bash is not "essential complexity".

The complexity inherent in the task is the essential complexity (e.g. "I need to copy files from a to b, do some processing on them, and handle the case where one file is missing or a column in the file is mangled, etc.").

Bash (or whatever other tool you use) can directly help you with this essential complexity, or impose accidental complexity on top of it.

E.g.

  cp /foo/a /bar/.
for copying a from /foo to /bar has pretty much no accidental complexity. It essentially captures just what you need to do.

But as the script becomes bigger and you implement more of the business logic, shit like bash messing up pipelines when a part of a pipeline fails, or dealing with its clumsy error handling, adds accidental complexity.
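For instance, here's a hedged sketch of what that looks like in practice (the file names and the "every row needs a second CSV column" check are made up for illustration); most of the lines exist only to work around bash's defaults, not to express the copy-and-validate task itself:

  #!/usr/bin/env bash
  # Essential task: copy files from /foo to /bar, skipping missing files
  # and rejecting files with a mangled (missing) second CSV column.
  # Accidental complexity: the flags, quoting, and checks below exist only
  # because bash's defaults (keep going after errors, ignore failures
  # mid-pipeline, word-split unquoted variables) would otherwise hide failures.
  set -euo pipefail

  for f in a b c; do                      # hypothetical file names
    src="/foo/$f" dst="/bar/$f"
    if [ ! -f "$src" ]; then
      echo "missing: $src" >&2
      continue
    fi
    # awk reads the whole file, so there are no SIGPIPE/pipefail surprises
    if ! awk -F, 'NF < 2 { exit 1 }' "$src"; then
      echo "mangled column in: $src" >&2
      continue
    fi
    cp -- "$src" "$dst"
  done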


> That's not what accidental/essential complexity mean in Computer Science.

Of course. I was responding to the OP. This thread began with:

> I have different idea about essential complexity and accidental complexity. I think examples in the article are all just accidental complexity.

I was elaborating on that different idea. Everyone seems to be just reverting back to the textbook and rejecting the difference not on its merits, but simply for not matching.

But complexity is complexity. If we really want to talk about complexity and where the unavoidable part is coming from, then it's from the layer underneath also. When speaking of complexity reduction, you cannot ignore the complexity imposed by the layers beneath you.


Well,

"I have different idea about essential complexity and accidental complexity"

might mean:

(a) I think we should think of essential complexity and accidental complexity differently (legit, but can be confusing, and overloads the terms).

(b) I think essential complexity and accidental complexity mean something different (in general), and TFA got them wrong (e.g. because the parent doesn't know the traditional definitions of the terms, and thinks they're open to personal interpretation).

>Everyone seems to be just reverting back to the textbook and rejecting the difference not on its merits, but simply for not matching.

Yes, and I think those people are right. Whether the new idea has merit or not, it should use new terminology, to not obscure things. Then, we can discuss it on its own merit.

Even so, considering it on its merit alone, I don't think it has that much (more on that below), because it essentially amounts to "if you're programming in X, you have to deal with X (e.g. bash/html/etc.) and that has some complexity". Well, duh. That's true, but it's something we already know.

Whereas accidental/essential complexity in Brooks's sense is an important philosophical/logical distinction.

>But complexity is complexity. If we really want to talk about complexity and where the unavoidable part is coming from, then it's from the layer underneath also

Well, the original formulation is more useful though, because having to use bash or html is not "unavoidable". It might just be "unavoidable" because of one's employer's insistence, or something like that, but that's not a computer science concern.

Whereas essential complexity in Brooks's sense is completely unavoidable (in the logical sense).

A better and non-confusing term for what the parent describes would be, I think, "imposed complexity" or "circumstantial complexity".


The crux of Brooks's argument concerns irreducible complexity, which he calls essential, but he also qualifies it as what cannot be reduced with technology:

> programming tasks contain a core of essential/conceptual complexity that's fundamentally not amenable to attack by any potential advances in technology

Luu is arguing that most of the complexity programmers deal with is of the accidental kind. I would describe it as the "incidental to the implementation" kind:

> I've personally never worked on a non-trivial problem that isn't completely dominated by accidental complexity, making the concept of essential complexity meaningless on any problem I've worked on that's worth discussing.

The OP, @ozim, who got me thinking, reformulated essential complexity as:

> Essential complexity from how I read Fred Brooks is a domain knowledge like what an accountant does

Which I found quite ingenious. Because it's true. Complexity starts before you sit down to write your program. It's what you bring to the computer. And any non-trivial problem will be dominated by accidental complexity in a computer's implementation space, unless the computer was already programmed for accounting.

> essential complexity in Brooks's sense is completely unavoidable (in the logical sense).

The moment you interpret any part as "unavoidable" you lose an important nuance. And I was trying to illustrate how "unavoidable" is determined precisely by where you are sandwiched within the existing abstraction layers not of your making.

Even the tax code is written by legislators who are capable of controlling the accidental complexity they implement based on the essential complexity they bring to the table: the requirements they must satisfy before they leave the table.

The accidental tax code created by the legislators becomes essential domain knowledge to the accountant.

And how is this different from HTML or bash or OS programming? Or working with PayPal APIs? The authors, with avoidable choices, determine what becomes unavoidable for the consumer.

The accidental complexity of someone else is now essential complexity to you, by the definition that 1) it's unavoidable, 2) it's fundamentally not amenable to attack by any potential advances in technology, and 3) it's the bedrock upon which all of your accidental complexity lies.

So with this model, how do you reduce complexity?

We have layers upon layers of expanding specs. It's not just a computer problem, but a coding problem that also applies to tax law or any other rule making. And it's an entropy problem. Left unchecked, complexity only grows, so how do you fight entropy?

Stacking is part of the solution. Each layer shields all that is beneath it. Someone using TurboTax just sees English and buttons and fields to fill in. The user is shielded even from HTML.

The front-end coder is immersed in the essential complexities of HTML, and all of the TurboTax pages are accidental, produced in the course of satisfying the essential requirements of the page. But nevertheless, he is shielded from everything below his HTML stack.

We can already see that without these layers, the level of sophistication we've reached in our internet experience might never have been attained. The premise being, more or less, that TurboTax has never been better. Which I hate to have to admit is sort of true.

Refactoring is another part of the solution that we already do with our code. This may entail remodeling, redefining, and reexamining the problem space.

But we can also refactor our abstraction layers. jQuery was a new layer on top of JavaScript that was later refactored out by many as JavaScript matured (if you could call it that).

In closing, we can refactor code, we can refactor abstraction layers, and we can also abstract more. We need to fight complexity by deleting as much as possible and reorganizing what remains. And part of the solution is finding the minimum number of abstractions that are needed to recreate our solution space.


I've figured it out.

The minimum "accidental complexity" in a system can only be equal or greater than the "essential complexity" imported from outside the system. If the complexity could be less, then that would be reducing the essential complexity as well.

Complexity can be measured by the number of abstractions (words, terms, and expressions) needed to express the system.

So if accounting is essential complexity, then the complexity of a computer system for accounting starts at that essential complexity and goes up. The best a system can do is provide one accidental abstraction implemented for each essential abstraction that needs implementation.

All implementation beyond the naming of the functions is accidental.
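
To put that claim in code form, a hedged sketch (hypothetical function names and a made-up ledger.csv, in bash only because that's the glue discussed above): each function name maps onto one essential abstraction the accountant already has, while everything inside the bodies is accidental.

  # Hypothetical: one function per essential accounting abstraction.
  # The names are the essential part; the bodies are the accidental part.
  compute_taxable_income() {
    # parsing, column positions, tool choice: all accidental
    awk -F, '{ sum += $2 } END { print sum }' "$1"
  }

  apply_tax_rate() {
    # the rate comes from the tax code (essential, brought to the table);
    # piping through bc is accidental
    echo "$1 * 0.25" | bc -l
  }

  apply_tax_rate "$(compute_taxable_income ledger.csv)"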



