I started my first dev job in early 1997, which is more like 25 than 30 years ago, but I think the milieu was similar.
The internet was mostly irrelevant to the line of work I was involved in, although it was starting to have an impact. We had one ISDN 2x line for the entire office. It was set up to open on demand and time out a few minutes later, as it was billed by the minute.
I worked on an OpenGL desktop application for geoscience data visualization running on IRIX and Solaris workstations.
The work-life balance was great, as the hardware limitations prevented any work from home. Once out of the office, I was able to go back to my family and my hobbies.
Processes were much lighter, with far less security paranoia, as cyber attacks weren't a thing. The biggest IT risk was someone infecting a computer with a virus from the disk they brought in to install Doom shareware.
The small company I worked for did not have an army of product managers, project managers, or any similar buffoonery. The geologists told us developers what they needed, we built it, and we asked if they liked the UI. If they didn't, we'd tweak it and run it by them again until they did.
In terms of software design, OO and Gang of Four Patterns ruled the day. Everyone had that book on their desks to accompany their copies of Effective C++ and More Effective C++. We took the GoF a little too seriously.
Compensation was worse for me, though some of that is a function of my now being much more advanced in my career. These days I make about 10x what I made then (not adjusted for inflation). That said, I led a happier life then. Not without anxiety, to which I'm very prone, but happier.
Effective C++ was an amazing book. I bought copies for the entire team out of my own pocket. The Gang of Four, on the other hand, was an unfortunate turn for the industry. As you say, we took it too seriously. In practice, very few projects can benefit from the Factory pattern, but I've seen it used in way too many projects, to the detriment of readability. I worked in one codebase where you had to invoke 4 different factories, spread across many different source files, just to allocate one object.
The real problem is that many people didn't actually read the book or, if they did, they only took part of it seriously.
Each pattern chapter has a pretty long section that details when you should and should not use the pattern. The authors are very clear about understanding the context and not mis-applying patterns.
But once it became popular (which happened because these patterns are quite useful), it got cargo culted and people started over-applying them because it sent a social signal that, "Hey, I must be a good developer because I know all these patterns."
The software engineering world is a much better one today because of that book, now that the pendulum has swung back some from the overshoot.
It's amazing how many times I saw the Singleton pattern between 2000 and 2012 or so, and in almost every case, it degenerated into a global variable that was used by everything in a component or system.
It would have been more apt to name it the Simpleton pattern, after most of its practitioners.
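For anyone who hasn't run into it, here's a minimal C++ sketch of the classic form (the class name is mine, purely for illustration) - note how little separates it from a plain global:

    // Classic Singleton: one lazily-constructed instance, reachable from anywhere.
    class Logger {
    public:
        static Logger& instance() {
            static Logger the_one;  // constructed on first use, lives forever
            return the_one;
        }
        void log(const char* /*msg*/) { /* write the message somewhere */ }
    private:
        Logger() = default;                        // nobody else can construct one
        Logger(const Logger&) = delete;            // or copy it
        Logger& operator=(const Logger&) = delete;
    };

    // Any code anywhere can call Logger::instance().log("...") -- which is
    // exactly how it degenerates into a global variable with extra steps.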
This stuff started to go away with modern DI frameworks. In fact, I don't really see much of the GoF patterns anymore, particularly the ones for managing the order of instantiation of objects. Everything in the C# world has been abstracted/libraried/APIed away. But I wouldn't be surprised if GoF patterns are still prevalent in C/C++/Smalltalk code.
I am a newly employed engineer, and as of today I am assigned to learn design patterns (the book and all). Needless to say, I am very intrigued. Could you expand on what you mean by taking it too far, beyond over-applying the patterns?
Don't listen too closely to the issues people have with GoF and Design Patterns. Yes, they have their problems, and over-reliance on them is one of them. However, as a junior engineer you should learn these things and come to this realization (or not) yourself! Also, if your company does use these things, code standardization across the org is more important than the downfalls of overly complex patterns.
I read these arguments instead of focusing on learning patterns and it just caused grief until I decided to learn the patterns.
> However as a junior engineer you should learn these things and come to this realization (or not) yourself!
This is such great advice. I wonder if there's even something in there about how learning through experience why something doesn't work is as useful as learning it in the first place.
I do think we throw the baby out with the bathwater with many ideas, despite the useful skills we pick up from them subconsciously.
Yes, I intend to learn and integrate design patterns and OO programming in general so that I can gain some confidence, and maybe later I can finally understand why my software development professors hated this so much and taught us Haskell and Clojure instead :-)
If you're anything like me, given enough time you wonder if all the 'fluff' of OO is necessary as you seem to be writing code to satisfy the programming style rather than the domain problem. You'll then try FP and find it has its own pitfalls - especially around how complex the code can get if you have a load of smart Haskell engineers - and suddenly they have the same problems ('fluff'). Apparently at some point I'll have a similar move to Lisp and discover how complex the code base can be with multiple engineers (I'm 8 or 9 years into this engineering journey myself!).
My current goal with software is to write it as simply as possible, so that a junior developer with 6 months of experience could read my code and know how to modify it.
Agree with the GP. In a sense even if design patterns are not such a great idea, there's so much code written with those patterns in mind (and classes/variables named accordingly) that it's beneficial to understand at least briefly what the names mean.
(That said, quoting Wikipedia, which I agree with also: "A primary criticism of Design Patterns is that its patterns are simply workarounds for missing features in C++". In particular, these days with more modern languages [and also the modernization of C++] some of the workarounds aren't that important any more)
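For a concrete (hedged) sketch of that modernization: in C++17, a closed set of types that once called for a full class hierarchy plus a Visitor can be a std::variant, with std::visit doing the dispatch. The shape types here are invented for illustration:

    #include <iostream>
    #include <variant>

    struct Circle { double radius; };
    struct Square { double side; };

    // The "sum type" that older C++ had to fake with a base class hierarchy.
    using Shape = std::variant<Circle, Square>;

    struct AreaVisitor {
        double operator()(const Circle& c) const { return 3.14159265 * c.radius * c.radius; }
        double operator()(const Square& s) const { return s.side * s.side; }
    };

    int main() {
        Shape s = Circle{2.0};
        // std::visit dispatches on the runtime alternative -- no virtual
        // functions, no accept() methods, and the compiler complains if a
        // case is missing.
        std::cout << std::visit(AreaVisitor{}, s) << '\n';  // ~12.566
    }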
As for why your professors prefer Haskell and Clojure... for some reason functional programming aligns with the way the stereotypical academia type person thinks. In practice, you should be using the best tool for the task, and learning various aspects of software engineering (as opposed to taking a side) should help you in the long run.
What often happens is you never get to "code standardization across the org" because technology changes too fast. But you still have to deal with overly complex patterns, varying and mis-applied patterns, etc.
Oh I 100% agree but as a junior engineer you're not going to be able to change that, if you can change it you probably won't change it for the better, and using HN comments to fuel debates over long-standing patterns will just cause resentment. These are totally valid opinions to have once you find them out for yourself, IMO.
Design patterns do a good job of capturing some design decisions you'll need to make in your career. They represent a level of architectural knowledge that is often poorly captured and communicated in our industry (slightly above language features, and below whole frameworks). Many people treated the book (either in good-faith over-enthusiasm, or as a bad-faith strawman) as a repository of 'good' code to cut and paste. Some of those people would end up working in a functional language and claiming they don't need design patterns because they're the same as having first-class functions. This is just mistaking the implementation for the design motivation. And even then, tough luck, however clever your language. Monads? A design pattern.
So, I will stress again: design patterns represent decisions you can make about your code, in a particular context. If you want to be able to supply or accept a different algorithm to make a decision, that's the strategy pattern. Maybe it's an object with an interface, maybe it's a function callback. The important bit is you decided to let users of your API supply their own policy to make a decision (instead of you just asking for a massive dictionary called 'options' or whatever). If you want to ensure that all calls to one subsystem happen to a single instance, that's the singleton pattern. Whether you enforce that with static calls or in your inversion of control container, you're still making the decision to instantiate it once and not have every callee set up its own version taking up its own resources (or maybe you are, it's a decision after all).
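As a small sketch of that strategy example (names are mine, nothing canonical): in modern C++ the "supplied policy" can be as light as a std::function, and the pattern is the decision, not the class diagram:

    #include <algorithm>
    #include <functional>
    #include <vector>

    // Strategy pattern, minimal form: the caller decides the ordering policy,
    // and this function just applies whatever it's handed.
    void sort_with(std::vector<int>& values,
                   const std::function<bool(int, int)>& comes_before) {
        std::sort(values.begin(), values.end(), comes_before);
    }

    // Usage -- the decision lives with the caller, not inside sort_with:
    //   sort_with(v, [](int a, int b) { return a > b; });  // descending
    //   sort_with(v, std::less<int>{});                    // ascending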
I get somewhat agitated about this stuff, because people are raised to be very skeptical of design patterns. This means they go through their career building up all sorts of design knowledge, but rarely naming and sharing it at a useful level of granularity. That's a huge waste! But the saddest thing about the whole conversation is that the decision _not_ to use a particular design pattern is just as valid as the one to use it, and _even then_ knowing the pattern is superior, because you can be explicit about what you're not doing in your code and why.
> working in a functional language and claiming they don't need design patterns because they're the same as having first class functions
The point really was that you need different design patterns in a functional language, and most of the GoF design patterns are useless in a functional language, as they either deal with state, or they deal with something that has a better solution in a functional language (e.g. through algebraic datatypes, which were built in).
So if you amend "we don't need design patterns" to "we don't need most of the GoF design patterns", it's actually a true statement.
> Monads? A design pattern.
Exactly.
And now the pendulum has swung back, and instead of providing primitive language features that would make using the Monad design pattern easy, we have half-assed async/await implementations in lots of imperative languages, just because people didn't realize async/await is just a particular use of the Monad design pattern.
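To make the "Monad as a design pattern" half of that concrete (a sketch only; parse and checked_sqrt are made-up helpers): the bind operation sequences steps that may fail, which is the same shape in which async/await sequences steps that may be pending:

    #include <cmath>
    #include <optional>
    #include <string>

    // Monadic bind for optional: run the next step only if the previous one
    // produced a value. (C++23 ships this as std::optional::and_then; it's a
    // free function here to keep the sketch self-contained.)
    template <typename T, typename F>
    auto and_then(const std::optional<T>& o, F f) -> decltype(f(*o)) {
        if (o) return f(*o);
        return std::nullopt;
    }

    std::optional<int> parse(const std::string& s) {
        try { return std::stoi(s); } catch (...) { return std::nullopt; }
    }

    std::optional<double> checked_sqrt(int n) {
        if (n < 0) return std::nullopt;
        return std::sqrt(n);
    }

    // The chain short-circuits on the first failure, just as an awaited chain
    // pauses at the first unresolved step; do-notation and async/await are
    // both sugar over exactly this kind of bind.
    std::optional<double> pipeline(const std::string& s) {
        return and_then(parse(s), checked_sqrt);
    }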
> This means they go through their career building up all sorts of design knowledge, but rarely naming and sharing it at a useful level of granularity.
Which is really sad, because the GoF book really emphasized this point.
But for some reason programmers seem to have a desire to turn everything into some kind of cult...
The patterns movement, for more context, arose out of the work of the architect Christopher Alexander, who explored the concept of patterns in buildings like “courtyards” or “bay windows”. The problem with the GoF ones, as they’ve been applied to software, is an overemphasis on applying them for their own sake rather than fitting the problem at hand - imagine if every window in a house was a bay window. There are a lot of software projects that end up like that and turn people off OOP in general.
Absolutely learn the patterns. You will encounter places to use them. The book isn't an instruction manual on how to write software. It is a "here are some useful patterns that occur occasionally during development". Having a common language to talk about such things is useful.
It is very easy to over-apply patterns at the cost of readability and maintainability. Realize that your code is more likely to be rewritten before the flexibility provided by your application of patterns is ever used.
Also good to know these patterns if you're dealing with a lot of legacy code, particularly 199x - 201x code that implemented a lot of these patterns.
Some of them are straightforward (factory, adapter). Some of them are almost never used (flyweight). Some are more particular to a programming language; you might see a lot of visitor patterns in C++ code, for instance, but IIRC, that wouldn't come up in Smalltalk because it supported paradigms like double dispatch out of the box.
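For context, here's a bare-bones sketch of what that looks like in C++ (node names invented for illustration) - the accept/visit pair hand-rolls the double dispatch the language lacks:

    struct Literal;
    struct Add;

    // First dispatch: the node's virtual accept() resolves its concrete type.
    // Second dispatch: the visitor's overloaded visit() resolves the operation.
    struct Visitor {
        virtual void visit(const Literal&) = 0;
        virtual void visit(const Add&) = 0;
        virtual ~Visitor() = default;
    };

    struct Node {
        virtual void accept(Visitor&) const = 0;
        virtual ~Node() = default;
    };

    struct Literal : Node {
        int value = 0;
        void accept(Visitor& v) const override { v.visit(*this); }
    };

    struct Add : Node {
        const Node* lhs = nullptr;
        const Node* rhs = nullptr;
        void accept(Visitor& v) const override { v.visit(*this); }
    };

    // New operations (printing, evaluating, type-checking...) become new
    // Visitor subclasses; the node classes never have to change again.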
I will comment: when they came out, the idea of patterns captivated a lot of us, and it became an article of faith that all code would be designed around a particular pattern.
It's still a great idea to use patterns... but I think people have come to realise that sometimes they over complicate things and maybe they don't always fit the task at hand. If that's what you are finding then maybe don't use a pattern and just code something up.
They are a useful and powerful tool, but not a panacea.
It's that: over-applying patterns in the areas of the code that are unlikely to need the extensibility they afford. The cost of using the patterns is that they add a level of indirection, which later costs you and others some extra cognitive load. By and large, though, the GoF patterns are relevant today, and when applied judiciously they do help to organize your code.
Have these OO patterns become less relevant, or is it just that they were absolutely standard 20-30 years ago, so they seem old and less relevant only relative to their previous dominance?
A lot of it is that modern frameworks include a lot of the behaviors that required you to manually code the patterns back then. E.g., I can create an ObservableCollection in C# with a single line of code, but in 1996 C++ I'd have had to go to the trouble of building out an Observer pattern, and it still wouldn't have all the features that IObservable does.
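Roughly what that hand-rolled version looks like (a minimal sketch, modernized with std::function for brevity - the 1996 original would have used an abstract Observer base class instead):

    #include <functional>
    #include <utility>
    #include <vector>

    // The Observer pattern by hand: the subject keeps a list of callbacks and
    // notifies every one of them on each change. No unsubscription, ordering,
    // or re-entrancy handling -- frameworks give you all of that for free now.
    template <typename T>
    class ObservableValue {
    public:
        using Observer = std::function<void(const T&)>;

        void subscribe(Observer o) { observers_.push_back(std::move(o)); }

        void set(T value) {
            value_ = std::move(value);
            for (const auto& o : observers_) o(value_);  // notify everyone
        }

    private:
        T value_{};
        std::vector<Observer> observers_;
    };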
HeyLaughingBoy is right about patterns being built into frameworks we use today (can’t reply to that comment because it’s nested too deeply).
Rails is an example. I’ve seen a number of talks and articles by DHH that emphasize patterns and talking with people who wrote patterns. Rails built those in (like “model view controller”).
Libraries and frameworks weren't widely available 30 years ago, certainly not for free. The patterns are still useful; it's just that a library or framework is often more efficient than reimplementing a pattern from scratch.
Definitely yes. I'd even go as far as to say they're exemplary, meaning "well-written code can look (for example) like this". But since they became industry standards, they help your code be understood (either by other people, or by you when you eventually forget how/why you wrote it that way), which speeds up code review and makes it easier to maintain and refactor old code.
> The internet was mostly irrelevant to the line of work I was involved in, although it was starting to have an impact. We had one ISDN 2x line for the entire office. It was set up to open on demand and time out a few minutes later, as it was billed by the minute.
An early gig I had in '97 was building an internal corporate intranet for a prototyping shop. There were around 50-60 folks there - probably 20 "upstairs" doing the office/business work. I was upstairs. I was instructed to build this in FrontPage. I didn't want to (I was already doing some decent PHP on the side) but... hey... the IT guy knew best.
I asked for some books on FrontPage. Nope - denied. So I spent time surfing through a lot of MS docs (they seemingly had a moderate amount of online docs for FrontPage) and a lot of newsgroups. After a while I was pulled aside and told I was using too much bandwidth. The entire building had - as you had - a double ISDN line - a whopping 128k shared between 20+ people. I was using "too much" and this was deemed "wrong". I pointed out that they had decided on the tool, which wasn't a great fit for the task, and then refused to provide any support (books/etc). I left soon after. They were looking for a way to get me out - I think they realized an intranet wasn't really something they could pull off (certainly not in FrontPage) but didn't want to "fire" me specifically, as that wasn't a good look. I was there all of... 3 months, IIRC. It felt like an eternity.
Working in software in the 90s - a bookstore with good tech books became invaluable, as well as newsgroups. No google, no stackoverflow, often very slow internet, or... none sometimes.
And... there were far fewer distractions. Waiting for a compile? Maybe you could play solitaire, but there were fewer rabbit holes to go down because... you likely weren't on a network. Even by the mid-90s, you weren't easily 2 seconds away from any distraction you wanted, even if you were "online".
Similarly I started in 2000, just as internet applications were starting to become a thing. I could list a whole load of defunct tech stacks: WebSphere, iPlanet, NSAPI, Zeus Web Server (where I worked for about a year), Apache mod_perl, Delphi etc. And the undead tech stack: MFC.
Compensation: well, this is the UK, so it's never been anywhere near US levels, but it was certainly competitive with other white-collar jobs, and there was a huge spike briefly around 2001 until the dotcom bubble burst and a whole load of us were laid off.
Tooling: well, in the late 90s I got a copy of Visual Studio. I still have Visual Studio open today. It's still a slow but effective monolith.
The big difference is version control: not only no git, but no svn. I did my undergraduate work in CVS, and was briefly exposed to SourceSafe (in the way that one is exposed to a toxin).
Most of the computers we used back in 2000 were less powerful than an RPi4. All available computers 30 years ago would be outclassed by a Pi, and the "supercomputers" of that day would be outclassed by a single modern GPU. This... makes less difference than you'd expect to interactive application performance, unless you're rendering 3D worlds.
We ran a university-wide proto-social-network (vaguely similar to today's "cohost") off a Pentium with a 100MB hard disk that would be outclassed by a low-end Android phone.
Another non-obvious difference: LCD monitors weren't really a thing until about 2000. I was the first person I knew to get one, and it made a difference in reducing eyestrain, even if, at 800x600 on a 14-inch panel, it was a slight downgrade from the CRT I had on my desk.
I kept buying used higher-end CRTs for almost a decade because their refresh rate and resolution so greatly outstripped anything LCD that was available for sale.
My parents got an early 2002 LCD display... I never knew what I lost... by not gaming on a CRT. Low res too... sad. All for what, space and "environment"?
I went to a PC expo of some sort in NYC in 1999 because I was in town for an interview. LCDs had just come out, but every exhibit in the hall had them - partly because they were new, but also because you could ship a whole bunch of flat screens for the same weight as a single decent CRT.
I was working at an internet startup in 1996. We basically built custom sites for companies.
It’s hard now to appreciate how “out there” the internet was at the time. One of the founders with a sales background would meet with CEOs to convince them they needed a website. Most of those meetings ended with a, “We think this internet web thing is a fad, but thanks for your time”.
It's interesting to consider this viewpoint 30 years later and wonder what will bring about the next age. Is it something in its infancy being dismissed as a fad? Have we even thought of it yet?
"The Metaverse" is still fundamentally the same as other content/experience the same way 3D movies are fundamentally the same as 2D movies.
Being able to see a projection of someone in the chair next to you does not really deepen or hasten the sharing of ideas in any drastic way, compared to the leap from pre-internet to post-internet communication.
If I had to guess, my suspicion is that direct brain-to-brain communication is the next epoch-defining development.
> I started my first dev job in early 1997, which is more like 25 than 30 years ago, but I think the milieu was similar.
My first internship was in 2000, and I feel like, overall, not a lot has changed except the deck chairs. Things still change just as fast as back then.