Where else are you going to learn that the system is your enemy and the people around you are your friends? I feel like that was a valuable thing to have learned and as a child I didn't really have anywhere else to learn it.
One does not simply walk into RPC country. Communication modes are architectural decisions, and they flavor everything. There's as much difference between IPC and RPC as there is between popping open a chat window to ask a question, and writing a letter on paper and mailing it. In both cases you can pretend they're equivalent, and it will work after a fashion, but your local communication will be vastly more inefficient and bogged down in minutiae, and your remote comms will be plagued with odd and hard-to-diagnose bottlenecks and failures.
Some generalities:
Function call: The developer just calls it. It blocks until completion; errors are due to bad parameters or a resource availability problem, and are handled with exceptions or return-code checks. Tests are also simple function calls. Operationally everything is, to borrow a phrase from aviation regarding non-retractable landing gear, "down and welded".
IPC: Architecturally, and as a developer, you start worrying about your function as a resource. Is the IPC recipient running? It's possible it's not; that's probably treated as fatal and your code just returns an error to its caller. You're more likely to have an m:n pairing between caller and callee instances, so requests will go into a queue. Your code may still block, but with a timeout, which is treated as a fatal error. Or you might treat it as a co-routine, with the extra headaches of deferred errors. You probably won't do retries. Testing has some more headaches, with IPC resource initialization and tear-down. You'll have to test queue failures. Operations is also a bit more involved, with an additional resource that needs to be babysat and coordinated with multiple consumers.
RPC: All the IPC headaches, but now you need to worry about lost messages, and about messages that were processed but whose acknowledgements were lost. Temporary failures need to be faced and retried. You will need to think in terms of "best effort", and continually make decisions about how that is managed. You'll be dealing with issues such as at-least-once delivery vs. at-most-once. Consistency issues will need to be addressed much more than with IPC, and they will be thornier problems. Resource availability awareness will seep into everything; application-level back-pressure measures _should_ be built in. Treating RPC as simple blocking calls will be a continual temptation; if you or less-enlightened team members succumb, then you'll have all kinds of flaky issues. Emergent, system-wide behavior will rear its ugly head, and it will involve counter-intuitive interactions (such as bigger buffers reducing throughput). Testing now involves three non-trivial parts--your code, the called code, and the communications mechanisms. Operations gets to play with all kinds of fun toys to deploy, monitor, and balance usage.
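To make the contrast concrete, here's roughly what even the simplest "RPC-aware" call site ends up looking like. This is a minimal sketch only: `remote_call`, the transient-error exception, and the timeout/retry budget are all made up for illustration, not any particular framework's API.

```python
import random
import time

class TransientRPCError(Exception):
    """A timeout, dropped connection, or lost ack -- anything worth retrying."""

def call_with_retries(remote_call, *, attempts=4, base_delay=0.1, timeout=2.0):
    """Best-effort wrapper around a remote call: bounded retries with backoff.

    Retrying a request whose acknowledgement was lost means the server may
    execute it twice, so the operation must be idempotent (or deduplicated
    server-side) for this at-least-once behavior to be safe.
    """
    for attempt in range(attempts):
        try:
            return remote_call(timeout=timeout)
        except TransientRPCError:
            if attempt == attempts - 1:
                raise  # retry budget exhausted: surface the failure to the caller
            # Exponential backoff with jitter so a flaky dependency isn't stampeded.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

Even this toy version forces the at-least-once vs. at-most-once decision above: the retry loop is only safe if the remote side can tolerate duplicates.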
It has 3 ways to declare functions, multiple variations on arrow function syntax, a weird prototypal inheritance system, objects you can create by calling "new" on functions, object literals that can act as pseudo-classes, classes, decorators, for-i loops + map + filter + for-in loops (with hasOwn) + forEach, async / await + promises and an invisible but always-on event loop, object proxies, counter-intuitive array and mapping manipulations, lots of different ways to create said arrays and mappings, very rich destructuring, so many weirdnesses in parameter handling, multiple ways to do imports that don't work in all contexts, exports, string concatenation + string interpolation, no integer type (but NaN), a "strict mode", two versions of comparison operators, a dangerous "with" keyword, undefined vs null, generators, sparse arrays, sets...
It also has complex rules for:
- scoping (plus global variables by default and hoisting)
- "this" values (and manual binding)
- type coercion (destroying commutativity!)
- automatic semicolon insertion
- "typeof" resolution
On top of that, you execute it in various different implementations and contexts: several browser engines and nodejs at least, with or without the DOM, in or out of web workers, and potentially with WASM.
There are various versions of the ECMA standard that change the features you have access to, unless you use a transpiler. And that's without even touching the ecosystem, since this is about the language. There would be too much to say anyway.
There are only two reasons to believe JS is simple: you know too much about it, or you don't know enough.
A problem here is that ES (and Solr too) are pathological with respect to garbage collection.
To make generational GC efficient, you want to have very short lived objects, or objects that live forever. Lots of moderately long-lived objects is the worst case scenario, as it causes permanent fragmentation of the old GC generation.
Lucene generally churns through a lot of strings while indexing, but it also builds a lot of caches that live on-heap for a few minutes. Because the minor GCs come fast and furious due to indexing, you end up with caches that last just long enough to get promoted into the old generation, only to become useless shortly thereafter.
The end result looks like a slow burning memory leak. I've seen the worst cases take down servers every hour or two, but this can accumulate over time on a slower fuse as well.
Yes, in that there are DB technologies that aren't built around storing records in some sorted order. No, in that they are very much not common technologies. Most databases, relational, non-relational, etc., have some form of a B-tree at their core somewhere.
Early executives at those companies did very well. Early employees did well, but, risk-adjusted, not really. I know people who were fairly early at those companies and they own a nice SFH in the Bay Area, but they're still working as Directors or whatever.
Consider that if you could make 400k (including liquid stock) in compensation at FAANG but you take 180k at the startup, you're basically betting 220k a year on the company. Except unlike any other company you bet 220k on, you won't get a board seat, you won't get access to key metrics, and your influence will be dominated by the "real" investors' influence.
If your NW is less than 10M, which presumably it is, anyone who's even heard of the words "Kelly criterion" would tell you you're nuts for betting 220k a year on one startup. And yet, you get treated like "an employee" and not like "an investor" for taking that insane risk.
So YC has invested in 5000 companies, and you can name 3 that had top-notch outcomes; that's a 0.06% success rate - and you had to work like a dog to realize it! And that money was locked up. Those same early employees could have taken that $220k/year, put it into Bitcoin or Apple stock, and retired off that. And Bitcoin and Apple were much easier "picks" than any given startup.
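Back-of-the-envelope, using only the hypothetical numbers from this thread (400k vs. 180k comp, an assumed standard 4-year vest, 3 big outcomes out of ~5000 YC companies):

```python
faang_comp = 400_000       # hypothetical figures from the comment above
startup_comp = 180_000
years = 4                  # assumed vesting horizon

annual_bet = faang_comp - startup_comp   # 220,000 "invested" per year
total_bet = annual_bet * years           # 880,000 over a standard vest

# For the equity to merely break even, the vested stake has to be worth the
# forgone pay -- before taxes, dilution, and liquidation preferences.
print(f"forgone compensation: ${total_bet:,}")

# The hit-rate arithmetic: 3 named top-notch outcomes out of ~5000 companies.
print(f"hit rate: {3 / 5000:.2%}")       # 0.06%
```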
The math simply does not add up and the whole system runs on mystique and naivety. And I've worked at startups that gave me a hard time about asking about outstanding shares, about asking about the cap table, about asking about liquidation preferences. This is _critical_ information before you invest a significant portion of your life and net worth in a company, yet they're guarded about it - and the fact that they don't fall over themselves to explain every part of it should raise the ultimate alarm bells.
There's a bunch of propaganda out there ("Explaining ISOs", written by a16z) that's a smokescreen over the truth. The math does not add up.
The dream startup employee is really really good at Transformer architectures and really really bad at personal finance. Fortunately for startups, a shocking number of these people exist. But it doesn't change the fact that if sharp financiers looked at employee equity packages at startups objectively, every single one would agree it's a scam deal.
For our production PGSQL databases, we use a combination of PGTuner[0] to help estimate RAM requirements and PGHero[1] to get a live view of the running DB. Furthermore, we use ZFS with the built-in compression to save disk space. Together, these three utilities help keep our DBs running very well.
I find average leaf density to be the best metric of them all. Most btree indexes with default settings (fill factor 90%) will converge to 67.5% leaf density over time. So anything below that is bloated and a candidate for reindexing.
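For what it's worth, here's one way to pull that number and apply the rule of thumb above - a sketch assuming the pgstattuple extension is installed, with a placeholder connection string and index name:

```python
import psycopg2  # assumes the pgstattuple extension is available in the database

# Rough reindexing check: a B-tree with the default fill factor tends to settle
# around ~67.5% average leaf density, so values well below that suggest bloat.
conn = psycopg2.connect("dbname=mydb")            # placeholder connection string
with conn, conn.cursor() as cur:
    cur.execute("SELECT avg_leaf_density FROM pgstatindex(%s)",
                ("my_table_pkey",))               # placeholder index name
    (density,) = cur.fetchone()
    if density < 67.5:
        print(f"avg_leaf_density={density:.1f}%: candidate for REINDEX CONCURRENTLY")
    else:
        print(f"avg_leaf_density={density:.1f}%: looks fine")
```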
It does support multiple pages but you can use just one.
It has a nifty feature in that you can divide the single file into virtual parts. They just have alternate backgrounds to tell them apart. And each virtual part can have a type for syntax highlighting (plain text, markdown or a programming language).
I've been using it for a few months now and it's my primary note taking / knowledge recording thing.
Even though it's web based, on Chrome you can save notes on disk so it works like a desktop app.
Each note is a plain text file so you can edit them in any text editor.
If you put notes on a shared drive (Dropbox, OneDrive, Google Drive etc.) you can work on notes on multiple computers.
Used to think OKRs were total bullshit, until I started working on a project spanning multiple teams with cross-org dependencies.
OKRs are effectively a quarterly negotiation to get alignment across teams so that large, complex, projects get done on time.
They create accountability and the ability to escalate up the chain when a team you have a dependency on isn't prioritizing what you need to unblock your project.
Generally, if you think OKRs are bullshit, it means they're being applied at too granular a level, or misapplied in an organization that doesn't require strong collaboration across team or org boundaries.
It "seems" easy and obvious when you're a leaf node how a team, org, company should be run and coordinated, but when you actually try and do it, and lead, it's nowhere near as easy as it seems.
OKRs, misapplied, suck. But there has to be some way to get large groups of people to align on specific targets. Effectively, something like them has to exist.
Helping teams know what to say "no" to is the real power of OKR.
You do that by having the entire organization's Objectives and Key Results roll up and cascade down. My Objectives directly roll up to my parent organization's Key Results. My parent org's Objectives roll up to their parent's KRs, and so on to the top. Then, you have the top level check downwards whether the sets of OKRs still make sense in their entirety.
Unfortunately, I have never seen any organization I was with ever do this...
The first approach (the 'It’s "obviously" the only way to go' one) is called an adjacency list.
The second (the 'vastly simpler method') I don't recall seeing before. It has some fairly obvious deficiencies, but it is clearly enough for some cases.
The third ('namespacing') is called a materialized path.
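For anyone who hasn't seen them, here's the same toy tree under the first and third representations (the second is skipped, since it's unclear what the article meant); the column names below are illustrative only, not from the article:

```python
# The same tiny tree ("electronics > laptops", "electronics > phones")
# under two of the representations named above.

# 1. Adjacency list: each row points at its parent.
adjacency = [
    {"id": 1, "parent_id": None, "name": "electronics"},
    {"id": 2, "parent_id": 1,    "name": "laptops"},
    {"id": 3, "parent_id": 1,    "name": "phones"},
]

# 3. Materialized path ("namespacing"): each row stores its full path, so
# fetching a whole subtree is a prefix match rather than a recursive walk.
materialized = [
    {"path": "electronics",         "name": "electronics"},
    {"path": "electronics/laptops", "name": "laptops"},
    {"path": "electronics/phones",  "name": "phones"},
]

subtree = [r for r in materialized if r["path"].startswith("electronics/")]
```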
The sad thing about these "monolith vs microservice" debates is that to this day we have programming languages which favor shared mutable state, so a program written like this is an absolute hell (or a very leaky abstraction) to distribute. And it doesn't have to be like this.
Think about it. When your variable is a simple value, like a number or a string or a struct, we treat it as pass-by-copy (even if copy-on-write optimized), typically stack allocated. Remote IO is also pass-by-copy. But in-between those two levels, we have this intermediate pointer/handle hell of mutable shared state that the C family of languages promote, both procedural and OOP variety.
The original OOP definition is closer to the Actor model which has by-copy messages, but the actual OOP languages we use, like C++, Java, C# all derive philosophically from the way C handles entities on the heap, as this big shared local space you have immediate access to, and can pass around pointers to.
And that's where all our problems come from. This concept doesn't scale. Neither in terms of larger codebases, nor in terms of distributing an application. It also doesn't scale cognitively, which the article mentions, but doesn't quite address in this context.
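A tiny Python analogue of the difference being described - shared mutable state vs. passing copies the way messages and remote IO do (the variable names are made up):

```python
import copy

# Shared mutable state: two "components" hold the same object, so a change in
# one silently shows up in the other -- the pointer/handle hell described above.
config = {"retries": 3}
component_a = config
component_b = config
component_a["retries"] = 0
assert component_b["retries"] == 0   # action at a distance

# By-copy / message-passing style: the receiver gets its own copy, as with
# remote IO or Actor-style messages, so local reasoning about `config` holds.
message = copy.deepcopy(config)
message["retries"] = 5
assert config["retries"] == 0        # the sender's state is untouched
```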
"My third remark introduces you to the Buxton Index, so named after its inventor, Professor John Buxton, at the time at Warwick University. The Buxton Index of an entity, i.e. person or organization, is defined as the length of the period, measured in years, over which the entity makes its plans. For the little grocery shop around the corner it is about 1/2,for the true Christian it is infinity, and for most other entities it is in between: about 4 for the average politician who aims at his re-election, slightly more for most industries, but much less for the managers who have to write quarterly reports. The Buxton Index is an important concept because close co-operation between entities with very different Buxton Indices invariably fails and leads to moral complaints about the partner. The party with the smaller Buxton Index is accused of being superficial and short-sighted, while the party with the larger Buxton Index is accused of neglect of duty, of backing out of its responsibility, of freewheeling, etc.. In addition, each party accuses the other one of being stupid. The great advantage of the Buxton Index is that, as a simple numerical notion, it is morally neutral and lifts the difference above the plane of moral concerns. The Buxton Index is important to bear in mind when considering academic/industrial co-operation."
> Technical debt, as originally coined by Ward Cunningham, is the idea that you can gain a temporary speed boost by rushing software development, at the cost of slowing down future development.
No! Ward's blog post linked in this very paragraph [0] describes the original definition quite clearly, and it's not this.
The term was originally intended to describe the delta between a programmer's current understanding, acquired through the process of developing the software, and the accumulated implementation, which is based upon past understanding. Because writing a program is often a process of discovery and learning about the problem and solution space, it's not usually possible to write the ideal implementation up front.
It's quite clear in Ward's 2nd paragraph, where he describes the correction of this debt:
> it was important to me that we accumulate the learnings we did about the application over time by modifying the program to look as if we had known what we were doing all along and to look as if it had been easy to do in Smalltalk.
What the author is describing is the modern definition the term has acquired independently, whereby programmers knowingly take shortcuts... this is an entirely different phenomenon: intentionally writing code in a way that is less than ideal even by current understanding. I find the original definition far more interesting and insightful, but the modern one is perhaps more reflective of the common realities of the pressures of developing software in a business today.
Despite this error, I do agree with the sentiment of the article, that too much code is a liability, that features can cost more than they are worth.
Ward's definition of technical debt concerns the cohesion and correctness of the code: the idea that if you could tear it all up every day and rewrite it from scratch, it would always reflect the most up-to-date understanding, and for moving targets the most up-to-date problem, absent the weight of past misconceptions or solutions to past problems... aiming for minimal code makes achieving that state far more viable - these are complementary insights, so it's a shame the nuance of the original definition was lost.
A good QA person is basically a personification of all the edge cases of your actual production users. Our good QA person knew how human users used our app better than the dev or product team. It was generally a competition between QA & L2 support as to who actually understood the app best.
The problem with devs testing their own & other devs' code is that we test what we expect to work, in the way we expect the user to use it. This completely misses all sorts of implementation errors and edge cases.
Of course the dev tests the happy path they coded... that's what they thought users would do, and what they thought users wanted! Doesn't mean the devs were right, and frequently they are not.
There is some truth in this. In the startup world, we often tend to avoid this framing because it isn't positive - but every startup begins in the process of failing (running out of resources and ceasing to be a going concern, one way or another).
The term "runway" is often interpreted to mean: "We need to get this plane going fast enough to lift off before the end of the road". Reality is more complex, because there isn't a plane yet. You have to design and build that as you go, and if you get it wrong, you crash.
Management is in this position: without action, the company will die. With the wrong actions, the company will die. Many decisions close doors, and it's not exactly clear from the start what sequence of doors will lead to the company not dying.
If you involve the team too much in the sausage-making aspects of this, they will invariably become distracted, lose focus, and the company will die. If you leave them too much in the dark, they will lose trust, and the company will die.
It's not easy, and all of this leads to the duck nature: calm up top, while under the water the feet are paddling like crazy.
I have this comic on my wall at my desk.[1] It's about how much work goes into every detail of the everyday things we take for granted. I practice law, mostly patent litigation, and often my job entails untangling one of these stories.
I would encourage anybody interested in a professional career (in anything) to zoom out and keep in mind that almost every profession is ultimately about providing service.
You will primarily work with (and for) other human beings, inside your organization and outside.
The measure of your success is often a matter of perception, coming from a boss, a coworker, or a client, and it may not directly correlate with your perception of what you may or may not have personally invested into the solution.
Software engineering is philosophically no different than plumbing -- sometimes the job is designing and implementing a plumbing system in a new building, other times it's diagnosing the source of a leak and fixing it, many times it's clearing literal feces from a pipe. Your value is not just extracting those turds, it's often being calm, competent, compassionate, timely and communicative while doing so. It comes from perseverance for solving the problem to the customer's satisfaction. It also comes from defusing situations with angry / incompetent clients, disaster projects, and resolving chaotic situations. Your role is to help reduce friction and solve problems for a person or organization in need.
That you're writing software is purely coincidental; it's but one of many deliverables you provide throughout the course of your career. The rest are "soft" -- insight, support, quality, reliability, trust, consistency, charisma, general likeability as a human being, etc.
If you're doing this for a job, you're going to have to deal with a lot of people, a lot of arbitrary constraints, a lot of bullshit, and a lot of bureaucracy that have nothing to do with writing software.
The same argument could be made for law, medicine, engineering, hospitality, cooking, fashion design, driving a taxi, street performing, drug dealing, sex work, you name it.
That's just the reality of work! If you're more interested in making art, do that instead (or both at the same time), but try to understand that there's a marked difference, and they serve separate, necessary roles in life :)
It's interesting that we treat kettling children up indoors by age cohort with little movement and making them solve math problems ad nauseam as if it were the most natural thing in the world, and treat those who can't sit still and learn in such an environment (or understand the purpose of doing all of those math problems for 13 years) as if they are an aberration.
The article has some helpful points. But as a programmer-SaaS-founder who took over the ads operation, I have some tips and insights we gleaned doing paid ads (and getting them to be profitable for us):
1. Most important tip: is your product ready for ads?
- Do not do paid ads too early.
- Do it once you know that your product is compelling to your target audience.
- Ads are likely an expensive way of putting your product in front of an audience.
- No matter how good the ad operation, unless your product can convince a user to stay and explore it further, you've just gifted money to Google/X/Meta whoever.
- If you haven't already, sometimes when you think you want ads, what you more likely and more urgently need is better SEO
2. The quality of your ad is important, but your on-boarding flows are way more important still.
- Most of the time, when we debugged why an ad wasn't showing conversions, rather than anything inherent to the ad, we found that it was the flows the user encountered _AFTER_ landing on the platform that made the performance suffer.
- In some cases, it's quite trivial: e.g. one of our ads was performing poorly because the conversion criterion was a user login, and the login button ended up _slightly_ below the first 'fold' or view that a user saw. That tiny scroll we took for granted killed performance.
3. As a founder, learn the basics
- This is not rocket science, no matter how complex an agency/ad expert may make it look.
- There is some basic jargon that will be thrown around ('Target CPA', 'CPC', 'CTR', 'Impression share'); don't be intimidated (a small worked example of what these measure follows this list)
- Take the time to dig into the details
- They are not complicated and are worth your time especially as an early stage startup
- Don't assume that your 'Ad expert' or 'Ad agency' has 'got this'.
- At least early on, monitor the vital stats closely on weekly reviews
- Ad agencies especially struggle with understanding the nuances of your business. So make sure to help them in the early days.
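As promised above, here's a toy calculation of what the main jargon actually measures. All the numbers are invented purely for illustration:

```python
# Toy numbers, purely to show what the jargon measures.
spend = 500.00        # total ad spend over some period
impressions = 40_000  # times the ad was shown
clicks = 800          # times it was clicked
conversions = 20      # whatever you defined as a conversion (signup, login...)

ctr = clicks / impressions   # click-through rate: 2.00%
cpc = spend / clicks         # cost per click: $0.62
cpa = spend / conversions    # cost per acquisition/action: $25.00
# "Target CPA" is simply the CPA you tell the platform to aim for when bidding.

print(f"CTR {ctr:.2%}, CPC ${cpc:.2f}, CPA ${cpa:.2f}")
```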
4. Targeting Awareness/Consideration/Conversion
- Here I have to politely disagree with the article
- Focus on conversion keywords exclusively to begin with!
- These will give you low volume traffic, but the quality will likely be much higher
- Conversion keywords are also a great way to lock down the basics of your ad operation before blowing money on broad match 'awareness' keywords
- Most importantly, unless your competition is playing dirty and advertising on your branded keywords, don't do it.
- Do NOT advertise on your own branded keywords, at least to begin with.
- Most of the audience that used your brand keywords to get to your site are essentially just repeat users using your ad as the quickest navigation link. Yikes!
5. Plug the leaks, set tight spend limits
- You'll find that while you're running ads, you are in a somewhat adversarial dance with the ads platform
- Some caveats (also mentioned in the article)
- Ad reps (mostly) give poor advice, sometimes in borderline bad faith. We quickly learnt to disregard most of what they say. (But be polite; they're trying to make a living and they don't work for you.)
- (Also mentioned in the article) Do not accept any 'auto optimization' options from the ads platform. They mostly don't work.
- Set tight limits on spends for EVERYTHING in the beginning. I cannot emphasize this enough. Start small and slowly and incrementally crank up numbers, whether it be spend limits per ad group, target CPA values, CPC values - whatever. Patience is a big virtue here
- If you're running display ads, there are many more leaks to be plugged: disallow apps if you can (article mentions why), and disallow scammy sites that place ads strategically to get stray clicks.
- For display ads, controlling 'placement' also helps a lot
6. Read up `r/PPC` on Reddit
- Especially the old, well rated posts here.
- They're a gold mine of war stories from other people who got burnt doing PPC, whose mistakes you can avoid.
All the main players in ClickHouse's space - Apache Pinot, Apache Druid, StarRocks, PrestoDB - have mindshare and unicorns using their products. It sounds like you haven't seen what's happening in this space.
Second, even XP is not a "magic bullet". Nothing is. It's work that works. (Scrum, on the other hand, is not a "magic bullet" but simply a "bullet". Use it to kill projects very effectively).
Third, at my first real job after uni, we did most of the XP-like practices, and it worked amazingly well. But we didn't know about "XP". Partly because it didn't exist yet, as this was around the same time that Kent Beck started on the Chrysler Comprehensive Compensation project. When the XP books came out it was fun to have a name for what we had been doing so successfully. And also to compare and contrast.
Fourth, I had a great side-by-side natural experiment during my tenure at the BBC. My team did XP-ish things, mostly the technical practices, so test first/TDD, YAGNI etc. Pairing when necessary, but we were co-located around a desk "island" (sort of the way journalist workspaces are organised). My team succeeded far beyond expectations [1]
The team next to us, larger, more important and with way more experienced developers did SCRUM, but not XP. That project had to be rebooted completely after 2 years.
Most people don't know some history. During the 1990s, a group of people made a fortune out of consulting gigs where they would be called in by their CTO friends in traditional enterprises to save late and over-budget projects. One of these people was Kent Beck. Kent would use his license to kill to turn things around, and eventually generalized his rescue formula and sold it to make 100X more. His crowning glory during those days was XP, or eXtreme Programming.
Like with all self-help formulas, Kent would label his solution as a magic bullet for all software development problems. He would advertise it as a secret medicine that cures all ills. He would be at every conference, write article after article, publish books.
Also, like all magic self-help formulas, it wouldn't quite work. So Kent would invent something new. His next prescription was TDD, and when I first saw it, I thought it was a joke. But people around me started drinking the Kool-Aid, and if you didn't join them then you weren't one of them. Again, Kent and friends would go out on a massive marketing spree advertising it as a secret talisman. Like overweight people desperate to lose weight, people would enthusiastically start the new Kent Beck diet, lose a few pounds, and endorse the formula. But they would soon find that they had simply traded one problem for another, uglier one.
This went on for a long time. For more than two decades, this group of people kept inventing these processes, selling them as magic pills, and made millions upon millions in consulting gigs, books, training, certifications and so on. They came up with Agile, and 17 people in that group created the "agile manifesto". Their most aggressively marketed prescription was Scrum. Like all their previous prescriptions, the world is finally coming off a night of drinking the Kool-Aid and feeling a severe headache.
I think most of these people have now sort of retired after amassing massive fortunes and hopefully we will not see more of these magic processes pushed to dumb CTOs with promises of curing all ills.
The truth is Scrum was never a magic bullet, and it is downright harmful for many projects. It is useful for highly predictable projects where the research component is negligible - for example, CRUD websites - AND where you are stuck with unmotivated tier-3 talent who failed to get a job at an insurance company. For everything else, it should never have been used. It is especially going to hurt creativity, originality and novelty if you are in the business of making a differentiated, unique, novel product. It is also a very, very bad choice if you already have a tier-1, highly motivated team.
The first time I experienced Scrum rituals I thought I was inside the Idiocracy movie, or playing some tech-themed LARP game.
In my anecdotal experience, the same org, after moving from mini-waterfall to Scrum, required an order of magnitude more people to work on a project with a fraction of the complexity (e.g. equivalent to a single component/service under mini-waterfall).
Not everybody knows that, but Scrum was invented to manage a team of dysfunctional COBOL programmers at a bank, not for product-led tech companies, and certainly not for startups.
If you're mostly hiring people who are junior, low-skilled, unpassionate, unable to work autonomously without constant handholding, and reactive instead of proactive, then you'll certainly need some micromanaging SDLC like Scrum.
Having been on both sides of this, and having worked closely with "the business" doing requirements analysis, project management, tech-leading and individual development, my conclusion was that your original view is somewhat closer to the right one than the PM's.
One may ask: where does the tech industry come from? Where do tech startups come from? Why is there such a thing as the "tech" industry at all? Don't all companies use tech? We don't talk about the "science industry", do we? If you try to find a definition of "tech firm" that captures what people mean when they say this, you'll conclude they're basically either computer companies or ordinary firms doing ordinary things, who use computers more effectively than normal. And in the latter case what makes a firm a tech firm is basically unarticulated; people know it when they see it, but it's not like there's a set of rules to classify, say, Netflix as a tech firm and Disney as not a tech firm.
So what is it that people see? Mostly it's the distinctive culture that appears when you have (ex-)programmers at the very top of the company, as in CEO and/or board level. This causes companies to differ in all sorts of ways, but one way in which tech firms differ, for example, is that in tech firms you don't get terms like "the business" and "IT". You don't get non-technical project managers. The distinction between the two sides simply doesn't exist.
Non-tech firms live in fear of tech firms and startups. It took me a while to notice this, but go to enough conferences, talk to enough people and you'll see it. The average firm is far more scared of Google or Apple encroaching on their space than they are of an established competitor. This is because tech firm culture is more effective than their own. Such firms have a long history of coming from nowhere to utterly dominate entire industries very fast, and they don't know how to respond to it.
The cultural problems can be seen in the stories you were told. Programmers who understand the business are too expensive. They get ideas. They get passionate, and that's a problem. In a tech firm, experienced devs who understand the business end up at director level or higher, and firms compete to pay them the best. In non-tech firms, they are a problem and get pushed out. This is because the business people are scared of such devs: senior developers end up understanding the business better than the business people do. After all, they implemented the business logic, so every rule, regulation and detail is in their mind. And they're used to the rigor that programming demands, so they tend to say awkward things in meetings like "that won't work" or "that contradicts the other thing you just said". Tech firms don't mind this because that guy's boss is himself a former developer, and is used to such discussions (from e.g. code or design reviews). "Business people", on the other hand, aren't used to this at all, yet feel like their value is their business understanding. They need their devs to be bored and uncaring drones, because otherwise what's their own value? You don't want to be competing for a promotion against someone whose understanding of the business is better than yours and who can actually execute change projects effectively!
Underlying all this, of course, is the uncomfortable fact that programming is much harder than most office jobs. Programmers can and will learn programming and then go on to learn the fine details of finance, accounting or shipping without breaking a sweat, but the reverse is generally not true. It was maybe true to some extent in the Visual Basic era, but the move to web apps put a stop to gifted amateurs cobbling together business apps, and nothing really replaced it (maybe Oracle APEX, but it's not as widely used).
I've been using https://structurizr.com/ to automatically generate C4 diagrams from a model (rather than drawing them by hand). It works well with the approach for written documentation as proposed in https://arc42.org/. It's very easy to embed a C4 diagram into a markdown document.
The result is a set of documents and diagrams under version control that can be rendered using the structurizr documentation server (for interactive diagrams and indexed search).
I also use https://d2lang.com/ for declarative diagrams in addition to C4, e.g., sequence diagrams and https://adr.github.io/ for architectural decision records. These are also well integrated into structurizr.