
> Weird that this is the top-rated comment,

The reason it is top-rated is that it sounds extremely reasonable. This is enough for most people.

I am not judging whether the comment is correct or not, just answering why it is top-rated. I find nothing weird about it.


I don't think the comment sounds unreasonable, and I don't find anything weird about the words in the comment itself. It just kinda bums me out that so many people must come here and comment and upvote comments without even reading or skimming the first few paragraphs of the article. This isn't news to me, of course; I've been a frequent HN reader for over a decade now. But it still bums me out.


Welcome to our entire species. We can blame evolution for pretty much everything.


Doesn't a comment automatically rise to the top if there's lots of discussion below it?


I've read something more like the opposite before: posts with lots of comments risk being down-weighted unless they've received a lot of upvotes. Supposedly it has something to do with wanting to avoid lower-value content that attracts a large amount of discussion anyway.

I've never heard of anything of this nature regarding comments though.


Their down-weighting method is odd.

It causes stories to essentially die. I saw one get upvotes yet keep moving lower down the list until it fell off.

Paste the post id after id=

https://news.social-protocols.org/stats?id=


Call me sceptical.

That would have been astronomically expensive given the enormous supply chain needed to produce charcoal to get that iron in those times.

I am sceptical about how they figured out that the iron stains are pothole fillings. I think a much simpler explanation would be everyday items or metal pieces from carts getting stuck between the stones.


How is this unfortunate?

Most programmers learn about loops pretty much at the absolute start of their development experience, where they don't yet have a way to talk about recursion. Don't even get started on tail recursion or tail call optimisation.


> where they don't yet have a way to talk about recursion.

I'd like to know how it's unfortunate as well; I'm not sure I agree with this, though.

    int a = 0
  begin:
    if a == 10 {
      jump :end
    } else {
      a = a + 1
      jump :begin
    }
  end:
The programmer will have learnt that programs have a beginning and an end; they will have some notion of variables, their types, and how to manipulate their values. They will even likely have learnt conditional branching logic. The only new concept here is that of jumping between areas of code.

If you next introduce methods, you can express the same idea more cleanly:

  myFunc(int a) {
    if a == 10 {
      return
    } else {
      a = a + 1
      return myFunc(a)
    }
  }

  myFunc(0)
Finally you can explain to the programmer, "hey, there's this shortcut we can take called a loop that expresses this more succinctly":

  int a = 0
  while (a != 10) {
    a = a + 1
  }
Nice simple-looking code. Yet this concept requires being able to grok much more than the relatively simple tail-recursive definition.


I suppose the order you introduce those examples in ultimately comes down to whether it's more important for the student to first understand what the computer is doing or how it is doing it.

Most people don't start out thinking like computers, so I think it's probably more important for a new student to understand how code describes a particular series of operations and then help them translate that into how a computer "thinks" about those operations.


> Most people don't start out thinking like computers

Everyone who has followed a sequential list of instructions would strongly disagree.


Which, FWIW, are almost never written using recursion: they are written with either imperative loops or the moral equivalent of a goto statement.


I think most people learn about loops before they've learned about functions because your first example maps obviously to your last.

I don't think your middle example would be obvious to most people learning about functions until they've had quite some time to get their heads around what functions do, even with the context of the top piece of code.


> I think most people learn about loops before they've learned about functions

That may be so, however I was taking issue with the idea that they couldn't possibly understand tail recursive functions first. Many (if not all) of the concepts that loops introduce also get introduced with functions (scoping, regions of code, stack parameters, jump return statements e.g. break/return). The programmers just may not have words for all of them with loops, however these concepts are usually explicitly covered with functions.

Loops are particularly useful in batch processing or indirectly for parallelization (again, functions are actually more useful here), so they may learn them for a business use case first, but that doesn't mean they couldn't learn to master both functions and their tail recursive variants first. As a sibling commenter pointed out, if you come from a maths background you might even naturally prefer this construct.


I feel like there's a semi-philosophical question somewhere here. Recursion is clearly a core computer science concept (see: many university courses, or even the low level implementation of many data structures), but it's surprisingly rare to see it in "day to day" code (i.e. I probably don't write recursive code in a typical week, but I know it's in library code I depend on...)

But why do you think we live in a world that likes to hide recursion? Why is it common for tree data structure APIs to expose visitors, rather than expecting you write your own recursive depth/breadth-first tree traversal?

Is there something innate in human nature that makes recursion less comprehensible than looping? In my career I've met many programmers who don't 'do' recursion, but none who are scared of loops.

And to me the weird thing about it is, looping is just a specialized form of recursion, so if you can wrap your head around a for loop it means you already understand tail call recursion.
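For example (a sketch in Python; the names are made up, and Python itself won't optimise the tail call, but the shape is the point):

  def sum_loop(n):
      # ordinary loop: a mutable counter and accumulator
      total = 0
      for i in range(1, n + 1):
          total += i
      return total

  def sum_tail(n, total=0):
      # the same computation as a tail call: the accumulator becomes a
      # parameter, the loop condition becomes the base case
      if n == 0:
          return total
      return sum_tail(n - 1, total + n)

  assert sum_loop(10) == sum_tail(10) == 55

The loop's mutable state is exactly the tail call's argument list.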


> but it's surprisingly rare to see it in "day to day" code

I rarely use it because I became tired of having to explain it to others, whereas I've never had to explain a simple while loop that accomplishes the same thing with, usually literally, a couple more lines of code.

From all of my experience, recursion is usually at the expense of clarity, and not needed.

I think it's related to the doorway effect [1]: you lose the sense of state when hopping into the function, even though it's the same function.

[1] https://www.scientificamerican.com/article/why-walking-throu...


Not "doing" recursion as a principle is often a sign the person has not been exposed to functional languages or relational kind of programming like Prolog. It often points at a lack of experience with what perhaps is not so mainstream.


Or the person is sensibly trying to make the code easier for other people to understand.

I am a tech lead and architect for large financial systems written in Java, but have done a bunch of Common Lisp and Clojure projects in the past. I will still avoid any recursion and ask people to remove recursion from their PRs unless it is absolutely the best way to get readable and verifiable code.

As a developer your job is not to look for intellectual rewards when writing code and your job is not to find elegant solutions to problems (although frequently elegant solutions are the best ones). Your job is taking responsibility for the reliability, performance and future maintenance of whatever you create.

In my experience there is nothing worse than having bright engineers on a project who don't understand they are creating for other, less bright engineers who will be working with it after the bright engineer gets bored with the project and moves on to another rewarding greenfield task.


The stack traces when something goes wrong are inscrutable under recursion. Same when looking at the program state using debuggers.

Fundamentally, the actual state of the program does not match the abstract state used when programming a recursive function. You are recursively solving subproblems, but when something goes wrong, it becomes very hard to reason about all the partial solutions within the whole problem.


> The stack traces when something goes wrong are inscrutable under recursion.

Hmm. This is a real issue, for the simple case. If tail recursion is not optimized then you end up with a bunch of stack frames, wasted memory...

I'd argue this is partly a tooling issue, not a fundamental problem with recursion. For tail recursion the frames could be optimized away and a virtual counter employed.

For more complex cases, I'd argue it matters less. Saving on stack frames is still preferable, however this can be achieved with judicious use of inlining. But the looping construct is worse here, as you cannot see prior invocations of the loop, and so have generally no idea of program execution without resorting to log tracing, while even the inlined recursion will provide some idea of the previous execution path.
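To illustrate the tooling point with a language that does no tail call optimisation at all (Python here; the function is just an example):

  def countdown(n):
      # a tail call in form, but CPython keeps every frame on the stack,
      # so deep inputs hit the default recursion limit (about 1000 frames)
      if n == 0:
          return 0
      return countdown(n - 1)

  countdown(500)       # works, but an error here drags 500 frames into the traceback
  # countdown(100_000) # RecursionError: maximum recursion depth exceeded

With proper tail call elimination those frames would collapse into what is effectively a counter, and the stack trace problem largely disappears for this case.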


I don't agree with your last statement. I've been programming forever, and I understand recursion, and I use it, but I never equate it with loops in my mind (unless I'm deliberately thinking that way, like now) and I always find it more difficult to think about than loops.


This is more about people feeling they are clever. It's the CS equivalent of "everything is an eigenvector".


It is unfortunate, because many programmers develop a feeling that recursion is weird, more difficult or in some way worse than a loop, while this is mostly based on unfortunate language design and lack of familiarity with recursion. It also often comes along with a fear of recursive data structures, which holds back both programmers and the products they make.


Most people learning programming have already been exposed to function calling in math (f(x)=x+1). Recursion is not a very big jump semantically from this. Conditional loops are a (relatively) big jump.


> Conditional loops are a (relatively) big jump.

I'd be very shocked if anyone past the age of 4 or 5 had never heard (and learned to understand) statements like "Wash your hands until they're clean" which is a conditional loop (wash your hands, check if they're still dirty, repeat if they are, stop otherwise). If a teen or adult learning to program has trouble with conditional loops, I'd be very very surprised. The translation into programming languages (syntax) may be a challenge, the correct logical expressions for their intent may be a challenge, but the concept should not be.


I happen to know (due to my job) that many adults have problems grasping for loops (in Python) when learning programming. It is one of the main points where problems arise in programming introductions. It may all depend on how it is explained to which person, as different people understand different explanations better than others. Or it may be syntax related. Or that people for the first time fathom how they can make the computer do things in such a time-saving manner. Who knows.


I've used something very similar many times in the past without knowing it was formalised as a pattern.

For example, one application of this was a long migration project where a large collection of files (some petabytes of data) was to be migrated from an on-prem NAS to a cloud filesystem. The files on the NAS were managed with an additional asset management solution which stored metadata (actual filenames, etc.) in PostgreSQL.

The application I wrote was composed of a series of small tools. One tool would contact all of the sources of information and create a file with a series of commands (copy file from location A to location B, create a folder, set metadata on a file, etc.)

Other tools could take that large file and run operations on it. Split it into smaller pieces, prioritise specific folders, filter out modified files by date, calculate fingerprints from actual file data for deduplication, etc. These tools would just operate on this common file format without actually doing any operations on files.

And finally a tool that could be instantiated somewhere and execute the plan.

I designed it all this way because it made for a much more resilient process. I could have many different processes running at the same time, and I had a common format (a file with a collection of directives) as a way to communicate between all these different tools.
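As a rough sketch of the idea (the directive names and the tab-separated format here are made up for illustration, not the actual tool's):

  import shutil
  from pathlib import Path

  def execute_plan(plan_path):
      # each line of the plan file is one directive, e.g.
      #   MKDIR <tab> /dst/projects
      #   COPY <tab> /nas/projects/a.mov <tab> /dst/projects/a.mov
      #   SETMETA <tab> /dst/projects/a.mov <tab> original_name=a_final_v3.mov
      for line in Path(plan_path).read_text().splitlines():
          if not line.strip():
              continue
          op, *args = line.split("\t")
          if op == "MKDIR":
              Path(args[0]).mkdir(parents=True, exist_ok=True)
          elif op == "COPY":
              shutil.copy2(args[0], args[1])
          elif op == "SETMETA":
              # placeholder: the real executor would call the target
              # filesystem's metadata API here instead of a sidecar file
              Path(args[0] + ".meta").write_text(args[1])

Because the plan is just a flat file of directives, splitting it, filtering it or prioritising parts of it is ordinary line-oriented processing, which is a big part of what made the approach resilient.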


It is hard for me to tell how much this revisionist tendency is just a recent invention and to what extent it has been present throughout history.

For the most part, I can see old books on bookshelves are still unedited. But maybe some other books have been completely destroyed due to not being acceptable to future readers/powers?

But I really hate it. I dislike when people do not understand that moral and social norms change over time and you can't blindly apply your current views to historical people who were brought up and lived in a different world.

I am pretty sure people in some distant future will think of us as heathens for eating meat, driving cars and wearing plastic. I hope they will be wise enough not to cancel us completely for this and to hear out the other wisdom we might want to pass on.


> It is hard for me to tell how much this revisionist tendency is just a recent invention and to what extent it has been present throughout history

How could this be a recent invention when the Bible literally exists? When we know that the Greek and Roman gods have a complex and related history, itself derived from even older gods? We know almost nothing about the Vikings because they didn't write much down, so all the accounts we have are almost entirely by people who hated them!


> I am pretty sure people in some distant future will think of us as heathens for eating meat, driving cars and wearing plastic. I hope they will be wise enough not to cancel us completely for this and to hear out the other wisdom we might want to pass on.

I think we're pretty poor at predicting what future generations will think about us. To that point I heartily recommend "But What If We're Wrong" by Chuck Klosterman.


It's hard to know how predominant views will change, but it is certain that they will change. If views change, future generations must, by necessity, see us as wrong on some dimension(s), or else their views would have remained the same.

So I think the need to be able to look at past generations and "hear them out" (i.e. not cancel them, take the good, leave the bad, etc.) is important regardless of how well we project out the future.


I'm sure children can distinguish fiction from reality better than adults give them credit for. Sure, it's possible for a kid to mimic a violent kid's show from time to time. But such incidents are rare, and seem to coincide with poor parenting for the most part.

That said, I find it reasonable to think that children may have an underdeveloped capacity to understand sophisticated phenomena such as social norms. I remember that I didn't truly understand the dynamic nature of social norms till middle school. Children can be quite trusting when it comes to moral instruction. In that sense, perhaps one can justify "sanitizing" stories for an audience with impaired discernment.


It's not new. Books have been getting revised for decades now for newer sensibilities. (e.g. even the Hardy Boys was revised more than 60 years ago to sanitise it - https://www.theatlantic.com/entertainment/archive/2019/01/re...)

There was recent controversy about Roald Dahl's books getting revised (and he said himself 'change one word [in my books] and deal with my crocodile'), yet he also made revisions in his own lifetime for the same reason (https://www.forbes.com/sites/danidiplacido/2023/02/21/woke-w...)


So what if it's not new? That doesn't really make it better. An author rewriting another edition of his own work is not the same as deceptively presenting an unoriginal work as being genuine.


I'm answering the musing from the person I replied to:

> It is hard to me to understand how much this revisionist tendency is just a recent invention and to what extent it has been present throughout the history.


Fair enough.


There's a world of difference between an author revising their own work voluntarily, and their work being censored and amended without their consent. Any writer may review their work and find it wanting for any variety of reasons - but it remains the record of their creative vision. The most perfect expression of their ideas and deepest self. Even children's stories. The Forbes article you link to lists a variety of nonsensical changes that seem to have been made 'just because'. As a writer myself, I find the concept of 'sensitivity readers' condescending, troubling and downright dangerous.

To cite the article you've linked - Author Salman Rushdie wrote, “Roald Dahl was no angel but this is absurd censorship. Puffin Books and the Dahl estate should be ashamed.”


> As a writer myself, I find the concept of 'sensitivity readers' condescending, troubling and downright dangerous.

Also a writer myself, I find 'sensitivity readers' just another tool in the toolbox. I wouldn't find it appropriate to have a generic one, but if I'm, say, depicting an addict I might want to consult someone who either has lived experiences with addiction or someone who is an expert on addicts, so that I'm not unintentionally spreading bullshit tropes. A basic "am I the asshole" sort of check.


What you're describing already existed. It's the role of a researcher or fact checker. A sensitivity reader explicitly serves a different function. Not checking for accuracy but perceived offensiveness. This is an ever expanding rubric and one that (for the 'sensitivity reader' like the bureaucrat), can only fail catastrophically in one direction. The incentive is not to ensure accuracy, it's to avoid controversy.

The phrase 'bullshit tropes', so reminiscent of 'piece of shit people' is telling here.


I mean I can factually portray a spiral into addiction pretty accurately but I would rather not do so in an asshole manner :)


It's fun to think about how much medical and scientific stuff they were wrong about. But today people still persist with dogmatic belief in what they believe to be proven. It was more often quackery than not... so the trend continues.


It is not hilarious, it is actually the correct use of the term.

Otherwise, you would have to contend with the fact that "real time" does not exist at all, as information about any event has to necessarily take time to travel to reach you.

So no "real time" coverage of anything -- the information always takes time to travel the distance.

What is not a correct understanding of how time works is claiming that it happened some thousands of years ago. No, from our reference frame it happened now. It is meaningless to say that it happened thousands of years ago, because that is only true in some other, arbitrary reference frame.


You’re using reference frames incorrectly.

It didn’t happen in real time, but they did observe it in real time.

One is a measurement of the event and one is a measurement of when the photons reached us.


This is why any faster-than-light travel must either be impossible or mean that traveling backwards in time is possible.

I point my telescope at a planet four light years away (I have a super advanced telescope that can see these details), and use a wormhole or other plot device to teleport instantly to that spot. Where do I arrive -- at what I observed, or at some point in empty space because I've just arrived at where that planet was four years ago?

If the former, I must somehow have traveled back in time by four years to arrive at the spot I had observed.

If the latter, I suppose we could instead say our destination is where we calculate the planet will be four years from now. Except that my travel time was instantaneous, so again either I've arrived too early and need to wait around for four years, or I jumped 4 years into the future (at which point that's not really FTL travel, just kind of stepping outside of time into some nether state for four years).


>If the latter, I suppose we could instead say our destination is where we calculate the planet will be four years from now. Except that my travel time was instantaneous, so again either I've arrived too early and need to wait around for four years, or I jumped 4 years into the future (at which point that's not really FTL travel, just kind of stepping outside of time into some nether state for four years).

This doesn't make sense. If you have a wormhole teleporter, and teleport to where that 4ly-away planet is in your observation, it won't be there, since you saw it 4 years ago and it's moved. This seems fairly obvious.

Now, if you observe its motion and predict where it's traveled in the 4 years between your observation (now) and when the photons you saw started from that planet (4y ago, relative to your current position in spacetime), and set your wormhole teleporter to take you there instead, it should take you to the planet's current position. (Hopefully your calculations were accurate and you don't teleport into the planet...) I don't see why you think you'd arrive too early, or jump into the future. Your friend who stays behind would need to wait 4 years to see you arrive at the planet in your super-telescope, but that's just because it takes light that long to arrive.

I don't see how this particular thought experiment necessitates time travel.


There is no Cosmological Navigational System silly. Time-space is one.


We saw it happening in real time from here. Does it even matter that elsewhere sees it at a different time?


There is no single solution to this problem.

Look at the changes that happened in the past and ask yourself:

* which people have been successful regardless of changes that happened?

I think almost independently of whatever you do in life, if you are the absolute best at what you do, you are probably going to be fine. Even if what you do is house cleaning, if you are the best at house cleaning you are going to be fine. There is always going to be a millionaire or a billionaire who will prefer to have a human sweep the floor rather than a roomba. Or maybe a lab will prefer to have humans do the work just to not invite potentially dangerous electronics onto the site.

There is always demand for top level talent in any area. There will always be demand for human reporters, human drivers, human writers, human programmers, human graphics designers, human managers, regardless of the changes that will happen.

But it is possible that the demand will only be for the top of the top of the top of people in each of those areas, and 99.9% or even more will be replaced and automated.

Another thing that can help is rare specialisation that is not worth automating.

One of the easier ways to find those rare specialisations is at the intersection of two largely orthogonal areas of study. I like to think a lot of useful things happen through people who connect different, sometimes distant areas of knowledge / ability.

Another thing that helps people survive change is being a free agent. Don't be an employee -- be an entrepreneur with a mindset to learn and the ability to pivot on a moment-to-moment basis. Learn a lot about life and the universe, economics, trends, etc. Learn the basics of how entrepreneurship works and how to find new areas that can provide value to people.

---

So if you are a developer, you have some choices:

* become the best damn developer while you still can. Spend considerable time honestly learning your craft. Just completing projects is no longer enough to be safe, but outstanding developers who can complete projects will always be needed.

* learn deeply something else that can be connected with development. I know finances, and it seems there will always be a need for people who know development well as well as finances.

* you could learn management/leadership skills. The trouble is, there are plenty of technical managers/leaders; just becoming one will not guarantee job safety. You will have to work hard to stay strong technically while you are also trying to become a very competent manager/leader.

* build on your development skills to become an entrepreneur. This is probably the hardest / riskiest path.

Other choices? Please, let me know... I am myself interested in this whole topic.


Not specifically about event-driven, but the most damaging anti-pattern I would say is microservices.

In pretty much all the projects I have worked on in recent years, people chop the functionality up into small separate services and have the events be serialised, sent over the network and deserialised on the other side.

This typically causes an enormous waste of efficiency and consequently causes applications to be much more complex than they need to be.

I have many times worked with apps that occupied huge server farms when in reality the business logic would run fine on a single node if structured correctly.

Add to that the amount of technology developers need to learn when they join the project or the amount of complexity they have to grasp to be able to be productive. Or the overhead of introducing a change to a complex project.

And the funniest of all, people spending a significant portion of the project resources trying to improve the performance of a collection of slow nanoservices without ever realising that the main culprit is that the event processing spends 99.9% of its time being serialised, deserialised, sitting in various buffers or somewhere in transit, which could easily be avoided if the communication were a simple function call.
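As a toy illustration of just the serialise/deserialise round trip (the network hop, buffering and queueing that come with real services add far more on top):

  import json
  import timeit

  def handle(event):
      # stand-in for some business logic
      return event["amount"] * 2

  event = {"type": "order", "amount": 21}

  direct = timeit.timeit(lambda: handle(event), number=100_000)
  via_json = timeit.timeit(lambda: handle(json.loads(json.dumps(event))),
                           number=100_000)
  print(f"direct: {direct:.3f}s  serialise/deserialise: {via_json:.3f}s")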

Now, I am not saying microservices is a useless pattern. But it is so abused that it might just as well be. I think most projects would be happier if the people simply never heard about the concept of microservices and instead spent some time trying to figure how to build a correctly modularised monolithic application first, before they needed to find something more complex.


Also, the single most nonsensical reason that people give for doing microservices is that "it allows you to scale parts of the application separately". Why the fuck do you need to do that? Do you scale every API endpoint separately based on the load that it gets? No, of course not. You scale until the hot parts have manageable load and the cold parts will just tag along at no cost. The only time this argument makes sense is if one part is a stateless application and the other part is a database or cache cluster.

Microservices make sense when there are very strong organizational boundaries between the parts (you'd have to reinterview to move from one team to the other), or if there are technical reasons why two parts of the code cannot share the same runtime environment (such as being written in different languages), and a few other less common reasons.


Oh, it is even worse.

The MAIN reason for microservices was that you could have multiple teams work on their services independently from each other. Because coordinating work of multiple teams on a single huge monolithic application is a very complex problem and has a lot of overhead.

But, in many companies the development of microservices/agile teams is actually synchronised between multiple teams. They would typically have a common release schedule, want to deliver larger features across a multitude of services all at the same time, etc.

Effectively making the task way more complex than it would be with a monolithic application.


I've worked with thousands of other employees on a single monolithic codebase, which was delivered continuously. There was no complex overhead.

The process went something like this:

1. write code

2. get code review from my team (and/or the team whose code I was touching)

3. address feedback

4. on sign-off, merge and release code to production

5. monitor logs/alerts for increase in errors

In reality, even with thousands of developers, you don't have thousands of merges per day; it was more like 30-50 PRs being merged per day, and on a multi-million-line codebase most PRs were never anywhere near each other.


Regarding monoliths... when there's an issue, everyone who made a PR is subject to forensics to try to identify the cause. I'd rather make a separate app that is infrequently changed, resulting in fewer faults and shorter investigations. Being on the hook to figure out when someone breaks something "related" to my team's code is also a waste of developer time. There is a middle ground for optimizing developer time, but putting everything in the same app is absurd, regardless of how much money it makes.


I'm not sure how you think microservices gets around that (it doesn't!).

We didn't play a blame game though... your team was responsible for your slice of the world and that was it. Anyone could open a PR to your code and you could open a PR to anyone else's code. It was a pretty rare event unless you were working pretty deep in the stack (e.g., merging framework upgrades from open source) or needing new APIs in someone else's stuff.


> I'm not sure how you think microservices gets around that (it doesn't!).

Microservices get around potential dependency bugs, because of the isolation. Now there's an API orchestration between the services. That can be a point of failure. This is why you want BDD testing for APIs, to provide a higher confidence.

The tradeoff isn't complicated. Slightly more work up front for less maintenance long term; granted this approach doesn't scale forever. There's not any science behind finding the tipping point.


> Microservices get around potential dependency bugs, because of the isolation.

How so? I'd buy that bridge if you could deliver, but you can't. Isolation doesn't protect you from dependency bugs and doesn't protect your dependents from your own bugs. If you start returning "payment successful" when it isn't, lots of people are going to get mad -- whether there is isolation or not.

> Now there's an API orchestration between the services

An API is simply an interface -- whether that is over a socket or in-memory, you don't need a microservice to provide an API.

> This is why you want BDD testing for APIs, to provide a higher confidence.

Testing is possible in all kinds of software architectures, but we didn't need testing just to make sure an API was followed. If you broke the API contract in the monolith, it simply didn't compile. No testing required.

> Slightly more work up front for less maintenance long term

I'm actually not sure which one you are pointing at here... I've worked with both pretty extensively in large projects and I would say the monolith was significantly LESS maintenance for a 20 year old project. The microservice architectures I've worked on have been a bit younger (5-10 years old) but require significantly more work just to keep the lights on, so maybe they hadn't hit that tipping point you refer to, yet.


50 PRs with a thousand developers is definitely not a healthy situation.

It means the average developer merges their work very, very rarely (20 working days = 4 weeks on average...), and that in my experience means either low productivity (they just produce little) or huge PRs that have lots of conflicts and are a PITA to review.


Heh, PRs were actually quite small (from what I saw), and many teams worked on their own repos and then grafted them into the main repo (via subtrees and automated commits). My team worked in the main repo, mostly on framework-ish code. I also remember quite a bit of other automated commits as well (mostly built caches for things that needed to be served sub-ms but changed very infrequently).

And yes, spending two-to-three weeks on getting 200 lines of code absolutely soul-crushingly perfect, sounds about right for that place but that has nothing to do with it being a monolith.


> Also, the single most nonsensical reason that people give for doing microservices is that "it allows you to scale parts of the application separately". Why the fuck do you need to do that? Do you scale every API endpoint separately based on the load that it gets? No, of course not. You scale until the hot parts have manageable load and the cold parts will just tag along at no cost. The only time this argument makes sense is if one part is a stateless application and the other part is a database or cache cluster.

I think it really matters what sort of application you are building. I do exactly this with my search engine.

If it was a monolith it would take about 10 minutes to cold-start, and it would consume far too much RAM to run a hot stand-by. This makes deploying changes pretty rough.

So the index is split into partitions, each with about a minute of start time. Thus, to be able to upgrade the application without long outages, I upgrade one index partition at a time. With 9 partitions, that's a rolling 10%-ish service outage.

The rest of the system is another couple of services that can also restart independently, these have a memory footprint less than 100MB, and have hot standbys.

This wouldn't make much sense for a CRUD app, but in my case I'm loading a ~100GB state into RAM.
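Roughly like this (a toy simulation; the names and timings are made up, not the real system):

  import time

  # nine partitions, restarted one at a time, so only ~1/9 of the index
  # is unavailable at any moment
  partitions = [{"id": i, "version": "v1", "up": True} for i in range(9)]

  def rolling_upgrade(new_version):
      for p in partitions:
          p["up"] = False           # take one partition out of service
          time.sleep(0.1)           # stands in for the ~1 minute cold start
          p["version"] = new_version
          p["up"] = True            # back in service before touching the next
          online = sum(1 for q in partitions if q["up"])
          print(f"partition {p['id']} now {new_version}, {online}/9 online")

  rolling_upgrade("v2")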


> Why the fuck do you need to do that?

Because deploying the whole monolith takes a long time. There are ways to mitigate this, but in $currentjob we have a LARGE part of the monolith that is implemented as a library; so whenever we make changes to it, we have to deploy the entire thing.

If it were a service (which we are moving to), it would be able to be deployed independently, and much, much quicker.

There are other solutions to the problem, but "µs are bad, herr derr" is just a trope at this point. Like anything, they're a tool, and can be used well or badly.


Yes. There are costs to having monoliths. There are also costs to having microservices.

My hypothesis is that in most projects, the problems with monoliths are smaller, better understood and easier to address than the problems with microservices.

There are truly valid cases for microservices. The reality, however, is that most projects are not large enough to benefit from microservices. They are only large projects because they made a bunch of stupid performance and efficiency mistakes and now they need all this hardware to be able to provide the service.

As to your statement that deploying monoliths takes time... that's not really that big of a problem. See, most projects can be engineered to build and deploy quickly. It takes a truly large amount of code to make that a real challenge.

And you can still use devops tools and best practices to manage monolithic applications and deploy them quickly. The only thing that gets large is the compilation process itself and the size of the binary that is being transferred.

But in my experience it is not out of the ordinary for a small microservice with just a couple of lines of code to produce an image that takes gigabytes of space and minutes to compile and deliver, so I think the argument is pretty moot.


Also - you give up type safety and refactoring. LoL


Well, technically, you can construct the microservices preserving type safety. You can have an interface with two implementations

- on the service provider, the implementation provides the actual functionality,

- on the client, the implementation of the interface is just a stub connecting to the actual service provider.

Thus you can sort of provide separation of services as an implementation detail.

However in practice very few projects elect to do this.
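A rough sketch of that shape (the service name and the JSON-over-HTTP transport here are made up for illustration):

  from abc import ABC, abstractmethod
  import json
  import urllib.request

  class PriceService(ABC):
      # the shared interface: callers are typed against this, not an implementation
      @abstractmethod
      def price_of(self, product_id: str) -> float: ...

  class LocalPriceService(PriceService):
      # service-provider side: the actual functionality
      def price_of(self, product_id: str) -> float:
          return 9.99  # a real lookup would go here

  class RemotePriceService(PriceService):
      # client side: a stub that forwards the call to the service provider
      def __init__(self, base_url: str):
          self.base_url = base_url

      def price_of(self, product_id: str) -> float:
          with urllib.request.urlopen(f"{self.base_url}/price/{product_id}") as r:
              return float(json.load(r)["price"])

Swapping a caller between the local implementation and the remote stub is then a wiring decision; the callers only ever see the interface.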


Even with this setup in place you need a heightened level of caution relative to a monolith. In a monolith I can refactor function signatures however I desire because the whole service is an atomically deployed unit. Once you have two independently deployed components, that goes out the window and you now need to be a lot more mindful when introducing breaking changes to an endpoint’s types.


You don't have to. The producer of the microservice also produces an adapter. The adapter looks like a regular local service, but it implements the calls as REST requests to the other microservice. This way you get type safety. Generally you structure the code as:

  Proj:
  |-proj-api
  |-proj-client
  |-proj-service

Both proj-client and proj-service consume/depend on proj-api, so they stay in sync about what is going on.

Now, you can switch the implementation of the service to gRPC if you wanted with full source compatibility. Or move it locally.


> Why on earth would anyone buy one?

For the same reason I, and a lot of other people, chose not to have sweets and snacks at home. Because we like them, and if they were available, we would eat them. We know that this is really bad for our health, and we also know that we have limited willpower to prevent ourselves from reaching for them. So we elect to help ourselves make better decisions by just not having them around all the time.

I still eat sweets. I just prefer this to be once a week in a form of a good dessert at a good restaurant, right after a good meal.

And if I need a snack I make sure to have plenty of alternative, healthy options available at all times -- mostly fresh fruit and veggies.


> with less red tape than NYSE or Nasdaq

Good luck with that.

The story of the red tape is that every time somebody does something stupid or malicious, there is some red tape added so that future investors face less trading risk.

Risk == cost

So less red tape translates to investors facing potentially higher risk on their transactions from all the stupid or malicious shit others can pull on them.

