
Author here. The Go channel send behavior could certainly be altered depending on the particular semantics of the application, but the reason I chose a non-blocking send on a buffered channel is so that no single subcomponent can slow down the distribution of notifications for everybody.

> Shouldn't the channel rather block than discard if full?

In Go, a channel initialized without a size is unbuffered, and a send on it blocks until a receiver is ready (see [1]). You could have the sender use a `select`/`default` to discard when nobody's ready to receive, but that leaves very little margin for error on the receiving side: if the receiver is still processing message 1 when message 2 comes in and the notifier tries to send it, message 2 is gone.

IMO, it's better to use a buffered channel with some leeway in its size, and then write receivers in such a way that they clear incoming messages as soon as possible. For example, if messages are expected to take time to process, the receiver spins up a goroutine to do so, or keeps an internal queue of its own where they're placed, so that new messages from the notifier never get dropped.
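A rough sketch of that receiver pattern (all names here are illustrative, not from any actual implementation):

    // The notifier sends on a buffered channel, giving receivers leeway.
    notifications := make(chan string, 100)

    // Receiver: drain the channel as quickly as possible, handing each
    // message off to an internal queue so that slow processing never
    // leaves the notifier's channel full.
    internal := make(chan string, 1000)
    go func() {
        for msg := range notifications {
            internal <- msg
        }
        close(internal)
    }()

    // Worker: the actual (possibly slow) processing happens here.
    go func() {
        for msg := range internal {
            process(msg) // process is a hypothetical slow handler
        }
    }()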

---

[1] https://gobyexample.com/channels



We probably both understand the underlying concepts correctly. But just in case: buffered channels will also block if full.
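For instance (a minimal illustration):

    ch := make(chan int, 1)
    ch <- 1 // fine: the buffer has room
    ch <- 2 // blocks forever: the buffer is full and nothing is receiving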

So I don't see how notifications get discarded (meaning lost). But somehow that's what your text says?


They'll get lost when using a non-blocking send with select/default:

    msg := "hi"
    messages := make(chan string) // unbuffered: a send blocks until a receiver is ready
    select {
    case messages <- msg:
        fmt.Println("sent message", msg)
    default:
        // No receiver was ready, so the message is discarded.
        fmt.Println("no message sent")
    }

The reason you'd use a non-blocking send is to make sure that one slow consumer can't slow the entire system down.

Imagine a scaled-out version of the notifier in which it's listening on hundreds of topics and receiving thousands of notifications. Each notification is received one by one using something like pgx's `WaitForNotification`, and then distributed via channel to the subscriptions that were listening for it.

In the case of a blocking send without `default`, one slow consumer taking too much time to receive and process its notifications would cause a buildup of all the other notifications the notifier's supposed to send, so one bad actor would degrade the time-to-receive for every listening component.

With buffered channels, a poorly written consumer could still drop messages for itself, which isn't optimal (it should be fixed), but all other consumers will still receive theirs promptly. Overall that's preferable to the alternative.
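Roughly, the distribution loop looks something like this (an illustrative sketch with made-up names, not the actual implementation):

    // Fan a notification out to every subscriber's buffered channel.
    // A non-blocking send means a full buffer (i.e. a slow consumer)
    // only drops its own copy; every other subscriber still receives
    // the notification promptly.
    func fanOut(subscribers []chan string, notification string) {
        for _, sub := range subscribers {
            select {
            case sub <- notification:
                // Sent: there was room in this subscriber's buffer.
            default:
                // This subscriber's buffer is full; drop its copy only.
            }
        }
    }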


Right, I didn't know it was possible to bypass full channels like that with select/default. Thanks for spelling it out



