I really dislike this idea of testing in Go: only ever use an interface, never the real implementation + mockgen the mocks based on this interface + use the mocks to assert that a function is called, with exactly these parameters and in this exact order.
I find these types of tests incredibly coupled to the implementation, since any change requires you to change your interfaces + mocks + tests. They're also very brittle, and many times they end up not even testing the thing that actually matters.
I try to write integration tests whenever possible now. Even if they are costly, I find the flexibility of being able to change my implementation without breaking a thousand tests for no reason much better to work with.
I'm a fan of writing tests that can be either. Write your tests first such that they can be run against the real dependencies. Snapshot the results to feed into integration-test mocks for those dependencies, so that you keep the speed benefit of a limited test scope. Re-run against the real dependencies at intervals you feel are right to ensure that your contracts remain satisfied, or just dedicate a test per external endpoint on top of this to validate that the response shape hasn't changed.
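A minimal sketch of that shape, with made-up names (Pricer, replayPricer, the USE_REAL_DEPS switch, testdata/prices.json), everything in one _test.go file for brevity; the test body is the same either way, it just swaps where the dependency's answers come from:

package pricing

import (
    "encoding/json"
    "os"
    "testing"
)

// Pricer is the dependency contract the code under test relies on.
type Pricer interface {
    Price(sku string) (int, error)
}

// replayPricer serves answers recorded from a run against the real service.
type replayPricer struct{ prices map[string]int }

func (r replayPricer) Price(sku string) (int, error) { return r.prices[sku], nil }

// newPricer returns the real dependency when USE_REAL_DEPS=1, otherwise the snapshot.
func newPricer(t *testing.T) Pricer {
    t.Helper()
    if os.Getenv("USE_REAL_DEPS") == "1" {
        t.Skip("real client not wired up in this sketch")
    }
    raw, err := os.ReadFile("testdata/prices.json")
    if err != nil {
        t.Fatalf("no snapshot found; record one against the real service first: %v", err)
    }
    var prices map[string]int
    if err := json.Unmarshal(raw, &prices); err != nil {
        t.Fatal(err)
    }
    return replayPricer{prices}
}

func TestPriceIsPlausible(t *testing.T) {
    p := newPricer(t)
    price, err := p.Price("sku-123")
    if err != nil {
        t.Fatal(err)
    }
    if price <= 0 {
        t.Fatalf("got %d; the snapshot or the contract looks broken", price)
    }
}

The snapshot path keeps day-to-day runs fast; flipping USE_REAL_DEPS=1 on whatever cadence you trust is the contract re-validation part.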
The fundamental point of tests should be to check that your assumptions about a system's behavior hold true over time. If your tests break, that is a good thing. Your tests breaking should mean that your users will have a degraded experience at best if you try to deploy your changes. If your tests break for any other reason, then what the hell are they even doing?
> I really dislike this idea of testing in Go: only ever use an interface, never the real implementation + mockgen the mocks based on this interface + use the mocks to assert that a function is called, with exactly these parameters and in this exact order.
Same, I have zero confidence in these tests, and the article even states that the tests will fail if a contract for an external service/system changes.
I see this kind of testing as more for regression prevention than anything. The tests pass if the code handles all possible return values of the dependencies correctly, so if someone goes and changes your code such that the tests fail they have to either fix the errors they've introduced or go change the tests if the desired code functionality has really changed.
These tests won't detect if a dependency has changed, but that's not what they're meant for. You want infrastructure to monitor that as well.
If you're testing the interface, changing the implementation internals won't create any churn (as the mocks and tests don't change).
If you are changing the interface, though, that would mean a contract change. And if you're changing the contract, surely you wouldn't be able to even use the old tests?
This isn't really a go problem at all. Any contract change means changing tests.
> only ever use an interface, never the real implementation + mockgen the mocks based on this interface + use the mocks to assert that a function is called, with exactly these parameters and in this exact order.
is not ideal, and that's not what we do. We test the real implementation first, and that becomes the contract. We then assume that contract when we write the mocks.
They mean the dependencies. If you’re testing system A whose sole purpose is to call functions in systems B and C, one approach is to replace B and C with mocks. The test simply checks that A calls the right functions.
The pain comes when system B changes. Oftentimes you can’t even make a benign change (like renaming a function) without updating a million tests.
Tests are only concerned with the user interface, not the implementation. If System B changes, that means that you only have to change your implementation around using System B to reflect it. The user interface remains the same, and thus the tests can remain the same, and therefore so can the mocks.
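To make that concrete, a small hypothetical sketch (Mailer, Signup and stubMailer are invented names, everything in one _test.go file for brevity): the test only asserts on A's user-visible behavior through a port A owns, so if System B renames a function, only the adapter behind Mailer changes, never this test or the stub.

package signup

import "testing"

// Mailer is the port System A owns; the real adapter wraps System B.
type Mailer interface {
    Send(to, subject string) error
}

// Signup is the user-facing behavior under test (it would normally live in a non-test file).
func Signup(m Mailer, email string) error {
    return m.Send(email, "Welcome!")
}

// stubMailer records what A asked for, nothing more.
type stubMailer struct{ sent []string }

func (s *stubMailer) Send(to, subject string) error {
    s.sent = append(s.sent, to)
    return nil
}

func TestSignupSendsWelcomeMail(t *testing.T) {
    m := &stubMailer{}
    if err := Signup(m, "user@example.com"); err != nil {
        t.Fatal(err)
    }
    if len(m.sent) != 1 || m.sent[0] != "user@example.com" {
        t.Fatalf("expected one welcome mail to the new user, got %v", m.sent)
    }
}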
I think we’re in agreement. Mocks are usually all about reaching inside the implementation and checking things. I prefer highly accurate “fakes” - for example running queries against a real ephemeral Postgres instance in a Docker container instead of mocking out every SQL query and checking that query.Execute was called with the correct arguments.
> Mocks are usually all about reaching inside the implementation and checking things.
Unfortunately there is no consistency in the nomenclature used around testing. Testing is, after all, the least understood aspect of computer science. However, the dictionary suggests that a "mock" is something that is not authentic, but does not deceive (i.e. not the real thing, but behaves like the real thing). That is what I consider a "mock", but I'm gathering that is what you call a "fake".
Sticking with your example, a mock data provider to me is something that, for example, uses in-memory data structures instead of SQL. Tested with the same test suite as the SQL implementation. It is not the datastore intended to be used, but behaves the same way (as proven by the shared tests).
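Roughly, the shared-suite idea looks like this (Store, memStore and testStoreContract are hypothetical names, all in one _test.go file):

package store

import "testing"

// Store is the contract every implementation must satisfy.
type Store interface {
    Put(key, value string) error
    Get(key string) (string, bool, error)
}

// memStore is the in-memory stand-in: not the real datastore, but it behaves
// like one, and the shared suite below is what proves that.
type memStore struct{ m map[string]string }

func newMemStore() *memStore { return &memStore{m: map[string]string{}} }

func (s *memStore) Put(k, v string) error { s.m[k] = v; return nil }

func (s *memStore) Get(k string) (string, bool, error) {
    v, ok := s.m[k]
    return v, ok, nil
}

// testStoreContract is run against every implementation of Store.
func testStoreContract(t *testing.T, s Store) {
    t.Helper()
    if err := s.Put("a", "1"); err != nil {
        t.Fatal(err)
    }
    v, ok, err := s.Get("a")
    if err != nil || !ok || v != "1" {
        t.Fatalf("Get after Put: got (%q, %v, %v)", v, ok, err)
    }
}

func TestMemStore(t *testing.T) { testStoreContract(t, newMemStore()) }

A TestSQLStore would call testStoreContract with the SQL-backed implementation, so both are held to the same behavior.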
> checking that query.Execute was called with the correct arguments.
That sounds ridiculous and I am not sure why anyone would ever do such a thing. I'm not sure that even needs a name.
The last batch of juniors we hired just completed 4 years in the company, which I would say is a pretty successful batch, but sadly we haven't hired juniors since.
Edit: I must qualify that this is for software developers only; we did hire juniors for roles like data engineering, security, IT and such.
Implement the simplest thing that works, maybe even by hand at first, instead of adding the tool that does "the whole thing" when you don't need "the whole thing".
Eventually you might start adding more things to it because of needs you hadn't anticipated; do it.
If you find yourself building the tool that does "the whole thing" but worse, then now you know that you could actually use the tool that does "the whole thing".
Did you waste time not using the tool right from the start? That's almost a philosophical question: now you know what you need, you had the chance to avoid the tool if it turned out you didn't need it, and maybe 9 times out of 10 you will be right.
As far as I know there is no way to do Promise-like async in Go; you HAVE to create a goroutine for each concurrent async task. If this is really the case, then I believe the submission is valid.
But I do think that spawning a goroutine just to do a non-blocking task and get its return value is kinda wasteful.
You could in theory create your own event loop and then get the exact same behaviour as Promises in Go, but you probably shouldn't. Goroutines are the way to do this in Go, and it wouldn't be useful to benchmark code that would never be written in real life.
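For reference, the goroutine-per-task version being discussed would look roughly like this (the function name is mine and the count is arbitrary; the 10-second sleep just mirrors the timer snippet further down):

package main

import (
    "sync"
    "time"
)

// test1 spawns one goroutine per concurrent sleep and joins them with a
// WaitGroup; the sleep stands in for any asynchronous task.
func test1(count int) {
    var wg sync.WaitGroup
    wg.Add(count)
    for i := 0; i < count; i++ {
        go func() {
            defer wg.Done()
            time.Sleep(10 * time.Second)
        }()
    }
    wg.Wait()
}

func main() {
    test1(100000) // pick whatever count you want to measure
}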
I guess what you can do in Go that would be very similar to the Rust impl would be this (and it could be helpful even in real life, if all you need is a whole lot of timers):
func test2(count int) {
    // Start count 10-second timers up front, then wait on each one's channel.
    timers := make([]*time.Timer, count)
    for idx := range timers {
        timers[idx] = time.NewTimer(10 * time.Second)
    }
    for _, timer := range timers {
        <-timer.C
    }
}
This yields a maximum resident set size of 263552 kbytes according to /usr/bin/time -v.
I'm not sure if I missed it, but I don't see the benchmark specify how the memory was measured, so I assumed the time -v.
I think I tested this very casually some time ago with Go maps, and up to around one hundred items a linear search over an array was faster than a map lookup. Considering that the maps we use for convenience often hold fewer than a hundred items, this could be useful.
Unfortunately I don't have the results (or the test code) anymore, but it shouldn't be hard to do again (casually at least).
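Something along these lines would do for a casual re-run (a hypothetical throwaway benchmark in a _test.go file, n=100 int keys; results will obviously vary by machine and Go version):

package smallmap

import "testing"

const n = 100 // roughly the "up to one hundred items" range

var (
    keys  []int
    table map[int]int
    sink  int // keeps the compiler from eliding the lookups
)

func init() {
    keys = make([]int, n)
    table = make(map[int]int, n)
    for i := 0; i < n; i++ {
        keys[i] = i
        table[i] = i
    }
}

func BenchmarkLinearScan(b *testing.B) {
    for i := 0; i < b.N; i++ {
        want := i % n
        for _, k := range keys {
            if k == want {
                sink = k
                break
            }
        }
    }
}

func BenchmarkMapLookup(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sink = table[i%n]
    }
}

go test -bench=. prints the two timings side by side.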
Not sure about this particular instance. LTT got a video taken down for a community guidelines violation just the other day, and they are much bigger than Jeff.
I was thinking about that just the other day, how it would be really cool if Go had compile-time code execution. I think Jai is making that a very prominent feature of the language.
I don't know if there's ever been a language that was announced 10 years before its first closed beta. I don't even know if having a "closed beta" for a language is something that really happens.
So there's a lot that is different with Jai, it's more like a highly anticipated game than a language.
That's pretty much it. Jonathan Blow's strongly worded negative opinions on most things attract a certain kind of person who will go on to evangelize.
Yes, I guess. I wouldn't be surprised to see "Jai inspired" languages coming out before Jai itself, since some of the ideas look pretty good.
> How does one even get the compiler and build programs ?
Don't know how seriously you're asking this, but I believe you can apply to use it, and if Jon likes your credentials and what you want to try it on he might let you have some version of the compiler. Don't know why he won't just open source it and say it's an early version subject to change, but game development is not usually very open source friendly compared to web dev.