It would be helpful to have it more prominently displayed; after landing and learning a bit about the product, the list of locations is the second thing you want to look at.
I use Cloud Firestore. It's also serverless and really easy to use for building SPAs, since you can do a lot with just frontend code and trigger Cloud Functions when things in the database change.
I do wish there was a serverless SQL option like AWS has, but there's nothing yet. You can always use Cloud SQL, but as others have mentioned, you need to access it over a public IP (for now).
Not OP, but I can set up the DB of my choice on an instance or group of instances and access it directly from the Cloud Run containers. Our main stack is on MongoDB 3.4, so we deployed Atlas in the VPC and the containers access it just fine.
Looks like they really plan to move forward with the `try` function... I'm personally not a fan of it; I hope they listen to the feedback they received on the GitHub issue and don't just push it through like the error inspection proposal [0].
This looks pretty much exactly like the old try!(expr) macro in Rust, which has since been replaced with the postfix `?` operator.
One crucial difference is that the proposal uses a named `err` return value that can be manipulated in defer. This is supposed to allow cleanup and wrapping of the error type.
Rust solves wrapping by allowing auto-conversion between error types (if they implement the conversion).
My first intuition is that it would be an improvement over omni-present `if err != nil {}`, but it feels somewhat awkward and tacked on. Especially the mutable `err` return value.
Of course there also was a reason why try! was replaced with `?`: awkward nesting and chaining. Go would have the same problem.
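To make the chaining concern concrete, here is a sketch of how nested fallible calls would read under the proposal's syntax (not valid Go today; `open`, `read`, and `parse` are hypothetical functions):

```go
// Proposal-style sketch, not compilable Go: each fallible call must be
// wrapped, so chains nest inside-out, just like Rust's old try! macro.
data := try(parse(try(read(try(open(path))))))

// Rust's postfix `?` reads left to right instead:
//     let data = open(path)?.read()?.parse()?;
```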
To be clear, you can name the return value whatever you want. People just typically name their error values `err`. Also, the mechanisms to mess with named return values in defer already exist: https://play.golang.org/p/7nFyiuAa3Ra
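For readers unfamiliar with the pattern, here is a minimal, self-contained version of what that playground link shows: a deferred closure inspecting and wrapping the named error return (`readConfig` and its error message are made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// readConfig demonstrates the existing mechanism: a deferred closure
// can read and modify the named return value err before the caller
// sees it, which is how wrapping would work under the try proposal.
func readConfig(path string) (cfg string, err error) {
	defer func() {
		if err != nil {
			err = fmt.Errorf("readConfig(%q): %w", path, err)
		}
	}()
	return "", errors.New("file not found")
}

func main() {
	_, err := readConfig("app.conf")
	fmt.Println(err) // readConfig("app.conf"): file not found
}
```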
Just to clarify, I was aware that named returns already exist in Go.
My main point is that this somewhat weird pattern is encouraged by the proposal since no other wrapping mechanism exists. And IMO wrapping is really essential for debuggable code.
I totally agree that good error wrapping is essential. In fact, I've been using this exact system for wrapping my errors for years, and it's been great (https://godoc.org/github.com/zeebo/errs#WrapP).
I suspect it's only "somewhat weird" due to lack of familiarity, and that adding an additional mechanism to wrap errors when one already exists is not in the spirit of a language built from small orthogonal components.
I don't like that it's mutable state that is easy to shadow accidentally (tooling can warn you about it, but that's suboptimal).
Also, in a function with a couple of different error types, do you end up checking the error type manually and reacting accordingly, all in one final defer? That seems error prone. And it doesn't work at all if multiple statements can produce the same error - you won't know which statement caused it.
And, last but not least, this would lose proper backtrace information: the backtraces would all point to the defer line rather than separate lines for each error.
You can’t naked return with a shadowed named return variable. The compiler disallows it. Thus, at any return site, you locally know that you’re either returning the only err in scope (the named return), or the specific value listed in the return will be assigned to the named return variable.
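A quick illustration of that compiler rule (a sketch that deliberately does not compile; `cond` and `doWork` are placeholders):

```go
// The naked return below is rejected by the compiler with
// "err is shadowed during return":
func f() (err error) {
	if cond {
		err := doWork() // shadows the named return value
		if err != nil {
			return // compile error here
		}
	}
	return
}
```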
I don’t know what error types has to do with it. If you need to annotate differently for every exit point, sure, a defer doesn’t work, but neither would any other proposals I’ve seen. In those cases, do the more verbose thing because the verbosity is apparently warranted. In my experience, it is rarely warranted, and a stack trace with annotation about the package/general operation is sufficient.
The stack traces contained still include what line the return happened on when queried from inside the defer. They retain the information about which return executed. My, or any, helper could, if desired, explicitly skip the defer stack frame, and it would be indistinguishable from capturing at the return itself.
> Also, in a function with a couple of different error types, do you end up checking the error type manually and reacting accordingly, all in one final defer?
If a function needs to check the error returned by another function and act accordingly, then don't use try(); use an if statement instead.
If we just need to decorate the errors with proper context before returning them to the caller, then we use try everywhere and a single defer statement for the decoration. We can use a single defer statement because we expect the error context to be the same for the whole function. See this comment by Russ Cox: https://github.com/golang/go/issues/32437#issuecomment-50329...
> this would lose proper backtrace information: the backtraces all point to the defer line rather than separate lines for each error
It looks like instead of making a better error handling system, they just made it easier to not type `if err != nil` everywhere. Then there's all that handler stuff that looks very much like Java's `try ... catch` in reverse order.
Pretty underwhelming for what's supposed to be a modern language.
"Then there's all that handler stuff that looks very much like Java's `try ... catch` in reverse order."
The key characteristic of Go's error handling is that you have to handle errors in the scope in which they occur, vs. exception handling, which is designed around throwing errors up the stack until something finally handles it, often quite distant from the point of the error and lacking context.
"try" just re-spells that. It isn't a step towards exception handling; it's exactly as "exception-handle-y" as if err != nil { return err } already is, whatever value you may consider that to be. Part of the goal is to make correct handling where you actually do something with the error that much easier, instead of having to do something essentially unrefactorable for every error, through a combination of allowing error handling to be factorable in this new scheme, and some other changes to errors to add more structure by convention to create official ways of composing them together and examining these composed errors in sensible ways.
The confusing part is the underlying assumption that there exists a way to "handle" errors, understood as a way to work around the error and somehow continue execution. This is futile most of the time; there is often nothing to "handle" other than aborting execution and reporting the condition.
For example, consider a bug that causes a data structure invariant to be violated. The correct "handling" of the situation is to fix the bug and rerun the code, not add layers and layers of error "handling" code ahead of time.
One of the things the Go community has encountered is that chaining together too many "if err != nil { return err }" blocks basically returns you to the exceptions problem of having no context around the error anymore, except now you don't even have a stack trace to help you out. (This is what I meant by the bare "if err != nil { return err }" being "exception-y", only, if you like, unambiguously worse if the code base is full of that literal block.)
"Handling" an error includes further annotating it with information about why the code in question couldn't fix it, and this is probably the most common case. (I hedge only because we humans are actually really bad at judging such things, with our availability heuristic bias and other biases. I'm fairly confident this would indeed be the #1 case, but I've been wrong about this sort of thing in the past when I actually went to check.) We don't mean "fix" the error, just... "handle". Ideally you end up with a composite error object (as I said, in conjunction with a few other library-level things that are likely to get pulled up to official support and culture) that contains much more information about the error, and that if you do end up flinging it up to higher level code, you're leaving it with more options for understanding the resulting problems and dealing with them.
The task of an error "handling" mechanism could be then described as:
A. Create an error object, enter "error mode".
B. Annotate an error object with contextual information at each call point.
C. Return the error object up the call stack.
D. Translate from "error mode" into "value mode", producing the annotated error object as a value.
For languages with exceptions, B is automatic but limited to stack traces, and C is automatic. A and D are obtained via special language constructs, for example "raise ... / try: ... except E as ex: ...".
For Go [please excuse my almost total ignorance], B is manual but arbitrarily expressive, and C is manual but terse via "try(...)", whereas A and D are done using a combination of standard language constructs and style conventions.
Assuming the above is correct, perhaps there is some reasonable design that automates B for the most common use cases. In particular, logging the invocation arguments at D makes it trivial to re-run the offending code in a debugger, with a full stack trace and invocation arguments. Wrapping most function calls with "try(...)" is annoying but manageable, whereas thinking about what information should be carried by the error object on a case-by-case basis is a waste of brain cycles.
The example was a worst case example. There are weaker versions thereof, for example the familiar:
    def handle(request):
        try:
            return Response(200, process(request))
        except UserError as ex:
            return Response(400, ex.message)
        except InternalError:
            return Response(500)
The principle stays the same, there is not much to "handle" and no option to recover without external help, either by providing a well-formed request for 400 errors or by providing well-behaving code for 500 errors.
If an error is unrecoverable, panic. If all you're trying to do is generate a 500 error, net/http even handles the panic for you. If you're a library author and unsure of whether your callers want a panic or not, provide a normal and an (idiomatic) "Must" version that panics on the error.
There are legitimate criticisms to be made of how Go's error handling works, but I think the language already handles the case you're talking about.
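The standard library already models this convention, e.g. regexp.Compile vs. regexp.MustCompile; a sketch of the same pattern for a hypothetical library function (`ParsePort`/`MustParsePort` are invented here):

```go
package main

import (
	"fmt"
	"regexp"
)

// ParsePort returns an error for callers who want to handle it.
func ParsePort(s string) (int, error) {
	var p int
	if _, err := fmt.Sscanf(s, "%d", &p); err != nil || p < 1 || p > 65535 {
		return 0, fmt.Errorf("invalid port %q", s)
	}
	return p, nil
}

// MustParsePort panics on a bad value, following the idiomatic "Must"
// convention for cases (like init code) where the error is unrecoverable.
func MustParsePort(s string) int {
	p, err := ParsePort(s)
	if err != nil {
		panic(err)
	}
	return p
}

func main() {
	// regexp.MustCompile panics at startup if the pattern is wrong.
	re := regexp.MustCompile(`^[a-z]+$`)
	fmt.Println(re.MatchString("hello"), MustParsePort("8080")) // true 8080
}
```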
I see JavaScript went mainstream for all sorts of clients and servers, and Electron took desktop apps mainstream.
So I remain skeptical of the argument that just because new features have been invented, they are good or need to be implemented everywhere.
Been a fan of JS since before The Good Parts book. I used it server-side in Classic ASP, Netscape Server, and a couple of more obscure runtimes before Node.js.
Personally, I find Rust more approachable and easier to wrap my head around than Go, though there are some syntax choices I don't like as much. Waiting on async/await to land in a couple of months, though.
No surprise there. Rust is the most loved language in so many surveys, and the one most devs plan to learn in the near future. Many say it will be polished and ready by next year. I personally feel it may be ready for general developers the same way the Linux desktop will be ready for general users next year.
I think it's already ready for a lot of use cases... though I think baked-in async/await will carry it across the line for many. It's good for low-level duties and even has some decent/good web server frameworks. The WebAssembly flow is better than a lot of tools as well. I think there will be some work around nicer UI/UX tooling for graphical apps, but it's still very usable as it is.
I'm learning it now. The async / await stuff is great to have in the core, but isn't something that was otherwise holding me up from learning it.
I believe that after I fully integrate the Zen of Rust, I will be writing multi-threaded programs with fewer bugs than I would in my alternative languages (C, C++, Go, Python, Lua).
The annoying things in Go that have accumulated in my mind over the last few years are all dealt with in a superior fashion in Rust today.
I'll still be using Go for some stuff at work, but I won't be starting personal projects in it, like I might have in the past.
I don't care about the features from the last 20 years if I can do my job efficiently, which Go as a language lets me do. Ever wonder why those great academic languages with a ton of features aren't adopted?
I would say it is a modern language, or attempts to be one, as it was designed recently and was able to take lessons from a variety of older languages. Rust and Go are probably the most popular general-purpose, high-performance modern languages. Swift and TypeScript are also pretty modern.
Sorry, I said that poorly. I mean that the problem `try` is fixing is that all the `if err != nil ...` checks clutter up the code, and they are introducing `try` to clean that up.
They aren't really moving forward with it; they are implementing an experimental version of it so people can try it in real code. Many things are added temporarily when a new version is in development. E.g., the error-handling changes they mentioned were implemented, and then later most of it was removed before the feature freeze.
So there's still plenty of time to stop the proposal.
My main beef with this and other proposals is that they don't clear up a fundamental flaw in Go: Multi-value returns with errors as a poor man's sum type.
Most functions in Go have a contract that the return values are disjoint, or mutually exclusive — they either return a valid value or an error:
    s, err := getString()
    if err != nil {
        // It returned an error, but "s" is of no use
    }
    // s is valid
This pattern is so ingrained that there's hardly a single Go doc comment on the planet that says "returns value or error". The same goes for the "v, ok := ..." pattern.
But this is just a pattern, not universally true. A commonly misunderstood contract is that of the `Read` method of `io.Reader`, which says that when `io.EOF` is returned, the returned count must still be honoured, since a read can return data and `io.EOF` together. This is an outlier, but because the convention of disjointness is so widely adopted, many developers assume it (it's trivial to find repos in the wild [1] that make this mistake), and so this is, in my opinion, bad API design.
(As an aside, it's also true that multi-value returns beyond two values almost always become cumbersome and impractical, especially if said values are _also_ mutually exclusive. Structs, having named fields, are almost always better than > 2 return values.)
This kind of careless wart is typical of Go, just like other surprising edge cases like nil channels (or indeed nil anything).
I would much rather see a serious stab made at supporting real sum types, or at least mutually exclusive return values. For example, I could easily see this as being a practical syntax:
    func Get() Result | error {
        ...
    }
This union syntax showed up in the Ceylon language, and it's a neat pattern for a conservative language that doesn't want to venture into full-blown GADTs.
Such a syntax would be a much better match for a try() function, since there's no longer any doubt about the flow of data — there's never a result returned with an error, it's always either a result or an error:
    result := try(Get())
or simply support existing mechanisms for checking:
    if err, ok := Get().(error); ok {
        ...
    }

    if result, ok := Get().(Result); ok {
        ...
    }

    switch t := Get().(type) {
    case Result:
        // ...
    case error:
        // ...
    }
I'd love to see a `case` syntax that allows real local variable names:
    switch Get().(type) {
    case result := Result:
        log.Printf("got %d results", len(result.Items))
    case err := error:
        log.Fatal(err)
    }
And of course, you could have more than two values:
    switch Get().(type) {
    case ParentNode:
        // ...
    case ChildNode:
        // ...
    case error:
        // ...
    }
The Go compiler can be strict here and require that every branch be satisfied or that there's a default fallback, although some might prefer that to be a "go vet" check.
A full-blown sum type syntax would be awesome, though I know it's been discussed before, and been shot down, partly for performance reasons. Personally, I think it's solvable. I'd love to be able to do things like:
    type Expression Plus | Minus | Integer

    type Plus struct { L, R Expression }
    type Minus struct { L, R Expression }
    type Integer struct { V int }
I like Rust a lot, and I'm doing a couple of projects in it.
But Rust is an advanced language. In the company I work for, Rust would be a no-go simply because some developers would struggle with it too much.
Go hits a nice sweet spot. You don't need to worry too much about whether something is on the heap or stack, or what the overhead of copying a struct is. It's conducive to incremental "sculpting": You can write a broad, naive implementation where you even wing it a bit with types first, and then slowly fill in detail, refining types (to the extent Go lets you), locking down performance, and so on.
My feeling with Rust is that you start and end with just one level of granularity: You can't really defer the implementation of lifetimes and copying semantics and so on until later; you have to add clone() at the beginning, whereas Go is all copy by default (with the exception of interfaces).
But yes, Rust's enums are much nicer than what Go has.
Interesting to go with Monad "by convention". They won't build in an explicit type, but rather rely on convention (placing err as the last return value and giving it a certain shape) to replicate that behavior.
Yep. Being practical and simple with easy to remember rules seems to be more of a value to the designers than to build a language with “features from Ivory Towers” (in the Haskell sense anyway).
The designers hope to give Go users “what’s needed to write good code” without having the learning curve of other languages.
I suspect this balance is hard to strike at times and I rather admire their efforts.
It was abandoned in favor of the try proposal, mostly because the handle statement was becoming partly redundant with the defer statement, but with subtle differences.