What makes languages like Zig, Rust, C, and C++ the best fit for cross-platform applications over many garbage-collected languages? Why is bringing the language runtime a problem?
What does it mean to compile to a C-compatible library?
ChatGPT's responses start out accurate but quickly go off the rails. The section from this point onwards is completely incorrect:
> Say I called a bunch of goroutines when I was in the Add function of the example you gave, would this be a problem?
The Go runtime is initialised only once in c-shared mode, for the lifetime of the application - it would make no sense to do it on every function invocation, and it would be incredibly slow. So the answers to this section and the next one are largely bogus (see the sketch at the end of this comment).
i.e. this response:
> However, once you call a function via a C or Swift bridge, it becomes a synchronous operation and will block the calling thread until all goroutines have completed execution. Therefore, you would need to effectively manage the synchronization of these goroutines to avoid unnecessary blocking of the calling thread.
And the response to this question:
> You said in 4 the Go runtime may not keep running, does this mean that every invocation of the Add function has to spin up the whole Go runtime every time? Why can't it just stay alive inside the Swift process?
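To make the actual behaviour concrete, here is a minimal sketch of a Go library built with -buildmode=c-shared (the Add function and the goroutine body are illustrative, not taken from the ChatGPT transcript): the runtime starts once when the library is loaded, the exported call returns as soon as the Go function returns, and any goroutines it started keep running in the background.

```go
// Build with: go build -buildmode=c-shared -o libadd.so
package main

import "C"

import (
	"fmt"
	"time"
)

//export Add
func Add(a, b C.int) C.int {
	// Goroutines started here keep running after Add returns; the Go
	// runtime lives for the lifetime of the loaded library, it is not
	// spun up per call and the call does not block waiting on them.
	go func() {
		time.Sleep(100 * time.Millisecond)
		fmt.Println("background goroutine still running after Add returned")
	}()
	return a + b
}

// Required by -buildmode=c-shared; never called when loaded as a library.
func main() {}
```

If the caller actually needs the goroutines' results before Add returns, you have to wait for them explicitly (e.g. with a sync.WaitGroup); otherwise the exported function and the goroutines are simply independent.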
Pretty much every modern language (Zig, Rust, C, and C++ included) depends on a runtime. The C runtime is privileged because it is already present on all 3 desktop OSes.
It is also a lot smaller than most other runtimes, which makes bundling the C runtime with the program more palatable.
A "C-compatible library" is a library (i.e. a collection of functions) that is callable in the same way that functions written in C are called. Nearly all non-C languages provide a way to call C functions (because, again on all modern desktop OSes, the operating-system interface is written in C).
If everyone wrote OS interfaces in Perl, then you would want to compile to a Perl-compatible library. If the Lisp machines had won, then you would be compiling to a Common Lisp-compatible library.
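Going the other direction, a hedged sketch of what "nearly all non-C languages can call C functions" looks like in practice, using Go's cgo (the hello function is invented purely for illustration):

```go
package main

/*
#include <stdio.h>

static void hello(void) {
    puts("hello from C");
}
*/
import "C"

func main() {
	// cgo exposes anything declared in the preamble above under the
	// pseudo-package "C"; this is the "call C-compatible functions"
	// path that most languages provide in some form.
	C.hello()
}
```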
That somewhat conflates the calling convention and the runtime. C really doesn't have much that you could call a runtime outside the calling convention, although some libraries (e.g. pthreads) have a little runtime which matters in practice for integrations; but those are libraries, not parts of the language itself.
> The C runtime is privileged because it is already present on all 3 desktop OSes.
Yes, Ubuntu, Debian and Fedora.
On Windows, you don't get a C runtime; you ship an MS Visual Studio runtime DLL if you're using Microsoft's tools (and not static linking), or something else with someone else's tools; maybe a CYGWIN1.DLL or whatever.
You get platform libraries like kernel32.dll and user32.dll, but those are not a C runtime. They are easy to call from C, but other than that, they are the OS runtime.
Recently Microsoft has made an effort to create a "Universal C Runtime" for Windows; but I think you still have to download and ship that, and there may be reasons for someone to choose a different runtime. (E.g. needing a pretty detailed POSIX implementation.)
Microsoft's new Universal C Runtime addresses the problem of there not being a C library on Windows that is for public use (every compiler vendor had to provide their own).
Curious to hear from experts in the above whether ChatGPT's response is valid.
As someone who knows nothing about either, looking at a nuanced response from ChatGPT puts me in awe - esp. the response to the question: "Say I called a bunch of goroutines when I was in the Add function of the example you gave, would this be a problem?"
As I responded in a sibling comment, this is the point where ChatGPT goes completely off the rails and starts fabricating responses. Temper your awe :)
Does anyone know of other tools like this: a self-hosted web UI for systemd logs on a VPS, accessed from a website and secured behind a password?
I use New Relic for APM already, but I was surprised to find that New Relic's default infrastructure agent doesn't support Debian 11 for forwarding logs.
I did have a quick play with Grafana Cloud, but the default dashboards for their Linux agent were so incredibly slow when trying to navigate simple metrics.
That has not been my experience whatsoever. At work we run our own self-hosted open-source Grafana and it's been fantastic. I've also run it on my own Raspberry Pi 4B at home and it's worked very well.
Yeah, that's not great. The interface seems snappier for me, but I may just be partial. I dunno. There is no perfect solution, but Grafana has been pretty amazing in my experiences.
Even if you fix where the backend is and use something like edge workers around the world, you still run into the issue of where the database is hosted, making all that work useless. Any useful endpoint is going to change some state, like the timesheet app.
I very much like the ethos of Golang for this reason. I've still not had a reason to use it, but I like the idea of mastering the fundamentals in a weekend, even if I lose the flexibility of LINQ or Java streams.
It feels like a scale of language conservativeness: Go all the way at the top, Java somewhat in the middle (a little above), and C# at the bottom. C# is going the route of lots of features and complexity, which can be a great thing, but not for grug devs (https://grugbrain.dev) like me.
But saying all that, Blazor looks really good for web dev; I just worry it gets abandoned. It feels like everything does in the C# space.
C# also made a big mistake imo by going with async/await instead of lightweight threads, which will add a ton of complexity in the future if they decide to go the green-thread route like Goroutines/Project Loom.
> C# also made a big mistake imo by going with async/await instead of lightweight threads, which will add a ton of complexity in the future if they decide to go the green-thread route like Goroutines/Project Loom.
Could you expand on this? Async/await is just syntax magic for Task continuations (in other words, Promises [0]), which have very little to do with the underlying threading model. This statement is equivalent to saying "Completable Futures add a ton of complexity to Project Loom."
Yes, I understand the function coloring "problem" (oh no, functions need to specify in their signature whether they return results immediately or eventually). Regardless, I still don't understand how this prevents green threads a la Project Loom; if you have a function that returns a `CompletableFuture` in Java, it also needs to change its signature.
IIRC, the statement was from some Java blog about Loom. The idea is that with lightweight threads you can make everything sync and still be performant, while C# has gone ahead with making everything async.
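A rough sketch of that "everything stays synchronous" style in Go (the URLs are placeholders): the fetch function is written as ordinary blocking code, and concurrency comes from spawning goroutines around it rather than from changing its signature to be async.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

// fetch looks like plain blocking code; the Go scheduler parks the
// goroutine while the request is in flight, so no async/await-style
// signature change is needed.
func fetch(url string) (int, error) {
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return len(body), err
}

func main() {
	urls := []string{"https://example.com", "https://example.org"}
	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			n, err := fetch(u) // concurrency lives at the call site
			fmt.Println(u, n, err)
		}(u)
	}
	wg.Wait()
}
```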
I understand the difference in approaches. However, the parent stated that this decision makes green threads harder in C#, which is what I don't understand.
Are immediate mode GUIs just that much better? Instead of a complex React/Redux-style setup, how much easier would state management be if we had a render loop like game dev? Does that even make sense?
I am very envious of the pure programming skills of so many game developers. UIs in indie games are just a side thing amid the deep complexity of a game, and they end up looking incredible compared to the level of effort required for year-long web app projects.
They've drifted away from calling it that, but the whole idea behind React was to bring immediate-mode rendering to the web. It has an event loop where each "frame" you render the current state from scratch, rather than explicitly updating the state of retained controls.
I use Zui with Kha/Haxe - an immediate mode GUI library. I cannot overstate how much immediate mode GUI simplifies UI development. React is a massive step up from OOP/scenegraph/display-list style GUI, but it is still so complex and cumbersome compared to immediate mode. I can't fathom why there is so little exploration in this space and why it seems mostly limited to gamedev.
Immediate mode GUI makes perfect sense for applications that already repaint themselves 60 (or more) times per second, as they're already doing the work and there's little overhead added by handling such a GUI. I can deal with a 3D editor working this way, but I don't think I would be very happy if my e-mail client did.
That's just an implementation detail. The main benefit of an immediate mode UI is the API for using it. There's no reason that the loop has to run at 60 FPS. It could even be event-based, so it only updates when the user does something, for example.
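To make the API point concrete, here is a tiny, self-contained sketch of the immediate-mode pattern in Go; the UI type is invented purely for illustration and is not Zui's or Dear ImGui's actual API. The key idea is that the interface is re-declared from application state on every pass of the loop, whether that loop runs per frame or per event.

```go
package main

import "fmt"

// UI is a stand-in for an immediate mode context: it has no retained
// widget tree, just the input state for the current pass.
type UI struct{ clicks map[string]bool }

// Button "draws" a button and reports whether it was clicked this pass.
func (u *UI) Button(label string) bool { return u.clicks[label] }

// Label "draws" a text label (here it just prints).
func (u *UI) Label(text string) { fmt.Println(text) }

func main() {
	ui := &UI{clicks: map[string]bool{}}
	count := 0

	// In a real app this loop runs per frame or per input event;
	// here we fake two passes and simulate a click on the second.
	for pass := 0; pass < 2; pass++ {
		ui.clicks["increment"] = pass == 1

		// The UI is simply a function of the current state, every pass:
		ui.Label(fmt.Sprintf("count = %d", count))
		if ui.Button("increment") {
			count++
		}
	}
	ui.Label(fmt.Sprintf("final count = %d", count))
}
```

There is no widget object to keep in sync with the data; the if-statement around Button is the entire event-handling code, which is where the "simpler state management" claim comes from.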
Yes and no. That API makes certain approaches easier and others harder. Even if you manage to render only the changed parts of the screen, you still execute all your UI logic all the time so the renderer can be aware of what changed in the first place. Also, more involved stuff like animations becomes much more complex once you go event-based, to the point where a retained mode UI may end up being a better choice from the API perspective. It all depends on what exactly you're implementing, and games often are a natural fit for immediate mode as they usually allow you to keep it simple with no real downsides.
I don’t have much understanding of the internals of the immediate mode renderers, but I think there are optimisations to only redraw regions where component inputs have changed.
You typically have to implement those optimizations yourself, and most people don’t. It’s outside the scope of the renderer in most cases for it to decide what should or should not be rendered. As a result, immediate mode GUI, while fast to develop, typically really kills a battery life on mobile.
I feel another point is that early tech decisions have a huge impact on the type of people you will have to hire. For example, Ruby attracts ruthless productivity, and Go attracts people who prefer longer-standing apps with fewer dependencies and better maintainability.
I make it sound like Go may be the 'better' choice here, but that is not the case; as the author mentions, it's a balance.
This brings up another point I hope someone tries to solve in programming. Every time a new language comes out we have to recreate millions of baseline libraries and it just sucks. As a dev I want to be able to make use of great libraries oblivious to what they are created with.
> Every time a new language comes out we have to recreate millions of baseline libraries and it just sucks. As a dev I want to be able to make use of great libraries oblivious to what they are created with.
Technically this tool mostly does exist already with the OpenAPI specification, if we're talking about REST APIs. If you as the API provider put in the legwork to create a very detailed specification (a YAML file), you can generate language-specific SDKs out of it, as long as the language has OpenAPI tooling that can consume the spec accurately.
Stripe has publicly mentioned[0] they mostly use this spec to generate their SDKs (even as of a few years ago), but I guess auto-generated code still requires some developer time and there's a quality assurance level of "hey we're dedicated to internally supporting this". It's a huge deal having a provider internally support your language's SDK.
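For reference, a spec in that vein might look roughly like the sketch below (the paths and fields are invented for illustration, and this is not Stripe's actual spec); a generator such as openapi-generator can then emit an SDK for a particular language from it.

```yaml
openapi: "3.0.3"
info:
  title: Example Timesheet API   # illustrative only
  version: "1.0.0"
paths:
  /timesheets:
    get:
      operationId: listTimesheets
      responses:
        "200":
          description: A list of timesheet entries
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    id:
                      type: string
                    hours:
                      type: number
```

The more precisely the schemas are described, the less hand-editing the generated SDK tends to need, which is where the "leg work" really goes.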
Oh, that is interesting; I guess they just spun up a beefy EC2 instance. I'm noticing slower performance, though: I used to get under 200ms for the front page, now it's 500ms-1s. Or is this placebo from my bias toward thinking AWS is slow?