khazit's comments | Hacker News

We dismissed using journalctl at the very start. We’ve had similar experiences with other CLI tools: the moment you start embedding them inside a program, you introduce a whole new class of problems. What if journalctl exits? What if it outputs an error? What if it hangs? On top of that, you have to manage the subprocess lifecycle yourself. It’s not as easy as it may seem.

You can also argue that sd_journal (the C API) exists for this exact reason, rather than shelling out to journalctl. These are technical trade-offs; it doesn't mean we're fuckups.


> You can also argue that sd_journal (the C API) exists for this exact reason, rather than shelling out to journalctl.

Quoting from https://systemd.io/JOURNAL_FILE_FORMAT/

> If you need access to the raw journal data in serialized stream form without C API our recommendation is to make use of the Journal Export Format, which you can get via journalctl -o export or via systemd-journal-gatewayd.

Certainly sounds like running journalctl, or using the gateway, is a supported option.


Does Go really not have any libraries capable of supervising an external program? If you'd considered journalctl, why didn't you mention it in the article? As many have pointed out here, it is the obvious and intended way to do this, and the path you chose was harder for reasons that seemed to surprise you but were entirely foreseeable.


JFTR, of course it has a library for it https://pkg.go.dev/os/exec
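The "what if it exits/hangs" cases from upthread boil down to a restart loop. A sketch of that pattern with os/exec (using `false`/`true` as stand-ins, since journalctl may not exist on the machine running this; a real supervisor would also stream stdout, use exponential backoff, and distinguish clean exits from crashes):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// supervise runs a command and restarts it whenever it exits with an
// error, giving up after maxRestarts restarts.
func supervise(name string, args []string, maxRestarts int) error {
	var err error
	for attempt := 0; attempt <= maxRestarts; attempt++ {
		cmd := exec.Command(name, args...)
		if err = cmd.Run(); err == nil {
			return nil // clean exit, nothing to restart
		}
		time.Sleep(50 * time.Millisecond) // crude fixed backoff
	}
	return fmt.Errorf("gave up after %d restarts: %w", maxRestarts, err)
}

func main() {
	// "false" always exits non-zero, so the restart budget is exhausted.
	fmt.Println(supervise("false", nil, 2))
}
```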


There is an existing pure Go library [1] written by someone else. The issue is that we weren’t confident we could ship a reliable parser. We even included an excerpt from the systemd documentation, which didn’t exactly reassure us:

> Note that the actual implementation in the systemd codebase is the only ultimately authoritative description of the format, so if this document and the code disagree, the code is right

This required a lot of extra effort and hoop-jumping, but at least it’s on our side rather than something users have to deal with at deploy time.

[1]: https://github.com/Velocidex/go-journalctl


The macOS bit wasn’t about trying to get systemd logs on macOS. The issue was that the build itself fails because libsystemd-dev isn’t available. We (naively) expected journal support to be something we could detect and handle at runtime.
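The usual way to keep a cgo dependency out of builds that can't satisfy it is file-level build constraints. A sketch of the fallback side (file and package names are hypothetical; a sibling file tagged `//go:build linux && cgo` would hold the real sd_journal bindings):

```go
//go:build !linux || !cgo

// journal_stub.go: compiled when sd_journal can't be used, e.g. when
// building on macOS, or on Linux with CGO_ENABLED=0. The rest of the
// program calls Available() and degrades gracefully.
package journal

import "errors"

// ErrUnsupported is returned by journal operations in stub builds.
var ErrUnsupported = errors.New("systemd journal support not compiled in")

// Available reports whether journal support was compiled in.
func Available() bool { return false }
```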


Well... yeah. It's a Linux API for a Linux feature only available on Linux systems. If you use a platform-specific API on a multiplatform project, the portability work falls on you. Do you expect to be able to run your Swift UI on Windows? Same thing!


Of course you still need one binary per CPU architecture. But when you rely on a dynamic link, you need to build from the same architecture as the target system. At that point cross-compiling stops being reliable.


I am complaining about the language (phrasing) used: a Python, TypeScript or Java program might be truly portable across architectures too.

Since architectures are only brought up in relation to dynamic libraries, the phrasing implied the program is otherwise as portable as those languages.

With that out of the way, it seems like a small thing for the Go build system if it's already doing cross compilation (and thus has understanding of foreign architectures and executable formats). I am guessing it just hasn't been done and is not a big lift, so perhaps look into it yourself?


They're only portable if you don't count the architecture-specific runtime that you need to somehow obtain...

Go doesn't require dynamic linking for C; if you can figure out the right C compiler flags, you can cross-compile statically linked Go+C binaries as well.
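A sketch of what that looks like for an amd64 → arm64 static build. It assumes an aarch64 cross toolchain is installed (package names vary by distro, e.g. gcc-aarch64-linux-gnu on Debian/Ubuntu):

```shell
# Cross-compile a static Go+cgo binary for linux/arm64 from an amd64 host.
CGO_ENABLED=1 \
GOOS=linux GOARCH=arm64 \
CC=aarch64-linux-gnu-gcc \
go build -ldflags '-linkmode external -extldflags "-static"' .
```

Statically linking glibc has known caveats (NSS, dlopen), which is part of why musl cross toolchains are a popular choice for this.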


Is it some tooling issue? Why is it an issue to cross-compile programs with dynamic linking?


It's a tooling issue. No one has done the work to make things work as smoothly as they could.

Traditionally, cross-compilers didn't work the way the Zig and Go toolchains approach it; achieving cross-compilation could be expected to be a much more trying process. The Zig folks and the Go folks broke with tradition by choosing to architect their compilers more sensibly for the 21st century, but the effects of the older convention remain.


In general, cross compilers can do dynamic linking.


In my experience, the cross-compiler will refuse to link against shared libraries that "don't exist", which they usually don't in a cross compiler setup (e.g. cross compiling an aarch64 application that uses SDL on a ppc64le host with ppc64le SDL libraries)

The usual workaround, I think, is to use dlopen/dlsym from within the program. This is how the Nim language handles libraries in the general case: at compile time, C imports are converted into a block of dlopen/dl* calls, with compiler options for indicating some (or all) libraries should be passed to the linker instead, either for static or dynamic linking.

Alternatively I think you could "trick" the linker with a stub library just containing the symbol names it wants, but never tried that.


You just need a compiler & linker that understand the target + image format, and a sysroot for the target. I've cross compiled from Linux x86 clang/lld to macOS arm64, all it took was the target SDK & a couple of env vars.

Clang knows C, lld knows macho, and the SDK knows the target libraries.


Well, you need to link against them, and you can't do that when they don't exist. I don't understand the purpose of a stub library; it's also just a file, and if you need to provide that, you can provide the real thing right away.


I happily and reliably cross build Go code that uses CGO and generate static binaries on amd64 for arm64.


In case you don't already know, there are GitHub-hosted runners that run Windows arm64 [1].

Also, it's not what you're asking, but self-hosted runners are a security nightmare if you don't have the hardware to completely isolate them from your local network.

[1] https://github.com/actions/partner-runner-images/blob/main/i...


They don't seem to be available for private repos (unless you sign up for GitHub Teams or Enterprise).


I’m still working on Simple Observability:

https://simpleobservability.com

I built it because I needed two things:

- A super easy-to-install monitoring tool that doesn’t require bash scripts or config files

- A mobile-friendly, UX-first interface where I can check everything from my phone

It’s now pretty feature-complete. I can see a full picture of all the servers and VPSes I run straight from my phone.

Setup is one command, no config files, and everything else happens in the UI. There’s a catalog of predefined alert rules, and creating new ones is easier than anything else I’ve used.

There’s a free tier if anyone wants to try it!


Very cool! However, I couldn't get the agent running on an ARM-based Oracle Linux Server 10 in OCI. I tried two different servers:

level=ERROR msg="failed to fetch collection config. retrying in 5s..." error="GET /configs/ failed with status: 204"


That’s not actually a bug (though maybe the message needs to be more verbose). The agent is running, but it doesn’t yet know what data to collect. You’ll need to finish the setup in the UI by choosing which metrics/logs you want. Once you do, the error will go away and the agent will start collecting data.


Ah, that's what I get for not reading! It's working perfectly! The only thing missing for me is ingesting the logs for my service directly from journalctl; that would be amazing.


I'm working on Simple Observability[0], a platform for monitoring servers (metrics and logs). Think of it as a super simple alternative to the Prometheus + Grafana + Loki stack, designed for teams who just want to know “is my server healthy?” without setting up and maintaining a full observability pipeline.

It uses a lightweight, open-source agent[1] that collects data and pushes it to the backend, so it works behind firewalls and doesn’t require any open ports or scraping setup. The goal is to get useful monitoring and alerting with minimal effort: one command install and a UI-based configuration.

[0] https://simpleobservability.com

[1] https://github.com/Simple-Observability/simob-agent


This is really cool.

For the landing page, I think it'd be useful to see an actual screenshot of the UI. Also, what I'm looking for in a solution like this is to receive this information passively — I don't want to need to proactively watch a dashboard. I would want to receive email alerts when, for example, I'm running out of disk space. It says on your landing page that you provide this feature, but it also says it's configurable. Everything on Grafana is configurable, but tbh it's a PITA to configure. It'd be nice if SO just worked OOTB wrt alerts.


Thanks for the feedback. I'll make sure to add screenshots.

For alerts, it's configurable pretty quickly (you just select what you want to monitor, a threshold value, and a notification channel). But I’ll look into building in some sensible defaults so it works out of the box.


From the 10-Q filing: "We are focused on the continued development of Intel 14A, the next generation node beyond Intel 18A and Intel 18A-P, and on securing a significant external customer for such node. However, if we are unable to secure a significant external customer and meet important customer milestones for Intel 14A, we face the prospect that it will not be economical to develop and manufacture Intel 14A and successor leading-edge nodes on a go-forward basis. In such event, we may pause or discontinue our pursuit of Intel 14A and successor nodes and various of our manufacturing expansion projects"

