Domain-specific knowledge, unrelated to software engineering per se, is a necessary skill set.
The best analogy I can find, tired as it may be, is comparing software engineering to tool-and-die making.
In prior generations, when manufacturing was king, tool-and-die making was a necessary operational skill for producing things at scale. It is much less relevant (if relevant at all) in the age of additive and flexible subtractive manufacturing, where quantities can be varied according to immediate requirements.
Along the same lines, traditional software engineering skills are less prized in the age of AI agents that are better at regurgitating boilerplate code.
The corresponding next-level-up analogy is the tool-and-die maker who learns 3D modeling and additive manufacturing, with finite-element analysis and CNC skills as a fallback. For software engineers, it's AI-agent prompt engineering and data modeling, applied to use cases defined by business needs.
You need to put on your entrepreneurial hat and figure out how to do things faster and more accurately, in ways relevant to business needs - not navel-gaze exclusively at package management and build automation.
This is, of course, an extremely naïve view of the state of things, but as a generalist I cannot imagine how one could survive on increasingly niche skills that, a decade ago, would have commanded six-figure salaries.
I'd think the opposite, though: with today's "AI"/LLMs, retrieving domain/specialist knowledge has become easier, while general software development remains unsolved. E.g., you can use LLMs to generate many well-known, well-documented, well-specified image processing algorithms, but building high-quality Photoshop-like software still needs a good generalist developer.
I would say you can take the opposite route as well: become even more of a T-shaped engineer than you were before. For me that meant transitioning to a vertical role (e.g., performance engineering) rather than staying in backend engineering. Sure, an AI can understand every level of the stack, but reasoning up and down every level of abstraction still has a human element to it (at least for now).
Quality still matters sometimes. You can make a lot of things with AI, but you can't make them good. The same is true of 3D printing.
Also, 3D printing is good at making unique objects, but if you want to make ten thousand copies of the same object, you definitely need someone who knows the "old" ways. Those skills are not irrelevant at all - and you can even use a 3D printer to help make your tools and dies.
The problem is that display profile support on Wayland has been spotty at best until recently - and any good display panel should offer multiple accurate calibration targets.
My factory-seconds F13 (11th-gen Intel, still the best in terms of power savings) shipped with the older glossy display, which had a known, disclosed LUT issue at lower brightness settings. After a couple of calibration rounds it is spot-on, and it's my go-to PC laptop.
Decent keyboard, too.
Of course, things are often more expensive in Europe (compared to the US) for no good reason, so the F16 will always be at a proportional price disadvantage compared to the F13. You may find the F13 a much better fit.
Somewhat tongue-in-cheek: there will be four development tiers left:
1. model development and optimization,
2. data pool management,
3. downstream consumers of item 1, and
4. everything else