Hacker News | uberduper's comments

What are your discovery mechanisms? I don't know what exists for automatic peer management with wg. If you're doing bgp evpn for vxlan endpoint discovery then I'd think WG over vxlan would be the easier option to manage.

If you actually want to use vxlan ids to isolate l2 domains, like if you want multiple hypervisors separated by public networks to run groups of VMs on distinct l2 domains, then vxlan over WG seems like the way to go.
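As an illustration of the vxlan-over-WG layout, here is a minimal sketch using iproute2. The interface names, VNI, and addresses are hypothetical, and the commands assume wg0 is an already-configured WireGuard interface run with root privileges:

```shell
# VNI 100: one isolated L2 domain, carried inside the WireGuard tunnel.
# 10.0.0.1/10.0.0.2 are the local and remote WireGuard addresses (placeholders).
ip link add vxlan100 type vxlan id 100 local 10.0.0.1 remote 10.0.0.2 \
    dev wg0 dstport 4789
ip link set vxlan100 up

# Bridge the VXLAN interface so VM tap devices can join that L2 domain.
ip link add br100 type bridge
ip link set vxlan100 master br100
ip link set br100 up
```

Each additional VNI gets its own vxlan/bridge pair, giving each group of VMs a distinct L2 domain while all traffic between hypervisors rides the one encrypted tunnel.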

> This leads to authors having to re-explain their thinking in detail, covering points that they’d omitted for brevity or because they are obvious to those with a good understanding of the problem.

There's nothing wrong with this. Being able to explain your thinking in detail to someone who doesn't necessarily understand the problem is a pretty good exercise to make sure you yourself fully understand the problem _and your thinking._ Of course, this can't turn into a lecture on basic things people should know or have looked up before commenting.


Sure, now imagine answering all of the questions from 10 different people. It's the largest hindrance I have ever seen, but I agree with the above comment that it largely depends on the team.


My take is that an RFC should be very early in the engineering process, like as part of a proof of concept phase, and should not block progress towards completing a design proposal. The design proposal should list any legitimate alternatives to overall or component designs discussed during RFC along with the reasoning for not using them in a "designs not chosen" appendix. This at least gives your engineering leadership an opportunity to evaluate the general design ideas before anyone is prepared to die on the hill of those ideas.

Architecture / Design review happens post proof of concept but still before any significant development work and major action items are blockers to beginning development. Further discussion about designs not chosen can happen here, especially when a flaw is uncovered that would be addressed by one of those not chosen.


What a timely article and comment. I've been watching a lecture series over the last few days about quantum mechanics and the many worlds interpretation. And I have questions.

I may have missed it or didn't understand it when I heard it explained. What underpins the notion that when a particle transitions from a superposed to a defined state, the other basis states continue to exist? If they have to continue to exist, then okay, many worlds; but why do we think (or know?) they must continue to exist?


Because quantum mechanics describes the universe with a wave function, which evolves according to the Schrödinger equation.

In it, there is no notion of collapse. The only thing that makes sense is saying the observer becomes entangled with the measurement.

So if you only look at the Schrödinger equation, this is the only conclusion.

Wave function collapse is something that is simply added ad hoc to describe our observation, not something that is actually defined in QM.
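A minimal sketch of that point: pure Schrödinger evolution is unitary, and applied to a measurement it just entangles the observer with the system (the "saw" labels below are schematic, not from any particular textbook):

```latex
\[
\bigl(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\bigr)\otimes|\text{ready}\rangle
\;\xrightarrow{\;U\;}\;
\alpha\,|{\uparrow}\rangle|\text{saw }{\uparrow}\rangle
+ \beta\,|{\downarrow}\rangle|\text{saw }{\downarrow}\rangle
\]
```

Both branches survive with their amplitudes $\alpha$ and $\beta$; nothing in the unitary map $U$ deletes one of them, which is why collapse has to be postulated separately.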


That's an unsatisfying answer. I have some work to do if I want to understand it.


The double slit experiment has been done with electrons, which are, afaik, much easier to detect and send single file. It's been done with molecules. It's not a thought experiment.

Quantum superposition is real. There's no doubt about that.


Not a physicist, just here to observe that single photons weren't reliably emitted until the modern era, like the 1970s. The double slit experiment pre-dates this; it's from 1801. The one which confirms "self interaction" was 1974. I was in high school 1973-78, so the stuff we did was comparatively "new" physics in that sense. Not a message I remember receiving at the time.

From the pop-sci history reading I do, "detecting" reliable generation of single photon STREAMS in the early days depended on mechanisms which would inherently release a sequence of photons on a time base, and then gating the timing accurately enough to have high confidence in that time base and to discriminate an "individual" from the herd.

I don't doubt quantum theory. I only observe that, for young students, it's mostly (like almost all received wisdom) grounded in experiments which don't actually do what people think they do. The ones you run in the school lab are illustrative, not probative.

What people do in places like the BIPM in Paris, or CERN, isn't the same as that experiment you did with a ticker-tape and a weighted trolleycar down a ramp. "it's been confirmed" is the unfortunate reality of received wisdom, and inherently depends on trust in science. I do trust science.

Now that we have quantum dots, and processes which depend on reliably emitting single photons and single electrons, the trust has moved beyond "because they did it at CERN" to "because it's implemented in the chipset attached to the system I am using." QC will need massive amounts of reliably generated single-instance signals.


> just here to observe single photons weren't reliably emitted until the modern era.

A dim light bulb from a few feet away emits on the order of 1k photons/sec, which is low enough that you can count individual emissions using fairly simple analog equipment [0] [1].

> The double slit experiment pre-dates this. it's from 1801. The one which confirms "self interaction" was 1974.

There's an experiment from 1909 that demonstrated the double-slit experiment with single(ish) photons [2].

> I only observe it's mostly for young students (like almost all received wisdom) grounded in experiments which don't actually do what people think they do. The ones you run in the school lab are illustrative not probative.

> What people do in places like the BIPM in Paris, or CERN, isn't the same as that experiment you did with a ticker-tape and a weighted trolleycar down a ramp. "it's been confirmed" is the unfortunate reality of received wisdom, and inherently depends on trust in science. I do trust science.

The double-slit experiment is actually fairly easy and cheap to run [3]. Certainly more complicated than ticker tape, but not by much.

[0]: https://en.wikipedia.org/wiki/Scintillation_counter

[1]: https://en.wikipedia.org/wiki/Photomultiplier_tube

[2]: https://www.biodiversitylibrary.org/page/31034247

[3]: https://www.teachspin.com/two-slit


It's difficult to quantify the value of "I know the shit out of linux" to a prospective employer when they're looking for cog developer #471.

In my experience it's the network of people you've worked with that know how beneficial you are and want to work with you again (this is key) that will keep you in demand regardless of the market conditions.


Victim-blaming is not necessary in this hiring environment. In the last decade only small companies have been available to me, which means there are fewer than five folks I can turn to directly for jobs, and none of them are hiring right now.


I've made quite a career out of knowing how linux works and not reinventing the wheels it provides. I read man pages. I sometimes run `systemctl list-unit-files` and say, "hmm what is that??" then go find out what it is. I've been at this for decades and curiosity keeps pushing me to learn new things and keep up with recent developments.


But how did you get your first Linux job? That's where I'm stuck. Where I live, there are literally zero entry level Linux roles, and the couple of Linux roles that are available require you to have centuries worth of enterprise experience with Kubernetes, Openshift, Ansible, Chef, Puppet, Terraform etc...


I was a windows guy at a large auction site and started bringing linux into my workflows and solutions. I'd already been gaining personal experience with linux and the BSDs, solaris, etc. That was my last "windows job."

I'd say there's really no "linux roles" out there. Entry level or not. Everyone collectively decided "devops" was a big bright beautiful tomorrow and made devs learn release management and made ops people get lost (or become the developer they never wanted to be). Everyone shifted their focus towards "as code" solutions because the SRE book said nobody should log in to servers or something. So we hire people that know the abstractions instead and assume nobody really needs to go deeper than that.

It sucks, but learning the abstractions is how you're gonna have to get started. If you're already a linux nerd then you may benefit from understanding what the abstraction is doing under the hood.

If I was starting out right now, I'd go work through Kelsey Hightower's 'Kubernetes The Hard Way' and build functional kubernetes clusters from scratch on a couple of the cloud providers. Do not copy&paste anything from the blog. Every line, every command, by hand. Type out those CSRs and manifests! Recognize what each component you're setting up is and what it's used for. Like "what is the CCM and what's it responsible for?" Or "What's going on with every step of the kubelet bootstrapping process? What controllers are involved and what are they doing?" Read EVERYTHING under kubernetes.io/docs. Understand the relationships between all the primitives.

If you already have some linux, networking, and containers knowledge to build on top of, I think you could work through all of that in less than 4 weeks and have a better understanding of kubernetes than 80%+ of engineers at any level and crush a kubernetes focused interview.
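As a taste of the "type it all out by hand" step, here is the node-credential CSR from that kind of walkthrough sketched with plain openssl (Kubernetes The Hard Way itself uses cfssl; "worker-1" is a placeholder node name):

```shell
# Generate a key and a CSR with the subject the Kubernetes node authorizer
# expects: CN=system:node:<node-name>, O=system:nodes.
openssl genrsa -out worker-1.key 2048
openssl req -new -key worker-1.key -out worker-1.csr \
    -subj "/CN=system:node:worker-1/O=system:nodes"

# Inspect the subject before handing the CSR to the cluster CA.
openssl req -in worker-1.csr -noout -subject
```

Doing this once per component makes it obvious why the kubelet bootstrapping flow exists: it automates exactly this CSR/approval dance.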


Thanks, but my point still stands: there are no entry-level roles, whether it's "Linux" or a Linux-based "DevOps" role. I'm actually working in a windows-based, mostly-DevOps type role, but we use almost zero open source tools and it's very Microsoft centric.

The closest Linux-y roles that I might have a shot at getting into are "cloud engineer" type roles, with a heavy emphasis on AWS - and I hate AWS with a passion (just as much as I hate Azure).

Regardless, the biggest issue is getting that interview call - now in the age of AI, people are faking their CVs and companies are getting flooded with hundreds or thousands of junk applications, so getting an interview - especially when you don't meet their professional experience requirements - is next to impossible. I could have all the Kubernetes certs in the world, but what's the point if I get filtered out right at the first stage?


Start introducing it where you are. I was an early advocate for the use of WSL2/Docker, and along with that a push towards deploying to Linux as a cost saving, as projects started shifting away from .Net Framework and into .Net Core and Node, which were actually easier to deploy to Linux... WSL/Docker became a natural fit as it was "closer to production" for the development workflow.

It's not always possible, but there are definitely selling points that can help you introduce these things. Hell, scripting out the onboarding chores from a clean windows install (powershell to bootstrap in windows, then bash, etc for the WSL environment) with only 3-4 manual steps... and you get a new dev onboarded in a couple hours with a fully working environment and software stack, including an initialized database... You can raise some eyebrows.

Do the same for automated deployments on your existing projects... shift the testing environments to Linux as a "test" or "experiment" ... you can eat away at both directions.

Before you know it, developers can choose windows or mac instead of one or the other, and can use whatever editor they like. Maybe still stuck with C# or MS-SQL, maybe PostgreSQL for green projects.


I thought you were asking for advice. Sorry.


It’s been 17 years since I got my first Linux job in 2008. Where I live, that’s rare, 99% of the industry here is a 'Microsoft Shop,' and the biggest player in town is practically married to them.

I started out at a small Linux company working with Plone CMS. The pay wasn’t great, but it was the perfect place to learn Linux and Python. Since then, I’ve used Linux every single day, become a Java developer, and started a few businesses. Using Linux of course.

But lately, things are changing. Companies are realizing that when it comes to Data Engineering and Science, C# just can't compete with Python's ecosystem. Now that they need to pivot, they're looking for help, and there are very few people in this area with the experience to answer that call.


I was working in a Windows-centric environment and started using Proxmox as the hypervisor instead of Windows Server. This, combined with my self-hosting hobby (Proxmox mini PC cluster, network diagrams of vlans, self-hosting my own blog website, having a handful of small tools in my git repos), was what sold my current company on hiring me, more than my resume of working in tech.


You can make almost any job into a Linux job. Use a linux VM on your desktop to solve a problem for the company. Things change once your employer knows it's essential.

I've also seen Linux make inroads in "windows only" enterprises when it became essential for performance reasons. A couple of times, towards the start of a project, windows APIs were discovered to be too slow to meet requirements:

In one case, customer needed us to send a report packet every 40ms. But even when we passed "0" to the windows Sleep() function, it would sometimes stop our program for 100ms at a time. The sleep function on linux was highly accurate, so we shipped linux. Along the way 5-6 devs switched to, or got a second PC to run linux.

In another case, we needed to saturate a 10GbE link with a data stream. We evaluated windows with a simple program:

   while (1) send(sock, buffer, sizeof(buffer), 0);
... but we found windows could only squeeze out 10% of the link capacity. Linux, on the other hand, could saturate the 10GbE link before we had even done any performance tuning. On linux, our production program met all requirements while using only 3% CPU. Windows simply couldn't touch this. More devs learned linux to support this product.

Those companies still don't require linux skills when hiring, because everyone there was once a windows guy who figured it out on the job. But when we see linux abilities on the resume it gives candidates a boost because we know they'll be up to speed faster.


Lie and learn.


That's the way.


In my experience

- Try to find a way to not go to the meeting. Anything you say, especially the most insignificant part, will be used against _someone_ in an argument that doesn't make any sense. You're going to feel the need to correct their misunderstanding and misuse of what you said. You might even try to re-focus the discussion back to the important thing you were _trying_ to say. It only goes downhill from there. You're better off interfacing with a group of C-level people through documents.

- 1:1 meetings can work. Make sure you can back up everything you say with data.

- You're a developer, you can't estimate time and effort for shit. If asked, say you'll get with your manager or the PjM or w/e to get a date.

- Find out from the person that asked you to join what you should be prepared to speak to.

- If there's an agenda or documents that will be discussed, read them before the meeting. Doesn't matter if they plan to read it during the meeting.

- No hemming and hawing. If you don't understand what you're being asked, ask for clarification. If you don't have an answer you're confident in, say so. If they insist, prefix your crisp and concise answer with your level of confidence. "In my experience.." "From what I've read.." etc.


I'd like to add that what you are describing sounds like a pretty hostile environment. Luckily, I had the opposite experience. However, it also requires that you, the developer, are open and curious about the motives that the C-level has. Sure, you can throw the tantrum that you cannot estimate times - and I agree! But it helps to ask what they actually need, and why they need it. Maybe you can come up with better ways to help them get clarity. Remember - from their standpoint, it's all a big black box they cannot understand.

From my experience, execs want to know the current state, and also want to be able to intervene before a project derails. That's usually accomplished by open and coherent communication - a skill that is yet to be found by some developers. But you can work on it! ...if you want.


If I'm smart, I certainly don't feel like it.

I can tell you I do not enjoy thinking. I hate it. It is a compulsion that I cannot avoid. I know that it makes most interactions in my life more difficult. I know it's a source of unhappiness. I cannot stop thinking.

I want to do. Not think. I fail to do. I think about failure.


Two things. First, not all smart people are overthinkers and not all overthinkers are smart.

Second, I find that a great way to change one's self-damaging behavior is, rather than the therapy that is often recommended, to try to be as much as possible, relatively speaking, in the company of people who behave the way we would like to.

For the person who wants to exercise but, because of some psychological hang-ups, can't, the company of people who exercise tends to be much more effective than finding out the root causes of the behavior. The same goes for thinking too much, eating too much, or not being able to talk to other people.


You should look into meditation.

Let me explain.

Meditation teaches that your thoughts are uncontrolled expressions of your subconscious; as are your worries, your fears, your anxieties.

To meditate is not to stop thinking thoughts, but to observe them as they spontaneously appear, and - just as quickly - disappear. To recognize that you are not the thinker of your thoughts. To view them from a place of detachment and curious observation, instead of a place of investment and worry.


May I recommend an alternative to Eastern Meditation practices?

The alternative is Autogenic Training (AT), a method invented by Dr. Schultz a century ago. It is a well-tested scientific approach, and the outcomes are generally very positive, if not life-changing.

AT does not involve interpreting obscure texts written thousands of years ago in other languages and referring to ways of life that have long been forgotten.

AT does not require silent retreats or attending workshops and seminars at the end of which you are more confused than before. It is simple and just requires following the steps outlined by Schultz and his students.

I am surprised that it is not popular at all, but its strengths are also its weaknesses. Most people long for the esoteric and unexplained, while AT is clear, easy to understand and practice.


It would be more convincing if you explained what it actually is. Rather than what it is not.


There are books and Google and Wikipedia.

Like people refer to meditation and don't explain all the process involved in one of the traditions because there is a wealth of information available, I would much prefer to answer to specific questions on the practice instead of copying and pasting from Wikipedia, which I am doing now.

"The technique involves repetitions of a set of visualisations accompanied by vocal suggestions that induce a state of relaxation and is based on passive concentration of bodily perceptions like heaviness and warmth of limbs, which are facilitated by self-suggestions. Autogenic training is used to alleviate many stress-induced psychosomatic disorders"

The formulas are six: heaviness, warmth, heart beating regularly and strongly, calm breath, warm solar plexus, and cool forehead.

There's no vocal suggestion (the Wikipedia article is wrong in that regard); the formulas are repeated silently. It's a much more effective practice than the hocus-pocus that is often meditation of the Eastern tradition, especially the bastardized variety adopted in the West, and there are plenty of books and papers available on the results of scientific studies that measure the effect of AT on soma and psyche.


Sometimes I start thinking our brains work the same way as an LLM does when it comes to language processing. Are we just using probability based on what we already know and the context of the statement we're making to select the next few words? Maybe we apply a few more rules than an LLM on what comes next as we go.

We train ourselves on content. We give more weight to some content than others. While listening to someone speak, we can often predict their next words.

What is thinking without language? Without language are we just bags of meat reacting to instincts and emotions? Are instincts and emotions what's missing for AGI?

