Hacker News | ademup's comments

I call this the "Everything in Moderation" fallacy. From what I've heard from people who say it, they emphasize the "everything" part: in other words, almost everything is bad for you, so just eat a little bit of everything and you won't get too much of the bad stuff. It's maddening.

The way I understand it (and my understanding is certainly poor, so I welcome well-supported pushback on it) is that few, if any, components of the food we in developed countries eat today are actively harmful in themselves (with the caveat outlined below).

The main issue is overconsumption leading to overweight and obesity. Food that’s high in refined sugars and/or saturated fats tends to contribute to this, because it’s palatable and calorie-dense.

So in that sense, yes - I believe that as long as your diet is varied enough that you get sufficient intake of all, or at least most, of the essential nutrients, and you don’t eat too much (i.e. in moderation), the ratio of macronutrients doesn’t make a big difference to your health outcome.

The crux is that moderation is hard when the food is jam-packed with calories, and it’s so delicious you just want to keep stuffing your face.


By volume, most of the food in modern western grocery stores is unnaturally sugary or otherwise calorie-dense.

You have to restrict yourself to produce and a scant few other options to come away with balanced nutrition.

They even advertise cereals as "part of a healthy breakfast", which is a lie under any circumstances, because it's never a healthy part if you eat it long term. (Yes, it could keep you from starving to death in a famine; that's still not 'healthy'.) Imagine if they could only say "it will keep you from starving, and may significantly contribute to diabetes".


I don’t think “Everything in Moderation” means you won’t get too much of the bad stuff. The philosophy alludes to the fact that in the modern world, trying to have the ideal diet is exhausting and near impossible. Lack of choice, money, time, education, self-control, etc. all contribute to you intentionally or unintentionally eating stuff that’s going to do irreparable harm to you. You could be eating salads and somehow poisoning yourself with pesticides and high-sugar/fat/sodium salad dressings. Which is why this philosophy focuses on doing everything in moderation so you’ll maybe avoid CVD and other diseases for longer. It is meant for people who cannot meet the idealistic standards of what you are supposed to do.

Is it really that exhausting though? I've been on a zero-carb diet for two months (other than Thanksgiving and Christmas), and it really hasn't been hard at all. If I eat at a restaurant there are some things I can't avoid (seed oils), but otherwise it's not too hard to look at a menu and see things I can eat. The only hard part is that to be optimally healthy I need to cook for myself, but that's always been true.

In a lot of ways, it's actually been easier. Because my blood sugar isn't crashing every few hours, I can easily skip a meal and feel perfectly fine. Fasting is very easy for me now, which it wasn't at all on an unhealthy diet.


Yet you are unavoidably eating micro-plastics too, which have been linked to adverse CV events.

Also:

- If you are eating more fish (as opposed to eating meat), you are likely consuming more mercury.

- If you are eating more fresh veggies you are probably ingesting more pesticides.

- If you are eating dark chocolate for its health benefits, you are also ingesting cadmium and other heavy metals.

So all of the above should be done in moderation. Even things that seem like an unalloyed good can be dangerous. A burst of exercise beyond your conditioning can lead to a CV event. Too much water can be poisonous. Some people get constipation from too many veggies in their diet.

For example, instead of sticking to a narrow, faddish, supposedly healthy diet, you can enjoy a wide range of foods, which will make it more likely you are getting all the nutrients that will do you good (of course, clearly unhealthy food should be avoided).

The body is more complex than we can ever know. There are some general principles for good health (including CV health) that should be followed, but to me it is clear that good health does not arise from a slavish devotion to a very detailed set of rules.


Funnily enough, I've heard that one reason obesity is so prevalent is that we have too many varieties of food. It seems our hunger controller triggers satiety when we eat a lot of one food, but when we eat a little of lots of different foods, that mechanism breaks down.

It'd be funny if lots of fad diets actually work because people are forced to eat a single type of food, and that alone is enough for satiety to kick in.


It could also explain why most of us can eat like pigs in all-you-can-eat buffets.

Your post sounds like "bad things can happen so why bother". Having a good diet isn't "slavish devotion", it's more like "don't eat something obviously terrible"

Way to totally miss the gist of my comment:

> "don't eat something obviously terrible"

This is an exact thought in the comment.

> "bad things can happen so why bother"

The exact thing I was arguing against.

Jeez, why bother responding if you can't be bothered to read the actual comment I wrote.


People who don’t make much money and have to handle childcare while working 2-3 jobs probably aren’t able to cook for themselves. Nor are people who live in food deserts with only limited options able to optimize around a specific diet that isn’t restricted by availability.

I cook my own food and optimize around eating healthy. I wouldn’t be able to do it if I made less money, or had a more demanding job, or didn’t have great grocery stores within a 10 mile radius, or had to spend time on childcare, or any of the other completely valid reasons people have.

Besides, you yourself just described “doing things in moderation”: holidays, Christmas, restaurants, etc. That’s really what the philosophy is.


What is moderation? An apple's volume (or mass) of alcohol is going to make you very drunk (most alcoholic drinks are mostly water: an apple's worth of beer is very little alcohol, but that much Everclear is a problem).
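To put rough numbers on it (assuming a medium apple of roughly 200 ml): 200 ml of 5% ABV beer is about 10 ml of ethanol, well under one standard drink (~18 ml), while 200 ml of 190-proof Everclear is about 190 ml of ethanol, on the order of ten standard drinks.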

That is what I hate about "everything in moderation". We need to do better, since some things should be consumed in much larger amounts than others.

I think we would all agree that any amount of rat poison is a bad thing, though perhaps this is too much of a strawman.


Respectfully disagree. An AI with full access to robots could do everything on its own that it would need to "survive" and grow. I argue that humans are actually in the way of that.


"robots" is a very hand wavy answer. There's so much that goes into the supply chain of improving and running AI that I, a human, feel quite safe.


Is there any particular element of the supply chain that you feel makes “robots” hand-wavy?


In support of the other reply, here is a look at the supply chain of a very simple product - a can of Coke.

https://medium.com/@kevin_ashton/what-coke-contains-221d4499...

(https://archive.md/PPYez)

The highlighted parts are a kind of TL;DR, but in this context actually reading it (it is not much) is required to get anything out of it for the arguments used here.

Anything technological is orders of magnitude more complex.

Pointing to any single part really makes no sense, the point is the complexity and interconnectedness of everything.

Some AI doing everything is harder than the East Bloc countries attempting to use central planning for the whole economy. Their economies were much simpler than what such a mighty AI would require for itself and its robot minions. And that's just the organization.

I did like "Gaia" in Horizon Zero Dawn (game) because it made a great story though. This would be pretty much exactly the kind of AI fantasized about here.

Douglas Adams hints at hidden complexity towards the end of HHGTTG, talking about the collapse of Golgafrincham's society.

You overlook just one single tiny thing and it escalates to failure from there. Biological systems don't have that problem; they are self-assembling no matter how you slice and dice them. You may just end up with a very different ecosystem, but as long as the environment is not completely outside the useful range it will grow and reorganize. Human-made, engineered things, on the other hand, will just fail and that's it; they will not rise on their own from nothing. Human-made systems are much, much more fragile than biological ones (even if you can't guarantee what kind of biological system you will get after rounds of growth and adaptation).


Thanks for providing the archive link!

> Pointing to any single part really makes no sense, the point is the complexity and interconnectedness of everything

Doesn’t it though?

The bauxite mine owners in Pincarra could purchase hypothetical robotic mining & smelting equipment. The mill owners in Downey, the coca leaf processor in New Jersey, the syrup factory in Atlanta, and others could purchase similar equipment. Maybe they all just buy humanoid robots and surveil their workers for a while to train the robots and then replace the workers.

If all of those events happen, the Coca-Cola supply chain has been automated. Also, since e.g. the aluminum mill probably handles more orders beyond just Coke cans, other supply chains for other products will now be that much more automated. Thereby the same mechanism that built these deep supply chains will (I bet) also automate them.

> Biological systems don't have that problem, they are self-assembling no matter how you slice and dice them.

If the machines used to implement manufacturing processes are also built in an automated way, the system is effectively self-healing as you describe for biological systems.

> did like "Gaia" in Horizon Zero Dawn (game) because it made a great story though. This would be pretty much exactly the kind of AI fantasized about here.

Perhaps the centralized AI “Gaia” becomes an orchestrator in this scheme, rather than the sole intelligence in all of manufacturing? I'm not familiar enough with the franchise to make a more direct comparison, but my larger point is that the complexity of the system doesn't need to be focused on one single greenfield entity.


Man-made stuff does not self-repair and self-replicate.

So, no. You are not thinking far enough, only the next step. But it is a vast, complex network, and every single thing in it except the humans has that man-made-item deficiency of decay without renewal.

Miss even the repair of the tiniest item - which in turn requires repairing the repairers - and everything eventually stops.

Humans have to intervene fixing unforeseen problems all the time! It is humans that hold all those systems together.

Even if you had AGI, human brains are far from perfect too so that would not change anything in the end, we have biology to the rescue (of us in general, not necessarily the individual ofc) when we miss stuff.


Let us assume, at some point in the near future, it is possible to build a humanoid robot that is able to operate human-run machines and mimic human labor:

> Man made stuff does not self-repair and self-replicate.

If robots can repair a man-made object or build an entirely new one, the object is effectively self-repairing and self-replicating for the purposes of a larger goal to automate manufacturing.

> You miss even repairs of the tiniest item - which in turn requires repairing he repairers, everything eventually stops

So… don’t? Surely the robots can be tasked to perform proactive inspections and maintenance of their machines and “coworkers” too.

> But it is a complex vast network

…that already exists, and doesn’t even need to be reimagined in our scenario. If one day our hypothetical robots become available, each individual factory owner could independently decide the next day to purchase them. If all of the factories in the “supply chain graph” for a particular product do this, the complex decentralized system they represent doesn’t require human labor to run. It doesn’t even need to happen all at once. By this mechanism I propose the supply chain could rapidly organically automate itself.


The length and breadth of it mostly.


I think this is a very common opinion here. I'd say at least 15% of people believe that.


Yeah? How many robots? What kind of robots? What would the AI need to survive? Are the robots able to produce more robots? How are the robots powered? Where will they get energy from?

Sure it's easy to just throw that out there in one sentence, but once you actually dig into it, it turns out to be a lot more complicated than you thought at first. It's not just a matter of "AI" + "Robots" = "self-sustaining". The details matter.


I'm surprised to see so many people using containers when setting up a KVM is so easy, gives the most robust environment possible, and to my knowledge has much better isolation. A vanilla build of Linux plus your IDE of choice and you're off to the races.


You often don't need strong isolation. The sandboxing is more to avoid model accidents than a Skynet scenario.


Not everyone has spare hardware lying around!


For sure! But just for reference, I'm on a mid-tier 2022 Dell Inspiron laptop: Ryzen 7 5825U with 64GB RAM and a 500GB SSD.

On it, I run Ubuntu 24.04 as my host, and my guest is a Lubuntu KVM with 16GB RAM and an 80GB virtual disk.

I almost always have 2 instances of PHPstorm open, in both host and guest, with multiple terminal tabs running various agentic tasks.


64 GB. Say no more :)

I lived with 16GB until last year and upgraded to 32 only this year, which I thought was a huge improvement. I suspect a lot of people are in this ballpark, especially if they have bought Macs. Mine is Linux, still. So containers are the “simpler” option.


...wait, you and I are using "KVM" in different ways, then. To me, it means a switch that lets you use the same Keyboard, Monitor ("Video"), and Mouse for two different machines. Sounds like you're talking instead about a technique for running a VM on a single machine - which, from Googlin', I suspect is "Kernel-based Virtual Machine", a new-to-me term. Thanks for teaching me something!


For anyone else on the fence about moving to CLI: I'm really glad I did.

I am converting a WordPress site to a much leaner custom one, including the functionality of all plugins and migrating all the data. I've put in about 20 hours or so and I'd be shocked if I have another 20 hours to go. What I have so far looks and operates better than the original (according to the owner). It's much faster and has more features.

The original site took more than 10 people to build, and many months to get up and running. I will have it up single-handedly inside of 1 month, and it will have much faster load times and many more features. The site makes enough money to fully support 2 families in the USA very well.

My Stack: Old school LAMP. PHPstorm locally. No frameworks. Vanilla JS.

Original process: webchat-based since Sonnet 3.5 came out; I used Gemini a lot after 2.5 Pro came out, but primarily Sonnet.

- Use Claude projects for "features". Give it only the files strictly required to do the specific thing I'm working on.
- Have it read the files closely, "think hard", and make a plan
- Then write the code
- MINOR iteration if needed. Sometimes bounce it off of Gemini first.
- The trick was to "know when to stop" using the LLM and just get to coding.
- Copy code into PHPStorm and edit/commit as needed
- Repeat for every feature (refresh the Claude project each time).

Evolution: Finally take the CLI plunge: Claude Code
- Spin up a KVM: I'm not taking any chances.
- Run PHPStorm + CC in the KVM as a "contract developer"
- The "KVM developer" cannot push to main
- Set up claude.md carefully
- Carefully prompt it with structure, bounds, and instructions

- Run into lots of quirks with lots of little "fixes":
  -- Too verbose
  -- Does not respect "my coding style"
  -- Poor adherence to claude.md instructions when over halfway through context, etc.
- Start looking into subagents. It feels like it's not really working?
- Instead: I break my site into logical "features":
  -- Terminal Tab 1: "You may only work in X folder"
  -- Terminal Tab 2: "You may only work in Y folder"
  -- THIS WORKS WELL. I am finally in "HOLY MOLLY, I am now unquestionably more productive" territory! (A rough sketch of this setup is below.)
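A claude.md along these lines is one way to express those per-tab bounds (hypothetical, not my actual file; folder names invented for illustration):

    # Contract developer rules (example only)
    - You are working inside a KVM as a contract developer.
    - You may only create or modify files under features/billing/ for this session.
    - Never push to main; commit to a feature branch and stop.
    - Match the existing style: vanilla PHP, vanilla JS, no frameworks.
    - Keep output terse; no long explanations unless asked.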

Codex model comes out
- I open another tab and try it
- I use it until I hit the "You've reached your limit. Wait 3 hours" message.
- I go back to Claude (man is this SLOW! And verbose!). Minor irritation.
- Go back to Codex until I hit my weekly limit
- Go back to Claude again. "Oh wow, Codex works SO MUCH BETTER for me."
- I actually haven't fussed with the AGENTS.md, nor do I give it a bunch of extra hand-holding. It just works really well by itself.
- Buy the OpenAI Pro plan and haven't looked back.

I haven't "coded" much since switching to Codex and couldn't be happier. I just say "Do this" and it does it. Then I say "Change this" and it does it. On the rare occasions it takes a wrong turn, I simply add a coding comment like "Create a new method that does X and use that instead" and we're right back on track.
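For example (a made-up PHP snippet, not from the actual project), the steering comment can be as simple as:

    <?php
    // Example data; in the real project this comes from MariaDB.
    $subscribers = [
        ['email' => 'a@example.com', 'status' => 'active'],
        ['email' => 'b@example.com', 'status' => 'lapsed'],
    ];

    // Codex: create a new method getActiveSubscribers() that wraps this
    // filter, and call it here instead of filtering inline.
    $active = array_filter($subscribers, fn ($s) => $s['status'] === 'active');

    print_r($active);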

We are 100% at a point where people can just "Tell the computer what you want in a web page, and it will work".

And I am SOOOO Excited to see what's next.


I'd heard Codex improved a ton since I last tried it (definitely prior to some of the latest improvements), and am really tempted to give it another shot. This is very much inspiring me to do so asap, thank you for sharing!


"We are 100% at a point where people can just "Tell the computer what you want in a web page, and it will work"."

I await the good software. Where is the good software?


> > We are 100% at a point where people can just "Tell the computer what you want in a web page, and it will work"

> I await the good software. Where is the good software?

Exactly this. It looks great on the surface until you dig in and find it using BlinkMacSystemFont and absolute positioning because it can't handle a proper grid layout.

You argue with it and it adds !important everywhere because the concept of cascading style is too much for its context window.


The status quo system you describe isn't objectively optimal. It sounds archaic to me. "We" would never intentionally design it this way if we had a fresh start. I believe it is this way due to a myriad of reasons, mostly stemming from the frailty and avarice of people.

I'd argue the opposite of your stance: we've never had a chance at a fresh start without destruction, but agents (or their near-future offspring) can hold our entire systems "in memory", and therefore might be our only chance at a redo without literally killing ourselves to get there.


It's not claimed to be an "objectively optimal" solution, it's claimed to represent how the world works.

I don't know where you're going with discussion of destruction and killing, but even fairly simple consumer products have any number of edge cases that initial specifications rarely capture. I'm not sure what "objectively optimal" is supposed to mean here, either.

If a spec described every edge case it would basically be executable already.

The pain of developing software at scale is that you're creating the blueprint on the fly from high-level vague directions.

Something trivial that nevertheless often results in meetings and debate in the development world:

Spec requirement 1: "Give new users a 10% discount, but only if they haven't purchased in the last year."

Spec requirement 2, a year later: "Now offer a second product the user can purchase."

Does the 10% discount apply to the second product too? Do you get the 10% discount on the second product if you purchased the first product in the last year, or does a purchase on any product consume the discount eligibility? What if the prices are very different and customers would be pissed off if a $1 discount on the cheaper product (which didn't meet their needs in the end) prevented them from getting a $10 discount 9 months later (on a product they think will)? What if the second product is a superset of the first product? What if there are different relevant laws in different jurisdictions where you're selling your product?
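To make the ambiguity concrete, here is a hypothetical sketch (names and rules invented for illustration): even a minimal reading of requirement 1 silently answers questions nobody has asked yet.

    <?php
    // One possible reading of "10% discount for new users who haven't
    // purchased in the last year". Each choice below is an interpretation
    // the spec never actually made explicit.
    function discountPercent(array $purchases, string $productId, DateTimeImmutable $now): int
    {
        foreach ($purchases as $p) {
            // Interpretation 1: a purchase of ANY product consumes eligibility.
            // (Alternative: only purchases of $productId should count.)
            $ageInSeconds = $now->getTimestamp() - $p['purchasedAt']->getTimestamp();
            if ($ageInSeconds < 365 * 24 * 3600) {
                return 0;
            }
        }
        // Interpretation 2: the discount applies to every product, at the
        // same rate, regardless of price or jurisdiction.
        return 10;
    }

    // Under this reading, a cheap purchase 9 months ago blocks the discount
    // on the new, pricier product too.
    $purchases = [['productId' => 'basic', 'purchasedAt' => new DateTimeImmutable('-9 months')]];
    echo discountPercent($purchases, 'premium', new DateTimeImmutable('now')); // prints 0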

Agents aren't going to figure out the intent of the company's principals automatically here, because the decision maker doesn't even realize it's a question until the implementers get into the weeds.

A sufficiently advanced agent would present all the options to the person running the task, and then the humans could decide. But then you've slowed things back down to the pace of the human decision makers.

The complexities only increase as the product grows. And once you get into distributed or concurrent systems even most of our code today is ambiguous enough about intent that bugs are common.


Agents quite literally cannot do this today.

Additionally, I disagree with your point:

> The status quo system you describe isn't objectively optimal.

On the basis that I would challenge you or anyone to judge what is objectively optimal. Google Search is a wildly complex system, an iceberg of rules on top of rules, specifically because it is a digital infrastructure surrounding an organic system filled with a diverse group of people with ever-changing preferences and behaviors. What, exactly, would be optimal here?


Is Ubuntu 24.04 supported? (Docker Desktop doesn't support 24.04 currently)


It is what I am using, so by default, yes!


Bulma.io's layout is broken in landscape mode on my phone (both responsively and with a full page refresh), with several sections going beyond the edge of the page. Firefox on a Pixel 7.

I'd love to use this because it looks good, but this makes me lose trust.


For a far more in-depth look into high-intensity gardening, with real data and realistic expectations, I strongly recommend John Jeavons' How to Grow More Vegetables: Than You Ever Thought Possible on Less Land Than You Can Imagine.


Location: San Francisco Bay Area

Remote: Remote, Hybrid, On Site

Willing to relocate: No

Technologies: Linux, PHP, MariaDB, VPS, Javascript, HIPAA, CMS/EHR creation, APIs, Analytics, Bootstrap, Plotly, Twilio, Sendgrid. Small amount of Python.

Résumé/CV: https://www.linkedin.com/in/ademmiller/

Email: adem@mentalnexus.com

- Over 15 years of experience writing multiple CRM/EHR web apps in PHP and JavaScript from the ground up without frameworks.

- 8 Years as Co-Founder and CTO orchestrating millions of surveys in the private pay mental health and addiction treatment arenas

- Extensive experience working with billion dollar valuation companies as customers and integrating with the wide range of software vendors and APIs they consume

- Degrees in Business Administration and Small Signal Electronics

- Huge fan of entrepreneurs and small startups


I'm experiencing the same massive pain point from Twilio. Our business sends survey links to about 300 people a day who have all opted in. We've been doing this for 7 years. Our undelivered rate had been under 5% for years. It's now at 33% and is threatening our business.

Twilio's support has been absolutely dismal. They say things like "Your number is not attached to your campaign" and show a screenshot from their admin screen, but when I point out that it is all connected in the console, they respond with generic Twilio links about A2P implementation.

Has anyone figured out a solution? ... I'm happy to switch to another company at this point.
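In case it helps with diagnosis, here is a minimal sketch (assuming the official Twilio PHP helper library; the SID, token, and dates are placeholders) that pulls recent messages and tallies status/error codes, so you can at least see whether the failures are carrier filtering (e.g. error 30007) or something registration-related:

    <?php
    require __DIR__ . '/vendor/autoload.php';

    use Twilio\Rest\Client;

    // Placeholders: substitute your real Account SID and Auth Token.
    $client = new Client('ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', 'your_auth_token');

    // Pull the last week of messages (the second argument caps the total fetched).
    $messages = $client->messages->read(
        ['dateSentAfter' => new \DateTime('-7 days')],
        1000
    );

    // Tally status plus error code to see what is actually failing.
    $tally = [];
    foreach ($messages as $msg) {
        $key = $msg->status . ($msg->errorCode ? ' / ' . $msg->errorCode : '');
        $tally[$key] = ($tally[$key] ?? 0) + 1;
    }

    arsort($tally);
    print_r($tally);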

