Hacker News | jjav's comments

> generates far more in economic activity

LVT's focus on profit above all else is why it is an unsatisfactory solution.

If the most important goal for every plot of land is to maximize its economic activity & tax revenue, that's going to be a miserable place to live.

All of the space uses that make a town nice to live in are also underutilizing the land if the sole goal is to maximize economic activity.

Open space with native vegetation, parks, playgrounds, sports fields of all kinds like soccer fields, community pools, hiking trails... all of that is wasted land if viewed through the lens of LVT maximization. All that space should be crammed full of high-rise offices and apartments.


LVT's focus is on maximizing land value, not profit. It just so happens that when a landowner maximizes the value a piece of their land provides, higher profits are almost guaranteed.

It's also a bit of a mistake to view LVT solely through an economic lens. Sure, we quantify it through a dollar amount or a difference in profits, but the value in LVT comes from how individuals value the land as a whole. So you are absolutely correct that a place without native vegetation, parks, playgrounds, etc. is going to be valued less than a place with those amenities by a lot of people. But only if people value greenspace and amenities more than pure economic output, which is mostly the case when it comes to residential spaces.

If people value greenspace, then the land around said greenspace will have a higher value. LVT would then incentivize those landowners to maximize their value, which would obviously include not destroying or removing the greenspace. Instead, the result would (likely) be to densify housing, or convert existing buildings to mixed-use spaces.


> If people value greenspace, then the land around said greenspace will have a higher value. LVT would then incentivize those landowners to maximize their value, which would obviously include not destroying or removing the greenspace.

This is where I believe LVT breaks down when faced with greedy reality.

In a perfect world, I totally agree with the above. That would be pretty awesome.

Could that ever happen in the real world of greedy corrupt politicians who never look further in time than the next election?

How do we assign monetary value to pleasant and beautiful things that provide quality of life? Like the parks and playgrounds and sports fields, etc etc. I'm sure there are studies, but the numbers are not as clear-cut and not as immediate as tax revenue this quarter, so they get ignored.

Each individual lot gets evaluated in isolation and the most profitable choice, individually, is to maximize revenue on that lot, so every lot ends up being a high-rise concrete box, either offices or apartments. It would take a very brave politician to say: let's look at the big picture long term, sacrifice some tax revenue today and build for a better quality of life, because long term that will raise values more.

LVT is uncommon so a lot of it is argued in theory, but I suggest looking at a somewhat similar decision process happening in cities today, which relates to the homeless.

How are cities reacting to the homeless? They fence off all the open green space and parks, rip out benches and bus stop roofs, eliminate all public bathrooms and so on. Making the area miserable for everyone, destroying quality of life. Oh, but it is difficult to measure quality of life, so they don't.

It would be much wiser for society as a whole to attend to the homeless and let us all have the open parks and benches and bathrooms; city life would be far more pleasant and, long term, more profitable if cities can thrive instead of decay.

But that's not how politicians think or act, so I'm fairly sure it would be the same with LVT.


Well, I can't speak to the notion of corrupt politicians, but it's worth noting that if it's in the interests of the landowners, they'd likely fight to preserve anything they feel keeps their value high, especially if they've started developing/investing in their land to maximize its potential return. Anecdotally, I've seen individual homeowners stir up enough support in my major Canadian city to stop city councils from starting somewhat major development projects, so I don't think it'd be as inevitable as you're making it out to be.

It's also a mistake to say that a lot of land gets evaluated in isolation, because that's not even true with the current property tax. You absolutely factor in the surrounding community and external factors when valuing a piece of land. Land in a downtown area is going to be inherently worth more than land on the periphery of a city due to the activity and potential of the land to generate economic activity.

To your point though, would you say that an apartment building next to a park (or even within several blocks of a park) is worth more than an apartment building with no park in proximity? I think most people would say so, therefore the apartment building with the park in proximity would have a higher value (which would extend to all land in proximity of the park), and thus the local government would be able to collect a higher tax dollar amount because of the park being there. Maybe they could get a similar total amount by building another building, but why would a local government purposefully lower the amount of tax they'd collect on each plot of land? It's in the interest of the local government to maximize the value of the land within its jurisdiction to collect the highest amount of tax possible. Just like it's in the landowner's interest to develop and invest in their land to get the highest return on their investment possible.
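To put toy numbers on that (entirely made up, just to illustrate the incentive): suppose a park lifts the assessed land value of the 10 surrounding lots by 20% each, versus paving it over and taxing an 11th ordinary lot instead.

    # Toy LVT arithmetic with entirely made-up numbers, only to illustrate the
    # incentive described above; real assessments are far more complicated.
    tax_rate = 0.05             # hypothetical annual land value tax rate
    base_lot_value = 1_000_000  # hypothetical unimproved land value per lot
    park_uplift = 0.20          # hypothetical value premium for lots near a park

    # Scenario A: keep the park, tax the 10 uplifted lots around it.
    with_park = 10 * base_lot_value * (1 + park_uplift) * tax_rate

    # Scenario B: pave the park and tax 11 ordinary lots instead.
    without_park = 11 * base_lot_value * tax_rate

    print(with_park, without_park)  # 600000.0 vs 550000.0

With those made-up numbers the park is literally worth more to the city than another building, which is the whole point: the amenity's value shows up in the land values around it.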

Re: homelessness, it would seem to me like a large group of people without housing would benefit from a system that incentivizes building more housing. Which LVT does. It would also encourage public spaces to be as amenable as possible, so that the park is as appealing as possible in order to maximize the value for surrounding lots of land. At this point though we're talking second- or even third-order effects of LVT, which, like you mentioned, aren't super clear or even assured because LVT mostly remains in the theoretical. But if we have a sound theory, at this point why not try it and see what happens? Our current systems are very clearly failing us, so if we have ideas with sound reasoning, can things really get so much worse than they already are?


> It's also a mistake to say that a lot of land gets evaluated in isolation, because that's not even true with the current property tax.

Sorry, my sentence may have been confusingly worded. I don't mean for tax computation (which certainly uses neighborhood comparables), but rather that every lot owner will evaluate the maximum profit for their own pocket only, without any regard to the greater good of the town. So every lot owner will sell to the developer who'll put up a high-rise building. Let "someone else" sell a lot to build a library or a tennis court! But there is no "someone else"; everyone will seek to maximize personal profit, which means no nice places will exist, only tightly packed concrete high-rises.

> To your point though, would you say that an apartment building next to a park (or even within several blocks of a park) is worth more than an apartment building with no park in proximity?

Absolutely! But to actually sacrifice short-term tax revenue for longer-term benefit would require forward-thinking politicians. You mention being in Canada so those might exist there, but here in the US, there are none.

> homelessness, it would seem to me like a large group of people without housing would benefit from a system that incentivizes building more housing

I hesitated to mention homeless because my comment has nothing to do with the homeless issue per se. Only using it as a very real example where we can see that town governments are completely willing to ruin quality of life for everyone (fencing off parks, etc) just to save a few dollars short term. Even though it would be immensely better to spend a bit more upfront, to raise the quality of life for the whole town, which will bring in more prosperity and more property value and more tax later on.


> Open space with native vegetation, parks, playgrounds, sports fields of all kinds like soccer fields, community pools, hiking trails.. all of that is wasted land if viewed through the lens of LVT maximization.

No, because all of that would be open to the community. It's only waste if it were locked up for use by certain people.


> It is really easy for a Downtown to go into a downward spiral if you take away the ability of people to get there.

I've seen this sad downward spiral multiple times, it is not a good outcome.

I used to live not too far from a town with a mellow but nice downtown center. Not a huge draw, but many small nice restaurants and shops, and there was steady business. Sensing a profit machine, the city filled all streets with parking meters. Turns out that while it was a nice area, it wasn't so irreplaceable, so nobody goes anymore. Business collapsed. I drove by last summer and everything is closed; the parking meters sit empty.

Same is happening now to the downtown one town over. It used to be a very vibrant, awesome downtown, although small. Bars, restaurants, music venues, fun shops. I was there every night for something or other. Loved it. Easy free parking around. Some of the parking lots have office buildings now and the city lots have become very expensive. Much less activity there now; about a third of the venues are closed and the remaining ones are saying they can't last very long with fewer people going. While in its heyday this downtown was far more active than my first example, turns out it wasn't irreplaceable either. People just don't go anymore.

Point is that this tactic works only when the downtown is so established and so dense that people are going to go anyway even if parking is hard, like Manhattan.


> Some of the parking lots have office buildings now and the city lots have become very expensive. Much less activity there now; about a third of the venues are closed and the remaining ones are saying they can't last very long with fewer people going.

Sounds to me like they found a valuable use for their land and got rid of the low-value things you really enjoyed...

Of course to you this is bad, and the city lost the nightlife, but that might or might not be worse overall. It seems to be a denser area despite it, for whatever that means.


> Sounds to me like they found a valuable use for their land and got rid of the low-value things you really enjoyed...

Explain how it is more valuable to have roughly a third of the businesses close? And many others borderline surviving?

I fear in ten years this will be like the first example I mentioned, a ghost street with all business closed.


> Sounds to me like they found a valuable use for their land and got rid of the low-value things you really enjoyed...

That would be the case if the storefronts didn't just wind up remaining empty. Empty commercial real estate is rife in the US right now.

Your "No Parking" area always has competition from the suburbs in the US. If you make parking too problematic, things can invert. Then, people will save up tasks for their trip to the burbs and be completely inert locally--they will do next to nothing with local businesses, do everything inside their house (way cheaper, you know, since I bought the stuff at Costco) and the car remains parked and unmoving until their next trip to the burbs. Once that inversion happens, your "walkable business area" spirals into more and more empty storefronts and the decline becomes ridiculously difficult to arrest.


The US is in a recession now. Just like in every other recession, there are a lot of empty storefronts.

> Point is that this tactic works only when the downtown is so established and so dense that people are going to go anyway even if parking is hard, like Manhattan.

Or the accommodation of cars has now made it less attractive for people to go and hang out there, even if it is easier to drive to.


> Isn't this true of any greenfield project?

That is a good point and true to some extent. But IME with AI, both the initial speedup and the eventual slowdown are accelerated vs. a human.

I've been thinking that one reason is that while AI coding generates code far faster (on a greenfield project I estimate about 50x), it also generates tech debt at a hyper-astonishing rate.

It used to be that tech debt started to catch up with teams in a few years, but with AI coded software it's only a few months into it that tech debt is so massive that it is slowing progress down.

I also find that I can keep the tech debt in check by using the bot only as a junior engineer, where I specify precisely the architecture and the design down to object and function definitions, and I only let the bot write individual functions, one at a time.

That is much slower, but also much more sustainable. I'd estimate my productivity gains are "only" 2x to 3x (instead of ~50x) but tech debt accumulates no faster than a purely human-coded project.
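For a concrete flavor of the level of prescription I mean, here's a hedged sketch (all names hypothetical): I write the types, the signature and the contract, and the bot only fills in the body.

    # Sketch of the "bot as junior engineer" split: the human writes everything
    # up to and including the docstring, the bot writes only the body.
    # Names here are hypothetical, just to illustrate the workflow.
    from dataclasses import dataclass

    @dataclass
    class Invoice:
        customer_id: str
        amount_cents: int
        paid: bool

    def total_outstanding(invoices: list[Invoice]) -> int:
        """Return the sum, in cents, of all unpaid invoices.

        Contract handed to the bot: ignore invoices with paid == True,
        return 0 for an empty list, do not mutate the input.
        """
        return sum(inv.amount_cents for inv in invoices if not inv.paid)

Since the interface and the contract are mine, a bad body is cheap to throw away and regenerate without the architecture drifting.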

This is based on various projects only about one year in, so time will tell how it evolves longer term.


In your experience, can you take the tech-debt-riddled code and ask Claude to come up with an entirely new version that fixes the tech debt/design issues you've identified? Presumably there's a set of tests that you'd keep the same, but you could leverage the power of AI in greenfield scenarios to just do a rewrite (while letting it see the old code). I don't know how well this would work; I haven't gotten to the heavy tech debt stage in any of my projects as I do mostly prototyping. I'd be interested in others' thoughts.

I built an inventory tracking system as an exercise in "vibe coding" recently. I built a decent spec in conversation with Claude, then asked it to build it. It was kind of amazing: in 2 hours Claude churned out a credible-looking app.

It looked really good, but as I got into the details the weirdness really started coming out. There are huge functions which interleave many concepts, and there are database queries everywhere. Huge amounts of duplication. It makes it very hard to change anything without breaking something else.
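To give a flavor of what I mean (a made-up miniature, not the actual app's code): one function that mixes validation, querying, persistence and presentation, with the same query repeated inline instead of factored out.

    # Hypothetical miniature of the pattern described above, not the real code.
    import sqlite3

    def handle_checkout(conn: sqlite3.Connection, item_id: int, qty: int) -> str:
        # validation interleaved with data access
        row = conn.execute("SELECT name, stock FROM items WHERE id = ?", (item_id,)).fetchone()
        if row is None:
            return "unknown item"
        if qty <= 0 or qty > row[1]:
            return f"cannot check out {qty} of {row[0]}"
        # the same SELECT appears again instead of reusing `row` from above
        row = conn.execute("SELECT name, stock FROM items WHERE id = ?", (item_id,)).fetchone()
        conn.execute("UPDATE items SET stock = ? WHERE id = ?", (row[1] - qty, item_id))
        conn.commit()
        # presentation concerns mixed into the same function
        return f"checked out {qty} x {row[0]}, {row[1] - qty} left"

Multiply that by every handler in the app and it becomes clear why changing anything breaks something else.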

You can of course focus on getting the AI to simplify and condense. But that requires a good understanding of the codebase. Definitely no longer vibe-coded.

My enthusiasm for the technology has really gone in a wave. From "WOW" when it churned out 10k lines of credible-looking code, to "Ohhhh" when I started getting into the weeds of the implementation and realising just how much of a mess it was. It's clearly very powerful for quick and dirty prototypes (and it seems to be particularly good at building decent CRUD frontends), but in software and user interaction the devil is in the details. And the details are a mess.


At the moment, good code structure for humans is good code structure for AIs and bad code structure for humans is still bad code structure for AIs too. At least to a first approximation.

I qualify that because hey, someone comes back and reads this 5 years later, I have no idea what you will be facing then. But at the moment this is still true.

The problem is, people see the AIs coding, I dunno, what, 100 times faster minimum in terms of churning out lines? And it just blows out their mental estimation models and they substitute an "infinity" for the capability of the models, either today or in the future. But they are not infinitely capable. They are finitely capable. As such they will still face many of the same challenges humans do... no matter how good they get in the future. Getting better will move the threshold but it can never remove it.

There is no model coming that will be able to consume an arbitrarily large amount of code goop and integrate with it instantly. That's not a limitation of Artificial Intelligences, that's a limitation of finite intelligences. A model that makes what we humans would call subjectively better code is going to produce a code base that can do more and go farther than a model that just hyper-focuses on the short-term and slops something out that works today. That's a continuum, not a binary, so there will always be room for a better model that makes better code. We will never overwhelm bad code with infinite intelligence because we can't have the latter.

Today, in 2026, providing the guidance for better code is a human role. I'm not promising it will be forever, but it is today. If you're not doing that, you will pay the price of a bad code base. I say that without emotion, just as "tech debt" is not always necessarily bad. It's just a tradeoff you need to decide about, but I guarantee a lot of people are making poor ones today without realizing it, and will be paying for it for years to come no matter how good the future AIs may be. (If the rumors and guesses are true that Windows is nearly in collapse from AI code... how much larger an object lesson do you need? If that is their problem they're probably in even bigger trouble than they realize.)

I also don't guarantee that "good code for humans" and "good code for AIs" will remain as aligned as they are now, though it is my opinion we ought to strive for that to be the case. It hasn't been talked about as much lately, but it's still good for us to be able to figure out why a system did what it did and even if it costs us some percentage of efficiency, having the AIs write human-legible code into the indefinite future is probably still a valuable thing to do so we can examine things if necessary. (Personally I suspect that while there will be some efficiency gain for letting the AIs make their own programming languages that I doubt it'll ever be more than some more-or-less fixed percentage gain rather than some step-change in capability that we're missing out on... and if it is, maybe we should miss out on that step-change. As the moltbots prove that whatever fiction we may have told ourselves about keeping AIs in boxes is total garbage in a world where people will proactively let AIs out of the box for entertainment purposes.)


Perhaps it depends on the nature of the tech debt. A lot of the software we create has consequences beyond a particular codebase.

Published APIs cannot be changed without causing friction on the client's end, which may not be under our control. Even if the API is properly versioned, users will be unhappy if they are asked to adopt a completely changed version of the API on a regular basis.

Data that was created according to a previous version of the data model continues to exist in various places and may not be easy to migrate.

User interfaces cannot be radically changed too frequently without confusing the hell out of human users.


> ask claude to come up with an entirely new version that fixes the tech debt/design issues you've identified?

I haven't tried that yet, so not sure.

Once upon a time I was at a company where the PRD specified that the product needed to have a toggle to enable a certain feature temporarily. Engineering implemented it literally, and it worked perfectly. But it was vital to be able to disable the feature too, which should've been obvious to anyone. Since the PRD didn't mention that, it was not implemented.

In that case, it was done as a protest. But AI is kind of like that, although out of sheer dumbness.

The story is meant to say that with AI it is imperative to be extremely prescriptive about everything, or things will go haywire. So doing a full rewrite will probably work well only if you manage to have very tight test case coverage for absolutely everything. Which is pretty hard.
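As a rough sketch of what I mean by pinning behavior down before letting the bot rewrite anything (module and function names hypothetical), the tests stay frozen while the bot regenerates the implementation behind them:

    # Hedged sketch: behavior-pinning tests kept fixed across an AI rewrite.
    # "inventory", add_item and total_on_hand are hypothetical names.
    from inventory import add_item, total_on_hand

    def test_total_counts_all_added_items():
        store = {}
        add_item(store, "widget", 3)
        add_item(store, "widget", 2)
        assert total_on_hand(store, "widget") == 5

    def test_unknown_item_is_zero():
        assert total_on_hand({}, "gadget") == 0

If the coverage is that tight everywhere, a regenerated implementation is at least held to the old behavior; the hard part is that very few real codebases have it.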


Take Claude Code itself. It's got access to an endless amount of tokens and many (hopefully smart) engineers working on it and they can't build a fucking TUI with it.

So, my answer would be no. Tech debt shows up even if every single change made the right decisions, and this kind of holistic view of a project is something AIs absolutely suck at. They can't keep all that context in their heads, so they are forever stuck in a local maximum. That has been my experience at least. Maybe it'll get better... any day now!


> well, your background just changed, didn't it?

The First Amendment is still in the Constitution and has not been formally repealed (yet).

So, no.


See also this metric, showing how fast the US is falling away from a democracy:

https://www.ft.com/content/b474855e-66b0-4e6e-9b73-7e252bd88...


Well yes, but the US was supposed to have three separate branches of government to keep each other in check.

Unfortunately, it turns out that in practice two of the three don't actually have any power at all when push comes to shove.


I think Congress does have power, it's just chosen not to wield it to control this presidency.

Based on what we've seen of the courts, I have doubts about that.

Congress does not have an army they can send out to enforce any law they pass, so it turns out the president can simply ignore it all without consequences. What are they going to do?


Courts don't have an army either. Only the executive has an army. Actually the president doesn't have an army. The generals have an army. You know we've never invented a system that stops the guys who have an army from taking over the guys who don't have an army, and we call it a coup d'etat, and it happens all over the world with some regularity. The best we can do is make sure the guys who have the army are guys who are committed to the wellbeing of the country.

> Courts don't have an army either. Only the executive has an army.

Exactly, that's the bug. Two of the three branches of government can only write sternly worded opinions on paper. Only one has the brute force to impose its will. So there really is only one branch of government in the US.


It was a long period of time spent voting for totalitarians. Checks and balances worked as designed: preventing immediate radical change. And they also worked as designed: allowing change gradually over time if people keep voting for the same thing. And now it's here.

No such studies can exist, since AI coding has not been around long enough yet.

Clearly AI is much faster and good enough to create new one-off bits of code.

Like, I tend to create small helper scripts for all kinds of things, both at work and at home, all the time. Typically these would take me 2-4 hours, and aside from a few tweaks early on, they receive no maintenance as they just do one simple thing.

Now with AI coding these take me just a few minutes, done.

But I believe this is the optimal productivity sweet spot for AI coding, as no maintenance is needed.

I've also been running a couple of experiments vibe-coding larger apps over the span of months, and while the initial ramp-up is very fast, productivity starts to drop off after a few weeks as the code becomes more complex and ever more full of special-case exceptions that a human wouldn't have done that way. So I spend more and more time correcting behavior and writing test cases to root out insanity in the code.

How will this go for code bases which need to continuously evolve and mature over many years and decades? I guess we'll see.


Absolutely!

On my desk I have an HP-28S and use it nearly every day.

In the office I have a newer HP, which isn't quite as nice to use as the 28S but still quite good.

The ergonomics of these are so far superior to using software apps that there is no comparison.


> Tesla secrecy is likely due to avoid journalists taking any chance they can to sell more news by writing an autonomous vehicles horror story

That would mean their secret data, if published, would support writing horror stories about it.

OTOH if the data turned out to show spectacularly safe operations, that would shut off any possible negative articles.

Of all people, how likely is it that Musk is intentionally avoiding good publicity by keeping a lot of data secret?


> I think the humans in London at least do not adjust their behaviour for the perceived risk!

Sure they do, all humans do. Nobody wants to get hurt and nobody wants to hurt anyone else.

(Yes, there are a few exceptions, people with mental disorders that I'm not qualified to diagnose; but the vast majority of normal humans don't.)

Humans are extremely good at moderating behavior to match perceived risk; thank evolution for that.

(This is what self-driving cars lack; machines have no self-preservation instinct.)

The key part is perceived, though. This is why building the road to match the level of true risk works so well. No need for artificial speed limits or policing: if people perceive the risk as what it truly is, they adjust instinctively.

This is why it is terrible to build wide four-lane avenues right next to schools, for example.

