Disagree. Enums are named for being enumerable which is not the same thing as simply having an equivalent number.
It’s incredibly useful to be able to easily iterate over all possible values of a type at runtime, or to otherwise handle enum values as enum values rather than as a leaky wrapper around an int.
If you let an enum be any old number, or make the user implement that themselves, they also have to implement the enumeration of those numbers, and they forgo any optimizations you can unlock by knowing ahead of time exactly what all possible values of a type are and how to quickly enumerate them.
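To make that concrete, here's a minimal Go sketch of runtime enumeration, assuming the common (but unenforced) convention of a dense const+iota block with a trailing sentinel:

    package main

    import "fmt"

    // Color is a "dense" enum: values start at 0 and increase by 1,
    // so a trailing sentinel doubles as the count of valid values.
    type Color int

    const (
        Red Color = iota
        Green
        Blue
        numColors // sentinel, not a real Color
    )

    // AllColors enumerates every valid Color at runtime.
    func AllColors() []Color {
        all := make([]Color, 0, numColors)
        for c := Color(0); c < numColors; c++ {
            all = append(all, c)
        }
        return all
    }

    func main() {
        fmt.Println(AllColors()) // [0 1 2]
    }

The moment someone assigns an arbitrary sparse value, that loop silently breaks.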
What’s a better representation: letting an enum with two values be “1245927” or “0”, or maybe even a float or a string, whatever the programmer wants? Or should they be 0 and 1, directly compiled into the program in a way that lets the programmer only ever think about the enum values and not the implementation?
IMO the first approach completely defeats the purpose of an enum. It’s supposed to be a union type, not a static set of values of any type. If I want the enum to be tagged or serializable to a string that should be implemented on top of the actual enumerable type.
They’re not mutually exclusive at all, it’s just that making enums “just tags” forces you to think about their internals even if you don’t need to serialize them and doesn’t give you enumerability, so why would I even use those enums at all when a string does the same thing with less jank?
> Enums are named for being enumerable which is not the same thing as simply having an equivalent number.
Exactly. Like before, in the context of compilers, it refers to certain 'built-in' values that are generated by the compiler, which is done using an enumerable. Hence the name. It is an implementation detail around value creation and has nothing to do with types. Types exist in a very different dimension.
> It’s supposed to be a union type
It is not supposed to be anything, only referring to what it is — a feature implemented with an enumerable. Which, again, produces a value. Nothing to do with types.
I know, language evolves and whatnot. We can start to use it to mean the same thing as tagged unions if we really want, but if we're going to rebrand "enums", what do we call what was formerly known as enums? Are we going to call that "tagged unions", since that term now serves no purpose, confusing everyone?
That's the problem here. If we already had a generally accepted term to use to refer to what was historically known as enums, then at least we could use that in place of "enums" and move on with life. But with "enums" trying to take on two completely different, albeit somewhat adjacent due to how things are sometimes implemented, meanings, nobody has any clue as to what anyone is talking about and there is no clear path forward on how to rectify that.
Perhaps Go even chose the "iota" keyword in place of "enum" in order to try and introduce that new term into the lexicon. But I think we can agree that it never caught on. If I, speaking to people who have never used Go before, started talking about iotas, would they know what I was talking about? I expect the answer is a hard "no".
Granted, more likely it was done because naming a keyword that activates a feature after how the feature is implemented under the hood is pretty strange when you think about it. I'm not sure "an extremely small amount" improves the understanding of what it is, but it at least tries to separate what it is from how it works inside the black box.
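For anyone who hasn't touched Go: iota is simply a counter that resets to 0 at each const block and increments per line, a value generator rather than a type feature. A minimal sketch:

    package main

    import "fmt"

    type Weekday int

    // iota resets to 0 in each const block and increments per line.
    const (
        Sunday Weekday = iota // 0
        Monday                // 1
        Tuesday               // 2
    )

    // It can also generate non-sequential values:
    const (
        _  = iota             // skip iota == 0
        KB = 1 << (10 * iota) // 1 << 10
        MB                    // 1 << 20
        GB                    // 1 << 30
    )

    func main() {
        fmt.Println(Monday, KB, MB, GB) // 1 1024 1048576 1073741824
    }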
The problem is that this makes enums non-enumerable. They need to be represented as a range or union type to do that. I am pretty sure I know why/how Go ended up like this, because it's inherited behavior from proto; I wrote a larger comment further down the thread explaining why.
I think I know why Go ended up without good enum support.
(Disclaimer: I formerly worked at Google and used proto/grpc/go there, and now run my own startup at github.com/accretional/collector which tries to address this problem with a type registry and fully reflective API. Not privy to the full history, just reasoning.)
Proto is designed so that messages can be deserialized into older/previous proto definitions by clients even if the server is responding with messages of a more recent version. Field numbers are what let you start serializing new fields (add a new field with an unused/the next number) or safely stop setting fields in proto responses (reserve a field) without risking older clients misinterpreting the data as belonging to some existing field they know about. This requires you to encode the field numbers alongside the field data in the proto wire format.
Two major problems: first, nothing in proto itself enforces that field numbers are assigned sequentially, because there is no single source of truth for the proto schema (you can still have one of your own, but it's not a "thing" in proto). Second, the whole point of field numbers is that they can be selectively missing/reserved/ignored, allowing you to deserialize messages without special handling for version changes in your code at runtime.
So field numbers aren't a dense, easily enumerable range of numbers; they're basically tags that can be any number between 1 and 536,870,911, except for the reserved range 19,000-19,999. That reserved range smells like serious tech debt/a design flaw that closes the door on fixing this at Google or anywhere else, because it sits arbitrarily in the middle of the field number range and is a leaked implementation detail from the internals. You couldn't build your own dense field number management/sequential enforcement system on top of proto without ripping that part out, but your existing proto usage relies on that part, and changing it would break existing clients because you're removing field numbers, which is the whole fucking point of proto, and that makes it difficult to roll out even if you did fix it yourself.
So, representing union/enumerable types in proto is impossible. For proto enums to have forward compatibility, they have to handle adding new enum values over time, or removing and reserving old ones. So proto enums end up being basically just field numbers. That's exactly what you see in Golang enums, and I don't think it's a coincidence: Google has no good way to serialize/deserialize/operate on enumerable enums or union types anywhere it uses proto/grpc. Golang inherits this "enums" implementation from protobuf because that's the context in which it was created.
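For reference, here is roughly the shape of the Go code protoc generates for an enum (simplified from memory, so treat names and details as illustrative, not exact output): an int32 wrapper plus name/value maps, with nothing making the value set dense or iterable as a range.

    package statuspb

    // Status is a simplified sketch of protoc-gen-go output for a
    // hypothetical `enum Status`.
    type Status int32

    const (
        Status_STATUS_UNSPECIFIED Status = 0
        Status_STATUS_ACTIVE      Status = 1
        Status_STATUS_DELETED     Status = 5 // values can be sparse
    )

    // The name/value maps are the only "enumeration" you get, and
    // they mirror whatever sparse numbers the .proto file declared.
    var Status_name = map[int32]string{
        0: "STATUS_UNSPECIFIED",
        1: "STATUS_ACTIVE",
        5: "STATUS_DELETED",
    }

    var Status_value = map[string]int32{
        "STATUS_UNSPECIFIED": 0,
        "STATUS_ACTIVE":      1,
        "STATUS_DELETED":     5,
    }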
What do you mean by “fixing this” or by it being a design flaw?
I agree with the point about sequential allocation, but that can also be solved by something like a linter. How do you achieve compatibility with old clients without allowing something similar to reserved field numbers to deal with version skew ambiguity?
I view an enum more as an abstraction to create subtypes, especially named ones. “Enumerability” is not necessarily required and in some cases is detrimental (if you design software in the way proto wants you to). Whether an enum is “open” or “closed” is a similar decision to something like required vs optional fields enforced by the proto itself (“hard” required being something that was later deprecated).
One option would be to have enums be “closed” and call it a day - but then that means you can never add new values to a public enum without breaking all downstream software. Sometimes this may be justified, but other times it’s not something that is strictly required (basically it comes down to whether an API of static enumerability for the enum is required or not).
IMO the Go way is the most flexible and sane default. Putting aside dedicated keywords etc., the “open by default” design means you can add enum values when necessary. You can still do dynamic closed enums with extra code, as sketched below. Static ones are still not possible without codegen, though. However, if the default were closed enums, you wouldn’t be able to use them when you wanted an open one, and you’d have to set things up the way Go does now anyway.
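A sketch of what that "extra code" for a dynamically closed enum might look like; the validity check is pure convention that callers must remember to invoke, since nothing stops Mode(42) from compiling:

    package mode

    import "fmt"

    type Mode int

    const (
        ModeRead Mode = iota
        ModeWrite
        modeSentinel // internal upper bound, not a real Mode
    )

    // Valid reports whether m is one of the declared values.
    func (m Mode) Valid() bool {
        return m >= 0 && m < modeSentinel
    }

    // ParseMode converts untrusted input into a validated Mode.
    func ParseMode(i int) (Mode, error) {
        if m := Mode(i); m.Valid() {
            return m, nil
        }
        return 0, fmt.Errorf("invalid Mode: %d", i)
    }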
No, it's because 99% of the time people use enums to give names to magic constants... That is it. Go went for simplicity and const+iota achieves it just fine. People act like enums make or break software or something.
That seems unlikely to me to be the actual explanation. It could very well be what you prefer or how you would do it, but I can definitely assure you that the Go/other infrastructure teams think about these problems and hear plenty of complaints about lack of union type support.
I'm not sure I understand your argument. You're saying that you can't really use enums for field numbers. And let's say that you're absolutely correct about that.
It still seems to me that you're addressing a completely separate issue from having a specific field that is an enum - not an enum of a field number, but an enum of something else, like encryption algorithm or SHA type or something.
I’m pointing out that proto’s enums are wrappers around proto field numbers, which are numerical ids for proto message fields that enable forward/backward compatibility across proto message version changes. These field numbers are necessarily sparse but don’t even need to be assigned sequentially, and in fact, because of the leaked implementation details from the internal field number range you have to assume they are non-sequential. So field numbers are sparse, which makes them non-enumerable even though they’re represented with numeric tags.
Hence, proto enums are essentially non-enumerable wrappers around numeric values. And this is (almost certainly) why Golang’s enums are structured the same way, as transparent wrappers around values that are not necessarily sequential or enumerable.
C++ does sparse enums just fine. Are you saying that those are not "real" enums because they're sparse? Or that C++ doesn't have "real" enums because it allows that? Or what?
I'm a big agent proponent myself but I don't think these kinds of companies actually exist yet. It's gotta either be some CTO who learned the word "orchestration" or "agent harness" and decided to play around with that stuff, or pure fantasy from the usual suspects of VC-twitter trying to build FOMO/signal themselves as part of the "in crowd"/drive engagement with hyperbole.
> Even if there is a "fully vibe-coded" product that has real customers, the fact that it's vibe-coded means that others can do the same.
I think you are strawmanning what "vibe coders" do when they build stuff. It's not simple one-shot generation of eg twitter clones, it's really just iterative product development through an inconsistently capable/spotty LLM developer. It's not really that different from a product manager hiring some cheap developer and feeding them tasks/feature requests. By the way, competitors can hire those and chip away at your moat too!
> Unless you have a secret LLM or some magical prompts that make the code better/more efficient than your competitions, your vibe coded product has no advantage over competition and no moat
This is just not true, and you kind of make my point in the next sentence: many companies' competitive advantages come from distribution, trust, integration, regulatory position, marketing/sales, and network effects. But also, vibe coding is not really about prompts so much as it is about product iteration. Any product can be copied already, yet people still make way more new products than direct product clones, because it's usually more valuable to go to market with stronger, more focused, or more specialized/differentiated software than with a copy.
How many 1+ hour videos of someone building with AI tools have you sought out and watched? Those definitely exist; it sounds like you didn't go seeking them out or watch them, because even with 7 fewer hours you'd understand where these tools add value well enough to believe they can help with challenging projects.
So why should anybody produce an 8 hour video for you when you wouldn't watch it? Let's be real. You would not watch that video.
In my opinion most of the people who refuse to believe AI can help them with software work are just incurious/archetypical late adopters.
If you've ever interacted with these kinds of users, even though they might ask for specs/more resources/more demos and case studies or maturity or whatever, you know that really they are just change-resistant and will probably continue to be as long as they can get away with it being framed as skepticism rather than simply being out of touch.
I don't mean that in a moralizing sense btw - I think it is a natural part of aging and gaining experience, shifting priorities, being burned too many times. A lot of business owners 30 years ago probably truly didn't need to "learn that email thing", because learning it would have required more of a time investment than it would yield, due to being later in their career with less time for it to pay off, and having already built skills/habits/processes around physical mail that would become obsolete with virtual mail. But a lot of them did end up learning that email thing 5, 10, whatever years later when the benefits were more obvious and the rest of the world had already reoriented itself around email. Even if they still didn't want to, they'd risk looking like a fossil/"too old" to adapt to changes in the workplace if they didn't just do it.
That's why you're seeing so many directors/middle managers doing all these thought leader posts about AI recently. Lots of these guys 1-2 years ago were either saying AI is spicy autocomplete or "our OKR this quarter is to Do AI Things". Now they can't get away with phoning it in anymore and need to prove to their boss that they are capable of understanding and using AI, the same way they had to prove that they understood cloud by writing about kubernetes or microservices or whatever 5-10 years ago.
> In my opinion most of the people who refuse to believe AI can help them with software work are just incurious/archetypical late adopters.
The biggest blocker I see to having AI help us be more productive is that it transforms how the day to day operations work.
Right now there is some balance in the pipeline of receiving change requests/enhancements, documenting them, estimating implementation time, analyzing cost and benefits, breaking out the feature into discrete stories, having the teams review the stories and 'vote' on a point sizing, planning on when each feature should be completed given the teams current capacity and committing to the releases (PI Planning), and then actually implementing the changes being requested.
However if I can take a code base and enter in a high level feature request from the stakeholders and then hold hands with Kiro to produce a functioning implementation in a day, then the majority of those steps above are just wasting time. Spending a few hundred man-hours to prepare for work that takes a few hundred man-hours might be reasonable, but doing that same prep work for a task that takes 8 man-hours isn't.
And we can't shift to that faster workflow without significant changes to the entire software pipeline. The entire PMO team dedicated to reporting on when things will be done shifts if that 'thing' is done before the report to the PMO lead is even finished. Or we need significantly more resources dedicated to planning enhancements so that we could have an actual backlog of work for the developers. But my company appears to be interested in neither shrinking the PMO team nor expanding the intake staff.
It could be really beneficial for Anthropic to showcase how they use their own product; since they're developers already, they're probably dogfooding their product, and the effort required should be minimal.
- A lot of skeptics have complained that AI companies aren't specific about how they use their products, and this would be a great example of specificity.
- It could serve as a tutorial for people who are unfamiliar with coding agents.
- The video might not convince people who have already made up their minds, but at least you could point to it as a primary source of information.
These exist. Just now I tried finding such a video for a medium-sized contemporary AI devtools product (Mastra), and it took me only a few seconds to arrive at https://www.youtube.com/watch?v=fWmSWSg848Q
There could be a million of these videos and it wouldn't matter, the problem is incuriosity/resistance/change-aversion. It's why so many people write comments complaining about these videos not existing without spending even a single minute looking for them: they wouldn't watch these videos even if they existed. In fact, they assume/assert they don't exist without even looking for them because they don't want them to exist: it's their excuse for not doing something they don't want to do.
That video was completely useless for me. I didn't see a single thing I would consider programming. I don't want to waste time building workflows or agents; I want to see them being used to solve real-world, difficult problems from start to finish.
> How many 1+ hour videos of someone building with AI tools have you sought out and watched?
A lot; they've mostly all been advertising tripe and completely useless.
I don't want a demonstration of what a jet-powered hammer is by the sales person or how to oil it, or mindless fluff about how much time it will save me hammering things. I want to see a journeyman use a jet-powered hammer to build a log cabin.
I am personally not seeing this magic utopia. No one wants to show me it, they just want to talk about how much better it is.
Back in the day when you found a solution to your problem on Stackoverflow, you typically had to make some minor changes and perhaps engage in some critical thinking to integrate it into your code base. It was still worth looking for those answers, though, because it was much easier to complete the fix starting from something 90% working than 0%.
The first few times in your career you found answers that solved your problem but needed non-trivial changes to apply it to your code, you might remember that it was a real struggle to complete the fix even starting from 90%. Maybe you thought that ultimately, that stackoverflow fix really was more trouble than it was worth. And then the next few times you went looking for answers on stackoverflow you were better at determining what answers were relevant to your problem/worth using, and better at going from 90% to 100% by applying their answers.
> it was much easier to complete the fix starting from something 90% working than 0%.
As an expert now though, it is genuinely easier and faster to complete the work starting from 0 than to modify something junky. The realplayer example above I could do much faster, correctly, than I could figure out what the AI code was trying to do with all the effects and refactor it correctly. This is why I don't use AI for programming.
And for the cases where I'm not skilled, I would prefer to just gain skill, even though it takes longer than using the AI.
Anecdotally, I think you're right that the more skilled you are at something, the less utility there is in something that quickly but incompletely takes you from 0 to 90%.
But I would generally be skeptical of anybody who claims that all their work is better off starting from 0, the same way I'd be skeptical of someone who claims to not use or need to make google searches about docs/terms/issues as they work.
I'll give you an example of something I understand decently well but get a lot of use out of AI for: bash scripts and unit testing. These are not my core work but they are a large chunk of my work. Without LLMs I would just not write a lot of bash scripts because I found myself constantly looking things up and spending more time than expected getting the script to work across environments / ironing out bugs - I would only write absolutely essential scripts, and generally they'd not be polished enough to check in and share with the team, and just live on my computer in some random location. Now with LLMs I can essentially script in english and get very good bash scripts, so I write a lot more of them and it's easier for me to get them into an acceptable state worth sharing with my team.
Similarly, I really like Golang table tests but hate writing all the cases out and dealing with all the symbols/formatting. Now I can just describe all the different permutations I want and get something that I can lightly edit into being good enough.
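For context, a typical Go table test looks something like this (Abs is a stand-in function so the sketch is self-contained); all the struct literals and alignment are exactly the boilerplate an LLM churns out well:

    package mathx

    import "testing"

    // Abs is a stand-in so the example compiles on its own.
    func Abs(x int) int {
        if x < 0 {
            return -x
        }
        return x
    }

    func TestAbs(t *testing.T) {
        tests := []struct {
            name string
            in   int
            want int
        }{
            {"positive", 5, 5},
            {"negative", -5, 5},
            {"zero", 0, 0},
        }
        for _, tt := range tests {
            t.Run(tt.name, func(t *testing.T) {
                if got := Abs(tt.in); got != tt.want {
                    t.Errorf("Abs(%d) = %d, want %d", tt.in, got, tt.want)
                }
            })
        }
    }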
I've also found that with domains I am knowledgeable enough about, that can translate into being better at going from ~70% to 95% with AI too. In those cases I am not necessarily using AI the same way as someone trying to go from 0->90%: usually they're describing the outcome/goals/features they want relatively informally, without knowledge of the known-unknowns and gotchas involved in implementing that. With more knowledge you can prompt LLMs with more implementation/design details and requirements, and course-correct away from bad approaches much faster than someone who doesn't know the shape of what they're trying to do. That still comes in handy a lot of the time.
Think about how much time you can save by feeding an API spec/docs into an LLM and telling it to create a Go struct for JSON (de)serialization of some monstrous interface like https://docs.cloud.google.com/compute/docs/reference/rest/v1... Or how much easier it is to upgrade across breaking versions of a language/library when you can just bump the version, note all the places where the old code broke, and have an LLM with an upgrade guide/changelog do all the drudgery of fixing each of the 200 callsites you need to migrate to the next version.
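As a flavor of the first case, here's the kind of struct you'd get back; the fields below are a hand-picked, illustrative subset, not the real (enormous) Compute API surface:

    package compute

    // Instance is an illustrative subset of a large REST resource.
    // An LLM can generate the full struct, with hundreds of fields
    // and nested types, from the API docs in one pass.
    type Instance struct {
        Name              string             `json:"name,omitempty"`
        MachineType       string             `json:"machineType,omitempty"`
        Status            string             `json:"status,omitempty"`
        Labels            map[string]string  `json:"labels,omitempty"`
        NetworkInterfaces []NetworkInterface `json:"networkInterfaces,omitempty"`
    }

    type NetworkInterface struct {
        Name      string `json:"name,omitempty"`
        NetworkIP string `json:"networkIP,omitempty"`
    }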
The difference is you’re generally retooling for your purpose rather than scouring for multiple, easily avoidable screw ups that if overlooked will cause massive headaches later on.
Except for maybe an "Excel killer", all those things you listed are not things people are willing to pay for. Also agents are bad at that kind of work (most devs are bad at that stuff, it's why it was something people whined about even before agents).
And funnily enough there are products and tools that are essentially less bloated slack/discord. Have you heard of https://stoat.chat/ (aka revolt) or https://pumble.com/ or https://meet.jit.si/? If not I would guess it's for one of two reasons: not caring enough about these problems to even go looking for them yourself, or their lack of "bloatedness" resulting in them not being a mature/fully featured enough product to be worth marketing or adopting.
If you'd like to see a product mostly made with agents/for agents you can check out mine at https://statue.dev/ - we're making a static site generator with a templating and component system paired with user-story driven "agentic workflows" (~blueprints/playbooks for common user actions like "I need to add a new page and list it on the navbar" or "create a site from the developer portfolio template personalized for my github").
I would guess most other projects are probably in a similar situation as we are: agentic developer tools have only really been good enough to heavily use/build products around for a few months, so it's a typical few-month-old project. But agents definitely made it easier to build.
Not willing to pay for? How can you be sure? For example, explain why many gamers are ditching Windows for Linux and buying hardware from Valve... There must be a reason. Every person I talked to that uses Excel hates how slow it is; same for Teams and many other products. Finally, were the mentioned products built with vibe coding?
Generally if something is fast enough/efficient enough that a paying customer can use it without having to worry or actively think about performance and un-bloatedness, that's enough for them. The only people who might complain still are developers who are bothered by the inefficiency and are technically literate enough to notice it, and maybe the users with less powerful/capable devices than the ones the big paying customers use. Generally these groups of people are not the actual customers of these products.
The people who actually pay for slack and discord (eg enterprises that need a workplace chat app and decided to go with the "gold standard", consumers with discord servers and such) need the features, and the tradeoff of choosing features over efficiency is what causes that bloat. They just don't all need the exact same set of those features as the other customers. So because customers are willing to pay for all these features, the product tries to ship all of them and becomes bloated.
> Every person I talked to that uses Excel hate how slow it is
But do they make the purchasing decisions behind using Excel?
To be clear, I am not really arguing that bloat/overly enterprisey products are good. What I mean is that you don't see the world exploding with more elegant products now, with agents, for the same reason you didn't see it exploding with them before agents either: the people who pay for those products and build them for a living are not incentivized or necessarily even rewarded for making them more efficient or elegant when there are other things customers are asking for with more $$$ behind them.
I did a lot of analysis and biz dev work on the "Excel killer" and came to the conclusion that it would be hard to get people to pay for.
For one thing most enterprises and many individuals have an Office 365 subscription to access Office programs which are less offensive than Excel so they aren't going to save any money by dropping Excel.
On top of it the "killer" would probably not be one product aimed at one market but maybe a few different things. Some people could use "visual pandas" for instance, something that today would be LLM-infused. Other people could use a no-code builder for calculations. The kind of person who is doing muddled and confused work with Excel wouldn't know which "killer" they needed or understand why decimal math would mean they always cut checks in the right amount.
Wrt statue.dev, good luck for sure with the project, but I personally don't need yet another static site generator: nextjs-like but with unpopular svelte, bloated with tons of node modules, creating another black hole impossible to escape from. If agents work this well, why would I need to use your library? I'd just tell an agent to maintain my static site; who cares which tech stack.
Consumer AI product posted on a weekend during prime European hours. Brace yourselves!
Actually I would consider this setup to not be very user friendly. This makes a lot of assumptions about the data/format you have available already. Personally I would assume that anything operating on my bank transactions would be through some more turnkey/handsoff integration rather than a direct import.
Puzzle basically does this by hooking directly into my bank and gives me other tools where I can easily use the categorizations
Well, sure. Accounting solution. But in a post about scripting your way to financial fun and games (and I think there are a fair share of people here who are locked into their accounting platforms/apps for various reasons), what solution does the API call to the bank (unavailable to the small player) and then gives you an API endpoint to get cleaned data into whichever accounting solution you happen to be using? Puzzle.io ain't going to do it at any price.
Only a very, very small fraction of open source projects get to the point where they legitimately need committees and working groups and maintainer politics/drama.
> quite a few people would consider "benevolent dictator for life" an outdated model for open source communities.
I think what most people dislike are rugpulls and when commercial interests override what contributors/users/maintainers are trying to get out of a project.
For example, we use forgejo at my company because it was not clear to us to what extent gitea would play nicely with us if we externalized a hosted version/deployment of their open source software (which they somewhat recently formed a company around, a move that led to forgejo forking it under the GPL). I'm also not a fan of what minio did recently to that effect, and am skeptical but hopeful that seaweedfs is not going to do something similar.
We ourselves are building out a community around our static site generator https://github.com/accretional/statue as FOSS with commercial backing. The difference is that we're open and transparent about it from the beginning, and static site generators/component libraries are probably among the least painful projects to fork if you take issue with their direction, vs critical infrastructure like a distributed system's storage layer.
Bottom line is, BDFL works when 1. you aren't asking people to bet their business on you staying benevolent 2. you remain benevolent.
> Only a very, very small fraction of open source projects get to the point where they legitimately need committees and working groups and maintainer politics/drama.
You’re not wrong, but those are the projects we’re talking about in this thread. uv has become large enough to enter this realm.
> Bottom line is, BDFL works when 1. you aren't asking people to bet their business on you staying benevolent 2. you remain benevolent.
That second point is doing a lot of heavy lifting. All of the BDFL models depend on that one person remaining aligned, interested, and open to new ideas. A lot of the small projects I’ve worked with have had BDFL models where even simple issues like the BDFL becoming busy or losing interest became the death knell of the project. On the other hand, I can think of a few committee-style projects where everything collapsed under infighting and drama from the committee.