> why use HTMX when it really seems like (a heavier) Datastar-lite?
The reason to use htmx is that it has a simpler interface optimized for the majority use-case.
With htmx, you are largely tied to a request/reply paradigm. Something happens that triggers a request (e.g. user clicks a button, or some element scrolls into view), htmx sends the request, and then it processes the response. The htmx interface (`hx-get`, `hx-trigger`) is optimized to make this paradigm extremely simple and concise to specify.
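To make that concrete, a minimal sketch of the two triggers mentioned above (the endpoints are invented for illustration):

<!-- clicking sends GET /contact/1/edit and swaps the response into the button -->
<button hx-get="/contact/1/edit">Edit</button>

<!-- same request/reply flow, but the request fires when the element scrolls into view -->
<div hx-get="/news/page/2" hx-trigger="revealed"></div>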
Datastar's focus (last I checked) is on decoupling these two things. Events may stream to the client at any time, regardless of whether or not they were triggered by a specific action on the client, and they get processed by Datastar and have some effect on the page. htmx has affordances for listening to events (SSE extension, new fetch support) and for placing items arbitrarily on the page (out-of-band swaps), but if your use-case is a video game or a dashboard or something else where the updates are frequently uncorrelated with user actions, Datastar makes a lot of sense. It's a bit like driving a manual transmission.
Delaney is fond of saying that there's no need for htmx when Datastar can technically do everything htmx can [0]. But I think this misses the point of what makes htmx so popular: most people's applications do fit within a largely request/reply paradigm, and using a library that assumes this paradigm is both simpler to implement and simpler to debug. As an htmx maintainer, I often encourage people to even use htmx less than they want to, because the request/reply paradigm is very powerful and the more you can adhere to the browser's understanding of it, the more durable and maintainable your website will be [1].
Moreover, if htmx's real value is ajax request/response, then why are you introducing SSE as a first-class citizen now?
2. Datastar has data-on, and various other attributes, that allow for triggering far more actions than just backend requests, from far more (any) events. I'm glad to see that htmx is now following suit with hx-on, even if it is apparently limited in capabilities.
3. Datastar can do OOB-swaps just fine - that's literally the core functionality, via (their own, faster) idiomorph.
4. It's a misconception that Datastar is just for video games etc - again, as described above, it can do all of the simple things that HTMX can do, and more. And, again, why is HTMX introducing SSE if it's so apparently unnecessary and unwieldy?
5. What makes htmx popular is that it was the first library to make declarative fragment swapping easy. And Carson is just a god-tier marketer. It's nice to see that he's now realized that Delaney was on to something when he wanted to introduce all of these v4 features to HTMX 3 years ago, but was (fortunately for us happy users) forced to go make Datastar instead.
6. We haven't even talked about one of the key features - declarative signals. Signals are justifiably taking over all of the JS frameworks and there's even an active proposal to make them part of the web platform. D* makes them simpler to use than any of them, and in a tiny package.
I, Delaney, and all other D* users are grateful for HTMX opening this door. But I reiterate my original question - now that HTMX is becoming Datastar-lite, why not just use Datastar, given that the powerful extras don't add any complexity and it comes in a smaller package?
And here's the datastar one, edited for parity: [1]
<button data-on:click="@get('/contact/1/edit')">
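For comparison, the htmx counterpart being referenced (presumably, given the parity edit) looks something like:

<button hx-get="/contact/1/edit">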
The htmx one is simpler. There are fewer mini-languages to learn and the API makes more assumptions about what you want. As you noted, Datastar has more generalized mechanisms that are certainly less clunky than htmx's if you lean heavily into more signals- or event-driven behavior, but for (what I believe to be) the majority use-case of a CRUD website, htmx's simpler interface is easier to implement and debug. (For example: you will see the response associated with the request in the browser network tab; I'm not sure if Datastar has a non-SSE mode to support that, but it wouldn't be true for SSE.) To each their own.
As for "well then why implement X, Y, or Z," as the OP notes, refactoring to use fetch() means you get them largely for free, without compromising the nice interface. So why not?
That tiny difference is hardly a reason not to use datastar, especially when it brings SO MUCH more useful stuff - all in a smaller package.
Moreover, the fact that Datastar is more generalized is actually better - HTMX has vastly more (non-standards-compliant) attributes that you need to learn.
> I'm not sure if Datastar has a non-SSE mode to support that but it wouldn't be true for SSE.) To each their own.
My first point was literally saying that it has non-SSE and linked to the docs. You're not even trying to be objective here...
> So why not?
Yes, I have no problem with these things being implemented in v4. In fact, I celebrated it in my original post. I brought it all up because you were describing all of that as needless complexity in Datastar, but now you're implementing it.
Also, most of Datastar can be trivially disabled/unbundled because it's nearly all plugins. That is largely not the case for HTMX.
Thus far, you've simply strongly confirmed my initial hunch that HTMX v4 is unnecessary compared to Datastar.
Which is why HTMX is having to bolt on more gubbins. Because, although it's fewer characters to type, it's fundamentally complected and therefore less composable.
I'm sure you've already watched it, but if you haven't, I'd recommend Rich Hickey's talk Simple Made Easy.
I don't think the takeaway from Hickey's talk is just a blind 'simple is always good, easy is always bad'. It's actually important to offer 'easy' affordances in some cases. Htmx operates at a specific level of abstraction. This abstraction has been carefully chosen to support creating apps in a certain way. It's like a shortcut for creating a simple kind of hypermedia-driven app with some easy conveniences. I think this is a significant and important point in the design space.
The power in Datastar is YOU get to choose what plugins you use to build the API you want... want `data-get`? It's a few lines away from being yours! You can rebuild all of HTMX in Datastar, not the other way around. https://data-star.dev/examples/custom_plugin is a great intro
Yes, and the htmx philosophy is that it gives you some default functionality out of the box, it doesn't just hand you a bunch of building blocks and ask you to put everything together yourself. Both are valid approaches. I don't know why one side always has to come in and try to put down the other. Without htmx, there is no Datastar.
You are right! Without HTMX 1 and the choices for future dev there would be no Datastar! If you think having a full framework while still allowing a super easy way to make whatever API as you see fit is putting someone down, idk, ngmi.
That's not what I think. Just take a look at the top-level comments where Datastar supporters are coming in and doing the 'htmx 4 is just doing what Datastar does now, why do we need htmx' routine. First of all that's not even true, and secondly it's kinda transparent: whenever htmx gets discussed, the Datastar fans show up to naysay.
the top-level comment (mine) literally celebrated the v4 changes, and I've done nothing but show respect and gratitude for HTMX.
I was simply asking, though, what I might be missing now that HTMX is becoming a (heavier) Datastar-lite (without signals, among other things). Given that these changes were literally previously rejected by HTMX and caused Datastar to even come into existence, it seems wholly appropriate to be making the comparison here.
Also, I can't speak for others, but when I've brought up Datastar in other HTMX-centric discussions here and elsewhere, it's only when someone asks about things like SSE, idiomorph etc... I always say that if you're looking for those features, you might prefer to just use D... Now that those features are native to HTMX, I suppose they can just stay with it. But you get even more for less weight by moving to D
The largely nonsensical and overly-defensive responses from HTMX's devs/supporters have only made it clear to me that D* is the appropriate choice here.
Htmx supporters have explained why htmx over Datastar many times. This is not even the first thread about this topic. Even in this thread you can easily find people (including myself) explaining why htmx and what it does differently. We don't need to repetitively explain the same exact justifications every time someone asks. Do a little reading, it's all there.
If you are making technical decisions about web programming based on HN threads though, best of luck to you.
I make technical decisions based off of metrics. Last time we talked as soon as I showed you flamegraphs you yeeted. If you think D* version of morph is just idiomorph... please, make charts that actually show that to be the case.
I make decisions based off of metrics too, it's just that we disagree about the metrics. You think about milliseconds in download times while I think about Core Web Vitals. Both have their place of course, in different contexts.
I don't make any claims about the D* version of morph or idiomorph as I use neither of those.
> Why bother with v4 at all? If it dilutes that simpler interface?
v4 makes almost no changes to the interface, other than to flip inheritance to be off by default.
> I think that even with req/resp morph leads to a simpler majority use case and that's what Turbo and Datastar have both shown. No?
Although you can use the idiomorph extension for htmx, I personally don't think idiomorph is simpler, because there's an algorithm choosing what parts of the page get replaced based on the server response; I prefer to specify exactly what parts of the page get replaced in much simpler terms, a CSS selector, with `hx-target`.
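For example (IDs and endpoint invented for illustration):

<!-- the response body replaces exactly the element matched by the CSS selector -->
<button hx-get="/contact/1/edit" hx-target="#contact-1" hx-swap="outerHTML">Edit</button>
<div id="contact-1">...current contact markup...</div>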
Per [1] above, my style is to minimize partial page responses wherever possible, so the ones that I do have are bespoke and replace a specific thing.
With a highly dynamic page where you would normally start using a front end lib, idiomorph makes it so you can stick with the hypermedia approach instead.
Yes! I expect that I will mostly be sticking to `hx-target` though, for the reasons stated above.
My interest in htmx is more on the coarse-grained aspects of its interface, not the finer ones, which is a consistent theme in my writings about it [0].
hey Alex, I hope you are well. Datastar has had direct support for req/rep of HTML, JS, and JSON while still morphing for quite a while. They allow you to go as coarse as you want. Given the size and the ability to choose what plugins you actually need, it seems like Datastar is more in line with your wants at this point. Strange times.
> Purists have long claimed that a “truly” RESTful API should be fully self-describing, such that a client can explore and interact with it knowing nothing but an entrypoint in advance, with hyperlinks providing all necessary context to discover and consume additional endpoints.
> This never worked in practice. Building hypertext APIs was too cumbersome and to actually consume APIs a human needed to understand the API structure in a useful manner anyway.
Every time I read one of these comments I feel like DiCaprio's character in Inception going "but we did grow old together." HATEOAS worked phenomenally. Every time you go to a webpage with buttons and links in HTML that describe what the webpage is capable of (its API, if you will), you are doing HATEOAS [0]. That this interface can be consumed by both a user (via the browser) and a web scraper (via some other program) is the foundation of modern web infrastructure.
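A toy HTML fragment makes the point (URLs invented for illustration): the markup itself is the interface.

<!-- the hypermedia tells the client (human or scraper) exactly what it can do next -->
<a href="/orders/42">View order 42</a>
<form action="/orders/42/cancel" method="post">
  <button>Cancel order</button>
</form>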
It's a little ironic that the explosion of information made possible by HATEOAS happened while the term itself largely got misunderstood, but such is life. Much like reclaiming the proper usage of its close cousin, "REST," using HATEOAS correctly is helpful for properly identifying what made the world's largest hypermedia system successful—useful if you endeavor to design a new one [1].
I think you're misunderstanding the purpose of hateoas.
If we get down to the nuts and bolts, let's say on a json API, it's about including extra attributes/fields in your json response that contain links and information on how to continue. These attributes have to be blended with your other real attributes.
For example if you just created a resource with a POST endpoint, you can include a link to GET the freshly created resource ("_fetch"), a link to delete it ("_delete"), a link to list all resources of the same collection ("_list"), etc...
Then the client application is supposed to automatically discover the API's functionality. In the case of a UI, it's supposed to automatically discover the API's functionality and build a presentation layer on the fly, which the user can see and use. From our example above, the UI codebase would never have a "delete" resource button; it would have a generic button which would be created and placed on the UI based on the _delete field coming back from the API.
I worked for a company that was all hateoas, in the formal sense: explicitly structured around the concept, not just the sense that html has both data and actions via links. It worked, it was a real product, but it was slow and terrible to develop and debug.
The front end ui was entirely driven by the data/action payload, which exposed both the ui and the functionality.
I'm still not sure if it's because of the implementation or because there is something fundamental.
I came away from that thinking that the db structure, the dag and the data flow are what's really important for thinking about any problem space, and that any ui considerations should not be first class.
But I'm not a theorist, I just found a specific, real, formal working implementation in prod to be not great; it's a little hard even now to understand why.
Maybe it just works for purely text interfaces, and adding any design or dynamic interaction causes issues.
I think maybe it's that the data itself should be first class: that well-typed data should exist, and a system that allows any ui and behavior to be attached to that data is more important than an api saying what explicit mutations are allowed.
If I were to explore this, I think folders and files, spreadsheets, dbs, data structures - those are the real things, and the tools we use to mutate them are second order and should be treated as such. Any action that can be done on data should be defined elsewhere and not treated as being of the same importance, but idk, that's just me thinking out loud.
> I worked for a company that was all hateoas, in the formal sense: explicitly structured around the concept, not just the sense that html has both data and actions via links. It worked, it was a real product, but it was slow and terrible to develop and debug.
The web is also a real product, one that's (when not bloated with adtech) capable of being fast and easy to develop on. That other people have tried to do HATEOAS and failed to make it nice is part of why it's so useful to acknowledge as valid the one implementation that has wildly succeeded.
REST, including HATEOAS, was largely retrospective documentation of the architectural underpinning of the WWW by Roy Fielding (who played an important role in web technology from 1993 on, was the co-lead for the HTTP/1.0 spec, the lead for the original HTTP/1.1 spec, and also, IIRC, lead or co-lead on the original URL spec). The things it documented existed before it documented them.
> You aren't saying hypermedia/hyperlinks served by a backend equal hateaos are you?
That’s exactly what it is.
> hateaos is from 2000 isn't it? Long after hyperlinks and the web already existed.
> Over the past six years, the REST architectural style has been used to guide the design and development of the architecture for the modern Web, as presented in Chapter 6. This work was done in conjunction with my authoring of the Internet standards for the Hypertext Transfer Protocol (HTTP) and Uniform Resource Identifiers (URI), the two specifications that define the generic interface used by all component interactions on the Web.
This is straight from the intro of fielding’s doctoral dissertation.
REST / HATEOAS is basically one of the main architects of the web saying “these are the things we did that made the web work so well”. So yes, REST was published after the web already existed, but no that doesn’t mean that the web is not REST / HATEOAS.
Within the last 3 years. They had their own open sourced functional typescript framework that drove the front end.
You could use whatever lightweight rendering you wanted, mostly it was very minimal react but that hardly mattered. One thing that was a positive was how little the ui rendering choice mattered.
I don't really want to say more as it's unique enough to be equivalent to naming the company itself.
I agree. The “purist” REST using HATEOAS is the single most successful API architectural style in history by miles. It’s the foundation of the World-Wide Web, which would not have been anywhere near as successful with a different approach.
Totally agree, the web itself is absolutely HATEOAS. But there was a type of person in the 2000s era who insisted that APIs were not truly RESTful if they weren't also hypermedia APIs, and the only real benefit of those APIs was to enable overly generic API clients that were usually strictly worse than even clumsily tailored custom clients.
The missing piece was having machines that could handle enough ambiguity to "understand" the structure of the API without it needing to be generic to the point of uselessness.
> there was a type of person in the 2000s era who insisted that APIs were not truly RESTful if they weren't also hypermedia APIs
The creator of REST, Roy Fielding, literally said this loud and clear:
> REST APIs must be hypertext-driven
> What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period.
I appreciate the conceptual analogy, but that's not really HATEOAS. HATEOAS would mean your browser/client would be entirely responsible for the presentation layer, in whatever form you desired, whether it's buttons or forms or pages or not even a GUI at all, such as a chat interface.
The Web is not only true HATEOAS, it is in fact the motivating example for HATEOAS. Roy Fielding's paper that introduced the concept is exactly about the web, REST and HATEOAS are the architecture patterns that he introduces primarily to guide the design of HTTP for the WWW.
The concept of a HATEOAS API is also very simple: the API is defined by a communication protocol, 1 endpoint, and a series of well-defined media types. For a website, the protocol is HTTP, that 1 endpoint is /index.html, and the media types are text/html, application/javascript, image/jpeg, application/json and all of the others.
The purpose of this system is to allow the creation of clients and servers completely independently of each other, and to allow the protocols to evolve independently in subsets of clients and servers without losing interoperability. This is perfectly achieved on the web, to an almost incredible degree. There has never been, at least not in recent decades, a bug where, say, Firefox can't correctly display pages served by Microsoft IIS: every browser really works with every web server, and no browser or server dev even feels a great need to explicitly test against the others.
It's a broader definition of HATEOAS. A stricter interpretation with practical, real-world benefits is a fully self-contained RESTful API definition that the client can get in a single request from the server and use to construct the presentation layer in whole, with no further information except server responses in the same format. Or, slightly less strictly, a system where the server procedurally generates the presentation layer from the same API definition, rather than requiring separate frontend code for the client.
It is the original definition from Roy Fielding's paper. Arguably, you are talking about a more specific notion than the full breadth of what the HATEOAS concept was meant to inform.
The point of HATEOAS is to inform the architecture of any system that requires numerous clients and servers to interoperate with little ability for direct cooperation; and where you also need the ability to evolve this interaction in the longer term with the same constraint of no direct cooperation. As the dissertation explains, HATEOAS was used to guide specific fixes to correct mistakes in the HTTP/1.0 standard that limited the ability to achieve this goal for the WWW.
> HATEOAS would mean your browser/client would be entirely responsible for the presentation layer, in whatever form you desired, whether it's buttons or forms or pages or not even a GUI at all, such as a chat interface.
Browsers can alter a webpage with your chosen CSS, interactively read webpages out loud to you, or, as is the case with all the new AI browsers, provide LLM powered "answers" about a page's contents. These are all recontextualizations made possible by the universal HATEOAS interface of HTML.
Altering the presentation layer is possible precisely because HTML is a semantic API definition: one broad enough to enable self-description across a variety of domains, but specific enough that those applications can still be re-contextualized according to the user's needs and preferences.
Deriving a presentation layer from an API definition has no bearing on whether the client has to be stateful or not. The key difference for 'true' HATEOAS is that the API schema is sufficiently descriptive that the client does not need to request any presentation layer; arguably not even HTML, but definitely not CSS or JavaScript.
Dude, he literally mentions Java Applets as an example (they were popular back then; if it were written today, it would have been JavaScript). It's all there. Section 5.1.7.
It's an optional constraint. It's valid for CSS, JavaScript and any kind of media type that is negotiable.
> resource: the intended conceptual target of a hypertext reference
> representation: HTML document, JPEG image
A resource is abstract. You always negotiate it, and receive a representation with a specific type. It's like an interface.
Therefore, `/style.css` is a resource. You can negotiate with clients if that resource is acceptable (using the Accept header).
"Presentation layer" is not even a concept for REST. You're trying to map framework-related ideas to REST, bumping into an impedance mismatch, and not realizing that the issue is in that mismatch, not REST itself.
REST is not responsible for people trying to make anemic APIs. They do it out of some sense of purity, but the demands do not come from HATEOAS. They come from other choices the designer made.
I will concede the thrust of my argument probably does not fully align with Fielding's academic definition, so thank you for pointing me to that and explaining it a bit.
I'm realizing/remembering now that our internal working group's concept of HATEOAS was, apparently, much stricter to the point of being arguably divergent from Fielding's. For us "HATEOAS" became a flag in the ground for defining RESTful(ish) API schemas from which a user interface could be unambiguously derived and presented, in full with 100% functionality, with no HTML/CSS/JS, or at least only completely generic components and none specific to the particular schema.
"Schema" is also foreign to REST. That is also a requirement coming from somewhere else.
You're probably coming from a post-GraphQL generation. They introduced this idea of sharing a schema, and influenced a lot of people. That is not, however, a requirement for REST.
State is the important thing. It's in the name, right? Hypermedia as the engine of application state. Not application schema.
It's much simpler than it seems. I can give a common example of a mistake:
GET /account/12345/balance <- Stateless, good (an ID represents the resource, unambiguous URI for that thing)
GET /my/balance <- Stateful, bad (depends on application knowing who's logged in)
In the second example, the concept of resource is being corrupted. It means something for some users, and something else for others, depending on state.
In the first example, the hypermedia drives the state. It's in the link (but it can be on form data, or negotiation, for example, as long as it is stateless).
There is a little bit more to it, and it goes beyond URI design, but that's the gist of it.
It's really simple and not as academic as it seems.
Fielding's work is more a historical formalisation where he derives this notion from first principles. He kind of proves that this is a great style for networking architectures. If you read it, you understand how it can be performant, scalable, fast, etc, by principle. Most of the dissertation is just that.
From what I read on wiki, I'm not sure what to think anymore - it does at least sound in line with the opinion that current websites are actually HATEOAS.
I guess someone interested would have to read the original work by Roy (who seems to have come up with the term) to find out which opinion is true.
I worked on frontend projects and API designs directly related to trying to achieve HATEOAS, in a general, practical sense, for years. Browsing the modern web is not it.
I think you are confusing the browser with the web page. You probably think that the Javascript code executed by your browser is part of the "client" in the REST architecture - which is simply not what we're talking about. When analyzing the WWW, the REST API interface is the interface between the web browser and the web server, i.e. the interface between, say, Safari and Apache. The web browser accesses a single endpoint on the server with no prior knowledge of what that endpoint represents, downloads a file from the server, analyzes the Content-Type, and can show the user what the server intends to show based on that Content-Type. The fact that one of these content types is a language for running server-controlled code doesn't influence this one bit.
The only thing that would have made the web not conform to HATEOAS were if browsers had to have code that's specific to, say, google.com, or maybe to Apache servers. The only example of anything like this on the modern web is the special log in integrations that Microsoft and Google added for their own web properties - that is indeed a break of the HATEOAS paradigm.
I'm not confusing it. I was heavily motivated by business goals to find a general solution for HATEOAS-ifying API definitions. And yes, a web page implemented in HTML/CSS/JS is a facsimile of it in a certain sense, but it's not a self-contained RESTful API definition.
Again, you're talking about a particular web page, when I'm talking about the entire World Wide Web. The API of the WWW is indeed a RESTful API, driven entirely by hyperlinks. You can consider the WWW as a single service in this sense, where there is a single API, and your browser is a client of that service. The API of this service is described in the HTTP RFCs, the WHATWG living standard for HTML, and the ECMAScript standard.
Say I as a user want to read the latest news stories of the day in the NYT. I tell my browser to access the NYT website root address, and then it contacts the server and discovers all necessary information for achieving this task on its own. It may choose to present this information as a graphical web page, or as a stream of sound, all without knowing anything about the NYT web site a priori.
HATEOAS is hypertext as the engine of application state. When a person reads a webpage and follows links, it’s not HATEOAS, because the person is not an application.
HATEOAS and by-the-book REST don’t provide much practical value for writing applications. As the article says, a human has to read the spec, make sense of each endpoint’s semantics, and write code specific to those semantics. At that point you might as well hardcode the relevant URLs (with string templating where appropriate) rather than jumping through hoops and pretending every URL has to be “discovered” on the off chance that some lunatic will change the entire URL structure of your backend but somehow leave all the semantics unchanged.
The exception, as the article says, is if we don’t have to understand the spec and write custom code for each endpoint. Now we truly can have self-describing endpoints, and HATEOAS moves from a purist fantasy to something that actually makes sense.
> Like, you shouldn't be super happy about walking across a desert because your car can't drive on sand. "Look how simple legs are, and they work on sand".
I didn't say I was happy about it—I just said I needed to get across the desert!
I have typically understood the "Sufficiently Smart Compiler" to be one that can arrive at the platonic performance ideal of some procedure, regardless of how the steps in that procedure are actually expressed (as long as they are technically correct). This is probably impossible.
What I'm proposing is quite a bit more reasonable—so reasonable that versions of it exist in various ecosystems. I just think they can be better and am essentially thinking out loud about how I'd like that to work.
I'm fully on board with improving compilers. My issue is that you compare the current state of (some) dynamically-typed languages with a hypothetical future state of statically-typed languages.
You use `req.cookies['token']` as an example of a subtle bug in JavaScript, but this isn't necessarily inherent to dynamic typing in general. You could, for example, have a key lookup function that requires you to pass in a default value, or a callback to handle what occurs if the value is missing.
// hypothetical lookup API: the second argument runs when the key is missing
req.cookies.get('token', () => {
  throw new AuthFailure("Missing token")
})
I'd love the article to clarify this, because it stuck out to me as well. But as you pointed out, I think that's what they meant: the Lighthouse warning is called "H1UserAgentFontSizeInSection".
Hi, author here. The full quote is: "In my opinion, most websites should be using htmx for either:", and then I list two cases where I think htmx is appropriate.
In context, it's clear that I'm not saying "everyone should use htmx," but rather "if you are using htmx, here is how I recommend you do it."
As for the shiny object concern, I have a talk (which you can also find on this blog) called "Building the Hundred-Year Web Service" that dives into that question.
In the near future I'll write a blog about this, but the short answer is that even though more developers use REST incorrectly than not, it's still the term that best communicates our intent to the audience we are trying to reach.
Eventually, I would like that audience to be "everyone," but for the time being, the simplest and clearest way to build on the intellectual heritage that we're referencing is to use the term the same way they did. I benefited from Carson's refusal to let REST mean the opposite of REST, just as he benefited from Martin Fowler's usage of the term, who benefited from Leonard Richardson's, who benefited from Roy Fielding's.
> About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems.
This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the rollback to be temporary. [0]
> The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.
This is also incorrect. The organic evolution we actually have is that servers widely support the standardized method semantics in spite of the incomplete browser support. [1] When provided with the opportunity to take advantage of additional methods in the client (via libraries), developers use them, because they are useful. [2][3]
> Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing.
What you're describing isn't the de facto standard; it is the actual standard. GET is for reading and POST is for writing. The actual standard also includes additional methods, namely PUT, PATCH, and DELETE, which describe useful subsets of writing, and our proposal adds them to the hypertext.
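As a rough illustration of what "adds them to the hypertext" could mean (the exact attribute syntax is whatever the proposal specifies; this is only a sketch):

<!-- hypothetical: a form that issues a real DELETE instead of tunneling it through POST -->
<form method="DELETE" action="/articles/42">
  <button>Delete article</button>
</form>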
> Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.
You're not making an actual argument here, just asserting that it takes time—I agree—and that it has no value—I disagree, and wrote a really long document about why.
You're right about the quote, thanks for pointing that out. And somehow I can't find the original one anymore, which is frustrating. I replaced it with a different quote from the same guy saying the same thing elsewhere in the discussion.
[0] https://data-star.dev/essays/v1_and_beyond
[1] https://unplannedobsolescence.com/blog/less-htmx-is-more/