
I used Turbo + Stimulus with a CodeIgniter PHP backend. Later, I used HTMX, both with CodeIgniter and with Go. Now, I have migrated my second-brain web app (Brainminder) from HTMX to Unpoly.

I really liked HTMX, and I thank the authors for this marvelous library!

I switched from Turbo to HTMX because the latter is much more flexible, and I try to avoid Node.js as much as possible, only using it to compile some JavaScript code for Stimulus.

I finally moved from HTMX to Unpoly for the following reasons:

1. Layer support: Unpoly makes it easy to create layers and modal overlays, saving a lot of time and JavaScript code. You can achieve the same functionality with HTMX, but you have to write more code.

2. JavaScript code is better organized thanks to up.compile hooks.

3. HTMX and Unpoly treat fragments slightly differently. With HTMX, you have to use the out-of-band feature to update multiple fragments together. With Unpoly, you can simply include them in the response (and declare them on the front end, of course); see the sketch below.
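To make this concrete, here is a rough Go sketch of the difference from the server side (the fragment IDs, helper functions, and markup are all invented for illustration):

    package main

    import (
        "fmt"
        "net/http"
    )

    func renderList() string  { return "<ul>...</ul>" } // placeholder
    func renderCount() string { return "42 items" }     // placeholder

    func updateItems(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("HX-Request") != "" {
            // HTMX: the extra fragment must opt in via hx-swap-oob.
            fmt.Fprintf(w, `<div id="item-list">%s</div>`, renderList())
            fmt.Fprintf(w, `<div id="item-count" hx-swap-oob="true">%s</div>`, renderCount())
            return
        }
        // Unpoly: just include every fragment named in the X-Up-Target
        // request header; Unpoly extracts the parts it asked for.
        fmt.Fprintf(w, `<div id="item-list">%s</div><div id="item-count">%s</div>`,
            renderList(), renderCount())
    }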

In my opinion, Unpoly has a better-organized approach to everything. On the other hand, apart from the official documentation, it is difficult to find examples for some edge-case features.


I have a mini PC (Minisforum) running Debian as a server.

I use:

- Syncthing (https://syncthing.net/) to keep files synchronized between desktop and laptop computers

- WebDAV (https://github.com/hacdias/webdav) to access the files on the server from other applications

- Cryptomator (https://cryptomator.org/) to encrypt/decrypt sensitive directories (which are synchronized through Syncthing); Cryptomator also lets me access those directories via WebDAV

- MaterialFiles on Android to access the files on the server

I access my mini server from outside through a WireGuard VPN created on my Fritz!Box router.

Between home and office, I created a site-to-site WireGuard VPN between the two Fritz!Box routers.


I forgot to mention SFTPGo as well: https://sftpgo.com/


I wanted to try it months ago ... but I stopped when I read this in the install documentation:

To configure passwordless sudo, open the /etc/sudoers file and add a line of the form:

    %username ALL = (ALL) NOPASSWD: ALL

And the same user is supposed to have passwordless SSH access with a private key ...


Honest question: what's the problem with that? Hinging admin access for a machine on an SSH key seems like a not-too-unusual practice?


From a security point of view, I am not comfortable giving a user unlimited access to the server. I don't know what solution pgEdge is implementing, but granting full access to the server when it should only operate on PostgreSQL is a security concern for me.


The Getting Started guide definitely has a different mindset than what we would recommend for Production Ready, particularly if there are specific security requirements in mind. That said, it should be clearer, so we've reported this to our documentation team to make sure it is!


It could do better for sure, but it's just a Getting Started guide; I never consider that a Production Ready guide.


I use Syncthing in combination with Cryptomator for sensitive files, but there is also the LocalSend app: https://localsend.org/


Very good news that the Free Pascal version compiles and works correctly on Windows as well. As you said, Delphi was a huge barrier preventing other developers from contributing, but if we can use Lazarus, Heidi can receive a lot of help, not only for the Linux version but also for the Windows one. Thanks to Free Pascal / Lazarus, it can probably also be ported easily to macOS now.


> easily

Depending on how OS-independent it is, it might just be a matter of opening the project file and selecting Run -> Build (yes, the fact that Build is under Run is something that has always bothered me, but it has been like that for 22+ years now). However, the resulting app will be very "win-like". When I was making macOS builds using Lazarus, back when I cared about macOS, I always had a "Macize" function I called at startup (ifdef'd for Mac builds) that did things like replace the Ctrl modifier shortcuts with Command in menus (you can enumerate the menus, no need to do that by hand), move the About command to the Apple menu, etc. There are also some other things that you may feel like doing.

TBH, one thing I wish were possible with Lazarus (at the time; now I don't care much :-P) was to define different "layouts" per widgetset, so that you could use, say, a 'default' layout for Windows and Linux but a modified layout for macOS. Technically it is possible to design a form and then have another form inherit from it and apply modifications there, but that feels awkward to use for different layouts. It is mainly meant for creating forms you want to reuse while still making modifications, and it can be clunky in how it decides when to ignore changes in the base form. I don't use visual form inheritance, but I do use frames to design reusable controls visually, and I often have to edit the source code of form files that use frames to remove overrides after saving the form, so that changes to the frame are reflected in all forms that use it. This makes me want to add a readonly property to frames at some point :-P


These days, Delphi has a community license for non-commercial use.


Right, but having a single codebase and one development tool for Linux, Windows, and (probably) macOS would help a lot in reducing the effort to maintain the application.


Since the Linux version works, I wonder if it could work on Mac, too?


There is also PeaZip: https://github.com/peazip/PeaZip


Based on this comparison:

https://c3-lang.org/faq/compare-languages/

One would argue that the best C/C++ alternative/evolution language to use would be D. D also has its own cross-platform GUI library and an IDE.

I wonder why D hasn't achieved broader adoption.


I can only speak for myself:

1. It is so big.

2. It still largely depends on GC (less important actually)

It keeps adding features, but adding features isn't what makes a language worth using. In fact, that's one of the least attractive things about C++ as well.

So my guess:

1. It bet wrong on GC while trying to compete with C++.

2. After failing to get traction, it kept adding features, as if one of them would finally be the killer feature of the language.

3. Not understanding that the added features actually made it less attractive.

4. C++ then left the GC track completely and became a more low-level alternative too, at which point D ended up in a weird position: neither high-level enough to feel like a high-level alternative, nor low-level enough to compete with C++.

5. Finally: the fact that it has been around for so long without taking off makes it even harder for it to take off, because it's seen as a has-been.

Maybe Walter Bright should create a curated version of D with only the best features. But given how long it takes to create a language and a mature stdlib, that's WAY easier said than done.


The dmd compiler not being open source until 2017[1] made it more or less a non-starter for a great many use cases. That would have been okay in the 80s, but with tons of languages to choose from since the 90s/00s, your language needs something very special to sell licenses.

[1]: Specifically: "The Software is copyrighted and comes with a single user license, and may not be redistributed. If you wish to obtain a redistribution license, please contact Digital Mars."


I think the biggest issue has been trying to always chase the next big thing that eventually could bring mindshare to D, while not finishing the previous attempts, so there are quite a few half baked features by now.

Even Andrei Alexandrescu eventually refocused on C++, and is contributing to some of the C++26 reflection papers.


>while not finishing the previous attempts

I agree, and that applies to many software projects, not just programming languages.

>so there are quite a few half baked features by now

what are some of those half baked features?


The new allocators, some corner cases of destroy and destructors, not everything in Phobos is @nogc friendly, BetterC still chokes on many common C extensions, DIP 1000, the whole set of @live semantics.

Now there is a new GC being redesigned, and there are discussions about a possible Phobos V3.


> still chokes on many common C extensions

I play with ImportC occasionally; a lot of those can actually be opted out of by undef'ing __GNUC__ on the preprocessor invocation. idk why they don't do that. Oh, and now it chokes on C23 features as well, because the system cpp defines __STDC_VERSION__=202311L now. Edit: that was solved: dlang/dmd/pull/21372


You mean ImportC and not BetterC, right?


Right, got that one wrong.

Although, in the context of BetterC, there is a debate about making more regular features available in that mode as well.


thanks.

Interesting, I didn't know about the new GC or the possible Phobos V3.


Indeed, first get traction, then add as many features as you want and become Perl. That's the real carcinization.


This is spot on. With all due respect to his technical achievements (and maybe I'm just speaking for myself), Walter Bright very much has a "tryhard" persona online, which gives a lot of developers "the ick".


6. It has exceptions.

Many people consider that an anti-feature.


HeidiSQL is free software for people who work with databases, and aims to be intuitive to use. "Heidi" lets you connect to a variety of databases, like MariaDB, MySQL, Microsoft SQL, PostgreSQL, SQLite, Interbase and Firebird.

Since a few days ago, it is finally available as a native Linux version. The code has been ported from Delphi to Free Pascal / Lazarus.


I think a PKMS is strictly related to how each of us thinks. It's similar to project management/organizer tools. I also created my own (https://brainminder.it/) based on how I think: I prefer to organize items by type, with different fields that I can add and search. Instead of simply collecting ideas and thoughts, I'm trying to build a system that can help me evaluate and leverage what I've entered.


Reminds me of the Zettelkasten note-taking method.

I'm not confident I understand how it works from the site, though. A video of how it works would be helpful.


Hi, I know I didn't have time to describe it properly. Currently, I am focusing more on adding features and making it more usable. In simple terms:

1. It is possible to create different types of items: books, ideas, projects, tasks, etc.

2. Every item can have its own custom fields, such as author for a book or priority for a task.

3. All items are stored in a single SQLite table, so you can search through all items and edit them if necessary.

4. It is possible to establish relations between items: parent, child, or simple link.

5. There is a space called "Quickbox" where you can quickly register a link or a note to read later and transform it into an item.

6. Items can be part of one or more notebooks, such as Personal, Work, or Family.

I have many ideas to make it more useful, but some basic features are still missing, such as:

1. Attach images or documents to each item and access all the attachments as a separate library.

2. Multi-user support

3. Multilingual support

4. Kanban support for tasks.

The most interesting part for me is adding systems/structures that can help me analyze problems and find proper solutions.

This idea is still vague, but I'd like to implement workflows that can help me become a better thinker, improve my creativity, and enhance my ability to make rational decisions. I'd also like to integrate logic programming into the process, probably using Prolog.

I don't want to lose the manual aspect of thinking, so I'm considering creating prefilled documents to help study problems and find solutions.

I used Go and SQLite on the backend, with a PWA and HTMX on the frontend.


I'm not a fan of the complexity added by this and similar frameworks. PHP and Go are very different languages, so trying to replicate the concepts of one language in another is not, I think, a good idea.

One of the things I would discard is the use of an ORM library: every library adds another level of complexity and doesn't let you see what is happening when the SQL statements are built. In my opinion, it is better to create some simple methods for each object that implement the CRUD operations and build the SQL statements directly.
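To sketch what I mean, in Go with the standard database/sql package (table and columns invented for the example):

    import (
        "context"
        "database/sql"
    )

    type Note struct {
        ID    int64
        Title string
        Body  string
    }

    // Insert builds the SQL right here: nothing hidden behind a library.
    func (n *Note) Insert(ctx context.Context, db *sql.DB) error {
        res, err := db.ExecContext(ctx,
            "INSERT INTO notes (title, body) VALUES (?, ?)", n.Title, n.Body)
        if err != nil {
            return err
        }
        n.ID, err = res.LastInsertId()
        return err
    }

    // GetNote reads one row; the statement is visible at a glance.
    func GetNote(ctx context.Context, db *sql.DB, id int64) (*Note, error) {
        n := &Note{}
        err := db.QueryRowContext(ctx,
            "SELECT id, title, body FROM notes WHERE id = ?", id).
            Scan(&n.ID, &n.Title, &n.Body)
        if err != nil {
            return nil, err
        }
        return n, nil
    }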

It is possible to write a web application in Go using only a few libraries, for example for routing and authentication.

My favorite place to start is Autostrada: https://autostrada.dev/


> One of the things I would discard is the use of an ORM library ... In my opinion, it is better to create some simple methods for each object that implement the CRUD operations and build the SQL statements directly.

Have you done this for any complex system? I'd love to see you do it for AzerothCore: it has 298 tables, 3,010,875 rows across those tables, and one table (quest_template) has 105 columns.

Instead I've thrown SQLAlchemy in front of it and now I can query it without writing a single line of SQL.

I think tools are tools, and using the right tool at the right time is an important skill I think you've yet to develop.


Yes, I understand your point of view, but in my experience, ORM libraries where you create a class or structure and the library builds the SQL behind the scenes can suffer from some relevant issues:

1. You have no control over the generated SQL, and because it has to be generic and DB-agnostic, it might not be the best option for the database you are currently using.

2. When something doesn't work as expected, and that happens, they are difficult to debug (too many layers) and the issue is hard to find.

3. They are extremely inefficient, because they have to dynamically build the corresponding SQL every time the code runs. I'm sure most implement some caching mechanism to prevent this, but in any case it's a waste of resources.

This is just anecdotal, but I remember trying SQLAlchemy many years ago for a small Python program I was writing for a Raspberry Pi 3: it was extremely slow. So I removed the library and used the native database binding for MariaDB instead, and the speed improved a lot.

For PHP, the situation is worse because there is no application server (they exist, but are not widely used), so the code is regenerated on every request. This is the main problem in any large PHP project, such as Nextcloud. If they adopted FrankenPHP or RoadRunner, they could improve application performance a lot.


> 1. You have no control over the generated SQL

Depending on the tech in use, there's usually some sort of escape hatch, such as writing your own native SQL queries that take advantage of the exact functionality you need, while keeping the other 90% of plain CRUD on the automatically generated stuff.

Plus, nothing prevents you from putting complex querying in a DB view and then having a read-only mapping for the ORM against that, giving you the best of both worlds: using the DB for what it's good at, while keeping things relatively simple yet powerful on the app side.
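In Go, for example, GORM's Raw is exactly that kind of escape hatch (a minimal sketch; the model and query are illustrative, not from any real project):

    import (
        "time"

        "gorm.io/gorm"
    )

    type User struct {
        ID   uint
        Name string
    }

    // Hand-written SQL, but GORM still scans the rows into structs.
    func recentUsers(db *gorm.DB, cutoff time.Time) ([]User, error) {
        var users []User
        err := db.Raw(
            "SELECT id, name FROM users WHERE created_at > ?", cutoff,
        ).Scan(&users).Error
        return users, err
    }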


I too used to believe those were valid reasons not to use an ORM back in the day. That was easily 2013/2014. Since then, I've never found an ORM that gets in the way of just running raw SQL, as complex as I'd like; it'll also still give you all the magic once the response comes back.


If you're only doing CRUD, you can use any reputable query builder or ORM. But sometimes the best model for the business logic differs from the database tables, and the persistence methods are Load/Save or Add/Remove instead. That's when you want custom SQL, where the ORM/query builder is not great.

Laravel is great, but that's because it has nicely designed escape hatches and its architecture is very modular.


SQLAlchemy doesn’t get in the way of anything you might want to do. In fact, you can do a “textual” query and then have the response mapped to classes for you :-)


You can do that in every ORM, including the infamous Hibernate.


Writing SQL against systems much larger than that used to be the norm.

You are correct that "using the right tool at the right time" is important, and often, that right tool is SQL. Other times it's not. Unfortunately there are many developers who don't really know SQL, so every problem is ORM-shaped.


> Writing SQL against systems much larger than that used to be the norm.

Doing things the hard way was the norm until a better way was found.

Saying something was the norm in the past doesn't imply it was good.


> used to be the norm.

People also "used to" invest radioactive water and used radioactive cremes and toothpastes for health benefits in the 20's and 30's. So what's your point?


That any argument hinging on some arbitrary size of tables/rows/etc. is empirically disproven.

Moreover, any exaggerated example from a bygone time is beside the point, as many SQL-driven systems still exist today. I work on one such system, which is much larger than the example given, and not long ago, I increased performance of some ActiveRecord queries 1000x by simply rewriting them in SQL. (No hate against ActiveRecord, I use it regularly. It just takes a lot of discipline once you hit queries of a certain complexity.)


> ... and not long ago, I increased performance of some ActiveRecord queries 1000x by simply rewriting them in SQL.

But it was ActiveRecord that got you there in the first place, as a business, and enabled you to build anything quickly enough to meet market demand and therefore make money. Moving to a few raw SQL statements today to improve performance is called optimising, and everyone in every industry does that... _after the fact_. No one should be (pre-)optimising from day 1. That's a good place to start with an ORM.

Our industry is about balancing engineering knowledge with business knowledge and market forces: we have to accept that we can't write perfect code today, otherwise we won't have a job tomorrow. You have to get up and running now and optimise later, which might look like replacing some parts of an ORM's job with an optimised SQL statement.

(And again: no ORM is stopping you from running raw SQL. You can have both. It's foolish to throw out an entire ORM and everything it gives you because, "Remember when I optimised that one statement that one time?")


Sure, both tools are great options to have in your toolbox. My bigger issue was with the claim that you can't build an application bigger than arbitrary size X without an ORM, which is empirically untrue.

I will say that there are queries that take 2-3 minutes to write in SQL where you have to bang your head against the wall to make the problem fit into an ORM-shaped box (and vice versa).

A bigger problem is developers who have never truly learned to write SQL beyond a few basic statements, akin to a React developer who never really learned JavaScript.


No, sorry, the vice versa is not true.

You can write any possible SQL query in SQL, but you can't write all of them in an ORM (without falling back to native SQL queries)

SQL is strictly more powerful.

Sometimes it's also way more verbose.

So I agree that you should be able to leverage both tools, but they are not even remotely as powerful.

In every project we have used an ORM for, at some point we had to jump through hoops and write plain old SQL (depending on the language and framework, that's either simple or terribly complex), because we had to fix the N+1 query problem.

In every project, for more than 18 years!


And what's wrong with any of that?


> My bigger issue was with the claim that you can't build an application bigger than arbitrary size X without an ORM, which is empirically untrue.

I never said that. I said that it's harder to do and there's no reason not to use an ORM to make it easier.


> I increased performance of some ActiveRecord queries 1000x by simply rewriting them in SQL. (No hate against ActiveRecord, I use it regularly. It just takes a lot of discipline once you hit queries of a certain complexity.)

ActiveRecord is the problem there, though. Or any active-record-style ORM. That's why Hibernate/SQLAlchemy-style ORMs are so useful: they don't suffer from the same issues as ActiveRecord-style ORMs.


I totally agree with you. Writing raw SQL, and really just thinking of use cases in a more data-oriented way, leads I think to clearer and more maintainable code, even if it takes a little more time initially. Although even then, I have to say ChatGPT can spit out such good SQL and accompanying Go models now that it's not even that slow to do.

We've taken over a Rails codebase at work, and the number of O(n) queries being done instead of bulk queries is rather shocking. I'm sure one can write efficient queries using the Rails ORM, but it does seem to me that its presence allows for more misuse.


My experience is that if one has never thought in sets, only in iterations, then the proverbial "when you're a hammer, everything looks like a nail" kicks in, and every database query becomes a for-loop or, if it's Rails, an each-loop.
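In Go terms, the two mindsets look roughly like this (a sketch; sqlx.In is one way to expand the IN clause, and the schema is made up):

    import (
        "context"
        "database/sql"

        "github.com/jmoiron/sqlx"
    )

    // The each-loop mindset issues N+1 round trips, one query per ID:
    //   for _, id := range ids {
    //       db.QueryRowContext(ctx, "SELECT name FROM users WHERE id = ?", id)
    //   }

    // The set mindset: one round trip for the whole batch.
    func namesByID(ctx context.Context, db *sql.DB, ids []int64) (map[int64]string, error) {
        query, args, err := sqlx.In("SELECT id, name FROM users WHERE id IN (?)", ids)
        if err != nil {
            return nil, err
        }
        // Note: use sqlx.Rebind for drivers with $1-style placeholders.
        rows, err := db.QueryContext(ctx, query, args...)
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        names := make(map[int64]string, len(ids))
        for rows.Next() {
            var (
                id   int64
                name string
            )
            if err := rows.Scan(&id, &name); err != nil {
                return nil, err
            }
            names[id] = name
        }
        return names, rows.Err()
    }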



The ORM question is a very good example of the issue you describe. Laravel is great for getting a CRUD app started right now, iterating very quickly, and leaving implementation complexity to the framework. This isn't the right tool for every job, but it shines at those where it is. For example, you can drop an engineer with Laravel experience into pretty much any Laravel codebase; since the conventions are so strong, they'll probably understand your business logic upfront.

Additionally, Laravel ships with a huge swath of functionality out of the box. While you're still researching the best SQL library for a project, a Laravel developer has spun up the framework with a working Postgres connection and started being productive. There is no value in inspecting the SQL underneath, because the queries never get complex enough to warrant that. And if they do, you drop out of the ORM and write SQL.

As I said before: this isn't the best way to do everything, but a particular way that works very well for a specific kind of application. Go is simply a tool for other kinds of applications, where Laravel cannot compete.


> While you’re still researching the best SQL library for a project

People do this? In every language I've worked with, there's practically just one SQL library to use. And you just write a query, execute it, and map some results. Very basic.


Come on, that’s not what you really do, but an oversimplification that skimps on the actually hard parts.

You manage (as few as possible) connections to a DBMS, create prepared statements, cache and transform results, lazily iterate over large result sets, batch writes, manage schema migrations, and more. It's not very basic, unless you're cobbling together a prototype.
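In Go, for instance, even the "basic" path quickly grows this kind of machinery (a sketch; the schema is invented):

    import (
        "context"
        "database/sql"
    )

    // One transaction, one prepared statement reused across the batch.
    func insertBatch(ctx context.Context, db *sql.DB, names []string) error {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op once Commit has succeeded

        stmt, err := tx.PrepareContext(ctx, "INSERT INTO users (name) VALUES (?)")
        if err != nil {
            return err
        }
        defer stmt.Close()

        for _, name := range names {
            if _, err := stmt.ExecContext(ctx, name); err != nil {
                return err
            }
        }
        return tx.Commit()
    }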

> In every language I've worked with, there's practically just one SQL library to use.

Curious which languages those were. In practically all ecosystems I know, there are 2-4 contenders.


> My favorite place to start is Autostrada: https://autostrada.dev/

It's nice, but they could make it even nicer by adding succinct descriptions and/or pros and cons to the alternatives. Choosing a database probably needs no explanation, but choosing an HTTP router is different.

Also "Read configuration settings from env vars or command line switches". Both. Both is good.


What would you use if ORM is to be avoided?

Perhaps something like https://github.com/sqlc-dev/sqlc ?


My advice is to just use the standard library's `database/sql`. Every abstraction on top adds extra complexity and isn't really necessary. Just execute a query, map the results, and there you go.
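Roughly like this (a sketch; table and struct invented for the example):

    import (
        "context"
        "database/sql"
    )

    type Post struct {
        ID    int64
        Title string
    }

    func listPosts(ctx context.Context, db *sql.DB) ([]Post, error) {
        rows, err := db.QueryContext(ctx, "SELECT id, title FROM posts")
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var posts []Post
        for rows.Next() {
            var p Post
            if err := rows.Scan(&p.ID, &p.Title); err != nil {
                return nil, err
            }
            posts = append(posts, p)
        }
        return posts, rows.Err()
    }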


"Every abstraction on top adds extra complexity and isn't really necessary"

This. In my experience, every project with non-trivial query requirements starts out as "this ORM is nice, it takes away 90% of my work" and ends with "how do I get rid of that leaky abstraction layer that constantly makes my life harder?"


I'm using this at the moment: https://jmoiron.github.io/sqlx/

Didn't know about sqlc, it seems very interesting! Thanks for sharing!


Seconding sqlx, it's wonderful; it's not an ORM and doesn't need code gen.
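For anyone who hasn't tried it, the core of sqlx is about this much (a sketch; the driver name and schema are placeholders):

    import (
        "github.com/jmoiron/sqlx"
        _ "github.com/mattn/go-sqlite3" // or any other database/sql driver
    )

    type Item struct {
        ID    int64  `db:"id"`
        Title string `db:"title"`
    }

    func openItems(dsn string) ([]Item, error) {
        db, err := sqlx.Connect("sqlite3", dsn)
        if err != nil {
            return nil, err
        }
        var items []Item
        // Select runs the query and scans all rows into the slice;
        // Get is the single-row variant. No ORM, no codegen.
        err = db.Select(&items, "SELECT id, title FROM items WHERE done = ?", false)
        return items, err
    }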


sqlc is _excellent_ if you are comfortable with its limitations. I love it, even though I don't love the optional-field query pattern with NULL.
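For context, sqlc generates typed Go from annotated SQL, and nullable columns map to database/sql wrapper types, which is the NULL pattern referred to above. Roughly (a sketch from memory, not verbatim sqlc output):

    // queries.sql:
    //   -- name: GetItem :one
    //   SELECT id, title, note FROM items WHERE id = ?;
    //
    // The generated code looks roughly like:

    type Item struct {
        ID    int64
        Title string
        Note  sql.NullString // nullable column -> wrapper type, not *string
    }

    // Using the generated Queries type (needs context, database/sql, fmt):
    func printNote(ctx context.Context, q *Queries) error {
        item, err := q.GetItem(ctx, 42)
        if err != nil {
            return err
        }
        if item.Note.Valid { // the NULL dance
            fmt.Println(item.Note.String)
        }
        return nil
    }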


I would say: SQL is a DSL for interacting with DBs. If it's not doing what you want, consider a different DSL, like Gel, SurrealDB, or PRQL.

These also have the advantage of being programming language-agnostic.


Anti-ORM sentiment is a senior developer red flag. It indicates pretty clearly that an engineer views their work more as an art project, rather than valuing achieving the actual business goals in any reasonable time frame.


Or it can mean that the engineer is tired of rewriting ORM-generated queries into performant ones.

Sometimes it is better to use 'explain plan' once rather than clean up generated SQL filled with outer joins, table scans, and difficult-to-understand variable names.

The ORM code in this case can look more "pristine" but can cause the app to fail in production. And if you are using createNativeQuery everywhere, what is the point of an ORM?


80% of the time, the queries the ORM produces are just fine. For the remaining 20%, I code them myself (I think senior engineers know how to distinguish the two scenarios). Now, what I don't want to code myself is the transformation between rows and objects... that's what an ORM is for.


The author presented their opinion as broad-strokes general advice, and in that context it is poor. And specifically regarding database/sql: creating a bunch of pointers to scan values into for every query you write is the definition of insanity in all but the most performance-sensitive applications. We're talking microseconds (or even nanoseconds in some instances) on an operation typically measured in milliseconds.


You don’t need an ORM for that, though. I’ve used Scany in the past, and it was great. Raw, parameterized SQL that is easy to reason about and easy to scan into your structs:

https://github.com/georgysavva/scany
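Usage is roughly this (a sketch against the v2 API, from memory):

    import (
        "context"
        "database/sql"

        "github.com/georgysavva/scany/v2/sqlscan"
    )

    type User struct {
        ID   int64
        Name string
    }

    func activeUsers(ctx context.Context, db *sql.DB) ([]User, error) {
        var users []User
        // One call: execute the query and scan every row into the slice.
        err := sqlscan.Select(ctx, db, &users,
            "SELECT id, name FROM users WHERE active = $1", true)
        return users, err
    }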


These things exist on a spectrum of features, but at the end of the day they are all, at a bare minimum, mapping tables to objects. Pedantry around what technically counts as an ORM is not super productive. The fact is that defining a schema in one place and getting a whole slew of features out of it automatically is multiplicative to productivity.


This is the sweet spot, in my opinion. I haven't been in the .NET world for a few years, but there's a very similar library called Dapper. Best "ORM" I ever used.

https://github.com/DapperLib/Dapper


"better" .... Only the Sith deal in absolutes.

