> Text-based browsers and modern HTML, no success story in sight. Given the progress we see in web technologies, the gap will only widen, so much so that w3m and its friends might fall into oblivion.
This is a fun article and the conclusion is very real.
People shit on Gemini:// because “The web can support text documents”. They say this as if they are actually proposing a real solution. It’s true that the web _can_ support lightweight content (IE5 on Windows 3.1 - I was there, man), but the problem is that it _won’t_, because it consistently chooses not to. If you’ve ever tried to actually perform this experiment of running the web in text mode you will quickly realize how futile it truly is. Every step you take on a well-meaning site like lite.cnn.com is just one click away from transferring you to a bloated SPA that renders a blank screen on a text-based browser. You can disable JavaScript, or disable images, or jump through whatever hoops you want (increasingly hidden with every Firefox release that goes by), but that’s not going to actually work long term. The web is too extensible and feature hungry to support text-based content. It’s better to just use the web for the usual cool shit like WASM and WebRTC or whatever and admit that no one can help themselves and no amount of awareness is going to make the cookie consent banners go away.
Let’s take Gemini more seriously because it already has adoption and it works and it’s not perfect but it sure as fuck isn’t substack.
What's the difference between "let's encourage people to create gemini documents" and "let's encourage people to publish text/markdown documents on the www"?
That’s subtle but the Gemtext format is really really constrained, which forces people to do one thing: write text. Nothing else.
So, when you are on Gemini://, you know that you will only encounter linear text. You will read stuff, written by other people. It is really relaxing. I’m a huge fan of Gemini.
I would advise starting your Gemini journey by reading links on Antenna and Cosmos (which are link aggregators).
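For the curious, a whole Gemini request is just one TLS round trip; a minimal Node/TypeScript sketch (the capsule URL is only an example, and a real client should do TOFU certificate pinning instead of skipping verification):

```ts
// Minimal Gemini fetch: open TLS to port 1965, send the full URL + CRLF,
// read back "<status> <meta>\r\n" followed by the gemtext body.
// Sketch only -- a real client should pin certificates (TOFU) rather than
// setting rejectUnauthorized: false.
import * as tls from "node:tls";

function fetchGemini(url: string): Promise<string> {
  const { hostname } = new URL(url);
  return new Promise((resolve, reject) => {
    const socket = tls.connect(
      { host: hostname, port: 1965, servername: hostname, rejectUnauthorized: false },
      () => socket.write(`${url}\r\n`)          // the request is just the URL
    );
    let data = "";
    socket.setEncoding("utf8");
    socket.on("data", (chunk) => (data += chunk));
    socket.on("end", () => resolve(data));       // status line + body
    socket.on("error", reject);
  });
}

fetchGemini("gemini://geminiprotocol.net/").then(console.log);
```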
I'm not. I get the whole "the medium is the message" and why it feels appealing to some, but I don't subscribe to the idea that the only way to have proper digital hygiene is by restraining myself to this ascetic channel. I'd rather encourage more people to put content on the web in whatever form they think is best, and I'll leave it up to my user agent to filter out the noise.
The dream of course would be both: if you’re already writing textual content you might as well publish it on both protocols, so anyone can get to it with any tool they like. Gemtext can be trivially converted up to Markdown; the opposite direction is lossy but very doable.
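To make “trivially converted” concrete, here is a rough line-by-line sketch (gemtext is line-oriented, so headings, lists, quotes and preformatted blocks pass straight through and only the => link lines need rewriting):

```ts
// Gemtext -> Markdown is a single line-by-line pass. Headings (#), list items (*),
// quotes (>) and preformatted toggle lines (three backticks) carry over unchanged;
// only "=> url [label]" link lines need restructuring.
function gemtextToMarkdown(gemtext: string): string {
  const FENCE = "`".repeat(3);
  let preformatted = false;
  return gemtext
    .split("\n")
    .map((line) => {
      if (line.startsWith(FENCE)) {      // toggle preformatted mode
        preformatted = !preformatted;
        return line;
      }
      if (preformatted) return line;      // pass raw lines through untouched
      const link = line.match(/^=>\s*(\S+)\s*(.*)$/);
      if (link) {
        const [, url, label] = link;
        return `[${label || url}](${url})`;
      }
      return line;                        // headings, lists, quotes, plain text map 1:1
    })
    .join("\n");
}
```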
Quick question on gemini://: I have no idea what gemini:// is, but I typed gemini:// on my Mac and it prompted to open my iTerm shell. Is this normal behavior? I am using the Chrome browser.
I really like reading text with variable-width fonts. Gemini requires fixed-width fonts due to its terminal-based approach. Thus, I have no desire to use it ever.
I've only dabbled in Gemini so I don't know their names off the top of my head, but I tried out a number of GUI Gemini browsers in the past, and they're quite nice. Easy on the eyes, simple design, all the variable width fonts you could ask for if that's your bag.
Gemini is my go-to now when I need a recipe. Pick a recipe site, any recipe site, and it’s guaranteed to be the most painful experience you can have on the web on mobile, and only slightly less painful on a laptop. Pure fucking trash. And if you happen to be a recipe publisher who does this and is reading this, fuck you.
Enter Gemini. It can consistently give me a text-only version of the recipe that I can copy into a notes app if I want, with zero pain. Zero. Now I have my own set of "wtf are you doing Gemini" and "why are you hallucinating on this request" experiences at work with Gemini, but recipe extraction... the goat.
You go to a site with your text browser. An LLM loads and renders the content in memory and then helps convert that to a text-only interface for your TUI browser to display and navigate.
Apparently other systems are using a similar method.
A more pragmatic approach would be to run the content through something like readability[0] but leave navigation untouched. The AI could hallucinate and add content that isn't in the original, something accessibility tools don't do.
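That pragmatic route is only a few lines, assuming the jsdom and @mozilla/readability npm packages (the latter being the extraction library behind Firefox's Reader View):

```ts
// Deterministic article extraction: no model in the loop, so nothing can be
// hallucinated -- the output is always a subset of the fetched document.
// Assumes the jsdom and @mozilla/readability npm packages and Node 18+ fetch.
import { JSDOM } from "jsdom";
import { Readability } from "@mozilla/readability";

async function extractText(url: string): Promise<string> {
  const html = await (await fetch(url)).text();
  const dom = new JSDOM(html, { url });           // base URL keeps relative links resolvable
  const article = new Readability(dom.window.document).parse();
  if (!article) throw new Error("Readability could not find an article body");
  return `${article.title}\n\n${article.textContent}`;
}

extractText("https://lite.cnn.com").then(console.log);
```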
I do 90% of my browsing using Offpunk (reading blogs and articles) and, surprisingly, it often works better than a graphical browser (no ads, no popups, no paywalls). Of course, it doesn’t work when you really need JS.
Dillo uses something similar with rdrview; you can use rdrview://$URL (although I hacked the dpi plugin to use the rd:// 'protocol' for shortness).
It lacks the filter thingy, but it now has the dilloc tool which can print the current URL, open a new page... and with sed you can trivially reopen a page with an alternative from https://farside.link
You know, medium.com -> scribe.rip and the like.
But Dillo is not a terminal browser, although it's a really barebones one, and thanks to DPI and dilloc it can be really powerful: gopher, gemini, ipfs, man (and info in the future) and so on are available as simple plugins, written in sh, C or even Go. It has also been inspiring for both Offpunk and w3m, which have similar capabilities to Dillo for printing/mangling URLs and the like.
What I'd love is to integrate Apertium (or any translating service) with Dillo as a plugin, so by just running trans://example.com you could get any page translated inline without running tons of Google proprietary JS to achieve the same task.
I love the https://linux.org.ru forum and often they post interesting setups but I don't speak Russian.
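A rough sketch of how the trans:// idea could be prototyped outside Dillo first, assuming a self-hosted Apertium APy instance (the port, language-pair codes and response shape below are assumptions to check against your own install):

```ts
// Rough sketch of the trans:// idea: pull the page text and push it through a
// self-hosted Apertium APy instance. The endpoint path, default port 2737,
// "rus|eng" pair code and response shape are assumptions -- verify them
// against your APy installation and installed language pairs.
async function translatePage(pageUrl: string, langpair = "rus|eng"): Promise<string> {
  const html = await (await fetch(pageUrl)).text();
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ");                    // crude tag stripping, enough for a sketch
  const apy = new URL("http://localhost:2737/translate");
  apy.searchParams.set("langpair", langpair);
  apy.searchParams.set("q", text.slice(0, 5000)); // keep the request small for the demo
  const res = await (await fetch(apy)).json();
  return res.responseData.translatedText;
}

translatePage("https://www.linux.org.ru/").then(console.log);
```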
Linux is one of the last strong defenses for the idea that people should control the computers they own. On desktops and servers, root access is normal, and attempts to take it away do not work because software freedom is well established. On phones, that never happened. There is no real, mainstream “Linux for mobile,” and the result is a world of locked-down platforms where things like “sideloading” are treated as scary security risks instead of basic user rights. This makes it much easier for lawmakers to argue for removing root access on mobile devices, even though the same idea would be unrealistic on desktop systems.
A great deal of gratitude is owed to all the people who volunteer their free time to create the stable desktop environment we have free access to on Linux in 2026.
If you're interested in this aspect of user agency, you might like the "trustworthy technology" site a few friends and I are working on: https://aol.codeberg.page/eci/
"There is no real, mainstream “Linux for mobile,”"
Probably need to clarify, since Android is Linux. I assume you're referring to community-run distros. Unfortunately the issue is usually proprietary hardware that has to be reverse engineered, and nobody is willing to pay engineers full time to do that.
What I don't understand is why it is so much effort to use linux on a phone. Surely these 8 core ARM monsters these days should be more than enough to handle a full kernel. Hopefully it's not a driver issue where manufacturers only contribute the necessary drivers to the android kernel, not the linux one.
The ultimate question is "Would you give up the right of owning your machine to have access to services?", and with the decline of the rooting scene on Android, the answer is pretty clear.
Not really, it makes use of the Linux kernel, cages it in a pseudo-microkernel architecture since the Treble and Mainline refactorings, uses a Java userspace, and the NDK has a quite clear list of which APIs are allowed to be called.
Sure. But the malaise of smug people making decisions that are outside the scope of the software is creeping into Linux too. It is up to me to decide what is secure, not them.
It does get some coverage on Lobste.rs, the Fediverse and some pockets of the IRC world and retro comp community. If your point is that it has not hit mainstream adoption, I won’t argue that.
The web could in theory support text-first content, but it won't. The Gemini protocol, though not perfect, was built to avoid extensibility that inevitably leads us away from text. I long for the day more bloggers make their content available on Gemspace, the way we see RSS as an option on some blogs.
The web will continue to stray from text-first content because it is too easy to add things that are not text.
It's probably harder for bison to free range like deer these days. Deer are extremely agile and can leap most fences with ease. Deer are also pretty docile when they're not in rut. Outside of nature preserves it doesn't seem realistic.
Deer have become almost a nuisance species closer in to Chicago. I’ve seen them in Oak Park about 2 miles away from the nearest forest land. In River Forest, which actually contains forest preserve, things got so bad the village wanted to hire a firm to shoot the deer, but the residents were too shocked by that proposal and it never happened.
I’m in River Forest and the deer are a pain to deal with. They eat your plants, they’re not afraid of people (because they get hand fed) and they get hit by cars.
They’re lacking their natural predators — and the logical solution of introducing them is ruled out because the local forest preserves aren’t large enough to support wolf packs.
Maybe the coyotes will figure out how to take them down.
You need to shoot the people who are feeding them - that's the logical solution to the problem you posed 8) Their natural predators are now cars because that is how things are now.
An environment is whatever it is at a point in time. You have described how things are around you and that is the current normal. You may not like it or even understand it but that is how it is.
You have to decide whether deer should live within your domain or not. At the moment it sounds like they are a negative factor for you. When you have run out of deer, will you start on the coyotes? When you have run out of creatures with backbones, will you start on arthropods or amphibians?
Not really. The deer that thrive in suburban areas learn to watch for traffic. Even where deer vs car collisions are common, deer multiply well beyond what car traffic takes out. Really, hunting is the only way to thin the numbers.
Deer eat grass, they can thrive almost anywhere in North America just fine with or without people feeding them.
In suburbs they probably need to capture and slaughter some number of them to keep the numbers reasonable.
Deer can eat grass, but it's not their preferred food, and they can't thrive on it. They eat forbs, shoots, browse (twigs, buds, etc.) and mast like acorns (they are set up to deal with the large amounts of tannin in acorns).
"Although low quality forages such as mature grasses provide adequate nutrition to animals such as elk and cattle, the quicker digestive process of whitetails requires more readily digestible forages to fulfill their energy and protein requirements. On severely overpopulated and depleted ranges, white-tailed deer have starved to death with their stomachs full of low quality forages."
Point taken. Of course, again there is no shortage of shrubbery in suburban environments. And the last point is just what always happens when a species that evolved as prey is no longer hunted.
Well there was a lynx spotted in north Oak Park in the last couple-three years so there’s another potential predator, but yep, they definitely need predation. I’ve seen some sizable herds north of North Avenue in the forest preserve there (along with lots of bread put out by people who wanted to feed the deer). They’re a lot bolder there than south of North.
Look up to my post—the village proposed shooting the deer and residents decided that they’d rather have nuisance deer than see Bambi shot in their neighborhood. (There’s also the safety questions around shooting deer in residential neighborhoods to deal with as well.)
An additional data point is that Midewin's bison area is surrounded by a double fence - a barbed-wire one to keep the humans out and a stout steel one to keep the bison in.
The Fermilab bison used to have (probably still do) a sign in their field that said, amusingly, not to jump the fence into the field unless you can cross it in 9 seconds, because the bull can do it in 10. (grew up on the DuPage county side of Fermilab, got to take physics there too, which was awesome)
I realize bison can force down many fences, but that's what I mean. I've seen neighborhoods where deer thrive in the suburbs, largely grazing in people's yards and the medians on the roadway. They are sometimes even fed corn by the residents. Bison are not only much more destructive, they are sometimes quite violent and will charge and gore people without warning. They need to be on ranches with special fencing, or on preserves.
Alternatively, RemObjects makes Elements, also a RAD programming environment in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.
Yes, you can build cross-platform GUI apps with Delphi. However, that requires using Firemonkey (FMX). If you build a GUI app using VCL on Delphi, it's limited to Windows. If you build an app with Lazarus and LCL, you CAN have it work cross-platform.
I made the clarification because the comment I replied to mentioned Android, iOS, and macOS. There are many who used Delphi before FMX appeared and I thought it would be helpful to point out that VCL only makes Windows executables.
I’m always on the hunt for single language cross platform solutions, and I thought I knew every player in the field but had not heard of Elements before. So I followed your link enthusiastically. But these are just some of the excerpts from the website:
Java
Build code for any of the billions of devices, PCs and servers that run JavaSE, JavaEE or the OpenJVM.
.NET Core
The cross-platform .NET Core runtime is the future of .NET and will fully replace the current classic .NET 4.x framework when .NET Core 5 ships in late 2020.
It really seems like it was last updated sometime in the last decade. Not sure I want to base a future project on it.
> Alternatively, RemObjects makes Elements, also a RAD programming environment in which you can code in Oxygene (their Object Pascal), C#, Swift, Java, Go, or Mercury (VB) and target all platforms: .NET, iOS and macOS, Android, WebAssembly, Java, Linux, Windows.
Wait you can make Android applications with Golang without too much sorcery??
I just wanted to convert some Golang CLI applications to GUIs for Android, and I instead ended up giving up on the project and just started recommending people use Termux.
Please tell me if there is a simple method for Golang which can "just work" as Visual Basic-like glue code between a CLI and a GUI.
It's really pricey, and I am not sure if I could create applications for F-Droid if they aren't open source, or how it might go with something like remobjects.com/gold/
One of the key principles of F-Droid is that builds must be reproducible (I think), or at least open source and buildable by the F-Droid servers, but I suppose reproducibility would require having this software, which is paid in this case.
I started with VB6 so I'm sometimes nostalgic for it too but let's not kid ourselves.
We might take it for granted, but the React-like declarative top-down component model (as opposed to imperative UI) was a huge step forward. In particular that there's no difference between an initial render and a re-render, and that updating state is enough for everything to propagate down. That's why it went beyond the web, and why all modern native UI frameworks have a similar model these days.
> and why all modern native UI frameworks have a similar model these days.
Personally I much prefer the approach taken by solidjs / svelte.
React’s approach is very inefficient - the entire view tree is rerendered when any change happens. Then they need to diff the new UI state with the old state and do reconciliation. This works well enough for tiny examples, but it’s clunky at scale. And the code to do diffing and reconciliation is insanely complicated. Hello world in react is like 200kb of javascript or something like that. (Smaller gzipped, but the browser still needs to parse it all at startup). And all of that diffing is also pure overhead. It’s simply not needed.
The solidjs / svelte model uses the compiler to figure out how changes to variables result in changes to the rendered view tree. Those variables are wrapped up as “observed state”. As a result, you can just update those variables and exactly and only the parts of the UI that need to be changed will be redrawn. No overrendering. No diffing. No virtual DOM and no reconciliation. Hello world in solid or svelte is minuscule - 2kb or something.
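A minimal Solid sketch of what that fine-grained reactivity looks like in practice (assuming solid-js and its JSX transform): the component function runs once, and only the text node reading the signal is touched on each update:

```tsx
// SolidJS: the component body runs once; createSignal returns a getter/setter
// pair, and only the DOM text node that reads count() is updated on each tick.
// No virtual DOM, no diffing, no re-running of Counter itself.
import { render } from "solid-js/web";
import { createSignal } from "solid-js";

function Counter() {
  const [count, setCount] = createSignal(0);
  setInterval(() => setCount((c) => c + 1), 1000);
  return <p>Seconds elapsed: {count()}</p>;   // this expression is the only thing re-evaluated
}

render(() => <Counter />, document.getElementById("app")!);
```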
Unfortunately, swiftui has copied react. And not the superior approach of newer libraries.
The rust “Leptos” library implements this same fine grained reactivity, but it’s still married to the web. I’m really hoping someone takes the same idea and ports it to desktop / native UI.
>React’s approach is very inefficient - the entire view tree is rerendered when any change happens.
That's not true. React only re-renders down from where the update happens. And it skips over stuff that is provably unchanged -- which, fair, involves manual memoization hints. Although with React Compiler it's actually pretty good at automatically adding those so in practice it mostly re-renders along the actually changed path.
>And the code to do diffing and reconciliation is insanely complicated.
It's really not, the "diffing" is relatively simple and is maybe ~2kloc of repetitive functions (one per component kind) in the React source code. Most of complexity of React is elsewhere.
>The solidjs / svelte model uses the compiler to figure out how changes to variables result in changes to the rendered view tree.
I actually count those as "React-like" because it's still declarative componentized top-down model unlike say VB6.
> That's not true. React only re-renders down from where the update happens. And it skips over stuff that is provably unchanged -- which, fair, involves manual memoization hints.
React only skips over stuff that's provably unchanged. But in many - most? web apps, it rerenders a lot. Yeah, you can add memoization hints. But how many people actually do that? I've worked on several react projects, and I don't think I've ever seen anyone manually add memoization hints.
To be honest it seems a bit like Electron. People who really know what they're doing can get decent performance. But the average person working with react doesn't understand how react works very well at all. And the average react website ends up feeling slow.
> Most of complexity of React is elsewhere.
Where is the rest of the complexity of react? The uncompressed JS bundle is huge. What does all that code even do?
> I actually count [solidjs / svelte] as "React-like" because it's still declarative componentized top-down model unlike say VB6.
Yeah, in the sense that SolidJS and Svelte iterate on react's approach to application development. They're kinda React 2.0. It's fair to say they borrow a lot of ideas from react, and they wouldn't exist without react. But there are also a lot of differences. SolidJS and Svelte implement react's developer ergonomics while having better performance and a web app download size that is many times smaller. Automatic fine-grained reactivity means no virtual DOM, no vdom diffing and no manual memoization or anything like that.
They also have a trick that react is missing: your component can just have variables again. SolidJS looks like react, but your component is only executed once per instance on the page. Updates don't throw anything away. As a result, you don't need special react state / hooks / context / redux / whatever. You can mostly just use actual variables. It's lovely. (Though you will need a solidjs store if you want your page to react to variables being updated.)
>React only skips over stuff that's provably unchanged. But in many - most? web apps, it rerenders a lot. Yeah, you can add memoization hints. But how many people actually do that?
Even without any hints, it doesn't re-render "the entire view tree" like your parent comment claims, but only stuff below the place that's updated. E.g. if you're updating a text box, only stuff under the component owning that text box's state is considered for reconciliation.
Re: manual memoization hints, I'm not sure what you mean — `useMemo` and `useCallback` are used all over the place in React projects, often unnecessarily. It's definitely something that people do a lot. But also, React Compiler does this automatically, so assuming it gets wider adoption, in the longer run manual hints aren't necessary anyway.
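For anyone who hasn't used those hints, a hand-memoized child looks roughly like the sketch below (the component names are made up for illustration); React Compiler's pitch is that it inserts the equivalent of these wrappers automatically:

```tsx
// Manual memoization hints: React.memo skips re-rendering ExpensiveList when
// its props are shallow-equal, and useCallback keeps the onPick prop stable
// across parent re-renders so that shallow check actually passes.
import { memo, useCallback, useState } from "react";

const ExpensiveList = memo(function ExpensiveList(
  { items, onPick }: { items: string[]; onPick: (item: string) => void }
) {
  return <ul>{items.map((it) => <li key={it} onClick={() => onPick(it)}>{it}</li>)}</ul>;
});

function Parent({ items }: { items: string[] }) {
  const [query, setQuery] = useState("");
  const onPick = useCallback((item: string) => console.log("picked", item), []);
  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      {/* typing in the input re-renders Parent, but ExpensiveList is skipped */}
      <ExpensiveList items={items} onPick={onPick} />
    </>
  );
}
```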
>Where is the rest of the complexity of react?
It's kind of spread around, I wouldn't say it's one specific piece. There's some complexity in hydration (for reviving HTML), declarative loading states (Suspense), interruptible updates (Transitions), error recovery (Error Boundaries), soon animations (View Transitions), and having all these features work with each other cohesively.
I used to work on React, so I'm familiar with what those other libraries do. I understand the things you enjoy about Solid. My bigger point is just that it's still a very different programming model as VB6 and such.
Thanks for your work on react. I just realised who I’m talking to. *sweats* I agree that the functional reactive model is a very different programming model than VB6. We all owe a lot to react, even though I personally don’t use the react library itself any more. But it does seem a pity to me how many sloppy, bloated websites out there are built on top of react. And how SwiftUI and others seem to be trying to copy react rather than copy its newer, younger siblings, which had a chance to learn from some of react’s choices and iterate on them.
UI libraries aside, I’d really love to see the same reactive programming pattern applied to a compiler. Done well, I’m convinced we should be able to implement sub-millisecond patching of a binary as I change my code.
If there was sufficient interest in it, most performance issues could be solved. Look at Python or Javascript, big companies have financial interest in it so they've poured an insane amount of capital into making them faster.
Being slower than other mainstream languages isn't really a problem in and of itself if it's fast enough to get the job done. Looking at all the ML and LLM work that's done in Python, I would say it is fast enough to get things done.
Same here. I have a physical copy of Word 97, although TBH I use the classic licence key 11111-111111111 to "activate" it because it's easier.
It runs fine under WINE, and you can install the 3 service packs for it too. As released, when you try to save a .RTF file it actually doesn't. Not that that matters, but it's nice to have all the known bug fixes.
It runs inside the L2 cache on any modern-ish CPU. Even on a Core 2 Duo, it's fast.
It is hilarious and sad to recall that when it came out -- I was working for PC Pro magazine around then -- it was seen as big and bloated and sluggish compared to Office 95. The Mac version, Office 98, was a port of the Windows version, and Mac owners hated it.
Only if I don't need to do anything beyond the built-in widgets and effects of Win32. If I need to do anything beyond that then I don't see me being more productive than if I were using a mature, well documented and actively maintained application runtime like the Web.
That's not really true. Even in the 90s there were large libraries of 3rd party widgets available for Windows that could be drag-and-dropped into VB, Delphi, and even the Visual C++ UI editor. For tasks running the gamut from 3D graphics to interfacing with custom hardware.
The web was a big step backwards for UI design. It was a 30 year detour whose results still suck compared to pre-web UIs.
If it is made to allow C code to be combined with VB6 code easily, and a FOSS version of VB6 (and the other components it might use) is made available on ReactOS (and Wine; it would also run on Windows as well), then it might be better than using web technologies (and is probably better in a lot of ways). (There are still many problems with it, although it would avoid many problems too.)
Traditionally WINE uses QEMU on Apple Silicon to execute x86 binaries on an ARM CPU, so while I’m aware WINE Is Not an Emulator, there’s likely emulation happening in a lot of cases.
Whenever people bring this up I find it somewhat silly. Wine originally stood for "Windows Emulator". See old release notes ( https://lwn.net/1998/1112/wine981108.html ) for one example: "This is release 981108 of Wine, the MS Windows emulator." The name change was made for trademark and marketing reasons. The maintainers were concerned that if the project got good enough to frighten Microsoft, they might get sued for having "Windows" in the name. They also had to deal with confusion from people such as yourself who thought "emulation" automatically meant "software-based, interpreted emulation" and therefore that running stuff in Wine must have some significant performance penalty. Other Windows compatibility solutions like SoftWindows and Virtual PC used interpreted emulation and were slow as a result, so the Wine maintainers wanted to emphasize that Wine could run software just as quickly as the same computer running Windows.
Emulation does not mean that the CPU must be interpreted. For example, the DOSEMU emulator for Linux from the early 90s ran DOS programs natively using the 386's virtual 8086 mode, and reimplemented the DOS API. This worked similarly to Microsoft's Virtual DOS Machine on Windows NT. For a more recent example, the ShadPS4 PS4 emulator runs the game code natively on your amd64 CPU and reimplements the PS4 API in the emulator source code for graphics/audio/input/etc calls.
The problem is the word "emulator" itself. It's a very flexible word in English, but when applied to computing, it very often implies emulating foreign hardware in software, which is always going to be slow. Wine doesn't do that and was wise to step away from the connotations.
The way people get information online is changing rapidly.
I run a local makerspace. It is not quite the same thing as a local entertainment business, but there are certainly some similarities. We are local, and we are very event-based.
For the last 10 years, the way we would get new members was to host Meetups. Meetups are slowly bringing in fewer members. When I ask tour guests how they found out about us, they recently started saying that they found us on ChatGPT. They did not know what a makerspace was but they explained their problem and ChatGPT presented our space as a local solution. This has been good for us because we offer something useful to the community but struggle to explain it. In the old days of search, this was a problem because many people were not using the correct phrase to describe what we are. That doesn’t matter anymore.
How does a local business optimize for this though? I am not sure.
One of two ways. Yes, by scraping, even if it requires users to 'sell' their own browsing data to the AI companies because places like Discord lock them out.
Or, the other way is for particular event organizers to pay directly for their services to be advertised/incorporated into the LLM itself. Those that don't pay get more and more of their data erased from the LLM maybe?
The business directory for most of the telephone era was simply known as "the yellow pages". About once a quarter we get a color mailer with all the local plumbers, fencing companies, electricians etc. for homeowners who want a company that is actually licensed and insured.
Are you using "Meetups" to mean Meetup.com or just events in general? Meetup.com has completely gone to shit. Trying to find an event is super frustrating. They show the same events over and over. They don't enforce categorization. People mark online only events as in person and the platform doesn't care. They also started trying to charge users (people looking to attend events) instead of only planners (people hosting events) so it drives people away.
Sadly I don't know any better platform but it seems ripe for a new entry.
There is a meetup-like platform called Spontacts here in Germany. I suspect that for the moment it is only available for meetups in Germany, but who knows, maybe it'll expand internationally if it's successful.
What’s super depressing about Meetup.com are those Modal popups that want you to sign up for Pro. You can’t dismiss them. It’s like they’re intentionally destroying their product to squeeze the last remaining dollars from their users, which I assume are becoming fewer and fewer.