Ran into a similar problem with my blog this year. After spending some time trying to resolve it, I just gave up.
I can understand that every now and then Google changes its rules and validation procedures, so that what used to work suddenly gets removed from the index, given their fight with spam and slop. But what I'm struggling to understand is how the Google crawler and Google Search Console can be so bad that:
* the Google crawler suddenly stops fetching the sitemap, even though Google claims it's an important signal for the search engine
* requesting a sitemap refresh via GSC fails with an "unknown" error, which is puzzling considering that, according to my web logs, nobody even tried to load the sitemap between my request and the error (a quick way to rule out server-side problems is sketched at the end of this comment)
* after fixing an error, the validation job gets stuck for weeks, only to fail with an unclear error
* random deindexing events happen, as described in the post
And I don't buy the argument that this is necessary for Google to deal with spam, because Bing Webmaster Tools just works flawlessly, and they have to deal with it as well.
I don't understand how a small business is supposed to deal with this kind of issue.
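For reference, here is a rough sketch of an independent sitemap check (Python standard library only; the URL is a placeholder, substitute your own sitemap location): it fetches the sitemap, prints the HTTP status and content type, and verifies that the XML parses, so at least server-side problems can be ruled out regardless of what GSC reports.

```python
#!/usr/bin/env python3
"""Sanity-check that a sitemap is reachable and parseable,
independently of whatever Google Search Console reports.
The URL below is a placeholder for illustration."""
import sys
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder

def check_sitemap(url: str) -> int:
    req = urllib.request.Request(url, headers={"User-Agent": "sitemap-check/0.1"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = resp.read()
        print(f"HTTP {resp.status}, {len(body)} bytes, "
              f"Content-Type: {resp.headers.get('Content-Type')}")
    root = ET.fromstring(body)  # raises ParseError if the XML is malformed
    # Count <loc> entries regardless of the sitemap namespace prefix.
    locs = [el.text for el in root.iter() if el.tag.endswith("loc")]
    print(f"{len(locs)} <loc> entries, first: {locs[0] if locs else 'none'}")
    return 0

if __name__ == "__main__":
    sys.exit(check_sitemap(sys.argv[1] if len(sys.argv) > 1 else SITEMAP_URL))
```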
> What is the overhead on a FUSE filesystem compared to being implemented in the kernel?
The overhead is quite high: each filesystem request is forwarded by the kernel through /dev/fuse to a userspace daemon and the result is sent back, which means additional context switching and copying of data between user and kernel space.
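To make that concrete, here is a minimal read-only sketch of the userspace side, assuming the third-party fusepy bindings (the filesystem name and file contents are made up for illustration). Every getattr/readdir/read issued against the mounted path is dispatched by the kernel through /dev/fuse to these callbacks, and that round trip is where the extra switches and copies come from.

```python
#!/usr/bin/env python3
"""Minimal read-only FUSE filesystem sketch (assumes fusepy: pip install fusepy).
It exposes a single file, /hello.txt, to show that every VFS operation
becomes a userspace callback, i.e. a round trip through /dev/fuse."""
import errno
import stat
import sys

from fuse import FUSE, FuseOSError, Operations  # fusepy

HELLO = b"hello from userspace\n"

class HelloFS(Operations):
    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path == "/hello.txt":
            return {"st_mode": stat.S_IFREG | 0o444,
                    "st_nlink": 1,
                    "st_size": len(HELLO)}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello.txt"]

    def read(self, path, size, offset, fh):
        return HELLO[offset:offset + size]

if __name__ == "__main__":
    # e.g. python hellofs.py /tmp/hello ; then `cat /tmp/hello/hello.txt`
    FUSE(HelloFS(), sys.argv[1], foreground=True, ro=True)
```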
> Could something like eBPF be used to make a faster FUSE-like filesystem driver?
eBPF can't really address any of the problems I noted above. To improve performance, one would need to change how the interface between the kernel and the userspace part of a FUSE filesystem works, to make it more efficient.
That said, FUSE support for io_uring, which was merged recently in Linux 6.14, has potential there, see:
> Isn't it time to update the law and catch up with tech?
In the case of the Telegram groups you mention, local authorities can use existing laws regulating public social networks (Telegram really works similarly to Facebook in these use cases; these Telegram groups are neither private nor end-to-end encrypted). The fact that Telegram doesn't cooperate is not due to a lack of regulation, but a lack of leverage against it (which seems to be one of the reasons why authorities in France detained Telegram's CEO recently).
Yes, the first web browser/editor was a GUI application from the start, as was Tim's original idea. But that first browser/editor worked on NeXT machines only, which were very expensive and rare. Only a few people actually had the opportunity to experience the web this way, and most people saw this software in action only as a demonstration.
The first browser most people used when introduced to the web was a "dumb" command-line client: https://en.wikipedia.org/wiki/Line_Mode_Browser. It was kept as simple as possible so that it could be compiled on any platform and used over telnet; it didn't even use the curses library.
So early web users experienced the web via text browsers only, until the rise of GUI browsers later.
Also, most people could telnet to a host that already had the Line Mode Browser installed. I believe that's how a lot of non-NeXT users at CERN would be using it.
I wonder whether there are any new developments in digital archeology here that make the source complete enough to compile (assuming one has access to a NeXT machine with its app builder from the early 1990s).
One way to look at this code is as a quick prototype to turn the idea into a real thing to play with. And to appreciate that, one has to realize that the original idea included both reading and editing web pages easily, in the same client, in WYSIWYG fashion.
> I wrote the program using a NeXT computer. This had the advantage that there were some great tools available -it was a great computing environment in general. In fact, I could do in a couple of months what would take more like a year on other platforms, because on the NeXT, a lot of it was done for me already. There was an application builder to make all the menus as quickly as you could dream them up. there were all the software parts to make a wysiwyg (what you see is what you get - in other words direct manipulation of text on screen as on the printed - or browsed page) word processor. I just had to add hypertext, (by subclassing the Text object)