Great job on this release! I've been waiting for something like it since my favorite browser, Kiwi, stopped getting updates.
Without updates, many sites will likely stop working with it soon.
Kiwi had some great features, such as disabling AMP mode, rearranging the Chrome Store for mobile, and customizable tab layouts. These features might interest others as well.
I'm still (unfortunately) stuck with Kiwi as well. I use it almost exclusively for a few webapps that use large amounts of IndexedDB storage (>10 GB) without a working export method[1]. With Firefox, I was able to export this data with devtools over ADB[2] to another Firefox install.
I really wish someone would create an IndexedDB shim that interfaces with another system and only uses IndexedDB as a (very large) cache. Something I could drop in with a userscript would be lovely, even if it required running a local server with something like rsync or rclone responsible for the actual transfers (a rough sketch of the server half is below the footnotes).
[1]: Dexie's import/export used to work; now it never returns. I have no way of verifying that it's doing nothing without putting it in the background (thus suspending it...), but I've let it run for 3 hours with no results.
[2]: Firefox doesn't allow backing up app data for some reason, but devtools functions allow reading and writing the profile directory through terminal commands (zip the profile directory, then unzip and restart the browser).
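To show what I mean by the server half, here's a minimal sketch, assuming the userscript shim PUTs exported key ranges to localhost (every name and path here is hypothetical; rsync/rclone would sync the output directory):

    # Sketch of the local-server half: receives exported IndexedDB blobs
    # over HTTP and writes them where rsync/rclone can pick them up.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from pathlib import Path

    STORE = Path("idb-export")          # directory rsync/rclone watches
    STORE.mkdir(exist_ok=True)

    class Handler(BaseHTTPRequestHandler):
        def do_PUT(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            # the request path doubles as the object-store key,
            # e.g. PUT /mydb/records-0.json
            dest = STORE / self.path.lstrip("/").replace("/", "_")
            dest.write_bytes(body)
            self.send_response(204)
            self.end_headers()

    HTTPServer(("127.0.0.1", 8123), Handler).serve_forever()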
I agree with this take. Product Hunt felt like it was chasing short-term goals instead of building something sustainable. They also allowed, and sometimes encouraged, behavior that undermined the quality of the site.
The last time I used it, one of the common hacks was adding 50 makers to a single app launch. PH also openly condoned mass email blasts and tweets to drive votes, which just rewarded whoever could push the hardest on promotion.
In contrast, Hacker News discourages asking people for upvotes and even treats it as a negative if you do. That long-term focus on signal over hype is probably why HN still feels useful today while PH lost its way.
Thanks for the helpful reply! As I still wasn't able to fully understand it, I pasted your reply into ChatGPT and asked it some follow-up questions. Here is what I understand from that interaction:
- Big models like GPT-4 are split across many GPUs (sharding).
- Each GPU holds some layers in VRAM.
- To process a request, weights for a layer must be loaded from VRAM into the GPU's tiny on-chip cache before doing the math.
- Loading into cache is slow; the ops themselves are fast.
- Without batching: load layer > compute user 1 > load again > compute user 2.
- With batching: load layer once > compute for all users > move on to GPU 2, etc.
- This makes cost per user drop massively if you have enough simultaneous users.
- But bigger batches need more GPU memory for activations, so there's a max size.
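To sanity-check that picture, here's the toy throughput model I sketched from it (every number below is invented, purely to show the shape of the effect):

    # Toy decode-step model: one pass through all weights per token step.
    # All constants are made-up assumptions for illustration.
    weight_bytes = 2 * 70e9      # hypothetical 70B-param model at fp16
    bandwidth = 2e12             # assumed ~2 TB/s memory bandwidth
    flops = 1e15                 # assumed ~1 PFLOP/s of compute

    def tokens_per_second(batch):
        load_time = weight_bytes / bandwidth        # weights read once per step
        compute_time = batch * 2 * 70e9 / flops     # ~2 FLOPs per param per token
        return batch / (load_time + compute_time)

    for b in (1, 8, 64, 256):
        # throughput grows almost linearly until compute catches up
        print(b, round(tokens_per_second(b)))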
This does make sense to me, but does this sound accurate to you?
Would love to know if I'm still missing something important.
This seems a bit complicated to me. They don't serve very many models. My assumption is they just dedicate GPUs to specific models, so the model is always in VRAM. No loading per request - it takes a while to load a model in anyway.
The limiting factor compared to local is dedicated VRAM - if you dedicate 80 GB of VRAM locally 24 hours/day so response times are fast, you're wasting it most of the time, whenever you're not querying.
Loading here refers to loading from VRAM into the GPU's core cache. Loading from VRAM is so slow in terms of GPU time that the GPU cores end up idle most of the time, just waiting for more data to come in.
But you still have to load the data for each request. And in an LLM, doesn't this mean the WHOLE KV cache, because the KV cache changes after every computation? So why isn't THIS the bottleneck? Gemini is talking about a context window of a million tokens - how big would the KV cache for this get?
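Back-of-envelope for that last question, with made-up model dimensions (these are not Gemini's real numbers):

    # Rough KV-cache size for a 1M-token context; every dimension is assumed.
    layers = 80
    kv_heads = 8          # grouped-query attention
    head_dim = 128
    bytes_per_value = 2   # fp16
    tokens = 1_000_000

    # factor of 2 because both K and V are cached per layer
    kv_bytes = 2 * layers * kv_heads * head_dim * bytes_per_value * tokens
    print(kv_bytes / 1e9, "GB")   # ~328 GB under these assumptions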
The number one has 32k, which is equivalent to 64,000 commercial transatlantic flight trips (per person) - that works out to about half a tonne of CO2 per passenger per flight. For reference, 2024 had a record summer of 140k flights.
For a moment I thought it might be the presidential plane, which would explain the emissions, but no, for some reason Trump's personal plane is a whole-ass Boeing 757.
I'm surprised there hasn't been dick-swinging pressure for some billionaire (the type who can't remember how many billions, but whose net worth probably begins with a 1 due to Benford's law) to get a Dreamliner as their private jet.
The pro-AI people are as well, as these people are all on the Claude Max plan, and they’re just burning through resources for internet lols, while ruining the fun for the rest of us. It’s the tragedy of the commons at work.
I like the concept, but the landing page is not good and far too heavy.
My browser just froze after scrolling halfway. Not sure if this is something to do with the scroll effects, but I really don't understand why this simple site is maxing out my CPUs.
I'm not a fan of usage caps either, but that Reddit post [1] (“You deserve harsh limits”) does highlight a perspective worth considering.
When some users burn massive amounts of compute just to climb leaderboards or farm karma, it's not hard to imagine why providers might respond with tighter limits - not because it's ideal, but because that kind of behavior makes platforms harder to sustain and less accessible for everyone else. On the other hand, a lot of genuine customers are canceling because they keep getting API-overload messages after paying $200.
I still think caps are frustrating and often too blunt, but posts like that make it easier to see where the pressure might be coming from.
Bait: "For $200 a month you get to use Claude 20x more than what Pro users are entitled to. You don't know how much that is exactly, but neither do we. We may limit your usage with weekly and monthly limits. Sounds good?"
Switch: "We limited your usage weekly and monthly. You don't know how those limits were set; we do, but that's not information you need to know. However, instead of choosing to hoard your usage out of fear of hitting the dreaded limit again, you've kept hitting it again and again, using the product exactly the way it was intended, and now look what you've done."
so there was no bait and switch, you are just complaining about the lack of transparency around the specific limits that they never once said didn't exist
But as someone who writes both raw SQL and uses ORMs regularly, I treat a business project that doesn’t use an ORM as a bit of a red flag.
Here’s what I often see in those setups (sometimes just one or two, but usually at least one):
- SQL queries strung together with user-controllable variables — wide open to SQL injection. (Not even surprised anymore when form fields go straight into the query.)
- No clear separation of concerns — data access logic scattered everywhere like confetti.
- Some homegrown “SQL helper” that saves you from writing SELECT *, but now makes it a puzzle to reconstruct even a basic query.
- Bonus points if the half-baked data access layer is buried under layers of “magic” and is next to impossible to find.
In short: I’m not anti-SQL, but I am wary of people who think they need to hand-write everything in every application, including small ones with 5-50 simultaneous users.
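For anyone unsure what that first bullet looks like in practice, a minimal sketch (sqlite3 used purely for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    name = "' OR '1'='1"   # classic attacker-controlled form-field value

    # The anti-pattern: string interpolation puts user input into the SQL
    # text itself, so the commented-out query below matches every row.
    # conn.execute(f"SELECT * FROM users WHERE name = '{name}'")

    # Parameterized version: the driver keeps the input as data, not SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()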
People who avoid ORMs end up writing their own, worse ORM. ORMs are perfect if you know how and when to use them. They encapsulate a lot of the mind-numbing work that comes with raw SQL, such as writing inserts for a 50-column table.
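For the insert case, a minimal sketch of what the ORM buys you (assuming SQLAlchemy 2.x; the two-column model stands in for the 50-column one):

    from sqlalchemy import String, create_engine
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):                  # imagine 50 columns here instead of 2
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str] = mapped_column(String(100))

    engine = create_engine("sqlite://")   # in-memory DB for the example
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        # the INSERT statement and its column list are generated for you
        session.add(User(name="Ada"))
        session.commit()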
100%. I once tried to optimize a SQL query by moving away from the ORM, so I could have more control over the query structure and performance.
I poorly implemented SOLID design principles, creating a complete mess of a SQL Factory, which made it impossible to reason about the query unless I had a debugger running and called the API directly.
Writing is just half the job. Now try migrations, or even something as fundamental as “find references” on a column name. No, grep is not sufficient; most tables have fields called “id” or “name”.
I'd say pure SQL gives you a higher performance ceiling and a lower performance and security floor. It's one of those features / design decisions that require diligence and discipline to use well, which usually does not scale beyond small team sizes.
Personally, from the database-ops side, I know how to read quite a few ORMs by now and what queries they result in. I'd rather point out a missing annotation in some Spring Data Repository or suggest a better access pattern (because I've seen a lot of those, and how those are fixed) than dig through what you describe.
The best is when you use an ORM in standard ways throughout your project and can drop down to raw SQL for edge cases and performance-critical sections… mmmmm. :chefs kiss:
I like Django's ORM for its good schema migrations. Other "ORMs" people build often don't have a good story around that. So often the complaints come from developers who aren't experiencing the best ORMs they could be.
I think people should go all-in on either SQL or ORMs. The problems you described usually stem from people who come from the ORM world trying to write SQL, and invariably introducing SQL injection vulnerabilities because the ORM normally shields them from these risks. Or they end up trying to write their own pseudo-ORM in some misguided search for "clean code" and "DRY" but it leads to homegrown magic that's flaky.
I believe jOOQ is Java's database "sweet spot". You still have to think and code in a SQL-ish fashion (it's not trying to "hide" any complexity), but everything is typed and it's very easy to convert returned records to objects (or collections of objects).
But seriously, yeah, every time I see a complaint about ORMs, I have to wonder if they ever wrote code on an "average team" that had some poor developers on it that didn't use ORMs. The problems, as you describe them, inevitably are worse.
I'm wary of people who are against query builders in addition to ORMs. I don't think it's possible to build complicated search (multiple joins, searching by aggregates, chaining conditions together) without a query builder of some sort, whether it's homegrown or imported. Better to pull in a tool when it's needed than to leave your junior devs blindly mashing SQL together by hand.
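A toy version of the homegrown kind, just to show the chaining idea (illustration only, not production code):

    # Minimal condition-chaining query builder. Clause fragments and table
    # names must come from the developer, never the user; values go in params.
    class Query:
        def __init__(self, table):
            self.table, self.wheres, self.params = table, [], []

        def where(self, clause, *params):
            self.wheres.append(clause)
            self.params.extend(params)
            return self                      # returning self enables chaining

        def sql(self):
            where = " AND ".join(self.wheres) or "1=1"
            return f"SELECT * FROM {self.table} WHERE {where}", self.params

    stmt, params = (
        Query("orders").where("status = ?", "paid").where("total > ?", 100).sql()
    )
    print(stmt, params)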
On the other hand, I agree that mapping SQL results to instances of shared models is not always desirable. Why do you need to load a whole user object when you want to display someone's initials and/or profile picture? And if you're not loading the whole thing, then why should this limited data be an instance of a class with methods that let you send a password reset email or request a GDPR deletion?
At least when I see raw SQL, I know the author and I are on a level playing field. I would rather deal with a directory full of SQL statements that get run than some mysterious build tool that generates SQL on the fly and thinks it's smarter than me.
For example, I'm working on a project right now where I have to do a database migration. The project uses C# Entity Framework. I made a migration to create a table, realized I forgot a column, deleted the table, and tried to start from scratch. For whatever reason, Entity Framework refuses to let go of the memory of the original table and keeps creating migrations to restore it. I hate this so much.
You can use EF by writing the migrations yourself ("database first"). Also, whatever problem you have there seems to be easily fixed either by a better understanding of how EF's code generation works, or by more aggressive use of version control.
A bad ORM. Every application that accesses an SQL database contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of an ORM.
at the very least, if you are really writing lots of INSERTs by hand I bet you are either not quoting properly or you are writing queries with 15 placeholders and someday you'll put one in the wrong place.
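Named placeholders are the usual cure for the positional version of that (sqlite3 syntax shown; most drivers have an equivalent):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

    # Named parameters can't be silently swapped the way 15 '?'s can.
    conn.execute(
        "INSERT INTO users (name, email) VALUES (:name, :email)",
        {"name": "Ada", "email": "ada@example.com"},
    )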
ORMs and related toolkits have come a long way since they were called the "Vietnam of Computer Science". I am a big fan of jOOQ in Java.
Note both of these support both an object <-> SQL mapper (usually with generated objects) that covers the case of my code sample above, and a DSL for SQL inside the host language which is delightful if you want to do code generation to make query builders and stuff like that. I work on a very complex search interface which builds out joins, subqueries, recursive CTEs, you name it, and the code is pretty easy to maintain.
My life is this in Django. Querysets have been passed around everywhere, and we've grown to 50 teams. Now, faced with ever-slower dev velocity due to intertwined logic, and reduced system performance from often wildly non-performant data access patterns, we have spent two years trying to untangle our knot of data access. That culminated in a six-month push, disrupting 80% of teams' roadmaps, to refactor so that ORM objects are no longer passed around and plain types or DTOs are used instead - which is what will finally allow us to migrate a core part of our database, something required for both product development and scaling needs.
Here's the thing. In five of six companies I have worked at, this story is the exact same. Python, Ruby, Elixir. Passing around ORM objects and getting boundaries mixed leading to more interdependencies and slower velocities and poor performance until a huge push is required to fix it all.
Querysets within a domain seems fine, but when you grow, domains get redefined. Defining good boundaries is important. And requires effort to maintain.
I believe your case is not specific to Django ORM in particular but to the inherent complexity of various teams working together on a single project.
For greenfield projects, you have a chance of splitting the codebase into packages with each one having its own model, migrations and repository, and if you want to cross these boundaries, make it an API, not a Django model. For existing projects this is hard to do most of the time though.
One thing I found interesting about Django is that tools like Celery will "pickle" ORM objects when they really should be passing the PKs of the objects.
The other interesting thing about Django is that you can subclass QuerySet to add things like .dehydrate() and .rehydrate(), which do the translations between JSON-like data and ORM representations.
Then replace the model manager (in Django at least) with that queryset using queryset.as_manager().
If you're trying to decompose the monolith, this is a good way to start - since it makes it easier to decompose and recompose the ORM data.
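A rough sketch of that shape (model and field names are hypothetical, and .dehydrate() is a custom method, not a Django built-in):

    # inside an installed app's models.py / tasks.py
    from celery import shared_task
    from django.db import models

    class ItemQuerySet(models.QuerySet):
        def dehydrate(self):
            # plain JSON-safe dicts, not live ORM instances
            return list(self.values("pk", "name"))

    class Item(models.Model):
        name = models.CharField(max_length=200)
        objects = ItemQuerySet.as_manager()   # custom queryset as the manager

    @shared_task
    def process_item(item_pk):
        # pass the pk across the task boundary, rehydrate inside the task
        item = Item.objects.get(pk=item_pk)
        return item.name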
That sounds like an awesome idea for a new, post-React web framework. Instead of simply packaging up an entire web SPA "application" and sending it to the client on first load, let's package the SPA app AND the entire database and send it all - eliminating the need for any server calls entirely. I like how you think!
I can unironically imagine legitimate use cases for this idea. I’d wager that many DBs could fit unnoticed into the data footprint of a modern SPA load.
Yes, probably a lot of storefronts could package up their entire inventory database in a relatively small (comparatively) JSON file, and avoid a lot of pagination and reloads. Regardless, my comment was, of course, intended as sarcasm.
I like Ecto's approach in Elixir. Bring SQL to the language to handle security, and then build opt-in solutions to real problems in app-land like schema structs and changesets. Underneath, everything is simple (e.g. queries are structs and remain composable), and at the driver layer it takes full advantage of the BEAM.
It's hard to find similarly mature and complete solutions. In the JS/TS world, I like where Drizzle is going, but there is an unavoidable baseline complexity level from the runtime and the type system (not to criticize type systems, but TS was not initially built with this level of sophistication in mind, and it shows in complexity, even if it is capable).
Ecto is a gold-standard ORM, in no small part because it doesn't eat your database, nor your codebase. It lives right at the intersection, and does its job well.
A couple of years ago I had an opportunity to fill a fullstack role for the first time in several years.
First thing I noticed was that I couldn't roll an SQL statement by hand even though I had a distinct memory of being able to do so in the past.
I went with an ORM and eventually regretted it because it caused insurmountable performance issues.
And that, to me, is the definition of a senior engineer: someone who realised that they've already forgotten some things and that their pool of knowledge is limited.
ORMs are absolutely fantastic at getting rid of the need for CRUD queries, and of the boilerplate code for translating a result set to a POCO and vice versa. They also allow you to essentially have a strongly typed database definition, and they let you trivialise DB migrations and versioning, though you must learn the idiosyncrasies.
What they are not for is crafting high performance query code.
It literally cannot result in insurmountable performance issues if you use it for CRUD. It's impossible because the resulting SQL is virtually identical to what you'd write natively.
If you try to create complex queries with ORMs then yes, you're in for a world of hurt and only have yourself to blame.
I don't really understand people who still write basic INSERT statements. To me, it's a complete waste of time and money. And why would you write such basic, fiddly code yourself? It's a nightmare to maintain that sort of code, too, whenever you add more properties.
Plenty of tools out there do plain SQL migrations with zero issues.
At my day job everyone gave up on attempting to use the awkward ORM DSL to do migrations and just writes the SQL. It's easier, and faster, and about a dozen times clearer.
> I don't really understand people who still write basic INSERT statements
Because it’s literally 1 minute, and it’s refreshingly simple. It’s like a little treat! An after dinner mint!
I jest, I’m not out here hand rolling all my stuff. I do often have semi-involved table designs that uphold quite a few constraints and “plain inserts” aren’t super common. Doing it in sql is only marginally more complex than the plain-inserts, but doing them with the ORM was nightmarish.
My definition of a senior engineer is someone who can think of most of the ways to do a thing... and has the wisdom to choose the best one, given the specific situation's constraints.
Perhaps because databases were fundamental to the first programs I ever built (in the ancient 19xx's), but damn, I cannot believe how many so-called experienced devs - often with big titles and bigger salaries - cannot write SQL. It's honestly quite shocking to me. No offense, but wow.
Thing is, this used to be trivial to me, but I spent several years in a purely frontend role, so didn't interact directly with databases at all.
Moreover, the market promotes specialization. The other day I had a conversation with a friend who is rather a generalist, and we contrasted his career opportunities with those of a person I know who started out as a civil engineer, went into IT, and over the course of about four years specialized so heavily in Angular, and only Angular, that he now makes more than the two of us combined.
He can't write an SQL statement - I'm not sure he was ever introduced to the concept. How does that feel?
This is a common sentiment because so many people use ORMs, and because people are using them so often they take the upsides for granted and emphasise the negatives.
I've worked with devs who hated on ORMs for performance issues and opted for custom queries that in time became just as much a maintenance and performance burden as the ORM code they replaced. My suspicion is the issues, like with most tools, are a case of devs not taking the time to understand the limits and inner workings of what they're using.
This fully matches my experience, and my conclusions as well. I'd add that I often don't get to pick whether the logic lives more on the ORM side or on the DB side. I end up not caring either - just pick a side. Either the DB is dumb and the code is smart, or the other way around. I don't like it when both are trying to be smart - that's just extra work, and usually one of them fighting the other.
The reason why I dislike ORMs is that you always have to learn a custom DSL and live in documentation to remember stuff. I think AI has more context than my brain.
SQL does not really need fixing. And something like sqlc provides a good middle ground between ORMs and pure SQL.
Prisma has shown me that anything is possible with an ORM. I think they may have changed this now, but at least within the last year, DISTINCTs were done IN MEMORY.
They had a reason, and I'm sure it had some merit, but we found this out while tracking down an OOM. On the bright side, my co-worker and I got a good joke to bring up on occasion out of it.
That sounds plausible in theory, but I've been developing big ol' LOB apps for more than 10 years now and it happens very very sporadically.
I mean, bloated joins are maybe the most common, but never anywhere near bloated enough to be an actual problem.
And schema changes and migrations? With ORMs those are a breeze - what are you on about? It's like 80% of the reason we want to use ORMs in the first place. A data-type change or a typo is immediately caught during compilation, making refactoring super easy. It's like a free test of every query in the entire system.
I assume that we're talking about decent ORMs where schema is also managed in code and a statically typed language, otherwise what's the point.
Object-relational mapping (ORM) is the bridge between the object-oriented programming approach and relational databases. It simplifies data interaction and smooths the blending of applications and databases.
At the end of the day it's a trade-off. It would be an exception if anyone could remember their own code/customization after 3 months. ORMs and frameworks are more or less conventions, which are easier to remember because you iterate on them many times. They are bloated for a good reason: to serve a much larger population than your specific use case. And yes, that brings its own problems.
Weeks of handwriting SQL queries can save you hours of profiling and adding query hints.
If you want a maintainable system, enforce that everything goes through the ORM. Migrations are autogenerated from the ORM classes - have a check that the ORM representation and the deployed schema are in sync as part of your build (see the sketch below). Block direct SQL access methods in your linter. Do that, and maintainability is a breeze.
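One minimal way to wire that in-sync check into a build, assuming Django (makemigrations --check exits non-zero when models and migrations have drifted apart):

    # ci_check_migrations.py - fail the build on ORM/schema drift (Django).
    import subprocess
    import sys

    result = subprocess.run(
        [sys.executable, "manage.py", "makemigrations", "--check", "--dry-run"]
    )
    sys.exit(result.returncode)   # non-zero means a migration is missing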
The only time I've seen migrations randomly fail was when others were manually creating views that prevented modifications to tables. Using the migrations yourself for local dev environments is a good mitigation, except for that case.
But think of how much time you’ll save needing to map entities to tables!!!! Better to reinvest that time trying to make the ORM do a worse job, automatically instead!!
Eh, nobody wants to transfer rows to DTOs by hand.
My personal opinion is that ORMs are absolutely fine for read provided you periodically check query performance, which you need to do anyway. For write it's a lot murkier.
It helps that EF+LINQ works so very well for me. You can even write something very close to SQL directly in C#, but I prefer the function syntax.
This is the new source of income and a lot of media orgs are getting paid - take ANI in India.
They've been hitting YouTubers like Mohak Mangal, Nitish Rajput, and Dhruv Rathee with copyright strikes for using just a few seconds of news clips, which you would think is fair use.
Then they privately message creators demanding $60,000 to remove the strikes, or else the channel gets deleted after the third strike.
It's not about protecting content anymore; it's copyright extortion. Fair use doesn't matter. Systems like YouTube make it easy to abuse and nearly impossible to fight.
It's turning into a business model: pay, or your channels with millions of subs get deleted.
'Which you would think is fair use' - I must admit I wouldn't think that. When I consider Indian content creators making use of clips from Indian media organisations, I can't really imagine why Indian copyright law fair dealing provisions, which are far narrower than the US provisions, wouldn't apply. Sure, you get to argue the strike on YouTube using their DMCA-based system, but that has no legal bearing on your liability under Indian law.
I really like this aspect of US copyright law. I think the recent Anthropic judgement is a great example of how flexible US law is. I wish more jurisdictions would adopt it.
Very different in character. The US fair use four factor test (https://fairuse.stanford.edu/overview/fair-use/four-factors/) is really flexible. You don't need to fall into an enumerated exception to infringement to argue that your use is transformative, won't substitute in the marketplace, etc.
Look at the famous Authors Guild, Inc. v. Google, Inc. case. Google scanned every work they could put their hands on and showed excerpts to searching users. Copying and distribution on an incredible scale! Yet, they get to argue that it won't substitute in the marketplace (the snippets are too small to prevent people buying a book), it's a transformative use (this is about searching books not reading books), and the actual disclosed text is small (even if the copying in the backend is large scale).
On the other hand, fair dealing is purpose specific. Those enumerated purposes vary across jurisdictions and India's seems broadish (I live in a different fair dealing jurisdiction). Reading s52 your purposes are:
- private or personal use, including research
- criticism or review, whether of that work or of any other work
- reporting of current events and current affairs, including the reporting of a lecture delivered in public.
Within those confines, you then get to argue purpose (e.g. how transformative), amount used, market effect, nature of the copyrighted work, etc. But if your use doesn't fall into the allowed purposes, you're out of luck to begin with.
I'm not familiar enough with Indian common law to know if the media clips those youtubers you mentioned should fall within the reporting purpose. I'm sure the answer would be complex. But all of this is to say, we often treat the world like it has one copyright law (one of the better ones) when that's not the case! Something appreciated by TFA.
If what you say were true, Indian media conglomerates like the Times Group would be clamoring to sue the hell out of Google for every excerpt shown, yet I haven't heard of a single such case. What ANI did with Indian Youtubers was exploiting the Youtube platform's broken copyright reporting mechanism, not actual litigation.
Is there a video feed of the cockpit inside the black box?
If not, there should be one, as even my simple home wifi camera can record hours of HD video on a small SD card. And if there is, wouldn't that help to instantly identify such things?
No, neither black box stores video. One stores audio on flash memory and the other stores flight details, sensor data, etc.
I don’t think video is a bad idea. I assume there is a reason why it wasn’t done. Data-wise, black boxes actually store very little (maybe ~100 MB); I don’t know if that is due to how old the designs are, or the requirements of withstanding extremes.
This isn’t true. This was a 787. It does not use a separate recorder for voice and data (CVR, FDR).
(Most media outlets also got this wrong and were slow to make corrections. )
Rather, it uses an EAFR (enhanced airborne flight recorder), which basically combines the two functions. They’re also more advanced than older systems and can record for longer. The 787 has two of them - the forward one has its own power supply too.
There should be video as well, but I’m not sure what was recovered. Not necessarily part of the flight data recording, but there are other video systems.
That's really interesting. From reading air crash reports, there are a lot of times I've seen: "Nothing is known about the last 30 seconds because the damage broke the connection to the flight recorders in the tail."
In the US, the NTSB has been recommending it for over 20 years. The pilot unions have been blocking it, due to privacy and other things.
I'm not in aviation, but my straightforward between-the-lines reading is that unions see it as something with downsides (legal liability) but not much upside. It could be that there are a million tiny regulations that are known by everyone to be nonsensical, perhaps contradictory or just not in line with reality, and it's basically impossible to be impeccably perfect if high-fps HD video observation is done on you 24/7. Think about your own job, your boss's job, or your home renovation work, etc.
Theoretically they could say, ok, but the footage can only be used in case the plane crashes or something serious happens. Can't use it to detect minor deviations in the tiniest details. But we know that once the camera is there, there will be a push to scrutinize it all the time for everything. Next time there will be AI monitoring systems that check for alertness. Next time it will be checking for "psychological issues". Next time they will record and store it all and then when something happens, they will in hindsight point out some moment and sue the airline for not detecting that psychological cue and ban the pilot. It's a mess. If there's no footage, there's no such mess.
The truth is, you can't bring down the danger from human factors to absolute zero. It's exceedingly rare to have sabotage. In every human interaction, this can happen. The answer cannot be 24/7 full-blown totalitarian surveillance state on everyone. You'd have to prove that the danger from pilot is bigger than from any other occupation group. Should we also put bodycam on all medical doctors and record all surgeries and all interactions? It would help with malpractice cases. How about all teachers in school? To prevent child abuse. Etc. Etc.
Regulation is always in balance and in context of evidence possibilities and jurisprudence "reasonableness". If the interpretation is always to the letter and there is perfect surveillance, you need to adjust the rules to be actually realistic. If observation is hard and courts use common sense, rules can be more strict and stupid because "it looks good on paper".
You also have to think about potential abuses of footage. It would be an avenue for aircraft manufacturers, airlines, FAA, etc to push more blame on the pilots, because their side becomes more provable but the manufacturing side is not as much. You could then mandate camera video evidence for every maintenance task like with door plugs.
I wonder how the introduction of police body cam footage changed regulations of how police has to act. Along the lines of "hm, stuff on this footage is technically illegal but is clearly necessary, let's update the rules".
If you work in a job where the lives of hundreds could be ended in seconds by an error or intentional action, then there is no excuse not to have critical control surfaces recorded at all times. Non-commercial/private flights, flight instructors, and trainees have cameras; trains have cameras; stores have cameras; casinos have cameras; buses have cameras; workers for ride-hailing services have cameras, as do millions of other people who just drive.
Hopefully other countries will start deploying recording systems or start forcing manufacturers of planes to have these integrated into cockpits.
> The answer cannot be 24/7 full-blown totalitarian surveillance state on everyone.
Surveillance is actually pretty common in many high-risk environments. And piloting is very much not just any other job but an exceedingly rare situation where the lives of hundreds of people are in the hands of only two people without anyone else being able to do anything to influence the outcome.
That pilot unions don't want surveillance is to be expected (the union is there to act in the pilots' interest), but ultimately it isn't just up to them.
> Should we also put bodycam on all medical doctors and record all surgeries and all interactions?
Yes. We are finally starting to do so for police. These are all situations where an individual or very small team has direct control over the life of others who can't defend themselves.
In fact, you could even add some AI to it, as an embedded system with a decent GPU can be bought for under $2,000. It could help prevent issues from happening in the first place - air-gapped from the actual control system, of course. But an AI can be very helpful in detecting and diagnosing problems.