> managing an open source codebase of this size would add real strain to our small team
Can you please elaborate on what you mean when you say this? This is something I do not understand. How do licensing terms affect your codebase management beyond setting things up so the code is available to users?
Publishing something under a FLOSS license doesn’t mean anything except that you grant end-users certain rights (the four essential freedoms). The rest (like accepting patches or supporting external developers) is customary but by no means obligatory. If you don’t have the capacity for it, don’t do it - easy. There are thousands of developers who do exactly that: they just dump whatever they have under a nice license and that’s it.
Unless you’re saying your legal department doesn’t have the capacity to handle licensing concerns, especially if you’re using or potentially using non-FLOSS third-party components. That I can totally understand; it can get pretty gnarly.
Please don’t be mistaken: Free Software is a purely legal matter of what you allow users to do with your work - not some operating principles or way of organizing processes.
Note: All this said, I can understand that you may not want to grant some freedoms to the end users, particularly the freedom to redistribute copies, because this could affect your plans of selling the licenses. But that’d be a whole different story than codebase management concerns.
As I wrote, if the concern is that they cannot figure out a way to distribute it as paid software because others may redistribute it for free, that’d be a valid point of concern (and there are plenty of options). But that’s not what they’re saying.
Someone steals their work. Violates the license. To defend their rights, Kagi has to sue or lie down. Not giving away the keys to the kingdom until that ability to defend is established just underlines that they're doing valuable work.
Pardon my skepticism, but I don’t believe that’s a realistic threat model. Yea, purely hypothetically that could happen. But realistically, why would someone do that - what’s the point? Especially something severe enough to warrant a serious legal battle that takes more than a few sternly worded DMCA-like emails to hosting providers?
Mind you, if we’re talking about hypotheticals, someone can ship a differently branded or malware-ridden (or idk what else, my imagination runs dry pretty fast here) version of their binary distribution without any source code access just fine, violating licensing all the same. Patching unprotected binaries is pretty easy, frequently much less demanding than building from source. And with all due respect to the good work they’re doing, I highly doubt the Orion team needs to buy a Denuvo license, haha.
(And, as I said, it’s not even remotely what they wrote.)
If it is open source, it will end up in LLMs and will be used in other browser variants (bigger and smaller). Any USP of the code itself will be gone.
If LLMs hoover up the removal of auto-shipped telemetry (currently the main selling point), then I’d say that’d be a reason to publish and submit this to every indexer imaginable ASAP ;-) Shame it’s an absence of code, so it’s not really possible to submit it anywhere.
And other features are worthy because they’re implemented ideas, not because of their actual implementations. Like programmable buttons or overflow menus - I’m pretty sure there’s no secret sauce there, and it’s extremely unlikely one can just grab some parts of that and move them to a different product - adapting the code from Orion’s codebase would likely take more effort than just implementing the feature anew.
Most code is just some complicated plumbing, not some valuable algorithmic novelty. And this plumbing is all about context it lives in.
The value is usually not in the code, but in the product itself. Some exceptions apply, of course.
> What’s this “it” are you talking about, exactly?
Orion's code.
LLMs facilitate the attribution-free pillaging of open-source code. This creates a prisoner's dilemma for anyone in a competitive context. Anything you build will be used by others at your cost. This was technically true in the past. But humans tried to honor open-source licenses, and open-source projects maintained license credibility by occasionally suing to enforce their terms. LLMs make no such attempt. And the AI companies have not been given an incentive to prevent vibe coders from violating licenses.
It's a dilemma I'm glad Kagi is taking seriously, and one the open-source community needs to start litigating around before it gets fully normalised. (It may already be too late. I could see this Congress legislating in favour of the AI companies over open source organisations.)
> Most code is just some complicated plumbing, not some valuable algorithmic novelty. And this plumbing is all about context it lives in
Sure. In this case, it's a WebKit browser running on Linux. Kagi is eating the cost to build that. It makes no sense for them to do that if, as soon as they have a stable build, (a) some rando uses Claude to copy their code and sell it as a competitor or (b) Perplexity straight up steals it and repackages it as their own.
You don’t need an LLM to just copy their code wholesale. Copying and rebranding (plus some vendor adaptations) is a valid concern that I have already agreed about, but for the third time: it’s not what they wrote. It has nothing to do with codebase management.
And taking some individual pieces may sound problematic as an abstract concern, but have you ever tried to port code from one FLOSS codebase into a different one? Especially UI code, if it’s not some isolated, purportedly reusable component? Maybe Orion developers are wizards who wrote exceptionally portable and reusable code, but in my experience it’s usually a very painful process, where you’re constantly fighting all the conceptual mismatches from different design choices. And I’ve yet to see an LLM that can do architectural refactoring without messing things up badly. So I’m generally skeptical of such statements. And that’s why I’m suggesting we pick a concrete example we can try to analyze, because doing this at the highly abstract “the whole code” level is not going to succeed.
If you don’t specify types explicitly, they still have to exist somewhere: in someone’s head (in the oral tradition of “Ah, yea, those IDs are UUIDs, but not those - those are integers”), or denoted through some customary syntax (be it something more formal like Hungarian notation, or less so - suggestive suffixes, comments, supplementary documents).
They still exist at runtime, and people who work on the codebase need to somehow know what to expect. Having a uniform syntax helps to formalize this knowledge and make it machine-understandable, so it can assist developers by providing quick hints and preventing mix-ups automatically (saving attention/time for other matters).
Types may be rarely important for local variables, but they matter for API contracts.
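To illustrate, here’s a minimal TypeScript sketch (the IDs and function are invented for illustration, not anything from a real codebase) of turning that “those IDs are UUIDs, but not those” oral tradition into something the compiler checks at an API boundary:

```typescript
// Hypothetical example: encode "user IDs are UUID strings, order IDs
// are integers" in the type system instead of in oral tradition.

// Branded types: at runtime these are plain string/number, but the
// compiler refuses to mix them up.
type UserId = string & { readonly __brand: "UserId" };
type OrderId = number & { readonly __brand: "OrderId" };

const userId = (s: string): UserId => s as UserId;
const orderId = (n: number): OrderId => n as OrderId;

// The API contract now states exactly what it expects.
function fetchOrder(owner: UserId, id: OrderId): void {
  console.log(`fetching order ${id} for user ${owner}`);
}

const u = userId("3b241101-e2bb-4255-8caf-4136c566a962");
const o = orderId(42);

fetchOrder(u, o);    // OK
// fetchOrder(o, u); // compile error: the mix-up is caught automatically
```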
> people who work on the codebase need to somehow know what to expect.
IME this is the exception more than the rule. There will be a ton of manipulations where I don't really care what the types are, just whether I can do the specific thing I want with them.
For instance, you receive data from an API and want to pass it to the appropriate validator. The type definitions are noise, and checking them is the validator's job.
Now it's nice to be able to work with stricter types where needed; ideally I'd want to switch that on and off, instead of being stuck with them everywhere.
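To make that concrete, here's a minimal sketch of that workflow (using zod as a stand-in for "the appropriate validator"; the schema and endpoint are invented for illustration): the wire data stays untyped, the validator does the actual checking, and a static type is derived from the schema only where stricter checking is wanted.

```typescript
import { z } from "zod";

// The validator owns the contract; checking the shape is its job,
// not the caller's.
const UserSchema = z.object({
  id: z.string().uuid(),
  name: z.string(),
});

// A static type derived from the schema, for the spots where
// strictness is actually wanted.
type User = z.infer<typeof UserSchema>;

async function loadUser(url: string): Promise<User> {
  const res = await fetch(url);
  const data: unknown = await res.json(); // untyped wire data is fine here
  return UserSchema.parse(data);          // throws if the shape is wrong
}
```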
It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.
Competence and the possibility of malicious compliance are interesting questions, but I think the more appropriate one is whether the DoJ will be sued for violating the law by redacting unrelated content.
That would work nicely in an abstract spherical Japan in pure vacuum.
The hardest bit about redoing something from scratch is not designing the new system, but getting it adopted. Many societies have tried things like that. Social inertia, especially paired with learning barriers (the steeper, the worse) and cultural and political notions (and Japan values and tries to preserve its history and culture quite a lot), is not something that can just be dismissed.
That's not to say there weren't countries that had writing system overhauls, just that it's difficult, of questionable value, and not entirely without negative effects.
>and Japan values and tries to preserve their history and culture quite a lot
It has to be said, though, that reform can be interpreted in exactly that way too, as revitalization. Hangul, for example, is also a kind of patriotic achievement. I've even heard, and that was coming from a Japanese friend (who speaks both languages): "we have the world's best and most logical writing system and the most illogical right here next to each other". And in the language department and the origins of their writing systems they're in a fairly comparable boat; they just went in two very different directions.
I think Hangul worked because it was adopted at a time of mass increases in literacy. All those poor people who never wrote before didn't have any attachment to Chinese characters, and soon significantly outnumbered any monks, nobles, bureaucrats and merchants that were attached to them.
Imagine all the paperwork that would have to be rewritten now. The older generations who won't be able to learn the new system. Just commerce, with millions of small businesses, would be a nightmare to transition. Sounds like a lot of work for not much gain.
The issue is that it's not theirs, and that is exactly the problem. You can't just use China's writing system and try to make it fit to your language. Japan might have a high literacy rate, but that is despite their horrible system and not because of it. Plus you can argue that they're not really literate, they just limit themselves to using a small portion of their 'kanji' and write little hiragana hints that tell you how to pronounce the written symbols for all the rest.
> You can't just use China's writing system and try to make it fit to your language
And yet we took the Roman alphabet and adapted it to English just fine. Why was that okay, but adapting the Chinese writing system into Japanese wasn't?
> you can argue that they're not really literate, they just limit themselves to using a small portion of their 'kanji' and write little hiragana hints that tell you how to pronounce the written symbols for all the rest.
You can argue english speakers aren't really literate, they just limit themselves to a subset of english vocabulary, and memorize word pronunciations to understand when "ea" is pronounced like "e" as in "sear", or "air" like in "wear".
Like, I do not get at all what you're arguing here. In every language people only know a subset of the total vocabulary, and people generally limit themselves to the subset that's actually used. In phonetic languages, sure, you can pronounce an unknown word, but that doesn't mean you have any clue what it means. In non-phonetic languages, like English and Japanese, you may not even be able to pronounce an unknown word. In logographic languages, like Japanese and Chinese, you may be able to derive the meaning and pronunciation of a new word just from looking at the component characters and knowing their individual readings and meanings, often with better success than trying to guess an unfamiliar English word from its roots.
Roman letters work somewhat with English because they are both phonetic. Japanese is phonetic too; they have an entirely different hiragana alphabet with all the sounds of their language. There is no word in Japanese that you cannot sound out with that alphabet. In Chinese, every symbol has a sound, a Chinese sound. Not sure how much you understand about Japanese, but you can't just derive the pronunciation of a new word just from looking at the components.
I do agree that English is terrible too. English is a mess of Latin, German, and French words, which is why spelling bee competitions are a thing in English but would be stupid in other languages such as Spanish, and in fact Japanese too. In Spanish you can spell any word regardless of how long and confusing it might seem. Japanese too: using hiragana you can spell the sound of any Japanese word regardless of how long or rare it is. Good luck writing it, though; a Japanese spelling bee is not possible, but a written one is.
My argument is that the Japanese writing system is a big mess, but spoken Japanese is not. Spoken English is a mess too. Any language where you have competitions about who can spell and write its words is a big mess of a language.
> can't just derive the pronunciation of a new word just from looking at the components.
While true, I believe there’s also plenty of heuristics available to make a good guess at what the unknown word may relate to semantically, and how it might sound. Not reliable, of course, but still conveying some information for a better-than-random guess. Or am I wrong?
Modern Japanese is half Chinese in its vocabulary, hence it's only natural for the writing system to be as well. The former wouldn't work without the latter.
I'm not sure it makes sense to classify desires as "good" or "bad", or "thick" and "thin" (or however we may want to label them). One can make such a binary distinction, but it could be just as harmful as it could be helpful. There's always a nuance, a hidden variable that makes the whole thing moot.
If there's anything meaningfully binary, I think it's only an internal conflict between one's self-perception (who-I-think-I-am) and one's ideal/goal self-image (who-I-want-to-be) past some arbitrary threshold. Not transforming and not changing is not an issue until one has a desire to transform and become someone else, but that isn't happening (or they don't see it), and the desire is strong or persists for a while and causes some non-negligible grief or stress or something else that is not in one's own best interests.
Sure, in stressful modern-day environments, we're especially biased towards immediate gratification over postponed gratification. Especially since the postponed kind may never happen - modern times are crazy unpredictable. But naively suggesting we dismiss "thin" desires and pursue "thick" ones is dismissive of rest. I mean, people go to beaches and spend a literal week doing absolutely nothing. Or binge-watch giant series. Or just play games for the sake of it, all day long. And no one has to hate themselves afterwards - all we really need to do is periodically pause and ask "would it be best to do something else now?" and ponder that question for a little bit, rather than dismiss it with an immediate "no, I want more".
And there should be a realization that a brief 5-minute "rest" to check some feeds is unlikely to give any meaningful rest. Non-rest masquerading as rest may be a thing to watch out for, but I doubt there's any criterion beyond retrospective observation and questioning oneself: "does it satisfy my goals/needs, or am I just wasting my time on this needlessly?"
YMMV, but if there's some meaningful conclusion to be taken out of the article it should be more along the lines of "budget your time mindfully of its value and your long-term goals" than some desire classification model. I'm afraid this "thin vs thick desire" concept unnecessarily obscures the core idea, possibly to the extent it can become sort of a red herring.
Whether a letter is written on paper or only exists in digital form shouldn't matter, after all. Neither should the format of resting matter, be it making bread or watching reels, as long as it actually provides rest.
I agree. I would define those "thin desires" as whatever I'm engaging in a lot automatically, but if I were to pause and ask myself "do I really want to do this? Is this beneficial to me, or am I being exploited?", part of me would say no. My "thin desires" might not be someone else's. We each have to take the time to ask ourselves these questions in order to figure it out.
> If society no longer values these qualities, then we don't deserve better.
Isn't it more like "if society has time to think about and can afford those qualities"?
If most folks out there have limited finances (CoL-relative, of course) and are just scraping by, they'll buy the cheapest thing out there that just does the job (vacuums) and tend to ignore any extra luxuries, even if those would be more economically advantageous long-term (the repairs/maintenance part of the TCO). That's simply because of the focus - it's more on the account balance, due bills, and the next paycheck than on the implications for a more distant future. Crazy volatility and all the global rollercoasters like pandemics, wars, and all the crazy politicians around the world don't help regular folks' sensible decision-making at all, of course. The more stressed one is, the less rationally they act.
People don't buy cheap junk because they don't value quality. They buy it primarily because of affordability reasons, or because their focus is forced to be elsewhere.
>People don't buy cheap junk because they don't value quality. They buy it primarily because of affordability reasons, or because their focus is forced to be elsewhere.
The focus, thanks to years of advertising, has shifted towards features. New features sell, quality doesn’t, so to keep the price point and “innovate”, manufacturers need to lower the quality, knowing that a new version will replace the device soon. Most consumers see this as normal, so when a poorly designed and cheaply made thing (for which there are no replacement parts, no repair info, no software/firmware support) fails, it’s just an excuse to purchase the shiny new iteration with all the bells and whistles (and AI! Copilot toaster!), which is going to last less than the previous one but now needs an “app”, an activation, and a subscription for the premium features.
I think we’re waiting for the courts to deem LLMs able to sidestep any copyright and contract laws. If they do, artists and writers may be pissed, but engineers are gonna be lit (as long as they hate the current status quo of nothing being interoperable).
“AI” is a semi-meaningless misnomer, of course, but e.g. a natural language interface is something Apple has tried since forever (Siri) and always failed to make functional and useful. So this part of “not gaining much” is probably false.
Paired with every vendor’s love of tweaking things at random - including Apple - a natural language interface (if done right) could be a meaningful solution to UI consistency (“Hey Siri, I dunno where the goddamn toggle is located this time, but stop making music auto-play every other time the phone connects to CarPlay” - a real use case with real value). Yet, as usual, Siri lacks in intelligence and capabilities.
I’m pretty sure it’s not some genius wisdom of Tim, or whoever. Apple simply didn’t do any user-facing useful shit (they did some interesting stuff for developers, but that’s a different story), plastered some generative emojis to tick the “AI” checkbox, and now people praise them for that.
Well, I can't fault you for suggesting that "AI" could be a workaround for Apple's derelict "design" of late.
And, I can see value in an all-local language model ingesting everything on my computer so I can find things. But let's face it: People managing large amounts of stuff on their computers and wanting to search it are a shrinking minority now.
But I still think the real-life payoff is minimal. A better Siri? Maybe. But if you look at Google's "AI" search results, they suck ass. They don't even hold up to what could be accomplished with straight-up parsing and searching. Here's an example I just got today, when I searched for "how much is a DeLorean worth?"
(Wow, Google has now disabled copying from search results. What petty jagoffs, after THEY copy and regurgitate others' work)
"A DeLorean DMC-12's value varies widely, from around $18,000 for project cars to over $50,000 for excellent condition models, with the nationwide average being about $83,725."