Kokouane's comments | Hacker News

I have a speech impediment and am currently a junior engineer.

Will this actually prevent me from getting promoted? :(

Maybe I should look into some speech therapy but not exactly sure how effective that is past a certain age.


My data point: I have a severe stutter. I might have been lucky (and also no two stutters are the same), but I don't think it has even mattered at work.

I don't know if you have a stutter as well, but I've had stuttering therapy again recently, it's not very age related. Happy to send you resources. Feel free to email me.


I hate to say it but it might negatively impact your career. I hope it doesn't, but it might

People in charge of promotions often have more than one choice for a given promotion and they will use any criteria they can to weigh for or against you

A speech impediment is more likely to weigh against you than for you, unfortunately

That's the sort of thing that anti-discrimination laws and guidelines are supposed to remedy, but I suspect they mostly don't actually fix it

Personally, if speech therapy is an option I think I would try it? It can't hurt you any

I'm going deaf, and looking into fixes for that. I don't think you should be ashamed of your speech impediment, but I also don't think you should be ashamed for looking into help fixing your impediment either


This is good advice.

There is also an effect that I have seen many times and experienced myself, where it's evident that someone has an idiosyncratic challenge, but you can tell they have put effort into overcoming or mitigating it. And they just work through it, not letting it get in their way.

It demonstrates life competence, and is a real positive.

Whatever you do, whatever you can do, don’t let “it” get in your way.


Absolutely, this. I see this all the time with colleagues who don't speak English too well (I'm not in an English-speaking country, but development offices often use English).

I'm not talking about people who can't express themselves or can't understand, merely that they haven't mastered it - use clumsy wording, or have a thick accent. These are often some of the most technically capable and talented people I've worked with, but also typically are not perceived as such by others, and I'm ashamed to say, working together with them would often result in the credit being placed unduly on myself.


Nah, just own it. I know a rising star on a FANG team of 40 who can barely speak one sentence without breaking into a stutter.


No, I've seen multiple cases of managers with speech impediments, and I also saw a blind tech lead and a deaf one (who also happened to have a small speech impediment).

I also saw a bunch of C-levels being total sociopaths but that's another story :)


I've worked with multiple blind coworkers and they were all amazing. They weren't amazing because they were blind, but they sure didn't let it slow them down.


Joe Biden was president of the US. There's your answer.


Might be a crazy statement, but I believe Meta is on the right track. Right now, I think most people can clearly see that more and more people are getting addicted to the little device in their hand.

The "Metaverse" is going to be a more interactive, immersive extension of that device. I also believe that Meta's superintelligence team isn't necessarily about achieving AGI, but rather, creating personable, empathetic LLMs. People are so lonely and seeking friendship that this will be a very big reason to purchase their devices and get tapped into this world.


The observation about smartphone addiction is certainly valid, with studies showing average daily screen time exceeding 7 hours for many users, driven by algorithmic engagement.

But while the Metaverse could theoretically extend that immersion, historical execution suggests caution: initiatives like Horizon Worlds have struggled with user adoption and technical hurdles, indicating it might not seamlessly evolve from current devices as envisioned.

On the superintelligence front, focusing on empathetic LLMs for companionship taps into real societal issues like rising loneliness (e.g. reports from the WHO highlight it as a global health threat). However, this approach risks exacerbating dependency rather than alleviating it, potentially creating echo chambers of artificial interaction over genuine human bonds.

So yes, Meta shows some promise in these areas, but success is anything but assured. Their previous massive investments have largely failed to deliver the transformative changes they hyped.


Surely this would be easy to fix with a simple script that runs on a VPS to alert you on a platform of your choice, maybe using something like Apprise (https://github.com/caronc/apprise). Get the notification as an email, on Discord, Signal, etc.

This does complicate the system a bit, but still low overhead in my opinion.
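To give a rough idea of how small the Apprise side of such a script could be, here's a minimal sketch (the notification URLs are placeholders you'd swap for your own, not anything specific to this setup):

    # minimal sketch using the Apprise Python library (pip install apprise);
    # the notification URLs below are placeholders, not real credentials
    import apprise

    notifier = apprise.Apprise()
    notifier.add("mailto://user:password@example.com")   # email target
    notifier.add("discord://webhook_id/webhook_token")    # Discord target

    notifier.notify(
        title="Reminder",
        body="Heads up: the thing you wanted to be reminded about is due.",
    )

Cron it on the VPS and you're done.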


Congratulations, you invented a calendar with notifications, which already exists on every digital device; it existed on Nokia phones 30 years ago :)


Beautiful web page. Author is a front-end developer at Canva according to his Twitter/X.


> It really is a sick joke that the experience for gaming, music and video is all far, far better for those who _don't_ pay than for those who do.

Denuvo is effective enough that if a game has it, it is almost impossible to pirate. So in most cases, it is either pay or do not play the game at all.

There was one key player who knew how to crack Denuvo DRM. They went by the name Empress but haven't cracked anything in the past year, and they also seem mentally deranged, often including very transphobic rants in the NFO files of the torrents they release.


> it is either pay or do not play the game at all

That's still a net win for the pirate I'd argue; for them it's zero steps to "don't play the game at all", for someone like myself it's pay->waste time trying to get it to run and fail->refund/no-refund.


The wasting of time is because you are using an unsupported operating system. It sounds like if you switched to one you wouldn't have to waste time since the OS would support everything the game needs.


There is quite a bit of anecdotal evidence that many Denuvo-protected games run worse on the recommended hardware and O/S until the Denuvo protection is removed. The end result is a worse day-one experience for the people who pay the most than for either the pirates (if any) or the people who wait for the game to fall out of the early hype phase.


It feels optimistic to think that the DRM works perfectly on every possible configuration running a supported OS though, does it not?


If you were working with code that was proprietary, you probably shouldn't of been using cloud hosted LLMs anyways, but this would seem to seal the deal.


I think you probably mean "shouldn't have". There is no "shouldn't of".


Which gives you an opening for the excellent double contraction “shouldn’t’ve”


My favorite variation of this is “oughtn’t to’ve”


The letter H deserves better.


I think we gave it too much leeway in the word sugar.


The funniest part is that in that contraction the first apostrophe does denote the elision of a vowel, but the second one doesn’t, the vowel is still there! So you end up with something like [nʔəv], much like as if you had—hold the rotten vegetables, please—“shouldn’t of” followed by a vowel.

Really, it’s funny watching from the outside and waiting for English to finally stop holding it in and get itself some sort of spelling reform to meaningfully move in a phonetic direction. My amateur impression, though, is that mandatory secondary education has made “correct” spelling such a strong social marker that everybody (not just English-speaking countries) is essentially stuck with whatever they have at the moment. In which case, my condolences to English speakers, your history really did work out in an unfortunate way.


> phonetic

A phonetic respelling would destroy the languages, because there are too many dialects without matching pronunciations. Though it would render historical texts illegible, a phonemic approach would work: https://en.wiktionary.org/wiki/Appendix:English_pronunciatio... But that would still mean most speakers have 2-3 ways of spelling various vowels. There are some further problems with a phonemic approach: https://alexalejandre.com/notes/phonetic-vs-phonemic-spellin...

Here's an example of a phonemic orthography, which is somewhat readable (to me) but illustrates how many diacritics you'd need. And it still spells the vowel in "ask" or "lot" with the same ä! https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....


> A phonetic respelling would destroy the languages, because there are too many dialects without matching pronunciations.

Not only that, but since pronunciation tends to diverge over time, it will create a never-ending spelling-pronunciation drift where the same words won't be pronounced the same in, e.g. 100-200 years, which will result in future generations effectively losing easy access to the prior knowledge.


> since pronunciation tends to diverge over time, it will create a never-ending spelling-pronunciation drift

Once you switch to a phonetic respelling this is no longer a frequent problem. It does not happen, or at least happens very rarely with existing phonetic languages such as Turkish.

In the rare event that the pronunciation of a sound changes in time, the spelling doesn't have to change. You just pronounce the same letter differently.

If it's more than one sound, well, then you have a problem. But it happens in today's non-phonetic English as well (such as "gost" -> "ghost", or more recently "popped corn" -> "popcorn").


> Once you switch to a phonetic respelling this is no longer a frequent problem

Oh, but it does. It's just that the standard is held as the official form of the language and dialects are killed off through standardized education etc. To do this in English would e.g. force all Australians, Englishmen etc. to speak like an American (when in the UK different cities and social classes have quite divergent usage!). This clearly would not work and would cause the system to break apart. English exhibits very minor diglossia, as if all Turkic peoples used the same archaic spelling but pronounced it their own ways, e.g. tāg, kök, quruq, yultur etc., which Turks would pronounce as dāg, gök, yıldız etc., but other Turks today say gurt for kurt, isderik, giderim okula... You just say they're "wrong" because the government chose a standard (and Turkic peoples outside of Turkey weren't forced to use it).

As a native English speaker, I'm not even sure how to pronounce "either" (how it should be done in my dialect) and seemingly randomly reduce sounds. We'd have to change a lot of things before being able to agree on a single right version and slowly making everyone speak like that.


> dialects are killed off through standardized education etc.

Sorry, I didn't mean that it would be a smooth transition. It might even be impossible. What I wrote above is (paraphrasing myself) "Once you switch to a phonetic respelling [...] pronunciation [will not] tend to diverge over time [that much]". "Once you switch" is the key.

> To do this in English would e.g. force all Australians, Englishmen etc. to speak like an American

Why? There is nothing that prevents Australians from spelling some words differently (as we currently do, e.g. colour vs color, or tyre vs tire).


There's no particular reason why e.g. Australian English should have the same phonemic orthography as American English.

Nor is it some kind of insurmountable barrier to communication. For example, Serbian, Croatian, and Bosnian are all standardized varieties of the same language with some differences in phonemes (like i/e/ije) and the corresponding differences in standard orthographies, but it doesn't preclude speakers from understanding each other's written language anymore so than it precludes them from understanding each other's spoken language.


> Serbian, Croatian and Bosnian

are based on the exact same Štokavian dialect, ignoring the Kajkavian, Čakavian and Torlakian dialects. There is _no_ difference in standard orthography, because yat reflexes have nothing to do with national boundaries. Plenty of Serbs speak Ijekavian, for example. Here is a dialect map: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fc...

Your example is literally arguing that Australian English should have the same _phonetic_ orthography, even. But Australian English must have the same orthography or else Australia will no longer speak English in 2-3 generations. The difference between Australian and American English is far larger than between modern varieties of naš jezik. Australians code-switch when talking to foreigners, while Serbs and Croats do not.


> There is _no_ difference in standard orthography, because yat reflexes have nothing to do with national boundaries

But there is, though, e.g. "dolijevati" vs "dolivati". And sure, standard Serbian/Montenegrin allows the former as well, but the latter is not valid in standard Croatian orthography AFAIK. That this doesn't map neatly to national borders is irrelevant.

If Australian English is so drastically different that Australians "won't speak English in 2-3 generations" if their orthography is changed to reflect how they speak, that would indicate that their current orthography is highly divergent from the actual spoken language, which is a problem in its own right. But I don't believe that this is correct - Australian English content (even for domestic consumption, thus no code switching) is still very much accessible to British and American English speakers, so any orthography that would reflect the phonological differences would be just as accessible.


By tautology, if you split the language, you split the language. Different groups will exhibit divergent evolution.

> current orthography is highly divergent from the actual spoken language, which is a problem in its own right

The orthography is no more divergent from an Australian's speech than from an American's, let alone a Londoner's or an Oxfordian's. But why would it be a problem?


I think Norway did such a reform and they ended up with two languages now.


Or, if one considers that Icelandic is/was the «original» Old West Norwegian language, Norway has ended up with *three* languages.


The need for regular re-spelling and problems it introduces are precisely my point.

Consider three English words that have survived over the multiple centuries and their respective pronunciation in Old English (OE), Middle English around the vowel shift (MidE) and modern English, using the IPA: «knight», «through» and «daughter»:

  «knight»:  [knixt] or [kniçt] (OE) ↝ [kniçt] or [knixt] (MidE) ↝ [naɪt] (E)

  «through»: [θurx] (OE) ↝ [θruːx] or [θruɣ] (MidE) ↝ [θruː] (E)

  «daughter»: [ˈdoxtor] (OE) ↝ [ˈdɔuxtər] or [ˈdauxtər] (MidE) ↝ [ˈdɔːtə] (E)
It is not possible for a modern English speaker to collate [knixt] and [naɪt], [θurx] and [θruː], [ˈdoxtor] and [ˈdɔːtə] as the same word in each case.

Regular re-spelling results in a loss of the linguistic continuity, and particularly so over a span of a few or more centuries.


Interesting, just how much the Old English words sound like modern German: Knecht, durch and Tochter. Even after 1000 years have elapsed.


Modern German didn't undergo the Norman Conquest, a mass influx of West African slaves, or an Empire on which the Sun never set, so it is much more conservative. The incredible thing about the Norman Conquest, linguistically speaking, is that English survived at all.


The great vowel shift happened in the 16th century and is responsible for most of these changes. The original grammatical simplification (loss of cases etc.) between 1000-1300 is difficult to ascribe, as something similar happened in continental Scandinavian languages (and the Swedes had their own vowel dance!) But the shift in words themselves came much later (and before empire).


English also shows a remarkable variation in pronunciation of words even for a single person. I don't know of any other language where, even in careful formal speech, words can just change pronunciation drastically based on emphasis. For example, the indefinite article "a" can be pronounced as either [ə] (schwa, for the weak form) or "ay" (strong form). "the" can be "thə" or "thee". Similar things happen with "an", "can", "and", "than", "that" and many, many other such words.


We had a spelling reform or two already, they were unfortunately stupid, eg doubt has never had the b pronounced in English. https://en.m.wiktionary.org/wiki/doubt

That said, phonetic spelling reform would of course privilege the phonemes as spoken by whoever happens to be most powerful or prestigious at the time (after all, the only way it could possibly stick is if it's pushed by the sufficiently powerful), and would itself fall out of date eventually anyway.


> but the second one doesn’t, the vowel is still there!

Isn't the "a" in "have" elided along with the "h"?

Shouldn't've = should not have.

What am I missing?


Even though the vowel "a" is dropped from the spelling, if you actually say it out loud, you do pronounce a vowel sound when you get to that spot in the word, something like "shouldn'tuv", whereas the "o" in "not" is dropped from both the spelling and the pronunciation.


The pronounced vowel is different than the 'a' in 'have'. And the "h" is definitely elided.


Many English dialects elide "h" at the beginning even when nothing is contracted. The pronounced vowel is different mostly because it's unstressed, and unstressed vowels in English generally centralize to schwa or nearly so.


Don’t worry about us. English is truly a horrible language to learn, and I feel bad for anyone who has to learn it.

Also I have always liked this humorous plan for spelling reform: https://guidetogrammar.org/grammar/twain.htm


The node for it on Everything2 makes it a little bit easier to follow with links to the English word. https://everything2.com/title/A+Plan+for+the+Improvement+of+...

So, it's something like:

    For example, in Year 1 that useless letter "c" would be dropped to be [replased](replaced) either by "k" or "s", and likewise "x" would no longer be part of the alphabet.
It becomes quite useful in the later sentences as more and more reformations are applied.


English spelling is pretty bad, but spoken English isn't terrible, is it? It's the most popular second language.


English is rather complex phonologically. Lots of vowels for starters, and if we're talking about American English these include the rather rare R-colored vowels - but even without them things are pretty crowded, e.g. /æ/ vs /ɑ/ vs /ʌ/ ("cat" vs "cart" vs "cut") is just one big WTF to anyone whose language has a single "a-like" phoneme, which is most of them. Consonants have some weirdness as well - e.g. a retroflex approximant for a primary rhotic is fairly rare, and pervasive non-sibilant coronals ("th") are also somewhat unusual.

There are certainly languages with even more spoken complexity - e.g. 4+ consonant clusters like "vzdr" typical of Slavic - but even so spoken English is not that easy to learn to understand, and very hard to learn to speak without a noticeable accent.


You never realize how many weird rules, weird exceptions, ambiguities, and complete redundancies there are in this language until you try to teach English, which will also probably teach you a bunch of terms and concepts you've never heard of. Know what a gerund is? Then there are things we don't even think about that challenge even advanced foreign learners, like knowing which article to use: the/a.

English popularity was solely and exclusively driven by its use as a lingua franca. As times change, so too will the language we speak.


Every real, non-constructed language has weird rules, weird exceptions, ambiguities, and complete redundancies. English is on the more difficult end but it's not nearly the most difficult. I'm not sure how it got to be perceived as this exceptionally tough language just because pronunciation can be tough. Other languages have pronunciation ambiguities too...


English is far from the most complex or difficult.


English being particularly difficult is just a meme. Only the orthography is confusing.


The thing is that English takes in words from other languages and keeps doing so, which means that there are several phonetic systems in use already. It's just that they use the same alphabet so you can't tell which one applies to which word.

There are occasional mixed horrors like "ptarmigan", which is a Gaelic word which was Romanized using Greek phonology, so it has the same silent p as "pterodactyl".

There's no academy of the English language anyway, so there's nobody to make such a change. And as others have said, the accent variation is pretty huge.


I care.


That used to be the case, but "shouldn't of" is definitely becoming more popular, even if it seems wrong. Languages change before our eyes :)


Who cares?


Why not? Assuming you believe you can use any cloud for backup or Github for code storage.


IIUC one reason is that prompts and other data sent to 3rd party LLM hosts have the chance to be funneled to 4th party RLHF platforms, e.g. SageMaker, Mechanical Turk, etc. So a random gig worker could be reading a .env file the intern uploaded.


What do you mean by "chance"? It's clear that if users have not opted out of training, their data will be used. If they have opted out, it won't be used. And most users are in the first bucket.

Just because training on data is opt-out doesn't mean businesses can't trust it. Not the best for users' privacy, though.


I think it's fair to question how proprietary your data is.

Like there's the algorithm by which a hedge fund is doing algorithmic trading; they'd be insane to take the risk. Then there's the code for a video game: it's proprietary, but competitors don't benefit substantially from an illicit copy. You ship the compiled artifacts to everyone, so the logic isn't that secret. Copies of similar source code have leaked before with no significant effects.


Most (all?) hedge funds that use AI models explicitly run them in-house. People do use commercial LLMs, but in cases where the LLMs are not run in-house, it's against company policy to upload any proprietary information (and generally this is logged and policed).

A lot of the use is fairly mundane and basically replaces junior analysts. E.g. it's digesting and summarizing the insane amounts of research that is produced. I could ask an intern to summarize the analysis on platinum prices over the last week, and it'll take them a day. Alternatively, I can feed in all the analysis that banks produce to an LLM and have it done immediately. The data fed in is not a trade secret really, and neither is the output. What I do with the results is where the interesting things happen.


AFAIK, the actual trading algorithms themselves aren’t usually that far from what you can find in a textbook, their efficacy is mostly dictated by market conditions and the performance characteristics of the implementation / system as a whole.


This very much "depends".

Many algo strategies are indeed programmatically simple (e.g. use some sort of moving average), but the parametrization and how it's used is the secret sauce and you don't want that information to leak. They might be tuned to exploit a certain market behavior, and you want to keep this secret since other people targeting this same behavior will make your edge go away. The edge can be something purely statistical or it can be a specific timing window that you found, etc.
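To make that concrete, here's a bare-bones sketch of the kind of textbook moving-average strategy being described; the 20/50 window lengths are arbitrary placeholders I've picked for illustration, and choosing and tuning them is exactly the parametrization people keep secret:

    # bare-bones moving-average crossover signal, sketch only (pandas assumed);
    # the 20/50 window lengths are arbitrary placeholders -- tuning them is
    # the "secret sauce" the comment above is talking about
    import pandas as pd

    def crossover_signal(prices: pd.Series, fast: int = 20, slow: int = 50) -> pd.Series:
        fast_ma = prices.rolling(fast).mean()
        slow_ma = prices.rolling(slow).mean()
        # +1 = long while the fast average is above the slow one, -1 = short;
        # the warm-up period (before `slow` prices exist) stays neutral at 0
        signal = pd.Series(0, index=prices.index)
        signal[fast_ma > slow_ma] = 1
        signal[fast_ma < slow_ma] = -1
        return signal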

It's a bit like saying that a Formula 1 engine is not that far from what you'd find in a textbook. While it's true that it shares a lot of properties with a generic ICE, the edge comes from a lot of proprietary research that teams treat as secret and definitely don't want competitors to find out.


Wisconsin local, currently attending UW-Madison for CS, and visited the Epic Campus when I was going to a local high school less than 30 minutes out. Talking to the engineers and exploring the campus is what solidified my choice in deciding to become a software engineer, so thank you Epic!


Still feels like the solution here is just using Windows 10 IoT LTSC to avoid all this madness. Windows 11 is a bloated product that feels worse to use than Windows 10, plain and simple.


Windows 11 IoT Enterprise 24H2 (LTSC and non), very officially does not require TPM.


Are there any off-the-beaten-path issues with setting up my parents with this?


LTSC doesn't come with the Microsoft Store installed (a pro or con depending on how you look at it), but it can be installed by running "wsreset -i" in PowerShell.

Bonus: LTSC gets an extended security patching support lifespan.


Purelymail has been great! I've used it for about a year, no issues, just works and extremely cheap.


Same here. It's wonderful. I moved over when Gandi started charging for its e-mail hosting.


Kagi is a tough pill to swallow. Their search is hands down the best around, there's no other way around it.

That being said, $10/mo is also expensive.

The workaround I found is using Kagi Ultimate. I get access to Claude (and I'm still able to attach files + access a dozen other LLMs) for $25/mo, so I was able to cancel Claude and keep Kagi and get the best of both worlds from either product.

Side note: incredible that a small team like Kagi's can somehow use LLMs more effectively in search than a company that has years of search experience (i.e. Google)


I love it too, and I do think it's expensive, too.

A lot of people laugh at thinking $10 (USD) pm is expensive, of course it's not huge money for most people. The problem is Kagi is a kind of "vote" towards moving the internet away from the ad-supported crapware trying to spy on your every click and capture your attention non-stop. If you're trying to replace a lot of free services with paid services to cast said vote towards shaping the web into what it should be, these costs really add up.

After paying for search, email, supporting a creator or two (e.g. a podcast), and some software here or there, you can easily end up in the hundreds of dollars. Then you look back and notice that at best you've saved yourself from a few annoying ads and maybe gotten a fractionally better service, and at worst your experience is unchanged and you're just deriving some intangible satisfaction from having not been spied on (which at the individual level doesn't make much difference unless millions of people follow your lead) or from supporting a creator you admire.

It's tough.


It is tough.

For my company, it's easy: I pay for the tooling my devs need. To oversimplify, I only pay 50-65% of that, because expenses lower taxes. And compared to salaries, a few hundred $ is not a big deal. If I think about the time it saves us and how much money we can make in that time, it's a no-brainer. Even just having people enjoy their work more pays off.

There's opportunity cost. Ad supported services are not overly incentivised to provide a quality product in the long run, so it's a safe assumption to make that they will waste your time to some degree, at least as they mature and enshittify. Some are great, eventually they all become bad, in my experience. It can be smart to use free stuff while it's still good, with an eye on migrating.

I don't know how much sense it makes to apply this opportunity cost thinking to your personal time. I don't really do that, but I do try to reduce time spent on anything that annoys me, and to do more stuff that brings me joy or pride, even if it's not economical. Life is short.


> That being said, $10/mo is also expensive.

It is more expensive than $0, but if you value your time more than a dollar an hour, the time saved is worth more than $10. I've found I scroll a lot less and have fewer false positive sites where I click in and look around only to find it isn't what my search was looking for.

That is just for the basic search feature. TBH, I haven't even investigated its other features, like lens and the ai stuff.


Can't say I understand how $10/month is expensive.

Quality search results ultimately save time digging through poor quality search results. Add up 300+ searches per month and surely you're hitting minimum wage value at least.

The value proposition is absolutely there at $10.


Less than half of HN users are from the US and wages are lower in most countries, sometimes by a lot. Less than 10% in Turkey or Ukraine for example: https://en.wikipedia.org/wiki/List_of_countries_by_average_w...


Value proposition should be compared to the low cost alternative. Is it $10 better than Google? Maybe, I am not sure.


Just being able to rank domains (and nuke the ones that are usually spam) is enough to make it worth $10/mo for me. This is a tool you’re using constantly, so even small time savings per use adds up to a lot.


I am comparing it to the low cost alternative.


That (25 usd for Claude + Kagi) is the best sales pitch ever. I'm switching :)

Not sure what I'll do when Grok 3 comes out (I expect it to beat every other LLM out there hands down) but we'll see by then :p


What is it about Grok that you expect to be so much better? Not disagreeing necessarily, just want to know your reasoning.


Better LLM seems to be about who has the most compute and data for training.

xAI has (by far) the most compute and data now.


> [...] incredible that a small team [...]

Here, this. Small, focused teams usually deliver more output per person (or even overall) than larger ones. Less management overhead, clear goals and responsibilities, tendency to employ people with cross-disciplinary experience, hiring for talent and not checklists, etc.

> [...] can somehow use LLMs more effectively [...]

LLMs are an incredibly effective tool for the few areas where they do fit the problem. But there's so much "AI" hype going on, everyone is trying to cram it into anything and everything, running around with a hammer trying to smash things just in case they turn out to be a nail. Even the old-time players (who should know better) can't resist the urge.

It's almost like oligopolies faced with changing markets tend to start collapsing under their own weight.


Unless you're a really heavy user, you can possibly save a lot on those LLM bills by using the API and some third party app. (Like Msty for example)
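As a very rough sketch of what "using the API directly" can look like with the official anthropic Python SDK (the model name here is an assumption on my part; check the current model list and per-token pricing before relying on it):

    # rough sketch of hitting the Claude API directly (pip install anthropic);
    # the model name is an assumption -- check the current model list first
    from anthropic import Anthropic

    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=512,
        messages=[{"role": "user", "content": "Summarize this article for me: ..."}],
    )
    print(reply.content[0].text)

A third-party app like Msty just wraps this kind of call in a nicer UI, so the pay-as-you-go cost is the same either way.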


This is true for me, except I don’t want to run another app, and I like using Claude on mobile.

But the API is tempting for a cost savings.

I put $5 in API for “Continue” extension in VS Code and it’s been months and haven’t used it all up yet.


There was a time when it was incredible that a small team like Google could somehow implement search more effectively than a company that has years of search experience (i.e. AltaVista)


In that case -- and probably in the case of Kagi vs. Google as well -- it was entirely dependent on focus. In the hypothetical situation where your goal changes from "provide the best possible search" to "beat Yahoo," your available resources will be used on different things, and then...


> Side note: incredible that a small team like Kagi's can somehow use LLMs more effectively in search than a company that has years of search experience (i.e. Google)

But Google is not a search company. It used to be, but now it's an ad company. I'm sure their LLM use serves their purposes right.


I'm discovering that Claude is included in Kagi Ultimate and this is also slightly blowing my mind... Probably going to do the switch.

The fact that it's expensive is true, but for me it's largely compensated by the niceties of the service. I don't really like the fact that they use Yandex either, but the other search engines aren't really satisfying for me anymore.

I've used Kagi Pro for several months now and it's working great for my personal and professional needs. The only thing I'm missing is the shopping features, but I can live with that and switch back to Google when I'm looking to spend my money on physical goods...


Is $10 really expensive?

For most people, no. Can you think of $10/m you spend on something less important than search? A couple coffees, a sandwich, HBO, Netflix, a drink, using two gallons of gas recreationally, etc


I almost never buy coffee. Rarely buy sandwiches out. I get HBO for $3/month now for 6 months and will cancel after. Netflix is $7/month, though the whole household uses that. Two gallons of gas can buy a lot of transportation to necessities for the kiddo. Though we have an EV and $10 gets us maybe 250 miles more or less of driving -- that's a lot.


No, $10 a month is not expensive. However, the problem is that products and services you used to buy once (paying again only if you wished to upgrade to the next version) have become fewer and fewer, while subscription services have skyrocketed.


Most people I know can afford some of the luxuries you list, but only barely. If you have to choose between having a drink once a month at a place other than your own home, and having an ad-free search engine that actually works, you'll find that many people are thick enough to go for that drink.

For context, this is speaking from the Netherlands, where housing is relatively expensive.


A couple of coffees isn't a luxury, not even for the worst penny pinching Dutch miser imaginable.


How many coffees can you drink in a day? How many services and even standalone apps that don't use server resources want a measly 10/month from you?


I don’t drink coffee or alcohol, watch movies or drive a car.


> That being said, $10/mo is also expensive.

Back in the beta they planned on launching with a $20-$30/month unlimited plan and they didn't think they'd be able to bring the price down. That was a little too expensive for me so I moved on. I like what they're doing and I'd pay $10/month but I just don't have a use for it anymore.


> a company that has years of search experience (i.e. Google)

From experience on the figurative side of this reality, I can attest that it is hard to build a track while the train is running on it.


Agreed on Kagi ultimate.

The agents (optional with a toggle) hooked up to the LLMs are fairly decent. They'll search, grab YouTube transcripts, read online dev documentation, etc.


How does Kagi Ultimate compare to Perplexity?

