Email jeff@amazon.com; it's an unofficial escalation process that still works.


Kagi search results are great, but keywords are not highlighted. How is it usable? Is there an option to enable highlighting?


Can you add the ability to ask questions via GET requests? This is needed to add a site search keyword into the browser. Make it work only when logged in, if you are afraid of bot requests.

There is also a standard to add it automatically ( https://stackoverflow.com/questions/38670851/whats-a-script-... )


This is supported! You can use the https://www.phind.com/search?q=my+question syntax.


Thank you!



This strikes me as something that many people probably figured out a non-rigorous version of and didn't think it was special.

It's kind of one of those resource management hacks you do when you're constrained and screwed by limitations. Splitting things up by priority is a common go-to for resource allocation. This is a spin on that.

I wonder how many other "in the trenches hacks" people have done that overturn widely accepted things the inventors didn't realize were a big deal: "well I usually have a bunch of deliveries to make and I've figured out a clever way to map out the quickest path... "

Don't get me wrong - recognizing it and then formalizing it, doing the work, publishing the paper - that's a lot of effort. I'm not taking that away.


Also relevant: in this particular case the authors themselves note that the result has better theoretical behavior in the worst case, but no practical uses yet. So I think any software engineer exploring this direction would have abandoned it pretty quickly, for the same reason that galactic algorithms aren't typically invented by them either (unless they also do compsci as a hobby, of course). In fact the Wiki page for galactic algorithms mentions another optimal-but-impractical hash table as one of its examples[0][1].

[0] https://en.wikipedia.org/wiki/Galactic_algorithm

[1] https://www.quantamagazine.org/scientists-find-optimal-balan...


Relevant xkcd: https://xkcd.com/664/



Leapfrog Triejoin is an example of the trenches contributing to academia and academia valuing it: https://x.com/RelationalAI/status/1836115579133939752


> I wonder how many other "in the trenches hacks" people have done that overturn widely accepted things the inventors didn't realize were a big deal: "well I usually have a bunch of deliveries to make and I've figured out a clever way to map out the quickest path... "

A lot of them. Having said that: yes, I can imagine that others would have thought up Dijkstra's shortest path algorithm, since he himself said it came to him while shopping, and that it only took him twenty minutes to reason through the original O(n²) algorithm. (edit: oh wait, that's what you're alluding to isn't it? Heh, that went straight over my head).

On the other hand, I don't think the faster versions of Dijkstra's algorithm would have been invented by anyone without at least some understanding of priority queues and big-O behavior. And at that point I hope people realize that they possess some specialized knowledge that might not be entirely common.

In fact, I'd argue that the true strength of Dijkstra's write-up is that it gives us a vocabulary to reason about it and come up with specialized data structures for particular situations.
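To make the priority-queue point concrete, here's roughly what the faster variant looks like; a minimal sketch in Python using the standard heapq module, with a made-up adjacency-dict graph format (this is just the textbook binary-heap version, not any particular optimized implementation):

    import heapq

    def dijkstra(graph, source):
        # graph: dict mapping node -> {neighbor: edge weight} (illustrative format)
        dist = {source: 0}
        heap = [(0, source)]               # priority queue keyed on tentative distance
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                   # stale entry; a shorter path was already found
            for v, w in graph[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Example: shortest distances from 'a'
    g = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
    print(dijkstra(g, "a"))                # {'a': 0, 'b': 1, 'c': 3}

Swapping the binary heap for a bucket queue or a Fibonacci heap is exactly the kind of specialization that this vocabulary lets you reason about.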

Anyway, what you're touching on is the difference between engineering and science: engineering works with confidence built from tests, rules of thumb that reflect lessons learned from historical results, and (in modern times) verified predictions from science. Those rules of thumb might be used when lacking a deeper scientific understanding of why it works. The tests might exist to work around the limitations of scientific knowledge (e.g. modelling turbulence). Science creates insights and predictions through modelling of empirical results. At least that's the difference according to Bill Hammack[0].

In an ideal world the two professions work together and build on each other's results to propel each other forward of course.

[0] https://www.youtube.com/playlist?list=PL0INsTTU1k2X4kCPqmi1e...


> "some specialized knowledge that might now be entirely common"

now -> not, right?

great comment

I'm not being pedantic about a typo, but it reverses the point I think you're making about UNcommon knowledge...


Yes, that was a typo that made it look like I contradicted myself, thank you for catching that :)


I was referring to the general TSP being solved.


Eh, the travelling salesman problem is more like the Collatz conjecture: it looks simple, but there's a lot of complexity hiding under the surface, and it requires some expertise to truly understand why it's really hard. So then we're talking about the opposite problem.

Note that your informal description did not match the TSP since there's no reason to disallow backtracking or visiting the same place twice.


Thanks so much for this link. I remain convinced that papers are so much more understandable with an accompanying talk by the creators. I wish papers would just come with a video talk included.


Exactly, the authors get to eschew the formalism required in papers. Often the core ideas of research are simple in themselves and the real complexity lies in formally proving the results.

Also, I'd not be surprised if someone already invented and used this funnel hashing technique in, say, the '80s in some game or whatnot, but just never realized what they had stumbled onto. Not to diminish the research; it's very ingenious.


Academic papers are terrible at knowledge transfer. A more casually spoken blog post is 100% more effective at communicating ideas imho.

Academia is a weird and broken place.

Disclaimer: I work in a research lab full of awesome PhDs who largely agree with me!


I think papers make good references. I think of it more like the equivalent of a "datasheet" for an electronic part, say. Once you understand the intricacies, it's a valuable reference, but more often than not it's not very good at conveying motivation or intuition.


They're usually not very good as a reference either - they miss out key steps due to oversight or lack of time.


Great way to see it: papers should not be your first point of contact.


> Academic papers are terrible at knowledge transfer.

Well, at least they are better than patents.


Thanks for the video, def a lot better than the article.

I do find it a bit weird that this is somehow better than just over-allocating (and thus reducing the chances of key collisions, which also makes the worst case 'less worse'), given his approach also allocates more memory through the aux arrays.


I don't think anybody is really saying it is. Academics treat big-Oh performance on very very full hash tables like a sport. Real world code on real world CPUs often has a more complex cost function than what the academics considered; cache sizes, fitting in a cacheline, memory bandwidth, icache pressure, ...


He's not allocating through aux arrays, he's splitting the already allocated memory into log(n) layers. You can just track those aux arrays with math in the implementation.

It's probably not better than over-allocating except in memory-constrained scenarios. But the overhead of funnel hashing is not high - it requires 0 extra memory.
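To make the "track them with math" point concrete, here's a toy Python sketch of carving a single allocation into geometrically shrinking layers using offset arithmetic alone - no separate allocations. To be clear, the split ratio, layer count, and function name are made up for illustration; they are not the parameters from the funnel-hashing paper:

    # Illustrative only: partition one flat array into ~log(n) shrinking layers
    # tracked purely by (start, end) offsets, not separate allocations.
    def layer_bounds(capacity, shrink=2, min_size=8):
        bounds, start, size = [], 0, capacity // shrink
        while size >= min_size and start + size <= capacity:
            bounds.append((start, start + size))
            start += size
            size //= shrink
        bounds.append((start, capacity))   # whatever is left becomes the last layer
        return bounds

    table = [None] * 1024                  # one allocation for the whole table
    print(layer_bounds(len(table)))
    # [(0, 512), (512, 768), (768, 896), (896, 960), (960, 992),
    #  (992, 1008), (1008, 1016), (1016, 1024)]

As I understand the scheme, insertion tries a bounded number of slots in one layer and falls through to the next, smaller one when those are occupied - so the extra cost is in probes, not storage.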


Overallocation has a limit. You only have so much RAM/storage. Beyond that you start swapping. I could really use a hash table (or similar structure) that degrades less with higher occupancy.


Could it be that overallocation means you need a bigger search to find empty places or answer queries?



This law mandates that an intelligent speed limiter be present in a car, but it does not currently require it to be activated. The software for road sign recognition is currently of poor quality, making a mandatory system impractical.

https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=PI_...


So I see a lot of stuttering during panning in that video. With motion smoothing on, almost no stuttering. How can anyone prefer the stuttering version?

https://youtu.be/INQrxHREmJ0?t=211


Some of us strongly prefer things to be presented as they are, without artificial enhancements.

This means that if a movie is shot at 24FPS (as nearly all of them are), and is shown to theater audiences at 24FPS, then it should also be displayed at 24FPS in the living room.

(But if you prefer to view the world through rose-tinted glasses, then you do you.)


I understand that, but everyone here is saying that the stuttering version is better in itself and the smooth version is horrible? To my eyes it's the opposite.


It's definitely something that is different from person to person. I strongly prefer it disabled, but not because it looks terrible most of the time - I could get used to it if it looked exactly like it would look if it had been produced with that higher framerate. The issue arises whenever it breaks, for example by making the acceleration of visible motion unnatural. This happens fairly often, either through unrealistic acceleration, or by breaking the previously established visual language of the movie. That's where it breaks my immersion - but that's not the case for everybody, and it's absolutely legitimate to say that you prefer either, or don't care at all!

Maybe a good analogy to understand the "it's objectively wrong" perspective (even if I disagree) is AI upscaling, for example of historical photos. Just like autosmoothing it adds details in a mostly plausible way, and some people prefer it, but it adds fake detail (which understandably annoys purists), and sometimes it actually breaks and produces visual artifacts.


To me, the "smooth" version is artificial and alien in ways I can't quite articulate, just as it is hard to articulate why a long-winded LLM response, while having good grammar, might be both stupid and wrong.

Sure, it's smoother; anyone can see that. It's also weirdly smeary or something.

The (presumably) 24FPS version has a regular amount of judder, and it's the same amount of judder that I've experienced when watching films for my entire life, and each of those frames is a distinct photograph. There is zero smearing betwixt them, and there is no smearing possible.


Yeah, I don't know why people want horrible low frame rates. It's distracting every time a shot pans. But it seems a lot of people do.


We don't want "low frame rates". A lower frame rate is not the goal.

If films were commonly shot and released at 120FPS, then we'd see videophiles clamoring to get the hardware in-place in their homes to support that framerate.

But we're not there. Films are 24FPS. That's what the content is. That's what the filmmakers worked with for the entirety of filming, editing, post, and distribution processes.

And the process of generating an extra 96 frames every second to fill in the gaps of the actual content is simply not always very good. Sometimes, it's even pretty awful.

It seems obvious to say, but artificially multiplying a framerate by a factor of 5 inside of a TV frequently has issues.


>A lower frame rate is not the goal.

>If films were commonly shot and released at 120FPS, then we'd see videophiles clamoring to get the hardware in-place in their homes to support that framerate.

I'm not sure that's actually the consensus opinion. Some of the complaints about frame interpolation are about specific kinds of artifacting, but many are of "the soap opera effect", and those same complaints were levied against The Hobbit, which was actually filmed at a higher frame rate.


I'm in complete agreement with your analysis.

Suppose profile creation is possible (and marketed as such) only in a desktop browser, and messaging on mobile is done with voice messages, so no TTS is needed. Do you think this could work?

The niche is small, this won't be the next Tinder, but in absolute numbers it's still a large number of users.


Why would you need to be a billionaire to build a platform like the one you described? If you think you can't acquire users without big spending, then this platform isn't really much better than the existing ones, is it?


To be honest, I'm mostly joking about the billionaire thing. If I were emotionally invested enough in this idea above other parts of my life I would look for a way to get it moving. The problem is that, by its very nature, a non-profit open-source project like this could not attract venture capitalists or angel investors, lest it become the very thing it was created to challenge (*cough cough*, OpenAI, *cough cough*). A project like this would need venture altruists or angel donors instead to be feasible and sustainable - hence my fantasy of becoming stupidly rich to be able to get it moving, at the cost of downgrading to "normal rich".


You don't need investors, unless you think you can't get users without buying them. The platform you described doesn't need many resources and can be bootstrapped. You could capture just 1% of the value that Match Group captures and live nicely.

I think the reason we don't see competitors like that is that dating without aggressive monetization won't be much better for the average user.


A shirt used to cost a skilled worker a month's wages. How much does it cost today? Why are you so sure workers don't see the benefits of automation?


Why does the cost of a shirt matter when people can't afford housing? We can create dwellings much more effectively than before, especially high-quality and high-density ones.

