
How would I sync access if more than one person pushes onto the git repo over SSH? I assume syncing would be necessary.


Same as always, with any other remote?

(Use `git pull`? If the different people push to different branches, then there's no conflict and no problem. If you try to push different things into the same branch, the second person will get told their branch is out of date. They can either rebase or - if this is allowed by the repo config - force push over the previous changes...)


Sure, if they push one after the other. If they push at the same time however, does Git handle the sync on its own?


yeah


nice, thx


iTerm2 is at 45ms and more in this measurement: https://danluu.com/term-latency/

There, it stands out in that other terminals have lower latency on the same system.


Ooh that is interesting. Also noticed this passage:

"When people measure actual end-to-end latency for games on normal computer setups, they usually find latencies in the 100ms range."

That is really surprising to me. 100ms is HUGE. On NetHack, sshing into servers, the difference between 50ms and 150ms is enormous. If I do my experiment and I'm not too lazy, I want to check that 100ms figure; your link points to more sources which I haven't checked yet.

I don't know if I'm blind, but I can't tell when that article was written exactly. It mentions a 2014 MacBook and Lubuntu 16.04, so maybe it's from the mid-2010s, possibly 2016? (I should check properly if I do my experiments.)

The author of the Wayland vs X11 comparison used mouse flicks in the video. Just now while writing, I had the thought that this needs some deeper thinking: how do I accurately measure "my finger touched this button and X milliseconds later the character moved"? What counts as the "finger touched this button" event in a 240fps video? Maybe I can do what the author did and map a key to a mouse instead, because a mouse gives a much-easier-to-see physical movement. But then, am I really measuring my old NetHack-like environment faithfully? Or maybe this isn't really an issue and it'll be easy to tell.

Too much for my small brain to think about in the moment :) That link gave me at least some validation that iTerm2 maybe really is a bit slow, or at least was at that time. There's also a bunch of terminals on that page I've never heard of.

Today iTerm2 is my daily driver, but xterm is still really fast on my Fedora Linux system I have on the side.


Without writing the number down, it's up to Ballmer to decide that aspect, because you cannot look into his brain or prove that he didn't commit to a prior number. Therefore, it's fair game.


Impressive work!

In the "mixed-case kerning pairs" quality testing image, I notice that the letter "j" sometimes reaches under the previous letter, like in "Fdj". Sometimes it creates a lot of space, like in "Fjo". Is there a stylistic reason for this? The Fjo spacing is the only thing that stood out to me.

Kudos


Nice catch. That's caused by the `auto_kerning_min` property that you'll see on a lot of the fonts. This tells the auto-kerner not to exceed that value.

I added this parameter because I found that for a lot of fonts, squeezing letters together beyond a certain distance would just look bad, so I would set -1 or -2 as a cap.

It looks like that's just one that snuck past my notice. The word "Fjord" would look strange because of this. This is a good example of how even with the quality testing, things can get through, because I still have to visually glance over hundreds of kerning tests.

One thing that might be a nice adjustment is to have an algorithm that detects the "area" between two letters, so basically how many pixels can volumetrically fit between them, and flags pairs that go over a certain threshold. I could then color those pairs red in the sample text, basically the system marking them as "potential problems" that require an author's look.
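That flagging idea could be sketched roughly like this (a hypothetical illustration, not the actual tool's code: the glyph representation, function names, and threshold are all made up for the sketch):

```python
# Hypothetical sketch: glyphs as sets of filled (x, y) pixels; count the
# empty pixels between the facing edges of a pair at a given kerning
# offset, and flag pairs whose gap is too large.

def gap_area(left, right, offset, height):
    """Rough 'volumetric' gap between two glyphs, in pixels."""
    area = 0
    for y in range(height):
        # Rightmost ink of the left glyph and leftmost ink of the right
        # glyph on this scanline, if either has any ink here at all.
        left_edge = max((x for x, yy in left if yy == y), default=None)
        right_edge = min((x for x, yy in right if yy == y), default=None)
        if left_edge is None or right_edge is None:
            continue  # one glyph has no ink on this scanline
        gap = (right_edge + offset) - left_edge - 1
        if gap > 0:
            area += gap
    return area

def flag_pair(left, right, offset, height, threshold):
    """True if the pair should be colored red for an author's look."""
    return gap_area(left, right, offset, height) > threshold
```

For example, two one-pixel-wide vertical bars of height 2 placed three columns apart leave a gap of 2 pixels on each shared scanline, so `gap_area` reports 4.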


What I picked up from a lifelong typographer is that kerning should be about the area enclosed by the two letters. The aim is to make that consistent. I think that might help in this kind of case.


Thanks for taking the time to answer. I don't understand why in the dj combination, j is able to reach under d for what looks like a kerning of about -4, when the auto_kerning_min property is set to -1 or -2, keeping Fj apart.


Maybe they just manually kerned "j" with the lowercase letters? The "j" line on the lowercase sample would jump out pretty strongly in a way the capitals-with-j don't on the mixed-case one.


Why not just feed that information back into the algorithm itself?


I definitely could. I would have to run a few tests to see what kinds of volumes deserve special treatment.

Usually the way I do things is I start by doing work manually. If I find that there's a common pattern in something I'm doing that could be automated, then I am able to transcribe it into the algorithm because I just follow the same steps I've been using in my head.

This wasn't a thing that actually came up a huge amount, as these glaring pairs aren't tremendously common. But they're just common enough that if I sat down and examined them, I could probably say something like "hey if 1.5 vertical lines worth of pixels are between two letters, kern this extra" or something like that.


I like the riddle, but the framing is unfortunate. When devising riddles, you want ambiguity where it serves the riddle, but precision elsewhere, so that the solver doesn't get needlessly distracted.

Their AIW riddle is: "Alice has 4 brothers and she also has 1 sister. How many sisters does Alice’s brother have?"

It should've been: "How many sisters do Alice's brothers have?" or "...does each of Alice's brothers have?" Why single out a specific brother when you haven't introduced this topic and it is irrelevant to the riddle? Naturally, a human would ask "Which brother?", fully knowing that it is not important to the riddle.

Since this grammatical distraction puts an additional burden on the LLM, the authors muddled their original goal, which was to provide an easy riddle. I think it may have also muddled their data.
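For reference, the intended arithmetic of the riddle is simple:

```python
# Each of Alice's brothers has Alice's sisters as sisters, plus Alice herself.
alice_brothers = 4   # not actually needed for the answer
alice_sisters = 1
sisters_per_brother = alice_sisters + 1  # +1 for Alice
print(sisters_per_brother)  # → 2
```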


Their AIW+ riddle is just ridiculous. It contains so many ambiguities that there are several correct answers, even though the authors claim there is only one.

Which is really unfortunate, because now it only shows that LLMs have problems answering ill-framed riddles.


"Last Chance to See" by Douglas Adams and Mark Carwardine. About Adams and Carwardine travelling the world to document several animals on the brink of extinction (as of 1988). Very entertaining, and it raises questions about the responsibility of human globalization. An all-time favourite.

"Never In Anger" by Jean Briggs. About her 17 months living among the Inuit in the 1970s, documenting how the Inuit see emotions and raise their children without any shouting or violence.

"Shots in the Dark - Japan, Zen, and the West" by Shōji Yamada. About the culture exchange between Japan and the West in the early 20th century and how several perceptions of Zen got constructed in the process.

"Gödel, Escher, Bach" by Douglas Hofstadter. About core ideas in logic, music, and art, and their connections. I always find something new there.

"In Praise of Mastery" / "芸談" by Tanizaki Jun’ichirō. An essay about the Japanese pursuit of mastery. It's a fascinating window into the perception of the arts in late 19th century Japan.

Webster's Dictionary of 1913. A great resource for looking up original meanings of words. I find it very useful for naming stuff in programming.

"Woe Is I" by Patricia O'Conner. A witty grammar book. O'Conner's entertaining style makes it easy to grasp the grammar topics and come back for more.


It's funny though


1. "graphic representation of writing systems" and "text" mean the same thing to me. Do you mean text as spoken?

2. I think pronunciation should not be encoded into the text representation on a general scale. You would need different encodings for "though" and "through" in English alone. Your example leaves the meaning open, even when read as text. If I were the editor, and the distinction was important, I'd change it to "For example, the Cyrillic letter 'c'".

I understand that Unicode provides different code points for same-looking characters, mostly for historical reasons: these characters came from different code pages in language-specific encodings.
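A quick way to see that the look-alike letters are distinct characters, using Python's standard `unicodedata` module:

```python
import unicodedata

latin = "c"          # U+0063 LATIN SMALL LETTER C
cyrillic = "\u0441"  # U+0441 CYRILLIC SMALL LETTER ES

# Visually identical in most fonts, but different code points:
print(latin == cyrillic)           # → False
print(unicodedata.name(latin))     # → LATIN SMALL LETTER C
print(unicodedata.name(cyrillic))  # → CYRILLIC SMALL LETTER ES
```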


I mean text as in the platonic ideal of "c" and "с". Just because they look the same does not make them the same character. If we're going to encode characters that happen to have pixel-identical renderings in certain fonts as a single code point, the next logical step is to encode identical letters that look different in different fonts or writing styles as separate code points as well - for example, the English letter "g" is a fucking orthographic nightmare.


Imagine if, say, English people normally wrote an open ‘g’ and French normally wrote a looped ‘g’, and you have the essence of the Han Unification debates.


Yeah, I don't see it either. If run without --help or --version, true can only ever return EXIT_SUCCESS.

However, I find it interesting that true and false use the very same implementation.
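The exit statuses themselves are easy to check (a POSIX system with the usual `true` and `false` binaries is assumed here):

```python
import subprocess

# `true` always exits with 0 (EXIT_SUCCESS); `false` always exits with 1.
print(subprocess.run(["true"]).returncode)   # → 0
print(subprocess.run(["false"]).returncode)  # → 1
```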


Thanks, this is really handy. The JS version does not do what I expect, and it certainly doesn't do what I want.

