I haven’t tried both. Nor am I a keyboard hardware enthusiast. I have just spent some time modifying my keyboard through software (Linux X Server).
I don’t get the apparent fascination that (mech) keyboard enthusiasts have with small keyboards. I can understand moving or removing the keypad since it displaces the mouse (then again, “don’t use the mouse!”), but beyond that I don’t see the appeal except for the aesthetics.
I think the Happy Hacking Keyboard has the missing keys on the FN layer. Hopefully the FN key is programmable, or else you are stuck with whatever the manufacturer wants the FN key to mean. (Pet peeve of mine: cheap keyboards now seem to come with those dang FN keys which can’t be reprogrammed like the regular keys and exist just for garbage multimedia functionality.) With a regular boring full-sized keyboard I can program/re-purpose whatever key sits in the equivalent position to the FN key, or choose another key entirely. Right now I can access all of F1–F36 by using a series of modifier keys right in reach of the glorious Home Row (prostrates). That’s just by modifying my standard keyboard in X. And of course I can access F1–F12 without any modifier keys, since I have an (almost) full-sized keyboard. That way I get the best of both worlds: I can use modifier keys to reach every key from near the home row, or press just one key while spinning a fidget spinner with the free hand.
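To give a flavour of the kind of thing I mean, here is a rough xmodmap sketch of the idea (not my actual configuration; the keycodes assume the usual evdev mapping, so check yours with xev first):

xmodmap -e "keycode 108 = Mode_switch"   # repurpose Right Alt as a layer key
xmodmap -e "add mod5 = Mode_switch"      # Mode_switch has to sit on a real modifier
xmodmap -e "keycode 43 = h H F13 F23"    # h, Shift+h, Layer+h, Layer+Shift+h

The spare modifier becomes a layer key, and the extra layer holds the higher F keys, all reachable from the home row.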
With most mechanical keyboards I’ve encountered the FN key is fully programmable, along with every other key on the keyboard. Typically rather than having the functionality hard coded the FN key will activate a second layer, it’s similar to how shift activated a layer of uppercase characters, but with different functionality.
On mine I have a key that flips into a symbols layer, placing characters commonly used in programming either on or close to the home row, and flips hjkl into vim style cursor keys. I’ve also got another layer on a different key which disables almost all my customisations and turns it into a standard qwerty keyboard for gaming.
If you’re deep into layout customisations via X I suspect you’d really like a decent mechanical with customisable firmware, and probably a few spare keys for mapping to layer shifts.
Xkb gives me eight different layers just on the shift keys (not the control keys etc.). Just takes a ton of time to learn because Xkb is borderline arcane (:)) and because you have to do everything manually if you want something beyond “swap control and Caps Lock”.
If time is money I wouldn’t be surprised if I could have saved a lot of money by just buying a mechanical keyboard with good on-board firmware and programming. At least I have a hard time imagining that the programming would be harder than on X.
> Gigerenzer is undoubtedly correct that Bayesian reasoning is something that can be learned by many people, and yet all that Kahneman needs for the implications of his statement that "the human mind is not Bayesian at all" to be relevant in behavioural economics terms is for some portion of the population to reach systematically different conclusions from the correct Bayesian one using an inferior heuristic. (A point Gigerenzer essentially demonstrates by citing a study which shows how outcomes improved after gynecologists already motivated to make correct predictions were taught Bayesian reasoning.) Perhaps it's sloppy wording on the part of Kahneman that Gigerenzer is taking exception to, but the core claim is not that humans cannot learn statistics, but that at a population level, some humans will continue to rely on less accurate heuristics which deviate from those predicted by rational expectations models in a systematic [and predictable] manner, and which are not simply eliminated over time by financial incentives to be correct.
Indeed, the possible result that only some people are irrational would be very good for the paternalistic policy makers. In fact it would be better than the result that all people are irrational; if all people are irrational, what gives some irrational people the right to guide other irrational people’s lives?
Economists are always oh-so-sorry to inform us that some people just can’t run their own lives without the help of the “free” market or the government. So very sorry. Happily they know some technocrats that will bravely shoulder the burden of being Bayesians.
CDL sounds great. Writers won’t stop writing in any case. And if they did, well, there are already too many books to read in a lifetime.
It’s only natural that authors/writers use their skills and connections to publish moralizing defenses of current copyright laws. It’s in their own interest. It’s also easy to see through.
> In the talk, he gives several demonstrations of a key aspect of why unix pipelines are so practically useful: you build them interactively.
The standard Unix interface might have been interactive in the ’70s, back when hardware and peripherals were horribly non-interactive. But I don’t know why so many so-called millennial programmers (people my age) get excited about the alleged interactivity of the Unix that most people are familiar with. It doesn’t even have the cutting-edge ’90s interactivity of Plan 9, what with mouse(!) selection of arbitrary text that can be piped to commands and so on. And every time someone comes up with a Unix-hosted tool that uses some kind of fold-up menu that informs you about what key combination you can type next (you know, like what all GUI programs have with Alt+x and the file|edit|view|… toolbar), people hail it as some kind of UX innovation.
I think the interactivity you describe might be a different thing from what your parent is talking about.
From what I understand, your parent talks about how the commands are built iteratively, with some kind of trial-error loop, which is a strength that is supposedly not emphasized enough. And I agree by the way.
Nothing to do with how things are input.
That's correct. Articles/tutorials or an evangelizing fan often show the end result: the cool command/pipeline that does something cool and useful. The obvious question from someone unfamiliar with unix, upon seeing something like the pipeline in this article, is "Why would I want to write a complicated mess like that? Just use ${FAVORITE_PROG_LANG:-Perl, Ruby, or whatever}." For many tasks, a short paragraph of code in a "normal" programming language is probably easier to write and is almost certainly a more robust, easier to maintain solution. However, this assumes that you knew what the problem was and that qualities like maintainability are a goal.
Bernhardt's (and my) point is that sometimes you don't know what the goal is yet. Sometimes you just need to do a small, one-off task where a half-assed solution might be appropriate... iff it's the right half of the ass. Unix shell gets that right for a really useful set of tasks.
This works because you are free to utilize those powerful features incrementally, as needed. The interactive nature of the shell lets you explore the problem. The "better" version in a "proper" programming language doesn't exist when you don't yet know the exact nature of the problem. A half-assed bit of shell code that slowly evolved into something useful might be the step between "I have some data" and a larger "real" programming project.
That said, there is also wisdom in learning to recognize when your needs have outgrown "small, half-assed" solutions. If the project is growing and adding layers of complexity, it's probably time to switch to a more appropriate tool.
Just yesterday I needed to extract, sort, and categorize the user agent strings for 6 months' traffic to a handful of sites (attempting to convince a company to abandon TLS 1.0/1.1).
The first half of the job was exactly the process you described: start with one log file, craft a grep for it, craft a `grep -o` for the relevant part of each relevant line, add `sort | uniq -c | sort -r`, switch to zgrep for the archived rotated files, and so on.
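For the curious, it ended up roughly like this (the site filter, file names, and extraction pattern are placeholders for illustration; it assumes the user agent is the last quoted field on each line, as in the common combined log format):

zgrep -h 'example-site' access.log* |   # keep only the relevant requests (hypothetical filter)
grep -o '"[^"]*"$' |                    # the user agent is the last quoted field here
sort | uniq -c | sort -r                # count and rank the distinct user agents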
The other half of the ass was done in a different language, using the output from the shell, because I needed to do a thousand or so lookups against a website and parse the results.
Composable shell tools make for a very under-appreciated toolbox, IMO.
To be fair, it's possible to make this block simpler and more readable than what you have there. The problem with a lot of bash scripts I've seen is that they just duct-tape layer after layer of complexity on top of each other, instead of breaking things into smaller, composable pieces.
Here's a quick refactor for the block that I would say is simpler and easier to maintain.
# list the NNNN_A.csv files in a directory, keep the four-digit prefix,
# feed the prefixes to parse.py, and drop adjacent duplicates
function xform() {
    local dir="$1"
    ls -1 "$dir" |
        grep '[0-9]\{4\}_A\.csv' |   # grep has no \d; spell out the digit class and escape the dot
        cut -c 1-4 |
        python3 parse.py |
        uniq
}
comm -1 -3 <(xform dataset-directory) <(seq 500)
That's not a “discussion on the pros and cons of those two approaches”; that's a skewed story about just one part of a particular review of an exercise done in a particular historical context. (More on that here: https://news.ycombinator.com/item?id=18699718)
Not that there isn't some merit to McIlroy's criticism (I know some of the frustration from trying to read Knuth's programs carefully), but at least link to the original context instead of a blog post that tells a partial story:
BTW, there's a wonderful book called “Exercises in Programming Style” (a review here: https://henrikwarne.com/2018/03/13/exercises-in-programming-...) that illustrates many different solutions to that problem (though as it happens it does not include Knuth's WEB program or McIlroy's Unix pipeline).
>BTW, there's a wonderful book called “Exercises in Programming Style” (a review here: https://henrikwarne.com/2018/03/13/exercises-in-programming-...) that illustrates many different solutions to that problem (though as it happens it does not include Knuth's WEB program or McIlroy's Unix pipeline).
I'm the same person who referred to my post with two solutions (in Python and shell) in that thread, here:
>Please consider all the viewpoints when linking to that blog post;
It should have been obvious to you, but maybe it wasn't: nobody always considers all viewpoints when making a comment, otherwise it would become a big essay. This is not a college debating forum. There is such a thing as "caveat lector", you know:
Let me put it this way: the last time the link was posted, I pointed out many serious problems with the impression it gives. Now, if the same link is posted again with no disclaimer, then either:
1. You don't think the mentioned problems are serious,
or
2. You agree there are serious problems but don't care and will just post it anyway.
Not sure which one it is, but it doesn't cost much to add a simple disclaimer (or at least link to the original articles). Otherwise, as long as I have the energy (and notice it), I'll keep trying to correct the misunderstandings it's likely to lead to.
I generalized interactivity to the Unix that most people seem familiar with.
“The interactive nature of the shell” isn’t that impressive in this day and age. Certainly not shells like Bash (Fish is probably better, but then again that’s a very cutting-edge shell... “for the ’90s”).
Irrespective of the shell this just boils down to executing code, editing text, executing code, repeat. I suspect people started doing that once they got updating displays, if not sooner.
Some people figure out the utility of this right away. Many don't. Whenever I show my coworkers the 10-command pipeline I used to solve some ad-hoc one-time problem, many of them (even brilliant programmers and sysadmins among them) look at it as some kind of magic spell. But I'm just building it a step at a time. It looks impressive in the end, even though it's probably actually wildly inefficient and redundant.
But none of that is the point. The end result of a specific solution isn't the point. The cleverness of the pipeline isn't the point. The point is that if you are familiar with the tools, this is often the fastest method to solve a certain class of problem, and it works by being interactive and iterative, using tools that don't have to be perfect or in and of themselves brilliant innovations. Sometimes a simple screwdriver that could have been made in 1900 really is the best tool for the job!
> Irrespective of the shell this just boils down to executing code
Bernhardt's stated goal with that talk was to get people to understand this point (and hopefully use and benefit from the power of a programmable tool). "If [only using files & binaries] is how you use Unix, then you are using it like DOS. That's ok, you can get stuff done... but you're not using any of the power of Unix."
> Fish
Fish is cool! I keep wanting to use it, but the inertia of Bourne shell is hard to overcome.
> Fish is cool! I keep wanting to use it, but the inertia of Bourne shell is hard to overcome.
Back when I tried Fish, some 5 or 6 years ago I think, I was really attracted by how you could write multiline commands in a single prompt. I left it, though, when I found out that its pipes were not true pipes. The second command in a pipeline did not run until the first finished, and that sucked and made it useless when the first command was never meant to finish on its own, or when it should've been the second command that determined when the first should finish.
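A quick, made-up way to check that sort of behaviour in whatever shell you're trying:

yes | head -n 1     # returns immediately if the pipe really streams

If the shell waits for the first command to finish before starting the second, this never returns, since yes runs forever until head closes the pipe.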
It seems they've fixed that, but now I found that you can also write multiline commands in a single prompt in zsh, and I can even use j/k to move between the lines, and have implemented tab to indent the current line to the same indentation as the previous line in the same prompt. Also, zsh has many features that make code significantly more succinct, making it quicker to write. This seems to go right against the design principle of fish of being a shell with a simpler syntax, so now I don't see the point of even trying to move to it.
I feel that more succinct and quicker to write does not mean simpler.
Fish tries to have a cleaner syntax and probably succeeds in doing so. It may even be an attempt to bring some change to the status quo that is the POSIX shell syntax.
I didn't try fish anyway, because I like not having to think about translating when following some tutorial or procedure on the Web. Zsh just works for that, except in a few very specific situations (for a long time, you could not just copy-paste lines from SSH warnings to remove known hosts, but this has been fixed recently by adding quotes).
> I feel that more succinct and quicker to write does not mean simpler.
Indeed, it does not. They're design trade-offs of each other.
> Fish tries to have a cleaner syntax and probably succeeds in doing so. It may even be an attempt to bring some change to the status quo that is the POSIX shell syntax.
Indeed, it does, and it is (attempting to, at least, though maybe not succeeding).
The thing is that, for shell languages, which are intended to be used more interactively for one-off things than for large scripting, I think being more succinct and quicker to write are more valuable qualities than being simpler.
If you are interested, I use a configuration for zsh that I see as "a shell with features like fish and a syntax like bash".
In .zshrc:
ZSH="$HOME/.oh-my-zsh"
if [ ! -d "$ZSH" ]; then
git clone --depth 1 git://github.com/robbyrussell/oh-my-zsh.git "$ZSH"
fi
DEFAULT_USER=jraph
plugins=(zsh-autosuggestions) # add zsh-syntax-highlighting if not provided by the system
source "$HOME/.oh-my-zsh/oh-my-zsh.sh"
PROMPT="%B%{%F{green}%}[%*] %{%F{red}%}%n@%{%F{blue}%}%m%b %{%F{yellow}%}%~ %f%(!.#.$) "
RPROMPT="[%{%F{yellow}%}%?%f]"
EDITOR=nano
if [ -f /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh ]; then
source /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh # Debian
elif [ -f /usr/share/zsh/plugins/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh ]; then
source /usr/share/zsh/plugins/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh # Arch
fi
Relevant packages to install: git zsh zsh-syntax-highlighting
Then:
zsh
WARNING: it downloads and executes Oh My Zsh automatically using git. You may want to review it first.
If it suits you:
chsh
Works on macOS, Arch, Debian, Ubuntu, Fedora, Termux and probably in most places anyway.
How is that not impressive for the vast majority of developers?
For the past couple of decades, the only other even remotely mainstream place where you could get a comparable experience was a Lisp REPL. And maaaybe Matlab, later on. Recently, projects like R, Jupyter, and (AFAIK) Julia have been introducing people to interactive development, but those are specific to scientific computing. For general programming, this approach is pretty much unknown outside of the Lisp and Unix shell worlds.
The author is an MS student in statistics. Seems that Unix is well-represented in STEM university fields.
Old-timey Unix (as opposed to things like Plan 9) won. When does widespread ’70s/’80s computing stop being impressive? You say “unknown” as if we were talking about some research software, or some old and largely forgotten software. Unix shell programming doesn’t have hipster cred.
> When does widespread ’70s/’80s computing stop being impressive?
When the majority adopts it, or at least knows about it.
> You say “unknown” as if we were talking about some research software, or some old and largely forgotten software. Unix shell programming doesn’t have hipster cred.
It's unknown to those that are only experienced in working in a GUI, which I believe is still the majority of developers. In my experience, those people are always impressed when seeing me work in my screen filled with terminals, so it does seem to have some "hipster cred". :)
> When does widespread ’70s/’80s computing stop being impressive? You say “unknown” as if we were talking about some research software, or some old and largely forgotten software.
That's precisely what I'm talking about. The 70s/80s produced tons of insights into computer use in general, and programming in particular, that were mostly forgotten, and are slowly being rediscovered or reinvented every couple of years. Unix in fact was a step backwards in terms of capabilities exposed to users; it won because of economics.
It supports a large number of languages. I started using it while I was working through SICP. I've used the python and JS environments a little as well.
It's much slower, and doesn't lend itself as well to building the program up from small, independently tested and refined pieces. The speed of that feedback loop really matters: the slower it is, the larger the chunks you'll be writing before testing. I currently believe the popularity of TDD is primarily a symptom of not having a decent REPL (though a REPL doesn't replace unit tests, especially in terms of regression testing).
BTW, there's another nice feature of Lisp-style interactive development: you're mutating a living program. You can change the data or define and redefine functions and classes as the program is executing them, without pausing the program. The other end of your REPL essentially becomes a small OS. This matters less when you're building a terminal utility, but it's useful for server and GUI software, and leads to wonders like this:
The engineer in me that learned about computers on a 286 with 4MB of RAM and a Hercules graphics card screams in shock and horror at the thought of letting a Cray-2's worth of computing power burn in the background. The hacker in me thinks the engineer in me should shut up and realize that live-editing shader programs is fun[1] and a great way to play with interesting math[2].
> The hacker in me thinks the engineer in me should shut up and realize that live-editing shader programs is fun[1] and a great way to play with interesting math[2].
Yeah, sure. My point is, I assume you're not impressed by shader technology here (i.e. it's not new), but the remaining parts are Lisp/Smalltalk 70s/80s stuff, just in the browser.
> I think the interactivity you describe might be a different thing from what your parent is talking about.
Actually no, they're not different things; both refer to the same activity of a user analyzing the information on the screen and issuing commands that refine the available information iteratively, in order to solve a problem. (I would have bought your argument had you made a distinction between "solving the problem" and "finding the right tools to solve the problem").
The thing is that the Unix shell is terribly coarse-grained in terms of what interactivity is allowed, so the smaller refinement actions (what you call "input") must be described in terms of a formal programming language, instead of there being interactive tools for those smaller trial-error steps.
There are some very limited forms of interactivity (command line history, keyboard accelerators, "man" and "-h" help), but the kind of direct manipulation that would allow the user to select commands and data iteratively is mostly absent from the Unix shell. Emacs is way better in that sense, except for the terrible discoverability of options (based on recall over recognition).
One of the dead ends of Unix UX are all the terse DSLs. I feel that terse languages like Vi’s command language [1] get confused with interactivity. It sure can be terse, but having dozens of tiny languages with little coherence is not interactive; it’s just confusing and error-prone.
One of these languages is the history expansion in Bash. At first I was taken by all the `!!^1` weirdness. But (of course) it’s better—and actually interactive—to use keybindings like `up` (previous command). Thankfully Fish had the good sense to not implement history expansion.
> select commands and data iteratively ... Emacs is way better in that sense
Bind up/down to history-search-backward/history-search-forward. In ~/.inputrc
# your terminal might send something else
# for the up/down keys; check with ^v<key>
# UP
"\e[A": history-search-backward
# DOWN
"\e[B": history-search-forward
(note that this affects anything that uses readline, not just bash)
The defaults (previous-history/next-history) only step through history one item at a time. The history-search- commands step through only the history entries that match the prefix you have already typed (i.e. typing "cp<UP>" gets the last "cp ..." command; continuing to press <UP> steps through all of the "cp ..." commands in ${HISTFILE}). As your history file grows, this ends up kind of like smex[1] (ido-mode for M-x that prefers recently and most frequently used commands).
For maximum effect, you might want to also significantly increase the size of the saved history:
# no file size limit
HISTFILESIZE="-1"
# runtime limit of commands in history. default is 500!
HISTSIZE="1000000"
# ignoredups to make the searching more efficient
HISTCONTROL="ignorespace:ignoredups"
# (and make sure HISTFILE is set to something sane)
I also like setting my history so that it appends to the history file after each command, so that they don't get clobbered when you have two shells open:
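# the usual bash incantation for this, roughly (adjust to taste):
shopt -s histappend                 # append to HISTFILE on exit instead of overwriting it
PROMPT_COMMAND='history -a'         # flush each command to HISTFILE as soon as it runs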
One interactive feature I like about Bash though (or shell or whatever) is C-x C-e to edit the current line in a text editor (FC). Great when I know how to transform some text in the editor easily but not on the command line.
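A couple of related knobs, for what it's worth (the editor choice is just an example):

export VISUAL=vim    # C-x C-e (edit-and-execute-command) opens the line in $VISUAL, falling back to $EDITOR
fc                   # the fc builtin: edit the previous command in an editor, then run the result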