Hacker News | okayishdefaults's comments

I'm unsure if "no descenders" provides increased clarity. For example, lowercase q is easy to recognize because your eyes are already drawn to it as one of the few characters that descend. In the case of this font, you have a small uppercase Q as the lowercase q. This feels like it accomplishes the opposite of the stated benefit.


> I'm unsure if "no descenders" provides increased clarity.

Of course not: if it did, we would be doing it that way everywhere. Typeface design has thousands of years of history; there are only a few major variations in Latin type, and we've tried them all. Descenders exist for a reason.

This typeface is pretty cool for what it's meant for: retro aesthetics. Old-school digital displays (like alarm clocks) don't have descenders, so it fits pretty well.


> Descenders exist for a reason.

Yeah, but I wouldn't just assume that's because they're the optimal solution. Look at architectural handwriting: very clear, no descenders.


> Look at architectural handwriting, very clear, no descenders.

I just looked it up, and every example I see has descenders in the lowercase letters.


Technical drawings and notes are almost always all caps.


It doesn’t really count as “no descenders” if you’re only using letters which don’t have any to begin with. And all caps is harder to read fluidly, so that also doesn’t support the point.


Yes, they've specifically chosen to avoid ascenders/descenders for clarity and uniform spacing. I don't see how that's not relevant.


> I don't see how that's not relevant.

Because it’s apples to oranges.

Deciding “my typeface won’t have any lowercase letters” is not the same as “my typeface won’t have descenders”. Technically neither has descenders, but the former compromises by reducing the number of characters—which keeps every remaining letterform distinct at the expense of reading fluidity—while the latter compromises by distorting a good chunk of letters—making them ambiguous and harder to read.

I very much doubt architects decided “let’s write everything in all caps because that avoids descenders”.

And again, while looking it up I see no end of examples of technical writing with lowercase letters, and they all have descenders.


Listen, brother, some guy said "because this is how it is, obviously that's because it's better". All I did was say idk about that and give a simple counterexample.

And we're talking about a monospaced font for your terminal. To me, that's more akin to technical drawing than publishing a book.

In my experience technical drawings often use all caps, which have no ascenders/descenders, and your googling specifically to find a counterexample doesn't change that. NASA, for example: https://s3vi.ndc.nasa.gov/ssri-kb/static/resources/NASA%20GS...


Many styles of architectural handwriting use descenders in the lower case. (Many other styles forbid or disdain the lower case.)


Nowhere does it promise to increase clarity. In one place it says "modern clarity", which is, in my opinion and in many cases, worse than non-modern clarity.


Modern claritu


Yes, it's surprisingly not terrible except for y->u


Yep, I agree. Your eyes need these clues to help you read at speed. When every character looks similar, you have to slow down to look at individual characters, rather than just glancing at a whole word.


Was wondering this too. Then I saw the final item in the list of uses: ASCII art...


At a glance, the phrase about it on the site looks like it's saying "Retro aesthetic meets modern claritu"


I encourage people to learn to program, especially if they aren't pursuing a software engineering career. Someone who knows a specific domain and can see it through the lens of an expert in another will understand their own domain in a way many others cannot. They will be able to break down problems into a collection of manageable chunks. They will learn valuable lessons that show up when you begin to intimately think your way through specific problems.

People may start out with the idea that they can be content creators. They'll have to go through several steps: planning, iteration, implementation, analyzing success or failure, and so on.

I wanted to make video games as a kid. Then it was being a pro gamer. And then it was physics. And then it was linguistics. And now I'm rounding out the end of a software engineering career. I didn't know how to program, and I wasn't particularly mathematically inclined. This led me down several paths all around the idea of generally being a better user of technology.

One of the most seemingly random and yet greatest contributions to my path in life was playing EvE Online. I learned logistics, collaboration, tactics, strategy, spycraft, improvisation, mental fortitude, and even how to administrate LDAP servers. In no way was this a pursuit toward an engineering career.

I'm also a lifelong musician, but there was a significant pause through my twenties due to lack of means. Now that I'm a programmer, I've been able to intuitively command my knowledge of music theory because it's systematic and documented thoroughly.

Learning to play Counter-Strike taught me how technique and approach are just as important as mechanical skill. I can specifically recall a tutorial on instantly headshotting someone as you round a corner without needing to flick your mouse. You simply anchor your crosshairs to the corner you're pivoting around, place them at head height, and click when you see a head. This is an extremely valuable lesson in the abstract.

Learning to play Street Fighter competitively was informed by my experience with learning instruments and specifically key components of Jazz. Improvisation, syncopation, consistency, timing, and training the other person to expect one thing and immediately subvert that expectation all translated well.

I am a champion-ranked Rocket League player. To me, my car is an instrument. I practice it like I practice any mechanical skill that I want to make second nature. Repetition, technique refinement and acquisition, control, and composition of all skills simultaneously are shared between these two things. Because of Street Fighter, I also approach it as a fighting game. Attacking your opponent's mental stack is key to high level success in the same way.

David Sirlin's "Play to Win" taught me the value of removing artificial constraints. I seek to explore the bounds of any problem space to their fullest extent and use that knowledge to exploit opportunities without changing the space I'm in. This is a book about applying Sun Tzu's "The Art of War" to Street Fighter, and it isn't directly abstract in the least.

Factorio is a common programmer obsession. Because of this game, I have an intuitive mental model of algorithms and data structures, separation of concerns, fault tolerance, and how different parts of any system interact. It's not abstract math in my head; it's Factorio.

My father started his career as a draftsman for oil companies, and his command over his hands has always inspired me. Reading "Drawing on the Right Side of the Brain" showed me that I could engage abstract thought at will. This would come up later when I read "Thinking, Fast and Slow" and was able to draw connections between artistic pseudoscience and an intellectual understanding of different modes of thought.

I am a veteran. I was a Crypto Linguist. My experience in the military taught me the value of motivation, rigor, and discipline. I failed basic Spanish multiple times in high school and yet could dream in Korean with the right environment supporting me. These skills and lessons are key to becoming an expert at anything.

I dismantle opponents in Rocket League by applying mental stack management from Street Fighter, tactical prowess from EvE, discipline and motivation from the military, acquisition of mechanical skill from learning instruments, and exploitation of existing mechanics from "Play to Win". Nearly everything I've learned has created a rich tapestry of thought that I pull from.

I am now a successful, specialized software engineer with a long career. I stumbled into this, and I've never been able to succeed with formal higher education. I attended several high schools, often switching mid-semester. This destroyed my ability to get the ball rolling in mathematics. I could write a compiler before I truly understood what math was. Everything from my childhood acted as the foundation for where I am today, even if it was "pointlessly" meandering my way through trying to make a video game, a better MySpace page, process diagrams, drawing, setting up Linux, audio engineering, etc.

People don't take a direct path to their dreams. They evolve and their former experiences inform their future goals, choices, and opportunities.


This is a great post and you made a compelling argument. But I think it's important to remember that for every case like yours, there's another person who became a directionless failure who spends his days lazing about and mooching off those around him. I think parents are reasonable to be afraid that their kids will go down the failure path rather than your success path, because there's no way to know up front which branch they will take.


This is a sign that the user hasn't taken the time to set up their tools. You should be able to type log and have it tab complete because your editor should be aware of the context you're in. You don't need a fuzzy problem solver to solve non-fuzzy problems.


> user hasn't taken the time to set up their tools

The user, in fact, has set up a tool for the task - an "AI model" - unless you're saying one tool is better than others.


Then it's a real bad case of using the LLM hammer and thinking everything is a nail. If you're truly using transformer inference to auto-fill variables when your LSP could do that with orders of magnitude less power usage and a 100% success rate (given it's parsed the source tree and knows exactly what variables exist, etc.), I'd argue that that tool is better.

Of course LLMs can do a lot more than variable autocomplete. But all of the examples given are things that remove cognitive overhead that probably won't exist after a little practice doing it yourself.


This. Set up your dev env and pay attention to details and get it right. Introducing probabilistic codegen before doing that is asking for trouble before you even really get started accruing tech debt.


You say "probabilistic" as if some kind of gotcha. The binary rigidness is merely an illusion that computers put up. At every layer, there's probabilistic events going on.

- Your hot path functions get optimized, probabilistically

- Your requests to a webserver are probabilistic, and most systems have retries built in (sketched at the end of this comment).

- Heck, 1s and 0s operate in a range, with error bars built in. It isn't really 5V = 1 and 0V = 0.

Just because YOU don't deal with probabilistic events while programming in Rust or Python doesn't mean it is inherently bad. Embrace it.
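Something like this minimal Python sketch is what "retries built in" usually amounts to; the function name and parameters are made up for illustration, not from any particular library:

    import random
    import time

    def fetch_with_retries(do_request, attempts=5, base_delay=0.2):
        # Treat each request as a probabilistic event: retry transient
        # failures with exponential backoff plus a little jitter.
        for attempt in range(attempts):
            try:
                return do_request()
            except ConnectionError:
                if attempt == attempts - 1:
                    raise  # out of attempts; surface the failure
                time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))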


We’re comparing this to an LSP or Intellisense type of system; how exactly are these probabilistic? Maybe they crash or leak memory every once in a while, but that’s true of any software, including an inference engine… I’m much more worried about the fact that I can’t guarantee that if I type in half of a variable name, it’ll know exactly what I’m trying to type. It would be like preparing to delete a line in vim and having it predict that you want to delete the next three. Even if you do 90% of the time, you have to verify its output. It’s nothing like a compiler, spurious network errors, etc. (which still exist even with another layer of LLM on top).


>> Introducing probabilistic codegen ...

> Just because YOU dont deal with probabilistic events while programming in ...

Runtime events such as what you enumerate are unrelated to "probabilistic codegen" the GP references, as "codegen" is short for "code generation" and in this context identifies an implementation activity.


The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux. Those only exist in real-time operating systems.


> The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux. Those only exist in real-time operating systems.

Again, the post to which you originally replied was about code generation when authoring solution source code.

This has nothing to do with Linux, Linux process scheduling, RTOS[0], or any other runtime concern, be it operating system or otherwise.

0 - https://en.wikipedia.org/wiki/Real-time_operating_system


(eyeroll)

Apples and oranges. It's frankly nonsense to tell me to "embrace it" as a phantom/strawman rebuttal to a broader concept I never said was inherently bad or even avoidable. I was talking much more specifically about non-deterministic code generation during the implementation/authoring phase.


> This. Set up your dev env and pay attention to details and get it right. Introducing function declarations before knowing what assembly instructions you need to generate is asking for trouble before you even really get started accruing tech debt.

Old heads cling to their tools and yell at kids walking on lawns, completely unaware that the world already changed right under their noses.


We know the "world has changed": that's why we're yelling. The Luddites yelled when factories started churning out cheap fabric that'd barely last 10 years, turning what was once a purchase into a subscription. The villagers of Capel Celyn yelled when their homes were flooded to provide a reservoir for the Liverpool Corporation – a reservoir used for drinking water, in which human corpses lie.

This change is good for some people, but it isn't good for us – and I suspect the problems we're raising the alarm about also affect you.


Honestly, I've used a fully set up Neovim for the past few years, and I recently tried Zed and its "edit prediction," which predicts what you're going to modify next. I was surprised by how nice that felt — instead of remembering the correct keys to surround a word or line with quotes, I could just type either quotation mark, and the edit prediction would instantly suggest that I could press Tab to jump to the location for the other quote and add it. And not only for surrounding quotes, it worked with everything similar with the same keys and workflow.

Still prefer my neovim, but it really made me realize how much cognitive load all the keyboard shortcuts and other features add, even if they feel like muscle memory at this point.


> Then it's a real bad case of using the LLM hammer and thinking everything is a nail. If you're truly using transformer inference to auto-fill variables when your LSP could do that with orders of magnitude less power usage and a 100% success rate (given it's parsed the source tree and knows exactly what variables exist, etc.), I'd argue that that tool is better.

I think you're clinging to low-level thinking, whereas today you have tools at your disposal that allow you to easily focus on higher-level details while eliminating the repetitive work required by, say, the shotgun surgery of adding individual log statements to a chain of function calls.

> Of course LLMs can do a lot more than variable autocomplete.

Yes, they can.

Managing log calls is just one of them. LLMs are a tool that you can use in many, many applications. And they're faster and more efficient than LSPs at accomplishing higher-level tasks such as "add logs to this method/methods in this class/module". Why would anyone avoid using something that is just there?
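For concreteness, here's a sketch of the repetitive edit being described, using Python's stdlib logging on a made-up call chain; the function names and messages are illustrative only:

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    # The "shotgun surgery" in question: every function in the chain gets
    # its own log lines, with names and values pulled from local context.
    def load_order(order_id):
        logger.info("load_order: fetching order %s", order_id)
        order = {"id": order_id, "total": 42.0}
        logger.debug("load_order: loaded %r", order)
        return order

    def apply_discount(order, rate):
        logger.info("apply_discount: rate=%.2f on order %s", rate, order["id"])
        order["total"] *= 1 - rate
        return order

    def checkout(order_id, rate=0.1):
        logger.info("checkout: start order_id=%s", order_id)
        order = apply_discount(load_order(order_id), rate)
        logger.info("checkout: done, total=%.2f", order["total"])
        return order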


I have seen people suggest that it's OK that our codebase doesn't support deterministically auto-adding the import statement of a newly referenced class "because AI can predict it".

I mean, sure, yes, it can. But drastically less efficiently, and with the possibility of errors. Where the problem is easily soluble, why not pick the solution that's just...right?


Some tools are better than others at specific things. AI as commonly understood today is better at fuzzy problems than many other tools. In the case of programming and being able to tab complete your way through symbols, you'll benefit greatly from having tools that can precisely parse the AST and understand schemas. There is no guesswork when you can be exact. Using AI assistants for simple tab completion only opens the door to a class of mistakes we've been able to avoid for years.
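As a minimal illustration of that exactness, here's a sketch using Python's stdlib ast module on a made-up function; a real LSP does far more, but the point is that the symbol set is read off the parse tree, never guessed:

    import ast
    import textwrap

    source = textwrap.dedent("""
        def checkout(cart, user):
            total = sum(item.price for item in cart)
            discount = user.discount_rate
            return total * (1 - discount)
    """)

    tree = ast.parse(source)
    fn = tree.body[0]
    # Parameters and assigned locals come straight off the parse tree,
    # so completion candidates are exact, not predicted.
    params = [a.arg for a in fn.args.args]
    local_names = [t.id for node in ast.walk(fn) if isinstance(node, ast.Assign)
                   for t in node.targets if isinstance(t, ast.Name)]
    print(params + local_names)  # ['cart', 'user', 'total', 'discount']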


In my experience consistency from your tools is really important, and AI models are worse at it than the more traditional solutions to the problem.


I don't want to wade into the debate here, but by "their tools" GP probably meant their existing tools (i.e. before adding a new tool), and by "a fuzzy problem solver" was referring to an "AI model".


I know old timers who think auto-completion is a sign of a lazy programmer. The wheel keeps turning....


Tab complete what? No LSP will complete context-appropriate message and parameters without writing any of it. "What did the user want to log here" is inherently a fuzzy problem.


And a chatbot can consistently work out what I want to log faster than I can?


Most of the time, yes. Since you refer to this as a chatbot, I'm guessing you don't have much experience with things like cursor completion; give it a go before you're too negative.


Or the user works with user-hostile tools. Some stacks, cloud providers, etc are absolutely horrible to use.

There are many people out there that have absolutely no idea how horrible or great they have it.


The LLM is just Intellisense on literal steroids.

It can infer the correct logging setup from the rest of the project and add the most logical values to it automatically.


> This is a sign that the user hasn't taken the time to set up their tools.

You are commenting on a blog post about how a user set up his tools. It's just that it's not your tool being showcased.

> You should be able to type log and have it tab complete because your editor should be aware of the context you're in.

...or, hear me out, you don't have to. Think about it. If you have a tool where you type "add logs" and it's aware of best practices, context, and your own internal usage... I mean, why are you bothering with typing "log" at all?


To be perfectly fair, saying "it's aware of best practices, context, and internal usage" is very misleading. It's aware of none of those (as it is not "aware" of anything), and that is perfectly clear when it produces nonsensical results. Often the results are fine, but I see nonsensical results often enough in my more LLM-dependent coworkers' PRs.

I'm not saying not to use them, but putting it like that is very dishonest and doesn't represent the actual reality of it. It doesn't serve anyone but the vendors to be a shill about LLMs.


I mean, "add logs" is twice as much work as "logs". I suspect you're implying a lot more, but you haven't really explained what you're referring to.


Just breaking down the thought a little, we truly can't say elections shouldn't have standards, right?


Elections at the local level should be governed by the locality. I do not see the need for standards at a higher level, other than for democracy to be maintained in some fashion. External data reporting certainly need not be standardized at t̶h̶e̶ ̶l̶o̶c̶a̶l̶ [sic] a higher level.


This is Hacker News, and so I mean this earnestly.

Why though? Because they have little to no ability to understand the consequences of their actions?

OT: have you ever looked into the proper nomenclature for dog breeds? There is none! They have subspecies based on vibes. Species is a human construct, one that doesn't even serve us in our second most familiar of categories. It seems difficult to double down based on a "science" of naming things.


> Why though?

Life matters. A lot. The whole Universe is this big mechanical device that obeys the law of causality: every change in nature is produced by some cause. For billions of years, this was all there was. Then suddenly Life happened, turning causation on its head: Life's actions have what Aristotle called a "final cause", which never existed before. Life has at least the goal of survival, and maybe more, yet to be discovered. And this changes the nature of the Universe, for the Universe now has will.

As for humans, my comment was not meant to disparage them. They are far from perfect, basically apes, and right now they are the best hope we have for Life to survive the Sun. But humans are only a transient form that Life can take. There was Life before humans; let's hope there will be Life after them. They are no more than a cog, albeit one that may someday reveal itself to be useful.


Is it at this point? When used earnestly, it's regulated, traceable, and slower than other methods of transacting money. History can and has been rewritten, but not when someone is scammed.

Seems like bad currency, but maybe you're aware of something meaningful that crypto contributes.


I mean we're in the comment section of a provider removing models primarily used to make non-consensual porn of celebrities (https://arxiv.org/html/2407.12876v1), then talking about how crypto is the answer.

Visa and co are a cartel, and a lot of the pressure Civitai is facing is unreasonable, but even a broken clock is right twice a day: they had a lot of problematic content.

Even if they turn to crypto, this is a change they shouldn't walk back, or other providers are probably going to turn on them too.


This paper is trash. They preemptively define any models they don't like as "abusive models". This includes any model that can generate real people (including for transformative, fair use purposes like parody) and separately any NSFW model, including stuff like cartoons.

Also they are using a ridiculous definition of "NSFW" to achieve the correlation they want to find. They are putting the prompt (not the image) into ChatGPT and applying an arbitrary metric of NSFW-ness sentiment analysis that returns false positives. Actually NSFW content of real people was always banned on CivitAI.


I'm not sure how anyone's really going to act like the vast majority of the deepfakes generated with celebrity LoRAs isn't porn when the term itself has become synonymous with non-consensual porn.

And it was just in April (less than a month ago) that they stepped up moderation of celebrity NSFW content: the study is from June of 2024.

The study was an attempt to avoid someone immediately trying to argue about a really obvious truth, but some people will still try to argue about the study about the really obvious truth.


I have been using CivitAI practically since it was created. Real person NSFW content has been banned since day one, because they didn't want to get sued. The change in April was a cosmetic update to how they displayed search results. It completely separated out real person content and NSFW content, to ensure that NSFW could not be displayed in proximity to real person content. This changed how content was displayed, not what content was allowed.

I'm sure some people do generate porn with celebrity LoRAs, but there are also plenty of legitimate uses such as parody, criticism, transformative art, etc. If people do post inappropriate content, there are civil remedies and now also federal criminal remedies via the TAKE IT DOWN Act. CivitAI is fully legally compliant, but they are being held hostage by an unelected, unaccountable payment processor cartel.


>Even if they turn to crypto, this is a change they shouldn't walk back, or other providers are probably going to turn on them too.

Yep


>Regulated

Well, humans are regulated, so I don't know what you are driving at here.

>Traceable

The core concept was built on traceability. Privacy coins are the aberration. Crypto was developed by people who wanted provable auditing of banks online.

Actually, if you compare Bitcoin to later standards, Bitcoin's biggest weakness is that it wants to track coins individually instead of just balances. Literally invented by goldbugs.
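A toy sketch of that difference in Python (illustrative only, nothing like real protocol code): an account model mutates one balance per address, while a UTXO model spends whole coins and mints change back to the sender:

    from dataclasses import dataclass

    # Account model: the ledger is one mutable number per address.
    balances = {"alice": 5, "bob": 0}

    def pay_account(frm, to, amount):
        assert balances[frm] >= amount
        balances[frm] -= amount
        balances[to] += amount

    # UTXO model (Bitcoin-style): the ledger tracks individual coins.
    @dataclass
    class Utxo:
        owner: str
        value: int

    utxos = [Utxo("alice", 3), Utxo("alice", 2)]

    def pay_utxo(frm, to, amount):
        spent, gathered = [], 0
        for u in utxos:
            if u.owner == frm and gathered < amount:
                spent.append(u)
                gathered += u.value
        assert gathered >= amount
        for u in spent:
            utxos.remove(u)
        utxos.append(Utxo(to, amount))
        if gathered > amount:
            utxos.append(Utxo(frm, gathered - amount))  # change output

    pay_account("alice", "bob", 4)
    pay_utxo("alice", "bob", 4)
    print(balances)  # {'alice': 1, 'bob': 4}
    print(utxos)     # [Utxo(owner='bob', value=4), Utxo(owner='alice', value=1)]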

>Slower

Depends on both parties. I can go Crypto -> Crypto -> Fiat in like 15 minutes. Osko can be faster.

>History can and has been rewritten

That's a fail state, but it's done rather less than with traditional currency.

>Seems like bad currency, but maybe you're aware of something meaningful that crypto contributes.

My fondest memory was watching a whole bunch of libertarian crypto guys using it to donate to Venezuelans who would pop up in crypto spaces to talk about how hard their lives were and how badly the government had screwed up their lives. I liked to think the libertarians were getting scammed, but it didn't really matter, because there weren't many other onramps into VZ at the time.

Really, its best feature is that it's largely unpreventable. Sure, you can police the on and off ramps to an extent. But if I need to evade financial censorship, I can. Mostly I see people against crypto throw up a big smokescreen, but at the end of the day they tend to be in favor of the financial censorship that crypto is avoiding at the moment. Be that donations to WikiLeaks, purchasing services without a credit card, or what have you.


I think it doesn't need to be a direct weapon to be used in warfare. You can genetically modify your own military.


Yeah good point!

Something that a lot of people are unaware of is that the US military is allowed to do this. I forget the exact EO, but it was signed by Clinton and is in the 12333 chain of EOs. Mostly, this is used for the anthrax vaccine. But it does give clearance to do other forms of medical experimentation on warfighters.

No, really, I am serious here. This is true. I may have the details a bit off, so sorry there, but they can and do perform medical experiments on people without their consent. Now, to be fair, France does this too. They do sham surgeries over there. Non-consensual human medical experimentation is quite the rabbit hole.

So, I can kinda see in the next 10 years, certainly the next 50, a routine shot given to warfighters to help them with things like blood loss, or vitamin C production, or fast-twitch muscles, or whatever. The legal framework is already there and has been for a while; it's just an efficacy issue, honestly.


Surprised that the term "tacit programming" wasn't mentioned once in the article.

Point-free style and pipelining were meant for each other. https://en.m.wikipedia.org/wiki/Tacit_programming
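A quick sketch of the pairing in Python (compose is hand-rolled here, since Python has no built-in function composition):

    from functools import reduce

    def compose(*fns):
        # Right-to-left composition: compose(f, g)(x) == f(g(x)).
        return reduce(lambda f, g: lambda x: f(g(x)), fns)

    # Pointed style names its intermediate values explicitly...
    def shout_pointed(words):
        joined = " ".join(words)
        return joined.upper() + "!"

    # ...while the tacit (point-free) pipeline never mentions its argument.
    shout_tacit = compose(lambda s: s + "!", str.upper, " ".join)

    print(shout_pointed(["hello", "world"]))  # HELLO WORLD!
    print(shout_tacit(["hello", "world"]))    # HELLO WORLD!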


Point-free was technically mentioned once, but more as an "I'd rather not get into this in great detail right now" thing. It's really cool, though.


How do you know when it's small enough?


When it fears what I and my neighbors will do to it. When it personally thinks about its accountability to the people around it, on a first-name basis, any time it even considers spending money.


HN user Silexia will tell you, of course.


Glad to assist!


Yeah, but if you want to remove those qualities from how a professional SWE works, simply have them do it for free.

