> doesn’t seem to have been offering any extra protection.
How would this be measured?
Since no one has pointed it out here, it seems obvious to me that the purpose of the notarization system is mainly to have the code signatures of software so that Apple can remotely disable any malware from running. (Kind of unsavory to some, but probably important in today's world, especially given Apple's reach with non-technical users?)
Not sure how anyone external to Apple would measure the effectiveness of the system (i.e., without knowing what has been disabled and why).
There are a lot of unsubstantiated rumors in this comment thread, e.g., that notarization has been deliberately used on macOS to block software that isn't malware. I haven't seen a concrete example of that, though?
Disabling malware via hash or signature doesn't require the Notarization step at all. The server can tell clients not to run anything with hash xxyyzz and to delete it. I mean, just think about it: if disabling stuff required Notarization beforehand, no anti-malware would have existed before Notarization. Nonsense.
I think notarization is just a more automated way to do this approach; otherwise Apple would have to hunt down all the permutations of the binary themselves. It seems like it just simplifies the process? (It makes it a whitelist, not a blacklist, so it's certainly more aggressive.)
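To make the mechanism concrete, here's a minimal sketch of what a server-pushed, hash-based blocklist check could look like (the digest and names are placeholders; this obviously isn't Apple's actual XProtect/MRT implementation):

```swift
import Foundation
import CryptoKit

// Placeholder digest standing in for a server-pushed blocklist entry.
let blockedDigests: Set<String> = [
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
]

// Hash the binary and check it against the blocklist before running it.
func isBlocked(binaryAt url: URL) throws -> Bool {
    let digest = SHA256.hash(data: try Data(contentsOf: url))
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return blockedDigests.contains(hex)
}
```

None of this requires the binary to have been notarized first; the client only needs the blocklist and the hash.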
Notarization is the same for macOS and iOS AFAIK. Both platforms have a separate app store review process that's even more strict than the notarization process.
> Notarization is the same for macOS and iOS AFAIK.
Assuming the basic facts are straight, the linked story explicitly proves this is false:
> UTM says Apple refused to notarize the app because of the violation of rule 4.7, as that is included in Notarization Review Guidelines. However, the App Review Guidelines page disagrees. It does not annotate rule 4.7 as being part of the Notarization Review Guidelines. Indeed, if you select the “Show Notarization Review Guidelines Only” toggle, rule 4.7 is greyed out as not being applicable.
Rule 4.7 is part of the App Review Guidelines for iOS, so this would be a case of failing notarization over iOS App Review Guidelines, which means the policies (and implementation) are different between platforms.
(Of course there's no such thing as "Notarization Review Guidelines" so maybe this whole story is suspect, but rule 4.7 is the App Review Guidelines rule that prohibits emulators.)
The point is that notarization plays the same role for both platforms: checks whose purpose is to make sure that the software won't harm the user's device, unrelated to the App Store review process. Both platforms have an additional App Store review process which is significantly more strict, and the notarization process isn't supposed to involve App Store review for either platform.
When Apple denies notarization for bullshit reasons on one platform, it makes me highly suspicious of their motivation for notarization on all platforms.
Their decision to use the same word for both is enough for me to treat them as the same. Apple has tried to convince people that notarization exists for the user's benefit; the iOS implementation of notarization has convinced me that that's not the case.
Is there a concrete example of this? We know this isn't blanket policy, because of a recent story (https://news.ycombinator.com/item?id=45376977) that contradicts it. I can't find a reference to any macOS app failing notarization due to API calls.
Notarization doesn't blanket-block all access to private APIs, but the notarization process may look for and block certain known accesses in certain cases. This is because notarization is not intended to be an Apple policy enforcement mechanism; it's intended to block malicious software.
So in other words, using private APIs in and of itself isn't an issue. Neither is it an issue if your application is one that serves up adult content, or is an alternate App Store, or anything else that Apple might reject from its own App Store for policy reasons. It's basically doing what you might expect a virus scanner to do.
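If that's right, the simplest version of such a check is plain signature matching, e.g., scanning the binary's bytes for known-bad symbol names, the same way a virus scanner matches byte patterns. A toy sketch with a purely hypothetical symbol name (nobody outside Apple knows what notarization actually scans for):

```swift
import Foundation

// Purely hypothetical symbol a signature-style scanner might flag.
let flaggedSymbols: [Data] = [
    Data("_someDeprecatedPrivateAPI".utf8),
]

// Naive byte-pattern match over the whole file; a real scanner would
// parse the Mach-O symbol table instead.
func containsFlaggedSymbol(binaryAt url: URL) throws -> Bool {
    let binary = try Data(contentsOf: url)
    return flaggedSymbols.contains { binary.range(of: $0) != nil }
}
```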
Yeah, don't disagree with any of that, but I'm looking for explicit evidence that that is true (right now it sounds like it's just an assumption)? E.g., either examples of apps failing notarization due to API calls, or Apple explicitly saying that they analyze API calls. Without that it sounds like we're just guessing?
I have experienced it myself, but that was some years ago, so it may not be current. I think it was APIs they were trying to deprecate, which are now fully gone; it was around the time they introduced the Hardened Runtime, 2018-19 ish.
> My point was mainly that the keyboard (efficient use is difficult to learn) vs mouse (arguably easier to learn) is just one example of why the current desktop metaphor won over something I'd say is designed for heavy keyboard use (even if usable without it).
This comparison of the mouse and keyboard seems to have programmer tunnel vision. Anything involving layout, graphs, media editing (audio, video, image), 3D modeling, and drawing I think we can all agree is better served by the mouse (in tandem with the keyboard). It's really the mouse and keyboard together that have made the computer such a successful creative medium. Programming seems to me like a bit of an anomaly, in that it's one of the few creative tasks that doesn't benefit greatly from a mouse.
There are a ton of comments here saying the keyboard is more ergonomic than the mouse. I've never heard that before, and it feels wrong on its face (it's called repetitive strain injury; using multiple forms of input should be helpful).
But generally, please, if you believe this, provide some kind of source.
It's one of the oldest forms of "programmer identity" out there, one of those shibboleths expressed by people who culturally identify as hackers, independent of its factuality. A bit of a precursor to social media, which elevates in-group shibboleths over data as a matter of course. Programmers were the first to invent and use social media, after all.
I think this is a bit of an oversimplification, I see art and technology as more like a dance where it's unclear who's leading who.
E.g., quick high-level examples: the invention of photography leading to Impressionism; Andy Warhol's entire oeuvre. Today one of the most talked-about artists is Beeple (technology-forward in distribution medium, tooling, and even practice techniques [e.g., "dailies"]).
Music is probably where this is the most profound: take the trajectories of Miles Davis and the Beatles, both of whom started their careers with a fledgling recording industry and ended them recording in sophisticated studios, using instruments and studio techniques that didn't exist a mere 5-10 years earlier.
In electronic music this is even more pronounced. Native Instruments' Massive synth facilitating dubstep is a nice clean example, but if you've followed electronic music overall the connection is obvious. E.g., what dates most pre-2000s music is that today musicians use DAWs, whereas before it was less powerful hardware devices that had a more specific sound (and other arrangement and composition limitations).
This actually feeds into one of the points you made: being successful at art (or anything really) has a lot to do with how excited and motivated you are to pursue it. It's easier to be excited if you feel like you're exploring new territory, ripe with untapped potential, and that's historically often been unlocked by technology. Whereas if you keep comparing your solos to John Coltrane while you're learning the saxophone, that's going to be demoralizing; you'll feel like you'll never get there, so why bother trying. There are also diminishing returns: that musical territory has been so thoroughly explored by now that the ROI on developing that specific skill (playing jazz at that level) has been reduced.
If you tie that all back to the art itself, I'd assume today that we already have saxophone soloists who are more technically skilled than John Coltrane, e.g., the music theory is better understood, and we've had decades of iteration to improve practice techniques (there are tons of books and studies on this subject now). But you can't replicate the sheer excitement that those musicians must have felt as they unlocked new musical possibilities by iterating on music theory (a form of technology), and on recording as a new medium to share and learn from.
To be clear, I'd agree with most of what you've said, but I'd add more nuance, like: leverage technology to make the act of creation as exciting for you as possible, but the main goal of the excitement is to keep yourself practicing and improving. And also look for untapped potential. (A specific example that's relevant today: I think GPU-based rendering is still under-explored. Beeple has been able to leverage it in his art, but the high barrier to entry [probably ~$10,000+ for hardware/software over the course of a career] means there's untapped potential there.)
> Technology has made music accessible in a philosophically interesting way, which is great. But on the other hand, when everybody has the ability to make magic, it's like there's no more magic—if the audience can just do it themselves, why are they going to bother?
> Not OP, but a surprising number of senior(+) engineers at my company use default vim or neovim (no plugins or customizations)
That's fine. They could very well be using the tool they've always used. Support for vi bindings is not the best everywhere, and vim also works over terminal connections, which is great if you need to ssh somewhere to edit a few files.
If you have to work with anything related to TypeScript or even JavaScript, you need to think long and hard about what you're doing if your first option isn't vscode.
There's the engineering maxim, which I completely and unequivocally support: perfection is achieved not when there's nothing left to add, but when there's nothing left to take away.
But that's not enough to explain why it's the preferred editor for elite-tier engineers.
The thing it offers, in contrast to everything else, is simplicity. Everyone loves to pretend that vi is so difficult that it's impossible even to quit. But if you can forgive its steep learning curve, it does provide an accelerated experience. And then, critically, it's already out of your way.
All experts advocate the idea behind the quote "if you give me six hours to cut down a tree, I'd spend the first four sharpening my axe." Learning the keys of vim is that same sharpening.
I used to use Sublime Text, primarily because it was fast. That meant it got out of my way.
Today, I use neovim. And I've never bothered to set up tab complete, nor anything else like it. It does take me about two extra seconds per meaningful code block to type the extra characters. But in trade for those seconds, I'm granted intuition for the name of the exact stdlib function I want to call. It lives not just in my head; I've also developed the habit of understanding the context behind the call.
The feature neovim gives to its users is the intuition and the confidence to reason about the code they've written.
There's a lot of anxiety going around about the future of software development, related to AI. The people who have invested the mental energy of learning vim aren't worried, because it's exceptionally obvious that LLMs are pathetic when compared to the quality they've learned to emit naturally.
Or, more simply: if you're the type of person who's willing to invest the mental effort to become good at something, vim is engineered to make you even better. Contrast that with vscode, which has been engineered to make it easier to type... but then all that time spent has only made you good at the things AI can already do.
tl;dr: vscode improves the typing experience; vim improves the thinking experience. AI isn't coming for the jobs of the thinkers...
Heads up that if you're on macOS, the included version of vim, like bash, is frickin ancient. If you run into problems with vim being slow, you may want a newer version (e.g., via Homebrew).
I made the same argument about Figma (that what made Figma successful is that design software had started to be used more like office suite software) in my overview of the historical transitions in creative software https://blog.robenkleene.com/2023/06/19/software-transitions...:
> In the section on Photoshop to Sketch, we discussed an underappreciated factor in Sketch’s, and by extension, Figma’s, success: That flat design shifted the category of design software from professional creative software to something more akin to an office suite app (presentation software, like Google Slides, being the closest sibling). By the time work was starting on Figma in 2012, office suite software had already been long available and popular on the web, Google Docs was first released in 2006. This explains why no other application has been able to follow in Figma’s footsteps by bringing creative software to the web: Figma didn’t blaze a trail for other professional creative software to move to the web, instead Sketch blazed a trail for design software to become office suite software, a category that was already successful on the web.
Regarding this, I'm curious how big this market really is. E.g., for me, working on software, I almost never see design work from folks who aren't professional designers (and if I do, they use Figma already, not the Creative Suite). But I'd be curious to hear other folks' impressions, even just anecdotally:
> To explain what I mean: Let’s say you’re a company that subscribes to Adobe Creative Cloud. You might buy it for one department—like your video team, or your web team, or your print team. But there are a lot of other people in your office, and they need design too. They need to build social posts and presentations and email signatures and graphical work that your $150,000-per-year senior designer doesn’t have the time for.
I disagree with the comparison to / characterization of “office suite software”. At least the desktop class of office suites have a lot of power features and power users.
It’s not that power users aren’t a market, it’s that casual users are now the larger market and cheaper to serve, and software companies have been catching on to that, to the detriment of power users.
Do you mind getting more specific about what you disagree with around the comparison / characterization of "office suite software"? I can't tell what you're disagreeing with. E.g., it sounds like you're saying I don't think office suite software is powerful, which I don't think I said? (And I don't believe that either; e.g., I think Excel is one of the most powerful applications there is.) I do think the most popular web-based office suite software (e.g., the Google suite) is less powerful than the more desktop-oriented competitors. There's an obvious reason for this: web-based software facilitates collaboration, and complex features hinder collaboration, so they're in natural opposition.
But I definitely struggle with the comparison between power users and casual users. Like, I wouldn't characterize designers who use Figma as casual users; it's that the needs of software designers have changed so much, and those changes mean treating design software more like office suite software makes sense.
I guess the comparison of casual users and power users is more apt when comparing the Adobe suite and Affinity suite. And, e.g., Final Cut Pro X and CapCut are evidence of a wider industry trend towards serving that market. I wouldn't necessarily say that's to the detriment of power users though, it seems like there's software to serve both markets now?
The article is about how software is changing to target casual “normies” over power users. You agreed with this take and likened it to how software is becoming more like office suites. From this I inferred that you don’t think that office suites are catering to power users.
So I don’t understand what properties of office suites you are alluding to here. Or is your point that the previously desktop-only software suites now have web-based counterparts, and the latter aren’t catering to power users anymore?
> Or is your point that the previously desktop-only software suites now have web-based counterparts, and the latter aren’t catering to power users anymore?
Yes, this. More specifically, collaborative software (e.g., with features like live collaborative editing) tends to be less capable than non-collaborative software.
These are not 1-for-1 comparisons though (Figma vs. Canva); I didn't mean to imply they were. E.g., Canva isn't emphasizing collaboration. But office suite software does also have a lower barrier to entry than creative software, which I think Canva's strategy should capitalize on. E.g., the market has already been split into pro vs. prosumer/casual; I think Canva's strategy will probably be to emphasize this split short term, which would mean focusing on ease of use at the expense of complex features (and then considering the more technically complicated shift to collaborative web-based versions later, leveraging what they've learned so far).
What made Figma successful was being able to share via a URL. Period.
No program version problems. No file extension problems. No problems between Mac and Windows. No problems with anti-virus blocking your email attachment. etc.
Figma exists because sharing a bloody file between computers is still a clusterfsck in 2025.
I'm not sure what part of this you think I'd disagree with, if just looking at the microcosm of Sketch to Figma. In other words, the ease of sharing and collaborating via a URL I think is the underlying reason office suite software has become successful.
But I suspect you're arguing against the wider arc of the point I'm making (that design no longer requiring sophisticated features helped facilitate the transition to web-based software). If I have that right, I suggest making sure that your hypothesis about the motivations behind the market transitions also incorporates the transition from Photoshop to Sketch. Because that transition (which preceded the transition to Figma) made every problem you're describing worse, which means, for example, that you can't attribute the transition from Photoshop to Sketch to Figma just to the URL.
What made Figma the go-to tool is the in-browser approach, collaborative editing, and features like design tokens and constraints which were an afterthought on Sketch and required third party extensions.
Do you have an example of this on macOS?