Hacker News | nrvn's comments

Exactly. You can “have a vision,” accelerate at full speed toward a hard wall, and just before going full throttle be offered an open door around the corner that you had never even thought about. That door helps you discover a new vision, one that might stick for a lifetime.

Also, while the original advice about “vision” sounds reasonable, it also sounds a bit dogmatic. The flipside of “career vision” is “tunnel vision”. And life is not deterministic; it has a much more probabilistic nature. Hence: curiosity and an open mind.


Not being able to give granular permissions to folders is not a problem specific to one app; any app, open or closed source, may be compromised. Remember that the risk is zero if and only if you avoid the risk, i.e. in this particular case, do not install Obsidian.

macOS:

- does not have a granular permissions model as far as I know;

- deprecated sandbox-exec, which used to allow exactly that;

- the macOS App Store is a very strange phenomenon; I would not put much trust in it by default.
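For what it's worth, the deprecated mechanism looked like this. A minimal sketch of an SBPL profile (all paths hypothetical); since sandbox-exec is deprecated, Apple may break it at any point, and a real Electron app would need many more allowances just to launch:

```
;; notes.sb: deny everything by default, then allow access to one folder.
;; Hypothetical usage:
;;   sandbox-exec -f notes.sb /Applications/Obsidian.app/Contents/MacOS/Obsidian
(version 1)
(deny default)
(allow file-read* file-write* (subpath "/Users/me/Notes"))
```

This is a sketch of the idea, not a working Obsidian profile.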

Obsidian:

- has a system of community plugins and themes, which is dangerous and has been discussed multiple times[0]. But the problem of managing community plugins is not unique to them. Malicious npm packages, Go modules, Rust crates (and you name it), anyone?.. You are mostly on your own here, and you need to perform your own due diligence on those community-supported random bits.

Obsidian could hugely benefit from an independent audit of the closed source base. That would help build trust in the core of their product.

[0]: https://www.emilebangma.com/Writings/Blog/An-open-letter-to-...


> Obsidian could hugely benefit from an independent audit of the closed source base.

They do a yearly audit: https://obsidian.md/security


Meanwhile, any plugin can do anything.


Sure, but that's not the issue raised by the article.

And if it were the other way around, I guess people would be complaining about how closed it is to developers.

I think part of its success is due to the ecosystem of hundreds of plugins.


It reads like that to me:

> Since Obsidian isn’t distributed through the Mac App Store, it isn’t required to use sandboxing,

> Combined with the fact that its source code isn’t public,

> And that many users rely heavily on Community Plugins (some of my friends have customized their Obsidian setups so much that I barely recognize the app),

> And that users often grant Obsidian access to sensitive folders like iCloud Drive, Documents, or Desktop (protected by TCC or not), etc to open Vault.

> To me, this represents a very serious risk.


If macOS, an OS with POSIX-style permissions, app-level permissions, and per-app folder access limits, does not have a “granular permissions model”, which OS does? What are you trying to say?


Website with background-color: #FDDB29 talking about color choices.

Irony at its best.


One of the worst things about environment variables, among the others discussed here, is their implicit and opaque nature. The majority of applications in the *nix world rely on them. Even when more explicit and obvious configuration mechanisms are supported (configuration files, remote services such as consul/etcd, command-line arguments), env vars are traditionally supported as well.

But as mentioned in the article, it is just a global hashmap that is cloned and extended for child processes. Maybe in 1979 it was a good design decision. Today it sometimes hurts.
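That cloned-hashmap behavior is easy to demonstrate from any language. A small Python sketch (the variable name APP_MODE is made up):

```python
import os
import subprocess
import sys

# The environment is effectively a global hashmap owned by the process...
os.environ["APP_MODE"] = "debug"

# ...and every child process gets a copy of it by default, wanted or not.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('APP_MODE'))"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # → debug

# Passing env= replaces the copy wholesale: the only reliable way to keep
# implicit parent state from leaking into the child.
clean = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('APP_MODE'))"],
    capture_output=True, text=True,
    env={"PATH": os.defpath},  # minimal, explicit environment
)
print(clean.stdout.strip())  # → None
```

Shells, init systems, and container runtimes all do the equivalent of the first call unless told otherwise.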

For example, Kubernetes by default literally pollutes the container’s environment with so-called service links. And you will have a fun time debugging a broken application if any of those “default” env vars conflicts with an env var your app expects to see.

https://kubernetes.io/docs/tutorials/services/connect-applic...
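The service-link pollution can at least be switched off per pod via the `enableServiceLinks` field, which defaults to `true` (pod and image names below are hypothetical):

```yaml
# Hypothetical pod spec: opt out of the injected FOO_SERVICE_HOST /
# FOO_SERVICE_PORT variables for every service in the namespace.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  enableServiceLinks: false   # defaults to true
  containers:
    - name: myapp
      image: myapp:1.0
```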

They are ubiquitous, and we are living in a world of neo-conservatism in IT, where legacy corner-cuts are treated as a standard and never challenged (hello /bin, /usr/bin, /lib, /usr/lib)[0].

[0] - https://askubuntu.com/a/135679


Heh, you can put hjkl in that neo-conservatism bucket. Vi's hjkl bindings are the way they are because of a dumb terminal from 40+ years ago, which sold fewer units than the Nokia N9 smartphone.


A design system, like any other human-built system, needs to be documented and described in order to maintain integrity, where that integrity is defined in clear terms via the foundational principles the rest of the system is built upon. I am surprised to see how some things in today's world fall apart in mere seconds despite the enormous amount of prior art, experience, and lessons learnt.


As they say, YMMV.

My personal journey:

2010-2014 - sublime text

2014-2017 - vim and later neovim (bunch of plugins to resemble IDE-like experience)

2017-2024 - jetbrains (intellij idea with language plugins mostly)

2024-now - neovim (with lazyvim)

I tried helix in 2023 but it did not stick. I do not remember the details, but I remember the final impression: having to retrain muscle memory for “awkward” vim-like key bindings while dealing with various annoyances and bugs. At the end of that trial and error I kept it as a terminal $EDITOR for quick, ad-hoc tasks while doing all the heavy lifting in IntelliJ. I finally ditched it when vim muscle memory and hx muscle memory made my brain short-circuit several times in a row.

Now I am back to neovim, and it is surprisingly as productive (when equipped with proper plugins) as the beefy JetBrains IDEs.

That said, helix looks promising. Maybe it’s the next big thing, who knows)


To honor the memory of this noble man I am using a single bladed razor.


I used the following sources to create an RFC template (and to promote a documentation culture across engineering):

- https://www.industrialempathy.com/posts/design-docs-at-googl...

- https://github.com/rust-lang/rfcs

- https://github.com/kubernetes/enhancements/blob/master/keps/...

- https://blog.pragmaticengineer.com/rfcs-and-design-docs/

Hint: tailor the process and template structure based on your org size/maturity and needs. Don’t try to blindly mimic/imitate.


I'm broadly in favor of RFCs, but they need to be driven from the top down. That's easier said than done.

Most RFC committee debates, in my experience, devolve into firing squads in which the presenter needs to answer every question with pin-point accuracy and perfect context for the asker. Otherwise, they look unprepared and the RFC is negated.

This is allowed to happen because everybody is a theoretical co-equal in the process. Thus, everybody wants to have their say. You'd hope people would read the RFC ahead of time, but there's always somebody who doesn't, yet feels entitled to ask pre-emptive questions. It makes for very combative discussions.

The exception is when a double-skip manager stops that from happening and lets the presenter "make their case" and walk through the whole RFC.


Re: https://www.industrialempathy.com/posts/design-docs-at-googl...

> ... sketching out that API is usually a good idea. In most cases, however, one should withstand the temptation to copy-paste formal interface or data definitions into the doc as these are often verbose, contain unnecessary detail and quickly get out of date.

Using R Markdown (or any Turing-complete documentation system), it's possible to introduce demarcations that let the source code snippets be the literal source of truth:

    // DOCGEN-BEGIN:API_CLASS_NAME
    /**
     * <description>
     *
     * @param arg <description>
     * @return <description>
     */
    uint8_t method( type arg );
    // DOCGEN-ENDED:API_CLASS_NAME
Use a GPT to implement a parser for the snippets in a few minutes. Then invoke the function from the living document for a given source file, such as:

    `r#
      snippets <- parse.snippets( "relative/path/to/ClassName.hpp" );
      docs <- parse.api( snippets[[ "API_CLASS_NAME" ]] );
      export.api( docs );
    `
The documentation now cannot ever go stale with respect to the source code. If the comments are too verbose, simplify and capture implementation details elsewhere (e.g., as inline comments).
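For illustration only (the system described here is written in R), a snippet extractor in the spirit of the markers above can be very small. A Python sketch; the function and marker names follow the comment, not any real KeenWrite API:

```python
import re
from pathlib import Path

# Matches // DOCGEN-BEGIN:NAME ... // DOCGEN-ENDED:NAME pairs and
# captures the verbatim source in between.
MARKER = re.compile(
    r"//\s*DOCGEN-BEGIN:(?P<name>\w+)\n(?P<body>.*?)//\s*DOCGEN-ENDED:(?P=name)",
    re.DOTALL,
)

def parse_snippets(path: str) -> dict[str, str]:
    """Return a map of snippet name -> source text between the markers."""
    text = Path(path).read_text()
    return {m["name"]: m["body"].strip("\n") for m in MARKER.finditer(text)}
```

A documentation chunk would then call `parse_snippets("relative/path/to/ClassName.hpp")` and render the snippet it needs.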

In one system I helped develop, we were asked to document what messages of a standard protocol were supported. The only place this knowledge exists is in a map in the code base. So instead of copy/pasting that knowledge, we have:

    MessageMap MESSAGE_MAP = {
    // DOCGEN-BEGIN:SUPPORTED_MESSAGES
    { MessageType1, create<MessageClassName1>() },
    { MessageType2, create<MessageClassName2>() },
    ...
    // DOCGEN-ENDED:SUPPORTED_MESSAGES
    };
And something like:

    `r#
      snippets <- parse.snippets( "relative/path/to/MessageMap.hpp" );
      df <- parse.messages( snippets[[ "SUPPORTED_MESSAGES" ]] );
      export.table( df );
    `
This snippet is parsed into an R dataframe. Another function converts dataframes into Markdown tables. Changing the map starts a pipeline that rebuilds the documentation, ensuring that the documentation is always correct with respect to the code.

If a future developer introduces an unparseable change, or files are moved, or R code breaks, the documentation build pipeline fails and someone must investigate before the change goes onto main.

Shameless self-plug: The R Markdown documentation system we use is my FOSS application, KeenWrite; however, pandoc and knitr are equally capable.

https://keenwrite.com/


> The image attached to this blog post is a stock photo of a nude pregnant woman. Absolutely nothing sexual.

maiesiophilia (pregnancy fetishism), maschalagnia (armpit fetishism)…


... and?

We shouldn't erase things from the world just because some people have obscure fetishes. I'll keep having balloons at my kid's birthday parties, despite https://en.wikipedia.org/wiki/Balloon_fetish


You obviously didn’t read their article then because they already made your point in a following sentence.


cybersecurity 101:

- know your threats

- assess your risks based on identified threats

- 3-2-1 backup strategy (3 copies of your data, on 2 different storage media, with 1 copy kept offline and offsite)

- a "build the world from scratch" plan, with the assumption that all infra is completely and irreversibly destroyed

- assume you have already been hacked but you don't yet know about it. Build your indicators of compromise based on that simple assumption.

Observing how some "groups of people" act in a totally ignorant fashion is amusing.

