Hacker News | dvas's comments

There are many ways of thinking and reasoning about the profession and what it means to each and every one of us.

Some of the buckets:

* The builders, who don't care how they get the result.

* The crafters, who care how they get to the result (vim vs emacs), and who enjoy looking at tiny details deep in the stack.

* The get-it-done people, who use standard IDE tools, stick with them, and keep a strict 9-5.

...

And many more types, with subtypes of each ^^.

In my opinion, many people have a passion for making computers do cool things. Along the way, some of us have fallen into the mentality of using a particular tool or way of doing things and gotten stuck in our ways.

I think it's just another tool: you must know how to use it, and use it in a way that does not atrophy your skills in the long run. I personally use it for learning, as a way into a new topic, which I can then pull on and verify against other sources.


Might not be for everyone, but using git to index markdown notes, plus linking the repo against cloud storage like Dropbox/iCloud to have it available on phones/tablets, has served me well over the last 5 years.


I think the best approach to take with personal knowledge bases, or any knowledge data really, is to keep the ability to move your data in and out as you please.

In my flow, I use a hybrid Zettelkasten-based approach to linking notes. VS Code, Zettlr, Logseq, and Obsidian all render it correctly. I am happy to see more tools embracing this "bring your own data" mantra.
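Part of why plain markdown ports so well between these tools is that the shared [[wikilink]] syntax is trivial to parse. A minimal sketch in stdlib Python (real tools also resolve paths and headings; the note text below is made up):

```python
import re

# Matches [[target]] and [[target|label]] style wikilinks in markdown text.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def extract_links(markdown: str) -> list[str]:
    """Return the link targets referenced by a note."""
    return [m.group(1).strip() for m in WIKILINK.finditer(markdown)]

note = "See [[zettelkasten/method]] and [[Obsidian|my editor]] for details."
print(extract_links(note))  # ['zettelkasten/method', 'Obsidian']
```

Because the links live in the text itself rather than in an app database, any tool (or ten lines of script) can rebuild the note graph.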


Do you point Logseq and Obsidian at the same vault, or do you keep separate vaults for each?

I tried using both a few years back and found it a pain to manage; I ended up staying with Obsidian, which was more my style.

I personally prefer organizing things in folder and sub-folder structures and Logseq was not friendly to that approach at the time.


I would like to add the thought of looking at where these elliptic curves are deployed: things like embedded devices, and implementations such as the bitcoin-core library for secp256k1 [0].

Ref:

[0] Optimized C library for EC operations on curve secp256k1

https://github.com/bitcoin-core/secp256k1
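For anyone curious what "EC operations on secp256k1" boil down to, here is a toy affine-coordinate sketch in Python using the published curve constants (illustration only; a production library like the one above uses constant-time field arithmetic and should be used for anything real):

```python
# secp256k1: y^2 = x^3 + 7 over F_p (standard published parameters)
P = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order

def add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                     # p1 == -p2
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P       # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, point):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = add(result, point)
        point = add(point, point)
        k >>= 1
    return result

# Sanity checks: G lies on the curve, and N * G is the point at infinity.
assert (G[1]**2 - G[0]**3 - 7) % P == 0
assert mul(N, G) is None
```

On embedded targets the interesting constraints are exactly the things this toy version ignores: constant-time execution, no secret-dependent branches, and memory footprint.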


Thanks for sharing Shane, and nice to see companies engaged with the community on a technical level!


For personal use:

What works is good plain old RSS.

Delivered to my client of choice, via GUI or CLI, skipping ads and clickbait articles to save time.

With a wave of generated content flooding automated systems, the best curation is the kind you do yourself, by finding reliable sources to subscribe to.
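The whole pipeline can be as small as a few lines of stdlib Python. A sketch of a keyword-based skip filter over an RSS feed (the filter words and the sample feed below are made up):

```python
import xml.etree.ElementTree as ET

SKIP_WORDS = ("sponsored", "you won't believe")  # hypothetical clickbait filter

def read_feed(rss_xml: str):
    """Yield (title, link) for feed items that pass the keyword filter."""
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        if not any(w in title.lower() for w in SKIP_WORDS):
            yield title, link

feed = """<rss><channel>
  <item><title>Plain old RSS still works</title><link>https://example.com/a</link></item>
  <item><title>Sponsored: ten gadgets</title><link>https://example.com/b</link></item>
</channel></rss>"""

print(list(read_feed(feed)))  # the sponsored item is filtered out
```

In practice you would fetch the XML over HTTP and feed the results to whatever client you prefer; the point is that the format is simple enough to own the whole chain yourself.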


When Google News did their major redesign (back in, what, 2014?) it removed pretty much everything about the service that was useful to me.

I switched to curating my own news feeds from RSS feeds of various sources. It turned out to be the best thing I could have done and now is more useful to me than any third party news aggregator that I've seen.

I recommend this very much.


I think it is so important to be able to disconnect from whatever it is that we are doing, even for a very short period of time. Go for a walk, brew a coffee or simply close your eyes and breathe.

Many times, stress is created artificially. It hurts our performance and deteriorates our ability to think.

Encountered numerous situations where work was "urgent" and would likely land a contract or sales for the company, and everyone would be a superstar if they delivered this "crunch".

After 2 months of pulling all-nighters and sleeping 3-4 hours a night, we delivered the project ahead of time. Apathy begins to set in once management/decision makers keep on giving these gifts we call "crunches".

To help the company and go the extra mile is something most of us have done in the past and will possibly do in the future. However, it's like the story of the boy who cried wolf: if everything is urgent and every task has to be done NOW, then there are bigger issues at play.

Like everything in life, there is usually a limit/budget of money, time, and effort. When these limits and tolerances are abused, people lose respect for those crying wolf and put less effort into their work.


> Encountered numerous situations where work was "urgent" and would likely land a contract or sales for the company, and everyone would be a superstar if they delivered this "crunch".

> After 2 months of pulling all-nighters and sleeping 3-4 hours a night, we delivered the project ahead of time

In my career, none of these have ever paid off. Every time I've crunched this way on something dramatically urgent like this, it has turned out that the "if we can deliver this, this huge moonshot sale is a sure thing" turns into a no-sale

The sales person never seems to get cut loose for diverting the entire R&D towards a longshot for months and burning people out, though

And you can bet the sales person isn't putting in weeks of overtime for the duration, either

I basically refuse to do overtime anymore unless I'm working extra to make up for my own screw-up. I'm not putting in extra hours to hit some other asshole's unrealistic deadlines ever again.


Agreed. Even if by some miracle you do deliver, and are considered a superstar, then what? What do superstars get? Probably just even more crunch work, since you've proven you're willing to do it.


This reminds me of the scene in Schindler's List where the SS officer asks the enslaved factory worker to show him how fast he can assemble a particular component. The terrified worker races to assemble it in record time, anxious to please and impress the nazi -- who responds to the effect of: "if you can make them that fast, why is your daily quota so low?"


This was also a standard technique on American plantations, then adapted to the industrial economy in the form of Taylorist time-and-motion studies. If you trace modern management practice, it is basically a straight line back to chattel slavery.


The plantation thing was so horrible. They would track your personal best output, and every day you didn't beat it you would get whipped based on how much you fell short. Of course this made you work faster, but it was like a game of 21: you knew that if you went over, you would have a higher quota to meet from then on.


You imagine that it will catapult you ahead in your career, your income will skyrocket, and you will be respected and loved by your company and peers.

But in reality no one really cares much: you'll get the same raise everyone else gets, your bonus is still going to be capped by your contract, and you will be better off finding a new job if you want more money.

Man, sometimes I want out of tech so badly it hurts, but I don't generally think it's better anywhere else.


Absolutely zero have ever paid off. Probably the worst was trusting too much a colleague perceived by everybody as the Oracle/PL/SQL guru, while troubleshooting a vendor's abysmal DB query performance during a bigger migration (easily up to half an hour for a trivial 30 million rows). He didn't see any issue on the DB side, and pointed to useless Oracle hints, crappy JDBC drivers, Spring's JDBC templates, possibly my not-optimal code, etc.

I went over my head and wrote probably the most complex code of my life, massively parallel, over weekends and evenings. That wonderful cathedral didn't move performance a zilch; it just made debugging and further changes much harder. After a few hours of actual debugging afterwards, he found out the vendor had defined the responsible DB table in such an obscure and bad way that we had to literally copy the whole table into another, saner one and perform all the work there, in maybe 5% of the time. In fact, I had suggested exactly the same thing initially, but it was quickly dismissed by him, and who questions the guru, right?

This didn't even come from management, just a colleague's incompetence/ego, hard deadlines, tons of pressure to deliver, and starting the project already 2 months late. Closest I've been to burnout yet. I am still a bit pissed off at him, but I know it was not malice, so that eases the emotions quite a bit.

And as for similar requests coming from the top: been there, done that too, and regretted the time & energy put into it. These days it's 8-hour days; if I am not making it on time, I communicate early and clearly, and that's it. They handle it, and if they don't, well, there is always the next job. Life is about priorities.


This is the hardest lesson to learn. Sometimes you won't be afforded the ability to do it "right", either for the company or the product or the customer. Eventually, you'll decide to just show up and ask what is most important today and work on that. Then clock out completely when your work is done and go find meaning and personal satisfaction in your personal life. Go exercise or volunteer or get a hobby or be present for your family. The best way to have work/life balance is to separate them. This is also one of the reasons why I hate wfh. The drive to/from is a great separator and decompressor for me.


I think this is so hard to learn because it's counter to human nature, and only necessary due to the artificial conditions of the modern world. We're programmed to want to be useful to our tribe. But we don't live in tribes any more. Our brains get confused and burnt out because we perform and perform and perform, but we don't get love and status and security in return; we just get this abstract thing called money, which the brain doesn't really understand.


I think a quick 2-minute read on the changes between generations (a Gen1 -> Gen4 example from 2016) will make it a bit clearer [0].

Things like packet encoding, etc. Then a quick look at the signalling change from NRZ to PAM4 in later generations.

Gen1 -> Gen5 use NRZ; PAM4 is used from PCIe 6.0 onward.

[0] Understanding Bandwidth: Back to Basics, Richard Solomon, 2016: https://www.synopsys.com/blogs/chip-design/pcie-gen1-speed-b...


I don't think that makes the answer to the question clearer at all. The slight differences in encoding are interesting but they don't answer the big question:

They made a significant signalling change once, with 6. How did they manage to take the baud rate from 5 to 8 to 16 to 32 GHz?


To go a bit deeper, while still keeping it at a very high level, here is what changed between each generation:

PCIe 1.0 & PCIe 2.0:

Encoding: 8b/10b

PCIe 2.0 -> PCIe 3.0 transition:

Encoding changed from 8b/10b to 128b/130b, reducing encoding overhead from 20% to 1.54%. There were also changes in the actual PCB materials to allow for higher frequencies, like moving away from standard FR-4 to lower-loss laminates [2].

PCIe 3.0, PCIe 4.0, PCIe 5.0:

Encoding: 128b/130b
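The effect of the encoding change is easy to put in numbers. A back-of-the-envelope per-lane calculation from the published transfer rates and encodings above:

```python
# Per-lane effective bandwidth = transfer rate (GT/s) x encoding efficiency.
gens = {
    # gen: (GT/s, payload bits, bits on the wire)
    "1.0": (2.5, 8, 10),     # 8b/10b   -> 20% overhead
    "2.0": (5.0, 8, 10),
    "3.0": (8.0, 128, 130),  # 128b/130b -> ~1.5% overhead
    "4.0": (16.0, 128, 130),
    "5.0": (32.0, 128, 130),
}

for gen, (gts, payload, total) in gens.items():
    gbps = gts * payload / total              # effective Gb/s per lane
    print(f"PCIe {gen}: {gbps:.2f} Gb/s ({gbps / 8 * 1000:.0f} MB/s) per lane")
```

Note how PCIe 3.0 nearly doubles usable bandwidth over 2.0 despite only a 1.6x jump in raw transfer rate (5 -> 8 GT/s): the leaner encoding makes up the rest.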

There is plenty to dive deep on, things like:

- PCB Material for high-frequency signals (FR4 vs others?)

- Signal integrity

- Link Equalization

- Link Negotiation

Then decide which layer of PCIe to look at:

- Physical

- Data Link

- Transaction

A good place to read more is the PCI-SIG FAQ section for each generation's spec, which explains how they managed to raise the transfer rate as you mentioned.

PCI-SIG is the community responsible for developing and maintaining the standardized approach to peripheral component I/O data transfers.

PCIe 1.0 : https://pcisig.com/faq?field_category_value%5B%5D=pci_expres...

PCIe 2.0 : https://pcisig.com/faq?field_category_value%5B%5D=pci_expres...

PCIe 3.0 : https://pcisig.com/faq?field_category_value%5B%5D=pci_expres...

PCIe 4.0 : https://pcisig.com/faq?field_category_value%5B%5D=pci_expres...

PCIe 5.0 : https://pcisig.com/faq?field_category_value%5B%5D=pci_expres...

PCIe 6.0 : https://pcisig.com/faq?field_category_value%5B%5D=pci_expres...

PCIe 7.0 : https://pcisig.com/faq?field_category_value%5B%5D=pci_expres...

[0] Optimizing PCIe High-Speed Signal Transmission — Dynamic Link Equalization https://www.graniteriverlabs.com/en-us/technical-blog/pcie-d...

[1] PCIe Link Training Overview, Texas Instruments

[2] PCIe Layout and Signal Routing https://electronics.stackexchange.com/questions/327902/pcie-...


Not my area at all, just passing by and wondering to what extent, and how, knowledge graphs are used for drug discovery.

Some time back I had a peek at AstraZeneca's GitHub [0] and it got me curious. I know that in genomics they try to accelerate the process with custom hardware such as FPGAs [1].

Curious if anyone can shed light on whether knowledge-graph use at scale is being accelerated in a similar way.

[0] AstraZeneca; Awesome Drug Discovery Knowledge Graphs https://github.com/AstraZeneca/awesome-drug-discovery-knowle...

[1] Gene sequencing accelerates with custom hardware https://www.mewburn.com/news-insights/gene-sequencing-accele...


I only know that in my area you can infer gene and protein interaction networks using knowledge graphs to some degree. In drug discovery I've seen graphs of chemical knowledge, and of what can be drugged and what cannot. https://pubs.acs.org/doi/10.1021/acs.jpca.2c06408

There might be more publications on mining the literature and building such graphs, but I'm not following it much since deep learning took over.
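As a toy illustration of the kind of inference involved (the entities and triples below are entirely made up): store the graph as (subject, relation, object) triples and propose drug-to-disease candidates through shared protein targets:

```python
from collections import defaultdict

# Tiny made-up knowledge graph of (subject, relation, object) triples.
triples = [
    ("aspirin", "inhibits", "PTGS2"),
    ("celecoxib", "inhibits", "PTGS2"),
    ("PTGS2", "associated_with", "inflammation"),
    ("PTGS2", "associated_with", "colorectal_cancer"),
]

# Index edges by (subject, relation) for cheap traversal.
edges = defaultdict(set)
for s, r, o in triples:
    edges[(s, r)].add(o)

def candidate_indications(drug):
    """Diseases reachable via drug -inhibits-> protein -associated_with-> disease."""
    return {d for protein in edges[(drug, "inhibits")]
              for d in edges[(protein, "associated_with")]}

print(sorted(candidate_indications("aspirin")))
# ['colorectal_cancer', 'inflammation']
```

Real systems score such multi-hop paths with learned embeddings rather than enumerating them, but the graph-traversal core is the same, which is also why it parallelizes well onto accelerators.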


Extra links on the no-GIL work, for anyone else curious about this [0], [1].

[0] Multithreaded Python without the GIL https://docs.google.com/document/d/18CXhDb1ygxg-YXNBJNzfzZsD...

[1] Github repo https://github.com/colesbury/nogil



Those links are both fairly old. See PEP 703 [0] and Sam’s nogil-3.12 repo [1] for more current versions.

[0] https://peps.python.org/pep-0703/

[1] https://github.com/colesbury/nogil-3.12

