Could NPM maintain something like a dl_files_security_sigs.db database for all files downloaded from npm, covering offline installs too? It would list all versions, the latest modification date, multiple current crypto signatures (SHA-256, etc.), and whether the files have been reviewed by multiple security orgs/researchers, auto-flagging any contents that are not pure, clean text...
If it detects anything (by file date, size, or crypto sigs) newer than N days that has not been through M = "enough" security reviews, the npm system would automatically raise a security flag, stop the install, and trigger a security review of those files.
With a proper (secure-by-default) setup, any new version of an npm download (code, config, scripts) would automatically halt the download and be flagged for global security review by multiple people/orgs.
When/if this setup becomes the NPM default, would it stop a similar compromise from happening to NPM again? Can anyone think of a way to hack around it?
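To make the proposal concrete, here is a minimal sketch of what that check could look like. Everything here is an assumption for illustration: the `sigs` table schema, the `N_DAYS`/`M_REVIEWS` policy values, and the function names are hypothetical, not any real npm feature.

```python
# Hypothetical sketch of the proposed check: a downloaded file passes only if
# its SHA-256 matches the database entry, it is older than N days, and it has
# at least M independent security reviews. Schema and policy values are assumed.
import hashlib
import sqlite3
import time

N_DAYS = 14        # quarantine window for "too new" files (assumed policy)
M_REVIEWS = 3      # minimum independent security reviews (assumed policy)

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_file(db: sqlite3.Connection, path: str) -> str:
    """Return 'ok' or a flag reason for one downloaded file."""
    row = db.execute(
        "SELECT sha256, mod_time, review_count FROM sigs WHERE path = ?",
        (path,),
    ).fetchone()
    if row is None:
        return "flag: unknown file"
    sha256, mod_time, review_count = row
    if sha256_of(path) != sha256:
        return "flag: signature mismatch"
    if time.time() - mod_time < N_DAYS * 86400:
        return "flag: newer than N days"
    if review_count < M_REVIEWS:
        return "flag: not enough security reviews"
    return "ok"
```

The install would abort (and escalate to review) on any result other than "ok"; the hard part, as noted below, is populating and trusting the review counts.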
About right; I did it two years ago on a dual-socket Epyc board (2 sockets × 64 cores = 128 cores, 256 threads) with 1 TB of DDR4. Kernel build in under 90 seconds. It should be faster nowadays...
An RPi + YOLO can do real-time object recognition of clusters of tanks, trucks, and BMPs from orbit.
A service providing real-time streams of those objects plus GPS data directly from Starlink for a select set of locations should be worth a lot to the DOD, NATO, and Ukraine. The DOD and NATO would likely foot the bill for everything needed to build such a system.
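As a rough sketch of what one message in such a detection stream might look like (the field names, classes, and `to_stream_message` helper are all assumptions for illustration; the YOLO inference itself is omitted):

```python
# Hypothetical stream message pairing each detected object with a position
# and timestamp. Field names are assumptions, not any real DOD/NATO format.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Detection:
    obj_class: str      # e.g. "tank", "truck", "BMP"
    confidence: float   # model confidence, 0..1
    lat: float          # WGS84 latitude of the detection
    lon: float          # WGS84 longitude
    ts: float           # UNIX timestamp of the source frame

def to_stream_message(detections):
    """Serialize a batch of detections as one JSON stream message."""
    return json.dumps({"ts": time.time(),
                       "detections": [asdict(d) for d in detections]})
```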
I'd love to see a container environment that can:
- monitor and log all outgoing network connection requests...
- monitor and log all access to critical files/directories such as /etc/*
With such a container, we could catch compromised supply-chain attacks easily, right?
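The first bullet can be approximated with nothing but the stdlib on Linux: /proc/net/tcp lists each socket's local/remote address as little-endian hex, with state 01 meaning ESTABLISHED. A rough illustration (not a complete monitor; a real one would also cover tcp6/udp and use eBPF or audit hooks rather than polling):

```python
# Minimal sketch: parse /proc/net/tcp lines and report established
# outgoing connections. Linux-specific format; IPv4 only.
import socket
import struct

def _decode(hex_addr: str) -> str:
    """Turn '0100007F:1F90' into '127.0.0.1:8080'."""
    addr, port = hex_addr.split(":")
    ip = socket.inet_ntoa(struct.pack("<I", int(addr, 16)))
    return f"{ip}:{int(port, 16)}"

def established_connections(lines):
    """Yield (local, remote) pairs for ESTABLISHED sockets."""
    for line in lines:
        fields = line.split()
        if len(fields) < 4 or fields[0] == "sl":   # skip the header row
            continue
        if fields[3] == "01":                      # 01 = TCP_ESTABLISHED
            yield _decode(fields[1]), _decode(fields[2])

# Usage on Linux: list(established_connections(open("/proc/net/tcp")))
```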
Only by using privileged containers; otherwise you don't have visibility into signals from other containers.
But, say you had such a container, there’s an important distinction between “you captured a log showing the smoking gun evidence of the supply chain attack”, and “you successfully picked that log out of all of the log data you generated and classified it with high confidence as an attack”.
Speaking from experience, the second problem is the hard problem for a multitude of reasons. So while you would have the data, you’d probably have trouble getting good precision/recall on when to actually sound the alarms vs. when it’s some SRE who needed to troubleshoot some network connectivity issues.
> Only by using privileged containers; otherwise you don't have visibility into signals from other containers.
The suspect application doesn't need the privileges, so I'm not sure how much of a problem that is?
> there’s an important distinction between “you captured a log showing the smoking gun evidence of the supply chain attack”, and “you successfully picked that log out of all of the log data you generated and classified it with high confidence as an attack”.
Assuming that you're talking about the signal:noise problem, that's hard in the general case but I feel like you could easily pick off really obvious cases like trying to access private SSH/GPG keys and still get a lot of value.
> Assuming that you're talking about the signal:noise problem, that's hard in the general case but I feel like you could easily pick off really obvious cases like trying to access private SSH/GPG keys and still get a lot of value.
Probably. I’d agree that it’s worth trying at the very least. I’ve run into enough “should be easy” cases that turn out to be not that easy that my default is to get the data and see if the hypothesis really pans out.
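For what the "obvious cases" rule set might look like, here is a minimal sketch: flag any file access whose path matches a small deny list of private key material. The patterns and function name are assumptions for illustration, not a vetted rule set.

```python
# Rough rule-based detector for the "obvious cases" (private SSH/GPG keys,
# cloud credentials). Patterns here are illustrative assumptions only.
import fnmatch
import posixpath

SENSITIVE_PATTERNS = [
    "*/.ssh/id_*",                     # private SSH keys (id_rsa, id_ed25519, ...)
    "*/.gnupg/private-keys-v1.d/*",    # GnuPG 2.x private key store
    "*/.gnupg/secring.gpg",            # legacy GnuPG secret keyring
    "*/.aws/credentials",              # cloud credentials
]

def is_suspicious(path: str) -> bool:
    """True if the accessed path matches a known-sensitive pattern."""
    norm = posixpath.normpath(path)
    return any(fnmatch.fnmatch(norm, pat) for pat in SENSITIVE_PATTERNS)
```

Precision is the selling point here: an SRE debugging connectivity almost never touches these paths, so the alert rate stays close to the true-positive rate.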
It does NOT require a VM/container; it uses strace. It shows you a preview of the file system changes an installation will make and can also block arbitrary network communication during installation (via an allow-list).
Thanks for highlighting this! While ptrace introduces TOCTTOU vulnerabilities, the Packj sandbox fixes that by using read-only args for ptrace. You may find my PhD work [1] on this relevant.
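For a feel of how a strace-based preview can work (a rough illustration only, not Packj's actual implementation), the file trace from something like `strace -f -e trace=openat -o trace.log <install cmd>` can be post-processed to list what the install would write:

```python
# Rough sketch: extract paths opened for writing from an strace log.
# The regex and flag handling are simplified assumptions for illustration.
import re

# e.g.: openat(AT_FDCWD, "/tmp/pkg/x", O_WRONLY|O_CREAT|O_TRUNC, 0755) = 3
_OPENAT = re.compile(r'openat\([^,]+, "([^"]+)", ([^,)]+)')

def written_paths(trace_lines):
    """Return the set of paths opened for writing in an strace log."""
    writes = set()
    for line in trace_lines:
        m = _OPENAT.search(line)
        if m and ("O_WRONLY" in m.group(2) or "O_RDWR" in m.group(2)):
            writes.add(m.group(1))
    return writes
```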
If your CI/CD pipeline uses GitHub Actions, you can monitor and even block outbound network calls at the DNS and network level using Harden Runner (https://github.com/step-security/harden-runner). It can also detect overwrites of files in the working directory. Harden Runner would have caught this dependency-confusion attack and similar ones, due to the call to the attacker's endpoint.
I've had a Honda Clarity PHEV since 2018 and am loving it so far. It has a 48-mile electric range, and I need ~36 for my daily round-trip commute. Charging at work is free. I remember fueling up only 3 times in 2019 on its 7-gallon gas tank; a normal fill-up is only 5-6 gallons since the tank is so small. The hybrid range, though, is supposed to be 350 miles.
I also just installed solar at home. I expect our next family car will also be a PHEV SUV, replacing the 16-year-old minivan. Other than long trips, I don't expect to use much gas; maybe run the engine once a week for a few minutes to keep it lubricated.