For small enterprises, what are the options for migrating all remote users to Ubuntu?
How does one run an MDM solution? Most of the options out there either support Ubuntu poorly or need a lot of work to get right.
Can anyone share a reference architecture/solution that allows SOC 2 compliance, but without high friction for developers and, more importantly, without large overheads in process or investment?
Yeah, I requested a Linux desktop from my employer and was flatly told "NO". None of our many security applications support it, which is a real shame. Given that we use Windows and macOS, I can't see how we're really any more secure on those platforms, even with the security-theater applications they force us to use.
The standard approach is to use intrusive spyware to monitor all activity "for security" rather than to use systems designed to be resistant to attack. I call it the "fucking for virginity" approach to infosec. The reason is that it's assumed all attack-resistant systems break down somehow, under some circumstances, but the audit trail to determine who committed the attack, and how, is non-negotiable, especially in regulatory and compliance settings. So institutional infosec tools are more interested in gathering the audit trail if/when an attack happens than in preventing the attack (in a "while we value the things in column A, the things in column B take priority" kind of way). And since they're almost always proprietary and considered beyond reproach by the corporate infosec division, well... occasionally something like the Clownstrike incident of 2024 does happen. But even that's not as bad as having had a breach without a sufficient audit trail to defend against liability or claims of noncompliance with regulations or industry standards (e.g., HITRUST in the health field).
It's been years since I've seen RHEL on the desktop at work. Any company that tolerated Linux desktops has either been large and geeky enough to go all-in on Linux and roll their own custom management solutions (Google), or else was still operating in "startup mode" with an attitude of "we trust our software devs, let's just give them a laptop and let them go nuts with root" which means they would flunk any serious security audit. And most of those used Ubuntu or similar.
The only place I've actually seen RHEL on the desktop, and the only large institution besides Google where I've seen a Linux desktop rollout, was in government labs; and for those the government can commission arbitrarily bespoke security systems. In the real world, the CISO of your organization is going to go with one of the industry standards, like Cisco Secure Endpoint, which, again, only exist on Windows and Mac. In the real world, you might be issued a Mac if you're a developer, otherwise a Windows machine, and that's what you'll use, end of story.
Oh this is still very true!
I am from Bangalore, India. There are sites that outright block me, and in a single day I run into the "I am human" checkbox at least 20-25 times because of my region or IP.
On mobile it's worse. All sites that have "strict" mode on will either block me or show the human checkbox.
I see the same even on sites that I manage with Cloudflare myself. Even with relaxed mode on, visiting the site from mobile can trigger the Cloudflare human validation.
Are there any tape-based solutions that can be used at home? I don't care about retrieval time; it's more for home archival purposes.
I have two NAS servers (both Synology-based), but I need something I can back up to and forget about until I want to restore things.
I am looking at a workflow of, say, weekly backups to tape, then updating the index.
Whenever I want to restore a directory or file, I search the index, find the tape, and load it for retrieval.
The NAS can be used for continuous backup (Time Machine and Timeshift style), with archival to tape at a weekly level.
Tape drives are generally SAS, so you will need a controller card.
I've got an HP StorageWorks Ultrium 3000 drive (LTO-5 format) connected to one (an LSI SAS 9300-4i) in my NAS/file server (HP Z420 workstation chassis). Don't go lower than LTO-5, as you will want LTFS support.
About £150 all in for the card and drive (including SFF-8643 to SFF-8482 cables, etc.) on eBay.
Tapes are 1.5 TB uncompressed and about £10 each on eBay; you'll also want to pick up a cleaning cartridge.
I use this and RDX (1TB cartridges are 2-4 times the price, but drives are a lot cheaper, and SATA/USB3, and you can use them like a disk) for offline backup of stuff at home.
Not OP, but similar situation, trying to figure out tape archiving, already using SAS.
However, are there no open formats? The whole LTO ecosystem of course reeks of enterprise, and I'd expect that by now at least one hardware hacker would have pieced together off-the-shelf components to build something that is an order of magnitude cheaper to acquire, maintain, and upgrade.
Tape is really complicated and physically challenging, and there is no incentive for anyone to invest insane amounts of time in something that has almost no fan base. See the blog post from a while back about why you don't want tape.
The LTO cartridges are cheap and the programs that you need for using LTO tape drives are open source.
The only problem is that the LTO tape drives are very expensive. If you want to use 18 TB LTO-9 tapes, the cost per TB is much lower than for HDDs, but you need to store at least a few hundred TB in order to recover the cost of the tape drive.
There is no chance of seeing less expensive tape drives, because there is no competition, and it would be extremely difficult for anyone to become a competitor: it is hard to learn to design and manufacture the mechanical parts of the drive and the read/write magnetic heads.
If you "back up and forget" there is a good chance you will not be able to restore the tapes when the time comes.
At least with drives you can run regular health checks and corruption scans. Tape is good for large scale, but you must have automation that keeps checking the tapes.
A tape can be checked much faster than a HDD, because its sequential read/write speed is several times higher than that of a HDD.
However, there is little need to check the tapes, because the likelihood of them developing defects during storage is far less than for HDDs.
Much more important than checking the tapes from time to time is making multiple copies, i.e. keeping duplicate tapes stored in different places, at a minimum.
Periodic reading is strictly necessary only for SSDs, and it is useful for HDDs, because in both cases their controllers will relocate any corrupted blocks. For tapes it is much less useful. There is more risk of damaging the tape during an unnecessary read (e.g. if the tape drive's mechanism happens to fail at exactly that moment) than of the tape becoming defective during storage.
The LTO cartridges are quite robust and they are guaranteed for 30 years of storage after you write some data on them.
In the past there were badly designed tape cartridges, e.g. the quarter-inch cartridges, where the tape itself did not degrade during storage but a part of the cartridge, namely the rubber belt needed to move the tape, disintegrated after several years. Those disappeared many years ago.
You can buy a tabletop LTO tape drive, a SAS HBA card, and an appropriate cable, and use them with any desktop computer that has a big enough free PCIe slot.
The problem is that while the tapes are at least 3 times cheaper than HDDs, with additional advantages on top (much higher sequential read/write speed, much longer storage lifetime), the tape drives are extremely expensive: a few thousand dollars, usually above $3k.
You can find tape drives for obsolete standards at a lower price, but that is not recommended, because in the future you may have a big tape collection and after your drive dies you will no longer find any other compatible drive.
Because the tapes are cheap, there is a threshold in the amount of data you store beyond which the large up-front cost of the tape drive is covered by the savings from buying cheap tapes.
That threshold is currently at a few hundred TB of stored data.
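A rough way to see where that threshold sits: threshold ~= drive cost / (HDD price per TB - tape price per TB). With made-up but plausible numbers, say a $4500 drive, HDDs at about $15/TB and LTO-9 cartridges at about $5/TB, that is 4500 / (15 - 5) = 450 TB; the exact figure obviously moves with whatever prices you actually pay and with how many copies you keep.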
I use an LTO tape drive and I have recovered its cost a long time ago, but I have more than 500 TB of data.
However, only a third of that is actual useful data, because I make 2 copies of each tape, which are stored in different locations. I am this paranoid because it is data that I intend to keep forever, and I have destroyed all the other media on which it was stored (e.g. the books that I scanned), for lack of storage space. An important purpose of the digitization has been to reduce the need for storage space, besides reducing the access time.
On my PC I keep a database with the contents of all tapes, i.e. the relevant metadata of every file contained inside the archive files stored on the tapes.
When I need something, I search the database which will give me the location of the desired files as something like "tape 47 file 89" (where "file 89" is a big archive file, typically with a size of many tens of GB). I insert the appropriate tape in the drive and I have a script that will retrieve and expand the corresponding archive file. The access time to a file averages around 1 minute, but then the sequential copying speed is many times higher than with a HDD. Therefore, for something like retrieving a big movie, the tape may be faster overall than a HDD, despite its slow access time.
There are programs that simulate a file system over the tape, allowing you to use your standard file manager to copy or move files between a tape and your SSD. However, I do not use such applications, because they greatly reduce the performance the tape drive can achieve. I frequently handle large amounts of data (the archive files in which I store data on the tapes are typically around 50 GB), so the reduced performance would not be acceptable.
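Not my exact scripts, but a minimal sketch of the same write-with-index / grep-and-restore idea, using plain tar against a SAS drive that shows up as /dev/nst0 (the device name, index file and paths here are just placeholders):

# write an archive as the first file on the tape and append its listing to a plain-text index
mt -f /dev/nst0 rewind
tar -cvf /dev/nst0 /data/photos | sed 's/^/tape47 file1: /' >> tape-index.txt

# restore: grep the index, position the tape at that archive, extract only what you need
grep 'photos/2019/trip' tape-index.txt       # tells you e.g. "tape47 file1"
mt -f /dev/nst0 asf 0                        # file1 is the first archive on that tape
tar -xvf /dev/nst0 data/photos/2019/trip

A proper database over the same metadata just makes the lookup step faster and nicer to query.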
I have a Quantum LTO-7 drive (6-TB tapes) bought many years ago for $3000.
Today I would strongly recommend against buying an LTO-7 drive, as it is obsolete and you risk ending up with a tape collection that becomes unreadable in the future for lack of compatible drives. An LTO drive can read 2 previous generations of tapes, e.g. an LTO-9 drive can read LTO-7 and LTO-8 tapes. LTO-10 drives, when they appear in a few years, will no longer be able to read LTO-7 tapes.
The current standard is LTO-9 (18 TB tapes). If you write LTO-9 tapes today, they will remain readable by LTO-11 drives, whenever those appear.
Unfortunately, LTO-9 is a rather new standard and the tape drives, at least for now, are even more expensive.
For instance, looking right now on Newegg, I see a HPE LTO-9 tape drive for $4750.
Perhaps it could be found somewhat cheaper elsewhere, but I doubt that it is possible to find a LTO-9 tape drive anywhere for less than $4500.
If you need to store at least 200 TB of data, you may recover the cost of the tape drive from the difference in price between LTO-9 cartridges and HDDs.
Otherwise, you may choose to use a tape drive for peace of mind, because the chance of data in cold storage on tapes becoming corrupt is far lower than if it were stored on HDDs.
I have stored data on HDDs for many years, and the only thing that kept me from losing that data was that I always duplicated the HDDs (and I kept content hashes of all files for corruption detection, as the HDD controller did not always report errors for corrupted blocks). After many years, almost all the HDDs had some corrupted blocks, but the corrupted blocks were not in the same positions on the duplicate HDDs, which allowed the data to be recovered.
A beautiful pop-up book that gives a glimpse of how type is designed. (https://www.kellianderson.com/books/alphabetinmotion.html)
In this age of generative AI and ebooks, this book is a very pleasant surprise. I loved the whole feel of the book, interactivity, ...
- signed URLs, in case you want session-based file downloads (see the sketch at the end of this comment)
- public files by default, e.g. for a static site
You can also map a domain (or sub-domain) to CloudFront with a CNAME record and serve the files via your own domain.
CloudFront distributions are also CDN-based, so you serve files from a location close to the user, which speeds up your site.
For low to mid-range traffic, CloudFront with S3 is cheaper, as CloudFront's network egress cost is lower. For large amounts of traffic, CloudFront costs can balloon very fast, but in those scenarios S3 costs are prohibitive too!
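For the session-based download case, a minimal sketch with the AWS CLI (bucket, key and expiry are placeholders; if you serve through CloudFront itself, it has its own signed-URL mechanism based on key pairs):

# generate a temporary (1 hour) download link for a private S3 object
aws s3 presign s3://my-bucket/private/report.pdf --expires-in 3600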
It also lets you generate Ansible or bash remediation scripts for execution (see the example after the commands below).
If you install OpenSCAP, it comes with built-in policies, but they're always out of sync with the current version of Ubuntu, which is frustrating the first time around.
For every version of Ubuntu the default policies do not work; e.g. for Ubuntu 24.04 I need to download and build the content myself:
git clone https://github.com/complianceascode/content.git
cd content/
./build_product ubuntu2404
cd ..
# Then run either of the following commands:
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis_level1_server --results arf1.xml --report report1.html content/build/ssg-ubuntu2404-ds.xml
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis_level2_server --results arf2.xml --report report2.html content/build/ssg-ubuntu2404-ds.xml
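And for the remediation scripts mentioned earlier, something along these lines should work against the same data stream (same profile as above; the output file names are arbitrary):

# Generate remediation as an Ansible playbook or a bash script
oscap xccdf generate fix --profile xccdf_org.ssgproject.content_profile_cis_level1_server --fix-type ansible --output cis1-remediation.yml content/build/ssg-ubuntu2404-ds.xml
oscap xccdf generate fix --profile xccdf_org.ssgproject.content_profile_cis_level1_server --fix-type bash --output cis1-remediation.sh content/build/ssg-ubuntu2404-ds.xml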
We were in the same boat as you at the start. We tried multiple iterations of different processes, but in the end what worked for us was the following:
1. Use a bar code scanner to scan a batch of books into a text file.
2. Wrote a small script that used the Amazon API (this was when Amazon still had a public API) and Goodreads (this was before the Amazon acquisition) (do you see the pattern? :-) to look up the books. I heuristically merged the book data. We manually verified it and then pushed it into a SQLite DB. (A rough sketch of this lookup step is after the list.)
3. We spent weeks doing this; every day one of us spent at least an hour scanning, verifying, and importing. Within a couple of months we were done.
4. After that I exported it to Excel so that we had multiple copies (Google Drive and Dropbox).
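Not the original script (the public Amazon and Goodreads APIs are long gone), but a rough sketch of the same lookup step using Open Library's public ISBN endpoint and a SQLite DB (the endpoint choice, table layout and file names here are just placeholders):

# scanned_isbns.txt: one ISBN per line from the barcode scanner
sqlite3 books.db 'CREATE TABLE IF NOT EXISTS books (isbn TEXT PRIMARY KEY, title TEXT, location TEXT);'
while read -r isbn; do
  # Open Library redirects /isbn/<n>.json to the book record, hence -L
  title=$(curl -sL "https://openlibrary.org/isbn/${isbn}.json" | jq -r '.title // empty')
  if [ -n "$title" ]; then
    safe=$(printf '%s' "$title" | sed "s/'/''/g")   # escape quotes for SQL
    sqlite3 books.db "INSERT OR IGNORE INTO books VALUES ('$isbn', '$safe', 'box-01');"
  else
    echo "no match for $isbn, enter by hand" >&2
  fi
done < scanned_isbns.txt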
After that we tried various tools: Calibre, a custom application I wrote, etc. But maintaining that catalogue and software was painful.
Challenges we faced:
- Some ISBNs were not available.
- A mix of ISBN-13 and ISBN-10, but that was handled in the script.
- Older books do not have barcodes, or worse, have barcodes that are not ISBNs at all (the ISBN was introduced around 1970). For these I entered the title and author and then used the search API to fill in the rest of the data.
- Some books stayed in their boxes, but they were scanned and put back, so at least their location was known!
It has a built-in barcode scanner that uses your phone's camera, which we like. But many times it pulls in the wrong book. It's easy enough to correct, though, as the search functionality works really well.
Overall, what really worked for us:
- Setting aside some time every day to scan the books. Half an hour to an hour a day was doable and did not feel overwhelming, whereas the project as a whole looked very daunting. Over time we made substantial progress.
- Now whenever we get books, the first thing we do is scan them. My partner is anal about it (thankfully).
What still does not work for us:
- Re-arranging books screws up the database. Now the locations are all wrong :-(
- When we were giving away books, we had to export the data to Excel and share it via Google Drive so people could claim books for themselves. We packed them, but people never turned up to pick them up, so those books are still in boxes. We need to figure out a way to release them and notify everyone that these books are available again.
Hopefully this inspires you to get those books out of the boxes at least once :-)