Hi everyone! I'm trying to create a version control system that solves some of the problems that Git and other version control software has when working in a team. Let me know if you have any feedback!
One thing that could be useful for this is writing code locally and testing it on a remote server. I assume the simple act of editing reflects the change on the other machine? No need to commit, push, then pull to do a one-liner experimental change
Thanks for checking this out! Yep that's where I'm planning on going with this. I think this could even be extended to modifying and deploying code without having the push/commit/pull process that we have now.
I would encourage you to explore this more thoroughly. I frequently find myself working with three machines at the same time: my Windows box, because I need speech recognition in conjunction with Copilot and VS Code; my Linux laptop as a local endpoint; and a VM endpoint somewhere (the old man says, standing in the yard shaking his fist at the cloud).
Far too often I find myself making all my changes on my Windows box, or copying them to one destination, the other, or both. Then bug fixing on the local machine as part of my test cycle. Then I have a few moments of confusion when I try to remember what I changed where and how to pull them all back over to my Windows box.
So yeah, something like your proposed tool would be quite useful.
It’s pretty trivial to set up Syncthing for your working directory and have it sync between all 3 of your machines. You don’t need version control for that, just file system syncing.
I did use Unison but I lost a lot of time merging changes. Then I switched to Syncthing, which also works on my Android devices. Syncthing has its share of pain points, most notably that out-of-sync/override-changes button that many people don't understand. It's a very bad piece of UX and it makes people wonder if they are about to lose some data on one or both sides of the sync. They should do without it, at least when the sync is not bidirectional.
I never merge changes. I run my unison with `-batch -prefer newer`, so I never get queried. In the very, very rare case (maybe once every few years) where I feel I lost a file to Unison, I have to retrieve a backup. I would assume that in a multi-user setup this could be trickier.
1) I recall a comment a while back from the xdelta author about how most history tools store a sequence of changes from nothing (as it sounds like yours does), but this isn't ideal since usually you want a recent version, which then needs to be produced from a potentially long sequence of changes. Caching various revisions could help in your case, but that is less than ideal when the files are large.
2) It isn't clear how you go from live editing type thing to revision control. If you haven't yet, you might want to look at the Aegis paper[1] (it is short) for a quick overview of the framework it used to form a revision control system from a collection of more specific tools that could be customized. That type of framework might go well with your rapid branch synchronization. Sadly the author, Peter Miller[2], died in 2014. As far as I know no one picked up development (it isn't the easiest name to search for :/). The User Guide [3] from the documentation page [4] gives a bit more info about the model.
1) Agreed, caching might not be great for large files. I don't think it's the only way, though. There's another approach, which I haven't explored yet: storing a pointer to the last known Data block in each Range block. That means you can jump directly to the actual data without going through the regeneration or caching process. The space cost would be pretty low, one int per block, and it would mean you don't need to iterate through every change. Still figuring this out though, and caching would be the best option if it doesn't work.
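Rough shape of the idea in Go (made-up names, not the actual Jamsync types):

```go
package sketch

// A Range block normally says "reuse these bytes from an earlier
// version"; the extra field lets a reader jump straight to the change
// that last wrote the underlying bytes instead of replaying history.
type DataBlock struct {
	ChangeID uint64 // the change that introduced these bytes
	Data     []byte
}

type RangeBlock struct {
	Offset uint64 // where the reused bytes start
	Length uint64 // how many bytes are reused
	// LastData is the "one int per block": the ChangeID of the last
	// known Data block that actually holds the referenced bytes.
	LastData uint64
}
```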
2) Yes I haven't totally figured this out yet. I'll give this a look, thanks!
There seem to be a few projects that try to be a live, collaborative git. Perhaps it’s just me, but I don’t necessarily think I’d want this for my main system. Maybe a short term collaboration/pair programming session, but not in general. I like how git is not immediate. I like that I can take time before sharing my code to clean it up. The issues that a tool like this purports to solve are not issues to me. Live collaboration is a great idea, but maybe not as a version control system. But hey if other people get use out of it, why not!
What I always wanted to have is a tool that syncs my uncommitted changes of git repos between all my devices, so that I can switch from the desktop to the laptop without committing, pushing and pulling. Right where I stopped. It should also sync not-yet-pushed commits, since I may want to rebase/edit before pushing.
This was exactly my use case for building gut, https://github.com/tillberg/gut. It's a daemon that wraps git, auto-committing changes for a tree and bidirectionally syncing them between N computers. The wrapped git is autorenamed from git to gut so that it can commit git folders. The gut tools are usable for exploring/manipulating history of this meta-repo, too. I saved myself from disaster a couple times with `gut checkout ...`.
Nowadays I use Syncthing for the same purpose (I learned about Syncthing when I did a Show HN for gut). Dropbox works reasonably well, too.
100x yes. I agree with parent that pushing live changes is way too intrusive, but a synced working/staging directory could be incredibly helpful.
I don’t see any fundamental reason why git, or even third-party tooling for git, couldn’t support this use case using something like private, ephemeral, auto-synced branches.
Here's the roundabout way I sync uncommitted changes. Push everything to a new tmp branch. Then on the other system, fetch and cherry-pick the last commit id, then reset HEAD~1. Boom, uncommitted changes synced.
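Scripted, the sending side looks something like this (a sketch using Go's os/exec, with a made-up branch name; the raw git commands work just as well typed by hand):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// git runs one git command, echoing its output.
func git(args ...string) {
	cmd := exec.Command("git", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("git %v: %v", args, err)
	}
}

func main() {
	// Commit everything to a throwaway branch and push it.
	git("checkout", "-b", "tmp-sync")
	git("add", "-A")
	git("commit", "-m", "wip: sync uncommitted changes")
	git("push", "-f", "origin", "tmp-sync")

	// Then on the other system:
	//   git fetch origin
	//   git cherry-pick origin/tmp-sync   (apply the tip commit)
	//   git reset HEAD~1                  (leave the changes uncommitted)
}
```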
Appreciate the feedback! Will keep this use case in mind, this could definitely work as a layer on top of Git when people need to pair program or preview code.
3. Invites didn't go out immediately, just added people to a pool of potential future invites
A product that required the network effect of a large connected userbase to succeed only had random individuals with access. Literally no one I invited and would have used it with got in, and I was in one of the first couple batches of public users to get access. A spectacular self-own on Google's part with that rollout.
I never understood the real use case of Google Wave. It had lots of features without a clear goal. Google Docs seemed enough for all but the curious.
I don't understand the use case. Very large file support aside, are you saying you've observed people looking at git and saying "I wish I could use this like I use a chat server?"
Haha, appreciate you taking a look. One use case is to prevent merge conflicts, especially when working on a fast-moving project. Merge conflicts generally occur when you make changes on top of old code. Jamsync constantly "rebases" your changes on top of new changes so that your code will always be up to date. Some merge conflicts will obviously still occur, but far less often with this approach.
Multiple times I've been on teams with spray and pray type developers who commit more frequently. Begetting more chaos and rework, of course. But great for apparent "velocity".
Whereas I tend to work slow and deliberate. Doing old timey stuff like unit tests and verifying my code works before committing. Which then puts the onus of resolving merge conflicts onto me. Which then delays my commits. Which means even more merge conflicts.
A vicious cycle.
It'd be cool if better tooling, such as jamsync, helped mitigate the penalty for doing good work.
This could work for the pulling side but really what jamsync does is both pull and push. Having changes that are always present in the remote means that merge conflicts are less likely. If you were to do this with Git, you would also have to automate the git add, commit, and push steps which would ruin the log in addition to being pretty slow.
Hi! I decided to link to the About page since this content was better suited for HN. You can read more about the problems Jamsync plans to solve on the homepage at jamsync.dev/ . Thanks for the suggestion!
I think a lot of folks in this thread are forgetting the first day they spent using git. It's kind of like asking why someone would create a text editor in a world where vim exists, since vim can already do everything you need. Sure, but it takes months to become proficient with and years to master.
Years ago I worked at a company that had a team of front end developers that used Dropbox for version control instead of git. Conflicts were prevented by the default "this file has changed on disk, are you sure you want to save?" warning in their IDEs. It worked incredibly well for that team, and they never suffered a single gitastrophe.
Thanks for bringing this up! I somehow missed this. Yes I plan on implementing a CDC-based approach in the future but for now I'm just using a naive fixed chunk size.
Yes, versioning large files is certainly one use case that Jamsync solves (and was the original problem that caused me to develop this). More use cases are on the homepage, but I think that preventing merge conflicts could be interesting. A typical source of merge conflicts is not pulling/rebasing frequently enough, which Jamsync addresses by pulling in changes as they happen on the current branch and rebasing branches as changes happen on the "main" branch. This means that branches are constantly being merged, which reduces the risk that you base your changes on old code.
It's a bit more complicated for 3D CAD (I'm talking about the mechanical engineering type here) than just rebasing frequently. I'm going to write a long comment because I hope someone will solve my problem...
I'm looking for a good tool for 3D CAD version control because the available ones are all expensive and focus on features for managers, not engineers (like approval workflows). I was excited by Jamsync, but I don't think this will work well for a few reasons. First, CAD software generally modifies every file you open, unless it is read-only; even rotating a model causes a file change. Second, it's really common for multiple engineers to work in the same assembly, even if they are working in different parts. Third, merging the CAD files, which are binary, isn't usually possible even when using the same vendor for both CAD and version control. So all you can do is select a version to keep; you can't merge. It's annoying because often one person's real work is overwritten just because someone else had a stale copy of an assembly file and saved over the real work with something trivial like a view rotation. And there's no easy way to even compare two versions, so the most common scenario is redoing the work. CAD generally won't continuously load changes like in the example on your website, because engineers have assemblies open for the entire day, with all or most files in a project loaded into memory once and locked on disk.
All this means that one feature is 100% required: (a) explicit checkout and checkin of the files you want to work on, allowed by only one user at a time. It's almost a requirement (b) to do this with a plugin in the CAD software itself, because the hierarchies within CAD assembly files are used to select, open, and otherwise work with files, and because CAD files often just have names from a consecutive numbering scheme. It's quite painful to have to use a different interface to locate and check out/check in files.
If you can offer these two things you may have a decent market among small companies using CAD. Self-hosting (c) is a requirement for many industries. A file viewer (d) is nice to have but not at all required, and every other piece of software attempts to add JIRA-type features (e), which frankly just become another place, other than the correct place, to document things.
The only software I've seen which really hit the mark was grabcad workbench, which is unfortunately being discontinued[1]. It did just a, b, and d and it was free too!
The offerings from actual CAD vendors are way too heavy, need dedicated roles to maintain them, and cost more than CAD licenses. Kenesto[2] does a, b, d and Bild[3] does a, d only. Kenesto looks good but I've heard complaints that it feels barely maintained. Bild looked promising but it's slow, and not only is it lacking b, a plugin for CAD, you have to use its webview application, which offers no search, preview or sorting options, so you have to spend your time scrolling through a list of files. And of course, both offer way too many unwanted features. Everyone selling version control software for CAD doesn't seem to realize that we don't want them to compete with Siemens on features. We want a few features done well, and we already have tools and workflows for tickets, approvals, commenting, etc.
It's not yet clear to me whether Jamsync has the concept of commits and tracks the history of locally made changes. (Rsync obviously hasn't and doesn't.)
Does Jamsync have 3-way merging (e.g. of local and remote changes on top of a shared ancestor)?
I'm still figuring out semantics, but currently a "commit" is made every time you make a change to a file which is synced to the remote. Every time you run `jam`, you push a commit. Or if you leave `jam` running, commits will be made each time you save. 3-way merging will be possible in the future, but will be a little different than most VCS since there is no history stored locally.
So how do you communicate with other developers the intention of a commit or set of commits, like you would with commit messages in git? How do you rewrite/cleanup history?
This is still in development but my plan is to essentially have "branches" which are merged and create a single "commit" on the main branch. You'll have the option to add information to a branch and a merge message if you would like, but it won't be required. I don't have any plans for rewriting/cleaning up history right now since I think managing code history is pretty low-value for most teams. In my experience, it's far better to add another commit than to try to change history. Would be curious to know your thoughts though!
This sounds great. Your VCS should fade into the background, rather than be front and center like git. Nobody should be breaking their flow, staging, committing, pushing as an ersatz backup, and then going back to coding. Teams shouldn't be bikeshedding on merge vs rebase, many commits vs one commit, etc. They should just be coding.
However I can see where a sort of "checkpoint" feature before performing a "hold my beer" change makes sense. A commit in some sense. `jam checkpoint` ... code ... `jam revert`
You should just be coding, and when the thing you are coding is done, just do a single "squash merge" (in git speak) to the main branch.
> This sounds great. Your VCS should fade into the background, rather than be front and center like git...Teams shouldn't be bikeshedding on merge vs rebase, many commits vs one commit, etc. They should just be coding.
you're being too hard on git. large teams sharing code in previous SCCS/RCS-era systems (haha) would frequently encounter locks and not be able to "just code" but would need to negotiate with teammates or complain to committees. The features of git allow you to "just code" in a group environment, but it moves conflicts to a different link in the chain. The interplay between changes sprinkled across a codebase needs to be negotiated in any system, but git generally allows you to "just code" till it makes sense or is convenient to negotiate
I've been around long enough to know git is so much better than what came before, and I appreciate it. But even so, git yak shaving still consumes more developer time than it should.
that's fair. I just didn't like seeing git sold down the river for the feature it does better than what was before :)
maybe "git with wimpy locks" would be a nice feature. if you're editing files, the center node could keep track of that, and if I go to edit one of those files, I could be told that you've already edited it. I could forge ahead ("break the lock" and you would get notice) or I could ask you. It doesn't resolve all problem between prickly developers with different styles of workflow, but it would increase communication at little cost.
I don't think I would want this for code, but it sounds really interesting for a shared drive. I would love to version control the dir where my team dumps all of its docs and spreadsheets and everything else.
Thanks for the feedback! What about a version control system would be valuable for storing your documents? I'm curious why you couldn't use something like Google Drive/Dropbox.
The rsync paper https://www.andrew.cmu.edu/course/15-749/READINGS/required/c... is better than me at explaining this, but the rolling hash is the key to solving the misalignment problem that you describe. Essentially, the rolling hash enables you to detect when a block has been misaligned (bytes added at the beginning of the file, for example). There's no way to cause expensive resyncs in this case, since it rolls over every byte in the file and reuses blocks at any alignment, even with data inserted in between.
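The core of the weak checksum is tiny. A simplified sketch (my names, not the exact Jamsync code):

```go
package sketch

// Simplified rsync-style weak checksum. uint16 arithmetic gives the
// paper's mod-2^16 behavior for free.

// weakHash computes the two checksum halves over an initial window.
func weakHash(window []byte) (a, b uint16) {
	n := len(window)
	for i, x := range window {
		a += uint16(x)
		b += uint16(n-i) * uint16(x)
	}
	return a, b
}

// roll slides the window one byte to the right: out leaves on the
// left, in enters on the right, n is the window length. O(1) per
// byte, which is why checking for matching blocks at every offset --
// and so detecting shifted data -- stays cheap.
func roll(a, b uint16, out, in byte, n int) (uint16, uint16) {
	a += uint16(in) - uint16(out)
	b += a - uint16(n)*uint16(out)
	return a, b
}
```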
An option for mounting the files with fuse and/or doing a sparse checkout would probably be good. Otherwise your working copy may use a lot of disk space with redundant data.
Thanks! Yeah planning on adding the ability to mount specific directories so people don't have to pull down large assets they don't need, for example in gamedev or machine learning applications.
Yeah this project is still pretty early in development. The main algorithm described on the site is implemented but there's a lot missing that developers would expect from a system like this. Trying to make a version control system and hosting platform in my free time while working a full-time job is not exactly an easy task haha. I plan on adding more features in the next few months.
Btw, Pijul (https://pijul.org) already does this for binary files and large files. But it also has other advantages such as easier workflows, provably correct merge and cherry-picking.
I wonder whether this system works for zip-compressed files, such as large .docx, .npz, etc. These are common large files, but it seems that rsync's delta algorithm can't do much with compressed data.
Not currently, but it would definitely be interesting to have an unzip done automatically based on the file type. Then you would be able to track the contents efficiently, rather than having large binary blobs with large diffs. I haven't heard of other version control systems out there doing this.
Sounds very useful to me. rsync is my go to tool - I use it to manually maintain versions of my workstation as I find it the most reliable and usable among all options I've tried.
Yeah I agree that live editing is probably a more appropriate name right now, but there are already so many tools that do that, and they haven't really made much impact. I don't see large value in the live editing aspect when it comes to developing and deploying faster, even though this system will support it. Live editing is most valuable for preventing merge conflicts and for working on several machines. Definitely open to your thoughts though.
There are some similarities but Fossil still has a push/pull/commit flow, although it does provide syncing features to automatically pull commits. Also, I don't think it has great support for large files.
Ultimately, the priorities of Fossil are going to be different than Jamsync, since Jamsync is not decentralized. Being centralized means that we'll be able to do some different things, like live file editing, but with some downsides related to replication and distribution.
so is this kind of a better replacement for subversion? thinking about large file support. i'm not aware of major problems that git and/or hg have when working with a team.
Yes, that's one way to think about it. You could probably say it's closer to Google Drive than a VCS right now but my idea is to make a VCS that's more collaborative than the options currently available. Some people will have no issues with the current VCS that we have now but many developers that I've talked to have expressed frustration with the general user experience of VCSs and when merge conflicts occur.
If two users are editing the same file, it looks like you'll see my edits in your file in near real-time. This is being used to do away with merging/merge conflicts. A few observations:
* I think this makes two devs changing the same file basically unusable, as you're changing the file, and potentially the same line, simultaneously, which could cause me to overwrite your changes without realising
* It still presumably doesn't get away from the fact that I can edit the file with the jam command not running, then run it afterwards, at which point it'll just overwrite my local changes onto the remote without merging
* What does it do if my editor has changes in the buffer that aren't saved and an update comes in on the same line, and then I save? Will my now out-of-date buffer overwrite the server version?
I don't think you can do away with merging and merge conflicts; they're a vital part of source control that lets two devs work on the same file simultaneously.
How often does the thing sync? None of the demos really show this. The demos showed the thing running very often, i.e. every few seconds. Does that mean if I'm making large changes to a file it automatically uploads every time I save the file? I definitely do not want that, as typically when editing code I will save something that is in a partially broken/incomplete state. It also precludes the main CI/CD workflows, which normally trigger from a dev uploading something they think will work.
This just gives a link to a .zip, which is presumably out of date now. I presume you are trying to self-host this, but since you haven't ironed out the details it would be better to use a code hosting tool like GitHub/GitLab etc. for the time being.
You are allowing me to register/login to an "account" on something with basically no terms of service or privacy policy. Not only are you not GDPR compliant but, more importantly, you are not limiting your liability! The fact that you've stuck this up without limiting your liability, so that you are now partially liable for whatever stuff people put up on here, baffles me. Seriously, for your own good, please get some terms of service with at least a limitation of liability.
You'll also have to sort out the problem of copyright, as I wouldn't push code to a service that attempts to claim any copyright over it. I assume you're not doing that, but I would need it in writing.
Given the frequency of commits that this solution will introduce, you'll need to offer way more than 5GB of hosting.
How will that version slider work when I have 500k commits? Even squashing branches, the repo I'm working on at the moment has 10k commits. This is a team of about 30-40 devs and each day brings about 15ish PRs. With Jamsync syncing every save I could easily see this going up to millions of commits.
Hi! This is the first time I'm posting publicly about this in-development project so thanks for the feedback.
A conflict will cause a .jamdiff file with the remote changes to be written out on the next sync, and I'm planning on adding branches in the next update, which will make how this works clearer. There's still a concept of parallel editing, since features cannot be developed/tested simultaneously without breaking. Also, anytime there's a conflict, we can just make a new branch and ask the developer to merge or keep working on the new branch.
The client will watch your file system for changes and hold a gRPC stream open for remote changes. If you don't want to sync your changes, you don't have to leave the command running. CI/CD support will come later.
I'll be compliant with regulations soon, but I'm not really expecting people to use this yet. I mostly wanted to release the source and see what people thought of the project. I am using GDPR compliant Plausible analytics if that makes you feel better!
Thanks for the suggestion on the source, the zip file is up-to-date and is part of my build process. I might host on Github in the future but really wanted to make this the source of truth. My goal of open-sourcing is not to get contributors, but to give back to people who want to view the code and self host.
> Also, anytime there's a conflict, we can just make a new branch and ask the developer to merge or keep working on the new branch.
Ok so this tool doesn't solve the problem of merge conflicts.
> The client will watch your file system for changes and hold a gRPC stream open for remote changes. If you don't want to sync your changes, you don't have to leave the command running. CI/CD support will come later.
Eek, then the thing isn't consistent. If two of us are using this tool and I keep it open all the time, I'll generate a lot of commits; if you only run the command sporadically, you'll generate significantly fewer.
> I'll be compliant with regulations soon, but I'm not really expecting people to use this yet. I mostly wanted to release the source and see what people thought of the project. I am using GDPR compliant Plausible analytics if that makes you feel better!
I feel nothing, only pain :-). I note that you have missed out my comments on limiting your liability. Since you seem to be deliberately doing that, I'll give you some advice: there are some heinous people online who will use this service in its current form to share some vile shit with each other. At the moment you are liable for that. IANAL.
> Thanks for the suggestion on the source, the zip file is up-to-date and is part of my build process. I might host on Github in the future but really wanted to make this the source of truth. My goal of open-sourcing is not to get contributors, but to give back to people who want to view the code and self host.
Erm, you know I can see the source. You have the code stored in a private repo on GitHub.
Appreciate the feedback! I used the same name for that project but this is actually a completely different project rewritten from scratch and open-source. I've been working on the general idea for a better VCS for some time so I've restarted around 6 times so far to get something that works. Not sure it will be too productive to continue the conversation here but feel free to reach out to me if you would like.
The algorithms it uses are superior to rsync's and git's in a few ways. It falls short on features, especially for software development compared to Git. The motivation is more for personal file storage.
I notice you're using Go and AGPL licensed, so you could borrow any of Got's libraries without issue. (Got is GPL licensed.) Definitely reach out in a GitHub issue.
Sure, Git stores data in a trie. Each file is one blob identified by hash, and directories (called trees in Git) are blobs where each line is a directory entry with a name and the hash of a file or another tree. This means that modifying an object /down/a/long/path/like/this.txt has to create copies of all the trees on the way up. The technical term for this is "write amplification", and in Git it is affected by path length among other things.
Got stores data in a probabilistic tree (GotKV[0]). The number of nodes before you get to data will scale logarithmically with the size of the entire filesystem, not the depth of a specific object.
Then there is the issue of large files. A file in Git is always 1 blob. Syncing a large blob is not easy because if you are interrupted and have to restart, you have lost all your progress. You can't verify the hash of a blob until you have the whole thing. Got has a maximum blob size, so you'll only be buffering <2MB at a time before you can verify that the blob is correct. If a transfer is interrupted, the most you'll have to repeat is one blob's worth, plus any tree nodes above that blob.
Compared to rsync, Got uses variable-size chunks and a faster content-defined chunking algorithm, recently featured here on HN[1]. I haven't thought about whether variable or fixed chunks are better for file transfer, but for version control, the higher chance of convergence is important. It means you get better deduplication.
I've not heard the term "probabilistic tree" and I'm having difficulty pulling up references. I suspect it's implemented by the subpackage ptree[0]. Could you explain what makes probabilistic trees different from hash tables or other similar data structures?
Yep, your link is indeed to the probabilistic tree used in GotKV.
Here "probabilistic" just refers to a way of balancing a tree. Rather than having a set of rules to keep the tree balanced, like with a btree or red-black tree, balancing decisions are made pseudorandomly. The result is that the tree is very likely to be balanced, and is unlikely to be unbalanced.
In the case of GotKV's tree: the entries are stored together in a stream, and for each entry a hash is computed. If that hash is lower than a certain value then the entry is considered a split point, and a tree node is created. So now we have a stream of entries, divided probabilistically into sections. Each section is a tree node. Now take references to those nodes and turn them into entries, and repeat the process, so you have fewer nodes. That continues until you have one node, which is the root.
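In toy form (made-up names and constants, not the real GotKV code):

```go
package toy

import (
	"crypto/sha256"
	"encoding/binary"
	"math"
)

const avgFanout = 16 // expected entries per node (made-up value)

// isSplitPoint marks an entry as a boundary when its hash falls below
// a threshold: pseudorandom, but deterministic for the same data.
func isSplitPoint(entry []byte) bool {
	sum := sha256.Sum256(entry)
	v := binary.BigEndian.Uint64(sum[:8])
	return v < math.MaxUint64/avgFanout // true ~1/avgFanout of the time
}

// buildLevel groups a stream of entries into nodes, closing a node at
// each split point. Run it again on references to those nodes to get
// the next level up, until a single root remains.
func buildLevel(entries [][]byte) [][][]byte {
	var nodes [][][]byte
	var cur [][]byte
	for _, e := range entries {
		cur = append(cur, e)
		if isSplitPoint(e) {
			nodes = append(nodes, cur)
			cur = nil
		}
	}
	if len(cur) > 0 {
		nodes = append(nodes, cur)
	}
	return nodes
}
```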
This technique is very similar to content defined chunking, and some probabilistic trees are implemented using content defined chunking on their record format, rather than a pseudorandom value calculated per entry, as in GotKV.
For those unfamiliar with probabilistic data structures, I highly recommend trying to understand skip-lists first. At least why they are balanced.
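The probabilistic part of a skip list fits in a few lines (a generic sketch, not any particular implementation):

```go
package skiplist

import "math/rand"

// randomLevel picks an element's level by coin flips: ~1/2 of elements
// reach level 1, ~1/4 level 2, and so on. That geometric decay is what
// keeps the structure log-height -- i.e. balanced -- with high
// probability, without any explicit rebalancing rules.
func randomLevel(r *rand.Rand, maxLevel int) int {
	level := 0
	for level < maxLevel && r.Intn(2) == 0 {
		level++
	}
	return level
}
```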
As an aside, one of the neat things about GotKV is that the keys are delta-encoded. Adding or removing a prefix from every key in the tree is a constant time operation. This might be obvious to some of the database folks out there, but it's a fun mind-blower if you haven't encountered the technique before.
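Roughly (a hypothetical representation, not GotKV's actual encoding):

```go
package sketch

// Each node stores only the key bytes shared by everything beneath it,
// so a full key is the concatenation of prefixes along the path from
// the root down to a leaf.
type node struct {
	prefix   []byte  // bytes common to all keys below this node
	children []*node // set on interior nodes
	value    []byte  // set on leaves
}

// addPrefix prepends p to every key in the tree in O(1): wrap the old
// root in a new node that carries the prefix.
func addPrefix(root *node, p []byte) *node {
	return &node{prefix: p, children: []*node{root}}
}
```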
Some, but not all, treaps have a node weight that is updated in a probabilistic fashion. The act of balancing the tree is still deterministic, but the weights of each node are randomized.
I keep trying to find a use for treaps, but haven't had a project that needed one. In particular, the value of a balanced tree is in the consistent cost of lookups for arbitrary elements. But if you are mixing entries that are accessed often with those that are not, having an 8:1 access-time ratio between the two would be a feature, not a bug.
I used a persistent treap for a lock free priority queue (swap in a new root at insertion). It felt nice but to be honest, didn't do a comprehensive comparison to alternate implementation strategies.
Took a look and this is really cool. Will definitely keep this in mind. One major difference, and an area I'm focused on in this project, is hosting. I would argue that there are currently much better VCS options than Git (like yours and Fossil), but the reason these haven't taken off is that GitHub and GitLab offer unmatched hosting and collaboration tools. Would be curious to know your thoughts/approach to this!
My approach to hosting with Got has been to make it easy and secure for users to host from any machine.
INET256 solves that problem nicely. If you have access to an INET256 network, then all you have to do is swap addresses and two Got instances can communicate.
Also, end-to-end encryption is table stakes. Any data that leaves the user needs to be encrypted in transit, and if it hangs around away from the user, at rest.
Nice, I hope you won't get an avalanche of comments here by grumpy old devs that are afraid this will somehow mean they'll have to learn something new.
I think it's great to look into possibilities of doing things in a better way, even if the majority of people think the current way is the only correct way.
I don't think of myself as old, but certainly grumpy and this comment rubbed me the wrong way.
In my time as a dev I've worked with CVS, SVN, Mercurial, and then git. I can confidently say that no dev I've worked with ever kicked up a fuss when they switched, because each iteration brought improvements.
I would however say that modern devs carry with them modern baggage. There are far too many bootcamps churning out devs who say "this is git, everyone uses it, heres the minimum you need to know". These are the devs who will struggle with this sort of change, it would literally change a magical system they don't fully understand for reasons they may not fully understand.
I would have fussed going from Mercurial to git. Mercurial is arguably superior, much like Fossil, but unfortunately mindshare / marketshare is everything.
The layering model of merging in both those systems is superior to the standard git merge (rebase is close though). They also made cherry-picking a trivial operation compared to git.
I willingly switched from Mercurial to Git due to, at the time, Mercurial's lack of in-repo branching and lack of history editing. IMO Mercurial shot themselves in the foot with some of their early decisions.
"no dev I've worked with ever kicked up a fuss when they switched"
Wow. I don't think your experience is the common one. I've been on teams switching to git, and there was always much fussing. Even if the benefits of switching were clear and it was worth it, doesn't mean there won't be pain along the way.
I've seen whole teams kick and scream about migrating, even when their preferred system was taking minutes to do basic operations because the repo was too large.
I love learning new exciting things that I know I’ll use for the rest of my life. C, Vim, Bash, HTML, CSS, JavaScript will all outlive everyone here. It’s a pretty safe bet. I hate the amount of useless knowledge I’ve accumulated for obsolete, unexciting things.
I feel like Git is not the final answer to version control, it had a lot of great ideas, but it’s not even good enough for some things and too complicated for most others. My hope is that it becomes like SVN, mostly legacy stuff, and that we can build something new with the lessons learned from Git. I wish Jamsync luck, it looks interesting and I freaking love rsync.
I guess Git suffers from a similar problem as (La)TeX: it's a truly great prototype which sadly wasn't thrown away to build something better.
Don't get me wrong, I like both TeX and Git _very_ much. (I even co-authored a LaTeX textbook.) I also have a lot of respect for DEK and LT. But they were trailblazers (especially DEK), and so they did a lot of things not knowing their true impact – and sometimes this means these decisions were far from optimal. (The case of TeX is even worse because the machines of the time were very limited compared to what we have today.)
That's a very interesting take. Both were also developed mostly by one person to do one job: Knuth for his “The Art of Computer Programming” masterpiece and Linus for the kernel. There may be similarities that arise from such histories and constraints.
Still in the early stages of development with code changing rapidly. Not sure if it would be the best idea to get other people working on things at this stage, but I'm open to suggestions!
My suggestion: keep doing what you’re comfortable with. Having source available is better than not, full stop - and should be commended. Don’t succumb to peer pressure. If you want to make a “community” project do it on your own terms.