GitHub removes 5 years of One Hour One Life designer's work (twitter.com/jasonrohrer)
310 points by sharjeelsayed on June 5, 2019 | 139 comments


Jason Rohrer [1] is the author of Passage [2].

His latest game, "One Hour One Life" [3], is a massively multiplayer online game where your character has only one hour of life. You start off as a baby, dependent on others for shelter and heat before you're able to be autonomous. Eventually you die of old age, but your impact on the world/server persists.

I recommend that anyone interested purchase the game [4] or at least watch a video stream of gameplay [5].

When I purchased the game, the source code came with it and as far as I could tell was dedicated to the public domain. This was client code so I'm not sure if the server code is released anywhere (maybe this was what was taken down at GitHub).

[1] https://en.wikipedia.org/wiki/Jason_Rohrer

[2] https://en.wikipedia.org/wiki/Passage_(video_game)

[3] https://en.wikipedia.org/wiki/One_Hour_One_Life

[4] http://onehouronelife.com/

[5] https://www.youtube.com/watch?v=Hu7kXKuShks&list=PLcA1QDRup-...


He's also idiosyncratic in his approach to the rest of his life. When he practiced ‘simple living,’ he had (or may still have?) a small wild garden with uncut grass in his front yard, where he of course walked barefoot and his children played naked (iirc). He was taken to court by neighbors over the grass, and appeared there with a tome of paper citing local laws and research showing that cut grass exudes harmful substances. Whereupon he was left to do as he pleased.

The article on Rohrer, which included this anecdote, theorized that his court appearance proved his gardening choices to be a matter of principle rather than mere negligence, thus changing the neighbors' perception of him.

Meanwhile, his ‘Passage’ is in MoMA's initial video game collection, and he's had a solo exhibition at the Davis Museum.


It would seem that he has some idiosyncratic neighbors, then. But they are not alone; a certain number of people are strangely obsessed with their neighbors' grass. My kids loved the dandelions in my grass and I let them grow, but then I was regularly bullied by the old lady next door, who had a "perfect" French-style garden and looked at us as if we were some kind of savages. I actually found this behavior rather funny.


I don't care what my neighbors do; however, cutting long grass helps prevent tick and flea infestations in my area, so it does have a purpose. The ticks especially enjoy the long grass. I am, however, working towards planting more permaculture in my yard so that there's less grass to cut. The grass provides an excellent ground cover and builds up the soil in the meantime.


> cutting long grass helps prevent tick and flea infestations in my area so it does have a purpose

Not to mention helping to reduce the risk of rodent problems. Pest control and fire risk are the reasons why, in my city, you are legally required to keep your jungle down to a certain level.


Dandelions are fine, but for the sake of your neighbours it is kind to remove the heads before they seed. Having to continuously hand-weed myriad young dandelion plants can be a pain, especially, I imagine, if you are elderly.

I have an allotment, and you'll certainly get frowned at if you let your dandelions seed all over a neighbouring plot.


Well, those are things to consider, but the details depend on the country and the particular neighbourhood. Fortunately there are also people who prefer meadows to "perfect" green deserts.

I kinda enjoy both, and I've found the hybrid model works best for me and my family (no complaints from the neighbours yet ;).


> Fortunately there are also people who prefer meadows to "perfect" green deserts.

And fortunately it's even possible (though pretty hard) to 'turn' people.

A couple of years ago I slowly started explaining in detail why pretty much anything is better for nature than a short lawn consisting of only grass, and also explaining that the idea of such a lawn as the pinnacle of gardening is purely a cultural and economic thing, without much real benefit, just costing the lawn's owners money. This year my neighbour proudly said he would not be mowing a small piece of his fairly huge lawn because he wanted to see more bees and butterflies (which are abundant in my own meadows), and he might not try to remove the moss this year. So now we're both happier than before. And my neighbour will spend less time and money. Sometimes life is simple yet good.


I quite happily have a meadow lawn. I do, however, avoid cranesbill and dandelion in it because the seeds scatter so widely and are an annoyance to neighbours. I also have lots of rotting logs and a wildlife pond with native wetland plants. These don't annoy anyone.


In such situations I tend to go onto the meta level and ask whether they realize what they are doing at that moment: looking perfect is more important to them than acting like a perfect neighbour.

So something like: “I always thought of you as a good neighbour, because of your beautiful garden, but is complaining about your neighbour's garden something a good neighbour would do?”

Usually they never bother you again then.


That particular phrasing comes across as a mite hypocritical - if you admit that having a well-kept garden makes one a good neighbor, then you are admitting that your unkempt garden makes you a bad neighbor.


> if you admit that having a well-kept garden makes one a good neighbor, then you are admitting that your unkempt garden makes you a bad neighbor.

No, you aren't. https://en.wikipedia.org/wiki/Affirming_the_consequent


This isn't formal logic - it's basic reciprocal socialization. Humans believe in holding everyone to similar standards as a basic principle ("fairness"). You are simultaneously reaffirming your neighbor's standards for themselves, and rejecting them for yourself. This will be seen as hypocritical, and in the real world, if you try to defend this behavior by citing the Wikipedia article on "Affirming the consequent", you will rapidly find yourself on your neighbor's mental "pedantic asshole" list.


TL;DR Yeah, but I don't care.

> You are simultaneously reaffirming your neighbor's standards for themselves, and rejecting them for yourself.

Yes, because we are complex, subjective, multidimensional humans, and we can judge each other based on different referential systems. If I don't care about your subjective perspective, I don't have to prove myself in your eyes. And if I don't care what your referential system is then I'm only a hypocrite in your eyes.

- You have a clean yard, you must be a good person. I have a messy yard, but I like it / I don't care. I can still be a good person for other reasons, but I don't care to be that person in your eyes. I do care that you're pestering me with comments, so please stop.


1. That's not how the implication operator works

2. You don't necessarily have to agree with the premise, but you're proposing it because you think the other person agrees with it.


Of course, there's a good chance the person you're talking to will not see it that way, instead making the same mistake dTal did.


I wouldn't call it a mistake. It definitely comes across as hypocritical, whether or not it "technically" is. You're essentially saying "I recognize this trait as valuable, however I reject your right to judge me on it".



I would guess that the neighbours mostly want to avoid the appearance of negligence in their neighbourhood, which might impact house prices.


What a shitty neighborhood, when the main concern in the lives of your neighbors is the price of the houses they already own!


Dandelions used to be both cultivated and consumed; it is only in modern times that we've decided it's a weed.


Yet, the appropriate context to judge the contemporary dandelions is "modern times."


I have often purchased dandelion greens at my local farmer’s market - they are great sautéed!


Be sure to harvest when they're new. They turn tough and bitter with age.

Like us.


Or worse, bitter and weak.


> cut grass exudes harmful substances

Intriguing “fact” if true; I can’t reconcile this with so many animals eating grass. They’re both directly cutting it in their mouths, and also inhaling the volatiles from the part they leave connected to the plant.

Is there some problem common to all grazers? Something like the way koalas are proposed to be as languorous as they are because the eucalyptus leaves they eat are acting in a drug-like manner on them?

Or, alternately, is there some adaptation all grazers have in their bodily response to chemical volatiles released by the cutting of grasses, that other animal species don’t have?


It's actually an outstanding problem with feeding ruminants. Pasture-raised animals tend to forage in ways that avoid giving them problems, avoiding over-grazing through taste and foraging patterns, but in a small contained yard of constantly cut grass the toxins build up because the plant responds as if it is being over-grazed.

https://doi.org/10.1016/S0167-7799(00)88995-7


Michael Pollan actually talks about this in his book Second Nature. His father never cut his lawn and the neighbors had the same reaction, but his dad fought back as well. Really cool book if you want to read about gardening and how society looks at nature, especially the lawn and garden.

[1] https://www.amazon.com/Second-Nature-Gardeners-Michael-Polla...


> as far as I could tell was dedicated to the public domain

I recommend everyone to read this regarding that matter:

https://www.reddit.com/r/gamedev/comments/ay0w64/one_hour_on...

I don't think Jason really understood what public domain means when he released the game into public domain. Once people started using his work as PD work, he started to complain, file DMCA requests, etc.


Very interesting, thanks!

Just to be clear, it looks like Jason Rohrer is still very much committed to putting his work in the public domain and, as far as I can tell, has been consistent about keeping the work he's dedicated to the public domain in the public domain.

What looks to be happening is more of a trademark issue. I think Jason Rohrer might be backpedaling on that. It looks like he clearly indicated to people making a port that the "One Hour One Life" name was also in the public domain and could be used, but that they shouldn't, essentially making an ethical argument. When the similarly named iOS port ("One Hour One Life for iOS") became more popular than his released game, that's when he started to push back. [1]

It looks like a big point of contention for Jason Rohrer is brand confusion, as many people don't realize the iOS port of "One Hour One Life" has nothing to do with him. I sympathize with him, but I wish he had taken a firmer stance on his trademark.

Free/libre/open source organizations have understood this and use trademark to brand themselves and make sure others don't dilute that brand without permission. This is how the Open Source Hardware Association essentially does their certification and why you can't use "Free Software Foundation" in your open source project's name without permission.

[1] https://onehouronelife.com/forums/viewtopic.php?id=5479


Then he probably shouldn't have released his content with the following license:

"This work is not copyrighted.

Do whatever you want with it, absolutely no restrictions, and no permission necessary."

If you care about your fame, don't pick a license that literally states you are happy for people to do whatever they want.

I have no sympathy for somebody who writes a license himself (what's wrong with MIT?), reassures people that they're allowed to base their livelihood off his work, then changes his mind once they start making more money than him.

He's of course welcome to have whatever restrictions he wants on his work. But don't claim to have some moral high ground when you're litigating against people who trusted you to be honest. At best he's doing it to satisfy his ego, at worst for money.


From his Twitter, it appears he has recently filed a trademark application for OHOL.



On the contrary, from the quote it seems he has a crystal clear understanding of copyright and public domain—not too surprisingly, since his adherence to these principles dates back at least fifteen years (http://hcsoftware.sourceforge.net/jason-rohrer/freeDistribut...)

He's appealing to ethics, not the law. He says right there that his objection is to the naming of forks. If that looks to you like a violation of the public domain promise, you must think the name constitutes the body of the work.

I'm not sure what to make of the DMCA filings, though, since those should obviously rely on copyright in the first place.

From his twitter, it seems he's also convinced by now that trademark law provides the perfect solution for this problem, and has apparently filed an application for a trademark on the OHOL name.


<Big corporation> removes <large body of work> without notice or consultation.

How often do we have these articles? Averaging once a month? Maybe more?

How often does it need to happen before people realise that your $10/mo means jack shit to a multi-million dollar company, and that "accidents" like this are just a cost of business to them? Solving the problem properly would cost a lot more than just letting people fall through the cracks of automation.



What an interesting story, especially when you delve into the links on tort reform.


So what should we do? What price do we need to pay to exceed the "jack shit" threshold? Specific recommendations please.


Keep your own backups of anything truly irreplaceable on physical drives you own.


Or host your backups on DigitalOcean.


Maybe not the best choice in light of them deleting this guy’s entire app, db, and backups the other day: https://twitter.com/w3Nicolas/status/1134529316904153089


I'm guessing GP was making a joke, considering the recent DO "scandal".

BTW, as was made clear by multiple HN commenters in other discussions about this incident, nothing was actually deleted; the customer's account was suspended for 30+ hours.


Yeah, I didn’t miss that. I was being snarky.


Use GitHub and similar hosting only as a public gathering place for projects, not as the sole development repository. Which is to say, develop locally and ensure you have backups, only using GitHub to share the work with others.

This is the way development using git is supposed to work anyway.
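
For illustration, a minimal sketch of that workflow in Python (the remote names and URLs here are hypothetical placeholders; any host, including a bare repo on a server of your own, works the same way):

    #!/usr/bin/env python3
    """Sketch: push one local repo to several hosted mirrors.

    Remote names/URLs below are placeholders -- swap in whatever
    hosts you actually use. Requires git on PATH.
    """
    import subprocess

    MIRRORS = {
        "github":  "git@github.com:you/project.git",
        "gitlab":  "git@gitlab.com:you/project.git",
        "homebox": "you@yourserver.example:/srv/git/project.git",
    }

    def push_everywhere(repo_dir="."):
        for name, url in MIRRORS.items():
            # (Re)point the named remote at its URL, ignoring the
            # error if the remote doesn't exist yet.
            subprocess.run(["git", "-C", repo_dir, "remote", "remove", name],
                           capture_output=True)
            subprocess.run(["git", "-C", repo_dir, "remote", "add", name, url],
                           check=True)
            # --mirror pushes all refs (branches and tags), so every
            # host ends up holding an identical copy of the repository.
            subprocess.run(["git", "-C", repo_dir, "push", "--mirror", name],
                           check=True)

    if __name__ == "__main__":
        push_everywhere()

Run something like that from a cron job or a post-commit hook, and losing any single host costs you nothing but the issue tracker.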


Self-host in multiple locations - either datacenters or cloud providers. It's the only way to be sure.


Boycott is often the only solution. That's why those articles are important.


Boycott is not a solution. It's the first and very tiny step towards doing something. People need to federate, mutualise, fight as part of communities. The only upside of personal action is making you feel good (which is still a big win, but it's pretty superficial and doesn't scale).


The frustrating thing to me is that this guy will get this solved and we’ll all move on. He’s reasonably well known and had enough Twitter juice to get attention. All the small people who end up in this situation, whatever it truly is, without those resources will remain screwed.


I get what you're saying, and agree to a degree.

On the other hand this is like not backing up your hard drive. Having a single point of failure and expecting anything less is...

Like you said, he's big enough to shout into the abyss and get a response, while others have to accept that a single point of failure is exactly that.

If we changed the headline to "hacker deletes 5 years' worth of work", the response would be different; two-factor should have been enabled, should have used a strong password, one password per account, yubikey, etc. The onus would have been on the developer.

There are plenty of free hosting services for repos, so there's no reason IMO to not maintain a mirror.


Looks like you are right, he's already back up and running: https://twitter.com/jasonrohrer/status/1136304496882044928


GitHub's CEO claims customer support looked into this prior to any tweets, FWIW.


According to an earlier tweet [1], the reports are from users who are mad they were banned from his Discord server.

> Actually, it seems like this "report" to discord came on the heels of us banning a member for posting offensive content. That member then, to seek revenge, reported the entire server to Discord, and told us about it. Nice.

I've experienced similar immature behaviour in the Minecraft community back when it was at its popularity peak.

[1] https://twitter.com/jasonrohrer/status/1119751548542652416


I've helped run a forum in the past, and holy shit is it scarily easy to make stupidly-determined enemies. Even people who joined just to spam offensive content would act like they were the one victimized when banned and would do all sorts of things to "even the score". I think these people would emotionally forget that they joined looking for a fight and earned their ban in the first place, and would then ride the high of righteous indignation.


Online forums for random topics used to be the last refuge of the mentally unstable. I've moderated several forums and saw this behavior first-hand; a banned user would spend so much time and effort trying to stir the pot and get back at the forum's owners for their Injustice. I often wondered why they would choose to waste so much time on a niche forum rather than go do something productive.

Naturally, nowadays they have online forums that _cater_ to these kinds of individuals (4chan, some subreddits, etc). Nature, um.. finds a way.


This is not unique to forums. Anyone who’s owned a retail or retail-analog business (e.g., medicine) has to deal with precisely this sort of crazy-ass behavior on a reasonably frequent basis.

A tiny sliver of the population is effing nuts.


Mass reporting to trigger automated systems has become a great way to silence your enemies. Note how Discord did not explain jack squat about where this content was. Just an ultimatum to remove abstractly bad content, with a deadline. All from mass reporting.


And how was there no system in place to prevent one disgruntled person from taking down someone else's entire project on a whim, without evidence?


And if they'd just found a few buddies to also spam reports, would that have made it any better?


Either way, a human should always review the case before pulling something long-lived offline. Random users should only be able to flag things for review, nothing more.


That is terrible, and a good reminder that using the "cloud" means you don't really control your destiny. A quick googling around reveals a couple of options for backing up everything from GitHub (but I'm not sure what you'd do with a backup if you got kicked out like the OP).


It's also a good reminder of why data portability is important. For the git part, you can just git push to another service, even a self-hosted one, and be up and running in no time, and GitHub does keep pages and wikis in git... but all the tickets, that's a problem, and why I don't trust GitHub to be a canonical source for development.
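
The git part really is a one-liner; the tickets are what you have to export yourself. As a rough sketch (OWNER, REPO and TOKEN are placeholders), the public REST API is enough to pull every issue down as JSON:

    #!/usr/bin/env python3
    """Sketch: dump a repo's GitHub issues (PRs included) to JSON
    via GET /repos/{owner}/{repo}/issues. A token is only needed
    for private repos or to avoid the unauthenticated rate limit.
    """
    import json
    import urllib.request

    OWNER, REPO, TOKEN = "someuser", "somerepo", None  # placeholders

    def fetch_all_issues():
        issues, page = [], 1
        while True:
            url = (f"https://api.github.com/repos/{OWNER}/{REPO}/issues"
                   f"?state=all&per_page=100&page={page}")
            req = urllib.request.Request(url)
            if TOKEN:
                req.add_header("Authorization", f"token {TOKEN}")
            with urllib.request.urlopen(req) as resp:
                batch = json.load(resp)
            if not batch:  # empty page: we've seen everything
                return issues
            issues.extend(batch)
            page += 1

    if __name__ == "__main__":
        with open("issues-backup.json", "w") as f:
            json.dump(fetch_all_issues(), f, indent=2)

(Comments on each issue live behind a separate endpoint, so a complete dump needs another pass; tools like python-github-backup, linked elsewhere in this thread, handle all of that for you.)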


GitLab has a feature to copy an entire GitHub repo, including issues and issue threads, etc.

You can also enable options for it to act as a mirror, or to make GitHub act as the mirror for GitLab. I'm not sure whether there is a mode to keep the issue threads synced as well without manual intervention. That'd be full data portability.


That is one generally good thing about cloud applications: you are forced to bundle everything. Most of the software I run on AWS could easily be transported to Google, Microsoft or some other service.

Sure, there are exceptions and optimizations for specific services, but overall it did increase the portability of server applications quite a bit. Maybe the times where you just need that legacy Windows 2000 server encapsulated with 7 firewalls and proxies are finally over.

I usually prefer to host my stuff on private servers, but people unreasonably want to put as much software as possible into cloud providers. I think the reason for that is more often cocaine abuse and wanting to be hip than technical/financial arguments, but it is what it is.


I think the reminder is just that backups are necessary, and cloud isn't a required condition. I had a bare-metal host turn off my server because they were concerned I was violating the trademark of a big company with what I was serving. That big company was my client and I had permission, but the host shot first and asked questions later (actually, I had to call them when I saw the site went down).


Must have been a mistake. It's there again: https://github.com/jasonrohrer/OneLife

I also found this comment on his site:

Tarr (1,819.0 hours on record) Posted 42 hours ago

"Can't recommend the game to anyone who either doesn't have 15+ years of game development or doesn't know Jason personally. Game dynamics are too rich for a filthy casual like myself. If you want to make a suggestion please learn to read spaghetti code and hope the copypasta is as delicious as it is hard to untangle."

Which is interesting, because Jason's C++ code is some of the very best I've ever read. I normally hate C++, but his code is beautiful and very readable. Like out of a book or tutorial. E.g. https://github.com/jasonrohrer/OneLife/blob/master/gameSourc...

Also his Discord game support channel is filled with hate about him. Not only from this guy, but from many seemingly childish individuals. Looks like another GamerGate to me.


I'm admittedly not proficient in C++, but I can't see what's beautiful about the code. Looking at the specific example you linked to... inconsistent formatting and "magic values" aside, what's the point of, say:

        if( !relaunched ) {
            printf( "Relaunch failed\n" );
            setSignal( "relaunchFailed" );
            }
        else {
            printf( "Relaunched... but did not exit?\n" );
            setSignal( "relaunchFailed" );
            }
Instead of simply:

        if( !relaunched ) {
            printf( "Relaunch failed\n" );
            }
        else {
            printf( "Relaunched... but did not exit?\n" );
            }
        setSignal( "relaunchFailed" );
?

Such copy-pasta style redundancy does count as somewhat sloppy in my book.

And do C++ books or tutorials recommend cramming over 20 KLOC in a single file? https://github.com/jasonrohrer/OneLife/blob/master/gameSourc...

I don't have any bone to pick with the guy, whom I don't know and who may well be very talented.

I only fail to see why code quality, beauty even, would be considered a forte of this particular project.


I'd say it's pretty mundane code, like perhaps 90% of what I usually see.

If I were trying to reduce redundancy, it would become more like this:

    printf("Relaunch%s", relaunched? "ed... but did not exit?\n" : " failed\n" );
    setSignal("relaunchFailed");


This is not so much C++ as it is C with some C++ syntax. From the bits of code I've seen, it's written pretty much on demand: not building an engine or library, just the code that is needed and nothing more.


It’s probably a mistake left over from refactoring. The guy is only human, you know.


Magic strings and numbers all over the place are less likely to be refactoring artifacts.

Of course he's only human, but it wasn't me who picked this little (80 LOC) file, out of the whole codebase, as an example of code that reads like it's "out of a book or tutorial" :)

And what about stuff such as 20 KLOC in a single file? This is spaghetti, no matter how you spin it. I wouldn't want to work with such a codebase.


It isn't really clear that splitting up your code into a bunch of files is always a good thing. Sometimes it becomes harder to search, for some people switching files incurs a sort of mental cost (like walking through a doorway), it can make compilation slower, and depending on your editor you might have to do some extra configuration to work well with your directory structure.


Just because there are many lines of code doesn't necessarily mean it's badly organized, hard to navigate, or spaghetti-like. People have this idea that high-LOC files = bad. It's really the complexity of the mental model of the code and the difficulty of navigation that is bad. Repetitive code isn't elegant, but it's often more obvious.


Nat Friedman just replied to this tweet:

> Sincere apologies Jason. We will investigate what happened here and learn from it.

https://twitter.com/natfriedman/status/1136216389054881792


https://www.softwareheritage.org/

"We collect and preserve software in source code form, because software embodies our technical and scientific knowledge and humanity cannot afford the risk of losing it.

Software is a precious part of our cultural heritage. We curate and make accessible all the software we collect, because only by sharing it we can guarantee its preservation in the very long term."


FWIW, https://github.com/jasonrohrer/OneLife appears to be there at the moment (20190605T0812+0100), along with 22 other repositories.


Yeah, seems to be back up. That link (along with https://github.com/jasonrohrer) was 404ing when this was submitted.


From being on the other side of this kind of "big company screws little guy for no reason"...

In ~80% of cases, there is more to the story than meets the eye. For example maybe a legal complaint came in, or someone was DoS attacking that repo and it put the whole GitHub service at risk.


So in 20% of the cases it really is a big company just screwing a little guy for no reason?


Or there are more than 2 options, I suppose.


The remaining 20% are usually incompetence or malice on the part of the big company...


If a legal complaint came in, I would have figured you'd start a far more transparent, cohesive process where the one accused has some information about what's going on and feels some agency.

If there's a DoS, inform the party about it.


That's why online services will never really be a complete alternative to using your own hardware. They might not get their computers stolen or catch fire, but they destroy your data intentionally when they decide they don't like you. The failure modes are different but still present.


I think there is still an argument to be made that it's worth it for a lot of companies (in particular in the beginning), because your pokemon game or your scooter company going offline is not really a big deal. And then you reach a size where more things are possible, like an SLA, people you can call on the phone, and the possibility of shaming the provider.

But it's true that each company probably needs to think for themselves and their circumstances, and do at least a mental computation every six months: do I need offsite backup, a second cloud provider in cold standby, in hot standby, should I look at colocation, should I look at square meters and ethernet crimping tools?


And so you don't send an email?


Maybe they did, but the mail's stuck in a spam filter or sent to an old email address...

Maybe they tried to call him but he rejected the call?

Only the stories that sound unreasonable get viral attention.


The irony is that Jason is the "big company screwing the little guy for no reason" with his own work.

He licensed his work under "Do whatever you want with it, absolutely no restrictions, and no permission necessary."

Then when someone started making money off his work, he turned around and started litigation.


There are always two sides to a story. That being said, it seems like every other week now there's a story of a cloud service or provider shutting something down or removing something, without or with very little notice, and the only recourse for somebody who cannot afford an army of lawyers is calling them out on Twitter and hoping for the best.

This is an alarming trend, and highlights how decentralization and, at the same time, more accountability for this sort of action from these providers is necessary (through regulation?). Sadly it seems like at least the EU is currently walking in the opposite direction, which favors large corporations and the content industry :(


> This is an alarming trend

This has always been the case, ever since BBS operators. Data on other people's machines is data on other people's machines.


This is true. The only difference is that now there's more money (and people's livelihood) on the table. Or even just bigger numbers.


It could be a BS/extortionate copyright strike, as is becoming ever more common recently.


There's no excuse not to email the guy.


Setting up your own git remote takes minutes. Github is just a pretty interface with bells and whistles. I think more people should go self hosted. It's cheaper, and none of the features are truly needed.


> none of the features are truly needed

I think ease of contribution and discovery shouldn't be underestimated. I find it hard to believe that our little project [1] would have had 90+ contributors, of whom ~8 made very substantial contributions, if we'd self-hosted it. The project would have been far lesser without those contributions, and I probably would have abandoned it out of lack of interest.

That said, most other projects I see around have much more uneven contribution levels, with typically 100:1 ratios from the core developer to the second biggest contributor. Maybe we're just weird.

[1]: https://github.com/tridactyl/tridactyl/graphs/contributors


Well, shepherding a project is truly important if you want contributors - you have to tag issues that are good for noobs, help people with pull requests, not blow off issues that are requests for more documentation of how a module works, etc. A lot of which I can see you're doing by looking at your issues tab.

Another sad fact of life nowadays is that GitHub issues reduce friction, but a lot of "hardcore" developers prefer email or some other offsite system that has lots of friction for people getting involved (I see this a lot in gitlab/bitbucket/sourcehut hosted projects). Which means an open source noob is going to look at another project instead of getting involved with yours (again, not an issue with you). For better or worse, GitHub is king here.

Most open source projects don't bother and tend to rely on interest levels to keep them afloat. This works for the popular ones but tends to sink niche projects.


I've nothing to add to the conversation, but I would like the opportunity to thank you for Tridactyl. It makes the web usable.


> Github is just a pretty interface with bells and whistles. I think more people should go self hosted. It's cheaper, and none of the features are truly needed.

Maybe none of the GitHub features are useful in how you use it, but GitHub is by a large margin the most productive way to work with a distributed dev team that I've ever found. It's an amazing system for the productivity of large dev teams and a dream to work with if it fits your use case.

The biggest feature for me is the whole Pull Request flow with integration between filing the original issue, the code changes to fix it with inline code reviews and tracking everything through to a release all with a few clicks. And all of it is in a format you can link to in an email or drop into a chat, even referencing the specific lines of the diff you care about. It's really an amazing system that most of us take for granted.

Different people have different needs. And for a lot of people, git itself is just the (excellent) engine but really they want a whole car.


It might not be cheaper, depending on your needs. You won't get all the features GitHub provides with just a git server. You can install something like GitLab, Gogs or Gitea, but then you need to do the maintenance yourself, which is not minutes.

Add to that the user permission system which Bitbucket-like services provide out of the box (I don't know about GitHub): for example, a certain team in your company can access certain repositories, interns can access other repositories, etc.

Add to that the case where you want one of your customers to have read-only access to a repository: you not only have to manage user permissions, but your whole IT structure needs to provide outside access to a certain part of your network, in a secure way.

This is achievable, but it certainly wouldn't take just minutes. Services like GitHub and Bitbucket provide all these features for a reasonable price, and they have dedicated sysadmins working on providing a secure service 24/7, who will do a better job than a lot of companies can with a 1-2 person team doing sysadmin things in their free time, on top of their main daily workload.


I acknowledge that this is a classic HN comment, but a git repo on a VPS with Unix file permissions meets all of your criteria with very low effort.


And when some script blocks your access to your VPS, some know-it-all will tell you that this is your own fault because ...

See https://news.ycombinator.com/item?id=20064169


The electricity company/ISP can still deny service.


That's not only far less likely, but far easier to fix -- if my ISP and/or utility company shut me down, it would be relatively easy to physically move my servers to a friend's house until I can get the situation taken care of.


Self-hosted is cheaper than free?


How does this keep happening, across all these different services? The very least you could do is give the account a few days' warning via email, which is totally automatable. There's no excuse for not doing that.


Seems it’s been a busy week for these reminders not to trust centralized services.

I think the “Fediverse” will become more popular over the next few years.


There are in fact people who are already trying to "federate" code collaboration https://nth.io/notice/7652


That's interesting. IIRC Linus Torvalds even came up with a tool to make "distributed/federated" code collaboration quite easy, avoiding any reliance on a single centralized host. Maybe we should start making use of it.


Is this a sarcastic comment? Because it sounds like you're comparing two completely different things. Federation addresses a completely different problem than tracking file changes. It's supposed to make it simpler to collaborate across different platforms, whether you're using the command line or a platform the likes of github, gitolite, gitlab, or a desktop IDE.


Decentralized VCS -> centralized decentralised VCS -> federated centralized decentralised VCS and so on.


Huh? It's a game. On Steam.[1] It's not a copy of anything; it's quite original. Has good reviews.

[1] https://store.steampowered.com/app/595690/One_Hour_One_Life/


I feel like I need more information.


Twitter in a nutshell.


We need to hear both sides of a story. GitHub does not just 'delete' files. The author should contact them first and resolve whatever issues there are. And if GitHub refuses that, then you can cry a river.


I don’t think it’s unreasonable for the author to be provided an explanation when the repo was taken down.


Is it not incumbent upon Github to contact the author before deleting his account?


Maybe they did but it ended up in his spam filter or he unintentionally ignored it? As GP said, we need 2 sides to the story to make any judgements.


I just backed up ~100 repos & ~500 issues, totaling ~100 MB of JSON and git-tracked code.

Everything's right there waiting for you.

https://github.com/josegonzalez/python-github-backup


5 years of work

doesn't have a backup

can't be that important


I'm sure he'll get it resolved.

But come on people, "in the cloud" doesn't mean it is safe. If you care at all about what you're storing in the cloud, make copies to store locally and offsite.


Well, the beauty of git is that the entire repository is more than likely cloned in a few different places and is easily reproduced. It also makes it nice and easy to move elsewhere.


There is a fork: https://gitlab.com/_zed_/OneLife (with keyboard support, as it says).


What solutions exist for me to keep a complete copy of my GitHub account offline, or to keep an up-to-date backup? Mainly all the repos I started and forked.
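
For a rough idea of what's involved, here's a minimal sketch (USER and BACKUP_DIR are placeholders) that mirror-clones every public repo of an account and refreshes the mirrors on later runs, so it can sit in a cron job:

    #!/usr/bin/env python3
    """Sketch: keep local mirror clones of all of a user's repos,
    listed via GET /users/{user}/repos (public repos only; a token
    and /user/repos would be needed for private ones).
    """
    import json
    import pathlib
    import subprocess
    import urllib.request

    USER = "someuser"                       # placeholder account
    BACKUP_DIR = pathlib.Path("git-backups")

    def list_repos():
        repos, page = [], 1
        while True:
            url = (f"https://api.github.com/users/{USER}/repos"
                   f"?per_page=100&page={page}")
            with urllib.request.urlopen(url) as resp:
                batch = json.load(resp)
            if not batch:
                return repos
            repos.extend(batch)
            page += 1

    def backup():
        BACKUP_DIR.mkdir(exist_ok=True)
        for repo in list_repos():
            dest = BACKUP_DIR / (repo["name"] + ".git")
            if dest.exists():
                # A mirror clone refreshes all refs with `remote update`.
                subprocess.run(["git", "-C", str(dest), "remote", "update"],
                               check=True)
            else:
                subprocess.run(["git", "clone", "--mirror",
                                repo["clone_url"], str(dest)], check=True)

    if __name__ == "__main__":
        backup()

That covers the repos themselves (forks included); issues and wikis need their own export - see python-github-backup, linked upthread.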


Surely they weren't keeping everything only on GitHub, and had locally stored copies, right? Right?


I suppose it's good to review all your big assumptions once in a while.

OP assumed GitHub would hold his code (and make it available) forever, seemingly a good assumption. Now he's massively inconvenienced.

Time for me to look around, consider all my big assumptions....


What's the reason? What the heck is GitHub doing?


I don't understand, doesn't he have the repo locally?


Might be annoying if he uses GitHub wikis, pull requests, issues, etc.


The only lesson to be learned here, one that needs to be repeated time and time again:

Don't put all your eggs into one basket.


The lesson to be learned here is "Don't use GitHub."

Use something that you can run your own instance of (like GitLab) and make the cloud version a shadow copy of your local one. That way if they pull the rug out from under you, you actually have some way of dealing with the issue.


That's why I set up my GitHub and GitLab to mirror each other. GitLab can even automatically pull the latest changes from my GitHub repo. Too bad GitHub can't do this yet [1].

[1]: https://github.com/isaacs/github/issues/415


Put another way, do not treat Github/lab/Bitbucket as file hosting. As a repository it's fine, because it is assumed you have a local copy.


Don't put all your projects in one cloud

And... Use Gitlab


GitLab has had incidents too; remember the DB backup fiasco? I know a few devs who lost issues and wikis that day.

Just back up your repos once a year; let's not be paranoid.


In the case of GitLab, the incident was caused by a human error, and they worked hard to fix it afterwards. GitHub and DigitalOcean (to name the ones recently mentioned on HN) have deliberately set up automated kill switches with no possibility of appeal, sans crying on Twitter. Sure, when data is lost it may not matter in the end, but I'm still more sympathetic to the former.


Indeed, how GitLab handled the incident was exemplary. That is an entirely different topic.



Yeah, sure. But since GitLab can be run self-hosted, you can back up the entire instance from the public cloud to the self-hosted version. That's pretty great.


By the same logic you can have a script that does a git pull locally as a cron job, along with exporting issues (wikis are just another implicit repo).

The point was that nobody does that for backup purposes in advance, and maintaining a local GitLab along with supporting infrastructure is not a trivial task.


Saying "they removed my life's work" when you has a local copy of it is, at best, intellectually dishonest.


Not really, that's what physically happened.

The local copy may have been deleted. The developer may have just gotten a new laptop and traded in the old one. Argue at length over having multiple redundant backups perhaps, but the simple truth is his work was deleted, and it's only by luck (or foresight) that he has a local copy.

That it even happened is a huge problem, whataboutism aside. My trust in the longevity and stability of the service offered by GitHub is significantly eroded by this news.



