Hacker News | zebra9978's comments

There is also Gitbucket - https://gitbucket.github.io/gitbucket-news/gitbucket/2018/05...

Now with LFS, code review, and more features. Built on the JVM, with blazing performance.


All these alternatives seem to use Github as the main code repository. That's a really bad smell. Why don't they host their codebase in their own service?

Gitlab's main codebase is in Gitlab, though.


Yeah, they really need another name. Bitbucket is well known; everybody talking about GitBucket will have to make it doubly clear that it's not Atlassian's Bitbucket.


there's a pinch of humorous irony in this website url


Is this the same technology that Numerai uses, or is it multi-party computation (https://mortendahl.github.io/2017/04/17/private-deep-learnin...)?


It is a virtual certainty that Numerai is lying about using homomorphic encryption. The only known FHE schemes require specialized algorithms to perform computations on the encrypted data. You can't just run a standard neural net over some homomorphically encrypted data and expect an interpretable result. Yet Numerai claims that this is exactly what's possible with their data. This is clearly false. They are probably obfuscating their private signals in some extremely trivial way.


There's also the unlikely possibility of discovering a truly homomorphic encryption scheme with no constraints on operations


About as likely as discovering Fermat's purported proof.


Unclear tbh, the only thing I could tell from numerai is that the data is time series. The evaluation of the predictions isn't public afaik so you can't tell.


We are extremely worried about the future of Docker Swarm as well. We love Swarm, but most of the work we see from the Docker team is on providing a migration path to Kubernetes, and a huge number of Docker Swarm networking bugs are not being worked on.

We will be happy if Docker talks about Swarm becoming a management UX for K8s - but we need visibility. These are production orchestration systems. The migration path is not easy.

And seeing what Docker Co is doing with Cloud, it is not very comforting to trust that they will do the right thing with Swarm.


Why did you pick Swarm for production?

We followed Swarm from the beginning, but after a few releases, around v0.4, it was clear we should never use Swarm, and that it was mostly the Docker PR machine that made it sound nice, not the actual features.

Maybe it got better later on, but the first several Swarm announcements seemed really off-putting to me.

We ended up on Mesos/Marathon. Not that it has a bright future either, but it was at least capable of restarting containers from the beginning.

Just migrate to Kubernetes. It has won.


When we started looking for a container orchestration system, we naturally used Google Trends (https://trends.google.com/trends/explore?date=today%205-y&q=...). Kubernetes reached its 1.0 release shortly before we were ready to start using the system, and a lot of the features added since then eliminated some of our other problems (e.g. configmaps/secrets combined with the existing service discovery pretty much eliminated our plans to use Consul).

I've been taking a second look at Mesos recently and found that I didn't really grok it the first time I looked at it. In any case, I think your assessment is correct ("Just migrate to Kubernetes" - https://trends.google.com/trends/explore?date=today%205-y&q=...).
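For context, the ConfigMap/Secret pattern that displaced the Consul plan in the grandparent comment looks roughly like this (names and values are purely illustrative, not from the thread):

```shell
# Store non-sensitive config as a ConfigMap and credentials as a Secret:
kubectl create configmap app-config --from-literal=LOG_LEVEL=info
kubectl create secret generic app-secrets --from-literal=DB_PASSWORD=changeme

# Pods then consume these as environment variables or mounted files, and
# find each other via the built-in DNS-based service discovery, e.g.:
#   http://my-service.default.svc.cluster.local
```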


Hmm, that k8s “has won” makes me a bit sad (not that I’ve ever gotten a gasp of fresh air outside the stranglehold of AWS), as I was impressed with Mesos.

Can you share some 1st person opinions on Mesos, and where K8s is a step forward, backward or aside?

TIA


Having moved from Mesos to Kubernetes, Kubernetes just felt more mature. Working solutions for stateful sets, service discovery with DNS, flexible scheduling with affinity and tolerance, saner resource limits, a good CLI tool.

It's not a completely fair comparison since we also were able to offload persistent storage to Google Cloud, which is one of the harder problems IMO.

I think Mesos has improved since then, but it always felt like they were a bit behind.

In general Kubernetes feels like it is designed by people with relevant experience. Especially compared to our earlier experiments with Docker Compose files. People are praising their simplicity, but they left us solving a lot of hard problems that Kubernetes solves for us better than we could have done.


Why is it sad? It's great that we can finally standardize and use a single powerful system that is very capable but also improving quickly.


I would encourage you to try Swarm. It's brilliant in its simplicity. I think Docker's product marketing and customer success basically suck, but Swarm as a product has been really, really nice. And yes, I continuously evaluate Kubernetes and Swarm side by side.

You can get a Swarm cluster running in less than 10 minutes on your local laptop after "apt-get install docker-ce". To run k8s, you will need to first muck about with ingress, overlays, and everything else.

I know it's because of "flexibility". It's like Sinatra vs Rails: they are both great in their spaces.
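For reference, the 10-minute bootstrap described above is roughly the following (the IP address and token are placeholders, not real values):

```shell
# On the first node, initialize the swarm:
docker swarm init --advertise-addr 192.168.1.10

# `init` prints a join command with a token; run it on each worker node:
docker swarm join --token <worker-token> 192.168.1.10:2377

# Deploy a replicated service across the cluster:
docker service create --name web --replicas 3 --publish 80:80 nginx
```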


    $ brew cask install minikube
    $ minikube start


minikube is a specific version/distro/packaging of Kubernetes meant for testing on a local laptop.

Docker Swarm runs exactly the same way with exactly the same components and with the same ease on laptop as well as the cloud.

TL;DR - you can't run minikube in production.


I see your point. For k8s in production, you'd have to use a solution provided by a cloud operator: GKE, AKS, EKS, etc. Which leaves on-prem and/or bare-metal clusters uncovered. Not that I'm convinced it's worth running bare metal anything; you're likely to be less efficient than the large cloud operators because of economies of scale.

Nit:

    $ minikube get-k8s-versions
    The following Kubernetes versions are available when using the localkube bootstrapper:
    v1.9.4, v1.9.0, v1.8.0, v1.7.5, v1.7.4, v1.7.3, v1.7.2, v1.7.0,
    v1.7.0-rc.1, v1.7.0-alpha.2, v1.6.4, v1.6.3, v1.6.0, v1.6.0-rc.1,
    v1.6.0-beta.4, v1.6.0-beta.3, v1.6.0-beta.2, v1.6.0-alpha.1,
    v1.6.0-alpha.0, v1.5.3, v1.5.2, v1.5.1, v1.4.5, v1.4.3, v1.4.2,
    v1.4.1, v1.4.0, v1.3.7, v1.3.6, v1.3.5, v1.3.4, v1.3.3, v1.3.0



I used kubespray to set up a Kubernetes cluster on our own hardware. I have Ansible and Docker knowledge and ran into a few issues, but it didn't take much time to set up a custom cluster. There are still rough edges (I had issues accessing the UI), but I think it'll become even easier in the coming months.
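A sketch of the kubespray flow described above; the repository path and inventory file names have varied between versions, so treat this as illustrative rather than exact:

```shell
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt    # installs a pinned Ansible version

# Copy the sample inventory and point it at your own machines:
cp -r inventory/sample inventory/mycluster
# (edit inventory/mycluster/hosts.ini with your node IPs and roles)

# Run the main playbook against the inventory:
ansible-playbook -i inventory/mycluster/hosts.ini --become cluster.yml
```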


Creating a k8s cluster on GKE is just as easy (a single gcloud command, or use the GUI if that is your preference).

With hosted Kubernetes as a service (GKE, AKS and soon EKS), there is little reason to roll your own cluster.
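The single gcloud command in question is roughly the following (cluster name, zone, and node count are illustrative):

```shell
# Create a three-node cluster on GKE:
gcloud container clusters create my-cluster \
    --zone us-central1-a --num-nodes 3

# Wire up kubectl to the new cluster and verify:
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl get nodes
```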


But you see with Docker Swarm, you don't need "Docker Swarm as a service". That's why Docker Swarm wins in simplicity.


It's just Kubernetes in a VM, like Docker on Mac is just docker in a VM.

You can run Kubernetes on your Linux machine as-is.


It's what's in the VM that counts.


What about HA/clustering, and security (mTLS)?


Why would you want any of those on your laptop?


Because that is how I will deploy in production, at least the security pieces. Any difference and I can't be sure of preventing "works on my machine" kinds of issues.


Is HashiCorp’s Nomad in a similar position at this point? I really enjoy its (relative) simplicity.


Nomad is very much alive, and HashiCorp is committed to delivering a scheduler that concentrates on operational simplicity so teams can concentrate on building applications. It also has the capability to run workloads other than Docker, such as isolated fork/exec for binaries, non-containerized Java, etc. We have a great release with 0.8 and many features planned for the rest of the year.

Integrating Nomad with Vault and Consul is super easy and allows you to provide secrets, configuration and service discovery to the application with the right layer of abstraction, the application should not be aware of the scheduler it is running on. Cloud auto join allows super easy cluster config. Job files are declarative.

Yes, Nomad does not have all the features of Kubernetes, but we take a different approach, believing in workflows and the Unix philosophy of a single tool for a single job. A fairer comparison would be to compare the HashiCorp suite of OSS tools (Nomad, Vault, Consul, Terraform) to K8s; this gives you the capability to manage both legacy and modern workloads.
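To illustrate the operational simplicity claimed above, a minimal Nomad workflow from that era looked roughly like this (subcommand names have shifted slightly across versions, so double-check against your release):

```shell
# Start a single-node dev agent (server + client in one process):
nomad agent -dev &

# Generate a documented example job file (example.nomad) and submit it:
nomad init
nomad run example.nomad

# Inspect placement and health of the job:
nomad status example
```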


I don't know about Nomad, but HashiCorp's Vault is going to do just fine, I think. It fills a gap in secrets management that K8S doesn't cover out of the box.

Looking at doing Vault in HA leads people to look at Consul, which leads to Nomad. (Consul uses a consensus protocol for service discovery and I think that will be interesting for the next generation).

Last year, K8S had already captured the center of gravity, and it took a while for the rest of the dev community to catch up.

I think this year is a lot of shuffling as the survivors settle into orbit around K8S. There are a lot of interesting innovations up the stack once orchestration is de facto standardized.

K8S still hasn't solved stateful workloads, though it is introducing a lot of primitives to support them: controller hooks, third-party resources, on top of which Operators can function.

I think we will see a lot more innovation as people create Operators. That can include anything from running specific distributed stateful workloads to things like intrusion detection, ML-driven autoscaling, and so forth.


I get the impression that Nomad was never particularly alive to begin with, which is a shame since it seems better designed. But it doesn't have that "ZOMG Google has blessed us with the secrets of the Borg" aura that DevOps teams crave.


Nomad was my choice for queue-centric workloads, but it doesn't seem to fit webservers / long-lived services as well as Kubernetes. I'm not sure, but I would think you could run Nomad and Kubernetes on the same servers, sharing the Docker runtime.


If you run two schedulers they don't have a correct view of available capacity. You can use Mesos as a meta-scheduler but that introduces more complexity.


I, too, was sad to see similar things with Nomad. HashiCorp really does write generally excellent software.


The actual features are very nice. I don't know what else makes you say 'migrate' apart from the fact that there's growing support for k8s. The advantages of Swarm (very easy setup in private clouds, docker-compose-format descriptors, etc.) don't go away just because k8s is popular.

Stop spreading FUD please, it is not good for anyone.


There were two Swarms: classic Swarm, which was okay, and the newer Swarm mode introduced in Docker 1.12. I'm not sure which one you refer to when you say v0.4. But Swarm mode was good: extremely simple, and it worked well for the right workloads. Like most solutions it is not a silver bullet for every orchestration need, but it worked very well for microservices and new-ish architectures. K8s is great too, but it seems like overkill for a handful of services. Also, K8s setup used to be hard, especially HA, and the learning curve is quite steep. One of the features that makes Swarm mode extremely intuitive, IPVS-based load balancing, is being incorporated into K8s, so there has been some cross-pollination on both sides. But I do not think Swarm mode is going to die anytime soon.


I had to make a decision for an orchestration tool a few weeks ago and I went with K8s. One of the main reasons was that even Docker advertises it on its website and with Docker for Mac. I expect Swarm support to be canceled in a not so distant future and I cannot rely on a tool with an unclear future.

Which is a pity because I really liked Swarm for its simplicity.

Side note: I am also concerned about Docker in general. CE/EE split, services shutting down, bugs seemingly not being fixed - I cannot point out a precise aspect, but I am concerned.


I’m concerned as well. We use Docker and Docker Compose heavily for our development and both on Docker for Windows and Docker for Mac developers have to restart their daemon several times a day. The binaries aren’t open so it’s tough to see and fix the issue; but because we aren’t Docker Enterprise Engine customers, there is no path to support. It would be helpful if there were a way to pay for Docker and receive support without having to go the enterprise route. I can see paying $200 a month for the team for support.


Yikes. Surely Docker is providing you more than $200 worth of value a month. If so, why would you only pay $200?


I might; but that’d have to mean seeing some traction from that money first. They haven’t proven that they’re able to run the sort of business they’re trying to run.


I'm not sure who's going to beat Docker. Docker is central to most orchestration tools so as long as they make money someplace with their central services, they should be fine.


Kubernetes could move to rkt or even the now standard systemd stuff, end users would hardly know the difference. The container format isn't a very strong lock-in effect and most people are probably better served without the image type format anyway (as the Linux block layer wasn't really constructed with that use case in mind, and fixing the plumbing will take longer time than developing the orchestration tools which is what'll win the users).

Docker the company has few options to monetize Docker the software once it becomes commoditized. They seemingly chose the Enterprise route, which consists of pretty orchestration tools and integrations with Active Directory. (A perfectly valid option, which worked out well for VMware.) That's a dead end now that Kubernetes has won container orchestration. It will be interesting to see where they go next.


The Kubernetes community is pouring a lot of resources into CRI-O. I imagine you are going to see the Kubernetes clusters that are built 'the hard way' start switching over and removing Docker. It will still be used for building and pushing containers for the time being.


Who’s beating them? Amazon?


Kubernetes has won the container scheduler wars. At GitLab we're all in on making a PaaS based on k8s, our CI/CD, and the container registry that is part of GitLab.


It feels awfully 19th century though that despite k8s having "won", by far the biggest container schedulers by containers scheduled are, no doubt:

(I think this is the correct order, not 100% sure of course)

1) google borg (maybe omega) [1]

2) amazon ec2

3) whatever microsoft is using

(large gap)

4) all the rest of the world combined, a small portion of which is k8s

[1] https://www.quora.com/Does-Google-use-the-Open-Source-Kubern...

(One might even say [1] seems to imply it'll never happen, or will at least take a very long time. Also, if you read the papers it becomes very clear that "Google Borg" includes a lot of things these days at many levels: custom ASICs, device firmware (as in standard device, Google Borg firmware), BIOS firmware, entirely custom sub-kernel code, custom kernels, custom userspace (i.e. a Google-specific libc that's not optional)... all of these will turn out to have dependencies on each other that would have to be redone for k8s, which could take a while.)

(although I have not read any papers on it (I'd love some though), I'd bet amazon is in a similar boat, and of course Microsoft is Microsoft)


EC2 is not a container scheduler - it's an IaaS for VMs. The Amazon container PaaS (ECS/EKS) is a layer on top of EC2. And that is being superseded by Fargate which will make the underlying EC2 invisible. If you need a Fargate-like capability now, Azure AKS does it.

See https://azure.microsoft.com/en-us/services/container-service... and https://aws.amazon.com/fargate/


Fargate is expensive as hell for long running services. You should only be using it for something that creates value 100% of the time that it is running.


So what is the EC2 container scheduler before Fargate called? Any papers on it?


ECS and EKS.


> At GitLab we're all in on making a PaaS based on k8s

This is very interesting. Could you talk more about this? There is definitely space for an "opinionated k8s distro with batteries included". I have wished for Swarm to become this...


It is not a Kubernetes distribution. You can use any distribution or CaaS you want. The beginning of it is in GitLab Auto DevOps https://about.gitlab.com/2017/10/04/devops-strategy/


Interesting. Is there a blog post where I can read more about this?



Docker is an amazing tool, but I think the technical design and overall strategy for Swarm weren't very well executed. Moving to k8s is a smart thing for them, because it's objectively better for real production use.

In our tests about a year ago, swarm started showing serious networking and cluster synchronization problems with cluster sizes over 30 nodes (physical servers), on a fast, reliable LAN.

I've heard similar stories from another big Docker customer: Docker support promised them that improving Swarm's performance and fixing the scaling issues were the focus of "the next version", but the fixes never came. This company is now moving to k8s.


Could it be that the teams are simply focused on adding K8s support and getting the Docker EE out of Beta?

Public statement on their blog after the K8s announcement in EU:

"But it’s equally important for us to note that Swarm orchestration is not going away. Swarm forms an integral cluster management component of the Docker EE platform; in addition, Swarm will operate side-by-side with Kubernetes in a Docker EE cluster, allowing customers to select, based on their needs, the most suitable orchestration tool at application deployment time."

https://blog.docker.com/2017/11/swarm-orchestration-in-docke...

There are still plenty of PRs and activity in the SwarmKit and Libnetwork repos:

https://github.com/docker/swarmkit/pulse/monthly https://github.com/docker/libnetwork/pulse/monthly


I hope this isn't so either. Docker Swarm is so simple and works well for many use cases.


This is already Swarm v2; there was an older Swarm, which worked nicely enough. It was the equivalent of multi-host docker run: it could filter based on constraints and do bin-packing, and it even had support for multi-host networking with etcd/consul/zookeeper.

Then they cancelled it: no more patches, no mention of it anywhere unless you know where to look.

Then they created Swarm mode and added the concept of "services", which sucked compared to a regular run because it lacked so many of the options the run command had; it took more than 6 months to implement most of them.
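The gap between docker run and the early service abstraction is visible in the two CLIs; a rough side-by-side (flags from memory, so double-check against your Docker version):

```shell
# Single-host style:
docker run -d --name web --publish 80:80 --memory 512m nginx

# Swarm-mode equivalent; early releases of `service create` were missing
# many of the flags `run` had, which is the complaint above:
docker service create --name web --publish 80:80 --limit-memory 512m nginx
```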



> Twitter users

People. People are up in arms.

Anyway, yeah, that's insane. Even Google, who constantly shuts stuff down, usually does so with way more heads up. For comparison, Google Reader, a completely free service, shut down with 3.5 months advance notice. Google Wave got almost 6 months notice.


> Google Reader

It still hurts.


Since I missed out on missing it, was there any particular feature that's not in other RSS readers?

I have only used RSS via Gwene over NNTP; if that went away, I'd miss it bunches.


It’s not about features. Nowadays, I’d go with a self-hosted Open Source application from the start. In fact, that’s what I am looking for at this very moment.


It was the day RSS died for me


I really don't get why this statement is so common. More or less every other hosted reader immediately offered a migration path, so getting out of Reader and up and running somewhere else was really easy.


I'm glad it happened, now I'm happier with Feedly than I was back then with Google Reader.


https://bazqux.com/ - Highly recommended. I'm a lifetime subscriber.


Let it live again with InoReader....


I do think that for commercial services, like those in Google Cloud Platform, they give a year or more.


All Google Cloud GA features will have at least 1 year of deprecation period.

7.2 Deprecation Policy. Google will announce if it intends to discontinue or make backwards incompatible changes to the Services specified at the URL in the next sentence. Google will use commercially reasonable efforts to continue to operate those Services versions and features identified at https://cloud.google.com/terms/deprecation without these changes for at least one year after that announcement, unless (as Google determines in its reasonable good faith judgment):

(i) required by law or third party relationship (including if there is a change in applicable law or relationship), or

(ii) doing so could create a security risk or substantial economic or material technical burden.

The above policy is the "Deprecation Policy."

Disclaimer: I work for Google in Cloud.


Google does make a mess with consumer apps but it’s entirely different when it comes to Google Cloud or any of their enterprise products.


It certainly makes me rethink if docker services should be relied on in production.

Migrating off docker cloud will be a pain, but the service was already a pain to use, so maybe it's about time anyways.

But imagine being given 2 months to migrate off docker hub for image storage. Panic would ensue :)


I understand that having to abandon ship sucks, but wasn't the whole point of containers that they can be migrated easily? heck, the whole concept got its name from that idea. So why the fuss?

edit: typos


Sure, but I'd wager that most of the pain is getting orchestration tools to work on other platforms, and also vetting those other platforms, etc.


Because moving to another orchestration platform requires overhauling your CI/build system.


It's also due to the fact (which I also tweeted) that Kubernetes is not even supported on the stable release channel of Docker for Mac!


But why?

The whole point of containers is that they are ephemeral and can be booted up quickly anywhere, because you statically link the whole fucking OS?


The APIs of platforms are often totally different. If you went to Docker Cloud for its simplicity and now have to move to AWS/GCP/Azure/etc. without a dedicated DevOps person who already knows one of those platforms, you have no choice but to take a developer off feature work and put them on learning the new API for a few weeks, including testing. ~8 weeks is not enough for that if you are a cash-strapped startup.


Such are the perils of using immature tools in your development chain and production systems.


Are you guys planning to migrate GitLab to Go? I think the biggest feature that everyone wants is better performance.

Is the migration path that tough?


There are no plans to migrate all of GitLab to Go. The main Rails app is going to stay a Rails app for the foreseeable future. There are a few reasons for this. For one it'd be such a huge project, but also Rails is working well for us, it's great for our pace of feature development.

We are working on moving the git layer to Gitaly[0] which is written in Go (and is what this blog post is about). It was one of our major bottlenecks and we've seen a lot of benefit from having made the switch. It's not done yet, but a lot of the calls to git that the application makes are now done through Gitaly.

[0]: https://gitlab.com/gitlab-org/gitaly


They have so many features that I don't see it happening ever.


They wouldn't need to stop the world and do a full rewrite. It would be feasible if they stop writing new components in Ruby and began replacing the existing parts piecemeal.


What I don't understand is that GitLab raised a very large amount of money. Can it not pay a team to port it to Go or Java in parallel? Maybe the GitBucket or Gitea teams are up for hire.


Suggest gitbucket - https://gitbucket.github.io/

Blazingly fast (a single war file; just run "java -jar gitbucket.war" to get started), with a very nice UI. A plugin system enables you to extend the functionality (including CI), and there's a very active dev community.

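Getting started really is close to a one-liner, assuming a JVM is installed (the flags shown are optional and illustrative):

```shell
# Download gitbucket.war from the GitHub releases page, then:
java -jar gitbucket.war --port=8080 --host=0.0.0.0

# Repositories and the embedded H2 database land in ~/.gitbucket by
# default; back that directory up to preserve the instance.
```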


For anyone running this, can you comment on resource requirements? One recurring criticism of GitLab, before GitLab and the community managed to stamp out the expectation that it would be feasible, was that people tried to self-host an instance on a cheap SBC (e.g., a Raspberry Pi) or the smallest DigitalOcean plan and were surprised when it wasn't doable, while Gogs and Gitea handle this fine.

So to get a general idea of what sort of setup is expected in order to run a gitbucket instance, if you're running one, what are the relevant details?


Many thanks for the link, as Java/.NET dev it is surely a very good option to know about.


Isn't training in TensorFlow more effective than doing it via Spark?



Hi, I'm helping build a fairly vanilla e-commerce site in either React or Vue. I want it to be entirely server-side rendered with high SEO visibility. However (and this is the big point), I also want the same APIs exposed for mobile apps.

How do I do this? Any templates to explore this pattern? It is fairly straightforward in Rails, but in the world of React/Vue, it seems that the whole community is geared towards rich client-side applications with low SEO.


There are many examples, like Next.js. Or search for "isomorphic" or "universal" JS.
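A minimal way to try the Next.js route (the project name is illustrative; the mobile-facing API would live as separate endpoints that the server-rendered pages also consume):

```shell
# Scaffold and run a server-rendered React app:
npx create-next-app my-shop
cd my-shop
npm run dev

# Files under pages/ are rendered on the server for the first request
# (good for SEO), then hydrate into a normal client-side React app.
```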

