I find Semantic UI very interesting, but I have yet to figure out how to integrate it into my projects. It may just be me, but it feels like they force you to write your own styling in a separate repo, then generate the CSS and use it in your project, which makes no sense to me.
So, basically, you're saying that you don't want to mess with that setup and would rather use existing stuff... which you could achieve with anything, as long as someone else does the setup for you. I don't see where Docker is better for a developer in this case... providing a Vagrant box is exactly the same.
Startup time for a Docker container is far faster than for a VM. Also, you can run the exact binary state of production, which is helpful if you run into "works on my machine" types of problems.
Agreed, but my point was that Docker doesn't reduce dev setup time. Give a dev a good Vagrant config file, tell them to run vagrant up, and you get the same result as what you're describing. You can replicate the production state with Vagrant too (and with bash scripts, if we stretch it) and avoid "works on my machine" problems.
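To illustrate, a minimal Vagrantfile sketch; the box name and the provisioning script are assumptions, not anyone's actual config:

```ruby
# Minimal sketch: a dev clones the repo and runs `vagrant up`.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                  # hypothetical base box
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", path: "bootstrap.sh"  # hypothetical script that installs the stack
end
```

Check that into the repo and onboarding is just git clone plus vagrant up.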
I'm not saying that Vagrant > Docker. The way I see it, Docker is great if your infrastructure uses it all the way. If your prod setup is not dockerized, using Docker in dev seems counterproductive compared to spinning up a VM and provisioning it with Ansible or Puppet to replicate production. As @netcraft said, I don't see why I should "change my server architecture" to use Docker in dev.
If you have a complex stack (multiple services, different versions of Ruby/Python/etc., a DB, a search engine, and so on), it's a real pain to shove them all into a single VM. Once you have two VMs running, you have already lost to Docker on memory/space efficiency and startup time.
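As a rough sketch of the container-per-service alternative (the app image name is a placeholder; postgres and elasticsearch are the official Docker Hub images):

```sh
# One lightweight container per service instead of one monolithic VM.
docker run -d --name db     postgres
docker run -d --name search elasticsearch
docker run -d --name app --link db:db --link search:search myorg/app  # placeholder image
```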
I have yet to see a real, complex, distributed application that shares the exact same config in dev and production. I know that having the same versions of system libs in dev and prod can be a problem in some contexts and Docker can help with that, but it's not the only solution, and it does not cover the whole landscape (e.g., npm's package.json, pip's requirements.txt, etc.).
I totally agree that the startup time of a container is far shorter than that of a VM, but I don't see how Docker "removes all the trouble of running applications that you need for your development: databases, application servers, queues".
You still need to install and configure these services, make sure the containers can talk to each other in a reliable and secure way, and so on.
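For instance, even with classic container links (a sketch; myapp is a placeholder image), the wiring is still your problem:

```sh
docker run -d --name db postgres    # start the database container
docker run -d --link db:db myapp    # Docker injects DB_PORT_5432_TCP_ADDR / DB_PORT_5432_TCP_PORT
                                    # env vars, but the app still has to be written to read them
```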
First, I'm a dilettante. I haven't used Docker in production. I've really only set up a handful of containers.
That said, all of those fiddly library dependencies are where I struggle the most at work. If I could just build a Docker image and hand that off, it would save me a lot of grief with regard to getting deployment machines just right.
I do have a great deal of experience with legacy environments, and it seems like the only way to actually solve problems is to run as much as possible on my machine. Lowering that overhead would be valuable. Debugging a simple database interaction is fine on a shared dev machine. A WebLogic server that updates Oracle that's polled by some random server that kicks off a shell script... ugh. Even worse when you can't log into those machines and inspect what a dev did years ago.
If you've got a clean environment, there's probably not as much value to you.
I hear you about legacy systems. Two years ago, I had to support a Python 2.4 system that used a deprecated C crypto library, and I did not want to "pollute" my clean production infrastructure. Containers would definitely help with that scenario. The thought never occurred to me that Docker could be used to reproduce/encapsulate legacy systems, thanks!
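For example, a hedged Dockerfile sketch of freezing such a system (the base image choice, package name, and paths are all assumptions):

```dockerfile
# Sketch: encapsulate a legacy Python 2.4 stack in a period-appropriate base image.
FROM centos:5                          # CentOS 5 still ships Python 2.4
RUN yum install -y legacy-crypto-lib   # placeholder for the deprecated C crypto library
COPY legacy-app/ /opt/legacy-app/      # hypothetical application directory
CMD ["/usr/bin/python", "/opt/legacy-app/main.py"]
```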
At the company I work for, we went through all the trouble of getting our distributed backend application running in Vagrant using Chef so that we could have identical local, dev, and production environments.
In the end, it's just so slow that nobody uses it locally. Even on a beefy MacBook Pro, spinning up the six VMs it needs takes nearly 20 minutes.
We're looking at moving towards docker, both for local use and production, and so far I'm excited by what I've seen but multi-host use still needs work. I'm evaluating CoreOS at the moment and I'm hopeful about it.
I don't see how Docker solves the speed problem without a workflow change that could already be accomplished with Vagrant.
* Install your stack from scratch in 6 VMs: slow
* Install your stack from scratch via 6 Dockerfiles: slow
* Download prebuilt Vagrant boxes with your stack installed: faster
* Download prebuilt Docker images with your stack installed: fastest
The main drawback of Vagrant is that, afaik, it has to download the entire box for every update instead of fetching just the delta. That may not matter much on a fast network.
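Roughly (image and box names are placeholders):

```sh
docker pull myorg/app    # layered: a later pull re-downloads only the changed layers
vagrant box add app-v2 https://example.com/app-v2.box   # a new box version means the whole box again
```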
I have to disagree, although I'll admit that what counts as "trivial" is subjective. Sure, a container means you don't have to run another kernel. If the container is single-purpose, as Docker encourages, you skip running some other software like ssh and syslog as well. That software doesn't use much CPU or memory, though. I just booted Ubuntu 12.04 and it's using 53MB of memory. Multiplied across 6 VMs, that's 318MB, not quite 4% of the 8GB my laptop has. I'd call that trivial.
On the last project where I had to regularly run many VMs on my laptop, the software being tested used more than 1GB. Calling it 1GB total per VM and sticking with the 53MB overhead, switching to containers would have reduced memory usage by 5%. Again, to my mind that's trivial.
> No cross-site loading of any kind; all source material MUST come from the domain you are on. This would seriously break some sites but it would close large gaps in security and tracking.
This would probably break almost everything. Who doesn't use some sort of CDN nowadays?
A better alternative would be to promote the use of Content-Security-Policy, perhaps by requiring it be used in order to load source material from an alternate domain.
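For instance, a header like this (the CDN host is a placeholder) whitelists your own origin plus one explicit CDN and blocks everything else:

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; style-src 'self' https://cdn.example.com
```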
I must say that my Dell XPS 13 is a pretty good machine for development. I can take it anywhere without any problems. Sure, it's 13", but I could even go below 13".
On the plus side, it has an Ubuntu logo on it instead of that crappy Windows logo.
I don't know anything about iOS development and I have only a basic knowledge of how Android works (I'm a web dev), but is this something that Android handles really well with intents?
Yeah, this is a solved problem on both Android and Windows Phone, with Intents and Contracts respectively.
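On the Android side, the standard flow is an implicit Intent; a minimal sketch (the shared URL is a placeholder):

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

// Minimal sketch: sharing a link through Android's implicit Intent system.
public class ShareExample extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent send = new Intent(Intent.ACTION_SEND);
        send.setType("text/plain");
        send.putExtra(Intent.EXTRA_TEXT, "http://example.com/article"); // placeholder URL
        // The OS presents a chooser of every installed app that can handle the data.
        startActivity(Intent.createChooser(send, "Share via"));
    }
}
```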
It's silly that workarounds like this are necessary, but hopefully if enough people start using this (or something else like it) Apple will get the message that it's something users and developers actually care about.
I have no problem on my personal laptop (i7, 8GB RAM, SSD), but on my work laptop (i5, 6GB, HDD) it is horrible. I can wait up to 10 seconds before it appears... I just disabled the Super key binding on that computer.
On a very slow netbook, I noticed that Unity on Ubuntu 12.04 (the LTS version) opens quicker on the second and subsequent invocations than on the first. However, there is still a significant delay. I suspect the searches and the time taken to build the various lenses are the reason. GNOME Shell appears to be a lot snappier because the search does not happen on that first screen.
Does anyone remember Mac OS X's Spotlight? It took ages when first introduced (in 10.4) and then got loads faster on the same hardware. I'm hoping the same will happen with Unity.