All the steps needed to create deep dream animations on EC2 (github.com/johnmount)
72 points by jmount on July 10, 2015 | 28 comments


I really wanted to run the deep dream and deep dream animation scripts people have created. I am now re-sharing the instructions I found (with links to original guides). It isn't an automated install. You pretty much have to paste the lines one by one. But I did replace any "edit" steps with append or patch.
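For example, an "edit this file" step becomes an append or a patch, roughly like this (the file names below are placeholders, not the actual ones from the recipe):

    # append the line the guide wants instead of asking you to open an editor
    echo "EXTRA_FLAG=1" >> some_config_file

    # or apply a saved diff for bigger edits
    patch -p0 < some_fix.patch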

To do this on a fresh Ubuntu EC2 g2 instance there are a lot of steps, but I have tested them and put them all in one place (with links to the original sources and guides). I have CUDA up but not cuDNN, as I haven't found a way to legitimately download cuDNN without registering on the NVIDIA website.

Again: credit to the actual creators and all the original guide authors.


For better or for worse, with shell, the line between "a list of instructions to be typed in" and "shell script" is very, very thin.

Toss a

    #!/bin/bash

    set -e
on top of that, call it "setup.sh", and give it a runthrough, and you're halfway to it being as "automated" as it needs to be to start with. You're far more likely to get people PR'ing a running shell script than dead text files.

(set -e will make the script stop when something returns a non-zero exit code. Despite the temptation, given the size of this script I do advise doing your best to work with it... this is just the sort of script where, if you let it blunder ahead blindly, when something doesn't work you'll need an hour just to figure out what didn't work, let alone fix it.)
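Something like this, as a bare sketch (setup.sh is a name I just made up, and the package list is a placeholder for whatever the pasted steps actually install):

    #!/bin/bash
    set -e    # stop at the first command that fails
    set -x    # echo each command so you can see where it died

    sudo apt-get update
    sudo apt-get install -y build-essential git
    # ...one pasted step per line from here on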


Some of the steps require user input. Is the "set -e" for dealing with that? Also at least one reboot is needed, the script depends on a lot of external sources being up, and I have seen different behavior on the kernel upgrade steps.


No, see the edit above. But there are ways to handle user input in shell; see Google. (By which I mean I literally do not personally know the best answer, and that's what I'd be doing anyhow if I tried to explain it here.)
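A plain read prompt is the usual starting point; this is only a sketch, not something from the recipe:

    read -r -p "Continue with the kernel upgrade? [y/N] " answer
    if [ "$answer" != "y" ]; then
        echo "Stopping here."
        exit 1
    fi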


Thanks for updating and organizing all of this. I got an EC2 instance with CUDA and cuDNN working yesterday, but it took some time. Having all of this in one place is excellent.


These steps are cool and very detailed; also check out https://github.com/VISIONAI/clouddream which is containerized, so you can try it on a local machine as well as on EC2.


Thanks, that might be the better option for a lot of people. I'll add that to the readme.


That's pretty insane. It makes me really appreciate that I get to use package managers and Puppet/Vagrant scripts for most of the stuff I program.


Yeah, I don't normally "do my own infrastructure" (I know, "it shows"). Usually the client or vendor has supplied the build. And I didn't want to commit to a package manager early in the exercise (in case it turned out to be incompatible with something one of the dependent projects did). So I just tried to start from the Caffe instructions and then fix things that broke.

The main components turned out to be NVIDIA/CUDA, Python, Caffe, and ffmpeg.

I was very surprised how many steps ffmpeg took.
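The payoff step once it is built is short; something like the following, assuming frames named frame_0001.png and so on (the exact flags may need tweaking):

    ffmpeg -framerate 24 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p dream.mp4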


Would you be willing to publish an AMI of the completed image? Seems like that'd make it really easy to run without all of the package management work.


Yes, I have a running instance up now in US West. If it's free to do and somebody could shoot me instructions, I'll make it an AMI (after lunch; on the way out the door right now).


Actually I started looking into it. I don't think I am going to make the AMI.

First, I am not sure whether the NVIDIA driver licenses allow this. Second, I would have to tear down my current run and pay S3 charges to distribute the AMI. Third, the instructions are long and ugly ( http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-... ), even after having worked through the setup steps. Finally, as people have mentioned in other comments, there is a ready-to-go Docker image for people to play with.

Sorry about that.


No problem.

For what it's worth, I've had luck just using the AWS EC2 web console, and choosing "Create Image" and "No reboot" from the instance's context menu. That spits out an EBS snapshot and an EBS-backed AMI based on the snapshot, which is probably what you were after. I went through the instance store image creation steps a long time ago for a project that deployed on custom AMIs and it was pretty convoluted; we ended up going EBS-backed and skipping the instance store altogether.
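The CLI equivalent, if you have the aws tool configured, is roughly this (the instance id and image name are placeholders):

    aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "deepdream-caffe-cuda" --no-reboot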


I now list other people's Docker images and AMIs in my README; that should help.


This is really cool, but honestly, I'm getting a bit tired of seeing all those "eyes". I want to feed it my own training image set. Is that possible?

Alternatively, is there a way to specify I don't want to enhance on certain outputs, like "dogs" and "faces"?


Thanks, getting Caffe onto the PYTHONPATH was the bit I needed.
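For anyone else who hits it, the step is roughly this, assuming Caffe was built under ~/caffe (adjust the path to your checkout):

    export PYTHONPATH=$HOME/caffe/python:$PYTHONPATH
    python -c "import caffe; print(caffe.__file__)"   # sanity check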


Or... Just use deepdreamer.io


That is definitely the more convenient option for running the defaults, but I think this is for a different use case. If someone wanted faster results and full access to edit the Python code and Caffe models, this would be the better option.


It's a bit better than the defaults because it generates many at once. But yeah, if you really wanna hack on it...


[flagged]


> Slapping some half-baked shit onto Github isn't open source. It's littering.

What a negative attitude...

Every programmer has written plenty of shitty code.

I know that I wrote lots of garbage code when I was a teenager just learning how to code. Unfortunately that was long before GitHub (or even Git) was a thing, and most of that code is either lost forever or inaccessible on old floppy disks, etc. But today I would love it if my shitty BASICA text-based games and C code full of memory leaks still existed somewhere like GitHub, where I could look back at it and see how far I've come.

Projects like this one aren't intended to be open source masterpieces. They are fun learning projects by people who enjoy building stuff. If you don't like it then don't pay it any attention and build your own that is better. At least this guy put something together and documented the experience so someone else could follow up and improve on it if they wish.


[flagged]


Congratulations, you were some sort of child prodigy who could code perfectly from day one. Or maybe you just started earlier.

What exactly are you trying to achieve by hating on someone who's done nothing but share what they've learnt with other people? He isn't making you run this. He didn't even make you read it.


[flagged]


These comments break the HN guidelines. Thoughtful criticism is fine, but dumping all over someone else's work is not fine. Please don't do that here.


I agree that the comment two upthread is inappropriately personal.

Several people downvoted all my comments in bulk, including the one below that's a purely technical explanation of using multi-CPU/GPU Caffe. I personally do not care about points (moreover, the same comment in another thread more than offset the 'karmic punishment' in this one), but since you're here, please have a look at the way some people are treating the downvote button.

I will keep things up a level and more broadly constructive in the future.


I didn't write any of the Python code you are complaining about. The install steps do use environment variables (and in fact set some in the bashrc). The install steps are not a script for reasons I gave in other comments in this thread (reboots, remote dependencies). You mention elsewhere that the effect includes JPEG artifacts if you use JPEG as your intermediate state. That is unfortunate, so I have added links to other Python scripts for making movies so more variation is available.

I started this because none of the instructions I could find actually worked or had prominent enough links into the original BVLC source and wiki. And as for why I did not submit to somebody else's GitHub repo or wiki: mostly so I wouldn't have to beg to have a pull request accepted or a wiki edit not reverted.


I apologize if you felt I was picking on you personally; however, I was in the same position as you. Nothing out there works, now including your recipe.

Meanwhile, the top GitHub issue for your referenced Python project (also on the HN home page) is:

"Unable to run script with default arguments, get an invalid syntax exception."

I poked in to see if he'd managed to rename "avconf" to "avconv" yet.

Right below:

  print("SHITTTTTTTTTTTTTT You're running CPU man =D")
I found this:

  # this line is not tested cuz i don't have avconv :(
It's just all so gloriously half-assed. Sorry if you got caught in the crossfire, but it's hard to soar with the eagles when you're hanging out with turkeys.


$2.60/hr is way overpaying. For this task people should definitely be using spot instances, which are $0.32/hr. Big difference! It seems to me they should also switch from g2.8xlarge to g2.2xlarge, because Caffe does not benefit much from multiple GPUs. That brings the cost to $0.07/hr; more than an order of magnitude difference.
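For anyone who has not used them, a spot request from the CLI looks roughly like this; the AMI id and key name are placeholders, and the price cap is whatever you are comfortable with:

    aws ec2 request-spot-instances --spot-price "0.35" --instance-count 1 \
        --launch-specification '{"ImageId": "ami-xxxxxxxx", "InstanceType": "g2.2xlarge", "KeyName": "my-key"}'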


Actually, multiple instances of Caffe rendering these models are working well for me. I'm running 16 processes in parallel across 8 GPUs and everything fits. (Training might not work as well.)
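From the shell that looks roughly like the following; dream.py and its arguments are placeholders for whatever render script you are running, and CUDA_VISIBLE_DEVICES pins each process to one GPU:

    # two processes per GPU across 8 GPUs
    for gpu in $(seq 0 7); do
        CUDA_VISIBLE_DEVICES=$gpu python dream.py --chunk $((gpu * 2))     &
        CUDA_VISIBLE_DEVICES=$gpu python dream.py --chunk $((gpu * 2 + 1)) &
    done
    wait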

Spot instances are great until a random hedge-fund price spike sneaks in and deletes all your work.

Your guidance for people just noodling around is right on the mark, though! The 2xlarge machines are fine for playing, and once you get the AMI set up (hint hint) it's a two-minute upgrade.


You might want to re-read the HN guidelines.



