As someone still nowhere near an expert in deep learning, I'd find a little more description of the examples really helpful for diving into this. What do the inputs/outputs look like, and which configurations apply best to which situations?
As it stands, I just see acronyms and uncommented code, so it doesn't describe much.
Still, following it would require some familiarity with NumPy and scikit-learn (or other libraries in the same spirit), as well as some experience with neural networks.
Yeah, that's what I was referring to: still a lot of missing pieces. What type is X_train or Y_train? A list of dicts? Same question for the prediction output.
I know enough about deep learning to grok the overall concepts and structure, but your docs aren't telling me anything about how to ACTUALLY get started with your lib.
Yep, as he mentioned, you need some familiarity with scikit-learn or similar APIs; take a look at [1] for example. In essence, X_train is a 2D array with shape (n_samples, n_features), and Y_train is usually of shape (n_samples, 1), the same as the prediction output. Both lists of lists and numpy arrays are normally accepted, and even a generator of samples works as long as it yields a 2D-like structure (there's a small sketch at the end of this comment). I would say that if this is not obvious to you, maybe you should start with something more basic, like linear models in scikit-learn, before jumping into deep learning.
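For instance, here's a minimal sketch of those shapes (the numbers are made up, purely to illustrate the layout):

    import numpy as np

    # 3 samples, 4 features each: shape (n_samples, n_features)
    X_train = np.array([[5.1, 3.5, 1.4, 0.2],
                        [4.9, 3.0, 1.4, 0.2],
                        [6.2, 3.4, 5.4, 2.3]])

    # One target value per sample: shape (n_samples, 1)
    Y_train = np.array([[0],
                        [0],
                        [1]])

    # A plain list of lists with the same layout is normally accepted too:
    X_train_as_lists = [[5.1, 3.5, 1.4, 0.2],
                        [4.9, 3.0, 1.4, 0.2],
                        [6.2, 3.4, 5.4, 2.3]]

    print(X_train.shape)  # (3, 4)
    print(Y_train.shape)  # (3, 1)

The prediction output follows the same convention: one row per sample.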
No need to be dismissive. The getting-started guide linked to makes no mention of scikit-learn, so yes, I don't even know what I don't know. scikit-learn is not a prerequisite for machine learning; it's simply one way to approach it.
Sorry if it sounded like that. What I meant is that the 2D matrix representation with shape (n_samples, n_features) actually goes beyond scikit-learn and Python (e.g. data frames in R or Julia); it is the standard representation of data in machine learning, so it is assumed that someone who wants to do deep learning is already familiar with it. That is why I thought you should start with something simpler than deep learning to get used to these concepts. scikit-learn is a good option because it has more tutorials/examples/videos and more beginner-friendly documentation in general.
In terms of speed, I don't have any benchmarks available. But it's using Theano under the hood, which is well optimized and should be very competitive with Torch and Caffe.
Love this package. I was using PyBrain for a recent project, but I had an awful time getting it to work and found the performance equally awful. After stumbling upon Keras, I was up and running in minutes, and the huge performance boost from Theano allowed me to actually finish on time. Thanks for putting this together!
Online learning, if that's what you're trying to do, can definitely be done with Keras: you would just feed samples to the model individually or in small batches using the method 'model.train(X_batch, y_batch)'.
If you raise this issue on the mailing list or in the GitHub discussion, you'll get more help and advice.
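To make the batch-by-batch idea above concrete, here's a rough sketch. The architecture and the data stream are made up, and the code is written against a later Keras API, where the batch-level method is called train_on_batch rather than model.train:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    # Illustrative model: 20 input features, binary target.
    model = Sequential()
    model.add(Dense(32, activation='relu', input_shape=(20,)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='sgd')

    def incoming_batches():
        # Hypothetical stand-in for a real data stream.
        for _ in range(100):
            X_batch = np.random.rand(8, 20)                 # 8 samples per batch
            y_batch = np.random.randint(0, 2, size=(8, 1))  # one label per sample
            yield X_batch, y_batch

    # Online learning: update the model one small batch at a time.
    for X_batch, y_batch in incoming_batches():
        loss = model.train_on_batch(X_batch, y_batch)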
Not currently. We are looking to introduce easy interfacing with Spearmint, a library that does Bayesian hyperparameter search. This should be part of the v1 release.
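Until that lands, purely as an illustration of the shape such an integration takes: the HIPS Spearmint implementation, as I understand it, repeatedly calls a user-supplied main(job_id, params) function, where params is a dict of proposed hyperparameter values (declared separately in a Spearmint config file), and minimizes the returned number. Everything below is hypothetical (the variable names, the toy data), and the Keras calls target a later API than the one in this thread:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import SGD

    def main(job_id, params):
        # Spearmint entry point (assumed convention): train with the
        # proposed hyperparameters, return the validation loss to minimize.
        lr = float(params['learning_rate'][0])  # hypothetical variable names;
        n_hidden = int(params['n_hidden'][0])   # they would be declared in the config

        # Toy data standing in for a real dataset.
        rng = np.random.RandomState(0)
        X = rng.rand(200, 10)
        y = (X.sum(axis=1) > 5).astype('float32').reshape(-1, 1)

        model = Sequential()
        model.add(Dense(n_hidden, activation='relu', input_shape=(10,)))
        model.add(Dense(1, activation='sigmoid'))
        model.compile(loss='binary_crossentropy', optimizer=SGD(learning_rate=lr))

        history = model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
        return history.history['val_loss'][-1]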
Yup, got it working on 14.04 here. My guess is you're having issues setting up the scientific Python stack. You might want to try the Anaconda distribution (https://store.continuum.io/cshop/anaconda/) if you continue having problems, or follow the instructions on the SciPy website for installing via apt-get (http://www.scipy.org/install.html). The apt-get install will pull in some binaries and header files that you won't get from pip. After installing from apt-get, you can then run the setup.py file and things should go much more smoothly.
The only other apt-get package I remember pulling down was the HDF5 headers: "sudo apt-get install libhdf5-dev"
Thank you. I think I had most of the setup complete through apt-get before I noticed this post, but I installed Anaconda anyway to see how it goes. It seems to have everything firing as it should, and it was a much smoother install.
@fchollet: in case you didn't notice the post back then, you might gain something from those comments, too. :)