Looks like a nice little library, and very relevant to my interests. My personal "dream scenario" is to have true multisource P2P video streaming in the browser with no extensions required - with WebRTC, this seems quite feasible as implementations mature.
I guess it could be because + in front of an empty array returns 0. So when you do !+[] you get true, which coerces to 1. What you pasted is a bit of a labyrinth, but if someone bothered to keep track of it, they could definitely find ten 1s in there. It's not a why, it's a how.
I hope someone figures out the rest and gets a cookie! :D
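The coercion trick described above can be checked directly in any JS console. Here is a minimal sketch (the final line is an illustrative composition of the same idioms, not lifted from the pasted code):

```javascript
// [] -> "" via toString, then "" -> 0 via unary plus
console.log(+[]);    // 0

// !0 is the boolean true...
console.log(!+[]);   // true

// ...and another unary plus coerces true to the number 1
console.log(+!+[]);  // 1

// Obfuscators build larger numbers by string-concatenating these
// digits and coercing back, e.g. assembling 10 from "1" + "0":
console.log(+([+!+[]] + [+[]]));  // 10
```

Everything else in those snippets is just this same handful of coercions stacked deeper.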
Can someone tell me why 'Cat with a wry smile' is in Unicode? Presumably at some point someone thought that it would be useful to somebody else, hence its inclusion. It would be very interesting to hear the back-story behind such seemingly useless glyphs.
These are Emoji: a set of smileys/icons originally used by Japanese carriers. Apple included them on the iPhone, and in order to standardise them, Apple (successfully) requested that they be added to Unicode.
>But you know when the data is changing -- when an article has been updated and republished ... or when you've done another load of the government dataset that's powering your visualization. Waiting for a user request to come in and then caching your response to that (while hoping that the thundering herd doesn't knock you over first) is backwards, right?
>I think it would be fun to play around with a Node-based framework that is based around this inverted publishing model, instead of the usual serving one. The default would be to bake out static resources when data changes, and you'd want to automatically track all of the data flows and dependencies within the application. So when your user submits a change, or your cron picks up new data from the FEC, or when your editor hits "publish", all of the bits that need to be updated get generated right then.
You mean most things don't already do this? I've been working on a personal blog engine with this as one of the core ideas (basically all static assets and pages are compiled on edit), and I thought it was a pretty obvious way to go about it. Looks like I'm indeed not the only one to think of it, but it surprises me a bit that the idea is being presented as new.
For simple problem domains (Blog, Mom and Pop store website, etc) it's trivial to pre-generate content. For larger content systems you can run into a more complicated dependency tree. Then you have the choice between keeping the dependency logic accurate vs regenerating the entire content set on any change.
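As a rough sketch of what that dependency logic boils down to (all names here are hypothetical), the core is just a map from each data source to the outputs it feeds, consulted whenever a source changes:

```javascript
// Hypothetical bake-on-change core: map each data source to the
// static outputs that depend on it, and rebuild only those.
const deps = new Map([
  ['posts.json', ['index.html', 'archive.html']],
  ['bio.json',   ['about.html']],
]);

function pagesToRebuild(changedSource) {
  return deps.get(changedSource) || [];
}

// e.g. the editor hits "publish" and posts.json changes:
console.log(pagesToRebuild('posts.json')); // [ 'index.html', 'archive.html' ]
```

The hard part described above is keeping that map accurate as the dependency tree grows; once it drifts, the safe fallback is regenerating the entire content set.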
It also turns out that content sets that change infrequently but unpredictably are a pain to cache. You can cache them for a short time (as long as stale content can be tolerated), but then you lose cache effectiveness. Or you can cache them forever with some sort of generational/versioned cache, but that doesn't interface well with named, public resources. Telling your visitors and Google that it's yourdomain.com/v12345/pricing, not yourdomain.com/v12344/pricing, doesn't really fly.
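One common compromise (header values here are illustrative): keep the stable public URL on a short TTL, and push the cache-forever versioning down to sub-resources whose names visitors and Google never type:

```javascript
// The named, public page keeps its stable URL; a short TTL bounds
// how stale it can get after an unpredictable change.
const pageHeaders = { 'Cache-Control': 'public, max-age=60' };

// Content-hashed sub-resources (CSS/JS/images) can be cached for a
// year, because any change produces a brand-new URL such as
// /assets/app.3f9a2c.js referenced from the freshly baked page.
const assetHeaders = { 'Cache-Control': 'public, max-age=31536000' };
```

This doesn't solve the named-resource problem for the HTML itself, but it confines the short-TTL cost to one small document per page.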
I definitely concur with your surprise about it being novel though. I think that for many situations it's just easier to run extra boxes to handle the increased load of generating dynamic content on the fly over and over again. It's good for SuperMicro and AWS. It's not so good for the planet.
I'm very excited to see Jeremy's approach to addressing the problem.
The thing that annoys me about both this and Zencoder is that for people who are actually experienced with video encoding, there is absolutely no way to tweak e.g. the underlying x264 settings. Quite a few settings have no effect on decoding in any way but are pretty important in getting the most out of the video at a given bitrate (most notably the strength/mode of AQ and the psychovisual optimizations). In the case of AWS, there doesn't even seem to be any kind of "general" tuning available (like whether the content is film, animation, or extremely grainy - x264 has --tune settings for these among others, and Zencoder at least lets you access that option[1]), making it pretty much one size fits all. I could always rent a generic server and use that for my encoding needs, but it'd be much more convenient if these cloud transcoding services simply offered advanced configuration for people who know what they are doing.
Also, even for a "simple" cloud transcoding service, Amazon's offering is pretty limited in what it can do right now[2] - you can basically only encode H.264 & AAC in MP4 and define the profile, level and bitrate, and that's about it. Zencoder has many more options in comparison and is generally more transparent about what its encoding software actually does (sadly, when I asked them about getting access to x264 settings directly, they replied along the lines of "they could change and things might break for users!" - which I don't think would be an actual issue, since direct settings ought to be for advanced users only, and those users should expect things to change - plus Zencoder could just notify users of direct-settings changes before upgrading, so they'd have time to adjust their settings if necessary).
As is the case with every part of AWS, we add additional features and options over time based on customer feedback and requests. Please feel free to let us (or me -- jbarr@amazon.com) know what you need and I'll bring it to the team's attention within 30 minutes.
A really simple way to obtain very high quality per bit per second, given prior knowledge of the nature of the material (film or not, grainy or not, cartoon or not, etc) and the type of output desired (AVCHD, Blu-Ray, etc) is to install MeGUI, then pick one of the community-built encoding profiles for x264.
Choosing the right profile for the job is absolutely crucial. The combinations of x264 parameters can be pretty arcane, and they sometimes change from one x264 version to another. There's a pretty active community on forum.doom9.org maintaining collections of profiles for MeGUI, some of those are excellent.
E.g., it is totally within the realm of possibility to put two hours of 1080p content on a single-layer DVD (4.4GB), in a format compatible with any Blu-Ray player out there (AVCHD, a subset of the Blu-Ray standard that accepts DVD as the storage layer), while keeping video quality at a very high level - basically indistinguishable from commercial Blu-Ray discs. But using a good encoding profile, feeding the appropriate parameters to x264, is the single most important factor in achieving that goal.
MeGUI is hardly necessary - x264 has a good set of presets and tunes built in to begin with. --preset veryslow --tune film/animation/grain will already get you very far, beyond that pretty much the two most important things to possibly tweak are the strengths of AQ and psychovisual optimizations (--aq-strength and --psy-rd).
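As a concrete sketch of those knobs (the values below are illustrative starting points, not recommendations), an x264 command line using them might be assembled like so:

```javascript
// Assemble an x264 argument list from the flags mentioned above.
// All flag names are real x264 options; the values are examples only.
function x264Args({ preset, tune, crf, aqStrength, psyRd }) {
  return [
    '--preset', preset,                   // e.g. 'veryslow'
    '--tune', tune,                       // 'film' | 'animation' | 'grain'
    '--crf', String(crf),                 // quality-targeted rate control
    '--aq-strength', String(aqStrength),  // adaptive quantization strength
    '--psy-rd', psyRd,                    // 'rd:trellis', e.g. '1.0:0.15'
  ];
}

console.log(x264Args({
  preset: 'veryslow', tune: 'film', crf: 18,
  aqStrength: 0.8, psyRd: '1.0:0.15',
}).join(' '));
// --preset veryslow --tune film --crf 18 --aq-strength 0.8 --psy-rd 1.0:0.15
```

The point is how small the surface is: preset, tune, and two psy/AQ strengths cover most of what per-source tweaking buys you.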
>it is totally within the realm of possibility to put two hours of 1080p content on a single-layer DVD (4.4GB), in a format compatible with any Blu-Ray player out there (AVCHD, a subset of the Blu-Ray standard that accepts DVD as the storage layer), while keeping video quality at a very high level - basically indistinguishable from commercial Blu-Ray discs.
You might get away with an hour of almost-transparent content if it's not particularly bitrate-demanding, but two hours of live action will not look "indistinguishable from commercial Blu-ray discs". 5 Mbps High Profile L4.0 H.264 just won't look as good as the ~30-40 Mbps High Profile L4.1 H.264 commonly found on BDs (unless the BD is really screwed up). At 720p you'd get pretty good results, though.
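For reference, the arithmetic behind that ~5 Mbps figure is just disc capacity over runtime (using the ~4.4 GB single-layer figure quoted above):

```javascript
// Average total bitrate for 2 hours on a single-layer DVD (~4.4 GB),
// before subtracting audio and container overhead.
const bytes   = 4.4e9;
const seconds = 2 * 60 * 60;
const mbps    = (bytes * 8) / seconds / 1e6;
console.log(mbps.toFixed(2)); // 4.89
```

Halve the runtime and you get ~9.8 Mbps, which is why an hour of undemanding content can land close to transparent while two hours of live action can't.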
There needs to be WebM output support - Firefox and Chrome don't support MP4. And as always, it would really be helpful if S3 supported notifications, so you could automatically put a message into SQS or add a job to this new transcoding service when a file is uploaded to a bucket.
Chrome still supports MP4 as far as I know. They mentioned dropping support a couple of years ago but didn't.
Firefox is getting MP4 support. Firefox OS, and Firefox for Android on some devices, have it already. Support on some versions of Windows is in nightly builds, hidden behind a preference setting. Linux support is hidden behind a build switch. At some point, when these backends are stable, they'll be in normal builds.
I thought Chrome did drop it and Microsoft released a plugin?
Firefox getting MP4 support doesn't change anything: people are already using Firefox right now, and it doesn't support MP4 right now. The fact that the market share of non-MP4 Firefox will eventually end up small enough for most people to ignore doesn't mean MP4 is all people need today.
This isn't directly related to the blog post as it's about stuff available on OS X, but if you're on Windows I can very much recommend checking out Construct 2 by Scirra[1]. They have a pretty fantastic HTML5 game engine and an editor (and the whole "no programming required" is really just marketing - even though it uses a "visual" event system you still need to understand programming concepts like loops, conditions and such in order to make effective use of it). They have a feature-limited (no other limits though) free edition available for it too.
I could have done multiple test encodes, sure, but the problem in this case was that downloading several gigabytes of raw source material isn't exactly instant. And even if I tested with multiple clips, I doubt the conclusion would be that much different.
Because of texture, this clip benefits enormously from 8x8 transforms (as well as substantially from an activity-masking-aware encoder). On an intra frame in prior testing, Theora did enormously better than VP8 on this clip for these reasons. If your test was to compare an intraframe between vp8 / baseline h264 / and Theora, you would have concluded Theora was the best by a wide margin. But this would be an erroneous conclusion.
And sure, perhaps you'd get the same result on other clips. Over High Profile H.264, the only obvious format features that come to mind that could really let VP8 get ahead are the 'truemotion' intra-predictor and creative use of the synthetic reference frame (though I suppose the VP8 developers might have other suggestions), and I'd expect those features to be big wins only on a small number of clips, so it wouldn't be hard to miss the cases where VP8 really shines over High Profile H.264.
But you (or I) could have said that without doing the test at all, and there would be 100% fewer clueless people going around claiming that something was proven here that wasn't. Your opinion (or mine) is a fine thing, but it's not proper to launder an opinion as fact by dressing it up in an inadequate test.
>If your test was to compare an intraframe between vp8 / baseline h264 / and Theora, you would have concluded Theora was the best by a wide margin.
But it wasn't. I was comparing the visual quality of the whole video, and provided the full encoded clips for people to download and compare for that reason.
I am willing to do further test encodes, but have no interest in doing something like encoding all 28 HD test clips available on derf's test clip page[1], since as a purely visual comparison, especially with the actual encodes, it would be incredibly exhausting.
EDIT: I added a notice about the downsides of single clip comparison to the top of the post.
>I have no interest in doing something like encoding all 28 HD test clips, since as a purely visual comparison, it would be incredibly exhausting.
Science is exhausting. If you're not working hard, then you're likely to miss the interesting (counter-intuitive) results. In fact, finding counter-intuitive results is the whole point of science. If the truth were intuitive, explanations wouldn't need testing.
One problem is that even if I found that VP8 performed very well on one or two particular clips (out of the 28 HD test clips available), I couldn't say for sure why that is the case. There seems to be no clear information on which clips benefit from which kinds of features, and as I'm not an expert on video encoding technology, it'd be hard for me to deduce these things by myself. General conclusions could still be reached, obviously, but if I were going to such lengths, it'd suck if I couldn't get more detailed overall results.
Anyway, I brought up the subject to some Xiph folks over at IRC. Maybe in the future the test clips will come equipped with more detailed information to help in testing. It'd also benefit smaller scale tests, since it'd allow one to identify possible biases more easily.
Given that people do hundreds of test encodes when they actually use things like x264, I think that if you want to say anything general about these encoders you have to do more than one comparison.
H.264 isn't exactly "closed" per se - the standard itself is freely available[1], which is why we have great free and open source encoders and decoders for it. What you mean by "closed" is most likely just "patent-encumbered", which it most certainly is and which affects anyone wanting to use it commercially (at the moment you can use it freely for non-commercial purposes on the internet, but this may or may not change in the future).
By closed I mean not available for free use and distribution, for example in open source projects which can't pay any royalties. Maybe saying "non-free format" would be clearer, since you are right that closed is a bit ambiguous: it can also mean a format without a published specification, which needs to be reverse engineered to be used.
Also, I don't think that H.264 forbids only commercial use. Can you freely distribute its decoders and encoders? What about putting them in hardware for non-commercial use?
The fine print of the H.264 license says that as a consumer you can implement the codec without paying a license fee for non-commercial personal use. That does, however, mean free implementations like the one in FFmpeg are potentially liable, since being open source they are not restricted to consumer use.
You keep bashing anything not h.26x in multiple threads. MPEG-LA threatened to assemble an anti-VP8 patent pool ages ago, and that has yet to materialize. Show us the goods, and give a really darn good reason why the ideals behind wanting a royalty-free codec don't matter.
> Open source projects can't pay royalties. But people who use them can be forced to.
That's right, and that's the reason to avoid any closed codecs.
> The idea that VP8/VP9 will gain traction and remain royalty free is laughable.
What's laughable? VP8 remains royalty free, and VP9 will as well. In practice you can never guarantee that some submarine patent troll won't appear tomorrow to threaten you, but the same applies equally to H.264/H.265, so your argument is irrelevant: such a threat applies to virtually anything, and it doesn't mean one should stop innovating because of it.
That he has. My test clip choice was inspired by him[1], and I also link to another blog post of his[2] in the conclusion. That VP8 blog post is almost three years old now, though, and the comment I'm replying to in my article claimed that VP8 is better than H.264 "at this point". This is why I did my test with the latest and greatest encoders available for both formats today.
I think it would be more interesting to compare VP9 with HEVC (H.265). I know the VP9 experimental branch can easily be checked out and tested, but I don't know where to find HEVC encoders (although I'm sure they exist with some Google-fu).
Obviously there are problems with testing unfinished implementations, as:
A) they are generally VERY slow given that there's little optimization during the development phase, and these next gen video codecs are more demanding than the previous generation
B) a great deal of the quality comes from fine-tuning during implementation, not from the specification. For example x264 is a 'best-in-class' implementation of h.264, readily beating many other implementations of the very same specification.
As such, this fine-tuning is not likely to exist until the codec implementations mature, which likely means that what is available now of HEVC/VP9 serves only as very rough estimates.
>VP8 (which is also slightly better than h.264 at this point)
I'm sorry, but you have been misled - VP8 is not better than H.264, and the comparison you linked is flawed for multiple reasons: it doesn't state the exact encoder settings used, it doesn't provide the actual videos for users to compare and only shows one frame (for all we know, it might be a keyframe in the VP8 encode that got extra bitrate pumped into it while x264's frame didn't), it doesn't provide the source for test replication, and so on - just read these:
I can do a proper comparison between H.264 and VP8 if you or anyone else is interested, though it'll take at least a few hours (I intend to use the park_joy test clip found in derf's test clip collection[1]).
Also, On2 is famous for hyping up its products to heaven and has yet to match those claims, so I remain skeptical about VP9. There's also Xiph working on Daala, but right now it doesn't seem to be much beyond big words.
While I would love an open format to provide better quality than H.264 and even H.265, I wouldn't hold my breath for such a thing.