This is just speculation, but it's probably nothing special: Imgix should be doing their image processing with GPGPU technology (CUDA or OpenCL), which is available pretty much anywhere.
Technically Apple has had their Metal API available for a while now but I kind of doubt they're using that for image processing.
The results on every photo I tried were ludicrously wrong. The closest it got on photos of me was about five years older than me, but it ran anywhere up to 20 years too old.
The same thing happened with photos of others I tried: a 22-year-old man was estimated to be 49, and a fifty-year-old woman (who is generally considered to look young for her age) to be 69. Those are ages that no one would ever guess based on looks.
So I'd say the algorithm needs a little more work.
Yep. If you look at the "How?" and "Performance vs Quality" sections, you can see that you need to render the scene six times to get the surrounding environment, so all you would need to do is make a shader for the projection[0]. (Rendering the scene six times is pretty common in graphics for generating light probes for dynamic lighting and global illumination.)
I suspect that with a modern GPU implementation (e.g. Vulkan) and some other minor optimizations, this could easily run in real time.
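For anyone who wants to see what that looks like, here's a rough sketch of the six-view setup (Python/numpy purely for illustration; a real implementation would live on the GPU, and `render_face` here is a made-up stand-in for the actual renderer). The face orientations follow the usual OpenGL cube-map convention with a 90° FOV per face:

```python
import numpy as np

def look_at(eye, target, up):
    """Build a right-handed view matrix (gluLookAt-style)."""
    f = target - eye; f /= np.linalg.norm(f)          # forward
    s = np.cross(f, up); s /= np.linalg.norm(s)       # right
    u = np.cross(s, f)                                # corrected up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

# Conventional cube-map face directions: (forward, up) for +X, -X, +Y, -Y, +Z, -Z.
FACES = [(( 1, 0, 0), (0, -1, 0)), ((-1, 0, 0), (0, -1, 0)),
         (( 0, 1, 0), (0,  0, 1)), (( 0, -1, 0), (0, 0, -1)),
         (( 0, 0, 1), (0, -1, 0)), (( 0,  0, -1), (0, -1, 0))]

def capture_environment(probe_pos, render_face):
    """Render the scene six times, once per cube-map face, around probe_pos."""
    probe_pos = np.asarray(probe_pos, dtype=float)
    return [render_face(view=look_at(probe_pos,
                                     probe_pos + np.array(fwd, dtype=float),
                                     np.array(up, dtype=float)),
                        fov_degrees=90.0)             # 90° covers exactly one face
            for fwd, up in FACES]

# Dummy usage: a fake renderer that just returns the view matrix it was given.
views = capture_environment((0, 0, 0), lambda view, fov_degrees: view)
```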
I believe functional programming does have its place in games and graphics, but I don't think it should completely replace imperative/OO programming (yet). Some aspects of game/graphics development map better to OOP (e.g. graphics APIs) and should be handled with those tools.
I think, for now, a hybrid OOP/FP approach would be best. Right now, I'm prototyping a game engine in F# and C# to see if one could be viable. Incorporating FP has definitely simplified parts of the code and made them easier to reason about, but I'm not sure the tools and languages work well enough with each other yet.
Also see [0] for some thoughts about FP and game development.
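To make the hybrid idea a bit more concrete, here's a toy sketch (in Python rather than the F#/C# I'm actually using, and with made-up names): the simulation step is a pure function from old state to new state, while the loop, timing, and draw calls stay in an imperative shell.

```python
from dataclasses import dataclass, replace

# Pure "functional core": immutable state, update returns a new state.
@dataclass(frozen=True)
class World:
    player_x: float
    player_vx: float

def step(world: World, dt: float) -> World:
    """Pure simulation step: no mutation, no IO, easy to test and reason about."""
    return replace(world, player_x=world.player_x + world.player_vx * dt)

# Imperative "OO shell": owns the loop, the clock, and the graphics API calls.
def run(frames: int = 3) -> None:
    world = World(player_x=0.0, player_vx=2.0)
    for _ in range(frames):
        world = step(world, dt=1.0 / 60.0)
        print(f"render player at x={world.player_x:.3f}")  # stand-in for real draw calls

if __name__ == "__main__":
    run()
```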
Not DirectX-only per se, but DirectX first and OpenGL later.
I think a lot of games featured in Humble Bundles used to be that way. Many games used to be (and some still are) released only on Windows at first, with Mac/Linux ports coming later.
The title of the article is really sensationalized. The resolution is still going to pass, just without the "covered business method" (CBM) provision. People in Congress support CBM and are actively trying to find a way to make it work. Plus, we could always try to reform software patents at the source and make it harder to grant lower-quality patents.
Shading isn't terribly hard, and you could probably afford to be a little sloppy in this case. I would suspect that you'll end up spending about twice as long on each animation as you would have before.
BUT, with that 2x effort, you're getting a significant improvement in visual quality. The alternative would be the Donkey Kong Country option: model the character in 3D (easily 10x more effort than flat 2D animation, with a much more expensive work force and software), bake in the lighting, and generate gigantic animation sheets. Your asset library will explode in size. The games that have done this have tended to employ significant compression on the images, which can negatively impact visual quality.
Besides, comparing the 3D rendering technique with pixel art is apples to oranges. There's really nothing like artisanally crafted, locally sourced pixels made with love ;)
Yeah, as a technical guy I would tend to go with the full 3D route. It might be 10x the upfront work, but having a fully automated pipeline might save you a lot of work down the line. For example, just changing the color of a character could take as few as two clicks in the full 3D solution, whereas you might have to manually go through each sprite sheet with the other route.
And technically you could export the sprite sheet with however many frames you want (and easily raise or lower that number later) while still getting the exact same results as the Sprite Lamp solution. And of course artists could go in and manually make any changes they want.
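Something along these lines, roughly (Python sketch; `render_pose` is a hypothetical stand-in for whatever the 3D renderer actually is): sample the animation at evenly spaced times and pack the renders into a sheet, so re-exporting at a different frame count is a one-number change.

```python
import numpy as np

def export_sprite_sheet(render_pose, duration_s, frame_count, frame_w=64, frame_h=64):
    """Sample a 3D animation at `frame_count` evenly spaced times and pack the
    rendered frames into one horizontal sprite sheet (returned as an array)."""
    sheet = np.zeros((frame_h, frame_w * frame_count, 4), dtype=np.uint8)
    for i in range(frame_count):
        t = duration_s * i / frame_count           # evenly spaced sample times
        frame = render_pose(t, frame_w, frame_h)   # hypothetical renderer callback
        sheet[:, i * frame_w:(i + 1) * frame_w] = frame
    return sheet

# Example with a dummy renderer: going from 8 to 16 frames is one argument change.
dummy = lambda t, w, h: np.full((h, w, 4), int(255 * t) % 256, dtype=np.uint8)
sheet_8  = export_sprite_sheet(dummy, duration_s=1.0, frame_count=8)
sheet_16 = export_sprite_sheet(dummy, duration_s=1.0, frame_count=16)
```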
It's interesting hearing the perspective of the artists. Thanks.
If you're hand-crafting sprites, there's a good chance you're using a palettized paint program, which makes changing a character's colors a matter of two or three clicks.
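In code terms, an indexed-color image stores a palette index per pixel rather than a color, so a recolor never touches the pixel data; you just edit palette entries. A toy sketch (Python, made-up data):

```python
import numpy as np

# Indexed-color sprite: each pixel stores a palette index, not a color.
pixels = np.array([[0, 1, 1, 0],
                   [0, 2, 2, 0]], dtype=np.uint8)

palette = np.array([[  0,   0,   0,   0],   # 0: transparent
                    [200,  40,  40, 255],   # 1: red tunic
                    [240, 200, 160, 255]],  # 2: skin tone
                   dtype=np.uint8)

# "Two or three clicks": recoloring the character is just editing palette entries.
green_palette = palette.copy()
green_palette[1] = [40, 180, 60, 255]       # tunic goes green; pixel data untouched

original  = palette[pixels]        # expand indices to RGBA for display
recolored = green_palette[pixels]
```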
Also, there are major stylistic advantages to drawing it by hand. Check out the baked-in motion blur on Sonic's feet in this sprite rip of Sonic 1: http://www.spriters-resource.com/genesis_32x_scd/sonicth1/sh... A while back there was a 2.5D Sonic game, and its motion had a lot less impact because no attempt was made to replicate the motion blur.
Plus, of course, if you're just drawing it you don't have to worry about whether it actually makes sense - a lot of the more stylized cartoon characters are VERY hard to build spot-on 3D models of, because they're full of weird abstractions that only make sense in the 2D plane.
And finally, some people just don't like modeling stuff in 3D.
You could easily get the best of both worlds. You don't have to bake in the lighting: just export the light maps and then use them with a tool like Sprite Lamp to merge the lighting in dynamically at runtime.
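The runtime merge itself is basically just per-pixel lighting in a fragment shader. A rough CPU-side sketch of the math (Python/numpy; this assumes the exported maps are normal maps like Sprite Lamp generates and uses plain Lambert shading, which may not match what any particular engine does):

```python
import numpy as np

def light_sprite(albedo, normal_map, light_dir, ambient=0.2):
    """Combine a hand-drawn sprite (albedo) with its normal map at runtime.
    albedo:     (H, W, 3) float array in [0, 1]
    normal_map: (H, W, 3) float array in [0, 1], encoding XYZ as RGB
    light_dir:  3-vector pointing from the surface toward the light
    """
    normals = normal_map * 2.0 - 1.0                       # decode RGB -> [-1, 1]
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    lambert = np.clip(normals @ l, 0.0, 1.0)               # N . L per pixel
    return albedo * (ambient + (1.0 - ambient) * lambert)[..., None]

# Toy usage: a flat gray sprite lit from the upper left.
albedo = np.full((4, 4, 3), 0.7)
normal_map = np.full((4, 4, 3), 0.5); normal_map[..., 2] = 1.0  # normals facing the camera
lit = light_sprite(albedo, normal_map, light_dir=(-1.0, 1.0, 1.0))
```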