To be fair, there were many constraints in MacPaint - color being the most obvious example. It was a limited drawing program. Put into context, I do believe the analogy works - you start simple (box, few components, limited layout) and evolve to include malleable 3D printed materials and generated electronics.

Disclosure - I wrote the post.


Quick question, perhaps: what's the maximum computing power, especially RAM, you're thinking of offering?

If it's just microcontrollers, my interest is somewhat limited. What I'm after is something I could reasonably put a Lisp on (code as data wants RAM, even if a lot can live on flash) ... well, looking at Wikipedia's list of Arduino CPUs, 96 KiB is a very qualified maybe; 64 MiB is a lot more like it.


You're forgetting the first wave of PCs - Apple II, TRS-80, TI-99, C64, etc. These machines had very little modularity compared to the wave of beige boxes that followed the IBM PC standard. I think Christensen even uses that as an example.


The S-100 bus-based machines before those were very modular, if perhaps not PCs in the same sense.


We had to lock down the settings on the site to make it as light as possible. Consequently, you're limited to 75 frames. It's basically built for very short routes spinning around objects. If you want to do long routes or crazy camera movements, grab the source (https://github.com/TeehanLax/Hyperlapse.js) and roll your own solution. The API is really simple and versatile.
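
Roughly, a minimal setup looks something like this - treat it as a sketch from the README rather than the exact current API, since option names may have shifted:

    var hyperlapse = new Hyperlapse(document.getElementById('pano'), {
      lookat: new google.maps.LatLng(37.81, -122.48), // point the camera stays fixed on
      millis: 50,      // ms per frame
      max_points: 100, // frame cap (75 on the hosted site)
      zoom: 1
    });

    hyperlapse.onLoadComplete = function() { hyperlapse.play(); };

    // Feed it a route from the standard Google Maps Directions service:
    new google.maps.DirectionsService().route({
      origin: new google.maps.LatLng(37.80, -122.47),
      destination: new google.maps.LatLng(37.83, -122.48),
      travelMode: google.maps.TravelMode.DRIVING
    }, function(response, status) {
      if (status === google.maps.DirectionsStatus.OK) {
        hyperlapse.generate({ route: response });
      }
    });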


Thanks for creating Hyperlapse - it's awesome and something I've been looking for! Is it possible to have multiple look-at points along a hyperlapse, or would I have to stack multiple hyperlapses together to look first at one point, then at the next?


You could play around with the Three.js camera object. We made a separate viewer for the video team that had more complicated camera controls (and higher max frame + zoom level).
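
As a sketch of what I mean - assuming you can get at the viewer's THREE.PerspectiveCamera, you could interpolate between two look-at targets as playback progresses (the variable names are illustrative, not part of the Hyperlapse.js API):

    var pointA = new THREE.Vector3(0, 0, -100);
    var pointB = new THREE.Vector3(100, 0, 0);

    // progress runs 0..1 along the route
    function updateCamera(camera, progress) {
      var target = new THREE.Vector3().lerpVectors(pointA, pointB, progress);
      camera.lookAt(target);
    }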


... and you wouldn't release that separate viewer, would you? I'm planning on using a hyperlapse for a recent roadtrip through the US, including photos. I also want to create a video from the hyperlapse and add the photos via After Effects.


You can get the idea from the example in the repo with dat.GUI strapped on: http://tllabs.io/hyperlapse/examples/viewer.html


Thanks, it does look better for short distances, but it still seems very fast! It seems to be around 3 seconds for the 75 frames; I'd like to see it at half that speed or less.


I should add, that's a maximum of 75 frames. We check if there are duplicate panos on the same route and stagger them depending on distance, so you could get fewer frames. It's set to 50 ms per frame at the moment.
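
The arithmetic is just frames times frame duration, so halving the playback speed means doubling the per-frame delay:

    75 * 50 / 1000;  // = 3.75 s for a max-length route at 50 ms/frame
    75 * 100 / 1000; // = 7.5 s at 100 ms/frame, i.e. half speed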


Ah, that'll be the problem I was having - some areas don't have a high enough frequency of images available to make a good hyperlapse.


It's very hit or miss. We've been calling the good ones 'finds' because they're fairly rare.


It'd be neat if there were a way for people to highlight their 'finds'. You could then have a randomizer option similar to www.mapcrunch.com.


Pretty neat. I can imagine this syncing with GPS navigation and providing street previews for drivers.


The Google Maps navigation already does that, albeit only when you're closing in on the target. But it's still very useful when looking for a restaurant or a company.


I'm aware they show a snapshot of the destination upon arrival, but I'm not aware of any functionality for video street previews through the upcoming street(s) at any point in the route.


That requires the driver to manually shift around an area, though. Something that automatically hyperlapses through the whole next street or so would be neater.


This was just a quick shader port/hack from an openFrameworks library I was working on. You can see the LUT texture generation here (which is done on the fly in oF): https://github.com/TeehanLax/ofxAsciiArt/blob/master/src/ofx...

All of this is pulled from Sol's TextFX library. The character bitmap actually doesn't line up with Sol's table, which is why you see weird characters popping up in places. The main goal was just to get image-to-text working on the GPU - I've typically done it on the CPU. Over time I hope the mess gets cleaned up.
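
The gist of the GPU path, as a heavily simplified sketch (this is not the actual ofxAsciiArt shader - the names, the single-tap brightness key, and the LUT layout are all illustrative): reduce each character cell to a key, use the key as a coordinate into a lookup texture that says which glyph to draw, then sample the glyph atlas.

    // Fragment shader sketch, embedded as a JS string (WebGL 1 / GLSL ES 1.0):
    var asciiFrag = `
      uniform sampler2D uImage;  // source image
      uniform sampler2D uLUT;    // key -> glyph position in the atlas
      uniform sampler2D uFont;   // character bitmap atlas
      uniform vec2 uCells;       // character grid size, e.g. vec2(80.0, 45.0)
      uniform vec2 uGlyphSize;   // size of one glyph in atlas UV space
      varying vec2 vUv;

      void main() {
        vec2 cell = floor(vUv * uCells) / uCells; // top-left of this cell
        vec2 inCell = fract(vUv * uCells);        // 0..1 inside the cell

        // One tap at the cell center as the key; the real thing samples
        // quadrants to build the 4-component key discussed below.
        vec3 c = texture2D(uImage, cell + 0.5 / uCells).rgb;
        float lum = dot(c, vec3(0.299, 0.587, 0.114));

        // LUT row 0 maps brightness to the atlas position of a glyph
        vec2 glyph = texture2D(uLUT, vec2(lum, 0.0)).rg;

        gl_FragColor = texture2D(uFont, glyph + inCell * uGlyphSize);
      }
    `;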


Ah, I see what I was missing: there are four components in the key (r,g,b,a), and each one corresponds to how filled-in each quadrant of the character is. But since each component is 0..16, and the hash table lookup is 256x256, shouldn't packColor be defined as (color.r + 16.0 * color.g + 256.0 * color.b + 4096.0 * color.a)?
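
To spell out the packing I mean - assuming 16 levels (0..15) per component, in plain JS:

    function packColor(r, g, b, a) {          // each component 0..15
      return r + 16 * g + 256 * b + 4096 * a; // 0..65535
    }

    function keyToTexel(key) {                // coordinate in the 256x256 LUT
      return { x: key % 256, y: Math.floor(key / 256) };
    }

    packColor(15, 15, 15, 15); // 65535 -> texel (255, 255), the last entry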


Am I wrong in thinking that a company making a closed ("walled garden") system from an open system is fairly derivative and an expected result? It seems like a massive oversight in the article's reasoning.


For the extended ASCII characters (ANSI) and aesthetic of the DOS font.

