
More informative link: http://web.mit.edu/newsoffice/2011/trillion-fps-camera-1213....

Imaging Systems Applications Paper "Picosecond Camera for Time-of-Flight Imaging": http://www.opticsinfobase.org/abstract.cfm?URI=IS-2011-IMB4

ACM paper "Slow art with a trillion frames per second camera": http://dl.acm.org/citation.cfm?doid=2037715.2037730


Also, some of the math behind reconstructing the 4d light-propagation is in this tech report from some of their collaborators: http://users.soe.ucsc.edu/~amsmith/papers/ucsc-soe-08-26.pdf


The MIT News Office article has more details than the PR video and a link to the paper: http://web.mit.edu/newsoffice/2011/trillion-fps-camera-1213....

The actual imaging device is just a long line of photosensors. The camera aperture uses a varying electric field to deflect photons that arrive later to sensors further down the line, producing an effectively 2D image: 1D of space and 1D of time. By repeating the scene and slowly scanning the camera's mirror, a composite video is built up that shows the diffusion of a picosecond laser pulse.
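
Rough numpy sketch of how the composite comes together (the dimensions and the capture_streak stand-in are made up, just to show the scan-and-stack idea):

    import numpy as np

    N_ROWS, N_COLS, N_FRAMES = 480, 640, 512  # assumed resolutions

    def capture_streak(mirror_row):
        # Hypothetical stand-in for one exposure: returns a
        # (space, time) array for the single scene row the mirror
        # currently selects. Real data comes from the streak tube.
        return np.zeros((N_COLS, N_FRAMES))

    # The scene (a repeatable laser pulse) is re-triggered once per
    # mirror position; each exposure fills one row of every time frame.
    video = np.empty((N_FRAMES, N_ROWS, N_COLS))
    for row in range(N_ROWS):
        video[:, row, :] = capture_streak(row).T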


> The camera aperture uses a varying electric field to deflect photons

blink

They're using photon-photon scattering? Wow. I thought that in order to pull that off you needed very high-powered lasers.

Edit: "But while both systems use ultrashort bursts of laser light"

Depending on how they define "ultrashort" this might be the key.


The press release (edit: and I) skipped a step: "A portion of the laser beam is split off with a glass plate and directed to a photodetector that provides the synchronization signal for the camera. The camera images the incoming light on a slit and this slit on a photocathode where photons excite electrons. These electrons are accelerated in a vacuum tube and deflected by an electric field perpendicular to their direction of motion and perpendicular to the direction of the slit. The signal generating this field is obtained by filtering and amplifying the synchronization signal from the laser. The image is thus spread out over time in a "streak" that hits a micro channel plate at the far end of the vacuum tube that further amplifies the electrons and directs them to a phosphor screen where they are converted back to photons. The screen is then imaged on a conventional low noise CCD camera. The complete time interval captured in this way spans about 1 nanosecond." [Picosecond Camera for Time-of-Flight Imaging]

So a photocathode generates electrons, which are deflected.
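
Back-of-the-envelope on the resolution, assuming the ~1 ns window from the paper quote gets spread over, say, 512 sensor rows (my guess, not their number):

    window_s = 1e-9   # ~1 ns total streak window, per the quoted paper
    rows = 512        # assumed sensor rows along the streak axis
    print(f"{window_s / rows * 1e12:.2f} ps per effective frame")  # ~1.95 ps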


Thank you. That makes a lot more sense.




I have an electrical engineering background, and all this FP vs. IP sounds like async logic vs. state machines. How far off base is that description?


How does everyone here gather and analyze their metrics? What do you keep deployed at all times, and what do you use when shit hits the fan?

[Edit for typo]


<rant> The "standard" ways are all very outdated, ugly, unscalable, and brain dead in implementation. nagios, cacti, munin, ganglia, ... -- all crap. </rant>

People end up writing their own [1]. They rarely open source their custom monitoring infrastructure. Sometimes a private monitoring system gets open sourced, but then you see it has complex dependencies. The complexity of monitoring blocks wide-scale deployment, so people stick with 15-year-old, simple, dumb solutions.

I'm working on a new distributed monitoring/alerting/trending/stats framework/service, but it's slow going. One weekend per month of free time doesn't exactly yield the mindset needed to get into hardcore systems-hacking flow [2].

[1]: http://www.slideshare.net/kevinweil/rainbird-realtime-analyt...

[2]: Will develop next-gen monitoring for food.


I'm getting the feeling that with all the unique server setups in use, monitoring and metrics systems are going to be just as unique and specific.

There are some interesting process-monitoring projects out there like god, monit and bluepill, as well as EC2/cloud-specific stuff from Ylastic, RightScale and Librato Silverline. Have you ever used any of those tools?

Fitting all these together for my setup is trial and error, but it really does force me to think hard about my tools and assumptions even before I get hard data.


I hack on the aforementioned Silverline at http://librato.com, and we provide system-level monitoring at the process/application granularity as a service. (We also have a bunch of features around active workload management controls, but that's out of scope here.) It actually works on any server running one of the supported versions of Linux, not just EC2. The benefits of going with a service-based offering are the same as in any other vertical: you don't need to install and manage your own software/hardware for monitoring.

Here's an example of the visualizations we provide into what's going on in your server instances: http://support.silverline.librato.com/kb/monitoring-tags/app...


Sounds like Zabbix, Pandora FMS, Osmius, NetXMS, and AccelOps are the ones that match your requirements.

Within each, if you search for templates or cookbooks or config scripts, you'll find ways of configuring it easily enough.

https://secure.wikimedia.org/wikipedia/en/wiki/Comparison_of...


Almost.

They all suffer from inflexible data models (how many entries in that matrix are using SQL and rrdtool?), death at scale (what happens when you go from 10 to 500 to 3,000 to 10,000 servers? across three data centers? with transient Xen servers?), and a lack of both UI design and community involvement (the community is spread thin across that massive comparison grid).

That's not even considering broken models for alerting (a server dies at 3am -- should it page you? No, because you have 200 of the same servers in the same role; the load balancer will compensate.), historical logging, trending, and event aggregation/dedup.
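
To make the 3am example concrete, here's a hedged sketch of pool-aware alerting; the names and the 80% threshold are mine, not from any existing tool:

    def healthy_fraction(servers):
        # servers: list of (hostname, is_up) pairs for one role
        return sum(1 for _, up in servers if up) / len(servers)

    def handle_role(role, servers, page, log, page_below=0.8):
        frac = healthy_fraction(servers)
        if frac < page_below:
            page(f"{role}: only {frac:.0%} of pool healthy")  # wake a human
        elif frac < 1.0:
            log(f"{role}: {frac:.0%} healthy, LB will compensate")  # no page

    # 1 of 200 web servers dead at 3am -> logged, nobody gets paged
    web = [("web%03d" % i, i != 42) for i in range(200)]
    handle_role("web", web, page=print, log=print)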

It's a big problem, but making flexible tools from the ground up with sensible defaults can go a long way towards helping everyone.

We can fix this. We can make the Redis of monitoring.


I have to laugh at you pointing out Redis; Redis cannot scale at this time. Clustering is planned for sometime mid-year, but it'll be some time before it has more features. Maybe you meant MongoDB?

Alerting is quite flexible from what I read, to the point of being quite customisable. I agree that a server dying at 3am is not as important, but it should still be a valid alert that makes an API call to the host to start a new server (not sure if that's possible; alerts seem to be shell-based).

Here's what your offering needs to top; it's what I'm considering lately: http://www.zabbix.com/features.php

I'd love more competition, but as even you point out, community involvement won't be as strong when there's a lot of competition. Including your offering, soon.

Disclaimer: I started researching server monitoring a few weeks ago and considering Zabbix since last week.

Edit: The one issue I find is that it lacks web transaction monitoring like New Relic has: http://newrelic.com/features/performance-analytics

You can see it in action with average response time: http://blog.tstmedia.com/news_article/show/86942

As far as I know, no open network monitoring service offers it.
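
For what it's worth, the web-transactions feature mostly boils down to timing each request server-side and aggregating per endpoint. A minimal WSGI middleware sketch (all names are mine, nothing New Relic-specific):

    import time
    from collections import defaultdict

    timings = defaultdict(list)  # path -> list of durations in seconds

    def timing_middleware(app):
        def wrapped(environ, start_response):
            t0 = time.time()
            try:
                return app(environ, start_response)  # times the handler call
            finally:
                path = environ.get("PATH_INFO", "?")
                timings[path].append(time.time() - t0)
        return wrapped

    def average_response_times():
        return {path: sum(ts) / len(ts) for path, ts in timings.items()}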


That's the kind of thinking Josh calls out in the article. Redis is great in most of the use cases for replicated Mongo if you gear the rest of your architecture to use it properly.



I tried out statsd + graphite and it's flexible, but it requires a ton of dependencies (Python, Django, Apache, Cairo, SQLite, and node.js) and needs a lot of configuration across a bunch of apps to get up and running. It might not be too bad if you already have a Django app, but there was very little overlap with my stack, so I decided against it.

It would be nice if the whole thing could be rolled up into a single config file and one node.js app that forks two processes, one for the web UI and one for stats collection. And Cairo could be replaced with JavaScript + canvas or SVG on the client side.
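
Something like this sketch, in Python instead of node.js (ports and names are arbitrary): one process, a UDP thread speaking the statsd-style "name:value|c" counter format, and a tiny HTTP endpoint that dumps the current counters as JSON:

    import json, socket, threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    counters, lock = {}, threading.Lock()

    def udp_collector(port=8125):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        while True:
            data, _ = sock.recvfrom(1024)            # e.g. b"hits:1|c"
            name, rest = data.decode().split(":", 1)
            value = float(rest.split("|", 1)[0])
            with lock:
                counters[name] = counters.get(name, 0) + value

    class StatsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            with lock:
                body = json.dumps(counters).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    threading.Thread(target=udp_collector, daemon=True).start()
    HTTPServer(("0.0.0.0", 8126), StatsHandler).serve_forever()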


Unfortunately, this latest AWS outage affected multiple AZs in the us-east region. The OP may very well have had an alternate-AZ failover plan, but like Quora, Sencha, Reddit, Foursquare and Heroku, they probably kept it region-specific.

As for backing up to multiple regions, I can imagine them thinking that sending everything over the public internet was a bad idea. However, not having a multi-region/multi-provider failover plan was a worse one.


I think that Microsoft Corp. v. i4i Limited Partnership, which was argued before the Supreme Court on April 18, may change how this plays out in the end.

"Since 1983, the courts have followed a clear, firm rule: In order to overcome the statutory presumption that a patent is valid, a litigant must provide clear and convincing evidence that a patent is invalid. That's a high hurdle to overcome. ... Many observers expect the Supreme Court will reject the current bright-line rule and, at least under some circumstances, make it easier for parties to attack the validity of patents." - http://www.abajournal.com/magazine/article/court_may_make_it...

Previously on HN: http://news.ycombinator.com/item?id=2453895 and http://news.ycombinator.com/item?id=2464698


Convore is pretty new, but have you tried it for private chats? When it was announced, I immediately started thinking about how much easier it would be to set up for my non-technical co-workers.

Link that explains basic functionality: http://gigaom.com/collaboration/set-up-real-time-conversatio...


Tying you to the browser is only slightly better than tying you to a single client; textareas are not the kind of place you want to spend much time. I'd be game if there were an Emacs client.


It's true that there aren't a lot of clients out there for Convore yet. There are quite a few Convore projects started on GitHub (https://github.com/search?q=convore&type=Repositories) and more clients are welcome!


Convore looks promising; I hope it will get client support soon, either in Twitter clients like TweetDeck or IM/IRC ones like Pidgin.


Build-out Script for Postgres/PostGIS with RAID 10 on Amazon EBS volumes: http://sproke.blogspot.com/2010/12/build-out-script-for-post...
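
For anyone curious what the build-out amounts to, a hedged sketch with boto (instance ID, sizes, zone, and device names are placeholders; the mdadm step runs on the instance itself, not through the API):

    import boto.ec2

    conn = boto.ec2.connect_to_region("us-east-1")
    instance_id = "i-12345678"  # placeholder instance
    devices = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

    # Four EBS volumes; RAID 10 stripes across two mirrored pairs.
    for dev in devices:
        vol = conn.create_volume(size=100, zone="us-east-1a")
        # (in real use, poll vol.status until 'available' first)
        vol.attach(instance_id, dev)

    # Then on the instance itself, something like:
    #   mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    #       /dev/sdf /dev/sdg /dev/sdh /dev/sdi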


I found a benchmark from 2008 that details the problems with RAID 10 on EBS and sourced it in a comment above [1]. Those are just raw disk transfer numbers, though; I can only imagine how they would change as CPU usage and Postgres load climb. IIRC, EBS disk IO is network traffic, and network traffic is CPU-dependent, so as load increases, IO will suffer greatly.

[1] http://news.ycombinator.com/item?id=2341425

