
> why not put a whole bunch of filters in front of a mono camera and get much more frequency information?

Just RGB filters aren't really going to get you anything better than a Bayer matrix for the same exposure time, and most subjects on Earth are moving too much to do separate exposures for three filters.

The benefit of a mono camera and RGB filters is that you can take advantage of another quirk of our perception: we are more sensitive to intensity than to color. Because of this, it's possible to get a limited amount of exposure time with the RGB filters and use a 4th "luminance" filter for the majority of the time. During processing you combine your RGB images, convert the result to HSI, and replace the I channel with your luminance image. Because the L filter doesn't block much light it's faster at getting signal, but it's only really a benefit for really dark stuff where getting enough signal is an issue.
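
If it helps to see the shape of that combine step, here's a toy numpy/matplotlib sketch (not my actual pipeline; it assumes you already have stacked, calibrated r, g, b, and lum frames as floats in 0-1, and it uses HSV as a stand-in for HSI):

    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    def lrgb_combine(r, g, b, lum):
        # stack the color channels into an (H, W, 3) image
        rgb = np.stack([r, g, b], axis=-1)
        hsv = rgb_to_hsv(np.clip(rgb, 0.0, 1.0))
        # swap the intensity-ish channel for the deeper luminance exposure
        hsv[..., 2] = np.clip(lum, 0.0, 1.0)
        return hsv_to_rgb(hsv)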


For the vast majority of things in the sky, they'd see black. This stuff is incredibly dark, and we need hours of exposure time to get enough signal. Even after hours of exposure, the raw stacked frame is a black field with some pinpoints of light.

The exception to this is stuff in our own solar system.


The problem is that the noise can swamp the signal. Another example of this would be doing astrophotography during the day. The sun doesn't block anything; it just makes the sky glow with "noise". Theoretically the daytime sky has exactly as much signal from space as it does at night, but because the sun adds so much noise it's completely lost.
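
To put some made-up numbers on that (the photon rates below are invented purely for illustration), the object's signal is the same day or night, but the shot noise grows with the square root of the total counts, and the daytime sky background dominates those counts:

    import math

    target = 1.0        # photons/s from the faint object (invented number)
    sky_night = 10.0    # photons/s of night-sky glow (invented number)
    sky_day = 1e6       # photons/s of daytime sky glow (invented number)

    t = 3600.0          # one hour of exposure
    for sky in (sky_night, sky_day):
        snr = target * t / math.sqrt((target + sky) * t)
        print(f"background {sky:>9.0f}/s -> SNR = {snr:.2f}")
    # Same signal in both cases, but the daytime SNR comes out roughly 300x worse.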


> "because the sun adds so much noise it's completely lost."

Do you mean that it would be conceptually possible to image planets or even deep-sky objects during the day with incredibly efficient denoising software? (I am a noob in astronomy)


It would be, yes. As early as the 1950s, several avionics companies made daylight-capable star trackers (for jam-resistant long-distance airplane navigation) using chopper techniques. Those trackers were mostly mechanical, except for electronics to demodulate the star signal from the single pixel sensor.


Absolutely incredible.

For a little bit of context for how impressive this is, here's my take on it with a consumer grade 8" Newtonian telescope from my backyard: https://www.astrobin.com/full/w4tjwt/0/


Your picture is itself quite impressive. Do you mind sharing more about the equipment and process it takes to capture something like that?

Edit: Oh, you can click through the image and see technical details. Very cool.


You already noticed the technical card [1], but I can describe some of the details that go into this for those unfamiliar with the items on it.

1. The scope they used is roughly equivalent to shooting with an 800mm telephoto lens. But the fact that it's 8" wide means it can let in a lot of light.

2. The camera [2] is a cooled monochrome camera. Sensor heat is a major source of noise, so the idea is to cool the sensor to -10 °C to reduce that noise. Shooting in mono allows you to shoot each color channel separately, with filters that correspond to the precise wavelengths of light that are dominant in the object you're shooting and that ideally exclude wavelengths present in light pollution or moonlight. Monochrome also lets you make use of the full sensor rather than splitting the light up between channels. These cameras also have other favorable low-light noise properties, like large pixels and deep wells.

3. The mount is an EQ6-R pro (same mount I use!) and this is effectively a tripod that rotates counter to the Earth's spin. Without this, stars would look like curved streaks across the image. Combined with other aspects of the setup, the mount can also point the camera to a specific spot in the sky and keep the object in frame very precisely.

4. The set of filters they used are interesting! Typically, people shoot with RGB (for things like galaxies that use the full spectrum of visible light) or HSO (very narrow slices of the red, yellow, and blue parts of the visible spectrum, better for nebulas composed of gas emitting and reflecting light at specific wavelengths). The image was shot with a combination: a 3nm H-Alpha filter captures that red dusty nebulosity in the image and, for a target like the horsehead nebula, has a really high signal-to-noise ratio. The RGB filters were presumably for the star colors and to incorporate the blue from Alnitak into the image. The processing here was really tasteful in my opinion. It says this was shot from a Bortle-7 location, so that ultra narrow 3nm filter is cutting out a significant amount of light pollution. These are impressive results for such a bright location.

5. They most likely used a secondary camera whose sole purpose is to guide the mount and keep it pointed at the target object. The basic idea is to try to keep the center of some small star on some pixel. If during a frame that star moves a pixel to the right, the software sends an instruction to the mount to compensate and put it back on its original pixel (there's a rough sketch of this loop below the footnotes). The guide camera isn't on the technical card, but they're using PHD2 software for guiding, which basically necessitates one. The guide camera could have its own scope, or be integrated into the main scope by stealing a little bit of the light using a prism.

6. Lastly, it looks like most of the editing was done using Pixinsight. That covers assigning each filter to a color channel, aligning and averaging the 93 exposures shot over 10 hours across 3 nights, subtracting the sensor noise pattern using dark frames, removing dust/scratches/imperfections using flat frames, and whatever other gradient reduction, noise reduction, and color calibration went into creating the final image (a simplified calibrate-and-stack sketch is below the footnotes).

[1] https://www.astrobin.com/w4tjwt/0/

[2] https://astronomy-imaging-camera.com/product/asi294mm-pro/
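
To make 5 and 6 a bit more concrete, here are two toy Python sketches. They are my own illustrations, not PHD2 or Pixinsight internals; the guide_cam.centroid() and mount.pulse() calls are hypothetical stand-ins for whatever the real drivers expose.

    def guide_loop(guide_cam, mount, target_xy, gain=0.7):
        # Keep a chosen guide star centered on its original pixel (point 5).
        tx, ty = target_xy
        while True:
            x, y = guide_cam.centroid()        # where the guide star is right now
            dx, dy = tx - x, ty - y            # drift from the locked-in pixel
            mount.pulse(gain * dx, gain * dy)  # nudge the mount back toward it

And a simplified version of the calibrate-and-stack step from point 6, assuming numpy and lists of same-sized float frames (registration/alignment and proper flat calibration are omitted):

    import numpy as np

    def calibrate_and_stack(lights, darks, flats):
        master_dark = np.median(darks, axis=0)          # fixed thermal/readout pattern
        master_flat = np.median(flats, axis=0) - master_dark
        master_flat /= master_flat.mean()               # map of vignetting and dust
        calibrated = [(f - master_dark) / master_flat for f in lights]
        return np.mean(calibrated, axis=0)              # average the exposures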


Thanks! I hadn't gotten to writing this out, but you've pretty much nailed it.

> They most likely used a secondary camera whose sole purpose is to guide the mount and keep it pointed at the target object.

I did use a guide camera with an off-axis guider, I'm not sure why it wasn't in the equipment list. I've added it.

> The RGB filters were presumably for the star colors and to incorporate the blue from Alnitak into the image.

This is primarily an RGB image, so the RGB filters were used for more than the star colors. This is a proper true color image. I could get away with doing that from my location because this target is so bright. The HA filter was used as a luminance/detail layer. That gave me a bunch of detail that my local light pollution would hide, and let me pick up on that really wispy stuff in the upper right :)

> The processing here was really tasteful in my opinion.

Aw shucks, thanks :blush:


Ah, of course it's HaRGB. Really cool. I'm curious, do you de-star the color layers or leave them as-is when combining channels? When I've tried HaRGB, the Ha layer has the best/smallest stars, which means the RGB color layers end up leaving rings of color on the background around each star.


I don't remember exactly what I did, but I do remember running into that kind of problem. I probably used starnet2 to remove the stars before doing much processing, and recombined them towards the end.


One of my favorite comments ever on HN. I’m big into photography and yet learned something on nearly every bullet. Thank you!


Well, if you think photography is too easy you could try taking up astrophotography :)


The exact opposite for me! I have a hard enough time getting composition and exposure correct shooting stuff here on Earth!


Now I need to know the ballpark cost of this whole setup, so it will block me from trying to get into yet another very costly hobby.

EDIT: oh, just saw it https://news.ycombinator.com/item?id=40206558


You packed so much knowledge in your brief response! Thank you!


Thanks for detailing this. Learned a lot.


"Do you mind sharing more about the equipment and process"

I'm sorry, but this is making me laugh so hard. I don't know a lot about astrophotography, but one thing I've experienced so far is that astrophotographers love to talk about their equipment and process.

It's like asking a grandparent, "Oh, do you have pictures of your grandkids?" It kind of makes their day. :)


Haha, yeah. I could go on for hours. I've had to learn that most people really don't want a lecture series on the finer points of astrophotography. Seabass's comment was pretty much perfect; a bit of detail, but not so much to get lost in the detail.

I tried to write a quick comment on my process a couple of times before they posted, and each time I had way too many words on a small detail.


Nonsense! Just one more story, please?

(Thanks for sharing!)


How about a talk by an expert on the topic of noise in an image: https://www.youtube.com/watch?v=3RH93UvP358


Amazing shot! Lots of good stuff, really liked this full moon shot https://www.astrobin.com/w0lzn5/B/ - the color!


Here is my Esprit 120mm widefield version https://www.astrobin.com/full/r97r5j/0/


Oh nice! Except for Alnitak (I love me some spikes), I like yours more.


That’s super cool!!! Looks like quite a niche/technical hobby with amazing output. Do you mind sharing how much equipment costs to get similar results?


It's a wonderful niche/technical hobby, but it's not cheap. You could even say it's "pay to win". I didn't buy all of my stuff at once, and I made some mistakes along the way, but I'd guess I'm using on the order of $10k in equipment.


Just to follow on, you can get started with quite a bit less. My dad took a stab at some basic shots with his prosumer Nikon and a basic tracking tripod.

That's still $1000 body, $1000 glass, $500 tripod, give or take. So far from cheap if you're starting from scratch. But if you already have a body and some glass, it's not a stretch. Or, if you're ok with hunting for used gear, the body and glass can be ~half off new retail.


I'm assuming that'd be a non-moving/automated tripod?

I have a d850 full-frame DSLR and either a 200mm 2.8 or 500mm 5.6, with some decent tripods; but earth's rotation tends to get me pretty quickly with any long-exposure photos :(


I've seen some pretty impressive stuff done with a relatively cheap / simple DSLR setup.

The basics of astrophotography aren't that expensive, but it gets exponentially more expensive to meaningfully "zoom in". Because DSLRs with typical lenses are pretty zoomed out, you can get away with much cheaper gear. You might look into getting a "star tracker". It's like a telescope mount for a camera; it'll keep the camera still relative to the stars, and because they don't need to be as accurate they're far cheaper to make. They'll probably work just fine for your 200mm 2.8 lens for a fraction of the cost of a mount.


I think it's rotating, but doesn't have a secondary camera as described above. Maybe he spent more than $500 (I tend to doubt it), but I'm also not sure of the specifics beyond that he's using a crop-frame Nikon DSLR with a lens he already had for birding (I think).


I mean I don't know if I'm more impressed by their level of detail from a $10 billion telescope or your level of detail from a consumer-grade telescope!


The James Webb image shows a level of detail we have never seen before. Hundreds of galaxies in the background that are invisible on the consumer grade telescope.

Here's the full resolution image:

https://www.esa.int/var/esa/storage/images/esa_multimedia/im...


Oh, don't get me wrong, I am absolutely astounded by the JWST's level of detail and am in awe of the pictures it takes. And they are obviously far more detailed than the OP's. I also think it was a worthy expense. I was just noting that my awe of both is comparable when you normalize for cost!


I'm quietly chomping at the bit for a Webb full deepfield survey rather than the hint of it we saw in 2022...


If this post is to be believed, a full deepfield survey would take four thousand to forty thousand years. https://www.reddit.com/r/jameswebb/comments/wrwsfc/how_long_...


Thanks, but if you look closely you'll see that the Webb image has almost an image's worth of detail within each pixel of my image.


Once in a while, I have the impulse to buy the equipment to make these kinds of photos, then I check the price (at least 4k USD), realize I am not from the US, and cool down till next time.

It's consumer level, but not cheap at all.


It's all relative, right? The cost is about a millionth of the JWST image. A millionth!


At that price difference it's silly to not buy the gear! Right? Right?


Yours is a superb image, too. Very impressive indeed. Kudos!


That's a really lovely shot.


TBH I like your shot more than JWST. You can at least see the whole horsehead. NASA should check their zoom setting.


Yep, let's build a $10B space telescope to zoom out and use 0.001% of its resolution, matching a backyard telescope.


What good is a telephoto lens if you're just gonna zoom in on the very top of people's heads? It won't make for very good memories.


I've been using terraform for 10-ish years, and this is very much not how I feel about it. Terraform absolutely makes life easier; I've managed infrastructure without it and it's a nightmare.

Yes, it can be awkward, and yes the S3 bucket resource change was pretty bad, but overall its operating model (resources that move between states) is extremely powerful. The vast majority of "terraform" issues I've had have actually been issues with how something in AWS works or an attempt to use it for something that doesn't map well to resources with state. If an engineer at AWS makes a bone-headed decision about how something works then there isn't much the terraform folks can do to correct it.

I've actually been pretty frustrated trying to talk about terraform with people who don't "get it". They complain about the statefile without understanding how powerful it is. They complain about how it isn't truly cross-platform because you can't use the same code to launch an app in aws and gcp. They complain about the lack of first-party (aws) support. They complain about how hard it is to use without having tried to manually do what it does. Maybe you do "get it", and have a different idea of what terraform should do. Could you give a specific example (besides the s3 resource change) where it fails?

It's a complicated tool because the problem it's trying to solve is complicated. Maybe another tool can replace it, and maybe someone should make that tool because of this license change, but terraform does the thing it intends to do pretty well.


I'm no Terraform expert but it's been in my resume and toolbox since ~2016.

Up until these changes, I would always pick Terraform for managing AWS. I have my gripes with it but it has been the best choice (as the saying goes, anybody that uses a tool long enough should have complaints about its limitations).

Now, however, I'm finally thinking of going with the CDK to insulate myself from more seismic shifts in the "OSS ecosystem" of devops tools.


Yes, you can :)

It all depends on the properties of the signal and the noise. In photography you can combine multiple noisy images to increase the signal-to-noise ratio. This works because the signal grows as O(N) with the number of images while the noise only grows as O(sqrt(N)). The result is that while both signal and noise are increasing, the signal is increasing faster.

I have no idea if this idea could be used for AI detection, but it is possible to combine 2 noisy signals and get better SNR.
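
A quick sanity check of those growth rates with numpy (toy numbers, just to illustrate the scaling):

    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, true_signal, noise_sigma = 100, 1.0, 5.0
    frames = true_signal + rng.normal(0.0, noise_sigma, size=(n_frames, 10_000))

    stack = frames.sum(axis=0)
    print(stack.mean())   # ~100: the summed signal grows like N
    print(stack.std())    # ~50:  the summed noise grows like sqrt(N) * 5
    # So the SNR of the stack is about sqrt(N) times better than a single frame.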


I think this is a really important comment, and until about 6 months ago I would have completely agreed with you. I even made these same arguments to my coworkers: it's just cooperative multithreading, it's making up for a defect in JS, just use threading primitives. I think some people might use async in a fad-y way when they don't need to, or don't understand what it really is and think of it as an alternative to multithreading. You've generated a lot of good discussion, but maybe a specific example of where async/await made writing a multithreaded program easier will help.

What changed my mind was accidentally making a (shitty/incomplete) async system while implementing a program "The Right Way" using threads and synchronization primitives. The program is for controlling an amateur telescope with a lot of equipment that could change states at any moment with a complex set of responses to those changes depending on what exactly the program is trying to accomplish at the time. Oof, that was a confusing sentence. Let's try again; The telescope has equipment like a camera, mount, guide scope, and focuser that all periodically report back to the computer. The camera might say "here's an image" after an exposure is finished, the mount might say "now we're pointing at this celestial coordinate", the focuser might say "the air temperature is now X", and the guide scope might say "We've had an error in tracking". Those pieces of equipment might say those things in response to a command, or on a fixed period, or just because it feels like it.

Controlling a telescope can be described as a set of operations. Some operations are fairly small and well contained, like taking a single long exposure. Some operations are composed of other operations, like taking a sequence of long exposures. Some operations are more like watchdogs that monitor how things are going and issue corrections or modify current operations. When taking a sequence of long exposures the program would need to issue commands to the telescope depending on which of those messages it receives from the telescope or the user; If the tracking error is too high (or the user hits a "cancel" button) we might want to cancel the current exposure. If the air temperature has changed too much we might want to refocus after the currently running exposure is finished. If the telescope moves to a new celestial coordinate we probably want to cancel the exposure sequence entirely. So, how do we manage all that state?

The way I solved it was to make a set of channels to push state changes from the telescope or user. Each active operation would be split into multiple methods for each stage of that operation, and they would return an object that held the current progress and what it needed to wait on before we could move on to the next stage. That next stage would be triggered by a central controlling method that listened for all possible state changes (including user input) and dispatched to the next appropriate method for any of the operations currently running. To make things a little simpler I made a common interface for that object that let the central controlling method know what to wait on and what to call next. This gave me the most control over how different concurrent operations were running while staying completely thread-safe. It was great; I could even listen to multiple channels at the same time when multiple operations were happening concurrently.

At this point I realized I'd accidentally made an async system. The central controlling method is the async runtime. The common interface is a Future (in rust, or Promise in js, or Task in C#). Splitting an operation into multiple methods that all return a Future is the "await" keyword. Once I accepted my async/await future, operations that were previously split across multiple methods with custom data structures to record all of the intermediate stages evaporated and became much more clear.

I'm still using multiple threads for the problems that benefit from parallel computation, but making use of the async system in rust has made implementing new operations much easier.
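
If it helps, here's roughly the shape of one of those operations as a toy Python asyncio sketch (my real code is in Rust; the queues, the error threshold, and everything else here are made up for illustration, and it glosses over details like re-queueing a cancelled get):

    import asyncio

    async def exposure_sequence(camera_q, guide_q, n_frames, max_err=2.0):
        # camera_q / guide_q are asyncio.Queues fed by the device readers.
        for _ in range(n_frames):
            frame = asyncio.create_task(camera_q.get())   # "exposure finished"
            error = asyncio.create_task(guide_q.get())    # "tracking error report"
            done, pending = await asyncio.wait(
                {frame, error}, return_when=asyncio.FIRST_COMPLETED)
            for task in pending:
                task.cancel()
            if error in done and error.result() > max_err:
                break   # tracking drifted too far: abandon the sequence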


In a language like C that isn't really possible because the language can't keep track of all of the places that memory address is stored.

If malloc were to return something like an address that holds the address of the allocated memory, there would be nothing preventing the program from reading that address, doing math on it, and storing it somewhere else.


Amateur astrophotographer here. What I'm going to talk about is true for my rig. The JWST is an astronomically better telescope than what I have, but the same basic principles apply.

The cameras used here are more than 8-bit cameras, so there has to be some way to map the higher bit-depth color channels to 8 bits for publishing. The term for the pixel values coming off the camera is ADU. For an 8-bit camera, the ADU range is 0-255. For a 16-bit camera (like what mine outputs) it's 0-65535. That's not really what stretching is about, though.

A lot of the time, the signal for the nebula in an image might be in the 1k-2k range (for a 16-bit camera), and the stars will be in the 30k to 65k range. If you were to compress the pixel values to an 8-bit range linearly (i.e., 0 ADU = 0, 65535 ADU = 255), you'd be missing out on a ton of detail in the 1k-2k range of the nebula. If you were to say "ok, let's have 1k ADU = 0 in the final image, and 2k ADU = 255", then you might be able to see some of that detail, but a lot of the frame will be clipped to white, which is kind of awful. That would be a linear remapping of ADU to pixel values.

The solution is to use a power rule (aka apply an exponent to the ADU, aka create a non-linear stretch). (EDIT: The specific math is probably wrong here.) That way you can compress the high ADU values, where large differences in ADU aren't very interesting, and stretch the low ADU values that have all the visually interesting signal. In the software this is done via a histogram tool that has three sliders: one to set the zero point, one to set the max point, and a middle one to set the curve.

It's kinda like a gamma correction.
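
Roughly, in numpy (a minimal sketch; real tools use fancier midtone transfer functions, and the black/white/gamma values here are placeholders, not what I actually used):

    import numpy as np

    def stretch(adu, black=1000, white=65535, gamma=0.35):
        # rescale the interesting ADU range to 0..1, clipping everything outside it
        x = np.clip((adu.astype(float) - black) / (white - black), 0.0, 1.0)
        # gamma < 1 lifts the faint nebulosity while compressing the bright stars
        return (255 * x**gamma).astype(np.uint8)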


Also related: μ-law[1] and A-law[2] companding in telecoms.

[1]: https://en.wikipedia.org/wiki/%CE%9C-law_algorithm

[2]: https://en.wikipedia.org/wiki/A-law_algorithm


Yes, they can be. Self-diagnosing isn't a great idea; lots of non-depressed people have some symptoms of depression. It would probably be a good idea to find someone to talk with though.


Where "someone" is a therapist or counsellor. They can refer you on to someone who works with pharmaceuticals if necessary, but there's a lot to try before going that route.


Heads up: ‘there is a lot to try before going that route’ is itself a medical opinion, and one that's easy to miss. Exactly what the treatment plan would be, and in what order, is entirely up to the doctor, and sometimes (often) the thing that works best is the actual drugs.

I’m just conscious of this because I’ve seen this argument used to justify delaying actual treatment in favour of ‘alternative medicine’.


True, yes, but nothing you hear in a CBT session should be super surprising. It may help you see things in a new way or understand the motivations of the other people involved in your life situations, but they're not going to tell you to sleep with crystals or wear a magnetic bracelet or something.

Basically the counselling/therapy part of the process would be identifying if there's a "real world" root cause to address ahead of going the pharma route and facing potential side effects, and/or the reality of having to go off it later and immediately regressing because the root cause hasn't been fixed.

In any case, I'm obviously not a doctor; neither this post nor the GP should be construed as medical advice.


