
L-RGB relative exposure?


Recommended Posts

5 minutes ago, ollypenrice said:

I think that, in this analysis, you are looking at raw data and not at data as it will be, or can be, processed for a final image. My capture procedure is based on what I will do with the data in processing, not on its relationship with the raw signal coming from the sky.

I do not want equal SNR in all channels. If my data were to remain unprocessed then, yes, I might want that - but my data will be processed. If I expose for faint tidal tails, rarely seen, in luminance, then my stars in luminance will be over-exposed. This is not a problem because I will not use luminance on my stars; I will use RGB only for stars. If galaxy cores are, in the same way, over-exposed in luminance, I won't use them, I will use the RGB-only cores.

One of my fundamental principles in imaging is to ask myself, 'What am I going to do with this layer?' The answer to this question determines how I will shoot the layer.

Olly

Some may not want equal SNR in all channels, but the question of exposure time is often asked by beginners, and is a fundamental question even in pro work, when you have to establish how much telescope time you will need for your project.

I had the very same questions when I started, and about 4 years back, Vlaiv was kind enough to share an Excel spreadsheet for exposure time estimation. It really inspired me to dig deeper and learn about the scientific part of the hobby, so I thank Vlaiv for his insight and inspiration.

I soon realised that if you want an accurate answer, it takes quite a bit of work.

Even if you don't want equal SNR for your subs, you still have to pick an SNR in each channel in order to compute the exposure time. Unfortunately, to get a good answer you have to do the math, either on paper, by coding, or in Excel, etc.
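For example, a minimal sketch of that math in Python could look like this (all the rates below are made-up placeholders you would measure from your own rig and sky):

```python
import math

# Hypothetical per-pixel rates in electrons/second, measured from your own data
target_rate = 0.5    # object signal through a given filter
sky_rate    = 2.0    # sky background through the same filter
dark_rate   = 0.01   # dark current
read_noise  = 1.6    # electrons RMS per sub
sub_length  = 300.0  # seconds per sub
target_snr  = 20.0   # desired per-pixel SNR in the final stack

def stack_snr(n_subs):
    """Classic CCD equation: SNR of n_subs combined subs."""
    signal = target_rate * sub_length * n_subs
    noise = math.sqrt(signal
                      + sky_rate * sub_length * n_subs
                      + dark_rate * sub_length * n_subs
                      + n_subs * read_noise ** 2)
    return signal / noise

# Increase the number of subs until the requested SNR is reached
n = 1
while stack_snr(n) < target_snr:
    n += 1
print(f"{n} subs of {sub_length:.0f} s ≈ {n * sub_length / 3600:.1f} h for SNR {target_snr}")
```

Repeat this per filter (L, R, G, B or narrowband) with the rates you actually measure, and you get the relative exposure split.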

It certainly depends on the object, but broadly speaking, because colour filters have a smaller bandwidth than an L filter, you will receive fewer photons, so you will have to expose longer to get good signal and good colour. Is it wrong to have good signal and good colour in a faint galaxy arm? It will certainly make processing easier; the colours will pop without too much effort.

Also, I think the raw signal is certainly related to what you can achieve in processing. If you have zero signal, nothing you do in processing will reveal anything.

An example comes from the pros, where they don't use an L filter at all, but photometric filters, to make an RGB image. They usually aim for equal SNR in all channels, which translates into different exposure times for BVR, Sloan gri, etc. Why complicate things and aim for a different SNR in each band? Is it a saturation problem? Perhaps, but no matter what you do, there is a big chance you will have saturated objects (stars) in your image.

A simple answer could be: expose until you are happy with the end result 😄. Joking aside, one of these days I'll make an app for exposure time; I had one before, but it was not accurate enough, so I started from scratch. Things get complicated when you want to know how much to expose for an emission nebula in H-alpha, OIII, etc., and spectra are not always available for the common objects amateurs go for.


51 minutes ago, dan_adi said:

A simple answer could be: expose until you are happy with the end result 😄. Joking aside, one of these days I'll make an app for exposure time; I had one before, but it was not accurate enough, so I started from scratch. Things get complicated when you want to know how much to expose for an emission nebula in H-alpha, OIII, etc., and spectra are not always available for the common objects amateurs go for.

That is the essence of the problem - you can't really tell how much exposure you need until you start exposing. Either that, or you rely on data provided by someone else - but most of the time, no such data is available.

I think it would be a very good idea for amateur astronomers to create a sort of joint catalog, by using the data from their own imaging.

For example, no one really knows what the magnitude of, say, the spiral arms of M81 is, but everyone who has imaged this target can use their own data to estimate such a value.

But then again, not many people are invested in the scientific side of this hobby, and/or they frown upon the number-crunching side of things.


24 minutes ago, vlaiv said:

That is the essence of the problem - you can't really tell how much exposure you need until you start exposing. Either that, or you rely on data provided by someone else - but most of the time, no such data is available.

I think it would be a very good idea for amateur astronomers to create a sort of joint catalog, by using the data from their own imaging.

For example, no one really knows what the magnitude of, say, the spiral arms of M81 is, but everyone who has imaged this target can use their own data to estimate such a value.

But then again, not many people are invested in the scientific side of this hobby, and/or they frown upon the number-crunching side of things.

Indeed Vlaiv,

For stars it's easier, for galaxies it gets tricky. 

If you have the raw stacked image, taken with a photometric filter 😄 (you will have to provide a zero point), you can use AstroPhot in Python (Link) to fit a Sérsic model (or whatever model fits best) to that galaxy.

If everything goes well you will have the radial profile and surface magnitude like this:

[Attachment: AstroPhot radial profile and surface brightness plot]

The core of the galaxy here is at around 19 mag/arcsec², while the arms reach about 23.

This way you have the whole range of surface magnitudes from the core to the faint arms, and can compute the exposure time for each component.
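As a rough sketch of that last step, turning a fitted surface magnitude into a per-pixel signal rate only needs the zero point and pixel scale (the numbers below are placeholders for illustration):

```python
import math

# Hypothetical values: the zero point (magnitude that gives 1 e-/s) and the
# pixel scale come from your own system, the surface magnitude from the fit.
zero_point  = 21.0   # mag corresponding to 1 e-/s for this scope/camera/filter
pixel_scale = 1.2    # arcsec/pixel
surface_mag = 23.0   # mag/arcsec^2, e.g. the faint spiral arms

pixel_area    = pixel_scale ** 2                              # arcsec^2 per pixel
mag_per_pixel = surface_mag - 2.5 * math.log10(pixel_area)    # brighter per pixel if pixels are big
object_rate   = 10 ** (-0.4 * (mag_per_pixel - zero_point))   # e-/s/pixel

print(f"{object_rate:.3f} e-/s/pixel from the {surface_mag} mag/arcsec^2 component")
```

Doing this for the core, the arms and the halo gives a signal rate for each component, ready to plug into the SNR equation.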

I have seen this approach in SkyTools software, where the imaging time is computed separately for the core, the spiral arms and the faint halo, but I haven't yet figured out how it can be done without using the actual data.

I quote from the Help file:

Quote

It begins with the target object and ends with an accurate prediction of the target object signal, sky signal, system noise, and, finally SNR, under any set of observing conditions. A brief explanation of the major components follows.

Spectral-Energy Distribution of the Target Object

Stars and other stellar sources (quasars, minor planets, etc.) are modeled based on the continuum, which is described by their UBVRI color indices.

This is similar to the approach I used in the earlier post for computing mags from spectra for stars. They most likely use the B-V colour index to derive the star's temperature and make a synthetic blackbody spectrum. I think my approach of getting the Gaia parameters for the star and using the PHOENIX spectral library is more accurate, but who knows.
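I obviously don't know what SkyTools does internally, but a sketch of that B-V route could be something like the following, using the Ballesteros (2012) approximation for temperature (a blackbody is only a rough stand-in for a real stellar spectrum):

```python
import numpy as np

def bv_to_teff(bv):
    """Ballesteros (2012) approximation: effective temperature from B-V colour."""
    return 4600.0 * (1.0 / (0.92 * bv + 1.7) + 1.0 / (0.92 * bv + 0.62))

def planck(wavelength_m, teff):
    """Blackbody spectral radiance B_lambda in W / m^3 / sr."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c ** 2 / wavelength_m ** 5
            / (np.exp(h * c / (wavelength_m * k * teff)) - 1.0))

teff = bv_to_teff(0.65)                     # roughly solar colour, ~5800 K
wl = np.linspace(350e-9, 1000e-9, 500)      # visible / near-IR range in metres
sed = planck(wl, teff)                      # unnormalised shape of the synthetic spectrum
print(f"T_eff ≈ {teff:.0f} K")
```

The synthetic spectrum would then be scaled to the star's V magnitude and run through the filter curves, which is where a real library like PHOENIX should do better than a blackbody.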

Also, they use the user-provided Bortle scale for sky brightness, although making a table of sky counts for each filter, using your own data, seems more accurate. I fail to see how you can go from the Bortle scale to sky counts in an H-alpha filter, or any filter for that matter.
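For broadband filters you can at least convert a sky brightness in mag/arcsec² into counts the same way as for the object, if you know your zero point (placeholder numbers again); for narrowband I still think you need sky counts measured from your own subs:

```python
import math

# Hypothetical values: sky brightness, system zero point and pixel scale
sky_mag     = 19.0   # mag/arcsec^2, e.g. a heavily light-polluted sky
zero_point  = 21.0   # mag giving 1 e-/s for this filter
pixel_scale = 1.2    # arcsec/pixel

pixel_area = pixel_scale ** 2
sky_rate   = 10 ** (-0.4 * (sky_mag - 2.5 * math.log10(pixel_area) - zero_point))
print(f"Sky ≈ {sky_rate:.2f} e-/s/pixel")   # feeds straight into the SNR equation
```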

Resume quote:

Quote

Reflection nebulae, galaxies, and comets are modeled similarly, using UBVRI colors representative of these objects. For galaxies, the type of galaxy determines the color characteristics.

Planetary nebulae, HII regions and supernova remnants are modeled via their emission line spectrum.

I don't know how the type of galaxy (Sb, Sc, etc.) relates to colour, or how you can go from that to object counts.

For planetary nebulae, it's a simple matter of having the spectrum, so their approach seems fine; I managed to do it myself.
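For anyone curious, the core of that calculation is just turning a published line flux into a detected electron rate; a rough sketch with made-up numbers:

```python
import math

# Hypothetical inputs for a single emission line (e.g. H-alpha at 656.3 nm)
line_flux   = 1e-14     # erg / s / cm^2 / arcsec^2, from a published spectrum
wavelength  = 656.3e-7  # cm
aperture    = 20.0      # telescope aperture diameter in cm
obstruction = 0.3       # central obstruction fraction (by diameter)
efficiency  = 0.5       # filter transmission x QE x optics, lumped together
pixel_area  = 1.44      # arcsec^2 per pixel

h = 6.626e-27           # erg s (cgs)
c = 2.998e10            # cm / s
photon_energy   = h * c / wavelength                                      # erg per photon
collecting_area = math.pi * (aperture / 2) ** 2 * (1 - obstruction ** 2)  # cm^2

electron_rate = (line_flux * pixel_area * collecting_area
                 * efficiency / photon_energy)                            # e-/s/pixel
print(f"≈ {electron_rate:.3f} e-/s/pixel from this line")
```

Sum that over the lines your filter passes and you have the object rate for the narrowband SNR calculation; the hard part, as I said, is finding reliable line fluxes for the objects amateurs typically image.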

 


Shooting for longer in RGB than L seems to me to be a very eccentric use of the LRGB system, the purpose of which is to spend most of your time shooting what you really need.

Yes, the RGB filters pass fewer photons than the L, but that is the point: you don't actually need more RGB photons, because the RGB layer can be processed for noise reduction with no perceptible lack of detail under the L layer. By shooting more L you have the opportunity to sharpen the very strong signal, so 'more L' is a good trade-off when anticipating the processing requirements.

Ultimately there is no need to shoot luminance at all. The perfect dataset might well come from an enormous amount of RGB - but we live in the real world.

Let me pose a slightly impish question :grin: : When you assess the effectiveness of your mathematical approach, do you do so by assessing the quality of the resulting image?

Olly


5 minutes ago, ollypenrice said:

Let me pose a slightly impish question :grin: : When you assess the effectiveness of your mathematical approach, do you do so by assessing the quality of the resulting image?

That depends on what you want in an image.

Do you want to look at it and say "Awww", or do you want to learn something from it?

I personally favor information and accuracy of information conveyed by the image.

By the way, probably the most effective use of imaging time is LRG imaging (out of the various LRGB, RGB and other approaches) - but no one seems to use it :D

 


46 minutes ago, ollypenrice said:

Let me pose a slightly impish question :grin: : When you assess the effectiveness of your mathematical approach, do you do so by assessing the quality of the resulting image?

Olly

The word "quality" has different meanings to different people. For one, I'm seldom interested in photometric quality, but I do like to show the extent of tidal tails and loops.

So, I don't fuss about the RGB data and its noise that much. But I do like to get as much L as the night sky allows.

Edited by wimvb

19 minutes ago, ollypenrice said:

Shooting for longer in RGB than L seems to me to be a very eccentric use of the LRGB system, the purpose of which is to spend most of your time shooting what you really need.

Yes, the RGB filters pass fewer photons than the L, but that is the point: you don't actually need more RGB photons, because the RGB layer can be processed for noise reduction with no perceptible lack of detail under the L layer. By shooting more L you have the opportunity to sharpen the very strong signal, so 'more L' is a good trade-off when anticipating the processing requirements.

Ultimately there is no need to shoot luminance at all. The perfect dataset might well come from an enormous amount of RGB - but we live in the real world.

Let me pose a slightly impish question :grin: : When you assess the effectiveness of your mathematical approach, do you do so by assessing the quality of the resulting image?

Olly

The math tells you how much time you need to reach a certain SNR, nothing more. It goes without saying, the better the SNR, the better the image.


Just now, dan_adi said:

the better the SNR, the better the image

The important point here is the different uses of the word "better". I assume that by the first use of "better", you mean "higher" or "larger", i.e. the separation between signal and noise. But the second use of "better" is not obvious.


I'm afraid I don't analyse anything, I just do what past advice and experience has taught me. (I don't have the tools or nous to analyse stuff anyway, I just do what looks right.)

I still use a CCD camera so my subs are long, but I do what most people on here do. 

Subs: twice as long for the Lum (600 sec); RGB (300 sec binned x 2, sometimes 150 sec binned x 2 depending on target). Total imaging time: 2 or 3 times as much Lum as RGB (when I get opportunities to do LRGB, that is).

As I live in an LP location (Bortle 8), I do mostly NB imaging and apply the same logic: Ha (600 sec), OIII and SII (300 sec binned x 2).

As Olly says, the detail comes in the Lum or the Ha.

One or two exceptions to this would be if imaging something like the Squid - you would probably want loads of OIII, and not binned.

Carole 

Edited by carastro

1 hour ago, carastro said:

As I live in an LP location (Bortle 8), I do mostly NB imaging and apply the same logic: Ha (600 sec), OIII and SII (300 sec binned x 2).

As Olly says, the detail comes in the Lum or the Ha.

 

Where can I find your pics? That's something new to me; I'm still learning.


2 hours ago, dan_adi said:

It goes without saying, the better the SNR, the better the image

This isn't wrong, but it may not point to the best use of time. It would be interesting to compare two images, one with half the RGB exposure and noise reduced in post-processing, the other with twice the exposure and no NR. My suspicion is that you would be hard put to tell the difference. If you tried the same comparison in L, though, the difference would be obvious.

I'm a pretty patient imager with several images topping 100 hours, but I like to put the time into the most effective captures.

Olly


3 hours ago, vlaiv said:

That depends on what you want in an image.

Do you want to look at it and say "Awww", or do you want to learn something from it?

I personally favor information and accuracy of information conveyed by the image.

By the way, probably the most effective use of imaging time is LRG imaging (out of the various LRGB, RGB and other approaches) - but no one seems to use it :D

 

If there's nothing to learn from an image it won't make me say "Awwww."

And I won't be drawn into your false dichotomy!!!

:grin:lly


24 minutes ago, ollypenrice said:

This must be from the Influencers' Tik Tok Dictionary For The Under Fives.

:grin:lly

I sometimes let Google decide ... I just type in the word/phrase and see what happens. In this instance the first hit was WikiHow ...


4 minutes ago, 900SL said:

How do you calculate SNR for a sample sub?

You can't do that - you can only estimate SNR for a single sub.

In order to really calculate SNR you need a sequence of subs.

The first thing to understand is that there is no single SNR value for a given sub. Each pixel effectively has its own SNR - or rather, areas with very similar or the same signal might share an SNR, since SNR obviously depends on signal and noise. More importantly, if all is "fine", then an area with the same signal should also have the same noise, because all the noise sources in such an area should be the same: dark current noise should be roughly uniform across the whole sub, read noise should be uniform across the whole sub, LP noise should be fairly uniform (there is an issue with strong gradients on large FOVs, where this might not be true), and finally shot noise is the same if the signal is the same.

In any case, to calculate SNR for a particular area of roughly the same signal, you stack the subs and read the average signal value from the stack, then you stack again using a different method - the standard deviation method. This gives you the variation between successive subs, which is really the noise, since in the absence of noise all the subs would have the same, perfect signal value.

You divide the two and you get the SNR for that area.
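In code the whole procedure is only a few lines; a minimal numpy sketch, assuming the subs are already registered and calibrated (the file name is just an example):

```python
import numpy as np

# Stack of N registered, calibrated subs as a 3D array of shape (N, height, width)
subs = np.load("registered_subs.npy")    # hypothetical example input

mean_stack = subs.mean(axis=0)   # "average" stack: per-pixel signal estimate
std_stack  = subs.std(axis=0)    # "standard deviation" stack: per-sub noise estimate

per_sub_snr = mean_stack / std_stack             # SNR of a single sub, per pixel
stack_snr   = per_sub_snr * np.sqrt(len(subs))   # noise averages down by sqrt(N) in the stack

# Read the value over an area of roughly constant signal, e.g. a small region of interest
region = (slice(100, 120), slice(200, 220))      # hypothetical region
print("single-sub SNR:", per_sub_snr[region].mean())
print("stack SNR:     ", stack_snr[region].mean())
```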


2 hours ago, vlaiv said:

I sometimes let Google decide ... I just type in the word/phrase and see what happens. In this instance the first hit was WikiHow ...

Ah, but this contradicts all the scientific principles by which you live! 👹

:grin:lly

