

Shooting near horizon vs overhead


BrendanC


Hi all,

Quick question that will probably spark off long answers!

How do you handle shooting objects at different altitudes, including as they ascend and descend throughout the night?

Do you just not bother below a certain alt?

Or, do you shoot different sub lengths as it rises then falls?

Or, do you shoot the same sub length regardless?

Or, something else entirely?

Thanks
Brendan


In theory, the stacking algorithm should take care of that.

Part of the stacking routine is sub normalization, which should account for at least two (if not more) things that happen:

1. Signal is weaker at lower altitude - significantly so in some cases, because the light passes through more air mass.

2. Light pollution (LP) is often higher at lower altitudes - again related to the amount of air mass and overall air density.

We can write this in the form x*Signal + LP, which is a linear equation. Comparing subs allows the LP part (a DC offset) and x (a multiplicative constant) to be equalized - that produces stack-compatible frames, each with the same signal strength and the same LP level.
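As a rough illustration of that x*Signal + LP idea (a minimal sketch, not how any particular stacker implements it), normalization can be thought of as fitting each sub against a reference frame with a least-squares line and then undoing the scale and offset:

```python
import numpy as np

def normalize_to_reference(sub, ref):
    """Fit sub ~ a*ref + b over all pixels, then undo the scale (a) and
    offset (b) so the sub matches the reference's signal level and background.
    Real stackers use robust, sigma-clipped fits rather than a plain fit."""
    a, b = np.polyfit(ref.ravel(), sub.ravel(), 1)  # least-squares linear fit
    return (sub - b) / a
```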

Advanced versions handle the LP gradient as well - they "normalize" the gradient, making it equal across subs. As the target moves across the sky, the FOV rotates with respect to the horizon, so any LP gradient rotates in the frame too, since it is usually fixed in orientation, with the brighter part closer to the horizon or towards a major LP source (like a city).

The above applies when you mix subs from separate nights as well - one night can have poorer transparency than the next, so subs are affected in much the same way.

What remains is to stack subs with different SNR efficiently - and this is where there is much room for improvement in current software.

Most have a very rudimentary way of handling differing SNR - such as a per-sub weight - but that is far from optimal.


Interesting. So, let's say I shoot the same sub length from horizon to horizon throughout the night.

The stacking algorithm weights the subs that are overhead significantly more than those at either horizon.

Doesn't that mean I'm effectively wasting data at the horizons? If I could adjust my sub length or even gain/offset to get better data at the horizons, then wouldn't that be more efficient? Or, should I just shoot above a certain altitude as a general rule, maybe shooting two or more objects a night as they enter 'the zone', so they have less variance?

In other words: what do you do? Actually in practice? I totally agree with the theory, I just want to know what I should be doing, if anything, to maximise my shooting time.


14 minutes ago, BrendanC said:

The stacking algorithm weights the subs that are overhead significantly more than those at either horizon.

Well, the first thing a stacking algorithm does is "normalize" the frames.

A sub shot close to the horizon will have lower signal and higher LP - so the algorithm multiplies the data in the image to bring the signal up to match a sub shot directly overhead, and subtracts a DC offset from the whole image to make the background equally bright.

After that is done you effectively have two identical subs - except for SNR. The one shot close to the horizon will have worse SNR than the one shot overhead.

17 minutes ago, BrendanC said:

Doesn't that mean I'm effectively wasting data at the horizons?

Depends how you look at it. The data will be lower quality, and sure - if you could shoot higher quality data at the same time, then you would be wasting your time shooting the lower quality data. But you can't do that - at least not of the same target - while it is close to the horizon; there is nothing you can do about that.

There are, however, several things that you can do:

1. Don't shoot a target while it is close to the horizon. Wait for it to be properly placed for the duration of the session. Decide on your free imaging time. Say that, due to commitments, you have free imaging time between 9pm and 2am - that is 5 hours. Choose a target that crosses the meridian smack in the middle of that window, i.e. at 11:30pm. This ensures you get the best SNR for your subs on that target. You may modify this if your LP has a strong bias towards one part of the sky - then try to balance things: shoot the target while it is in the darker part of the sky, but not too low down towards the horizon.

2. Work in parallel on several targets.

Maybe you have 6h of imaging time per night and a few spare nights in a row?

Image 3 targets - each for 2h when they are highest in the sky. You need to select targets that "follow" each other in the sky, and work through them as each approaches its highest point (see the sketch below).
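As an illustration of planning that kind of session, here is a minimal sketch (assuming astropy is available; the site coordinates, date and target coordinates are placeholders) that tabulates a target's altitude over a night so you can pick its ~2h window around transit:

```python
import numpy as np
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation, AltAz

# Placeholder observing site and night
site = EarthLocation(lat=52.0 * u.deg, lon=-1.0 * u.deg, height=100 * u.m)
times = Time("2023-02-23 21:00:00") + np.linspace(0, 6, 73) * u.hour  # 5-min steps

# Placeholder target (RA/Dec in degrees, roughly M42)
target = SkyCoord(ra=83.82 * u.deg, dec=-5.39 * u.deg)

# Altitude of the target at each time step
alt = target.transform_to(AltAz(obstime=times, location=site)).alt.deg
best = np.argmax(alt)
print(f"Highest at {times[best].iso} UTC, altitude {alt[best]:.1f} deg")
# Centre a ~2h block of subs on that time, then move on to the next target.
```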

22 minutes ago, BrendanC said:

In other words: what do you do? Actually in practice? I totally agree with the theory, I just want to know what I should be doing, if anything, to maximise my shooting time.

In practice I try to image the target that is best positioned, taking into account LP levels and position in the sky. I also choose nights of good transparency for imaging.

I've never gone for multiple targets per night spread over a couple of nights - but that is because I haven't had a permanent setup so far, or commitments did not allow it. With an obsy I'm certainly going to go that route.


Outstanding as ever, thanks vlaiv!

Another thing I'd add is that, when shooting multiple targets in a night (which I have done before), you can shoot until the meridian flip then go to another target instead. Makes for more productive time than waiting for the target to pass overhead. I have a planning spreadsheet that calculates the flip times so I can work around them this way.

Thanks again, all good advice.


2 minutes ago, BrendanC said:

you can shoot until the meridian flip then go to another target instead. Makes for more productive time than waiting for the target to pass overhead. I have a planning spreadsheet that calculates the flip times so I can work around them this way.

I did something like that, but I waited for the target to pass the meridian. This was partly influenced by the fact that the city center was towards the east and LP was much worse in that direction, so it made sense to shoot only towards the west.


Depends on the location I am shooting from, but I try not to image under 40 degrees because seeing and light pollution get pretty bad below that at most of the locations I visit for imaging. When a target starts at a low altitude but gets higher during the night, I will start with another target that is higher in the sky and switch over to the primary target once it's high enough. This way I always try to get the best quality data possible from every night out with the scope - clear skies are so rare that I really don't want to use the precious time poorly. Of course, some targets do not rise that high and need to be imaged as low as 20 degrees; in that case I try to travel to a location with the best possible low southern sky, since light pollution is magnified greatly at low elevation.

For different sub lengths, I don't pay it much attention. I shoot either 1, 2 or 4 minute exposures depending on how dark the skies are, the filter used, and whether it's windy or not. I already know what ADU numbers to expect in the background for read noise to be swamped x3 (the minimum I aim for even if windy), so I just use 1, 2 or 4 minutes based on that. Differing background levels between nights, or during the same night, are of little concern because normalization will take care of that in the end. But I do try not to mix awful data and great data in the same stack, as the bad data could lower the final SNR. For example, I will probably not bother shooting a target during a full Moon if I already have 3 nights from a Bortle 4 sky without a Moon.
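A quick sketch of that "swamp read noise x3" rule of thumb, i.e. exposing until sky shot noise is roughly three times the read noise. The camera numbers below are hypothetical placeholders, not anyone's actual setup:

```python
def target_background_adu(read_noise_e, gain_e_per_adu, bias_offset_adu=0.0, factor=3.0):
    """Background level (in ADU) at which sky shot noise = factor x read noise.

    Sky shot noise (e-) is sqrt(sky_e), so we need sky_e >= (factor * read_noise)^2.
    """
    sky_e = (factor * read_noise_e) ** 2
    return sky_e / gain_e_per_adu + bias_offset_adu

# Hypothetical camera: 1.7 e- read noise, 0.25 e-/ADU gain, 50 ADU bias offset
print(target_background_adu(1.7, 0.25, 50))  # ~154 ADU in the sub's background
```

Once you know that number, you can lengthen or shorten the sub until test frames hit it under the night's conditions.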


I'm surrounded by houses so can't image below 20 degrees altitude; that is my default cut-off. If taking separate RGB subs, I always try to take the blue at the highest elevation. Beyond that I let the stacking software do its thing on weighting the subs.


I don't change exposure for a single target as it moves. But I do choose different exposures for different conditions. 

I try and shoot objects when they're reasonably high in the sky, but obviously some never get there! It's hard not to be tempted by Sagittarius, but it's tricky. I've got half-decent results with M8 and M20 at about 15 degrees max. I do have dark skies and a decent view to the south, but the light pollution plumes here are low to the SE and SW. Atmosphere and a greater risk of cloud layers add to the difficulties. I'm normally shooting broadband on moonless nights, so I go for lots of shorter exposures, as low as 5 or 10 seconds with the RASA. Slightly higher in the sky and I'll probably be at 20 seconds (e.g. Orion). Higher up, normally 30 seconds.

 


1 minute ago, ollypenrice said:

Go on, I'll bite! Tell me why I shouldn't...

:D

Olly

:D

It's not that you shouldn't - it's more that you don't need to :D

If you think about it, we need 3 components to get the image; a 4th is additional information that we might not need. If 3 components are spaced properly to cover the whole 400-700nm range, that is pretty much all one needs to produce a color image.

I'm not going to bore you with math details and matrix transforms - there is a much simpler way to see this. Say we have filters spaced as 400-500, 500-600 and 600-700nm - being B, G and R (and if you look at most filters, you'll see that they are spaced roughly so) - with luminance being 400-700nm.

It is clear that if we add B+G+R we get what? The whole range, so we can roughly write B+G+R = L (as far as signal goes, not SNR).

This can easily be rearranged as B = L - (G+R).

The blue signal is already contained in the other three. In fact, you can choose to shoot any three of the four components (people have done so, shooting RGB without L) and still produce the same color image. Roughly the same, anyway - it depends on the filters used: if they tile the range exactly then it is the same, but if there are overlaps and gaps then it will be only roughly the same, with the difference depending on the size of the gaps and overlaps.

Why blue? Several reasons:

- blue scatters the most in the atmosphere

- blue is attenuated the most by the atmosphere

- blue often has the weakest QE on the sensor

Overall, blue is probably the hardest of the three (R, G and B) to record. This might not hold for some types of target - like emission nebulae dominated by Ha and Hb - but for most other targets it will.

I think a better image can be had if the time otherwise spent capturing the blue channel is used to improve the SNR of the L channel.


2 hours ago, vlaiv said:

:D

It's not that you shouldn't - it's more that you don't need to :D

...

This can easily be rearranged as B = L - (G+R).

I think a better image can be had if the time otherwise spent capturing the blue channel is used to improve the SNR of the L channel.

Yes, I get the idea that L-(G+R)=B. However the B component of L has to pass through the same optics and electronics as the B does, so I don't see why it should be any better than B-filtered B.

And then, when we add L to RG (B made by subtraction) it will be identical with the L in that part of the spectrum. Will the L then enhance the signal in the blue channel in the way that it does the R and G, which are not the same signal as the L? It seems to me that your method would be a bit like stacking the same sub 10 times (pointless) instead of stacking 10 different ones.

(Please note that I don't expect to win this argument with you. I never do. :grin: I'm still curious, though.)

Olly


4 hours ago, vlaiv said:

I think that better image can be had if one uses time otherwise spent on getting the blue channel to improve SNR of L channel.

Not in a high Bortle area (6 and up).  I find L to be the filter most susceptible to LP.  


26 minutes ago, tomato said:

FLO must be panicking if they read this thread, they will have visions of a big pile of unsold blue filters sitting in the warehouse.😉

I'll buy 'em all and sell them on at a mark-up when people realize vlaiv's theory doesn't really work. 😁

Olly


4 hours ago, ollypenrice said:

Yes, I get the idea that L-(G+R)=B. However the B component of L has to pass through the same optics and electronics as the B does, so I don't see why it should be any better than B-filtered B.

Synthetic B (let's call it that for simplicity) is not better than regularly captured B. It does not need to be. It just needs to be equally good, and since you don't need to shoot it, it frees up imaging time for something else (like more lum).

4 hours ago, ollypenrice said:

And then, when we add L to RG (B made by subtraction) it will be identical with the L in that part of the spectrum. Will the L then enhance the signal in the blue channel in the way that it does the R and G, which are not the same signal as the L? It seems to me that your method would be a bit like stacking the same sub 10 times (pointless) instead of stacking 10 different ones.

No, not pointless. We stack to improve SNR; stacking the same sub 10 times will not improve SNR, so that would indeed be pointless.

We take R, G and B to be able to reconstruct color information (we shoot luminance for SNR and luminance detail). If we can reconstruct the color information equally well from R, G and L-(R+G), it is not pointless.

Sure, the noise in the synthetic B will be a combination of the noise in R, G and L - but it won't have a visual impact. It is the same as saying that the noise of a synthetic luminance is derived from R, G and B when doing straight RGB and processing that luminance separately - and that doesn't cause any issues. Our brains won't see patterns arising from it - noise will be noise.
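To put a rough number on that "combination of noises" point: with B = L - (R+G) and independent noise, the per-pixel noise adds in quadrature. A tiny illustration with made-up background standard deviations:

```python
import numpy as np

# Hypothetical per-channel background noise (std dev, in ADU)
sigma_L, sigma_R, sigma_G = 5.0, 8.0, 8.0

# Independent noise adds in quadrature for B = L - (R + G)
sigma_B_synth = np.sqrt(sigma_L**2 + sigma_R**2 + sigma_G**2)
print(sigma_B_synth)  # ~12.4
```

Whether that extra noise actually shows up visually after LRGB composition is exactly the point being argued above.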

1 hour ago, ollypenrice said:

I'll buy 'em all and sell them on at a mark-up when people realize Vaiv's theory doesn't really work. 😁

The reason I mentioned this theory is rather simple: you seem to have plenty of data, some of which is in LRGB format, right?

Once you have already recorded LRGB, it is an easy experiment to use just the L, R and G, reconstruct B, and compose an image. You also have the LRGB version to compare the result against.

Maybe you'll find that it actually works, so it will save you some imaging time in the future? Who knows?

And you can always make your fortune reselling those blue filters at a profit.

1 hour ago, tomato said:

FLO must be panicking if they read this thread, they will have visions of a big pile of unsold blue filters sitting in the warehouse.😉

If you think that is a problem, wait until I start talking about how cheap planetary absorption filters are a better option for LRGB imaging than expensive interference ones :D

Something like a regular L filter, a Wratten #8 and a Wratten #29 is all you need to produce an accurate color image. A bonus is that you don't get the nasty reflections that often come with interference filters.


On 23/02/2023 at 12:21, BrendanC said:

Doesn't that mean I'm effectively wasting data at the horizons?

Not much data to waste, in my opinion. I never image below 20 degrees, and I usually try to keep the whole session above 30. The only time I dip below 30 is when I'm targeting Orion (I live at 64 N). I collect data for an hour or two when the target is at its highest, and continue for one or two following evenings if necessary. The rest of the night is spent on other targets.

I shoot with a DSLR. I spend some time at the beginning of the first session fine-tuning exposure time and ISO settings, and I use the same settings for the whole session and the following ones as well. It's fine that stacking software can do wonderful tricks, but the foundation is laid when you choose which subs to stack and which to discard. During a session with my cameras, the best and most plentiful data lies in the darkest subs. The difficult part is to get it right at the beginning; with time you gather experience. A histogram peak 1/4 to 1/3 from the left works for me. If you start fiddling with exposure as you go, you lose the reference sub (which is always the darkest) and the overall view. If you compare two subs with the same capture settings from the same session, the darkest will always have the most data and signal. When the subs get lighter, it's always light pollution, sky glow or poor skies. None of that adds data; rather, it outshines and washes out the signal. Reducing exposure will not help - it will only make the subs look a little more uniform, until you blow them up and take a closer look.


Let's quantify things a bit so that everyone can understand the impact of different conditions on the final result.

I'm going to use a simplified calculation for the sake of argument - one that does not treat the whole SNR but simply translates a loss in signal into additional imaging time. Say some conditions lower your signal to 66% of the original - that simply means you need to image for about x1.5 longer to collect the same amount of signal.

The first thing you should ask yourself is: do I think a 10% improvement in camera QE is a big deal or not?

Say that you need to select between two cameras - 60% peak QE and 66% peak QE - what is the time difference?

The time difference is also x1.1 - you would need to image x1.1 longer with the lower-QE camera than with the higher-QE one.

Let's look at some other things:

1. Altitude above horizon

Say we have extinction of 0.16 magnitudes per air mass.

Air mass = 1/sin(altitude). Note that the extra-time figures below are relative to no atmosphere at all; what matters in practice is the ratio between them (e.g. 30 degrees needs roughly x1.15 the time of 70 degrees).

70 degrees:  ~1.06418 airmass = ~0.17 magnitudes = ~x1.1695 = ~x1.17 more imaging time or 17% more imaging time

60 degrees:  ~1.1547 airmass = ~0.185 magnitudes = ~x1.186 = ~x1.19 more imaging time or 19% more imaging time

45 degrees: ~1.4142 airmass = ~0.2263 magnitudes = ~x1.23 more imaging time or 23% more imaging time

30 degrees:  2 airmass = 0.32 magnitudes = x1.34 more imaging time or 34% more imaging time

2. Transparency

Say that instead of imaging under AOD 0.1 skies you decide to image under AOD 0.3. AOD is aerosol optical depth: 0 is a completely clear sky, while 0.5 and above is murky.

Here is current forecast for reference (along with scale):

[AOD forecast map image with scale]

That is an increase of 0.2 AOD, or ~0.217 magnitudes of extinction - ~x1.22 more imaging time, a 22% increase.

3. Improper sampling

Say that instead of sampling at 1.5"/px you decide it is fine that your system samples at 1.1"/px, and you don't worry about oversampling.

Here the signal per pixel is proportional to the pixel's area on the sky, so we have (1.5/1.1)^2 ≈ x1.86, or an 86% increase in imaging time.

In the end, remember: these effects combine and compound to produce the final result.

Do you still think a 10% difference in sensor QE is a big deal?
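For anyone who wants to play with these numbers, here is a small sketch reproducing the arithmetic above (the 0.16 mag/airmass extinction, the 0.2 extra AOD and the 1.5"/px vs 1.1"/px figures are the ones from this post, not universal constants):

```python
import math

def mag_to_time_factor(m):
    """Extra-imaging-time factor corresponding to m magnitudes of signal loss."""
    return 10 ** (m / 2.5)

def altitude_factor(alt_deg, k=0.16):
    """Extra time from extinction at a given altitude (relative to no atmosphere)."""
    airmass = 1.0 / math.sin(math.radians(alt_deg))
    return mag_to_time_factor(k * airmass)

def aod_factor(extra_aod):
    """Extra time from additional aerosol optical depth (1 AOD ~ 1.086 mag)."""
    return mag_to_time_factor(1.0857 * extra_aod)

def sampling_factor(target_scale, actual_scale):
    """Signal per pixel scales with the pixel's area on the sky."""
    return (target_scale / actual_scale) ** 2

a = altitude_factor(30)        # ~1.34
t = aod_factor(0.2)            # ~1.22
s = sampling_factor(1.5, 1.1)  # ~1.86
print(a, t, s, a * t * s)      # compounded: roughly x3 more imaging time
```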

 


12 hours ago, vlaiv said:

 

Once you already recorded LRGB - it is easy experiment to use just LRG and reconstruct B and compose an image. You also have LRGB version to compare results to.

Maybe you'll find that it actually works, so it will save you some imaging time in the future? Who knows?

 

It has to be worth a try. As for the mechanics of how to do it, how would I perform this subtraction? I tried opening L and R in AstroArt and applying the Subtract command to subtract R from L. It left very little behind, as I suspected. (I did check that I'd done the subtraction the right way round because, when I subtracted L from R, it left even less!)

Also, what about the sub exposure lengths and number? I have always shot more L than RGB. That seems to be the whole point of doing LRGB.

Olly


52 minutes ago, ollypenrice said:

It has to be worth a try. As for the mechanics of how to do it, how would I perform this subtraction? ...

Also, what about the sub exposure lengths and number?

Olly

I would do it like this:

- background removal from L, R and G

- scale the channels to the same exposure length by dividing each by its exposure length in minutes.

- do pixel math: B = L - (R+G). I'd use ImageJ for this as I trust it not to do anything "funny" to the data (like normalizing the result, clipping, or anything like that).

- finally set a black point (add a small DC offset to all channels so that negative values are not clipped by processing software).
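For anyone who prefers to try the experiment outside ImageJ, here is a minimal numpy/astropy sketch of those steps. It assumes registered, stacked L, R and G FITS files; the filenames and exposure lengths are placeholders, and the median subtraction is only a crude stand-in for proper background/gradient removal:

```python
import numpy as np
from astropy.io import fits

# Registered, stacked L, R and G frames (placeholder filenames)
L = fits.getdata("L_stack.fits").astype(np.float64)
R = fits.getdata("R_stack.fits").astype(np.float64)
G = fits.getdata("G_stack.fits").astype(np.float64)

def remove_background(img):
    # Crude stand-in for background removal: subtract the median sky level.
    return img - np.median(img)

# 1. background removal
L, R, G = map(remove_background, (L, R, G))

# 2. scale channels to the same exposure length (minutes here are placeholders)
L /= 10.0   # e.g. a 10-minute L stack
R /= 5.0    # e.g. a 5-minute R stack
G /= 5.0    # e.g. a 5-minute G stack

# 3. pixel math: synthetic blue
B = L - (R + G)

# 4. add a small DC offset so negative pixels aren't clipped downstream
pedestal = max(0.0, -min(L.min(), R.min(), G.min(), B.min())) + 1e-3
for name, chan in (("L", L), ("R", R), ("G", G), ("B_synthetic", B)):
    fits.writeto(f"{name}_scaled.fits", chan + pedestal, overwrite=True)
```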

 


2 hours ago, vlaiv said:

I would do it like this:

- background removal from L, R and G

...

 

Too much PI for me. Easier to shoot blue!

:grin:

Olly

