Can you solve this puzzle about background noise?



Hi SGL Hive Mind,

I’ve got a real head-scratcher of a problem, and I’m hoping someone here can help me solve it. I’ve been experimenting to see the effect of increasing integration time on background noise levels. My understanding is that the greater the total integration time, the smoother the background noise should appear. But I’m finding that beyond one hour of integration, my noise levels show no improvement, and the noise even keeps the same general structure.

I flagged this in another thread, but I think it deserves its own discussion, so I thought I’d begin anew.

 

I figure either my understanding of integration and noise is incorrect, or maybe I’ve messed up something in pre-processing. I’ve conducted a lot of tests with different settings, copied below, but nothing seems to make much difference. I’ve uploaded my data to GDrive, in case anyone’s feeling generous with their time, and would care to see if they get the repeated noise pattern!  (Being GDrive, I think you need to be logged into a Google account to access).

My telescope is an Askar FRA400, and the camera is a 2600MC-Pro. All tests use a series of 120-second subs shot from Bortle 8 skies. For each test, I applied some basic functions in PixInsight just to get comparable images: ABE, ColorCalibration, EZ Stretch, rescale to 1000px. I used SCNR to remove green from the first tests, but forgot that step for the second batch.

Any idea what's going on? Why isn't the noise smoothing out past the one hour mark?

 

[Animated GIF 1: Darks and Flats]

[Animated GIF: Darks and Flats]

[Animated GIF: FlatDarks and Flats]

[Animated GIF: Darks + Flats + CosmeticCorrection]

[Animated GIF: Darks + FlatDarks + Flats + CosmeticCorrection]

[Animated GIF: Bias + Darks + FlatDarks + Flats + CosmeticCorrection]

 

Here are my PixInsight ImageIntegration settings:

[Screenshot: ImageIntegration settings, part 1]

[Screenshot: ImageIntegration settings, part 2]


Posted (edited)

I can't really add much, apart from to say that I've noticed ABE introduces noise similar to what you see here and, as yet, I've not been able to work out what settings I need to change, or whether it's my data. APP makes a better job of the stacking using its defaults. Having said that, I know there are enough folk getting great results out of PI that I've basically accepted it's me/my data at fault.

I can see the screenshots, but are you using WBPP? And have you looked at Adam Block's videos for V2 of the script? There may be info in there that could help - I've not watched them all yet, btw...

Edit: just to be clear, what I mean re ABE adding noise is that if I use STF on a stacked image and then apply ABE, the ABE version has noise, whereas the STF image does not.

Edited by scotty38

Last year I made some (rough) measurements of the effect of different stacking methods (I use DSS). I generally try to use at least 80 lights (180s each, so 4 hrs total). I found that with Average stacking the resulting noise closely followed the 1/√N formula, and similarly for Kappa-Sigma stacking, but only up to about 30 lights. After that the resulting noise levelled off and occasionally appeared to increase slightly. These days I tend to Kappa-Sigma stack the lights in separate batches of 20 to 30 and then average-stack the results. It seems to give me the same resulting noise as average stacking does, but without the satellite trails and hot pixels.
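The 1/√N behaviour described above is easy to reproduce synthetically. Here's a minimal numpy sketch (numpy assumed; pure Gaussian noise stands in for real subs, so it won't show the levelling-off that real data and rejection stacking can produce):

```python
import numpy as np

rng = np.random.default_rng(42)

def stack_noise(n_subs, shape=(200, 200), sigma=1.0):
    """Average-stack n_subs synthetic noise frames and return the
    standard deviation of the result."""
    subs = rng.normal(0.0, sigma, size=(n_subs, *shape))
    return float(np.mean(subs, axis=0).std())

for n in (1, 4, 16, 64):
    print(f"N={n:3d}  measured={stack_noise(n):.4f}  1/sqrt(N)={1/np.sqrt(n):.4f}")
```

With ideal uncorrelated noise the measured value tracks 1/√N indefinitely; when real stacks stop improving, the leftover is usually correlated noise (walking pattern, residual calibration signal) that averaging cannot beat down.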


Have you measured the different stacks in SubframeSelector? It looks to me as if the longer-integration stacks are getting harder stretches. Maybe put the same stretch on each stack and then compare. Also, what's happening to M81 and M82 - are they getting better?


2 hours ago, Lee_P said:

Any idea what's going on? Why isn't the noise smoothing out past the one hour mark?

Can you post just a couple of linear stacks without stretching them (as FITS please, as I don't have PI and can't open XISFs)?

Visual feedback on how much noise there is depends on the stretch level. I think that you used PI's automatic stretch - and that one works on the basis of the noise floor. This is why it stretched each sub so that the noise in the background is the same.

You can see that by the size of the stars in your animations.

[Two crops of the same star, stretched differently]

There is much more "light halo" around the left star than the right one - but they are the same star (you can see this in your animated GIFs - just watch any single star and see it "bloat" as total exposure increases).

In order to measure noise accurately, take two stacks while still linear, select a piece of background not containing any signal - no stars, no objects, mostly empty - and do stats on each image in that region.

Compare the pixel standard deviation values between the images.

If you want to visually assess the difference between two images, make sure they are normalized (one with respect to the other), or that they were stacked against the same reference frame. Then make a "split screen" type image while you still have linear data - prior to any stretch. This is easily done by selecting half of one stack and pasting it in the same place onto the other stack.

Only when you have such an image, do the stretch to show the background noise. This way you are guaranteed to have the same stretch on both images, as you'll be doing the stretch on a single image consisting of two halves of different stacks.
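For anyone who'd rather script those two checks than click through them, here's a rough numpy sketch (numpy assumed; the region coordinates are placeholders you'd pick by eye on a star-free patch):

```python
import numpy as np

def background_std(img, y0, y1, x0, x1):
    """Pixel standard deviation of a star-free background patch."""
    return float(np.asarray(img, dtype=np.float64)[y0:y1, x0:x1].std())

def split_screen(left_img, right_img):
    """Paste the left half of one linear stack onto the other, so a
    single stretch applied later affects both halves identically."""
    out = np.asarray(right_img, dtype=np.float64).copy()
    half = out.shape[1] // 2
    out[:, :half] = np.asarray(left_img, dtype=np.float64)[:, :half]
    return out
```

Load the linear stacks with whatever FITS reader you have (e.g. astropy.io.fits), compare background_std on the same region of both, then stretch the split_screen result once.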


15 minutes ago, vlaiv said:

Can you post just couple of linear stacks without stretching them […]

Thanks vlaiv, I'm going to read all that carefully -- for now, attached are four FITS stacks 😃

90_mins_darks_and_flats.fit

105_mins_darks_and_flats.fit

120_mins_darks_and_flats.fit

135_mins_darks_and_flats.fit


2 hours ago, scotty38 said:

I cannot really add much apart from to say I have noticed ABE introduces noise similar to what you see here […]

Interesting - I thought that ABE / DBE only appear to introduce noise because they're removing gradients (e.g. from light pollution) that are essentially masking the underlying noise. If that makes sense? I could be wrong!

I am indeed making my way through Adam Block's WBPP2.0 videos -- it's those that prompted me to make the tests including FlatDarks and CosmeticCorrection!


2 hours ago, Seelive said:

Last year I made some (rough) measurements of the effect of different stacking methods […]

Ok, interesting. Could be worth me investigating. Thanks!


1 hour ago, Laurin Dave said:

Have you measured the different stacks in Subframe Selector? […]

Oh, you're clever 😎

[Screenshot: SubframeSelector noise measurements]

Looks like the noise level is going down?

Here's the noise level over 24 hours of data:

[Screenshot: noise level over 24 hours of data]

 

So thanks to this and @vlaiv's insight, I think I understand that the noise level *is* going down, it just doesn't appear to in my GIFs because the process I used was automatically stretching it. BUT! I still don't understand why the noise pattern is the same in every image. Is that because of the automatic stretching?

 

 

To answer your other question, here's M81 over 24 hours:

[Animated GIF: M81 over 2-24 hours of integration]

 

And a version of the same data adjusted in Photoshop to keep the galaxy's brightness consistent:

[Animated GIF: M81 with the galaxy's brightness kept consistent]


50 minutes ago, Lee_P said:

Thanks vlaiv, I'm going to read all that carefully -- for now, attached are four FITS stacks 

Well, there won't be much visual difference between 90 min and 135 min. That is only a 50% increase in the number of frames, or about a 22.47% increase in SNR. That is on the edge of perception.

However, it can be seen with a bit of "trickery". The human brain is very good at recognizing patterns - and if you give it two patterns side by side, it will spot similarities and differences rather easily.

[Animated GIF: split-screen comparison of the 90- and 135-minute stacks]

Here is an animated GIF that I made from the 90-minute and 135-minute FITS - green channel, linearly stretched, using the "split screen" trick.

The right part of the frame does not change - it is the 135-minute FITS. The left side alternates between the 90- and 135-minute FITS.

It is obvious that the 135-minute stack looks a bit smoother - smaller grain and more uniform than the 90-minute data. Even with such a small increase in data, the difference can be seen if you know where to look for it.
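As a sanity check on the 22.47% figure: the SNR of an average stack scales with the square root of the total integration time, so:

```python
import math

# 90 -> 135 minutes is a 1.5x increase in frames; SNR grows as sqrt(N)
snr_gain = math.sqrt(135 / 90)              # = sqrt(1.5) ≈ 1.2247
improvement_pct = (snr_gain - 1) * 100      # ≈ 22.47 %
print(f"SNR improvement: {improvement_pct:.2f} %")
```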

54 minutes ago, Lee_P said:

Interesting, I thought that ABE / DBE appears to introduce noise because they're removing gradients (e.g. from light pollution) that are essentially masking the underlying noise. If that makes sense? I could be wrong! 

No. Background extraction will remove the very uniform component of the background signal and leave the background noise in place.

Here is an analogy to help you understand. Imagine a mountain trail full of rocks that you walk on. It's a bumpy ride. That is noise. The background level is just the difference between that path being on Mount Everest versus your local hill. The bumps in the trail won't change if you lower the mountain.

The last and most important thing: although the size of the bumps in the path does not change, their size relative to the mountain does. Rocks averaging 10 cm on a path compared to Mount Everest at 8848-or-whatever metres makes them almost one part in 100,000; but if you "wipe" the mountain and leave a local hill that is 100 metres high, the rocks are now one part in 1000.

This makes no difference to the image itself - it just changes the processing parameters, i.e. where you put your black point. Depending on that, you'll get a different relative noise level - the same image can appear quite noisy or very smooth, depending on how you process it. Here is the 135-minute data in two stretch versions:

[Image: 135-minute data, first stretch]

[Image: 135-minute data, second stretch]

Same image - different background.

One of the most important skills to learn in image processing is to push the data only as far as it allows. It is SNR that matters. In my view, it is better to keep the noise at bay by sacrificing some of the signal - maybe you won't be able to show those faint tidal tails nicely, or the outer parts of the galaxy will be darker than you'd like - but the image will be smooth, with very little noise.

Not being able to show faint detail does not mean that you did not push your data enough - it means that you did not spend enough time imaging.


3 hours ago, vlaiv said:

Well, there won't be much visual difference between 90min and 135min. […]

Thanks vlaiv, this is pure gold. I think I'm getting my head around it... 

I tried reproducing your example. I find it useful, but did I mess up the "linear stretch" aspect? If you could give me a pointer on how to do that in Photoshop, maybe I could make a version 2.

[Animated GIF: my attempt at the split-screen noise comparison]

And here's a graph of noise against integration time.

[Chart: noise against integration time]

Thanks again!

-Lee


1 minute ago, Lee_P said:

I tried reproducing your example. I find it useful, but did I mess up the "linear stretch" aspect? […]

The first thing you need to do is normalize the frames.

They have to have the same background level and the same star intensity. I think PI should have this feature, but since I don't use PI, I have no idea how it's done there.

After that, when intensities are normalized, a linear stretch is simple: apply the same Levels - without touching the middle slider, just top and bottom - to all frames, and you get the same linear stretch on each.

In fact, once the frames are normalized, you don't strictly need a linear stretch: as long as you apply the same Levels / Curves (you can save them as a preset), you should get comparable results.
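As a language-neutral sketch of "same background, same star intensity" (numpy assumed; the simple least-squares linear fit is my own stand-in here, not vlaiv's exact recipe or any particular PI process):

```python
import numpy as np

def normalize(img, ref):
    """Linearly map `img` onto `ref`: find a, b minimising
    |a*img + b - ref|^2, which matches both the background level
    (offset) and the overall intensity scale (gain)."""
    img = np.asarray(img, dtype=np.float64)
    ref = np.asarray(ref, dtype=np.float64)
    a, b = np.polyfit(img.ravel(), ref.ravel(), 1)
    return a * img + b
```

After normalizing one stack against the other, a single saved Levels/Curves preset applied to both gives a fair visual comparison.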


Posted (edited)
1 hour ago, vlaiv said:

First thing you need to do is normalize frames. […]

I've been trying for the last hour but just can't crack the normalisation stage. I'm sure it's simple, so I need to find some tutorials. I'll pause this for now, but thanks for all your help!

 

Edited by Lee_P

1 minute ago, Lee_P said:

I've been trying for the last hour but just can't crack the normalisation stage. I'm sure it's simple, so I need to find some tutorials. I'll pause this for now, but thanks for all your help!

As a simple fix, you can just "wipe" the background with a simple process.

Using PixelMath, subtract each stack's median value from it.

After doing that, all stacks should have a median value of 0 and the backgrounds should be normalized. There will still be some intensity mismatch, due to atmospheric absorption at the different altitudes the object passed through during recording - but that is a minor thing; you should get usable results just from setting the median to the same value across stacks.
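Outside PI, the subtract-the-median "wipe" is one line of numpy (numpy assumed; this mirrors the idea, not PixInsight's exact PixelMath semantics):

```python
import numpy as np

def wipe_background(stack):
    """Subtract the global median so the stack sits on a zero background.
    Working in float keeps negative residuals instead of clipping them."""
    stack = np.asarray(stack, dtype=np.float64)
    return stack - np.median(stack)
```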


9 hours ago, vlaiv said:

As simple fix - you can just "wipe" the background with simple process. […]

Ok, thanks, this makes sense to me. I tried it, and something is amiss: after my PixelMath subtraction, the median value isn't 0. Could someone versed in PixelMath point out where I'm going wrong?

 

Before PixelMath. Median value = 29.

[Screenshot: Statistics before PixelMath, median = 29]

 

My PixelMath expression to remove the median value:

[Screenshot: PixelMath expression subtracting the median]

 

After subtraction, the median value is 4. Shouldn't it be 0?

[Screenshot: Statistics after subtraction, median = 4]

 

Further proof that I've got it wrong. The left half of the image is 2 hours of integration, the right half is 24 hours. These are after my evidently suspect process of subtracting median values!

[Screenshot: split-screen comparison, 2 hours (left) vs 24 hours (right)]

 


1 hour ago, Lee_P said:

Could someone versed in PixelMath point out where I'm going wrong? 

[Screenshot: the Statistics panel from the post above]

It looks like you are working with 16-bit images.

Try working with 32-bit floating point images when doing all of that. I think it will solve the problem, but I'm not 100% sure - in any case, 32-bit precision is much better.
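Whether or not this is exactly what happened inside PI, integer pixel data really can leave a surprising median after subtraction. A small numpy illustration (numpy assumed; this shows generic integer wraparound, not a reconstruction of PixInsight's internals):

```python
import numpy as np

# Unsigned 16-bit pixels wrap around on subtraction, so subtracting the
# median can leave a median that is anything but zero ...
img16 = np.array([10, 20, 29, 40, 50], dtype=np.uint16)
med16 = float(np.median(img16 - np.uint16(29)))   # 10-29 wraps to 65517
print(med16)                                      # 21.0, not 0

# ... while 32-bit float keeps the negative residuals intact.
img32 = img16.astype(np.float32)
med32 = float(np.median(img32 - np.float32(29)))
print(med32)                                      # 0.0
```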


1 hour ago, vlaiv said:

It looks like you are working with 16 bit images. […]

It looks like the images are 32bit, but the statistics panel only has the option to display info for up to 16bit 🥴

[Screenshot: image properties showing 32-bit format]

[Screenshot: Statistics panel, display limited to 16-bit]


1 minute ago, Lee_P said:

It looks like the images are 32bit, but the statistics panel only has the option to display info for up to 16bit 

In that case - I have no idea :D (I don't own/use PI).

I can do it for you, however, if you wish. I'm using ImageJ and it's really easy for me to do. Just attach the FITS files that you want normalized (like you did with the 90- and 135-minute exposures) and I'll be happy to do it for you.

Alternatively, if you want to install ImageJ / Fiji, I'll be happy to show you how to do it in that software.


18 hours ago, Lee_P said:

To answer your other question, here's M81 over 24 hours:

There is definitely something not quite right. I am no expert in this, and I got this literally as my first image, using around 500 seconds' worth of data with no darks or flats. So comparing mine with 24 hrs of yours, I would have said you should have got a much clearer image.

 


Have you tried stacking the subs without the darks, just to see if there's any improvement? As you say, the background noise should smooth out with more light frames. I know with my SW 72ED / AZ-GTi / Canon 600D I don't take any darks at all now, just flats and bias, as I find the darks just add more noise. Maybe it's just my setup, but quite a few others don't take darks either. Here's a stack of a few of mine, only 50 mins-1 hr total subs. Not award winners by any means at all.

M104-No-darks-3-4-21.jpg

M57-Ring-Nebula-6-5-21_485186073252527.jpg

M100-6-5-21.jpg

The-Whale-&-Hockey-Stick-galaxies-1-5-2021.jpg


41 minutes ago, AstroMuni said:

There is definitely something not quite right. […]

 

That's a great picture, especially for such a short integration time. The images in my test just had some very basic edits done, to allow them to be compared fairly. This is a fully edited version with 24 hours of integration time. Looks a bit better than my simple tests:

[Image: fully edited M81 & M82, 24 hours of integration]

What level of light pollution do you have? I'm going to guess you're imaging from some nice dark skies? I'm in a city centre, which puts me at a big disadvantage with broadband targets like M81.


46 minutes ago, AstroNebulee said:

I don't take any darks atall now, just flats and bias as I find the darks just add more noise […]

Nice pics! Back when I started with my camera I did some tests, and found that Darks + Flats gave me the best results. I might try again though, as I'm learning a lot about how to make proper comparisons.


1 hour ago, vlaiv said:

I can do it for you however, if you wish. I'm using ImageJ and it's really easy for me to do it. […]

Thanks, that's very kind. I'm here to learn, so I'll gladly take you up on the offer of showing me how to do it in ImageJ. I've downloaded the software. If I fail, then I'll take you up on your offer of just doing it for me 😂


7 minutes ago, Lee_P said:

Thanks, that's very kind. I'm here to learn […]

Sure, it is rather easy to do. There are several ways, but I'll show you the easiest one.

- Either drag & drop your FITS onto the ImageJ bar (I do it below the toolbar, onto the status bar, but it will show the words "drag & drop" as soon as you drag the FITS anywhere over that window), or go File / Open... and select your FITS.

- Make sure you are working with a 32-bit FITS - if not, convert it (just click the menu option to switch data type, under Image / Type):

[Screenshot: ImageJ Image / Type menu]

- If your FITS is a colour one, you'll get a "stack" (this is ImageJ terminology, not the astronomy stacked image we call a stack) of three separate images, one per channel. Move the bottom scroll bar onto the middle one (green), then right-click anywhere on the image and you'll get a menu:

[Screenshot: ImageJ context menu, with the scroll bar and Duplicate highlighted]

Marked are the scroll bar and the menu option Duplicate. Once you hit that, it will ask whether you want to duplicate the "stack" (several slices) or just one slice:

[Screenshot: ImageJ Duplicate dialog]

Leave "Duplicate stack" unchecked, and this will create a new image - a copy of just the green channel.

Close the original stack of three channels, as we will be working just on green for the time being.

- Hit the Analyze / Measure menu option:

[Screenshot: ImageJ Analyze / Measure]

You should get stats for your image, and the Median value should be there (it's not shown in the screenshot, as it is off to the right, but the Mean value is).

If by any chance you don't have Median in the results, you need to customize your measurements. This is done via the Analyze / Set Measurements menu:

[Screenshot: ImageJ Set Measurements dialog]

There are quite a few options you can use - just check Median, and make sure you have enough decimal places for precision.

Now you can repeat the measurement via Analyze / Measure, and you should have the Median value included.

- Note down the median value (in my case, 0.05964...) and do Process / Math / Subtract:

[Screenshot: ImageJ Subtract dialog]

- After that, you can do another measurement to confirm the result:

[Screenshot: measurement results before and after subtraction]

Here we go - old and new values.

- In the end: File / Save As / FITS...

Repeat with the other files...

 


Posted (edited)
3 hours ago, Lee_P said:

What level of light pollution do you have? I'm going to guess you're imaging from some nice dark skies?

Your image is beautiful btw. I am probably class 6 and I have street lights around 40ft away from where I park my scope.

Edited by AstroMuni
