vlaiv

  1. Not sure if it has anything to do with a Windows update. First, let's address the issue of noise everywhere but the top part of the image. The noise you are seeing is background noise consisting of read noise, dark current noise (present across the whole frame) and LP noise. LP signal often has a gradient to it because the sky is not uniformly lit by ground light sources - closer to the horizon there is more LP and near the zenith there is less - and that creates a gradient. You can see a gradient running from the top of the image down to the bottom - that is most likely the LP gradient. Your level of stretch is such that it pushed the top part of the image (or rather its background) dark enough that the noise does not stand out as much, while the bottom of the image is still bright enough for the noise to show. Try removing the gradient first and doing a less aggressive histogram stretch so that the background noise is better controlled.

Now let's look at the pattern in the image. My guess is that it is not related to the sensor at all, but is down to the visible noise combined with another effect. You are probably guiding, but your polar alignment is not spot on. This creates a slight rotation between subs. Slight rotation between subs can also happen if you have cone / orthogonality error - once you do a meridian flip, the frame will be slightly rotated. If frames gradually change rotation, it's due to PA. If one side of the pier has one orientation, and the other side of the pier after the meridian flip is slightly rotated, it is due to cone / orthogonality error (I'm not an expert in that field and still don't know exactly why it happens, but I know it sometimes does).

Now, what does that have to do with the grid pattern? When you start stacking your images, the subs need to be aligned, so some of them need to be "rotated back" to match the reference frame. The software uses a certain interpolation algorithm to do this, and the choice of algorithm can cause the grid pattern to form. Bilinear resampling in particular can cause it. Here is a synthetic example done on pure noise and stretched to show the grid forming: This is pure gaussian noise (otherwise no pattern would be visible), rotated by 1 degree using the bilinear resampling method and stretched to show the grid clearly. Depending on the rotation angle, the grid will be coarser or finer. That is what I believe is going on, but I might be wrong. If you want to lessen this effect, use bicubic resampling or a more advanced resampling technique (Lanczos, for example, is excellent). A rough sketch of how to reproduce this on synthetic noise follows below.
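A minimal sketch of the synthetic test described above, assuming numpy / scipy / matplotlib are available: rotate pure gaussian noise by 1 degree with bilinear interpolation (spline order 1) and with cubic interpolation (order 3), then display both with a hard stretch. The bilinear version should show the periodic "grid" of smoother bands, the cubic one much less so.

```python
# Sketch: grid pattern from bilinear resampling of pure gaussian noise
import numpy as np
from scipy.ndimage import rotate
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
noise = rng.normal(loc=0.0, scale=1.0, size=(512, 512))  # pure gaussian noise

# Rotate by 1 degree: order=1 is bilinear, order=3 is cubic spline
bilinear = rotate(noise, angle=1.0, reshape=False, order=1)
cubic = rotate(noise, angle=1.0, reshape=False, order=3)

# Hard "stretch" (narrow display range) makes the variation in local noise
# level visible - the bilinear result shows the grid, the cubic much less
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
for ax, img, title in zip(axes, (bilinear, cubic), ("bilinear", "cubic spline")):
    ax.imshow(img[100:400, 100:400], cmap="gray", vmin=-0.5, vmax=0.5)
    ax.set_title(title)
    ax.axis("off")
plt.show()
```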
  2. It looks like you already got one? There is nothing wrong with the design itself; it has certain characteristics, much like any other design out there. With this design, however, low cost implementations have proven to be plagued by poor execution - not the fault of the design, but of the manufacturing process and attention to detail. I'm not sure what you mean by longer term issues. If it's not performing adequately, it will do so from the start; it will not suddenly, or over some period of time, start performing poorly on its own. Since you already have it - and one could say you are lucky in not having much experience, so you won't be able to tell straight away if the optics are poor - just use it until you are ready to replace it. Do be careful, however, about blaming things on the scope, as it might not be down to it, or at least not everything will be. Seeing can often be mistaken for poor optical quality, especially by novice observers (I'm guilty of that even with quite a bit of observing under my belt). Just use it in the way that is most pleasing - that gives the best image - and as your observing skills progress it will become more apparent what is due to seeing and what is down to the scope's optics.
  3. I would not call it dramatic. It depends on how far the thing you want to present in the image - the signal - is above the noise. Doubling the amount of data has a rather straightforward consequence: it improves SNR by a factor of about x1.41 (square root of two). It will always have that effect regardless of the SNR of the original image. If you have an image with an SNR of, say, 30, you will end up with an image with an SNR of ~42. Visually there will be less difference than, for example, in the case where your original image has an SNR of 4 and you increase that to 5.6. In relative terms it is the same increase in SNR, but if your SNR is already high, visually it will make less difference, while if the SNR is low to start with, such an improvement can be considerable. It can even pull the data from the "unrecognizable" region into the "starting to recognize features" region.

Here is an example with a low base SNR: Here is an example with a higher base SNR: And here is the same image as above (higher base SNR) with a slightly different linear stretch:

This goes to show that doubling the amount of data can produce different results, depending on the base SNR and also on the level of processing / stretch. In the first case it makes unreadable text almost readable (it is easier to figure out what it says in the right image). In the second example the target is rendered the same; the difference is only in the quality of the background. In the third example, if the data is carefully stretched, the text and the background look almost the same - in fact they look like there is almost no difference at all. And in all three cases we used the same increase in the amount of data - doubling it. A small worked example of the arithmetic is below.
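A minimal worked example of the arithmetic: stacking N equal-quality subs improves SNR by sqrt(N), so doubling the total data improves it by sqrt(2) regardless of the base SNR. The function name here is just for illustration.

```python
# Sketch: SNR improvement from doubling the amount of data
import math

def stacked_snr(base_snr: float, data_factor: float) -> float:
    """SNR after multiplying the amount of (equal-quality) data by data_factor."""
    return base_snr * math.sqrt(data_factor)

for base in (4.0, 30.0):
    print(f"base SNR {base:>5.1f} -> doubled data: {stacked_snr(base, 2.0):.1f}")
# base SNR   4.0 -> doubled data: 5.7
# base SNR  30.0 -> doubled data: 42.4
```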
  4. Ok, no, I was not trying to imply that I'll do some sort of "magic" and the image will look better. You indicated that you did an equal histogram stretch as closely as possible, but in reality it's quite different - when blinking the scaled-down version of the native and binned images there is quite a bit of variation in brightness, i.e. a different level of stretch. What I wanted to do is a "split screen" type of image while the data is still linear, and then post that so the same level of histogram stretch applies to both, as it will be one image composed of two halves. Here is a blinking gif to show the difference in level of stretch:
  5. How about not doing the histogram stretch and DBE? Just post the original stack while it is still linear. I can then bin it myself and show the difference both small and enlarged.
  6. Yes, gain is the issue. Here is a comparison of the information from the fits headers for darks and lights: darks were taken at gain 0 while lights were taken at gain 90. There is also a slight temperature mismatch between flats and flat darks. In all likelihood it will not make much difference, but I would recommend the following:
- Take a new set of darks at gain 90 and try calibrating your image with those.
- In future, make sure that you take flats / flat darks at the same temperature. It would be best to pick settings that you never change - offset, gain and temperature - and always work with those. An offset of 65 seems fine, so keep it there; gain 90 is unity gain, so again fine - keep it there; and -5C looks like a reasonable temperature that you can reach most of the time, so keep that too. Take all of your subs with these settings. A small sketch for checking the headers yourself is below.
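A minimal sketch for inspecting the relevant header values yourself, assuming astropy is installed. The keyword names (GAIN, OFFSET, CCD-TEMP, EXPTIME) and the file names are assumptions - capture software varies, so check your own headers for the exact keywords.

```python
# Sketch: print gain / offset / temperature / exposure from FITS headers
# so a mismatch like darks at gain 0 vs lights at gain 90 is easy to spot
from astropy.io import fits

KEYS = ("GAIN", "OFFSET", "CCD-TEMP", "EXPTIME")  # assumed keyword names

def summarize(path: str) -> None:
    header = fits.getheader(path)
    values = ", ".join(f"{k}={header.get(k, 'n/a')}" for k in KEYS)
    print(f"{path}: {values}")

# hypothetical file names - replace with your own subs
for sub in ("light_001.fits", "dark_001.fits", "flat_001.fits", "flatdark_001.fits"):
    summarize(sub)
```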
  7. There might be an issue with APT and the way it shoots flats, but I can't help there as I'm not using it (and never have). What I can do is offer a list of reasons why this can happen, so you can check whether anything on that list is causing the issue for you.

You have under-correction of the flats. The corrected value is equal to the base value divided by the flat value. For that number to be lower than it should be, we have two options:
1. the base value is lower than it should be
2. the flat value is higher than it should be
Usually this happens with wrong calibration or if there is some sort of light leak in the system.

Case one can happen if you have mismatching darks for your lights - longer duration darks, darks shot at higher gain or at higher temperature - or if there was a light leak while you shot your darks (but not with the lights), i.e. you did darks on the scope during the day, or you took your camera off the scope to do darks and there was either an IR leak or a regular light leak (the cap was not good enough at blocking the light).

Case two can happen if you have mismatching flat darks - shorter than the flats, at lower gain or at lower temperature. A light leak while shooting flats can also be a problem. People sometimes calibrate flats with bias only - again, that can be a problem.

If you can post one of each - light, dark, flat and flat dark sub - we could probably tell what happened by examining the fits headers and doing measurements on each. A rough sketch of the calibration arithmetic is below.
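A minimal sketch of the calibration arithmetic, assuming single matched frames held as 2D numpy arrays (real data would come from FITS files, and master frames would be stacks). It just shows where each failure mode enters: a "too strong" dark lowers the base value, a "too bright" flat raises the divisor, and either way you get under-correction.

```python
# Sketch: corrected = (light - dark) / normalized (flat - flat dark)
import numpy as np

def calibrate(light: np.ndarray, dark: np.ndarray,
              flat: np.ndarray, flat_dark: np.ndarray) -> np.ndarray:
    base = light - dark                                # option 1: wrong dark makes this too low
    master_flat = flat - flat_dark                     # option 2: wrong flat dark makes this too high
    master_flat = master_flat / np.mean(master_flat)   # normalize flat to mean 1
    return base / master_flat

# hypothetical toy data, just to show the shape of the operation
rng = np.random.default_rng(1)
light = rng.normal(1000.0, 10.0, (100, 100))
dark = np.full((100, 100), 100.0)
flat = rng.normal(20000.0, 50.0, (100, 100))
flat_dark = np.full((100, 100), 100.0)
print(calibrate(light, dark, flat, flat_dark).mean())
```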
  8. Here is a comparison between reduced and unreduced data. The first comparison is 2h worth of data taken at 1"/px and then scaled down to 2"/px vs 1h worth of data taken at 2"/px. The left side of this image is the 2"/px data (1h total) and the right side is the 1"/px downsampled data (2h total). Sorry about the extreme stretch - but this is a linear stretch down to the noise floor, to be able to actually assess whether there is any difference. In my view 1h of 2"/px data (as if taken with a reducer) is almost as good as 2h of data taken at 1"/px and then reduced down. I can tell that the 2h of reduced data in fact has a bit less noise (the right part of the image looks like it has a slightly smoother background). Here is the same image with a proper (yet basic) histogram stretch in Gimp: The histogram stretch, to my eye, confirms the above - 1h of 2"/px is almost as good as 2h of 1"/px downsampled to the same size. It follows that 2h of 2"/px will beat 2h of 1"/px downsampled, or in other words the reducer wins over unreduced data downsampled to the same size. But we sort of already knew this from my previous comparison of binning vs downsampling.

Let's look at what happens when we upsample the 1h of 2"/px image to match the resolution of the 2h of 1"/px image. I have to say that for the data above the ideal sampling rate is slightly less than 2"/px, so we can expect some sharpness loss - but I think it will be minimal and we probably won't be able to tell. Here is the linear stretch (again extreme, to hit the noise floor). The left part of the image is 1h of 2"/px upsampled and the right side is 2h of 1"/px. You will notice that the noise in the upsampled image is more grainy - there simply is no "information" to make up fine grained noise - but the noise levels are about the same, or rather we would need a histogram stretch to see which noise starts showing first. Not sure what to make of this one. I think I can see the difference in the size of the noise grain, but if I had not told you that this image is made partly out of 1h of data and partly of 2h of data, and that half of it was "shot" at twice lower sampling rate, would you be able to tell? A sketch of the kind of downsample / upsample step used here is below.
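A minimal sketch of the resampling steps used in the comparison, assuming noise-dominated data as a numpy array: downsample 2x by averaging 2x2 blocks (noise per pixel drops by about half), then upsample back with interpolation (noise level stays similar but the grain gets coarser, since no fine-scale information is added).

```python
# Sketch: 2x downsample by block averaging, then 2x upsample by interpolation
import numpy as np
from scipy.ndimage import zoom

def downsample_2x(img: np.ndarray) -> np.ndarray:
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(2)
full = rng.normal(0.0, 1.0, (512, 512))   # stand-in for noise-dominated 1"/px data

small = downsample_2x(full)               # "2 arcsec/px" version, noise std ~0.5
back_up = zoom(small, 2.0, order=3)       # upsampled back - similar noise level, coarser grain

print(full.std(), small.std(), back_up.std())
```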
  9. Not sure if I would go with this last one. Not many people can actually pull off 1"/px resolution; in most cases people are closer to 1.5-2"/px. Maybe a better way to say it would be:
- Don't use a focal reducer if you are already at your target sampling rate and you don't want to trade resolution / image scale for less imaging time, or if you want the flexibility to make that trade later via binning and such.
  10. It certainly won't matter on a chip the size of the 178. Even on larger sensors you should not get much vignetting at that distance.
  11. Are you sure you have the EFW the right way around? What EFW is it - ZWO as well? In fact it does not matter. The EFW has female T2 connections on both sides, your camera has a female T2 connection, but the EFW comes with a T2-T2 adapter. This is how you should put things together: camera - T2-T2 - EFW - 16.5 extender - FF/FR. It should all fit together like that, if I'm not mistaken.
  12. Indeed - both my data and your data were gathered without a reducer. I plan to use binning, as it has the same effect as a reducer for this purpose: it provides the same aperture at a given resolution.
  13. I will do the experiment with the data I have, but if you want, I can do it with the data you've gathered. No need to bin it - just post the aligned and cropped (to remove any alignment / registration artifacts) set of subs and I'll do all the processing. Once I do the experiment, you will be able to tell the difference between all the versions - binned, native, reduced, enlarged ...
  14. I think I get it now. The idea is to simulate the reducer scenario? I can do the following: take the data set and stack it at "normal" resolution; take half of that data set (half the subs), bin it 2x2 to simulate the reducer, and stack that as well; create a comparison between the images at large scale and at small scale; then enlarge the small image to match the scale of the larger one and do another comparison. I can do the same with a quarter of the subs (that would be the equivalent time when using an x0.5 reducer or 2x2 binning - it should give the same SNR as native resolution). I will do that and post the results sometime today. A rough sketch of the binning step is below.
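A minimal sketch of the simulation, assuming the subs are already aligned and held as numpy arrays (real subs would be loaded from FITS): 2x2 software binning plus average stacking, comparing all subs at native resolution against half and a quarter of the subs binned. The frame values here are hypothetical stand-ins.

```python
# Sketch: simulate a x0.5 reducer by 2x2 binning a subset of the subs
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(3)
subs = [rng.normal(100.0, 10.0, (200, 200)) for _ in range(16)]  # hypothetical subs

native_stack = np.mean(subs, axis=0)                             # all subs, native resolution
binned_half = np.mean([bin2x2(s) for s in subs[:8]], axis=0)     # half the subs, binned 2x2
binned_quarter = np.mean([bin2x2(s) for s in subs[:4]], axis=0)  # quarter of the subs, binned 2x2

# 2x2 binning halves the per-pixel noise, so a quarter of the subs binned 2x2
# ends up with roughly the same noise level as all subs stacked at native resolution
print(native_stack.std(), binned_half.std(), binned_quarter.std())
```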
  15. I don't really follow what you propose should be done. By half the data, do you mean half of the subs, or "every other pixel"? I'm not getting the parts "some fraction of the full size" and "we take it closer to full size". Once I understand your proposal, I'd be happy to give it a go and show the differences.