Posts posted by vlaiv

  1. Just now, alacant said:

    OK. If I go darker than this, I find I begin to lose the arms of the spiral.

    Yes, I know - you can't render something that has low SNR without also bringing up the noise; if you stretch the faint stuff and you don't have enough SNR, the noise comes up with it.

    That is what I wanted to point out. In my view, a good astro image will have some noise - it won't be completely noise free. The noise just needs to be fine grained and controlled enough that it does not distract from the image. On the other hand, I don't think an image loses much if you don't bring out every possible detail at the expense of blowing out the noise. If you can't fully render the arms without making the background too noisy, you have two options:

    1. Just don't try to get those arms bright and visible and settle for what you have captured well

    2. Spend more time on the target until you have enough SNR on everything you want to render.

    The problem with approach 2 is that you are never going to be fully satisfied :D - the more time you spend on the target, the more faint stuff you reveal and want good SNR on, so you spend some more time on the target, reveal more faint stuff, and go in circles :D

     

    • Like 2
  2. @alacant I think you've been asked this before, but here it is again: Why do you post your images in Getting Started with imaging?

    An image such as the one above, in my view, should certainly be posted in the main imaging section.

    Now onto the criticism of the image - the background again :D. In my view this is an improvement on the clipping, but maybe a bit too bright this time? Also, I think you are pushing the data beyond what it can deliver - the background is too grainy at this level of stretch. I do understand the need to show every bit captured in the image, but M101 is a very low surface brightness target and good SNR is important if you are going to show every last faint bit of the spiral arms. If you don't have the SNR, you need to make an effort not to overdo the stretch (at least it is an effort for me - I always have to make a conscious effort not to stretch too much).

    @ollypenrice

    What's with the hue in that image / screenshot? Do you have some sort of color profile enabled in PS, or something else? As far as I can tell, you opened the above image to analyze the histogram, so it should be the same image, but to my eye and on my computer screen the two images are distinctly different in color. Not sure if others will see it as it might be an issue with my computer, but here it is:

    image.png.514020e638080e4c5d580da95e048d3f.png

    Left - your screenshot, right - the original image. Maybe you did a bit of curves and levels and changed the hue?

    In the end, here is my tweak to the image - hope it's ok to do this - background made a bit darker and a bit of noise handling:

    First histogram - to verify it is bell shaped and nice:

    image.png.d0f67011d75d755b5ce1eeec2eb41741.png

    and the image (red has been touched a bit to make the background more neutral / gray):

    re-process.thumb.jpg.30152155c5126c32902bb2f57ebe04bd.jpg

    • Like 1
  3. 7 minutes ago, Sidecontrol said:

    For Image one, it was 71 x 10s ISO 6400 images stacked in DSS with x2 Drizzle and using the red rectangular box around the Orion Nebula.

     

    For Image two, it was 26 x 1min ISO 1600 exposures stacked in DSS with x3 Drizzle and using the red rectangular box around the Flame, Horsehead and Orion nebulas. For processing I was following this tutorial:

    Why are you using drizzle?

  4. I'm not really convinced that is true. Maybe we can do the math together and see if that is possible?

    The ELT will have a primary with a diameter of 39.4m, which means the Airy disk will be about 0.007" and the maximum useful sampling rate would be ~0.001" - meaning each pixel would cover 1mas (one milliarcsecond).

    If we solve for diameter - an angle of 1mas corresponds to 0.010193 light minutes at a distance of 2102400 light minutes (4 * 365 * 24 * 60, i.e. 4 light years).

    Light travels at 300000km/s, so it covers 183474km in 0.010193 light minutes.

    An Earth sized planet has a diameter of about 12700km, so it would occupy roughly 1/14th of a single pixel - certainly not 9 pixels.

    If we take one pixel to be ~183500km, then 9 pixels across would be about 1.6 million kilometers, which is close to a Sun-like object at about 1.4 million kilometers.

    If the above simulation is anything real, then it is much more likely to be a simulation of a Sun-sized star at a distance of 4Ly rather than an Earth-sized planet.
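    Here is the same arithmetic as a quick Python sketch - the constants are rounded textbook values I'm assuming, nothing ELT-specific:

        import math

        LY_KM = 9.4607e12                                # kilometres in one light year
        distance_km = 4 * LY_KM                          # 4 light years
        pixel_angle_rad = 1e-3 / 3600 * math.pi / 180    # 1 milliarcsecond in radians

        km_per_pixel = distance_km * pixel_angle_rad     # small-angle approximation
        earth_diameter_km = 12_742
        sun_diameter_km = 1_392_700

        print(f"one pixel spans ~{km_per_pixel:,.0f} km at 4 ly")          # ~183,000 km
        print(f"Earth covers ~{earth_diameter_km / km_per_pixel:.2f} px")  # ~0.07 px (1/14th of a pixel)
        print(f"Sun covers  ~{sun_diameter_km / km_per_pixel:.1f} px")     # ~7-8 px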

     

  5. 2 hours ago, alan4908 said:

    Here's some documentation of the Pixinsight Photometeric Color Calibration tool which might help answer your question: https://pixinsight.com/tutorials/PCC/index.html

    I'm having difficulty understanding their explanation - some of the things they've written don't make much sense to me. I'll outline what confuses me; maybe someone will understand and explain what they meant so we can get to the bottom of this green color in the image.

    Quote

    PCC is a very special tool for several reasons. Besides the quality of our implementation, what really makes PCC unique is the fact that it materializes our philosophy of color in deep-sky astrophotography, and primarily, our philosophy of image processing: astrophotography is documentary photography, where there is no room for arbitrary manipulations without the basic documentary criterion of preserving the nature of the objects represented.

    So far, so good - I agree completely. If you want to document the true color of the object - you can, and like them, I object to the notion that "color is arbitrary" in astrophotography. It can be arbitrary - but only by choice.

    Quote

    Following documentary criteria, such representation must be justified by properties of the objects photographed. This excludes, in our opinion, the classical concept of "natural color" based on the characteristics of the human vision, as applied to daylight scenes.

    Emphasis on the last sentence is mine, as it is the root of my confusion about what they are saying. They say they want to exclude the human vision component - that is ok; the light reaching the sensor is a physical thing, and like anything in nature that we measure, we should exclude our subjective sense of it.

    Quote

    The goal of PCC is to apply an absolute white balance to the image.

    This is where things go south ... In the introduction they talk about measurement and about excluding the notion of human vision, and yet the tool itself does what they describe as white balance - and most of the document is about choosing a reference white balance value.

    Here is the thing - white balance is directly tied to human vision and perception. Absolute color spaces like CIE XYZ do not have a white balance. They do not need one. White balance is used to define how the color of a particular object would be perceived by an observer under a certain illuminant. Our brain is a funny thing - in an environment where there is no pure white, we pick the closest color and that becomes the white reference for our brain. All other colors in the scene are perceived "shifted" in hue to match that white point. Our brain does a bit of color balancing of its own - even though the spectrum is the same, we perceive the color as being different.

    Astronomical images don't need white balancing in this sense. That is typical of daytime photography - we white balance to convert our perception from the environment the image was taken in to the environment it is viewed in. This is why cameras have different presets - sunny, cloudy, incandescent, fluorescent, etc. - to tell the camera what the illumination was like so it can convert to "standard" viewing conditions.

    In astronomy we don't have an illuminant - we have sources of light, and those don't depend on whether it is a sunny or cloudy day or whether we are using artificial illumination. No white balance is necessary or wanted.

    What we want to do, in order to produce what we colloquially call a color balanced image, is a color space transformation: from the raw tristimulus values produced by our camera sensor to tristimulus values in some standard color space. One can choose to convert either to CIE XYZ or to linear sRGB, as there is a well known linear transform matrix between the two. For the final color displayed on our computer screens we also need to apply the standard sRGB gamma correction - and voila, we get true color, or rather the closest representation of that particular color our screen can show.
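    To make that concrete, here is a minimal Python sketch of what I mean by a color space transformation followed by gamma correction. The 3x3 matrix is completely made up for the sake of the example - a real one has to be derived for the specific sensor and filter set:

        import numpy as np

        # Illustrative (made up) matrix taking linear camera RGB to linear sRGB.
        # Rows sum to 1 here purely for convenience of the example.
        CAM_TO_SRGB = np.array([
            [ 1.60, -0.45, -0.15],
            [-0.20,  1.35, -0.15],
            [ 0.05, -0.35,  1.30],
        ])

        def srgb_gamma(linear):
            """Standard sRGB transfer function, applied to linear values in [0, 1]."""
            linear = np.clip(linear, 0.0, 1.0)
            return np.where(linear <= 0.0031308,
                            12.92 * linear,
                            1.055 * linear ** (1 / 2.4) - 0.055)

        def camera_rgb_to_display(raw_rgb):
            """raw_rgb: linear camera tristimulus values, shape (..., 3)."""
            linear_srgb = np.asarray(raw_rgb) @ CAM_TO_SRGB.T   # color space transformation
            return srgb_gamma(linear_srgb)                      # gamma only at the very end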

    If we don't want to go as far as display, we can stop at the CIE XYZ value - that describes the color well enough and is standardized - or we can choose to represent the color in some other "color space", like BVR from UBVRI, where we would use the BVR filter responses as matching functions instead of the XYZ matching functions of the CIE XYZ color space.

    What we should not do is take the tristimulus values that we have, arbitrarily assign them to RGB, and then wonder why such an RGB triplet looks green when displayed on the screen.

    Back to the actual color of that thing:

    We have (R_raw, G_raw, B_raw) = (1, 1.06, 1.03), and this is our starting point - we have the camera, I suppose it is a Starlight Xpress Trius SX-814, and Astrodon RGB filters that produced these values?

  6. 7 hours ago, alan4908 said:
    RGB ratios with R normalized to unity

         DBE     PCC     Final
    R    1       1       1
    G    1.06    1.07    1.60
    B    1.03    1.00    1.54

    So, this particular green blob has quite a peaky spectrum with respect to the red.  

    Do you have any idea what "units" PCC works in? Or rather, what color space it is in?

    I wonder how nearly equal values in red, green and blue (green only 7% higher, red and blue equal) can suddenly change to red being roughly a third lower than both G and B?

  7. The difference between doublet and triplet scopes can be due to the F/ratio of the beam - that too can have an effect, as can the distance and type of the filter.

    Btw, this is something I have suspected for a long time now, and I often advised people with this issue to change the distance if they can, to see if it lessens the effect. I even argued with some people who maintained that this effect is independent of the filter and is a product of the cover window and micro lenses and, as such, always present.

    Valuable work on your part, showing that one can still reduce the effect to some degree.

    • Like 1
    • Thanks 1
  8. 36 minutes ago, tooth_dr said:

    This has puzzled me. Why would this be any different to imaging in an area of low light pollution with a luminance filter?  

    In principle it is not. However, a good NB filter will cut LP by a factor of about x50-100 - for example, a 3nm NB filter will eliminate 99% of LP over a 300nm range (400-700nm being the full visual spectrum).

    x100 less light is 5 magnitudes. Natural sky brightness is about mag22, so yes, a good NB filter makes as much difference as moving from a bright city center to the best skies available - from mag17-18 to mag22 (4-5 mag difference, or about x50-100 less light).

    Moving from mag19 skies to mag21 skies is only a 2 mag difference, which is about x6 less LP light - the effect of a UHC filter, maybe?
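    A quick sketch of the magnitude-to-flux arithmetic used above:

        def flux_ratio(delta_mag):
            """How many times less light a given magnitude difference corresponds to."""
            return 10 ** (0.4 * delta_mag)

        print(flux_ratio(5))   # 100.0 -> 5 mag darker sky ~ x100 less LP
        print(flux_ratio(2))   # ~6.3  -> 2 mag darker sky ~ x6 less LP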

    • Like 1
  9. 29 minutes ago, Vulisha said:

    Vignetting is due to the cardboard "dew shield" that I made to use as street lamp light pollution protection, and the EF-M to C adapter, but light pollution is reduced massively.

    It could be due to the EF-M / C adapter, but it is almost certainly not due to the dew shield / LP shield.

    The field of view is at most a degree or two wide here, and there is no chance that this amount of vignetting is due to a 1 degree light cone being clipped at the side. Odds are that the dew shield is larger than the OTA diameter, and the lens/mirror is certainly smaller than the OTA diameter - so even that one degree is questionable. Here is a diagram of what is happening:

    Let's assume that the mirror is exactly the same diameter as the dew shield:

    image.png.7ed81fd59886afa3c301bbd8dac4a723.png

    If a star is one degree off axis, then the parallel rays coming down the aperture will create a very small shadow on the primary - marked with the arrow - and the rest of the mirror will receive light. At most a very small fraction of the mirror, less than 1%, will be in shadow. You should not be able to detect that in the image, or it would be a very, very small effect. Usually the mirror is not the same diameter but a bit smaller - just enough to still be fully illuminated.

    You are free to use the dew shield / LP shield. If you are using a Newtonian scope, it will improve contrast even if there are no street lights around. It is often said that the extension in front of the focuser should be at least x1.5 the diameter of the tube, and today's scopes often have a very short section of tube in front of the focuser.

    image.png.fd0a7a90dbd838713357b3af7e3b5ccf.png

    The marked section of the scope needs to be at least 1.5 x 130mm = about 200mm or 20cm, and it is clearly not that much - so yes, a dew shield is a good way to add contrast with a Newtonian. Refractors have the focuser at the other end of the tube, so they provide very good contrast by design, especially if they have tube baffles as many do.

    • Thanks 1
  10. I have an idea.

    Looking at the image can be misleading. We don't know what sort of color processing has been done on it - whether color management was involved, whether saturation was boosted, and so on.

    A simple intensity measurement in the R, G and B channels of the recorded raw data, together with the camera specification and filter type, would be a good starting point to see what sort of color the object actually has, and to try to determine something about the spectrum of the light it gives off.

    That would give us some idea of what the object might be, or at least a clue about the nature of the light it is emitting.

  11. 49 minutes ago, Vulisha said:

    Yes, I was thinking that too - the EQ5 could be a great upgrade. It has full motion RA and DEC slow motion knobs, it has a polarscope option, and in the steel variant it should be really sturdy.

    Until you get the EQ5 - which you can motorize / goto enable in a DIY manner - look at this project: https://github.com/TCWORLD/AstroEQ

    Here is another tip for reducing star trails from RA periodic error - shoot targets at high DEC.

    RA error is most pronounced when you are tracking targets at the equator, while it is minimized when you track a target near the pole (in fact, when tracking exactly at the pole the FOV does not move at all - it only "rotates"). Choosing targets at high declination will lessen the trailing. Try imaging M81/M82 for example - maybe you will even be able to do 1 minute subs without trailing. If not, at least you will be able to keep most of your 30 second subs.
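    If you want to see the numbers, here is a tiny Python sketch of how the apparent trailing from an RA error shrinks with declination (the 10" error is just an assumed example value):

        import math

        def trailing_arcsec(ra_error_arcsec, dec_degrees):
            # An error along the RA axis is foreshortened on the sky by cos(DEC),
            # so the same periodic error leaves a shorter trail near the pole.
            return ra_error_arcsec * math.cos(math.radians(dec_degrees))

        print(trailing_arcsec(10, 0))    # 10.0" at the celestial equator
        print(trailing_arcsec(10, 69))   # ~3.6" at DEC ~ +69 (around M81/M82)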

     

    • Thanks 1
  12. 10 minutes ago, Xsubmariner said:

    I have enjoyed reading this thread, thanks Vlaiv for your straight forward words of wisdom. Now the confession, as a relatively new imager I have started capturing your words of wisdom in a notes document “ Vlaiv’s words of Imaging Wisdom” as a reference to remind myself in these early stages. Hope you don’t mind.

    Martin

    Of course I don't mind. I've wanted to do something similar myself for quite some time now, but never seem to find the spare time. I wanted to make a website / blog kind of thing and do my best to explain and demonstrate things related to astronomy (mostly imaging / processing / theory). Hopefully I'll get it started sometime soon.

    • Like 1
  13. Could the green thing be an artifact of some sort? I mean, the object is real, but the color ....

    Here is image that I found:

    image.png.527dc9f0666223cfe15baed05d599c6b.png

    Something similar happened to me when I imaged M101 - I also had bluish green blob thing:

    image.png.dda3e0b028f708c3d0463a4300375c6b.png

    But on some other images it is not green :D

    image.png.d70e69072a31823c7ffa66833cca3527.png

    image.png.d23d8652866239fea798e9ebe90d5b79.png

  14. Ok, I see no problem here whatsoever.

    There is a slight issue that red is quite attenuated compared to the other two colors - which means that your flat source produces really cool, bluish white light - possibly an LED flat panel?

    But that can be fixed in processing by using proper color balance techniques.

    Here are the results of my examination - first the flat split into components and stretched:

    image.png.03f1477f4cf9fe87d65888e630282453.png

    image.png.b40b8206899bf0525dede38720ba592f.png

    image.png.3000cf0ffd3024df1c467a375f6a834c.png

    Everything looks fine with this flat apart from the weak red channel, but as we said, that is due to the light source.

    Notice that there are no dust particles on the flat - so if we find some on the light, that will be a problem.

    Here is light without any processing:

    image.png.7424b88bd96d0e332b83c99c8bcb3849.png

    I don't see any dust shadows on it, but I do see vignetting, and that vignetting really reminds me of the flat above - stronger towards the right side of the image. Let's see what the calibrated sub looks like:

    image.png.2d3e9654654bd1b69acc2bedd0df8742.png

    From what I can see here, the vignetting is gone and there is just a background gradient in this sub, going from left to right.

    Once we remove that linear gradient:

    image.png.8d2b8d8628b77a3af15a6240551ab3c8.png

    That looks rather good?
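    For completeness, here is a crude Python sketch of the two operations I did above - flat calibration and removal of a simple linear gradient. It fits the plane over all pixels, so a proper tool would mask out the stars first; array shapes and inputs are assumed:

        import numpy as np

        def flat_calibrate(light, master_flat, master_dark=None):
            """Divide the light by the flat normalised to its mean (dark optional)."""
            light = light.astype(np.float64)
            if master_dark is not None:
                light = light - master_dark
            return light / (master_flat / np.mean(master_flat))

        def remove_linear_gradient(img):
            """Fit a plane a*x + b*y + c to the whole frame and subtract it.
            Crude - a proper tool would reject stars before fitting."""
            h, w = img.shape
            y, x = np.mgrid[0:h, 0:w]
            A = np.column_stack([x.ravel(), y.ravel(), np.ones(img.size)])
            coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
            plane = (A @ coeffs).reshape(h, w)
            return img - plane + plane.mean()   # keep the average background level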

  15. I'm not Olly, but can give it a go :D

    Place a bright star in the center of the frame (hopefully the center of the frame is aligned with the focuser's central axis, so rotating the camera will rotate the FOV around this same point).

    Using only the RA controls, move the star off center by some distance (more is better for accuracy, but make sure it stays in the FOV). Take frame-and-focus subs repeatedly so you can see what is going on.

    If the star remains on the horizontal line through the FOV center (the X axis), you are done; if not, rotate the camera until the star lands on this horizontal line.

    I'll do some diagrams to explain it better

    Step 1 - put star in center:

    image.png.9c164f3ba8b24f82cbe7209ed8b69cbe.png

    Step 2 - move the mount in RA until one of two things happens:

    2.a - star is not on X axis:

    image.png.5190640744d3368bbe67f905adf35a52.png

    or 2.b - star is on X axis:

    image.png.f06e9aec8e904d0a81d2df1e232a1868.png

     

    If 2.b happens - the star stays on the horizontal line - you are done; the RA axis is aligned with the X axis of the image.

    If 2.a happens, rotate the camera, checking the star position, until it comes to the horizontal line like this:

    image.png.4a8072ad4030caecd2b03bfd9a7791c8.png

    You can always check whether your camera is oriented RA/horizontal by putting a bright star in the center of the frame and then moving only in the RA or DEC direction.

    If you move in RA, the star should stay on a horizontal line - moving only in the X direction. If you move in DEC, the star should stay on a vertical line - moving only in the Y direction.
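    If you prefer to measure rather than rotate by trial and error, here is a small sketch, assuming you can read the star's pixel coordinates before and after a pure RA move (from plate solving or from the frame-and-focus display):

        import math

        def camera_rotation_error(x0, y0, x1, y1):
            """Angle between the RA axis and the image X axis, from the star's
            position before (x0, y0) and after (x1, y1) a move in RA only.
            Rotate the camera by this amount (sign depends on your orientation)."""
            return math.degrees(math.atan2(y1 - y0, x1 - x0))

        # Example: star moved from frame centre (2000, 1500) to (2600, 1540)
        print(camera_rotation_error(2000, 1500, 2600, 1540))   # ~3.8 degrees off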

    Hope this helps.

     

    • Like 2
  16. 44 minutes ago, Scooot said:

    Thanks very much for your detailed response Vlaiv.

    I won’t use their SNR in this tool from now on. It certainly doesn’t seem to very useful with this set of subs. :) 

    Do you by any chance know how it is calculated? That might shed some light on whether it is useful and under what circumstances.

    For example - it could be calculated like this:

    Take all pixels that are dark enough (sigma reject all bright pixels) and calculate the standard deviation of those pixels - that is a good approximation of the background noise if there is enough empty background; it will not work well if much of the image is covered by nebulosity.

    Take all the other pixels and calculate their average value, then subtract the average of the first group (the LP level) - this gives you the "average" signal.

    So you could conclude that SNR is average signal / background noise, for example, or something similar.
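    Something along these lines, as a Python sketch - to be clear, this is my guess at one possible method, not what the tool actually does:

        import numpy as np

        def crude_image_snr(img, k=3.0):
            # Background pixels: everything below median + k * sigma (crude sigma reject).
            data = img.ravel().astype(np.float64)
            threshold = np.median(data) + k * np.std(data)
            background = data[data < threshold]
            bright = data[data >= threshold]
            noise = np.std(background)
            if bright.size == 0 or noise == 0:
                return 0.0
            # "Average signal" = mean of bright pixels minus the background (LP) level.
            signal = np.mean(bright) - np.mean(background)
            return signal / noise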

  17. 16 minutes ago, wornish said:

    This is a very interesting thread can I check my understanding ?

    I have an image that is 4616 x 3488 pixels and the FWHM is on average about 3.5

    So if I use the formula for best image size then I should resize the image by 3.5/1.6 = 2.1875

    ending up after resizing at  2110 x 1595px

    Is this correct ?

    Provided that the FWHM is reported in pixels - yes.

    In this situation I would actually bin x2, then check the FWHM and maybe scale by a bit more, but I don't think you would need any further scaling as the difference would be minimal. Do the binning while the data is still linear, prior to processing.

    If you have 3.5" FWHM, then you should aim at a sampling rate of about 2.19"/px.
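    As a quick sketch of that arithmetic, assuming a target FWHM of ~1.6 px after resampling:

        fwhm_px = 3.5
        target_fwhm_px = 1.6
        factor = fwhm_px / target_fwhm_px                   # ~2.19

        print(round(4616 / factor), round(3488 / factor))   # ~2110 x ~1595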

    • Like 2
    • Thanks 1
  18. I'm very skeptical of any tool that takes an image and reports single SNR value.

    Such a thing does not make much sense. Every single pixel has its own signal and its own noise, and every signal in the image has an SNR associated with it. In principle there is no way to find out the SNR of any pixel from a single image.

    There is a way to get a good SNR approximation if there is not much change in conditions while the image is shot - meaning the SNR remains roughly the same over the course of the evening. In that case you can take the average of the values for each pixel as the signal - that is what regular stacking does - and take the standard deviation of the stack for each pixel divided by the square root of the number of subs as the noise of the stack (or just the standard deviation, without the division, for the noise per sub).
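    As a sketch, this is what that per-pixel estimate looks like, assuming a registered stack held as a 3D array (caveats in the next paragraph):

        import numpy as np

        def per_pixel_snr(stack):
            """stack: registered subs, shape (n_subs, height, width).
            Signal = per-pixel mean over subs (what average stacking produces).
            Noise  = per-pixel standard deviation over subs / sqrt(n_subs)."""
            n = stack.shape[0]
            signal = stack.mean(axis=0)
            noise = stack.std(axis=0) / np.sqrt(n)
            return np.divide(signal, noise,
                             out=np.zeros_like(signal), where=noise > 0)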

    This works if the subs are the same - but they almost never are. As the target moves across the sky during the evening it passes through parts of the sky with different levels of LP and through different air mass, so it is attenuated by a different amount - more or less bright. The first changes the total amount of noise; the second changes both the signal and the noise - thus each sub will have a different level of noise once we equalize the signal.

    There are methods to both equalize the signal and measure the noise - but not for a single sub on its own, rather per intensity range and per sub as part of an ensemble of subs.

    Back to the original question and why there is no single SNR for an image. Imagine you have two galaxies in the image - one bright with signal 100 and the other faint with signal 10 - and background noise of 2. The SNR of the first galaxy is 100/2 = 50, the SNR of the second galaxy is 10/2 = 5, and the SNR of the background sky is 0/2 = 0.

    Which one is SNR of the image? 50, 5 or 0?

    Now imagine there is background LP, a component of unwanted signal over the whole image - let it be 2 as well. The signal in the bright galaxy is now 102, so its SNR is 51; similarly, the faint galaxy's SNR is now 6 and the background's is 1.

    But we did not change the signals of the galaxies, nor did we change the level of noise (although background LP does raise the noise - I'm just making a point here) - suddenly we have different SNR values simply because there is some offset in the image values.

    A true SNR value can only be associated with each pixel, and can be approximately calculated once you:

    - treat each sub as part of an ensemble of subs

    - equalize the signal levels in each sub

    - account for the different level of noise in different subs (this is the really hard part)

    - remove the background LP signal (this part is also quite hard).

    Weighted stacking by a single SNR value per sub is not really the best way to handle things.

    • Like 1
  19. 57 minutes ago, FMA said:

    I didn't understand it all, but I assume that in your opinion it is less accurate than this...

    https://www.firstlightoptics.com/skywatcher-mounts/skywatcher-heq5-pro-synscan.html

    And is 230 pounds less.......

    Am I being stupid to think that having equatorial and alt-az on the same tripod is something desirable?

     

    I'm not sure who you are referring to, but if that response was directed towards me, I was saying the following:

    If you want a mount that you will use both for visual in alt-az mode and for imaging in EQ mode, and both are equally important to you, and you want something more lightweight on a limited budget - then it is a good choice.

    If you don't have a limited budget and you don't mind the weight, the AZEQ6 is a better mount for both imaging and observing - well, primarily for imaging.

    If you are limited by your budget and imaging is very important to you, but visual not so much, then look at the HEQ5, and understand that the stock version is not going to perform to the best of its ability - but once you tune and mod it, it will perform better than the AZEQ5 for imaging.

    From what I gathered, I think you will be happy with the AZEQ5, as you want to both observe and do some casual imaging - this is why I linked that thread where an AZEQ5 was tuned a bit. I also asked on that thread what sort of guiding results could be expected from the AZEQ5 - and yes, I was right about that: about 1" RMS or a bit less. That is in the range of a stock HEQ5, so guide performance for casual imaging will be quite good (although it also means you will need to tweak your mount, as most SkyWatcher owners do at some point, to bring the best out of it).
