Posts posted by vlaiv

  1. I think the easiest and most cost-effective thing would be to just try it out.

    As long as the Samsung 1000nx can produce raw subs, there is a setting for the needed exposure length (at least 30s and up to a few minutes would be good), and you can turn off any "advanced" processing - I think it is just a matter of trying it out (if raw output or long exposures are missing - then maybe there is no point in trying).

    The only thing I can think of that will cause issues is if the camera is doing some "enhancements" - like noise reduction or similar - and you can't turn that off. That is going to create issues in stacking later on, and you won't get the quality of images that you would otherwise get from pure raw files with no processing done to them.

    • Like 1
  2. 4 minutes ago, jager945 said:

    I'm going to respectfully bow out here. 😁

    If you wish to learn more about illuminants (and their whitepoints) in the context of color spaces and color space conversions, have a look here;

    https://en.wikipedia.org/wiki/Standard_illuminant

    Lastly, I will leave with you a link/site that discusses different attempts to generate RGB values from blackbody temperatures, and discusses some pros and cons of choosing different white points, methods/formulas, sources for each, and the problem of intensity.

    http://www.vendian.org/mncharity/dir3/blackbody/

    Clear skies,

    I do understand that you might not want to discuss this further as we are moving away from the topic of this thread and I agree.

    I will just quote the first sentence of the wiki article that you linked to:

    Quote

    A standard illuminant is a theoretical source of visible light with a profile (its spectral power distribution) which is published. Standard illuminants provide a basis for comparing images or colors recorded under different lighting.

    The emphasis on the last part of the sentence is mine.

    We only use D65 because it is part of the sRGB standard - same as the gamma function associated with that color space. It is a way of encoding color information in that "format" - which is the de facto standard for the internet and for images without a color profile.

    You really don't need any sort of illuminant information when doing color calibration - just use a custom RAW->XYZ transform (which will depend on the camera) and the standard XYZ->sRGB transform and you are done - no need to define a white point or take an illuminant into consideration.
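    To make that concrete, here is a minimal sketch (Python/numpy) of the whole chain. The raw_to_xyz matrix below is only a placeholder - in reality it is the matrix measured/fitted for the particular camera - while the XYZ->linear sRGB matrix and the sRGB gamma encoding are the standard ones. Note that no white point appears anywhere in the chain.

    import numpy as np

    def srgb_gamma(c):
        # standard sRGB transfer function (encodes linear values for display)
        c = np.clip(c, 0, 1)
        return np.where(c <= 0.0031308, 12.92 * c, 1.055 * np.power(c, 1 / 2.4) - 0.055)

    raw_to_xyz = np.eye(3)                 # placeholder - use the matrix fitted for your camera
    xyz_to_lin_srgb = np.array([
        [ 3.2406, -1.5372, -0.4986],       # standard XYZ -> linear sRGB matrix
        [-0.9689,  1.8758,  0.0415],
        [ 0.0557, -0.2040,  1.0570],
    ])

    raw_rgb = np.array([0.20, 0.50, 0.30])    # example linear raw triplet (made up)
    xyz = raw_to_xyz @ raw_rgb                # camera space -> XYZ (absolute)
    srgb = srgb_gamma(xyz_to_lin_srgb @ xyz)  # XYZ -> linear sRGB -> encoded sRGB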

  3. It is - up to a point.

    It's not the focal length of the guide scope that is important - it is the sampling rate of the guide system - so focal length plus pixel size of the camera used.

    Here is a bit of math to explain what you should be paying attention to.

    The centroid algorithm is said to have a precision between 1/16 and 1/20 of a single pixel. This means that if you, for example, have a 4"/px sampling rate - your precision in determining star position will be limited to about 0.2"-0.25".

    If you have a mount that can guide to 0.2"-0.3" RMS - this is clearly not enough, because the guide system will issue corrections larger than that simply due to the error in star position.

    What is an appropriate guide focal length then? You will hear different recommendations depending on who you ask; here is my reply:

    Either start with your imaging resolution or with your guide performance. If you start with imaging resolution - use half of that, as you need at most that much guide RMS error. If you take measured RMS error - then just use that value. You need your guide precision to be at least 3-4 times smaller than that.

    Let's say that you are imaging at 1.5"/px and you guide at 0.8" RMS. You want your guide precision to be something like 0.2" (1/4 of the RMS). Let's say that your guide camera has 3.75um pixels; your guide sampling rate can be up to x16 larger than the wanted precision of 0.2", so your guide resolution is going to be 3.2"/px.

    From 3.2"/px and 3.75um pixel size we calculate the required focal length to be about 240mm.
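    A quick sketch of that arithmetic (Python, using the example numbers above):

    pixel_size_um = 3.75              # guide camera pixel size
    wanted_precision = 0.2            # arcsec, ~1/4 of the 0.8" RMS guide error
    centroid_fraction = 16            # centroid precision ~1/16 of a pixel

    guide_sampling = wanted_precision * centroid_fraction        # 3.2 "/px
    focal_length_mm = 206.265 * pixel_size_um / guide_sampling   # ~242 mm
    print(round(focal_length_mm), "mm")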

    • Like 1
  4. 12 minutes ago, jager945 said:

    Yup, that looks like a simple way to process luminance and color separately.

    With regards to the colouring, you're almost getting it; you will notice, for example, that color space conversions require you to specify an illuminant. For sRGB it's D65 (6500K). E.g. an image is supposed to be viewed under cloudy sky conditions in an office environment.

    However in space there is no "standard" illuminant. That's why it is equally valid to take a G2V star as an illuminant, a random selection of stars as an illuminant, or a nearby galaxy in the frame as an illuminant, yet in photometry, white stars are bluer again than the sun, which is considered a yellow star. The method you choose here for your color rendition is arbitrary.

    I'm not sure that you understand the concept of an illuminant.

    You are right that there is no standard illuminant in space - if you have emission sources, then there is no illuminant at all. An illuminant is only important when you try to do color matching for a color that is reflective in nature.

    XYZ is an absolute color space and therefore does not require an illuminant. Viewing under certain predefined conditions - like an office environment with a certain level of lighting - for a given color space ensures that if you take a colored object, look at it, and then look at its image on the screen, the color will be matched in your brain (you will see the same thing).

    With astrophotography there is no such object that needs to be illuminated for us to see its color - we are already getting the light / spectra from the stars as they are. If we observe it on a computer monitor in a dark room, it will be as if we were looking at "boosted" starlight coming through an opening in a wall while we sit in that dark room. If we do it in a lit-up office - same thing - the monitor will show us the "color" we would see if there were an opening in the wall with amplified starlight coming through it.

    Again - I'm not talking about perception of the color but rather the physical quantity. You can make sure that your images indeed represent the xy chromaticity values derived from XYZ space, and that is what is called proper color: viewed on a properly calibrated monitor (within the gamut of that display device), it will produce the same brain stimuli that you would get from looking at the actual starlight, amplified enough to match that intensity under the same circumstances/conditions - all of that with minimal error (it's not going to be 100% due to the factors discussed, but it will be the closest match).

  5. 1 hour ago, jager945 said:

    I'm sad that the point I was trying to make was not clear enough. That point boiling down to; processing color information along with luminance (e.g. making linear data non-linear) destroys/mangles said color information in ways that make comparing images impossible.

    It really depends on how you process your color image. What do you think about the following approach:

    ratio_r = r / max(r, g, b)
    ratio_g = g / max(r, g, b)
    ratio_b = b / max(r, g, b)

    final_r = gamma(inverse_gamma(stretched_luminance)*ratio_r)
    final_g = gamma(inverse_gamma(stretched_luminance)*ratio_g)
    final_b = gamma(inverse_gamma(stretched_luminance)*ratio_b)

    where r, g, b are color balanced - i.e. (r, g, b) = (raw_r, raw_g, raw_b) * raw_to_xyz_matrix * xyz_to_linear_srgb_matrix

    This approach keeps the proper RGB ratio from the linear phase regardless of how much you blow out the luminance during processing, so there is no color bleed. It does sacrifice the intended light intensity distribution, but we are already using non-linear transforms on intensity so it won't matter much.
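    Here is a runnable sketch of that approach (numpy), assuming the sRGB gamma curve as the gamma function and r, g, b as already color-calibrated linear channels:

    import numpy as np

    def srgb_gamma(c):
        c = np.clip(c, 0, 1)
        return np.where(c <= 0.0031308, 12.92 * c, 1.055 * np.power(c, 1 / 2.4) - 0.055)

    def srgb_inverse_gamma(c):
        c = np.clip(c, 0, 1)
        return np.where(c <= 0.04045, c / 12.92, np.power((c + 0.055) / 1.055, 2.4))

    def compose(stretched_luminance, r, g, b):
        # keep the linear r:g:b ratios and apply them to the (linearized) stretched luminance
        m = np.maximum(np.maximum(r, g), b)
        m = np.where(m == 0, 1, m)            # avoid division by zero on empty pixels
        lin_lum = srgb_inverse_gamma(stretched_luminance)
        return (srgb_gamma(lin_lum * r / m),
                srgb_gamma(lin_lum * g / m),
                srgb_gamma(lin_lum * b / m))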

    2 hours ago, jager945 said:

    Colouring in AP is a highly subjective thing, for a whole host of different reasons already mentioned, but chiefly because color perception is a highly subjective thing to begin with. My favourite example of this;

    Again - it is not a subjective thing unless you make it so. I agree about perception. Take a photograph of anything printed on paper, view it under yellow light, and complain about how color is subjective - it is not a fault in the photograph - it contains proper color information (within the gamut of the medium used to display the image). I also noticed that you mentioned monitor calibration in your first post as something relevant to this topic - it is irrelevant to proper color calibration. The image will contain proper information, and with a proper display medium it will show the intended color. The image can't be responsible for your decision to view it on the wrong display device.

    The spectrum of light is a physical thing - it is absolute and not left to interpretation. We are, in a sense, measuring this physical quantity and trying to reproduce it. We are not actually reproducing the spectrum with our displays but the tristimulus value, since our vision system will give the same response to different spectra as long as they stimulate the receptors in our eyes in equal measure. This is a physical process, and that is what we are "capturing" here. What comes after that, and how our brain interprets things, is outside this realm.

  6. 39 minutes ago, jager945 said:

    Without singling anyone out here, there may be a few misconceptions here in this thread about coloring, histograms, background calibration, noise and clipping.

    I will have to disagree with you on several points.

    In fact, you are even in disagreement with yourself on certain points, like this one:

    40 minutes ago, jager945 said:

    Add to that atmospheric extinction, exposure choices, non-linear camera response,

    The important point being - non-linear camera response - and then in the next segment you write:

    41 minutes ago, jager945 said:

    1. Recording the same radiation signature will yield the same R:G:B ratios in the linear domain (provided you don't over expose and our camera's response is linear throughout the dynamic range of course). If I record 1:2:3 for R:G:B for one second, then I should/will record 2:4:6 if my exposure is two seconds instead (or is, for example, twice as bright). The ratios remain constant, just the multiplication factor changes.

    Which is the definition of camera linearity - as long as, for the same source and different exposures, the results have the same intensity ratio as the ratio of exposure lengths - the camera is linear.

    In fact - most cameras these days are very linear in their response, or at least linear enough not to impact color that much.

    46 minutes ago, jager945 said:

    As many already know, coloring in AP is a highly subjective thing. First off, B-V to Kelvin to RGB is fraught with arbitrary assumptions about white points, error margins, filter characteristics. Add to that atmospheric extinction, exposure choices, non-linear camera response, post processing choices (more on those later) and it becomes clear that coloring is... challenging. And that's without squabbling about aesthetics in the mix.

    This is a common misconception. Color theory, while having a certain tolerance / error, is well defined. There is a well-defined absolute color space - the XYZ color space - and any source out there will have a particular coordinate in XYZ space, or more importantly an xy chromaticity - because, as you pointed out, the magnitude of the tristimulus vector depends on exposure length among other things.

    Given a transform matrix between the camera's color space and the XYZ color space - one can transform raw values into XYZ, with a certain error - which depends both on the chosen transform matrix and on the characteristics of the camera. The transform that produces the least amount of error is usually taken.

    Once we have the xy chromaticity of a source - it is easy to transform that to the color space of the reproduction device. There will again be some errors involved - this time because display devices are not capable of displaying the whole XYZ color space, for example - the gamut is smaller. The choice of the sRGB color space for images that will be shared on the internet and viewed on computer screens is a very good one, as it is the de facto standard and the expected color space when no color profile is included with the image.

    All of the above means that if two people using different cameras shoot the same target and process color in the same proper way - they will get the same result, within the limits described above (just the camera transform matrix errors, since both will produce images in the same gamut space - sRGB - so that source of error will be the same).

    Not only will such images look the same, but if you take the spectra of all sources in the image and ask people to say how much the recorded image differs in color from those spectra (pixel for pixel), you will get the least total error compared to any other type of color processing - or in other words, it will be the best possible match in color.

  7. 12 minutes ago, maw lod qan said:

    Powered by radioisotopes. Am I correct in assuming (I'm worried using that word) that's simple terms for nuclear power?

    Sort of. It's not like a nuclear reactor, where we have a stable chain reaction (as opposed to an explosion, where the chain reaction is uncontrolled).

    Using radioisotopes means using a small amount of radioactive material - radioactive enough to keep its own temperature "lukewarm" - and then using a Seebeck / thermoelectric generator, a solid-state device that works on a temperature differential: on one side the warm radioisotope, on the other cold empty space (or a conducting element exposed to empty space). The heat flow generates electricity.

    • Like 3
  8. Just now, alacant said:

    Hi. Yes but only ever used levels. 

    Thanks, will give it a go.

    I'm not recommending that you do that :D - I was just showing that it can be done really easily. I don't like the effect that this sort of processing produces.

  9. 5 minutes ago, alacant said:

    Wow... Apart from moving the slider to stretch the image, I don't think we've any control over the shape of the histogram, have we? Or rather you have, but it's way beyond anything we could do ATM.

    You are using Gimp as far as I can see? It is really easy to get the sort of histogram shape that has been discussed above; here are the steps (on a generic bell-shaped noise image):

    Step 1: use levels to do an initial linear stretch and make the background visible (I made a random noise image to resemble the background of an astro image - and added a one-pixel star to get the contrast range of an astro image):

    [attached screenshot]

    Step 2: Now we have a nice bell-shaped histogram after the first linear stretch; in the second step we do the curves like this:

    [attached screenshot]

    The leftmost point is raised a bit - so the output is limited at the bottom and does not go all the way to black. The same point on the "x" axis is starting to "eat" into the histogram. The next point is just a "pivot" point so we get a nice, smoothly rising curve, and the next two points are just a classical histogram stretch. This configuration of curves is often seen in tutorials, and it produces a histogram that looks like this:

    [attached histogram]

    Flat on the left side and almost bell-shaped on the opposite side - with a bit more "ease out" because we applied a gamma-type histogram stretch in that section.

    Just one round of levels and one round of curves with more or less "recommended" settings, and we have produced that effect.
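    For anyone who wants to reproduce the effect outside of Gimp, here is a small numpy sketch of the same two steps on synthetic bell-shaped data (the numbers are arbitrary, chosen only to mimic the screenshots):

    import numpy as np

    rng = np.random.default_rng(0)
    img = np.clip(rng.normal(loc=0.05, scale=0.01, size=(512, 512)), 0, 1)  # bell-shaped background

    # Step 1: "levels" - linear stretch between chosen black and white points
    black, white = 0.02, 0.12
    levels = np.clip((img - black) / (white - black), 0, 1)

    # Step 2: "curves" - raise the leftmost output point and apply a gamma-like rise,
    # which flattens the left side of the histogram
    low_out = 0.08
    curved = low_out + (1 - low_out) * levels ** 0.6

    hist, edges = np.histogram(curved, bins=256, range=(0, 1))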

     

     

  10. 15 minutes ago, alacant said:

    Hi

    Really? Then I don't understand!

    I've also noticed that the background often looks funny in the images you are producing. I don't think it is necessarily background clipping - in this example it has more to do with the distribution of the background, which gives it that feel.

    You are right to say that your histogram is not clipping - but it is not bell-shaped either:

    This is the green channel - a histogram of the range 0-32 binned to 33 bins (one bin per value) - from the first image you posted:

    [attached histogram]

    The same thing for the second image you posted - the one with color correction and the better-looking background:

    [attached histogram]

    Although your background is not clipping, its histogram shows a very steep left side in the first image - it is almost as if the histogram were indeed clipping, not at 0 but somewhere around 3-4.

    The second image shows a histogram more closely resembling a bell-shaped curve, as you would expect from a natural-looking background - and this shows in the image - the background does indeed look better. At least to my eye, and as far as I can tell @ollypenrice agrees with me:

    36 minutes ago, ollypenrice said:

    Your rework looks much nicer to me. The sky looks lighter, a better colour and not so shiny.

     

    • Like 1
  11. Nice capture.

    To my eye, this image shows the typical color balance that one would get from StarTools processing.

    If you aim for more realistic color in stars, you really need to do proper color calibration of your image. For reference, here are a few good graphs:

    First - the color range:

    [attached graph]

    Second - frequency of stars by type:

    [attached graph]

    Mind you, this second graph is over-saturated - that happens if you don't gamma correct your colors for the sRGB standard and yet use an image format that implies the sRGB standard. I think the first scale is more accurate, but it does not go as deep as this one (which is to be expected, as O-type stars are about 0.00003% of stars, so the range above may have stopped at B rather than going all the way to O type).

    Match those two against your image above and you will see that you are too cyan/aqua, and possibly over-saturated, in your star colors.

    • Like 3
  12. Do you mind posting the original sub, unless that exact jpeg came straight out of the camera?

    The fact that the above image is 1920x1280 means that either you or the camera made the image smaller. If this image came from the camera as is - then Canon probably implemented a very nice way to reduce image size - using algorithms similar to binning.

    An additional thing that can happen is jpeg smoothing. In any case, the combination of those factors could make the image look like a x10 longer exposure, so the image is effectively not 11s but about 110 seconds.

    Still an impressive result.
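    As an aside, here is a rough numpy sketch of the binning/SNR argument above - the original frame size and the reduction factor are assumptions, since we don't know exactly how the camera resized the image:

    import numpy as np

    def bin_average(img, factor):
        # average factor x factor blocks - noise drops by the square root of the block size,
        # so SNR improves as if the exposure were factor^2 times longer
        h, w = img.shape
        h, w = h - h % factor, w - w % factor
        return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    frame = np.random.normal(100.0, 10.0, size=(3840, 5760))  # synthetic noisy full-size frame
    small = bin_average(frame, 3)                              # 1280 x 1920, ~9x "longer" exposure in SNR terms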

  13. 4 minutes ago, PWS said:

    Both are "lenses" and I'm happier there than with scopes

    I honestly can't see the distinction :D, but I do understand that you have certain requirements. As I don't have experience with either, I can't help much except to point out that the Canon lens is a bit on the heavy side to just hang off the camera body - you might want to look into some sort of bracing system, like rings and a dovetail, to support the lens.

    • Like 1
  14. Two very different focal lengths.

    Do you have any idea what camera you will be using and what the intended FOV is? Do you already own either of them?

    My personal choice for such a small focal length would be something like this:

    https://www.firstlightoptics.com/william-optics/william-optics-2019-zenithstar-73-ii-apo.html

    +

    https://www.teleskop-express.de/shop/product_info.php/info/p11122_Riccardi-0-75x-APO-Reducer-and-Flattener-with-M63x1-Thread.html

    (provided that the focuser on the scope has an M63 thread - and being a 2.5" R&P, I believe that to be the case. Also, I would probably go for the TS Photoline 72mm scope over the WO one).

  15. Actually, now that I've inspected the image further - it's not a shadow - it is a reflection.

    At first I noticed just the dark spot - but it is the bright doughnut around this dark spot that is the issue - it is a reflection of unfocused light:

    Maybe this screenshot will make it easier to see:

    [attached screenshot]

    That sort of thing happens when there is a bright star in the frame, and it can't be corrected with flats - it is a reflection - same as this here:

    [attached image]

    There is a faint halo around that star.

    In your image it looks like the culprit is Alnitak - it is the same distance away, in the opposite direction, with respect to the optical axis.

  16. No, light can't reach it as it is behind the secondary mirror, so you can't see the center-mark sticker. Even if you could - it would be so far out of focus that it would not register.

    What you are seeing here is a dust shadow - a normal thing that is corrected with flats. Given its size, and the fact that you are using a Newtonian scope and hence very likely a coma corrector - it is probably on the surface of the coma corrector.

    Take flats to correct it (in fact, you should be taking flats anyway).

    Here is a screenshot of one of my flats from an RC scope:

    [attached flat frame screenshot]

    It shows a lot of "center marks" :D

     

    • Like 1
  17. Just now, pez_espada said:

    Will do, and I will keep on going trying to improve my collimation skills, etc. I am pretty new to Newtonians.  Perhaps an autocollimator will do? any experiences? I was looking at one of these https://www.firstlightoptics.com/catseye-collimation-tools/catseye-infinity-xlkp-autocollimator.html

    Are those better than the typical Cheshire/SightTube that I am currently using?

    Can't really tell - I'm not using anything in terms of a collimation aid. I do occasional touch-ups on a star - pop in a high-mag EP, keep the star centered (Polaris does a great job with dob-mounted, manually tracked scopes), defocus slightly, and observe how concentric the rings are.

  18. 1 hour ago, pez_espada said:

    I see what you mean but still I am surprised that all these factors put together can defeat an advantage of 27% more aperture in the reflector. This is 62% more light grasp, and the smaller scopes still not only matches the larger but defeat it. I am surprise that nobody else  seem to be surprised by this observation.

    I think the really important part of the puzzle here is the precision of the focus. Next time, if you feel like experimenting - try not having perfect focus and see how it affects the threshold visibility of stars.

  19. 29 minutes ago, AweSIM said:

    Hi Vlaiv, I tried installing the ASCOM driver for ZWO but it started giving me issues. Frequently, no image would be downloaded from the ASCOM driver. So I uninstalled the driver and reverted back to the native driver. I checked out the ASICAP app that I also use to test the camera's live view for focus and stuff and found that there INDEED is a hidden setting about WB for this camera. And as you said, its set to R(52) B(95). Now I can change these values and the image updates accordingly. The issue is, what values do I set it to? I've tried setting both to 95 and then both to 52 and some other values as well. The issue is that now the bayer pattern seen in bias is such that R and B are of equal intensities, but there's a difference between G and R/B channels, What values should I set it to to achieve no WB?

    My guess is that it should be 50/50 for no WB, but I can't be 100% sure.

    If you had trouble with images downloading from the ASCOM driver - did you try changing the USB speed parameter in the ASCOM driver (lowering it)? BTW, for the ASCOM driver to work you need to have both drivers installed (just in case you only tried ASCOM and it did not work).

    • Like 1
  20. 19 minutes ago, AweSIM said:

    I'll try this next. I'm pretty sure I haven't installed the ascom driver and only installed the core zwo driver from its website 

    :D

    I think this solves the problem. You need to use the ASCOM driver for your captures.

    Once you install that driver, you will have the option in SGP to choose the ASCOM driver, and there you will be able to set gain and offset.

  21. 13 minutes ago, AweSIM said:

    1. I have no idea how to control the WB of this camera. I haven't changed anything in its driver, and honestly, I haven't even seen any GUI for the driver that lets you control anything about the camera. Just that there was a driver available from ZWO site and I downloaded and installed it and that's it. I'll dig around on the internet about how to control its WB and fix it there. I'll report back when I've figured it out.

    This part got me worried as well. I could not find any reference to WB in the ASCOM drivers online. In fact, I found one discussion on the ZWO user forum that talks about WB settings - and there Sam stated clearly that the ASCOM drivers don't have this setting.

    I can think of only one possibility for the changed white balance. Have you used SharpCap by any chance at any point? I suspect that there might be an interaction between the native and ASCOM drivers for this camera model. In the native driver you can set WB, and it defaults to something like 50/100 and 90/100. I remember that the latter is for blue and the former is perhaps for the red channel, but I'm not 100% sure on that. The next thing I read in the same post on the ZWO forum is that once you set those WB values - they will not change.

    Maybe if you set WB values in the native driver, and the ASCOM driver somehow works through the native driver to produce images - it is quite possible that the ASCOM results are affected by the native driver and its remembered WB settings.

    This is of course just a theory - maybe even a bit far-fetched - but it is the only explanation I can offer for this.

    [attached screenshot]

    The above is taken from the internet - it shows SharpCap with an ASI294 camera, with the relevant controls marked with an arrow.

    21 minutes ago, AweSIM said:

    4. Can you tell me how you inspected the values inside these FIT files? How can I check that none of the values inside my bias frames are not negative and adjust the offset accordingly?

    You can't check for negative values - you'll never get negative values out of the camera. You can only conclude that there is histogram clipping to the left - by examining pixel ADU values and looking at the histogram.
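    For reference, this is roughly how the subs can be inspected - a minimal sketch assuming Python with astropy and numpy (the file name is just an example):

    import numpy as np
    from astropy.io import fits

    data = fits.getdata("bias_gain1_offset0.fit").astype(np.int64)
    print("min:", data.min(), "max:", data.max(), "mean:", round(float(data.mean()), 2))

    # histogram of the low end - a pile-up of counts in the very first bins means clipping to the left
    hist, edges = np.histogram(data, bins=50, range=(data.min(), data.min() + 1000))
    print(hist[:10])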

    Here are the values for the subs you posted:

    1) Gain 1 offset 0

    [attached statistics/histogram screenshot]

    It has 0 as the minimum - this is very strange. I think it is again a consequence of that white balance thing (I have no other explanation for it). All other ASI cameras that I've seen behave like this - if they have less than a 16-bit ADC, all ADU pixel values that come from the camera are multiplied by either 4 or 16, depending on whether the camera is 14-bit or 12-bit. This is because the 12/14-bit values are stored within a 16-bit word in the most significant bits, with the LSBs being 0. That makes the values multiples of 4 (2^2 for a 14-bit camera) or 16 (2^4 for a 12-bit camera). I have never seen 0 as an ADU value.

    On my ASI178, which is 14-bit, the lowest number I've seen is 4. On my ASI1600, which is 12-bit, the lowest number I've seen is 16.
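    As a tiny illustration of that packing (an assumption about how the driver formats the data, consistent with the numbers above):

    adc_value_12bit = 37                 # what a 12-bit ADC actually measured
    stored_16bit = adc_value_12bit << 4  # packed into the MSBs of a 16-bit word = 592, always a multiple of 16
    recovered = stored_16bit >> 4        # shifting back recovers the native ADC value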

    In any case - the histogram shows that there is clipping to the left, and this combination of offset and gain is not good.

    2) Offset 10 and Gain 1:

    [attached statistics/histogram screenshot]

    This clearly shows that the histogram is separated from the left side and the minimum value is 476. Next to it is the maximum value - which is an odd number - 1533. You can't have odd numbers if every value is multiplied by 4. Again, this shows that the values have been altered from the default format.

    Now the important bit - I can't tell if the histogram is ok or not, as it has been white balanced. I suspect it is ok - but look how far right the blue peak is. It is not that far right because it has that much offset - it is that far right because it has been multiplied by some value - white balanced. What if it was too far left before the multiplication?

    Offset 64 is way too high for this camera - no need to go that high. Offset 10 is quite ok.

    Use Gain 120 and offset 10 and I think you will be ok.

    The big question remains - have you used SharpCap, and were the WB settings perhaps changed from that software?

     

     

  22. 23 minutes ago, CaptainShiznit said:

    Again being my first attempt I just saved from DSS the default 16bit. What would be the better format between 32bit TIF/FITS and -integer/-rational?

    Use 32-bit floating point. I prefer the FITS format as it is the standard format for astronomy images, but many people prefer TIF because it is easier to work with - most image processing software will open that but not FITS.

    Gimp opens both. Do note that older versions of Photoshop can't work with 32-bit data and need 16-bit data, and Gimp prior to 2.10 is limited to 8-bit data only.

    25 minutes ago, CaptainShiznit said:

    found the flats a bit tricky as it was my first attempt at calibration frames. I put a baby's muslin cloth folded once over the scope and secured with an elastic band then pointed at my laptop screen with a white browser page open. Camera was in Av mode. Do you have any suggestions for a better method?

    Wim gave better advice than I could, because I don't work with DSLRs, but in case your flat calibration does not work - it could be due to the darks rather than the flats.

    I would suggest that you use bias as the master flat dark, use bias and darks with dark optimization turned on in DSS, and then try calibration like that. While still not perfect, I think that would be the best way to calibrate your data.
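    For clarity, here is a minimal sketch of the calibration arithmetic this amounts to (numpy; master frames are assumed to be already stacked, and the dark scaling that DSS's dark optimization adds is left out):

    import numpy as np

    def calibrate(light, master_dark, master_bias, master_flat):
        # bias is used as the "flat dark" - it is removed from the flat before normalization
        flat = master_flat - master_bias
        flat = flat / flat.mean()              # normalize the flat to unity mean
        return (light - master_dark) / flat    # dark-subtract the light, then flat-field it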

    • Thanks 1