Everything posted by vlaiv

  1. Don't be disappointed. I'm after something else - I'm after the "documentary" value of the image: what the real color of the thing is - the color you would likely see with your own eyes if you amplified the light from that galaxy enough to perceive it. This is not necessarily what people value as a good image. Your data is good, and it is about what you make of it. I like to make the image accurate, nicely rendered / sharp, but without obvious signs of forcing the data to be sharp or other obvious signs of processing - I like that natural feel of the image and I strive for it. This of course does not mean that someone else, with a different processing style and the aspiration to present the image in a certain way, won't make your data look different.

     I do have one additional goal in all of this - to educate people who share my goal of making documentary-type images: how to process data so it is "scientifically" consistent, or in this case, renders proper true color. It is also important to understand that someone's rendition of a target might not be what that target actually looks like. In fact, as many people now take up astrophotography, there is a sort of copycat culture. We search previous work online to get a sense of what our work should look like. That is only natural - when we learn, we look to others to see how it's done. The issue is that this behavior can propagate and cement beliefs that are wrong - like the idea that strongly saturated colors are proper colors in the sense of "this is what the target looks like", or that "there is no true color" in astrophotography.

     In the end - I'm after the left side of this image, but I'm certain that it is the right side that provides viewing pleasure for spectators around the world.
  2. Here is the first set of comparison renditions:

     1. Naive RGB composition + luminance transfer
     This image was done in the following manner:
     - Luminance was processed separately and stretched
     - RGB data had the background wiped and single star calibration was performed - B-V 0.48 was set to a 1:1:1 linear RGB ratio
     - RGB was loaded in Gimp, composed into an RGB image and stretched. After that the luminance layer was added as a separate layer with mode set to luminance (simple color transfer).
     The image suffers from background color gradients due to the way the color was processed. It looks like there were some high altitude clouds in the color data, so the background is uneven, and I don't use any background neutralization algorithms other than a linear light pollution wipe.

     2. Linear RGB ratio transfer
     The second image was processed more in line with proper processing - but a "common mistake" was made on purpose: the color data was left linear, without the gamma adjustment required by sRGB color coding.
     - Luminance same as above
     - RGB data background wiped, single star calibration as above and RGB ratio calculated. RGB ratio applied to luminance data
     - Gimp used just for color composing and export to 8 bit PNG
     This is how most images are processed. People seem to add even more saturation to this type of processing, despite the fact that the colors are already stronger than they should be (people like vibrant images, it seems).

     3. Gamma corrected RGB ratio transfer
     - Luminance as above
     - RGB wiped, then calibrated on a single star
     - RGB ratio transfer applied to inverse-gamma luminance, with the result then gamma corrected (as per the sRGB standard)
     - Final color composing and PNG export done in GIMP

     Of the three approaches, the third is the most correct. The only drawback to these three images is single star calibration, which is inadequate for a mono camera + RGB filters, as RGB filters have very different QE curves than the RGB matching functions. In order to get the correct transform matrix for color calibration, multiple stars are needed and nine variables need to be determined (a 3x3 transform matrix). That will be the next step in our quest for the true color of this image. Btw, I'm having so much fun with this data - this is seriously good acquisition work by Rodd.
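For anyone who wants to try the third approach, here is a minimal sketch of a gamma-corrected RGB ratio transfer. It assumes the stretched luminance `L` is display-referred in [0, 1] and that `r`, `g`, `b` are the wiped, star-calibrated linear color channels; the array names and the factor-of-3 normalisation are illustrative assumptions, not taken from the post above.

```python
import numpy as np

def srgb_decode(v):
    """sRGB gamma -> linear light (per IEC 61966-2-1)."""
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):
    """Linear light -> sRGB gamma."""
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * np.maximum(v, 0) ** (1 / 2.4) - 0.055)

def rgb_ratio_transfer(L, r, g, b, eps=1e-9):
    # Per-pixel color ratios from the linear, calibrated RGB data
    s = r + g + b + eps
    ratios = np.stack([r / s, g / s, b / s], axis=-1)

    # Undo the display gamma of the stretched luminance, scale the linear
    # ratios by it, then re-encode for sRGB output (the "third approach" above).
    L_lin = srgb_decode(np.clip(L, 0.0, 1.0))[..., None]
    rgb_lin = 3.0 * L_lin * ratios      # factor 3 keeps neutral (r = g = b) pixels at luminance L
    return np.clip(srgb_encode(rgb_lin), 0.0, 1.0)
```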
  3. In fact, I was intrigued - to me it looked like the colors are good, but I had not done a direct comparison. Here is a direct comparison, and now I'm even more convinced they've done proper color calibration on that image. What do you all think? We could even adopt this as a way to test color calibration of images - just put this little color bar in the image and you should be able to tell if the colors in the image are proper.
  4. I'm not really sure, as I mentioned in my previous post, but here is a good reference: on the left we have the closest color to what starlight of a particular stellar class looks like to us. We also have luminosity and frequency of occurrence. Source: https://en.wikipedia.org/wiki/Stellar_classification Put this color table next to the above Hubble image and I think you'll find that they match pretty well.
  5. In theory - if two different people were to process different data using the same colorimetric calibration - they should get fairly close, if not exactly the same, colors in the image. I'm now going to do two different approaches to colorimetric calibration - one rather simple, single star calibration, which won't be as precise, and a somewhat more involved multiple star colorimetric calibration. It is a more involved process since I don't have software in place to do it for me and need to do much of it manually (select / measure stars, look up their temperature in an online catalog, then solve for the transformation matrix in LibreOffice Calc). I'll post my results later so we can discuss how well they match the Hubble reference above. Btw - I'm not 100% sure that the Hubble team used proper color calibration - but it does look like it once we take into account the colors of the different spectral classes, their luminosity and their abundance in the universe.
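As a rough illustration of the multi-star step, here is a minimal sketch of solving for the nine entries of the 3x3 transform matrix by least squares. `measured` and `reference` are assumed to be (N, 3) arrays of per-star camera RGB and catalogue-derived reference colors; the names and the catalogue lookup itself are not from the post.

```python
import numpy as np

def solve_color_matrix(measured, reference):
    """Least-squares solution of measured @ M = reference for a 3x3 matrix M.

    measured  : (N, 3) raw camera RGB, one row per measured star
    reference : (N, 3) reference colors derived from catalogue data
    """
    M, _residuals, _rank, _sv = np.linalg.lstsq(measured, reference, rcond=None)
    return M

# With M in hand, calibration is applied to the whole (linear) image as:
#   calibrated = pixels.reshape(-1, 3) @ M
```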
  6. But of course! https://hubblesite.org/image/3684/news Then there is this one for reference as well: https://hubblesite.org/image/3900/printshop
  7. What do you want your color to be like? If you are aiming for that blue in the spiral arms because you think it is the proper color, think again. The blue in the spiral arms of this galaxy comes from the fact that people don't know how to properly color process the image. On the other hand, if you don't want to go for the natural look - then you are free to create any sort of color balance you like - it's up to your creative side to impart a mood on the image through your choice of palette. For reference - here is what that galaxy looks like in terms of color: With that "zombie" palette you are actually closer to what the galaxy looks like to the human eye than the other renditions are.
  8. @Rodd You have some wonderful data here. Still processing on my part, but I wanted to share this stage to show you something. Maybe try not to push your data too much. The crucial areas of the galaxy contain enough SNR to really bring out the finest detail once the sampling rate is properly set and some basic processing is done. The spiral arms are just faint. I know there is an urge to render them clearly visible - but they are faint, so leave them faint(ish). I really love the sharpness you can get out of this data:
  9. Things to keep in mind when considering the type of monitor and its calibration:

     1. If you want accurate color in your astrophotos - it has nothing to do with the monitor or its calibration. The colors of light coming from the objects out there are independent of what hardware you have or how you calibrate it.
     2. If you want to accurately reproduce the colors from your astrophotos - you need to accurately record them (do color calibration of your data and use a known color profile - sRGB or some other, together with an ICC profile), and you need an accurately calibrated monitor - see the video above.
     3. If you do creative work and in that process select a certain color and want others to know the exact color you've selected - then you need an accurately calibrated monitor and all the rest (a profile with your image, and those looking at your color need calibrated monitors too).

     The first point is reassuring - you can produce accurate astrophotos even if you don't have a high end monitor or great calibration. The second point is an incentive to get one of those calibration devices and a good monitor - if you want to look at the colors of objects as they are.
  10. For guiding only - not really. Guiding may even work better if there is more light and that light is spread over more pixels. Sometimes one can benefit from slightly defocusing the guide star - which is much the same thing as not using an IR/UV cut filter, since IR and UV light is out of focus.
  11. I'm also not familiar with advanced monitor calibration (although I want to try it and get myself that Spyder thingy), but I guess that for the best calibration you need to do both. Monitor controls such as brightness, contrast and color levels need to be properly adjusted to enable as wide a gamut as possible. Then you need to create a color profile with the device. Color profiles are just mathematical operations for converting colors between different color spaces. I did some research after the discussion above, and, for example, the PNG file format allows you to embed the ICC color profile of the image. This means that you can save any sort of color space as PNG, as long as you provide the proper color profile - others should be able to display it properly (if they have adequate monitor calibration and a color profile). An ICC color profile just contains the transformation from a given color space into one of two connection spaces - CIE XYZ or CIE Lab. This means that the "workflow" for displaying an arbitrary color space image on an arbitrary monitor would be: SomeColorSpace in PNG + ICC transform from PNG -> CIE XYZ -> ICC transform for monitor -> monitor color space (3 arbitrary primaries that are essentially some sort of red, green and blue) == proper color rendered on screen.
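Here is a minimal sketch of that workflow using Pillow's ImageCms module (littlecms bindings). The file names are illustrative, and falling back to sRGB when no profile is embedded is my assumption, not something the post specifies.

```python
import io
from PIL import Image, ImageCms

img = Image.open("input.png")
embedded = img.info.get("icc_profile")                  # bytes of the embedded ICC profile, if any

# Source profile: the one shipped with the file, else assume sRGB (the usual default)
src_profile = (ImageCms.ImageCmsProfile(io.BytesIO(embedded))
               if embedded else ImageCms.createProfile("sRGB"))
# Destination profile: sRGB here, but a monitor's .icc file could be loaded instead
dst_profile = ImageCms.createProfile("sRGB")

# Internally this goes through the profile connection space (XYZ / Lab) -
# exactly the SomeColorSpace -> CIE XYZ -> monitor chain described above.
converted = ImageCms.profileToProfile(img, src_profile, dst_profile, outputMode="RGB")
converted.save("output.png")
```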
  12. I'm afraid that histogram advice is not very useful:

      - It depends on the range displayed (full range vs. min-max pixel value range).
      - If you have clipping to the left - don't use that camera at all, as you won't be able to calibrate your data properly. On a normal camera that should never happen; the bias/offset signal makes sure of it. Dedicated CMOS cameras sometimes have an offset setting to ensure it does not happen.
      - If you have clipping to the right - you will lose information in some areas (clipped pixels), but that is easily dealt with: take a few shorter exposures and use them to fill in the clipped pixels of the long exposures. You don't need many of these filler subs - just a few, as they will only be used where the signal is strong (so strong that it clips in a long exposure), so SNR will be good.
      - The histogram tells you nothing about the noise distribution, which is the only thing that matters when choosing optimal sub duration.
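A minimal sketch of the "filler subs" idea from the third bullet, assuming calibrated linear stacks in the same units. The variable names, the 0.98 threshold and the 16-bit full well default are illustrative assumptions.

```python
import numpy as np

def fill_clipped(long_stack, short_stack, long_exp, short_exp, full_well=65535):
    """Replace saturated pixels of the long stack with scaled short-stack pixels."""
    scale = long_exp / short_exp                  # bring short-exposure data to long-exposure units
    clipped = long_stack >= 0.98 * full_well      # pixels at or near saturation (threshold is illustrative)
    return np.where(clipped, short_stack * scale, long_stack)
```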
  13. Guiding - using a small telescope / finder / guide scope or OAG (off axis guider) to monitor the position of a star and issue corrections to the mount's tracking if that reference star deviates from its expected position.

      Many mounts suffer from periodic error - the mount has a period (associated with the reduction gear) and within that period it does not always track perfectly - sometimes it lags and sometimes it runs ahead of where it ought to be. This is because the gear train elements are not perfectly round (just a mechanical thing - you can't make something perfectly smooth or round, only round to a certain precision). This leads to elongated stars more often than polar alignment error does.

      Guiding solves both - periodic error and polar alignment error - and a couple more things, but requires special gear: a guide scope or OAG, a guide camera and a computer. Encoders deal with periodic error and make the motion of the mount very precise - so you don't need to guide to correct periodic error, but you still have polar alignment error to worry about, plus some other things (that are usually not much of an issue - like apparent star positions shifting due to atmospheric refraction and such).
  14. Top right - NGC7331 (main) - Deer Lick group (with smaller surrounding galaxies) Bottom left - Stephan's Quintet
  15. Here is a thought: how about a Berlebach Planet tripod + CEM120? If you guide - get the non-EC version. If you don't guide - get encoders. Mind the weight of the thing - 25+ kg!
  16. Must be a hacked account - the only sensible explanation.
  17. pawn-sales.live domain for e-mail inquiry:

      Domain Name: pawn-sales.live
      Registry Domain ID: 20ecd5b12f94473a85d7f995a572b053-DONUTS
      Registrar WHOIS Server: whois.namecheap.com
      Registrar URL: https://www.namecheap.com/
      Updated Date: 2020-04-15T22:10:36Z
      Creation Date: 2020-04-10T22:10:20Z
      Registry Expiry Date: 2021-04-10T22:10:20Z
      Registrar: NameCheap, Inc.
      Registrar IANA ID: 1068
      Registrar Abuse Contact Email: abuse@namecheap.com

      Again - just 14 days ago.
  18. Indeed. The image is hosted here: https://images345.com

      Whois info:
      Registrar URL: http://www.namecheap.com
      Updated Date: 2020-04-11T20:31:49Z
      Creation Date: 2020-04-11T20:31:40Z
      Registry Expiry Date: 2021-04-11T20:31:40Z
      Registrar: NameCheap, Inc.
      Registrar IANA ID: 1068
      Registrar Abuse Contact Email: abuse@namecheap.com
      Registrar Abuse Contact Phone: +1.6613102107

      Registered 13 days ago.
  19. Why do you think it's fake? It says that the actual price is $1900 and it can only be bought via the "buy now" thingy (they don't accept bids and bids will be removed - so they say). The seller has 99.8% positive feedback and 4000+ transactions. It also says that you have no guarantee - they only believe it appears to be new. According to their listing you get a 30-day money back guarantee.
  20. Complex topic. Here are some guidelines:

      1. Calibrate your monitor properly.

      2. If you intend to share your work - do it in either sRGB or AdobeRGB, in formats that support color space information. This only makes sense if you have accurate colors; many people don't bother getting color accurate in their astrophotos, so for them there is not much point in paying attention to color space. Most monitors are capable of displaying some of the sRGB gamut. Very few monitors are capable of displaying the AdobeRGB gamut. If you don't include color space information with your image, it will be assumed to be sRGB, so the colors in your image should be coded in sRGB. If you code them in AdobeRGB but save them in a format that does not carry color space information - the colors will be wrong on other people's computers, as they will assume sRGB (no profile given).

      3. Here is a breakdown of what will happen in different cases:
      - You have an AdobeRGB-gamut monitor: both AdobeRGB and sRGB will be displayed properly on your monitor.
      - You publish sRGB: it will display properly on everyone's monitor (regardless of whether you include a color profile with the image, as sRGB is assumed by default) - provided they have their monitors calibrated.
      - You publish AdobeRGB and include the color gamut info: people with sRGB-capable monitors and operating systems / browsers that support color profiles (all modern ones) will see it properly, but in the reduced gamut of sRGB, provided their monitors are calibrated. People with AdobeRGB monitors will see the larger gamut and again correct colors.
      - You publish AdobeRGB but don't include the color gamut info: no one will see it properly except you.

      It is a bit like "selecting a language". Imagine you know two languages - English and Japanese, for example - and you are trying to choose which one to use. If you use English and send your documents to people (in England), not much is needed, as everyone will just read them. If you send them Japanese and say "this is in Japanese, please translate it prior to use", then the few people who know Japanese will be able to read it without translation, but everyone else will need to fire up Google Translate to be able to read it. If you send Japanese without telling people it's a different language, the analogy breaks down a little, as people will generally recognize it's Japanese, but it would be just a bunch of markings that would be read wrongly using the rules of English. AdobeRGB and sRGB are two different "languages" used to write down image color. sRGB is the common language that everyone speaks. AdobeRGB is spoken by only a few.

      Here is a page that will help you determine if you have an AdobeRGB-capable monitor. It contains sRGB and AdobeRGB images side by side. The images contain a wide gamut (there is a third image that shows where you should see a difference in color): if you see them as equal - you only have an sRGB-gamut monitor. If you see them differently - your monitor speaks AdobeRGB. https://webkit.org/blog-files/color-gamut/

      In the end - just remember, all of the above is rather pointless if you don't do precise color calibration of your astro images. Adjusting color tone and saturation to your liking is already producing wrong color. AdobeRGB and sRGB will give you slightly different results, but neither will be correct if the initial data is not color correct (merely to your liking on your own computer screen).
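As a sketch of point 2, here is how one might either convert to sRGB before export or embed the AdobeRGB profile, using Pillow. The profile path "AdobeRGB1998.icc" and the file names are assumptions for illustration, not something from the post.

```python
from PIL import Image, ImageCms

img = Image.open("my_image.tif")                    # image whose pixels are coded in AdobeRGB

# Safest option: convert to sRGB before export, since sRGB is assumed by default.
srgb = ImageCms.profileToProfile(
    img,
    ImageCms.ImageCmsProfile("AdobeRGB1998.icc"),   # the space the data is actually in
    ImageCms.createProfile("sRGB"),
    outputMode="RGB",
)
srgb.save("publish_srgb.png")

# Alternative: keep AdobeRGB, but embed its profile so capable software can read it.
with open("AdobeRGB1998.icc", "rb") as f:
    img.save("publish_adobergb.png", icc_profile=f.read())
```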
  21. In this particular case, the concern I was addressing was that of a shorter focal length coupled with a DSLR-type sensor. In this case the FOV might look like too much - but like you mentioned, that is just the way the image is presented, and you can get a different FOV simply by cropping. What I really wanted to point out as the important bit in small vs large scope - but somehow ran out of steam and got distracted by other things - is that people need to realize that speed depends on scope size in a particular way, and this is why most images of small galaxies are taken with larger scopes. If we take a small scope with a small sensor (or a large sensor cropped to the target) and compare it with a large scope and a large sensor (here a small sensor won't do) - you'll get roughly the same FOV, but the apertures will be different, and the large scope will win in light collecting ability, hence the speed of the system will be greater. This is why smaller galaxies are more often imaged with larger scopes - not because small scopes can't do it.
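A tiny worked example of that aperture point, with illustrative 100 mm and 250 mm apertures: at matched FOV, the light gathered per unit time scales with aperture area.

```python
# Illustrative apertures: a 100 mm refractor vs a 250 mm reflector.
small, large = 100.0, 250.0
print((large / small) ** 2)   # 6.25 - at matched FOV the larger scope gathers ~6x the light per unit time
```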
  22. The specs say it's only 35%, and it sort of looks like that in the image - it's about a third of the diameter, so I guess not that bad. Certainly better than my RC, which has about 44%.
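A quick check of why 35% by diameter is "not that bad": the light loss goes by area, i.e. the square of the diameter ratio.

```python
# Diameter ratio -> fraction of the aperture area blocked
for d in (0.35, 0.44):
    print(f"{d:.0%} by diameter -> {d * d:.1%} of the area")
```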
  23. Don't worry much about crop factor; you can check the field of view for your particular camera and scope here: https://astronomy.tools/calculators/field_of_view/ You can always crop your image further - it will make the image smaller but will also change the FOV, so the object of interest can appear larger (though not more detailed at 1:1 view) when viewed at screen size.
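For a sanity check of what such calculators compute, here is the basic FOV relation; the 22.3 mm sensor width and 800 mm focal length are illustrative example values, not taken from the post.

```python
import math

def fov_degrees(sensor_mm, focal_length_mm):
    """Angular field of view across one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

print(fov_degrees(22.3, 800))   # ~1.6 degrees across a 22.3 mm (APS-C width) sensor at 800 mm
```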
  24. Here are a few with very short focal length - 384mm - but also small sensor and small pixels:
  25. There is nothing wrong with imaging smaller galaxies at focal lengths under 800mm. Focal length alone does not determine the scale of an object in the image. Two different metrics matter - one is FOV and the other is pixel scale.

      Field of view is determined by focal length and camera sensor size. You can have a small FOV on a short focal length scope if you use a very small sensor. You can have a largish FOV on a long focal length scope if you use a large sensor. In any case, the field of view, or the proportion of the field of view to the object imaged, determines how large the object will appear when the image is viewed at screen size. Screen size is the size we see here on the forum, or when you view your image in some image viewing application and "fit image to screen". This is in fact often a "scaled down" version of the image, as the image can be a few thousand pixels in height and width yet is displayed on a screen with fewer pixels than that (like 1920x1080 or similar).

      Pixel scale, sometimes referred to as sampling resolution, is a number that represents the mapping of angular size on the sky to a pixel. It depends on focal length and the physical pixel size of the sensor. This metric matters when you view the image at 1:1 zoom - often referred to as the 100% zoom setting. In this case one pixel of the image is mapped to one pixel on the screen - you can usually pan around images that are larger than the screen in pixel size.

      Back to the imaging. It is important to understand that in most cases either the sky or the mount limits how high a sampling rate you can usefully go. With modern pixel sizes and 800mm of focal length you will probably be at the sky limit. This means that you need a mount good enough to support it. Aim at about 1.2"-1.4" per pixel to be safe (or just use the camera and scope you have), but make sure your mount can guide at half that in RMS.
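A minimal sketch of the pixel scale figure, using the usual small-angle constant (206265 arcseconds per radian, which becomes 206.265 with pixel size in µm and focal length in mm). The 3.76 µm pixel and 800 mm focal length are illustrative example values.

```python
def pixel_scale(pixel_um, focal_length_mm):
    """Sampling rate in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_length_mm

print(pixel_scale(3.76, 800))   # ~0.97 "/px - finer than the 1.2"-1.4" per pixel suggested above
```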