Everything posted by vlaiv

  1. How much vignetting, if any, you get from a filter on a particular sensor depends on two things: the F/ratio of the converging beam and the distance from filter to sensor. It also depends on their sizes - you can't use a filter that is smaller than the sensor itself, as that will cause vignetting in any case. 1.25" filters have about 28-29mm of clear aperture, and that is large enough to cover the 23mm diagonal of the ASI294 if mounted fairly close.

To roughly get the maximum filter distance without vignetting, you need to do a bit of math. First take the free aperture of the filter and subtract the diagonal of the sensor. Let's go with 28.2 - 23.2 to get a nice round number of 5mm. Divide this value by 2, resulting in 2.5mm. The next thing to know is the F/ratio, or speed, of the beam. If you are using an ED80, which is F/7.5 natively, with something like a x0.8 reducer, you will get an F/6 beam. Now it is just a matter of multiplication: max distance without vignetting = 2.5 * 6 = 15mm (roughly).

With the ASI294 you already get a T2 ring with a 1.25" filter thread that screws into the camera nose piece (2" outer and T2 female inner diameter). That way the filter sits less than 10mm away from the sensor, and you are fine down to about F/4.5 - F/5 with this combination. A quick search online gives this image showing the configuration: You can still use the remaining T2 thread or the 2" nose piece for further connection to the telescope.

On filter choices - you will need at least an IR/UV cut filter because you are using refractive optics (and a doublet). It might not be needed as much for SCTs, but I would use it anyway, just in case the corrector plate causes a bit of color bloat. CLS/CLS-CCD is a rather aggressive general purpose filter and will cut deep into LP, but it will make havoc of your color balance. A UHC filter is even worse / better - depending on how you look at it - it is very good for emission type targets but should be avoided for star clusters / galaxies and reflection nebulae. It will also throw color off very much. I don't know about the Optolong L-Pro filter, but looking at its response curve it looks like a good general LP suppression filter. I do know that Hutech IDAS LPS filters are very good - I use the P2 version for my LP levels and type of LP. If you live in a very light polluted area then it is worth having an LPS filter; however, because all filters block light, at some point a filter will do more harm than good, so if you have mild light pollution where you shoot, I would avoid LPS filters.

As for field flatteners and reducers with the ED80 - I think they will provide a very flat field over a 4/3 size chip, so you don't have to worry about that. As far as SCTs are concerned - if you don't have the Edge versions - these scopes have quite a bit of coma and sometimes mild spherical aberration because of focus position (moving the mirror changes the distance between primary and secondary, and only one exact position is free from spherical aberration if the surfaces are figured well). Focal reducers designed for SCTs take this into account, and I believe they reduce coma - so I would recommend them if you plan on using SCTs for small targets. However, if you do plan to use them, I would suggest that you get an OAG, make sure your mount really guides well, and understand software binning as part of your processing workflow. At these long focal lengths you will oversample, and you need to recover SNR by binning your data down to a sampling rate proper for the detail available - which is usually about 1.3-1.5"/px with these larger scopes; 1"-1.3"/px requires excellent seeing and an excellent mount and optics.
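To make that rule of thumb easy to reuse, here is a minimal Python sketch of the same arithmetic (values are the ones from the example above; the function name is just for illustration):

```python
def max_filter_distance_mm(clear_aperture_mm, sensor_diagonal_mm, f_ratio):
    """Rough max filter-to-sensor distance before vignetting sets in."""
    margin = (clear_aperture_mm - sensor_diagonal_mm) / 2.0  # radial clearance
    return margin * f_ratio  # slower beams tolerate more distance

# 1.25" filter (28.2mm clear), ASI294 (23.2mm diagonal), ED80 + x0.8 reducer (F/6)
print(max_filter_distance_mm(28.2, 23.2, 7.5 * 0.8))  # -> 15.0 mm
```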
  2. Your flat subs look good - both of them, together with the flat darks. In fact, this is what they look like after flat dark subtraction and a stretch: Both look the same. The short one is obviously noisier, but they contain the same dust shadows and vignetting. That is what we see when we inspect them by eye, but there is a much better way of checking whether two flats match - what is called flat/flat calibration - we calibrate one flat with the other, and the result needs to be pure noise: a gray image without any obvious features. Here is what it looks like when I divide the two flats: This is a textbook example of what flat/flat calibration should look like - the histogram is a beautiful bell shaped curve with a value of about 0.1 (because we divided a 4s flat by a 40s flat - a bit less than that in fact, because it was really a 3.9s flat) and the image itself is just pure noise - no features to be seen.

Let me see what happens when I try to calibrate that one sub, to check whether something else is wrong (when flat calibration is not working, it's usually not the flats - it's the darks, or a light leak, or some other thing, but rarely the flats - unless of course the flats are clipped, but they are not in your case). Ah yes - everything is fine with your flats - they calibrate out ok, except for one thing - the dust is no longer where it used to be: If you look at this image - a calibrated single sub, binned x6 to get enough SNR (this of course loses the color, but we don't care now - we want to see what is going on with calibration) - it looks fine, with the exception of perhaps two dust particles - that moved. The vignetting is fine - it is neither under nor over correcting, and if you look at the above flat - let's do another screen shot of it: I marked with two red arrows the two shadows of dust particles that moved in the meantime, and with blue arrows all the other dust particles and their shadows that simply calibrated out fine and can't be seen in the light sub above.

This happens sometimes, and there is pretty much nothing you can do about it - except work out the distance of the surface that holds these dust particles (either the coma corrector or the filter) and then use a blower bulb from time to time to blow away any loose dust particles so this does not happen again. Sometimes people have this issue with a filter wheel, if the wheel can't reposition precisely each time - any small shift will create the above effect - it's like a "double" or "embossed" dust doughnut. Here it is on one of my images - sometimes it looks so strange that people ask if it is some sort of planetary nebula they have not spotted before. Hope this helps
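If you want to run the same flat/flat check yourself, here is a minimal numpy sketch (file names are placeholders; it assumes flats that are already flat-dark subtracted and stored as FITS):

```python
import numpy as np
from astropy.io import fits

# Placeholder file names - use your own flat-dark subtracted flats.
flat_short = fits.getdata("flat_4s.fits").astype(np.float64)
flat_long = fits.getdata("flat_40s.fits").astype(np.float64)

# Divide one flat by the other - a good match is pure featureless noise.
ratio = flat_short / flat_long

# The histogram should be a narrow bell curve centered on the exposure
# ratio (about 0.1 here, since we divide a ~4s flat by a 40s flat).
print("mean:", ratio.mean(), "std dev:", ratio.std())
```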
  3. From what I've read so far in this thread, I think it would be best if you started with your goals and expectations and built on that. In the first post you said your interest is in AP of galaxies and nebulae. With a full frame sensor and a small refractor there are in fact just a couple of galaxies that are a good fit for that: M31 - our closest neighbor - and M33. Such a setup is very wide field (nothing wrong with that - as long as you know it will be). For that reason, maybe the best thing to do would be to go to: https://astronomy.tools/calculators/field_of_view/ and check out some targets with the scope of your choice, and build on that. For example, let's take the famous pair M81/M82 and see what sort of FOV you can expect if you use a 70mm scope with FF/FR and your camera: You see those small smudges in the center? Those are the galaxies that you will be imaging. Btw, these are the specs you should use when working with the above field of view calculator - 4.88um pixel size and 7360 x 4912 resolution as a custom camera, since the Pentax K1 is not in the database.

Ok. Once you have that covered and decide on the best FOV - or scope to match your camera - then you have further parameters for the choice of your mount. Up to a few kilograms of gear (that means 3-4kg) you can go with the Star Adventurer or AZGti mount in EQ mode - these really suit small focal lengths and light scopes up to 70-80mm - wide field imaging - but are highly mobile platforms. An EQ3 - EQ35 class mount is still good for wide field imaging and scopes up to 5kg (maybe up to 6kg on the EQ35). An EQ5 you can push up to 8kg. An HEQ5 you can use with 10-11kg of gear. As you go up in "class" of mount, mobility goes down as the bulk and weight of the mount go up. But if you want to work with longer focal lengths and get close up images of targets, that is really a requirement. An EQ6 type mount is really heavy. I've got an HEQ5 mount and it is manageable - I set it up and tear it down every time, and it is a chore, no question about it. If you need a mount in the HEQ5 class and have the funds, there are other, lighter options available if you put high value on mobility and plan to image from remote locations. Have a look at iOptron mounts like https://www.firstlightoptics.com/ioptron-mounts/ioptron-cem40-center-balanced-equatorial-goto-mount.html

Btw - when you see a mount weight limit, use about 60-70% of that as the weight limit for imaging. For example, the HEQ5 can hold something like 15kg, and indeed I've mounted close to that weight on it, but for smooth operation you really need to limit the payload to 10-11kg. The mount is the most important thing in AP.
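If you prefer numbers to the online calculator, the FOV math is simple - a minimal sketch (the 70mm scope's 420mm focal length and the x0.8 reducer are assumptions just for illustration):

```python
def pixel_scale(pixel_um, focal_length_mm):
    """Sampling rate in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_length_mm

def fov_deg(pixels, pixel_um, focal_length_mm):
    """Field of view along one sensor axis, in degrees."""
    return pixels * pixel_scale(pixel_um, focal_length_mm) / 3600.0

# Pentax K1: 4.88um pixels, 7360 x 4912. Assume a 70mm F/6 scope (420mm)
# with a x0.8 reducer -> 336mm focal length.
fl = 420 * 0.8
print(pixel_scale(4.88, fl))                              # ~3.0 "/px
print(fov_deg(7360, 4.88, fl), fov_deg(4912, 4.88, fl))   # ~6.1 x 4.1 degrees
```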
  4. I've found that I like guiding better when I use the ASCOM driver and download 12 bit data. I never had SNR over 50 or so with 8 bit and native drivers in PHD2, but regularly have SNR over 100, or even a few hundred, with the ASCOM driver and 12 bit data.
  5. I have no idea. I know that Altair Astro sources scopes from the same supplier (only different branding, but they could contain different coatings and/or glass types - can't be sure). Check their offering: https://www.altairastro.com/altair-wave-series-80mm-f6-super-ed-triplet-apo-2019-457-p.asp and the 115mm model: https://www.altairastro.com/altair-wave-series-115-f7-ed-triplet-apo-453-p.asp
  6. That looks very much like the QHY5LIIc that I used to have - I was happy with that camera except for some driver issues (a bit quirky drivers). I eventually replaced it with an ASI185. It uses a well known sensor, and for guiding purposes you don't need a high speed interface. I would only be worried about driver quality and a possible lack of raw format. Does it support 12 bit RAW output? I can't tell from the link you provided:
  7. Using a full frame sensor is going to be an issue, as not many scopes are fully corrected for that format - especially smaller scopes. Are you limited by your mount in some way (Star Adventurer or AZGti or similar)? If not, look at the 80-100mm range of refractors with a good field flattener. I have the TS 80mm F/6 APO and it is indeed a very nice little scope. With the Riccardi reducer you get a nice F/4.5 scope at about 360mm FL - however, for imaging galaxies I would choose something with a bit more oomph - maybe this scope: https://www.teleskop-express.de/shop/product_info.php/info/p11871_NEU--TS-Optics-PHOTOLINE-115-mm-f-7-Triplet-Apo---2-5--RAP-focuser.html I linked that one on purpose (there is the same model without the discount price) - because I believe the discount price is worth having, since these are supposed to be showroom models at a lower price. One member recently purchased one and impressions were good (search SGL - there is a topic about it).
  8. I think the easiest and most cost effective thing would be to just try it out. As long as the Samsung 1000nx can produce raw subs, has a setting for the needed exposure length (at least 30s, and up to a few minutes would be good), and lets you turn off any "advanced" processing, it is just a matter of trying it out (if raw or long exposure options are missing, then maybe there is no point in trying). The only thing I can think of that will cause issues is if the camera is doing some "enhancements" - like noise reduction or whatever - and you can't turn that off. That is going to create issues in stacking later on, and you won't get the quality of images that you would otherwise get if you could just download pure raw files without any processing done to them.
  9. I do understand that you might not want to discuss this further, as we are moving away from the topic of this thread, and I agree. I will just quote the first sentence of the wiki article that you linked to: The last part of the sentence was emphasized by me. We only use D65 because it is part of the sRGB standard - same as the gamma function associated with that color space. It is a way of encoding color information in that "format" - which is the de facto standard for the internet and for images without a color profile. You really don't need any sort of illuminant information when doing color calibration - just use a custom RAW->XYZ transform (which will depend on the camera) and the standard XYZ->sRGB transform and you are done - no need to define a white point or take an illuminant into consideration.
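As a sketch of that two-matrix pipeline (the RAW->XYZ matrix below is a made-up placeholder - the real one is camera specific; the XYZ to linear sRGB matrix is the standard one, with gamma encoding for display applied afterwards):

```python
import numpy as np

# Made-up placeholder - the real RAW->XYZ matrix is measured per camera.
RAW_TO_XYZ = np.array([[0.7, 0.2, 0.1],
                       [0.3, 0.6, 0.1],
                       [0.0, 0.1, 0.9]])

# Standard XYZ -> linear sRGB matrix (D65, per the sRGB specification).
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def raw_to_linear_srgb(raw_rgb):
    """Map a linear camera raw triplet to linear sRGB via XYZ."""
    return XYZ_TO_SRGB @ (RAW_TO_XYZ @ np.asarray(raw_rgb, dtype=float))

print(raw_to_linear_srgb([0.5, 0.4, 0.3]))
```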
  10. It is - up to a point. It's not the focal length of the guide scope that is important - it is the sampling rate of the guide system - so focal length plus pixel size of the camera used. Here is a bit of math to explain what you should be paying attention to. The centroid algorithm is said to have precision between 1/16 and 1/20 of a single pixel. This means that if you, for example, have a 4"/px sampling rate, your precision in determining star position will be limited to about 0.2" - 0.25". If you have a mount that can guide to 0.2"-0.3" RMS, this is clearly not enough, because the guide system will issue corrections larger than that due to the error in star position.

What is an appropriate guide focal length then? You will hear different recommendations depending on who you ask, and here is my reply: start either with your imaging resolution or with your guide performance. If you start with imaging resolution - use half of that, as you need at most that much guide RMS error. If you take RMS error - then just use that value. You need your guide precision to be at least 3-4 times smaller than that. Let's say that you are imaging at 1.5"/px and you guide at 0.8" RMS. You want your guide precision to be something like 0.2" (1/4 of the RMS). Let's say that your guide camera has 3.75um pixels. Since centroid precision is about 1/16 of a pixel, your guide sampling rate can be up to x16 the wanted precision, which gives 0.2" * 16 = 3.2"/px. From 3.2"/px and 3.75um pixel size we calculate the required focal length to be about 240mm.
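Putting that reasoning into one small function - a sketch under the assumptions above (guide RMS / 4 for wanted precision, 1/16 pixel centroid accuracy):

```python
def guide_focal_length_mm(guide_rms_arcsec, guide_pixel_um,
                          precision_fraction=4, centroid_fraction=16):
    """Rough guide scope focal length for a given guide RMS and camera."""
    precision = guide_rms_arcsec / precision_fraction   # wanted precision, arcsec
    sampling = precision * centroid_fraction            # usable sampling, "/px
    return 206.265 * guide_pixel_um / sampling          # focal length, mm

print(guide_focal_length_mm(0.8, 3.75))  # ~242 mm - the ~240mm quoted above
```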
  11. I'm not sure that you understand the concept of an illuminant. You are right that there is no standard illuminant in space - if you have emission sources then there is no illuminant. An illuminant is only important when you try to do color matching for a color that is reflective in nature. XYZ is an absolute color space and therefore does not require an illuminant. Viewing in certain predefined conditions - like an office environment with a certain level of lighting for a given color space - ensures that if you take a colored object, look at it, and then look at its image on the screen, the color will be matched in your brain (you will see the same thing). With astrophotography there is no such object that needs to be illuminated to show its color - we are already getting the light / spectra from the stars as they are. If we observe it on a computer monitor in a dark room, it will be as if we were looking at "boosted" star light coming through an opening in a wall while we sit in a dark room. If we do that in a lit up office, same thing - the monitor will show us the "color" we would see if there were an opening in the wall with amplified star light coming through it. Again - I'm not talking about perception of the color but rather the physical quantity. You can make sure that your images indeed represent the values of xy chromaticity derived from XYZ space, and that is what is called proper color, as it will produce the same brain stimuli, if viewed on a properly calibrated monitor (within the gamut of that display device), that you would get from looking at the actual starlight amplified enough to match that intensity under the same circumstances/conditions - all of that with minimal error (it's not going to be 100% due to the factors discussed, but it will be the closest match).
  12. It really depends on how you process your color image. What do you think about the following approach:

ratio_r = r / max(r, g, b)
ratio_g = g / max(r, g, b)
ratio_b = b / max(r, g, b)
final_r = gamma(inverse_gamma(stretched_luminance) * ratio_r)
final_g = gamma(inverse_gamma(stretched_luminance) * ratio_g)
final_b = gamma(inverse_gamma(stretched_luminance) * ratio_b)

where r, g, b are color balanced - that is, (r, g, b) = (raw_r, raw_g, raw_b) * raw_to_xyz_matrix * xyz_to_linear_srgb_matrix.

This approach keeps the proper rgb ratio in the linear phase regardless of how much you blow out the luminance during processing, so there is no color bleed. It does sacrifice the wanted light intensity distribution, but we are already using non-linear transforms on intensity, so it won't matter much.

Again - it is not a subjective thing unless you make it so. I agree about perception. Take a photograph of anything printed on paper, view it under yellow light, and complain about how color is subjective - it is not a fault in the photograph - it contains proper color information (within the gamut of the medium used to display the image). I also noticed that you mentioned monitor calibration in the first post as something relevant to this topic - it is irrelevant to proper color calibration. The image will contain proper information, and on a proper display medium it will show the intended color. It can't be responsible for your decision to view it on the wrong display device.

The spectrum of light is a physical thing - it is absolute and not left to interpretation. We are, in a sense, measuring this physical quantity and trying to reproduce it. We are not actually reproducing the spectrum with our displays, but the tristimulus value, since our vision system will give the same response to different spectra as long as they stimulate the receptors in our eye in equal measure. This is a physical process, and that is what we are "capturing" here. What comes after that, and how our brain interprets things, is outside of this realm.
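Here is a minimal runnable numpy version of that approach (gamma here is the simple 1/2.2 power approximation rather than the exact piecewise sRGB curve):

```python
import numpy as np

def gamma(x, g=2.2):
    return np.clip(x, 0.0, 1.0) ** (1.0 / g)

def inverse_gamma(x, g=2.2):
    return np.clip(x, 0.0, 1.0) ** g

def ratio_preserving_color(stretched_luminance, r, g, b):
    """Combine stretched luminance with linear, color balanced r, g, b.

    Each channel keeps its ratio to max(r, g, b), so hard luminance
    stretching does not bleach color toward white.
    """
    m = np.maximum(np.maximum(r, g), b)
    m = np.where(m == 0, 1.0, m)            # avoid division by zero
    lum = inverse_gamma(stretched_luminance)
    return (gamma(lum * r / m),
            gamma(lum * g / m),
            gamma(lum * b / m))
```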
  13. Somehow I don't see them being priced at 1000e. I also don't like the fact that the largest of the sensors used is 17mm diagonal. Well, I stand corrected - cheaper models are indeed around the 1000e mark - but that is for sensors up to 11mm diagonal.
  14. I will have to disagree with you on several points. In fact, you are even in disagreement with yourself on certain points, like this one: The important point being non-linear camera response - and then in the next segment you write: Which is the definition of camera linearity - as long as, for the same source and different exposures, the results have a ratio of intensities equal to the ratio of exposure lengths, the camera is linear. In fact, most cameras these days are very linear in their response, or at least linear enough not to impact color that much.

This is a common misconception. Color theory, while having certain tolerances / errors, is well defined. There is a well defined absolute color space - the XYZ color space - and any source out there will have a particular coordinate in XYZ color space, or more importantly an xy chromaticity - because, as you pointed out, the magnitude of the tristimulus vector depends on exposure length among other things. Given a transform matrix between the color space of the camera and XYZ color space, one can transform raw values into XYZ color space with a certain error - which depends both on the chosen transform matrix and on the characteristics of the camera. The transform that produces the least amount of error is usually taken. Once we have the xy chromaticity of the source, it is easy to transform that to the color space of the reproduction device. There will again be some errors involved - this time because display devices are not capable of displaying the whole XYZ color space, for example - the gamut is smaller. The choice of the sRGB color space for images that will be shared on the internet and viewed on computer screens is a very good one, as it is the de facto standard and the expected color space if a color profile is not included with the image.

All of the above means that if two people using different cameras shoot the same target and process color in the same proper way, they will get the same result - within the limits described above (just the camera transform matrix errors, since both will produce images in the same gamut space - sRGB - so that source of error will be the same). Not only will such images look the same, but if you take all the spectra of all the sources in the image and ask people to say how much the recorded image differs in color from those spectra (pixel for pixel), you will get the least total error compared to any other type of color processing - or in other words, it will be the best possible match in color.
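As an aside, the linearity definition above is easy to test on your own camera - a minimal sketch, assuming two dark-subtracted frames of the same stable light source at different exposure lengths:

```python
import numpy as np

def linearity_check(frame_a, frame_b, exposure_a, exposure_b):
    """Returns ~1.0 for a linear sensor: signal ratio / exposure ratio."""
    signal_ratio = np.median(frame_a) / np.median(frame_b)
    return signal_ratio / (exposure_a / exposure_b)

# e.g. linearity_check(flat_2s, flat_1s, 2.0, 1.0) should be close to 1.0
```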
  15. Sort of. Not like in a nuclear reactor, where we have a stable chain reaction (unlike in an explosion, where the chain reaction is uncontrolled). Using radioisotopes means using a small amount of radioactive material - radioactive enough to keep its own temperature "lukewarm" - and then using a Seebeck generator, or thermoelectric generator, which is a solid state device that works on a temperature differential: on one side the warm radioisotope, on the other cool empty space (or a conducting element exposed to empty space). The heat flow generates electricity.
  16. I'm not recommending that you do that - I was just showing that it can be done really easily. I don't like the effect that this sort of processing produces.
  17. You are using Gimp, as far as I can see? It is really easy to get the sort of histogram shape that has been discussed above. Here are the steps (on a generic bell shaped noise image):

Step 1: Do levels for the initial linear stretch, to make the background visible (I made a random noise image to resemble the background of an astro image, and added a one pixel star to get the contrast range of an astro image):

Step 2: We now have a nice bell shaped histogram after the first linear stretch, and in the second step we do the curves like this: The leftmost point is raised a bit, so our output is limited at the bottom, not going all the way to black. The same point on the "x" axis starts to "eat" into the histogram. The next point is just a "pivot" point so we get a nice smooth rising curve, and the next two points are just a classical histogram stretch.

This configuration of curves is often seen in tutorials, and it produces a histogram looking like this: flat on the left side and almost bell shaped on the opposite side - with a bit more "ease out" because we applied a gamma type stretch in that section. Just one round of levels and one round of curves with more or less "recommended" settings, and we have produced that effect.
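For the curious, the same levels-then-curves sequence can be simulated in a few lines of numpy (the break points below are made-up stand-ins for whatever you drag in Gimp):

```python
import numpy as np

rng = np.random.default_rng(0)
sky = np.clip(rng.normal(0.05, 0.01, (512, 512)), 0.0, 1.0)  # fake background

def levels(x, black, white):
    """Linear stretch between a black point and a white point."""
    return np.clip((x - black) / (white - black), 0.0, 1.0)

def curves(x, floor=0.05, g=0.5):
    """Raised left end plus a gamma-like rising curve."""
    return floor + (1.0 - floor) * np.clip(x, 0.0, 1.0) ** g

out = curves(levels(sky, 0.04, 0.2))
hist, _ = np.histogram(out, bins=50, range=(0.0, 1.0))
print(hist[:12])  # note the pile-up at the raised floor - the "eaten" left side
```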
  18. I've also noticed that the background often looks funny in the images you are producing. I don't think it is necessarily background clipping - in this example it has more to do with the distribution of the background, which gives it such a feel. You are right to say that your histogram is not clipping - but it is not bell shaped either: This is the green channel - a histogram of the range 0-32 binned into 33 bins (each value its own bin) from the first image you posted: The same thing for the second image you posted - the one with color correction and a background that looks better: Although your background is not clipping, its histogram shows a very steep left side in the first image - it is almost as if the histogram were indeed clipping, but not at 0, rather somewhere around 3-4. The second image shows a histogram more closely resembling the bell shaped curve you would expect from a natural looking background - and this shows in the image - the background does look better. At least to my eye, and as far as I can tell @ollypenrice agrees with me:
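If you want to inspect your own background the same way, here is a minimal sketch (assuming an 8-bit integer channel, as in the histograms above):

```python
import numpy as np

def background_histogram(channel, max_value=32):
    """Counts for values 0..max_value, one bin per integer value."""
    v = np.asarray(channel).ravel()
    v = v[v <= max_value].astype(np.int64)
    return np.bincount(v, minlength=max_value + 1)

# A natural, unclipped background shows a roughly bell shaped profile;
# a cliff on the left side means the low end has been squashed.
```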
  19. Nice capture. To my eye, this image shows the typical color balance one would get from StarTools processing. If you aim for more realistic color in stars, you really need to do proper color calibration of your image. For reference, here are a few good graphs: First, the color range: Second, the frequency of stars by type: Mind you, this second graph is too saturated - that happens if you don't gamma correct your colors for the sRGB standard yet use an image format that implies the sRGB standard. I think the first scale is more accurate, but it does not go as deep as the second (which is to be expected, as O type stars are about 0.00003% of all stars, so the first range maybe stopped at B rather than going all the way to O type). Match those two against your image above and you will see that you are too cyan/aqua, and possibly over saturated, in your star colors.
  20. Do you mind posting the original sub, unless that exact jpeg came out of the camera? The fact that the above image is 1920x1280 means that either you or the camera made the image smaller. If this image came from the camera as is, then Canon probably implemented a very nice way to reduce image size - using algorithms similar to binning. An additional thing that can happen is jpeg smoothing. In any case, the combination of those factors could make the image look like a x10 longer exposure - so the image behaves not like an 11s sub but like one of about 110 seconds. Still an impressive result.
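The x10 figure follows from simple binning arithmetic - a quick sketch (the full-resolution frame width is an assumption for illustration):

```python
# Reducing a frame ~3x per axis combines ~9 pixels into one. Averaging n
# pixels improves SNR by sqrt(n) - the same gain as an n-times longer
# exposure for sky-noise limited subs.
original_width = 5760   # assumed full-resolution width, for illustration
reduced_width = 1920
n = (original_width / reduced_width) ** 2
print(n)                # 9.0 -> an 11s sub behaves like roughly a 100s one
```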
  21. I honestly can't see the distinction, but I do understand that you have certain requirements. As I don't have experience with either, I can't help much, except to point out that the Canon lens is a bit on the heavy side to just hang off the camera body - you might want to look into some sort of bracing system, like rings and a dovetail, to support the lens.
  22. Two very different focal lengths. Do you have any idea what camera you will be using and what the intended FOV is? Do you already own either of them? My personal choice for such a small focal length would be something like this: https://www.firstlightoptics.com/william-optics/william-optics-2019-zenithstar-73-ii-apo.html + https://www.teleskop-express.de/shop/product_info.php/info/p11122_Riccardi-0-75x-APO-Reducer-and-Flattener-with-M63x1-Thread.html (provided that the focuser on the scope has an M63 thread - and it being a 2.5" R&P, I would believe that to be the case. Also, I would probably go for the TS Photoline 72mm scope over the WO one).
  23. Actually, now that I've inspected the image further - it's not a shadow, it is a reflection. At first I noticed just the dark spot, but it is the bright doughnut around that dark spot that is the issue - it is a reflection of unfocused light: Maybe this screen shot will make it easier to see: That sort of thing happens when there is a bright star in the frame, and it can't be corrected with flats - it is a reflection, same as this one here: There is a faint halo around that star. In your image it looks like the culprit is Alnitak - it is at the same distance in the opposite direction with respect to the optical axis.
