Everything posted by vlaiv

  1. Excellent image and welcome to SGL
  2. The way I see it, you are replacing a scope that has:
     a) 8" of aperture (171mm of clear aperture once mirror reflectivity and secondary obstruction are taken into account)
     b) 1000mm of focal length
     Even a 6" APO refractor will be a slight step down in terms of aperture, but yes - something like the Esprit 150 would be considered a replacement. There aren't many options for a 6" refractor, really - and they are all very expensive. For about half the price of one such scope you could get a range of telescopes to use for different things, so consider that as well. Say you want a very wide field - then an 80mm F/6 APO with a reducer is hard to beat. For medium fields you can get one of those 6" F/4 fast scopes to use with a 4/3 type camera (maybe APS-C - but the coma corrector needs to be good to match that sensor size). Want to get up close? Maybe an 8" RC with a reducer at F/6? All of that will cost half of an Esprit 150 - and you get a nice 80mm apo as well.
  3. I would not consider achromat refractors in that case. What focal length are you looking for?
  4. I'm slightly confused by your original post - do you want a scope for visual, for AP, or one capable of both? On first reading, I understood that you have a SW 200p dobsonian which is getting old and you want something for visual. Then I read that you mostly do AP and haven't used the scope for visual in quite some time. Now, looking at your gear list, I'm thinking it's maybe not an F/6 dob at all - maybe it is the 200p parabolic F/5 newtonian, used both for visual and AP. The scope that @johninderby recommended is primarily a visual scope - that added to the confusion.
  5. I now need to dial in the power value to get a good uniform distribution. I added a second graph showing the number of samples rather than average deltaE for each square. This is with power 96 and 100,000 samples. Starting to look good; at this power, samples bunch up slightly at the blue and red parts of the spectrum for some reason - but I'll dial that in when I get the time. I was thinking of doing 3 things to assess how accurate the transform is:
     - spectrum image - expected vs produced
     - Planckian locus image - expected vs produced
     - above deltaE diagram
     I guess those three should give us a good idea of how well a camera performs with a given matrix in terms of color accuracy. I still need to see what sort of differences in the matrix there will be with this power spectrum approach vs other methods, as I think this will be the best for matrix generation. In the end, I'll have to see how far real sensors are from their published QE curves - by using a bit of spectrometry and calibration with a DSLR as reference.
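     For reference, here is a minimal numpy sketch of the binning behind those two graphs (per-square average deltaE and per-square sample count); the x, y and delta_e arrays are placeholders standing in for real per-sample results:

```python
import numpy as np

# placeholders - in practice these are the xy chromaticity coordinates
# and deltaE*(76) error of each random spectrum sample
rng = np.random.default_rng(0)
x, y, delta_e = rng.random(100_000), rng.random(100_000), rng.random(100_000)

# 0.01 x 0.01 squares over the xy diagram -> 100 x 100 bins
edges = np.arange(0.0, 1.01, 0.01)

counts, _, _ = np.histogram2d(x, y, bins=[edges, edges])
sums, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=delta_e)

# first graph: average deltaE per square (NaN where no samples landed)
mean_delta_e = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
# second graph: counts itself is the sample-density image
```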
  6. The above diagram shows mostly astigmatism due to field curvature - but there can be coma as well, or a combination of the two. You could have a bit of sensor tilt in your setup, since coma is stronger on one side. The right image (right side of your shot) is what it should look like in the case of astigmatism - but it can become coma-like when there is a bit of tilt. Deal with the distance issue first and then look into tilt. If you can't find the proper distance, try removing the rotator. In itself, it should not cause any optical issues, as it sits in front of the flattener and contains no optical elements - but it can cause tilt, so remove it and see if that helps.
  7. I'm afraid the best method I know is trial and error. Start very slowly - change the distance by 1mm and compare results: if the effect is smaller, you are going in the right direction; if it is increasing, you are moving away from the ideal position. Variable extensions come in handy in situations like this, but you can try with a set of spacers as well. If I recall correctly, the Canon T2 ring is actually 11mm - so the ideal position may be without any spacers. Perhaps start there and see whether increasing the distance reduces or amplifies the effect?
  8. Well, that was an easy fix. It turns out that I just needed to think a bit about why the above was happening. XYZ space was designed to be an "equi-energy" color space - meaning a spectrum that is "flat", i.e. has equal energy across wavelengths, lands on 1,1,1 in XYZ (or equal X, Y and Z values) - or at coordinates 1/3, 1/3 in the xy chromaticity diagram. Random sampling with a uniform distribution tends to produce roughly equal-energy spectra - the finer the wavelength scale, the more equal-energy the spectrum. We can deal with this in two ways - go coarser (but that also means smoother), or use a different distribution. So I thought a bit about what sort of distribution we want: single spectral lines form the locus, and 1/3, 1/3 roughly forms the center of that locus - and we need spectra spread from center to edges. At the center we have the uniform distribution and at the edges we have single lines, so I said - why not raise the 0-1 uniform distribution to some power? That tends to create more single "spikes" and less uniformity, as small values become much more likely than large ones - but everything is still "scaled" by the largest sample (sort of). In any case, here is 0-1 raised to the power of 16: Much better looking. Color here represents deltaE, but I need to properly "calibrate" it. It does not represent density of samples! It would be good to have an image representing sample density as well - just to make sure it is anywhere near uniform over the xy diagram.
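     A minimal sketch of this sampling idea, with placeholder color matching functions (the cmf_x / cmf_y / cmf_z arrays are my stand-ins - substitute a real CIE 1931 table at 5nm steps for the clustering/spreading effect to actually show):

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(360, 835, 5)              # 360-830nm, 5nm steps

# placeholder CMFs - replace with tabulated CIE 1931 matching functions
cmf_x = cmf_y = cmf_z = np.ones_like(wavelengths, dtype=float)

def random_spectrum(power):
    """Uniform 0-1 samples raised to a power - higher power gives
    spikier, more line-like spectra."""
    return rng.random(len(wavelengths)) ** power

def to_xy(spectrum):
    """Integrate a spectrum against the CMFs, project to xy chromaticity."""
    X, Y, Z = (np.sum(spectrum * cmf) for cmf in (cmf_x, cmf_y, cmf_z))
    return X / (X + Y + Z), Y / (X + Y + Z)

# with real CMFs: power=1 (plain uniform) bunches near (1/3, 1/3),
# while power=16 spreads samples from the locus center toward its edges
uniform_xy = [to_xy(random_spectrum(1)) for _ in range(10_000)]
spiky_xy = [to_xy(random_spectrum(16)) for _ in range(10_000)]
```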
  9. Small update - this is bad. This is an xy plot of deltaE from 10,000 random spectra. The problem is that random spectra are not quite random in terms of color - they take up a very small region of the xy diagram. I guess the next step would be a way of generating a spectrum from an x,y chromaticity coordinate. That is not a well-defined problem (many different spectra end up having the same x,y coordinates), but I'll think of something clever.
  10. According to this diagram, your camera is too close - try extending the distance between camera and flattener to see if you can reduce / remove the issue?
  11. There are a few other things you can do to improve your image further. At some point, you are probably going to want to remove those blue halos around stars. When you come to that point, look up aperture masks and the Wratten #8 filter. That is an easy and low-cost method of getting rid of chromatic aberration - but it comes at the expense of imaging time (you have to shoot more subs because the aperture mask blocks some of the light) and custom white balance (the yellow filter creates a yellow cast that you need to remove in post processing).
  12. I've been doing some more work on this and have come to realize that we really need a different method of assessing the effectiveness of a sensor / sensor + filters combination. Gamut / reproduction range is really not the metric we are after - at least I think so. I think we need a way to assess how precise / correct the reproduced colors are. Following advice from @Martin Meredith, I checked out the general solver in Scipy and added a feature to derive the Raw to XYZ transform matrix where distance is calculated not in XYZ space but as deltaE*(76) (the simple 1976 version for now). The resulting matrices are quite different - here are spectral loci transformed with each: Blue is the regular XYZ plot of single line spectra in the 360-830nm range. The red graph is raw to XYZ for the ASI178 derived with the deltaE approach, while the yellow graph is the same camera derived with the regular norm - distance in XYZ. For matrix derivation, a mixed data set is used in both cases - single line spectra in the 360-830nm range every 5nm, and black body curves from 2000K to 50000K every ~510K - that gives an equal number of each, 94, so 188 samples total for matrix solving. I think the next step should be trying to assess how accurate the produced colors are - that is more important. I think I'll start again with random spectra, calculating XYZ, raw to XYZ and deltaE for each spectrum and then "binning" deltaE values over the xy diagram - like average deltaE in each 0.01x0.01 square (to have 10,000 bins).
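     For reference, a minimal sketch of that kind of solver (names are mine; raw_samples and xyz_targets are placeholders standing in for the values obtained by integrating the mixed spectra set against the camera QE curves and the CIE matching functions):

```python
import numpy as np
from scipy.optimize import minimize

def xyz_to_lab(xyz, white=np.array([0.9505, 1.0, 1.089])):
    """CIE XYZ -> L*a*b* (D65 white point assumed)."""
    t = xyz / white
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_delta_e76(m_flat, raw, xyz):
    """Average deltaE*(76), i.e. Euclidean distance in Lab space."""
    M = m_flat.reshape(3, 3)
    predicted = raw @ M.T                     # raw (N,3) -> XYZ (N,3)
    return np.mean(np.linalg.norm(
        xyz_to_lab(predicted) - xyz_to_lab(xyz), axis=-1))

# placeholders for the 188-sample mixed set (lines + black body curves)
raw_samples = np.abs(np.random.default_rng(0).normal(size=(188, 3)))
xyz_targets = np.abs(np.random.default_rng(1).normal(size=(188, 3)))

result = minimize(mean_delta_e76, x0=np.eye(3).ravel(),
                  args=(raw_samples, xyz_targets), method='Nelder-Mead')
M_raw_to_xyz = result.x.reshape(3, 3)
```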
  13. C'mon, that's no coma - that is a lacewing! And this is what coma looks like:
  14. Yep, that won't work - or rather, it will work as you have discovered: it will be usable only in the central part of the field of view and will show serious issues. You only really need the T2 adapter for your camera, and you should put that directly in the telescope focuser. You should be able to reach focus like that if I remember correctly - I used to have that scope and took some pictures with a DSLR attached, but only daytime shots. Let me see if I can find a video that shows how best to attach a DSLR. https://www.youtube.com/watch?v=8w_p2vM1g-w This shows the ST120 - a bigger version, but the principle is the same. With the item you purchased you should have everything you need - you only need the T2 adapter to Canon mount, so this bit: You should be able to screw that onto the T2 thread on the back of your telescope. In order to reach focus, you'll have to wind the focuser out quite a bit (see video for details).
  15. I have no idea what might be going on - but I really like the effect! Best warp speed I've seen so far. Refractors suffer from field curvature - but whatever this is, it is far too much - it should not look like that. Can you describe your setup please? I'm wondering if there is some extra piece of glass between the telescope lens and the camera sensor?
  16. Well, yes, this changes things. Here is the spectral based matrix for the ASI178 with old and new code: To clarify again what I did here - under the old code I assumed that XYZ space uses energy / power density over the spectrum - which is right - and that the camera also records energy / power over the spectrum - which is wrong: the camera records photon counts. To convert from energy to photon count, one just needs to divide the total energy at a given wavelength by the energy of a single photon at that wavelength. Now that we have the corrected method, we can compare the gamuts of the published and measured ASI1600 profiles - which vary slightly. Well, surprise, surprise - for the matrix generated by single line spectral samples, the gamut is absolutely the same (the graphs overlap so only one is seen). That sort of makes sense - QE determines only the intensity of a single line spectrum, not relative ratios, since there are only single lines. The intensity of light does not change its color - so both cameras record the same values in terms of "color", just at different intensities. If we make a matrix out of spectra containing more than one line, things should be different. Indeed - there is a subtle difference between the two. I'm now going to try color conversion matrices created with black body radiation curves and combined sets (spectral lines + black body curves).
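     A quick sketch of that energy-to-photon-count correction (qe_energy is a placeholder for a response curve expressed in energy terms, sampled at the same wavelengths):

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

wavelengths_nm = np.arange(360, 835, 5)
lam = wavelengths_nm * 1e-9                  # wavelength in meters

# energy of a single photon at each wavelength: E = h*c / lambda
photon_energy = h * c / lam

# placeholder energy-based response - substitute the real QE curve
qe_energy = np.ones_like(lam)

# photon count = energy / photon energy, which is the same as
# multiplying the energy spectrum by lambda (up to the constant h*c)
photon_count = qe_energy / photon_energy     # == qe_energy * lam / (h*c)
```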
  17. No, that is why you autoguide - so you don't lose subs. Once you start autoguiding on an EQ mount, if your guiding is good you don't lose any subs - unless something drastic happens that can't be guided out, like a cable snag, or you bump your telescope accidentally, or whatever.
  18. Use ISO 400, 800 or 1600 - there won't be much difference (I guess go with higher ISO for shorter exposures - that should reduce read noise somewhat). Wobbly images happen not because of wind but because of the mount - even EQ mounts have periodic error in their drive trains and don't follow Earth's rotation perfectly - that is why people guide. You are using an Alt-Az mount that needs to track in both axes at once - two sources of error versus one with EQ mounts. The mount itself is not really high quality in the first place. The Orion nebula is pretty much as bright as it gets, so it is hard to find a target as bright as that. Try maybe M81/M82?
  19. I think there will be only a slight difference. We can "test" that as well - I have two different profiles for the ASI1600. Not sure which one is accurate, if either, but one is published and the other is measured. I'll run matrix generation on both (as soon as I fix the photon count mess-up) and we can compare results.
  20. Btw, I just realized I've made a large error in the above calculations. I was not paying attention to units. Not sure if it is going to impact results - but I know that it makes the uniform random distribution non-uniform, or "skewed". XYZ coordinate space is defined as an integration of spectral radiance, which is energy (per unit time), rather than photon count, which is what the camera captures. I used spectral radiance for both - when I should have used photon count for camera raw values and radiance for XYZ. The difference is obviously the photon energy at a given wavelength, given by the expression: E = h*c / lambda. In other words - I must multiply the camera data by lambda. Will now check if it makes a difference and how much (it changes the resulting matrix, but I'm not sure about the gamut).
  21. Depends on the actual QE. Imagine for example a camera that only captures light in the 500-600nm range and has QE of 0% otherwise - no filtering can make a camera sensitive where it's not sensitive. In that sense, it can affect gamut greatly. Gamut is a tricky thing to understand. It's not about whether a color can be registered - it's about whether it can be 'resolved'. A gamut smaller than the full gamut does not mean that the camera will not collect photons and register some values. The camera always collects some photons and registers some values. The question is - can you distinguish them as different colors once the camera registers them? A mono sensor without filters has the narrowest possible gamut - a single point on the xy chromaticity diagram (it does not matter which point - that depends on your chosen matrix). The whole image will be monochromatic - only shades of a single color, whichever color you choose that to be (you can have shades of red, or shades of green, or shades of grey - all of these are single points in the xy diagram).
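     To spell out the mono case with a small worked example (my notation): a mono sensor reduces every spectrum to a single raw value s, and any fixed raw-to-XYZ assignment maps that to s*(X0, Y0, Z0) for some chosen vector (X0, Y0, Z0). Chromaticity is then x = s*X0 / (s*X0 + s*Y0 + s*Z0) = X0 / (X0 + Y0 + Z0), and likewise for y - the s cancels, so every spectrum lands on the same xy point no matter how bright it is.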
  22. Just a small update. I've found an old color calibration of my ASI178 - one that I know is a bit wrong. I downloaded a color checker passport image onto my mobile phone and used it to do color calibration. I suspect this is wrong since my phone gives a somewhat bluish image, and in the DSLR measurement (assuming my DSLR is properly calibrated and produces good XYZ values) it gives a white point of about 7000K, which is higher than the expected 6500K for sRGB. In any case, I wanted to see what the gamut would look like with this transform versus the derived transform: The red line represents the gamut of the artificial 178 profile + measured transform matrix, and the yellow line represents the artificial 178 profile + spectrally derived transform matrix. The measured matrix does not look bad at all. I think I'll try to derive a matrix from two different data sets: a Planckian set over some range of temperatures - like 2000K-50000K - and also those distributions I linked above - spectra of the color checker + D50.
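     A small sketch of generating that Planckian set (standard Planck's law; variable names are mine):

```python
import numpy as np

h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
k = 1.380649e-23      # Boltzmann constant, J/K

wavelengths_nm = np.arange(360, 835, 5)
lam = wavelengths_nm * 1e-9

def planck(lam, T):
    """Spectral radiance of a black body at temperature T (Planck's law):
    B = 2*h*c^2 / lam^5 * 1 / (exp(h*c / (lam*k*T)) - 1)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

# set of black body spectra from 2000K to 50000K
temps = np.linspace(2000, 50000, 95)
planckian_set = np.array([planck(lam, T) for T in temps])

# normalize each spectrum so the solver sees shape, not absolute scale
planckian_set /= planckian_set.max(axis=1, keepdims=True)
```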
  23. I'm having trouble accessing most of those links (at least 3-4 that I tried seem to not work any more) - but you are right, I have considered calculating actual spectral xy colors for most stars, to be used for further color calibration / adjustment. I found this (working) catalog: https://www.eso.org/sci/facilities/paranal/decommissioned/isaac/tools/lib.html I guess weights should be assigned by relative luminosity and abundance. We should really include single spectral lines for astrophotography calibration, since many of the objects recorded are emission type and shine in a few dominant spectral lines (like Ha, SII, OIII, Hb, NII, ...). Regular daytime calibration of DSLRs only uses a handful of colors (color checker passport illuminated with D50), and I actually have curves for them. @sharkmelley pointed these out for me in another thread where we discussed something similar, but unfortunately we could not reach agreement in our understanding of that particular topic. https://www.babelcolor.com/index_htm_files/ColorChecker_RGB_and_spectra.zip
  24. Now, onto gamut. First, let's explain what these (very strange looking) gamut graphs mean. Each graph contains 3 things:
     1. Spectral locus in XYZ color space. This is the set of points where all pure spectral lines land in the xy chromaticity diagram. Btw, xy chromaticity is a 2d space that deals with color exclusively, not brightness. It represents all colors that humans can see in terms of only two numbers - x and y - without any particular brightness assigned to them.
     2. sRGB gamut - the subset of all colors we can see that can be displayed on an sRGB calibrated computer / phone screen. Not all colors that we can see can be displayed on computer screens - simply because computer screens can't show the most saturated colors.
     3. Gamut of the Raw to XYZ transformed spectral locus. It is very important to understand what this represents - it is the set of color values that the camera can produce. It says nothing about whether these colors are accurate - we need another metric for that - it is just the set of numbers that can result from the camera taking an image. If a color falls outside of the camera gamut, the camera will never produce that color in an image. If a color falls inside the camera gamut, the camera can produce an image containing that color - however, there is no guarantee (from this graph) that this particular color will be accurate.
     Another note - colors that fall outside of the sRGB gamut will certainly be displayed incorrectly, since an sRGB monitor can't display the original color - the closest color will be used instead (what "closest" means is a science in itself). Quick reminder - if you have not seen the xy chromaticity diagram, here is one: This is a good diagram. First - the horseshoe that contains all colors is the spectral locus. The bright triangle is the sRGB gamut - and those are accurate colors. The rest of the colors are darker to represent the fact that the color you see in that place is not the true color that belongs there - it is a "best effort" by your computer screen to show the actual color in that spot. Greens at the top of the horseshoe are simply more saturated greens than the display can show. Note that all colors that can be made with a certain combination of lights lie inside the shape made by connecting the dots - R, G and B of the sRGB color space form a triangle, and all colors that can be generated by those three primaries lie inside that triangle. Similarly, all colors lie inside the horseshoe, since all colors are a product of pure spectral colors in some ratio.
     Now onto the gamuts... ASI178: We can see a few things from this graph:
     - most blues fall inside the gamut of this camera
     - the most saturated greens actually fall outside of the gamut of this camera (btw, when I say camera, it is actually camera + a particular Raw to XYZ matrix - a different matrix will have a slightly different gamut)
     - the camera also can't produce pure spectral reds (this really means that if the camera is imaging Ha, for example, it will produce a color that is not pure deep saturated red, but a somewhat unsaturated red instead)
     - the camera produces most of the sRGB gamut - only the pure red of sRGB can't be produced. Whatever image you record with this camera, it will never contain the 1,0,0 color (in sRGB space) - IR colors will produce the weirdest yellowish orange hues
     ASI185: This one is even weirder. I suspect that this is because the camera has a strong red response.
     Pretty much similar to the ASI178 above, except:
     - the gamut on the red side is even smaller - pure IR wavelengths will produce anything from yellow to bluish
     - even more of the sRGB gamut is clipped close to the red vertex
     ASI1600 - this one is the biggest surprise to me: It indeed has the smallest gamut of the three - it clips greens, reds and blues - but interestingly enough, it covers the whole sRGB gamut (note that we look at the convex area that encloses all data points of the ASI1600 spectral locus). Another thing that can be seen is that the pure spectral colors are the most unevenly spaced on the "curve" (which is not really a curve but more resembles a triangle) - and because of the different distances between points, we can see "the banding" in rainbow colors I was talking about. An important thing to remember is that these matrices were generated with uniform random spectra only - which might not be, and probably is not, the best data set. It also does not use the best metric with respect to human vision - it should minimize distance in a perceptually uniform space rather than in XYZ space, so that converted colors are most "like" the ones they should represent. In the end, let's see how the matrix choice (or rather, the choice of spectra used for matrix generation) changes the gamut: Here we have ASI178 + random matrix gamut vs ASI178 + spectral matrix gamut. The second matrix was generated with only 94 different spectra - single spectral lines in the 360-830nm range in 5nm steps. Maybe the second, spectral matrix gamut is slightly larger - but it also loses some coverage around 400nm in the blue part of the spectrum. Reds are somewhat recovered, and I think the overall error is smaller - respective locus points seem to be closer to the original XYZ spectral locus. Hope you found this interesting - the next thing would be to optimize the matrix transform in some way - by selecting a good spectral data set for generation, and maybe by changing the algorithm for generating the matrix. Another interesting topic is how accurate the resulting colors will be - over the whole space, and in particular in sRGB space. That will answer the question: how different would an image on a computer screen be, taken with a particular camera vs an ideal camera? We want to know if using a mono camera with filters is somehow inferior in the accuracy of the produced image when viewed on a computer screen. Maybe do the same with a wider gamut space - since computer screens are likely to go wider in gamut, and unlikely to go smaller, in the future.
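     For reference, a minimal sketch of how such a camera gamut outline can be computed (scipy's ConvexHull over the transformed locus; the raw locus data and matrix here are placeholders):

```python
import numpy as np
from scipy.spatial import ConvexHull

# raw responses of the camera to single spectral lines, shape (N, 3) -
# placeholder random data; substitute integration of QE curves per line
rng = np.random.default_rng(0)
raw_locus = np.abs(rng.normal(size=(95, 3)))

# some raw -> XYZ matrix (placeholder identity)
M = np.eye(3)

xyz = raw_locus @ M.T
xy = xyz[:, :2] / xyz.sum(axis=1, keepdims=True)   # project to xy

# camera gamut = convex area enclosing the transformed locus points
hull = ConvexHull(xy)
gamut_outline = xy[hull.vertices]                  # polygon vertices, in order
```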
  25. Yep, just uniform 0-1 over the range of 360-830nm. In fact, both are sampled at 5nm intervals (the spectral data and the matching functions). I'm aware that this type of matrix generation might not be the best, so I'm also going to present findings on two different methods later on and propose a third - but I'm willing to take suggestions on the best data set. The second method is the pure spectral locus (I just use pure spectral lines as the spectra), and the third is a "weighted" method - we assign 3 weights to:
     - random spectra
     - Planckian locus spectra (black body curves in a certain range, say 2000-50000K)
     - spectral locus
     If this method is to be used, we really should somehow model the colors that we are likely to encounter in astrophotography. Also, least squares is the easiest to implement - but may not be the best metric; delta E might be better. However, I'm not sure I know how to generate a matrix that minimizes delta E for a given set of spectra and XYZ and Raw profiles. If anyone knows an optimization algorithm that I can use to transform one set of vectors to a different set of vectors using a custom "distance" function - that would be excellent. (A sketch of the plain least squares version is below.)
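     For the least squares version the matrix has a simple closed-form fit - a minimal sketch (placeholder data; raw and xyz would come from integrating the chosen spectra against the camera QE curves and the CIE matching functions):

```python
import numpy as np

# N spectra reduced to camera raw triplets and target XYZ triplets
rng = np.random.default_rng(0)
raw = np.abs(rng.normal(size=(188, 3)))    # placeholder raw responses
xyz = np.abs(rng.normal(size=(188, 3)))    # placeholder XYZ targets

# solve raw @ M.T ~= xyz in the least squares sense;
# lstsq fits each XYZ component (each column of M.T) at once
M_T, residuals, rank, _ = np.linalg.lstsq(raw, xyz, rcond=None)
M_raw_to_xyz = M_T.T                       # rows map raw -> X, Y, Z
```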