Everything posted by vlaiv

  1. I was really struggling with this, and mind you, I'm no expert at NB composition, but here is something resembling a result. Here is Ha only: I think it is quite a bit better than the previous one.
  2. On quick inspection, these look much better, I must say. I'll have another go at processing now.
  3. You don't need a clear night to get your darks, nor do you need to do it between filter changes. You need a single set of darks, even if you are using different exposure lengths for different filters - as you will attempt to scale the darks. Just take a set of darks somewhere close to the temperature you were working at (it doesn't need to be that exact temperature - so you can leave the camera in a shed or basement or somewhere during a cloudy night to record them). Don't apply the 3x3 median filter - it will just kill any chance of stacking properly. You can try single pixel cosmetic correction though - that might work; a sketch of what I mean is below. In fact, one of the steps when trying to get that Ha signal was to apply my own cosmetic correction thing to the whole stack. You can even post all subs (maybe Dropbox or Google Drive) to see what sort of stacking would work best on your data.
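To illustrate what I mean by single pixel cosmetic correction, here is a minimal sketch, assuming the sub is a 2D numpy array - not the exact routine I use, just the same idea:

```python
import numpy as np
from scipy.ndimage import median_filter

def cosmetic_correction(sub, sigma=5.0):
    """Replace isolated hot/cold pixels with the median of their
    3x3 neighbourhood, leaving all other pixels untouched."""
    local_median = median_filter(sub, size=3)
    deviation = sub - local_median
    threshold = sigma * np.std(deviation)     # outlier cut, in units of local scatter
    outliers = np.abs(deviation) > threshold
    corrected = sub.copy()
    corrected[outliers] = local_median[outliers]
    return corrected
```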
  4. Even triplets don't bring the whole spectrum to the same focus point - they bring only three wavelengths of light to exact focus, versus the two wavelengths that doublets bring together. If you want the whole spectrum in focus - you need to use pure reflection systems. APO performance is a somewhat loose term - it should read as: no chromatic aberration perceivable by the observer. That is the key - there is no color aberration that you can see. How much color aberration you will see depends on the defocus for a particular wavelength, but also on eye sensitivity at that particular wavelength. Have a look at this graph: The blue line is a simple lens - or a singlet - it brings just one wavelength at a time to focus. Next is the green dotted graph - that is a doublet lens (this particular one is an achromat) - it brings two wavelengths to focus at the same time. Orange and red are an APO and a four lens combination (an optical quadruplet lens - but not a photo quadruplet). The amount of color that you will see depends on how far the depicted curve lies from the central 0 axis (how much defocus and related blur there is). The thing with ED doublets is that they produce a much "tighter" curve than the green achromat one. It will be of the same shape, but it will be very close to 0 over the 400-700nm range. If you get a really good ED doublet you will see virtually no color aberration. So why do people purchase APO triplets at all? One obvious reason is astrophotography. CCDs are much more sensitive than the human eye across a larger range of frequencies, and while the human eye may not see the color aberration - the CCD will be able to record it. Other reasons might include sensitivity to color (some people can see residual chromatic blur because they are more sensitive to it), or just wanting a higher class instrument (if an instrument costs more to manufacture - there is a higher profit margin and incentive for the maker to be extra careful about how they figure the lens, do QA and so on). There is also the issue of F/ratio - an ED doublet can be corrected only at F/ratios from 7 or 8 (depending on scope size) and upward. If you want an F/6 or F/5 instrument that is free of color - you need an APO triplet to be able to do it. But for all other purposes (casual visual without color) - an ED doublet is simply sufficient.
  5. I'm not sure I can make anything out of this data, to be honest. It is just too noisy, and the noise is not the "nice" kind of noise. Why didn't you include dark calibration? That could probably help a lot with that kind of hot pixel noise. Here is what I could manage to get out of the Ha sub (with all kinds of magic): There is some sort of bright ridge over that image that I can't get rid of - it is not there in the other subs. Also, the other subs don't contain this much signal at all (Ha is usually the strongest of the three). So I believe you won't be able to get a decent image out of this data unless you try to remove that noise. I'm guessing that you did not want to take darks because you don't have set point cooling, right? Why don't you just give it a go with dark scaling? Maybe you will be surprised by the results. Do take a set of darks - same exposure - and use them as well, but tick the "dark optimization" option in DSS (and leave everything else the same; in fact - don't even do sigma clip stacking - use regular average for everything). Btw, here is the OIII sub processed the same as Ha: Not much there really, and denoise makes things too soft.
  6. Just a piece of information on the maximum magnification of a telescope - how it is "established" and what it means. In reality, the max magnification of a telescope as calculated by 2x the aperture in millimeters is based on two things - the theoretical resolving power of an aperture of a certain size (the Airy disk produced) and the theoretical resolving power of the human eye (20/20 vision). If you want to see whether the maximum recommended telescope magnification is just right, not enough or too much for you - just do the following: try to see two high contrast features - maybe the best case would be two black poles next to each other against a blue sky (aerial poles or similar) - such that their angular separation is 1 minute of arc. Another test would be to spot a feature on the Moon that is 1/30th of the Moon's diameter - by naked eye of course. Maybe try to spot the Plato crater, for example (as a single dark spot)? The rough numbers are sketched below. If you can do that - then the maximum recommended magnification is just right for you; if not (and odds are that you can't), you will enjoy higher magnification. The problem with max theoretical magnification, for all but the smallest telescopes, is that it is going to hit the atmosphere limit much sooner - around x100 or so in most circumstances, and that would be the max for a 50mm scope - most of us use larger apertures than that, so we can't reach the max magnification of the aperture under most circumstances - and that is why going close to max magnification looks blurry - not because it is the "real" upper limit of the telescope (it is, but only for a person that has perfect vision).
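For anyone who wants to try the Moon test, here is the rough arithmetic behind it; the figures for Plato and the Moon's apparent size are round numbers I'm assuming for illustration:

```python
# Rough arithmetic behind the "1 arc minute by eye" test above.
# Assumed round figures: Plato ~101 km, Moon ~3474 km diameter,
# ~31 arc minutes apparent diameter.
moon_diameter_km = 3474
plato_diameter_km = 101
moon_apparent_arcmin = 31

plato_apparent = moon_apparent_arcmin * plato_diameter_km / moon_diameter_km
print(f"Plato is ~1/{moon_diameter_km / plato_diameter_km:.0f} of the Moon's diameter")
print(f"Plato apparent size: {plato_apparent:.2f} arc minutes")  # ~0.9', close to 1'

# If your eye resolves that ~1' feature, the "2x per mm" rule fits you:
aperture_mm = 150
print(f"Max recommended magnification for {aperture_mm} mm: x{2 * aperture_mm}")
```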
  7. Ok, this is rather nice now. I used FitsWork to convert the raw (ARW) files to fits. This is the first time I've used it and it looks ok; I just disabled debayering and color scaling in the options to prevent any fiddling with the data. Here are the results of stats on each sub (1, 2, 3 and 4 being DSC0261 - 0269 respectively): First few pointers - the camera is obviously 12 bit and it shows, as the min value is -128 and the max value is 3965 (that is a 0 - 4093 range, and the max number in 12 bits is 4095). FitsWork applied an automatic offset, so the minimum value is -128, which is fine, while the mean value is above 0, which again is fine and should be expected. The mean value increases in each subsequent frame (this can be interpreted as higher dark current, or higher signal in general) but the standard deviation does not increase like that (and it should, if the signal increase was due to dark current alone). Now, the standard deviation number here is not a measure of noise, since we did not remove bias from the above subs (you need to remove the bias signal to get pure random noise in order to measure its value). All 4 darks look fairly nice - same amp glow + bias pattern, and 0269 looks probably the cleanest of the bunch. Histograms also look nice - no clipping of any kind, and they look the same for all four subs. Now, in order to see what sort of noise each one really has - we would need to remove bias (maybe shoot a small number of bias subs, like 16 or so, and post those as well?). For the time being I'll just play a bit more with the data to see what I can come up with. Here are the last two subs subtracted (0269 and 0265): With a set point cooled camera, when you have an exact temperature match, the mean should be 0 - here it is 1, which is to be expected as the respective means differ by 1 (17.955 and 18.977 in the first table posted - stats for each sub). But this time the noise went down and it is now 13.08 (compared to individual sub results in the range of 40-50) - as this is true noise without the bias signal. In fact, the difference sub looks rather nice: smooth noise and not much else there (except that nasty offset of 1 - but that is either because of bias instability or a consequence of cooling, I'm not sure which - we need a set of bias subs to figure it out). The histogram of this difference between two darks also looks nice (bell shaped), and the FFT of the difference also looks ok: it is nice uniform noise - except for that vertical line, which means that there are horizontal bands in the resulting image - meaning not all bias was removed because of the difference in temperatures (that is why you need to match the temperature to remove bias features if doing just dark calibration without bias, or you need bias removal + dark scaling to get good calibration). Doing the same between the first and second darks (0261 and 0263) reveals a result with more noise, but a cleaner looking FFT (less pronounced vertical line - smaller mean difference and bias better removed). Which number was shot under which conditions? Another important thing that I just noticed - the exposure time is rather different for each of these subs: 914s, 909s, 931s and 970s. How did you set the exposure time? This could explain why there is an "inversion" between subs - lower mean level but higher noise (shot at higher temperature but for less time than the other subs).
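For anyone wanting to repeat this dark-difference check, here is a minimal sketch, assuming two FITS darks of equal exposure (the file names are just placeholders):

```python
import numpy as np
from astropy.io import fits

# Load two darks taken at (nominally) the same settings.
dark_a = fits.getdata("dark_0265.fits").astype(np.float64)
dark_b = fits.getdata("dark_0269.fits").astype(np.float64)

diff = dark_b - dark_a
print(f"mean offset: {diff.mean():.2f} ADU")    # ~0 if temperature/bias match
print(f"noise (stddev): {diff.std():.2f} ADU")  # fixed bias pattern cancels out

# FFT magnitude: a strong vertical line here means horizontal banding
# in the difference, i.e. bias was not fully removed.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(diff)))
```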
  8. I'm not an expert on DSLRs and the way they work, but I would imagine that white balance information is not applied to raw files, but rather in post processing (or in the jpeg conversion step in camera). At least that is what I believe, since I can quite easily change white balance in software with a Canon DSLR and its raw images (I don't use it for AP but sometimes for daytime photography). Let's see if I can find software to convert ARW to fits without any "distortion" and then examine those.
  9. Ok, let's keep things simple to see if we can avoid some issues. No median filter for noise, no cosmetic correction at this stage. Use regular average for the calibration frames (which camera are you using btw, since you don't use darks?) and use kappa sigma clipping for the final lights integration. Set kappa to something like 2-3 (kappa being the rejection threshold in standard deviations) and the number of iterations to something like 3 - the idea is sketched below. You can use Ha as the reference frame, but tell DSS not to include it in the stack (or even better - use groups; DSS should align them all to the same reference frame, right?). Save the result as 32bit fits and post those.
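For reference, kappa sigma clipping itself is simple; here is a minimal numpy sketch of the idea (not DSS's actual implementation), assuming `subs` is a stack of aligned frames shaped (count, height, width):

```python
import numpy as np

def kappa_sigma_stack(subs, kappa=2.5, iterations=3):
    """Per pixel: repeatedly reject values more than kappa standard
    deviations from the stack mean, then average what survives."""
    data = np.ma.masked_invalid(subs.astype(np.float64))
    for _ in range(iterations):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > kappa * std, data)
    return data.mean(axis=0).filled(np.nan)
```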
  10. The best way to assess darks would be to post them in raw format, without any sort of transformation / debayering done to them. Here is a quick analysis of the first posted png: Due to the debayering performed (or maybe as a consequence of a light leak?), the statistics are not looking good. With a regular raw you would expect the "RGB channels" to have the same mean value and roughly the same StdDev (same for the Mode). Here we see that it is not the case - green is significantly lower than the other two. This could be due to debayering or even some sort of light leak (different pixel sensitivity to the light leak because of the R, G and B filters on them). We can look at the stretched channels to see if there is any sort of pattern (amp glow or bias pattern or whatever): You can see that green is much darker (this is the same linear stretch on each channel). Here are the channels stretched to roughly the same visual intensity (a different linear stretch for each, but showing roughly the same information), together with histograms: The channels look distinctly different, and in principle a proper dark for a color camera should be no different from one for a mono camera - the channels should look roughly the same; in fact you don't even need to split into channels for dark calibration - you do it while still raw. In any case, I would be happy to run this sort of thing on your darks again if you provide me with a RAW frame rather than a debayered one (to see if there is indeed a light leak, or whether all the issues above were caused by the debayering process).
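This is the kind of per-channel check I run; a minimal sketch, assuming `raw` is an undebayered 2D frame with an RGGB Bayer pattern (adjust the offsets for a different layout):

```python
import numpy as np

def bayer_channel_stats(raw):
    """Split an RGGB raw frame into its four Bayer channels and print
    the statistics that should roughly match on a proper dark."""
    channels = {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }
    for name, ch in channels.items():
        print(f"{name}: mean={ch.mean():.2f} stddev={ch.std():.2f}")
```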
  11. Hi, I just had a look at the data you posted and am slightly confused about it. What stage of processing is this? I'm asking because I expected data straight out of DSS after stacking, but the attached files feel like something has been done to them and I don't know quite what. It might be as simple as a slight stretch done in DSS itself, or maybe some sharpening or something. The background does not feel right, the stars don't feel right - something is odd. Can you explain how you did your integration? (Did you drizzle or do something "fancy" to the data?) To be precise about what I mean, here is a piece of the red channel in the RGB composition: the grain is too big in the background - like the image was enlarged or, I don't know, maybe sharpened or denoised or a combination of the two. Btw, to answer your initial question, here is what I would do for SHO with Ha lum composition (not sure if you can do that in PS, maybe you can with some clever layer manipulation; see the sketch after this post): 1. make sure the data is nicely stacked and linear, and that it is in 32bit float format (instead of 16bit fixed point/int) 2. do background elimination at this stage (wipe) 3. do an individual stretch on each of the R, G and B channels until you get nice looking nebulosity in each, and then combine the result into an RGB image. You can use quite heavy denoising at this stage - this is just color information and it is not as important to preserve its full sharpness. 4. Do the same with Ha to create the luminance layer - stretch it, do sharpening / denoising - the full processing treatment like you would give a mono image, until you are happy with the result. 5. Take the rgb image and create an "RGB map" image out of it - which is just R = R/max(R,G,B), G = G/max(R,G,B), B = B/max(R,G,B). That is the tricky part to do in PS; maybe one could do it with layers (make an image from 3 layers set to "max value", then make other images with layers set to "divide value" or something like that). 6. Take the completed luminance and multiply it with Rmap, Gmap and Bmap to get the R, G and B channels of the completed color image.
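Outside of PS, steps 5 and 6 are just a couple of lines; a minimal numpy sketch, assuming r, g, b and lum are already stretched 2D float arrays normalized to 0..1:

```python
import numpy as np

def lrgb_compose(lum, r, g, b, eps=1e-9):
    """Steps 5-6 above: build the RGB map, then multiply by luminance."""
    peak = np.maximum(np.maximum(r, g), b) + eps   # per-pixel max(R, G, B)
    r_map, g_map, b_map = r / peak, g / peak, b / peak
    # luminance carries the detail; the maps carry only the color ratios
    return np.stack([lum * r_map, lum * g_map, lum * b_map], axis=-1)
```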
  12. Hi and welcome to SGL. Since you are doing a degree in arts, I'm wondering what sort of noise you need (do you need to be very scientific about it, and what qualities are you looking for in the noise?). It would be far easier to use software that can generate different types of noise for you. The most obvious difference between real data and "synthetic" data would be that one comes from a true random number generator, while the other comes from a pseudo random number generator (good enough for solid scientific work, so I presume not poor for an arts project either).
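For example, if synthetic noise would do, a couple of lines of numpy generate the two kinds most often seen in sensor data (the parameter values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng()
shape = (512, 512)
gaussian_noise = rng.normal(loc=0.0, scale=10.0, size=shape)  # read-noise-like
poisson_noise = rng.poisson(lam=50.0, size=shape)             # shot-noise-like
```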
  13. Not really that tight - even just a bit tighter than what's needed to hold it in place can distort a mirror, for example. Remember, optical surface precision is measured in fractions of the wavelength of light - and that is on the order of a hundred or so nanometers. You simply can't perceive bending on that scale by eye - and most items flex that much when you apply even slight pressure to them. Clips holding mirrors and lenses in position are in the perfect place to cause pinching if too much force is applied (again, too much force means anything more than what is barely needed to hold the optic in place). Btw, pinched optics can also be a consequence of different thermal expansion coefficients. If two different materials are used to hold the lens in its cell at proper spacing, and there is a significant temperature drop compared to the temperature at which the scope was put together - the materials will contract by different amounts, and that can be enough to apply the pressure needed to twist the optics out of shape.
  14. Sorry I did not get back to you sooner - somehow I missed the notification that you replied. Here are my findings so far: - The above subs are not going to be of much use because they were shot in 8bit format. You need to shoot in 16bit format - so choose RAW16 as the data format when taking subs. - From the fits headers I see that you used different software to capture your subs. Darks and bias have the comment "COMMENT Generated by INDI" while light subs have this: "PROGRAM = 'Siril v0.9.11' / Software that created this HDU". The Siril data does not include the gain setting (darks and bias show that the gain used was 145). For calibration files it is important to use the exact same settings as the light subs - same gain, same offset. There might be an issue with the offset, but I can't tell for sure because you used 8bit data format. There is strong histogram clipping to the left in your bias/dark subs - this can be due to offset issues but also due to the 8bit format. What offset value did you use (driver settings)? The bottom / right part of the image is definitely due to stacking artifacts - no subs contain anything that could cause that, and there was enough drift over the light subs to cause that much stacking artifact - you should crop it out. Another tip - your tracking is rather poor. These are 20s subs, right? Look at the star shapes (I took one star, aligned it and made an animated gif): almost every frame has some level of distortion. Maybe try to improve the tracking / rigidity of your mount.
  15. It looks like ZWO decided to go somewhere in between my two suggested regimes for their gain 0 setting. According to the specs on the ZWO website, this is the chart for their ASI2600: Now if you look at the read noise line for ZWO, and the one above for QHY - the blue line (mode #1) - you will see a similarity: both are around 3e (the ZWO one a bit more) and then fall off to below 1.5e (around 1.45 or something like that). This means that ZWO is using mode #1 for their camera and they don't let the user change it to other modes (which is fine if you ask me, as mode #1 seems to be the best for AP applications). At gain 0, ZWO states that their e/ADU is ~0.77 - but that is just their lowest gain setting. This corresponds to a gain setting of about 19 on the QHY model - just look at the graph for gain and the blue line - 0.77 e/ADU is at about gain 19. Now look at the FW part of the QHY graph - again the blue line and gain 19: Same sensor - same values; it is just that ZWO opted to put gain 0 at an e/ADU value of 0.77, and at that point the sensor has a FW of about 50K - hence the ZWO claim that their model has a FW of about 50K.
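The ~50K figure follows directly from the e/ADU value and the 16-bit ADC; a quick check (this relation between the spec numbers is my assumption of how they are derived - real sensors round these figures):

```python
# With a 16-bit ADC, full well is roughly e/ADU times the ADU range.
e_per_adu = 0.77
adc_levels = 2**16
full_well = e_per_adu * adc_levels
print(f"Full well at gain 0: ~{full_well:.0f} e-")  # ~50462 e-, i.e. "about 50K"
```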
  16. I think that the above are decent results. For comparison to other models, I would consider the blue line to be the interesting one - that is mode #1 (not sure if there will be modes with other vendors - that looks like it is a QHY thing). It can operate in two distinct regimes, and the selection of regime should be based on sky LP levels and how good your guiding / tracking is (exposure duration is the key). "Low gain" regime (for mode #1) at gain 0: ~60000e FW, ~3e read noise, 0.925 e/ADU (that is quite a bit of full well capacity and very decent read noise for such capacity - dynamic range is about 14.3 stops in this mode). "High gain" regime (again for mode #1) at gain 75: FW is ~16384e, read noise is 1.466e and e/ADU is 0.252 (or about 1/4 - which means that 14 out of 16 bits are used in practice). With these settings we have something like 13.45 stops of dynamic range (not bad at all). I think we can conclude that with these parameters we have an equal or better option in this sensor than other current OSC offerings. Size is of course important - and APS-C size is very nice. What remains to be seen is how well it calibrates.
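The dynamic range figures quoted above are just log2(full well / read noise), with the numbers from the post:

```python
import math

for label, full_well, read_noise in [("low gain", 60000, 3.0),
                                     ("high gain", 16384, 1.466)]:
    print(f"{label}: DR = {math.log2(full_well / read_noise):.2f} stops")
# low gain: DR = 14.29 stops
# high gain: DR = 13.45 stops
```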
  17. Any sort of light leak on the scope needs to be dealt with regardless of whether darks are taken off the scope or not - because it will impact lights and flats (to some extent; it depends how strong the light leak is).
  18. I'm not really following your reasoning - a small non-cooled CMOS camera for DSO imaging on a small scope. Will you use that camera for anything else - like planetary imaging or EEVA? Why did you narrow the list down to the 178 and 224? Any particular reason? You have a 600D that has been modified, and if you add a scope instead of a camera - you will get pretty much the same thing but a quite a bit "faster" setup. For example: virtually the same FOV (just a tad wider) with RC6" + 600D vs ED80 + ASI178. The difference being, of course, 150mm aperture diameter vs 80mm aperture (almost a x4 light gathering increase). In any case, if you are set on either the 178 or the 224 - my vote goes to the 178, simply because it has a larger surface (the 224 would be my choice for a planetary role due to its low read noise, and the 385 would be my choice for EEVA / planetary / DSO - due to size, sensitivity and read noise - however, maybe wait for the 533 camera that is about to hit the market right now to see what sort of price the non-cooled version will have).
  19. There is some amp glow in your subs, but I don't think that all of the top left corner is due to amp glow (I could be wrong though - we need a proper master dark to establish that). You have an image of what the dark should look like above, and here is another example (my master dark, really stretched): The amp glow pattern in the ASI1600 goes like that - two sections at the right side (top/bottom) and a bit less at the left side, rather "undefined" in shape, following the edges. Here is your light frame heavily stretched to show the "features": Now, I marked what could be a light leak, but it could also be some sort of gradient from LP. The dark sub matches the position of the bright spot, so that reinforces the probability of it being a light leak, but it does not match the "shape" fully enough to rule out other explanations.
  20. Ah yes, I get it now - twice the diameter being related to two times the focal length (simple triangle similarity) rather than to the magnification of the barlow itself. For anyone interested, here is why it works (provided that there is no vignetting in the barlow element - but even small vignetting will not hurt):
  21. Not sure how that works - can you explain why it works, and whether it works only for a x2 barlow, for example (how about a x2.5 barlow and the like)?
  22. You are welcome. There is a mathematical way to determine barlow magnification once you know the focal length of the barlow, but you are right, you can do it via trial and error all the same. Just shoot something that has a feature of known size (like the disk of a particular planet at a particular time, or a crater on the Moon) and then measure its size in pixels in your resulting image - the ratio of the real angular size of the feature to the measured pixel count will give you roughly your sampling rate. For the mathematical way, use this: magnification = 1 + distance / focal_length - but you need to know the focal length of the barlow lens (see the example below). Better barlow lenses usually have that info published. There is quite a bit to learn before you get to your award winning image of the Moon, but I do encourage you to just start recording. Here are some tips for planetary imaging in general: And of course other threads offer good advice as well, so look up optimizing planetary viewing/imaging (not all of it is related to the gear used - ambient conditions have quite an impact as well) and how to acquire and process planetary images.
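To make the formula concrete, here is a tiny example; the 100mm barlow focal length and the spacings are made-up values for illustration:

```python
# Barlow formula from the post: magnification = 1 + distance / focal_length,
# with the barlow-element-to-sensor distance and barlow focal length in mm.
def barlow_magnification(distance_mm, barlow_focal_length_mm):
    return 1 + distance_mm / barlow_focal_length_mm

print(barlow_magnification(100, 100))  # x2.0 at one focal length of spacing
print(barlow_magnification(150, 100))  # x2.5 with 50 mm of extra spacing
```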
  23. For the best image quality you want to avoid using eyepiece projection and go for prime focus, as explained above. There are a number of ways you can achieve different "magnification", but to start, let's discuss why that is the wrong term in this context. Magnification is the term we use for visual applications - it describes how much something is magnified in the sense of: what would it look like to the naked eye if it were X times larger or closer. Technically, it is a ratio of angular sizes of the object. With imaging, things are different - we no longer have two angular sizes to compare (angular size with the naked eye and angular size with telescope and eyepiece); we now have a different process - mapping angular size to pixels (or sampling points). That is called sampling resolution. Given an image of a certain sampling resolution - you can still make a thing in the image appear small or large by using a different projection on the device used to display it - like this: Above is the same image (it is a Voyager 1 image of Jupiter in high resolution, so credits to NASA for that one) but displayed at different scales. What is the "magnification" of that image? Now do an "experiment" - stand really close to the monitor and observe these two images, then walk away 3-4 meters, look again, and compare the "magnifications" of those images once more (they will look less magnified from 3-4 meters away although we did nothing to them). The above was written just to show that "magnification" is a meaningless term in imaging - it belongs to visual use and should not be used when imaging. The proper term for imaging is pixel scale, or sampling resolution, and in astrophotography it is expressed in arc seconds per pixel ("/px for short). Ok, now that we know what we are working with - let's see what the proper answer would be to the question "how one might change magnification". It is really about two things - changing pixel scale and changing FOV. First you need to understand that there is something called the native sampling rate for a camera and telescope. It depends on the telescope focal length and the size of the pixels on the camera chip. Native FOV depends again on telescope focal length and the size of the sensor. Since you can't physically change the number of pixels a sensor has - native sampling rate and native FOV are related in the same way that pixel size, sensor size and number of pixels are related (sensor size = number of pixels x pixel size). The native sampling rate is your "baseline" - the basic "magnification" that we can modify via different methods to obtain other "magnifications". It is calculated as 206.3 * pixel_size / focal_length (pixel size in micrometers, focal length in millimeters, result in "/px). I would like to mention one more important thing - the critical sampling rate. Due to the physics of light there is only so much detail that a telescope of a given aperture will show, and if you are sampling too finely (too high a sampling rate) - you will just be "wasting pixels", simply because there is no finer detail to be recorded. In reality oversampling has both benefits and drawbacks, but that is another discussion. Once you match your sampling rate to the level of detail that the aperture can provide under ideal circumstances (it is not guaranteed that you will actually record that level of detail - it depends on the atmosphere and the quality of the optics) - we say you are at the critical sampling rate. There is simply no benefit in detail capture in going to "higher magnification" - or rather a higher sampling rate.
Here is a guide formula for critical sampling - you want your focal length to be equal to or less than pixel_size * aperture * 3.857281 (this last number is 510nm, 2.4 and 1.22 combined into a single constant to make things easy). For example - if your camera has 3.75um pixels and you have 150mm of aperture (not sure if your mak is 150mm, but let's say it's that one) - the max focal length is 2170mm - that is ~F/14.5. In fact you will find that the F/ratio for critical sampling depends only on the pixel size of the camera (see the calculator sketch after this post). Now, how to change the sampling resolution - to finally answer your question on magnification: - use a barlow lens. You can change the magnification of a barlow lens by changing the distance from the barlow element to the sensor - the more distance you add, the larger the magnification you get (finer sampling rate). - use binning - this process joins a few adjacent pixels into a "larger pixel". This method will not change FOV but will change the sampling rate. - use mosaics - shooting multiple panels and stitching them together. This technique is useful for a larger FOV - something you will want for the Moon, for example. Most planetary imagers opt to sample at the critical sampling rate and make mosaics for lunar and solar - the only two planetary targets that are not tiny (please make sure you have proper filters before trying solar imaging!).
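Here is a quick calculator for the two formulas in this post; the 3.75um pixel and 150mm aperture are the example values used above:

```python
# Sampling rate and critical focal length, using the constants from the post.
def sampling_rate_arcsec_per_px(pixel_um, focal_length_mm):
    return 206.3 * pixel_um / focal_length_mm

def critical_focal_length_mm(pixel_um, aperture_mm):
    return pixel_um * aperture_mm * 3.857281

pixel, aperture = 3.75, 150
fl_max = critical_focal_length_mm(pixel, aperture)
print(f"max focal length: {fl_max:.0f} mm (F/{fl_max / aperture:.1f})")  # 2170 mm, F/14.5
print(f'sampling at that FL: {sampling_rate_arcsec_per_px(pixel, fl_max):.3f} "/px')
```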
  24. There are a couple of issues here. One might be a light leak. It looks like the dark is suffering from it, and it looks like the flat dark is too (there is a negative imprint of that pattern on the master flat as well). However, that is not the main issue with the darks - the master dark is wrongly created. If you followed some tutorial, there is a good chance that you missed a step or confused steps between the flat and dark parts. The range of values in the master dark is 0.1 - 0.2 ADU and that simply cannot be right for a proper master - it looks like it has been scaled as when creating a master flat. The same seems to be the case with the flat darks - such a small bright patch could not make an imprint on flats that are exposed properly, yet your master flat shows it clearly. The master flat is also scaled, and again - it's not scaled properly - it is in the range 0.28 - 0.64 and it should be scaled so that the max is around the 1.0 value (in principle it does not matter what range the flat is in, but if you are going to scale it, one would expect the scaling to be done to unity range - the brightest part at 1.0, or 100% light; see the sketch below). My recommendation would be to first redo everything that you already have, to make sure you did not mess up the processing workflow. Start with the master dark - create it and post it here together with a single dark sub for inspection.
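For reference, this is what unity scaling of a master flat amounts to; a minimal sketch, assuming `flat_subs` is a (count, height, width) stack of dark-calibrated flat frames:

```python
import numpy as np

def make_master_flat(flat_subs):
    """Average the flat subs, then scale so the brightest part
    sits at 1.0 (100% light) - the unity range described above."""
    master = np.mean(flat_subs.astype(np.float64), axis=0)
    return master / master.max()
```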
  25. Ah ok, yes, what you see is indeed a very severe aberration, but only because you are using it wrong. Well, not wrong per se, but in this case you don't want to use it like that. You want to lose most of the bits between the focuser and the camera. In fact you don't want to have anything between the focuser and the camera, except maybe an adapter to attach the camera to the focuser. If I'm not mistaken, your focuser should have a T2 thread on it (male T2 thread) and the camera should have a female T2 thread - so simply screwing the camera onto the focuser tube will be enough. The next option is to use the 1.25" camera nose piece shown in one of the images, by screwing it into the camera and then putting that in the focuser in place of a 1.25" accessory. This option is a bit better than the direct connection above because it lets you rotate the camera to align its FOV the way you want. Bottom line - you want your camera at prime focus - either straight or with a barlow (depending on scope and application) and not in eyepiece projection mode. In fact, you might want to try the eyepiece projection thing at some point, but do it via the afocal method (using both the eyepiece and the small lens that came with your camera) - that way you might try EEVA on the Mak, for example - but that is another story.