Everything posted by vlaiv

  1. Looks like it - but the M16 image looks like the genuine FOV without cropping, at least when compared against the Astronomy Tools FOV calculator. Probably cropped for effect / framing rather than to remove ill-shaped stars.
  2. I would actually recommend binning in software, since with a CMOS sensor there is no difference compared to hardware binning, and it gives you more control over the process.
  3. An x0.63 reducer and bin x2 could still be doable? That would sample at 1.26"/px - still oversampled in my opinion, but not that far from 1.5"/px.
  4. Indeed. The two sample images on the TS website show rather nice stars on an ASI183-sized sensor.
  5. Btw, I have the ASI178mm-cool version and I've also done some imaging with that and a TS 80mm F/6 reduced by x0.79. I'm in a red zone on the border of white (heavy LP at SQM 18.5), but here are some examples: Note that this is the color version of the camera, and cooled as well. A good thing about this camera is that it doubles as a rather nice planetary imaging camera (cooling has no significance there).
  6. If you take the same scope, put both cameras on and compare them - the ASI224 will win / be more sensitive - but that is just pixel size "talking". You can easily turn the ASI178 into the winning camera even with the same setup - just bin the data x2. Then you have 4.8µm (2 x 2.4µm) vs 3.75µm, and that is an easy win. If you have a 5" F/10 scope - apart from the small FOV, you can still get a good working resolution with the ASI178. Natively, 2.4µm pixels at 1250mm of FL give 0.4"/px. That is far too high a sampling rate - you want to be around 1.5"/px or coarser at 5". You can bin your data x4 in software, but that will make a rather small image at 774 x 520. The alternative would be to get one of these: https://www.teleskop-express.de/shop/product_info.php/info/p11425_Starizona-Night-Owl-2--0-4x-Focal-Reducer---Corrector-for-SC-Telescopes.html but that would double the cost. With one of those you would be at 0.99"/px, and with a simple bin x2 you would get 2"/px - a perfect no-fuss working resolution and decent sized images at 1500 x 1000px (the arithmetic is sketched below).
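A quick way to check these scale figures is the standard plate-scale formula (the 206.265 constant converts µm of pixel size and mm of focal length into arcseconds). A minimal sketch with the 5" F/10 / ASI178 numbers from the post plugged in:

```python
def sampling_rate(pixel_um, focal_mm, binning=1, reducer=1.0):
    """Image scale in arcsec/px: 206.265 * effective pixel size (um) / effective FL (mm)."""
    return 206.265 * pixel_um * binning / (focal_mm * reducer)

# ASI178 (2.4 um pixels) on a 5" F/10 scope (1250 mm):
print(sampling_rate(2.4, 1250))                          # ~0.40 "/px - oversampled
print(sampling_rate(2.4, 1250, binning=4))               # ~1.58 "/px - but a small image
print(sampling_rate(2.4, 1250, reducer=0.4))             # ~0.99 "/px with the x0.4 reducer
print(sampling_rate(2.4, 1250, binning=2, reducer=0.4))  # ~1.98 "/px - the no-fuss 2"/px
```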
  7. What sort of nebulosity? Star clusters are generally not much affected by light pollution. Nebulosity comes in two forms - emission type and reflection type. For emission type nebulae (planetary nebulae, Ha regions, SN remnants, etc.) you can get very nice results with a UHC type filter. These filters disrupt color balance and make it impossible to get proper star color - but you can either accept that as a fact of life, or shoot separate unfiltered exposures to get the color of the bright stars (special processing needed that applies a star mask or makes a starless version and transfers color to the stars only). For reflection type nebulae - you are out of luck. These are the most affected by heavy LP (similar to galaxies), and only a general purpose LPS filter can help a bit. Look at the Hutech IDAS P2/D1 type of filters, for example.
  8. Why do you think this to be the case? The QE of the 178 is higher than that of the 224 (81% vs 75-80% according to the ZWO website). The ASI178 has higher read noise, so you need to offset that with longer exposure time, but how long will depend on your sky conditions and whether you are using the cooled or non-cooled version (thermal noise can also serve to swamp read noise in non-cooled models) - see the sketch below. Another factor to consider is of course pixel size - you need to match the FL of the telescope to get a good sampling rate. If you match sampling rates and offset the higher read noise, then the 178 is more sensitive than the 224.
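A minimal sketch of the "swamp the read noise" idea implied above: expose long enough that the background shot noise is some factor larger than the read noise. The read-noise and sky-flux figures here are illustrative assumptions, not measured values for either camera:

```python
def min_exposure_s(read_noise_e, sky_e_per_px_s, swamp_factor=5):
    """Exposure long enough that sky shot noise is `swamp_factor` x read noise.

    Shot noise = sqrt(signal), so we need sky * t >= (swamp_factor * read_noise)^2.
    """
    return (swamp_factor * read_noise_e) ** 2 / sky_e_per_px_s

# Hypothetical figures: 2.2e vs 1.5e read noise, sky delivering 1 e/px/s:
print(min_exposure_s(2.2, 1.0))  # ~121 s needed for the noisier camera
print(min_exposure_s(1.5, 1.0))  # ~56 s for the quieter one
```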
  9. As far as choosing imaging resolution goes, here is the quick breakdown:
     1. Seeing FWHM somewhat relates to imaging resolution, but it is not a straight 1-1 correspondence.
     2. Aperture size and your guiding / tracking play a part too.
     3. Seeing + aperture + guiding give some final FWHM of the stars in your image. This is usually larger than seeing alone would produce. Having stars of 2" FWHM or less in your image is rather difficult to achieve, although seeing of 2" FWHM is common.
     4. There is a simple relationship between sampling rate and FWHM: star FWHM / 1.6 = sampling rate.
     In my view, 1.2"/px is oversampling on a 5" aperture in regular seeing and tracking conditions. Say you guide at 0.8" RMS and your seeing is 2" FWHM - what would be a good sampling rate? You can expect star FWHM of about 2.88", and a corresponding sampling rate of around 1.8"/px. Examine some of the subs you've taken with the 130PDS to get an idea of what FWHM you can expect. By the way - the above is for perfect optics. For example, a Newtonian with a simple 2-element coma corrector will bloat stars somewhat due to introduced spherical aberration, and you can easily have stars of 3" or 3.5" FWHM in your subs because of that. The 250PDS has a larger aperture, and that means a bit tighter stars. In the same conditions as above - 2" FWHM seeing and 0.8" RMS guiding - you'll get 2.78" star FWHM instead of 2.88", so there is a bit of improvement. In fact, the better the seeing and your guiding, the more improvement you'll see over a smaller aperture. However, even a scope that large requires good seeing and good guiding to hit 1.2"/px: in 1.5" FWHM seeing with 0.5" RMS guiding, an ideal 10" scope will produce stars of 1.96" FWHM, and the corresponding sampling rate will be 1.22"/px (a sketch of this arithmetic follows below). In any case, not all is lost - you can bin your subs in software after calibration to get to a good sampling rate. Bottom line: increased focal length may give you larger objects, but not more resolution. You'll get the same effect if you simply enlarge your images at the end - a larger object without additional detail. To really get detail you need sharp, largish optics (say 8"-10"), an exceptional mount that tracks/guides smoothly (RMS in the 0.2-0.3" range) and a night of very good seeing (1.2-1.5" FWHM range).
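One way to reproduce those FWHM figures is to add the three blur sources in quadrature: seeing FWHM, guiding (FWHM ≈ 2.355 × RMS) and the aperture's Airy disk. The quadrature model and the 550nm wavelength are my assumptions; they happen to match the numbers quoted above:

```python
import math

def star_fwhm(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=550):
    """Expected star FWHM (arcsec): quadrature sum of seeing, guiding and Airy disk."""
    airy = math.degrees(1.025 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)) * 3600
    return math.sqrt(seeing_fwhm**2 + (2.355 * guide_rms)**2 + airy**2)

for ap, seeing, rms in [(130, 2.0, 0.8), (250, 2.0, 0.8), (250, 1.5, 0.5)]:
    fwhm = star_fwhm(seeing, rms, ap)
    print(f'{ap} mm: star FWHM {fwhm:.2f}", sample at {fwhm / 1.6:.2f} "/px')
# -> 2.89" / 1.81, 2.79" / 1.74, 1.96" / 1.23
```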
  10. Dark current doubles roughly every 6°C. Higher dark current means higher dark current noise. When we image, we strive to get the best SNR, or signal to noise ratio. That includes minimizing noise. Heating up the camera will increase noise (however small the increase may be). Why would you want to put effort towards making your image worse than it can be?
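The doubling rule as a one-liner. The +15°C reference point is an assumption, chosen so the output lines up with the dark current figures quoted in the next post:

```python
def dark_current(dc_ref, temp_c, ref_temp_c=15.0, doubling_c=6.0):
    """Dark current (e/px/s) roughly doubles every `doubling_c` degrees C."""
    return dc_ref * 2 ** ((temp_c - ref_temp_c) / doubling_c)

# assuming ~0.2 e/px/s at +15 C:
print(dark_current(0.2, -15))  # ~0.006 e/px/s at -15 C
```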
  11. Just a small correction - dark current is perhaps <0.2 e/px/s, and it is not noise as such. Why would you consider heating the camera? The important thing is that you have a set-point cooled camera and can keep it at a steady temperature. That enables you to do proper calibration with darks. Say you have 0.2 e/px/s of dark current and you are doing 300s exposures. That is 60e of dark current and ~7.75e of associated dark current noise per exposure. That is about x4 the read noise - so you are approaching the limit of exposure length (going longer with a single exposure will not bring further benefit, since dark current noise is high enough even if LP noise is not). Anyway, why not use the camera at -15C if you can? In that case dark current is very low at ~0.0065 e/px/s, and over a 300s exposure it totals 1.95e of dark current, or ~1.4e of dark current noise. That is still lower than the read noise (the arithmetic is sketched below).
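Dark-current noise is just the shot noise of the accumulated dark signal, sqrt(dark current × exposure). Reproducing the figures above:

```python
import math

for dc in (0.2, 0.0065):             # e/px/s: warm case vs. -15 C case
    signal = dc * 300                # 300 s exposure
    print(f"{signal:.2f} e dark signal, {math.sqrt(signal):.2f} e dark noise")
# -> 60.00 e / 7.75 e   and   1.95 e / 1.40 e
```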
  12. No, I did not spend anything on collimation aids. The only thing I used is gear I already have and use with this scope anyway - an astro camera. You can do it with a DSLR as well. Here is the guide I followed: https://deepspaceplace.com/gso8rccollimate.php I did not even use a Bahtinov mask - I measured FWHM in SharpCap instead.
  13. https://www.firstlightoptics.com/counterweights/baader-dovetail-bar-levelling-counterweight.html or https://www.firstlightoptics.com/adm-counterweight-kits/adm-dovetail-counterweight-kit-v-series.html I use the Baader version for my RC8".
  14. To be honest - Fiji (Fiji Is Just ImageJ - a recursive acronym). AstroImageJ is quite modified and geared towards measurement of astronomical data - photometry, astrometry and such. ImageJ is just the basic package with basic functions, while Fiji is a distribution loaded with various useful plugins.
  15. It is open source and free. It's used for scientific image analysis (mostly microscopy), but it has all the right tools. It's not hard to learn, but most of the operations are fairly generic - you need to understand image processing quite well to accomplish what would otherwise be a simple "filter" in Photoshop or similar.
  16. Quite right. If the gradient is not strong, then all you need to do is move the black point to the proper place. Unfortunately, a gradient is almost always present, so it is better to use some sort of gradient removal tool, which does basically the same thing but eliminates the gradient as well (a different black point in different parts of the image).
  17. Sure. I loaded the TIFF in Gimp 2.10 and split the channels - red, green and blue - into separate monochromatic images, which I saved as FITS files. Then I loaded each of them in ImageJ, binned them 3x3 to improve the SNR, and did synthetic flat fielding: I inspect the image and find the max pixel value that saturates targets but does not saturate the background. Then I run a macro that removes all pixels with a higher value from the image (this more or less leaves only background). I then run another command that fills in the missing pixels by averaging the existing pixels around them (a sort of mean filter that only looks at actual pixels, not missing values). Finally, I apply a low pass filter via Fourier transform to remove fine detail / noise and leave a "smooth" background, and I divide each channel by its respective background. Then I run a gradient removal plugin that I wrote on each channel and save them all (a rough sketch of the synthetic flat step is below). I then loaded the channels back into Gimp, did a channel combine to get the RGB image again, and did a basic 3-point levels stretch (move the top slider until nebulosity is about to saturate, move the middle slider to properly expose everything, and move the bottom slider to remove the background offset). That is it. Most of the work is done in ImageJ - but once the flat field is applied and the gradient removed, the data is really nice to work with.
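A rough Python sketch of that synthetic-flat step - not the actual ImageJ macro, and the function name, threshold handling and `keep_fraction` parameter are illustrative. The hole filling here uses a crude global mean, where the ImageJ command averages the surrounding pixels instead:

```python
import numpy as np

def synthetic_flat(channel, threshold, keep_fraction=0.02):
    """Estimate a smooth background: mask bright pixels, fill holes, Fourier low-pass."""
    bg = np.where(channel < threshold, channel, np.nan)   # drop target pixels
    bg = np.where(np.isnan(bg), np.nanmean(bg), bg)       # crude hole filling
    # Fourier low-pass: keep only the lowest spatial frequencies,
    # which live in the corners of an unshifted FFT
    f = np.fft.fft2(bg)
    h, w = bg.shape
    kh, kw = max(1, int(h * keep_fraction)), max(1, int(w * keep_fraction))
    mask = np.zeros((h, w))
    mask[:kh, :kw] = mask[:kh, -kw:] = mask[-kh:, :kw] = mask[-kh:, -kw:] = 1
    return np.real(np.fft.ifft2(f * mask))

# usage: flattened = channel / synthetic_flat(channel, threshold)
```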
  18. Astro imaging is about the signal to noise ratio in the image. You are right that LP adds more signal to the image - but that signal is mostly uniform and can easily be removed. What can't be removed is the noise associated with that signal - that is the problem. The noise associated with any signal (shot noise) is equal to the square root of the signal itself. This means it "grows" slower than the signal: if the signal is amplified x100, the noise is only amplified x10 (the square root of 100). By adding more exposures in light pollution you will increase the target signal (good), the target shot noise (bad), the LP signal (not really an issue, since you remove it in processing, either with levels or some sort of gradient removal tool) and the LP shot noise (bad). Both noise terms accumulate slower than the target signal - so the more time you spend on target, the greater the signal to noise ratio you'll achieve (a small worked example follows).
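To make the square-root scaling concrete, here is a toy calculation. The 10e target and 100e LP per sub are made-up numbers, and read noise is ignored:

```python
import math

target, lp = 10.0, 100.0                   # e per sub, hypothetical
for n in (1, 4, 16):
    signal = n * target
    noise = math.sqrt(n * (target + lp))   # shot noise of everything collected
    print(f"{n:2d} subs: SNR = {signal / noise:.2f}")
# -> 0.95, 1.91, 3.81: SNR doubles each time total exposure quadruples
```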
  19. Well, it is a very decent recording. I had to jump through a lot of hoops to get this level of detail, but I think it was worth it. Flat calibration was not performed - there is a lot of vignetting and also some LP gradient in the image. Quite challenging processing, to be honest. Here is the result: I think you should be pleased with this - you captured the Great Nebula in Orion, the Running Man, the Horsehead and the Flame in one frame!
  20. It is normal to have high background levels if you shoot in light pollution with a faster lens. This is always dealt with in post-processing. 54% is too aggressive a setting for the kappa-sigma clip. Set it to reject the top 5% or so of samples - a sigma of 2 will give you 95% of samples accepted, and a sigma of 3 will give you 99.7% accepted (a minimal version of the algorithm is sketched below).
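For reference, a minimal pixel-wise kappa-sigma rejection over a stack of subs - a sketch of the general technique, not the implementation in any particular stacking program. `stack` is assumed to have shape (n_subs, height, width):

```python
import numpy as np

def kappa_sigma_mean(stack, kappa=2.0, iterations=3):
    """Mean-stack with outliers beyond kappa * sigma of each pixel rejected."""
    data = stack.astype(float)
    mask = np.ones_like(data, dtype=bool)
    for _ in range(iterations):
        kept = np.where(mask, data, np.nan)
        mu = np.nanmean(kept, axis=0)       # per-pixel mean of surviving samples
        sigma = np.nanstd(kept, axis=0)     # per-pixel spread
        mask = np.abs(data - mu) <= kappa * sigma
    return np.nanmean(np.where(mask, data, np.nan), axis=0)
```

With kappa=2 roughly 95% of normally distributed samples survive, with kappa=3 roughly 99.7%, matching the figures above.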
  21. If you have a lens in front of your DSLR, then you should not think in terms of linear units - you should think in terms of angular units.
      violet light: sin(theta) = 0.2375 => theta = arcsin(0.2375) = ~13.739 degrees
      red light: sin(theta) = 0.4375 => theta = arcsin(0.4375) = ~25.944 degrees
      So the spectrum is spread over about 12.2 degrees. Now, if you want to capture that spectrum together with the zero order, you need to fit 0°-26° on the sensor horizontally - so your DSLR (I'm assuming an APS-C sized sensor here) should use about a 46mm focal length. If you want to fit only the spectrum on the sensor, you need to capture 12.2°, and you can use a 100mm lens. With an 18mm lens you'll get 63.3° horizontally, which means the zero+first order will span about 1/3 of the sensor, while the spectrum itself will span about 1/5 of the sensor. In any case, if you have an 18-55mm kit lens, you will be able to find a suitable focal length that captures both the zero order and the spectrum nicely, or only the spectrum - at up to half of the sensor size (the numbers are reproduced in the sketch below).
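Reproducing those numbers - the sin(theta) values come straight from the post, while the 22.3mm sensor width is my assumption (Canon APS-C), since it reproduces the 63.3° figure for an 18mm lens:

```python
import math

violet = math.degrees(math.asin(0.2375))   # ~13.74 deg
red = math.degrees(math.asin(0.4375))      # ~25.94 deg
print(red - violet)                        # ~12.2 deg of spectrum

def horizontal_fov(sensor_mm, focal_mm):
    """Horizontal field of view of a lens, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

print(horizontal_fov(22.3, 46))   # ~27.3 deg - fits zero order plus spectrum
print(horizontal_fov(22.3, 100))  # ~12.7 deg - spectrum only
print(horizontal_fov(22.3, 18))   # ~63.6 deg - matches the 18 mm figure
```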
  22. For flats you really need to keep the infinity focus you used when shooting lights. The best way to take flats is at the end of the session, before you strip everything down - just take them without changing anything in the setup after you finish collecting data (hopefully the mini EFW is reliably repeatable). It might be worth investing in a proper flat panel? DIY solutions are not that expensive - you need an LED strip and a piece of matte acrylic to act as a diffuser. Alternatively, you can purchase a ready made unit. I'm using one of these: It is no longer made in that shape - they have a new model now: https://www.teleskop-express.de/shop/product_info.php/info/p8241_Lacerta-LED-Flatfield-Box-with-240-mm-usable-Diameter.html (however, it is now much more expensive - in reality it is something like $10 worth of parts put together, and that is why DIY is so much cheaper). If you search Google for "DIY flat field light box" you'll get a lot of interesting suggestions - like this one: http://www.astrosurf.com/comolli/flatfield2.htm
  23. The most basic way of doing this would be: in daylight, take a piece of white paper and place it some distance from the telescope so you can aim at it and reach reasonable focus (it does not need to be 100% accurate focus - just enough so you know it is the paper you are aiming at). Pick a sunny day, but don't let the paper sit in direct sunlight - keep it in shade. Shoot R, G and B images of this paper with the same settings - exposure time needs to be the same. Make sure you don't clip any of the images (as when shooting flats). Measure the average ADU in each image and record the following values: mean_G_adu_value / mean_B_adu_value - we will call that B_scale - and mean_G_adu_value / mean_R_adu_value - we will call that R_scale. Now that you have B_scale and R_scale, each time you shoot a target, after you stack your data but before you start processing, multiply the R channel by R_scale and the B channel by B_scale to get a color balanced image (sketched below). Another approach would be to use a color checker chart - shoot that instead and derive a color transform matrix - but that is a bit more involved.
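A minimal sketch of that white-paper balance. The function names and ADU figures are illustrative, not from any particular package:

```python
def scale_factors(mean_r, mean_g, mean_b):
    """R_scale and B_scale from the mean ADU of the white-paper shots."""
    return mean_g / mean_r, mean_g / mean_b

def apply_balance(r, g, b, r_scale, b_scale):
    """Apply the factors to stacked channels, before any stretching."""
    return r * r_scale, g, b * b_scale

# e.g. paper measured at R=1200, G=1500, B=1000 ADU (made-up numbers):
r_scale, b_scale = scale_factors(1200.0, 1500.0, 1000.0)
print(r_scale, b_scale)  # 1.25, 1.5
```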
  24. Indeed - that should be x1.5 per mm of aperture, or x150 per 100mm of aperture. What's a few orders of magnitude between friends?