Everything posted by vlaiv

  1. I would also like something like that, however, right now I can't find any articles on the topic that are not highly technical in nature. What I've written above I read a couple of years ago when I researched planetary imaging; I can't remember where. It is mentioned all over the place - for example here: https://ui.adsabs.harvard.edu/abs/2016ASSL..439....1B/abstract Here are some interesting starting points for research: https://en.wikipedia.org/wiki/Greenwood_frequency (defines the minimum time in which the atmosphere is "frozen" - important in adaptive optics systems) https://en.wikipedia.org/wiki/Fried_parameter (defines the "size" of turbulent cells) This wiki article: https://en.wikipedia.org/wiki/Astronomical_seeing contains a relevant quote. In the article I remember reading, there was a relationship between t0 and r0 that depended on wind speed at altitude. It calculated the time it takes for an average seeing cell, travelling at usual wind speeds, to cross a particular aperture - and then related that to t0 - the time over which the seeing distortion is relatively constant.
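The relationship I recall can be sketched numerically. A standard adaptive-optics approximation (the Greenwood time constant) relates t0 to the Fried parameter and effective wind speed aloft as t0 ≈ 0.314 · r0 / v; the exact constant and wind-weighting vary between sources, so treat this as illustrative only:

```python
# Rough sketch of the t0 / r0 relationship, assuming the common
# approximation tau0 ~ 0.314 * r0 / v from adaptive optics literature,
# where r0 is the Fried parameter and v the effective wind speed aloft.
def coherence_time_ms(r0_cm, wind_speed_ms):
    """Atmospheric coherence time in milliseconds."""
    r0_m = r0_cm / 100.0
    return 0.314 * r0_m / wind_speed_ms * 1000.0

# Example: decent seeing (r0 = 10 cm) with 10 m/s wind at altitude
t0 = coherence_time_ms(10, 10)  # ~3.1 ms - consistent with "a few ms" exposures
```

Note how better seeing (larger r0) or calmer high-altitude wind both lengthen the usable exposure - matching the intuition of a seeing cell drifting across the aperture.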
  2. I would say that 14ms is too high for most seeing conditions with aperture that large. One of the things we try to do in the lucky imaging approach is to freeze the seeing. In order to do that we have to set our exposure at or below the coherence time for the given seeing. If we don't, changing seeing effects cause "motion" type blur on top of the seeing distortion. In most cases, 8" of aperture has a coherence time of about 5-6ms. I'm guessing that a larger aperture will have a shorter time than that (but again - that depends on the seeing on a particular night). ASI178 has 2.4µm pixel size and that translates into F/7.4 for the red part of the spectrum (at 650nm) for critical sampling. What F/ratio are you using with your scope? Going for a higher F/ratio than F/7.4 does not yield any additional detail. Also - are you using a coma corrector? A fast Newtonian is going to have a very small diffraction limited field; depending on the barlow used - you might not be able to fully exploit the 9mm diagonal of the ASI178. In that case - it is much better to use ROI and limit recording to the diffraction limited part of the FOV.
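The F/7.4 figure comes from matching pixel pitch to the diffraction cutoff - two pixels per cycle at the λ·F cutoff period, i.e. F = 2 · pixel / λ. A quick sketch of that calculation:

```python
def critical_fratio(pixel_um, wavelength_nm):
    """F/ratio at which a given pixel pitch critically samples the
    diffraction limit (Nyquist: 2 pixels per lambda*F cutoff period)."""
    return 2.0 * pixel_um / (wavelength_nm / 1000.0)

f_asi178 = critical_fratio(2.4, 650)   # ~7.4 for the ASI178 in red light
f_asi224 = critical_fratio(3.75, 500)  # ~15 for 3.75 um pixels in green
```

The same formula reproduces the "around F/15 for 3.75µm pixels" rule quoted in a later post.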
  3. Why so low FPS? What was your exposure length?
  4. While SNR per exposure is of course very important - in lucky imaging it is more important to freeze the seeing. If we use a longer exposure than the seeing allows - we just add motion blur on top of the distortion created by the atmosphere and blur increases. Lunar is particularly good for this approach as one has a rather long time window to shoot the movie. When doing planetary AP - one must be aware that planets rotate and there is a limited amount of time in which video can be taken without the need for special processing like "derotation". For Jupiter it is something like 4-5 minutes or less with larger scopes (it also depends on the scale of the image - sampling at higher rates means smaller detail and shorter time needed for rotation blur to start showing). With lunar - we can take a very large number of frames. This restores lost SNR as stacking improves SNR. For this reason, per frame SNR does not need to be very high and long exposure is not needed. There should be just enough SNR so that the stacking software can stack properly and identify features instead of mistaking them for noise. What is more important - is low read noise. The only difference, as far as SNR is concerned, between a thousand one-millisecond subs stacked and a single one second sub is read noise. If we had a camera with 0 read noise - there would be no difference in SNR between the two - and we could do very short subs indeed. An interesting thing with CMOS sensors is that read noise goes down as gain goes up (that is because read noise comes in "two chunks" - one pre gain and one post gain. The one that comes post gain gets smaller in relation to the signal once the signal is boosted by gain): This is actually good for you - using higher gain will not make things noisier - it will actually make them less noisy after stacking. If you for example use 20ms exposure and stack 200 frames - that is a total of 4s. When you lower your exposure to say 5ms - just keep in mind that you need to stack 800 frames to have the same total integration time.
Camera FPS actually makes this possible. If you use 20ms - the max FPS you'll achieve will be 50fps, but if you switch to 5ms exposure - you'll be able to do 200fps (and that is why FPS is important - so you don't lose frames when doing short exposures). In any case - using say the 10% best frames - you will end up with the same total integration time regardless of exposure length in most cases - if you can maintain the given FPS. If not - then good thing you are doing lunar - just shoot for longer
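The read-noise argument can be checked with a toy model. The sketch below compares 200 × 20ms subs against 800 × 5ms subs at the same 4s total integration, assuming a shot-noise-limited signal plus per-sub read noise (sky background and dark current are ignored, and the signal rate is made up for illustration):

```python
import math

def stacked_snr(signal_e_per_ms, total_ms, sub_ms, read_noise_e):
    """SNR of a stack of subs: signal adds linearly across subs,
    noise adds in quadrature (shot noise + per-sub read noise)."""
    n_subs = total_ms / sub_ms
    sig = signal_e_per_ms * sub_ms               # electrons per sub
    noise = math.sqrt(sig + read_noise_e ** 2)   # per-sub noise
    return (n_subs * sig) / (math.sqrt(n_subs) * noise)

# 4 s total integration at an assumed 5 e-/ms signal rate
long_subs  = stacked_snr(5.0, 4000, 20, 2.0)   # 200 x 20 ms, 2 e- read noise
short_subs = stacked_snr(5.0, 4000, 5, 2.0)    # 800 x 5 ms, 2 e- read noise
ideal      = stacked_snr(5.0, 4000, 5, 0.0)    # zero read noise
```

With zero read noise the split into subs makes no difference at all; with modest read noise the shorter subs lose only a few percent of SNR - which is exactly why low read noise (and higher gain on CMOS) makes very short lucky-imaging subs viable.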
  5. With imaging - there is no such thing as magnification. There is something else - called pixel scale. In fact, since the aperture of the telescope is the limiting factor - for planetary imaging there is a suitable F/ratio that depends on pixel size. Around F/15 is a good focal ratio for 3.75µm pixel size - like the ASI224 has. You did go a bit higher with F/20 - but not by much. For a beginning that is quite OK. Barlow magnification changes when you change the distance from the barlow element to the sensor. If you can, try to use just the barlow element (if it is the 1.25" thread type) with your own extensions to dial in F/15 (barlow at x1.5 amplification). Besides that - there are a couple of things that you can do to improve overall image quality: 1. use higher gain settings 2. shoot in 16bit mode (8bit mode requires somewhat careful handling) 3. shoot calibration frames - darks and flats if you can. Darks are quite easy - just cover the scope and shoot with the same settings (same exposure length). For flats - you ideally need a flat panel or something like that, but many people have success with sky flats as well (T-shirt over the scope in twilight - daylight but no sun, just after sunset or before sunrise) 4. Use very short exposure time. One of the important things with lucky imaging is to shoot very short exposures to freeze the seeing. Don't look at the histogram at all. The max that you should go for is 5-6ms, but the Moon often allows for 3-4ms exposures. Just be careful not to saturate (this usually happens at low pixel scales - like when not using a barlow) 5. Wait until the Moon is highest in the sky. Sometimes it is good to shoot the Moon while it is still daylight. Seeing can be excellent just after sunset, but make sure that the Moon is high in the sky.
Your image looks like: - you did not use a UV/IR cut filter with your ASI224 - you shot while the Moon was relatively low down - maybe you used a longer exposure than is good for lucky imaging. There is also a pattern visible after wavelet sharpening - what bayer order did you give to AutoStakkert? It should be RGGB for the ASI224 if I'm not mistaken.
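The spacing behaviour of a barlow mentioned above follows from the thin-lens relation M = 1 + d/|f|, where d is the element-to-sensor distance and f is the (negative) focal length of the barlow element. A sketch with a hypothetical f = -100mm element (real barlow elements vary, so the numbers are illustrative):

```python
def barlow_magnification(element_focal_mm, element_to_sensor_mm):
    """Thin-lens Barlow amplification: M = 1 + d / |f|,
    where f is the (negative) focal length of the Barlow element."""
    return 1.0 + element_to_sensor_mm / abs(element_focal_mm)

# hypothetical -100 mm Barlow element:
m_short = barlow_magnification(-100, 50)   # 50 mm of spacing -> x1.5
m_long = barlow_magnification(-100, 100)   # 100 mm of spacing -> x2
```

This is why adding extensions between the barlow element and the sensor increases amplification - and why using just the threaded element with your own spacing lets you dial in a target F/ratio.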
  6. Hi and welcome to SGL. Your telescope is best suited for planetary and lunar imaging. It is not well suited for deep sky imaging - because it is on an Alt-Az mount and has a very long focal length. For deep sky imaging you ideally want an equatorial type mount, and although you can do limited imaging on an Alt-Az mount (look up the "no eq challenge" thread under the imaging section here on SGL) - you need a "faster" scope and a special technique (which benefits from a dedicated astronomy camera with low read noise) in order to image DSOs that way. There is a growing "branch" of amateur astronomy that might be interesting to you and that you can practice with your setup (and some small additions) - EEVA or Electronically Enhanced Visual Astronomy - which is the use of camera and telescope to "observe" an object on your computer in near real time. This is interesting as it also results in images; it is very similar to the imaging technique for Alt-Az mounts - but the emphasis is on capturing the object rather than making a pretty image (which would be hard with such equipment). Maybe something to consider?
  7. Determining the needed time for a target is way too complex a topic to explain in a couple of sentences. It depends on target brightness, shooting conditions (sky transparency, light pollution levels, position of the target in the sky, ...), gear used (sampling resolution, telescope aperture and light throughput, camera quantum efficiency, filters used, ...) to name a few. Even if you know all these parameters and know how to model them all (and yes, you can do that in say a spreadsheet - there have even been online calculators) - you don't know target brightness until you record it, or until someone else does and publishes the data (even then - there is no single value for a target - spiral arms are much fainter than the galaxy core for example - you need to look at the brightness of the faintest area you want to record). The best thing to do is to get a feel for your location and gear. Start with objects people shoot most often - as these are easiest to shoot. Shoot those that are in "season" - or high up in the sky at the time you shoot in the night (I usually aim for the target to cross the meridian at midnight or 1am). Look up the surface brightness of the target in Stellarium - and based on that figure out how much time you'll need. Say you are happy with 4h of exposure on a target like M81 and want to know what is a good time for some other target? The best indicator is the surface brightness of the target. Is your new target also around mag22? In all likelihood - 4h will be enough for it as well. Is it maybe mag23? In that case - be prepared to spend two nights on it. Start with a single night and then you'll get to see the result - if you are not happy - well, then you need to spend more time on the target. That is in general a very good rule - do spend some time on the target and don't be afraid to go out again and spend some more time on it (if weather and commitments allow of course).
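The mag22 vs mag23 comparison rests on the magnitude scale being logarithmic: each magnitude of surface brightness is a factor of ~2.512 in flux, so a fainter target needs at least proportionally more integration time (and more still if you are sky-noise limited). A one-line sketch:

```python
def flux_ratio(mag_faint, mag_bright):
    """Flux ratio between two surface brightnesses (mag per sq. arcsec):
    each magnitude is a factor of 10^0.4 ~ 2.512."""
    return 10 ** (0.4 * (mag_faint - mag_bright))

r = flux_ratio(23, 22)  # ~2.51x dimmer - budget at least ~2.5x the time
```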
  8. Very fine detail. I personally don't like that level of denoising applied - it creates a sense of "plastic" in the darker regions - as if a plastic model had been recorded rather than the actual Moon. Brighter regions don't suffer from this effect and look very good - I guess the denoising was selective based on signal (and SNR) levels.
  9. I haven't seen them before. One model in particular is interesting - the IMX464 based one. I haven't seen any other vendor use that sensor, which I find interesting. I expected TS cameras to be a rebrand from one of the known CMOS camera manufacturers (QHY, ZWO or Altair - although I suspect that Altair Astro also order their cameras from China and have them branded).
  10. https://www.firstlightoptics.com/adapters/astro-essentials-sky-watcher-9x50-finder-to-t-adapter.html It works with RACI but you need to provide 40mm T2 extension between above adapter and camera (it is pointed out on FLO product page)
  11. Don't drizzle either. It is maybe useful for very undersampled data - odds are, you'll never have such data.
  12. Maybe try camera at prime focus with barlow first before you dismiss prime focus in favor of eyepiece projection.
  13. You are welcome, and yes, by all means - do your own research to confirm what is correct and what is incorrect as has been suggested. Look at actual examples and data and theory backing those. That is always the best approach.
  14. I understand that you want a higher megapixel count for your image, but shooting oversampled at native resolution will give you the same result as binning your data and then enlarging the image when you are done - except the binned version will have better SNR. When you are oversampled - you are recording "empty" resolution - it is the same thing as when you enlarge an image - image size will be there but detail won't. Resolution in astronomical images does not come from the number of pixels - but rather from how good the seeing is, how good the mount tracking (guiding) is and what the size of your aperture is. I just briefly had a look at the math on that site, and I'm sorry to say - it is faulty. I'll just mention a few things that are wrong with it: 1. The Nyquist criterion is always x2 the max frequency component - in 1d and in 2d for square sampling. Stating that it is x3.3 is wrong. You can find numerous sources online that explain why that is. 2. The Nyquist criterion holds for band limited signals and x2 relates to the maximum frequency. Relating it to anything in the spatial domain (like the FWHM of a star or the Rayleigh criterion) is a wrong use of the sampling theorem. For long exposure imaging - there is no well defined cutoff frequency. If we approximate star shape with a Gaussian profile - then we can use some sort of convention and put the cutoff at some sensible frequency. For example, since the Fourier transform of a Gaussian is a Gaussian - we can easily do the math and put the cutoff frequency at the place where all higher frequencies are attenuated by more than 90% (their value is less than 10% of the original). If we do that, we get that the sampling rate is roughly the FWHM of the Gaussian profile divided by 1.6. 3. Seeing is not the only thing that affects the star image.
For a perfect aperture, we can devise a formula that goes like this: sigma_profile = sqrt( sigma_seeing^2 + sigma_guiding^2 + sigma_aperture^2 ) where sigma_seeing is FWHM / 2.355 - or the reported seeing divided by 2.355 (the relationship between Gaussian sigma and FWHM is a factor of x2.355) https://en.wikipedia.org/wiki/Full_width_at_half_maximum sigma_guiding is the guide RMS value that you get, and sigma_aperture is the sigma of the Gaussian approximation to the Airy pattern, given by this expression: https://en.wikipedia.org/wiki/Airy_disk#Approximation_using_a_Gaussian_profile This is of course for a perfect aperture, but most fast astrograph telescopes are not diffraction limited over the whole field. This means that the actual FWHM of stars will be larger than this formula predicts. You can get info on the expected seeing FWHM value / forecast from this website: https://www.meteoblue.com/en/weather/outdoorsports/seeing/indianapolis_united-states-of-america_4259418 (this is for Indianapolis - but do select your place). The fourth column - astronomical seeing - is the value that you want; it is given in arc seconds. Compare that with the FWHM values in arc seconds in your images - and you'll see that the FWHM values in images are higher than the seeing figures. This is because there is a guiding component and an aperture size component that have not been taken into account. As far as guiding goes - don't relate guiding to imaging focal length. The general rule of thumb is that your guide RMS should be at most half of your imaging resolution (and I agree with this rule in the sensible range of imaging resolutions) - but I have another rule - make your RMS as low as you can regardless of your working sampling rate. The above rule is very good for showing you what sort of resolution you should not be using. Say you are guiding at 1" RMS - well, that means that you really should not go much below 2"/px. You can, but you really should not if you want sharp images.
The only thing that I've found important for guiding is to get your guiding resolution fine enough to be able to reliably determine RMS and guide star position. Here the math goes like this - you need your guider's "/px to be no more than x6 your target guide RMS. Say you want to reliably guide at 0.4" RMS (or feel that you can with your mount) - then your guider resolution should be 2.4"/px or finer (finer as in resolution - lower in number, so 2"/px is good but 3"/px is bad). This has to do with centroid precision. Other than that - mount stability, mount mechanical soundness and shielding from wind (i.e. not being undermounted) - is the key to good guiding performance. Hope this helps
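The quadrature formula above is easy to put into code. This sketch uses sigma_aperture ≈ 0.42·λ/D (the Gaussian approximation to the Airy core from the linked wiki page) together with the FWHM/1.6 sampling convention from the earlier point; the 550nm wavelength is my assumption:

```python
import math

def expected_fwhm_arcsec(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=550):
    """Expected star FWHM (arcsec) for a perfect aperture:
    Gaussian sigmas for seeing, guiding and the Airy core, in quadrature."""
    sigma_seeing = seeing_fwhm / 2.355
    sigma_guiding = guide_rms
    lam_over_d = (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)  # radians
    sigma_aperture = 0.42 * lam_over_d * 206265.0               # -> arcsec
    total = math.sqrt(sigma_seeing**2 + sigma_guiding**2 + sigma_aperture**2)
    return total * 2.355

# 1.5" seeing, 0.5" RMS guiding, 8" (200 mm) aperture -> roughly 2" stars
fwhm = expected_fwhm_arcsec(1.5, 0.5, 200)
suggested_sampling = fwhm / 1.6  # "/px, per the FWHM/1.6 convention
```

This also shows why image FWHM always exceeds the forecast seeing figure: the guiding and aperture terms add in quadrature on top of it.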
  15. I'm not sure I understood half of what is bugging you, but I'll try to help. I'm going to make a couple of points, and you can take from them whatever you like. 1. With an 8" astrograph it is very likely that you are oversampling at anything below 1.2"/px if your guiding is in the 0.4-0.6" RMS range - so bin your pixels accordingly. Depending on seeing - even 1.8"/px will be oversampling on some nights. 2. Not sure what math you are using, but 3.56µm at 800mm is 0.92"/px - again most definitely oversampling for 8" of aperture even with premium mounts. You should really bin the ASI183MM x2 to get 4.8µm effective pixel size. 3. Larger sensors are better sensors. Sensor real estate translates into speed when paired with an appropriate scope. You can pair a larger sensor with a longer FL scope, and at the same F/ratio a longer FL scope means larger aperture. If you adjust your sampling rate to be the same using pixel size / binning - you have more aperture at the target resolution and that equals more speed. Don't like that the sensor is not square? You can always crop away what you don't need (it is a waste of sensor area - but if you really like square images ...). 4. Consider lowering your sampling rate further. I know that 1.2"/px sounds great - you are right there - face to face with the galaxies and all - but most nights you won't be able to achieve that. You'll need 1.18" FWHM seeing during the night in order to get 1.92" FWHM stars in your image with 0.6" RMS guiding and 8" aperture - and that is sharp enough for 1.2"/px. In fact - take some of your old subs and measure star FWHM in arc seconds - divide that by 1.6 and that will give you the optimum sampling rate for that image / night. I'm convinced that most of your subs will be properly sampled at 1.5"/px or above
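Point 4's "divide measured FWHM by 1.6" rule is a one-liner worth keeping handy:

```python
def optimal_sampling(measured_fwhm_arcsec):
    """Suggested sampling rate ("/px) from measured star FWHM,
    using the FWHM / 1.6 convention from the previous post."""
    return measured_fwhm_arcsec / 1.6

# 1.92" FWHM stars are just sharp enough for 1.2"/px;
# 2.4" FWHM stars call for ~1.5"/px
rate = optimal_sampling(1.92)
```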
  16. Hi and welcome to SGL. "262 power telescope" is not very informative. The power of a telescope - or the magnification it gives - depends on the eyepiece used. Most telescopes have exchangeable eyepieces and you can get different magnifications with them. Saying that a telescope has x262 power either means it has one fixed eyepiece yielding that particular magnification, or it is a marketing trick claiming that the telescope is capable of x262 magnification. Neither of those is good (that does not mean your telescope is not good). If we assume the best case scenario - that you have a diffraction limited reflector that is really capable of providing x262 magnification - which would mean a minimum of 5" of aperture or more - then yes, it should be a very good telescope for planetary observation. Most of the time the atmosphere will actually limit maximum useful magnification to below x200. Do you have any more information on the telescope itself?
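For context, the two rules of thumb involved here - magnification from the ratio of focal lengths, and maximum useful magnification at roughly twice the aperture in millimetres - look like this (the example focal lengths are made up):

```python
def magnification(scope_focal_mm, eyepiece_focal_mm):
    """Magnification = telescope focal length / eyepiece focal length."""
    return scope_focal_mm / eyepiece_focal_mm

def max_useful_magnification(aperture_mm):
    """Common rule of thumb: about 2x the aperture in millimetres."""
    return 2 * aperture_mm

# x262 at the diffraction limit implies ~131 mm (a bit over 5") of aperture;
# e.g. a hypothetical 1310 mm scope with a 5 mm eyepiece would reach it
mag = magnification(1310, 5)
limit = max_useful_magnification(131)
```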
  17. Sorry, I seem to have missed that. If you mean one of these posts: https://stargazerslounge.com/topic/376040-why-are-my-colour-images-coming-out-in-black-and-white/?do=findComment&comment=4075548 https://stargazerslounge.com/topic/376040-why-are-my-colour-images-coming-out-in-black-and-white/?do=findComment&comment=4075742 In that case, I'll just point out the following: first image: Second image: I don't maintain this to be the case, and yes, let's agree to disagree. Similar analysis on my calibration: As you can see - that is not a perfect match either - bluish stars again look too greenish. In order to fix that we need a 3x3 matrix instead of simple weights.
  18. There is no reason why removing an offset should skew color calibration. Again, I get the same result even with your calibrated data, only this time the background is not wiped. Could you post your processing if it is significantly different?
  19. @alacant Given what I've seen Siril do in your example - I would say we are doing the same thing. However, this is just basic color calibration. Better color correction is achieved if, instead of only 3 coefficients (K0, K1 and K2 in Siril), we derive a full 3x3 transform matrix. The method is the same, except we do least squares fitting to get the matrix.
  20. Not sure who Ivo is, but that is beside the point. I tried to do calibration on linear data, not altered data - fits files before I started any sort of non linear processing. I just wiped the background (removing an additive constant keeps the data linear). I was using version 0.9.12 of Siril (the one that I have installed - I don't use it otherwise, I just installed it at some point to give it a go). Is that bottom image the result of photometric calibration? It looks remarkably like the result I got.
  21. I tried to do photometric color calibration in Siril - but I am failing. Can you give me some tips for that? A similar thing happens with the other catalog:
  22. I don't know. Can you do photometric color calibration in Siril of above image so we can compare color of stars? All you need to do is perform color calibration then take color of star of known temperature and compare to actual color of black body of that temperature and see how well they match.
  23. Any software. I did not mean that it should be added to any specific software - but rather that such an operation is much more easily done in software than manually. I can describe what needs to be done and how I do it manually, so you can see the number of steps involved. You need to select a number of stars in the image. For each of those stars you need to find the effective temperature in the Gaia DR2 database. I use Simbad / Aladin to do that: For example, the above image is Aladin Lite showing the star that we used to do simple RGB balance on, which has an effective temperature of 6250K. The next step is to derive linear rgb components of sRGB space for a black body of that temperature. This is done via the XYZ color matching functions and the black body curve, and then transforming XYZ color space into the linear part of sRGB. It is a well known method, but you can use this online calculator (it probably uses an approximation - but good enough): http://www.brucelindbloom.com/index.html?ColorCalculator.html Now we have the linear sRGB triplet for that color temperature. Record it. Take the actual image and your raw_r, raw_g and raw_b images and do a photometric measurement (with say AstroImageJ) of the values for that star. Record both of these values in a spreadsheet. Repeat for as many stars as you can find in the image that have a temperature recorded in the Gaia DR2 dataset. Do least squares fitting between the two sets of vectors (triplets measured from the image and triplets derived from effective temperature) to find the transform matrix. Use that transform matrix to transform raw_r, raw_g and raw_b into linear_r, linear_g and linear_b for color information in linear sRGB space.
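The final fitting step can be sketched with numpy. This assumes you already have N measured raw triplets and N reference sRGB triplets in two arrays; it is a plain least-squares solve, not any particular software's implementation:

```python
import numpy as np

def fit_color_matrix(measured, reference):
    """Least-squares 3x3 matrix M such that measured @ M ~= reference.
    measured / reference: N x 3 arrays of (r, g, b) triplets, N >= 3."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

def apply_color_matrix(raw_rgb, M):
    """Transform raw pixel triplets into (approximate) linear sRGB."""
    return np.asarray(raw_rgb, dtype=float) @ M
```

With only three well-chosen stars the system is exactly determined; measuring more stars makes the fit robust against photometric noise, which is why the manual procedure asks for as many stars as you can find.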
  24. The above was just an approximation of the method that I propose - one that people can use. The actual method is much more involved and ideally should be implemented in software rather than done manually (as it is tedious work). Having said that - take a look at this: Stars that are bluish in their nature account for only ~0.73% of all the stars out there - with the majority of them being blue-white. Of course - it will depend on the region where you look - if you look at a young cluster of hot stars - well, most of them will be bluish, but in a general direction - odds are you won't find many, if any, truly blue stars. On the other hand - most stars that you see will have a yellowish / orange tone to them. About 96% of them. However, they tend to be much less bright than other stars and are often at the low end of the magnitude scale. If you look at the image that I presented - you'll see that this is indeed the case. Faint stars tend to have a yellowish orange tint. I think that your expectation of what stars should look like in an image is biased by all the images that you've seen over the years, and in many of those images - color management was not followed and people often boost saturation to get rich colors / colorful stars. The fact is that most stars out there are yellowish white without much saturation, and there are many more orange ones - but they are fainter and often won't be captured in images (or will appear brownish because of how faint they are).
  25. Here is a step by step guide and the result: 1. As a first step, I opened the TIF in Gimp and separated the channels into mono images with Colors / Components / Decompose (use the RGB model and turn off decompose to layers) 2. I then export these as fits since I like working with fits files 3. I fire up ImageJ (or rather Fiji - which is a distribution of ImageJ with preloaded plugins) and load the fits files 4. The first step here is to crop the edges off and remove stacking artifacts (Images to Stack, rectangle select, right click and make a copy - whole stack. Close the old one) 5. The next step is to bin x2 as the image is from an OSC sensor and has been interpolated to this resolution anyway. I like working with smaller but sharper stars. (Image / Transform / Bin - X and Y set to 2 and method average) 6. Next is to remove the background. I use my own plugin for this which removes gradients as well, but you can just make a rectangular selection on a piece of background and run stats on it. Then use Process / Math / Subtract and subtract the median value of the background. Do this for each image (each channel). Here are two rounds of my plugin on the red channel - left is the gradient and right is what the plugin considers background / foreground: 7. Fire up Stellarium and find a star that has a B-V index of 0.32 (or 0.35) to be your reference star. These stars are roughly white in color and will help us do simple RGB weighting 8. Select said star using the ellipse tool in ImageJ and do Image / Stacks / Measure Stack 9. Now we have relative weights for our channels - or rather the inverse of those. You need to divide each channel by its corresponding mean value (use Process / Math / Divide on each sub - remember to remove the selection first) 10. If we now measure that star again - we should get roughly the same values. Don't worry if you don't get exactly 1:1:1 as we have a different selection and noise is going to skew the values somewhat 11. Now we have to "equalize" the subs. This is due to Gimp and how it reads subs.
We need to make sure that we have the same max and min values on all three color subs for Gimp to match them when it loads them (to apply the same 0-1 stretch to them). Undo the selection on all three files and measure them. Set the largest of the three minima to be the minimum on each image - here -0.1096 - and set the lowest of the three maxima to be the maximum on each image - here 57.4 12. Save each fits file 13. Now we open each of them in Gimp again. Note that Gimp says it will scale fits values - and since we are opening each channel independently - it would apply different scaling had we not made sure each of them has the same min and max value 14. Do channel compose again (the opposite of the channel decompose we did in the first step) 15. Extract luminance information by now decomposing that RGB image in the LAB model. Keep the L component and discard a and b 16. Stretch and process luminance to your liking. I'll do a basic three step stretch: - step 1: use levels and bring down the top slider so that the galaxy core starts to show (apply levels) - step 2: move the middle slider so that the galaxy is nicely visible (again apply levels) - step 3: move the bottom slider up to the foot of the histogram (again apply levels) 17. You can also apply denoising at this stage or whatever you want - but I won't as this is a tutorial for color processing. 18. Switch back to the original image and apply one round of levels - making sure you enter a value of 2.2 in the middle box. This simulates the gamma of 2.2 for the sRGB color space 19. Copy the stretched image and paste it as a layer over the original. Set the layer mode to luminance. Here is the final result of this operation: Now, this is an approximate method as it uses only a single star (and the B-V index in Stellarium is not very reliable). For best results - data should first be corrected with a derived color matrix and then calibrated against multiple stars of known stellar class - but that is a much more complicated procedure.
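Steps 9-11 above can be condensed into a few array operations. This is my reading of the procedure, not a drop-in replacement for the ImageJ workflow - in particular I clip to force a common min/max, where the manual steps adjust values directly:

```python
import numpy as np

def balance_channels(r, g, b, star_flux):
    """Steps 9-11 as array ops: divide each channel by the white reference
    star's measured flux in that channel (making the star ~1:1:1), then
    force a common min/max on all three so an external viewer (e.g. Gimp)
    applies identical 0-1 scaling to each channel."""
    chans = [np.asarray(c, dtype=float) / f
             for c, f in zip((r, g, b), star_flux)]
    lo = max(c.min() for c in chans)   # largest of the three minima
    hi = min(c.max() for c in chans)   # smallest of the three maxima
    return [np.clip(c, lo, hi) for c in chans]
```

After this, all three channels share identical minimum and maximum values, so loading them independently into a program that autoscales each file will not break the color balance.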