Everything posted by vlaiv

  1. Single exposure length in lucky imaging is a fine balance and in principle - if you can, you should go with longer exposures. On one hand - you want a longer single exposure as that improves SNR both for that single frame and for the whole stack. Software will more easily recognize a good frame from a poor one if there is not much noise - it will align it better. Total SNR will let you sharpen more before noise and artifacts become apparent. The problem with longer exposure is something called coherence time. The atmosphere is in motion and most of the blurring we see at the telescope is actually motion blur of the atmosphere. The point of lucky imaging is to minimize that motion blur, among other things. In order to do so - you need to use very short exposures. You need to expose for just enough time for the atmosphere to be stable. You'll still have distortion of the image - but that distortion won't turn into heavier blur if you don't let it, by freezing it in "an instant" instead of letting it change and create motion blur on top of distortion (it's like blending two or more different distortions on top of each other). Given the above - the correct approach would be: expose for a short enough time to freeze the seeing (for the coherence time of your site and telescope size) but not shorter than that. In answer to your question - yes, you can gain quite a bit if your coherence time is less than 5ms and you expose for a shorter period of time, but if your coherence time is longer than 5ms - you gain nothing and actually lose a bit, because each sub will have more noise and be harder to stack properly, and the resulting stack will have lower SNR. People with very good observing sites on good nights can even expose for 10ms or more - but that combination of things is rare. In most cases you need to limit yourself to about 5-6ms for 8" aperture.
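The read-noise penalty of going shorter than the coherence time can be sketched with a toy SNR model. This is a rough illustration with made-up flux and read-noise numbers, not measured values from any real camera:

```python
import math

def stack_snr(flux_e_per_s, exposure_ms, total_s, read_noise_e):
    """Stack SNR for a fixed total capture time under a simple
    shot-noise + read-noise model: SNR = sqrt(N) * signal / noise."""
    n_frames = int(total_s * 1000 / exposure_ms)
    signal = flux_e_per_s * exposure_ms / 1000.0      # electrons per frame
    noise = math.sqrt(signal + read_noise_e ** 2)     # shot + read noise
    return math.sqrt(n_frames) * signal / noise

# same 60 s of total capture, hypothetical 50000 e-/s target flux:
snr_2ms = stack_snr(50000, 2, 60, read_noise_e=2.0)
snr_5ms = stack_snr(50000, 5, 60, read_noise_e=2.0)
# 5 ms subs edge out 2 ms subs for the same total time - exposures
# shorter than the seeing demands only add a read-noise penalty
```

With zero read noise the two would tie; the gap grows as read noise grows, which is why going shorter than the coherence time gains nothing.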
  2. No, but you can get special bags for storage and transportation, like this one: https://www.teleskop-express.de/shop/product_info.php/info/p10107_Geoptik-Transport-Bag--Pack-in-Bag--for-Skywatcher-AZ-EQ6.html It uses the polystyrene foam that came with the original box and is the right shape to accept it. An alternative is this: https://www.firstlightoptics.com/telescope-bags-cases-storage/oklop-padded-bag-for-sky-watcher-eq6-neq6-az-eq6-mounts.html In any case - those are for the mount head only; you'll need to carry the counterweights and tripod as well - you can purchase similar bags for those too. Btw, I don't bother and just carry the delicate stuff on the back seat, and the tripod and weights in the car boot without any sort of bag/cover. I think it will be time well spent, and also - you can always ask questions here if you are in doubt about something or just want people's opinions on which is better...
  3. Give it a go, but it is not as user friendly as some other software, as it is primarily for scientific image manipulation. Check out Fiji - it is a distribution of ImageJ loaded with plugins. In principle - you can perform every step of processing with it, but I tend to finish things off in Gimp as it is much easier. You can however do stacking and everything else in ImageJ and even code your own stuff if you know how to program - it accepts macros and you can also use Java to write plugins.
  4. The best indication is to measure FWHM - the average FWHM of stars in the image should be about x1.6 the sampling rate (if you measure both in arc seconds - or simply, FWHM should be ~1.6px if you measure it in pixels). However, you can see it fairly easily with the naked eye. Just do simple processing of the image, and if your stars look like this at 100% zoom level: Then you are hugely over sampled. You want your stars to be as small as possible when the image is displayed at 100% zoom level - something like this: or maybe like this: The faintest stars should really be almost point like, and medium and brighter stars - maybe just 5-6 pixels across (if their FWHM is 1.6px then they can't be much larger in diameter when fully stretched). Another way to test if your image is over sampled is to resample it to a smaller size and then scale it back up to the original size. If it shows the same level of detail (just be careful - noise is not detail, although our brain perceives noise as sharpness) - then you can use a smaller resolution without loss of detail. For example - let's take the first example and scale it down. Here is a small version that I resized to only 25% of the original image: When we scale that back up to 100% we get this: The original image is perhaps more aesthetically pleasing due to noise grain size (the second one has lost detail in the noise and the noise looks blurred) - but data is not lost - every single star / feature and its shape is the same in the first and in the second version. This means that you don't need to waste pixels to record the data - you can do it with only 1/16th of the pixels in this example (25% or 1/4 by width and the same by height). Note also that stars look much better in the small version of the image - they look pin point-ish and much closer to the other two examples that I gave as a visual guide of proper sampling rate.
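The ~1.6px FWHM rule of thumb can be checked numerically. Here is a rough sketch (numpy assumed, with a synthetic Gaussian star standing in for real data, and a deliberately crude half-maximum count as the FWHM estimate):

```python
import numpy as np

def fwhm_pixels(profile):
    """Crude FWHM estimate for a 1-D star profile:
    count samples brighter than half of the peak."""
    return int(np.count_nonzero(profile > profile.max() / 2.0))

x = np.arange(-10, 11)
sigma = 4.0                              # a heavily bloated star
star = np.exp(-x**2 / (2 * sigma**2))    # Gaussian: FWHM = 2.355 * sigma
fwhm = fwhm_pixels(star)                 # ~9 px here
oversampled = fwhm > 1.6                 # well above the ~1.6 px target
```

A star measuring ~9px FWHM like this one is sampled roughly 6x finer than needed, which is the case where downsampling loses no real detail.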
  5. Hi and welcome to SGL I'm not sure where you get that red cast in the upper left corner. I loaded the fits into ImageJ, did quick debayering, scaled channels, converted to a composite image (RGB) and got this: There is a bit of light pollution but otherwise the image looks ok. As far as the number of stars - well, you missed focus a bit. When you don't focus properly on stars - their light gets spread out and they are less visible. They really need to be pin point in order to show properly. Here is a non-debayered raw data crop - you can see a lot of stars - but since they are little circles instead of dots - they are rather faint.
  6. Hi and welcome to SGL. Like it has already been pointed out - there is no telescope / setup that will do it all. Astrophotography can get very expensive. It can be done on a budget - but in that case, one should really adjust their expectations. These days we have access to a vast number of images taken by other people - and it is very easy to think that quality comes as standard - but it really takes a lot of money, time and talent invested in order to create very good images. On the other hand - decent looking modest images can be taken with very basic equipment - like a camera, lens and simple DIY tracking mount (or even a tripod without tracking). Before getting into AP I recommend that you spend some time learning about it - maybe get a book or watch different tutorials on YouTube - a lot of people document their astrophotography work. If you want a do-it-all scope for visual - there is almost such a thing - get an 8" F/6 dobsonian telescope. Do be careful - scopes of that size are heavy and bulky. You need to store them, transport them and set them up for each use, and if the scope is too large for you - it can become a chore very fast. In case you still want a "do it all" kind of telescope that is smaller in size - thus more manageable - and one that will provide imaging capability as well, then maybe get yourself something like this: https://www.firstlightoptics.com/ts-telescopes/ts-photon-6-f6-advanced-newtonian-telescope-with-metal-tube.html and put it on a mount such as this: https://www.firstlightoptics.com/skywatcher-mounts/skywatcher-az-eq5-gt-geq-alt-az-mount.html The rationale being that for visual use, a newtonian type telescope is better served by an AltAz type mount (an EQ mount gets the eyepiece / finder / focuser in very strange positions, so the tube must be constantly rotated in the tube rings), and a 6" F/6 telescope will do rather well on deep sky objects for visual and will be a good planetary scope at F/6.
You can easily connect such a mount to a computer by using a dedicated cable and EQMod (ASCOM driver for SkyWatcher mounts). The mount can be converted into EQ mode easily (just adjust for latitude) - and can serve as a decent beginner AP platform. There is a better / heavier (and more expensive) version of this mount: https://www.firstlightoptics.com/skywatcher-mounts/skywatcher-az-eq6-mount.html but that mount alone is about 20kg in weight. I do urge you to consider the size and weight of this equipment as it is an important factor.
  7. With Jupiter, you are limited by the planet's rotation speed. Capture time depends on the resolution you are working with (larger scopes mean less time). Video can be derotated but that is a special type of processing. I'd say limit video length to about 4-5 minutes with an 8" scope if you are using AutoStakkert!3 (it can handle a bit of rotation without issues - otherwise, 2-3 minutes on an 8" scope). Try to get at least 20000 frames captured (say you capture at 100fps - that would be 200s or just over 3 minutes - but greater FPS is better of course; with 5ms exposure you can capture up to 200fps if your camera and computer can handle it).
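The frame-count arithmetic above is just frames divided by frame rate; a quick sketch (function name is mine, purely for illustration):

```python
def capture_seconds(n_frames, fps):
    """Recording time needed to collect n_frames at a given frame rate."""
    return n_frames / fps

t = capture_seconds(20000, 100)   # 200 s, just over 3 minutes
max_fps_at_5ms = 1000 / 5         # a 5 ms exposure caps the rate at 200 fps
# 200 s stays under the ~4-5 minute limit quoted for an 8" scope
```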
  8. Binning is a procedure where you take a certain number of adjacent pixels and create one "large" pixel in their place. Imagine you take every 2x2 pixels and sum / average their values to produce a single value in their place. This does two things: - it reduces the pixel count of the image. If you start with say a 5200px x 3600px image for example and bin x2, you'll end up with a 2600px x 1800px image instead. - it improves SNR. It's a bit like stacking; stacking also averages pixel values - but does it on pixels in successive images. Binning averages adjacent pixels in a single image - but the SNR improvement is the same. If you end up over sampling your image, it is a handy way to increase SNR without losing detail. If the image is over sampled in the first place - then you won't be losing anything except pixel count. Over sampled means that you are using "too much zoom" for the image sharpness - or more precisely, you are using too many pixels to capture what can be captured. I don't particularly like such techniques in images. I don't like to even sharpen an image. If you get the sampling rate right - you'll match star size to pixels and there won't be a need to do any star reduction - stars will be small. I don't like B masks either. I have two of them but they just gather dust. My preferred way of focusing is just by looking at the stars - I tweak the focus until stars are tightest. That might not be as practical with a DSLR (I use a computer and dedicated camera so it is easier to see stars on the computer screen) - but why not give it a go using the DSLR screen in live mode, zoomed in? In any case - that is something that you can try and maybe you'll find it easier and more precise than a B-mask (if not, you can always revert back to the B-mask; I'm sure that with practice it can be mastered as well).
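Software binning as described above is a few lines of numpy. A minimal sketch (average binning; summing instead of averaging works the same way up to a constant factor):

```python
import numpy as np

def bin2x2(img):
    """Software 2x2 binning: average each 2x2 block of pixels into one.
    Averaging 4 pixels improves SNR by sqrt(4) = 2 - the same gain as
    stacking 4 frames - at the cost of a 4x smaller pixel count."""
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]      # trim odd edges if any
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.zeros((3600, 5200))
img[:2, :2] = [[0, 1], [5200, 5201]]       # one sample 2x2 block
binned = bin2x2(img)                       # 5200x3600 becomes 2600x1800
```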
  9. Gimp and ImageJ. In Gimp I first loaded the tiff, separated the channels into mono images and saved them in fits format. Then I loaded them into ImageJ where I did a very small crop to remove stacking artifacts. Then I binned the data x4 - as it was grossly over sampled to start with. That recovers quite a bit of SNR. The next step was background / gradient removal for each of the channels (a custom plugin for ImageJ that I wrote) and then I just made sure the min / max values for each channel were the same (it's like normalizing to the 0-1 range, except I don't have to do that as Gimp will automatically do it when importing data - I just needed to make sure that colour would be preserved and each channel image had the same min and max value). I then loaded the images into Gimp, did RGB compose and a three step levels stretch and a bit of wavelet denoising, and exported the image as png. I think that you missed focus by quite a bit in that image. Stars are huge and in fact - in the blue channel stars are doughnuts rather than stars: For that reason I needed to bin x6 and to sharpen things up on top of that in order to try to get decent looking stars. In any case, here is the result:
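The "same min / max per channel" step can be sketched like this (numpy assumed; `match_range` is a name I made up for illustration, not an ImageJ or Gimp function):

```python
import numpy as np

def match_range(channels):
    """Rescale all channels with one shared min/max so that relative
    channel intensities - and therefore colour balance - are preserved
    when each channel is later auto-stretched to 0-1."""
    lo = min(float(c.min()) for c in channels)
    hi = max(float(c.max()) for c in channels)
    return [(c - lo) / (hi - lo) for c in channels]

r = np.array([0.0, 50.0])
g = np.array([10.0, 100.0])
b = np.array([5.0, 80.0])
r2, g2, b2 = match_range([r, g, b])   # all mapped with lo=0, hi=100
```

Normalizing each channel independently would instead force every channel to span 0-1 and destroy the colour ratios.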
  10. Pacman data is actually quite nice if handled properly:
  11. Hope you don't mind, I took the liberty of processing your image a bit more: I applied a bit more sharpening, RGB align and white balance, and I resampled the image to a size that is appropriate to the captured detail (it looks less blurry that way).
  12. If you mean EQMod, then this one: Planets and stars / DSOs are tracked with the "star" icon. The Moon is tracked with the second one (lunar rate). The third one is the solar rate, for the Sun.
  13. That is most likely an artifact from display interpolation. If the image was scaled, the capture application probably used some sort of interpolation algorithm to do the scaling. Some interpolation algorithms don't deal particularly well with lone pixels that have a big variation with respect to their surroundings. Say you have a hot pixel (one that goes away with cooling) - it will have a value much larger than the surrounding pixels. Look what happens with a single hot pixel when the image size is reduced by 50% using bicubic interpolation:
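The effect can be reproduced with a 1-D cubic convolution (the same kernel family bicubic uses in 2-D). This is a from-scratch sketch, not any particular capture application's resampler:

```python
def cubic_kernel(x, a=-0.5):
    """Catmull-Rom style cubic kernel; the second branch gives it
    negative lobes, which is where the ringing comes from."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def resample(signal, scale):
    """Resample a 1-D signal by cubic convolution (centre-aligned)."""
    out = []
    for i in range(int(len(signal) * scale)):
        src = (i + 0.5) / scale - 0.5
        base = int(src)
        out.append(sum(signal[k] * cubic_kernel(src - k)
                       for k in range(base - 1, base + 3)
                       if 0 <= k < len(signal)))
    return out

sig = [10.0] * 16
sig[8] = 255.0                # lone hot pixel on a flat 10.0 background
small = resample(sig, 0.5)    # reduce to 50%
# the negative kernel lobes push the hot pixel's neighbours
# below the background level - a dark halo artifact
```

Next to the (still bright) resampled spike, the output dips below the background, exactly the dark ring you see around a hot pixel in a scaled preview.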
  14. Do you have your scope on an EQ mount? Your signature says that you have a Heq5 pro mount. Make sure your mount is properly polar aligned and set up and it should track your target without much issue.
  15. Just to add - tracking need not be ideal, and the planet can and even should have a slight drift across the FOV in the movie - that will naturally dither the recording.
  16. Why wouldn't you track? You should track and ideally hold Jupiter in the center of the FOV - on the optical axis. Although you have an EdgeHD scope - most scopes have their best sharpness on the optical axis; as soon as you move away from the optical axis - you start getting aberrations - like coma or field curvature or whatever. Even the EdgeHD, although it is corrected across the field for imaging, is not diffraction limited away from the optical axis.
  17. I'd probably change the following for planetary work: 1. Use ROI instead of full resolution (you might be using full res for testing purposes?) - so instead of 3096x2080 use something like 640x480 to get high enough FPS. 2. Use 5ms exposure instead of 504ms (again - maybe you need the longer exposure for testing purposes?) 3. Turn on high speed mode - the option in the lower section above the temperature reading. As far as the colour SER - yes, you'll capture colour video as your camera is a colour camera, but what format is it in, can you tell? It can be in raw format, not debayered, and in that case your preview application is debayering for preview, or it might be debayered at 8 bits per channel (so RGB format). You can tell the difference by the size of the SER file. Say you capture 1000 frames in your SER file. If you shoot in RAW8 mode, the file size should be 1000 x 3096 x 2080 = ~6GB of data, but the same capture in RGB8 mode will produce an 18GB file (three times as large).
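The file-size check is simple enough to script. A minimal sketch (image data only; the small SER header and per-frame timestamps are ignored here):

```python
def ser_frames_bytes(frames, width, height, mode):
    """Approximate SER image-data size: RAW8 stores 1 byte per pixel
    (raw bayer data), RGB8 stores 3 bytes per pixel."""
    bytes_per_pixel = {"RAW8": 1, "RGB8": 3}[mode]
    return frames * width * height * bytes_per_pixel

raw = ser_frames_bytes(1000, 3096, 2080, "RAW8")   # ~6.4e9 bytes, ~6 GB
rgb = ser_frames_bytes(1000, 3096, 2080, "RGB8")   # three times as large
```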
  18. Hi, You want a UHC type filter for observing emission type nebulae. You can also use a more specific filter like OIII that works better on some targets, but a UHC type will work on more emission targets. The L-eNhance is probably going to be better for visual than the L-eXtreme. This is because it also includes the H-beta line: vs The first one is more like a regular UHC but will dim the background sky better as it has narrower bands.
  19. Any dust that is far away from the focal plane - which really means filters or the sensor cover window - won't leave any significant shadow for planetary purposes. I advocate doing flats for planetary as well - but most people don't bother. The planet often drifts enough during the recording that any dust shadow due to particles on the camera window will simply be averaged out against clean pixels in the stack.
  20. PIPP is the Planetary Image Pre-Processor (if I'm not mistaken) - a free / open source tool that preprocesses video before stacking. https://sites.google.com/site/astropipp/ It has a bunch of functions - but I tend not to use most of them as they are not needed - except maybe frame stabilization / crop if you want to keep or have smaller video files. It is also useful for combining videos or maybe for format conversion. The main feature is calibration. There are three different sets of files that you need for calibration. You can do dark calibration or dark + flat calibration of your video. In order to do dark calibration (which is subtraction of bias and dark signal) - you shoot a video with the same settings as your planet video - except you cover the scope. For flat calibration - you need a flat panel or some other means of creating a flat field (like a T-shirt in evening / dawn or maybe a laptop screen) - look up flats for regular astrophotography - the principle is the same - except you again shoot a video of say ~100-200 frames for both flats and flat darks (same settings as flats but scope covered). You can load all of these files into PIPP and tell it to output a 16bit calibrated video with all preprocessing done (you can say normalize histogram, preserve bayer matrix or debayer - depending on what you prefer, and so on).
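The calibration arithmetic that PIPP applies per frame can be sketched like this (numpy assumed; `calibrate` is an illustrative name, not PIPP's API, and toy uniform arrays stand in for real videos):

```python
import numpy as np

def calibrate(light, dark_video, flat_video=None, flat_dark_video=None):
    """Dark (and optional flat) calibration of one video frame.
    Master frames are per-pixel means of the calibration videos."""
    master_dark = dark_video.mean(axis=0)       # bias + dark signal
    frame = light.astype(np.float64) - master_dark
    if flat_video is not None:
        master_flat = flat_video.mean(axis=0) - flat_dark_video.mean(axis=0)
        master_flat /= master_flat.mean()       # normalize flat to ~1.0
        frame /= master_flat                    # undo vignetting / dust
    return frame

# toy 2-frame "videos" standing in for real captures
light = np.full((4, 4), 120.0)
darks = np.full((2, 4, 4), 20.0)
flats = np.full((2, 4, 4), 520.0)
flat_darks = np.full((2, 4, 4), 20.0)
cal = calibrate(light, darks, flats, flat_darks)   # uniform flat here
```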
  21. Yes, SharpCap is very good for planetary imaging. Here are a few more tips to get you going: 1. Use SER video file format and don't debayer captured frames 2. Use very short exposure and don't pay attention to histogram or if image in capture looks dark. Use 5-6ms exposures 3. Use very high gain setting for your camera (one that minimizes read noise - lookup read noise vs gain for your particular model) 4. Use PIPP to calibrate your video (shoot dark video and flat / flat dark). Preserve bayer matrix in this step (PIPP has that option) 5. Use AutoStakkert!3 to stack your video. 6. Use Registax wavelets for sharpening.
  22. That is true. I was going to suggest this scope: https://www.teleskop-express.de/shop/product_info.php/info/p3881_TS-Optics-PHOTOLINE-80mm-f-6-FPL53-Triplet-APO---2-5--RAP-Focuser.html I have it and it is a really excellent little scope; however, I fear it won't be much of a difference from your WO61. Both are wide field AP scopes. If you want to get into medium resolutions with a refractor, you really need to go 100mm+ and that will get you below 2"/px, down to say the 1.6-1.8"/px range. However, that would require something like a Heq5.
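The "/px figures come from the standard imaging-scale formula; a quick sketch (the 3.76um pixel and 700mm focal length below are a hypothetical pairing, not a recommendation):

```python
def arcsec_per_pixel(pixel_um, focal_length_mm):
    """Imaging scale: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_length_mm

# e.g. 3.76 um pixels on a 100 mm f/7 refractor (700 mm focal length)
scale = arcsec_per_pixel(3.76, 700)   # ~1.1 "/px - medium resolution
```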
  23. You can image DSOs with a planetary camera but it is a bit of a pain really, as the sensor is so small and you'll have to use mosaics to cover the target. A DSLR will be much better for that. On the other hand, if you want to do Jupiter and Saturn - you really need a planetary camera and a specific approach to imaging - so called lucky planetary imaging. It can also be done with a DSLR to some extent, but the major issue with a DSLR is that it won't do raw movies (maybe some models will, but most use some sort of lossy compression) and you want raw data for stacking. Get both? You can always use the planetary type camera for guiding as well.
  24. Do keep in mind that said scope will be good for visual or EEVA but not for astrophotography. It uses FCD1 glass and it is a larger aperture, fast doublet scope. It will have significant colour fringing in astrophotography applications if you shoot broad band. For narrowband only it will probably be a good choice.