Everything posted by vlaiv

  1. I'm feeling cheeky tonight. Here it goes: "If you can't take your stack, do a basic white point / gamma 2.2 / black point stretch and get a nice looking image - you are doing it wrong." Aaaaand discuss!
  2. Maybe put it into numbers to help you out? There are several things that contribute to the signal level of a single sub:
     1. Sub duration - if you want good SNR on a single sub, you need to expose for a longer time. In the context of stacking, however, this becomes a somewhat moot point, since it is the total exposure time that dictates the final SNR. What is important is to swamp the read noise with the background noise level of a single exposure (look up how to determine single exposure length here on SGL - there are numerous threads).
     2. Aperture size. A large aperture gathers more light than a small aperture in the same amount of time - more signal, better SNR.
     3. Pixel scale. A pixel covers part of the sky - the larger the part of the sky a pixel covers, the more signal it will record (strictly speaking this is true only for extended light sources, but those are what we are primarily interested in; no one complains that the stars are too faint - mostly it is nebulosity / galaxies that are faint).
     4. Quantum efficiency of your system. This includes any losses in the optical train, any filters used and finally the QE of your camera.
     There are several constraints on the above - for example, it does not make sense to talk about an aperture increase if you don't want to swap out the scope. There is only a limited sense in which we can talk about changing pixel scale: it depends on the physical pixel size in microns and the focal length of the telescope. You can't change the first and you can only slightly alter the second, for example by adding a focal reducer. There is also a way to "alter" the physical pixel size - and it is called binning. It is not a complicated topic per se, but it gets complicated once you account for OSC cameras and so on (see the pixel scale sketch below this post). Finally, you can alter the QE of your system by changing or modifying your camera (since it is not modded).
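If it helps to see the pixel scale relationships as actual numbers, here is a minimal Python sketch (my own illustration - the pixel size, focal length and reducer values are made-up examples, not taken from the thread):

```python
# Pixel scale in arcseconds per pixel: 206.265 * pixel size [um] / focal length [mm].
# Shows how a focal reducer or binning changes it - example values are hypothetical.

def pixel_scale(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

native  = pixel_scale(3.76, 1000)          # 3.76 um pixels on a 1000 mm focal length scope
reduced = pixel_scale(3.76, 1000 * 0.8)    # same camera behind a x0.8 focal reducer
binned  = pixel_scale(3.76 * 2, 1000)      # 2x2 binning doubles the effective pixel size

print(f"native : {native:.2f} arcsec/px")
print(f"reduced: {reduced:.2f} arcsec/px")
print(f"binned : {binned:.2f} arcsec/px")
```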
  3. Here are some tips that might be helpful.
     1. Watch out for noise. SNR is different in different parts of the image, so you can't sharpen the whole image equally if you want to get the most out of your sharpening. The best approach that I've found is as follows (Gimp, but it can be adapted to other software):
     - create a copy of the image
     - sharpen that copy until you are satisfied with how the bright parts look. This will inevitably create a lot of noise in the background
     - add a layer mask to this sharpened layer and copy the original image into the mask. Invert the mask as necessary - bright parts should show the sharpened layer and dark parts should show the original. Stretch the mask more than you did the image. Use the opacity slider for the whole layer to control how much of it is blended with the original image
     - when happy, flatten the image.
     The above approach can work for denoising as well. The only difference is that you want an inverted mask - you want to show the denoised version in the dark areas where SNR is poor. (A rough numpy sketch of this masked blend follows below this post.)
     Signs that you've over sharpened:
     1. There is a dark ring around bright stars
     2. You've reduced stars to bright single pixels
     3. All stars in the image are at peak brightness (there is no natural variation of brightness among stars)
     4. Noise, obviously
     5. You have started to create posterization artifacts
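Here is a rough numpy version of that layer-mask blend, just to make the idea concrete (my own sketch, not the poster's actual workflow; it assumes a single-channel float image scaled 0-1, and the unsharp-mask settings and the mask stretch are arbitrary placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(img, sigma=2.0, amount=1.5):
    """Simple unsharp mask, standing in for whatever sharpening you prefer."""
    return np.clip(img + amount * (img - gaussian_filter(img, sigma)), 0.0, 1.0)

def masked_sharpen(img, opacity=0.8):
    """Blend a sharpened copy back in through a brightness mask.

    Bright (high SNR) areas take the sharpened version, the faint background
    keeps the original - the same idea as the Gimp layer mask described above.
    """
    sharpened = unsharp(img)
    mask = np.clip(img ** 0.3, 0.0, 1.0)             # mask = stretched copy of the original
    blend = mask * sharpened + (1.0 - mask) * img
    return opacity * blend + (1.0 - opacity) * img   # layer opacity slider
```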
  4. I'm not sure how keen you are on changing the workflow that you are comfortable with, but may I suggest something a bit different. I'll use a simple explanation of what is going on rather than exact technical terms, but hopefully that will be enough to understand. This workflow is for luminance only - it does not preserve star color in the cores as it leads to clipping, so you'll need to restore those with some mask or something.
     1. Do a levels / linear stretch until you start seeing nebulosity showing up. This step is essentially just a linear transform bringing the white point down.
     2. Do a gamma adjustment (or the middle slider in levels) - to a controlled value (say 2.2 - 2.4; not sure about PS, but Gimp has a feature where you can just enter the number).
     3. Adjust the black point to your liking.
     This is a minimal stretch that will let star removal algorithms work - and it is actually reversible if there is a need. Make a copy of the above image and run a starless routine on it - this will produce a starless image. Now you need to do pixel math of sorts: simply take the original copy and subtract the starless image. This will give you a stars-only image. You stretch that image separately, just to the point where you are happy with the stars - you might as well leave them at that level. Process the starless image as you normally would and then layer the stars-only image on top of it with "brighten" mode (it should really be "add" mode, but brighten will work as well). A short sketch of the pixel math is below this post.
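The pixel math in that last step can be written as a couple of lines of numpy (my own sketch; `original` and `starless` are assumed to be float arrays scaled 0-1, and the star stretch factor is a placeholder):

```python
import numpy as np

def recombine(original, starless, processed_starless, star_gain=2.0):
    """Subtract starless from original to get stars only, stretch the stars a
    little, then put them back over the processed starless image."""
    stars_only = np.clip(original - starless, 0.0, 1.0)
    stars_only = np.clip(stars_only * star_gain, 0.0, 1.0)   # separate, gentle star stretch
    # "add" blend; np.maximum(processed_starless, stars_only) would be the "lighten" variant
    return np.clip(processed_starless + stars_only, 0.0, 1.0)
```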
  5. I've been reading the explanations on AstroBin - and this is essentially what he did - except he did it manually. He created a "reference" image, then stacked Mars from the previous video and blended the two in Photoshop. My proposal above was to let the stacking software do the composition - but the general idea is the same.
  6. I'm not suggesting one should use such a short period of time to create the whole image. I'm just saying that you can create a reference frame that will be used to stack other subs onto (to calculate AP positions / transformations). In other parts of the video there will be usable sections of both the Moon and Mars that can be stacked onto this reference image - provided that you don't attempt to stack the whole sub, but do it AP by AP - or take a small section of a sub where Mars and the Moon are apart, one that covers a piece of Mars, and stack that to the appropriate place in the reference image. Something like this: on the left is some frame from the video before the occultation began, and on the right is the reference frame created out of those 100 subs. You can take an alignment point on the left frame and stack it to the appropriate place in the right image - but you can only do that for APs that are visible in the right image. That is enough though - that is what you want.
  7. I just measured in Stellarium. It takes 30 seconds from first to "second" contact - that is, to fully occluded Mars. At the time Mars had 17" of apparent diameter. This makes it roughly 0.5-0.6"/s. If the image was sampled at, say, 0.2"/px, that makes a drift of around 2-3px per second (a quick calculation is below this post). You can easily create a reference image in one second. A good Mars image requires good SNR, but it does not have to be a 6 minute video for that - that is only true if you sample at the critical sampling rate; you can use a larger telescope that gathers more light and shorten your imaging time that way. Seeing and local conditions play a part as well. However, for the above technique, it does not matter how long the video is. AS!3 has an interesting feature - it can use a local quality estimator instead of the whole frame. It also allows you to choose how to build the reference frame - and even to use an external reference frame (a previous stack as reference). A local / AP quality estimator means that only those APs that are of sufficiently high quality will get stacked. If you take the part of the video prior to the reference point - the whole of Mars will be seen, but only the APs on Mars that can be matched against the reference (where half of Mars is occluded) will get stacked. Similarly, the same goes for the Moon. The Moon is not that much brighter than Mars and they can easily be captured in a single exposure. The exposure difference is not even x10 or so, and with 12bit depth you can easily capture both.
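For what it's worth, the drift numbers above work out roughly like this (a back-of-envelope Python calculation using the figures from the post; the 0.2"/px sampling is just the example value):

```python
apparent_diameter = 17.0   # arcseconds, Mars at the time
contact_interval  = 30.0   # seconds from first to "second" contact
sampling          = 0.2    # arcsec/px, example sampling rate

drift_rate = apparent_diameter / contact_interval    # ~0.57 arcsec/s
drift_px   = drift_rate / sampling                    # ~2.8 px/s at 0.2"/px

print(f"{drift_rate:.2f} arcsec/s  ->  {drift_px:.1f} px/s")
```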
  8. Come on, admit it - you added vertical spike because of all the current JWST rage!
  9. Good point. I think one should be really clever in how they approach the problem. The first thing is ensuring a short imaging time. This puts constraints on aperture size, and I was probably wrong to estimate it at around a 10" telescope. If you choose not to sample at the critical rate, you can use a larger telescope for a shorter period of time. For example - if you use a 20" telescope for a 30 second run, you will get the same SNR (or even greater, because of the read noise contribution) as using a 10" telescope for a two minute run, as long as you match the sampling rate (quick numbers below this post). A second trick that can be employed is to use a very short section of the whole recording - say only one second or so - for the reference frame. At a decent frame rate like 200 FPS that would mean 200 subs, which is plenty to build a reference frame. Then you create two images. One is made by stacking the section of the recording prior to the time you've chosen against the reference - this stack will provide the data for Mars. The other is created by stacking the second half of the video against the same reference - this will create the Moon image. You then combine the two, using Mars from the first stack and the Moon from the second. In any case - it can be done with some trickery, but I did not really think about it at all until you pointed out the transient nature of the event. This just makes the recording much more "valuable" in my eyes if it's done as I described above. Of course - one could image Mars and the Moon before the occultation event and then just combine them in Photoshop to match what it would look like at the beginning of the occultation. That is much easier to do - but not as "true" as the above method.
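The aperture/time trade in that example is just light gathering scaling with aperture area times exposure time (ignoring read noise, which actually favours the shorter run). A tiny illustration using the figures from the post:

```python
def relative_light(aperture_inches, seconds):
    """Collected light is proportional to aperture area (D^2) times exposure time."""
    return aperture_inches ** 2 * seconds

print(relative_light(20, 30))    # 20" scope, 30 s run   -> 12000
print(relative_light(10, 120))   # 10" scope, 2 min run  -> 12000, same total signal
```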
  10. Not sure what you mean by this? This is a Stellarium simulation on the date the image was taken.
  11. If you don't mind me asking - how do you extract the stars in the first place? Do you create a stars-only image at all, or do you work with the integral data for the stars?
  12. Jupiter was taken with a 21" telescope, and Mars is in line with about 10" of aperture. The fact that it is behind the Moon is just good timing.
  13. Not literature per se, but a series of lectures. Look up Leonard Susskind's lecture series called "The Theoretical Minimum" on YouTube. Actually - it is a book that he wrote, but he also recorded his lectures. They are aimed at undergraduate students, to provide them with the theoretical minimum needed to read further literature on particular topics. Here is the playlist on YouTube: https://www.youtube.com/playlist?list=PL6i60qoDQhQGaGbbg-4aSwXJvxOqO6o5e You can select the lectures of interest - like cosmology or QM basics.
  14. Oh, I've misread the question - you want to avoid what I've described above. Well, yes - look at @Budgie1's answer above - usually there is a way to group lights with their corresponding calibration frames.
  15. Provided that your camera can do that (and not all can - mostly CCDs are good at this, and possibly some CMOS sensors), you can scale your darks and do dark calibration with a single set of darks. You'd also need bias frames (bias does not scale, so it needs to be removed separately). There is a "dark scaling factor" usually found in stacking software - and there you should use the ratio of exposures. Say you want to calibrate 180s lights with 300s darks - you would use a dark scaling factor of 180/300 = 0.6 (see the sketch below this post).
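In plain terms the calibration step works out to something like this (a minimal sketch, assuming master frames as float arrays and a sensor whose dark current really does scale linearly with exposure time; the exposure values are the ones from the example):

```python
import numpy as np

def calibrate(light, master_dark, master_bias, t_light=180.0, t_dark=300.0):
    """Scaled-dark calibration: remove bias, scale the remaining dark current
    by the exposure ratio, then subtract both from the light frame."""
    scale = t_light / t_dark                     # 180 / 300 = 0.6
    return light - master_bias - (master_dark - master_bias) * scale
```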
  16. the Emperor is not wearing anything at all!
  17. I think that is actually down to the blend mode. Try "lighten" for example. Here - look at this part: those larger stars look like they have a dark halo around them. This is due to the blending, but if you over sharpen things you get the same effect.
  18. I'm getting a distinct "over sharpening on the stars" vibe from the image.
  19. Depends how much you want to spend. There is this: https://www.firstlightoptics.com/alt-azimuth-astronomy-mounts/rowan-az100-alt-az-mount.html And this: https://www.orionoptics.co.uk/product/250mm-dobsonian-mount/ or you can go DIY route: https://www.instructables.com/BUILDING-a-DOBSONIAN-TELESCOPE-MOUNT/ (or other examples online)
  20. I'd say - leave them as they are. They contribute a certain charm to the image - they emphasize how strong that star is and add a very interesting "compositional twist" to the FOV. Flocking the spider won't help. Diffraction happens because of a light / no light situation (blockage), not because of any sort of reflection - using a completely dark spider that reflects no light will produce the same effect. The only thing you can do to change the appearance of the spike is to change the thickness of the spider vane. The spider acts as a sort of diffraction grating (a single-groove diffraction grating) and the spike that you see is made up of the different orders of diffraction - each one longer than the previous. If you zoom in on the spike itself, you will see that it is a "series of rainbows" - each next one longer than the one before it: This is why you seem to perceive the spike as being far away from the star - it is there all the way, but you've hit a rainbow part where the camera is less sensitive and the rainbow color blends with the background in that part. In any case - changing the spider thickness changes how dense these rainbows are (how much dispersion there is in the diffraction grating). A thin spider vane produces longer rainbows and a thick one produces shorter but more concentrated rainbows (a rough scale estimate is sketched below this post). That is about all you can do, short of starting to curve the vanes to affect how the dispersion works.
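To put a rough scale on the "thin vane = longer spike" point: by Babinet's principle an opaque vane of thickness a diffracts on roughly the same angular scale as a slit of width a, about m*lambda/a for the m-th order. A small order-of-magnitude sketch (the vane thicknesses and wavelength are made-up example values):

```python
import math

def diffraction_angle_arcsec(order, wavelength_nm, vane_thickness_mm):
    """Approximate angular scale of the m-th diffraction order for a vane of
    given thickness (small-angle, Babinet-style estimate)."""
    theta = order * (wavelength_nm * 1e-9) / (vane_thickness_mm * 1e-3)   # radians
    return math.degrees(theta) * 3600.0

for thickness in (0.5, 2.0):     # thin vs thick vane, in mm
    angle = diffraction_angle_arcsec(1, 650, thickness)   # red light, first order
    print(f"{thickness} mm vane -> ~{angle:.0f} arcsec")
```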
  21. Take a look at this as well: https://www.firstlightoptics.com/reducersflatteners/ts-2-inch-1x-field-flattener-for-f4-f9-refractors.html It might be a bit more expensive, but it has been around for quite some time and it has a good track record (people even use it for visual because of the long working distance). See this as an example of how it performs:
  22. It can't be done the way you want it to work. Most FF/FRs are at least a x0.8 reduction factor and you want to illuminate a 35mm diagonal through a 2" focuser. With x0.8 reduction, you will effectively squeeze 35 / 0.8 = 43.75mm of focal plane onto the 35mm sensor. A 2" focuser has at most about 47mm - but closer to 46mm - of clear aperture: it is a 50.6mm tube and you need to put some sort of FF/FR inside it that holds the lenses in its body. That body will likely be at least 2mm thick, so with 2mm on each side we have 50.6 - 4mm = 46.6mm of free aperture at best. Most FF/FRs work at 55mm away from the sensor (or more) and are at least 40-50mm long, so it is safe to assume there is at least 100mm between the FF/FR entrance and the focal plane. At F/7 (this is a very approximate calculation) the light cone is 100/7 = ~14mm wider at that distance than at the focal plane, which we calculated to be 43.75mm - so you need at least ~58mm of free aperture at the flattener entrance to get what you want. Not something you can do with 46mm max (the arithmetic is sketched below this post). The first order of business, if you want to go that route, would be to replace the focuser with a 2.5" unit. Then maybe get one of these: https://www.teleskop-express.de/shop/product_info.php/info/p12210_TS-Optics-REFRACTOR-0-8x-corrector-for-refractors-from-102-mm-aperture---ADJUSTABLE.html or this one: https://www.teleskop-express.de/shop/product_info.php/info/p11122_Riccardi-0-75x-APO-Reducer-and-Flattener-with-M63x1-Thread.html
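The same back-of-envelope arithmetic as a few lines of Python (approximate, using the assumptions in the post - x0.8 reduction, F/7 scope, ~100mm from the FF/FR entrance to the focal plane):

```python
sensor_diag  = 35.0     # mm, target sensor diagonal
reduction    = 0.8      # FF/FR reduction factor
f_ratio      = 7.0      # scope focal ratio
ffr_to_focus = 100.0    # mm, assumed distance from FF/FR entrance to focal plane

field_at_ffr = sensor_diag / reduction     # 43.75 mm of focal plane squeezed onto 35 mm
beam_growth  = ffr_to_focus / f_ratio      # ~14.3 mm: extra width of the f/7 cone 100 mm out
needed       = field_at_ffr + beam_growth  # ~58 mm of clear aperture needed at the FF/FR

print(f"required entrance aperture: {needed:.1f} mm (vs ~46 mm available in a 2 inch focuser)")
```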
  23. I might be wrong, but I believe that AS!3 has surface stabilization turned on by default? In any case - if your target drifts somewhat over the course of the video, parts that are not captured for the whole video will be clipped, as they have fewer frames to stack. Not sure if this behavior can be altered, but one solution is to make sure your tracking is good enough to keep the region of interest on the sensor the whole time.
  24. I did not read the whole thread, but I suspect you are debating 102 vs 127 and collimatable vs non-collimatable (if those are even proper words). I don't remember seeing a 127 that can't be collimated, but I do know for a fact that there are two types of 102 - one with and one without collimation screws. In fact SkyWatcher released a whole line of scopes without collimation screws - newtonians and that Mak - all of which sit on their lightweight mounts, like the AZ Pronto and Starquest. That is the Starquest (the image is from a YouTube video - it is hard to find the back side of this model in a regular photo), while the AZ Pronto line is clearly labeled, having an S at the end of the model names. The FLO website clearly states that these don't have collimation ability. Interestingly, the Mak page does not say this, but it is also labeled with an S. Another sign for the 102 that I managed to track down is the shape of the front of the scope: compared to the regular Mak, the 102 seems to have a smooth edge at the front. In any case, I ended up purchasing the regular OTA version, although the bundled one was more affordable and came with more accessories (that I wanted) - just to have the ability to collimate the scope if needed. I haven't seen any pictures or other evidence of a 127 without collimation ability.