Everything posted by vlaiv

  1. Yes indeed - the bubble looks marvelous now, very sharp, and so do the surrounding gas clouds. Outer nebulosity is indeed faint, but I guess it needs to be - it is probably very faint in comparison to the bubble and surrounding gas clouds. To get that part smooth, much more exposure is needed, but I don't think that would be feasible, and it does not detract from the target much.
  2. Depends on intended targets. The ST120 has rather good optical quality for its intended purpose - viewing DSOs at relatively low magnification. It will show more, and go deeper, than the 80ED as it gathers x2.25 more light. At low magnifications, problems with CA will not show - you will see purple halos only when looking at the brightest stars. Other than that, deep-sky objects will not look worse in the ST120; on the contrary, you will be able to see deeper. On planets, the story is much different. The ST120 will give much poorer results than the ED80. Although the larger aperture will resolve more, CA blur will simply ruin this, and the ED80 will show more detail and views will be much more pleasing.
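The x2.25 light-gathering figure above is just the ratio of aperture areas; a one-line check:

```python
# Light grasp scales with aperture area, i.e. with diameter squared
st120 = 120  # mm aperture
ed80 = 80    # mm aperture
print((st120 / ed80) ** 2)  # → 2.25
```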
  3. I'm running the risk of over-explaining things here (too much info), or telling you things you already know, but here we go: at 0.66"/px you are oversampling - this leads to poorer SNR and less sharp stars when the image is viewed at 1:1 (one image pixel to one screen pixel, or 100% zoom). Depending on how good your guiding is, given your aperture and usual seeing conditions, you want to sample somewhere between 1.3"/px and 2"/px (it depends on the particular night). With CMOS sensors there is a simple way to do this - software binning. You can either bin your subs x2 or x3 prior to stacking (or even bin your stack while still linear, but I think it is slightly better to do it on subs). You can also downsample your image. Neither binning nor downsampling will have an impact on an image posted on the forum (it won't change the FOV) - the browser already downsamples it because the image is large, unless you bin to the extent that the image becomes smaller than the browser shows it on the forum.
There is a difference between binning and downsampling, however. Both do the same thing, but to a different extent: both produce a coarser sampling rate, and both increase SNR. The SNR increase is a bit different and depends on the downsampling method used. Binning always provides a predictable increase in SNR - equal to the bin factor: x2 binning will increase SNR by 2, x3 by 3, etc. The SNR increase from downsampling depends on the method used and is always less than this maximum that binning provides. Downsampling also introduces cross-pixel correlation that binning does not (not sure if you should care about this, but I point it out as a difference). There are more differences between the two, like the impact of pixel-level blur, etc. It is worth doing however - when the image is viewed 1:1 it will look sharper, although smaller in scale (not so zoomed in), but the main benefit will be the SNR increase.
The SNR increase will be visible on a screen-size image (like here on the forum) as well, because you will be able to stretch more and show "deeper". I would recommend that you bin your image x2 regularly, and x3 on nights of poor seeing. You can try this with existing data (not sure what software you are using for stacking and processing): software bin x2 on your calibrated subs and then restack. Process the result to see if you can go deeper. Also examine how it looks when viewed at screen size (like posted here on the forum), but also what it looks like at 1:1. Btw, there is a really simple way to observe how downsampling increases SNR - just look at your image scaled down to be displayed on the forum, and also at 1:1; here are screen shots of both: See how the background looks more noisy on the bottom image? They are the exact same image, but the top one has been downsampled and the background is much smoother because of it.
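The software binning described above can be sketched in a few lines of numpy. This is an illustration on synthetic data (uniform signal plus shot-like noise, not a real sub); the x2 SNR gain from x2 binning shows up directly:

```python
import numpy as np

def bin2x2(img):
    """Software-bin an image 2x2 by summing each 2x2 block of pixels.
    Height and width are assumed to be even."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Synthetic sub: uniform signal of 100 e- per pixel plus shot-like noise
rng = np.random.default_rng(0)
signal = 100.0
sub = signal + rng.normal(0.0, np.sqrt(signal), size=(512, 512))

binned = bin2x2(sub)

snr_before = sub.mean() / sub.std()
snr_after = binned.mean() / binned.std()
print(round(snr_after / snr_before, 2))  # ~2.0 - x2 binning doubles SNR
```

On a real sub the signal is not uniform, so you would measure SNR on a background patch, but the mechanics of the binning step are the same.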
  4. Btw, here is a random screen shot from a Google image search on this galaxy: If you compare the size of the galaxy and the size of the "star bloat" in this and your image - you will see that you are not far off.
  5. I agree about the CC - at such a small field, maybe only the far corners will be affected, and it is questionable whether it will show (or be masked by seeing / guiding errors). Another important fact to realize is that the 178 sensor + SW 150 F/5 Newtonian is going to give you a 0.66"/px sampling rate. That is a very high sampling rate and you are oversampling. It is also a small sensor, so the FOV is small. I don't know what you used for imaging previously, but it is likely that your stars appeared tighter not because they were indeed tight in absolute size, but because things were at a different scale. With this setup you are really "zooming in", and due to the laws of physics stars are no longer pin points but start to have rather substantial size (simply because there is a lack of resolution to render them as tight dots). The same thing would happen if you took an image with tight stars and then zoomed in (more than 100%) - stars would look bloated.
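The 0.66"/px figure comes from the standard pixel-scale formula; a quick sketch, assuming the 178's 2.4 um pixels and the 150 F/5's 750 mm focal length:

```python
def pixel_scale(pixel_um, focal_mm):
    """Sampling rate in arcseconds per pixel:
    206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# IMX178: 2.4 um pixels; SW 150 F/5 Newtonian: 750 mm focal length
print(round(pixel_scale(2.4, 750), 2))  # → 0.66
```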
  6. I second the above - an 8" dob is the best by far in that price range. The only consideration is its bulk/weight, both for transportation and storage and in use. If size is a major factor, how about having two scopes? An ST120 will be very good on DSOs and wide field, and adding something like a Mak102 will fill the planetary role better than either of the listed scopes. I think that ST120 + Mak102 will still be in the 80ED price range. The only difference would be in mounting considerations - if your friend plans to use both scopes at the same time, then a mount such as the SkyTee or Giro Ercole (mini), which has mountings on both sides, is a solution. If not, any commonly used mount will support both scopes (like the AZ4/AZ5 for example).
  7. Hi and welcome to SGL. Don't worry about tracking - manual dobs are quite usable even at the high magnifications used for planetary viewing. A dob is an excellent choice, but I think that a 10" will be a bit too much for you to handle regularly. The OTA (optical tube assembly - the main part of the telescope that you look through) for this telescope is almost 20kg in weight, while the base is 25kg. These two parts are easily assembled / disassembled so you will carry each one separately, but it is still quite a bit of weight and bulk in each. You might want to look at the 8" version as it is considerably lighter - about 10kg for the OTA and 16kg for the base (about two times lighter than the 10"), and you won't be losing much light gathering power. I have a solid tube 8" Skywatcher and it is quite manageable. It will even fit a small car - base in the trunk and OTA on the back seat, although it does take up much of the space there as it is about 1.2m long. I did travel in a 3-person arrangement with the scope in a smallish car, with the person sitting in the back needing to hold a piece of the OTA in their lap - not quite comfortable, but manageable. The collapsible 8" version might fit on a single seat when collapsed (not sure about that, but I guess it could). If you have not done so already, it would be worth looking at some videos on YouTube of said scopes - both 8" and 10" - to see how large they are compared to an average person. Most telescopes look smaller in images than in real life, so seeing one next to people puts things in perspective.
  8. In principle it can work with the stack, but you will need to "restack" it, and results will probably be poorer than doing it with subs. The idea is to split the image into multiple subs (thus avoiding the pixel blur that comes from larger pixels and preserving some of the sharpness at that stage). If you split subs, registration and stacking will do a better job of optimizing everything than doing the split on an already stacked image. So yes, it will work on the stack, but as I see it - better results would be had from doing it on each individual sub and then stacking those. I don't have the plugin written yet, so it will be a couple of days before I have it - no rush in this, we can take it slowly, step by step. Once I finish the plugin, I'll write a "tutorial" on how to apply it, and then you can do it when you have the time for it. Yes, there is no perfect sampling rate for all occasions - it really depends on a night-to-night basis (or even a sub-to-sub basis) as seeing / guiding performance changes. The choice of target also plays a part, both in brightness and in the local contrast of the target - blur is really loss of local contrast, so not all targets will respond equally to the same amount of blur. The position of the target in the sky relates to seeing effects as well - the lower the target, the more atmosphere you have between the target and the scope and the more chance for seeing effects.
  9. No criticism on my part - I think the image is as good as it gets.
  10. You are quite right about sharing 300 subs - even with a faster internet connection than mine, sharing almost 20GB of data is not an easy thing to do (it would take me days to upload the results on the net connection that I currently have). That is going to be a real pain to transfer. We could do things the other way around if it is not too involved for you. I could write an ImageJ plugin and explain step by step how to apply it to your subs. Transfer time in this case would be much less (a few KB if that, since I would send you source code and ImageJ would compile it when installing the plugin), but it would require you to follow a number of steps to apply the plugin to your subs. We can do that if you are willing. It would however still produce a smaller scale than you would like, unless you resize/upsample the image. This however won't bring any new detail into the image - it will just make things larger and a bit blurry. This might be the reason why there is a slight softness to the image - besides no detail being captured at this scale - once you upsample the image, softness starts to show at 1:1 even if the baseline image (the one that was upsampled) is properly sampled. Undersampling does not yield softness - it yields a lack of detail that could possibly have been captured, but the image should look sharp if viewed 1:1 without any sort of upscaling. It will look soft if you upsample it / enlarge things. Oversampling on the other hand leads to softness when you look at it 1:1. Indeed it does not fail to capture any additional detail, so no data is lost when oversampling, but the same thing happens as when you properly sample an image and then upscale / enlarge it - it becomes blurry because of the lack of the finest detail that could be displayed by single pixels (or rather local contrast). This is why I advocate proper sampling - the image should look sharp enough when viewed 1:1 (and also when viewed at screen size).
An undersampled image will look similarly sharp at 1:1 (or screen size), but because it is at such a small scale - things won't be as resolved as with proper sampling. With oversampling you get softness because things get really large in scale but with the finest detail missing (not because of sampling but because you could not capture that finest detail due to other things - like seeing / aperture / guiding errors and the associated blur). Does this make sense?
  11. There seems to be a prevalent trend in the comments here, if I'm not mistaken? People love the first image at screen size because it provides more depth and sharpness. People also love the second image on 1:1 inspection. Ideally you want the combination of the two in your final rendition (not the images, but the things that people prefer). The second image feels a bit soft because it is oversampled: at 1000mm, a 3.8um pixel size will give a resolution of 0.78"/px. At 5.4um pixel size and 1000mm focal length you will be sampling at 1.11"/px - this is closer to the optimum sampling rate for your conditions (which I suspect will be on the order of 1.2-1.3"/px with 130mm aperture and a good mount under average seeing in NB). Btw, the SNR ratio between these two resolutions is about x1.42, so things that are very low SNR in one image, say SNR of 3.5 (barely distinguished from the noise), will have SNR of 5 (perfectly acceptable to be included in the image as legitimate signal that will be fairly smooth with some denoising). @Rodd If you happen to have all the data from the first image and all the data from the second image (meaning lights and calibration frames, or at least calibrated lights for each channel) - maybe we could make a "master image" by combining all the data at the same resolution - that being 1.11"/px? We could do a little "collaboration" perhaps? I have a fractional binning algorithm that works on calibrated subs rather than on the final result (I think it will be better that way), and that actually does not produce "binned" subs, but splits subs in such a way that makes more of them at the target resolution. We could either make all subs sampled at 1.11"/px if you have all the data, or maybe target 1.2-1.3"/px if you have the data for the second image only, and see if you can process such a stack to have features of both images - sharpness and depth at screen size, and detail and sharpness (as opposed to softness) at 1:1 zoom?
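The x1.42 SNR ratio quoted above follows from the two sampling rates: per-pixel signal grows with pixel area (pixel scale squared) while shot noise grows with its square root, so per-pixel SNR scales linearly with pixel scale. A quick check with the numbers from this post:

```python
def pixel_scale(pixel_um, focal_mm):
    """Arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

fine = pixel_scale(3.8, 1000)    # ≈ 0.78 "/px
coarse = pixel_scale(5.4, 1000)  # ≈ 1.11 "/px

# Per-pixel SNR scales linearly with sampling rate ("/px)
print(round(coarse / fine, 2))  # → 1.42
```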
  12. A hint for everyone wishing to know the camera used for each image: both images were taken at the same focal length but different pixel scale (one is more "close in" than the other, and the other has better SNR for the same integration time - "resolution at aperture" at work), and neither was binned. The cameras in question have different pixel sizes ....
  13. It's rather obvious which image is from which camera. Therein lies part of the explanation why some people find the upper image better - it has more depth / SNR because it was captured with a certain camera (not due to the camera being better, but one feature of that camera vs the other makes it go deeper in this case - this can be remedied with certain processing steps). I personally like the second image (and may be a bit biased by knowing which was taken with what camera) - but I think it is down to not "blowing it up" in the second image and the clearness of the bubble, although it is not sharpened as much (maybe a bit would not hurt in the second image).
  14. Very nice image indeed. I have a few remarks, but don't take them as criticism of your work - just general observations. Although I like the achieved palette, I would not call it SHO. This is something that is generally applicable to "SHO" renditions amateurs do, but your work here is a good showcase for my point, so I'd like to use it for that. Most of us start by assigning SHO to RGB channels and then proceed to tweak things to be visually satisfactory (often influenced by the previous work of others and online tutorials - there is now a "strong" sense of how an image in the "SHO" palette should look), but we end up losing what the SHO palette is all about - identifying the different gaseous regions of the object, notably the presence of hydrogen, sulfur and oxygen gas. Here is what I mean - SHO maps the SII region to the red channel of the image, hence one would expect that if there is a distinctly red part of the image, it should contain mostly SII gas. In other words - the SII stack should have signal present in that place, while Ha and OIII should lack it. Since you kindly provided stretched versions of all stacks - let's look at whether that is the case. The top right corner shows a structure that is particularly red: But look at the SII stack in the same region: I'm not seeing that signal in there. Nor in OIII for that matter: If we look at the Ha stack on the other hand - the above structure, rendered in red, is clearly present and very strong: What I believe is happening in this particular case is that the background level of SII is higher than the other two after stretch and the Ha luminance is strong, so mixing those two makes red color in those regions - not because of SII signal, but because of SII background. You can clearly see this in the following region: This is the lower central part of Ha - the background is very nice and black here - no signal captured.
The same regions in OIII and SII: You can almost make out the same structure, but the background is fairly grey here instead of being black as in the above segment. There is room for improvement here if you follow my argument and want to add that level of information to your rendering - try to do a similar level of stretch (actually Ha is often much stronger, as you've seen from your capture, and is stretched quite a bit less than the other two channels) - and particularly pay attention to background levels being equal. I advise strongly against using 16 bits per channel in any segment of the workflow - use 32 bit if you can for all intermediate steps. I would advise against this. Use of synthetic luminance is a fair processing technique, but one needs to be careful about how to create a good synthetic luminance from narrowband data. One, most frequently used, way is to just use the Ha data. Ideally you want to make it by adding all three stacks while still linear - that will provide the most accurate luminance. The problem with this approach is that Ha is high SNR data while the other two are low SNR data - by virtue of having much lower signal to be captured in the first place. Adding stacks together does indeed add the signal together, but it also adds the noise, and you end up with a result that has a lower SNR than the single Ha channel. A weighted algorithm does produce a slightly better result, but in all likelihood you will still end up with lower SNR than using the Ha stack alone as synthetic luminance. A good approach for creating synthetic luminance is: look at stretched versions of all three stacks, and observe if there is OIII / SII signal that is independent of Ha signal - it will rarely be the case; in most cases there will be Ha signal (which is stronger) in all the places where there is SII or OIII. If Ha covers it all - just use Ha as luminance.
If there are regions with SII or OIII where there is no Ha (often in SN remnants like the Crab, where gasses move at different speeds due to different mass) - use a "max" stack of the three stacks. It should provide better SNR than addition. Alternatively you can use "max" only on regions where there is SII and/or OIII, and Ha for the rest (combined using masks).
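The "max" stack idea above is just a per-pixel maximum over the three linear stacks. A minimal numpy sketch on made-up data (the channel values here are purely illustrative, with one hypothetical OIII-only feature that Ha lacks):

```python
import numpy as np

# Illustrative registered, linear narrowband stacks with made-up values:
# strong Ha everywhere, weak OIII/SII, plus one OIII-only feature
rng = np.random.default_rng(1)
ha = 8.0 + rng.normal(0, 1, (64, 64))    # high SNR channel
oiii = 1.0 + rng.normal(0, 1, (64, 64))  # low SNR channel
sii = 1.0 + rng.normal(0, 1, (64, 64))   # low SNR channel
oiii[20:30, 20:30] += 12.0               # OIII-only feature, no Ha counterpart

# Per-pixel "max" keeps the OIII-only feature while mostly following the
# clean Ha data, instead of adding the noise of all three as a sum would
synthetic_lum = np.maximum.reduce([ha, oiii, sii])
```

Where Ha dominates, the max simply returns Ha, which is why the result keeps close to Ha's SNR outside the OIII-only region.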
  15. Indeed. A scope like an 80mm F/4.5 or thereabouts (~360mm) will give about a x3.25 FOV increase (~x1.8 in both width and height), so it can be considered a wide field upgrade over the 130PDS. Either an 80mm F/5 or F/6 with a Riccardi FF/FR (x0.75) will get you that sort of focal length.
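The x3.25 / x1.8 figures follow from the focal length ratio (assuming the 130PDS's 650 mm focal length): FOV scales linearly with 1/focal length in each dimension, so area scales with its square.

```python
# FOV per dimension scales inversely with focal length; area with its square
fl_130pds = 650  # mm, SW 130PDS
fl_wide = 360    # mm, e.g. an 80mm F/4.5
linear = fl_130pds / fl_wide
print(round(linear, 1), round(linear ** 2, 2))  # → 1.8 3.26
```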
  16. Then just don't worry about sampling rate. In wide field setups you need to pay attention to the fully corrected circle and sensor size. You are not after the finest detail, so the actual sampling (or rather undersampling) is of no particular concern. A 4/3-size sensor like the 294 should have no issues with field correction on 80mm triplets with a suitable field flattener.
  17. What would be the main reason for this upgrade? The 130PDS is very close in performance to most budget refractors (80-100mm), the differences being of course diffraction spikes and the need for a coma corrector. Other than that it will outperform the refractors in light gathering, having greater aperture. If you are looking for a wider field setup - then don't bother with the "suitability" calculator - get something like an Esprit 80 or TS 80 triplet with a matching field flattener / focal reducer to get to somewhere around F/4.5 or so, and you will have a great wide field setup. If your primary reason for upgrading is that you want a "less fiddly" scope / no collimation, but don't want to go wide field or give up aperture - have a look at this scope then: https://www.teleskop-express.de/shop/product_info.php/info/p6880_Explore-Scientific-MN-152-f-4-8-Maksutov-Newtonian-Telescope-with-Field-Correction.html
  18. Did you change anything between master dark and lights like exposure time, temperature, gain, offset? Another thing that can cause issues with amp glow would be dark optimization - you need to avoid it in stacking / calibration, so check if you have it turned on in software that you are using.
  19. You are comparing Airy disk diameter vs FWHM (seeing) or sigma (guide RMS) values. If we do a Gaussian approximation to the Airy pattern, you will find that the sigma of that approximation is about one third of the disk radius. The value that you quote - about 3.2" - is the Airy disk diameter, which means that sigma is about six times less than that. This would mean that comparable values (all converted to the sigma of a Gaussian) are like: Airy disk - 0.53". Guiding RMS (that one stays the same) - you can put it at 0.5" RMS or whatever you are getting. Seeing of let's say 2" (this is a FWHM value, and to get sigma you need to divide it by ~2.355) gives us ~0.85". In this case you can see that the Airy disk size expressed in the same units is not the dominant component and is not much larger than either the seeing or the guiding error.
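The unit conversions above can be written out directly (FWHM-to-sigma uses the standard Gaussian factor 2*sqrt(2*ln 2) ≈ 2.355):

```python
FWHM_TO_SIGMA = 1 / 2.355  # sigma = FWHM / (2*sqrt(2*ln 2)) ≈ FWHM / 2.355

airy_diameter = 3.2             # arcsec, the quoted Airy disk diameter
airy_sigma = airy_diameter / 6  # sigma ≈ one third of the disk *radius*
seeing_sigma = 2.0 * FWHM_TO_SIGMA  # 2" FWHM seeing
guide_rms = 0.5                 # RMS is already a sigma

print(round(airy_sigma, 2), round(seeing_sigma, 2), guide_rms)  # → 0.53 0.85 0.5
```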
  20. You can always scale your flats to 100% to avoid any color balance issues that your flat box produces. The best way to do it would be to measure the top 5% of pixels of each color - read off the average of those and divide each color by that ADU value - that way flats will be scaled to the 0-1 range (or 0-100%, however you like to look at that). It does require treating each color separately. Alternatively you can produce a color transform matrix based on the results you get from your light box. Shooting raw simply does not produce proper color balance, and you need to color correct your image even if you have a "color balanced" flat panel. Just scaling R and B compared to G will give you some correction but not full correction. Star color calibration can be one way to do it. Shooting a color chart and then calculating a transform can be another.
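The "measure the top 5% and divide" step might look like this numpy sketch (function name and the per-channel calling convention are my own, not from any particular stacking package):

```python
import numpy as np

def scale_flat(channel):
    """Scale one colour channel of a master flat toward the 0-1 range by
    dividing by the mean ADU of its brightest 5% of pixels."""
    threshold = np.quantile(channel, 0.95)       # 95th percentile ADU value
    norm = channel[channel >= threshold].mean()  # average of the top 5%
    return channel / norm
```

You would apply it to R, G and B separately (e.g. after splitting a debayered master flat), so each channel ends up normalized to its own brightest region.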
  21. In principle it matters, but in reality there is a way to minimize the impact. We put the peak of the histogram to the right - meaning high values - because we want strong signal in flats to maximize the signal to noise ratio (more signal - higher SNR). There is another way to improve SNR - stacking (which is very much like taking a very long exposure, and bypasses the full well limit of the sensor). When doing OSC flats, aim for the highest peak to be around 80-85%. This will put the lowest peak at around 40ish % - or about half of the highest one. That is a difference of ~x1.41 in SNR for the high peak color vs the low peak color. Doing twice as many flats improves SNR by the same amount (stacking improves SNR by the square root of the number of frames - do twice as many and you will improve SNR by ~x1.41). Most sensors these days are linear enough that you don't have to worry about that aspect. If you feel that your flats are noisy, just take more of them, it's that simple (I always recommend more calibration frames - more is better, simple as that - same as with lights: more lights, better image). Don't be tempted to add additional filters when doing flats - it will mess up your flat calibration, and there is certainly no need to do it.
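The sqrt(N) stacking behaviour is easy to demonstrate on synthetic flat frames (uniform illumination plus shot-like noise; the frame count and signal level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
signal = 50.0  # arbitrary uniform illumination level

def flat_stack_snr(n_frames):
    """Average n synthetic noisy flat frames and measure the result's SNR."""
    frames = signal + rng.normal(0, np.sqrt(signal), size=(n_frames, 200, 200))
    master = frames.mean(axis=0)
    return master.mean() / master.std()

snr_1 = flat_stack_snr(1)
snr_2 = flat_stack_snr(2)
print(round(snr_2 / snr_1, 2))  # ≈ 1.41, i.e. sqrt(2)
```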
  22. I think it is mostly practical for Alt/Az type of mount, and above is AzEq6 - it comes with this adapter. I've seen adapters to do it regardless, here is one for example: https://www.teleskop-express.de/shop/product_info.php/info/p229_TS-Optics-Piggyback-Camera-Holder-for-D-20-mm-Counterweight-Shafts.html (that one is for camera or maybe finderscope, or lightweight rig). Geoptik also makes "generic" ones like this: https://www.teleskop-express.de/shop/product_info.php/info/p5555_Geoptik-Counterweight-Shaft-Adapter---diameter-25mm.html
  23. Actually in this context 4/3 sensor does specify diagonal size. One way of specifying the diagonal of a sensor is to use fractions of an inch - 1/3", 1/2", 1/1.7", 2/3", etc. I don't think 4/3 means 4/3", but it is a "legitimate" sensor size name - "Four Thirds" or "Micro Four Thirds" - and stands for a diagonal of 21.60mm https://en.wikipedia.org/wiki/Image_sensor_format
  24. Not likely that it would cause the difference. Here is a little comparison of the effects: let's suppose that we have rather decent seeing at 1.5" FWHM and that it affects both wavelengths the same (500nm and 650nm). Both are on the same mount, which is decent in performance for such a focal length and guides with 0.5" RMS error. What are the expected star FWHMs for each? 500nm: ~2.3"; 650nm: ~2.52". Although there is something like 30% or more difference in the size of the Airy disk, given that convolution of PSFs (seeing, Airy and guide error) adds FWHM as the square root of the sum of squares, and the Airy disk being the smallest of the three, the result will be much less in total FWHM - in this case, there will be an increase of about 9.5%. Once we take into account seeing impact vs wavelength, we should end up with roughly the same FWHM, or Ha even having a small edge. This is of course for a perfect aperture. Any aberrations will have a blurring effect that will raise FWHM.
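The quadrature combination above can be sketched as follows. The Airy FWHM value at 500nm is my assumption, back-fitted to be consistent with the quoted ~2.3"/~2.52" totals (it corresponds to a modest aperture); the Airy size then scales linearly with wavelength:

```python
import math

def combined_fwhm(*components):
    """Total FWHM of convolved Gaussian-like PSFs:
    square root of the sum of squares of the component FWHMs."""
    return math.sqrt(sum(c * c for c in components))

seeing = 1.5                      # arcsec FWHM, assumed equal at both wavelengths
guide = 0.5 * 2.355               # 0.5" RMS guiding converted to FWHM
airy_500 = 1.28                   # assumed Airy FWHM at 500 nm for this setup
airy_650 = airy_500 * 650 / 500   # Airy size scales linearly with wavelength

print(round(combined_fwhm(seeing, guide, airy_500), 2))  # → 2.3
print(round(combined_fwhm(seeing, guide, airy_650), 2))  # → 2.53
```

Because the Airy term is the smallest of the three, its ~30% growth moves the total by only ~10%, which is the point of the comparison.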