Everything posted by vlaiv

  1. Hi and welcome to SGL. Don't worry about tracking - manual Dobs are quite usable even at the high magnifications used for planetary viewing. A Dob is an excellent choice, but I think a 10" will be a bit too much for you to handle regularly. The OTA (optical tube assembly - the main part of the telescope that you look through) for this telescope weighs almost 20kg, while the base weighs 25kg. These two parts are easily assembled / disassembled, so you will carry each one separately, but it is still quite a bit of weight and bulk in each. You might want to look at the 8" version, as it is considerably lighter - about 10kg for the OTA and 16kg for the base (roughly half the weight of the 10") - and you won't be losing much light gathering power. I have the solid tube 8" Skywatcher and it is quite manageable. It will fit in even a small car - base in the trunk and OTA on the back seat - but it does take up much of the space there, as it is about 1.2m long. I did travel in a three-person arrangement with the scope in a smallish car, with the person sitting in the back needing to hold part of the OTA in their lap - not quite comfortable, but manageable. The collapsible 8" version might fit on a single seat when collapsed (not sure about that, but I guess it could). If you have not done so already, it would be worth looking at some YouTube videos of both the 8" and 10" scopes to see how large they are compared to an average person. Most telescopes look smaller in images than in real life, so seeing one next to people puts things in perspective.
  2. In principle it can work on the stack, but you would need to "restack" it, and results will probably be poorer than doing it with subs. The idea is to split each image into multiple subs (thus avoiding the pixel blur that comes from larger pixels and preserving some of the sharpness at that stage). If you split the subs, registration and stacking will do a better job of optimizing everything than doing the split on an already stacked image. So yes, it will work on a stack, but as I see it, better results would be had from doing it on each individual sub and then stacking those (see the sketch below for the splitting idea). I don't have the plugin written yet, so it will be a couple of days before I have it - no rush, we can take it slowly, step by step. Once I finish the plugin, I'll write a "tutorial" on how to apply it, and then you can do it when you have the time ... Yes, there is no perfect sampling rate for all occasions - it really varies on a night-to-night basis (or even sub-to-sub) as seeing / guiding performance changes. Choice of target also plays a part, both in brightness and in local contrast of the target - blur is really loss of local contrast, so not all targets will respond equally to the same amount of blur. The position of the target in the sky relates to seeing effects as well - the lower the target, the more atmosphere you have between the target and the scope, and the more chance for seeing effects.
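For anyone curious what "splitting a sub" might look like, here is a minimal sketch of the idea, assuming calibrated subs loaded as 2D numpy arrays (my own illustration, not the actual plugin; split_sub is a hypothetical helper):

```python
import numpy as np

def split_sub(img, n=2):
    """Split one sub into n*n offset sub-images by pixel decimation.

    Each sub-image keeps every n-th pixel at a different (dy, dx) offset,
    so it is sampled at 1/n of the original rate, but without the extra
    pixel blur that averaging-based binning would introduce. The n*n
    results are then registered and stacked like ordinary subs.
    """
    return [img[dy::n, dx::n] for dy in range(n) for dx in range(n)]

# e.g. turn one sub sampled at 0.55"/px into four subs at 1.1"/px:
# smaller_subs = split_sub(calibrated_sub, n=2)
```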
  3. No criticism on my part - I think the image is as good as it gets.
  4. You are quite right about sharing 300 subs - even with a faster internet connection than mine, sharing almost 20GB of data is not easy (it would take me days to upload on the connection I currently have). That is going to be a real pain to transfer. We could do things the other way around, if it is not too involved for you. I could write an ImageJ plugin and explain step by step how to apply it to your subs. Transfer time in this case would be much less (a few KB, if that, since I would send you source code and ImageJ would compile it when installing the plugin), but it would require you to follow a number of steps to apply the plugin to your subs. We can do that if you are willing. It would however still produce a smaller scale than you would like, unless you resize/upsample the image. That won't bring any new detail into the image - it just makes things larger and a bit blurry. This might be the reason for the slight softness in the image - besides no detail being captured at this scale, once you upsample an image, softness starts to show at 1:1 even if the baseline image (the one that was upsampled) is properly sampled. Undersampling does not yield softness - it yields a lack of detail that could possibly have been captured, but the image should look sharp if viewed 1:1 without any upscaling. It will look soft if you upsample / enlarge it. Oversampling, on the other hand, leads to softness when you view the image 1:1. Indeed, it does not fail to capture any additional detail, so no data is lost when oversampling, but the same thing happens as when you properly sample an image and then upscale / enlarge it - it becomes blurry because the finest detail (or rather local contrast) that could be displayed by single pixels is missing. This is why I advocate proper sampling - the image should look sharp when viewed 1:1 (and also when viewed at screen size). An undersampled image will look similarly sharp at 1:1 (or at screen size), but because the scale is so small, things won't be as resolved as with proper sampling. With oversampling you get softness because things get really large in scale but with the finest detail missing (not because of the sampling itself, but because that finest detail could not be captured due to other things - seeing / aperture / guiding errors and the associated blur). Does this make sense?
  5. There seems to be a prevalent trend in the comments here, if I'm not mistaken? People love the first image at screen size because it provides more depth and sharpness. People also love the second image on 1:1 inspection. Ideally you want the combination of the two in your final rendition (not the images themselves, but the things people prefer about each). The second image feels a bit soft because it is oversampled: at 1000mm, a 3.8um pixel gives a sampling rate of 0.78"/px. With 5.4um pixels and 1000mm focal length you will be sampling at 1.11"/px - much closer to the optimum sampling rate for your conditions (which I suspect will be on the order of 1.2-1.3"/px with 130mm of aperture and a good mount under average seeing in NB); see the quick calculation below. Btw, the SNR ratio between these two resolutions is about x1.42, so things that are very low SNR in one image, say an SNR of 3.5 (barely distinguishable from the noise), will have an SNR of 5 (perfectly acceptable to include in the image as legitimate signal that will be fairly smooth with some denoising). @Rodd If you happen to have all the data from the first image and all the data from the second image (meaning lights and calibration frames, or at least calibrated lights for each channel), maybe we could make a "master image" by combining all the data at the same resolution - 1.11"/px? We could do a little "collaboration" perhaps? I have a fractional binning algorithm that works on calibrated subs rather than on the final result (I think it will be better that way), which does not actually produce "binned" subs, but splits subs in a way that makes more of them at the target resolution. We could either make all subs sampled at 1.11"/px if you have all the data, or maybe target 1.2-1.3"/px if you have the data for the second image only, and see if you can process such a stack to have the features of both images - sharpness and depth at screen size, and detail and sharpness (as opposed to softness) at 1:1 zoom?
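Here is the arithmetic behind those sampling rates, as a small sketch (the x1.42 figure assumes shot-noise-limited data, where per-pixel SNR scales with pixel side length):

```python
# sampling rate in arcsec/px: 206.265 * pixel size [um] / focal length [mm]
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(pixel_scale(3.8, 1000))  # ~0.78 "/px
print(pixel_scale(5.4, 1000))  # ~1.11 "/px

# per-pixel SNR gain of the coarser sampling, shot-noise limited:
print(5.4 / 3.8)               # ~1.42x
```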
  6. Hint for everyone wishing to know which camera was used for each image: both images were taken at the same focal length but at different pixel scales (one is more "close in" than the other, and the other has better SNR for the same integration time - "resolution at aperture" at work); neither was binned. The cameras in question have different pixel sizes ....
  7. It's rather obvious which image is from which camera. Therein lies part of the explanation why some people find the upper image better - it has more depth / SNR because it was captured with a certain camera (not due to that camera being better, but one feature of that camera vs the other makes it go deeper in this case - something that can be remedied with certain processing steps). I personally like the second image (and may be a bit biased by knowing which was taken with which camera) - but I think it is down to not "blowing it up" in the second image and the clearness of the bubble, although it is not sharpened as much (maybe a bit more would not hurt in the second image).
  8. Very nice image indeed. I have a few remarks, but don't take them as criticism of your work - just general observations. Although I like the achieved palette, I would not call it SHO. This is something that applies generally to the "SHO" renditions amateurs do, but your work here is a good showcase for my point, so I'd like to use it for that. Most of us start by assigning SHO to the RGB channels and then proceed to tweak things to be visually satisfactory (often influenced by previous work of others and online tutorials - there is now a "strong" sense of how an image in the "SHO" palette should look), but we end up losing what the SHO palette is all about - identifying the different gaseous regions of the object, notably the presence of hydrogen, sulfur and oxygen gas. Here is what I mean - SHO maps the SII signal to the red channel of the image, hence one would expect that if there is a distinctly red part of the image, it should contain mostly SII gas. In other words, the SII stack should have signal present in that place while Ha and OIII should lack it. Since you kindly provided stretched versions of all stacks, let's look at whether that is the case. The top right corner shows a structure that is particularly red: But look at the SII stack in the same region: I'm not seeing that signal there. Nor in OIII, for that matter: If we look at the Ha stack on the other hand, the above structure, rendered in red, is clearly present and very strong: What I believe is happening in this particular case is that the background level of SII is higher than the other two after stretch, and the Ha luminance is strong, so mixing those two makes red color in those regions - not because of SII signal, but because of SII background. You can clearly see this in the following region: This is the lower central part of Ha - the background is very nice and black here, no signal captured. The same region in OIII and SII: You can almost make out the same structure, but the background is fairly grey here instead of being black as in the segment above. There is room for improvement here if you follow my argument and want to add that level of information to your rendering - try to do a similar level of stretch on all three channels (Ha is often much stronger, as you've seen from your capture, and should be stretched quite a bit less than the other two) - and particularly pay attention to the background levels being equal. I strongly advise against using 16 bits per channel in any segment of the workflow - use 32 bit if you can for all intermediate steps. I would also advise against this. Use of synthetic luminance is a fair processing technique, but one needs to be careful about how to create a good synthetic luminance from narrowband data. One way, most frequently used, is to just use the Ha data. Ideally you would make it by adding all three stacks while still linear - that will provide the most accurate luminance. The problem with this approach is that Ha is high SNR data while the other two are low SNR data, by virtue of having much lower signal to capture in the first place. Adding stacks together does indeed add signal, but it also adds noise, and you end up with a result that has a lower SNR than the single Ha channel. A weighted algorithm produces a slightly better result, but in all likelihood you will still end up with lower SNR than using the Ha stack alone as synthetic luminance.
A good approach for creating synthetic luminance is this: look at the stretched versions of all three stacks, and observe whether there is OIII / SII signal that is independent of the Ha signal. It will rarely be the case - in most places there will be Ha signal (which is stronger) wherever there is SII or OIII. If Ha covers it all, just use Ha as luminance. If there are regions with SII or OIII where there is no Ha (often in SN remnants like the Crab, where gasses move at different speeds due to different mass), use a "max" stack of the three stacks - it should provide better SNR than addition. Alternatively, you can use "max" only on regions where there is SII and/or OIII, and Ha for the rest (combined using masks); a sketch of this follows below.
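A minimal sketch of that "max" synthetic luminance idea, assuming the three linear, aligned stacks are available as 2D numpy arrays (ha, oiii, sii and mask are placeholder names):

```python
import numpy as np

def synthetic_luminance(ha, oiii, sii, mask=None):
    """Per-pixel max of the three stacks; if `mask` marks the regions
    with OIII/SII-only signal, use max there and plain Ha elsewhere."""
    lum_max = np.maximum(ha, np.maximum(oiii, sii))
    if mask is None:
        return lum_max
    return np.where(mask, lum_max, ha)
```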
  9. Indeed. A scope like an 80mm F/4.5 or thereabouts (~360mm focal length) will give about a x3.25 FOV increase (~x1.8 in both width and height), so it can be considered a wide field upgrade over the 130PDS - see the quick calculation below. Either an 80mm F/5 or F/6 with a Riccardi FF/FR (x0.75) will get you that sort of focal length.
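A quick back-of-the-envelope check, taking the 130PDS's 650mm focal length as the baseline:

```python
fl_130pds = 650.0   # mm, 130PDS focal length
fl_80 = 360.0       # mm, ~80mm F/4.5 scope

linear = fl_130pds / fl_80   # ~1.8x wider in both width and height
area = linear ** 2           # ~3.25x more sky area on the same sensor
print(linear, area)
```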
  10. Then just don't worry about sampling rate. In wide field setups you need to pay attention to the fully corrected circle and the sensor size. You are not after the finest detail, so the actual sampling (or rather undersampling) is of no particular concern. A 4/3 size sensor like the 294 should have no issues with field correction on 80mm triplets with a suitable field flattener.
  11. What would be the main reason for this upgrade? The 130PDS is very close in performance to most budget refractors (80-100mm), the differences being, of course, diffraction spikes and the need for a coma corrector. Other than that, it will outperform those refractors in light gathering, having greater aperture. If you are looking for a wider field setup, then don't bother with the "suitability" calculator - get something like an Esprit 80 or TS 80 triplet with a matching field flattener / focal reducer to get to somewhere around F/4.5 or so, and you will have a great wide field setup. If your primary reason for upgrading is that you want a "less fiddly" scope / no collimation, but don't want to go wide field or give up aperture, have a look at this scope: https://www.teleskop-express.de/shop/product_info.php/info/p6880_Explore-Scientific-MN-152-f-4-8-Maksutov-Newtonian-Telescope-with-Field-Correction.html
  12. Did you change anything between the master dark and the lights, like exposure time, temperature, gain or offset? Another thing that can cause issues with amp glow is dark optimization - you need to avoid it in stacking / calibration, so check whether you have it turned on in the software you are using.
  13. You are comparing Airy disk diameter vs FWHM (seeing) or sigma (guide RMS) values. If we make a Gaussian approximation to the Airy pattern, you will find that the sigma of that approximation is about one third of the disk radius. The value you quote - about 3.2" - is the Airy disk diameter, which means sigma is about six times less than that. Comparable values (all converted to the sigma of a Gaussian, see the sketch below) look like this: Airy disk - ~0.53". Guiding RMS (that one is already a sigma) - you can put it at 0.5" RMS, or whatever you are getting. Seeing of, let's say, 2" (a FWHM value; to get sigma you need to divide it by ~2.355) - ~0.85". In this case you can see that the Airy disk size, expressed in the same units, is not the dominant component and is not much larger than either the seeing or the guiding error.
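The conversions in code form, using the values above (assumed for illustration):

```python
airy_diameter = 3.2                  # arcsec, Airy disk diameter
sigma_airy = airy_diameter / 6.0     # sigma ~ disk radius / 3 = diameter / 6

sigma_guiding = 0.5                  # arcsec RMS, already a Gaussian sigma

seeing_fwhm = 2.0                    # arcsec FWHM
sigma_seeing = seeing_fwhm / 2.355   # FWHM = 2.355 * sigma for a Gaussian

print(sigma_airy, sigma_guiding, sigma_seeing)  # ~0.53, 0.5, ~0.85
```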
  14. You can always scale your flats to 100% to avoid any color balance issues that your flat box produces. The best way to do it would be to measure the top 5% of pixels of each color, read off the average of those, and divide each color by that particular ADU value - that way flats will be scaled to the 0-1 range (or 0-100%, however you like to look at it). It does require treating each color separately (a sketch follows below). Alternatively, you can produce a color transform matrix based on the results you get from your light box. Shooting raw simply does not produce proper color balance, and you need to color correct your image even if you have a "color balanced" flat panel. Just scaling R and B relative to G will give you some correction, but not full correction. Star color calibration can be one way to do it. Shooting a color chart and then calculating a transform can be another.
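A minimal sketch of the per-channel flat scaling described above, assuming the master flat's color planes are available as 2D numpy arrays:

```python
import numpy as np

def scale_flat_channel(channel):
    """Scale one color plane of a master flat to the 0-1 range using the
    average ADU of its brightest 5% of pixels."""
    flat = channel.astype(np.float64)
    threshold = np.quantile(flat, 0.95)    # cut-off for the top 5% of pixels
    norm = flat[flat >= threshold].mean()  # average ADU of those pixels
    return flat / norm

# applied separately to each color plane:
# r, g, b = (scale_flat_channel(c) for c in (r, g, b))
```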
  15. In principle it matters, but in reality there is a way to minimize the impact. We put the peak of the histogram to the right - meaning high values - because we want strong signal in flats to maximize signal to noise ratio (more signal - higher SNR). There is another way to improve SNR - stacking (which is very much like taking a very long exposure, and bypasses the full well limit of the sensor). When doing OSC flats, aim for the highest peak to be around 80-85%. This will put the lowest peak at around 40-ish % - about half of the highest one. That is a difference of ~x1.41 in SNR for the high peak color vs the low peak color. Doing twice as many flats improves SNR by the same amount (stacking improves SNR by the square root of the number of frames - do twice as many and you will improve SNR by ~x1.41; the arithmetic is sketched below). Most sensors these days are linear enough that you don't have to worry about that aspect. If you feel that your flats are noisy, just take more of them - it's that simple (I always recommend more calibration frames - more is better, same as with lights: more lights, better image). Don't be tempted to add additional filters when doing flats - it will mess up your flat calibration, and there certainly is no need for it.
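The shot-noise arithmetic behind those numbers, as a sketch:

```python
import math

# SNR of a flat scales with the square root of collected signal, so a color
# whose histogram peak sits at half the ADU level of another is worse by:
print(math.sqrt(2))   # ~1.41x

# stacking N frames improves SNR by sqrt(N); doubling the number of flats
# therefore recovers exactly that ~1.41x factor:
print(math.sqrt(2))   # ~1.41x
```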
  16. I think it is mostly practical for Alt/Az type mounts, and the mount above is an AZ-EQ6 - it comes with this adapter. I've seen adapters that do it regardless; here is one example: https://www.teleskop-express.de/shop/product_info.php/info/p229_TS-Optics-Piggyback-Camera-Holder-for-D-20-mm-Counterweight-Shafts.html (that one is for a camera, or maybe a finderscope or a lightweight rig). Geoptik also makes "generic" ones like this: https://www.teleskop-express.de/shop/product_info.php/info/p5555_Geoptik-Counterweight-Shaft-Adapter---diameter-25mm.html
  17. Actually, in this context 4/3 does specify the sensor's diagonal size. One way of specifying the diagonal of a sensor is to use fractions of an inch - 1/3", 1/2", 1/1.7", 2/3", etc. I don't think 4/3 means 4/3", but it is a "legitimate" sensor size name - "Four Thirds" or "Micro Four Thirds" - standing for a diagonal of 21.60mm: https://en.wikipedia.org/wiki/Image_sensor_format
  18. Not likely that it would cause the difference. Here is a little comparison of the effects. Let's suppose that we have rather decent seeing at 1.5" FWHM, and that it affects both wavelengths (500nm and 650nm) the same. Both are on the same mount, which is decent in performance for such a focal length and guides with 0.5" RMS error. What are the expected star FWHMs for each? 500nm: ~2.3"; 650nm: ~2.52" (see the sketch below). Although there is something like a 30% or more difference in the size of the Airy disk, given that convolution of the PSFs (seeing, Airy pattern and guide error) combines FWHMs as the square root of the sum of squares, and the Airy disk is the smallest of the three, the difference in total FWHM is much smaller - in this case an increase of about 9.5%. Once we take into account seeing impact vs wavelength, we should end up with roughly the same FWHM, or even Ha having a small edge. This is of course for a perfect aperture - any aberrations will have a blurring effect that raises FWHM.
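The quadrature combination in code form, with Airy FWHM values assumed so as to reproduce the quoted results (roughly an 80mm aperture):

```python
import math

def total_fwhm(seeing_fwhm, guide_rms, airy_fwhm):
    """Combine seeing, guiding and Airy blur (all Gaussian-approximated)
    as the square root of the sum of squared FWHMs, in arcsec."""
    guide_fwhm = guide_rms * 2.355   # RMS (sigma) -> FWHM
    return math.sqrt(seeing_fwhm**2 + guide_fwhm**2 + airy_fwhm**2)

print(total_fwhm(1.5, 0.5, 1.29))   # ~2.3"  at 500nm (assumed Airy FWHM)
print(total_fwhm(1.5, 0.5, 1.65))   # ~2.52" at 650nm (assumed Airy FWHM)
```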
  19. Yes, quite close to what it should be, and you are right about the green cast in the center - lowering the green component will make the center of the galaxy more yellow (red + green = yellow).
  20. You are not far off ... The colors are a bit washed out due to the way you processed the image, but in principle it is fairly close to what it should look like. Don't be swayed by the majority of renditions of this galaxy found online - most are just too saturated and generally not quite correct. Color renditions like these: are overly saturated, and blue is pushed far more than it should be. On the other hand, renditions like these are closer to the real thing: Look at the Hubble rendition of the central part of this galaxy: Blue is rather scarce in the outer arms and mixed with Ha regions, so it is more "purplish" than pure blue; the core is deep yellow / orange, surrounded by a ring of gas and Ha regions which have a deep red / brown look. Here is one of my images where I was playing with getting correct color for this target:
  21. There is a simpler test to perform than the Roddier test - not that that one is complicated, but it requires a bit more processing. Try imaging without the field flattener. Maybe the optics of the scope are fine, and the field flattener is poorly corrected in the red part of the spectrum. You will have a smaller usable field for your test, but stars close to the optical axis should be a good indication - they will be sharp.
  22. Here is a brief overview of how it is done, but there is a much nicer explanation in the documents accompanying the software that is used. It would be best if you have access to NB filters - that way you can do an exact measurement at certain wavelengths. I have OIII, Ha and also a Baader solar continuum filter that is around 530nm or so, but have not done the measurement with those. You can also do it with regular RGB filters, but results will not be as precise - you will get a sort of "integrated" Strehl ratio over the respective bands. The same happens with OSC cameras. You will need the WinRoddier 3.0 software (or any more recent version; the last time I did this it was with version 3.0). You will also need some sort of application for fast capture, like planetary imaging - SharpCap will serve this purpose. Last, you will need planetary stacking software - AS!3 for example. The measurement is done by recording a defocused star pattern at a particular wavelength (or filter). You should select a fairly bright star - like mag 2-3 - high up in the sky to minimize seeing effects. A night of good seeing will give you more precision in your results (similar to planetary imaging). If you have OSC, you record all three bands at the same time, then split the channels later and examine each of the R, G and B recordings (a sketch of the channel split follows below). You proceed by making short videos of about 1 to 2 minutes - one of the inside-focus pattern and one of the outside-focus pattern. There is a method to calculate what the diameter of the defocused star pattern needs to be in pixels, depending on the focal length of the scope and the size of the camera pixels; it is included as a separate small piece of software and is described in the how-to document / manual. When you finish your movies and calibrate them, you stack them in AS!3. It is important that you don't change the intensity of the image in any way, so avoid auto scaling of intensity and such. Also, no sharpening must be applied to the image. Once you have your stacked images prepared, you load them in WinRoddier and set the wavelength (for NB filters pick the proper wavelength; for RGB use the wavelength at the center of each color's band). The software does the rest and prepares wavefront images and Zernike polynomials / coefficients, and calculates Strehl / star profile - look at the images I attached; it is a screen capture from the WinRoddier software. This is the process in a nutshell; for more detailed instructions read the "manual" (or rather the document describing the workflow). All needed files (both program executables and manuals) can be obtained via the Roddier Yahoo group. You need to join the group (it might need permission from the group maintainer/admin - it took a day for me, if I remember correctly). https://groups.yahoo.com/neo/groups/roddier/info Once you have joined the group, look under Files / WinRoddier Ver. 3.0 (Latest)--Apps and User Manuals for pretty much everything you need. If you want to check out the manuals and "how to" before joining the group, these appear to be available at other URLs on the net, so here are some to get you started: http://www.compubuild.com/astro/download/Roddier_Ver.3.0_Quick-Start_Guide.pdf http://www.compubuild.com/astro/download/New_WinRoddier_User_Manual.pdf
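For the OSC case, here is a minimal sketch of splitting a raw Bayer frame into separate R, G and B images so each band can be stacked and analyzed on its own (this assumes an RGGB Bayer pattern - check your camera's actual layout):

```python
import numpy as np

def split_bayer_rggb(frame):
    """Split a raw RGGB frame into R, G and B planes by pixel position.
    The two green sites are averaged into a single G plane."""
    r  = frame[0::2, 0::2]
    g1 = frame[0::2, 1::2]
    g2 = frame[1::2, 0::2]
    b  = frame[1::2, 1::2]
    g = (g1.astype(np.float64) + g2.astype(np.float64)) / 2.0
    return r, g, b
```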
  23. Ah yes - if a filter causes blur when placed in front of the focal point (in the converging light beam close to focus) but not when placed between eye and eyepiece, then it is a very poor quality filter in terms of parallelism of surfaces and polish (general optical quality). In front of your eye, the image is already magnified, and small aberrations on a magnified image will not be resolved by your eye. In the converging light cone before the eyepiece, any aberrations will be magnified by the eyepiece (same as the image) - and things will go blurry if you don't have decent optical quality filters.
  24. "Usually I was expecting to have a cleaner Halpha channel than OIII, but it's the opposite. I also know that aside from the optical color correction, in green you get better resolution, and even better in blue than in red." I'm not going to question how good collimation is on this particular scope, but there might be an issue with correction. Although from your quote we can see that "The colors are very well balanced" - I'm not sure what that means. Triplets are tricky beasts with regards to collimation and proper lens spacing - and I, for sure, have no clue about how it is properly done. What I do know is that there is no single Strehl ratio for such a scope. If a single Strehl is quoted, then it is probably for a single wavelength - usually around 550nm (or similar - the peak of visual sensitivity in photopic vision, I think). Corrections at other wavelengths might be, and usually are, different. Here is an example of a Strehl vs wavelength graph: This is a result for a doublet scope, and while around 580nm it approaches a respectable 98% Strehl, at Ha this particular scope is going to have a Strehl of around 65% or so. I "tested" my own TS 80mm F/6 triplet with Roddier analysis - nothing fancy, an RGB image split - and I got, for example, the following results per "band": Blue was around 80% (let's say average), Green around 94%, while Red was at around 98%. Such a scope would have the opposite characteristics - very sharp Ha, while OIII would be softer. Have a look at this post to see the results of that test: However, I'm not sure this is the cause of your issues with star bloat in Ha. No, actually, I'm not seeing it. I'm seeing what you are seeing, and agree that the best focus position for small stars visually appears to be, as you said, somewhere around 32407, but this is just a visual thing. You have no HFR for these stars (a sketch of how HFR is computed follows below), and because they are so dim, when not in perfect focus they will be even dimmer and their "wings" (the wings of the star profile) will be below the read noise. If you look carefully, you will actually be able to tell that the light is spread around - it is very faint, but can be seen. I'll make a screen shot to point it out. Here it is - focus position 32407: Now the same image at a ridiculous level of stretch - just look at the noise distribution: The noise is denser in a rather large circle around the central "tight" points - this is also light from those stars, but very low in intensity due to less than perfect focus, at the level of the surrounding noise or below it, so it can't easily be spotted. Also compare how much the disk of the large star increased in size (it is already stretched) versus those two tiny stars.
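Since HFR came up above, here is a minimal sketch of one common computational definition of it (the flux-weighted mean distance of each pixel from the star's centroid), assuming a small background-subtracted star cutout as a 2D numpy array:

```python
import numpy as np

def hfr(cutout):
    """Half flux radius of a background-subtracted star cutout:
    flux-weighted mean pixel distance from the flux centroid, in pixels."""
    flux = np.clip(cutout.astype(np.float64), 0.0, None)
    total = flux.sum()
    ys, xs = np.indices(flux.shape)
    cy = (ys * flux).sum() / total      # centroid row
    cx = (xs * flux).sum() / total      # centroid column
    r = np.hypot(ys - cy, xs - cx)      # distance of each pixel from centroid
    return (r * flux).sum() / total
```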