Everything posted by vlaiv

  1. That is kind of obvious - lunar observing is always fun with a small scope, but is lunar imaging as well? I finally managed to process the capture from the other night - took some Jupiter, Saturn and this Moon image. Processing of this image took most of the day today. Mak 102mm, ASI178mcc at prime focus, AZGti mount. Took 9 panels, 1000 subs each at 3ms exposure. Full FOV / no ROI. 256 of each type of calibration frame - darks, flats and flat darks. Stacked the best 90% of each in AS!3. Stitching in MS ICE, wavelets in Registax 6 and final touch-up in Gimp 2.10. This is a monochromatic image of the green channel. The image has been reduced x2 in size - it was too much resolution, I think, for the given number of subs and the seeing at that time in that direction. Click on the image for the full resolution version (right click / new tab sort of thing).
  2. Indeed. One of my goals when getting this scope was to see just how good an "all-rounder" it is. I intend to put it through: - planetary/lunar visual - so far excellent - planetary/lunar imaging - this part shows planetary performance and I think it passes with flying colors here as well - lunar is being processed as we speak (again, a bit of a surprise there as well, or maybe not so surprising after seeing this) - EEVA / imaging platform - this will be demanding - and of course DSO visual - probably the weakest point since it lacks the widest views.
  3. Here we have another one - a tiny little Saturn, but it looks very nice too: Same equipment, same conditions, 20ms subs this time and yes, same processing workflow (but darks applied this time - not sure if it makes any difference or not).
  4. Thanks! I'm actually not quite satisfied with the colour balance - it was just a simple hit of "auto white balance" in Registax, which did a pretty good job versus the data from the camera, but I've got my own idea of how to get "true" colour. Unfortunately, I have not yet found a way to do it in existing software and need to do some measurements first - with a colour chart and such (do colour conversion to linear sRGB, then apply gamma and so on ...). Thank you. No idea if it will make any difference. I usually like to remove the bias (as darks are really only bias at those speeds) and correct for pixel response non-uniformity with flats (apparently CMOS sensors get some of that), but here SNR is so good and the planet wobbled so much on the sensor that it is properly dithered - I'm inclined to believe there will be absolutely no difference. Thanks. Conditions were rather good; here is a short animated gif of ten consecutive subs (randomly taken from the original recording) and the result of the stack prior to processing: It is quite still - or rather there is not much blurring, only shimmering - I guess the small aperture played a part there as it cuts through a smaller piece of atmosphere - less than the size of a seeing cell.
  5. Well, it's only Jupiter for now since that is all I managed to process tonight. I also took recordings of Saturn and the Moon - will process and post results tomorrow. This was taken with a 4" Maksutov (SW Mak102) on an AZGti mount and an ASI178mcc camera. No barlow was used - at native F/13. Stacked in AS!3 and wavelets in Registax 6. Final touch-up in Gimp 2.10. The above is just "preliminary" processing since I used neither darks nor flats, but I did take them. I'll do another round of stacking to see if there is any difference (although SNR is very good as is). 5 minute recording at ~126fps (6ms subs) for a total of ~38000 subs, stacked the best 80% (seeing was rather good and I could not tell the difference between the top quality sub and one at 20% from the end - most of the seeing was in the form of shimmer rather than blurring). Surprised to see that much planetary detail with only a 4" scope.
  6. I second the addition of something in the 17-21mm FL range - you want around a 2.5-3mm exit pupil for deep sky ...
  7. No reason not to go for it then. If you ever find yourself having issues because of too big pixels - you can always switch to OAG.
  8. Yes, you are right, my mistake. That is not quite what I wanted to say. I was talking about the best performance of the mount rather than the ability to guide at all, but you are right, I should have been more specific and clear about that. Indeed - a barlow will provide additional focal length and additional resolution. That will make the centroid algorithm more precise. It is not about pixel size nor focal length alone - the combination of the two is what matters. Ah, ok, sorry about that. I'll try to be more straightforward with my explanations. It is about how precisely something can be measured. Imagine you are trying to park a car in a garage whose door is 3.2m wide, but I give you directions in whole meters - so I say either stay on course or shift left/right, but one whole meter each time. Odds are you will miss the garage door and hit the wall. A mount can be directed quite precisely in the sky, but you don't tell it in precise units how much to move - using large pixels gives large "units" of movement left/right (it also depends on focal length). If you want the mount to move precisely where it's supposed to be - you need to tell it to move in small enough units. All the math above just calculates how small those units need to be depending on how fine your mount is. You can still use larger "units" in your directions, and the mount will respond, but it simply won't be as precise as it can be - due to "coarse directions" rather than anything else. All of that is related to the sharpness of your image. If the mount moves a lot, causing a larger RMS guide error - that adds to blur and the image becomes less sharp. It is one of the contributing factors to overall resolution achieved - the other two being aperture size and seeing. It is therefore straightforward to see that you want as low a (true) guide RMS as possible. As pointed out above, the ASI120mm will do an excellent job - both in pixel size and in general. It will also be cheaper than the Lodestar X2. Do you have any particular reason to prefer the Lodestar + barlow option?
  9. Did the image look soft while observing at the eyepiece or just with the camera? What were observing conditions like for you - what did you observe that looked blurry / soft?
  10. No, I'm not saying any of that and honestly, I'm failing to see what in my post led you to that conclusion. First - I'm not saying that the Lodestar X2 has pixels too large to guide an HEQ5/EQ6 class mount. That depends on the focal length used - not on aperture size. I'm also not saying that one can't use the Lodestar X2 at a certain focal length to guide an HEQ5/EQ6 class mount even if the guide resolution is too coarse - you can certainly guide your mount, the only question is how well? In your first post you asked what would be a good guide camera. I understood the term good in a certain way - for me a good guide camera will be one that enables the mount to perform at its best (given the choice of guide scope). I've shown that you can't reliably guide below about 1" RMS with the Lodestar X2 and a 60mm F/5.9 scope simply because it lacks the precision to measure star position accurately. If you want to use the Lodestar to guide to better precision - you certainly can - just change the focal length of the guide scope so that it has better precision. Alternatively, if you don't want to change the guide scope - then select a guide camera with smaller pixels - sufficiently small to enable good centroid precision for your mount. Btw, I would expect an EQ8 class mount to guide to at least 0.5" RMS or less, and the above calculation and recommendation remain valid.
  11. I'm not seeing it. Maybe the easiest way to see if it is concentric is to mirror half of the image - either horizontally or vertically. If it still appears circular - it is concentric. Here is the first image with the right part mirrored onto the left side: and here is the bottom part mirrored onto the top: That looks concentric to me. In fact, maybe this can show it even better: The inner circle is a bit misshapen (seeing, tube currents?) but it is concentric.
  12. That is way too defocused for a proper star test. The last image shows out-of-collimation optics, but I suspect that is because the focuser was racked all the way out and that could tilt the primary mirror. You want a very small level of defocus, something like this: You want those concentric rings to still be visible. Btw - the one on the right is out of collimation because the rings are not concentric - that is what you are checking for: rings that are either concentric or slightly closer on one side. In your images above - the first two look rather concentric, but it is hard to tell because there are so many rings that they appear like a flat surface. The last one is obviously not concentric.
  13. But then again - you are not actually guiding either the 55mm lens or the 1000mm F/5 reflector - you are guiding the mount holding either of those
  14. The sampling rate of the main camera and the sampling rate of the guide camera are only related by a bunch of rules of thumb, but in principle they are not related. We could say there are two schools of thought at work here. 1. Good enough 2. Best given the circumstances. Using the first school of thought you can get to a relationship between guide resolution and imaging resolution, and it goes something like this: under reasonable circumstances you need your guide RMS to be at least half of your imaging resolution. You have probably heard of this rule of thumb. There is the 1/16 centroid accuracy thing, and the x3 of that accuracy per guide RMS, which together generally give about x5 guide resolution vs guide RMS. Put those two together and you get the general rule of thumb that says - make your guide resolution about x2.5 that of your imaging resolution. This is the "good enough" kind of approach, and using it you are fine with the Lodestar, as it gives you a x1.33 ratio to your imaging resolution. Let's now try a slightly different approach - approach number two, which tries to do the best given the circumstances. The rationale behind this approach is simple - if you can, why not go for the best result given your circumstances? To explain it a bit better - your imaging resolution is 3.55"/px. Does this mean that you should settle for 1.77" RMS guide error just because the rule of thumb says it should be (at least) half of the imaging resolution? Notice that "at least" part. If your mount is capable of 0.8" RMS guide precision - why not go with that? If you compare two images - both sampled at 3.55"/px, one guided at 0.8" RMS and the other at 1.7" RMS - you will see the difference in sharpness between the two, regardless of the fact that you are sampling at a rather low rate. Why not aim for the sharpest possible image given your setup?
In this approach we don't look at guide resolution in relation to imaging resolution; we look at guide resolution with respect to what your mount can achieve at its best. This is your starting point. What mount do you have, and what sort of guide performance does such a mount usually have? I gave you an example for a mount capable of 0.5" RMS. But we don't have to do it like that - we can do the reverse. Given your guide scope at 355mm and the Lodestar with 8.2um pixel size - what is the best mount performance this combo is capable of guiding? This works out to 4.76"/px, so if we take 1/16 of that, that is about 0.3". Multiply that by at least x3 to get the resulting RMS - and that is 0.9" RMS. So that setup is only good for mounts that on average do above 1" RMS and only in exceptional circumstances achieve 0.9" RMS. HEQ5/EQ6/AZEQ6 and all of those mounts are capable of better guide performance. Why limit them with a guide setup that can't measure star position with enough accuracy to instruct the mount to track better? The ASI290 has 2.9um pixels, while the Lodestar X2 has 8.2um pixels. One is too large and does not provide enough resolution to guide a mount like the HEQ5/EQ6, while the other has too much resolution with said guide scope - you don't need that much precision, and smaller pixels are less sensitive (part of guide performance comes from good SNR on the guide star, so you want that as well). You can't split pixels on the Lodestar, but you can bin pixels on the ASI290 to get to a pixel size that balances sensitivity against the resolution needed to determine star position to the precision required to properly guide a mount like the HEQ5/EQ6. If, on the other hand, you have a mount like the Mesu200 or similar that can guide below 0.3" RMS - then I would say use the ASI290 but don't bin it - as you'll need that much precision in order to guide a mount with such a low error.
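The arithmetic in the paragraph above can be sketched in a few lines of Python (the 1/16-pixel centroid accuracy and the x3 RMS factor are the rules of thumb from the post; 206.265 is the usual arcsec-per-pixel conversion constant):

```python
def pixel_scale(pixel_um, focal_mm):
    """Guide sampling rate in arcsec/px: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

def best_guidable_rms(pixel_um, focal_mm, centroid_frac=1 / 16, rms_factor=3):
    """Best guide RMS a camera/scope combo can resolve:
    centroid accuracy (a fraction of a pixel, in arcsec) times ~x3."""
    return pixel_scale(pixel_um, focal_mm) * centroid_frac * rms_factor

# Lodestar X2 (8.2 um pixels) on a 355 mm guide scope, as in the post:
scale = pixel_scale(8.2, 355)        # ~4.76 "/px
rms = best_guidable_rms(8.2, 355)    # ~0.9" RMS at best
```

Plugging in the ASI290's 2.9um pixels (or 5.8um when binned x2) on the same scope reproduces the rest of the comparison.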
  15. Let's try it this way: - re sampling resolution - it depends on what scope you are using, the quality of your mount and how wide you want to go. 1"/px is very high resolution and in most cases unattainable. You need a mount that guides at 0.5" RMS or lower, very steady skies and at least 8" of aperture to get there. For a telescope of 80-100mm of aperture, a realistic maximum sampling rate is about 2"/px. This does not mean that you have to go that high - if looking for a wide field setup, the simple fact is that you have a limited size of sensor / corrected field and you won't be able to fit that many pixels. 3.5"/px is a fine sampling rate for wide field. If we want to be specific about max sampling rate and Nyquist - there is a simple rule to follow - measure your FWHM and go with a sampling rate about 1.6 times smaller than that. This means that one needs 1.6" FWHM stars in order to fully exploit a 1"/px sampling rate. - re guiding resolution - well, it depends on the mount you have and what is realistically achievable in terms of guide RMS. My advice would be to sample at about x3 the best possible RMS in terms of centroid accuracy. Centroid accuracy is about 1/16 - 1/20 of a single pixel. To give a bit better explanation, here is how to calculate guider resolution. Let's say that you have a mount capable of 0.5" RMS guiding under the best circumstances. You want your centroid accuracy to be about 0.5/3 = 0.167". That will be 1/16 to 1/20 of a pixel, so the sampling rate needs to be 0.167 * 16 to 0.167 * 20 = 2.67"/px to 3.34"/px. You have a guide scope that is 60mm F/5.9, or about 350mm of FL. You need a camera with less than a 6um pixel size to use as a guide camera. With the most common pixel size of 3.75um and 350mm of FL, you'll get 2.21"/px - which is slightly better than you need.
With an ASI290 camera, which is a good choice for a guide camera, you'll have 1.71"/px, and if you choose that camera, then use x2 bin to further improve its sensitivity, as it will still provide you with 3.42"/px - close enough for the above criteria. Hope this helps?
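The reverse calculation in the post (target mount RMS to largest usable guide pixel) can be sketched like this; the numbers match the worked example of a 0.5" RMS mount on a 350mm guide scope, with the 1/20 centroid fraction and x3 factor taken from the post:

```python
def required_pixel_um(target_rms_arcsec, focal_mm, centroid_frac=1 / 20, rms_factor=3):
    """Largest pixel size (um) that still lets the guider resolve the target RMS."""
    centroid_acc = target_rms_arcsec / rms_factor   # needed centroid accuracy, arcsec
    max_scale = centroid_acc / centroid_frac        # allowed sampling rate, arcsec/px
    return max_scale * focal_mm / 206.265           # pixel size in micrometers

# 0.5" RMS target on a 350 mm guide scope -> a bit under 6 um, as in the post
limit = required_pixel_um(0.5, 350)
```

Using the stricter 1/16 centroid fraction gives a smaller pixel limit, which is why the common 3.75um cameras land comfortably inside the range.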
  16. I pushed my 8" to ridiculous powers like x500+ just to see that moment when the image "falls apart" - I could not understand why people call it that, and I still can't. For me, the image did not fall apart - it just became magnified / larger with the same level of blurriness (and darker of course). My best views of Jupiter to date were with the 8" at x200, so again - no crazy powers required.
  17. I like it when the image is sharp rather than magnified: x200 with my 8" scope and about x100 with the 4" scope. I was observing two days ago after quite a long time - had my 4" Mak set up on the balcony and took a glance at the Moon, Jupiter and Mars. I found that the ES82 11mm gave the most pleasing views, although I also used the ES82 6.7mm and BCO 6mm. On the Moon I preferred the 6.7mm, but on Jupiter and Saturn the 11mm was more pleasing. That gave something like x118 magnification with the 11mm and x194 with the 6.7mm. Just to clarify - when I observed Jupiter there was a Europa transit at that moment and I caught the second part of it. Seeing could have been better, but it was relatively good. I could see Europa's shadow with both the 11mm and 6.7mm, and the moment it emerged from in front of Jupiter I could clearly see Europa itself - but the actual disk was only clear to me with the 6.7mm. However, at that magnification and with a 4" scope you can't really be sure if you actually resolved the disk - it just looks different from a star, a bit fatter but with airy rings around it - which is a clear sign that we are entering too-much-magnification territory.
  18. It actually depends on your eyesight. For a 20/20 person you don't really need to go very high to be able to see everything there is to be seen. For 102mm of aperture we are talking about resolving features about 1.26" in size. A person with 20/20 vision will see 1' detail - that is 60". In principle you will be able to see it all at about x47.62. Any higher magnification will not show you more - it just makes it easier to see what is already there. People often use x2-x3 that magnification because it makes things easier to see (you don't have to work as hard as when reading the last line in the eye doctor's office) but after that, things just become too soft (not because the optics are poor - rather, things are naturally blurry because there is no additional detail).
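The magnification figure above is just the ratio of the eye's resolution limit to the scope's; a sketch with the numbers straight from the post:

```python
SCOPE_LIMIT_ARCSEC = 1.26   # resolving limit quoted for 102 mm aperture
EYE_LIMIT_ARCSEC = 60.0     # 1 arcminute, 20/20 vision

# Minimum magnification at which the finest resolved detail reaches the eye's limit
min_useful_mag = EYE_LIMIT_ARCSEC / SCOPE_LIMIT_ARCSEC   # ~ x47.6

# The comfortable range people actually use, per the post: x2 - x3 of that
comfort_range = (2 * min_useful_mag, 3 * min_useful_mag)
```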
  19. Well, you can try two different approaches here. With Starnet++, this is how I would suggest you try it: - take all channels, stretch slightly and save as 16 bit images. Remove the stars with Starnet++. - take the Ha channel and subtract the Starnet++ version from the regular 16bit version. This should give you the stars. If you did the initial stretch right - this is about as much as you should stretch your stars, or maybe just a tad more. - Now you can either stretch all three channels to your liking and do an RGB combine, or you could try the LRGB approach (I'll explain that one as the second approach, without Starnet++). Once you have your image - you simply layer the stars on top of it and use some blend mode (like lighten, or maybe add a layer mask with stars and normal mode - whatever puts the stars on top of your image). That way you should have nice nebulosity with good colour and white stars (no annoying purple halos and such). The second approach is probably a bit more complex, but it does not involve Starnet++ (which could be an advantage in some cases). You need to do LRGB composition with RGB in the linear stage and a synthetic L. Creating a synthetic L is rather difficult to do, but in most cases Ha or Ha+OIII will be enough. SII rarely exists on its own; much more frequently it is in the same place as Ha. This lets you use Ha as luminance. In any case, once you have your luminance - stretch it like a mono image until you are happy with what you have. Now you need to do the RGB combine by applying the RGB ratio method. But you need to assign sensible weights to each channel. Usually Ha will have some small weight like 1/16 or so (it will still be linear data, only scaled to bring it to the same level as OIII and SII). Similarly you might need to scale OIII just a bit to bring it down to SII levels. Here you are not really stretching your data - just making it "colour" compatible. Otherwise it will all be red with slight hue variations (because the strength of Ha is so dominant).
The RGB ratio is rather straightforward:

Resulting_R = Stretched_L * R / max(R,G,B)
Resulting_G = Stretched_L * G / max(R,G,B)
Resulting_B = Stretched_L * B / max(R,G,B)

Not sure how you are going to do that in PI or PS or whatever software you are using (it should be doable in both of those, but using different approaches - either pixel math in PI or layers in PS).
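A minimal NumPy sketch of the RGB ratio combine above (function and array names are illustrative; channels are assumed linear and Stretched_L already stretched):

```python
import numpy as np

def rgb_ratio_combine(stretched_l, r, g, b):
    """Per-pixel LRGB ratio combine: each channel is the stretched
    luminance scaled by channel / max(R, G, B)."""
    m = np.maximum(np.maximum(r, g), b)
    m = np.where(m == 0, 1.0, m)   # guard empty pixels against divide-by-zero
    return stretched_l * r / m, stretched_l * g / m, stretched_l * b / m
```

In PixInsight the same thing is a three-line PixelMath expression; in Photoshop it takes divide/multiply blend layers.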
  20. In the end, I guess that is the proper explanation, as for Maksutovs (and other Cats) - it is the magnifying secondary that acts the same way, right?
  21. Yep, I've got some very strange astigmatism in my right eye, so I amuse myself by trying to figure out the PSF of a street lamp (I see three moons / street lamps with my right eye). This is more like a grease smudge - that sort of ghosting / scatter - it does not move with my eye but rather stays put with respect to the eyepiece. My first reaction was that it is grease from eyelashes / the eye - since it is in the center of the eyepiece and it looks like that, but a cotton swab with cleaning fluid made no difference - I expected it to at least spread around if not clean off completely. It is also present in various eyepieces (although that means nothing, since I could have contaminated the eye lens in all of them - but the effect changes with the eyepiece / scope combination).
  22. I just realized that Ruud's explanation, given in the thread that I linked, is much more likely than mine above. The Mak has a secondary mirror that magnifies - that makes rays look like they come from a much shorter focal length - so I guess that is the reason behind the longer eye relief in a Mak? In any case, I made one wrong assumption - a smaller exit pupil will not necessarily touch the edge of the eye lens, and the whole exit pupil idea can easily be tested - one just needs an aperture mask to create a smaller exit pupil and leave everything else the same (focal length and eyepiece). Adding such an aperture mask should change the eye relief if it is related to this. On a completely different note, @Louis D, very interesting thread you linked, and your images seem to show something that has been troubling me. I get this dark spot in the center of the view sometimes and I don't understand why it is there. It is not round and it causes blur / scatter and some light fall-off. It most resembles the images of the 32mm GSO and Orion Plossls. When I move my eye - this dark spot "counter moves" - or rather its position relative to the eyepiece stays the same. I can often "look behind it" (sort of). If I place it directly over a bright source then everything turns nasty (like placing Jupiter behind it) - so much scatter appears in the FOV of the eyepiece. This is not scope related, nor barlow related, nor eyepiece related, since I've experienced it with different eyepieces / scopes / barlow or no barlow - but it does not happen always. For example, in the ES82 11mm and Mak102 it was there but very subtle, more like just a bit of "misting" in that place rather than a shadow. With the ES82 6.7mm it was very obvious and distracting. With the BCO 6mm it was evident but somewhere between the other two in intensity. What could that be and why does it happen?
  23. A barlow indeed produces a higher F/ratio for any given scope, and if the above is correct - that would be the explanation, but I don't remember that being offered as an explanation in the previous discussion. Maybe it would be beneficial if I found the actual discussion I was referring to and the explanation given there. Ok, here is the original discussion: And here is the interesting point:
  24. That one is rather simple. You set your temperature to -16, told the camera to start cooling and went about your business. A few minutes later you returned and were very pleased to see it at -16C with 74% of power being used. Then you started taking exposures and things changed. Taking exposures makes the sensor electronics do work - it is not the same as having the sensor sit there idly being cooled. The work the sensor does means increased power consumption of the sensor itself and, as a result, increased heat. Same thing that happens with a car engine. If you put it in neutral and sit in a parking lot - it's not going to heat up much and the fan will not even start turning. Take that car up a hill and it will soon start to heat up properly and the fan will kick in. Rev that engine very hard and go up a very steep hill and there is even a chance that you will overheat it. A working sensor produces more heat and is harder to cool than an idle sensor.
  25. We know that a barlow extends the eye relief of eyepieces, and some time ago I asked why that would be and was given an answer that I did not really understand well. Yesterday I had a brief session from my balcony - just a bit of planetary / lunar with the Mak102 - a very rewarding session indeed, although the seeing was not perfect (I have not been observing in ages). I had my BCO 6mm on my desk as I have been meaning to sell it for quite some time - and for some reason I decided to try it in the Mak. I was surprised by how much eye relief there is in this eyepiece. I remember it being very tight on my F/6 8" dob. Then I realized that it must be the same effect as with a barlow. I'm now using it with an F/13 scope and it feels quite comfortable. In any case, I do have one plausible explanation that I'll try to sum up in a diagram, but I'm not 100% sure it is solely down to that. Ok, I'm pretty sure I made a mess of the above diagram, but let's try to decipher what I drew. It is about exit pupil size. A large exit pupil will have tighter eye relief than a smaller exit pupil. In red we have the rays of the larger exit pupil. If we mark the intersection of all of them - we get the eye relief position - that is the dark red vertical line. If we now observe the smaller exit pupil - marked in blue - we can see that the full intersection of those lines actually moved further away from the eye lens. Here in the diagram I'm making one assumption that stands to reason, but I'm not completely sure it is true - edge of the field rays that are still at full illumination will emerge at the extreme ends of the eye lens - as drawn, and not closer to the center. Or to put it another way - the edge of the field "pencil" will touch the edge of the eye lens. We know that the exit pupil is determined by the F/ratio of the scope and the focal length of the eyepiece (when we divide the two). The 6mm BCO in an F/6 scope will give a 1mm exit pupil, while the same eyepiece in an F/13 scope will give a 0.46mm exit pupil.
This effect is more noticeable with narrower field eyepieces because it depends on the maximum angle the exit pencil makes. With simple designs that angle is often 50 degrees or less (or rather half of that - because the angle is measured with respect to the optical axis). Does this make sense, and could it be the explanation for why barlows make eye relief longer and why eye relief varies with the telescope (f/ratio) used? Maybe this is the reason why people used Orthos in the first place - they had decent eye relief on F/15 scopes
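The exit pupil numbers quoted above are just the eyepiece focal length divided by the scope's focal ratio; a one-line sketch with the values from the post:

```python
def exit_pupil_mm(eyepiece_fl_mm, f_ratio):
    """Exit pupil (mm) = eyepiece focal length / telescope focal ratio."""
    return eyepiece_fl_mm / f_ratio

# 6mm BCO, values from the post:
ep_dob = exit_pupil_mm(6, 6)     # 1.0 mm in the F/6 dob
ep_mak = exit_pupil_mm(6, 13)    # ~0.46 mm in the F/13 Mak
```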