Everything posted by vlaiv

  1. It is everyone's understanding and experience that larger telescopes let us see fainter objects - we are just trying to understand why that is. For point-like sources that is fairly easy to understand. When you zoom in on a point source, it stays point-like - a star will be point-like in both a large and a small scope. A point-like star covers only a very limited number of detector cells, and that number does not change between the large and the small scope. The total photon count does change - and when you divide a larger total photon count over the same number of receptors, each receptor gets more photons, hence the image is brighter. With extended sources, the photons spread out with magnification - that is why they differ from point sources and are called extended sources: they behave differently. You don't need a bright planet to see this - just look at the background sky. It has some brightness to it and is an extended source itself. Increase magnification and it will get darker.
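To put rough numbers on this, here is a minimal sketch (plain Python, made-up values, not measurements) comparing photons per receptor for a star versus an extended patch when aperture doubles, assuming the exit pupil is kept the same so magnification scales with aperture:

```python
# Illustrative sketch: why a point source gains per-receptor brightness with
# aperture while an extended source does not (at fixed exit pupil).
# All numbers are made up for illustration.

def photons_per_receptor_point(aperture_mm, base_flux=100.0, receptors=4):
    # Total photons scale with aperture area; the star stays point-like,
    # so the same few receptors share all of them.
    total = base_flux * (aperture_mm / 100.0) ** 2
    return total / receptors

def photons_per_receptor_extended(aperture_mm, base_flux=100.0):
    # At fixed exit pupil, magnification scales with aperture, so the image
    # area grows as aperture^2 and exactly cancels the extra light gathered.
    total = base_flux * (aperture_mm / 100.0) ** 2
    image_area = (aperture_mm / 100.0) ** 2
    return total / image_area

for d in (100, 200):
    print(d, "mm  star:", photons_per_receptor_point(d),
          " extended:", photons_per_receptor_extended(d))
# the star brightens 4x going from 100 mm to 200 mm; the extended patch stays the same
```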
  2. There is a simple explanation for this: larger scopes gather more light, and the speed of a telescope is not tied to its F/ratio as is usually believed - speed is "aperture at resolution". Once the working resolution is set - and it is often set by seeing + guiding (and use of binning if there is need for that) - aperture wins: a larger aperture produces sharper images under the same conditions. Sharpness of the image is a combination of three factors - seeing, guiding and aperture size. If the first two are kept the same (and they are, provided the mount can handle both the small and the large OTA well), again aperture wins. Granted, the difference that aperture makes will vary depending on the other two - in poor seeing, the poor seeing will mask both guiding errors and aperture size and the difference will be minimal, but in good seeing with a good mount, aperture will make a difference that is visible.
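A small sketch of the "aperture at resolution" idea, assuming both scopes are binned or resampled to the same working resolution in arcsec/pixel (illustrative numbers only, not a real exposure calculator):

```python
# Signal collected per output pixel when two scopes work at the same
# sampling rate (arcsec/pixel): it depends on aperture area, not F/ratio.

import math

def signal_per_pixel(aperture_mm, arcsec_per_px, sky_flux=1.0):
    # photons per pixel per unit time ~ aperture area * sky area covered by one pixel
    aperture_area = math.pi * (aperture_mm / 2) ** 2
    pixel_sky_area = arcsec_per_px ** 2
    return sky_flux * aperture_area * pixel_sky_area

working_res = 1.5  # arcsec/pixel, set by seeing + guiding (assumed value)
for d in (80, 200):
    print(f"{d} mm aperture: {signal_per_pixel(d, working_res):.0f} (arb. units)")
# the 200 mm scope collects (200/80)^2 = 6.25x more signal per pixel,
# regardless of what F/ratio each scope happens to be
```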
  3. It does not work quite like that - and you can easily test this on any planet or the Moon. Take Jupiter - pump up the magnification and the planet will be less bright. When we increase magnification we spread the light over a larger surface - either angular, or actual on the back of our eye. We have a fixed number of sensing cells there, and if we spread the photons over a larger area, each sensing cell gets fewer photons as the total photon number is unchanged. The cells can't tell whether this is due to dimming of the object (a reduced number of photons) or due to increased magnification - the object being bigger. In both cases we will see equal dimming.
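The arithmetic behind that dimming, as a tiny sketch (made-up photon counts, purely illustrative):

```python
# Doubling magnification spreads the same photons over 4x the image area,
# so each sensing cell receives roughly a quarter of the photons.

total_photons = 1_000_000          # photons from Jupiter per unit time (made-up)
cells_at_100x = 10_000             # receptor cells covered at 100x (made-up)

for mag in (100, 200):
    cells = cells_at_100x * (mag / 100) ** 2   # covered area grows as mag^2
    print(f"{mag}x: {total_photons / cells:.1f} photons per cell")
# 100x: 100.0 photons per cell, 200x: 25.0 photons per cell
```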
  4. Here is a quick process of the image - no clipping:
  5. Ok, this made me think. It is impossible for the use of flats to clip the histogram unless something is seriously wrong, so I downloaded the first image - and no, your histogram is not clipped. Here are the histograms per channel: None of them is clipping. When you use flats you just "flatten" the image and the histogram becomes very narrow and sharp, as you remove vignetting from the image. Maybe you just thought that the histogram was clipping - but it's not. Here are the stats on the channels: as you can see, the minimum is never 0 and the histograms above don't look like clipping histograms - they have a proper curve to them. If you don't mind, I'll have a quick go at processing the data as it looks nice and I can't help myself.
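For anyone wanting to run the same check on their own stack, here is a minimal sketch along those lines (assumes astropy is available; the file name is a placeholder, not the poster's actual file):

```python
# Quick per-channel clipping check: count pixels sitting exactly at the
# minimum / maximum and print basic stats.

import numpy as np
from astropy.io import fits

data = fits.getdata("stack.fits").astype(np.float64)     # placeholder path
channels = data if data.ndim == 3 else data[np.newaxis, ...]

for i, ch in enumerate(channels):
    at_min = np.mean(ch == ch.min()) * 100
    at_max = np.mean(ch == ch.max()) * 100
    print(f"channel {i}: min={ch.min():.4f} max={ch.max():.4f} "
          f"pixels at min={at_min:.3f}%  at max={at_max:.3f}%")
# a clipped histogram shows a large spike of pixels sharing the exact minimum
# (or maximum) value; a healthy one tapers off smoothly at both ends
```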
  6. Except that color in the image has very little to do with the optics and the color correction of the telescope. In fact, pure mirror systems will always be better color corrected than any refractor, no matter how well corrected the refractor is - because they don't need color correction in the first place. If your scope is sufficiently color corrected, then color is down to the sensor used and the processing. It stops being related to the scope used altogether.
  7. Just to make myself clear - I don't aim for a 0 pixel value background in the final image. This is still the linear stage and I simply remove background sky light from the image - so I can, say, measure things, or get true RGB ratios without the influence of the atmosphere and so on. I can later in processing set the background to any pixel value I choose to make the image look natural.
  8. Yes they would - but it would not lead to histogram clipping unless you are using some sort of rounding off at 0. If using 32-bit floating point data, negative values are OK. In fact, I end up with negative values in my regular image processing all the time and it does not pose a problem at any step. When I remove background, I aim for the background to have an average value of 0. Due to the noise floor, some pixel values are below 0 and some above 0 (and they average to 0). This will not lead to histogram clipping though. When I import such data into Gimp, it will simply auto-scale the data - taking the lowest and highest pixel values (regardless of their sign) and scaling that to the 0-1 range. Having lighter darks due to a light leak will cause other issues - like gradients or problems with flat correction - but should not cause a clipped histogram by itself. Only using an unsigned data format where values below 0 are clipped to 0, or deliberately clipping to 0 or some other value, will make the histogram clipped.
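A minimal sketch of what I mean by a zero-mean background and why negatives are harmless in 32-bit float (synthetic data, with a simple min-max rescale standing in for what an import-time autoscale effectively does):

```python
# Zero-mean background removal in float32: noise pixels go slightly negative,
# nothing is clipped, and a later min-max rescale maps everything back to 0-1.

import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(loc=500.0, scale=10.0, size=(100, 100)).astype(np.float32)

background = np.median(image)          # estimate of the sky level
image -= background                    # background now averages ~0

print("mean ~0:", image.mean(), " min:", image.min(), " (negative, and that's fine)")

# roughly what an import-time autoscale does: stretch lowest..highest to 0..1
scaled = (image - image.min()) / (image.max() - image.min())
print("after rescale:", scaled.min(), scaled.max())   # 0.0 .. 1.0, nothing clipped
```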
  9. Only if you used an unsigned integer data format. With 32-bit floating point you would simply end up offsetting the data to negative values. Setting levels properly will make the histogram look natural (but it would not correct other issues with mismatched darks).
  10. Maybe we are just wrong in our assumptions? The only difference between a large and a small scope operating at the same F/ratio and using the same eyepiece in both (same exit pupil) is the size of the object / its integrated brightness. The contrast ratio is the same and the surface brightness photon count is the same - so there is no difference in the absolute values of light hitting the sensors. It can only be due to the size of the object / integrated brightness - and maybe we are simply wrong to assume that moving part of the object outside the FOV will not impact its perceived brightness / contrast with respect to the background? I'm sure this can easily be checked - although I'm not sure when I'll be able to do it - it seems that I ran out of my clear-skies observing budget for this year when I had that one session last week.
  11. I'm sure that someone with an 8" refractor and a 4" reflector would come to the same conclusion - the bigger scope throws a brighter image for some reason.
  12. This is routinely done in imaging with a combination of different software.
      1. PHD2 is often used for guiding - thus correcting for mount errors and poor polar alignment. Other software is also available for this purpose - like Metaguide and the original PHD, to name a few.
      2. Plate solving is available as an option in many imaging applications - it will identify where the scope is pointing and update the telescope driver so it knows where "it is".
      3. SharpCap allows for polar alignment without a polar scope. There are other techniques to do this as well - like drift methods.
      Everything StarAid does already exists, and depending on the price of said accessory, it might be available at less cost, although a bit more involved - for example, you can take a Raspberry Pi and a guide camera, and software on the RPi will do all the functionality you need. It will also let you "log in" via your phone / tablet and monitor what is happening. You can even have a full-fledged system with KStars/Ekos running and log in with a laptop via an RDP/VNC type of connection.
  13. This can happen - and it's not a good thing - it is an error in how calibration is performed. Ideally you want to work with 32-bit floating point when you start calibrating your data. Initially the data is 16-bit unsigned integer (we can't have a negative number of photons hitting the sensor, right?), but as soon as you start removing the offset and doing calibration, due to noise, some of the pixel values can end up being negative. If calibration is performed in 16-bit unsigned format, those negative numbers will not be recorded properly and will clip to 0. Maybe try performing calibration in another software (you can still stack the calibrated data in DSS if you like) and see if there is a difference. I think that Siril will both calibrate and stack data and is free. Maybe give it a try?
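Here is a tiny demonstration of the difference, with synthetic numbers (not anyone's real data): subtracting a dark in a 16-bit unsigned pipeline has to clip negatives to 0, piling them up in the histogram, while 32-bit float keeps them.

```python
# Why calibration should be done in a signed/float format: pixels whose dark
# value exceeds the light value go negative, and an unsigned 16-bit pipeline
# has to clip them to 0, producing a spike at the left edge of the histogram.

import numpy as np

rng = np.random.default_rng(1)
light = rng.normal(loc=20.0, scale=5.0, size=100_000)   # faint signal + noise
dark  = rng.normal(loc=18.0, scale=5.0, size=100_000)   # dark frame

calibrated_float = (light - dark).astype(np.float32)                 # negatives preserved
calibrated_u16 = np.clip(light - dark, 0, 65535).astype(np.uint16)   # clipped at 0

print("float32: min =", calibrated_float.min(), " mean =", calibrated_float.mean())
print("uint16 : fraction of pixels stuck at 0 =", np.mean(calibrated_u16 == 0))
# the float result averages ~2 with a smooth histogram; the uint16 result has a
# large pile-up at 0 - exactly the clipped look described above
```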
  14. Actually - nothing wrong with that statement. As long as it "feels", "looks", "smells" or anything that has to do with our perception - it is quite possible that it differs from underlying physical reality. I'm sure you've seen this before: B looks brighter than A - but it isn't
  15. Ok, so here is an experiment. You'll need two LED panels, or maybe two pieces of paper and something to illuminate them equally, and a dark room. Illuminate the papers so that they turn grey. Try to get them equally illuminated. Now cover 90% of one paper with something dark. Did the other paper turn white? If we go by the integrated brightness approach, total surface is important and reducing the surface should reduce perceived surface brightness as well. Any change in surface brightness will make one paper slightly darker, and then another thing should kick in - we should perceive the brightest "neutral" color in the scene as the white point, so the uncovered paper should become white - or at least a lighter grey, as it is brighter in intensity. Perceiving it as having higher surface brightness will do as well. Again, I have a feeling that this will not happen - similarly to how you can move half of an object outside the FOV and it will not change brightness because of that.
  16. Not sure how it's done - but I'm open to new experiences. Indeed - integrated brightness is just the total photons from the target captured by the scope. Larger aperture - more photons (per unit time). It stays constant for any given scope because the aperture is fixed. That is my problem as well - I also think it won't affect anything, but it should (according to the integrated brightness approach).
  17. As far as surface brightness goes, I think the following is true:
      1. The contrast ratio in terms of photon counts between background and target is maintained as you increase magnification. They are dimmed per unit surface by the same amount, because both have fixed integrated brightness, and an increase in magnification magnifies them both by the same amount (otherwise things would be very funny at the eyepiece). So photon / physics wise, the contrast ratio is maintained.
      2. Perception of that contrast changes. It changes due to at least two things. One is already mentioned - as the size of the object changes, so does our perception of contrast; I'm just having trouble accepting that this has the magnitude that is ascribed to it. The second is the absolute level of light - we must not assume that we will perceive the contrast ratio the same at high levels of light as at low levels of light.
      For all those who maintain that integrated brightness is important, I'd ask: what about the case when you move half or 3/4 of the object outside the field stop? Does that change the perceived contrast of the rest with respect to the background? If you move part of the object outside the field stop, you stop those photons from reaching your eye. Surface brightness will remain the same, but the total / integrated brightness of the visible portion will fall.
  18. My experience was with x2 larger magnification and galaxies in Markarian's Chain at about x20 and x40. These are about 2' x 1' to maybe 3' x 2'. This means that they were quite small in both scopes - around 1° when magnified.
  19. This size / contrast thing should work in a single scope with a zoom eyepiece. When we change magnification, we don't change the contrast between the two - object and background. If we have a threshold object at some magnification (barely detected), then if we zoom out it should disappear from view. We don't need constant exit pupil for this to happen. I'm fairly certain that this will happen far less often than is suggested.
  20. You want this line: 190 1,742364914 0,501801095 446,0454179 30,71525639 29,7470829 8 - this gives ~0.5e of read noise, the lowest. It is a gain of 190, or a "relative gain" of 30.71 (whatever that means). There is no relationship between gain and frame rate. You want the lowest read noise, and that is to some extent controlled by gain - simply select the gain that provides the lowest read noise. Exposure length is related to frame rate in a certain way. You want your exposure length to be such that it freezes the seeing. That is in most cases around 5-6 ms. In very good seeing it can be as high as 10 ms. You'll recognize such seeing by the image being quite steady in the preview - no jumping around or fast blurring of features. Exposure length limits the max FPS that you can achieve, and the relationship is: 1000 / exposure (in ms) = max FPS. If you set your exposure to, say, 5 ms, the max FPS that is theoretically possible is 200 FPS - you simply can't have more than that, as there is only a limited amount of time in one second. Real FPS will also depend on your camera, the computer's USB connection and how fast the computer can save those frames. Choosing a smaller ROI will help there.
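A short sketch of that exposure / FPS bookkeeping (pure arithmetic, nothing camera-specific):

```python
# Max achievable FPS for a given exposure length: you cannot take more
# frames per second than fit into one second.

def max_fps(exposure_ms):
    return 1000.0 / exposure_ms

for exp in (5, 6, 10, 25):
    print(f"{exp} ms exposure -> at most {max_fps(exp):.0f} FPS")
# 5 ms -> 200 FPS, 6 ms -> ~167 FPS, 10 ms -> 100 FPS, 25 ms -> only 40 FPS
# real FPS will be lower still, limited by USB bandwidth and disk speed
```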
  21. Ok, I accept that size makes a difference for the same exit pupil, but I have a sense that it's given more credit than it deserves. How about objects that have features that are half the size of the whole object? Has anyone observed something like that? Usually faint nebulae will be prime candidates for this. Take NAN for example. I marked two regions that are prime candidates for this - they look like they have the same brightness and one is about x4 larger than the other in surface. Did anyone ever see one and not see the other? If the size explanation holds, this will happen at least in some cases.
  22. Don't forget that those rod cells also pick up light from the background at the same time - regardless of whether the object is there or not.
  23. A human rod cell is about 2 microns in diameter (quote from wiki: https://en.wikipedia.org/wiki/Rod_cell). The focal length of the human eye is 17 mm - 22 mm (depending on the source), but let's take an average value of 20 mm. A single rod cell thus covers about 20" or 20 arc seconds. When we talk about galaxies, we usually talk about objects that are a few degrees in AFOV - maybe the smallest is half a degree, the size of the full Moon to the naked eye. That is x60 - x90 larger than a single cell in diameter, and if we compare by surface, it contains about 3000 rod cells. I would not call that "a few receptors". I think that in the cases we are discussing, enough receptors are stimulated that the size of the object does not make a difference with respect to the number of stimulated cells.
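The arithmetic spelled out, as a rough back-of-the-envelope check (same figures as above: 2 µm rod diameter, 20 mm focal length, half-degree apparent object; the exact count depends on the assumed shape and focal length, but it lands in the thousands either way):

```python
# Angular size covered by one rod cell, and a rough rod count across
# a half-degree apparent object.

import math

ROD_DIAMETER_MM = 0.002        # 2 microns
EYE_FOCAL_MM = 20.0            # average of the 17-22 mm range
ARCSEC_PER_RAD = 206265.0

rod_arcsec = ROD_DIAMETER_MM / EYE_FOCAL_MM * ARCSEC_PER_RAD
print(f"one rod covers ~{rod_arcsec:.0f} arcsec")             # ~21"

object_arcsec = 0.5 * 3600     # half a degree apparent size
across = object_arcsec / rod_arcsec
print(f"object spans ~{across:.0f} rods across")              # ~87
print(f"roughly {math.pi / 4 * across**2:.0f} rods by area")  # several thousand
```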
  24. This part is fine. This part - I'm not really sure. What does 25m/s mean? If you mean that you used 25 ms exposures, then that is definitely not fine. You want your exposures to be around 5-6 ms. Don't look at histogram values - don't try to get them to a certain percentage of maximum. That is not important. The important bit is freezing the seeing, and you can only do that with short exposures. Gain needs to be set so as to minimize the read noise. Do you have any graphs - gain vs read noise or similar - for this camera? If not, then use high gain settings, like 90% or so. Usually read noise goes down when you up the gain. Use the Raw8 color space and save the movie as SER, then later select RGGB debayer in AS!3. 3000 frames is a very low frame count - you want something like x10 or more. Try to get high FPS if you can (USB 3.0 connection, SSD, and fast capture mode / turbo USB - whatever you have available in SharpCap). A 640x480 ROI is good - you can even go smaller if that will help with FPS. Image for 4 minutes or so. Stack only the best 5-10% of frames.
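To put numbers on the frame-count advice, a quick sketch (it assumes the capture actually reaches the theoretical FPS ceiling, which real hardware usually won't):

```python
# How many frames a planetary run can yield at best, and how many survive
# a 5-10% best-frame selection when stacking.

exposure_ms = 5
capture_minutes = 4

fps_ceiling = 1000 / exposure_ms                    # 200 FPS at best
total_frames = fps_ceiling * capture_minutes * 60
print(f"up to {total_frames:.0f} frames in {capture_minutes} minutes")   # 48000

for keep in (0.05, 0.10):
    print(f"stacking best {keep:.0%}: ~{total_frames * keep:.0f} frames")
# ~2400-4800 stacked frames - far more than a 3000-frame capture would leave
# after the same 5-10% selection
```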
  25. Care to share what you've found? A simple link will be enough.