Everything posted by vlaiv

  1. Please do understand that the fact that you can represent a function in 3 dimensions does not make it 3d. For a function to be 3d, it must be a function of 3 parameters. The sine function is not a 2d function just because we can draw it on a piece of paper (a 2d medium) or a computer screen (again a 2d medium) - it is a 1d function because it accepts 1 parameter: sin(x). An image is a 2d function because the value / intensity of the image depends on two parameters, x and y, so image = f(x,y). It is not 3d. A point source does indeed form a quasi-Gaussian shape. It is not a true Gaussian shape because it consists (mainly) of an Airy pattern convolved with two Gaussian functions (seeing and mount tracking / guiding performance), however that has nothing to do with a) the image being a 2d function in x and y, or b) your ability or inability to measure something from a profile plot of that function (which is itself a 1d function - not 2d, since you are taking a slice of a 2d function). Instead of muddying the water further with confusing comments, could you please answer one of the questions you've been asked by either Olly or me - namely: present a sub that is under sampled at 0.5"/px, or present a sub that has 1.5" FWHM or less and was made with 150mm of aperture, or explain how x3 per FWHM is optimum sampling given the Nyquist theorem - or rather, given its definition, how is x3 per FWHM related to x2 the maximum spatial frequency of the image?
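    A minimal sketch of the distinction, assuming numpy (all names here are illustrative only):

        import numpy as np

        # A synthetic 64x64 star image: intensity = f(x, y), a function of
        # two parameters - a 2d function.
        y, x = np.mgrid[0:64, 0:64]
        sigma = 3.0
        image = np.exp(-((x - 32)**2 + (y - 32)**2) / (2 * sigma**2))

        # A line profile through the star center is a slice of that 2d function:
        # only one parameter remains, so the profile is a 1d function.
        profile = image[32, :]
        print(profile.shape)  # (64,) - one dimensional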
  2. Not sure what you are trying to say here in response to my assertion that an image is in fact a 2d function - intensity depends on the x and y coordinates of the pixel, and f(x,y) is 2d, not 3d. Further, any image you try to measure will consist of a finite number of samples - it is not a continuous function but a discrete one - and that is what sampling is. The Nyquist sampling theorem deals precisely with that: it gives the criterion for when the original signal can be fully restored from the sampled function (a function that has values only at certain points) and, furthermore, how to properly restore it. Whenever I have taken a line plot of any star, I have always had enough samples to do some sort of meaningful measurement, so I'm not really sure what you mean by "insufficient points". Could you provide an example, please, and explain what you find insufficient for meaningful measurement?
  3. Not misinterpreting anything. It is a 2d function. It's like saying sine is a 2d function because it has height / intensity - it is not, it is a 1d function.
  4. That is something I'd like to see. Most amateur setups are limited to 1.6" FWHM or higher, and that is with twice as much aperture. I would love to see a sub with 1.5" FWHM.
  5. Pixels are not 3d and are certainly not rectangular. For the purpose of this discussion, pixels are point samples. Even if you are referring to camera pixels not being perfect point sampling devices - the effect of that is convolution with pixel blur, which is a much smaller contributing factor than telescope aperture and seeing, and in any case it only adds blur and reduces resolution rather than enhancing it. The 2d sampling case is the same as the 1d case for a rectangular sampling grid, except that we must break it down into X and Y directions - in which case it holds that the X sampling rate must be twice the maximum frequency in the X direction and the Y sampling rate twice the maximum frequency in the Y direction. The optimum 2d sampling grid is hexagonal, but no one is making such a sensor. Since the sensor is a square grid, the X and Y sampling rates are the same, and any wave oriented in a direction other than pure X or Y will have longer X and Y wavelengths (a vector projected onto the unit basis vectors has components no longer than the vector itself), so the original requirement still stands: we need to sample at twice the highest frequency component regardless of wave orientation.
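    To put the projection argument in symbols (a quick sketch, not in the original post): a plane wave across the image can be written as $\cos(2\pi(f_x x + f_y y))$, with frequency vector $(f_x, f_y)$ of magnitude $f = \sqrt{f_x^2 + f_y^2}$. For a wave at angle $\theta$ to the X axis, the components are $f_x = f\cos\theta$ and $f_y = f\sin\theta$, so $f_x \le f$ and $f_y \le f$ for every orientation. Sampling each axis at $2 f_{\max}$ therefore satisfies the Nyquist criterion for all wave orientations at once.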
  6. Of course I have, and for the record, it states the following: "Given a band limited signal, you need to sample at twice the highest frequency component of that signal in order to be able to perfectly restore it." Can you tell me how you would justify "x3 per FWHM" being twice the highest frequency component of a band limited signal?
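    For reference, the theorem in its usual symbolic form: a signal band limited to $f_{\max}$ is perfectly recoverable when the sampling rate satisfies $f_s \ge 2 f_{\max}$, i.e. when the sampling step is $\Delta x \le \frac{1}{2 f_{\max}}$. Nothing in that statement mentions FWHM directly; any "x3 per FWHM" rule would have to be justified by relating FWHM to $f_{\max}$ first.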
  7. Please don't spread misinformation. x3 per FWHM is very far from optimal sampling.
  8. Do a reality check first - examine your subs at F/5 and this pixel size, and compare the working resolution with the actual FWHM you are able to achieve. At F/5, or 750mm of focal length, and 4.54um pixel size, you are working at ~1.25"/px. In theory, with a well corrected scope and good skies, this should be OK for Ha/SII narrowband with 150mm of aperture. OIII will struggle, though, as seeing is often poorer. You also have a fast achromat, which is not a well corrected scope and probably suffers from quite a bit of spherochromatism - so using NB filters won't remove that: there will be some spherical aberration present for sure. In any case, if you are working at 1.25"/px, you should be seeing 1.25 * 1.6 = 2" FWHM stars in your subs. You can check that in your old/current subs. If your star FWHM is larger than two arc seconds, you are already over sampled as is, and adding a barlow won't bring any new detail (it will only hurt your images by lowering SNR).
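    The arithmetic behind those numbers, as a small sketch (the 1.6 ratio of FWHM to sampling rate is the rule of thumb used in this thread):

        # Image scale in arcseconds per pixel from pixel size and focal length.
        # 206.265 comes from 206265 arcsec/radian with the um vs mm unit difference.
        def image_scale(pixel_um, focal_mm):
            return 206.265 * pixel_um / focal_mm

        scale = image_scale(4.54, 750)   # 150mm F/5 achromat, 4.54um pixels
        print(round(scale, 2))           # 1.25 "/px
        print(round(scale * 1.6, 2))     # ~2.0" expected star FWHM at this sampling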
  9. Why not? I mean, in principle - in this particular case it is way too over sampled even with the pixel size of the 460ex, but in general it's not the speed of the optics that counts - it is aperture at a given resolution. Hubble is F/24 if I'm not mistaken.
  10. Is there any particular mechanism by which anyone or anything could distinguish a "slice" as present in such a construct?
  11. Ok, so here is something very interesting. People have an issue with the idea that everything came into existence at one particular time, but the alternative - that everything has existed infinitely long, that it was never created and simply existed without beginning - is perfectly fine?
  12. If I'm not mistaken, you should do the following: 1. Return the 1.25" attachment to your mak. It has some strange thread at the back, and to use it like that you would need a special adapter (you can look one up if you wish - but you need to check the thread on the mak, either online or with a pair of calipers). 2. Use the adapter that came with the flip mirror, which is T2 / M42 x 0.75. From what I can see in the images, you have two options for attaching the flip mirror - one is a male T2 thread and the other is a male M48 (known as a 2" filter thread). If I'm not mistaken, the 1.25" eyepiece end on your mak should have a T2 thread on it. It might also be a male one - in that case you'd need a female-to-female T2 coupler to connect the two. In any case, look up what sort of thread you have on the 1.25" eyepiece side. An alternative would be to get one of these: https://www.firstlightoptics.com/zwo-accessories/zwo-125-t-mount-photography-camera-adapter.html and one of these: https://www.firstlightoptics.com/zwo-accessories/zwo-t2-female-to-t2-female-11mm-extender-ring.html
  13. I think that it really depends on the implementation of how the .cr2 is read. I sometimes get very strange values with command line DCRAW if I specify that I want raw, minimally modified output. In any case, you should not concern yourself with the actual numbers as long as the e/ADU and ADU values match. Now at least we know where the discrepancy comes from, but all that 16bit scaling business is simply unnecessary and causes issues. My belief is that ADC numbers should be kept as they are regardless of the bit format used to record them. Unfortunately, because of the way software works, developers decided to step in and change the numbers to suit people's expectations. If you over expose your camera, you expect to get a white screen, right? If your camera is 12bit and you record that as 16bit, you'll get only 1/16th of the full range that the bit format supports. Now put that image in software that works with 16 bit images (and expects the full 16bit range to be utilized) and you get a very dark gray image. Not what people expect. So developers stepped in and said - ok, let's stretch intensity to 0-1, as that is what is expected (I wrote 0-1 as it is universal regardless of bit format - for 16 bit it would read 0-65535 and in general 0-max). Now we no longer have the numbers that came out of the ADC - but we do have the numbers that the average Joe expects.
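    A short sketch of the two storage conventions described above (values are illustrative):

        # Storing a 12bit ADC value (0-4095) in a 16bit file, two ways.
        adc_value = 4095                  # saturated 12bit pixel

        raw_16bit = adc_value             # kept as-is: uses 1/16th of the 16bit range
        scaled_16bit = adc_value * 16     # "developer" convention: 4095 -> 65520

        # Normalized 0-1 view, as 16bit-aware software displays it:
        print(raw_16bit / 65535)          # ~0.0625 -> dark gray, not what people expect
        print(scaled_16bit / 65535)       # ~1.0 -> white, what the average Joe expects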
  14. You are welcome. I'd just like to point out one thing - e/ADU is used to convert from ADU back to electrons. If you don't know that the ADUs have been altered after the e/ADU was specified, you can never get the correct e-values back. In that sense, e/ADU should be stated for the final ADU values that are output, not just for the first step from the ADC.
  15. First - I'm guessing that we use the term rescale differently. To me, the proper values for a 12bit ADC are in the 0-4095 range, and when we write them in 16bit, in the 0-65535 range, we are scaling them to that. I read scaling as 12bit -> 16bit. I have a sense you are reading it the other way around, and that 16bit -> 12bit is scaling for you? Not that it matters for the e/ADU part, but just so we are on the same page. Now, the important part is: there is no rescaling afterwards. The e/ADU measurement is the last thing "in the chain" - because the ADU values are as they will be at the end of the chain. You can't calculate e/ADU, then change the ADU values and say the e/ADU still applies. This is why I told the story of you handing over data in floating point. The e/ADU for that data must still be valid afterwards, so that I can perform the "backward" transformation and calculate electrons from the ADU readings that I take. In any case, if you have 16e/ADU in 12bit mode, it is perfectly normal to expect 1e/ADU in 16bit mode, provided that the conversion between 12bit mode and 16bit mode is done "the standard way" - by multiplying pixel values by 16 (two to the power of the difference in bit counts, so 2^(16-12) = 2^4 = 16). Here is an example of what might happen: you capture 16000e and you end up with 1000ADU - you do the math: 16000e / 1000ADU = 16e/ADU. Now you do the same, but this time in 16bit mode: you capture 16000e and you end up with 1000ADU, but this time you multiply that number by 16 to scale it to the 0-65535 range and you get 16000ADU. Now you calculate the e/ADU conversion factor as 16000e / 16000ADU and you end up with 1e/ADU. So for the same ISO and the same number of captured electrons, we end up with two different e/ADU values - because we decided to mess with the actual pixel values and scale them.
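    The same example in code form, for anyone who wants to follow the numbers:

        # Gain (e/ADU) before and after the standard 12bit -> 16bit scaling.
        electrons = 16000
        adu_12bit = 1000

        gain_12bit = electrons / adu_12bit     # 16.0 e/ADU
        adu_16bit = adu_12bit * 2**(16 - 12)   # multiply by 16 -> 16000 ADU
        gain_16bit = electrons / adu_16bit     # 1.0 e/ADU

        # Same light, same ISO - two different e/ADU values, purely from scaling.
        print(gain_12bit, gain_16bit)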
  16. ADU is short (confusingly) for analog-digital unit (not sure who named it that way) and represents a unitless number that is assigned to a pixel upon readout. We mostly think of pixels as having values in units of ADU - but that is wrong, as ADU does not have a unit, nor is it itself a unit - meaning ADU from one camera and ADU from another camera can be, and will be, different (unlike, for example, meters or grams measured with different rulers or different scales - the result will be the same). The gain of a camera is given in e/ADU and is a measure of how these particular ADU values are derived. So electron count has a unit, and two different cameras should agree on the electron count (if they have the same QE, or QE has been adjusted for). The same number of electrons and the same gain should always produce the same ADU number. But let's discuss the following case: we have two cameras that are the same (or different - it does not matter in this case). They have a 12bit ADC and can produce a maximum value of 4095. You take one camera and record bright white light that saturates, record the pixel value after gain conversion, and you get 4095 in floating point representation (so you don't know how this fits into 12 or 16 bit numbers). Now you take the second camera and record the same thing, but by convention you decide to exploit the whole 16 bits of the 16bit format you'll be using to record the image, so you multiply the value by 16 and from 4095 you get 65520. You then convert that to floating point representation. You hand over those two images in floating point to me and say: the gain was set equal on these two images. Since I have different ADU values and you told me the gain was equal, the only conclusion to draw is that we had different numbers of electrons to start with - but that is not the case, we had the same number of electrons. How come? Well - e/ADU is valid only for one particular scaling, as scaling changes the ADU value of the pixel. If you had 1000e to start with and you ended up with 4095, then your e/ADU is equal to 1000e / 4095ADU = ~0.2442e/ADU. If you had 1000e to start with and you ended up with 65520 ADUs, then your e/ADU is equal to 1000e / 65520ADU = ~0.01526e/ADU. Although you used the same ISO in this case, your gain in terms of e/ADU changed because you introduced another step - scaling. Makes sense?
  17. If you don't scale your data and leave it as 12bit, then yes. At 16bit scaled (multiplied by 16), it would be very close to 1e/ADU.
  18. Maybe whoever measured e/ADU for ISO800 measured scaled ADU values in the 0-65535 range. In that case the ADU is multiplied by 16, and the values would indeed be ~1e/ADU. So something like 17e/ADU, when the ADU is 16 times larger, would in fact be 17/16 e/ADU. It is not an "error" per se - it is a wrong interpretation of the fact that ISO800 gives gain close to 1. Maybe it does, but only if the values are scaled - if they are not scaled, you get the 15-17 range like you have in that table.
  19. A high e/ADU value actually means low gain, not high. Many electrons per single ADU step means that a high signal produces a low output value - a lot of light = a darker image.
  20. Check out the spot diagram at ~400nm vs the one at 550nm or 600nm (yes, it blows up again at 700nm, but hot blue stars have much stronger emission at shorter wavelengths).
  21. I'm with you on that - if it were not for the working resolution. It is ~2"/px, and such differences should be visible at higher working resolutions, but not so much at 2"/px. In wide field images the FWHM per channel is roughly the same, with very little difference. From the RASA paper - the RASA8 data sheet: 4.55um max RMS spot size equates to 2.34" RMS, which in turn equates to 5.51" FWHM over the field, just from the optics. @ollypenrice can you confirm the above for us? Can you take one of your calibrated subs and measure the average star FWHM on it? I wonder which value you'll get from the measurement. In any case, 5.5" FWHM requires sampling at 3.43"/px - or, at 400mm FL, that is equal to a 6.65um pixel size - and this is without mount and seeing influence. So the telescope alone requires at least that pixel size.
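    The chain of conversions used above, as a sketch (the 2.355 RMS-to-FWHM factor assumes a Gaussian profile, and the /1.6 sampling rule is the one used throughout this thread; small differences from the figures in the post come from intermediate rounding):

        FL = 400.0                          # RASA8 focal length, mm

        rms_um = 4.55                       # max RMS spot size from the data sheet
        rms_arcsec = 206.265 * rms_um / FL  # ~2.35" RMS on the sky
        fwhm_arcsec = rms_arcsec * 2.355    # Gaussian FWHM = 2.355 * RMS -> ~5.5"
        sampling = fwhm_arcsec / 1.6        # ~3.45 "/px required sampling
        pixel_um = sampling * FL / 206.265  # ~6.7um pixel, from the optics alone

        print(rms_arcsec, fwhm_arcsec, sampling, pixel_um)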
  22. I think that beyond some separation there is not much influence from the magnitude difference. Most of the separations you listed are quite large. Only a few of them are less than the width of a full moon - and that is quite a large angular separation (half a degree). Just to get an idea of the separation: take the next clear night and, although Ursa Major is not well positioned, observe the Alcor / Mizar pair - they are very easy to tell apart and have about 11' of separation.
  23. Well, you can't get around physics, and physics says that whatever affects the stars affects the whole image. Here, look at this: if I reduce the image to 40% - which is equivalent to using a 3.76 / 0.4 = 9.4um pixel size - the feature marked with the arrow in the top part is now comparable across all images, and the sharpness of detail on the edge of the trunk looks very similar.
  24. I don't know - to me it all looks mushy and without distinct detail. I pulled 3 different images here from SGL, and did not pay attention to aperture, and sure, some of them are narrowband or enhanced with NB data - but the feature should still be there and should be sharp:
  25. Visibility of line features is not directly related to resolution (it is indirectly related, but I think you know that). It is the same as saying: the angular size of a star is smaller than the resolving power of the telescope, therefore we should not be able to see it. Seeing / detecting is one thing - resolution is another. Resolution is tied to resolving things: having two stars next to each other and identifying them as separate, having two lines next to each other and resolving them as separate features, or resolving the shape of a feature rather than just saying - hey, there is a smudge there. I think that we understand each other perfectly. The resolution we speak of will impact even the transition from a dark to a light region - it won't be a clean, sharp transition any more (much like a line won't be a line but a wider feature, and a star is not a pin point but a circular blob). As far as the different positions go, yes - it's best for Olly to clearly state his position and, even better, provide the example he mentioned when he gets the chance to do so.