Everything posted by vlaiv

  1. I think that you are quite right. 12" RCs and CDKs and above can illuminate full frame. There are also a few other scopes that can - refractors with a suitable corrector (the larger ones in particular).
  2. I'm really surprised. Is that with a field flattener or without one?
  3. So even APS-C requires a flattener. What's the illumination like? I guess it is pretty good at 28mm?
  4. Is that a crop? If it's the full sensor area - that is excellent. As far as I know, a 6" RC can use up to an APS-C sized sensor, or about 30mm of corrected circle?
  5. The C6 probably won't fully illuminate even APS-C. There is nothing you can do about it - no magic attachment will make it work. If you want to illuminate full frame - then you need a scope capable of doing that (not many scopes offer a fully illuminated and corrected full frame field).
  6. If stars are smaller - then the image is sharper. The two are intimately linked. The star profile is the blur profile affecting the image, and tighter stars simply mean less blur. In the end - as you say, the dominant factor is still the atmosphere, and there is little difference between images made with smaller and larger apertures as far as sharpness goes. This changes somewhat in very good seeing, so someone with access to a site with very good seeing will make the most out of a larger aperture.
  7. On the first point - I agree completely. It is indeed "aperture at resolution" that determines the speed of the system. On the second point - I don't completely agree, and in fact there is proof hiding in these two images. Yes, the atmosphere dictates resolution for the most part, but aperture does play a role. This is the zoomed-in C11 image of a very close pair of stars: There is no mistake in this image - those are two stars next to each other. Same region in the FSQ image: Yes, SNR is much worse and noise is present, but still - I don't see two stars there. Resolution is clearly better in the C11 image, as it managed to resolve that stellar pair (see - resolution / resolve).
  8. Well, there is a format that needs to be exploited:
  9. In principle - yes. If we for the moment ignore that there is actual width to each of those bands and that their response is not uniform across these bands (not constant, but rather more bell shaped) - we actually have a very simple system of equations:
     Red_pixel = SII_signal * SII_QE_red + OIII_signal * OIII_QE_red
     Green_pixel = SII_signal * SII_QE_green + OIII_signal * OIII_QE_green
     We have the pixel values and all the QE values - so we can reconstruct the SII and OIII signals, which are the unknowns in the above equations. In fact - with the blue pixel included, we have 3 equations with 2 unknowns. In reality - things are not that simple, as we have LP that is unknown, and we don't have a uniform response at each of these wavelengths (or rather the wavelength ranges that the filters pass), so the above is just an approximation.
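     As a rough illustration, here is a minimal Python/NumPy sketch of solving that over-determined system for a single bayer cell. The QE numbers are made up for the example - real values would be read off the sensor's QE chart - and it ignores LP and the non-uniform response discussed above:

        import numpy as np

        # Hypothetical QE of an OSC sensor at the SII (~672 nm) and OIII (~500 nm) lines.
        # Each row is one pixel colour: [QE at SII, QE at OIII].
        QE = np.array([
            [0.45, 0.05],   # red pixel
            [0.12, 0.60],   # green pixel
            [0.03, 0.55],   # blue pixel
        ])

        # Measured (background-subtracted) pixel values for one bayer cell: R, G, B
        pixel = np.array([120.0, 310.0, 260.0])

        # Least-squares solution of the 3-equations / 2-unknowns system
        signals, *_ = np.linalg.lstsq(QE, pixel, rcond=None)
        sii_signal, oiii_signal = signals
        print(f"SII ~ {sii_signal:.1f}, OIII ~ {oiii_signal:.1f}")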
  10. That poster is a bit misleading. OSC + dual narrowband filters will never be able to do proper separation of the signal like mono + regular NB filters. This has nothing to do with the dual narrowband filters - but rather with the OSC sensors themselves. If you look at the QE chart for any OSC sensor - you will see that every pixel has sensitivity all over the 400-700nm range. Here is the QE graph of the ASI2600. Whenever you capture Ha or SII signal - it will also be picked up by the green and blue pixels. If you are imaging OIII at the same time (which is similarly picked up by the red pixels) - there will be some "crosstalk". Some of the Ha/SII signal will be imprinted in the OIII signal and vice versa. You won't be able to record pure Ha/SII or pure OIII. On the other hand - there is nothing stopping one from using regular NB filters with an OSC camera as well. With careful processing one can even do better than the often assumed sensitivities - 1/4 for Ha/SII and 1/2 for OIII - precisely because there is this "crosstalk" and all the pixels are sensitive to all the wavelengths (although not with the same QE). Just to be clear - I'm not saying that dual band filters are not good (or tri/quad for that matter) - I'm just saying that comparing them to mono+NB or dedicating them exclusively to use with OSC sensors is misleading (why not use dual band with mono as well?).
  11. I have no idea. I used this build of Gimp (at the time there was no official 2.10 build that could handle 32bit per channel data): https://www.partha.com/ and it came with G'mic plugins preinstalled.
  12. What is wrong with the G'mic plugins for Gimp? There is a vast array of noise reduction options - I find wavelet noise reduction to work the best for astronomical images. Do use a layer mask composed of image brightness - so that you denoise only the darkest areas of the image (where SNR is low because of low signal). (Do notice how many different smoothing options there are in the list.)
  13. Given your setup - I think it is probably some sort of tilt in the optical train. The bottom right corner is the least affected (best looking stars) while the opposite corner is the worst. It could be that too much weight is hanging off the focuser, or maybe you should check that everything is square before clamping things down.
  14. Because things in "free fall" don't experience gravity. This is true close to Earth's surface and in orbit. When you start to fall - you feel weightless - the same feeling you get when, for example, an elevator starts moving downwards while you are in it. Free fall does not need to be directed downward - it can be directed sideways - a body orbiting another body is in effective "free fall". You can see this if you imagine a cannon shooting projectiles with ever increasing speed: the cannonball falls to the ground each time - until you reach a certain speed where it is perpetually falling and "missing" the earth (that is V1 - the first cosmic velocity, the speed that puts a thing into orbit). Shoot it faster still - beyond escape velocity - and it will fly away to infinity. Astronauts in space together with the ISS or a space shuttle are in orbit, so they are in free fall and don't feel gravity's pull. As for moisture on a window - well, it condenses there into drops - and those drops behave the same as far as gravity goes. However, gravity is not the only thing acting on bodies. Any sort of accelerated motion will cause those drops to slide across the glass and create trails. It can be the spacecraft accelerating / decelerating or simply rotating (centrifugal force). There are other forces that can move water as well - like capillary force and surface tension.
  15. I just checked the website for the OpenAstroMount and I see that it is a friction belt, not a timing belt - which is good as far as any meshing error goes - no meshing, no error. I do wonder how much friction there is - will the belt slip at some point? In any case, given that it is an open source project and is in part 3d printed - if any issues arise in use, some modifications can be made to mitigate them. I've seen PHD2 guide RMS of 0.5-0.7" quoted, and that must be from tests, which would indicate that the mount works as intended.
  16. This is really interesting. I actually have a concern about this bit. A small error on the last stage can be significant in terms of absolute error. When the belt is placed at the first reduction stage - any error it produces is small in magnitude (but at a higher frequency - which puts strain on the guide system). A belt at the last stage can produce a significant amount of error. If meshing is not very precise there could be a fraction of a degree of peak to peak error (which is much larger than, say, the half an arc minute often found in even cheap mounts).
  17. https://www.teleskop-express.de/shop/product_info.php/info/p14884_Omegon-Pro-Powerbank-48k-LiFePO4---12-V--13-Ah--154-Wh.html In general - look at the page where that item is listed (power supply items) - there are plenty of options. Maybe the simplest one is to get a Long VRLA (or other) 12V battery with the wanted capacity. Here is the product brief for the 17Ah model: https://www.kunglong.com/upload/product_pdf/en/LG17-12.pdf?v=3zds3e9r4 Do be careful that with such batteries you actually want higher capacity, so you don't discharge them 100%. With a depth of discharge of say 30% (which would mean a 36Ah battery if you calculate that you'll need 12Ah) - you'll get 1200 cycles before capacity drops below 80% (and since it is larger than you need - it will still have plenty of capacity for your needs).
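     As a quick sanity check on that sizing (a minimal sketch - the 12Ah per session figure is just the example consumption from above), the required capacity is simply the needed Ah divided by the target depth of discharge:

        # Rough battery sizing for a shallow depth of discharge (DoD)
        needed_ah = 12.0      # example consumption per session (Ah), as above
        target_dod = 0.33     # use only about a third of the capacity per night

        required_capacity = needed_ah / target_dod
        print(f"Battery of ~{required_capacity:.0f} Ah keeps DoD near {target_dod:.0%}")  # ~36 Ah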
  18. I guess this almost fits the budget: https://www.firstlightoptics.com/evostar/sky-watcher-mercury-707-az-telescope.html It has a couple of things going for it. It looks like a "proper" scope (long skinny tube). It is lightweight and easy to set up and use. It will show plenty to get someone really interested in astronomy (the moon, planets and brighter deep sky objects).
  19. For 2.9um pixels - optimum sampling is at F/14.5, so yes, it is a bit over sampled. It is over sampled by a factor of 17/14.5 = ~1.1724 - so the signal should end at 2.34px / cycle, or about here: You can verify that by looking at the pixels per cycle readout of that FFT in ImageJ (or alternatively - it will be at 1024 / 2.34 = ~438px away from the center of the image). Ok, so it is that outer ring after all and not the inner one. The inner ring must be a sharpening artifact.
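      For reference, the F/14.5 figure comes from the critical sampling condition F_ratio = 2 * pixel_size / wavelength. A quick check in Python (assuming 400nm as the shortest wavelength of interest):

        pixel_um = 2.9          # pixel size in microns
        wavelength_um = 0.4     # assumed shortest wavelength of interest (400 nm)
        fft_size = 1024         # width of the FFT image used above

        critical_f = 2 * pixel_um / wavelength_um
        oversampling = 17 / critical_f
        px_per_cycle = 2 * oversampling
        print(f"Critical F-ratio: F/{critical_f:.1f}")               # F/14.5
        print(f"Oversampling at F/17: x{oversampling:.4f}")          # ~1.1724
        print(f"Data ends at {px_per_cycle:.2f} px/cycle, "
              f"~{fft_size / px_per_cycle:.0f} px from the FFT centre")  # ~437-438 px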
  20. It does look good - 2.06px/cycle is very close to 2px/cycle. Have you done the F/ratio vs pixel size math on it to confirm the sampling rate? It might be that the inner circle is the actual data limit and the outer circle is some sort of stacking artifact.
      It is always guesswork to a degree. The resulting MTF of the image is the combination of the telescope MTF and the resulting seeing from all the little seeing distortions combined when stacking subs. It will change whenever we change the subs that go into the stack - and we hope that they average out to something close to a gaussian (math says that in the limit - whatever the seeing is, stacking will tend to a gaussian shape). However, we don't know the sigma/FWHM of that gaussian - and that is part of the guesswork. Different algorithms approach this in different ways.
      Wavelets decompose detail into several images. There is a Gimp plugin that does this - it decomposes the image into several layers, each layer containing an image composed of a set of frequencies. In the graph, it would look something like this (theoretical exact case, but I don't think wavelets manage to decompose perfectly): So we get 6 images - each consisting of a certain set of frequencies. Then when you sharpen - or move a slider in Registax - you multiply that segment by some constant - and you aim to get that constant just right, so you end up with something like this: (each part of the original curve is raised back to some position - hopefully close to where it should be) - the more layers there are, the better the restoration, or the closer to the original curve. This is just a simplified explanation - the curves don't really look straight. For example in Gaussian wavelets, decomposition is done with a gaussian kernel, so those boxes actually look something like this: (yes, those squiggly lines are supposed to be gaussian bell shapes).
      Deconvolution, on the other hand, handles things differently. In fact - there is no single deconvolution algorithm, and basic deconvolution (which might well be the best for this application) is simply division. There is a mathematical relationship between the spatial and frequency domains which says that convolution in one domain is multiplication in the other (and vice versa). So convolution in the spatial domain is multiplication in the frequency domain, and therefore - one could simply generate the above curve by some means, use it to divide the Fourier transform (division being the inverse of multiplication) and then do the inverse Fourier transform. Other deconvolution algorithms try to deal with the problem in the spatial domain. They examine the PSF and try to reconstruct the image based on the blurred image and a guessed PSF - they solve the question: what would the image need to look like so that blurring it with this PSF produces this result? Often they use a probability approach because of the random noise involved in the process. They ask: what image has the highest probability of being the solution, given these starting conditions - and then use clever math to solve that.
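      To make the "division in the frequency domain" idea concrete, here is a minimal NumPy sketch - a toy inverse filter with a guessed Gaussian PSF and a small regularisation term. Real data would need something more robust (Wiener deconvolution, for example) because plain division blows up the noise where the MTF is near zero:

        import numpy as np

        def gaussian_psf(size, sigma):
            # Centered 2D Gaussian kernel, normalised to unit sum
            y, x = np.indices((size, size)) - size // 2
            psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
            return psf / psf.sum()

        def inverse_filter(blurred, psf, eps=1e-3):
            # Basic frequency-domain deconvolution: divide by the PSF's transfer function.
            # eps keeps the division from exploding where the MTF is near zero (noise!)
            otf = np.fft.fft2(np.fft.ifftshift(psf))
            restored = np.fft.ifft2(np.fft.fft2(blurred) / (otf + eps))
            return np.real(restored)

        # Toy example: blur a random "image" with a known PSF, then undo the blur
        rng = np.random.default_rng(0)
        image = rng.random((256, 256))
        psf = gaussian_psf(256, sigma=2.0)
        blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))
        restored = inverse_filter(blurred, psf)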
  21. This question actually has a very counterintuitive and long answer. White balance as a concept is completely useless in astrophotography, but there is another thing you can do if you want to get accurate color in your images. We use white balance to compensate for the fact that a DSLR is a good measuring device - while our visual system is not. We adapt to viewing conditions to some extent. If you take a yellowish light source and illuminate a room with it, at some point our brain will start telling us that paper is still white - although at that point it is not - it is yellowish, but our brain tricks us into thinking it is white. The camera can't be fooled and it will record the scene as is - not the way we remember it or expect it to be. We use white balance to correct for this and again make what we think should look white - white - under different viewing conditions. In astrophotography - we should not care what our brain would tell us the color is, for several reasons. First - we can't change the illumination in outer space. There is no 3600K light bulb illuminating the scene - most of the light is actually generated by the objects out there, and what is reflected light (like reflection nebulae) reflects local light that can't be changed. Second - we can never go there to try to match what we would see in that case. The best we can do is try to match the color of the light as it reaches us. For this a DSLR is excellent - it will record the color as is and we only need to calibrate it properly. It's a bit like using any sort of measurement device. If we want our device to measure accurately - we need to calibrate it, to conform it to a standard. This is what color calibration does - and it is not as simple as setting the white balance. For this reason - most people don't bother with it and just set color in post processing as they see fit.
  22. The difference between FFTJ and the FFT plugin in ImageJ is how they handle the data. Both do the same thing (although FFTJ does it with greater precision), but with FFTJ you have a range of options for the output data. You can choose real and imaginary parts, or you can choose the frequency or power spectrum (power being the square of the frequency spectrum) - which are linear. You can also choose a logarithmic representation of the power or frequency spectrum. The standard FFT is a logarithmic frequency spectrum scaled in a certain way (not sure what the scaling function is - but minimum values hover around 90 while the max is 255). FFTJ leaves actual floating point numbers - and how they look will depend on how you set the white and black points. Here is an example - I took a screenshot of the image and did both FFTs on it - and the result is the same, except for the white and black points. That is quite normal for a processed image. I'll try to explain what is happening. It is the effect of sharpening. An image from a telescope has this sort of signature: where the curve represents a top / max / clamping function - no signal can go above it, and the actual signal is limited by that function (or multiplied by it). An ideally sharp image would have this sort of shape (or clamping function): This means that all frequencies remain the same - none is attenuated. With sharpening, what we really do is this: we try to boost each frequency back to its original unaltered / unscaled value (by multiplying it by some constant - which is the inverse of the original MTF of the image). The problem is that this MTF is unknown to us and depends on the combination of the telescope MTF and the seeing MTF (whatever seeing was left in the stack - we don't get all perfect frames in our stacks). For this reason - we try to guess the MTF in different ways - we move sliders in Registax or guess the sigma of the Gaussian for the deconvolution kernel and so on - until we are visually happy with the result. The thing is - what satisfies us as a result might not be the actual inverse of the blur that was applied. What this shows is that your processing, instead of this: resulted in something like this: You did not manage to restore all frequencies properly - there was a "gap" left in there. Not your fault - it is just that the settings you applied when sharpening resulted in such a restoration.
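      For anyone wanting to inspect this outside ImageJ, here is a minimal sketch of a log-scaled frequency spectrum in NumPy (the exact scaling ImageJ uses is not reproduced here - this is just an approximation, good enough to see where the data ends):

        import numpy as np

        def log_spectrum(img):
            # Shifted log-magnitude spectrum, rescaled to 0-255 for display
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
            log_spec = np.log1p(spectrum)
            scaled = 255 * (log_spec - log_spec.min()) / (log_spec.max() - log_spec.min())
            return scaled.astype(np.uint8)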
  23. No particular reason - I just take a point (near the equator) where the edge is nicely defined and then drag a line to the other side, trying to get it to go through the center. I don't think that the odd pixel here and there will make much of a difference. According to what I've found, the difference seems to be (142984 - 133708) / 142984 = ~6.5%, so that is quite significant in general terms, but on a 222px image that will be ~14px depending on the direction of measurement? (That is a bit more than I thought it would be.) Yep, further measurement confirms that there is about 14px of difference. However - I was not too far off with the 222 measurement (maybe 2% error).
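      That is just the relative difference between the two diameter values (presumably the equatorial and polar diameters in km), applied to the measured disk size:

        equatorial, polar = 142984, 133708      # the two diameter values compared above
        rel_diff = (equatorial - polar) / equatorial
        print(f"Relative difference: {rel_diff:.1%}")            # ~6.5%
        print(f"On a 222 px disk: ~{222 * rel_diff:.0f} px")     # ~14 px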
  24. There are two types of drizzling. The first is the bayer drizzle that AS!3 uses to debayer data, and it does its job well. It makes an OSC camera have the same resolution as a mono camera. This is related to pixel size and the fact that color pixels are spaced further apart than in the mono version (every other pixel is red and every other is blue - green similarly, but it is "more dense"). This is done by default for OSC data and you don't have to turn anything on. The second is "regular" drizzle. This kind of processing requires under sampled data and in my view - it is questionable whether it works at all. It was designed for Hubble, where very precise sub pixel dithering can be employed - and there was no atmosphere to mess things up. In either case - drizzle simply won't do anything useful to your data (even if it works) - as you are over sampled to start with and not under sampled. In this case that solely depends on processing. The data is not processed in the same way. Wavelets are done more conservatively in this second go. Maybe post the 16bit raw stack - without any sharpening done (not even in AS!3) - for people to process. Maybe there is more in the data. Many experienced imagers sometimes extract more detail from the same data than I am able to, so there is certainly that factor.
  25. Here is the FFT of that other image added later in the post: It again shows that faint ring - maybe a bit better than the last one. If I run a mean filter on it to remove some noise - maybe it will be easier to assess where the edge of the data is. Ok, I've done a trick - I increased brightness / contrast until the circle is readily detectable: Placing the cursor on the edge says it is 2.64 pixels per cycle (the ideal sampling rate is 2 pixels per cycle). The ratio therefore is 2/2.64 = ~0.76. We calculated above that the theoretical max is 0.8 at this sampling rate, so the data falls just a bit short of that theoretical max.
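      The same ratio as a quick check (a trivial sketch; the 0.8 theoretical max is the value quoted earlier in the thread):

        px_per_cycle = 2.64     # where the data edge sits in the FFT, read off the cursor
        ideal = 2.0             # critical sampling: 2 pixels per cycle
        print(f"Fraction of critical sampling reached: {ideal / px_per_cycle:.2f}")  # ~0.76 vs ~0.8 max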