
Everything posted by vlaiv

  1. Too red and too linear to be IFN - classic LP gradient.
  2. Depends ... If we are talking about a triplet or ED doublet refractor with a good figure vs an SCT with a good figure - the refractor wins in almost all areas. With an achromatic refractor it will depend on the F/ratio of the scope. The problem is that it is hard to find refractors matching the aperture of commonly available SCTs. The C5 is the smallest SCT, and the next is the C6. You can find apo refractors in that range that don't cost an arm and a leg. They will still cost more than the SCT (even twice or three times more in the 6" range). In 8" and above, apo refractors will be seriously expensive. For the same aperture, the refractor will gather more light: the SCT has two mirrors, a central obstruction and a corrector plate, and mirrors lose more light to imperfect reflectivity than a modern coated lens loses to transmission (see the rough sketch below). A good lens will also be sharper than an SCT, with better correction - SCTs have some spherical aberration that depends on focus position because of the moving primary mirror: only one distance between the mirrors gives best correction, but you move the mirror to focus. Having no central obstruction, the refractor will also give better low-contrast (fine detail) performance. The SCT, on the other hand, will be easier to mount due to its shorter tube, and is lighter for the same aperture.
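To put rough numbers on the light-gathering point, here is a quick Python sketch. The coating and obstruction figures are assumed, illustrative values only, not measurements of any particular scope:

    import math

    D = 150.0  # aperture in mm, same for both scopes

    # Refractor: assume ~99% transmission per coated air-glass surface,
    # 4 surfaces for a doublet
    refractor = math.pi * (D / 2) ** 2 * 0.99 ** 4

    # SCT: assume two ~92% reflective mirrors, ~99% per corrector surface,
    # and a central obstruction of ~34% of the aperture by diameter
    clear_area = math.pi * ((D / 2) ** 2 - (0.34 * D / 2) ** 2)
    sct = clear_area * 0.92 ** 2 * 0.99 ** 2

    print(refractor / sct)  # ~1.3 - roughly 30% more light for the refractor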
  3. It seems to be popular these days - we have one of our bridges lit up as well, similarly distasteful, though luckily with much less LP produced.
  4. Just a couple of points. I'm having a hard time identifying hot pixels in this image - the star field is so dense that I'm never sure whether something is a hot pixel or a star. Hot pixels will show in your darks - stretch your master dark and note the positions of hot pixels in it; they should roughly correspond to pixels in the final image (allowing for alignment shifts). You can't really do cosmetic correction at this sampling resolution because stars are tiny, so that is out of the question. Did you dither your subs? If you plan to use sigma-reject stacking, you need your subs dithered properly - and that means quite a shift at this sampling rate. If you move your sub by just a few arc seconds, that is no dither at all when your imaging resolution is about 4"/px (see the sketch below).
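To illustrate the dither point with numbers, here is a trivial sketch (the values are made up for illustration):

    def dither_in_pixels(dither_arcsec, sampling_arcsec_per_px):
        # How far the frame actually moves, in pixels
        return dither_arcsec / sampling_arcsec_per_px

    # At 4"/px, a 3" dither moves the frame less than one pixel - no dither at all
    print(dither_in_pixels(3, 4))   # 0.75 px
    # To shift by ~10 px at this scale you need a 40" dither
    print(dither_in_pixels(40, 4))  # 10.0 px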
  5. First check that you have your debayer settings right. Most of the time LP is dominant in the red part of the spectrum, and most people have issues with a red cast rather than a green one. I'm assuming you in fact have the Altair 183C rather than 138C and the above is a typo - the Altair website says the proper bayer pattern is: GBRG. It is either that or GRBG. edit: RGGB (the image can only be flipped upside down by software, not left to right, and that depends on whether one takes the Y coordinate to go up or down. X always goes to the right).

If you are sure about your bayer settings and have them confirmed, here is an "in depth" explanation of how color works and how to get proper color balance. Even once you get your debayer settings right, it is important to understand this. Each sensor has a different sensitivity curve, regardless of whether it is mono + filters or an OSC sensor. R, G and B from the sensor do not map directly to R, G and B for display. Here is what the published sensor response looks like for the 183 model: And here is what the "response" looks like for sRGB (the color space most likely to be used by your computer, and therefore the color space you should transform to when working with computer screens):

Now, don't be confused by the fact that there is a "negative" part of the curve in the sRGB matching functions. The sRGB color space has a smaller gamut than human vision, and the three primaries used for sRGB (a particular blue, red and green) don't allow all colors to be reproduced. Here is the sRGB color gamut compared to human vision: sRGB can only display the section inside the inner triangle, while humans can perceive everything shown as color - a much larger "area" of this chromaticity diagram. You need to "subtract" some light to get colors that the eye can see but sRGB can't reproduce. In any case, that should not worry you much. The important thing to note is that red and blue are "higher" than green in this graph, while if you examine the 183 spectral response, green is higher than the other two. Raw color information from the sensor is not going to represent true color - you need to do color calibration to transform raw color information into the sRGB color space.

Most people don't do this; they adjust color balance as they please (reduce green, boost blue and so on - until they get the result they want). There are other ways to accurately transform raw color into sRGB (linear - it needs gamma adjustment later). One way is star color calibration - PI has a script for this, for example. Another is to use a precomputed matrix (from someone who did general color calibration for this sensor - this works only if you use the same filters as that person), or alternatively to use color cards. You can purchase standard color cards designed exactly for color calibration - photographers use them to calibrate equipment. It looks something like this: you take such a card, provide uniform daylight illumination (or a specific illuminant like D65), and shoot a picture of it with your sensor. From the actual raw color values and the expected color values for the chart you can derive the transform matrix. Btw, the transformation is as simple as a channel mixer and goes something like this:

sRGB_red = a * raw_red + b * raw_green + c * raw_blue
sRGB_green = d * raw_red + e * raw_green + f * raw_blue
...

so you need nine numbers a, b, c, ..., i that represent your color transform matrix.
Bottom line: don't expect raw color information to be properly white balanced - you need to do white balance on your data to get good results. Some ways of doing color balancing will produce accurate color (star color calibration, color cards, ...), while others will only be approximate.
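For the curious, here is a minimal numpy sketch of applying such a transform matrix. The matrix values are placeholders - the nine real numbers would come from a color card or star color calibration as described above:

    import numpy as np

    # Placeholder 3x3 matrix (rows: sRGB R, G, B; columns: raw R, G, B)
    M = np.array([[ 1.6, -0.4, -0.2],
                  [-0.3,  1.5, -0.2],
                  [-0.1, -0.5,  1.6]])

    def raw_to_linear_srgb(raw_rgb):
        # raw_rgb: H x W x 3 array of linear raw sensor values
        return np.einsum('ij,hwj->hwi', M, raw_rgb)

    # The result is linear sRGB - gamma correction still has to be applied afterwards.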
  6. Ok, so here are a couple of measurements taken from some of my subs: Ha filter, night of good seeing - 4.7px at ~0.483"/px (I round that up to 0.5"/px when talking about it, but it is closer to 0.48"/px), which gives ~2.27" FWHM. OIII filter, night of good seeing - 4.63px = ~2.24" FWHM. Ha filter on a night of poor seeing - 7.87px = ~3.8" FWHM. Lum filter (actually IDAS LPS P2), poor seeing - 5.91px = ~2.86" FWHM. We could say that it ranges from 2.2" FWHM to 3.8+" FWHM, or expressed as effective resolution, from 1.375"/px to over 2.375"/px. One could probably fare worse if seeing is really bad - but who would want to image in those conditions?

Just for comparison, here is the theoretically expected FWHM for my conditions - 8" aperture, around 0.5" RMS guide precision, and let's say seeing in the 1.2" - 2" range. Star FWHM for ideal optics under 1.2" seeing and 0.5" RMS guiding should be 1.78", while for 2" seeing it is 2.39" (see the sketch below). My values are a bit higher than this, and while I get rather OK seeing forecasts, I think it's down to local thermals that bump up my FWHM values (I'm surrounded by houses and bodies of water - the Danube is right in my imaging path, about 1-2 km away). This is what meteoblue forecasts for this evening, for example (gray column is seeing in arc seconds):

Like I said, even in the best conditions I'm still in the 1.3"/px zone. Maybe things will change once I move to the countryside at a bit higher elevation and get a better mount. I have the CCD47 reducer; I tried it once and did not like it, but that could be due to tilt. In the meantime I upgraded the focuser on my RC to a 2.5" one with a threaded connection, so I will have to try again, but I'm not expecting much from it. It turns out that it will not correct fully if placed at the distance that gives x0.67 reduction, so most people place it closer, for x0.7-x0.72 reduction. It is not a corrector, and if you apply x0.67 reduction, a 22mm diagonal sensor sees close to a 33mm circle of the native field - and anything above 30mm on that scope requires field flattening and correction; even below 30mm might not be very good. If I were to look at a reducer for the RC again (and I will at some point) - I would look at the x0.75 Riccardi FF/FR. It seems to work with this scope as well, and from what I've read about the combo it works well.
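The theoretical figures above can be reproduced with a simple quadrature model - a sketch assuming the Gaussian approximation, where seeing FWHM, guiding error (RMS converted to FWHM) and the Airy disk FWHM (~1.02 * lambda / D) add in quadrature:

    import math

    def expected_fwhm(seeing_fwhm, guide_rms, aperture_m, wavelength=550e-9):
        guiding_fwhm = 2.355 * guide_rms  # Gaussian RMS -> FWHM
        airy_fwhm = math.degrees(1.02 * wavelength / aperture_m) * 3600
        return math.sqrt(seeing_fwhm ** 2 + guiding_fwhm ** 2 + airy_fwhm ** 2)

    print(expected_fwhm(1.2, 0.5, 0.2))  # ~1.78" for an 8" aperture
    print(expected_fwhm(2.0, 0.5, 0.2))  # ~2.39"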
  7. Catadioptric is the general term for all telescope designs using both mirrors and lenses, or rather reflective and refractive elements, in the base design (that means without eyepieces / barlows / detachable reducers / finders). The other two basic design families are reflectors and refractors. SCTs, Maksutovs (both Mak-Cass and Mak-Newt) and some other designs like the Bird-Jones are all catadioptric designs.
  8. A very good estimate of the optimum sampling rate is about FWHM / 1.6. That is based on the Nyquist sampling theorem and a Gaussian approximation of the star PSF (and its Fourier transform). If, for example, star FWHM in your stack is 2.8" (individual subs will differ in average FWHM, but you can do a "test round" of stacking to measure FWHM in the resulting stack - just convert to arc seconds based on your initial sampling rate), then the optimum sampling rate is 2.8 / 1.6 = 1.75"/px. Most of the time you won't be able to bin to the exact value based on FWHM, and I advocate going "a bit over" rather than "a bit under". If you have, say, 0.5"/px sampling and your ideal sampling rate is 1.2"/px - I would rather bin x3 and go for 1.5"/px than bin x2 and go for 1"/px. First, it will raise SNR by a larger factor, and second, the image will need less sharpening, if any.
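As a sketch, the rule of thumb and the "a bit over" advice look like this (function names are mine, for illustration):

    import math

    def optimal_sampling(stack_fwhm_arcsec):
        # Rule of thumb from above: FWHM / 1.6
        return stack_fwhm_arcsec / 1.6

    def pick_bin_factor(current_scale, target_scale):
        # Round up so we land "a bit over" the target rather than under
        return math.ceil(target_scale / current_scale)

    print(optimal_sampling(2.8))      # 1.75 "/px
    print(pick_bin_factor(0.5, 1.2))  # 3 -> 0.5 * 3 = 1.5 "/px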
  9. There is a very small benefit in terms of pixel blur if you process your subs right. However, that is not the main reason why I image at that resolution. I image at that resolution because that is what my setup gives me, and I can bin the image to get to the resolution that my guiding and sky allow. When I was choosing my setup, resolution was only one of the concerns, and in hindsight I probably overestimated things - 1"/px is really hard to reach (FWHM of 1.6") even if I sometimes do get seeing in the 1" FWHM zone (0.8-1.2" FWHM). I might be able to utilize this resolution more often once I move away from the city and get a better mount (aiming at a Mesu 200). The good thing is that one can use a focal reducer to widen the field with this combination (up to 30mm of field will be usable and the sensor is only ~22mm diagonal), and I'm working on fractional binning - that will get me optimal resolution for a given night (or target).

There are other reasons why I've chosen this setup; I'll name a few. I wanted 8" of aperture that could be used on an HEQ5 mount. I tried an 8" F/6 newtonian and while it worked, it was not a stable platform. That meant an F/4 scope - which is very demanding both on collimation and on the coma corrector (design and placement). When I did my research, I found that coma correctors don't really provide perfect correction - some have issues with spherical aberration, some correct coma only out to a certain distance from center, and most are designed with very short back focus suitable for DSLR-type cameras (55mm) and not for an OAG + filters, etc. Another option was an SCT - but I don't like those in an imaging role. The RC, on the other hand, is a very good imaging instrument: it is well baffled, won't dew up easily (it has happened to me just once - dew on the secondary), and it is compact, so it is easier to mount and guide. It does not need any corrective optics on a sensor the size of the ASI1600 - it has a fairly flat field and round stars. There are additional things I value about this scope that are not related to imaging (but rather to use as a science instrument). It is simply the combination of scope and camera that I'm most pleased with, and it forces me to shoot at 0.5"/px and bin later, and that is fine with me. I do have another scope that I use with the ASI1600 for wide field imaging at a resolution of 2"/px - a small frac.
  10. The only issue I have here at the moment is that there is quite a bit of fog in the evening - high humidity and probably high air pressure. With LP, I can hardly make out Orion's belt in the evening. I don't know why it is so humid - we haven't had rain in quite a while. That is just because I use yr.no as my weather provider - they give forecasts for much of Europe and are precise enough.
  11. Over here it's been quite interesting - 25-26C+ during the day. We had the warmest October day in the past 60+ years a couple of days ago, with temperatures over 28C. The heating season starts around the 15th of October (for those on district / central heating) and most people who do their own heating start even earlier than that - the second week of October. The average daily temperature for October here is about 12.4C; in the last two weeks we did not get down to that temperature even in the early mornings. It's also much drier than usual. The warm weather will apparently continue until the beginning of next week. Btw, I'm also currently in my shorts.
  12. It could still be a Bird-Jones type telescope - spherical primary and corrector lens, only with an F/5 primary instead of a faster F/4 like the TS example above. Maybe in this slower configuration it is better optically? Or it could be a regular newtonian with a barlow. But what would be the point of that? I mean, you can buy a regular 6" F/5 and use a barlow on planets, and use it without the barlow to get a wider field.
  13. We could argue that their description is another "clue" to the design of that specific telescope. A Bird-Jones does not use a simple barlow lens. It has a spherical primary (rather fast - F/4 perhaps) and a corrector lens. It is a bit similar to a Mak-Newtonian - except the Mak-Newt uses a full aperture corrector plate and modifies the wavefront before the primary mirror, while the Bird-Jones modifies the wavefront "in the converging beam" (after the primary) - so it has a different curve on the corrector lens, and as a consequence the corrector also needs to extend the focal length. In any case it is not a simple barlow lens and should not be called one. There is no reason why a regular barlow lens can't be mounted on a parabolic newtonian before the focuser at the proper distance - technically that would also be considered a catadioptric design, combining reflective and refractive elements.
  14. That Pollux does not look like a Bird-Jones. It looks more like an F/5 newtonian with a barlow lens. A Bird-Jones design should be more compact. It might be that the image on the Bresser website is not the actual item, but compare that image: To this telescope sold on the TS website: The ratio of diameter to physical OTA length doesn't match between the two, and the one on the Bresser website looks like a regular F/5.
  15. Not sure it has anything to do with a Windows update. First, let's address the issue of noise everywhere except the top part of the image. The noise you are seeing is background noise consisting of read noise, dark current noise (both present across the whole frame) and LP noise. The LP signal often has a gradient to it because the sky is not uniformly lit by ground light sources - there is more LP closer to the horizon and less near the zenith, and that creates a gradient. You can see a gradient running from the top of the image down to the bottom - that is most likely an LP gradient. Your level of stretch is such that it pushed the top part of the image (or rather its background) dark enough that the noise does not stand out, while the bottom of the image is still bright enough for the noise to show. Try removing the gradient first and doing a less aggressive histogram stretch so the background noise is better controlled.

Now let's look at the pattern in the image. My guess is that it is not related to the sensor at all; it is due to the visible noise combined with another effect. You are probably guiding, but your polar alignment is not spot on. This creates a slight rotation between subs. Slight rotation between subs can also happen if you have cone / orthogonality error - after a meridian flip the frame will be slightly rotated. If frames gradually change rotation, it's due to PA. If one side of the pier has one orientation and the other side after the meridian flip is slightly rotated, it is due to cone / orthogonality error (I'm not an expert in that area - I still don't know exactly why it happens, but I know it sometimes does).

Now, what does that have to do with the grid pattern? When you stack your images, subs need to be aligned, so some of them need to be "rotated back" to match the reference frame. Software uses a certain interpolation algorithm to do this, and the choice of algorithm can cause the grid pattern - bilinear resampling in particular. Here is a synthetic example done on pure noise and stretched to show the grid forming: This is pure gaussian noise (otherwise no pattern is visible), rotated by 1 degree using bilinear resampling, and stretched to show the grid clearly. Depending on the rotation angle, the grid will be coarser or finer. That is what I believe is going on, but I might be wrong. If you want to lessen this effect, use bicubic resampling or a more advanced resampling technique (Lanczos, for example, is excellent).
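The synthetic demo is easy to reproduce - a sketch assuming numpy and scipy, where order=1 selects bilinear interpolation:

    import numpy as np
    from scipy.ndimage import rotate

    rng = np.random.default_rng(42)
    noise = rng.normal(1000, 50, size=(512, 512))  # pure gaussian noise

    # Bilinear resampling (order=1) - this is what creates the grid
    bilinear = rotate(noise, angle=1.0, reshape=False, order=1)

    # Bicubic (order=3) produces a much weaker pattern
    bicubic = rotate(noise, angle=1.0, reshape=False, order=3)

    # A hard stretch around the mean makes the grid visible
    stretched = np.clip((bilinear - 995.0) / 10.0, 0.0, 1.0)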
  16. It looks like you already got one? There is nothing wrong with the design itself - it has certain characteristics, much like any other design out there. With this design, however, low cost implementations have proven to be plagued by poor execution - not the fault of the design but of the manufacturing process and attention to detail. Not sure what you mean by longer term issues: if it's not performing adequately, it will be that way from the start; it will not suddenly, or over some period of time, start performing poorly on its own. Since you already have it - and one could say you are lucky not to have much experience yet, so you won't be able to tell straight away whether the optics are poor - just use it until you are ready to replace it. Do be careful, however, about blaming things on the scope, as not everything will be down to it. Seeing can often be mistaken for poor optical quality, especially by novice observers (I'm guilty of that even with quite a bit of observing under my belt). Just use it in the way that is most pleasing - that gives the best image - and as your observing skills progress it will become more apparent what is due to seeing and what is down to the scope's optics.
  17. I would not call it dramatic. It depends on how far the things you want to present in the image - the signal - sit above the noise. Doubling the amount of data has a rather straightforward consequence: it improves SNR by a factor of x1.41... (the square root of two). It will always have that effect, regardless of the SNR of the original image. If you have an image with an SNR of, say, 30, you will end up with an image with an SNR of ~43. Visually there will be less difference than, for example, the case where your original image has an SNR of 4 and you increase that to 5.6. In relative terms it is the same increase in SNR, but if your SNR is already high, it will visually make less difference, while if SNR is low to start with, such an improvement can be considerable - it can even pull the data from the "unrecognizable" region into the "starting to recognize features" region. Here is an example with low base SNR: Here is an example with higher base SNR: And here is the same image as above (higher base SNR) with a slightly different linear stretch: This goes to show that doubling the amount of data can produce different results depending on the base SNR and also on the level of processing / stretch. In the first case it makes unreadable text almost readable (it is easier to figure out what it says in the right image). In the second example the target is rendered the same; the difference is only in the quality of the background. In the third example, if the data is carefully stretched, you can see the text, and the backgrounds look almost the same - in fact it looks like there is almost no difference at all. And in all three cases we used the same increase in the amount of data - doubling it.
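The sqrt(2) improvement is easy to verify numerically - a sketch on synthetic data (arbitrary signal and noise values):

    import numpy as np

    rng = np.random.default_rng(0)
    signal, sigma = 10.0, 5.0  # made-up target signal and per-sub noise

    def stacked_snr(n_subs):
        subs = signal + rng.normal(0, sigma, size=(n_subs, 100_000))
        stack = subs.mean(axis=0)
        return stack.mean() / stack.std()

    print(stacked_snr(16))                    # base SNR, ~8
    print(stacked_snr(32) / stacked_snr(16))  # ratio ~1.41 = sqrt(2)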
  18. Ok, no, I was not trying to imply that I'll do some sort of "magic" and the image will look better. You indicated that you applied an equal histogram stretch as closely as possible, but in reality it's quite different - when blinking the scaled-down version of the native and binned images, there is quite a bit of variation in brightness, i.e. a different level of stretch. What I wanted to do is make a "split screen" type of image while still linear, and then post that, so the same level of histogram stretch applies to both halves of one image. Here is a blinking gif showing the difference in level of stretch:
  19. How about not doing the histogram stretch and DBE? Just post the original stack while it is still linear. I can then bin it myself and show the difference, both at small size and enlarged.
  20. Yes, gain is the issue. Here is a comparison of the information from the fits headers for darks and lights: The darks were taken at gain 0 while the lights were taken at gain 90. There is also a slight temperature mismatch between flats and flat darks. In all likelihood it will not make much difference, but I would recommend the following:

- Take a new set of darks at gain 90 and try calibrating your image with those.
- In future, make sure you take flats / flat darks at the same temperature. It would be best to settle on settings that you will not change - offset, gain and temperature - and always work with those. An offset of 65 seems fine, so keep it there; gain 90 is unity gain, so again fine - keep it there; and -5C looks like a reasonable temperature that you can reach most of the time, so keep that too. Take all of your subs with these settings.
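If you want to check the headers yourself, here is a minimal sketch using astropy. The header keyword names vary between capture programs (these are common ones, not guaranteed), and the file names are placeholders:

    from astropy.io import fits

    KEYS = ("GAIN", "OFFSET", "CCD-TEMP", "EXPTIME")

    for path in ("light.fits", "dark.fits", "flat.fits", "flat_dark.fits"):
        hdr = fits.getheader(path)
        print(path, {k: hdr.get(k, "missing") for k in KEYS})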
  21. There might be an issue with APT and the way it shoots flats - but I can't help there, as I'm not using it (and never have). What I can do is offer a list of reasons why this can happen, so you can check whether anything on the list applies to you. You have under-correction of the flats. The corrected value is equal to the base value divided by the flat value. For that number to be lower than it should be, we have two options:

1. the base value is lower than it should be
2. the flat value is higher than it should be

Usually this happens with wrong calibration or some sort of light leak in the system. Case one can happen if you have mismatched darks for your lights - longer duration darks, darks shot at higher gain or higher temperature, or a light leak while you shot your darks (but not the lights) - i.e. you did darks on the scope during the day, or you took your camera off the scope to do darks and there was either an IR leak or a regular light leak (the cap was not good enough at blocking light). Case two can happen if you have mismatched flat darks - shorter than the flats, at lower gain or lower temperature. A light leak while shooting flats can also be a problem, and people sometimes calibrate flats with bias only - again, that can be a problem (see the numeric sketch below). If you can post one of each - light, dark, flat and flat dark sub - we may be able to tell what happened by examining the fits headers and doing measurements on each.
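Here is a numeric sketch of both failure modes, with single made-up pixel values:

    light, dark = 1200.0, 200.0
    flat, flat_dark = 30000.0, 1000.0
    flat_mean = 25000.0  # master flat mean used for normalization

    flat_norm = (flat - flat_dark) / flat_mean
    correct = (light - dark) / flat_norm  # proper calibration

    # Case 1: mismatched (too strong) dark -> base value too low
    case1 = (light - 400.0) / flat_norm

    # Case 2: flats calibrated with bias only -> flat value too high
    case2 = (light - dark) / (flat / flat_mean)

    print(correct, case1, case2)  # both failure cases come out lower: under-correction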
  22. Here is a comparison between reduced and unreduced data. The first comparison is 2h worth of data taken at 1"/px and then scaled down to 2"/px vs 1h worth of data taken at 2"/px: Left in this image is the 2"/px data (1h total) and right is the 1"/px downsampled data (2h total). Sorry about the extreme stretch - but this is a linear stretch down to the noise floor, needed to actually assess whether there is any difference. In my view, 1h of 2"/px data (as taken with a reducer) is almost as good as 2h of data taken at 1"/px and then reduced. I can tell that the 2h of reduced data in fact has a bit less noise (the right part of the image has a slightly smoother background). Here is the same image with a proper (yet basic) histogram stretch in Gimp: The histogram stretch, to my eye, confirms the above: 1h of 2"/px is almost as good as 2h of 1"/px downsampled to the same size. It follows that 2h of 2"/px will beat 2h of 1"/px downsampled - in other words, the reducer wins over unreduced data downsampled to the same size. But we sort of already knew this from my previous comparison of binning vs downsampling (see the sketch below).

Let's look at what happens when we upsample the 1h 2"/px image to match the resolution of the 2h 1"/px image. I have to say that for the data above the ideal sampling rate is slightly less than 2"/px, so we can expect some sharpness loss - but I think it will be minimal and we probably won't be able to tell. Here is the linear stretch (again extreme, to hit the noise floor). The left part of the image is 1h of 2"/px upsampled, and the right side is 2h of 1"/px. You will notice that the noise in the upsampled image is more grainy - there simply is no "information" to make up fine-grained noise - but the noise levels are about the same, or rather, we need a histogram stretch to see which noise starts showing first. Not sure what to make of this one. I think I can see the difference in the size of the noise grain, but if I did not tell you that this image is made partly from 1h of data and partly from 2h of data, and that half of it was "shot" at twice the lower sampling rate, would you be able to tell?
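The binning part of the argument can be checked on synthetic noise - a quick sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    img = rng.normal(0, 10, size=(1024, 1024))  # noise-only "sub"

    # 2x2 average binning: each output pixel averages 4 input pixels
    binned = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))

    print(img.std(), binned.std())  # ~10 vs ~5: noise halves, SNR doubles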
  23. Not sure I would go with this last one. Not many people can actually pull off 1"/px resolution; in most cases people are closer to 1.5-2"/px. Maybe a better way to say it would be: Don't use a focal reducer if you are already at your target sampling rate, if you don't want to trade resolution / image scale for less imaging time, or if you want the flexibility to make that trade later via binning and such.
  24. It certainly won't matter on a chip the size of the 178's. Even on larger sensors you should not get much vignetting at that distance.
  25. Are you sure you have the EFW the right way around? What EFW is it - ZWO as well? In fact it does not matter: the EFW has female T2 connections on both sides, your camera has a female T2 connection, and the EFW comes with a T2-T2 adapter. This is how you should put things together: camera - T2-T2 adapter - EFW - 16.5mm extender - FF/FR. It should all fit together like that, if I'm not mistaken.