Everything posted by vlaiv

  1. Do you happen to have that graph that ZWO readily publishes for their camera models? I'm interested in the read noise vs gain part.
  2. Don't know. According to the TS website, these will be in stock in just two weeks - at least the 533 model:
  3. Not sure why there is absolutely zero info from ZWO. Neither their website nor their forum mentions these, as far as I can tell - both the 6200 model and this 533. Even the High Point Scientific info is questionable. It says it is an IMX571 sensor under the specs tab: It mentions an APS-C sensor size in the text: It looks like there is in fact an IMX571 in the form of the ASI2600MC-Pro - an APS-C sized sensor with the same pixel specs (3.76um and 50K full well - a bit large for such a small pixel, I wonder how they managed that?) but only 3.5e read noise vs 3.8e for the 533 model. The price is of course much higher.
  4. ^ I think this is the best solution. You should certainly try to do full calibration. Although most "imperfections" are not strong enough to be seen in a single sub, after stacking, and when trying to sharpen things up, it is better to have "clean" data rather than data containing bias patterns and PRNU.
  5. I think that EPs get so much "chatter" coming their way for a reason. Just look at that list and name any other item on it that you can vastly improve upon by spending only a few dozen monetary units more than the "stock" item. Well, apart from collimation - that can cost nothing if you have a certain model of scope (on the other hand, I guess it can cost quite a bit to have a triplet collimated by a professional?).
  6. Yes, poor focus can do that, but then again, what would be the point of stacking subs with such bad focus in the first place? Here is something you can do to test whether it is indeed due to poor focus. Take all of your subs after calibration but prior to registration and do an integer resample on them in PI (bin them in software). Bin them x3 or x4 and then try again to register and stack them. Binning reduces the resolution of the image, so the blur from poor focus becomes smaller in relative terms. This could help the star detection algorithm do its thing - a sketch of the binning operation is below.
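
If you want to see what the integer resample actually does outside PI, here is a minimal numpy sketch, assuming calibrated subs loaded as 2D float arrays (the file name is hypothetical):

```python
import numpy as np

def integer_bin(img, factor):
    # Average non-overlapping factor x factor blocks; trim any edge rows /
    # columns that don't fill a complete block.
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# e.g. bin a calibrated sub x3 before registration:
# sub = fits.getdata("calibrated_sub.fits").astype(np.float32)  # hypothetical file
# binned = integer_bin(sub, 3)
```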
  7. How about an experiment? I'll provide you with undersampled images with random offsets, and you combine those images in two ways: 1) Drizzle method 2) Resampled integration. We then examine the results for SNR and sharpness (PSF FWHM) to see if it actually works the way people expect it to. Btw, by resampled integration I mean: you take each undersampled sub and, prior to stacking, resample it to an adequate size using, for example, Lanczos-3 resampling (adequate in this context means the equivalent of the drizzle factor - if you drizzle x2, you resample to x2 larger size). Then align/register these resampled images using Lanczos-3 and stack with the average stacking method. A rough sketch of the resampling half of that workflow is below.
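
For the resampled-integration side, a minimal Python sketch using Pillow (whose LANCZOS filter is a Lanczos kernel with a=3, i.e. Lanczos-3). The registration step is deliberately left out - `align` here is a hypothetical placeholder, not a real function:

```python
import numpy as np
from PIL import Image

def upsample_lanczos3(sub, factor=2):
    # float32 input gives a mode "F" Pillow image, so the data stays
    # floating point through the resize.
    im = Image.fromarray(sub.astype(np.float32))
    w, h = im.size
    return np.asarray(im.resize((w * factor, h * factor), Image.LANCZOS))

# subs = [...]                    # calibrated, undersampled subs as 2D arrays
# upsampled = [upsample_lanczos3(s, 2) for s in subs]
# registered = align(upsampled)   # hypothetical registration step, also Lanczos-3
# stacked = np.mean(np.stack(registered), axis=0)
```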
  8. Ok, yes, as predicted the values are different, and in this particular case your calibrated sub will be less noisy if you use bias - because you stacked 200 of them vs only 20 dark flats. The best version would be to use 200 dark flats.

     Just to explain what happens when you calibrate using bias only. Suppose some pixel in the image received blocked light: it was 20% in shadow, and its original value would have been 100ADU, but since it was in shadow the sensor actually recorded only 80ADU (20% light block). You want to correct this with the flat.

     Suppose further that your flat peaked at around 47872ADU (roughly 75% of the histogram). That value contains bias and dark current, and the mean of the dark flat is about 6ADU larger than the mean of the bias. The shaded pixel receives only 80% of the light. We can say that the proper flat value at 100% illumination is 47872; subtract the 872ADU of bias plus dark current and you have 47000. The shaded pixel gets 80% of that: 37600. When you "scale" your flat you get 37600 / 47000 = 0.8.

     What happens if you use bias instead of flat darks? You end up with 47006 and 37606, and the "scaled" flat value is 37606 / 47006 = ~0.800004 (a very small change in this case). If you correct the 80ADU in your light sub with a factor of 0.8, you get the proper value: 80 / 0.8 = 100ADU. If you correct with the bias-only value you get something slightly different: 80 / 0.800004 = ~99.9995ADU. Your signal is very slightly darker than it should be - you end up with under-correction because you used bias instead of flat darks.

     In this particular case the difference is so tiny that it would not show at all, but it is there, because the master bias and the master flat dark differ in mean value. Depending on how big that difference is and on how much your signal needs correcting (we used 20% here, but sometimes it can be even larger than that), the under-correction will show when you stretch your data enough.
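
The same arithmetic as a quick Python check - the ADU values are the illustrative ones from the example above, not measurements:

```python
# Illustrative numbers: flat peak 47872 ADU, bias mean 866 ADU,
# dark-flat mean 872 ADU (6 ADU higher), pixel with 20% light blockage.
flat_full = 47872.0
flat_shaded = flat_full - 0.2 * 47000.0   # shaded pixel loses 20% of the 47000 ADU light signal
bias, dark_flat = 866.0, 872.0

# Proper calibration: subtract the flat dark from the flat
proper = (flat_shaded - dark_flat) / (flat_full - dark_flat)   # -> 0.8

# Bias-only calibration: subtract only the bias
biased = (flat_shaded - bias) / (flat_full - bias)             # -> ~0.800004

recorded = 80.0  # the shadowed light pixel
print(recorded / proper, recorded / biased)  # 100.0 vs ~99.9995 - slight under-correction
```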
  9. I agree with the experiment part, but from what I know about RANSAC, tolerance is related to how much a star can move and still be included in the match, or rather what the average displacement value is. This can help if you have distorted stars in one or more subs and PI complains that it can't find enough descriptors to make a match, or similar. RANSAC is a process that happens once star detection is finished and PI already has a list of stars it is going to try to match across frames. It will not have any impact on star detection.
  10. Pretty much the same thing - it corrects for the "geometry" of the image. Imagine that you have two images of pillars, shot at different sections and from different perspective points. Both show curved pillars, but you can't put the two images together because the curve is not uniform, so you "straighten" the pillars in the images, join the images, and then "return" the curve to some extent - that is what happens with wide-field lens shots that are offset by a large distance: in order for the stars to overlap you need to "bend" each image.

      RANSAC is the algorithm used to determine which stars in one image correspond to which stars in the other. It is short for "Random sample consensus". Star centers will not always perfectly align between images, partly due to the distortion above but also due to noise and seeing (you can't pinpoint the exact center of a star to great precision and there will always be some error). In addition, some stars in one image might be missing from the other and vice versa (those just next to the edge: you dithered your sub, the FOV moved, and some stars are no longer in the image while others appeared on the other side). RANSAC tries to find the best match among all available stars by removing "outliers" (stars that don't have a match). It tries to find a mathematical solution for the transform from one image to the other that minimizes the alignment error over all the stars.

      If you are more interested in this, here is the wiki article on the algorithm (it has a nice visual explanation with fitting a line through a set of points and discarding outliers in the process): https://en.wikipedia.org/wiki/Random_sample_consensus - and a toy version of that line-fitting example is sketched below.
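
A minimal sketch of the wiki article's line-fitting toy case - this is the RANSAC idea in miniature, not PI's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(x, y, n_iter=200, tol=1.0):
    # Repeatedly fit y = a*x + b to a random 2-point sample and keep the
    # candidate that gathers the largest consensus set of inliers.
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol   # points within tolerance of the line
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares fit on the consensus set only - outliers are discarded
    return np.polyfit(x[best_inliers], y[best_inliers], 1), best_inliers

# 80 points on a line plus 20 wild outliers
x = rng.uniform(0, 100, 100)
y = 0.5 * x + 3 + rng.normal(0, 0.3, 100)
y[:20] = rng.uniform(0, 100, 20)
coeffs, inliers = ransac_line(x, y)
print(coeffs, inliers.sum())  # slope/intercept close to (0.5, 3), outliers rejected
```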
  11. Can you translate that into non-PI speak? If you take the master bias and the master dark-flat and do stats on each, what do you get as the mean pixel value?
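
Outside of PI, the same statistics take a few lines of astropy/numpy - the file names here are hypothetical:

```python
import numpy as np
from astropy.io import fits

# Point these at your actual master calibration frames
for name in ("master_bias.fits", "master_dark_flat.fits"):
    data = fits.getdata(name).astype(np.float64)
    print(name, "mean:", data.mean(), "median:", np.median(data))
```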
  12. The distortion model is probably related to lens / wide-field images. When we are imaging, we are mapping a spherical "surface" onto a flat surface, or rather angles onto distances on the sensor. The larger the angle (or the smaller the radius of the sphere compared to the sensor size), the more distortion there will be in the image. When you try to align subs with a very large offset this can cause issues, as star distances will not be equal between stars at the center of the field and stars at the edge.

      Maybe the easiest way to explain it is to consider the "north pole" and 4 points on the equator. You can arrange them such that the lines connecting them along the surface of the earth are of equal length and always at 90 degrees to the ones at the north pole. Now try placing such 5 points in a plane with those properties. Angles don't map ideally to distances in a plane, and you have distortion - a small numeric illustration is below. Not sure you need to concern yourself with that unless you are using a wide-field lens or fisheye lens or something like that.

      As for star detection - I would try adjusting sensitivity, peak response and upper limit. These settings seem related to what you need, but I have no clue what each does based solely on its name (try increasing sensitivity, lowering peak response, and probably leave upper limit alone - not sure changing that will help).
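
As an aside, here is a small Python illustration of the angle-to-distance mismatch, assuming an ideal flat sensor (gnomonic projection, r = f·tan θ) compared against the naive "angles map linearly to distance" approximation:

```python
import numpy as np

# A flat sensor maps angle theta off the optical axis to distance r = f * tan(theta),
# while equal angular steps on the sky would map linearly as r = f * theta.
# The mismatch - the distortion a registration model has to absorb - grows
# quickly with field angle:
f = 1.0  # focal length in arbitrary units
for deg in (1, 5, 10, 20, 30):
    theta = np.radians(deg)
    print(f"{deg:2d} deg: tan-mapping {f*np.tan(theta):.4f}, "
          f"linear {f*theta:.4f}, excess {100*(np.tan(theta)/theta - 1):.2f}%")
# ~0.01% excess at 1 degree, ~10% at 30 degrees - negligible at long focal
# length, severe for wide-field / fisheye lenses.
```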
  13. Presuming that the difference between these two subs is only the way the master flat was prepared (not quite clear to me from your post) - in one case you used proper flat-darks of the same exposure (and other parameters) as the flats, and for the other you used bias instead of flat-darks to create the master flat - then the only difference that can show up will be in the flat application. You won't find a significant difference in noise levels or star shapes or anything. The one that uses bias can have either over- or under-correction by the flats (not sure which one off the top of my head), but that's not necessarily the case. It will depend on a few factors: how much light blockage there is in the first place (either from dust particles or vignetting), and what the difference is between the bias mean value and the dark-flat mean value (the larger the difference, the more chance there will be an issue with flat calibration). It might not even be visible that there is an issue (even if there is one) unless you stack and stretch very hard, so there might be an issue that does not show at a normal level of stretch and you can end up with a good looking image anyway.
  14. Have no clue - a quick search on the internet gave me this page: https://www.lightvortexastronomy.com/tutorial-pre-processing-calibrating-and-stacking-images-in-pixinsight.html#Section6 Section 6 deals with registration / alignment of images and shows that window, so I assumed it is a readily available option for image registration in PI.
  15. I can't seem to find the relevant section in the help file - maybe you can make a screenshot of that section opened, so we can see what options are available?
  16. That is integration of already aligned images. From the PI tutorial, there is this section: I'll check what options are listed in the help, to see if we can change something to aid the detection process.
  17. Is there any sort of threshold for star brightness in PI? DSS has that, and if you lower it, it will find more stars - but if you lower it too much it will start mistaking noisy pixels for stars and alignment will fail. I don't use PI, so I don't know its exact setup, but maybe the PI help files could provide a clue, or a list of the stacking options?
  18. I don't think it is the noise in question. I had something similar in different software, and it might be the case with PI as well. Are you trying to align the Ha frames on their own, or are you trying to add them to some other stack? The issue I was having that may be related to this occurs when software tries to match stars not only by coordinates and relative spacing but also by star brightness. Ha subs will have significantly lower ADU values in stars than regular subs. Maybe PI tries to match subs with star intensity taken into account and fails for that reason? On the other hand it could be the SNR of the Ha subs - how long are they in terms of exposure?
  19. This is just "redistribution" of the noise - the signal will be the same, and the noise in general will remain the same over the image - it will just change its distribution. It depends on the algorithm used to rotate - the interpolation. Some interpolation techniques give better and some worse results in this respect. I'll do another example for you here, comparing bilinear and more advanced interpolation. Here is the "base" sub - nothing but pure gaussian noise: Here are two subs rotated by 2 degrees - one with bilinear interpolation, the other with cubic O-MOMS: These are the two rotated subs - no pattern is visible yet, as we did not stretch the subs to show it. Now I stretched in a particular way to emphasize this pattern - the left one is bilinear interpolation, and it shows the pattern clearly. The right one is cubic O-MOMS - the pattern is there, but to a much lesser extent. Every algorithm will produce it to some level, because you need to cut into higher frequencies when you work with a limited sampling rate, but some algorithms handle this much better. If you use Lanczos-3 resampling, it should keep this effect down to a minimum. A sketch for reproducing a comparison like this is below.
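
If you want to reproduce a comparison along these lines, here is a minimal scipy sketch. Note that scipy has no O-MOMS interpolator, so a cubic spline (order=3) stands in for the higher-quality method:

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, (512, 512))  # "base" sub: pure gaussian noise

# Rotate by 2 degrees with two interpolators: order=1 is bilinear,
# order=3 is a cubic spline standing in for cubic O-MOMS.
bilinear = rotate(noise, 2.0, reshape=False, order=1)
cubic = rotate(noise, 2.0, reshape=False, order=3)

# The grid pattern shows up as periodic variation in local noise level -
# compare the spread of per-row standard deviations in the central region.
for name, img in (("bilinear", bilinear), ("cubic", cubic)):
    rows = img[128:384, 128:384].std(axis=1)
    print(name, "row-sigma spread:", rows.max() - rows.min())
# bilinear typically shows the larger spread, i.e. the stronger noise grid
```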
  20. This is a consequence of rotating by a small angle with clearly visible noise. Rotating an image by a small angle in the presence of noise produces this effect, imprinted in the noise. It will not be imprinted in the signal in the image. You need a hard stretch to show it. It can be explained as aliasing of high frequency components, or as a consequence of averaging noise pixel values in some regions and not in others - it is the same thing. You end up with a noise / less noise / noise / less noise pattern, which shows itself as a grid. I created a similar pattern in this thread: It depends on the resampling method used - bilinear resampling gives the worst results.
  21. It's a bit hard to tell who you are referring to in your post. Don't use "rank" but rather the screen name of the person you are referring to. It would also be helpful, when you quote someone, to actually write why you are quoting them. For example, in your post above you quoted my topic without any text on your part - I have no idea if that was an accident or something else. If you want to mention someone without quoting them here on SGL, there is a simple mechanism for doing so - just put the "at sign" before their screen name, like: @woodsie They will get a notification that you mentioned them and can respond.

      As for returning the camera, maybe try a couple of things first and see if that sorts out the problem:

      1. Try different capture software. There are a couple of free alternatives out there that use ASCOM drivers: NINA (Nighttime Imaging 'N' Astronomy); SIPS - Scientific Image Processing System by Moravian Instruments - I was using that but had some similar issues with the ASI1600, where the image was written in the wrong format and broken; and there is Sequence Generator Lite, the free version of SGP. I'm sure there is other software as well. Not sure what you are currently using, but it's worth trying other software to see what results you get.

      2. Try setting your offset properly in the ASCOM driver - that should solve the issues with stripes and could possibly deal with the strange part of the image as well. Set it to somewhere around 50-60.

      3. Maybe try reinstalling the drivers, or use another computer. The above issue might be related to the USB port on your computer (not likely, but worth a shot).

      Once you have tried different settings and still can't solve the problem - yes, contact Teleskop Express and ask for an exchange or refund, whichever you prefer.
  22. Just don't use drizzle - there is no point in doing so. Drizzle as an algorithm works only when certain preconditions are met, and in practice no one with an amateur setup will have those preconditions met. There simply is no benefit to drizzling, and it only "hurts" your data. In order to utilize the drizzle algorithm, one needs a predictable PSF, oversampling based on that PSF, and a means to point the scope with sub-pixel precision. It requires the guide system and imaging system to be connected in such a way that dithers issued by the guide subsystem result in exact pixel-fraction shifts of the imaging system. While this in principle can be done, no software support exists (that I'm aware of).
  23. Maybe just do a visual examination of the particular sub that is causing the error? The second error that shows sometimes is completely unrelated - PI reports that it cannot successfully parse the FITS header keywords SITELAT / SITELONG, which should contain the latitude and longitude of your observing site (I'm guessing here, but it would not be hard to check those FITS keywords to see what they really contain - see the sketch below). This could be because the software used for acquisition is not writing the proper format for these fields in the FITS header, or because PI can't interpret what is written according to the FITS standard for some reason (not properly implemented in PI, or a part of the specification not implemented at all). In any case, it should make absolutely no difference to the stacking result - that is just metadata that you can do without when stacking subs. As for the issues with stars, there could be any number of reasons why they are not detected. It could be that there is too much guiding error and the stars are shaped like trails rather than circles - the algorithm just does not recognize them - or maybe they are too large to be considered stars (depends on the sampling resolution and how large a "search area" is configured in PI when detecting stars), or any number of other reasons. The best thing to do is to first visually inspect said subs to see what the star profiles look like (round or not, etc.) and then we can think further about a good course of action.
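
If you want to see what the acquisition software actually wrote into those keywords, a couple of lines of astropy will show the raw values (the file name is hypothetical):

```python
from astropy.io import fits

# Inspect the suspect keywords in one of the problem subs
header = fits.getheader("problem_sub.fits")
for key in ("SITELAT", "SITELONG"):
    print(key, "=", repr(header.get(key)))  # prints None if the keyword is absent
```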
  24. Not sure about that one. In the classical interpretation of gravity, a hollow sphere will have no gravitational field inside, as the gravitational influences of all the small pieces cancel each other out perfectly (the shell theorem - it holds for spherically symmetric shells, though not for arbitrary shapes). I have no idea what the case would be in GR, though.
  25. Actually, that was my point - there is spin without a reference point. If you were out there spinning in empty space, without being able to see any reference point, you would still know that you are spinning, from the stretching sensation in your head and feet. If you were in an elevator without windows and there was a pull towards the floor, you would not be able to tell whether you were suspended in a gravity field or uniformly accelerating through space. With rotation you would be able to tell straight away, as nothing acts like a "negative" gravity source centered in your belly.