Everything posted by vlaiv

  1. If by CA filter you mean a filter that will lessen chromatic aberration, then you can use any of the following:
     - A simple Wratten #8 filter. This works very well, but casts a yellow tint on the view.
     - Baader Contrast Booster filter - this works well on an F/10 achromat on the Moon (I was really surprised), and probably on other planets as well. Not as good as I expected on bright stars.
     - For complete removal of any CA - the Baader Continuum filter. This is a rather "specialty" filter usually used for white light solar observing. It imparts a strong green cast on the image. I found that it transformed my F/5 achromat into a spectacularly sharp moon scope (although, like I said - a green cast). It is probably very useful for double star observing as well (it helps with seeing, besides CA).
     - Also look at the Baader Fringe Killer and Baader Semi APO filters (I have used neither, and opted for the Baader Contrast Booster instead on my F/10 achro).
  2. Why do you worry about luminance star cores? As for parts of the target that get blown out - take a few shorter exposures, stack them in a separate stack aligned with the main stack, multiply the short stack by the appropriate value after stacking (the ratio of exposure lengths) and "copy/paste" the blown out region of the target. Star cores will certainly reach saturation in luminance after any sort of stretch, even if they are not blown in the linear data. There is simply no way around it - the dynamic range is just too great to be able to see both the faint stuff and the star cores without blowing out the latter. Where people go wrong is color composition. If you compose your image prior to stretching - you will end up with stars that are white in the center. The best way to do color composition and retain star color (there is a rough sketch of steps 3 and 4 below) is:
     1. Make sure your RGB subs are not saturated in star cores - use the same technique for each channel as described above - take a few short exposures in each channel and mix them in with the long exposures after stacking. There is a simple way to do that via pixel math - "replace the long stack pixel value with the scaled short stack value if it is over some threshold - like 90% of max ADU".
     2. Stretch your luminance data, without color composing, until you are satisfied - you can even do things like sharpening / denoising at this stage to remove luminance noise.
     3. Out of the linear, wiped color channels create three images that contain the RGB ratios rather than the actual color values. First do color calibration. Next stack the R, G and B channels via max stacking into a temporary image and then divide the R, G and B channels by this temporary image. You must be careful not to have negative or zero values in your RGB images - so scale them to 0-1 (or just above 0, to 1) prior to making the max stack. After you get the ratio images - apply gamma correction (simple gamma 2.2 is enough) to each of them.
     4. Produce the R, G and B channels of the final image by multiplying the stretched luminance with each of the R, G and B ratio images. Voila, star color in the cores will be accurate and not blown out.
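
For anyone who wants to see how steps 3 and 4 hang together, here is a minimal numpy sketch of the ratio idea (not PixInsight pixel math - the function and variable names are mine, and it assumes the color channels are already calibrated, linear and scaled to the 0-1 range):

```python
import numpy as np

def ratio_compose(lum_stretched, r_lin, g_lin, b_lin, gamma=2.2, eps=1e-6):
    """Combine stretched luminance with linear, color-calibrated R/G/B
    channels using the ratio method described above. All inputs are 2D
    float arrays scaled to 0..1."""
    # Keep values strictly above zero so the division below is safe.
    r = np.clip(r_lin, eps, 1.0)
    g = np.clip(g_lin, eps, 1.0)
    b = np.clip(b_lin, eps, 1.0)

    # Per-pixel maximum of the three channels (the "max stack").
    m = np.maximum(np.maximum(r, g), b)

    # Ratio images: each channel divided by the per-pixel maximum,
    # then gamma corrected (simple gamma 2.2, applied as value^(1/2.2)).
    r_ratio = (r / m) ** (1.0 / gamma)
    g_ratio = (g / m) ** (1.0 / gamma)
    b_ratio = (b / m) ** (1.0 / gamma)

    # Final image: stretched luminance carries the brightness,
    # the ratio images carry the color.
    return np.dstack([lum_stretched * r_ratio,
                      lum_stretched * g_ratio,
                      lum_stretched * b_ratio])
```
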
  3. Not even going to pretend that I understood what you just said - quick google search on "curved spacetime well defined time coordinate" yielded this result: https://arxiv.org/pdf/1401.2026.pdf Paper is titled: "Quantum fields in curved spacetime" Might be of interest to someone
  4. What seems to be the problem with incorporating curved spacetime into quantum field theory? Is the contribution of gravity too small to make a significant impact within the limits of the calculations?
  5. I wonder why it is not a "known" thing then? Many people use CCD sensors that are based on "consumer" chips. In fact dark scaling depends on a stable bias, and many people do it. In any case - it's worth a try to take subs with a bit of time in between.
  6. This should be a rather simple but somewhat time consuming activity. You'll need something like an hour or two of your time under the stars. The procedure would be the same as preparing a PEC curve - disable any PEC if you have it, fire up PHD2 and do calibration, but disable guide output in PHD2. Make sure guide logging is enabled in PHD2 and run a "guide" session for about an hour or so (HEQ5 has a worm period of 638s if I remember correctly - and you want at least a couple of cycles to get a good PE curve). Once you are done, you can load the PHD2 guide log into software called PECPrep - part of the EQMOD project. It is used for analysis of PE data and generation of PEC curves. A simple screenshot of the PE curve in PECPrep is all you need. In fact - maybe just the PHD2 log of this session is all that potential buyers will need - they can use either PECPrep or some other software to analyze it.
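
If you want a quick look at the curve without PECPrep, something like this rough Python sketch will do - it assumes you have already pulled the time stamps and raw RA excursions out of the guide log into a plain two-column CSV (the file name and layout here are placeholders, not the actual PHD2 log format):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder CSV: column 1 = time in seconds, column 2 = RA excursion in arcsec,
# extracted beforehand from the PHD2 guide log.
t, ra = np.loadtxt("ra_raw.csv", delimiter=",", unpack=True)

worm_period = 638.0  # HEQ5 RA worm period in seconds

# Fold the data onto the worm period so the repeating PE curve stacks up.
phase = t % worm_period

plt.plot(phase, ra, ".", markersize=2)
plt.xlabel("worm phase (s)")
plt.ylabel("RA excursion (arcsec)")
plt.title("Periodic error folded on the worm period")
plt.show()
```
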
  7. So no one has a clue what this might be? Could someone do the same measurement with their bias subs on a CCD camera? I'm inclined to believe that this might not be a fluke and that this stuff can happen to people. Something like this happening on occasion could well be the solution to the flat calibration mystery that was never solved but sometimes happens to @ollypenrice.
  8. No, you would not feel gravity instantly. Imagine you are in a hollow center of a planet and there is a shaft leading to its surface. Inside that central cavity you would not feel any gravitational pull. Once you start climbing up the shaft you would very gradually start feeling a gravitational pull. In fact the gravitational pull that you would experience at a certain height inside the shaft would be the same as if there were no mass "above" you - only the mass "below" you (only the mass that is closer to the center of the planet than you are would exert gravitational pull on you - everything that is at a larger distance would act as another hollow sphere, thus producing no gravitational pull). This leads us to a very interesting conclusion - gravity is strongest at the surface of the planet; whether you move higher away from the planet or go down the shaft - it starts reducing.
     I think this is related to how the brain works - or rather how "understanding / knowing" works. In order to know something we need to establish some sort of equivalence - a mapping of the phenomenon to something that ultimately boils down to our experience. For example - it is much easier to accept that two electrons repel one another because most people can relate to there being a virtual photon exchange - a momentum transfer. At some point this boils down to "hit one billiard ball with another and it will bounce off". Most people don't have any issues with Newtonian gravity because they relate it to "fields" acting as some sort of elastic cord between bodies that pulls them together. Once you start talking about space time curvature - all analogy goes down the drain. Imagine the following: when most people hear about space time curvature - they expect space time to actually be "curved", so a spacecraft orbiting a planet is following this "curve". This means that all points along that trajectory somehow form a "curved straight line" or something. But at the exact same time and at the exact same spatial coordinate - shoot a beam of light, or put there an object moving at a higher velocity. It will no longer follow the same path - so there is no actual "curve in space time" (in the sense we started thinking about it) - confusion sets in, there is nothing to relate to, and we "lose knowledge" - or rather we conclude that we don't know / understand it.
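
Just to put a number on the first part - a small sketch of the shell theorem result for a uniform-density planet (Earth-like figures, purely illustrative): gravity grows linearly from the center to the surface, then falls off as inverse square:

```python
import numpy as np
import matplotlib.pyplot as plt

# Gravitational acceleration inside and outside a uniform-density planet
# (shell theorem): only the mass closer to the centre than you pulls on you.
G = 6.674e-11   # m^3 kg^-1 s^-2
R = 6.371e6     # planet radius, roughly Earth-sized (illustrative)
M = 5.972e24    # planet mass (illustrative)

r = np.linspace(1e3, 3 * R, 1000)
g = np.where(r < R,
             G * M * r / R**3,   # inside: enclosed mass grows as r^3
             G * M / r**2)       # outside: usual inverse-square law

plt.plot(r / R, g)
plt.xlabel("distance from centre (planet radii)")
plt.ylabel("g (m/s^2)")
plt.title("Gravity peaks at the surface of a uniform-density planet")
plt.show()
```
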
  9. That one is pure beauty, but with a price to match. If it's a well behaved sensor (in terms of amp glow / darks / FPN and calibration in general) - it's going to be a real hit (for anyone who can justify the cost). My only concern is data size. A single calibrated frame is going to be around 234MB or so.
  10. That price is excluding VAT. I'm outside EU so exported goods are sold without EU VAT. Price is actually about 32-33% higher for me than that (10% customs fee and then 20% VAT on all of it including postage).
  11. Do you happen to have that graph that ZWO readily publishes for their camera models? Interested in read noise vs gain part.
  12. Don't know. According to TS website - these will be in stock in just two weeks - at least 533 model:
  13. Not sure why there is absolutely zero info from ZWO. Neither the website nor the forum mentions these (as far as I can tell - both the 6200 model and this 533). Even the High Point Scientific info is questionable - it says that it is an IMX571 sensor under the specs tab, and it mentions an APS-C sensor size in the text. It looks like there is in fact an IMX571, in the form of the ASI2600mc-pro - which is an APS-C sized sensor with the same pixel specs (3.76um and 50K full well - a bit large for such a small pixel size, I wonder how they managed to do that?) but only 3.5e read noise vs 3.8e for the 533 model. The price is of course much higher.
  14. ^ I think this is the best solution. You should certainly try to do full calibration. Although most "imperfections" are not strong enough to be seen in a single sub, after stacking, and when trying to sharpen things up, it is better to have "clean" data rather than data containing bias patterns and PRNU.
  15. I think that EPs get so much "chatter" coming their way for a reason. Just look at that list and name any other item on it that you can vastly improve upon by spending only a few dozen monetary units more than the "stock" item. Well, apart from collimation - that can cost nothing if you have a certain model of scope (on the other hand, I guess it can cost quite a bit to have a triplet collimated by a professional?).
  16. Yes, poor focus can do that, but then again what would be the point of stacking subs with such bad focus in the first place? Here is something that you can do to test if it is indeed due to poor focus. Take all of your subs prior to registration, after you have calibrated them, and do an integer resample on them in PI (bin them in software). Bin them x3 or x4 and then try again to register and stack them. Binning will reduce the resolution of the image, and the blur due to poor focus will be smaller in relative terms. This could help the star detection algorithm do its thing.
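
If you don't want to do it in PI, software binning is easy in numpy - a rough equivalent of integer resample using block averaging (the function name is mine, just a sketch):

```python
import numpy as np

def software_bin(img, factor=3):
    """Software-bin a 2D image by an integer factor (average of each
    factor x factor block). Edge rows/columns that do not fill a
    complete block are cropped off."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))
```
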
  17. How about an experiment? I'll provide you with undersampled images with random offsets - and you perform a combination of those images in two ways:
     1) Drizzle method
     2) Resampled integration
     We then examine the results for SNR and sharpness (PSF FWHM) to see if it actually works the way people expect it to. Btw, by resampled integration I mean: you take each undersampled sub and prior to stacking you resample it to an adequate size, using Lanczos-3 resampling for example (adequate in this context means the equivalent of the drizzle factor - if you drizzle x2, you resample to a x2 larger size). Then you align/register these resampled images using Lanczos-3 and stack with the average stacking method.
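
Just to illustrate what I mean by resampled integration, a rough sketch below (OpenCV's INTER_LANCZOS4 stands in for Lanczos-3 here, and registration is assumed to have been done already at the upsampled scale - this only shows the resample-then-average part):

```python
import cv2
import numpy as np

def resampled_integration(subs, factor=2):
    """Upsample each (already registered) sub by `factor` with a Lanczos
    kernel and average the results. `subs` is a list of 2D float arrays
    of equal shape."""
    upsampled = [cv2.resize(s, None, fx=factor, fy=factor,
                            interpolation=cv2.INTER_LANCZOS4)
                 for s in subs]
    return np.mean(upsampled, axis=0)
```
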
  18. Ok, yes, as predicted the value is different, and in this particular case your calibrated sub will be less noisy if you use bias - because you stacked 200 of them vs only 20 dark flats. The best version would be to use 200 dark flats. Just to explain what will happen when you calibrate using bias only.
     Let's suppose that you have some pixel value in the image that received blocked light. It was 20% in shadow and its original value was 100ADU, but since it was in shadow - the sensor actually recorded only 80ADU (20% light block). You want to correct this by use of a flat. Let's further suppose that your flat peak was at 75% of the histogram - that means around 47872ADU. This value includes bias and dark current, and the shaded part is only 80% of this value. If you use bias only to calibrate your flat, you will create a 6ADU difference in your flat (because the mean of the dark flat is larger by about 6ADU than bias only). We can say that the proper value for the flat at 100% is 47872, so subtract 872 and you have 47000. You need 80% of that and that will be 37600. When you "scale" your flat you will get 37600 / 47000 = 0.8. What happens if you use bias instead of the flat dark? You will end up with the following values: 47006 and 37605, and the "scaled" value of the flat will be 37605/47006 = ~0.800005 (a very small change in this case).
     If you correct your 80ADU in the light sub with a factor of 0.8, you will get the proper value 80 / 0.8 = 100ADU, but if you correct with bias only you will get a slightly different value 80 / 0.800005 = ~99.9995ADU. Your signal is very slightly darker than it should be. You ended up with under correction because you used bias instead of flat darks. In this particular case the difference is so tiny that it would not show at all, but it is there, because there is a difference in mean value between the master bias and the master flat dark. Depending on the difference between bias and dark flat and how much your signal needs correction (we used 20%, but sometimes it can be even larger than that) - the under correction will show when you stretch your data enough.
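
Here is the same arithmetic as a few lines of Python, with the values from the example above (the residual dark current in the shaded pixel does not scale with the light blockage, which is where the ratio just above 0.8 comes from - the last decimals come out slightly different from the rounded figures above, but the conclusion, a slight under-correction, is the same):

```python
# Numeric sketch of the flat-dark vs bias-only example (illustrative values).
flat_signal  = 47000.0   # true flat signal, no offset
dark_flat_mu = 872.0     # assumed mean of master flat-dark (bias + dark current)
bias_mu      = 866.0     # assumed mean of master bias, ~6 ADU lower
shadow       = 0.8       # pixel receives only 80% of the light

raw_full   = flat_signal + dark_flat_mu            # 47872, the flat peak
raw_shaded = shadow * flat_signal + dark_flat_mu   # dark current is not shaded

# Proper calibration: subtract the flat-dark from the flat.
scale_good = (raw_shaded - dark_flat_mu) / (raw_full - dark_flat_mu)   # exactly 0.8

# Bias-only calibration: ~6 ADU of dark current stays in both values.
scale_bias = (raw_shaded - bias_mu) / (raw_full - bias_mu)             # just above 0.8

print(80 / scale_good)   # 100.0 ADU -> fully corrected
print(80 / scale_bias)   # ~99.997 ADU -> slightly under-corrected
```
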
  19. I agree with the experiment part, but from what I know about RANSAC - tolerance is related to how much a star can move and still be included in the match, or rather what the average displacement value is. This can help if you have distorted stars in one or more subs and PI complains that it can't find enough descriptors to make a match, or similar. RANSAC is a process that happens once star detection is finished and PI already has a list of stars it is going to try to match across frames. It will not have an impact on star detection.
  20. Pretty much the same thing - it corrects for the "geometry" of the image. Imagine that you have two images of pillars, shot at different sections and from different perspective points. Both show curved pillars, but you can't put the two images together because the curve is not uniform, so you "straighten" the pillars in the images, join the images and then "return" the curve to some extent - that is what will happen with wide field lens shots that are offset by a large distance; in order for the stars to overlap you need to "bend" each image ...
     RANSAC is the algorithm used to determine which stars in one image align with which stars in the other image. It is short for "Random sample consensus". Star centers will not always perfectly align between images, partly due to the above distortion but also due to noise and seeing (you can't pinpoint the exact center of a star with great precision and there will always be some error). In addition to this - some stars in one image might be missing from the other image and vice versa (those that are just next to the edge: you dithered your sub, it moved the FOV, and some stars are no longer in the image while some others appeared on the other side). RANSAC tries to find the best match over all available stars by removing "outliers" (those stars that don't have a match). It tries to find a mathematical solution for the transform from one image to the other that minimizes the alignment error over all the stars. If you are more interested in this, here is the wiki article on the algorithm (it has a nice visual explanation with fitting a line through a set of points and discarding outliers in the process): https://en.wikipedia.org/wiki/Random_sample_consensus
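
That wiki example is easy to reproduce - here is a toy RANSAC line fit in Python (nothing to do with PI's actual implementation, just the consensus idea; names are mine):

```python
import numpy as np

def ransac_line(x, y, n_iter=200, threshold=0.5, rng=None):
    """Toy RANSAC line fit: repeatedly fit a line through two random points,
    count how many points fall within `threshold` of it, and keep the largest
    consensus set. Star registration uses the same idea, but with geometric
    transforms between star lists instead of a line."""
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue  # skip vertical pairs in this simple sketch
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        inliers = np.abs(y - (slope * x + intercept)) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares fit using only the consensus (inlier) points.
    return np.polyfit(x[best_inliers], y[best_inliers], 1), best_inliers
```
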
  21. Can you translate that into non-PI speak? If you take the master bias and the master dark-flat and do stats on each, what do you get as the mean pixel value?
  22. The distortion model is probably related to lens / wide field images. When we are imaging, we are mapping a spherical "surface" to a flat surface, or rather angles to distances on the sensor. The larger the angle (or the smaller the radius of the sphere compared to the sensor size) - the more distortion there will be in the image. When you try to align subs with a very large offset this can cause issues, as star distances will not be equal between the center of the field and the edge. Maybe the easiest way to explain it is to look at the "north pole" and 4 points on the equator. You can arrange them such that the lines connecting them along the surface of the earth are all of equal length and always at 90 degrees to one another at the north pole. Try placing such 5 points in a plane with those properties - in a plane, two equal segments at 90 degrees to each other have endpoints that are about 1.41 times (square root of 2) the segment length apart, not the same length, so it can't be done. Angles don't map ideally to distances in a plane and you get distortion. Not sure if you need to concern yourself with that unless you are using a wide field lens or a fisheye lens or something like that. As for star detection - I would try adjusting sensitivity, peak response and upper limit - these settings seem related to what you need, but I have no clue what each does solely based on their names (try increasing sensitivity, lowering peak response and probably leave upper limit alone - not sure if changing that will help).
  23. Presuming that the only difference between these two subs is the way the master flat was prepared (not quite clear to me from your post) - in one case you used proper flat-darks of the same exposure (and other parameters) as the flats, and for the other one you used bias instead of flat-darks to create the master flat - then the only difference that can show will be in the flat application. You won't find a significant difference in noise levels or star shapes or anything. The one that uses bias can have either over or under correction by the flats (not sure which, off the top of my head), but that's not necessarily the case. It will depend on a few factors - how much light blockage there is in the first place (either from dust particles or vignetting), and what the difference is between the bias mean value and the dark-flat mean value (the larger the difference - the more chance there will be an issue with flat calibration). It might not even be visible that there is an issue (even if there is one) unless you stack and stretch very hard, so there might be an issue that does not show at a normal level of stretch, and you can end up with a good looking image anyway.
  24. Have no clue - a quick search on the internet gave me this page: https://www.lightvortexastronomy.com/tutorial-pre-processing-calibrating-and-stacking-images-in-pixinsight.html#Section6 Section 6 deals with registration / alignment of images and it shows that window, so I assumed it is a readily available option for image registration under PI.
  25. I can't seem to find the relevant section in the help file - maybe you can take a screenshot of that section, opened, so we can see what options are available?