Everything posted by vlaiv

  1. I have - in a 5" scope! Well - it was not color as we normally see it - it was more an "impression" of color, or rather a feeling that leaves you wondering whether you saw color at all or whether your mind is playing tricks on you by relating what you see to images of the object you have seen before and knowing in principle what color it should be (mind you - this did not happen to me on other objects, even though I also saw and remembered their color images). Part of the trick when trying to see color is not to get dark adapted. Once you are dark adapted - even light levels that should produce a color response suffer a loss of contrast and you stop seeing color at levels where you should. This is also important when observing planets - you will lose both color fidelity and perception of detail if you fully dark adapt. In fact, I think it is related to this: https://en.wikipedia.org/wiki/Purkinje_effect
  2. That is quite correct. The issue with our vision is that it is not linear - that is why we have magnitudes for stars (a logarithmic dependence) and also why there is gamma on our displays. When we see something "twice" as bright - that does not mean it is twice the intensity of the light. This hurts contrast even more than the simple signal addition that the sensor experiences (LP + target photons give the signal level). With sensors, that LP level is just the black point on the histogram in case there is no gradient. In the gradient case it's a bit more complicated but in principle the same - a level of signal that will be subtracted / removed. The biggest issue is the associated noise. Since that signal also comes in the form of photons and those hit randomly (a Poisson process) - there is noise that one can't remove because it's random, and it is equal to the square root of the signal level. The more LP signal there is - the more associated noise there will be. What filters do is rather simple - remove unwanted signal and keep wanted signal. Removing the unwanted signal brings back the "black point" (and also makes gradients less of a hassle) - but it also minimizes the noise from that signal (square root of 0 is 0 - no signal, no associated noise).
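A minimal simulation of that last point, assuming Poisson statistics and made-up photon counts (the numbers are not from any real camera):

```python
import numpy as np

rng = np.random.default_rng(0)
target, lp = 50, 2000          # made-up mean photon counts per pixel per sub

# Simulate one pixel over many subs: signal = target photons + LP photons
with_lp = rng.poisson(target + lp, 10_000).astype(float)
no_lp   = rng.poisson(target, 10_000).astype(float)

# Subtracting the mean LP level restores the "black point"...
with_lp_removed = with_lp - lp

# ...but the LP shot noise remains: ~sqrt(target + lp) instead of ~sqrt(target)
print(with_lp_removed.std())   # ~ sqrt(2050) ≈ 45
print(no_lp.std())             # ~ sqrt(50)   ≈ 7
```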
  3. Never thought of that, but indeed - depends on where you look from - southern or northern hemisphere
  4. It works because all other imaging "works" when the full moon is out. Let me explain. The issue with the full moon is light pollution. For visual it reduces the contrast, making the target almost invisible (and in some cases invisible) to the naked eye. Not so for photography. The sensor will gather both photons from the target and photons from the LP. The issue we have in imaging from LP can be summarized as: - annoying gradients (LP is not equal in all parts of the image and produces a gradient) - lower SNR. This is the important part. We don't need the LP signal and we can remove it from the image by levels (and leave only the target), or in more fanciful ways that deal with gradients as well. What remains is the shot noise associated with that signal - you can't remove that. The additional noise lowers our SNR because it combines with other noise sources (read noise, dark current noise and shot noise from the target). The whole issue with LP can be overcome by using longer exposure - you can get very good images in heavy LP provided that you image for enough time. Back to the full moon and narrowband. If you are using narrowband filters that have a band pass of about 7nm then you are in fact doing the following: - passing all the light from the target at a particular wavelength (Ha for example) - cutting down all other light. This includes broadband LP signal. If we observe the 400-700nm range that is used for imaging - it is 300nm wide. With a narrowband filter we remove everything except 7nm - or in other words we reduce LP by 300/7 = ~x43. In reality we reduce LP even more because LP is strongest at other wavelengths (even Moon light). x43 is about 4 magnitudes of difference, so you are making your sky 4 magnitudes darker. So if the full moon puts you in Bortle 9 skies (mag 18 skies) - using a narrowband filter returns you to Bortle 1 (mag 22 skies).
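A quick sanity check of the x43 / 4-magnitude figure, using the numbers from the post:

```python
import math

bandwidth_full = 300   # nm, the 400-700nm imaging range
bandwidth_nb = 7       # nm, typical narrowband filter band pass

reduction = bandwidth_full / bandwidth_nb      # ~42.9x less broadband LP
magnitudes = 2.5 * math.log10(reduction)       # ~4.08 mag darker sky background
print(reduction, magnitudes)
```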
  5. If by CA filter you mean a filter that will lessen chromatic aberration, then you can use any of the following: - a simple Wratten #8 filter. This works very well, but casts a yellow tint on the view - the Baader Contrast Booster filter - this works well on an F/10 achromat and the Moon (I was really surprised) - probably other planets as well. Not as good as I expected on bright stars - for complete removal of any CA - the Baader Continuum filter. This is rather a "specialty" filter usually used for white light solar observing. It imparts a strong green cast on the image. I found that it transformed my F/5 achromat into a spectacularly sharp moon scope (although, like I said - green cast). It is probably very useful for double star observing as well (it helps with seeing besides CA). - Look also at the Baader Fringe Killer and Baader Semi APO filters (I used neither and opted for the Baader Contrast Booster instead on my F/10 achro).
  6. Why do you worry about luminance star cores? As for the parts of the target that get blown out - use a few shorter exposures. Stack them in a separate stack aligned with the main stack. Multiply the short stack with the appropriate value after stacking (the ratio of exposure lengths) and "copy/paste" the blown out region of the target. Star cores will certainly reach saturation in luminance after any sort of stretch, even if they are not blown in the linear data. There is simply no way around it. The dynamic range is just too great to be able to see both the faint stuff and the star cores without blowing out the latter. Where people go wrong is color composition. If you compose your image prior to the stretch - you will end up with stars that are white in the center. The best way to do color composition and retain star color is to: 1. Make sure your RGB subs are not saturated in star cores - use the same technique for each channel as described above - take a few short exposures in each channel and mix them in with the long exposures after stacking. There is a simple way to do that via pixel math - "replace the long stack pixel value with the scaled short stack value if it is over some threshold - like 90% of max ADU". 2. Stretch your luminance data without color composing until you are satisfied - you can even do things like sharpening / denoising at this stage to remove luminance noise. 3. Out of the linear, wiped color channels create three images that contain the RGB ratio rather than the actual color values. First do color calibration. Next stack the R, G and B channels via max stacking into a temporary image and then divide the R, G and B channels by this temporary image. You must be careful not to have negative or zero values in your RGB images - so scale them to 0-1 (or just above 0 to 1) prior to making the max stack. After you get the ratio images - apply gamma correction (a simple gamma of 2.2 is enough) to each of them. 4. Produce the R, G and B channels of the final image by multiplying the stretched luminance with each of the R, G and B ratio images. Voila, star color in the cores will be accurate and not blown out.
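A minimal numpy sketch of steps 3 and 4, assuming r, g and b are linear, background-wiped, color-calibrated stacks already scaled to (0, 1] and lum_stretched is the stretched luminance (all names here are just placeholders, not any particular tool's API):

```python
import numpy as np

def compose_lrgb(lum_stretched, r, g, b, gamma=2.2):
    """Combine stretched luminance with RGB ratio images (sketch of the
    method described above; array names and scaling are assumptions)."""
    # Max-stack the three channels, then turn each channel into a ratio image
    mx = np.maximum(np.maximum(r, g), b)
    mx = np.clip(mx, 1e-6, None)              # guard against division by zero
    r_ratio, g_ratio, b_ratio = r / mx, g / mx, b / mx

    # Simple gamma correction of each ratio image
    r_ratio, g_ratio, b_ratio = (c ** (1.0 / gamma) for c in (r_ratio, g_ratio, b_ratio))

    # Final channels = stretched luminance modulated by each ratio image
    return np.stack([lum_stretched * r_ratio,
                     lum_stretched * g_ratio,
                     lum_stretched * b_ratio], axis=-1)
```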
  7. Not even going to pretend that I understood what you just said - quick google search on "curved spacetime well defined time coordinate" yielded this result: https://arxiv.org/pdf/1401.2026.pdf Paper is titled: "Quantum fields in curved spacetime" Might be of interest to someone
  8. What seems to be the problem with incorporating curved spacetime into quantum field theory? Is the contribution of gravity too small to make a significant impact within the limits of the calculations?
  9. I wonder why it is not a "known" thing then? Many people use CCD sensors that are based on "consumer" chips. In fact, dark scaling depends on a stable bias and many people do it. In any case - it's worth a try to take subs with a bit of time in between.
  10. This should be a rather simple but somewhat time consuming activity. You'll need something like an hour or two of your time under the stars. The procedure is the same as preparing a PEC curve - disable any PEC if you have it, fire up PHD2 and do calibration, but disable guide output in PHD2. Make sure guide logging is enabled in PHD2 and run a "guiding" session for about an hour or so (the HEQ5 has a worm period of 638s if I remember correctly - and you want at least a couple of cycles to get a good PE curve). Once you are done, you can load the PHD2 guide log in software called PECPrep - part of the EQMOD project. It is used for analysis of PE data and generation of PEC curves. A simple screen shot of the PE curve in PECPrep is all you need. In fact - maybe just the PHD2 log of this session is all that potential buyers will need - they can use either PECPrep or some other software to analyze it.
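If someone wants a quick look at the log without PECPrep, here is a rough sketch; the column names ("Time", "RARawDistance") are what a typical PHD2 guide log uses, but check the header line of your own log, values are in pixels unless you multiply by your image scale, and the file name is a placeholder:

```python
import csv

times, ra_raw = [], []
with open("PHD2_GuideLog.txt") as f:
    header = None
    for row in csv.reader(f):
        if row and row[0] == "Frame":                 # header of a guiding section
            header = {name: i for i, name in enumerate(row)}
        elif header and row and row[0].isdigit():
            try:
                times.append(float(row[header["Time"]]))
                ra_raw.append(float(row[header["RARawDistance"]]))
            except (ValueError, IndexError):
                continue                              # skip dropped/partial frames

print(f"{times[-1] - times[0]:.0f} s of data, "
      f"peak-to-peak raw RA excursion {max(ra_raw) - min(ra_raw):.2f} px")
```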
  11. So no one has a clue what this might be? Could someone do the same measurement with their bias subs on a CCD camera? I'm inclined to believe that this might not be a fluke and that this can happen to people. Something like this happening on occasion could well be the solution to the flat calibration mystery that was never solved but happens sometimes to @ollypenrice.
  12. No, you would not feel gravity instantly. Imagine you are in a hollow center of a planet and there is a shaft leading to its surface. Inside that central cavity you would not feel any gravitational pull. Once you start climbing up the shaft you would very gradually start feeling a gravitational pull. In fact, the gravitational pull that you would experience at a certain height inside the shaft would be the same as if there were no mass "above" you - only the mass "below" you (only the mass that is closer to the center of the planet than you are exerts a gravitational pull on you - everything at larger distance acts as another hollow sphere, thus producing no gravitational pull). This leads us to a very interesting conclusion - gravity is strongest on the surface of the planet; once you start going higher away from the planet, or going down the shaft, it starts decreasing. I think this is related to how the brain works - or rather how "understanding / knowing" works. In order to know something we need to establish some sort of equivalence - a mapping of the phenomenon to something that ultimately boils down to our experience. For example - it is much easier to accept that two electrons repel one another because most people can relate to there being a virtual photon exchange - a momentum transfer. At some point this boils down to "hit one billiard ball with another and it will bounce off". Most people don't have any issues with Newtonian gravity because they relate it to "fields" acting as some sort of elastic cord between bodies that pulls them together. Once you start talking about spacetime curvature - all analogy goes down the drain. Imagine the following: when most people hear about spacetime curvature - they expect spacetime to actually be "curved", so a spacecraft orbiting a planet is following this "curve". This means that all points along that trajectory somehow form a "curved straight line" or something. But at the exact same time and at the exact same spatial coordinate - shoot a beam of light, or put an object moving at a higher velocity. It will no longer follow the same path - so there is no actual "curve in spacetime" (in the sense we started thinking about it) - confusion sets in, there is nothing to relate to and we "lose knowledge" - or rather we conclude that we don't know / understand it.
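A small numeric illustration of the shell-theorem argument in the first part of that post, assuming a uniform-density, Earth-like planet (the numbers are just for illustration):

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
R = 6.371e6            # planet radius (Earth-like), m
M = 5.972e24           # planet mass, kg

def g(r):
    """Gravitational acceleration at distance r from the centre of a
    uniform-density planet: only the mass below you pulls (shell theorem)."""
    r = np.asarray(r, dtype=float)
    inside = G * M * r / R**3        # grows linearly from 0 at the centre
    outside = G * M / r**2           # falls off as 1/r^2 above the surface
    return np.where(r <= R, inside, outside)

print(g([0.001 * R, 0.5 * R, R, 2 * R]))   # maximum is at the surface, r = R
```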
  13. That one is pure beauty, but with a price to match. If it's a well behaved sensor (in terms of amp glow / darks / FPN and calibration in general) - it's going to be a real hit (for anyone who can justify the cost). My only concern is data size. A single calibrated frame is going to be around 234MB or so.
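Rough arithmetic behind that figure, assuming a ~61MP full frame sensor (9576 x 6388) and calibrated frames stored as 32-bit floats:

```python
width, height = 9576, 6388        # assumed sensor resolution (~61 MP full frame)
bytes_per_pixel = 4               # 32-bit float after calibration
print(width * height * bytes_per_pixel / 1024**2)   # ~233 MB, matching the quoted ~234MB
```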
  14. That price is excluding VAT. I'm outside the EU so exported goods are sold without EU VAT. The price is actually about 32-33% higher for me than that (10% customs fee and then 20% VAT on all of it, including postage).
  15. Do you happen to have that graph that ZWO readily publishes for their camera models? Interested in read noise vs gain part.
  16. Don't know. According to the TS website these will be in stock in just two weeks - at least the 533 model.
  17. Not sure why there is absolutely zero info from ZWO. Neither the website nor the forum mentions these (as far as I can tell - both the 6200 model and this 533). Even the High Point Scientific info is questionable. It says that it is an IMX571 sensor under the specs tab, and it mentions an APS-C sensor size in the text. It looks like the IMX571 is in fact used in the ASI2600mc-pro - which is an APS-C sized sensor with the same pixel specs (3.76um and 50K full well - a bit large for such a small pixel size, I wonder how they managed that?) but only 3.5e read noise vs 3.8e for the 533 model. The price is of course much higher.
  18. ^ I think this is the best solution. You should certainly try to do full calibration. Although most "imperfections" are not strong enough to be seen in a single sub, after stacking and when trying to sharpen things up it is better to have "clean" data rather than data containing bias patterns and PRNU.
  19. I think that EPs get so much "chatter" coming their way for a reason. Just look at that list and name any other item on it that you can vastly improve upon by spending only a few dozen monetary units more than the "stock" item. Well, apart from collimation - that can cost nothing if you have a certain model of scope (on the other hand, I guess it can cost quite a bit to have a triplet collimated by a professional?).
  20. Yes, poor focus can do that, but then again what would be the point of stacking subs with such bad focus in the first place? Here is something you can do to test whether it is indeed due to poor focus. Take all of your subs prior to registration, after you have calibrated them, and do an integer resample on them in PI (bin them in software). Bin them x3 or x4 and then try again to register and stack them. Binning will reduce the resolution of the image and the blur size due to poor focus will be smaller in relative terms. This could help the star detection algorithm do its thing.
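For anyone doing this outside PI, a minimal sketch of what an x3 integer resample (software bin) does, assuming img is a 2D numpy array whose sides are divisible by the bin factor:

```python
import numpy as np

def software_bin(img, factor=3):
    """Average each factor x factor block into one pixel (integer resample).
    Assumes img dimensions are divisible by factor; crop beforehand if not."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```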
  21. How about an experiment? I'll provide you with undersampled images with random offsets - and you perform a combination of those images: 1) Drizzle method 2) Resampled integration. We examine the results for SNR and sharpness (PSF FWHM) to see if it actually works the way people expect it to. Btw, by resampled integration I mean: you take each undersampled sub and, prior to stacking, resample it to an adequate size using Lanczos-3 resampling for example (adequate in this context means the equivalent of the drizzle factor - if you drizzle x2, you resample to x2 larger size). Then align/register these resampled images using Lanczos-3 and stack with the average stacking method.
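A minimal sketch of that "resample before stacking" step, using Pillow's Lanczos filter (a 3-lobe Lanczos) on a single sub held as a float array; the x2 factor and names are just placeholders:

```python
import numpy as np
from PIL import Image

def upsample_lanczos(sub, factor=2):
    """Resample one undersampled sub to factor x its size with Lanczos
    interpolation, prior to registration and average stacking."""
    im = Image.fromarray(sub.astype(np.float32), mode="F")
    h, w = sub.shape
    up = im.resize((w * factor, h * factor), resample=Image.Resampling.LANCZOS)
    return np.asarray(up)
```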
  22. Ok, yes, as predicted the value is different, and in this particular case your calibrated sub will be less noisy if you use bias - because you stacked 200 of them vs only 20 dark flats. The best version would be to use 200 dark flats. Just to explain what will happen when you calibrate using bias only. Let's suppose that you have some pixel in the image that received blocked light. It was 20% in shadow and its original value was 100ADU, but since it was in shadow - the sensor actually recorded only 80ADU (20% light block). You want to correct this by use of the flat. Let's further suppose that your flat peak was at about 75% of the histogram - say around 47872ADU. This value includes bias and dark current. The shaded pixel receives only 80% of the light component. If you use bias only to calibrate your flat, you will leave about a 6ADU difference in it (because the mean of the dark flat is about 6ADU larger than the bias). We can say that the proper light level of the flat at 100% is 47872 minus the 872ADU of bias plus dark current, which gives 47000. You need 80% of that and that is 37600. When you "scale" your flat you will get 37600 / 47000 = 0.8. What happens if you use bias instead of the flat dark? You will end up with the following values: 47006 and 37606, and the "scaled" value of the flat will be 37606/47006 = ~0.80003 (a very small change in this case). If you correct your 80ADU in the light sub with a factor of 0.8, you will get the proper value 80 / 0.8 = 100ADU, but if you correct with bias only you will get a slightly different value 80 / 0.80003 = ~99.997ADU. Your signal is very slightly darker than it should be. You ended up with under correction because you used bias instead of flat darks. In this particular case the difference is so tiny that it would not show at all, but it is there because there is a difference in mean value between the master bias and the master flat dark. Depending on the difference between bias and dark flat and on how much your signal needs correction (we used 20% but sometimes it can be even larger than that) - the under correction will show when you stretch your data enough.
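The same worked example in a few lines of arithmetic, using the flat level, bias and dark-current figures from the post:

```python
light_true = 100.0          # true pixel value, ADU
shadow = 0.8                # pixel receives only 80% of the light
light_recorded = light_true * shadow          # 80 ADU in the light sub

flat_light = 47000.0        # light component of the flat, ADU
offset = 872.0              # bias + dark current in the flat, ADU
bias_only = 866.0           # bias alone (dark flat mean is ~6 ADU higher)

# Proper calibration: subtract the full flat dark, then scale
proper = (flat_light * shadow + offset - offset) / (flat_light + offset - offset)
# Bias-only calibration leaves ~6 ADU of dark current in the flat
biased = (flat_light * shadow + offset - bias_only) / (flat_light + offset - bias_only)

print(light_recorded / proper)   # 100.0 ADU   - fully corrected
print(light_recorded / biased)   # ~99.997 ADU - slight under-correction
```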
  23. I agree with the experiment part, but from what I know about RANSAC - tolerance is related to how much a star can move and still be included in the match, or rather what the average displacement value is. This can help if you have distorted stars in one or more subs and PI complains that it can't find enough descriptors to make a match, or similar. RANSAC is a process that happens once star detection is finished and PI already has a list of stars it is going to try to match over frames. It will not have an impact on star detection.
  24. Pretty much the same thing - it corrects for the "geometry" of the image. Imagine that you have two images of pillars shot at different sections and from different perspective points. Both show curved pillars, but you can't put the two images together because the curve is not uniform, so you "straighten" the pillars in the images, join the images and then "return" the curve to some extent - that is what will happen with wide field lens shots that are offset by a large distance; in order for the stars to overlap you need to "bend" each image ... RANSAC is the algorithm used to determine which stars in one image align with which stars in the other image. It is short for "Random sample consensus". Star centers will not always perfectly align between images, partly due to the above distortion but also due to noise and seeing (you can't pinpoint the exact center of a star to great precision and there will always be some error). In addition to this - some stars in one image might be missing from the other image and vice versa (those near the edge: if you dithered your sub, the FOV moved, so some stars are no longer in the image while others appeared on the other side). RANSAC tries to find the best match among all available stars by removing "outliers" (those stars that don't have a match). It tries to find a mathematical solution for the transform from one image to the other that minimizes the alignment error over all the stars. If you are more interested in this, here is the wiki article on the algorithm (it has a nice visual explanation of fitting a line through a set of points and choosing to discard outliers in the process): https://en.wikipedia.org/wiki/Random_sample_consensus
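A toy version of that line-fitting example, just to show the "fit on a random sample, keep the consensus set" idea (this is not how PI implements star matching, only the general principle):

```python
import numpy as np

def ransac_line(x, y, n_iter=200, tol=1.0, seed=0):
    """Fit y = a*x + b while ignoring outliers: repeatedly fit a line through
    two random points and keep the fit that the most points agree with."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol       # consensus set within tolerance
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best consensus set
    return np.polyfit(x[best_inliers], y[best_inliers], 1), best_inliers
```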
  25. Can you translate that into non-PI speak? If you take the master bias and the master dark-flat and do stats on each, what do you get as the mean pixel value?