Everything posted by vlaiv

  1. Sure you can - in pretty much the same way you can also bin a CCD. A CCD, on the other hand, can be binned in a way a CMOS sensor can't: CCDs support hardware binning, which CMOS sensors do not. But both can be binned in software. The difference between the two comes from read noise. CMOS sensors have quite a bit lower read noise (in general) and therefore suffer less from its effects on software binning. At the moment, if you are looking for a mono sensor, the ASI1600 (and other cameras based on the same Panasonic chip) are the biggest options. There are a few very interesting options from QHY still being developed - I believe the price will be quite high compared to current CMOS offerings. Have a look here: https://www.qhyccd.com/index.php?m=content&c=index&a=lists&catid=123 Or, better, here is an excerpt from the bottom of the page: check out the last two entries. I wonder what the read noise will be like.
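To illustrate what software binning actually does, here is a minimal numpy sketch (assuming a mono frame with even dimensions; this is just the general idea, not how any particular capture or stacking software implements it):

```python
import numpy as np

def software_bin2x2(img):
    """Average 2x2 blocks of a 2D mono frame (software binning)."""
    h, w = img.shape
    # Averaging 4 pixels improves SNR by ~2x for uncorrelated noise, but read
    # noise has already been added once per pixel before binning - which is
    # why low read noise CMOS sensors lose little compared to CCD hardware
    # binning, where charge is combined before readout.
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

binned = software_bin2x2(np.random.normal(100, 5, (1024, 1280)))
print(binned.shape)  # (512, 640)
```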
  2. There are a couple of things that can create this sort of background.

First is the way the image was stacked - or rather, the algorithm used to align subs before stacking. Any sort of frame alignment requires some interpolation algorithm (actual pixel values in the aligned frame fall in between pixels of the original frame due to sub-pixel shifts and rotation). Different interpolation algorithms have different properties. Some of them cause unacceptable artifacts and are avoided (like nearest neighbour interpolation), some are straightforward but cause "pixel blur" (like bilinear interpolation - which is the same as bin x2 in some cases). A commonly used one is bicubic interpolation - that one smooths the data too much, and noise that should be pixel-level noise gets spread over a couple of adjacent pixels, so the noise grain increases in size. The best to use is Lanczos interpolation, as it preserves much of the noise signature.

Second thing that can create a background like that is excessive use of denoising algorithms based on similar smoothing of the data as bicubic interpolation. Selective Gaussian filtering and the like tend to create such large-grained background. Whenever a denoising algorithm fails to detect a background star, it will turn it into a slightly brighter "blob" rather than smooth it out.

Third thing that can cause such background is using a scope that is not diffraction limited, or oversampling a lot on a dense star field. In this case, even if you have very good SNR, the star image will be "smeared" over a larger number of pixels. If you oversample a lot (like in poor seeing) - most background stars will look like that instead of being small, concentrated points of light, and the background will contain large grain structure composed of blurred stars rather than blurred noise (as in the previous two cases).

To get a nice "high fidelity" background (I actually don't mind a bit of noise in the background as long as the noise looks right - pixel-to-pixel variation with very small variation in intensity, i.e. small-scale grain) - you need to avoid things that alter the background. Use a good alignment resampling algorithm - Lanczos 3/4. Don't use aggressive denoising that will blur things out. Try to match the sampling rate to scope / seeing as best you can. And yes, finally, under these circumstances more data will certainly make the background look smoother and better. More data will not make large-scale grain go away if it is caused by the things I have listed.
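To see the difference in noise treatment between interpolation methods, here is a small illustrative sketch using OpenCV's resamplers (the sub-pixel shift values are made up, and real stacking software uses its own implementations):

```python
import numpy as np
import cv2

# Synthetic "sub" containing pure noise, shifted by a fraction of a pixel.
img = np.random.normal(100, 10, (512, 512)).astype(np.float32)
M = np.float32([[1, 0, 0.37], [0, 1, -0.62]])  # sub-pixel translation

methods = {
    "bilinear": cv2.INTER_LINEAR,
    "bicubic": cv2.INTER_CUBIC,
    "lanczos": cv2.INTER_LANCZOS4,
}

# Lower standard deviation after resampling means more smoothing of the
# pixel-level noise grain - Lanczos should preserve the most.
for name, flag in methods.items():
    aligned = cv2.warpAffine(img, M, (512, 512), flags=flag)
    print(name, round(float(aligned.std()), 3))
```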
  3. I don't think they all purchased a high end scope with the sole purpose of using it as a guide scope. Many people with high end gear started their imaging with simple scopes like the ED80. After upgrading to a more expensive imaging scope in that class (by class I mean focal length primarily), the ED80 remained (they could not part with it or be bothered to sell it) and it was relegated to guide scope duty. There are people that can afford better guide scopes. While the scope itself might not bring anything in terms of guiding precision - they justify the expense by the build quality of such an item: nicer focuser, better stability due to CNC tube rings and so on. After some time spent in the imaging hobby, you start appreciating good guiding (and here I don't mean round stars, I'm referring to guiding your mount as well as it can be guided) as it has a direct impact on image quality (sharpness), so you start upgrading your components in order to get the best guiding you can have - and this is why people use good guide cameras (and also for similar reasons as above - a good guide camera can double as a good planetary cam, a good EAA/EVAA camera and such).
  4. I believe you do. There are a couple of components that you can adjust in an average OAG setup. For example - my OAG has a mirror stalk that can be pushed in or pulled out. We use it to position the prism in relation to the sensor. Sensors are rectangular (most of them), so you can choose where to position the prism - closer to the optical axis, further away, etc ... You can also put certain spacers in the optical train, and you can choose where to put them. If you need to provide an exact distance between the field flattener / coma corrector and the sensor - opt to put the spacers "before" the OAG (so that the OAG is closest to the sensor). Sometimes you just don't have the option - like if you use integrated systems where filter wheel and OAG are a single unit, or even cameras where all the parts are prefabricated to fit together. But in that case you can work out, before purchasing such items, whether you are going to stop down your OAG or not, depending on the scope you will be using it on.
  5. It's got to do with people programming different programs - two different conventions on coordinate system orientation. In "normal" math we are used to the X axis pointing to the right and the Y axis pointing up (positive values increasing). Screen pixels work a bit differently - the top row of pixels on screen has Y coordinate 0 and it increases "downwards" (next is row 1, then row 2 and so on) - so there is a "flip" of the Y coordinate. If one simply loads a file (which should be row 0, then row 1, then row 2, etc ...) and displays it directly on the screen, you get one vertical orientation. If one loads the file into a "math" coordinate system - it will be reversed in the Y direction. How software operates depends on the people who programmed it - some use math coordinate space and others just "dump" rows onto the screen - hence the Y flip between programs. But it is not a big deal, as the Vertical Flip operation is always available and it is "non destructive" - it does not change any pixel values, it just reorders them (the same goes for horizontal flip and 90 degree rotations).
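A tiny numpy sketch of that row ordering, just to make it concrete:

```python
import numpy as np

# Data as stored in the file: row 0 first, then row 1, row 2 ...
img = np.arange(12).reshape(3, 4)

screen_view = img             # "dump rows to screen": row 0 drawn at the top
math_view = np.flipud(img)    # "math" Y-up convention: row 0 ends up at the bottom

# The flip only reorders rows - no pixel value is changed:
assert np.array_equal(np.flipud(math_view), screen_view)
```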
  6. Here is another attempt - processing in Gimp. I did "custom" vignetting removal - it kind of works. Here is what I've done: open the stacked image in Gimp 2.10 (use 32 bit per channel format). Copy the layer. Do a heavy median blur on the top layer (blur radius of about 90px or something like that) - leave the next two parameters at 50% and use high precision. Set the layer mode to division, and put layer opacity to something low like 10%. Merge the two layers. Now just do levels / curves and you should get a fairly "flat" central region.
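Roughly the same trick in Python, for anyone not using Gimp (a rough sketch only: it assumes a single-channel float image, the low-opacity blend is only an approximation of Gimp's division layer mode, and a median filter this large is slow on big frames):

```python
import numpy as np
from scipy import ndimage

def flatten_background(img, blur_radius=90, opacity=0.10):
    # Heavy median blur approximates the smooth vignetting / background profile.
    background = ndimage.median_filter(img, size=2 * blur_radius + 1)
    # Dividing by the blurred copy flattens the large-scale variation.
    divided = img / np.maximum(background, 1e-6)
    # Blend the division result back in at low opacity, rescaled to image level.
    return (1 - opacity) * img + opacity * divided * img.mean()

# flat = flatten_background(stacked_luminance)   # stacked_luminance: 2D float array
```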
  7. It did look rather whitish. I used super pixel mode as the debayering method: And these were the stacking parameters: After that I loaded the image into StarTools (trial - I don't use it to process images otherwise, but it can attempt to remove vignetting and that is why I used it), did a basic develop, removed vignetting and color cast and did color balance. Took a screenshot of the result at 50% zoom (the trial version won't let you save the image for further processing).
  8. I don't like working with uncalibrated data, but here is a quick attempt: the lack of flats really shows - there is enormous vignetting that is hard to remove. This is a quick process in StarTools after stacking in DSS - I tried to wipe the background and remove the vignetting, but as you can see, it did not work very well. Still, I kind of like the effect. There is some nebulosity showing - there is something to look at.
  9. @alacant This is quite confusing now, as the ASI294 has an RGGB bayer pattern according to the ZWO website. Which means that it can be either RGGB or GBRG depending on the software used (whether it flips the image upside down - screen or image coordinate system). The FITS header gives the wrong bayer pattern (but it does note that some software should use vertically reversed order).
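To make the flip argument concrete, here is a small sketch showing how a vertical flip turns RGGB into GBRG:

```python
import numpy as np

# Label a 4x4 patch of the sensor with its RGGB colour filter layout.
rggb = np.array([["R", "G", "R", "G"],
                 ["G", "B", "G", "B"],
                 ["R", "G", "R", "G"],
                 ["G", "B", "G", "B"]])

flipped = np.flipud(rggb)     # vertical flip, as done by some software
print(flipped[:2, :2])
# [['G' 'B']
#  ['R' 'G']]  -> the top-left 2x2 block now reads G B / R G, i.e. GBRG
```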
  10. Ok, I see your point. This image has very low saturation for some reason. It can be processed, but it requires the use of more sophisticated tools (or quite a bit of fiddling with white balance and saturation in image processing). Color is there, but it is so low in intensity that you need to boost saturation quite a lot. This is just a bit of fiddling around with saturation and levels in Gimp after stacking in DSS. In order to process an image like this, you need to be able to wipe the background - there is quite a bit of LP gradient. You also need to do a proper white balance on the image as green is too strong, etc ... I think the above image posted on facebook was probably processed with PixInsight or a similar tool which offers all of that functionality.
  11. Let me get this straight, you used someone else's data to stack an image, right? Do you know what sort of raw data you got from them? Raw subs? An already stacked image in LRGB? What sort of camera was that person using? I would need a bit more info before I can diagnose why you are getting a B/W result. It is entirely possible that you applied the wrong workflow for their data, but we need to check that.
  12. Honestly, I don't know, but I suspect it should work fine
  13. That is CCD, I'm fairly certain that dark scaling will work properly with that sensor, although it is better if someone using that same sensor could confirm that.
  14. Depends on what sort of sensor you have.

If your sensor has a proper bias (most CMOS sensors have some issues with the bias signal), you can use only darks of the longest exposure length and do dark scaling. Dark current has a linear dependence on time (unlike temperature, where the dependence is exponential), therefore you can take a 10 minute dark, remove the bias and divide it by 2 to calibrate a 5 minute light with it. Stacking software should be able to do this automatically for you. With CMOS sensors and their bias issues, you can't subtract the bias from darks (it changes between exposure lengths for some reason, or even between power cycles of the sensor if there is some sort of internal calibration) and therefore you can't scale darks, because the bias does not depend on exposure length and temperature in a predictable way. If you have a CMOS sensor, then it is best to do a set of darks for every exposure length you use.

On the other hand, depending on the scopes and filters you use - you should not need too many different exposure lengths. In principle you can get away with one exposure length per scope, maybe two if you run a risk of over exposing star cores - one "proper" exposure length and one short "filler" exposure length. Exposure length will largely depend on your conditions and the capability of your mount - go with the longest exposure length that you can manage to guide/track without any trailing issues. How short an exposure you can go with depends on how high your read noise is - lower read noise means you can use shorter exposures as the main exposure. To be precise, you need to compare read noise to other noise sources - when it becomes the dominant noise component, increase exposure length. There are other considerations for exposure length - how much data you want to store per session (shorter subs - more of them, more data), and how likely it is that you will have to discard a sub (wind gust, earthquake, cable snag, sudden flash of light in the direction of the scope - passing airplane or car headlights, etc ...). It is better to discard a 1 minute sub (1 minute lost) than a 30 minute sub (that is 30 minutes of imaging time lost).
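A minimal sketch of the dark scaling described above, assuming a stable bias and linear dark current (variable names are just placeholders; stacking software does this internally):

```python
import numpy as np

def scale_dark(master_dark, master_bias, dark_exposure, light_exposure):
    """Scale a master dark to a different exposure length."""
    thermal = master_dark - master_bias                  # remove bias, leaving thermal signal
    scaled = thermal * (light_exposure / dark_exposure)  # dark current is linear in time
    return master_bias + scaled                          # add bias back

# e.g. use a 10 minute (600 s) master dark to calibrate a 5 minute (300 s) light:
# scaled = scale_dark(master_dark_600s, master_bias, 600.0, 300.0)
# calibrated_light = light_300s - scaled
```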
  15. Now that is a proper dark. Odd warm pixel here and there and amp glow visible, nothing else in that image (well, there is bias signal embedded there, but it looks like noise in a single sub). The histogram confirms everything is proper:
  16. Astro photography is a bit different from regular photography in the way you treat your settings. With regular photography you compose your shot, do metering, decide on F/stop, then see what sort of exposure you need (to avoid or impart blur), set your ISO based on those two, etc ... This is because you are taking a single image, and you want to get it right in that single image. With AP you are stacking images, and you need to process your image to get a meaningful result. This is because in AP you have an enormous dynamic range in your image (or rather in the captured data). You can have as much as 10 mags of difference between stars in a single image - that is x10000 in light intensity. That is why with AP you don't aim for "good exposure", you aim for good SNR, and you adjust your exposure in post processing (think histogram manipulations to show what is in the shadows of a regular image - most of the stuff in AP is in the shadows).

In AP with a DSLR, ISO should always be set to one value (each camera has a best ISO value for AP), and that is most often ISO800 (some are best at ISO400, others at ISO1600). Do a search for your camera model to see if you can find a recommended ISO value for AP. This means that ISO will be known in advance. The same goes for exposure time - it is not governed by the brightness of the target like in regular photography, it is dictated by other factors, and here is a short list of things that impact it:
- Go with the longest exposure that you can manage.
- You should take care not to saturate because of LP (light pollution) or to saturate parts of the target. In general you can circumvent saturation of stars and bright target parts by using a small set of short exposures at the end of the session ("advanced" technique).
- Depending on your mount / guiding / polar alignment, mechanical issues are most likely to be the limiting factor in how long you can expose for. Again, this is something that you need to test. Start with a fairly common exposure value for DSLR AP and build from there - 30 seconds. If your subs look ok - no star trailing, stars are tight and round - you can try to increase exposure length.

As for frame number - it is always better to have more. More light frames, more darks, more bias, more flats and flat darks ... At some point, however, you hit diminishing returns. We can sum things up by saying that to improve things x2 you need x4 the number of subs (see the short sketch below this post) - and you can see how at some point you hit "it does not make any sense to take more". For example - you do a total of 1h of imaging, so yes, it is worth doing another 3h on the same night (provided you have time for it) to get a twice better result. However, once you've done 4h for that evening, going for another x2 better result would mean three additional nights of imaging the same target - that means the weather needs to cooperate, that means x3 additional setup times doing exactly the same, etc, etc ... Yes, sometimes people are willing to spend 4 nights on one target, but for an additional x2 of improvement you would need the next 12 nights ... so there is a limit at some point, and that limit will be imposed by you. As for other frames, again - it depends on your budget, but get at least a few dozen of each (20-30 up to 50 of each). Just as a reference, I'm a bit of a nut case here, but I do a couple of hundred calibration frames each.

If you have some raw subs of the Andromeda galaxy and want to get a grip on color in AP, then simply use the data that you already have to learn. Stack those subs without any calibration frames - make sure you tell your stacking software to treat your RAWs as color (bayer matrix) images. This will produce a color image - then import that image into some processing software (like PS or Gimp) and try to "expose" it properly - histogram stretch, apply some color correction and add some saturation. You should have color. You can always take the stacked image and attach it here (maybe in this thread or a new one) and ask for help - people will process the image to show what you have captured (you will no doubt see different renditions of your data) and I'm sure they will explain how they got their results so you can try it yourself.

One important thing I missed in the previous post - flats need to be done without altering anything on your scope, so it is best to do them after the lights, while the camera is still attached and you have not moved your focus. If you change something, there is a good chance that you won't be able to get matching flats. Focus change, camera rotation, some dust settling or being dislodged - all of those can disrupt proper flat calibration.
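Here is the tiny bit of math behind the "x2 improvement needs x4 subs" rule (a sketch assuming sky-limited subs, where stack SNR grows roughly with the square root of total integration time):

```python
import math

def relative_snr(total_hours):
    # SNR of the stack relative to 1 hour of integration.
    return math.sqrt(total_hours)

for hours in [1, 4, 16]:
    print(f"{hours} h -> {relative_snr(hours):.1f}x relative SNR")
# 1 h -> 1.0x, 4 h -> 2.0x, 16 h -> 4.0x: each doubling of SNR costs 4x the time
```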
  17. That is how I do it, never had any issues. What sort of table is it? Some materials are transparent to IR - like plastic. That is why there is a risk in just using a plastic cap on the sensor. If the table is plastic or a glass type - that won't help much. Look for a thick wooden table for this. Or better yet, put a piece of aluminum foil over the camera cap. If aluminum foil can stop someone messing with your brain waves, it can certainly fend off some IR radiation.
  18. Everything appears to be in order, but something is seriously wrong. Header information is as it should be. Both show the same temperature, gain settings and exposure length - one is labeled light frame, the other is labeled dark frame (by APT). However, the data in the dark suggests it is more like a flat file than a dark. How did you take your darks?

Look at this: this is the histogram of the light frame over the full 16 bit range - exactly what you would expect - it's bunched up to the left but not clipping, and it has 3 peaks because it is an OSC camera (slightly different sensitivity to sky background between R, G and B), so it's looking like a proper light frame. Now this is the histogram of the dark frame - same range of values (16 bit, or 0-65535): it should not look like that; this is how a flat histogram would look (under a funny light source, with quite a bit of vignetting). That could mean you mistook a flat for a dark, but I don't think so, as the stretched dark does not show the usual things a flat shows - dust shadows and proper vignetting. It shows a very strange type of vignetting, so I'm guessing the camera was not on the scope when you took the darks. Here is a stretch of the dark to show what sort of vignetting it has:

So my guess is that you took darks with the camera detached from the scope, but you had some sort of light leak - the camera was not properly covered, and judging by the histogram it is most likely IR type of light. This is good news. Your lights are fine, you just need to redo the darks properly and everything should be fine.
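For anyone who wants to run the same histogram check on their own subs, a minimal sketch (file names are placeholders; assumes 16-bit FITS data and astropy / matplotlib installed):

```python
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt

light = fits.getdata("light_001.fits").astype(np.float64)
dark = fits.getdata("dark_001.fits").astype(np.float64)

# A proper dark should be bunched up near the low end of the 0-65535 range;
# a broad, bright histogram (like a flat's) points at a light leak.
plt.hist(light.ravel(), bins=512, range=(0, 65535), histtype="step", label="light")
plt.hist(dark.ravel(), bins=512, range=(0, 65535), histtype="step", label="dark")
plt.legend()
plt.show()
```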
  19. Hm, it looks like there is a mix of things going on to produce this result. The TIFF format uses zip compression, so it will reduce the actual file size, and the "With Darks" one looks like there is quite a bit of "clipping to the left" - meaning most values are below 0, and it could be the case that for some reason (maybe the file format or something) these ended up being 0. So you get an image full of zeros with a star here and there - that is easily compressible and ends up as a small file. Can you do the stacking again and save FITS output (32 bit floating point precision) so we can inspect those? If most values are below 0 then there is some calibration error - the "darks" are "too strong" (contain higher values than they should to be matched to the lights) - which can happen if there is a difference in exposure time, gain, offset or something. It would be good to include one light FITS and one dark FITS to do an analysis and see what is going on.
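A quick sketch of the kind of check I mean on a 32-bit FITS stack (the file name is a placeholder):

```python
import numpy as np
from astropy.io import fits

stack = fits.getdata("stack_with_darks.fits").astype(np.float64)
print("min:", stack.min(), "median:", np.median(stack))
print("fraction <= 0:", np.mean(stack <= 0))
# A large fraction of pixels at or below zero suggests the darks over-subtract.
```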
  20. It is mostly about read noise. This is important for CCDs because they tend to have larger read noise. Dark current is fairly low in most sensors, so read noise is the dominant term. One can lessen the problem by using dithering, and of course one should use dithering because of this and other benefits. You can see this effect if you have data that was not dithered much. Do a simple experiment: calibrate with 10, 20 and a large number of darks and measure the background noise levels - just the standard deviation in a background patch of the image. You will see a difference in noise levels.
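A minimal sketch of that experiment (file names and patch coordinates are placeholders - pick a star-free region in your own data):

```python
import numpy as np
from astropy.io import fits

stacks = ["calibrated_10_darks.fits",
          "calibrated_20_darks.fits",
          "calibrated_200_darks.fits"]

for name in stacks:
    img = fits.getdata(name).astype(np.float64)
    patch = img[100:200, 100:200]    # same star-free background patch each time
    print(name, "background sigma:", round(float(patch.std()), 4))
```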
  21. No idea what might be happening there. Maybe if you post images of the stacking results, both with darks and without, we might be able to tell.
  22. Don't think it is a typing error - maybe an error in expression. I meant to say: you can also skip flats, but THEN you run a risk of dust bunnies and vignetting showing in your images. I need to be more careful when phrasing a sentence. Thanks for pointing it out, and it's a good thing we made this clear for everyone else.
  23. Some time ago I started thinking about it along the following lines, but haven't progressed too far - mostly because of a lack of understanding / knowledge on the topic. Somewhere I've read a fairly interesting explanation that goes in line with my way of thinking about it. It goes something like this, in trying to "explain that there is no measurement problem at all":

Suppose we have an electron with spin in some nice complex superposition - and it enters the measurement apparatus - decoherence happens and the electron is now in 1/2 spin up + 1/2 spin down. There is an actual measurement, but instead of saying the electron is now spin down, we need to say: the system has evolved into a 1/2 "electron up / registered up" + 1/2 "electron down / registered down" state, and it further evolves into a 1/2 "electron up / registered up / scientist noted up" + 1/2 "electron down / registered down / scientist noted down" state. The question then becomes why in reality we experience only "one branch" of this, or rather why we only remember one branch of it.

My way of thinking is that decoherence also "leads" to a different meaning of probability - somehow. Before decoherence we have a propensity type of probability, while after decoherence we have an ensemble type of probability. The above example fits a Bayesian type of probability - we can use the probabilities from this "complex state" (1/2 measured up + 1/2 measured down) plus prior knowledge to discard one branch and "recalculate" probabilities based on the branch we know happened (this does not explain how we know what happened - or rather why we know of one branch and not the other). But the point is - when we have a "complex" superposition state, the actual interpretation of probability is different from what we have after decoherence - decoherence makes reality either / or - somehow one definite answer by some mechanism, and an inherent lack of knowledge (even in principle) of what the real situation is, so we can only calculate a 50% chance of one and a 50% chance of the other. Something along those lines.
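Written out, the chain of states I have in mind looks roughly like this (amplitudes of 1/√2 correspond to the 1/2 probabilities above; A is the apparatus, S the scientist - just a sketch of the standard von Neumann measurement chain, not a full treatment):

```latex
\[
\frac{1}{\sqrt{2}}\left(|{\uparrow}\rangle + |{\downarrow}\rangle\right)|A_0\rangle|S_0\rangle
\;\longrightarrow\;
\frac{1}{\sqrt{2}}\left(|{\uparrow}\rangle|A_{\uparrow}\rangle + |{\downarrow}\rangle|A_{\downarrow}\rangle\right)|S_0\rangle
\;\longrightarrow\;
\frac{1}{\sqrt{2}}\left(|{\uparrow}\rangle|A_{\uparrow}\rangle|S_{\uparrow}\rangle + |{\downarrow}\rangle|A_{\downarrow}\rangle|S_{\downarrow}\rangle\right)
\]
```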