Everything posted by vlaiv

  1. @69boss302 I took the green channel and did an FWHM measurement on some of the fainter stars and got values of around 4.8px, or 8.64" if your sampling rate is 1.8"/px (430mm FL and 3.76µm pixel size). That is way too large an FWHM. Theory says that a 70mm scope in 2" seeing with 1" RMS guiding (according to your image above) will have around 3.5" FWHM - so about x2.5 less than what you have. Either that guiding report is not accurate (maybe wrong FL or pixel size for the guider?) or the seeing is particularly poor. With 1" RMS guiding and a 70mm scope - you need something like 8" FWHM seeing to get stars that bloated - like the worst seeing ever
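     For reference, here is the arithmetic behind those numbers as a small Python sketch (values taken from this post; 206.265 is the usual plate scale constant):

        FOCAL_LENGTH_MM = 430    # telescope focal length
        PIXEL_SIZE_UM = 3.76     # camera pixel size

        # plate scale: arc seconds per pixel
        sampling = 206.265 * PIXEL_SIZE_UM / FOCAL_LENGTH_MM
        print(f'sampling rate: {sampling:.2f}"/px')   # ~1.80"/px

        fwhm_px = 4.8            # measured on fainter stars in the green channel
        print(f'FWHM: {fwhm_px * sampling:.2f}"')     # ~8.66" - the bloat in question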
  2. Ok, so there is definitely a bit to do with processing. Take for example this processing - it is only stretched using levels in Gimp: Stars are a bit larger than they should be - possibly due to seeing, but it can also be due to guiding precision: This is at 100% zoom - two things are obvious. Stars are not pin point - which is strange given the short focal length scope used (again, poor seeing and/or poor guiding are to blame) - but also notice the scatter around bright stars. That can be due to optics - like a bit of haze / fogging on the telescope. Do you have dew shields? Another possibility is simply haze in the air - if transparency is poor, there will be this sort of halo around bright stars. If you are not careful, it is very easy to turn that halo into a "full star profile". Look at the following stretch of the same image - again, I'll be using only levels in Gimp: See what happened - that halo joined with the star core to make a very bloated star. The whole image then looks more "crowded" in the end because of this:
  3. ASI2600mc-pro has an integrated IR cut filter, so that should not be a problem - unless the doublet scope is causing some bloating? I'll have a look at the attached TIF to see if I can spot something useful.
  4. It is an O-ring that is supposed to go on the finder before it is inserted into its rings. There is a groove on the body of the finder, and that O-ring helps it sit inside its rings more tightly. The back of the scope is open so it lets air in and helps bring the telescope mirror to ambient temperature more quickly. This helps with cooling, but causes issues with light leaks. I guess black paper is inserted to block the light while still letting the scope cool easier. Some people use a shower cap on the back of the scope to prevent light leaks like this: This can easily be fitted once the scope has cooled properly.
  5. What is your guide RMS in arc seconds? As far as processing goes, there is an easy way to check - post your stacked fits in 32bit float without any processing and see what others manage with the data. If most people come up with fatter stars, then it is down to the data (some will use morphological star tightening - which I'm against), but if most manage to get tight stars, then it is down to the way you process.
  6. The problem is not with how many stars you have in the image - there are simply that many stars there. Take any Soul image out there and compare - you will see the same stars in both images. It is star "radius" that is the problem - you get the impression that there are more stars because stars are wider in your image. There are several things that make stars larger in the image:

     1. Aperture of the telescope used - smaller telescopes produce larger stars
     2. Seeing - poor seeing will produce larger stars
     3. Poor mount tracking / poor guiding - again produces larger stars
     4. Type of optics - some types of telescopes produce fatter stars. For example, a large central obstruction causes stars to be slightly more bloated, but a large obstruction usually comes with larger aperture - so it kind of evens out. Another culprit is a refractor telescope with less than perfect color correction. Fast doublets tend to create fatter stars because of this. You can use special luminance filters like the Astronomik L3 filter to combat this.
     5. Filters used - some filters enlarge stars (similar to poor optics - they make stars fatter)
     6. The way you process your data.

     What can you do about it? Take care of points 2 and 3 - shoot when seeing is good and try to optimize guiding on your mount. Observe total RMS and try to make it smaller. Don't go by "guiding is fine enough for a small scope" - want tighter stars? Make guiding better. See if using the L3 from Astronomik will help with star sizes if you have a refractor (and judging by FOV and your signature you are using a 73mm ED doublet, so it might be worth doing). See what type of filters you are currently using and maybe try a test image without them to see if there is any difference. Some LP suppression filters have been reported to cause slight star bloat. Reprocess your data. Sometimes the way you stretch can have an impact on star sizes. Look into making a starless image and processing stars separately to make them smaller.
  7. There has probably been some mix up with that statement. The correct statement for 1.25" 32mm and 40mm plossls would be: they will show you the same true field of view (not AFOV) - because they both have the same field stop diameter, the max for the 1.25" format. And yes, the 40mm will have a larger exit pupil, but the AFOVs will not be the same. The 32mm will give about 50° max AFOV and the 40mm will give about 40° max AFOV in 1.25" format.
  8. An eyepiece can be either 1.25" or 2" in diameter, and the respective max field stops are ~27mm and ~47mm. If you take 1.25" eyepieces in say 32mm, 40mm and 45mm focal lengths - the AFOVs that you get will be 48°, 39° and 34° (this is the zero angular magnification distortion case). Usually 32mm and 40mm plossl eyepieces are quoted to have 52° and 40° as they have some AMD and sit between zero AMD and zero rectilinear distortion. With a 56mm eyepiece in 1.25" format - max AFOV would be ~28°, not 43°. Look here for the formulae used to calculate AFOV - the two extreme cases, no AMD and no RD: https://www.televue.com/engine/TV3b_page.asp?id=113 For TFOV it is even easier - you don't need to know AFOV - you only need to know the focal length of the scope and the field stop of the eyepiece. The formula is 2 * arctan( field_stop / (2 * focal_length) ) In the case of the 6" CC scope above, FL is 1836mm (give or take) and if we take a 47mm field stop we get arctan(23.5 / 1836) * 2 = ~1.46 degrees So it is 1.46° rather than the ~1.6° calculated by astronomy.tools FOV: (it is probably slightly off because it is not using field stop data but rather AFOV data - which can often differ from what manufacturers quote).
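     If you want to play with that, here is the TFOV formula as a minimal Python sketch (the function name is just mine, numbers from the 6" CC example above):

        import math

        def true_fov_deg(field_stop_mm, focal_length_mm):
            """True field of view from eyepiece field stop and scope focal length."""
            return math.degrees(2 * math.atan(field_stop_mm / (2 * focal_length_mm)))

        # 6" CC: ~1836mm focal length, ~47mm max 2" field stop
        print(f"{true_fov_deg(47, 1836):.2f} deg")  # ~1.47 deg (the ~1.46 above, give or take rounding)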
  9. I'll give a recommendation based on what I've read so far of your interests. This scope: https://www.firstlightoptics.com/stellalyra-telescopes/stellalyra-6-f12-m-crf-classical-cassegrain-telescope-ota.html This mount: https://www.firstlightoptics.com/equatorial-astronomy-mounts/skywatcher-eq5-deluxe.html and a tracking motor for that mount. This scope will provide you with very good views of planets and the Moon. It does not require expensive eyepieces. It will show you deep sky objects as well. Its only "weakness" visually is a rather narrow field of view, but even that is not as bad as you might think. With a 56mm Astro Essentials plossl it will be capable of a 1.6° view at x32 magnification. Not quite wide field, but not bad either. With the simple addition of a planetary type camera you'll be able to do some serious planetary and lunar astrophotography. Mounting just a DSLR and lens on a mount with a tracking motor will give you the chance to start doing some wide field astrophotography, and as far as the telescope goes - you'll be able to mount the camera onto the telescope and start doing some astrophotography there as well - but due to the focal length of the telescope you'll have to take some special steps in processing (binning your data) - and the images that you create will be sort of small - enough to fit smaller galaxies and planetary type nebulae, but not wide enough for M31 Andromeda or M45 Pleiades or similar (for those you can use the DSLR and lens). If you can - get a GOTO mount, but if not - you can take either the single motor upgrade, or possibly the dual motor with guide port. That is looking to the future (but adds to the required budget). If you get a dedicated planetary camera you'll be able to use it later as a guide camera for more serious deep sky imaging with said scope.
  10. With CCDs, binning is a hardware thing (although you can bin in software later as well), but with CMOS it's always a software thing - whether it is done in the drivers at the time of capture or later, in processing. I advocate that you do it in processing - later, not at the time of capture - unless you have a compelling reason to do so (like file size savings - but storage is cheap these days), because this lets you control the process. You can stack with and without binning and compare results, and you can choose the bin method that suits you best. I'm not sure that NINA will do a good job of binning OSC data, and I would advise you to bin your stacked image while still linear. Siril does not have bin functionality as far as I can tell, but you can always save your stack in Siril, open it in ImageJ - do the binning - and then load it back into Siril to process it further (color calibration, histogram stretch and so on). ImageJ is open source software that can easily bin your data (and do some other fun stuff). I can now see why people don't do this on a regular basis - it's just too much work - and new stuff to learn. I would expect Siril to have bin - and apparently it did at one point, but it was removed when a library for debayering was changed, and they now plan to reintroduce it.
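     For anyone curious what the binning step actually does, a minimal numpy sketch of 2x2 average binning - generic code, not ImageJ's or Siril's implementation, and the FITS filenames are placeholders:

        import numpy as np

        def bin2x2(img):
            """2x2 average binning of a linear image (crops to even dimensions first)."""
            h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
            return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

        # usage on a 32bit linear stack, e.g. via astropy:
        # from astropy.io import fits
        # data = fits.getdata("stack.fits").astype(np.float32)
        # fits.writeto("stack_bin2.fits", bin2x2(data), overwrite=True)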
  11. Well - no. I mean - you can, but it is not the same thing as binning or resizing prior to processing - the result is not the same. Binning is a process that produces a known amount of SNR improvement. Simple resampling will also produce SNR improvement - but it depends on the data and the algorithm used for resampling. It also introduces pixel to pixel correlation - or a certain amount of blur. Resampling algorithms can be broadly split into two categories - ones that improve SNR more but also blur more, and those that don't blur as much but also don't improve SNR as much. For that reason it is best to bin if you can. Reducing the image after processing will just make it look nicer / sharper when viewed at 100% zoom - but it will not improve SNR. It will not make it less noisy when stretching your data. One thing you can do is to change camera. The other thing that you can do is to bin, so you have a bit of choice there. Instead of working at 0.96"/px - bin x2 and work at 1.92"/px. You won't lose much detail at all - images will look the same as far as FOV is concerned - but you might find it easier to process with less noise. In fact, you have nothing to lose - you can take the linear data from those images you linked to and bin it x2 before you process them again. If weather is poor, there is something you can do when not imaging - reprocess your images with bin x2 at the linear stage. If you are using PI - just take your stacked data and, before you start any stretching / processing, do a linear resample x2 with average (that is binning) and then proceed to process. Yes, it is very hard to tell - but you can see some difference in the noise - if you blink the two images or if you make a difference of them (subtract one from another): this is the two M33 images subtracted from one another. The darker part is where signal is stronger and SNR is better (less noise) - so there is not much difference between the two - in fainter parts it's more noisy. This is a heavily stretched difference image. Here is what the unstretched difference image looks like: There is a bit of grain that can be seen as difference - and that is the subtle difference in noise that I was talking about - but no difference in signal - because the smaller image contains all the signal in the larger image
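     Here is a small numpy experiment illustrating the "known amount of SNR improvement" point - pure synthetic noise, with the one assumption that pixels are independent (dithered data, not already resampled):

        import numpy as np

        rng = np.random.default_rng(0)
        noise = rng.normal(0.0, 1.0, (1000, 1000)).astype(np.float32)

        # 2x2 average bin: each output pixel averages 4 independent samples
        binned = noise.reshape(500, 2, 500, 2).mean(axis=(1, 3))

        print(f"original sigma: {noise.std():.3f}")   # ~1.000
        print(f"binned sigma:   {binned.std():.3f}")  # ~0.500 - the guaranteed x2 SNR gain
        # a resampling algorithm lands somewhere between no gain and this,
        # depending on how much it blurs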
  12. 50% could be too much as I'm starting to see the difference - and also, quality depends on the choice of resampling algorithm. The above was done in IrfanView with Lanczos resampling.
  13. Depends on what the camera is going to be used for? You mention a 300p goto dobsonian telescope? That in itself is going to pose a problem - such a scope is really suited for planetary work, and neither of those two cameras is very good for planetary work. Planetary work really demands a certain style of imaging - called lucky imaging. The second thing that you might try is the EEVA/EEA approach - a lot of short exposures stacked (this is because an AltAz mount causes field rotation and tracking on that mount is not going to be good for anything longer than 1-2 seconds) - in that case go for the Altair camera. Lack of cooling is really not an issue there, as read noise will dominate dark current and it is easy to grab a few dark frames in the same conditions - as you'll be using short exposures.
  14. FWHM is measured in software. I'm sure PI has that functionality and you can also use AstroImageJ - free software for astronomical use. This is done on linear images after stacking (or calibrated subs if you wish to measure a sub). In any case - I'll assume that the optimum working resolution is ~1.5"/px rather than 0.96"/px - and we can check if that is true. I'll use parts of the M33 and M42 images. First the M42 image (actually part of M43) - you need to view this on a computer screen - don't use a phone or small device - you want to see all the details in these images: Ok, so can you tell the difference between these two images - in nebulosity or stars? There is a small difference in noise - one of them has a slightly smoother noise profile (sampling down does that), but stars and signal - do you see the difference? The second image was created by taking the first image, then scaling it down to 66% of the original size: and then enlarging it back up to the original size. If you look at this smaller image - every detail that you can see in the larger image is here as well - every star, every feature - nothing is missing, yet things are smoother and noise is smaller. This just shows that you could have captured this image using 1.5"/px - without losing any sort of detail in the image. You did not need to go for 0.96"/px resolution. In fact, it is quite possible that on a given night the actual resolution that you could have used is 2"/px without loss of any detail - that corresponds to 3.2" FWHM stars - and that is quite a "common" star size for ~2" FWHM seeing. Let's do that with the second image as well: First the original: then rescaled (can you tell?): and of course - the smaller image: One way to visually tell if you are at a good sampling rate is to look at your image at 100% zoom and look at the faintest stars - they really need to be point like - as soon as you see them as little circles, you are oversampled. For example: vs Makes sense? Why is this important at all? Well - because if you oversample, you needlessly lose SNR. In the same imaging time you could have a smoother, deeper image if light is more concentrated rather than spread over more pixels than it needs to be.
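     If you want to run that shrink-and-enlarge test on your own data, here is a rough sketch using Pillow and numpy - the filename is a placeholder and the 66% factor is just the example used above (derive yours from FWHM/1.6):

        import numpy as np
        from PIL import Image

        def roundtrip_test(path, scale=0.66):
            """Shrink an image and blow it back up; if the round trip is
            indistinguishable from the original - the original was oversampled."""
            img = Image.open(path)
            small = img.resize((int(img.width * scale), int(img.height * scale)),
                               Image.LANCZOS)
            back = small.resize(img.size, Image.LANCZOS)
            diff = np.abs(np.asarray(img, np.float32) - np.asarray(back, np.float32))
            print(f"max diff: {diff.max():.1f}, mean diff: {diff.mean():.3f}")
            back.save("roundtrip.png")  # blink this against the original

        # roundtrip_test("my_stack.png")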
  15. You can of course bin color data and depending on how you do it - it will provide some or all "benefits" of binning. There are two benefits of binning - and we often bin to achieve them both:

     1. Bring the image to a more suitable image resolution (this can also be done with resizing / resampling of the image). For example - your image is 0.67"/px and when you zoom in 100% - you find that your stars are bloated, the image looks blurry, detail is missing, and you want your image to look good even when zoomed in 100% - for those who want to see even the tiniest galaxies in the background.
     2. Improve SNR. In this regard binning works like stacking - if you average some samples, you improve SNR. With stacking we take the same images and average them whole, while with binning we take "same pixels" and average those locally (if the image is oversampled, adjacent pixels can be thought of as having almost the same signal).

     When you have OSC data - you can have either of the above - or both, but most people don't manage to get both - I'll briefly explain why. Data from an OSC sensor is "sparse" - meaning not every pixel is filled in with all colors. Some have only red, some have only green and some have only blue - the rest is missing. You now have two options to resolve this:

     1. Fill in the blanks (this is usually done)
     2. Squeeze the data (this is the super pixel mode of debayering)

     With option 1 you simply make up the missing data. There are algorithms developed to do a good or better job of that, but the point is - you don't have new original data in those places - you have data that is made up based on the data you have. When you try to bin such data - it will not provide you with SNR improvement because you are not stacking original data. It is like trying to stack 10 copies of the same image and hoping that it will somehow improve SNR - it won't, as you are not stacking original data - you are stacking data that you derived from data you already have - and that does not improve SNR. With option 2 you start with all the data there - but your resolution / sampling rate will already be altered. It will in fact be "normal" for that sensor - you only see it as halved because you are thinking of that sensor in terms of the pixel size it has on the label - which is not correct. OSC sensors simply have lower resolution than equivalent mono sensors (there are special cases where they can have the same resolution - for example bayer drizzle, but that is beyond the scope of this reply). There is an option where you can have your cake and eat it too - where you prepare your data so it's as if it came from an OSC sensor with bigger pixels. I'm not aware of any software capable of doing that - because that would mean a x4 lower sampling rate (two times because it is an OSC sensor and an additional two times because of binning). This is actually feasible on some sensors with very small pixels - like the ASI183mc for example - that one has 2.4µm pixel size and in some cases that is way too small. In the end - if you are oversampled, then I would recommend one of two approaches (see the sketch after the next point):

     - one that @powerlord recommended a few posts ago - just stack your image and bin the linear stack. This will not have the same effect as binning mono data, but if you dither (and you should) - some SNR improvement will still happen (the effect is similar to bayer drizzle - but not completely the same).
     - if you are hugely oversampled (by a factor of x4) or you are doing a mosaic and you can afford to bin further (doing wider field instead of going for max resolution) - then use super pixel mode debayering and then bin again after that for a true x2 SNR improvement.

     If we understand that tool in actual / math terms - then it is simply wrong. We should look at it like this: given your camera and telescope, the image will look "ok" in that range of sampling rates. I can actually go through the math and science of why that is so - or you can do a simple experiment. Post any image that is at 0.96"/px and I'll show you that it is oversampled. In fact - you can do it yourself. Take your image at 0.96"/px, estimate the optimum sampling rate by the criteria I gave you - FWHM/1.6 - then resize your image to a smaller size to match that optimum sampling rate. Take the small image and resample it back to the larger size. If you can't spot any difference between the original and the resized version - that just means the original contained no detail that the smaller image could not also hold - so the smaller image was sampled well enough to hold all the detail in the image. (I probably over complicated that last sentence - but if you have a properly sampled image and you reduce its size and enlarge it back - you'll be able to see the difference. If you can't see the difference - then no finer detail was present in the larger image to start with - it is oversampled).
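     Here is roughly what super pixel mode boils down to, assuming an RGGB bayer pattern (check your sensor - patterns vary); a sketch in numpy, not any particular program's implementation:

        import numpy as np

        def superpixel_debayer(raw):
            """Each 2x2 bayer cell becomes one RGB pixel (greens averaged).
            Resolution is halved but every sample is original - nothing made up."""
            r  = raw[0::2, 0::2]
            g1 = raw[0::2, 1::2]
            g2 = raw[1::2, 0::2]
            b  = raw[1::2, 1::2]
            return np.dstack([r, (g1 + g2) / 2.0, b])

        # for the "bin further" case: 2x2 average-bin each channel of the result
        # for a true x2 SNR improvement on top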
  16. In fact - that 1.6 figure is an approximation - not a definitive value, but it works exceptionally well and is backed by theory. It comes from several approximations - the first is that stars can be approximated by a Gaussian distribution (each time we talk about FWHM - we talk of the FWHM of a Gaussian approximation to the star profile). The next ingredient is the Fourier transform of that PSF to get the frequency response. The FT of a Gaussian is again a Gaussian. We take that "frequency Gaussian" and see for what frequency its value falls below 0.1 - or 10%. Above that frequency all higher frequencies are attenuated to less than 10% of their original value by the low pass filter that is the PSF (essentially a gaussian blur that has the profile of a single star PSF in the image). I chose 10% because even with sharpening it is very hard to restore frequencies that are attenuated to less than 10% of their original value - you need to multiply them by the reciprocal - in this case a value larger than 10. This brings up noise - and you can do that only if you have very high SNR in order not to make the noise very noticeable - not something we have in abundance in AP. You can take a different threshold - like 5% - and that will change that 1.6 number slightly - but in reality even frequencies that are attenuated to 10% can't be restored fully and can thus be considered the cut off point. Your logic is sound and that is how it goes - but there is a slight error in what you've written. The ASI533 is indeed a 9mp sensor and the bayer matrix does consist of 2x2 pixels - but not sub pixels. This means that the 9mp number already contains what you call "sub pixels", and by your logic the sensor should be seen as 2.25mp instead, so that when you count in all the sub pixels you get 4 x 2.25mp = 9mp. Out of those 3008 x 3008 pixels you really have only 1504 x 1504 pixels that are red, 1504 x 1504 that are blue and two times 1504 x 1504 that are green. When you shoot Ha with the ASI533 you are effectively only using the red pixels (not really true because QE is never 0 for all colors, so both green and blue catch some light - but let's put that aside for now) - you are using only 2.25mp of your sensor. It is as if you had a 7.5µm pixel sensor with 1/4 of the QE declared for the red pixel. In fact - that is one way of looking at a bayer matrix element. You can look at it as a pixel that has twice the side (hence x4 the area) of the specified pixel, captures all three channels at the same time, but has 1/4 of the declared QE in red and blue and 1/2 of the declared QE in green. With the L-extreme it is a bit of a different story, as you put all pixels to use at the same time. You'll capture OIII with both green and blue pixels (with somewhat different QE) and Ha with red pixels at the same time. This somewhat offsets the fact that only part of the pixels are working for any given filter (because you "image in parallel").
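     For anyone who wants to check that derivation, here it is in a few lines of Python:

        import math

        # Gaussian PSF with given sigma has Fourier transform exp(-2 * pi^2 * sigma^2 * f^2).
        # Solving for the frequency where that drops to 10%:
        sigma = 1.0
        f_cut = math.sqrt(math.log(10)) / (math.sqrt(2) * math.pi * sigma)

        pixel = 1 / (2 * f_cut)                        # Nyquist: two samples per cycle
        fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma  # FWHM of a Gaussian ~ 2.355 sigma

        print(f"FWHM / pixel = {fwhm / pixel:.3f}")    # ~1.609 - hence the /1.6 rule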
  17. I've written so many times about this subject that it sort of feels like repeating myself. Mono: hardware binning is almost the same as software binning - the only difference is read noise. Everything else is the same. Say you have a 3.75µm camera and you wonder whether you should get one that has 7.5µm pixel size or bin? It depends on the sensor size of the larger camera, whether you have a need for a larger FOV, and whether your scope can handle the larger imaging circle. A 2000x2000 px sensor with 3.75µm pixels will be x4 smaller by surface than a 2000x2000px 7.5µm pixel sensor. If you plan on purchasing the same size sensor - then just don't. Bin. FOV when binning remains the same - the only thing that changes (for software binning) is the amount of read noise and hence the needed exposure length. If you bin x2 - you'll have x2 the read noise of a regular image. If you bin x3 - you'll have x3 the read noise. This might seem large - but it is not. Modern CMOS sensors have ~2e of read noise. x2 is ~4e, x3 is ~6e read noise - still lower than most CCDs regardless of pixel size (see the sketch below). Color: things are more difficult to explain because color sensors don't operate at the resolution suggested by their pixel size. If you for example debayer images using regular debayering (interpolation), stack them and then bin x2 like @powerlord suggests - you'll get a coarser sampling rate - but you won't get the SNR improvement that you are looking for (by the way - this approach works fine on mono sensors as those don't debayer). You can bin a color sensor in software and do it properly so that it produces the expected results - but it is not as straightforward as mono. In the end - 0.96"/px is still oversampling. If you want to know the sampling rate that is appropriate for any given image - measure the average FWHM of stars in the image (in arc seconds) and divide that value by 1.6. If you measure 3.2" FWHM - the sampling rate should be 2"/px, if you measure 2.56" - then sampling should be 1.6"/px, etc ... (in order to sample at 1"/px - you need 1.6" FWHM stars in your image).
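     The read noise arithmetic, for completeness - software binning sums NxN pixels, and uncorrelated noise adds in quadrature, so sqrt(N*N) * RN = N * RN per binned pixel:

        read_noise_e = 2.0  # typical modern CMOS, in electrons
        for n in (2, 3):
            print(f"bin x{n}: {n * read_noise_e:.0f}e read noise per binned pixel")
        # bin x2: 4e, bin x3: 6e - still below most CCDs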
  18. Excellent image. Really.
  19. I think that image would benefit from more separation in the background. There is the true background and the dark nebula surrounding the object. The dark nebula can sort of be spotted in the image because of the different density of the star field. I think it would look much nicer if there were also a color / brightness distinction between it and the natural background. The level of denoising in the dark areas seems a bit forced as well.
  20. Indeed, not up to speed with the whole issue - I thought that the thread being too long was the main problem, and that is easily solved with spacers / distance rings (I have a similar issue with my filter drawer, which is rather thin - a longer thread just protrudes inside and stops the drawer from sliding in/out properly).
  21. Or just use a spacer that will shorten the available thread so it does not protrude that much into the filter wheel? There are also these: https://www.firstlightoptics.com/zwo-accessories/zwo-t2-male-to-male-adapter.html https://www.firstlightoptics.com/adapters/baader-t2-t2-inverter-ring.html https://www.firstlightoptics.com/adapters/astro-essentials-t2-to-t2-female-to-female-adaptor.html
  22. It is actually a rather decent graphic of what is going on. People not getting it would be down to their understanding of the concept - or how often they encounter it in real life (never, for 99.999% of people). Not sure what the bottom line stands for - the one with arrows and points (which I believe symbolize photons). That is a bit misleading as it looks like photons or some other particles with mass are speeding up. It would probably be more useful in a gravity scenario rather than red shift. The top part is accurate as far as red shift goes. It shows a galaxy - receding, with an arrow - it shows a wave (light) - being stretched as it approaches us - again true if we take into account the expanding universe. It shows little dots that can stand in for photons or maybe stretching space. And it shows a telescope tube - the apparatus we use to observe said effect. Not sure about the tattoo aspect of the whole thing as I'm not really into ink, so I can't tell if it's going to be effective art / statement / whatever, but as far as the pictogram goes - the top part is ok. I would rethink the bottom arrow part - not sure it fits into this image.