Everything posted by vlaiv

  1. For capture use Bin 1 mode - you can always (and should) bin in software. In my view, bin x2 is the right way to go with the RASA8 and ASI2600MC, given the pixel size and the RASA spot diagram. As for exposure length - that is something you should measure rather than take from prescribed values, because the right length differs between setups, and even for the same setup in different conditions. I'd personally go for gain 100 with that camera to bring read noise down. That puts e/ADU at about 0.25 (according to ZWO's published graph). To find a good exposure: take one sub at your current exposure length and measure the mean background value in, say, the blue channel (the least sensitive one). Multiply the measured value by the e/ADU for your gain to get the background value in electrons. Take the square root of that to get the background noise, then divide it by the read noise (1.5e if using gain 100). That is your current ratio of background noise to read noise. You want that number to be at least 5. If it's less - increase exposure length. If it's more than 5 - you can keep your current exposure length or shorten it. You can even calculate how much to increase/reduce it: divide 5 by the ratio you measured, square the result, and multiply your current exposure length by that to get the needed exposure length. If you run into the problem of over-exposing star cores or parts of the target - there is a quick fix for that. Take a small number of very short exposures at the end of the session (say 5-10s). Stack those separately and multiply the linear values by the ratio of exposure lengths (say 120s / 10s = 12 if you use 2-minute subs and 10s shorts). Then replace all over-exposed pixels in the long-exposure stack with values from the short-exposure stack.
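The recipe above can be condensed into a few lines. A minimal sketch in Python, assuming the gain-100 values quoted above (0.25 e/ADU, 1.5e read noise); the function name and example numbers are illustrative, not from the thread:

```python
import math

def exposure_factor(mean_bg_adu, e_per_adu=0.25, read_noise_e=1.5, target=5.0):
    """Factor to multiply the current exposure length by so that
    background (shot) noise swamps read noise by `target` times."""
    bg_electrons = mean_bg_adu * e_per_adu   # background signal in electrons
    bg_noise = math.sqrt(bg_electrons)       # shot noise of that background
    ratio = bg_noise / read_noise_e          # current background-to-read-noise ratio
    return (target / ratio) ** 2             # square, per the recipe above

# Example: a measured mean background of 100 ADU in the blue channel
# -> 25 e-, noise 5 e-, ratio 5 / 1.5 = 3.33, so expose ~2.25x longer.
```

A factor below 1 means the current subs are already longer than needed and can safely be shortened.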
  2. Here is an idea - since your 180mm F/3.5 lens has ~52mm of aperture, it is probably threaded for 52mm filters. Get a 2" filter (which is M48) and an appropriate adapter, and mount the L3 filter in front of the aperture. This creates a small aperture stop - say you end up working at 46mm of clear aperture (46-47mm is typical clear aperture for an M48 filter), which gives 180 / 46 = ~3.9. You'd effectively be imaging at F/4, which should clean up the stars a bit, and you get the benefit of the L3 filter on top.
  3. In that case - maybe start off by stopping the lens down a bit and see how you like it (both in terms of speed and in terms of blue bloat). Get the L3 as a single filter and use an L3-RGB approach. Refocus on filter change, of course. Maybe you could try an L3-RG approach as well? That would require some fiddling with measurements and numbers - but could be worthwhile. When doing LRGB we are actually doing more "work" than we have to. Color can be captured with only three components - R, G and B. We capture L because it brings the best SNR - but in doing so we add a fourth component. That is not needed; three components are enough, so we might as well go with LRG instead of LRGB. I purposely omitted blue - because of blue bloat, but also because it has the worst SNR of the three components, for several reasons: the atmosphere scatters the most light in the blue part of the spectrum; blue is less common / intense than other wavelengths in astro sources; and even when it is equally present (white and bluish stars) - it is equal in energy, not in photon count (blue light is more energetic, so the same energy means fewer photons); and so on. You can try the LRG approach with the data you already have (it won't reduce the bloat, as bloat is also present in L since it covers the whole spectrum) - just to try it out. Make sure your exposure lengths are the same, compute synthetic B as L - G - R, and compose the LRGB image that way, then see how close it is to true LRGB.
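The synthetic-blue idea can be tried in a few lines of numpy. This is an idealized sketch that assumes luminance equals exactly R + G + B on linear, equally exposed stacks (a real L filter only approximates that); array names and sizes are mine:

```python
import numpy as np

# Hypothetical linear stacks with equal exposure; the toy assumption here
# is that luminance covers the whole spectrum, i.e. L = R + G + B.
rng = np.random.default_rng(0)
R = rng.random((4, 4))
G = rng.random((4, 4))
B = rng.random((4, 4))
L = R + G + B

# Recover blue without ever capturing it, as described in the post.
synthetic_B = L - G - R
```

On real data the match will be approximate, since filter passbands overlap and L is not a perfect sum of R, G and B.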
  4. Changing camera will not help with optics aberrations - it is the lens that is the problem. Sure, you could change the sensor for a less sensitive one - but read that sentence again: does that sound like a solution to anything? Here is a list of things that do work:
1. Stop down the lens. This is not a fast lens, so stopping it down will make it even slower, but lenses often show CA at maximum aperture.
2. Use aggressive filtering - for example an Astronomik L3, or maybe even a Baader 495 long-pass in front of your other filters. Using a combination of the above methods, I took the following image with an ST102 F/5 fast achromat that is otherwise quite a "colorful character".
3. Change the lens. If you are happy imaging at F/3.5, then you will simply love imaging at, say, F/2.8. Get an F/1.4 or F/2 lens and stop it down to F/2.8. In most cases that will clean up the CA and you'll get good results. A Samyang 85mm F/1.4 (or T1.5 version, to be precise) wide open produces this: stars of similar shape to those in your image, and a lot of CA. But here is a test I did with that lens and an artificial star - from left to right: F/1.4, F/2, F/2.8 and F/4, no special filtering (I have several filters that I used with this lens to see their effect - like the Baader Contrast Booster and IDAS LPS P2). As you can see, stopping down to F/2.8 produces rather good results. By the way - lenses are often not designed for small pixels, so it is best to bin your data with them (I used an ASI178, which has the same 2.4µm pixel size as the ASI183, and binned to 4.8µm, as 2.4µm is too fine for a lens that is not diffraction limited). How about the Samyang 135mm F/2? That lens is well known and regarded for AP applications.
4. I could give you further recommendations involving small telescopes, but I'm not sure how much weight the SA can carry. For example, get an F/5.5 doublet with small aperture and the x0.6 FF/FR from Long Perng. That should correct well for a 1" sensor size like the ASI183, and you'd have F/3.3 at around 250mm. If that is too long a FL - then make 2x2 mosaics and bin x2 additionally to get a scope which is effectively half of that - 125mm (still F/3.3 - slower than the Samyang at F/2.8, but probably sharper). I'm going to explore bringing my 80mm F/6 down to F/3 - but with a small sensor like the ASI178. I don't think that would work on larger sensors, as the corners would surely suffer from that much reduction.
  5. Walking noise usually appears in stacks rather than in single subs. Can you post a single sub for inspection? This phenomenon is related to read noise, but I haven't yet figured out exactly how. Dithering will help, and possibly longer exposures.
  6. I saw this scope and think that it could be interesting: https://www.teleskop-express.de/shop/product_info.php/info/p13519_TS-Optics-ED-Apo-96-mm-f-6-with-2-5-Inchl-RAP-Focuser---ED-Objective-from-Japan.html
  7. Mono all the way. You really have three choices there: ASI183mm, ASI1600 and ASI294 mono. I think the last would be the best choice for your use case - it is the largest sensor and has the largest pixels. Don't think that the number of subs alone determines the quality of the resulting image. It is a combination of factors - primarily total integration time. If that is the same (and equipment is comparable) - you'll get virtually the same result whether you use 50 or 500 subs in that time. In fact - a larger number of subs means a slightly worse result, due to read noise. You need to adjust your exposure length depending on read noise and its magnitude in comparison to other noise sources. For NB imaging, where the impact of LP is significantly reduced - the impact of read noise becomes bigger, and longer exposures are needed (fast optics mitigates quite a bit of that, so it allows shorter exposures). The main difference between mono and OSC is sensitivity - OSC cameras only use 1/4 of their pixels to record Ha, so you can think of that as 1/4 the sensitivity of a mono camera. I would not worry about amp glow - it calibrates out fine as long as you do proper calibration.
  8. NGC 281 - I like this image very much. It changed the way I look at this object.
  9. Even a 50mm x7 finder is enough to get you started - if you already have a 50mm finder, there is an adapter you need to connect a guide camera to it: https://www.firstlightoptics.com/adapters/astro-essentials-sky-watcher-9x50-finder-to-t-adapter.html I have this guide scope that I use with my wide-field imaging lens - it is the same as that Svbony, and rather good (see which one is cheaper for you to get): https://www.firstlightoptics.com/guide-scopes/astro-essentials-32mm-f4-mini-guide-scope.html In any case - either will serve you fine for guiding at that level.
  10. In principle that is right - CMOS sensors have lower read noise and hence need shorter exposures, but one should really measure how much read noise there is compared to other noise sources. The author of SharpCap did a YouTube video on the how and why of all this - I believe it is good material to watch in order to understand it.
  11. Don't mind me, I was being somewhat provocative with that post (trying to be half provocative and half funny). But there is a point to it - maybe not with that exact SW model - as you yourself pointed out, people do like a nice Strehl figure - but comparing two scopes of different aperture and different Strehl is another matter. On the purely optical side of things - a larger, diffraction-limited, obstructed scope can outperform a smaller, unobstructed one with a perfect figure. In practice it will also depend on the atmosphere, the target, observing conditions - and of course personal preference.
  12. Maybe because it will be outperformed by this: https://www.firstlightoptics.com/reflectors/skywatcher-explorer-150pl-ota.html 😈
  13. If you decide to get an auto focuser - then it would be a good idea to let it check focus every hour or so (maybe two - depending on how quickly the temperature changes). The tube cools as the temperature drops and shrinks by a tiny amount - but enough to throw focus off. By the way - if there is a significant difference between Ha and OIII, part of it might be due to scope cooling - so try switching the order of the filters to see if you can make the cooling work for you and "refocus" in the good direction instead of the bad one.
  14. Again, I can't really tell if you do. Pixel-wise, the ASI485mc and ASI178mm are very similar - 3840x2160 vs 3096x2080 - you are really only gaining about 800px in width and 80px in height. If you want the larger sensor just for the FOV - then with the above trick you can extend the FOV of the ASI178 to a larger size and still have 3096x2080px (mind you - you could take a hybrid approach when doing mosaics: with a 3x3 panel mosaic, for example, you'd end up with roughly 9000 x 6000 px - you could bin that x3 as described above, or maybe even x4, and end up with a 2250 x 1500px image, gaining a bit more SNR in the process - not many people have displays that show 2250x1500 without resizing). If you want the ASI485 for "speed" - well, we need to do the calculations: 2.9µm pixels with 1/4 getting red, 1/2 getting green and 1/4 getting blue, versus 2.4µm pixels with some sort of LRGB imaging and full pixel usage - for example LRG. You really don't need to do full LRGB to get proper color - since luminance collects all three color bands at the same time (increasing SNR for a given imaging time), since luminance matters more visually than chrominance, and since we need only 3 components to get color - you could do something like 3/4 of the time on lum and split the remaining 1/4 between red and green. Which would be faster in that case, the color camera or mono + filters? It is very hard to tell without doing the math (and even then we would need the exact color matrices to calculate things properly - that is, if you want true color rendition). If you want the ASI485 for the "novelty factor" - time to get new gear - then maybe consider a cooled version of some camera instead? There are cheaper and better alternatives as well. How about an APS-C sized mirrorless camera? If you mod it by removing the IR-cut filter - you'll get a very good 6000x4000 large-sensor camera that is essentially the same for DSO work (it won't be as good for planetary).
We touched on that subject in another thread (it might be an interesting read):
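The hybrid-mosaic arithmetic above (3x3 panels binned x4 giving roughly 2250 x 1500) can be sketched as a tiny helper. Function and parameter names are mine, and panel overlap is ignored - real mosaics lose a few percent per seam:

```python
def mosaic_size(panel_w, panel_h, panels_per_side, bin_factor):
    """Rough output dimensions of an n x n mosaic after software binning.
    Overlap between panels is ignored (simplifying assumption)."""
    w = panel_w * panels_per_side // bin_factor
    h = panel_h * panels_per_side // bin_factor
    return w, h

# 3x3 panels of ~3000x2000 px, binned x4 -> about 2250x1500, as above.
# 3x3 panels binned x3 gives back the single-panel pixel count.
```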
  15. Quite normal for an ED80 - it is an ED doublet, not a triplet. ED doublets also have chromatic aberration - just reduced to minute levels by the use of very expensive/exotic glass. You can happily continue to use that scope for imaging - provided that you refocus when switching filters: although the filters are parfocal, the optics require it. Maybe a motor focuser would be a good solution? Automatic focusing routines let you store offsets per filter, so they can adjust focus on filter change without running the whole focusing routine again.
  16. It won't work on a smaller sensor. I mean, it will - but it will always give you the equivalent of that sensor with a shorter focal length - not the same scope with a "larger sensor". Say you get an ASI224 - the above approach will always produce a 1300x900 pixel image - as if using that same ASI224 with a smaller scope. This is because of the need to bin each panel to recover SNR. You can't add panels to create a larger (more megapixel) image of the same SNR - for that you need to spend more time, because you can't bin if you want more megapixels. I used ImageJ/Fiji - it has a few plugins that can stitch images for you. Yes, gradients are a pain, and you need to remove the gradient from each panel (sometimes not easy) before stitching. Imaging software like NINA, APT (I think) and SGP have mosaic planners/generators which will help you plan your mosaic. The alternative is to use Stellarium. It is easy to do with custom markers (shift + click to place, ctrl + click to remove, if I'm not mistaken - look up the key assignments under F1/help). Here is B174 planned with the ASI178 and an 80mm F/6 reduced to F/4.5. I set the first marker at the center of the target - then I move the FOV so that the marker ends up in a corner. I eyeball the distance of the marker to the corner (marked with arrows in the above image). Once I'm happy with the placement of the FOV - I add a custom marker at the center of the FOV at that position (markers 2, 3, 4 - and so on until the final marker). After that I can select the markers and get exact RA/DEC coordinates for each of them - which can then be used to point the mount at each panel (plate solving is good to have here). Here is an example of a mosaic I did some years ago: a 500mm scope with an ASI185mc camera. I used an ST102 and stopped down the aperture (to tame the chromatic blur). I did not do flats at that time - which helps you see the panels in the final image.
The image does not go deep, due to the stopped-down aperture and the LP conditions it was shot in - but I think it is something like 2h of exposure divided over 9 panels.
  17. This is actually the proper way to assess differences - but I cheated a bit here. I already knew there wouldn't be a perceivable difference, as such a test needs to be performed on linear data - and the differences are often on the order of 1/1000 of the pixel values. In the above image, the visible differences are more down to 8-bit samples being compared than anything else. The data is also stretched, which changes some of the frequency response - in the linear regime the difference is even smaller. In fact - when I assess what the proper sampling rate is - I go by SNR: what level of detail can be sharpened in the image without making the noise too visible.
  18. Did you actually measure the level of read noise compared to other noise sources? The most dominant noise source is light pollution noise. That varies with location, but also with sampling rate. You might be located under heavy LP - but just like the target signal, LP is also "a signal" - an unwanted one that brings its own noise into the mix, but it still acts as a signal. If you over-sample when capturing, you are in effect spreading the LP signal just as you spread the target signal - you don't lower the effective light pollution, but you do reduce its level per pixel and hence its noise. This means that LP noise becomes smaller - and you need to spend more time on a single frame to swamp read noise. For example - if you image at 1"/px versus 2"/px - you need four times as long an exposure to equally swamp read noise with LP noise. Binning does not help here - it happens after you read out the pixels, when read noise has already been added to the mix; it helps with the SNR as it stands after the read noise is in. Now I'm confused - I was under the impression that I always advocated for a proper sampling rate, and where that means binning - to bin, unless you are able to actually exploit the resolution you are working at.
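The "four times as long" figure is just the square of the pixel-scale ratio: sky flux per pixel scales with the pixel's sky area. A one-liner to make that explicit (the function name is mine):

```python
def swamp_exposure_ratio(scale_fine, scale_coarse):
    """How much longer a sub must be at the finer pixel scale ("/px) to
    swamp read noise equally well: LP flux per pixel scales with sky
    area, i.e. with the square of the pixel scale."""
    return (scale_coarse / scale_fine) ** 2

# 1"/px vs 2"/px -> each pixel sees 1/4 of the sky flux -> 4x longer subs.
```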
  19. Still not convinced that there would be a speed benefit from this camera - I really do think we need to crunch the numbers to get an idea. If you don't like doing mosaics - well, that is fair enough, but here is a fun fact: "Any scope, whatever its focal length, can act as a same-F/ratio scope with an integer fraction of its focal length - with any given camera". In other words - say you have a C6, which has 1500mm of FL - you can make it work as a 750mm (1500/2), 500mm (1500/3), 375mm (1500/4) and so on F/10 scope with your ASI178. Say you want a 500mm image in 2 hours. Instead of imaging a single field for two hours - you image 9 panels, each for 800 seconds. That totals 7200 seconds, or two hours. Then you take each panel and bin x3, reducing the pixel count to ~1000 x 700 (instead of ~3000 x 2100), but when you stitch the 3x3 mosaic back together you again have ~3000 x 2100 (or just a bit less due to overlap, maybe 2800 x 1900). Although you imaged each panel for 1/9 of two hours - you binned it x3, which improves SNR by a factor of x3 - the same as stacking x9 more subs, or imaging for the two whole hours. There you go - you created an image of the wanted FOV, in the wanted amount of time, as if you actually had a same-F/ratio scope of shorter FL and used it for the same amount of time with the same camera.
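The bookkeeping in the 9-panel example can be sketched as a small planner; the function name is mine, and it relies only on the facts stated above (bin NxN improves SNR by a factor of N, the same as stacking N*N times more subs):

```python
def panel_plan(total_time_s, panels_per_side):
    """Time per panel for an n x n mosaic shot in a fixed total time,
    and the SNR factor recovered by binning each panel n x n."""
    n = panels_per_side
    t_panel = total_time_s / n**2   # split the session across n*n panels
    snr_gain = n                    # bin n x n -> x n SNR, like n*n more subs
    return t_panel, snr_gain

# 2 h over a 3x3 mosaic: 800 s per panel; bin x3 restores the SNR
# of spending the full 2 h on a single field.
```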
  20. I'll use one of Olly's images to demonstrate that what looks perfectly fine at 0.9"/px in fact has less detail than that, and can happily be sampled at, say, ~1.5"/px instead. Here is a crop of the original image: I took that crop and resampled it to 2/3 of the original size - which is 1.35"/px if the original was at 0.9"/px. Here is what that looks like: Now, if the original version has detail that can't be recorded at 1.35"/px - then if I enlarge this smaller version, we should be able to tell it apart from the original crop above. So I did exactly that; here is the small version enlarged back to 0.9"/px: Can we spot any differences between the two (and no, it is not the same image - I really did reduce the size and enlarge it back again)?
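The shrink-and-re-enlarge test described above can be mimicked with plain numpy. This is a simplified sketch: it uses a factor-of-2 block average rather than the 2/3 resample in the post, a smooth sinusoid stands in for a well-sampled image, and all names are mine:

```python
import numpy as np

def down_up(img, f=2):
    """Block-average downsample by f, then enlarge back by pixel repetition."""
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    small = img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, f, axis=0), f, axis=1)

# A smooth (band-limited) image survives the round trip almost unchanged...
x = np.linspace(0, np.pi, 64)
smooth = np.sin(x)[None, :] * np.sin(x)[:, None]
err_smooth = np.abs(down_up(smooth) - smooth).max()

# ...while pixel-to-pixel noise (detail beyond the coarse scale) does not.
noise = np.random.default_rng(1).standard_normal((64, 64))
err_noise = np.abs(down_up(noise) - noise).max()
```

If an image passes this round trip with negligible error, its real detail was already representable at the coarser sampling rate.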
  21. I'd put this high on your priority list - get guiding running on your imaging rig. Pixel binning has the same effect as changing pixel size: you take a group of 2x2 or 3x3 (or more) pixels and treat them as a single pixel by simply adding the light in each - the same as would happen if we had a larger pixel to begin with, capturing all the light falling on it. If you change "pixel size" - you effectively change the pixel scale, even though your focal length remains the same.
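Software binning as described - summing pixel groups into one "big pixel" - can be sketched in numpy; the function name and the toy frame are mine:

```python
import numpy as np

def software_bin(img, f=2):
    """Sum f x f pixel groups into one pixel - the same light collection
    as a sensor whose pixels are f times larger on each side."""
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).sum(axis=(1, 3))

frame = np.ones((100, 100))       # stand-in frame, 1 e- in every pixel
binned = software_bin(frame, 2)   # 50x50; each "big pixel" holds 4 e-
```

All the captured light is preserved; only the pixel count drops, which is exactly why the effective pixel scale changes while the focal length does not.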
  22. I'm in the "if it's not too much hassle then do it" camp, and I take x256 flats, x256 flat darks and at least x64 darks, but often more - those tend to be cumulative, and you can add more if it's cloudy and you want to see if your darks have changed significantly from those you already have (which you check by creating another master from the new session and subtracting it from the old master - if you get pure noise with an average pixel value of 0, you are fine). My flats are very short - in the 10ms range; it takes longer to download a frame than to expose it (although I have CMOS cameras), as I have a very strong flat panel. I have never had the issues people report with flats due to short exposures.
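The master-dark check described above (subtract the old master from a fresh one and expect pure noise centered on 0) could be sketched like this; the function name and the tolerance value are my assumptions:

```python
import numpy as np

def darks_still_good(old_master, new_master, tol_e=0.1):
    """Subtract the old master dark from a freshly built one. If what's
    left is pure noise with a mean near 0 e-, the old darks are still
    valid; a systematic offset means the darks have drifted."""
    diff = new_master - old_master
    return abs(diff.mean()) < tol_e
```

In practice you would pick the tolerance relative to your camera's read noise and the number of frames in each master.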
  23. One thing you need to be careful with when comparing two different sensor sizes is that you need to pair them with different scopes in order to fully understand the benefits. The best way is to match FOVs using a similar type of scope with larger FL and aperture (if you are matching against the 130PDS - then look at the 150PDS and 200PDS as OTAs to pair with). Once you have similar FOVs - bin as necessary and then compare sensitivity, taking into account aperture size and any bayer filters. I think the 462 is quite unique, as it is designed to be very sensitive in the IR part of the spectrum. That puts its sensitivity in blue somewhere around 0.5 or so (since the peak at 800nm is likely around 80% absolute QE, each reading from the relative chart should be multiplied by 0.8 - the peak in blue is 0.67 * 0.8 = 0.536). It is a good choice for planetary imaging (especially IR and methane bands), and there have been models like it before - although not as sensitive. I have one like that, the ASI185, which I use for guiding: it also acts almost as a monochromatic sensor above 800nm - but nowhere near as sensitive as the ASI462.
  24. Yes, FITS from APP is linear data. Not sure what you use for processing, but I've heard that PixInsight has FWHM measurement. If you don't have any software that will do it - look up AstroImageJ. It has a star measurement tool that will give you that info; shift-click measures the star you click on and shows its FWHM in the measurements table. Just be careful about whether you are measuring in pixels or in arcseconds (convert between them using the "/px scale of the original file). Round stars are best achieved by mechanically tuning your setup; if your guiding is good - they should be round and tight regardless of exposure length. In principle, working at a lower resolution will make stars appear tighter / rounder - due to the different pixel scale - so yes, you should be able to expose for longer when binning. But again, that is not solving the star shape issue - only masking it. Try to solve it properly - so it does not matter whether you expose for 2, 5 or 10 minutes, you should always get round stars (and as tight as possible).
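The pixels-to-arcseconds conversion mentioned above uses the standard plate-scale formula, 206.265 * pixel_size(µm) / focal_length(mm). A small sketch - the function names and the example numbers are illustrative, not from the thread:

```python
def arcsec_per_px(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel (206.265 is 206265 "/rad
    with the um/mm unit mismatch folded in)."""
    return 206.265 * pixel_um / focal_mm

def fwhm_arcsec(fwhm_px, pixel_um, focal_mm):
    """Convert a FWHM measured in pixels to arcseconds."""
    return fwhm_px * arcsec_per_px(pixel_um, focal_mm)

# e.g. a 2.5 px FWHM with 3.76 um pixels at 530 mm focal length
# corresponds to roughly 3.66 arcseconds.
```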
  25. Yes, except that is for the 462 model - so really only marginally better than nothing.