Posts posted by vlaiv

  1. 1 hour ago, Leon-Fleet said:

    I don't have any idea what the above means so will have to read up and learn by trial and error. 

    Sorry about that, I sometimes forget that people might not be acquainted with the meaning of certain abbreviations. I'll expand on that.

    SharpCap and FireCapture are the two pieces of software most commonly used to record planetary video for the purpose of lucky imaging. Both are free (you can get the Pro version of SharpCap if you wish to support the product, but at this stage I don't think it's necessary - the free version will do the job just fine).

    Planetary cameras work in two different modes - 8-bit and 12-bit - which is just the number of bits used to record the data. You can record video in 8-bit format, and that has its advantages (less data and faster capture), but you need to adjust the gain setting appropriately for each camera model to exploit this mode properly. If you don't set the gain properly you can get data truncation, and that is a bad thing. 12-bit mode will be a bit slower, but it should work regardless of the gain - there should be none of the truncation I mentioned. This is an option in the capture software - you will see different capture modes offered - go with RAW16 (which is actually 12 bit rather than 16, because of the way this particular camera works).

    ROI stands for Region Of Interest. With lucky imaging it is all about the speed at which frames are recorded. If you select, for example, a 5ms exposure length, in theory you should be able to record 200 frames each second (200fps), as each one takes 5ms (1000ms / 5ms = 200 frames per second). There are technical limits to how much data you can actually record: the speed of the camera/computer connection and the speed of the hard drive that stores the data. The connection is USB, and a USB port can only transfer data at a certain rate - USB 2.0 is slower than USB 3.0, which is why a USB 3.0 connection is recommended, but both camera and laptop have to support it. In your case you will be limited to USB 2.0, as your camera model operates on that standard.

    Back to region of interest - instead of writing out each frame as a complete image, which contains a lot of pixels, you can select a small region of the sensor to be read out and recorded. That means less data to transfer and less data to store. Planets are small and usually cover only a small central region of the sensor - something like a couple of hundred pixels across - and most of the image is just black background that you don't need; it's a waste to record, transfer and store that data. For that reason you can select a smaller output image size - just a central region large enough to contain the whole planet - something like 320x200 or 640x480 instead of the full 1280x1024 image size.
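    To put rough numbers on this, here is a minimal Python sketch of the bandwidth arithmetic (the ~35 MB/s figure for effective USB 2.0 throughput is an assumption for illustration, not a camera spec):

    ```python
    def max_fps(width, height, bits_per_pixel, bus_mb_per_s, exposure_ms):
        """Frame rate limited by either exposure length or bus bandwidth (rough estimate)."""
        bytes_per_frame = width * height * bits_per_pixel / 8
        fps_from_exposure = 1000.0 / exposure_ms             # limit set by exposure length
        fps_from_bus = bus_mb_per_s * 1e6 / bytes_per_frame  # limit set by data transfer
        return min(fps_from_exposure, fps_from_bus)

    usb2 = 35  # MB/s, assumed practical USB 2.0 throughput
    print(max_fps(1280, 1024, 8, usb2, 5))  # full frame: bus limited, ~27 fps
    print(max_fps(640, 480, 8, usb2, 5))    # 640x480 ROI: ~114 fps
    print(max_fps(320, 200, 8, usb2, 5))    # 320x200 ROI: exposure limited, 200 fps
    ```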

    If you look at the specs for the QHY5II series of cameras on this page:

    https://www.qhyccd.com/index.php?m=content&c=index&a=show&catid=133&id=8&cut=1

    you will see that there is quite a bit of difference in achieved frame rate between full frame and 640x480 ROI, with the latter being much faster:

    image.png.c44dc3d375f0129e2b6bfb2a7bae56bc.png

    The only time you don't want to "shrink" the imaging area with ROI is when you are shooting the Moon - simply because it is large enough to fill the field of view and will often be larger than the sensor can cover (in that case you can do mosaics if you want to capture the whole lunar disk - shooting separate parts of the Moon and then piecing them together into one final large image).

    In any case, the number of captured frames is really important for lucky imaging, because you will end up throwing away most of them since they will be distorted by seeing. The more frames you have, the better the chance that you will have enough good frames to stack, and the better your image will be.

    The last part of the equation is the speed at which your laptop can record the video - it can also be a bottleneck. This is why I mentioned replacing your standard hard drive with a solid state drive - these are much faster storage devices. You might not need to do it, but if you want the best possible results, at some point you will want to move to a laptop with an SSD (along with some other upgrades that I'll mention at the end).

    SER vs AVI - that is just the format used to store the movie. The SER file format lets you record at a higher bit count (the 12 bits mentioned above, or at least more than 8 bits - the M model seems to use a 10-bit format unlike other models of the QHY5II line, but all the same, go with the highest number of bits available) and is a simpler file format to handle - a de facto standard for planetary imaging.

    In the end I would like to say that the 72ED is probably the worst scope for this purpose :D (sorry about that - I do believe it is a fine scope, just not well suited to planetary work). With planetary imaging it is aperture that matters, as resolved detail is related to aperture size. Planets will be tiny with that scope. It is quite OK to start planetary imaging with such a scope to get the hang of it and learn the capture and processing parts, but if you are really interested in planetary imaging you will want a bigger scope soon.

    It does not need to be an expensive scope - something like an F/8 150mm Newtonian makes a really nice and cheap planetary imaging scope. With planetary imaging, unlike long exposure DSO imaging, you don't need a very stable mount. As long as it can carry the scope and track with decent precision, it will do. The exposures involved are so short that there is simply no way there will be any blur due to the mount not tracking perfectly.

    For example, I took this image of Jupiter on an EQ2 mount with a simple DC RA tracking motor (the kind where you set the proper speed with a potentiometer) and a 130mm Newtonian:

    jup_16.png

    (it was also taken with a QHY5II - but the L model, and it was a colour camera)

  2. Not sure what is going on exactly, but here is the "theory" in a nutshell - it might help you figure it out.

    There are three different configurations that you can use to image with your scope:

    - prime focus (just the camera sensor at the focal plane of the telescope)

    - EP projection - an eyepiece between the telescope and the sensor

    - afocal imaging - both an eyepiece and the camera lens between the telescope and the sensor.

    The first one I presume you understand.

    The third one is like using the telescope for visual observation, with the difference that the camera lens acts like the eye's lens and the camera sensor acts like the retina.

    In that configuration the beam exiting the eyepiece is collimated (parallel), and it is the eye/camera lens that does the focusing. That corresponds to the normal EP focus position.

    If you are using EP projection, the eyepiece acts as a simple lens and you no longer want the light to exit as parallel rays, as that would give a blurry image on the sensor - you want the EP to focus the light onto the sensor.

    This can be achieved in two different ways - the EP can act as a focal reducer, or it can act as a "regular" (re-imaging) lens. Here are simple ray diagrams to help you understand:

    image.png.b5c6f1d4cd4e80195bf12b4b1157b86b.png

    The upper diagram shows the EP acting as a "focal reducer", while the lower diagram shows the EP acting as a re-imaging lens.

    If we take the regular EP focus position as the "baseline", the two cases above can be summarized as follows:

    - for the focal reducer, you need the sensor to be closer to the EP than the focal length of the eyepiece (so with a 32mm EP, for example, the sensor must be less than 32mm away, or "inside" its focal point). This configuration also moves the focus position "inwards" with respect to the baseline - it acts as a regular focal reducer, reducing the size of the image. The reduction depends on the sensor-EP distance.

    - for regular EP projection (the bottom diagram), you want the sensor to be further from the EP than its focal length. This configuration moves the focus point further away from the telescope (an outward focuser position compared to baseline) and it can give different magnifications depending on where you put the sensor. If you put the sensor at twice the focal length you get 1:1 - no change in scale - and it also means you need "one focal length" of outward focuser travel.

    Depending on what you want to achieve - you most likely want the second scenario; the first one is rather difficult, and in general you don't have enough inward travel to reach it.

    You can use an online calculator for the distance and focus travel needed - like this one:

    http://www.wilmslowastro.com/software/formulae.htm#EPP

    It will give you approximate results (but good enough for orientation).

    For example, using a 25mm eyepiece on your scope and placing the sensor 80mm from it will give you:

    image.png.ea9ddfdefefdd89e9376c8bb03ac9bed.png

    2200mm effective focal length.

    You can also use the lens formula to calculate the outward focus travel needed:

    1/object + 1/image = 1/focal_length

    so

    1/object = 1/focal_length - 1/image = 1/25 - 1/80 = 0.0275

    So the object distance is ~36.4mm, and since the regular FL of the eyepiece is 25mm, the difference is 11.4mm - that is how much outward focus travel you need in the above case.
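    If you prefer to plug in your own numbers, here is a minimal Python sketch of the same arithmetic (the 1000mm native focal length is only an assumption used to reproduce the ~2200mm example above):

    ```python
    f_ep = 25.0        # eyepiece focal length, mm
    image_dist = 80.0  # eyepiece-to-sensor distance, mm
    scope_fl = 1000.0  # assumed native focal length of the telescope, mm

    object_dist = 1.0 / (1.0 / f_ep - 1.0 / image_dist)  # ~36.4 mm
    magnification = image_dist / object_dist             # ~2.2x
    effective_fl = scope_fl * magnification              # ~2200 mm
    outward_travel = object_dist - f_ep                  # ~11.4 mm of extra outward focus

    print(object_dist, magnification, effective_fl, outward_travel)
    ```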

    Hope this helps.

  3. Not sure which tutorial to recommend, but I can make a quick list of things you should try out when developing your own workflow for planetary imaging.

    Which model of the QHY5-II do you have (there are L, M and P versions if I recall correctly - they differ in the sensor used)?

    Use a x2.5 to x3 barlow with that scope and camera. Use an IR/UV cut filter with that scope (or maybe a narrowband filter for the Moon).

    Capture video using SharpCap or FireCapture in 12-bit mode, using ROI to frame the planet (unless doing lunar). Make sure your laptop has a fast enough HDD (best if it is an SSD with enough speed). Use a USB 3.0 port if your camera is USB 3.0 (but I think the QHY5-II is USB 2.0 only, right?). Use the SER file format to capture your movie (not AVI).

    Keep the exposure length short - about 5-10ms - and capture as many subs as you can for about 3-4 minutes if doing planets; for the Moon you can go longer than that.

    Use higher gain settings. If you start saturating (very unlikely unless you are imaging the Moon) - drop the exposure length.

    Capture at least a dark movie as well (same settings as the regular movie, except that you cover the scope to block all light). If you have a flat panel, capture flat and flat dark movies as well. Aim for at least 200-300 subs in the dark, flat and flat dark movies.

    Use PIPP to calibrate and prepare your movie - basic calibration, stabilize the planet, etc. - and again export as SER.

    Use AutoStakkert! 3.0 (or whichever version is latest now) to stack your movie - save as at least a 16-bit image (32-bit if possible).

    Use Registax 6 to do wavelet sharpening, and do the final touch-up in Gimp or PS or whatever image manipulation software you prefer.

  4. For the ASI1600 and its histogram, you really need to examine the resulting 16-bit FITS to see whether there is actually something wrong with it.

    It will also depend on the gain and offset settings that you used. It is best not to mess too much with those - leave gain at unity (139) and offset around 50-60 (I personally use 64).

    The 12-bit range that the ASI1600 operates at is not quite as large as the 14 or 16 bits of some DSLRs and most CCD cameras, but it is still quite a large range. You can't expect the histogram to look the way you are used to in daytime photography or when you open a regular image in Photoshop. That does not mean it is bad.

    You also need to understand that a sub from the ASI1600 is going to look rather dark if it is not scaled in intensity or stretched. That is quite normal and does not mean the camera is bad or malfunctioning. For reference, here is a single sub from my ASI1600, its histogram, and what was really captured, for comparison:

    image.png.ef7caffec0007be5550a56dd1a038864.png

    There seems to be almost nothing in that sub (it's a single calibrated 4-minute Ha sub).

    The histogram also looks very poor - almost "flattened" to the left:

    image.png.6b83c8998727b964864d5ad32183ef80.png

    But in reality that histogram is fine; if we "zoom in" on the important section of it, you will see that it looks rather good:

    image.png.d61ac9466f2b63bcb7746cddfaca6981.png

    It has a nice bell shape, with the right side being a bit more extended and "thicker" - meaning there is some signal in the image. Don't be confused by the negative sign on the left of this histogram - it is a calibrated sub, so the dark frame has been subtracted.

    The resulting signal in the image, when properly stretched, is this:

    image.png.af5af06943119f2771e79401606322e0.png

    As you can see, there is plenty of detail in a single sub, although it will not show without stretching.
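    If you want to do this kind of inspection yourself, here is a minimal Python/numpy sketch of a simple percentile-based screen stretch (just an illustrative stretch, not the exact one used for the image above):

    ```python
    import numpy as np

    def screen_stretch(img, black_pct=5.0, white_pct=99.9, gamma=0.35):
        """Percentile-based stretch for inspecting a linear sub on screen."""
        lo, hi = np.percentile(img, [black_pct, white_pct])
        stretched = np.clip((img - lo) / (hi - lo), 0.0, 1.0)
        return stretched ** gamma  # gamma < 1 brightens the faint signal

    # usage, assuming astropy is available to read the FITS sub:
    # from astropy.io import fits
    # preview = screen_stretch(fits.getdata("sub.fits").astype(np.float64))
    ```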

    Here is one sub from the same set, but this one is still in 16-bit mode and not calibrated:

    image.png.f850bada9f792fc1f06a125063204679.png

    You can see that the image looks noisier and there is amp glow to the side (all of which calibrates out), and the histogram is "bar like" - that is because the image is still in 16-bit mode and uncalibrated (unlike the one above, which is in 32-bit mode and calibrated).

    The moral of this is - don't judge camera output and quality by what your capture application is showing unless you know what to look for. You need to examine what subs from your camera look like when you stretch and calibrate them to see whether there is something wrong with them, or whether they show enough detail - the display in a capture application can be misleading.

  5. 55 minutes ago, sshenke said:

    Hello all, I have been struggling to obtain good images for some time now, which I have asked for help with, as in the following post:

    https://stargazerslounge.com/topic/342321-poor-image-quality-cause/

    I thought that dew was the problem, hence I posted in this thread discussing the merits of buying a refractor instead. Previously I was absolutely certain that there was dew on the primary mirror, but I am not sure now. In fact the dew that I noticed on the primary mirror was seen as soon as I had brought the telescope indoors, so it is very likely that the dew had condensed on the surface when it was brought indoors rather than while it was outdoors.

    Last night was another disastrous (sorry for the pun!) session where I could hardly see M31; this is really perplexing to me. Interestingly, the guidescope seems to produce a better image than the main camera, as you can see from the attached images: the ASI 120 mini is the camera attached to the guidescope and the ASI 1600MM Pro is the main camera attached to the telescope - a Skywatcher 130PDS. For the same exposure time, or even less, the guidescope camera collects more light than the main camera. I am guessing that this should not be the case? I wonder if anyone can shed light on this problem please. Thanks in advance

    The screen capture suggests that you are using the ASI1600 in 8-bit mode, and that is not going to produce good results.

    image.png.0d1154c57d8fbd1ec850074a36c2e2ef.png

    Switch to 16-bit mode and examine the image stretched to see the captured detail.

  6. Really remarkable images (both the scaled-down version and the full size).

    I hope you don't mind me pointing out the following, but in my view these small details are what is robbing your work of perfection. If it were not for them, I would really consider the above images perfection in AP.

    You are slightly oversampled (at 100% the large image does not render stars as pin points - there is a softness to them, which indicates slight oversampling. There is also the issue of "fat" diffraction spikes, which means the atmosphere was not playing ball and there is no detail to justify such a resolution). Going with a lower sampling rate will further improve the SNR - which is great already; this is probably the best rendition of this galaxy that I've seen (not talking about M31, but this little fella):

    image.png.ab48f4f024d68259cc7c61530b6c8cfb.png

    This is the first time I've clearly seen the bar in this galaxy and how it is twisted.

    Going with a lower pixel scale would give you additional SNR and a smoother background, while everything would still be visible in the image.

    The second thing is obviously the edge correction of your setup. It is a fast Newtonian astrograph and is bound to suffer some edge softness on larger sensors, but in this case it somewhat hurts the mosaic because the overlaps can easily be seen, like this:

    image.png.b7070dc5102395088654c7260dc0fe47.png

    Also, I'm not sure what software you used, but the stitching is not quite perfect, as this part shows:

    image.png.7dc3a7bd83e3ca0d1de47b48b4f73ade.png

    And the third thing is obviously the blown cores of M31 / M32.

    I'm aware that I might be considered too harsh with my comments since you have produced some splendid images, but I do think that the above things can be easily rectified (add a few filler exposures for the cores, be careful about the stitching / blending part, and bin your data in software) and then you will be even closer to perfection in your work.

  7. On the other hand, for the price of those four SVBony eyepieces you can almost get 4 BST Starguiders, which are known to work very well.

    FLO offers a 15% discount on 4 or more EPs purchased, and the stock price is £47 for a single EP, so if you purchase 4 of them (5, 8, 12, 15, 18 and 25mm focal lengths are available, and each of them has 16mm eye relief, 60 degrees AFOV, and will work fine in an F/6 scope) that will cost you only 20 quid more than the offer you linked to.

  8. 2 minutes ago, TareqPhoto said:

    I can't remember now, but mostly I try to have something like 10-20 frames; maybe I once tried 40 frames of those short exposures, but I will consider those as failed experiments and will start over, or try again with a bit longer exposure (as much as is allowed) and more subs. So how much total integration time is sufficient under this Bortle sky?

    To get a decent amount of nebulosity around M45 you need to expose for at least a couple of hours total, so 20-40 subs of 30 seconds to one minute is just not going to be enough.

    The second important thing is processing, of course - you need to gain more skill in processing in order to be able to render faint nebulosity properly.

    For example, I'm certain that the first image in the thread - that of M45 - can reveal much more nebulosity than it now shows. A quick manipulation of the attached JPEG gets this:

    image.png.ff4cd731c675242a1100a2a6a5e9c45f.png

    As you can see, there is nebulosity there even after converting to 8 bit and JPEG. I'm sure that in 32-bit format it can be rendered much better.

  9. I have no idea what they are like as I've not used them, but they do look conspicuously like the well-known "gold line" range of eyepieces.

    Maybe have a read about that line of EPs to get an idea of their likely performance. Although SVBony states 17mm eye relief for the whole range while the gold line EPs differ in ER from 14.8 to 15mm, and the gold line is quoted at 66 degrees AFOV vs 68 for the SVBony, the EP sizes and the range of magnifications seem to match those of the gold line.

  10. 10 hours ago, Gan said:

    BTW I found this info via a link, although it's beyond my understanding

    I found similar graphs myself, but have no idea how to read them, or rather what the meaning of the DN unit is (or, for that matter, log2(DN), although I suspect that is the number of bits needed for the DN value - just log base 2 of the number).

  11. 4 hours ago, Gan said:

    Again very informative and helpful discussion. Thank you.

    Sorry, I should have mentioned the full information: my DSLR is a Canon 80D.

    It seems very clear now that a dedicated astro camera, preferably with cooling, is the way forward. Obviously this is a bit of an expensive venture. In the meantime, what can I do to get the most out of my DSLR?

    How do I get sampling rates of 1.2" to 1.5" using my Canon 80D?

    I didn't quite understand this super-pixel. Please could you explain it and how to use it?

     

    Unfortunately I can't seem to find read noise values for the 80D expressed in electrons at different ISO settings (another advantage of astro cameras - you get those specs, or you can measure them), but regardless of that I would use longer exposures, since you are vastly oversampling with the 8" SCT. There is a hint online of it being "ISO-less", so we can assume that the read noise is pretty low. It also means that you can use something like the ISO 200-400 range to give you more full well capacity.

    So the first recommendation is: go as long as you can with exposure length - at least a couple of minutes.

    The second recommendation would be to try proper calibration, regardless of the fact that you don't have set point cooling. For your next session, gather all the calibration subs to try it out (you can still skip some steps and do a different calibration by just omitting certain files from your workflow).

    - Darks at a temperature close to the one you worked at during the night - maybe take 10-20 darks before you start your lights and another 10-20 darks after you finish, or do it on a cloudy night when the temperature is close to that of your light subs. Try to get as many dark subs as possible (at least 20-30, but more if you can).

    - Do a set of bias subs - again, gather as many as you can.

    - Do a set of flats (you need to do it on the night of imaging if you don't have an obsy and need to disassemble your rig at the end), and

    - do a set of matching flat darks.

    I don't know what software you are using, but do a regular average for the bias, flats and flat darks, and use sigma reject stacking for the darks. Also use dark optimization (there is a checkbox in DSS if you are using that for stacking). If you find any artifacts like vertical / horizontal streaks or similar in the background of your final image, that means dark optimization failed for your sensor - in that case try what most people do with DSLRs: use bias instead of darks (and flat darks).
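    For reference, here is a minimal Python/numpy sketch of the calibration arithmetic these master frames feed into (frame loading is left out; the lists of arrays are assumed to be already loaded, and DSS does the equivalent internally):

    ```python
    import numpy as np

    def calibrate(lights, darks, flats, flat_darks):
        """Basic calibration: all arguments are lists of 2D numpy arrays."""
        master_dark = np.mean(np.stack(darks), axis=0)        # sigma-reject average in practice
        master_flat_dark = np.mean(np.stack(flat_darks), axis=0)
        master_flat = np.mean(np.stack(flats), axis=0) - master_flat_dark
        master_flat /= np.mean(master_flat)                   # normalise flat to ~1.0
        return [(light - master_dark) / master_flat for light in lights]
    ```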

    The next thing to do is use super pixel mode when debayering. Again, that is not the best way to do things - the best way would be very complicated in terms of software support, so we will settle for second best.

    Super pixel mode just means that the R, G and B channel images are made in such a way that the 4 adjacent pixels of each Bayer 2x2 block result in a single pixel in each channel. It uses the one R pixel from the block for the R channel image, the one B pixel for the B channel image, and it averages the 2 green pixels for the G channel image.

    The resulting R, G and B images will have half the resolution of your sensor - in this case 3144 x 2028 instead of 6288 x 4056. It also means that these R, G and B images are no longer sampled at 0.38"/px but at 0.76"/px (which is the actual sampling rate of a colour sensor).
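    In code terms, super pixel debayering is just this (a minimal Python/numpy sketch, assuming an RGGB Bayer pattern - other patterns only change which corner is which):

    ```python
    import numpy as np

    def superpixel_debayer(raw):
        """Each 2x2 Bayer block becomes one pixel per channel, halving resolution."""
        r  = raw[0::2, 0::2].astype(np.float64)
        g1 = raw[0::2, 1::2].astype(np.float64)
        g2 = raw[1::2, 0::2].astype(np.float64)
        b  = raw[1::2, 1::2].astype(np.float64)
        return r, (g1 + g2) / 2.0, b  # average the two green pixels
    ```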

    In DSS there is again an option for that, in the RAW settings:

    image.png.ee874bf937291cf1ca84bcb824b2c87a.png

    Now stack your image and save the result as a FITS file.

    The next step is to bin that image x2 to get your sampling rate to 1.52"/px. For that you will need the ImageJ software (it is free and written in Java, so it runs on almost any OS).

    Open your FITS file (each channel separately, or if it is a multi-channel image it will open as a stack) and run the Image/Transform/Bin menu command. Select 2x2 and the average method. Do this on each channel, or once on the whole stack.

    After that you can save the resulting image as FITS again (or, if it was opened as a stack, use Save As -> Image Sequence, select the FITS format and other options, and it will write individual channel images that you can combine back in the photo editing app of your choice).
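    If you would rather script it than use ImageJ, 2x2 average binning is a few lines of Python/numpy (a minimal sketch, equivalent in spirit to the ImageJ Bin command):

    ```python
    import numpy as np

    def bin2x2(img):
        """Average 2x2 binning; trims one row/column if dimensions are odd."""
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w].astype(np.float64)
        return (img[0::2, 0::2] + img[0::2, 1::2] +
                img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    ```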

    If you are using PixInsight, all of the above options are also available to you (at least I'm sure they are - I don't use it personally).

    By the way, the resulting image will be halved in height and width once more after the bin, so the final resolution will be 1572 x 1014 (or rather closer to 1500 x 1000 once you account for slight cropping due to dither between frames).

    Ah, almost forgot - do dither between subs; that will improve your final SNR quite a bit.

  12. 10 hours ago, Gan said:

    That's a very helpful note. I can understand the thermal noise and the positive effect of cooling. In that case a modded DSLR should be at least as good as an astro CMOS camera that is not cooled. My DSLR is giving me a resolution of 0.38 arcsec/pixel (1x1 binning) on an 8 inch SCT; can I expect significantly better performance with an astro camera (CMOS) like the ZWO ASI294 without cooling? Kindly advise

    That really depends on a comparison of the two sensors and their characteristics.

    What DSLR do you currently have?

    The specifications you want to look at are:

    - read noise of both sensors (lower is better)

    - dark current levels (again, lower is better)

    - amp glow (absence is better - this one is particularly nasty if you don't have set point cooling; there is sort of a way to deal with it, but it might or might not work for a particular sensor - it is called dark frame optimization)

    - QE of each sensor (higher is better)

    In the end you have to see which one will be better matched in terms of resolution once you bin (you will need to bin on an 8" SCT since it has quite a lot of focal length). You would ideally want to target a sampling rate in the range of 1.2-1.5"/px.

    With your current DSLR and a resolution of 0.38"/px, that means super pixel mode (OSC sensors effectively sample twice as coarsely as mono sensors due to the Bayer matrix) plus x2 binning, which will give you 1.52"/px.

    With the ASI294 you will have 0.48"/px, so after super pixel mode that is 0.96"/px - which is OK only if you have rather good guiding and good skies (like 0.5" RMS guiding and good seeing) and your optics are sharp (EdgeHD). If you go for the ASI294 you will want to use a reducer with the SCT.
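    The sampling rates above come straight from pixel size and focal length. A minimal Python sketch (the ~3.7um pixel size for the 80D, 4.63um for the ASI294 and ~2032mm for the 8" SCT at native focal length are assumed values for illustration):

    ```python
    def sampling_rate(pixel_um, focal_length_mm):
        """Sampling rate in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
        return 206.265 * pixel_um / focal_length_mm

    print(sampling_rate(3.7, 2032))      # ~0.38 "/px (80D on an 8" SCT)
    print(sampling_rate(4.63, 2032))     # ~0.47 "/px (ASI294 on an 8" SCT)
    print(sampling_rate(3.7, 2032) * 4)  # ~1.5 "/px after super pixel + 2x2 bin
    ```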

    My personal opinion is that the ASI294 without cooling would not justify the upgrade, unless you are planning to use it for EEVA or similar alongside imaging. If you want a true upgrade then consider extending the budget to the cooled version.

    There are a few additional benefits of an astro camera vs a DSLR - weight being one, and external powering (a USB connection in the case of the non-cooled model) rather than an internal battery, which may help with thermal issues. The drawback is of course the need for a laptop to run the astro camera, along with cabling (at least a USB lead for the non-cooled model, plus a power cord for the cooled one).

  13. Just for those interested, I made an image / diagram / graph (not sure what to call it) that shows the approximate surface brightness of M51 in magnitudes - it might be useful for anyone trying to figure out the exposure time needed to achieve good SNR.

    The data is luminance (LPS P2 filter) and roughly calibrated to give magnitudes (it might be off by a bit - I was not overly scientific about it). It was calibrated on a single mag 12.9 star, with a median filter used to smooth things out.

    Here it is:

    image.png.75c76246e7a46726d65a176b46d4dbfa.png

    Each colour is a single "step", so the cores are at mag 17 or brighter and the background is at mag 27 or fainter. Again, this is a rough guide :D
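    If anyone wants to do a similar calibration on their own data, here is a minimal Python sketch of the idea - setting a zero point from one reference star and converting pixel values to mag/arcsec^2 (the flux and pixel scale values are placeholders, not the actual numbers used here):

    ```python
    import numpy as np

    star_flux = 150000.0   # placeholder: background-subtracted total ADU of the reference star
    star_mag = 12.9        # its catalogue magnitude
    pixel_scale = 1.0      # placeholder: arcsec per pixel

    zero_point = star_mag + 2.5 * np.log10(star_flux)

    def surface_brightness(pixel_adu):
        """Surface brightness in mag/arcsec^2 for a background-subtracted pixel value (> 0)."""
        return zero_point - 2.5 * np.log10(pixel_adu / pixel_scale**2)
    ```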

  14. 5 hours ago, alan potts said:

    I have a few scopes: a Borg 77ED II F4.3, a 70mm ED F6 which has an alignment problem, a 115mm APO F7 with a .79 reducer, and a 190mm M/N F5.26. I guess I could also play with the 180mm Mak and the 12 inch SC; I have mounting gear for them.

    Alan

     

    Well, you have quite a selection to choose from.

    I would personally go for M/N, but 115mm APO is also an option for wide field.

    image.png.6a4a5b549e9659a97699319b48e70d09.png

    You would need 9 panels to cover M31 for example.

    It might seem that taking 9 panels will take too much time compared to a single panel, but in fact you will get almost the same SNR in the same total time as you would using a smaller scope that covers the whole FOV in a single go (provided that the smaller scope is also F/5.25). I'll explain why in a minute.

    The first thing to understand is sampling rate. I've seen that you expressed concerns about going to 2.29"/px. The fact is, when you are after a wide field that is really the only sensible option - to go with a low sampling rate (unless you have very specific optics - fast and sharp; only in that case can you do high resolution wide field). Take for example the scope you were looking at - 73mm of aperture. Its Airy disk is about 3.52 arc seconds across - the aperture alone is not enough to resolve fine detail - and once you add atmosphere and guiding you can't really sample below 2"/px. I mean, you can, but there will be no point.

    Another way to look at it is that you want something like at least 3-4 degrees of FOV. That is 4*60*60 = 14400 arc seconds of FOV in width. Most cameras don't have that many pixels across. The ASI071 is a 4944 x 3284 camera, meaning you have only about 5000 pixels in width. Divide the two and you get the resolution it can achieve on a wide field covering 4 degrees: 14400/5000 = 2.88"/px. So even that camera can't sample any finer if you are after a wide field (not to mention the fact that OSC cameras in reality sample twice as coarsely as mono).
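    Both of these limits are one-liners if you want to check them for other scopes and cameras (a minimal Python sketch; the Airy disk figure depends on the wavelength you assume, here ~500nm):

    ```python
    def airy_disk_arcsec(aperture_mm, wavelength_nm=500.0):
        """Airy disk diameter in arcsec: 2.44 * lambda / D, converted to arcsec."""
        return 2.44 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 206265.0

    def widefield_sampling(fov_deg, pixels_across):
        """Coarsest sampling forced by fitting a given FOV across the sensor width."""
        return fov_deg * 3600.0 / pixels_across

    print(airy_disk_arcsec(73))         # ~3.4-3.5" for a 73mm aperture
    print(widefield_sampling(4, 4944))  # ~2.9 "/px for 4 degrees on the ASI071 width
    ```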

    Don't be afraid of blocky stars - that sort of thing does not happen, and with proper processing you will get a nice image even if you sample at a very low resolution.

    Now a bit about the speed of taking panels vs a single FOV. Take the above M31 and 9 panels example.

    In order to shoot 9 panels you need to spend 1/9 of the time on each panel. That means x9 fewer subs per panel than you would get doing a single FOV with a small scope. This also means that the SNR per panel would be x3 lower than the single FOV if you used the same scope - but you will not be using the same scope. Imagine a small scope capable of covering the same FOV in a single go - it needs a focal length 3 times shorter to do that, so it will be a 333mm FL scope. Now, we said that we need to match the F/ratio of the two scopes, so you are looking at an F/5.25 333mm scope. What sort of aperture will it have? 333/5.25 = ~63.5mm.

    Let's compare the light gathering surfaces of the two scopes - the first is 190mm and the second 63.5mm, and their respective surfaces go as 190^2 : 63.5^2 = ~9. So the large scope gathers 9 times more light, which means it will have x3 better SNR per unit time - and that cancels with the time penalty per panel: you get roughly the same SNR per panel as you would for the whole FOV.

    You end up with the same result doing a mosaic with the larger scope in one night as you would get with a small scope of the same F/ratio covering the whole FOV in one night.
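    The same arithmetic in one small Python sketch (assuming sky-limited SNR that scales with the square root of the collected light):

    ```python
    big_aperture, small_aperture, panels = 190.0, 63.5, 9

    light_ratio = (big_aperture / small_aperture) ** 2  # ~9x more light per unit time
    snr_gain = light_ratio ** 0.5                       # SNR scales as sqrt of the signal
    snr_cost = panels ** 0.5                            # 1/9 of the time per panel costs sqrt(9)

    print(light_ratio, snr_gain / snr_cost)             # ~9 and ~1, i.e. roughly break even
    ```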

    There are some challenges when doing mosaic imaging - you need to point your scope at a particular place and allow a small overlap to be able to stitch the mosaic at the end (capture software like SGP offers a mosaic assistant, and EQMOD also has a small utility program to help you make mosaics). You also need to be able to stitch the mosaic properly - APP can do that automatically I believe; I'm not sure about PI, but there are other options out there as well (even free - there is a plugin for ImageJ). You might have more issues with gradients if shooting under strong LP, because their orientation might not match between panels - but that can be dealt with as well.

    Unless you really want a small scope, you don't need one to get wide FOV shots - you already have the equipment for that, you just need to adopt a certain workflow.

  15. Astro cameras use the same sensors as DSLR cameras - either CMOS or CCD (of course the actual sensors differ a bit depending on camera model, but they are in principle the same).

    It is the other features that distinguish astro cameras from DSLRs.

    Lack of filters - a DSLR has an IR/UV cut filter that needs to be removed/replaced to get the most out of it (so-called astro modding of the DSLR).

    Astro cameras also don't have an anti-aliasing filter on them - some DSLRs do.

    The most significant feature is set point cooling (not all astro cameras have it), which enables precise calibration of your data. Cooling as such is not the most important thing for calibration - it does help with thermal noise, but the ability to always have the sensor at a certain temperature is the key to good calibration, so the main difference is that.

  16. @alan potts

    What scopes do you already have to image with?

    A wider FOV is easily achieved by doing mosaics, so you don't really need to spend money on a new scope if you have one that you are pleased with but which gives a narrower field of view than you would like.

    It is just a matter of proper acquisition and processing of such data, and although people think that doing mosaics is a slower process than going with a wider field scope, it is not necessarily so. If you already have a fast scope (fast as in fast F/ratio), then doing mosaics is going to be only marginally "slower" than using a scope of the same F/ratio capable of a wider field with the same sensor (the difference being only the overlap needed to properly align and stitch the mosaic).

  17. 20 minutes ago, mikemabel said:

    Cheers Vlaiv

    I set the binning in APT in the image plan settings, but I'm not sure how I would do this otherwise.

    I use DSS and Gimp, though I am trying Photoshop now.

     

     

     

    Ah, I see - lack of software support. Maybe the easiest way to do it (although not the best - I think it's better to bin individual subs after calibration and prior to stacking) would be to download the ImageJ software (free / open source, written in Java, so it's available for multiple platforms) and, once you finish stacking the image in DSS, save it as 32-bit FITS (Gimp works with the FITS format).

    Then open it in ImageJ and use the Image / Transform / Bin menu option. Select the average method and x2 or x3 (depending on how much you want to bin) - after that, save as FITS and proceed to process it in Gimp (or Photoshop).

     

  18. It is indeed possible to do it, but I don't think anyone has done it.

    It would involve deconvolution (already implemented in existing tools), with the deconvolution kernel depending on the position in the image.

    Although in principle one knows the level of coma in a Newtonian scope with a parabolic primary, in practice things are not that easy. The level of coma depends on the collimation of the scope and the position of the sensor. If the sensor is slightly shifted with respect to the optical axis (not tilted, but shifted - so that the optical axis does not pass through the exact centre of the sensor), the aberrations will not be symmetric with respect to the sensor.

    One can account for that by examining stars in the image and determining the true optical axis / sensor intersection. One can also generate a coma blur PSF for a given distance from the optical axis (for a perfectly collimated scope), so yes, it can be done.

    The downside is that it is an inherently probabilistic process, because the data you have suffers from noise - you are trying to estimate rather than precisely calculate, because you don't have exact numbers to start with, but rather values polluted by noise. Another difficulty is that it is better to do it on a stack of data rather than a single sub (better SNR), but a stack of subs will have different levels of coma if you dither or otherwise have less than perfect alignment - like slow field rotation / drift / whatever - because that changes the distance of pixels from the optical axis.

    The result will be a corrected image with lower SNR - very similar to what you get when sharpening: a sharper but noisier image.

    That applies to any sort of optical aberration - as long as you have a proper mathematical description of it (like astigmatism depending on distance from the optical axis, or even field curvature), it can be done with deconvolution, at the expense of SNR. It is much easier (and less costly) to correct it with optical elements, whether that is a coma corrector, field flattener or whatever ...

    Just to add - the above method would deal with coma blur both in stars and in extended features, because it operates on a precise mathematical definition of the coma blur rather than the approximations of neural networks such as StarNet++.
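    For anyone curious about the basic machinery, here is a minimal Python sketch of plain Richardson-Lucy deconvolution with a single fixed PSF - the position-dependent (coma) version discussed above would need a different PSF per image region, which is the hard part:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, iterations=30):
        """Plain Richardson-Lucy deconvolution with one fixed PSF (observed and psf >= 0)."""
        estimate = np.full(observed.shape, observed.mean(), dtype=np.float64)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate
    ```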
