
Posts posted by geoflewis

  1. 7 hours ago, vlaiv said:

    I've taken a look at subs you attached, and they look pretty much ok, so I'm certain they are not to blame for zeros that appear in calibrated frame.

    I do have a couple of recommendations and one observation.

    - Use sigma clip when stacking your darks, flats and flat darks. You are using quite a long exposure for all of your calibration frames, so the chance you will pick up a stray cosmic ray (or rather any sort of high-energy particle) hit is high. It shows in your calibration masters. Sigma clip stacking is designed to deal with this. Here is an example in the master flat dark:

    [attached image: cosmic-ray streak in the master flat dark]

    Now if those were hot pixels (like the surrounding single pixels that are white) then it would show in the same place on the master dark, but it does not (the master dark has a few of these, but at different locations / orientations).

    - There are quite a lot of hot pixels that saturate to 100% value - these can't be properly calibrated, so you need some sort of "cosmetic correction" of these in your master calibration files. Such pixels will be removed from the light stack by again using sigma clip stacking, but you need to dither your light subs (and I'm sure you do).
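A minimal sketch of what sigma-clip (kappa-sigma) stacking does, in Python with NumPy. The function name and thresholds are illustrative, not any particular package's API: samples more than a few deviations from the per-pixel median are rejected before averaging, which is how a cosmic-ray streak present in only one sub gets dropped from the master.

```python
import numpy as np

def sigma_clip_stack(frames, sigma=3.0, iters=3):
    """Average a stack of frames, rejecting per-pixel outliers
    (cosmic ray / particle hits) more than `sigma` deviations from
    the per-pixel median before the final mean."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    mask = np.zeros(stack.shape, dtype=bool)   # True = rejected sample
    for _ in range(iters):
        data = np.ma.masked_array(stack, mask)
        center = np.ma.filled(np.ma.median(data, axis=0), np.nan)
        spread = np.ma.filled(data.std(axis=0), np.inf)
        mask |= np.abs(stack - center) > sigma * spread
    return np.ma.filled(np.ma.masked_array(stack, mask).mean(axis=0), 0.0)

# A particle strike in one sub is rejected; a plain average keeps it.
rng = np.random.default_rng(0)
subs = [100.0 + rng.normal(0.0, 1.0, (8, 8)) for _ in range(10)]
subs[3][4, 4] = 60000.0                        # simulated cosmic ray hit
master = sigma_clip_stack(subs)
plain = np.mean(np.stack(subs), axis=0)
```

With a plain average the hit pixel ends up thousands of ADU high; with clipping it lands back near the true level.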

    - I've noticed something that I find strange in your master flat, a sort of linear banding. I'm not sure why it is there, or if it has a significant impact on the image. Probably not - it is probably related to the manufacturing of the sensor, which has slightly different QE in that area for some reason. If you have not noticed this pattern in your final images, then it is all fine. Here is what I'm talking about:

    [attached image: corner of the master flat showing the linear banding]

    Does not look like regular vignetting as it has sort of "straight" edges (and not round) although it is in the corner where you would expect vignetting.

    It is probably nothing to be concerned about - I just noticed as it looks interesting and I have not seen anything like that before.

    Now, on ImageJ - I've already written a bit on how to use it for calibration, so maybe that thread would be a good place to start. I wanted to do a thread describing the full workflow in ImageJ, but it did not draw much interest, so it slipped my mind after a while and I did not post as much as I wanted; I believe it has a part on calibration. Let me see if I can dig it up.

    Have a look at that thread - there is a lot written about calibration, and also a plugin included that will do sigma reject (which you will need for your subs).

    You can try it out to see what sort of results you get, but in the long term do look at specific software for doing it automatically, like APP for example.

    If you have any trouble following tutorial, please let me know and we can go into specific details.

    Thanks Vlaiv,

    Sorry for my slow reply, but being on holiday I was out with my wife much of the day and your analysis is too complex for me to review on my phone, so I'm only just now reading it on my laptop.

    Good points about using sigma clip for my calibration masters - something I never paid any attention to previously, just using my software's default of average. I will certainly make that change in future; indeed I have restacked them that way and now there is no cosmic ray hit showing in the master flatdark. I already use the sigma clip combine method for my calibrated light frames, and yes, I do dither my lights, but I also use an additional filter in the software to correct any residual hot pixels not fixed during calibration, so usually my final stacked lights are good.

    Regarding the banding in the corners of my master flats, yes, it is something I have noticed, but I don't think it shows in the final processed image, not least because that region is usually cropped out. I don't know what causes it. Maybe it is something to do with my LED light panel, which has a slight flicker at the low intensity that I need to shoot flats at my target range of 25k ADU. I have been thinking of increasing the ADU to say 30k to see if that makes any difference. The panel has variable intensity, but for all the LRGB flats I have to use the lowest intensity setting or the camera shutter is captured during the exposure. I could increase the intensity for the Ha flats to reduce the required exposure from the rather high 90 secs, but I just used the same intensity as set for the LRGB filters. I will definitely experiment some more after I get home the week after next.

    Thanks for the link to your notes on ImageJ calibration, but I think this is going to be too complex for me to follow. As you advise, I think the better way forward is to trial something like APP, again something that I can do once I get home.

    In the meantime, I intend to write to the developer of ImagesPlus to ask him about the auto calibration process and why it is black clipping so many pixels. The software does include a manual calibration process, which uses the same calibration frames as auto. The manual calibration set-up option also includes a dark scaling setting with the default set at 1.0. I found that when I set this to, say, 0.75 the number of clipped pixels was dramatically reduced to ~100. Even with the dark scaling factor at 0.9 the number of zero pixels was a few thousand rather than the several hundred thousand at a scaling of 1.0. I really don't understand what this dark scaling factor is, as I can't find any explanation for it in the limited documentation that I have for the software, so I'm going to ask about that too. I wondered whether the setting in the manual calibration set-up might carry through to the auto image set processing calibration, but that seems not to be the case, the dark scaling factor getting reset to 1.0. Do you have any ideas what the dark scaling option might be, Vlaiv? Here is a screen shot of the set-up screen fyi...

    [attached screenshot: ImagesPlus manual calibration set-up]

    Many thanks, Geof


  2. Vlaiv,

    Again many thanks for your analysis. It's reassuring to know that the raw light, raw dark and master dark look ok, so now I have to understand what went wrong with the calibration, but I have no idea how to do that. The astro processing software that I use (ImagesPlus) provides an auto calibration tool where I load lights, darks, flats, flat darks and bias frames (though actually I don't use bias). I then hit the 'process' button and it runs through its steps automatically, with the output being the calibrated, aligned and combined stack ready for post processing. See screenshot below...

    [attached screenshot: ImagesPlus auto image set process]

    Is it possible that my flats and flat darks are the problem? I use an LED light panel for my flats, and with the Ha filter the flat exposures were a fairly long 90 sec, which I matched for the flat darks. Here are a single flat, the master flat (from 20), a single flatdark and the master flatdark (from 20)....

    C14+Optec-Flats(-10C)_2019-09-20_Flat_Ha_90sec_2x2_Ha_frame1.fit

    C14+Optec+QSI(-10C)_20Sep2019_Ha_MasterFlat.fit

    C14+Optec-FlatDarks(-10C)_2019-09-20_Dark_Ha_90sec_2x2_Ha_frame1.fit

    C14+Optec+QSI(-10C)_20Sep2019_Ha_MasterFlatDark.fit

    I am interested in your suggestion to use ImageJ (though I'd never heard of it previously) and your tutorial for it. I just Googled it and it looks frightening to me.....

    I have also been thinking to switch to PixInsight (PI) or Astro Pixel Processor (APP) as I hear more and more that these are the best astro image processing tools, especially APP for calibration and stacking.

    I await your next lesson please 😀.

    Geof

  3. 3 hours ago, ollypenrice said:

    My feeling is that there is nothing 'borderline' about 0.43"PP. It's oversampled and probably considerably so. I'd be amazed if you weren't in 'empty resolution' territory here. We certainly were at 0.6"PP. Assuming your camera bins cleanly (not all of them do) then I'd bin 2x2 for sure and experiment with 3x3. Your guide RMS in arcsecs needs to be no worse than half your image scale in arcsecs per pixel. Is this the case? Then you have the seeing to worry about. I simply look at the FWHM while focusing and if it's bad I only shoot RGB, Lum to be captured on a more stable night.

    I don't worry about saturating stellar cores. I regard it as inevitable if you're going to get enough signal at the faint end. Stars can be rescued later in post processing. Noel's Actions has an 'increase star color' routine which will pull colour down into the cores from the edge. Alternatively you can do a very soft RGB 'stars only' stretch, blur it a little, put it as a layer underneath the RGB hard stretch, and erase the hard stretched stellar cores. (You would do this by making a star selection as per MartinB's tutorial in the processing section here or use Noel again.) Or you could take a short set of very quick RGBs at short exposure and use it as a layer in the same way. As you'd only be keeping the cores you might get away with just one sub of each since you'll be using only the bits close to saturation anyway, meaning no noise problem.

    Olly

    Hi Olly,

    Thanks for your observations. 0.43"/px is precisely why I bin 2x2 with the C14 and as you suggest I have considered 3x3, but never used that other than for plate solving where I actually use 4x4. My guide RMS is typically around 0.5"-0.6" total, with lower values for RA and Dec (typically 0.3"-0.5" each). That is clearly more than half my image scale of 0.85" when binned 2x2, so please could you explain the target of half image scale. As you will have concluded from my discussion with Vlaiv on this thread I really understand very little of the science of this hobby 😖. Up until now I've just guessed, but now I'm trying to better understand what I should be doing and why.

    Thanks for the explanation of how to improve star colours. I'm not familiar with MartinB's tutorial, so will look for that. I just purchased Noel's tools last week and had a play, including with his 'increase star colour', but it didn't seem to have much effect - maybe I was expecting too much. I like the layer approach in PS that you proposed, so I will give that a try, though I'm something of a novice with PS too....

    Many thanks, Geof

  4. 9 hours ago, vlaiv said:

    Let's see if we can figure out what is going on (if you wish of course) - could you please post two more subs:

    - single unprocessed dark sub directly from camera

    - master dark, again as 32bit fits

    Hi Vlaiv,

    I've now checked into my holiday lodge, but there is no wifi here, so I'm using my phone as a hotspot, which may be a little unreliable. I checked my darks and was very surprised to discover that the master dark was only 8 bit, so I've reprocessed the dark stack as 24 bit. Here is a single dark off the camera and the master dark from 20 dark frames.

    QSI-Darks(-10C)_300sec_Bin2x2_frame1.fit

    MasterDark.fit

    I checked and there are no zero values in the single dark, where the minimum seems to be ~400 ADU. The minimum value for the master dark is ~500 ADU.

    I then looked at a single uncalibrated frame and the same frame after calibration.

    M57_300sec_2x2_Ha_frame1.fit

    C_M57_300sec_2x2_Ha_frame1.fit

    The raw file off the camera shows a minimum at ~400 ADU for just 1 or 2 pixels, rising to ~500 ADU by the time 1000+ pixels are counted at that level.

    Checking the same image after calibration shows a huge number of pixels with 0 ADU, so for sure the calibration seems to be making the difference. If calibration simply subtracts the dark frame value per pixel this makes sense, as anything in the raw file with ADU <500 will be driven to 0 if the dark master has a minimum ADU value of ~500.
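The arithmetic above can be sketched with toy pixel values around the floors measured in this post (the numbers are illustrative, not from the actual frames): subtracting a master dark whose floor (~500 ADU) sits above the light's floor (~400 ADU) drives those pixels negative, and a pipeline that clips (or stores unsigned integers) zeroes them.

```python
import numpy as np

# Toy pixels: a light sub with a ~400 ADU minimum and a master dark
# whose minimum is ~500 ADU, as measured above.
light = np.array([400.0, 450.0, 480.0, 520.0, 2000.0])
master_dark = np.array([500.0, 500.0, 500.0, 500.0, 500.0])

# Plain subtraction goes negative wherever light < dark...
diff = light - master_dark

# ...and clipping to the valid unsigned range zeroes those pixels.
calibrated = np.clip(diff, 0.0, None)
```

Every pixel whose raw value sits below the master dark's floor ends up at exactly 0 after calibration, which matches the huge zero count observed.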

    It makes me wonder how a dark frame can have a higher ADU per pixel than a light frame, even when using a Ha filter, but perhaps I'm still not understanding this well enough. Truth to tell, I've never really thought much about this previously, so this is definitely a journey of discovery for me and I very much appreciate your help and look forward to your reply.

    Best regards, Geof

  5. 6 hours ago, kirkster501 said:

    For right or wrong I go for 10 or 15 minutes for Luminance and Ha binned 1x1 with my Atik460 on my TEC and Mesu.  I also always do RGB 2x2 for 5 minutes.  I am considering dropping Ha to 2x2 binning. I have a dark and BIAS library for -10C for these parameters refreshed every three months. 

    On my widefield rig with the KAF8300 sensor - much noisier - I go for 10 minutes Ha and 5 minutes RGB.  I bin everything at 1x1.  I image at -20C on this sensor because of the noise.

    There is no right or wrong answer Geof.  It depends on whether you are under flight paths, what the cloud pattern is like, what resolution you are imaging at and what your guiding is like.  I have only recently had the confidence in my rig to go to 15 minutes on clear nights, 10 mins on nights with more cloud about.  30 mins is not possible where I live in the Midlands, with the LP and East Midlands airport not far away.

    Steve

    Thanks Steve, it is very helpful to have your input. As I found in my discussion with Vlaiv, I was wrong in thinking that more shorter exposures give better SNR than fewer longer exposures. I will go back to 10 min or maybe even 15 min exposures, but I'm also very interested to learn more from Vlaiv about this, so will continue the thread with him by providing the dark frames that he requested.

    Cheers, Geof

  6. Hi Vlaiv,

    I am just leaving home to go on holiday for one week - actually to a local UK star party, so I cannot send these to you immediately, but I will take my laptop with me and see if I can upload them later today. I very much appreciate you helping me diagnose all this and better understand what is going on.

    Regards, Geof

  7. 8 hours ago, vlaiv said:

    Could you do a calibrated frame in 32 bit (with 32bit masters) and post it as fits to run the same measurement again?

    Hi Vlaiv,

    Thanks again, here is a 32bit FIT file - at least I think that it is.

    C_M57_300sec_2x2_Ha_frame1.fit

    I usually perform processing (calibration, align, combine) at 32 bit FIT in Images Plus, then save as 16 bit Tiff for post processing, as the tools that I use, e.g. Registar and Photoshop (CS2), do not read the FIT files.

    Even at 32 bit level I think I'm seeing 16% (~339k) at 0, so what does this mean?

    Regards, Geof

  8. 55 minutes ago, geoflewis said:

    For C14 we have 300s with ADU of 1400. From this good exposure value will be 300s * 4100 / 1400 = ~878 seconds or about 14 and a half minutes of exposure.

    Hi Vlaiv,

    One more time please? I checked the background ADU for a series of Ha images that I captured last night. The exposures were 300 sec and the background ADU is ~50. So if the target ADU is ~4100, then the approx optimum exposure is 300s x 4100 / 50 = 24,600s or 410 min (6.8 hours) - surely ~7 hour exposures cannot be right, so what did I do wrong? Does a background ADU of just 50 at 300s seem likely? I live in a Bortle 4 location with readings that night of 21 SQM, so reasonably dark.

    I've attached the calibrated Tiff for you to check if that is possible please.

    C_M57_300SEC_2X2_HA_FRAME1.tif

    Thanks again,

    Geof

  9. 5 minutes ago, vlaiv said:

    Ok, let's go over specific measurements. I mentioned bin vs not binned in case you use the same telescope for regular (not binned) and you want to calculate exposure time for binned version.

    In above case, you are already shooting binned with C14 and regular with 4" APO, and I guess you intend to continue doing so, therefore no need to multiply things with 4 and do conversion between non binned and binned background levels.

    For the 4" APO, you say you get around 3200 ADU for 600 seconds. We calculated that you want to get to about 4100 ADU background level. This means that a good exposure value for the 4" APO would be 600s * 4100 / 3200 = ~768 seconds, or about 12 minutes and 48 seconds. This is not an optimum solution, but it does tell us that you will get a slightly better result if you shoot 12 minute subs instead of 10 minute subs.

    For C14 we have 300s with ADU of 1400. From this good exposure value will be 300s * 4100 / 1400 = ~878 seconds or about 14 and a half minutes of exposure.

    Don't know why you shot NB image of M13, it is globular cluster and there is no significant signal in emission lines, but we can do the same:

    120s * 4100/800 = 615s or about 10 minutes.

    I'll explain a couple more things. First, when I wrote above about swamping read noise, I used the rather arbitrary figure of x5 for LP noise vs read noise. I've seen this figure used in calculations, and it makes sense to use because of the following:

    Suppose that you have 1 unit of read noise and x5 larger LP noise, or in this case 5 units of LP noise. Total noise will be (according to noise addition, and not including other noise sources): square_root(1^2 + 5^2) = square_root(26) = ~5.1. This shows how much LP noise swamped read noise, as there is almost no difference in noise level of LP and LP+read noise - 5 units vs 5.1 units, very small increase.

    However, like I said, the factor of x5 is arbitrary - which means that the above calculated exposures are not "optimal" or anything like that - they are just a good guideline. If you get 12 minutes and 50 seconds as the result of a calculation, you can use 12 minutes or 13 minutes - whatever suits you (do pick some value that you will use across the scopes, so you don't have to build a large dark library with many different exposures).

    The second thing I wanted to explain is the above calculation of sub duration, put in better terms for easier understanding. It is just a simple ratio when you think about it - let's again use the APO example. You measured the background level of a 600 second exposure to be ~3200 ADU. This means that the background signal is "building up" at 3200/600 ADU every second, or 5.3333 ADU/s.

    Since we've seen that for our coefficient of x5 (which means LP noise needs to be about x5 in magnitude compared to read noise) this means ~4100 ADU. Just to reiterate: the read noise of your camera is ~8.7e, five times that is 43.5e, and we need the LP noise level to be about that number. The LP noise level is equal to the square root of the LP signal, so we need the LP signal to be the square of ~43.5 = ~1892.25e (I rounded it up to 2000 above).

    The last thing we need to do is convert electrons to ADU, and that is what the gain value is used for. Your camera gain is 0.485 e- / ADU, so to get ADUs we divide electrons by the gain factor: ~2000 / 0.485 = ~4123, so I again rounded it to 4100 (you don't need all the rounding, but it was easier for me to write round numbers instead of typing in precise numbers from the calculator).

    Back to our exposure time. We have LP level build up of 5.33333ADU/s. How much time it takes to build up to 4100ADU? Well, that is easy, 4100 / 5.33333 = 768s. Again you don't need to be very precise and do 12 minutes and 48 seconds - either 12 minutes or 13 will do.
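The ratio method above can be condensed into a short sketch (the function name is mine, not from any package; the defaults are the QSI583 figures quoted in the thread). Note that with unrounded numbers the results come out slightly below the rounded ones worked through above, since 43.5^2 is ~1892 e rather than the rounded 2000 e.

```python
def recommended_sub_length(t_measured_s, bg_adu, read_noise_e=8.7,
                           gain_e_per_adu=0.485, swamp_factor=5.0):
    """Scale a measured exposure so background (LP) noise swamps read
    noise by `swamp_factor`, following the ratio method above."""
    target_e = (swamp_factor * read_noise_e) ** 2   # LP signal, electrons
    target_adu = target_e / gain_e_per_adu          # electrons -> ADU
    rate_adu_per_s = bg_adu / t_measured_s          # background build-up
    return target_adu / rate_adu_per_s

# 4" APO: 600 s sub with ~3200 ADU background -> ~732 s unrounded
# (the thread's rounding to 2000 e / ~4100 ADU gives ~768 s)
apo_s = recommended_sub_length(600, 3200)

# C14 binned 2x2: 300 s sub with ~1400 ADU background -> ~836 s
c14_s = recommended_sub_length(300, 1400)
```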

    Makes sense?

    Vlaiv,

    Thank you so much for taking the time to explain this using my example ADU readings. I think that I finally get it, or at least I feel much more comfortable in my understanding. Thanks also for explaining your use of the x5 multiplier, which makes sense. I will continue to re-read this thread (probably many times to get it fixed in my brain), but at last I feel that I have some logical methodology for determining exposure durations for my different rigs and different binning.

    Oh and BTW (by the way) my reference to NB for M13 was not Narrow Band, but Nota Bene (Latin for note well). Of course I do not shoot narrow band for globular clusters 😉.

    Many, many thanks.

    Geof

  10. 1 hour ago, vlaiv said:

    I don't know what software you currently use for processing, but most astro software out there offers this functionality.

    You simply need to select a piece of background, trying to avoid stars and background nebulosity / galaxies, and do statistics on it - more precisely, the average value of the selection. That is all it takes.

    You take that value from your calibrated sub and according to this:

    http://www.astrosurf.com/buil/qsi/comparison.htm

    your camera has:

    0.485 e- / ADU

    and

    8.7e read noise.

    This means that you should expose until you get about ~2000 electrons of background LP signal ( (8.7*5)^2 ), or converted to ADU (the values you read off from the sub) - ~4100.

    If your sub has lower value than this - increase exposure length, if it has higher background value than this - you can lower exposure length.

    In fact you can do this from your old subs - find a calibrated sub prior to x2 bin and measure the background value, then multiply it by 4 (because when you bin you get a x4 higher background, as the values of 4 adjacent pixels add to form a single value). If this value is less than ~4100, increase the exposure length; if higher, reduce it. You can even calculate the needed exposure length with a bit of math. If for example you are using 10 minute subs and you find that the measured value is 1200 ADU (for one of your past unbinned subs), meaning it will be about 4800 when binned, it follows that the proper exposure would be 10 minutes * 4100/4800 = about 8 and a half minutes (~8.54 minutes).

    Hope this makes sense.

    Actually no. Fewer longer subs will always have higher SNR than more shorter subs, for the same total integration time. By swamping read noise as described above, we just make sure the difference is too small to be of any practical significance. At some point the difference becomes too small to matter, but fewer longer subs will still have better SNR than more shorter subs - only the difference will be something like less than 1%.

    If you observe what impacts sub duration, then it will be obvious why some of best imagers use longer exposures.

    - Sub duration depends on read noise - higher read noise, longer sub needs to be

    - Sub duration depends on LP levels - more LP you have, you can get away with shorter exposures.

    Most of the best imagers use CCD sensors (they have been in this long enough to have invested in cameras before CMOS became available / popular), and CCDs tend to have higher read noise than CMOS sensors - often as much as 10e or so. This means that CCDs benefit from longer exposures. Also, most of the best imagers shoot from dark skies (at least they try to), which means that LP levels are low - again promoting longer exposures (you need to wait longer for the LP signal to build up enough for its noise to become significantly larger than the read noise).

    Add to that the fact that you've often seen NB exposure times (just did not pay attention to type of image vs sub length). Narrow band additionally cuts down LP levels - thus requiring longer exposures.

    Add all above together - CCD with high read noise, dark skies and 3nm Ha filter and you can see how it leads to optimum sub length being more than hour.

     

    Vlaiv,

    Many thanks again. I have found a statistics tool in my astro software ImagesPlus, so tried some tests on both binned 2x2 and unbinned 1x1 images. I only use 1x1 when imaging with my 4" APO, whereas binned 2x2 is always with my C14. I'm not sure what significance, if any, these different configurations make as it is the same camera. Here is what I found checking some L subs....

    4" APO unbinned 600 sec sub - ADU = ~3200

    If I understood you correctly I need to multiply that by 4 for comparison with binned 2x2 which gives 12,800 - I must say I'm not understanding this part....??

    C14+Optec lens binned 2x2:

    -300 sec sub - ADU = ~1400

    -120 sec sub - ADU = ~800 (NB for an image of M13 the background ADU was ~600, but if I shoot much longer surely the globular cluster core would become blown out)

    Of course the results vary depending on whether there was a moon in the sky; hence recent subs of M57 had an ADU of ~1800 at 120 sec.

    Based on my understanding of what you are saying, the 600 sec sub binned 1x1 with the 4" APO looks to be too long a duration, which I find very surprising, whereas both the 120s and 300s subs with the C14 binned 2x2 are much too short.

    Please can you explain some more.

    Many thanks,

    Geof

  11. 27 minutes ago, Stub Mandrel said:

    The main thing for me is... my photos from 2015-16 aren't a patch on my ones from 2019, so in one way it could be a never ending project of continual improvement. My M78, for example, is awful!

    I'm finding the same Neil, so have also been updating old crappy versions with better ones, but likely it will be a never ending project.....

  12. 39 minutes ago, vlaiv said:

    Don't worry about exposure time in relation to binning or saturation of stars.

    You should base your exposure length on a few other factors:

    - how big is read noise compared to other noise sources - mainly LP levels. As soon as any other noise source becomes dominant, there is not much point in going longer. You can measure background levels in your exposure (best after calibration, per filter) and then determine if exposure is sufficiently long. You want your LP noise level to be at least 5 times more than read noise (so square root of background signal should be about 5 times as large as read noise - do pay attention to convert from ADU to electrons).

    - In contrast to the above, you want more exposures, so each should be as short as possible. This is a consequence of several things. Most algorithms work better when they have a lot of data (statistical analysis fares better), so you want a larger number of subs in your stack. Shorter exposure also means less imaging time lost if something bad happens - like an airplane passing, a wind gust, a cable snag, an earthquake (yes, this is a real thing; at that focal length you will record even small tremors).

    So balance the two above - go long enough to defeat read noise, but don't over do it as you will benefit from larger number of subs.

    As for saturation, this is easily dealt with by using just a couple of very short subs at the end of the session (or per filter) that you will use only to "fill in" star cores. With star cores, or any part of image that saturates - signal is already very strong (otherwise it would not saturate sensor). This means that SNR is already high and you don't need many subs stacked to get good result. Just a few of 10-15s exposures will deal with this. After you stack - just select brightest stars and "paste" same region from scaled (you need to multiply linear values with ratio of exposure lengths) short stack.
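The "fill in" step described above can be sketched as follows, assuming linear (unstretched), aligned stacks. The function name and the saturation threshold are illustrative, not any package's API: saturated pixels in the long stack are replaced by short-stack values multiplied by the ratio of exposure lengths.

```python
import numpy as np

def fill_saturated(long_stack, short_stack, t_long_s, t_short_s,
                   saturation_adu=65535.0):
    """Replace saturated pixels in the long stack with short-stack
    values scaled by the exposure ratio (linear data assumed)."""
    out = np.asarray(long_stack, dtype=np.float64).copy()
    saturated = out >= saturation_adu
    out[saturated] = np.asarray(short_stack)[saturated] * (t_long_s / t_short_s)
    return out

# A star core clipped at 65535 in a 300 s stack reads 10000 ADU in a
# 15 s stack; scaled by 300/15 = 20 it is restored to a linear 200000.
long_s = np.array([[65535.0, 1200.0]])
short_s = np.array([[10000.0, 60.0]])
fixed = fill_saturated(long_s, short_s, 300.0, 15.0)
```

Unsaturated pixels are left untouched, so only the clipped cores pick up the scaled short-exposure data.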

    In principle, luminance does not need "fill in" subs, as star cores end up saturated after stretch anyway. You do need color fill in though - you need to have exact RGB ratio to preserve color.

    Thanks Vlaiv,

    I understand the advice about LP noise v read noise, but I have no idea how to measure them, nor how to convert ADU to electrons - actually I don't even understand what that means. What software do I need for that?

    I also understand that more shorter exposures are better than fewer longer exposures for improved S/N, provided each exposure is sufficient to dominate read noise, so why do many of the best imagers I see shoot exposures lasting 10, 15, even 30 minutes? I've never been able to get my head around that.

    I also understand that short exposures can be used to restore star colours, but I've never been much good at that, so I guess I don't know the correct processing steps. One of the problems of being self taught, I guess....

    I sure wish that I understood all this much better than I do currently.... 😖

    Cheers, Geof

  13. 58 minutes ago, dph1nm said:

    Although you get 4x the signal (per binned pixel), the noise also goes up by the square root of this. So your S/N per binned pixel improves by 2x, not 4x. I suspect this is where some of the confusion comes from. Of course, the total number of photons you collect from a given object has not changed - only exposure time can alter that.

    NigelM
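Nigel's arithmetic can be checked with a toy shot-noise-limited model (read noise ignored for simplicity; the numbers are illustrative):

```python
import math

# Per-pixel signal of S photons; shot-noise-limited SNR = S / sqrt(S).
S = 400.0
snr_1x1 = S / math.sqrt(S)              # sqrt(400) = 20

# 2x2 binning sums 4 pixels: signal x4, shot noise x sqrt(4) = x2.
snr_2x2 = (4 * S) / math.sqrt(4 * S)    # sqrt(1600) = 40

ratio = snr_2x2 / snr_1x1               # SNR improves x2, not x4
```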

    Thanks Nigel,

    I'm still trying to get my head around this, but it's gradually seeping in - I hope....

    Cheers, Geof

  14. 34 minutes ago, Mark at Beaufort said:

    That is a great project Neil and @geoflewis. Up until last month I had only observed M6 and M7 from Spain. I was on holiday on Hayling Island and easily saw these Messier objects in my 12x70 binos. So you can see them without leaving the UK.

    The most difficult objects I found were M69 and M70, so in hindsight I wish I had taken my Heritage 130P last month.

    Hi Mark,

    Yes, I believe that they are all visible from the UK provided you have a clear southerly horizon. I saw them using my 15x70 binoculars from my previous home in Surrey, but the extra degree or so further north here in Norfolk takes them really low; M6 and M7 being ~2-3 degrees altitude at best. Unfortunately I have an obstruction up to about 10 degrees due south and whilst I can see down to about 4 degrees SE and SW my observatory wall prevents my scope seeing below about 8 degrees, which isn't usually an issue, but is for these few very low Messier targets. It's good to have challenges to solve though - life shouldn't be easy and this hobby certainly isn't....

    Cheers, Geof

    Snap..!! It's a great project Neil and one that I've been working on for a few years. The low southern targets (particularly M6, M7, M54, M55, M69, M70) are a real challenge to image from my location, so whilst I believe I have seen most of them visually, I will have to travel south to image them. [attached image: Messier progress chart]

    I'm currently at 88/110, having ticked off a few globs and open clusters this summer when in previous years I was dedicated to planetary imaging.  I will try for more of the Coma/Virgo/Leo cluster galaxies next Spring, but it's difficult to get more than a few each season, due to UK weather and/or moonlight, so I suspect that it will be a few years yet before I have to make specific travel plans for those horizon hugging ones....

    Good luck, Geof

  16. Hi experts,

    I know that there's been a lot of discussion around binning on this forum, as I've read quite a bit of it this afternoon and previously, but the more I read the more confused I become.

    I've used the 'Resources' page to check CCD suitability for my C14 plus x0.67 Optec lens when used with my QSI583 camera. The unbinned (or binned 1x1) resolution for this equipment combination is 0.43"/px, which even in good seeing is borderline oversampled and in more typical UK seeing is definitely oversampled, so I typically capture all data at 2x2 binning, which gets me into the green sampling range at 0.85"/px.

    The question I now have is by how much should I vary my exposure times? Simple maths suggests that since I'm using 4 physical camera pixels in 1 large (binned 2x2) pixel, I'm capturing at x4, but I read that practical experience puts the ratio at more like x1.6 to x2. So, if I would typically use 10 min (600 sec) exposures imaging unbinned, would that be equivalent to, say, 5 min exposures when binning 2x2? I see many exposure durations far in excess of that being used by some folks, e.g. 900s, 1200s, 1800s, albeit not imaging with a C14, but I'm seeing fully saturated stars with the C14 at even 300s binned 2x2, and for RGB images saturation of brighter stars occurs after even 120s. I want to shoot as long exposures as possible without saturating stars, as there is a significant overhead in download times, storage and processing with many shorter exposures. I know I can experiment, but building a darks library for 10m, 15m, etc. exposures is not a 5 minute exercise, so I don't want to do that if there is no point.

    I'm hoping that my question makes sense, or if not someone can put me on the right track please.

    Cheers, Geof

  17. This image was something of an afterthought as my intention initially was a single test shot before packing up at about 1:30am, but I liked what I saw so kept going for about 2 hours until the clouds got me.

    The final image below comprises 1.5 hours of data (18x300sec Bin2) captured with my QSI583wsg-5 and Astronomik 6nm Ha filter, through my C14 with the Optec telecompressor lens in train.

    [attached image: Melotte 15 in Ha]

    I've just purchased Noel's actions to help with my very limited PS processing experience, so I was definitely fumbling about in the dark. Calibration, grading, alignment, stacking and initial post processing were done in Images Plus, then into PS for final tweaks.

    I have very little experience processing mono images, so all critique is very welcome.

    Cheers, Geof

     
