Posts posted by jimjam11

  1. 12 hours ago, malc-c said:

    Guys,

     

    It's been four years since I last used my 200P/HEQ5 combo, but with all this lockdown I needed something to stem the boredom, so over the past four / five days I've stripped down and overhauled the scope and took advantage of the clear(ish) evenings to set things up.  The optics were cleaned and re-collimated and the star test at both extremes of focus produced nice circular images with clear evenly spaced concentric rings.

    The ST80 was removed and the 9 x 50 finder converted to a finder/guider using the QHY5.  Over the past couple of nights I've spent time checking focus, and aligning the finder to the main scope and the Celestron star pointer so that all three have the target central in the field of view.  The mount was polar aligned to 10 arc seconds using SharpCap's tool.  That left two things to check: gathering some guide data, and generating a PE curve for EQMOD to use in its correction.

    So having tweaked EQMOD's pulse guiding settings and a few PHD2 settings, the calibration ran fine with about 18 steps in either direction E/W and 10 steps N/S - it didn't complain and PHD2 started guiding.  I was pointing at NGC2903 (darks are still running so it will be a while before I get to play with processing), so the weight bar was out fairly level, with a declination of 21.4 degrees.  I've attached the resulting log file trace and would like some opinions on whether this is good enough for guiding.  The RMS values of 1.24 arc seconds for RA and 0.59 arc seconds for DEC sound OK, but the graph doesn't look as smooth as some I used to get when using the ST80.

    One last question.  There is obviously an underlying sine wave on the RA trace, which could suggest the mount needs PE training.  To do that, I guess I should fit the camera to the main scope and log the guiding trace through it to get better accuracy?

    [attached: airy rings.png, guide2.png, PA.png]

    What imaging camera are you using?

    Your RA RMS is approx 2x your Dec, so that could lead to non-round stars. Your PA might actually be too good?

  2. 18 minutes ago, masjstovel said:

    Not wanting to hijack the thread, but how did you PEC train the EQ6? If you could explain it like I was 5 years old I would really appreciate it - it's a mess in my head now between PHD2, PECPrep and EQMOD: where do I do what, when and how, sort of. If @malc-c doesn't want it here on the thread, I would love it on PM :)

    EQMOD has an AutoPEC feature. Get everything set up and start guiding on a target, then start AutoPEC and it will do everything for you.

     

  3. 1 hour ago, alacant said:

    Hi

    If you guide RA using PHD2's PPEC with an initial period of, say, 120s, it should flatten the sine after a few minutes.

    Before the days of the PPEC algorithm, I wasted a lot of time trying to PEC train my EQ6 ;)

    HTH

    Likewise, I haven't found EQMOD PEC helpful versus straight guiding with the PPEC algorithm.

  4. Clear sky is a rare beast, so I have spent a decent amount of time studying my imaging efficiency and trying to improve it. I want to get as much acquisition time as possible per clear dark hour, but I wonder if I am alone in pursuing this?

    Products like SGPro make automation relatively easy, but in my experience this automation can easily lead to poor efficiency, especially if you are using a CMOS camera, which works best with shorter exposures.

    When I first got my ASI1600 and used it with SGPro I measured my interframe delay at approximately 16s. Combined with 30s L subs that equated to wastage of circa 50%, meaning each hour would yield only around 30 mins of data. With 60s RGB subs it was slightly better, but still wasted around 27%.


    Studying the SGPro logs I noted the following:

    1. I had image history turned on; this added approx 5s per frame and SGPro did not perform this analysis asynchronously.
    2. Despite my CEM60 only having a USB2 hub, image download was taking 2s.
    3. I was using an older i5 laptop for image acquisition. This was taking approx 12s to download, save and start the next exposure (it was not a slow laptop by any means and had plenty of RAM and an SSD).

    I made the following changes to try and improve my efficiency:

    1. I upgraded my acquisition laptop to a decent (and brand new) i7 - this dramatically cut the interframe delay from 12s to 3s (without image history).
    2. SGPro changed the image-history analysis; it is now an async process. NINA was always async and considerably faster.
    3. I switched to using 60s gain 0 L subs. I still use 60s RGB subs @ gain 76.
    4. I measured image download as 2s through my CEM60's USB2 hub, 1s without.

     

    This gets me to approx 48-50 mins of acquisition per hour in broadband, but there are still some things I am looking at:

    1. I normally interleave my frames (LLRGB-dither) to minimise dithers and balance out any sky changes. However, this costs time (approx 2-3s per filter change), so I may stop doing it to save the additional 2-3s. Dithers are pretty fast, especially if they are only performed every few frames (approx 5 mins' worth), but the filter changes are costly.
    2. I use autofocus, typically triggered hourly or every 2°C. Looking at my FWHM measurements I think 2°C is correct, but the time interval can probably be stretched to 2 hours (maybe more). An autofocus run typically takes a few minutes.

     

    Has anybody else looked into this? If so, what kind of things have you changed to make improvements?

    N.B. For narrowband I use 5 min subs, so typically get 50-55 mins' worth of acquisition per hour.
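
    To make the trade-offs concrete, here is a rough sketch (in Python) of the per-hour accounting described above. The sub lengths and interframe delays are the figures quoted in this post; the dither and autofocus costs are illustrative assumptions, not measurements.

    # Rough acquisition-efficiency estimate per clear hour (a sketch; sub length and
    # interframe delay are taken from the post, dither/autofocus costs are assumed).
    def acquisition_per_hour(sub_s=60, overhead_s=3, frames_per_dither=5,
                             dither_s=10, autofocus_s=180, autofocus_per_hour=1):
        budget_s = 3600 - autofocus_s * autofocus_per_hour
        exposed_s = frames = 0
        while budget_s >= sub_s + overhead_s:
            budget_s -= sub_s + overhead_s
            exposed_s += sub_s
            frames += 1
            if frames % frames_per_dither == 0:
                budget_s -= dither_s
        return exposed_s / 60.0  # minutes of real exposure per clear hour

    print(acquisition_per_hour(60, 3))    # ~52 min with a 3 s interframe delay
    print(acquisition_per_hour(30, 16))   # ~35 min with 30 s subs and a 16 s delay

    Real sessions lose a little more to filter changes, plate solves and meridian flips, which this sketch ignores.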

  5. I have never used a polarscope with the PoleMaster. The PoleMaster has a wide field of view, so you only need to be pointing vaguely north for it to work. In practice I try to get Polaris somewhere near the middle of the screen before starting, otherwise you can easily run out of azimuth adjustment.

  6. 6 hours ago, Adam1234 said:

    If it is slightly tilted relative to the RA axis though, would it not calculate the centre of rotation incorrectly? Or perhaps, given the amount it is tilted in my case, the error would be so small it's irrelevant?

    As an example, if for some really strange reason one decided to attach the Polemaster on its side then obviously the centre of rotation would clearly be wrong (it probably wouldn't even be able to calculate it).

    Just trying to determine why I'm finding it hard to get the star to stay on the green circle. It seems that after I have done the 2nd rotation, the star is just outside the circle, and is still off the circle by approximately the same amount by the time I rotate back to the home position.

    Have you run the PHD2 Guiding Assistant for a few minutes to actually measure your PA error after aligning with the PoleMaster? You might be trying to fix something which doesn't need fixing...

  7. On 09/12/2018 at 16:36, Stub Mandrel said:

    Interesting link.

    With the SW CC and 130P-DS (f4.7) my curvature map is almost identical to the ES one in that link, except my curvature is only 12% rather than 26% and the corner stars are as round as a round thing. Strange that many people prefer the Baader?


    Could it be that the 'economy' Skywatcher CC is the best of the bunch? Does suggest it might do a good job with the 200P-DS?

    Not sure how big the weight difference is between the 200P-DS and the 150PL, but the 1200mm 150PL guides well on an HEQ5.

    I get similarly good results with the 200PDS and Skywatcher CC (two example images attached).

     

    I am never sure whether to measure the curvature from a single frame or from a stack of a few to minimise temporary effects such as seeing?

  8. 15 hours ago, Erling G-P said:

    How do you get DSS to produce those graphs - or have you copied the FWHM values to another program to make them?

    Very strange problem for the OP.  Having a Nikon 18-200mm zoom lens which won't hold its zoom level if pointed up or down (it 'falls' in or out by itself), my initial thought was to suggest some sort of zoom or focus creep, but if the zoom level stays the same and attempts to refocus don't work, that can't be it.

    I used PixInsight for that, but looking in DSS it does have a FWHM column, so I assume it is measured as part of the analysis (albeit in pixels rather than arcsec).

     

  9. Have you tried using the Weighted Batch Preprocessing script? You dump your files into the appropriate windows, tell it you are working with CFA images, disable dark optimisation and it should do everything for you.

     

    Edit:

    The file you posted is pretty good:

    [processed version attached]

    It had a strong colour cast, probably as a result of the filter, but this can be reduced with ABE, Photometric Colour Calibration and SCNR.

    The workflow I followed is based on the Warren Keller PixInsight book.

  10. 14 hours ago, ollypenrice said:

    You can certainly have too much mount. If lugging it about puts you off you won't want to do it.  The EQ6, in my view, is significantly less portable than the HEQ5 and it is no more accurate - though it has a bigger payload. I image at about 0.9"PP with a premium mount at a premium site and the fact is that it is by no means always possible to beat the seeing, even so.

    Olly

    Completely agree. I originally had an AVX which I could lift intact from house to garden. I sold it after getting my CEM60 but really regretted that, because the CEM60 cannot be lifted intact. The CEM60 is a dream to use and guides at under 0.5" RMS if the sky allows, but it isn't worth the effort unless I know I am going to get more than 4 hours of clear sky.

    I have therefore bought an HEQ5 which I can lift intact with a widefield setup and which doesn't require anything more than a power tank. I can easily use this when the weather doesn't allow a full night's imaging. In practice I have also found it faster to set up and far more usable than the Star Adventurer.

    Buy something you can use opportunistically and keep set up.

  11. The CN thread is a superb resource for the ASI1600. I use the same length subs but vary the gain between L and RGB. I normally use gain 0 for L and gain 76 for RGB.

    @ f4.5 that leads me to 60s exposures.

    @ f5.9 I use 120s.

    @ f9 I use 300s.

    Something else to consider is your efficiency: lots of short subs with a long interframe delay can massively cut into your acquisition time. If you use a slow computer with USB2 it could easily be more than 10s per frame, which is a huge percentage loss. Try to get this delay below 2s per frame.
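
    As a rough illustration (my own arithmetic, not something stated in the post or the CN thread): the sub lengths above roughly track the square of the f-ratio, and the delay loss is simply the delay divided by delay plus sub length.

    # Sketch: exposure scaling with f-ratio (t ~ f^2 for the same sky signal per
    # pixel) and the fraction of each hour lost to the interframe delay.
    def scaled_exposure(base_exp_s, base_fratio, new_fratio):
        return base_exp_s * (new_fratio / base_fratio) ** 2

    def delay_loss(sub_s, delay_s):
        return delay_s / (sub_s + delay_s)

    print(scaled_exposure(60, 4.5, 5.9))   # ~103 s (rounded up to 120 s above)
    print(scaled_exposure(60, 4.5, 9.0))   # 240 s (300 s above)
    print(f"{delay_loss(60, 10):.0%}")     # 14% of the time lost at a 10 s delay
    print(f"{delay_loss(60, 2):.0%}")      # 3% lost at a 2 s delay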

  12. On 11/12/2019 at 13:28, cjdawson said:

    With my ASI Air, I simply turn off the "mount", which is only ever set to "on-camera".  Then I take a preview and hit the plate solve button.  Once it's completed, I can plug the RA and Dec into SkySafari and hey presto, I know where the scope is pointing.  I'm currently thinking about trying a hack by setting the mount to the demo mount and seeing if I can connect SkySafari to the ASI Air and have it auto download the coordinates.  I would love for the on-camera mount of the ASI Air to be able to keep track of the RA and Dec from the last plate solve.  That would make life easier - of course, it shouldn't then be given to anything other than SkySafari (that could be confusing).

    I do something similar. I tell SGP my telescope is the ASCOM telescope simulator and connect. I then tell SGP to slew to my target, followed by a solve and sync. This syncs the simulated ASCOM telescope to the actual pointing position. If you want visualisation beyond the coordinates you can hook Cartes du Ciel up to the same simulator and they work together...
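
    For anyone scripting the same idea outside SGP, here is a minimal sketch via ASCOM. It assumes Windows with the ASCOM Platform and pywin32 installed; the simulator ProgID and the solved coordinates here are just examples, not anything SGP itself exposes.

    # Sketch: push a plate-solved position into the ASCOM telescope simulator so a
    # planetarium program connected to the same simulator shows where you point.
    import win32com.client

    solved_ra_hours = 9.534   # hypothetical plate-solve result (hours)
    solved_dec_degs = 21.50   # hypothetical plate-solve result (degrees)

    scope = win32com.client.Dispatch("ASCOM.Simulator.Telescope")
    scope.Connected = True
    scope.Tracking = True     # most drivers require tracking before a sync
    scope.SyncToCoordinates(solved_ra_hours, solved_dec_degs)
    print(scope.RightAscension, scope.Declination)  # now matches the solved spot
    scope.Connected = False

    This is essentially what the SGP/Cartes du Ciel pairing described above relies on: both programs talk to the same simulated scope.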

  13. 55 minutes ago, ollypenrice said:

    If the infallible fully automated mosaic software exists I have yet to meet it.  Personally I make mosaics by removing gradients in the panels using PixInsight's DBE, then I build an initial linear mosaic using Registar but keep the Registar-adjusted individual panels to use as patches in Photoshop if necessary.

    1) Edge crop the linear panels and give them to Registar to build an initial mosaic. Save.

    2) Save all the registered/calibrated panels as well for use as 'patches' if necessary.

    3) Photoshop or similar: give the Registar mosaic an initial stretch, not necessarily going all the way but far enough to show edge defects, while recording this stretch as an Action.

    4) Identify an edge defect and open the 'patch' panel from Registar which will cover it. Apply the Action stretch to it. It should now be nearly identical to the area you want to patch. Slide it into place over the mosaic as a Layer, adjust it slightly in Levels if necessary, and simply use a feathered eraser to remove everything but the part covering the edge defect.

    5) Flatten and continue to the final stretch.

    Olly

    I follow the David Ault workflow for PI and it works, but panel matching can be torture:

    http://trappedphotons.com/blog/?p=994

  14. 7 minutes ago, vlaiv said:

    Above is "already binned" although I was probably not clear or forgot to mention it explicitly.

    If you need three panels in width to cover the target (because the focal length of the larger scope is three times that of the smaller scope), then to get the same resulting image in terms of pixel count and sampling rate (same FOV, same number of pixels in width and height and therefore the same sampling rate - that is what we would consider the same image) you need to bin x3, which raises SNR x3.

    In my above example I mentioned that 1/9 of exposure time is compensated by x9 light gathering surface - but that is only if one makes sure that sampling rate is the same.

    If, on the other hand, you do not compensate for sampling rate/resolution then the "standard rule" applies - two scopes of the same F/ratio have the same "speed". The small scope will gather the whole field while the larger scope will gather only 1/9 of the field in the same time, so there is a clear benefit to using the smaller scope for wide field: with the large scope and a mosaic you will end up with a very large image in terms of megapixels, but it is going to take x9 more time to get the same SNR (or you will have x3 lower SNR in the same time - something you can recover by binning x3 and equating the sampling rate).

    Thanks, that makes much more sense now...

  15. On 06/12/2019 at 10:20, vlaiv said:

    Well, you have quite a selection to choose from.

    I would personally go for M/N, but 115mm APO is also an option for wide field.


    You would need 9 panels to cover M31 for example.

    It would seem that taking 9 panels will take too much time compared to a single panel, but in fact you will get almost the same SNR in the same time as a smaller scope that covers the whole FOV in a single go (provided that the smaller scope is also F/5.25). I'll explain why in a minute.

    The first thing to understand is sampling rate. I've seen that you expressed concerns about going to 2.29"/px. The fact is that when you are after a wide field, a low sampling rate is really the only sensible option (unless you have very specific optics - fast and sharp - and only in that case can you do high-resolution wide field). Take for example the scope you were looking at - 73mm aperture. It will have an Airy disk of 3.52 arc seconds - the aperture alone is not enough to resolve fine detail - add atmosphere and guiding and you can't really sample below 2"/px. I mean, you can, but there will be no point.

    Another way to look at it is that you want something like at least 3-4 degrees of FOV. That is 4*60*60 = 14400 arc seconds of FOV in width. Most cameras don't have that many pixels across. The ASI071 is a 4944 x 3284 camera, meaning you have only about 5000 pixels in width. Divide the two and you get the resolution it can achieve on a wide field covering 4 degrees - 14400/5000 = 2.88"/px. So even that camera can't sample any finer if you are after a wide field (not to mention the fact that OSC cameras in reality sample at half the rate of mono).

    Don't be afraid of blocky stars - that sort of thing does not happen, and with proper processing you will still get a nice image even if you sample at very low resolution.

    Now a bit about the speed of taking panels vs single FOV. Take for example above M31 and 9 panels example.

    In order to shoot 9 panels you will need to spend 1/9 of the time on each panel. That means x9 fewer subs per panel than you would get doing a single FOV with the small scope. This also means that the SNR per panel will be x3 lower than the single FOV if you use the same scope - but you will not be using the same scope. Imagine that you are using a small scope capable of covering the same FOV in a single shot - it needs to have 3 times shorter focal length to do that. So it will be a 333mm FL scope. Now, we said that we need to match the F/ratio of the two scopes, so you are looking at an F/5.25 333mm scope. What sort of aperture will it have? It will be a 333/5.25 = ~63.5mm scope.

    Let's compare the light gathering surface of the two scopes - the first is 190mm and the second is 63.5mm, and their respective surfaces are in the ratio 190^2 : 63.5^2 = ~9. So the large scope gathers 9 times more light, which means it will have x3 better SNR - and that cancels with the 1/9 time spent on each panel, so you get roughly the same SNR per panel as you would for the whole FOV.

    You end up with same result with larger scope and doing mosaic in one night as you would with small scope of the same F/ratio that covers same FOV in one night.

    There are some challenges when doing mosaic imaging - you need to point your scope at a particular place and allow a small overlap to be able to stitch the mosaic at the end (capture software like SGP offers a mosaic assistant, and EQMOD also has a small utility program to help you make mosaics). You need to be able to stitch the mosaic properly - APP can do that automatically I believe, not sure about PI, but there are other options out there as well (even free ones - there is a plugin for ImageJ). You might have more issues with gradients if shooting in strong LP because their orientation might not match between panels - but that can be dealt with as well.

    Unless you really want a small scope, you don't need one to get wide-field shots - you already have the equipment for that, you just need to adopt a certain workflow.

    The larger scope data could then be binned/resampled, which would further increase SNR?

     

    Having said that, my experience with mosaicing is that matching broadband panels can be very difficult when light pollution and/or the moon is in play. Narrowband mosaics seem relatively easy because the background levels are more easily matched.

    SGP has a feature request to allow interlacing of mosaic tiles in an attempt to improve tile matching.
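
    To put rough numbers on the sampling and SNR arguments in the quote (a sketch; the pixel count is the ASI071's and the apertures are the ones quoted above):

    # Sketch of the arithmetic from the quoted post: image scale for a 4-degree
    # field, and relative SNR of a 190 mm scope shooting a 3x3 mosaic versus a
    # 63.5 mm scope of the same f-ratio covering the field in a single frame.
    import math

    fov_arcsec = 4 * 3600                  # 4-degree wide field, in arcseconds
    pixels_across = 4944                   # ASI071 width in pixels
    print(fov_arcsec / pixels_across)      # ~2.91 "/px, the ~2.9 "/px quoted

    light_ratio = (190.0 / 63.5) ** 2      # relative light-gathering area
    print(light_ratio)                     # ~9x more light per panel

    # 1/9 of the time per panel cancels the ~9x light gathering, so per-panel SNR
    # roughly matches the small scope's single frame; binning the finished mosaic
    # 3x3 afterwards would then raise SNR about 3x again.
    print(math.sqrt(light_ratio / 9))      # ~1.0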

  16. 37 minutes ago, happy-kat said:

    Cheap quick option: a hand warmer rubber-banded to the lens. I haven't needed to do this yet, but it will be the quick approach I'll take after reading about the idea on here.

    I have also tried this and it works well, but a proper lens heater is now a lot less hassle (and probably cheaper). I used this approach until the coowoo band included a controller; I have never needed anything other than the lowest setting, and my 21Ah phone charger seems to run it for many hours without issue.
