Posts posted by BrendanC

  1. My last point was really referring to how I'd expect APP to have yielded better results generally, rather than just giving me better results with Ha. I wasn't really referring to colour management.

    I was just surprised that APP, despite taking much, much, much longer to produce a result, and being a pay-for product, didn't produce results that were much different from DSS in any respect that I could see. I know the maths is essentially the same. It just surprised me, is all, and I was looking for any recommendations that might help me get a better result.

  2. Hi all,

    I'm trialling APP for the second time, as a stacker before processing in StarTools. First time around I didn't know what I was doing, this time a year later I think I'm more clued up - but I'm still not quite getting the results I'd expect when compared to the free Deep Sky Stacker, and I'm wondering whether I'm doing something wrong here.

    I've been testing them against each other, trying to process the results as closely as possible in StarTools, and honestly I'm not seeing much of a difference. My hardware is a modded EOS1000D and a Sky-Watcher 130PDS on an NEQ6. I capture in APT, then stack in DSS using the Auto Adaptive mode, and in APP using the recommendations at https://www.startools.org/links--tutorials/starting-with-a-good-dataset/recommended-app-settings. I then take the linear output and post-process in StarTools, without any denoise, just so I can really see what's going on. I use 25 flats, 25 dark flats and 50 darks each time.

    Here are some examples:

    [Image: Bubble Nebula, APP stack, no denoise]

    [Image: Bubble Nebula, DSS stack, no denoise]

    [Image: Triangulum, APP stack, no denoise]

    Now, I'm probably still going to invest in APP because I've just started doing some Ha work and it beats DSS hands-down for that. But, I would kind of like APP to do much better at RGB work than DSS too, especially given the price, and given the amount of time it takes to process.

    Is there anything I might be doing that is really badly wrong here, or some button or switch or tick box I should be looking at? Or is this really just a reflection of the limitation of my gear/technique?

    Thanks, Brendan

  3. And my second Ha image! The Pacman nebula.

    * 8:45 hours of H-Alpha from 105x300s subs
    * 2:49 hours of RGB at ISO800 from top 90% of 47x240s subs
    * Bortle 4, Moon 58% phase, 61° height
    * 25 flats, 25 dark flats, 50 darks
    * Sky-Watcher 130PDS with primary baffle, NEQ6 with Rowan belt, EOS1000D minus IR filter, 7nm H-Alpha filter, 0.9x coma corrector, APT, PHD2, APP, StarTools, Photoshop, Topaz DeNoise AI

    [Image: Pacman Nebula in H-alpha]

  4. My first foray into narrowband - the Wizard Nebula, combining Ha with RGB. I'm also trying out APP which, while not a substantial improvement on DSS in initial tests, handled the H-alpha much better. So, that's going to be my next investment.

    * 3:50 hours of H-Alpha from 46x300s subs
    * 6:30 hours of RGB at ISO800 from 78x300s subs
    * Bortle 4, Moon 16% phase, 45° height
    * 25 flats, 25 dark flats, 50 darks
    * Sky-Watcher 130PDS with primary baffle, NEQ6 with Rowan belt, EOS1000D minus IR filter, 7nm H-Alpha filter, 0.9x coma corrector, APT, PHD2, APP, StarTools, Topaz DeNoise AI

     

    [Image: Wizard Nebula, Ha blended with RGB]

  5. I did sound out the DSS forum, and one person tried to replicate the issue and confirmed it. It's a bit weird; you'd have thought that just using the same flats in different groups wouldn't be a problem.

    The only way around it I found was to duplicate the master flat for each group. You can't keep the flats in the main group; only your dark flats and/or bias can go there to work across all groups, because temperature isn't important for them. Then each subsequent group needs its own copy of the master flat so that it's used properly, alongside your darks for that temperature.

    It's slightly inconvenient because you're duplicating files, but you only need to duplicate the master flat, not the subs. You can even leave a text file in the folder as a note to remind yourself that's what you've done, and why.
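    A sketch of what that folder layout might look like (all folder and file names here are hypothetical, just to illustrate the duplication idea; DSS itself doesn't care how the folders are arranged):

```python
# Hypothetical layout for the workaround above: the main group holds only
# temperature-independent calibration (dark flats/bias), and each
# temperature group gets its own copy of the master flat plus a note-to-self.
from pathlib import Path
import shutil

root = Path("dss_example")
(root / "main").mkdir(parents=True, exist_ok=True)
(root / "main" / "master_darkflat.tif").touch()   # shared across all groups

master_flat = root / "master_flat.tif"            # the one master flat
master_flat.touch()

for group in ["group_10C", "group_15C"]:
    gdir = root / group
    gdir.mkdir(exist_ok=True)
    shutil.copy(master_flat, gdir / "master_flat.tif")   # duplicated copy
    # leave a reminder of why the duplicate exists
    (gdir / "README.txt").write_text("master_flat.tif is a duplicate\n")

print(sorted(p.name for p in (root / "group_10C").iterdir()))
# ['README.txt', 'master_flat.tif']
```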

    Hope this helps. :) 

  6. @aknaggbaugh I decided against trying in the end. I figured learning to guide would be hard enough without relying on kit that may or may not work.

    In the end I went for a Datyson T7C which is a ZWO ASI 120MC clone, and even uses the same drivers. I got mine from AliExpress for around 75 quid but what with one thing and another, they're more expensive now. Just do a Google search and see what you can find. Here, I've even done it for you: https://www.google.com/search?q=datyson+t7c&oq=datyson+t7c. The mono units are very scarce currently.

    It seemed the best value cam that would work with my system and I'm pleased to say it's worked really well. I would occasionally get a 'split screen' effect with it, in that the left and right sides of the image would be swapped for some strange reason, but it didn't affect guiding at all. The reason I got the colour version was that I could guide with it, then remove it and do planetary; but in practice it's such a pain having to refocus and recalibrate in PHD2 that I never really got around to doing much planetary work, and would just leave it in the guidescope.

    I recently got a mono camera (a QHY5L-II mono) dedicated to guiding, but haven't got around to using it yet. I would also question whether it would be sensitive enough for off-axis guiding. It's definitely fine for use with a guidescope though.

    You could still give the Bresser a go! I've been tempted since but as I say, once the guiding is working well, I'm very hesitant to start fiddling with it.

    Hope this helps.

  7. I think I know what it is. A year or so ago, I went through all the different stacking modes in DSS and decided the Auto-adaptive one was best. So, I've been using that ever since. On a hunch, I went through them all again last night, and guess what? Average, Auto-adaptive, Entropy and Maximum modes seem not to handle hot pixels very well. Median, Kappa Sigma and Median Kappa Sigma handle them perfectly - no more hot pixels. It could still be that I need to dither more, because those modes might just be rejecting hot pixels that the darks haven't fixed. But it's an interesting finding.
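    For anyone curious about why those modes behave differently: a plain average keeps the hot pixel's contribution, while kappa-sigma clipping throws out values far from the mean before averaging. A toy sketch with made-up numbers (not DSS's actual implementation, just the general idea):

```python
# Why averaging keeps hot pixels but kappa-sigma clipping rejects them.
# Made-up stack of 10 subs for a single pixel: background ~100, with one
# sub containing a hot-pixel reading of 4000.
from statistics import mean, stdev

stack = [98, 102, 101, 99, 100, 4000, 97, 103, 100, 100]

def kappa_sigma(values, kappa=2.0, iterations=3):
    """Iteratively reject values more than kappa*sigma from the mean."""
    vals = list(values)
    for _ in range(iterations):
        m, s = mean(vals), stdev(vals)
        kept = [v for v in vals if abs(v - m) <= kappa * s]
        if len(kept) == len(vals):
            break                  # nothing more to reject
        vals = kept
    return mean(vals)

print(round(mean(stack)))          # 490  - plain average, hot pixel survives
print(round(kappa_sigma(stack)))   # 100  - outlier rejected before averaging
```

    This is also consistent with the dithering point: rejection-based modes can only discard the outlier if it doesn't land on the same pixel in every sub.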

  8. Hi all,

    I recently noticed that some of my images still have hot pixels even after running them through DSS with hot pixel detection turned on. I'm (fairly) sure my darks library is ok. If I use the 'Cosmetic' tab to auto detect and clean hot and cold pixels, it fixes them.

    Without the hot and cold pixel removal:

    [Image: crop with hot pixels visible]

    With the pixel removal:

    [Image: same crop with hot and cold pixel removal applied]

    So, the cosmetic tab does the trick. I'd be happy with this, except that, as you can see above, it also seems to increase the overall brightness and noise of the image. I do not understand why. The above images were processed in precisely the same way, but the one with the hot pixels removed is definitely brighter.

    I could probably live with this, but I'd rather have a more 'true' image to work with. As things stand, it seems that I either live with the hot pixels (which you can't really see anyway most of the time), or use the cosmetic feature and live with an image that seems to have been affected overall in some weird way.

    Any ideas/comments/suggestions? I should add that I've tried and tested pretty much every other stacking solution and would very, very, very much prefer to stick with DSS.
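    For what it's worth, per-pixel cosmetic correction (replacing a hot pixel with, say, the median of its neighbours) should only change the replaced pixels themselves, so a global brightness shift would point at something else, such as a different stretch being applied. A toy sketch with made-up values (this is a generic median replacement, not necessarily what DSS's Cosmetic tab does internally):

```python
# Replacing one hot pixel with the median of its neighbours changes only
# that pixel. Tiny fabricated 3x3 "image" with a hot centre pixel.
from statistics import median, mean

image = [[100, 101,  99],
         [102, 4000, 98],
         [ 99, 100, 101]]

flat = [v for row in image for v in row]
neighbours = [v for v in flat if v != 4000]   # the 8 surrounding pixels

fixed = [row[:] for row in image]
fixed[1][1] = median(neighbours)              # cosmetic-style replacement

before = mean(flat)
after = mean(v for row in fixed for v in row)
# mean drops from ~533 to 100: the change comes only from the replaced pixel
print(before, after)
```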

    Thanks, Brendan

  9. Well, here we are, six months down the line, with JWST on the verge of launching... and still no sign of this doc!

    I've searched dozens of times in the intervening months, and have not found it. Subscribed for updates, tried contacting them again through FB and Twitter, even emailed Crazy Boat Pictures, even tweeted the documentary maker himself. No replies.

    Most peculiar. I wonder whether they just couldn't secure distribution or something? In which case, do it independently, online! I'd pay to see it.

    Very frustrating. Anyone with any pointers, please point! 

  10. Hmmmm. So, I recreated the original image, using the same reference frame as I used to create the H alpha image - and they're not aligned.

    What on earth is going on?

    POST-EDIT - I just restacked both and it's aligned now. But I'd still very much prefer to be able to align new Ha stacks with old RGB stacks if at all possible, rather than having to recreate everything from scratch or aligning in Photoshop. This must be possible.

    POST-POST EDIT - Got it - of course, you stack the Ha as an FTS file; then you have two compatible FTS files, and you use the original for registration. Boom-shanka.

  11. Right, next problem...

    I just took a load of my very first Ha narrowband frames with my EOS1000D (so, CR2 RAW files).

    I want to use them to enhance the nebulosity in my IC1396 Elephant's Trunk shot, which means I need to align them perfectly.

    So, to do this, I thought the best way would be to bring the FTS file for IC1396 into DSS, alongside the Ha CR2 files, use that as the registration frame, but uncheck it so it isn't stacked. The stack worked, but wasn't aligned with the original FTS file. At all.

    So I took the highest scoring CR2 file that I used to create the original shot, thinking that would be the one DSS used to register all the others. Straight away this looked more promising because when I right-clicked it in DSS to use it as the registration frame, it put a star next to it, which it mysteriously didn't with the FTS file. This was close, but just out.

    Having played around a bit more I can see that DSS doesn't like the FTS file because although it's the same size as the CR2 files, other properties are different, for example the colour profile and CFA. So, I tried saving it as a greyscale TIFF, but still no joy. I don't know what to do about the CFA property.
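    In case it helps with diagnosis: FITS headers are plain 80-character text "cards", so you can inspect the properties that might differ (dimensions, Bayer pattern) without any extra software. A sketch using a fabricated header - for a real file you'd read the first 2880-byte block, and keyword names like BAYERPAT vary between capture programs:

```python
# FITS headers are sequences of 80-character ASCII "cards" of the form
# "KEYWORD = value / comment". Parse a fabricated header and pull out the
# properties a stacker might compare between frames.

def parse_fits_cards(header_bytes):
    """Split a FITS header block into keyword/value pairs."""
    cards = {}
    for i in range(0, len(header_bytes), 80):
        card = header_bytes[i:i + 80].decode("ascii")
        if card.startswith("END"):
            break
        if "=" in card:
            key, _, rest = card.partition("=")
            # drop any trailing comment, quotes and padding
            cards[key.strip()] = rest.split("/")[0].strip().strip("'").strip()
    return cards

# Minimal fabricated header, not read from a real file.
header = b"".join(
    s.ljust(80).encode("ascii")
    for s in [
        "SIMPLE  =                    T",
        "BITPIX  =                   16",
        "NAXIS   =                    2",
        "NAXIS1  =                 3888",
        "NAXIS2  =                 2592",
        "BAYERPAT= 'RGGB    '",
        "END",
    ]
)

cards = parse_fits_cards(header)
print(cards["NAXIS1"], cards["NAXIS2"], cards.get("BAYERPAT"))
```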

    So, to avoid going through every CR2 file to try and find out which one DSS used to register the FTS file, or having to completely redo the IC1396 shot from scratch by using a known, common registration frame, I think I'd much prefer to find a way to get DSS to use the FTS file as a registration frame because then I'll have a way forward to do this with many of my other past shots.

    Any ideas? Oh, and please don't suggest using APP, Siril, Pixinsight etc because I have actually tried them all in the past, and keep coming back to DSS as my preferred stacker. I just really like it and get (to my eye) decent results out of it. I'm sure there must be a way of doing this in DSS!

    Thanks, Brendan

  12. Thanks Olly, great advice. The plate solve and focus issue is now solved.

    I think the issue with Trevor Jones's stuff is that he documents as he learns, so I guess he would also do things differently now. I wasn't planning on using Ha for luminance for the exact reason you give, but that's great advice about combining it with Photoshop's Lighten blend mode. Exactly what I'll need to know - when I finally learn how to align the Ha frames! Which is the subject of the question I'm about to post.

     

  13. OK, progress has been made.

    I plate solved to Vega without the filter. Then I put the filter in, popped the Bahtinov mask on and, amazingly, the Bahtinov grabber tool worked even with the poor image through the filter. So, I was able to focus with the filter in.

    Then I changed to PlateSolve2 instead of ASTAP, because I know it's more tolerant of bad stars. After a few goes, bumping the plate solve exposure time from 10s to 90s, it's working.

    So, I think this is the way forward.

    Thanks for all the help. :)

  14. Thanks - I have noticed that the Ha is out of focus. I have managed to get it to plate solve once, but I'm up to 4-minute exposures now and it's not working again.

    I was thinking I could plate solve without the filter, pop the filter in, focus, and off I go. But no - I focus using a brighter star such as Vega or Altair, so I'd still need to be able to plate solve to the target afterwards. Given that my plan for tonight was to add Ha to an existing object framed at precisely 75 degrees rotation - something I'd sort out at focus time, and only then - I simply cannot take the camera out after that. So it's plate solve with the Ha filter in, or nothing.

  15. I just tried using my SVBONY H-alpha filter for the first time. It's attached to my coma corrector, on my astro-modded EOS1000D DSLR.

    I'm trying to plate solve in APT and it simply will not have it. Of course, this is because I now have a filter that is killing the stars. I expect I'll have the same problem when I try to focus.

    I thought this might be a problem but I'm stumped now because it really is a problem. The only way around it that I can see is to get the camera in focus, and the scope pointing in the right direction... and then take the camera out, put the filter in, start shooting, and pray that it's still in focus. I don't like this approach, and it really won't work at all when accurate rotation is important.

    Any ideas? Have I done something silly by getting this filter? I did it after seeing Trevor Jones's blog and video on the subject - using H-alpha in the red channel or as a luminance layer.

    Thanks, Brendan

  16. Hi all,

    I know this is an old thread but I'm reviving it!

    I have exactly this issue of rotation, and while it doesn't bother me too much most of the time, if I have a good, full-FOV image that has involved a meridian flip, it's frustrating having to crop it.

    There are lots of great diagrams and explanations in this thread and honestly, without my brain exploding/imploding, I don't completely understand them.

    So, after all the debate here: what's the fix? Is it cone error? If so, I'm considering using ConeSharp to fix it - https://www.sharpcap.co.uk/conesharp

    Thanks, Brendan

  17. Hi all,

    Thought I'd add this - an unusual target - 30 Cyg and 31 Cyg A & B.

    30 Cyg, to the right of the image, is a white giant star with a blue tint.

    31 Cyg A is a bright orange giant with a smaller blue companion called 31 Cyg B (also designated HD 192579). They form a binary system with a separation of a mere 11 astronomical units. Because we view the orbit side-on, every 10.32 years they are in eclipse for 63 days.

    Image details:
    * 1:10 hours of integration at ISO800 from 70x60s subs
    * Bortle 4, Moon 100% phase, 47° height
    * 25 flats, 25 dark flats, 50 darks
    * Sky-Watcher 130PDS with primary baffle, NEQ6 with Rowan belt, EOS1000D minus IR filter, 0.9x coma corrector, APT, PHD2, DSS, StarTools, Topaz DeNoise AI

     

    [Image: 30 Cyg and 31 Cyg A & B]

    Cheers, Brendan

     

     

  18. OK, just bit the bullet - went out, checked everything, took the camera out, checked collimation (not easy in the dark, even with a torch), re-seated the camera, tweaked the vanes to make sure they were as straight as I could get them, re-focused... and the problem is gone. I think the camera may not have been quite properly seated. Anyway, problem solved, in the field! 🙂
