Posts posted by Filroden

  1. I thought I had a clear evening yesterday, so I wanted to see if my new spacing on the camera improved my star shapes (and also, hopefully, get some more colour data for my Rosette image). After a new setup at the end of the garden I found my new mini-PC was just on the edge of wifi range and the connection just wasn't strong enough to use remotely. I'll have to test more in daytime, but I swapped to the laptop and sat outside (thankfully a much warmer night than usual). The new spacing meant I needed to find the new focus points for each filter, which took about 10 minutes. Autofocus worked so well on the Ha filter that I reduced the exposure time from 10s to 5s and went from bin4 to bin2.

    I set up a new sequence to capture 10x60s each of RGB and Ha, hoping to run it at least twice before the forecast clouds came over. I managed 10x60s of Ha and RG.

    On review, the data looked of similar (if not slightly better) quality than previous sessions (much less gradient in the R and G). However, when I blinked through the images the field rotation was spectacular. Zooming in on individual subs showed beautiful trails moving clockwise around the Rosette :( I should have known better and a) checked the first sub before committing to the sequence and b) probably reduced the exposure to 30s. So I can't use this data to check how flat my field is.

    I haven't taken flats yet but I did a quick integration without any calibration and all three looked good in the central area. Probably good enough to throw into a bigger stack once I've calibrated them since the older data has elongated stars in the corners because of poor spacing.

    Isn't it funny how routine makes you forget the basics. By setting up near my kitchen window I could only image to the east so field rotation was never really a major concern (wind and mount limited my exposures to about 60s). Because the Rosette had moved so far round I needed to set up in a new location and I never paused to think I was now imaging south where rotation becomes the single limiting factor for my system!

    • Like 2
  2. Reading the Ekos document, it looks like it takes a slightly different approach to SGPro:

    Quote

    Ekos then begins the focusing process by commanding the focuser to focus in or out, and re-measures the HFR. This establishes a V curve in which the sweet spot of optimal focus is at the center of the V curve, and the slope of which depends on the properties of the telescope and camera in use. Because the HFR varies linearly with focus distance, it is possible to calculate the optimal focus point. In practice, Ekos performs several large iterations until it gets closer to the optimal focus, where the gears change and smaller, finer moves are made to reach the optimal focus. Ekos lets the user set a configurable tolerance parameter, or how good is good enough. The default value is set to 1% and is sufficient for most situations. The Step options specify the number of initial ticks the focuser has to move (assuming an absolute focuser; this is NOT applicable to relative focusers). If the image is severely out of focus, we set the step size high (i.e. > 250). On the other hand, if the focus is close to optimal, we set the step size to a more reasonable range (< 50). It takes trial and error to find the best starting tick, but Ekos only uses that for the first focus motion, as all subsequent motions depend on the V-curve calculations.

    SGPro requires the stars to be close to focus before you run the autofocus routine. It looks like Ekos will travel over the full focuser distance to find focus (i.e. truly autofocus). If so, I suspect it's doing the right thing and it's just the scale of the graph not being small enough to show the V. So long as it's finding focus I'd live without improving the graph unless it was very easy to change!
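    The V-curve idea from the quote can be sketched in a few lines. This is a hypothetical illustration, not Ekos's actual code: since HFR varies roughly linearly on either side of focus, fitting a line to each arm of the V and intersecting them estimates the best focuser position. All sample positions and HFR values below are made up.

    ```python
    # Hypothetical sketch of V-curve focusing: fit a line to each arm of the
    # V and take the crossing point as the estimated focus position.

    def line_fit(xs, ys):
        """Least-squares slope and intercept for y = m*x + b."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        return m, my - m * mx

    def v_curve_focus(left, right):
        """left/right: (focuser_position, HFR) samples on each arm of the V."""
        m1, b1 = line_fit([p for p, _ in left], [h for _, h in left])
        m2, b2 = line_fit([p for p, _ in right], [h for _, h in right])
        return (b2 - b1) / (m1 - m2)  # position where the two lines cross

    # Made-up samples: HFR falls approaching focus, then rises past it.
    best = v_curve_focus([(100, 9), (200, 6), (300, 3)],
                         [(500, 3), (600, 6), (700, 9)])
    print(best)  # ~400, the bottom of the V
    ```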

    • Like 1
  3. In SGPro I would be looking at the travel (number of steps between maximum in and out focus) and the step size. But I'd also define the number of steps. So for me, my focuser travels about 8000 steps covering 6cm of travel. I use a 9-step routine with a step size of 10. I worked that out by manually finding focus, then defocusing until the HFR was about 4 times larger. I used twice the distance between the 4x HFR point and the focused position, divided by the number of steps, to get a good step size.
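    That rule of thumb is easy to put into numbers. A minimal sketch, with illustrative focuser positions rather than values from any particular setup:

    ```python
    # Step-size rule of thumb described above: sweep twice the distance
    # between best focus and the point where HFR is ~4x larger, split
    # evenly over the number of autofocus data points.

    def step_size(focus_pos, defocus_pos, n_steps):
        travel = 2 * abs(defocus_pos - focus_pos)  # full in-to-out sweep
        return travel / n_steps

    # e.g. focus at step 4000, HFR quadrupled by step 4045, 9-point routine:
    print(step_size(4000, 4045, 9))  # 10.0
    ```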

    • Like 2
  4. 2 hours ago, The Admiral said:

    Not my post, gov! You are responding to Fabien's post.

    Ian

    You're right. I was quoting Fabien from within your post when I meant to quote Fabien directly. I think your image is representative of what could be achievable though the added light pollution probably means needing to stack many more images to overcome the noise.

  5. 8 hours ago, The Admiral said:

    Second is Rosette on which I failed to capture enough SNR (sorry for your eyes ;-)). I had to stretch so much I had to process the noise, even then the result isn't satisfying. Surely I was too ambitious when reducing the sub length to 20s, I suspect the UHC filter dims the overall image and requires at least 25-30s

    I wonder if you've lost some of the data when removing/minimising the light pollution during processing? I found the Rosette, Soul and Orion Nebula all very difficult to process because they filled my field of view, making it difficult to calibrate any background gradient. Pacman was much easier as it occupied much less of the frame.

    Here are two of my images of the Rosette. These are both from the same data (just one is rotated 180°)! The biggest difference between them is that I improved my background model and preserved more of the nebula which existed in the unprocessed image.

    [Attached images: NGC2239_20161228_v10.jpg and NGC2239_20161228_v11.jpg]

  6. The way I think about it is that the target emits the same number of photons; they just land on more or fewer sensors depending on the field of view. The more sensors they land on, the fewer photons per sensor, so the dimmer the target will appear in the same time. So the target will appear smaller but brighter in the larger field of view. I think you're trading resolution for speed of capture.

    Olly has a good graphic about the f-ratio myth that is much clearer.
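    The trade described above can be shown with toy numbers (all hypothetical): a fixed photon budget from the target spread over more pixels means fewer photons per pixel.

    ```python
    # Toy illustration: same total photons from the target, divided over
    # however many pixels the target covers at a given image scale.

    def photons_per_pixel(total_photons, pixels_on_target):
        return total_photons / pixels_on_target

    total = 1_000_000  # hypothetical photons from the target in one exposure

    wide = photons_per_pixel(total, 10_000)     # target covers 100x100 px
    narrow = photons_per_pixel(total, 40_000)   # same target at 2x the
                                                # resolution: 200x200 px
    print(wide, narrow)  # 100.0 25.0 -- brighter per pixel, less resolution
    ```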

    • Like 2
  7. 12 hours ago, Nigel G said:

    Here's what I mean, these images are cropped from 135mm lens image to compare against 150p images.

    They are all similar exposure times, the 135mm lens appears to gather more.

    I don't detect much difference in the M42 shots. They have slightly different crops, but looking closely at the area between M43 and the Running Man, I can only describe the Ha as forming a staircase between them. Both images are just starting to show this structure. There is a much more pronounced difference in the Flame/Horsehead image. As Neil says, I suspect this is for two reasons. I think you've got a lower black point in the image taken with the scope. Also, because the target covers far fewer pixels in the 135mm (it has the larger field of view by some margin), each pixel will be "seeing" more photons, so it will be easier/faster to capture (though the reflector's aperture will compensate to some extent).

    • Like 1
  8. 5 minutes ago, Nigel G said:

    The modded 1200d and CLS filter seem to pick up more Ha than my scope on the same targets.

    That's odd. There is nothing about your scope that should reduce Ha detection compared to the 135mm lens.

    I've also been trying to put this bad weather to good use too. I have finally got the spacing on my camera down to within 1mm of optimal design and just need to test it with a star field. I can reduce it by 1.5mm and increase it by about the same using Delrin spacers, so I should now be able to get it spot on and finally get a flat field across my full image.

    I also invested in a cheap mini-PC which I intend to strap to my mount and run the camera, filter wheel and focuser from. It auto-logs into Windows 10, launching TeamViewer so I can remotely connect to it from the laptop or main PC. It has a single USB3 port for the camera/filter wheel and two USB2 ports, one of which will control the focuser; the second will hopefully control the mount once I figure out how to do that. I still need a bigger memory card as it only has about 15GB of storage and the largest SD card I can find in the house is only 8GB (I was sure I had a 64GB lying around somewhere).

    I'm hoping this allows me to set up at the end of the garden so I can see 20 degrees either side of the meridian (i.e. extend my current view by about 40 degrees). I have a power cable that will reach and with the PC strapped to the mount, only the power cable to the mount and PC will dangle and potentially cause cord wrap issues. Everything else will move with the mount.

    The only issue that worries me still (having not tested this in the field) is how I will frame targets after the initial goto. Currently I use a semi-live view on the laptop screen and manually adjust the framing from the mount's handset. I think I can still do this, but there will be a short delay as the live view now has to be sent over the network to the laptop.

    • Like 2
  9. 3 minutes ago, Nigel G said:

    Hey guys, I have a little confession to make.

    I have acquired an EQ3 pro mount, for wide field mainly.

    Not giving up on Alt-AZ imaging that's for sure. But it had to be done sometime.

    Still keen as ever with the Alt-AZ mount imaging.

    Cheers

    Nige.

    It's a logical step. It will be interesting to see how you find the difference in setup and use.

    • Like 1
  10. I have the LakesideAstro. It works very well and the Windows driver integrates into SGPro. The PC is a little more than the Pi (about £60 more), but I know I can load up TeamViewer and SGPro and run most of my setup remotely. I will still need to take the laptop out to frame the target until I can figure out a solution for controlling the mount.

    I'm hoping I can remote in from the Mac and use SkySafari for mount control.

  11. On 1/28/2017 at 15:46, Filroden said:

    So I'm now wondering if I can get away with the following:

    11.0mm Skywatcher spacer ring (included with FF)

    9.0mm FLO M48 to M42 adaptor (https://www.firstlightoptics.com/adapters/flo-m48-to-t2-adapter.html)

    10mm Modern Astronomy spacer (http://www.modernastronomy.com/shop/accessories/adapters/10mm-t-spacer/)

    7.5mm Baader T2 extension tube (https://www.firstlightoptics.com/adapters/baader-t2-extension-tube.html)

    20.0mm ZWO EFW

    1.0mm Astrodon filter

    6.5mm ZWO ASI1600MM sensor distance inside body

    ====

    65mm

    Maybe add a delrin spacer to make up the missing 1mm?

    Having exchanged emails with Bern at Modern Astronomy he's pointed out that the addition of the filter adds to the back spacing requirement. So the distance needed is 67mm, not 66mm, when using any Astrodon filter (they are all 3mm, so add 1mm).

    So I think:

    11.0mm Skywatcher spacer ring (included with FF)

    9.0mm FLO M48 to M42 adaptor (https://www.firstlightoptics.com/adapters/flo-m48-to-t2-adapter.html)

    1.0mm delrin spacer

    7.5mm Baader T2 extension tube (https://www.firstlightoptics.com/adapters/baader-t2-extension-tube.html)

    10mm spacer supplied with ZWO camera

    2mm male to male connector supplied with ZWO camera

    20.0mm ZWO EFW

    6.5mm ZWO ASI1600MM sensor distance inside body

    ====

    67mm!
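    The two stacks above are just sums of the listed thicknesses, so they're easy to check. The 67mm target is the flattener's back focus plus the ~1mm added by the 3mm-thick Astrodon filter, per Bern's correction:

    ```python
    # Sum the adaptor stacks listed above (values in mm, from the post).

    first_attempt = [11.0, 9.0, 10.0, 7.5, 20.0, 1.0, 6.5]
    revised = [11.0, 9.0, 1.0, 7.5, 10.0, 2.0, 20.0, 6.5]

    print(sum(first_attempt))  # 65.0 -> 2mm short of the corrected 67mm
    print(sum(revised))        # 67.0 -> on target
    ```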

    • Like 1
  12. That's looking good :)

    I've also made progress. The final adaptor I need to (hopefully) get my spacing right arrives tomorrow. I also managed to control my scope from the Mac, and I managed to virtualise my Windows installation so I can run SGPro from the Mac too. I now need to test the alignment process, but I need clear skies for that. If it works, it means I no longer need to step outside other than to set up and take down the kit. Once set up, I can do the alignment, focusing and framing all indoors, either from the kitchen where the Mac will live or remotely from the much warmer office PC.

    At some stage I will have to investigate doing something similar to your own setup with a mini-PC attached to the mount though I will have to solve the whole connecting and controlling the mount from Windows first. Every combination of connection/setup sequence has failed so far.

    • Like 2
  13. 3 hours ago, mxgallagher said:

    Ok so I had another go at my M31 image

    This time I re-stacked adding in some of the 15s subs to give a total of 8m20s!!

    I have played around with curves and levels using gimp 2.9.5 and also tried adjusting the colour slightly - less red, more blue

    I think it looks better, but the core is still blown out and the edges are now getting a lot of noise coming in - looks like I need more data (if only it would stop raining)

    I don't think you'll get much more from this with just 8 minutes of data. What you've done is a great improvement in terms of pulling some of the detail out and getting a better colour balance. The addition of more data is about the only thing left to do :) Once you have more data it will become easier to stretch and the image will support stronger processing, including noise reduction and sharpening. Let's just hope for some clear skies in February!

    • Like 1
  14. 15 minutes ago, Herzy said:

    Thanks. Ken's image makes the actual nebula look great, and Ian's has great detail in the dust. If I may, I might blend the two together. I did take darks (no bias), but I only took 9, so I thought it wouldn't make a big difference and didn't apply them. Also, I checked out the flats and there is that green gradient in it. I'll try to take another set, although it might be too late.

    You can always try converting the flats into mono.

    Forgot to add: you don't need to apply both bias and darks if you didn't subtract the bias from the dark in the first place (the dark contains the bias anyway). However, you still have to remove the bias from the flats so as not to double up your bias correction. I get myself so confused when I calibrate, as I don't calibrate darks but I do calibrate flats. I need to write my own process down at some point so I don't forget it!
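    A minimal sketch of that calibration logic with synthetic frames (all the signal levels below are made up): the master dark already contains the bias, so the light only needs dark subtraction, while the flat must have the bias removed before use or the bias gets corrected twice.

    ```python
    # Synthetic calibration frames: bias + dark current + signal, per pixel.
    import numpy as np

    bias = np.full((4, 4), 100.0)
    dark = bias + 5.0            # master dark: dark current rides on the bias
    flat_raw = bias + 200.0      # flat: field illumination plus bias
    light = bias + 5.0 + 50.0    # light: bias + dark current + target signal

    flat = flat_raw - bias       # calibrate the flat (remove bias)...
    flat /= flat.mean()          # ...then normalise it

    # Dark subtraction removes both bias and dark current from the light.
    calibrated = (light - dark) / flat
    print(calibrated.mean())  # 50.0 -- only the target signal remains
    ```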

  15. 10 hours ago, Herzy said:

    Here is the Stacked fits file. You can layer a stretched version and a less-stretched version to preserve the core. If you want to give it a go, please do.

    You've really captured some good detail in the surrounding dust but I think you just lack enough data to really stretch it out of the background without the noise going crazy. A background wipe seemed to correct the vignetting and odd colour casts in the periphery of the image, so I think you might want to look at your flats to see what's different for this image. Otherwise, my processing pretty much matches yours. I noticed some horizontal banding in the bottom part of the image. Again, is this calibration?

    M42 - Stacked (DSS).jpg

    You were right about the stacked image already being very bright! It is possible to push the core harder, but the image takes on a very HDR overprocessed look and I don't think that works. The good thing is that it's easy to go back and shoot some very short exposures (even 5-10 minutes worth would be enough) which you can then use to blend into the core. That and I think it's worth collecting more data so you can really pull out the dust.

    • Like 1