
Posts posted by Filroden

  1. I calibrate my lights (bias, dark, flat) after every session. I store these as my "input" files (suffixed with "_c" to show they are calibrated, and with "_cc" if I have also cosmetically corrected them for hot/cold pixels). If I image the same target, I just add more files into that target's folder. I can then, at any stage, realign and stack all the images. It also means I don't have to recalibrate or use different flats for different batches.

    It also avoids the need to stack the stacks. That is possible, but you lose some of the benefits of stacking larger numbers of lights.

    It does take longer because as your library of lights grows, so does the time it takes to align and stack them. It also requires more storage as you're saving every light and not just the stacks (but I can never bring myself to delete original data).
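
    If anyone wants to automate that filing convention, here's a minimal Python sketch (the library root and the folder-per-target layout are just my setup; the paths are hypothetical):

    ```python
    from pathlib import Path
    from shutil import copy2

    LIBRARY = Path("~/astro/targets").expanduser()  # assumed library root

    def file_session(target: str, lights: list[Path], corrected: bool = False) -> None:
        """Copy a session's calibrated lights into the target's library folder.

        Suffix "_c" marks calibrated lights, "_cc" marks lights that have
        also been cosmetically corrected, per the convention above.
        """
        suffix = "_cc" if corrected else "_c"
        dest = LIBRARY / target
        dest.mkdir(parents=True, exist_ok=True)
        for light in lights:
            # e.g. m31_0001.fit -> m31_0001_cc.fit
            copy2(light, dest / f"{light.stem}{suffix}{light.suffix}")

    # Usage: file_session("M31", sorted(Path("2017-10-26").glob("*.fit")), corrected=True)
    ```

    Everything for a target then sits in one folder, ready to realign and restack in a single pass.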

    • Like 1
  2. 20 hours ago, jjosefsen said:

    I did experiment a lot with the initial color calibration/correction. In this image I used BackgroundNeutralization -> DynamicBackgroundExtraction -> LinearFit -> ChannelCombination -> SCNR:Green -> ColorCalibration

    I'd recommend looking at the new PhotometricColorCalibration process as an alternative and an improvement to BackgroundNeutralization and ColorCalibration. It calibrates the image so that the background is neutral and the stars match their BV values. I've found this gives much better starting points for star colour. You could also try the ArcsinhStretch process for at least some part of the stretching as it is better at retaining colour. I find it bloats the stars, so I use it in combination with one or more of MaskedStretch, HistogramTransformation and CurvesTransformation.

    I'd recommend the following order whilst linear:

    DynamicCrop -> DynamicBackgroundExtraction (sometimes twice) -> ChannelCombination -> PhotometricColorCalibration -> (if necessary, and not always at 100%) SCNR:Green 
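
    The reason ArcsinhStretch retains colour, as I understand it (this is the Lupton-style asinh stretch, which I believe is what the process implements), is that it computes a single stretch factor from luminance and applies the same factor to all three channels, so the R:G:B ratios survive:

    ```latex
    (R, G, B) \;\mapsto\; \frac{\operatorname{arcsinh}(\beta L)}{L \operatorname{arcsinh}(\beta)}\,(R, G, B)
    ```

    where $L$ is the pixel's luminance and $\beta$ the stretch strength. A per-channel histogram stretch saturates each channel independently, which is what washes the colour out of bright stars.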

    • Thanks 1
  3. 35 minutes ago, maxchess said:

    Just starting out in imaging.

    A good start. You’ve just started to capture the nebula. Did you take flats? If not, they will help. You should also make sure to crop any stacking artefacts before processing. I can see a little has been left at the bottom and that can make it harder to remove gradients, etc. But welcome to the wonderful world of imaging :)

  4. 3 hours ago, Peco4321 said:

    I do feel like I will never achieve the standards of most others

    You will be surprised at what you can achieve. You’ve got some good data hiding behind the vignette and background gradients. I can’t recommend taking flats enough. They will make the single biggest impact on the images you’ve posted today. After that, a good gradient remover will also make a difference.

    • Like 2
  5. 51 minutes ago, Nigel G said:

    My first success with the new Ha filter. I love the stars, the filter really keeps them under control.

    Lovely target to start on. Ha is wonderful. Tight stars, minimal gradient and you can see results for each sub because it’s so easy to stretch. 

    The processing fun starts when you combine it with RGB. 

    • Like 1
  6. Found this in an earlier post on the same topic:

    Quote

     

    The following is taken from Astrodon's website where Don refers to backfocus and the use of a 3mm Astrodon filter:

    "There may be some confusion as camera manufacturers measure backfocus from the focal plane of the CCD to the outer surface of the camera. When they account for the thickness of the filters, they SUBTRACT 1mm, which is correct as measured from the CCD. However, most people measure backfocus from the back of their scope or from a corrector, and then add/subtract spacers to arrive at the correct backfocus. In this case, as measured from the scope, the 1mm must be ADDED. A subtle point, but does get people in trouble from time to time."

    Now on the QSI website they state that the backfocus of the QSI690 is 50.2mm as measured from the focal plane of the CCD to the outer surface of the camera. They then go on to say that when used with a 3mm filter you subtract 1mm from this backfocus distance i.e. 49.2mm.

    So, this indicates that Astrodon and QSI agree, does it not?

     

    So QSI are saying the effect of a filter is that the optical distance inside the camera effectively shortens by 1/3 of the filter's thickness. So to maintain the original focus position you have to add that space outside the camera. We would never really talk about the optical path of the light inside the camera itself, so we simplify this to just what we have control over: how much space we have between the reducer and the front face of the camera. We add space to compensate for the effect of the filter.
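
    To put numbers on it (assuming typical filter glass with refractive index $n \approx 1.5$): a plane-parallel filter of thickness $t$ pushes the focus back by

    ```latex
    \Delta = t\left(1 - \frac{1}{n}\right) \approx \frac{t}{3}, \qquad t = 3\,\mathrm{mm} \;\Rightarrow\; \Delta \approx 1\,\mathrm{mm}
    ```

    Measured from the scope, you add that 1mm of spacing; measured from the CCD, the quoted backfocus drops by the same 1mm (QSI's 50.2mm becoming 49.2mm).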

    I think it's sangria time...

    • Like 2
  7. 12 minutes ago, ollypenrice said:

    So what's the error in my graph? In the second version the overlay at the bottom brings the focal point to that of the chip distance. The angle of convergence had been respected (because it was a cut and paste job from the original light path with filter). On the other hand my overlay shows the filter intersecting the light path further up the light path (within the pink 'spacer' section) so that's not geometrically correct and could be the source of the error.

    By copying and pasting you've "chopped" off some of the light path inside the spacer. In reality, the only way to shorten the light path is to pass it through a lens; you cannot move the light path without adding one. So in the lower example, the addition of the filter does alter the light path by pushing its point of focus further back. That point is now fixed in space relative to the rear of the reducer. You could have placed the filter anywhere between the reducer and the chip and its effect would be the same (though the filter's diameter would obviously need to be bigger the further it was from the chip).

    So with a fixed point of focus there are only two ways to bring the system to focus:

    1) move the chip so it is now positioned at that point (by adding more space between it and the reducer)

    2) move the point of focus to where the chip is by placing an additional lens in the system that would reduce the light path by an equal and opposite amount as the filter. 

    Option 1 is far preferable as it does not introduce any further glass/air interfaces.

    [Edit: I'm in Spain atm so can't draw what I mean, but if you move your diagram upwards so it is in line with the original you will see your light path and the original light path no longer align. This is the missing piece of the light path that cannot be accounted for simply through a spacer.]

  8. 41 minutes ago, ollypenrice said:

    But if we take as gospel the sentence 'angle of convergence remains the same' then we get this:

    [attached diagram: FILTER THICKNESS question]

    ????

    Olly

    You’re trying to move the light to reach the chip by changing the spacing but the light path cannot change (not without adding lenses). You have to build more space to move the chip to where the light now reaches focus. 

  9. 4 minutes ago, ollypenrice said:

    I always struggle with this so I took Ray's diagram and added the hardware around it. The image below is how it came out. It shows why Davey T is right. The lengthening of the light path means you need a shorter spacer. Even as I write this it feels wrong, but see what you think of the diagram. For me, at least, it has brought a bit of clarity since, in the past, I thought adding length to the light path meant adding it to the spacer. Mais non!

    [attached diagram: FILTER THICKNESS]

    Olly

    It always felt counter-intuitive to me, but your diagram shows you need to add length to the spacer for the light path to hit the chip in focus. If you shortened the spacer, the focus point would sit even further behind the chip. And that is how it worked for me in practice: I need 66mm without a filter, and with the Astrodon (3mm) filters I need 67mm of space.
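
    For anyone wanting to sanity-check their own spacing, here is the arithmetic as a quick Python sketch (the refractive index n = 1.5 is an assumption for typical filter glass):

    ```python
    def required_spacing(no_filter_mm: float, filter_mm: float, n: float = 1.5) -> float:
        """Spacing needed once a plane-parallel filter sits in the light path.

        The filter pushes the focus back by t * (1 - 1/n), roughly a third
        of its thickness for typical glass (n ~ 1.5), so the spacer grows.
        """
        return no_filter_mm + filter_mm * (1 - 1 / n)

    # My numbers from above: 66 mm with no filter plus a 3 mm Astrodon filter
    print(required_spacing(66, 3))  # 67.0
    ```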

    • Like 2
  10. 27 minutes ago, Nigel G said:

    Last image of M31 until fully completed. I wanted to show the accidentally misaligned version, which I think is quite effective and almost 3D, giving the image depth.

    Now that's what I call red shift!

    I had a go with your jpegs using PixInsight to blend luminance from one and RGB from the other, plus I used its new colour calibration tool that uses the stars' actual BV to calibrate colour across the image. Hope you don't mind!

    [attached image: reprocessed M31]

    • Like 4
  11. 1 hour ago, Nigel G said:

    Being slightly disappointed with my M31 result. It's now obvious why it was only the same as my first stack: although it had 4 hours of non-modified data, it only had 1h 40m from the second camera, so it would only be as good as the lesser data, and adding 1 sub from the modified camera would introduce loads of noise. If I take the same 4 hours with the modified camera the result would be greatly improved. If I want an image quality worth 8 hours of data and add a second camera's data, I would need to take 8 hours with that camera too.

    Mixing subs from 2 cameras shouldn't have that impact. Are you stacking all the subs together, or stacking subs from each camera separately and then restacking the results? The latter is prone to degrading the image. But if you throw it all into the same stack and use an algorithm that weights by SNR and uses a good rejection technique, then it should improve the image.
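
    To show what I mean, here's a toy numpy sketch of weighted integration with sigma rejection (not what PixInsight actually does internally, and it assumes the subs are already registered and normalised):

    ```python
    import numpy as np

    def weighted_sigma_clip_stack(subs, weights, kappa=3.0):
        """Combine aligned subs from any mix of cameras into one stack.

        subs:    array of shape (n_subs, height, width)
        weights: one weight per sub (e.g. an SNR estimate); higher = more trust
        Pixels more than kappa sigma from the per-position median are rejected,
        then the survivors are averaged using the per-sub weights.
        """
        subs = np.asarray(subs, dtype=np.float64)
        w = np.asarray(weights, dtype=np.float64)[:, None, None]
        med = np.median(subs, axis=0)
        sigma = np.std(subs, axis=0)
        keep = np.abs(subs - med) <= kappa * sigma   # rejection mask
        wk = w * keep                                # zero weight for rejected pixels
        return (subs * wk).sum(axis=0) / np.clip(wk.sum(axis=0), 1e-12, None)
    ```

    The key point is that a noisier sub still contributes, just with a low weight; it doesn't drag the whole stack down the way re-stacking two unequal stacks can.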

  12. I'm not much more of an expert, but open both images. On the image you want to use as the base, create a new layer and make sure that layer is highlighted. Go to the other image and copy it. Then I think it's as simple as pasting it into the first image. I'm assuming both images are identical in scale, etc. If not, they need to be aligned first.

    • Thanks 1
  13. 2 hours ago, Nigel G said:

    I took 1 hour of 4 min, 1 hour of 6 min and 40 minutes of 8 min subs. Total 2h 40m, ISO 800, non-modified 1300D with CLS filter, 80ED, dithered, flats and bias.

    Lovely images. And 8 min exposures :) Another advantage of the no darks strategy...I’d have spent 4 more hours collecting darks before I could even process.

    • Thanks 1
  14. 3 hours ago, rotatux said:

    This one is not really a test any more, as I now know what to get out of this lens. I left it uncropped so you can appreciate the FoV given by this setup. Thanks for watching.

    Nice round stars from edge to edge so I guess the tracking is good :) You just need to tame the stars a little during processing to keep them smaller. Otherwise a great capture!

    • Like 1
  15. 25 minutes ago, profdann said:

    Yes, that's it - the azimuth tracking seems to be dead on all night, but the altitude has a huge error causing the object to drift more than half way out of frame over about 10 minutes before returning to the centre. It seems to be fairly periodic, but I don't think the SE mount allows that to be corrected. Pointing near to Polaris means that the object isn't moving very fast, so the error occurs over a longer time period, so the star trails are smaller. 

    That makes perfect sense. What a horrible combination to have to navigate: the periodic tracking issue pushes you towards exactly the areas of the sky that are worst for field rotation. I guess the one advantage you can take from lots of 5 second exposures is that they really help the S/N ratio :) And with so many subs, throwing away even 10% is not much of an issue.
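
    (The arithmetic behind that: stacking $N$ similar subs improves the signal-to-noise ratio by roughly $\sqrt{N}$, so

    ```latex
    \frac{\mathrm{SNR}_{0.9N}}{\mathrm{SNR}_{N}} = \sqrt{0.9} \approx 0.95
    ```

    i.e. binning 10% of the subs only costs about 5% of the final stack's SNR.)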

    • Like 1
  16. First, what a great Ring Nebula, and it looks like you caught quite a nice spiral galaxy to the right. IC1296 maybe? Dare I say it's a little noisy and you may need a few more subs :) Just kidding. You're now facing the same dilemma anyone taking short exposures faces: whether to archive all those subs so you can add more at a later date!

    6 hours ago, profdann said:

    I'm able to get 30s exposures when the object is close to Polaris, e.g. M81, but for anything else, only 5-10 seconds is all I can do!

    This surprised me. I always thought it was easier to push longer subs the closer you image to the horizon (though this is counterbalanced by atmospheric disturbance) and the closer you image to east and west. Imaging at the zenith is all but impossible, and imaging to the north or south is difficult other than at very short exposures. I found my ideal targets were usually crossing east or west between 30 and 60 degrees above the horizon. These allowed me to consistently achieve longer exposures.

    • Like 1
  17. 2 hours ago, Petehog72 said:

    Great thread. I'm just waiting to get a camera to start getting images like these. The images on here are very inspiring and give me the confidence of knowing I can get images like these with our set-up. We have a Celestron 8SE and I'm looking at getting a Canon 600D; budget is tight as my camera died after a water leak.

    As Nigel says, you'll find that imaging on a mount designed for visual and a scope with a long focal length is a ... shall we say painful experience. I've tried it and whilst I was pleasantly surprised that I got an image, it's an experience I would never repeat.

    Here's my take on M1 with a 9.25 SCT.

    It's worth noting I used a very fast/sensitive camera so the limited and short exposures sort of worked. I don't think it would have been possible with my Canon 60D. I now image with a 500mm focal length scope and it's a breeze in comparison. 

    Give it a go but be prepared to switch to a camera/lens combo :)

     

    • Like 4
  18. 36 minutes ago, Nigel G said:

    Great minds :) I found a 12nm clip-in filter for sale at £75, still deciding :) I have the funds but I'm still saving for the ZWO 1600MM, which has 3 narrowband filters. Do I borrow from the funds?

    Nige.

    Hold out. I think the ZWO filters are narrower. There's plenty of other things to do while you wait :)

    • Like 1
    • Thanks 1
  19. I also prefer the first image. There are noticeably more gradients in the second image, caused by the moon, which would be difficult to remove entirely. The first image also has a warmer feel and the stars look delicious.

    1 hour ago, Nigel G said:

    On the other hand, we get so few clear nights here that a moonlit night can't be wasted and is better than a cloudy night.

    I guess it's also down to target selection. With a bright moon, it may be better to stick to clusters, where you can attack the gradients a little harder; something you cannot do so well with nebulae or galaxies.

    • Like 1