
Posts posted by ollypenrice

  1. If you consider the whole of the word 'travel' rather than just the bit about the aeroplane, I wonder how travel-friendly an F4 Newt is likely to be? An instrument which is very sensitive to fine tuning is going to need fine tuning at your destination. How much of your holiday time will that take? How much imaging time will be consumed? Does anybody know? I don't, but I think it's a big risk and not one that I, personally, would want to take.

    Olly

  2. 7 minutes ago, TiffsAndAstro said:

    ty :) i have been wondering about how the Ha and the Oiii could both be included with RGB to create a final image. i guess the green channel of Oiii will look different to the blue channel of Oiii? but it seems target/nebula dependent?

    turns out i'm not keen on the Hubble palette and was wondering if there was a different way that i'd prefer. just a personal preference thing. cheers for the help again :)

    Tony, the OP, will have his own answer to this.

    In my case I create three Photoshop layers, having already added Ha to red:

    R/Ha   G/OIII   B

    R/Ha   G        B/OIII

    R/Ha   G        B

    I can then weight the relative OIII contribution to G and B very easily, seeing the result in real time, using the opacity slider. I can also weight the overall contribution of the OIII in G and B  to the HaRGB at this stage or I can do that later.
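
    For anyone working outside Photoshop, that opacity weighting is just a linear blend. A minimal numpy sketch of the idea (the array names and weights are placeholders, not my actual values):

    import numpy as np

    # Placeholder channels standing in for registered, 0-1 stretched data:
    # r_ha is red with Ha already blended in, oiii is the OIII channel.
    rng = np.random.default_rng(0)
    r_ha, g, b, oiii = (rng.random((100, 100)) for _ in range(4))

    def blend_opacity(base, overlay, opacity):
        # Same arithmetic as stacking 'overlay' on 'base' at a given layer opacity.
        return base * (1.0 - opacity) + overlay * opacity

    g_out = blend_opacity(g, oiii, 0.35)   # OIII contribution to green
    b_out = blend_opacity(b, oiii, 0.50)   # OIII contribution to blue

    ha_rgb = np.dstack([r_ha, g_out, b_out])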

    Olly

     

    • Like 2
  3. 1 minute ago, TiffsAndAstro said:

    noob question apologies, i know what the H and the first O in HOORGB are but what is the second O for?

    OIII applied to the blue channel.  The OIII line lies on the blue-green border, sometimes called 'Teal Blue.' On emission nebulae this will give a result very similar to RGB, so it's Ha-OIII-OIII.  It will differ from RGB, however, when there is reflection nebulosity which a blue filter will pass and which an OIII filter will mostly block.
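
    In array terms that mapping is simply Ha into red and OIII into both green and blue. A tiny illustrative sketch (channel names are placeholders):

    import numpy as np

    # Placeholder narrowband channels (real data would be registered, stretched floats).
    rng = np.random.default_rng(1)
    ha, oiii = rng.random((2, 100, 100))

    # Ha -> R, OIII -> G, OIII -> B: the 'teal' OIII lands on the blue-green border.
    hoo = np.dstack([ha, oiii, oiii])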

    Olly

    • Like 1
    • Thanks 1
  4. 3 hours ago, vlaiv said:

    That depends on what you want in an image.

    Do you want to look at it and say "Awww", or do you want to learn something from it?

    I personally favor information and accuracy of information conveyed by the image.

    By the way, probably the most effective use of imaging time is LRG imaging (out of the various LRGB, RGB and other approaches) - but no one seems to use it :D

     

    If there's nothing to learn from an image it won't make me say "Awwww."

    And I won't be drawn into your false dichotomy!!!

    :grin:lly

    • Like 1
  5. 2 hours ago, dan_adi said:

    It goes without saying, the better the SNR the better the image

    This isn't wrong, but may not point to the best use of time. It would be interesting to compare two images, one with half the RGB exposure and noise reduced in post processing, the other with twice the exposure and no NR. My suspicion is that you would be hard put to tell the difference. If you tried the same comparison in L, though, the difference would be obvious.

    I'm a pretty patient imager with several images topping 100 hours, but I like to put the time into the most effective captures.

    Olly

  6. Shooting for longer in RGB than L seems to me to be a very eccentric use of the LRGB system, the purpose of which is to shoot the most of what you really need.

    Yes, the RGB filters pass fewer photons than the L but that is the point: you don't actually need more RGB photons because the RGB layer can be processed for noise reduction with no perceptible lack of detail under the L layer. By shooting more L you have the opportunity to sharpen the very strong signal so 'more L' is a good trade-off when anticipating the processing requirements.

    Ultimately there is no need to shoot luminance at all. The perfect dataset might well come from an enormous amount of RGB - but we live in the real world.

    Let me pose a slightly impish question :grin: : When you assess the effectiveness of your mathematical approach, do you do so by assessing the quality of the resulting image?

    Olly

    • Like 1
  7. 6 minutes ago, dan_adi said:

    It seems simple but is actually a difficult question, not because of the math involved but because of the accuracy of the parameters that have to be used in the math. In order to get an accurate exposure time for a given SNR, the most important factor is the sky brightness.

    Many professional observatories have tables relating the sky counts to moon illumination in different filters. To further complicate things, sky brightness changes during the night and the proximity of the object being imaged to the moon is also important.

    I started making tables relating the sky rate to moon illumination, and it makes a substantial difference compared with just relying on a ballpark sky mag chart.

    From what I found, if you want to have equal SNR in all channels, you will have to expose more for RGB than L, contrary to what people are used to. A ratio of 1:3:3:3 is a good starting point (L:R:G:B).

     

    If someone is interested in how to get the sky counts from your images during a night session, the basic steps are simple:

    • Calibrate the subs using darks and bias (flat is optional)
    • Ideally mask the sources in your image (stars, galaxies)
    • Compute the median value of the background 
    • Optionally you can get sky magnitude from counts, but not really needed for anything since we already have the sky count rate
    • Do this for every frame acquired during the night, then take a median value (you will notice the sky rate changes during the night)

    I suppose the steps can be done in PixInsight, maybe involving PixelMath, but I work in Python.

    Here are the main steps in python:

    import numpy as np
    from photutils.background import Background2D, MedianBackground

    def calibrate_image(raw_frame, master_bias, master_dark):
        """
        Preprocess a raw image frame by applying bias and dark corrections.

        Parameters
        ----------
        raw_frame : array-like
            The raw image data.
        master_bias : array-like
            The master bias frame.
        master_dark : array-like
            The master dark frame.

        Returns
        -------
        corrected_frame : array-like
            The preprocessed image data.
        """
        # Bias correction
        bias_corrected = raw_frame - master_bias

        # Dark correction
        corrected_frame = bias_corrected - master_dark

        return corrected_frame


    def sky_brightness(sky_rate_electrons, gain, exposure_time, zero_point=27.095, image_scale=0.92):
        """Convert a sky rate in e-/pixel/s to a surface brightness in mag/arcsec^2."""
        # Back to ADU, then to total sky counts over the exposure
        sky_rate_adu = sky_rate_electrons / gain
        total_sky_counts_adu = sky_rate_adu * exposure_time
        # Instrumental magnitude via the zero point, then scale to per arcsec^2
        mag = -2.5 * np.log10(total_sky_counts_adu) + zero_point
        pixel_area_arcsec2 = image_scale ** 2
        surface_brightness = mag + 2.5 * np.log10(pixel_area_arcsec2)
        return surface_brightness


    def calculate_sky_electron_rate(corrected_frame, exposure_time, gain):
        """Estimate the median sky background rate in e-/pixel/s from a calibrated sub."""
        # Model the 2D background (sources ideally masked beforehand)
        bkg_estimator = MedianBackground()
        bkg = Background2D(corrected_frame, (50, 50), filter_size=(3, 3),
                           bkg_estimator=bkg_estimator)

        median_sky_level = bkg.background_median

        # Convert ADU to electrons, then to a per-second rate
        sky_electron_level = median_sky_level * gain
        sky_electron_rate = sky_electron_level / exposure_time

        return sky_electron_rate
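
    For illustration, the functions above might be strung together over a night's subs roughly like this (the file paths, gain and exposure time are placeholders for your own session):

    import glob
    from astropy.io import fits

    GAIN = 1.6          # e-/ADU, assumed for this example
    EXPOSURE = 300.0    # seconds, assumed for this example

    master_bias = fits.getdata('master_bias.fits')
    master_dark = fits.getdata('master_dark.fits')

    rates = []
    for path in sorted(glob.glob('lights/*.fits')):
        raw = fits.getdata(path).astype(float)
        calibrated = calibrate_image(raw, master_bias, master_dark)
        rates.append(calculate_sky_electron_rate(calibrated, EXPOSURE, GAIN))

    # Median sky rate over the session, then an optional surface brightness;
    # the zero point and image scale defaults above are system-specific.
    median_rate = np.median(rates)
    print('median sky rate:', median_rate, 'e-/pixel/s')
    print('sky brightness:', sky_brightness(median_rate, GAIN, EXPOSURE), 'mag/arcsec^2')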

     

     

    I think that, in this analysis, you are looking at raw data and not at data as it will be, or can be, processed for a final image. My capture procedure is based on what I will do with the data in processing, not on its relationship with the raw signal coming from the sky.

    I do not want equal SNR in all channels. If my data were to remain unprocessed then, yes, I might want that - but my data will be processed. If I expose for faint tidal tails, rarely seen, in luminance, then my stars in luminance will be over-exposed. This is not a problem because I will not use luminance on my stars; I will use RGB only for stars. If galaxy cores are, in the same way, over-exposed in luminance, I won't use them; I will use the RGB-only cores.

    One of my fundamental principles in imaging is to ask myself, 'What am I going to do with this layer?' The answer to this question determines how I will shoot the layer.

    Olly

    • Like 2
  8. It might also be worth knowing this Photoshop trick:

    Copy layer. Change blend mode to darken.

    Top layer active.

    Go to Filter - Other - Offset. By nudging one layer relative to another you can make stars look much rounder. The main tool only works in increments of a full pixel but if you go to Edit-Fade you can reduce the nudge to less than a pixel.
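
    Outside Photoshop, the same darken-plus-offset idea can be sketched in a few lines of Python (the shift amounts are just examples, not a recommendation):

    import numpy as np
    from scipy.ndimage import shift

    # Placeholder image standing in for a stretched, 0-1 channel with elongated stars.
    rng = np.random.default_rng(2)
    image = rng.random((200, 200))

    # Nudge a copy by a sub-pixel amount (dy, dx), then keep the darker of the two
    # pixels - the equivalent of Offset plus a 'darken' blend mode.
    nudged = shift(image, (0.4, -0.3), order=1, mode='nearest')
    rounder_stars = np.minimum(image, nudged)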

    Olly

    • Like 1
  9. 13 minutes ago, saac said:

    I like the idea of a course on the eyeball - sign me up too.  I must admit whenever I go for my annual check up at the optometrist I'm like a kid in a candy store asking questions about all the tests and all the equipment. It really is fascinating, as is anything to do with the human body, a remarkable machine. 

    Jim 

    A local Brit, also called Jim, is a retired eye surgeon (and pilot, and aeroplane builder, and motorcyclist, and model engineer...) but talking to him can leave you exhausted in less than five minutes!  Great guy, though.

    Olly

    • Like 2
  10. 15 minutes ago, Xilman said:

    My eyes point in different directions and my spectacles include prisms to bring them into alignment. With binoculars I see two images if both optical paths are clear, so I keep one eyepiece cap in place when using them. This has the advantage that I can swap eyepieces when one becomes dewed up from proximity to a warm wet eyeball.

    A friend had this after a fall fractured his cheekbone and, after more than a year of misery, the introduction of prismatic glasses saw him comfortable again.

    Olly

  11. 9 minutes ago, Marvin Jenkins said:

    Thank you for this, but my confusion is that spectacles, in my mind, are for wearing for reading, driving, and even more distant vision correction.

    I am still wondering why specs are needed at the EP when the Focuser is doing the work?

    Marv

    Astigmatism is not corrected by the focuser, so if that's what the specs are for they remain necessary.

    Binoviewers and beginner observers?  Nooo!!!  What a nightmare. You have both the distance between the EPs and twice as many elements to adjust, then you have the location of the eyes relative to the oculars to deal with.

    Keep it simple.

    Olly

    Edit: Children in particular, but also some adults, are not sure which eye they are closing. This may seem crazy but the human brain's 'sidedness' leads to many paradoxical complications and I think the best solution is to ask them to cover the unused eye.

    • Like 1
  12. On 31/05/2024 at 16:51, Scott Badger said:

     

    - Flats for each filter are necessary. Filters are where dust is most likely and using just the Lum flat for the other filters won't take care of most dust motes.

     

    I have many hundreds of hours exposure time which contradict this view and a number of experienced imagers who have worked from my place can add still more exposure time to mine. The filters are not, in my experience, the primary source of dust bunnies. The worst ones come from closer to the chip, the chip window being the usual culprit. The objective is way too far out of focus to add any. Of course, if your filters add dust bunnies then they do, and you need flats per filter. However, I think that literally none of my images here on AB has flats per filter.

    https://www.astrobin.com/users/ollypenrice/

    Later ones are OSC but the earlier ones are from mono using just a luminance flat.

    Olly

     

    • Like 4
  13. Why not just bolt the finder onto an aluminium strip and the aluminium strip onto the top of the imaging scope tube rings?

    Or something like that. All this hardware is very pretty and very expensive but, dammit, a guide scope needs to be mounted reasonably stiffly so it points in about the same direction as the imaging scope. That's the beginning and end of the problem and most people have enough scrap metal in their workshops/sheds/garages to make this happen at zero expense.

    Olly

    • Like 4
  14. 20 minutes ago, Marvin Jenkins said:

    More power to you with the outreach. I have tried it at a limited level and found the response baffling.

    It is baffling, sometimes. Someone apparently happy with what they have seen will hand you back the scope so far out of focus that it can only have given them pure visual mush.

    What we underestimate is that some people don't take naturally to looking through optical aid, just as some people don't take naturally to driving a car or changing gear on a bicycle. I stopped waiting for night with beginner guests and got them used to looking at a distant electricity board sign halfway up a nearby hillside. The text was readable if you were in focus. If not, it wasn't. This was a simple self-test for the guests and a simple third-person test for me.

    At astro outreach events it is simply not viable to hand an instrument over to a new observer in the expectation that they will see what is there. Some will and some won't. This seems unreasonable to us but that's how it is.

    Olly

    • Like 1
  15. I run an astronomy guest house (or now an astronomy flat) and, though we are mostly photographic these days, I once did a lot of visual observing with guests using a shared instrument. The focal point varies enormously from person to person.  If I had a guest who didn't want to refocus from my own critical focus position I was instantly suspicious that they were tolerating an imperfect focus and asked them to follow a procedure to verify their focal position. It is exceptional to share a critical focus position with another person.

    If observing with a beginner it is best to focus on a field of only stars. If the field contains nebulosity it distracts and deceives the beginner. Ask them simply to make the stars as small as possible, emphasizing the need to make tiny adjustments.

    Olly

    • Like 2
    • Thanks 1