
Posts posted by ollypenrice

  1. 11 hours ago, Dan_Paris said:

     

    I don't make an unsubstantiated claim as you seem to suggest but share my experience of several years of galaxy imaging. The resolution increase when I swap a camera with 1"/pix to 0.66"/pix was just plain obvious. And none of the serious imagers that I know personally shares your point of view. 

     

    I don't know if I'm a serious imager or not, but I do share Vlaiv's point of view and have found no significant difference in resolution between 0.6 and 0.9"PP. In the end I went for the 0.9 option (TEC 140/Atik 460 mono). Indeed, I wrote a feature article saying as much for the British magazine Astronomy Now. I was half expecting a good deal of comeback from it, but there was none and, I must say, many of my imaging guests do feel as I do. An arch enthusiast of all things technological, the late Per Frejvall was persuaded by my TEC 140 and bought one himself. It's still here in other hands.

    If you do find more real resolution from 0.6"PP then you do. I looked at my data carefully and concluded that I didn't. We can both be right.

    On 01/09/2023 at 21:08, OK Apricot said:

     

    I believe this is going to end up oversampled at 0.55"/px, but what's the deal here? Guiding on my EQ6-R is rarely  above that on the worst nights, so tracking errors shouldn't be an issue. Seeing? What about my sub frames - what will they look like? What's inherently bad about oversampling? 

    Careful! I don't think anyone's picked up on this but, to realize the resolution of 0.5"PP, your guiding RMS needs to be half that. (This is a rule of thumb, but good enough for government work...) You are very unlikely to reach a guide RMS of 0.25". I can get about 0.33" out of my Mesu 200.

    Olly

     

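The rule of thumb above can be put as a one-liner. The 0.5 factor is the heuristic from the post ("good enough for government work"), not a derived constant:

```python
# Rule of thumb: to realise a given image scale, total guiding RMS
# should be no more than about half the pixel scale. The 0.5 factor
# is the post's heuristic, not a derived constant.

def max_guide_rms(pixel_scale_arcsec: float, factor: float = 0.5) -> float:
    """Rough maximum guiding RMS (arcsec) needed to realise a pixel scale."""
    return pixel_scale_arcsec * factor

print(max_guide_rms(0.5))   # 0.25" RMS needed for 0.5"/px - very hard to reach
print(max_guide_rms(0.9))   # 0.45" RMS - far more achievable
```

By this measure, a mount guiding at 0.33" RMS supports roughly 0.66"PP, which is why the sub-0.3" requirement for 0.5"PP is so demanding.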
  2. 11 hours ago, Elp said:

    Cage rattled (225hrs)?

    www.galactic-hunter.com/amp/m51-the-whirlpool-galaxy

     

    It's a really great image, we can agree. Now let's ask what's great about it. The resolution of small scale detail? No, that is unremarkable. Plenty of M51s have that resolution, or even better, but it is representative of what I regard as roughly 'what you'll get' out of an amateur system, seeing-limited.

    What is remarkable, very remarkable, is the depth of signal on the faint stuff. The outer halo shows modelling and structure which I have never seen before and the Ha feature just beside the pair is also new to me.  These don't require high resolution.

    Indeed, this image supports the thrust of my argument perfectly: as amateurs we can bang our heads on a wall in search of scarcely perceptible improvements in resolution or we can go for what matters, what will really allow us to show something new, and go for depth of signal.  The headline of this image is 225 hours. Exactly, and it shows. It doesn't show in resolution, it shows in depth.

    Space telescopes are not primarily, or even significantly, created for making pictures. Their use is primarily spectroscopic. Amateur scopes are for whatever the amateur wants to do with them. Hubble and James Webb could not/cannot take widefield images but amateurs can. There are PNs out there still being discovered by amateurs, but new insights into the night sky in the amateur domain come, it seems to me, from images which combine depth with breadth of field.

    Olly

    Edit: Ironically you have brought Adrian and me into agreement! :grin::grin:

  3. 44 minutes ago, CCD Imager said:

    Wrong!

    You hosted mine and it's been perfect for 8 years :)

    :grin: I meant 'host permanently.' One of the four I host permanently has never gone wrong and one I host permanently has been back twice. I didn't say that every 10M I host permanently has been back... Naturally, I'm delighted that yours has been so good but, on my sample, I would have to stay with Mesu myself.

    Olly

  4. 8 hours ago, CCD Imager said:

    With your sampling at 0.9 arc sec/pixel, even given Nyquist at 2x sampling, you would not achieve resolution better than 1.8 arc secs.  That is quite a lot worse than 1.4 arc secs and would be visually noticeable. I think you have answered your own question with your belief that higher S/N is more gratifying. To me the two most important factors are BOTH S/N and resolution, I treat them with equal respect.

    Well, as I said earlier, I've shot the same targets at 0.6"PP and at 0.9"PP, the former with a far higher theoretical resolution (350mm aperture versus 140mm) but I could find no significant or consistent difference in real detail captured. I think we are, quite simply, seeing-limited and that shooting at less than an arcsecond per pixel is a waste of time.

    Olly
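For anyone wanting to check the arithmetic behind these numbers: image scale follows from pixel size and focal length, and on the "2x Nyquist" reading the finest recordable detail is about twice the pixel scale, or the seeing FWHM if that is larger. A sketch, with illustrative values (4.54 µm pixels on a 980 mm focal length, roughly the Atik 460/TEC 140 pairing mentioned earlier):

```python
# Pixel scale in arcsec/pixel from pixel size (um) and focal length (mm),
# using the standard 206.265 constant (arcsec per radian / 1000).
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    return 206.265 * pixel_um / focal_mm

# On the "2x Nyquist" reading quoted above, the finest stellar FWHM you
# can record is about twice the pixel scale - unless seeing is worse,
# in which case seeing wins.
def recorded_fwhm(scale_arcsec: float, seeing_fwhm: float) -> float:
    return max(2 * scale_arcsec, seeing_fwhm)

scale = pixel_scale(4.54, 980)         # ~0.96 "/px (illustrative gear values)
print(round(scale, 2))                 # 0.96
print(recorded_fwhm(0.9, 2.0))         # 2.0 - seeing-limited at 0.9"PP
print(recorded_fwhm(0.6, 2.0))         # 2.0 - still seeing-limited at 0.6"PP
```

With 2" seeing the sampling-limited figures (1.8" vs 1.2") are both swamped by the atmosphere, which is the substance of the disagreement.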

  5. 19 hours ago, CCD Imager said:

    Here is the single sub image taken through a blue filter. Stars in the center measure around 1.1 arc secs and periphery around 1.2 arc secs. BUT, a line plot thru stars gives insufficient points to adequately measure, i.e. under-sampled

    M51b sub.jpg

    Well, this is one definition of undersampling, but it isn't a definition that's going to get me worked up into a tizzy. This image looks, to me, very much like images of M51 which I've shot at 0.6"PP and 0.9"PP. I remember very clearly Vlaiv doing the same test on my data that he performed on yours, and I was convinced.

    You have captured what most competent imagers will capture at non-Atacama/mountaintop observing sites. You are not, in any meaningful sense, undersampled. That's to say, if you reduced your arcsecs per pixel you would gain no new detail that anyone would be able to see. That's my definition of over/under sampling. Definitions which don't involve what you can see are, for me, so much waffle. While all this chatter is going on, what really matters in amateur astrophotography is being missed, and that is going deeper. What I've discovered since moving to super-fast systems is that going deeper is a darned sight more interesting than trying to go for more resolution. It's not just that you go deeper: you also gain more control over dynamic range in processing.

    Olly

  6. Just now, CCD Imager said:

    Lol Olly
    apologies about the spelling, I forgot you were a teacher!
    The image in question is a single sub and the final image had a FWHM of 1.4 arc secs. I don't think you would be enamoured with a single sub?

    Adrian

    I'd be delighted to look at a single sub, Adrian. Most of us can see where a single sub will go if backed up by 80 more of the same.

    But why was it a single sub? If you failed to achieve this resolution on the rest, what does that say about your assertion that anything less than 0.5"PP is under sampled? I spent two years imaging at a little over 0.6"PP and was never, ever, able to present an image at full size. Unfortunately the camera in use refused to bin properly so we had to resample downwards in software. I then started shooting the same kind of targets, and sometimes the same targets, at 0.9"PP and found no consistent difference.

    Olly

  7. 4 minutes ago, CCD Imager said:

    Because the Nyquest theorem was applicable to continuous audio signal and astronomical measurements are 3D and pixels are rectangular not circular

    I couldn't care less about the Nyquest theorem (or the Nyquist :grin:), I want to see these images which were undersampled at 0.5"PP!

    1 minute ago, CCD Imager said:

    There was a discussion on CN about an image I took with 1.2 arc sec FWHM. The debate centered on insufficient pixels to properly sample in addition to insufficient aperture! I used the SW Esprit 150. However, I have regularly captured images around the 1.5 arc sec FWHM.

    And I couldn't care less about discussions on CN or about FWHM, neither of which I can see when I look at an astrophoto. I want to see an actual image which was undersampled at 0.5"PP because it will be the most detailed image of the object in question that I have ever seen from an amateur system.

    Post the image and win the argument!

    Olly

  8. Just now, CCD Imager said:

    You havent heard of Nyquest sampling?
    I have used a sampling rate of 0.47 arc sec/pixel and been under sampled.

    This is a deep sky imaging thread, so not connected with 'lucky imaging.'

    If you have been under-sampled at 0.47"PP you have some astonishingly detailed images to show us and I'll look at them with interest - not to say astonishment. In the absence of such images I'll have to put this claim down to a triumph of theory over practice.

    Olly

  9. 9 hours ago, ONIKKINEN said:

    Here is how I do it:

    First I have stretched the image in Siril to a point where I think the stars are close to where I want them to be, but you could do it in any software of course (I do it in Siril because I do preprocessing there anyway).

    1. Open the image in Photoshop and duplicate it twice to create 2 extra layers.
    2. Run StarXterminator on the top layer.
    3. Duplicate the starless layer once the process has completed and then hide the top starless layer.
    4. Put the other, unhidden, starless layer into blend mode "subtract" and merge down.

    Now I have a starless layer, a stars-only layer, and the original image as background, mostly for comparison purposes.

    Once I have processed both layers the way I like, I combine the starless and stars-only layers using the blend mode "linear dodge (add)" and merge them. In my opinion, simply adding the star layer back instead of screening the starless layer in front of it yields a more natural-looking end result. With the blend mode set to screen you can end up with transparent-looking stars, or stars that are hidden behind nebulosity. Entirely a matter of taste of course, and I think I may have the minority opinion here in preferring linear dodge over screening.

    I have created a GIF of the difference between screening nebulosity in front of stars, and simply adding them:

    lineardodgetext_pipp.gif.39038fa0e5ec9ebd6b81e583f9745778.gif

    The difference is barely noticeable outside nebulosity, but very easy to see for stars that are in front of nebulosity and appear to be muted with the blend mode set to screen.

    Interesting. I've encountered the problem of stars not showing well against nebulosity and tackled it by using Colour Select to select the nebulosity and giving that selection a slightly boosted stretch on the star layer. I'll try your method.

    Olly
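The arithmetic behind the two recombination modes discussed above can be sketched with layers as floats in [0, 1] (a simplification of what Photoshop does, but the relationships hold; the sample pixel values are illustrative):

```python
import numpy as np

# Three illustrative pixels: plain sky, nebulosity, and a star sitting
# on nebulosity.
original = np.array([0.05, 0.40, 0.90])
starless = np.array([0.05, 0.40, 0.50])   # after star removal

# Blend mode "subtract": the stars-only layer.
stars = np.clip(original - starless, 0.0, 1.0)

# Blend mode "linear dodge (add)": starless + stars.
linear_dodge = np.clip(starless + stars, 0.0, 1.0)

# Blend mode "screen": 1 - (1 - a)(1 - b).
screen = 1 - (1 - starless) * (1 - stars)

print(np.allclose(linear_dodge, original))  # True: adding restores the original exactly
print(round(float(screen[2]), 2))           # 0.7: the star over nebulosity comes back dimmer
```

This is why screen-recombined stars can look muted or transparent where they sit on nebulosity, while linear dodge reproduces the pre-separation image exactly.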

  10. This is my workflow. Do create a Photoshop Action as suggested below.

    StarXterminator workflow in Photoshop.

    1 Stretch standard image as usual to about 50% of final stretch. Simple stretch, nothing detailed. Save as Proc 1.

    2 Run StarXterminator and save the starless image as Starless.

    3 Continue to process Starless. Cosmetic repair of artifacts, harder stretch, contrast enhancement, noise reduction, sharpening, colour etc. Save. Select and copy.

    4 Use Open Recent to re-open Proc 1 and paste Starless as a top layer.

    5 Invert both layers.

    6 Top layer active, change blend mode to divide.

    7 Stamp down. (Alt Ctrl E)

    8 Top layer active, Invert.

    9 Flatten Image. (I seem to have to do this from the toolbar Layers dropdown because CtrlE doesn’t work.)

    10 Save as Stars.

    11 Select Copy

    12 Paste onto Starless.

    13 Blend mode to Screen.

     

    Actions 5 to 9 inclusive can be recorded as a single action.

    The stars can be reduced simply by lowering the mid point in Levels. Small stars can look too hard and can benefit from the simple contrast tool to reduce contrast. Large stars with halo or bloat benefit from an increase in contrast. Other possibilities include Gaussian blur or a reduction in top layer opacity by a tiny amount. Stars which look ‘stuck on’ over nebulosity can be made to settle into the image by means of a dab with the burn tool on the bottom layer, just underneath them.
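Steps 5 to 9 amount to "unscreening": the invert-divide-invert sequence recovers a star layer which, when screened back over the starless image in step 13, reproduces the half-stretched original. A minimal sketch of that arithmetic (not Photoshop itself; layers as floats in [0, 1], with eps guarding saturated pixels):

```python
import numpy as np

def extract_stars(original, starless, eps=1e-6):
    """Steps 5-8: invert both layers, divide (base/blend), invert.
    Returns a star layer t with screen(starless, t) == original."""
    return 1 - (1 - original) / np.maximum(1 - starless, eps)

def screen(base, blend):
    """Blend mode "screen" used in step 13."""
    return 1 - (1 - base) * (1 - blend)

# Illustrative pixels: sky, nebulosity, star on nebulosity.
original = np.array([0.05, 0.40, 0.90])
starless = np.array([0.05, 0.40, 0.50])

stars = extract_stars(original, starless)
print(np.allclose(screen(starless, stars), original))  # True
```

Unlike the simple subtract in the previous post, this construction is tuned to screen recombination: screening the extracted layer back gives the original exactly, rather than dimming stars over nebulosity.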

     

    Olly

    Edit: I posted the above without realising it was you! How goes it?

  11. 45 minutes ago, aleixandrus said:

    I had acquired some data to start building the large mosaic I mentioned a few posts ago. I focused on O3 to take advantage of the past moonless nights, but I also added some Ha and S2 to one particular panel. This way I can build the mosaic while having some data to fully process individual tiles and show some progress.

     

    Well, as this is my first image, it deserves to be shown. The Crescent Nebula region, ~8h with the Samyang 135mm @ f2.8 + ASI183MM Pro 300s + SHO 6.5nm Baader filters. This is also my first "properly processed" SHO image, that is, using PixInsight and the "correct" steps and a repeatable workflow. I know I have a loooong way to go but it is a first step. For instance, I have some issues with star processing I must deal with (note the dark halos around many of the stars), but I think the nebula is reasonably accomplished (to my untrained eye). Please feel free to C&C and point out any aspects that could be improved!

    cygnusp6-test-04_v1.thumb.jpg.ea27ab37a498f948a7fececb354d6ea1.jpg

    Splendid. Well done.

    Olly

  12. 1 hour ago, Dan_Paris said:

    Quite well actually, significantly better than Starnet v2 which leaves behind some faint spike remnants.

    An example with StarX :

    image.thumb.png.6f1b978a8278005ce408bed30d43ceab.png

     

    Btw refractors are not free from diffraction artefacts; I've seen many FSQ106 images with diffraction patterns from lens spacers around bright stars...

     

    Very true.

    Olly
