
is this Chromatic aberration?


Dan13


6 minutes ago, vlaiv said:

I just realized that there is probably a more down-to-earth way to explain things. For example, the first part relates to how different things that blur the image add up.

Here is an intuitive way to understand it (and to try it out) - take any image and apply a Gaussian blur with sigma 1.4, for example. Then do another round of Gaussian blur with sigma 0.8. The resulting image will be the same as if you had done just one round of Gaussian blur with a sigma of not 2.2 (1.4 + 0.8) but rather ~1.61, which is sqrt(1.4^2 + 0.8^2). So blurs add in quadrature, and the three main blur types are seeing blur, guiding blur and aperture blur.
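The quadrature rule is easy to verify numerically. A minimal sketch with scipy on a random test image (the exact agreement depends a little on discretization and edge handling):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((128, 128))

# Blur twice: sigma 1.4 followed by sigma 0.8
twice = gaussian_filter(gaussian_filter(image, sigma=1.4), sigma=0.8)

# Blur once with the quadrature sum: sqrt(1.4^2 + 0.8^2) ~ 1.61
once = gaussian_filter(image, sigma=np.hypot(1.4, 0.8))

# The two results agree up to small discretization error,
# while a single blur of sigma 2.2 would be visibly softer
print(np.max(np.abs(twice - once)))
```

A single blur with sigma 2.2 gives a noticeably different (softer) result, which is the whole point: blurs do not add linearly.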

In any case, it is a bit complicated stuff, so the main point is - you can't really use a lower sampling rate than about 1.7"/px when using 80mm of aperture, and in most cases you should go for 2"/px - that is, if you don't want to oversample.

I honestly don't know. I understand the drizzle algorithm. I have my doubts whether it works at all and how well it works in the general case - but that needs further investigation. I have an improvement on the original algorithm that should work better, provided the original algorithm works in the first place :D (yes, I know, it's funny to improve on something that you don't think works in the first place).

However, I don't understand how drizzle is implemented in PixInsight - so I can't comment on that one. I think that the DSS implementation is straightforward, but interestingly enough, the original algorithm calls for two parameters, not one, so I don't know how ticking x2 translates to those two parameters.

The original algorithm asks for a resulting sampling rate and a pixel reduction factor. x2 is directly related to the resulting sampling rate: it will enlarge the image x2, hence it increases the sampling rate by a factor of x2. However, you don't need to reduce the original pixels by a factor of x2 - you can reduce them more or less, as per the original algorithm, and I have no idea what selecting x2 (or x3 in DSS) does with this second parameter.
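The two knobs of the original drizzle algorithm (output scale, and how much each input pixel is shrunk before its flux is "dropped" onto the finer grid) can be sketched in 1D. This is a simplified illustration under assumed conventions, not DSS's or PixInsight's actual implementation:

```python
import numpy as np

def drizzle_1d(signal, offset, scale=2, pixfrac=0.5):
    """Deposit one 1D exposure onto a finer output grid.

    scale   : output grid is `scale` times finer (the 'x2' setting)
    pixfrac : each input pixel is shrunk to this fraction of its width
              before its flux is dropped onto the output grid
    """
    n_out = len(signal) * scale
    flux = np.zeros(n_out)
    weight = np.zeros(n_out)
    out_w = 1.0 / scale                    # output pixel width, input units
    for i, value in enumerate(signal):
        # shrunken "drop" centred on the (dithered) input pixel centre
        centre = i + 0.5 + offset
        lo, hi = centre - pixfrac / 2, centre + pixfrac / 2
        j0, j1 = int(lo / out_w), int(np.ceil(hi / out_w))
        for j in range(max(j0, 0), min(j1, n_out)):
            # overlap of the drop with output pixel j
            ov = min(hi, (j + 1) * out_w) - max(lo, j * out_w)
            if ov > 0:
                frac = ov / pixfrac        # fraction of this drop's flux
                flux[j] += value * frac
                weight[j] += frac
    return flux, weight

signal = np.array([1.0, 3.0, 2.0, 5.0])
flux, weight = drizzle_1d(signal, offset=0.1)
# total flux is conserved: flux.sum() equals signal.sum()
```

In practice many dithered exposures are accumulated this way and the final image is the accumulated flux divided by the accumulated weight; the point here is just that `scale` and `pixfrac` are independent parameters.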

I think my 11am brain has taken in way too much info already today :) but I really appreciate the info and will definitely continue to read over this.

 

One last thing - you pointed out that one image I posted was at double the zoom of the other. Why are my final stacked images not 1:1? They tend to be 1:5 or more. I don't understand what this is doing or telling me - it seems the lower the ratio, the better Starnet works...


13 minutes ago, Dan13 said:

One last thing - you pointed out that one image I posted was at double the zoom of the other. Why are my final stacked images not 1:1? They tend to be 1:5 or more. I don't understand what this is doing or telling me - it seems the lower the ratio, the better Starnet works...

Ok, this is going to need a bit of explaining again, but this will be the easiest bit by far, so bear with me.

Here we are dealing with a "digital" phenomenon. Both the image and the display device are digital: the image is a collection of numbers, or pixels, and the display device has a certain number of display dots, or pixels. Both have a certain resolution - here the term resolution is used to denote the number of dots / pixel count, or "megapixels".

The easiest way to display an image on the screen is to assign each pixel of the image (a number value) to the corresponding pixel of the display device (a light intensity corresponding to that number value for each display dot).

This is the so-called 100% zoom level, or 1:1. The first of these two gives a "scaling factor" (we will talk about that in more detail shortly), while the other says the same thing as a ratio of two numbers: 1 divided by 1 gives 1, which is a whole value, or 100%.

The problem with this approach is that not everyone has the same resolution display, nor do images all come in the same "standard" resolution. Images vary in size (pixel count) and so do display devices. We now have 4K computer screens, while old mobile phones had resolutions like 800x600. That is quite a discrepancy between them.

For that reason we scale the image - we don't change it, it remains the same, but rather we change the mapping between it and the display screen. Most notable is the "fit to screen" scale mode. It will fit the whole image, however large, onto the pixels available on screen. Let's say you have a 1920 x 1080 computer monitor and a 5000x4000 image.

The image will be displayed at 27% of its original size in order to fit this screen. In other words, it will be displayed at a 1080 : 4000 ratio (it will use the 1080 pixels available on the screen to display the 4000 pixel rows of the image).
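The fit-to-screen factor is just the smaller of the width ratio and the height ratio. A quick sketch (the function name is made up for illustration):

```python
def fit_to_screen_scale(img_w, img_h, screen_w, screen_h):
    """Largest zoom that still fits the whole image on the screen."""
    return min(screen_w / img_w, screen_h / img_h)

# 5000x4000 image on a 1920x1080 monitor: height is the limiting side
scale = fit_to_screen_scale(5000, 4000, 1920, 1080)
print(f"{scale:.0%}")  # 27%, i.e. the 1080 : 4000 ratio
```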

This means the image is scaled down for display - again, the image itself is not changed, it is only displayed differently.

Numbers in the PI window title like 2:1 or 1:5 show the current display zoom in the same way. The first, 2:1, means the image is zoomed in x2, since 2 screen pixels are used to display a single image pixel. 1:5 means the image is displayed at 20% of its original size, since one screen pixel is used to display 5 image pixels (it won't display all 5 pixels at the same time - the software chooses which one of the 5 pixels will be shown, but to our eye it looks as it should).

Now we understand display scaling. Two of these "modes" of displaying the image are especially important - 1:1 and fit to screen.

Fit to screen shows the whole image at once, regardless of whether the image is larger or smaller than the screen - it will zoom in/out by just the right amount to display all of it. It is very useful for viewing the image as a whole, to see the composition of the image and the relation of the target to the FOV and such.

1:1 is also very important, as it shows the image at the "best level of detail" that the current display device can show. If you look at your screen with a single pixel turned on, you should be able to see it. Computer monitors are made so that you see the finest detail in the image when sitting at a normal distance from them.

We should make our images the same - we should optimally sample the image, and when we display such an image in 1:1 mode, it should look good and sharp. You can always recognize an oversampled image when looking at it at the 1:1 zoom level. Are the stars small and tight / pinpoint-like, or are they "balls of light"?

Now the final words related to this. Software does not see pixels as little squares or dots of light. Software considers pixels to be values with coordinates. In some sense, software always sees the image at 1:1 or 100% scale.

When you zoom in or out on an image while looking at it in PI, you are not changing the image itself. That will have no impact on how Starnet++ sees it, for example.

If you rescale your image in software, then you are actually changing it, and the rescaled image will appear different to Starnet++. Rescaling changes the pixel count of the image, while zooming in and out does not change anything.

Drizzle integration rescales your image - it makes it have more pixels (x2 or x3, depending on your settings) and this is why it looks larger: because it is larger. However, the detail in the image does not change (it is supposed to change - that is why the algorithm was developed in the first place - but even if it does change and resolution is restored, that happens only under very specific circumstances: the original image is undersampled, you dithered, and so on). It is the same as if you took your image and upscaled it by a factor of x2.
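One way to see that upscaling alone adds no detail: a star's width, measured in pixels, simply doubles. A sketch with a synthetic Gaussian "star" (the numbers are made up for illustration, not taken from either posted image):

```python
import numpy as np
from scipy.ndimage import zoom

# Synthetic star: 2D Gaussian with sigma = 2 px on a 64x64 frame
y, x = np.mgrid[0:64, 0:64]
star = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 2.0 ** 2))

# x2 upscale, like the larger canvas drizzle integration produces
big = zoom(star, 2, order=3)

def sigma_px(img):
    """Star width in pixels, via the image's second moment."""
    img = np.clip(img, 0, None)          # discard spline ringing
    yy, xx = np.indices(img.shape)
    total = img.sum()
    cx, cy = (xx * img).sum() / total, (yy * img).sum() / total
    var = (((xx - cx) ** 2 + (yy - cy) ** 2) * img).sum() / total
    return np.sqrt(var / 2)

print(sigma_px(star), sigma_px(big))  # ~2 px vs ~4 px
```

The upscaled star spans twice as many pixels at 1:1 display, yet carries exactly the same information, which is what trips up Starnet++.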

Stars now have more pixels across, and when we view that image at 1:1 or 100% (as software sees it), stars look bigger than in the original image. Starnet++ has probably been trained on properly sampled images with tight stars, and that is why it has problems when stars are many pixels across - it just can't tell a star from a nebula feature that spans many pixels (it expects stars to be just a few pixels across).

Hope all of this makes sense and helps?

 


41 minutes ago, vlaiv said:

Ok, this is going to need a bit of explaining again, but this will be the easiest bit by far, so bear with me. [...]

 

Thank you Vlaiv, it does make sense and you have been more than helpful, so it's greatly appreciated. Have a good week and clear skies :)
