Posts posted by ngc1535
-
On 15/01/2024 at 05:55, Ouroboros said:
Even so the star de-emphasised script appears a bit clunky from my current position of ignorance. I am a bit surprised that in these days of AI driven processes someone hasn’t come up with a script or process to make this a simple one-step tool ie leave the top 10, 20, 30% of stars (user set) untouched and de-emphasise the rest.
I appreciate you guys mentioning my method for star de-emphasis. Yes, I came up with it back in 2018... well before BXT.
In my most recent video (NGC 1491, Fundamentals) I leverage BXT to do this job. If you understand my original methodology... you can probably figure out what I did. However, being a member of my site will save you from guessing.
-adam
-
1 minute ago, tomato said:
Regarding the information to work on (Tomato): this is not a BXT-specific or special requirement... it has always been true for deconvolution of any kind.
-
11 hours ago, Lee_P said:
@ngc1535 I'd love to get your thoughts on why I'm not seeing much improvement using BlurX on the nebulosity in these stacks -- is it because I need a higher SNR, as was suggested earlier in this thread? Thanks in advance if you're able to help!
Correct. You do not have the S/N (in most areas) necessary to enjoy the benefits of deconvolution (of any type).
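As a toy illustration of why S/N matters for deconvolution (this is my own invented 1-D demo with made-up numbers, not how BXT or any real tool works internally), even a tiny amount of noise gets amplified catastrophically when you try to undo a blur naively:

```python
import numpy as np

# Toy 1-D demo: naive Fourier-domain deconvolution of a Gaussian blur.
# With noise-free data the original "star" profile is recovered almost
# exactly; add ~0.1% noise and the result explodes, which is why
# deconvolution of any kind needs good S/N.
rng = np.random.default_rng(0)
n = 256
x = np.arange(n)

signal = np.exp(-0.5 * ((x - 128) / 4.0) ** 2)   # a narrow "star"
psf = np.exp(-0.5 * ((x - 128) / 2.0) ** 2)      # the blur kernel
psf /= psf.sum()
psf = np.roll(psf, -128)                          # center the PSF at index 0

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))

def naive_deconvolve(data, psf):
    """Divide out the PSF in the Fourier domain (no regularization)."""
    return np.real(np.fft.ifft(np.fft.fft(data) / np.fft.fft(psf)))

clean_error = np.abs(naive_deconvolve(blurred, psf) - signal).max()
noisy = blurred + rng.normal(0.0, 1e-3, n)        # add a little noise
noisy_error = np.abs(naive_deconvolve(noisy, psf) - signal).max()
# clean_error is tiny; noisy_error is enormous
```

Real deconvolution algorithms regularize to tame exactly this noise amplification, but no amount of regularization recovers detail the S/N never supported.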
-adam
-
On 18/12/2022 at 09:48, wimvb said:
I think a 14" BXT processed image should beat a 6" APO BXT image, because what BXT does is take care of blurring by atmosphere, guiding, and focus issues. All that is left after that are telescope optics (read: diffraction limit or spot size), imaging scale, and SNR in the unprocessed masters (BXT is done before any magical noise reduction).
I don't think that a direct shoot out is fully valid. Differences in images of the same target are at least as much due to differences in processing style and processing decisions as they are to differences in equipment and data acquisition.
Case in point: Adam Block recently published an image of M51, processed with "xXT".
https://www.astrobin.com/3ehcpo/?nc=all
The addition of Ha, and different processing, makes this a completely different image from the one Olly linked to earlier in this discussion.
Just to clarify: the latest image I published on AstroBin (linked above) is my own effort. It is not the same as the M51 data in the video, which came from a friend; I used his data as an example in my tutorials. His was taken with a 16-inch telescope, while my latest version is with a telescope twice that size. I do think my latest version is pretty good, since it leverages many of today's tools and techniques. The original image I produced from this data dates to 2011.
-adam
-
On 16/12/2022 at 08:27, gorann said:
I saw the whole video, but maybe I got it wrong. However, near the end he demonstrates the artifact that it can produce, exemplified by the Crab nebula. What worries me then is that if you apply it early on and then much later realize that it has produced artifacts, it will be a bit frustrating. It suggests that you need to do two processes, one with and one without BXT.
Goran,
To clear up any doubt: in the video I operate on linear images that were created after RGB combination (having been DBE'd and SPCC-corrected).
Concerning the artifacts: all deconvolution algorithms produce them. The extra work you suggest is something you should have been doing all along if you use deconvolution of any sort, so this is nothing new and no more frustrating than before. There is a good argument that it is less frustrating: just compare traditional decon to BXT. BXT with modest settings minimizes a number of artifacts that traditional decon produces.
-adam
-
It might be because WBPP is now using Average Sigma Clipping for a frame count of 10... did you set the sigma thresholds for it? The defaults will not reject much, if anything, which is likely why your artifacts are showing up.
If you want to experiment, lower the Sigma Low from its default of 4 to 3 (or maybe even 2.5) and see what happens.
(Most people do not use Average Sigma Clipping...)
I will send a note to Roberto... in my opinion he perhaps should have used regular Sigma Clipping.
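For intuition, here is a minimal sketch of the sigma-clipping idea (my own illustrative code with invented names, not WBPP's implementation): samples that deviate from the per-pixel mean by more than the low/high thresholds are excluded from the average, so loose thresholds let outliers survive into the stack.

```python
import numpy as np

def sigma_clipped_average(stack, sigma_low=4.0, sigma_high=4.0):
    """Average a stack of frames, excluding samples that deviate from the
    per-pixel mean by more than sigma_low (below) or sigma_high (above)
    standard deviations. A single-pass sketch, not an iterative clipper."""
    stack = np.asarray(stack, dtype=float)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    std = np.where(std == 0, 1e-12, std)          # avoid division by zero
    deviation = (stack - mean) / std
    keep = (deviation > -sigma_low) & (deviation < sigma_high)
    counts = np.maximum(keep.sum(axis=0), 1)      # samples surviving per pixel
    return np.where(keep, stack, 0.0).sum(axis=0) / counts

# Ten flat frames with one bright outlier (e.g. a satellite-trail pixel):
frames = np.full((10, 4, 4), 100.0)
frames[0, 2, 2] = 500.0

loose = sigma_clipped_average(frames, 4.0, 4.0)   # thresholds of 4: outlier survives
tight = sigma_clipped_average(frames, 4.0, 2.5)   # tighter: outlier rejected
```

In this toy case the outlier sits at exactly 3 sigma, so the default threshold of 4 keeps it (the stacked pixel comes out bright) while 2.5 rejects it, which is the same effect as tightening the thresholds in WBPP.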
-adam
-
Not only am I releasing videos on WBPP... but I am making them public.
https://www.youtube.com/playlist?list=PLAzMa9eIVQkDnrwzRCDLYB3JGoJz7tlxF
There are still a few more sections to go... and the section I hope to release tomorrow is the most important one I think.
-Adam Block
-
9 hours ago, Northernlight said:
Hi Adam,
Just slowly working through your video's when time permits and the kids aren't constantly harassing me. I ordered a new high quality newt today, which will be ready in around 10 weeks - so it's motivation to keep working through the tutorials to get my PI skills in order.
For me, i've always found the automatic sigma method works very well.
I agree. I use Cosmetic Correction (auto) as an incremental step. Anything it does not take care of is likely to be handled during rejection in ImageIntegration.
-adam
-
9 hours ago, Northernlight said:
No I think That Adam is saying that we do the usual calibration then cosmetic correction can be performed in a number of ways:-
1) You can use a master dark with cosmetic correction as one method of removing the HotPixels
2) The other method is not to use a dark and instead use the auto detection option for removing the HotPixels, as that's what Adam does in his Cosmetic Correction videos as far as I remember when I looked at it last week, which allows you to use a single process icon for BPP/WBPP
Cheers,
Rich.
It is true that once you have calibrated data in hand, you can apply Cosmetic Correction. There are THREE different methods of using Cosmetic Correction. I use the Sigma method, which is relatively automatic and quite robust. You can be particular about it and determine hot pixels somewhat empirically by measuring one of the dark frames. (All of this is explained in my lessons.) I don't like method 1) above as much because it doesn't work in every case.
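The sigma idea can be sketched roughly like this (illustrative code only, all names invented; this is not how PixInsight's CosmeticCorrection is implemented): flag pixels that stand far above their local neighborhood and replace them with the neighborhood median.

```python
import numpy as np

def cosmetic_correct(img, sigma=3.0):
    """Replace pixels that exceed their 3x3 neighborhood median by more
    than `sigma` times the global scatter of the residuals."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the nine shifted views that make up each 3x3 neighborhood
    windows = np.stack([padded[r:r + h, c:c + w]
                        for r in range(3) for c in range(3)])
    local_median = np.median(windows, axis=0)
    residual = img - local_median
    hot = residual > sigma * residual.std()       # one-sided: hot pixels only
    out = img.copy()
    out[hot] = local_median[hot]                  # repair with local median
    return out, hot

# An 8x8 frame of flat sky at 100 ADU with a single hot pixel:
frame = np.full((8, 8), 100.0)
frame[3, 5] = 1000.0
fixed, hot_map = cosmetic_correct(frame, sigma=3.0)
```

The appeal of a sigma-style method is exactly what is described above: it adapts to the data itself rather than depending on a matching dark.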
Thanks!
-adam
-
10 hours ago, MarkAR said:
Hi Adam, if I understand you correctly then Cosmetic Calibration is applied to the calibrated frames (if necessary) before they are integrated.
Also wish to thank you for all the PI videos, they are very helpful.
Yes!
This is something of a pitfall of "automation" through scripts: it isn't always obvious what the order of operations is.
Great job on working through the videos and I am very glad they are helpful!
Sincerely,
Adam Block
-
Hi all,
There may be some confusion here. Cosmetic Correction is a kernel filter that is run on data after they are calibrated with biases, darks, and flats. Cosmetic Correction (basically a hot pixel filter, though it also has potential uses for column defects) is not connected to darks in the sense of calibration, scaling, etc. I do demonstrate that it is possible to scale darks, which has some benefits for maintaining a dark library for a cooled CCD camera, but it comes with pitfalls concerning hot pixels and occasional scaling errors through dark frame optimization. It is almost always better to subtract a dark of the same duration if you have a high-quality master in hand.
There is a subtle connection between dark frames and Cosmetic Correction, in that one of the methods of using Cosmetic Correction is to use a dark frame to generate a hot pixel map. But again, this isn't related to the calibration process.
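That dark-frame method can be sketched like so (a hypothetical illustration with invented names, not PixInsight's code): build a defect map from pixels that are statistical outliers in the master dark, then repair those same coordinates in each light frame.

```python
import numpy as np

def hot_pixel_map(master_dark, sigma=3.0):
    """Flag pixels whose dark value exceeds the median by more than
    `sigma` robust standard deviations (1.4826 * MAD)."""
    dark = np.asarray(master_dark, dtype=float)
    med = np.median(dark)
    mad = np.median(np.abs(dark - med))
    return dark > med + sigma * 1.4826 * max(mad, 1e-12)

def repair(light, defect_map):
    """Replace flagged pixels with the frame's median value (a crude
    stand-in for proper neighborhood interpolation)."""
    out = np.asarray(light, dtype=float).copy()
    out[defect_map] = np.median(out)
    return out

# Master dark: mostly ~10 ADU, with one pixel stuck at 200 ADU
dark = np.full((5, 5), 10.0)
dark[1, 1] = 200.0
defects = hot_pixel_map(dark)

light = np.full((5, 5), 50.0)
light[1, 1] = 999.0          # the same pixel misbehaves in the light frame
clean = repair(light, defects)
```

Note that the dark is used here only to locate defects; nothing is subtracted, which is the sense in which this is separate from calibration proper.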
Feel free to connect with me through my website at AdamBlockStudios.com if this particular issue is confusing.
Thanks,
Adam
-
Alan,
Great job on the use of the technique. I appreciate you taking it to heart. As I mentioned in my tutorials on this: even if you do not like the outcome of a combined processing approach (I used this "trick" as part of a larger workflow to de-emphasize stars), sometimes the individual small methods are useful in and of themselves.
-Adam
-
14 hours ago, alan4908 said:
Hi Vlaiv
Yes, the colour is definitely interesting.
Here's the image from Adam Block taken through a 20inch RC at the top of a mountain, I think he won an APOD for this so, I'm assuming that the colours are accurate.
I've used Registar to align with my own attempt and marked the location of the green blob. As you can see very green.
I would not assume that the colors are "accurate" just because an image is published as a NASA APOD. The astronomers who run it (Nemiroff and Bonnell) do not check for this, though they are as knowledgeable as anyone in our active community and would spot the weird stuff. I do not believe this image was published as an APOD... but I could be wrong. A *better* assumption is that I took great care in the fidelity of the details and color of the images I publish. 😁
Indeed, small green blobs that look like HII regions are uncommon, but not really rare. It is all too easy to "remove green" blindly (or, nowadays, SCNR it out) and miss some interesting astrophysical things! Examining the data closely and letting it be is a good skill (one you have shown with your image!).
Another good example of these green blob things is this image:
http://www.caelumobservatory.com/gallery/n6240.shtml
It happens to be in this galaxy that I discovered my own supernova... but it was not related to these blobs!
-Adam Block
Rosette nebula - SHO (in Imaging - Deep Sky)