I'm trying to stack 240 light frames of the Milky Way taken from a quite light-polluted area.
I tried to stack them in DSS. The Milky Way seems to have been stacked okay, but the surrounding stars look like they've been deliberately dimmed or brushed over.
I thought there was something wrong with my frames, but when I used the same frames in Sequator, they stacked fine.
I've attached both images below, and I have stretched them a lot in Lightroom just to see how much detail I could pull out. The Sequator one seems to have a lot of weird light bands, but I think I can fix those with an adjustment brush. But look at how many stars Sequator is able to show compared to DSS.
DSS image below. The settings I've tried in DSS, none of which changed the result:
1) Tried both Standard and Mosaic modes
2) Tried both Sigma Clipping and Auto Adaptive Average (these two were recommended by DSS); there's a rough sketch of how sigma rejection behaves just after this list
3) Hot pixel detection and removal (tried it both enabled and disabled)
4) Nothing enabled in the cosmetics tab
5) Tried with and without Flat frames
6) Star Detection Threshold: tried a range from 50 stars to 300 stars, and even manually checked whether DSS was picking up any noise as stars (it wasn't)
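For reference, here's a rough sketch of what per-pixel kappa-sigma rejection does in general (illustrative Python, not DSS's actual code) and one way it can dim stars: with 240 untracked frames, small residual alignment errors mean a star's light lands on a given pixel in only some of the frames, and those bright samples then sit far from that pixel's mean and get rejected as outliers. The function name and parameter values are assumptions for illustration only.

import numpy as np

def kappa_sigma_stack(frames, kappa=2.0, iterations=3):
    # Per-pixel kappa-sigma clipped mean over aligned light frames.
    # frames: array of shape (n_frames, height, width), float.
    data = np.asarray(frames, dtype=np.float64)
    mask = np.ones(data.shape, dtype=bool)  # True = sample survives
    for _ in range(iterations):
        kept = np.where(mask, data, np.nan)
        mu = np.nanmean(kept, axis=0)    # per-pixel mean of survivors
        sigma = np.nanstd(kept, axis=0)  # per-pixel spread of survivors
        # A star that lands on this pixel in only a few of the frames
        # sits far from mu, fails this test, and is clipped (dimmed).
        mask &= np.abs(data - mu) <= kappa * sigma
    return np.nanmean(np.where(mask, data, np.nan), axis=0)

If that mechanism is what's happening, it might also fit the observation below that stacks of under 30 frames look better: with fewer frames the per-pixel statistics are noisier and the rejection is less decisive.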
If I try with fewer than 30 light frames, DSS does an okayish job and the stars in the rest of the image still look like stars, but then the Milky Way has no detail to pull out unless I stack a lot more shots.
Can anyone please tell me what I'm doing wrong in DSS that makes it do such a poor job on the surrounding stars? If you need me to upload some raw light frames, I will do so. Any advice and suggestions are welcome. Thank you.
Camera: Nikon D3100, with 18-55mm kit lens
Exposure Settings: F/3.5, ISO 3200, 15s x 240 frames, 18mm focal length, no tracker
50 Darks, 50 Bias and 50 Flat frames.
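A quick sanity check on those numbers (my arithmetic, not from the post): 240 × 15 s = 3600 s, i.e. 60 minutes of total integration, and 15 s is within the common "500 rule" limit for untracked shots at this focal length, 500 / (18 mm × 1.5 crop factor) ≈ 18.5 s on the D3100. So gross star trailing shouldn't be what's confusing DSS, though star placement still shifts slightly between frames.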
DSS version: 4.2.3
I'm not experienced with LRGB imaging, so I thought I'd give it a go on M81. However, when I combine the four individually processed integrations I end up with horrible colour hues across the image, even though they're all aligned and whatnot. Am I running into light pollution issues (I'm inside the M25) that I can only remove with aggressive DBE application? Individual files attached.
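For what it's worth, here's a generic sketch of what DBE-style gradient removal does under the hood (illustrative Python, not PixInsight's actual implementation): sample dim background pixels, fit a smooth low-order surface to them, and subtract it per channel. All names and parameters here are made up for illustration.

import numpy as np

def subtract_gradient(channel, order=2, n_samples=5000, seed=0):
    # channel: (h, w) float array for one colour channel.
    h, w = channel.shape
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, h, n_samples)
    xs = rng.integers(0, w, n_samples)
    vals = channel[ys, xs]
    # Keep only the dimmer half of the samples so stars and the
    # galaxy don't skew the background fit.
    keep = vals < np.percentile(vals, 50)
    ys, xs, vals = ys[keep], xs[keep], vals[keep]
    # Polynomial terms x^i * y^j with i + j <= order.
    def terms(x, y):
        x = x.astype(np.float64)
        y = y.astype(np.float64)
        return np.stack([x**i * y**j
                         for i in range(order + 1)
                         for j in range(order + 1 - i)], axis=-1)
    coeffs, *_ = np.linalg.lstsq(terms(xs, ys), vals, rcond=None)
    yy, xx = np.mgrid[0:h, 0:w]
    model = (terms(xx.ravel(), yy.ravel()) @ coeffs).reshape(h, w)
    return channel - model

Running something like that on each of the four channels before combining would at least tell you whether the hues are a smooth gradient (light pollution) or something else.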
I've got a five-year-old desktop which I think performs decently and handles my PixInsight processing pretty well. However, I want to switch to a laptop for the form factor, but I don't want to compromise on performance.
This is my rig: https://www.userbenchmark.com/UserRun/27054387
What machine do you use and how does it perform with PixInsight? Could anyone familiar with computer hardware give me a steer on whether I'm due an upgrade?
Looking to understand how others do their processing.
This is from a couple of nights ago.
It clouded over quickly, so I didn't get much data. The temperature was 2°C, and I'm self-isolating, so that was also a factor! I only took 5 darks before I gave up and went to bed. I'll take more when the temperature is right.
This is 140 minutes of integration on a new Canon EOS Ra, with 17 × 500 s exposures and no filter (from the city).
Processed in PI and Photoshop. I should have used more star masks, but I don't think the data are good enough to warrant the effort. I also had difficulty with flats: I tried a range of exposures with a Gerd Neumann panel, but I think they were all too short. I'll go longer than 0.3 s next time; it's very hard to figure out flat exposure on DSLRs, and APT's tool doesn't work for DSLRs yet.
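In case it helps with the flats problem, here's a small sketch (my own assumption, not anything from APT) of checking whether a DSLR flat is exposed far enough: load the raw Bayer data with the rawpy library and see where the mean signal sits relative to full scale. The filename and target fractions are illustrative; the usual rule of thumb is roughly a third to a half of saturation.

import rawpy
import numpy as np

# Hypothetical filename; CR3 support depends on your libraw version.
with rawpy.imread("flat_0001.cr3") as raw:
    bayer = raw.raw_image_visible.astype(np.float64)
    black = float(np.mean(raw.black_level_per_channel))
    # Fraction of full scale, after subtracting the black level.
    frac = (bayer.mean() - black) / (raw.white_level - black)

if frac < 0.2:
    print(f"Mean at {frac:.0%} of full scale: likely too short.")
elif frac > 0.7:
    print(f"Mean at {frac:.0%} of full scale: risk of clipping.")
else:
    print(f"Mean at {frac:.0%} of full scale: looks reasonable.")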
I think I'm obsessed with M51 - and I know I'll be back to it again.
Stay safe everyone,