
My take on the Iris


wimvb


Imaged over one night last weekend.

I got 254 L-subs and 80 each of RGB. In the end I could only use 163 L-subs and about 60 each of the RGB subs. The rest were affected by clouds.

L: 163 x 30 s, gain = 300

RGB: 3 x 60 x 45 s

Camera: ZWO ASI 174MM-Cool at -20 C

Telescope: SW 150PDS on AZ EQ6 mount, no guiding

Darks, flats and darkflats

Processed in PixInsight
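For reference, the integration time of the subs actually used works out to about 3.6 hours in total; a quick Python sketch of the arithmetic:

```python
# Integration time of the subs used in the stack
# (163 L-subs of 30 s, and ~60 subs of 45 s per RGB channel, as listed above).
l_total = 163 * 30          # seconds of luminance
rgb_total = 3 * 60 * 45     # seconds across R, G and B combined

total_hours = (l_total + rgb_total) / 3600
print(f"L: {l_total/3600:.2f} h, RGB: {rgb_total/3600:.2f} h, total: {total_hours:.2f} h")
# → L: 1.36 h, RGB: 2.25 h, total: 3.61 h
```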

[Attached image: ngc7023_rgb_Repaired_RGB.jpg]


23 minutes ago, DarkAntimatter said:

Striking image, thanks for posting.


13 minutes ago, gonzostar said:

Not bad Wim :) 

Great image 

Dean

Thanks guys.

I'm quite pleased with it, Dean. I've tried this target before, but never got more than a faint glimpse of the reflection nebula. I wasn't sure if the dark dust would come through from my light-polluted lawn, and I was quite surprised that the stacked image showed it so clearly. The processing wasn't too difficult: mainly stretching, LRGB combination, and some noise reduction.


21 minutes ago, gonzostar said:

Still a fantastic effort. Liking the very faint details. I am amazed at your total integration time. I am slowly getting my head turned to go the mono/LRGB route now.

How much longer would an equivalent image be with a DSLR?

That depends on the DSLR. The most modern DSLRs have the same sensors as CMOS astro cameras, such as those from ZWO or QHY. The main differences are that astro cameras are cooled, and that they also come as mono cameras. But you can compare with this image, taken with a DSLR, which has roughly the same integration time.

From my limited experience with LRGB imaging, I get the impression that it is less sensitive to light pollution than one-shot colour imaging (DSLR). LRGB imaging isn't more difficult than colour imaging either, but having an electronic filter wheel helps. Because I never know how long an imaging session can last before clouds roll in, I set my imaging sequence as 10xR, 10xG, 10xB, 20xL, and repeat that. This means that at the end of a session, I will have the data for a complete image.
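The repeating sequence described above can be sketched in a few lines of Python; each cycle yields a complete (if shallow) LRGB data set:

```python
# Sketch of the repeating filter sequence: 10xR, 10xG, 10xB, 20xL per cycle.
# If clouds end the session after any full cycle, a complete image is possible.
def build_sequence(cycles):
    block = ["R"] * 10 + ["G"] * 10 + ["B"] * 10 + ["L"] * 20
    return block * cycles

seq = build_sequence(2)
print(len(seq))                        # → 100 frames over two cycles
print(seq.count("L"), seq.count("R"))  # → 40 20
```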


Thanks for the link Wim, a very useful and inspiring image by Maurice; it shows what a dark site can achieve. I don't know if you have seen my latest pic of M45, but that was 3.5 hrs and a few goes with gradient exterminator! :)

How easy is it to focus your ZWO camera? I like the idea of setting a sequence for your imaging session. Which software do you use to do this?


It's hard to miss your M45; it's your Avatar. :icon_biggrin: But yes, I have seen the large version of it. It looks really good. I know you have been struggling with the large SCT. The ES must be a lot easier.

For focusing, I bought a SW DC motor focuser, which I connect through an Arduino (microcontroller board) to a Raspberry Pi that runs my control software. The small Raspberry Pi sits outside on my mount and is connected through wifi to my laptop. I do all of my setup and imaging (other than polar alignment) from my living room.

Here's the thread describing my power solution and control hardware. The large box holds the power supply. The smallest box holds the Raspberry Pi, and the one on the right holds the focuser controller (Arduino).

I use INDI Ekos/Kstars (http://indilib.org/). This software has an autofocus routine. The camera takes an image, and the software calculates the width of the stars. It then changes focus, takes a new image, recalculates, and so on. After a few iterations, the software finds the optimum focus (narrowest stars). I then slew to a target. The software takes another image, which it plate solves to find out exactly where the mount is pointing. It then corrects the mount's position, and I can start imaging.
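The autofocus idea can be illustrated with a toy Python sketch. This is only an illustration, not the actual Ekos code (Ekos fits a V-curve rather than this simple walk), and `measure_hfr` is a stand-in for real star-width measurement on a captured frame:

```python
# Toy autofocus loop: step the focuser, measure star width (HFR),
# and reverse with a smaller step once the width starts increasing.
def autofocus(measure_hfr, position, step=50, max_iters=20):
    best_pos, best_hfr = position, measure_hfr(position)
    for _ in range(max_iters):
        position += step
        hfr = measure_hfr(position)
        if hfr < best_hfr:
            best_pos, best_hfr = position, hfr
        else:
            # Passed the minimum: reverse direction and halve the step
            step = -step // 2
            if abs(step) < 1:
                break
    return best_pos

# Pretend the true focus is at position 1000 (parabolic HFR curve)
simulated = lambda p: 2.0 + ((p - 1000) / 200) ** 2
print(autofocus(simulated, 700))  # → 1000
```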


19 hours ago, gonzostar said:

Still a fantastic effort. Liking the very faint details. I am amazed at your total integration time. I am slowly getting my head turned to go the mono/LRGB route now.

How much longer would an equivalent image be with a DSLR?

Yes, that is a very pleasing image Wim, with a lot of nice dust, especially when imaged from a light-polluted site. The central star seems a bit overexposed - could you save it in processing, or do you need some shorter exposures? Also, in my efforts on this one I found more red signal.

Gonzostar wondered how this would compare to DSLR imaging. A DSLR works well if the outdoor temperature is low (to keep noise down) and there is not much light pollution. I have had a go at the Iris, and this is what it looked like after 5.2 hours from my quite dark rural site:

http://www.astrobin.com/304594/


2 hours ago, gonzostar said:

This looks fantastic. It must be rewarding making your own set-up and getting images like the one above.

I am not that technical, so it involves a lot of saving! Yes, the ES is a lot kinder to me than the SCT. I have processed another Avatar image :)

Yes, it does save me a lot of money, which (in my opinion) is better spent on optics. If only I could DIY a cloud-removal gadget ...

(I would patent it. :icon_biggrin:)


2 hours ago, gorann said:

Yes, that is a very pleasing image Wim, with a lot of nice dust, especially when imaged from a light-polluted site. The central star seems a bit overexposed - could you save it in processing, or do you need some shorter exposures? Also, in my efforts on this one I found more red signal.

Gonzostar wondered how this would compare to DSLR imaging. A DSLR works well if the outdoor temperature is low (to keep noise down) and there is not much light pollution. I have had a go at the Iris, and this is what it looked like after 5.2 hours from my quite dark rural site:

http://www.astrobin.com/304594/

After colour calibration, the dust was quite neutral, so I didn't push the red. I was very much aware of the lack of red, but I wanted to keep the processing to a minimum. Besides, depending on the target, dust in images is depicted as red, orange/beige, or 50+ shades of gray. I wonder if it actually is different, or whether we are trying to conform to some "standard" colour scheme per target. On Earth, fine dust (like cigarette smoke) has a blue colour, due to light scattering. Rarefied (hydrogen) gas can shine with a weak red colour (extended red emission, ERE), and only very thick clouds of dust or gas have a true colour, such as smoke from forest fires or chlorine gas.

On a more practical note, the standard ZWO RGB filters that I use aren't matched to any sensor; the bandwidth is the same for all colours. This is unlike the ASI 1600 optimised set, where the red filter curve is much wider than the green. ZWO designed it this way to get a constant "exposure flux" per filter. Since the QE for green is higher than for blue (slightly) and red (a lot), you would get different pixel values if you imaged a white light source with the standard filter set. The ASI 1600 filters are matched to the camera's QE curve, so when imaging a white light source, you get equal R, G, and B values. In other words, my red master is underexposed compared to the green and blue masters. But I correct for this before I combine the three images into one RGB image. Then I do colour calibration, so there shouldn't be a difference once this is done correctly.
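That per-channel correction can be sketched roughly like this. The numbers below are made up for illustration; in practice the scale factors come from the background and star statistics of each master (PixInsight's LinearFit process does this properly):

```python
import numpy as np

# Sketch of equalising the R, G, B masters before RGB combination:
# scale each channel so its median matches the reference (green) channel.
def equalize(channels, reference="G"):
    ref_median = np.median(channels[reference])
    return {name: img * (ref_median / np.median(img))
            for name, img in channels.items()}

rng = np.random.default_rng(0)
channels = {
    "R": rng.normal(0.8, 0.05, (4, 4)),   # underexposed red master (made-up data)
    "G": rng.normal(1.0, 0.05, (4, 4)),
    "B": rng.normal(0.95, 0.05, (4, 4)),
}
balanced = equalize(channels)
# After scaling, all three medians match the green master's median
print([round(float(np.median(v)), 3) for v in balanced.values()])
```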

