Everything posted by wimvb

  1. How would you manage without a 3D printer? 😀
  2. Has Windows assigned a port to the camera? Unplug the camera, check the COM ports in the Windows Control Panel, then plug the camera back in and check again. The camera's port should now be visible. If it isn't, the problem is with Windows, probably the driver. (Linux is so much easier. Sometimes. 😉) If you have Python on the machine, the short sketch below will list the ports for you.
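     A minimal sketch, assuming Python with the pyserial package installed (pip install pyserial); the calls are real pyserial, the workflow is just the unplug/replug check above:

       # List every COM port Windows currently knows about.
       # Run once with the camera unplugged, once with it plugged in;
       # the entry that appears is the camera's port.
       from serial.tools import list_ports

       for port in list_ports.comports():
           print(port.device, "-", port.description)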
  3. Have you tried the star cross test in PHD2? https://openphdguiding.org/man-dev/Tools.htm Also, what is your guide speed setting? Try setting it to 0.5 x sidereal. To make diagnostics easier, align RA with one edge of the sensor. This is easy to do with the hand controller: set the slew rate to sidereal (speed = x 1), take a 30 s exposure (no guiding), and after 5 seconds start slewing in RA+. This creates a star trail, which needs to line up with the sensor edge. After the exposure, rotate the camera and repeat until the trail is aligned. Having the camera aligned makes it a lot easier to interpret star trails while troubleshooting.
  4. Beautiful image. Starless images don't always work, but you can use the module to control the stars: make one copy starless, then add back the stars from another copy that was stretched less. (See the blend sketch below for one way to do the recombination.)
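     A minimal NumPy sketch of one possible recombination (a screen blend; the array names are placeholders, and in practice you'd load your starless image and the star layer pulled from the less-stretched copy):

       import numpy as np

       # Stand-ins for the real images, as float arrays scaled 0..1.
       starless = np.zeros((100, 100))
       stars = np.zeros((100, 100))

       # Screen blend: keeps the starless background and lays the star
       # layer on top without hard clipping.
       combined = 1.0 - (1.0 - starless) * (1.0 - stars)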
  5. Yes. A program only needs to know your effective focal length, not how that focal length is reached. Only if the program asks about reducers or Barlows will you need to enter the native focal length and the reduction factor; the program will then do the math, as in the sketch below.
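     A trivial sketch of that math (the numbers here are made up):

       # Effective focal length with a reducer or Barlow.
       native_fl = 1000.0   # mm, hypothetical native focal length
       factor = 0.8         # 0.8x reducer; a 2x Barlow would be 2.0
       effective_fl = native_fl * factor
       print(effective_fl)  # 800.0 mm -- what the software works with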
  6. I rarely do narrowband, and then only Ha. You should probably check out AstroBin for narrowband images taken with a setup similar to yours. In general, narrowband uses a high gain of 200 - 300 and exposures long enough to keep the background over the read noise "floor" (see the sketch below for what that means in numbers). Just remember that at high gain you have a low dynamic range, and you'll need more exposures to compensate for that.
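     A rough sketch of the "background over the read noise floor" rule; every number here is an assumption, so measure your own camera and sky:

       # Minimum sub length so that sky background dominates read noise.
       read_noise = 1.8   # e- RMS at high gain, typical modern CMOS
       sky_rate = 0.15    # e-/pixel/s background through a narrowband filter
       swamp = 10         # aim for background ~10x the read-noise variance

       min_exposure = swamp * read_noise**2 / sky_rate
       print(f"Expose at least {min_exposure:.0f} s per sub")  # ~216 s here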
  7. I wish. Work gets in the way, and darkness isn't quite that long. Clear skies.
  8. That's how I would do it. There is a chart on this forum somewhere that shows the direction of star elongation related to flattener - sensor distance. Here is the link:
     I have about the same darkness where I live. I also have the light dome of Stockholm in the southwest to deal with; not too bad, as it sits low enough. Auroras can be spectacular, but when they are very faint they're just another form of light pollution, and unfortunately we get too few of the spectacular ones. The long winter nights are great: I can start imaging from about 5 in the afternoon during the darkest season. On the other hand, from late April to late August I have my gear in summer storage, as it just doesn't get dark enough to do any imaging.
     The problem is often that during the winter the cold stiffens not only my limbs but also any cables hanging from my rig. I've started to replace all power cables with silicone-insulated ones, and I try to keep my data cables in a position where they don't need to bend or move too much. As for camera temperature, I tend to keep that at -20 C all year, but sometimes I need to lower it to -30 to keep it constant. I sometimes worry what the cold does to the grease and belt in my mount; I suspect I run most of my gear well outside its operating temperature range at times. As the saying goes: "there ain't no such thing as a free lunch".
  9. As I wrote before, PixInsight has large-scale pixel rejection, designed for just this. Set a sub without trails as the reference. There is no need to increase ordinary pixel rejection (sigma clipping), hence no loss in signal-to-noise ratio, and no need to correct each sub. Have a look here (first and ninth posts in this thread): https://pixinsight.com/forum/index.php?topic=11067.0
  10. The new field flattener should give you round stars once you have it tuned (correct distance to sensor). Beyond that, you can experiment with longer exposure times. When you image R, G, B, the sensor receives only about a third of the photons compared to L, so in theory you can expose up to 3 times longer. In your Iris image the L frames seem to have a good exposure time, but the R, G, B frames could then be exposed for e.g. 2 minutes. If you aim at twice as much integration time for L as for R, G, B, you'd need 4 times as many L frames as colour frames: 120 x 1 min L, 30 x 2 min each of R, G, B.
      Don't vary temperature and gain/offset too much, because you need darks for every combination of exposure time, gain and temperature. At low gain and longer exposure times your camera has a larger dynamic range, and you'll need fewer subs to reach a given signal-to-noise ratio. If storage space and computer speed are a concern, you may need to factor this in.
      The total integration time you need to reach a certain signal-to-noise ratio depends very much on your sky conditions. According to an article by Jerry Lodriguss, you need 2.5 times more integration time for every magnitude of sky darkness you lose. E.g., if you image 1 hr from a dark site (magnitude 21), you need 2.5 hours for an equivalent result at a magnitude 20 site, and 6.3 hours at a magnitude 19 site (see the sketch below). It pays to seek out dark skies if you have the possibility.
      I played with "curves" after the initial stretch to increase local contrast: I measured the lightness at two points where I wanted more contrast, then pinned one of those points and raised/lowered the other. Good luck.
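      Lodriguss's rule of thumb is easy to turn into a small calculator (a sketch; only the 2.5x-per-magnitude factor comes from the article):

        def required_hours(base_hours, dark_mag, site_mag):
            # ~2.5x more integration for every magnitude of sky darkness lost
            return base_hours * 2.5 ** (dark_mag - site_mag)

        print(required_hours(1, 21, 20))  # ~2.5 h at a mag 20 site
        print(required_hours(1, 21, 19))  # ~6.3 h at a mag 19 site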
  11. A little more than 10 minutes, but here is my result. There is a lot of noise in the image, and it's pushed a bit too far, but at least it shows what's in there, and that your calibration process now works. I didn't do anything about the stars.
  12. I've only downloaded the data, but I must say that it looks a LOT better. Just a slight vignetting effect left, no problem for PixInsight DBE.
  13. PixelMath. Check this site: http://www.werbeagentur.org/oldwexi/PixInsight/PixInsight.html
  14. No, just Artificial Intelligence at work 🤓
  15. I had a quick look at your data. Some general remarks:
      1. The R, G, B frames weren't completely aligned. Especially the green was off: rotation +0.01 deg, dx +1.62 px, dy +2.39 px.
      2. Your flats are overcorrecting. This is a result of either flat calibration (i.e., your flats need their own darks subtracted, as I wrote previously) or the absence of darks.
      Here's what the central area of your image looks like after a rough process in PixInsight. And here is the RGB master with only a stretch applied, to show the flat overcorrection.
  16. @steviemac500, I just had a play with the image you posted. I created a star-reduced and a starless version (StarNet++). I think that your star reduction didn't work because of the mask. I created a ring mask by combining two ordinary star masks:
      A. Noise threshold = 0.5, Layers = 6, Large scale/Small scale/Compensation = 2/1/2, Smoothness = 10, Aggregate and Binarize checked.
      B. Same settings, but Large scale/Small scale/Compensation = 0/1/0 and Smoothness = 3.
      I strengthened this mask by bringing in the white point somewhat, then used PixelMath to create the ring mask: A - B (see the sketch below for the idea in array terms).
      Morphological Transformation: 3x3 element, Morph. Selection, amount = 0.5, selection = 0.25, iterations = 8.
      The starless version was created with standard settings in StarNet++ on the original image, and combined with PixelMath, applied to the star-reduced image: iif((X()+Y())>1, $T, starless)
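      In array terms the ring mask is just the difference of the two star masks; a minimal NumPy sketch (the arrays are placeholders for A and B above, floats in 0..1, with B the tighter mask):

        import numpy as np

        # Stand-ins for the two star masks built above.
        A = np.zeros((100, 100))
        B = np.zeros((100, 100))

        # Subtracting the tight mask from the loose one leaves a bright
        # ring around each star with a hollow core, so star cores are
        # protected during the morphological star reduction.
        ring = np.clip(A - B, 0.0, 1.0)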
  17. In the preprocessing (calibration and stacking) stage, I use darks, flats, and dark flats, the latter to calibrate the flats (the sketch below shows the arithmetic):
      1. Create a master dark flat by combining the dark flat subframes (same time, gain, offset and temperature as the flats).
      2. Calibrate the flat subframes by subtracting the master dark flat from them.
      3. Combine the dark frames into a master dark. Dark subframes must have the same temperature, gain, offset and exposure time as the light frames. This is important.
      4. Calibrate the light frames with the master dark and master flat.
      I don't know if DSS can calibrate flats that way, but it is able to use master darks and master flats instead of the individual dark and flat frames. My ASI camera has more amp glow than the 1600, so matching darks are crucial, both for lights and flats.
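      The arithmetic behind those four steps, as a minimal NumPy sketch (the arrays are stand-ins; loading, pedestals and rejection details are left out):

        import numpy as np

        # Stand-ins for the real sub stacks (n_subs x height x width).
        dark_flats = np.zeros((10, 100, 100))
        flats = np.ones((10, 100, 100))
        darks = np.zeros((10, 100, 100))
        light = np.ones((100, 100))

        master_dark_flat = np.median(dark_flats, axis=0)        # step 1
        calibrated_flats = flats - master_dark_flat             # step 2
        master_flat = np.median(calibrated_flats, axis=0)
        master_flat /= np.mean(master_flat)                     # normalise to ~1
        master_dark = np.median(darks, axis=0)                  # step 3
        calibrated_light = (light - master_dark) / master_flat  # step 4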
  18. That's exactly what PS feels like to me. 😄
  19. I'll have a look at it and see what I can produce (with PixInsight).
  20. Have a look here: https://pixinsight.com/forum/index.php?topic=13691.0 It's available from SourceForge.
  21. StarNet++ is now a module in PI. That should do the trick. No sliders, no mask; just select and apply.
  22. Very nice. What camera do you use? If you get 0.4" guiding RMS, you could increase the subframe exposure time. But if your camera has low read noise (modern CMOS), you should be able to stretch your final image harder to pull out more detail and colour. For example, this is what 3.6 hours of 30 and 45 second exposures got me: https://www.astrobin.com/318629/C/?nc=user Compare the different versions.
  23. In DSS you should be able to set one subframe as the alignment reference. Always use the same sub for this, e.g. one L sub, even for the R, G, and B stacks; just exclude that sub from the stack. From the DSS FAQ:
      "How do I align the resulting images of 4 stacks (red, green, blue and luminance)? To align the resulting images on the same reference frame just add the reference frame to the list even if it is not from the same stack, force DeepSkyStacker to use it as the reference frame (using the context menu) but left it unchecked. This tells DeepSkyStacker to use it as the reference frame but to not add it to the stack."
      http://deepskystacker.free.fr/english/faq.htm
  24. I played a bit with the image in PixInsight. But unlike Olly, who worked on the provided JPEG, I took the liberty of downloading the full version from Astrobin; I hope you don't mind. As I wrote before, there is colour mottle in the background, but also a lightness (luminance) variation. I tackled both with MMT. The image showed very little single-pixel noise, so I used noise reduction only on structures from 2 to 8 pixels in luminance, and 2 to 16 pixels in chrominance. The result is not as smooth as Olly's. But the image you posted didn't include the main target, the Veil complex, so here's a section with nebulosity in it, processed the same way.
      When applying noise reduction it's important to consider the entire image, and make decisions based on the structures (signal) you want to keep and the structures (noise) you want to reduce. For completeness, here's the process container that you can load into PixInsight: MMT_noise_reduction.xpsm